Greg Kroah-Hartman <greg@kroah.com>
Henk Vergonet <Henk.Vergonet@gmail.com>
Henrik Kretzschmar <henne@nachtwindheim.de>
+Henrik Rydberg <rydberg@bitmath.org>
Herbert Xu <herbert@gondor.apana.org.au>
Jacob Shin <Jacob.Shin@amd.com>
James Bottomley <jejb@mulgrave.(none)>
Each button (key) is represented as a sub-node of "gpio-keys":
Subnode properties:
+ - gpios: OF device-tree gpio specification.
+ - interrupts: the interrupt line for that input.
- label: Descriptive name of the key.
- linux,code: Keycode to emit.
-Required mutual exclusive subnode-properties:
- - gpios: OF device-tree gpio specification.
- - interrupts: the interrupt line for that input
+Note that either the "interrupts" or the "gpios" property can be omitted, but
+not both at the same time. Specifying both properties is allowed.
Optional subnode-properties:
- linux,input-type: Specify event type this button/key generates.
- debounce-interval: Debouncing interval time in milliseconds.
		      If not specified, defaults to 5.
- gpio-key,wakeup: Boolean, button can wake-up the system.
+ - linux,can-disable: Boolean, indicates that the button is connected to a
+   dedicated (not shared) interrupt which can be disabled to suppress events
+   from the button.
Example nodes:
- debounce-interval : Debouncing interval time in milliseconds
- st,scan-count : Scanning cycles elapsed before key data is updated
 - st,no-autorepeat : If specified, the device will not autorepeat
+ - keypad,num-rows : See ./matrix-keymap.txt
+ - keypad,num-columns : See ./matrix-keymap.txt
Example:
- SerDes Rx/Tx registers
- SerDes integration registers (1/2)
- SerDes integration registers (2/2)
+- interrupt-parent: Should be the phandle for the interrupt controller
+ that services interrupts for this device
+- interrupts: Should contain the amd-xgbe-phy interrupt.
Optional properties:
- amd,speed-set: Speed capabilities of the device
0 - 1GbE and 10GbE (default)
1 - 2.5GbE and 10GbE
+The following optional properties are represented by an array with each
+value corresponding to a particular speed. The first array value represents
+the setting for the 1GbE speed, the second value for the 2.5GbE speed and
+the third value for the 10GbE speed. All three values are required if the
+property is used.
+- amd,serdes-blwc: Baseline wandering correction enablement
+ 0 - Off
+ 1 - On
+- amd,serdes-cdr-rate: CDR rate speed selection
+- amd,serdes-pq-skew: PQ (data sampling) skew
+- amd,serdes-tx-amp: TX amplitude boost
+
Example:
xgbe_phy@e1240800 {
compatible = "amd,xgbe-phy-seattle-v1a", "ethernet-phy-ieee802.3-c45";
reg = <0 0xe1240800 0 0x00400>,
<0 0xe1250000 0 0x00060>,
<0 0xe1250080 0 0x00004>;
+ interrupt-parent = <&gic>;
+ interrupts = <0 323 4>;
amd,speed-set = <0>;
+ amd,serdes-blwc = <1>, <1>, <0>;
+ amd,serdes-cdr-rate = <2>, <2>, <7>;
+ amd,serdes-pq-skew = <10>, <10>, <30>;
+ amd,serdes-tx-amp = <15>, <15>, <10>;
};
Optional properties:
- davicom,no-eeprom : Configuration EEPROM is not available
- davicom,ext-phy : Use external PHY
+- reset-gpios : phandle of the gpio that will be used to reset the chip during probe
+- vcc-supply : phandle of the regulator that will be used to enable power to the chip
Example:
interrupts = <7 4>;
local-mac-address = [00 00 de ad be ef];
davicom,no-eeprom;
+ reset-gpios = <&gpf 12 GPIO_ACTIVE_LOW>;
+	vcc-supply = <&eth0_power>;
};
--- /dev/null
+Hisilicon hip04 Ethernet Controller
+
+* Ethernet controller node
+
+Required properties:
+- compatible: should be "hisilicon,hip04-mac".
+- reg: address and length of the register set for the device.
+- interrupts: interrupt for the device.
+- port-handle: <phandle port channel>
+ phandle, specifies a reference to the syscon ppe node
+ port, port number connected to the controller
+  channel, the controller's recv channels start at channel * RX_DESC_NUM
+  (e.g. channel = 2 gives recv channels 2 * RX_DESC_NUM onwards)
+- phy-mode: see ethernet.txt [1].
+
+Optional properties:
+- phy-handle: see ethernet.txt [1].
+
+[1] Documentation/devicetree/bindings/net/ethernet.txt
+
+
+* Ethernet ppe node:
+Controls the rx & tx fifos of all ethernet controllers.
+There are 2048 recv channels shared by all ethernet controllers, provided
+their per-controller ranges do not overlap.
+Each controller's recv channels start at channel * RX_DESC_NUM.
+
+Required properties:
+- compatible: "hisilicon,hip04-ppe", "syscon".
+- reg: address and length of the register set for the device.
+
+
+* MDIO bus node:
+
+Required properties:
+
+- compatible: should be "hisilicon,hip04-mdio".
+- Inherits from MDIO bus node binding [2]
+[2] Documentation/devicetree/bindings/net/phy.txt
+
+Example:
+ mdio {
+ compatible = "hisilicon,hip04-mdio";
+ reg = <0x28f1000 0x1000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ phy0: ethernet-phy@0 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <0>;
+ marvell,reg-init = <18 0x14 0 0x8001>;
+ };
+
+ phy1: ethernet-phy@1 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <1>;
+ marvell,reg-init = <18 0x14 0 0x8001>;
+ };
+ };
+
+ ppe: ppe@28c0000 {
+ compatible = "hisilicon,hip04-ppe", "syscon";
+ reg = <0x28c0000 0x10000>;
+ };
+
+ fe: ethernet@28b0000 {
+ compatible = "hisilicon,hip04-mac";
+ reg = <0x28b0000 0x10000>;
+ interrupts = <0 413 4>;
+ phy-mode = "mii";
+ port-handle = <&ppe 31 0>;
+ };
+
+ ge0: ethernet@2800000 {
+ compatible = "hisilicon,hip04-mac";
+ reg = <0x2800000 0x10000>;
+ interrupts = <0 402 4>;
+ phy-mode = "sgmii";
+ port-handle = <&ppe 0 1>;
+ phy-handle = <&phy0>;
+ };
+
+ ge8: ethernet@2880000 {
+ compatible = "hisilicon,hip04-mac";
+ reg = <0x2880000 0x10000>;
+ interrupts = <0 410 4>;
+ phy-mode = "sgmii";
+ port-handle = <&ppe 8 2>;
+ phy-handle = <&phy1>;
+ };
--- /dev/null
+This document describes the device tree bindings associated with the
+keystone network coprocessor (NetCP) driver support.
+
+The network coprocessor (NetCP) is a hardware accelerator that processes
+Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an ethernet
+switch sub-module to send and receive packets. NetCP also includes a packet
+accelerator (PA) module to perform packet classification operations such as
+header matching, and packet modification operations such as checksum
+generation. NetCP can also optionally include a Security Accelerator (SA)
+capable of performing IPSec operations on ingress/egress packets.
+
+Keystone II SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
+includes a 3-port Ethernet switch sub-module capable of 10Gb/s and 1Gb/s rates
+per Ethernet port.
+
+The Keystone NetCP driver has a plug-in module architecture where each NetCP
+sub-module exists as a loadable kernel module that plugs into the netcp core.
+These sub-modules are represented as "netcp-devices" in the dts bindings. It is
+mandatory to have the ethernet switch sub-module for the ethernet interface to
+be operational. Any other sub-module like the PA is optional.
+
+NetCP Ethernet SubSystem Layout:
+
+-----------------------------
+ NetCP subsystem(10G or 1G)
+-----------------------------
+ |
+ |-> NetCP Devices -> |
+ | |-> GBE/XGBE Switch
+ | |
+ | |-> Packet Accelerator
+ | |
+ | |-> Security Accelerator
+ |
+ |
+ |
+ |-> NetCP Interfaces -> |
+ |-> Ethernet Port 0
+ |
+ |-> Ethernet Port 1
+ |
+ |-> Ethernet Port 2
+ |
+ |-> Ethernet Port 3
+
+
+NetCP subsystem properties:
+Required properties:
+- compatible: Should be "ti,netcp-1.0"
+- clocks: phandle to the reference clocks for the subsystem.
+- dma-id: Navigator packet dma instance id.
+
+Optional properties:
+- reg: register location and the size for the following register
+ regions in the specified order.
+ - Efuse MAC address register
+- dma-coherent: Present if dma operations are coherent
+- big-endian:	Keystone devices can be operated in a mode where the DSP is
+		in big endian mode. Enable this option in such cases. This
+		option should also be enabled if the ARM is operated in
+		big endian mode with the DSP in little endian.
+
+NetCP device properties: Device specification for NetCP sub-modules.
+1Gb/10Gb (gbe/xgbe) ethernet switch sub-module specifications.
+Required properties:
+- label: Must be "netcp-gbe" for 1Gb & "netcp-xgbe" for 10Gb.
+- reg: register location and the size for the following register
+ regions in the specified order.
+ - subsystem registers
+ - serdes registers
+- tx-channel: the navigator packet dma channel name for tx.
+- tx-queue: the navigator queue number associated with the tx dma channel.
+- interfaces:	specification for each of the switch ports to be registered
+		as a network interface in the stack.
+-- slave-port: Switch port number, 0 based numbering.
+-- link-interface: type of link interface, supported options are
+ - mac<->mac auto negotiate mode: 0
+ - mac<->phy mode: 1
+ - mac<->mac forced mode: 2
+ - mac<->fiber mode: 3
+ - mac<->phy mode with no mdio: 4
+ - 10Gb mac<->phy mode : 10
+ - 10Gb mac<->mac forced mode : 11
+----phy-handle: phandle to PHY device
+
+Optional properties:
+- enable-ale:	by default, the NetCP driver keeps the address learning
+		feature of the ethernet switch module disabled; this
+		attribute enables address learning.
+- secondary-slave-ports:	specification for each switch port that is not
+				to be registered as a network interface. The
+				NetCP driver will only initialize these ports
+				and attach a PHY driver to them if needed.
+
+NetCP interface properties: Interface specification for NetCP sub-modules.
+Required properties:
+- rx-channel: the navigator packet dma channel name for rx.
+- rx-queue: the navigator queue number associated with rx dma channel.
+- rx-pool: specifies the number of descriptors to be used & the region-id
+ for creating the rx descriptor pool.
+- tx-pool: specifies the number of descriptors to be used & the region-id
+ for creating the tx descriptor pool.
+- rx-queue-depth: number of descriptors in each of the free descriptor
+ queue (FDQ) for the pktdma Rx flow. There can be at
+ present a maximum of 4 queues per Rx flow.
+- rx-buffer-size: the buffer size for each of the Rx flow FDQ.
+- tx-completion-queue: the navigator queue number where the descriptors are
+ recycled after Tx DMA completion.
+
+Optional properties:
+- efuse-mac: If this is 1, then the MAC address for the interface is
+ obtained from the device efuse mac address register
+- local-mac-address: the driver is designed to use the of_get_mac_address api
+ only if efuse-mac is 0. When efuse-mac is 0, the MAC
+ address is obtained from local-mac-address. If this
+ attribute is not present, then the driver will use a
+ random MAC address.
+- "netcp-device label": phandle to the device specification for each of NetCP
+ sub-module attached to this interface.
+
+Example binding:
+
+netcp: netcp@2090000 {
+ reg = <0x2620110 0x8>;
+ reg-names = "efuse";
+ compatible = "ti,netcp-1.0";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ clocks = <&papllclk>, <&clkcpgmac>, <&chipclk12>;
+ dma-coherent;
+ /* big-endian; */
+ dma-id = <0>;
+
+ netcp-devices {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ gbe@0x2090000 {
+ label = "netcp-gbe";
+ reg = <0x2090000 0xf00>;
+ /* enable-ale; */
+ tx-queue = <648>;
+ tx-channel = <8>;
+
+ interfaces {
+ gbe0: interface-0 {
+ slave-port = <0>;
+ link-interface = <4>;
+ };
+ gbe1: interface-1 {
+ slave-port = <1>;
+ link-interface = <4>;
+ };
+ };
+
+ secondary-slave-ports {
+ port-2 {
+ slave-port = <2>;
+ link-interface = <2>;
+ };
+ port-3 {
+ slave-port = <3>;
+ link-interface = <2>;
+ };
+ };
+ };
+ };
+
+ netcp-interfaces {
+ interface-0 {
+ rx-channel = <22>;
+ rx-pool = <1024 12>;
+ tx-pool = <1024 12>;
+ rx-queue-depth = <128 128 0 0>;
+ rx-buffer-size = <1518 4096 0 0>;
+ rx-queue = <8704>;
+ tx-completion-queue = <8706>;
+ efuse-mac = <1>;
+ netcp-gbe = <&gbe0>;
+
+ };
+ interface-1 {
+ rx-channel = <23>;
+ rx-pool = <1024 12>;
+ tx-pool = <1024 12>;
+ rx-queue-depth = <128 128 0 0>;
+ rx-buffer-size = <1518 4096 0 0>;
+ rx-queue = <8705>;
+ tx-completion-queue = <8707>;
+ efuse-mac = <0>;
+ local-mac-address = [02 18 31 7e 3e 6f];
+ netcp-gbe = <&gbe1>;
+ };
+ };
+};
Optional properties:
 - tx_delay: Delay value for TXD timing. Range is 0 to 0x7F; default is 0x30.
 - rx_delay: Delay value for RXD timing. Range is 0 to 0x7F; default is 0x10.
+ - phy-supply: phandle to a regulator if the PHY needs one
Example:
Required properties:
- compatible : Can be "st,stih415-dwmac", "st,stih416-dwmac",
"st,stih407-dwmac", "st,stid127-dwmac".
- - reg : Offset of the glue configuration register map in system
- configuration regmap pointed by st,syscon property and size.
- - st,syscon : Should be phandle to system configuration node which
- encompases this glue registers.
+ - st,syscon : Should be a phandle/offset pair: the phandle to the syscon
+   node which encompasses the glue register, and the offset of the control
+   register.
- st,gmac_en: this is to enable the gmac into a dedicated sysctl control
register available on STiH407 SoC.
- - sti-ethconf: this is the gmac glue logic register to enable the GMAC,
- select among the different modes and program the clk retiming.
- pinctrl-0: pin-control for all the MII mode supported.
Optional properties:
device_type = "network";
status = "disabled";
compatible = "st,stih407-dwmac", "snps,dwmac", "snps,dwmac-3.710";
- reg = <0x9630000 0x8000>, <0x80 0x4>;
- reg-names = "stmmaceth", "sti-ethconf";
+ reg = <0x9630000 0x8000>;
+ reg-names = "stmmaceth";
- st,syscon = <&syscfg_sbc_reg>;
+ st,syscon = <&syscfg_sbc_reg 0x80>;
st,gmac_en;
resets = <&softreset STIH407_ETH1_SOFTRESET>;
reset-names = "stmmaceth";
available this clock is used for programming the Timestamp Addend Register.
If not passed then the system clock will be used and this is fine on some
platforms.
+- snps,burst_len: The AXI burst length value of the AXI BUS MODE register.
Examples:
Required properties (controller (parent) node):
- compatible : Should be "st,miphy365x-phy"
-- st,syscfg : Should be a phandle of the system configuration register group
- which contain the SATA, PCIe mode setting bits
+- st,syscfg	: Phandle/integer-array property: the phandle of the sysconfig
+		  group containing the miphy registers, plus an integer array
+		  with one entry per port sub-node, specifying the control
+		  register offset inside the sysconfig group.
Required nodes : A sub-node is required for each channel the controller
provides. Address range information including the usual
registers filled in "reg":
- sata: For SATA devices
- pcie: For PCIe devices
- - syscfg: To specify the syscfg based config register
Optional properties (port (child) node):
- st,sata-gen : Generation of locally attached SATA IP. Expected values
miphy365x_phy: miphy365x@fe382000 {
compatible = "st,miphy365x-phy";
- st,syscfg = <&syscfg_rear>;
+ st,syscfg = <&syscfg_rear 0x824 0x828>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
phy_port0: port@fe382000 {
- reg = <0xfe382000 0x100>, <0xfe394000 0x100>, <0x824 0x4>;
- reg-names = "sata", "pcie", "syscfg";
+ reg = <0xfe382000 0x100>, <0xfe394000 0x100>;
+ reg-names = "sata", "pcie";
#phy-cells = <1>;
st,sata-gen = <3>;
};
phy_port1: port@fe38a000 {
- reg = <0xfe38a000 0x100>, <0xfe804000 0x100>, <0x828 0x4>;;
+	reg = <0xfe38a000 0x100>, <0xfe804000 0x100>;
reg-names = "sata", "pcie", "syscfg";
#phy-cells = <1>;
st,pcie-tx-pol-inv;
Required properties:
- compatible : should be "st,stih407-usb2-phy"
-- reg : contain the offset and length of the system configuration registers
- used as glue logic to control & parameter phy
-- reg-names : the names of the system configuration registers in "reg", should be "param" and "reg"
-- st,syscfg : sysconfig register to manage phy parameter at driver level
+- st,syscfg : phandle of the sysconfig bank plus an integer array containing
+  the phyparam and phyctrl register offsets
- resets : list of phandle and reset specifier pairs. There should be two entries, one
for the whole phy and one for the port
- reset-names : list of reset signal names. Should be "global" and "port"
usb2_picophy0: usbpicophy@f8 {
compatible = "st,stih407-usb2-phy";
- reg = <0xf8 0x04>, /* syscfg 5062 */
- <0xf4 0x04>; /* syscfg 5061 */
- reg-names = "param", "ctrl";
#phy-cells = <0>;
- st,syscfg = <&syscfg_core>;
+ st,syscfg = <&syscfg_core 0x100 0xf4>;
resets = <&softreset STIH407_PICOPHY_SOFTRESET>,
<&picophyreset STIH407_PICOPHY0_RESET>;
reset-names = "global", "port";
hatype skb->dev->type
rxhash skb->hash
cpu raw_smp_processor_id()
- vlan_tci vlan_tx_tag_get(skb)
- vlan_pr vlan_tx_tag_present(skb)
+ vlan_tci skb_vlan_tag_get(skb)
+ vlan_pr skb_vlan_tag_present(skb)
rand prandom_u32()
These extensions can also be prefixed with '#'.
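For illustration, the same two extensions are reachable from user space via
the classic BPF ancillary offsets SKF_AD_VLAN_TAG_PRESENT and SKF_AD_VLAN_TAG
from <linux/filter.h>. A minimal hand-rolled filter (an assumed example, not
part of this patch) that accepts only frames tagged with VLAN TCI 100:

	#include <linux/filter.h>

	/* accept a frame only if it carries a VLAN tag with TCI == 100 */
	struct sock_filter vlan_tci_filter[] = {
		/* A = vlan_pr */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 SKF_AD_OFF + SKF_AD_VLAN_TAG_PRESENT),
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0, 2, 0),   /* untagged: drop */
		/* A = vlan_tci */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
			 SKF_AD_OFF + SKF_AD_VLAN_TAG),
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 100, 1, 0), /* TCI != 100: drop */
		BPF_STMT(BPF_RET | BPF_K, 0),                   /* drop */
		BPF_STMT(BPF_RET | BPF_K, 0xffff),              /* accept */
	};

Such a program is attached with setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER, ...).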
route/max_size - INTEGER
Maximum number of routes allowed in the kernel. Increase
this when using large numbers of interfaces and/or routes.
+	From Linux 3.6 onwards, this is deprecated for IPv4 as the
+	route cache is no longer used.
neigh/default/gc_thresh1 - INTEGER
Minimum number of entries to keep. Garbage collector will not
Functional default: enabled if accept_ra is enabled.
disabled if accept_ra is disabled.
+accept_ra_mtu - BOOLEAN
+ Apply the MTU value specified in RA option 5 (RFC4861). If
+ disabled, the MTU specified in the RA will be ignored.
+
+ Functional default: enabled if accept_ra is enabled.
+ disabled if accept_ra is disabled.
+
accept_redirects - BOOLEAN
Accept Redirects.
Size of hash table. If not specified as parameter during module
loading, the default size is calculated by dividing total memory
by 16384 to determine the number of buckets but the hash table will
- never have fewer than 32 or more than 16384 buckets.
+	never have fewer than 32 buckets and will be limited to 16384
+	buckets. For systems with more than 4GB of memory the limit is
+	65536 buckets.
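A plain-C sketch of the sizing policy described above (the constants mirror
this text, not the kernel's internal page-based arithmetic):

	#include <stdio.h>

	/* total memory / 16384, clamped to [32, 16384] buckets;
	 * systems with more than 4GB of memory get 65536 buckets */
	static unsigned long default_buckets(unsigned long long mem_bytes)
	{
		unsigned long buckets;

		if (mem_bytes > (4ULL << 30))
			return 65536;
		buckets = mem_bytes / 16384;
		if (buckets < 32)
			buckets = 32;
		if (buckets > 16384)
			buckets = 16384;
		return buckets;
	}

	int main(void)
	{
		printf("%lu\n", default_buckets(1ULL << 30)); /* 1GB -> 16384 */
		printf("%lu\n", default_buckets(8ULL << 30)); /* 8GB -> 65536 */
		return 0;
	}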
nf_conntrack_checksum - BOOLEAN
0 - disabled
some but not all of them. However, this behavior may change in future versions.
+Unique flow identifiers
+-----------------------
+
+An alternative to using the original match portion of a key as the handle for
+flow identification is a unique flow identifier, or "UFID". UFIDs are optional
+for both the kernel and user space program.
+
+User space programs that support UFID are expected to provide it during flow
+setup in addition to the flow, then refer to the flow using the UFID for all
+future operations. The kernel is not required to index flows by the original
+flow key if a UFID is specified.
+
+
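To illustrate the idea only (hypothetical types, not the datapath's actual
structures or netlink attributes): a UFID-aware program hands the kernel a
128-bit identifier at flow setup and then uses that identifier, rather than
the full match key, as the handle for all later operations:

	#include <stdint.h>
	#include <stddef.h>
	#include <string.h>

	struct ufid {
		uint32_t w[4];	/* 128-bit unique flow identifier */
	};

	struct flow {
		struct ufid id;	/* handle for get/set/del requests */
		/* ... the full match key and actions would live here ... */
	};

	/* later operations carry only the UFID, not the match key */
	static struct flow *flow_lookup(struct flow *table, size_t n,
					const struct ufid *id)
	{
		for (size_t i = 0; i < n; i++)
			if (!memcmp(&table[i].id, id, sizeof(*id)))
				return &table[i];
		return NULL;
	}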
Basic rule for evolving flow keys
---------------------------------
* 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
*/
+#define _GNU_SOURCE
+
#include <arpa/inet.h>
#include <asm/types.h>
#include <error.h>
#include <time.h>
#include <unistd.h>
-/* ugly hack to work around netinet/in.h and linux/ipv6.h conflicts */
-#ifndef in6_pktinfo
-struct in6_pktinfo {
- struct in6_addr ipi6_addr;
- int ipi6_ifindex;
-};
-#endif
-
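The removed fallback existed because glibc guards struct in6_pktinfo behind
_GNU_SOURCE; defining the macro before any include, as the hunk above does,
lets <netinet/in.h> supply the real definition. A minimal stand-alone check
(an assumed example, not part of the selftest):

	#define _GNU_SOURCE	/* must precede all includes */
	#include <netinet/in.h>
	#include <stdio.h>

	int main(void)
	{
		/* compiles only if glibc exposed struct in6_pktinfo */
		struct in6_pktinfo info = { .ipi6_ifindex = 1 };

		printf("ifindex=%d size=%zu\n",
		       info.ipi6_ifindex, sizeof(info));
		return 0;
	}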
/* command line parameters */
static int cfg_proto = SOCK_STREAM;
static int cfg_ipproto = IPPROTO_TCP;
buf += " .release_cmd = " + fabric_mod_name + "_release_cmd,\n"
buf += " .shutdown_session = " + fabric_mod_name + "_shutdown_session,\n"
buf += " .close_session = " + fabric_mod_name + "_close_session,\n"
- buf += " .stop_session = " + fabric_mod_name + "_stop_session,\n"
- buf += " .fall_back_to_erl0 = " + fabric_mod_name + "_reset_nexus,\n"
- buf += " .sess_logged_in = " + fabric_mod_name + "_sess_logged_in,\n"
buf += " .sess_get_index = " + fabric_mod_name + "_sess_get_index,\n"
buf += " .sess_get_initiator_sid = NULL,\n"
buf += " .write_pending = " + fabric_mod_name + "_write_pending,\n"
buf += " .queue_data_in = " + fabric_mod_name + "_queue_data_in,\n"
buf += " .queue_status = " + fabric_mod_name + "_queue_status,\n"
buf += " .queue_tm_rsp = " + fabric_mod_name + "_queue_tm_rsp,\n"
- buf += " .is_state_remove = " + fabric_mod_name + "_is_state_remove,\n"
+ buf += " .aborted_task = " + fabric_mod_name + "_aborted_task,\n"
buf += " /*\n"
buf += " * Setup function pointers for generic logic in target_core_fabric_configfs.c\n"
buf += " */\n"
buf += " /*\n"
buf += " * Register the top level struct config_item_type with TCM core\n"
buf += " */\n"
- buf += " fabric = target_fabric_configfs_init(THIS_MODULE, \"" + fabric_mod_name[4:] + "\");\n"
+ buf += " fabric = target_fabric_configfs_init(THIS_MODULE, \"" + fabric_mod_name + "\");\n"
buf += " if (IS_ERR(fabric)) {\n"
buf += " printk(KERN_ERR \"target_fabric_configfs_init() failed\\n\");\n"
buf += " return PTR_ERR(fabric);\n"
if re.search('get_fabric_name', fo):
buf += "char *" + fabric_mod_name + "_get_fabric_name(void)\n"
buf += "{\n"
- buf += " return \"" + fabric_mod_name[4:] + "\";\n"
+ buf += " return \"" + fabric_mod_name + "\";\n"
buf += "}\n\n"
bufi += "char *" + fabric_mod_name + "_get_fabric_name(void);\n"
continue
buf += "}\n\n"
bufi += "void " + fabric_mod_name + "_close_session(struct se_session *);\n"
- if re.search('stop_session\)\(', fo):
- buf += "void " + fabric_mod_name + "_stop_session(struct se_session *se_sess, int sess_sleep , int conn_sleep)\n"
- buf += "{\n"
- buf += " return;\n"
- buf += "}\n\n"
- bufi += "void " + fabric_mod_name + "_stop_session(struct se_session *, int, int);\n"
-
- if re.search('fall_back_to_erl0\)\(', fo):
- buf += "void " + fabric_mod_name + "_reset_nexus(struct se_session *se_sess)\n"
- buf += "{\n"
- buf += " return;\n"
- buf += "}\n\n"
- bufi += "void " + fabric_mod_name + "_reset_nexus(struct se_session *);\n"
-
- if re.search('sess_logged_in\)\(', fo):
- buf += "int " + fabric_mod_name + "_sess_logged_in(struct se_session *se_sess)\n"
- buf += "{\n"
- buf += " return 0;\n"
- buf += "}\n\n"
- bufi += "int " + fabric_mod_name + "_sess_logged_in(struct se_session *);\n"
-
if re.search('sess_get_index\)\(', fo):
buf += "u32 " + fabric_mod_name + "_sess_get_index(struct se_session *se_sess)\n"
buf += "{\n"
bufi += "int " + fabric_mod_name + "_queue_status(struct se_cmd *);\n"
if re.search('queue_tm_rsp\)\(', fo):
- buf += "int " + fabric_mod_name + "_queue_tm_rsp(struct se_cmd *se_cmd)\n"
+ buf += "void " + fabric_mod_name + "_queue_tm_rsp(struct se_cmd *se_cmd)\n"
buf += "{\n"
- buf += " return 0;\n"
+ buf += " return;\n"
buf += "}\n\n"
- bufi += "int " + fabric_mod_name + "_queue_tm_rsp(struct se_cmd *);\n"
+ bufi += "void " + fabric_mod_name + "_queue_tm_rsp(struct se_cmd *);\n"
- if re.search('is_state_remove\)\(', fo):
- buf += "int " + fabric_mod_name + "_is_state_remove(struct se_cmd *se_cmd)\n"
+ if re.search('aborted_task\)\(', fo):
+ buf += "void " + fabric_mod_name + "_aborted_task(struct se_cmd *se_cmd)\n"
buf += "{\n"
- buf += " return 0;\n"
+ buf += " return;\n"
buf += "}\n\n"
- bufi += "int " + fabric_mod_name + "_is_state_remove(struct se_cmd *);\n"
-
+ bufi += "void " + fabric_mod_name + "_aborted_task(struct se_cmd *);\n"
ret = p.write(buf)
if ret:
tcm_mod_build_kbuild(fabric_mod_dir, fabric_mod_name)
tcm_mod_build_kconfig(fabric_mod_dir, fabric_mod_name)
- input = raw_input("Would you like to add " + fabric_mod_name + "to drivers/target/Makefile..? [yes,no]: ")
+ input = raw_input("Would you like to add " + fabric_mod_name + " to drivers/target/Makefile..? [yes,no]: ")
if input == "yes" or input == "y":
tcm_mod_add_kbuild(tcm_dir, fabric_mod_name)
- input = raw_input("Would you like to add " + fabric_mod_name + "to drivers/target/Kconfig..? [yes,no]: ")
+ input = raw_input("Would you like to add " + fabric_mod_name + " to drivers/target/Kconfig..? [yes,no]: ")
if input == "yes" or input == "y":
tcm_mod_add_kconfig(tcm_dir, fabric_mod_name)
F: drivers/char/apm-emulation.c
APPLE BCM5974 MULTITOUCH DRIVER
-M: Henrik Rydberg <rydberg@euromail.se>
+M: Henrik Rydberg <rydberg@bitmath.org>
L: linux-input@vger.kernel.org
-S: Maintained
+S: Odd fixes
F: drivers/input/mouse/bcm5974.c
APPLE SMC DRIVER
-M: Henrik Rydberg <rydberg@euromail.se>
+M: Henrik Rydberg <rydberg@bitmath.org>
L: lm-sensors@lm-sensors.org
-S: Maintained
+S: Odd fixes
F: drivers/hwmon/applesmc.c
APPLETALK NETWORK LAYER
BTRFS FILE SYSTEM
M: Chris Mason <clm@fb.com>
M: Josef Bacik <jbacik@fb.com>
+M: David Sterba <dsterba@suse.cz>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
F: drivers/scsi/ipr.*
IBM Power Virtual Ethernet Device Driver
-M: Santiago Leon <santil@linux.vnet.ibm.com>
+M: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/ibm/ibmveth.*
F: include/linux/input/
INPUT MULTITOUCH (MT) PROTOCOL
-M: Henrik Rydberg <rydberg@euromail.se>
+M: Henrik Rydberg <rydberg@bitmath.org>
L: linux-input@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/rydberg/input-mt.git
-S: Maintained
+S: Odd fixes
F: Documentation/input/multi-touch-protocol.txt
F: drivers/input/input-mt.c
K: \b(ABS|SYN)_MT_
Q: http://patchwork.kernel.org/project/linux-rdma/list/
F: drivers/infiniband/ulp/iser/
+ISCSI EXTENSIONS FOR RDMA (ISER) TARGET
+M: Sagi Grimberg <sagig@mellanox.com>
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git master
+L: linux-rdma@vger.kernel.org
+L: target-devel@vger.kernel.org
+S: Supported
+W: http://www.linux-iscsi.org
+F: drivers/infiniband/ulp/isert
+
ISDN SUBSYSTEM
M: Karsten Keil <isdn@linux-pingi.de>
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
F: include/uapi/linux/in.h
F: include/uapi/linux/net.h
F: include/uapi/linux/netdevice.h
+F: include/uapi/linux/net_namespace.h
F: tools/net/
F: tools/testing/selftests/net/
F: lib/random32.c
F: Documentation/rfkill.txt
F: net/rfkill/
+RHASHTABLE
+M: Thomas Graf <tgraf@suug.ch>
+L: netdev@vger.kernel.org
+S: Maintained
+F: lib/rhashtable.c
+F: include/linux/rhashtable.h
+
RICOH SMARTMEDIA/XD DRIVER
M: Maxim Levitsky <maximlevitsky@gmail.com>
S: Maintained
F: drivers/regulator/lp8788-*.c
F: include/linux/mfd/lp8788*.h
+TI NETCP ETHERNET DRIVER
+M: Wingman Kwok <w-kwok2@ti.com>
+M: Murali Karicheri <m-karicheri2@ti.com>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/ethernet/ti/netcp*
+
TI TWL4030 SERIES SOC CODEC DRIVER
M: Peter Ujfalusi <peter.ujfalusi@ti.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
VERSION = 3
PATCHLEVEL = 19
SUBLEVEL = 0
-EXTRAVERSION = -rc2
+EXTRAVERSION = -rc4
NAME = Diseased Newt
# *DOCUMENTATION*
# Needed to be compatible with the O= option
LINUXINCLUDE := \
-I$(srctree)/arch/$(hdr-arch)/include \
+ -Iarch/$(hdr-arch)/include/generated/uapi \
-Iarch/$(hdr-arch)/include/generated \
$(if $(KBUILD_SRC), -I$(srctree)/include) \
-Iinclude \
compatible = "linux,spdif-dir";
};
};
-
-&pinctrl {
- /*
- * These pins might be muxed as I2S by
- * the bootloader, but it conflicts
- * with the real I2S pins that are
- * muxed using i2s_pins. We must mux
- * those pins to a function other than
- * I2S.
- */
- pinctrl-0 = <&hog_pins1 &hog_pins2>;
- pinctrl-names = "default";
-
- hog_pins1: hog-pins1 {
- marvell,pins = "mpp6", "mpp8", "mpp10",
- "mpp12", "mpp13";
- marvell,function = "gpio";
- };
-
- hog_pins2: hog-pins2 {
- marvell,pins = "mpp5", "mpp7", "mpp9";
- marvell,function = "gpo";
- };
-};
phy-mode = "rgmii";
interrupts-extended = <&gpio1 6 IRQ_TYPE_LEVEL_HIGH>,
<&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
- fsl,magic-packet;
status = "okay";
};
pinctrl-0 = <&pinctrl_enet>;
phy-mode = "rgmii";
phy-reset-gpios = <&gpio1 25 0>;
- fsl,magic-packet;
status = "okay";
};
pinctrl-0 = <&pinctrl_enet1>;
	phy-supply = <&reg_enet_3v3>;
phy-mode = "rgmii";
+	phy-handle = <&ethphy1>;
status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+		ethphy1: ethernet-phy@1 {
+ reg = <1>;
+ };
+
+		ethphy2: ethernet-phy@2 {
+ reg = <2>;
+ };
+ };
};
&fec2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet2>;
phy-mode = "rgmii";
+	phy-handle = <&ethphy2>;
status = "okay";
};
};
&gmac {
- phy_regulator = "vcc_phy";
+ phy-supply = <&vcc_phy>;
phy-mode = "rgmii";
clock_in_out = "input";
snps,reset-gpio = <&gpio4 7 0>;
pinctrl-names = "default";
	pinctrl-0 = <&eth_phy_pwr>;
regulator-name = "vcc_phy";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
regulator-always-on;
regulator-boot-on;
};
status = "disabled";
};
+
+ usb2_picophy0: phy1 {
+ compatible = "st,stih407-usb2-phy";
+ #phy-cells = <0>;
+ st,syscfg = <&syscfg_core 0x100 0xf4>;
+ resets = <&softreset STIH407_PICOPHY_SOFTRESET>,
+ <&picophyreset STIH407_PICOPHY0_RESET>;
+ reset-names = "global", "port";
+ };
};
};
#include "stih407-family.dtsi"
#include "stih410-pinctrl.dtsi"
/ {
+ soc {
+ usb2_picophy1: phy2 {
+ compatible = "st,stih407-usb2-phy";
+ #phy-cells = <0>;
+ st,syscfg = <&syscfg_core 0xf8 0xf4>;
+ resets = <&softreset STIH407_PICOPHY_SOFTRESET>,
+ <&picophyreset STIH407_PICOPHY0_RESET>;
+ reset-names = "global", "port";
+ };
+ usb2_picophy2: phy3 {
+ compatible = "st,stih407-usb2-phy";
+ #phy-cells = <0>;
+ st,syscfg = <&syscfg_core 0xfc 0xf4>;
+ resets = <&softreset STIH407_PICOPHY_SOFTRESET>,
+ <&picophyreset STIH407_PICOPHY1_RESET>;
+ reset-names = "global", "port";
+ };
+
+ ohci0: usb@9a03c00 {
+ compatible = "st,st-ohci-300x";
+ reg = <0x9a03c00 0x100>;
+ interrupts = <GIC_SPI 180 IRQ_TYPE_NONE>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT0_POWERDOWN>,
+ <&softreset STIH407_USB2_PORT0_SOFTRESET>;
+ reset-names = "power", "softreset";
+ phys = <&usb2_picophy1>;
+ phy-names = "usb";
+ };
+
+ ehci0: usb@9a03e00 {
+ compatible = "st,st-ehci-300x";
+ reg = <0x9a03e00 0x100>;
+ interrupts = <GIC_SPI 151 IRQ_TYPE_NONE>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb0>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT0_POWERDOWN>,
+ <&softreset STIH407_USB2_PORT0_SOFTRESET>;
+ reset-names = "power", "softreset";
+ phys = <&usb2_picophy1>;
+ phy-names = "usb";
+ };
+
+ ohci1: usb@9a83c00 {
+ compatible = "st,st-ohci-300x";
+ reg = <0x9a83c00 0x100>;
+ interrupts = <GIC_SPI 181 IRQ_TYPE_NONE>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT1_POWERDOWN>,
+ <&softreset STIH407_USB2_PORT1_SOFTRESET>;
+ reset-names = "power", "softreset";
+ phys = <&usb2_picophy2>;
+ phy-names = "usb";
+ };
+
+ ehci1: usb@9a83e00 {
+ compatible = "st,st-ehci-300x";
+ reg = <0x9a83e00 0x100>;
+ interrupts = <GIC_SPI 153 IRQ_TYPE_NONE>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb1>;
+ clocks = <&clk_s_c0_flexgen CLK_TX_ICN_DISP_0>;
+ resets = <&powerdown STIH407_USB2_PORT1_POWERDOWN>,
+ <&softreset STIH407_USB2_PORT1_SOFTRESET>;
+ reset-names = "power", "softreset";
+ phys = <&usb2_picophy2>;
+ phy-names = "usb";
+ };
+ };
};
compatible = "st,stih415-dwmac", "snps,dwmac", "snps,dwmac-3.610";
status = "disabled";
- reg = <0xfe810000 0x8000>, <0x148 0x4>;
- reg-names = "stmmaceth", "sti-ethconf";
+ reg = <0xfe810000 0x8000>;
+ reg-names = "stmmaceth";
interrupts = <0 147 0>, <0 148 0>, <0 149 0>;
interrupt-names = "macirq", "eth_wake_irq", "eth_lpi";
snps,mixed-burst;
snps,force_sf_dma_mode;
- st,syscon = <&syscfg_rear>;
+ st,syscon = <&syscfg_rear 0x148>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_mii0>;
device_type = "network";
compatible = "st,stih415-dwmac", "snps,dwmac", "snps,dwmac-3.610";
status = "disabled";
- reg = <0xfef08000 0x8000>, <0x74 0x4>;
- reg-names = "stmmaceth", "sti-ethconf";
+ reg = <0xfef08000 0x8000>;
+ reg-names = "stmmaceth";
interrupts = <0 150 0>, <0 151 0>, <0 152 0>;
interrupt-names = "macirq", "eth_wake_irq", "eth_lpi";
snps,mixed-burst;
snps,force_sf_dma_mode;
- st,syscon = <&syscfg_sbc>;
+ st,syscon = <&syscfg_sbc 0x74>;
resets = <&softreset STIH415_ETH1_SOFTRESET>;
reset-names = "stmmaceth";
device_type = "network";
compatible = "st,stih416-dwmac", "snps,dwmac", "snps,dwmac-3.710";
status = "disabled";
- reg = <0xfe810000 0x8000>, <0x8bc 0x4>;
- reg-names = "stmmaceth", "sti-ethconf";
+ reg = <0xfe810000 0x8000>;
+ reg-names = "stmmaceth";
interrupts = <0 133 0>, <0 134 0>, <0 135 0>;
interrupt-names = "macirq", "eth_wake_irq", "eth_lpi";
snps,pbl = <32>;
snps,mixed-burst;
- st,syscon = <&syscfg_rear>;
+ st,syscon = <&syscfg_rear 0x8bc>;
resets = <&softreset STIH416_ETH0_SOFTRESET>;
reset-names = "stmmaceth";
pinctrl-names = "default";
device_type = "network";
compatible = "st,stih416-dwmac", "snps,dwmac", "snps,dwmac-3.710";
status = "disabled";
- reg = <0xfef08000 0x8000>, <0x7f0 0x4>;
- reg-names = "stmmaceth", "sti-ethconf";
+ reg = <0xfef08000 0x8000>;
+ reg-names = "stmmaceth";
interrupts = <0 136 0>, <0 137 0>, <0 138 0>;
interrupt-names = "macirq", "eth_wake_irq", "eth_lpi";
snps,pbl = <32>;
snps,mixed-burst;
- st,syscon = <&syscfg_sbc>;
+ st,syscon = <&syscfg_sbc 0x7f0>;
resets = <&softreset STIH416_ETH1_SOFTRESET>;
reset-names = "stmmaceth";
miphy365x_phy: phy@fe382000 {
compatible = "st,miphy365x-phy";
- st,syscfg = <&syscfg_rear>;
+ st,syscfg = <&syscfg_rear 0x824 0x828>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
phy_port0: port@fe382000 {
#phy-cells = <1>;
- reg = <0xfe382000 0x100>, <0xfe394000 0x100>, <0x824 0x4>;
- reg-names = "sata", "pcie", "syscfg";
+ reg = <0xfe382000 0x100>, <0xfe394000 0x100>;
+ reg-names = "sata", "pcie";
};
phy_port1: port@fe38a000 {
#phy-cells = <1>;
- reg = <0xfe38a000 0x100>, <0xfe804000 0x100>, <0x828 0x4>;
- reg-names = "sata", "pcie", "syscfg";
+ reg = <0xfe38a000 0x100>, <0xfe804000 0x100>;
+ reg-names = "sata", "pcie";
};
};
&fec0 {
phy-mode = "rmii";
+	phy-handle = <&ethphy0>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_fec0>;
status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ ethphy0: ethernet-phy@0 {
+ reg = <0>;
+ };
+
+ ethphy1: ethernet-phy@1 {
+ reg = <1>;
+ };
+ };
};
&fec1 {
phy-mode = "rmii";
+	phy-handle = <&ethphy1>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_fec1>;
status = "okay";
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_MVEBU=y
CONFIG_USB_EHCI_HCD=y
+CONFIG_USB_EHCI_EXYNOS=y
CONFIG_USB_EHCI_TEGRA=y
CONFIG_USB_EHCI_HCD_STI=y
CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_TI_PIPE3=y
CONFIG_PHY_MIPHY365X=y
CONFIG_PHY_STIH41X_USB=y
+CONFIG_PHY_STIH407_USB=y
CONFIG_PHY_SUN4I_USB=y
CONFIG_EXT4_FS=y
CONFIG_AUTOFS4_FS=y
#define __NR_getrandom (__NR_SYSCALL_BASE+384)
#define __NR_memfd_create (__NR_SYSCALL_BASE+385)
#define __NR_bpf (__NR_SYSCALL_BASE+386)
+#define __NR_execveat (__NR_SYSCALL_BASE+387)
/*
* The following SWIs are ARM private.
CALL(sys_getrandom)
/* 385 */ CALL(sys_memfd_create)
CALL(sys_bpf)
+ CALL(sys_execveat)
#ifndef syscalls_counted
.equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
#define syscalls_counted
{
return PERF_SAMPLE_REGS_ABI_32;
}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
seq_printf(m, "model name\t: %s rev %d (%s)\n",
cpu_name, cpuid & 15, elf_platform);
+#if defined(CONFIG_SMP)
+ seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
+ per_cpu(cpu_data, i).loops_per_jiffy / (500000UL/HZ),
+ (per_cpu(cpu_data, i).loops_per_jiffy / (5000UL/HZ)) % 100);
+#else
+ seq_printf(m, "BogoMIPS\t: %lu.%02lu\n",
+ loops_per_jiffy / (500000/HZ),
+ (loops_per_jiffy / (5000/HZ)) % 100);
+#endif
/* dump out the processor features */
seq_puts(m, "Features\t: ");
void __init smp_cpus_done(unsigned int max_cpus)
{
+ int cpu;
+ unsigned long bogosum = 0;
+
+ for_each_online_cpu(cpu)
+ bogosum += per_cpu(cpu_data, cpu).loops_per_jiffy;
+
+ printk(KERN_INFO "SMP: Total of %d processors activated "
+ "(%lu.%02lu BogoMIPS).\n",
+ num_online_cpus(),
+ bogosum / (500000/HZ),
+ (bogosum / (5000/HZ)) % 100);
+
hyp_mode_check();
}
#include <linux/micrel_phy.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
-#include <linux/fec.h>
-#include <linux/netdevice.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include <asm/system_misc.h>
#include "cpuidle.h"
#include "hardware.h"
-static struct fec_platform_data fec_pdata;
-
-static void imx6q_fec_sleep_enable(int enabled)
-{
- struct regmap *gpr;
-
- gpr = syscon_regmap_lookup_by_compatible("fsl,imx6q-iomuxc-gpr");
- if (!IS_ERR(gpr)) {
- if (enabled)
- regmap_update_bits(gpr, IOMUXC_GPR13,
- IMX6Q_GPR13_ENET_STOP_REQ,
- IMX6Q_GPR13_ENET_STOP_REQ);
-
- else
- regmap_update_bits(gpr, IOMUXC_GPR13,
- IMX6Q_GPR13_ENET_STOP_REQ, 0);
- } else
- pr_err("failed to find fsl,imx6q-iomux-gpr regmap\n");
-}
-
-static void __init imx6q_enet_plt_init(void)
-{
- struct device_node *np;
-
- np = of_find_node_by_path("/soc/aips-bus@02100000/ethernet@02188000");
- if (np && of_get_property(np, "fsl,magic-packet", NULL))
- fec_pdata.sleep_mode_enable = imx6q_fec_sleep_enable;
-}
-
/* For imx6q sabrelite board: set KSZ9021RN RGMII pad skew */
static int ksz9021rn_phy_fixup(struct phy_device *phydev)
{
}
}
-/* Add auxdata to pass platform data */
-static const struct of_dev_auxdata imx6q_auxdata_lookup[] __initconst = {
- OF_DEV_AUXDATA("fsl,imx6q-fec", 0x02188000, NULL, &fec_pdata),
- { /* sentinel */ }
-};
-
static void __init imx6q_init_machine(void)
{
struct device *parent;
imx6q_enet_phy_init();
- of_platform_populate(NULL, of_default_bus_match_table,
- imx6q_auxdata_lookup, parent);
+ of_platform_populate(NULL, of_default_bus_match_table, NULL, parent);
imx_anatop_init();
cpu_is_imx6q() ? imx6q_pm_init() : imx6dl_pm_init();
imx6q_1588_init();
- imx6q_enet_plt_init();
imx6q_axi_init();
}
#include <linux/regmap.h>
#include <linux/mfd/syscon.h>
#include <linux/mfd/syscon/imx6q-iomuxc-gpr.h>
-#include <linux/fec.h>
-#include <linux/netdevice.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
#include "common.h"
#include "cpuidle.h"
-static struct fec_platform_data fec_pdata[2];
-
-static void imx6sx_fec1_sleep_enable(int enabled)
-{
- struct regmap *gpr;
-
- gpr = syscon_regmap_lookup_by_compatible("fsl,imx6sx-iomuxc-gpr");
- if (!IS_ERR(gpr)) {
- if (enabled)
- regmap_update_bits(gpr, IOMUXC_GPR4,
- IMX6SX_GPR4_FEC_ENET1_STOP_REQ,
- IMX6SX_GPR4_FEC_ENET1_STOP_REQ);
- else
- regmap_update_bits(gpr, IOMUXC_GPR4,
- IMX6SX_GPR4_FEC_ENET1_STOP_REQ, 0);
- } else
- pr_err("failed to find fsl,imx6sx-iomux-gpr regmap\n");
-}
-
-static void imx6sx_fec2_sleep_enable(int enabled)
-{
- struct regmap *gpr;
-
- gpr = syscon_regmap_lookup_by_compatible("fsl,imx6sx-iomuxc-gpr");
- if (!IS_ERR(gpr)) {
- if (enabled)
- regmap_update_bits(gpr, IOMUXC_GPR4,
- IMX6SX_GPR4_FEC_ENET2_STOP_REQ,
- IMX6SX_GPR4_FEC_ENET2_STOP_REQ);
- else
- regmap_update_bits(gpr, IOMUXC_GPR4,
- IMX6SX_GPR4_FEC_ENET2_STOP_REQ, 0);
- } else
- pr_err("failed to find fsl,imx6sx-iomux-gpr regmap\n");
-}
-
-static void __init imx6sx_enet_plt_init(void)
-{
- struct device_node *np;
-
- np = of_find_node_by_path("/soc/aips-bus@02100000/ethernet@02188000");
- if (np && of_get_property(np, "fsl,magic-packet", NULL))
- fec_pdata[0].sleep_mode_enable = imx6sx_fec1_sleep_enable;
- np = of_find_node_by_path("/soc/aips-bus@02100000/ethernet@021b4000");
- if (np && of_get_property(np, "fsl,magic-packet", NULL))
- fec_pdata[1].sleep_mode_enable = imx6sx_fec2_sleep_enable;
-}
-
static int ar8031_phy_fixup(struct phy_device *dev)
{
u16 val;
static const char units[] = "KMGTPE";
u64 prot = val & pg_level[level].mask;
- if (addr < USER_PGTABLES_CEILING)
- return;
-
if (!st->level) {
st->level = level;
st->current_prot = prot;
pgd_t *pgd = swapper_pg_dir;
struct pg_state st;
unsigned long addr;
- unsigned i, pgdoff = USER_PGTABLES_CEILING / PGDIR_SIZE;
+ unsigned i;
memset(&st, 0, sizeof(st));
st.seq = m;
st.marker = address_markers;
- pgd += pgdoff;
-
- for (i = pgdoff; i < PTRS_PER_PGD; i++, pgd++) {
+ for (i = 0; i < PTRS_PER_PGD; i++, pgd++) {
addr = i * PGDIR_SIZE;
if (!pgd_none(*pgd)) {
walk_pud(&st, pgd, addr);
.start = (unsigned long)_stext,
.end = (unsigned long)__init_begin,
#ifdef CONFIG_ARM_LPAE
- .mask = ~PMD_SECT_RDONLY,
- .prot = PMD_SECT_RDONLY,
+ .mask = ~L_PMD_SECT_RDONLY,
+ .prot = L_PMD_SECT_RDONLY,
#else
.mask = ~(PMD_SECT_APX | PMD_SECT_AP_WRITE),
.prot = PMD_SECT_APX | PMD_SECT_AP_WRITE,
static void __init map_lowmem(void)
{
struct memblock_region *reg;
- unsigned long kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
- unsigned long kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
+ phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
+ phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);
/* Map all the lowmem memory banks. */
for_each_memblock(memory, reg) {
#include <asm/barrier.h>
+#include <linux/bug.h>
#include <linux/init.h>
#include <linux/types.h>
u64 reg_id_aa64pfr0;
u64 reg_id_aa64pfr1;
+ u32 reg_id_dfr0;
u32 reg_id_isar0;
u32 reg_id_isar1;
u32 reg_id_isar2;
u32 reg_id_mmfr3;
u32 reg_id_pfr0;
u32 reg_id_pfr1;
+
+ u32 reg_mvfr0;
+ u32 reg_mvfr1;
+ u32 reg_mvfr2;
};
DECLARE_PER_CPU(struct cpuinfo_arm64, cpu_data);
static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
{
vcpu->arch.hcr_el2 = HCR_GUEST_FLAGS;
+ if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
+ vcpu->arch.hcr_el2 &= ~HCR_RW;
}
static inline unsigned long *vcpu_pc(const struct kvm_vcpu *vcpu)
#include <asm/fpsimd.h>
#include <asm/hw_breakpoint.h>
+#include <asm/pgtable-hwdef.h>
#include <asm/ptrace.h>
#include <asm/types.h>
/* Free all resources held by a thread. */
extern void release_thread(struct task_struct *);
-/* Prepare to copy thread state - unlazy all lazy status */
-#define prepare_to_copy(tsk) do { } while (0)
-
unsigned long get_wchan(struct task_struct *p);
#define cpu_relax() barrier()
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
-#define __NR_compat_syscalls 386
+#define __NR_compat_syscalls 387
#endif
#define __ARCH_WANT_SYS_CLONE
* If we have AArch32, we care about 32-bit features for compat. These
* registers should be RES0 otherwise.
*/
+ diff |= CHECK(id_dfr0, boot, cur, cpu);
diff |= CHECK(id_isar0, boot, cur, cpu);
diff |= CHECK(id_isar1, boot, cur, cpu);
diff |= CHECK(id_isar2, boot, cur, cpu);
diff |= CHECK(id_pfr0, boot, cur, cpu);
diff |= CHECK(id_pfr1, boot, cur, cpu);
+ diff |= CHECK(mvfr0, boot, cur, cpu);
+ diff |= CHECK(mvfr1, boot, cur, cpu);
+ diff |= CHECK(mvfr2, boot, cur, cpu);
+
/*
* Mismatched CPU features are a recipe for disaster. Don't even
* pretend to support them.
info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1);
info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
+ info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+ info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
+ info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
+ info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+
cpuinfo_detect_icache_policy(info);
check_local_cpu_errata();
/* boot time idmap_pg_dir is incomplete, so fill in missing parts */
efi_setup_idmap();
+ early_memunmap(memmap.map, memmap.map_end - memmap.map);
}
static int __init remap_region(efi_memory_desc_t *md, void **new)
}
mapsize = memmap.map_end - memmap.map;
- early_memunmap(memmap.map, mapsize);
if (efi_runtime_disabled()) {
pr_info("EFI runtime services will be disabled.\n");
#include <linux/mm.h>
#include <linux/moduleloader.h>
#include <linux/vmalloc.h>
+#include <asm/alternative.h>
#include <asm/insn.h>
#include <asm/sections.h>
else
return PERF_SAMPLE_REGS_ABI_64;
}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
request_standard_resources();
efi_idmap_init();
+ early_ioremap_reset();
unflatten_device_tree();
#include <asm/cacheflush.h>
#include <asm/cpu_ops.h>
#include <asm/cputype.h>
+#include <asm/io.h>
#include <asm/smp_plat.h>
extern void secondary_holding_pen(void);
* Instead, we invalidate Stage-2 for this IPA, and the
* whole of Stage-1. Weep...
*/
+ lsr x1, x1, #12
tlbi ipas2e1is, x1
/*
* We have to ensure completion of the invalidation at Stage-2,
if (!cpu_has_32bit_el1())
return -EINVAL;
cpu_reset = &default_regs_reset32;
- vcpu->arch.hcr_el2 &= ~HCR_RW;
} else {
cpu_reset = &default_regs_reset;
}
*/
#include <linux/device.h>
+#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
-#define NR_syscalls 318 /* length of syscall table */
+#define NR_syscalls 319 /* length of syscall table */
/*
* The following defines stop scripts/checksyscalls.sh from complaining about
#define __NR_getrandom 1339
#define __NR_memfd_create 1340
#define __NR_bpf 1341
+#define __NR_execveat 1342
#endif /* _UAPI_ASM_IA64_UNISTD_H */
}
/* wrapper to silence section mismatch warning */
-int __ref acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
+int __ref acpi_map_cpu(acpi_handle handle, int physid, int *pcpu)
{
return _acpi_map_lsapic(handle, physid, pcpu);
}
-EXPORT_SYMBOL(acpi_map_lsapic);
+EXPORT_SYMBOL(acpi_map_cpu);
-int acpi_unmap_lsapic(int cpu)
+int acpi_unmap_cpu(int cpu)
{
ia64_cpu_to_sapicid[cpu] = -1;
set_cpu_present(cpu, false);
return (0);
}
-
-EXPORT_SYMBOL(acpi_unmap_lsapic);
+EXPORT_SYMBOL(acpi_unmap_cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
#ifdef CONFIG_ACPI_NUMA
data8 sys_getrandom
data8 sys_memfd_create // 1340
data8 sys_bpf
+ data8 sys_execveat
.org sys_call_table + 8*NR_syscalls // guard against failures to increase NR_syscalls
#endif /* __IA64_ASM_PARAVIRTUALIZED_NATIVE */
cpuinfo.has_div = fcpu_has(cpu, "altr,has-div");
cpuinfo.has_mul = fcpu_has(cpu, "altr,has-mul");
cpuinfo.has_mulx = fcpu_has(cpu, "altr,has-mulx");
+ cpuinfo.mmu = fcpu_has(cpu, "altr,has-mmu");
if (IS_ENABLED(CONFIG_NIOS2_HW_DIV_SUPPORT) && !cpuinfo.has_div)
err_cpu("DIV");
GET_THREAD_INFO r1
ldw r4, TI_PREEMPT_COUNT(r1)
bne r4, r0, restore_all
-
-need_resched:
ldw r4, TI_FLAGS(r1) /* ? Need resched set */
BTBZ r10, r4, TIF_NEED_RESCHED, restore_all
ldw r4, PT_ESTATUS(sp) /* ? Interrupts off */
andi r10, r4, ESTATUS_EPIE
beq r10, r0, restore_all
- movia r4, PREEMPT_ACTIVE
- stw r4, TI_PREEMPT_COUNT(r1)
- rdctl r10, status /* enable intrs again */
- ori r10, r10 ,STATUS_PIE
- wrctl status, r10
- PUSH r1
- call schedule
- POP r1
- mov r4, r0
- stw r4, TI_PREEMPT_COUNT(r1)
- rdctl r10, status /* disable intrs */
- andi r10, r10, %lo(~STATUS_PIE)
- wrctl status, r10
- br need_resched
-#else
- br restore_all
+ call preempt_schedule_irq
#endif
+ br restore_all
/***********************************************************************
* A few syscall wrappers
extern void reserve_crashkernel(void);
extern void machine_kexec_mask_interrupts(void);
+static inline bool kdump_in_progress(void)
+{
+ return crashing_cpu >= 0;
+}
+
#else /* !CONFIG_KEXEC */
static inline void crash_kexec_secondary(struct pt_regs *regs) { }
return 0;
}
+static inline bool kdump_in_progress(void)
+{
+ return false;
+}
+
#endif /* CONFIG_KEXEC */
#endif /* ! __ASSEMBLY__ */
#endif /* __KERNEL__ */
SYSCALL_SPU(getrandom)
SYSCALL_SPU(memfd_create)
SYSCALL_SPU(bpf)
+COMPAT_SYS(execveat)
#include <uapi/asm/unistd.h>
-#define __NR_syscalls 362
+#define __NR_syscalls 363
#define __NR__exit __NR_exit
#define NR_syscalls __NR_syscalls
#define __NR_getrandom 359
#define __NR_memfd_create 360
#define __NR_bpf 361
+#define __NR_execveat 362
#endif /* _UAPI_ASM_POWERPC_UNISTD_H_ */
* using debugger IPI.
*/
- if (crashing_cpu == -1)
+ if (!kdump_in_progress())
kexec_prepare_cpus();
pr_debug("kexec: Starting switchover sequence.\n");
smp_store_cpu_info(cpu);
set_dec(tb_ticks_per_jiffy);
preempt_disable();
+ cpu_callin_map[cpu] = 1;
if (smp_ops->setup_cpu)
smp_ops->setup_cpu(cpu);
notify_cpu_starting(cpu);
set_cpu_online(cpu, true);
- /*
- * CPU must be marked active and online before we signal back to the
- * master, because the scheduler needs to see the cpu_online and
- * cpu_active bits set.
- */
- smp_wmb();
- cpu_callin_map[cpu] = 1;
-
local_irq_enable();
cpu_startup_entry(CPUHP_ONLINE);
#include <asm/trace.h>
#include <asm/firmware.h>
#include <asm/plpar_wrappers.h>
+#include <asm/kexec.h>
#include <asm/fadump.h>
#include "pseries.h"
* out to the user, but at least this will stop us from
* continuing on further and creating an even more
* difficult to debug situation.
+ *
+ * There is a known problem when kdump'ing, if cpus are offline
+ * the above call will fail. Rather than panicking again, keep
+ * going and hope the kdump kernel is also little endian, which
+ * it usually is.
*/
- if (rc)
+ if (rc && !kdump_in_progress())
panic("Could not enable big endian exceptions");
}
#endif
struct dbfs_d2fc_hdr {
u64 len; /* Length of d2fc buffer without header */
u16 version; /* Version of header */
- char tod_ext[16]; /* TOD clock for d2fc */
+ char tod_ext[STORE_CLOCK_EXT_SIZE]; /* TOD clock for d2fc */
u64 count; /* Number of VM guests in d2fc buffer */
char reserved[30];
} __attribute__ ((packed));
static inline notrace unsigned long arch_local_save_flags(void)
{
- return __arch_local_irq_stosm(0x00);
+ return __arch_local_irq_stnsm(0xff);
}
static inline notrace unsigned long arch_local_irq_save(void)
set_clock_comparator(S390_lowcore.clock_comparator);
}
-#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+#define CLOCK_TICK_RATE 1193180 /* Underlying HZ */
+#define STORE_CLOCK_EXT_SIZE 16 /* stcke writes 16 bytes */
typedef unsigned long long cycles_t;
-static inline void get_tod_clock_ext(char clk[16])
+static inline void get_tod_clock_ext(char *clk)
{
- typedef struct { char _[sizeof(clk)]; } addrtype;
+ typedef struct { char _[STORE_CLOCK_EXT_SIZE]; } addrtype;
asm volatile("stcke %0" : "=Q" (*(addrtype *) clk) : : "cc");
}
static inline unsigned long long get_tod_clock(void)
{
- unsigned char clk[16];
+ unsigned char clk[STORE_CLOCK_EXT_SIZE];
+
get_tod_clock_ext(clk);
return *((unsigned long long *)&clk[1]);
}
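The bug being fixed: an array-typed function parameter decays to a pointer,
so the old sizeof(clk) inside get_tod_clock_ext() measured the pointer, not
the 16-byte buffer, silently shrinking the addrtype constraint. A small user
space demonstration (an assumed example):

	#include <stdio.h>

	/* "char clk[16]" as a parameter is really "char *clk" */
	static void takes_array(char clk[16])
	{
		/* 8 on a 64-bit build, not 16 */
		printf("callee:    sizeof(clk) = %zu\n", sizeof(clk));
	}

	int main(void)
	{
		char clk[16];

		printf("call site: sizeof(clk) = %zu\n", sizeof(clk)); /* 16 */
		takes_array(clk);
		return 0;
	}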
#define __NR_bpf 351
#define __NR_s390_pci_mmio_write 352
#define __NR_s390_pci_mmio_read 353
-#define NR_syscalls 354
+#define __NR_execveat 354
+#define NR_syscalls 355
/*
* There are some system calls that are not present on 64 bit, some
SYSCALL(sys_bpf,sys_bpf,compat_sys_bpf)
SYSCALL(sys_ni_syscall,sys_s390_pci_mmio_write,compat_sys_s390_pci_mmio_write)
SYSCALL(sys_ni_syscall,sys_s390_pci_mmio_read,compat_sys_s390_pci_mmio_read)
+SYSCALL(sys_execveat,sys_execveat,compat_sys_execveat)
return false;
}
+static int check_per_event(unsigned short cause, unsigned long control,
+ struct pt_regs *regs)
+{
+ if (!(regs->psw.mask & PSW_MASK_PER))
+ return 0;
+ /* user space single step */
+ if (control == 0)
+ return 1;
+ /* over indication for storage alteration */
+ if ((control & 0x20200000) && (cause & 0x2000))
+ return 1;
+ if (cause & 0x8000) {
+ /* all branches */
+ if ((control & 0x80800000) == 0x80000000)
+ return 1;
+ /* branch into selected range */
+ if (((control & 0x80800000) == 0x80800000) &&
+ regs->psw.addr >= current->thread.per_user.start &&
+ regs->psw.addr <= current->thread.per_user.end)
+ return 1;
+ }
+ return 0;
+}
+
int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
{
int fixup = probe_get_fixup_type(auprobe->insn);
if (regs->psw.addr - utask->xol_vaddr == ilen)
regs->psw.addr = utask->vaddr + ilen;
}
- /* If per tracing was active generate trap */
- if (regs->psw.mask & PSW_MASK_PER)
- do_per_trap(regs);
+ if (check_per_event(current->thread.per_event.cause,
+ current->thread.per_user.control, regs)) {
+ /* fix per address */
+ current->thread.per_event.address = utask->vaddr;
+ /* trigger per event */
+ set_pt_regs_flag(regs, PIF_PER_TRAP);
+ }
return 0;
}
clear_thread_flag(TIF_UPROBE_SINGLESTEP);
regs->int_code = auprobe->saved_int_code;
regs->psw.addr = current->utask->vaddr;
+ current->thread.per_event.address = current->utask->vaddr;
}
unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline,
__rc; \
})
-#define emu_store_ril(ptr, input) \
+#define emu_store_ril(regs, ptr, input) \
({ \
unsigned int mask = sizeof(*(ptr)) - 1; \
+ __typeof__(ptr) __ptr = (ptr); \
int __rc = 0; \
\
if (!test_facility(34)) \
__rc = EMU_ILLEGAL_OP; \
- else if ((u64 __force)ptr & mask) \
+ else if ((u64 __force)__ptr & mask) \
__rc = EMU_SPECIFICATION; \
- else if (put_user(*(input), ptr)) \
+ else if (put_user(*(input), __ptr)) \
__rc = EMU_ADDRESSING; \
+ if (__rc == 0) \
+ sim_stor_event(regs, __ptr, mask + 1); \
__rc; \
})
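The new __ptr temporary guards against the classic macro hazard: a macro
parameter expanded in several places is evaluated once per use, which
misbehaves for arguments with side effects. A minimal illustration in GNU C
(an assumed example, unrelated to the s390 code):

	#include <stdio.h>

	#define BAD_ABS(x)  ((x) < 0 ? -(x) : (x))	/* evaluates x twice */
	#define GOOD_ABS(x) ({ __typeof__(x) _x = (x); _x < 0 ? -_x : _x; })

	int main(void)
	{
		int a = -3, b = -3;

		printf("%d\n", BAD_ABS(a++));	/* 2: a++ ran twice */
		printf("%d\n", GOOD_ABS(b++));	/* 3: b++ ran once  */
		return 0;
	}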
s16 s16[4];
};
+/*
+ * If user per registers are setup to trace storage alterations and an
+ * emulated store took place on a fitting address a user trap is generated.
+ */
+static void sim_stor_event(struct pt_regs *regs, void *addr, int len)
+{
+ if (!(regs->psw.mask & PSW_MASK_PER))
+ return;
+ if (!(current->thread.per_user.control & PER_EVENT_STORE))
+ return;
+ if ((void *)current->thread.per_user.start > (addr + len))
+ return;
+ if ((void *)current->thread.per_user.end < addr)
+ return;
+ current->thread.per_event.address = regs->psw.addr;
+ current->thread.per_event.cause = PER_EVENT_STORE >> 16;
+ set_pt_regs_flag(regs, PIF_PER_TRAP);
+}
+
/*
* pc relative instructions are emulated, since parameters may not be
* accessible from the xol area due to range limitations.
rc = emu_load_ril((u32 __user *)uptr, &rx->u64);
break;
case 0x07: /* sthrl */
- rc = emu_store_ril((u16 __user *)uptr, &rx->u16[3]);
+ rc = emu_store_ril(regs, (u16 __user *)uptr, &rx->u16[3]);
break;
case 0x0b: /* stgrl */
- rc = emu_store_ril((u64 __user *)uptr, &rx->u64);
+ rc = emu_store_ril(regs, (u64 __user *)uptr, &rx->u64);
break;
case 0x0f: /* strl */
- rc = emu_store_ril((u32 __user *)uptr, &rx->u32[1]);
+ rc = emu_store_ril(regs, (u32 __user *)uptr, &rx->u32[1]);
break;
}
break;
struct thread_info *ti = task_thread_info(tsk);
u64 timer, system;
- WARN_ON_ONCE(!irqs_disabled());
-
timer = S390_lowcore.last_update_timer;
S390_lowcore.last_update_timer = get_vtimer();
S390_lowcore.system_timer += timer - S390_lowcore.last_update_timer;
static unsigned long __gmap_segment_gaddr(unsigned long *entry)
{
struct page *page;
- unsigned long offset;
+ unsigned long offset, mask;
offset = (unsigned long) entry / sizeof(unsigned long);
offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
- page = pmd_to_page((pmd_t *) entry);
+ mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);
+ page = virt_to_page((void *)((unsigned long) entry & mask));
return page->index + offset;
}
EMIT4_DISP(0x88500000, K);
break;
case BPF_ALU | BPF_NEG: /* A = -A */
- /* lnr %r5,%r5 */
- EMIT2(0x1155);
+ /* lcr %r5,%r5 */
+ EMIT2(0x1355);
break;
case BPF_JMP | BPF_JA: /* ip += K */
offset = addrs[i + K] + jit->start - jit->prg;
xbranch: /* Emit compare if the branch targets are different */
if (filter->jt != filter->jf) {
jit->seen |= SEEN_XREG;
- /* cr %r5,%r12 */
- EMIT2(0x195c);
+ /* clr %r5,%r12 */
+ EMIT2(0x155c);
}
goto branch;
case BPF_JMP | BPF_JSET | BPF_X: /* ip += (A & X) ? jt : jf */
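Both fixes swap a signed-semantics instruction for the one classic BPF
requires: LNR is "load negative" (result = -|A|, so a negative A was left
unchanged) while LCR is true two's-complement negation, and CR is a signed
compare while BPF's jump conditions are unsigned, hence CLR. A quick C
illustration (an assumed example):

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		int a = -5;
		unsigned int x = 0x80000000u, y = 1;

		printf("lnr-like: %d\n", -abs(a));	/* -5: wrong for A = -A */
		printf("lcr-like: %d\n", -a);		/*  5: correct negation */

		/* signed compare says x < y; BPF_JGT needs unsigned */
		printf("signed:   x < y is %d\n", (int)x < (int)y); /* 1 */
		printf("unsigned: x > y is %d\n", x > y);	    /* 1 */
		return 0;
	}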
default y
select HAVE_ARCH_AUDITSYSCALL
select HAVE_UID16
+ select HAVE_FUTEX_CMPXCHG if FUTEX
select GENERIC_IRQ_SHOW
select GENERIC_CPU_DEVICES
select GENERIC_IO
$(obj)/cpustr.h: $(obj)/mkcpustr FORCE
$(call if_changed,cpustr)
endif
+clean-files += cpustr.h
# ---------------------------------------------------------------------------
obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o
obj-$(CONFIG_CRYPTO_SHA1_SSSE3) += sha1-ssse3.o
-obj-$(CONFIG_CRYPTO_SHA1_MB) += sha-mb/
obj-$(CONFIG_CRYPTO_CRC32_PCLMUL) += crc32-pclmul.o
obj-$(CONFIG_CRYPTO_SHA256_SSSE3) += sha256-ssse3.o
obj-$(CONFIG_CRYPTO_SHA512_SSSE3) += sha512-ssse3.o
ifeq ($(avx2_supported),yes)
obj-$(CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64) += camellia-aesni-avx2.o
obj-$(CONFIG_CRYPTO_SERPENT_AVX2_X86_64) += serpent-avx2.o
+ obj-$(CONFIG_CRYPTO_SHA1_MB) += sha-mb/
endif
aes-i586-y := aes-i586-asm_32.o aes_glue.o
.if (klen == KEY_128)
.if (load_keys)
- vmovdqa 3*16(p_keys), xkeyA
+ vmovdqa 3*16(p_keys), xkey4
.endif
.else
vmovdqa 3*16(p_keys), xkeyA
add $(16*by), p_in
.if (klen == KEY_128)
- vmovdqa 4*16(p_keys), xkey4
+ vmovdqa 4*16(p_keys), xkeyB
.else
.if (load_keys)
vmovdqa 4*16(p_keys), xkey4
.set i, 0
.rept by
club XDATA, i
- vaesenc xkeyA, var_xdata, var_xdata /* key 3 */
+ /* key 3 */
+ .if (klen == KEY_128)
+ vaesenc xkey4, var_xdata, var_xdata
+ .else
+ vaesenc xkeyA, var_xdata, var_xdata
+ .endif
.set i, (i +1)
.endr
.set i, 0
.rept by
club XDATA, i
- vaesenc xkey4, var_xdata, var_xdata /* key 4 */
+ /* key 4 */
+ .if (klen == KEY_128)
+ vaesenc xkeyB, var_xdata, var_xdata
+ .else
+ vaesenc xkey4, var_xdata, var_xdata
+ .endif
.set i, (i +1)
.endr
.if (klen == KEY_128)
.if (load_keys)
- vmovdqa 6*16(p_keys), xkeyB
+ vmovdqa 6*16(p_keys), xkey8
.endif
.else
vmovdqa 6*16(p_keys), xkeyB
.set i, 0
.rept by
club XDATA, i
- vaesenc xkeyB, var_xdata, var_xdata /* key 6 */
+ /* key 6 */
+ .if (klen == KEY_128)
+ vaesenc xkey8, var_xdata, var_xdata
+ .else
+ vaesenc xkeyB, var_xdata, var_xdata
+ .endif
.set i, (i +1)
.endr
.if (klen == KEY_128)
- vmovdqa 8*16(p_keys), xkey8
+ vmovdqa 8*16(p_keys), xkeyB
.else
.if (load_keys)
vmovdqa 8*16(p_keys), xkey8
.if (klen == KEY_128)
.if (load_keys)
- vmovdqa 9*16(p_keys), xkeyA
+ vmovdqa 9*16(p_keys), xkey12
.endif
.else
vmovdqa 9*16(p_keys), xkeyA
.set i, 0
.rept by
club XDATA, i
- vaesenc xkey8, var_xdata, var_xdata /* key 8 */
+ /* key 8 */
+ .if (klen == KEY_128)
+ vaesenc xkeyB, var_xdata, var_xdata
+ .else
+ vaesenc xkey8, var_xdata, var_xdata
+ .endif
.set i, (i +1)
.endr
.set i, 0
.rept by
club XDATA, i
- vaesenc xkeyA, var_xdata, var_xdata /* key 9 */
+ /* key 9 */
+ .if (klen == KEY_128)
+ vaesenc xkey12, var_xdata, var_xdata
+ .else
+ vaesenc xkeyA, var_xdata, var_xdata
+ .endif
.set i, (i +1)
.endr
/* main body of aes ctr load */
.macro do_aes_ctrmain key_len
-
cmp $16, num_bytes
jb .Ldo_return2\key_len
/*
* Load per CPU data from GDT. LSL is faster than RDTSCP and
- * works on all CPUs.
+ * works on all CPUs. This is volatile so that it orders
+ * correctly wrt barrier() and to keep gcc from cleverly
+ * hoisting it out of the calling function.
*/
- asm("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
+ asm volatile ("lsl %1,%0" : "=r" (p) : "r" (__PER_CPU_SEG));
return p;
}
}
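
The "volatile" added above keeps the compiler from caching or hoisting the LSL result across barrier(), which matters because the segment limit encodes per-CPU state that can change across preemption points. A minimal sketch of the pattern, assuming an x86 target; read_seg_limit() is a hypothetical helper, not a kernel API:

/* Hypothetical helper: query a segment limit via LSL (x86 only).
 * "volatile" forces the asm to execute at each call site and keeps
 * the compiler from hoisting it across barriers or out of loops. */
static inline unsigned long read_seg_limit(unsigned long sel)
{
	unsigned long limit = 0;

	asm volatile ("lsl %1, %0" : "+r" (limit) : "r" (sel));
	return limit;
}

/* Usage sketch; on a real system LSL clears ZF for an invalid
 * selector, which this example deliberately ignores. */
int main(void)
{
	return (int)read_seg_limit(0x33);
}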
/* wrapper to silence section mismatch warning */
-int __ref acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu)
+int __ref acpi_map_cpu(acpi_handle handle, int physid, int *pcpu)
{
return _acpi_map_lsapic(handle, physid, pcpu);
}
-EXPORT_SYMBOL(acpi_map_lsapic);
+EXPORT_SYMBOL(acpi_map_cpu);
-int acpi_unmap_lsapic(int cpu)
+int acpi_unmap_cpu(int cpu)
{
#ifdef CONFIG_ACPI_NUMA
set_apicid_to_node(per_cpu(x86_cpu_to_apicid, cpu), NUMA_NO_NODE);
return (0);
}
-
-EXPORT_SYMBOL(acpi_unmap_lsapic);
+EXPORT_SYMBOL(acpi_unmap_cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base)
$(obj)/capflags.c: $(cpufeature) $(src)/mkcapflags.sh FORCE
$(call if_changed,mkcapflags)
endif
+clean-files += capflags.c
# If the /* comment */ starts with a quote string, grab that.
VALUE="$(echo "$i" | sed -n 's@.*/\* *\("[^"]*"\).*\*/@\1@p')"
[ -z "$VALUE" ] && VALUE="\"$NAME\""
- [ "$VALUE" == '""' ] && continue
+ [ "$VALUE" = '""' ] && continue
# Name is uppercase, VALUE is all lowercase
VALUE="$(echo "$VALUE" | tr A-Z a-z)"
#define UNCORE_PCI_DEV_TYPE(data) ((data >> 8) & 0xff)
#define UNCORE_PCI_DEV_IDX(data) (data & 0xff)
#define UNCORE_EXTRA_PCI_DEV 0xff
-#define UNCORE_EXTRA_PCI_DEV_MAX 2
+#define UNCORE_EXTRA_PCI_DEV_MAX 3
/* support up to 8 sockets */
#define UNCORE_SOCKET_MAX 8
enum {
SNBEP_PCI_QPI_PORT0_FILTER,
SNBEP_PCI_QPI_PORT1_FILTER,
+ HSWEP_PCI_PCU_3,
};
static int snbep_qpi_hw_config(struct intel_uncore_box *box, struct perf_event *event)
{
if (hswep_uncore_cbox.num_boxes > boot_cpu_data.x86_max_cores)
hswep_uncore_cbox.num_boxes = boot_cpu_data.x86_max_cores;
+
+ /* Detect 6-8 core systems with only two SBOXes */
+ if (uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3]) {
+ u32 capid4;
+
+ pci_read_config_dword(uncore_extra_pci_dev[0][HSWEP_PCI_PCU_3],
+ 0x94, &capid4);
+ if (((capid4 >> 6) & 0x3) == 0)
+ hswep_uncore_sbox.num_boxes = 2;
+ }
+
uncore_msr_uncores = hswep_msr_uncores;
}
.driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
SNBEP_PCI_QPI_PORT1_FILTER),
},
+ { /* PCU.3 (for Capability registers) */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x2fc0),
+ .driver_data = UNCORE_PCI_DEV_DATA(UNCORE_EXTRA_PCI_DEV,
+ HSWEP_PCI_PCU_3),
+ },
{ /* end: all zeroes */ }
};
{
return PERF_SAMPLE_REGS_ABI_32;
}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
#else /* CONFIG_X86_64 */
#define REG_NOSUPPORT ((1ULL << PERF_REG_X86_DS) | \
(1ULL << PERF_REG_X86_ES) | \
else
return PERF_SAMPLE_REGS_ABI_64;
}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ struct pt_regs *user_regs = task_pt_regs(current);
+
+ /*
+ * If we're in an NMI that interrupted task_pt_regs setup, then
+ * we can't sample user regs at all. This check isn't really
+ * sufficient, though, as we could be in an NMI inside an interrupt
+ * that happened during task_pt_regs setup.
+ */
+ if (regs->sp > (unsigned long)&user_regs->r11 &&
+ regs->sp <= (unsigned long)(user_regs + 1)) {
+ regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
+ regs_user->regs = NULL;
+ return;
+ }
+
+ /*
+ * RIP, flags, and the argument registers are usually saved.
+ * orig_ax is probably okay, too.
+ */
+ regs_user_copy->ip = user_regs->ip;
+ regs_user_copy->cx = user_regs->cx;
+ regs_user_copy->dx = user_regs->dx;
+ regs_user_copy->si = user_regs->si;
+ regs_user_copy->di = user_regs->di;
+ regs_user_copy->r8 = user_regs->r8;
+ regs_user_copy->r9 = user_regs->r9;
+ regs_user_copy->r10 = user_regs->r10;
+ regs_user_copy->r11 = user_regs->r11;
+ regs_user_copy->orig_ax = user_regs->orig_ax;
+ regs_user_copy->flags = user_regs->flags;
+
+ /*
+ * Don't even try to report the "rest" regs.
+ */
+ regs_user_copy->bx = -1;
+ regs_user_copy->bp = -1;
+ regs_user_copy->r12 = -1;
+ regs_user_copy->r13 = -1;
+ regs_user_copy->r14 = -1;
+ regs_user_copy->r15 = -1;
+
+ /*
+ * For this to be at all useful, we need a reasonable guess for
+ * sp and the ABI. Be careful: we're in NMI context, and we're
+ * considering current to be the current task, so we should
+ * be careful not to look at any other percpu variables that might
+ * change during context switches.
+ */
+ if (IS_ENABLED(CONFIG_IA32_EMULATION) &&
+ task_thread_info(current)->status & TS_COMPAT) {
+ /* Easy case: we're in a compat syscall. */
+ regs_user->abi = PERF_SAMPLE_REGS_ABI_32;
+ regs_user_copy->sp = user_regs->sp;
+ regs_user_copy->cs = user_regs->cs;
+ regs_user_copy->ss = user_regs->ss;
+ } else if (user_regs->orig_ax != -1) {
+ /*
+ * We're probably in a 64-bit syscall.
+ * Warning: this code is severely racy. At least it's better
+ * than just blindly copying user_regs.
+ */
+ regs_user->abi = PERF_SAMPLE_REGS_ABI_64;
+ regs_user_copy->sp = this_cpu_read(old_rsp);
+ regs_user_copy->cs = __USER_CS;
+ regs_user_copy->ss = __USER_DS;
+ regs_user_copy->cx = -1; /* usually contains garbage */
+ } else {
+ /* We're probably in an interrupt or exception. */
+ regs_user->abi = user_64bit_mode(user_regs) ?
+ PERF_SAMPLE_REGS_ABI_64 : PERF_SAMPLE_REGS_ABI_32;
+ regs_user_copy->sp = user_regs->sp;
+ regs_user_copy->cs = user_regs->cs;
+ regs_user_copy->ss = user_regs->ss;
+ }
+
+ regs_user->regs = regs_user_copy;
+}
#endif /* CONFIG_X86_32 */
/* Verify next sizeof(t) bytes can be on the same instruction */
#define validate_next(t, insn, n) \
- ((insn)->next_byte + sizeof(t) + n < (insn)->end_kaddr)
+ ((insn)->next_byte + sizeof(t) + n <= (insn)->end_kaddr)
#define __get_next(t, insn) \
({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); r; })
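
The validate_next() fix above is a classic off-by-one: with '<' the decoder rejected an instruction whose last byte ends exactly at end_kaddr. A tiny sketch of the corrected bounds check, assuming a plain byte buffer:

#include <stdio.h>
#include <string.h>

/* May we read 'n' more bytes starting at 'next' without passing 'end'?
 * 'end' points one past the last valid byte, so '<=' is correct:
 * next + n == end means the read ends exactly at the buffer's end. */
static int can_read(const unsigned char *next, const unsigned char *end,
		    size_t n)
{
	return next + n <= end;
}

int main(void)
{
	unsigned char buf[16];
	const unsigned char *end = buf + sizeof(buf);

	memset(buf, 0, sizeof(buf));
	/* Reading the final 4 bytes is fine ... */
	printf("last 4 bytes: %d\n", can_read(buf + 12, end, 4)); /* 1 */
	/* ... but the old '<' check rejected this legal read. */
	printf("old-style   : %d\n", buf + 12 + 4 < end);	  /* 0 */
	return 0;
}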
static unsigned long __init get_new_step_size(unsigned long step_size)
{
/*
- * Explain why we shift by 5 and why we don't have to worry about
- * 'step_size << 5' overflowing:
- *
- * initial mapped size is PMD_SIZE (2M).
+ * Initial mapped size is PMD_SIZE (2M).
* We can not set step_size to be PUD_SIZE (1G) yet.
 * In the worst case, when we cross the 1G boundary, and
* PG_LEVEL_2M is not set, we will need 1+1+512 pages (2M + 8k)
- * to map 1G range with PTE. Use 5 as shift for now.
+ * to map a 1G range with PTEs. Hence we use one less than the
+ * difference of the page-table level shifts.
*
- * Don't need to worry about overflow, on 32bit, when step_size
- * is 0, round_down() returns 0 for start, and that turns it
- * into 0x100000000ULL.
+ * Don't need to worry about overflow in the top-down case, on 32bit,
+ * when step_size is 0, round_down() returns 0 for start, and that
+ * turns it into 0x100000000ULL.
+ * In the bottom-up case, round_up(x, 0) returns 0 too, which
+ * needs to be taken into account by the code below.
*/
- return step_size << 5;
+ return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
}
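
With the fix above the step grows by the page-table fan-out instead of a magic '<< 5'. A worked sketch of the resulting sequence, assuming x86 values (PAGE_SHIFT 12, PMD_SHIFT 21, so each step multiplies by 2^(21-12-1) = 256):

#include <stdio.h>

#define PAGE_SHIFT	12		/* 4 KiB pages (x86 assumption) */
#define PMD_SHIFT	21		/* 2 MiB PMD entries */
#define PMD_SIZE	(1UL << PMD_SHIFT)

static unsigned long get_new_step_size(unsigned long step_size)
{
	/* One page of PTEs maps 2^(PMD_SHIFT - PAGE_SHIFT) pages; keep
	 * one bit in reserve so the worst-case PTE pages still fit. */
	return step_size << (PMD_SHIFT - PAGE_SHIFT - 1);
}

int main(void)
{
	unsigned long step = PMD_SIZE;

	for (int i = 0; i < 4 && step; i++) {
		printf("step %d: %lu MiB\n", i, step >> 20);
		step = get_new_step_size(step);
	}
	return 0;
}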
/**
unsigned long step_size;
unsigned long addr;
unsigned long mapped_ram_size = 0;
- unsigned long new_mapped_ram_size;
/* xen has a big range reserved near the end of ram, skip it at first. */
addr = memblock_find_in_range(map_start, map_end, PMD_SIZE, PMD_SIZE);
start = map_start;
} else
start = map_start;
- new_mapped_ram_size = init_range_memory_mapping(start,
+ mapped_ram_size += init_range_memory_mapping(start,
last_start);
last_start = start;
min_pfn_mapped = last_start >> PAGE_SHIFT;
- /* only increase step_size after big range get mapped */
- if (new_mapped_ram_size > mapped_ram_size)
+ if (mapped_ram_size >= step_size)
step_size = get_new_step_size(step_size);
- mapped_ram_size += new_mapped_ram_size;
}
if (real_end < map_end)
static void __init memory_map_bottom_up(unsigned long map_start,
unsigned long map_end)
{
- unsigned long next, new_mapped_ram_size, start;
+ unsigned long next, start;
unsigned long mapped_ram_size = 0;
/* step_size need to be small so pgt_buf from BRK could cover it */
unsigned long step_size = PMD_SIZE;
* for page table.
*/
while (start < map_end) {
- if (map_end - start > step_size) {
+ if (step_size && map_end - start > step_size) {
next = round_up(start + 1, step_size);
if (next > map_end)
next = map_end;
- } else
+ } else {
next = map_end;
+ }
- new_mapped_ram_size = init_range_memory_mapping(start, next);
+ mapped_ram_size += init_range_memory_mapping(start, next);
start = next;
- if (new_mapped_ram_size > mapped_ram_size)
+ if (mapped_ram_size >= step_size)
step_size = get_new_step_size(step_size);
- mapped_ram_size += new_mapped_ram_size;
}
}
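
The new "step_size &&" guard is needed because the kernel's round_up() wraps to zero when its alignment argument is zero, which can happen once step_size overflows on 32-bit. A two-line demonstration using the same macro shape as <linux/kernel.h>:

#include <stdio.h>

/* Kernel-style round_up for a power-of-two alignment 'a'. */
#define round_up(x, a)	((((x) - 1) | ((__typeof__(x))(a) - 1)) + 1)

int main(void)
{
	unsigned long x = 0x12345678UL;

	printf("round_up(x, 2MB) = %#lx\n", round_up(x, 1UL << 21));
	/* With a == 0 the mask becomes ~0UL and the sum wraps to 0,
	 * which is why the loop above now tests 'step_size &&' first. */
	printf("round_up(x, 0)   = %#lx\n", round_up(x, 0UL));
	return 0;
}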
extern asmlinkage void sys_ni_syscall(void);
-const sys_call_ptr_t sys_call_table[] __cacheline_aligned = {
+const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
/*
* Smells like a compiler bug -- it doesn't work
* when the & below is removed.
extern void sys_ni_syscall(void);
-const sys_call_ptr_t sys_call_table[] __cacheline_aligned = {
+const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
/*
* Smells like a compiler bug -- it doesn't work
* when the & below is removed.
struct linux_binprm;
-/* Put the vdso above the (randomized) stack with another randomized offset.
- This way there is no hole in the middle of address space.
- To save memory make sure it is still in the same PTE as the stack top.
- This doesn't give that many random bits.
-
- Only used for the 64-bit and x32 vdsos. */
+/*
+ * Put the vdso above the (randomized) stack with another randomized
+ * offset. This way there is no hole in the middle of address space.
+ * To save memory make sure it is still in the same PTE as the stack
+ * top. This doesn't give that many random bits.
+ *
+ * Note that this algorithm is imperfect: the distribution of the vdso
+ * start address within a PMD is biased toward the end.
+ *
+ * Only used for the 64-bit and x32 vdsos.
+ */
static unsigned long vdso_addr(unsigned long start, unsigned len)
{
#ifdef CONFIG_X86_32
#else
unsigned long addr, end;
unsigned offset;
- end = (start + PMD_SIZE - 1) & PMD_MASK;
+
+ /*
+ * Round up the start address. It can start out unaligned as a result
+ * of stack start randomization.
+ */
+ start = PAGE_ALIGN(start);
+
+ /* Round the lowest possible end address up to a PMD boundary. */
+ end = (start + len + PMD_SIZE - 1) & PMD_MASK;
if (end >= TASK_SIZE_MAX)
end = TASK_SIZE_MAX;
end -= len;
- /* This loses some more bits than a modulo, but is cheaper */
- offset = get_random_int() & (PTRS_PER_PTE - 1);
- addr = start + (offset << PAGE_SHIFT);
- if (addr >= end)
- addr = end;
+
+ if (end > start) {
+ offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+ addr = start + (offset << PAGE_SHIFT);
+ } else {
+ addr = start;
+ }
/*
- * page-align it here so that get_unmapped_area doesn't
- * align it wrongfully again to the next page. addr can come in 4K
- * unaligned here as a result of stack start randomization.
+ * Forcibly align the final address in case we have a hardware
+ * issue that requires alignment for performance reasons.
*/
- addr = PAGE_ALIGN(addr);
addr = align_vdso_addr(addr);
return addr;
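
The old vdso placement masked a random value to 0..PTRS_PER_PTE-1 pages and clamped anything past 'end', piling probability onto the last slot; the new code draws uniformly over the valid page slots. A userspace sketch of the two distributions, with rand() standing in for get_random_int() and deliberately small hypothetical bounds:

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SHIFT	12
#define SLOTS		8		/* pretend only 8 page slots fit */
#define PTRS_PER_PTE	512

int main(void)
{
	unsigned long start = 0, end = (SLOTS - 1) << PAGE_SHIFT;
	unsigned long old_hist[SLOTS] = { 0 }, new_hist[SLOTS] = { 0 };

	srand(1);
	for (int i = 0; i < 100000; i++) {
		unsigned int r = (unsigned int)rand();
		unsigned long addr, off;

		/* Old: mask to 0..511 pages, clamp to end -> biased. */
		addr = start + ((unsigned long)(r & (PTRS_PER_PTE - 1))
				<< PAGE_SHIFT);
		if (addr >= end)
			addr = end;
		old_hist[addr >> PAGE_SHIFT]++;

		/* New: uniform over the number of valid slots. */
		off = r % (((end - start) >> PAGE_SHIFT) + 1);
		new_hist[(start + (off << PAGE_SHIFT)) >> PAGE_SHIFT]++;
	}
	for (int s = 0; s < SLOTS; s++)
		printf("slot %d: old %6lu  new %6lu\n",
		       s, old_hist[s], new_hist[s]);
	return 0;
}

Running this shows nearly all "old" samples landing in the final slot while the "new" histogram is flat, which is the bias the comment block above describes.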
#include <xen/interface/physdev.h>
#include <xen/interface/vcpu.h>
#include <xen/interface/memory.h>
+#include <xen/interface/nmi.h>
#include <xen/interface/xen-mca.h>
#include <xen/features.h>
#include <xen/page.h>
#include <asm/reboot.h>
#include <asm/stackprotector.h>
#include <asm/hypervisor.h>
+#include <asm/mach_traps.h>
#include <asm/mwait.h>
#include <asm/pci_x86.h>
#include <asm/pat.h>
.emergency_restart = xen_emergency_restart,
};
+static unsigned char xen_get_nmi_reason(void)
+{
+ unsigned char reason = 0;
+
+ /* Construct a value which looks like it came from port 0x61. */
+ if (test_bit(_XEN_NMIREASON_io_error,
+ &HYPERVISOR_shared_info->arch.nmi_reason))
+ reason |= NMI_REASON_IOCHK;
+ if (test_bit(_XEN_NMIREASON_pci_serr,
+ &HYPERVISOR_shared_info->arch.nmi_reason))
+ reason |= NMI_REASON_SERR;
+
+ return reason;
+}
+
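+A standalone sketch of the bit construction xen_get_nmi_reason() performs, assuming the conventional x86 values NMI_REASON_IOCHK = 0x40 and NMI_REASON_SERR = 0x80 for the port 0x61 status bits:

#include <stdio.h>

#define NMI_REASON_IOCHK 0x40	/* port 0x61 bit 6 (assumed layout) */
#define NMI_REASON_SERR	 0x80	/* port 0x61 bit 7 (assumed layout) */

/* Build a fake port-0x61 value from two hypervisor-provided flags. */
static unsigned char fake_nmi_reason(int io_error, int pci_serr)
{
	unsigned char reason = 0;

	if (io_error)
		reason |= NMI_REASON_IOCHK;
	if (pci_serr)
		reason |= NMI_REASON_SERR;
	return reason;
}

int main(void)
{
	printf("io_error only : %#x\n", fake_nmi_reason(1, 0));
	printf("both          : %#x\n", fake_nmi_reason(1, 1));
	return 0;
}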
static void __init xen_boot_params_init_edd(void)
{
#if IS_ENABLED(CONFIG_EDD)
pv_info = xen_info;
pv_init_ops = xen_init_ops;
pv_apic_ops = xen_apic_ops;
- if (!xen_pvh_domain())
+ if (!xen_pvh_domain()) {
pv_cpu_ops = xen_cpu_ops;
+ x86_platform.get_nmi_reason = xen_get_nmi_reason;
+ }
+
if (xen_feature(XENFEAT_auto_translated_physmap))
x86_init.resources.memory_setup = xen_auto_xlated_memory_setup;
else
return (void *)__get_free_page(GFP_KERNEL | __GFP_REPEAT);
}
-/* Only to be called in case of a race for a page just allocated! */
-static void free_p2m_page(void *p)
+static void __ref free_p2m_page(void *p)
{
- BUG_ON(!slab_is_available());
+ if (unlikely(!slab_is_available())) {
+ free_bootmem((unsigned long)p, PAGE_SIZE);
+ return;
+ }
+
free_page((unsigned long)p);
}
p2m_missing_pte : p2m_identity_pte;
for (i = 0; i < PMDS_PER_MID_PAGE; i++) {
pmdp = populate_extra_pmd(
- (unsigned long)(p2m + pfn + i * PTRS_PER_PTE));
+ (unsigned long)(p2m + pfn) + i * PMD_SIZE);
set_pmd(pmdp, __pmd(__pa(ptep) | _KERNPG_TABLE));
}
}
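
The one-line change above is a pointer-arithmetic fix: "p2m + pfn + i * PTRS_PER_PTE" advances in sizeof(unsigned long) units (4 KiB per step), while each pmd actually covers PMD_SIZE of virtual space, so the offset must be added in bytes after the cast. A small demonstration, using assumed x86-64 constants and an arbitrary base address:

#include <stdio.h>

#define PMD_SIZE	(1UL << 21)	/* assumed x86-64 value */
#define PTRS_PER_PTE	512UL

int main(void)
{
	unsigned long *p2m = (unsigned long *)0x40000000UL;
	unsigned long pfn = 0;
	unsigned long i = 1;

	/* Buggy: pointer arithmetic scales by sizeof(unsigned long),
	 * so each step advances only 512 * 8 = 4 KiB ... */
	unsigned long buggy = (unsigned long)(p2m + pfn + i * PTRS_PER_PTE);

	/* ... while each pmd covers PMD_SIZE (2 MiB) of virtual space,
	 * so the offset has to be added in bytes, after the cast. */
	unsigned long fixed = (unsigned long)(p2m + pfn) + i * PMD_SIZE;

	printf("buggy %#lx, fixed %#lx\n", buggy, fixed);
	return 0;
}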
 * a new pmd is to replace p2m_missing_pte or p2m_identity_pte by an individual
* pmd. In case of PAE/x86-32 there are multiple pmds to allocate!
*/
-static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *ptep, pte_t *pte_pg)
+static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *pte_pg)
{
pte_t *ptechk;
- pte_t *pteret = ptep;
pte_t *pte_newpg[PMDS_PER_MID_PAGE];
pmd_t *pmdp;
unsigned int level;
if (ptechk == pte_pg) {
set_pmd(pmdp,
__pmd(__pa(pte_newpg[i]) | _KERNPG_TABLE));
- if (vaddr == (addr & ~(PMD_SIZE - 1)))
- pteret = pte_offset_kernel(pmdp, addr);
pte_newpg[i] = NULL;
}
vaddr += PMD_SIZE;
}
- return pteret;
+ return lookup_address(addr, &level);
}
/*
if (pte_pg == p2m_missing_pte || pte_pg == p2m_identity_pte) {
/* PMD level is missing, allocate a new one */
- ptep = alloc_p2m_pmd(addr, ptep, pte_pg);
+ ptep = alloc_p2m_pmd(addr, pte_pg);
if (!ptep)
return false;
}
unsigned long __ref xen_chk_extra_mem(unsigned long pfn)
{
int i;
- unsigned long addr = PFN_PHYS(pfn);
+ phys_addr_t addr = PFN_PHYS(pfn);
for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
if (addr >= xen_extra_mem[i].start &&
int i;
for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+ if (!xen_extra_mem[i].size)
+ continue;
pfn_s = PFN_DOWN(xen_extra_mem[i].start);
pfn_e = PFN_UP(xen_extra_mem[i].start + xen_extra_mem[i].size);
for (pfn = pfn_s; pfn < pfn_e; pfn++)
* as a fallback if the remapping fails.
*/
static void __init xen_set_identity_and_release_chunk(unsigned long start_pfn,
- unsigned long end_pfn, unsigned long nr_pages, unsigned long *identity,
- unsigned long *released)
+ unsigned long end_pfn, unsigned long nr_pages, unsigned long *released)
{
- unsigned long len = 0;
unsigned long pfn, end;
int ret;
WARN_ON(start_pfn > end_pfn);
+ /* Release pages first. */
end = min(end_pfn, nr_pages);
for (pfn = start_pfn; pfn < end; pfn++) {
unsigned long mfn = pfn_to_mfn(pfn);
WARN(ret != 1, "Failed to release pfn %lx err=%d\n", pfn, ret);
if (ret == 1) {
+ (*released)++;
if (!__set_phys_to_machine(pfn, INVALID_P2M_ENTRY))
break;
- len++;
} else
break;
}
- /* Need to release pages first */
- *released += len;
- *identity += set_phys_range_identity(start_pfn, end_pfn);
+ set_phys_range_identity(start_pfn, end_pfn);
}
/*
}
/* Update kernel mapping, but not for highmem. */
- if ((pfn << PAGE_SHIFT) >= __pa(high_memory))
+ if (pfn >= PFN_UP(__pa(high_memory - 1)))
return;
if (HYPERVISOR_update_va_mapping((unsigned long)__va(pfn << PAGE_SHIFT),
unsigned long ident_pfn_iter, remap_pfn_iter;
unsigned long ident_end_pfn = start_pfn + size;
unsigned long left = size;
- unsigned long ident_cnt = 0;
unsigned int i, chunk;
WARN_ON(size == 0);
xen_remap_mfn = mfn;
/* Set identity map */
- ident_cnt += set_phys_range_identity(ident_pfn_iter,
- ident_pfn_iter + chunk);
+ set_phys_range_identity(ident_pfn_iter, ident_pfn_iter + chunk);
left -= chunk;
}
static unsigned long __init xen_set_identity_and_remap_chunk(
const struct e820entry *list, size_t map_size, unsigned long start_pfn,
unsigned long end_pfn, unsigned long nr_pages, unsigned long remap_pfn,
- unsigned long *identity, unsigned long *released)
+ unsigned long *released, unsigned long *remapped)
{
unsigned long pfn;
unsigned long i = 0;
/* Do not remap pages beyond the current allocation */
if (cur_pfn >= nr_pages) {
/* Identity map remaining pages */
- *identity += set_phys_range_identity(cur_pfn,
- cur_pfn + size);
+ set_phys_range_identity(cur_pfn, cur_pfn + size);
break;
}
if (cur_pfn + size > nr_pages)
if (!remap_range_size) {
pr_warning("Unable to find available pfn range, not remapping identity pages\n");
xen_set_identity_and_release_chunk(cur_pfn,
- cur_pfn + left, nr_pages, identity, released);
+ cur_pfn + left, nr_pages, released);
break;
}
/* Adjust size to fit in current e820 RAM region */
/* Update variables to reflect new mappings. */
i += size;
remap_pfn += size;
- *identity += size;
+ *remapped += size;
}
/*
static void __init xen_set_identity_and_remap(
const struct e820entry *list, size_t map_size, unsigned long nr_pages,
- unsigned long *released)
+ unsigned long *released, unsigned long *remapped)
{
phys_addr_t start = 0;
- unsigned long identity = 0;
unsigned long last_pfn = nr_pages;
const struct e820entry *entry;
unsigned long num_released = 0;
+ unsigned long num_remapped = 0;
int i;
/*
last_pfn = xen_set_identity_and_remap_chunk(
list, map_size, start_pfn,
end_pfn, nr_pages, last_pfn,
- &identity, &num_released);
+ &num_released, &num_remapped);
start = end;
}
}
*released = num_released;
+ *remapped = num_remapped;
- pr_info("Set %ld page(s) to 1-1 mapping\n", identity);
pr_info("Released %ld page(s)\n", num_released);
}
struct xen_memory_map memmap;
unsigned long max_pages;
unsigned long extra_pages = 0;
+ unsigned long remapped_pages;
int i;
int op;
* underlying RAM.
*/
xen_set_identity_and_remap(map, memmap.nr_entries, max_pfn,
- &xen_released_pages);
+ &xen_released_pages, &remapped_pages);
extra_pages += xen_released_pages;
+ extra_pages += remapped_pages;
/*
* Clamp the amount of extra memory to a EXTRA_MEM_RATIO
struct xen_clock_event_device {
struct clock_event_device evt;
- char *name;
+ char name[16];
};
static DEFINE_PER_CPU(struct xen_clock_event_device, xen_clock_events) = { .evt.irq = -1 };
if (evt->irq >= 0) {
unbind_from_irqhandler(evt->irq, NULL);
evt->irq = -1;
- kfree(per_cpu(xen_clock_events, cpu).name);
- per_cpu(xen_clock_events, cpu).name = NULL;
}
}
void xen_setup_timer(int cpu)
{
- char *name;
- struct clock_event_device *evt;
+ struct xen_clock_event_device *xevt = &per_cpu(xen_clock_events, cpu);
+ struct clock_event_device *evt = &xevt->evt;
int irq;
- evt = &per_cpu(xen_clock_events, cpu).evt;
WARN(evt->irq >= 0, "IRQ%d for CPU%d is already allocated\n", evt->irq, cpu);
if (evt->irq >= 0)
xen_teardown_timer(cpu);
printk(KERN_INFO "installing Xen timer for CPU %d\n", cpu);
- name = kasprintf(GFP_KERNEL, "timer%d", cpu);
- if (!name)
- name = "<timer kasprintf failed>";
+ snprintf(xevt->name, sizeof(xevt->name), "timer%d", cpu);
irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu, xen_timer_interrupt,
IRQF_PERCPU|IRQF_NOBALANCING|IRQF_TIMER|
IRQF_FORCE_RESUME|IRQF_EARLY_RESUME,
- name, NULL);
+ xevt->name, NULL);
(void)xen_set_irq_priority(irq, XEN_IRQ_PRIORITY_MAX);
memcpy(evt, xen_clockevent, sizeof(*evt));
evt->cpumask = cpumask_of(cpu);
evt->irq = irq;
- per_cpu(xen_clock_events, cpu).name = name;
}
void xen_setup_cpu_clockevents(void)
{
- BUG_ON(preemptible());
-
clockevents_register_device(this_cpu_ptr(&xen_clock_events.evt));
}
}
EXPORT_SYMBOL_GPL(blk_queue_bypass_end);
+void blk_set_queue_dying(struct request_queue *q)
+{
+ queue_flag_set_unlocked(QUEUE_FLAG_DYING, q);
+
+ if (q->mq_ops)
+ blk_mq_wake_waiters(q);
+ else {
+ struct request_list *rl;
+
+ blk_queue_for_each_rl(rl, q) {
+ if (rl->rq_pool) {
+ wake_up(&rl->wait[BLK_RW_SYNC]);
+ wake_up(&rl->wait[BLK_RW_ASYNC]);
+ }
+ }
+ }
+}
+EXPORT_SYMBOL_GPL(blk_set_queue_dying);
+
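+A sketch of how a driver's surprise-removal path might combine the helpers exported in this series; this is illustrative kernel-style code under the assumption of a hypothetical my_dev structure, not something taken from the patch itself:

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

struct my_dev {
	struct request_queue *queue;
};

static void my_surprise_remove(struct my_dev *dev)
{
	/* Mark the queue dying and wake every tag/allocation waiter. */
	blk_set_queue_dying(dev->queue);

	/* Stop requeue processing, then fail parked requests with -EIO. */
	blk_mq_cancel_requeue_work(dev->queue);
	blk_mq_abort_requeue_list(dev->queue);

	/* Teardown can now proceed without waiters stuck forever. */
	blk_cleanup_queue(dev->queue);
}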
/**
* blk_cleanup_queue - shutdown a request queue
* @q: request queue to shutdown
/* mark @q DYING, no new request or merges will be allowed afterwards */
mutex_lock(&q->sysfs_lock);
- queue_flag_set_unlocked(QUEUE_FLAG_DYING, q);
+ blk_set_queue_dying(q);
spin_lock_irq(lock);
/*
}
/*
- * Wakeup all potentially sleeping on normal (non-reserved) tags
+ * Wake up all tasks potentially sleeping on tags
*/
-static void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags)
+void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool include_reserve)
{
struct blk_mq_bitmap_tags *bt;
int i, wake_index;
wake_index = bt_index_inc(wake_index);
}
+
+ if (include_reserve) {
+ bt = &tags->breserved_tags;
+ if (waitqueue_active(&bt->bs[0].wait))
+ wake_up(&bt->bs[0].wait);
+ }
}
/*
atomic_dec(&tags->active_queues);
- blk_mq_tag_wakeup_all(tags);
+ blk_mq_tag_wakeup_all(tags, false);
}
/*
* static and should never need resizing.
*/
bt_update_count(&tags->bitmap_tags, tdepth);
- blk_mq_tag_wakeup_all(tags);
+ blk_mq_tag_wakeup_all(tags, false);
return 0;
}
extern ssize_t blk_mq_tag_sysfs_show(struct blk_mq_tags *tags, char *page);
extern void blk_mq_tag_init_last_tag(struct blk_mq_tags *tags, unsigned int *last_tag);
extern int blk_mq_tag_update_depth(struct blk_mq_tags *tags, unsigned int depth);
+extern void blk_mq_tag_wakeup_all(struct blk_mq_tags *tags, bool);
enum {
BLK_MQ_TAG_CACHE_MIN = 1,
wake_up_all(&q->mq_freeze_wq);
}
-static void blk_mq_freeze_queue_start(struct request_queue *q)
+void blk_mq_freeze_queue_start(struct request_queue *q)
{
bool freeze;
blk_mq_run_queues(q, false);
}
}
+EXPORT_SYMBOL_GPL(blk_mq_freeze_queue_start);
static void blk_mq_freeze_queue_wait(struct request_queue *q)
{
blk_mq_freeze_queue_wait(q);
}
-static void blk_mq_unfreeze_queue(struct request_queue *q)
+void blk_mq_unfreeze_queue(struct request_queue *q)
{
bool wake;
wake_up_all(&q->mq_freeze_wq);
}
}
+EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+
+void blk_mq_wake_waiters(struct request_queue *q)
+{
+ struct blk_mq_hw_ctx *hctx;
+ unsigned int i;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ if (blk_mq_hw_queue_mapped(hctx))
+ blk_mq_tag_wakeup_all(hctx->tags, true);
+
+ /*
+ * If we are called because the queue has now been marked as
+ * dying, we need to ensure that processes currently waiting on
+ * the queue are notified as well.
+ */
+ wake_up_all(&q->mq_freeze_wq);
+}
bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
{
ctx = alloc_data.ctx;
}
blk_mq_put_ctx(ctx);
- if (!rq)
+ if (!rq) {
+ blk_mq_queue_exit(q);
return ERR_PTR(-EWOULDBLOCK);
+ }
return rq;
}
EXPORT_SYMBOL(blk_mq_alloc_request);
}
EXPORT_SYMBOL(blk_mq_complete_request);
+int blk_mq_request_started(struct request *rq)
+{
+ return test_bit(REQ_ATOM_STARTED, &rq->atomic_flags);
+}
+EXPORT_SYMBOL_GPL(blk_mq_request_started);
+
void blk_mq_start_request(struct request *rq)
{
struct request_queue *q = rq->q;
}
EXPORT_SYMBOL(blk_mq_add_to_requeue_list);
+void blk_mq_cancel_requeue_work(struct request_queue *q)
+{
+ cancel_work_sync(&q->requeue_work);
+}
+EXPORT_SYMBOL_GPL(blk_mq_cancel_requeue_work);
+
void blk_mq_kick_requeue_list(struct request_queue *q)
{
kblockd_schedule_work(&q->requeue_work);
}
EXPORT_SYMBOL(blk_mq_kick_requeue_list);
+void blk_mq_abort_requeue_list(struct request_queue *q)
+{
+ unsigned long flags;
+ LIST_HEAD(rq_list);
+
+ spin_lock_irqsave(&q->requeue_lock, flags);
+ list_splice_init(&q->requeue_list, &rq_list);
+ spin_unlock_irqrestore(&q->requeue_lock, flags);
+
+ while (!list_empty(&rq_list)) {
+ struct request *rq;
+
+ rq = list_first_entry(&rq_list, struct request, queuelist);
+ list_del_init(&rq->queuelist);
+ rq->errors = -EIO;
+ blk_mq_end_request(rq, rq->errors);
+ }
+}
+EXPORT_SYMBOL(blk_mq_abort_requeue_list);
+
static inline bool is_flush_request(struct request *rq,
struct blk_flush_queue *fq, unsigned int tag)
{
break;
}
}
-
+
static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
struct request *rq, void *priv, bool reserved)
{
struct blk_mq_timeout_data *data = priv;
- if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
+ if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
+ /*
+ * If a request wasn't started before the queue was
+ * marked dying, kill it here or it'll go unnoticed.
+ */
+ if (unlikely(blk_queue_dying(rq->q))) {
+ rq->errors = -EIO;
+ blk_mq_complete_request(rq);
+ }
+ return;
+ }
+ if (rq->cmd_flags & REQ_NO_TIMEOUT)
return;
if (time_after_eq(jiffies, rq->deadline)) {
hctx->queue = q;
hctx->queue_num = hctx_idx;
hctx->flags = set->flags;
- hctx->cmd_size = set->cmd_size;
blk_mq_init_cpu_notifier(&hctx->cpu_notifier,
blk_mq_hctx_notify, hctx);
void blk_mq_clone_flush_request(struct request *flush_rq,
struct request *orig_rq);
int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr);
+void blk_mq_wake_waiters(struct request_queue *q);
/*
* CPU hotplug helpers
struct request_queue *q = req->q;
unsigned long expiry;
+ if (req->cmd_flags & REQ_NO_TIMEOUT)
+ return;
+
/* blk-mq has its own handler, so we don't need ->rq_timed_out_fn */
if (!q->mq_ops && !q->rq_timed_out_fn)
return;
{
struct af_alg_completion *completion = req->data;
+ if (err == -EINPROGRESS)
+ return;
+
completion->err = err;
complete(&completion->completion);
}
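
The two added lines make the callback ignore the -EINPROGRESS notification a driver may issue when a backlogged request finally starts executing; only the real completion should wake the waiter. A userspace sketch of the same filtering, with the crypto types stubbed out:

#include <stdio.h>
#include <errno.h>

/* Stub standing in for af_alg_completion. */
struct completion_stub {
	int err;
	int done;
};

/* Callback pattern: a request that was backlogged first reports
 * -EINPROGRESS ("now running"); only the second call is final. */
static void complete_cb(struct completion_stub *c, int err)
{
	if (err == -EINPROGRESS)
		return;		/* not done yet, keep waiting */
	c->err = err;
	c->done = 1;
}

int main(void)
{
	struct completion_stub c = { 0, 0 };

	complete_cb(&c, -EINPROGRESS);
	printf("after EINPROGRESS: done=%d\n", c.done);	/* still 0 */
	complete_cb(&c, 0);
	printf("after final     : done=%d err=%d\n", c.done, c.err);
	return 0;
}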
obj-y += tty/
obj-y += char/
-# gpu/ comes after char for AGP vs DRM startup
+# iommu/ comes before gpu/ as GPUs use iommu controllers
+obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+# gpu/ comes after char for AGP vs DRM startup and after iommu
obj-y += gpu/
obj-$(CONFIG_CONNECTOR) += connector/
obj-$(CONFIG_MAILBOX) += mailbox/
obj-$(CONFIG_HWSPINLOCK) += hwspinlock/
-obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
obj-$(CONFIG_REMOTEPROC) += remoteproc/
obj-$(CONFIG_RPMSG) += rpmsg/
acpi_status status;
int ret;
- if (pr->apic_id == -1)
+ if (pr->phys_id == -1)
return -ENODEV;
status = acpi_evaluate_integer(pr->handle, "_STA", NULL, &sta);
cpu_maps_update_begin();
cpu_hotplug_begin();
- ret = acpi_map_lsapic(pr->handle, pr->apic_id, &pr->id);
+ ret = acpi_map_cpu(pr->handle, pr->phys_id, &pr->id);
if (ret)
goto out;
ret = arch_register_cpu(pr->id);
if (ret) {
- acpi_unmap_lsapic(pr->id);
+ acpi_unmap_cpu(pr->id);
goto out;
}
union acpi_object object = { 0 };
struct acpi_buffer buffer = { sizeof(union acpi_object), &object };
struct acpi_processor *pr = acpi_driver_data(device);
- int apic_id, cpu_index, device_declaration = 0;
+ int phys_id, cpu_index, device_declaration = 0;
acpi_status status = AE_OK;
static int cpu0_initialized;
unsigned long long value;
pr->acpi_id = value;
}
- apic_id = acpi_get_apicid(pr->handle, device_declaration, pr->acpi_id);
- if (apic_id < 0)
- acpi_handle_debug(pr->handle, "failed to get CPU APIC ID.\n");
- pr->apic_id = apic_id;
+ phys_id = acpi_get_phys_id(pr->handle, device_declaration, pr->acpi_id);
+ if (phys_id < 0)
+ acpi_handle_debug(pr->handle, "failed to get CPU physical ID.\n");
+ pr->phys_id = phys_id;
- cpu_index = acpi_map_cpuid(pr->apic_id, pr->acpi_id);
+ cpu_index = acpi_map_cpuid(pr->phys_id, pr->acpi_id);
if (!cpu0_initialized && !acpi_has_cpu_in_madt()) {
cpu0_initialized = 1;
- /* Handle UP system running SMP kernel, with no LAPIC in MADT */
+ /*
+ * Handle UP system running SMP kernel, with no CPU
+ * entry in MADT
+ */
if ((cpu_index == -1) && (num_online_cpus() == 1))
cpu_index = 0;
}
/* Remove the CPU. */
arch_unregister_cpu(pr->id);
- acpi_unmap_lsapic(pr->id);
+ acpi_unmap_cpu(pr->id);
cpu_hotplug_done();
cpu_maps_update_done();
device->power.state = ACPI_STATE_UNKNOWN;
if (!acpi_device_is_present(device))
- return 0;
+ return -ENXIO;
result = acpi_device_get_power(device, &state);
if (result)
struct acpi_genl_event *event;
void *msg_header;
int size;
- int result;
/* allocate memory */
size = nla_total_size(sizeof(struct acpi_genl_event)) +
event->data = data;
/* send multicast genetlink message */
- result = genlmsg_end(skb, msg_header);
- if (result < 0) {
- nlmsg_free(skb);
- return result;
- }
+ genlmsg_end(skb, msg_header);
genlmsg_multicast(&acpi_event_genl_family, skb, 0, 0, GFP_ATOMIC);
return 0;
#include "internal.h"
-#define DO_ENUMERATION 0x01
+#define INT3401_DEVICE 0x01
static const struct acpi_device_id int340x_thermal_device_ids[] = {
- {"INT3400", DO_ENUMERATION },
- {"INT3401"},
+ {"INT3400"},
+ {"INT3401", INT3401_DEVICE},
{"INT3402"},
{"INT3403"},
{"INT3404"},
const struct acpi_device_id *id)
{
#if defined(CONFIG_INT340X_THERMAL) || defined(CONFIG_INT340X_THERMAL_MODULE)
- if (id->driver_data == DO_ENUMERATION)
+ acpi_create_platform_device(adev);
+#elif defined(INTEL_SOC_DTS_THERMAL) || defined(INTEL_SOC_DTS_THERMAL_MODULE)
+ /* Intel SoC DTS thermal driver needs INT3401 to set IRQ descriptor */
+ if (id->driver_data == INT3401_DEVICE)
acpi_create_platform_device(adev);
#endif
return 1;
unsigned long madt_end, entry;
static struct acpi_table_madt *madt;
static int read_madt;
- int apic_id = -1;
+ int phys_id = -1; /* CPU hardware ID */
if (!read_madt) {
if (ACPI_FAILURE(acpi_get_table(ACPI_SIG_MADT, 0,
}
if (!madt)
- return apic_id;
+ return phys_id;
entry = (unsigned long)madt;
madt_end = entry + madt->header.length;
struct acpi_subtable_header *header =
(struct acpi_subtable_header *)entry;
if (header->type == ACPI_MADT_TYPE_LOCAL_APIC) {
- if (!map_lapic_id(header, acpi_id, &apic_id))
+ if (!map_lapic_id(header, acpi_id, &phys_id))
break;
} else if (header->type == ACPI_MADT_TYPE_LOCAL_X2APIC) {
- if (!map_x2apic_id(header, type, acpi_id, &apic_id))
+ if (!map_x2apic_id(header, type, acpi_id, &phys_id))
break;
} else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC) {
- if (!map_lsapic_id(header, type, acpi_id, &apic_id))
+ if (!map_lsapic_id(header, type, acpi_id, &phys_id))
break;
}
entry += header->length;
}
- return apic_id;
+ return phys_id;
}
static int map_mat_entry(acpi_handle handle, int type, u32 acpi_id)
struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj;
struct acpi_subtable_header *header;
- int apic_id = -1;
+ int phys_id = -1;
if (ACPI_FAILURE(acpi_evaluate_object(handle, "_MAT", NULL, &buffer)))
goto exit;
header = (struct acpi_subtable_header *)obj->buffer.pointer;
if (header->type == ACPI_MADT_TYPE_LOCAL_APIC)
- map_lapic_id(header, acpi_id, &apic_id);
+ map_lapic_id(header, acpi_id, &phys_id);
else if (header->type == ACPI_MADT_TYPE_LOCAL_SAPIC)
- map_lsapic_id(header, type, acpi_id, &apic_id);
+ map_lsapic_id(header, type, acpi_id, &phys_id);
else if (header->type == ACPI_MADT_TYPE_LOCAL_X2APIC)
- map_x2apic_id(header, type, acpi_id, &apic_id);
+ map_x2apic_id(header, type, acpi_id, &phys_id);
exit:
kfree(buffer.pointer);
- return apic_id;
+ return phys_id;
}
-int acpi_get_apicid(acpi_handle handle, int type, u32 acpi_id)
+int acpi_get_phys_id(acpi_handle handle, int type, u32 acpi_id)
{
- int apic_id;
+ int phys_id;
- apic_id = map_mat_entry(handle, type, acpi_id);
- if (apic_id == -1)
- apic_id = map_madt_entry(type, acpi_id);
+ phys_id = map_mat_entry(handle, type, acpi_id);
+ if (phys_id == -1)
+ phys_id = map_madt_entry(type, acpi_id);
- return apic_id;
+ return phys_id;
}
-int acpi_map_cpuid(int apic_id, u32 acpi_id)
+int acpi_map_cpuid(int phys_id, u32 acpi_id)
{
#ifdef CONFIG_SMP
int i;
#endif
- if (apic_id == -1) {
+ if (phys_id == -1) {
/*
 * On a UP processor, there is no _MAT or MADT table.
- * So above apic_id is always set to -1.
+ * So the phys_id above is always set to -1.
*
* BIOS may define multiple CPU handles even for UP processor.
* For example,
* Processor (CPU3, 0x03, 0x00000410, 0x06) {}
* }
*
- * Ignores apic_id and always returns 0 for the processor
+ * Ignores phys_id and always returns 0 for the processor
* handle with acpi id 0 if nr_cpu_ids is 1.
* This should be the case if SMP tables are not found.
 * Return -1 for other CPUs' handles.
if (nr_cpu_ids <= 1 && acpi_id == 0)
return acpi_id;
else
- return apic_id;
+ return phys_id;
}
#ifdef CONFIG_SMP
for_each_possible_cpu(i) {
- if (cpu_physical_id(i) == apic_id)
+ if (cpu_physical_id(i) == phys_id)
return i;
}
#else
/* In UP kernel, only processor 0 is valid */
- if (apic_id == 0)
- return apic_id;
+ if (phys_id == 0)
+ return phys_id;
#endif
return -1;
}
int acpi_get_cpuid(acpi_handle handle, int type, u32 acpi_id)
{
- int apic_id;
+ int phys_id;
- apic_id = acpi_get_apicid(handle, type, acpi_id);
+ phys_id = acpi_get_phys_id(handle, type, acpi_id);
- return acpi_map_cpuid(apic_id, acpi_id);
+ return acpi_map_cpuid(phys_id, acpi_id);
}
EXPORT_SYMBOL_GPL(acpi_get_cpuid);
if (device->wakeup.flags.valid)
acpi_power_resources_list_free(&device->wakeup.resources);
- if (!device->flags.power_manageable)
+ if (!device->power.flags.power_resources)
return;
for (i = ACPI_STATE_D0; i <= ACPI_STATE_D3_HOT; i++) {
device->power.flags.power_resources)
device->power.states[ACPI_STATE_D3_COLD].flags.os_accessible = 1;
- if (acpi_bus_init_power(device)) {
- acpi_free_power_resources_lists(device);
+ if (acpi_bus_init_power(device))
device->flags.power_manageable = 0;
- }
}
static void acpi_bus_get_flags(struct acpi_device *device)
/* Skip devices that are not present. */
if (!acpi_device_is_present(device)) {
device->flags.visited = false;
+ device->flags.power_manageable = 0;
return;
}
if (device->handler)
goto ok;
if (!device->flags.initialized) {
- acpi_bus_update_power(device, NULL);
+ device->flags.power_manageable =
+ device->power.states[ACPI_STATE_D0].flags.valid;
+ if (acpi_bus_init_power(device))
+ device->flags.power_manageable = 0;
+
device->flags.initialized = true;
}
device->flags.visited = false;
DMI_MATCH(DMI_PRODUCT_NAME, "370R4E/370R4V/370R5E/3570RE/370R5V"),
},
},
+
+ {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1163574 */
+ .callback = video_disable_native_backlight,
+ .ident = "Dell XPS15 L521X",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "XPS L521X"),
+ },
+ },
{}
};
eni_vcc = ENI_VCC(vcc);
paddr = 0; /* GCC, shut up */
if (skb) {
- paddr = pci_map_single(eni_dev->pci_dev,skb->data,skb->len,
- PCI_DMA_FROMDEVICE);
- if (pci_dma_mapping_error(eni_dev->pci_dev, paddr))
+ paddr = dma_map_single(&eni_dev->pci_dev->dev,skb->data,skb->len,
+ DMA_FROM_DEVICE);
+ if (dma_mapping_error(&eni_dev->pci_dev->dev, paddr))
goto dma_map_error;
ENI_PRV_PADDR(skb) = paddr;
if (paddr & 3)
trouble:
if (paddr)
- pci_unmap_single(eni_dev->pci_dev,paddr,skb->len,
- PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&eni_dev->pci_dev->dev,paddr,skb->len,
+ DMA_FROM_DEVICE);
dma_map_error:
if (skb) dev_kfree_skb_irq(skb);
return -1;
}
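
The ATM-driver hunks from here on are one mechanical conversion: the pci_* DMA wrappers (pci_map_single, pci_alloc_consistent, pci_pool_*) become the generic DMA API operating on &pdev->dev, with the PCI_DMA_* directions mapped to DMA_* and probe paths gaining dma_set_mask_and_coherent(). A condensed kernel-style sketch of the new streaming-mapping pattern, with pdev assumed to be an already-enabled struct pci_dev:

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Before: pci_map_single(pdev, buf, len, PCI_DMA_FROMDEVICE);
 * After, on the underlying struct device: */
static dma_addr_t map_rx_buffer(struct pci_dev *pdev, void *buf, size_t len)
{
	dma_addr_t paddr;

	paddr = dma_map_single(&pdev->dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(&pdev->dev, paddr))
		return 0;	/* 0 doubles as "failed" in this sketch */
	return paddr;
}

static void unmap_rx_buffer(struct pci_dev *pdev, dma_addr_t paddr,
			    size_t len)
{
	dma_unmap_single(&pdev->dev, paddr, len, DMA_FROM_DEVICE);
}

The coherent and pool allocators follow the same substitution (dma_alloc_coherent, dma_pool_create, dma_pool_alloc), with the generic calls additionally taking an explicit GFP_KERNEL and no longer needing GFP_DMA.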
eni_vcc->rxing--;
eni_vcc->rx_pos = ENI_PRV_POS(skb) & (eni_vcc->words-1);
- pci_unmap_single(eni_dev->pci_dev,ENI_PRV_PADDR(skb),skb->len,
- PCI_DMA_TODEVICE);
+ dma_unmap_single(&eni_dev->pci_dev->dev,ENI_PRV_PADDR(skb),skb->len,
+ DMA_TO_DEVICE);
if (!skb->len) dev_kfree_skb_irq(skb);
else {
EVENT("pushing (len=%ld)\n",skb->len,0);
vcc->dev->number);
return enq_jam;
}
- paddr = pci_map_single(eni_dev->pci_dev,skb->data,skb->len,
- PCI_DMA_TODEVICE);
+ paddr = dma_map_single(&eni_dev->pci_dev->dev,skb->data,skb->len,
+ DMA_TO_DEVICE);
ENI_PRV_PADDR(skb) = paddr;
/* prepare DMA queue entries */
j = 0;
break;
}
ENI_VCC(vcc)->txing -= ENI_PRV_SIZE(skb);
- pci_unmap_single(eni_dev->pci_dev,ENI_PRV_PADDR(skb),skb->len,
- PCI_DMA_TODEVICE);
+ dma_unmap_single(&eni_dev->pci_dev->dev,ENI_PRV_PADDR(skb),skb->len,
+ DMA_TO_DEVICE);
if (vcc->pop) vcc->pop(vcc,skb);
else dev_kfree_skb_irq(skb);
atomic_inc(&vcc->stats->tx);
if (rc < 0)
goto out;
+ rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
+ if (rc < 0)
+ goto out;
+
rc = -ENOMEM;
eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);
if (!eni_dev)
goto err_disable;
zero = &eni_dev->zero;
- zero->addr = pci_alloc_consistent(pci_dev, ENI_ZEROES_SIZE, &zero->dma);
+ zero->addr = dma_alloc_coherent(&pci_dev->dev,
+ ENI_ZEROES_SIZE, &zero->dma, GFP_KERNEL);
if (!zero->addr)
goto err_kfree;
err_unregister:
atm_dev_deregister(dev);
err_free_consistent:
- pci_free_consistent(pci_dev, ENI_ZEROES_SIZE, zero->addr, zero->dma);
+ dma_free_coherent(&pci_dev->dev, ENI_ZEROES_SIZE, zero->addr, zero->dma);
err_kfree:
kfree(eni_dev);
err_disable:
eni_do_release(dev);
atm_dev_deregister(dev);
- pci_free_consistent(pdev, ENI_ZEROES_SIZE, zero->addr, zero->dma);
+ dma_free_coherent(&pdev->dev, ENI_ZEROES_SIZE, zero->addr, zero->dma);
kfree(ed);
pci_disable_device(pdev);
}
static u32
fore200e_pca_dma_map(struct fore200e* fore200e, void* virt_addr, int size, int direction)
{
- u32 dma_addr = pci_map_single((struct pci_dev*)fore200e->bus_dev, virt_addr, size, direction);
+ u32 dma_addr = dma_map_single(&((struct pci_dev *) fore200e->bus_dev)->dev, virt_addr, size, direction);
DPRINTK(3, "PCI DVMA mapping: virt_addr = 0x%p, size = %d, direction = %d, --> dma_addr = 0x%08x\n",
virt_addr, size, direction, dma_addr);
DPRINTK(3, "PCI DVMA unmapping: dma_addr = 0x%08x, size = %d, direction = %d\n",
dma_addr, size, direction);
- pci_unmap_single((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction);
+ dma_unmap_single(&((struct pci_dev *) fore200e->bus_dev)->dev, dma_addr, size, direction);
}
{
DPRINTK(3, "PCI DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction);
- pci_dma_sync_single_for_cpu((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction);
+ dma_sync_single_for_cpu(&((struct pci_dev *) fore200e->bus_dev)->dev, dma_addr, size, direction);
}
static void
{
DPRINTK(3, "PCI DVMA sync: dma_addr = 0x%08x, size = %d, direction = %d\n", dma_addr, size, direction);
- pci_dma_sync_single_for_device((struct pci_dev*)fore200e->bus_dev, dma_addr, size, direction);
+ dma_sync_single_for_device(&((struct pci_dev *) fore200e->bus_dev)->dev, dma_addr, size, direction);
}
{
/* returned chunks are page-aligned */
chunk->alloc_size = size * nbr;
- chunk->alloc_addr = pci_alloc_consistent((struct pci_dev*)fore200e->bus_dev,
- chunk->alloc_size,
- &chunk->dma_addr);
+ chunk->alloc_addr = dma_alloc_coherent(&((struct pci_dev *) fore200e->bus_dev)->dev,
+ chunk->alloc_size,
+ &chunk->dma_addr,
+ GFP_KERNEL);
if ((chunk->alloc_addr == NULL) || (chunk->dma_addr == 0))
return -ENOMEM;
static void
fore200e_pca_dma_chunk_free(struct fore200e* fore200e, struct chunk* chunk)
{
- pci_free_consistent((struct pci_dev*)fore200e->bus_dev,
+ dma_free_coherent(&((struct pci_dev *) fore200e->bus_dev)->dev,
chunk->alloc_size,
chunk->alloc_addr,
chunk->dma_addr);
err = -EINVAL;
goto out;
}
+
+ if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32))) {
+ err = -EINVAL;
+ goto out;
+ }
fore200e = kzalloc(sizeof(struct fore200e), GFP_KERNEL);
if (fore200e == NULL) {
if (pci_enable_device(pci_dev))
return -EIO;
- if (pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32)) != 0) {
+ if (dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32)) != 0) {
printk(KERN_WARNING "he: no suitable dma available\n");
err = -EIO;
goto init_one_failure;
static int he_init_tpdrq(struct he_dev *he_dev)
{
- he_dev->tpdrq_base = pci_zalloc_consistent(he_dev->pci_dev,
- CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq),
- &he_dev->tpdrq_phys);
+ he_dev->tpdrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ CONFIG_TPDRQ_SIZE * sizeof(struct he_tpdrq),
+ &he_dev->tpdrq_phys, GFP_KERNEL);
if (he_dev->tpdrq_base == NULL) {
hprintk("failed to alloc tpdrq\n");
return -ENOMEM;
}
/* large buffer pool */
- he_dev->rbpl_pool = pci_pool_create("rbpl", he_dev->pci_dev,
+ he_dev->rbpl_pool = dma_pool_create("rbpl", &he_dev->pci_dev->dev,
CONFIG_RBPL_BUFSIZE, 64, 0);
if (he_dev->rbpl_pool == NULL) {
hprintk("unable to create rbpl pool\n");
goto out_free_rbpl_virt;
}
- he_dev->rbpl_base = pci_zalloc_consistent(he_dev->pci_dev,
- CONFIG_RBPL_SIZE * sizeof(struct he_rbp),
- &he_dev->rbpl_phys);
+ he_dev->rbpl_base = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ CONFIG_RBPL_SIZE * sizeof(struct he_rbp),
+ &he_dev->rbpl_phys, GFP_KERNEL);
if (he_dev->rbpl_base == NULL) {
hprintk("failed to alloc rbpl_base\n");
goto out_destroy_rbpl_pool;
for (i = 0; i < CONFIG_RBPL_SIZE; ++i) {
- heb = pci_pool_alloc(he_dev->rbpl_pool, GFP_KERNEL|GFP_DMA, &mapping);
+ heb = dma_pool_alloc(he_dev->rbpl_pool, GFP_KERNEL, &mapping);
if (!heb)
goto out_free_rbpl;
heb->mapping = mapping;
/* rx buffer ready queue */
- he_dev->rbrq_base = pci_zalloc_consistent(he_dev->pci_dev,
- CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq),
- &he_dev->rbrq_phys);
+ he_dev->rbrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq),
+ &he_dev->rbrq_phys, GFP_KERNEL);
if (he_dev->rbrq_base == NULL) {
hprintk("failed to allocate rbrq\n");
goto out_free_rbpl;
/* tx buffer ready queue */
- he_dev->tbrq_base = pci_zalloc_consistent(he_dev->pci_dev,
- CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
- &he_dev->tbrq_phys);
+ he_dev->tbrq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
+ &he_dev->tbrq_phys, GFP_KERNEL);
if (he_dev->tbrq_base == NULL) {
hprintk("failed to allocate tbrq\n");
goto out_free_rbpq_base;
return 0;
out_free_rbpq_base:
- pci_free_consistent(he_dev->pci_dev, CONFIG_RBRQ_SIZE *
- sizeof(struct he_rbrq), he_dev->rbrq_base,
- he_dev->rbrq_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_RBRQ_SIZE *
+ sizeof(struct he_rbrq), he_dev->rbrq_base,
+ he_dev->rbrq_phys);
out_free_rbpl:
list_for_each_entry_safe(heb, next, &he_dev->rbpl_outstanding, entry)
- pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
+ dma_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
- pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE *
- sizeof(struct he_rbp), he_dev->rbpl_base,
- he_dev->rbpl_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_RBPL_SIZE *
+ sizeof(struct he_rbp), he_dev->rbpl_base,
+ he_dev->rbpl_phys);
out_destroy_rbpl_pool:
- pci_pool_destroy(he_dev->rbpl_pool);
+ dma_pool_destroy(he_dev->rbpl_pool);
out_free_rbpl_virt:
kfree(he_dev->rbpl_virt);
out_free_rbpl_table:
/* 2.9.3.5 tail offset for each interrupt queue is located after the
end of the interrupt queue */
- he_dev->irq_base = pci_alloc_consistent(he_dev->pci_dev,
- (CONFIG_IRQ_SIZE+1) * sizeof(struct he_irq), &he_dev->irq_phys);
+ he_dev->irq_base = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ (CONFIG_IRQ_SIZE + 1)
+ * sizeof(struct he_irq),
+ &he_dev->irq_phys,
+ GFP_KERNEL);
if (he_dev->irq_base == NULL) {
hprintk("failed to allocate irq\n");
return -ENOMEM;
he_init_tpdrq(he_dev);
- he_dev->tpd_pool = pci_pool_create("tpd", he_dev->pci_dev,
- sizeof(struct he_tpd), TPD_ALIGNMENT, 0);
+ he_dev->tpd_pool = dma_pool_create("tpd", &he_dev->pci_dev->dev,
+ sizeof(struct he_tpd), TPD_ALIGNMENT, 0);
if (he_dev->tpd_pool == NULL) {
- hprintk("unable to create tpd pci_pool\n");
+ hprintk("unable to create tpd dma_pool\n");
return -ENOMEM;
}
/* host status page */
- he_dev->hsp = pci_zalloc_consistent(he_dev->pci_dev,
- sizeof(struct he_hsp),
- &he_dev->hsp_phys);
+ he_dev->hsp = dma_zalloc_coherent(&he_dev->pci_dev->dev,
+ sizeof(struct he_hsp),
+ &he_dev->hsp_phys, GFP_KERNEL);
if (he_dev->hsp == NULL) {
hprintk("failed to allocate host status page\n");
return -ENOMEM;
free_irq(he_dev->irq, he_dev);
if (he_dev->irq_base)
- pci_free_consistent(he_dev->pci_dev, (CONFIG_IRQ_SIZE+1)
- * sizeof(struct he_irq), he_dev->irq_base, he_dev->irq_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, (CONFIG_IRQ_SIZE + 1)
+ * sizeof(struct he_irq), he_dev->irq_base, he_dev->irq_phys);
if (he_dev->hsp)
- pci_free_consistent(he_dev->pci_dev, sizeof(struct he_hsp),
- he_dev->hsp, he_dev->hsp_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, sizeof(struct he_hsp),
+ he_dev->hsp, he_dev->hsp_phys);
if (he_dev->rbpl_base) {
list_for_each_entry_safe(heb, next, &he_dev->rbpl_outstanding, entry)
- pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
+ dma_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
- pci_free_consistent(he_dev->pci_dev, CONFIG_RBPL_SIZE
- * sizeof(struct he_rbp), he_dev->rbpl_base, he_dev->rbpl_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_RBPL_SIZE
+ * sizeof(struct he_rbp), he_dev->rbpl_base, he_dev->rbpl_phys);
}
kfree(he_dev->rbpl_virt);
kfree(he_dev->rbpl_table);
if (he_dev->rbpl_pool)
- pci_pool_destroy(he_dev->rbpl_pool);
+ dma_pool_destroy(he_dev->rbpl_pool);
if (he_dev->rbrq_base)
- pci_free_consistent(he_dev->pci_dev, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq),
- he_dev->rbrq_base, he_dev->rbrq_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_RBRQ_SIZE * sizeof(struct he_rbrq),
+ he_dev->rbrq_base, he_dev->rbrq_phys);
if (he_dev->tbrq_base)
- pci_free_consistent(he_dev->pci_dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
- he_dev->tbrq_base, he_dev->tbrq_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
+ he_dev->tbrq_base, he_dev->tbrq_phys);
if (he_dev->tpdrq_base)
- pci_free_consistent(he_dev->pci_dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
- he_dev->tpdrq_base, he_dev->tpdrq_phys);
+ dma_free_coherent(&he_dev->pci_dev->dev, CONFIG_TBRQ_SIZE * sizeof(struct he_tbrq),
+ he_dev->tpdrq_base, he_dev->tpdrq_phys);
if (he_dev->tpd_pool)
- pci_pool_destroy(he_dev->tpd_pool);
+ dma_pool_destroy(he_dev->tpd_pool);
if (he_dev->pci_dev) {
pci_read_config_word(he_dev->pci_dev, PCI_COMMAND, &command);
struct he_tpd *tpd;
dma_addr_t mapping;
- tpd = pci_pool_alloc(he_dev->tpd_pool, GFP_ATOMIC|GFP_DMA, &mapping);
+ tpd = dma_pool_alloc(he_dev->tpd_pool, GFP_ATOMIC, &mapping);
if (tpd == NULL)
return NULL;
if (!RBRQ_HBUF_ERR(he_dev->rbrq_head)) {
clear_bit(i, he_dev->rbpl_table);
list_del(&heb->entry);
- pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
+ dma_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
}
goto next_rbrq_entry;
++pdus_assembled;
list_for_each_entry_safe(heb, next, &he_vcc->buffers, entry)
- pci_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
+ dma_pool_free(he_dev->rbpl_pool, heb, heb->mapping);
INIT_LIST_HEAD(&he_vcc->buffers);
he_vcc->pdu_len = 0;
for (slot = 0; slot < TPD_MAXIOV; ++slot) {
if (tpd->iovec[slot].addr)
- pci_unmap_single(he_dev->pci_dev,
+ dma_unmap_single(&he_dev->pci_dev->dev,
tpd->iovec[slot].addr,
tpd->iovec[slot].len & TPD_LEN_MASK,
- PCI_DMA_TODEVICE);
+ DMA_TO_DEVICE);
if (tpd->iovec[slot].len & TPD_LST)
break;
next_tbrq_entry:
if (tpd)
- pci_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status));
+ dma_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status));
he_dev->tbrq_head = (struct he_tbrq *)
((unsigned long) he_dev->tbrq_base |
TBRQ_MASK(he_dev->tbrq_head + 1));
}
he_dev->rbpl_hint = i + 1;
- heb = pci_pool_alloc(he_dev->rbpl_pool, GFP_ATOMIC|GFP_DMA, &mapping);
+ heb = dma_pool_alloc(he_dev->rbpl_pool, GFP_ATOMIC, &mapping);
if (!heb)
break;
heb->mapping = mapping;
*/
for (slot = 0; slot < TPD_MAXIOV; ++slot) {
if (tpd->iovec[slot].addr)
- pci_unmap_single(he_dev->pci_dev,
+ dma_unmap_single(&he_dev->pci_dev->dev,
tpd->iovec[slot].addr,
tpd->iovec[slot].len & TPD_LEN_MASK,
- PCI_DMA_TODEVICE);
+ DMA_TO_DEVICE);
}
if (tpd->skb) {
if (tpd->vcc->pop)
dev_kfree_skb_any(tpd->skb);
atomic_inc(&tpd->vcc->stats->tx_err);
}
- pci_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status));
+ dma_pool_free(he_dev->tpd_pool, tpd, TPD_ADDR(tpd->status));
return;
}
}
}
#ifdef USE_SCATTERGATHER
- tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev, skb->data,
- skb_headlen(skb), PCI_DMA_TODEVICE);
+ tpd->iovec[slot].addr = dma_map_single(&he_dev->pci_dev->dev, skb->data,
+ skb_headlen(skb), DMA_TO_DEVICE);
tpd->iovec[slot].len = skb_headlen(skb);
++slot;
slot = 0;
}
- tpd->iovec[slot].addr = pci_map_single(he_dev->pci_dev,
+ tpd->iovec[slot].addr = dma_map_single(&he_dev->pci_dev->dev,
(void *) page_address(frag->page) + frag->page_offset,
- frag->size, PCI_DMA_TODEVICE);
+ frag->size, DMA_TO_DEVICE);
tpd->iovec[slot].len = frag->size;
++slot;
tpd->iovec[slot - 1].len |= TPD_LST;
#else
- tpd->address0 = pci_map_single(he_dev->pci_dev, skb->data, skb->len, PCI_DMA_TODEVICE);
+ tpd->address0 = dma_map_single(&he_dev->pci_dev->dev, skb->data, skb->len, DMA_TO_DEVICE);
tpd->length0 = skb->len | TPD_LST;
#endif
tpd->status |= TPD_INT;
int irq_peak;
struct tasklet_struct tasklet;
- struct pci_pool *tpd_pool;
+ struct dma_pool *tpd_pool;
struct list_head outstanding_tpds;
dma_addr_t tpdrq_phys;
struct he_buff **rbpl_virt;
unsigned long *rbpl_table;
unsigned long rbpl_hint;
- struct pci_pool *rbpl_pool;
+ struct dma_pool *rbpl_pool;
dma_addr_t rbpl_phys;
struct he_rbp *rbpl_base, *rbpl_tail;
struct list_head rbpl_outstanding;
return;
}
-static inline u16 query_tx_channel_config (hrz_dev * dev, short chan, u8 mode) {
- wr_regw (dev, TX_CHANNEL_CONFIG_COMMAND_OFF,
- chan * TX_CHANNEL_CONFIG_MULT | mode);
- return rd_regw (dev, TX_CHANNEL_CONFIG_DATA_OFF);
-}
-
/********** dump functions **********/
static inline void dump_skb (char * prefix, unsigned int vc, struct sk_buff * skb) {
/* RX channels are 10 bit integers, these fns are quite paranoid */
-static inline int channel_to_vpivci (const u16 channel, short * vpi, int * vci) {
- unsigned short vci_bits = 10 - vpi_bits;
- if ((channel & RX_CHANNEL_MASK) == channel) {
- *vci = channel & ((~0)<<vci_bits);
- *vpi = channel >> vci_bits;
- return channel ? 0 : -EINVAL;
- }
- return -EINVAL;
-}
-
static inline int vpivci_to_channel (u16 * channel, const short vpi, const int vci) {
unsigned short vci_bits = 10 - vpi_bits;
if (0 <= vpi && vpi < 1<<vpi_bits && 0 <= vci && vci < 1<<vci_bits) {
return rx_queue_entry;
}
-/********** handle RX disabled by device **********/
-
-static inline void rx_disabled_handler (hrz_dev * dev) {
- wr_regw (dev, RX_CONFIG_OFF, rd_regw (dev, RX_CONFIG_OFF) | RX_ENABLE);
- // count me please
- PRINTK (KERN_WARNING, "RX was disabled!");
-}
-
/********** handle RX data received by device **********/
// called from IRQ handler
scq = kzalloc(sizeof(struct scq_info), GFP_KERNEL);
if (!scq)
return NULL;
- scq->base = pci_zalloc_consistent(card->pcidev, SCQ_SIZE, &scq->paddr);
+ scq->base = dma_zalloc_coherent(&card->pcidev->dev, SCQ_SIZE,
+ &scq->paddr, GFP_KERNEL);
if (scq->base == NULL) {
kfree(scq);
return NULL;
struct sk_buff *skb;
struct atm_vcc *vcc;
- pci_free_consistent(card->pcidev, SCQ_SIZE,
- scq->base, scq->paddr);
+ dma_free_coherent(&card->pcidev->dev, SCQ_SIZE,
+ scq->base, scq->paddr);
while ((skb = skb_dequeue(&scq->transmit))) {
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb->len, DMA_TO_DEVICE);
vcc = ATM_SKB(skb)->vcc;
if (vcc->pop)
}
while ((skb = skb_dequeue(&scq->pending))) {
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb->len, DMA_TO_DEVICE);
vcc = ATM_SKB(skb)->vcc;
if (vcc->pop)
if (skb) {
TXPRINTK("%s: freeing skb at %p.\n", card->name, skb);
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb->len, DMA_TO_DEVICE);
vcc = ATM_SKB(skb)->vcc;
tbd = &IDT77252_PRV_TBD(skb);
vcc = ATM_SKB(skb)->vcc;
- IDT77252_PRV_PADDR(skb) = pci_map_single(card->pcidev, skb->data,
- skb->len, PCI_DMA_TODEVICE);
+ IDT77252_PRV_PADDR(skb) = dma_map_single(&card->pcidev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
error = -EINVAL;
return 0;
errout:
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb->len, DMA_TO_DEVICE);
return error;
}
{
struct rsq_entry *rsqe;
- card->rsq.base = pci_zalloc_consistent(card->pcidev, RSQSIZE,
- &card->rsq.paddr);
+ card->rsq.base = dma_zalloc_coherent(&card->pcidev->dev, RSQSIZE,
+ &card->rsq.paddr, GFP_KERNEL);
if (card->rsq.base == NULL) {
printk("%s: can't allocate RSQ.\n", card->name);
return -1;
static void
deinit_rsq(struct idt77252_dev *card)
{
- pci_free_consistent(card->pcidev, RSQSIZE,
- card->rsq.base, card->rsq.paddr);
+ dma_free_coherent(&card->pcidev->dev, RSQSIZE,
+ card->rsq.base, card->rsq.paddr);
}
static void
vcc = vc->rx_vcc;
- pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb_end_pointer(skb) - skb->data,
- PCI_DMA_FROMDEVICE);
+ dma_sync_single_for_cpu(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb_end_pointer(skb) - skb->data,
+ DMA_FROM_DEVICE);
if ((vcc->qos.aal == ATM_AAL0) ||
(vcc->qos.aal == ATM_AAL34)) {
return;
}
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
skb_end_pointer(skb) - skb->data,
- PCI_DMA_FROMDEVICE);
+ DMA_FROM_DEVICE);
sb_pool_remove(card, skb);
skb_trim(skb, len);
head = IDT77252_PRV_PADDR(queue) + (queue->data - queue->head - 16);
tail = readl(SAR_REG_RAWCT);
- pci_dma_sync_single_for_cpu(card->pcidev, IDT77252_PRV_PADDR(queue),
- skb_end_offset(queue) - 16,
- PCI_DMA_FROMDEVICE);
+ dma_sync_single_for_cpu(&card->pcidev->dev, IDT77252_PRV_PADDR(queue),
+ skb_end_offset(queue) - 16,
+ DMA_FROM_DEVICE);
while (head != tail) {
unsigned int vpi, vci;
if (next) {
card->raw_cell_head = next;
queue = card->raw_cell_head;
- pci_dma_sync_single_for_cpu(card->pcidev,
- IDT77252_PRV_PADDR(queue),
- (skb_end_pointer(queue) -
- queue->data),
- PCI_DMA_FROMDEVICE);
+ dma_sync_single_for_cpu(&card->pcidev->dev,
+ IDT77252_PRV_PADDR(queue),
+ (skb_end_pointer(queue) -
+ queue->data),
+ DMA_FROM_DEVICE);
} else {
card->raw_cell_head = NULL;
printk("%s: raw cell queue overrun\n",
{
struct tsq_entry *tsqe;
- card->tsq.base = pci_alloc_consistent(card->pcidev, RSQSIZE,
- &card->tsq.paddr);
+ card->tsq.base = dma_alloc_coherent(&card->pcidev->dev, RSQSIZE,
+ &card->tsq.paddr, GFP_KERNEL);
if (card->tsq.base == NULL) {
printk("%s: can't allocate TSQ.\n", card->name);
return -1;
static void
deinit_tsq(struct idt77252_dev *card)
{
- pci_free_consistent(card->pcidev, TSQSIZE,
- card->tsq.base, card->tsq.paddr);
+ dma_free_coherent(&card->pcidev->dev, TSQSIZE,
+ card->tsq.base, card->tsq.paddr);
}
static void
goto outfree;
}
- paddr = pci_map_single(card->pcidev, skb->data,
+ paddr = dma_map_single(&card->pcidev->dev, skb->data,
skb_end_pointer(skb) - skb->data,
- PCI_DMA_FROMDEVICE);
+ DMA_FROM_DEVICE);
IDT77252_PRV_PADDR(skb) = paddr;
if (push_rx_skb(card, skb, queue)) {
return;
outunmap:
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb_end_pointer(skb) - skb->data, PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb_end_pointer(skb) - skb->data, DMA_FROM_DEVICE);
handle = IDT77252_PRV_POOL(skb);
card->sbpool[POOL_QUEUE(handle)].skb[POOL_INDEX(handle)] = NULL;
u32 handle = IDT77252_PRV_POOL(skb);
int err;
- pci_dma_sync_single_for_device(card->pcidev, IDT77252_PRV_PADDR(skb),
- skb_end_pointer(skb) - skb->data,
- PCI_DMA_FROMDEVICE);
+ dma_sync_single_for_device(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
+ skb_end_pointer(skb) - skb->data,
+ DMA_FROM_DEVICE);
err = push_rx_skb(card, skb, POOL_QUEUE(handle));
if (err) {
- pci_unmap_single(card->pcidev, IDT77252_PRV_PADDR(skb),
+ dma_unmap_single(&card->pcidev->dev, IDT77252_PRV_PADDR(skb),
skb_end_pointer(skb) - skb->data,
- PCI_DMA_FROMDEVICE);
+ DMA_FROM_DEVICE);
sb_pool_remove(card, skb);
dev_kfree_skb(skb);
}
for (j = 0; j < FBQ_SIZE; j++) {
skb = card->sbpool[i].skb[j];
if (skb) {
- pci_unmap_single(card->pcidev,
+ dma_unmap_single(&card->pcidev->dev,
IDT77252_PRV_PADDR(skb),
(skb_end_pointer(skb) -
skb->data),
- PCI_DMA_FROMDEVICE);
+ DMA_FROM_DEVICE);
card->sbpool[i].skb[j] = NULL;
dev_kfree_skb(skb);
}
vfree(card->vcs);
if (card->raw_cell_hnd) {
- pci_free_consistent(card->pcidev, 2 * sizeof(u32),
- card->raw_cell_hnd, card->raw_cell_paddr);
+ dma_free_coherent(&card->pcidev->dev, 2 * sizeof(u32),
+ card->raw_cell_hnd, card->raw_cell_paddr);
}
if (card->rsq.base) {
writel(0, SAR_REG_GP);
/* Initialize RAW Cell Handle Register */
- card->raw_cell_hnd = pci_zalloc_consistent(card->pcidev,
- 2 * sizeof(u32),
- &card->raw_cell_paddr);
+ card->raw_cell_hnd = dma_zalloc_coherent(&card->pcidev->dev,
+ 2 * sizeof(u32),
+ &card->raw_cell_paddr,
+ GFP_KERNEL);
if (!card->raw_cell_hnd) {
printk("%s: memory allocation failure.\n", card->name);
deinit_card(card);
return err;
}
+ if ((err = dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32)))) {
+ printk("idt77252: can't enable DMA for PCI device at %s\n", pci_name(pcidev));
+ return err;
+ }
+
card = kzalloc(sizeof(struct idt77252_dev), GFP_KERNEL);
if (!card) {
printk("idt77252-%d: can't allocate private data\n", index);
/* Build the DLE structure */
wr_ptr = iadev->rx_dle_q.write;
- wr_ptr->sys_pkt_addr = pci_map_single(iadev->pci, skb->data,
- len, PCI_DMA_FROMDEVICE);
+ wr_ptr->sys_pkt_addr = dma_map_single(&iadev->pci->dev, skb->data,
+ len, DMA_FROM_DEVICE);
wr_ptr->local_pkt_addr = buf_addr;
wr_ptr->bytes = len; /* We don't know this do we ?? */
wr_ptr->mode = DMA_INT_ENABLE;
u_short length;
struct ia_vcc *ia_vcc;
- pci_unmap_single(iadev->pci, iadev->rx_dle_q.write->sys_pkt_addr,
- len, PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&iadev->pci->dev, iadev->rx_dle_q.write->sys_pkt_addr,
+ len, DMA_FROM_DEVICE);
/* no VCC related housekeeping done as yet. lets see */
vcc = ATM_SKB(skb)->vcc;
if (!vcc) {
// spin_lock_init(&iadev->rx_lock);
/* Allocate 4k bytes - more aligned than needed (4k boundary) */
- dle_addr = pci_alloc_consistent(iadev->pci, DLE_TOTAL_SIZE,
- &iadev->rx_dle_dma);
+ dle_addr = dma_alloc_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE,
+ &iadev->rx_dle_dma, GFP_KERNEL);
if (!dle_addr) {
printk(KERN_ERR DEV_LABEL "can't allocate DLEs\n");
goto err_out;
return 0;
err_free_dle:
- pci_free_consistent(iadev->pci, DLE_TOTAL_SIZE, iadev->rx_dle_q.start,
- iadev->rx_dle_dma);
+ dma_free_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE, iadev->rx_dle_q.start,
+ iadev->rx_dle_dma);
err_out:
return -ENOMEM;
}
/* Revenge of the 2 dle (skb + trailer) used in ia_pkt_tx() */
if (!((dle - iadev->tx_dle_q.start)%(2*sizeof(struct dle)))) {
- pci_unmap_single(iadev->pci, dle->sys_pkt_addr, skb->len,
- PCI_DMA_TODEVICE);
+ dma_unmap_single(&iadev->pci->dev, dle->sys_pkt_addr, skb->len,
+ DMA_TO_DEVICE);
}
vcc = ATM_SKB(skb)->vcc;
if (!vcc) {
readw(iadev->seg_reg+SEG_MASK_REG));)
/* Allocate 4k (boundary aligned) bytes */
- dle_addr = pci_alloc_consistent(iadev->pci, DLE_TOTAL_SIZE,
- &iadev->tx_dle_dma);
+ dle_addr = dma_alloc_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE,
+ &iadev->tx_dle_dma, GFP_KERNEL);
if (!dle_addr) {
printk(KERN_ERR DEV_LABEL "can't allocate DLEs\n");
goto err_out;
goto err_free_tx_bufs;
}
iadev->tx_buf[i].cpcs = cpcs;
- iadev->tx_buf[i].dma_addr = pci_map_single(iadev->pci,
- cpcs, sizeof(*cpcs), PCI_DMA_TODEVICE);
+ iadev->tx_buf[i].dma_addr = dma_map_single(&iadev->pci->dev,
+ cpcs,
+ sizeof(*cpcs),
+ DMA_TO_DEVICE);
}
iadev->desc_tbl = kmalloc(iadev->num_tx_desc *
sizeof(struct desc_tbl_t), GFP_KERNEL);
while (--i >= 0) {
struct cpcs_trailer_desc *desc = iadev->tx_buf + i;
- pci_unmap_single(iadev->pci, desc->dma_addr,
- sizeof(*desc->cpcs), PCI_DMA_TODEVICE);
+ dma_unmap_single(&iadev->pci->dev, desc->dma_addr,
+ sizeof(*desc->cpcs), DMA_TO_DEVICE);
kfree(desc->cpcs);
}
kfree(iadev->tx_buf);
err_free_dle:
- pci_free_consistent(iadev->pci, DLE_TOTAL_SIZE, iadev->tx_dle_q.start,
- iadev->tx_dle_dma);
+ dma_free_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE, iadev->tx_dle_q.start,
+ iadev->tx_dle_dma);
err_out:
return -ENOMEM;
}
for (i = 0; i < iadev->num_tx_desc; i++) {
struct cpcs_trailer_desc *desc = iadev->tx_buf + i;
- pci_unmap_single(iadev->pci, desc->dma_addr,
- sizeof(*desc->cpcs), PCI_DMA_TODEVICE);
+ dma_unmap_single(&iadev->pci->dev, desc->dma_addr,
+ sizeof(*desc->cpcs), DMA_TO_DEVICE);
kfree(desc->cpcs);
}
kfree(iadev->tx_buf);
- pci_free_consistent(iadev->pci, DLE_TOTAL_SIZE, iadev->tx_dle_q.start,
- iadev->tx_dle_dma);
+ dma_free_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE, iadev->tx_dle_q.start,
+ iadev->tx_dle_dma);
}
static void ia_free_rx(IADEV *iadev)
{
kfree(iadev->rx_open);
- pci_free_consistent(iadev->pci, DLE_TOTAL_SIZE, iadev->rx_dle_q.start,
- iadev->rx_dle_dma);
+ dma_free_coherent(&iadev->pci->dev, DLE_TOTAL_SIZE, iadev->rx_dle_q.start,
+ iadev->rx_dle_dma);
}
static int ia_start(struct atm_dev *dev)
/* Build the DLE structure */
wr_ptr = iadev->tx_dle_q.write;
memset((caddr_t)wr_ptr, 0, sizeof(*wr_ptr));
- wr_ptr->sys_pkt_addr = pci_map_single(iadev->pci, skb->data,
- skb->len, PCI_DMA_TODEVICE);
+ wr_ptr->sys_pkt_addr = dma_map_single(&iadev->pci->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
wr_ptr->local_pkt_addr = (buf_desc_ptr->buf_start_hi << 16) |
buf_desc_ptr->buf_start_lo;
/* wr_ptr->bytes = swap_byte_order(total_len); didn't seem to affect?? */
* everything, but the way the lanai uses DMA memory would
* make that a terrific pain. This is much simpler.
*/
- buf->start = pci_alloc_consistent(pci, size, &buf->dmaaddr);
+ buf->start = dma_alloc_coherent(&pci->dev,
+ size, &buf->dmaaddr, GFP_KERNEL);
if (buf->start != NULL) { /* Success */
/* Lanai requires 256-byte alignment of DMA bufs */
APRINTK((buf->dmaaddr & ~0xFFFFFF00) == 0,
struct pci_dev *pci)
{
if (buf->start != NULL) {
- pci_free_consistent(pci, lanai_buf_size(buf),
- buf->start, buf->dmaaddr);
+ dma_free_coherent(&pci->dev, lanai_buf_size(buf),
+ buf->start, buf->dmaaddr);
buf->start = buf->end = buf->ptr = NULL;
}
}
return cells * 48;
}
-/* How many bytes can we send if we have "space" space, assuming we have
- * to send full cells
- */
-static inline int aal5_spacefor(int space)
-{
- int cells = space / 48;
- return cells * 48;
-}
-
/* -------------------- FREE AN ATM SKB: */
static inline void lanai_free_skb(struct atm_vcc *atmvcc, struct sk_buff *skb)
return -ENXIO;
}
pci_set_master(pci);
- if (pci_set_dma_mask(pci, DMA_BIT_MASK(32)) != 0) {
- printk(KERN_WARNING DEV_LABEL
- "(itf %d): No suitable DMA available.\n", lanai->number);
- return -EBUSY;
- }
- if (pci_set_consistent_dma_mask(pci, DMA_BIT_MASK(32)) != 0) {
+ if (dma_set_mask_and_coherent(&pci->dev, DMA_BIT_MASK(32)) != 0) {
printk(KERN_WARNING DEV_LABEL
"(itf %d): No suitable DMA available.\n", lanai->number);
return -EBUSY;
free_scq(card, card->scd2vc[j]->scq, card->scd2vc[j]->tx_vcc);
}
idr_destroy(&card->idr);
- pci_free_consistent(card->pcidev, NS_RSQSIZE + NS_RSQ_ALIGNMENT,
- card->rsq.org, card->rsq.dma);
- pci_free_consistent(card->pcidev, NS_TSQSIZE + NS_TSQ_ALIGNMENT,
- card->tsq.org, card->tsq.dma);
+ dma_free_coherent(&card->pcidev->dev, NS_RSQSIZE + NS_RSQ_ALIGNMENT,
+ card->rsq.org, card->rsq.dma);
+ dma_free_coherent(&card->pcidev->dev, NS_TSQSIZE + NS_TSQ_ALIGNMENT,
+ card->tsq.org, card->tsq.dma);
free_irq(card->pcidev->irq, card);
iounmap(card->membase);
kfree(card);
ns_init_card_error(card, error);
return error;
}
- if ((pci_set_dma_mask(pcidev, DMA_BIT_MASK(32)) != 0) ||
- (pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(32)) != 0)) {
+ if (dma_set_mask_and_coherent(&pcidev->dev, DMA_BIT_MASK(32)) != 0) {
printk(KERN_WARNING
"nicstar%d: No suitable DMA available.\n", i);
error = 2;
writel(0x00000000, card->membase + VPM);
/* Initialize TSQ */
- card->tsq.org = pci_alloc_consistent(card->pcidev,
- NS_TSQSIZE + NS_TSQ_ALIGNMENT,
- &card->tsq.dma);
+ card->tsq.org = dma_alloc_coherent(&card->pcidev->dev,
+ NS_TSQSIZE + NS_TSQ_ALIGNMENT,
+ &card->tsq.dma, GFP_KERNEL);
if (card->tsq.org == NULL) {
printk("nicstar%d: can't allocate TSQ.\n", i);
error = 10;
PRINTK("nicstar%d: TSQ base at 0x%p.\n", i, card->tsq.base);
/* Initialize RSQ */
- card->rsq.org = pci_alloc_consistent(card->pcidev,
- NS_RSQSIZE + NS_RSQ_ALIGNMENT,
- &card->rsq.dma);
+ card->rsq.org = dma_alloc_coherent(&card->pcidev->dev,
+ NS_RSQSIZE + NS_RSQ_ALIGNMENT,
+ &card->rsq.dma, GFP_KERNEL);
if (card->rsq.org == NULL) {
printk("nicstar%d: can't allocate RSQ.\n", i);
error = 11;
scq = kmalloc(sizeof(scq_info), GFP_KERNEL);
if (!scq)
return NULL;
- scq->org = pci_alloc_consistent(card->pcidev, 2 * size, &scq->dma);
+ scq->org = dma_alloc_coherent(&card->pcidev->dev,
+ 2 * size, &scq->dma, GFP_KERNEL);
if (!scq->org) {
kfree(scq);
return NULL;
}
}
kfree(scq->skb);
- pci_free_consistent(card->pcidev,
- 2 * (scq->num_entries == VBR_SCQ_NUM_ENTRIES ?
- VBR_SCQSIZE : CBR_SCQSIZE),
- scq->org, scq->dma);
+ dma_free_coherent(&card->pcidev->dev,
+ 2 * (scq->num_entries == VBR_SCQ_NUM_ENTRIES ?
+ VBR_SCQSIZE : CBR_SCQSIZE),
+ scq->org, scq->dma);
kfree(scq);
}
handle2 = NULL;
addr2 = 0;
handle1 = skb;
- addr1 = pci_map_single(card->pcidev,
+ addr1 = dma_map_single(&card->pcidev->dev,
skb->data,
(NS_PRV_BUFTYPE(skb) == BUF_SM
? NS_SMSKBSIZE : NS_LGSKBSIZE),
- PCI_DMA_TODEVICE);
+ DMA_TO_DEVICE);
NS_PRV_DMA(skb) = addr1; /* save so we can unmap later */
#ifdef GENERAL_DEBUG
ATM_SKB(skb)->vcc = vcc;
- NS_PRV_DMA(skb) = pci_map_single(card->pcidev, skb->data,
- skb->len, PCI_DMA_TODEVICE);
+ NS_PRV_DMA(skb) = dma_map_single(&card->pcidev->dev, skb->data,
+ skb->len, DMA_TO_DEVICE);
if (vcc->qos.aal == ATM_AAL5) {
buflen = (skb->len + 47 + 8) / 48 * 48; /* Multiple of 48 */
XPRINTK("nicstar%d: freeing skb at 0x%p (index %d).\n",
card->index, skb, i);
if (skb != NULL) {
- pci_unmap_single(card->pcidev,
+ dma_unmap_single(&card->pcidev->dev,
NS_PRV_DMA(skb),
skb->len,
- PCI_DMA_TODEVICE);
+ DMA_TO_DEVICE);
vcc = ATM_SKB(skb)->vcc;
if (vcc && vcc->pop != NULL) {
vcc->pop(vcc, skb);
return;
}
idr_remove(&card->idr, id);
- pci_dma_sync_single_for_cpu(card->pcidev,
- NS_PRV_DMA(skb),
- (NS_PRV_BUFTYPE(skb) == BUF_SM
- ? NS_SMSKBSIZE : NS_LGSKBSIZE),
- PCI_DMA_FROMDEVICE);
- pci_unmap_single(card->pcidev,
+ dma_sync_single_for_cpu(&card->pcidev->dev,
+ NS_PRV_DMA(skb),
+ (NS_PRV_BUFTYPE(skb) == BUF_SM
+ ? NS_SMSKBSIZE : NS_LGSKBSIZE),
+ DMA_FROM_DEVICE);
+ dma_unmap_single(&card->pcidev->dev,
NS_PRV_DMA(skb),
(NS_PRV_BUFTYPE(skb) == BUF_SM
? NS_SMSKBSIZE : NS_LGSKBSIZE),
- PCI_DMA_FROMDEVICE);
+ DMA_FROM_DEVICE);
vpi = ns_rsqe_vpi(rsqe);
vci = ns_rsqe_vci(rsqe);
if (vpi >= 1UL << card->vpibits || vci >= 1UL << card->vcibits) {
skb = card->rx_skb[port];
card->rx_skb[port] = NULL;
- pci_unmap_single(card->dev, SKB_CB(skb)->dma_addr,
- RX_DMA_SIZE, PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&card->dev->dev, SKB_CB(skb)->dma_addr,
+ RX_DMA_SIZE, DMA_FROM_DEVICE);
header = (void *)skb->data;
size = le16_to_cpu(header->size);
struct sk_buff *skb = alloc_skb(RX_DMA_SIZE, GFP_ATOMIC);
if (skb) {
SKB_CB(skb)->dma_addr =
- pci_map_single(card->dev, skb->data,
- RX_DMA_SIZE, PCI_DMA_FROMDEVICE);
+ dma_map_single(&card->dev->dev, skb->data,
+ RX_DMA_SIZE, DMA_FROM_DEVICE);
iowrite32(SKB_CB(skb)->dma_addr,
card->config_regs + RX_DMA_ADDR(port));
card->rx_skb[port] = skb;
if (tx_pending & 1) {
struct sk_buff *oldskb = card->tx_skb[port];
if (oldskb) {
- pci_unmap_single(card->dev, SKB_CB(oldskb)->dma_addr,
- oldskb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->dev->dev, SKB_CB(oldskb)->dma_addr,
+ oldskb->len, DMA_TO_DEVICE);
card->tx_skb[port] = NULL;
}
spin_lock(&card->tx_queue_lock);
data = card->dma_bounce + (BUF_SIZE * port);
memcpy(data, skb->data, skb->len);
}
- SKB_CB(skb)->dma_addr = pci_map_single(card->dev, data,
- skb->len, PCI_DMA_TODEVICE);
+ SKB_CB(skb)->dma_addr = dma_map_single(&card->dev->dev, data,
+ skb->len, DMA_TO_DEVICE);
card->tx_skb[port] = skb;
iowrite32(SKB_CB(skb)->dma_addr,
card->config_regs + TX_DMA_ADDR(port));
goto out;
}
- err = pci_set_dma_mask(dev, DMA_BIT_MASK(32));
+ err = dma_set_mask_and_coherent(&dev->dev, DMA_BIT_MASK(32));
if (err) {
dev_warn(&dev->dev, "Failed to set 32-bit DMA mask\n");
goto out;
skb = card->rx_skb[i];
if (skb) {
- pci_unmap_single(card->dev, SKB_CB(skb)->dma_addr,
- RX_DMA_SIZE, PCI_DMA_FROMDEVICE);
+ dma_unmap_single(&card->dev->dev, SKB_CB(skb)->dma_addr,
+ RX_DMA_SIZE, DMA_FROM_DEVICE);
dev_kfree_skb(skb);
}
skb = card->tx_skb[i];
if (skb) {
- pci_unmap_single(card->dev, SKB_CB(skb)->dma_addr,
- skb->len, PCI_DMA_TODEVICE);
+ dma_unmap_single(&card->dev->dev, SKB_CB(skb)->dma_addr,
+ skb->len, DMA_TO_DEVICE);
dev_kfree_skb(skb);
}
while ((skb = skb_dequeue(&card->tx_queue[i])))
if (!mbx_entries[i])
continue;
- mbx = pci_alloc_consistent(pdev, 2*MBX_SIZE(i), &mbx_dma);
+ mbx = dma_alloc_coherent(&pdev->dev,
+ 2 * MBX_SIZE(i), &mbx_dma, GFP_KERNEL);
if (!mbx) {
error = -ENOMEM;
goto out;
}
/*
- * Alignment provided by pci_alloc_consistent() isn't enough
+ * Alignment provided by dma_alloc_coherent() isn't enough
* for this device.
*/
if (((unsigned long)mbx ^ mbx_dma) & 0xffff) {
printk(KERN_ERR DEV_LABEL "(itf %d): system "
"bus incompatible with driver\n", dev->number);
- pci_free_consistent(pdev, 2*MBX_SIZE(i), mbx, mbx_dma);
+ dma_free_coherent(&pdev->dev, 2*MBX_SIZE(i), mbx, mbx_dma);
error = -ENODEV;
goto out;
}
kfree(zatm_dev->tx_map);
out:
while (i-- > 0) {
- pci_free_consistent(pdev, 2*MBX_SIZE(i),
- (void *)zatm_dev->mbx_start[i],
- zatm_dev->mbx_dma[i]);
+ dma_free_coherent(&pdev->dev, 2 * MBX_SIZE(i),
+ (void *)zatm_dev->mbx_start[i],
+ zatm_dev->mbx_dma[i]);
}
free_irq(zatm_dev->irq, dev);
goto done;
if (ret < 0)
goto out_disable;
+ ret = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
+ if (ret < 0)
+ goto out_disable;
+
zatm_dev->pci_dev = pci_dev;
dev->dev_data = zatm_dev;
zatm_dev->copper = (int)ent->driver_data;
goto out_cleanup_queues;
nullb->q = blk_mq_init_queue(&nullb->tag_set);
- if (!nullb->q) {
+ if (IS_ERR(nullb->q)) {
rv = -ENOMEM;
goto out_cleanup_tags;
}
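blk_mq_init_queue() reports failure with an ERR_PTR()-encoded errno, never NULL, so the old `if (!q)` tests could miss the error and later dereference an error pointer; the null_blk hunk above and the NVMe admin-queue and virtio_blk hunks below all switch to IS_ERR(). The corrected calling convention, sketched with illustrative names:
	struct request_queue *q;

	q = blk_mq_init_queue(&tag_set);
	if (IS_ERR(q))
		return PTR_ERR(q);	/* propagate the encoded errno */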
cmd->fn = handler;
cmd->ctx = ctx;
cmd->aborted = 0;
+ blk_mq_start_request(blk_mq_rq_from_pdu(cmd));
}
/* Special values must be less than 0x1000 */
if (unlikely(status)) {
if (!(status & NVME_SC_DNR || blk_noretry_request(req))
&& (jiffies - req->start_time) < req->timeout) {
+ unsigned long flags;
+
blk_mq_requeue_request(req);
- blk_mq_kick_requeue_list(req->q);
+ spin_lock_irqsave(req->q->queue_lock, flags);
+ if (!blk_queue_stopped(req->q))
+ blk_mq_kick_requeue_list(req->q);
+ spin_unlock_irqrestore(req->q->queue_lock, flags);
return;
}
req->errors = nvme_error_status(status);
}
}
- blk_mq_start_request(req);
-
nvme_set_info(cmd, iod, req_completion);
spin_lock_irq(&nvmeq->q_lock);
if (req->cmd_flags & REQ_DISCARD)
if (IS_ERR(req))
return PTR_ERR(req);
+ req->cmd_flags |= REQ_NO_TIMEOUT;
cmd_info = blk_mq_rq_to_pdu(req);
nvme_set_info(cmd_info, req, async_req_completion);
struct nvme_command cmd;
if (!nvmeq->qid || cmd_rq->aborted) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev_list_lock, flags);
if (work_busy(&dev->reset_work))
- return;
+ goto out;
list_del_init(&dev->node);
dev_warn(&dev->pci_dev->dev,
"I/O %d QID %d timeout, reset controller\n",
req->tag, nvmeq->qid);
dev->reset_workfn = nvme_reset_failed_dev;
queue_work(nvme_workq, &dev->reset_work);
+ out:
+ spin_unlock_irqrestore(&dev_list_lock, flags);
return;
}
void *ctx;
nvme_completion_fn fn;
struct nvme_cmd_info *cmd;
- static struct nvme_completion cqe = {
- .status = cpu_to_le16(NVME_SC_ABORT_REQ << 1),
- };
+ struct nvme_completion cqe;
+
+ if (!blk_mq_request_started(req))
+ return;
cmd = blk_mq_rq_to_pdu(req);
if (cmd->ctx == CMD_CTX_CANCELLED)
return;
+ if (blk_queue_dying(req->q))
+ cqe.status = cpu_to_le16((NVME_SC_ABORT_REQ | NVME_SC_DNR) << 1);
+ else
+ cqe.status = cpu_to_le16(NVME_SC_ABORT_REQ << 1);
+
dev_warn(nvmeq->q_dmadev, "Cancelling I/O %d QID %d\n",
req->tag, nvmeq->qid);
ctx = cancel_cmd_info(cmd, &fn);
struct nvme_cmd_info *cmd = blk_mq_rq_to_pdu(req);
struct nvme_queue *nvmeq = cmd->nvmeq;
- dev_warn(nvmeq->q_dmadev, "Timeout I/O %d QID %d\n", req->tag,
- nvmeq->qid);
- if (nvmeq->dev->initialized)
- nvme_abort_req(req);
-
/*
* The aborted req will be completed on receiving the abort req.
* We re-arm the timer. If it expires a second time, that causes a
* device reset, since the device is then in a faulty state.
*/
- return BLK_EH_RESET_TIMER;
+ int ret = BLK_EH_RESET_TIMER;
+
+ dev_warn(nvmeq->q_dmadev, "Timeout I/O %d QID %d\n", req->tag,
+ nvmeq->qid);
+
+ spin_lock_irq(&nvmeq->q_lock);
+ if (!nvmeq->dev->initialized) {
+ /*
+ * Force-cancelling the command frees the request, so we must
+ * return BLK_EH_NOT_HANDLED.
+ */
+ nvme_cancel_queue_ios(nvmeq->hctx, req, nvmeq, reserved);
+ ret = BLK_EH_NOT_HANDLED;
+ } else
+ nvme_abort_req(req);
+ spin_unlock_irq(&nvmeq->q_lock);
+
+ return ret;
}
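This rework makes nvme_timeout() honour the blk-mq ownership contract: BLK_EH_RESET_TIMER re-arms the timer and leaves the request owned by the block layer (the normal case, while the abort is in flight), whereas BLK_EH_NOT_HANDLED must be returned once the handler has completed or freed the request itself, as the forced-cancel path above does. A schematic timeout handler under that contract (the helpers are hypothetical):
	static enum blk_eh_timer_return my_timeout(struct request *req, bool reserved)
	{
		if (!device_alive(req)) {		/* hypothetical predicate */
			cancel_and_complete(req);	/* we consumed the request... */
			return BLK_EH_NOT_HANDLED;	/* ...so blk-mq must not touch it */
		}
		start_async_abort(req);			/* request stays with blk-mq */
		return BLK_EH_RESET_TIMER;
	}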
static void nvme_free_queue(struct nvme_queue *nvmeq)
*/
static int nvme_suspend_queue(struct nvme_queue *nvmeq)
{
- int vector = nvmeq->dev->entry[nvmeq->cq_vector].vector;
+ int vector;
spin_lock_irq(&nvmeq->q_lock);
+ if (nvmeq->cq_vector == -1) {
+ spin_unlock_irq(&nvmeq->q_lock);
+ return 1;
+ }
+ vector = nvmeq->dev->entry[nvmeq->cq_vector].vector;
nvmeq->dev->online_queues--;
+ nvmeq->cq_vector = -1;
spin_unlock_irq(&nvmeq->q_lock);
irq_set_affinity_hint(vector, NULL);
adapter_delete_sq(dev, qid);
adapter_delete_cq(dev, qid);
}
+ if (!qid && dev->admin_q)
+ blk_mq_freeze_queue_start(dev->admin_q);
nvme_clear_queue(nvmeq);
}
static struct nvme_queue *nvme_alloc_queue(struct nvme_dev *dev, int qid,
- int depth, int vector)
+ int depth)
{
struct device *dmadev = &dev->pci_dev->dev;
struct nvme_queue *nvmeq = kzalloc(sizeof(*nvmeq), GFP_KERNEL);
nvmeq->cq_phase = 1;
nvmeq->q_db = &dev->dbs[qid * 2 * dev->db_stride];
nvmeq->q_depth = depth;
- nvmeq->cq_vector = vector;
nvmeq->qid = qid;
dev->queue_count++;
dev->queues[qid] = nvmeq;
struct nvme_dev *dev = nvmeq->dev;
int result;
+ nvmeq->cq_vector = qid - 1;
result = adapter_alloc_cq(dev, qid, nvmeq);
if (result < 0)
return result;
.timeout = nvme_timeout,
};
+static void nvme_dev_remove_admin(struct nvme_dev *dev)
+{
+ if (dev->admin_q && !blk_queue_dying(dev->admin_q)) {
+ blk_cleanup_queue(dev->admin_q);
+ blk_mq_free_tag_set(&dev->admin_tagset);
+ }
+}
+
static int nvme_alloc_admin_tags(struct nvme_dev *dev)
{
if (!dev->admin_q) {
return -ENOMEM;
dev->admin_q = blk_mq_init_queue(&dev->admin_tagset);
- if (!dev->admin_q) {
+ if (IS_ERR(dev->admin_q)) {
blk_mq_free_tag_set(&dev->admin_tagset);
return -ENOMEM;
}
- }
+ if (!blk_get_queue(dev->admin_q)) {
+ nvme_dev_remove_admin(dev);
+ return -ENODEV;
+ }
+ } else
+ blk_mq_unfreeze_queue(dev->admin_q);
return 0;
}
-static void nvme_free_admin_tags(struct nvme_dev *dev)
-{
- if (dev->admin_q)
- blk_mq_free_tag_set(&dev->admin_tagset);
-}
-
static int nvme_configure_admin_queue(struct nvme_dev *dev)
{
int result;
nvmeq = dev->queues[0];
if (!nvmeq) {
- nvmeq = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH, 0);
+ nvmeq = nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);
if (!nvmeq)
return -ENOMEM;
}
if (result)
goto free_nvmeq;
- result = nvme_alloc_admin_tags(dev);
- if (result)
- goto free_nvmeq;
-
+ nvmeq->cq_vector = 0;
result = queue_request_irq(dev, nvmeq, nvmeq->irqname);
if (result)
- goto free_tags;
+ goto free_nvmeq;
return result;
- free_tags:
- nvme_free_admin_tags(dev);
free_nvmeq:
nvme_free_queues(dev, 0);
return result;
unsigned i;
for (i = dev->queue_count; i <= dev->max_qid; i++)
- if (!nvme_alloc_queue(dev, i, dev->q_depth, i - 1))
+ if (!nvme_alloc_queue(dev, i, dev->q_depth))
break;
for (i = dev->online_queues; i <= dev->queue_count - 1; i++)
break;
if (!schedule_timeout(ADMIN_TIMEOUT) ||
fatal_signal_pending(current)) {
+ /*
+ * Disable the controller first since we can't trust it
+ * at this point, but leave the admin queue enabled
+ * until all queue deletion requests are flushed.
+ * FIXME: This may take a while if there are more h/w
+ * queues than admin tags.
+ */
set_current_state(TASK_RUNNING);
-
nvme_disable_ctrl(dev, readq(&dev->bar->cap));
- nvme_disable_queue(dev, 0);
-
- send_sig(SIGKILL, dq->worker->task, 1);
+ nvme_clear_queue(dev->queues[0]);
flush_kthread_worker(dq->worker);
+ nvme_disable_queue(dev, 0);
return;
}
}
{
struct nvme_queue *nvmeq = container_of(work, struct nvme_queue,
cmdinfo.work);
- allow_signal(SIGKILL);
if (nvme_delete_sq(nvmeq))
nvme_del_queue_end(nvmeq);
}
kthread_stop(tmp);
}
+static void nvme_freeze_queues(struct nvme_dev *dev)
+{
+ struct nvme_ns *ns;
+
+ list_for_each_entry(ns, &dev->namespaces, list) {
+ blk_mq_freeze_queue_start(ns->queue);
+
+ spin_lock(ns->queue->queue_lock);
+ queue_flag_set(QUEUE_FLAG_STOPPED, ns->queue);
+ spin_unlock(ns->queue->queue_lock);
+
+ blk_mq_cancel_requeue_work(ns->queue);
+ blk_mq_stop_hw_queues(ns->queue);
+ }
+}
+
+static void nvme_unfreeze_queues(struct nvme_dev *dev)
+{
+ struct nvme_ns *ns;
+
+ list_for_each_entry(ns, &dev->namespaces, list) {
+ queue_flag_clear_unlocked(QUEUE_FLAG_STOPPED, ns->queue);
+ blk_mq_unfreeze_queue(ns->queue);
+ blk_mq_start_stopped_hw_queues(ns->queue, true);
+ blk_mq_kick_requeue_list(ns->queue);
+ }
+}
+
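nvme_freeze_queues() and nvme_unfreeze_queues() bracket a controller reset: freezing marks every namespace queue stopped and cancels requeue work so no new commands reach dead hardware, and unfreezing restarts the stopped hw queues and re-kicks the requeue list once the controller is back. A condensed sketch of how the two pair up around the shutdown/resume paths that follow (control flow only; nvme_dev_resume_ok is a hypothetical stand-in for the driver's reset work):
	nvme_freeze_queues(dev);		/* quiesce all namespace queues */
	nvme_dev_shutdown(dev);			/* tear the controller down */
	if (nvme_dev_resume_ok(dev))		/* hypothetical: bring-up succeeded */
		nvme_unfreeze_queues(dev);	/* restart stopped hw queues */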
static void nvme_dev_shutdown(struct nvme_dev *dev)
{
int i;
dev->initialized = 0;
nvme_dev_list_remove(dev);
- if (dev->bar)
+ if (dev->bar) {
+ nvme_freeze_queues(dev);
csts = readl(&dev->bar->csts);
+ }
if (csts & NVME_CSTS_CFS || !(csts & NVME_CSTS_RDY)) {
for (i = dev->queue_count - 1; i >= 0; i--) {
struct nvme_queue *nvmeq = dev->queues[i];
nvme_dev_unmap(dev);
}
-static void nvme_dev_remove_admin(struct nvme_dev *dev)
-{
- if (dev->admin_q && !blk_queue_dying(dev->admin_q))
- blk_cleanup_queue(dev->admin_q);
-}
-
static void nvme_dev_remove(struct nvme_dev *dev)
{
struct nvme_ns *ns;
list_for_each_entry(ns, &dev->namespaces, list) {
if (ns->disk->flags & GENHD_FL_UP)
del_gendisk(ns->disk);
- if (!blk_queue_dying(ns->queue))
+ if (!blk_queue_dying(ns->queue)) {
+ blk_mq_abort_requeue_list(ns->queue);
blk_cleanup_queue(ns->queue);
+ }
}
}
nvme_free_namespaces(dev);
nvme_release_instance(dev);
blk_mq_free_tag_set(&dev->tagset);
+ blk_put_queue(dev->admin_q);
kfree(dev->queues);
kfree(dev->entry);
kfree(dev);
}
nvme_init_queue(dev->queues[0], 0);
+ result = nvme_alloc_admin_tags(dev);
+ if (result)
+ goto disable;
result = nvme_setup_io_queues(dev);
if (result)
- goto disable;
+ goto free_tags;
nvme_set_irq_hints(dev);
return result;
+ free_tags:
+ nvme_dev_remove_admin(dev);
disable:
nvme_disable_queue(dev, 0);
nvme_dev_list_remove(dev);
dev->reset_workfn = nvme_remove_disks;
queue_work(nvme_workq, &dev->reset_work);
spin_unlock(&dev_list_lock);
+ } else {
+ nvme_unfreeze_queues(dev);
+ nvme_set_irq_hints(dev);
}
dev->initialized = 1;
return 0;
pci_set_drvdata(pdev, NULL);
flush_work(&dev->reset_work);
misc_deregister(&dev->miscdev);
- nvme_dev_remove(dev);
nvme_dev_shutdown(dev);
+ nvme_dev_remove(dev);
nvme_dev_remove_admin(dev);
nvme_free_queues(dev, 0);
- nvme_free_admin_tags(dev);
nvme_release_prp_pools(dev);
kref_put(&dev->kref, nvme_free_dev);
}
goto out_put_disk;
q = vblk->disk->queue = blk_mq_init_queue(&vblk->tag_set);
- if (!q) {
+ if (IS_ERR(q)) {
err = -ENOMEM;
goto out_free_tags;
}
#define BTM_UPLD_SIZE 2312
/* Time to wait until Host Sleep state changes, in milliseconds */
-#define WAIT_UNTIL_HS_STATE_CHANGED 5000
+#define WAIT_UNTIL_HS_STATE_CHANGED msecs_to_jiffies(5000)
/* Time to wait for command response, in milliseconds */
-#define WAIT_UNTIL_CMD_RESP 5000
+#define WAIT_UNTIL_CMD_RESP msecs_to_jiffies(5000)
enum rdwr_status {
RDWR_STATUS_SUCCESS = 0,
#ifdef CONFIG_DEBUG_FS
void *debugfs_data;
#endif
+ bool surprise_removed;
};
#define MRVL_VENDOR_PKT 0xFE
struct sk_buff *skb;
struct hci_command_hdr *hdr;
+ if (priv->surprise_removed) {
+ BT_ERR("Card is removed");
+ return -EFAULT;
+ }
+
skb = bt_skb_alloc(HCI_COMMAND_HDR_SIZE + len, GFP_ATOMIC);
if (skb == NULL) {
BT_ERR("No free skb");
wake_up_interruptible(&priv->main_thread.wait_q);
if (!wait_event_interruptible_timeout(priv->adapter->cmd_wait_q,
- priv->adapter->cmd_complete,
- msecs_to_jiffies(WAIT_UNTIL_CMD_RESP)))
+ priv->adapter->cmd_complete ||
+ priv->surprise_removed,
+ WAIT_UNTIL_CMD_RESP))
return -ETIMEDOUT;
+ if (priv->surprise_removed)
+ return -EFAULT;
+
return 0;
}
}
ret = wait_event_interruptible_timeout(adapter->event_hs_wait_q,
- adapter->hs_state,
- msecs_to_jiffies(WAIT_UNTIL_HS_STATE_CHANGED));
- if (ret < 0) {
+ adapter->hs_state ||
+ priv->surprise_removed,
+ WAIT_UNTIL_HS_STATE_CHANGED);
+ if (ret < 0 || priv->surprise_removed) {
BT_ERR("event_hs_wait_q terminated (%d): %d,%d,%d",
ret, adapter->hs_state, adapter->ps_state,
adapter->wakeup_tries);
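All of the btmrvl hunks apply one idea: every sleep that used to wait only for a completion flag now also watches priv->surprise_removed, and the removal path sets that flag before waking the waiters, so a yanked card fails commands immediately instead of riding out a five-second timeout. The generic shape of the pattern (field names follow the driver; wq, done and timeout are placeholders):
	/* waiter: wake on completion *or* removal */
	if (!wait_event_interruptible_timeout(wq,
					      done || priv->surprise_removed,
					      timeout))
		return -ETIMEDOUT;
	if (priv->surprise_removed)
		return -EFAULT;

	/* remover: set the flag first, then wake everyone */
	priv->surprise_removed = true;
	wake_up_interruptible(&wq);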
static int btmrvl_setup(struct hci_dev *hdev)
{
struct btmrvl_private *priv = hci_get_drvdata(hdev);
+ int ret;
- btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
+ ret = btmrvl_send_module_cfg_cmd(priv, MODULE_BRINGUP_REQ);
+ if (ret)
+ return ret;
priv->btmrvl_dev.gpio_gap = 0xffff;
add_wait_queue(&thread->wait_q, &wait);
set_current_state(TASK_INTERRUPTIBLE);
- if (kthread_should_stop()) {
+ if (kthread_should_stop() || priv->surprise_removed) {
BT_DBG("main_thread: break from main thread");
break;
}
BT_DBG("main_thread woke up");
+ if (kthread_should_stop() || priv->surprise_removed) {
+ BT_DBG("main_thread: break from main thread");
+ break;
+ }
+
spin_lock_irqsave(&priv->driver_lock, flags);
if (adapter->int_count) {
adapter->int_count = 0;
offset += txlen;
} while (true);
- BT_DBG("FW download over, size %d bytes", offset);
+ BT_INFO("FW download over, size %d bytes", offset);
ret = 0;
priv = card->priv;
+ if (priv->surprise_removed)
+ return;
+
if (card->reg->int_read_to_clear)
ret = btmrvl_sdio_read_to_clear(card, &ireg);
else
btmrvl_sdio_disable_host_int(card);
}
BT_DBG("unregester dev");
+ card->priv->surprise_removed = true;
btmrvl_sdio_unregister_dev(card);
btmrvl_remove_card(card->priv);
}
#define BTUSB_INTEL_BOOT 0x200
#define BTUSB_BCM_PATCHRAM 0x400
#define BTUSB_MARVELL 0x800
-#define BTUSB_AVM 0x1000
+#define BTUSB_SWAVE 0x1000
static const struct usb_device_id btusb_table[] = {
/* Generic Bluetooth USB device */
{ USB_DEVICE(0x05ac, 0x8281) },
/* AVM BlueFRITZ! USB v2.0 */
- { USB_DEVICE(0x057c, 0x3800), .driver_info = BTUSB_AVM },
+ { USB_DEVICE(0x057c, 0x3800), .driver_info = BTUSB_SWAVE },
/* Bluetooth Ultraport Module from IBM */
{ USB_DEVICE(0x04bf, 0x030a) },
/* CONWISE Technology based adapters with buggy SCO support */
{ USB_DEVICE(0x0e5e, 0x6622), .driver_info = BTUSB_BROKEN_ISOC },
+ /* Roper Class 1 Bluetooth Dongle (Silicon Wave based) */
+ { USB_DEVICE(0x1300, 0x0001), .driver_info = BTUSB_SWAVE },
+
/* Digianswer devices */
{ USB_DEVICE(0x08fd, 0x0001), .driver_info = BTUSB_DIGIANSWER },
{ USB_DEVICE(0x08fd, 0x0002), .driver_info = BTUSB_IGNORE },
int isoc_altsetting;
int suspend_count;
+ int (*recv_event)(struct hci_dev *hdev, struct sk_buff *skb);
int (*recv_bulk)(struct btusb_data *data, void *buffer, int count);
};
if (bt_cb(skb)->expect == 0) {
/* Complete frame */
- hci_recv_frame(data->hdev, skb);
+ data->recv_event(data->hdev, skb);
skb = NULL;
}
}
init_usb_anchor(&data->isoc_anchor);
spin_lock_init(&data->rxlock);
+ data->recv_event = hci_recv_frame;
data->recv_bulk = btusb_recv_bulk;
hdev = hci_alloc_dev();
if (id->driver_info & BTUSB_MARVELL)
hdev->set_bdaddr = btusb_set_bdaddr_marvell;
- if (id->driver_info & BTUSB_AVM)
+ if (id->driver_info & BTUSB_SWAVE) {
+ set_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks);
set_bit(HCI_QUIRK_BROKEN_LOCAL_COMMANDS, &hdev->quirks);
+ }
if (id->driver_info & BTUSB_INTEL_BOOT)
set_bit(HCI_QUIRK_RAW_DEVICE, &hdev->quirks);
do_gettimeofday(&t);
pr_info("**Enqueue %02x %02x: %ld.%6.6ld\n",
- msg->data[0], msg->data[1], t.tv_sec, t.tv_usec);
+ msg->data[0], msg->data[1],
+ (long) t.tv_sec, (long) t.tv_usec);
}
}
#include <linux/cpu.h>
#include <linux/cpu_pm.h>
#include <linux/clockchips.h>
+#include <linux/clocksource.h>
#include <linux/interrupt.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
/* Register the CP15 based counter if we have one */
if (type & ARCH_CP15_TIMER) {
- if (arch_timer_use_virtual)
+ if (IS_ENABLED(CONFIG_ARM64) || arch_timer_use_virtual)
arch_timer_read_counter = arch_counter_get_cntvct;
else
arch_timer_read_counter = arch_counter_get_cntpct;
#define DLN2_GPIO_MAX_PINS 32
-struct dln2_irq_work {
- struct work_struct work;
- struct dln2_gpio *dln2;
- int pin;
- int type;
-};
-
struct dln2_gpio {
struct platform_device *pdev;
struct gpio_chip gpio;
*/
DECLARE_BITMAP(output_enabled, DLN2_GPIO_MAX_PINS);
- DECLARE_BITMAP(irqs_masked, DLN2_GPIO_MAX_PINS);
- DECLARE_BITMAP(irqs_enabled, DLN2_GPIO_MAX_PINS);
- DECLARE_BITMAP(irqs_pending, DLN2_GPIO_MAX_PINS);
- struct dln2_irq_work *irq_work;
+ /* active IRQs - not synced to hardware */
+ DECLARE_BITMAP(unmasked_irqs, DLN2_GPIO_MAX_PINS);
+ /* active IRQs - synced to hardware */
+ DECLARE_BITMAP(enabled_irqs, DLN2_GPIO_MAX_PINS);
+ int irq_type[DLN2_GPIO_MAX_PINS];
+ struct mutex irq_lock;
};
struct dln2_gpio_pin {
return !!ret;
}
-static void dln2_gpio_pin_set_out_val(struct dln2_gpio *dln2,
- unsigned int pin, int value)
+static int dln2_gpio_pin_set_out_val(struct dln2_gpio *dln2,
+ unsigned int pin, int value)
{
struct dln2_gpio_pin_val req = {
.pin = cpu_to_le16(pin),
.value = value,
};
- dln2_transfer_tx(dln2->pdev, DLN2_GPIO_PIN_SET_OUT_VAL, &req,
- sizeof(req));
+ return dln2_transfer_tx(dln2->pdev, DLN2_GPIO_PIN_SET_OUT_VAL, &req,
+ sizeof(req));
}
#define DLN2_GPIO_DIRECTION_IN 0
static int dln2_gpio_direction_output(struct gpio_chip *chip, unsigned offset,
int value)
{
+ struct dln2_gpio *dln2 = container_of(chip, struct dln2_gpio, gpio);
+ int ret;
+
+ ret = dln2_gpio_pin_set_out_val(dln2, offset, value);
+ if (ret < 0)
+ return ret;
+
return dln2_gpio_set_direction(chip, offset, DLN2_GPIO_DIRECTION_OUT);
}
&req, sizeof(req));
}
-static void dln2_irq_work(struct work_struct *w)
-{
- struct dln2_irq_work *iw = container_of(w, struct dln2_irq_work, work);
- struct dln2_gpio *dln2 = iw->dln2;
- u8 type = iw->type & DLN2_GPIO_EVENT_MASK;
-
- if (test_bit(iw->pin, dln2->irqs_enabled))
- dln2_gpio_set_event_cfg(dln2, iw->pin, type, 0);
- else
- dln2_gpio_set_event_cfg(dln2, iw->pin, DLN2_GPIO_EVENT_NONE, 0);
-}
-
-static void dln2_irq_enable(struct irq_data *irqd)
-{
- struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
- struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
- int pin = irqd_to_hwirq(irqd);
-
- set_bit(pin, dln2->irqs_enabled);
- schedule_work(&dln2->irq_work[pin].work);
-}
-
-static void dln2_irq_disable(struct irq_data *irqd)
+static void dln2_irq_unmask(struct irq_data *irqd)
{
struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
int pin = irqd_to_hwirq(irqd);
- clear_bit(pin, dln2->irqs_enabled);
- schedule_work(&dln2->irq_work[pin].work);
+ set_bit(pin, dln2->unmasked_irqs);
}
static void dln2_irq_mask(struct irq_data *irqd)
struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
int pin = irqd_to_hwirq(irqd);
- set_bit(pin, dln2->irqs_masked);
-}
-
-static void dln2_irq_unmask(struct irq_data *irqd)
-{
- struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
- struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
- struct device *dev = dln2->gpio.dev;
- int pin = irqd_to_hwirq(irqd);
-
- if (test_and_clear_bit(pin, dln2->irqs_pending)) {
- int irq;
-
- irq = irq_find_mapping(dln2->gpio.irqdomain, pin);
- if (!irq) {
- dev_err(dev, "pin %d not mapped to IRQ\n", pin);
- return;
- }
-
- generic_handle_irq(irq);
- }
+ clear_bit(pin, dln2->unmasked_irqs);
}
static int dln2_irq_set_type(struct irq_data *irqd, unsigned type)
switch (type) {
case IRQ_TYPE_LEVEL_HIGH:
- dln2->irq_work[pin].type = DLN2_GPIO_EVENT_LVL_HIGH;
+ dln2->irq_type[pin] = DLN2_GPIO_EVENT_LVL_HIGH;
break;
case IRQ_TYPE_LEVEL_LOW:
- dln2->irq_work[pin].type = DLN2_GPIO_EVENT_LVL_LOW;
+ dln2->irq_type[pin] = DLN2_GPIO_EVENT_LVL_LOW;
break;
case IRQ_TYPE_EDGE_BOTH:
- dln2->irq_work[pin].type = DLN2_GPIO_EVENT_CHANGE;
+ dln2->irq_type[pin] = DLN2_GPIO_EVENT_CHANGE;
break;
case IRQ_TYPE_EDGE_RISING:
- dln2->irq_work[pin].type = DLN2_GPIO_EVENT_CHANGE_RISING;
+ dln2->irq_type[pin] = DLN2_GPIO_EVENT_CHANGE_RISING;
break;
case IRQ_TYPE_EDGE_FALLING:
- dln2->irq_work[pin].type = DLN2_GPIO_EVENT_CHANGE_FALLING;
+ dln2->irq_type[pin] = DLN2_GPIO_EVENT_CHANGE_FALLING;
break;
default:
return -EINVAL;
return 0;
}
+static void dln2_irq_bus_lock(struct irq_data *irqd)
+{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
+ struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
+
+ mutex_lock(&dln2->irq_lock);
+}
+
+static void dln2_irq_bus_unlock(struct irq_data *irqd)
+{
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(irqd);
+ struct dln2_gpio *dln2 = container_of(gc, struct dln2_gpio, gpio);
+ int pin = irqd_to_hwirq(irqd);
+ int enabled, unmasked;
+ unsigned type;
+ int ret;
+
+ enabled = test_bit(pin, dln2->enabled_irqs);
+ unmasked = test_bit(pin, dln2->unmasked_irqs);
+
+ if (enabled != unmasked) {
+ if (unmasked) {
+ type = dln2->irq_type[pin] & DLN2_GPIO_EVENT_MASK;
+ set_bit(pin, dln2->enabled_irqs);
+ } else {
+ type = DLN2_GPIO_EVENT_NONE;
+ clear_bit(pin, dln2->enabled_irqs);
+ }
+
+ ret = dln2_gpio_set_event_cfg(dln2, pin, type, 0);
+ if (ret)
+ dev_err(dln2->gpio.dev, "failed to set event\n");
+ }
+
+ mutex_unlock(&dln2->irq_lock);
+}
+
static struct irq_chip dln2_gpio_irqchip = {
.name = "dln2-irq",
- .irq_enable = dln2_irq_enable,
- .irq_disable = dln2_irq_disable,
.irq_mask = dln2_irq_mask,
.irq_unmask = dln2_irq_unmask,
.irq_set_type = dln2_irq_set_type,
+ .irq_bus_lock = dln2_irq_bus_lock,
+ .irq_bus_sync_unlock = dln2_irq_bus_unlock,
};
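The rewrite retires the per-pin workqueue in favour of the irq_bus_lock/irq_bus_sync_unlock pair, the standard irqchip contract for controllers behind a slow bus (here USB): irq_mask/irq_unmask run in atomic context and only flip in-memory bitmaps, while the sleeping bus transfer happens once per update batch in sync_unlock. A minimal sketch for a hypothetical slow-bus chip (the slowchip_* names, including slowchip_push_mask_to_hw(), are illustrative):
	struct slowchip {
		struct mutex lock;		/* serialises bus updates */
		DECLARE_BITMAP(unmasked, 32);
	};

	static void slowchip_irq_unmask(struct irq_data *d)
	{
		struct slowchip *c = irq_data_get_irq_chip_data(d);

		set_bit(irqd_to_hwirq(d), c->unmasked);	/* atomic ctx: no I/O */
	}

	static void slowchip_irq_bus_lock(struct irq_data *d)
	{
		struct slowchip *c = irq_data_get_irq_chip_data(d);

		mutex_lock(&c->lock);
	}

	static void slowchip_irq_bus_sync_unlock(struct irq_data *d)
	{
		struct slowchip *c = irq_data_get_irq_chip_data(d);

		slowchip_push_mask_to_hw(c);	/* may sleep: one bus transfer */
		mutex_unlock(&c->lock);
	}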
static void dln2_gpio_event(struct platform_device *pdev, u16 echo,
return;
}
- if (!test_bit(pin, dln2->irqs_enabled))
- return;
- if (test_bit(pin, dln2->irqs_masked)) {
- set_bit(pin, dln2->irqs_pending);
- return;
- }
-
- switch (dln2->irq_work[pin].type) {
+ switch (dln2->irq_type[pin]) {
case DLN2_GPIO_EVENT_CHANGE_RISING:
if (event->value)
generic_handle_irq(irq);
struct dln2_gpio *dln2;
struct device *dev = &pdev->dev;
int pins;
- int i, ret;
+ int ret;
pins = dln2_gpio_get_pin_count(pdev);
if (pins < 0) {
if (!dln2)
return -ENOMEM;
- dln2->irq_work = devm_kcalloc(&pdev->dev, pins,
- sizeof(struct dln2_irq_work), GFP_KERNEL);
- if (!dln2->irq_work)
- return -ENOMEM;
- for (i = 0; i < pins; i++) {
- INIT_WORK(&dln2->irq_work[i].work, dln2_irq_work);
- dln2->irq_work[i].pin = i;
- dln2->irq_work[i].dln2 = dln2;
- }
+ mutex_init(&dln2->irq_lock);
dln2->pdev = pdev;
static int dln2_gpio_remove(struct platform_device *pdev)
{
struct dln2_gpio *dln2 = platform_get_drvdata(pdev);
- int i;
dln2_unregister_event_cb(pdev, DLN2_GPIO_CONDITION_MET_EV);
- for (i = 0; i < dln2->gpio.ngpio; i++)
- flush_work(&dln2->irq_work[i].work);
gpiochip_remove(&dln2->gpio);
return 0;
err = gpiochip_add(gc);
if (err) {
dev_err(&ofdev->dev, "Could not add gpiochip\n");
- irq_domain_remove(priv->domain);
+ if (priv->domain)
+ irq_domain_remove(priv->domain);
return err;
}
obj-$(CONFIG_DRM_TTM) += ttm/
obj-$(CONFIG_DRM_TDFX) += tdfx/
obj-$(CONFIG_DRM_R128) += r128/
+obj-$(CONFIG_HSA_AMD) += amd/amdkfd/
obj-$(CONFIG_DRM_RADEON)+= radeon/
obj-$(CONFIG_DRM_MGA) += mga/
obj-$(CONFIG_DRM_I810) += i810/
obj-y += i2c/
obj-y += panel/
obj-y += bridge/
-obj-$(CONFIG_HSA_AMD) += amd/amdkfd/
#include <uapi/linux/kfd_ioctl.h>
#include <linux/time.h>
#include <linux/mm.h>
-#include <linux/uaccess.h>
#include <uapi/asm-generic/mman-common.h>
#include <asm/processor.h>
#include "kfd_priv.h"
return 0;
}
-static long kfd_ioctl_get_version(struct file *filep, struct kfd_process *p,
- void __user *arg)
+static int kfd_ioctl_get_version(struct file *filep, struct kfd_process *p,
+ void *data)
{
- struct kfd_ioctl_get_version_args args;
+ struct kfd_ioctl_get_version_args *args = data;
int err = 0;
- args.major_version = KFD_IOCTL_MAJOR_VERSION;
- args.minor_version = KFD_IOCTL_MINOR_VERSION;
-
- if (copy_to_user(arg, &args, sizeof(args)))
- err = -EFAULT;
+ args->major_version = KFD_IOCTL_MAJOR_VERSION;
+ args->minor_version = KFD_IOCTL_MINOR_VERSION;
return err;
}
return 0;
}
-static long kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
- void __user *arg)
+static int kfd_ioctl_create_queue(struct file *filep, struct kfd_process *p,
+ void *data)
{
- struct kfd_ioctl_create_queue_args args;
+ struct kfd_ioctl_create_queue_args *args = data;
struct kfd_dev *dev;
int err = 0;
unsigned int queue_id;
memset(&q_properties, 0, sizeof(struct queue_properties));
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
-
pr_debug("kfd: creating queue ioctl\n");
- err = set_queue_properties_from_user(&q_properties, &args);
+ err = set_queue_properties_from_user(&q_properties, args);
if (err)
return err;
- dev = kfd_device_by_id(args.gpu_id);
+ dev = kfd_device_by_id(args->gpu_id);
if (dev == NULL)
return -EINVAL;
pdd = kfd_bind_process_to_device(dev, p);
if (IS_ERR(pdd)) {
- err = PTR_ERR(pdd);
+ err = -ESRCH;
goto err_bind_process;
}
if (err != 0)
goto err_create_queue;
- args.queue_id = queue_id;
+ args->queue_id = queue_id;
/* Return gpu_id as doorbell offset for mmap usage */
- args.doorbell_offset = args.gpu_id << PAGE_SHIFT;
-
- if (copy_to_user(arg, &args, sizeof(args))) {
- err = -EFAULT;
- goto err_copy_args_out;
- }
+ args->doorbell_offset = args->gpu_id << PAGE_SHIFT;
mutex_unlock(&p->mutex);
- pr_debug("kfd: queue id %d was created successfully\n", args.queue_id);
+ pr_debug("kfd: queue id %d was created successfully\n", args->queue_id);
pr_debug("ring buffer address == 0x%016llX\n",
- args.ring_base_address);
+ args->ring_base_address);
pr_debug("read ptr address == 0x%016llX\n",
- args.read_pointer_address);
+ args->read_pointer_address);
pr_debug("write ptr address == 0x%016llX\n",
- args.write_pointer_address);
+ args->write_pointer_address);
return 0;
-err_copy_args_out:
- pqm_destroy_queue(&p->pqm, queue_id);
err_create_queue:
err_bind_process:
mutex_unlock(&p->mutex);
}
static int kfd_ioctl_destroy_queue(struct file *filp, struct kfd_process *p,
- void __user *arg)
+ void *data)
{
int retval;
- struct kfd_ioctl_destroy_queue_args args;
-
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
+ struct kfd_ioctl_destroy_queue_args *args = data;
pr_debug("kfd: destroying queue id %d for PASID %d\n",
- args.queue_id,
+ args->queue_id,
p->pasid);
mutex_lock(&p->mutex);
- retval = pqm_destroy_queue(&p->pqm, args.queue_id);
+ retval = pqm_destroy_queue(&p->pqm, args->queue_id);
mutex_unlock(&p->mutex);
return retval;
}
static int kfd_ioctl_update_queue(struct file *filp, struct kfd_process *p,
- void __user *arg)
+ void *data)
{
int retval;
- struct kfd_ioctl_update_queue_args args;
+ struct kfd_ioctl_update_queue_args *args = data;
struct queue_properties properties;
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
-
- if (args.queue_percentage > KFD_MAX_QUEUE_PERCENTAGE) {
+ if (args->queue_percentage > KFD_MAX_QUEUE_PERCENTAGE) {
pr_err("kfd: queue percentage must be between 0 to KFD_MAX_QUEUE_PERCENTAGE\n");
return -EINVAL;
}
- if (args.queue_priority > KFD_MAX_QUEUE_PRIORITY) {
+ if (args->queue_priority > KFD_MAX_QUEUE_PRIORITY) {
pr_err("kfd: queue priority must be between 0 to KFD_MAX_QUEUE_PRIORITY\n");
return -EINVAL;
}
- if ((args.ring_base_address) &&
+ if ((args->ring_base_address) &&
(!access_ok(VERIFY_WRITE,
- (const void __user *) args.ring_base_address,
+ (const void __user *) args->ring_base_address,
sizeof(uint64_t)))) {
pr_err("kfd: can't access ring base address\n");
return -EFAULT;
}
- if (!is_power_of_2(args.ring_size) && (args.ring_size != 0)) {
+ if (!is_power_of_2(args->ring_size) && (args->ring_size != 0)) {
pr_err("kfd: ring size must be a power of 2 or 0\n");
return -EINVAL;
}
- properties.queue_address = args.ring_base_address;
- properties.queue_size = args.ring_size;
- properties.queue_percent = args.queue_percentage;
- properties.priority = args.queue_priority;
+ properties.queue_address = args->ring_base_address;
+ properties.queue_size = args->ring_size;
+ properties.queue_percent = args->queue_percentage;
+ properties.priority = args->queue_priority;
pr_debug("kfd: updating queue id %d for PASID %d\n",
- args.queue_id, p->pasid);
+ args->queue_id, p->pasid);
mutex_lock(&p->mutex);
- retval = pqm_update_queue(&p->pqm, args.queue_id, &properties);
+ retval = pqm_update_queue(&p->pqm, args->queue_id, &properties);
mutex_unlock(&p->mutex);
return retval;
}
-static long kfd_ioctl_set_memory_policy(struct file *filep,
- struct kfd_process *p, void __user *arg)
+static int kfd_ioctl_set_memory_policy(struct file *filep,
+ struct kfd_process *p, void *data)
{
- struct kfd_ioctl_set_memory_policy_args args;
+ struct kfd_ioctl_set_memory_policy_args *args = data;
struct kfd_dev *dev;
int err = 0;
struct kfd_process_device *pdd;
enum cache_policy default_policy, alternate_policy;
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
-
- if (args.default_policy != KFD_IOC_CACHE_POLICY_COHERENT
- && args.default_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
+ if (args->default_policy != KFD_IOC_CACHE_POLICY_COHERENT
+ && args->default_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
return -EINVAL;
}
- if (args.alternate_policy != KFD_IOC_CACHE_POLICY_COHERENT
- && args.alternate_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
+ if (args->alternate_policy != KFD_IOC_CACHE_POLICY_COHERENT
+ && args->alternate_policy != KFD_IOC_CACHE_POLICY_NONCOHERENT) {
return -EINVAL;
}
- dev = kfd_device_by_id(args.gpu_id);
+ dev = kfd_device_by_id(args->gpu_id);
if (dev == NULL)
return -EINVAL;
pdd = kfd_bind_process_to_device(dev, p);
if (IS_ERR(pdd)) {
- err = PTR_ERR(pdd);
+ err = -ESRCH;
goto out;
}
- default_policy = (args.default_policy == KFD_IOC_CACHE_POLICY_COHERENT)
+ default_policy = (args->default_policy == KFD_IOC_CACHE_POLICY_COHERENT)
? cache_policy_coherent : cache_policy_noncoherent;
alternate_policy =
- (args.alternate_policy == KFD_IOC_CACHE_POLICY_COHERENT)
+ (args->alternate_policy == KFD_IOC_CACHE_POLICY_COHERENT)
? cache_policy_coherent : cache_policy_noncoherent;
if (!dev->dqm->set_cache_memory_policy(dev->dqm,
&pdd->qpd,
default_policy,
alternate_policy,
- (void __user *)args.alternate_aperture_base,
- args.alternate_aperture_size))
+ (void __user *)args->alternate_aperture_base,
+ args->alternate_aperture_size))
err = -EINVAL;
out:
return err;
}
-static long kfd_ioctl_get_clock_counters(struct file *filep,
- struct kfd_process *p, void __user *arg)
+static int kfd_ioctl_get_clock_counters(struct file *filep,
+ struct kfd_process *p, void *data)
{
- struct kfd_ioctl_get_clock_counters_args args;
+ struct kfd_ioctl_get_clock_counters_args *args = data;
struct kfd_dev *dev;
struct timespec time;
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
-
- dev = kfd_device_by_id(args.gpu_id);
+ dev = kfd_device_by_id(args->gpu_id);
if (dev == NULL)
return -EINVAL;
/* Reading GPU clock counter from KGD */
- args.gpu_clock_counter = kfd2kgd->get_gpu_clock_counter(dev->kgd);
+ args->gpu_clock_counter = kfd2kgd->get_gpu_clock_counter(dev->kgd);
/* No access to rdtsc. Using raw monotonic time */
getrawmonotonic(&time);
- args.cpu_clock_counter = (uint64_t)timespec_to_ns(&time);
+ args->cpu_clock_counter = (uint64_t)timespec_to_ns(&time);
get_monotonic_boottime(&time);
- args.system_clock_counter = (uint64_t)timespec_to_ns(&time);
+ args->system_clock_counter = (uint64_t)timespec_to_ns(&time);
/* Since the counter is in nano-seconds we use 1GHz frequency */
- args.system_clock_freq = 1000000000;
-
- if (copy_to_user(arg, &args, sizeof(args)))
- return -EFAULT;
+ args->system_clock_freq = 1000000000;
return 0;
}
static int kfd_ioctl_get_process_apertures(struct file *filp,
- struct kfd_process *p, void __user *arg)
+ struct kfd_process *p, void *data)
{
- struct kfd_ioctl_get_process_apertures_args args;
+ struct kfd_ioctl_get_process_apertures_args *args = data;
struct kfd_process_device_apertures *pAperture;
struct kfd_process_device *pdd;
dev_dbg(kfd_device, "get apertures for PASID %d", p->pasid);
- if (copy_from_user(&args, arg, sizeof(args)))
- return -EFAULT;
-
- args.num_of_nodes = 0;
+ args->num_of_nodes = 0;
mutex_lock(&p->mutex);
/* Run over all pdd of the process */
pdd = kfd_get_first_process_device_data(p);
do {
- pAperture = &args.process_apertures[args.num_of_nodes];
+ pAperture =
+ &args->process_apertures[args->num_of_nodes];
pAperture->gpu_id = pdd->dev->id;
pAperture->lds_base = pdd->lds_base;
pAperture->lds_limit = pdd->lds_limit;
pAperture->scratch_limit = pdd->scratch_limit;
dev_dbg(kfd_device,
- "node id %u\n", args.num_of_nodes);
+ "node id %u\n", args->num_of_nodes);
dev_dbg(kfd_device,
"gpu id %u\n", pdd->dev->id);
dev_dbg(kfd_device,
dev_dbg(kfd_device,
"scratch_limit %llX\n", pdd->scratch_limit);
- args.num_of_nodes++;
+ args->num_of_nodes++;
} while ((pdd = kfd_get_next_process_device_data(p, pdd)) != NULL &&
- (args.num_of_nodes < NUM_OF_SUPPORTED_GPUS));
+ (args->num_of_nodes < NUM_OF_SUPPORTED_GPUS));
}
mutex_unlock(&p->mutex);
- if (copy_to_user(arg, &args, sizeof(args)))
- return -EFAULT;
-
return 0;
}
+#define AMDKFD_IOCTL_DEF(ioctl, _func, _flags) \
+ [_IOC_NR(ioctl)] = {.cmd = ioctl, .func = _func, .flags = _flags, .cmd_drv = 0, .name = #ioctl}
+
+/** Ioctl table */
+static const struct amdkfd_ioctl_desc amdkfd_ioctls[] = {
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_GET_VERSION,
+ kfd_ioctl_get_version, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_CREATE_QUEUE,
+ kfd_ioctl_create_queue, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_DESTROY_QUEUE,
+ kfd_ioctl_destroy_queue, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_SET_MEMORY_POLICY,
+ kfd_ioctl_set_memory_policy, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_GET_CLOCK_COUNTERS,
+ kfd_ioctl_get_clock_counters, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_GET_PROCESS_APERTURES,
+ kfd_ioctl_get_process_apertures, 0),
+
+ AMDKFD_IOCTL_DEF(AMDKFD_IOC_UPDATE_QUEUE,
+ kfd_ioctl_update_queue, 0),
+};
+
+#define AMDKFD_CORE_IOCTL_COUNT ARRAY_SIZE(amdkfd_ioctls)
+
static long kfd_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
{
struct kfd_process *process;
- long err = -EINVAL;
+ amdkfd_ioctl_t *func;
+ const struct amdkfd_ioctl_desc *ioctl = NULL;
+ unsigned int nr = _IOC_NR(cmd);
+ char stack_kdata[128];
+ char *kdata = NULL;
+ unsigned int usize, asize;
+ int retcode = -EINVAL;
- dev_dbg(kfd_device,
- "ioctl cmd 0x%x (#%d), arg 0x%lx\n",
- cmd, _IOC_NR(cmd), arg);
+ if (nr >= AMDKFD_CORE_IOCTL_COUNT)
+ goto err_i1;
+
+ if ((nr >= AMDKFD_COMMAND_START) && (nr < AMDKFD_COMMAND_END)) {
+ u32 amdkfd_size;
+
+ ioctl = &amdkfd_ioctls[nr];
+
+ amdkfd_size = _IOC_SIZE(ioctl->cmd);
+ usize = asize = _IOC_SIZE(cmd);
+ if (amdkfd_size > asize)
+ asize = amdkfd_size;
+
+ cmd = ioctl->cmd;
+ } else
+ goto err_i1;
+
+ dev_dbg(kfd_device, "ioctl cmd 0x%x (#%d), arg 0x%lx\n", cmd, nr, arg);
process = kfd_get_process(current);
- if (IS_ERR(process))
- return PTR_ERR(process);
+ if (IS_ERR(process)) {
+ dev_dbg(kfd_device, "no process\n");
+ goto err_i1;
+ }
- switch (cmd) {
- case KFD_IOC_GET_VERSION:
- err = kfd_ioctl_get_version(filep, process, (void __user *)arg);
- break;
- case KFD_IOC_CREATE_QUEUE:
- err = kfd_ioctl_create_queue(filep, process,
- (void __user *)arg);
- break;
-
- case KFD_IOC_DESTROY_QUEUE:
- err = kfd_ioctl_destroy_queue(filep, process,
- (void __user *)arg);
- break;
-
- case KFD_IOC_SET_MEMORY_POLICY:
- err = kfd_ioctl_set_memory_policy(filep, process,
- (void __user *)arg);
- break;
-
- case KFD_IOC_GET_CLOCK_COUNTERS:
- err = kfd_ioctl_get_clock_counters(filep, process,
- (void __user *)arg);
- break;
-
- case KFD_IOC_GET_PROCESS_APERTURES:
- err = kfd_ioctl_get_process_apertures(filep, process,
- (void __user *)arg);
- break;
-
- case KFD_IOC_UPDATE_QUEUE:
- err = kfd_ioctl_update_queue(filep, process,
- (void __user *)arg);
- break;
-
- default:
- dev_err(kfd_device,
- "unknown ioctl cmd 0x%x, arg 0x%lx)\n",
- cmd, arg);
- err = -EINVAL;
- break;
+ /* Do not trust userspace, use our own definition */
+ func = ioctl->func;
+
+ if (unlikely(!func)) {
+ dev_dbg(kfd_device, "no function\n");
+ retcode = -EINVAL;
+ goto err_i1;
}
- if (err < 0)
- dev_err(kfd_device,
- "ioctl error %ld for ioctl cmd 0x%x (#%d)\n",
- err, cmd, _IOC_NR(cmd));
+ if (cmd & (IOC_IN | IOC_OUT)) {
+ if (asize <= sizeof(stack_kdata)) {
+ kdata = stack_kdata;
+ } else {
+ kdata = kmalloc(asize, GFP_KERNEL);
+ if (!kdata) {
+ retcode = -ENOMEM;
+ goto err_i1;
+ }
+ }
+ if (asize > usize)
+ memset(kdata + usize, 0, asize - usize);
+ }
- return err;
+ if (cmd & IOC_IN) {
+ if (copy_from_user(kdata, (void __user *)arg, usize) != 0) {
+ retcode = -EFAULT;
+ goto err_i1;
+ }
+ } else if (cmd & IOC_OUT) {
+ memset(kdata, 0, usize);
+ }
+
+ retcode = func(filep, process, kdata);
+
+ if (cmd & IOC_OUT)
+ if (copy_to_user((void __user *)arg, kdata, usize) != 0)
+ retcode = -EFAULT;
+
+err_i1:
+ if (!ioctl)
+ dev_dbg(kfd_device, "invalid ioctl: pid=%d, cmd=0x%02x, nr=0x%02x\n",
+ task_pid_nr(current), cmd, nr);
+
+ if (kdata != stack_kdata)
+ kfree(kdata);
+
+ if (retcode)
+ dev_dbg(kfd_device, "ret = %d\n", retcode);
+
+ return retcode;
}
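The rewritten dispatcher is the drm-style table-driven pattern: _IOC_NR(cmd) indexes amdkfd_ioctls[], the size bits of the command decide how much to copy, and copy_from_user()/copy_to_user() happen exactly once here, so every handler now takes a plain kernel pointer (the void *data signature threaded through the handlers above). Reduced to its essentials (error paths condensed, small-buffer fast path kept):
	unsigned int nr = _IOC_NR(cmd);
	const struct amdkfd_ioctl_desc *ioctl = &amdkfd_ioctls[nr];
	char stack_kdata[128], *kdata = stack_kdata;	/* kmalloc() if larger */

	if ((cmd & IOC_IN) &&
	    copy_from_user(kdata, (void __user *)arg, _IOC_SIZE(cmd)))
		return -EFAULT;

	retcode = ioctl->func(filep, process, kdata);	/* sees kernel memory only */

	if ((cmd & IOC_OUT) &&
	    copy_to_user((void __user *)arg, kdata, _IOC_SIZE(cmd)))
		retcode = -EFAULT;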
static int kfd_mmap(struct file *filp, struct vm_area_struct *vma)
{
int bit = qpd->vmid - KFD_VMID_START_OFFSET;
+ /* Release the vmid mapping */
+ set_pasid_vmid_mapping(dqm, 0, qpd->vmid);
+
set_bit(bit, (unsigned long *)&dqm->vmid_bitmap);
qpd->vmid = 0;
q->properties.vmid = 0;
return retval;
}
+ pr_debug("kfd: loading mqd to hqd on pipe (%d) queue (%d)\n",
+ q->pipe,
+ q->queue);
+
+ retval = mqd->load_mqd(mqd, q->mqd, q->pipe,
+ q->queue, q->properties.write_ptr);
+ if (retval != 0) {
+ deallocate_hqd(dqm, q);
+ mqd->uninit_mqd(mqd, q->mqd, q->mqd_mem_obj);
+ return retval;
+ }
+
return 0;
}
{
int retval;
struct mqd_manager *mqd;
+ bool prev_active = false;
BUG_ON(!dqm || !q || !q->mqd);
return -ENOMEM;
}
- retval = mqd->update_mqd(mqd, q->mqd, &q->properties);
if (q->properties.is_active == true)
+ prev_active = true;
+
+ /*
+ * Check the active state vs. the previous state and modify the
+ * queue counter accordingly.
+ */
+ retval = mqd->update_mqd(mqd, q->mqd, &q->properties);
+ if ((q->properties.is_active == true) && (prev_active == false))
dqm->queue_count++;
- else
+ else if ((q->properties.is_active == false) && (prev_active == true))
dqm->queue_count--;
if (sched_policy != KFD_SCHED_POLICY_NO_HWS)
uint32_t queue_id)
{
- return kfd2kgd->hqd_is_occupies(mm->dev->kgd, queue_address,
+ return kfd2kgd->hqd_is_occupied(mm->dev->kgd, queue_address,
pipe_id, queue_id);
}
{
pasid_limit = max_num_of_processes;
- pasid_bitmap = kzalloc(BITS_TO_LONGS(pasid_limit), GFP_KERNEL);
+ pasid_bitmap = kcalloc(BITS_TO_LONGS(pasid_limit), sizeof(long), GFP_KERNEL);
if (!pasid_bitmap)
return -ENOMEM;
bool is_32bit_user_mode;
};
+/**
+ * Ioctl function type.
+ *
+ * \param filep pointer to file structure.
+ * \param p amdkfd process pointer.
+ * \param data pointer to arg that was copied from user.
+ */
+typedef int amdkfd_ioctl_t(struct file *filep, struct kfd_process *p,
+ void *data);
+
+struct amdkfd_ioctl_desc {
+ unsigned int cmd;
+ int flags;
+ amdkfd_ioctl_t *func;
+ unsigned int cmd_drv;
+ const char *name;
+};
+
void kfd_process_create_wq(void);
void kfd_process_destroy_wq(void);
struct kfd_process *kfd_create_process(const struct task_struct *);
uint32_t i = 0;
list_for_each_entry(dev, &topology_device_list, list) {
- ret = kfd_build_sysfs_node_entry(dev, 0);
+ ret = kfd_build_sysfs_node_entry(dev, i);
if (ret < 0)
return ret;
i++;
int (*hqd_load)(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr);
- bool (*hqd_is_occupies)(struct kgd_dev *kgd, uint64_t queue_address,
+ bool (*hqd_is_occupied)(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id);
int (*hqd_destroy)(struct kgd_dev *kgd, uint32_t reset_type,
*/
struct workqueue_struct *dp_wq;
- uint32_t bios_vgacntr;
-
/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
struct {
int (*do_execbuf)(struct drm_device *dev, struct drm_file *file,
i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
struct drm_file *file)
{
+ struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_pwrite *args = data;
struct drm_i915_gem_object *obj;
int ret;
return -EFAULT;
}
+ intel_runtime_pm_get(dev_priv);
+
ret = i915_mutex_lock_interruptible(dev);
if (ret)
- return ret;
+ goto put_rpm;
obj = to_intel_bo(drm_gem_object_lookup(dev, file, args->handle));
if (&obj->base == NULL) {
drm_gem_object_unreference(&obj->base);
unlock:
mutex_unlock(&dev->struct_mutex);
+put_rpm:
+ intel_runtime_pm_put(dev_priv);
+
return ret;
}
if ((iir & flip_pending) == 0)
goto check_page_flip;
- intel_prepare_page_flip(dev, plane);
-
/* We detect FlipDone by looking for the change in PendingFlip from '1'
* to '0' on the following vblank, i.e. IIR has the Pendingflip
* asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
if (I915_READ16(ISR) & flip_pending)
goto check_page_flip;
+ intel_prepare_page_flip(dev, plane);
intel_finish_page_flip(dev, pipe);
return true;
if ((iir & flip_pending) == 0)
goto check_page_flip;
- intel_prepare_page_flip(dev, plane);
-
/* We detect FlipDone by looking for the change in PendingFlip from '1'
* to '0' on the following vblank, i.e. IIR has the Pendingflip
* asserted following the MI_DISPLAY_FLIP, but ISR is deasserted, hence
if (I915_READ(ISR) & flip_pending)
goto check_page_flip;
+ intel_prepare_page_flip(dev, plane);
intel_finish_page_flip(dev, pipe);
return true;
vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
udelay(300);
- /*
- * Fujitsu-Siemens Lifebook S6010 (830) has problems resuming
- * from S3 without preserving (some of?) the other bits.
- */
- I915_WRITE(vga_reg, dev_priv->bios_vgacntr | VGA_DISP_DISABLE);
+ I915_WRITE(vga_reg, VGA_DISP_DISABLE);
POSTING_READ(vga_reg);
}
intel_shared_dpll_init(dev);
- /* save the BIOS value before clobbering it */
- dev_priv->bios_vgacntr = I915_READ(i915_vgacntrl_reg(dev));
/* Just disable it once at startup */
i915_disable_vga(dev);
intel_setup_outputs(dev);
vlv_power_sequencer_reset(dev_priv);
}
-static void check_power_well_state(struct drm_i915_private *dev_priv,
- struct i915_power_well *power_well)
-{
- bool enabled = power_well->ops->is_enabled(dev_priv, power_well);
-
- if (power_well->always_on || !i915.disable_power_well) {
- if (!enabled)
- goto mismatch;
-
- return;
- }
-
- if (enabled != (power_well->count > 0))
- goto mismatch;
-
- return;
-
-mismatch:
- WARN(1, "state mismatch for '%s' (always_on %d hw state %d use-count %d disable_power_well %d\n",
- power_well->name, power_well->always_on, enabled,
- power_well->count, i915.disable_power_well);
-}
-
/**
* intel_display_power_get - grab a power domain reference
* @dev_priv: i915 device instance
power_well->ops->enable(dev_priv, power_well);
power_well->hw_enabled = true;
}
-
- check_power_well_state(dev_priv, power_well);
}
power_domains->domain_use_count[domain]++;
power_well->hw_enabled = false;
power_well->ops->disable(dev_priv, power_well);
}
-
- check_power_well_state(dev_priv, power_well);
}
mutex_unlock(&power_domains->lock);
void
nvkm_event_put(struct nvkm_event *event, u32 types, int index)
{
- BUG_ON(!spin_is_locked(&event->refs_lock));
+ assert_spin_locked(&event->refs_lock);
while (types) {
int type = __ffs(types); types &= ~(1 << type);
if (--event->refs[index * event->types_nr + type] == 0) {
void
nvkm_event_get(struct nvkm_event *event, u32 types, int index)
{
- BUG_ON(!spin_is_locked(&event->refs_lock));
+ assert_spin_locked(&event->refs_lock);
while (types) {
int type = __ffs(types); types &= ~(1 << type);
if (++event->refs[index * event->types_nr + type] == 1) {
struct nvkm_event *event = notify->event;
unsigned long flags;
- BUG_ON(!spin_is_locked(&event->list_lock));
+ assert_spin_locked(&event->list_lock);
BUG_ON(size != notify->size);
spin_lock_irqsave(&event->refs_lock, flags);
device->oclass[NVDEV_ENGINE_PPP ] = &nvc0_ppp_oclass;
device->oclass[NVDEV_ENGINE_PERFMON] = &nvf0_perfmon_oclass;
break;
+ case 0x106:
+ device->cname = "GK208B";
+ device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
+ device->oclass[NVDEV_SUBDEV_GPIO ] = nve0_gpio_oclass;
+ device->oclass[NVDEV_SUBDEV_I2C ] = nve0_i2c_oclass;
+ device->oclass[NVDEV_SUBDEV_FUSE ] = &gf100_fuse_oclass;
+ device->oclass[NVDEV_SUBDEV_CLOCK ] = &nve0_clock_oclass;
+ device->oclass[NVDEV_SUBDEV_THERM ] = &nvd0_therm_oclass;
+ device->oclass[NVDEV_SUBDEV_MXM ] = &nv50_mxm_oclass;
+ device->oclass[NVDEV_SUBDEV_DEVINIT] = nvc0_devinit_oclass;
+ device->oclass[NVDEV_SUBDEV_MC ] = gk20a_mc_oclass;
+ device->oclass[NVDEV_SUBDEV_BUS ] = nvc0_bus_oclass;
+ device->oclass[NVDEV_SUBDEV_TIMER ] = &nv04_timer_oclass;
+ device->oclass[NVDEV_SUBDEV_FB ] = nve0_fb_oclass;
+ device->oclass[NVDEV_SUBDEV_LTC ] = gk104_ltc_oclass;
+ device->oclass[NVDEV_SUBDEV_IBUS ] = &nve0_ibus_oclass;
+ device->oclass[NVDEV_SUBDEV_INSTMEM] = nv50_instmem_oclass;
+ device->oclass[NVDEV_SUBDEV_VM ] = &nvc0_vmmgr_oclass;
+ device->oclass[NVDEV_SUBDEV_BAR ] = &nvc0_bar_oclass;
+ device->oclass[NVDEV_SUBDEV_PWR ] = nv108_pwr_oclass;
+ device->oclass[NVDEV_SUBDEV_VOLT ] = &nv40_volt_oclass;
+ device->oclass[NVDEV_ENGINE_DMAOBJ ] = nvd0_dmaeng_oclass;
+ device->oclass[NVDEV_ENGINE_FIFO ] = nv108_fifo_oclass;
+ device->oclass[NVDEV_ENGINE_SW ] = nvc0_software_oclass;
+ device->oclass[NVDEV_ENGINE_GR ] = nv108_graph_oclass;
+ device->oclass[NVDEV_ENGINE_DISP ] = nvf0_disp_oclass;
+ device->oclass[NVDEV_ENGINE_COPY0 ] = &nve0_copy0_oclass;
+ device->oclass[NVDEV_ENGINE_COPY1 ] = &nve0_copy1_oclass;
+ device->oclass[NVDEV_ENGINE_COPY2 ] = &nve0_copy2_oclass;
+ device->oclass[NVDEV_ENGINE_BSP ] = &nve0_bsp_oclass;
+ device->oclass[NVDEV_ENGINE_VP ] = &nve0_vp_oclass;
+ device->oclass[NVDEV_ENGINE_PPP ] = &nvc0_ppp_oclass;
+ break;
case 0x108:
device->cname = "GK208";
device->oclass[NVDEV_SUBDEV_VBIOS ] = &nouveau_bios_oclass;
pramin_fini(void *data)
{
struct priv *priv = data;
- nv_wr32(priv->bios, 0x001700, priv->bar0);
- kfree(priv);
+ if (priv) {
+ nv_wr32(priv->bios, 0x001700, priv->bar0);
+ kfree(priv);
+ }
}
static void *
#include "nv50.h"
+struct nvaa_ram_priv {
+ struct nouveau_ram base;
+ u64 poller_base;
+};
+
static int
nvaa_ram_ctor(struct nouveau_object *parent, struct nouveau_object *engine,
struct nouveau_oclass *oclass, void *data, u32 datasize,
struct nouveau_object **pobject)
{
- const u32 rsvd_head = ( 256 * 1024) >> 12; /* vga memory */
- const u32 rsvd_tail = (1024 * 1024) >> 12; /* vbios etc */
+ u32 rsvd_head = ( 256 * 1024); /* vga memory */
+ u32 rsvd_tail = (1024 * 1024); /* vbios etc */
struct nouveau_fb *pfb = nouveau_fb(parent);
- struct nouveau_ram *ram;
+ struct nvaa_ram_priv *priv;
int ret;
- ret = nouveau_ram_create(parent, engine, oclass, &ram);
- *pobject = nv_object(ram);
+ ret = nouveau_ram_create(parent, engine, oclass, &priv);
+ *pobject = nv_object(priv);
if (ret)
return ret;
- ram->size = nv_rd32(pfb, 0x10020c);
- ram->size = (ram->size & 0xffffff00) | ((ram->size & 0x000000ff) << 32);
+ priv->base.type = NV_MEM_TYPE_STOLEN;
+ priv->base.stolen = (u64)nv_rd32(pfb, 0x100e10) << 12;
+ priv->base.size = (u64)nv_rd32(pfb, 0x100e14) << 12;
- ret = nouveau_mm_init(&pfb->vram, rsvd_head, (ram->size >> 12) -
- (rsvd_head + rsvd_tail), 1);
+ rsvd_tail += 0x1000;
+ priv->poller_base = priv->base.size - rsvd_tail;
+
+ ret = nouveau_mm_init(&pfb->vram, rsvd_head >> 12,
+ (priv->base.size - (rsvd_head + rsvd_tail)) >> 12,
+ 1);
if (ret)
return ret;
- ram->type = NV_MEM_TYPE_STOLEN;
- ram->stolen = (u64)nv_rd32(pfb, 0x100e10) << 12;
- ram->get = nv50_ram_get;
- ram->put = nv50_ram_put;
+ priv->base.get = nv50_ram_get;
+ priv->base.put = nv50_ram_put;
+ return 0;
+}
+
+static int
+nvaa_ram_init(struct nouveau_object *object)
+{
+ struct nouveau_fb *pfb = nouveau_fb(object);
+ struct nvaa_ram_priv *priv = (void *)object;
+ int ret;
+ u64 dniso, hostnb, flush;
+
+ ret = nouveau_ram_init(&priv->base);
+ if (ret)
+ return ret;
+
+ dniso = ((priv->base.size - (priv->poller_base + 0x00)) >> 5) - 1;
+ hostnb = ((priv->base.size - (priv->poller_base + 0x20)) >> 5) - 1;
+ flush = ((priv->base.size - (priv->poller_base + 0x40)) >> 5) - 1;
+
+ /* Enable NISO poller for various clients and set their associated
+ * read address, only for MCP77/78 and MCP79/7A. (fd#25701)
+ */
+ nv_wr32(pfb, 0x100c18, dniso);
+ nv_mask(pfb, 0x100c14, 0x00000000, 0x00000001);
+ nv_wr32(pfb, 0x100c1c, hostnb);
+ nv_mask(pfb, 0x100c14, 0x00000000, 0x00000002);
+ nv_wr32(pfb, 0x100c24, flush);
+ nv_mask(pfb, 0x100c14, 0x00000000, 0x00010000);
+
return 0;
}
.ofuncs = &(struct nouveau_ofuncs) {
.ctor = nvaa_ram_ctor,
.dtor = _nouveau_ram_dtor,
- .init = _nouveau_ram_init,
+ .init = nvaa_ram_init,
.fini = _nouveau_ram_fini,
},
};
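Editor's note: in the reworked MCP77/79 stolen-memory setup above, the poller read addresses are encoded as a count of 32-byte units back from the end of memory. A worked example with assumed numbers (not from the patch): with priv->base.size = 256 MiB (0x10000000) and the tail reservation grown to 1 MiB + 4 KiB (0x101000),

	poller_base = 0x10000000 - 0x101000 = 0xfeff000
	dniso  = ((0x10000000 - (0xfeff000 + 0x00)) >> 5) - 1 = 0x807f
	hostnb = ((0x10000000 - (0xfeff000 + 0x20)) >> 5) - 1 = 0x807e
	flush  = ((0x10000000 - (0xfeff000 + 0x40)) >> 5) - 1 = 0x807d

i.e. the three clients poll consecutive 32-byte slots carved out of the reserved tail.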
#include "nv04.h"
-static void
-nv4c_mc_msi_rearm(struct nouveau_mc *pmc)
-{
- struct nv04_mc_priv *priv = (void *)pmc;
- nv_wr08(priv, 0x088050, 0xff);
-}
-
struct nouveau_oclass *
nv4c_mc_oclass = &(struct nouveau_mc_oclass) {
.base.handle = NV_SUBDEV(MC, 0x4c),
.fini = _nouveau_mc_fini,
},
.intr = nv04_mc_intr,
- .msi_rearm = nv4c_mc_msi_rearm,
}.base;
* so use the DMA API for them.
*/
if (!nv_device_is_cpu_coherent(device) &&
- ttm->caching_state == tt_uncached)
+ ttm->caching_state == tt_uncached) {
ttm_dma_unpopulate(ttm_dma, dev->dev);
+ return;
+ }
#if __OS_HAS_AGP
if (drm->agp.stat == ENABLED) {
nouveau_gem_object_del(struct drm_gem_object *gem)
{
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+ struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
struct ttm_buffer_object *bo = &nvbo->bo;
+ struct device *dev = drm->dev->dev;
+ int ret;
+
+ ret = pm_runtime_get_sync(dev);
+ if (WARN_ON(ret < 0 && ret != -EACCES))
+ return;
if (gem->import_attach)
drm_prime_gem_destroy(gem, nvbo->bo.sg);
/* reset filp so nouveau_bo_del_ttm() can test for it */
gem->filp = NULL;
ttm_bo_unref(&bo);
+
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
}
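Editor's note: the hunks in this file bracket GPU access with a runtime-PM reference so the device is awake while buffer objects are unmapped or freed. A sketch of the pattern as used above (-EACCES means runtime PM is disabled for the device and is deliberately tolerated):

	ret = pm_runtime_get_sync(dev);		/* resume device; take a usage ref */
	if (WARN_ON(ret < 0 && ret != -EACCES))
		return;
	/* ... touch the hardware ... */
	pm_runtime_mark_last_busy(dev);		/* restart the autosuspend timer */
	pm_runtime_put_autosuspend(dev);	/* drop ref; suspend after the delay */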
int
{
struct nouveau_cli *cli = nouveau_cli(file_priv);
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+ struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
struct nouveau_vma *vma;
+ struct device *dev = drm->dev->dev;
int ret;
if (!cli->vm)
goto out;
}
+ ret = pm_runtime_get_sync(dev);
+ if (ret < 0 && ret != -EACCES)
+ goto out;
+
ret = nouveau_bo_vma_add(nvbo, cli->vm, vma);
- if (ret) {
+ if (ret)
kfree(vma);
- goto out;
- }
+
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
} else {
vma->refcount++;
}
{
struct nouveau_cli *cli = nouveau_cli(file_priv);
struct nouveau_bo *nvbo = nouveau_gem_object(gem);
+ struct nouveau_drm *drm = nouveau_bdev(nvbo->bo.bdev);
+ struct device *dev = drm->dev->dev;
struct nouveau_vma *vma;
int ret;
vma = nouveau_bo_vma_find(nvbo, cli->vm);
if (vma) {
- if (--vma->refcount == 0)
- nouveau_gem_object_unmap(nvbo, vma);
+ if (--vma->refcount == 0) {
+ ret = pm_runtime_get_sync(dev);
+ if (!WARN_ON(ret < 0 && ret != -EACCES)) {
+ nouveau_gem_object_unmap(nvbo, vma);
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+ }
+ }
}
ttm_bo_unreserve(&nvbo->bo);
}
return pll;
}
/* otherwise, pick one of the plls */
- if ((rdev->family == CHIP_KAVERI) ||
- (rdev->family == CHIP_KABINI) ||
+ if ((rdev->family == CHIP_KABINI) ||
(rdev->family == CHIP_MULLINS)) {
- /* KB/KV/ML has PPLL1 and PPLL2 */
+ /* KB/ML has PPLL1 and PPLL2 */
pll_in_use = radeon_get_pll_use_mask(crtc);
if (!(pll_in_use & (1 << ATOM_PPLL2)))
return ATOM_PPLL2;
DRM_ERROR("unable to allocate a PPLL\n");
return ATOM_PPLL_INVALID;
} else {
- /* CI has PPLL0, PPLL1, and PPLL2 */
+ /* CI/KV has PPLL0, PPLL1, and PPLL2 */
pll_in_use = radeon_get_pll_use_mask(crtc);
if (!(pll_in_use & (1 << ATOM_PPLL2)))
return ATOM_PPLL2;
case ATOM_PPLL0:
/* disable the ppll */
if ((rdev->family == CHIP_ARUBA) ||
+ (rdev->family == CHIP_KAVERI) ||
(rdev->family == CHIP_BONAIRE) ||
(rdev->family == CHIP_HAWAII))
atombios_crtc_program_pll(crtc, radeon_crtc->crtc_id, radeon_crtc->pll_id,
struct radeon_connector_atom_dig *dig_connector;
int dp_clock;
+ if ((mode->clock > 340000) &&
+ (!radeon_connector_is_dp12_capable(connector)))
+ return MODE_CLOCK_HIGH;
+
if (!radeon_connector->con_priv)
return MODE_CLOCK_HIGH;
dig_connector = radeon_connector->con_priv;
#define ATC_VM_APERTURE1_HIGH_ADDR 0x330Cu
#define ATC_VM_APERTURE1_LOW_ADDR 0x3304u
+#define IH_VMID_0_LUT 0x3D40u
+
#endif
}
sad_count = drm_edid_to_sad(radeon_connector->edid, &sads);
- if (sad_count < 0) {
+ if (sad_count <= 0) {
DRM_ERROR("Couldn't read SADs: %d\n", sad_count);
return;
}
pi->enable_auto_thermal_throttling = true;
pi->disable_nb_ps3_in_battery = false;
if (radeon_bapm == -1) {
- /* There are stability issues reported on with
- * bapm enabled on an asrock system.
- */
- if (rdev->pdev->subsystem_vendor == 0x1849)
- pi->bapm_enable = false;
- else
+ /* only enable bapm on KB, ML by default */
+ if (rdev->family == CHIP_KABINI || rdev->family == CHIP_MULLINS)
pi->bapm_enable = true;
+ else
+ pi->bapm_enable = false;
} else if (radeon_bapm == 0) {
pi->bapm_enable = false;
} else {
static int kgd_hqd_load(struct kgd_dev *kgd, void *mqd, uint32_t pipe_id,
uint32_t queue_id, uint32_t __user *wptr);
-static bool kgd_hqd_is_occupies(struct kgd_dev *kgd, uint64_t queue_address,
+static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id);
static int kgd_hqd_destroy(struct kgd_dev *kgd, uint32_t reset_type,
.init_memory = kgd_init_memory,
.init_pipeline = kgd_init_pipeline,
.hqd_load = kgd_hqd_load,
- .hqd_is_occupies = kgd_hqd_is_occupies,
+ .hqd_is_occupied = kgd_hqd_is_occupied,
.hqd_destroy = kgd_hqd_destroy,
.get_fw_version = get_fw_version
};
bool radeon_kfd_init(void)
{
+#if defined(CONFIG_HSA_AMD_MODULE)
bool (*kgd2kfd_init_p)(unsigned, const struct kfd2kgd_calls*,
const struct kgd2kfd_calls**);
}
return true;
+#elif defined(CONFIG_HSA_AMD)
+ if (!kgd2kfd_init(KFD_INTERFACE_VERSION, &kfd2kgd, &kgd2kfd)) {
+ kgd2kfd = NULL;
+
+ return false;
+ }
+
+ return true;
+#else
+ return false;
+#endif
}
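Editor's note: the #if ladder added to radeon_kfd_init() covers the three ways amdkfd can be configured. A sketch of its shape (the module branch's runtime symbol lookup is elided in the hunk above):

	#if defined(CONFIG_HSA_AMD_MODULE)
		/* amdkfd built as a module: resolve kgd2kfd_init at runtime */
	#elif defined(CONFIG_HSA_AMD)
		/* amdkfd built in: call kgd2kfd_init() directly */
	#else
		/* amdkfd disabled: report failure */
	#endif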
void radeon_kfd_fini(void)
cpu_relax();
write_register(kgd, ATC_VMID_PASID_MAPPING_UPDATE_STATUS, 1U << vmid);
+	/* Also map the vmid to the pasid for the IH block */
+ write_register(kgd, IH_VMID_0_LUT + vmid * sizeof(uint32_t),
+ pasid_mapping);
+
return 0;
}
return 0;
}
-static bool kgd_hqd_is_occupies(struct kgd_dev *kgd, uint64_t queue_address,
+static bool kgd_hqd_is_occupied(struct kgd_dev *kgd, uint64_t queue_address,
uint32_t pipe_id, uint32_t queue_id)
{
uint32_t act;
if (timeout == 0) {
pr_err("kfd: cp queue preemption time out (%dms)\n",
temp);
+ release_queue(kgd);
return -ETIME;
}
msleep(20);
u32 format;
u32 *buffer;
const u8 __user *data;
- int size, dwords, tex_width, blit_width, spitch;
+ unsigned int size, dwords, tex_width, blit_width, spitch;
u32 height;
int i;
u32 texpitch, microtile;
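Editor's note: switching these locals from int to unsigned int matters because they derive from userspace-controlled ioctl arguments. A sketch of the failure mode being closed (names and limit are illustrative, not from the patch):

	int ssize = -4;
	if (ssize > 4096)		/* signed compare: false */
		return -EINVAL;		/* never taken; negative size flows on */

	unsigned int usize = -4;	/* same bits, now 0xfffffffc */
	if (usize > 4096)		/* unsigned compare: true */
		return -EINVAL;		/* rejected up front */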
config HID_BATTERY_STRENGTH
bool "Battery level reporting for HID devices"
- depends on HID && POWER_SUPPLY && HID = POWER_SUPPLY
+ depends on HID
+ select POWER_SUPPLY
default n
---help---
This option adds support of reporting battery strength (for HID devices
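Editor's note: the old dependency expression "HID && POWER_SUPPLY && HID = POWER_SUPPLY" forced both symbols to the same tristate value, which made this option unavailable in the common HID=y, POWER_SUPPLY=m configuration. Selecting POWER_SUPPLY instead pulls it in at whatever level HID is built, so battery reporting is usable whenever the option is enabled.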
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LABTEC, USB_DEVICE_ID_LABTEC_WIRELESS_KEYBOARD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_LCPOWER, USB_DEVICE_ID_LCPOWER_LC1000 ) },
#define USB_DEVICE_ID_KYE_GPEN_560 0x5003
#define USB_DEVICE_ID_KYE_EASYPEN_I405X 0x5010
#define USB_DEVICE_ID_KYE_MOUSEPEN_I608X 0x5011
+#define USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2 0x501a
#define USB_DEVICE_ID_KYE_EASYPEN_M610X 0x5013
#define USB_VENDOR_ID_LABTEC 0x1020
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ANSI),
HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
+ USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_ISO),
+ HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE,
USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI),
HID_BATTERY_QUIRK_PERCENT | HID_BATTERY_QUIRK_FEATURE },
}
break;
case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
+ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2:
if (*rsize == MOUSEPEN_I608X_RDESC_ORIG_SIZE) {
rdesc = mousepen_i608x_rdesc_fixed;
*rsize = sizeof(mousepen_i608x_rdesc_fixed);
switch (id->product) {
case USB_DEVICE_ID_KYE_EASYPEN_I405X:
case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
+ case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2:
case USB_DEVICE_ID_KYE_EASYPEN_M610X:
ret = kye_tablet_enable(hdev);
if (ret) {
USB_DEVICE_ID_KYE_EASYPEN_I405X) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE,
USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_KYE,
+ USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE,
USB_DEVICE_ID_KYE_EASYPEN_M610X) },
{ HID_USB_DEVICE(USB_VENDOR_ID_KYE,
switch (data[0]) {
case REPORT_ID_DJ_SHORT:
+ if (size != DJREPORT_SHORT_LENGTH) {
+ dev_err(&hdev->dev, "DJ report of bad size (%d)", size);
+ return false;
+ }
return logi_dj_dj_event(hdev, report, data, size);
case REPORT_ID_HIDPP_SHORT:
- /* intentional fallthrough */
+ if (size != HIDPP_REPORT_SHORT_LENGTH) {
+ dev_err(&hdev->dev,
+ "Short HID++ report of bad size (%d)", size);
+ return false;
+ }
+ return logi_dj_hidpp_event(hdev, report, data, size);
case REPORT_ID_HIDPP_LONG:
+ if (size != HIDPP_REPORT_LONG_LENGTH) {
+ dev_err(&hdev->dev,
+ "Long HID++ report of bad size (%d)", size);
+ return false;
+ }
return logi_dj_hidpp_event(hdev, report, data, size);
}
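Editor's note: the added length checks reject malformed (or malicious) reports before they reach handlers that index fixed offsets into the buffer. A minimal sketch of the pattern with a hypothetical helper (the REPORT_ID_* lengths are the driver constants named above):

	static bool dj_size_ok(struct hid_device *hdev, int size, int want)
	{
		if (size != want) {
			dev_err(&hdev->dev, "report of bad size (%d)", size);
			return false;
		}
		return true;
	}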
(report->rap.sub_id == 0x41);
}
+/**
+ * hidpp_prefix_name() prefixes the given name with "Logitech ".
+ */
+static void hidpp_prefix_name(char **name, int name_length)
+{
+#define PREFIX_LENGTH 9 /* "Logitech " */
+
+ int new_length;
+ char *new_name;
+
+ if (name_length > PREFIX_LENGTH &&
+ strncmp(*name, "Logitech ", PREFIX_LENGTH) == 0)
+		/* The prefix is already in the name */
+ return;
+
+ new_length = PREFIX_LENGTH + name_length;
+ new_name = kzalloc(new_length, GFP_KERNEL);
+ if (!new_name)
+ return;
+
+ snprintf(new_name, new_length, "Logitech %s", *name);
+
+ kfree(*name);
+
+ *name = new_name;
+}
+
/* -------------------------------------------------------------------------- */
/* HIDP++ 1.0 commands */
/* -------------------------------------------------------------------------- */
return NULL;
memcpy(name, &response.rap.params[2], len);
+
+ /* include the terminating '\0' */
+ hidpp_prefix_name(&name, len + 1);
+
return name;
}
index += ret;
}
+ /* include the terminating '\0' */
+ hidpp_prefix_name(&name, __name_length + 1);
+
return name;
}
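Editor's note: the buffer sizing in hidpp_prefix_name() works out exactly because both call sites pass the length including the terminating '\0':

	new_length = PREFIX_LENGTH + name_length
	           = 9 + (strlen(*name) + 1)

so snprintf(new_name, new_length, "Logitech %s", *name) writes 9 prefix bytes, strlen(*name) name bytes and one NUL, filling the buffer with no truncation.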
switch (data[0]) {
case 0x02:
+ if (size < 2) {
+ hid_err(hdev, "Received HID report of bad size (%d)",
+ size);
+ return 1;
+ }
if (hidpp->quirks & HIDPP_QUIRK_WTP_PHYSICAL_BUTTONS) {
input_event(wd->input, EV_KEY, BTN_LEFT,
!!(data[1] & 0x01));
input_event(wd->input, EV_KEY, BTN_RIGHT,
!!(data[1] & 0x02));
input_sync(wd->input);
+ return 0;
} else {
if (size < 21)
return 1;
return wtp_mouse_raw_xy_event(hidpp, &data[7]);
}
case REPORT_ID_HIDPP_LONG:
+ /* size is already checked in hidpp_raw_event. */
if ((report->fap.feature_index != wd->mt_feature_index) ||
(report->fap.funcindex_clientid != EVENT_TOUCHPAD_RAW_XY))
return 1;
static void profile_activated(struct pyra_device *pyra,
unsigned int new_profile)
{
+ if (new_profile >= ARRAY_SIZE(pyra->profile_settings))
+ return;
pyra->actual_profile = new_profile;
pyra->actual_cpi = pyra->profile_settings[pyra->actual_profile].y_cpi;
}
if (off != 0 || count != PYRA_SIZE_SETTINGS)
return -EINVAL;
- mutex_lock(&pyra->pyra_lock);
-
settings = (struct pyra_settings const *)buf;
+ if (settings->startup_profile >= ARRAY_SIZE(pyra->profile_settings))
+ return -EINVAL;
+
+ mutex_lock(&pyra->pyra_lock);
retval = pyra_set_settings(usb_dev, settings);
if (retval) {
static void i2c_hid_stop(struct hid_device *hid)
{
- struct i2c_client *client = hid->driver_data;
- struct i2c_hid *ihid = i2c_get_clientdata(client);
-
hid->claimed = 0;
-
- i2c_hid_free_buffers(ihid);
}
static int i2c_hid_open(struct hid_device *hid)
{ USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_WIRELESS, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_SIGMA_MICRO, USB_DEVICE_ID_SIGMA_MICRO_KEYBOARD, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X, HID_QUIRK_MULTI_INPUT },
+ { USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_2, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_NTRIG, USB_DEVICE_ID_NTRIG_DUOSENSE, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_SEMICO, USB_DEVICE_ID_SEMICO_USB_KEYKOARD, HID_QUIRK_NO_INIT_REPORTS },
static void set_emss(struct c4iw_ep *ep, u16 opt)
{
- ep->emss = ep->com.dev->rdev.lldi.mtus[GET_TCPOPT_MSS(opt)] -
+ ep->emss = ep->com.dev->rdev.lldi.mtus[TCPOPT_MSS_G(opt)] -
((AF_INET == ep->com.remote_addr.ss_family) ?
sizeof(struct iphdr) : sizeof(struct ipv6hdr)) -
sizeof(struct tcphdr);
ep->mss = ep->emss;
- if (GET_TCPOPT_TSTAMP(opt))
+ if (TCPOPT_TSTAMP_G(opt))
ep->emss -= round_up(TCPOLEN_TIMESTAMP, 4);
if (ep->emss < 128)
ep->emss = 128;
if (ep->emss & 7)
PDBG("Warning: misaligned mtu idx %u mss %u emss=%u\n",
- GET_TCPOPT_MSS(opt), ep->mss, ep->emss);
- PDBG("%s mss_idx %u mss %u emss=%u\n", __func__, GET_TCPOPT_MSS(opt),
+ TCPOPT_MSS_G(opt), ep->mss, ep->emss);
+ PDBG("%s mss_idx %u mss %u emss=%u\n", __func__, TCPOPT_MSS_G(opt),
ep->mss, ep->emss);
}
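Editor's note: a worked example of the emss computation with assumed numbers (not from the patch): an IPv4 path whose MTU table entry is 1500 gives

	emss = 1500 - sizeof(struct iphdr) - sizeof(struct tcphdr)
	     = 1500 - 20 - 20 = 1460

and, if TCP timestamps were negotiated,

	emss = 1460 - round_up(TCPOLEN_TIMESTAMP, 4) = 1460 - 12 = 1448

with the final clamp ensuring emss never drops below 128.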
if (win > RCV_BUFSIZ_M)
win = RCV_BUFSIZ_M;
- opt0 = (nocong ? NO_CONG(1) : 0) |
+ opt0 = (nocong ? NO_CONG_F : 0) |
KEEP_ALIVE_F |
- DELACK(1) |
+ DELACK_F |
WND_SCALE_V(wscale) |
MSS_IDX_V(mtu_idx) |
L2T_IDX_V(ep->l2t->idx) |
TX_CHAN_V(ep->tx_chan) |
SMAC_SEL_V(ep->smac_idx) |
- DSCP(ep->tos) |
+ DSCP_V(ep->tos) |
ULP_MODE_V(ULP_MODE_TCPDDP) |
RCV_BUFSIZ_V(win);
opt2 = RX_CHANNEL_V(0) |
- CCTRL_ECN(enable_ecn) |
+ CCTRL_ECN_V(enable_ecn) |
RSS_QUEUE_VALID_F | RSS_QUEUE_V(ep->rss_qid);
if (enable_tcp_timestamps)
- opt2 |= TSTAMPS_EN(1);
+ opt2 |= TSTAMPS_EN_F;
if (enable_tcp_sack)
- opt2 |= SACK_EN(1);
+ opt2 |= SACK_EN_F;
if (wscale && enable_tcp_window_scaling)
opt2 |= WND_SCALE_EN_F;
if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {
opt2 |= T5_OPT_2_VALID_F;
- opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);
+ opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
opt2 |= CONG_CNTRL_VALID; /* OPT_2_ISS for T5 */
}
t4_set_arp_err_handler(skb, ep, act_open_req_arp_failure);
struct c4iw_ep *ep;
struct cpl_act_establish *req = cplhdr(skb);
unsigned int tid = GET_TID(req);
- unsigned int atid = GET_TID_TID(ntohl(req->tos_atid));
+ unsigned int atid = TID_TID_G(ntohl(req->tos_atid));
struct tid_info *t = dev->rdev.lldi.tids;
ep = lookup_atid(t, atid);
OPCODE_TID(req) = cpu_to_be32(MK_OPCODE_TID(CPL_RX_DATA_ACK,
ep->hwtid));
req->credit_dack = cpu_to_be32(credits | RX_FORCE_ACK_F |
- F_RX_DACK_CHANGE |
- V_RX_DACK_MODE(dack_mode));
+ RX_DACK_CHANGE_F |
+ RX_DACK_MODE_V(dack_mode));
set_wr_txq(skb, CPL_PRIORITY_ACK, ep->ctrlq_idx);
c4iw_ofld_send(&ep->com.dev->rdev, skb);
return credits;
skb = get_skb(NULL, sizeof(*req), GFP_KERNEL);
req = (struct fw_ofld_connection_wr *)__skb_put(skb, sizeof(*req));
memset(req, 0, sizeof(*req));
- req->op_compl = htonl(V_WR_OP(FW_OFLD_CONNECTION_WR));
+ req->op_compl = htonl(WR_OP_V(FW_OFLD_CONNECTION_WR));
req->len16_pkd = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*req), 16)));
req->le.filter = cpu_to_be32(cxgb4_select_ntuple(
ep->com.dev->rdev.lldi.ports[0],
if (win > RCV_BUFSIZ_M)
win = RCV_BUFSIZ_M;
- req->tcb.opt0 = (__force __be64) (TCAM_BYPASS(1) |
- (nocong ? NO_CONG(1) : 0) |
+ req->tcb.opt0 = (__force __be64) (TCAM_BYPASS_F |
+ (nocong ? NO_CONG_F : 0) |
KEEP_ALIVE_F |
- DELACK(1) |
+ DELACK_F |
WND_SCALE_V(wscale) |
MSS_IDX_V(mtu_idx) |
L2T_IDX_V(ep->l2t->idx) |
TX_CHAN_V(ep->tx_chan) |
SMAC_SEL_V(ep->smac_idx) |
- DSCP(ep->tos) |
+ DSCP_V(ep->tos) |
ULP_MODE_V(ULP_MODE_TCPDDP) |
RCV_BUFSIZ_V(win));
- req->tcb.opt2 = (__force __be32) (PACE(1) |
- TX_QUEUE(ep->com.dev->rdev.lldi.tx_modq[ep->tx_chan]) |
+ req->tcb.opt2 = (__force __be32) (PACE_V(1) |
+ TX_QUEUE_V(ep->com.dev->rdev.lldi.tx_modq[ep->tx_chan]) |
RX_CHANNEL_V(0) |
- CCTRL_ECN(enable_ecn) |
+ CCTRL_ECN_V(enable_ecn) |
RSS_QUEUE_VALID_F | RSS_QUEUE_V(ep->rss_qid));
if (enable_tcp_timestamps)
- req->tcb.opt2 |= (__force __be32)TSTAMPS_EN(1);
+ req->tcb.opt2 |= (__force __be32)TSTAMPS_EN_F;
if (enable_tcp_sack)
- req->tcb.opt2 |= (__force __be32)SACK_EN(1);
+ req->tcb.opt2 |= (__force __be32)SACK_EN_F;
if (wscale && enable_tcp_window_scaling)
req->tcb.opt2 |= (__force __be32)WND_SCALE_EN_F;
req->tcb.opt0 = cpu_to_be64((__force u64)req->tcb.opt0);
{
struct c4iw_ep *ep;
struct cpl_act_open_rpl *rpl = cplhdr(skb);
- unsigned int atid = GET_TID_TID(GET_AOPEN_ATID(
- ntohl(rpl->atid_status)));
+ unsigned int atid = TID_TID_G(AOPEN_ATID_G(
+ ntohl(rpl->atid_status)));
struct tid_info *t = dev->rdev.lldi.tids;
- int status = GET_AOPEN_STATUS(ntohl(rpl->atid_status));
+ int status = AOPEN_STATUS_G(ntohl(rpl->atid_status));
struct sockaddr_in *la;
struct sockaddr_in *ra;
struct sockaddr_in6 *la6;
if (ep->com.local_addr.ss_family == AF_INET &&
dev->rdev.lldi.enable_fw_ofld_conn) {
send_fw_act_open_req(ep,
- GET_TID_TID(GET_AOPEN_ATID(
+ TID_TID_G(AOPEN_ATID_G(
ntohl(rpl->atid_status))));
return 0;
}
win = ep->rcv_win >> 10;
if (win > RCV_BUFSIZ_M)
win = RCV_BUFSIZ_M;
- opt0 = (nocong ? NO_CONG(1) : 0) |
+ opt0 = (nocong ? NO_CONG_F : 0) |
KEEP_ALIVE_F |
- DELACK(1) |
+ DELACK_F |
WND_SCALE_V(wscale) |
MSS_IDX_V(mtu_idx) |
L2T_IDX_V(ep->l2t->idx) |
TX_CHAN_V(ep->tx_chan) |
SMAC_SEL_V(ep->smac_idx) |
- DSCP(ep->tos >> 2) |
+ DSCP_V(ep->tos >> 2) |
ULP_MODE_V(ULP_MODE_TCPDDP) |
RCV_BUFSIZ_V(win);
opt2 = RX_CHANNEL_V(0) |
RSS_QUEUE_VALID_F | RSS_QUEUE_V(ep->rss_qid);
if (enable_tcp_timestamps && req->tcpopt.tstamp)
- opt2 |= TSTAMPS_EN(1);
+ opt2 |= TSTAMPS_EN_F;
if (enable_tcp_sack && req->tcpopt.sack)
- opt2 |= SACK_EN(1);
+ opt2 |= SACK_EN_F;
if (wscale && enable_tcp_window_scaling)
opt2 |= WND_SCALE_EN_F;
if (enable_ecn) {
const struct tcphdr *tcph;
u32 hlen = ntohl(req->hdr_len);
- tcph = (const void *)(req + 1) + G_ETH_HDR_LEN(hlen) +
- G_IP_HDR_LEN(hlen);
+ tcph = (const void *)(req + 1) + ETH_HDR_LEN_G(hlen) +
+ IP_HDR_LEN_G(hlen);
if (tcph->ece && tcph->cwr)
- opt2 |= CCTRL_ECN(1);
+ opt2 |= CCTRL_ECN_V(1);
}
if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {
u32 isn = (prandom_u32() & ~7UL) - 1;
opt2 |= T5_OPT_2_VALID_F;
- opt2 |= V_CONG_CNTRL(CONG_ALG_TAHOE);
+ opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
opt2 |= CONG_CNTRL_VALID; /* OPT_2_ISS for T5 */
rpl5 = (void *)rpl;
memset(&rpl5->iss, 0, roundup(sizeof(*rpl5)-sizeof(*rpl), 16));
__u8 *local_ip, __u8 *peer_ip,
__be16 *local_port, __be16 *peer_port)
{
- int eth_len = G_ETH_HDR_LEN(be32_to_cpu(req->hdr_len));
- int ip_len = G_IP_HDR_LEN(be32_to_cpu(req->hdr_len));
+ int eth_len = ETH_HDR_LEN_G(be32_to_cpu(req->hdr_len));
+ int ip_len = IP_HDR_LEN_G(be32_to_cpu(req->hdr_len));
struct iphdr *ip = (struct iphdr *)((u8 *)(req + 1) + eth_len);
struct ipv6hdr *ip6 = (struct ipv6hdr *)((u8 *)(req + 1) + eth_len);
struct tcphdr *tcp = (struct tcphdr *)
{
struct c4iw_ep *child_ep = NULL, *parent_ep;
struct cpl_pass_accept_req *req = cplhdr(skb);
- unsigned int stid = GET_POPEN_TID(ntohl(req->tos_stid));
+ unsigned int stid = PASS_OPEN_TID_G(ntohl(req->tos_stid));
struct tid_info *t = dev->rdev.lldi.tids;
unsigned int hwtid = GET_TID(req);
struct dst_entry *dst;
ntohs(peer_port), peer_mss);
dst = find_route(dev, *(__be32 *)local_ip, *(__be32 *)peer_ip,
local_port, peer_port,
- GET_POPEN_TOS(ntohl(req->tos_stid)));
+ PASS_OPEN_TOS_G(ntohl(req->tos_stid)));
} else {
PDBG("%s parent ep %p hwtid %u laddr %pI6 raddr %pI6 lport %d rport %d peer_mss %d\n"
, __func__, parent_ep, hwtid,
local_ip, peer_ip, ntohs(local_port),
ntohs(peer_port), peer_mss);
dst = find_route6(dev, local_ip, peer_ip, local_port, peer_port,
- PASS_OPEN_TOS(ntohl(req->tos_stid)),
+ PASS_OPEN_TOS_G(ntohl(req->tos_stid)),
((struct sockaddr_in6 *)
&parent_ep->com.local_addr)->sin6_scope_id);
}
}
c4iw_get_ep(&parent_ep->com);
child_ep->parent_ep = parent_ep;
- child_ep->tos = GET_POPEN_TOS(ntohl(req->tos_stid));
+ child_ep->tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid));
child_ep->dst = dst;
child_ep->hwtid = hwtid;
req = (struct cpl_pass_accept_req *)__skb_push(skb, sizeof(*req));
memset(req, 0, sizeof(*req));
- req->l2info = cpu_to_be16(V_SYN_INTF(intf) |
- V_SYN_MAC_IDX(G_RX_MACIDX(
+ req->l2info = cpu_to_be16(SYN_INTF_V(intf) |
+ SYN_MAC_IDX_V(RX_MACIDX_G(
(__force int) htonl(l2info))) |
- F_SYN_XACT_MATCH);
+ SYN_XACT_MATCH_F);
eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
- G_RX_ETHHDR_LEN((__force int) htonl(l2info)) :
- G_RX_T5_ETHHDR_LEN((__force int) htonl(l2info));
- req->hdr_len = cpu_to_be32(V_SYN_RX_CHAN(G_RX_CHAN(
+ RX_ETHHDR_LEN_G((__force int)htonl(l2info)) :
+ RX_T5_ETHHDR_LEN_G((__force int)htonl(l2info));
+ req->hdr_len = cpu_to_be32(SYN_RX_CHAN_V(RX_CHAN_G(
(__force int) htonl(l2info))) |
- V_TCP_HDR_LEN(G_RX_TCPHDR_LEN(
+ TCP_HDR_LEN_V(RX_TCPHDR_LEN_G(
(__force int) htons(hdr_len))) |
- V_IP_HDR_LEN(G_RX_IPHDR_LEN(
+ IP_HDR_LEN_V(RX_IPHDR_LEN_G(
(__force int) htons(hdr_len))) |
- V_ETH_HDR_LEN(G_RX_ETHHDR_LEN(eth_hdr_len)));
+ ETH_HDR_LEN_V(RX_ETHHDR_LEN_G(eth_hdr_len)));
req->vlan = (__force __be16) vlantag;
req->len = (__force __be16) len;
- req->tos_stid = cpu_to_be32(PASS_OPEN_TID(stid) |
- PASS_OPEN_TOS(tos));
+ req->tos_stid = cpu_to_be32(PASS_OPEN_TID_V(stid) |
+ PASS_OPEN_TOS_V(tos));
req->tcpopt.mss = htons(tmp_opt.mss_clamp);
if (tmp_opt.wscale_ok)
req->tcpopt.wsf = tmp_opt.snd_wscale;
req_skb = alloc_skb(sizeof(struct fw_ofld_connection_wr), GFP_KERNEL);
req = (struct fw_ofld_connection_wr *)__skb_put(req_skb, sizeof(*req));
memset(req, 0, sizeof(*req));
- req->op_compl = htonl(V_WR_OP(FW_OFLD_CONNECTION_WR) | FW_WR_COMPL_F);
+ req->op_compl = htonl(WR_OP_V(FW_OFLD_CONNECTION_WR) | FW_WR_COMPL_F);
req->len16_pkd = htonl(FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*req), 16)));
req->le.version_cpl = htonl(FW_OFLD_CONNECTION_WR_CPL_F);
req->le.filter = (__force __be32) filter;
htonl(FW_OFLD_CONNECTION_WR_T_STATE_V(TCP_SYN_RECV) |
FW_OFLD_CONNECTION_WR_RCV_SCALE_V(cpl->tcpopt.wsf) |
FW_OFLD_CONNECTION_WR_ASTID_V(
- GET_PASS_OPEN_TID(ntohl(cpl->tos_stid))));
+ PASS_OPEN_TID_G(ntohl(cpl->tos_stid))));
/*
* We store the qid in opt2 which will be used by the firmware
struct neighbour *neigh;
/* Drop all non-SYN packets */
- if (!(cpl->l2info & cpu_to_be32(F_RXF_SYN)))
+ if (!(cpl->l2info & cpu_to_be32(RXF_SYN_F)))
goto reject;
/*
}
eth_hdr_len = is_t4(dev->rdev.lldi.adapter_type) ?
- G_RX_ETHHDR_LEN(htonl(cpl->l2info)) :
- G_RX_T5_ETHHDR_LEN(htonl(cpl->l2info));
+ RX_ETHHDR_LEN_G(htonl(cpl->l2info)) :
+ RX_T5_ETHHDR_LEN_G(htonl(cpl->l2info));
if (eth_hdr_len == ETH_HLEN) {
eh = (struct ethhdr *)(req + 1);
iph = (struct iphdr *)(eh + 1);
memset(res_wr, 0, wr_len);
res_wr->op_nres = cpu_to_be32(
FW_WR_OP_V(FW_RI_RES_WR) |
- V_FW_RI_RES_WR_NRES(1) |
+ FW_RI_RES_WR_NRES_V(1) |
FW_WR_COMPL_F);
res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
res_wr->cookie = (unsigned long) &wr_wait;
memset(res_wr, 0, wr_len);
res_wr->op_nres = cpu_to_be32(
FW_WR_OP_V(FW_RI_RES_WR) |
- V_FW_RI_RES_WR_NRES(1) |
+ FW_RI_RES_WR_NRES_V(1) |
FW_WR_COMPL_F);
res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
res_wr->cookie = (unsigned long) &wr_wait;
res->u.cq.op = FW_RI_RES_OP_WRITE;
res->u.cq.iqid = cpu_to_be32(cq->cqid);
res->u.cq.iqandst_to_iqandstindex = cpu_to_be32(
- V_FW_RI_RES_WR_IQANUS(0) |
- V_FW_RI_RES_WR_IQANUD(1) |
- F_FW_RI_RES_WR_IQANDST |
- V_FW_RI_RES_WR_IQANDSTINDEX(
+ FW_RI_RES_WR_IQANUS_V(0) |
+ FW_RI_RES_WR_IQANUD_V(1) |
+ FW_RI_RES_WR_IQANDST_F |
+ FW_RI_RES_WR_IQANDSTINDEX_V(
rdev->lldi.ciq_ids[cq->vector]));
res->u.cq.iqdroprss_to_iqesize = cpu_to_be16(
- F_FW_RI_RES_WR_IQDROPRSS |
- V_FW_RI_RES_WR_IQPCIECH(2) |
- V_FW_RI_RES_WR_IQINTCNTTHRESH(0) |
- F_FW_RI_RES_WR_IQO |
- V_FW_RI_RES_WR_IQESIZE(1));
+ FW_RI_RES_WR_IQDROPRSS_F |
+ FW_RI_RES_WR_IQPCIECH_V(2) |
+ FW_RI_RES_WR_IQINTCNTTHRESH_V(0) |
+ FW_RI_RES_WR_IQO_F |
+ FW_RI_RES_WR_IQESIZE_V(1));
res->u.cq.iqsize = cpu_to_be16(cq->size);
res->u.cq.iqaddr = cpu_to_be64(cq->dma_addr);
PDBG("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
memset(&cqe, 0, sizeof(cqe));
- cqe.header = cpu_to_be32(V_CQE_STATUS(T4_ERR_SWFLUSH) |
- V_CQE_OPCODE(FW_RI_SEND) |
- V_CQE_TYPE(0) |
- V_CQE_SWCQE(1) |
- V_CQE_QPID(wq->sq.qid));
- cqe.bits_type_ts = cpu_to_be64(V_CQE_GENBIT((u64)cq->gen));
+ cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) |
+ CQE_OPCODE_V(FW_RI_SEND) |
+ CQE_TYPE_V(0) |
+ CQE_SWCQE_V(1) |
+ CQE_QPID_V(wq->sq.qid));
+ cqe.bits_type_ts = cpu_to_be64(CQE_GENBIT_V((u64)cq->gen));
cq->sw_queue[cq->sw_pidx] = cqe;
t4_swcq_produce(cq);
}
PDBG("%s wq %p cq %p sw_cidx %u sw_pidx %u\n", __func__,
wq, cq, cq->sw_cidx, cq->sw_pidx);
memset(&cqe, 0, sizeof(cqe));
- cqe.header = cpu_to_be32(V_CQE_STATUS(T4_ERR_SWFLUSH) |
- V_CQE_OPCODE(swcqe->opcode) |
- V_CQE_TYPE(1) |
- V_CQE_SWCQE(1) |
- V_CQE_QPID(wq->sq.qid));
+ cqe.header = cpu_to_be32(CQE_STATUS_V(T4_ERR_SWFLUSH) |
+ CQE_OPCODE_V(swcqe->opcode) |
+ CQE_TYPE_V(1) |
+ CQE_SWCQE_V(1) |
+ CQE_QPID_V(wq->sq.qid));
CQE_WRID_SQ_IDX(&cqe) = swcqe->idx;
- cqe.bits_type_ts = cpu_to_be64(V_CQE_GENBIT((u64)cq->gen));
+ cqe.bits_type_ts = cpu_to_be64(CQE_GENBIT_V((u64)cq->gen));
cq->sw_queue[cq->sw_pidx] = cqe;
t4_swcq_produce(cq);
}
*/
PDBG("%s moving cqe into swcq sq idx %u cq idx %u\n",
__func__, cidx, cq->sw_pidx);
- swsqe->cqe.header |= htonl(V_CQE_SWCQE(1));
+ swsqe->cqe.header |= htonl(CQE_SWCQE_V(1));
cq->sw_queue[cq->sw_pidx] = swsqe->cqe;
t4_swcq_produce(cq);
swsqe->flushed = 1;
{
read_cqe->u.scqe.cidx = wq->sq.oldest_read->idx;
read_cqe->len = htonl(wq->sq.oldest_read->read_len);
- read_cqe->header = htonl(V_CQE_QPID(CQE_QPID(hw_cqe)) |
- V_CQE_SWCQE(SW_CQE(hw_cqe)) |
- V_CQE_OPCODE(FW_RI_READ_REQ) |
- V_CQE_TYPE(1));
+ read_cqe->header = htonl(CQE_QPID_V(CQE_QPID(hw_cqe)) |
+ CQE_SWCQE_V(SW_CQE(hw_cqe)) |
+ CQE_OPCODE_V(FW_RI_READ_REQ) |
+ CQE_TYPE_V(1));
read_cqe->bits_type_ts = hw_cqe->bits_type_ts;
}
} else {
swcqe = &chp->cq.sw_queue[chp->cq.sw_pidx];
*swcqe = *hw_cqe;
- swcqe->header |= cpu_to_be32(V_CQE_SWCQE(1));
+ swcqe->header |= cpu_to_be32(CQE_SWCQE_V(1));
t4_swcq_produce(&chp->cq);
}
next_cqe:
}
if (unlikely((CQE_WRID_MSN(hw_cqe) != (wq->rq.msn)))) {
t4_set_wq_in_error(wq);
- hw_cqe->header |= htonl(V_CQE_STATUS(T4_ERR_MSN));
+ hw_cqe->header |= htonl(CQE_STATUS_V(T4_ERR_MSN));
goto proc_cqe;
}
goto proc_cqe;
"stag: idx 0x%x valid %d key 0x%x state %d pdid %d "
"perm 0x%x ps %d len 0x%llx va 0x%llx\n",
(u32)id<<8,
- G_FW_RI_TPTE_VALID(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_STAGKEY(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_STAGSTATE(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_PDID(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_PERM(ntohl(tpte.locread_to_qpid)),
- G_FW_RI_TPTE_PS(ntohl(tpte.locread_to_qpid)),
+ FW_RI_TPTE_VALID_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_STAGKEY_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_STAGSTATE_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_PDID_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_PERM_G(ntohl(tpte.locread_to_qpid)),
+ FW_RI_TPTE_PS_G(ntohl(tpte.locread_to_qpid)),
((u64)ntohl(tpte.len_hi) << 32) | ntohl(tpte.len_lo),
((u64)ntohl(tpte.va_hi) << 32) | ntohl(tpte.va_lo_fbo));
if (cc < space)
PDBG("stag idx 0x%x valid %d key 0x%x state %d pdid %d "
"perm 0x%x ps %d len 0x%llx va 0x%llx\n",
stag & 0xffffff00,
- G_FW_RI_TPTE_VALID(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_STAGKEY(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_STAGSTATE(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_PDID(ntohl(tpte.valid_to_pdid)),
- G_FW_RI_TPTE_PERM(ntohl(tpte.locread_to_qpid)),
- G_FW_RI_TPTE_PS(ntohl(tpte.locread_to_qpid)),
+ FW_RI_TPTE_VALID_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_STAGKEY_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_STAGSTATE_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_PDID_G(ntohl(tpte.valid_to_pdid)),
+ FW_RI_TPTE_PERM_G(ntohl(tpte.locread_to_qpid)),
+ FW_RI_TPTE_PS_G(ntohl(tpte.locread_to_qpid)),
((u64)ntohl(tpte.len_hi) << 32) | ntohl(tpte.len_lo),
((u64)ntohl(tpte.va_hi) << 32) | ntohl(tpte.va_lo_fbo));
}
req->wr.wr_lo = wait ? (__force __be64)(unsigned long) &wr_wait : 0L;
req->wr.wr_mid = cpu_to_be32(FW_WR_LEN16_V(DIV_ROUND_UP(wr_len, 16)));
req->cmd = cpu_to_be32(ULPTX_CMD_V(ULP_TX_MEM_WRITE));
- req->cmd |= cpu_to_be32(V_T5_ULP_MEMIO_ORDER(1));
+ req->cmd |= cpu_to_be32(T5_ULP_MEMIO_ORDER_V(1));
req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN_V(len>>5));
req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr), 16));
req->lock_addr = cpu_to_be32(ULP_MEMIO_ADDR_V(addr));
sgl = (struct ulptx_sgl *)(req + 1);
sgl->cmd_nsge = cpu_to_be32(ULPTX_CMD_V(ULP_TX_SC_DSGL) |
- ULPTX_NSGE(1));
+ ULPTX_NSGE_V(1));
sgl->len0 = cpu_to_be32(len);
sgl->addr0 = cpu_to_be64(data);
if (reset_tpt_entry)
memset(&tpt, 0, sizeof(tpt));
else {
- tpt.valid_to_pdid = cpu_to_be32(F_FW_RI_TPTE_VALID |
- V_FW_RI_TPTE_STAGKEY((*stag & M_FW_RI_TPTE_STAGKEY)) |
- V_FW_RI_TPTE_STAGSTATE(stag_state) |
- V_FW_RI_TPTE_STAGTYPE(type) | V_FW_RI_TPTE_PDID(pdid));
- tpt.locread_to_qpid = cpu_to_be32(V_FW_RI_TPTE_PERM(perm) |
- (bind_enabled ? F_FW_RI_TPTE_MWBINDEN : 0) |
- V_FW_RI_TPTE_ADDRTYPE((zbva ? FW_RI_ZERO_BASED_TO :
+ tpt.valid_to_pdid = cpu_to_be32(FW_RI_TPTE_VALID_F |
+ FW_RI_TPTE_STAGKEY_V((*stag & FW_RI_TPTE_STAGKEY_M)) |
+ FW_RI_TPTE_STAGSTATE_V(stag_state) |
+ FW_RI_TPTE_STAGTYPE_V(type) | FW_RI_TPTE_PDID_V(pdid));
+ tpt.locread_to_qpid = cpu_to_be32(FW_RI_TPTE_PERM_V(perm) |
+ (bind_enabled ? FW_RI_TPTE_MWBINDEN_F : 0) |
+ FW_RI_TPTE_ADDRTYPE_V((zbva ? FW_RI_ZERO_BASED_TO :
FW_RI_VA_BASED_TO))|
- V_FW_RI_TPTE_PS(page_size));
+ FW_RI_TPTE_PS_V(page_size));
tpt.nosnoop_pbladdr = !pbl_size ? 0 : cpu_to_be32(
- V_FW_RI_TPTE_PBLADDR(PBL_OFF(rdev, pbl_addr)>>3));
+ FW_RI_TPTE_PBLADDR_V(PBL_OFF(rdev, pbl_addr)>>3));
tpt.len_lo = cpu_to_be32((u32)(len & 0xffffffffUL));
tpt.va_hi = cpu_to_be32((u32)(to >> 32));
tpt.va_lo_fbo = cpu_to_be32((u32)(to & 0xffffffffUL));
memset(res_wr, 0, wr_len);
res_wr->op_nres = cpu_to_be32(
FW_WR_OP_V(FW_RI_RES_WR) |
- V_FW_RI_RES_WR_NRES(2) |
+ FW_RI_RES_WR_NRES_V(2) |
FW_WR_COMPL_F);
res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
res_wr->cookie = (unsigned long) &wr_wait;
rdev->hw_queue.t4_eq_status_entries;
res->u.sqrq.fetchszm_to_iqid = cpu_to_be32(
- V_FW_RI_RES_WR_HOSTFCMODE(0) | /* no host cidx updates */
- V_FW_RI_RES_WR_CPRIO(0) | /* don't keep in chip cache */
- V_FW_RI_RES_WR_PCIECHN(0) | /* set by uP at ri_init time */
- (t4_sq_onchip(&wq->sq) ? F_FW_RI_RES_WR_ONCHIP : 0) |
- V_FW_RI_RES_WR_IQID(scq->cqid));
+ FW_RI_RES_WR_HOSTFCMODE_V(0) | /* no host cidx updates */
+ FW_RI_RES_WR_CPRIO_V(0) | /* don't keep in chip cache */
+ FW_RI_RES_WR_PCIECHN_V(0) | /* set by uP at ri_init time */
+ (t4_sq_onchip(&wq->sq) ? FW_RI_RES_WR_ONCHIP_F : 0) |
+ FW_RI_RES_WR_IQID_V(scq->cqid));
res->u.sqrq.dcaen_to_eqsize = cpu_to_be32(
- V_FW_RI_RES_WR_DCAEN(0) |
- V_FW_RI_RES_WR_DCACPU(0) |
- V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(2) |
- V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
- V_FW_RI_RES_WR_CIDXFTHRESH(0) |
- V_FW_RI_RES_WR_EQSIZE(eqsize));
+ FW_RI_RES_WR_DCAEN_V(0) |
+ FW_RI_RES_WR_DCACPU_V(0) |
+ FW_RI_RES_WR_FBMIN_V(2) |
+ FW_RI_RES_WR_FBMAX_V(2) |
+ FW_RI_RES_WR_CIDXFTHRESHO_V(0) |
+ FW_RI_RES_WR_CIDXFTHRESH_V(0) |
+ FW_RI_RES_WR_EQSIZE_V(eqsize));
res->u.sqrq.eqid = cpu_to_be32(wq->sq.qid);
res->u.sqrq.eqaddr = cpu_to_be64(wq->sq.dma_addr);
res++;
eqsize = wq->rq.size * T4_RQ_NUM_SLOTS +
rdev->hw_queue.t4_eq_status_entries;
res->u.sqrq.fetchszm_to_iqid = cpu_to_be32(
- V_FW_RI_RES_WR_HOSTFCMODE(0) | /* no host cidx updates */
- V_FW_RI_RES_WR_CPRIO(0) | /* don't keep in chip cache */
- V_FW_RI_RES_WR_PCIECHN(0) | /* set by uP at ri_init time */
- V_FW_RI_RES_WR_IQID(rcq->cqid));
+ FW_RI_RES_WR_HOSTFCMODE_V(0) | /* no host cidx updates */
+ FW_RI_RES_WR_CPRIO_V(0) | /* don't keep in chip cache */
+ FW_RI_RES_WR_PCIECHN_V(0) | /* set by uP at ri_init time */
+ FW_RI_RES_WR_IQID_V(rcq->cqid));
res->u.sqrq.dcaen_to_eqsize = cpu_to_be32(
- V_FW_RI_RES_WR_DCAEN(0) |
- V_FW_RI_RES_WR_DCACPU(0) |
- V_FW_RI_RES_WR_FBMIN(2) |
- V_FW_RI_RES_WR_FBMAX(2) |
- V_FW_RI_RES_WR_CIDXFTHRESHO(0) |
- V_FW_RI_RES_WR_CIDXFTHRESH(0) |
- V_FW_RI_RES_WR_EQSIZE(eqsize));
+ FW_RI_RES_WR_DCAEN_V(0) |
+ FW_RI_RES_WR_DCACPU_V(0) |
+ FW_RI_RES_WR_FBMIN_V(2) |
+ FW_RI_RES_WR_FBMAX_V(2) |
+ FW_RI_RES_WR_CIDXFTHRESHO_V(0) |
+ FW_RI_RES_WR_CIDXFTHRESH_V(0) |
+ FW_RI_RES_WR_EQSIZE_V(eqsize));
res->u.sqrq.eqid = cpu_to_be32(wq->rq.qid);
res->u.sqrq.eqaddr = cpu_to_be64(wq->rq.dma_addr);
case IB_WR_SEND:
if (wr->send_flags & IB_SEND_SOLICITED)
wqe->send.sendop_pkd = cpu_to_be32(
- V_FW_RI_SEND_WR_SENDOP(FW_RI_SEND_WITH_SE));
+ FW_RI_SEND_WR_SENDOP_V(FW_RI_SEND_WITH_SE));
else
wqe->send.sendop_pkd = cpu_to_be32(
- V_FW_RI_SEND_WR_SENDOP(FW_RI_SEND));
+ FW_RI_SEND_WR_SENDOP_V(FW_RI_SEND));
wqe->send.stag_inv = 0;
break;
case IB_WR_SEND_WITH_INV:
if (wr->send_flags & IB_SEND_SOLICITED)
wqe->send.sendop_pkd = cpu_to_be32(
- V_FW_RI_SEND_WR_SENDOP(FW_RI_SEND_WITH_SE_INV));
+ FW_RI_SEND_WR_SENDOP_V(FW_RI_SEND_WITH_SE_INV));
else
wqe->send.sendop_pkd = cpu_to_be32(
- V_FW_RI_SEND_WR_SENDOP(FW_RI_SEND_WITH_INV));
+ FW_RI_SEND_WR_SENDOP_V(FW_RI_SEND_WITH_INV));
wqe->send.stag_inv = cpu_to_be32(wr->ex.invalidate_rkey);
break;
wqe->u.init.type = FW_RI_TYPE_INIT;
wqe->u.init.mpareqbit_p2ptype =
- V_FW_RI_WR_MPAREQBIT(qhp->attr.mpa_attr.initiator) |
- V_FW_RI_WR_P2PTYPE(qhp->attr.mpa_attr.p2p_type);
+ FW_RI_WR_MPAREQBIT_V(qhp->attr.mpa_attr.initiator) |
+ FW_RI_WR_P2PTYPE_V(qhp->attr.mpa_attr.p2p_type);
wqe->u.init.mpa_attrs = FW_RI_MPA_IETF_ENABLE;
if (qhp->attr.mpa_attr.recv_marker_enabled)
wqe->u.init.mpa_attrs |= FW_RI_MPA_RX_MARKER_ENABLE;
if (mm5) {
mm5->key = uresp.ma_sync_key;
mm5->addr = (pci_resource_start(rhp->rdev.lldi.pdev, 0)
- + A_PCIE_MA_SYNC) & PAGE_MASK;
+ + PCIE_MA_SYNC_A) & PAGE_MASK;
mm5->len = PAGE_SIZE;
insert_mmap(ucontext, mm5);
}
#define T4_PAGESIZE_MASK 0xffff000 /* 4KB-128MB */
#define T4_STAG_UNSET 0xffffffff
#define T4_FW_MAJ 0
-#define A_PCIE_MA_SYNC 0x30b4
+#define PCIE_MA_SYNC_A 0x30b4
struct t4_status_page {
__be32 rsvd1; /* flit 0 - hw owns */
/* macros for flit 0 of the cqe */
-#define S_CQE_QPID 12
-#define M_CQE_QPID 0xFFFFF
-#define G_CQE_QPID(x) ((((x) >> S_CQE_QPID)) & M_CQE_QPID)
-#define V_CQE_QPID(x) ((x)<<S_CQE_QPID)
-
-#define S_CQE_SWCQE 11
-#define M_CQE_SWCQE 0x1
-#define G_CQE_SWCQE(x) ((((x) >> S_CQE_SWCQE)) & M_CQE_SWCQE)
-#define V_CQE_SWCQE(x) ((x)<<S_CQE_SWCQE)
-
-#define S_CQE_STATUS 5
-#define M_CQE_STATUS 0x1F
-#define G_CQE_STATUS(x) ((((x) >> S_CQE_STATUS)) & M_CQE_STATUS)
-#define V_CQE_STATUS(x) ((x)<<S_CQE_STATUS)
-
-#define S_CQE_TYPE 4
-#define M_CQE_TYPE 0x1
-#define G_CQE_TYPE(x) ((((x) >> S_CQE_TYPE)) & M_CQE_TYPE)
-#define V_CQE_TYPE(x) ((x)<<S_CQE_TYPE)
-
-#define S_CQE_OPCODE 0
-#define M_CQE_OPCODE 0xF
-#define G_CQE_OPCODE(x) ((((x) >> S_CQE_OPCODE)) & M_CQE_OPCODE)
-#define V_CQE_OPCODE(x) ((x)<<S_CQE_OPCODE)
-
-#define SW_CQE(x) (G_CQE_SWCQE(be32_to_cpu((x)->header)))
-#define CQE_QPID(x) (G_CQE_QPID(be32_to_cpu((x)->header)))
-#define CQE_TYPE(x) (G_CQE_TYPE(be32_to_cpu((x)->header)))
+#define CQE_QPID_S 12
+#define CQE_QPID_M 0xFFFFF
+#define CQE_QPID_G(x) ((((x) >> CQE_QPID_S)) & CQE_QPID_M)
+#define CQE_QPID_V(x) ((x)<<CQE_QPID_S)
+
+#define CQE_SWCQE_S 11
+#define CQE_SWCQE_M 0x1
+#define CQE_SWCQE_G(x) ((((x) >> CQE_SWCQE_S)) & CQE_SWCQE_M)
+#define CQE_SWCQE_V(x) ((x)<<CQE_SWCQE_S)
+
+#define CQE_STATUS_S 5
+#define CQE_STATUS_M 0x1F
+#define CQE_STATUS_G(x) ((((x) >> CQE_STATUS_S)) & CQE_STATUS_M)
+#define CQE_STATUS_V(x) ((x)<<CQE_STATUS_S)
+
+#define CQE_TYPE_S 4
+#define CQE_TYPE_M 0x1
+#define CQE_TYPE_G(x) ((((x) >> CQE_TYPE_S)) & CQE_TYPE_M)
+#define CQE_TYPE_V(x) ((x)<<CQE_TYPE_S)
+
+#define CQE_OPCODE_S 0
+#define CQE_OPCODE_M 0xF
+#define CQE_OPCODE_G(x) ((((x) >> CQE_OPCODE_S)) & CQE_OPCODE_M)
+#define CQE_OPCODE_V(x) ((x)<<CQE_OPCODE_S)
+
+#define SW_CQE(x) (CQE_SWCQE_G(be32_to_cpu((x)->header)))
+#define CQE_QPID(x) (CQE_QPID_G(be32_to_cpu((x)->header)))
+#define CQE_TYPE(x) (CQE_TYPE_G(be32_to_cpu((x)->header)))
#define SQ_TYPE(x) (CQE_TYPE((x)))
#define RQ_TYPE(x) (!CQE_TYPE((x)))
-#define CQE_STATUS(x) (G_CQE_STATUS(be32_to_cpu((x)->header)))
-#define CQE_OPCODE(x) (G_CQE_OPCODE(be32_to_cpu((x)->header)))
+#define CQE_STATUS(x) (CQE_STATUS_G(be32_to_cpu((x)->header)))
+#define CQE_OPCODE(x) (CQE_OPCODE_G(be32_to_cpu((x)->header)))
#define CQE_SEND_OPCODE(x)( \
- (G_CQE_OPCODE(be32_to_cpu((x)->header)) == FW_RI_SEND) || \
- (G_CQE_OPCODE(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_SE) || \
- (G_CQE_OPCODE(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_INV) || \
- (G_CQE_OPCODE(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_SE_INV))
+ (CQE_OPCODE_G(be32_to_cpu((x)->header)) == FW_RI_SEND) || \
+ (CQE_OPCODE_G(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_SE) || \
+ (CQE_OPCODE_G(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_INV) || \
+ (CQE_OPCODE_G(be32_to_cpu((x)->header)) == FW_RI_SEND_WITH_SE_INV))
#define CQE_LEN(x) (be32_to_cpu((x)->len))
#define CQE_WRID_LOW(x) (be32_to_cpu((x)->u.gen.wrid_low))
/* macros for flit 3 of the cqe */
-#define S_CQE_GENBIT 63
-#define M_CQE_GENBIT 0x1
-#define G_CQE_GENBIT(x) (((x) >> S_CQE_GENBIT) & M_CQE_GENBIT)
-#define V_CQE_GENBIT(x) ((x)<<S_CQE_GENBIT)
+#define CQE_GENBIT_S 63
+#define CQE_GENBIT_M 0x1
+#define CQE_GENBIT_G(x) (((x) >> CQE_GENBIT_S) & CQE_GENBIT_M)
+#define CQE_GENBIT_V(x) ((x)<<CQE_GENBIT_S)
-#define S_CQE_OVFBIT 62
-#define M_CQE_OVFBIT 0x1
-#define G_CQE_OVFBIT(x) ((((x) >> S_CQE_OVFBIT)) & M_CQE_OVFBIT)
+#define CQE_OVFBIT_S 62
+#define CQE_OVFBIT_M 0x1
+#define CQE_OVFBIT_G(x) ((((x) >> CQE_OVFBIT_S)) & CQE_OVFBIT_M)
-#define S_CQE_IQTYPE 60
-#define M_CQE_IQTYPE 0x3
-#define G_CQE_IQTYPE(x) ((((x) >> S_CQE_IQTYPE)) & M_CQE_IQTYPE)
+#define CQE_IQTYPE_S 60
+#define CQE_IQTYPE_M 0x3
+#define CQE_IQTYPE_G(x) ((((x) >> CQE_IQTYPE_S)) & CQE_IQTYPE_M)
-#define M_CQE_TS 0x0fffffffffffffffULL
-#define G_CQE_TS(x) ((x) & M_CQE_TS)
+#define CQE_TS_M 0x0fffffffffffffffULL
+#define CQE_TS_G(x) ((x) & CQE_TS_M)
-#define CQE_OVFBIT(x) ((unsigned)G_CQE_OVFBIT(be64_to_cpu((x)->bits_type_ts)))
-#define CQE_GENBIT(x) ((unsigned)G_CQE_GENBIT(be64_to_cpu((x)->bits_type_ts)))
-#define CQE_TS(x) (G_CQE_TS(be64_to_cpu((x)->bits_type_ts)))
+#define CQE_OVFBIT(x) ((unsigned)CQE_OVFBIT_G(be64_to_cpu((x)->bits_type_ts)))
+#define CQE_GENBIT(x) ((unsigned)CQE_GENBIT_G(be64_to_cpu((x)->bits_type_ts)))
+#define CQE_TS(x) (CQE_TS_G(be64_to_cpu((x)->bits_type_ts)))
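Editor's note: the wholesale rename above is purely mechanical. The old cxgb4-private prefix style (S_FOO shift, M_FOO unshifted mask, V_FOO(x) insert a value, G_FOO(x) extract one, F_FOO single-bit flag) becomes the suffix style (FOO_S/_M/_V/_G/_F) used by the rest of the upstream driver; no field widths or positions change. Usage sketch with the fields defined above (values illustrative):

	__be32 hdr = cpu_to_be32(CQE_QPID_V(qpid) |
				 CQE_TYPE_V(1) |
				 CQE_OPCODE_V(FW_RI_SEND));
	u8 op = CQE_OPCODE_G(be32_to_cpu(hdr));	/* == FW_RI_SEND */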
struct t4_swsqe {
u64 wr_id;
} else {
PDBG("%s: DB wq->sq.pidx = %d\n",
__func__, wq->sq.pidx);
- writel(PIDX_T5(inc), wq->sq.udb);
+ writel(PIDX_T5_V(inc), wq->sq.udb);
}
/* Flush user doorbell area writes. */
wmb();
return;
}
- writel(QID(wq->sq.qid) | PIDX(inc), wq->db);
+ writel(QID_V(wq->sq.qid) | PIDX_V(inc), wq->db);
}
static inline void t4_ring_rq_db(struct t4_wq *wq, u16 inc, u8 t5,
} else {
PDBG("%s: DB wq->rq.pidx = %d\n",
__func__, wq->rq.pidx);
- writel(PIDX_T5(inc), wq->rq.udb);
+ writel(PIDX_T5_V(inc), wq->rq.udb);
}
/* Flush user doorbell area writes. */
wmb();
return;
}
- writel(QID(wq->rq.qid) | PIDX(inc), wq->db);
+ writel(QID_V(wq->rq.qid) | PIDX_V(inc), wq->db);
}
static inline int t4_wq_in_error(struct t4_wq *wq)
u32 val;
set_bit(CQ_ARMED, &cq->flags);
- while (cq->cidx_inc > CIDXINC_MASK) {
- val = SEINTARM(0) | CIDXINC(CIDXINC_MASK) | TIMERREG(7) |
- INGRESSQID(cq->cqid);
+ while (cq->cidx_inc > CIDXINC_M) {
+ val = SEINTARM_V(0) | CIDXINC_V(CIDXINC_M) | TIMERREG_V(7) |
+ INGRESSQID_V(cq->cqid);
writel(val, cq->gts);
- cq->cidx_inc -= CIDXINC_MASK;
+ cq->cidx_inc -= CIDXINC_M;
}
- val = SEINTARM(se) | CIDXINC(cq->cidx_inc) | TIMERREG(6) |
- INGRESSQID(cq->cqid);
+ val = SEINTARM_V(se) | CIDXINC_V(cq->cidx_inc) | TIMERREG_V(6) |
+ INGRESSQID_V(cq->cqid);
writel(val, cq->gts);
cq->cidx_inc = 0;
return 0;
static inline void t4_hwcq_consume(struct t4_cq *cq)
{
cq->bits_type_ts = cq->queue[cq->cidx].bits_type_ts;
- if (++cq->cidx_inc == (cq->size >> 4) || cq->cidx_inc == CIDXINC_MASK) {
+ if (++cq->cidx_inc == (cq->size >> 4) || cq->cidx_inc == CIDXINC_M) {
u32 val;
- val = SEINTARM(0) | CIDXINC(cq->cidx_inc) | TIMERREG(7) |
- INGRESSQID(cq->cqid);
+ val = SEINTARM_V(0) | CIDXINC_V(cq->cidx_inc) | TIMERREG_V(7) |
+ INGRESSQID_V(cq->cqid);
writel(val, cq->gts);
cq->cidx_inc = 0;
}
__be32 len_hi;
};
-#define S_FW_RI_TPTE_VALID 31
-#define M_FW_RI_TPTE_VALID 0x1
-#define V_FW_RI_TPTE_VALID(x) ((x) << S_FW_RI_TPTE_VALID)
-#define G_FW_RI_TPTE_VALID(x) \
- (((x) >> S_FW_RI_TPTE_VALID) & M_FW_RI_TPTE_VALID)
-#define F_FW_RI_TPTE_VALID V_FW_RI_TPTE_VALID(1U)
-
-#define S_FW_RI_TPTE_STAGKEY 23
-#define M_FW_RI_TPTE_STAGKEY 0xff
-#define V_FW_RI_TPTE_STAGKEY(x) ((x) << S_FW_RI_TPTE_STAGKEY)
-#define G_FW_RI_TPTE_STAGKEY(x) \
- (((x) >> S_FW_RI_TPTE_STAGKEY) & M_FW_RI_TPTE_STAGKEY)
-
-#define S_FW_RI_TPTE_STAGSTATE 22
-#define M_FW_RI_TPTE_STAGSTATE 0x1
-#define V_FW_RI_TPTE_STAGSTATE(x) ((x) << S_FW_RI_TPTE_STAGSTATE)
-#define G_FW_RI_TPTE_STAGSTATE(x) \
- (((x) >> S_FW_RI_TPTE_STAGSTATE) & M_FW_RI_TPTE_STAGSTATE)
-#define F_FW_RI_TPTE_STAGSTATE V_FW_RI_TPTE_STAGSTATE(1U)
-
-#define S_FW_RI_TPTE_STAGTYPE 20
-#define M_FW_RI_TPTE_STAGTYPE 0x3
-#define V_FW_RI_TPTE_STAGTYPE(x) ((x) << S_FW_RI_TPTE_STAGTYPE)
-#define G_FW_RI_TPTE_STAGTYPE(x) \
- (((x) >> S_FW_RI_TPTE_STAGTYPE) & M_FW_RI_TPTE_STAGTYPE)
-
-#define S_FW_RI_TPTE_PDID 0
-#define M_FW_RI_TPTE_PDID 0xfffff
-#define V_FW_RI_TPTE_PDID(x) ((x) << S_FW_RI_TPTE_PDID)
-#define G_FW_RI_TPTE_PDID(x) \
- (((x) >> S_FW_RI_TPTE_PDID) & M_FW_RI_TPTE_PDID)
-
-#define S_FW_RI_TPTE_PERM 28
-#define M_FW_RI_TPTE_PERM 0xf
-#define V_FW_RI_TPTE_PERM(x) ((x) << S_FW_RI_TPTE_PERM)
-#define G_FW_RI_TPTE_PERM(x) \
- (((x) >> S_FW_RI_TPTE_PERM) & M_FW_RI_TPTE_PERM)
-
-#define S_FW_RI_TPTE_REMINVDIS 27
-#define M_FW_RI_TPTE_REMINVDIS 0x1
-#define V_FW_RI_TPTE_REMINVDIS(x) ((x) << S_FW_RI_TPTE_REMINVDIS)
-#define G_FW_RI_TPTE_REMINVDIS(x) \
- (((x) >> S_FW_RI_TPTE_REMINVDIS) & M_FW_RI_TPTE_REMINVDIS)
-#define F_FW_RI_TPTE_REMINVDIS V_FW_RI_TPTE_REMINVDIS(1U)
-
-#define S_FW_RI_TPTE_ADDRTYPE 26
-#define M_FW_RI_TPTE_ADDRTYPE 1
-#define V_FW_RI_TPTE_ADDRTYPE(x) ((x) << S_FW_RI_TPTE_ADDRTYPE)
-#define G_FW_RI_TPTE_ADDRTYPE(x) \
- (((x) >> S_FW_RI_TPTE_ADDRTYPE) & M_FW_RI_TPTE_ADDRTYPE)
-#define F_FW_RI_TPTE_ADDRTYPE V_FW_RI_TPTE_ADDRTYPE(1U)
-
-#define S_FW_RI_TPTE_MWBINDEN 25
-#define M_FW_RI_TPTE_MWBINDEN 0x1
-#define V_FW_RI_TPTE_MWBINDEN(x) ((x) << S_FW_RI_TPTE_MWBINDEN)
-#define G_FW_RI_TPTE_MWBINDEN(x) \
- (((x) >> S_FW_RI_TPTE_MWBINDEN) & M_FW_RI_TPTE_MWBINDEN)
-#define F_FW_RI_TPTE_MWBINDEN V_FW_RI_TPTE_MWBINDEN(1U)
-
-#define S_FW_RI_TPTE_PS 20
-#define M_FW_RI_TPTE_PS 0x1f
-#define V_FW_RI_TPTE_PS(x) ((x) << S_FW_RI_TPTE_PS)
-#define G_FW_RI_TPTE_PS(x) \
- (((x) >> S_FW_RI_TPTE_PS) & M_FW_RI_TPTE_PS)
-
-#define S_FW_RI_TPTE_QPID 0
-#define M_FW_RI_TPTE_QPID 0xfffff
-#define V_FW_RI_TPTE_QPID(x) ((x) << S_FW_RI_TPTE_QPID)
-#define G_FW_RI_TPTE_QPID(x) \
- (((x) >> S_FW_RI_TPTE_QPID) & M_FW_RI_TPTE_QPID)
-
-#define S_FW_RI_TPTE_NOSNOOP 30
-#define M_FW_RI_TPTE_NOSNOOP 0x1
-#define V_FW_RI_TPTE_NOSNOOP(x) ((x) << S_FW_RI_TPTE_NOSNOOP)
-#define G_FW_RI_TPTE_NOSNOOP(x) \
- (((x) >> S_FW_RI_TPTE_NOSNOOP) & M_FW_RI_TPTE_NOSNOOP)
-#define F_FW_RI_TPTE_NOSNOOP V_FW_RI_TPTE_NOSNOOP(1U)
-
-#define S_FW_RI_TPTE_PBLADDR 0
-#define M_FW_RI_TPTE_PBLADDR 0x1fffffff
-#define V_FW_RI_TPTE_PBLADDR(x) ((x) << S_FW_RI_TPTE_PBLADDR)
-#define G_FW_RI_TPTE_PBLADDR(x) \
- (((x) >> S_FW_RI_TPTE_PBLADDR) & M_FW_RI_TPTE_PBLADDR)
-
-#define S_FW_RI_TPTE_DCA 24
-#define M_FW_RI_TPTE_DCA 0x1f
-#define V_FW_RI_TPTE_DCA(x) ((x) << S_FW_RI_TPTE_DCA)
-#define G_FW_RI_TPTE_DCA(x) \
- (((x) >> S_FW_RI_TPTE_DCA) & M_FW_RI_TPTE_DCA)
-
-#define S_FW_RI_TPTE_MWBCNT_PSTAG 0
-#define M_FW_RI_TPTE_MWBCNT_PSTAG 0xffffff
-#define V_FW_RI_TPTE_MWBCNT_PSTAT(x) \
- ((x) << S_FW_RI_TPTE_MWBCNT_PSTAG)
-#define G_FW_RI_TPTE_MWBCNT_PSTAG(x) \
- (((x) >> S_FW_RI_TPTE_MWBCNT_PSTAG) & M_FW_RI_TPTE_MWBCNT_PSTAG)
+#define FW_RI_TPTE_VALID_S 31
+#define FW_RI_TPTE_VALID_M 0x1
+#define FW_RI_TPTE_VALID_V(x) ((x) << FW_RI_TPTE_VALID_S)
+#define FW_RI_TPTE_VALID_G(x) \
+ (((x) >> FW_RI_TPTE_VALID_S) & FW_RI_TPTE_VALID_M)
+#define FW_RI_TPTE_VALID_F FW_RI_TPTE_VALID_V(1U)
+
+#define FW_RI_TPTE_STAGKEY_S 23
+#define FW_RI_TPTE_STAGKEY_M 0xff
+#define FW_RI_TPTE_STAGKEY_V(x) ((x) << FW_RI_TPTE_STAGKEY_S)
+#define FW_RI_TPTE_STAGKEY_G(x) \
+ (((x) >> FW_RI_TPTE_STAGKEY_S) & FW_RI_TPTE_STAGKEY_M)
+
+#define FW_RI_TPTE_STAGSTATE_S 22
+#define FW_RI_TPTE_STAGSTATE_M 0x1
+#define FW_RI_TPTE_STAGSTATE_V(x) ((x) << FW_RI_TPTE_STAGSTATE_S)
+#define FW_RI_TPTE_STAGSTATE_G(x) \
+ (((x) >> FW_RI_TPTE_STAGSTATE_S) & FW_RI_TPTE_STAGSTATE_M)
+#define FW_RI_TPTE_STAGSTATE_F FW_RI_TPTE_STAGSTATE_V(1U)
+
+#define FW_RI_TPTE_STAGTYPE_S 20
+#define FW_RI_TPTE_STAGTYPE_M 0x3
+#define FW_RI_TPTE_STAGTYPE_V(x) ((x) << FW_RI_TPTE_STAGTYPE_S)
+#define FW_RI_TPTE_STAGTYPE_G(x) \
+ (((x) >> FW_RI_TPTE_STAGTYPE_S) & FW_RI_TPTE_STAGTYPE_M)
+
+#define FW_RI_TPTE_PDID_S 0
+#define FW_RI_TPTE_PDID_M 0xfffff
+#define FW_RI_TPTE_PDID_V(x) ((x) << FW_RI_TPTE_PDID_S)
+#define FW_RI_TPTE_PDID_G(x) \
+ (((x) >> FW_RI_TPTE_PDID_S) & FW_RI_TPTE_PDID_M)
+
+#define FW_RI_TPTE_PERM_S 28
+#define FW_RI_TPTE_PERM_M 0xf
+#define FW_RI_TPTE_PERM_V(x) ((x) << FW_RI_TPTE_PERM_S)
+#define FW_RI_TPTE_PERM_G(x) \
+ (((x) >> FW_RI_TPTE_PERM_S) & FW_RI_TPTE_PERM_M)
+
+#define FW_RI_TPTE_REMINVDIS_S 27
+#define FW_RI_TPTE_REMINVDIS_M 0x1
+#define FW_RI_TPTE_REMINVDIS_V(x) ((x) << FW_RI_TPTE_REMINVDIS_S)
+#define FW_RI_TPTE_REMINVDIS_G(x) \
+ (((x) >> FW_RI_TPTE_REMINVDIS_S) & FW_RI_TPTE_REMINVDIS_M)
+#define FW_RI_TPTE_REMINVDIS_F FW_RI_TPTE_REMINVDIS_V(1U)
+
+#define FW_RI_TPTE_ADDRTYPE_S 26
+#define FW_RI_TPTE_ADDRTYPE_M 1
+#define FW_RI_TPTE_ADDRTYPE_V(x) ((x) << FW_RI_TPTE_ADDRTYPE_S)
+#define FW_RI_TPTE_ADDRTYPE_G(x) \
+ (((x) >> FW_RI_TPTE_ADDRTYPE_S) & FW_RI_TPTE_ADDRTYPE_M)
+#define FW_RI_TPTE_ADDRTYPE_F FW_RI_TPTE_ADDRTYPE_V(1U)
+
+#define FW_RI_TPTE_MWBINDEN_S 25
+#define FW_RI_TPTE_MWBINDEN_M 0x1
+#define FW_RI_TPTE_MWBINDEN_V(x) ((x) << FW_RI_TPTE_MWBINDEN_S)
+#define FW_RI_TPTE_MWBINDEN_G(x) \
+ (((x) >> FW_RI_TPTE_MWBINDEN_S) & FW_RI_TPTE_MWBINDEN_M)
+#define FW_RI_TPTE_MWBINDEN_F FW_RI_TPTE_MWBINDEN_V(1U)
+
+#define FW_RI_TPTE_PS_S 20
+#define FW_RI_TPTE_PS_M 0x1f
+#define FW_RI_TPTE_PS_V(x) ((x) << FW_RI_TPTE_PS_S)
+#define FW_RI_TPTE_PS_G(x) \
+ (((x) >> FW_RI_TPTE_PS_S) & FW_RI_TPTE_PS_M)
+
+#define FW_RI_TPTE_QPID_S 0
+#define FW_RI_TPTE_QPID_M 0xfffff
+#define FW_RI_TPTE_QPID_V(x) ((x) << FW_RI_TPTE_QPID_S)
+#define FW_RI_TPTE_QPID_G(x) \
+ (((x) >> FW_RI_TPTE_QPID_S) & FW_RI_TPTE_QPID_M)
+
+#define FW_RI_TPTE_NOSNOOP_S 30
+#define FW_RI_TPTE_NOSNOOP_M 0x1
+#define FW_RI_TPTE_NOSNOOP_V(x) ((x) << FW_RI_TPTE_NOSNOOP_S)
+#define FW_RI_TPTE_NOSNOOP_G(x) \
+ (((x) >> FW_RI_TPTE_NOSNOOP_S) & FW_RI_TPTE_NOSNOOP_M)
+#define FW_RI_TPTE_NOSNOOP_F FW_RI_TPTE_NOSNOOP_V(1U)
+
+#define FW_RI_TPTE_PBLADDR_S 0
+#define FW_RI_TPTE_PBLADDR_M 0x1fffffff
+#define FW_RI_TPTE_PBLADDR_V(x) ((x) << FW_RI_TPTE_PBLADDR_S)
+#define FW_RI_TPTE_PBLADDR_G(x) \
+ (((x) >> FW_RI_TPTE_PBLADDR_S) & FW_RI_TPTE_PBLADDR_M)
+
+#define FW_RI_TPTE_DCA_S 24
+#define FW_RI_TPTE_DCA_M 0x1f
+#define FW_RI_TPTE_DCA_V(x) ((x) << FW_RI_TPTE_DCA_S)
+#define FW_RI_TPTE_DCA_G(x) \
+ (((x) >> FW_RI_TPTE_DCA_S) & FW_RI_TPTE_DCA_M)
+
+#define FW_RI_TPTE_MWBCNT_PSTAG_S 0
+#define FW_RI_TPTE_MWBCNT_PSTAG_M 0xffffff
+#define FW_RI_TPTE_MWBCNT_PSTAT_V(x) \
+ ((x) << FW_RI_TPTE_MWBCNT_PSTAG_S)
+#define FW_RI_TPTE_MWBCNT_PSTAG_G(x) \
+ (((x) >> FW_RI_TPTE_MWBCNT_PSTAG_S) & FW_RI_TPTE_MWBCNT_PSTAG_M)
enum fw_ri_res_type {
FW_RI_RES_TYPE_SQ,
#endif
};
-#define S_FW_RI_RES_WR_NRES 0
-#define M_FW_RI_RES_WR_NRES 0xff
-#define V_FW_RI_RES_WR_NRES(x) ((x) << S_FW_RI_RES_WR_NRES)
-#define G_FW_RI_RES_WR_NRES(x) \
- (((x) >> S_FW_RI_RES_WR_NRES) & M_FW_RI_RES_WR_NRES)
-
-#define S_FW_RI_RES_WR_FETCHSZM 26
-#define M_FW_RI_RES_WR_FETCHSZM 0x1
-#define V_FW_RI_RES_WR_FETCHSZM(x) ((x) << S_FW_RI_RES_WR_FETCHSZM)
-#define G_FW_RI_RES_WR_FETCHSZM(x) \
- (((x) >> S_FW_RI_RES_WR_FETCHSZM) & M_FW_RI_RES_WR_FETCHSZM)
-#define F_FW_RI_RES_WR_FETCHSZM V_FW_RI_RES_WR_FETCHSZM(1U)
-
-#define S_FW_RI_RES_WR_STATUSPGNS 25
-#define M_FW_RI_RES_WR_STATUSPGNS 0x1
-#define V_FW_RI_RES_WR_STATUSPGNS(x) ((x) << S_FW_RI_RES_WR_STATUSPGNS)
-#define G_FW_RI_RES_WR_STATUSPGNS(x) \
- (((x) >> S_FW_RI_RES_WR_STATUSPGNS) & M_FW_RI_RES_WR_STATUSPGNS)
-#define F_FW_RI_RES_WR_STATUSPGNS V_FW_RI_RES_WR_STATUSPGNS(1U)
-
-#define S_FW_RI_RES_WR_STATUSPGRO 24
-#define M_FW_RI_RES_WR_STATUSPGRO 0x1
-#define V_FW_RI_RES_WR_STATUSPGRO(x) ((x) << S_FW_RI_RES_WR_STATUSPGRO)
-#define G_FW_RI_RES_WR_STATUSPGRO(x) \
- (((x) >> S_FW_RI_RES_WR_STATUSPGRO) & M_FW_RI_RES_WR_STATUSPGRO)
-#define F_FW_RI_RES_WR_STATUSPGRO V_FW_RI_RES_WR_STATUSPGRO(1U)
-
-#define S_FW_RI_RES_WR_FETCHNS 23
-#define M_FW_RI_RES_WR_FETCHNS 0x1
-#define V_FW_RI_RES_WR_FETCHNS(x) ((x) << S_FW_RI_RES_WR_FETCHNS)
-#define G_FW_RI_RES_WR_FETCHNS(x) \
- (((x) >> S_FW_RI_RES_WR_FETCHNS) & M_FW_RI_RES_WR_FETCHNS)
-#define F_FW_RI_RES_WR_FETCHNS V_FW_RI_RES_WR_FETCHNS(1U)
-
-#define S_FW_RI_RES_WR_FETCHRO 22
-#define M_FW_RI_RES_WR_FETCHRO 0x1
-#define V_FW_RI_RES_WR_FETCHRO(x) ((x) << S_FW_RI_RES_WR_FETCHRO)
-#define G_FW_RI_RES_WR_FETCHRO(x) \
- (((x) >> S_FW_RI_RES_WR_FETCHRO) & M_FW_RI_RES_WR_FETCHRO)
-#define F_FW_RI_RES_WR_FETCHRO V_FW_RI_RES_WR_FETCHRO(1U)
-
-#define S_FW_RI_RES_WR_HOSTFCMODE 20
-#define M_FW_RI_RES_WR_HOSTFCMODE 0x3
-#define V_FW_RI_RES_WR_HOSTFCMODE(x) ((x) << S_FW_RI_RES_WR_HOSTFCMODE)
-#define G_FW_RI_RES_WR_HOSTFCMODE(x) \
- (((x) >> S_FW_RI_RES_WR_HOSTFCMODE) & M_FW_RI_RES_WR_HOSTFCMODE)
-
-#define S_FW_RI_RES_WR_CPRIO 19
-#define M_FW_RI_RES_WR_CPRIO 0x1
-#define V_FW_RI_RES_WR_CPRIO(x) ((x) << S_FW_RI_RES_WR_CPRIO)
-#define G_FW_RI_RES_WR_CPRIO(x) \
- (((x) >> S_FW_RI_RES_WR_CPRIO) & M_FW_RI_RES_WR_CPRIO)
-#define F_FW_RI_RES_WR_CPRIO V_FW_RI_RES_WR_CPRIO(1U)
-
-#define S_FW_RI_RES_WR_ONCHIP 18
-#define M_FW_RI_RES_WR_ONCHIP 0x1
-#define V_FW_RI_RES_WR_ONCHIP(x) ((x) << S_FW_RI_RES_WR_ONCHIP)
-#define G_FW_RI_RES_WR_ONCHIP(x) \
- (((x) >> S_FW_RI_RES_WR_ONCHIP) & M_FW_RI_RES_WR_ONCHIP)
-#define F_FW_RI_RES_WR_ONCHIP V_FW_RI_RES_WR_ONCHIP(1U)
-
-#define S_FW_RI_RES_WR_PCIECHN 16
-#define M_FW_RI_RES_WR_PCIECHN 0x3
-#define V_FW_RI_RES_WR_PCIECHN(x) ((x) << S_FW_RI_RES_WR_PCIECHN)
-#define G_FW_RI_RES_WR_PCIECHN(x) \
- (((x) >> S_FW_RI_RES_WR_PCIECHN) & M_FW_RI_RES_WR_PCIECHN)
-
-#define S_FW_RI_RES_WR_IQID 0
-#define M_FW_RI_RES_WR_IQID 0xffff
-#define V_FW_RI_RES_WR_IQID(x) ((x) << S_FW_RI_RES_WR_IQID)
-#define G_FW_RI_RES_WR_IQID(x) \
- (((x) >> S_FW_RI_RES_WR_IQID) & M_FW_RI_RES_WR_IQID)
-
-#define S_FW_RI_RES_WR_DCAEN 31
-#define M_FW_RI_RES_WR_DCAEN 0x1
-#define V_FW_RI_RES_WR_DCAEN(x) ((x) << S_FW_RI_RES_WR_DCAEN)
-#define G_FW_RI_RES_WR_DCAEN(x) \
- (((x) >> S_FW_RI_RES_WR_DCAEN) & M_FW_RI_RES_WR_DCAEN)
-#define F_FW_RI_RES_WR_DCAEN V_FW_RI_RES_WR_DCAEN(1U)
-
-#define S_FW_RI_RES_WR_DCACPU 26
-#define M_FW_RI_RES_WR_DCACPU 0x1f
-#define V_FW_RI_RES_WR_DCACPU(x) ((x) << S_FW_RI_RES_WR_DCACPU)
-#define G_FW_RI_RES_WR_DCACPU(x) \
- (((x) >> S_FW_RI_RES_WR_DCACPU) & M_FW_RI_RES_WR_DCACPU)
-
-#define S_FW_RI_RES_WR_FBMIN 23
-#define M_FW_RI_RES_WR_FBMIN 0x7
-#define V_FW_RI_RES_WR_FBMIN(x) ((x) << S_FW_RI_RES_WR_FBMIN)
-#define G_FW_RI_RES_WR_FBMIN(x) \
- (((x) >> S_FW_RI_RES_WR_FBMIN) & M_FW_RI_RES_WR_FBMIN)
-
-#define S_FW_RI_RES_WR_FBMAX 20
-#define M_FW_RI_RES_WR_FBMAX 0x7
-#define V_FW_RI_RES_WR_FBMAX(x) ((x) << S_FW_RI_RES_WR_FBMAX)
-#define G_FW_RI_RES_WR_FBMAX(x) \
- (((x) >> S_FW_RI_RES_WR_FBMAX) & M_FW_RI_RES_WR_FBMAX)
-
-#define S_FW_RI_RES_WR_CIDXFTHRESHO 19
-#define M_FW_RI_RES_WR_CIDXFTHRESHO 0x1
-#define V_FW_RI_RES_WR_CIDXFTHRESHO(x) ((x) << S_FW_RI_RES_WR_CIDXFTHRESHO)
-#define G_FW_RI_RES_WR_CIDXFTHRESHO(x) \
- (((x) >> S_FW_RI_RES_WR_CIDXFTHRESHO) & M_FW_RI_RES_WR_CIDXFTHRESHO)
-#define F_FW_RI_RES_WR_CIDXFTHRESHO V_FW_RI_RES_WR_CIDXFTHRESHO(1U)
-
-#define S_FW_RI_RES_WR_CIDXFTHRESH 16
-#define M_FW_RI_RES_WR_CIDXFTHRESH 0x7
-#define V_FW_RI_RES_WR_CIDXFTHRESH(x) ((x) << S_FW_RI_RES_WR_CIDXFTHRESH)
-#define G_FW_RI_RES_WR_CIDXFTHRESH(x) \
- (((x) >> S_FW_RI_RES_WR_CIDXFTHRESH) & M_FW_RI_RES_WR_CIDXFTHRESH)
-
-#define S_FW_RI_RES_WR_EQSIZE 0
-#define M_FW_RI_RES_WR_EQSIZE 0xffff
-#define V_FW_RI_RES_WR_EQSIZE(x) ((x) << S_FW_RI_RES_WR_EQSIZE)
-#define G_FW_RI_RES_WR_EQSIZE(x) \
- (((x) >> S_FW_RI_RES_WR_EQSIZE) & M_FW_RI_RES_WR_EQSIZE)
-
-#define S_FW_RI_RES_WR_IQANDST 15
-#define M_FW_RI_RES_WR_IQANDST 0x1
-#define V_FW_RI_RES_WR_IQANDST(x) ((x) << S_FW_RI_RES_WR_IQANDST)
-#define G_FW_RI_RES_WR_IQANDST(x) \
- (((x) >> S_FW_RI_RES_WR_IQANDST) & M_FW_RI_RES_WR_IQANDST)
-#define F_FW_RI_RES_WR_IQANDST V_FW_RI_RES_WR_IQANDST(1U)
-
-#define S_FW_RI_RES_WR_IQANUS 14
-#define M_FW_RI_RES_WR_IQANUS 0x1
-#define V_FW_RI_RES_WR_IQANUS(x) ((x) << S_FW_RI_RES_WR_IQANUS)
-#define G_FW_RI_RES_WR_IQANUS(x) \
- (((x) >> S_FW_RI_RES_WR_IQANUS) & M_FW_RI_RES_WR_IQANUS)
-#define F_FW_RI_RES_WR_IQANUS V_FW_RI_RES_WR_IQANUS(1U)
-
-#define S_FW_RI_RES_WR_IQANUD 12
-#define M_FW_RI_RES_WR_IQANUD 0x3
-#define V_FW_RI_RES_WR_IQANUD(x) ((x) << S_FW_RI_RES_WR_IQANUD)
-#define G_FW_RI_RES_WR_IQANUD(x) \
- (((x) >> S_FW_RI_RES_WR_IQANUD) & M_FW_RI_RES_WR_IQANUD)
-
-#define S_FW_RI_RES_WR_IQANDSTINDEX 0
-#define M_FW_RI_RES_WR_IQANDSTINDEX 0xfff
-#define V_FW_RI_RES_WR_IQANDSTINDEX(x) ((x) << S_FW_RI_RES_WR_IQANDSTINDEX)
-#define G_FW_RI_RES_WR_IQANDSTINDEX(x) \
- (((x) >> S_FW_RI_RES_WR_IQANDSTINDEX) & M_FW_RI_RES_WR_IQANDSTINDEX)
-
-#define S_FW_RI_RES_WR_IQDROPRSS 15
-#define M_FW_RI_RES_WR_IQDROPRSS 0x1
-#define V_FW_RI_RES_WR_IQDROPRSS(x) ((x) << S_FW_RI_RES_WR_IQDROPRSS)
-#define G_FW_RI_RES_WR_IQDROPRSS(x) \
- (((x) >> S_FW_RI_RES_WR_IQDROPRSS) & M_FW_RI_RES_WR_IQDROPRSS)
-#define F_FW_RI_RES_WR_IQDROPRSS V_FW_RI_RES_WR_IQDROPRSS(1U)
-
-#define S_FW_RI_RES_WR_IQGTSMODE 14
-#define M_FW_RI_RES_WR_IQGTSMODE 0x1
-#define V_FW_RI_RES_WR_IQGTSMODE(x) ((x) << S_FW_RI_RES_WR_IQGTSMODE)
-#define G_FW_RI_RES_WR_IQGTSMODE(x) \
- (((x) >> S_FW_RI_RES_WR_IQGTSMODE) & M_FW_RI_RES_WR_IQGTSMODE)
-#define F_FW_RI_RES_WR_IQGTSMODE V_FW_RI_RES_WR_IQGTSMODE(1U)
-
-#define S_FW_RI_RES_WR_IQPCIECH 12
-#define M_FW_RI_RES_WR_IQPCIECH 0x3
-#define V_FW_RI_RES_WR_IQPCIECH(x) ((x) << S_FW_RI_RES_WR_IQPCIECH)
-#define G_FW_RI_RES_WR_IQPCIECH(x) \
- (((x) >> S_FW_RI_RES_WR_IQPCIECH) & M_FW_RI_RES_WR_IQPCIECH)
-
-#define S_FW_RI_RES_WR_IQDCAEN 11
-#define M_FW_RI_RES_WR_IQDCAEN 0x1
-#define V_FW_RI_RES_WR_IQDCAEN(x) ((x) << S_FW_RI_RES_WR_IQDCAEN)
-#define G_FW_RI_RES_WR_IQDCAEN(x) \
- (((x) >> S_FW_RI_RES_WR_IQDCAEN) & M_FW_RI_RES_WR_IQDCAEN)
-#define F_FW_RI_RES_WR_IQDCAEN V_FW_RI_RES_WR_IQDCAEN(1U)
-
-#define S_FW_RI_RES_WR_IQDCACPU 6
-#define M_FW_RI_RES_WR_IQDCACPU 0x1f
-#define V_FW_RI_RES_WR_IQDCACPU(x) ((x) << S_FW_RI_RES_WR_IQDCACPU)
-#define G_FW_RI_RES_WR_IQDCACPU(x) \
- (((x) >> S_FW_RI_RES_WR_IQDCACPU) & M_FW_RI_RES_WR_IQDCACPU)
-
-#define S_FW_RI_RES_WR_IQINTCNTTHRESH 4
-#define M_FW_RI_RES_WR_IQINTCNTTHRESH 0x3
-#define V_FW_RI_RES_WR_IQINTCNTTHRESH(x) \
- ((x) << S_FW_RI_RES_WR_IQINTCNTTHRESH)
-#define G_FW_RI_RES_WR_IQINTCNTTHRESH(x) \
- (((x) >> S_FW_RI_RES_WR_IQINTCNTTHRESH) & M_FW_RI_RES_WR_IQINTCNTTHRESH)
-
-#define S_FW_RI_RES_WR_IQO 3
-#define M_FW_RI_RES_WR_IQO 0x1
-#define V_FW_RI_RES_WR_IQO(x) ((x) << S_FW_RI_RES_WR_IQO)
-#define G_FW_RI_RES_WR_IQO(x) \
- (((x) >> S_FW_RI_RES_WR_IQO) & M_FW_RI_RES_WR_IQO)
-#define F_FW_RI_RES_WR_IQO V_FW_RI_RES_WR_IQO(1U)
-
-#define S_FW_RI_RES_WR_IQCPRIO 2
-#define M_FW_RI_RES_WR_IQCPRIO 0x1
-#define V_FW_RI_RES_WR_IQCPRIO(x) ((x) << S_FW_RI_RES_WR_IQCPRIO)
-#define G_FW_RI_RES_WR_IQCPRIO(x) \
- (((x) >> S_FW_RI_RES_WR_IQCPRIO) & M_FW_RI_RES_WR_IQCPRIO)
-#define F_FW_RI_RES_WR_IQCPRIO V_FW_RI_RES_WR_IQCPRIO(1U)
-
-#define S_FW_RI_RES_WR_IQESIZE 0
-#define M_FW_RI_RES_WR_IQESIZE 0x3
-#define V_FW_RI_RES_WR_IQESIZE(x) ((x) << S_FW_RI_RES_WR_IQESIZE)
-#define G_FW_RI_RES_WR_IQESIZE(x) \
- (((x) >> S_FW_RI_RES_WR_IQESIZE) & M_FW_RI_RES_WR_IQESIZE)
-
-#define S_FW_RI_RES_WR_IQNS 31
-#define M_FW_RI_RES_WR_IQNS 0x1
-#define V_FW_RI_RES_WR_IQNS(x) ((x) << S_FW_RI_RES_WR_IQNS)
-#define G_FW_RI_RES_WR_IQNS(x) \
- (((x) >> S_FW_RI_RES_WR_IQNS) & M_FW_RI_RES_WR_IQNS)
-#define F_FW_RI_RES_WR_IQNS V_FW_RI_RES_WR_IQNS(1U)
-
-#define S_FW_RI_RES_WR_IQRO 30
-#define M_FW_RI_RES_WR_IQRO 0x1
-#define V_FW_RI_RES_WR_IQRO(x) ((x) << S_FW_RI_RES_WR_IQRO)
-#define G_FW_RI_RES_WR_IQRO(x) \
- (((x) >> S_FW_RI_RES_WR_IQRO) & M_FW_RI_RES_WR_IQRO)
-#define F_FW_RI_RES_WR_IQRO V_FW_RI_RES_WR_IQRO(1U)
+#define FW_RI_RES_WR_NRES_S 0
+#define FW_RI_RES_WR_NRES_M 0xff
+#define FW_RI_RES_WR_NRES_V(x) ((x) << FW_RI_RES_WR_NRES_S)
+#define FW_RI_RES_WR_NRES_G(x) \
+ (((x) >> FW_RI_RES_WR_NRES_S) & FW_RI_RES_WR_NRES_M)
+
+#define FW_RI_RES_WR_FETCHSZM_S 26
+#define FW_RI_RES_WR_FETCHSZM_M 0x1
+#define FW_RI_RES_WR_FETCHSZM_V(x) ((x) << FW_RI_RES_WR_FETCHSZM_S)
+#define FW_RI_RES_WR_FETCHSZM_G(x) \
+ (((x) >> FW_RI_RES_WR_FETCHSZM_S) & FW_RI_RES_WR_FETCHSZM_M)
+#define FW_RI_RES_WR_FETCHSZM_F FW_RI_RES_WR_FETCHSZM_V(1U)
+
+#define FW_RI_RES_WR_STATUSPGNS_S 25
+#define FW_RI_RES_WR_STATUSPGNS_M 0x1
+#define FW_RI_RES_WR_STATUSPGNS_V(x) ((x) << FW_RI_RES_WR_STATUSPGNS_S)
+#define FW_RI_RES_WR_STATUSPGNS_G(x) \
+ (((x) >> FW_RI_RES_WR_STATUSPGNS_S) & FW_RI_RES_WR_STATUSPGNS_M)
+#define FW_RI_RES_WR_STATUSPGNS_F FW_RI_RES_WR_STATUSPGNS_V(1U)
+
+#define FW_RI_RES_WR_STATUSPGRO_S 24
+#define FW_RI_RES_WR_STATUSPGRO_M 0x1
+#define FW_RI_RES_WR_STATUSPGRO_V(x) ((x) << FW_RI_RES_WR_STATUSPGRO_S)
+#define FW_RI_RES_WR_STATUSPGRO_G(x) \
+ (((x) >> FW_RI_RES_WR_STATUSPGRO_S) & FW_RI_RES_WR_STATUSPGRO_M)
+#define FW_RI_RES_WR_STATUSPGRO_F FW_RI_RES_WR_STATUSPGRO_V(1U)
+
+#define FW_RI_RES_WR_FETCHNS_S 23
+#define FW_RI_RES_WR_FETCHNS_M 0x1
+#define FW_RI_RES_WR_FETCHNS_V(x) ((x) << FW_RI_RES_WR_FETCHNS_S)
+#define FW_RI_RES_WR_FETCHNS_G(x) \
+ (((x) >> FW_RI_RES_WR_FETCHNS_S) & FW_RI_RES_WR_FETCHNS_M)
+#define FW_RI_RES_WR_FETCHNS_F FW_RI_RES_WR_FETCHNS_V(1U)
+
+#define FW_RI_RES_WR_FETCHRO_S 22
+#define FW_RI_RES_WR_FETCHRO_M 0x1
+#define FW_RI_RES_WR_FETCHRO_V(x) ((x) << FW_RI_RES_WR_FETCHRO_S)
+#define FW_RI_RES_WR_FETCHRO_G(x) \
+ (((x) >> FW_RI_RES_WR_FETCHRO_S) & FW_RI_RES_WR_FETCHRO_M)
+#define FW_RI_RES_WR_FETCHRO_F FW_RI_RES_WR_FETCHRO_V(1U)
+
+#define FW_RI_RES_WR_HOSTFCMODE_S 20
+#define FW_RI_RES_WR_HOSTFCMODE_M 0x3
+#define FW_RI_RES_WR_HOSTFCMODE_V(x) ((x) << FW_RI_RES_WR_HOSTFCMODE_S)
+#define FW_RI_RES_WR_HOSTFCMODE_G(x) \
+ (((x) >> FW_RI_RES_WR_HOSTFCMODE_S) & FW_RI_RES_WR_HOSTFCMODE_M)
+
+#define FW_RI_RES_WR_CPRIO_S 19
+#define FW_RI_RES_WR_CPRIO_M 0x1
+#define FW_RI_RES_WR_CPRIO_V(x) ((x) << FW_RI_RES_WR_CPRIO_S)
+#define FW_RI_RES_WR_CPRIO_G(x) \
+ (((x) >> FW_RI_RES_WR_CPRIO_S) & FW_RI_RES_WR_CPRIO_M)
+#define FW_RI_RES_WR_CPRIO_F FW_RI_RES_WR_CPRIO_V(1U)
+
+#define FW_RI_RES_WR_ONCHIP_S 18
+#define FW_RI_RES_WR_ONCHIP_M 0x1
+#define FW_RI_RES_WR_ONCHIP_V(x) ((x) << FW_RI_RES_WR_ONCHIP_S)
+#define FW_RI_RES_WR_ONCHIP_G(x) \
+ (((x) >> FW_RI_RES_WR_ONCHIP_S) & FW_RI_RES_WR_ONCHIP_M)
+#define FW_RI_RES_WR_ONCHIP_F FW_RI_RES_WR_ONCHIP_V(1U)
+
+#define FW_RI_RES_WR_PCIECHN_S 16
+#define FW_RI_RES_WR_PCIECHN_M 0x3
+#define FW_RI_RES_WR_PCIECHN_V(x) ((x) << FW_RI_RES_WR_PCIECHN_S)
+#define FW_RI_RES_WR_PCIECHN_G(x) \
+ (((x) >> FW_RI_RES_WR_PCIECHN_S) & FW_RI_RES_WR_PCIECHN_M)
+
+#define FW_RI_RES_WR_IQID_S 0
+#define FW_RI_RES_WR_IQID_M 0xffff
+#define FW_RI_RES_WR_IQID_V(x) ((x) << FW_RI_RES_WR_IQID_S)
+#define FW_RI_RES_WR_IQID_G(x) \
+ (((x) >> FW_RI_RES_WR_IQID_S) & FW_RI_RES_WR_IQID_M)
+
+#define FW_RI_RES_WR_DCAEN_S 31
+#define FW_RI_RES_WR_DCAEN_M 0x1
+#define FW_RI_RES_WR_DCAEN_V(x) ((x) << FW_RI_RES_WR_DCAEN_S)
+#define FW_RI_RES_WR_DCAEN_G(x) \
+ (((x) >> FW_RI_RES_WR_DCAEN_S) & FW_RI_RES_WR_DCAEN_M)
+#define FW_RI_RES_WR_DCAEN_F FW_RI_RES_WR_DCAEN_V(1U)
+
+#define FW_RI_RES_WR_DCACPU_S 26
+#define FW_RI_RES_WR_DCACPU_M 0x1f
+#define FW_RI_RES_WR_DCACPU_V(x) ((x) << FW_RI_RES_WR_DCACPU_S)
+#define FW_RI_RES_WR_DCACPU_G(x) \
+ (((x) >> FW_RI_RES_WR_DCACPU_S) & FW_RI_RES_WR_DCACPU_M)
+
+#define FW_RI_RES_WR_FBMIN_S 23
+#define FW_RI_RES_WR_FBMIN_M 0x7
+#define FW_RI_RES_WR_FBMIN_V(x) ((x) << FW_RI_RES_WR_FBMIN_S)
+#define FW_RI_RES_WR_FBMIN_G(x) \
+ (((x) >> FW_RI_RES_WR_FBMIN_S) & FW_RI_RES_WR_FBMIN_M)
+
+#define FW_RI_RES_WR_FBMAX_S 20
+#define FW_RI_RES_WR_FBMAX_M 0x7
+#define FW_RI_RES_WR_FBMAX_V(x) ((x) << FW_RI_RES_WR_FBMAX_S)
+#define FW_RI_RES_WR_FBMAX_G(x) \
+ (((x) >> FW_RI_RES_WR_FBMAX_S) & FW_RI_RES_WR_FBMAX_M)
+
+#define FW_RI_RES_WR_CIDXFTHRESHO_S 19
+#define FW_RI_RES_WR_CIDXFTHRESHO_M 0x1
+#define FW_RI_RES_WR_CIDXFTHRESHO_V(x) ((x) << FW_RI_RES_WR_CIDXFTHRESHO_S)
+#define FW_RI_RES_WR_CIDXFTHRESHO_G(x) \
+ (((x) >> FW_RI_RES_WR_CIDXFTHRESHO_S) & FW_RI_RES_WR_CIDXFTHRESHO_M)
+#define FW_RI_RES_WR_CIDXFTHRESHO_F FW_RI_RES_WR_CIDXFTHRESHO_V(1U)
+
+#define FW_RI_RES_WR_CIDXFTHRESH_S 16
+#define FW_RI_RES_WR_CIDXFTHRESH_M 0x7
+#define FW_RI_RES_WR_CIDXFTHRESH_V(x) ((x) << FW_RI_RES_WR_CIDXFTHRESH_S)
+#define FW_RI_RES_WR_CIDXFTHRESH_G(x) \
+ (((x) >> FW_RI_RES_WR_CIDXFTHRESH_S) & FW_RI_RES_WR_CIDXFTHRESH_M)
+
+#define FW_RI_RES_WR_EQSIZE_S 0
+#define FW_RI_RES_WR_EQSIZE_M 0xffff
+#define FW_RI_RES_WR_EQSIZE_V(x) ((x) << FW_RI_RES_WR_EQSIZE_S)
+#define FW_RI_RES_WR_EQSIZE_G(x) \
+ (((x) >> FW_RI_RES_WR_EQSIZE_S) & FW_RI_RES_WR_EQSIZE_M)
+
+#define FW_RI_RES_WR_IQANDST_S 15
+#define FW_RI_RES_WR_IQANDST_M 0x1
+#define FW_RI_RES_WR_IQANDST_V(x) ((x) << FW_RI_RES_WR_IQANDST_S)
+#define FW_RI_RES_WR_IQANDST_G(x) \
+ (((x) >> FW_RI_RES_WR_IQANDST_S) & FW_RI_RES_WR_IQANDST_M)
+#define FW_RI_RES_WR_IQANDST_F FW_RI_RES_WR_IQANDST_V(1U)
+
+#define FW_RI_RES_WR_IQANUS_S 14
+#define FW_RI_RES_WR_IQANUS_M 0x1
+#define FW_RI_RES_WR_IQANUS_V(x) ((x) << FW_RI_RES_WR_IQANUS_S)
+#define FW_RI_RES_WR_IQANUS_G(x) \
+ (((x) >> FW_RI_RES_WR_IQANUS_S) & FW_RI_RES_WR_IQANUS_M)
+#define FW_RI_RES_WR_IQANUS_F FW_RI_RES_WR_IQANUS_V(1U)
+
+#define FW_RI_RES_WR_IQANUD_S 12
+#define FW_RI_RES_WR_IQANUD_M 0x3
+#define FW_RI_RES_WR_IQANUD_V(x) ((x) << FW_RI_RES_WR_IQANUD_S)
+#define FW_RI_RES_WR_IQANUD_G(x) \
+ (((x) >> FW_RI_RES_WR_IQANUD_S) & FW_RI_RES_WR_IQANUD_M)
+
+#define FW_RI_RES_WR_IQANDSTINDEX_S 0
+#define FW_RI_RES_WR_IQANDSTINDEX_M 0xfff
+#define FW_RI_RES_WR_IQANDSTINDEX_V(x) ((x) << FW_RI_RES_WR_IQANDSTINDEX_S)
+#define FW_RI_RES_WR_IQANDSTINDEX_G(x) \
+ (((x) >> FW_RI_RES_WR_IQANDSTINDEX_S) & FW_RI_RES_WR_IQANDSTINDEX_M)
+
+#define FW_RI_RES_WR_IQDROPRSS_S 15
+#define FW_RI_RES_WR_IQDROPRSS_M 0x1
+#define FW_RI_RES_WR_IQDROPRSS_V(x) ((x) << FW_RI_RES_WR_IQDROPRSS_S)
+#define FW_RI_RES_WR_IQDROPRSS_G(x) \
+ (((x) >> FW_RI_RES_WR_IQDROPRSS_S) & FW_RI_RES_WR_IQDROPRSS_M)
+#define FW_RI_RES_WR_IQDROPRSS_F FW_RI_RES_WR_IQDROPRSS_V(1U)
+
+#define FW_RI_RES_WR_IQGTSMODE_S 14
+#define FW_RI_RES_WR_IQGTSMODE_M 0x1
+#define FW_RI_RES_WR_IQGTSMODE_V(x) ((x) << FW_RI_RES_WR_IQGTSMODE_S)
+#define FW_RI_RES_WR_IQGTSMODE_G(x) \
+ (((x) >> FW_RI_RES_WR_IQGTSMODE_S) & FW_RI_RES_WR_IQGTSMODE_M)
+#define FW_RI_RES_WR_IQGTSMODE_F FW_RI_RES_WR_IQGTSMODE_V(1U)
+
+#define FW_RI_RES_WR_IQPCIECH_S 12
+#define FW_RI_RES_WR_IQPCIECH_M 0x3
+#define FW_RI_RES_WR_IQPCIECH_V(x) ((x) << FW_RI_RES_WR_IQPCIECH_S)
+#define FW_RI_RES_WR_IQPCIECH_G(x) \
+ (((x) >> FW_RI_RES_WR_IQPCIECH_S) & FW_RI_RES_WR_IQPCIECH_M)
+
+#define FW_RI_RES_WR_IQDCAEN_S 11
+#define FW_RI_RES_WR_IQDCAEN_M 0x1
+#define FW_RI_RES_WR_IQDCAEN_V(x) ((x) << FW_RI_RES_WR_IQDCAEN_S)
+#define FW_RI_RES_WR_IQDCAEN_G(x) \
+ (((x) >> FW_RI_RES_WR_IQDCAEN_S) & FW_RI_RES_WR_IQDCAEN_M)
+#define FW_RI_RES_WR_IQDCAEN_F FW_RI_RES_WR_IQDCAEN_V(1U)
+
+#define FW_RI_RES_WR_IQDCACPU_S 6
+#define FW_RI_RES_WR_IQDCACPU_M 0x1f
+#define FW_RI_RES_WR_IQDCACPU_V(x) ((x) << FW_RI_RES_WR_IQDCACPU_S)
+#define FW_RI_RES_WR_IQDCACPU_G(x) \
+ (((x) >> FW_RI_RES_WR_IQDCACPU_S) & FW_RI_RES_WR_IQDCACPU_M)
+
+#define FW_RI_RES_WR_IQINTCNTTHRESH_S 4
+#define FW_RI_RES_WR_IQINTCNTTHRESH_M 0x3
+#define FW_RI_RES_WR_IQINTCNTTHRESH_V(x) \
+ ((x) << FW_RI_RES_WR_IQINTCNTTHRESH_S)
+#define FW_RI_RES_WR_IQINTCNTTHRESH_G(x) \
+ (((x) >> FW_RI_RES_WR_IQINTCNTTHRESH_S) & FW_RI_RES_WR_IQINTCNTTHRESH_M)
+
+#define FW_RI_RES_WR_IQO_S 3
+#define FW_RI_RES_WR_IQO_M 0x1
+#define FW_RI_RES_WR_IQO_V(x) ((x) << FW_RI_RES_WR_IQO_S)
+#define FW_RI_RES_WR_IQO_G(x) \
+ (((x) >> FW_RI_RES_WR_IQO_S) & FW_RI_RES_WR_IQO_M)
+#define FW_RI_RES_WR_IQO_F FW_RI_RES_WR_IQO_V(1U)
+
+#define FW_RI_RES_WR_IQCPRIO_S 2
+#define FW_RI_RES_WR_IQCPRIO_M 0x1
+#define FW_RI_RES_WR_IQCPRIO_V(x) ((x) << FW_RI_RES_WR_IQCPRIO_S)
+#define FW_RI_RES_WR_IQCPRIO_G(x) \
+ (((x) >> FW_RI_RES_WR_IQCPRIO_S) & FW_RI_RES_WR_IQCPRIO_M)
+#define FW_RI_RES_WR_IQCPRIO_F FW_RI_RES_WR_IQCPRIO_V(1U)
+
+#define FW_RI_RES_WR_IQESIZE_S 0
+#define FW_RI_RES_WR_IQESIZE_M 0x3
+#define FW_RI_RES_WR_IQESIZE_V(x) ((x) << FW_RI_RES_WR_IQESIZE_S)
+#define FW_RI_RES_WR_IQESIZE_G(x) \
+ (((x) >> FW_RI_RES_WR_IQESIZE_S) & FW_RI_RES_WR_IQESIZE_M)
+
+#define FW_RI_RES_WR_IQNS_S 31
+#define FW_RI_RES_WR_IQNS_M 0x1
+#define FW_RI_RES_WR_IQNS_V(x) ((x) << FW_RI_RES_WR_IQNS_S)
+#define FW_RI_RES_WR_IQNS_G(x) \
+ (((x) >> FW_RI_RES_WR_IQNS_S) & FW_RI_RES_WR_IQNS_M)
+#define FW_RI_RES_WR_IQNS_F FW_RI_RES_WR_IQNS_V(1U)
+
+#define FW_RI_RES_WR_IQRO_S 30
+#define FW_RI_RES_WR_IQRO_M 0x1
+#define FW_RI_RES_WR_IQRO_V(x) ((x) << FW_RI_RES_WR_IQRO_S)
+#define FW_RI_RES_WR_IQRO_G(x) \
+ (((x) >> FW_RI_RES_WR_IQRO_S) & FW_RI_RES_WR_IQRO_M)
+#define FW_RI_RES_WR_IQRO_F FW_RI_RES_WR_IQRO_V(1U)
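The rename above is mechanical: the old Chelsio S_/M_/V_/G_ prefix macros become kernel-style _S/_M/_V/_G suffix macros with identical semantics (shift amount, mask, pack, extract; the _F variant is the pre-shifted single-bit flag). A minimal user-space sketch of the pattern, using a hypothetical field name:

	#include <stdio.h>

	/* Hypothetical 3-bit field at bit offset 20. */
	#define EXAMPLE_FBMAX_S		20
	#define EXAMPLE_FBMAX_M		0x7
	#define EXAMPLE_FBMAX_V(x)	((x) << EXAMPLE_FBMAX_S)
	#define EXAMPLE_FBMAX_G(x)	(((x) >> EXAMPLE_FBMAX_S) & EXAMPLE_FBMAX_M)

	int main(void)
	{
		unsigned int word = 0;

		word |= EXAMPLE_FBMAX_V(5);		/* pack: field := 5 */
		printf("%u\n", EXAMPLE_FBMAX_G(word));	/* extract: prints 5 */
		return 0;
	}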
struct fw_ri_rdma_write_wr {
__u8 opcode;
#endif
};
-#define S_FW_RI_SEND_WR_SENDOP 0
-#define M_FW_RI_SEND_WR_SENDOP 0xf
-#define V_FW_RI_SEND_WR_SENDOP(x) ((x) << S_FW_RI_SEND_WR_SENDOP)
-#define G_FW_RI_SEND_WR_SENDOP(x) \
- (((x) >> S_FW_RI_SEND_WR_SENDOP) & M_FW_RI_SEND_WR_SENDOP)
+#define FW_RI_SEND_WR_SENDOP_S 0
+#define FW_RI_SEND_WR_SENDOP_M 0xf
+#define FW_RI_SEND_WR_SENDOP_V(x) ((x) << FW_RI_SEND_WR_SENDOP_S)
+#define FW_RI_SEND_WR_SENDOP_G(x) \
+ (((x) >> FW_RI_SEND_WR_SENDOP_S) & FW_RI_SEND_WR_SENDOP_M)
struct fw_ri_rdma_read_wr {
__u8 opcode;
__be64 r4;
};
-#define S_FW_RI_BIND_MW_WR_QPBINDE 6
-#define M_FW_RI_BIND_MW_WR_QPBINDE 0x1
-#define V_FW_RI_BIND_MW_WR_QPBINDE(x) ((x) << S_FW_RI_BIND_MW_WR_QPBINDE)
-#define G_FW_RI_BIND_MW_WR_QPBINDE(x) \
- (((x) >> S_FW_RI_BIND_MW_WR_QPBINDE) & M_FW_RI_BIND_MW_WR_QPBINDE)
-#define F_FW_RI_BIND_MW_WR_QPBINDE V_FW_RI_BIND_MW_WR_QPBINDE(1U)
+#define FW_RI_BIND_MW_WR_QPBINDE_S 6
+#define FW_RI_BIND_MW_WR_QPBINDE_M 0x1
+#define FW_RI_BIND_MW_WR_QPBINDE_V(x) ((x) << FW_RI_BIND_MW_WR_QPBINDE_S)
+#define FW_RI_BIND_MW_WR_QPBINDE_G(x) \
+ (((x) >> FW_RI_BIND_MW_WR_QPBINDE_S) & FW_RI_BIND_MW_WR_QPBINDE_M)
+#define FW_RI_BIND_MW_WR_QPBINDE_F FW_RI_BIND_MW_WR_QPBINDE_V(1U)
-#define S_FW_RI_BIND_MW_WR_NS 5
-#define M_FW_RI_BIND_MW_WR_NS 0x1
-#define V_FW_RI_BIND_MW_WR_NS(x) ((x) << S_FW_RI_BIND_MW_WR_NS)
-#define G_FW_RI_BIND_MW_WR_NS(x) \
- (((x) >> S_FW_RI_BIND_MW_WR_NS) & M_FW_RI_BIND_MW_WR_NS)
-#define F_FW_RI_BIND_MW_WR_NS V_FW_RI_BIND_MW_WR_NS(1U)
+#define FW_RI_BIND_MW_WR_NS_S 5
+#define FW_RI_BIND_MW_WR_NS_M 0x1
+#define FW_RI_BIND_MW_WR_NS_V(x) ((x) << FW_RI_BIND_MW_WR_NS_S)
+#define FW_RI_BIND_MW_WR_NS_G(x) \
+ (((x) >> FW_RI_BIND_MW_WR_NS_S) & FW_RI_BIND_MW_WR_NS_M)
+#define FW_RI_BIND_MW_WR_NS_F FW_RI_BIND_MW_WR_NS_V(1U)
-#define S_FW_RI_BIND_MW_WR_DCACPU 0
-#define M_FW_RI_BIND_MW_WR_DCACPU 0x1f
-#define V_FW_RI_BIND_MW_WR_DCACPU(x) ((x) << S_FW_RI_BIND_MW_WR_DCACPU)
-#define G_FW_RI_BIND_MW_WR_DCACPU(x) \
- (((x) >> S_FW_RI_BIND_MW_WR_DCACPU) & M_FW_RI_BIND_MW_WR_DCACPU)
+#define FW_RI_BIND_MW_WR_DCACPU_S 0
+#define FW_RI_BIND_MW_WR_DCACPU_M 0x1f
+#define FW_RI_BIND_MW_WR_DCACPU_V(x) ((x) << FW_RI_BIND_MW_WR_DCACPU_S)
+#define FW_RI_BIND_MW_WR_DCACPU_G(x) \
+ (((x) >> FW_RI_BIND_MW_WR_DCACPU_S) & FW_RI_BIND_MW_WR_DCACPU_M)
struct fw_ri_fr_nsmr_wr {
__u8 opcode;
__be32 va_lo_fbo;
};
-#define S_FW_RI_FR_NSMR_WR_QPBINDE 6
-#define M_FW_RI_FR_NSMR_WR_QPBINDE 0x1
-#define V_FW_RI_FR_NSMR_WR_QPBINDE(x) ((x) << S_FW_RI_FR_NSMR_WR_QPBINDE)
-#define G_FW_RI_FR_NSMR_WR_QPBINDE(x) \
- (((x) >> S_FW_RI_FR_NSMR_WR_QPBINDE) & M_FW_RI_FR_NSMR_WR_QPBINDE)
-#define F_FW_RI_FR_NSMR_WR_QPBINDE V_FW_RI_FR_NSMR_WR_QPBINDE(1U)
+#define FW_RI_FR_NSMR_WR_QPBINDE_S 6
+#define FW_RI_FR_NSMR_WR_QPBINDE_M 0x1
+#define FW_RI_FR_NSMR_WR_QPBINDE_V(x) ((x) << FW_RI_FR_NSMR_WR_QPBINDE_S)
+#define FW_RI_FR_NSMR_WR_QPBINDE_G(x) \
+ (((x) >> FW_RI_FR_NSMR_WR_QPBINDE_S) & FW_RI_FR_NSMR_WR_QPBINDE_M)
+#define FW_RI_FR_NSMR_WR_QPBINDE_F FW_RI_FR_NSMR_WR_QPBINDE_V(1U)
-#define S_FW_RI_FR_NSMR_WR_NS 5
-#define M_FW_RI_FR_NSMR_WR_NS 0x1
-#define V_FW_RI_FR_NSMR_WR_NS(x) ((x) << S_FW_RI_FR_NSMR_WR_NS)
-#define G_FW_RI_FR_NSMR_WR_NS(x) \
- (((x) >> S_FW_RI_FR_NSMR_WR_NS) & M_FW_RI_FR_NSMR_WR_NS)
-#define F_FW_RI_FR_NSMR_WR_NS V_FW_RI_FR_NSMR_WR_NS(1U)
+#define FW_RI_FR_NSMR_WR_NS_S 5
+#define FW_RI_FR_NSMR_WR_NS_M 0x1
+#define FW_RI_FR_NSMR_WR_NS_V(x) ((x) << FW_RI_FR_NSMR_WR_NS_S)
+#define FW_RI_FR_NSMR_WR_NS_G(x) \
+ (((x) >> FW_RI_FR_NSMR_WR_NS_S) & FW_RI_FR_NSMR_WR_NS_M)
+#define FW_RI_FR_NSMR_WR_NS_F FW_RI_FR_NSMR_WR_NS_V(1U)
-#define S_FW_RI_FR_NSMR_WR_DCACPU 0
-#define M_FW_RI_FR_NSMR_WR_DCACPU 0x1f
-#define V_FW_RI_FR_NSMR_WR_DCACPU(x) ((x) << S_FW_RI_FR_NSMR_WR_DCACPU)
-#define G_FW_RI_FR_NSMR_WR_DCACPU(x) \
- (((x) >> S_FW_RI_FR_NSMR_WR_DCACPU) & M_FW_RI_FR_NSMR_WR_DCACPU)
+#define FW_RI_FR_NSMR_WR_DCACPU_S 0
+#define FW_RI_FR_NSMR_WR_DCACPU_M 0x1f
+#define FW_RI_FR_NSMR_WR_DCACPU_V(x) ((x) << FW_RI_FR_NSMR_WR_DCACPU_S)
+#define FW_RI_FR_NSMR_WR_DCACPU_G(x) \
+ (((x) >> FW_RI_FR_NSMR_WR_DCACPU_S) & FW_RI_FR_NSMR_WR_DCACPU_M)
struct fw_ri_inv_lstag_wr {
__u8 opcode;
} u;
};
-#define S_FW_RI_WR_MPAREQBIT 7
-#define M_FW_RI_WR_MPAREQBIT 0x1
-#define V_FW_RI_WR_MPAREQBIT(x) ((x) << S_FW_RI_WR_MPAREQBIT)
-#define G_FW_RI_WR_MPAREQBIT(x) \
- (((x) >> S_FW_RI_WR_MPAREQBIT) & M_FW_RI_WR_MPAREQBIT)
-#define F_FW_RI_WR_MPAREQBIT V_FW_RI_WR_MPAREQBIT(1U)
+#define FW_RI_WR_MPAREQBIT_S 7
+#define FW_RI_WR_MPAREQBIT_M 0x1
+#define FW_RI_WR_MPAREQBIT_V(x) ((x) << FW_RI_WR_MPAREQBIT_S)
+#define FW_RI_WR_MPAREQBIT_G(x) \
+ (((x) >> FW_RI_WR_MPAREQBIT_S) & FW_RI_WR_MPAREQBIT_M)
+#define FW_RI_WR_MPAREQBIT_F FW_RI_WR_MPAREQBIT_V(1U)
-#define S_FW_RI_WR_P2PTYPE 0
-#define M_FW_RI_WR_P2PTYPE 0xf
-#define V_FW_RI_WR_P2PTYPE(x) ((x) << S_FW_RI_WR_P2PTYPE)
-#define G_FW_RI_WR_P2PTYPE(x) \
- (((x) >> S_FW_RI_WR_P2PTYPE) & M_FW_RI_WR_P2PTYPE)
+#define FW_RI_WR_P2PTYPE_S 0
+#define FW_RI_WR_P2PTYPE_M 0xf
+#define FW_RI_WR_P2PTYPE_V(x) ((x) << FW_RI_WR_P2PTYPE_S)
+#define FW_RI_WR_P2PTYPE_G(x) \
+ (((x) >> FW_RI_WR_P2PTYPE_S) & FW_RI_WR_P2PTYPE_M)
struct tcp_options {
__be16 mss;
};
/* cpl_pass_accept_req.hdr_len fields */
-#define S_SYN_RX_CHAN 0
-#define M_SYN_RX_CHAN 0xF
-#define V_SYN_RX_CHAN(x) ((x) << S_SYN_RX_CHAN)
-#define G_SYN_RX_CHAN(x) (((x) >> S_SYN_RX_CHAN) & M_SYN_RX_CHAN)
-
-#define S_TCP_HDR_LEN 10
-#define M_TCP_HDR_LEN 0x3F
-#define V_TCP_HDR_LEN(x) ((x) << S_TCP_HDR_LEN)
-#define G_TCP_HDR_LEN(x) (((x) >> S_TCP_HDR_LEN) & M_TCP_HDR_LEN)
-
-#define S_IP_HDR_LEN 16
-#define M_IP_HDR_LEN 0x3FF
-#define V_IP_HDR_LEN(x) ((x) << S_IP_HDR_LEN)
-#define G_IP_HDR_LEN(x) (((x) >> S_IP_HDR_LEN) & M_IP_HDR_LEN)
-
-#define S_ETH_HDR_LEN 26
-#define M_ETH_HDR_LEN 0x1F
-#define V_ETH_HDR_LEN(x) ((x) << S_ETH_HDR_LEN)
-#define G_ETH_HDR_LEN(x) (((x) >> S_ETH_HDR_LEN) & M_ETH_HDR_LEN)
+#define SYN_RX_CHAN_S 0
+#define SYN_RX_CHAN_M 0xF
+#define SYN_RX_CHAN_V(x) ((x) << SYN_RX_CHAN_S)
+#define SYN_RX_CHAN_G(x) (((x) >> SYN_RX_CHAN_S) & SYN_RX_CHAN_M)
+
+#define TCP_HDR_LEN_S 10
+#define TCP_HDR_LEN_M 0x3F
+#define TCP_HDR_LEN_V(x) ((x) << TCP_HDR_LEN_S)
+#define TCP_HDR_LEN_G(x) (((x) >> TCP_HDR_LEN_S) & TCP_HDR_LEN_M)
+
+#define IP_HDR_LEN_S 16
+#define IP_HDR_LEN_M 0x3FF
+#define IP_HDR_LEN_V(x) ((x) << IP_HDR_LEN_S)
+#define IP_HDR_LEN_G(x) (((x) >> IP_HDR_LEN_S) & IP_HDR_LEN_M)
+
+#define ETH_HDR_LEN_S 26
+#define ETH_HDR_LEN_M 0x1F
+#define ETH_HDR_LEN_V(x) ((x) << ETH_HDR_LEN_S)
+#define ETH_HDR_LEN_G(x) (((x) >> ETH_HDR_LEN_S) & ETH_HDR_LEN_M)
/* cpl_pass_accept_req.l2info fields */
-#define S_SYN_MAC_IDX 0
-#define M_SYN_MAC_IDX 0x1FF
-#define V_SYN_MAC_IDX(x) ((x) << S_SYN_MAC_IDX)
-#define G_SYN_MAC_IDX(x) (((x) >> S_SYN_MAC_IDX) & M_SYN_MAC_IDX)
+#define SYN_MAC_IDX_S 0
+#define SYN_MAC_IDX_M 0x1FF
+#define SYN_MAC_IDX_V(x) ((x) << SYN_MAC_IDX_S)
+#define SYN_MAC_IDX_G(x) (((x) >> SYN_MAC_IDX_S) & SYN_MAC_IDX_M)
-#define S_SYN_XACT_MATCH 9
-#define V_SYN_XACT_MATCH(x) ((x) << S_SYN_XACT_MATCH)
-#define F_SYN_XACT_MATCH V_SYN_XACT_MATCH(1U)
+#define SYN_XACT_MATCH_S 9
+#define SYN_XACT_MATCH_V(x) ((x) << SYN_XACT_MATCH_S)
+#define SYN_XACT_MATCH_F SYN_XACT_MATCH_V(1U)
-#define S_SYN_INTF 12
-#define M_SYN_INTF 0xF
-#define V_SYN_INTF(x) ((x) << S_SYN_INTF)
-#define G_SYN_INTF(x) (((x) >> S_SYN_INTF) & M_SYN_INTF)
+#define SYN_INTF_S 12
+#define SYN_INTF_M 0xF
+#define SYN_INTF_V(x) ((x) << SYN_INTF_S)
+#define SYN_INTF_G(x) (((x) >> SYN_INTF_S) & SYN_INTF_M)
struct ulptx_idata {
__be32 cmd_more;
__be32 len;
};
-#define S_ULPTX_NSGE 0
-#define M_ULPTX_NSGE 0xFFFF
-#define V_ULPTX_NSGE(x) ((x) << S_ULPTX_NSGE)
+#define ULPTX_NSGE_S 0
+#define ULPTX_NSGE_M 0xFFFF
+#define ULPTX_NSGE_V(x) ((x) << ULPTX_NSGE_S)
-#define S_RX_DACK_MODE 29
-#define M_RX_DACK_MODE 0x3
-#define V_RX_DACK_MODE(x) ((x) << S_RX_DACK_MODE)
-#define G_RX_DACK_MODE(x) (((x) >> S_RX_DACK_MODE) & M_RX_DACK_MODE)
+#define RX_DACK_MODE_S 29
+#define RX_DACK_MODE_M 0x3
+#define RX_DACK_MODE_V(x) ((x) << RX_DACK_MODE_S)
+#define RX_DACK_MODE_G(x) (((x) >> RX_DACK_MODE_S) & RX_DACK_MODE_M)
-#define S_RX_DACK_CHANGE 31
-#define V_RX_DACK_CHANGE(x) ((x) << S_RX_DACK_CHANGE)
-#define F_RX_DACK_CHANGE V_RX_DACK_CHANGE(1U)
+#define RX_DACK_CHANGE_S 31
+#define RX_DACK_CHANGE_V(x) ((x) << RX_DACK_CHANGE_S)
+#define RX_DACK_CHANGE_F RX_DACK_CHANGE_V(1U)
enum { /* TCP congestion control algorithms */
CONG_ALG_RENO,
CONG_ALG_HIGHSPEED
};
-#define S_CONG_CNTRL 14
-#define M_CONG_CNTRL 0x3
-#define V_CONG_CNTRL(x) ((x) << S_CONG_CNTRL)
-#define G_CONG_CNTRL(x) (((x) >> S_CONG_CNTRL) & M_CONG_CNTRL)
+#define CONG_CNTRL_S 14
+#define CONG_CNTRL_M 0x3
+#define CONG_CNTRL_V(x) ((x) << CONG_CNTRL_S)
+#define CONG_CNTRL_G(x) (((x) >> CONG_CNTRL_S) & CONG_CNTRL_M)
#define CONG_CNTRL_VALID (1 << 18)
continue;
slave_id = (block_num * NUM_ALIAS_GUID_IN_REC) + i ;
- if (slave_id >= dev->dev->num_vfs + 1)
+ if (slave_id >= dev->dev->persist->num_vfs + 1)
return;
tmp_cur_ag = *(__be64 *)&p_data[i * GUID_REC_SIZE];
form_cache_ag = get_cached_alias_guid(dev, port_num,
ctx->ib_dev = &dev->ib_dev;
for (i = 0;
- i < min(dev->dev->caps.sqp_demux, (u16)(dev->dev->num_vfs + 1));
+ i < min(dev->dev->caps.sqp_demux,
+ (u16)(dev->dev->persist->num_vfs + 1));
i++) {
struct mlx4_active_ports actv_ports =
mlx4_get_active_ports(dev->dev, i);
props->vendor_id = be32_to_cpup((__be32 *) (out_mad->data + 36)) &
0xffffff;
- props->vendor_part_id = dev->dev->pdev->device;
+ props->vendor_part_id = dev->dev->persist->pdev->device;
props->hw_ver = be32_to_cpup((__be32 *) (out_mad->data + 32));
memcpy(&props->sys_image_guid, out_mad->data + 4, 8);
{
struct mlx4_ib_dev *dev =
container_of(device, struct mlx4_ib_dev, ib_dev.dev);
- return sprintf(buf, "MT%d\n", dev->dev->pdev->device);
+ return sprintf(buf, "MT%d\n", dev->dev->persist->pdev->device);
}
static ssize_t show_fw_ver(struct device *device, struct device_attribute *attr,
int i;
if (mlx4_is_master(ibdev->dev)) {
- for (slave = 0; slave <= ibdev->dev->num_vfs; ++slave) {
+ for (slave = 0; slave <= ibdev->dev->persist->num_vfs;
+ ++slave) {
for (port = 1; port <= ibdev->dev->caps.num_ports; ++port) {
for (i = 0;
i < ibdev->dev->phys_caps.pkey_phys_table_len[port];
mlx4_foreach_port(i, dev, MLX4_PORT_TYPE_IB) {
for (j = 0; j < eq_per_port; j++) {
snprintf(name, sizeof(name), "mlx4-ib-%d-%d@%s",
- i, j, dev->pdev->bus->name);
+ i, j, dev->persist->pdev->bus->name);
/* Set IRQ for specific name (per ring) */
if (mlx4_assign_eq(dev, name, NULL,
&ibdev->eq_table[eq])) {
ibdev = (struct mlx4_ib_dev *) ib_alloc_device(sizeof *ibdev);
if (!ibdev) {
- dev_err(&dev->pdev->dev, "Device struct alloc failed\n");
+ dev_err(&dev->persist->pdev->dev,
+ "Device struct alloc failed\n");
return NULL;
}
ibdev->num_ports = num_ports;
ibdev->ib_dev.phys_port_cnt = ibdev->num_ports;
ibdev->ib_dev.num_comp_vectors = dev->caps.num_comp_vectors;
- ibdev->ib_dev.dma_device = &dev->pdev->dev;
+ ibdev->ib_dev.dma_device = &dev->persist->pdev->dev;
if (dev->caps.userspace_caps)
ibdev->ib_dev.uverbs_abi_ver = MLX4_IB_UVERBS_ABI_VERSION;
sizeof(long),
GFP_KERNEL);
if (!ibdev->ib_uc_qpns_bitmap) {
- dev_err(&dev->pdev->dev, "bit map alloc failed\n");
+ dev_err(&dev->persist->pdev->dev,
+ "bit map alloc failed\n");
goto err_steer_qp_release;
}
if (!mfrpl->ibfrpl.page_list)
goto err_free;
- mfrpl->mapped_page_list = dma_alloc_coherent(&dev->dev->pdev->dev,
+ mfrpl->mapped_page_list = dma_alloc_coherent(&dev->dev->persist->
+ pdev->dev,
size, &mfrpl->map,
GFP_KERNEL);
if (!mfrpl->mapped_page_list)
struct mlx4_ib_fast_reg_page_list *mfrpl = to_mfrpl(page_list);
int size = page_list->max_page_list_len * sizeof (u64);
- dma_free_coherent(&dev->dev->pdev->dev, size, mfrpl->mapped_page_list,
+ dma_free_coherent(&dev->dev->persist->pdev->dev, size,
+ mfrpl->mapped_page_list,
mfrpl->map);
kfree(mfrpl->ibfrpl.page_list);
kfree(mfrpl);
char base_name[9];
/* pci_name format is: bus:dev:func -> xxxx:yy:zz.n */
- strlcpy(name, pci_name(dev->dev->pdev), max);
+ strlcpy(name, pci_name(dev->dev->persist->pdev), max);
strncpy(base_name, name, 8); /*till xxxx:yy:*/
base_name[8] = '\0';
/* with no ARI only 3 last bits are used so when the fn is higher than 8
if (!mlx4_is_master(device->dev))
return 0;
- for (i = 0; i <= device->dev->num_vfs; ++i)
+ for (i = 0; i <= device->dev->persist->num_vfs; ++i)
register_one_pkey_tree(device, i);
return 0;
if (!mlx4_is_master(device->dev))
return;
- for (slave = device->dev->num_vfs; slave >= 0; --slave) {
+ for (slave = device->dev->persist->num_vfs; slave >= 0; --slave) {
list_for_each_entry_safe(p, t,
&device->pkeys.pkey_port_list[slave],
entry) {
for (k = 0; k < len; k++) {
if (!(i & mask)) {
tmp = (unsigned long)pfn;
- m = min(m, find_first_bit(&tmp, sizeof(tmp)));
+ m = min_t(unsigned long, m, find_first_bit(&tmp, sizeof(tmp)));
skip = 1 << m;
mask = skip - 1;
base = pfn;
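The min() -> min_t() change above is a type fix rather than a behavior change: the kernel's min() refuses to compare operands of differing types, and find_first_bit() returns unsigned long while the other operand here does not. min_t() casts both sides to the named type before comparing. A reduced sketch with a user-space stand-in for min_t():

	/* Simplified stand-in for the kernel's min_t(), for illustration. */
	#define min_t(type, x, y) ({	\
		type _x = (x);		\
		type _y = (y);		\
		_x < _y ? _x : _y; })

	int main(void)
	{
		int m = 12;
		unsigned long bit = 8;	/* e.g. a find_first_bit() result */

		m = min_t(unsigned long, m, bit);	/* both operands cast */
		return m;	/* 8 */
	}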
wqe_fragment_length = (__le16 *)&nic_sqe->wqe_words[NES_NIC_SQ_WQE_LENGTH_0_TAG_IDX];
/* setup the VLAN tag if present */
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
nes_debug(NES_DBG_NIC_TX, "%s: VLAN packet to send... VLAN = %08X\n",
- netdev->name, vlan_tx_tag_get(skb));
+ netdev->name, skb_vlan_tag_get(skb));
wqe_misc = NES_NIC_SQ_WQE_TAGVALUE_ENABLE;
- wqe_fragment_length[0] = (__force __le16) vlan_tx_tag_get(skb);
+ wqe_fragment_length[0] = (__force __le16) skb_vlan_tag_get(skb);
} else
wqe_misc = 0;
wqe_fragment_length =
(__le16 *)&nic_sqe->wqe_words[NES_NIC_SQ_WQE_LENGTH_0_TAG_IDX];
/* setup the VLAN tag if present */
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
nes_debug(NES_DBG_NIC_TX, "%s: VLAN packet to send... VLAN = %08X\n",
- netdev->name, vlan_tx_tag_get(skb) );
+ netdev->name,
+ skb_vlan_tag_get(skb));
wqe_misc = NES_NIC_SQ_WQE_TAGVALUE_ENABLE;
- wqe_fragment_length[0] = (__force __le16) vlan_tx_tag_get(skb);
+ wqe_fragment_length[0] = (__force __le16) skb_vlan_tag_get(skb);
} else
wqe_misc = 0;
#include <linux/cdev.h>
#include "input-compat.h"
+enum evdev_clock_type {
+ EV_CLK_REAL = 0,
+ EV_CLK_MONO,
+ EV_CLK_BOOT,
+ EV_CLK_MAX
+};
+
struct evdev {
int open;
struct input_handle handle;
struct fasync_struct *fasync;
struct evdev *evdev;
struct list_head node;
- int clkid;
+ int clk_type;
bool revoked;
unsigned int bufsize;
struct input_event buffer[];
};
+static int evdev_set_clk_type(struct evdev_client *client, unsigned int clkid)
+{
+ switch (clkid) {
+ case CLOCK_REALTIME:
+ client->clk_type = EV_CLK_REAL;
+ break;
+ case CLOCK_MONOTONIC:
+ client->clk_type = EV_CLK_MONO;
+ break;
+ case CLOCK_BOOTTIME:
+ client->clk_type = EV_CLK_BOOT;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
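With CLOCK_BOOTTIME accepted here alongside CLOCK_REALTIME and CLOCK_MONOTONIC, a reader can request boot-time stamps through the existing EVIOCSCLOCKID ioctl. A minimal user-space sketch (the device path is only an example):

	#include <fcntl.h>
	#include <linux/input.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <time.h>

	int main(void)
	{
		int clkid = CLOCK_BOOTTIME;
		int fd = open("/dev/input/event0", O_RDONLY);	/* example node */

		if (fd < 0)
			return 1;
		/* Fails with EINVAL on kernels that lack boot-time support. */
		if (ioctl(fd, EVIOCSCLOCKID, &clkid))
			perror("EVIOCSCLOCKID");
		return 0;
	}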
+
/* flush queued events of type @type, caller must hold client->buffer_lock */
static void __evdev_flush_queue(struct evdev_client *client, unsigned int type)
{
struct input_event ev;
ktime_t time;
- time = (client->clkid == CLOCK_MONOTONIC) ?
- ktime_get() : ktime_get_real();
+ time = client->clk_type == EV_CLK_REAL ?
+ ktime_get_real() :
+ client->clk_type == EV_CLK_MONO ?
+ ktime_get() :
+ ktime_get_boottime();
ev.time = ktime_to_timeval(time);
ev.type = EV_SYN;
static void evdev_pass_values(struct evdev_client *client,
const struct input_value *vals, unsigned int count,
- ktime_t mono, ktime_t real)
+ ktime_t *ev_time)
{
struct evdev *evdev = client->evdev;
const struct input_value *v;
if (client->revoked)
return;
- event.time = ktime_to_timeval(client->clkid == CLOCK_MONOTONIC ?
- mono : real);
+ event.time = ktime_to_timeval(ev_time[client->clk_type]);
/* Interrupts are disabled, just acquire the lock. */
spin_lock(&client->buffer_lock);
{
struct evdev *evdev = handle->private;
struct evdev_client *client;
- ktime_t time_mono, time_real;
+ ktime_t ev_time[EV_CLK_MAX];
- time_mono = ktime_get();
- time_real = ktime_mono_to_real(time_mono);
+ ev_time[EV_CLK_MONO] = ktime_get();
+ ev_time[EV_CLK_REAL] = ktime_mono_to_real(ev_time[EV_CLK_MONO]);
+ ev_time[EV_CLK_BOOT] = ktime_mono_to_any(ev_time[EV_CLK_MONO],
+ TK_OFFS_BOOT);
rcu_read_lock();
client = rcu_dereference(evdev->grab);
if (client)
- evdev_pass_values(client, vals, count, time_mono, time_real);
+ evdev_pass_values(client, vals, count, ev_time);
else
list_for_each_entry_rcu(client, &evdev->client_list, node)
- evdev_pass_values(client, vals, count,
- time_mono, time_real);
+ evdev_pass_values(client, vals, count, ev_time);
rcu_read_unlock();
}
case EVIOCSCLOCKID:
if (copy_from_user(&i, p, sizeof(unsigned int)))
return -EFAULT;
- if (i != CLOCK_MONOTONIC && i != CLOCK_REALTIME)
- return -EINVAL;
- client->clkid = i;
- return 0;
+
+ return evdev_set_clk_type(client, i);
case EVIOCGKEYCODE:
return evdev_handle_get_keycode(dev, p);
events = mt_slots + 1; /* count SYN_MT_REPORT and SYN_REPORT */
- for (i = 0; i < ABS_CNT; i++) {
- if (test_bit(i, dev->absbit)) {
- if (input_is_mt_axis(i))
- events += mt_slots;
- else
- events++;
+ if (test_bit(EV_ABS, dev->evbit)) {
+ for (i = 0; i < ABS_CNT; i++) {
+ if (test_bit(i, dev->absbit)) {
+ if (input_is_mt_axis(i))
+ events += mt_slots;
+ else
+ events++;
+ }
}
}
- for (i = 0; i < REL_CNT; i++)
- if (test_bit(i, dev->relbit))
- events++;
+ if (test_bit(EV_REL, dev->evbit)) {
+ for (i = 0; i < REL_CNT; i++)
+ if (test_bit(i, dev->relbit))
+ events++;
+ }
/* Make room for KEY and MSC events */
events += 7;
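Worked example of the sizing above, for a hypothetical device: a 10-slot touchscreen with EV_ABS set and three MT axes contributes (10 + 1) + 3 * 10 = 41 events, an absent EV_REL bit adds nothing, and the fixed KEY/MSC headroom of 7 brings the per-packet estimate to 48 events.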
config KEYBOARD_STMPE
tristate "STMPE keypad support"
depends on MFD_STMPE
+ depends on OF
select INPUT_MATRIXKMAP
help
Say Y here if you want to use the keypad controller on STMPE I/O
struct gpio_button_data {
const struct gpio_keys_button *button;
struct input_dev *input;
- struct timer_list timer;
- struct work_struct work;
- unsigned int timer_debounce; /* in msecs */
+
+ struct timer_list release_timer;
+ unsigned int release_delay; /* in msecs, for IRQ-only buttons */
+
+ struct delayed_work work;
+ unsigned int software_debounce; /* in msecs, for GPIO-driven buttons */
+
unsigned int irq;
spinlock_t lock;
bool disabled;
{
if (!bdata->disabled) {
/*
- * Disable IRQ and possible debouncing timer.
+ * Disable IRQ and associated timer/work structure.
*/
disable_irq(bdata->irq);
- if (bdata->timer_debounce)
- del_timer_sync(&bdata->timer);
+
+ if (gpio_is_valid(bdata->button->gpio))
+ cancel_delayed_work_sync(&bdata->work);
+ else
+ del_timer_sync(&bdata->release_timer);
bdata->disabled = true;
}
static void gpio_keys_gpio_work_func(struct work_struct *work)
{
struct gpio_button_data *bdata =
- container_of(work, struct gpio_button_data, work);
+ container_of(work, struct gpio_button_data, work.work);
gpio_keys_gpio_report_event(bdata);
pm_relax(bdata->input->dev.parent);
}
-static void gpio_keys_gpio_timer(unsigned long _data)
-{
- struct gpio_button_data *bdata = (struct gpio_button_data *)_data;
-
- schedule_work(&bdata->work);
-}
-
static irqreturn_t gpio_keys_gpio_isr(int irq, void *dev_id)
{
struct gpio_button_data *bdata = dev_id;
if (bdata->button->wakeup)
pm_stay_awake(bdata->input->dev.parent);
- if (bdata->timer_debounce)
- mod_timer(&bdata->timer,
- jiffies + msecs_to_jiffies(bdata->timer_debounce));
- else
- schedule_work(&bdata->work);
+
+ mod_delayed_work(system_wq,
+ &bdata->work,
+ msecs_to_jiffies(bdata->software_debounce));
return IRQ_HANDLED;
}
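One simplification worth noting above: mod_delayed_work() with a zero timeout queues the work immediately, so the old split between a debounce timer and a direct schedule_work() call collapses into a single path, and a bouncing GPIO simply keeps pushing the pending work item's deadline forward.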
input_event(input, EV_KEY, button->code, 1);
input_sync(input);
- if (!bdata->timer_debounce) {
+ if (!bdata->release_delay) {
input_event(input, EV_KEY, button->code, 0);
input_sync(input);
goto out;
bdata->key_pressed = true;
}
- if (bdata->timer_debounce)
- mod_timer(&bdata->timer,
- jiffies + msecs_to_jiffies(bdata->timer_debounce));
+ if (bdata->release_delay)
+ mod_timer(&bdata->release_timer,
+ jiffies + msecs_to_jiffies(bdata->release_delay));
out:
spin_unlock_irqrestore(&bdata->lock, flags);
return IRQ_HANDLED;
{
struct gpio_button_data *bdata = data;
- if (bdata->timer_debounce)
- del_timer_sync(&bdata->timer);
-
- cancel_work_sync(&bdata->work);
+ if (gpio_is_valid(bdata->button->gpio))
+ cancel_delayed_work_sync(&bdata->work);
+ else
+ del_timer_sync(&bdata->release_timer);
}
static int gpio_keys_setup_key(struct platform_device *pdev,
button->debounce_interval * 1000);
/* use timer if gpiolib doesn't provide debounce */
if (error < 0)
- bdata->timer_debounce =
+ bdata->software_debounce =
button->debounce_interval;
}
- irq = gpio_to_irq(button->gpio);
- if (irq < 0) {
- error = irq;
- dev_err(dev,
- "Unable to get irq number for GPIO %d, error %d\n",
- button->gpio, error);
- return error;
+ if (button->irq) {
+ bdata->irq = button->irq;
+ } else {
+ irq = gpio_to_irq(button->gpio);
+ if (irq < 0) {
+ error = irq;
+ dev_err(dev,
+ "Unable to get irq number for GPIO %d, error %d\n",
+ button->gpio, error);
+ return error;
+ }
+ bdata->irq = irq;
}
- bdata->irq = irq;
- INIT_WORK(&bdata->work, gpio_keys_gpio_work_func);
- setup_timer(&bdata->timer,
- gpio_keys_gpio_timer, (unsigned long)bdata);
+ INIT_DELAYED_WORK(&bdata->work, gpio_keys_gpio_work_func);
isr = gpio_keys_gpio_isr;
irqflags = IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING;
return -EINVAL;
}
- bdata->timer_debounce = button->debounce_interval;
- setup_timer(&bdata->timer,
+ bdata->release_delay = button->debounce_interval;
+ setup_timer(&bdata->release_timer,
gpio_keys_irq_timer, (unsigned long)bdata);
isr = gpio_keys_irq_isr;
input_set_capability(input, button->type ?: EV_KEY, button->code);
/*
- * Install custom action to cancel debounce timer and
+ * Install custom action to cancel release timer and
* workqueue item.
*/
error = devm_add_action(&pdev->dev, gpio_keys_quiesce_key, bdata);
i = 0;
for_each_child_of_node(node, pp) {
- int gpio = -1;
enum of_gpio_flags flags;
button = &pdata->buttons[i++];
- if (!of_find_property(pp, "gpios", NULL)) {
- button->irq = irq_of_parse_and_map(pp, 0);
- if (button->irq == 0) {
- i--;
- pdata->nbuttons--;
- dev_warn(dev, "Found button without gpios or irqs\n");
- continue;
- }
- } else {
- gpio = of_get_gpio_flags(pp, 0, &flags);
- if (gpio < 0) {
- error = gpio;
+ button->gpio = of_get_gpio_flags(pp, 0, &flags);
+ if (button->gpio < 0) {
+ error = button->gpio;
+ if (error != -ENOENT) {
if (error != -EPROBE_DEFER)
dev_err(dev,
"Failed to get gpio flags, error: %d\n",
error);
return ERR_PTR(error);
}
+ } else {
+ button->active_low = flags & OF_GPIO_ACTIVE_LOW;
}
- button->gpio = gpio;
- button->active_low = flags & OF_GPIO_ACTIVE_LOW;
+ button->irq = irq_of_parse_and_map(pp, 0);
+
+ if (!gpio_is_valid(button->gpio) && !button->irq) {
+ dev_err(dev, "Found button without gpios or irqs\n");
+ return ERR_PTR(-EINVAL);
+ }
if (of_property_read_u32(pp, "linux,code", &button->code)) {
dev_err(dev, "Button without keycode: 0x%x\n",
button->wakeup = !!of_get_property(pp, "gpio-key,wakeup", NULL);
+ button->can_disable = !!of_get_property(pp, "linux,can-disable", NULL);
+
if (of_property_read_u32(pp, "debounce-interval",
&button->debounce_interval))
button->debounce_interval = 5;
if (error)
goto bail1;
- init_completion(&dev->cmd_done);
+ reinit_completion(&dev->cmd_done);
serio_write(serio, 0);
serio_write(serio, 0);
serio_write(serio, HIL_PKT_CMD >> 8);
if (error)
goto bail1;
- init_completion(&dev->cmd_done);
+ reinit_completion(&dev->cmd_done);
serio_write(serio, 0);
serio_write(serio, 0);
serio_write(serio, HIL_PKT_CMD >> 8);
if (error)
goto bail1;
- init_completion(&dev->cmd_done);
+ reinit_completion(&dev->cmd_done);
serio_write(serio, 0);
serio_write(serio, 0);
serio_write(serio, HIL_PKT_CMD >> 8);
#define STMPE_KEYPAD_MAX_ROWS 8
#define STMPE_KEYPAD_MAX_COLS 8
#define STMPE_KEYPAD_ROW_SHIFT 3
-#define STMPE_KEYPAD_KEYMAP_SIZE \
+#define STMPE_KEYPAD_KEYMAP_MAX_SIZE \
(STMPE_KEYPAD_MAX_ROWS * STMPE_KEYPAD_MAX_COLS)
/**
* struct stmpe_keypad_variant - model-specific attributes
* @auto_increment: whether the KPC_DATA_BYTE register address
* auto-increments on multiple read
+ * @set_pullup: whether the pins need to have their pull-ups set
* @num_data: number of data bytes
* @num_normal_data: number of normal keys' data bytes
* @max_cols: maximum number of columns supported
*/
struct stmpe_keypad_variant {
bool auto_increment;
+ bool set_pullup;
int num_data;
int num_normal_data;
int max_cols;
},
[STMPE2401] = {
.auto_increment = false,
+ .set_pullup = true,
.num_data = 3,
.num_normal_data = 2,
.max_cols = 8,
},
[STMPE2403] = {
.auto_increment = true,
+ .set_pullup = true,
.num_data = 5,
.num_normal_data = 3,
.max_cols = 8,
},
};
+/**
+ * struct stmpe_keypad - STMPE keypad state container
+ * @stmpe: pointer to parent STMPE device
+ * @input: spawned input device
+ * @variant: STMPE variant
+ * @debounce_ms: debounce interval, in ms. Maximum is
+ * %STMPE_KEYPAD_MAX_DEBOUNCE.
+ * @scan_count: number of key scanning cycles to confirm key data.
+ * Maximum is %STMPE_KEYPAD_MAX_SCAN_COUNT.
+ * @no_autorepeat: disable key autorepeat
+ * @rows: bitmask for the rows
+ * @cols: bitmask for the columns
+ * @keymap: the keymap
+ */
struct stmpe_keypad {
struct stmpe *stmpe;
struct input_dev *input;
const struct stmpe_keypad_variant *variant;
- const struct stmpe_keypad_platform_data *plat;
-
+ unsigned int debounce_ms;
+ unsigned int scan_count;
+ bool no_autorepeat;
unsigned int rows;
unsigned int cols;
-
- unsigned short keymap[STMPE_KEYPAD_KEYMAP_SIZE];
+ unsigned short keymap[STMPE_KEYPAD_KEYMAP_MAX_SIZE];
};
static int stmpe_keypad_read_data(struct stmpe_keypad *keypad, u8 *data)
unsigned int col_gpios = variant->col_gpios;
unsigned int row_gpios = variant->row_gpios;
struct stmpe *stmpe = keypad->stmpe;
+ u8 pureg = stmpe->regs[STMPE_IDX_GPPUR_LSB];
unsigned int pins = 0;
+ unsigned int pu_pins = 0;
+ int ret;
int i;
/*
for (i = 0; i < variant->max_cols; i++) {
int num = __ffs(col_gpios);
- if (keypad->cols & (1 << i))
+ if (keypad->cols & (1 << i)) {
pins |= 1 << num;
+ pu_pins |= 1 << num;
+ }
col_gpios &= ~(1 << num);
}
row_gpios &= ~(1 << num);
}
- return stmpe_set_altfunc(stmpe, pins, STMPE_BLOCK_KEYPAD);
+ ret = stmpe_set_altfunc(stmpe, pins, STMPE_BLOCK_KEYPAD);
+ if (ret)
+ return ret;
+
+	/*
+	 * On STMPE24xx, set the pin bias to pull-up on all keypad input
+	 * pins (columns); these happen to be at most 8 pins, placed at
+	 * GPIO0-7, so only the LSB of the pull-up register ever needs
+	 * to be written.
+	 */
+ if (variant->set_pullup) {
+ u8 val;
+
+ ret = stmpe_reg_read(stmpe, pureg);
+ if (ret)
+ return ret;
+
+		/* Do not touch unused pins; they may be in use as GPIOs */
+ val = ret & ~pu_pins;
+ val |= pu_pins;
+
+ ret = stmpe_reg_write(stmpe, pureg, val);
+ }
+
+ return 0;
}
static int stmpe_keypad_chip_init(struct stmpe_keypad *keypad)
{
- const struct stmpe_keypad_platform_data *plat = keypad->plat;
const struct stmpe_keypad_variant *variant = keypad->variant;
struct stmpe *stmpe = keypad->stmpe;
int ret;
- if (plat->debounce_ms > STMPE_KEYPAD_MAX_DEBOUNCE)
+ if (keypad->debounce_ms > STMPE_KEYPAD_MAX_DEBOUNCE)
return -EINVAL;
- if (plat->scan_count > STMPE_KEYPAD_MAX_SCAN_COUNT)
+ if (keypad->scan_count > STMPE_KEYPAD_MAX_SCAN_COUNT)
return -EINVAL;
ret = stmpe_enable(stmpe, STMPE_BLOCK_KEYPAD);
ret = stmpe_set_bits(stmpe, STMPE_KPC_CTRL_MSB,
STMPE_KPC_CTRL_MSB_SCAN_COUNT,
- plat->scan_count << 4);
+ keypad->scan_count << 4);
if (ret < 0)
return ret;
STMPE_KPC_CTRL_LSB_SCAN |
STMPE_KPC_CTRL_LSB_DEBOUNCE,
STMPE_KPC_CTRL_LSB_SCAN |
- (plat->debounce_ms << 1));
+ (keypad->debounce_ms << 1));
}
-static void stmpe_keypad_fill_used_pins(struct stmpe_keypad *keypad)
+static void stmpe_keypad_fill_used_pins(struct stmpe_keypad *keypad,
+ u32 used_rows, u32 used_cols)
{
int row, col;
- for (row = 0; row < STMPE_KEYPAD_MAX_ROWS; row++) {
- for (col = 0; col < STMPE_KEYPAD_MAX_COLS; col++) {
+ for (row = 0; row < used_rows; row++) {
+ for (col = 0; col < used_cols; col++) {
int code = MATRIX_SCAN_CODE(row, col,
- STMPE_KEYPAD_ROW_SHIFT);
+ STMPE_KEYPAD_ROW_SHIFT);
if (keypad->keymap[code] != KEY_RESERVED) {
keypad->rows |= 1 << row;
keypad->cols |= 1 << col;
}
}
-#ifdef CONFIG_OF
-static const struct stmpe_keypad_platform_data *
-stmpe_keypad_of_probe(struct device *dev)
-{
- struct device_node *np = dev->of_node;
- struct stmpe_keypad_platform_data *plat;
-
- if (!np)
- return ERR_PTR(-ENODEV);
-
- plat = devm_kzalloc(dev, sizeof(*plat), GFP_KERNEL);
- if (!plat)
- return ERR_PTR(-ENOMEM);
-
- of_property_read_u32(np, "debounce-interval", &plat->debounce_ms);
- of_property_read_u32(np, "st,scan-count", &plat->scan_count);
-
- plat->no_autorepeat = of_property_read_bool(np, "st,no-autorepeat");
-
- return plat;
-}
-#else
-static inline const struct stmpe_keypad_platform_data *
-stmpe_keypad_of_probe(struct device *dev)
-{
- return ERR_PTR(-EINVAL);
-}
-#endif
-
static int stmpe_keypad_probe(struct platform_device *pdev)
{
struct stmpe *stmpe = dev_get_drvdata(pdev->dev.parent);
- const struct stmpe_keypad_platform_data *plat;
+ struct device_node *np = pdev->dev.of_node;
struct stmpe_keypad *keypad;
struct input_dev *input;
+ u32 rows;
+ u32 cols;
int error;
int irq;
- plat = stmpe->pdata->keypad;
- if (!plat) {
- plat = stmpe_keypad_of_probe(&pdev->dev);
- if (IS_ERR(plat))
- return PTR_ERR(plat);
- }
-
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
if (!keypad)
return -ENOMEM;
+ keypad->stmpe = stmpe;
+ keypad->variant = &stmpe_keypad_variants[stmpe->partnum];
+
+ of_property_read_u32(np, "debounce-interval", &keypad->debounce_ms);
+ of_property_read_u32(np, "st,scan-count", &keypad->scan_count);
+ keypad->no_autorepeat = of_property_read_bool(np, "st,no-autorepeat");
+
input = devm_input_allocate_device(&pdev->dev);
if (!input)
return -ENOMEM;
input->id.bustype = BUS_I2C;
input->dev.parent = &pdev->dev;
- error = matrix_keypad_build_keymap(plat->keymap_data, NULL,
- STMPE_KEYPAD_MAX_ROWS,
- STMPE_KEYPAD_MAX_COLS,
+ error = matrix_keypad_parse_of_params(&pdev->dev, &rows, &cols);
+ if (error)
+ return error;
+
+ error = matrix_keypad_build_keymap(NULL, NULL, rows, cols,
keypad->keymap, input);
if (error)
return error;
input_set_capability(input, EV_MSC, MSC_SCAN);
- if (!plat->no_autorepeat)
+ if (!keypad->no_autorepeat)
__set_bit(EV_REP, input->evbit);
- stmpe_keypad_fill_used_pins(keypad);
+ stmpe_keypad_fill_used_pins(keypad, rows, cols);
- keypad->stmpe = stmpe;
- keypad->plat = plat;
keypad->input = input;
- keypad->variant = &stmpe_keypad_variants[stmpe->partnum];
error = stmpe_keypad_chip_init(keypad);
if (error < 0)
unsigned char *pkt,
unsigned char pkt_id)
{
+ /*
+ * packet-fmt b7 b6 b5 b4 b3 b2 b1 b0
+ * Byte0 TWO & MULTI L 1 R M 1 Y0-2 Y0-1 Y0-0
+ * Byte0 NEW L 1 X1-5 1 1 Y0-2 Y0-1 Y0-0
+ * Byte1 Y0-10 Y0-9 Y0-8 Y0-7 Y0-6 Y0-5 Y0-4 Y0-3
+ * Byte2 X0-11 1 X0-10 X0-9 X0-8 X0-7 X0-6 X0-5
+ * Byte3 X1-11 1 X0-4 X0-3 1 X0-2 X0-1 X0-0
+ * Byte4 TWO X1-10 TWO X1-9 X1-8 X1-7 X1-6 X1-5 X1-4
+ * Byte4 MULTI X1-10 TWO X1-9 X1-8 X1-7 X1-6 Y1-5 1
+ * Byte4 NEW X1-10 TWO X1-9 X1-8 X1-7 X1-6 0 0
+ * Byte5 TWO & NEW Y1-10 0 Y1-9 Y1-8 Y1-7 Y1-6 Y1-5 Y1-4
+ * Byte5 MULTI Y1-10 0 Y1-9 Y1-8 Y1-7 Y1-6 F-1 F-0
+ * L: Left button
+ * R / M: Non-clickpads: Right / Middle button
+ * Clickpads: When > 2 fingers are down, and some fingers
+ * are in the button area, then the 2 coordinates reported
+ * are for fingers outside the button area and these report
+ * extra fingers being present in the right / left button
+ * area. Note these fingers are not added to the F field!
+	 *             area. Note that these fingers are not added to the F field,
+	 *             so if a TWO packet is received and R = 1 then there are
+ * TWO: 1: Two touches present, byte 0/4/5 are in TWO fmt
+ * 0: If byte 4 bit 0 is 1, then byte 0/4/5 are in MULTI fmt
+ * otherwise byte 0 bit 4 must be set and byte 0/4/5 are
+ * in NEW fmt
+ * F: Number of fingers - 3, 0 means 3 fingers, 1 means 4 ...
+ */
+
mt[0].x = ((pkt[2] & 0x80) << 4);
mt[0].x |= ((pkt[2] & 0x3F) << 5);
mt[0].x |= ((pkt[3] & 0x30) >> 1);
static int alps_get_mt_count(struct input_mt_pos *mt)
{
- int i;
+ int i, fingers = 0;
- for (i = 0; i < MAX_TOUCHES && mt[i].x != 0 && mt[i].y != 0; i++)
- /* empty */;
+ for (i = 0; i < MAX_TOUCHES; i++) {
+ if (mt[i].x != 0 || mt[i].y != 0)
+ fingers++;
+ }
- return i;
+ return fingers;
}
static int alps_decode_packet_v7(struct alps_fields *f,
unsigned char *p,
struct psmouse *psmouse)
{
+ struct alps_data *priv = psmouse->private;
unsigned char pkt_id;
pkt_id = alps_get_packet_id_v7(p);
return 0;
if (pkt_id == V7_PACKET_ID_UNKNOWN)
return -1;
+	/*
+	 * NEW packets are sent to indicate a discontinuity in the finger
+	 * coordinate reporting. Specifically a finger may have moved from
+	 * slot 0 to 1 or vice versa. INPUT_MT_TRACK takes care of this for
+	 * us.
+	 *
+	 * NEW packets have 3 problems:
+	 * 1) They do not contain middle / right button info (on non-clickpads);
+	 *    this can be worked around by preserving the old button state
+	 * 2) They do not contain an accurate finger count, and they are
+	 *    typically sent when the number of fingers changes. We cannot use
+	 *    the old finger count as that may not match the number of
+	 *    touch coordinates available in the NEW packet
+	 * 3) Their x data for the second touch is inaccurate, leading to
+	 *    a possible jump of the x coordinate by 16 units when the first
+	 *    non-NEW packet comes in
+	 * Since problems 2 & 3 cannot be worked around, just ignore them.
+	 */
+ if (pkt_id == V7_PACKET_ID_NEW)
+ return 1;
alps_get_finger_coordinate_v7(f->mt, p, pkt_id);
- if (pkt_id == V7_PACKET_ID_TWO || pkt_id == V7_PACKET_ID_MULTI) {
- f->left = (p[0] & 0x80) >> 7;
+ if (pkt_id == V7_PACKET_ID_TWO)
+ f->fingers = alps_get_mt_count(f->mt);
+ else /* pkt_id == V7_PACKET_ID_MULTI */
+ f->fingers = 3 + (p[5] & 0x03);
+
+ f->left = (p[0] & 0x80) >> 7;
+ if (priv->flags & ALPS_BUTTONPAD) {
+ if (p[0] & 0x20)
+ f->fingers++;
+ if (p[0] & 0x10)
+ f->fingers++;
+ } else {
f->right = (p[0] & 0x20) >> 5;
f->middle = (p[0] & 0x10) >> 4;
}
- if (pkt_id == V7_PACKET_ID_TWO)
- f->fingers = alps_get_mt_count(f->mt);
- else if (pkt_id == V7_PACKET_ID_MULTI)
- f->fingers = 3 + (p[5] & 0x03);
+	/* Sometimes a single touch is reported in mt[1] rather than mt[0] */
+ if (f->fingers == 1 && f->mt[0].x == 0 && f->mt[0].y == 0) {
+ f->mt[0].x = f->mt[1].x;
+ f->mt[0].y = f->mt[1].y;
+ f->mt[1].x = 0;
+ f->mt[1].y = 0;
+ }
return 0;
}
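Worked example of the finger accounting above, with hypothetical packet bytes: a MULTI packet with (p[5] & 0x03) == 2 decodes to 3 + 2 = 5 fingers; on a clickpad (ALPS_BUTTONPAD) with p[0] bit 5 also set, one extra finger in the button area is counted for a total of 6, whereas on a non-clickpad the same bit reports the right button instead.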
TRACKPOINT_INT_ATTR(upthresh, TP_UP_THRESH, TP_DEF_UP_THRESH);
TRACKPOINT_INT_ATTR(ztime, TP_Z_TIME, TP_DEF_Z_TIME);
TRACKPOINT_INT_ATTR(jenks, TP_JENKS_CURV, TP_DEF_JENKS_CURV);
+TRACKPOINT_INT_ATTR(drift_time, TP_DRIFT_TIME, TP_DEF_DRIFT_TIME);
TRACKPOINT_BIT_ATTR(press_to_select, TP_TOGGLE_PTSON, TP_MASK_PTSON, 0,
TP_DEF_PTSON);
&psmouse_attr_upthresh.dattr.attr,
&psmouse_attr_ztime.dattr.attr,
&psmouse_attr_jenks.dattr.attr,
+ &psmouse_attr_drift_time.dattr.attr,
&psmouse_attr_press_to_select.dattr.attr,
&psmouse_attr_skipback.dattr.attr,
&psmouse_attr_ext_dev.dattr.attr,
TRACKPOINT_UPDATE(in_power_on_state, psmouse, tp, upthresh);
TRACKPOINT_UPDATE(in_power_on_state, psmouse, tp, ztime);
TRACKPOINT_UPDATE(in_power_on_state, psmouse, tp, jenks);
+ TRACKPOINT_UPDATE(in_power_on_state, psmouse, tp, drift_time);
/* toggles */
TRACKPOINT_UPDATE(in_power_on_state, psmouse, tp, press_to_select);
TRACKPOINT_SET_POWER_ON_DEFAULT(tp, upthresh);
TRACKPOINT_SET_POWER_ON_DEFAULT(tp, ztime);
TRACKPOINT_SET_POWER_ON_DEFAULT(tp, jenks);
+ TRACKPOINT_SET_POWER_ON_DEFAULT(tp, drift_time);
TRACKPOINT_SET_POWER_ON_DEFAULT(tp, inertia);
/* toggles */
#define TP_UP_THRESH 0x5A /* Used to generate a 'click' on Z-axis */
#define TP_Z_TIME 0x5E /* How sharp of a press */
#define TP_JENKS_CURV 0x5D /* Minimum curvature for double click */
+#define TP_DRIFT_TIME 0x5F /* How long a 'hands off' condition */
+ /* must last (x*107ms) for drift */
+ /* correction to occur */
/*
* Toggling Flag bits
#define TP_DEF_UP_THRESH 0xFF
#define TP_DEF_Z_TIME 0x26
#define TP_DEF_JENKS_CURV 0x87
+#define TP_DEF_DRIFT_TIME 0x05
/* Toggles */
#define TP_DEF_MB 0x00
unsigned char draghys, mindrag;
unsigned char thresh, upthresh;
unsigned char ztime, jenks;
+ unsigned char drift_time;
/* toggles */
unsigned char press_to_select;
#define MXT_T6_STATUS_COMSERR (1 << 2)
/* MXT_GEN_POWER_T7 field */
-struct t7_config {
- u8 idle;
- u8 active;
-} __packed;
-
-#define MXT_POWER_CFG_RUN 0
-#define MXT_POWER_CFG_DEEPSLEEP 1
+#define MXT_POWER_IDLEACQINT 0
+#define MXT_POWER_ACTVACQINT 1
+#define MXT_POWER_ACTV2IDLETO 2
/* MXT_GEN_ACQUIRE_T8 field */
#define MXT_ACQUIRE_CHRGTIME 0
#define MXT_ACQUIRE_ATCHCALSTHR 7
/* MXT_TOUCH_MULTI_T9 field */
+#define MXT_TOUCH_CTRL 0
#define MXT_T9_ORIENT 9
#define MXT_T9_RANGE 18
bool update_input;
u8 last_message_count;
u8 num_touchids;
- struct t7_config t7_cfg;
/* Cached parameters from object table */
u16 T5_address;
data->t6_status = status;
}
+static int mxt_write_object(struct mxt_data *data,
+ u8 type, u8 offset, u8 val)
+{
+ struct mxt_object *object;
+ u16 reg;
+
+ object = mxt_get_object(data, type);
+ if (!object || offset >= mxt_obj_size(object))
+ return -EINVAL;
+
+ reg = object->start_address;
+ return mxt_write_reg(data->client, reg + offset, val);
+}
+
static void mxt_input_button(struct mxt_data *data, u8 *message)
{
struct input_dev *input = data->input_dev;
return error;
}
-static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep)
-{
- struct device *dev = &data->client->dev;
- int error;
- struct t7_config *new_config;
- struct t7_config deepsleep = { .active = 0, .idle = 0 };
-
- if (sleep == MXT_POWER_CFG_DEEPSLEEP)
- new_config = &deepsleep;
- else
- new_config = &data->t7_cfg;
-
- error = __mxt_write_reg(data->client, data->T7_address,
- sizeof(data->t7_cfg), new_config);
- if (error)
- return error;
-
- dev_dbg(dev, "Set T7 ACTV:%d IDLE:%d\n",
- new_config->active, new_config->idle);
-
- return 0;
-}
-
-static int mxt_init_t7_power_cfg(struct mxt_data *data)
-{
- struct device *dev = &data->client->dev;
- int error;
- bool retry = false;
-
-recheck:
- error = __mxt_read_reg(data->client, data->T7_address,
- sizeof(data->t7_cfg), &data->t7_cfg);
- if (error)
- return error;
-
- if (data->t7_cfg.active == 0 || data->t7_cfg.idle == 0) {
- if (!retry) {
- dev_dbg(dev, "T7 cfg zero, resetting\n");
- mxt_soft_reset(data);
- retry = true;
- goto recheck;
- } else {
- dev_dbg(dev, "T7 cfg zero after reset, overriding\n");
- data->t7_cfg.active = 20;
- data->t7_cfg.idle = 100;
- return mxt_set_t7_power_cfg(data, MXT_POWER_CFG_RUN);
- }
- }
-
- dev_dbg(dev, "Initialized power cfg: ACTV %d, IDLE %d\n",
- data->t7_cfg.active, data->t7_cfg.idle);
- return 0;
-}
-
static int mxt_configure_objects(struct mxt_data *data,
const struct firmware *cfg)
{
dev_warn(dev, "Error %d updating config\n", error);
}
- error = mxt_init_t7_power_cfg(data);
- if (error) {
- dev_err(dev, "Failed to initialize power cfg\n");
- return error;
- }
-
error = mxt_initialize_t9_input_device(data);
if (error)
return error;
static void mxt_start(struct mxt_data *data)
{
- mxt_set_t7_power_cfg(data, MXT_POWER_CFG_RUN);
-
- /* Recalibrate since chip has been in deep sleep */
- mxt_t6_command(data, MXT_COMMAND_CALIBRATE, 1, false);
+ /* Touch enable */
+ mxt_write_object(data,
+ MXT_TOUCH_MULTI_T9, MXT_TOUCH_CTRL, 0x83);
}
static void mxt_stop(struct mxt_data *data)
{
- mxt_set_t7_power_cfg(data, MXT_POWER_CFG_DEEPSLEEP);
+ /* Touch disable */
+ mxt_write_object(data,
+ MXT_TOUCH_MULTI_T9, MXT_TOUCH_CTRL, 0);
}
static int mxt_input_open(struct input_dev *dev)
struct mxt_data *data = i2c_get_clientdata(client);
struct input_dev *input_dev = data->input_dev;
+ mxt_soft_reset(data);
+
mutex_lock(&input_dev->mutex);
if (input_dev->users)
}
#define EDT_ATTR_CHECKSET(name, reg) \
+do { \
if (pdata->name >= edt_ft5x06_attr_##name.limit_low && \
pdata->name <= edt_ft5x06_attr_##name.limit_high) \
- edt_ft5x06_register_write(tsdata, reg, pdata->name)
+ edt_ft5x06_register_write(tsdata, reg, pdata->name); \
+} while (0)
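The do { } while (0) wrapper added above is the standard hygiene fix for a macro whose body is a bare if: without it, an else at the call site silently binds to the macro's internal if. A minimal sketch of the failure mode (names hypothetical):

	#include <stdio.h>

	#define CHECKSET_BAD(cond, val)	\
		if (cond)		\
			printf("set %d\n", val)

	#define CHECKSET_GOOD(cond, val)		\
	do {						\
		if (cond)				\
			printf("set %d\n", val);	\
	} while (0)

	int main(void)
	{
		int ok = 0;

		/* With CHECKSET_BAD here, the else below would bind to the
		 * macro's internal if, and "skipped" would never print for
		 * ok == 0; the do-while form behaves as the call site reads. */
		if (ok)
			CHECKSET_GOOD(1, 1);
		else
			printf("skipped\n");
		return 0;
	}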
#define EDT_GET_PROP(name, reg) { \
u32 val; \
if (action != BUS_NOTIFY_REMOVED_DEVICE)
return 0;
- /*
- * If the device is still attached to a device driver we can't
- * tear down the domain yet as DMA mappings may still be in use.
- * Wait for the BUS_NOTIFY_UNBOUND_DRIVER event to do that.
- */
- if (action == BUS_NOTIFY_DEL_DEVICE && dev->driver != NULL)
- return 0;
-
domain = find_domain(dev);
if (!domain)
return 0;
domain_remove_one_dev_info(old_domain, dev);
else
domain_remove_dev_info(old_domain);
+
+ if (!domain_type_is_vm_or_si(old_domain) &&
+ list_empty(&old_domain->devices))
+ domain_exit(old_domain);
}
}
static u64 ipmmu_page_prot(unsigned int prot, u64 type)
{
- u64 pgprot = ARM_VMSA_PTE_XN | ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
+ u64 pgprot = ARM_VMSA_PTE_nG | ARM_VMSA_PTE_AF
| ARM_VMSA_PTE_SH_IS | ARM_VMSA_PTE_AP_UNPRIV
| ARM_VMSA_PTE_NS | type;
if (prot & IOMMU_CACHE)
pgprot |= IMMAIR_ATTR_IDX_WBRWA << ARM_VMSA_PTE_ATTRINDX_SHIFT;
- if (prot & IOMMU_EXEC)
- pgprot &= ~ARM_VMSA_PTE_XN;
+ if (prot & IOMMU_NOEXEC)
+ pgprot |= ARM_VMSA_PTE_XN;
else if (!(prot & (IOMMU_READ | IOMMU_WRITE)))
/* If no access create a faulting entry to avoid TLB fills. */
pgprot &= ~ARM_VMSA_PTE_PAGE;
.remove = rk_iommu_remove,
.driver = {
.name = "rk_iommu",
- .owner = THIS_MODULE,
.of_match_table = of_match_ptr(rk_iommu_dt_ids),
},
};
byte SS_Ind[] = "\x05\x02\x00\x02\x00\x00"; /* Hold_Ind struct*/
byte CF_Ind[] = "\x09\x02\x00\x06\x00\x00\x00\x00\x00\x00";
byte Interr_Err_Ind[] = "\x0a\x02\x00\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00";
- byte CONF_Ind[] = "\x09\x16\x00\x06\x00\x00\0x00\0x00\0x00\0x00";
+ byte CONF_Ind[] = "\x09\x16\x00\x06\x00\x00\x00\x00\x00\x00";
byte force_mt_info = false;
byte dir;
dword d;
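The CONF_Ind fix above corrects a classic escape-sequence typo: "\x00" is a single NUL byte, whereas "\0x00" is the octal escape '\0' followed by the three literal characters 'x', '0', '0', silently changing both the contents and the length of the message template. A minimal demonstration:

	#include <stdio.h>

	int main(void)
	{
		char good[] = "\x00";
		char bad[]  = "\0x00";

		/* Prints "2 5": sizeof counts the terminating NUL too. */
		printf("%zu %zu\n", sizeof(good), sizeof(bad));
		return 0;
	}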
}
static int
-open_dchannel(struct isac_hw *isac, struct channel_req *rq)
+open_dchannel_caller(struct isac_hw *isac, struct channel_req *rq, void *caller)
{
pr_debug("%s: %s dev(%d) open from %p\n", isac->name, __func__,
- isac->dch.dev.id, __builtin_return_address(1));
+ isac->dch.dev.id, caller);
if (rq->protocol != ISDN_P_TE_S0)
return -EINVAL;
if (rq->adr.channel == 1)
return 0;
}
+static int
+open_dchannel(struct isac_hw *isac, struct channel_req *rq)
+{
+ return open_dchannel_caller(isac, rq, __builtin_return_address(0));
+}
+
static const char *ISACVer[] =
{"2086/2186 V1.1", "2085 B1", "2085 B2",
"2085 V2.3"};
case OPEN_CHANNEL:
rq = arg;
if (rq->protocol == ISDN_P_TE_S0)
- err = open_dchannel(isac, rq);
+ err = open_dchannel_caller(isac, rq, __builtin_return_address(0));
else
err = open_bchannel(ipac, rq);
if (err)
}
static int
-open_dchannel(struct w6692_hw *card, struct channel_req *rq)
+open_dchannel(struct w6692_hw *card, struct channel_req *rq, void *caller)
{
pr_debug("%s: %s dev(%d) open from %p\n", card->name, __func__,
- card->dch.dev.id, __builtin_return_address(1));
+ card->dch.dev.id, caller);
if (rq->protocol != ISDN_P_TE_S0)
return -EINVAL;
if (rq->adr.channel == 1)
case OPEN_CHANNEL:
rq = arg;
if (rq->protocol == ISDN_P_TE_S0)
- err = open_dchannel(card, rq);
+ err = open_dchannel(card, rq, __builtin_return_address(0));
else
err = open_bchannel(card, rq);
if (err)
static unsigned int io[] = {0, 0, 0, 0};
static unsigned char irq[] = {0, 0, 0, 0};
static unsigned long ram[] = {0, 0, 0, 0};
-static bool do_reset = 0;
+static bool do_reset;
module_param_array(io, int, NULL, 0);
module_param_array(irq, byte, NULL, 0);
io[b] + 0x400 * EXP_PAGE0);
continue;
}
- }
- else {
+ } else {
/*
* Yes, probe for I/O Base
*/
if (probe_exhasted) {
- pr_debug("All probe addresses exhasted, skipping\n");
+ pr_debug("All probe addresses exhausted, skipping\n");
continue;
}
pr_debug("Probing for I/O...\n");
model = identify_board(ram[b], io[b]);
release_region(ram[b], SRAM_PAGESIZE);
}
- }
- else {
+ } else {
/*
* Yes, probe for free RAM and look for
* a signature and id the board model
ram[b] = i;
break;
}
- pr_debug(" Unidentifed or inaccessible\n");
+ pr_debug(" Unidentified or inaccessible\n");
continue;
}
pr_debug(" request failed\n");
sc_adapter[cinst]->interrupt = irq[b];
if (request_irq(sc_adapter[cinst]->interrupt, interrupt_handler,
0, interface->id,
- (void *)(unsigned long) cinst))
- {
+ (void *)(unsigned long) cinst)) {
kfree(sc_adapter[cinst]->channel);
indicate_status(cinst, ISDN_STAT_UNLOAD, 0, NULL); /* Fix me */
kfree(interface);
led_dat->sata = 0;
led_dat->cdev.brightness = LED_OFF;
led_dat->cdev.flags |= LED_CORE_SUSPENDRESUME;
- /*
- * If available, expose the SATA activity blink capability through
- * a "sata" sysfs attribute.
- */
- if (led_dat->mode_val[NETXBIG_LED_SATA] != NETXBIG_LED_INVALID_MODE)
- led_dat->cdev.groups = netxbig_led_groups;
led_dat->mode_addr = template->mode_addr;
led_dat->mode_val = template->mode_val;
led_dat->bright_addr = template->bright_addr;
led_dat->bright_max = (1 << pdata->gpio_ext->num_data) - 1;
led_dat->timer = pdata->timer;
led_dat->num_timer = pdata->num_timer;
+ /*
+ * If available, expose the SATA activity blink capability through
+ * a "sata" sysfs attribute.
+ */
+ if (led_dat->mode_val[NETXBIG_LED_SATA] != NETXBIG_LED_INVALID_MODE)
+ led_dat->cdev.groups = netxbig_led_groups;
return led_classdev_register(&pdev->dev, &led_dat->cdev);
}
[STMPE_IDX_GPDR_LSB] = STMPE1601_REG_GPIO_SET_DIR_LSB,
[STMPE_IDX_GPRER_LSB] = STMPE1601_REG_GPIO_RE_LSB,
[STMPE_IDX_GPFER_LSB] = STMPE1601_REG_GPIO_FE_LSB,
+ [STMPE_IDX_GPPUR_LSB] = STMPE1601_REG_GPIO_PU_LSB,
[STMPE_IDX_GPAFR_U_MSB] = STMPE1601_REG_GPIO_AF_U_MSB,
[STMPE_IDX_IEGPIOR_LSB] = STMPE1601_REG_INT_EN_GPIO_MASK_LSB,
[STMPE_IDX_ISGPIOR_MSB] = STMPE1601_REG_INT_STA_GPIO_MSB,
[STMPE_IDX_GPDR_LSB] = STMPE1801_REG_GPIO_SET_DIR_LOW,
[STMPE_IDX_GPRER_LSB] = STMPE1801_REG_GPIO_RE_LOW,
[STMPE_IDX_GPFER_LSB] = STMPE1801_REG_GPIO_FE_LOW,
+ [STMPE_IDX_GPPUR_LSB] = STMPE1801_REG_GPIO_PULL_UP_LOW,
[STMPE_IDX_IEGPIOR_LSB] = STMPE1801_REG_INT_EN_GPIO_MASK_LOW,
[STMPE_IDX_ISGPIOR_LSB] = STMPE1801_REG_INT_STA_GPIO_LOW,
};
[STMPE_IDX_GPDR_LSB] = STMPE24XX_REG_GPDR_LSB,
[STMPE_IDX_GPRER_LSB] = STMPE24XX_REG_GPRER_LSB,
[STMPE_IDX_GPFER_LSB] = STMPE24XX_REG_GPFER_LSB,
+ [STMPE_IDX_GPPUR_LSB] = STMPE24XX_REG_GPPUR_LSB,
+ [STMPE_IDX_GPPDR_LSB] = STMPE24XX_REG_GPPDR_LSB,
[STMPE_IDX_GPAFR_U_MSB] = STMPE24XX_REG_GPAFR_U_MSB,
[STMPE_IDX_IEGPIOR_LSB] = STMPE24XX_REG_IEGPIOR_LSB,
[STMPE_IDX_ISGPIOR_MSB] = STMPE24XX_REG_ISGPIOR_MSB,
#define STMPE1601_REG_GPIO_ED_MSB 0x8A
#define STMPE1601_REG_GPIO_RE_LSB 0x8D
#define STMPE1601_REG_GPIO_FE_LSB 0x8F
+#define STMPE1601_REG_GPIO_PU_LSB 0x91
#define STMPE1601_REG_GPIO_AF_U_MSB 0x92
#define STMPE1601_SYS_CTRL_ENABLE_GPIO (1 << 3)
#define STMPE24XX_REG_GPEDR_MSB 0x8C
#define STMPE24XX_REG_GPRER_LSB 0x91
#define STMPE24XX_REG_GPFER_LSB 0x94
+#define STMPE24XX_REG_GPPUR_LSB 0x97
+#define STMPE24XX_REG_GPPDR_LSB 0x9a
#define STMPE24XX_REG_GPAFR_U_MSB 0x9B
#define STMPE24XX_SYS_CTRL_ENABLE_GPIO (1 << 3)
{ "INT33BB" , "3" , &sdhci_acpi_slot_int_sd },
{ "INT33C6" , NULL, &sdhci_acpi_slot_int_sdio },
{ "INT3436" , NULL, &sdhci_acpi_slot_int_sdio },
+ { "INT344D" , NULL, &sdhci_acpi_slot_int_sdio },
{ "PNP0D40" },
{ },
};
{ "INT33BB" },
{ "INT33C6" },
{ "INT3436" },
+ { "INT344D" },
{ "PNP0D40" },
{ },
};
.subdevice = PCI_ANY_ID,
.driver_data = (kernel_ulong_t)&sdhci_intel_mrfl_mmc,
},
+
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_SPT_EMMC,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = (kernel_ulong_t)&sdhci_intel_byt_emmc,
+ },
+
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_SPT_SDIO,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = (kernel_ulong_t)&sdhci_intel_byt_sdio,
+ },
+
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = PCI_DEVICE_ID_INTEL_SPT_SD,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .driver_data = (kernel_ulong_t)&sdhci_intel_byt_sd,
+ },
+
{
.vendor = PCI_VENDOR_ID_O2,
.device = PCI_DEVICE_ID_O2_8120,
#define PCI_DEVICE_ID_INTEL_CLV_EMMC0 0x08e5
#define PCI_DEVICE_ID_INTEL_CLV_EMMC1 0x08e6
#define PCI_DEVICE_ID_INTEL_QRK_SD 0x08A7
+#define PCI_DEVICE_ID_INTEL_SPT_EMMC 0x9d2b
+#define PCI_DEVICE_ID_INTEL_SPT_SDIO 0x9d2c
+#define PCI_DEVICE_ID_INTEL_SPT_SD 0x9d2d
/*
* PCI registers
if (IS_ERR(host))
return PTR_ERR(host);
- if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) {
- ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info());
- if (ret < 0)
- goto err_mbus_win;
- }
-
-
pltfm_host = sdhci_priv(host);
pltfm_host->priv = pxa;
if (!IS_ERR(pxa->clk_core))
clk_prepare_enable(pxa->clk_core);
+ if (of_device_is_compatible(np, "marvell,armada-380-sdhci")) {
+ ret = mv_conf_mbus_windows(pdev, mv_mbus_dram_info());
+ if (ret < 0)
+ goto err_mbus_win;
+ }
+
/* enable 1/8V DDR capable */
host->mmc->caps |= MMC_CAP_1_8V_DDR;
pm_runtime_disable(&pdev->dev);
err_of_parse:
err_cd_req:
+err_mbus_win:
clk_disable_unprepare(pxa->clk_io);
if (!IS_ERR(pxa->clk_core))
clk_disable_unprepare(pxa->clk_core);
err_clk_get:
-err_mbus_win:
sdhci_pltfm_free(pdev);
return ret;
}
del_timer_sync(&host->tuning_timer);
host->flags &= ~SDHCI_NEEDS_RETUNING;
- host->mmc->max_blk_count =
- (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
}
sdhci_enable_card_detection(host);
}
sdhci_runtime_pm_get(host);
+ present = mmc_gpio_get_cd(host->mmc);
+
spin_lock_irqsave(&host->lock, flags);
WARN_ON(host->mrq != NULL);
* zero: cd-gpio is used, and card is removed
* one: cd-gpio is used, and card is present
*/
- present = mmc_gpio_get_cd(host->mmc);
if (present < 0) {
/* If polling, assume that the card is always present. */
if (host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION)
return !(present_state & SDHCI_DATA_LVL_MASK);
}
+static int sdhci_prepare_hs400_tuning(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+
+ spin_lock_irqsave(&host->lock, flags);
+ host->flags |= SDHCI_HS400_TUNING;
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ return 0;
+}
+
static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
struct sdhci_host *host = mmc_priv(mmc);
int tuning_loop_counter = MAX_TUNING_LOOP;
int err = 0;
unsigned long flags;
+ unsigned int tuning_count = 0;
+ bool hs400_tuning;
sdhci_runtime_pm_get(host);
spin_lock_irqsave(&host->lock, flags);
+ hs400_tuning = host->flags & SDHCI_HS400_TUNING;
+ host->flags &= ~SDHCI_HS400_TUNING;
+
+ if (host->tuning_mode == SDHCI_TUNING_MODE_1)
+ tuning_count = host->tuning_count;
+
/*
* The Host Controller needs tuning only in case of SDR104 mode
* and for SDR50 mode when Use Tuning for SDR50 is set in the
* tuning function has to be executed.
*/
switch (host->timing) {
+ /* HS400 tuning is done in HS200 mode */
case MMC_TIMING_MMC_HS400:
+ err = -EINVAL;
+ goto out_unlock;
+
case MMC_TIMING_MMC_HS200:
+ /*
+ * Periodic re-tuning for HS400 is not expected to be needed, so
+ * disable it here.
+ */
+ if (hs400_tuning)
+ tuning_count = 0;
+ break;
+
case MMC_TIMING_UHS_SDR104:
break;
/* FALLTHROUGH */
default:
- spin_unlock_irqrestore(&host->lock, flags);
- sdhci_runtime_pm_put(host);
- return 0;
+ goto out_unlock;
}
if (host->ops->platform_execute_tuning) {
}
out:
- /*
- * If this is the very first time we are here, we start the retuning
- * timer. Since only during the first time, SDHCI_NEEDS_RETUNING
- * flag won't be set, we check this condition before actually starting
- * the timer.
- */
- if (!(host->flags & SDHCI_NEEDS_RETUNING) && host->tuning_count &&
- (host->tuning_mode == SDHCI_TUNING_MODE_1)) {
+ host->flags &= ~SDHCI_NEEDS_RETUNING;
+
+ if (tuning_count) {
host->flags |= SDHCI_USING_RETUNING_TIMER;
- mod_timer(&host->tuning_timer, jiffies +
- host->tuning_count * HZ);
- /* Tuning mode 1 limits the maximum data length to 4MB */
- mmc->max_blk_count = (4 * 1024 * 1024) / mmc->max_blk_size;
- } else if (host->flags & SDHCI_USING_RETUNING_TIMER) {
- host->flags &= ~SDHCI_NEEDS_RETUNING;
- /* Reload the new initial value for timer */
- mod_timer(&host->tuning_timer, jiffies +
- host->tuning_count * HZ);
+ mod_timer(&host->tuning_timer, jiffies + tuning_count * HZ);
}
/*
sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
+out_unlock:
spin_unlock_irqrestore(&host->lock, flags);
sdhci_runtime_pm_put(host);
{
struct sdhci_host *host = mmc_priv(mmc);
unsigned long flags;
+ int present;
/* First check if client has provided their own card event */
if (host->ops->card_event)
host->ops->card_event(host);
+ present = sdhci_do_get_cd(host);
+
spin_lock_irqsave(&host->lock, flags);
/* Check host->mrq first in case we are runtime suspended */
- if (host->mrq && !sdhci_do_get_cd(host)) {
+ if (host->mrq && !present) {
pr_err("%s: Card removed during transfer!\n",
mmc_hostname(host->mmc));
pr_err("%s: Resetting controller.\n",
.hw_reset = sdhci_hw_reset,
.enable_sdio_irq = sdhci_enable_sdio_irq,
.start_signal_voltage_switch = sdhci_start_signal_voltage_switch,
+ .prepare_hs400_tuning = sdhci_prepare_hs400_tuning,
.execute_tuning = sdhci_execute_tuning,
.card_event = sdhci_card_event,
.card_busy = sdhci_card_busy,
mmc->max_segs = SDHCI_MAX_SEGS;
/*
- * Maximum number of sectors in one transfer. Limited by DMA boundary
- * size (512KiB).
+ * Maximum number of sectors in one transfer. Limited by SDMA boundary
+	 * size (512KiB). Note that some tuning modes impose a 4MiB limit, but
+	 * this value is below that limit anyway.
*/
mmc->max_req_size = 524288;
NETIF_F_HIGHDMA | NETIF_F_LRO)
#define BOND_ENC_FEATURES (NETIF_F_ALL_CSUM | NETIF_F_SG | NETIF_F_RXCSUM |\
- NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL)
+ NETIF_F_TSO)
static void bond_compute_features(struct bonding *bond)
{
done:
bond_dev->vlan_features = vlan_features;
- bond_dev->hw_enc_features = enc_features;
+ bond_dev->hw_enc_features = enc_features | NETIF_F_GSO_ENCAP_ALL;
bond_dev->hard_header_len = max_hard_header_len;
bond_dev->gso_max_segs = gso_max_segs;
netif_set_gso_max_size(bond_dev, gso_max_size);
NETIF_F_HW_VLAN_CTAG_FILTER;
bond_dev->hw_features &= ~(NETIF_F_ALL_CSUM & ~NETIF_F_HW_CSUM);
- bond_dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+ bond_dev->hw_features |= NETIF_F_GSO_ENCAP_ALL;
bond_dev->features |= bond_dev->hw_features;
}
{ NULL, -1, 0}
};
-static const struct bond_option bond_opts[] = {
+static const struct bond_option bond_opts[BOND_OPT_LAST] = {
[BOND_OPT_MODE] = {
.id = BOND_OPT_MODE,
.name = "mode",
.values = bond_tlb_dynamic_lb_tbl,
.flags = BOND_OPTFLAG_IFDOWN,
.set = bond_option_tlb_dynamic_lb_set,
- },
- { }
+ }
};
/* Searches for an option by name */
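Sizing the table as bond_opts[BOND_OPT_LAST] with designated initializers lets lookups index directly by option id instead of scanning to a sentinel entry; any id without an entry resolves to a zeroed slot that a lookup can detect. A small sketch of the layout, with hypothetical names:

	enum example_opt_id { EX_OPT_A, EX_OPT_B, EX_OPT_LAST };

	struct example_option {
		unsigned int id;
		const char *name;
	};

	static const struct example_option ex_opts[EX_OPT_LAST] = {
		[EX_OPT_A] = { .id = EX_OPT_A, .name = "a" },
		[EX_OPT_B] = { .id = EX_OPT_B, .name = "b" },
	};

	static const struct example_option *ex_opt_get(unsigned int id)
	{
		if (id >= EX_OPT_LAST || !ex_opts[id].name)
			return NULL;	/* out of range or unpopulated slot */
		return &ex_opts[id];
	}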
struct at91_priv {
struct can_priv can; /* must be the first member! */
- struct net_device *dev;
struct napi_struct napi;
void __iomem *reg_base;
priv->can.do_get_berr_counter = at91_get_berr_counter;
priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES |
CAN_CTRLMODE_LISTENONLY;
- priv->dev = dev;
priv->reg_base = addr;
priv->devtype_data = *devtype_data;
priv->clk = clk;
netdev_dbg(dev, "bus-off mode interrupt\n");
state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ priv->can.can_stats.bus_off++;
can_bus_off(dev);
}
case C_CAN_BUS_OFF:
/* bus-off state */
priv->can.state = CAN_STATE_BUS_OFF;
- can_bus_off(dev);
+ priv->can.can_stats.bus_off++;
break;
default:
break;
cc770_write_reg(priv, control, CTRL_INI);
cf->can_id |= CAN_ERR_BUSOFF;
priv->can.state = CAN_STATE_BUS_OFF;
+ priv->can.can_stats.bus_off++;
can_bus_off(dev);
} else if (status & STAT_WARN) {
cf->can_id |= CAN_ERR_CRTL;
priv->can_stats.error_passive++;
break;
case CAN_STATE_BUS_OFF:
+ priv->can_stats.bus_off++;
+ break;
default:
break;
- };
+ }
}
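This hunk and the CAN driver hunks that follow move bus_off accounting to the point where CAN_STATE_BUS_OFF is actually entered: the increment leaves the generic restart path, and each driver (as well as the state-change helper above) bumps the counter alongside its can_bus_off() call, so every transition is counted. The recurring shape, sketched with a hypothetical status flag:

	if (status & EXAMPLE_STATUS_BUS_OFF) {
		cf->can_id |= CAN_ERR_BUSOFF;
		priv->can.state = CAN_STATE_BUS_OFF;
		priv->can.can_stats.bus_off++;	/* count the transition here */
		can_bus_off(dev);		/* then run the common handling */
	}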
static int can_tx_state_to_frame(struct net_device *dev, enum can_state state)
netdev_dbg(dev, "bus-off\n");
netif_carrier_off(dev);
- priv->can_stats.bus_off++;
if (priv->restart_ms)
mod_timer(&priv->restart_timer,
struct flexcan_priv {
struct can_priv can;
- struct net_device *dev;
struct napi_struct napi;
void __iomem *base;
CAN_CTRLMODE_LISTENONLY | CAN_CTRLMODE_3_SAMPLES |
CAN_CTRLMODE_BERR_REPORTING;
priv->base = base;
- priv->dev = dev;
priv->clk_ipg = clk_ipg;
priv->clk_per = clk_per;
priv->pdata = dev_get_platdata(&pdev->dev);
if (status & SR_BS) {
state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ mod->can.can_stats.bus_off++;
can_bus_off(dev);
} else if (status & SR_ES) {
if (rxerr >= 128 || txerr >= 128)
/* bus-off state */
priv->can.state = CAN_STATE_BUS_OFF;
m_can_disable_all_interrupts(priv);
+ priv->can.can_stats.bus_off++;
can_bus_off(dev);
break;
default:
pch_can_set_rx_all(priv, 0);
state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ priv->can.can_stats.bus_off++;
can_bus_off(ndev);
}
priv->can.state = CAN_STATE_BUS_OFF;
/* Clear interrupt condition */
writeb(~RCAR_CAN_EIFR_BOEIF, &priv->regs->eifr);
+ priv->can.can_stats.bus_off++;
can_bus_off(ndev);
if (skb)
cf->can_id |= CAN_ERR_BUSOFF;
++priv->can.can_stats.error_passive;
else if (can_state == CAN_STATE_BUS_OFF) {
/* this calls can_close_cleanup() */
+ ++priv->can.can_stats.bus_off;
can_bus_off(netdev);
netif_stop_queue(netdev);
}
if (priv->can.state == CAN_STATE_BUS_OFF) {
if (priv->can.restart_ms == 0) {
priv->force_quit = 1;
+ priv->can.can_stats.bus_off++;
can_bus_off(net);
mcp251x_hw_sleep(spi);
break;
hecc_clear_bit(priv, HECC_CANMC, HECC_CANMC_CCR);
/* Disable all interrupts in bus-off to avoid int hog */
hecc_write(priv, HECC_CANGIM, 0);
+ ++priv->can.can_stats.bus_off;
can_bus_off(ndev);
}
dev->can.state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ dev->can.can_stats.bus_off++;
can_bus_off(dev->netdev);
} else if (state & SJA1000_SR_ES) {
dev->can.state = CAN_STATE_ERROR_WARNING;
case ESD_BUSSTATE_BUSOFF:
priv->can.state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ priv->can.can_stats.bus_off++;
can_bus_off(priv->netdev);
break;
case ESD_BUSSTATE_WARN:
switch (new_state) {
case CAN_STATE_BUS_OFF:
cf->can_id |= CAN_ERR_BUSOFF;
+ mc->pdev->dev.can.can_stats.bus_off++;
can_bus_off(mc->netdev);
break;
switch (new_state) {
case CAN_STATE_BUS_OFF:
can_frame->can_id |= CAN_ERR_BUSOFF;
+ dev->can.can_stats.bus_off++;
can_bus_off(netdev);
break;
case USB_8DEV_STATUSMSG_BUSOFF:
priv->can.state = CAN_STATE_BUS_OFF;
cf->can_id |= CAN_ERR_BUSOFF;
+ priv->can.can_stats.bus_off++;
can_bus_off(priv->netdev);
break;
case USB_8DEV_STATUSMSG_OVERRUN:
return 0;
}
+static void bcm_sf2_intr_disable(struct bcm_sf2_priv *priv)
+{
+ intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
+ intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
+ intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+ intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
+ intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
+ intrl2_1_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+}
+
static int bcm_sf2_sw_setup(struct dsa_switch *ds)
{
const char *reg_names[BCM_SF2_REGS_NUM] = BCM_SF2_REGS_NAME;
}
/* Disable all interrupts and request them */
- intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
- intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
- intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
- intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
- intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
- intrl2_1_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+ bcm_sf2_intr_disable(priv);
ret = request_irq(priv->irq0, bcm_sf2_switch_0_isr, 0,
"switch_0", priv);
struct bcm_sf2_priv *priv = ds_to_priv(ds);
unsigned int port;
- intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
- intrl2_0_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
- intrl2_0_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
- intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_MASK_SET);
- intrl2_1_writel(priv, 0xffffffff, INTRL2_CPU_CLEAR);
- intrl2_1_writel(priv, 0, INTRL2_CPU_MASK_CLEAR);
+ bcm_sf2_intr_disable(priv);
/* Disable all ports physically present including the IMP
* port, the other ones have already been disabled during
first_txd->processFlags |= TYPHOON_TX_PF_IP_CHKSUM;
}
- if(vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
first_txd->processFlags |=
TYPHOON_TX_PF_INSERT_VLAN | TYPHOON_TX_PF_VLAN_PRIORITY;
first_txd->processFlags |=
- cpu_to_le32(htons(vlan_tx_tag_get(skb)) <<
+ cpu_to_le32(htons(skb_vlan_tag_get(skb)) <<
TYPHOON_TX_PF_VLAN_TAG_SHIFT);
}
}
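The vlan_tx_tag_* to skb_vlan_tag_* conversions here and in the driver hunks below are a mechanical rename of the accessors for the out-of-band (hardware-accelerated) VLAN tag; the semantics are unchanged. The renamed pair in use, as a minimal sketch:

	static u16 example_tx_tag(const struct sk_buff *skb)
	{
		if (skb_vlan_tag_present(skb))		/* accelerated tag attached? */
			return skb_vlan_tag_get(skb);	/* read it from the skb */
		return 0;
	}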
db->clk = devm_clk_get(&pdev->dev, NULL);
- if (IS_ERR(db->clk))
+ if (IS_ERR(db->clk)) {
+ ret = PTR_ERR(db->clk);
goto out;
+ }
clk_prepare_enable(db->clk);
flagsize = (skb->len << 16) | (BD_FLG_END);
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
flagsize |= BD_FLG_VLAN_TAG;
- vlan_tag = vlan_tx_tag_get(skb);
+ vlan_tag = skb_vlan_tag_get(skb);
}
desc = ap->tx_ring + idx;
idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);
flagsize = (skb_headlen(skb) << 16);
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
flagsize |= BD_FLG_VLAN_TAG;
- vlan_tag = vlan_tx_tag_get(skb);
+ vlan_tag = skb_vlan_tag_get(skb);
}
ace_load_tx_bd(ap, ap->tx_ring + idx, mapping, flagsize, vlan_tag);
init_error:
free_skbufs(dev);
alloc_skbuf_error:
- if (priv->phydev) {
- phy_disconnect(priv->phydev);
- priv->phydev = NULL;
- }
phy_error:
return ret;
}
int ret;
unsigned long int flags;
- /* Stop and disconnect the PHY */
- if (priv->phydev) {
+ /* Stop the PHY */
+ if (priv->phydev)
phy_stop(priv->phydev);
- phy_disconnect(priv->phydev);
- priv->phydev = NULL;
- }
netif_stop_queue(dev);
napi_disable(&priv->napi);
static int altera_tse_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
+ struct altera_tse_private *priv = netdev_priv(ndev);
+
+ if (priv->phydev)
+ phy_disconnect(priv->phydev);
platform_set_drvdata(pdev, NULL);
altera_tse_mdio_destroy(ndev);
config AMD_XGBE
tristate "AMD 10GbE Ethernet driver"
- depends on OF_NET && HAS_IOMEM
+ depends on (OF_NET || ACPI) && HAS_IOMEM
select PHYLIB
select AMD_XGBE_PHY
select BITREVERSE
lp->tx_ring[tx_index].tx_flags = 0;
#if AMD8111E_VLAN_TAG_USED
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
lp->tx_ring[tx_index].tag_ctrl_cmd |=
cpu_to_le16(TCC_VLAN_INSERT);
lp->tx_ring[tx_index].tag_ctrl_info =
- cpu_to_le16(vlan_tx_tag_get(skb));
+ cpu_to_le16(skb_vlan_tag_get(skb));
}
#endif
/*
* Check for loss of link and link establishment.
- * Can not use mii_check_media because it does nothing if mode is forced.
+ * Could possibly be changed to use mii_check_media instead.
*/
static void pcnet32_watchdog(struct net_device *dev)
buf = kasprintf(GFP_KERNEL, "amd-xgbe-%s", pdata->netdev->name);
pdata->xgbe_debugfs = debugfs_create_dir(buf, NULL);
- if (pdata->xgbe_debugfs == NULL) {
+ if (!pdata->xgbe_debugfs) {
netdev_err(pdata->netdev, "debugfs_create_dir failed\n");
return;
}
ring->cur = 0;
ring->dirty = 0;
- memset(&ring->rx, 0, sizeof(ring->rx));
hw_if->rx_desc_init(channel);
}
return 0;
}
-static void xgbe_realloc_rx_buffer(struct xgbe_channel *channel)
-{
- struct xgbe_prv_data *pdata = channel->pdata;
- struct xgbe_hw_if *hw_if = &pdata->hw_if;
- struct xgbe_ring *ring = channel->rx_ring;
- struct xgbe_ring_data *rdata;
- int i;
-
- DBGPR("-->xgbe_realloc_rx_buffer: rx_ring->rx.realloc_index = %u\n",
- ring->rx.realloc_index);
-
- for (i = 0; i < ring->dirty; i++) {
- rdata = XGBE_GET_DESC_DATA(ring, ring->rx.realloc_index);
-
- /* Reset rdata values */
- xgbe_unmap_rdata(pdata, rdata);
-
- if (xgbe_map_rx_buffer(pdata, ring, rdata))
- break;
-
- hw_if->rx_desc_reset(rdata);
-
- ring->rx.realloc_index++;
- }
- ring->dirty = 0;
-
- DBGPR("<--xgbe_realloc_rx_buffer\n");
-}
-
void xgbe_init_function_ptrs_desc(struct xgbe_desc_if *desc_if)
{
DBGPR("-->xgbe_init_function_ptrs_desc\n");
desc_if->alloc_ring_resources = xgbe_alloc_ring_resources;
desc_if->free_ring_resources = xgbe_free_ring_resources;
desc_if->map_tx_skb = xgbe_map_tx_skb;
- desc_if->realloc_rx_buffer = xgbe_realloc_rx_buffer;
+ desc_if->map_rx_buffer = xgbe_map_rx_buffer;
desc_if->unmap_rdata = xgbe_unmap_rdata;
desc_if->wrapper_tx_desc_init = xgbe_wrapper_tx_descriptor_init;
desc_if->wrapper_rx_desc_init = xgbe_wrapper_rx_descriptor_init;
*/
#include <linux/phy.h>
+#include <linux/mdio.h>
#include <linux/clk.h>
#include <linux/bitrev.h>
#include <linux/crc32.h>
DBGPR("-->xgbe_usec_to_riwt\n");
- rate = clk_get_rate(pdata->sysclk);
+ rate = pdata->sysclk_rate;
/*
* Convert the input usec value to the watchdog timer value. Each
DBGPR("-->xgbe_riwt_to_usec\n");
- rate = clk_get_rate(pdata->sysclk);
+ rate = pdata->sysclk_rate;
/*
* Convert the input watchdog timer value to the usec value. Each
static int xgbe_set_gmii_speed(struct xgbe_prv_data *pdata)
{
+ if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0x3)
+ return 0;
+
XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0x3);
return 0;
static int xgbe_set_gmii_2500_speed(struct xgbe_prv_data *pdata)
{
+ if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0x2)
+ return 0;
+
XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0x2);
return 0;
static int xgbe_set_xgmii_speed(struct xgbe_prv_data *pdata)
{
+ if (XGMAC_IOREAD_BITS(pdata, MAC_TCR, SS) == 0)
+ return 0;
+
XGMAC_IOWRITE_BITS(pdata, MAC_TCR, SS, 0);
return 0;
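All three speed-set hunks add the same guard: read the current field and return early when it already holds the target value, skipping an unnecessary register write and whatever reconfiguration it would trigger. The shape of the guard, with hypothetical accessors:

	static void example_set_speed(struct example *ex, u32 target)
	{
		if (example_read_field(ex) == target)
			return;			/* already at the requested speed */
		example_write_field(ex, target);
	}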
else
mmd_address = (pdata->mdio_mmd << 16) | (mmd_reg & 0xffff);
+ /* If the PCS is changing modes, match the MAC speed to it */
+ if (((mmd_address >> 16) == MDIO_MMD_PCS) &&
+ ((mmd_address & 0xffff) == MDIO_CTRL2)) {
+ struct phy_device *phydev = pdata->phydev;
+
+ if (mmd_data & MDIO_PCS_CTRL2_TYPE) {
+ /* KX mode */
+ if (phydev->supported & SUPPORTED_1000baseKX_Full)
+ xgbe_set_gmii_speed(pdata);
+ else
+ xgbe_set_gmii_2500_speed(pdata);
+ } else {
+ /* KR mode */
+ xgbe_set_xgmii_speed(pdata);
+ }
+ }
+
/* The PCS registers are accessed using mmio. The underlying APB3
* management interface uses indirect addressing to access the MMD
* register sets. This requires accessing of the PCS register in two
unsigned int tso_context, vlan_context;
unsigned int tx_set_ic;
int start_index = ring->cur;
+ int cur_index = ring->cur;
int i;
DBGPR("-->xgbe_dev_xmit\n");
else
tx_set_ic = 0;
- rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+ rdata = XGBE_GET_DESC_DATA(ring, cur_index);
rdesc = rdata->rdesc;
/* Create a context descriptor if this is a TSO packet */
ring->tx.cur_vlan_ctag = packet->vlan_ctag;
}
- ring->cur++;
- rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+ cur_index++;
+ rdata = XGBE_GET_DESC_DATA(ring, cur_index);
rdesc = rdata->rdesc;
}
XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, CTXT, 0);
/* Set OWN bit if not the first descriptor */
- if (ring->cur != start_index)
+ if (cur_index != start_index)
XGMAC_SET_BITS_LE(rdesc->desc3, TX_NORMAL_DESC3, OWN, 1);
if (tso) {
packet->length);
}
- for (i = ring->cur - start_index + 1; i < packet->rdesc_count; i++) {
- ring->cur++;
- rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+ for (i = cur_index - start_index + 1; i < packet->rdesc_count; i++) {
+ cur_index++;
+ rdata = XGBE_GET_DESC_DATA(ring, cur_index);
rdesc = rdata->rdesc;
/* Update buffer address */
/* Make sure ownership is written to the descriptor */
wmb();
- ring->cur++;
+ ring->cur = cur_index + 1;
if (!packet->skb->xmit_more ||
netif_xmit_stopped(netdev_get_tx_queue(pdata->netdev,
channel->queue_index)))
XGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
}
+static void xgbe_config_mac_speed(struct xgbe_prv_data *pdata)
+{
+ switch (pdata->phy_speed) {
+ case SPEED_10000:
+ xgbe_set_xgmii_speed(pdata);
+ break;
+
+ case SPEED_2500:
+ xgbe_set_gmii_2500_speed(pdata);
+ break;
+
+ case SPEED_1000:
+ xgbe_set_gmii_speed(pdata);
+ break;
+ }
+}
+
static void xgbe_config_checksum_offload(struct xgbe_prv_data *pdata)
{
if (pdata->netdev->features & NETIF_F_RXCSUM)
xgbe_config_mac_address(pdata);
xgbe_config_jumbo_enable(pdata);
xgbe_config_flow_control(pdata);
+ xgbe_config_mac_speed(pdata);
xgbe_config_checksum_offload(pdata);
xgbe_config_vlan_support(pdata);
xgbe_config_mmc(pdata);
return (ring->rdesc_count - (ring->cur - ring->dirty));
}
+static inline unsigned int xgbe_rx_dirty_desc(struct xgbe_ring *ring)
+{
+ return (ring->cur - ring->dirty);
+}
+
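The cur/dirty accounting that xgbe_rx_dirty_desc() and the free-descriptor helper above rely on works because both indices only ever increment and are masked down to a slot number when a descriptor is actually addressed; unsigned wraparound keeps the difference correct across the 2^32 boundary. A small standalone demonstration:

	#include <stdio.h>

	int main(void)
	{
		unsigned int cur = 0xfffffffeu;		/* about to wrap */
		unsigned int dirty = 0xfffffffcu;
		unsigned int ring_size = 512;		/* power of two */

		cur += 4;				/* wraps past zero */
		printf("dirty descriptors: %u\n", cur - dirty);	/* prints 6 */
		printf("slot for dirty: %u\n", dirty & (ring_size - 1));
		return 0;
	}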
static int xgbe_maybe_stop_tx_queue(struct xgbe_channel *channel,
struct xgbe_ring *ring, unsigned int count)
{
struct xgbe_channel *channel = container_of(timer,
struct xgbe_channel,
tx_timer);
- struct xgbe_ring *ring = channel->tx_ring;
struct xgbe_prv_data *pdata = channel->pdata;
struct napi_struct *napi;
- unsigned long flags;
DBGPR("-->xgbe_tx_timer\n");
napi = (pdata->per_channel_irq) ? &channel->napi : &pdata->napi;
- spin_lock_irqsave(&ring->lock, flags);
-
if (napi_schedule_prep(napi)) {
/* Disable Tx and Rx interrupts */
if (pdata->per_channel_irq)
channel->tx_timer_active = 0;
- spin_unlock_irqrestore(&ring->lock, flags);
-
DBGPR("<--xgbe_tx_timer\n");
return HRTIMER_NORESTART;
struct phy_device *phydev = pdata->phydev;
int new_state = 0;
- if (phydev == NULL)
+ if (!phydev)
return;
if (phydev->link) {
DBGPR("<--xgbe_stop\n");
}
-static void xgbe_restart_dev(struct xgbe_prv_data *pdata, unsigned int reset)
+static void xgbe_restart_dev(struct xgbe_prv_data *pdata)
{
struct xgbe_channel *channel;
struct xgbe_hw_if *hw_if = &pdata->hw_if;
xgbe_free_tx_data(pdata);
xgbe_free_rx_data(pdata);
- /* Issue software reset to device if requested */
- if (reset)
- hw_if->exit(pdata);
+ /* Issue software reset to device */
+ hw_if->exit(pdata);
xgbe_start(pdata);
rtnl_lock();
- xgbe_restart_dev(pdata, 1);
+ xgbe_restart_dev(pdata);
rtnl_unlock();
}
static void xgbe_prep_vlan(struct sk_buff *skb, struct xgbe_packet_data *packet)
{
- if (vlan_tx_tag_present(skb))
- packet->vlan_ctag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb))
+ packet->vlan_ctag = skb_vlan_tag_get(skb);
}
static int xgbe_prep_tso(struct sk_buff *skb, struct xgbe_packet_data *packet)
XGMAC_SET_BITS(packet->attributes, TX_PACKET_ATTRIBUTES,
CSUM_ENABLE, 1);
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
/* VLAN requires an extra descriptor if tag is different */
- if (vlan_tx_tag_get(skb) != ring->tx.cur_vlan_ctag)
+ if (skb_vlan_tag_get(skb) != ring->tx.cur_vlan_ctag)
/* We can share with the TSO context descriptor */
if (!context_desc) {
context_desc = 1;
struct xgbe_ring *ring;
struct xgbe_packet_data *packet;
struct netdev_queue *txq;
- unsigned long flags;
int ret;
DBGPR("-->xgbe_xmit: skb->len = %d\n", skb->len);
ret = NETDEV_TX_OK;
- spin_lock_irqsave(&ring->lock, flags);
-
if (skb->len == 0) {
netdev_err(netdev, "empty skb received from stack\n");
dev_kfree_skb_any(skb);
ret = NETDEV_TX_OK;
tx_netdev_return:
- spin_unlock_irqrestore(&ring->lock, flags);
-
- DBGPR("<--xgbe_xmit\n");
-
return ret;
}
pdata->rx_buf_size = ret;
netdev->mtu = mtu;
- xgbe_restart_dev(pdata, 0);
+ xgbe_restart_dev(pdata);
DBGPR("<--xgbe_change_mtu\n");
static void xgbe_rx_refresh(struct xgbe_channel *channel)
{
struct xgbe_prv_data *pdata = channel->pdata;
+ struct xgbe_hw_if *hw_if = &pdata->hw_if;
struct xgbe_desc_if *desc_if = &pdata->desc_if;
struct xgbe_ring *ring = channel->rx_ring;
struct xgbe_ring_data *rdata;
- desc_if->realloc_rx_buffer(channel);
+ while (ring->dirty != ring->cur) {
+ rdata = XGBE_GET_DESC_DATA(ring, ring->dirty);
+
+ /* Reset rdata values */
+ desc_if->unmap_rdata(pdata, rdata);
+
+ if (desc_if->map_rx_buffer(pdata, ring, rdata))
+ break;
+
+ hw_if->rx_desc_reset(rdata);
+
+ ring->dirty++;
+ }
/* Update the Rx Tail Pointer Register with address of
* the last cleaned entry */
- rdata = XGBE_GET_DESC_DATA(ring, ring->rx.realloc_index - 1);
+ rdata = XGBE_GET_DESC_DATA(ring, ring->dirty - 1);
XGMAC_DMA_IOWRITE(channel, DMA_CH_RDTR_LO,
lower_32_bits(rdata->rdesc_dma));
}
struct xgbe_ring_desc *rdesc;
struct net_device *netdev = pdata->netdev;
struct netdev_queue *txq;
- unsigned long flags;
int processed = 0;
unsigned int tx_packets = 0, tx_bytes = 0;
txq = netdev_get_tx_queue(netdev, channel->queue_index);
- spin_lock_irqsave(&ring->lock, flags);
-
while ((processed < XGBE_TX_DESC_MAX_PROC) &&
(ring->dirty != ring->cur)) {
rdata = XGBE_GET_DESC_DATA(ring, ring->dirty);
}
if (!processed)
- goto unlock;
+ return 0;
netdev_tx_completed_queue(txq, tx_packets, tx_bytes);
DBGPR("<--xgbe_tx_poll: processed=%d\n", processed);
-unlock:
- spin_unlock_irqrestore(&ring->lock, flags);
-
return processed;
}
read_again:
rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
- if (ring->dirty > (XGBE_RX_DESC_CNT >> 3))
+ if (xgbe_rx_dirty_desc(ring) > (XGBE_RX_DESC_CNT >> 3))
xgbe_rx_refresh(channel);
if (hw_if->dev_read(channel))
received++;
ring->cur++;
- ring->dirty++;
incomplete = XGMAC_GET_BITS(packet->attributes,
RX_PACKET_ATTRIBUTES,
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_net.h>
+#include <linux/of_address.h>
#include <linux/clk.h>
+#include <linux/property.h>
+#include <linux/acpi.h>
#include "xgbe.h"
#include "xgbe-common.h"
pdata->pause_autoneg = 1;
pdata->tx_pause = 1;
pdata->rx_pause = 1;
+ pdata->phy_speed = SPEED_UNKNOWN;
pdata->power_down = 0;
pdata->default_autoneg = AUTONEG_ENABLE;
pdata->default_speed = SPEED_10000;
xgbe_init_function_ptrs_desc(&pdata->desc_if);
}
+#ifdef CONFIG_ACPI
+static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
+{
+ struct acpi_device *adev = pdata->adev;
+ struct device *dev = pdata->dev;
+ u32 property;
+ acpi_handle handle;
+ acpi_status status;
+ unsigned long long data;
+ int cca;
+ int ret;
+
+ /* Obtain the system clock setting */
+ ret = device_property_read_u32(dev, XGBE_ACPI_DMA_FREQ, &property);
+ if (ret) {
+ dev_err(dev, "unable to obtain %s property\n",
+ XGBE_ACPI_DMA_FREQ);
+ return ret;
+ }
+ pdata->sysclk_rate = property;
+
+ /* Obtain the PTP clock setting */
+ ret = device_property_read_u32(dev, XGBE_ACPI_PTP_FREQ, &property);
+ if (ret) {
+ dev_err(dev, "unable to obtain %s property\n",
+ XGBE_ACPI_PTP_FREQ);
+ return ret;
+ }
+ pdata->ptpclk_rate = property;
+
+ /* Retrieve the device cache coherency value */
+ handle = adev->handle;
+ do {
+ status = acpi_evaluate_integer(handle, "_CCA", NULL, &data);
+ if (!ACPI_FAILURE(status)) {
+ cca = data;
+ break;
+ }
+
+ status = acpi_get_parent(handle, &handle);
+ } while (!ACPI_FAILURE(status));
+
+ if (ACPI_FAILURE(status)) {
+ dev_err(dev, "error obtaining acpi coherency value\n");
+ return -EINVAL;
+ }
+ pdata->coherent = !!cca;
+
+ return 0;
+}
+#else /* CONFIG_ACPI */
+static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
+{
+ return -EINVAL;
+}
+#endif /* CONFIG_ACPI */
+
+#ifdef CONFIG_OF
+static int xgbe_of_support(struct xgbe_prv_data *pdata)
+{
+ struct device *dev = pdata->dev;
+
+ /* Obtain the system clock setting */
+ pdata->sysclk = devm_clk_get(dev, XGBE_DMA_CLOCK);
+ if (IS_ERR(pdata->sysclk)) {
+ dev_err(dev, "dma devm_clk_get failed\n");
+ return PTR_ERR(pdata->sysclk);
+ }
+ pdata->sysclk_rate = clk_get_rate(pdata->sysclk);
+
+ /* Obtain the PTP clock setting */
+ pdata->ptpclk = devm_clk_get(dev, XGBE_PTP_CLOCK);
+ if (IS_ERR(pdata->ptpclk)) {
+ dev_err(dev, "ptp devm_clk_get failed\n");
+ return PTR_ERR(pdata->ptpclk);
+ }
+ pdata->ptpclk_rate = clk_get_rate(pdata->ptpclk);
+
+ /* Retrieve the device cache coherency value */
+ pdata->coherent = of_dma_is_coherent(dev->of_node);
+
+ return 0;
+}
+#else /* CONFIG_OF */
+static int xgbe_of_support(struct xgbe_prv_data *pdata)
+{
+ return -EINVAL;
+}
+#endif /* CONFIG_OF */
+
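The paired #ifdef blocks above follow the standard stub pattern: each firmware-specific helper gets a fallback that simply returns -EINVAL when its subsystem is compiled out, so the probe path can dispatch unconditionally, equivalent to:

	/* the stubs compile away to -EINVAL, so no #ifdefs are needed here */
	ret = pdata->use_acpi ? xgbe_acpi_support(pdata)
			      : xgbe_of_support(pdata);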
static int xgbe_probe(struct platform_device *pdev)
{
struct xgbe_prv_data *pdata;
struct net_device *netdev;
struct device *dev = &pdev->dev;
struct resource *res;
- const u8 *mac_addr;
+ const char *phy_mode;
unsigned int i;
int ret;
pdata = netdev_priv(netdev);
pdata->netdev = netdev;
pdata->pdev = pdev;
+ pdata->adev = ACPI_COMPANION(dev);
pdata->dev = dev;
platform_set_drvdata(pdev, netdev);
mutex_init(&pdata->rss_mutex);
spin_lock_init(&pdata->tstamp_lock);
+ /* Check if we should use ACPI or DT */
+ pdata->use_acpi = (!pdata->adev || acpi_disabled) ? 0 : 1;
+
/* Set and validate the number of descriptors for a ring */
BUILD_BUG_ON_NOT_POWER_OF_2(XGBE_TX_DESC_CNT);
pdata->tx_desc_count = XGBE_TX_DESC_CNT;
goto err_io;
}
- /* Obtain the system clock setting */
- pdata->sysclk = devm_clk_get(dev, XGBE_DMA_CLOCK);
- if (IS_ERR(pdata->sysclk)) {
- dev_err(dev, "dma devm_clk_get failed\n");
- ret = PTR_ERR(pdata->sysclk);
- goto err_io;
- }
-
- /* Obtain the PTP clock setting */
- pdata->ptpclk = devm_clk_get(dev, XGBE_PTP_CLOCK);
- if (IS_ERR(pdata->ptpclk)) {
- dev_err(dev, "ptp devm_clk_get failed\n");
- ret = PTR_ERR(pdata->ptpclk);
- goto err_io;
- }
-
/* Obtain the mmio areas for the device */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
pdata->xgmac_regs = devm_ioremap_resource(dev, res);
}
DBGPR(" xpcs_regs = %p\n", pdata->xpcs_regs);
- /* Set the DMA mask */
- if (!dev->dma_mask)
- dev->dma_mask = &dev->coherent_dma_mask;
- ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
- if (ret) {
- dev_err(dev, "dma_set_mask_and_coherent failed\n");
+ /* Retrieve the MAC address */
+ ret = device_property_read_u8_array(dev, XGBE_MAC_ADDR_PROPERTY,
+ pdata->mac_addr,
+ sizeof(pdata->mac_addr));
+ if (ret || !is_valid_ether_addr(pdata->mac_addr)) {
+ dev_err(dev, "invalid %s property\n", XGBE_MAC_ADDR_PROPERTY);
+ if (!ret)
+ ret = -EINVAL;
goto err_io;
}
- if (of_property_read_bool(dev->of_node, "dma-coherent")) {
+ /* Retrieve the PHY mode - it must be "xgmii" */
+ ret = device_property_read_string(dev, XGBE_PHY_MODE_PROPERTY,
+ &phy_mode);
+ if (ret || strcmp(phy_mode, phy_modes(PHY_INTERFACE_MODE_XGMII))) {
+ dev_err(dev, "invalid %s property\n", XGBE_PHY_MODE_PROPERTY);
+ if (!ret)
+ ret = -EINVAL;
+ goto err_io;
+ }
+ pdata->phy_mode = PHY_INTERFACE_MODE_XGMII;
+
+ /* Check for per channel interrupt support */
+ if (device_property_present(dev, XGBE_DMA_IRQS_PROPERTY))
+ pdata->per_channel_irq = 1;
+
+ /* Obtain device settings unique to ACPI/OF */
+ if (pdata->use_acpi)
+ ret = xgbe_acpi_support(pdata);
+ else
+ ret = xgbe_of_support(pdata);
+ if (ret)
+ goto err_io;
+
+ /* Set the DMA coherency values */
+ if (pdata->coherent) {
pdata->axdomain = XGBE_DMA_OS_AXDOMAIN;
pdata->arcache = XGBE_DMA_OS_ARCACHE;
pdata->awcache = XGBE_DMA_OS_AWCACHE;
pdata->awcache = XGBE_DMA_SYS_AWCACHE;
}
- /* Check for per channel interrupt support */
- if (of_property_read_bool(dev->of_node, XGBE_DMA_IRQS))
- pdata->per_channel_irq = 1;
+ /* Set the DMA mask */
+ if (!dev->dma_mask)
+ dev->dma_mask = &dev->coherent_dma_mask;
+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(40));
+ if (ret) {
+ dev_err(dev, "dma_set_mask_and_coherent failed\n");
+ goto err_io;
+ }
+ /* Get the device interrupt */
ret = platform_get_irq(pdev, 0);
if (ret < 0) {
dev_err(dev, "platform_get_irq 0 failed\n");
netdev->irq = pdata->dev_irq;
netdev->base_addr = (unsigned long)pdata->xgmac_regs;
+ memcpy(netdev->dev_addr, pdata->mac_addr, netdev->addr_len);
/* Set all the function pointers */
xgbe_init_all_fptrs(pdata);
/* Populate the hardware features */
xgbe_get_all_hw_features(pdata);
- /* Retrieve the MAC address */
- mac_addr = of_get_mac_address(dev->of_node);
- if (!mac_addr) {
- dev_err(dev, "invalid mac address for this device\n");
- ret = -EINVAL;
- goto err_io;
- }
- memcpy(netdev->dev_addr, mac_addr, netdev->addr_len);
-
- /* Retrieve the PHY mode - it must be "xgmii" */
- pdata->phy_mode = of_get_phy_mode(dev->of_node);
- if (pdata->phy_mode != PHY_INTERFACE_MODE_XGMII) {
- dev_err(dev, "invalid phy-mode specified for this device\n");
- ret = -EINVAL;
- goto err_io;
- }
-
/* Set default configuration data */
xgbe_default_config(pdata);
}
#endif /* CONFIG_PM */
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgbe_acpi_match[] = {
+ { "AMDI8001", 0 },
+ {},
+};
+
+MODULE_DEVICE_TABLE(acpi, xgbe_acpi_match);
+#endif
+
+#ifdef CONFIG_OF
static const struct of_device_id xgbe_of_match[] = {
{ .compatible = "amd,xgbe-seattle-v1a", },
{},
};
MODULE_DEVICE_TABLE(of, xgbe_of_match);
+#endif
+
static SIMPLE_DEV_PM_OPS(xgbe_pm_ops, xgbe_suspend, xgbe_resume);
static struct platform_driver xgbe_driver = {
.driver = {
.name = "amd-xgbe",
+#ifdef CONFIG_ACPI
+ .acpi_match_table = xgbe_acpi_match,
+#endif
+#ifdef CONFIG_OF
.of_match_table = xgbe_of_match,
+#endif
.pm = &xgbe_pm_ops,
},
.probe = xgbe_probe,
int xgbe_mdio_register(struct xgbe_prv_data *pdata)
{
- struct device_node *phy_node;
struct mii_bus *mii;
struct phy_device *phydev;
int ret = 0;
DBGPR("-->xgbe_mdio_register\n");
- /* Retrieve the phy-handle */
- phy_node = of_parse_phandle(pdata->dev->of_node, "phy-handle", 0);
- if (!phy_node) {
- dev_err(pdata->dev, "unable to parse phy-handle\n");
- return -EINVAL;
- }
-
mii = mdiobus_alloc();
- if (mii == NULL) {
+ if (!mii) {
dev_err(pdata->dev, "mdiobus_alloc failed\n");
- ret = -ENOMEM;
- goto err_node_get;
+ return -ENOMEM;
}
/* Register on the MDIO bus (don't probe any PHYs) */
request_module(MDIO_MODULE_PREFIX MDIO_ID_FMT,
MDIO_ID_ARGS(phydev->c45_ids.device_ids[MDIO_MMD_PCS]));
- of_node_get(phy_node);
- phydev->dev.of_node = phy_node;
ret = phy_device_register(phydev);
if (ret) {
dev_err(pdata->dev, "phy_device_register failed\n");
- of_node_put(phy_node);
+ goto err_phy_device;
+ }
+ if (!phydev->dev.driver) {
+ dev_err(pdata->dev, "phy driver probe failed\n");
+ ret = -EIO;
goto err_phy_device;
}
/* Add a reference to the PHY driver so it can't be unloaded */
- pdata->phy_module = phydev->dev.driver ?
- phydev->dev.driver->owner : NULL;
+ pdata->phy_module = phydev->dev.driver->owner;
if (!try_module_get(pdata->phy_module)) {
dev_err(pdata->dev, "try_module_get failed\n");
ret = -EIO;
pdata->phydev = phydev;
- of_node_put(phy_node);
-
DBGPHY_REGS(pdata);
DBGPR("<--xgbe_mdio_register\n");
err_mdiobus_alloc:
mdiobus_free(mii);
-err_node_get:
- of_node_put(phy_node);
-
return ret;
}
snprintf(info->name, sizeof(info->name), "%s",
netdev_name(pdata->netdev));
info->owner = THIS_MODULE;
- info->max_adj = clk_get_rate(pdata->ptpclk);
+ info->max_adj = pdata->ptpclk_rate;
info->adjfreq = xgbe_adjfreq;
info->adjtime = xgbe_adjtime;
info->gettime = xgbe_gettime;
*/
dividend = 50000000;
dividend <<= 32;
- pdata->tstamp_addend = div_u64(dividend, clk_get_rate(pdata->ptpclk));
+ pdata->tstamp_addend = div_u64(dividend, pdata->ptpclk_rate);
/* Setup the timecounter */
cc->read = xgbe_cc_read;
#define XGBE_PHY_NAME "amd_xgbe_phy"
#define XGBE_PRTAD 0
+/* Common property names */
+#define XGBE_MAC_ADDR_PROPERTY "mac-address"
+#define XGBE_PHY_MODE_PROPERTY "phy-mode"
+#define XGBE_DMA_IRQS_PROPERTY "amd,per-channel-interrupt"
+
/* Device-tree clock names */
#define XGBE_DMA_CLOCK "dma_clk"
#define XGBE_PTP_CLOCK "ptp_clk"
-#define XGBE_DMA_IRQS "amd,per-channel-interrupt"
+
+/* ACPI property names */
+#define XGBE_ACPI_DMA_FREQ "amd,dma-freq"
+#define XGBE_ACPI_PTP_FREQ "amd,ptp-freq"
/* Timestamp support - values based on 50MHz PTP clock
* 50MHz => 20 nsec
* cur - Tx: index of descriptor to be used for current transfer
* Rx: index of descriptor to check for packet availability
* dirty - Tx: index of descriptor to check for transfer complete
- * Rx: count of descriptors in which a packet has been received
- * (used with skb_realloc_index to refresh the ring)
+ * Rx: index of descriptor to check for buffer reallocation
*/
unsigned int cur;
unsigned int dirty;
unsigned short cur_mss;
unsigned short cur_vlan_ctag;
} tx;
-
- struct {
- unsigned int realloc_index;
- unsigned int realloc_threshold;
- } rx;
};
} ____cacheline_aligned;
int (*alloc_ring_resources)(struct xgbe_prv_data *);
void (*free_ring_resources)(struct xgbe_prv_data *);
int (*map_tx_skb)(struct xgbe_channel *, struct sk_buff *);
- void (*realloc_rx_buffer)(struct xgbe_channel *);
+ int (*map_rx_buffer)(struct xgbe_prv_data *, struct xgbe_ring *,
+ struct xgbe_ring_data *);
void (*unmap_rdata)(struct xgbe_prv_data *, struct xgbe_ring_data *);
void (*wrapper_tx_desc_init)(struct xgbe_prv_data *);
void (*wrapper_rx_desc_init)(struct xgbe_prv_data *);
struct xgbe_prv_data {
struct net_device *netdev;
struct platform_device *pdev;
+ struct acpi_device *adev;
struct device *dev;
+ /* ACPI or DT flag */
+ unsigned int use_acpi;
+
/* XGMAC/XPCS related mmio registers */
void __iomem *xgmac_regs; /* XGMAC CSRs */
void __iomem *xpcs_regs; /* XPCS MMD registers */
struct xgbe_desc_if desc_if;
/* AXI DMA settings */
+ unsigned int coherent;
unsigned int axdomain;
unsigned int arcache;
unsigned int awcache;
unsigned int phy_rx_pause;
/* Netdev related settings */
+ unsigned char mac_addr[ETH_ALEN];
netdev_features_t netdev_features;
struct napi_struct napi;
struct xgbe_mmc_stats mmc_stats;
/* Device clocks */
struct clk *sysclk;
+ unsigned long sysclk_rate;
struct clk *ptpclk;
+ unsigned long ptpclk_rate;
/* Timestamp support */
spinlock_t tstamp_lock;
if (!xgene_ring_mgr_init(pdata))
return -ENODEV;
- clk_prepare_enable(pdata->clk);
- clk_disable_unprepare(pdata->clk);
- clk_prepare_enable(pdata->clk);
- xgene_enet_ecc_init(pdata);
+ if (!efi_enabled(EFI_BOOT)) {
+ clk_prepare_enable(pdata->clk);
+ clk_disable_unprepare(pdata->clk);
+ clk_prepare_enable(pdata->clk);
+ xgene_enet_ecc_init(pdata);
+ }
xgene_enet_config_ring_if_assoc(pdata);
/* Enable auto-incr for scanning */
struct phy_device *phy_dev;
struct device *dev = &pdata->pdev->dev;
- phy_np = of_parse_phandle(dev->of_node, "phy-handle", 0);
- if (!phy_np) {
- netdev_dbg(ndev, "No phy-handle found\n");
- return -ENODEV;
+ if (dev->of_node) {
+ phy_np = of_parse_phandle(dev->of_node, "phy-handle", 0);
+ if (!phy_np) {
+ netdev_dbg(ndev, "No phy-handle found in DT\n");
+ return -ENODEV;
+ }
+ pdata->phy_dev = of_phy_find_device(phy_np);
}
- phy_dev = of_phy_connect(ndev, phy_np, &xgene_enet_adjust_link,
- 0, pdata->phy_mode);
- if (!phy_dev) {
+ phy_dev = pdata->phy_dev;
+
+ if (!phy_dev ||
+ phy_connect_direct(ndev, phy_dev, &xgene_enet_adjust_link,
+ pdata->phy_mode)) {
netdev_err(ndev, "Could not connect to PHY\n");
return -ENODEV;
}
~SUPPORTED_100baseT_Half &
~SUPPORTED_1000baseT_Half;
phy_dev->advertising = phy_dev->supported;
- pdata->phy_dev = phy_dev;
return 0;
}
-int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata)
+static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
+ struct mii_bus *mdio)
{
- struct net_device *ndev = pdata->ndev;
struct device *dev = &pdata->pdev->dev;
+ struct net_device *ndev = pdata->ndev;
+ struct phy_device *phy;
struct device_node *child_np;
struct device_node *mdio_np = NULL;
- struct mii_bus *mdio_bus;
int ret;
+ u32 phy_id;
+
+ if (dev->of_node) {
+ for_each_child_of_node(dev->of_node, child_np) {
+ if (of_device_is_compatible(child_np,
+ "apm,xgene-mdio")) {
+ mdio_np = child_np;
+ break;
+ }
+ }
- for_each_child_of_node(dev->of_node, child_np) {
- if (of_device_is_compatible(child_np, "apm,xgene-mdio")) {
- mdio_np = child_np;
- break;
+ if (!mdio_np) {
+ netdev_dbg(ndev, "No mdio node in the dts\n");
+ return -ENXIO;
}
- }
- if (!mdio_np) {
- netdev_dbg(ndev, "No mdio node in the dts\n");
- return -ENXIO;
+ return of_mdiobus_register(mdio, mdio_np);
}
+ /* Mask out all PHYs from auto probing. */
+ mdio->phy_mask = ~0;
+
+ /* Register the MDIO bus */
+ ret = mdiobus_register(mdio);
+ if (ret)
+ return ret;
+
+ ret = device_property_read_u32(dev, "phy-channel", &phy_id);
+ if (ret)
+ ret = device_property_read_u32(dev, "phy-addr", &phy_id);
+ if (ret)
+ return -EINVAL;
+
+ phy = get_phy_device(mdio, phy_id, true);
+ if (!phy || IS_ERR(phy))
+ return -EIO;
+
+ ret = phy_device_register(phy);
+ if (ret)
+ phy_device_free(phy);
+ else
+ pdata->phy_dev = phy;
+
+ return ret;
+}
+
+int xgene_enet_mdio_config(struct xgene_enet_pdata *pdata)
+{
+ struct net_device *ndev = pdata->ndev;
+ struct mii_bus *mdio_bus;
+ int ret;
+
mdio_bus = mdiobus_alloc();
if (!mdio_bus)
return -ENOMEM;
mdio_bus->priv = pdata;
mdio_bus->parent = &ndev->dev;
- ret = of_mdiobus_register(mdio_bus, mdio_np);
+ ret = xgene_mdiobus_register(pdata, mdio_bus);
if (ret) {
netdev_err(ndev, "Failed to register MDIO bus\n");
mdiobus_free(mdio_bus);
#include "xgene_enet_sgmac.h"
#include "xgene_enet_xgmac.h"
+#define RES_ENET_CSR 0
+#define RES_RING_CSR 1
+#define RES_RING_CMD 2
+
static void xgene_enet_init_bufpool(struct xgene_enet_desc_ring *buf_pool)
{
struct xgene_enet_raw_desc16 *raw_desc;
.ndo_set_mac_address = xgene_enet_set_mac_address,
};
+static int xgene_get_mac_address(struct device *dev,
+ unsigned char *addr)
+{
+ int ret;
+
+ ret = device_property_read_u8_array(dev, "local-mac-address", addr, 6);
+ if (ret)
+ ret = device_property_read_u8_array(dev, "mac-address",
+ addr, 6);
+ if (ret)
+ return -ENODEV;
+
+ return ETH_ALEN;
+}
+
+static int xgene_get_phy_mode(struct device *dev)
+{
+ int i, ret;
+ char *modestr;
+
+ ret = device_property_read_string(dev, "phy-connection-type",
+ (const char **)&modestr);
+ if (ret)
+ ret = device_property_read_string(dev, "phy-mode",
+ (const char **)&modestr);
+ if (ret)
+ return -ENODEV;
+
+ for (i = 0; i < PHY_INTERFACE_MODE_MAX; i++) {
+ if (!strcasecmp(modestr, phy_modes(i)))
+ return i;
+ }
+ return -ENODEV;
+}
+
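Both helpers above use the firmware-agnostic device_property_read_*() accessors, which resolve a named property against the device's DT node or its ACPI _DSD data, whichever the platform provides, so the same driver code serves both firmware types. Minimal use, with a hypothetical property name:

	u32 channel;

	if (!device_property_read_u32(dev, "example,channel", &channel))
		example_use_channel(channel);	/* property present and read */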
static int xgene_enet_get_resources(struct xgene_enet_pdata *pdata)
{
struct platform_device *pdev;
struct device *dev;
struct resource *res;
void __iomem *base_addr;
- const char *mac;
int ret;
pdev = pdata->pdev;
dev = &pdev->dev;
ndev = pdata->ndev;
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "enet_csr");
- pdata->base_addr = devm_ioremap_resource(dev, res);
- if (IS_ERR(pdata->base_addr)) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, RES_ENET_CSR);
+ if (!res) {
+ dev_err(dev, "Resource enet_csr not defined\n");
+ return -ENODEV;
+ }
+ pdata->base_addr = devm_ioremap(dev, res->start, resource_size(res));
+ if (!pdata->base_addr) {
dev_err(dev, "Unable to retrieve ENET Port CSR region\n");
- return PTR_ERR(pdata->base_addr);
+ return -ENOMEM;
}
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_csr");
- pdata->ring_csr_addr = devm_ioremap_resource(dev, res);
- if (IS_ERR(pdata->ring_csr_addr)) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, RES_RING_CSR);
+ if (!res) {
+ dev_err(dev, "Resource ring_csr not defined\n");
+ return -ENODEV;
+ }
+ pdata->ring_csr_addr = devm_ioremap(dev, res->start,
+ resource_size(res));
+ if (!pdata->ring_csr_addr) {
dev_err(dev, "Unable to retrieve ENET Ring CSR region\n");
- return PTR_ERR(pdata->ring_csr_addr);
+ return -ENOMEM;
}
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ring_cmd");
- pdata->ring_cmd_addr = devm_ioremap_resource(dev, res);
- if (IS_ERR(pdata->ring_cmd_addr)) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, RES_RING_CMD);
+ if (!res) {
+ dev_err(dev, "Resource ring_cmd not defined\n");
+ return -ENODEV;
+ }
+ pdata->ring_cmd_addr = devm_ioremap(dev, res->start,
+ resource_size(res));
+ if (!pdata->ring_cmd_addr) {
dev_err(dev, "Unable to retrieve ENET Ring command region\n");
- return PTR_ERR(pdata->ring_cmd_addr);
+ return -ENOMEM;
}
ret = platform_get_irq(pdev, 0);
}
pdata->rx_irq = ret;
- mac = of_get_mac_address(dev->of_node);
- if (mac)
- memcpy(ndev->dev_addr, mac, ndev->addr_len);
- else
+ if (xgene_get_mac_address(dev, ndev->dev_addr) != ETH_ALEN)
eth_hw_addr_random(ndev);
+
memcpy(ndev->perm_addr, ndev->dev_addr, ndev->addr_len);
- pdata->phy_mode = of_get_phy_mode(pdev->dev.of_node);
+ pdata->phy_mode = xgene_get_phy_mode(dev);
if (pdata->phy_mode < 0) {
dev_err(dev, "Unable to get phy-connection-type\n");
return pdata->phy_mode;
}
pdata->clk = devm_clk_get(&pdev->dev, NULL);
- ret = IS_ERR(pdata->clk);
if (IS_ERR(pdata->clk)) {
- dev_err(&pdev->dev, "can't get clock\n");
- ret = PTR_ERR(pdata->clk);
- return ret;
+ /* Firmware may have set up the clock already. */
+ pdata->clk = NULL;
}
base_addr = pdata->base_addr;
goto err;
}
- ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ ret = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(64));
if (ret) {
netdev_err(ndev, "No usable DMA configuration\n");
goto err;
return 0;
}
-static struct of_device_id xgene_enet_match[] = {
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgene_enet_acpi_match[] = {
+ { "APMC0D05", },
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, xgene_enet_acpi_match);
+#endif
+
+static struct of_device_id xgene_enet_of_match[] = {
{.compatible = "apm,xgene-enet",},
{},
};
-MODULE_DEVICE_TABLE(of, xgene_enet_match);
+MODULE_DEVICE_TABLE(of, xgene_enet_of_match);
static struct platform_driver xgene_enet_driver = {
.driver = {
.name = "xgene-enet",
- .of_match_table = xgene_enet_match,
+ .of_match_table = of_match_ptr(xgene_enet_of_match),
+ .acpi_match_table = ACPI_PTR(xgene_enet_acpi_match),
},
.probe = xgene_enet_probe,
.remove = xgene_enet_remove,
#ifndef __XGENE_ENET_MAIN_H__
#define __XGENE_ENET_MAIN_H__
+#include <linux/acpi.h>
#include <linux/clk.h>
+#include <linux/efi.h>
+#include <linux/io.h>
#include <linux/of_platform.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
schedule_work(&alx->reset_wk);
}
-static bool alx_clean_rx_irq(struct alx_priv *alx, int budget)
+static int alx_clean_rx_irq(struct alx_priv *alx, int budget)
{
struct alx_rx_queue *rxq = &alx->rxq;
struct alx_rrd *rrd;
struct alx_buffer *rxb;
struct sk_buff *skb;
u16 length, rfd_cleaned = 0;
+ int work = 0;
- while (budget > 0) {
+ while (work < budget) {
rrd = &rxq->rrd[rxq->rrd_read_idx];
if (!(rrd->word3 & cpu_to_le32(1 << RRD_UPDATED_SHIFT)))
break;
ALX_GET_FIELD(le32_to_cpu(rrd->word0),
RRD_NOR) != 1) {
alx_schedule_reset(alx);
- return 0;
+ return work;
}
rxb = &rxq->bufs[rxq->read_idx];
}
napi_gro_receive(&alx->napi, skb);
- budget--;
+ work++;
next_pkt:
if (++rxq->read_idx == alx->rx_ringsz)
if (rfd_cleaned)
alx_refill_rx_ring(alx, GFP_ATOMIC);
- return budget > 0;
+ return work;
}
static int alx_poll(struct napi_struct *napi, int budget)
{
struct alx_priv *alx = container_of(napi, struct alx_priv, napi);
struct alx_hw *hw = &alx->hw;
- bool complete = true;
unsigned long flags;
+ bool tx_complete;
+ int work;
- complete = alx_clean_tx_irq(alx) &&
- alx_clean_rx_irq(alx, budget);
+ tx_complete = alx_clean_tx_irq(alx);
+ work = alx_clean_rx_irq(alx, budget);
- if (!complete)
- return 1;
+ if (!tx_complete || work == budget)
+ return budget;
napi_complete(&alx->napi);
alx_post_write(hw);
- return 0;
+ return work;
}
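This hunk aligns alx with the NAPI contract: poll() reports how many packets it consumed, returning the full budget signals "more work remains, poll me again", and completing NAPI is only legal when less than the budget was used. A minimal poll skeleton, with illustrative names:

	static int example_poll(struct napi_struct *napi, int budget)
	{
		struct example_priv *priv = container_of(napi, struct example_priv, napi);
		int work = example_clean_rx(priv, budget);	/* 0..budget */
		bool tx_done = example_clean_tx(priv);

		if (!tx_done || work == budget)
			return budget;		/* stay scheduled */

		napi_complete(&priv->napi);
		example_enable_irqs(priv);
		return work;
	}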
static irqreturn_t alx_intr_handle(struct alx_priv *alx, u32 intr)
return NETDEV_TX_OK;
}
- if (unlikely(vlan_tx_tag_present(skb))) {
- u16 vlan = vlan_tx_tag_get(skb);
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ u16 vlan = skb_vlan_tag_get(skb);
__le16 tag;
vlan = cpu_to_le16(vlan);
tpd = atl1e_get_tpd(adapter);
- if (vlan_tx_tag_present(skb)) {
- u16 vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ u16 vlan_tag = skb_vlan_tag_get(skb);
u16 atl1e_vlan_tag;
tpd->word3 |= 1 << TPD_INS_VL_TAG_SHIFT;
(u16) atomic_read(&tpd_ring->next_to_use));
memset(ptpd, 0, sizeof(struct tx_packet_desc));
- if (vlan_tx_tag_present(skb)) {
- vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ vlan_tag = skb_vlan_tag_get(skb);
vlan_tag = (vlan_tag << 4) | (vlan_tag >> 13) |
((vlan_tag >> 9) & 0x8);
ptpd->word3 |= 1 << TPD_INS_VL_TAG_SHIFT;
offset = ((u32)(skb->len-copy_len + 3) & ~3);
}
#ifdef NETIF_F_HW_VLAN_CTAG_TX
- if (vlan_tx_tag_present(skb)) {
- u16 vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ u16 vlan_tag = skb_vlan_tag_get(skb);
vlan_tag = (vlan_tag << 4) |
(vlan_tag >> 13) |
((vlan_tag >> 9) & 0x8);
vlan_tag_flags |= TX_BD_FLAGS_TCP_UDP_CKSUM;
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
vlan_tag_flags |=
- (TX_BD_FLAGS_VLAN_TAG | (vlan_tx_tag_get(skb) << 16));
+ (TX_BD_FLAGS_VLAN_TAG | (skb_vlan_tag_get(skb) << 16));
}
if ((mss = skb_shinfo(skb)->gso_size)) {
u32 link_config[LINK_CONFIG_SIZE];
u32 supported[LINK_CONFIG_SIZE];
-/* link settings - missing defines */
-#define SUPPORTED_2500baseX_Full (1 << 15)
u32 advertising[LINK_CONFIG_SIZE];
-/* link settings - missing defines */
-#define ADVERTISED_2500baseX_Full (1 << 15)
u32 phy_addr;
"sending pkt %u @%p next_idx %u bd %u @%p\n",
pkt_prod, tx_buf, txdata->tx_pkt_prod, bd_prod, tx_start_bd);
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_start_bd->vlan_or_ethertype =
- cpu_to_le16(vlan_tx_tag_get(skb));
+ cpu_to_le16(skb_vlan_tag_get(skb));
tx_start_bd->bd_flags.as_bitfield |=
(X_ETH_OUTBAND_VLAN << ETH_TX_BD_FLAGS_VLAN_MODE_SHIFT);
} else {
}
static void tg3_irq_quiesce(struct tg3 *tp)
+ __releases(tp->lock)
+ __acquires(tp->lock)
{
int i;
tp->irq_sync = 1;
smp_mb();
+ spin_unlock_bh(&tp->lock);
+
for (i = 0; i < tp->irq_cnt; i++)
synchronize_irq(tp->napi[i].irq_vec);
+
+ spin_lock_bh(&tp->lock);
}
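tg3_irq_quiesce() must drop tp->lock around synchronize_irq(), since the interrupt handlers it waits for may themselves take that lock and would otherwise deadlock; the __releases/__acquires annotations document the temporary unlock for sparse. The annotated shape, with hypothetical names:

	static void example_quiesce(struct example *ex)
		__releases(ex->lock)
		__acquires(ex->lock)
	{
		spin_unlock_bh(&ex->lock);
		synchronize_irq(ex->irq);	/* handler may take ex->lock */
		spin_lock_bh(&ex->lock);
	}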
/* Fully shutdown all tg3 driver activity elsewhere in the system.
!mss && skb->len > VLAN_ETH_FRAME_LEN)
base_flags |= TXD_FLAG_JMB_PKT;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
base_flags |= TXD_FLAG_VLAN;
- vlan = vlan_tx_tag_get(skb);
+ vlan = skb_vlan_tag_get(skb);
}
if ((unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) &&
/* tp->lock is held. */
static int tg3_chip_reset(struct tg3 *tp)
+ __releases(tp->lock)
+ __acquires(tp->lock)
{
u32 val;
void (*write_op)(struct tg3 *, u32, u32);
}
smp_mb();
+ tg3_full_unlock(tp);
+
for (i = 0; i < tp->irq_cnt; i++)
synchronize_irq(tp->napi[i].irq_vec);
+ tg3_full_lock(tp, 0);
+
if (tg3_asic_rev(tp) == ASIC_REV_57780) {
val = tr32(TG3_PCIE_LNKCTL) & ~TG3_PCIE_LNKCTL_L1_PLL_PD_EN;
tw32(TG3_PCIE_LNKCTL, val | TG3_PCIE_LNKCTL_L1_PLL_PD_DIS);
{
struct tg3 *tp = (struct tg3 *) __opaque;
- if (tp->irq_sync || tg3_flag(tp, RESET_TASK_PENDING))
- goto restart_timer;
-
spin_lock(&tp->lock);
+ if (tp->irq_sync || tg3_flag(tp, RESET_TASK_PENDING)) {
+ spin_unlock(&tp->lock);
+ goto restart_timer;
+ }
+
if (tg3_asic_rev(tp) == ASIC_REV_5717 ||
tg3_flag(tp, 57765_CLASS))
tg3_chk_missed_msi(tp);
struct tg3 *tp = container_of(work, struct tg3, reset_task);
int err;
+ rtnl_lock();
tg3_full_lock(tp, 0);
if (!netif_running(tp->dev)) {
tg3_flag_clear(tp, RESET_TASK_PENDING);
tg3_full_unlock(tp);
+ rtnl_unlock();
return;
}
tg3_phy_start(tp);
tg3_flag_clear(tp, RESET_TASK_PENDING);
+ rtnl_unlock();
}
static int tg3_request_irq(struct tg3 *tp, int irq_num)
tg3_flag_set(tp, INIT_COMPLETE);
tg3_enable_ints(tp);
- if (init)
- tg3_ptp_init(tp);
- else
- tg3_ptp_resume(tp);
-
+ tg3_ptp_resume(tp);
tg3_full_unlock(tp);
pci_set_power_state(tp->pdev, PCI_D3hot);
}
- if (tg3_flag(tp, PTP_CAPABLE)) {
- tp->ptp_clock = ptp_clock_register(&tp->ptp_info,
- &tp->pdev->dev);
- if (IS_ERR(tp->ptp_clock))
- tp->ptp_clock = NULL;
- }
-
return err;
}
return -EAGAIN;
}
- tg3_ptp_fini(tp);
-
tg3_stop(tp);
/* Clear stats across close / open calls */
goto err_out_apeunmap;
}
+ if (tg3_flag(tp, PTP_CAPABLE)) {
+ tg3_ptp_init(tp);
+ tp->ptp_clock = ptp_clock_register(&tp->ptp_info,
+ &tp->pdev->dev);
+ if (IS_ERR(tp->ptp_clock))
+ tp->ptp_clock = NULL;
+ }
+
netdev_info(dev, "Tigon3 [partno(%s) rev %04x] (%s) MAC address %pM\n",
tp->board_part_number,
tg3_chip_rev_id(tp),
if (dev) {
struct tg3 *tp = netdev_priv(dev);
+ tg3_ptp_fini(tp);
+
release_firmware(tp->fw);
tg3_reset_task_cancel(tp);
u32 gso_size;
u16 vlan_tag = 0;
- if (vlan_tx_tag_present(skb)) {
- vlan_tag = (u16)vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ vlan_tag = (u16)skb_vlan_tag_get(skb);
flags |= (BNA_TXQ_WI_CF_INS_PRIO | BNA_TXQ_WI_CF_INS_VLAN);
}
if (test_bit(BNAD_RF_CEE_RUNNING, &bnad->run_flags)) {
res = PTR_ERR(lp->pclk);
goto err_free_dev;
}
- clk_enable(lp->pclk);
+ clk_prepare_enable(lp->pclk);
lp->hclk = ERR_PTR(-ENOENT);
lp->tx_clk = ERR_PTR(-ENOENT);
err_out_unregister_netdev:
unregister_netdev(dev);
err_disable_clock:
- clk_disable(lp->pclk);
+ clk_disable_unprepare(lp->pclk);
err_free_dev:
free_netdev(dev);
return res;
kfree(lp->mii_bus->irq);
mdiobus_free(lp->mii_bus);
unregister_netdev(dev);
- clk_disable(lp->pclk);
+ clk_disable_unprepare(lp->pclk);
free_netdev(dev);
return 0;
netif_stop_queue(net_dev);
netif_device_detach(net_dev);
- clk_disable(lp->pclk);
+ clk_disable_unprepare(lp->pclk);
}
return 0;
}
struct macb *lp = netdev_priv(net_dev);
if (netif_running(net_dev)) {
- clk_enable(lp->pclk);
+ clk_prepare_enable(lp->pclk);
netif_device_attach(net_dev);
netif_start_queue(net_dev);
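These hunks replace bare clk_enable()/clk_disable() with the combined helpers: under the common clock framework a clock must be prepared (a step that may sleep) before it can be enabled, and clk_prepare_enable()/clk_disable_unprepare() keep the two steps paired. Typical use, assuming an illustrative clock name:

	static int example_start(struct device *dev)
	{
		struct clk *pclk;
		int ret;

		pclk = devm_clk_get(dev, "pclk");	/* "pclk" is illustrative */
		if (IS_ERR(pclk))
			return PTR_ERR(pclk);

		ret = clk_prepare_enable(pclk);		/* prepare (may sleep) + enable */
		if (ret)
			return ret;

		/* ... hardware in use ... */

		clk_disable_unprepare(pclk);		/* undo both steps */
		return 0;
	}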
for (j = 0; j < 6; j++) {
for (i = 0, bitval = 0; i < 8; i++)
- bitval ^= hash_bit_value(i*6 + j, addr);
+ bitval ^= hash_bit_value(i * 6 + j, addr);
hash_index |= (bitval << j);
}
static void gem_update_stats(struct macb *bp)
{
- u32 __iomem *reg = bp->regs + GEM_OTX;
+ int i;
u32 *p = &bp->hw_stats.gem.tx_octets_31_0;
- u32 *end = &bp->hw_stats.gem.rx_udp_checksum_errors + 1;
- for (; p < end; p++, reg++)
- *p += __raw_readl(reg);
+ for (i = 0; i < GEM_STATS_LEN; ++i, ++p) {
+ u32 offset = gem_statistics[i].offset;
+ u64 val = __raw_readl(bp->regs + offset);
+
+ bp->ethtool_stats[i] += val;
+ *p += val;
+
+ if (offset == GEM_OCTTXL || offset == GEM_OCTRXL) {
+ /* Add GEM_OCTTXH, GEM_OCTRXH */
+ val = __raw_readl(bp->regs + offset + 4);
+ bp->ethtool_stats[i] += ((u64)val) << 32;
+ *(++p) += val;
+ }
+ }
}
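gem_update_stats() widens the two octet counters by also reading the adjacent high-word register: the hardware splits each 64-bit count into a 32-bit low register immediately followed by its high half at offset +4. Combining the pair, sketched with an illustrative reader:

	static u64 example_read_octets(void __iomem *regs, unsigned int lo_off)
	{
		u64 lo = __raw_readl(regs + lo_off);
		u64 hi = __raw_readl(regs + lo_off + 4);	/* high word follows */

		return (hi << 32) | lo;
	}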
static struct net_device_stats *gem_get_stats(struct macb *bp)
return nstat;
}
+static void gem_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct macb *bp;
+
+ bp = netdev_priv(dev);
+ gem_update_stats(bp);
+ memcpy(data, &bp->ethtool_stats, sizeof(u64) * GEM_STATS_LEN);
+}
+
+static int gem_get_sset_count(struct net_device *dev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return GEM_STATS_LEN;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static void gem_get_ethtool_strings(struct net_device *dev, u32 sset, u8 *p)
+{
+ int i;
+
+ switch (sset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < GEM_STATS_LEN; i++, p += ETH_GSTRING_LEN)
+ memcpy(p, gem_statistics[i].stat_string,
+ ETH_GSTRING_LEN);
+ break;
+ }
+}
+
struct net_device_stats *macb_get_stats(struct net_device *dev)
{
struct macb *bp = netdev_priv(dev);
};
EXPORT_SYMBOL_GPL(macb_ethtool_ops);
+const struct ethtool_ops gem_ethtool_ops = {
+ .get_settings = macb_get_settings,
+ .set_settings = macb_set_settings,
+ .get_regs_len = macb_get_regs_len,
+ .get_regs = macb_get_regs,
+ .get_link = ethtool_op_get_link,
+ .get_ts_info = ethtool_op_get_ts_info,
+ .get_ethtool_stats = gem_get_ethtool_stats,
+ .get_strings = gem_get_ethtool_strings,
+ .get_sset_count = gem_get_sset_count,
+};
+EXPORT_SYMBOL_GPL(gem_ethtool_ops);
+
int macb_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
struct macb *bp = netdev_priv(dev);
dev->netdev_ops = &macb_netdev_ops;
netif_napi_add(dev, &bp->napi, macb_poll, 64);
- dev->ethtool_ops = &macb_ethtool_ops;
dev->base_addr = regs->start;
bp->macbgem_ops.mog_free_rx_buffers = gem_free_rx_buffers;
bp->macbgem_ops.mog_init_rings = gem_init_rings;
bp->macbgem_ops.mog_rx = gem_rx;
+ dev->ethtool_ops = &gem_ethtool_ops;
} else {
bp->max_tx_length = MACB_MAX_TX_LEN;
bp->macbgem_ops.mog_alloc_rx_buffers = macb_alloc_rx_buffers;
bp->macbgem_ops.mog_free_rx_buffers = macb_free_rx_buffers;
bp->macbgem_ops.mog_init_rings = macb_init_rings;
bp->macbgem_ops.mog_rx = macb_rx;
+ dev->ethtool_ops = &macb_ethtool_ops;
}
/* Set features */
#define MACB_MAX_QUEUES 8
/* MACB register offsets */
-#define MACB_NCR 0x0000
-#define MACB_NCFGR 0x0004
-#define MACB_NSR 0x0008
-#define MACB_TAR 0x000c /* AT91RM9200 only */
-#define MACB_TCR 0x0010 /* AT91RM9200 only */
-#define MACB_TSR 0x0014
-#define MACB_RBQP 0x0018
-#define MACB_TBQP 0x001c
-#define MACB_RSR 0x0020
-#define MACB_ISR 0x0024
-#define MACB_IER 0x0028
-#define MACB_IDR 0x002c
-#define MACB_IMR 0x0030
-#define MACB_MAN 0x0034
-#define MACB_PTR 0x0038
-#define MACB_PFR 0x003c
-#define MACB_FTO 0x0040
-#define MACB_SCF 0x0044
-#define MACB_MCF 0x0048
-#define MACB_FRO 0x004c
-#define MACB_FCSE 0x0050
-#define MACB_ALE 0x0054
-#define MACB_DTF 0x0058
-#define MACB_LCOL 0x005c
-#define MACB_EXCOL 0x0060
-#define MACB_TUND 0x0064
-#define MACB_CSE 0x0068
-#define MACB_RRE 0x006c
-#define MACB_ROVR 0x0070
-#define MACB_RSE 0x0074
-#define MACB_ELE 0x0078
-#define MACB_RJA 0x007c
-#define MACB_USF 0x0080
-#define MACB_STE 0x0084
-#define MACB_RLE 0x0088
-#define MACB_TPF 0x008c
-#define MACB_HRB 0x0090
-#define MACB_HRT 0x0094
-#define MACB_SA1B 0x0098
-#define MACB_SA1T 0x009c
-#define MACB_SA2B 0x00a0
-#define MACB_SA2T 0x00a4
-#define MACB_SA3B 0x00a8
-#define MACB_SA3T 0x00ac
-#define MACB_SA4B 0x00b0
-#define MACB_SA4T 0x00b4
-#define MACB_TID 0x00b8
-#define MACB_TPQ 0x00bc
-#define MACB_USRIO 0x00c0
-#define MACB_WOL 0x00c4
-#define MACB_MID 0x00fc
+#define MACB_NCR 0x0000 /* Network Control */
+#define MACB_NCFGR 0x0004 /* Network Config */
+#define MACB_NSR 0x0008 /* Network Status */
+#define MACB_TAR 0x000c /* AT91RM9200 only */
+#define MACB_TCR 0x0010 /* AT91RM9200 only */
+#define MACB_TSR 0x0014 /* Transmit Status */
+#define MACB_RBQP 0x0018 /* RX Q Base Address */
+#define MACB_TBQP 0x001c /* TX Q Base Address */
+#define MACB_RSR 0x0020 /* Receive Status */
+#define MACB_ISR 0x0024 /* Interrupt Status */
+#define MACB_IER 0x0028 /* Interrupt Enable */
+#define MACB_IDR 0x002c /* Interrupt Disable */
+#define MACB_IMR 0x0030 /* Interrupt Mask */
+#define MACB_MAN 0x0034 /* PHY Maintenance */
+#define MACB_PTR 0x0038
+#define MACB_PFR 0x003c
+#define MACB_FTO 0x0040
+#define MACB_SCF 0x0044
+#define MACB_MCF 0x0048
+#define MACB_FRO 0x004c
+#define MACB_FCSE 0x0050
+#define MACB_ALE 0x0054
+#define MACB_DTF 0x0058
+#define MACB_LCOL 0x005c
+#define MACB_EXCOL 0x0060
+#define MACB_TUND 0x0064
+#define MACB_CSE 0x0068
+#define MACB_RRE 0x006c
+#define MACB_ROVR 0x0070
+#define MACB_RSE 0x0074
+#define MACB_ELE 0x0078
+#define MACB_RJA 0x007c
+#define MACB_USF 0x0080
+#define MACB_STE 0x0084
+#define MACB_RLE 0x0088
+#define MACB_TPF 0x008c
+#define MACB_HRB 0x0090
+#define MACB_HRT 0x0094
+#define MACB_SA1B 0x0098
+#define MACB_SA1T 0x009c
+#define MACB_SA2B 0x00a0
+#define MACB_SA2T 0x00a4
+#define MACB_SA3B 0x00a8
+#define MACB_SA3T 0x00ac
+#define MACB_SA4B 0x00b0
+#define MACB_SA4T 0x00b4
+#define MACB_TID 0x00b8
+#define MACB_TPQ 0x00bc
+#define MACB_USRIO 0x00c0
+#define MACB_WOL 0x00c4
+#define MACB_MID 0x00fc
/* GEM register offsets. */
-#define GEM_NCFGR 0x0004
-#define GEM_USRIO 0x000c
-#define GEM_DMACFG 0x0010
-#define GEM_HRB 0x0080
-#define GEM_HRT 0x0084
-#define GEM_SA1B 0x0088
-#define GEM_SA1T 0x008C
-#define GEM_SA2B 0x0090
-#define GEM_SA2T 0x0094
-#define GEM_SA3B 0x0098
-#define GEM_SA3T 0x009C
-#define GEM_SA4B 0x00A0
-#define GEM_SA4T 0x00A4
-#define GEM_OTX 0x0100
-#define GEM_DCFG1 0x0280
-#define GEM_DCFG2 0x0284
-#define GEM_DCFG3 0x0288
-#define GEM_DCFG4 0x028c
-#define GEM_DCFG5 0x0290
-#define GEM_DCFG6 0x0294
-#define GEM_DCFG7 0x0298
-
-#define GEM_ISR(hw_q) (0x0400 + ((hw_q) << 2))
-#define GEM_TBQP(hw_q) (0x0440 + ((hw_q) << 2))
-#define GEM_RBQP(hw_q) (0x0480 + ((hw_q) << 2))
-#define GEM_IER(hw_q) (0x0600 + ((hw_q) << 2))
-#define GEM_IDR(hw_q) (0x0620 + ((hw_q) << 2))
-#define GEM_IMR(hw_q) (0x0640 + ((hw_q) << 2))
+#define GEM_NCFGR 0x0004 /* Network Config */
+#define GEM_USRIO 0x000c /* User IO */
+#define GEM_DMACFG 0x0010 /* DMA Configuration */
+#define GEM_HRB 0x0080 /* Hash Bottom */
+#define GEM_HRT 0x0084 /* Hash Top */
+#define GEM_SA1B 0x0088 /* Specific1 Bottom */
+#define GEM_SA1T 0x008C /* Specific1 Top */
+#define GEM_SA2B 0x0090 /* Specific2 Bottom */
+#define GEM_SA2T 0x0094 /* Specific2 Top */
+#define GEM_SA3B 0x0098 /* Specific3 Bottom */
+#define GEM_SA3T 0x009C /* Specific3 Top */
+#define GEM_SA4B 0x00A0 /* Specific4 Bottom */
+#define GEM_SA4T 0x00A4 /* Specific4 Top */
+#define GEM_OTX 0x0100 /* Octets transmitted */
+#define GEM_OCTTXL 0x0100 /* Octets transmitted [31:0] */
+#define GEM_OCTTXH 0x0104 /* Octets transmitted [47:32] */
+#define GEM_TXCNT 0x0108 /* Frames Transmitted counter */
+#define GEM_TXBCCNT 0x010c /* Broadcast Frames counter */
+#define GEM_TXMCCNT 0x0110 /* Multicast Frames counter */
+#define GEM_TXPAUSECNT 0x0114 /* Pause Frames Transmitted Counter */
+#define GEM_TX64CNT 0x0118 /* 64 byte Frames TX counter */
+#define GEM_TX65CNT 0x011c /* 65-127 byte Frames TX counter */
+#define GEM_TX128CNT 0x0120 /* 128-255 byte Frames TX counter */
+#define GEM_TX256CNT 0x0124 /* 256-511 byte Frames TX counter */
+#define GEM_TX512CNT 0x0128 /* 512-1023 byte Frames TX counter */
+#define GEM_TX1024CNT 0x012c /* 1024-1518 byte Frames TX counter */
+#define GEM_TX1519CNT 0x0130 /* 1519+ byte Frames TX counter */
+#define GEM_TXURUNCNT 0x0134 /* TX under run error counter */
+#define GEM_SNGLCOLLCNT 0x0138 /* Single Collision Frame Counter */
+#define GEM_MULTICOLLCNT 0x013c /* Multiple Collision Frame Counter */
+#define GEM_EXCESSCOLLCNT 0x0140 /* Excessive Collision Frame Counter */
+#define GEM_LATECOLLCNT 0x0144 /* Late Collision Frame Counter */
+#define GEM_TXDEFERCNT 0x0148 /* Deferred Transmission Frame Counter */
+#define GEM_TXCSENSECNT 0x014c /* Carrier Sense Error Counter */
+#define GEM_ORX 0x0150 /* Octets received */
+#define GEM_OCTRXL 0x0150 /* Octets received [31:0] */
+#define GEM_OCTRXH 0x0154 /* Octets received [47:32] */
+#define GEM_RXCNT 0x0158 /* Frames Received Counter */
+#define GEM_RXBROADCNT 0x015c /* Broadcast Frames Received Counter */
+#define GEM_RXMULTICNT 0x0160 /* Multicast Frames Received Counter */
+#define GEM_RXPAUSECNT 0x0164 /* Pause Frames Received Counter */
+#define GEM_RX64CNT 0x0168 /* 64 byte Frames RX Counter */
+#define GEM_RX65CNT 0x016c /* 65-127 byte Frames RX Counter */
+#define GEM_RX128CNT 0x0170 /* 128-255 byte Frames RX Counter */
+#define GEM_RX256CNT 0x0174 /* 256-511 byte Frames RX Counter */
+#define GEM_RX512CNT 0x0178 /* 512-1023 byte Frames RX Counter */
+#define GEM_RX1024CNT 0x017c /* 1024-1518 byte Frames RX Counter */
+#define GEM_RX1519CNT 0x0180 /* 1519+ byte Frames RX Counter */
+#define GEM_RXUNDRCNT 0x0184 /* Undersize Frames Received Counter */
+#define GEM_RXOVRCNT 0x0188 /* Oversize Frames Received Counter */
+#define GEM_RXJABCNT 0x018c /* Jabbers Received Counter */
+#define GEM_RXFCSCNT 0x0190 /* Frame Check Sequence Error Counter */
+#define GEM_RXLENGTHCNT 0x0194 /* Length Field Error Counter */
+#define GEM_RXSYMBCNT 0x0198 /* Symbol Error Counter */
+#define GEM_RXALIGNCNT 0x019c /* Alignment Error Counter */
+#define GEM_RXRESERRCNT 0x01a0 /* Receive Resource Error Counter */
+#define GEM_RXORCNT 0x01a4 /* Receive Overrun Counter */
+#define GEM_RXIPCCNT 0x01a8 /* IP header Checksum Error Counter */
+#define GEM_RXTCPCCNT 0x01ac /* TCP Checksum Error Counter */
+#define GEM_RXUDPCCNT 0x01b0 /* UDP Checksum Error Counter */
+#define GEM_DCFG1 0x0280 /* Design Config 1 */
+#define GEM_DCFG2 0x0284 /* Design Config 2 */
+#define GEM_DCFG3 0x0288 /* Design Config 3 */
+#define GEM_DCFG4 0x028c /* Design Config 4 */
+#define GEM_DCFG5 0x0290 /* Design Config 5 */
+#define GEM_DCFG6 0x0294 /* Design Config 6 */
+#define GEM_DCFG7 0x0298 /* Design Config 7 */
+
+#define GEM_ISR(hw_q) (0x0400 + ((hw_q) << 2))
+#define GEM_TBQP(hw_q) (0x0440 + ((hw_q) << 2))
+#define GEM_RBQP(hw_q) (0x0480 + ((hw_q) << 2))
+#define GEM_IER(hw_q) (0x0600 + ((hw_q) << 2))
+#define GEM_IDR(hw_q) (0x0620 + ((hw_q) << 2))
+#define GEM_IMR(hw_q) (0x0640 + ((hw_q) << 2))
/* Bitfields in NCR */
-#define MACB_LB_OFFSET 0
-#define MACB_LB_SIZE 1
-#define MACB_LLB_OFFSET 1
-#define MACB_LLB_SIZE 1
-#define MACB_RE_OFFSET 2
-#define MACB_RE_SIZE 1
-#define MACB_TE_OFFSET 3
-#define MACB_TE_SIZE 1
-#define MACB_MPE_OFFSET 4
-#define MACB_MPE_SIZE 1
-#define MACB_CLRSTAT_OFFSET 5
-#define MACB_CLRSTAT_SIZE 1
-#define MACB_INCSTAT_OFFSET 6
-#define MACB_INCSTAT_SIZE 1
-#define MACB_WESTAT_OFFSET 7
-#define MACB_WESTAT_SIZE 1
-#define MACB_BP_OFFSET 8
-#define MACB_BP_SIZE 1
-#define MACB_TSTART_OFFSET 9
-#define MACB_TSTART_SIZE 1
-#define MACB_THALT_OFFSET 10
-#define MACB_THALT_SIZE 1
-#define MACB_NCR_TPF_OFFSET 11
-#define MACB_NCR_TPF_SIZE 1
-#define MACB_TZQ_OFFSET 12
-#define MACB_TZQ_SIZE 1
+#define MACB_LB_OFFSET 0 /* reserved */
+#define MACB_LB_SIZE 1
+#define MACB_LLB_OFFSET 1 /* Loop back local */
+#define MACB_LLB_SIZE 1
+#define MACB_RE_OFFSET 2 /* Receive enable */
+#define MACB_RE_SIZE 1
+#define MACB_TE_OFFSET 3 /* Transmit enable */
+#define MACB_TE_SIZE 1
+#define MACB_MPE_OFFSET 4 /* Management port enable */
+#define MACB_MPE_SIZE 1
+#define MACB_CLRSTAT_OFFSET 5 /* Clear stats regs */
+#define MACB_CLRSTAT_SIZE 1
+#define MACB_INCSTAT_OFFSET 6 /* Incremental stats regs */
+#define MACB_INCSTAT_SIZE 1
+#define MACB_WESTAT_OFFSET 7 /* Write enable stats regs */
+#define MACB_WESTAT_SIZE 1
+#define MACB_BP_OFFSET 8 /* Back pressure */
+#define MACB_BP_SIZE 1
+#define MACB_TSTART_OFFSET 9 /* Start transmission */
+#define MACB_TSTART_SIZE 1
+#define MACB_THALT_OFFSET 10 /* Transmit halt */
+#define MACB_THALT_SIZE 1
+#define MACB_NCR_TPF_OFFSET 11 /* Transmit pause frame */
+#define MACB_NCR_TPF_SIZE 1
+#define MACB_TZQ_OFFSET 12 /* Transmit zero quantum pause frame */
+#define MACB_TZQ_SIZE 1
/* Bitfields in NCFGR */
-#define MACB_SPD_OFFSET 0
-#define MACB_SPD_SIZE 1
-#define MACB_FD_OFFSET 1
-#define MACB_FD_SIZE 1
-#define MACB_BIT_RATE_OFFSET 2
-#define MACB_BIT_RATE_SIZE 1
-#define MACB_JFRAME_OFFSET 3
-#define MACB_JFRAME_SIZE 1
-#define MACB_CAF_OFFSET 4
-#define MACB_CAF_SIZE 1
-#define MACB_NBC_OFFSET 5
-#define MACB_NBC_SIZE 1
-#define MACB_NCFGR_MTI_OFFSET 6
-#define MACB_NCFGR_MTI_SIZE 1
-#define MACB_UNI_OFFSET 7
-#define MACB_UNI_SIZE 1
-#define MACB_BIG_OFFSET 8
-#define MACB_BIG_SIZE 1
-#define MACB_EAE_OFFSET 9
-#define MACB_EAE_SIZE 1
-#define MACB_CLK_OFFSET 10
-#define MACB_CLK_SIZE 2
-#define MACB_RTY_OFFSET 12
-#define MACB_RTY_SIZE 1
-#define MACB_PAE_OFFSET 13
-#define MACB_PAE_SIZE 1
-#define MACB_RM9200_RMII_OFFSET 13 /* AT91RM9200 only */
-#define MACB_RM9200_RMII_SIZE 1 /* AT91RM9200 only */
-#define MACB_RBOF_OFFSET 14
-#define MACB_RBOF_SIZE 2
-#define MACB_RLCE_OFFSET 16
-#define MACB_RLCE_SIZE 1
-#define MACB_DRFCS_OFFSET 17
-#define MACB_DRFCS_SIZE 1
-#define MACB_EFRHD_OFFSET 18
-#define MACB_EFRHD_SIZE 1
-#define MACB_IRXFCS_OFFSET 19
-#define MACB_IRXFCS_SIZE 1
+#define MACB_SPD_OFFSET 0 /* Speed */
+#define MACB_SPD_SIZE 1
+#define MACB_FD_OFFSET 1 /* Full duplex */
+#define MACB_FD_SIZE 1
+#define MACB_BIT_RATE_OFFSET 2 /* Discard non-VLAN frames */
+#define MACB_BIT_RATE_SIZE 1
+#define MACB_JFRAME_OFFSET 3 /* reserved */
+#define MACB_JFRAME_SIZE 1
+#define MACB_CAF_OFFSET 4 /* Copy all frames */
+#define MACB_CAF_SIZE 1
+#define MACB_NBC_OFFSET 5 /* No broadcast */
+#define MACB_NBC_SIZE 1
+#define MACB_NCFGR_MTI_OFFSET 6 /* Multicast hash enable */
+#define MACB_NCFGR_MTI_SIZE 1
+#define MACB_UNI_OFFSET 7 /* Unicast hash enable */
+#define MACB_UNI_SIZE 1
+#define MACB_BIG_OFFSET 8 /* Receive 1536 byte frames */
+#define MACB_BIG_SIZE 1
+#define MACB_EAE_OFFSET 9 /* External address match enable */
+#define MACB_EAE_SIZE 1
+#define MACB_CLK_OFFSET 10
+#define MACB_CLK_SIZE 2
+#define MACB_RTY_OFFSET 12 /* Retry test */
+#define MACB_RTY_SIZE 1
+#define MACB_PAE_OFFSET 13 /* Pause enable */
+#define MACB_PAE_SIZE 1
+#define MACB_RM9200_RMII_OFFSET 13 /* AT91RM9200 only */
+#define MACB_RM9200_RMII_SIZE 1 /* AT91RM9200 only */
+#define MACB_RBOF_OFFSET 14 /* Receive buffer offset */
+#define MACB_RBOF_SIZE 2
+#define MACB_RLCE_OFFSET 16 /* Length field error frame discard */
+#define MACB_RLCE_SIZE 1
+#define MACB_DRFCS_OFFSET 17 /* FCS remove */
+#define MACB_DRFCS_SIZE 1
+#define MACB_EFRHD_OFFSET 18
+#define MACB_EFRHD_SIZE 1
+#define MACB_IRXFCS_OFFSET 19
+#define MACB_IRXFCS_SIZE 1
/* GEM specific NCFGR bitfields. */
-#define GEM_GBE_OFFSET 10
-#define GEM_GBE_SIZE 1
-#define GEM_CLK_OFFSET 18
-#define GEM_CLK_SIZE 3
-#define GEM_DBW_OFFSET 21
-#define GEM_DBW_SIZE 2
-#define GEM_RXCOEN_OFFSET 24
-#define GEM_RXCOEN_SIZE 1
+#define GEM_GBE_OFFSET 10 /* Gigabit mode enable */
+#define GEM_GBE_SIZE 1
+#define GEM_CLK_OFFSET 18 /* MDC clock division */
+#define GEM_CLK_SIZE 3
+#define GEM_DBW_OFFSET 21 /* Data bus width */
+#define GEM_DBW_SIZE 2
+#define GEM_RXCOEN_OFFSET 24
+#define GEM_RXCOEN_SIZE 1
/* Constants for data bus width. */
-#define GEM_DBW32 0
-#define GEM_DBW64 1
-#define GEM_DBW128 2
+#define GEM_DBW32 0 /* 32 bit AMBA AHB data bus width */
+#define GEM_DBW64 1 /* 64 bit AMBA AHB data bus width */
+#define GEM_DBW128 2 /* 128 bit AMBA AHB data bus width */
/* Bitfields in DMACFG. */
-#define GEM_FBLDO_OFFSET 0
-#define GEM_FBLDO_SIZE 5
-#define GEM_ENDIA_OFFSET 7
-#define GEM_ENDIA_SIZE 1
-#define GEM_RXBMS_OFFSET 8
-#define GEM_RXBMS_SIZE 2
-#define GEM_TXPBMS_OFFSET 10
-#define GEM_TXPBMS_SIZE 1
-#define GEM_TXCOEN_OFFSET 11
-#define GEM_TXCOEN_SIZE 1
-#define GEM_RXBS_OFFSET 16
-#define GEM_RXBS_SIZE 8
-#define GEM_DDRP_OFFSET 24
-#define GEM_DDRP_SIZE 1
+#define GEM_FBLDO_OFFSET 0 /* fixed burst length for DMA */
+#define GEM_FBLDO_SIZE 5
+#define GEM_ENDIA_OFFSET 7 /* endian swap mode for packet data access */
+#define GEM_ENDIA_SIZE 1
+#define GEM_RXBMS_OFFSET 8 /* RX packet buffer memory size select */
+#define GEM_RXBMS_SIZE 2
+#define GEM_TXPBMS_OFFSET 10 /* TX packet buffer memory size select */
+#define GEM_TXPBMS_SIZE 1
+#define GEM_TXCOEN_OFFSET 11 /* TX IP/TCP/UDP checksum gen offload */
+#define GEM_TXCOEN_SIZE 1
+#define GEM_RXBS_OFFSET 16 /* DMA receive buffer size */
+#define GEM_RXBS_SIZE 8
+#define GEM_DDRP_OFFSET 24 /* disc_when_no_ahb */
+#define GEM_DDRP_SIZE 1
/* Bitfields in NSR */
-#define MACB_NSR_LINK_OFFSET 0
-#define MACB_NSR_LINK_SIZE 1
-#define MACB_MDIO_OFFSET 1
-#define MACB_MDIO_SIZE 1
-#define MACB_IDLE_OFFSET 2
-#define MACB_IDLE_SIZE 1
+#define MACB_NSR_LINK_OFFSET 0 /* pcs_link_state */
+#define MACB_NSR_LINK_SIZE 1
+#define MACB_MDIO_OFFSET 1 /* status of the mdio_in pin */
+#define MACB_MDIO_SIZE 1
+#define MACB_IDLE_OFFSET 2 /* The PHY management logic is idle */
+#define MACB_IDLE_SIZE 1
/* Bitfields in TSR */
-#define MACB_UBR_OFFSET 0
-#define MACB_UBR_SIZE 1
-#define MACB_COL_OFFSET 1
-#define MACB_COL_SIZE 1
-#define MACB_TSR_RLE_OFFSET 2
-#define MACB_TSR_RLE_SIZE 1
-#define MACB_TGO_OFFSET 3
-#define MACB_TGO_SIZE 1
-#define MACB_BEX_OFFSET 4
-#define MACB_BEX_SIZE 1
-#define MACB_RM9200_BNQ_OFFSET 4 /* AT91RM9200 only */
-#define MACB_RM9200_BNQ_SIZE 1 /* AT91RM9200 only */
-#define MACB_COMP_OFFSET 5
-#define MACB_COMP_SIZE 1
-#define MACB_UND_OFFSET 6
-#define MACB_UND_SIZE 1
+#define MACB_UBR_OFFSET 0 /* Used bit read */
+#define MACB_UBR_SIZE 1
+#define MACB_COL_OFFSET 1 /* Collision occurred */
+#define MACB_COL_SIZE 1
+#define MACB_TSR_RLE_OFFSET 2 /* Retry limit exceeded */
+#define MACB_TSR_RLE_SIZE 1
+#define MACB_TGO_OFFSET 3 /* Transmit go */
+#define MACB_TGO_SIZE 1
+#define MACB_BEX_OFFSET 4 /* TX frame corruption due to AHB error */
+#define MACB_BEX_SIZE 1
+#define MACB_RM9200_BNQ_OFFSET 4 /* AT91RM9200 only */
+#define MACB_RM9200_BNQ_SIZE 1 /* AT91RM9200 only */
+#define MACB_COMP_OFFSET 5 /* Transmit complete */
+#define MACB_COMP_SIZE 1
+#define MACB_UND_OFFSET 6 /* Transmit under run */
+#define MACB_UND_SIZE 1
/* Bitfields in RSR */
-#define MACB_BNA_OFFSET 0
-#define MACB_BNA_SIZE 1
-#define MACB_REC_OFFSET 1
-#define MACB_REC_SIZE 1
-#define MACB_OVR_OFFSET 2
-#define MACB_OVR_SIZE 1
+#define MACB_BNA_OFFSET 0 /* Buffer not available */
+#define MACB_BNA_SIZE 1
+#define MACB_REC_OFFSET 1 /* Frame received */
+#define MACB_REC_SIZE 1
+#define MACB_OVR_OFFSET 2 /* Receive overrun */
+#define MACB_OVR_SIZE 1
/* Bitfields in ISR/IER/IDR/IMR */
-#define MACB_MFD_OFFSET 0
-#define MACB_MFD_SIZE 1
-#define MACB_RCOMP_OFFSET 1
-#define MACB_RCOMP_SIZE 1
-#define MACB_RXUBR_OFFSET 2
-#define MACB_RXUBR_SIZE 1
-#define MACB_TXUBR_OFFSET 3
-#define MACB_TXUBR_SIZE 1
-#define MACB_ISR_TUND_OFFSET 4
-#define MACB_ISR_TUND_SIZE 1
-#define MACB_ISR_RLE_OFFSET 5
-#define MACB_ISR_RLE_SIZE 1
-#define MACB_TXERR_OFFSET 6
-#define MACB_TXERR_SIZE 1
-#define MACB_TCOMP_OFFSET 7
-#define MACB_TCOMP_SIZE 1
-#define MACB_ISR_LINK_OFFSET 9
-#define MACB_ISR_LINK_SIZE 1
-#define MACB_ISR_ROVR_OFFSET 10
-#define MACB_ISR_ROVR_SIZE 1
-#define MACB_HRESP_OFFSET 11
-#define MACB_HRESP_SIZE 1
-#define MACB_PFR_OFFSET 12
-#define MACB_PFR_SIZE 1
-#define MACB_PTZ_OFFSET 13
-#define MACB_PTZ_SIZE 1
+#define MACB_MFD_OFFSET 0 /* Management frame sent */
+#define MACB_MFD_SIZE 1
+#define MACB_RCOMP_OFFSET 1 /* Receive complete */
+#define MACB_RCOMP_SIZE 1
+#define MACB_RXUBR_OFFSET 2 /* RX used bit read */
+#define MACB_RXUBR_SIZE 1
+#define MACB_TXUBR_OFFSET 3 /* TX used bit read */
+#define MACB_TXUBR_SIZE 1
+#define MACB_ISR_TUND_OFFSET 4 /* Enable TX buffer under run interrupt */
+#define MACB_ISR_TUND_SIZE 1
+#define MACB_ISR_RLE_OFFSET 5 /* EN retry exceeded/late coll interrupt */
+#define MACB_ISR_RLE_SIZE 1
+#define MACB_TXERR_OFFSET 6 /* EN TX frame corrupt from error interrupt */
+#define MACB_TXERR_SIZE 1
+#define MACB_TCOMP_OFFSET 7 /* Enable transmit complete interrupt */
+#define MACB_TCOMP_SIZE 1
+#define MACB_ISR_LINK_OFFSET 9 /* Enable link change interrupt */
+#define MACB_ISR_LINK_SIZE 1
+#define MACB_ISR_ROVR_OFFSET 10 /* Enable receive overrun interrupt */
+#define MACB_ISR_ROVR_SIZE 1
+#define MACB_HRESP_OFFSET 11 /* Enable hresp not OK interrupt */
+#define MACB_HRESP_SIZE 1
+#define MACB_PFR_OFFSET 12 /* Enable pause frame w/ quantum interrupt */
+#define MACB_PFR_SIZE 1
+#define MACB_PTZ_OFFSET 13 /* Enable pause time zero interrupt */
+#define MACB_PTZ_SIZE 1
/* Bitfields in MAN */
-#define MACB_DATA_OFFSET 0
-#define MACB_DATA_SIZE 16
-#define MACB_CODE_OFFSET 16
-#define MACB_CODE_SIZE 2
-#define MACB_REGA_OFFSET 18
-#define MACB_REGA_SIZE 5
-#define MACB_PHYA_OFFSET 23
-#define MACB_PHYA_SIZE 5
-#define MACB_RW_OFFSET 28
-#define MACB_RW_SIZE 2
-#define MACB_SOF_OFFSET 30
-#define MACB_SOF_SIZE 2
+#define MACB_DATA_OFFSET 0 /* data */
+#define MACB_DATA_SIZE 16
+#define MACB_CODE_OFFSET 16 /* Must be written to 10 */
+#define MACB_CODE_SIZE 2
+#define MACB_REGA_OFFSET 18 /* Register address */
+#define MACB_REGA_SIZE 5
+#define MACB_PHYA_OFFSET 23 /* PHY address */
+#define MACB_PHYA_SIZE 5
+#define MACB_RW_OFFSET 28 /* Operation. 10 is read. 01 is write. */
+#define MACB_RW_SIZE 2
+#define MACB_SOF_OFFSET 30 /* Must be written to 1 for Clause 22 */
+#define MACB_SOF_SIZE 2
/* Bitfields in USRIO (AVR32) */
#define MACB_MII_OFFSET 0
/* Bitfields in USRIO (AT91) */
#define MACB_RMII_OFFSET 0
#define MACB_RMII_SIZE 1
-#define GEM_RGMII_OFFSET 0 /* GEM gigabit mode */
+#define GEM_RGMII_OFFSET 0 /* GEM gigabit mode */
#define GEM_RGMII_SIZE 1
#define MACB_CLKEN_OFFSET 1
#define MACB_CLKEN_SIZE 1
#define queue_writel(queue, reg, value) \
__raw_writel((value), (queue)->bp->regs + (queue)->reg)
-/*
- * Conditional GEM/MACB macros. These perform the operation to the correct
+/* Conditional GEM/MACB macros. These perform the operation to the correct
* register dependent on whether the device is a GEM or a MACB. For registers
* and bitfields that are common across both devices, use macb_{read,write}l
* to avoid the cost of the conditional.
__v; \
})
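As a usage illustration (hedged; macb_or_gem_readl() is the conditional accessor whose tail appears above): USRIO is one register whose offset differs between the two cores (MACB_USRIO at 0x00c0 versus GEM_USRIO at 0x000c), so it has to go through the conditional form:

	/* Dispatches to gem_readl(bp, USRIO) on a GEM and to
	 * macb_readl(bp, USRIO) otherwise; registers whose offset is
	 * shared can use macb_readl() directly and skip the check.
	 */
	u32 usrio = macb_or_gem_readl(bp, USRIO);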
-/**
- * struct macb_dma_desc - Hardware DMA descriptor
+/* struct macb_dma_desc - Hardware DMA descriptor
* @addr: DMA address of data buffer
* @ctrl: Control and status bits
*/
/* limit RX checksum offload to TCP and UDP packets */
#define GEM_RX_CSUM_CHECKED_MASK 2
-/**
- * struct macb_tx_skb - data about an skb which is being transmitted
+/* struct macb_tx_skb - data about an skb which is being transmitted
* @skb: skb currently being transmitted, only set for the last buffer
* of the frame
* @mapping: DMA address of the skb's fragment buffer
bool mapped_as_page;
};
-/*
- * Hardware-collected statistics. Used when updating the network
+/* Hardware-collected statistics. Used when updating the network
* device stats by a periodic timer.
*/
struct macb_stats {
u32 rx_udp_checksum_errors;
};
+/* Describes the name and offset of an individual statistic register, as
+ * returned by `ethtool -S`. Also describes which net_device_stats statistics
+ * this register should contribute to.
+ */
+struct gem_statistic {
+ char stat_string[ETH_GSTRING_LEN];
+ int offset;
+ u32 stat_bits;
+};
+
+/* Bitfield defs for net_device_stat statistics */
+#define GEM_NDS_RXERR_OFFSET 0
+#define GEM_NDS_RXLENERR_OFFSET 1
+#define GEM_NDS_RXOVERERR_OFFSET 2
+#define GEM_NDS_RXCRCERR_OFFSET 3
+#define GEM_NDS_RXFRAMEERR_OFFSET 4
+#define GEM_NDS_RXFIFOERR_OFFSET 5
+#define GEM_NDS_TXERR_OFFSET 6
+#define GEM_NDS_TXABORTEDERR_OFFSET 7
+#define GEM_NDS_TXCARRIERERR_OFFSET 8
+#define GEM_NDS_TXFIFOERR_OFFSET 9
+#define GEM_NDS_COLLISIONS_OFFSET 10
+
+#define GEM_STAT_TITLE(name, title) GEM_STAT_TITLE_BITS(name, title, 0)
+#define GEM_STAT_TITLE_BITS(name, title, bits) { \
+ .stat_string = title, \
+ .offset = GEM_##name, \
+ .stat_bits = bits \
+}
+
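The stat_bits masks passed to GEM_STAT_TITLE_BITS() in the table below are built from the offsets above through the driver's GEM_BIT() helper, which (as defined elsewhere in macb.h) is a one-bit shift:

	/* GEM_BIT(NDS_TXERR) expands to (1 << GEM_NDS_TXERR_OFFSET): a
	 * one-hot mask naming which net_device_stats counters a given
	 * hardware statistic should feed.
	 */
	#define GEM_BIT(name) (1 << GEM_##name##_OFFSET)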
+/* list of gem statistic registers. The names MUST match the
+ * corresponding GEM_* definitions.
+ */
+static const struct gem_statistic gem_statistics[] = {
+ GEM_STAT_TITLE(OCTTXL, "tx_octets"), /* OCTTXH combined with OCTTXL */
+ GEM_STAT_TITLE(TXCNT, "tx_frames"),
+ GEM_STAT_TITLE(TXBCCNT, "tx_broadcast_frames"),
+ GEM_STAT_TITLE(TXMCCNT, "tx_multicast_frames"),
+ GEM_STAT_TITLE(TXPAUSECNT, "tx_pause_frames"),
+ GEM_STAT_TITLE(TX64CNT, "tx_64_byte_frames"),
+ GEM_STAT_TITLE(TX65CNT, "tx_65_127_byte_frames"),
+ GEM_STAT_TITLE(TX128CNT, "tx_128_255_byte_frames"),
+ GEM_STAT_TITLE(TX256CNT, "tx_256_511_byte_frames"),
+ GEM_STAT_TITLE(TX512CNT, "tx_512_1023_byte_frames"),
+ GEM_STAT_TITLE(TX1024CNT, "tx_1024_1518_byte_frames"),
+ GEM_STAT_TITLE(TX1519CNT, "tx_greater_than_1518_byte_frames"),
+ GEM_STAT_TITLE_BITS(TXURUNCNT, "tx_underrun",
+ GEM_BIT(NDS_TXERR)|GEM_BIT(NDS_TXFIFOERR)),
+ GEM_STAT_TITLE_BITS(SNGLCOLLCNT, "tx_single_collision_frames",
+ GEM_BIT(NDS_TXERR)|GEM_BIT(NDS_COLLISIONS)),
+ GEM_STAT_TITLE_BITS(MULTICOLLCNT, "tx_multiple_collision_frames",
+ GEM_BIT(NDS_TXERR)|GEM_BIT(NDS_COLLISIONS)),
+ GEM_STAT_TITLE_BITS(EXCESSCOLLCNT, "tx_excessive_collisions",
+ GEM_BIT(NDS_TXERR)|
+ GEM_BIT(NDS_TXABORTEDERR)|
+ GEM_BIT(NDS_COLLISIONS)),
+ GEM_STAT_TITLE_BITS(LATECOLLCNT, "tx_late_collisions",
+ GEM_BIT(NDS_TXERR)|GEM_BIT(NDS_COLLISIONS)),
+ GEM_STAT_TITLE(TXDEFERCNT, "tx_deferred_frames"),
+ GEM_STAT_TITLE_BITS(TXCSENSECNT, "tx_carrier_sense_errors",
+ GEM_BIT(NDS_TXERR)|GEM_BIT(NDS_COLLISIONS)),
+ GEM_STAT_TITLE(OCTRXL, "rx_octets"), /* OCTRXH combined with OCTRXL */
+ GEM_STAT_TITLE(RXCNT, "rx_frames"),
+ GEM_STAT_TITLE(RXBROADCNT, "rx_broadcast_frames"),
+ GEM_STAT_TITLE(RXMULTICNT, "rx_multicast_frames"),
+ GEM_STAT_TITLE(RXPAUSECNT, "rx_pause_frames"),
+ GEM_STAT_TITLE(RX64CNT, "rx_64_byte_frames"),
+ GEM_STAT_TITLE(RX65CNT, "rx_65_127_byte_frames"),
+ GEM_STAT_TITLE(RX128CNT, "rx_128_255_byte_frames"),
+ GEM_STAT_TITLE(RX256CNT, "rx_256_511_byte_frames"),
+ GEM_STAT_TITLE(RX512CNT, "rx_512_1023_byte_frames"),
+ GEM_STAT_TITLE(RX1024CNT, "rx_1024_1518_byte_frames"),
+ GEM_STAT_TITLE(RX1519CNT, "rx_greater_than_1518_byte_frames"),
+ GEM_STAT_TITLE_BITS(RXUNDRCNT, "rx_undersized_frames",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXLENERR)),
+ GEM_STAT_TITLE_BITS(RXOVRCNT, "rx_oversize_frames",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXLENERR)),
+ GEM_STAT_TITLE_BITS(RXJABCNT, "rx_jabbers",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXLENERR)),
+ GEM_STAT_TITLE_BITS(RXFCSCNT, "rx_frame_check_sequence_errors",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXCRCERR)),
+ GEM_STAT_TITLE_BITS(RXLENGTHCNT, "rx_length_field_frame_errors",
+ GEM_BIT(NDS_RXERR)),
+ GEM_STAT_TITLE_BITS(RXSYMBCNT, "rx_symbol_errors",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXFRAMEERR)),
+ GEM_STAT_TITLE_BITS(RXALIGNCNT, "rx_alignment_errors",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXOVERERR)),
+ GEM_STAT_TITLE_BITS(RXRESERRCNT, "rx_resource_errors",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXOVERERR)),
+ GEM_STAT_TITLE_BITS(RXORCNT, "rx_overruns",
+ GEM_BIT(NDS_RXERR)|GEM_BIT(NDS_RXFIFOERR)),
+ GEM_STAT_TITLE_BITS(RXIPCCNT, "rx_ip_header_checksum_errors",
+ GEM_BIT(NDS_RXERR)),
+ GEM_STAT_TITLE_BITS(RXTCPCCNT, "rx_tcp_checksum_errors",
+ GEM_BIT(NDS_RXERR)),
+ GEM_STAT_TITLE_BITS(RXUDPCCNT, "rx_udp_checksum_errors",
+ GEM_BIT(NDS_RXERR)),
+};
+
+#define GEM_STATS_LEN ARRAY_SIZE(gem_statistics)
+
struct macb;
struct macb_or_gem_ops {
dma_addr_t skb_physaddr; /* phys addr from pci_map_single */
int skb_length; /* saved skb length for pci_unmap_single */
unsigned int max_tx_length;
+
+ u64 ethtool_stats[GEM_STATS_LEN];
};
extern const struct ethtool_ops macb_ethtool_ops;
}
cpl->iff = dev->if_port;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
cpl->vlan_valid = 1;
- cpl->vlan = htons(vlan_tx_tag_get(skb));
+ cpl->vlan = htons(skb_vlan_tag_get(skb));
st->vlan_insert++;
} else
cpl->vlan_valid = 0;
cpl->len = htonl(skb->len);
cntrl = V_TXPKT_INTF(pi->port_id);
- if (vlan_tx_tag_present(skb))
- cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(vlan_tx_tag_get(skb));
+ if (skb_vlan_tag_present(skb))
+ cntrl |= F_TXPKT_VLAN_VLD | V_TXPKT_VLAN(skb_vlan_tag_get(skb));
tso_info = V_LSO_MSS(skb_shinfo(skb)->gso_size);
if (tso_info) {
qs->port_stats[SGE_PSTAT_TX_CSUM]++;
if (skb_shinfo(skb)->gso_size)
qs->port_stats[SGE_PSTAT_TSO]++;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
qs->port_stats[SGE_PSTAT_VLANINS]++;
/*
p->xauicfg[1] = simple_strtoul(vpd.xaui1cfg_data, NULL, 16);
}
- for (i = 0; i < 6; i++)
- p->eth_base[i] = hex_to_bin(vpd.na_data[2 * i]) * 16 +
- hex_to_bin(vpd.na_data[2 * i + 1]);
+ ret = hex2bin(p->eth_base, vpd.na_data, 6);
+ if (ret < 0)
+ return -EINVAL;
return 0;
}
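The hex2bin() call above replaces an open-coded nibble loop: it parses 2 * count ASCII hex digits into count bytes and returns a negative value on malformed input. A hedged illustration of the contract:

	/* "deadbeef0102" -> {0xde, 0xad, 0xbe, 0xef, 0x01, 0x02} */
	u8 mac[6];

	if (hex2bin(mac, "deadbeef0102", 6) < 0)
		return -EINVAL;	/* non-hex character in the source */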
obj-$(CONFIG_CHELSIO_T4) += cxgb4.o
-cxgb4-objs := cxgb4_main.o l2t.o t4_hw.o sge.o
+cxgb4-objs := cxgb4_main.o l2t.o t4_hw.o sge.o clip_tbl.o
cxgb4-$(CONFIG_CHELSIO_T4_DCB) += cxgb4_dcb.o
cxgb4-$(CONFIG_DEBUG_FS) += cxgb4_debugfs.o
--- /dev/null
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ * Copyright (C) 2003-2014 Chelsio Communications. All rights reserved.
+ *
+ * Written by Deepak (deepak.s@chelsio.com)
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the LICENSE file included in this
+ * release for licensing terms and conditions.
+ */
+
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/jhash.h>
+#include <linux/if_vlan.h>
+#include <net/addrconf.h>
+#include "cxgb4.h"
+#include "clip_tbl.h"
+
+static inline unsigned int ipv4_clip_hash(struct clip_tbl *c, const u32 *key)
+{
+ unsigned int clipt_size_half = c->clipt_size / 2;
+
+ return jhash_1word(*key, 0) % clipt_size_half;
+}
+
+static inline unsigned int ipv6_clip_hash(struct clip_tbl *d, const u32 *key)
+{
+ unsigned int clipt_size_half = d->clipt_size / 2;
+ u32 xor = key[0] ^ key[1] ^ key[2] ^ key[3];
+
+ return clipt_size_half +
+ (jhash_1word(xor, 0) % clipt_size_half);
+}
+
+static unsigned int clip_addr_hash(struct clip_tbl *ctbl, const u32 *addr,
+ int addr_len)
+{
+ return addr_len == 4 ? ipv4_clip_hash(ctbl, addr) :
+ ipv6_clip_hash(ctbl, addr);
+}
+
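The two hash helpers above deliberately split the table: IPv4 addresses land in the lower half of the buckets and IPv6 addresses in the upper half, so the two families never collide. A hedged kernel-context sketch of the split via the clip_addr_hash() dispatcher above:

	/* With a 64-bucket table, IPv4 addresses hash into buckets
	 * 0..31 and IPv6 addresses into buckets 32..63.
	 */
	static void clip_hash_example(struct clip_tbl *ctbl)
	{
		u32 v4 = 0x0a000001;			/* 10.0.0.1 */
		u32 v6[4] = { 0x20010db8, 0, 0, 1 };	/* 2001:db8::1-ish */
		unsigned int b4, b6;

		b4 = clip_addr_hash(ctbl, &v4, 4);	/* lower half */
		b6 = clip_addr_hash(ctbl, v6, 16);	/* upper half */
		(void)b4;
		(void)b6;
	}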
+static int clip6_get_mbox(const struct net_device *dev,
+ const struct in6_addr *lip)
+{
+ struct adapter *adap = netdev2adap(dev);
+ struct fw_clip_cmd c;
+
+ memset(&c, 0, sizeof(c));
+ c.op_to_write = htonl(FW_CMD_OP_V(FW_CLIP_CMD) |
+ FW_CMD_REQUEST_F | FW_CMD_WRITE_F);
+ c.alloc_to_len16 = htonl(FW_CLIP_CMD_ALLOC_F | FW_LEN16(c));
+ *(__be64 *)&c.ip_hi = *(__be64 *)(lip->s6_addr);
+ *(__be64 *)&c.ip_lo = *(__be64 *)(lip->s6_addr + 8);
+ return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
+}
+
+static int clip6_release_mbox(const struct net_device *dev,
+ const struct in6_addr *lip)
+{
+ struct adapter *adap = netdev2adap(dev);
+ struct fw_clip_cmd c;
+
+ memset(&c, 0, sizeof(c));
+ c.op_to_write = htonl(FW_CMD_OP_V(FW_CLIP_CMD) |
+ FW_CMD_REQUEST_F | FW_CMD_READ_F);
+ c.alloc_to_len16 = htonl(FW_CLIP_CMD_FREE_F | FW_LEN16(c));
+ *(__be64 *)&c.ip_hi = *(__be64 *)(lip->s6_addr);
+ *(__be64 *)&c.ip_lo = *(__be64 *)(lip->s6_addr + 8);
+ return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
+}
+
+int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6)
+{
+ struct adapter *adap = netdev2adap(dev);
+ struct clip_tbl *ctbl = adap->clipt;
+ struct clip_entry *ce, *cte;
+ u32 *addr = (u32 *)lip;
+ int hash;
+ int addr_len;
+ int ret = 0;
+
+ if (v6)
+ addr_len = 16;
+ else
+ addr_len = 4;
+
+ hash = clip_addr_hash(ctbl, addr, addr_len);
+
+ read_lock_bh(&ctbl->lock);
+ list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
+ if (addr_len == cte->addr_len &&
+ memcmp(lip, cte->addr, cte->addr_len) == 0) {
+ ce = cte;
+ read_unlock_bh(&ctbl->lock);
+ goto found;
+ }
+ }
+ read_unlock_bh(&ctbl->lock);
+
+ write_lock_bh(&ctbl->lock);
+ if (!list_empty(&ctbl->ce_free_head)) {
+ ce = list_first_entry(&ctbl->ce_free_head,
+ struct clip_entry, list);
+ list_del(&ce->list);
+ INIT_LIST_HEAD(&ce->list);
+ spin_lock_init(&ce->lock);
+ atomic_set(&ce->refcnt, 0);
+ atomic_dec(&ctbl->nfree);
+ ce->addr_len = addr_len;
+ memcpy(ce->addr, lip, addr_len);
+ list_add_tail(&ce->list, &ctbl->hash_list[hash]);
+ if (v6) {
+ ret = clip6_get_mbox(dev, (const struct in6_addr *)lip);
+ if (ret) {
+ write_unlock_bh(&ctbl->lock);
+ return ret;
+ }
+ }
+ } else {
+ write_unlock_bh(&ctbl->lock);
+ return -ENOMEM;
+ }
+ write_unlock_bh(&ctbl->lock);
+found:
+ atomic_inc(&ce->refcnt);
+
+ return 0;
+}
+EXPORT_SYMBOL(cxgb4_clip_get);
+
+void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6)
+{
+ struct adapter *adap = netdev2adap(dev);
+ struct clip_tbl *ctbl = adap->clipt;
+ struct clip_entry *ce, *cte;
+ u32 *addr = (u32 *)lip;
+ int hash;
+ int addr_len;
+
+ if (v6)
+ addr_len = 16;
+ else
+ addr_len = 4;
+
+ hash = clip_addr_hash(ctbl, addr, addr_len);
+
+ read_lock_bh(&ctbl->lock);
+ list_for_each_entry(cte, &ctbl->hash_list[hash], list) {
+ if (addr_len == cte->addr_len &&
+ memcmp(lip, cte->addr, cte->addr_len) == 0) {
+ ce = cte;
+ read_unlock_bh(&ctbl->lock);
+ goto found;
+ }
+ }
+ read_unlock_bh(&ctbl->lock);
+
+ return;
+found:
+ write_lock_bh(&ctbl->lock);
+ spin_lock_bh(&ce->lock);
+ if (atomic_dec_and_test(&ce->refcnt)) {
+ list_del(&ce->list);
+ INIT_LIST_HEAD(&ce->list);
+ list_add_tail(&ce->list, &ctbl->ce_free_head);
+ atomic_inc(&ctbl->nfree);
+ if (v6)
+ clip6_release_mbox(dev, (const struct in6_addr *)lip);
+ }
+ spin_unlock_bh(&ce->lock);
+ write_unlock_bh(&ctbl->lock);
+}
+EXPORT_SYMBOL(cxgb4_clip_release);
+
+/* Retrieves IPv6 addresses from a root device (bond, vlan) associated with
+ * a physical device.
+ * The physical device reference is needed to send the actual CLIP command.
+ */
+static int cxgb4_update_dev_clip(struct net_device *root_dev,
+ struct net_device *dev)
+{
+ struct inet6_dev *idev = NULL;
+ struct inet6_ifaddr *ifa;
+ int ret = 0;
+
+ idev = __in6_dev_get(root_dev);
+ if (!idev)
+ return ret;
+
+ read_lock_bh(&idev->lock);
+ list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ ret = cxgb4_clip_get(dev, (const u32 *)ifa->addr.s6_addr, 1);
+ if (ret < 0)
+ break;
+ }
+ read_unlock_bh(&idev->lock);
+
+ return ret;
+}
+
+int cxgb4_update_root_dev_clip(struct net_device *dev)
+{
+ struct net_device *root_dev = NULL;
+ int i, ret = 0;
+
+ /* First populate the real net device's IPv6 addresses */
+ ret = cxgb4_update_dev_clip(dev, dev);
+ if (ret)
+ return ret;
+
+ /* Parse all bond and vlan devices layered on top of the physical dev */
+ root_dev = netdev_master_upper_dev_get_rcu(dev);
+ if (root_dev) {
+ ret = cxgb4_update_dev_clip(root_dev, dev);
+ if (ret)
+ return ret;
+ }
+
+ for (i = 0; i < VLAN_N_VID; i++) {
+ root_dev = __vlan_find_dev_deep_rcu(dev, htons(ETH_P_8021Q), i);
+ if (!root_dev)
+ continue;
+
+ ret = cxgb4_update_dev_clip(root_dev, dev);
+ if (ret)
+ break;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(cxgb4_update_root_dev_clip);
+
+int clip_tbl_show(struct seq_file *seq, void *v)
+{
+ struct adapter *adapter = seq->private;
+ struct clip_tbl *ctbl = adapter->clipt;
+ struct clip_entry *ce;
+ char ip[60];
+ int i;
+
+ read_lock_bh(&ctbl->lock);
+
+ seq_puts(seq, "IP Address Users\n");
+ for (i = 0 ; i < ctbl->clipt_size; ++i) {
+ list_for_each_entry(ce, &ctbl->hash_list[i], list) {
+ ip[0] = '\0';
+ if (ce->addr_len == 16)
+ sprintf(ip, "%pI6c", ce->addr);
+ else
+ sprintf(ip, "%pI4c", ce->addr);
+ seq_printf(seq, "%-25s %u\n", ip,
+ atomic_read(&ce->refcnt));
+ }
+ }
+ seq_printf(seq, "Free clip entries : %d\n", atomic_read(&ctbl->nfree));
+
+ read_unlock_bh(&ctbl->lock);
+
+ return 0;
+}
+
+struct clip_tbl *t4_init_clip_tbl(unsigned int clipt_start,
+ unsigned int clipt_end)
+{
+ struct clip_entry *cl_list;
+ struct clip_tbl *ctbl;
+ unsigned int clipt_size;
+ int i;
+
+ if (clipt_start >= clipt_end)
+ return NULL;
+ clipt_size = clipt_end - clipt_start + 1;
+ if (clipt_size < CLIPT_MIN_HASH_BUCKETS)
+ return NULL;
+
+ ctbl = t4_alloc_mem(sizeof(*ctbl) +
+ clipt_size*sizeof(struct list_head));
+ if (!ctbl)
+ return NULL;
+
+ ctbl->clipt_start = clipt_start;
+ ctbl->clipt_size = clipt_size;
+ INIT_LIST_HEAD(&ctbl->ce_free_head);
+
+ atomic_set(&ctbl->nfree, clipt_size);
+ rwlock_init(&ctbl->lock);
+
+ for (i = 0; i < ctbl->clipt_size; ++i)
+ INIT_LIST_HEAD(&ctbl->hash_list[i]);
+
+ cl_list = t4_alloc_mem(clipt_size*sizeof(struct clip_entry));
+ if (!cl_list) {
+ t4_free_mem(ctbl);
+ return NULL;
+ }
+ ctbl->cl_list = (void *)cl_list;
+
+ for (i = 0; i < clipt_size; i++) {
+ INIT_LIST_HEAD(&cl_list[i].list);
+ list_add_tail(&cl_list[i].list, &ctbl->ce_free_head);
+ }
+
+ return ctbl;
+}
+
+void t4_cleanup_clip_tbl(struct adapter *adap)
+{
+ struct clip_tbl *ctbl = adap->clipt;
+
+ if (ctbl) {
+ if (ctbl->cl_list)
+ t4_free_mem(ctbl->cl_list);
+ t4_free_mem(ctbl);
+ }
+}
+EXPORT_SYMBOL(t4_cleanup_clip_tbl);
--- /dev/null
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ * Copyright (C) 2003-2014 Chelsio Communications. All rights reserved.
+ *
+ * Written by Deepak (deepak.s@chelsio.com)
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the LICENSE file included in this
+ * release for licensing terms and conditions.
+ */
+
+struct clip_entry {
+ spinlock_t lock; /* Hold while modifying clip reference */
+ atomic_t refcnt;
+ struct list_head list;
+ u32 addr[4];
+ int addr_len;
+};
+
+struct clip_tbl {
+ unsigned int clipt_start;
+ unsigned int clipt_size;
+ rwlock_t lock;
+ atomic_t nfree;
+ struct list_head ce_free_head;
+ void *cl_list;
+ struct list_head hash_list[0];
+};
+
+enum {
+ CLIPT_MIN_HASH_BUCKETS = 2,
+};
+
+struct clip_tbl *t4_init_clip_tbl(unsigned int clipt_start,
+ unsigned int clipt_end);
+int cxgb4_clip_get(const struct net_device *dev, const u32 *lip, u8 v6);
+void cxgb4_clip_release(const struct net_device *dev, const u32 *lip, u8 v6);
+int clip_tbl_show(struct seq_file *seq, void *v);
+int cxgb4_update_root_dev_clip(struct net_device *dev);
+void t4_cleanup_clip_tbl(struct adapter *adap);
T5_LAST_REV = T5_A1,
};
+struct devlog_params {
+ u32 memtype; /* which memory (EDC0, EDC1, MC) */
+ u32 start; /* start of log in firmware memory */
+ u32 size; /* size of log */
+};
+
struct adapter_params {
struct sge_params sge;
struct tp_params tp;
struct vpd_params vpd;
struct pci_params pci;
+ struct devlog_params devlog;
+ enum pcie_memwin drv_memwin;
+
+ unsigned int cim_la_size;
unsigned int sf_size; /* serial flash size in bytes */
unsigned int sf_nsec; /* # of flash sectors */
unsigned int l2t_start;
unsigned int l2t_end;
struct l2t_data *l2t;
+ unsigned int clipt_start;
+ unsigned int clipt_end;
+ struct clip_tbl *clipt;
void *uld_handle[CXGB4_ULD_MAX];
struct list_head list_node;
struct list_head rcu_node;
int t4_seeprom_wp(struct adapter *adapter, bool enable);
int get_vpd_params(struct adapter *adapter, struct vpd_params *p);
+int t4_read_flash(struct adapter *adapter, unsigned int addr,
+ unsigned int nwords, u32 *data, int byte_oriented);
int t4_load_fw(struct adapter *adapter, const u8 *fw_data, unsigned int size);
+int t4_fwcache(struct adapter *adap, enum fw_params_param_dev_fwcache op);
int t4_fw_upgrade(struct adapter *adap, unsigned int mbox,
const u8 *fw_data, unsigned int size, int force);
unsigned int t4_flash_cfg_addr(struct adapter *adapter);
int start, int n, const u16 *rspq, unsigned int nrspq);
int t4_config_glbl_rss(struct adapter *adapter, int mbox, unsigned int mode,
unsigned int flags);
+int t4_read_rss(struct adapter *adapter, u16 *entries);
+void t4_read_rss_key(struct adapter *adapter, u32 *key);
+void t4_write_rss_key(struct adapter *adap, const u32 *key, int idx);
+void t4_read_rss_pf_config(struct adapter *adapter, unsigned int index,
+ u32 *valp);
+void t4_read_rss_vf_config(struct adapter *adapter, unsigned int index,
+ u32 *vfl, u32 *vfh);
+u32 t4_read_rss_pf_map(struct adapter *adapter);
+u32 t4_read_rss_pf_mask(struct adapter *adapter);
+
int t4_mc_read(struct adapter *adap, int idx, u32 addr, __be32 *data,
u64 *parity);
int t4_edc_read(struct adapter *adap, int idx, u32 addr, __be32 *data,
u64 *parity);
+int t4_cim_read(struct adapter *adap, unsigned int addr, unsigned int n,
+ unsigned int *valp);
+int t4_cim_write(struct adapter *adap, unsigned int addr, unsigned int n,
+ const unsigned int *valp);
+int t4_cim_read_la(struct adapter *adap, u32 *la_buf, unsigned int *wrptr);
+void t4_read_cimq_cfg(struct adapter *adap, u16 *base, u16 *size, u16 *thres);
const char *t4_get_port_type_description(enum fw_port_type port_type);
void t4_get_port_stats(struct adapter *adap, int idx, struct port_stats *p);
void t4_read_mtu_tbl(struct adapter *adap, u16 *mtus, u8 *mtu_log);
#include <linux/debugfs.h>
#include <linux/string_helpers.h>
#include <linux/sort.h>
+#include <linux/ctype.h>
#include "cxgb4.h"
#include "t4_regs.h"
#include "t4fw_api.h"
#include "cxgb4_debugfs.h"
+#include "clip_tbl.h"
#include "l2t.h"
+/* generic seq_file support for showing a table of size rows x width. */
+static void *seq_tab_get_idx(struct seq_tab *tb, loff_t pos)
+{
+ pos -= tb->skip_first;
+ return pos >= tb->rows ? NULL : &tb->data[pos * tb->width];
+}
+
+static void *seq_tab_start(struct seq_file *seq, loff_t *pos)
+{
+ struct seq_tab *tb = seq->private;
+
+ if (tb->skip_first && *pos == 0)
+ return SEQ_START_TOKEN;
+
+ return seq_tab_get_idx(tb, *pos);
+}
+
+static void *seq_tab_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ v = seq_tab_get_idx(seq->private, *pos + 1);
+ if (v)
+ ++*pos;
+ return v;
+}
+
+static void seq_tab_stop(struct seq_file *seq, void *v)
+{
+}
+
+static int seq_tab_show(struct seq_file *seq, void *v)
+{
+ const struct seq_tab *tb = seq->private;
+
+ return tb->show(seq, v, ((char *)v - tb->data) / tb->width);
+}
+
+static const struct seq_operations seq_tab_ops = {
+ .start = seq_tab_start,
+ .next = seq_tab_next,
+ .stop = seq_tab_stop,
+ .show = seq_tab_show
+};
+
+struct seq_tab *seq_open_tab(struct file *f, unsigned int rows,
+ unsigned int width, unsigned int have_header,
+ int (*show)(struct seq_file *seq, void *v, int i))
+{
+ struct seq_tab *p;
+
+ p = __seq_open_private(f, &seq_tab_ops, sizeof(*p) + rows * width);
+ if (p) {
+ p->show = show;
+ p->rows = rows;
+ p->width = width;
+ p->skip_first = have_header != 0;
+ }
+ return p;
+}
+
+static int cim_la_show(struct seq_file *seq, void *v, int idx)
+{
+ if (v == SEQ_START_TOKEN)
+ seq_puts(seq, "Status Data PC LS0Stat LS0Addr "
+ " LS0Data\n");
+ else {
+ const u32 *p = v;
+
+ seq_printf(seq,
+ " %02x %x%07x %x%07x %08x %08x %08x%08x%08x%08x\n",
+ (p[0] >> 4) & 0xff, p[0] & 0xf, p[1] >> 4,
+ p[1] & 0xf, p[2] >> 4, p[2] & 0xf, p[3], p[4], p[5],
+ p[6], p[7]);
+ }
+ return 0;
+}
+
+static int cim_la_show_3in1(struct seq_file *seq, void *v, int idx)
+{
+ if (v == SEQ_START_TOKEN) {
+ seq_puts(seq, "Status Data PC\n");
+ } else {
+ const u32 *p = v;
+
+ seq_printf(seq, " %02x %08x %08x\n", p[5] & 0xff, p[6],
+ p[7]);
+ seq_printf(seq, " %02x %02x%06x %02x%06x\n",
+ (p[3] >> 8) & 0xff, p[3] & 0xff, p[4] >> 8,
+ p[4] & 0xff, p[5] >> 8);
+ seq_printf(seq, " %02x %x%07x %x%07x\n", (p[0] >> 4) & 0xff,
+ p[0] & 0xf, p[1] >> 4, p[1] & 0xf, p[2] >> 4);
+ }
+ return 0;
+}
+
+static int cim_la_open(struct inode *inode, struct file *file)
+{
+ int ret;
+ unsigned int cfg;
+ struct seq_tab *p;
+ struct adapter *adap = inode->i_private;
+
+ ret = t4_cim_read(adap, UP_UP_DBG_LA_CFG_A, 1, &cfg);
+ if (ret)
+ return ret;
+
+ p = seq_open_tab(file, adap->params.cim_la_size / 8, 8 * sizeof(u32), 1,
+ cfg & UPDBGLACAPTPCONLY_F ?
+ cim_la_show_3in1 : cim_la_show);
+ if (!p)
+ return -ENOMEM;
+
+ ret = t4_cim_read_la(adap, (u32 *)p->data, NULL);
+ if (ret)
+ seq_release_private(inode, file);
+ return ret;
+}
+
+static const struct file_operations cim_la_fops = {
+ .owner = THIS_MODULE,
+ .open = cim_la_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private
+};
+
+static int cim_qcfg_show(struct seq_file *seq, void *v)
+{
+ static const char * const qname[] = {
+ "TP0", "TP1", "ULP", "SGE0", "SGE1", "NC-SI",
+ "ULP0", "ULP1", "ULP2", "ULP3", "SGE", "NC-SI",
+ "SGE0-RX", "SGE1-RX"
+ };
+
+ int i;
+ struct adapter *adap = seq->private;
+ u16 base[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
+ u16 size[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
+ u32 stat[(4 * (CIM_NUM_IBQ + CIM_NUM_OBQ_T5))];
+ u16 thres[CIM_NUM_IBQ];
+ u32 obq_wr_t4[2 * CIM_NUM_OBQ], *wr;
+ u32 obq_wr_t5[2 * CIM_NUM_OBQ_T5];
+ u32 *p = stat;
+ int cim_num_obq = is_t4(adap->params.chip) ?
+ CIM_NUM_OBQ : CIM_NUM_OBQ_T5;
+
+ i = t4_cim_read(adap, is_t4(adap->params.chip) ? UP_IBQ_0_RDADDR_A :
+ UP_IBQ_0_SHADOW_RDADDR_A,
+ ARRAY_SIZE(stat), stat);
+ if (!i) {
+ if (is_t4(adap->params.chip)) {
+ i = t4_cim_read(adap, UP_OBQ_0_REALADDR_A,
+ ARRAY_SIZE(obq_wr_t4), obq_wr_t4);
+ wr = obq_wr_t4;
+ } else {
+ i = t4_cim_read(adap, UP_OBQ_0_SHADOW_REALADDR_A,
+ ARRAY_SIZE(obq_wr_t5), obq_wr_t5);
+ wr = obq_wr_t5;
+ }
+ }
+ if (i)
+ return i;
+
+ t4_read_cimq_cfg(adap, base, size, thres);
+
+ seq_printf(seq,
+ " Queue Base Size Thres RdPtr WrPtr SOP EOP Avail\n");
+ for (i = 0; i < CIM_NUM_IBQ; i++, p += 4)
+ seq_printf(seq, "%7s %5x %5u %5u %6x %4x %4u %4u %5u\n",
+ qname[i], base[i], size[i], thres[i],
+ IBQRDADDR_G(p[0]), IBQWRADDR_G(p[1]),
+ QUESOPCNT_G(p[3]), QUEEOPCNT_G(p[3]),
+ QUEREMFLITS_G(p[2]) * 16);
+ for ( ; i < CIM_NUM_IBQ + cim_num_obq; i++, p += 4, wr += 2)
+ seq_printf(seq, "%7s %5x %5u %12x %4x %4u %4u %5u\n",
+ qname[i], base[i], size[i],
+ QUERDADDR_G(p[0]) & 0x3fff, wr[0] - base[i],
+ QUESOPCNT_G(p[3]), QUEEOPCNT_G(p[3]),
+ QUEREMFLITS_G(p[2]) * 16);
+ return 0;
+}
+
+static int cim_qcfg_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, cim_qcfg_show, inode->i_private);
+}
+
+static const struct file_operations cim_qcfg_fops = {
+ .owner = THIS_MODULE,
+ .open = cim_qcfg_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+/* Firmware Device Log dump. */
+static const char * const devlog_level_strings[] = {
+ [FW_DEVLOG_LEVEL_EMERG] = "EMERG",
+ [FW_DEVLOG_LEVEL_CRIT] = "CRIT",
+ [FW_DEVLOG_LEVEL_ERR] = "ERR",
+ [FW_DEVLOG_LEVEL_NOTICE] = "NOTICE",
+ [FW_DEVLOG_LEVEL_INFO] = "INFO",
+ [FW_DEVLOG_LEVEL_DEBUG] = "DEBUG"
+};
+
+static const char * const devlog_facility_strings[] = {
+ [FW_DEVLOG_FACILITY_CORE] = "CORE",
+ [FW_DEVLOG_FACILITY_SCHED] = "SCHED",
+ [FW_DEVLOG_FACILITY_TIMER] = "TIMER",
+ [FW_DEVLOG_FACILITY_RES] = "RES",
+ [FW_DEVLOG_FACILITY_HW] = "HW",
+ [FW_DEVLOG_FACILITY_FLR] = "FLR",
+ [FW_DEVLOG_FACILITY_DMAQ] = "DMAQ",
+ [FW_DEVLOG_FACILITY_PHY] = "PHY",
+ [FW_DEVLOG_FACILITY_MAC] = "MAC",
+ [FW_DEVLOG_FACILITY_PORT] = "PORT",
+ [FW_DEVLOG_FACILITY_VI] = "VI",
+ [FW_DEVLOG_FACILITY_FILTER] = "FILTER",
+ [FW_DEVLOG_FACILITY_ACL] = "ACL",
+ [FW_DEVLOG_FACILITY_TM] = "TM",
+ [FW_DEVLOG_FACILITY_QFC] = "QFC",
+ [FW_DEVLOG_FACILITY_DCB] = "DCB",
+ [FW_DEVLOG_FACILITY_ETH] = "ETH",
+ [FW_DEVLOG_FACILITY_OFLD] = "OFLD",
+ [FW_DEVLOG_FACILITY_RI] = "RI",
+ [FW_DEVLOG_FACILITY_ISCSI] = "ISCSI",
+ [FW_DEVLOG_FACILITY_FCOE] = "FCOE",
+ [FW_DEVLOG_FACILITY_FOISCSI] = "FOISCSI",
+ [FW_DEVLOG_FACILITY_FOFCOE] = "FOFCOE"
+};
+
+/* Information gathered by Device Log Open routine for the display routine.
+ */
+struct devlog_info {
+ unsigned int nentries; /* number of entries in log[] */
+ unsigned int first; /* first [temporal] entry in log[] */
+ struct fw_devlog_e log[0]; /* Firmware Device Log */
+};
+
+/* Dump a Firmware Device Log entry.
+ */
+static int devlog_show(struct seq_file *seq, void *v)
+{
+ if (v == SEQ_START_TOKEN)
+ seq_printf(seq, "%10s %15s %8s %8s %s\n",
+ "Seq#", "Tstamp", "Level", "Facility", "Message");
+ else {
+ struct devlog_info *dinfo = seq->private;
+ int fidx = (uintptr_t)v - 2;
+ unsigned long index;
+ struct fw_devlog_e *e;
+
+ /* Get a pointer to the log entry to display. Skip unused log
+ * entries.
+ */
+ index = dinfo->first + fidx;
+ if (index >= dinfo->nentries)
+ index -= dinfo->nentries;
+ e = &dinfo->log[index];
+ if (e->timestamp == 0)
+ return 0;
+
+ /* Print the message. This depends on the firmware using
+ * exactly the same formatting strings as the kernel so we may
+ * eventually have to put a format interpreter in here ...
+ */
+ seq_printf(seq, "%10d %15llu %8s %8s ",
+ e->seqno, e->timestamp,
+ (e->level < ARRAY_SIZE(devlog_level_strings)
+ ? devlog_level_strings[e->level]
+ : "UNKNOWN"),
+ (e->facility < ARRAY_SIZE(devlog_facility_strings)
+ ? devlog_facility_strings[e->facility]
+ : "UNKNOWN"));
+ seq_printf(seq, e->fmt, e->params[0], e->params[1],
+ e->params[2], e->params[3], e->params[4],
+ e->params[5], e->params[6], e->params[7]);
+ }
+ return 0;
+}
+
+/* Sequential File Operations for Device Log.
+ */
+static inline void *devlog_get_idx(struct devlog_info *dinfo, loff_t pos)
+{
+ if (pos > dinfo->nentries)
+ return NULL;
+
+ return (void *)(uintptr_t)(pos + 1);
+}
+
+static void *devlog_start(struct seq_file *seq, loff_t *pos)
+{
+ struct devlog_info *dinfo = seq->private;
+
+ return (*pos
+ ? devlog_get_idx(dinfo, *pos)
+ : SEQ_START_TOKEN);
+}
+
+static void *devlog_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ struct devlog_info *dinfo = seq->private;
+
+ (*pos)++;
+ return devlog_get_idx(dinfo, *pos);
+}
+
+static void devlog_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations devlog_seq_ops = {
+ .start = devlog_start,
+ .next = devlog_next,
+ .stop = devlog_stop,
+ .show = devlog_show
+};
+
+/* Set up for reading the firmware's device log. We read the entire log here
+ * and then display it incrementally in devlog_show().
+ */
+static int devlog_open(struct inode *inode, struct file *file)
+{
+ struct adapter *adap = inode->i_private;
+ struct devlog_params *dparams = &adap->params.devlog;
+ struct devlog_info *dinfo;
+ unsigned int index;
+ u32 fseqno;
+ int ret;
+
+ /* If we don't know where the log is we can't do anything.
+ */
+ if (dparams->start == 0)
+ return -ENXIO;
+
+ /* Allocate the space to read in the firmware's device log and set up
+ * for the iterated call to our display function.
+ */
+ dinfo = __seq_open_private(file, &devlog_seq_ops,
+ sizeof(*dinfo) + dparams->size);
+ if (!dinfo)
+ return -ENOMEM;
+
+ /* Record the basic log buffer information and read in the raw log.
+ */
+ dinfo->nentries = (dparams->size / sizeof(struct fw_devlog_e));
+ dinfo->first = 0;
+ spin_lock(&adap->win0_lock);
+ ret = t4_memory_rw(adap, adap->params.drv_memwin, dparams->memtype,
+ dparams->start, dparams->size, (__be32 *)dinfo->log,
+ T4_MEMORY_READ);
+ spin_unlock(&adap->win0_lock);
+ if (ret) {
+ seq_release_private(inode, file);
+ return ret;
+ }
+
+ /* Translate log multi-byte integral elements into host native format
+ * and determine where the first entry in the log is.
+ */
+ for (fseqno = ~((u32)0), index = 0; index < dinfo->nentries; index++) {
+ struct fw_devlog_e *e = &dinfo->log[index];
+ int i;
+ __u32 seqno;
+
+ if (e->timestamp == 0)
+ continue;
+
+ e->timestamp = (__force __be64)be64_to_cpu(e->timestamp);
+ seqno = be32_to_cpu(e->seqno);
+ for (i = 0; i < 8; i++)
+ e->params[i] =
+ (__force __be32)be32_to_cpu(e->params[i]);
+
+ if (seqno < fseqno) {
+ fseqno = seqno;
+ dinfo->first = index;
+ }
+ }
+ return 0;
+}
+
+static const struct file_operations devlog_fops = {
+ .owner = THIS_MODULE,
+ .open = devlog_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private
+};
+
+static ssize_t flash_read(struct file *file, char __user *buf, size_t count,
+ loff_t *ppos)
+{
+ loff_t pos = *ppos;
+ loff_t avail = FILE_DATA(file)->i_size;
+ struct adapter *adap = file->private_data;
+
+ if (pos < 0)
+ return -EINVAL;
+ if (pos >= avail)
+ return 0;
+ if (count > avail - pos)
+ count = avail - pos;
+
+ while (count) {
+ size_t len;
+ int ret, ofst;
+ u8 data[256];
+
+ ofst = pos & 3;
+ len = min(count + ofst, sizeof(data));
+ ret = t4_read_flash(adap, pos - ofst, (len + 3) / 4,
+ (u32 *)data, 1);
+ if (ret)
+ return ret;
+
+ len -= ofst;
+ if (copy_to_user(buf, data + ofst, len))
+ return -EFAULT;
+
+ buf += len;
+ pos += len;
+ count -= len;
+ }
+ count = pos - *ppos;
+ *ppos = pos;
+ return count;
+}
+
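A worked example of the word-alignment handling in flash_read() above, with hypothetical values (a 3-byte read at file position 6):

	/* pos = 6, count = 3
	 *
	 *   ofst = 6 & 3 = 2            byte offset inside the word
	 *   len  = min(3 + 2, 256) = 5  bytes spanning two words
	 *   t4_read_flash(adap, 6 - 2 = 4, (5 + 3) / 4 = 2, ...) reads
	 *     the 32-bit words at flash offsets 4 and 8
	 *   len -= ofst            ->  3 bytes copied from data[2..4]
	 */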
+static const struct file_operations flash_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = mem_open,
+ .read = flash_read,
+};
+
+static inline void tcamxy2valmask(u64 x, u64 y, u8 *addr, u64 *mask)
+{
+ *mask = x | y;
+ y = (__force u64)cpu_to_be64(y);
+ memcpy(addr, (char *)&y + 2, ETH_ALEN);
+}
+
+static int mps_tcam_show(struct seq_file *seq, void *v)
+{
+ if (v == SEQ_START_TOKEN)
+ seq_puts(seq, "Idx Ethernet address Mask Vld Ports PF"
+ " VF Replication "
+ "P0 P1 P2 P3 ML\n");
+ else {
+ u64 mask;
+ u8 addr[ETH_ALEN];
+ struct adapter *adap = seq->private;
+ unsigned int idx = (uintptr_t)v - 2;
+ u64 tcamy = t4_read_reg64(adap, MPS_CLS_TCAM_Y_L(idx));
+ u64 tcamx = t4_read_reg64(adap, MPS_CLS_TCAM_X_L(idx));
+ u32 cls_lo = t4_read_reg(adap, MPS_CLS_SRAM_L(idx));
+ u32 cls_hi = t4_read_reg(adap, MPS_CLS_SRAM_H(idx));
+ u32 rplc[4] = {0, 0, 0, 0};
+
+ if (tcamx & tcamy) {
+ seq_printf(seq, "%3u -\n", idx);
+ goto out;
+ }
+
+ if (cls_lo & REPLICATE_F) {
+ struct fw_ldst_cmd ldst_cmd;
+ int ret;
+
+ memset(&ldst_cmd, 0, sizeof(ldst_cmd));
+ ldst_cmd.op_to_addrspace =
+ htonl(FW_CMD_OP_V(FW_LDST_CMD) |
+ FW_CMD_REQUEST_F |
+ FW_CMD_READ_F |
+ FW_LDST_CMD_ADDRSPACE_V(
+ FW_LDST_ADDRSPC_MPS));
+ ldst_cmd.cycles_to_len16 = htonl(FW_LEN16(ldst_cmd));
+ ldst_cmd.u.mps.fid_ctl =
+ htons(FW_LDST_CMD_FID_V(FW_LDST_MPS_RPLC) |
+ FW_LDST_CMD_CTL_V(idx));
+ ret = t4_wr_mbox(adap, adap->mbox, &ldst_cmd,
+ sizeof(ldst_cmd), &ldst_cmd);
+ if (ret)
+ dev_warn(adap->pdev_dev, "Can't read MPS "
+ "replication map for idx %d: %d\n",
+ idx, -ret);
+ else {
+ rplc[0] = ntohl(ldst_cmd.u.mps.rplc31_0);
+ rplc[1] = ntohl(ldst_cmd.u.mps.rplc63_32);
+ rplc[2] = ntohl(ldst_cmd.u.mps.rplc95_64);
+ rplc[3] = ntohl(ldst_cmd.u.mps.rplc127_96);
+ }
+ }
+
+ tcamxy2valmask(tcamx, tcamy, addr, &mask);
+ seq_printf(seq, "%3u %02x:%02x:%02x:%02x:%02x:%02x %012llx"
+ "%3c %#x%4u%4d",
+ idx, addr[0], addr[1], addr[2], addr[3], addr[4],
+ addr[5], (unsigned long long)mask,
+ (cls_lo & SRAM_VLD_F) ? 'Y' : 'N', PORTMAP_G(cls_hi),
+ PF_G(cls_lo),
+ (cls_lo & VF_VALID_F) ? VF_G(cls_lo) : -1);
+ if (cls_lo & REPLICATE_F)
+ seq_printf(seq, " %08x %08x %08x %08x",
+ rplc[3], rplc[2], rplc[1], rplc[0]);
+ else
+ seq_printf(seq, "%36c", ' ');
+ seq_printf(seq, "%4u%3u%3u%3u %#x\n",
+ SRAM_PRIO0_G(cls_lo), SRAM_PRIO1_G(cls_lo),
+ SRAM_PRIO2_G(cls_lo), SRAM_PRIO3_G(cls_lo),
+ (cls_lo >> MULTILISTEN0_S) & 0xf);
+ }
+out: return 0;
+}
+
+static inline void *mps_tcam_get_idx(struct seq_file *seq, loff_t pos)
+{
+ struct adapter *adap = seq->private;
+ int max_mac_addr = is_t4(adap->params.chip) ?
+ NUM_MPS_CLS_SRAM_L_INSTANCES :
+ NUM_MPS_T5_CLS_SRAM_L_INSTANCES;
+ return ((pos <= max_mac_addr) ? (void *)(uintptr_t)(pos + 1) : NULL);
+}
+
+static void *mps_tcam_start(struct seq_file *seq, loff_t *pos)
+{
+ return *pos ? mps_tcam_get_idx(seq, *pos) : SEQ_START_TOKEN;
+}
+
+static void *mps_tcam_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ ++*pos;
+ return mps_tcam_get_idx(seq, *pos);
+}
+
+static void mps_tcam_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations mps_tcam_seq_ops = {
+ .start = mps_tcam_start,
+ .next = mps_tcam_next,
+ .stop = mps_tcam_stop,
+ .show = mps_tcam_show
+};
+
+static int mps_tcam_open(struct inode *inode, struct file *file)
+{
+ int res = seq_open(file, &mps_tcam_seq_ops);
+
+ if (!res) {
+ struct seq_file *seq = file->private_data;
+
+ seq->private = inode->i_private;
+ }
+ return res;
+}
+
+static const struct file_operations mps_tcam_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = mps_tcam_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+#if IS_ENABLED(CONFIG_IPV6)
+static int clip_tbl_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, clip_tbl_show, PDE_DATA(inode));
+}
+
+static const struct file_operations clip_tbl_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = clip_tbl_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release
+};
+#endif
+
+/* RSS Table.
+ */
+
+static int rss_show(struct seq_file *seq, void *v, int idx)
+{
+ u16 *entry = v;
+
+ seq_printf(seq, "%4d: %4u %4u %4u %4u %4u %4u %4u %4u\n",
+ idx * 8, entry[0], entry[1], entry[2], entry[3], entry[4],
+ entry[5], entry[6], entry[7]);
+ return 0;
+}
+
+static int rss_open(struct inode *inode, struct file *file)
+{
+ int ret;
+ struct seq_tab *p;
+ struct adapter *adap = inode->i_private;
+
+ p = seq_open_tab(file, RSS_NENTRIES / 8, 8 * sizeof(u16), 0, rss_show);
+ if (!p)
+ return -ENOMEM;
+
+ ret = t4_read_rss(adap, (u16 *)p->data);
+ if (ret)
+ seq_release_private(inode, file);
+
+ return ret;
+}
+
+static const struct file_operations rss_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = rss_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private
+};
+
+/* RSS Configuration.
+ */
+
+/* Small utility function to return the strings "yes" or "no" if the supplied
+ * argument is non-zero.
+ */
+static const char *yesno(int x)
+{
+ static const char *yes = "yes";
+ static const char *no = "no";
+
+ return x ? yes : no;
+}
+
+static int rss_config_show(struct seq_file *seq, void *v)
+{
+ struct adapter *adapter = seq->private;
+ static const char * const keymode[] = {
+ "global",
+ "global and per-VF scramble",
+ "per-PF and per-VF scramble",
+ "per-VF and per-VF scramble",
+ };
+ u32 rssconf;
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_A);
+ seq_printf(seq, "TP_RSS_CONFIG: %#x\n", rssconf);
+ seq_printf(seq, " Tnl4TupEnIpv6: %3s\n", yesno(rssconf &
+ TNL4TUPENIPV6_F));
+ seq_printf(seq, " Tnl2TupEnIpv6: %3s\n", yesno(rssconf &
+ TNL2TUPENIPV6_F));
+ seq_printf(seq, " Tnl4TupEnIpv4: %3s\n", yesno(rssconf &
+ TNL4TUPENIPV4_F));
+ seq_printf(seq, " Tnl2TupEnIpv4: %3s\n", yesno(rssconf &
+ TNL2TUPENIPV4_F));
+ seq_printf(seq, " TnlTcpSel: %3s\n", yesno(rssconf & TNLTCPSEL_F));
+ seq_printf(seq, " TnlIp6Sel: %3s\n", yesno(rssconf & TNLIP6SEL_F));
+ seq_printf(seq, " TnlVrtSel: %3s\n", yesno(rssconf & TNLVRTSEL_F));
+ seq_printf(seq, " TnlMapEn: %3s\n", yesno(rssconf & TNLMAPEN_F));
+ seq_printf(seq, " OfdHashSave: %3s\n", yesno(rssconf &
+ OFDHASHSAVE_F));
+ seq_printf(seq, " OfdVrtSel: %3s\n", yesno(rssconf & OFDVRTSEL_F));
+ seq_printf(seq, " OfdMapEn: %3s\n", yesno(rssconf & OFDMAPEN_F));
+ seq_printf(seq, " OfdLkpEn: %3s\n", yesno(rssconf & OFDLKPEN_F));
+ seq_printf(seq, " Syn4TupEnIpv6: %3s\n", yesno(rssconf &
+ SYN4TUPENIPV6_F));
+ seq_printf(seq, " Syn2TupEnIpv6: %3s\n", yesno(rssconf &
+ SYN2TUPENIPV6_F));
+ seq_printf(seq, " Syn4TupEnIpv4: %3s\n", yesno(rssconf &
+ SYN4TUPENIPV4_F));
+ seq_printf(seq, " Syn2TupEnIpv4: %3s\n", yesno(rssconf &
+ SYN2TUPENIPV4_F));
+ seq_printf(seq, " Syn4TupEnIpv6: %3s\n", yesno(rssconf &
+ SYN4TUPENIPV6_F));
+ seq_printf(seq, " SynIp6Sel: %3s\n", yesno(rssconf & SYNIP6SEL_F));
+ seq_printf(seq, " SynVrtSel: %3s\n", yesno(rssconf & SYNVRTSEL_F));
+ seq_printf(seq, " SynMapEn: %3s\n", yesno(rssconf & SYNMAPEN_F));
+ seq_printf(seq, " SynLkpEn: %3s\n", yesno(rssconf & SYNLKPEN_F));
+ seq_printf(seq, " ChnEn: %3s\n", yesno(rssconf &
+ CHANNELENABLE_F));
+ seq_printf(seq, " PrtEn: %3s\n", yesno(rssconf &
+ PORTENABLE_F));
+ seq_printf(seq, " TnlAllLkp: %3s\n", yesno(rssconf &
+ TNLALLLOOKUP_F));
+ seq_printf(seq, " VrtEn: %3s\n", yesno(rssconf &
+ VIRTENABLE_F));
+ seq_printf(seq, " CngEn: %3s\n", yesno(rssconf &
+ CONGESTIONENABLE_F));
+ seq_printf(seq, " HashToeplitz: %3s\n", yesno(rssconf &
+ HASHTOEPLITZ_F));
+ seq_printf(seq, " Udp4En: %3s\n", yesno(rssconf & UDPENABLE_F));
+ seq_printf(seq, " Disable: %3s\n", yesno(rssconf & DISABLE_F));
+
+ seq_puts(seq, "\n");
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_TNL_A);
+ seq_printf(seq, "TP_RSS_CONFIG_TNL: %#x\n", rssconf);
+ seq_printf(seq, " MaskSize: %3d\n", MASKSIZE_G(rssconf));
+ seq_printf(seq, " MaskFilter: %3d\n", MASKFILTER_G(rssconf));
+ if (CHELSIO_CHIP_VERSION(adapter->params.chip) > CHELSIO_T5) {
+ seq_printf(seq, " HashAll: %3s\n",
+ yesno(rssconf & HASHALL_F));
+ seq_printf(seq, " HashEth: %3s\n",
+ yesno(rssconf & HASHETH_F));
+ }
+ seq_printf(seq, " UseWireCh: %3s\n", yesno(rssconf & USEWIRECH_F));
+
+ seq_puts(seq, "\n");
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_OFD_A);
+ seq_printf(seq, "TP_RSS_CONFIG_OFD: %#x\n", rssconf);
+ seq_printf(seq, " MaskSize: %3d\n", MASKSIZE_G(rssconf));
+ seq_printf(seq, " RRCplMapEn: %3s\n", yesno(rssconf &
+ RRCPLMAPEN_F));
+ seq_printf(seq, " RRCplQueWidth: %3d\n", RRCPLQUEWIDTH_G(rssconf));
+
+ seq_puts(seq, "\n");
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_SYN_A);
+ seq_printf(seq, "TP_RSS_CONFIG_SYN: %#x\n", rssconf);
+ seq_printf(seq, " MaskSize: %3d\n", MASKSIZE_G(rssconf));
+ seq_printf(seq, " UseWireCh: %3s\n", yesno(rssconf & USEWIRECH_F));
+
+ seq_puts(seq, "\n");
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_VRT_A);
+ seq_printf(seq, "TP_RSS_CONFIG_VRT: %#x\n", rssconf);
+ if (CHELSIO_CHIP_VERSION(adapter->params.chip) > CHELSIO_T5) {
+ seq_printf(seq, " KeyWrAddrX: %3d\n",
+ KEYWRADDRX_G(rssconf));
+ seq_printf(seq, " KeyExtend: %3s\n",
+ yesno(rssconf & KEYEXTEND_F));
+ }
+ seq_printf(seq, " VfRdRg: %3s\n", yesno(rssconf & VFRDRG_F));
+ seq_printf(seq, " VfRdEn: %3s\n", yesno(rssconf & VFRDEN_F));
+ seq_printf(seq, " VfPerrEn: %3s\n", yesno(rssconf & VFPERREN_F));
+ seq_printf(seq, " KeyPerrEn: %3s\n", yesno(rssconf & KEYPERREN_F));
+ seq_printf(seq, " DisVfVlan: %3s\n", yesno(rssconf &
+ DISABLEVLAN_F));
+ seq_printf(seq, " EnUpSwt: %3s\n", yesno(rssconf & ENABLEUP0_F));
+ seq_printf(seq, " HashDelay: %3d\n", HASHDELAY_G(rssconf));
+ if (CHELSIO_CHIP_VERSION(adapter->params.chip) <= CHELSIO_T5)
+ seq_printf(seq, " VfWrAddr: %3d\n", VFWRADDR_G(rssconf));
+ seq_printf(seq, " KeyMode: %s\n", keymode[KEYMODE_G(rssconf)]);
+ seq_printf(seq, " VfWrEn: %3s\n", yesno(rssconf & VFWREN_F));
+ seq_printf(seq, " KeyWrEn: %3s\n", yesno(rssconf & KEYWREN_F));
+ seq_printf(seq, " KeyWrAddr: %3d\n", KEYWRADDR_G(rssconf));
+
+ seq_puts(seq, "\n");
+
+ rssconf = t4_read_reg(adapter, TP_RSS_CONFIG_CNG_A);
+ seq_printf(seq, "TP_RSS_CONFIG_CNG: %#x\n", rssconf);
+ seq_printf(seq, " ChnCount3: %3s\n", yesno(rssconf & CHNCOUNT3_F));
+ seq_printf(seq, " ChnCount2: %3s\n", yesno(rssconf & CHNCOUNT2_F));
+ seq_printf(seq, " ChnCount1: %3s\n", yesno(rssconf & CHNCOUNT1_F));
+ seq_printf(seq, " ChnCount0: %3s\n", yesno(rssconf & CHNCOUNT0_F));
+ seq_printf(seq, " ChnUndFlow3: %3s\n", yesno(rssconf &
+ CHNUNDFLOW3_F));
+ seq_printf(seq, " ChnUndFlow2: %3s\n", yesno(rssconf &
+ CHNUNDFLOW2_F));
+ seq_printf(seq, " ChnUndFlow1: %3s\n", yesno(rssconf &
+ CHNUNDFLOW1_F));
+ seq_printf(seq, " ChnUndFlow0: %3s\n", yesno(rssconf &
+ CHNUNDFLOW0_F));
+ seq_printf(seq, " RstChn3: %3s\n", yesno(rssconf & RSTCHN3_F));
+ seq_printf(seq, " RstChn2: %3s\n", yesno(rssconf & RSTCHN2_F));
+ seq_printf(seq, " RstChn1: %3s\n", yesno(rssconf & RSTCHN1_F));
+ seq_printf(seq, " RstChn0: %3s\n", yesno(rssconf & RSTCHN0_F));
+ seq_printf(seq, " UpdVld: %3s\n", yesno(rssconf & UPDVLD_F));
+ seq_printf(seq, " Xoff: %3s\n", yesno(rssconf & XOFF_F));
+ seq_printf(seq, " UpdChn3: %3s\n", yesno(rssconf & UPDCHN3_F));
+ seq_printf(seq, " UpdChn2: %3s\n", yesno(rssconf & UPDCHN2_F));
+ seq_printf(seq, " UpdChn1: %3s\n", yesno(rssconf & UPDCHN1_F));
+ seq_printf(seq, " UpdChn0: %3s\n", yesno(rssconf & UPDCHN0_F));
+ seq_printf(seq, " Queue: %3d\n", QUEUE_G(rssconf));
+
+ return 0;
+}
+
+DEFINE_SIMPLE_DEBUGFS_FILE(rss_config);
+
+/* RSS Secret Key.
+ */
+
+static int rss_key_show(struct seq_file *seq, void *v)
+{
+ u32 key[10];
+
+ t4_read_rss_key(seq->private, key);
+ seq_printf(seq, "%08x%08x%08x%08x%08x%08x%08x%08x%08x%08x\n",
+ key[9], key[8], key[7], key[6], key[5], key[4], key[3],
+ key[2], key[1], key[0]);
+ return 0;
+}
+
+static int rss_key_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, rss_key_show, inode->i_private);
+}
+
+static ssize_t rss_key_write(struct file *file, const char __user *buf,
+ size_t count, loff_t *pos)
+{
+ int i, j;
+ u32 key[10];
+ char s[100], *p;
+ struct adapter *adap = FILE_DATA(file)->i_private;
+
+ if (count > sizeof(s) - 1)
+ return -EINVAL;
+ if (copy_from_user(s, buf, count))
+ return -EFAULT;
+ for (i = count; i > 0 && isspace(s[i - 1]); i--)
+ ;
+ s[i] = '\0';
+
+ for (p = s, i = 9; i >= 0; i--) {
+ key[i] = 0;
+ for (j = 0; j < 8; j++, p++) {
+ if (!isxdigit(*p))
+ return -EINVAL;
+ key[i] = (key[i] << 4) | hex2val(*p);
+ }
+ }
+
+ t4_write_rss_key(adap, key, -1);
+ return count;
+}
+
+static const struct file_operations rss_key_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = rss_key_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+ .write = rss_key_write
+};
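
For illustration, a minimal user-space sketch of exercising the rss_key write
handler above: the key is supplied as 80 hex digits, most-significant 32-bit
word first, which is why the parser fills key[9] down to key[0]. The debugfs
path is an assumption; the actual directory name depends on the adapter's PCI
address.

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical path to the rss_key node registered below */
		FILE *f = fopen("/sys/kernel/debug/cxgb4/0000:01:00.4/rss_key",
				"w");

		if (!f)
			return 1;
		/* 10 x 32-bit words = 80 hex digits; all-zero key for demo */
		fprintf(f, "%080x\n", 0);
		fclose(f);
		return 0;
	}
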
+
+/* PF RSS Configuration.
+ */
+
+struct rss_pf_conf {
+ u32 rss_pf_map;
+ u32 rss_pf_mask;
+ u32 rss_pf_config;
+};
+
+static int rss_pf_config_show(struct seq_file *seq, void *v, int idx)
+{
+ struct rss_pf_conf *pfconf;
+
+ if (v == SEQ_START_TOKEN) {
+ /* use the 0th entry to dump the PF Map Index Size */
+ pfconf = seq->private + offsetof(struct seq_tab, data);
+ seq_printf(seq, "PF Map Index Size = %d\n\n",
+ LKPIDXSIZE_G(pfconf->rss_pf_map));
+
+ seq_puts(seq, " RSS PF VF Hash Tuple Enable Default\n");
+ seq_puts(seq, " Enable IVF Mask Mask IPv6 IPv4 UDP Queue\n");
+ seq_puts(seq, " PF Map Chn Prt Map Size Size Four Two Four Two Four Ch1 Ch0\n");
+ } else {
+ #define G_PFnLKPIDX(map, n) \
+ (((map) >> PF1LKPIDX_S*(n)) & PF0LKPIDX_M)
+ #define G_PFnMSKSIZE(mask, n) \
+ (((mask) >> PF1MSKSIZE_S*(n)) & PF1MSKSIZE_M)
+
+ pfconf = v;
+ seq_printf(seq, "%3d %3s %3s %3s %3d %3d %3d %3s %3s %3s %3s %3s %3d %3d\n",
+ idx,
+ yesno(pfconf->rss_pf_config & MAPENABLE_F),
+ yesno(pfconf->rss_pf_config & CHNENABLE_F),
+ yesno(pfconf->rss_pf_config & PRTENABLE_F),
+ G_PFnLKPIDX(pfconf->rss_pf_map, idx),
+ G_PFnMSKSIZE(pfconf->rss_pf_mask, idx),
+ IVFWIDTH_G(pfconf->rss_pf_config),
+ yesno(pfconf->rss_pf_config & IP6FOURTUPEN_F),
+ yesno(pfconf->rss_pf_config & IP6TWOTUPEN_F),
+ yesno(pfconf->rss_pf_config & IP4FOURTUPEN_F),
+ yesno(pfconf->rss_pf_config & IP4TWOTUPEN_F),
+ yesno(pfconf->rss_pf_config & UDPFOURTUPEN_F),
+ CH1DEFAULTQUEUE_G(pfconf->rss_pf_config),
+ CH0DEFAULTQUEUE_G(pfconf->rss_pf_config));
+
+ #undef G_PFnLKPIDX
+ #undef G_PFnMSKSIZE
+ }
+ return 0;
+}
+
+static int rss_pf_config_open(struct inode *inode, struct file *file)
+{
+ struct adapter *adapter = inode->i_private;
+ struct seq_tab *p;
+ u32 rss_pf_map, rss_pf_mask;
+ struct rss_pf_conf *pfconf;
+ int pf;
+
+ p = seq_open_tab(file, 8, sizeof(*pfconf), 1, rss_pf_config_show);
+ if (!p)
+ return -ENOMEM;
+
+ pfconf = (struct rss_pf_conf *)p->data;
+ rss_pf_map = t4_read_rss_pf_map(adapter);
+ rss_pf_mask = t4_read_rss_pf_mask(adapter);
+ for (pf = 0; pf < 8; pf++) {
+ pfconf[pf].rss_pf_map = rss_pf_map;
+ pfconf[pf].rss_pf_mask = rss_pf_mask;
+ t4_read_rss_pf_config(adapter, pf, &pfconf[pf].rss_pf_config);
+ }
+ return 0;
+}
+
+static const struct file_operations rss_pf_config_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = rss_pf_config_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private
+};
+
+/* VF RSS Configuration.
+ */
+
+struct rss_vf_conf {
+ u32 rss_vf_vfl;
+ u32 rss_vf_vfh;
+};
+
+static int rss_vf_config_show(struct seq_file *seq, void *v, int idx)
+{
+ if (v == SEQ_START_TOKEN) {
+ seq_puts(seq, " RSS Hash Tuple Enable\n");
+ seq_puts(seq, " Enable IVF Dis Enb IPv6 IPv4 UDP Def Secret Key\n");
+ seq_puts(seq, " VF Chn Prt Map VLAN uP Four Two Four Two Four Que Idx Hash\n");
+ } else {
+ struct rss_vf_conf *vfconf = v;
+
+ seq_printf(seq, "%3d %3s %3s %3d %3s %3s %3s %3s %3s %3s %3s %4d %3d %#10x\n",
+ idx,
+ yesno(vfconf->rss_vf_vfh & VFCHNEN_F),
+ yesno(vfconf->rss_vf_vfh & VFPRTEN_F),
+ VFLKPIDX_G(vfconf->rss_vf_vfh),
+ yesno(vfconf->rss_vf_vfh & VFVLNEX_F),
+ yesno(vfconf->rss_vf_vfh & VFUPEN_F),
+ yesno(vfconf->rss_vf_vfh & VFIP6FOURTUPEN_F),
+ yesno(vfconf->rss_vf_vfh & VFIP6TWOTUPEN_F),
+ yesno(vfconf->rss_vf_vfh & VFIP4FOURTUPEN_F),
+ yesno(vfconf->rss_vf_vfh & VFIP4TWOTUPEN_F),
+ yesno(vfconf->rss_vf_vfh & ENABLEUDPHASH_F),
+ DEFAULTQUEUE_G(vfconf->rss_vf_vfh),
+ KEYINDEX_G(vfconf->rss_vf_vfh),
+ vfconf->rss_vf_vfl);
+ }
+ return 0;
+}
+
+static int rss_vf_config_open(struct inode *inode, struct file *file)
+{
+ struct adapter *adapter = inode->i_private;
+ struct seq_tab *p;
+ struct rss_vf_conf *vfconf;
+ int vf;
+
+ p = seq_open_tab(file, 128, sizeof(*vfconf), 1, rss_vf_config_show);
+ if (!p)
+ return -ENOMEM;
+
+ vfconf = (struct rss_vf_conf *)p->data;
+ for (vf = 0; vf < 128; vf++) {
+ t4_read_rss_vf_config(adapter, vf, &vfconf[vf].rss_vf_vfl,
+ &vfconf[vf].rss_vf_vfh);
+ }
+ return 0;
+}
+
+static const struct file_operations rss_vf_config_debugfs_fops = {
+ .owner = THIS_MODULE,
+ .open = rss_vf_config_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release_private
+};
+
+int mem_open(struct inode *inode, struct file *file)
+{
+ unsigned int mem;
+ struct adapter *adap;
+
+ file->private_data = inode->i_private;
+
+ mem = (uintptr_t)file->private_data & 0x3;
+ adap = file->private_data - mem;
+
+ (void)t4_fwcache(adap, FW_PARAM_DEV_FWCACHE_FLUSH);
+
+ return 0;
+}
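
mem_open() above recovers both the adapter pointer and a small memory-type
index from a single debugfs private pointer. A sketch of the registration
side, assuming the convention that the index is packed into the low two bits
of the (more than 4-byte aligned) adapter pointer, presumably the way
add_debugfs_mem() sets it up:

	/* Sketch only: tag the low bits of the adapter pointer with the
	 * memory index so one private pointer carries both values. */
	static void add_mem_file(struct adapter *adap, const char *name,
				 unsigned int idx)
	{
		debugfs_create_file(name, S_IRUSR, adap->debugfs_root,
				    (void *)adap + idx, /* idx < 4 */
				    &mem_debugfs_fops);
	}
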
+
static ssize_t mem_read(struct file *file, char __user *buf, size_t count,
loff_t *ppos)
{
*ppos = pos + count;
return count;
}
-
static const struct file_operations mem_debugfs_fops = {
.owner = THIS_MODULE,
.open = simple_open,
.llseek = default_llseek,
};
+static void set_debugfs_file_size(struct dentry *de, loff_t size)
+{
+ if (!IS_ERR(de) && de->d_inode)
+ de->d_inode->i_size = size;
+}
+
static void add_debugfs_mem(struct adapter *adap, const char *name,
unsigned int idx, unsigned int size_mb)
{
{
int i;
u32 size;
+ struct dentry *de;
static struct t4_debugfs_entry t4_debugfs_files[] = {
+ { "cim_la", &cim_la_fops, S_IRUSR, 0 },
+ { "cim_qcfg", &cim_qcfg_fops, S_IRUSR, 0 },
+ { "devlog", &devlog_fops, S_IRUSR, 0 },
{ "l2t", &t4_l2t_fops, S_IRUSR, 0},
+ { "mps_tcam", &mps_tcam_debugfs_fops, S_IRUSR, 0 },
+ { "rss", &rss_debugfs_fops, S_IRUSR, 0 },
+ { "rss_config", &rss_config_debugfs_fops, S_IRUSR, 0 },
+ { "rss_key", &rss_key_debugfs_fops, S_IRUSR, 0 },
+ { "rss_pf_config", &rss_pf_config_debugfs_fops, S_IRUSR, 0 },
+ { "rss_vf_config", &rss_vf_config_debugfs_fops, S_IRUSR, 0 },
+#if IS_ENABLED(CONFIG_IPV6)
+ { "clip_tbl", &clip_tbl_debugfs_fops, S_IRUSR, 0 },
+#endif
};
add_debugfs_files(adap,
EXT_MEM1_SIZE_G(size));
}
}
+
+ de = debugfs_create_file("flash", S_IRUSR, adap->debugfs_root, adap,
+ &flash_debugfs_fops);
+ set_debugfs_file_size(de, adap->params.sf_size);
+
return 0;
}
#include <linux/export.h>
+#define FILE_DATA(_file) ((_file)->f_path.dentry->d_inode)
+
+#define DEFINE_SIMPLE_DEBUGFS_FILE(name) \
+static int name##_open(struct inode *inode, struct file *file) \
+{ \
+ return single_open(file, name##_show, inode->i_private); \
+} \
+static const struct file_operations name##_debugfs_fops = { \
+ .owner = THIS_MODULE, \
+ .open = name##_open, \
+ .read = seq_read, \
+ .llseek = seq_lseek, \
+ .release = single_release \
+}
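
For reference, DEFINE_SIMPLE_DEBUGFS_FILE(rss_config) used earlier expands,
per the definition above, to the following open handler and file_operations
pair:

	static int rss_config_open(struct inode *inode, struct file *file)
	{
		return single_open(file, rss_config_show, inode->i_private);
	}
	static const struct file_operations rss_config_debugfs_fops = {
		.owner   = THIS_MODULE,
		.open    = rss_config_open,
		.read    = seq_read,
		.llseek  = seq_lseek,
		.release = single_release
	};
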
+
struct t4_debugfs_entry {
const char *name;
const struct file_operations *ops;
unsigned char data;
};
+struct seq_tab {
+ int (*show)(struct seq_file *seq, void *v, int idx);
+ unsigned int rows; /* # of entries */
+ unsigned char width; /* size in bytes of each entry */
+ unsigned char skip_first; /* whether the first line is a header */
+ char data[0]; /* the table data */
+};
+
+static inline unsigned int hex2val(char c)
+{
+ return isdigit(c) ? c - '0' : tolower(c) - 'a' + 10;
+}
+
+struct seq_tab *seq_open_tab(struct file *f, unsigned int rows,
+ unsigned int width, unsigned int have_header,
+ int (*show)(struct seq_file *seq, void *v, int i));
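
seq_open_tab() is implemented in cxgb4_debugfs.c; it allocates a seq_tab
sized for "rows" entries of "width" bytes and arranges for show() to be
called once per row, preceded by SEQ_START_TOKEN when have_header is set. A
hedged sketch of the position-to-entry mapping this implies (illustrative,
not the actual implementation):

	static void *seq_tab_get_idx(struct seq_tab *tb, loff_t pos)
	{
		pos -= tb->skip_first;
		if (pos < 0)
			return SEQ_START_TOKEN;	/* header line */
		return pos < tb->rows ? tb->data + pos * tb->width : NULL;
	}
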
+
int t4_setup_debugfs(struct adapter *adap);
void add_debugfs_files(struct adapter *adap,
struct t4_debugfs_entry *files,
unsigned int nfiles);
+int mem_open(struct inode *inode, struct file *file);
#endif
#include <net/netevent.h>
#include <net/addrconf.h>
#include <net/bonding.h>
#include <asm/uaccess.h>
#include "cxgb4.h"
#include "t4_regs.h"
+#include "t4_values.h"
#include "t4_msg.h"
#include "t4fw_api.h"
#include "cxgb4_dcb.h"
#include "cxgb4_debugfs.h"
+#include "clip_tbl.h"
#include "l2t.h"
#ifdef DRV_VERSION
#define DRV_VERSION "2.0.0-ko"
#define DRV_DESC "Chelsio T4/T5 Network Driver"
-/*
- * Max interrupt hold-off timer value in us. Queues fall back to this value
- * under extreme memory pressure so it's largish to give the system time to
- * recover.
- */
-#define MAX_SGE_TIMERVAL 200U
-
-enum {
- /*
- * Physical Function provisioning constants.
- */
- PFRES_NVI = 4, /* # of Virtual Interfaces */
- PFRES_NETHCTRL = 128, /* # of EQs used for ETH or CTRL Qs */
- PFRES_NIQFLINT = 128, /* # of ingress Qs/w Free List(s)/intr
- */
- PFRES_NEQ = 256, /* # of egress queues */
- PFRES_NIQ = 0, /* # of ingress queues */
- PFRES_TC = 0, /* PCI-E traffic class */
- PFRES_NEXACTF = 128, /* # of exact MPS filters */
-
- PFRES_R_CAPS = FW_CMD_CAP_PF,
- PFRES_WX_CAPS = FW_CMD_CAP_PF,
-
-#ifdef CONFIG_PCI_IOV
- /*
- * Virtual Function provisioning constants. We need two extra Ingress
- * Queues with Interrupt capability to serve as the VF's Firmware
- * Event Queue and Forwarded Interrupt Queue (when using MSI mode) --
- * neither will have Free Lists associated with them). For each
- * Ethernet/Control Egress Queue and for each Free List, we need an
- * Egress Context.
- */
- VFRES_NPORTS = 1, /* # of "ports" per VF */
- VFRES_NQSETS = 2, /* # of "Queue Sets" per VF */
-
- VFRES_NVI = VFRES_NPORTS, /* # of Virtual Interfaces */
- VFRES_NETHCTRL = VFRES_NQSETS, /* # of EQs used for ETH or CTRL Qs */
- VFRES_NIQFLINT = VFRES_NQSETS+2,/* # of ingress Qs/w Free List(s)/intr */
- VFRES_NEQ = VFRES_NQSETS*2, /* # of egress queues */
- VFRES_NIQ = 0, /* # of non-fl/int ingress queues */
- VFRES_TC = 0, /* PCI-E traffic class */
- VFRES_NEXACTF = 16, /* # of exact MPS filters */
-
- VFRES_R_CAPS = FW_CMD_CAP_DMAQ|FW_CMD_CAP_VF|FW_CMD_CAP_PORT,
- VFRES_WX_CAPS = FW_CMD_CAP_DMAQ|FW_CMD_CAP_VF,
-#endif
-};
-
-/*
- * Provide a Port Access Rights Mask for the specified PF/VF. This is very
- * static and likely not to be useful in the long run. We really need to
- * implement some form of persistent configuration which the firmware
- * controls.
- */
-static unsigned int pfvfres_pmask(struct adapter *adapter,
- unsigned int pf, unsigned int vf)
-{
- unsigned int portn, portvec;
-
- /*
- * Give PF's access to all of the ports.
- */
- if (vf == 0)
- return FW_PFVF_CMD_PMASK_M;
-
- /*
- * For VFs, we'll assign them access to the ports based purely on the
- * PF. We assign active ports in order, wrapping around if there are
- * fewer active ports than PFs: e.g. active port[pf % nports].
- * Unfortunately the adapter's port_info structs haven't been
- * initialized yet so we have to compute this.
- */
- if (adapter->params.nports == 0)
- return 0;
-
- portn = pf % adapter->params.nports;
- portvec = adapter->params.portvec;
- for (;;) {
- /*
- * Isolate the lowest set bit in the port vector. If we're at
- * the port number that we want, return that as the pmask.
- * otherwise mask that bit out of the port vector and
- * decrement our port number ...
- */
- unsigned int pmask = portvec ^ (portvec & (portvec-1));
- if (portn == 0)
- return pmask;
- portn--;
- portvec &= ~pmask;
- }
- /*NOTREACHED*/
-}
-
enum {
MAX_TXQ_ENTRIES = 16384,
MAX_CTRL_TXQ_ENTRIES = 1024,
static uint force_old_init;
module_param(force_old_init, uint, 0644);
-MODULE_PARM_DESC(force_old_init, "Force old initialization sequence");
+MODULE_PARM_DESC(force_old_init, "Force old initialization sequence, deprecated"
+ " parameter");
static int dflt_msg_enable = DFLT_MSG_ENABLE;
module_param_array(intr_holdoff, uint, NULL, 0644);
MODULE_PARM_DESC(intr_holdoff, "values for queue interrupt hold-off timers "
- "0..4 in microseconds");
+ "0..4 in microseconds, deprecated parameter");
static unsigned int intr_cnt[SGE_NCOUNTERS - 1] = { 4, 8, 16 };
module_param_array(intr_cnt, uint, NULL, 0644);
MODULE_PARM_DESC(intr_cnt,
- "thresholds 1..3 for queue interrupt packet counters");
+ "thresholds 1..3 for queue interrupt packet counters, "
+ "deprecated parameter");
/*
* Normally we tell the chip to deliver Ingress Packets into our DMA buffers
#ifdef CONFIG_PCI_IOV
module_param(vf_acls, bool, 0644);
-MODULE_PARM_DESC(vf_acls, "if set enable virtualization L2 ACL enforcement");
+MODULE_PARM_DESC(vf_acls, "if set enable virtualization L2 ACL enforcement, "
+ "deprecated parameter");
/* Configure the number of PCI-E Virtual Function which are to be instantiated
* on SR-IOV Capable Physical Functions.
MODULE_PARM_DESC(select_queue,
"Select between kernel provided method of selecting or driver method of selecting TX queue. Default is kernel method.");
-/*
- * The filter TCAM has a fixed portion and a variable portion. The fixed
- * portion can match on source/destination IP IPv4/IPv6 addresses and TCP/UDP
- * ports. The variable portion is 36 bits which can include things like Exact
- * Match MAC Index (9 bits), Ether Type (16 bits), IP Protocol (8 bits),
- * [Inner] VLAN Tag (17 bits), etc. which, if all were somehow selected, would
- * far exceed the 36-bit budget for this "compressed" header portion of the
- * filter. Thus, we have a scarce resource which must be carefully managed.
- *
- * By default we set this up to mostly match the set of filter matching
- * capabilities of T3 but with accommodations for some of T4's more
- * interesting features:
- *
- * { IP Fragment (1), MPS Match Type (3), IP Protocol (8),
- * [Inner] VLAN (17), Port (3), FCoE (1) }
- */
-enum {
- TP_VLAN_PRI_MAP_DEFAULT = HW_TPL_FR_MT_PR_IV_P_FC,
- TP_VLAN_PRI_MAP_FIRST = FCOE_SHIFT,
- TP_VLAN_PRI_MAP_LAST = FRAGMENTATION_SHIFT,
-};
-
-static unsigned int tp_vlan_pri_map = TP_VLAN_PRI_MAP_DEFAULT;
+static unsigned int tp_vlan_pri_map = HW_TPL_FR_MT_PR_IV_P_FC;
module_param(tp_vlan_pri_map, uint, 0644);
-MODULE_PARM_DESC(tp_vlan_pri_map, "global compressed filter configuration");
+MODULE_PARM_DESC(tp_vlan_pri_map, "global compressed filter configuration, "
+ "deprecated parameter");
static struct dentry *cxgb4_debugfs_root;
if (idx >= adap->tids.ftid_base && nidx <
(adap->tids.nftids + adap->tids.nsftids)) {
idx = nidx;
- ret = GET_TCB_COOKIE(rpl->cookie);
+ ret = TCB_COOKIE_G(rpl->cookie);
f = &adap->tids.ftid_tab[idx];
if (ret == FW_FILTER_WR_FLT_DELETED) {
if (likely(opcode == CPL_SGE_EGR_UPDATE)) {
const struct cpl_sge_egr_update *p = (void *)rsp;
- unsigned int qid = EGR_QID(ntohl(p->opcode_qid));
+ unsigned int qid = EGR_QID_G(ntohl(p->opcode_qid));
struct sge_txq *txq;
txq = q->adap->sge.egr_map[qid - q->adap->sge.egr_start];
static irqreturn_t t4_nondata_intr(int irq, void *cookie)
{
struct adapter *adap = cookie;
+ u32 v = t4_read_reg(adap, MYPF_REG(PL_PF_INT_CAUSE_A));
- u32 v = t4_read_reg(adap, MYPF_REG(PL_PF_INT_CAUSE));
- if (v & PFSW) {
+ if (v & PFSW_F) {
adap->swintr = 1;
- t4_write_reg(adap, MYPF_REG(PL_PF_INT_CAUSE), v);
+ t4_write_reg(adap, MYPF_REG(PL_PF_INT_CAUSE_A), v);
}
t4_slow_intr_handler(adap);
return IRQ_HANDLED;
if (q->handler)
napi_enable(&q->napi);
/* 0-increment GTS to start the timer and enable interrupts */
- t4_write_reg(adap, MYPF_REG(SGE_PF_GTS),
- SEINTARM(q->intr_params) |
- INGRESSQID(q->cntxt_id));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_GTS_A),
+ SEINTARM_V(q->intr_params) |
+ INGRESSQID_V(q->cntxt_id));
}
}
}
t4_write_reg(adap, is_t4(adap->params.chip) ?
- MPS_TRC_RSS_CONTROL :
- MPS_T5_TRC_RSS_CONTROL,
- RSSCONTROL(netdev2pinfo(adap->port[0])->tx_chan) |
- QUEUENUMBER(s->ethrxq[0].rspq.abs_id));
+ MPS_TRC_RSS_CONTROL_A :
+ MPS_T5_TRC_RSS_CONTROL_A,
+ RSSCONTROL_V(netdev2pinfo(adap->port[0])->tx_chan) |
+ QUEUENUMBER_V(s->ethrxq[0].rspq.abs_id));
return 0;
}
collect_sge_port_stats(adapter, pi, (struct queue_port_stats *)data);
data += sizeof(struct queue_port_stats) / sizeof(u64);
if (!is_t4(adapter->params.chip)) {
- t4_write_reg(adapter, SGE_STAT_CFG, STATSOURCE_T5(7));
- val1 = t4_read_reg(adapter, SGE_STAT_TOTAL);
- val2 = t4_read_reg(adapter, SGE_STAT_MATCH);
+ t4_write_reg(adapter, SGE_STAT_CFG_A, STATSOURCE_T5_V(7));
+ val1 = t4_read_reg(adapter, SGE_STAT_TOTAL_A);
+ val2 = t4_read_reg(adapter, SGE_STAT_MATCH_A);
*data = val1 - val2;
data++;
*data = val2;
return 0;
}
-int cxgb4_clip_get(const struct net_device *dev,
- const struct in6_addr *lip)
-{
- struct adapter *adap;
- struct fw_clip_cmd c;
-
- adap = netdev2adap(dev);
- memset(&c, 0, sizeof(c));
- c.op_to_write = htonl(FW_CMD_OP_V(FW_CLIP_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_WRITE_F);
- c.alloc_to_len16 = htonl(FW_CLIP_CMD_ALLOC_F | FW_LEN16(c));
- c.ip_hi = *(__be64 *)(lip->s6_addr);
- c.ip_lo = *(__be64 *)(lip->s6_addr + 8);
- return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
-}
-EXPORT_SYMBOL(cxgb4_clip_get);
-
-int cxgb4_clip_release(const struct net_device *dev,
- const struct in6_addr *lip)
-{
- struct adapter *adap;
- struct fw_clip_cmd c;
-
- adap = netdev2adap(dev);
- memset(&c, 0, sizeof(c));
- c.op_to_write = htonl(FW_CMD_OP_V(FW_CLIP_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_READ_F);
- c.alloc_to_len16 = htonl(FW_CLIP_CMD_FREE_F | FW_LEN16(c));
- c.ip_hi = *(__be64 *)(lip->s6_addr);
- c.ip_lo = *(__be64 *)(lip->s6_addr + 8);
- return t4_wr_mbox_meat(adap, adap->mbox, &c, sizeof(c), &c, false);
-}
-EXPORT_SYMBOL(cxgb4_clip_release);
-
/**
* cxgb4_create_server - create an IP server
* @dev: the device
req->peer_ip = htonl(0);
chan = rxq_to_chan(&adap->sge, queue);
req->opt0 = cpu_to_be64(TX_CHAN_V(chan));
- req->opt1 = cpu_to_be64(CONN_POLICY_ASK |
- SYN_RSS_ENABLE | SYN_RSS_QUEUE(queue));
+ req->opt1 = cpu_to_be64(CONN_POLICY_V(CPL_CONN_POLICY_ASK) |
+ SYN_RSS_ENABLE_F | SYN_RSS_QUEUE_V(queue));
ret = t4_mgmt_tx(adap, skb);
return net_xmit_eval(ret);
}
req->peer_ip_lo = cpu_to_be64(0);
chan = rxq_to_chan(&adap->sge, queue);
req->opt0 = cpu_to_be64(TX_CHAN_V(chan));
- req->opt1 = cpu_to_be64(CONN_POLICY_ASK |
- SYN_RSS_ENABLE | SYN_RSS_QUEUE(queue));
+ req->opt1 = cpu_to_be64(CONN_POLICY_V(CPL_CONN_POLICY_ASK) |
+ SYN_RSS_ENABLE_F | SYN_RSS_QUEUE_V(queue));
ret = t4_mgmt_tx(adap, skb);
return net_xmit_eval(ret);
}
req = (struct cpl_close_listsvr_req *)__skb_put(skb, sizeof(*req));
INIT_TP_WR(req, 0);
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_CLOSE_LISTSRV_REQ, stid));
- req->reply_ctrl = htons(NO_REPLY(0) | (ipv6 ? LISTSVR_IPV6(1) :
- LISTSVR_IPV6(0)) | QUEUENO(queue));
+ req->reply_ctrl = htons(NO_REPLY_V(0) | (ipv6 ? LISTSVR_IPV6_V(1) :
+ LISTSVR_IPV6_V(0)) | QUEUENO_V(queue));
ret = t4_mgmt_tx(adap, skb);
return net_xmit_eval(ret);
}
struct adapter *adap = netdev2adap(dev);
u32 v1, v2, lp_count, hp_count;
- v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
- v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
+ v1 = t4_read_reg(adap, SGE_DBFIFO_STATUS_A);
+ v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2_A);
if (is_t4(adap->params.chip)) {
- lp_count = G_LP_COUNT(v1);
- hp_count = G_HP_COUNT(v1);
+ lp_count = LP_COUNT_G(v1);
+ hp_count = HP_COUNT_G(v1);
} else {
- lp_count = G_LP_COUNT_T5(v1);
- hp_count = G_HP_COUNT_T5(v2);
+ lp_count = LP_COUNT_T5_G(v1);
+ hp_count = HP_COUNT_T5_G(v2);
}
return lpfifo ? lp_count : hp_count;
}
{
struct adapter *adap = netdev2adap(dev);
- t4_write_reg(adap, ULP_RX_ISCSI_TAGMASK, tag_mask);
- t4_write_reg(adap, ULP_RX_ISCSI_PSZ, HPZ0(pgsz_order[0]) |
- HPZ1(pgsz_order[1]) | HPZ2(pgsz_order[2]) |
- HPZ3(pgsz_order[3]));
+ t4_write_reg(adap, ULP_RX_ISCSI_TAGMASK_A, tag_mask);
+ t4_write_reg(adap, ULP_RX_ISCSI_PSZ_A, HPZ0_V(pgsz_order[0]) |
+ HPZ1_V(pgsz_order[1]) | HPZ2_V(pgsz_order[2]) |
+ HPZ3_V(pgsz_order[3]));
}
EXPORT_SYMBOL(cxgb4_iscsi_init);
int ret;
ret = t4_fwaddrspace_write(adap, adap->mbox,
- 0xe1000000 + A_SGE_CTXT_CMD, 0x20000000);
+ 0xe1000000 + SGE_CTXT_CMD_A, 0x20000000);
return ret;
}
EXPORT_SYMBOL(cxgb4_flush_eq_cache);
static int read_eq_indices(struct adapter *adap, u16 qid, u16 *pidx, u16 *cidx)
{
- u32 addr = t4_read_reg(adap, A_SGE_DBQ_CTXT_BADDR) + 24 * qid + 8;
+ u32 addr = t4_read_reg(adap, SGE_DBQ_CTXT_BADDR_A) + 24 * qid + 8;
__be64 indices;
int ret;
if (pidx != hw_pidx) {
u16 delta;
+ u32 val;
if (pidx >= hw_pidx)
delta = pidx - hw_pidx;
else
delta = size - hw_pidx + pidx;
+
+ if (is_t4(adap->params.chip))
+ val = PIDX_V(delta);
+ else
+ val = PIDX_T5_V(delta);
wmb();
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(qid) | PIDX(delta));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
+ QID_V(qid) | val);
}
out:
return ret;
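
The wrap-around delta computed above (and again in sync_txq_pidx() further
down) is simply the distance from the hardware's producer index to the
driver's, modulo the ring size. As a standalone sketch with illustrative
names:

	/* Distance from hw_pidx to pidx on a ring of 'size' entries. */
	static inline u16 pidx_delta(u16 pidx, u16 hw_pidx, u16 size)
	{
		return pidx >= hw_pidx ? pidx - hw_pidx
				       : size - hw_pidx + pidx;
	}
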
struct adapter *adap;
adap = netdev2adap(dev);
- t4_set_reg_field(adap, A_SGE_DOORBELL_CONTROL, F_NOCOALESCE,
- F_NOCOALESCE);
+ t4_set_reg_field(adap, SGE_DOORBELL_CONTROL_A, NOCOALESCE_F,
+ NOCOALESCE_F);
}
EXPORT_SYMBOL(cxgb4_disable_db_coalescing);
struct adapter *adap;
adap = netdev2adap(dev);
- t4_set_reg_field(adap, A_SGE_DOORBELL_CONTROL, F_NOCOALESCE, 0);
+ t4_set_reg_field(adap, SGE_DOORBELL_CONTROL_A, NOCOALESCE_F, 0);
}
EXPORT_SYMBOL(cxgb4_enable_db_coalescing);
struct adapter *adap;
adap = netdev2adap(dev);
- lo = t4_read_reg(adap, SGE_TIMESTAMP_LO);
- hi = GET_TSVAL(t4_read_reg(adap, SGE_TIMESTAMP_HI));
+ lo = t4_read_reg(adap, SGE_TIMESTAMP_LO_A);
+ hi = TSVAL_G(t4_read_reg(adap, SGE_TIMESTAMP_HI_A));
return ((u64)hi << 32) | (u64)lo;
}
u32 v1, v2, lp_count, hp_count;
do {
- v1 = t4_read_reg(adap, A_SGE_DBFIFO_STATUS);
- v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2);
+ v1 = t4_read_reg(adap, SGE_DBFIFO_STATUS_A);
+ v2 = t4_read_reg(adap, SGE_DBFIFO_STATUS2_A);
if (is_t4(adap->params.chip)) {
- lp_count = G_LP_COUNT(v1);
- hp_count = G_HP_COUNT(v1);
+ lp_count = LP_COUNT_G(v1);
+ hp_count = HP_COUNT_G(v1);
} else {
- lp_count = G_LP_COUNT_T5(v1);
- hp_count = G_HP_COUNT_T5(v2);
+ lp_count = LP_COUNT_T5_G(v1);
+ hp_count = HP_COUNT_T5_G(v2);
}
if (lp_count == 0 && hp_count == 0)
* are committed before we tell HW about them.
*/
wmb();
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(q->cntxt_id) | PIDX(q->db_pidx_inc));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
+ QID_V(q->cntxt_id) | PIDX_V(q->db_pidx_inc));
q->db_pidx_inc = 0;
}
q->db_disabled = 0;
drain_db_fifo(adap, dbfifo_drain_delay);
enable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_EMPTY);
- t4_set_reg_field(adap, SGE_INT_ENABLE3,
- DBFIFO_HP_INT | DBFIFO_LP_INT,
- DBFIFO_HP_INT | DBFIFO_LP_INT);
+ t4_set_reg_field(adap, SGE_INT_ENABLE3_A,
+ DBFIFO_HP_INT_F | DBFIFO_LP_INT_F,
+ DBFIFO_HP_INT_F | DBFIFO_LP_INT_F);
}
static void sync_txq_pidx(struct adapter *adap, struct sge_txq *q)
goto out;
if (q->db_pidx != hw_pidx) {
u16 delta;
+ u32 val;
if (q->db_pidx >= hw_pidx)
delta = q->db_pidx - hw_pidx;
else
delta = q->size - hw_pidx + q->db_pidx;
+
+ if (is_t4(adap->params.chip))
+ val = PIDX_V(delta);
+ else
+ val = PIDX_T5_V(delta);
wmb();
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(q->cntxt_id) | PIDX(delta));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
+ QID_V(q->cntxt_id) | val);
}
out:
q->db_disabled = 0;
dev_err(adap->pdev_dev, "doorbell drop recovery: "
"qid=%d, pidx_inc=%d\n", qid, pidx_inc);
else
- writel(PIDX_T5(pidx_inc) | QID(bar2_qid),
+ writel(PIDX_T5_V(pidx_inc) | QID_V(bar2_qid),
adap->bar2 + bar2_qoffset + SGE_UDB_KDOORBELL);
/* Re-enable BAR2 WC */
t4_set_reg_field(adap, 0x10b0, 1<<15, 1<<15);
}
- t4_set_reg_field(adap, A_SGE_DOORBELL_CONTROL, F_DROPPED_DB, 0);
+ t4_set_reg_field(adap, SGE_DOORBELL_CONTROL_A, DROPPED_DB_F, 0);
}
void t4_db_full(struct adapter *adap)
if (is_t4(adap->params.chip)) {
disable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_FULL);
- t4_set_reg_field(adap, SGE_INT_ENABLE3,
- DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
+ t4_set_reg_field(adap, SGE_INT_ENABLE3_A,
+ DBFIFO_HP_INT_F | DBFIFO_LP_INT_F, 0);
queue_work(adap->workq, &adap->db_full_task);
}
}
lli.nports = adap->params.nports;
lli.wr_cred = adap->params.ofldq_wr_cred;
lli.adapter_type = adap->params.chip;
- lli.iscsi_iolen = MAXRXDATA_GET(t4_read_reg(adap, TP_PARA_REG2));
+ lli.iscsi_iolen = MAXRXDATA_G(t4_read_reg(adap, TP_PARA_REG2_A));
lli.cclk_ps = 1000000000 / adap->params.vpd.cclk;
lli.udb_density = 1 << adap->params.sge.eq_qpp;
lli.ucq_density = 1 << adap->params.sge.iq_qpp;
/* MODQ_REQ_MAP sets queues 0-3 to chan 0-3 */
for (i = 0; i < NCHAN; i++)
lli.tx_modq[i] = i;
- lli.gts_reg = adap->regs + MYPF_REG(SGE_PF_GTS);
- lli.db_reg = adap->regs + MYPF_REG(SGE_PF_KDOORBELL);
+ lli.gts_reg = adap->regs + MYPF_REG(SGE_PF_GTS_A);
+ lli.db_reg = adap->regs + MYPF_REG(SGE_PF_KDOORBELL_A);
lli.fw_vers = adap->params.fw_vers;
lli.dbfifo_int_thresh = dbfifo_int_thresh;
lli.sge_ingpadboundary = adap->sge.fl_align;
}
EXPORT_SYMBOL(cxgb4_unregister_uld);
-/* Check if netdev on which event is occured belongs to us or not. Return
- * success (true) if it belongs otherwise failure (false).
- * Called with rcu_read_lock() held.
- */
#if IS_ENABLED(CONFIG_IPV6)
-static bool cxgb4_netdev(const struct net_device *netdev)
+static int cxgb4_inet6addr_handler(struct notifier_block *this,
+ unsigned long event, void *data)
{
+ struct inet6_ifaddr *ifa = data;
+ struct net_device *event_dev = ifa->idev->dev;
+ const struct device *parent = NULL;
+#if IS_ENABLED(CONFIG_BONDING)
struct adapter *adap;
- int i;
-
- list_for_each_entry_rcu(adap, &adap_rcu_list, rcu_node)
- for (i = 0; i < MAX_NPORTS; i++)
- if (adap->port[i] == netdev)
- return true;
- return false;
-}
+#endif
+ if (event_dev->priv_flags & IFF_802_1Q_VLAN)
+ event_dev = vlan_dev_real_dev(event_dev);
+#if IS_ENABLED(CONFIG_BONDING)
+ if (event_dev->flags & IFF_MASTER) {
+ list_for_each_entry(adap, &adapter_list, list_node) {
+ switch (event) {
+ case NETDEV_UP:
+ cxgb4_clip_get(adap->port[0],
+ (const u32 *)ifa, 1);
+ break;
+ case NETDEV_DOWN:
+ cxgb4_clip_release(adap->port[0],
+ (const u32 *)ifa, 1);
+ break;
+ default:
+ break;
+ }
+ }
+ return NOTIFY_OK;
+ }
+#endif
-static int clip_add(struct net_device *event_dev, struct inet6_ifaddr *ifa,
- unsigned long event)
-{
- int ret = NOTIFY_DONE;
+ if (event_dev)
+ parent = event_dev->dev.parent;
- rcu_read_lock();
- if (cxgb4_netdev(event_dev)) {
+ if (parent && parent->driver == &cxgb4_driver.driver) {
switch (event) {
case NETDEV_UP:
- ret = cxgb4_clip_get(event_dev, &ifa->addr);
- if (ret < 0) {
- rcu_read_unlock();
- return ret;
- }
- ret = NOTIFY_OK;
+ cxgb4_clip_get(event_dev, (const u32 *)ifa, 1);
break;
case NETDEV_DOWN:
- cxgb4_clip_release(event_dev, &ifa->addr);
- ret = NOTIFY_OK;
+ cxgb4_clip_release(event_dev, (const u32 *)ifa, 1);
break;
default:
break;
}
}
- rcu_read_unlock();
- return ret;
-}
-
-static int cxgb4_inet6addr_handler(struct notifier_block *this,
- unsigned long event, void *data)
-{
- struct inet6_ifaddr *ifa = data;
- struct net_device *event_dev;
- int ret = NOTIFY_DONE;
- struct bonding *bond = netdev_priv(ifa->idev->dev);
- struct list_head *iter;
- struct slave *slave;
- struct pci_dev *first_pdev = NULL;
-
- if (ifa->idev->dev->priv_flags & IFF_802_1Q_VLAN) {
- event_dev = vlan_dev_real_dev(ifa->idev->dev);
- ret = clip_add(event_dev, ifa, event);
- } else if (ifa->idev->dev->flags & IFF_MASTER) {
- /* It is possible that two different adapters are bonded in one
- * bond. We need to find such different adapters and add clip
- * in all of them only once.
- */
- bond_for_each_slave(bond, slave, iter) {
- if (!first_pdev) {
- ret = clip_add(slave->dev, ifa, event);
- /* If clip_add is success then only initialize
- * first_pdev since it means it is our device
- */
- if (ret == NOTIFY_OK)
- first_pdev = to_pci_dev(
- slave->dev->dev.parent);
- } else if (first_pdev !=
- to_pci_dev(slave->dev->dev.parent))
- ret = clip_add(slave->dev, ifa, event);
- }
- } else
- ret = clip_add(ifa->idev->dev, ifa, event);
-
- return ret;
+ return NOTIFY_OK;
}
+static bool inet6addr_registered;
static struct notifier_block cxgb4_inet6addr_notifier = {
.notifier_call = cxgb4_inet6addr_handler
};
-/* Retrieves IPv6 addresses from a root device (bond, vlan) associated with
- * a physical device.
- * The physical device reference is needed to send the actul CLIP command.
- */
-static int update_dev_clip(struct net_device *root_dev, struct net_device *dev)
-{
- struct inet6_dev *idev = NULL;
- struct inet6_ifaddr *ifa;
- int ret = 0;
-
- idev = __in6_dev_get(root_dev);
- if (!idev)
- return ret;
-
- read_lock_bh(&idev->lock);
- list_for_each_entry(ifa, &idev->addr_list, if_list) {
- ret = cxgb4_clip_get(dev, &ifa->addr);
- if (ret < 0)
- break;
- }
- read_unlock_bh(&idev->lock);
-
- return ret;
-}
-
-static int update_root_dev_clip(struct net_device *dev)
-{
- struct net_device *root_dev = NULL;
- int i, ret = 0;
-
- /* First populate the real net device's IPv6 addresses */
- ret = update_dev_clip(dev, dev);
- if (ret)
- return ret;
-
- /* Parse all bond and vlan devices layered on top of the physical dev */
- root_dev = netdev_master_upper_dev_get_rcu(dev);
- if (root_dev) {
- ret = update_dev_clip(root_dev, dev);
- if (ret)
- return ret;
- }
-
- for (i = 0; i < VLAN_N_VID; i++) {
- root_dev = __vlan_find_dev_deep_rcu(dev, htons(ETH_P_8021Q), i);
- if (!root_dev)
- continue;
-
- ret = update_dev_clip(root_dev, dev);
- if (ret)
- break;
- }
- return ret;
-}
-
static void update_clip(const struct adapter *adap)
{
int i;
ret = 0;
if (dev)
- ret = update_root_dev_clip(dev);
+ ret = cxgb4_update_root_dev_clip(dev);
if (ret < 0)
break;
f->fs.val.lip[i] = val[i];
f->fs.mask.lip[i] = ~0;
}
- if (adap->params.tp.vlan_pri_map & F_PORT) {
+ if (adap->params.tp.vlan_pri_map & PORT_F) {
f->fs.val.iport = port;
f->fs.mask.iport = mask;
}
}
- if (adap->params.tp.vlan_pri_map & F_PROTOCOL) {
+ if (adap->params.tp.vlan_pri_map & PROTOCOL_F) {
f->fs.val.proto = IPPROTO_TCP;
f->fs.mask.proto = ~0;
}
void t4_fatal_err(struct adapter *adap)
{
- t4_set_reg_field(adap, SGE_CONTROL, GLOBALENABLE, 0);
+ t4_set_reg_field(adap, SGE_CONTROL_A, GLOBALENABLE_F, 0);
t4_intr_disable(adap);
dev_alert(adap->pdev_dev, "encountered fatal error, adapter stopped\n");
}
mem_win2_base = MEMWIN2_BASE_T5;
mem_win2_aperture = MEMWIN2_APERTURE_T5;
}
- t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 0),
- mem_win0_base | BIR(0) |
- WINDOW(ilog2(MEMWIN0_APERTURE) - 10));
- t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 1),
- mem_win1_base | BIR(0) |
- WINDOW(ilog2(MEMWIN1_APERTURE) - 10));
- t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 2),
- mem_win2_base | BIR(0) |
- WINDOW(ilog2(mem_win2_aperture) - 10));
- t4_read_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 2));
+ t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, 0),
+ mem_win0_base | BIR_V(0) |
+ WINDOW_V(ilog2(MEMWIN0_APERTURE) - 10));
+ t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, 1),
+ mem_win1_base | BIR_V(0) |
+ WINDOW_V(ilog2(MEMWIN1_APERTURE) - 10));
+ t4_write_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, 2),
+ mem_win2_base | BIR_V(0) |
+ WINDOW_V(ilog2(mem_win2_aperture) - 10));
+ t4_read_reg(adap, PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, 2));
}
static void setup_memwin_rdma(struct adapter *adap)
start += OCQ_WIN_OFFSET(adap->pdev, &adap->vres);
sz_kb = roundup_pow_of_two(adap->vres.ocq.size) >> 10;
t4_write_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, 3),
- start | BIR(1) | WINDOW(ilog2(sz_kb)));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, 3),
+ start | BIR_V(1) | WINDOW_V(ilog2(sz_kb)));
t4_write_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, 3),
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, 3),
adap->vres.ocq.start);
t4_read_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, 3));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, 3));
}
}
t4_sge_init(adap);
/* tweak some settings */
- t4_write_reg(adap, TP_SHIFT_CNT, 0x64f8849);
- t4_write_reg(adap, ULP_RX_TDDP_PSZ, HPZ0(PAGE_SHIFT - 12));
- t4_write_reg(adap, TP_PIO_ADDR, TP_INGRESS_CONFIG);
- v = t4_read_reg(adap, TP_PIO_DATA);
- t4_write_reg(adap, TP_PIO_DATA, v & ~CSUM_HAS_PSEUDO_HDR);
+ t4_write_reg(adap, TP_SHIFT_CNT_A, 0x64f8849);
+ t4_write_reg(adap, ULP_RX_TDDP_PSZ_A, HPZ0_V(PAGE_SHIFT - 12));
+ t4_write_reg(adap, TP_PIO_ADDR_A, TP_INGRESS_CONFIG_A);
+ v = t4_read_reg(adap, TP_PIO_DATA_A);
+ t4_write_reg(adap, TP_PIO_DATA_A, v & ~CSUM_HAS_PSEUDO_HDR_F);
/* first 4 Tx modulation queues point to consecutive Tx channels */
adap->params.tp.tx_modq_map = 0xE4;
- t4_write_reg(adap, A_TP_TX_MOD_QUEUE_REQ_MAP,
- V_TX_MOD_QUEUE_REQ_MAP(adap->params.tp.tx_modq_map));
+ t4_write_reg(adap, TP_TX_MOD_QUEUE_REQ_MAP_A,
+ TX_MOD_QUEUE_REQ_MAP_V(adap->params.tp.tx_modq_map));
/* associate each Tx modulation queue with consecutive Tx channels */
v = 0x84218421;
- t4_write_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
- &v, 1, A_TP_TX_SCHED_HDR);
- t4_write_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
- &v, 1, A_TP_TX_SCHED_FIFO);
- t4_write_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
- &v, 1, A_TP_TX_SCHED_PCMD);
+ t4_write_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ &v, 1, TP_TX_SCHED_HDR_A);
+ t4_write_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ &v, 1, TP_TX_SCHED_FIFO_A);
+ t4_write_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ &v, 1, TP_TX_SCHED_PCMD_A);
#define T4_TX_MODQ_10G_WEIGHT_DEFAULT 16 /* in KB units */
if (is_offload(adap)) {
- t4_write_reg(adap, A_TP_TX_MOD_QUEUE_WEIGHT0,
- V_TX_MODQ_WEIGHT0(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT1(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT2(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT3(T4_TX_MODQ_10G_WEIGHT_DEFAULT));
- t4_write_reg(adap, A_TP_TX_MOD_CHANNEL_WEIGHT,
- V_TX_MODQ_WEIGHT0(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT1(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT2(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
- V_TX_MODQ_WEIGHT3(T4_TX_MODQ_10G_WEIGHT_DEFAULT));
+ t4_write_reg(adap, TP_TX_MOD_QUEUE_WEIGHT0_A,
+ TX_MODQ_WEIGHT0_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT1_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT2_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT3_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT));
+ t4_write_reg(adap, TP_TX_MOD_CHANNEL_WEIGHT_A,
+ TX_MODQ_WEIGHT0_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT1_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT2_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT) |
+ TX_MODQ_WEIGHT3_V(T4_TX_MODQ_10G_WEIGHT_DEFAULT));
}
/* get basic stuff going */
rx_dma_offset);
rx_dma_offset = 2;
}
- t4_set_reg_field(adapter, SGE_CONTROL,
- PKTSHIFT_MASK,
- PKTSHIFT(rx_dma_offset));
+ t4_set_reg_field(adapter, SGE_CONTROL_A,
+ PKTSHIFT_V(PKTSHIFT_M),
+ PKTSHIFT_V(rx_dma_offset));
/*
* Don't include the "IP Pseudo Header" in CPL_RX_PKT checksums: Linux
* adds the pseudo header itself.
*/
- t4_tp_wr_bits_indirect(adapter, TP_INGRESS_CONFIG,
- CSUM_HAS_PSEUDO_HDR, 0);
+ t4_tp_wr_bits_indirect(adapter, TP_INGRESS_CONFIG_A,
+ CSUM_HAS_PSEUDO_HDR_F, 0);
return 0;
}
*/
if (reset) {
ret = t4_fw_reset(adapter, adapter->mbox,
- PIORSTMODE | PIORST);
+ PIORSTMODE_F | PIORST_F);
if (ret < 0)
goto bye;
}
if (ret < 0)
goto bye;
- /*
- * Return successfully and note that we're operating with parameters
- * not supplied by the driver, rather than from hard-wired
- * initialization constants burried in the driver.
+ /* Emit Firmware Configuration File information and return
+ * successfully.
*/
- adapter->flags |= USING_SOFT_PARAMS;
dev_info(adapter->pdev_dev, "Successfully configured using Firmware "\
"Configuration File \"%s\", version %#x, computed checksum %#x\n",
config_name, finiver, cfcsum);
return ret;
}
-/*
- * Attempt to initialize the adapter via hard-coded, driver supplied
- * parameters ...
- */
-static int adap_init0_no_config(struct adapter *adapter, int reset)
-{
- struct sge *s = &adapter->sge;
- struct fw_caps_config_cmd caps_cmd;
- u32 v;
- int i, ret;
-
- /*
- * Reset device if necessary
- */
- if (reset) {
- ret = t4_fw_reset(adapter, adapter->mbox,
- PIORSTMODE | PIORST);
- if (ret < 0)
- goto bye;
- }
-
- /*
- * Get device capabilities and select which we'll be using.
- */
- memset(&caps_cmd, 0, sizeof(caps_cmd));
- caps_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_CAPS_CONFIG_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_READ_F);
- caps_cmd.cfvalid_to_len16 = htonl(FW_LEN16(caps_cmd));
- ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
- &caps_cmd);
- if (ret < 0)
- goto bye;
-
- if (caps_cmd.niccaps & htons(FW_CAPS_CONFIG_NIC_VM)) {
- if (!vf_acls)
- caps_cmd.niccaps ^= htons(FW_CAPS_CONFIG_NIC_VM);
- else
- caps_cmd.niccaps = htons(FW_CAPS_CONFIG_NIC_VM);
- } else if (vf_acls) {
- dev_err(adapter->pdev_dev, "virtualization ACLs not supported");
- goto bye;
- }
- caps_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_CAPS_CONFIG_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_WRITE_F);
- ret = t4_wr_mbox(adapter, adapter->mbox, &caps_cmd, sizeof(caps_cmd),
- NULL);
- if (ret < 0)
- goto bye;
-
- /*
- * Tweak configuration based on system architecture, module
- * parameters, etc.
- */
- ret = adap_init0_tweaks(adapter);
- if (ret < 0)
- goto bye;
-
- /*
- * Select RSS Global Mode we want to use. We use "Basic Virtual"
- * mode which maps each Virtual Interface to its own section of
- * the RSS Table and we turn on all map and hash enables ...
- */
- adapter->flags |= RSS_TNLALLLOOKUP;
- ret = t4_config_glbl_rss(adapter, adapter->mbox,
- FW_RSS_GLB_CONFIG_CMD_MODE_BASICVIRTUAL,
- FW_RSS_GLB_CONFIG_CMD_TNLMAPEN_F |
- FW_RSS_GLB_CONFIG_CMD_HASHTOEPLITZ_F |
- ((adapter->flags & RSS_TNLALLLOOKUP) ?
- FW_RSS_GLB_CONFIG_CMD_TNLALLLKP_F : 0));
- if (ret < 0)
- goto bye;
-
- /*
- * Set up our own fundamental resource provisioning ...
- */
- ret = t4_cfg_pfvf(adapter, adapter->mbox, adapter->fn, 0,
- PFRES_NEQ, PFRES_NETHCTRL,
- PFRES_NIQFLINT, PFRES_NIQ,
- PFRES_TC, PFRES_NVI,
- FW_PFVF_CMD_CMASK_M,
- pfvfres_pmask(adapter, adapter->fn, 0),
- PFRES_NEXACTF,
- PFRES_R_CAPS, PFRES_WX_CAPS);
- if (ret < 0)
- goto bye;
-
- /*
- * Perform low level SGE initialization. We need to do this before we
- * send the firmware the INITIALIZE command because that will cause
- * any other PF Drivers which are waiting for the Master
- * Initialization to proceed forward.
- */
- for (i = 0; i < SGE_NTIMERS - 1; i++)
- s->timer_val[i] = min(intr_holdoff[i], MAX_SGE_TIMERVAL);
- s->timer_val[SGE_NTIMERS - 1] = MAX_SGE_TIMERVAL;
- s->counter_val[0] = 1;
- for (i = 1; i < SGE_NCOUNTERS; i++)
- s->counter_val[i] = min(intr_cnt[i - 1],
- THRESHOLD_0_GET(THRESHOLD_0_MASK));
- t4_sge_init(adapter);
-
-#ifdef CONFIG_PCI_IOV
- /*
- * Provision resource limits for Virtual Functions. We currently
- * grant them all the same static resource limits except for the Port
- * Access Rights Mask which we're assigning based on the PF. All of
- * the static provisioning stuff for both the PF and VF really needs
- * to be managed in a persistent manner for each device which the
- * firmware controls.
- */
- {
- int pf, vf;
-
- for (pf = 0; pf < ARRAY_SIZE(num_vf); pf++) {
- if (num_vf[pf] <= 0)
- continue;
-
- /* VF numbering starts at 1! */
- for (vf = 1; vf <= num_vf[pf]; vf++) {
- ret = t4_cfg_pfvf(adapter, adapter->mbox,
- pf, vf,
- VFRES_NEQ, VFRES_NETHCTRL,
- VFRES_NIQFLINT, VFRES_NIQ,
- VFRES_TC, VFRES_NVI,
- FW_PFVF_CMD_CMASK_M,
- pfvfres_pmask(
- adapter, pf, vf),
- VFRES_NEXACTF,
- VFRES_R_CAPS, VFRES_WX_CAPS);
- if (ret < 0)
- dev_warn(adapter->pdev_dev,
- "failed to "\
- "provision pf/vf=%d/%d; "
- "err=%d\n", pf, vf, ret);
- }
- }
- }
-#endif
-
- /*
- * Set up the default filter mode. Later we'll want to implement this
- * via a firmware command, etc. ... This needs to be done before the
- * firmare initialization command ... If the selected set of fields
- * isn't equal to the default value, we'll need to make sure that the
- * field selections will fit in the 36-bit budget.
- */
- if (tp_vlan_pri_map != TP_VLAN_PRI_MAP_DEFAULT) {
- int j, bits = 0;
-
- for (j = TP_VLAN_PRI_MAP_FIRST; j <= TP_VLAN_PRI_MAP_LAST; j++)
- switch (tp_vlan_pri_map & (1 << j)) {
- case 0:
- /* compressed filter field not enabled */
- break;
- case FCOE_MASK:
- bits += 1;
- break;
- case PORT_MASK:
- bits += 3;
- break;
- case VNIC_ID_MASK:
- bits += 17;
- break;
- case VLAN_MASK:
- bits += 17;
- break;
- case TOS_MASK:
- bits += 8;
- break;
- case PROTOCOL_MASK:
- bits += 8;
- break;
- case ETHERTYPE_MASK:
- bits += 16;
- break;
- case MACMATCH_MASK:
- bits += 9;
- break;
- case MPSHITTYPE_MASK:
- bits += 3;
- break;
- case FRAGMENTATION_MASK:
- bits += 1;
- break;
- }
-
- if (bits > 36) {
- dev_err(adapter->pdev_dev,
- "tp_vlan_pri_map=%#x needs %d bits > 36;"\
- " using %#x\n", tp_vlan_pri_map, bits,
- TP_VLAN_PRI_MAP_DEFAULT);
- tp_vlan_pri_map = TP_VLAN_PRI_MAP_DEFAULT;
- }
- }
- v = tp_vlan_pri_map;
- t4_write_indirect(adapter, TP_PIO_ADDR, TP_PIO_DATA,
- &v, 1, TP_VLAN_PRI_MAP);
-
- /*
- * We need Five Tuple Lookup mode to be set in TP_GLOBAL_CONFIG order
- * to support any of the compressed filter fields above. Newer
- * versions of the firmware do this automatically but it doesn't hurt
- * to set it here. Meanwhile, we do _not_ need to set Lookup Every
- * Packet in TP_INGRESS_CONFIG to support matching non-TCP packets
- * since the firmware automatically turns this on and off when we have
- * a non-zero number of filters active (since it does have a
- * performance impact).
- */
- if (tp_vlan_pri_map)
- t4_set_reg_field(adapter, TP_GLOBAL_CONFIG,
- FIVETUPLELOOKUP_MASK,
- FIVETUPLELOOKUP_MASK);
-
- /*
- * Tweak some settings.
- */
- t4_write_reg(adapter, TP_SHIFT_CNT, SYNSHIFTMAX(6) |
- RXTSHIFTMAXR1(4) | RXTSHIFTMAXR2(15) |
- PERSHIFTBACKOFFMAX(8) | PERSHIFTMAX(8) |
- KEEPALIVEMAXR1(4) | KEEPALIVEMAXR2(9));
-
- /*
- * Get basic stuff going by issuing the Firmware Initialize command.
- * Note that this _must_ be after all PFVF commands ...
- */
- ret = t4_fw_initialize(adapter, adapter->mbox);
- if (ret < 0)
- goto bye;
-
- /*
- * Return successfully!
- */
- dev_info(adapter->pdev_dev, "Successfully configured using built-in "\
- "driver parameters\n");
- return 0;
-
- /*
- * Something bad happened. Return the error ...
- */
-bye:
- return ret;
-}
-
static struct fw_info fw_info_array[] = {
{
.chip = CHELSIO_T4,
enum dev_state state;
u32 params[7], val[7];
struct fw_caps_config_cmd caps_cmd;
+ struct fw_devlog_cmd devlog_cmd;
+ u32 devlog_meminfo;
int reset = 1;
/* Contact FW, advertising Master capability */
if (ret < 0)
goto bye;
+ /* Read firmware device log parameters. We really need to find a way
+ * to get these parameters initialized with some default values (which
+ * are likely to be correct) for the case where we either don't
+ * attach to the firmware or it has crashed when we probe the adapter.
+ * That way we'll still be able to perform early firmware startup
+ * debugging. If the request for the firmware's device log parameters
+ * fails, we can live with that, so we don't make it a fatal error.
+ */
+ memset(&devlog_cmd, 0, sizeof(devlog_cmd));
+ devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) |
+ FW_CMD_REQUEST_F | FW_CMD_READ_F);
+ devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd));
+ ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd),
+ &devlog_cmd);
+ if (ret == 0) {
+ devlog_meminfo =
+ ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog);
+ adap->params.devlog.memtype =
+ FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo);
+ adap->params.devlog.start =
+ FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4;
+ adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog);
+ }
+
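
A note on the address arithmetic above: the firmware reports the devlog base
in 16-byte units, so the driver shifts it left by four to obtain a byte
address. Illustration only, with a made-up field value:

	/* e.g. a MEMADDR16 reading of 0x1234 gives byte address 0x12340 */
	static inline u32 devlog_byte_addr(u32 memaddr16)
	{
		return memaddr16 << 4;
	}
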
/*
- * Find out what ports are available to us. Note that we need to do
- * this before calling adap_init0_no_config() since it needs nports
+ * Find out what ports are available to us. Note that we need to do
+ * this early since subsequent initialization needs nports
adap->params.nports = hweight32(port_vec);
adap->params.portvec = port_vec;
- /*
- * If the firmware is initialized already (and we're not forcing a
- * master initialization), note that we're living with existing
- * adapter parameters. Otherwise, it's time to try initializing the
- * adapter ...
+ /* If the firmware is initialized already, emit a simple note to that
+ * effect. Otherwise, it's time to try initializing the adapter.
*/
if (state == DEV_STATE_INIT) {
dev_info(adap->pdev_dev, "Coming up as %s: "\
"Adapter already initialized\n",
adap->flags & MASTER_PF ? "MASTER" : "SLAVE");
- adap->flags |= USING_SOFT_PARAMS;
} else {
dev_info(adap->pdev_dev, "Coming up as MASTER: "\
"Initializing adapter\n");
- /*
- * If the firmware doesn't support Configuration
- * Files warn user and exit,
+
+ /* Find out whether we're dealing with a version of the
+ * firmware which has configuration file support.
*/
- if (ret < 0)
- dev_warn(adap->pdev_dev, "Firmware doesn't support "
- "configuration file.\n");
- if (force_old_init)
- ret = adap_init0_no_config(adap, reset);
- else {
- /*
- * Find out whether we're dealing with a version of
- * the firmware which has configuration file support.
- */
- params[0] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
- FW_PARAMS_PARAM_X_V(
- FW_PARAMS_PARAM_DEV_CF));
- ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 1,
- params, val);
-
- /*
- * If the firmware doesn't support Configuration
- * Files, use the old Driver-based, hard-wired
- * initialization. Otherwise, try using the
- * Configuration File support and fall back to the
- * Driver-based initialization if there's no
- * Configuration File found.
- */
- if (ret < 0)
- ret = adap_init0_no_config(adap, reset);
- else {
- /*
- * The firmware provides us with a memory
- * buffer where we can load a Configuration
- * File from the host if we want to override
- * the Configuration File in flash.
- */
+ params[0] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_CF));
+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 1,
+ params, val);
- ret = adap_init0_config(adap, reset);
- if (ret == -ENOENT) {
- dev_info(adap->pdev_dev,
- "No Configuration File present "
- "on adapter. Using hard-wired "
- "configuration parameters.\n");
- ret = adap_init0_no_config(adap, reset);
- }
- }
+ /* If the firmware doesn't support Configuration Files,
+ * return an error.
+ */
+ if (ret < 0) {
+ dev_err(adap->pdev_dev, "firmware doesn't support "
+ "Firmware Configuration Files\n");
+ goto bye;
+ }
+
+ /* The firmware provides us with a memory buffer where we can
+ * load a Configuration File from the host if we want to
+ * override the Configuration File in flash.
+ */
+ ret = adap_init0_config(adap, reset);
+ if (ret == -ENOENT) {
+ dev_err(adap->pdev_dev, "no Configuration File "
+ "present on adapter.\n");
+ goto bye;
}
if (ret < 0) {
- dev_err(adap->pdev_dev,
- "could not initialize adapter, error %d\n",
- -ret);
+ dev_err(adap->pdev_dev, "could not initialize "
+ "adapter, error %d\n", -ret);
goto bye;
}
}
- /*
- * If we're living with non-hard-coded parameters (either from a
- * Firmware Configuration File or values programmed by a different PF
- * Driver), give the SGE code a chance to pull in anything that it
- * needs ... Note that this must be called after we retrieve our VPD
- * parameters in order to know how to convert core ticks to seconds.
+ /* Give the SGE code a chance to pull in anything that it needs ...
+ * Note that this must be called after we retrieve our VPD parameters
+ * in order to know how to convert core ticks to seconds, etc.
*/
- if (adap->flags & USING_SOFT_PARAMS) {
- ret = t4_sge_init(adap);
- if (ret < 0)
- goto bye;
- }
+ ret = t4_sge_init(adap);
+ if (ret < 0)
+ goto bye;
if (is_bypass_device(adap->pdev->device))
adap->params.bypass = 1;
adap->tids.nftids = val[4] - val[3] + 1;
adap->sge.ingr_start = val[5];
+ params[0] = FW_PARAM_PFVF(CLIP_START);
+ params[1] = FW_PARAM_PFVF(CLIP_END);
+ ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val);
+ if (ret < 0)
+ goto bye;
+ adap->clipt_start = val[0];
+ adap->clipt_end = val[1];
+
/* query params related to active filter region */
params[0] = FW_PARAM_PFVF(ACTIVE_FILTER_START);
params[1] = FW_PARAM_PFVF(ACTIVE_FILTER_END);
goto out_unmap_bar0;
/* We control everything through one PF */
- func = SOURCEPF_GET(readl(regs + PL_WHOAMI));
+ func = SOURCEPF_G(readl(regs + PL_WHOAMI_A));
if (func != ent->driver_data) {
iounmap(regs);
pci_disable_device(pdev);
if (!is_t4(adapter->params.chip)) {
- s_qpp = QUEUESPERPAGEPF1 * adapter->fn;
- qpp = 1 << QUEUESPERPAGEPF0_GET(t4_read_reg(adapter,
- SGE_EGRESS_QUEUES_PER_PAGE_PF) >> s_qpp);
+ s_qpp = (QUEUESPERPAGEPF0_S +
+ (QUEUESPERPAGEPF1_S - QUEUESPERPAGEPF0_S) *
+ adapter->fn);
+ qpp = 1 << QUEUESPERPAGEPF0_G(t4_read_reg(adapter,
+ SGE_EGRESS_QUEUES_PER_PAGE_PF_A) >> s_qpp);
num_seg = PAGE_SIZE / SEGMENT_SIZE;
/* Each segment size is 128B. Write coalescing is enabled only
adapter->params.offload = 0;
}
+#if IS_ENABLED(CONFIG_IPV6)
+ adapter->clipt = t4_init_clip_tbl(adapter->clipt_start,
+ adapter->clipt_end);
+ if (!adapter->clipt) {
+ /* We tolerate a lack of the CLIP table, giving up
+ * some functionality.
+ */
+ dev_warn(&pdev->dev,
+ "could not allocate Clip table, continuing\n");
+ adapter->params.offload = 0;
+ }
+#endif
if (is_offload(adapter) && tid_init(&adapter->tids) < 0) {
dev_warn(&pdev->dev, "could not allocate TID table, "
"continuing\n");
cxgb_down(adapter);
free_some_resources(adapter);
+#if IS_ENABLED(CONFIG_IPV6)
+ t4_cleanup_clip_tbl(adapter);
+#endif
iounmap(adapter->regs);
if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
debugfs_remove(cxgb4_debugfs_root);
#if IS_ENABLED(CONFIG_IPV6)
- register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+ if (!inet6addr_registered) {
+ register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+ inet6addr_registered = true;
+ }
#endif
return ret;
static void __exit cxgb4_cleanup_module(void)
{
#if IS_ENABLED(CONFIG_IPV6)
- unregister_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+ if (inet6addr_registered) {
+ unregister_inet6addr_notifier(&cxgb4_inet6addr_notifier);
+ inet6addr_registered = false;
+ }
#endif
pci_unregister_driver(&cxgb4_driver);
debugfs_remove(cxgb4_debugfs_root); /* NULL ok */
unsigned char port, unsigned char mask);
int cxgb4_remove_server_filter(const struct net_device *dev, unsigned int stid,
unsigned int queue, bool ipv6);
-int cxgb4_clip_get(const struct net_device *dev, const struct in6_addr *lip);
-int cxgb4_clip_release(const struct net_device *dev,
- const struct in6_addr *lip);
static inline void set_wr_txq(struct sk_buff *skb, int prio, int queue)
{
#include "t4_msg.h"
#include "t4fw_api.h"
#include "t4_regs.h"
+#include "t4_values.h"
#define VLAN_NONE 0xfff
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_L2T_WRITE_REQ,
e->idx | (sync ? F_SYNC_WR : 0) |
- TID_QID(adap->sge.fw_evtq.abs_id)));
- req->params = htons(L2T_W_PORT(e->lport) | L2T_W_NOREPLY(!sync));
+ TID_QID_V(adap->sge.fw_evtq.abs_id)));
+ req->params = htons(L2T_W_PORT_V(e->lport) | L2T_W_NOREPLY_V(!sync));
req->l2t_idx = htons(e->idx);
req->vlan = htons(e->vlan);
if (e->neigh && !(e->neigh->dev->flags & IFF_LOOPBACK))
* in the Compressed Filter Tuple.
*/
if (tp->vlan_shift >= 0 && l2t->vlan != VLAN_NONE)
- ntuple |= (u64)(F_FT_VLAN_VLD | l2t->vlan) << tp->vlan_shift;
+ ntuple |= (u64)(FT_VLAN_VLD_F | l2t->vlan) << tp->vlan_shift;
if (tp->port_shift >= 0)
ntuple |= (u64)l2t->lport << tp->port_shift;
u32 pf = FW_VIID_PFN_G(viid);
u32 vld = FW_VIID_VIVLD_G(viid);
- ntuple |= (u64)(V_FT_VNID_ID_VF(vf) |
- V_FT_VNID_ID_PF(pf) |
- V_FT_VNID_ID_VLD(vld)) << tp->vnic_shift;
+ ntuple |= (u64)(FT_VNID_ID_VF_V(vf) |
+ FT_VNID_ID_PF_V(pf) |
+ FT_VNID_ID_VLD_V(vld)) << tp->vnic_shift;
}
return ntuple;
#include <net/tcp.h>
#include "cxgb4.h"
#include "t4_regs.h"
+#include "t4_values.h"
#include "t4_msg.h"
#include "t4fw_api.h"
{
u32 val;
if (q->pend_cred >= 8) {
- val = PIDX(q->pend_cred / 8);
- if (!is_t4(adap->params.chip))
- val |= DBTYPE(1);
- val |= DBPRIO(1);
+ if (is_t4(adap->params.chip))
+ val = PIDX_V(q->pend_cred / 8);
+ else
+ val = PIDX_T5_V(q->pend_cred / 8) |
+ DBTYPE_F;
+ val |= DBPRIO_F;
wmb();
	/* If we don't have access to the new User Doorbell (T5+), use
	 * the old doorbell mechanism; otherwise use the new BAR2
	 * mechanism.
	 */
if (unlikely(q->bar2_addr == NULL)) {
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- val | QID(q->cntxt_id));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
+ val | QID_V(q->cntxt_id));
} else {
- writel(val | QID(q->bar2_qid),
+ writel(val | QID_V(q->bar2_qid),
q->bar2_addr + SGE_UDB_KDOORBELL);
		/* This Write memory Barrier will force the write to
		 * the User Doorbell area to be flushed.
		 */
sgl->addr0 = cpu_to_be64(addr[1]);
}
- sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) | ULPTX_NSGE(nfrags));
+ sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) |
+ ULPTX_NSGE_V(nfrags));
if (likely(--nfrags == 0))
return;
	/*
	 * If we don't have access to the new User Doorbell (T5+), use the old
	 * doorbell mechanism; otherwise use the new BAR2 mechanism.
	 */
if (unlikely(q->bar2_addr == NULL)) {
- u32 val = PIDX(n);
+ u32 val = PIDX_V(n);
unsigned long flags;
/* For T4 we need to participate in the Doorbell Recovery
*/
spin_lock_irqsave(&q->db_lock, flags);
if (!q->db_disabled)
- t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL),
- QID(q->cntxt_id) | val);
+ t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
+ QID_V(q->cntxt_id) | val);
else
q->db_pidx_inc += n;
q->db_pidx = q->pidx;
spin_unlock_irqrestore(&q->db_lock, flags);
} else {
- u32 val = PIDX_T5(n);
+ u32 val = PIDX_T5_V(n);
/* T4 and later chips share the same PIDX field offset within
		 * the doorbell, but T5 and later shrank the field in order to
		 * gain a bit for Doorbell Priority.  The field was absurdly
		 * large in the first place (14 bits) so we just use the T5
* and later limits and warn if a Queue ID is too large.
*/
- WARN_ON(val & DBPRIO(1));
+ WARN_ON(val & DBPRIO_F);
/* If we're only writing a single TX Descriptor and we can use
		 * Inferred QID registers, we can use the Write Combining
		 * Gather Buffer; otherwise we use the simple doorbell.
		 */
(q->bar2_addr + SGE_UDB_WCDOORBELL),
wr);
} else {
- writel(val | QID(q->bar2_qid),
+ writel(val | QID_V(q->bar2_qid),
q->bar2_addr + SGE_UDB_KDOORBELL);
}
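Both doorbell paths above share one shape: pick the PIDX encoding by chip generation, then ring either the per-PF kernel doorbell or the BAR2 user doorbell. A condensed sketch under those assumptions (the helper name is hypothetical, and the real paths additionally set DBTYPE_F/DBPRIO_F and participate in doorbell recovery):

	static inline void ring_db(struct adapter *adap, struct sge_txq *q, int n)
	{
		u32 val = is_t4(adap->params.chip) ? PIDX_V(n) : PIDX_T5_V(n);

		wmb();		/* descriptor writes must be visible first */
		if (unlikely(q->bar2_addr == NULL))	/* legacy kernel doorbell */
			t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
				     val | QID_V(q->cntxt_id));
		else					/* BAR2 inferred-QID doorbell */
			writel(val | QID_V(q->bar2_qid),
			       q->bar2_addr + SGE_UDB_KDOORBELL);
	}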
cntrl = TXPKT_L4CSUM_DIS | TXPKT_IPCSUM_DIS;
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
q->vlan_ins++;
- cntrl |= TXPKT_VLAN_VLD | TXPKT_VLAN(vlan_tx_tag_get(skb));
+ cntrl |= TXPKT_VLAN_VLD | TXPKT_VLAN(skb_vlan_tag_get(skb));
}
cpl->ctrl0 = htonl(TXPKT_OPCODE(CPL_TX_PKT_XT) |
pkt = (const struct cpl_rx_pkt *)rsp;
csum_ok = pkt->csum_calc && !pkt->err_vec &&
(q->netdev->features & NETIF_F_RXCSUM);
- if ((pkt->l2info & htonl(RXF_TCP)) &&
+ if ((pkt->l2info & htonl(RXF_TCP_F)) &&
(q->netdev->features & NETIF_F_GRO) && csum_ok && !pkt->ip_frag) {
do_gro(rxq, si, pkt);
return 0;
rxq->stats.pkts++;
- if (csum_ok && (pkt->l2info & htonl(RXF_UDP | RXF_TCP))) {
+ if (csum_ok && (pkt->l2info & htonl(RXF_UDP_F | RXF_TCP_F))) {
if (!pkt->ip_frag) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
rxq->stats.rx_cso++;
- } else if (pkt->l2info & htonl(RXF_IP)) {
+ } else if (pkt->l2info & htonl(RXF_IP_F)) {
__sum16 c = (__force __sum16)pkt->csum;
skb->csum = csum_unfold(c);
skb->ip_summed = CHECKSUM_COMPLETE;
} else
params = QINTR_TIMER_IDX(7);
- val = CIDXINC(work_done) | SEINTARM(params);
+ val = CIDXINC_V(work_done) | SEINTARM_V(params);
/* If we don't have access to the new User GTS (T5+), use the old
* doorbell mechanism; otherwise use the new BAR2 mechanism.
*/
if (unlikely(q->bar2_addr == NULL)) {
- t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS),
- val | INGRESSQID((u32)q->cntxt_id));
+ t4_write_reg(q->adap, MYPF_REG(SGE_PF_GTS_A),
+ val | INGRESSQID_V((u32)q->cntxt_id));
} else {
- writel(val | INGRESSQID(q->bar2_qid),
+ writel(val | INGRESSQID_V(q->bar2_qid),
q->bar2_addr + SGE_UDB_GTS);
wmb();
}
rspq_next(q);
}
- val = CIDXINC(credits) | SEINTARM(q->intr_params);
+ val = CIDXINC_V(credits) | SEINTARM_V(q->intr_params);
/* If we don't have access to the new User GTS (T5+), use the old
* doorbell mechanism; otherwise use the new BAR2 mechanism.
*/
if (unlikely(q->bar2_addr == NULL)) {
- t4_write_reg(adap, MYPF_REG(SGE_PF_GTS),
- val | INGRESSQID(q->cntxt_id));
+ t4_write_reg(adap, MYPF_REG(SGE_PF_GTS_A),
+ val | INGRESSQID_V(q->cntxt_id));
} else {
- writel(val | INGRESSQID(q->bar2_qid),
+ writel(val | INGRESSQID_V(q->bar2_qid),
q->bar2_addr + SGE_UDB_GTS);
wmb();
}
{
struct adapter *adap = cookie;
- t4_write_reg(adap, MYPF_REG(PCIE_PF_CLI), 0);
+ t4_write_reg(adap, MYPF_REG(PCIE_PF_CLI_A), 0);
if (t4_slow_intr_handler(adap) | process_intrq(adap))
return IRQ_HANDLED;
return IRQ_NONE; /* probably shared interrupt */
}
}
- t4_write_reg(adap, SGE_DEBUG_INDEX, 13);
- idma_same_state_cnt[0] = t4_read_reg(adap, SGE_DEBUG_DATA_HIGH);
- idma_same_state_cnt[1] = t4_read_reg(adap, SGE_DEBUG_DATA_LOW);
+ t4_write_reg(adap, SGE_DEBUG_INDEX_A, 13);
+ idma_same_state_cnt[0] = t4_read_reg(adap, SGE_DEBUG_DATA_HIGH_A);
+ idma_same_state_cnt[1] = t4_read_reg(adap, SGE_DEBUG_DATA_LOW_A);
for (i = 0; i < 2; i++) {
u32 debug0, debug11;
/* Read and save the SGE IDMA State and Queue ID information.
* We do this every time in case it changes across time ...
*/
- t4_write_reg(adap, SGE_DEBUG_INDEX, 0);
- debug0 = t4_read_reg(adap, SGE_DEBUG_DATA_LOW);
+ t4_write_reg(adap, SGE_DEBUG_INDEX_A, 0);
+ debug0 = t4_read_reg(adap, SGE_DEBUG_DATA_LOW_A);
s->idma_state[i] = (debug0 >> (i * 9)) & 0x3f;
- t4_write_reg(adap, SGE_DEBUG_INDEX, 11);
- debug11 = t4_read_reg(adap, SGE_DEBUG_DATA_LOW);
+ t4_write_reg(adap, SGE_DEBUG_INDEX_A, 11);
+ debug11 = t4_read_reg(adap, SGE_DEBUG_DATA_LOW_A);
s->idma_qid[i] = (debug11 >> (i * 16)) & 0xffff;
CH_WARN(adap, "SGE idma%u, queue%u, maybe stuck state%u %dsecs (debug0=%#x, debug11=%#x)\n",
}
/**
- * t4_sge_init - initialize SGE
+ * t4_sge_init_soft - grab core SGE values needed by SGE code
* @adap: the adapter
*
- * Performs SGE initialization needed every time after a chip reset.
- * We do not initialize any of the queues here, instead the driver
- * top-level must request them individually.
- *
- * Called in two different modes:
- *
- * 1. Perform actual hardware initialization and record hard-coded
- * parameters which were used. This gets used when we're the
- * Master PF and the Firmware Configuration File support didn't
- * work for some reason.
- *
- * 2. We're not the Master PF or initialization was performed with
- * a Firmware Configuration File. In this case we need to grab
- * any of the SGE operating parameters that we need to have in
- * order to do our job and make sure we can live with them ...
+ *	Grab the SGE operating parameters that we need to do our job and
+ *	make sure we can live with them.
*/
static int t4_sge_init_soft(struct adapter *adap)
* process_responses() and that only packet data is going to the
* Free Lists.
*/
- if ((t4_read_reg(adap, SGE_CONTROL) & RXPKTCPLMODE_MASK) !=
- RXPKTCPLMODE(X_RXPKTCPLMODE_SPLIT)) {
+ if ((t4_read_reg(adap, SGE_CONTROL_A) & RXPKTCPLMODE_F) !=
+ RXPKTCPLMODE_V(RXPKTCPLMODE_SPLIT_X)) {
dev_err(adap->pdev_dev, "bad SGE CPL MODE\n");
return -EINVAL;
}
* XXX meet our needs!
*/
#define READ_FL_BUF(x) \
- t4_read_reg(adap, SGE_FL_BUFFER_SIZE0+(x)*sizeof(u32))
+ t4_read_reg(adap, SGE_FL_BUFFER_SIZE0_A+(x)*sizeof(u32))
fl_small_pg = READ_FL_BUF(RX_SMALL_PG_BUF);
fl_large_pg = READ_FL_BUF(RX_LARGE_PG_BUF);
* Retrieve our RX interrupt holdoff timer values and counter
* threshold values from the SGE parameters.
*/
- timer_value_0_and_1 = t4_read_reg(adap, SGE_TIMER_VALUE_0_AND_1);
- timer_value_2_and_3 = t4_read_reg(adap, SGE_TIMER_VALUE_2_AND_3);
- timer_value_4_and_5 = t4_read_reg(adap, SGE_TIMER_VALUE_4_AND_5);
+ timer_value_0_and_1 = t4_read_reg(adap, SGE_TIMER_VALUE_0_AND_1_A);
+ timer_value_2_and_3 = t4_read_reg(adap, SGE_TIMER_VALUE_2_AND_3_A);
+ timer_value_4_and_5 = t4_read_reg(adap, SGE_TIMER_VALUE_4_AND_5_A);
s->timer_val[0] = core_ticks_to_us(adap,
- TIMERVALUE0_GET(timer_value_0_and_1));
+ TIMERVALUE0_G(timer_value_0_and_1));
s->timer_val[1] = core_ticks_to_us(adap,
- TIMERVALUE1_GET(timer_value_0_and_1));
+ TIMERVALUE1_G(timer_value_0_and_1));
s->timer_val[2] = core_ticks_to_us(adap,
- TIMERVALUE2_GET(timer_value_2_and_3));
+ TIMERVALUE2_G(timer_value_2_and_3));
s->timer_val[3] = core_ticks_to_us(adap,
- TIMERVALUE3_GET(timer_value_2_and_3));
+ TIMERVALUE3_G(timer_value_2_and_3));
s->timer_val[4] = core_ticks_to_us(adap,
- TIMERVALUE4_GET(timer_value_4_and_5));
+ TIMERVALUE4_G(timer_value_4_and_5));
s->timer_val[5] = core_ticks_to_us(adap,
- TIMERVALUE5_GET(timer_value_4_and_5));
+ TIMERVALUE5_G(timer_value_4_and_5));
- ingress_rx_threshold = t4_read_reg(adap, SGE_INGRESS_RX_THRESHOLD);
- s->counter_val[0] = THRESHOLD_0_GET(ingress_rx_threshold);
- s->counter_val[1] = THRESHOLD_1_GET(ingress_rx_threshold);
- s->counter_val[2] = THRESHOLD_2_GET(ingress_rx_threshold);
- s->counter_val[3] = THRESHOLD_3_GET(ingress_rx_threshold);
-
- return 0;
-}
-
-static int t4_sge_init_hard(struct adapter *adap)
-{
- struct sge *s = &adap->sge;
-
- /*
- * Set up our basic SGE mode to deliver CPL messages to our Ingress
- * Queue and Packet Date to the Free List.
- */
- t4_set_reg_field(adap, SGE_CONTROL, RXPKTCPLMODE_MASK,
- RXPKTCPLMODE_MASK);
-
- /*
- * Set up to drop DOORBELL writes when the DOORBELL FIFO overflows
- * and generate an interrupt when this occurs so we can recover.
- */
- if (is_t4(adap->params.chip)) {
- t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS,
- V_HP_INT_THRESH(M_HP_INT_THRESH) |
- V_LP_INT_THRESH(M_LP_INT_THRESH),
- V_HP_INT_THRESH(dbfifo_int_thresh) |
- V_LP_INT_THRESH(dbfifo_int_thresh));
- } else {
- t4_set_reg_field(adap, A_SGE_DBFIFO_STATUS,
- V_LP_INT_THRESH_T5(M_LP_INT_THRESH_T5),
- V_LP_INT_THRESH_T5(dbfifo_int_thresh));
- t4_set_reg_field(adap, SGE_DBFIFO_STATUS2,
- V_HP_INT_THRESH_T5(M_HP_INT_THRESH_T5),
- V_HP_INT_THRESH_T5(dbfifo_int_thresh));
- }
- t4_set_reg_field(adap, A_SGE_DOORBELL_CONTROL, F_ENABLE_DROP,
- F_ENABLE_DROP);
-
- /*
- * SGE_FL_BUFFER_SIZE0 (RX_SMALL_PG_BUF) is set up by
- * t4_fixup_host_params().
- */
- s->fl_pg_order = FL_PG_ORDER;
- if (s->fl_pg_order)
- t4_write_reg(adap,
- SGE_FL_BUFFER_SIZE0+RX_LARGE_PG_BUF*sizeof(u32),
- PAGE_SIZE << FL_PG_ORDER);
- t4_write_reg(adap, SGE_FL_BUFFER_SIZE0+RX_SMALL_MTU_BUF*sizeof(u32),
- FL_MTU_SMALL_BUFSIZE(adap));
- t4_write_reg(adap, SGE_FL_BUFFER_SIZE0+RX_LARGE_MTU_BUF*sizeof(u32),
- FL_MTU_LARGE_BUFSIZE(adap));
-
- /*
- * Note that the SGE Ingress Packet Count Interrupt Threshold and
- * Timer Holdoff values must be supplied by our caller.
- */
- t4_write_reg(adap, SGE_INGRESS_RX_THRESHOLD,
- THRESHOLD_0(s->counter_val[0]) |
- THRESHOLD_1(s->counter_val[1]) |
- THRESHOLD_2(s->counter_val[2]) |
- THRESHOLD_3(s->counter_val[3]));
- t4_write_reg(adap, SGE_TIMER_VALUE_0_AND_1,
- TIMERVALUE0(us_to_core_ticks(adap, s->timer_val[0])) |
- TIMERVALUE1(us_to_core_ticks(adap, s->timer_val[1])));
- t4_write_reg(adap, SGE_TIMER_VALUE_2_AND_3,
- TIMERVALUE2(us_to_core_ticks(adap, s->timer_val[2])) |
- TIMERVALUE3(us_to_core_ticks(adap, s->timer_val[3])));
- t4_write_reg(adap, SGE_TIMER_VALUE_4_AND_5,
- TIMERVALUE4(us_to_core_ticks(adap, s->timer_val[4])) |
- TIMERVALUE5(us_to_core_ticks(adap, s->timer_val[5])));
+ ingress_rx_threshold = t4_read_reg(adap, SGE_INGRESS_RX_THRESHOLD_A);
+ s->counter_val[0] = THRESHOLD_0_G(ingress_rx_threshold);
+ s->counter_val[1] = THRESHOLD_1_G(ingress_rx_threshold);
+ s->counter_val[2] = THRESHOLD_2_G(ingress_rx_threshold);
+ s->counter_val[3] = THRESHOLD_3_G(ingress_rx_threshold);
return 0;
}
+/**
+ * t4_sge_init - initialize SGE
+ * @adap: the adapter
+ *
+ * Perform low-level SGE code initialization needed every time after a
+ * chip reset.
+ */
int t4_sge_init(struct adapter *adap)
{
struct sge *s = &adap->sge;
* Ingress Padding Boundary and Egress Status Page Size are set up by
* t4_fixup_host_params().
*/
- sge_control = t4_read_reg(adap, SGE_CONTROL);
- s->pktshift = PKTSHIFT_GET(sge_control);
- s->stat_len = (sge_control & EGRSTATUSPAGESIZE_MASK) ? 128 : 64;
+ sge_control = t4_read_reg(adap, SGE_CONTROL_A);
+ s->pktshift = PKTSHIFT_G(sge_control);
+ s->stat_len = (sge_control & EGRSTATUSPAGESIZE_F) ? 128 : 64;
/* T4 uses a single control field to specify both the PCIe Padding and
* Packing Boundary. T5 introduced the ability to specify these
* within Packed Buffer Mode is the maximum of these two
* specifications.
*/
- ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_control) +
- X_INGPADBOUNDARY_SHIFT);
+ ingpadboundary = 1 << (INGPADBOUNDARY_G(sge_control) +
+ INGPADBOUNDARY_SHIFT_X);
if (is_t4(adap->params.chip)) {
s->fl_align = ingpadboundary;
} else {
s->fl_align = max(ingpadboundary, ingpackboundary);
}
- if (adap->flags & USING_SOFT_PARAMS)
- ret = t4_sge_init_soft(adap);
- else
- ret = t4_sge_init_hard(adap);
+ ret = t4_sge_init_soft(adap);
if (ret < 0)
return ret;
* buffers and a new field which only applies to Packed Mode Free List
* buffers.
*/
- sge_conm_ctrl = t4_read_reg(adap, SGE_CONM_CTRL);
+ sge_conm_ctrl = t4_read_reg(adap, SGE_CONM_CTRL_A);
if (is_t4(adap->params.chip))
- egress_threshold = EGRTHRESHOLD_GET(sge_conm_ctrl);
+ egress_threshold = EGRTHRESHOLD_G(sge_conm_ctrl);
else
- egress_threshold = EGRTHRESHOLDPACKING_GET(sge_conm_ctrl);
+ egress_threshold = EGRTHRESHOLDPACKING_G(sge_conm_ctrl);
s->fl_starve_thres = 2*egress_threshold + 1;
setup_timer(&s->rx_timer, sge_rx_timer_cb, (unsigned long)adap);
#include <linux/delay.h>
#include "cxgb4.h"
#include "t4_regs.h"
+#include "t4_values.h"
#include "t4fw_api.h"
/**
*/
void t4_hw_pci_read_cfg4(struct adapter *adap, int reg, u32 *val)
{
- u32 req = ENABLE | FUNCTION(adap->fn) | reg;
+ u32 req = ENABLE_F | FUNCTION_V(adap->fn) | REGISTER_V(reg);
if (is_t4(adap->params.chip))
- req |= F_LOCALCFG;
+ req |= LOCALCFG_F;
- t4_write_reg(adap, PCIE_CFG_SPACE_REQ, req);
- *val = t4_read_reg(adap, PCIE_CFG_SPACE_DATA);
+ t4_write_reg(adap, PCIE_CFG_SPACE_REQ_A, req);
+ *val = t4_read_reg(adap, PCIE_CFG_SPACE_DATA_A);
/* Reset ENABLE to 0 so reads of PCIE_CFG_SPACE_DATA won't cause a
* Configuration Space read. (None of the other fields matter when
* ENABLE is 0 so a simple register write is easier than a
* read-modify-write via t4_set_reg_field().)
*/
- t4_write_reg(adap, PCIE_CFG_SPACE_REQ, 0);
+ t4_write_reg(adap, PCIE_CFG_SPACE_REQ_A, 0);
}
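As a hedged usage sketch, reading a standard config-space dword through this backdoor (PCI_VENDOR_ID is the usual offset constant from <linux/pci_regs.h>):

	u32 id;

	t4_hw_pci_read_cfg4(adap, PCI_VENDOR_ID, &id);
	dev_info(adap->pdev_dev, "backdoor config read: vendor/device %#x\n", id);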
/*
};
u32 pcie_fw;
- pcie_fw = t4_read_reg(adap, MA_PCIE_FW);
- if (pcie_fw & PCIE_FW_ERR)
+ pcie_fw = t4_read_reg(adap, PCIE_FW_A);
+ if (pcie_fw & PCIE_FW_ERR_F)
dev_err(adap->pdev_dev, "Firmware reports adapter error: %s\n",
reason[PCIE_FW_EVAL_G(pcie_fw)]);
}
u64 res;
int i, ms, delay_idx;
const __be64 *p = cmd;
- u32 data_reg = PF_REG(mbox, CIM_PF_MAILBOX_DATA);
- u32 ctl_reg = PF_REG(mbox, CIM_PF_MAILBOX_CTRL);
+ u32 data_reg = PF_REG(mbox, CIM_PF_MAILBOX_DATA_A);
+ u32 ctl_reg = PF_REG(mbox, CIM_PF_MAILBOX_CTRL_A);
if ((size & 15) || size > MBOX_LEN)
return -EINVAL;
if (adap->pdev->error_state != pci_channel_io_normal)
return -EIO;
- v = MBOWNER_GET(t4_read_reg(adap, ctl_reg));
+ v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
- v = MBOWNER_GET(t4_read_reg(adap, ctl_reg));
+ v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
if (v != MBOX_OWNER_DRV)
return v ? -EBUSY : -ETIMEDOUT;
for (i = 0; i < size; i += 8)
t4_write_reg64(adap, data_reg + i, be64_to_cpu(*p++));
- t4_write_reg(adap, ctl_reg, MBMSGVALID | MBOWNER(MBOX_OWNER_FW));
+ t4_write_reg(adap, ctl_reg, MBMSGVALID_F | MBOWNER_V(MBOX_OWNER_FW));
t4_read_reg(adap, ctl_reg); /* flush write */
delay_idx = 0;
mdelay(ms);
v = t4_read_reg(adap, ctl_reg);
- if (MBOWNER_GET(v) == MBOX_OWNER_DRV) {
- if (!(v & MBMSGVALID)) {
+ if (MBOWNER_G(v) == MBOX_OWNER_DRV) {
+ if (!(v & MBMSGVALID_F)) {
t4_write_reg(adap, ctl_reg, 0);
continue;
}
u32 mc_bist_status_rdata, mc_bist_data_pattern;
if (is_t4(adap->params.chip)) {
- mc_bist_cmd = MC_BIST_CMD;
- mc_bist_cmd_addr = MC_BIST_CMD_ADDR;
- mc_bist_cmd_len = MC_BIST_CMD_LEN;
- mc_bist_status_rdata = MC_BIST_STATUS_RDATA;
- mc_bist_data_pattern = MC_BIST_DATA_PATTERN;
+ mc_bist_cmd = MC_BIST_CMD_A;
+ mc_bist_cmd_addr = MC_BIST_CMD_ADDR_A;
+ mc_bist_cmd_len = MC_BIST_CMD_LEN_A;
+ mc_bist_status_rdata = MC_BIST_STATUS_RDATA_A;
+ mc_bist_data_pattern = MC_BIST_DATA_PATTERN_A;
} else {
- mc_bist_cmd = MC_REG(MC_P_BIST_CMD, idx);
- mc_bist_cmd_addr = MC_REG(MC_P_BIST_CMD_ADDR, idx);
- mc_bist_cmd_len = MC_REG(MC_P_BIST_CMD_LEN, idx);
- mc_bist_status_rdata = MC_REG(MC_P_BIST_STATUS_RDATA, idx);
- mc_bist_data_pattern = MC_REG(MC_P_BIST_DATA_PATTERN, idx);
+ mc_bist_cmd = MC_REG(MC_P_BIST_CMD_A, idx);
+ mc_bist_cmd_addr = MC_REG(MC_P_BIST_CMD_ADDR_A, idx);
+ mc_bist_cmd_len = MC_REG(MC_P_BIST_CMD_LEN_A, idx);
+ mc_bist_status_rdata = MC_REG(MC_P_BIST_STATUS_RDATA_A, idx);
+ mc_bist_data_pattern = MC_REG(MC_P_BIST_DATA_PATTERN_A, idx);
}
- if (t4_read_reg(adap, mc_bist_cmd) & START_BIST)
+ if (t4_read_reg(adap, mc_bist_cmd) & START_BIST_F)
return -EBUSY;
t4_write_reg(adap, mc_bist_cmd_addr, addr & ~0x3fU);
t4_write_reg(adap, mc_bist_cmd_len, 64);
t4_write_reg(adap, mc_bist_data_pattern, 0xc);
- t4_write_reg(adap, mc_bist_cmd, BIST_OPCODE(1) | START_BIST |
- BIST_CMD_GAP(1));
- i = t4_wait_op_done(adap, mc_bist_cmd, START_BIST, 0, 10, 1);
+ t4_write_reg(adap, mc_bist_cmd, BIST_OPCODE_V(1) | START_BIST_F |
+ BIST_CMD_GAP_V(1));
+ i = t4_wait_op_done(adap, mc_bist_cmd, START_BIST_F, 0, 10, 1);
if (i)
return i;
u32 edc_bist_cmd_data_pattern, edc_bist_status_rdata;
if (is_t4(adap->params.chip)) {
- edc_bist_cmd = EDC_REG(EDC_BIST_CMD, idx);
- edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR, idx);
- edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN, idx);
- edc_bist_cmd_data_pattern = EDC_REG(EDC_BIST_DATA_PATTERN,
- idx);
- edc_bist_status_rdata = EDC_REG(EDC_BIST_STATUS_RDATA,
+ edc_bist_cmd = EDC_REG(EDC_BIST_CMD_A, idx);
+ edc_bist_cmd_addr = EDC_REG(EDC_BIST_CMD_ADDR_A, idx);
+ edc_bist_cmd_len = EDC_REG(EDC_BIST_CMD_LEN_A, idx);
+ edc_bist_cmd_data_pattern = EDC_REG(EDC_BIST_DATA_PATTERN_A,
idx);
+ edc_bist_status_rdata = EDC_REG(EDC_BIST_STATUS_RDATA_A,
+ idx);
} else {
- edc_bist_cmd = EDC_REG_T5(EDC_H_BIST_CMD, idx);
- edc_bist_cmd_addr = EDC_REG_T5(EDC_H_BIST_CMD_ADDR, idx);
- edc_bist_cmd_len = EDC_REG_T5(EDC_H_BIST_CMD_LEN, idx);
+ edc_bist_cmd = EDC_REG_T5(EDC_H_BIST_CMD_A, idx);
+ edc_bist_cmd_addr = EDC_REG_T5(EDC_H_BIST_CMD_ADDR_A, idx);
+ edc_bist_cmd_len = EDC_REG_T5(EDC_H_BIST_CMD_LEN_A, idx);
edc_bist_cmd_data_pattern =
- EDC_REG_T5(EDC_H_BIST_DATA_PATTERN, idx);
+ EDC_REG_T5(EDC_H_BIST_DATA_PATTERN_A, idx);
edc_bist_status_rdata =
- EDC_REG_T5(EDC_H_BIST_STATUS_RDATA, idx);
+ EDC_REG_T5(EDC_H_BIST_STATUS_RDATA_A, idx);
}
- if (t4_read_reg(adap, edc_bist_cmd) & START_BIST)
+ if (t4_read_reg(adap, edc_bist_cmd) & START_BIST_F)
return -EBUSY;
t4_write_reg(adap, edc_bist_cmd_addr, addr & ~0x3fU);
t4_write_reg(adap, edc_bist_cmd_len, 64);
t4_write_reg(adap, edc_bist_cmd_data_pattern, 0xc);
t4_write_reg(adap, edc_bist_cmd,
- BIST_OPCODE(1) | BIST_CMD_GAP(1) | START_BIST);
- i = t4_wait_op_done(adap, edc_bist_cmd, START_BIST, 0, 10, 1);
+ BIST_OPCODE_V(1) | BIST_CMD_GAP_V(1) | START_BIST_F);
+ i = t4_wait_op_done(adap, edc_bist_cmd, START_BIST_F, 0, 10, 1);
if (i)
return i;
* the address is relative to BAR0.
*/
mem_reg = t4_read_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN,
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A,
win));
- mem_aperture = 1 << (GET_WINDOW(mem_reg) + 10);
- mem_base = GET_PCIEOFST(mem_reg) << 10;
+ mem_aperture = 1 << (WINDOW_G(mem_reg) + WINDOW_SHIFT_X);
+ mem_base = PCIEOFST_G(mem_reg) << PCIEOFST_SHIFT_X;
if (is_t4(adap->params.chip))
mem_base -= adap->t4_bar0;
- win_pf = is_t4(adap->params.chip) ? 0 : V_PFNUM(adap->fn);
+ win_pf = is_t4(adap->params.chip) ? 0 : PFNUM_V(adap->fn);
/* Calculate our initial PCI-E Memory Window Position and Offset into
* that Window.
* attempt to use the new value.)
*/
t4_write_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win),
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, win),
pos | win_pf);
t4_read_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, win));
/* Transfer data to/from the adapter as long as there's an integral
* number of 32-bit transfers to complete.
pos += mem_aperture;
offset = 0;
t4_write_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET,
- win), pos | win_pf);
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A,
+ win), pos | win_pf);
t4_read_reg(adap,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET,
- win));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A,
+ win));
}
}
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
- if (t4_read_reg(adapter, SF_OP) & SF_BUSY)
+ if (t4_read_reg(adapter, SF_OP_A) & SF_BUSY_F)
return -EBUSY;
- cont = cont ? SF_CONT : 0;
- lock = lock ? SF_LOCK : 0;
- t4_write_reg(adapter, SF_OP, lock | cont | BYTECNT(byte_cnt - 1));
- ret = t4_wait_op_done(adapter, SF_OP, SF_BUSY, 0, SF_ATTEMPTS, 5);
+ t4_write_reg(adapter, SF_OP_A, SF_LOCK_V(lock) |
+ SF_CONT_V(cont) | BYTECNT_V(byte_cnt - 1));
+ ret = t4_wait_op_done(adapter, SF_OP_A, SF_BUSY_F, 0, SF_ATTEMPTS, 5);
if (!ret)
- *valp = t4_read_reg(adapter, SF_DATA);
+ *valp = t4_read_reg(adapter, SF_DATA_A);
return ret;
}
{
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
- if (t4_read_reg(adapter, SF_OP) & SF_BUSY)
+ if (t4_read_reg(adapter, SF_OP_A) & SF_BUSY_F)
return -EBUSY;
- cont = cont ? SF_CONT : 0;
- lock = lock ? SF_LOCK : 0;
- t4_write_reg(adapter, SF_DATA, val);
- t4_write_reg(adapter, SF_OP, lock |
- cont | BYTECNT(byte_cnt - 1) | OP_WR);
- return t4_wait_op_done(adapter, SF_OP, SF_BUSY, 0, SF_ATTEMPTS, 5);
+ t4_write_reg(adapter, SF_DATA_A, val);
+ t4_write_reg(adapter, SF_OP_A, SF_LOCK_V(lock) |
+ SF_CONT_V(cont) | BYTECNT_V(byte_cnt - 1) | OP_V(1));
+ return t4_wait_op_done(adapter, SF_OP_A, SF_BUSY_F, 0, SF_ATTEMPTS, 5);
}
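With this rewrite the callers pass plain 0/1 cont and lock arguments and SF_CONT_V()/SF_LOCK_V() do the field placement. A sketch of a locked flash transaction in the style of t4_read_flash (SF_RD_DATA_FAST is assumed to be the fast-read opcode defined elsewhere in this file, and the argument order is byte count, cont, lock):

	u32 addr = 0, data;
	int ret;

	/* Issue the read command, holding the lock and continuing the op. */
	ret = sf1_write(adapter, 4, 1, 1, (SF_RD_DATA_FAST << 24) | addr);
	if (!ret)
		ret = sf1_read(adapter, 4, 0, 1, &data);	/* last word */
	t4_write_reg(adapter, SF_OP_A, 0);	/* unlock SF */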
/**
* (i.e., big-endian), otherwise as 32-bit words in the platform's
 * natural endianness.
*/
-static int t4_read_flash(struct adapter *adapter, unsigned int addr,
- unsigned int nwords, u32 *data, int byte_oriented)
+int t4_read_flash(struct adapter *adapter, unsigned int addr,
+ unsigned int nwords, u32 *data, int byte_oriented)
{
int ret;
for ( ; nwords; nwords--, data++) {
ret = sf1_read(adapter, 4, nwords > 1, nwords == 1, data);
if (nwords == 1)
- t4_write_reg(adapter, SF_OP, 0); /* unlock SF */
+ t4_write_reg(adapter, SF_OP_A, 0); /* unlock SF */
if (ret)
return ret;
if (byte_oriented)
if (ret)
goto unlock;
- t4_write_reg(adapter, SF_OP, 0); /* unlock SF */
+ t4_write_reg(adapter, SF_OP_A, 0); /* unlock SF */
/* Read the page to verify the write succeeded */
ret = t4_read_flash(adapter, addr & ~0xff, ARRAY_SIZE(buf), buf, 1);
return 0;
unlock:
- t4_write_reg(adapter, SF_OP, 0); /* unlock SF */
+ t4_write_reg(adapter, SF_OP_A, 0); /* unlock SF */
return ret;
}
}
start++;
}
- t4_write_reg(adapter, SF_OP, 0); /* unlock SF */
+ t4_write_reg(adapter, SF_OP_A, 0); /* unlock SF */
return ret;
}
return ret;
}
+/**
+ * t4_fwcache - firmware cache operation
+ * @adap: the adapter
+ * @op: the operation (flush or flush and invalidate)
+ */
+int t4_fwcache(struct adapter *adap, enum fw_params_param_dev_fwcache op)
+{
+ struct fw_params_cmd c;
+
+ memset(&c, 0, sizeof(c));
+ c.op_to_vfn =
+ cpu_to_be32(FW_CMD_OP_V(FW_PARAMS_CMD) |
+ FW_CMD_REQUEST_F | FW_CMD_WRITE_F |
+ FW_PARAMS_CMD_PFN_V(adap->fn) |
+ FW_PARAMS_CMD_VFN_V(0));
+ c.retval_len16 = cpu_to_be32(FW_LEN16(c));
+ c.param[0].mnem =
+ cpu_to_be32(FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_DEV) |
+ FW_PARAMS_PARAM_X_V(FW_PARAMS_PARAM_DEV_FWCACHE));
+ c.param[0].val = (__force __be32)op;
+
+ return t4_wr_mbox(adap, adap->mbox, &c, sizeof(c), NULL);
+}
+
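A sketch of calling it; the enum values are assumed from the fw_params_param_dev_fwcache definition in t4fw_api.h rather than shown in this patch:

	/* Flush and invalidate the firmware caches, e.g. before reading
	 * adapter memory directly.
	 */
	ret = t4_fwcache(adap, FW_PARAM_DEV_FWCACHE_FLUSHINV);
	if (ret)
		dev_warn(adap->pdev_dev, "fw cache flushinv failed: %d\n", ret);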
#define ADVERT_MASK (FW_PORT_CAP_SPEED_100M | FW_PORT_CAP_SPEED_1G |\
FW_PORT_CAP_SPEED_10G | FW_PORT_CAP_SPEED_40G | \
FW_PORT_CAP_ANEG)
static void pcie_intr_handler(struct adapter *adapter)
{
static const struct intr_info sysbus_intr_info[] = {
- { RNPP, "RXNP array parity error", -1, 1 },
- { RPCP, "RXPC array parity error", -1, 1 },
- { RCIP, "RXCIF array parity error", -1, 1 },
- { RCCP, "Rx completions control array parity error", -1, 1 },
- { RFTP, "RXFT array parity error", -1, 1 },
+ { RNPP_F, "RXNP array parity error", -1, 1 },
+ { RPCP_F, "RXPC array parity error", -1, 1 },
+ { RCIP_F, "RXCIF array parity error", -1, 1 },
+ { RCCP_F, "Rx completions control array parity error", -1, 1 },
+ { RFTP_F, "RXFT array parity error", -1, 1 },
{ 0 }
};
static const struct intr_info pcie_port_intr_info[] = {
- { TPCP, "TXPC array parity error", -1, 1 },
- { TNPP, "TXNP array parity error", -1, 1 },
- { TFTP, "TXFT array parity error", -1, 1 },
- { TCAP, "TXCA array parity error", -1, 1 },
- { TCIP, "TXCIF array parity error", -1, 1 },
- { RCAP, "RXCA array parity error", -1, 1 },
- { OTDD, "outbound request TLP discarded", -1, 1 },
- { RDPE, "Rx data parity error", -1, 1 },
- { TDUE, "Tx uncorrectable data error", -1, 1 },
+ { TPCP_F, "TXPC array parity error", -1, 1 },
+ { TNPP_F, "TXNP array parity error", -1, 1 },
+ { TFTP_F, "TXFT array parity error", -1, 1 },
+ { TCAP_F, "TXCA array parity error", -1, 1 },
+ { TCIP_F, "TXCIF array parity error", -1, 1 },
+ { RCAP_F, "RXCA array parity error", -1, 1 },
+ { OTDD_F, "outbound request TLP discarded", -1, 1 },
+ { RDPE_F, "Rx data parity error", -1, 1 },
+ { TDUE_F, "Tx uncorrectable data error", -1, 1 },
{ 0 }
};
static const struct intr_info pcie_intr_info[] = {
- { MSIADDRLPERR, "MSI AddrL parity error", -1, 1 },
- { MSIADDRHPERR, "MSI AddrH parity error", -1, 1 },
- { MSIDATAPERR, "MSI data parity error", -1, 1 },
- { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
- { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
- { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
- { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
- { PIOCPLPERR, "PCI PIO completion FIFO parity error", -1, 1 },
- { PIOREQPERR, "PCI PIO request FIFO parity error", -1, 1 },
- { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
- { CCNTPERR, "PCI CMD channel count parity error", -1, 1 },
- { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
- { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
- { DCNTPERR, "PCI DMA channel count parity error", -1, 1 },
- { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
- { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
- { HCNTPERR, "PCI HMA channel count parity error", -1, 1 },
- { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
- { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
- { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
- { FIDPERR, "PCI FID parity error", -1, 1 },
- { INTXCLRPERR, "PCI INTx clear parity error", -1, 1 },
- { MATAGPERR, "PCI MA tag parity error", -1, 1 },
- { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
- { RXCPLPERR, "PCI Rx completion parity error", -1, 1 },
- { RXWRPERR, "PCI Rx write parity error", -1, 1 },
- { RPLPERR, "PCI replay buffer parity error", -1, 1 },
- { PCIESINT, "PCI core secondary fault", -1, 1 },
- { PCIEPINT, "PCI core primary fault", -1, 1 },
- { UNXSPLCPLERR, "PCI unexpected split completion error", -1, 0 },
+ { MSIADDRLPERR_F, "MSI AddrL parity error", -1, 1 },
+ { MSIADDRHPERR_F, "MSI AddrH parity error", -1, 1 },
+ { MSIDATAPERR_F, "MSI data parity error", -1, 1 },
+ { MSIXADDRLPERR_F, "MSI-X AddrL parity error", -1, 1 },
+ { MSIXADDRHPERR_F, "MSI-X AddrH parity error", -1, 1 },
+ { MSIXDATAPERR_F, "MSI-X data parity error", -1, 1 },
+ { MSIXDIPERR_F, "MSI-X DI parity error", -1, 1 },
+ { PIOCPLPERR_F, "PCI PIO completion FIFO parity error", -1, 1 },
+ { PIOREQPERR_F, "PCI PIO request FIFO parity error", -1, 1 },
+ { TARTAGPERR_F, "PCI PCI target tag FIFO parity error", -1, 1 },
+ { CCNTPERR_F, "PCI CMD channel count parity error", -1, 1 },
+ { CREQPERR_F, "PCI CMD channel request parity error", -1, 1 },
+ { CRSPPERR_F, "PCI CMD channel response parity error", -1, 1 },
+ { DCNTPERR_F, "PCI DMA channel count parity error", -1, 1 },
+ { DREQPERR_F, "PCI DMA channel request parity error", -1, 1 },
+ { DRSPPERR_F, "PCI DMA channel response parity error", -1, 1 },
+ { HCNTPERR_F, "PCI HMA channel count parity error", -1, 1 },
+ { HREQPERR_F, "PCI HMA channel request parity error", -1, 1 },
+ { HRSPPERR_F, "PCI HMA channel response parity error", -1, 1 },
+ { CFGSNPPERR_F, "PCI config snoop FIFO parity error", -1, 1 },
+ { FIDPERR_F, "PCI FID parity error", -1, 1 },
+ { INTXCLRPERR_F, "PCI INTx clear parity error", -1, 1 },
+ { MATAGPERR_F, "PCI MA tag parity error", -1, 1 },
+ { PIOTAGPERR_F, "PCI PIO tag parity error", -1, 1 },
+ { RXCPLPERR_F, "PCI Rx completion parity error", -1, 1 },
+ { RXWRPERR_F, "PCI Rx write parity error", -1, 1 },
+ { RPLPERR_F, "PCI replay buffer parity error", -1, 1 },
+ { PCIESINT_F, "PCI core secondary fault", -1, 1 },
+ { PCIEPINT_F, "PCI core primary fault", -1, 1 },
+ { UNXSPLCPLERR_F, "PCI unexpected split completion error",
+ -1, 0 },
{ 0 }
};
static struct intr_info t5_pcie_intr_info[] = {
- { MSTGRPPERR, "Master Response Read Queue parity error",
+ { MSTGRPPERR_F, "Master Response Read Queue parity error",
+ -1, 1 },
+ { MSTTIMEOUTPERR_F, "Master Timeout FIFO parity error", -1, 1 },
+ { MSIXSTIPERR_F, "MSI-X STI SRAM parity error", -1, 1 },
+ { MSIXADDRLPERR_F, "MSI-X AddrL parity error", -1, 1 },
+ { MSIXADDRHPERR_F, "MSI-X AddrH parity error", -1, 1 },
+ { MSIXDATAPERR_F, "MSI-X data parity error", -1, 1 },
+ { MSIXDIPERR_F, "MSI-X DI parity error", -1, 1 },
+ { PIOCPLGRPPERR_F, "PCI PIO completion Group FIFO parity error",
-1, 1 },
- { MSTTIMEOUTPERR, "Master Timeout FIFO parity error", -1, 1 },
- { MSIXSTIPERR, "MSI-X STI SRAM parity error", -1, 1 },
- { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
- { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
- { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
- { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
- { PIOCPLGRPPERR, "PCI PIO completion Group FIFO parity error",
+ { PIOREQGRPPERR_F, "PCI PIO request Group FIFO parity error",
-1, 1 },
- { PIOREQGRPPERR, "PCI PIO request Group FIFO parity error",
+ { TARTAGPERR_F, "PCI PCI target tag FIFO parity error", -1, 1 },
+ { MSTTAGQPERR_F, "PCI master tag queue parity error", -1, 1 },
+ { CREQPERR_F, "PCI CMD channel request parity error", -1, 1 },
+ { CRSPPERR_F, "PCI CMD channel response parity error", -1, 1 },
+ { DREQWRPERR_F, "PCI DMA channel write request parity error",
-1, 1 },
- { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
- { MSTTAGQPERR, "PCI master tag queue parity error", -1, 1 },
- { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
- { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
- { DREQWRPERR, "PCI DMA channel write request parity error",
+ { DREQPERR_F, "PCI DMA channel request parity error", -1, 1 },
+ { DRSPPERR_F, "PCI DMA channel response parity error", -1, 1 },
+		{ HREQWRPERR_F, "PCI HMA channel write request parity error",
+		  -1, 1 },
+ { HREQPERR_F, "PCI HMA channel request parity error", -1, 1 },
+ { HRSPPERR_F, "PCI HMA channel response parity error", -1, 1 },
+ { CFGSNPPERR_F, "PCI config snoop FIFO parity error", -1, 1 },
+ { FIDPERR_F, "PCI FID parity error", -1, 1 },
+		{ VFIDPERR_F, "PCI VFID parity error", -1, 1 },
+ { MAGRPPERR_F, "PCI MA group FIFO parity error", -1, 1 },
+ { PIOTAGPERR_F, "PCI PIO tag parity error", -1, 1 },
+ { IPRXHDRGRPPERR_F, "PCI IP Rx header group parity error",
-1, 1 },
- { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
- { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
- { HREQWRPERR, "PCI HMA channel count parity error", -1, 1 },
- { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
- { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
- { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
- { FIDPERR, "PCI FID parity error", -1, 1 },
- { VFIDPERR, "PCI INTx clear parity error", -1, 1 },
- { MAGRPPERR, "PCI MA group FIFO parity error", -1, 1 },
- { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
- { IPRXHDRGRPPERR, "PCI IP Rx header group parity error",
+ { IPRXDATAGRPPERR_F, "PCI IP Rx data group parity error",
-1, 1 },
- { IPRXDATAGRPPERR, "PCI IP Rx data group parity error", -1, 1 },
- { RPLPERR, "PCI IP replay buffer parity error", -1, 1 },
- { IPSOTPERR, "PCI IP SOT buffer parity error", -1, 1 },
- { TRGT1GRPPERR, "PCI TRGT1 group FIFOs parity error", -1, 1 },
- { READRSPERR, "Outbound read error", -1, 0 },
+ { RPLPERR_F, "PCI IP replay buffer parity error", -1, 1 },
+ { IPSOTPERR_F, "PCI IP SOT buffer parity error", -1, 1 },
+ { TRGT1GRPPERR_F, "PCI TRGT1 group FIFOs parity error", -1, 1 },
+ { READRSPERR_F, "Outbound read error", -1, 0 },
{ 0 }
};
if (is_t4(adapter->params.chip))
fat = t4_handle_intr_status(adapter,
- PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
- sysbus_intr_info) +
+ PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A,
+ sysbus_intr_info) +
t4_handle_intr_status(adapter,
- PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
- pcie_port_intr_info) +
- t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
+ PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS_A,
+ pcie_port_intr_info) +
+ t4_handle_intr_status(adapter, PCIE_INT_CAUSE_A,
pcie_intr_info);
else
- fat = t4_handle_intr_status(adapter, PCIE_INT_CAUSE,
+ fat = t4_handle_intr_status(adapter, PCIE_INT_CAUSE_A,
t5_pcie_intr_info);
if (fat)
{
static const struct intr_info tp_intr_info[] = {
{ 0x3fffffff, "TP parity error", -1, 1 },
- { FLMTXFLSTEMPTY, "TP out of Tx pages", -1, 1 },
+ { FLMTXFLSTEMPTY_F, "TP out of Tx pages", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adapter, TP_INT_CAUSE, tp_intr_info))
+ if (t4_handle_intr_status(adapter, TP_INT_CAUSE_A, tp_intr_info))
t4_fatal_err(adapter);
}
u64 v;
static const struct intr_info sge_intr_info[] = {
- { ERR_CPL_EXCEED_IQE_SIZE,
+ { ERR_CPL_EXCEED_IQE_SIZE_F,
"SGE received CPL exceeding IQE size", -1, 1 },
- { ERR_INVALID_CIDX_INC,
+ { ERR_INVALID_CIDX_INC_F,
"SGE GTS CIDX increment too large", -1, 0 },
- { ERR_CPL_OPCODE_0, "SGE received 0-length CPL", -1, 0 },
- { DBFIFO_LP_INT, NULL, -1, 0, t4_db_full },
- { DBFIFO_HP_INT, NULL, -1, 0, t4_db_full },
- { ERR_DROPPED_DB, NULL, -1, 0, t4_db_dropped },
- { ERR_DATA_CPL_ON_HIGH_QID1 | ERR_DATA_CPL_ON_HIGH_QID0,
+ { ERR_CPL_OPCODE_0_F, "SGE received 0-length CPL", -1, 0 },
+ { DBFIFO_LP_INT_F, NULL, -1, 0, t4_db_full },
+ { DBFIFO_HP_INT_F, NULL, -1, 0, t4_db_full },
+ { ERR_DROPPED_DB_F, NULL, -1, 0, t4_db_dropped },
+ { ERR_DATA_CPL_ON_HIGH_QID1_F | ERR_DATA_CPL_ON_HIGH_QID0_F,
"SGE IQID > 1023 received CPL for FL", -1, 0 },
- { ERR_BAD_DB_PIDX3, "SGE DBP 3 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX3_F, "SGE DBP 3 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX2, "SGE DBP 2 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX2_F, "SGE DBP 2 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX1, "SGE DBP 1 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX1_F, "SGE DBP 1 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX0, "SGE DBP 0 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX0_F, "SGE DBP 0 pidx increment too large", -1,
0 },
- { ERR_ING_CTXT_PRIO,
+ { ERR_ING_CTXT_PRIO_F,
"SGE too many priority ingress contexts", -1, 0 },
- { ERR_EGR_CTXT_PRIO,
+ { ERR_EGR_CTXT_PRIO_F,
"SGE too many priority egress contexts", -1, 0 },
- { INGRESS_SIZE_ERR, "SGE illegal ingress QID", -1, 0 },
- { EGRESS_SIZE_ERR, "SGE illegal egress QID", -1, 0 },
+ { INGRESS_SIZE_ERR_F, "SGE illegal ingress QID", -1, 0 },
+ { EGRESS_SIZE_ERR_F, "SGE illegal egress QID", -1, 0 },
{ 0 }
};
- v = (u64)t4_read_reg(adapter, SGE_INT_CAUSE1) |
- ((u64)t4_read_reg(adapter, SGE_INT_CAUSE2) << 32);
+ v = (u64)t4_read_reg(adapter, SGE_INT_CAUSE1_A) |
+ ((u64)t4_read_reg(adapter, SGE_INT_CAUSE2_A) << 32);
if (v) {
dev_alert(adapter->pdev_dev, "SGE parity error (%#llx)\n",
(unsigned long long)v);
- t4_write_reg(adapter, SGE_INT_CAUSE1, v);
- t4_write_reg(adapter, SGE_INT_CAUSE2, v >> 32);
+ t4_write_reg(adapter, SGE_INT_CAUSE1_A, v);
+ t4_write_reg(adapter, SGE_INT_CAUSE2_A, v >> 32);
}
- if (t4_handle_intr_status(adapter, SGE_INT_CAUSE3, sge_intr_info) ||
+ if (t4_handle_intr_status(adapter, SGE_INT_CAUSE3_A, sge_intr_info) ||
v != 0)
t4_fatal_err(adapter);
}
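Each of these handlers walks a table of struct intr_info entries. The struct itself sits outside the visible hunks; its assumed shape, inferred from how the tables are initialized above (including the optional fifth member used by the SGE entries such as t4_db_full):

	struct intr_info {
		unsigned int mask;	/* bits to check in the cause register */
		const char *msg;	/* message to log, or NULL */
		short stat_idx;		/* stat counter to bump, or -1 */
		unsigned short fatal;	/* does this condition force a reset? */
		int (*int_handler)(struct adapter *adap);	/* optional callback */
	};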
+#define CIM_OBQ_INTR (OBQULP0PARERR_F | OBQULP1PARERR_F | OBQULP2PARERR_F |\
+ OBQULP3PARERR_F | OBQSGEPARERR_F | OBQNCSIPARERR_F)
+#define CIM_IBQ_INTR (IBQTP0PARERR_F | IBQTP1PARERR_F | IBQULPPARERR_F |\
+ IBQSGEHIPARERR_F | IBQSGELOPARERR_F | IBQNCSIPARERR_F)
+
/*
* CIM interrupt handler.
*/
static void cim_intr_handler(struct adapter *adapter)
{
static const struct intr_info cim_intr_info[] = {
- { PREFDROPINT, "CIM control register prefetch drop", -1, 1 },
- { OBQPARERR, "CIM OBQ parity error", -1, 1 },
- { IBQPARERR, "CIM IBQ parity error", -1, 1 },
- { MBUPPARERR, "CIM mailbox uP parity error", -1, 1 },
- { MBHOSTPARERR, "CIM mailbox host parity error", -1, 1 },
- { TIEQINPARERRINT, "CIM TIEQ outgoing parity error", -1, 1 },
- { TIEQOUTPARERRINT, "CIM TIEQ incoming parity error", -1, 1 },
+ { PREFDROPINT_F, "CIM control register prefetch drop", -1, 1 },
+ { CIM_OBQ_INTR, "CIM OBQ parity error", -1, 1 },
+ { CIM_IBQ_INTR, "CIM IBQ parity error", -1, 1 },
+ { MBUPPARERR_F, "CIM mailbox uP parity error", -1, 1 },
+ { MBHOSTPARERR_F, "CIM mailbox host parity error", -1, 1 },
+ { TIEQINPARERRINT_F, "CIM TIEQ outgoing parity error", -1, 1 },
+ { TIEQOUTPARERRINT_F, "CIM TIEQ incoming parity error", -1, 1 },
{ 0 }
};
static const struct intr_info cim_upintr_info[] = {
- { RSVDSPACEINT, "CIM reserved space access", -1, 1 },
- { ILLTRANSINT, "CIM illegal transaction", -1, 1 },
- { ILLWRINT, "CIM illegal write", -1, 1 },
- { ILLRDINT, "CIM illegal read", -1, 1 },
- { ILLRDBEINT, "CIM illegal read BE", -1, 1 },
- { ILLWRBEINT, "CIM illegal write BE", -1, 1 },
- { SGLRDBOOTINT, "CIM single read from boot space", -1, 1 },
- { SGLWRBOOTINT, "CIM single write to boot space", -1, 1 },
- { BLKWRBOOTINT, "CIM block write to boot space", -1, 1 },
- { SGLRDFLASHINT, "CIM single read from flash space", -1, 1 },
- { SGLWRFLASHINT, "CIM single write to flash space", -1, 1 },
- { BLKWRFLASHINT, "CIM block write to flash space", -1, 1 },
- { SGLRDEEPROMINT, "CIM single EEPROM read", -1, 1 },
- { SGLWREEPROMINT, "CIM single EEPROM write", -1, 1 },
- { BLKRDEEPROMINT, "CIM block EEPROM read", -1, 1 },
- { BLKWREEPROMINT, "CIM block EEPROM write", -1, 1 },
- { SGLRDCTLINT , "CIM single read from CTL space", -1, 1 },
- { SGLWRCTLINT , "CIM single write to CTL space", -1, 1 },
- { BLKRDCTLINT , "CIM block read from CTL space", -1, 1 },
- { BLKWRCTLINT , "CIM block write to CTL space", -1, 1 },
- { SGLRDPLINT , "CIM single read from PL space", -1, 1 },
- { SGLWRPLINT , "CIM single write to PL space", -1, 1 },
- { BLKRDPLINT , "CIM block read from PL space", -1, 1 },
- { BLKWRPLINT , "CIM block write to PL space", -1, 1 },
- { REQOVRLOOKUPINT , "CIM request FIFO overwrite", -1, 1 },
- { RSPOVRLOOKUPINT , "CIM response FIFO overwrite", -1, 1 },
- { TIMEOUTINT , "CIM PIF timeout", -1, 1 },
- { TIMEOUTMAINT , "CIM PIF MA timeout", -1, 1 },
+ { RSVDSPACEINT_F, "CIM reserved space access", -1, 1 },
+ { ILLTRANSINT_F, "CIM illegal transaction", -1, 1 },
+ { ILLWRINT_F, "CIM illegal write", -1, 1 },
+ { ILLRDINT_F, "CIM illegal read", -1, 1 },
+ { ILLRDBEINT_F, "CIM illegal read BE", -1, 1 },
+ { ILLWRBEINT_F, "CIM illegal write BE", -1, 1 },
+ { SGLRDBOOTINT_F, "CIM single read from boot space", -1, 1 },
+ { SGLWRBOOTINT_F, "CIM single write to boot space", -1, 1 },
+ { BLKWRBOOTINT_F, "CIM block write to boot space", -1, 1 },
+ { SGLRDFLASHINT_F, "CIM single read from flash space", -1, 1 },
+ { SGLWRFLASHINT_F, "CIM single write to flash space", -1, 1 },
+ { BLKWRFLASHINT_F, "CIM block write to flash space", -1, 1 },
+ { SGLRDEEPROMINT_F, "CIM single EEPROM read", -1, 1 },
+ { SGLWREEPROMINT_F, "CIM single EEPROM write", -1, 1 },
+ { BLKRDEEPROMINT_F, "CIM block EEPROM read", -1, 1 },
+ { BLKWREEPROMINT_F, "CIM block EEPROM write", -1, 1 },
+ { SGLRDCTLINT_F, "CIM single read from CTL space", -1, 1 },
+ { SGLWRCTLINT_F, "CIM single write to CTL space", -1, 1 },
+ { BLKRDCTLINT_F, "CIM block read from CTL space", -1, 1 },
+ { BLKWRCTLINT_F, "CIM block write to CTL space", -1, 1 },
+ { SGLRDPLINT_F, "CIM single read from PL space", -1, 1 },
+ { SGLWRPLINT_F, "CIM single write to PL space", -1, 1 },
+ { BLKRDPLINT_F, "CIM block read from PL space", -1, 1 },
+ { BLKWRPLINT_F, "CIM block write to PL space", -1, 1 },
+ { REQOVRLOOKUPINT_F, "CIM request FIFO overwrite", -1, 1 },
+ { RSPOVRLOOKUPINT_F, "CIM response FIFO overwrite", -1, 1 },
+ { TIMEOUTINT_F, "CIM PIF timeout", -1, 1 },
+ { TIMEOUTMAINT_F, "CIM PIF MA timeout", -1, 1 },
{ 0 }
};
int fat;
- if (t4_read_reg(adapter, MA_PCIE_FW) & PCIE_FW_ERR)
+ if (t4_read_reg(adapter, PCIE_FW_A) & PCIE_FW_ERR_F)
t4_report_fw_error(adapter);
- fat = t4_handle_intr_status(adapter, CIM_HOST_INT_CAUSE,
+ fat = t4_handle_intr_status(adapter, CIM_HOST_INT_CAUSE_A,
cim_intr_info) +
- t4_handle_intr_status(adapter, CIM_HOST_UPACC_INT_CAUSE,
+ t4_handle_intr_status(adapter, CIM_HOST_UPACC_INT_CAUSE_A,
cim_upintr_info);
if (fat)
t4_fatal_err(adapter);
{ 0 }
};
- if (t4_handle_intr_status(adapter, ULP_RX_INT_CAUSE, ulprx_intr_info))
+ if (t4_handle_intr_status(adapter, ULP_RX_INT_CAUSE_A, ulprx_intr_info))
t4_fatal_err(adapter);
}
static void ulptx_intr_handler(struct adapter *adapter)
{
static const struct intr_info ulptx_intr_info[] = {
- { PBL_BOUND_ERR_CH3, "ULPTX channel 3 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH3_F, "ULPTX channel 3 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH2, "ULPTX channel 2 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH2_F, "ULPTX channel 2 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH1, "ULPTX channel 1 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH1_F, "ULPTX channel 1 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH0, "ULPTX channel 0 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH0_F, "ULPTX channel 0 PBL out of bounds", -1,
0 },
{ 0xfffffff, "ULPTX parity error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adapter, ULP_TX_INT_CAUSE, ulptx_intr_info))
+ if (t4_handle_intr_status(adapter, ULP_TX_INT_CAUSE_A, ulptx_intr_info))
t4_fatal_err(adapter);
}
static void pmtx_intr_handler(struct adapter *adapter)
{
static const struct intr_info pmtx_intr_info[] = {
- { PCMD_LEN_OVFL0, "PMTX channel 0 pcmd too large", -1, 1 },
- { PCMD_LEN_OVFL1, "PMTX channel 1 pcmd too large", -1, 1 },
- { PCMD_LEN_OVFL2, "PMTX channel 2 pcmd too large", -1, 1 },
- { ZERO_C_CMD_ERROR, "PMTX 0-length pcmd", -1, 1 },
- { PMTX_FRAMING_ERROR, "PMTX framing error", -1, 1 },
- { OESPI_PAR_ERROR, "PMTX oespi parity error", -1, 1 },
- { DB_OPTIONS_PAR_ERROR, "PMTX db_options parity error", -1, 1 },
- { ICSPI_PAR_ERROR, "PMTX icspi parity error", -1, 1 },
- { C_PCMD_PAR_ERROR, "PMTX c_pcmd parity error", -1, 1},
+ { PCMD_LEN_OVFL0_F, "PMTX channel 0 pcmd too large", -1, 1 },
+ { PCMD_LEN_OVFL1_F, "PMTX channel 1 pcmd too large", -1, 1 },
+ { PCMD_LEN_OVFL2_F, "PMTX channel 2 pcmd too large", -1, 1 },
+ { ZERO_C_CMD_ERROR_F, "PMTX 0-length pcmd", -1, 1 },
+ { PMTX_FRAMING_ERROR_F, "PMTX framing error", -1, 1 },
+ { OESPI_PAR_ERROR_F, "PMTX oespi parity error", -1, 1 },
+ { DB_OPTIONS_PAR_ERROR_F, "PMTX db_options parity error",
+ -1, 1 },
+ { ICSPI_PAR_ERROR_F, "PMTX icspi parity error", -1, 1 },
+ { PMTX_C_PCMD_PAR_ERROR_F, "PMTX c_pcmd parity error", -1, 1},
{ 0 }
};
- if (t4_handle_intr_status(adapter, PM_TX_INT_CAUSE, pmtx_intr_info))
+ if (t4_handle_intr_status(adapter, PM_TX_INT_CAUSE_A, pmtx_intr_info))
t4_fatal_err(adapter);
}
static void pmrx_intr_handler(struct adapter *adapter)
{
static const struct intr_info pmrx_intr_info[] = {
- { ZERO_E_CMD_ERROR, "PMRX 0-length pcmd", -1, 1 },
- { PMRX_FRAMING_ERROR, "PMRX framing error", -1, 1 },
- { OCSPI_PAR_ERROR, "PMRX ocspi parity error", -1, 1 },
- { DB_OPTIONS_PAR_ERROR, "PMRX db_options parity error", -1, 1 },
- { IESPI_PAR_ERROR, "PMRX iespi parity error", -1, 1 },
- { E_PCMD_PAR_ERROR, "PMRX e_pcmd parity error", -1, 1},
+ { ZERO_E_CMD_ERROR_F, "PMRX 0-length pcmd", -1, 1 },
+ { PMRX_FRAMING_ERROR_F, "PMRX framing error", -1, 1 },
+ { OCSPI_PAR_ERROR_F, "PMRX ocspi parity error", -1, 1 },
+ { DB_OPTIONS_PAR_ERROR_F, "PMRX db_options parity error",
+ -1, 1 },
+ { IESPI_PAR_ERROR_F, "PMRX iespi parity error", -1, 1 },
+ { PMRX_E_PCMD_PAR_ERROR_F, "PMRX e_pcmd parity error", -1, 1},
{ 0 }
};
- if (t4_handle_intr_status(adapter, PM_RX_INT_CAUSE, pmrx_intr_info))
+ if (t4_handle_intr_status(adapter, PM_RX_INT_CAUSE_A, pmrx_intr_info))
t4_fatal_err(adapter);
}
static void cplsw_intr_handler(struct adapter *adapter)
{
static const struct intr_info cplsw_intr_info[] = {
- { CIM_OP_MAP_PERR, "CPLSW CIM op_map parity error", -1, 1 },
- { CIM_OVFL_ERROR, "CPLSW CIM overflow", -1, 1 },
- { TP_FRAMING_ERROR, "CPLSW TP framing error", -1, 1 },
- { SGE_FRAMING_ERROR, "CPLSW SGE framing error", -1, 1 },
- { CIM_FRAMING_ERROR, "CPLSW CIM framing error", -1, 1 },
- { ZERO_SWITCH_ERROR, "CPLSW no-switch error", -1, 1 },
+ { CIM_OP_MAP_PERR_F, "CPLSW CIM op_map parity error", -1, 1 },
+ { CIM_OVFL_ERROR_F, "CPLSW CIM overflow", -1, 1 },
+ { TP_FRAMING_ERROR_F, "CPLSW TP framing error", -1, 1 },
+ { SGE_FRAMING_ERROR_F, "CPLSW SGE framing error", -1, 1 },
+ { CIM_FRAMING_ERROR_F, "CPLSW CIM framing error", -1, 1 },
+ { ZERO_SWITCH_ERROR_F, "CPLSW no-switch error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adapter, CPL_INTR_CAUSE, cplsw_intr_info))
+ if (t4_handle_intr_status(adapter, CPL_INTR_CAUSE_A, cplsw_intr_info))
t4_fatal_err(adapter);
}
static void le_intr_handler(struct adapter *adap)
{
static const struct intr_info le_intr_info[] = {
- { LIPMISS, "LE LIP miss", -1, 0 },
- { LIP0, "LE 0 LIP error", -1, 0 },
- { PARITYERR, "LE parity error", -1, 1 },
- { UNKNOWNCMD, "LE unknown command", -1, 1 },
- { REQQPARERR, "LE request queue parity error", -1, 1 },
+ { LIPMISS_F, "LE LIP miss", -1, 0 },
+ { LIP0_F, "LE 0 LIP error", -1, 0 },
+ { PARITYERR_F, "LE parity error", -1, 1 },
+ { UNKNOWNCMD_F, "LE unknown command", -1, 1 },
+ { REQQPARERR_F, "LE request queue parity error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adap, LE_DB_INT_CAUSE, le_intr_info))
+ if (t4_handle_intr_status(adap, LE_DB_INT_CAUSE_A, le_intr_info))
t4_fatal_err(adap);
}
{ 0 }
};
static const struct intr_info mps_tx_intr_info[] = {
- { TPFIFO, "MPS Tx TP FIFO parity error", -1, 1 },
- { NCSIFIFO, "MPS Tx NC-SI FIFO parity error", -1, 1 },
- { TXDATAFIFO, "MPS Tx data FIFO parity error", -1, 1 },
- { TXDESCFIFO, "MPS Tx desc FIFO parity error", -1, 1 },
- { BUBBLE, "MPS Tx underflow", -1, 1 },
- { SECNTERR, "MPS Tx SOP/EOP error", -1, 1 },
- { FRMERR, "MPS Tx framing error", -1, 1 },
+ { TPFIFO_V(TPFIFO_M), "MPS Tx TP FIFO parity error", -1, 1 },
+ { NCSIFIFO_F, "MPS Tx NC-SI FIFO parity error", -1, 1 },
+ { TXDATAFIFO_V(TXDATAFIFO_M), "MPS Tx data FIFO parity error",
+ -1, 1 },
+ { TXDESCFIFO_V(TXDESCFIFO_M), "MPS Tx desc FIFO parity error",
+ -1, 1 },
+ { BUBBLE_F, "MPS Tx underflow", -1, 1 },
+ { SECNTERR_F, "MPS Tx SOP/EOP error", -1, 1 },
+ { FRMERR_F, "MPS Tx framing error", -1, 1 },
{ 0 }
};
static const struct intr_info mps_trc_intr_info[] = {
- { FILTMEM, "MPS TRC filter parity error", -1, 1 },
- { PKTFIFO, "MPS TRC packet FIFO parity error", -1, 1 },
- { MISCPERR, "MPS TRC misc parity error", -1, 1 },
+ { FILTMEM_V(FILTMEM_M), "MPS TRC filter parity error", -1, 1 },
+ { PKTFIFO_V(PKTFIFO_M), "MPS TRC packet FIFO parity error",
+ -1, 1 },
+ { MISCPERR_F, "MPS TRC misc parity error", -1, 1 },
{ 0 }
};
static const struct intr_info mps_stat_sram_intr_info[] = {
{ 0 }
};
static const struct intr_info mps_cls_intr_info[] = {
- { MATCHSRAM, "MPS match SRAM parity error", -1, 1 },
- { MATCHTCAM, "MPS match TCAM parity error", -1, 1 },
- { HASHSRAM, "MPS hash SRAM parity error", -1, 1 },
+ { MATCHSRAM_F, "MPS match SRAM parity error", -1, 1 },
+ { MATCHTCAM_F, "MPS match TCAM parity error", -1, 1 },
+ { HASHSRAM_F, "MPS hash SRAM parity error", -1, 1 },
{ 0 }
};
int fat;
- fat = t4_handle_intr_status(adapter, MPS_RX_PERR_INT_CAUSE,
+ fat = t4_handle_intr_status(adapter, MPS_RX_PERR_INT_CAUSE_A,
mps_rx_intr_info) +
- t4_handle_intr_status(adapter, MPS_TX_INT_CAUSE,
+ t4_handle_intr_status(adapter, MPS_TX_INT_CAUSE_A,
mps_tx_intr_info) +
- t4_handle_intr_status(adapter, MPS_TRC_INT_CAUSE,
+ t4_handle_intr_status(adapter, MPS_TRC_INT_CAUSE_A,
mps_trc_intr_info) +
- t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_SRAM,
+ t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_SRAM_A,
mps_stat_sram_intr_info) +
- t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_TX_FIFO,
+ t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_TX_FIFO_A,
mps_stat_tx_intr_info) +
- t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_RX_FIFO,
+ t4_handle_intr_status(adapter, MPS_STAT_PERR_INT_CAUSE_RX_FIFO_A,
mps_stat_rx_intr_info) +
- t4_handle_intr_status(adapter, MPS_CLS_INT_CAUSE,
+ t4_handle_intr_status(adapter, MPS_CLS_INT_CAUSE_A,
mps_cls_intr_info);
- t4_write_reg(adapter, MPS_INT_CAUSE, CLSINT | TRCINT |
- RXINT | TXINT | STATINT);
- t4_read_reg(adapter, MPS_INT_CAUSE); /* flush */
+ t4_write_reg(adapter, MPS_INT_CAUSE_A, 0);
+ t4_read_reg(adapter, MPS_INT_CAUSE_A); /* flush */
if (fat)
t4_fatal_err(adapter);
}
-#define MEM_INT_MASK (PERR_INT_CAUSE | ECC_CE_INT_CAUSE | ECC_UE_INT_CAUSE)
+#define MEM_INT_MASK (PERR_INT_CAUSE_F | ECC_CE_INT_CAUSE_F | \
+ ECC_UE_INT_CAUSE_F)
/*
* EDC/MC interrupt handler.
unsigned int addr, cnt_addr, v;
if (idx <= MEM_EDC1) {
- addr = EDC_REG(EDC_INT_CAUSE, idx);
- cnt_addr = EDC_REG(EDC_ECC_STATUS, idx);
+ addr = EDC_REG(EDC_INT_CAUSE_A, idx);
+ cnt_addr = EDC_REG(EDC_ECC_STATUS_A, idx);
} else if (idx == MEM_MC) {
if (is_t4(adapter->params.chip)) {
- addr = MC_INT_CAUSE;
- cnt_addr = MC_ECC_STATUS;
+ addr = MC_INT_CAUSE_A;
+ cnt_addr = MC_ECC_STATUS_A;
} else {
- addr = MC_P_INT_CAUSE;
- cnt_addr = MC_P_ECC_STATUS;
+ addr = MC_P_INT_CAUSE_A;
+ cnt_addr = MC_P_ECC_STATUS_A;
}
} else {
- addr = MC_REG(MC_P_INT_CAUSE, 1);
- cnt_addr = MC_REG(MC_P_ECC_STATUS, 1);
+ addr = MC_REG(MC_P_INT_CAUSE_A, 1);
+ cnt_addr = MC_REG(MC_P_ECC_STATUS_A, 1);
}
v = t4_read_reg(adapter, addr) & MEM_INT_MASK;
- if (v & PERR_INT_CAUSE)
+ if (v & PERR_INT_CAUSE_F)
dev_alert(adapter->pdev_dev, "%s FIFO parity error\n",
name[idx]);
- if (v & ECC_CE_INT_CAUSE) {
- u32 cnt = ECC_CECNT_GET(t4_read_reg(adapter, cnt_addr));
+ if (v & ECC_CE_INT_CAUSE_F) {
+ u32 cnt = ECC_CECNT_G(t4_read_reg(adapter, cnt_addr));
- t4_write_reg(adapter, cnt_addr, ECC_CECNT_MASK);
+ t4_write_reg(adapter, cnt_addr, ECC_CECNT_V(ECC_CECNT_M));
if (printk_ratelimit())
dev_warn(adapter->pdev_dev,
"%u %s correctable ECC data error%s\n",
cnt, name[idx], cnt > 1 ? "s" : "");
}
- if (v & ECC_UE_INT_CAUSE)
+ if (v & ECC_UE_INT_CAUSE_F)
dev_alert(adapter->pdev_dev,
"%s uncorrectable ECC data error\n", name[idx]);
t4_write_reg(adapter, addr, v);
- if (v & (PERR_INT_CAUSE | ECC_UE_INT_CAUSE))
+ if (v & (PERR_INT_CAUSE_F | ECC_UE_INT_CAUSE_F))
t4_fatal_err(adapter);
}
*/
static void ma_intr_handler(struct adapter *adap)
{
- u32 v, status = t4_read_reg(adap, MA_INT_CAUSE);
+ u32 v, status = t4_read_reg(adap, MA_INT_CAUSE_A);
- if (status & MEM_PERR_INT_CAUSE) {
+ if (status & MEM_PERR_INT_CAUSE_F) {
dev_alert(adap->pdev_dev,
"MA parity error, parity status %#x\n",
- t4_read_reg(adap, MA_PARITY_ERROR_STATUS));
+ t4_read_reg(adap, MA_PARITY_ERROR_STATUS1_A));
if (is_t5(adap->params.chip))
dev_alert(adap->pdev_dev,
"MA parity error, parity status %#x\n",
t4_read_reg(adap,
- MA_PARITY_ERROR_STATUS2));
+ MA_PARITY_ERROR_STATUS2_A));
}
- if (status & MEM_WRAP_INT_CAUSE) {
- v = t4_read_reg(adap, MA_INT_WRAP_STATUS);
+ if (status & MEM_WRAP_INT_CAUSE_F) {
+ v = t4_read_reg(adap, MA_INT_WRAP_STATUS_A);
dev_alert(adap->pdev_dev, "MA address wrap-around error by "
"client %u to address %#x\n",
- MEM_WRAP_CLIENT_NUM_GET(v),
- MEM_WRAP_ADDRESS_GET(v) << 4);
+ MEM_WRAP_CLIENT_NUM_G(v),
+ MEM_WRAP_ADDRESS_G(v) << 4);
}
- t4_write_reg(adap, MA_INT_CAUSE, status);
+ t4_write_reg(adap, MA_INT_CAUSE_A, status);
t4_fatal_err(adap);
}
static void smb_intr_handler(struct adapter *adap)
{
static const struct intr_info smb_intr_info[] = {
- { MSTTXFIFOPARINT, "SMB master Tx FIFO parity error", -1, 1 },
- { MSTRXFIFOPARINT, "SMB master Rx FIFO parity error", -1, 1 },
- { SLVFIFOPARINT, "SMB slave FIFO parity error", -1, 1 },
+ { MSTTXFIFOPARINT_F, "SMB master Tx FIFO parity error", -1, 1 },
+ { MSTRXFIFOPARINT_F, "SMB master Rx FIFO parity error", -1, 1 },
+ { SLVFIFOPARINT_F, "SMB slave FIFO parity error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adap, SMB_INT_CAUSE, smb_intr_info))
+ if (t4_handle_intr_status(adap, SMB_INT_CAUSE_A, smb_intr_info))
t4_fatal_err(adap);
}
static void ncsi_intr_handler(struct adapter *adap)
{
static const struct intr_info ncsi_intr_info[] = {
- { CIM_DM_PRTY_ERR, "NC-SI CIM parity error", -1, 1 },
- { MPS_DM_PRTY_ERR, "NC-SI MPS parity error", -1, 1 },
- { TXFIFO_PRTY_ERR, "NC-SI Tx FIFO parity error", -1, 1 },
- { RXFIFO_PRTY_ERR, "NC-SI Rx FIFO parity error", -1, 1 },
+ { CIM_DM_PRTY_ERR_F, "NC-SI CIM parity error", -1, 1 },
+ { MPS_DM_PRTY_ERR_F, "NC-SI MPS parity error", -1, 1 },
+ { TXFIFO_PRTY_ERR_F, "NC-SI Tx FIFO parity error", -1, 1 },
+ { RXFIFO_PRTY_ERR_F, "NC-SI Rx FIFO parity error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adap, NCSI_INT_CAUSE, ncsi_intr_info))
+ if (t4_handle_intr_status(adap, NCSI_INT_CAUSE_A, ncsi_intr_info))
t4_fatal_err(adap);
}
u32 v, int_cause_reg;
if (is_t4(adap->params.chip))
- int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE);
+ int_cause_reg = PORT_REG(port, XGMAC_PORT_INT_CAUSE_A);
else
- int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE);
+ int_cause_reg = T5_PORT_REG(port, MAC_PORT_INT_CAUSE_A);
v = t4_read_reg(adap, int_cause_reg);
- v &= TXFIFO_PRTY_ERR | RXFIFO_PRTY_ERR;
+ v &= TXFIFO_PRTY_ERR_F | RXFIFO_PRTY_ERR_F;
if (!v)
return;
- if (v & TXFIFO_PRTY_ERR)
+ if (v & TXFIFO_PRTY_ERR_F)
dev_alert(adap->pdev_dev, "XGMAC %d Tx FIFO parity error\n",
port);
- if (v & RXFIFO_PRTY_ERR)
+ if (v & RXFIFO_PRTY_ERR_F)
dev_alert(adap->pdev_dev, "XGMAC %d Rx FIFO parity error\n",
port);
- t4_write_reg(adap, PORT_REG(port, XGMAC_PORT_INT_CAUSE), v);
+ t4_write_reg(adap, PORT_REG(port, XGMAC_PORT_INT_CAUSE_A), v);
t4_fatal_err(adap);
}
static void pl_intr_handler(struct adapter *adap)
{
static const struct intr_info pl_intr_info[] = {
- { FATALPERR, "T4 fatal parity error", -1, 1 },
- { PERRVFID, "PL VFID_MAP parity error", -1, 1 },
+ { FATALPERR_F, "T4 fatal parity error", -1, 1 },
+ { PERRVFID_F, "PL VFID_MAP parity error", -1, 1 },
{ 0 }
};
- if (t4_handle_intr_status(adap, PL_PL_INT_CAUSE, pl_intr_info))
+ if (t4_handle_intr_status(adap, PL_PL_INT_CAUSE_A, pl_intr_info))
t4_fatal_err(adap);
}
-#define PF_INTR_MASK (PFSW)
-#define GLBL_INTR_MASK (CIM | MPS | PL | PCIE | MC | EDC0 | \
- EDC1 | LE | TP | MA | PM_TX | PM_RX | ULP_RX | \
- CPL_SWITCH | SGE | ULP_TX)
+#define PF_INTR_MASK (PFSW_F)
+#define GLBL_INTR_MASK (CIM_F | MPS_F | PL_F | PCIE_F | MC_F | EDC0_F | \
+ EDC1_F | LE_F | TP_F | MA_F | PM_TX_F | PM_RX_F | ULP_RX_F | \
+ CPL_SWITCH_F | SGE_F | ULP_TX_F)
/**
* t4_slow_intr_handler - control path interrupt handler
*/
int t4_slow_intr_handler(struct adapter *adapter)
{
- u32 cause = t4_read_reg(adapter, PL_INT_CAUSE);
+ u32 cause = t4_read_reg(adapter, PL_INT_CAUSE_A);
if (!(cause & GLBL_INTR_MASK))
return 0;
- if (cause & CIM)
+ if (cause & CIM_F)
cim_intr_handler(adapter);
- if (cause & MPS)
+ if (cause & MPS_F)
mps_intr_handler(adapter);
- if (cause & NCSI)
+ if (cause & NCSI_F)
ncsi_intr_handler(adapter);
- if (cause & PL)
+ if (cause & PL_F)
pl_intr_handler(adapter);
- if (cause & SMB)
+ if (cause & SMB_F)
smb_intr_handler(adapter);
- if (cause & XGMAC0)
+ if (cause & XGMAC0_F)
xgmac_intr_handler(adapter, 0);
- if (cause & XGMAC1)
+ if (cause & XGMAC1_F)
xgmac_intr_handler(adapter, 1);
- if (cause & XGMAC_KR0)
+ if (cause & XGMAC_KR0_F)
xgmac_intr_handler(adapter, 2);
- if (cause & XGMAC_KR1)
+ if (cause & XGMAC_KR1_F)
xgmac_intr_handler(adapter, 3);
- if (cause & PCIE)
+ if (cause & PCIE_F)
pcie_intr_handler(adapter);
- if (cause & MC)
+ if (cause & MC_F)
mem_intr_handler(adapter, MEM_MC);
- if (!is_t4(adapter->params.chip) && (cause & MC1))
+ if (!is_t4(adapter->params.chip) && (cause & MC1_S))
mem_intr_handler(adapter, MEM_MC1);
- if (cause & EDC0)
+ if (cause & EDC0_F)
mem_intr_handler(adapter, MEM_EDC0);
- if (cause & EDC1)
+ if (cause & EDC1_F)
mem_intr_handler(adapter, MEM_EDC1);
- if (cause & LE)
+ if (cause & LE_F)
le_intr_handler(adapter);
- if (cause & TP)
+ if (cause & TP_F)
tp_intr_handler(adapter);
- if (cause & MA)
+ if (cause & MA_F)
ma_intr_handler(adapter);
- if (cause & PM_TX)
+ if (cause & PM_TX_F)
pmtx_intr_handler(adapter);
- if (cause & PM_RX)
+ if (cause & PM_RX_F)
pmrx_intr_handler(adapter);
- if (cause & ULP_RX)
+ if (cause & ULP_RX_F)
ulprx_intr_handler(adapter);
- if (cause & CPL_SWITCH)
+ if (cause & CPL_SWITCH_F)
cplsw_intr_handler(adapter);
- if (cause & SGE)
+ if (cause & SGE_F)
sge_intr_handler(adapter);
- if (cause & ULP_TX)
+ if (cause & ULP_TX_F)
ulptx_intr_handler(adapter);
/* Clear the interrupts just processed for which we are the master. */
- t4_write_reg(adapter, PL_INT_CAUSE, cause & GLBL_INTR_MASK);
- (void) t4_read_reg(adapter, PL_INT_CAUSE); /* flush */
+ t4_write_reg(adapter, PL_INT_CAUSE_A, cause & GLBL_INTR_MASK);
+ (void)t4_read_reg(adapter, PL_INT_CAUSE_A); /* flush */
return 1;
}
*/
void t4_intr_enable(struct adapter *adapter)
{
- u32 pf = SOURCEPF_GET(t4_read_reg(adapter, PL_WHOAMI));
-
- t4_write_reg(adapter, SGE_INT_ENABLE3, ERR_CPL_EXCEED_IQE_SIZE |
- ERR_INVALID_CIDX_INC | ERR_CPL_OPCODE_0 |
- ERR_DROPPED_DB | ERR_DATA_CPL_ON_HIGH_QID1 |
- ERR_DATA_CPL_ON_HIGH_QID0 | ERR_BAD_DB_PIDX3 |
- ERR_BAD_DB_PIDX2 | ERR_BAD_DB_PIDX1 |
- ERR_BAD_DB_PIDX0 | ERR_ING_CTXT_PRIO |
- ERR_EGR_CTXT_PRIO | INGRESS_SIZE_ERR |
- DBFIFO_HP_INT | DBFIFO_LP_INT |
- EGRESS_SIZE_ERR);
- t4_write_reg(adapter, MYPF_REG(PL_PF_INT_ENABLE), PF_INTR_MASK);
- t4_set_reg_field(adapter, PL_INT_MAP0, 0, 1 << pf);
+ u32 pf = SOURCEPF_G(t4_read_reg(adapter, PL_WHOAMI_A));
+
+ t4_write_reg(adapter, SGE_INT_ENABLE3_A, ERR_CPL_EXCEED_IQE_SIZE_F |
+ ERR_INVALID_CIDX_INC_F | ERR_CPL_OPCODE_0_F |
+ ERR_DROPPED_DB_F | ERR_DATA_CPL_ON_HIGH_QID1_F |
+ ERR_DATA_CPL_ON_HIGH_QID0_F | ERR_BAD_DB_PIDX3_F |
+ ERR_BAD_DB_PIDX2_F | ERR_BAD_DB_PIDX1_F |
+ ERR_BAD_DB_PIDX0_F | ERR_ING_CTXT_PRIO_F |
+ ERR_EGR_CTXT_PRIO_F | INGRESS_SIZE_ERR_F |
+ DBFIFO_HP_INT_F | DBFIFO_LP_INT_F |
+ EGRESS_SIZE_ERR_F);
+ t4_write_reg(adapter, MYPF_REG(PL_PF_INT_ENABLE_A), PF_INTR_MASK);
+ t4_set_reg_field(adapter, PL_INT_MAP0_A, 0, 1 << pf);
}
/**
*/
void t4_intr_disable(struct adapter *adapter)
{
- u32 pf = SOURCEPF_GET(t4_read_reg(adapter, PL_WHOAMI));
+ u32 pf = SOURCEPF_G(t4_read_reg(adapter, PL_WHOAMI_A));
- t4_write_reg(adapter, MYPF_REG(PL_PF_INT_ENABLE), 0);
- t4_set_reg_field(adapter, PL_INT_MAP0, 1 << pf, 0);
+ t4_write_reg(adapter, MYPF_REG(PL_PF_INT_ENABLE_A), 0);
+ t4_set_reg_field(adapter, PL_INT_MAP0_A, 1 << pf, 0);
}
/**
return t4_wr_mbox(adapter, mbox, &c, sizeof(c), NULL);
}
+/* Read an RSS table row */
+static int rd_rss_row(struct adapter *adap, int row, u32 *val)
+{
+ t4_write_reg(adap, TP_RSS_LKP_TABLE_A, 0xfff00000 | row);
+ return t4_wait_op_done_val(adap, TP_RSS_LKP_TABLE_A, LKPTBLROWVLD_F, 1,
+ 5, 0, val);
+}
+
+/**
+ * t4_read_rss - read the contents of the RSS mapping table
+ * @adapter: the adapter
+ * @map: holds the contents of the RSS mapping table
+ *
+ * Reads the contents of the RSS hash->queue mapping table.
+ */
+int t4_read_rss(struct adapter *adapter, u16 *map)
+{
+ u32 val;
+ int i, ret;
+
+ for (i = 0; i < RSS_NENTRIES / 2; ++i) {
+ ret = rd_rss_row(adapter, i, &val);
+ if (ret)
+ return ret;
+ *map++ = LKPTBLQUEUE0_G(val);
+ *map++ = LKPTBLQUEUE1_G(val);
+ }
+ return 0;
+}
+
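Each lookup-table word packs two queue entries, so the loop above reads RSS_NENTRIES / 2 rows and emits two map entries per row. A minimal caller sketch, assuming RSS_NENTRIES and a live adapter pointer from the surrounding driver:

	static void dump_rss_map(struct adapter *adap)
	{
		u16 *map;
		int i;

		map = kmalloc_array(RSS_NENTRIES, sizeof(*map), GFP_KERNEL);
		if (!map)
			return;
		if (!t4_read_rss(adap, map))
			for (i = 0; i < RSS_NENTRIES; i++)
				dev_info(adap->pdev_dev, "RSS[%d] -> queue %u\n",
					 i, map[i]);
		kfree(map);
	}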
+/**
+ * t4_read_rss_key - read the global RSS key
+ * @adap: the adapter
+ * @key: 10-entry array holding the 320-bit RSS key
+ *
+ * Reads the global 320-bit RSS key.
+ */
+void t4_read_rss_key(struct adapter *adap, u32 *key)
+{
+ t4_read_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A, key, 10,
+ TP_RSS_SECRET_KEY0_A);
+}
+
+/**
+ * t4_write_rss_key - program one of the RSS keys
+ * @adap: the adapter
+ * @key: 10-entry array holding the 320-bit RSS key
+ * @idx: which RSS key to write
+ *
+ * Writes one of the RSS keys with the given 320-bit value. If @idx is
+ * 0..15 the corresponding entry in the RSS key table is written,
+ * otherwise the global RSS key is written.
+ */
+void t4_write_rss_key(struct adapter *adap, const u32 *key, int idx)
+{
+ t4_write_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A, key, 10,
+ TP_RSS_SECRET_KEY0_A);
+ if (idx >= 0 && idx < 16)
+ t4_write_reg(adap, TP_RSS_CONFIG_VRT_A,
+ KEYWRADDR_V(idx) | KEYWREN_F);
+}
+
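The idx semantics are easy to misread: the 320-bit key is always written, and only an idx in 0..15 additionally latches it into that key-table slot. A hedged sketch of programming a random global key, using the kernel's get_random_bytes():

	static void set_random_rss_key(struct adapter *adap)
	{
		u32 key[10];	/* 10 x 32 bits = 320-bit key */

		get_random_bytes(key, sizeof(key));
		t4_write_rss_key(adap, key, -1);	/* idx < 0: global key only */
	}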
+/**
+ * t4_read_rss_pf_config - read PF RSS Configuration Table
+ * @adapter: the adapter
+ * @index: the entry in the PF RSS table to read
+ * @valp: where to store the returned value
+ *
+ * Reads the PF RSS Configuration Table at the specified index and returns
+ * the value found there.
+ */
+void t4_read_rss_pf_config(struct adapter *adapter, unsigned int index,
+ u32 *valp)
+{
+ t4_read_indirect(adapter, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ valp, 1, TP_RSS_PF0_CONFIG_A + index);
+}
+
+/**
+ * t4_read_rss_vf_config - read VF RSS Configuration Table
+ * @adapter: the adapter
+ * @index: the entry in the VF RSS table to read
+ * @vfl: where to store the returned VFL
+ * @vfh: where to store the returned VFH
+ *
+ * Reads the VF RSS Configuration Table at the specified index and returns
+ * the (VFL, VFH) values found there.
+ */
+void t4_read_rss_vf_config(struct adapter *adapter, unsigned int index,
+ u32 *vfl, u32 *vfh)
+{
+ u32 vrt, mask, data;
+
+ mask = VFWRADDR_V(VFWRADDR_M);
+ data = VFWRADDR_V(index);
+
+ /* Request that the index'th VF Table values be read into VFL/VFH.
+ */
+ vrt = t4_read_reg(adapter, TP_RSS_CONFIG_VRT_A);
+ vrt &= ~(VFRDRG_F | VFWREN_F | KEYWREN_F | mask);
+ vrt |= data | VFRDEN_F;
+ t4_write_reg(adapter, TP_RSS_CONFIG_VRT_A, vrt);
+
+ /* Grab the VFL/VFH values ...
+ */
+ t4_read_indirect(adapter, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ vfl, 1, TP_RSS_VFL_CONFIG_A);
+ t4_read_indirect(adapter, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ vfh, 1, TP_RSS_VFH_CONFIG_A);
+}
+
+/**
+ * t4_read_rss_pf_map - read PF RSS Map
+ * @adapter: the adapter
+ *
+ * Reads the PF RSS Map register and returns its value.
+ */
+u32 t4_read_rss_pf_map(struct adapter *adapter)
+{
+ u32 pfmap;
+
+ t4_read_indirect(adapter, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ &pfmap, 1, TP_RSS_PF_MAP_A);
+ return pfmap;
+}
+
+/**
+ * t4_read_rss_pf_mask - read PF RSS Mask
+ * @adapter: the adapter
+ *
+ * Reads the PF RSS Mask register and returns its value.
+ */
+u32 t4_read_rss_pf_mask(struct adapter *adapter)
+{
+ u32 pfmask;
+
+ t4_read_indirect(adapter, TP_PIO_ADDR_A, TP_PIO_DATA_A,
+ &pfmask, 1, TP_RSS_PF_MSK_A);
+ return pfmask;
+}
+
/**
* t4_tp_get_tcp_stats - read TP's TCP MIB counters
* @adap: the adapter
void t4_tp_get_tcp_stats(struct adapter *adap, struct tp_tcp_stats *v4,
struct tp_tcp_stats *v6)
{
- u32 val[TP_MIB_TCP_RXT_SEG_LO - TP_MIB_TCP_OUT_RST + 1];
+ u32 val[TP_MIB_TCP_RXT_SEG_LO_A - TP_MIB_TCP_OUT_RST_A + 1];
-#define STAT_IDX(x) ((TP_MIB_TCP_##x) - TP_MIB_TCP_OUT_RST)
+#define STAT_IDX(x) ((TP_MIB_TCP_##x##_A) - TP_MIB_TCP_OUT_RST_A)
#define STAT(x) val[STAT_IDX(x)]
#define STAT64(x) (((u64)STAT(x##_HI) << 32) | STAT(x##_LO))
if (v4) {
- t4_read_indirect(adap, TP_MIB_INDEX, TP_MIB_DATA, val,
- ARRAY_SIZE(val), TP_MIB_TCP_OUT_RST);
+ t4_read_indirect(adap, TP_MIB_INDEX_A, TP_MIB_DATA_A, val,
+ ARRAY_SIZE(val), TP_MIB_TCP_OUT_RST_A);
v4->tcpOutRsts = STAT(OUT_RST);
v4->tcpInSegs = STAT64(IN_SEG);
v4->tcpOutSegs = STAT64(OUT_SEG);
v4->tcpRetransSegs = STAT64(RXT_SEG);
}
if (v6) {
- t4_read_indirect(adap, TP_MIB_INDEX, TP_MIB_DATA, val,
- ARRAY_SIZE(val), TP_MIB_TCP_V6OUT_RST);
+ t4_read_indirect(adap, TP_MIB_INDEX_A, TP_MIB_DATA_A, val,
+ ARRAY_SIZE(val), TP_MIB_TCP_V6OUT_RST_A);
v6->tcpOutRsts = STAT(OUT_RST);
v6->tcpInSegs = STAT64(IN_SEG);
v6->tcpOutSegs = STAT64(OUT_SEG);
int i;
for (i = 0; i < NMTUS; ++i) {
- t4_write_reg(adap, TP_MTU_TABLE,
- MTUINDEX(0xff) | MTUVALUE(i));
- v = t4_read_reg(adap, TP_MTU_TABLE);
- mtus[i] = MTUVALUE_GET(v);
+ t4_write_reg(adap, TP_MTU_TABLE_A,
+ MTUINDEX_V(0xff) | MTUVALUE_V(i));
+ v = t4_read_reg(adap, TP_MTU_TABLE_A);
+ mtus[i] = MTUVALUE_G(v);
if (mtu_log)
- mtu_log[i] = MTUWIDTH_GET(v);
+ mtu_log[i] = MTUWIDTH_G(v);
}
}
void t4_tp_wr_bits_indirect(struct adapter *adap, unsigned int addr,
unsigned int mask, unsigned int val)
{
- t4_write_reg(adap, TP_PIO_ADDR, addr);
- val |= t4_read_reg(adap, TP_PIO_DATA) & ~mask;
- t4_write_reg(adap, TP_PIO_DATA, val);
+ t4_write_reg(adap, TP_PIO_ADDR_A, addr);
+ val |= t4_read_reg(adap, TP_PIO_DATA_A) & ~mask;
+ t4_write_reg(adap, TP_PIO_DATA_A, val);
}
/**
if (!(mtu & ((1 << log2) >> 2))) /* round */
log2--;
- t4_write_reg(adap, TP_MTU_TABLE, MTUINDEX(i) |
- MTUWIDTH(log2) | MTUVALUE(mtu));
+ t4_write_reg(adap, TP_MTU_TABLE_A, MTUINDEX_V(i) |
+ MTUWIDTH_V(log2) | MTUVALUE_V(mtu));
for (w = 0; w < NCCTRL_WIN; ++w) {
unsigned int inc;
inc = max(((mtu - 40) * alpha[w]) / avg_pkts[w],
CC_MIN_INCR);
- t4_write_reg(adap, TP_CCTRL_TABLE, (i << 21) |
+ t4_write_reg(adap, TP_CCTRL_TABLE_A, (i << 21) |
(w << 16) | (beta[w] << 13) | inc);
}
}
*/
static unsigned int get_mps_bg_map(struct adapter *adap, int idx)
{
- u32 n = NUMPORTS_GET(t4_read_reg(adap, MPS_CMN_CTL));
+ u32 n = NUMPORTS_G(t4_read_reg(adap, MPS_CMN_CTL_A));
if (n == 0)
return idx == 0 ? 0xf : 0;
if (is_t4(adap->params.chip)) {
mag_id_reg_l = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = PORT_REG(port, XGMAC_PORT_MAGIC_MACID_HI);
- port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
+ port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2_A);
} else {
mag_id_reg_l = T5_PORT_REG(port, MAC_PORT_MAGIC_MACID_LO);
mag_id_reg_h = T5_PORT_REG(port, MAC_PORT_MAGIC_MACID_HI);
- port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2);
+ port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2_A);
}
if (addr) {
t4_write_reg(adap, mag_id_reg_h,
(addr[0] << 8) | addr[1]);
}
- t4_set_reg_field(adap, port_cfg_reg, MAGICEN,
- addr ? MAGICEN : 0);
+ t4_set_reg_field(adap, port_cfg_reg, MAGICEN_F,
+ addr ? MAGICEN_F : 0);
}
/**
u32 port_cfg_reg;
if (is_t4(adap->params.chip))
- port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2);
+ port_cfg_reg = PORT_REG(port, XGMAC_PORT_CFG2_A);
else
- port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2);
+ port_cfg_reg = T5_PORT_REG(port, MAC_PORT_CFG2_A);
if (!enable) {
- t4_set_reg_field(adap, port_cfg_reg, PATEN, 0);
+ t4_set_reg_field(adap, port_cfg_reg, PATEN_F, 0);
return 0;
}
if (map > 0xff)
return -EINVAL;
#define EPIO_REG(name) \
- (is_t4(adap->params.chip) ? PORT_REG(port, XGMAC_PORT_EPIO_##name) : \
- T5_PORT_REG(port, MAC_PORT_EPIO_##name))
+ (is_t4(adap->params.chip) ? \
+ PORT_REG(port, XGMAC_PORT_EPIO_##name##_A) : \
+ T5_PORT_REG(port, MAC_PORT_EPIO_##name##_A))
t4_write_reg(adap, EPIO_REG(DATA1), mask0 >> 32);
t4_write_reg(adap, EPIO_REG(DATA2), mask1);
/* write byte masks */
t4_write_reg(adap, EPIO_REG(DATA0), mask0);
- t4_write_reg(adap, EPIO_REG(OP), ADDRESS(i) | EPIOWR);
+ t4_write_reg(adap, EPIO_REG(OP), ADDRESS_V(i) | EPIOWR_F);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
- if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY)
+ if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY_F)
return -ETIMEDOUT;
/* write CRC */
t4_write_reg(adap, EPIO_REG(DATA0), crc);
- t4_write_reg(adap, EPIO_REG(OP), ADDRESS(i + 32) | EPIOWR);
+ t4_write_reg(adap, EPIO_REG(OP), ADDRESS_V(i + 32) | EPIOWR_F);
t4_read_reg(adap, EPIO_REG(OP)); /* flush */
- if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY)
+ if (t4_read_reg(adap, EPIO_REG(OP)) & SF_BUSY_F)
return -ETIMEDOUT;
}
#undef EPIO_REG
- t4_set_reg_field(adap, PORT_REG(port, XGMAC_PORT_CFG2), 0, PATEN);
+ t4_set_reg_field(adap, PORT_REG(port, XGMAC_PORT_CFG2_A), 0, PATEN_F);
return 0;
}
"IDMA_FL_SEND_COMPLETION_TO_IMSG",
};
static const u32 sge_regs[] = {
- SGE_DEBUG_DATA_LOW_INDEX_2,
- SGE_DEBUG_DATA_LOW_INDEX_3,
- SGE_DEBUG_DATA_HIGH_INDEX_10,
+ SGE_DEBUG_DATA_LOW_INDEX_2_A,
+ SGE_DEBUG_DATA_LOW_INDEX_3_A,
+ SGE_DEBUG_DATA_HIGH_INDEX_10_A,
};
const char **sge_idma_decode;
int sge_idma_decode_nstates;
if (ret < 0) {
if ((ret == -EBUSY || ret == -ETIMEDOUT) && retries-- > 0)
goto retry;
- if (t4_read_reg(adap, MA_PCIE_FW) & PCIE_FW_ERR)
+ if (t4_read_reg(adap, PCIE_FW_A) & PCIE_FW_ERR_F)
t4_report_fw_error(adap);
return ret;
}
* timeout ... and then retry if we haven't exhausted
* our retries ...
*/
- pcie_fw = t4_read_reg(adap, MA_PCIE_FW);
- if (!(pcie_fw & (PCIE_FW_ERR|PCIE_FW_INIT))) {
+ pcie_fw = t4_read_reg(adap, PCIE_FW_A);
+ if (!(pcie_fw & (PCIE_FW_ERR_F|PCIE_FW_INIT_F))) {
if (waiting <= 0) {
if (retries-- > 0)
goto retry;
* report errors preferentially.
*/
if (state) {
- if (pcie_fw & PCIE_FW_ERR)
+ if (pcie_fw & PCIE_FW_ERR_F)
*state = DEV_STATE_ERR;
- else if (pcie_fw & PCIE_FW_INIT)
+ else if (pcie_fw & PCIE_FW_INIT_F)
*state = DEV_STATE_INIT;
}
* for our caller.
*/
if (master_mbox == PCIE_FW_MASTER_M &&
- (pcie_fw & PCIE_FW_MASTER_VLD))
+ (pcie_fw & PCIE_FW_MASTER_VLD_F))
master_mbox = PCIE_FW_MASTER_G(pcie_fw);
break;
}
memset(&c, 0, sizeof(c));
INIT_CMD(c, RESET, WRITE);
- c.val = htonl(PIORST | PIORSTMODE);
+ c.val = htonl(PIORST_F | PIORSTMODE_F);
c.halt_pkd = htonl(FW_RESET_CMD_HALT_F);
ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), NULL);
}
* rather than a RESET ... if it's new enough to understand that ...
*/
if (ret == 0 || force) {
- t4_set_reg_field(adap, CIM_BOOT_CFG, UPCRST, UPCRST);
- t4_set_reg_field(adap, PCIE_FW, PCIE_FW_HALT_F,
+ t4_set_reg_field(adap, CIM_BOOT_CFG_A, UPCRST_F, UPCRST_F);
+ t4_set_reg_field(adap, PCIE_FW_A, PCIE_FW_HALT_F,
PCIE_FW_HALT_F);
}
* doing it automatically, we need to clear the PCIE_FW.HALT
* bit.
*/
- t4_set_reg_field(adap, PCIE_FW, PCIE_FW_HALT_F, 0);
+ t4_set_reg_field(adap, PCIE_FW_A, PCIE_FW_HALT_F, 0);
/*
* If we've been given a valid mailbox, first try to get the
* hitting the chip with a hammer.
*/
if (mbox <= PCIE_FW_MASTER_M) {
- t4_set_reg_field(adap, CIM_BOOT_CFG, UPCRST, 0);
+ t4_set_reg_field(adap, CIM_BOOT_CFG_A, UPCRST_F, 0);
msleep(100);
if (t4_fw_reset(adap, mbox,
- PIORST | PIORSTMODE) == 0)
+ PIORST_F | PIORSTMODE_F) == 0)
return 0;
}
- t4_write_reg(adap, PL_RST, PIORST | PIORSTMODE);
+ t4_write_reg(adap, PL_RST_A, PIORST_F | PIORSTMODE_F);
msleep(2000);
} else {
int ms;
- t4_set_reg_field(adap, CIM_BOOT_CFG, UPCRST, 0);
+ t4_set_reg_field(adap, CIM_BOOT_CFG_A, UPCRST_F, 0);
for (ms = 0; ms < FW_CMD_MAX_TIMEOUT; ) {
- if (!(t4_read_reg(adap, PCIE_FW) & PCIE_FW_HALT_F))
+ if (!(t4_read_reg(adap, PCIE_FW_A) & PCIE_FW_HALT_F))
return 0;
msleep(100);
ms += 100;
unsigned int fl_align = cache_line_size < 32 ? 32 : cache_line_size;
unsigned int fl_align_log = fls(fl_align) - 1;
- t4_write_reg(adap, SGE_HOST_PAGE_SIZE,
- HOSTPAGESIZEPF0(sge_hps) |
- HOSTPAGESIZEPF1(sge_hps) |
- HOSTPAGESIZEPF2(sge_hps) |
- HOSTPAGESIZEPF3(sge_hps) |
- HOSTPAGESIZEPF4(sge_hps) |
- HOSTPAGESIZEPF5(sge_hps) |
- HOSTPAGESIZEPF6(sge_hps) |
- HOSTPAGESIZEPF7(sge_hps));
+ t4_write_reg(adap, SGE_HOST_PAGE_SIZE_A,
+ HOSTPAGESIZEPF0_V(sge_hps) |
+ HOSTPAGESIZEPF1_V(sge_hps) |
+ HOSTPAGESIZEPF2_V(sge_hps) |
+ HOSTPAGESIZEPF3_V(sge_hps) |
+ HOSTPAGESIZEPF4_V(sge_hps) |
+ HOSTPAGESIZEPF5_V(sge_hps) |
+ HOSTPAGESIZEPF6_V(sge_hps) |
+ HOSTPAGESIZEPF7_V(sge_hps));
if (is_t4(adap->params.chip)) {
- t4_set_reg_field(adap, SGE_CONTROL,
- INGPADBOUNDARY_MASK |
- EGRSTATUSPAGESIZE_MASK,
- INGPADBOUNDARY(fl_align_log - 5) |
- EGRSTATUSPAGESIZE(stat_len != 64));
+ t4_set_reg_field(adap, SGE_CONTROL_A,
+ INGPADBOUNDARY_V(INGPADBOUNDARY_M) |
+ EGRSTATUSPAGESIZE_F,
+ INGPADBOUNDARY_V(fl_align_log -
+ INGPADBOUNDARY_SHIFT_X) |
+ EGRSTATUSPAGESIZE_V(stat_len != 64));
} else {
/* T5 introduced the separation of the Free List Padding and
* Packing Boundaries. Thus, we can select a smaller Padding
fl_align = 64;
fl_align_log = 6;
}
- t4_set_reg_field(adap, SGE_CONTROL,
- INGPADBOUNDARY_MASK |
- EGRSTATUSPAGESIZE_MASK,
- INGPADBOUNDARY(INGPCIEBOUNDARY_32B_X) |
- EGRSTATUSPAGESIZE(stat_len != 64));
+ t4_set_reg_field(adap, SGE_CONTROL_A,
+ INGPADBOUNDARY_V(INGPADBOUNDARY_M) |
+ EGRSTATUSPAGESIZE_F,
+ INGPADBOUNDARY_V(INGPCIEBOUNDARY_32B_X) |
+ EGRSTATUSPAGESIZE_V(stat_len != 64));
t4_set_reg_field(adap, SGE_CONTROL2_A,
INGPACKBOUNDARY_V(INGPACKBOUNDARY_M),
INGPACKBOUNDARY_V(fl_align_log -
- INGPACKBOUNDARY_SHIFT_X));
+ INGPACKBOUNDARY_SHIFT_X));
}
/*
* Adjust various SGE Free List Host Buffer Sizes.
* Default Firmware Configuration File but we need to adjust it for
* this host's cache line size.
*/
- t4_write_reg(adap, SGE_FL_BUFFER_SIZE0, page_size);
- t4_write_reg(adap, SGE_FL_BUFFER_SIZE2,
- (t4_read_reg(adap, SGE_FL_BUFFER_SIZE2) + fl_align-1)
+ t4_write_reg(adap, SGE_FL_BUFFER_SIZE0_A, page_size);
+ t4_write_reg(adap, SGE_FL_BUFFER_SIZE2_A,
+ (t4_read_reg(adap, SGE_FL_BUFFER_SIZE2_A) + fl_align-1)
& ~(fl_align-1));
- t4_write_reg(adap, SGE_FL_BUFFER_SIZE3,
- (t4_read_reg(adap, SGE_FL_BUFFER_SIZE3) + fl_align-1)
+ t4_write_reg(adap, SGE_FL_BUFFER_SIZE3_A,
+ (t4_read_reg(adap, SGE_FL_BUFFER_SIZE3_A) + fl_align-1)
& ~(fl_align-1));
- t4_write_reg(adap, ULP_RX_TDDP_PSZ, HPZ0(page_shift - 12));
+ t4_write_reg(adap, ULP_RX_TDDP_PSZ_A, HPZ0_V(page_shift - 12));
return 0;
}
{
u32 whoami;
- whoami = readl(regs + PL_WHOAMI);
+ whoami = readl(regs + PL_WHOAMI_A);
if (whoami != 0xffffffff && whoami != CIM_PF_NOACCESS)
return 0;
msleep(500);
- whoami = readl(regs + PL_WHOAMI);
+ whoami = readl(regs + PL_WHOAMI_A);
return (whoami != 0xffffffff && whoami != CIM_PF_NOACCESS ? 0 : -EIO);
}
ret = sf1_write(adap, 1, 1, 0, SF_RD_ID);
if (!ret)
ret = sf1_read(adap, 3, 0, 1, &info);
- t4_write_reg(adap, SF_OP, 0); /* unlock SF */
+ t4_write_reg(adap, SF_OP_A, 0); /* unlock SF */
if (ret)
return ret;
return -EINVAL;
adap->params.sf_size = 1 << info;
adap->params.sf_fw_start =
- t4_read_reg(adap, CIM_BOOT_CFG) & BOOTADDR_MASK;
+ t4_read_reg(adap, CIM_BOOT_CFG_A) & BOOTADDR_M;
if (adap->params.sf_size < FLASH_MIN_SIZE)
dev_warn(adap->pdev_dev, "WARNING!!! FLASH size %#x < %#x!!!\n",
u32 pl_rev;
get_pci_mode(adapter, &adapter->params.pci);
- pl_rev = G_REV(t4_read_reg(adapter, PL_REV));
+ pl_rev = REV_G(t4_read_reg(adapter, PL_REV_A));
ret = get_flash_params(adapter);
if (ret < 0) {
return -EINVAL;
}
+ adapter->params.cim_la_size = CIMLA_SIZE;
init_cong_ctrl(adapter->params.a_wnd, adapter->params.b_wnd);
/*
/* Extract the SGE Page Size for our PF.
*/
- hps = t4_read_reg(adapter, SGE_HOST_PAGE_SIZE);
+ hps = t4_read_reg(adapter, SGE_HOST_PAGE_SIZE_A);
s_hps = (HOSTPAGESIZEPF0_S +
(HOSTPAGESIZEPF1_S - HOSTPAGESIZEPF0_S) * adapter->fn);
sge_params->hps = ((hps >> s_hps) & HOSTPAGESIZEPF0_M);
*/
s_qpp = (QUEUESPERPAGEPF0_S +
(QUEUESPERPAGEPF1_S - QUEUESPERPAGEPF0_S) * adapter->fn);
- qpp = t4_read_reg(adapter, SGE_EGRESS_QUEUES_PER_PAGE_PF);
- sge_params->eq_qpp = ((qpp >> s_qpp) & QUEUESPERPAGEPF0_MASK);
- qpp = t4_read_reg(adapter, SGE_INGRESS_QUEUES_PER_PAGE_PF);
- sge_params->iq_qpp = ((qpp >> s_qpp) & QUEUESPERPAGEPF0_MASK);
+ qpp = t4_read_reg(adapter, SGE_EGRESS_QUEUES_PER_PAGE_PF_A);
+ sge_params->eq_qpp = ((qpp >> s_qpp) & QUEUESPERPAGEPF0_M);
+ qpp = t4_read_reg(adapter, SGE_INGRESS_QUEUES_PER_PAGE_PF_A);
+ sge_params->iq_qpp = ((qpp >> s_qpp) & QUEUESPERPAGEPF0_M);
return 0;
}
int chan;
u32 v;
- v = t4_read_reg(adap, TP_TIMER_RESOLUTION);
- adap->params.tp.tre = TIMERRESOLUTION_GET(v);
- adap->params.tp.dack_re = DELAYEDACKRESOLUTION_GET(v);
+ v = t4_read_reg(adap, TP_TIMER_RESOLUTION_A);
+ adap->params.tp.tre = TIMERRESOLUTION_G(v);
+ adap->params.tp.dack_re = DELAYEDACKRESOLUTION_G(v);
/* MODQ_REQ_MAP defaults to setting queues 0-3 to chan 0-3 */
for (chan = 0; chan < NCHAN; chan++)
/* Cache the adapter's Compressed Filter Mode and global Ingress
* Configuration.
*/
- t4_read_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
+ t4_read_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A,
&adap->params.tp.vlan_pri_map, 1,
- TP_VLAN_PRI_MAP);
- t4_read_indirect(adap, TP_PIO_ADDR, TP_PIO_DATA,
+ TP_VLAN_PRI_MAP_A);
+ t4_read_indirect(adap, TP_PIO_ADDR_A, TP_PIO_DATA_A,
&adap->params.tp.ingress_config, 1,
- TP_INGRESS_CONFIG);
+ TP_INGRESS_CONFIG_A);
/* Now that we have TP_VLAN_PRI_MAP cached, we can calculate the field
* shift positions of several elements of the Compressed Filter Tuple
* for this adapter which we need frequently ...
*/
- adap->params.tp.vlan_shift = t4_filter_field_shift(adap, F_VLAN);
- adap->params.tp.vnic_shift = t4_filter_field_shift(adap, F_VNIC_ID);
- adap->params.tp.port_shift = t4_filter_field_shift(adap, F_PORT);
+ adap->params.tp.vlan_shift = t4_filter_field_shift(adap, VLAN_F);
+ adap->params.tp.vnic_shift = t4_filter_field_shift(adap, VNIC_ID_F);
+ adap->params.tp.port_shift = t4_filter_field_shift(adap, PORT_F);
adap->params.tp.protocol_shift = t4_filter_field_shift(adap,
- F_PROTOCOL);
+ PROTOCOL_F);
/* If TP_INGRESS_CONFIG.VNID == 0, then TP_VLAN_PRI_MAP.VNIC_ID
* represents the presence of an Outer VLAN instead of a VNIC ID.
*/
- if ((adap->params.tp.ingress_config & F_VNIC) == 0)
+ if ((adap->params.tp.ingress_config & VNIC_F) == 0)
adap->params.tp.vnic_shift = -1;
return 0;
for (sel = 1, field_shift = 0; sel < filter_sel; sel <<= 1) {
switch (filter_mode & sel) {
- case F_FCOE:
- field_shift += W_FT_FCOE;
+ case FCOE_F:
+ field_shift += FT_FCOE_W;
break;
- case F_PORT:
- field_shift += W_FT_PORT;
+ case PORT_F:
+ field_shift += FT_PORT_W;
break;
- case F_VNIC_ID:
- field_shift += W_FT_VNIC_ID;
+ case VNIC_ID_F:
+ field_shift += FT_VNIC_ID_W;
break;
- case F_VLAN:
- field_shift += W_FT_VLAN;
+ case VLAN_F:
+ field_shift += FT_VLAN_W;
break;
- case F_TOS:
- field_shift += W_FT_TOS;
+ case TOS_F:
+ field_shift += FT_TOS_W;
break;
- case F_PROTOCOL:
- field_shift += W_FT_PROTOCOL;
+ case PROTOCOL_F:
+ field_shift += FT_PROTOCOL_W;
break;
- case F_ETHERTYPE:
- field_shift += W_FT_ETHERTYPE;
+ case ETHERTYPE_F:
+ field_shift += FT_ETHERTYPE_W;
break;
- case F_MACMATCH:
- field_shift += W_FT_MACMATCH;
+ case MACMATCH_F:
+ field_shift += FT_MACMATCH_W;
break;
- case F_MPSHITTYPE:
- field_shift += W_FT_MPSHITTYPE;
+ case MPSHITTYPE_F:
+ field_shift += FT_MPSHITTYPE_W;
break;
- case F_FRAGMENTATION:
- field_shift += W_FT_FRAGMENTATION;
+ case FRAGMENTATION_F:
+ field_shift += FT_FRAGMENTATION_W;
break;
}
}
}
return 0;
}
+
+/**
+ * t4_read_cimq_cfg - read CIM queue configuration
+ * @adap: the adapter
+ * @base: holds the queue base addresses in bytes
+ * @size: holds the queue sizes in bytes
+ * @thres: holds the queue full thresholds in bytes
+ *
+ * Returns the current configuration of the CIM queues, starting with
+ * the IBQs, then the OBQs.
+ */
+void t4_read_cimq_cfg(struct adapter *adap, u16 *base, u16 *size, u16 *thres)
+{
+ unsigned int i, v;
+ int cim_num_obq = is_t4(adap->params.chip) ?
+ CIM_NUM_OBQ : CIM_NUM_OBQ_T5;
+
+ for (i = 0; i < CIM_NUM_IBQ; i++) {
+ t4_write_reg(adap, CIM_QUEUE_CONFIG_REF_A, IBQSELECT_F |
+ QUENUMSELECT_V(i));
+ v = t4_read_reg(adap, CIM_QUEUE_CONFIG_CTRL_A);
+ /* value is in 256-byte units */
+ *base++ = CIMQBASE_G(v) * 256;
+ *size++ = CIMQSIZE_G(v) * 256;
+ *thres++ = QUEFULLTHRSH_G(v) * 8; /* 8-byte unit */
+ }
+ for (i = 0; i < cim_num_obq; i++) {
+ t4_write_reg(adap, CIM_QUEUE_CONFIG_REF_A, OBQSELECT_F |
+ QUENUMSELECT_V(i));
+ v = t4_read_reg(adap, CIM_QUEUE_CONFIG_CTRL_A);
+ /* value is in 256-byte units */
+ *base++ = CIMQBASE_G(v) * 256;
+ *size++ = CIMQSIZE_G(v) * 256;
+ }
+}
+
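Note that base and size advance through the IBQs and on into the OBQs, while thresholds are reported for the IBQs only. A caller sketch sized for the worst case (T5's eight OBQs); the dev_dbg reporting is illustrative:

	static void dump_cimq_cfg(struct adapter *adap)
	{
		u16 base[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
		u16 size[CIM_NUM_IBQ + CIM_NUM_OBQ_T5];
		u16 thres[CIM_NUM_IBQ];	/* IBQ thresholds only */
		int i, nq = CIM_NUM_IBQ + (is_t4(adap->params.chip) ?
					   CIM_NUM_OBQ : CIM_NUM_OBQ_T5);

		t4_read_cimq_cfg(adap, base, size, thres);
		for (i = 0; i < nq; i++)
			dev_dbg(adap->pdev_dev, "CIM q%d: base %u size %u\n",
				i, base[i], size[i]);
	}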
+/**
+ * t4_cim_read - read a block from CIM internal address space
+ * @adap: the adapter
+ * @addr: the start address within the CIM address space
+ * @n: number of words to read
+ * @valp: where to store the result
+ *
+ * Reads a block of 4-byte words from the CIM internal address space.
+ */
+int t4_cim_read(struct adapter *adap, unsigned int addr, unsigned int n,
+ unsigned int *valp)
+{
+ int ret = 0;
+
+ if (t4_read_reg(adap, CIM_HOST_ACC_CTRL_A) & HOSTBUSY_F)
+ return -EBUSY;
+
+ for ( ; !ret && n--; addr += 4) {
+ t4_write_reg(adap, CIM_HOST_ACC_CTRL_A, addr);
+ ret = t4_wait_op_done(adap, CIM_HOST_ACC_CTRL_A, HOSTBUSY_F,
+ 0, 5, 2);
+ if (!ret)
+ *valp++ = t4_read_reg(adap, CIM_HOST_ACC_DATA_A);
+ }
+ return ret;
+}
+
+/**
+ * t4_cim_write - write a block into CIM internal address space
+ * @adap: the adapter
+ * @addr: the start address within the CIM address space
+ * @n: number of words to write
+ * @valp: set of values to write
+ *
+ * Writes a block of 4-byte words into the CIM internal address space.
+ */
+int t4_cim_write(struct adapter *adap, unsigned int addr, unsigned int n,
+ const unsigned int *valp)
+{
+ int ret = 0;
+
+ if (t4_read_reg(adap, CIM_HOST_ACC_CTRL_A) & HOSTBUSY_F)
+ return -EBUSY;
+
+ for ( ; !ret && n--; addr += 4) {
+ t4_write_reg(adap, CIM_HOST_ACC_DATA_A, *valp++);
+ t4_write_reg(adap, CIM_HOST_ACC_CTRL_A, addr | HOSTWRITE_F);
+ ret = t4_wait_op_done(adap, CIM_HOST_ACC_CTRL_A, HOSTBUSY_F,
+ 0, 5, 2);
+ }
+ return ret;
+}
+
+static int t4_cim_write1(struct adapter *adap, unsigned int addr,
+ unsigned int val)
+{
+ return t4_cim_write(adap, addr, 1, &val);
+}
+
+/**
+ * t4_cim_read_la - read CIM LA capture buffer
+ * @adap: the adapter
+ * @la_buf: where to store the LA data
+ * @wrptr: the HW write pointer within the capture buffer
+ *
+ * Reads the contents of the CIM LA buffer with the most recent entry at
+ * the end of the returned data and with the entry at @wrptr first.
+ * We try to leave the LA in the running state we find it in.
+ */
+int t4_cim_read_la(struct adapter *adap, u32 *la_buf, unsigned int *wrptr)
+{
+ int i, ret;
+ unsigned int cfg, val, idx;
+
+ ret = t4_cim_read(adap, UP_UP_DBG_LA_CFG_A, 1, &cfg);
+ if (ret)
+ return ret;
+
+ if (cfg & UPDBGLAEN_F) { /* LA is running, freeze it */
+ ret = t4_cim_write1(adap, UP_UP_DBG_LA_CFG_A, 0);
+ if (ret)
+ return ret;
+ }
+
+ ret = t4_cim_read(adap, UP_UP_DBG_LA_CFG_A, 1, &val);
+ if (ret)
+ goto restart;
+
+ idx = UPDBGLAWRPTR_G(val);
+ if (wrptr)
+ *wrptr = idx;
+
+ for (i = 0; i < adap->params.cim_la_size; i++) {
+ ret = t4_cim_write1(adap, UP_UP_DBG_LA_CFG_A,
+ UPDBGLARDPTR_V(idx) | UPDBGLARDEN_F);
+ if (ret)
+ break;
+ ret = t4_cim_read(adap, UP_UP_DBG_LA_CFG_A, 1, &val);
+ if (ret)
+ break;
+ if (val & UPDBGLARDEN_F) {
+ ret = -ETIMEDOUT;
+ break;
+ }
+ ret = t4_cim_read(adap, UP_UP_DBG_LA_DATA_A, 1, &la_buf[i]);
+ if (ret)
+ break;
+ idx = (idx + 1) & UPDBGLARDPTR_M;
+ }
+restart:
+ if (cfg & UPDBGLAEN_F) {
+ int r = t4_cim_write1(adap, UP_UP_DBG_LA_CFG_A,
+ cfg & ~UPDBGLARDEN_F);
+ if (!ret)
+ ret = r;
+ }
+ return ret;
+}
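A sketch of dumping the LA buffer, assuming a sleeping context for the allocation; cim_la_size is the word count cached in the adapter params, and the entry at wrptr is the oldest:

	static int dump_cim_la(struct adapter *adap)
	{
		unsigned int wrptr;
		u32 *la_buf;
		int ret;

		la_buf = kcalloc(adap->params.cim_la_size, sizeof(u32),
				 GFP_KERNEL);
		if (!la_buf)
			return -ENOMEM;
		ret = t4_cim_read_la(adap, la_buf, &wrptr);
		if (!ret)	/* oldest entry first, newest last */
			print_hex_dump(KERN_DEBUG, "cimla: ", DUMP_PREFIX_OFFSET,
				       16, 4, la_buf,
				       adap->params.cim_la_size * sizeof(u32),
				       false);
		kfree(la_buf);
		return ret;
	}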
WOL_PAT_LEN = 128, /* length of WoL patterns */
};
+enum {
+ CIM_NUM_IBQ = 6, /* # of CIM IBQs */
+ CIM_NUM_OBQ = 6, /* # of CIM OBQs */
+ CIM_NUM_OBQ_T5 = 8, /* # of CIM OBQs for T5 adapter */
+ CIMLA_SIZE = 2048, /* # of 32-bit words in CIM LA */
+};
+
enum {
SF_PAGE_SIZE = 256, /* serial flash page size */
SF_SEC_SIZE = 64 * 1024, /* serial flash sector size */
SGE_INGPADBOUNDARY_SHIFT = 5,/* ingress queue pad boundary */
};
+/* PCI-e memory window access */
+enum pcie_memwin {
+ MEMWIN_NIC = 0,
+ MEMWIN_RSVD1 = 1,
+ MEMWIN_RSVD2 = 2,
+ MEMWIN_RDMA = 3,
+ MEMWIN_RSVD4 = 4,
+ MEMWIN_FOISCSI = 5,
+ MEMWIN_CSIOSTOR = 6,
+ MEMWIN_RSVD7 = 7,
+};
+
struct sge_qstat { /* data written to SGE queue status entries */
__be32 qid;
__be16 cidx;
CPL_ERR_IWARP_FLM = 50,
};
+enum {
+ CPL_CONN_POLICY_AUTO = 0,
+ CPL_CONN_POLICY_ASK = 1,
+ CPL_CONN_POLICY_FILTER = 2,
+ CPL_CONN_POLICY_DENY = 3
+};
+
enum {
ULP_MODE_NONE = 0,
ULP_MODE_ISCSI = 2,
u8 opcode;
};
-#define CPL_OPCODE(x) ((x) << 24)
-#define G_CPL_OPCODE(x) (((x) >> 24) & 0xFF)
-#define MK_OPCODE_TID(opcode, tid) (CPL_OPCODE(opcode) | (tid))
+#define CPL_OPCODE_S 24
+#define CPL_OPCODE_V(x) ((x) << CPL_OPCODE_S)
+#define CPL_OPCODE_G(x) (((x) >> CPL_OPCODE_S) & 0xFF)
+#define TID_G(x) ((x) & 0xFFFFFF)
+
+/* tid is assumed to be 24 bits wide */
+#define MK_OPCODE_TID(opcode, tid) (CPL_OPCODE_V(opcode) | (tid))
+
#define OPCODE_TID(cmd) ((cmd)->ot.opcode_tid)
-#define GET_TID(cmd) (ntohl(OPCODE_TID(cmd)) & 0xFFFFFF)
+
+/* extract the TID from a CPL command */
+#define GET_TID(cmd) (TID_G(be32_to_cpu(OPCODE_TID(cmd))))
/* partitioning of TID fields that also carry a queue id */
-#define GET_TID_TID(x) ((x) & 0x3fff)
-#define GET_TID_QID(x) (((x) >> 14) & 0x3ff)
-#define TID_QID(x) ((x) << 14)
+#define TID_TID_S 0
+#define TID_TID_M 0x3fff
+#define TID_TID_G(x) (((x) >> TID_TID_S) & TID_TID_M)
+
+#define TID_QID_S 14
+#define TID_QID_M 0x3ff
+#define TID_QID_V(x) ((x) << TID_QID_S)
+#define TID_QID_G(x) (((x) >> TID_QID_S) & TID_QID_M)
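A sketch of the intended use of the new getters; the choice of cpl_act_open_rpl is illustrative, since any CPL carrying an opcode_tid word works the same way:

	static void show_tid(struct adapter *adap,
			     const struct cpl_act_open_rpl *rpl)
	{
		unsigned int tid = GET_TID(rpl);	/* low 24 bits of ot */
		unsigned int qid = TID_QID_G(tid);	/* bits 23:14 */
		unsigned int idx = TID_TID_G(tid);	/* bits 13:0 */

		dev_dbg(adap->pdev_dev, "tid %#x -> qid %u, tid %u\n",
			tid, qid, idx);
	}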
struct rss_header {
u8 opcode;
};
/* wr_hi fields */
-#define S_WR_OP 24
-#define V_WR_OP(x) ((__u64)(x) << S_WR_OP)
+#define WR_OP_S 24
+#define WR_OP_V(x) ((__u64)(x) << WR_OP_S)
#define WR_HDR struct work_request_hdr wr
__be32 local_ip;
__be32 peer_ip;
__be64 opt0;
-#define NO_CONG(x) ((x) << 4)
-#define DELACK(x) ((x) << 5)
-#define DSCP(x) ((x) << 22)
-#define TCAM_BYPASS(x) ((u64)(x) << 48)
-#define NAGLE(x) ((u64)(x) << 49)
__be64 opt1;
-#define SYN_RSS_ENABLE (1 << 0)
-#define SYN_RSS_QUEUE(x) ((x) << 2)
-#define CONN_POLICY_ASK (1 << 22)
};
+/* option 0 fields */
+#define NO_CONG_S 4
+#define NO_CONG_V(x) ((x) << NO_CONG_S)
+#define NO_CONG_F NO_CONG_V(1U)
+
+#define DELACK_S 5
+#define DELACK_V(x) ((x) << DELACK_S)
+#define DELACK_F DELACK_V(1U)
+
+#define DSCP_S 22
+#define DSCP_M 0x3F
+#define DSCP_V(x) ((x) << DSCP_S)
+#define DSCP_G(x) (((x) >> DSCP_S) & DSCP_M)
+
+#define TCAM_BYPASS_S 48
+#define TCAM_BYPASS_V(x) ((__u64)(x) << TCAM_BYPASS_S)
+#define TCAM_BYPASS_F TCAM_BYPASS_V(1ULL)
+
+#define NAGLE_S 49
+#define NAGLE_V(x) ((__u64)(x) << NAGLE_S)
+#define NAGLE_F NAGLE_V(1ULL)
+
+/* option 1 fields */
+#define SYN_RSS_ENABLE_S 0
+#define SYN_RSS_ENABLE_V(x) ((x) << SYN_RSS_ENABLE_S)
+#define SYN_RSS_ENABLE_F SYN_RSS_ENABLE_V(1U)
+
+#define SYN_RSS_QUEUE_S 2
+#define SYN_RSS_QUEUE_V(x) ((x) << SYN_RSS_QUEUE_S)
+
+#define CONN_POLICY_S 22
+#define CONN_POLICY_V(x) ((x) << CONN_POLICY_S)
+
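A hedged sketch of composing opt0/opt1 with the new helpers; req, tos and rss_qid are placeholders, and CONN_POLICY_V(CPL_CONN_POLICY_ASK) reproduces the removed CONN_POLICY_ASK flag:

	static void init_pass_open_opts(struct cpl_pass_open_req *req,
					u8 tos, unsigned int rss_qid)
	{
		req->opt0 = cpu_to_be64(NO_CONG_F | DELACK_F |
					DSCP_V(tos >> 2) | TCAM_BYPASS_F);
		req->opt1 = cpu_to_be64(SYN_RSS_ENABLE_F |
					SYN_RSS_QUEUE_V(rss_qid) |
					CONN_POLICY_V(CPL_CONN_POLICY_ASK));
	}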
struct cpl_pass_open_req6 {
WR_HDR;
union opcode_tid ot;
WR_HDR;
union opcode_tid ot;
__be32 opt2;
-#define RX_COALESCE_VALID(x) ((x) << 11)
-#define RX_COALESCE(x) ((x) << 12)
-#define PACE(x) ((x) << 16)
-#define TX_QUEUE(x) ((x) << 23)
-#define CCTRL_ECN(x) ((x) << 27)
-#define TSTAMPS_EN(x) ((x) << 29)
-#define SACK_EN(x) ((x) << 30)
__be64 opt0;
};
+/* option 2 fields */
+#define RX_COALESCE_VALID_S 11
+#define RX_COALESCE_VALID_V(x) ((x) << RX_COALESCE_VALID_S)
+#define RX_COALESCE_VALID_F RX_COALESCE_VALID_V(1U)
+
+#define RX_COALESCE_S 12
+#define RX_COALESCE_V(x) ((x) << RX_COALESCE_S)
+
+#define PACE_S 16
+#define PACE_V(x) ((x) << PACE_S)
+
+#define TX_QUEUE_S 23
+#define TX_QUEUE_M 0x7
+#define TX_QUEUE_V(x) ((x) << TX_QUEUE_S)
+#define TX_QUEUE_G(x) (((x) >> TX_QUEUE_S) & TX_QUEUE_M)
+
+#define CCTRL_ECN_S 27
+#define CCTRL_ECN_V(x) ((x) << CCTRL_ECN_S)
+#define CCTRL_ECN_F CCTRL_ECN_V(1U)
+
+#define TSTAMPS_EN_S 29
+#define TSTAMPS_EN_V(x) ((x) << TSTAMPS_EN_S)
+#define TSTAMPS_EN_F TSTAMPS_EN_V(1U)
+
+#define SACK_EN_S 30
+#define SACK_EN_V(x) ((x) << SACK_EN_S)
+#define SACK_EN_F SACK_EN_V(1U)
+
struct cpl_t5_pass_accept_rpl {
WR_HDR;
union opcode_tid ot;
struct cpl_act_open_rpl {
union opcode_tid ot;
__be32 atid_status;
-#define GET_AOPEN_STATUS(x) ((x) & 0xff)
-#define GET_AOPEN_ATID(x) (((x) >> 8) & 0xffffff)
};
+/* cpl_act_open_rpl.atid_status fields */
+#define AOPEN_STATUS_S 0
+#define AOPEN_STATUS_M 0xFF
+#define AOPEN_STATUS_G(x) (((x) >> AOPEN_STATUS_S) & AOPEN_STATUS_M)
+
+#define AOPEN_ATID_S 8
+#define AOPEN_ATID_M 0xFFFFFF
+#define AOPEN_ATID_G(x) (((x) >> AOPEN_ATID_S) & AOPEN_ATID_M)
+
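A minimal completion-handler sketch using the new atid_status getters:

	static void act_open_rpl(struct adapter *adap,
				 const struct cpl_act_open_rpl *rpl)
	{
		u32 v = be32_to_cpu(rpl->atid_status);

		dev_dbg(adap->pdev_dev, "atid %u completed with status %u\n",
			AOPEN_ATID_G(v), AOPEN_STATUS_G(v));
	}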
struct cpl_pass_establish {
union opcode_tid ot;
__be32 rsvd;
__be32 tos_stid;
-#define PASS_OPEN_TID(x) ((x) << 0)
-#define PASS_OPEN_TOS(x) ((x) << 24)
-#define GET_PASS_OPEN_TID(x) (((x) >> 0) & 0xFFFFFF)
-#define GET_POPEN_TID(x) ((x) & 0xffffff)
-#define GET_POPEN_TOS(x) (((x) >> 24) & 0xff)
__be16 mac_idx;
__be16 tcp_opt;
-#define GET_TCPOPT_WSCALE_OK(x) (((x) >> 5) & 1)
-#define GET_TCPOPT_SACK(x) (((x) >> 6) & 1)
-#define GET_TCPOPT_TSTAMP(x) (((x) >> 7) & 1)
-#define GET_TCPOPT_SND_WSCALE(x) (((x) >> 8) & 0xf)
-#define GET_TCPOPT_MSS(x) (((x) >> 12) & 0xf)
__be32 snd_isn;
__be32 rcv_isn;
};
+/* cpl_pass_establish.tos_stid fields */
+#define PASS_OPEN_TID_S 0
+#define PASS_OPEN_TID_M 0xFFFFFF
+#define PASS_OPEN_TID_V(x) ((x) << PASS_OPEN_TID_S)
+#define PASS_OPEN_TID_G(x) (((x) >> PASS_OPEN_TID_S) & PASS_OPEN_TID_M)
+
+#define PASS_OPEN_TOS_S 24
+#define PASS_OPEN_TOS_M 0xFF
+#define PASS_OPEN_TOS_V(x) ((x) << PASS_OPEN_TOS_S)
+#define PASS_OPEN_TOS_G(x) (((x) >> PASS_OPEN_TOS_S) & PASS_OPEN_TOS_M)
+
+/* cpl_pass_establish.tcp_opt fields (also applies to act_open_establish) */
+#define TCPOPT_WSCALE_OK_S 5
+#define TCPOPT_WSCALE_OK_M 0x1
+#define TCPOPT_WSCALE_OK_G(x) \
+ (((x) >> TCPOPT_WSCALE_OK_S) & TCPOPT_WSCALE_OK_M)
+
+#define TCPOPT_SACK_S 6
+#define TCPOPT_SACK_M 0x1
+#define TCPOPT_SACK_G(x) (((x) >> TCPOPT_SACK_S) & TCPOPT_SACK_M)
+
+#define TCPOPT_TSTAMP_S 7
+#define TCPOPT_TSTAMP_M 0x1
+#define TCPOPT_TSTAMP_G(x) (((x) >> TCPOPT_TSTAMP_S) & TCPOPT_TSTAMP_M)
+
+#define TCPOPT_SND_WSCALE_S 8
+#define TCPOPT_SND_WSCALE_M 0xF
+#define TCPOPT_SND_WSCALE_G(x) \
+ (((x) >> TCPOPT_SND_WSCALE_S) & TCPOPT_SND_WSCALE_M)
+
+#define TCPOPT_MSS_S 12
+#define TCPOPT_MSS_M 0xF
+#define TCPOPT_MSS_G(x) (((x) >> TCPOPT_MSS_S) & TCPOPT_MSS_M)
+
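A sketch of decoding tcp_opt with the new getters; note that in the cxgb4 ULP drivers the MSS field is conventionally an index into the MTU table, not a byte count:

	static void show_tcp_opt(struct adapter *adap,
				 const struct cpl_pass_establish *rpl)
	{
		u16 opt = be16_to_cpu(rpl->tcp_opt);

		dev_dbg(adap->pdev_dev,
			"wscale_ok %u sack %u tstamp %u snd_wscale %u mss_idx %u\n",
			TCPOPT_WSCALE_OK_G(opt), TCPOPT_SACK_G(opt),
			TCPOPT_TSTAMP_G(opt), TCPOPT_SND_WSCALE_G(opt),
			TCPOPT_MSS_G(opt));
	}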
struct cpl_act_establish {
union opcode_tid ot;
__be32 rsvd;
WR_HDR;
union opcode_tid ot;
__be16 reply_ctrl;
-#define QUEUENO(x) ((x) << 0)
-#define REPLY_CHAN(x) ((x) << 14)
-#define NO_REPLY(x) ((x) << 15)
__be16 cookie;
};
+/* cpl_get_tcb.reply_ctrl fields */
+#define QUEUENO_S 0
+#define QUEUENO_V(x) ((x) << QUEUENO_S)
+
+#define REPLY_CHAN_S 14
+#define REPLY_CHAN_V(x) ((x) << REPLY_CHAN_S)
+#define REPLY_CHAN_F REPLY_CHAN_V(1U)
+
+#define NO_REPLY_S 15
+#define NO_REPLY_V(x) ((x) << NO_REPLY_S)
+#define NO_REPLY_F NO_REPLY_V(1U)
+
struct cpl_set_tcb_field {
WR_HDR;
union opcode_tid ot;
__be16 reply_ctrl;
__be16 word_cookie;
-#define TCB_WORD(x) ((x) << 0)
-#define TCB_COOKIE(x) ((x) << 5)
-#define GET_TCB_COOKIE(x) (((x) >> 5) & 7)
__be64 mask;
__be64 val;
};
+/* cpl_set_tcb_field.word_cookie fields */
+#define TCB_WORD_S 0
+#define TCB_WORD(x) ((x) << TCB_WORD_S)
+
+#define TCB_COOKIE_S 5
+#define TCB_COOKIE_M 0x7
+#define TCB_COOKIE_V(x) ((x) << TCB_COOKIE_S)
+#define TCB_COOKIE_G(x) (((x) >> TCB_COOKIE_S) & TCB_COOKIE_M)
+
struct cpl_set_tcb_rpl {
union opcode_tid ot;
__be16 rsvd;
WR_HDR;
union opcode_tid ot;
__be16 reply_ctrl;
-#define LISTSVR_IPV6(x) ((x) << 14)
__be16 rsvd;
};
+/* additional cpl_close_listsvr_req.reply_ctrl field */
+#define LISTSVR_IPV6_S 14
+#define LISTSVR_IPV6_V(x) ((x) << LISTSVR_IPV6_S)
+#define LISTSVR_IPV6_F LISTSVR_IPV6_V(1U)
+
struct cpl_close_listsvr_rpl {
union opcode_tid ot;
u8 rsvd[3];
/* encapsulated CPL (TX_PKT, TX_PKT_XT or TX_DATA) follows here */
};
+/* cpl_tx_pkt_lso_core.lso_ctrl fields */
+#define LSO_TCPHDR_LEN_S 0
+#define LSO_TCPHDR_LEN_V(x) ((x) << LSO_TCPHDR_LEN_S)
+
+#define LSO_IPHDR_LEN_S 4
+#define LSO_IPHDR_LEN_V(x) ((x) << LSO_IPHDR_LEN_S)
+
+#define LSO_ETHHDR_LEN_S 16
+#define LSO_ETHHDR_LEN_V(x) ((x) << LSO_ETHHDR_LEN_S)
+
+#define LSO_IPV6_S 20
+#define LSO_IPV6_V(x) ((x) << LSO_IPV6_S)
+#define LSO_IPV6_F LSO_IPV6_V(1U)
+
+#define LSO_LAST_SLICE_S 22
+#define LSO_LAST_SLICE_V(x) ((x) << LSO_LAST_SLICE_S)
+#define LSO_LAST_SLICE_F LSO_LAST_SLICE_V(1U)
+
+#define LSO_FIRST_SLICE_S 23
+#define LSO_FIRST_SLICE_V(x) ((x) << LSO_FIRST_SLICE_S)
+#define LSO_FIRST_SLICE_F LSO_FIRST_SLICE_V(1U)
+
+#define LSO_OPCODE_S 24
+#define LSO_OPCODE_V(x) ((x) << LSO_OPCODE_S)
+
+#define LSO_T5_XFER_SIZE_S 0
+#define LSO_T5_XFER_SIZE_V(x) ((x) << LSO_T5_XFER_SIZE_S)
+
struct cpl_tx_pkt_lso {
WR_HDR;
struct cpl_tx_pkt_lso_core c;
struct cpl_iscsi_hdr {
union opcode_tid ot;
__be16 pdu_len_ddp;
-#define ISCSI_PDU_LEN(x) ((x) & 0x7FFF)
-#define ISCSI_DDP (1 << 15)
__be16 len;
__be32 seq;
__be16 urg;
u8 status;
};
+/* cpl_iscsi_hdr.pdu_len_ddp fields */
+#define ISCSI_PDU_LEN_S 0
+#define ISCSI_PDU_LEN_M 0x7FFF
+#define ISCSI_PDU_LEN_V(x) ((x) << ISCSI_PDU_LEN_S)
+#define ISCSI_PDU_LEN_G(x) (((x) >> ISCSI_PDU_LEN_S) & ISCSI_PDU_LEN_M)
+
+#define ISCSI_DDP_S 15
+#define ISCSI_DDP_V(x) ((x) << ISCSI_DDP_S)
+#define ISCSI_DDP_F ISCSI_DDP_V(1U)
+
struct cpl_rx_data {
union opcode_tid ot;
__be16 rsvd;
__be16 vlan;
__be16 len;
__be32 l2info;
-#define RXF_UDP (1 << 22)
-#define RXF_TCP (1 << 23)
-#define RXF_IP (1 << 24)
-#define RXF_IP6 (1 << 25)
__be16 hdr_len;
__be16 err_vec;
};
+#define RXF_UDP_S 22
+#define RXF_UDP_V(x) ((x) << RXF_UDP_S)
+#define RXF_UDP_F RXF_UDP_V(1U)
+
+#define RXF_TCP_S 23
+#define RXF_TCP_V(x) ((x) << RXF_TCP_S)
+#define RXF_TCP_F RXF_TCP_V(1U)
+
+#define RXF_IP_S 24
+#define RXF_IP_V(x) ((x) << RXF_IP_S)
+#define RXF_IP_F RXF_IP_V(1U)
+
+#define RXF_IP6_S 25
+#define RXF_IP6_V(x) ((x) << RXF_IP6_S)
+#define RXF_IP6_F RXF_IP6_V(1U)
+
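A small classification sketch driven by these flags; the packet is assumed to be a struct cpl_rx_pkt, the struct whose l2info field appears above:

	static bool rx_pkt_is_l4(const struct cpl_rx_pkt *pkt)
	{
		u32 l2info = be32_to_cpu(pkt->l2info);

		return l2info & (RXF_TCP_F | RXF_UDP_F);
	}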
/* rx_pkt.l2info fields */
-#define S_RX_ETHHDR_LEN 0
-#define M_RX_ETHHDR_LEN 0x1F
-#define V_RX_ETHHDR_LEN(x) ((x) << S_RX_ETHHDR_LEN)
-#define G_RX_ETHHDR_LEN(x) (((x) >> S_RX_ETHHDR_LEN) & M_RX_ETHHDR_LEN)
-
-#define S_RX_T5_ETHHDR_LEN 0
-#define M_RX_T5_ETHHDR_LEN 0x3F
-#define V_RX_T5_ETHHDR_LEN(x) ((x) << S_RX_T5_ETHHDR_LEN)
-#define G_RX_T5_ETHHDR_LEN(x) (((x) >> S_RX_T5_ETHHDR_LEN) & M_RX_T5_ETHHDR_LEN)
-
-#define S_RX_MACIDX 8
-#define M_RX_MACIDX 0x1FF
-#define V_RX_MACIDX(x) ((x) << S_RX_MACIDX)
-#define G_RX_MACIDX(x) (((x) >> S_RX_MACIDX) & M_RX_MACIDX)
-
-#define S_RXF_SYN 21
-#define V_RXF_SYN(x) ((x) << S_RXF_SYN)
-#define F_RXF_SYN V_RXF_SYN(1U)
-
-#define S_RX_CHAN 28
-#define M_RX_CHAN 0xF
-#define V_RX_CHAN(x) ((x) << S_RX_CHAN)
-#define G_RX_CHAN(x) (((x) >> S_RX_CHAN) & M_RX_CHAN)
+#define RX_ETHHDR_LEN_S 0
+#define RX_ETHHDR_LEN_M 0x1F
+#define RX_ETHHDR_LEN_V(x) ((x) << RX_ETHHDR_LEN_S)
+#define RX_ETHHDR_LEN_G(x) (((x) >> RX_ETHHDR_LEN_S) & RX_ETHHDR_LEN_M)
+
+#define RX_T5_ETHHDR_LEN_S 0
+#define RX_T5_ETHHDR_LEN_M 0x3F
+#define RX_T5_ETHHDR_LEN_V(x) ((x) << RX_T5_ETHHDR_LEN_S)
+#define RX_T5_ETHHDR_LEN_G(x) (((x) >> RX_T5_ETHHDR_LEN_S) & RX_T5_ETHHDR_LEN_M)
+
+#define RX_MACIDX_S 8
+#define RX_MACIDX_M 0x1FF
+#define RX_MACIDX_V(x) ((x) << RX_MACIDX_S)
+#define RX_MACIDX_G(x) (((x) >> RX_MACIDX_S) & RX_MACIDX_M)
+
+#define RXF_SYN_S 21
+#define RXF_SYN_V(x) ((x) << RXF_SYN_S)
+#define RXF_SYN_F RXF_SYN_V(1U)
+
+#define RX_CHAN_S 28
+#define RX_CHAN_M 0xF
+#define RX_CHAN_V(x) ((x) << RX_CHAN_S)
+#define RX_CHAN_G(x) (((x) >> RX_CHAN_S) & RX_CHAN_M)
/* rx_pkt.hdr_len fields */
-#define S_RX_TCPHDR_LEN 0
-#define M_RX_TCPHDR_LEN 0x3F
-#define V_RX_TCPHDR_LEN(x) ((x) << S_RX_TCPHDR_LEN)
-#define G_RX_TCPHDR_LEN(x) (((x) >> S_RX_TCPHDR_LEN) & M_RX_TCPHDR_LEN)
+#define RX_TCPHDR_LEN_S 0
+#define RX_TCPHDR_LEN_M 0x3F
+#define RX_TCPHDR_LEN_V(x) ((x) << RX_TCPHDR_LEN_S)
+#define RX_TCPHDR_LEN_G(x) (((x) >> RX_TCPHDR_LEN_S) & RX_TCPHDR_LEN_M)
-#define S_RX_IPHDR_LEN 6
-#define M_RX_IPHDR_LEN 0x3FF
-#define V_RX_IPHDR_LEN(x) ((x) << S_RX_IPHDR_LEN)
-#define G_RX_IPHDR_LEN(x) (((x) >> S_RX_IPHDR_LEN) & M_RX_IPHDR_LEN)
+#define RX_IPHDR_LEN_S 6
+#define RX_IPHDR_LEN_M 0x3FF
+#define RX_IPHDR_LEN_V(x) ((x) << RX_IPHDR_LEN_S)
+#define RX_IPHDR_LEN_G(x) (((x) >> RX_IPHDR_LEN_S) & RX_IPHDR_LEN_M)
struct cpl_trace_pkt {
u8 opcode;
WR_HDR;
union opcode_tid ot;
__be16 params;
-#define L2T_W_INFO(x) ((x) << 2)
-#define L2T_W_PORT(x) ((x) << 8)
-#define L2T_W_NOREPLY(x) ((x) << 15)
__be16 l2t_idx;
__be16 vlan;
u8 dst_mac[6];
};
+/* cpl_l2t_write_req.params fields */
+#define L2T_W_INFO_S 2
+#define L2T_W_INFO_V(x) ((x) << L2T_W_INFO_S)
+
+#define L2T_W_PORT_S 8
+#define L2T_W_PORT_V(x) ((x) << L2T_W_PORT_S)
+
+#define L2T_W_NOREPLY_S 15
+#define L2T_W_NOREPLY_V(x) ((x) << L2T_W_NOREPLY_S)
+#define L2T_W_NOREPLY_F L2T_W_NOREPLY_V(1U)
+
struct cpl_l2t_write_rpl {
union opcode_tid ot;
u8 status;
struct cpl_sge_egr_update {
__be32 opcode_qid;
-#define EGR_QID(x) ((x) & 0x1FFFF)
__be16 cidx;
__be16 pidx;
};
+/* cpl_sge_egr_update.ot fields */
+#define EGR_QID_S 0
+#define EGR_QID_M 0x1FFFF
+#define EGR_QID_G(x) (((x) >> EGR_QID_S) & EGR_QID_M)
+
/* cpl_fw*.type values */
enum {
FW_TYPE_CMD_RPL = 0,
struct ulptx_sgl {
__be32 cmd_nsge;
-#define ULPTX_NSGE(x) ((x) << 0)
-#define ULPTX_MORE (1U << 23)
__be32 len0;
__be64 addr0;
struct ulptx_sge_pair sge[0];
};
+#define ULPTX_NSGE_S 0
+#define ULPTX_NSGE_V(x) ((x) << ULPTX_NSGE_S)
+
+#define ULPTX_MORE_S 23
+#define ULPTX_MORE_V(x) ((x) << ULPTX_MORE_S)
+#define ULPTX_MORE_F ULPTX_MORE_V(1U)
+
struct ulp_mem_io {
WR_HDR;
__be32 cmd;
__be32 len16; /* command length */
__be32 dlen; /* data length in 32-byte units */
__be32 lock_addr;
-#define ULP_MEMIO_LOCK(x) ((x) << 31)
};
+#define ULP_MEMIO_LOCK_S 31
+#define ULP_MEMIO_LOCK_V(x) ((x) << ULP_MEMIO_LOCK_S)
+#define ULP_MEMIO_LOCK_F ULP_MEMIO_LOCK_V(1U)
+
/* additional ulp_mem_io.cmd fields */
#define ULP_MEMIO_ORDER_S 23
#define ULP_MEMIO_ORDER_V(x) ((x) << ULP_MEMIO_ORDER_S)
#define T5_ULP_MEMIO_IMM_V(x) ((x) << T5_ULP_MEMIO_IMM_S)
#define T5_ULP_MEMIO_IMM_F T5_ULP_MEMIO_IMM_V(1U)
-#define S_T5_ULP_MEMIO_IMM 23
-#define V_T5_ULP_MEMIO_IMM(x) ((x) << S_T5_ULP_MEMIO_IMM)
-#define F_T5_ULP_MEMIO_IMM V_T5_ULP_MEMIO_IMM(1U)
-
-#define S_T5_ULP_MEMIO_ORDER 22
-#define V_T5_ULP_MEMIO_ORDER(x) ((x) << S_T5_ULP_MEMIO_ORDER)
-#define F_T5_ULP_MEMIO_ORDER V_T5_ULP_MEMIO_ORDER(1U)
+#define T5_ULP_MEMIO_ORDER_S 22
+#define T5_ULP_MEMIO_ORDER_V(x) ((x) << T5_ULP_MEMIO_ORDER_S)
+#define T5_ULP_MEMIO_ORDER_F T5_ULP_MEMIO_ORDER_V(1U)
/* ulp_mem_io.lock_addr fields */
#define ULP_MEMIO_ADDR_S 0
CH_PCI_ID_TABLE_FENTRY(0x5086), /* Custom 2x T580-CR */
CH_PCI_ID_TABLE_FENTRY(0x5087), /* Custom T580-CR */
CH_PCI_ID_TABLE_FENTRY(0x5088), /* Custom T570-CR */
+ CH_PCI_ID_TABLE_FENTRY(0x5089), /* Custom T520-CR */
CH_PCI_DEVICE_ID_TABLE_DEFINE_END;
#endif /* CH_PCI_DEVICE_ID_TABLE_DEFINE_BEGIN */
#define MC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
#define EDC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4)
-#define SGE_PF_KDOORBELL 0x0
-#define QID_MASK 0xffff8000U
-#define QID_SHIFT 15
-#define QID(x) ((x) << QID_SHIFT)
-#define DBPRIO(x) ((x) << 14)
-#define DBTYPE(x) ((x) << 13)
-#define PIDX_MASK 0x00003fffU
-#define PIDX_SHIFT 0
-#define PIDX(x) ((x) << PIDX_SHIFT)
-#define PIDX_SHIFT_T5 0
-#define PIDX_T5(x) ((x) << PIDX_SHIFT_T5)
-
-
-#define SGE_TIMERREGS 6
-#define SGE_PF_GTS 0x4
-#define INGRESSQID_MASK 0xffff0000U
-#define INGRESSQID_SHIFT 16
-#define INGRESSQID(x) ((x) << INGRESSQID_SHIFT)
-#define TIMERREG_MASK 0x0000e000U
-#define TIMERREG_SHIFT 13
-#define TIMERREG(x) ((x) << TIMERREG_SHIFT)
-#define SEINTARM_MASK 0x00001000U
-#define SEINTARM_SHIFT 12
-#define SEINTARM(x) ((x) << SEINTARM_SHIFT)
-#define CIDXINC_MASK 0x00000fffU
-#define CIDXINC_SHIFT 0
-#define CIDXINC(x) ((x) << CIDXINC_SHIFT)
-
-#define X_RXPKTCPLMODE_SPLIT 1
-#define X_INGPADBOUNDARY_SHIFT 5
-
-#define SGE_CONTROL 0x1008
-#define SGE_CONTROL2_A 0x1124
-#define DCASYSTYPE 0x00080000U
-#define RXPKTCPLMODE_MASK 0x00040000U
-#define RXPKTCPLMODE_SHIFT 18
-#define RXPKTCPLMODE(x) ((x) << RXPKTCPLMODE_SHIFT)
-#define EGRSTATUSPAGESIZE_MASK 0x00020000U
-#define EGRSTATUSPAGESIZE_SHIFT 17
-#define EGRSTATUSPAGESIZE(x) ((x) << EGRSTATUSPAGESIZE_SHIFT)
-#define PKTSHIFT_MASK 0x00001c00U
-#define PKTSHIFT_SHIFT 10
-#define PKTSHIFT(x) ((x) << PKTSHIFT_SHIFT)
-#define PKTSHIFT_GET(x) (((x) & PKTSHIFT_MASK) >> PKTSHIFT_SHIFT)
-#define INGPCIEBOUNDARY_32B_X 0
-#define INGPCIEBOUNDARY_MASK 0x00000380U
-#define INGPCIEBOUNDARY_SHIFT 7
-#define INGPCIEBOUNDARY(x) ((x) << INGPCIEBOUNDARY_SHIFT)
-#define INGPADBOUNDARY_MASK 0x00000070U
-#define INGPADBOUNDARY_SHIFT 4
-#define INGPADBOUNDARY(x) ((x) << INGPADBOUNDARY_SHIFT)
-#define INGPADBOUNDARY_GET(x) (((x) & INGPADBOUNDARY_MASK) \
- >> INGPADBOUNDARY_SHIFT)
-#define INGPACKBOUNDARY_16B_X 0
-#define INGPACKBOUNDARY_SHIFT_X 5
+#define SGE_PF_KDOORBELL_A 0x0
+
+#define QID_S 15
+#define QID_V(x) ((x) << QID_S)
+
+#define DBPRIO_S 14
+#define DBPRIO_V(x) ((x) << DBPRIO_S)
+#define DBPRIO_F DBPRIO_V(1U)
+
+#define PIDX_S 0
+#define PIDX_V(x) ((x) << PIDX_S)
+
+#define SGE_VF_KDOORBELL_A 0x0
+
+#define DBTYPE_S 13
+#define DBTYPE_V(x) ((x) << DBTYPE_S)
+#define DBTYPE_F DBTYPE_V(1U)
+
+#define PIDX_T5_S 0
+#define PIDX_T5_M 0x1fffU
+#define PIDX_T5_V(x) ((x) << PIDX_T5_S)
+#define PIDX_T5_G(x) (((x) >> PIDX_T5_S) & PIDX_T5_M)
+
+#define SGE_PF_GTS_A 0x4
+
+#define INGRESSQID_S 16
+#define INGRESSQID_V(x) ((x) << INGRESSQID_S)
+
+#define TIMERREG_S 13
+#define TIMERREG_V(x) ((x) << TIMERREG_S)
+
+#define SEINTARM_S 12
+#define SEINTARM_V(x) ((x) << SEINTARM_S)
+
+#define CIDXINC_S 0
+#define CIDXINC_M 0xfffU
+#define CIDXINC_V(x) ((x) << CIDXINC_S)
+
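A sketch of the two register writes these fields feed: ringing the PF kernel doorbell and re-arming an ingress queue through the GTS register. MYPF_REG and the queue ids are assumed from the surrounding driver:

	static void ring_tx_db(struct adapter *adap, unsigned int qid, int n)
	{
		wmb();	/* descriptors must be visible before the doorbell */
		t4_write_reg(adap, MYPF_REG(SGE_PF_KDOORBELL_A),
			     QID_V(qid) | PIDX_V(n));
	}

	static void rearm_rx_q(struct adapter *adap, unsigned int iqid,
			       unsigned int cidx_inc, unsigned int params)
	{
		t4_write_reg(adap, MYPF_REG(SGE_PF_GTS_A),
			     INGRESSQID_V(iqid) | CIDXINC_V(cidx_inc) |
			     SEINTARM_V(params));
	}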
+#define SGE_CONTROL_A 0x1008
+#define SGE_CONTROL2_A 0x1124
+
+#define RXPKTCPLMODE_S 18
+#define RXPKTCPLMODE_V(x) ((x) << RXPKTCPLMODE_S)
+#define RXPKTCPLMODE_F RXPKTCPLMODE_V(1U)
+
+#define EGRSTATUSPAGESIZE_S 17
+#define EGRSTATUSPAGESIZE_V(x) ((x) << EGRSTATUSPAGESIZE_S)
+#define EGRSTATUSPAGESIZE_F EGRSTATUSPAGESIZE_V(1U)
+
+#define PKTSHIFT_S 10
+#define PKTSHIFT_M 0x7U
+#define PKTSHIFT_V(x) ((x) << PKTSHIFT_S)
+#define PKTSHIFT_G(x) (((x) >> PKTSHIFT_S) & PKTSHIFT_M)
+
+#define INGPCIEBOUNDARY_S 7
+#define INGPCIEBOUNDARY_V(x) ((x) << INGPCIEBOUNDARY_S)
+
+#define INGPADBOUNDARY_S 4
+#define INGPADBOUNDARY_M 0x7U
+#define INGPADBOUNDARY_V(x) ((x) << INGPADBOUNDARY_S)
+#define INGPADBOUNDARY_G(x) (((x) >> INGPADBOUNDARY_S) & INGPADBOUNDARY_M)
+
+#define EGRPCIEBOUNDARY_S 1
+#define EGRPCIEBOUNDARY_V(x) ((x) << EGRPCIEBOUNDARY_S)
#define INGPACKBOUNDARY_S 16
#define INGPACKBOUNDARY_M 0x7U
#define INGPACKBOUNDARY_V(x) ((x) << INGPACKBOUNDARY_S)
#define INGPACKBOUNDARY_G(x) (((x) >> INGPACKBOUNDARY_S) \
& INGPACKBOUNDARY_M)
-#define EGRPCIEBOUNDARY_MASK 0x0000000eU
-#define EGRPCIEBOUNDARY_SHIFT 1
-#define EGRPCIEBOUNDARY(x) ((x) << EGRPCIEBOUNDARY_SHIFT)
-#define GLOBALENABLE 0x00000001U
-#define SGE_HOST_PAGE_SIZE 0x100c
+#define GLOBALENABLE_S 0
+#define GLOBALENABLE_V(x) ((x) << GLOBALENABLE_S)
+#define GLOBALENABLE_F GLOBALENABLE_V(1U)
+
+#define SGE_HOST_PAGE_SIZE_A 0x100c
+
+#define HOSTPAGESIZEPF7_S 28
+#define HOSTPAGESIZEPF7_M 0xfU
+#define HOSTPAGESIZEPF7_V(x) ((x) << HOSTPAGESIZEPF7_S)
+#define HOSTPAGESIZEPF7_G(x) (((x) >> HOSTPAGESIZEPF7_S) & HOSTPAGESIZEPF7_M)
+
+#define HOSTPAGESIZEPF6_S 24
+#define HOSTPAGESIZEPF6_M 0xfU
+#define HOSTPAGESIZEPF6_V(x) ((x) << HOSTPAGESIZEPF6_S)
+#define HOSTPAGESIZEPF6_G(x) (((x) >> HOSTPAGESIZEPF6_S) & HOSTPAGESIZEPF6_M)
+
+#define HOSTPAGESIZEPF5_S 20
+#define HOSTPAGESIZEPF5_M 0xfU
+#define HOSTPAGESIZEPF5_V(x) ((x) << HOSTPAGESIZEPF5_S)
+#define HOSTPAGESIZEPF5_G(x) (((x) >> HOSTPAGESIZEPF5_S) & HOSTPAGESIZEPF5_M)
+
+#define HOSTPAGESIZEPF4_S 16
+#define HOSTPAGESIZEPF4_M 0xfU
+#define HOSTPAGESIZEPF4_V(x) ((x) << HOSTPAGESIZEPF4_S)
+#define HOSTPAGESIZEPF4_G(x) (((x) >> HOSTPAGESIZEPF4_S) & HOSTPAGESIZEPF4_M)
+
+#define HOSTPAGESIZEPF3_S 12
+#define HOSTPAGESIZEPF3_M 0xfU
+#define HOSTPAGESIZEPF3_V(x) ((x) << HOSTPAGESIZEPF3_S)
+#define HOSTPAGESIZEPF3_G(x) (((x) >> HOSTPAGESIZEPF3_S) & HOSTPAGESIZEPF3_M)
+
+#define HOSTPAGESIZEPF2_S 8
+#define HOSTPAGESIZEPF2_M 0xfU
+#define HOSTPAGESIZEPF2_V(x) ((x) << HOSTPAGESIZEPF2_S)
+#define HOSTPAGESIZEPF2_G(x) (((x) >> HOSTPAGESIZEPF2_S) & HOSTPAGESIZEPF2_M)
+
+#define HOSTPAGESIZEPF1_S 4
+#define HOSTPAGESIZEPF1_M 0xfU
+#define HOSTPAGESIZEPF1_V(x) ((x) << HOSTPAGESIZEPF1_S)
+#define HOSTPAGESIZEPF1_G(x) (((x) >> HOSTPAGESIZEPF1_S) & HOSTPAGESIZEPF1_M)
+
+#define HOSTPAGESIZEPF0_S 0
+#define HOSTPAGESIZEPF0_M 0xfU
+#define HOSTPAGESIZEPF0_V(x) ((x) << HOSTPAGESIZEPF0_S)
+#define HOSTPAGESIZEPF0_G(x) (((x) >> HOSTPAGESIZEPF0_S) & HOSTPAGESIZEPF0_M)
+
+#define SGE_EGRESS_QUEUES_PER_PAGE_PF_A 0x1010
+#define SGE_EGRESS_QUEUES_PER_PAGE_VF_A 0x1014
-#define HOSTPAGESIZEPF7_MASK 0x0000000fU
-#define HOSTPAGESIZEPF7_SHIFT 28
-#define HOSTPAGESIZEPF7(x) ((x) << HOSTPAGESIZEPF7_SHIFT)
+#define QUEUESPERPAGEPF1_S 4
+
+#define QUEUESPERPAGEPF0_S 0
+#define QUEUESPERPAGEPF0_M 0xfU
+#define QUEUESPERPAGEPF0_V(x) ((x) << QUEUESPERPAGEPF0_S)
+#define QUEUESPERPAGEPF0_G(x) (((x) >> QUEUESPERPAGEPF0_S) & QUEUESPERPAGEPF0_M)
-#define HOSTPAGESIZEPF6_MASK 0x0000000fU
-#define HOSTPAGESIZEPF6_SHIFT 24
-#define HOSTPAGESIZEPF6(x) ((x) << HOSTPAGESIZEPF6_SHIFT)
+#define SGE_INT_CAUSE1_A 0x1024
+#define SGE_INT_CAUSE2_A 0x1030
+#define SGE_INT_CAUSE3_A 0x103c
+
+#define ERR_FLM_DBP_S 31
+#define ERR_FLM_DBP_V(x) ((x) << ERR_FLM_DBP_S)
+#define ERR_FLM_DBP_F ERR_FLM_DBP_V(1U)
+
+#define ERR_FLM_IDMA1_S 30
+#define ERR_FLM_IDMA1_V(x) ((x) << ERR_FLM_IDMA1_S)
+#define ERR_FLM_IDMA1_F ERR_FLM_IDMA1_V(1U)
+
+#define ERR_FLM_IDMA0_S 29
+#define ERR_FLM_IDMA0_V(x) ((x) << ERR_FLM_IDMA0_S)
+#define ERR_FLM_IDMA0_F ERR_FLM_IDMA0_V(1U)
+
+#define ERR_FLM_HINT_S 28
+#define ERR_FLM_HINT_V(x) ((x) << ERR_FLM_HINT_S)
+#define ERR_FLM_HINT_F ERR_FLM_HINT_V(1U)
+
+#define ERR_PCIE_ERROR3_S 27
+#define ERR_PCIE_ERROR3_V(x) ((x) << ERR_PCIE_ERROR3_S)
+#define ERR_PCIE_ERROR3_F ERR_PCIE_ERROR3_V(1U)
+
+#define ERR_PCIE_ERROR2_S 26
+#define ERR_PCIE_ERROR2_V(x) ((x) << ERR_PCIE_ERROR2_S)
+#define ERR_PCIE_ERROR2_F ERR_PCIE_ERROR2_V(1U)
+
+#define ERR_PCIE_ERROR1_S 25
+#define ERR_PCIE_ERROR1_V(x) ((x) << ERR_PCIE_ERROR1_S)
+#define ERR_PCIE_ERROR1_F ERR_PCIE_ERROR1_V(1U)
+
+#define ERR_PCIE_ERROR0_S 24
+#define ERR_PCIE_ERROR0_V(x) ((x) << ERR_PCIE_ERROR0_S)
+#define ERR_PCIE_ERROR0_F ERR_PCIE_ERROR0_V(1U)
+
+#define ERR_CPL_EXCEED_IQE_SIZE_S 22
+#define ERR_CPL_EXCEED_IQE_SIZE_V(x) ((x) << ERR_CPL_EXCEED_IQE_SIZE_S)
+#define ERR_CPL_EXCEED_IQE_SIZE_F ERR_CPL_EXCEED_IQE_SIZE_V(1U)
+
+#define ERR_INVALID_CIDX_INC_S 21
+#define ERR_INVALID_CIDX_INC_V(x) ((x) << ERR_INVALID_CIDX_INC_S)
+#define ERR_INVALID_CIDX_INC_F ERR_INVALID_CIDX_INC_V(1U)
+
+#define ERR_CPL_OPCODE_0_S 19
+#define ERR_CPL_OPCODE_0_V(x) ((x) << ERR_CPL_OPCODE_0_S)
+#define ERR_CPL_OPCODE_0_F ERR_CPL_OPCODE_0_V(1U)
+
+#define ERR_DROPPED_DB_S 18
+#define ERR_DROPPED_DB_V(x) ((x) << ERR_DROPPED_DB_S)
+#define ERR_DROPPED_DB_F ERR_DROPPED_DB_V(1U)
+
+#define ERR_DATA_CPL_ON_HIGH_QID1_S 17
+#define ERR_DATA_CPL_ON_HIGH_QID1_V(x) ((x) << ERR_DATA_CPL_ON_HIGH_QID1_S)
+#define ERR_DATA_CPL_ON_HIGH_QID1_F ERR_DATA_CPL_ON_HIGH_QID1_V(1U)
+
+#define ERR_DATA_CPL_ON_HIGH_QID0_S 16
+#define ERR_DATA_CPL_ON_HIGH_QID0_V(x) ((x) << ERR_DATA_CPL_ON_HIGH_QID0_S)
+#define ERR_DATA_CPL_ON_HIGH_QID0_F ERR_DATA_CPL_ON_HIGH_QID0_V(1U)
+
+#define ERR_BAD_DB_PIDX3_S 15
+#define ERR_BAD_DB_PIDX3_V(x) ((x) << ERR_BAD_DB_PIDX3_S)
+#define ERR_BAD_DB_PIDX3_F ERR_BAD_DB_PIDX3_V(1U)
+
+#define ERR_BAD_DB_PIDX2_S 14
+#define ERR_BAD_DB_PIDX2_V(x) ((x) << ERR_BAD_DB_PIDX2_S)
+#define ERR_BAD_DB_PIDX2_F ERR_BAD_DB_PIDX2_V(1U)
+
+#define ERR_BAD_DB_PIDX1_S 13
+#define ERR_BAD_DB_PIDX1_V(x) ((x) << ERR_BAD_DB_PIDX1_S)
+#define ERR_BAD_DB_PIDX1_F ERR_BAD_DB_PIDX1_V(1U)
+
+#define ERR_BAD_DB_PIDX0_S 12
+#define ERR_BAD_DB_PIDX0_V(x) ((x) << ERR_BAD_DB_PIDX0_S)
+#define ERR_BAD_DB_PIDX0_F ERR_BAD_DB_PIDX0_V(1U)
+
+#define ERR_ING_CTXT_PRIO_S 10
+#define ERR_ING_CTXT_PRIO_V(x) ((x) << ERR_ING_CTXT_PRIO_S)
+#define ERR_ING_CTXT_PRIO_F ERR_ING_CTXT_PRIO_V(1U)
+
+#define ERR_EGR_CTXT_PRIO_S 9
+#define ERR_EGR_CTXT_PRIO_V(x) ((x) << ERR_EGR_CTXT_PRIO_S)
+#define ERR_EGR_CTXT_PRIO_F ERR_EGR_CTXT_PRIO_V(1U)
+
+#define DBFIFO_HP_INT_S 8
+#define DBFIFO_HP_INT_V(x) ((x) << DBFIFO_HP_INT_S)
+#define DBFIFO_HP_INT_F DBFIFO_HP_INT_V(1U)
+
+#define DBFIFO_LP_INT_S 7
+#define DBFIFO_LP_INT_V(x) ((x) << DBFIFO_LP_INT_S)
+#define DBFIFO_LP_INT_F DBFIFO_LP_INT_V(1U)
+
+#define INGRESS_SIZE_ERR_S 5
+#define INGRESS_SIZE_ERR_V(x) ((x) << INGRESS_SIZE_ERR_S)
+#define INGRESS_SIZE_ERR_F INGRESS_SIZE_ERR_V(1U)
+
+#define EGRESS_SIZE_ERR_S 4
+#define EGRESS_SIZE_ERR_V(x) ((x) << EGRESS_SIZE_ERR_S)
+#define EGRESS_SIZE_ERR_F EGRESS_SIZE_ERR_V(1U)
+
+#define SGE_INT_ENABLE3_A 0x1040
+#define SGE_FL_BUFFER_SIZE0_A 0x1044
+#define SGE_FL_BUFFER_SIZE1_A 0x1048
+#define SGE_FL_BUFFER_SIZE2_A 0x104c
+#define SGE_FL_BUFFER_SIZE3_A 0x1050
+#define SGE_FL_BUFFER_SIZE4_A 0x1054
+#define SGE_FL_BUFFER_SIZE5_A 0x1058
+#define SGE_FL_BUFFER_SIZE6_A 0x105c
+#define SGE_FL_BUFFER_SIZE7_A 0x1060
+#define SGE_FL_BUFFER_SIZE8_A 0x1064
+
+#define SGE_INGRESS_RX_THRESHOLD_A 0x10a0
+
+#define THRESHOLD_0_S 24
+#define THRESHOLD_0_M 0x3fU
+#define THRESHOLD_0_V(x) ((x) << THRESHOLD_0_S)
+#define THRESHOLD_0_G(x) (((x) >> THRESHOLD_0_S) & THRESHOLD_0_M)
+
+#define THRESHOLD_1_S 16
+#define THRESHOLD_1_M 0x3fU
+#define THRESHOLD_1_V(x) ((x) << THRESHOLD_1_S)
+#define THRESHOLD_1_G(x) (((x) >> THRESHOLD_1_S) & THRESHOLD_1_M)
+
+#define THRESHOLD_2_S 8
+#define THRESHOLD_2_M 0x3fU
+#define THRESHOLD_2_V(x) ((x) << THRESHOLD_2_S)
+#define THRESHOLD_2_G(x) (((x) >> THRESHOLD_2_S) & THRESHOLD_2_M)
+
+#define THRESHOLD_3_S 0
+#define THRESHOLD_3_M 0x3fU
+#define THRESHOLD_3_V(x) ((x) << THRESHOLD_3_S)
+#define THRESHOLD_3_G(x) (((x) >> THRESHOLD_3_S) & THRESHOLD_3_M)
+
+#define SGE_CONM_CTRL_A 0x1094
+
+#define EGRTHRESHOLD_S 8
+#define EGRTHRESHOLD_M 0x3fU
+#define EGRTHRESHOLD_V(x) ((x) << EGRTHRESHOLD_S)
+#define EGRTHRESHOLD_G(x) (((x) >> EGRTHRESHOLD_S) & EGRTHRESHOLD_M)
+
+#define EGRTHRESHOLDPACKING_S 14
+#define EGRTHRESHOLDPACKING_M 0x3fU
+#define EGRTHRESHOLDPACKING_V(x) ((x) << EGRTHRESHOLDPACKING_S)
+#define EGRTHRESHOLDPACKING_G(x) \
+ (((x) >> EGRTHRESHOLDPACKING_S) & EGRTHRESHOLDPACKING_M)
+
+#define SGE_TIMESTAMP_LO_A 0x1098
+#define SGE_TIMESTAMP_HI_A 0x109c
+
+#define TSOP_S 28
+#define TSOP_M 0x3U
+#define TSOP_V(x) ((x) << TSOP_S)
+#define TSOP_G(x) (((x) >> TSOP_S) & TSOP_M)
+
+#define TSVAL_S 0
+#define TSVAL_M 0xfffffffU
+#define TSVAL_V(x) ((x) << TSVAL_S)
+#define TSVAL_G(x) (((x) >> TSVAL_S) & TSVAL_M)
+
+#define SGE_DBFIFO_STATUS_A 0x10a4
+
+#define HP_INT_THRESH_S 28
+#define HP_INT_THRESH_M 0xfU
+#define HP_INT_THRESH_V(x) ((x) << HP_INT_THRESH_S)
+
+#define LP_INT_THRESH_S 12
+#define LP_INT_THRESH_M 0xfU
+#define LP_INT_THRESH_V(x) ((x) << LP_INT_THRESH_S)
+
+#define SGE_DOORBELL_CONTROL_A 0x10a8
+
+#define NOCOALESCE_S 26
+#define NOCOALESCE_V(x) ((x) << NOCOALESCE_S)
+#define NOCOALESCE_F NOCOALESCE_V(1U)
+
+#define ENABLE_DROP_S 13
+#define ENABLE_DROP_V(x) ((x) << ENABLE_DROP_S)
+#define ENABLE_DROP_F ENABLE_DROP_V(1U)
+
+#define SGE_TIMER_VALUE_0_AND_1_A 0x10b8
+
+#define TIMERVALUE0_S 16
+#define TIMERVALUE0_M 0xffffU
+#define TIMERVALUE0_V(x) ((x) << TIMERVALUE0_S)
+#define TIMERVALUE0_G(x) (((x) >> TIMERVALUE0_S) & TIMERVALUE0_M)
+
+#define TIMERVALUE1_S 0
+#define TIMERVALUE1_M 0xffffU
+#define TIMERVALUE1_V(x) ((x) << TIMERVALUE1_S)
+#define TIMERVALUE1_G(x) (((x) >> TIMERVALUE1_S) & TIMERVALUE1_M)
+
+#define SGE_TIMER_VALUE_2_AND_3_A 0x10bc
+
+#define TIMERVALUE2_S 16
+#define TIMERVALUE2_M 0xffffU
+#define TIMERVALUE2_V(x) ((x) << TIMERVALUE2_S)
+#define TIMERVALUE2_G(x) (((x) >> TIMERVALUE2_S) & TIMERVALUE2_M)
+
+#define TIMERVALUE3_S 0
+#define TIMERVALUE3_M 0xffffU
+#define TIMERVALUE3_V(x) ((x) << TIMERVALUE3_S)
+#define TIMERVALUE3_G(x) (((x) >> TIMERVALUE3_S) & TIMERVALUE3_M)
+
+#define SGE_TIMER_VALUE_4_AND_5_A 0x10c0
+
+#define TIMERVALUE4_S 16
+#define TIMERVALUE4_M 0xffffU
+#define TIMERVALUE4_V(x) ((x) << TIMERVALUE4_S)
+#define TIMERVALUE4_G(x) (((x) >> TIMERVALUE4_S) & TIMERVALUE4_M)
-#define HOSTPAGESIZEPF5_MASK 0x0000000fU
-#define HOSTPAGESIZEPF5_SHIFT 20
-#define HOSTPAGESIZEPF5(x) ((x) << HOSTPAGESIZEPF5_SHIFT)
+#define TIMERVALUE5_S 0
+#define TIMERVALUE5_M 0xffffU
+#define TIMERVALUE5_V(x) ((x) << TIMERVALUE5_S)
+#define TIMERVALUE5_G(x) (((x) >> TIMERVALUE5_S) & TIMERVALUE5_M)
-#define HOSTPAGESIZEPF4_MASK 0x0000000fU
-#define HOSTPAGESIZEPF4_SHIFT 16
-#define HOSTPAGESIZEPF4(x) ((x) << HOSTPAGESIZEPF4_SHIFT)
+#define SGE_DEBUG_INDEX_A 0x10cc
+#define SGE_DEBUG_DATA_HIGH_A 0x10d0
+#define SGE_DEBUG_DATA_LOW_A 0x10d4
-#define HOSTPAGESIZEPF3_MASK 0x0000000fU
-#define HOSTPAGESIZEPF3_SHIFT 12
-#define HOSTPAGESIZEPF3(x) ((x) << HOSTPAGESIZEPF3_SHIFT)
+#define SGE_DEBUG_DATA_LOW_INDEX_2_A 0x12c8
+#define SGE_DEBUG_DATA_LOW_INDEX_3_A 0x12cc
+#define SGE_DEBUG_DATA_HIGH_INDEX_10_A 0x12a8
-#define HOSTPAGESIZEPF2_MASK 0x0000000fU
-#define HOSTPAGESIZEPF2_SHIFT 8
-#define HOSTPAGESIZEPF2(x) ((x) << HOSTPAGESIZEPF2_SHIFT)
+#define SGE_INGRESS_QUEUES_PER_PAGE_PF_A 0x10f4
+#define SGE_INGRESS_QUEUES_PER_PAGE_VF_A 0x10f8
-#define HOSTPAGESIZEPF1_M 0x0000000fU
-#define HOSTPAGESIZEPF1_S 4
-#define HOSTPAGESIZEPF1(x) ((x) << HOSTPAGESIZEPF1_S)
+#define HP_INT_THRESH_S 28
+#define HP_INT_THRESH_M 0xfU
+#define HP_INT_THRESH_V(x) ((x) << HP_INT_THRESH_S)
-#define HOSTPAGESIZEPF0_M 0x0000000fU
-#define HOSTPAGESIZEPF0_S 0
-#define HOSTPAGESIZEPF0(x) ((x) << HOSTPAGESIZEPF0_S)
+#define HP_COUNT_S 16
+#define HP_COUNT_M 0x7ffU
+#define HP_COUNT_G(x) (((x) >> HP_COUNT_S) & HP_COUNT_M)
-#define SGE_EGRESS_QUEUES_PER_PAGE_PF 0x1010
-#define SGE_EGRESS_QUEUES_PER_PAGE_VF_A 0x1014
+#define LP_INT_THRESH_S 12
+#define LP_INT_THRESH_M 0xfU
+#define LP_INT_THRESH_V(x) ((x) << LP_INT_THRESH_S)
-#define QUEUESPERPAGEPF1_S 4
+#define LP_COUNT_S 0
+#define LP_COUNT_M 0x7ffU
+#define LP_COUNT_G(x) (((x) >> LP_COUNT_S) & LP_COUNT_M)
-#define QUEUESPERPAGEPF0_S 0
-#define QUEUESPERPAGEPF0_MASK 0x0000000fU
-#define QUEUESPERPAGEPF0_GET(x) ((x) & QUEUESPERPAGEPF0_MASK)
+#define LP_INT_THRESH_T5_S 18
+#define LP_INT_THRESH_T5_M 0xfffU
+#define LP_INT_THRESH_T5_V(x) ((x) << LP_INT_THRESH_T5_S)
-#define QUEUESPERPAGEPF0 0
-#define QUEUESPERPAGEPF1 4
+#define LP_COUNT_T5_S 0
+#define LP_COUNT_T5_M 0x3ffffU
+#define LP_COUNT_T5_G(x) (((x) >> LP_COUNT_T5_S) & LP_COUNT_T5_M)
-/* T5 and later support a new BAR2-based doorbell mechanism for Egress Queues.
- * The User Doorbells are each 128 bytes in length with a Simple Doorbell at
- * offsets 8x and a Write Combining single 64-byte Egress Queue Unit
- * (X_IDXSIZE_UNIT) Gather Buffer interface at offset 64. For Ingress Queues,
- * we have a Going To Sleep register at offsets 8x+4.
- *
- * As noted above, we have many instances of the Simple Doorbell and Going To
- * Sleep registers at offsets 8x and 8x+4, respectively. We want to use a
- * non-64-byte aligned offset for the Simple Doorbell in order to attempt to
- * avoid buffering of the writes to the Simple Doorbell and we want to use a
- * non-contiguous offset for the Going To Sleep writes in order to avoid
- * possible combining between them.
- */
-#define SGE_UDB_SIZE 128
-#define SGE_UDB_KDOORBELL 8
-#define SGE_UDB_GTS 20
-#define SGE_UDB_WCDOORBELL 64
-
-#define SGE_INT_CAUSE1 0x1024
-#define SGE_INT_CAUSE2 0x1030
-#define SGE_INT_CAUSE3 0x103c
-#define ERR_FLM_DBP 0x80000000U
-#define ERR_FLM_IDMA1 0x40000000U
-#define ERR_FLM_IDMA0 0x20000000U
-#define ERR_FLM_HINT 0x10000000U
-#define ERR_PCIE_ERROR3 0x08000000U
-#define ERR_PCIE_ERROR2 0x04000000U
-#define ERR_PCIE_ERROR1 0x02000000U
-#define ERR_PCIE_ERROR0 0x01000000U
-#define ERR_TIMER_ABOVE_MAX_QID 0x00800000U
-#define ERR_CPL_EXCEED_IQE_SIZE 0x00400000U
-#define ERR_INVALID_CIDX_INC 0x00200000U
-#define ERR_ITP_TIME_PAUSED 0x00100000U
-#define ERR_CPL_OPCODE_0 0x00080000U
-#define ERR_DROPPED_DB 0x00040000U
-#define ERR_DATA_CPL_ON_HIGH_QID1 0x00020000U
-#define ERR_DATA_CPL_ON_HIGH_QID0 0x00010000U
-#define ERR_BAD_DB_PIDX3 0x00008000U
-#define ERR_BAD_DB_PIDX2 0x00004000U
-#define ERR_BAD_DB_PIDX1 0x00002000U
-#define ERR_BAD_DB_PIDX0 0x00001000U
-#define ERR_ING_PCIE_CHAN 0x00000800U
-#define ERR_ING_CTXT_PRIO 0x00000400U
-#define ERR_EGR_CTXT_PRIO 0x00000200U
-#define DBFIFO_HP_INT 0x00000100U
-#define DBFIFO_LP_INT 0x00000080U
-#define REG_ADDRESS_ERR 0x00000040U
-#define INGRESS_SIZE_ERR 0x00000020U
-#define EGRESS_SIZE_ERR 0x00000010U
-#define ERR_INV_CTXT3 0x00000008U
-#define ERR_INV_CTXT2 0x00000004U
-#define ERR_INV_CTXT1 0x00000002U
-#define ERR_INV_CTXT0 0x00000001U
-
-#define SGE_INT_ENABLE3 0x1040
-#define SGE_FL_BUFFER_SIZE0 0x1044
-#define SGE_FL_BUFFER_SIZE1 0x1048
-#define SGE_FL_BUFFER_SIZE2 0x104c
-#define SGE_FL_BUFFER_SIZE3 0x1050
-#define SGE_FL_BUFFER_SIZE4 0x1054
-#define SGE_FL_BUFFER_SIZE5 0x1058
-#define SGE_FL_BUFFER_SIZE6 0x105c
-#define SGE_FL_BUFFER_SIZE7 0x1060
-#define SGE_FL_BUFFER_SIZE8 0x1064
-
-#define SGE_INGRESS_RX_THRESHOLD 0x10a0
-#define THRESHOLD_0_MASK 0x3f000000U
-#define THRESHOLD_0_SHIFT 24
-#define THRESHOLD_0(x) ((x) << THRESHOLD_0_SHIFT)
-#define THRESHOLD_0_GET(x) (((x) & THRESHOLD_0_MASK) >> THRESHOLD_0_SHIFT)
-#define THRESHOLD_1_MASK 0x003f0000U
-#define THRESHOLD_1_SHIFT 16
-#define THRESHOLD_1(x) ((x) << THRESHOLD_1_SHIFT)
-#define THRESHOLD_1_GET(x) (((x) & THRESHOLD_1_MASK) >> THRESHOLD_1_SHIFT)
-#define THRESHOLD_2_MASK 0x00003f00U
-#define THRESHOLD_2_SHIFT 8
-#define THRESHOLD_2(x) ((x) << THRESHOLD_2_SHIFT)
-#define THRESHOLD_2_GET(x) (((x) & THRESHOLD_2_MASK) >> THRESHOLD_2_SHIFT)
-#define THRESHOLD_3_MASK 0x0000003fU
-#define THRESHOLD_3_SHIFT 0
-#define THRESHOLD_3(x) ((x) << THRESHOLD_3_SHIFT)
-#define THRESHOLD_3_GET(x) (((x) & THRESHOLD_3_MASK) >> THRESHOLD_3_SHIFT)
-
-#define SGE_CONM_CTRL 0x1094
-#define EGRTHRESHOLD_MASK 0x00003f00U
-#define EGRTHRESHOLDshift 8
-#define EGRTHRESHOLD(x) ((x) << EGRTHRESHOLDshift)
-#define EGRTHRESHOLD_GET(x) (((x) & EGRTHRESHOLD_MASK) >> EGRTHRESHOLDshift)
-
-#define EGRTHRESHOLDPACKING_MASK 0x3fU
-#define EGRTHRESHOLDPACKING_SHIFT 14
-#define EGRTHRESHOLDPACKING(x) ((x) << EGRTHRESHOLDPACKING_SHIFT)
-#define EGRTHRESHOLDPACKING_GET(x) (((x) >> EGRTHRESHOLDPACKING_SHIFT) & \
- EGRTHRESHOLDPACKING_MASK)
-
-#define SGE_DBFIFO_STATUS 0x10a4
-#define HP_INT_THRESH_SHIFT 28
-#define HP_INT_THRESH_MASK 0xfU
-#define HP_INT_THRESH(x) ((x) << HP_INT_THRESH_SHIFT)
-#define LP_INT_THRESH_SHIFT 12
-#define LP_INT_THRESH_MASK 0xfU
-#define LP_INT_THRESH(x) ((x) << LP_INT_THRESH_SHIFT)
-
-#define SGE_DOORBELL_CONTROL 0x10a8
-#define ENABLE_DROP (1 << 13)
-
-#define S_NOCOALESCE 26
-#define V_NOCOALESCE(x) ((x) << S_NOCOALESCE)
-#define F_NOCOALESCE V_NOCOALESCE(1U)
-
-#define SGE_TIMESTAMP_LO 0x1098
-#define SGE_TIMESTAMP_HI 0x109c
-#define S_TSVAL 0
-#define M_TSVAL 0xfffffffU
-#define GET_TSVAL(x) (((x) >> S_TSVAL) & M_TSVAL)
-
-#define SGE_TIMER_VALUE_0_AND_1 0x10b8
-#define TIMERVALUE0_MASK 0xffff0000U
-#define TIMERVALUE0_SHIFT 16
-#define TIMERVALUE0(x) ((x) << TIMERVALUE0_SHIFT)
-#define TIMERVALUE0_GET(x) (((x) & TIMERVALUE0_MASK) >> TIMERVALUE0_SHIFT)
-#define TIMERVALUE1_MASK 0x0000ffffU
-#define TIMERVALUE1_SHIFT 0
-#define TIMERVALUE1(x) ((x) << TIMERVALUE1_SHIFT)
-#define TIMERVALUE1_GET(x) (((x) & TIMERVALUE1_MASK) >> TIMERVALUE1_SHIFT)
-
-#define SGE_TIMER_VALUE_2_AND_3 0x10bc
-#define TIMERVALUE2_MASK 0xffff0000U
-#define TIMERVALUE2_SHIFT 16
-#define TIMERVALUE2(x) ((x) << TIMERVALUE2_SHIFT)
-#define TIMERVALUE2_GET(x) (((x) & TIMERVALUE2_MASK) >> TIMERVALUE2_SHIFT)
-#define TIMERVALUE3_MASK 0x0000ffffU
-#define TIMERVALUE3_SHIFT 0
-#define TIMERVALUE3(x) ((x) << TIMERVALUE3_SHIFT)
-#define TIMERVALUE3_GET(x) (((x) & TIMERVALUE3_MASK) >> TIMERVALUE3_SHIFT)
-
-#define SGE_TIMER_VALUE_4_AND_5 0x10c0
-#define TIMERVALUE4_MASK 0xffff0000U
-#define TIMERVALUE4_SHIFT 16
-#define TIMERVALUE4(x) ((x) << TIMERVALUE4_SHIFT)
-#define TIMERVALUE4_GET(x) (((x) & TIMERVALUE4_MASK) >> TIMERVALUE4_SHIFT)
-#define TIMERVALUE5_MASK 0x0000ffffU
-#define TIMERVALUE5_SHIFT 0
-#define TIMERVALUE5(x) ((x) << TIMERVALUE5_SHIFT)
-#define TIMERVALUE5_GET(x) (((x) & TIMERVALUE5_MASK) >> TIMERVALUE5_SHIFT)
-
-#define SGE_DEBUG_INDEX 0x10cc
-#define SGE_DEBUG_DATA_HIGH 0x10d0
-#define SGE_DEBUG_DATA_LOW 0x10d4
-#define SGE_DEBUG_DATA_LOW_INDEX_2 0x12c8
-#define SGE_DEBUG_DATA_LOW_INDEX_3 0x12cc
-#define SGE_DEBUG_DATA_HIGH_INDEX_10 0x12a8
-#define SGE_INGRESS_QUEUES_PER_PAGE_PF 0x10f4
-#define SGE_INGRESS_QUEUES_PER_PAGE_VF_A 0x10f8
+#define SGE_DOORBELL_CONTROL_A 0x10a8
+
+#define SGE_STAT_TOTAL_A 0x10e4
+#define SGE_STAT_MATCH_A 0x10e8
+#define SGE_STAT_CFG_A 0x10ec
+
+#define STATSOURCE_T5_S 9
+#define STATSOURCE_T5_V(x) ((x) << STATSOURCE_T5_S)
+
+#define SGE_DBFIFO_STATUS2_A 0x1118
+
+#define HP_INT_THRESH_T5_S 10
+#define HP_INT_THRESH_T5_M 0xfU
+#define HP_INT_THRESH_T5_V(x) ((x) << HP_INT_THRESH_T5_S)
+
+#define HP_COUNT_T5_S 0
+#define HP_COUNT_T5_M 0x3ffU
+#define HP_COUNT_T5_G(x) (((x) >> HP_COUNT_T5_S) & HP_COUNT_T5_M)
+
+#define ENABLE_DROP_S 13
+#define ENABLE_DROP_V(x) ((x) << ENABLE_DROP_S)
+#define ENABLE_DROP_F ENABLE_DROP_V(1U)
+
+#define DROPPED_DB_S 0
+#define DROPPED_DB_V(x) ((x) << DROPPED_DB_S)
+#define DROPPED_DB_F DROPPED_DB_V(1U)
+
+#define SGE_CTXT_CMD_A 0x11fc
+#define SGE_DBQ_CTXT_BADDR_A 0x1084
+
+/* registers for module PCIE */
+#define PCIE_PF_CFG_A 0x40
+
+#define AIVEC_S 4
+#define AIVEC_M 0x3ffU
+#define AIVEC_V(x) ((x) << AIVEC_S)
+
+#define PCIE_PF_CLI_A 0x44
+#define PCIE_INT_CAUSE_A 0x3004
+
+#define UNXSPLCPLERR_S 29
+#define UNXSPLCPLERR_V(x) ((x) << UNXSPLCPLERR_S)
+#define UNXSPLCPLERR_F UNXSPLCPLERR_V(1U)
+
+#define PCIEPINT_S 28
+#define PCIEPINT_V(x) ((x) << PCIEPINT_S)
+#define PCIEPINT_F PCIEPINT_V(1U)
+
+#define PCIESINT_S 27
+#define PCIESINT_V(x) ((x) << PCIESINT_S)
+#define PCIESINT_F PCIESINT_V(1U)
+
+#define RPLPERR_S 26
+#define RPLPERR_V(x) ((x) << RPLPERR_S)
+#define RPLPERR_F RPLPERR_V(1U)
+
+#define RXWRPERR_S 25
+#define RXWRPERR_V(x) ((x) << RXWRPERR_S)
+#define RXWRPERR_F RXWRPERR_V(1U)
+
+#define RXCPLPERR_S 24
+#define RXCPLPERR_V(x) ((x) << RXCPLPERR_S)
+#define RXCPLPERR_F RXCPLPERR_V(1U)
+
+#define PIOTAGPERR_S 23
+#define PIOTAGPERR_V(x) ((x) << PIOTAGPERR_S)
+#define PIOTAGPERR_F PIOTAGPERR_V(1U)
+
+#define MATAGPERR_S 22
+#define MATAGPERR_V(x) ((x) << MATAGPERR_S)
+#define MATAGPERR_F MATAGPERR_V(1U)
+
+#define INTXCLRPERR_S 21
+#define INTXCLRPERR_V(x) ((x) << INTXCLRPERR_S)
+#define INTXCLRPERR_F INTXCLRPERR_V(1U)
+
+#define FIDPERR_S 20
+#define FIDPERR_V(x) ((x) << FIDPERR_S)
+#define FIDPERR_F FIDPERR_V(1U)
+
+#define CFGSNPPERR_S 19
+#define CFGSNPPERR_V(x) ((x) << CFGSNPPERR_S)
+#define CFGSNPPERR_F CFGSNPPERR_V(1U)
+
+#define HRSPPERR_S 18
+#define HRSPPERR_V(x) ((x) << HRSPPERR_S)
+#define HRSPPERR_F HRSPPERR_V(1U)
+
+#define HREQPERR_S 17
+#define HREQPERR_V(x) ((x) << HREQPERR_S)
+#define HREQPERR_F HREQPERR_V(1U)
+
+#define HCNTPERR_S 16
+#define HCNTPERR_V(x) ((x) << HCNTPERR_S)
+#define HCNTPERR_F HCNTPERR_V(1U)
+
+#define DRSPPERR_S 15
+#define DRSPPERR_V(x) ((x) << DRSPPERR_S)
+#define DRSPPERR_F DRSPPERR_V(1U)
+
+#define DREQPERR_S 14
+#define DREQPERR_V(x) ((x) << DREQPERR_S)
+#define DREQPERR_F DREQPERR_V(1U)
+
+#define DCNTPERR_S 13
+#define DCNTPERR_V(x) ((x) << DCNTPERR_S)
+#define DCNTPERR_F DCNTPERR_V(1U)
+
+#define CRSPPERR_S 12
+#define CRSPPERR_V(x) ((x) << CRSPPERR_S)
+#define CRSPPERR_F CRSPPERR_V(1U)
+
+#define CREQPERR_S 11
+#define CREQPERR_V(x) ((x) << CREQPERR_S)
+#define CREQPERR_F CREQPERR_V(1U)
+
+#define CCNTPERR_S 10
+#define CCNTPERR_V(x) ((x) << CCNTPERR_S)
+#define CCNTPERR_F CCNTPERR_V(1U)
+
+#define TARTAGPERR_S 9
+#define TARTAGPERR_V(x) ((x) << TARTAGPERR_S)
+#define TARTAGPERR_F TARTAGPERR_V(1U)
+
+#define PIOREQPERR_S 8
+#define PIOREQPERR_V(x) ((x) << PIOREQPERR_S)
+#define PIOREQPERR_F PIOREQPERR_V(1U)
+
+#define PIOCPLPERR_S 7
+#define PIOCPLPERR_V(x) ((x) << PIOCPLPERR_S)
+#define PIOCPLPERR_F PIOCPLPERR_V(1U)
+
+#define MSIXDIPERR_S 6
+#define MSIXDIPERR_V(x) ((x) << MSIXDIPERR_S)
+#define MSIXDIPERR_F MSIXDIPERR_V(1U)
+
+#define MSIXDATAPERR_S 5
+#define MSIXDATAPERR_V(x) ((x) << MSIXDATAPERR_S)
+#define MSIXDATAPERR_F MSIXDATAPERR_V(1U)
+
+#define MSIXADDRHPERR_S 4
+#define MSIXADDRHPERR_V(x) ((x) << MSIXADDRHPERR_S)
+#define MSIXADDRHPERR_F MSIXADDRHPERR_V(1U)
+
+#define MSIXADDRLPERR_S 3
+#define MSIXADDRLPERR_V(x) ((x) << MSIXADDRLPERR_S)
+#define MSIXADDRLPERR_F MSIXADDRLPERR_V(1U)
+
+#define MSIDATAPERR_S 2
+#define MSIDATAPERR_V(x) ((x) << MSIDATAPERR_S)
+#define MSIDATAPERR_F MSIDATAPERR_V(1U)
+
+#define MSIADDRHPERR_S 1
+#define MSIADDRHPERR_V(x) ((x) << MSIADDRHPERR_S)
+#define MSIADDRHPERR_F MSIADDRHPERR_V(1U)
+
+#define MSIADDRLPERR_S 0
+#define MSIADDRLPERR_V(x) ((x) << MSIADDRLPERR_S)
+#define MSIADDRLPERR_F MSIADDRLPERR_V(1U)
+
+#define READRSPERR_S 29
+#define READRSPERR_V(x) ((x) << READRSPERR_S)
+#define READRSPERR_F READRSPERR_V(1U)
+
+#define TRGT1GRPPERR_S 28
+#define TRGT1GRPPERR_V(x) ((x) << TRGT1GRPPERR_S)
+#define TRGT1GRPPERR_F TRGT1GRPPERR_V(1U)
+
+#define IPSOTPERR_S 27
+#define IPSOTPERR_V(x) ((x) << IPSOTPERR_S)
+#define IPSOTPERR_F IPSOTPERR_V(1U)
+
+#define IPRETRYPERR_S 26
+#define IPRETRYPERR_V(x) ((x) << IPRETRYPERR_S)
+#define IPRETRYPERR_F IPRETRYPERR_V(1U)
+
+#define IPRXDATAGRPPERR_S 25
+#define IPRXDATAGRPPERR_V(x) ((x) << IPRXDATAGRPPERR_S)
+#define IPRXDATAGRPPERR_F IPRXDATAGRPPERR_V(1U)
+
+#define IPRXHDRGRPPERR_S 24
+#define IPRXHDRGRPPERR_V(x) ((x) << IPRXHDRGRPPERR_S)
+#define IPRXHDRGRPPERR_F IPRXHDRGRPPERR_V(1U)
+
+#define MAGRPPERR_S 22
+#define MAGRPPERR_V(x) ((x) << MAGRPPERR_S)
+#define MAGRPPERR_F MAGRPPERR_V(1U)
+
+#define VFIDPERR_S 21
+#define VFIDPERR_V(x) ((x) << VFIDPERR_S)
+#define VFIDPERR_F VFIDPERR_V(1U)
+
+#define HREQWRPERR_S 16
+#define HREQWRPERR_V(x) ((x) << HREQWRPERR_S)
+#define HREQWRPERR_F HREQWRPERR_V(1U)
+
+#define DREQWRPERR_S 13
+#define DREQWRPERR_V(x) ((x) << DREQWRPERR_S)
+#define DREQWRPERR_F DREQWRPERR_V(1U)
+
+#define CREQRDPERR_S 11
+#define CREQRDPERR_V(x) ((x) << CREQRDPERR_S)
+#define CREQRDPERR_F CREQRDPERR_V(1U)
+
+#define MSTTAGQPERR_S 10
+#define MSTTAGQPERR_V(x) ((x) << MSTTAGQPERR_S)
+#define MSTTAGQPERR_F MSTTAGQPERR_V(1U)
+
+#define PIOREQGRPPERR_S 8
+#define PIOREQGRPPERR_V(x) ((x) << PIOREQGRPPERR_S)
+#define PIOREQGRPPERR_F PIOREQGRPPERR_V(1U)
+
+#define PIOCPLGRPPERR_S 7
+#define PIOCPLGRPPERR_V(x) ((x) << PIOCPLGRPPERR_S)
+#define PIOCPLGRPPERR_F PIOCPLGRPPERR_V(1U)
+
+#define MSIXSTIPERR_S 2
+#define MSIXSTIPERR_V(x) ((x) << MSIXSTIPERR_S)
+#define MSIXSTIPERR_F MSIXSTIPERR_V(1U)
+
+#define MSTTIMEOUTPERR_S 1
+#define MSTTIMEOUTPERR_V(x) ((x) << MSTTIMEOUTPERR_S)
+#define MSTTIMEOUTPERR_F MSTTIMEOUTPERR_V(1U)
+
+#define MSTGRPPERR_S 0
+#define MSTGRPPERR_V(x) ((x) << MSTGRPPERR_S)
+#define MSTGRPPERR_F MSTGRPPERR_V(1U)
+
+#define PCIE_NONFAT_ERR_A 0x3010
+#define PCIE_CFG_SPACE_REQ_A 0x3060
+#define PCIE_CFG_SPACE_DATA_A 0x3064
+#define PCIE_MEM_ACCESS_BASE_WIN_A 0x3068
+
+#define PCIEOFST_S 10
+#define PCIEOFST_M 0x3fffffU
+#define PCIEOFST_G(x) (((x) >> PCIEOFST_S) & PCIEOFST_M)
+
+#define BIR_S 8
+#define BIR_M 0x3U
+#define BIR_V(x) ((x) << BIR_S)
+#define BIR_G(x) (((x) >> BIR_S) & BIR_M)
+
+#define WINDOW_S 0
+#define WINDOW_M 0xffU
+#define WINDOW_V(x) ((x) << WINDOW_S)
+#define WINDOW_G(x) (((x) >> WINDOW_S) & WINDOW_M)
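On the encode side, the _V values for the separate fields of one register simply OR together. A sketch, with invented field values, composing a PCIE_MEM_ACCESS_BASE_WIN word from the BIR and WINDOW fields just defined:

#include <stdio.h>

/* field definitions copied from the patch */
#define BIR_S       8
#define BIR_V(x)    ((x) << BIR_S)
#define WINDOW_S    0
#define WINDOW_V(x) ((x) << WINDOW_S)

int main(void)
{
	/* Field values invented for the example; a real driver derives
	 * them from the BAR layout and the memory-window index. */
	unsigned int base_win = BIR_V(0U) | WINDOW_V(3U);

	printf("PCIE_MEM_ACCESS_BASE_WIN = %#x\n", base_win);	/* 0x3 */
	return 0;
}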
+
+#define PCIE_MEM_ACCESS_OFFSET_A 0x306c
+
+#define ENABLE_S 30
+#define ENABLE_V(x) ((x) << ENABLE_S)
+#define ENABLE_F ENABLE_V(1U)
+
+#define LOCALCFG_S 28
+#define LOCALCFG_V(x) ((x) << LOCALCFG_S)
+#define LOCALCFG_F LOCALCFG_V(1U)
+
+#define FUNCTION_S 12
+#define FUNCTION_V(x) ((x) << FUNCTION_S)
+
+#define REGISTER_S 0
+#define REGISTER_V(x) ((x) << REGISTER_S)
+
+#define PFNUM_S 0
+#define PFNUM_V(x) ((x) << PFNUM_S)
+
+#define PCIE_FW_A 0x30b8
+
+#define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A 0x5908
+
+#define RNPP_S 31
+#define RNPP_V(x) ((x) << RNPP_S)
+#define RNPP_F RNPP_V(1U)
+
+#define RPCP_S 29
+#define RPCP_V(x) ((x) << RPCP_S)
+#define RPCP_F RPCP_V(1U)
+
+#define RCIP_S 27
+#define RCIP_V(x) ((x) << RCIP_S)
+#define RCIP_F RCIP_V(1U)
+
+#define RCCP_S 26
+#define RCCP_V(x) ((x) << RCCP_S)
+#define RCCP_F RCCP_V(1U)
+
+#define RFTP_S 23
+#define RFTP_V(x) ((x) << RFTP_S)
+#define RFTP_F RFTP_V(1U)
+
+#define PTRP_S 20
+#define PTRP_V(x) ((x) << PTRP_S)
+#define PTRP_F PTRP_V(1U)
+
+#define PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS_A 0x59a4
+
+#define TPCP_S 30
+#define TPCP_V(x) ((x) << TPCP_S)
+#define TPCP_F TPCP_V(1U)
+
+#define TNPP_S 29
+#define TNPP_V(x) ((x) << TNPP_S)
+#define TNPP_F TNPP_V(1U)
+
+#define TFTP_S 28
+#define TFTP_V(x) ((x) << TFTP_S)
+#define TFTP_F TFTP_V(1U)
+
+#define TCAP_S 27
+#define TCAP_V(x) ((x) << TCAP_S)
+#define TCAP_F TCAP_V(1U)
+
+#define TCIP_S 26
+#define TCIP_V(x) ((x) << TCIP_S)
+#define TCIP_F TCIP_V(1U)
+
+#define RCAP_S 25
+#define RCAP_V(x) ((x) << RCAP_S)
+#define RCAP_F RCAP_V(1U)
+
+#define PLUP_S 23
+#define PLUP_V(x) ((x) << PLUP_S)
+#define PLUP_F PLUP_V(1U)
+
+#define PLDN_S 22
+#define PLDN_V(x) ((x) << PLDN_S)
+#define PLDN_F PLDN_V(1U)
+
+#define OTDD_S 21
+#define OTDD_V(x) ((x) << OTDD_S)
+#define OTDD_F OTDD_V(1U)
-#define S_HP_INT_THRESH 28
-#define M_HP_INT_THRESH 0xfU
-#define V_HP_INT_THRESH(x) ((x) << S_HP_INT_THRESH)
-#define S_LP_INT_THRESH_T5 18
-#define V_LP_INT_THRESH_T5(x) ((x) << S_LP_INT_THRESH_T5)
-#define M_LP_COUNT_T5 0x3ffffU
-#define G_LP_COUNT_T5(x) (((x) >> S_LP_COUNT) & M_LP_COUNT_T5)
-#define M_HP_COUNT 0x7ffU
-#define S_HP_COUNT 16
-#define G_HP_COUNT(x) (((x) >> S_HP_COUNT) & M_HP_COUNT)
-#define S_LP_INT_THRESH 12
-#define M_LP_INT_THRESH 0xfU
-#define M_LP_INT_THRESH_T5 0xfffU
-#define V_LP_INT_THRESH(x) ((x) << S_LP_INT_THRESH)
-#define M_LP_COUNT 0x7ffU
-#define S_LP_COUNT 0
-#define G_LP_COUNT(x) (((x) >> S_LP_COUNT) & M_LP_COUNT)
-#define A_SGE_DBFIFO_STATUS 0x10a4
-
-#define SGE_STAT_TOTAL 0x10e4
-#define SGE_STAT_MATCH 0x10e8
-
-#define SGE_STAT_CFG 0x10ec
-#define S_STATSOURCE_T5 9
-#define STATSOURCE_T5(x) ((x) << S_STATSOURCE_T5)
-
-#define SGE_DBFIFO_STATUS2 0x1118
-#define M_HP_COUNT_T5 0x3ffU
-#define G_HP_COUNT_T5(x) ((x) & M_HP_COUNT_T5)
-#define S_HP_INT_THRESH_T5 10
-#define M_HP_INT_THRESH_T5 0xfU
-#define V_HP_INT_THRESH_T5(x) ((x) << S_HP_INT_THRESH_T5)
-
-#define S_ENABLE_DROP 13
-#define V_ENABLE_DROP(x) ((x) << S_ENABLE_DROP)
-#define F_ENABLE_DROP V_ENABLE_DROP(1U)
-#define S_DROPPED_DB 0
-#define V_DROPPED_DB(x) ((x) << S_DROPPED_DB)
-#define F_DROPPED_DB V_DROPPED_DB(1U)
-#define A_SGE_DOORBELL_CONTROL 0x10a8
-
-#define A_SGE_CTXT_CMD 0x11fc
-#define A_SGE_DBQ_CTXT_BADDR 0x1084
-
-#define PCIE_PF_CFG 0x40
-#define AIVEC(x) ((x) << 4)
-#define AIVEC_MASK 0x3ffU
-
-#define PCIE_PF_CLI 0x44
-#define PCIE_INT_CAUSE 0x3004
-#define UNXSPLCPLERR 0x20000000U
-#define PCIEPINT 0x10000000U
-#define PCIESINT 0x08000000U
-#define RPLPERR 0x04000000U
-#define RXWRPERR 0x02000000U
-#define RXCPLPERR 0x01000000U
-#define PIOTAGPERR 0x00800000U
-#define MATAGPERR 0x00400000U
-#define INTXCLRPERR 0x00200000U
-#define FIDPERR 0x00100000U
-#define CFGSNPPERR 0x00080000U
-#define HRSPPERR 0x00040000U
-#define HREQPERR 0x00020000U
-#define HCNTPERR 0x00010000U
-#define DRSPPERR 0x00008000U
-#define DREQPERR 0x00004000U
-#define DCNTPERR 0x00002000U
-#define CRSPPERR 0x00001000U
-#define CREQPERR 0x00000800U
-#define CCNTPERR 0x00000400U
-#define TARTAGPERR 0x00000200U
-#define PIOREQPERR 0x00000100U
-#define PIOCPLPERR 0x00000080U
-#define MSIXDIPERR 0x00000040U
-#define MSIXDATAPERR 0x00000020U
-#define MSIXADDRHPERR 0x00000010U
-#define MSIXADDRLPERR 0x00000008U
-#define MSIDATAPERR 0x00000004U
-#define MSIADDRHPERR 0x00000002U
-#define MSIADDRLPERR 0x00000001U
-
-#define READRSPERR 0x20000000U
-#define TRGT1GRPPERR 0x10000000U
-#define IPSOTPERR 0x08000000U
-#define IPRXDATAGRPPERR 0x02000000U
-#define IPRXHDRGRPPERR 0x01000000U
-#define MAGRPPERR 0x00400000U
-#define VFIDPERR 0x00200000U
-#define HREQWRPERR 0x00010000U
-#define DREQWRPERR 0x00002000U
-#define MSTTAGQPERR 0x00000400U
-#define PIOREQGRPPERR 0x00000100U
-#define PIOCPLGRPPERR 0x00000080U
-#define MSIXSTIPERR 0x00000004U
-#define MSTTIMEOUTPERR 0x00000002U
-#define MSTGRPPERR 0x00000001U
-
-#define PCIE_NONFAT_ERR 0x3010
-#define PCIE_CFG_SPACE_REQ 0x3060
-#define PCIE_CFG_SPACE_DATA 0x3064
-#define PCIE_MEM_ACCESS_BASE_WIN 0x3068
-#define S_PCIEOFST 10
-#define M_PCIEOFST 0x3fffffU
-#define GET_PCIEOFST(x) (((x) >> S_PCIEOFST) & M_PCIEOFST)
-#define PCIEOFST_MASK 0xfffffc00U
-#define BIR_MASK 0x00000300U
-#define BIR_SHIFT 8
-#define BIR(x) ((x) << BIR_SHIFT)
-#define WINDOW_MASK 0x000000ffU
-#define WINDOW_SHIFT 0
-#define WINDOW(x) ((x) << WINDOW_SHIFT)
-#define GET_WINDOW(x) (((x) >> WINDOW_SHIFT) & WINDOW_MASK)
-#define PCIE_MEM_ACCESS_OFFSET 0x306c
-#define ENABLE (1U << 30)
-#define FUNCTION(x) ((x) << 12)
-#define F_LOCALCFG (1U << 28)
-
-#define S_PFNUM 0
-#define V_PFNUM(x) ((x) << S_PFNUM)
-
-#define PCIE_FW 0x30b8
-#define PCIE_FW_ERR 0x80000000U
-#define PCIE_FW_INIT 0x40000000U
-#define PCIE_FW_HALT 0x20000000U
-#define PCIE_FW_MASTER_VLD 0x00008000U
-#define PCIE_FW_MASTER(x) ((x) << 12)
-#define PCIE_FW_MASTER_MASK 0x7
-#define PCIE_FW_MASTER_GET(x) (((x) >> 12) & PCIE_FW_MASTER_MASK)
-
-#define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS 0x5908
-#define RNPP 0x80000000U
-#define RPCP 0x20000000U
-#define RCIP 0x08000000U
-#define RCCP 0x04000000U
-#define RFTP 0x00800000U
-#define PTRP 0x00100000U
-
-#define PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS 0x59a4
-#define TPCP 0x40000000U
-#define TNPP 0x20000000U
-#define TFTP 0x10000000U
-#define TCAP 0x08000000U
-#define TCIP 0x04000000U
-#define RCAP 0x02000000U
-#define PLUP 0x00800000U
-#define PLDN 0x00400000U
-#define OTDD 0x00200000U
-#define GTRP 0x00100000U
-#define RDPE 0x00040000U
-#define TDCE 0x00020000U
-#define TDUE 0x00010000U
-
-#define MC_INT_CAUSE 0x7518
-#define MC_P_INT_CAUSE 0x41318
-#define ECC_UE_INT_CAUSE 0x00000004U
-#define ECC_CE_INT_CAUSE 0x00000002U
-#define PERR_INT_CAUSE 0x00000001U
-
-#define MC_ECC_STATUS 0x751c
-#define MC_P_ECC_STATUS 0x4131c
-#define ECC_CECNT_MASK 0xffff0000U
-#define ECC_CECNT_SHIFT 16
-#define ECC_CECNT(x) ((x) << ECC_CECNT_SHIFT)
-#define ECC_CECNT_GET(x) (((x) & ECC_CECNT_MASK) >> ECC_CECNT_SHIFT)
-#define ECC_UECNT_MASK 0x0000ffffU
-#define ECC_UECNT_SHIFT 0
-#define ECC_UECNT(x) ((x) << ECC_UECNT_SHIFT)
-#define ECC_UECNT_GET(x) (((x) & ECC_UECNT_MASK) >> ECC_UECNT_SHIFT)
-
-#define MC_BIST_CMD 0x7600
-#define START_BIST 0x80000000U
-#define BIST_CMD_GAP_MASK 0x0000ff00U
-#define BIST_CMD_GAP_SHIFT 8
-#define BIST_CMD_GAP(x) ((x) << BIST_CMD_GAP_SHIFT)
-#define BIST_OPCODE_MASK 0x00000003U
-#define BIST_OPCODE_SHIFT 0
-#define BIST_OPCODE(x) ((x) << BIST_OPCODE_SHIFT)
-
-#define MC_BIST_CMD_ADDR 0x7604
-#define MC_BIST_CMD_LEN 0x7608
-#define MC_BIST_DATA_PATTERN 0x760c
-#define BIST_DATA_TYPE_MASK 0x0000000fU
-#define BIST_DATA_TYPE_SHIFT 0
-#define BIST_DATA_TYPE(x) ((x) << BIST_DATA_TYPE_SHIFT)
-
-#define MC_BIST_STATUS_RDATA 0x7688
+#define GTRP_S 20
+#define GTRP_V(x) ((x) << GTRP_S)
+#define GTRP_F GTRP_V(1U)
+
+#define RDPE_S 18
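PLACEHOLDER-UNUSED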
+#define RDPE_V(x) ((x) << RDPE_S)
+#define RDPE_F RDPE_V(1U)
+
+#define TDCE_S 17
+#define TDCE_V(x) ((x) << TDCE_S)
+#define TDCE_F TDCE_V(1U)
+
+#define TDUE_S 16
+#define TDUE_V(x) ((x) << TDUE_S)
+#define TDUE_F TDUE_V(1U)
+
+/* registers for module MC */
+#define MC_INT_CAUSE_A 0x7518
+#define MC_P_INT_CAUSE_A 0x41318
+
+#define ECC_UE_INT_CAUSE_S 2
+#define ECC_UE_INT_CAUSE_V(x) ((x) << ECC_UE_INT_CAUSE_S)
+#define ECC_UE_INT_CAUSE_F ECC_UE_INT_CAUSE_V(1U)
+
+#define ECC_CE_INT_CAUSE_S 1
+#define ECC_CE_INT_CAUSE_V(x) ((x) << ECC_CE_INT_CAUSE_S)
+#define ECC_CE_INT_CAUSE_F ECC_CE_INT_CAUSE_V(1U)
+
+#define PERR_INT_CAUSE_S 0
+#define PERR_INT_CAUSE_V(x) ((x) << PERR_INT_CAUSE_S)
+#define PERR_INT_CAUSE_F PERR_INT_CAUSE_V(1U)
+
+#define MC_ECC_STATUS_A 0x751c
+#define MC_P_ECC_STATUS_A 0x4131c
+
+#define ECC_CECNT_S 16
+#define ECC_CECNT_M 0xffffU
+#define ECC_CECNT_V(x) ((x) << ECC_CECNT_S)
+#define ECC_CECNT_G(x) (((x) >> ECC_CECNT_S) & ECC_CECNT_M)
+
+#define ECC_UECNT_S 0
+#define ECC_UECNT_M 0xffffU
+#define ECC_UECNT_V(x) ((x) << ECC_UECNT_S)
+#define ECC_UECNT_G(x) (((x) >> ECC_UECNT_S) & ECC_UECNT_M)
+
+#define MC_BIST_CMD_A 0x7600
+
+#define START_BIST_S 31
+#define START_BIST_V(x) ((x) << START_BIST_S)
+#define START_BIST_F START_BIST_V(1U)
+
+#define BIST_CMD_GAP_S 8
+#define BIST_CMD_GAP_V(x) ((x) << BIST_CMD_GAP_S)
+
+#define BIST_OPCODE_S 0
+#define BIST_OPCODE_V(x) ((x) << BIST_OPCODE_S)
+
+#define MC_BIST_CMD_ADDR_A 0x7604
+#define MC_BIST_CMD_LEN_A 0x7608
+#define MC_BIST_DATA_PATTERN_A 0x760c
+
+#define MC_BIST_STATUS_RDATA_A 0x7688
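Single-bit _F flags combine with multi-bit _V values the same way. A hedged sketch of building an MC BIST command word from the pieces above; the gap and opcode values are illustrative, not taken from this patch:

#include <stdio.h>

/* field definitions copied from the patch */
#define START_BIST_S      31
#define START_BIST_V(x)   ((x) << START_BIST_S)
#define START_BIST_F      START_BIST_V(1U)
#define BIST_CMD_GAP_S    8
#define BIST_CMD_GAP_V(x) ((x) << BIST_CMD_GAP_S)
#define BIST_OPCODE_S     0
#define BIST_OPCODE_V(x)  ((x) << BIST_OPCODE_S)

int main(void)
{
	/* Gap and opcode values are illustrative only. */
	unsigned int cmd = START_BIST_F | BIST_CMD_GAP_V(1U) |
			   BIST_OPCODE_V(1U);

	printf("MC_BIST_CMD = %#x\n", cmd);	/* prints 0x80000101 */
	return 0;
}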
+
+/* registers for module MA */
#define MA_EDRAM0_BAR_A 0x77c0
#define EDRAM0_SIZE_S 0
#define EXT_MEM0_ENABLE_V(x) ((x) << EXT_MEM0_ENABLE_S)
#define EXT_MEM0_ENABLE_F EXT_MEM0_ENABLE_V(1U)
-#define MA_INT_CAUSE 0x77e0
-#define MEM_PERR_INT_CAUSE 0x00000002U
-#define MEM_WRAP_INT_CAUSE 0x00000001U
-
-#define MA_INT_WRAP_STATUS 0x77e4
-#define MEM_WRAP_ADDRESS_MASK 0xfffffff0U
-#define MEM_WRAP_ADDRESS_SHIFT 4
-#define MEM_WRAP_ADDRESS_GET(x) (((x) & MEM_WRAP_ADDRESS_MASK) >> MEM_WRAP_ADDRESS_SHIFT)
-#define MEM_WRAP_CLIENT_NUM_MASK 0x0000000fU
-#define MEM_WRAP_CLIENT_NUM_SHIFT 0
-#define MEM_WRAP_CLIENT_NUM_GET(x) (((x) & MEM_WRAP_CLIENT_NUM_MASK) >> MEM_WRAP_CLIENT_NUM_SHIFT)
-#define MA_PCIE_FW 0x30b8
-#define MA_PARITY_ERROR_STATUS 0x77f4
-#define MA_PARITY_ERROR_STATUS2 0x7804
-
-#define EDC_0_BASE_ADDR 0x7900
-
-#define EDC_BIST_CMD 0x7904
-#define EDC_BIST_CMD_ADDR 0x7908
-#define EDC_BIST_CMD_LEN 0x790c
-#define EDC_BIST_DATA_PATTERN 0x7910
-#define EDC_BIST_STATUS_RDATA 0x7928
-#define EDC_INT_CAUSE 0x7978
-#define ECC_UE_PAR 0x00000020U
-#define ECC_CE_PAR 0x00000010U
-#define PERR_PAR_CAUSE 0x00000008U
-
-#define EDC_ECC_STATUS 0x797c
-
-#define EDC_1_BASE_ADDR 0x7980
-
-#define CIM_BOOT_CFG 0x7b00
-#define BOOTADDR_MASK 0xffffff00U
-#define UPCRST 0x1U
-
-#define CIM_PF_MAILBOX_DATA 0x240
-#define CIM_PF_MAILBOX_CTRL 0x280
-#define MBMSGVALID 0x00000008U
-#define MBINTREQ 0x00000004U
-#define MBOWNER_MASK 0x00000003U
-#define MBOWNER_SHIFT 0
-#define MBOWNER(x) ((x) << MBOWNER_SHIFT)
-#define MBOWNER_GET(x) (((x) & MBOWNER_MASK) >> MBOWNER_SHIFT)
-
-#define CIM_PF_HOST_INT_ENABLE 0x288
-#define MBMSGRDYINTEN(x) ((x) << 19)
-
-#define CIM_PF_HOST_INT_CAUSE 0x28c
-#define MBMSGRDYINT 0x00080000U
-
-#define CIM_HOST_INT_CAUSE 0x7b2c
-#define TIEQOUTPARERRINT 0x00100000U
-#define TIEQINPARERRINT 0x00080000U
-#define MBHOSTPARERR 0x00040000U
-#define MBUPPARERR 0x00020000U
-#define IBQPARERR 0x0001f800U
-#define IBQTP0PARERR 0x00010000U
-#define IBQTP1PARERR 0x00008000U
-#define IBQULPPARERR 0x00004000U
-#define IBQSGELOPARERR 0x00002000U
-#define IBQSGEHIPARERR 0x00001000U
-#define IBQNCSIPARERR 0x00000800U
-#define OBQPARERR 0x000007e0U
-#define OBQULP0PARERR 0x00000400U
-#define OBQULP1PARERR 0x00000200U
-#define OBQULP2PARERR 0x00000100U
-#define OBQULP3PARERR 0x00000080U
-#define OBQSGEPARERR 0x00000040U
-#define OBQNCSIPARERR 0x00000020U
-#define PREFDROPINT 0x00000002U
-#define UPACCNONZERO 0x00000001U
-
-#define CIM_HOST_UPACC_INT_CAUSE 0x7b34
-#define EEPROMWRINT 0x40000000U
-#define TIMEOUTMAINT 0x20000000U
-#define TIMEOUTINT 0x10000000U
-#define RSPOVRLOOKUPINT 0x08000000U
-#define REQOVRLOOKUPINT 0x04000000U
-#define BLKWRPLINT 0x02000000U
-#define BLKRDPLINT 0x01000000U
-#define SGLWRPLINT 0x00800000U
-#define SGLRDPLINT 0x00400000U
-#define BLKWRCTLINT 0x00200000U
-#define BLKRDCTLINT 0x00100000U
-#define SGLWRCTLINT 0x00080000U
-#define SGLRDCTLINT 0x00040000U
-#define BLKWREEPROMINT 0x00020000U
-#define BLKRDEEPROMINT 0x00010000U
-#define SGLWREEPROMINT 0x00008000U
-#define SGLRDEEPROMINT 0x00004000U
-#define BLKWRFLASHINT 0x00002000U
-#define BLKRDFLASHINT 0x00001000U
-#define SGLWRFLASHINT 0x00000800U
-#define SGLRDFLASHINT 0x00000400U
-#define BLKWRBOOTINT 0x00000200U
-#define BLKRDBOOTINT 0x00000100U
-#define SGLWRBOOTINT 0x00000080U
-#define SGLRDBOOTINT 0x00000040U
-#define ILLWRBEINT 0x00000020U
-#define ILLRDBEINT 0x00000010U
-#define ILLRDINT 0x00000008U
-#define ILLWRINT 0x00000004U
-#define ILLTRANSINT 0x00000002U
-#define RSVDSPACEINT 0x00000001U
-
-#define TP_OUT_CONFIG 0x7d04
-#define VLANEXTENABLE_MASK 0x0000f000U
-#define VLANEXTENABLE_SHIFT 12
-
-#define TP_GLOBAL_CONFIG 0x7d08
-#define FIVETUPLELOOKUP_SHIFT 17
-#define FIVETUPLELOOKUP_MASK 0x00060000U
-#define FIVETUPLELOOKUP(x) ((x) << FIVETUPLELOOKUP_SHIFT)
-#define FIVETUPLELOOKUP_GET(x) (((x) & FIVETUPLELOOKUP_MASK) >> \
- FIVETUPLELOOKUP_SHIFT)
-
-#define TP_PARA_REG2 0x7d68
-#define MAXRXDATA_MASK 0xffff0000U
-#define MAXRXDATA_SHIFT 16
-#define MAXRXDATA_GET(x) (((x) & MAXRXDATA_MASK) >> MAXRXDATA_SHIFT)
-
-#define TP_TIMER_RESOLUTION 0x7d90
-#define TIMERRESOLUTION_MASK 0x00ff0000U
-#define TIMERRESOLUTION_SHIFT 16
-#define TIMERRESOLUTION_GET(x) (((x) & TIMERRESOLUTION_MASK) >> TIMERRESOLUTION_SHIFT)
-#define DELAYEDACKRESOLUTION_MASK 0x000000ffU
-#define DELAYEDACKRESOLUTION_SHIFT 0
-#define DELAYEDACKRESOLUTION_GET(x) \
- (((x) & DELAYEDACKRESOLUTION_MASK) >> DELAYEDACKRESOLUTION_SHIFT)
-
-#define TP_SHIFT_CNT 0x7dc0
-#define SYNSHIFTMAX_SHIFT 24
-#define SYNSHIFTMAX_MASK 0xff000000U
-#define SYNSHIFTMAX(x) ((x) << SYNSHIFTMAX_SHIFT)
-#define SYNSHIFTMAX_GET(x) (((x) & SYNSHIFTMAX_MASK) >> \
- SYNSHIFTMAX_SHIFT)
-#define RXTSHIFTMAXR1_SHIFT 20
-#define RXTSHIFTMAXR1_MASK 0x00f00000U
-#define RXTSHIFTMAXR1(x) ((x) << RXTSHIFTMAXR1_SHIFT)
-#define RXTSHIFTMAXR1_GET(x) (((x) & RXTSHIFTMAXR1_MASK) >> \
- RXTSHIFTMAXR1_SHIFT)
-#define RXTSHIFTMAXR2_SHIFT 16
-#define RXTSHIFTMAXR2_MASK 0x000f0000U
-#define RXTSHIFTMAXR2(x) ((x) << RXTSHIFTMAXR2_SHIFT)
-#define RXTSHIFTMAXR2_GET(x) (((x) & RXTSHIFTMAXR2_MASK) >> \
- RXTSHIFTMAXR2_SHIFT)
-#define PERSHIFTBACKOFFMAX_SHIFT 12
-#define PERSHIFTBACKOFFMAX_MASK 0x0000f000U
-#define PERSHIFTBACKOFFMAX(x) ((x) << PERSHIFTBACKOFFMAX_SHIFT)
-#define PERSHIFTBACKOFFMAX_GET(x) (((x) & PERSHIFTBACKOFFMAX_MASK) >> \
- PERSHIFTBACKOFFMAX_SHIFT)
-#define PERSHIFTMAX_SHIFT 8
-#define PERSHIFTMAX_MASK 0x00000f00U
-#define PERSHIFTMAX(x) ((x) << PERSHIFTMAX_SHIFT)
-#define PERSHIFTMAX_GET(x) (((x) & PERSHIFTMAX_MASK) >> \
- PERSHIFTMAX_SHIFT)
-#define KEEPALIVEMAXR1_SHIFT 4
-#define KEEPALIVEMAXR1_MASK 0x000000f0U
-#define KEEPALIVEMAXR1(x) ((x) << KEEPALIVEMAXR1_SHIFT)
-#define KEEPALIVEMAXR1_GET(x) (((x) & KEEPALIVEMAXR1_MASK) >> \
- KEEPALIVEMAXR1_SHIFT)
-#define KEEPALIVEMAXR2_SHIFT 0
-#define KEEPALIVEMAXR2_MASK 0x0000000fU
-#define KEEPALIVEMAXR2(x) ((x) << KEEPALIVEMAXR2_SHIFT)
-#define KEEPALIVEMAXR2_GET(x) (((x) & KEEPALIVEMAXR2_MASK) >> \
- KEEPALIVEMAXR2_SHIFT)
-
-#define TP_CCTRL_TABLE 0x7ddc
-#define TP_MTU_TABLE 0x7de4
-#define MTUINDEX_MASK 0xff000000U
-#define MTUINDEX_SHIFT 24
-#define MTUINDEX(x) ((x) << MTUINDEX_SHIFT)
-#define MTUWIDTH_MASK 0x000f0000U
-#define MTUWIDTH_SHIFT 16
-#define MTUWIDTH(x) ((x) << MTUWIDTH_SHIFT)
-#define MTUWIDTH_GET(x) (((x) & MTUWIDTH_MASK) >> MTUWIDTH_SHIFT)
-#define MTUVALUE_MASK 0x00003fffU
-#define MTUVALUE_SHIFT 0
-#define MTUVALUE(x) ((x) << MTUVALUE_SHIFT)
-#define MTUVALUE_GET(x) (((x) & MTUVALUE_MASK) >> MTUVALUE_SHIFT)
-
-#define TP_RSS_LKP_TABLE 0x7dec
-#define LKPTBLROWVLD 0x80000000U
-#define LKPTBLQUEUE1_MASK 0x000ffc00U
-#define LKPTBLQUEUE1_SHIFT 10
-#define LKPTBLQUEUE1(x) ((x) << LKPTBLQUEUE1_SHIFT)
-#define LKPTBLQUEUE1_GET(x) (((x) & LKPTBLQUEUE1_MASK) >> LKPTBLQUEUE1_SHIFT)
-#define LKPTBLQUEUE0_MASK 0x000003ffU
-#define LKPTBLQUEUE0_SHIFT 0
-#define LKPTBLQUEUE0(x) ((x) << LKPTBLQUEUE0_SHIFT)
-#define LKPTBLQUEUE0_GET(x) (((x) & LKPTBLQUEUE0_MASK) >> LKPTBLQUEUE0_SHIFT)
-
-#define TP_PIO_ADDR 0x7e40
-#define TP_PIO_DATA 0x7e44
-#define TP_MIB_INDEX 0x7e50
-#define TP_MIB_DATA 0x7e54
-#define TP_INT_CAUSE 0x7e74
-#define FLMTXFLSTEMPTY 0x40000000U
-
-#define TP_VLAN_PRI_MAP 0x140
-#define FRAGMENTATION_SHIFT 9
-#define FRAGMENTATION_MASK 0x00000200U
-#define MPSHITTYPE_MASK 0x00000100U
-#define MACMATCH_MASK 0x00000080U
-#define ETHERTYPE_MASK 0x00000040U
-#define PROTOCOL_MASK 0x00000020U
-#define TOS_MASK 0x00000010U
-#define VLAN_MASK 0x00000008U
-#define VNIC_ID_MASK 0x00000004U
-#define PORT_MASK 0x00000002U
-#define FCOE_SHIFT 0
-#define FCOE_MASK 0x00000001U
-
-#define TP_INGRESS_CONFIG 0x141
-#define VNIC 0x00000800U
-#define CSUM_HAS_PSEUDO_HDR 0x00000400U
-#define RM_OVLAN 0x00000200U
-#define LOOKUPEVERYPKT 0x00000100U
-
-#define TP_MIB_MAC_IN_ERR_0 0x0
-#define TP_MIB_TCP_OUT_RST 0xc
-#define TP_MIB_TCP_IN_SEG_HI 0x10
-#define TP_MIB_TCP_IN_SEG_LO 0x11
-#define TP_MIB_TCP_OUT_SEG_HI 0x12
-#define TP_MIB_TCP_OUT_SEG_LO 0x13
-#define TP_MIB_TCP_RXT_SEG_HI 0x14
-#define TP_MIB_TCP_RXT_SEG_LO 0x15
-#define TP_MIB_TNL_CNG_DROP_0 0x18
-#define TP_MIB_TCP_V6IN_ERR_0 0x28
-#define TP_MIB_TCP_V6OUT_RST 0x2c
-#define TP_MIB_OFD_ARP_DROP 0x36
-#define TP_MIB_TNL_DROP_0 0x44
-#define TP_MIB_OFD_VLN_DROP_0 0x58
-
-#define ULP_TX_INT_CAUSE 0x8dcc
-#define PBL_BOUND_ERR_CH3 0x80000000U
-#define PBL_BOUND_ERR_CH2 0x40000000U
-#define PBL_BOUND_ERR_CH1 0x20000000U
-#define PBL_BOUND_ERR_CH0 0x10000000U
-
-#define PM_RX_INT_CAUSE 0x8fdc
-#define ZERO_E_CMD_ERROR 0x00400000U
-#define PMRX_FRAMING_ERROR 0x003ffff0U
-#define OCSPI_PAR_ERROR 0x00000008U
-#define DB_OPTIONS_PAR_ERROR 0x00000004U
-#define IESPI_PAR_ERROR 0x00000002U
-#define E_PCMD_PAR_ERROR 0x00000001U
-
-#define PM_TX_INT_CAUSE 0x8ffc
-#define PCMD_LEN_OVFL0 0x80000000U
-#define PCMD_LEN_OVFL1 0x40000000U
-#define PCMD_LEN_OVFL2 0x20000000U
-#define ZERO_C_CMD_ERROR 0x10000000U
-#define PMTX_FRAMING_ERROR 0x0ffffff0U
-#define OESPI_PAR_ERROR 0x00000008U
-#define ICSPI_PAR_ERROR 0x00000002U
-#define C_PCMD_PAR_ERROR 0x00000001U
+#define MA_INT_CAUSE_A 0x77e0
+
+#define MEM_PERR_INT_CAUSE_S 1
+#define MEM_PERR_INT_CAUSE_V(x) ((x) << MEM_PERR_INT_CAUSE_S)
+#define MEM_PERR_INT_CAUSE_F MEM_PERR_INT_CAUSE_V(1U)
+
+#define MEM_WRAP_INT_CAUSE_S 0
+#define MEM_WRAP_INT_CAUSE_V(x) ((x) << MEM_WRAP_INT_CAUSE_S)
+#define MEM_WRAP_INT_CAUSE_F MEM_WRAP_INT_CAUSE_V(1U)
+
+#define MA_INT_WRAP_STATUS_A 0x77e4
+
+#define MEM_WRAP_ADDRESS_S 4
+#define MEM_WRAP_ADDRESS_M 0xfffffffU
+#define MEM_WRAP_ADDRESS_G(x) (((x) >> MEM_WRAP_ADDRESS_S) & MEM_WRAP_ADDRESS_M)
+
+#define MEM_WRAP_CLIENT_NUM_S 0
+#define MEM_WRAP_CLIENT_NUM_M 0xfU
+#define MEM_WRAP_CLIENT_NUM_G(x) \
+ (((x) >> MEM_WRAP_CLIENT_NUM_S) & MEM_WRAP_CLIENT_NUM_M)
+
+#define MA_PARITY_ERROR_STATUS_A 0x77f4
+#define MA_PARITY_ERROR_STATUS1_A 0x77f4
+#define MA_PARITY_ERROR_STATUS2_A 0x7804
+
+/* registers for module EDC_0 */
+#define EDC_0_BASE_ADDR 0x7900
+
+#define EDC_BIST_CMD_A 0x7904
+#define EDC_BIST_CMD_ADDR_A 0x7908
+#define EDC_BIST_CMD_LEN_A 0x790c
+#define EDC_BIST_DATA_PATTERN_A 0x7910
+#define EDC_BIST_STATUS_RDATA_A 0x7928
+#define EDC_INT_CAUSE_A 0x7978
+
+#define ECC_UE_PAR_S 5
+#define ECC_UE_PAR_V(x) ((x) << ECC_UE_PAR_S)
+#define ECC_UE_PAR_F ECC_UE_PAR_V(1U)
+
+#define ECC_CE_PAR_S 4
+#define ECC_CE_PAR_V(x) ((x) << ECC_CE_PAR_S)
+#define ECC_CE_PAR_F ECC_CE_PAR_V(1U)
+
+#define PERR_PAR_CAUSE_S 3
+#define PERR_PAR_CAUSE_V(x) ((x) << PERR_PAR_CAUSE_S)
+#define PERR_PAR_CAUSE_F PERR_PAR_CAUSE_V(1U)
+
+#define EDC_ECC_STATUS_A 0x797c
+
+/* registers for module EDC_1 */
+#define EDC_1_BASE_ADDR 0x7980
+
+/* registers for module CIM */
+#define CIM_BOOT_CFG_A 0x7b00
+
+#define BOOTADDR_M 0xffffff00U
+
+#define UPCRST_S 0
+#define UPCRST_V(x) ((x) << UPCRST_S)
+#define UPCRST_F UPCRST_V(1U)
+
+#define CIM_PF_MAILBOX_DATA_A 0x240
+#define CIM_PF_MAILBOX_CTRL_A 0x280
+
+#define MBMSGVALID_S 3
+#define MBMSGVALID_V(x) ((x) << MBMSGVALID_S)
+#define MBMSGVALID_F MBMSGVALID_V(1U)
+
+#define MBINTREQ_S 2
+#define MBINTREQ_V(x) ((x) << MBINTREQ_S)
+#define MBINTREQ_F MBINTREQ_V(1U)
+
+#define MBOWNER_S 0
+#define MBOWNER_M 0x3U
+#define MBOWNER_V(x) ((x) << MBOWNER_S)
+#define MBOWNER_G(x) (((x) >> MBOWNER_S) & MBOWNER_M)
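A sketch of the matching read side for the mailbox control word: test a flag with its _F constant, then pull the multi-bit owner field out with _G. The register snapshot is fabricated for the example:

#include <stdio.h>

/* field definitions copied from the patch */
#define MBMSGVALID_S    3
#define MBMSGVALID_V(x) ((x) << MBMSGVALID_S)
#define MBMSGVALID_F    MBMSGVALID_V(1U)
#define MBOWNER_S       0
#define MBOWNER_M       0x3U
#define MBOWNER_V(x)    ((x) << MBOWNER_S)
#define MBOWNER_G(x)    (((x) >> MBOWNER_S) & MBOWNER_M)

int main(void)
{
	unsigned int ctrl = MBMSGVALID_F | MBOWNER_V(2U);	/* invented */

	if (ctrl & MBMSGVALID_F)
		printf("message valid, owner=%u\n", MBOWNER_G(ctrl));
	return 0;	/* prints: message valid, owner=2 */
}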
+
+#define CIM_PF_HOST_INT_ENABLE_A 0x288
+
+#define MBMSGRDYINTEN_S 19
+#define MBMSGRDYINTEN_V(x) ((x) << MBMSGRDYINTEN_S)
+#define MBMSGRDYINTEN_F MBMSGRDYINTEN_V(1U)
+
+#define CIM_PF_HOST_INT_CAUSE_A 0x28c
+
+#define MBMSGRDYINT_S 19
+#define MBMSGRDYINT_V(x) ((x) << MBMSGRDYINT_S)
+#define MBMSGRDYINT_F MBMSGRDYINT_V(1U)
+
+#define CIM_HOST_INT_CAUSE_A 0x7b2c
+
+#define TIEQOUTPARERRINT_S 20
+#define TIEQOUTPARERRINT_V(x) ((x) << TIEQOUTPARERRINT_S)
+#define TIEQOUTPARERRINT_F TIEQOUTPARERRINT_V(1U)
+
+#define TIEQINPARERRINT_S 19
+#define TIEQINPARERRINT_V(x) ((x) << TIEQINPARERRINT_S)
+#define TIEQINPARERRINT_F TIEQINPARERRINT_V(1U)
+
+#define PREFDROPINT_S 1
+#define PREFDROPINT_V(x) ((x) << PREFDROPINT_S)
+#define PREFDROPINT_F PREFDROPINT_V(1U)
+
+#define UPACCNONZERO_S 0
+#define UPACCNONZERO_V(x) ((x) << UPACCNONZERO_S)
+#define UPACCNONZERO_F UPACCNONZERO_V(1U)
+
+#define MBHOSTPARERR_S 18
+#define MBHOSTPARERR_V(x) ((x) << MBHOSTPARERR_S)
+#define MBHOSTPARERR_F MBHOSTPARERR_V(1U)
+
+#define MBUPPARERR_S 17
+#define MBUPPARERR_V(x) ((x) << MBUPPARERR_S)
+#define MBUPPARERR_F MBUPPARERR_V(1U)
+
+#define IBQTP0PARERR_S 16
+#define IBQTP0PARERR_V(x) ((x) << IBQTP0PARERR_S)
+#define IBQTP0PARERR_F IBQTP0PARERR_V(1U)
+
+#define IBQTP1PARERR_S 15
+#define IBQTP1PARERR_V(x) ((x) << IBQTP1PARERR_S)
+#define IBQTP1PARERR_F IBQTP1PARERR_V(1U)
+
+#define IBQULPPARERR_S 14
+#define IBQULPPARERR_V(x) ((x) << IBQULPPARERR_S)
+#define IBQULPPARERR_F IBQULPPARERR_V(1U)
+
+#define IBQSGELOPARERR_S 13
+#define IBQSGELOPARERR_V(x) ((x) << IBQSGELOPARERR_S)
+#define IBQSGELOPARERR_F IBQSGELOPARERR_V(1U)
+
+#define IBQSGEHIPARERR_S 12
+#define IBQSGEHIPARERR_V(x) ((x) << IBQSGEHIPARERR_S)
+#define IBQSGEHIPARERR_F IBQSGEHIPARERR_V(1U)
+
+#define IBQNCSIPARERR_S 11
+#define IBQNCSIPARERR_V(x) ((x) << IBQNCSIPARERR_S)
+#define IBQNCSIPARERR_F IBQNCSIPARERR_V(1U)
+
+#define OBQULP0PARERR_S 10
+#define OBQULP0PARERR_V(x) ((x) << OBQULP0PARERR_S)
+#define OBQULP0PARERR_F OBQULP0PARERR_V(1U)
+
+#define OBQULP1PARERR_S 9
+#define OBQULP1PARERR_V(x) ((x) << OBQULP1PARERR_S)
+#define OBQULP1PARERR_F OBQULP1PARERR_V(1U)
+
+#define OBQULP2PARERR_S 8
+#define OBQULP2PARERR_V(x) ((x) << OBQULP2PARERR_S)
+#define OBQULP2PARERR_F OBQULP2PARERR_V(1U)
+
+#define OBQULP3PARERR_S 7
+#define OBQULP3PARERR_V(x) ((x) << OBQULP3PARERR_S)
+#define OBQULP3PARERR_F OBQULP3PARERR_V(1U)
+
+#define OBQSGEPARERR_S 6
+#define OBQSGEPARERR_V(x) ((x) << OBQSGEPARERR_S)
+#define OBQSGEPARERR_F OBQSGEPARERR_V(1U)
+
+#define OBQNCSIPARERR_S 5
+#define OBQNCSIPARERR_V(x) ((x) << OBQNCSIPARERR_S)
+#define OBQNCSIPARERR_F OBQNCSIPARERR_V(1U)
+
+#define CIM_HOST_UPACC_INT_CAUSE_A 0x7b34
+
+#define EEPROMWRINT_S 30
+#define EEPROMWRINT_V(x) ((x) << EEPROMWRINT_S)
+#define EEPROMWRINT_F EEPROMWRINT_V(1U)
+
+#define TIMEOUTMAINT_S 29
+#define TIMEOUTMAINT_V(x) ((x) << TIMEOUTMAINT_S)
+#define TIMEOUTMAINT_F TIMEOUTMAINT_V(1U)
+
+#define TIMEOUTINT_S 28
+#define TIMEOUTINT_V(x) ((x) << TIMEOUTINT_S)
+#define TIMEOUTINT_F TIMEOUTINT_V(1U)
+
+#define RSPOVRLOOKUPINT_S 27
+#define RSPOVRLOOKUPINT_V(x) ((x) << RSPOVRLOOKUPINT_S)
+#define RSPOVRLOOKUPINT_F RSPOVRLOOKUPINT_V(1U)
+
+#define REQOVRLOOKUPINT_S 26
+#define REQOVRLOOKUPINT_V(x) ((x) << REQOVRLOOKUPINT_S)
+#define REQOVRLOOKUPINT_F REQOVRLOOKUPINT_V(1U)
+
+#define BLKWRPLINT_S 25
+#define BLKWRPLINT_V(x) ((x) << BLKWRPLINT_S)
+#define BLKWRPLINT_F BLKWRPLINT_V(1U)
+
+#define BLKRDPLINT_S 24
+#define BLKRDPLINT_V(x) ((x) << BLKRDPLINT_S)
+#define BLKRDPLINT_F BLKRDPLINT_V(1U)
+
+#define SGLWRPLINT_S 23
+#define SGLWRPLINT_V(x) ((x) << SGLWRPLINT_S)
+#define SGLWRPLINT_F SGLWRPLINT_V(1U)
+
+#define SGLRDPLINT_S 22
+#define SGLRDPLINT_V(x) ((x) << SGLRDPLINT_S)
+#define SGLRDPLINT_F SGLRDPLINT_V(1U)
+
+#define BLKWRCTLINT_S 21
+#define BLKWRCTLINT_V(x) ((x) << BLKWRCTLINT_S)
+#define BLKWRCTLINT_F BLKWRCTLINT_V(1U)
+
+#define BLKRDCTLINT_S 20
+#define BLKRDCTLINT_V(x) ((x) << BLKRDCTLINT_S)
+#define BLKRDCTLINT_F BLKRDCTLINT_V(1U)
+
+#define SGLWRCTLINT_S 19
+#define SGLWRCTLINT_V(x) ((x) << SGLWRCTLINT_S)
+#define SGLWRCTLINT_F SGLWRCTLINT_V(1U)
+
+#define SGLRDCTLINT_S 18
+#define SGLRDCTLINT_V(x) ((x) << SGLRDCTLINT_S)
+#define SGLRDCTLINT_F SGLRDCTLINT_V(1U)
+
+#define BLKWREEPROMINT_S 17
+#define BLKWREEPROMINT_V(x) ((x) << BLKWREEPROMINT_S)
+#define BLKWREEPROMINT_F BLKWREEPROMINT_V(1U)
+
+#define BLKRDEEPROMINT_S 16
+#define BLKRDEEPROMINT_V(x) ((x) << BLKRDEEPROMINT_S)
+#define BLKRDEEPROMINT_F BLKRDEEPROMINT_V(1U)
+
+#define SGLWREEPROMINT_S 15
+#define SGLWREEPROMINT_V(x) ((x) << SGLWREEPROMINT_S)
+#define SGLWREEPROMINT_F SGLWREEPROMINT_V(1U)
+
+#define SGLRDEEPROMINT_S 14
+#define SGLRDEEPROMINT_V(x) ((x) << SGLRDEEPROMINT_S)
+#define SGLRDEEPROMINT_F SGLRDEEPROMINT_V(1U)
+
+#define BLKWRFLASHINT_S 13
+#define BLKWRFLASHINT_V(x) ((x) << BLKWRFLASHINT_S)
+#define BLKWRFLASHINT_F BLKWRFLASHINT_V(1U)
+
+#define BLKRDFLASHINT_S 12
+#define BLKRDFLASHINT_V(x) ((x) << BLKRDFLASHINT_S)
+#define BLKRDFLASHINT_F BLKRDFLASHINT_V(1U)
+
+#define SGLWRFLASHINT_S 11
+#define SGLWRFLASHINT_V(x) ((x) << SGLWRFLASHINT_S)
+#define SGLWRFLASHINT_F SGLWRFLASHINT_V(1U)
+
+#define SGLRDFLASHINT_S 10
+#define SGLRDFLASHINT_V(x) ((x) << SGLRDFLASHINT_S)
+#define SGLRDFLASHINT_F SGLRDFLASHINT_V(1U)
+
+#define BLKWRBOOTINT_S 9
+#define BLKWRBOOTINT_V(x) ((x) << BLKWRBOOTINT_S)
+#define BLKWRBOOTINT_F BLKWRBOOTINT_V(1U)
+
+#define BLKRDBOOTINT_S 8
+#define BLKRDBOOTINT_V(x) ((x) << BLKRDBOOTINT_S)
+#define BLKRDBOOTINT_F BLKRDBOOTINT_V(1U)
+
+#define SGLWRBOOTINT_S 7
+#define SGLWRBOOTINT_V(x) ((x) << SGLWRBOOTINT_S)
+#define SGLWRBOOTINT_F SGLWRBOOTINT_V(1U)
+
+#define SGLRDBOOTINT_S 6
+#define SGLRDBOOTINT_V(x) ((x) << SGLRDBOOTINT_S)
+#define SGLRDBOOTINT_F SGLRDBOOTINT_V(1U)
+
+#define ILLWRBEINT_S 5
+#define ILLWRBEINT_V(x) ((x) << ILLWRBEINT_S)
+#define ILLWRBEINT_F ILLWRBEINT_V(1U)
+
+#define ILLRDBEINT_S 4
+#define ILLRDBEINT_V(x) ((x) << ILLRDBEINT_S)
+#define ILLRDBEINT_F ILLRDBEINT_V(1U)
+
+#define ILLRDINT_S 3
+#define ILLRDINT_V(x) ((x) << ILLRDINT_S)
+#define ILLRDINT_F ILLRDINT_V(1U)
+
+#define ILLWRINT_S 2
+#define ILLWRINT_V(x) ((x) << ILLWRINT_S)
+#define ILLWRINT_F ILLWRINT_V(1U)
+
+#define ILLTRANSINT_S 1
+#define ILLTRANSINT_V(x) ((x) << ILLTRANSINT_S)
+#define ILLTRANSINT_F ILLTRANSINT_V(1U)
+
+#define RSVDSPACEINT_S 0
+#define RSVDSPACEINT_V(x) ((x) << RSVDSPACEINT_S)
+#define RSVDSPACEINT_F RSVDSPACEINT_V(1U)
+
+/* registers for module TP */
+#define TP_OUT_CONFIG_A 0x7d04
+#define TP_GLOBAL_CONFIG_A 0x7d08
+
+#define FIVETUPLELOOKUP_S 17
+#define FIVETUPLELOOKUP_M 0x3U
+#define FIVETUPLELOOKUP_V(x) ((x) << FIVETUPLELOOKUP_S)
+#define FIVETUPLELOOKUP_G(x) (((x) >> FIVETUPLELOOKUP_S) & FIVETUPLELOOKUP_M)
+
+#define TP_PARA_REG2_A 0x7d68
+
+#define MAXRXDATA_S 16
+#define MAXRXDATA_M 0xffffU
+#define MAXRXDATA_G(x) (((x) >> MAXRXDATA_S) & MAXRXDATA_M)
+
+#define TP_TIMER_RESOLUTION_A 0x7d90
+
+#define TIMERRESOLUTION_S 16
+#define TIMERRESOLUTION_M 0xffU
+#define TIMERRESOLUTION_G(x) (((x) >> TIMERRESOLUTION_S) & TIMERRESOLUTION_M)
+
+#define DELAYEDACKRESOLUTION_S 0
+#define DELAYEDACKRESOLUTION_M 0xffU
+#define DELAYEDACKRESOLUTION_G(x) \
+ (((x) >> DELAYEDACKRESOLUTION_S) & DELAYEDACKRESOLUTION_M)
+
+#define TP_SHIFT_CNT_A 0x7dc0
+
+#define SYNSHIFTMAX_S 24
+#define SYNSHIFTMAX_M 0xffU
+#define SYNSHIFTMAX_V(x) ((x) << SYNSHIFTMAX_S)
+#define SYNSHIFTMAX_G(x) (((x) >> SYNSHIFTMAX_S) & SYNSHIFTMAX_M)
+
+#define RXTSHIFTMAXR1_S 20
+#define RXTSHIFTMAXR1_M 0xfU
+#define RXTSHIFTMAXR1_V(x) ((x) << RXTSHIFTMAXR1_S)
+#define RXTSHIFTMAXR1_G(x) (((x) >> RXTSHIFTMAXR1_S) & RXTSHIFTMAXR1_M)
+
+#define RXTSHIFTMAXR2_S 16
+#define RXTSHIFTMAXR2_M 0xfU
+#define RXTSHIFTMAXR2_V(x) ((x) << RXTSHIFTMAXR2_S)
+#define RXTSHIFTMAXR2_G(x) (((x) >> RXTSHIFTMAXR2_S) & RXTSHIFTMAXR2_M)
+
+#define PERSHIFTBACKOFFMAX_S 12
+#define PERSHIFTBACKOFFMAX_M 0xfU
+#define PERSHIFTBACKOFFMAX_V(x) ((x) << PERSHIFTBACKOFFMAX_S)
+#define PERSHIFTBACKOFFMAX_G(x) \
+ (((x) >> PERSHIFTBACKOFFMAX_S) & PERSHIFTBACKOFFMAX_M)
+
+#define PERSHIFTMAX_S 8
+#define PERSHIFTMAX_M 0xfU
+#define PERSHIFTMAX_V(x) ((x) << PERSHIFTMAX_S)
+#define PERSHIFTMAX_G(x) (((x) >> PERSHIFTMAX_S) & PERSHIFTMAX_M)
+
+#define KEEPALIVEMAXR1_S 4
+#define KEEPALIVEMAXR1_M 0xfU
+#define KEEPALIVEMAXR1_V(x) ((x) << KEEPALIVEMAXR1_S)
+#define KEEPALIVEMAXR1_G(x) (((x) >> KEEPALIVEMAXR1_S) & KEEPALIVEMAXR1_M)
+
+#define KEEPALIVEMAXR2_S 0
+#define KEEPALIVEMAXR2_M 0xfU
+#define KEEPALIVEMAXR2_V(x) ((x) << KEEPALIVEMAXR2_S)
+#define KEEPALIVEMAXR2_G(x) (((x) >> KEEPALIVEMAXR2_S) & KEEPALIVEMAXR2_M)
+
+#define TP_CCTRL_TABLE_A 0x7ddc
+#define TP_MTU_TABLE_A 0x7de4
+
+#define MTUINDEX_S 24
+#define MTUINDEX_V(x) ((x) << MTUINDEX_S)
+
+#define MTUWIDTH_S 16
+#define MTUWIDTH_M 0xfU
+#define MTUWIDTH_V(x) ((x) << MTUWIDTH_S)
+#define MTUWIDTH_G(x) (((x) >> MTUWIDTH_S) & MTUWIDTH_M)
+
+#define MTUVALUE_S 0
+#define MTUVALUE_M 0x3fffU
+#define MTUVALUE_V(x) ((x) << MTUVALUE_S)
+#define MTUVALUE_G(x) (((x) >> MTUVALUE_S) & MTUVALUE_M)
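The same pattern supports a full round trip. A small sketch, with an invented MTU-table entry, that encodes index, width, and value with _V and reads the width and value back with _G:

#include <stdio.h>

/* field definitions copied from the patch */
#define MTUINDEX_S    24
#define MTUINDEX_V(x) ((x) << MTUINDEX_S)
#define MTUWIDTH_S    16
#define MTUWIDTH_M    0xfU
#define MTUWIDTH_V(x) ((x) << MTUWIDTH_S)
#define MTUWIDTH_G(x) (((x) >> MTUWIDTH_S) & MTUWIDTH_M)
#define MTUVALUE_S    0
#define MTUVALUE_M    0x3fffU
#define MTUVALUE_V(x) ((x) << MTUVALUE_S)
#define MTUVALUE_G(x) (((x) >> MTUVALUE_S) & MTUVALUE_M)

int main(void)
{
	/* Invented entry: table slot 0, width 8, MTU 1500. */
	unsigned int ent = MTUINDEX_V(0U) | MTUWIDTH_V(8U) |
			   MTUVALUE_V(1500U);

	printf("width=%u mtu=%u\n", MTUWIDTH_G(ent), MTUVALUE_G(ent));
	return 0;	/* prints: width=8 mtu=1500 */
}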
+
+#define TP_RSS_LKP_TABLE_A 0x7dec
+
+#define LKPTBLROWVLD_S 31
+#define LKPTBLROWVLD_V(x) ((x) << LKPTBLROWVLD_S)
+#define LKPTBLROWVLD_F LKPTBLROWVLD_V(1U)
+
+#define LKPTBLQUEUE1_S 10
+#define LKPTBLQUEUE1_M 0x3ffU
+#define LKPTBLQUEUE1_G(x) (((x) >> LKPTBLQUEUE1_S) & LKPTBLQUEUE1_M)
+
+#define LKPTBLQUEUE0_S 0
+#define LKPTBLQUEUE0_M 0x3ffU
+#define LKPTBLQUEUE0_G(x) (((x) >> LKPTBLQUEUE0_S) & LKPTBLQUEUE0_M)
+
+#define TP_PIO_ADDR_A 0x7e40
+#define TP_PIO_DATA_A 0x7e44
+#define TP_MIB_INDEX_A 0x7e50
+#define TP_MIB_DATA_A 0x7e54
+#define TP_INT_CAUSE_A 0x7e74
+
+#define FLMTXFLSTEMPTY_S 30
+#define FLMTXFLSTEMPTY_V(x) ((x) << FLMTXFLSTEMPTY_S)
+#define FLMTXFLSTEMPTY_F FLMTXFLSTEMPTY_V(1U)
+
+#define TP_VLAN_PRI_MAP_A 0x140
+
+#define FRAGMENTATION_S 9
+#define FRAGMENTATION_V(x) ((x) << FRAGMENTATION_S)
+#define FRAGMENTATION_F FRAGMENTATION_V(1U)
+
+#define MPSHITTYPE_S 8
+#define MPSHITTYPE_V(x) ((x) << MPSHITTYPE_S)
+#define MPSHITTYPE_F MPSHITTYPE_V(1U)
+
+#define MACMATCH_S 7
+#define MACMATCH_V(x) ((x) << MACMATCH_S)
+#define MACMATCH_F MACMATCH_V(1U)
+
+#define ETHERTYPE_S 6
+#define ETHERTYPE_V(x) ((x) << ETHERTYPE_S)
+#define ETHERTYPE_F ETHERTYPE_V(1U)
+
+#define PROTOCOL_S 5
+#define PROTOCOL_V(x) ((x) << PROTOCOL_S)
+#define PROTOCOL_F PROTOCOL_V(1U)
+
+#define TOS_S 4
+#define TOS_V(x) ((x) << TOS_S)
+#define TOS_F TOS_V(1U)
+
+#define VLAN_S 3
+#define VLAN_V(x) ((x) << VLAN_S)
+#define VLAN_F VLAN_V(1U)
+
+#define VNIC_ID_S 2
+#define VNIC_ID_V(x) ((x) << VNIC_ID_S)
+#define VNIC_ID_F VNIC_ID_V(1U)
+
+#define PORT_S 1
+#define PORT_V(x) ((x) << PORT_S)
+#define PORT_F PORT_V(1U)
+
+#define FCOE_S 0
+#define FCOE_V(x) ((x) << FCOE_S)
+#define FCOE_F FCOE_V(1U)
+
+#define FILTERMODE_S 15
+#define FILTERMODE_V(x) ((x) << FILTERMODE_S)
+#define FILTERMODE_F FILTERMODE_V(1U)
+
+#define FCOEMASK_S 14
+#define FCOEMASK_V(x) ((x) << FCOEMASK_S)
+#define FCOEMASK_F FCOEMASK_V(1U)
+
+#define TP_INGRESS_CONFIG_A 0x141
+
+#define VNIC_S 11
+#define VNIC_V(x) ((x) << VNIC_S)
+#define VNIC_F VNIC_V(1U)
+
+#define CSUM_HAS_PSEUDO_HDR_S 10
+#define CSUM_HAS_PSEUDO_HDR_V(x) ((x) << CSUM_HAS_PSEUDO_HDR_S)
+#define CSUM_HAS_PSEUDO_HDR_F CSUM_HAS_PSEUDO_HDR_V(1U)
+
+#define TP_MIB_MAC_IN_ERR_0_A 0x0
+#define TP_MIB_TCP_OUT_RST_A 0xc
+#define TP_MIB_TCP_IN_SEG_HI_A 0x10
+#define TP_MIB_TCP_IN_SEG_LO_A 0x11
+#define TP_MIB_TCP_OUT_SEG_HI_A 0x12
+#define TP_MIB_TCP_OUT_SEG_LO_A 0x13
+#define TP_MIB_TCP_RXT_SEG_HI_A 0x14
+#define TP_MIB_TCP_RXT_SEG_LO_A 0x15
+#define TP_MIB_TNL_CNG_DROP_0_A 0x18
+#define TP_MIB_TCP_V6IN_ERR_0_A 0x28
+#define TP_MIB_TCP_V6OUT_RST_A 0x2c
+#define TP_MIB_OFD_ARP_DROP_A 0x36
+#define TP_MIB_TNL_DROP_0_A 0x44
+#define TP_MIB_OFD_VLN_DROP_0_A 0x58
+
+#define ULP_TX_INT_CAUSE_A 0x8dcc
+
+#define PBL_BOUND_ERR_CH3_S 31
+#define PBL_BOUND_ERR_CH3_V(x) ((x) << PBL_BOUND_ERR_CH3_S)
+#define PBL_BOUND_ERR_CH3_F PBL_BOUND_ERR_CH3_V(1U)
+
+#define PBL_BOUND_ERR_CH2_S 30
+#define PBL_BOUND_ERR_CH2_V(x) ((x) << PBL_BOUND_ERR_CH2_S)
+#define PBL_BOUND_ERR_CH2_F PBL_BOUND_ERR_CH2_V(1U)
+
+#define PBL_BOUND_ERR_CH1_S 29
+#define PBL_BOUND_ERR_CH1_V(x) ((x) << PBL_BOUND_ERR_CH1_S)
+#define PBL_BOUND_ERR_CH1_F PBL_BOUND_ERR_CH1_V(1U)
+
+#define PBL_BOUND_ERR_CH0_S 28
+#define PBL_BOUND_ERR_CH0_V(x) ((x) << PBL_BOUND_ERR_CH0_S)
+#define PBL_BOUND_ERR_CH0_F PBL_BOUND_ERR_CH0_V(1U)
+
+#define PM_RX_INT_CAUSE_A 0x8fdc
+
+#define PMRX_FRAMING_ERROR_F 0x003ffff0U
+
+#define ZERO_E_CMD_ERROR_S 22
+#define ZERO_E_CMD_ERROR_V(x) ((x) << ZERO_E_CMD_ERROR_S)
+#define ZERO_E_CMD_ERROR_F ZERO_E_CMD_ERROR_V(1U)
+
+#define OCSPI_PAR_ERROR_S 3
+#define OCSPI_PAR_ERROR_V(x) ((x) << OCSPI_PAR_ERROR_S)
+#define OCSPI_PAR_ERROR_F OCSPI_PAR_ERROR_V(1U)
+
+#define DB_OPTIONS_PAR_ERROR_S 2
+#define DB_OPTIONS_PAR_ERROR_V(x) ((x) << DB_OPTIONS_PAR_ERROR_S)
+#define DB_OPTIONS_PAR_ERROR_F DB_OPTIONS_PAR_ERROR_V(1U)
+
+#define IESPI_PAR_ERROR_S 1
+#define IESPI_PAR_ERROR_V(x) ((x) << IESPI_PAR_ERROR_S)
+#define IESPI_PAR_ERROR_F IESPI_PAR_ERROR_V(1U)
+
+#define PMRX_E_PCMD_PAR_ERROR_S 0
+#define PMRX_E_PCMD_PAR_ERROR_V(x) ((x) << PMRX_E_PCMD_PAR_ERROR_S)
+#define PMRX_E_PCMD_PAR_ERROR_F PMRX_E_PCMD_PAR_ERROR_V(1U)
+
+#define PM_TX_INT_CAUSE_A 0x8ffc
+
+#define PCMD_LEN_OVFL0_S 31
+#define PCMD_LEN_OVFL0_V(x) ((x) << PCMD_LEN_OVFL0_S)
+#define PCMD_LEN_OVFL0_F PCMD_LEN_OVFL0_V(1U)
+
+#define PCMD_LEN_OVFL1_S 30
+#define PCMD_LEN_OVFL1_V(x) ((x) << PCMD_LEN_OVFL1_S)
+#define PCMD_LEN_OVFL1_F PCMD_LEN_OVFL1_V(1U)
+
+#define PCMD_LEN_OVFL2_S 29
+#define PCMD_LEN_OVFL2_V(x) ((x) << PCMD_LEN_OVFL2_S)
+#define PCMD_LEN_OVFL2_F PCMD_LEN_OVFL2_V(1U)
+
+#define ZERO_C_CMD_ERROR_S 28
+#define ZERO_C_CMD_ERROR_V(x) ((x) << ZERO_C_CMD_ERROR_S)
+#define ZERO_C_CMD_ERROR_F ZERO_C_CMD_ERROR_V(1U)
+
+#define PMTX_FRAMING_ERROR_F 0x0ffffff0U
+
+#define OESPI_PAR_ERROR_S 3
+#define OESPI_PAR_ERROR_V(x) ((x) << OESPI_PAR_ERROR_S)
+#define OESPI_PAR_ERROR_F OESPI_PAR_ERROR_V(1U)
+
+#define ICSPI_PAR_ERROR_S 1
+#define ICSPI_PAR_ERROR_V(x) ((x) << ICSPI_PAR_ERROR_S)
+#define ICSPI_PAR_ERROR_F ICSPI_PAR_ERROR_V(1U)
+
+#define PMTX_C_PCMD_PAR_ERROR_S 0
+#define PMTX_C_PCMD_PAR_ERROR_V(x) ((x) << PMTX_C_PCMD_PAR_ERROR_S)
+#define PMTX_C_PCMD_PAR_ERROR_F PMTX_C_PCMD_PAR_ERROR_V(1U)
#define MPS_PORT_STAT_TX_PORT_BYTES_L 0x400
#define MPS_PORT_STAT_TX_PORT_BYTES_H 0x404
#define MPS_PORT_STAT_RX_PORT_PPP7_H 0x60c
#define MPS_PORT_STAT_RX_PORT_LESS_64B_L 0x610
#define MPS_PORT_STAT_RX_PORT_LESS_64B_H 0x614
-#define MAC_PORT_CFG2 0x818
#define MAC_PORT_MAGIC_MACID_LO 0x824
#define MAC_PORT_MAGIC_MACID_HI 0x828
-#define MAC_PORT_EPIO_DATA0 0x8c0
-#define MAC_PORT_EPIO_DATA1 0x8c4
-#define MAC_PORT_EPIO_DATA2 0x8c8
-#define MAC_PORT_EPIO_DATA3 0x8cc
-#define MAC_PORT_EPIO_OP 0x8d0
-
-#define MPS_CMN_CTL 0x9000
-#define NUMPORTS_MASK 0x00000003U
-#define NUMPORTS_SHIFT 0
-#define NUMPORTS_GET(x) (((x) & NUMPORTS_MASK) >> NUMPORTS_SHIFT)
-
-#define MPS_INT_CAUSE 0x9008
-#define STATINT 0x00000020U
-#define TXINT 0x00000010U
-#define RXINT 0x00000008U
-#define TRCINT 0x00000004U
-#define CLSINT 0x00000002U
-#define PLINT 0x00000001U
-
-#define MPS_TX_INT_CAUSE 0x9408
-#define PORTERR 0x00010000U
-#define FRMERR 0x00008000U
-#define SECNTERR 0x00004000U
-#define BUBBLE 0x00002000U
-#define TXDESCFIFO 0x00001e00U
-#define TXDATAFIFO 0x000001e0U
-#define NCSIFIFO 0x00000010U
-#define TPFIFO 0x0000000fU
-
-#define MPS_STAT_PERR_INT_CAUSE_SRAM 0x9614
-#define MPS_STAT_PERR_INT_CAUSE_TX_FIFO 0x9620
-#define MPS_STAT_PERR_INT_CAUSE_RX_FIFO 0x962c
+
+#define MAC_PORT_EPIO_DATA0_A 0x8c0
+#define MAC_PORT_EPIO_DATA1_A 0x8c4
+#define MAC_PORT_EPIO_DATA2_A 0x8c8
+#define MAC_PORT_EPIO_DATA3_A 0x8cc
+#define MAC_PORT_EPIO_OP_A 0x8d0
+
+#define MAC_PORT_CFG2_A 0x818
+
+#define MPS_CMN_CTL_A 0x9000
+
+#define NUMPORTS_S 0
+#define NUMPORTS_M 0x3U
+#define NUMPORTS_G(x) (((x) >> NUMPORTS_S) & NUMPORTS_M)
+
+#define MPS_INT_CAUSE_A 0x9008
+#define MPS_TX_INT_CAUSE_A 0x9408
+
+#define FRMERR_S 15
+#define FRMERR_V(x) ((x) << FRMERR_S)
+#define FRMERR_F FRMERR_V(1U)
+
+#define SECNTERR_S 14
+#define SECNTERR_V(x) ((x) << SECNTERR_S)
+#define SECNTERR_F SECNTERR_V(1U)
+
+#define BUBBLE_S 13
+#define BUBBLE_V(x) ((x) << BUBBLE_S)
+#define BUBBLE_F BUBBLE_V(1U)
+
+#define TXDESCFIFO_S 9
+#define TXDESCFIFO_M 0xfU
+#define TXDESCFIFO_V(x) ((x) << TXDESCFIFO_S)
+
+#define TXDATAFIFO_S 5
+#define TXDATAFIFO_M 0xfU
+#define TXDATAFIFO_V(x) ((x) << TXDATAFIFO_S)
+
+#define NCSIFIFO_S 4
+#define NCSIFIFO_V(x) ((x) << NCSIFIFO_S)
+#define NCSIFIFO_F NCSIFIFO_V(1U)
+
+#define TPFIFO_S 0
+#define TPFIFO_M 0xfU
+#define TPFIFO_V(x) ((x) << TPFIFO_S)
+
+#define MPS_STAT_PERR_INT_CAUSE_SRAM_A 0x9614
+#define MPS_STAT_PERR_INT_CAUSE_TX_FIFO_A 0x9620
+#define MPS_STAT_PERR_INT_CAUSE_RX_FIFO_A 0x962c
#define MPS_STAT_RX_BG_0_MAC_DROP_FRAME_L 0x9640
#define MPS_STAT_RX_BG_0_MAC_DROP_FRAME_H 0x9644
#define MPS_STAT_RX_BG_2_LB_TRUNC_FRAME_H 0x96b4
#define MPS_STAT_RX_BG_3_LB_TRUNC_FRAME_L 0x96b8
#define MPS_STAT_RX_BG_3_LB_TRUNC_FRAME_H 0x96bc
-#define MPS_TRC_CFG 0x9800
-#define TRCFIFOEMPTY 0x00000010U
-#define TRCIGNOREDROPINPUT 0x00000008U
-#define TRCKEEPDUPLICATES 0x00000004U
-#define TRCEN 0x00000002U
-#define TRCMULTIFILTER 0x00000001U
-
-#define MPS_TRC_RSS_CONTROL 0x9808
-#define MPS_T5_TRC_RSS_CONTROL 0xa00c
-#define RSSCONTROL_MASK 0x00ff0000U
-#define RSSCONTROL_SHIFT 16
-#define RSSCONTROL(x) ((x) << RSSCONTROL_SHIFT)
-#define QUEUENUMBER_MASK 0x0000ffffU
-#define QUEUENUMBER_SHIFT 0
-#define QUEUENUMBER(x) ((x) << QUEUENUMBER_SHIFT)
-
-#define MPS_TRC_FILTER_MATCH_CTL_A 0x9810
-#define TFINVERTMATCH 0x01000000U
-#define TFPKTTOOLARGE 0x00800000U
-#define TFEN 0x00400000U
-#define TFPORT_MASK 0x003c0000U
-#define TFPORT_SHIFT 18
-#define TFPORT(x) ((x) << TFPORT_SHIFT)
-#define TFPORT_GET(x) (((x) & TFPORT_MASK) >> TFPORT_SHIFT)
-#define TFDROP 0x00020000U
-#define TFSOPEOPERR 0x00010000U
-#define TFLENGTH_MASK 0x00001f00U
-#define TFLENGTH_SHIFT 8
-#define TFLENGTH(x) ((x) << TFLENGTH_SHIFT)
-#define TFLENGTH_GET(x) (((x) & TFLENGTH_MASK) >> TFLENGTH_SHIFT)
-#define TFOFFSET_MASK 0x0000001fU
-#define TFOFFSET_SHIFT 0
-#define TFOFFSET(x) ((x) << TFOFFSET_SHIFT)
-#define TFOFFSET_GET(x) (((x) & TFOFFSET_MASK) >> TFOFFSET_SHIFT)
-
-#define MPS_TRC_FILTER_MATCH_CTL_B 0x9820
-#define TFMINPKTSIZE_MASK 0x01ff0000U
-#define TFMINPKTSIZE_SHIFT 16
-#define TFMINPKTSIZE(x) ((x) << TFMINPKTSIZE_SHIFT)
-#define TFMINPKTSIZE_GET(x) (((x) & TFMINPKTSIZE_MASK) >> TFMINPKTSIZE_SHIFT)
-#define TFCAPTUREMAX_MASK 0x00003fffU
-#define TFCAPTUREMAX_SHIFT 0
-#define TFCAPTUREMAX(x) ((x) << TFCAPTUREMAX_SHIFT)
-#define TFCAPTUREMAX_GET(x) (((x) & TFCAPTUREMAX_MASK) >> TFCAPTUREMAX_SHIFT)
-
-#define MPS_TRC_INT_CAUSE 0x985c
-#define MISCPERR 0x00000100U
-#define PKTFIFO 0x000000f0U
-#define FILTMEM 0x0000000fU
-
-#define MPS_TRC_FILTER0_MATCH 0x9c00
-#define MPS_TRC_FILTER0_DONT_CARE 0x9c80
-#define MPS_TRC_FILTER1_MATCH 0x9d00
-#define MPS_CLS_INT_CAUSE 0xd028
-#define PLERRENB 0x00000008U
-#define HASHSRAM 0x00000004U
-#define MATCHTCAM 0x00000002U
-#define MATCHSRAM 0x00000001U
-
-#define MPS_RX_PERR_INT_CAUSE 0x11074
-
-#define CPL_INTR_CAUSE 0x19054
-#define CIM_OP_MAP_PERR 0x00000020U
-#define CIM_OVFL_ERROR 0x00000010U
-#define TP_FRAMING_ERROR 0x00000008U
-#define SGE_FRAMING_ERROR 0x00000004U
-#define CIM_FRAMING_ERROR 0x00000002U
-#define ZERO_SWITCH_ERROR 0x00000001U
-
-#define SMB_INT_CAUSE 0x19090
-#define MSTTXFIFOPARINT 0x00200000U
-#define MSTRXFIFOPARINT 0x00100000U
-#define SLVFIFOPARINT 0x00080000U
-
-#define ULP_RX_INT_CAUSE 0x19158
-#define ULP_RX_ISCSI_TAGMASK 0x19164
-#define ULP_RX_ISCSI_PSZ 0x19168
-#define HPZ3_MASK 0x0f000000U
-#define HPZ3_SHIFT 24
-#define HPZ3(x) ((x) << HPZ3_SHIFT)
-#define HPZ2_MASK 0x000f0000U
-#define HPZ2_SHIFT 16
-#define HPZ2(x) ((x) << HPZ2_SHIFT)
-#define HPZ1_MASK 0x00000f00U
-#define HPZ1_SHIFT 8
-#define HPZ1(x) ((x) << HPZ1_SHIFT)
-#define HPZ0_MASK 0x0000000fU
-#define HPZ0_SHIFT 0
-#define HPZ0(x) ((x) << HPZ0_SHIFT)
-
-#define ULP_RX_TDDP_PSZ 0x19178
-
-#define SF_DATA 0x193f8
-#define SF_OP 0x193fc
-#define SF_BUSY 0x80000000U
-#define SF_LOCK 0x00000010U
-#define SF_CONT 0x00000008U
-#define BYTECNT_MASK 0x00000006U
-#define BYTECNT_SHIFT 1
-#define BYTECNT(x) ((x) << BYTECNT_SHIFT)
-#define OP_WR 0x00000001U
-
-#define PL_PF_INT_CAUSE 0x3c0
-#define PFSW 0x00000008U
-#define PFSGE 0x00000004U
-#define PFCIM 0x00000002U
-#define PFMPS 0x00000001U
-
-#define PL_PF_INT_ENABLE 0x3c4
-#define PL_PF_CTL 0x3c8
-#define SWINT 0x00000001U
-
-#define PL_WHOAMI 0x19400
-#define SOURCEPF_MASK 0x00000700U
-#define SOURCEPF_SHIFT 8
-#define SOURCEPF(x) ((x) << SOURCEPF_SHIFT)
-#define SOURCEPF_GET(x) (((x) & SOURCEPF_MASK) >> SOURCEPF_SHIFT)
-#define ISVF 0x00000080U
-#define VFID_MASK 0x0000007fU
-#define VFID_SHIFT 0
-#define VFID(x) ((x) << VFID_SHIFT)
-#define VFID_GET(x) (((x) & VFID_MASK) >> VFID_SHIFT)
-
-#define PL_INT_CAUSE 0x1940c
-#define ULP_TX 0x08000000U
-#define SGE 0x04000000U
-#define HMA 0x02000000U
-#define CPL_SWITCH 0x01000000U
-#define ULP_RX 0x00800000U
-#define PM_RX 0x00400000U
-#define PM_TX 0x00200000U
-#define MA 0x00100000U
-#define TP 0x00080000U
-#define LE 0x00040000U
-#define EDC1 0x00020000U
-#define EDC0 0x00010000U
-#define MC 0x00008000U
-#define PCIE 0x00004000U
-#define PMU 0x00002000U
-#define XGMAC_KR1 0x00001000U
-#define XGMAC_KR0 0x00000800U
-#define XGMAC1 0x00000400U
-#define XGMAC0 0x00000200U
-#define SMB 0x00000100U
-#define SF 0x00000080U
-#define PL 0x00000040U
-#define NCSI 0x00000020U
-#define MPS 0x00000010U
-#define MI 0x00000008U
-#define DBG 0x00000004U
-#define I2CM 0x00000002U
-#define CIM 0x00000001U
-
-#define MC1 0x31
-#define PL_INT_ENABLE 0x19410
-#define PL_INT_MAP0 0x19414
-#define PL_RST 0x19428
-#define PIORST 0x00000002U
-#define PIORSTMODE 0x00000001U
-
-#define PL_PL_INT_CAUSE 0x19430
-#define FATALPERR 0x00000010U
-#define PERRVFID 0x00000001U
-
-#define PL_REV 0x1943c
-
-#define S_REV 0
-#define M_REV 0xfU
-#define V_REV(x) ((x) << S_REV)
-#define G_REV(x) (((x) >> S_REV) & M_REV)
-
-#define LE_DB_CONFIG 0x19c04
-#define HASHEN 0x00100000U
-
-#define LE_DB_SERVER_INDEX 0x19c18
-#define LE_DB_ACT_CNT_IPV4 0x19c20
-#define LE_DB_ACT_CNT_IPV6 0x19c24
-
-#define LE_DB_INT_CAUSE 0x19c3c
-#define REQQPARERR 0x00010000U
-#define UNKNOWNCMD 0x00008000U
-#define PARITYERR 0x00000040U
-#define LIPMISS 0x00000020U
-#define LIP0 0x00000010U
-
-#define LE_DB_TID_HASHBASE 0x19df8
-
-#define NCSI_INT_CAUSE 0x1a0d8
-#define CIM_DM_PRTY_ERR 0x00000100U
-#define MPS_DM_PRTY_ERR 0x00000080U
-#define TXFIFO_PRTY_ERR 0x00000002U
-#define RXFIFO_PRTY_ERR 0x00000001U
-
-#define XGMAC_PORT_CFG2 0x1018
-#define PATEN 0x00040000U
-#define MAGICEN 0x00020000U
-#define XGMAC_PORT_MAGIC_MACID_LO 0x1024
-#define XGMAC_PORT_MAGIC_MACID_HI 0x1028
+#define MPS_TRC_CFG_A 0x9800
+
+#define TRCFIFOEMPTY_S 4
+#define TRCFIFOEMPTY_V(x) ((x) << TRCFIFOEMPTY_S)
+#define TRCFIFOEMPTY_F TRCFIFOEMPTY_V(1U)
+
+#define TRCIGNOREDROPINPUT_S 3
+#define TRCIGNOREDROPINPUT_V(x) ((x) << TRCIGNOREDROPINPUT_S)
+#define TRCIGNOREDROPINPUT_F TRCIGNOREDROPINPUT_V(1U)
+
+#define TRCKEEPDUPLICATES_S 2
+#define TRCKEEPDUPLICATES_V(x) ((x) << TRCKEEPDUPLICATES_S)
+#define TRCKEEPDUPLICATES_F TRCKEEPDUPLICATES_V(1U)
+
+#define TRCEN_S 1
+#define TRCEN_V(x) ((x) << TRCEN_S)
+#define TRCEN_F TRCEN_V(1U)
+
+#define TRCMULTIFILTER_S 0
+#define TRCMULTIFILTER_V(x) ((x) << TRCMULTIFILTER_S)
+#define TRCMULTIFILTER_F TRCMULTIFILTER_V(1U)
+
+#define MPS_TRC_RSS_CONTROL_A 0x9808
+#define MPS_T5_TRC_RSS_CONTROL_A 0xa00c
+
+#define RSSCONTROL_S 16
+#define RSSCONTROL_V(x) ((x) << RSSCONTROL_S)
+
+#define QUEUENUMBER_S 0
+#define QUEUENUMBER_V(x) ((x) << QUEUENUMBER_S)
+
+#define TP_RSS_CONFIG_A 0x7df0
+
+#define TNL4TUPENIPV6_S 31
+#define TNL4TUPENIPV6_V(x) ((x) << TNL4TUPENIPV6_S)
+#define TNL4TUPENIPV6_F TNL4TUPENIPV6_V(1U)
+
+#define TNL2TUPENIPV6_S 30
+#define TNL2TUPENIPV6_V(x) ((x) << TNL2TUPENIPV6_S)
+#define TNL2TUPENIPV6_F TNL2TUPENIPV6_V(1U)
+
+#define TNL4TUPENIPV4_S 29
+#define TNL4TUPENIPV4_V(x) ((x) << TNL4TUPENIPV4_S)
+#define TNL4TUPENIPV4_F TNL4TUPENIPV4_V(1U)
+
+#define TNL2TUPENIPV4_S 28
+#define TNL2TUPENIPV4_V(x) ((x) << TNL2TUPENIPV4_S)
+#define TNL2TUPENIPV4_F TNL2TUPENIPV4_V(1U)
+
+#define TNLTCPSEL_S 27
+#define TNLTCPSEL_V(x) ((x) << TNLTCPSEL_S)
+#define TNLTCPSEL_F TNLTCPSEL_V(1U)
+
+#define TNLIP6SEL_S 26
+#define TNLIP6SEL_V(x) ((x) << TNLIP6SEL_S)
+#define TNLIP6SEL_F TNLIP6SEL_V(1U)
+
+#define TNLVRTSEL_S 25
+#define TNLVRTSEL_V(x) ((x) << TNLVRTSEL_S)
+#define TNLVRTSEL_F TNLVRTSEL_V(1U)
+
+#define TNLMAPEN_S 24
+#define TNLMAPEN_V(x) ((x) << TNLMAPEN_S)
+#define TNLMAPEN_F TNLMAPEN_V(1U)
+
+#define OFDHASHSAVE_S 19
+#define OFDHASHSAVE_V(x) ((x) << OFDHASHSAVE_S)
+#define OFDHASHSAVE_F OFDHASHSAVE_V(1U)
+
+#define OFDVRTSEL_S 18
+#define OFDVRTSEL_V(x) ((x) << OFDVRTSEL_S)
+#define OFDVRTSEL_F OFDVRTSEL_V(1U)
+
+#define OFDMAPEN_S 17
+#define OFDMAPEN_V(x) ((x) << OFDMAPEN_S)
+#define OFDMAPEN_F OFDMAPEN_V(1U)
+
+#define OFDLKPEN_S 16
+#define OFDLKPEN_V(x) ((x) << OFDLKPEN_S)
+#define OFDLKPEN_F OFDLKPEN_V(1U)
+
+#define SYN4TUPENIPV6_S 15
+#define SYN4TUPENIPV6_V(x) ((x) << SYN4TUPENIPV6_S)
+#define SYN4TUPENIPV6_F SYN4TUPENIPV6_V(1U)
+
+#define SYN2TUPENIPV6_S 14
+#define SYN2TUPENIPV6_V(x) ((x) << SYN2TUPENIPV6_S)
+#define SYN2TUPENIPV6_F SYN2TUPENIPV6_V(1U)
+
+#define SYN4TUPENIPV4_S 13
+#define SYN4TUPENIPV4_V(x) ((x) << SYN4TUPENIPV4_S)
+#define SYN4TUPENIPV4_F SYN4TUPENIPV4_V(1U)
+
+#define SYN2TUPENIPV4_S 12
+#define SYN2TUPENIPV4_V(x) ((x) << SYN2TUPENIPV4_S)
+#define SYN2TUPENIPV4_F SYN2TUPENIPV4_V(1U)
+
+#define SYNIP6SEL_S 11
+#define SYNIP6SEL_V(x) ((x) << SYNIP6SEL_S)
+#define SYNIP6SEL_F SYNIP6SEL_V(1U)
+
+#define SYNVRTSEL_S 10
+#define SYNVRTSEL_V(x) ((x) << SYNVRTSEL_S)
+#define SYNVRTSEL_F SYNVRTSEL_V(1U)
-#define XGMAC_PORT_EPIO_DATA0 0x10c0
-#define XGMAC_PORT_EPIO_DATA1 0x10c4
-#define XGMAC_PORT_EPIO_DATA2 0x10c8
-#define XGMAC_PORT_EPIO_DATA3 0x10cc
-#define XGMAC_PORT_EPIO_OP 0x10d0
-#define EPIOWR 0x00000100U
-#define ADDRESS_MASK 0x000000ffU
-#define ADDRESS_SHIFT 0
-#define ADDRESS(x) ((x) << ADDRESS_SHIFT)
+#define SYNMAPEN_S 9
+#define SYNMAPEN_V(x) ((x) << SYNMAPEN_S)
+#define SYNMAPEN_F SYNMAPEN_V(1U)
-#define MAC_PORT_INT_CAUSE 0x8dc
-#define XGMAC_PORT_INT_CAUSE 0x10dc
+#define SYNLKPEN_S 8
+#define SYNLKPEN_V(x) ((x) << SYNLKPEN_S)
+#define SYNLKPEN_F SYNLKPEN_V(1U)
-#define A_TP_TX_MOD_QUEUE_REQ_MAP 0x7e28
+#define CHANNELENABLE_S 7
+#define CHANNELENABLE_V(x) ((x) << CHANNELENABLE_S)
+#define CHANNELENABLE_F CHANNELENABLE_V(1U)
-#define A_TP_TX_MOD_CHANNEL_WEIGHT 0x7e34
+#define PORTENABLE_S 6
+#define PORTENABLE_V(x) ((x) << PORTENABLE_S)
+#define PORTENABLE_F PORTENABLE_V(1U)
-#define S_TX_MOD_QUEUE_REQ_MAP 0
-#define M_TX_MOD_QUEUE_REQ_MAP 0xffffU
-#define V_TX_MOD_QUEUE_REQ_MAP(x) ((x) << S_TX_MOD_QUEUE_REQ_MAP)
+#define TNLALLLOOKUP_S 5
+#define TNLALLLOOKUP_V(x) ((x) << TNLALLLOOKUP_S)
+#define TNLALLLOOKUP_F TNLALLLOOKUP_V(1U)
-#define A_TP_TX_MOD_QUEUE_WEIGHT0 0x7e30
+#define VIRTENABLE_S 4
+#define VIRTENABLE_V(x) ((x) << VIRTENABLE_S)
+#define VIRTENABLE_F VIRTENABLE_V(1U)
-#define S_TX_MODQ_WEIGHT3 24
-#define M_TX_MODQ_WEIGHT3 0xffU
-#define V_TX_MODQ_WEIGHT3(x) ((x) << S_TX_MODQ_WEIGHT3)
+#define CONGESTIONENABLE_S 3
+#define CONGESTIONENABLE_V(x) ((x) << CONGESTIONENABLE_S)
+#define CONGESTIONENABLE_F CONGESTIONENABLE_V(1U)
-#define S_TX_MODQ_WEIGHT2 16
-#define M_TX_MODQ_WEIGHT2 0xffU
-#define V_TX_MODQ_WEIGHT2(x) ((x) << S_TX_MODQ_WEIGHT2)
+#define HASHTOEPLITZ_S 2
+#define HASHTOEPLITZ_V(x) ((x) << HASHTOEPLITZ_S)
+#define HASHTOEPLITZ_F HASHTOEPLITZ_V(1U)
-#define S_TX_MODQ_WEIGHT1 8
-#define M_TX_MODQ_WEIGHT1 0xffU
-#define V_TX_MODQ_WEIGHT1(x) ((x) << S_TX_MODQ_WEIGHT1)
+#define UDPENABLE_S 1
+#define UDPENABLE_V(x) ((x) << UDPENABLE_S)
+#define UDPENABLE_F UDPENABLE_V(1U)
-#define S_TX_MODQ_WEIGHT0 0
-#define M_TX_MODQ_WEIGHT0 0xffU
-#define V_TX_MODQ_WEIGHT0(x) ((x) << S_TX_MODQ_WEIGHT0)
+#define DISABLE_S 0
+#define DISABLE_V(x) ((x) << DISABLE_S)
+#define DISABLE_F DISABLE_V(1U)
-#define A_TP_TX_SCHED_HDR 0x23
+#define TP_RSS_CONFIG_TNL_A 0x7df4
+
+#define MASKSIZE_S 28
+#define MASKSIZE_M 0xfU
+#define MASKSIZE_V(x) ((x) << MASKSIZE_S)
+#define MASKSIZE_G(x) (((x) >> MASKSIZE_S) & MASKSIZE_M)
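All of the converted register field macros in this patch follow one naming scheme: FOO_S is the field's bit shift, FOO_M its unshifted mask, FOO_V(x) places a value into the field, FOO_G(x) extracts it from a register word, and FOO_F is the pre-shifted single-bit flag. A minimal sketch of the pattern in use, taking the MASKSIZE field above as the example and assuming the driver's usual t4_read_reg()/t4_write_reg() MMIO accessors (both appear later in this patch):

/* Hedged sketch: update one field with the _V()/_M forms and read it
 * back with the _G() form. MASKSIZE stands in for any multi-bit field.
 */
static void example_set_masksize(struct adapter *adap, u32 sz)
{
	u32 val = t4_read_reg(adap, TP_RSS_CONFIG_TNL_A);

	val &= ~MASKSIZE_V(MASKSIZE_M);		/* clear the old value */
	val |= MASKSIZE_V(sz & MASKSIZE_M);	/* insert the new one */
	t4_write_reg(adap, TP_RSS_CONFIG_TNL_A, val);

	/* decode with the _G() form; should match what we wrote */
	WARN_ON(MASKSIZE_G(t4_read_reg(adap, TP_RSS_CONFIG_TNL_A)) !=
		(sz & MASKSIZE_M));
}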
-#define A_TP_TX_SCHED_FIFO 0x24
+#define MASKFILTER_S 16
+#define MASKFILTER_M 0x7ffU
+#define MASKFILTER_V(x) ((x) << MASKFILTER_S)
+#define MASKFILTER_G(x) (((x) >> MASKFILTER_S) & MASKFILTER_M)
-#define A_TP_TX_SCHED_PCMD 0x25
+#define USEWIRECH_S 0
+#define USEWIRECH_V(x) ((x) << USEWIRECH_S)
+#define USEWIRECH_F USEWIRECH_V(1U)
-#define S_VNIC 11
-#define V_VNIC(x) ((x) << S_VNIC)
-#define F_VNIC V_VNIC(1U)
+#define HASHALL_S 2
+#define HASHALL_V(x) ((x) << HASHALL_S)
+#define HASHALL_F HASHALL_V(1U)
-#define S_FRAGMENTATION 9
-#define V_FRAGMENTATION(x) ((x) << S_FRAGMENTATION)
-#define F_FRAGMENTATION V_FRAGMENTATION(1U)
+#define HASHETH_S 1
+#define HASHETH_V(x) ((x) << HASHETH_S)
+#define HASHETH_F HASHETH_V(1U)
-#define S_MPSHITTYPE 8
-#define V_MPSHITTYPE(x) ((x) << S_MPSHITTYPE)
-#define F_MPSHITTYPE V_MPSHITTYPE(1U)
+#define TP_RSS_CONFIG_OFD_A 0x7df8
+
+#define RRCPLMAPEN_S 20
+#define RRCPLMAPEN_V(x) ((x) << RRCPLMAPEN_S)
+#define RRCPLMAPEN_F RRCPLMAPEN_V(1U)
+
+#define RRCPLQUEWIDTH_S 16
+#define RRCPLQUEWIDTH_M 0xfU
+#define RRCPLQUEWIDTH_V(x) ((x) << RRCPLQUEWIDTH_S)
+#define RRCPLQUEWIDTH_G(x) (((x) >> RRCPLQUEWIDTH_S) & RRCPLQUEWIDTH_M)
-#define S_MACMATCH 7
-#define V_MACMATCH(x) ((x) << S_MACMATCH)
-#define F_MACMATCH V_MACMATCH(1U)
+#define TP_RSS_CONFIG_SYN_A 0x7dfc
+#define TP_RSS_CONFIG_VRT_A 0x7e00
+
+#define VFRDRG_S 25
+#define VFRDRG_V(x) ((x) << VFRDRG_S)
+#define VFRDRG_F VFRDRG_V(1U)
+
+#define VFRDEN_S 24
+#define VFRDEN_V(x) ((x) << VFRDEN_S)
+#define VFRDEN_F VFRDEN_V(1U)
+
+#define VFPERREN_S 23
+#define VFPERREN_V(x) ((x) << VFPERREN_S)
+#define VFPERREN_F VFPERREN_V(1U)
+
+#define KEYPERREN_S 22
+#define KEYPERREN_V(x) ((x) << KEYPERREN_S)
+#define KEYPERREN_F KEYPERREN_V(1U)
+
+#define DISABLEVLAN_S 21
+#define DISABLEVLAN_V(x) ((x) << DISABLEVLAN_S)
+#define DISABLEVLAN_F DISABLEVLAN_V(1U)
+
+#define ENABLEUP0_S 20
+#define ENABLEUP0_V(x) ((x) << ENABLEUP0_S)
+#define ENABLEUP0_F ENABLEUP0_V(1U)
+
+#define HASHDELAY_S 16
+#define HASHDELAY_M 0xfU
+#define HASHDELAY_V(x) ((x) << HASHDELAY_S)
+#define HASHDELAY_G(x) (((x) >> HASHDELAY_S) & HASHDELAY_M)
-#define S_ETHERTYPE 6
-#define V_ETHERTYPE(x) ((x) << S_ETHERTYPE)
-#define F_ETHERTYPE V_ETHERTYPE(1U)
+#define VFWRADDR_S 8
+#define VFWRADDR_M 0x7fU
+#define VFWRADDR_V(x) ((x) << VFWRADDR_S)
+#define VFWRADDR_G(x) (((x) >> VFWRADDR_S) & VFWRADDR_M)
-#define S_PROTOCOL 5
-#define V_PROTOCOL(x) ((x) << S_PROTOCOL)
-#define F_PROTOCOL V_PROTOCOL(1U)
+#define KEYMODE_S 6
+#define KEYMODE_M 0x3U
+#define KEYMODE_V(x) ((x) << KEYMODE_S)
+#define KEYMODE_G(x) (((x) >> KEYMODE_S) & KEYMODE_M)
+
+#define VFWREN_S 5
+#define VFWREN_V(x) ((x) << VFWREN_S)
+#define VFWREN_F VFWREN_V(1U)
+
+#define KEYWREN_S 4
+#define KEYWREN_V(x) ((x) << KEYWREN_S)
+#define KEYWREN_F KEYWREN_V(1U)
+
+#define KEYWRADDR_S 0
+#define KEYWRADDR_M 0xfU
+#define KEYWRADDR_V(x) ((x) << KEYWRADDR_S)
+#define KEYWRADDR_G(x) (((x) >> KEYWRADDR_S) & KEYWRADDR_M)
+
+#define KEYWRADDRX_S 30
+#define KEYWRADDRX_M 0x3U
+#define KEYWRADDRX_V(x) ((x) << KEYWRADDRX_S)
+#define KEYWRADDRX_G(x) (((x) >> KEYWRADDRX_S) & KEYWRADDRX_M)
+
+#define KEYEXTEND_S 26
+#define KEYEXTEND_V(x) ((x) << KEYEXTEND_S)
+#define KEYEXTEND_F KEYEXTEND_V(1U)
+
+#define LKPIDXSIZE_S 24
+#define LKPIDXSIZE_M 0x3U
+#define LKPIDXSIZE_V(x) ((x) << LKPIDXSIZE_S)
+#define LKPIDXSIZE_G(x) (((x) >> LKPIDXSIZE_S) & LKPIDXSIZE_M)
+
+#define TP_RSS_VFL_CONFIG_A 0x3a
+#define TP_RSS_VFH_CONFIG_A 0x3b
+
+#define ENABLEUDPHASH_S 31
+#define ENABLEUDPHASH_V(x) ((x) << ENABLEUDPHASH_S)
+#define ENABLEUDPHASH_F ENABLEUDPHASH_V(1U)
-#define S_TOS 4
-#define V_TOS(x) ((x) << S_TOS)
-#define F_TOS V_TOS(1U)
+#define VFUPEN_S 30
+#define VFUPEN_V(x) ((x) << VFUPEN_S)
+#define VFUPEN_F VFUPEN_V(1U)
-#define S_VLAN 3
-#define V_VLAN(x) ((x) << S_VLAN)
-#define F_VLAN V_VLAN(1U)
+#define VFVLNEX_S 28
+#define VFVLNEX_V(x) ((x) << VFVLNEX_S)
+#define VFVLNEX_F VFVLNEX_V(1U)
-#define S_VNIC_ID 2
-#define V_VNIC_ID(x) ((x) << S_VNIC_ID)
-#define F_VNIC_ID V_VNIC_ID(1U)
+#define VFPRTEN_S 27
+#define VFPRTEN_V(x) ((x) << VFPRTEN_S)
+#define VFPRTEN_F VFPRTEN_V(1U)
-#define S_PORT 1
-#define V_PORT(x) ((x) << S_PORT)
-#define F_PORT V_PORT(1U)
+#define VFCHNEN_S 26
+#define VFCHNEN_V(x) ((x) << VFCHNEN_S)
+#define VFCHNEN_F VFCHNEN_V(1U)
+
+#define DEFAULTQUEUE_S 16
+#define DEFAULTQUEUE_M 0x3ffU
+#define DEFAULTQUEUE_G(x) (((x) >> DEFAULTQUEUE_S) & DEFAULTQUEUE_M)
+
+#define VFIP6TWOTUPEN_S 6
+#define VFIP6TWOTUPEN_V(x) ((x) << VFIP6TWOTUPEN_S)
+#define VFIP6TWOTUPEN_F VFIP6TWOTUPEN_V(1U)
-#define S_FCOE 0
-#define V_FCOE(x) ((x) << S_FCOE)
-#define F_FCOE V_FCOE(1U)
+#define VFIP4FOURTUPEN_S 5
+#define VFIP4FOURTUPEN_V(x) ((x) << VFIP4FOURTUPEN_S)
+#define VFIP4FOURTUPEN_F VFIP4FOURTUPEN_V(1U)
+
+#define VFIP4TWOTUPEN_S 4
+#define VFIP4TWOTUPEN_V(x) ((x) << VFIP4TWOTUPEN_S)
+#define VFIP4TWOTUPEN_F VFIP4TWOTUPEN_V(1U)
+
+#define KEYINDEX_S 0
+#define KEYINDEX_M 0xfU
+#define KEYINDEX_G(x) (((x) >> KEYINDEX_S) & KEYINDEX_M)
+
+#define MAPENABLE_S 31
+#define MAPENABLE_V(x) ((x) << MAPENABLE_S)
+#define MAPENABLE_F MAPENABLE_V(1U)
+
+#define CHNENABLE_S 30
+#define CHNENABLE_V(x) ((x) << CHNENABLE_S)
+#define CHNENABLE_F CHNENABLE_V(1U)
+
+#define PRTENABLE_S 29
+#define PRTENABLE_V(x) ((x) << PRTENABLE_S)
+#define PRTENABLE_F PRTENABLE_V(1U)
+
+#define UDPFOURTUPEN_S 28
+#define UDPFOURTUPEN_V(x) ((x) << UDPFOURTUPEN_S)
+#define UDPFOURTUPEN_F UDPFOURTUPEN_V(1U)
+
+#define IP6FOURTUPEN_S 27
+#define IP6FOURTUPEN_V(x) ((x) << IP6FOURTUPEN_S)
+#define IP6FOURTUPEN_F IP6FOURTUPEN_V(1U)
+
+#define IP6TWOTUPEN_S 26
+#define IP6TWOTUPEN_V(x) ((x) << IP6TWOTUPEN_S)
+#define IP6TWOTUPEN_F IP6TWOTUPEN_V(1U)
+
+#define IP4FOURTUPEN_S 25
+#define IP4FOURTUPEN_V(x) ((x) << IP4FOURTUPEN_S)
+#define IP4FOURTUPEN_F IP4FOURTUPEN_V(1U)
+
+#define IP4TWOTUPEN_S 24
+#define IP4TWOTUPEN_V(x) ((x) << IP4TWOTUPEN_S)
+#define IP4TWOTUPEN_F IP4TWOTUPEN_V(1U)
+
+#define IVFWIDTH_S 20
+#define IVFWIDTH_M 0xfU
+#define IVFWIDTH_V(x) ((x) << IVFWIDTH_S)
+#define IVFWIDTH_G(x) (((x) >> IVFWIDTH_S) & IVFWIDTH_M)
+
+#define CH1DEFAULTQUEUE_S 10
+#define CH1DEFAULTQUEUE_M 0x3ffU
+#define CH1DEFAULTQUEUE_V(x) ((x) << CH1DEFAULTQUEUE_S)
+#define CH1DEFAULTQUEUE_G(x) (((x) >> CH1DEFAULTQUEUE_S) & CH1DEFAULTQUEUE_M)
+
+#define CH0DEFAULTQUEUE_S 0
+#define CH0DEFAULTQUEUE_M 0x3ffU
+#define CH0DEFAULTQUEUE_V(x) ((x) << CH0DEFAULTQUEUE_S)
+#define CH0DEFAULTQUEUE_G(x) (((x) >> CH0DEFAULTQUEUE_S) & CH0DEFAULTQUEUE_M)
+
+#define VFLKPIDX_S 8
+#define VFLKPIDX_M 0xffU
+#define VFLKPIDX_G(x) (((x) >> VFLKPIDX_S) & VFLKPIDX_M)
+
+#define TP_RSS_CONFIG_CNG_A 0x7e04
+#define TP_RSS_SECRET_KEY0_A 0x40
+#define TP_RSS_PF0_CONFIG_A 0x30
+#define TP_RSS_PF_MAP_A 0x38
+#define TP_RSS_PF_MSK_A 0x39
+
+#define PF1LKPIDX_S 3
+
+#define PF0LKPIDX_M 0x7U
+
+#define PF1MSKSIZE_S 4
+#define PF1MSKSIZE_M 0xfU
+
+#define CHNCOUNT3_S 31
+#define CHNCOUNT3_V(x) ((x) << CHNCOUNT3_S)
+#define CHNCOUNT3_F CHNCOUNT3_V(1U)
+
+#define CHNCOUNT2_S 30
+#define CHNCOUNT2_V(x) ((x) << CHNCOUNT2_S)
+#define CHNCOUNT2_F CHNCOUNT2_V(1U)
+
+#define CHNCOUNT1_S 29
+#define CHNCOUNT1_V(x) ((x) << CHNCOUNT1_S)
+#define CHNCOUNT1_F CHNCOUNT1_V(1U)
+
+#define CHNCOUNT0_S 28
+#define CHNCOUNT0_V(x) ((x) << CHNCOUNT0_S)
+#define CHNCOUNT0_F CHNCOUNT0_V(1U)
+
+#define CHNUNDFLOW3_S 27
+#define CHNUNDFLOW3_V(x) ((x) << CHNUNDFLOW3_S)
+#define CHNUNDFLOW3_F CHNUNDFLOW3_V(1U)
+
+#define CHNUNDFLOW2_S 26
+#define CHNUNDFLOW2_V(x) ((x) << CHNUNDFLOW2_S)
+#define CHNUNDFLOW2_F CHNUNDFLOW2_V(1U)
+
+#define CHNUNDFLOW1_S 25
+#define CHNUNDFLOW1_V(x) ((x) << CHNUNDFLOW1_S)
+#define CHNUNDFLOW1_F CHNUNDFLOW1_V(1U)
+
+#define CHNUNDFLOW0_S 24
+#define CHNUNDFLOW0_V(x) ((x) << CHNUNDFLOW0_S)
+#define CHNUNDFLOW0_F CHNUNDFLOW0_V(1U)
+
+#define RSTCHN3_S 19
+#define RSTCHN3_V(x) ((x) << RSTCHN3_S)
+#define RSTCHN3_F RSTCHN3_V(1U)
+
+#define RSTCHN2_S 18
+#define RSTCHN2_V(x) ((x) << RSTCHN2_S)
+#define RSTCHN2_F RSTCHN2_V(1U)
+
+#define RSTCHN1_S 17
+#define RSTCHN1_V(x) ((x) << RSTCHN1_S)
+#define RSTCHN1_F RSTCHN1_V(1U)
+
+#define RSTCHN0_S 16
+#define RSTCHN0_V(x) ((x) << RSTCHN0_S)
+#define RSTCHN0_F RSTCHN0_V(1U)
+
+#define UPDVLD_S 15
+#define UPDVLD_V(x) ((x) << UPDVLD_S)
+#define UPDVLD_F UPDVLD_V(1U)
+
+#define XOFF_S 14
+#define XOFF_V(x) ((x) << XOFF_S)
+#define XOFF_F XOFF_V(1U)
+
+#define UPDCHN3_S 13
+#define UPDCHN3_V(x) ((x) << UPDCHN3_S)
+#define UPDCHN3_F UPDCHN3_V(1U)
+
+#define UPDCHN2_S 12
+#define UPDCHN2_V(x) ((x) << UPDCHN2_S)
+#define UPDCHN2_F UPDCHN2_V(1U)
+
+#define UPDCHN1_S 11
+#define UPDCHN1_V(x) ((x) << UPDCHN1_S)
+#define UPDCHN1_F UPDCHN1_V(1U)
+
+#define UPDCHN0_S 10
+#define UPDCHN0_V(x) ((x) << UPDCHN0_S)
+#define UPDCHN0_F UPDCHN0_V(1U)
+
+#define QUEUE_S 0
+#define QUEUE_M 0x3ffU
+#define QUEUE_V(x) ((x) << QUEUE_S)
+#define QUEUE_G(x) (((x) >> QUEUE_S) & QUEUE_M)
+
+#define MPS_TRC_INT_CAUSE_A 0x985c
+
+#define MISCPERR_S 8
+#define MISCPERR_V(x) ((x) << MISCPERR_S)
+#define MISCPERR_F MISCPERR_V(1U)
+
+#define PKTFIFO_S 4
+#define PKTFIFO_M 0xfU
+#define PKTFIFO_V(x) ((x) << PKTFIFO_S)
+
+#define FILTMEM_S 0
+#define FILTMEM_M 0xfU
+#define FILTMEM_V(x) ((x) << FILTMEM_S)
+
+#define MPS_CLS_INT_CAUSE_A 0xd028
+
+#define HASHSRAM_S 2
+#define HASHSRAM_V(x) ((x) << HASHSRAM_S)
+#define HASHSRAM_F HASHSRAM_V(1U)
+
+#define MATCHTCAM_S 1
+#define MATCHTCAM_V(x) ((x) << MATCHTCAM_S)
+#define MATCHTCAM_F MATCHTCAM_V(1U)
+
+#define MATCHSRAM_S 0
+#define MATCHSRAM_V(x) ((x) << MATCHSRAM_S)
+#define MATCHSRAM_F MATCHSRAM_V(1U)
+
+#define MPS_RX_PERR_INT_CAUSE_A 0x11074
+
+#define MPS_CLS_TCAM_Y_L_A 0xf000
+#define MPS_CLS_TCAM_X_L_A 0xf008
+
+#define MPS_CLS_TCAM_Y_L(idx) (MPS_CLS_TCAM_Y_L_A + (idx) * 16)
+#define NUM_MPS_CLS_TCAM_Y_L_INSTANCES 512
+
+#define MPS_CLS_TCAM_X_L(idx) (MPS_CLS_TCAM_X_L_A + (idx) * 16)
+#define NUM_MPS_CLS_TCAM_X_L_INSTANCES 512
+
+#define MPS_CLS_SRAM_L_A 0xe000
+#define MPS_CLS_SRAM_H_A 0xe004
+
+#define MPS_CLS_SRAM_L(idx) (MPS_CLS_SRAM_L_A + (idx) * 8)
+#define NUM_MPS_CLS_SRAM_L_INSTANCES 336
+
+#define MPS_CLS_SRAM_H(idx) (MPS_CLS_SRAM_H_A + (idx) * 8)
+#define NUM_MPS_CLS_SRAM_H_INSTANCES 336
+
+#define MULTILISTEN0_S 25
+
+#define REPLICATE_S 11
+#define REPLICATE_V(x) ((x) << REPLICATE_S)
+#define REPLICATE_F REPLICATE_V(1U)
+
+#define PF_S 8
+#define PF_M 0x7U
+#define PF_G(x) (((x) >> PF_S) & PF_M)
+
+#define VF_VALID_S 7
+#define VF_VALID_V(x) ((x) << VF_VALID_S)
+#define VF_VALID_F VF_VALID_V(1U)
+
+#define VF_S 0
+#define VF_M 0x7fU
+#define VF_G(x) (((x) >> VF_S) & VF_M)
+
+#define SRAM_PRIO3_S 22
+#define SRAM_PRIO3_M 0x7U
+#define SRAM_PRIO3_G(x) (((x) >> SRAM_PRIO3_S) & SRAM_PRIO3_M)
+
+#define SRAM_PRIO2_S 19
+#define SRAM_PRIO2_M 0x7U
+#define SRAM_PRIO2_G(x) (((x) >> SRAM_PRIO2_S) & SRAM_PRIO2_M)
+
+#define SRAM_PRIO1_S 16
+#define SRAM_PRIO1_M 0x7U
+#define SRAM_PRIO1_G(x) (((x) >> SRAM_PRIO1_S) & SRAM_PRIO1_M)
+
+#define SRAM_PRIO0_S 13
+#define SRAM_PRIO0_M 0x7U
+#define SRAM_PRIO0_G(x) (((x) >> SRAM_PRIO0_S) & SRAM_PRIO0_M)
+
+#define SRAM_VLD_S 12
+#define SRAM_VLD_V(x) ((x) << SRAM_VLD_S)
+#define SRAM_VLD_F SRAM_VLD_V(1U)
+
+#define PORTMAP_S 0
+#define PORTMAP_M 0xfU
+#define PORTMAP_G(x) (((x) >> PORTMAP_S) & PORTMAP_M)
+
+#define CPL_INTR_CAUSE_A 0x19054
+
+#define CIM_OP_MAP_PERR_S 5
+#define CIM_OP_MAP_PERR_V(x) ((x) << CIM_OP_MAP_PERR_S)
+#define CIM_OP_MAP_PERR_F CIM_OP_MAP_PERR_V(1U)
+
+#define CIM_OVFL_ERROR_S 4
+#define CIM_OVFL_ERROR_V(x) ((x) << CIM_OVFL_ERROR_S)
+#define CIM_OVFL_ERROR_F CIM_OVFL_ERROR_V(1U)
+
+#define TP_FRAMING_ERROR_S 3
+#define TP_FRAMING_ERROR_V(x) ((x) << TP_FRAMING_ERROR_S)
+#define TP_FRAMING_ERROR_F TP_FRAMING_ERROR_V(1U)
+
+#define SGE_FRAMING_ERROR_S 2
+#define SGE_FRAMING_ERROR_V(x) ((x) << SGE_FRAMING_ERROR_S)
+#define SGE_FRAMING_ERROR_F SGE_FRAMING_ERROR_V(1U)
+
+#define CIM_FRAMING_ERROR_S 1
+#define CIM_FRAMING_ERROR_V(x) ((x) << CIM_FRAMING_ERROR_S)
+#define CIM_FRAMING_ERROR_F CIM_FRAMING_ERROR_V(1U)
+
+#define ZERO_SWITCH_ERROR_S 0
+#define ZERO_SWITCH_ERROR_V(x) ((x) << ZERO_SWITCH_ERROR_S)
+#define ZERO_SWITCH_ERROR_F ZERO_SWITCH_ERROR_V(1U)
+
+#define SMB_INT_CAUSE_A 0x19090
+
+#define MSTTXFIFOPARINT_S 21
+#define MSTTXFIFOPARINT_V(x) ((x) << MSTTXFIFOPARINT_S)
+#define MSTTXFIFOPARINT_F MSTTXFIFOPARINT_V(1U)
+
+#define MSTRXFIFOPARINT_S 20
+#define MSTRXFIFOPARINT_V(x) ((x) << MSTRXFIFOPARINT_S)
+#define MSTRXFIFOPARINT_F MSTRXFIFOPARINT_V(1U)
+
+#define SLVFIFOPARINT_S 19
+#define SLVFIFOPARINT_V(x) ((x) << SLVFIFOPARINT_S)
+#define SLVFIFOPARINT_F SLVFIFOPARINT_V(1U)
+
+#define ULP_RX_INT_CAUSE_A 0x19158
+#define ULP_RX_ISCSI_TAGMASK_A 0x19164
+#define ULP_RX_ISCSI_PSZ_A 0x19168
+
+#define HPZ3_S 24
+#define HPZ3_V(x) ((x) << HPZ3_S)
+
+#define HPZ2_S 16
+#define HPZ2_V(x) ((x) << HPZ2_S)
+
+#define HPZ1_S 8
+#define HPZ1_V(x) ((x) << HPZ1_S)
+
+#define HPZ0_S 0
+#define HPZ0_V(x) ((x) << HPZ0_S)
+
+#define ULP_RX_TDDP_PSZ_A 0x19178
+
+/* registers for module SF */
+#define SF_DATA_A 0x193f8
+#define SF_OP_A 0x193fc
+
+#define SF_BUSY_S 31
+#define SF_BUSY_V(x) ((x) << SF_BUSY_S)
+#define SF_BUSY_F SF_BUSY_V(1U)
+
+#define SF_LOCK_S 4
+#define SF_LOCK_V(x) ((x) << SF_LOCK_S)
+#define SF_LOCK_F SF_LOCK_V(1U)
+
+#define SF_CONT_S 3
+#define SF_CONT_V(x) ((x) << SF_CONT_S)
+#define SF_CONT_F SF_CONT_V(1U)
+
+#define BYTECNT_S 1
+#define BYTECNT_V(x) ((x) << BYTECNT_S)
+
+#define OP_S 0
+#define OP_V(x) ((x) << OP_S)
+#define OP_F OP_V(1U)
+
+#define PL_PF_INT_CAUSE_A 0x3c0
+
+#define PFSW_S 3
+#define PFSW_V(x) ((x) << PFSW_S)
+#define PFSW_F PFSW_V(1U)
+
+#define PFCIM_S 1
+#define PFCIM_V(x) ((x) << PFCIM_S)
+#define PFCIM_F PFCIM_V(1U)
+
+#define PL_PF_INT_ENABLE_A 0x3c4
+#define PL_PF_CTL_A 0x3c8
+
+#define PL_WHOAMI_A 0x19400
+
+#define SOURCEPF_S 8
+#define SOURCEPF_M 0x7U
+#define SOURCEPF_G(x) (((x) >> SOURCEPF_S) & SOURCEPF_M)
+
+#define PL_INT_CAUSE_A 0x1940c
+
+#define ULP_TX_S 27
+#define ULP_TX_V(x) ((x) << ULP_TX_S)
+#define ULP_TX_F ULP_TX_V(1U)
+
+#define SGE_S 26
+#define SGE_V(x) ((x) << SGE_S)
+#define SGE_F SGE_V(1U)
+
+#define CPL_SWITCH_S 24
+#define CPL_SWITCH_V(x) ((x) << CPL_SWITCH_S)
+#define CPL_SWITCH_F CPL_SWITCH_V(1U)
+
+#define ULP_RX_S 23
+#define ULP_RX_V(x) ((x) << ULP_RX_S)
+#define ULP_RX_F ULP_RX_V(1U)
+
+#define PM_RX_S 22
+#define PM_RX_V(x) ((x) << PM_RX_S)
+#define PM_RX_F PM_RX_V(1U)
+
+#define PM_TX_S 21
+#define PM_TX_V(x) ((x) << PM_TX_S)
+#define PM_TX_F PM_TX_V(1U)
+
+#define MA_S 20
+#define MA_V(x) ((x) << MA_S)
+#define MA_F MA_V(1U)
+
+#define TP_S 19
+#define TP_V(x) ((x) << TP_S)
+#define TP_F TP_V(1U)
+
+#define LE_S 18
+#define LE_V(x) ((x) << LE_S)
+#define LE_F LE_V(1U)
+
+#define EDC1_S 17
+#define EDC1_V(x) ((x) << EDC1_S)
+#define EDC1_F EDC1_V(1U)
+
+#define EDC0_S 16
+#define EDC0_V(x) ((x) << EDC0_S)
+#define EDC0_F EDC0_V(1U)
+
+#define MC_S 15
+#define MC_V(x) ((x) << MC_S)
+#define MC_F MC_V(1U)
+
+#define PCIE_S 14
+#define PCIE_V(x) ((x) << PCIE_S)
+#define PCIE_F PCIE_V(1U)
+
+#define XGMAC_KR1_S 12
+#define XGMAC_KR1_V(x) ((x) << XGMAC_KR1_S)
+#define XGMAC_KR1_F XGMAC_KR1_V(1U)
+
+#define XGMAC_KR0_S 11
+#define XGMAC_KR0_V(x) ((x) << XGMAC_KR0_S)
+#define XGMAC_KR0_F XGMAC_KR0_V(1U)
+
+#define XGMAC1_S 10
+#define XGMAC1_V(x) ((x) << XGMAC1_S)
+#define XGMAC1_F XGMAC1_V(1U)
+
+#define XGMAC0_S 9
+#define XGMAC0_V(x) ((x) << XGMAC0_S)
+#define XGMAC0_F XGMAC0_V(1U)
+
+#define SMB_S 8
+#define SMB_V(x) ((x) << SMB_S)
+#define SMB_F SMB_V(1U)
+
+#define SF_S 7
+#define SF_V(x) ((x) << SF_S)
+#define SF_F SF_V(1U)
+
+#define PL_S 6
+#define PL_V(x) ((x) << PL_S)
+#define PL_F PL_V(1U)
+
+#define NCSI_S 5
+#define NCSI_V(x) ((x) << NCSI_S)
+#define NCSI_F NCSI_V(1U)
+
+#define MPS_S 4
+#define MPS_V(x) ((x) << MPS_S)
+#define MPS_F MPS_V(1U)
+
+#define CIM_S 0
+#define CIM_V(x) ((x) << CIM_S)
+#define CIM_F CIM_V(1U)
+
+#define MC1_S 31
+
+#define PL_INT_ENABLE_A 0x19410
+#define PL_INT_MAP0_A 0x19414
+#define PL_RST_A 0x19428
+
+#define PIORST_S 1
+#define PIORST_V(x) ((x) << PIORST_S)
+#define PIORST_F PIORST_V(1U)
+
+#define PIORSTMODE_S 0
+#define PIORSTMODE_V(x) ((x) << PIORSTMODE_S)
+#define PIORSTMODE_F PIORSTMODE_V(1U)
+
+#define PL_PL_INT_CAUSE_A 0x19430
+
+#define FATALPERR_S 4
+#define FATALPERR_V(x) ((x) << FATALPERR_S)
+#define FATALPERR_F FATALPERR_V(1U)
+
+#define PERRVFID_S 0
+#define PERRVFID_V(x) ((x) << PERRVFID_S)
+#define PERRVFID_F PERRVFID_V(1U)
+
+#define PL_REV_A 0x1943c
+
+#define REV_S 0
+#define REV_M 0xfU
+#define REV_V(x) ((x) << REV_S)
+#define REV_G(x) (((x) >> REV_S) & REV_M)
+
+#define LE_DB_INT_CAUSE_A 0x19c3c
+
+#define REQQPARERR_S 16
+#define REQQPARERR_V(x) ((x) << REQQPARERR_S)
+#define REQQPARERR_F REQQPARERR_V(1U)
+
+#define UNKNOWNCMD_S 15
+#define UNKNOWNCMD_V(x) ((x) << UNKNOWNCMD_S)
+#define UNKNOWNCMD_F UNKNOWNCMD_V(1U)
+
+#define PARITYERR_S 6
+#define PARITYERR_V(x) ((x) << PARITYERR_S)
+#define PARITYERR_F PARITYERR_V(1U)
+
+#define LIPMISS_S 5
+#define LIPMISS_V(x) ((x) << LIPMISS_S)
+#define LIPMISS_F LIPMISS_V(1U)
+
+#define LIP0_S 4
+#define LIP0_V(x) ((x) << LIP0_S)
+#define LIP0_F LIP0_V(1U)
+
+#define NCSI_INT_CAUSE_A 0x1a0d8
+
+#define CIM_DM_PRTY_ERR_S 8
+#define CIM_DM_PRTY_ERR_V(x) ((x) << CIM_DM_PRTY_ERR_S)
+#define CIM_DM_PRTY_ERR_F CIM_DM_PRTY_ERR_V(1U)
+
+#define MPS_DM_PRTY_ERR_S 7
+#define MPS_DM_PRTY_ERR_V(x) ((x) << MPS_DM_PRTY_ERR_S)
+#define MPS_DM_PRTY_ERR_F MPS_DM_PRTY_ERR_V(1U)
+
+#define TXFIFO_PRTY_ERR_S 1
+#define TXFIFO_PRTY_ERR_V(x) ((x) << TXFIFO_PRTY_ERR_S)
+#define TXFIFO_PRTY_ERR_F TXFIFO_PRTY_ERR_V(1U)
+
+#define RXFIFO_PRTY_ERR_S 0
+#define RXFIFO_PRTY_ERR_V(x) ((x) << RXFIFO_PRTY_ERR_S)
+#define RXFIFO_PRTY_ERR_F RXFIFO_PRTY_ERR_V(1U)
+
+#define XGMAC_PORT_CFG2_A 0x1018
+
+#define PATEN_S 18
+#define PATEN_V(x) ((x) << PATEN_S)
+#define PATEN_F PATEN_V(1U)
+
+#define MAGICEN_S 17
+#define MAGICEN_V(x) ((x) << MAGICEN_S)
+#define MAGICEN_F MAGICEN_V(1U)
+
+#define XGMAC_PORT_MAGIC_MACID_LO 0x1024
+#define XGMAC_PORT_MAGIC_MACID_HI 0x1028
+
+#define XGMAC_PORT_EPIO_DATA0_A 0x10c0
+#define XGMAC_PORT_EPIO_DATA1_A 0x10c4
+#define XGMAC_PORT_EPIO_DATA2_A 0x10c8
+#define XGMAC_PORT_EPIO_DATA3_A 0x10cc
+#define XGMAC_PORT_EPIO_OP_A 0x10d0
+
+#define EPIOWR_S 8
+#define EPIOWR_V(x) ((x) << EPIOWR_S)
+#define EPIOWR_F EPIOWR_V(1U)
+
+#define ADDRESS_S 0
+#define ADDRESS_V(x) ((x) << ADDRESS_S)
+
+#define MAC_PORT_INT_CAUSE_A 0x8dc
+#define XGMAC_PORT_INT_CAUSE_A 0x10dc
+
+#define TP_TX_MOD_QUEUE_REQ_MAP_A 0x7e28
+
+#define TP_TX_MOD_QUEUE_WEIGHT0_A 0x7e30
+#define TP_TX_MOD_CHANNEL_WEIGHT_A 0x7e34
+
+#define TX_MOD_QUEUE_REQ_MAP_S 0
+#define TX_MOD_QUEUE_REQ_MAP_V(x) ((x) << TX_MOD_QUEUE_REQ_MAP_S)
+
+#define TX_MODQ_WEIGHT3_S 24
+#define TX_MODQ_WEIGHT3_V(x) ((x) << TX_MODQ_WEIGHT3_S)
+
+#define TX_MODQ_WEIGHT2_S 16
+#define TX_MODQ_WEIGHT2_V(x) ((x) << TX_MODQ_WEIGHT2_S)
+
+#define TX_MODQ_WEIGHT1_S 8
+#define TX_MODQ_WEIGHT1_V(x) ((x) << TX_MODQ_WEIGHT1_S)
+
+#define TX_MODQ_WEIGHT0_S 0
+#define TX_MODQ_WEIGHT0_V(x) ((x) << TX_MODQ_WEIGHT0_S)
+
+#define TP_TX_SCHED_HDR_A 0x23
+#define TP_TX_SCHED_FIFO_A 0x24
+#define TP_TX_SCHED_PCMD_A 0x25
#define NUM_MPS_CLS_SRAM_L_INSTANCES 336
#define NUM_MPS_T5_CLS_SRAM_L_INSTANCES 512
#define MC_STRIDE (MC_1_BASE_ADDR - MC_0_BASE_ADDR)
#define MC_REG(reg, idx) (reg + MC_STRIDE * idx)
-#define MC_P_BIST_CMD 0x41400
-#define MC_P_BIST_CMD_ADDR 0x41404
-#define MC_P_BIST_CMD_LEN 0x41408
-#define MC_P_BIST_DATA_PATTERN 0x4140c
-#define MC_P_BIST_STATUS_RDATA 0x41488
-#define EDC_T50_BASE_ADDR 0x50000
-#define EDC_H_BIST_CMD 0x50004
-#define EDC_H_BIST_CMD_ADDR 0x50008
-#define EDC_H_BIST_CMD_LEN 0x5000c
-#define EDC_H_BIST_DATA_PATTERN 0x50010
-#define EDC_H_BIST_STATUS_RDATA 0x50028
-
-#define EDC_T51_BASE_ADDR 0x50800
+#define MC_P_BIST_CMD_A 0x41400
+#define MC_P_BIST_CMD_ADDR_A 0x41404
+#define MC_P_BIST_CMD_LEN_A 0x41408
+#define MC_P_BIST_DATA_PATTERN_A 0x4140c
+#define MC_P_BIST_STATUS_RDATA_A 0x41488
+
+#define EDC_T50_BASE_ADDR 0x50000
+
+#define EDC_H_BIST_CMD_A 0x50004
+#define EDC_H_BIST_CMD_ADDR_A 0x50008
+#define EDC_H_BIST_CMD_LEN_A 0x5000c
+#define EDC_H_BIST_DATA_PATTERN_A 0x50010
+#define EDC_H_BIST_STATUS_RDATA_A 0x50028
+
+#define EDC_T51_BASE_ADDR 0x50800
+
#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
-#define A_PL_VF_REV 0x4
-#define A_PL_VF_WHOAMI 0x0
-#define A_PL_VF_REVISION 0x8
+#define PL_VF_REV_A 0x4
+#define PL_VF_WHOAMI_A 0x0
+#define PL_VF_REVISION_A 0x8
-#define S_CHIPID 4
-#define M_CHIPID 0xfU
-#define V_CHIPID(x) ((x) << S_CHIPID)
-#define G_CHIPID(x) (((x) >> S_CHIPID) & M_CHIPID)
+/* registers for module CIM */
+#define CIM_HOST_ACC_CTRL_A 0x7b50
+#define CIM_HOST_ACC_DATA_A 0x7b54
+#define UP_UP_DBG_LA_CFG_A 0x140
+#define UP_UP_DBG_LA_DATA_A 0x144
-/* TP_VLAN_PRI_MAP controls which subset of fields will be present in the
- * Compressed Filter Tuple for LE filters. Each bit set in TP_VLAN_PRI_MAP
- * selects for a particular field being present. These fields, when present
- * in the Compressed Filter Tuple, have the following widths in bits.
- */
-#define W_FT_FCOE 1
-#define W_FT_PORT 3
-#define W_FT_VNIC_ID 17
-#define W_FT_VLAN 17
-#define W_FT_TOS 8
-#define W_FT_PROTOCOL 8
-#define W_FT_ETHERTYPE 16
-#define W_FT_MACMATCH 9
-#define W_FT_MPSHITTYPE 3
-#define W_FT_FRAGMENTATION 1
-
-/* Some of the Compressed Filter Tuple fields have internal structure. These
- * bit shifts/masks describe those structures. All shifts are relative to the
- * base position of the fields within the Compressed Filter Tuple
- */
-#define S_FT_VLAN_VLD 16
-#define V_FT_VLAN_VLD(x) ((x) << S_FT_VLAN_VLD)
-#define F_FT_VLAN_VLD V_FT_VLAN_VLD(1U)
+#define HOSTBUSY_S 17
+#define HOSTBUSY_V(x) ((x) << HOSTBUSY_S)
+#define HOSTBUSY_F HOSTBUSY_V(1U)
+
+#define HOSTWRITE_S 16
+#define HOSTWRITE_V(x) ((x) << HOSTWRITE_S)
+#define HOSTWRITE_F HOSTWRITE_V(1U)
+
+#define UPDBGLARDEN_S 1
+#define UPDBGLARDEN_V(x) ((x) << UPDBGLARDEN_S)
+#define UPDBGLARDEN_F UPDBGLARDEN_V(1U)
+
+#define UPDBGLAEN_S 0
+#define UPDBGLAEN_V(x) ((x) << UPDBGLAEN_S)
+#define UPDBGLAEN_F UPDBGLAEN_V(1U)
+
+#define UPDBGLARDPTR_S 2
+#define UPDBGLARDPTR_M 0xfffU
+#define UPDBGLARDPTR_V(x) ((x) << UPDBGLARDPTR_S)
+
+#define UPDBGLAWRPTR_S 16
+#define UPDBGLAWRPTR_M 0xfffU
+#define UPDBGLAWRPTR_G(x) (((x) >> UPDBGLAWRPTR_S) & UPDBGLAWRPTR_M)
+
+#define UPDBGLACAPTPCONLY_S 30
+#define UPDBGLACAPTPCONLY_V(x) ((x) << UPDBGLACAPTPCONLY_S)
+#define UPDBGLACAPTPCONLY_F UPDBGLACAPTPCONLY_V(1U)
+
+#define CIM_QUEUE_CONFIG_REF_A 0x7b48
+#define CIM_QUEUE_CONFIG_CTRL_A 0x7b4c
+
+#define CIMQSIZE_S 24
+#define CIMQSIZE_M 0x3fU
+#define CIMQSIZE_G(x) (((x) >> CIMQSIZE_S) & CIMQSIZE_M)
+
+#define CIMQBASE_S 16
+#define CIMQBASE_M 0x3fU
+#define CIMQBASE_G(x) (((x) >> CIMQBASE_S) & CIMQBASE_M)
+
+#define QUEFULLTHRSH_S 0
+#define QUEFULLTHRSH_M 0x1ffU
+#define QUEFULLTHRSH_G(x) (((x) >> QUEFULLTHRSH_S) & QUEFULLTHRSH_M)
+
+#define UP_IBQ_0_RDADDR_A 0x10
+#define UP_IBQ_0_SHADOW_RDADDR_A 0x280
+#define UP_OBQ_0_REALADDR_A 0x104
+#define UP_OBQ_0_SHADOW_REALADDR_A 0x394
+
+#define IBQRDADDR_S 0
+#define IBQRDADDR_M 0x1fffU
+#define IBQRDADDR_G(x) (((x) >> IBQRDADDR_S) & IBQRDADDR_M)
+
+#define IBQWRADDR_S 0
+#define IBQWRADDR_M 0x1fffU
+#define IBQWRADDR_G(x) (((x) >> IBQWRADDR_S) & IBQWRADDR_M)
+
+#define QUERDADDR_S 0
+#define QUERDADDR_M 0x7fffU
+#define QUERDADDR_G(x) (((x) >> QUERDADDR_S) & QUERDADDR_M)
+
+#define QUEREMFLITS_S 0
+#define QUEREMFLITS_M 0x7ffU
+#define QUEREMFLITS_G(x) (((x) >> QUEREMFLITS_S) & QUEREMFLITS_M)
+
+#define QUEEOPCNT_S 16
+#define QUEEOPCNT_M 0xfffU
+#define QUEEOPCNT_G(x) (((x) >> QUEEOPCNT_S) & QUEEOPCNT_M)
+
+#define QUESOPCNT_S 0
+#define QUESOPCNT_M 0xfffU
+#define QUESOPCNT_G(x) (((x) >> QUESOPCNT_S) & QUESOPCNT_M)
-#define S_FT_VNID_ID_VF 0
-#define V_FT_VNID_ID_VF(x) ((x) << S_FT_VNID_ID_VF)
+#define OBQSELECT_S 4
+#define OBQSELECT_V(x) ((x) << OBQSELECT_S)
+#define OBQSELECT_F OBQSELECT_V(1U)
-#define S_FT_VNID_ID_PF 7
-#define V_FT_VNID_ID_PF(x) ((x) << S_FT_VNID_ID_PF)
+#define IBQSELECT_S 3
+#define IBQSELECT_V(x) ((x) << IBQSELECT_S)
+#define IBQSELECT_F IBQSELECT_V(1U)
-#define S_FT_VNID_ID_VLD 16
-#define V_FT_VNID_ID_VLD(x) ((x) << S_FT_VNID_ID_VLD)
+#define QUENUMSELECT_S 0
+#define QUENUMSELECT_V(x) ((x) << QUENUMSELECT_S)
#endif /* __T4_REGS_H */
--- /dev/null
+/*
+ * This file is part of the Chelsio T4 Ethernet driver for Linux.
+ *
+ * Copyright (c) 2003-2014 Chelsio Communications, Inc. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifndef __T4_VALUES_H__
+#define __T4_VALUES_H__
+
+/* This file contains definitions for various T4 register value hardware
+ * constants. The types of values encoded here are predominantly those for
+ * register fields which control "modal" behavior. For the most part, we do
+ * not include definitions for register fields which are simple numeric
+ * metrics, etc.
+ */
+
+/* SGE register field values.
+ */
+
+/* CONTROL1 register */
+#define RXPKTCPLMODE_SPLIT_X 1
+
+#define INGPCIEBOUNDARY_SHIFT_X 5
+#define INGPCIEBOUNDARY_32B_X 0
+
+#define INGPADBOUNDARY_SHIFT_X 5
+
+/* CONTROL2 register */
+#define INGPACKBOUNDARY_SHIFT_X 5
+#define INGPACKBOUNDARY_16B_X 0
+
+/* GTS register */
+#define SGE_TIMERREGS 6
+
+/* T5 and later support a new BAR2-based doorbell mechanism for Egress Queues.
+ * The User Doorbells are each 128 bytes in length with a Simple Doorbell at
+ * offsets 8x and a Write Combining single 64-byte Egress Queue Unit
+ * (IDXSIZE_UNIT_X) Gather Buffer interface at offset 64. For Ingress Queues,
+ * we have a Going To Sleep register at offsets 8x+4.
+ *
+ * As noted above, we have many instances of the Simple Doorbell and Going To
+ * Sleep registers at offsets 8x and 8x+4, respectively. We want to use a
+ * non-64-byte aligned offset for the Simple Doorbell in order to attempt to
+ * avoid buffering of the writes to the Simple Doorbell and we want to use a
+ * non-contiguous offset for the Going To Sleep writes in order to avoid
+ * possible combining between them.
+ */
+#define SGE_UDB_SIZE 128
+#define SGE_UDB_KDOORBELL 8
+#define SGE_UDB_GTS 20
+#define SGE_UDB_WCDOORBELL 64
+
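As a concrete illustration of the layout described above: once a queue's 128-byte User Doorbell region is mapped, each register is reached by adding the SGE_UDB_* offset to that base. A hedged sketch of ringing the Simple Doorbell, reusing the bar2_addr/bar2_qid fields and the QID_V()/PIDX_T5_V() field macros that the queue code later in this patch already uses:

/* Hedged sketch: ring a queue's Simple Doorbell through its mapped
 * User Doorbell region. bar2_addr is the ioremap()ed base of this
 * queue's 128-byte UDB; bar2_qid is the Queue ID relative to it.
 */
static void udb_ring_doorbell(void __iomem *bar2_addr,
			      unsigned int bar2_qid, unsigned int n)
{
	writel(QID_V(bar2_qid) | PIDX_T5_V(n),
	       bar2_addr + SGE_UDB_KDOORBELL);
	wmb();	/* commit the doorbell write before continuing */
}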
+/* PCI-E definitions */
+#define WINDOW_SHIFT_X 10
+#define PCIEOFST_SHIFT_X 10
+
+/* TP_VLAN_PRI_MAP controls which subset of fields will be present in the
+ * Compressed Filter Tuple for LE filters. Each bit set in TP_VLAN_PRI_MAP
+ * selects for a particular field being present. These fields, when present
+ * in the Compressed Filter Tuple, have the following widths in bits.
+ */
+#define FT_FCOE_W 1
+#define FT_PORT_W 3
+#define FT_VNIC_ID_W 17
+#define FT_VLAN_W 17
+#define FT_TOS_W 8
+#define FT_PROTOCOL_W 8
+#define FT_ETHERTYPE_W 16
+#define FT_MACMATCH_W 9
+#define FT_MPSHITTYPE_W 3
+#define FT_FRAGMENTATION_W 1
+
+/* Some of the Compressed Filter Tuple fields have internal structure. These
+ * bit shifts/masks describe those structures. All shifts are relative to the
+ * base position of the fields within the Compressed Filter Tuple
+ */
+#define FT_VLAN_VLD_S 16
+#define FT_VLAN_VLD_V(x) ((x) << FT_VLAN_VLD_S)
+#define FT_VLAN_VLD_F FT_VLAN_VLD_V(1U)
+
+#define FT_VNID_ID_VF_S 0
+#define FT_VNID_ID_VF_V(x) ((x) << FT_VNID_ID_VF_S)
+
+#define FT_VNID_ID_PF_S 7
+#define FT_VNID_ID_PF_V(x) ((x) << FT_VNID_ID_PF_S)
+
+#define FT_VNID_ID_VLD_S 16
+#define FT_VNID_ID_VLD_V(x) ((x) << FT_VNID_ID_VLD_S)
+
+#endif /* __T4_VALUES_H__ */
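One way to read the width table above: a field's position inside the Compressed Filter Tuple is the sum of the widths of every lower-order field that TP_VLAN_PRI_MAP also selects. A hedged sketch of that computation, using the selector bit positions (FCOE at bit 0 up through FRAGMENTATION at bit 9) encoded by the old S_FCOE..S_FRAGMENTATION definitions removed earlier in this patch:

/* Hedged sketch: compute a filter field's shift within the Compressed
 * Filter Tuple. filter_mode is the TP_VLAN_PRI_MAP value; sel is the
 * single selector bit of the field of interest.
 */
static int ft_field_shift(unsigned int filter_mode, unsigned int sel)
{
	static const struct { unsigned int bit, width; } fields[] = {
		{ 1U << 0, FT_FCOE_W },
		{ 1U << 1, FT_PORT_W },
		{ 1U << 2, FT_VNIC_ID_W },
		{ 1U << 3, FT_VLAN_W },
		{ 1U << 4, FT_TOS_W },
		{ 1U << 5, FT_PROTOCOL_W },
		{ 1U << 6, FT_ETHERTYPE_W },
		{ 1U << 7, FT_MACMATCH_W },
		{ 1U << 8, FT_MPSHITTYPE_W },
		{ 1U << 9, FT_FRAGMENTATION_W },
	};
	int i, shift = 0;

	for (i = 0; i < ARRAY_SIZE(fields); i++) {
		if (fields[i].bit == sel)	/* reached our field */
			return (filter_mode & sel) ? shift : -1;
		if (filter_mode & fields[i].bit)
			shift += fields[i].width;
	}
	return -1;	/* unknown selector bit */
}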
FW_RSS_IND_TBL_CMD = 0x20,
FW_RSS_GLB_CONFIG_CMD = 0x22,
FW_RSS_VI_CONFIG_CMD = 0x23,
+ FW_DEVLOG_CMD = 0x25,
FW_CLIP_CMD = 0x28,
FW_LASTC2E_CMD = 0x40,
FW_ERROR_CMD = 0x80,
FW_PARAMS_PARAM_DEV_MAXORDIRD_QP = 0x13, /* max supported QP IRD/ORD */
FW_PARAMS_PARAM_DEV_MAXIRD_ADAPTER = 0x14, /* max supported adap IRD */
FW_PARAMS_PARAM_DEV_ULPTX_MEMWRITE_DSGL = 0x17,
+ FW_PARAMS_PARAM_DEV_FWCACHE = 0x18,
};
/*
FW_PARAMS_PARAM_DMAQ_EQ_DCBPRIO_ETH = 0x13,
};
+enum fw_params_param_dev_fwcache {
+ FW_PARAM_DEV_FWCACHE_FLUSH = 0x00,
+ FW_PARAM_DEV_FWCACHE_FLUSHINV = 0x01,
+};
+
#define FW_PARAMS_MNEM_S 24
#define FW_PARAMS_MNEM_V(x) ((x) << FW_PARAMS_MNEM_S)
FW_HDR_FLAGS_RESET_HALT = 0x00000001,
};
+/* length of the formatting string */
+#define FW_DEVLOG_FMT_LEN 192
+
+/* maximum number of the formatting string parameters */
+#define FW_DEVLOG_FMT_PARAMS_NUM 8
+
+/* priority levels */
+enum fw_devlog_level {
+ FW_DEVLOG_LEVEL_EMERG = 0x0,
+ FW_DEVLOG_LEVEL_CRIT = 0x1,
+ FW_DEVLOG_LEVEL_ERR = 0x2,
+ FW_DEVLOG_LEVEL_NOTICE = 0x3,
+ FW_DEVLOG_LEVEL_INFO = 0x4,
+ FW_DEVLOG_LEVEL_DEBUG = 0x5,
+ FW_DEVLOG_LEVEL_MAX = 0x5,
+};
+
+/* facilities that may send a log message */
+enum fw_devlog_facility {
+ FW_DEVLOG_FACILITY_CORE = 0x00,
+ FW_DEVLOG_FACILITY_CF = 0x01,
+ FW_DEVLOG_FACILITY_SCHED = 0x02,
+ FW_DEVLOG_FACILITY_TIMER = 0x04,
+ FW_DEVLOG_FACILITY_RES = 0x06,
+ FW_DEVLOG_FACILITY_HW = 0x08,
+ FW_DEVLOG_FACILITY_FLR = 0x10,
+ FW_DEVLOG_FACILITY_DMAQ = 0x12,
+ FW_DEVLOG_FACILITY_PHY = 0x14,
+ FW_DEVLOG_FACILITY_MAC = 0x16,
+ FW_DEVLOG_FACILITY_PORT = 0x18,
+ FW_DEVLOG_FACILITY_VI = 0x1A,
+ FW_DEVLOG_FACILITY_FILTER = 0x1C,
+ FW_DEVLOG_FACILITY_ACL = 0x1E,
+ FW_DEVLOG_FACILITY_TM = 0x20,
+ FW_DEVLOG_FACILITY_QFC = 0x22,
+ FW_DEVLOG_FACILITY_DCB = 0x24,
+ FW_DEVLOG_FACILITY_ETH = 0x26,
+ FW_DEVLOG_FACILITY_OFLD = 0x28,
+ FW_DEVLOG_FACILITY_RI = 0x2A,
+ FW_DEVLOG_FACILITY_ISCSI = 0x2C,
+ FW_DEVLOG_FACILITY_FCOE = 0x2E,
+ FW_DEVLOG_FACILITY_FOISCSI = 0x30,
+ FW_DEVLOG_FACILITY_FOFCOE = 0x32,
+ FW_DEVLOG_FACILITY_MAX = 0x32,
+};
+
+/* log message format */
+struct fw_devlog_e {
+ __be64 timestamp;
+ __be32 seqno;
+ __be16 reserved1;
+ __u8 level;
+ __u8 facility;
+ __u8 fmt[FW_DEVLOG_FMT_LEN];
+ __be32 params[FW_DEVLOG_FMT_PARAMS_NUM];
+ __be32 reserved3[4];
+};
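A hedged sketch of consuming one such entry: all multi-byte fields arrive big-endian per the struct above, and the format string takes up to FW_DEVLOG_FMT_PARAMS_NUM parameters, so a reader converts the parameters first and then hands fmt straight to the print routine:

/* Hedged sketch: decode and print one firmware device-log entry.
 * The fixed eight-parameter call mirrors how devlog readers format
 * the string (compilers may warn about the non-literal format).
 */
static void devlog_print_entry(const struct fw_devlog_e *e)
{
	u32 p[FW_DEVLOG_FMT_PARAMS_NUM];
	int i;

	for (i = 0; i < FW_DEVLOG_FMT_PARAMS_NUM; i++)
		p[i] = be32_to_cpu(e->params[i]);

	pr_info("%10llu  %u  level=%u facility=%u  ",
		(unsigned long long)be64_to_cpu(e->timestamp),
		be32_to_cpu(e->seqno), e->level, e->facility);
	pr_cont((const char *)e->fmt,
		p[0], p[1], p[2], p[3], p[4], p[5], p[6], p[7]);
}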
+
+struct fw_devlog_cmd {
+ __be32 op_to_write;
+ __be32 retval_len16;
+ __u8 level;
+ __u8 r2[7];
+ __be32 memtype_devlog_memaddr16_devlog;
+ __be32 memsize_devlog;
+ __be32 r3[2];
+};
+
+#define FW_DEVLOG_CMD_MEMTYPE_DEVLOG_S 28
+#define FW_DEVLOG_CMD_MEMTYPE_DEVLOG_M 0xf
+#define FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(x) \
+ (((x) >> FW_DEVLOG_CMD_MEMTYPE_DEVLOG_S) & \
+ FW_DEVLOG_CMD_MEMTYPE_DEVLOG_M)
+
+#define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S 0
+#define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M 0xfffffff
+#define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \
+ (((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \
+ FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M)
+
#endif /* _T4FW_INTERFACE_H_ */
* enable interrupts.
*/
t4_write_reg(rspq->adapter, T4VF_SGE_BASE_ADDR + SGE_VF_GTS,
- CIDXINC(0) |
- SEINTARM(rspq->intr_params) |
- INGRESSQID(rspq->cntxt_id));
+ CIDXINC_V(0) |
+ SEINTARM_V(rspq->intr_params) |
+ INGRESSQID_V(rspq->cntxt_id));
}
/*
*/
if (adapter->flags & USING_MSI)
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_GTS,
- CIDXINC(0) |
- SEINTARM(s->intrq.intr_params) |
- INGRESSQID(s->intrq.cntxt_id));
+ CIDXINC_V(0) |
+ SEINTARM_V(s->intrq.intr_params) |
+ INGRESSQID_V(s->intrq.cntxt_id));
}
/* FW can send EGR_UPDATEs encapsulated in a CPL_FW4_MSG.
*/
const struct cpl_sge_egr_update *p = (void *)(rsp + 3);
- opcode = G_CPL_OPCODE(ntohl(p->opcode_qid));
+ opcode = CPL_OPCODE_G(ntohl(p->opcode_qid));
if (opcode != CPL_SGE_EGR_UPDATE) {
dev_err(adapter->pdev_dev, "unexpected FW4/CPL %#x on FW event queue\n"
, opcode);
* free TX Queue Descriptors ...
*/
const struct cpl_sge_egr_update *p = cpl;
- unsigned int qid = EGR_QID(be32_to_cpu(p->opcode_qid));
+ unsigned int qid = EGR_QID_G(be32_to_cpu(p->opcode_qid));
struct sge *s = &adapter->sge;
struct sge_txq *tq;
struct sge_eth_txq *txq;
reg_block_dump(adapter, regbuf,
T4VF_PL_BASE_ADDR + T4VF_MOD_MAP_PL_FIRST,
T4VF_PL_BASE_ADDR + (is_t4(adapter->params.chip)
- ? A_PL_VF_WHOAMI : A_PL_VF_REVISION));
+ ? PL_VF_WHOAMI_A : PL_VF_REVISION_A));
reg_block_dump(adapter, regbuf,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_FIRST,
T4VF_CIM_BASE_ADDR + T4VF_MOD_MAP_CIM_LAST);
* threshold values from the SGE parameters.
*/
s->timer_val[0] = core_ticks_to_us(adapter,
- TIMERVALUE0_GET(sge_params->sge_timer_value_0_and_1));
+ TIMERVALUE0_G(sge_params->sge_timer_value_0_and_1));
s->timer_val[1] = core_ticks_to_us(adapter,
- TIMERVALUE1_GET(sge_params->sge_timer_value_0_and_1));
+ TIMERVALUE1_G(sge_params->sge_timer_value_0_and_1));
s->timer_val[2] = core_ticks_to_us(adapter,
- TIMERVALUE0_GET(sge_params->sge_timer_value_2_and_3));
+ TIMERVALUE0_G(sge_params->sge_timer_value_2_and_3));
s->timer_val[3] = core_ticks_to_us(adapter,
- TIMERVALUE1_GET(sge_params->sge_timer_value_2_and_3));
+ TIMERVALUE1_G(sge_params->sge_timer_value_2_and_3));
s->timer_val[4] = core_ticks_to_us(adapter,
- TIMERVALUE0_GET(sge_params->sge_timer_value_4_and_5));
+ TIMERVALUE0_G(sge_params->sge_timer_value_4_and_5));
s->timer_val[5] = core_ticks_to_us(adapter,
- TIMERVALUE1_GET(sge_params->sge_timer_value_4_and_5));
+ TIMERVALUE1_G(sge_params->sge_timer_value_4_and_5));
- s->counter_val[0] =
- THRESHOLD_0_GET(sge_params->sge_ingress_rx_threshold);
- s->counter_val[1] =
- THRESHOLD_1_GET(sge_params->sge_ingress_rx_threshold);
- s->counter_val[2] =
- THRESHOLD_2_GET(sge_params->sge_ingress_rx_threshold);
- s->counter_val[3] =
- THRESHOLD_3_GET(sge_params->sge_ingress_rx_threshold);
+ s->counter_val[0] = THRESHOLD_0_G(sge_params->sge_ingress_rx_threshold);
+ s->counter_val[1] = THRESHOLD_1_G(sge_params->sge_ingress_rx_threshold);
+ s->counter_val[2] = THRESHOLD_2_G(sge_params->sge_ingress_rx_threshold);
+ s->counter_val[3] = THRESHOLD_3_G(sge_params->sge_ingress_rx_threshold);
/*
* Grab our Virtual Interface resource allocation, extract the
*/
n10g = 0;
for_each_port(adapter, pidx)
- n10g += is_10g_port(&adap2pinfo(adapter, pidx)->link_cfg);
+ n10g += is_x_10g_port(&adap2pinfo(adapter, pidx)->link_cfg);
/*
* We default to 1 queue per non-10G port and up to # of cores queues
#include "t4vf_defs.h"
#include "../cxgb4/t4_regs.h"
+#include "../cxgb4/t4_values.h"
#include "../cxgb4/t4fw_api.h"
#include "../cxgb4/t4_msg.h"
*/
if (fl->pend_cred >= FL_PER_EQ_UNIT) {
if (is_t4(adapter->params.chip))
- val = PIDX(fl->pend_cred / FL_PER_EQ_UNIT);
+ val = PIDX_V(fl->pend_cred / FL_PER_EQ_UNIT);
else
- val = PIDX_T5(fl->pend_cred / FL_PER_EQ_UNIT) |
- DBTYPE(1);
- val |= DBPRIO(1);
+ val = PIDX_T5_V(fl->pend_cred / FL_PER_EQ_UNIT) |
+ DBTYPE_F;
+ val |= DBPRIO_F;
/* Make sure all memory writes to the Free List queue are
* committed before we tell the hardware about them.
if (unlikely(fl->bar2_addr == NULL)) {
t4_write_reg(adapter,
T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
- QID(fl->cntxt_id) | val);
+ QID_V(fl->cntxt_id) | val);
} else {
- writel(val | QID(fl->bar2_qid),
+ writel(val | QID_V(fl->bar2_qid),
fl->bar2_addr + SGE_UDB_KDOORBELL);
/* This Write memory Barrier will force the write to
}
sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) |
- ULPTX_NSGE(nfrags));
+ ULPTX_NSGE_V(nfrags));
if (likely(--nfrags == 0))
return;
/*
* doorbell mechanism; otherwise use the new BAR2 mechanism.
*/
if (unlikely(tq->bar2_addr == NULL)) {
- u32 val = PIDX(n);
+ u32 val = PIDX_V(n);
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_KDOORBELL,
- QID(tq->cntxt_id) | val);
+ QID_V(tq->cntxt_id) | val);
} else {
- u32 val = PIDX_T5(n);
+ u32 val = PIDX_T5_V(n);
/* T4 and later chips share the same PIDX field offset within
* the doorbell, but T5 and later shrank the field in order to
* large in the first place (14 bits) so we just use the T5
* and later limits and warn if a Queue ID is too large.
*/
- WARN_ON(val & DBPRIO(1));
+ WARN_ON(val & DBPRIO_F);
/* If we're only writing a single Egress Unit and the BAR2
* Queue ID is 0, we can use the Write Combining Doorbell
count--;
}
} else
- writel(val | QID(tq->bar2_qid),
+ writel(val | QID_V(tq->bar2_qid),
tq->bar2_addr + SGE_UDB_KDOORBELL);
/* This Write Memory Barrier will force the write to the User
* If there's a VLAN tag present, add that to the list of things to
* do in this Work Request.
*/
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
txq->vlan_ins++;
- cntrl |= TXPKT_VLAN_VLD | TXPKT_VLAN(vlan_tx_tag_get(skb));
+ cntrl |= TXPKT_VLAN_VLD | TXPKT_VLAN(skb_vlan_tag_get(skb));
}
/*
* If this is a good TCP packet and we have Generic Receive Offload
* enabled, handle the packet in the GRO path.
*/
- if ((pkt->l2info & cpu_to_be32(RXF_TCP)) &&
+ if ((pkt->l2info & cpu_to_be32(RXF_TCP_F)) &&
(rspq->netdev->features & NETIF_F_GRO) && csum_ok &&
!pkt->ip_frag) {
do_gro(rxq, gl, pkt);
rxq->stats.pkts++;
if (csum_ok && !pkt->err_vec &&
- (be32_to_cpu(pkt->l2info) & (RXF_UDP|RXF_TCP))) {
+ (be32_to_cpu(pkt->l2info) & (RXF_UDP_F | RXF_TCP_F))) {
if (!pkt->ip_frag)
skb->ip_summed = CHECKSUM_UNNECESSARY;
else {
if (unlikely(work_done == 0))
rspq->unhandled_irqs++;
- val = CIDXINC(work_done) | SEINTARM(intr_params);
+ val = CIDXINC_V(work_done) | SEINTARM_V(intr_params);
if (is_t4(rspq->adapter->params.chip)) {
t4_write_reg(rspq->adapter,
T4VF_SGE_BASE_ADDR + SGE_VF_GTS,
- val | INGRESSQID((u32)rspq->cntxt_id));
+ val | INGRESSQID_V((u32)rspq->cntxt_id));
} else {
- writel(val | INGRESSQID(rspq->bar2_qid),
+ writel(val | INGRESSQID_V(rspq->bar2_qid),
rspq->bar2_addr + SGE_UDB_GTS);
wmb();
}
rspq_next(intrq);
}
- val = CIDXINC(work_done) | SEINTARM(intrq->intr_params);
+ val = CIDXINC_V(work_done) | SEINTARM_V(intrq->intr_params);
if (is_t4(adapter->params.chip))
t4_write_reg(adapter, T4VF_SGE_BASE_ADDR + SGE_VF_GTS,
- val | INGRESSQID(intrq->cntxt_id));
+ val | INGRESSQID_V(intrq->cntxt_id));
else {
- writel(val | INGRESSQID(intrq->bar2_qid),
+ writel(val | INGRESSQID_V(intrq->bar2_qid),
intrq->bar2_addr + SGE_UDB_GTS);
wmb();
}
fl0, fl1);
return -EINVAL;
}
- if ((sge_params->sge_control & RXPKTCPLMODE_MASK) == 0) {
+ if ((sge_params->sge_control & RXPKTCPLMODE_F) == 0) {
dev_err(adapter->pdev_dev, "bad SGE CPL MODE\n");
return -EINVAL;
}
*/
if (fl1)
s->fl_pg_order = ilog2(fl1) - PAGE_SHIFT;
- s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_MASK)
+ s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_F)
? 128 : 64);
- s->pktshift = PKTSHIFT_GET(sge_params->sge_control);
+ s->pktshift = PKTSHIFT_G(sge_params->sge_control);
/* T4 uses a single control field to specify both the PCIe Padding and
* Packing Boundary. T5 introduced the ability to specify these
* end doing this because it would initialize the Padding Boundary and
* leave the Packing Boundary initialized to 0 (16 bytes).)
*/
- ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_params->sge_control) +
- X_INGPADBOUNDARY_SHIFT);
+ ingpadboundary = 1 << (INGPADBOUNDARY_G(sge_params->sge_control) +
+ INGPADBOUNDARY_SHIFT_X);
if (is_t4(adapter->params.chip)) {
s->fl_align = ingpadboundary;
} else {
* Congestion Threshold is in units of 2 Free List pointers.)
*/
s->fl_starve_thres
- = EGRTHRESHOLD_GET(sge_params->sge_congestion_control)*2 + 1;
+ = EGRTHRESHOLD_G(sge_params->sge_congestion_control)*2 + 1;
/*
* Set up tasklet timers.
* Mailbox Data in the fixed CIM PF map and the programmable VF map must
* match. However, it's a useful convention ...
*/
-#if T4VF_MBDATA_BASE_ADDR != CIM_PF_MAILBOX_DATA
-#error T4VF_MBDATA_BASE_ADDR must match CIM_PF_MAILBOX_DATA!
+#if T4VF_MBDATA_BASE_ADDR != CIM_PF_MAILBOX_DATA_A
+#error T4VF_MBDATA_BASE_ADDR must match CIM_PF_MAILBOX_DATA_A!
#endif
/*
#include "t4vf_defs.h"
#include "../cxgb4/t4_regs.h"
+#include "../cxgb4/t4_values.h"
#include "../cxgb4/t4fw_api.h"
/*
* Loop trying to get ownership of the mailbox. Return an error
* if we can't gain ownership.
*/
- v = MBOWNER_GET(t4_read_reg(adapter, mbox_ctl));
+ v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
- v = MBOWNER_GET(t4_read_reg(adapter, mbox_ctl));
+ v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
if (v != MBOX_OWNER_DRV)
return v == MBOX_OWNER_FW ? -EBUSY : -ETIMEDOUT;
t4_read_reg(adapter, mbox_data); /* flush write */
t4_write_reg(adapter, mbox_ctl,
- MBMSGVALID | MBOWNER(MBOX_OWNER_FW));
+ MBMSGVALID_F | MBOWNER_V(MBOX_OWNER_FW));
t4_read_reg(adapter, mbox_ctl); /* flush write */
/*
* If we're the owner, see if this is the reply we wanted.
*/
v = t4_read_reg(adapter, mbox_ctl);
- if (MBOWNER_GET(v) == MBOX_OWNER_DRV) {
+ if (MBOWNER_G(v) == MBOX_OWNER_DRV) {
/*
* If the Message Valid bit isn't on, revoke ownership
* of the mailbox and continue waiting for our reply.
*/
- if ((v & MBMSGVALID) == 0) {
+ if ((v & MBMSGVALID_F) == 0) {
t4_write_reg(adapter, mbox_ctl,
- MBOWNER(MBOX_OWNER_NONE));
+ MBOWNER_V(MBOX_OWNER_NONE));
continue;
}
& FW_CMD_REQUEST_F) != 0);
}
t4_write_reg(adapter, mbox_ctl,
- MBOWNER(MBOX_OWNER_NONE));
+ MBOWNER_V(MBOX_OWNER_NONE));
return -FW_CMD_RETVAL_G(v);
}
}
return v;
v = be32_to_cpu(port_rpl.u.info.lstatus_to_modtype);
+ pi->mdio_addr = (v & FW_PORT_CMD_MDIOCAP_F) ?
+ FW_PORT_CMD_MDIOADDR_G(v) : -1;
pi->port_type = FW_PORT_CMD_PTYPE_G(v);
pi->mod_type = FW_PORT_MOD_TYPE_NA;
int v;
params[0] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_CONTROL));
+ FW_PARAMS_PARAM_XYZ_V(SGE_CONTROL_A));
params[1] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_HOST_PAGE_SIZE));
+ FW_PARAMS_PARAM_XYZ_V(SGE_HOST_PAGE_SIZE_A));
params[2] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_FL_BUFFER_SIZE0));
+ FW_PARAMS_PARAM_XYZ_V(SGE_FL_BUFFER_SIZE0_A));
params[3] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_FL_BUFFER_SIZE1));
+ FW_PARAMS_PARAM_XYZ_V(SGE_FL_BUFFER_SIZE1_A));
params[4] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_0_AND_1));
+ FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_0_AND_1_A));
params[5] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_2_AND_3));
+ FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_2_AND_3_A));
params[6] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_4_AND_5));
+ FW_PARAMS_PARAM_XYZ_V(SGE_TIMER_VALUE_4_AND_5_A));
v = t4vf_query_params(adapter, 7, params, vals);
if (v)
return v;
}
params[0] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_INGRESS_RX_THRESHOLD));
+ FW_PARAMS_PARAM_XYZ_V(SGE_INGRESS_RX_THRESHOLD_A));
params[1] = (FW_PARAMS_MNEM_V(FW_PARAMS_MNEM_REG) |
- FW_PARAMS_PARAM_XYZ_V(SGE_CONM_CTRL));
+ FW_PARAMS_PARAM_XYZ_V(SGE_CONM_CTRL_A));
v = t4vf_query_params(adapter, 2, params, vals);
if (v)
return v;
* the driver can just use it.
*/
whoami = t4_read_reg(adapter,
- T4VF_PL_BASE_ADDR + A_PL_VF_WHOAMI);
- pf = SOURCEPF_GET(whoami);
+ T4VF_PL_BASE_ADDR + PL_VF_WHOAMI_A);
+ pf = SOURCEPF_G(whoami);
s_hps = (HOSTPAGESIZEPF0_S +
(HOSTPAGESIZEPF1_S - HOSTPAGESIZEPF0_S) * pf);
(QUEUESPERPAGEPF1_S - QUEUESPERPAGEPF0_S) * pf);
sge_params->sge_vf_eq_qpp =
((sge_params->sge_egress_queues_per_page >> s_qpp)
- & QUEUESPERPAGEPF0_MASK);
+ & QUEUESPERPAGEPF0_M);
sge_params->sge_vf_iq_qpp =
((sge_params->sge_ingress_queues_per_page >> s_qpp)
- & QUEUESPERPAGEPF0_MASK);
+ & QUEUESPERPAGEPF0_M);
}
return 0;
break;
case CHELSIO_T5:
- chipid = G_REV(t4_read_reg(adapter, A_PL_VF_REV));
+ chipid = REV_G(t4_read_reg(adapter, PL_VF_REV_A));
adapter->params.chip |= CHELSIO_CHIP_CODE(CHELSIO_T5, chipid);
break;
}
#define DRV_NAME "enic"
#define DRV_DESCRIPTION "Cisco VIC Ethernet NIC Driver"
-#define DRV_VERSION "2.1.1.67"
+#define DRV_VERSION "2.1.1.83"
#define DRV_COPYRIGHT "Copyright 2008-2013 Cisco Systems, Inc"
#define ENIC_BARS_MAX 6
#ifdef CONFIG_NET_RX_BUSY_POLL
#include <net/busy_poll.h>
#endif
+#include <linux/crash_dump.h>
#include "cq_enet_desc.h"
#include "vnic_dev.h"
int loopback = 0;
int err;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
/* VLAN tag from trunking driver */
vlan_tag_insert = 1;
- vlan_tag = vlan_tx_tag_get(skb);
+ vlan_tag = skb_vlan_tag_get(skb);
} else if (enic->loop_enable) {
vlan_tag = enic->loop_tag;
loopback = 1;
if (vnic_rq_desc_used(&enic->rq[i]) == 0) {
netdev_err(netdev, "Unable to alloc receive buffers\n");
err = -ENOMEM;
- goto err_out_notify_unset;
+ goto err_out_free_rq;
}
}
return 0;
-err_out_notify_unset:
+err_out_free_rq:
+ for (i = 0; i < enic->rq_count; i++)
+ vnic_rq_clean(&enic->rq[i], enic_free_rq_buf);
enic_dev_notify_unset(enic);
err_out_free_intr:
enic_free_intr(enic);
enic_clear_intr_mode(enic);
}
+static void enic_kdump_kernel_config(struct enic *enic)
+{
+ if (is_kdump_kernel()) {
+ dev_info(enic_get_dev(enic), "Running from within kdump kernel. Using minimal resources\n");
+ enic->rq_count = 1;
+ enic->wq_count = 1;
+ enic->config.rq_desc_count = ENIC_MIN_RQ_DESCS;
+ enic->config.wq_desc_count = ENIC_MIN_WQ_DESCS;
+ enic->config.mtu = min_t(u16, 1500, enic->config.mtu);
+ }
+}
+
static int enic_dev_init(struct enic *enic)
{
struct device *dev = enic_get_dev(enic);
enic_get_res_counts(enic);
+	/* Modify the resource counts if we are running in a kdump kernel.
+	 */
+ enic_kdump_kernel_config(enic);
+
/* Set interrupt mode based on resource counts and system
* capabilities
*/
#include <linux/platform_device.h>
#include <linux/irq.h>
#include <linux/slab.h>
+#include <linux/regulator/consumer.h>
+#include <linux/gpio.h>
+#include <linux/of_gpio.h>
#include <asm/delay.h>
#include <asm/irq.h>
struct dm9000_plat_data *pdata = dev_get_platdata(&pdev->dev);
struct board_info *db; /* Point a board information structure */
struct net_device *ndev;
+ struct device *dev = &pdev->dev;
const unsigned char *mac_src;
int ret = 0;
int iosize;
int i;
u32 id_val;
+ int reset_gpios;
+ enum of_gpio_flags flags;
+ struct regulator *power;
+
+ power = devm_regulator_get(dev, "vcc");
+ if (IS_ERR(power)) {
+ if (PTR_ERR(power) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ dev_dbg(dev, "no regulator provided\n");
+ } else {
+ ret = regulator_enable(power);
+ if (ret != 0) {
+ dev_err(dev,
+ "Failed to enable power regulator: %d\n", ret);
+ return ret;
+ }
+ dev_dbg(dev, "regulator enabled\n");
+ }
+
+ reset_gpios = of_get_named_gpio_flags(dev->of_node, "reset-gpios", 0,
+ &flags);
+ if (gpio_is_valid(reset_gpios)) {
+ ret = devm_gpio_request_one(dev, reset_gpios, flags,
+ "dm9000_reset");
+ if (ret) {
+ dev_err(dev, "failed to request reset gpio %d: %d\n",
+ reset_gpios, ret);
+ return -ENODEV;
+ }
+
+		/* According to the manual, the PWRST# low period must be at least 1 ms */
+ msleep(2);
+ gpio_set_value(reset_gpios, 1);
+		/* The chip needs 3 ms after PWRST# deassertion before the EEPROM can be read */
+ msleep(4);
+ }
if (!pdata) {
pdata = dm9000_parse_dt(&pdev->dev);
* break out of while loop if there are no more
* packets waiting
*/
- if (!(dnet_readl(bp, RX_FIFO_WCNT) >> 16)) {
- napi_complete(napi);
- int_enable = dnet_readl(bp, INTR_ENB);
- int_enable |= DNET_INTR_SRC_RX_CMDFIFOAF;
- dnet_writel(bp, int_enable, INTR_ENB);
- return 0;
- }
+ if (!(dnet_readl(bp, RX_FIFO_WCNT) >> 16))
+ break;
cmd_word = dnet_readl(bp, RX_LEN_FIFO);
pkt_len = cmd_word & 0xFFFF;
"size %u.\n", dev->name, pkt_len);
}
- budget -= npackets;
-
if (npackets < budget) {
/* We processed all packets available. Tell NAPI it can
- * stop polling then re-enable rx interrupts */
+ * stop polling then re-enable rx interrupts.
+ */
napi_complete(napi);
int_enable = dnet_readl(bp, INTR_ENB);
int_enable |= DNET_INTR_SRC_RX_CMDFIFOAF;
dnet_writel(bp, int_enable, INTR_ENB);
- return 0;
}
- /* There are still packets waiting */
- return 1;
+ return npackets;
}
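The dnet change above converges on the standard NAPI contract: the poll routine returns the number of packets it actually processed, and only calls napi_complete() and re-enables RX interrupts when that count came in under the budget. A minimal sketch of that shape, with the hardware helpers stubbed out as hypothetical:

/* Hedged sketch of the canonical NAPI poll shape; hw_has_rx_work(),
 * process_one_packet() and hw_enable_rx_irq() are hypothetical
 * stand-ins for the device-specific work.
 */
static bool hw_has_rx_work(void);		/* hypothetical */
static int process_one_packet(void);		/* hypothetical */
static void hw_enable_rx_irq(void);		/* hypothetical */

static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	while (work_done < budget && hw_has_rx_work())
		work_done += process_one_packet();

	if (work_done < budget) {
		/* All pending work consumed: stop polling and let the
		 * next interrupt reschedule us.
		 */
		napi_complete(napi);
		hw_enable_rx_irq();
	}
	return work_done;
}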
static irqreturn_t dnet_interrupt(int irq, void *dev_id)
u64 tx_bytes;
u64 tx_pkts;
u64 tx_reqs;
- u64 tx_wrbs;
u64 tx_compl;
ulong tx_jiffies;
u32 tx_stops;
/* Remember the skbs that were transmitted */
struct sk_buff *sent_skb_list[TX_Q_LEN];
struct be_tx_stats stats;
+ u16 pend_wrb_cnt; /* Number of WRBs yet to be given to HW */
+ u16 last_req_wrb_cnt; /* wrb cnt of the last req in the Q */
+ u16 last_req_hdr; /* index of the last req's hdr-wrb */
} ____cacheline_aligned_in_smp;
/* Struct to remember the pages posted for rx frags */
{
#define SLIPORT_READY_TIMEOUT 30
u32 sliport_status;
- int status = 0, i;
+ int i;
for (i = 0; i < SLIPORT_READY_TIMEOUT; i++) {
sliport_status = ioread32(adapter->db + SLIPORT_STATUS_OFFSET);
}
if (i == SLIPORT_READY_TIMEOUT)
- status = -1;
+ return sliport_status ? : -1;
- return status;
+ return 0;
}
static bool lancer_provisioning_error(struct be_adapter *adapter)
iowrite32(SLI_PORT_CONTROL_IP_MASK,
adapter->db + SLIPORT_CONTROL_OFFSET);
- /* check adapter has corrected the error */
+ /* check if adapter has corrected the error */
status = lancer_wait_ready(adapter);
sliport_status = ioread32(adapter->db +
SLIPORT_STATUS_OFFSET);
if (lancer_chip(adapter)) {
status = lancer_wait_ready(adapter);
- return status;
+ if (status) {
+ stage = status;
+ goto err;
+ }
+ return 0;
}
do {
timeout += 2;
} while (timeout < 60);
- dev_err(dev, "POST timeout; stage=0x%x\n", stage);
+err:
+ dev_err(dev, "POST timeout; stage=%#x\n", stage);
return -1;
}
be_reset_nic_desc(&nic_desc);
nic_desc.pf_num = adapter->pf_number;
nic_desc.vf_num = domain;
+ nic_desc.bw_min = 0;
if (lancer_chip(adapter)) {
nic_desc.hdr.desc_type = NIC_RESOURCE_DESC_TYPE_V0;
nic_desc.hdr.desc_len = RESOURCE_DESC_SIZE_V0;
int status;
if (BEx_chip(adapter) || lancer_chip(adapter))
- return 0;
+ return -EOPNOTSUPP;
spin_lock_bh(&adapter->mcc_lock);
u8 rsvd0[3];
} __packed;
+/* Flashrom related descriptors */
+#define MAX_FLASH_COMP 32
+
+#define OPTYPE_ISCSI_ACTIVE 0
+#define OPTYPE_REDBOOT 1
+#define OPTYPE_BIOS 2
+#define OPTYPE_PXE_BIOS 3
+#define OPTYPE_FCOE_BIOS 8
+#define OPTYPE_ISCSI_BACKUP 9
+#define OPTYPE_FCOE_FW_ACTIVE 10
+#define OPTYPE_FCOE_FW_BACKUP 11
+#define OPTYPE_NCSI_FW 13
+#define OPTYPE_REDBOOT_DIR 18
+#define OPTYPE_REDBOOT_CONFIG 19
+#define OPTYPE_SH_PHY_FW 21
+#define OPTYPE_FLASHISM_JUMPVECTOR 22
+#define OPTYPE_UFI_DIR 23
+#define OPTYPE_PHY_FW 99
+
+#define FLASH_BIOS_IMAGE_MAX_SIZE_g2 262144 /* Max OPTION ROM image sz */
+#define FLASH_REDBOOT_IMAGE_MAX_SIZE_g2 262144 /* Max Redboot image sz */
+#define FLASH_IMAGE_MAX_SIZE_g2 1310720 /* Max firmware image size */
+
+#define FLASH_NCSI_IMAGE_MAX_SIZE_g3 262144
+#define FLASH_PHY_FW_IMAGE_MAX_SIZE_g3 262144
+#define FLASH_BIOS_IMAGE_MAX_SIZE_g3 524288 /* Max OPTION ROM image sz */
+#define FLASH_REDBOOT_IMAGE_MAX_SIZE_g3 1048576 /* Max Redboot image sz */
+#define FLASH_IMAGE_MAX_SIZE_g3 2097152 /* Max firmware image size */
+
+/* Offsets for components on Flash. */
+#define FLASH_REDBOOT_START_g2 0
+#define FLASH_FCoE_BIOS_START_g2 524288
+#define FLASH_iSCSI_PRIMARY_IMAGE_START_g2 1048576
+#define FLASH_iSCSI_BACKUP_IMAGE_START_g2 2359296
+#define FLASH_FCoE_PRIMARY_IMAGE_START_g2 3670016
+#define FLASH_FCoE_BACKUP_IMAGE_START_g2 4980736
+#define FLASH_iSCSI_BIOS_START_g2 7340032
+#define FLASH_PXE_BIOS_START_g2 7864320
+
+#define FLASH_REDBOOT_START_g3 262144
+#define FLASH_PHY_FW_START_g3 1310720
+#define FLASH_iSCSI_PRIMARY_IMAGE_START_g3 2097152
+#define FLASH_iSCSI_BACKUP_IMAGE_START_g3 4194304
+#define FLASH_FCoE_PRIMARY_IMAGE_START_g3 6291456
+#define FLASH_FCoE_BACKUP_IMAGE_START_g3 8388608
+#define FLASH_iSCSI_BIOS_START_g3 12582912
+#define FLASH_PXE_BIOS_START_g3 13107200
+#define FLASH_FCoE_BIOS_START_g3 13631488
+#define FLASH_NCSI_START_g3 15990784
+
+#define IMAGE_NCSI 16
+#define IMAGE_OPTION_ROM_PXE 32
+#define IMAGE_OPTION_ROM_FCoE 33
+#define IMAGE_OPTION_ROM_ISCSI 34
+#define IMAGE_FLASHISM_JUMPVECTOR 48
+#define IMAGE_FIRMWARE_iSCSI 160
+#define IMAGE_FIRMWARE_FCoE 162
+#define IMAGE_FIRMWARE_BACKUP_iSCSI 176
+#define IMAGE_FIRMWARE_BACKUP_FCoE 178
+#define IMAGE_FIRMWARE_PHY 192
+#define IMAGE_REDBOOT_DIR 208
+#define IMAGE_REDBOOT_CONFIG 209
+#define IMAGE_UFI_DIR 210
+#define IMAGE_BOOT_CODE 224
+
+struct controller_id {
+ u32 vendor;
+ u32 device;
+ u32 subvendor;
+ u32 subdevice;
+};
+
+struct flash_comp {
+ unsigned long offset;
+ int optype;
+ int size;
+ int img_type;
+};
+
+struct image_hdr {
+ u32 imageid;
+ u32 imageoffset;
+ u32 imagelength;
+ u32 image_checksum;
+ u8 image_version[32];
+};
+
+struct flash_file_hdr_g2 {
+ u8 sign[32];
+ u32 cksum;
+ u32 antidote;
+ struct controller_id cont_id;
+ u32 file_len;
+ u32 chunk_num;
+ u32 total_chunks;
+ u32 num_imgs;
+ u8 build[24];
+};
+
+struct flash_file_hdr_g3 {
+ u8 sign[52];
+ u8 ufi_version[4];
+ u32 file_len;
+ u32 cksum;
+ u32 antidote;
+ u32 num_imgs;
+ u8 build[24];
+ u8 asic_type_rev;
+ u8 rsvd[31];
+};
+
+struct flash_section_hdr {
+ u32 format_rev;
+ u32 cksum;
+ u32 antidote;
+ u32 num_images;
+ u8 id_string[128];
+ u32 rsvd[4];
+} __packed;
+
+struct flash_section_hdr_g2 {
+ u32 format_rev;
+ u32 cksum;
+ u32 antidote;
+ u32 build_num;
+ u8 id_string[128];
+ u32 rsvd[8];
+} __packed;
+
+struct flash_section_entry {
+ u32 type;
+ u32 offset;
+ u32 pad_size;
+ u32 image_size;
+ u32 cksum;
+ u32 entry_point;
+ u16 optype;
+ u16 rsvd0;
+ u32 rsvd1;
+ u8 ver_data[32];
+} __packed;
+
+struct flash_section_info {
+ u8 cookie[32];
+ struct flash_section_hdr fsec_hdr;
+ struct flash_section_entry fsec_entry[32];
+} __packed;
+
+struct flash_section_info_g2 {
+ u8 cookie[32];
+ struct flash_section_hdr_g2 fsec_hdr;
+ struct flash_section_entry fsec_entry[32];
+} __packed;
+
/****************** Firmware Flash ******************/
+#define FLASHROM_OPER_FLASH 1
+#define FLASHROM_OPER_SAVE 2
+#define FLASHROM_OPER_REPORT 4
+#define FLASHROM_OPER_PHY_FLASH 9
+#define FLASHROM_OPER_PHY_SAVE 10
+
struct flashrom_params {
u32 op_code;
u32 op_type;
PHY_TYPE_QSFP,
PHY_TYPE_KR4_40GB,
PHY_TYPE_KR2_20GB,
+ PHY_TYPE_TN_8022,
PHY_TYPE_DISABLED = 255
};
};
/*********************** Controller Attributes ***********************/
+struct mgmt_hba_attribs {
+ u32 rsvd0[24];
+ u8 controller_model_number[32];
+ u32 rsvd1[79];
+ u8 rsvd2[3];
+ u8 phy_port;
+ u32 rsvd3[13];
+} __packed;
+
+struct mgmt_controller_attrib {
+ struct mgmt_hba_attribs hba_attribs;
+ u32 rsvd0[10];
+} __packed;
+
struct be_cmd_req_cntl_attribs {
struct be_cmd_req_hdr hdr;
};
{DRVSTAT_TX_INFO(tx_pkts)},
/* Number of skbs queued for transmission by the driver */
{DRVSTAT_TX_INFO(tx_reqs)},
- /* Number of TX work request blocks DMAed to HW */
- {DRVSTAT_TX_INFO(tx_wrbs)},
/* Number of times the TX queue was stopped due to lack
* of space in the TXQ.
*/
if (ecmd->autoneg != adapter->phy.fc_autoneg)
return -EINVAL;
- adapter->tx_fc = ecmd->tx_pause;
- adapter->rx_fc = ecmd->rx_pause;
- status = be_cmd_set_flow_control(adapter,
- adapter->tx_fc, adapter->rx_fc);
- if (status)
+ status = be_cmd_set_flow_control(adapter, ecmd->tx_pause,
+ ecmd->rx_pause);
+ if (status) {
dev_warn(&adapter->pdev->dev, "Pause param set failed\n");
+ return be_cmd_status(status);
+ }
- return be_cmd_status(status);
+ adapter->tx_fc = ecmd->tx_pause;
+ adapter->rx_fc = ecmd->rx_pause;
+ return 0;
}
static int be_set_phys_id(struct net_device *netdev,
#define RETRIEVE_FAT 0
#define QUERY_FAT 1
-/* Flashrom related descriptors */
-#define MAX_FLASH_COMP 32
-#define IMAGE_TYPE_FIRMWARE 160
-#define IMAGE_TYPE_BOOTCODE 224
-#define IMAGE_TYPE_OPTIONROM 32
-
-#define NUM_FLASHDIR_ENTRIES 32
-
-#define OPTYPE_ISCSI_ACTIVE 0
-#define OPTYPE_REDBOOT 1
-#define OPTYPE_BIOS 2
-#define OPTYPE_PXE_BIOS 3
-#define OPTYPE_FCOE_BIOS 8
-#define OPTYPE_ISCSI_BACKUP 9
-#define OPTYPE_FCOE_FW_ACTIVE 10
-#define OPTYPE_FCOE_FW_BACKUP 11
-#define OPTYPE_NCSI_FW 13
-#define OPTYPE_REDBOOT_DIR 18
-#define OPTYPE_REDBOOT_CONFIG 19
-#define OPTYPE_SH_PHY_FW 21
-#define OPTYPE_FLASHISM_JUMPVECTOR 22
-#define OPTYPE_UFI_DIR 23
-#define OPTYPE_PHY_FW 99
-#define TN_8022 13
-
-#define FLASHROM_OPER_PHY_FLASH 9
-#define FLASHROM_OPER_PHY_SAVE 10
-#define FLASHROM_OPER_FLASH 1
-#define FLASHROM_OPER_SAVE 2
-#define FLASHROM_OPER_REPORT 4
-
-#define FLASH_IMAGE_MAX_SIZE_g2 (1310720) /* Max firmware image size */
-#define FLASH_BIOS_IMAGE_MAX_SIZE_g2 (262144) /* Max OPTION ROM image sz */
-#define FLASH_REDBOOT_IMAGE_MAX_SIZE_g2 (262144) /* Max Redboot image sz */
-#define FLASH_IMAGE_MAX_SIZE_g3 (2097152) /* Max firmware image size */
-#define FLASH_BIOS_IMAGE_MAX_SIZE_g3 (524288) /* Max OPTION ROM image sz */
-#define FLASH_REDBOOT_IMAGE_MAX_SIZE_g3 (1048576) /* Max Redboot image sz */
-#define FLASH_NCSI_IMAGE_MAX_SIZE_g3 (262144)
-#define FLASH_PHY_FW_IMAGE_MAX_SIZE_g3 262144
-
-#define FLASH_NCSI_MAGIC (0x16032009)
-#define FLASH_NCSI_DISABLED (0)
-#define FLASH_NCSI_ENABLED (1)
-
-#define FLASH_NCSI_BITFILE_HDR_OFFSET (0x600000)
-
-/* Offsets for components on Flash. */
-#define FLASH_iSCSI_PRIMARY_IMAGE_START_g2 (1048576)
-#define FLASH_iSCSI_BACKUP_IMAGE_START_g2 (2359296)
-#define FLASH_FCoE_PRIMARY_IMAGE_START_g2 (3670016)
-#define FLASH_FCoE_BACKUP_IMAGE_START_g2 (4980736)
-#define FLASH_iSCSI_BIOS_START_g2 (7340032)
-#define FLASH_PXE_BIOS_START_g2 (7864320)
-#define FLASH_FCoE_BIOS_START_g2 (524288)
-#define FLASH_REDBOOT_START_g2 (0)
-
-#define FLASH_NCSI_START_g3 (15990784)
-#define FLASH_iSCSI_PRIMARY_IMAGE_START_g3 (2097152)
-#define FLASH_iSCSI_BACKUP_IMAGE_START_g3 (4194304)
-#define FLASH_FCoE_PRIMARY_IMAGE_START_g3 (6291456)
-#define FLASH_FCoE_BACKUP_IMAGE_START_g3 (8388608)
-#define FLASH_iSCSI_BIOS_START_g3 (12582912)
-#define FLASH_PXE_BIOS_START_g3 (13107200)
-#define FLASH_FCoE_BIOS_START_g3 (13631488)
-#define FLASH_REDBOOT_START_g3 (262144)
-#define FLASH_PHY_FW_START_g3 1310720
-
-#define IMAGE_NCSI 16
-#define IMAGE_OPTION_ROM_PXE 32
-#define IMAGE_OPTION_ROM_FCoE 33
-#define IMAGE_OPTION_ROM_ISCSI 34
-#define IMAGE_FLASHISM_JUMPVECTOR 48
-#define IMAGE_FLASH_ISM 49
-#define IMAGE_JUMP_VECTOR 50
-#define IMAGE_FIRMWARE_iSCSI 160
-#define IMAGE_FIRMWARE_COMP_iSCSI 161
-#define IMAGE_FIRMWARE_FCoE 162
-#define IMAGE_FIRMWARE_COMP_FCoE 163
-#define IMAGE_FIRMWARE_BACKUP_iSCSI 176
-#define IMAGE_FIRMWARE_BACKUP_COMP_iSCSI 177
-#define IMAGE_FIRMWARE_BACKUP_FCoE 178
-#define IMAGE_FIRMWARE_BACKUP_COMP_FCoE 179
-#define IMAGE_FIRMWARE_PHY 192
-#define IMAGE_REDBOOT_DIR 208
-#define IMAGE_REDBOOT_CONFIG 209
-#define IMAGE_UFI_DIR 210
-#define IMAGE_BOOT_CODE 224
-
/************* Rx Packet Type Encoding **************/
#define BE_UNICAST_PACKET 0
#define BE_MULTICAST_PACKET 1
u8 vlan_tag[16];
} __packed;
+#define TX_HDR_WRB_COMPL 1 /* word 2 */
+#define TX_HDR_WRB_EVT (1 << 1) /* word 2 */
+#define TX_HDR_WRB_NUM_SHIFT 13 /* word 2: bits 13:17 */
+#define TX_HDR_WRB_NUM_MASK 0x1F /* word 2: bits 13:17 */
+
struct be_eth_hdr_wrb {
u32 dw[4];
};
struct be_eth_rx_compl {
u32 dw[4];
};
-
-struct mgmt_hba_attribs {
- u8 flashrom_version_string[32];
- u8 manufacturer_name[32];
- u32 supported_modes;
- u32 rsvd0[3];
- u8 ncsi_ver_string[12];
- u32 default_extended_timeout;
- u8 controller_model_number[32];
- u8 controller_description[64];
- u8 controller_serial_number[32];
- u8 ip_version_string[32];
- u8 firmware_version_string[32];
- u8 bios_version_string[32];
- u8 redboot_version_string[32];
- u8 driver_version_string[32];
- u8 fw_on_flash_version_string[32];
- u32 functionalities_supported;
- u16 max_cdblength;
- u8 asic_revision;
- u8 generational_guid[16];
- u8 hba_port_count;
- u16 default_link_down_timeout;
- u8 iscsi_ver_min_max;
- u8 multifunction_device;
- u8 cache_valid;
- u8 hba_status;
- u8 max_domains_supported;
- u8 phy_port;
- u32 firmware_post_status;
- u32 hba_mtu[8];
- u32 rsvd1[4];
-};
-
-struct mgmt_controller_attrib {
- struct mgmt_hba_attribs hba_attribs;
- u16 pci_vendor_id;
- u16 pci_device_id;
- u16 pci_sub_vendor_id;
- u16 pci_sub_system_id;
- u8 pci_bus_number;
- u8 pci_device_number;
- u8 pci_function_number;
- u8 interface_type;
- u64 unique_identifier;
- u32 rsvd0[5];
-};
-
-struct controller_id {
- u32 vendor;
- u32 device;
- u32 subvendor;
- u32 subdevice;
-};
-
-struct flash_comp {
- unsigned long offset;
- int optype;
- int size;
- int img_type;
-};
-
-struct image_hdr {
- u32 imageid;
- u32 imageoffset;
- u32 imagelength;
- u32 image_checksum;
- u8 image_version[32];
-};
-struct flash_file_hdr_g2 {
- u8 sign[32];
- u32 cksum;
- u32 antidote;
- struct controller_id cont_id;
- u32 file_len;
- u32 chunk_num;
- u32 total_chunks;
- u32 num_imgs;
- u8 build[24];
-};
-
-struct flash_file_hdr_g3 {
- u8 sign[52];
- u8 ufi_version[4];
- u32 file_len;
- u32 cksum;
- u32 antidote;
- u32 num_imgs;
- u8 build[24];
- u8 asic_type_rev;
- u8 rsvd[31];
-};
-
-struct flash_section_hdr {
- u32 format_rev;
- u32 cksum;
- u32 antidote;
- u32 num_images;
- u8 id_string[128];
- u32 rsvd[4];
-} __packed;
-
-struct flash_section_hdr_g2 {
- u32 format_rev;
- u32 cksum;
- u32 antidote;
- u32 build_num;
- u8 id_string[128];
- u32 rsvd[8];
-} __packed;
-
-struct flash_section_entry {
- u32 type;
- u32 offset;
- u32 pad_size;
- u32 image_size;
- u32 cksum;
- u32 entry_point;
- u16 optype;
- u16 rsvd0;
- u32 rsvd1;
- u8 ver_data[32];
-} __packed;
-
-struct flash_section_info {
- u8 cookie[32];
- struct flash_section_hdr fsec_hdr;
- struct flash_section_entry fsec_entry[32];
-} __packed;
-
-struct flash_section_info_g2 {
- u8 cookie[32];
- struct flash_section_hdr_g2 fsec_hdr;
- struct flash_section_entry fsec_entry[32];
-} __packed;
netif_carrier_off(netdev);
}
-static void be_tx_stats_update(struct be_tx_obj *txo,
- u32 wrb_cnt, u32 copied, u32 gso_segs,
- bool stopped)
+static void be_tx_stats_update(struct be_tx_obj *txo, struct sk_buff *skb)
{
struct be_tx_stats *stats = tx_stats(txo);
u64_stats_update_begin(&stats->sync);
stats->tx_reqs++;
- stats->tx_wrbs += wrb_cnt;
- stats->tx_bytes += copied;
- stats->tx_pkts += (gso_segs ? gso_segs : 1);
- if (stopped)
- stats->tx_stops++;
+ stats->tx_bytes += skb->len;
+ stats->tx_pkts += (skb_shinfo(skb)->gso_segs ? : 1);
u64_stats_update_end(&stats->sync);
}
-/* Determine number of WRB entries needed to xmit data in an skb */
-static u32 wrb_cnt_for_skb(struct be_adapter *adapter, struct sk_buff *skb,
- bool *dummy)
+/* Returns number of WRBs needed for the skb */
+static u32 skb_wrb_cnt(struct sk_buff *skb)
{
- int cnt = (skb->len > skb->data_len);
-
- cnt += skb_shinfo(skb)->nr_frags;
-
- /* to account for hdr wrb */
- cnt++;
- if (lancer_chip(adapter) || !(cnt & 1)) {
- *dummy = false;
- } else {
- /* add a dummy to make it an even num */
- cnt++;
- *dummy = true;
- }
- BUG_ON(cnt > BE_MAX_TX_FRAG_COUNT);
- return cnt;
+ /* +1 for the header wrb */
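+	/* e.g. a purely linear skb needs 2 WRBs (hdr + data); an skb with a
+	 * linear part and three frags needs 5.
+	 */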
+ return 1 + (skb_headlen(skb) ? 1 : 0) + skb_shinfo(skb)->nr_frags;
}
static inline void wrb_fill(struct be_eth_wrb *wrb, u64 addr, int len)
u8 vlan_prio;
u16 vlan_tag;
- vlan_tag = vlan_tx_tag_get(skb);
+ vlan_tag = skb_vlan_tag_get(skb);
vlan_prio = (vlan_tag & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT;
/* If vlan priority provided by OS is NOT in available bmap */
if (!(adapter->vlan_prio_bmap & (1 << vlan_prio)))
SET_TX_WRB_HDR_BITS(udpcs, hdr, 1);
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
SET_TX_WRB_HDR_BITS(vlan, hdr, 1);
vlan_tag = be_get_tx_vlan_tag(adapter, skb);
SET_TX_WRB_HDR_BITS(vlan_tag, hdr, vlan_tag);
}
- /* To skip HW VLAN tagging: evt = 1, compl = 0 */
- SET_TX_WRB_HDR_BITS(complete, hdr, !skip_hw_vlan);
- SET_TX_WRB_HDR_BITS(event, hdr, 1);
SET_TX_WRB_HDR_BITS(num_wrb, hdr, wrb_cnt);
SET_TX_WRB_HDR_BITS(len, hdr, len);
+
+ /* Hack to skip HW VLAN tagging needs evt = 1, compl = 0
+ * When this hack is not needed, the evt bit is set while ringing DB
+ */
+ if (skip_hw_vlan)
+ SET_TX_WRB_HDR_BITS(event, hdr, 1);
}
static void unmap_tx_frag(struct device *dev, struct be_eth_wrb *wrb,
}
}
-static int make_tx_wrbs(struct be_adapter *adapter, struct be_queue_info *txq,
- struct sk_buff *skb, u32 wrb_cnt, bool dummy_wrb,
- bool skip_hw_vlan)
+/* Returns the number of WRBs used up by the skb */
+static u32 be_xmit_enqueue(struct be_adapter *adapter, struct be_tx_obj *txo,
+ struct sk_buff *skb, bool skip_hw_vlan)
{
- dma_addr_t busaddr;
- int i, copied = 0;
+ u32 i, copied = 0, wrb_cnt = skb_wrb_cnt(skb);
struct device *dev = &adapter->pdev->dev;
- struct sk_buff *first_skb = skb;
- struct be_eth_wrb *wrb;
+ struct be_queue_info *txq = &txo->q;
struct be_eth_hdr_wrb *hdr;
bool map_single = false;
- u16 map_head;
+ struct be_eth_wrb *wrb;
+ dma_addr_t busaddr;
+ u16 head = txq->head;
hdr = queue_head_node(txq);
+ wrb_fill_hdr(adapter, hdr, skb, wrb_cnt, skb->len, skip_hw_vlan);
+ be_dws_cpu_to_le(hdr, sizeof(*hdr));
+
queue_head_inc(txq);
- map_head = txq->head;
if (skb->len > skb->data_len) {
int len = skb_headlen(skb);
copied += skb_frag_size(frag);
}
- if (dummy_wrb) {
- wrb = queue_head_node(txq);
- wrb_fill(wrb, 0, 0);
- be_dws_cpu_to_le(wrb, sizeof(*wrb));
- queue_head_inc(txq);
- }
+ BUG_ON(txo->sent_skb_list[head]);
+ txo->sent_skb_list[head] = skb;
+ txo->last_req_hdr = head;
+ atomic_add(wrb_cnt, &txq->used);
+ txo->last_req_wrb_cnt = wrb_cnt;
+ txo->pend_wrb_cnt += wrb_cnt;
- wrb_fill_hdr(adapter, hdr, first_skb, wrb_cnt, copied, skip_hw_vlan);
- be_dws_cpu_to_le(hdr, sizeof(*hdr));
+ be_tx_stats_update(txo, skb);
+ return wrb_cnt;
- return copied;
dma_err:
- txq->head = map_head;
+ /* Bring the queue back to the state it was in before this
+ * routine was invoked.
+ */
+ txq->head = head;
+ /* skip the first wrb (hdr); it's not mapped */
+ queue_head_inc(txq);
while (copied) {
wrb = queue_head_node(txq);
unmap_tx_frag(dev, wrb, map_single);
adapter->drv_stats.dma_map_errors++;
queue_head_inc(txq);
}
+ txq->head = head;
return 0;
}
if (unlikely(!skb))
return skb;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
vlan_tag = be_get_tx_vlan_tag(adapter, skb);
if (qnq_async_evt_rcvd(adapter) && adapter->pvid) {
static int be_vlan_tag_tx_chk(struct be_adapter *adapter, struct sk_buff *skb)
{
- return vlan_tx_tag_present(skb) || adapter->pvid || adapter->qnq_vid;
+ return skb_vlan_tag_present(skb) || adapter->pvid || adapter->qnq_vid;
}
static int be_ipv6_tx_stall_chk(struct be_adapter *adapter, struct sk_buff *skb)
eth_hdr_len = ntohs(skb->protocol) == ETH_P_8021Q ?
VLAN_ETH_HLEN : ETH_HLEN;
if (skb->len <= 60 &&
- (lancer_chip(adapter) || vlan_tx_tag_present(skb)) &&
+ (lancer_chip(adapter) || skb_vlan_tag_present(skb)) &&
is_ipv4_pkt(skb)) {
ip = (struct iphdr *)ip_hdr(skb);
pskb_trim(skb, eth_hdr_len + ntohs(ip->tot_len));
* Manually insert VLAN in pkt.
*/
if (skb->ip_summed != CHECKSUM_PARTIAL &&
- vlan_tx_tag_present(skb)) {
+ skb_vlan_tag_present(skb)) {
skb = be_insert_vlan_in_pkt(adapter, skb, skip_hw_vlan);
if (unlikely(!skb))
goto err;
return skb;
}
+static void be_xmit_flush(struct be_adapter *adapter, struct be_tx_obj *txo)
+{
+ struct be_queue_info *txq = &txo->q;
+ struct be_eth_hdr_wrb *hdr = queue_index_node(txq, txo->last_req_hdr);
+
+ /* Mark the last request eventable if it hasn't been marked already */
+ if (!(hdr->dw[2] & cpu_to_le32(TX_HDR_WRB_EVT)))
+ hdr->dw[2] |= cpu_to_le32(TX_HDR_WRB_EVT | TX_HDR_WRB_COMPL);
+
+	/* compose a dummy wrb if there is an odd number of wrbs to notify */
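+	/* (BE2/BE3 must be notified of an even number of wrbs per doorbell;
+	 * Lancer has no such restriction.)
+	 */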
+ if (!lancer_chip(adapter) && (txo->pend_wrb_cnt & 1)) {
+ wrb_fill(queue_head_node(txq), 0, 0);
+ queue_head_inc(txq);
+ atomic_inc(&txq->used);
+ txo->pend_wrb_cnt++;
+ hdr->dw[2] &= ~cpu_to_le32(TX_HDR_WRB_NUM_MASK <<
+ TX_HDR_WRB_NUM_SHIFT);
+ hdr->dw[2] |= cpu_to_le32((txo->last_req_wrb_cnt + 1) <<
+ TX_HDR_WRB_NUM_SHIFT);
+ }
+ be_txq_notify(adapter, txo, txo->pend_wrb_cnt);
+ txo->pend_wrb_cnt = 0;
+}
+
static netdev_tx_t be_xmit(struct sk_buff *skb, struct net_device *netdev)
{
+ bool skip_hw_vlan = false, flush = !skb->xmit_more;
struct be_adapter *adapter = netdev_priv(netdev);
- struct be_tx_obj *txo = &adapter->tx_obj[skb_get_queue_mapping(skb)];
+ u16 q_idx = skb_get_queue_mapping(skb);
+ struct be_tx_obj *txo = &adapter->tx_obj[q_idx];
struct be_queue_info *txq = &txo->q;
- bool dummy_wrb, stopped = false;
- u32 wrb_cnt = 0, copied = 0;
- bool skip_hw_vlan = false;
- u32 start = txq->head;
+ u16 wrb_cnt;
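+
+	/* With skb->xmit_more set, WRBs are only queued here; ringing the TX
+	 * doorbell is deferred to be_xmit_flush() on the last skb of a batch.
+	 */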
skb = be_xmit_workarounds(adapter, skb, &skip_hw_vlan);
- if (!skb) {
- tx_stats(txo)->tx_drv_drops++;
- return NETDEV_TX_OK;
- }
+ if (unlikely(!skb))
+ goto drop;
- wrb_cnt = wrb_cnt_for_skb(adapter, skb, &dummy_wrb);
+ wrb_cnt = be_xmit_enqueue(adapter, txo, skb, skip_hw_vlan);
+ if (unlikely(!wrb_cnt)) {
+ dev_kfree_skb_any(skb);
+ goto drop;
+ }
- copied = make_tx_wrbs(adapter, txq, skb, wrb_cnt, dummy_wrb,
- skip_hw_vlan);
- if (copied) {
- int gso_segs = skb_shinfo(skb)->gso_segs;
+ if ((atomic_read(&txq->used) + BE_MAX_TX_FRAG_COUNT) >= txq->len) {
+ netif_stop_subqueue(netdev, q_idx);
+ tx_stats(txo)->tx_stops++;
+ }
- /* record the sent skb in the sent_skb table */
- BUG_ON(txo->sent_skb_list[start]);
- txo->sent_skb_list[start] = skb;
+ if (flush || __netif_subqueue_stopped(netdev, q_idx))
+ be_xmit_flush(adapter, txo);
- /* Ensure txq has space for the next skb; Else stop the queue
- * *BEFORE* ringing the tx doorbell, so that we serialze the
- * tx compls of the current transmit which'll wake up the queue
- */
- atomic_add(wrb_cnt, &txq->used);
- if ((BE_MAX_TX_FRAG_COUNT + atomic_read(&txq->used)) >=
- txq->len) {
- netif_stop_subqueue(netdev, skb_get_queue_mapping(skb));
- stopped = true;
- }
-
- be_txq_notify(adapter, txo, wrb_cnt);
+ return NETDEV_TX_OK;
+drop:
+ tx_stats(txo)->tx_drv_drops++;
+ /* Flush the already enqueued tx requests */
+ if (flush && txo->pend_wrb_cnt)
+ be_xmit_flush(adapter, txo);
- be_tx_stats_update(txo, wrb_cnt, copied, gso_segs, stopped);
- } else {
- txq->head = start;
- tx_stats(txo)->tx_drv_drops++;
- dev_kfree_skb_any(skb);
- }
return NETDEV_TX_OK;
}
static u16 be_tx_compl_process(struct be_adapter *adapter,
struct be_tx_obj *txo, u16 last_index)
{
+ struct sk_buff **sent_skbs = txo->sent_skb_list;
struct be_queue_info *txq = &txo->q;
+ u16 frag_index, num_wrbs = 0;
+ struct sk_buff *skb = NULL;
+ bool unmap_skb_hdr = false;
struct be_eth_wrb *wrb;
- struct sk_buff **sent_skbs = txo->sent_skb_list;
- struct sk_buff *sent_skb;
- u16 cur_index, num_wrbs = 1; /* account for hdr wrb */
- bool unmap_skb_hdr = true;
-
- sent_skb = sent_skbs[txq->tail];
- BUG_ON(!sent_skb);
- sent_skbs[txq->tail] = NULL;
-
- /* skip header wrb */
- queue_tail_inc(txq);
do {
- cur_index = txq->tail;
+ if (sent_skbs[txq->tail]) {
+ /* Free skb from prev req */
+ if (skb)
+ dev_consume_skb_any(skb);
+ skb = sent_skbs[txq->tail];
+ sent_skbs[txq->tail] = NULL;
+ queue_tail_inc(txq); /* skip hdr wrb */
+ num_wrbs++;
+ unmap_skb_hdr = true;
+ }
wrb = queue_tail_node(txq);
+ frag_index = txq->tail;
unmap_tx_frag(&adapter->pdev->dev, wrb,
- (unmap_skb_hdr && skb_headlen(sent_skb)));
+ (unmap_skb_hdr && skb_headlen(skb)));
unmap_skb_hdr = false;
-
- num_wrbs++;
queue_tail_inc(txq);
- } while (cur_index != last_index);
+ num_wrbs++;
+ } while (frag_index != last_index);
+ dev_consume_skb_any(skb);
- dev_consume_skb_any(sent_skb);
return num_wrbs;
}
static void be_tx_compl_clean(struct be_adapter *adapter)
{
+ u16 end_idx, notified_idx, cmpl = 0, timeo = 0, num_wrbs = 0;
+ struct device *dev = &adapter->pdev->dev;
struct be_tx_obj *txo;
struct be_queue_info *txq;
struct be_eth_tx_compl *txcp;
- u16 end_idx, cmpl = 0, timeo = 0, num_wrbs = 0;
- struct sk_buff *sent_skb;
- bool dummy_wrb;
int i, pending_txqs;
/* Stop polling for compls when HW has been silent for 10ms */
atomic_sub(num_wrbs, &txq->used);
timeo = 0;
}
- if (atomic_read(&txq->used) == 0)
+ if (atomic_read(&txq->used) == txo->pend_wrb_cnt)
pending_txqs--;
}
mdelay(1);
} while (true);
+ /* Free enqueued TX that was never notified to HW */
for_all_tx_queues(adapter, txo, i) {
txq = &txo->q;
- if (atomic_read(&txq->used))
- dev_err(&adapter->pdev->dev, "%d pending tx-compls\n",
- atomic_read(&txq->used));
- /* free posted tx for which compls will never arrive */
- while (atomic_read(&txq->used)) {
- sent_skb = txo->sent_skb_list[txq->tail];
+ if (atomic_read(&txq->used)) {
+ dev_info(dev, "txq%d: cleaning %d pending tx-wrbs\n",
+ i, atomic_read(&txq->used));
+ notified_idx = txq->tail;
end_idx = txq->tail;
- num_wrbs = wrb_cnt_for_skb(adapter, sent_skb,
- &dummy_wrb);
- index_adv(&end_idx, num_wrbs - 1, txq->len);
+ index_adv(&end_idx, atomic_read(&txq->used) - 1,
+ txq->len);
+ /* Use the tx-compl process logic to handle requests
+ * that were not sent to the HW.
+ */
num_wrbs = be_tx_compl_process(adapter, txo, end_idx);
atomic_sub(num_wrbs, &txq->used);
+ BUG_ON(atomic_read(&txq->used));
+ txo->pend_wrb_cnt = 0;
+ /* Since hw was never notified of these requests,
+ * reset TXQ indices
+ */
+ txq->head = notified_idx;
+ txq->tail = notified_idx;
}
}
}
return 0;
}
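+
+/* Create an interface with only the subset of cap_flags this function
+ * supports enabled; shared by the PF (be_setup) and VF (be_vfs_if_create)
+ * setup paths.
+ */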
+static int be_if_create(struct be_adapter *adapter, u32 *if_handle,
+ u32 cap_flags, u32 vf)
+{
+ u32 en_flags;
+ int status;
+
+ en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
+ BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS |
+ BE_IF_FLAGS_RSS;
+
+ en_flags &= cap_flags;
+
+ status = be_cmd_if_create(adapter, cap_flags, en_flags,
+ if_handle, vf);
+
+ return status;
+}
+
static int be_vfs_if_create(struct be_adapter *adapter)
{
struct be_resources res = {0};
struct be_vf_cfg *vf_cfg;
- u32 cap_flags, en_flags, vf;
- int status = 0;
+ u32 cap_flags, vf;
+ int status;
+ /* If a FW profile exists, then cap_flags are updated */
cap_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MULTICAST;
cap_flags = res.if_cap_flags;
}
- /* If a FW profile exists, then cap_flags are updated */
- en_flags = cap_flags & (BE_IF_FLAGS_UNTAGGED |
- BE_IF_FLAGS_BROADCAST |
- BE_IF_FLAGS_MULTICAST);
- status =
- be_cmd_if_create(adapter, cap_flags, en_flags,
- &vf_cfg->if_handle, vf + 1);
+ status = be_if_create(adapter, &vf_cfg->if_handle,
+ cap_flags, vf + 1);
if (status)
- goto err;
+ return status;
}
-err:
- return status;
+
+ return 0;
}
static int be_vf_setup_init(struct be_adapter *adapter)
static int be_setup(struct be_adapter *adapter)
{
struct device *dev = &adapter->pdev->dev;
- u32 tx_fc, rx_fc, en_flags;
int status;
be_setup_init(adapter);
if (status)
goto err;
- en_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
- BE_IF_FLAGS_MULTICAST | BE_IF_FLAGS_PASS_L3L4_ERRORS;
- if (adapter->function_caps & BE_FUNCTION_CAPS_RSS)
- en_flags |= BE_IF_FLAGS_RSS;
- en_flags = en_flags & be_if_cap_flags(adapter);
- status = be_cmd_if_create(adapter, be_if_cap_flags(adapter), en_flags,
- &adapter->if_handle, 0);
+ status = be_if_create(adapter, &adapter->if_handle,
+ be_if_cap_flags(adapter), 0);
if (status)
goto err;
be_cmd_get_acpi_wol_cap(adapter);
- be_cmd_get_flow_control(adapter, &tx_fc, &rx_fc);
+ status = be_cmd_set_flow_control(adapter, adapter->tx_fc,
+ adapter->rx_fc);
+ if (status)
+ be_cmd_get_flow_control(adapter, &adapter->tx_fc,
+ &adapter->rx_fc);
- if (rx_fc != adapter->rx_fc || tx_fc != adapter->tx_fc)
- be_cmd_set_flow_control(adapter, adapter->tx_fc,
- adapter->rx_fc);
+ dev_info(&adapter->pdev->dev, "HW Flow control - TX:%d RX:%d\n",
+ adapter->tx_fc, adapter->rx_fc);
if (be_physfn(adapter))
be_cmd_set_logical_link_config(adapter,
static bool phy_flashing_required(struct be_adapter *adapter)
{
- return (adapter->phy.phy_type == TN_8022 &&
+ return (adapter->phy.phy_type == PHY_TYPE_TN_8022 &&
adapter->phy.interface_type == PHY_TYPE_BASET_10GB);
}
if (status)
return status;
+ status = be_cmd_reset_function(adapter);
+ if (status)
+ return status;
+
be_intr_set(adapter, true);
/* tell fw we're ready to fire cmds */
status = be_cmd_fw_init(adapter);
select PHYLIB
select OF_MDIO
---help---
- This driver supports the MDIO bus on the Fman 10G Ethernet MACs.
+ This driver supports the MDIO bus on the Fman 10G Ethernet MACs, and
+	  on the FMan mEMAC (which supports both Clauses 22 and 45).
config UCC_GETH
tristate "Freescale QE Gigabit Ethernet"
* (40ns * 6).
*/
#define FEC_QUIRK_BUG_CAPTURE (1 << 10)
+/* Controller has only one MDIO bus */
+#define FEC_QUIRK_SINGLE_MDIO (1 << 11)
struct fec_enet_priv_tx_q {
int index;
.driver_data = 0,
}, {
.name = "imx28-fec",
- .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME,
+ .driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+ FEC_QUIRK_SINGLE_MDIO,
}, {
.name = "imx6q-fec",
.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
fec_enet_tx_queue(struct net_device *ndev, u16 queue_id)
{
struct fec_enet_private *fep;
- struct bufdesc *bdp;
+ struct bufdesc *bdp, *bdp_t;
unsigned short status;
struct sk_buff *skb;
struct fec_enet_priv_tx_q *txq;
struct netdev_queue *nq;
int index = 0;
+ int i, bdnum;
int entries_free;
fep = netdev_priv(ndev);
if (bdp == txq->cur_tx)
break;
- index = fec_enet_get_bd_index(txq->tx_bd_base, bdp, fep);
-
+ bdp_t = bdp;
+ bdnum = 1;
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep);
skb = txq->tx_skbuff[index];
- txq->tx_skbuff[index] = NULL;
- if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
- dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
- bdp->cbd_datlen, DMA_TO_DEVICE);
- bdp->cbd_bufaddr = 0;
- if (!skb) {
- bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
- continue;
+ while (!skb) {
+ bdp_t = fec_enet_get_nextdesc(bdp_t, fep, queue_id);
+ index = fec_enet_get_bd_index(txq->tx_bd_base, bdp_t, fep);
+ skb = txq->tx_skbuff[index];
+ bdnum++;
}
+ if (skb_shinfo(skb)->nr_frags &&
+ (status = bdp_t->cbd_sc) & BD_ENET_TX_READY)
+ break;
+
+ for (i = 0; i < bdnum; i++) {
+ if (!IS_TSO_HEADER(txq, bdp->cbd_bufaddr))
+ dma_unmap_single(&fep->pdev->dev, bdp->cbd_bufaddr,
+ bdp->cbd_datlen, DMA_TO_DEVICE);
+ bdp->cbd_bufaddr = 0;
+ if (i < bdnum - 1)
+ bdp = fec_enet_get_nextdesc(bdp, fep, queue_id);
+ }
+ txq->tx_skbuff[index] = NULL;
/* Check for errors. */
if (status & (BD_ENET_TX_HB | BD_ENET_TX_LC |
int err = -ENXIO, i;
/*
- * The dual fec interfaces are not equivalent with enet-mac.
+	 * The i.MX28 dual fec interfaces are not identical.
* Here are the differences:
*
* - fec0 supports MII & RMII modes while fec1 only supports RMII
* mdio interface in board design, and need to be configured by
* fec0 mii_bus.
*/
- if ((fep->quirks & FEC_QUIRK_ENET_MAC) && fep->dev_id > 0) {
+ if ((fep->quirks & FEC_QUIRK_SINGLE_MDIO) && fep->dev_id > 0) {
/* fec1 uses fec0 mii_bus */
if (mii_cnt && fec0_mii_bus) {
fep->mii_bus = fec0_mii_bus;
mii_cnt++;
/* save fec0 mii_bus */
- if (fep->quirks & FEC_QUIRK_ENET_MAC)
+ if (fep->quirks & FEC_QUIRK_SINGLE_MDIO)
fec0_mii_bus = fep->mii_bus;
return 0;
pdev->id_entry = of_id->data;
fep->quirks = pdev->id_entry->driver_data;
+ fep->netdev = ndev;
fep->num_rx_queues = num_rx_qs;
fep->num_tx_queues = num_tx_qs;
void inline gfar_tx_vlan(struct sk_buff *skb, struct txfcb *fcb)
{
fcb->flags |= TXFCB_VLN;
- fcb->vlctl = vlan_tx_tag_get(skb);
+ fcb->vlctl = skb_vlan_tag_get(skb);
}
static inline struct txbd8 *skip_txbd(struct txbd8 *bdp, int stride,
regs = tx_queue->grp->regs;
do_csum = (CHECKSUM_PARTIAL == skb->ip_summed);
- do_vlan = vlan_tx_tag_present(skb);
+ do_vlan = skb_vlan_tag_present(skb);
do_tstamp = (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
priv->hwts_tx_en;
__be32 mdio_addr; /* MDIO address */
} __packed;
+#define MDIO_STAT_ENC BIT(6)
#define MDIO_STAT_CLKDIV(x) (((x>>1) & 0xff) << 8)
-#define MDIO_STAT_BSY (1 << 0)
-#define MDIO_STAT_RD_ER (1 << 1)
+#define MDIO_STAT_BSY BIT(0)
+#define MDIO_STAT_RD_ER BIT(1)
#define MDIO_CTL_DEV_ADDR(x) (x & 0x1f)
#define MDIO_CTL_PORT_ADDR(x) ((x & 0x1f) << 5)
-#define MDIO_CTL_PRE_DIS (1 << 10)
-#define MDIO_CTL_SCAN_EN (1 << 11)
-#define MDIO_CTL_POST_INC (1 << 14)
-#define MDIO_CTL_READ (1 << 15)
+#define MDIO_CTL_PRE_DIS BIT(10)
+#define MDIO_CTL_SCAN_EN BIT(11)
+#define MDIO_CTL_POST_INC BIT(14)
+#define MDIO_CTL_READ BIT(15)
#define MDIO_DATA(x) (x & 0xffff)
-#define MDIO_DATA_BSY (1 << 31)
+#define MDIO_DATA_BSY BIT(31)
/*
* Wait until the MDIO bus is free
static int xgmac_wait_until_free(struct device *dev,
struct tgec_mdio_controller __iomem *regs)
{
- uint32_t status;
+ unsigned int timeout;
/* Wait till the bus is free */
- status = spin_event_timeout(
-		!((in_be32(&regs->mdio_stat)) & MDIO_STAT_BSY), TIMEOUT, 0);
- if (!status) {
+ timeout = TIMEOUT;
+	while ((ioread32be(&regs->mdio_stat) & MDIO_STAT_BSY) && timeout) {
+ cpu_relax();
+ timeout--;
+ }
+
+ if (!timeout) {
dev_err(dev, "timeout waiting for bus to be free\n");
return -ETIMEDOUT;
}
static int xgmac_wait_until_done(struct device *dev,
struct tgec_mdio_controller __iomem *regs)
{
- uint32_t status;
+ unsigned int timeout;
/* Wait till the MDIO write is complete */
- status = spin_event_timeout(
-		!((in_be32(&regs->mdio_data)) & MDIO_DATA_BSY), TIMEOUT, 0);
- if (!status) {
+ timeout = TIMEOUT;
+	while ((ioread32be(&regs->mdio_data) & MDIO_DATA_BSY) && timeout) {
+ cpu_relax();
+ timeout--;
+ }
+
+ if (!timeout) {
dev_err(dev, "timeout waiting for operation to complete\n");
return -ETIMEDOUT;
}
static int xgmac_mdio_write(struct mii_bus *bus, int phy_id, int regnum, u16 value)
{
struct tgec_mdio_controller __iomem *regs = bus->priv;
- uint16_t dev_addr = regnum >> 16;
+ uint16_t dev_addr;
+ u32 mdio_ctl, mdio_stat;
int ret;
- /* Set the port and dev addr */
-	out_be32(&regs->mdio_ctl,
- MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr));
+	mdio_stat = ioread32be(&regs->mdio_stat);
+ if (regnum & MII_ADDR_C45) {
+ /* Clause 45 (ie 10G) */
+ dev_addr = (regnum >> 16) & 0x1f;
+ mdio_stat |= MDIO_STAT_ENC;
+ } else {
+ /* Clause 22 (ie 1G) */
+ dev_addr = regnum & 0x1f;
+ mdio_stat &= ~MDIO_STAT_ENC;
+ }
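+
+	/* With MII_ADDR_C45 set, regnum carries the MMD/device type in bits
+	 * 20:16 and the register address in the low 16 bits; the latter is
+	 * written to mdio_addr in a separate step below.
+	 */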
- /* Set the register address */
-	out_be32(&regs->mdio_addr, regnum & 0xffff);
+	iowrite32be(mdio_stat, &regs->mdio_stat);
ret = xgmac_wait_until_free(&bus->dev, regs);
if (ret)
return ret;
+ /* Set the port and dev addr */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+	iowrite32be(mdio_ctl, &regs->mdio_ctl);
+
+ /* Set the register address */
+ if (regnum & MII_ADDR_C45) {
+		iowrite32be(regnum & 0xffff, &regs->mdio_addr);
+
+ ret = xgmac_wait_until_free(&bus->dev, regs);
+ if (ret)
+ return ret;
+ }
+
/* Write the value to the register */
-	out_be32(&regs->mdio_data, MDIO_DATA(value));
+	iowrite32be(MDIO_DATA(value), &regs->mdio_data);
ret = xgmac_wait_until_done(&bus->dev, regs);
if (ret)
static int xgmac_mdio_read(struct mii_bus *bus, int phy_id, int regnum)
{
struct tgec_mdio_controller __iomem *regs = bus->priv;
- uint16_t dev_addr = regnum >> 16;
+ uint16_t dev_addr;
+ uint32_t mdio_stat;
uint32_t mdio_ctl;
uint16_t value;
int ret;
- /* Set the Port and Device Addrs */
- mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
-	out_be32(&regs->mdio_ctl, mdio_ctl);
+	mdio_stat = ioread32be(&regs->mdio_stat);
+ if (regnum & MII_ADDR_C45) {
+ dev_addr = (regnum >> 16) & 0x1f;
+ mdio_stat |= MDIO_STAT_ENC;
+ } else {
+ dev_addr = regnum & 0x1f;
+ mdio_stat &= ~MDIO_STAT_ENC;
+ }
- /* Set the register address */
-	out_be32(&regs->mdio_addr, regnum & 0xffff);
+	iowrite32be(mdio_stat, &regs->mdio_stat);
ret = xgmac_wait_until_free(&bus->dev, regs);
if (ret)
return ret;
+ /* Set the Port and Device Addrs */
+ mdio_ctl = MDIO_CTL_PORT_ADDR(phy_id) | MDIO_CTL_DEV_ADDR(dev_addr);
+	iowrite32be(mdio_ctl, &regs->mdio_ctl);
+
+ /* Set the register address */
+ if (regnum & MII_ADDR_C45) {
+		iowrite32be(regnum & 0xffff, &regs->mdio_addr);
+
+ ret = xgmac_wait_until_free(&bus->dev, regs);
+ if (ret)
+ return ret;
+ }
+
/* Initiate the read */
-	out_be32(&regs->mdio_ctl, mdio_ctl | MDIO_CTL_READ);
+	iowrite32be(mdio_ctl | MDIO_CTL_READ, &regs->mdio_ctl);
ret = xgmac_wait_until_done(&bus->dev, regs);
if (ret)
return ret;
/* Return all Fs if nothing was there */
-	if (in_be32(&regs->mdio_stat) & MDIO_STAT_RD_ER) {
+	if (ioread32be(&regs->mdio_stat) & MDIO_STAT_RD_ER) {
dev_err(&bus->dev,
"Error while reading PHY%d reg at %d.%hhu\n",
phy_id, dev_addr, regnum);
return 0xffff;
}
-	value = in_be32(&regs->mdio_data) & 0xffff;
+	value = ioread32be(&regs->mdio_data) & 0xffff;
dev_dbg(&bus->dev, "read %04x\n", value);
return value;
{
.compatible = "fsl,fman-xmdio",
},
+ {
+ .compatible = "fsl,fman-memac-mdio",
+ },
{},
};
MODULE_DEVICE_TABLE(of, xgmac_mdio_match);
help
This selects the hix5hd2 mac family network device.
+config HIP04_ETH
+ tristate "HISILICON P04 Ethernet support"
+ select PHYLIB
+ select MARVELL_PHY
+ select MFD_SYSCON
+ ---help---
+	  If you wish to compile a kernel for hardware with a HiSilicon P04 SoC
+	  and want to use the internal ethernet, then you should answer Y here.
+
endif # NET_VENDOR_HISILICON
#
obj-$(CONFIG_HIX5HD2_GMAC) += hix5hd2_gmac.o
+obj-$(CONFIG_HIP04_ETH) += hip04_mdio.o hip04_eth.o
--- /dev/null
+/* Copyright (c) 2014 Linaro Ltd.
+ * Copyright (c) 2014 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/etherdevice.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/ktime.h>
+#include <linux/of_address.h>
+#include <linux/phy.h>
+#include <linux/of_mdio.h>
+#include <linux/of_net.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+
+#define PPE_CFG_RX_ADDR 0x100
+#define PPE_CFG_POOL_GRP 0x300
+#define PPE_CFG_RX_BUF_SIZE 0x400
+#define PPE_CFG_RX_FIFO_SIZE 0x500
+#define PPE_CURR_BUF_CNT 0xa200
+
+#define GE_DUPLEX_TYPE 0x08
+#define GE_MAX_FRM_SIZE_REG 0x3c
+#define GE_PORT_MODE 0x40
+#define GE_PORT_EN 0x44
+#define GE_SHORT_RUNTS_THR_REG 0x50
+#define GE_TX_LOCAL_PAGE_REG 0x5c
+#define GE_TRANSMIT_CONTROL_REG 0x60
+#define GE_CF_CRC_STRIP_REG 0x1b0
+#define GE_MODE_CHANGE_REG 0x1b4
+#define GE_RECV_CONTROL_REG 0x1e0
+#define GE_STATION_MAC_ADDRESS 0x210
+#define PPE_CFG_CPU_ADD_ADDR 0x580
+#define PPE_CFG_MAX_FRAME_LEN_REG 0x408
+#define PPE_CFG_BUS_CTRL_REG 0x424
+#define PPE_CFG_RX_CTRL_REG 0x428
+#define PPE_CFG_RX_PKT_MODE_REG 0x438
+#define PPE_CFG_QOS_VMID_GEN 0x500
+#define PPE_CFG_RX_PKT_INT 0x538
+#define PPE_INTEN 0x600
+#define PPE_INTSTS 0x608
+#define PPE_RINT 0x604
+#define PPE_CFG_STS_MODE 0x700
+#define PPE_HIS_RX_PKT_CNT 0x804
+
+/* REG_INTERRUPT */
+#define RCV_INT BIT(10)
+#define RCV_NOBUF BIT(8)
+#define RCV_DROP BIT(7)
+#define TX_DROP BIT(6)
+#define DEF_INT_ERR (RCV_NOBUF | RCV_DROP | TX_DROP)
+#define DEF_INT_MASK (RCV_INT | DEF_INT_ERR)
+
+/* TX descriptor config */
+#define TX_FREE_MEM BIT(0)
+#define TX_READ_ALLOC_L3 BIT(1)
+#define TX_FINISH_CACHE_INV BIT(2)
+#define TX_CLEAR_WB BIT(4)
+#define TX_L3_CHECKSUM BIT(5)
+#define TX_LOOP_BACK BIT(11)
+
+/* RX error */
+#define RX_PKT_DROP BIT(0)
+#define RX_L2_ERR BIT(1)
+#define RX_PKT_ERR (RX_PKT_DROP | RX_L2_ERR)
+
+#define SGMII_SPEED_1000 0x08
+#define SGMII_SPEED_100 0x07
+#define SGMII_SPEED_10 0x06
+#define MII_SPEED_100 0x01
+#define MII_SPEED_10 0x00
+
+#define GE_DUPLEX_FULL BIT(0)
+#define GE_DUPLEX_HALF 0x00
+#define GE_MODE_CHANGE_EN BIT(0)
+
+#define GE_TX_AUTO_NEG BIT(5)
+#define GE_TX_ADD_CRC BIT(6)
+#define GE_TX_SHORT_PAD_THROUGH BIT(7)
+
+#define GE_RX_STRIP_CRC BIT(0)
+#define GE_RX_STRIP_PAD BIT(3)
+#define GE_RX_PAD_EN BIT(4)
+
+#define GE_AUTO_NEG_CTL BIT(0)
+
+#define GE_RX_INT_THRESHOLD BIT(6)
+#define GE_RX_TIMEOUT 0x04
+
+#define GE_RX_PORT_EN BIT(1)
+#define GE_TX_PORT_EN BIT(2)
+
+#define PPE_CFG_STS_RX_PKT_CNT_RC BIT(12)
+
+#define PPE_CFG_RX_PKT_ALIGN BIT(18)
+#define PPE_CFG_QOS_VMID_MODE BIT(14)
+#define PPE_CFG_QOS_VMID_GRP_SHIFT 8
+
+#define PPE_CFG_RX_FIFO_FSFU BIT(11)
+#define PPE_CFG_RX_DEPTH_SHIFT 16
+#define PPE_CFG_RX_START_SHIFT 0
+#define PPE_CFG_RX_CTRL_ALIGN_SHIFT 11
+
+#define PPE_CFG_BUS_LOCAL_REL BIT(14)
+#define PPE_CFG_BUS_BIG_ENDIEN BIT(0)
+
+#define RX_DESC_NUM 128
+#define TX_DESC_NUM 256
+#define TX_NEXT(N) (((N) + 1) & (TX_DESC_NUM-1))
+#define RX_NEXT(N) (((N) + 1) & (RX_DESC_NUM-1))
+
+#define GMAC_PPE_RX_PKT_MAX_LEN 379
+#define GMAC_MAX_PKT_LEN 1516
+#define GMAC_MIN_PKT_LEN 31
+#define RX_BUF_SIZE 1600
+#define RESET_TIMEOUT 1000
+#define TX_TIMEOUT (6 * HZ)
+
+#define DRV_NAME "hip04-ether"
+#define DRV_VERSION "v1.0"
+
+#define HIP04_MAX_TX_COALESCE_USECS 200
+#define HIP04_MIN_TX_COALESCE_USECS 100
+#define HIP04_MAX_TX_COALESCE_FRAMES 200
+#define HIP04_MIN_TX_COALESCE_FRAMES 100
+
+struct tx_desc {
+ u32 send_addr;
+ u32 send_size;
+ u32 next_addr;
+ u32 cfg;
+ u32 wb_addr;
+} __aligned(64);
+
+struct rx_desc {
+ u16 reserved_16;
+ u16 pkt_len;
+ u32 reserve1[3];
+ u32 pkt_err;
+ u32 reserve2[4];
+};
+
+struct hip04_priv {
+ void __iomem *base;
+ int phy_mode;
+ int chan;
+ unsigned int port;
+ unsigned int speed;
+ unsigned int duplex;
+ unsigned int reg_inten;
+
+ struct napi_struct napi;
+ struct net_device *ndev;
+
+ struct tx_desc *tx_desc;
+ dma_addr_t tx_desc_dma;
+ struct sk_buff *tx_skb[TX_DESC_NUM];
+ dma_addr_t tx_phys[TX_DESC_NUM];
+ unsigned int tx_head;
+
+ int tx_coalesce_frames;
+ int tx_coalesce_usecs;
+ struct hrtimer tx_coalesce_timer;
+
+ unsigned char *rx_buf[RX_DESC_NUM];
+ dma_addr_t rx_phys[RX_DESC_NUM];
+ unsigned int rx_head;
+ unsigned int rx_buf_size;
+
+ struct device_node *phy_node;
+ struct phy_device *phy;
+ struct regmap *map;
+ struct work_struct tx_timeout_task;
+
+ /* written only by tx cleanup */
+ unsigned int tx_tail ____cacheline_aligned_in_smp;
+};
+
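+/* Number of tx descriptors currently in use, computed from the free-running
+ * head (advanced by xmit) and tail (advanced by reclaim) indices.
+ */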
+static inline unsigned int tx_count(unsigned int head, unsigned int tail)
+{
+ return (head - tail) % (TX_DESC_NUM - 1);
+}
+
+static void hip04_config_port(struct net_device *ndev, u32 speed, u32 duplex)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ u32 val;
+
+ priv->speed = speed;
+ priv->duplex = duplex;
+
+ switch (priv->phy_mode) {
+ case PHY_INTERFACE_MODE_SGMII:
+ if (speed == SPEED_1000)
+ val = SGMII_SPEED_1000;
+ else if (speed == SPEED_100)
+ val = SGMII_SPEED_100;
+ else
+ val = SGMII_SPEED_10;
+ break;
+ case PHY_INTERFACE_MODE_MII:
+ if (speed == SPEED_100)
+ val = MII_SPEED_100;
+ else
+ val = MII_SPEED_10;
+ break;
+ default:
+		netdev_warn(ndev, "unsupported PHY mode\n");
+ val = MII_SPEED_10;
+ break;
+ }
+ writel_relaxed(val, priv->base + GE_PORT_MODE);
+
+ val = duplex ? GE_DUPLEX_FULL : GE_DUPLEX_HALF;
+ writel_relaxed(val, priv->base + GE_DUPLEX_TYPE);
+
+ val = GE_MODE_CHANGE_EN;
+ writel_relaxed(val, priv->base + GE_MODE_CHANGE_REG);
+}
+
+static void hip04_reset_ppe(struct hip04_priv *priv)
+{
+ u32 val, tmp, timeout = 0;
+
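+	/* Drain buffers still held by the PPE for this port: each read of
+	 * PPE_CFG_RX_ADDR appears to pop one posted address; loop until the
+	 * buffer count is zero or we give up.
+	 */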
+ do {
+ regmap_read(priv->map, priv->port * 4 + PPE_CURR_BUF_CNT, &val);
+ regmap_read(priv->map, priv->port * 4 + PPE_CFG_RX_ADDR, &tmp);
+ if (timeout++ > RESET_TIMEOUT)
+ break;
+ } while (val & 0xfff);
+}
+
+static void hip04_config_fifo(struct hip04_priv *priv)
+{
+ u32 val;
+
+ val = readl_relaxed(priv->base + PPE_CFG_STS_MODE);
+ val |= PPE_CFG_STS_RX_PKT_CNT_RC;
+ writel_relaxed(val, priv->base + PPE_CFG_STS_MODE);
+
+ val = BIT(priv->port);
+ regmap_write(priv->map, priv->port * 4 + PPE_CFG_POOL_GRP, val);
+
+ val = priv->port << PPE_CFG_QOS_VMID_GRP_SHIFT;
+ val |= PPE_CFG_QOS_VMID_MODE;
+ writel_relaxed(val, priv->base + PPE_CFG_QOS_VMID_GEN);
+
+ val = RX_BUF_SIZE;
+ regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_BUF_SIZE, val);
+
+ val = RX_DESC_NUM << PPE_CFG_RX_DEPTH_SHIFT;
+ val |= PPE_CFG_RX_FIFO_FSFU;
+ val |= priv->chan << PPE_CFG_RX_START_SHIFT;
+ regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_FIFO_SIZE, val);
+
+ val = NET_IP_ALIGN << PPE_CFG_RX_CTRL_ALIGN_SHIFT;
+ writel_relaxed(val, priv->base + PPE_CFG_RX_CTRL_REG);
+
+ val = PPE_CFG_RX_PKT_ALIGN;
+ writel_relaxed(val, priv->base + PPE_CFG_RX_PKT_MODE_REG);
+
+ val = PPE_CFG_BUS_LOCAL_REL | PPE_CFG_BUS_BIG_ENDIEN;
+ writel_relaxed(val, priv->base + PPE_CFG_BUS_CTRL_REG);
+
+ val = GMAC_PPE_RX_PKT_MAX_LEN;
+ writel_relaxed(val, priv->base + PPE_CFG_MAX_FRAME_LEN_REG);
+
+ val = GMAC_MAX_PKT_LEN;
+ writel_relaxed(val, priv->base + GE_MAX_FRM_SIZE_REG);
+
+ val = GMAC_MIN_PKT_LEN;
+ writel_relaxed(val, priv->base + GE_SHORT_RUNTS_THR_REG);
+
+ val = readl_relaxed(priv->base + GE_TRANSMIT_CONTROL_REG);
+ val |= GE_TX_AUTO_NEG | GE_TX_ADD_CRC | GE_TX_SHORT_PAD_THROUGH;
+ writel_relaxed(val, priv->base + GE_TRANSMIT_CONTROL_REG);
+
+ val = GE_RX_STRIP_CRC;
+ writel_relaxed(val, priv->base + GE_CF_CRC_STRIP_REG);
+
+ val = readl_relaxed(priv->base + GE_RECV_CONTROL_REG);
+ val |= GE_RX_STRIP_PAD | GE_RX_PAD_EN;
+ writel_relaxed(val, priv->base + GE_RECV_CONTROL_REG);
+
+ val = GE_AUTO_NEG_CTL;
+ writel_relaxed(val, priv->base + GE_TX_LOCAL_PAGE_REG);
+}
+
+static void hip04_mac_enable(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ u32 val;
+
+ /* enable tx & rx */
+ val = readl_relaxed(priv->base + GE_PORT_EN);
+ val |= GE_RX_PORT_EN | GE_TX_PORT_EN;
+ writel_relaxed(val, priv->base + GE_PORT_EN);
+
+ /* clear rx int */
+ val = RCV_INT;
+ writel_relaxed(val, priv->base + PPE_RINT);
+
+ /* config recv int */
+ val = GE_RX_INT_THRESHOLD | GE_RX_TIMEOUT;
+ writel_relaxed(val, priv->base + PPE_CFG_RX_PKT_INT);
+
+ /* enable interrupt */
+ priv->reg_inten = DEF_INT_MASK;
+ writel_relaxed(priv->reg_inten, priv->base + PPE_INTEN);
+}
+
+static void hip04_mac_disable(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ u32 val;
+
+ /* disable int */
+ priv->reg_inten &= ~(DEF_INT_MASK);
+ writel_relaxed(priv->reg_inten, priv->base + PPE_INTEN);
+
+ /* disable tx & rx */
+ val = readl_relaxed(priv->base + GE_PORT_EN);
+ val &= ~(GE_RX_PORT_EN | GE_TX_PORT_EN);
+ writel_relaxed(val, priv->base + GE_PORT_EN);
+}
+
+static void hip04_set_xmit_desc(struct hip04_priv *priv, dma_addr_t phys)
+{
+ writel(phys, priv->base + PPE_CFG_CPU_ADD_ADDR);
+}
+
+static void hip04_set_recv_desc(struct hip04_priv *priv, dma_addr_t phys)
+{
+ regmap_write(priv->map, priv->port * 4 + PPE_CFG_RX_ADDR, phys);
+}
+
+static u32 hip04_recv_cnt(struct hip04_priv *priv)
+{
+ return readl(priv->base + PPE_HIS_RX_PKT_CNT);
+}
+
+static void hip04_update_mac_address(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+
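+	/* The station MAC is split across two registers: bytes 0-1 of the
+	 * address in the first word, bytes 2-5 in the second.
+	 */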
+ writel_relaxed(((ndev->dev_addr[0] << 8) | (ndev->dev_addr[1])),
+ priv->base + GE_STATION_MAC_ADDRESS);
+ writel_relaxed(((ndev->dev_addr[2] << 24) | (ndev->dev_addr[3] << 16) |
+ (ndev->dev_addr[4] << 8) | (ndev->dev_addr[5])),
+ priv->base + GE_STATION_MAC_ADDRESS + 4);
+}
+
+static int hip04_set_mac_address(struct net_device *ndev, void *addr)
+{
+ eth_mac_addr(ndev, addr);
+ hip04_update_mac_address(ndev);
+ return 0;
+}
+
+static int hip04_tx_reclaim(struct net_device *ndev, bool force)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ unsigned tx_tail = priv->tx_tail;
+ struct tx_desc *desc;
+ unsigned int bytes_compl = 0, pkts_compl = 0;
+ unsigned int count;
+
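+	/* "force" (used on device teardown) reclaims descriptors even if the
+	 * hardware never wrote back their completion status.
+	 */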
+ smp_rmb();
+ count = tx_count(ACCESS_ONCE(priv->tx_head), tx_tail);
+ if (count == 0)
+ goto out;
+
+ while (count) {
+ desc = &priv->tx_desc[tx_tail];
+ if (desc->send_addr != 0) {
+ if (force)
+ desc->send_addr = 0;
+ else
+ break;
+ }
+
+ if (priv->tx_phys[tx_tail]) {
+ dma_unmap_single(&ndev->dev, priv->tx_phys[tx_tail],
+ priv->tx_skb[tx_tail]->len,
+ DMA_TO_DEVICE);
+ priv->tx_phys[tx_tail] = 0;
+ }
+ pkts_compl++;
+ bytes_compl += priv->tx_skb[tx_tail]->len;
+ dev_kfree_skb(priv->tx_skb[tx_tail]);
+ priv->tx_skb[tx_tail] = NULL;
+ tx_tail = TX_NEXT(tx_tail);
+ count--;
+ }
+
+ priv->tx_tail = tx_tail;
+ smp_wmb(); /* Ensure tx_tail visible to xmit */
+
+out:
+ if (pkts_compl || bytes_compl)
+ netdev_completed_queue(ndev, pkts_compl, bytes_compl);
+
+ if (unlikely(netif_queue_stopped(ndev)) && (count < (TX_DESC_NUM - 1)))
+ netif_wake_queue(ndev);
+
+ return count;
+}
+
+static int hip04_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
+ unsigned int tx_head = priv->tx_head, count;
+ struct tx_desc *desc = &priv->tx_desc[tx_head];
+ dma_addr_t phys;
+
+ smp_rmb();
+ count = tx_count(tx_head, ACCESS_ONCE(priv->tx_tail));
+ if (count == (TX_DESC_NUM - 1)) {
+ netif_stop_queue(ndev);
+ return NETDEV_TX_BUSY;
+ }
+
+ phys = dma_map_single(&ndev->dev, skb->data, skb->len, DMA_TO_DEVICE);
+ if (dma_mapping_error(&ndev->dev, phys)) {
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+ priv->tx_skb[tx_head] = skb;
+ priv->tx_phys[tx_head] = phys;
+ desc->send_addr = cpu_to_be32(phys);
+ desc->send_size = cpu_to_be32(skb->len);
+ desc->cfg = cpu_to_be32(TX_CLEAR_WB | TX_FINISH_CACHE_INV);
+ phys = priv->tx_desc_dma + tx_head * sizeof(struct tx_desc);
+ desc->wb_addr = cpu_to_be32(phys);
+ skb_tx_timestamp(skb);
+
+ hip04_set_xmit_desc(priv, phys);
+ priv->tx_head = TX_NEXT(tx_head);
+ count++;
+ netdev_sent_queue(ndev, skb->len);
+
+ stats->tx_bytes += skb->len;
+ stats->tx_packets++;
+
+ /* Ensure tx_head update visible to tx reclaim */
+ smp_wmb();
+
+ /* queue is getting full, better start cleaning up now */
+ if (count >= priv->tx_coalesce_frames) {
+ if (napi_schedule_prep(&priv->napi)) {
+ /* disable rx interrupt and timer */
+ priv->reg_inten &= ~(RCV_INT);
+ writel_relaxed(DEF_INT_MASK & ~RCV_INT,
+ priv->base + PPE_INTEN);
+ hrtimer_cancel(&priv->tx_coalesce_timer);
+ __napi_schedule(&priv->napi);
+ }
+ } else if (!hrtimer_is_queued(&priv->tx_coalesce_timer)) {
+ /* cleanup not pending yet, start a new timer */
+ hrtimer_start_expires(&priv->tx_coalesce_timer,
+ HRTIMER_MODE_REL);
+ }
+
+ return NETDEV_TX_OK;
+}
+
+static int hip04_rx_poll(struct napi_struct *napi, int budget)
+{
+ struct hip04_priv *priv = container_of(napi, struct hip04_priv, napi);
+ struct net_device *ndev = priv->ndev;
+ struct net_device_stats *stats = &ndev->stats;
+ unsigned int cnt = hip04_recv_cnt(priv);
+ struct rx_desc *desc;
+ struct sk_buff *skb;
+ unsigned char *buf;
+ bool last = false;
+ dma_addr_t phys;
+ int rx = 0;
+ int tx_remaining;
+ u16 len;
+ u32 err;
+
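+	/* A zero-length descriptor marks the end of the received chain; error
+	 * frames are dropped and counted, everything else is passed up via
+	 * GRO. Each consumed ring slot is refilled with a fresh fragment.
+	 */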
+ while (cnt && !last) {
+ buf = priv->rx_buf[priv->rx_head];
+		dma_unmap_single(&ndev->dev, priv->rx_phys[priv->rx_head],
+				 RX_BUF_SIZE, DMA_FROM_DEVICE);
+		priv->rx_phys[priv->rx_head] = 0;
+
+		skb = build_skb(buf, priv->rx_buf_size);
+		if (unlikely(!skb)) {
+			net_dbg_ratelimited("build_skb failed\n");
+			/* drop this frame but still refill the ring slot */
+			put_page(virt_to_head_page(buf));
+			stats->rx_dropped++;
+			goto refill;
+		}
+
+ desc = (struct rx_desc *)skb->data;
+ len = be16_to_cpu(desc->pkt_len);
+ err = be32_to_cpu(desc->pkt_err);
+
+		if (len == 0) {
+ dev_kfree_skb_any(skb);
+ last = true;
+ } else if ((err & RX_PKT_ERR) || (len >= GMAC_MAX_PKT_LEN)) {
+ dev_kfree_skb_any(skb);
+ stats->rx_dropped++;
+ stats->rx_errors++;
+ } else {
+ skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
+ skb_put(skb, len);
+ skb->protocol = eth_type_trans(skb, ndev);
+ napi_gro_receive(&priv->napi, skb);
+ stats->rx_packets++;
+ stats->rx_bytes += len;
+ rx++;
+ }
+
+refill:
+		buf = netdev_alloc_frag(priv->rx_buf_size);
+ if (!buf)
+ goto done;
+ phys = dma_map_single(&ndev->dev, buf,
+ RX_BUF_SIZE, DMA_FROM_DEVICE);
+ if (dma_mapping_error(&ndev->dev, phys))
+ goto done;
+ priv->rx_buf[priv->rx_head] = buf;
+ priv->rx_phys[priv->rx_head] = phys;
+ hip04_set_recv_desc(priv, phys);
+
+ priv->rx_head = RX_NEXT(priv->rx_head);
+ if (rx >= budget)
+ goto done;
+
+ if (--cnt == 0)
+ cnt = hip04_recv_cnt(priv);
+ }
+
+ if (!(priv->reg_inten & RCV_INT)) {
+ /* enable rx interrupt */
+ priv->reg_inten |= RCV_INT;
+ writel_relaxed(priv->reg_inten, priv->base + PPE_INTEN);
+ }
+ napi_complete(napi);
+done:
+ /* clean up tx descriptors and start a new timer if necessary */
+ tx_remaining = hip04_tx_reclaim(ndev, false);
+ if (rx < budget && tx_remaining)
+ hrtimer_start_expires(&priv->tx_coalesce_timer, HRTIMER_MODE_REL);
+
+ return rx;
+}
+
+static irqreturn_t hip04_mac_interrupt(int irq, void *dev_id)
+{
+ struct net_device *ndev = (struct net_device *)dev_id;
+ struct hip04_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
+ u32 ists = readl_relaxed(priv->base + PPE_INTSTS);
+
+ if (!ists)
+ return IRQ_NONE;
+
+ writel_relaxed(DEF_INT_MASK, priv->base + PPE_RINT);
+
+ if (unlikely(ists & DEF_INT_ERR)) {
+		if (ists & (RCV_NOBUF | RCV_DROP)) {
+			stats->rx_errors++;
+			stats->rx_dropped++;
+			netdev_err(ndev, "rx drop\n");
+		}
+ if (ists & TX_DROP) {
+ stats->tx_dropped++;
+ netdev_err(ndev, "tx drop\n");
+ }
+ }
+
+ if (ists & RCV_INT && napi_schedule_prep(&priv->napi)) {
+ /* disable rx interrupt */
+ priv->reg_inten &= ~(RCV_INT);
+ writel_relaxed(DEF_INT_MASK & ~RCV_INT, priv->base + PPE_INTEN);
+ hrtimer_cancel(&priv->tx_coalesce_timer);
+ __napi_schedule(&priv->napi);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static enum hrtimer_restart tx_done(struct hrtimer *hrtimer)
+{
+ struct hip04_priv *priv;
+
+ priv = container_of(hrtimer, struct hip04_priv, tx_coalesce_timer);
+
+ if (napi_schedule_prep(&priv->napi)) {
+ /* disable rx interrupt */
+ priv->reg_inten &= ~(RCV_INT);
+ writel_relaxed(DEF_INT_MASK & ~RCV_INT, priv->base + PPE_INTEN);
+ __napi_schedule(&priv->napi);
+ }
+
+ return HRTIMER_NORESTART;
+}
+
+static void hip04_adjust_link(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ struct phy_device *phy = priv->phy;
+
+ if ((priv->speed != phy->speed) || (priv->duplex != phy->duplex)) {
+ hip04_config_port(ndev, phy->speed, phy->duplex);
+ phy_print_status(phy);
+ }
+}
+
+static int hip04_mac_open(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ int i;
+
+ priv->rx_head = 0;
+ priv->tx_head = 0;
+ priv->tx_tail = 0;
+ hip04_reset_ppe(priv);
+
+ for (i = 0; i < RX_DESC_NUM; i++) {
+ dma_addr_t phys;
+
+ phys = dma_map_single(&ndev->dev, priv->rx_buf[i],
+ RX_BUF_SIZE, DMA_FROM_DEVICE);
+ if (dma_mapping_error(&ndev->dev, phys))
+ return -EIO;
+
+ priv->rx_phys[i] = phys;
+ hip04_set_recv_desc(priv, phys);
+ }
+
+ if (priv->phy)
+ phy_start(priv->phy);
+
+ netdev_reset_queue(ndev);
+ netif_start_queue(ndev);
+ hip04_mac_enable(ndev);
+ napi_enable(&priv->napi);
+
+ return 0;
+}
+
+static int hip04_mac_stop(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ int i;
+
+ napi_disable(&priv->napi);
+ netif_stop_queue(ndev);
+ hip04_mac_disable(ndev);
+ hip04_tx_reclaim(ndev, true);
+ hip04_reset_ppe(priv);
+
+ if (priv->phy)
+ phy_stop(priv->phy);
+
+ for (i = 0; i < RX_DESC_NUM; i++) {
+ if (priv->rx_phys[i]) {
+ dma_unmap_single(&ndev->dev, priv->rx_phys[i],
+ RX_BUF_SIZE, DMA_FROM_DEVICE);
+ priv->rx_phys[i] = 0;
+ }
+ }
+
+ return 0;
+}
+
+static void hip04_timeout(struct net_device *ndev)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+
+ schedule_work(&priv->tx_timeout_task);
+}
+
+static void hip04_tx_timeout_task(struct work_struct *work)
+{
+ struct hip04_priv *priv;
+
+ priv = container_of(work, struct hip04_priv, tx_timeout_task);
+ hip04_mac_stop(priv->ndev);
+ hip04_mac_open(priv->ndev);
+}
+
+static struct net_device_stats *hip04_get_stats(struct net_device *ndev)
+{
+ return &ndev->stats;
+}
+
+static int hip04_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec)
+{
+ struct hip04_priv *priv = netdev_priv(netdev);
+
+ ec->tx_coalesce_usecs = priv->tx_coalesce_usecs;
+ ec->tx_max_coalesced_frames = priv->tx_coalesce_frames;
+
+ return 0;
+}
+
+static int hip04_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec)
+{
+ struct hip04_priv *priv = netdev_priv(netdev);
+
+	/* Check for unsupported parameters */
+ if ((ec->rx_max_coalesced_frames) || (ec->rx_coalesce_usecs_irq) ||
+ (ec->rx_max_coalesced_frames_irq) || (ec->tx_coalesce_usecs_irq) ||
+ (ec->use_adaptive_rx_coalesce) || (ec->use_adaptive_tx_coalesce) ||
+ (ec->pkt_rate_low) || (ec->rx_coalesce_usecs_low) ||
+ (ec->rx_max_coalesced_frames_low) || (ec->tx_coalesce_usecs_high) ||
+ (ec->tx_max_coalesced_frames_low) || (ec->pkt_rate_high) ||
+ (ec->tx_coalesce_usecs_low) || (ec->rx_coalesce_usecs_high) ||
+ (ec->rx_max_coalesced_frames_high) || (ec->rx_coalesce_usecs) ||
+ (ec->tx_max_coalesced_frames_irq) ||
+ (ec->stats_block_coalesce_usecs) ||
+ (ec->tx_max_coalesced_frames_high) || (ec->rate_sample_interval))
+ return -EOPNOTSUPP;
+
+ if ((ec->tx_coalesce_usecs > HIP04_MAX_TX_COALESCE_USECS ||
+ ec->tx_coalesce_usecs < HIP04_MIN_TX_COALESCE_USECS) ||
+ (ec->tx_max_coalesced_frames > HIP04_MAX_TX_COALESCE_FRAMES ||
+ ec->tx_max_coalesced_frames < HIP04_MIN_TX_COALESCE_FRAMES))
+ return -EINVAL;
+
+ priv->tx_coalesce_usecs = ec->tx_coalesce_usecs;
+ priv->tx_coalesce_frames = ec->tx_max_coalesced_frames;
+
+ return 0;
+}
+
+static void hip04_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ strlcpy(drvinfo->driver, DRV_NAME, sizeof(drvinfo->driver));
+ strlcpy(drvinfo->version, DRV_VERSION, sizeof(drvinfo->version));
+}
+
+static const struct ethtool_ops hip04_ethtool_ops = {
+ .get_coalesce = hip04_get_coalesce,
+ .set_coalesce = hip04_set_coalesce,
+ .get_drvinfo = hip04_get_drvinfo,
+};
+
+static const struct net_device_ops hip04_netdev_ops = {
+ .ndo_open = hip04_mac_open,
+ .ndo_stop = hip04_mac_stop,
+ .ndo_get_stats = hip04_get_stats,
+ .ndo_start_xmit = hip04_mac_start_xmit,
+ .ndo_set_mac_address = hip04_set_mac_address,
+ .ndo_tx_timeout = hip04_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
+static int hip04_alloc_ring(struct net_device *ndev, struct device *d)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ int i;
+
+ priv->tx_desc = dma_alloc_coherent(d,
+ TX_DESC_NUM * sizeof(struct tx_desc),
+ &priv->tx_desc_dma, GFP_KERNEL);
+ if (!priv->tx_desc)
+ return -ENOMEM;
+
+ priv->rx_buf_size = RX_BUF_SIZE +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ for (i = 0; i < RX_DESC_NUM; i++) {
+ priv->rx_buf[i] = netdev_alloc_frag(priv->rx_buf_size);
+ if (!priv->rx_buf[i])
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void hip04_free_ring(struct net_device *ndev, struct device *d)
+{
+ struct hip04_priv *priv = netdev_priv(ndev);
+ int i;
+
+ for (i = 0; i < RX_DESC_NUM; i++)
+ if (priv->rx_buf[i])
+ put_page(virt_to_head_page(priv->rx_buf[i]));
+
+ for (i = 0; i < TX_DESC_NUM; i++)
+ if (priv->tx_skb[i])
+ dev_kfree_skb_any(priv->tx_skb[i]);
+
+ dma_free_coherent(d, TX_DESC_NUM * sizeof(struct tx_desc),
+ priv->tx_desc, priv->tx_desc_dma);
+}
+
+static int hip04_mac_probe(struct platform_device *pdev)
+{
+ struct device *d = &pdev->dev;
+ struct device_node *node = d->of_node;
+ struct of_phandle_args arg;
+ struct net_device *ndev;
+ struct hip04_priv *priv;
+ struct resource *res;
+ unsigned int irq;
+ ktime_t txtime;
+ int ret;
+
+ ndev = alloc_etherdev(sizeof(struct hip04_priv));
+ if (!ndev)
+ return -ENOMEM;
+
+ priv = netdev_priv(ndev);
+ priv->ndev = ndev;
+ platform_set_drvdata(pdev, ndev);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ priv->base = devm_ioremap_resource(d, res);
+ if (IS_ERR(priv->base)) {
+ ret = PTR_ERR(priv->base);
+ goto init_fail;
+ }
+
+ ret = of_parse_phandle_with_fixed_args(node, "port-handle", 2, 0, &arg);
+ if (ret < 0) {
+ dev_warn(d, "no port-handle\n");
+ goto init_fail;
+ }
+
+ priv->port = arg.args[0];
+ priv->chan = arg.args[1] * RX_DESC_NUM;
+
+ hrtimer_init(&priv->tx_coalesce_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+
+ /* BQL will try to keep the TX queue as short as possible, but it can't
+ * be faster than tx_coalesce_usecs, so we need a fast timeout here,
+ * but also long enough to gather up enough frames to ensure we don't
+ * get more interrupts than necessary.
+ * 200us is enough for 16 frames of 1500 bytes at gigabit ethernet rate
+ */
+ priv->tx_coalesce_frames = TX_DESC_NUM * 3 / 4;
+ priv->tx_coalesce_usecs = 200;
+ /* allow timer to fire after half the time at the earliest */
+ txtime = ktime_set(0, priv->tx_coalesce_usecs * NSEC_PER_USEC / 2);
+ hrtimer_set_expires_range(&priv->tx_coalesce_timer, txtime, txtime);
+ priv->tx_coalesce_timer.function = tx_done;
+
+ priv->map = syscon_node_to_regmap(arg.np);
+ if (IS_ERR(priv->map)) {
+ dev_warn(d, "no syscon hisilicon,hip04-ppe\n");
+ ret = PTR_ERR(priv->map);
+ goto init_fail;
+ }
+
+ priv->phy_mode = of_get_phy_mode(node);
+ if (priv->phy_mode < 0) {
+ dev_warn(d, "not find phy-mode\n");
+ ret = -EINVAL;
+ goto init_fail;
+ }
+
+ irq = platform_get_irq(pdev, 0);
+ if (irq <= 0) {
+ ret = -EINVAL;
+ goto init_fail;
+ }
+
+ ret = devm_request_irq(d, irq, hip04_mac_interrupt,
+ 0, pdev->name, ndev);
+ if (ret) {
+ netdev_err(ndev, "devm_request_irq failed\n");
+ goto init_fail;
+ }
+
+ priv->phy_node = of_parse_phandle(node, "phy-handle", 0);
+ if (priv->phy_node) {
+ priv->phy = of_phy_connect(ndev, priv->phy_node,
+ &hip04_adjust_link,
+ 0, priv->phy_mode);
+ if (!priv->phy) {
+ ret = -EPROBE_DEFER;
+ goto init_fail;
+ }
+ }
+
+ INIT_WORK(&priv->tx_timeout_task, hip04_tx_timeout_task);
+
+ ether_setup(ndev);
+ ndev->netdev_ops = &hip04_netdev_ops;
+ ndev->ethtool_ops = &hip04_ethtool_ops;
+ ndev->watchdog_timeo = TX_TIMEOUT;
+ ndev->priv_flags |= IFF_UNICAST_FLT;
+ ndev->irq = irq;
+ netif_napi_add(ndev, &priv->napi, hip04_rx_poll, NAPI_POLL_WEIGHT);
+ SET_NETDEV_DEV(ndev, &pdev->dev);
+
+ hip04_reset_ppe(priv);
+ if (priv->phy_mode == PHY_INTERFACE_MODE_MII)
+ hip04_config_port(ndev, SPEED_100, DUPLEX_FULL);
+
+ hip04_config_fifo(priv);
+ random_ether_addr(ndev->dev_addr);
+ hip04_update_mac_address(ndev);
+
+ ret = hip04_alloc_ring(ndev, d);
+ if (ret) {
+ netdev_err(ndev, "alloc ring fail\n");
+ goto alloc_fail;
+ }
+
+	/* the common error path below frees the netdev, so do not free
+	 * it here as well
+	 */
+	ret = register_netdev(ndev);
+	if (ret)
+		goto alloc_fail;
+
+ return 0;
+
+alloc_fail:
+ hip04_free_ring(ndev, d);
+init_fail:
+ of_node_put(priv->phy_node);
+ free_netdev(ndev);
+ return ret;
+}
+
+static int hip04_remove(struct platform_device *pdev)
+{
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct hip04_priv *priv = netdev_priv(ndev);
+ struct device *d = &pdev->dev;
+
+ if (priv->phy)
+ phy_disconnect(priv->phy);
+
+ hip04_free_ring(ndev, d);
+ unregister_netdev(ndev);
+ free_irq(ndev->irq, ndev);
+ of_node_put(priv->phy_node);
+ cancel_work_sync(&priv->tx_timeout_task);
+ free_netdev(ndev);
+
+ return 0;
+}
+
+static const struct of_device_id hip04_mac_match[] = {
+ { .compatible = "hisilicon,hip04-mac" },
+ { }
+};
+
+MODULE_DEVICE_TABLE(of, hip04_mac_match);
+
+static struct platform_driver hip04_mac_driver = {
+ .probe = hip04_mac_probe,
+ .remove = hip04_remove,
+ .driver = {
+ .name = DRV_NAME,
+ .owner = THIS_MODULE,
+ .of_match_table = hip04_mac_match,
+ },
+};
+module_platform_driver(hip04_mac_driver);
+
+MODULE_DESCRIPTION("HISILICON P04 Ethernet driver");
--- /dev/null
+/* Copyright (c) 2014 Linaro Ltd.
+ * Copyright (c) 2014 Hisilicon Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/io.h>
+#include <linux/of_mdio.h>
+#include <linux/delay.h>
+
+#define MDIO_CMD_REG 0x0
+#define MDIO_ADDR_REG 0x4
+#define MDIO_WDATA_REG 0x8
+#define MDIO_RDATA_REG 0xc
+#define MDIO_STA_REG 0x10
+
+#define MDIO_START BIT(14)
+#define MDIO_R_VALID BIT(1)
+#define MDIO_READ (BIT(12) | BIT(11) | MDIO_START)
+#define MDIO_WRITE (BIT(12) | BIT(10) | MDIO_START)
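+
+/* A command word, as built by the accessors below, carries the register
+ * number in bits [4:0], the PHY address in bits [9:5] and the opcode in
+ * bits [12:10], with MDIO_START triggering the transaction (layout
+ * inferred from the read/write helpers).
+ */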
+
+struct hip04_mdio_priv {
+ void __iomem *base;
+};
+
+#define WAIT_TIMEOUT 10
+static int hip04_mdio_wait_ready(struct mii_bus *bus)
+{
+ struct hip04_mdio_priv *priv = bus->priv;
+ int i;
+
+ for (i = 0; readl_relaxed(priv->base + MDIO_CMD_REG) & MDIO_START; i++) {
+ if (i == WAIT_TIMEOUT)
+ return -ETIMEDOUT;
+ msleep(20);
+ }
+
+ return 0;
+}
+
+static int hip04_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+{
+ struct hip04_mdio_priv *priv = bus->priv;
+ u32 val;
+ int ret;
+
+ ret = hip04_mdio_wait_ready(bus);
+ if (ret < 0)
+ goto out;
+
+ val = regnum | (mii_id << 5) | MDIO_READ;
+ writel_relaxed(val, priv->base + MDIO_CMD_REG);
+
+ ret = hip04_mdio_wait_ready(bus);
+ if (ret < 0)
+ goto out;
+
+ val = readl_relaxed(priv->base + MDIO_STA_REG);
+ if (val & MDIO_R_VALID) {
+ dev_err(bus->parent, "SMI bus read not valid\n");
+ ret = -ENODEV;
+ goto out;
+ }
+
+ val = readl_relaxed(priv->base + MDIO_RDATA_REG);
+ ret = val & 0xFFFF;
+out:
+ return ret;
+}
+
+static int hip04_mdio_write(struct mii_bus *bus, int mii_id,
+ int regnum, u16 value)
+{
+ struct hip04_mdio_priv *priv = bus->priv;
+ u32 val;
+ int ret;
+
+ ret = hip04_mdio_wait_ready(bus);
+ if (ret < 0)
+ goto out;
+
+ writel_relaxed(value, priv->base + MDIO_WDATA_REG);
+ val = regnum | (mii_id << 5) | MDIO_WRITE;
+ writel_relaxed(val, priv->base + MDIO_CMD_REG);
+out:
+ return ret;
+}
+
+static int hip04_mdio_reset(struct mii_bus *bus)
+{
+ int temp, i;
+
+ for (i = 0; i < PHY_MAX_ADDR; i++) {
+ hip04_mdio_write(bus, i, 22, 0);
+ temp = hip04_mdio_read(bus, i, MII_BMCR);
+ if (temp < 0)
+ continue;
+
+ temp |= BMCR_RESET;
+ if (hip04_mdio_write(bus, i, MII_BMCR, temp) < 0)
+ continue;
+ }
+
+ mdelay(500);
+ return 0;
+}
+
+static int hip04_mdio_probe(struct platform_device *pdev)
+{
+ struct resource *r;
+ struct mii_bus *bus;
+ struct hip04_mdio_priv *priv;
+ int ret;
+
+ bus = mdiobus_alloc_size(sizeof(struct hip04_mdio_priv));
+ if (!bus) {
+ dev_err(&pdev->dev, "Cannot allocate MDIO bus\n");
+ return -ENOMEM;
+ }
+
+ bus->name = "hip04_mdio_bus";
+ bus->read = hip04_mdio_read;
+ bus->write = hip04_mdio_write;
+ bus->reset = hip04_mdio_reset;
+ snprintf(bus->id, MII_BUS_ID_SIZE, "%s-mii", dev_name(&pdev->dev));
+ bus->parent = &pdev->dev;
+ priv = bus->priv;
+
+ r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ priv->base = devm_ioremap_resource(&pdev->dev, r);
+ if (IS_ERR(priv->base)) {
+ ret = PTR_ERR(priv->base);
+ goto out_mdio;
+ }
+
+ ret = of_mdiobus_register(bus, pdev->dev.of_node);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Cannot register MDIO bus (%d)\n", ret);
+ goto out_mdio;
+ }
+
+ platform_set_drvdata(pdev, bus);
+
+ return 0;
+
+out_mdio:
+ mdiobus_free(bus);
+ return ret;
+}
+
+static int hip04_mdio_remove(struct platform_device *pdev)
+{
+ struct mii_bus *bus = platform_get_drvdata(pdev);
+
+ mdiobus_unregister(bus);
+ mdiobus_free(bus);
+
+ return 0;
+}
+
+static const struct of_device_id hip04_mdio_match[] = {
+ { .compatible = "hisilicon,hip04-mdio" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, hip04_mdio_match);
+
+static struct platform_driver hip04_mdio_driver = {
+ .probe = hip04_mdio_probe,
+ .remove = hip04_mdio_remove,
+ .driver = {
+ .name = "hip04-mdio",
+ .owner = THIS_MODULE,
+ .of_match_table = hip04_mdio_match,
+ },
+};
+
+module_platform_driver(hip04_mdio_driver);
+
+MODULE_DESCRIPTION("HISILICON P04 MDIO interface driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:hip04-mdio");
memset(swqe, 0, SWQE_HEADER_SIZE);
atomic_dec(&pr->swqe_avail);
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
swqe->tx_control |= EHEA_SWQE_VLAN_INSERT;
- swqe->vlan_tag = vlan_tx_tag_get(skb);
+ swqe->vlan_tag = skb_vlan_tag_get(skb);
}
pr->tx_packets++;
If unsure, say N.
+config I40E_FCOE
+ bool "Fibre Channel over Ethernet (FCoE)"
+ default n
+ depends on I40E && DCB && FCOE
+ ---help---
+ Say Y here if you want to use Fibre Channel over Ethernet (FCoE)
+ in the driver. This will create new netdev for exclusive FCoE
+ use with XL710 FCoE offloads enabled.
+
+ If unsure, say N.
+
config I40EVF
tristate "Intel(R) XL710 X710 Virtual Function Ethernet support"
depends on PCI_MSI
mdio_write(netdev, nic->mii.phy_id, MII_BMCR, bmcr);
} else if ((nic->mac >= mac_82550_D102) || ((nic->flags & ich) &&
(mdio_read(netdev, nic->mii.phy_id, MII_TPISTATUS) & 0x8000) &&
- !(nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))) {
+ (nic->eeprom[eeprom_cnfg_mdix] & eeprom_mdix_enabled))) {
/* enable/disable MDI/MDI-X auto-switching. */
mdio_write(netdev, nic->mii.phy_id, MII_NCONFIG,
nic->mii.force_media ? 0 : NCONFIG_AUTO_SWITCH);
/* ethtool support for e1000 */
#include "e1000.h"
+#include <linux/jiffies.h>
#include <linux/uaccess.h>
enum {NETDEV_STATS, E1000_STATS};
ret_val = 13; /* ret_val is the same as mis-compare */
break;
}
- if (jiffies >= (time + 2)) {
+ if (time_after_eq(jiffies, time + 2)) {
ret_val = 14; /* error code for time out error */
break;
}
struct e1000_tx_ring *tx_ring, int tx_flags,
int count)
{
- struct e1000_hw *hw = &adapter->hw;
struct e1000_tx_desc *tx_desc = NULL;
struct e1000_tx_buffer *buffer_info;
u32 txd_upper = 0, txd_lower = E1000_TXD_CMD_IFCS;
wmb();
tx_ring->next_to_use = i;
- writel(i, hw->hw_addr + tx_ring->tdt);
- /* we need this if more than one processor can write to our tail
- * at a time, it synchronizes IO on IA64/Altix systems
- */
- mmiowb();
}
/* 82547 workaround to avoid controller hang in half-duplex environment.
return NETDEV_TX_BUSY;
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_flags |= E1000_TX_FLAGS_VLAN;
- tx_flags |= (vlan_tx_tag_get(skb) << E1000_TX_FLAGS_VLAN_SHIFT);
+ tx_flags |= (skb_vlan_tag_get(skb) <<
+ E1000_TX_FLAGS_VLAN_SHIFT);
}
first = tx_ring->next_to_use;
/* Make sure there is space in the ring for the next send. */
e1000_maybe_stop_tx(netdev, tx_ring, MAX_SKB_FRAGS + 2);
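+	/* Only write the tail register once the stack has no further
+	 * frames queued (xmit_more clear) or the queue has been stopped;
+	 * this batches doorbell MMIO writes across a burst of packets.
+	 */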
+ if (!skb->xmit_more ||
+ netif_xmit_stopped(netdev_get_tx_queue(netdev, 0))) {
+ writel(tx_ring->next_to_use, hw->hw_addr + tx_ring->tdt);
+ /* we need this if more than one processor can write to
+ * our tail at a time, it synchronizes IO on IA64/Altix
+ * systems
+ */
+ mmiowb();
+ }
} else {
dev_kfree_skb_any(skb);
tx_ring->buffer_info[first].time_stamp = 0;
wmb();
tx_ring->next_to_use = i;
-
- if (adapter->flags2 & FLAG2_PCIM2PCI_ARBITER_WA)
- e1000e_update_tdt_wa(tx_ring, i);
- else
- writel(i, tx_ring->tail);
-
- /* we need this if more than one processor can write to our tail
- * at a time, it synchronizes IO on IA64/Altix systems
- */
- mmiowb();
}
#define MINIMUM_DHCP_PACKET_SIZE 282
struct e1000_hw *hw = &adapter->hw;
u16 length, offset;
- if (vlan_tx_tag_present(skb) &&
- !((vlan_tx_tag_get(skb) == adapter->hw.mng_cookie.vlan_id) &&
+ if (skb_vlan_tag_present(skb) &&
+ !((skb_vlan_tag_get(skb) == adapter->hw.mng_cookie.vlan_id) &&
(adapter->hw.mng_cookie.status &
E1000_MNG_DHCP_COOKIE_STATUS_VLAN)))
return 0;
if (e1000_maybe_stop_tx(tx_ring, count + 2))
return NETDEV_TX_BUSY;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_flags |= E1000_TX_FLAGS_VLAN;
- tx_flags |= (vlan_tx_tag_get(skb) << E1000_TX_FLAGS_VLAN_SHIFT);
+ tx_flags |= (skb_vlan_tag_get(skb) <<
+ E1000_TX_FLAGS_VLAN_SHIFT);
}
first = tx_ring->next_to_use;
count = e1000_tx_map(tx_ring, skb, first, adapter->tx_fifo_limit,
nr_frags);
if (count) {
- if (unlikely((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
- !adapter->tx_hwtstamp_skb)) {
+ if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ (adapter->flags & FLAG_HAS_HW_TIMESTAMP) &&
+ !adapter->tx_hwtstamp_skb) {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
tx_flags |= E1000_TX_FLAGS_HWTSTAMP;
adapter->tx_hwtstamp_skb = skb_get(skb);
(MAX_SKB_FRAGS *
DIV_ROUND_UP(PAGE_SIZE,
adapter->tx_fifo_limit) + 2));
+
+ if (!skb->xmit_more ||
+ netif_xmit_stopped(netdev_get_tx_queue(netdev, 0))) {
+ if (adapter->flags2 & FLAG2_PCIM2PCI_ARBITER_WA)
+ e1000e_update_tdt_wa(tx_ring,
+ tx_ring->next_to_use);
+ else
+ writel(tx_ring->next_to_use, tx_ring->tail);
+
+ /* we need this if more than one processor can write
+ * to our tail at a time, it synchronizes IO on
+			 * IA64/Altix systems
+ */
+ mmiowb();
+ }
} else {
dev_kfree_skb_any(skb);
tx_ring->buffer_info[first].time_stamp = 0;
*/
if (dma_mapping_error(rx_ring->dev, dma)) {
__free_page(page);
- bi->page = NULL;
rx_ring->rx_stats.alloc_failed++;
return false;
i -= rx_ring->count;
}
- /* clear the hdr_addr for the next_to_use descriptor */
- rx_desc->q.hdr_addr = 0;
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->d.staterr = 0;
cleaned_count--;
} while (cleaned_count);
rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
/* transfer page from old buffer to new buffer */
- memcpy(new_buff, old_buff, sizeof(struct fm10k_rx_buffer));
+ *new_buff = *old_buff;
/* sync the buffer for use by the device */
dma_sync_single_range_for_device(rx_ring->dev, old_buff->dma,
DMA_FROM_DEVICE);
}
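+/* Pages from a remote NUMA node or from the pfmemalloc emergency reserve
+ * are handed to the stack rather than recycled into the ring, so buffer
+ * reuse never pins remote or reserve memory.
+ */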
+static inline bool fm10k_page_is_reserved(struct page *page)
+{
+ return (page_to_nid(page) != numa_mem_id()) || page->pfmemalloc;
+}
+
static bool fm10k_can_reuse_rx_page(struct fm10k_rx_buffer *rx_buffer,
struct page *page,
unsigned int truesize)
{
/* avoid re-using remote pages */
- if (unlikely(page_to_nid(page) != numa_mem_id()))
+ if (unlikely(fm10k_page_is_reserved(page)))
return false;
#if (PAGE_SIZE < 8192)
/* flip page offset to other buffer */
rx_buffer->page_offset ^= FM10K_RX_BUFSZ;
-
- /* Even if we own the page, we are not allowed to use atomic_set()
- * This would break get_page_unless_zero() users.
- */
- atomic_inc(&page->_count);
#else
/* move offset up to the next cache line */
rx_buffer->page_offset += truesize;
if (rx_buffer->page_offset > (PAGE_SIZE - FM10K_RX_BUFSZ))
return false;
-
- /* bump ref count on page before it is given to the stack */
- get_page(page);
#endif
+ /* Even if we own the page, we are not allowed to use atomic_set()
+ * This would break get_page_unless_zero() users.
+ */
+ atomic_inc(&page->_count);
+
return true;
}
memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
- /* we can reuse buffer as-is, just make sure it is local */
- if (likely(page_to_nid(page) == numa_mem_id()))
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(!fm10k_page_is_reserved(page)))
return true;
/* this page cannot be reused so discard it */
- put_page(page);
+ __free_page(page);
return false;
}
struct page *page;
rx_buffer = &rx_ring->rx_buffer[rx_ring->next_to_clean];
-
page = rx_buffer->page;
prefetchw(page);
struct ethhdr *eth_hdr;
u8 l4_hdr = 0;
+/* fm10k supports 184 octets of outer+inner headers. Minus 20 for inner L4. */
+#define FM10K_MAX_ENCAP_TRANSPORT_OFFSET 164
+ if (skb_inner_transport_header(skb) - skb_mac_header(skb) >
+ FM10K_MAX_ENCAP_TRANSPORT_OFFSET)
+ return 0;
+
switch (vlan_get_protocol(skb)) {
case htons(ETH_P_IP):
l4_hdr = ip_hdr(skb)->protocol;
tx_desc = FM10K_TX_DESC(tx_ring, i);
/* add HW VLAN tag */
- if (vlan_tx_tag_present(skb))
- tx_desc->vlan = cpu_to_le16(vlan_tx_tag_get(skb));
+ if (skb_vlan_tag_present(skb))
+ tx_desc->vlan = cpu_to_le16(skb_vlan_tag_get(skb));
else
tx_desc->vlan = 0;
int err;
if ((skb->protocol == htons(ETH_P_8021Q)) &&
- !vlan_tx_tag_present(skb)) {
+ !skb_vlan_tag_present(skb)) {
/* FM10K only supports hardware tagging, any tags in frame
* are considered 2nd level or "outer" tags
*/
dev->vlan_features |= dev->features;
/* configure tunnel offloads */
- dev->hw_enc_features = NETIF_F_IP_CSUM |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_TSO_ECN |
- NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_IPV6_CSUM |
- NETIF_F_SG;
+ dev->hw_enc_features |= NETIF_F_IP_CSUM |
+ NETIF_F_TSO |
+ NETIF_F_TSO6 |
+ NETIF_F_TSO_ECN |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_IPV6_CSUM;
/* we want to leave these both on as we cannot disable VLAN tag
* insertion or stripping on the hardware since it is contained
/* Define timeouts for resets and disables */
#define FM10K_QUEUE_DISABLE_TIMEOUT 100
-#define FM10K_RESET_TIMEOUT 100
+#define FM10K_RESET_TIMEOUT 150
/* VF registers */
#define FM10K_VFCTRL 0x00000
i40e_virtchnl_pf.o
i40e-$(CONFIG_I40E_DCB) += i40e_dcb.o i40e_dcb_nl.o
-i40e-$(CONFIG_FCOE:m=y) += i40e_fcoe.o
+i40e-$(CONFIG_I40E_FCOE) += i40e_fcoe.o
#define I40E_MAX_USER_PRIORITY 8
#define I40E_DEFAULT_MSG_ENABLE 4
#define I40E_QUEUE_WAIT_RETRY_LIMIT 10
+#define I40E_INT_NAME_STR_LEN (IFNAMSIZ + 9)
#define I40E_NVM_VERSION_LO_SHIFT 0
#define I40E_NVM_VERSION_LO_MASK (0xff << I40E_NVM_VERSION_LO_SHIFT)
u16 rx_itr_default;
u16 tx_itr_default;
u16 msg_enable;
- char misc_int_name[IFNAMSIZ + 9];
+ char int_name[I40E_INT_NAME_STR_LEN];
u16 adminq_work_limit; /* num of admin receive queue desc to process */
unsigned long service_timer_period;
unsigned long service_timer_previous;
cpumask_t affinity_mask;
struct rcu_head rcu; /* to avoid race with update stats on free */
- char name[IFNAMSIZ + 9];
+ char name[I40E_INT_NAME_STR_LEN];
} ____cacheline_internodealigned_in_smp;
/* lan device */
/* general information */
#define I40E_AQ_LARGE_BUF 512
-#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
+#define I40E_ASQ_CMD_TIMEOUT 250 /* msecs */
void i40e_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
i40e_aqc_opc_lldp_stop = 0x0A05,
i40e_aqc_opc_lldp_start = 0x0A06,
i40e_aqc_opc_get_cee_dcb_cfg = 0x0A07,
+ i40e_aqc_opc_lldp_set_local_mib = 0x0A08,
+ i40e_aqc_opc_lldp_stop_start_spec_agent = 0x0A09,
/* Tunnel commands */
i40e_aqc_opc_add_udp_tunnel = 0x0B00,
/* OEM commands */
i40e_aqc_opc_oem_parameter_change = 0xFE00,
i40e_aqc_opc_oem_device_status_change = 0xFE01,
+ i40e_aqc_opc_oem_ocsd_initialize = 0xFE02,
+ i40e_aqc_opc_oem_ocbb_initialize = 0xFE03,
/* debug commands */
i40e_aqc_opc_debug_get_deviceid = 0xFF00,
i40e_aqc_opc_debug_write_reg = 0xFF04,
i40e_aqc_opc_debug_modify_reg = 0xFF07,
i40e_aqc_opc_debug_dump_internals = 0xFF08,
- i40e_aqc_opc_debug_modify_internals = 0xFF09,
};
/* command structures and indirect data structures */
#define I40E_AQ_CAP_ID_VSI 0x0017
#define I40E_AQ_CAP_ID_DCB 0x0018
#define I40E_AQ_CAP_ID_FCOE 0x0021
+#define I40E_AQ_CAP_ID_ISCSI 0x0022
#define I40E_AQ_CAP_ID_RSS 0x0040
#define I40E_AQ_CAP_ID_RXQ 0x0041
#define I40E_AQ_CAP_ID_TXQ 0x0042
__le32 pfpm_proxyfc;
__le32 ip_addr;
u8 mac_addr[6];
+ u8 reserved[2];
};
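+/* The I40E_CHECK_STRUCT_LEN()/I40E_CHECK_CMD_LENGTH() annotations added
+ * below are compile-time assertions that each structure matches the size
+ * the admin queue protocol expects.
+ */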
+I40E_CHECK_STRUCT_LEN(0x14, i40e_aqc_arp_proxy_data);
+
/* Set NS Proxy Table Entry Command (indirect 0x0105) */
struct i40e_aqc_ns_proxy_data {
__le16 table_idx_mac_addr_0;
u8 ipv6_addr_1[16];
};
+I40E_CHECK_STRUCT_LEN(0x3c, i40e_aqc_ns_proxy_data);
+
/* Manage LAA Command (0x0106) - obsolete */
struct i40e_aqc_mng_laa {
__le16 command_flags;
u8 reserved2[6];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_mng_laa);
+
/* Manage MAC Address Read Command (indirect 0x0107) */
struct i40e_aqc_mac_address_read {
__le16 command_flags;
u8 reserved[12];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_get_switch_config_header_resp);
+
struct i40e_aqc_switch_config_element_resp {
u8 element_type;
#define I40E_AQ_SW_ELEM_TYPE_MAC 1
__le16 element_info;
};
+I40E_CHECK_STRUCT_LEN(0x10, i40e_aqc_switch_config_element_resp);
+
/* Get Switch Configuration (indirect 0x0200)
* an array of elements are returned in the response buffer
* the first in the array is the header, remainder are elements
struct i40e_aqc_switch_config_element_resp element[1];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_switch_config_resp);
+
/* Add Statistics (direct 0x0201)
* Remove Statistics (direct 0x0202)
*/
u8 reserved2[6];
};
+I40E_CHECK_STRUCT_LEN(0x10, i40e_aqc_switch_resource_alloc_element_resp);
+
/* Add VSI (indirect 0x0210)
* this indirect command uses struct i40e_aqc_vsi_properties_data
* as the indirect buffer (128 bytes)
u8 reserved[12];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_tag);
+
/* Add multicast E-Tag (direct 0x0257)
* del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
* and no external data
} ipaddr;
__le16 flags;
#define I40E_AQC_ADD_CLOUD_FILTER_SHIFT 0
-#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \
+#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \
I40E_AQC_ADD_CLOUD_FILTER_SHIFT)
/* 0x0000 reserved */
#define I40E_AQC_ADD_CLOUD_FILTER_OIP 0x0001
u8 reserved[4];
__le16 queue_number;
#define I40E_AQC_ADD_CLOUD_QUEUE_SHIFT 0
-#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x3F << \
+#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x7FF << \
I40E_AQC_ADD_CLOUD_QUEUE_SHIFT)
u8 reserved2[14];
/* response section */
u8 reserved1[28];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_configure_vsi_ets_sla_bw_data);
+
/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
* responds with i40e_aqc_qs_handles_resp
*/
__le16 qs_handles[8];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_configure_vsi_tc_bw_data);
+
/* Query vsi bw configuration (indirect 0x0408) */
struct i40e_aqc_query_vsi_bw_config_resp {
u8 tc_valid_bits;
u8 reserved3[23];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_query_vsi_bw_config_resp);
+
/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
struct i40e_aqc_query_vsi_ets_sla_config_resp {
u8 tc_valid_bits;
__le16 tc_bw_max[2];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_query_vsi_ets_sla_config_resp);
+
/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
struct i40e_aqc_configure_switching_comp_bw_limit {
__le16 seid;
u8 reserved2[96];
};
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_configure_switching_comp_ets_data);
+
/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
struct i40e_aqc_configure_switching_comp_ets_bw_limit_data {
u8 tc_valid_bits;
u8 reserved1[28];
};
+I40E_CHECK_STRUCT_LEN(0x40,
+ i40e_aqc_configure_switching_comp_ets_bw_limit_data);
+
/* Configure Switching Component Bandwidth Allocation per Tc
* (indirect 0x0417)
*/
u8 reserved1[20];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_configure_switching_comp_bw_config_data);
+
/* Query Switching Component Configuration (indirect 0x0418) */
struct i40e_aqc_query_switching_comp_ets_config_resp {
u8 tc_valid_bits;
u8 reserved2[23];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_query_switching_comp_ets_config_resp);
+
/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
struct i40e_aqc_query_port_ets_config_resp {
u8 reserved[4];
u8 reserved3[32];
};
+I40E_CHECK_STRUCT_LEN(0x44, i40e_aqc_query_port_ets_config_resp);
+
/* Query Switching Component Bandwidth Allocation per Traffic Type
* (indirect 0x041A)
*/
__le16 tc_bw_max[2];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_query_switching_comp_bw_config_resp);
+
/* Suspend/resume port TX traffic
* (direct 0x041B and 0x041C) uses the generic SEID struct
*/
u8 max_bw[16]; /* bandwidth limit */
};
+I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
+
/* Get and set the active HMC resource profile and status.
* (direct 0x0500) and (direct 0x0501)
*/
u8 reserved2[8];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_module_desc);
+
struct i40e_aq_get_phy_abilities_resp {
__le32 phy_type; /* bitmap using the above enum for offsets */
u8 link_speed; /* bitmap using the above enum bit patterns */
struct i40e_aqc_module_desc qualified_module[I40E_AQ_PHY_MAX_QMS];
};
+I40E_CHECK_STRUCT_LEN(0x218, i40e_aq_get_phy_abilities_resp);
+
/* Set PHY Config (direct 0x0601) */
struct i40e_aq_set_phy_config { /* same bits as above in all */
__le32 phy_type;
/* NVM Config Read (indirect 0x0704) */
struct i40e_aqc_nvm_config_read {
__le16 cmd_flags;
-#define ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
-#define ANVM_READ_SINGLE_FEATURE 0
-#define ANVM_READ_MULTIPLE_FEATURES 1
+#define I40E_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
+#define I40E_AQ_ANVM_READ_SINGLE_FEATURE 0
+#define I40E_AQ_ANVM_READ_MULTIPLE_FEATURES 1
__le16 element_count;
- __le16 element_id; /* Feature/field ID */
- u8 reserved[2];
+ __le16 element_id; /* Feature/field ID */
+ __le16 element_id_msw; /* MSWord of field ID */
__le32 address_high;
__le32 address_low;
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
+#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+ (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_FEATURE 0
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD	(1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
struct i40e_aqc_nvm_config_data_feature {
__le16 feature_id;
- __le16 instance_id;
+#define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
+#define I40E_AQ_ANVM_FEATURE_OPTION_DWORD_MAP 0x08
+#define I40E_AQ_ANVM_FEATURE_OPTION_POR_CSR 0x10
__le16 feature_options;
__le16 feature_selection;
};
+I40E_CHECK_STRUCT_LEN(0x6, i40e_aqc_nvm_config_data_feature);
+
struct i40e_aqc_nvm_config_data_immediate_field {
-#define ANVM_FEATURE_OR_IMMEDIATE_MASK 0x2
- __le16 field_id;
- __le16 instance_id;
+ __le32 field_id;
+ __le32 field_value;
__le16 field_options;
- __le16 field_value;
+ __le16 reserved;
};
+I40E_CHECK_STRUCT_LEN(0xc, i40e_aqc_nvm_config_data_immediate_field);
+
/* Send to PF command (indirect 0x0801) id is only used by PF
* Send to VF command (indirect 0x0802) id is only used by PF
* Send to Peer PF command (indirect 0x0803)
u8 oper_tc_bw[8];
u8 oper_pfc_en;
__le16 oper_app_prio;
+#define I40E_AQC_CEE_APP_FCOE_SHIFT 0x0
+#define I40E_AQC_CEE_APP_FCOE_MASK (0x7 << I40E_AQC_CEE_APP_FCOE_SHIFT)
+#define I40E_AQC_CEE_APP_ISCSI_SHIFT 0x3
+#define I40E_AQC_CEE_APP_ISCSI_MASK (0x7 << I40E_AQC_CEE_APP_ISCSI_SHIFT)
+#define I40E_AQC_CEE_APP_FIP_SHIFT 0x8
+#define I40E_AQC_CEE_APP_FIP_MASK (0x7 << I40E_AQC_CEE_APP_FIP_SHIFT)
__le32 tlv_status;
+#define I40E_AQC_CEE_PG_STATUS_SHIFT 0x0
+#define I40E_AQC_CEE_PG_STATUS_MASK (0x7 << I40E_AQC_CEE_PG_STATUS_SHIFT)
+#define I40E_AQC_CEE_PFC_STATUS_SHIFT 0x3
+#define I40E_AQC_CEE_PFC_STATUS_MASK (0x7 << I40E_AQC_CEE_PFC_STATUS_SHIFT)
+#define I40E_AQC_CEE_APP_STATUS_SHIFT 0x8
+#define I40E_AQC_CEE_APP_STATUS_MASK (0x7 << I40E_AQC_CEE_APP_STATUS_SHIFT)
u8 reserved[12];
};
I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_cee_dcb_cfg_resp);
+/* Set Local LLDP MIB (indirect 0x0A08)
+ * Used to replace the local MIB of a given LLDP agent. e.g. DCBx
+ */
+struct i40e_aqc_lldp_set_local_mib {
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT 0
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK (1 << SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+ u8 type;
+ u8 reserved0;
+ __le16 length;
+ u8 reserved1[4];
+ __le32 address_high;
+ __le32 address_low;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_set_local_mib);
+
+/* Stop/Start LLDP Agent (direct 0x0A09)
+ * Used for stopping/starting specific LLDP agent. e.g. DCBx
+ */
+struct i40e_aqc_lldp_stop_start_specific_agent {
+#define I40E_AQC_START_SPECIFIC_AGENT_SHIFT 0
+#define I40E_AQC_START_SPECIFIC_AGENT_MASK \
+ (1 << I40E_AQC_START_SPECIFIC_AGENT_SHIFT)
+ u8 command;
+ u8 reserved[15];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_stop_start_specific_agent);
+
/* Add Udp Tunnel command and completion (direct 0x0B00) */
struct i40e_aqc_add_udp_tunnel {
__le16 udp_port;
#define I40E_AQ_OEM_PARAM_TYPE_BW_CTL 1
#define I40E_AQ_OEM_PARAM_MAC 2
__le32 param_value1;
- u8 param_value2[8];
+ __le16 param_value2;
+ u8 reserved[6];
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_param_change);
I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_state_change);
+/* Initialize OCSD (0xFE02, direct) */
+struct i40e_aqc_opc_oem_ocsd_initialize {
+ u8 type_status;
+ u8 reserved1[3];
+ __le32 ocsd_memory_block_addr_high;
+ __le32 ocsd_memory_block_addr_low;
+ __le32 requested_update_interval;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB (0xFE03, direct) */
+struct i40e_aqc_opc_oem_ocbb_initialize {
+ u8 type_status;
+ u8 reserved1[3];
+ __le32 ocbb_memory_block_addr_high;
+ __le32 ocbb_memory_block_addr_low;
+ u8 reserved2[4];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_opc_oem_ocbb_initialize);
+
/* debug commands */
/* get device id (0xFF00) uses the generic structure */
}
#endif
+/**
+ * i40e_read_pba_string - Reads part number string from EEPROM
+ * @hw: pointer to hardware structure
+ * @pba_num: stores the part number string from the EEPROM
+ * @pba_num_size: part number string buffer length
+ *
+ * Reads the part number string from the EEPROM.
+ **/
+i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
+ u32 pba_num_size)
+{
+ i40e_status status = 0;
+ u16 pba_word = 0;
+ u16 pba_size = 0;
+ u16 pba_ptr = 0;
+ u16 i = 0;
+
+ status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word);
+ if (status || (pba_word != 0xFAFA)) {
+ hw_dbg(hw, "Failed to read PBA flags or flag is invalid.\n");
+ return status;
+ }
+
+ status = i40e_read_nvm_word(hw, I40E_SR_PBA_BLOCK_PTR, &pba_ptr);
+ if (status) {
+ hw_dbg(hw, "Failed to read PBA Block pointer.\n");
+ return status;
+ }
+
+ status = i40e_read_nvm_word(hw, pba_ptr, &pba_size);
+ if (status) {
+ hw_dbg(hw, "Failed to read PBA Block size.\n");
+ return status;
+ }
+
+ /* Subtract one to get PBA word count (PBA Size word is included in
+ * total size)
+ */
+ pba_size--;
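+	/* each remaining NVM word expands to two PBA characters, plus one
+	 * byte for the terminating NUL appended below
+	 */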
+ if (pba_num_size < (((u32)pba_size * 2) + 1)) {
+ hw_dbg(hw, "Buffer to small for PBA data.\n");
+ return I40E_ERR_PARAM;
+ }
+
+ for (i = 0; i < pba_size; i++) {
+ status = i40e_read_nvm_word(hw, (pba_ptr + 1) + i, &pba_word);
+ if (status) {
+ hw_dbg(hw, "Failed to read PBA Block word %d.\n", i);
+ return status;
+ }
+
+ pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
+ pba_num[(i * 2) + 1] = pba_word & 0xFF;
+ }
+ pba_num[(pba_size * 2)] = '\0';
+
+ return status;
+}
+
/**
* i40e_get_media_type - Gets media type
* @hw: pointer to the hardware structure
return status;
}
+/**
+ * i40e_aq_debug_read_register
+ * @hw: pointer to the hw struct
+ * @reg_addr: register address
+ * @reg_val: register value
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Read the register using the admin queue commands
+ **/
+i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw,
+ u32 reg_addr, u64 *reg_val,
+ struct i40e_asq_cmd_details *cmd_details)
+{
+ struct i40e_aq_desc desc;
+ struct i40e_aqc_debug_reg_read_write *cmd_resp =
+ (struct i40e_aqc_debug_reg_read_write *)&desc.params.raw;
+ i40e_status status;
+
+ if (reg_val == NULL)
+ return I40E_ERR_PARAM;
+
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_debug_read_reg);
+
+ cmd_resp->address = cpu_to_le32(reg_addr);
+
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+ if (!status) {
+ *reg_val = ((u64)cmd_resp->value_high << 32) |
+ (u64)cmd_resp->value_low;
+ *reg_val = le64_to_cpu(*reg_val);
+ }
+
+ return status;
+}
+
/**
* i40e_aq_debug_write_register
* @hw: pointer to the hw struct
#define I40E_DEV_FUNC_CAP_VSI 0x17
#define I40E_DEV_FUNC_CAP_DCB 0x18
#define I40E_DEV_FUNC_CAP_FCOE 0x21
+#define I40E_DEV_FUNC_CAP_ISCSI 0x22
#define I40E_DEV_FUNC_CAP_RSS 0x40
#define I40E_DEV_FUNC_CAP_RX_QUEUES 0x41
#define I40E_DEV_FUNC_CAP_TX_QUEUES 0x42
enum i40e_admin_queue_opc list_type_opc)
{
struct i40e_aqc_list_capabilities_element_resp *cap;
+ u32 valid_functions, num_functions;
u32 number, logical_id, phys_id;
struct i40e_hw_capabilities *p;
u32 i = 0;
if (number == 1)
p->fcoe = true;
break;
+ case I40E_DEV_FUNC_CAP_ISCSI:
+ if (number == 1)
+ p->iscsi = true;
+ break;
case I40E_DEV_FUNC_CAP_RSS:
p->rss = true;
p->rss_table_size = number;
if (p->npar_enable || p->mfp_mode_1)
p->fcoe = false;
+ /* count the enabled ports (aka the "not disabled" ports) */
+ hw->num_ports = 0;
+ for (i = 0; i < 4; i++) {
+ u32 port_cfg_reg = I40E_PRTGEN_CNF + (4 * i);
+ u64 port_cfg = 0;
+
+ /* use AQ read to get the physical register offset instead
+ * of the port relative offset
+ */
+ i40e_aq_debug_read_register(hw, port_cfg_reg, &port_cfg, NULL);
+ if (!(port_cfg & I40E_PRTGEN_CNF_PORT_DIS_MASK))
+ hw->num_ports++;
+ }
+
+ valid_functions = p->valid_functions;
+ num_functions = 0;
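+	/* count the set bits in valid_functions (an open-coded hweight32()) */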
+ while (valid_functions) {
+ if (valid_functions & 1)
+ num_functions++;
+ valid_functions >>= 1;
+ }
+
+ /* partition id is 1-based, and functions are evenly spread
+ * across the ports as partitions
+ */
+ hw->partition_id = (hw->pf_id / hw->num_ports) + 1;
+ hw->num_partitions = num_functions / hw->num_ports;
+
/* additional HW specific goodies that might
* someday be HW version specific
*/
if (desc_n >= ring->count || desc_n < 0) {
dev_info(&pf->pdev->dev,
"descriptor %d not found\n", desc_n);
- return;
+ goto out;
}
if (!is_rx_ring) {
txd = I40E_TX_DESC(ring, desc_n);
} else {
dev_info(&pf->pdev->dev, "dump desc rx/tx <vsi_seid> <ring_id> [<desc_n>]\n");
}
+
+out:
kfree(ring);
}
dev_info(&pf->pdev->dev, " dump desc tx <vsi_seid> <ring_id> [<desc_n>]\n");
dev_info(&pf->pdev->dev, " dump desc rx <vsi_seid> <ring_id> [<desc_n>]\n");
dev_info(&pf->pdev->dev, " dump desc aq\n");
- dev_info(&pf->pdev->dev, " dump stats\n");
dev_info(&pf->pdev->dev, " dump reset stats\n");
dev_info(&pf->pdev->dev, " msg_enable [level]\n");
dev_info(&pf->pdev->dev, " read <reg>\n");
#define I40E_TEST_LEN (sizeof(i40e_gstrings_test) / ETH_GSTRING_LEN)
+/**
+ * i40e_partition_setting_complaint - generic complaint for MFP restriction
+ * @pf: the PF struct
+ **/
+static void i40e_partition_setting_complaint(struct i40e_pf *pf)
+{
+ dev_info(&pf->pdev->dev,
+ "The link settings are allowed to be changed only from the first partition of a given port. Please switch to the first partition in order to change the setting.\n");
+}
+
/**
* i40e_get_settings - Get Link Speed and Duplex settings
* @netdev: network interface device structure
u8 autoneg;
u32 advertise;
+ /* Changing port settings is not supported if this isn't the
+ * port's controlling PF
+ */
+ if (hw->partition_id != 1) {
+ i40e_partition_setting_complaint(pf);
+ return -EOPNOTSUPP;
+ }
+
if (vsi != pf->vsi[pf->lan_vsi])
return -EOPNOTSUPP;
u8 aq_failures;
int err = 0;
+ /* Changing the port's flow control is not supported if this isn't the
+ * port's controlling PF
+ */
+ if (hw->partition_id != 1) {
+ i40e_partition_setting_complaint(pf);
+ return -EOPNOTSUPP;
+ }
+
if (vsi != pf->vsi[pf->lan_vsi])
return -EOPNOTSUPP;
/* NVM bit on means WoL disabled for the port */
i40e_read_nvm_word(hw, I40E_SR_NVM_WAKE_ON_LAN, &wol_nvm_bits);
- if ((1 << hw->port) & wol_nvm_bits) {
+ if ((1 << hw->port) & wol_nvm_bits || hw->partition_id != 1) {
wol->supported = 0;
wol->wolopts = 0;
} else {
}
}
+/**
+ * i40e_set_wol - set the WakeOnLAN configuration
+ * @netdev: the netdev in question
+ * @wol: the ethtool WoL setting data
+ **/
static int i40e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_pf *pf = np->vsi->back;
+ struct i40e_vsi *vsi = np->vsi;
struct i40e_hw *hw = &pf->hw;
u16 wol_nvm_bits;
+ /* WoL not supported if this isn't the controlling PF on the port */
+ if (hw->partition_id != 1) {
+ i40e_partition_setting_complaint(pf);
+ return -EOPNOTSUPP;
+ }
+
+ if (vsi != pf->vsi[pf->lan_vsi])
+ return -EOPNOTSUPP;
+
/* NVM bit on means WoL disabled for the port */
i40e_read_nvm_word(hw, I40E_SR_NVM_WAKE_ON_LAN, &wol_nvm_bits);
if (((1 << hw->port) & wol_nvm_bits))
i40e_add_filter(vsi, (u8[6]) FC_FCOE_FLOGI_MAC, 0, false, false);
i40e_add_filter(vsi, FIP_ALL_FCOE_MACS, 0, false, false);
i40e_add_filter(vsi, FIP_ALL_ENODE_MACS, 0, false, false);
- i40e_add_filter(vsi, FIP_ALL_VN2VN_MACS, 0, false, false);
- i40e_add_filter(vsi, FIP_ALL_P2P_MACS, 0, false, false);
/* use san mac */
ether_addr_copy(netdev->dev_addr, hw->mac.san_addr);
#define DRV_VERSION_MAJOR 1
#define DRV_VERSION_MINOR 2
-#define DRV_VERSION_BUILD 2
+#define DRV_VERSION_BUILD 6
#define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) DRV_KERN
* i40e_enable_misc_int_causes - enable the non-queue interrupts
* @hw: ptr to the hardware info
**/
-static void i40e_enable_misc_int_causes(struct i40e_hw *hw)
+static void i40e_enable_misc_int_causes(struct i40e_pf *pf)
{
+ struct i40e_hw *hw = &pf->hw;
u32 val;
/* clear things first */
I40E_PFINT_ICR0_ENA_GRST_MASK |
I40E_PFINT_ICR0_ENA_PCI_EXCEPTION_MASK |
I40E_PFINT_ICR0_ENA_GPIO_MASK |
- I40E_PFINT_ICR0_ENA_TIMESYNC_MASK |
I40E_PFINT_ICR0_ENA_HMC_ERR_MASK |
I40E_PFINT_ICR0_ENA_VFLR_MASK |
I40E_PFINT_ICR0_ENA_ADMINQ_MASK;
+ if (pf->flags & I40E_FLAG_PTP)
+ val |= I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
+
wr32(hw, I40E_PFINT_ICR0_ENA, val);
/* SW_ITR_IDX = 0, but don't change INTENA */
q_vector->tx.latency_range = I40E_LOW_LATENCY;
wr32(hw, I40E_PFINT_ITR0(I40E_TX_ITR), q_vector->tx.itr);
- i40e_enable_misc_int_causes(hw);
+ i40e_enable_misc_int_causes(pf);
/* FIRSTQ_INDX = 0, FIRSTQ_TYPE = 0 (rx) */
wr32(hw, I40E_PFINT_LNKLST0, 0);
err = i40e_vsi_request_irq_msix(vsi, basename);
else if (pf->flags & I40E_FLAG_MSI_ENABLED)
err = request_irq(pf->pdev->irq, i40e_intr, 0,
- pf->misc_int_name, pf);
+ pf->int_name, pf);
else
err = request_irq(pf->pdev->irq, i40e_intr, IRQF_SHARED,
- pf->misc_int_name, pf);
+ pf->int_name, pf);
if (err)
dev_info(&pf->pdev->dev, "request_irq failed, Error %d\n", err);
}
#endif
+/**
+ * i40e_get_iscsi_tc_map - Return TC map for iSCSI APP
+ * @pf: pointer to pf
+ *
+ * Get TC map for ISCSI PF type that will include iSCSI TC
+ * and LAN TC.
+ **/
+static u8 i40e_get_iscsi_tc_map(struct i40e_pf *pf)
+{
+ struct i40e_dcb_app_priority_table app;
+ struct i40e_hw *hw = &pf->hw;
+ u8 enabled_tc = 1; /* TC0 is always enabled */
+ u8 tc, i;
+ /* Get the iSCSI APP TLV */
+ struct i40e_dcbx_config *dcbcfg = &hw->local_dcbx_config;
+
+ for (i = 0; i < dcbcfg->numapps; i++) {
+ app = dcbcfg->app[i];
+ if (app.selector == I40E_APP_SEL_TCPIP &&
+ app.protocolid == I40E_APP_PROTOID_ISCSI) {
+ tc = dcbcfg->etscfg.prioritytable[app.priority];
+ enabled_tc |= (1 << tc);
+ break;
+ }
+ }
+
+ return enabled_tc;
+}
+
/**
* i40e_dcb_get_num_tc - Get the number of TCs from DCBx config
* @dcbcfg: the corresponding DCBx configuration structure
if (!(pf->flags & I40E_FLAG_DCB_ENABLED))
return 1;
+ /* SFP mode will be enabled for all TCs on port */
+ if (!(pf->flags & I40E_FLAG_MFP_ENABLED))
+ return i40e_dcb_get_num_tc(dcbcfg);
+
/* MFP mode return count of enabled TCs for this PF */
- if (pf->flags & I40E_FLAG_MFP_ENABLED) {
+ if (pf->hw.func_caps.iscsi)
+ enabled_tc = i40e_get_iscsi_tc_map(pf);
+ else
enabled_tc = pf->hw.func_caps.enabled_tcmap;
- for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
- if (enabled_tc & (1 << i))
- num_tc++;
- }
- return num_tc;
- }
- /* SFP mode will be enabled for all TCs on port */
- return i40e_dcb_get_num_tc(dcbcfg);
+ /* At least have TC0 */
+ enabled_tc = (enabled_tc ? enabled_tc : 0x1);
+ for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++) {
+ if (enabled_tc & (1 << i))
+ num_tc++;
+ }
+ return num_tc;
}
/**
if (!(pf->flags & I40E_FLAG_DCB_ENABLED))
return i40e_pf_get_default_tc(pf);
- /* MFP mode will have enabled TCs set by FW */
- if (pf->flags & I40E_FLAG_MFP_ENABLED)
- return pf->hw.func_caps.enabled_tcmap;
-
/* SFP mode we want PF to be enabled for all TCs */
- return i40e_dcb_get_enabled_tc(&pf->hw.local_dcbx_config);
+ if (!(pf->flags & I40E_FLAG_MFP_ENABLED))
+ return i40e_dcb_get_enabled_tc(&pf->hw.local_dcbx_config);
+
+	/* MFP enabled and iSCSI PF type */
+ if (pf->hw.func_caps.iscsi)
+ return i40e_get_iscsi_tc_map(pf);
+ else
+ return pf->hw.func_caps.enabled_tcmap;
}
/**
struct i40e_hw *hw = &pf->hw;
int err = 0;
- if (pf->hw.func_caps.npar_enable)
- goto out;
-
/* Get the initial DCB configuration */
err = i40e_init_dcb(hw);
if (!err) {
"DCBX offload is supported for this PF.\n");
}
} else {
- dev_info(&pf->pdev->dev, "AQ Querying DCB configuration failed: %d\n",
+ dev_info(&pf->pdev->dev,
+ "AQ Querying DCB configuration failed: aq_err %d\n",
pf->hw.aq.asq_last_status);
}
return;
}
+	/* Warn the user if the link speed on an NPAR enabled partition is
+	 * less than 10 Gbps
+	 */
+ if (vsi->back->hw.func_caps.npar_enable &&
+ (vsi->back->hw.phy.link_info.link_speed == I40E_LINK_SPEED_1GB ||
+ vsi->back->hw.phy.link_info.link_speed == I40E_LINK_SPEED_100MB))
+ netdev_warn(vsi->netdev,
+ "The partition detected link speed that is less than 10Gbps\n");
+
switch (vsi->back->hw.phy.link_info.link_speed) {
case I40E_LINK_SPEED_40GB:
strlcpy(speed, "40 Gbps", SPEED_SIZE);
int i40e_vsi_open(struct i40e_vsi *vsi)
{
struct i40e_pf *pf = vsi->back;
- char int_name[IFNAMSIZ];
+ char int_name[I40E_INT_NAME_STR_LEN];
int err;
/* allocate descriptors */
goto err_set_queues;
} else if (vsi->type == I40E_VSI_FDIR) {
- snprintf(int_name, sizeof(int_name) - 1, "%s-%s-fdir",
+ snprintf(int_name, sizeof(int_name) - 1, "%s-%s:fdir",
dev_driver_string(&pf->pdev->dev),
dev_name(&pf->pdev->dev));
err = i40e_vsi_request_irq(vsi, int_name);
{
bool new_link, old_link;
struct i40e_vsi *vsi = pf->vsi[pf->lan_vsi];
+ u8 new_link_speed, old_link_speed;
/* set this to force the get_link_status call to refresh state */
pf->hw.phy.get_link_info = true;
old_link = (pf->hw.phy.link_info_old.link_info & I40E_AQ_LINK_UP);
new_link = i40e_get_link_status(&pf->hw);
+ old_link_speed = pf->hw.phy.link_info_old.link_speed;
+ new_link_speed = pf->hw.phy.link_info.link_speed;
if (new_link == old_link &&
+ new_link_speed == old_link_speed &&
(test_bit(__I40E_DOWN, &vsi->state) ||
new_link == netif_carrier_ok(vsi->netdev)))
return;
#ifdef CONFIG_I40E_DCB
ret = i40e_init_pf_dcb(pf);
if (ret) {
- dev_info(&pf->pdev->dev, "init_pf_dcb failed: %d\n", ret);
- goto end_core_reset;
+ dev_info(&pf->pdev->dev, "DCB init failed %d, disabled\n", ret);
+ pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
+ /* Continue without DCB enabled */
}
#endif /* CONFIG_I40E_DCB */
#ifdef I40E_FCOE
*/
if (!test_bit(__I40E_RESET_RECOVERY_PENDING, &pf->state)) {
err = request_irq(pf->msix_entries[0].vector,
- i40e_intr, 0, pf->misc_int_name, pf);
+ i40e_intr, 0, pf->int_name, pf);
if (err) {
dev_info(&pf->pdev->dev,
"request_irq for %s failed: %d\n",
- pf->misc_int_name, err);
+ pf->int_name, err);
return -EFAULT;
}
}
- i40e_enable_misc_int_causes(hw);
+ i40e_enable_misc_int_causes(pf);
/* associate no queues to the misc vector */
wr32(hw, I40E_PFINT_LNKLST0, I40E_QUEUE_END_OF_LIST);
#endif /* I40E_FCOE */
#ifdef CONFIG_PCI_IOV
- if (pf->hw.func_caps.num_vfs) {
+ if (pf->hw.func_caps.num_vfs && pf->hw.partition_id == 1) {
pf->num_vf_qps = I40E_DEFAULT_QUEUES_PER_VF;
pf->flags |= I40E_FLAG_SRIOV_ENABLED;
pf->num_req_vfs = min_t(int,
enabled_tc = i40e_pf_get_tc_map(pf);
/* MFP mode setup queue map and update VSI */
- if (pf->flags & I40E_FLAG_MFP_ENABLED) {
+ if ((pf->flags & I40E_FLAG_MFP_ENABLED) &&
+ !(pf->hw.func_caps.iscsi)) { /* NIC type PF */
memset(&ctxt, 0, sizeof(ctxt));
ctxt.seid = pf->main_vsi_seid;
ctxt.pf_num = pf->hw.pf_id;
/* Default/Main VSI is only enabled for TC0
* reconfigure it to enable all TCs that are
* available on the port in SFP mode.
+ * For MFP case the iSCSI PF would use this
+ * flow to enable LAN+iSCSI TC.
*/
ret = i40e_vsi_config_tc(vsi, enabled_tc);
if (ret) {
hw->aq.asq_buf_size = I40E_MAX_AQ_BUF_SIZE;
pf->adminq_work_limit = I40E_AQ_WORK_LIMIT;
- snprintf(pf->misc_int_name, sizeof(pf->misc_int_name) - 1,
+ snprintf(pf->int_name, sizeof(pf->int_name) - 1,
"%s-%s:misc",
dev_driver_string(&pf->pdev->dev), dev_name(&pdev->dev));
goto err_configure_lan_hmc;
}
+ /* Disable LLDP for NICs that have firmware versions lower than v4.3.
+ * Ignore error return codes because if it was already disabled via
+ * hardware settings this will fail
+ */
+ if (((pf->hw.aq.fw_maj_ver == 4) && (pf->hw.aq.fw_min_ver < 3)) ||
+ (pf->hw.aq.fw_maj_ver < 4)) {
+ dev_info(&pdev->dev, "Stopping firmware LLDP agent.\n");
+ i40e_aq_stop_lldp(hw, true, NULL);
+ }
+
i40e_get_mac_addr(hw, hw->mac.addr);
if (!is_valid_ether_addr(hw->mac.addr)) {
dev_info(&pdev->dev, "invalid MAC address %pM\n", hw->mac.addr);
#ifdef CONFIG_I40E_DCB
err = i40e_init_pf_dcb(pf);
if (err) {
- dev_info(&pdev->dev, "init_pf_dcb failed: %d\n", err);
+ dev_info(&pdev->dev, "DCB init failed %d, disabled\n", err);
pf->flags &= ~I40E_FLAG_DCB_CAPABLE;
/* Continue without DCB enabled */
}
} while (0)
typedef enum i40e_status_code i40e_status;
-#if defined(CONFIG_FCOE) || defined(CONFIG_FCOE_MODULE)
+#ifdef CONFIG_I40E_FCOE
#define I40E_FCOE
-#endif /* CONFIG_FCOE or CONFIG_FCOE_MODULE */
+#endif
#endif /* _I40E_OSDEP_H_ */
i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
u32 reg_addr, u64 reg_val,
struct i40e_asq_cmd_details *cmd_details);
+i40e_status i40e_aq_debug_read_register(struct i40e_hw *hw,
+ u32 reg_addr, u64 *reg_val,
+ struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_default_vsi(struct i40e_hw *hw, u16 vsi_id,
bool i40e_get_link_status(struct i40e_hw *hw);
i40e_status i40e_get_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
i40e_status i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
+i40e_status i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
+ u32 pba_num_size);
i40e_status i40e_validate_mac_addr(u8 *mac_addr);
void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
#ifdef I40E_FCOE
u32 prttsyn_stat;
int n;
- if (!(pf->flags & I40E_FLAG_PTP))
+ /* Since we cannot turn off the Rx timestamp logic if the device is
+ * configured for Tx timestamping, we check if Rx timestamping is
+ * configured. We don't want to spuriously warn about Rx timestamp
+ * hangs if we don't care about the timestamps.
+ */
+ if (!(pf->flags & I40E_FLAG_PTP) || !pf->ptp_rx)
return;
prttsyn_stat = rd32(hw, I40E_PRTTSYN_STAT_1);
u32 hi, lo;
u64 ns;
+ if (!(pf->flags & I40E_FLAG_PTP) || !pf->ptp_tx)
+ return;
+
+ /* don't attempt to timestamp if we don't have an skb */
+ if (!pf->ptp_tx_skb)
+ return;
+
lo = rd32(hw, I40E_PRTTSYN_TXTIME_L);
hi = rd32(hw, I40E_PRTTSYN_TXTIME_H);
/* Since we cannot turn off the Rx timestamp logic if the device is
* doing Tx timestamping, check if Rx timestamping is configured.
*/
- if (!pf->ptp_rx)
+ if (!(pf->flags & I40E_FLAG_PTP) || !pf->ptp_rx)
return;
hw = &pf->hw;
switch (config->rx_filter) {
case HWTSTAMP_FILTER_NONE:
pf->ptp_rx = false;
- tsyntype = 0;
+ /* We set the type to V1, but do not enable UDP packet
+ * recognition. In this way, we should be as close to
+ * disabling PTP Rx timestamps as possible since V1 packets
+		 * are always UDP, while L2 packets are a V2 feature.
+ */
+ tsyntype = I40E_PRTTSYN_CTL1_TSYNTYPE_V1;
break;
case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
regval &= ~I40E_PFINT_ICR0_ENA_TIMESYNC_MASK;
wr32(hw, I40E_PFINT_ICR0_ENA, regval);
- /* There is no simple on/off switch for Rx. To "disable" Rx support,
- * ignore any received timestamps, rather than turn off the clock.
+ /* Although there is no simple on/off switch for Rx, we "disable" Rx
+	 * timestamps by setting V1-only mode and clearing the UDP
+ * recognition. This ought to disable all PTP Rx timestamps as V1
+ * packets are always over UDP. Note that software is configured to
+ * ignore Rx timestamps via the pf->ptp_rx flag.
*/
- if (pf->ptp_rx) {
- regval = rd32(hw, I40E_PRTTSYN_CTL1);
- /* clear everything but the enable bit */
- regval &= I40E_PRTTSYN_CTL1_TSYNENA_MASK;
- /* now enable bits for desired Rx timestamps */
- regval |= tsyntype;
- wr32(hw, I40E_PRTTSYN_CTL1, regval);
- }
+ regval = rd32(hw, I40E_PRTTSYN_CTL1);
+ /* clear everything but the enable bit */
+ regval &= I40E_PRTTSYN_CTL1_TSYNENA_MASK;
+ /* now enable bits for desired Rx timestamps */
+ regval |= tsyntype;
+ wr32(hw, I40E_PRTTSYN_CTL1, regval);
return 0;
}
return le32_to_cpu(*(volatile __le32 *)head);
}
+#define WB_STRIDE 0x3
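+/* WB_STRIDE masks the low two bits of the descriptor index, batching
+ * writebacks in groups of four 16-byte descriptors (one 64B cacheline).
+ */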
+
/**
* i40e_clean_tx_irq - Reclaim resources after transmit completes
* @tx_ring: tx ring to clean
tx_ring->q_vector->tx.total_bytes += total_bytes;
tx_ring->q_vector->tx.total_packets += total_packets;
+ /* check to see if there are any non-cache aligned descriptors
+ * waiting to be written back, and kick the hardware to force
+ * them to be written back in case of napi polling
+ */
+ if (budget &&
+ !((i & WB_STRIDE) == WB_STRIDE) &&
+ !test_bit(__I40E_DOWN, &tx_ring->vsi->state) &&
+ (I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
+ tx_ring->arm_wb = true;
+ else
+ tx_ring->arm_wb = false;
+
if (check_for_tx_hang(tx_ring) && i40e_check_tx_hang(tx_ring)) {
/* schedule immediate reset if we believe we hung */
dev_info(tx_ring->dev, "Detected Tx Unit Hang\n"
netif_stop_subqueue(tx_ring->netdev, tx_ring->queue_index);
dev_info(tx_ring->dev,
- "tx hang detected on queue %d, resetting adapter\n",
+ "tx hang detected on queue %d, reset requested\n",
tx_ring->queue_index);
- tx_ring->netdev->netdev_ops->ndo_tx_timeout(tx_ring->netdev);
+ /* do not fire the reset immediately, wait for the stack to
+ * decide we are truly stuck, also prevents every queue from
+ * simultaneously requesting a reset
+ */
- /* the adapter is about to reset, no point in enabling stuff */
- return true;
+ /* the adapter is about to reset, no point in enabling polling */
+ budget = 1;
}
netdev_tx_completed_queue(netdev_get_tx_queue(tx_ring->netdev,
}
}
- return budget > 0;
+ return !!budget;
+}
+
+/**
+ * i40e_force_wb - Arm hardware to do a wb on noncache aligned descriptors
+ * @vsi: the VSI we care about
+ * @q_vector: the vector on which to force writeback
+ *
+ **/
+static void i40e_force_wb(struct i40e_vsi *vsi, struct i40e_q_vector *q_vector)
+{
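+	/* fire a software interrupt on this vector so the hardware flushes
+	 * any descriptors still pending writeback
+	 */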
+ u32 val = I40E_PFINT_DYN_CTLN_INTENA_MASK |
+ I40E_PFINT_DYN_CTLN_SWINT_TRIG_MASK |
+ I40E_PFINT_DYN_CTLN_SW_ITR_INDX_ENA_MASK
+ /* allow 00 to be written to the index */;
+
+ wr32(&vsi->back->hw,
+ I40E_PFINT_DYN_CTLN(q_vector->v_idx + vsi->base_vector - 1),
+ val);
}
/**
* so the total length of IPv4 header is IHL*4 bytes
	 * The UDP_0 bit *may* be set if the *inner* header is UDP
*/
- if (ipv4_tunnel &&
- (decoded.inner_prot != I40E_RX_PTYPE_INNER_PROT_UDP) &&
- !(rx_status & (1 << I40E_RX_DESC_STATUS_UDP_0_SHIFT))) {
+ if (ipv4_tunnel) {
skb->transport_header = skb->mac_header +
sizeof(struct ethhdr) +
(ip_hdr(skb)->ihl * 4);
skb->protocol == htons(ETH_P_8021AD))
? VLAN_HLEN : 0;
- rx_udp_csum = udp_csum(skb);
- iph = ip_hdr(skb);
- csum = csum_tcpudp_magic(
- iph->saddr, iph->daddr,
- (skb->len - skb_transport_offset(skb)),
- IPPROTO_UDP, rx_udp_csum);
+ if ((ip_hdr(skb)->protocol == IPPROTO_UDP) &&
+ (udp_hdr(skb)->check != 0)) {
+ rx_udp_csum = udp_csum(skb);
+ iph = ip_hdr(skb);
+ csum = csum_tcpudp_magic(
+ iph->saddr, iph->daddr,
+ (skb->len - skb_transport_offset(skb)),
+ IPPROTO_UDP, rx_udp_csum);
+
+ if (udp_hdr(skb)->check != csum)
+ goto checksum_fail;
- if (udp_hdr(skb)->check != csum)
- goto checksum_fail;
+ } /* else its GRE and so no outer UDP header */
}
skb->ip_summed = CHECKSUM_UNNECESSARY;
struct i40e_vsi *vsi = q_vector->vsi;
struct i40e_ring *ring;
bool clean_complete = true;
+ bool arm_wb = false;
int budget_per_ring;
if (test_bit(__I40E_DOWN, &vsi->state)) {
/* Since the actual Tx work is minimal, we can give the Tx a larger
* budget and be more aggressive about cleaning up the Tx descriptors.
*/
- i40e_for_each_ring(ring, q_vector->tx)
+ i40e_for_each_ring(ring, q_vector->tx) {
clean_complete &= i40e_clean_tx_irq(ring, vsi->work_limit);
+ arm_wb |= ring->arm_wb;
+ }
/* We attempt to distribute budget to each Rx queue fairly, but don't
* allow the budget to go below 1 because that would exit polling early.
clean_complete &= i40e_clean_rx_irq(ring, budget_per_ring);
/* If work not completed, return budget and polling will return */
- if (!clean_complete)
+ if (!clean_complete) {
+ if (arm_wb)
+ i40e_force_wb(vsi, q_vector);
return budget;
+ }
/* Work is done so exit the polling mode and re-enable the interrupt */
napi_complete(napi);
u32 tx_flags = 0;
/* if we have a HW VLAN tag being added, default to the HW one */
- if (vlan_tx_tag_present(skb)) {
- tx_flags |= vlan_tx_tag_get(skb) << I40E_TX_FLAGS_VLAN_SHIFT;
+ if (skb_vlan_tag_present(skb)) {
+ tx_flags |= skb_vlan_tag_get(skb) << I40E_TX_FLAGS_VLAN_SHIFT;
tx_flags |= I40E_TX_FLAGS_HW_VLAN;
/* else if it is a SW VLAN, check the next protocol and store the tag */
} else if (protocol == htons(ETH_P_8021Q)) {
if (err < 0)
return err;
- if (protocol == htons(ETH_P_IP)) {
- iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
+ iph = skb->encapsulation ? inner_ip_hdr(skb) : ip_hdr(skb);
+ ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb) : ipv6_hdr(skb);
+
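+	/* classify by the (possibly inner) IP header version rather than the
+	 * outer protocol so encapsulated frames take the right branch
+	 */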
+ if (iph->version == 4) {
tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb);
iph->tot_len = 0;
iph->check = 0;
tcph->check = ~csum_tcpudp_magic(iph->saddr, iph->daddr,
0, IPPROTO_TCP, 0);
- } else if (skb_is_gso_v6(skb)) {
-
- ipv6h = skb->encapsulation ? inner_ipv6_hdr(skb)
- : ipv6_hdr(skb);
+ } else if (ipv6h->version == 6) {
tcph = skb->encapsulation ? inner_tcp_hdr(skb) : tcp_hdr(skb);
ipv6h->payload_len = 0;
tcph->check = ~csum_ipv6_magic(&ipv6h->saddr, &ipv6h->daddr,
* we are not already transmitting a packet to be timestamped
*/
pf = i40e_netdev_to_pf(tx_ring->netdev);
+ if (!(pf->flags & I40E_FLAG_PTP))
+ return 0;
+
if (pf->ptp_tx &&
!test_and_set_bit_lock(__I40E_PTP_TX_IN_PROGRESS, &pf->state)) {
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
}
} else if (tx_flags & I40E_TX_FLAGS_IPV6) {
- if (tx_flags & I40E_TX_FLAGS_TSO) {
- *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+ *cd_tunneling |= I40E_TX_CTX_EXT_IP_IPV6;
+ if (tx_flags & I40E_TX_FLAGS_TSO)
ip_hdr(skb)->check = 0;
- } else {
- *cd_tunneling |=
- I40E_TX_CTX_EXT_IP_IPV4_NO_CSUM;
- }
}
/* Now set the ctx descriptor fields */
((skb_inner_network_offset(skb) -
skb_transport_offset(skb)) >> 1) <<
I40E_TXD_CTX_QW0_NATLEN_SHIFT;
-
+ if (this_ip_hdr->version == 6) {
+ tx_flags &= ~I40E_TX_FLAGS_IPV4;
+ tx_flags |= I40E_TX_FLAGS_IPV6;
+ }
} else {
network_hdr_len = skb_network_header_len(skb);
this_ip_hdr = ip_hdr(skb);
/* Place RS bit on last descriptor of any packet that spans across the
* 4th descriptor (WB_STRIDE aka 0x3) in a 64B cacheline.
*/
-#define WB_STRIDE 0x3
if (((i & WB_STRIDE) != WB_STRIDE) &&
(first <= &tx_ring->tx_bi[i]) &&
(first >= &tx_ring->tx_bi[i & ~WB_STRIDE])) {
unsigned long last_rx_timestamp;
bool ring_active; /* is ring online or not */
+ bool arm_wb; /* do something to arm write back */
/* stats structs */
struct i40e_queue_stats stats;
bool evb_802_1_qbh; /* Bridge Port Extension */
bool dcb;
bool fcoe;
+ bool iscsi; /* Indicates iSCSI enabled */
bool mfp_mode_1;
bool mgmt_cem;
bool ieee_1588;
u8 __iomem *hw_addr;
void *back;
- /* function pointer structs */
+ /* subsystem structs */
struct i40e_phy_info phy;
struct i40e_mac_info mac;
struct i40e_bus_info bus;
u8 pf_id;
u16 main_vsi_seid;
+ /* for multi-function MACs */
+ u16 partition_id;
+ u16 num_partitions;
+ u16 num_ports;
+
/* Closest numa node to the device */
u16 numa_node;
/* Checksum and Shadow RAM pointers */
#define I40E_SR_NVM_CONTROL_WORD 0x00
#define I40E_SR_EMP_MODULE_PTR 0x0F
+#define I40E_SR_PBA_FLAGS 0x15
+#define I40E_SR_PBA_BLOCK_PTR 0x16
#define I40E_SR_NVM_IMAGE_VERSION 0x18
#define I40E_SR_NVM_WAKE_ON_LAN 0x19
#define I40E_SR_ALTERNATE_SAN_MAC_ADDRESS_PTR 0x27
if (!pf->vf)
return;
+ /* Disable IOV before freeing resources. This lets any VF drivers
+ * running in the host get themselves cleaned up before we yank
+ * the carpet out from underneath their feet.
+ */
+ if (!pci_vfs_assigned(pf->pdev))
+ pci_disable_sriov(pf->pdev);
+
+ msleep(20); /* let any messages in transit get finished up */
+
/* Disable interrupt 0 so we don't try to handle the VFLR. */
i40e_irq_dynamic_disable_icr0(pf);
- mdelay(10); /* let any messages in transit get finished up */
/* free up vf resources */
tmp = pf->num_alloc_vfs;
pf->num_alloc_vfs = 0;
* before this function ever gets called.
*/
if (!pci_vfs_assigned(pf->pdev)) {
- pci_disable_sriov(pf->pdev);
/* Acknowledge VFLR for all VFS. Without this, VFs will fail to
* work correctly when SR-IOV gets re-enabled.
*/
/* general information */
#define I40E_AQ_LARGE_BUF 512
-#define I40E_ASQ_CMD_TIMEOUT 100 /* msecs */
+#define I40E_ASQ_CMD_TIMEOUT 250 /* msecs */
void i40evf_fill_default_direct_cmd_desc(struct i40e_aq_desc *desc,
u16 opcode);
/* OEM commands */
i40e_aqc_opc_oem_parameter_change = 0xFE00,
i40e_aqc_opc_oem_device_status_change = 0xFE01,
+ i40e_aqc_opc_oem_ocsd_initialize = 0xFE02,
+ i40e_aqc_opc_oem_ocbb_initialize = 0xFE03,
/* debug commands */
i40e_aqc_opc_debug_get_deviceid = 0xFF00,
i40e_aqc_opc_debug_write_reg = 0xFF04,
i40e_aqc_opc_debug_modify_reg = 0xFF07,
i40e_aqc_opc_debug_dump_internals = 0xFF08,
- i40e_aqc_opc_debug_modify_internals = 0xFF09,
};
/* command structures and indirect data structures */
#define I40E_AQ_CAP_ID_VSI 0x0017
#define I40E_AQ_CAP_ID_DCB 0x0018
#define I40E_AQ_CAP_ID_FCOE 0x0021
+#define I40E_AQ_CAP_ID_ISCSI 0x0022
#define I40E_AQ_CAP_ID_RSS 0x0040
#define I40E_AQ_CAP_ID_RXQ 0x0041
#define I40E_AQ_CAP_ID_TXQ 0x0042
__le32 pfpm_proxyfc;
__le32 ip_addr;
u8 mac_addr[6];
+ u8 reserved[2];
};
+I40E_CHECK_STRUCT_LEN(0x14, i40e_aqc_arp_proxy_data);
+
/* Set NS Proxy Table Entry Command (indirect 0x0105) */
struct i40e_aqc_ns_proxy_data {
__le16 table_idx_mac_addr_0;
u8 ipv6_addr_1[16];
};
+I40E_CHECK_STRUCT_LEN(0x3c, i40e_aqc_ns_proxy_data);
+
/* Manage LAA Command (0x0106) - obsolete */
struct i40e_aqc_mng_laa {
__le16 command_flags;
u8 reserved2[6];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_mng_laa);
+
/* Manage MAC Address Read Command (indirect 0x0107) */
struct i40e_aqc_mac_address_read {
__le16 command_flags;
u8 reserved[12];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_get_switch_config_header_resp);
+
struct i40e_aqc_switch_config_element_resp {
u8 element_type;
#define I40E_AQ_SW_ELEM_TYPE_MAC 1
__le16 element_info;
};
+I40E_CHECK_STRUCT_LEN(0x10, i40e_aqc_switch_config_element_resp);
+
/* Get Switch Configuration (indirect 0x0200)
* an array of elements are returned in the response buffer
* the first in the array is the header, remainder are elements
struct i40e_aqc_switch_config_element_resp element[1];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_switch_config_resp);
+
/* Add Statistics (direct 0x0201)
* Remove Statistics (direct 0x0202)
*/
u8 reserved2[6];
};
+I40E_CHECK_STRUCT_LEN(0x10, i40e_aqc_switch_resource_alloc_element_resp);
+
/* Add VSI (indirect 0x0210)
* this indirect command uses struct i40e_aqc_vsi_properties_data
* as the indirect buffer (128 bytes)
u8 reserved[12];
};
+I40E_CHECK_CMD_LENGTH(i40e_aqc_remove_tag);
+
/* Add multicast E-Tag (direct 0x0257)
* del multicast E-Tag (direct 0x0258) only uses pv_seid and etag fields
* and no external data
} ipaddr;
__le16 flags;
#define I40E_AQC_ADD_CLOUD_FILTER_SHIFT 0
-#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \
+#define I40E_AQC_ADD_CLOUD_FILTER_MASK (0x3F << \
I40E_AQC_ADD_CLOUD_FILTER_SHIFT)
/* 0x0000 reserved */
#define I40E_AQC_ADD_CLOUD_FILTER_OIP 0x0001
u8 reserved[4];
__le16 queue_number;
#define I40E_AQC_ADD_CLOUD_QUEUE_SHIFT 0
-#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x3F << \
+#define I40E_AQC_ADD_CLOUD_QUEUE_MASK (0x7FF << \
I40E_AQC_ADD_CLOUD_QUEUE_SHIFT)
u8 reserved2[14];
/* response section */
u8 reserved1[28];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_configure_vsi_ets_sla_bw_data);
+
/* Configure VSI Bandwidth Allocation per Traffic Type (indirect 0x0407)
* responds with i40e_aqc_qs_handles_resp
*/
__le16 qs_handles[8];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_configure_vsi_tc_bw_data);
+
/* Query vsi bw configuration (indirect 0x0408) */
struct i40e_aqc_query_vsi_bw_config_resp {
u8 tc_valid_bits;
u8 reserved3[23];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_query_vsi_bw_config_resp);
+
/* Query VSI Bandwidth Allocation per Traffic Type (indirect 0x040A) */
struct i40e_aqc_query_vsi_ets_sla_config_resp {
u8 tc_valid_bits;
__le16 tc_bw_max[2];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_query_vsi_ets_sla_config_resp);
+
/* Configure Switching Component Bandwidth Limit (direct 0x0410) */
struct i40e_aqc_configure_switching_comp_bw_limit {
__le16 seid;
u8 reserved2[96];
};
+I40E_CHECK_STRUCT_LEN(0x80, i40e_aqc_configure_switching_comp_ets_data);
+
/* Configure Switching Component Bandwidth Limits per Tc (indirect 0x0416) */
struct i40e_aqc_configure_switching_comp_ets_bw_limit_data {
u8 tc_valid_bits;
u8 reserved1[28];
};
+I40E_CHECK_STRUCT_LEN(0x40,
+ i40e_aqc_configure_switching_comp_ets_bw_limit_data);
+
/* Configure Switching Component Bandwidth Allocation per Tc
* (indirect 0x0417)
*/
u8 reserved1[20];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_configure_switching_comp_bw_config_data);
+
/* Query Switching Component Configuration (indirect 0x0418) */
struct i40e_aqc_query_switching_comp_ets_config_resp {
u8 tc_valid_bits;
u8 reserved2[23];
};
+I40E_CHECK_STRUCT_LEN(0x40, i40e_aqc_query_switching_comp_ets_config_resp);
+
/* Query PhysicalPort ETS Configuration (indirect 0x0419) */
struct i40e_aqc_query_port_ets_config_resp {
u8 reserved[4];
u8 reserved3[32];
};
+I40E_CHECK_STRUCT_LEN(0x44, i40e_aqc_query_port_ets_config_resp);
+
/* Query Switching Component Bandwidth Allocation per Traffic Type
* (indirect 0x041A)
*/
__le16 tc_bw_max[2];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_query_switching_comp_bw_config_resp);
+
/* Suspend/resume port TX traffic
* (direct 0x041B and 0x041C) uses the generic SEID struct
*/
u8 max_bw[16]; /* bandwidth limit */
};
+I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
+
/* Get and set the active HMC resource profile and status.
* (direct 0x0500) and (direct 0x0501)
*/
u8 reserved2[8];
};
+I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_module_desc);
+
struct i40e_aq_get_phy_abilities_resp {
__le32 phy_type; /* bitmap using the above enum for offsets */
u8 link_speed; /* bitmap using the above enum bit patterns */
struct i40e_aqc_module_desc qualified_module[I40E_AQ_PHY_MAX_QMS];
};
+I40E_CHECK_STRUCT_LEN(0x218, i40e_aq_get_phy_abilities_resp);
+
/* Set PHY Config (direct 0x0601) */
struct i40e_aq_set_phy_config { /* same bits as above in all */
__le32 phy_type;
/* NVM Config Read (indirect 0x0704) */
struct i40e_aqc_nvm_config_read {
__le16 cmd_flags;
-#define ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
-#define ANVM_READ_SINGLE_FEATURE 0
-#define ANVM_READ_MULTIPLE_FEATURES 1
+#define I40E_AQ_ANVM_SINGLE_OR_MULTIPLE_FEATURES_MASK 1
+#define I40E_AQ_ANVM_READ_SINGLE_FEATURE 0
+#define I40E_AQ_ANVM_READ_MULTIPLE_FEATURES 1
__le16 element_count;
- __le16 element_id; /* Feature/field ID */
- u8 reserved[2];
+ __le16 element_id; /* Feature/field ID */
+ __le16 element_id_msw; /* MSWord of field ID */
__le32 address_high;
__le32 address_low;
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
+/* Used for 0x0704 as well as for 0x0705 commands */
+#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
+#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
+ (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_FEATURE 0
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
struct i40e_aqc_nvm_config_data_feature {
__le16 feature_id;
- __le16 instance_id;
+#define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
+#define I40E_AQ_ANVM_FEATURE_OPTION_DWORD_MAP 0x08
+#define I40E_AQ_ANVM_FEATURE_OPTION_POR_CSR 0x10
__le16 feature_options;
__le16 feature_selection;
};
+I40E_CHECK_STRUCT_LEN(0x6, i40e_aqc_nvm_config_data_feature);
+
struct i40e_aqc_nvm_config_data_immediate_field {
-#define ANVM_FEATURE_OR_IMMEDIATE_MASK 0x2
- __le16 field_id;
- __le16 instance_id;
+ __le32 field_id;
+ __le32 field_value;
__le16 field_options;
- __le16 field_value;
+ __le16 reserved;
};
+I40E_CHECK_STRUCT_LEN(0xc, i40e_aqc_nvm_config_data_immediate_field);
+
/* Send to PF command (indirect 0x0801) id is only used by PF
* Send to VF command (indirect 0x0802) id is only used by PF
* Send to Peer PF command (indirect 0x0803)
#define I40E_AQ_OEM_PARAM_TYPE_BW_CTL 1
#define I40E_AQ_OEM_PARAM_MAC 2
__le32 param_value1;
- u8 param_value2[8];
+ __le16 param_value2;
+ u8 reserved[6];
};
I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_param_change);
I40E_CHECK_CMD_LENGTH(i40e_aqc_oem_state_change);
+/* Initialize OCSD (0xFE02, direct) */
+struct i40e_aqc_opc_oem_ocsd_initialize {
+ u8 type_status;
+ u8 reserved1[3];
+ __le32 ocsd_memory_block_addr_high;
+ __le32 ocsd_memory_block_addr_low;
+ __le32 requested_update_interval;
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_opc_oem_ocsd_initialize);
+
+/* Initialize OCBB (0xFE03, direct) */
+struct i40e_aqc_opc_oem_ocbb_initialize {
+ u8 type_status;
+ u8 reserved1[3];
+ __le32 ocbb_memory_block_addr_high;
+ __le32 ocbb_memory_block_addr_low;
+ u8 reserved2[4];
+};
+
+I40E_CHECK_CMD_LENGTH(i40e_aqc_opc_oem_ocbb_initialize);
+
/* debug commands */
/* get device id (0xFF00) uses the generic structure */
u32 tx_flags = 0;
/* if we have a HW VLAN tag being added, default to the HW one */
- if (vlan_tx_tag_present(skb)) {
- tx_flags |= vlan_tx_tag_get(skb) << I40E_TX_FLAGS_VLAN_SHIFT;
+ if (skb_vlan_tag_present(skb)) {
+ tx_flags |= skb_vlan_tag_get(skb) << I40E_TX_FLAGS_VLAN_SHIFT;
tx_flags |= I40E_TX_FLAGS_HW_VLAN;
/* else if it is a SW VLAN, check the next protocol and store the tag */
} else if (protocol == htons(ETH_P_8021Q)) {
bool evb_802_1_qbh; /* Bridge Port Extension */
bool dcb;
bool fcoe;
+ bool iscsi; /* Indicates iSCSI enabled */
bool mfp_mode_1;
bool mgmt_cem;
bool ieee_1588;
u8 __iomem *hw_addr;
void *back;
- /* function pointer structs */
+ /* subsystem structs */
struct i40e_phy_info phy;
struct i40e_mac_info mac;
struct i40e_bus_info bus;
u8 pf_id;
u16 main_vsi_seid;
+ /* for multi-function MACs */
+ u16 partition_id;
+ u16 num_partitions;
+ u16 num_ports;
+
/* Closest numa node to the device */
u16 numa_node;
static const char i40evf_driver_string[] =
"Intel(R) XL710/X710 Virtual Function Network Driver";
-#define DRV_VERSION "1.0.6"
+#define DRV_VERSION "1.2.0"
const char i40evf_driver_version[] = DRV_VERSION;
static const char i40evf_copyright[] =
"Copyright (c) 2013 - 2014 Intel Corporation.";
val = val | I40E_PFINT_DYN_CTL0_CLEARPBA_MASK;
wr32(hw, I40E_VFINT_DYN_CTL01, val);
- /* re-enable interrupt causes */
- wr32(hw, I40E_VFINT_ICR0_ENA1, ena_mask);
- wr32(hw, I40E_VFINT_DYN_CTL01, I40E_VFINT_DYN_CTL01_INTENA_MASK);
-
/* schedule work on the private workqueue */
schedule_work(&adapter->adminq_task);
return 0;
}
-/**
- * i40evf_clean_all_rx_rings - Free Rx Buffers for all queues
- * @adapter: board private structure
- **/
-static void i40evf_clean_all_rx_rings(struct i40evf_adapter *adapter)
-{
- int i;
-
- for (i = 0; i < adapter->num_active_queues; i++)
- i40evf_clean_rx_ring(adapter->rx_rings[i]);
-}
-
-/**
- * i40evf_clean_all_tx_rings - Free Tx Buffers for all queues
- * @adapter: board private structure
- **/
-static void i40evf_clean_all_tx_rings(struct i40evf_adapter *adapter)
-{
- int i;
-
- for (i = 0; i < adapter->num_active_queues; i++)
- i40evf_clean_tx_ring(adapter->tx_rings[i]);
-}
-
/**
* i40e_down - Shutdown the connection processing
* @adapter: board private structure
if (adapter->state == __I40EVF_DOWN)
return;
+ while (test_and_set_bit(__I40EVF_IN_CRITICAL_TASK,
+ &adapter->crit_section))
+ usleep_range(500, 1000);
+
+ i40evf_irq_disable(adapter);
+
/* remove all MAC filters */
list_for_each_entry(f, &adapter->mac_filter_list, list) {
f->remove = true;
}
if (!(adapter->flags & I40EVF_FLAG_PF_COMMS_FAILED) &&
adapter->state != __I40EVF_RESETTING) {
- adapter->aq_required |= I40EVF_FLAG_AQ_DEL_MAC_FILTER;
+ /* cancel any current operation */
+ adapter->current_op = I40E_VIRTCHNL_OP_UNKNOWN;
+ adapter->aq_pending = 0;
+ /* Schedule operations to close down the HW. Don't wait
+ * here for this to complete. The watchdog is still running
+ * and it will take care of this.
+ */
+ adapter->aq_required = I40EVF_FLAG_AQ_DEL_MAC_FILTER;
adapter->aq_required |= I40EVF_FLAG_AQ_DEL_VLAN_FILTER;
- /* disable receives */
adapter->aq_required |= I40EVF_FLAG_AQ_DISABLE_QUEUES;
- mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
- msleep(20);
}
netif_tx_disable(netdev);
netif_tx_stop_all_queues(netdev);
- i40evf_irq_disable(adapter);
-
i40evf_napi_disable_all(adapter);
- netif_carrier_off(netdev);
+ msleep(20);
- i40evf_clean_all_tx_rings(adapter);
- i40evf_clean_all_rx_rings(adapter);
+ netif_carrier_off(netdev);
+ clear_bit(__I40EVF_IN_CRITICAL_TASK, &adapter->crit_section);
}
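The reworked i40evf_down() above no longer sleeps waiting for the PF: it records the teardown work as admin-queue request flags and lets the already-running watchdog issue them. A distilled sketch of that request/drain split (flag and helper names follow the driver; the dispatch body is illustrative, not the full watchdog):

	/* requester side: the plain '=' on the first flag also wipes any
	 * stale requests queued before the device went down
	 */
	adapter->aq_required = I40EVF_FLAG_AQ_DEL_MAC_FILTER;
	adapter->aq_required |= I40EVF_FLAG_AQ_DEL_VLAN_FILTER;
	adapter->aq_required |= I40EVF_FLAG_AQ_DISABLE_QUEUES;

	/* watchdog side: one request per tick, cleared as the PF replies */
	if (adapter->aq_required & I40EVF_FLAG_AQ_DISABLE_QUEUES)
		i40evf_disable_queues(adapter);	/* assumed helper name */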
/**
/* Process admin queue tasks. After init, everything gets done
* here so we don't race on the admin queue.
*/
- if (adapter->aq_pending)
+ if (adapter->aq_pending) {
+ if (!i40evf_asq_done(hw)) {
+ dev_dbg(&adapter->pdev->dev, "Admin queue timeout\n");
+ i40evf_send_api_ver(adapter);
+ }
goto watchdog_done;
+ }
if (adapter->aq_required & I40EVF_FLAG_AQ_MAP_VECTORS) {
i40evf_map_queues(adapter);
if (adapter->state == __I40EVF_RUNNING)
i40evf_request_stats(adapter);
-
- i40evf_irq_enable(adapter, true);
- i40evf_fire_sw_int(adapter, 0xFF);
-
watchdog_done:
+ if (adapter->state == __I40EVF_RUNNING) {
+ i40evf_irq_enable_queues(adapter, ~0);
+ i40evf_fire_sw_int(adapter, 0xFF);
+ } else {
+ i40evf_fire_sw_int(adapter, 0x1);
+ }
+
clear_bit(__I40EVF_IN_CRITICAL_TASK, &adapter->crit_section);
restart_watchdog:
if (adapter->state == __I40EVF_REMOVE)
u16 pending;
if (adapter->flags & I40EVF_FLAG_PF_COMMS_FAILED)
- return;
+ goto out;
event.buf_len = I40EVF_MAX_AQ_BUF_SIZE;
event.msg_buf = kzalloc(event.buf_len, GFP_KERNEL);
if (!event.msg_buf)
- return;
+ goto out;
v_msg = (struct i40e_virtchnl_msg *)&event.desc;
do {
if (oldval != val)
wr32(hw, hw->aq.asq.len, val);
+ kfree(event.msg_buf);
+out:
/* re-enable Admin queue interrupt cause */
i40evf_misc_irq_enable(adapter);
-
- kfree(event.msg_buf);
}
/**
/* aq msg sent, awaiting reply */
err = i40evf_verify_api_ver(adapter);
if (err) {
- dev_info(&pdev->dev, "Unable to verify API version (%d), retrying\n",
- err);
- if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK) {
- dev_info(&pdev->dev, "Resending request\n");
+ if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK)
err = i40evf_send_api_ver(adapter);
- }
goto err;
}
err = i40evf_send_vf_config_msg(adapter);
}
err = i40evf_get_vf_config(adapter);
if (err == I40E_ERR_ADMIN_QUEUE_NO_WORK) {
- dev_info(&pdev->dev, "Resending VF config request\n");
err = i40evf_send_vf_config_msg(adapter);
goto err;
}
struct i40evf_adapter *adapter = netdev_priv(netdev);
struct i40evf_mac_filter *f, *ftmp;
struct i40e_hw *hw = &adapter->hw;
+ int count = 50;
cancel_delayed_work_sync(&adapter->init_task);
cancel_work_sync(&adapter->reset_task);
unregister_netdev(netdev);
adapter->netdev_registered = false;
}
+ while (count-- && adapter->aq_required)
+ msleep(50);
+
+ if (count < 0)
+ dev_err(&pdev->dev, "Timed out waiting for PF driver.\n");
adapter->state = __I40EVF_REMOVE;
if (adapter->msix_entries) {
list_del(&f->list);
kfree(f);
}
+ list_for_each_entry_safe(f, ftmp, &adapter->vlan_filter_list, list) {
+ list_del(&f->list);
+ kfree(f);
+ }
free_netdev(netdev);
}
return;
}
- if (v_opcode != adapter->current_op)
- dev_info(&adapter->pdev->dev, "Pending op is %d, received %d\n",
- adapter->current_op, v_opcode);
if (v_retval) {
dev_err(&adapter->pdev->dev, "%s: PF returned error %d to our request %d\n",
__func__, v_retval, v_opcode);
}
switch (v_opcode) {
+ case I40E_VIRTCHNL_OP_VERSION:
+ /* no action, but also not an error */
+ break;
case I40E_VIRTCHNL_OP_GET_STATS: {
struct i40e_eth_stats *stats =
(struct i40e_eth_stats *)msg;
u32 swmask = mask;
u32 fwmask = mask << 16;
s32 ret_val = 0;
- s32 i = 0, timeout = 200; /* FIXME: find real value to use here */
+ s32 i = 0, timeout = 200;
while (i < timeout) {
if (igb_get_hw_semaphore(hw)) {
};
#endif
+#define IGB_N_EXTTS 2
+#define IGB_N_PEROUT 2
+#define IGB_N_SDP 4
#define IGB_RETA_SIZE 128
/* board specific private data structure */
u32 tx_hwtstamp_timeouts;
u32 rx_hwtstamp_cleared;
+ struct ptp_pin_desc sdp_config[IGB_N_SDP];
+ struct {
+ struct timespec start;
+ struct timespec period;
+ } perout[IGB_N_PEROUT];
+
char fw_version[32];
#ifdef CONFIG_IGB_HWMON
struct hwmon_buff *igb_hwmon_buff;
skb_tx_timestamp(skb);
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_flags |= IGB_TX_FLAGS_VLAN;
- tx_flags |= (vlan_tx_tag_get(skb) << IGB_TX_FLAGS_VLAN_SHIFT);
+ tx_flags |= (skb_vlan_tag_get(skb) << IGB_TX_FLAGS_VLAN_SHIFT);
}
/* record initial flags and protocol */
}
}
+static void igb_tsync_interrupt(struct igb_adapter *adapter)
+{
+ struct e1000_hw *hw = &adapter->hw;
+ struct ptp_clock_event event;
+ struct timespec ts;
+ u32 ack = 0, tsauxc, sec, nsec, tsicr = rd32(E1000_TSICR);
+
+ if (tsicr & TSINTR_SYS_WRAP) {
+ event.type = PTP_CLOCK_PPS;
+ if (adapter->ptp_caps.pps)
+ ptp_clock_event(adapter->ptp_clock, &event);
+ else
+ dev_err(&adapter->pdev->dev, "unexpected SYS WRAP\n");
+ ack |= TSINTR_SYS_WRAP;
+ }
+
+ if (tsicr & E1000_TSICR_TXTS) {
+ /* retrieve hardware timestamp */
+ schedule_work(&adapter->ptp_tx_work);
+ ack |= E1000_TSICR_TXTS;
+ }
+
+ if (tsicr & TSINTR_TT0) {
+ spin_lock(&adapter->tmreg_lock);
+ ts = timespec_add(adapter->perout[0].start,
+ adapter->perout[0].period);
+ wr32(E1000_TRGTTIML0, ts.tv_nsec);
+ wr32(E1000_TRGTTIMH0, ts.tv_sec);
+ tsauxc = rd32(E1000_TSAUXC);
+ tsauxc |= TSAUXC_EN_TT0;
+ wr32(E1000_TSAUXC, tsauxc);
+ adapter->perout[0].start = ts;
+ spin_unlock(&adapter->tmreg_lock);
+ ack |= TSINTR_TT0;
+ }
+
+ if (tsicr & TSINTR_TT1) {
+ spin_lock(&adapter->tmreg_lock);
+ ts = timespec_add(adapter->perout[1].start,
+ adapter->perout[1].period);
+ wr32(E1000_TRGTTIML1, ts.tv_nsec);
+ wr32(E1000_TRGTTIMH1, ts.tv_sec);
+ tsauxc = rd32(E1000_TSAUXC);
+ tsauxc |= TSAUXC_EN_TT1;
+ wr32(E1000_TSAUXC, tsauxc);
+ adapter->perout[1].start = ts;
+ spin_unlock(&adapter->tmreg_lock);
+ ack |= TSINTR_TT1;
+ }
+
+ if (tsicr & TSINTR_AUTT0) {
+ nsec = rd32(E1000_AUXSTMPL0);
+ sec = rd32(E1000_AUXSTMPH0);
+ event.type = PTP_CLOCK_EXTTS;
+ event.index = 0;
+ event.timestamp = sec * 1000000000ULL + nsec;
+ ptp_clock_event(adapter->ptp_clock, &event);
+ ack |= TSINTR_AUTT0;
+ }
+
+ if (tsicr & TSINTR_AUTT1) {
+ nsec = rd32(E1000_AUXSTMPL1);
+ sec = rd32(E1000_AUXSTMPH1);
+ event.type = PTP_CLOCK_EXTTS;
+ event.index = 1;
+ event.timestamp = sec * 1000000000ULL + nsec;
+ ptp_clock_event(adapter->ptp_clock, &event);
+ ack |= TSINTR_AUTT1;
+ }
+
+ /* acknowledge the interrupts */
+ wr32(E1000_TSICR, ack);
+}
+
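The AUTT0/AUTT1 branches of igb_tsync_interrupt() feed external timestamps to the PTP core, which userspace can then consume through the standard PTP character device. A hedged usage sketch; the device node and channel index are assumptions for illustration:

	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/ptp_clock.h>

	int main(void)
	{
		struct ptp_extts_request req;
		struct ptp_extts_event ev;
		int fd = open("/dev/ptp0", O_RDWR);	/* assumed node */

		if (fd < 0)
			return 1;
		memset(&req, 0, sizeof(req));
		req.index = 0;			/* timestamp channel 0 */
		req.flags = PTP_ENABLE_FEATURE;
		if (ioctl(fd, PTP_EXTTS_REQUEST, &req))
			return 1;
		if (read(fd, &ev, sizeof(ev)) == sizeof(ev))	/* blocks */
			printf("extts[%u] %lld.%09u\n", ev.index,
			       (long long)ev.t.sec, ev.t.nsec);
		close(fd);
		return 0;
	}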
static irqreturn_t igb_msix_other(int irq, void *data)
{
struct igb_adapter *adapter = data;
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
- if (icr & E1000_ICR_TS) {
- u32 tsicr = rd32(E1000_TSICR);
-
- if (tsicr & E1000_TSICR_TXTS) {
- /* acknowledge the interrupt */
- wr32(E1000_TSICR, E1000_TSICR_TXTS);
- /* retrieve hardware timestamp */
- schedule_work(&adapter->ptp_tx_work);
- }
- }
+ if (icr & E1000_ICR_TS)
+ igb_tsync_interrupt(adapter);
wr32(E1000_EIMS, adapter->eims_other);
adapter->vf_data[vf].flags |= IGB_VF_FLAG_CTS;
/* reply to reset with ack and vf mac address */
- msgbuf[0] = E1000_VF_RESET | E1000_VT_MSGTYPE_ACK;
- memcpy(addr, vf_mac, ETH_ALEN);
+ if (!is_zero_ether_addr(vf_mac)) {
+ msgbuf[0] = E1000_VF_RESET | E1000_VT_MSGTYPE_ACK;
+ memcpy(addr, vf_mac, ETH_ALEN);
+ } else {
+ msgbuf[0] = E1000_VF_RESET | E1000_VT_MSGTYPE_NACK;
+ }
igb_write_mbx(hw, msgbuf, 3, vf);
}
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
- if (icr & E1000_ICR_TS) {
- u32 tsicr = rd32(E1000_TSICR);
-
- if (tsicr & E1000_TSICR_TXTS) {
- /* acknowledge the interrupt */
- wr32(E1000_TSICR, E1000_TSICR_TXTS);
- /* retrieve hardware timestamp */
- schedule_work(&adapter->ptp_tx_work);
- }
- }
+ if (icr & E1000_ICR_TS)
+ igb_tsync_interrupt(adapter);
napi_schedule(&q_vector->napi);
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
- if (icr & E1000_ICR_TS) {
- u32 tsicr = rd32(E1000_TSICR);
-
- if (tsicr & E1000_TSICR_TXTS) {
- /* acknowledge the interrupt */
- wr32(E1000_TSICR, E1000_TSICR_TXTS);
- /* retrieve hardware timestamp */
- schedule_work(&adapter->ptp_tx_work);
- }
- }
+ if (icr & E1000_ICR_TS)
+ igb_tsync_interrupt(adapter);
napi_schedule(&q_vector->napi);
DMA_FROM_DEVICE);
}
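+/* A page cannot be reused if it came from a remote NUMA node or from the
+ * pfmemalloc emergency reserves; both cases are treated as "reserved".
+ */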
+static inline bool igb_page_is_reserved(struct page *page)
+{
+ return (page_to_nid(page) != numa_mem_id()) || page->pfmemalloc;
+}
+
static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
struct page *page,
unsigned int truesize)
{
/* avoid re-using remote pages */
- if (unlikely(page_to_nid(page) != numa_node_id()))
- return false;
-
- if (unlikely(page->pfmemalloc))
+ if (unlikely(igb_page_is_reserved(page)))
return false;
#if (PAGE_SIZE < 8192)
/* flip page offset to other buffer */
rx_buffer->page_offset ^= IGB_RX_BUFSZ;
-
- /* Even if we own the page, we are not allowed to use atomic_set()
- * This would break get_page_unless_zero() users.
- */
- atomic_inc(&page->_count);
#else
/* move offset up to the next cache line */
rx_buffer->page_offset += truesize;
if (rx_buffer->page_offset > (PAGE_SIZE - IGB_RX_BUFSZ))
return false;
-
- /* bump ref count on page before it is given to the stack */
- get_page(page);
#endif
+ /* Even if we own the page, we are not allowed to use atomic_set().
+ * This would break get_page_unless_zero() users.
+ */
+ atomic_inc(&page->_count);
+
return true;
}
memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
- /* we can reuse buffer as-is, just make sure it is local */
- if (likely((page_to_nid(page) == numa_node_id()) &&
- !page->pfmemalloc))
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(!igb_page_is_reserved(page)))
return true;
/* this page cannot be reused so discard it */
- put_page(page);
+ __free_page(page);
return false;
}
struct page *page;
rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];
-
page = rx_buffer->page;
prefetchw(page);
i -= rx_ring->count;
}
- /* clear the hdr_addr for the next_to_use descriptor */
- rx_desc->read.hdr_addr = 0;
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->wb.upper.status_error = 0;
cleaned_count--;
} while (cleaned_count);
return 0;
}
+static void igb_pin_direction(int pin, int input, u32 *ctrl, u32 *ctrl_ext)
+{
+ u32 *ptr = pin < 2 ? ctrl : ctrl_ext;
+ u32 mask[IGB_N_SDP] = {
+ E1000_CTRL_SDP0_DIR,
+ E1000_CTRL_SDP1_DIR,
+ E1000_CTRL_EXT_SDP2_DIR,
+ E1000_CTRL_EXT_SDP3_DIR,
+ };
+
+ if (input)
+ *ptr &= ~mask[pin];
+ else
+ *ptr |= mask[pin];
+}
+
+static void igb_pin_extts(struct igb_adapter *igb, int chan, int pin)
+{
+ struct e1000_hw *hw = &igb->hw;
+ u32 aux0_sel_sdp[IGB_N_SDP] = {
+ AUX0_SEL_SDP0, AUX0_SEL_SDP1, AUX0_SEL_SDP2, AUX0_SEL_SDP3,
+ };
+ u32 aux1_sel_sdp[IGB_N_SDP] = {
+ AUX1_SEL_SDP0, AUX1_SEL_SDP1, AUX1_SEL_SDP2, AUX1_SEL_SDP3,
+ };
+ u32 ts_sdp_en[IGB_N_SDP] = {
+ TS_SDP0_EN, TS_SDP1_EN, TS_SDP2_EN, TS_SDP3_EN,
+ };
+ u32 ctrl, ctrl_ext, tssdp = 0;
+
+ ctrl = rd32(E1000_CTRL);
+ ctrl_ext = rd32(E1000_CTRL_EXT);
+ tssdp = rd32(E1000_TSSDP);
+
+ igb_pin_direction(pin, 1, &ctrl, &ctrl_ext);
+
+ /* Make sure this pin is not enabled as an output. */
+ tssdp &= ~ts_sdp_en[pin];
+
+ if (chan == 1) {
+ tssdp &= ~AUX1_SEL_SDP3;
+ tssdp |= aux1_sel_sdp[pin] | AUX1_TS_SDP_EN;
+ } else {
+ tssdp &= ~AUX0_SEL_SDP3;
+ tssdp |= aux0_sel_sdp[pin] | AUX0_TS_SDP_EN;
+ }
+
+ wr32(E1000_TSSDP, tssdp);
+ wr32(E1000_CTRL, ctrl);
+ wr32(E1000_CTRL_EXT, ctrl_ext);
+}
+
+static void igb_pin_perout(struct igb_adapter *igb, int chan, int pin)
+{
+ struct e1000_hw *hw = &igb->hw;
+ u32 aux0_sel_sdp[IGB_N_SDP] = {
+ AUX0_SEL_SDP0, AUX0_SEL_SDP1, AUX0_SEL_SDP2, AUX0_SEL_SDP3,
+ };
+ u32 aux1_sel_sdp[IGB_N_SDP] = {
+ AUX1_SEL_SDP0, AUX1_SEL_SDP1, AUX1_SEL_SDP2, AUX1_SEL_SDP3,
+ };
+ u32 ts_sdp_en[IGB_N_SDP] = {
+ TS_SDP0_EN, TS_SDP1_EN, TS_SDP2_EN, TS_SDP3_EN,
+ };
+ u32 ts_sdp_sel_tt0[IGB_N_SDP] = {
+ TS_SDP0_SEL_TT0, TS_SDP1_SEL_TT0,
+ TS_SDP2_SEL_TT0, TS_SDP3_SEL_TT0,
+ };
+ u32 ts_sdp_sel_tt1[IGB_N_SDP] = {
+ TS_SDP0_SEL_TT1, TS_SDP1_SEL_TT1,
+ TS_SDP2_SEL_TT1, TS_SDP3_SEL_TT1,
+ };
+ u32 ts_sdp_sel_clr[IGB_N_SDP] = {
+ TS_SDP0_SEL_FC1, TS_SDP1_SEL_FC1,
+ TS_SDP2_SEL_FC1, TS_SDP3_SEL_FC1,
+ };
+ u32 ctrl, ctrl_ext, tssdp = 0;
+
+ ctrl = rd32(E1000_CTRL);
+ ctrl_ext = rd32(E1000_CTRL_EXT);
+ tssdp = rd32(E1000_TSSDP);
+
+ igb_pin_direction(pin, 0, &ctrl, &ctrl_ext);
+
+ /* Make sure this pin is not enabled as an input. */
+ if ((tssdp & AUX0_SEL_SDP3) == aux0_sel_sdp[pin])
+ tssdp &= ~AUX0_TS_SDP_EN;
+
+ if ((tssdp & AUX1_SEL_SDP3) == aux1_sel_sdp[pin])
+ tssdp &= ~AUX1_TS_SDP_EN;
+
+ tssdp &= ~ts_sdp_sel_clr[pin];
+ if (chan == 1)
+ tssdp |= ts_sdp_sel_tt1[pin];
+ else
+ tssdp |= ts_sdp_sel_tt0[pin];
+
+ tssdp |= ts_sdp_en[pin];
+
+ wr32(E1000_TSSDP, tssdp);
+ wr32(E1000_CTRL, ctrl);
+ wr32(E1000_CTRL_EXT, ctrl_ext);
+}
+
+static int igb_ptp_feature_enable_i210(struct ptp_clock_info *ptp,
+ struct ptp_clock_request *rq, int on)
+{
+ struct igb_adapter *igb =
+ container_of(ptp, struct igb_adapter, ptp_caps);
+ struct e1000_hw *hw = &igb->hw;
+ u32 tsauxc, tsim, tsauxc_mask, tsim_mask, trgttiml, trgttimh;
+ unsigned long flags;
+ struct timespec ts;
+ int pin;
+ s64 ns;
+
+ switch (rq->type) {
+ case PTP_CLK_REQ_EXTTS:
+ if (on) {
+ pin = ptp_find_pin(igb->ptp_clock, PTP_PF_EXTTS,
+ rq->extts.index);
+ if (pin < 0)
+ return -EBUSY;
+ }
+ if (rq->extts.index == 1) {
+ tsauxc_mask = TSAUXC_EN_TS1;
+ tsim_mask = TSINTR_AUTT1;
+ } else {
+ tsauxc_mask = TSAUXC_EN_TS0;
+ tsim_mask = TSINTR_AUTT0;
+ }
+ spin_lock_irqsave(&igb->tmreg_lock, flags);
+ tsauxc = rd32(E1000_TSAUXC);
+ tsim = rd32(E1000_TSIM);
+ if (on) {
+ igb_pin_extts(igb, rq->extts.index, pin);
+ tsauxc |= tsauxc_mask;
+ tsim |= tsim_mask;
+ } else {
+ tsauxc &= ~tsauxc_mask;
+ tsim &= ~tsim_mask;
+ }
+ wr32(E1000_TSAUXC, tsauxc);
+ wr32(E1000_TSIM, tsim);
+ spin_unlock_irqrestore(&igb->tmreg_lock, flags);
+ return 0;
+
+ case PTP_CLK_REQ_PEROUT:
+ if (on) {
+ pin = ptp_find_pin(igb->ptp_clock, PTP_PF_PEROUT,
+ rq->perout.index);
+ if (pin < 0)
+ return -EBUSY;
+ }
+ ts.tv_sec = rq->perout.period.sec;
+ ts.tv_nsec = rq->perout.period.nsec;
+ ns = timespec_to_ns(&ts);
+ ns = ns >> 1;
+ if (on && ns < 500000LL) {
+ /* 2k interrupts per second is an awful lot. */
+ return -EINVAL;
+ }
+ ts = ns_to_timespec(ns);
+ if (rq->perout.index == 1) {
+ tsauxc_mask = TSAUXC_EN_TT1;
+ tsim_mask = TSINTR_TT1;
+ trgttiml = E1000_TRGTTIML1;
+ trgttimh = E1000_TRGTTIMH1;
+ } else {
+ tsauxc_mask = TSAUXC_EN_TT0;
+ tsim_mask = TSINTR_TT0;
+ trgttiml = E1000_TRGTTIML0;
+ trgttimh = E1000_TRGTTIMH0;
+ }
+ spin_lock_irqsave(&igb->tmreg_lock, flags);
+ tsauxc = rd32(E1000_TSAUXC);
+ tsim = rd32(E1000_TSIM);
+ if (on) {
+ int i = rq->perout.index;
+
+ igb_pin_perout(igb, i, pin);
+ igb->perout[i].start.tv_sec = rq->perout.start.sec;
+ igb->perout[i].start.tv_nsec = rq->perout.start.nsec;
+ igb->perout[i].period.tv_sec = ts.tv_sec;
+ igb->perout[i].period.tv_nsec = ts.tv_nsec;
+ wr32(trgttiml, rq->perout.start.nsec);
+ wr32(trgttimh, rq->perout.start.sec);
+ tsauxc |= tsauxc_mask;
+ tsim |= tsim_mask;
+ } else {
+ tsauxc &= ~tsauxc_mask;
+ tsim &= ~tsim_mask;
+ }
+ wr32(E1000_TSAUXC, tsauxc);
+ wr32(E1000_TSIM, tsim);
+ spin_unlock_irqrestore(&igb->tmreg_lock, flags);
+ return 0;
+
+ case PTP_CLK_REQ_PPS:
+ spin_lock_irqsave(&igb->tmreg_lock, flags);
+ tsim = rd32(E1000_TSIM);
+ if (on)
+ tsim |= TSINTR_SYS_WRAP;
+ else
+ tsim &= ~TSINTR_SYS_WRAP;
+ wr32(E1000_TSIM, tsim);
+ spin_unlock_irqrestore(&igb->tmreg_lock, flags);
+ return 0;
+ }
+
+ return -EOPNOTSUPP;
+}
+
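With the i210 enable callback wired up, a periodic output can be driven entirely from userspace through the PTP character device; note the callback above rejects periods shorter than 1 ms (half-period under 500 us). A hedged sketch, assuming the clock is /dev/ptp0 and SDP0 is free to be routed to periodic-output channel 0:

	#include <fcntl.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/ptp_clock.h>

	int main(void)
	{
		struct ptp_pin_desc pin;
		struct ptp_perout_request rq;
		int fd = open("/dev/ptp0", O_RDWR);	/* assumed node */

		if (fd < 0)
			return 1;
		memset(&pin, 0, sizeof(pin));
		pin.index = 0;				/* SDP0 */
		pin.func = PTP_PF_PEROUT;
		pin.chan = 0;
		if (ioctl(fd, PTP_PIN_SETFUNC, &pin))	/* route channel 0 */
			return 1;
		memset(&rq, 0, sizeof(rq));
		rq.index = 0;
		rq.period.sec = 1;			/* 1 Hz square wave */
		if (ioctl(fd, PTP_PEROUT_REQUEST, &rq))
			return 1;
		close(fd);
		return 0;
	}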
static int igb_ptp_feature_enable(struct ptp_clock_info *ptp,
struct ptp_clock_request *rq, int on)
{
return -EOPNOTSUPP;
}
+static int igb_ptp_verify_pin(struct ptp_clock_info *ptp, unsigned int pin,
+ enum ptp_pin_function func, unsigned int chan)
+{
+ switch (func) {
+ case PTP_PF_NONE:
+ case PTP_PF_EXTTS:
+ case PTP_PF_PEROUT:
+ break;
+ case PTP_PF_PHYSYNC:
+ return -1;
+ }
+ return 0;
+}
+
/**
* igb_ptp_tx_work
* @work: pointer to work struct
{
struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
+ int i;
switch (hw->mac.type) {
case e1000_82576:
break;
case e1000_i210:
case e1000_i211:
+ for (i = 0; i < IGB_N_SDP; i++) {
+ struct ptp_pin_desc *ppd = &adapter->sdp_config[i];
+
+ snprintf(ppd->name, sizeof(ppd->name), "SDP%d", i);
+ ppd->index = i;
+ ppd->func = PTP_PF_NONE;
+ }
snprintf(adapter->ptp_caps.name, 16, "%pm", netdev->dev_addr);
adapter->ptp_caps.owner = THIS_MODULE;
adapter->ptp_caps.max_adj = 62499999;
- adapter->ptp_caps.n_ext_ts = 0;
- adapter->ptp_caps.pps = 0;
+ adapter->ptp_caps.n_ext_ts = IGB_N_EXTTS;
+ adapter->ptp_caps.n_per_out = IGB_N_PEROUT;
+ adapter->ptp_caps.n_pins = IGB_N_SDP;
+ adapter->ptp_caps.pps = 1;
+ adapter->ptp_caps.pin_config = adapter->sdp_config;
adapter->ptp_caps.adjfreq = igb_ptp_adjfreq_82580;
adapter->ptp_caps.adjtime = igb_ptp_adjtime_i210;
adapter->ptp_caps.gettime = igb_ptp_gettime_i210;
adapter->ptp_caps.settime = igb_ptp_settime_i210;
- adapter->ptp_caps.enable = igb_ptp_feature_enable;
+ adapter->ptp_caps.enable = igb_ptp_feature_enable_i210;
+ adapter->ptp_caps.verify = igb_ptp_verify_pin;
/* Enable the timer functions by clearing bit 31. */
wr32(E1000_TSAUXC, 0x0);
break;
void igb_ptp_reset(struct igb_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
+ unsigned long flags;
if (!(adapter->flags & IGB_FLAG_PTP))
return;
/* reset the tstamp_config */
igb_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+ spin_lock_irqsave(&adapter->tmreg_lock, flags);
+
switch (adapter->hw.mac.type) {
case e1000_82576:
/* Dial the nominal frequency. */
case e1000_i350:
case e1000_i210:
case e1000_i211:
- /* Enable the timer functions and interrupts. */
wr32(E1000_TSAUXC, 0x0);
+ wr32(E1000_TSSDP, 0x0);
wr32(E1000_TSIM, TSYNC_INTERRUPTS);
wr32(E1000_IMS, E1000_IMS_TS);
break;
default:
/* No work to do. */
- return;
+ goto out;
}
/* Re-initialize the timer. */
if ((hw->mac.type == e1000_i210) || (hw->mac.type == e1000_i211)) {
struct timespec ts = ktime_to_timespec(ktime_get_real());
- igb_ptp_settime_i210(&adapter->ptp_caps, &ts);
+ igb_ptp_write_i210(adapter, &ts);
} else {
timecounter_init(&adapter->tc, &adapter->cc,
ktime_to_ns(ktime_get_real()));
}
+out:
+ spin_unlock_irqrestore(&adapter->tmreg_lock, flags);
}
return NETDEV_TX_BUSY;
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_flags |= IGBVF_TX_FLAGS_VLAN;
- tx_flags |= (vlan_tx_tag_get(skb) << IGBVF_TX_FLAGS_VLAN_SHIFT);
+ tx_flags |= (skb_vlan_tag_get(skb) <<
+ IGBVF_TX_FLAGS_VLAN_SHIFT);
}
if (skb->protocol == htons(ETH_P_IP))
DESC_NEEDED)))
return NETDEV_TX_BUSY;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
tx_flags |= IXGB_TX_FLAGS_VLAN;
- vlan_id = vlan_tx_tag_get(skb);
+ vlan_id = skb_vlan_tag_get(skb);
}
first = adapter->tx_ring.next_to_use;
first->gso_segs = 1;
/* if we have a HW VLAN tag being added default to the HW one */
- if (vlan_tx_tag_present(skb)) {
- tx_flags |= vlan_tx_tag_get(skb) << IXGBE_TX_FLAGS_VLAN_SHIFT;
+ if (skb_vlan_tag_present(skb)) {
+ tx_flags |= skb_vlan_tag_get(skb) << IXGBE_TX_FLAGS_VLAN_SHIFT;
tx_flags |= IXGBE_TX_FLAGS_HW_VLAN;
/* else if it is a SW VLAN check the next protocol and store the tag */
} else if (protocol == htons(ETH_P_8021Q)) {
first->bytecount = skb->len;
first->gso_segs = 1;
- if (vlan_tx_tag_present(skb)) {
- tx_flags |= vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ tx_flags |= skb_vlan_tag_get(skb);
tx_flags <<= IXGBE_TX_FLAGS_VLAN_SHIFT;
tx_flags |= IXGBE_TX_FLAGS_VLAN;
}
static inline void
jme_tx_vlan(struct sk_buff *skb, __le16 *vlan, u8 *flags)
{
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
*flags |= TXFLAG_TAGON;
- *vlan = cpu_to_le16(vlan_tx_tag_get(skb));
+ *vlan = cpu_to_le16(skb_vlan_tag_get(skb));
}
}
ctrl = 0;
/* Add VLAN tag, can piggyback on LRGLEN or ADDR64 */
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
if (!le) {
le = get_tx_le(sky2, &slot);
le->addr = 0;
le->opcode = OP_VLAN|HW_OWNER;
} else
le->opcode |= OP_VLAN;
- le->length = cpu_to_be16(vlan_tx_tag_get(skb));
+ le->length = cpu_to_be16(skb_vlan_tag_get(skb));
ctrl |= INS_VLAN;
}
sky2->rx_next = (sky2->rx_next + 1) % sky2->rx_pending;
prefetch(sky2->rx_ring + sky2->rx_next);
- if (vlan_tx_tag_present(re->skb))
+ if (skb_vlan_tag_present(re->skb))
count -= VLAN_HLEN; /* Account for vlan tag */
/* This chip has hardware problems that generates bogus status.
buf->nbufs = 1;
buf->npages = 1;
buf->page_shift = get_order(size) + PAGE_SHIFT;
- buf->direct.buf = dma_alloc_coherent(&dev->pdev->dev,
+ buf->direct.buf = dma_alloc_coherent(&dev->persist->pdev->dev,
size, &t, gfp);
if (!buf->direct.buf)
return -ENOMEM;
for (i = 0; i < buf->nbufs; ++i) {
buf->page_list[i].buf =
- dma_alloc_coherent(&dev->pdev->dev, PAGE_SIZE,
+ dma_alloc_coherent(&dev->persist->pdev->dev,
+ PAGE_SIZE,
&t, gfp);
if (!buf->page_list[i].buf)
goto err_free;
int i;
if (buf->nbufs == 1)
- dma_free_coherent(&dev->pdev->dev, size, buf->direct.buf,
+ dma_free_coherent(&dev->persist->pdev->dev, size,
+ buf->direct.buf,
buf->direct.map);
else {
if (BITS_PER_LONG == 64 && buf->direct.buf)
for (i = 0; i < buf->nbufs; ++i)
if (buf->page_list[i].buf)
- dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+ dma_free_coherent(&dev->persist->pdev->dev,
+ PAGE_SIZE,
buf->page_list[i].buf,
buf->page_list[i].map);
kfree(buf->page_list);
if (!mlx4_alloc_db_from_pgdir(pgdir, db, order))
goto out;
- pgdir = mlx4_alloc_db_pgdir(&(dev->pdev->dev), gfp);
+ pgdir = mlx4_alloc_db_pgdir(&dev->persist->pdev->dev, gfp);
if (!pgdir) {
ret = -ENOMEM;
goto out;
set_bit(i, db->u.pgdir->bits[o]);
if (bitmap_full(db->u.pgdir->order1, MLX4_DB_PER_PAGE / 2)) {
- dma_free_coherent(&(dev->pdev->dev), PAGE_SIZE,
+ dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
db->u.pgdir->db_page, db->u.pgdir->db_dma);
list_del(&db->u.pgdir->list);
kfree(db->u.pgdir);
MLX4_CATAS_POLL_INTERVAL = 5 * HZ,
};
-static DEFINE_SPINLOCK(catas_lock);
-static LIST_HEAD(catas_list);
-static struct work_struct catas_work;
-static int internal_err_reset = 1;
-module_param(internal_err_reset, int, 0644);
+int mlx4_internal_err_reset = 1;
+module_param_named(internal_err_reset, mlx4_internal_err_reset, int, 0644);
MODULE_PARM_DESC(internal_err_reset,
- "Reset device on internal errors if non-zero"
- " (default 1, in SRIOV mode default is 0)");
+ "Reset device on internal errors if non-zero (default 1)");
+
+static int read_vendor_id(struct mlx4_dev *dev)
+{
+ u16 vendor_id = 0;
+ int ret;
+
+ ret = pci_read_config_word(dev->persist->pdev, 0, &vendor_id);
+ if (ret) {
+ mlx4_err(dev, "Failed to read vendor ID, ret=%d\n", ret);
+ return ret;
+ }
+
+ if (vendor_id == 0xffff) {
+ mlx4_err(dev, "PCI can't be accessed to read vendor id\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int mlx4_reset_master(struct mlx4_dev *dev)
+{
+ int err = 0;
+
+ if (mlx4_is_master(dev))
+ mlx4_report_internal_err_comm_event(dev);
+
+ if (!pci_channel_offline(dev->persist->pdev)) {
+ err = read_vendor_id(dev);
+ /* If PCI can't be accessed to read the vendor ID, we assume that
+ * its link was disabled and the chip was already reset.
+ */
+ if (err)
+ return 0;
+
+ err = mlx4_reset(dev);
+ if (err)
+ mlx4_err(dev, "Fail to reset HCA\n");
+ }
+
+ return err;
+}
+
+static int mlx4_reset_slave(struct mlx4_dev *dev)
+{
+#define COM_CHAN_RST_REQ_OFFSET 0x10
+#define COM_CHAN_RST_ACK_OFFSET 0x08
+
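+ /* The VF requests a reset by flipping the RST_REQ bit in the
+ * comm-channel flags word; the other side acknowledges by making
+ * RST_ACK match RST_REQ again. The handshake below toggles the
+ * request bit and polls until the ack catches up or MLX4_COMM_TIME
+ * elapses.
+ */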
+ u32 comm_flags;
+ u32 rst_req;
+ u32 rst_ack;
+ unsigned long end;
+ struct mlx4_priv *priv = mlx4_priv(dev);
+
+ if (pci_channel_offline(dev->persist->pdev))
+ return 0;
+
+ comm_flags = swab32(readl((__iomem char *)priv->mfunc.comm +
+ MLX4_COMM_CHAN_FLAGS));
+ if (comm_flags == 0xffffffff) {
+ mlx4_err(dev, "VF reset is not needed\n");
+ return 0;
+ }
+
+ if (!(dev->caps.vf_caps & MLX4_VF_CAP_FLAG_RESET)) {
+ mlx4_err(dev, "VF reset is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ rst_req = (comm_flags & (u32)(1 << COM_CHAN_RST_REQ_OFFSET)) >>
+ COM_CHAN_RST_REQ_OFFSET;
+ rst_ack = (comm_flags & (u32)(1 << COM_CHAN_RST_ACK_OFFSET)) >>
+ COM_CHAN_RST_ACK_OFFSET;
+ if (rst_req != rst_ack) {
+ mlx4_err(dev, "Communication channel isn't sync, fail to send reset\n");
+ return -EIO;
+ }
+
+ rst_req ^= 1;
+ mlx4_warn(dev, "VF is sending reset request to Firmware\n");
+ comm_flags = rst_req << COM_CHAN_RST_REQ_OFFSET;
+ __raw_writel((__force u32)cpu_to_be32(comm_flags),
+ (__iomem char *)priv->mfunc.comm + MLX4_COMM_CHAN_FLAGS);
+ /* Make sure that our comm channel write doesn't
+ * get mixed in with writes from another CPU.
+ */
+ mmiowb();
+
+ end = msecs_to_jiffies(MLX4_COMM_TIME) + jiffies;
+ while (time_before(jiffies, end)) {
+ comm_flags = swab32(readl((__iomem char *)priv->mfunc.comm +
+ MLX4_COMM_CHAN_FLAGS));
+ rst_ack = (comm_flags & (u32)(1 << COM_CHAN_RST_ACK_OFFSET)) >>
+ COM_CHAN_RST_ACK_OFFSET;
+
+ /* Reading rst_req again since the communication channel can
+ * be reset at any time by the PF and all its bits will be
+ * set to zero.
+ */
+ rst_req = (comm_flags & (u32)(1 << COM_CHAN_RST_REQ_OFFSET)) >>
+ COM_CHAN_RST_REQ_OFFSET;
+
+ if (rst_ack == rst_req) {
+ mlx4_warn(dev, "VF Reset succeed\n");
+ return 0;
+ }
+ cond_resched();
+ }
+ mlx4_err(dev, "Fail to send reset over the communication channel\n");
+ return -ETIMEDOUT;
+}
+
+static int mlx4_comm_internal_err(u32 slave_read)
+{
+ return (u32)COMM_CHAN_EVENT_INTERNAL_ERR ==
+ (slave_read & (u32)COMM_CHAN_EVENT_INTERNAL_ERR) ? 1 : 0;
+}
+
+void mlx4_enter_error_state(struct mlx4_dev_persistent *persist)
+{
+ int err;
+ struct mlx4_dev *dev;
+
+ if (!mlx4_internal_err_reset)
+ return;
+
+ mutex_lock(&persist->device_state_mutex);
+ if (persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ goto out;
+
+ dev = persist->dev;
+ mlx4_err(dev, "device is going to be reset\n");
+ if (mlx4_is_slave(dev))
+ err = mlx4_reset_slave(dev);
+ else
+ err = mlx4_reset_master(dev);
+ BUG_ON(err != 0);
+
+ dev->persist->state |= MLX4_DEVICE_STATE_INTERNAL_ERROR;
+ mlx4_err(dev, "device was reset successfully\n");
+ mutex_unlock(&persist->device_state_mutex);
+
+ /* At this point the HW has already been reset; now notify clients */
+ mlx4_dispatch_event(dev, MLX4_DEV_EVENT_CATASTROPHIC_ERROR, 0);
+ mlx4_cmd_wake_completions(dev);
+ return;
+
+out:
+ mutex_unlock(&persist->device_state_mutex);
+}
+
+static void mlx4_handle_error_state(struct mlx4_dev_persistent *persist)
+{
+ int err = 0;
+
+ mlx4_enter_error_state(persist);
+ mutex_lock(&persist->interface_state_mutex);
+ if (persist->interface_state & MLX4_INTERFACE_STATE_UP &&
+ !(persist->interface_state & MLX4_INTERFACE_STATE_DELETION)) {
+ err = mlx4_restart_one(persist->pdev);
+ mlx4_info(persist->dev, "mlx4_restart_one finished, ret=%d\n",
+ err);
+ }
+ mutex_unlock(&persist->interface_state_mutex);
+}
static void dump_err_buf(struct mlx4_dev *dev)
{
{
struct mlx4_dev *dev = (struct mlx4_dev *) dev_ptr;
struct mlx4_priv *priv = mlx4_priv(dev);
+ u32 slave_read;
- if (readl(priv->catas_err.map)) {
- /* If the device is off-line, we cannot try to recover it */
- if (pci_channel_offline(dev->pdev))
- mod_timer(&priv->catas_err.timer,
- round_jiffies(jiffies + MLX4_CATAS_POLL_INTERVAL));
- else {
- dump_err_buf(dev);
- mlx4_dispatch_event(dev, MLX4_DEV_EVENT_CATASTROPHIC_ERROR, 0);
-
- if (internal_err_reset) {
- spin_lock(&catas_lock);
- list_add(&priv->catas_err.list, &catas_list);
- spin_unlock(&catas_lock);
-
- queue_work(mlx4_wq, &catas_work);
- }
+ if (mlx4_is_slave(dev)) {
+ slave_read = swab32(readl(&priv->mfunc.comm->slave_read));
+ if (mlx4_comm_internal_err(slave_read)) {
+ mlx4_warn(dev, "Internal error detected on the communication channel\n");
+ goto internal_err;
}
- } else
- mod_timer(&priv->catas_err.timer,
- round_jiffies(jiffies + MLX4_CATAS_POLL_INTERVAL));
+ } else if (readl(priv->catas_err.map)) {
+ dump_err_buf(dev);
+ goto internal_err;
+ }
+
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR) {
+ mlx4_warn(dev, "Internal error mark was detected on device\n");
+ goto internal_err;
+ }
+
+ mod_timer(&priv->catas_err.timer,
+ round_jiffies(jiffies + MLX4_CATAS_POLL_INTERVAL));
+ return;
+
+internal_err:
+ if (mlx4_internal_err_reset)
+ queue_work(dev->persist->catas_wq, &dev->persist->catas_work);
}
static void catas_reset(struct work_struct *work)
{
- struct mlx4_priv *priv, *tmppriv;
- struct mlx4_dev *dev;
+ struct mlx4_dev_persistent *persist =
+ container_of(work, struct mlx4_dev_persistent,
+ catas_work);
- LIST_HEAD(tlist);
- int ret;
-
- spin_lock_irq(&catas_lock);
- list_splice_init(&catas_list, &tlist);
- spin_unlock_irq(&catas_lock);
-
- list_for_each_entry_safe(priv, tmppriv, &tlist, catas_err.list) {
- struct pci_dev *pdev = priv->dev.pdev;
-
- /* If the device is off-line, we cannot reset it */
- if (pci_channel_offline(pdev))
- continue;
-
- ret = mlx4_restart_one(priv->dev.pdev);
- /* 'priv' now is not valid */
- if (ret)
- pr_err("mlx4 %s: Reset failed (%d)\n",
- pci_name(pdev), ret);
- else {
- dev = pci_get_drvdata(pdev);
- mlx4_dbg(dev, "Reset succeeded\n");
- }
- }
+ mlx4_handle_error_state(persist);
}
void mlx4_start_catas_poll(struct mlx4_dev *dev)
struct mlx4_priv *priv = mlx4_priv(dev);
phys_addr_t addr;
- /*If we are in SRIOV the default of the module param must be 0*/
- if (mlx4_is_mfunc(dev))
- internal_err_reset = 0;
-
INIT_LIST_HEAD(&priv->catas_err.list);
init_timer(&priv->catas_err.timer);
priv->catas_err.map = NULL;
- addr = pci_resource_start(dev->pdev, priv->fw.catas_bar) +
- priv->fw.catas_offset;
+ if (!mlx4_is_slave(dev)) {
+ addr = pci_resource_start(dev->persist->pdev,
+ priv->fw.catas_bar) +
+ priv->fw.catas_offset;
- priv->catas_err.map = ioremap(addr, priv->fw.catas_size * 4);
- if (!priv->catas_err.map) {
- mlx4_warn(dev, "Failed to map internal error buffer at 0x%llx\n",
- (unsigned long long) addr);
- return;
+ priv->catas_err.map = ioremap(addr, priv->fw.catas_size * 4);
+ if (!priv->catas_err.map) {
+ mlx4_warn(dev, "Failed to map internal error buffer at 0x%llx\n",
+ (unsigned long long)addr);
+ return;
+ }
}
priv->catas_err.timer.data = (unsigned long) dev;
del_timer_sync(&priv->catas_err.timer);
- if (priv->catas_err.map)
+ if (priv->catas_err.map) {
iounmap(priv->catas_err.map);
+ priv->catas_err.map = NULL;
+ }
- spin_lock_irq(&catas_lock);
- list_del(&priv->catas_err.list);
- spin_unlock_irq(&catas_lock);
+ if (dev->persist->interface_state & MLX4_INTERFACE_STATE_DELETION)
+ flush_workqueue(dev->persist->catas_wq);
}
-void __init mlx4_catas_init(void)
+int mlx4_catas_init(struct mlx4_dev *dev)
{
- INIT_WORK(&catas_work, catas_reset);
+ INIT_WORK(&dev->persist->catas_work, catas_reset);
+ dev->persist->catas_wq = create_singlethread_workqueue("mlx4_health");
+ if (!dev->persist->catas_wq)
+ return -ENOMEM;
+
+ return 0;
+}
+
+void mlx4_catas_end(struct mlx4_dev *dev)
+{
+ if (dev->persist->catas_wq) {
+ destroy_workqueue(dev->persist->catas_wq);
+ dev->persist->catas_wq = NULL;
+ }
}
#include <linux/mlx4/device.h>
#include <linux/semaphore.h>
#include <rdma/ib_smi.h>
+#include <linux/delay.h>
#include <asm/io.h>
}
}
+static int mlx4_internal_err_ret_value(struct mlx4_dev *dev, u16 op,
+ u8 op_modifier)
+{
+ switch (op) {
+ case MLX4_CMD_UNMAP_ICM:
+ case MLX4_CMD_UNMAP_ICM_AUX:
+ case MLX4_CMD_UNMAP_FA:
+ case MLX4_CMD_2RST_QP:
+ case MLX4_CMD_HW2SW_EQ:
+ case MLX4_CMD_HW2SW_CQ:
+ case MLX4_CMD_HW2SW_SRQ:
+ case MLX4_CMD_HW2SW_MPT:
+ case MLX4_CMD_CLOSE_HCA:
+ case MLX4_QP_FLOW_STEERING_DETACH:
+ case MLX4_CMD_FREE_RES:
+ case MLX4_CMD_CLOSE_PORT:
+ return CMD_STAT_OK;
+
+ case MLX4_CMD_QP_ATTACH:
+ /* On Detach case return success */
+ if (op_modifier == 0)
+ return CMD_STAT_OK;
+ return mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+
+ default:
+ return mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+ }
+}
+
+static int mlx4_closing_cmd_fatal_error(u16 op, u8 fw_status)
+{
+ /* Any error during the closing commands below is considered fatal */
+ if (op == MLX4_CMD_CLOSE_HCA ||
+ op == MLX4_CMD_HW2SW_EQ ||
+ op == MLX4_CMD_HW2SW_CQ ||
+ op == MLX4_CMD_2RST_QP ||
+ op == MLX4_CMD_HW2SW_SRQ ||
+ op == MLX4_CMD_SYNC_TPT ||
+ op == MLX4_CMD_UNMAP_ICM ||
+ op == MLX4_CMD_UNMAP_ICM_AUX ||
+ op == MLX4_CMD_UNMAP_FA)
+ return 1;
+ /* Error on MLX4_CMD_HW2SW_MPT is fatal except when fw status equals
+ * CMD_STAT_REG_BOUND.
+ * This status indicates that the memory region has memory windows
+ * bound to it, which may result from invalid user-space usage and
+ * is not fatal.
+ */
+ if (op == MLX4_CMD_HW2SW_MPT && fw_status != CMD_STAT_REG_BOUND)
+ return 1;
+ return 0;
+}
+
+static int mlx4_cmd_reset_flow(struct mlx4_dev *dev, u16 op, u8 op_modifier,
+ int err)
+{
+ /* Only if the reset flow is really active is the return code based
+ * on the command; otherwise the current error code is returned.
+ */
+ if (mlx4_internal_err_reset) {
+ mlx4_enter_error_state(dev->persist);
+ err = mlx4_internal_err_ret_value(dev, op, op_modifier);
+ }
+
+ return err;
+}
+
static int comm_pending(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
return (swab32(status) >> 31) != priv->cmd.comm_toggle;
}
-static void mlx4_comm_cmd_post(struct mlx4_dev *dev, u8 cmd, u16 param)
+static int mlx4_comm_cmd_post(struct mlx4_dev *dev, u8 cmd, u16 param)
{
struct mlx4_priv *priv = mlx4_priv(dev);
u32 val;
+ /* To avoid writing to unknown addresses after the device state was
+ * changed to internal error and the function was reset,
+ * check the INTERNAL_ERROR flag which is updated under
+ * device_state_mutex lock.
+ */
+ mutex_lock(&dev->persist->device_state_mutex);
+
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR) {
+ mutex_unlock(&dev->persist->device_state_mutex);
+ return -EIO;
+ }
+
priv->cmd.comm_toggle ^= 1;
val = param | (cmd << 16) | (priv->cmd.comm_toggle << 31);
__raw_writel((__force u32) cpu_to_be32(val),
&priv->mfunc.comm->slave_write);
mmiowb();
+ mutex_unlock(&dev->persist->device_state_mutex);
+ return 0;
}
static int mlx4_comm_cmd_poll(struct mlx4_dev *dev, u8 cmd, u16 param,
/* Write command */
down(&priv->cmd.poll_sem);
- mlx4_comm_cmd_post(dev, cmd, param);
+ if (mlx4_comm_cmd_post(dev, cmd, param)) {
+ /* mlx4_comm_cmd_post returns an error only when the device
+ * state is INTERNAL_ERROR.
+ */
+ err = mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+ goto out;
+ }
end = msecs_to_jiffies(timeout) + jiffies;
while (comm_pending(dev) && time_before(jiffies, end))
* is MLX4_DELAY_RESET_SLAVE*/
if ((MLX4_COMM_CMD_RESET == cmd)) {
err = MLX4_DELAY_RESET_SLAVE;
+ goto out;
} else {
- mlx4_warn(dev, "Communication channel timed out\n");
- err = -ETIMEDOUT;
+ mlx4_warn(dev, "Communication channel command 0x%x timed out\n",
+ cmd);
+ err = mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
}
}
+ if (err)
+ mlx4_enter_error_state(dev->persist);
+out:
up(&priv->cmd.poll_sem);
return err;
}
-static int mlx4_comm_cmd_wait(struct mlx4_dev *dev, u8 op,
- u16 param, unsigned long timeout)
+static int mlx4_comm_cmd_wait(struct mlx4_dev *dev, u8 vhcr_cmd,
+ u16 param, u16 op, unsigned long timeout)
{
struct mlx4_cmd *cmd = &mlx4_priv(dev)->cmd;
struct mlx4_cmd_context *context;
cmd->free_head = context->next;
spin_unlock(&cmd->context_lock);
- init_completion(&context->done);
+ reinit_completion(&context->done);
- mlx4_comm_cmd_post(dev, op, param);
+ if (mlx4_comm_cmd_post(dev, vhcr_cmd, param)) {
+ /* mlx4_comm_cmd_post returns an error only when the device
+ * state is INTERNAL_ERROR.
+ */
+ err = mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+ goto out;
+ }
if (!wait_for_completion_timeout(&context->done,
msecs_to_jiffies(timeout))) {
- mlx4_warn(dev, "communication channel command 0x%x timed out\n",
- op);
- err = -EBUSY;
- goto out;
+ mlx4_warn(dev, "communication channel command 0x%x (op=0x%x) timed out\n",
+ vhcr_cmd, op);
+ goto out_reset;
}
err = context->result;
if (err && context->fw_status != CMD_STAT_MULTI_FUNC_REQ) {
mlx4_err(dev, "command 0x%x failed: fw status = 0x%x\n",
- op, context->fw_status);
- goto out;
+ vhcr_cmd, context->fw_status);
+ if (mlx4_closing_cmd_fatal_error(op, context->fw_status))
+ goto out_reset;
}
-out:
/* wait for comm channel ready
 * this is necessary to prevent a race
 * when switching between event and polling mode
+ * Skip this section when the device is in the FATAL_ERROR state;
+ * in this state, no commands are sent via the comm channel until
+ * the device has returned from reset.
*/
- end = msecs_to_jiffies(timeout) + jiffies;
- while (comm_pending(dev) && time_before(jiffies, end))
- cond_resched();
+ if (!(dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)) {
+ end = msecs_to_jiffies(timeout) + jiffies;
+ while (comm_pending(dev) && time_before(jiffies, end))
+ cond_resched();
+ }
+ goto out;
+out_reset:
+ err = mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+ mlx4_enter_error_state(dev->persist);
+out:
spin_lock(&cmd->context_lock);
context->next = cmd->free_head;
cmd->free_head = context - cmd->context;
}
int mlx4_comm_cmd(struct mlx4_dev *dev, u8 cmd, u16 param,
- unsigned long timeout)
+ u16 op, unsigned long timeout)
{
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ return mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+
if (mlx4_priv(dev)->cmd.use_events)
- return mlx4_comm_cmd_wait(dev, cmd, param, timeout);
+ return mlx4_comm_cmd_wait(dev, cmd, param, op, timeout);
return mlx4_comm_cmd_poll(dev, cmd, param, timeout);
}
{
u32 status;
- if (pci_channel_offline(dev->pdev))
+ if (pci_channel_offline(dev->persist->pdev))
return -EIO;
status = readl(mlx4_priv(dev)->cmd.hcr + HCR_STATUS_OFFSET);
{
struct mlx4_cmd *cmd = &mlx4_priv(dev)->cmd;
u32 __iomem *hcr = cmd->hcr;
- int ret = -EAGAIN;
+ int ret = -EIO;
unsigned long end;
- mutex_lock(&cmd->hcr_mutex);
-
- if (pci_channel_offline(dev->pdev)) {
+ mutex_lock(&dev->persist->device_state_mutex);
+ /* To avoid writing to unknown addresses after the device state was
+ * changed to internal error and the chip was reset,
+ * check the INTERNAL_ERROR flag which is updated under
+ * device_state_mutex lock.
+ */
+ if (pci_channel_offline(dev->persist->pdev) ||
+ (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)) {
/*
* Device is going through error recovery
* and cannot accept commands.
*/
- ret = -EIO;
goto out;
}
end += msecs_to_jiffies(GO_BIT_TIMEOUT_MSECS);
while (cmd_pending(dev)) {
- if (pci_channel_offline(dev->pdev)) {
+ if (pci_channel_offline(dev->persist->pdev)) {
/*
* Device is going through error recovery
* and cannot accept commands.
*/
- ret = -EIO;
goto out;
}
ret = 0;
out:
- mutex_unlock(&cmd->hcr_mutex);
+ if (ret)
+ mlx4_warn(dev, "Could not post command 0x%x: ret=%d, in_param=0x%llx, in_mod=0x%x, op_mod=0x%x\n",
+ op, ret, in_param, in_modifier, op_modifier);
+ mutex_unlock(&dev->persist->device_state_mutex);
+
return ret;
}
}
ret = mlx4_status_to_errno(vhcr->status);
}
+ if (ret &&
+ dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ ret = mlx4_internal_err_ret_value(dev, op, op_modifier);
} else {
- ret = mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR_POST, 0,
+ ret = mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR_POST, 0, op,
MLX4_COMM_TIME + timeout);
if (!ret) {
if (out_is_imm) {
}
}
ret = mlx4_status_to_errno(vhcr->status);
- } else
- mlx4_err(dev, "failed execution of VHCR_POST command opcode 0x%x\n",
- op);
+ } else {
+ if (dev->persist->state &
+ MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ ret = mlx4_internal_err_ret_value(dev, op,
+ op_modifier);
+ else
+ mlx4_err(dev, "failed execution of VHCR_POST command opcode 0x%x\n", op);
+ }
}
mutex_unlock(&priv->cmd.slave_cmd_mutex);
down(&priv->cmd.poll_sem);
- if (pci_channel_offline(dev->pdev)) {
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR) {
/*
* Device is going through error recovery
* and cannot accept commands.
*/
- err = -EIO;
+ err = mlx4_internal_err_ret_value(dev, op, op_modifier);
goto out;
}
err = mlx4_cmd_post(dev, in_param, out_param ? *out_param : 0,
in_modifier, op_modifier, op, CMD_POLL_TOKEN, 0);
if (err)
- goto out;
+ goto out_reset;
end = msecs_to_jiffies(timeout) + jiffies;
while (cmd_pending(dev) && time_before(jiffies, end)) {
- if (pci_channel_offline(dev->pdev)) {
+ if (pci_channel_offline(dev->persist->pdev)) {
/*
* Device is going through error recovery
* and cannot accept commands.
*/
err = -EIO;
+ goto out_reset;
+ }
+
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR) {
+ err = mlx4_internal_err_ret_value(dev, op, op_modifier);
goto out;
}
if (cmd_pending(dev)) {
mlx4_warn(dev, "command 0x%x timed out (go bit not cleared)\n",
op);
- err = -ETIMEDOUT;
- goto out;
+ err = -EIO;
+ goto out_reset;
}
if (out_is_imm)
stat = be32_to_cpu((__force __be32)
__raw_readl(hcr + HCR_STATUS_OFFSET)) >> 24;
err = mlx4_status_to_errno(stat);
- if (err)
+ if (err) {
mlx4_err(dev, "command 0x%x failed: fw status = 0x%x\n",
op, stat);
+ if (mlx4_closing_cmd_fatal_error(op, stat))
+ goto out_reset;
+ goto out;
+ }
+out_reset:
+ if (err)
+ err = mlx4_cmd_reset_flow(dev, op, op_modifier, err);
out:
up(&priv->cmd.poll_sem);
return err;
goto out;
}
- init_completion(&context->done);
+ reinit_completion(&context->done);
- mlx4_cmd_post(dev, in_param, out_param ? *out_param : 0,
- in_modifier, op_modifier, op, context->token, 1);
+ err = mlx4_cmd_post(dev, in_param, out_param ? *out_param : 0,
+ in_modifier, op_modifier, op, context->token, 1);
+ if (err)
+ goto out_reset;
if (!wait_for_completion_timeout(&context->done,
msecs_to_jiffies(timeout))) {
mlx4_warn(dev, "command 0x%x timed out (go bit not cleared)\n",
op);
- err = -EBUSY;
- goto out;
+ err = -EIO;
+ goto out_reset;
}
err = context->result;
else
mlx4_err(dev, "command 0x%x failed: fw status = 0x%x\n",
op, context->fw_status);
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ err = mlx4_internal_err_ret_value(dev, op, op_modifier);
+ else if (mlx4_closing_cmd_fatal_error(op, context->fw_status))
+ goto out_reset;
+
goto out;
}
if (out_is_imm)
*out_param = context->out_param;
+out_reset:
+ if (err)
+ err = mlx4_cmd_reset_flow(dev, op, op_modifier, err);
out:
spin_lock(&cmd->context_lock);
context->next = cmd->free_head;
int out_is_imm, u32 in_modifier, u8 op_modifier,
u16 op, unsigned long timeout, int native)
{
- if (pci_channel_offline(dev->pdev))
- return -EIO;
+ if (pci_channel_offline(dev->persist->pdev))
+ return mlx4_cmd_reset_flow(dev, op, op_modifier, -EIO);
if (!mlx4_is_mfunc(dev) || (native && mlx4_is_master(dev))) {
+ if (dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ return mlx4_internal_err_ret_value(dev, op,
+ op_modifier);
if (mlx4_priv(dev)->cmd.use_events)
return mlx4_cmd_wait(dev, in_param, out_param,
out_is_imm, in_modifier,
EXPORT_SYMBOL_GPL(__mlx4_cmd);
-static int mlx4_ARM_COMM_CHANNEL(struct mlx4_dev *dev)
+int mlx4_ARM_COMM_CHANNEL(struct mlx4_dev *dev)
{
return mlx4_cmd(dev, 0, 0, 0, MLX4_CMD_ARM_COMM_CHANNEL,
MLX4_CMD_TIME_CLASS_B, MLX4_CMD_NATIVE);
ALIGN(sizeof(struct mlx4_vhcr_cmd),
MLX4_ACCESS_MEM_ALIGN), 1);
if (ret) {
- mlx4_err(dev, "%s: Failed reading vhcr ret: 0x%x\n",
- __func__, ret);
+ if (!(dev->persist->state &
+ MLX4_DEVICE_STATE_INTERNAL_ERROR))
+ mlx4_err(dev, "%s: Failed reading vhcr ret: 0x%x\n",
+ __func__, ret);
kfree(vhcr);
return ret;
}
goto out_status;
}
- if (mlx4_ACCESS_MEM(dev, inbox->dma, slave,
- vhcr->in_param,
- MLX4_MAILBOX_SIZE, 1)) {
- mlx4_err(dev, "%s: Failed reading inbox (cmd:0x%x)\n",
- __func__, cmd->opcode);
+ ret = mlx4_ACCESS_MEM(dev, inbox->dma, slave,
+ vhcr->in_param,
+ MLX4_MAILBOX_SIZE, 1);
+ if (ret) {
+ if (!(dev->persist->state &
+ MLX4_DEVICE_STATE_INTERNAL_ERROR))
+ mlx4_err(dev, "%s: Failed reading inbox (cmd:0x%x)\n",
+ __func__, cmd->opcode);
vhcr_cmd->status = CMD_STAT_INTERNAL_ERR;
goto out_status;
}
}
if (err) {
- mlx4_warn(dev, "vhcr command:0x%x slave:%d failed with error:%d, status %d\n",
- vhcr->op, slave, vhcr->errno, err);
+ if (!(dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR))
+ mlx4_warn(dev, "vhcr command:0x%x slave:%d failed with error:%d, status %d\n",
+ vhcr->op, slave, vhcr->errno, err);
vhcr_cmd->status = mlx4_errno_to_status(err);
goto out_status;
}
/* If we failed to write back the outbox after the
* command was successfully executed, we must fail this
* slave, as it is now in undefined state */
- mlx4_err(dev, "%s:Failed writing outbox\n", __func__);
+ if (!(dev->persist->state &
+ MLX4_DEVICE_STATE_INTERNAL_ERROR))
+ mlx4_err(dev, "%s:Failed writing outbox\n", __func__);
goto out;
}
}
break;
case MLX4_COMM_CMD_VHCR_POST:
if ((slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR_EN) &&
- (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR_POST))
+ (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR_POST)) {
+ mlx4_warn(dev, "slave:%d is out of sync, cmd=0x%x, last command=0x%x, reset is needed\n",
+ slave, cmd, slave_state[slave].last_cmd);
goto reset_slave;
+ }
mutex_lock(&priv->cmd.slave_cmd_mutex);
if (mlx4_master_process_vhcr(dev, slave, NULL)) {
reset_slave:
/* cleanup any slave resources */
- mlx4_delete_all_resources_for_slave(dev, slave);
+ if (dev->persist->interface_state & MLX4_INTERFACE_STATE_UP)
+ mlx4_delete_all_resources_for_slave(dev, slave);
+
+ if (cmd != MLX4_COMM_CMD_RESET) {
+ mlx4_warn(dev, "Turn on internal error to force reset, slave=%d, cmd=0x%x\n",
+ slave, cmd);
+ /* Turn on internal error, letting the slave reset itself
+ * immediately; otherwise it might take until the command timeout passes.
+ */
+ reply |= ((u32)COMM_CHAN_EVENT_INTERNAL_ERR);
+ }
+
spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
if (!slave_state[slave].is_slave_going_down)
slave_state[slave].last_cmd = MLX4_COMM_CMD_RESET;
static int sync_toggles(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
- int wr_toggle;
- int rd_toggle;
+ u32 wr_toggle;
+ u32 rd_toggle;
unsigned long end;
- wr_toggle = swab32(readl(&priv->mfunc.comm->slave_write)) >> 31;
- end = jiffies + msecs_to_jiffies(5000);
+ wr_toggle = swab32(readl(&priv->mfunc.comm->slave_write));
+ if (wr_toggle == 0xffffffff)
+ end = jiffies + msecs_to_jiffies(30000);
+ else
+ end = jiffies + msecs_to_jiffies(5000);
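+ /* Reading the full 32-bit words (not just bit 31) lets us recognize
+ * the all-ones pattern a dead PCI link returns and keep retrying,
+ * instead of mistaking it for a valid toggle value.
+ */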
while (time_before(jiffies, end)) {
- rd_toggle = swab32(readl(&priv->mfunc.comm->slave_read)) >> 31;
- if (rd_toggle == wr_toggle) {
- priv->cmd.comm_toggle = rd_toggle;
+ rd_toggle = swab32(readl(&priv->mfunc.comm->slave_read));
+ if (wr_toggle == 0xffffffff || rd_toggle == 0xffffffff) {
+ /* PCI might be offline */
+ msleep(100);
+ wr_toggle = swab32(readl(&priv->mfunc.comm->
+ slave_write));
+ continue;
+ }
+
+ if (rd_toggle >> 31 == wr_toggle >> 31) {
+ priv->cmd.comm_toggle = rd_toggle >> 31;
return 0;
}
if (mlx4_is_master(dev))
priv->mfunc.comm =
- ioremap(pci_resource_start(dev->pdev, priv->fw.comm_bar) +
+ ioremap(pci_resource_start(dev->persist->pdev,
+ priv->fw.comm_bar) +
priv->fw.comm_base, MLX4_COMM_PAGESIZE);
else
priv->mfunc.comm =
- ioremap(pci_resource_start(dev->pdev, 2) +
+ ioremap(pci_resource_start(dev->persist->pdev, 2) +
MLX4_SLAVE_COMM_BASE, MLX4_COMM_PAGESIZE);
if (!priv->mfunc.comm) {
mlx4_err(dev, "Couldn't map communication vector\n");
if (mlx4_init_resource_tracker(dev))
goto err_thread;
- err = mlx4_ARM_COMM_CHANNEL(dev);
- if (err) {
- mlx4_err(dev, " Failed to arm comm channel eq: %x\n",
- err);
- goto err_resource;
- }
-
} else {
err = sync_toggles(dev);
if (err) {
}
return 0;
-err_resource:
- mlx4_free_resource_tracker(dev, RES_TR_FREE_ALL);
err_thread:
flush_workqueue(priv->mfunc.master.comm_wq);
destroy_workqueue(priv->mfunc.master.comm_wq);
err_comm:
iounmap(priv->mfunc.comm);
err_vhcr:
- dma_free_coherent(&(dev->pdev->dev), PAGE_SIZE,
- priv->mfunc.vhcr,
- priv->mfunc.vhcr_dma);
+ dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
+ priv->mfunc.vhcr,
+ priv->mfunc.vhcr_dma);
priv->mfunc.vhcr = NULL;
return -ENOMEM;
}
int flags = 0;
if (!priv->cmd.initialized) {
- mutex_init(&priv->cmd.hcr_mutex);
mutex_init(&priv->cmd.slave_cmd_mutex);
sema_init(&priv->cmd.poll_sem, 1);
priv->cmd.use_events = 0;
}
if (!mlx4_is_slave(dev) && !priv->cmd.hcr) {
- priv->cmd.hcr = ioremap(pci_resource_start(dev->pdev, 0) +
- MLX4_HCR_BASE, MLX4_HCR_SIZE);
+ priv->cmd.hcr = ioremap(pci_resource_start(dev->persist->pdev,
+ 0) + MLX4_HCR_BASE, MLX4_HCR_SIZE);
if (!priv->cmd.hcr) {
mlx4_err(dev, "Couldn't map command register\n");
goto err;
}
if (mlx4_is_mfunc(dev) && !priv->mfunc.vhcr) {
- priv->mfunc.vhcr = dma_alloc_coherent(&(dev->pdev->dev), PAGE_SIZE,
+ priv->mfunc.vhcr = dma_alloc_coherent(&dev->persist->pdev->dev,
+ PAGE_SIZE,
&priv->mfunc.vhcr_dma,
GFP_KERNEL);
if (!priv->mfunc.vhcr)
}
if (!priv->cmd.pool) {
- priv->cmd.pool = pci_pool_create("mlx4_cmd", dev->pdev,
+ priv->cmd.pool = pci_pool_create("mlx4_cmd",
+ dev->persist->pdev,
MLX4_MAILBOX_SIZE,
MLX4_MAILBOX_SIZE, 0);
if (!priv->cmd.pool)
return -ENOMEM;
}
+void mlx4_report_internal_err_comm_event(struct mlx4_dev *dev)
+{
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ int slave;
+ u32 slave_read;
+
+ /* Report an internal error event to all
+ * communication channels.
+ */
+ for (slave = 0; slave < dev->num_slaves; slave++) {
+ slave_read = swab32(readl(&priv->mfunc.comm[slave].slave_read));
+ slave_read |= (u32)COMM_CHAN_EVENT_INTERNAL_ERR;
+ __raw_writel((__force u32)cpu_to_be32(slave_read),
+ &priv->mfunc.comm[slave].slave_read);
+ /* Make sure that our comm channel write doesn't
+ * get mixed in with writes from another CPU.
+ */
+ mmiowb();
+ }
+}
+
void mlx4_multi_func_cleanup(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
kfree(priv->mfunc.master.slave_state);
kfree(priv->mfunc.master.vf_admin);
kfree(priv->mfunc.master.vf_oper);
+ dev->num_slaves = 0;
}
iounmap(priv->mfunc.comm);
}
if (mlx4_is_mfunc(dev) && priv->mfunc.vhcr &&
(cleanup_mask & MLX4_CMD_CLEANUP_VHCR)) {
- dma_free_coherent(&(dev->pdev->dev), PAGE_SIZE,
+ dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
priv->mfunc.vhcr, priv->mfunc.vhcr_dma);
priv->mfunc.vhcr = NULL;
}
for (i = 0; i < priv->cmd.max_cmds; ++i) {
priv->cmd.context[i].token = i;
priv->cmd.context[i].next = i + 1;
+ /* To support fatal error flow, initialize all
+ * cmd contexts to allow simulating completions
+ * with complete() at any time.
+ */
+ init_completion(&priv->cmd.context[i].done);
}
priv->cmd.context[priv->cmd.max_cmds - 1].next = -1;
static int mlx4_get_slave_indx(struct mlx4_dev *dev, int vf)
{
- if ((vf < 0) || (vf >= dev->num_vfs)) {
- mlx4_err(dev, "Bad vf number:%d (number of activated vf: %d)\n", vf, dev->num_vfs);
+ if ((vf < 0) || (vf >= dev->persist->num_vfs)) {
+ mlx4_err(dev, "Bad vf number:%d (number of activated vf: %d)\n",
+ vf, dev->persist->num_vfs);
return -EINVAL;
}
int mlx4_get_vf_indx(struct mlx4_dev *dev, int slave)
{
- if (slave < 1 || slave > dev->num_vfs) {
+ if (slave < 1 || slave > dev->persist->num_vfs) {
mlx4_err(dev,
"Bad slave number:%d (number of activated slaves: %lu)\n",
slave, dev->num_slaves);
return slave - 1;
}
+void mlx4_cmd_wake_completions(struct mlx4_dev *dev)
+{
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ struct mlx4_cmd_context *context;
+ int i;
+
+ spin_lock(&priv->cmd.context_lock);
+ if (priv->cmd.context) {
+ for (i = 0; i < priv->cmd.max_cmds; ++i) {
+ context = &priv->cmd.context[i];
+ context->fw_status = CMD_STAT_INTERNAL_ERR;
+ context->result =
+ mlx4_status_to_errno(CMD_STAT_INTERNAL_ERR);
+ complete(&context->done);
+ }
+ }
+ spin_unlock(&priv->cmd.context_lock);
+}
+
struct mlx4_active_ports mlx4_get_active_ports(struct mlx4_dev *dev, int slave)
{
struct mlx4_active_ports actv_ports;
if (port <= 0 || port > dev->caps.num_ports)
return slaves_pport;
- for (i = 0; i < dev->num_vfs + 1; i++) {
+ for (i = 0; i < dev->persist->num_vfs + 1; i++) {
struct mlx4_active_ports actv_ports =
mlx4_get_active_ports(dev, i);
if (test_bit(port - 1, actv_ports.ports))
bitmap_zero(slaves_pport.slaves, MLX4_MFUNC_MAX);
- for (i = 0; i < dev->num_vfs + 1; i++) {
+ for (i = 0; i < dev->persist->num_vfs + 1; i++) {
struct mlx4_active_ports actv_ports =
mlx4_get_active_ports(dev, i);
if (bitmap_equal(crit_ports->ports, actv_ports.ports,
/* Allocate HW buffers on provided NUMA node.
* dev->numa_node is used in mtt range allocation flow.
*/
- set_dev_node(&mdev->dev->pdev->dev, node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, node);
err = mlx4_alloc_hwq_res(mdev->dev, &cq->wqres,
cq->buf_size, 2 * PAGE_SIZE);
- set_dev_node(&mdev->dev->pdev->dev, mdev->dev->numa_node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err)
goto err_cq;
(u16) (mdev->dev->caps.fw_ver >> 32),
(u16) ((mdev->dev->caps.fw_ver >> 16) & 0xffff),
(u16) (mdev->dev->caps.fw_ver & 0xffff));
- strlcpy(drvinfo->bus_info, pci_name(mdev->dev->pdev),
+ strlcpy(drvinfo->bus_info, pci_name(mdev->dev->persist->pdev),
sizeof(drvinfo->bus_info));
drvinfo->n_stats = 0;
drvinfo->regdump_len = 0;
spin_lock_init(&mdev->uar_lock);
mdev->dev = dev;
- mdev->dma_device = &(dev->pdev->dev);
- mdev->pdev = dev->pdev;
+ mdev->dma_device = &dev->persist->pdev->dev;
+ mdev->pdev = dev->persist->pdev;
mdev->device_up = false;
mdev->LSO_support = !!(dev->caps.flags & (1 << 15));
netif_set_real_num_tx_queues(dev, prof->tx_ring_num);
netif_set_real_num_rx_queues(dev, prof->rx_ring_num);
- SET_NETDEV_DEV(dev, &mdev->dev->pdev->dev);
+ SET_NETDEV_DEV(dev, &mdev->dev->persist->pdev->dev);
dev->dev_port = port - 1;
/*
ring->rx_info, tmp);
/* Allocate HW buffers on provided NUMA node */
- set_dev_node(&mdev->dev->pdev->dev, node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, node);
err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres,
ring->buf_size, 2 * PAGE_SIZE);
- set_dev_node(&mdev->dev->pdev->dev, mdev->dev->numa_node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err)
goto err_info;
ring->buf_size = ALIGN(size * ring->stride, MLX4_EN_PAGE_SIZE);
/* Allocate HW buffers on provided NUMA node */
- set_dev_node(&mdev->dev->pdev->dev, node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, node);
err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size,
2 * PAGE_SIZE);
- set_dev_node(&mdev->dev->pdev->dev, mdev->dev->numa_node);
+ set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err) {
en_err(priv, "Failed allocating hwq resources\n");
goto err_bounce;
if (dev->num_tc)
return skb_tx_hash(dev, skb);
- if (vlan_tx_tag_present(skb))
- up = vlan_tx_tag_get(skb) >> VLAN_PRIO_SHIFT;
+ if (skb_vlan_tag_present(skb))
+ up = skb_vlan_tag_get(skb) >> VLAN_PRIO_SHIFT;
return fallback(dev, skb) % rings_p_up + up * rings_p_up;
}
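
The vlan_tx_tag_*() to skb_vlan_tag_*() hunks in this and the following
drivers track a tree-wide rename of the VLAN TX tag helpers; the semantics
are unchanged. A hedged sketch of the usual call pattern with the new names
(the helper itself is illustrative, not part of any driver here):

	#include <linux/if_vlan.h>

	/* Return the offloaded VLAN TCI, or 0 when no tag is present. */
	static u16 tx_vlan_tci(const struct sk_buff *skb)
	{
		return skb_vlan_tag_present(skb) ? skb_vlan_tag_get(skb) : 0;
	}
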
goto tx_drop;
}
- if (vlan_tx_tag_present(skb))
- vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb))
+ vlan_tag = skb_vlan_tag_get(skb);
netdev_txq_bql_enqueue_prefetchw(ring->tx_queue);
real_size = (real_size / 16) & 0x3f;
if (ring->bf_enabled && desc_size <= MAX_BF && !bounce &&
- !vlan_tx_tag_present(skb) && send_doorbell) {
+ !skb_vlan_tag_present(skb) && send_doorbell) {
tx_desc->ctrl.bf_qpn = ring->doorbell_qpn |
cpu_to_be32(real_size);
} else {
tx_desc->ctrl.vlan_tag = cpu_to_be16(vlan_tag);
tx_desc->ctrl.ins_vlan = MLX4_WQE_CTRL_INS_VLAN *
- !!vlan_tx_tag_present(skb);
+ !!skb_vlan_tag_present(skb);
tx_desc->ctrl.fence_size = real_size;
/* Ensure new descriptor hits memory
struct mlx4_eqe eqe;
/* don't send if we don't have that slave */
- if (dev->num_vfs < slave)
+ if (dev->persist->num_vfs < slave)
return 0;
memset(&eqe, 0, sizeof eqe);
struct mlx4_eqe eqe;
/* don't send if we don't have that slave */
- if (dev->num_vfs < slave)
+ if (dev->persist->num_vfs < slave)
return 0;
memset(&eqe, 0, sizeof eqe);
struct mlx4_slaves_pport slaves_pport = mlx4_phys_to_slaves_pport(dev,
port);
- for (i = 0; i < dev->num_vfs + 1; i++)
+ for (i = 0; i < dev->persist->num_vfs + 1; i++)
if (test_bit(i, slaves_pport.slaves))
set_and_calc_slave_port_state(dev, i, port,
event, &gen_event);
if (MLX4_COMM_CMD_FLR == slave_state[i].last_cmd) {
mlx4_dbg(dev, "mlx4_handle_slave_flr: clean slave: %d\n",
i);
-
- mlx4_delete_all_resources_for_slave(dev, i);
+ /* In case of 'Reset flow' an FLR can be generated for
+ * a slave before mlx4_load_one is done.
+ * Make sure the interface is up before trying to delete
+ * slave resources which weren't allocated yet.
+ */
+ if (dev->persist->interface_state &
+ MLX4_INTERFACE_STATE_UP)
+ mlx4_delete_all_resources_for_slave(dev, i);
/* return the slave to running mode */
spin_lock_irqsave(&priv->mfunc.master.slave_state_lock, flags);
slave_state[i].last_cmd = MLX4_COMM_CMD_RESET;
mlx4_priv(dev)->sense.do_sense_port[port] = 1;
if (!mlx4_is_master(dev))
break;
- for (i = 0; i < dev->num_vfs + 1; i++) {
+ for (i = 0; i < dev->persist->num_vfs + 1;
+ i++) {
if (!test_bit(i, slaves_port.slaves))
continue;
if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH) {
if (!mlx4_is_master(dev))
break;
if (dev->caps.port_type[port] == MLX4_PORT_TYPE_ETH)
- for (i = 0; i < dev->num_vfs + 1; i++) {
+ for (i = 0;
+ i < dev->persist->num_vfs + 1;
+ i++) {
if (!test_bit(i, slaves_port.slaves))
continue;
if (i == mlx4_master_func_num(dev))
if (!priv->eq_table.uar_map[index]) {
priv->eq_table.uar_map[index] =
- ioremap(pci_resource_start(dev->pdev, 2) +
+ ioremap(pci_resource_start(dev->persist->pdev, 2) +
((eq->eqn / 4) << PAGE_SHIFT),
PAGE_SIZE);
if (!priv->eq_table.uar_map[index]) {
eq_context = mailbox->buf;
for (i = 0; i < npages; ++i) {
- eq->page_list[i].buf = dma_alloc_coherent(&dev->pdev->dev,
- PAGE_SIZE, &t, GFP_KERNEL);
+ eq->page_list[i].buf = dma_alloc_coherent(&dev->persist->
+ pdev->dev,
+ PAGE_SIZE, &t,
+ GFP_KERNEL);
if (!eq->page_list[i].buf)
goto err_out_free_pages;
err_out_free_pages:
for (i = 0; i < npages; ++i)
if (eq->page_list[i].buf)
- dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
+ dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
eq->page_list[i].buf,
eq->page_list[i].map);
mlx4_mtt_cleanup(dev, &eq->mtt);
for (i = 0; i < npages; ++i)
- dma_free_coherent(&dev->pdev->dev, PAGE_SIZE,
- eq->page_list[i].buf,
- eq->page_list[i].map);
+ dma_free_coherent(&dev->persist->pdev->dev, PAGE_SIZE,
+ eq->page_list[i].buf,
+ eq->page_list[i].map);
kfree(eq->page_list);
mlx4_bitmap_free(&priv->eq_table.bitmap, eq->eqn, MLX4_USE_RR);
int i, vec;
if (eq_table->have_irq)
- free_irq(dev->pdev->irq, dev);
+ free_irq(dev->persist->pdev->irq, dev);
for (i = 0; i < dev->caps.num_comp_vectors + 1; ++i)
if (eq_table->eq[i].have_irq) {
{
struct mlx4_priv *priv = mlx4_priv(dev);
- priv->clr_base = ioremap(pci_resource_start(dev->pdev, priv->fw.clr_int_bar) +
+ priv->clr_base = ioremap(pci_resource_start(dev->persist->pdev,
+ priv->fw.clr_int_bar) +
priv->fw.clr_int_base, MLX4_CLR_INT_SIZE);
if (!priv->clr_base) {
mlx4_err(dev, "Couldn't map interrupt clear register, aborting\n");
i * MLX4_IRQNAME_SIZE,
MLX4_IRQNAME_SIZE,
"mlx4-comp-%d@pci:%s", i,
- pci_name(dev->pdev));
+ pci_name(dev->persist->pdev));
} else {
snprintf(priv->eq_table.irq_names +
i * MLX4_IRQNAME_SIZE,
MLX4_IRQNAME_SIZE,
"mlx4-async@pci:%s",
- pci_name(dev->pdev));
+ pci_name(dev->persist->pdev));
}
eq_name = priv->eq_table.irq_names +
snprintf(priv->eq_table.irq_names,
MLX4_IRQNAME_SIZE,
DRV_NAME "@pci:%s",
- pci_name(dev->pdev));
- err = request_irq(dev->pdev->irq, mlx4_interrupt,
+ pci_name(dev->persist->pdev));
+ err = request_irq(dev->persist->pdev->irq, mlx4_interrupt,
IRQF_SHARED, priv->eq_table.irq_names, dev);
if (err)
goto err_out_async;
int i;
if (chunk->nsg > 0)
- pci_unmap_sg(dev->pdev, chunk->mem, chunk->npages,
+ pci_unmap_sg(dev->persist->pdev, chunk->mem, chunk->npages,
PCI_DMA_BIDIRECTIONAL);
for (i = 0; i < chunk->npages; ++i)
int i;
for (i = 0; i < chunk->npages; ++i)
- dma_free_coherent(&dev->pdev->dev, chunk->mem[i].length,
+ dma_free_coherent(&dev->persist->pdev->dev,
+ chunk->mem[i].length,
lowmem_page_address(sg_page(&chunk->mem[i])),
sg_dma_address(&chunk->mem[i]));
}
--cur_order;
if (coherent)
- ret = mlx4_alloc_icm_coherent(&dev->pdev->dev,
+ ret = mlx4_alloc_icm_coherent(&dev->persist->pdev->dev,
&chunk->mem[chunk->npages],
cur_order, gfp_mask);
else
if (coherent)
++chunk->nsg;
else if (chunk->npages == MLX4_ICM_CHUNK_LEN) {
- chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
+ chunk->nsg = pci_map_sg(dev->persist->pdev, chunk->mem,
chunk->npages,
PCI_DMA_BIDIRECTIONAL);
}
if (!coherent && chunk) {
- chunk->nsg = pci_map_sg(dev->pdev, chunk->mem,
+ chunk->nsg = pci_map_sg(dev->persist->pdev, chunk->mem,
chunk->npages,
PCI_DMA_BIDIRECTIONAL);
mutex_lock(&intf_mutex);
+ dev->persist->interface_state |= MLX4_INTERFACE_STATE_UP;
list_add_tail(&priv->dev_list, &dev_list);
list_for_each_entry(intf, &intf_list, list)
mlx4_add_device(intf, priv);
mutex_unlock(&intf_mutex);
- if (!mlx4_is_slave(dev))
- mlx4_start_catas_poll(dev);
+ mlx4_start_catas_poll(dev);
return 0;
}
struct mlx4_priv *priv = mlx4_priv(dev);
struct mlx4_interface *intf;
- if (!mlx4_is_slave(dev))
- mlx4_stop_catas_poll(dev);
+ mlx4_stop_catas_poll(dev);
mutex_lock(&intf_mutex);
list_for_each_entry(intf, &intf_list, list)
mlx4_remove_device(intf, priv);
list_del(&priv->dev_list);
+ dev->persist->interface_state &= ~MLX4_INTERFACE_STATE_UP;
mutex_unlock(&intf_mutex);
}
MLX4_FUNC_CAP_EQE_CQE_STRIDE | \
MLX4_FUNC_CAP_DMFS_A0_STATIC)
+#define RESET_PERSIST_MASK_FLAGS (MLX4_FLAG_SRIOV)
+
static char mlx4_version[] =
DRV_NAME ": Mellanox ConnectX core driver v"
DRV_VERSION " (" DRV_RELDATE ")\n";
return -ENODEV;
}
- if (dev_cap->uar_size > pci_resource_len(dev->pdev, 2)) {
+ if (dev_cap->uar_size > pci_resource_len(dev->persist->pdev, 2)) {
mlx4_err(dev, "HCA reported UAR size of 0x%x bigger than PCI resource 2 size of 0x%llx, aborting\n",
dev_cap->uar_size,
- (unsigned long long) pci_resource_len(dev->pdev, 2));
+ (unsigned long long)
+ pci_resource_len(dev->persist->pdev, 2));
return -ENODEV;
}
*speed = PCI_SPEED_UNKNOWN;
*width = PCIE_LNK_WIDTH_UNKNOWN;
- err1 = pcie_capability_read_dword(dev->pdev, PCI_EXP_LNKCAP, &lnkcap1);
- err2 = pcie_capability_read_dword(dev->pdev, PCI_EXP_LNKCAP2, &lnkcap2);
+ err1 = pcie_capability_read_dword(dev->persist->pdev, PCI_EXP_LNKCAP,
+ &lnkcap1);
+ err2 = pcie_capability_read_dword(dev->persist->pdev, PCI_EXP_LNKCAP2,
+ &lnkcap2);
if (!err2 && lnkcap2) { /* PCIe r3.0-compliant */
if (lnkcap2 & PCI_EXP_LNKCAP2_SLS_8_0GB)
*speed = PCIE_SPEED_8_0GT;
return;
}
- err = pcie_get_minimum_link(dev->pdev, &speed, &width);
+ err = pcie_get_minimum_link(dev->persist->pdev, &speed, &width);
if (err || speed == PCI_SPEED_UNKNOWN ||
width == PCIE_LNK_WIDTH_UNKNOWN) {
mlx4_warn(dev,
if (dev->caps.uar_page_size * (dev->caps.num_uars -
dev->caps.reserved_uars) >
- pci_resource_len(dev->pdev, 2)) {
+ pci_resource_len(dev->persist->pdev,
+ 2)) {
mlx4_err(dev, "HCA reported UAR region size of 0x%x bigger than PCI resource 2 size of 0x%llx, aborting\n",
dev->caps.uar_page_size * dev->caps.num_uars,
- (unsigned long long) pci_resource_len(dev->pdev, 2));
+ (unsigned long long)
+ pci_resource_len(dev->persist->pdev, 2));
goto err_mem;
}
struct mlx4_priv *priv = mlx4_priv(dev);
mutex_lock(&priv->cmd.slave_cmd_mutex);
- if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, MLX4_COMM_TIME))
+ if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, MLX4_COMM_CMD_NA_OP,
+ MLX4_COMM_TIME))
mlx4_warn(dev, "Failed to close slave function\n");
mutex_unlock(&priv->cmd.slave_cmd_mutex);
}
if (!dev->caps.bf_reg_size)
return -ENXIO;
- bf_start = pci_resource_start(dev->pdev, 2) +
+ bf_start = pci_resource_start(dev->persist->pdev, 2) +
(dev->caps.num_uars << PAGE_SHIFT);
- bf_len = pci_resource_len(dev->pdev, 2) -
+ bf_len = pci_resource_len(dev->persist->pdev, 2) -
(dev->caps.num_uars << PAGE_SHIFT);
priv->bf_mapping = io_mapping_create_wc(bf_start, bf_len);
if (!priv->bf_mapping)
struct mlx4_priv *priv = mlx4_priv(dev);
priv->clock_mapping =
- ioremap(pci_resource_start(dev->pdev, priv->fw.clock_bar) +
+ ioremap(pci_resource_start(dev->persist->pdev,
+ priv->fw.clock_bar) +
priv->fw.clock_offset, MLX4_CLOCK_SIZE);
if (!priv->clock_mapping)
}
}
+static int mlx4_comm_check_offline(struct mlx4_dev *dev)
+{
+#define COMM_CHAN_OFFLINE_OFFSET 0x09
+
+ u32 comm_flags;
+ u32 offline_bit;
+ unsigned long end;
+ struct mlx4_priv *priv = mlx4_priv(dev);
+
+ end = msecs_to_jiffies(MLX4_COMM_OFFLINE_TIME_OUT) + jiffies;
+ while (time_before(jiffies, end)) {
+ comm_flags = swab32(readl((__iomem char *)priv->mfunc.comm +
+ MLX4_COMM_CHAN_FLAGS));
+ offline_bit = (comm_flags &
+ (u32)(1 << COMM_CHAN_OFFLINE_OFFSET));
+ if (!offline_bit)
+ return 0;
+ /* There are cases in the AER/Reset flow where the PF needs
+ * around 100 msec to load. We therefore sleep for 100 msec
+ * to allow other tasks to make use of that CPU during this
+ * time interval.
+ */
+ msleep(100);
+ }
+ mlx4_err(dev, "Communication channel is offline.\n");
+ return -EIO;
+}
+
+static void mlx4_reset_vf_support(struct mlx4_dev *dev)
+{
+#define COMM_CHAN_RST_OFFSET 0x1e
+
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ u32 comm_rst;
+ u32 comm_caps;
+
+ comm_caps = swab32(readl((__iomem char *)priv->mfunc.comm +
+ MLX4_COMM_CHAN_CAPS));
+ comm_rst = (comm_caps & (u32)(1 << COMM_CHAN_RST_OFFSET));
+
+ if (comm_rst)
+ dev->caps.vf_caps |= MLX4_VF_CAP_FLAG_RESET;
+}
+
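
Both new helpers above probe one bit in a 32-bit comm-channel word that the
device writes big-endian. A generic sketch of that test (register offset and
names are illustrative, not the mlx4 layout):

	#include <linux/io.h>
	#include <linux/swab.h>

	static bool comm_flag_set(void __iomem *comm, unsigned int reg_off,
				  unsigned int bit)
	{
		u32 flags = swab32(readl((__iomem char *)comm + reg_off));

		return flags & (1U << bit);
	}
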
static int mlx4_init_slave(struct mlx4_dev *dev)
{
struct mlx4_priv *priv = mlx4_priv(dev);
mutex_lock(&priv->cmd.slave_cmd_mutex);
priv->cmd.max_cmds = 1;
+ if (mlx4_comm_check_offline(dev)) {
+ mlx4_err(dev, "PF is not responsive, skipping initialization\n");
+ goto err_offline;
+ }
+
+ mlx4_reset_vf_support(dev);
mlx4_warn(dev, "Sending reset\n");
ret_from_reset = mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0,
- MLX4_COMM_TIME);
+ MLX4_COMM_CMD_NA_OP, MLX4_COMM_TIME);
/* if we are in the middle of flr the slave will try
* NUM_OF_RESET_RETRIES times before leaving.*/
if (ret_from_reset) {
mlx4_warn(dev, "Sending vhcr0\n");
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR0, dma >> 48,
- MLX4_COMM_TIME))
+ MLX4_COMM_CMD_NA_OP, MLX4_COMM_TIME))
goto err;
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR1, dma >> 32,
- MLX4_COMM_TIME))
+ MLX4_COMM_CMD_NA_OP, MLX4_COMM_TIME))
goto err;
if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR2, dma >> 16,
- MLX4_COMM_TIME))
+ MLX4_COMM_CMD_NA_OP, MLX4_COMM_TIME))
goto err;
- if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR_EN, dma, MLX4_COMM_TIME))
+ if (mlx4_comm_cmd(dev, MLX4_COMM_CMD_VHCR_EN, dma,
+ MLX4_COMM_CMD_NA_OP, MLX4_COMM_TIME))
goto err;
mutex_unlock(&priv->cmd.slave_cmd_mutex);
return 0;
err:
- mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, 0);
+ mlx4_comm_cmd(dev, MLX4_COMM_CMD_RESET, 0, MLX4_COMM_CMD_NA_OP, 0);
+err_offline:
mutex_unlock(&priv->cmd.slave_cmd_mutex);
return -EIO;
}
if (mlx4_log_num_mgm_entry_size <= 0 &&
dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_FS_EN &&
(!mlx4_is_mfunc(dev) ||
- (dev_cap->fs_max_num_qp_per_entry >= (dev->num_vfs + 1))) &&
+ (dev_cap->fs_max_num_qp_per_entry >=
+ (dev->persist->num_vfs + 1))) &&
choose_log_fs_mgm_entry_size(dev_cap->fs_max_num_qp_per_entry) >=
MLX4_MIN_MGM_LOG_ENTRY_SIZE) {
dev->oper_log_mgm_entry_size =
err = mlx4_dev_cap(dev, &dev_cap);
if (err) {
mlx4_err(dev, "QUERY_DEV_CAP command failed, aborting\n");
- goto err_stop_fw;
+ return err;
}
choose_steering_mode(dev, &dev_cap);
&init_hca);
if ((long long) icm_size < 0) {
err = icm_size;
- goto err_stop_fw;
+ return err;
}
dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1;
err = mlx4_init_icm(dev, &dev_cap, &init_hca, icm_size);
if (err)
- goto err_stop_fw;
+ return err;
err = mlx4_INIT_HCA(dev, &init_hca);
if (err) {
err = mlx4_query_func(dev, &dev_cap);
if (err < 0) {
mlx4_err(dev, "QUERY_FUNC command failed, aborting.\n");
- goto err_stop_fw;
+ goto err_close;
} else if (err & MLX4_QUERY_FUNC_NUM_SYS_EQS) {
dev->caps.num_eqs = dev_cap.max_eqs;
dev->caps.reserved_eqs = dev_cap.reserved_eqs;
if (!mlx4_is_slave(dev))
mlx4_free_icms(dev);
-err_stop_fw:
- if (!mlx4_is_slave(dev)) {
- mlx4_UNMAP_FA(dev);
- mlx4_free_icm(dev, priv->fw.fw_icm, 0);
- }
return err;
}
for (i = 0; i < nreq; ++i)
entries[i].entry = i;
- nreq = pci_enable_msix_range(dev->pdev, entries, 2, nreq);
+ nreq = pci_enable_msix_range(dev->persist->pdev, entries, 2,
+ nreq);
if (nreq < 0) {
kfree(entries);
dev->caps.comp_pool = 0;
for (i = 0; i < 2; ++i)
- priv->eq_table.eq[i].irq = dev->pdev->irq;
+ priv->eq_table.eq[i].irq = dev->persist->pdev->irq;
}
static int mlx4_init_port_info(struct mlx4_dev *dev, int port)
info->port_attr.show = show_port_type;
sysfs_attr_init(&info->port_attr.attr);
- err = device_create_file(&dev->pdev->dev, &info->port_attr);
+ err = device_create_file(&dev->persist->pdev->dev, &info->port_attr);
if (err) {
mlx4_err(dev, "Failed to create file for port %d\n", port);
info->port = -1;
info->port_mtu_attr.show = show_port_ib_mtu;
sysfs_attr_init(&info->port_mtu_attr.attr);
- err = device_create_file(&dev->pdev->dev, &info->port_mtu_attr);
+ err = device_create_file(&dev->persist->pdev->dev,
+ &info->port_mtu_attr);
if (err) {
mlx4_err(dev, "Failed to create mtu file for port %d\n", port);
- device_remove_file(&info->dev->pdev->dev, &info->port_attr);
+ device_remove_file(&info->dev->persist->pdev->dev,
+ &info->port_attr);
info->port = -1;
}
if (info->port < 0)
return;
- device_remove_file(&info->dev->pdev->dev, &info->port_attr);
- device_remove_file(&info->dev->pdev->dev, &info->port_mtu_attr);
+ device_remove_file(&info->dev->persist->pdev->dev, &info->port_attr);
+ device_remove_file(&info->dev->persist->pdev->dev,
+ &info->port_mtu_attr);
}
static int mlx4_init_steering(struct mlx4_dev *dev)
void __iomem *owner;
u32 ret;
- if (pci_channel_offline(dev->pdev))
+ if (pci_channel_offline(dev->persist->pdev))
return -EIO;
- owner = ioremap(pci_resource_start(dev->pdev, 0) + MLX4_OWNER_BASE,
+ owner = ioremap(pci_resource_start(dev->persist->pdev, 0) +
+ MLX4_OWNER_BASE,
MLX4_OWNER_SIZE);
if (!owner) {
mlx4_err(dev, "Failed to obtain ownership bit\n");
{
void __iomem *owner;
- if (pci_channel_offline(dev->pdev))
+ if (pci_channel_offline(dev->persist->pdev))
return;
- owner = ioremap(pci_resource_start(dev->pdev, 0) + MLX4_OWNER_BASE,
+ owner = ioremap(pci_resource_start(dev->persist->pdev, 0) +
+ MLX4_OWNER_BASE,
MLX4_OWNER_SIZE);
if (!owner) {
mlx4_err(dev, "Failed to obtain ownership bit\n");
!!((flags) & MLX4_FLAG_MASTER))
static u64 mlx4_enable_sriov(struct mlx4_dev *dev, struct pci_dev *pdev,
- u8 total_vfs, int existing_vfs)
+ u8 total_vfs, int existing_vfs, int reset_flow)
{
u64 dev_flags = dev->flags;
int err = 0;
+ if (reset_flow) {
+ dev->dev_vfs = kcalloc(total_vfs, sizeof(*dev->dev_vfs),
+ GFP_KERNEL);
+ if (!dev->dev_vfs)
+ goto free_mem;
+ return dev_flags;
+ }
+
atomic_inc(&pf_loading);
if (dev->flags & MLX4_FLAG_SRIOV) {
if (existing_vfs != total_vfs) {
dev_flags |= MLX4_FLAG_SRIOV |
MLX4_FLAG_MASTER;
dev_flags &= ~MLX4_FLAG_SLAVE;
- dev->num_vfs = total_vfs;
+ dev->persist->num_vfs = total_vfs;
}
return dev_flags;
disable_sriov:
atomic_dec(&pf_loading);
- dev->num_vfs = 0;
+free_mem:
+ dev->persist->num_vfs = 0;
kfree(dev->dev_vfs);
return dev_flags & ~MLX4_FLAG_MASTER;
}
}
static int mlx4_load_one(struct pci_dev *pdev, int pci_dev_data,
- int total_vfs, int *nvfs, struct mlx4_priv *priv)
+ int total_vfs, int *nvfs, struct mlx4_priv *priv,
+ int reset_flow)
{
struct mlx4_dev *dev;
unsigned sum = 0;
existing_vfs = pci_num_vf(pdev);
if (existing_vfs)
dev->flags |= MLX4_FLAG_SRIOV;
- dev->num_vfs = total_vfs;
+ dev->persist->num_vfs = total_vfs;
}
}
+ /* On load, remove any previous indication of internal error;
+ * device is up.
+ */
+ dev->persist->state = MLX4_DEVICE_STATE_UP;
+
slave_start:
err = mlx4_cmd_init(dev);
if (err) {
goto err_fw;
if (!(dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_SYS_EQS)) {
- u64 dev_flags = mlx4_enable_sriov(dev, pdev, total_vfs,
- existing_vfs);
+ u64 dev_flags = mlx4_enable_sriov(dev, pdev,
+ total_vfs,
+ existing_vfs,
+ reset_flow);
mlx4_cmd_cleanup(dev, MLX4_CMD_CLEANUP_ALL);
dev->flags = dev_flags;
if (dev->flags & MLX4_FLAG_SRIOV) {
if (!existing_vfs)
pci_disable_sriov(pdev);
- if (mlx4_is_master(dev))
+ if (mlx4_is_master(dev) && !reset_flow)
atomic_dec(&pf_loading);
dev->flags &= ~MLX4_FLAG_SRIOV;
}
}
if (mlx4_is_master(dev) && (dev_cap->flags2 & MLX4_DEV_CAP_FLAG2_SYS_EQS)) {
- u64 dev_flags = mlx4_enable_sriov(dev, pdev, total_vfs, existing_vfs);
+ u64 dev_flags = mlx4_enable_sriov(dev, pdev, total_vfs,
+ existing_vfs, reset_flow);
if ((dev->flags ^ dev_flags) & (MLX4_FLAG_MASTER | MLX4_FLAG_SLAVE)) {
mlx4_cmd_cleanup(dev, MLX4_CMD_CLEANUP_VHCR);
dev->caps.num_ports);
goto err_close;
}
- memcpy(dev->nvfs, nvfs, sizeof(dev->nvfs));
+ memcpy(dev->persist->nvfs, nvfs, sizeof(dev->persist->nvfs));
- for (i = 0; i < sizeof(dev->nvfs)/sizeof(dev->nvfs[0]); i++) {
+ for (i = 0;
+ i < sizeof(dev->persist->nvfs)/
+ sizeof(dev->persist->nvfs[0]); i++) {
unsigned j;
- for (j = 0; j < dev->nvfs[i]; ++sum, ++j) {
+ for (j = 0; j < dev->persist->nvfs[i]; ++sum, ++j) {
dev->dev_vfs[sum].min_port = i < 2 ? i + 1 : 1;
dev->dev_vfs[sum].n_ports = i < 2 ? 1 :
dev->caps.num_ports;
goto err_steer;
mlx4_init_quotas(dev);
+ /* When PF resources are ready, arm its comm channel to enable
+ * getting commands.
+ */
+ if (mlx4_is_master(dev)) {
+ err = mlx4_ARM_COMM_CHANNEL(dev);
+ if (err) {
+ mlx4_err(dev, " Failed to arm comm channel eq: %x\n",
+ err);
+ goto err_steer;
+ }
+ }
for (port = 1; port <= dev->caps.num_ports; port++) {
err = mlx4_init_port_info(dev, port);
priv->removed = 0;
- if (mlx4_is_master(dev) && dev->num_vfs)
+ if (mlx4_is_master(dev) && dev->persist->num_vfs && !reset_flow)
atomic_dec(&pf_loading);
kfree(dev_cap);
mlx4_cmd_cleanup(dev, MLX4_CMD_CLEANUP_ALL);
err_sriov:
- if (dev->flags & MLX4_FLAG_SRIOV && !existing_vfs)
+ if (dev->flags & MLX4_FLAG_SRIOV && !existing_vfs) {
pci_disable_sriov(pdev);
+ dev->flags &= ~MLX4_FLAG_SRIOV;
+ }
- if (mlx4_is_master(dev) && dev->num_vfs)
+ if (mlx4_is_master(dev) && dev->persist->num_vfs && !reset_flow)
atomic_dec(&pf_loading);
kfree(priv->dev.dev_vfs);
}
}
- err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv);
+ err = mlx4_catas_init(&priv->dev);
if (err)
goto err_release_regions;
+
+ err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv, 0);
+ if (err)
+ goto err_catas;
+
return 0;
+err_catas:
+ mlx4_catas_end(&priv->dev);
+
err_release_regions:
pci_release_regions(pdev);
return -ENOMEM;
dev = &priv->dev;
- dev->pdev = pdev;
- pci_set_drvdata(pdev, dev);
+ dev->persist = kzalloc(sizeof(*dev->persist), GFP_KERNEL);
+ if (!dev->persist) {
+ kfree(priv);
+ return -ENOMEM;
+ }
+ dev->persist->pdev = pdev;
+ dev->persist->dev = dev;
+ pci_set_drvdata(pdev, dev->persist);
priv->pci_dev_data = id->driver_data;
+ mutex_init(&dev->persist->device_state_mutex);
+ mutex_init(&dev->persist->interface_state_mutex);
ret = __mlx4_init_one(pdev, id->driver_data, priv);
- if (ret)
+ if (ret) {
+ kfree(dev->persist);
kfree(priv);
+ } else {
+ pci_save_state(pdev);
+ }
return ret;
}
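
The probe path now hangs all state that must survive an internal reset off a
small, separately allocated "persist" object that drvdata points at, while
mlx4_dev/mlx4_priv can be wiped and rebuilt. A hedged outline of the pattern
with hypothetical types (my_persist/my_probe are placeholders, not the mlx4
structures):

	#include <linux/mutex.h>
	#include <linux/pci.h>
	#include <linux/slab.h>

	struct my_persist {
		struct pci_dev *pdev;		/* never reallocated */
		struct mutex interface_state_mutex;
		u8 interface_state;
	};

	static int my_probe(struct pci_dev *pdev)
	{
		struct my_persist *persist = kzalloc(sizeof(*persist), GFP_KERNEL);

		if (!persist)
			return -ENOMEM;
		persist->pdev = pdev;
		mutex_init(&persist->interface_state_mutex);
		pci_set_drvdata(pdev, persist);	/* reset flow re-finds state here */
		return 0;
	}
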
+static void mlx4_clean_dev(struct mlx4_dev *dev)
+{
+ struct mlx4_dev_persistent *persist = dev->persist;
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ unsigned long flags = (dev->flags & RESET_PERSIST_MASK_FLAGS);
+
+ memset(priv, 0, sizeof(*priv));
+ priv->dev.persist = persist;
+ priv->dev.flags = flags;
+}
+
static void mlx4_unload_one(struct pci_dev *pdev)
{
- struct mlx4_dev *dev = pci_get_drvdata(pdev);
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+ struct mlx4_dev *dev = persist->dev;
struct mlx4_priv *priv = mlx4_priv(dev);
int pci_dev_data;
- int p;
- int active_vfs = 0;
+ int p, i;
if (priv->removed)
return;
+ /* save current port types for later use */
+ for (i = 0; i < dev->caps.num_ports; i++) {
+ dev->persist->curr_port_type[i] = dev->caps.port_type[i + 1];
+ dev->persist->curr_port_poss_type[i] = dev->caps.
+ possible_type[i + 1];
+ }
+
pci_dev_data = priv->pci_dev_data;
- /* Disabling SR-IOV is not allowed while there are active vf's */
- if (mlx4_is_master(dev)) {
- active_vfs = mlx4_how_many_lives_vf(dev);
- if (active_vfs) {
- pr_warn("Removing PF when there are active VF's !!\n");
- pr_warn("Will not disable SR-IOV.\n");
- }
- }
mlx4_stop_sense(dev);
mlx4_unregister_device(dev);
if (dev->flags & MLX4_FLAG_MSI_X)
pci_disable_msix(pdev);
- if (dev->flags & MLX4_FLAG_SRIOV && !active_vfs) {
- mlx4_warn(dev, "Disabling SR-IOV\n");
- pci_disable_sriov(pdev);
- dev->flags &= ~MLX4_FLAG_SRIOV;
- dev->num_vfs = 0;
- }
if (!mlx4_is_slave(dev))
mlx4_free_ownership(dev);
kfree(dev->caps.qp1_proxy);
kfree(dev->dev_vfs);
- memset(priv, 0, sizeof(*priv));
+ mlx4_clean_dev(dev);
priv->pci_dev_data = pci_dev_data;
priv->removed = 1;
}
static void mlx4_remove_one(struct pci_dev *pdev)
{
- struct mlx4_dev *dev = pci_get_drvdata(pdev);
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+ struct mlx4_dev *dev = persist->dev;
struct mlx4_priv *priv = mlx4_priv(dev);
+ int active_vfs = 0;
+
+ mutex_lock(&persist->interface_state_mutex);
+ persist->interface_state |= MLX4_INTERFACE_STATE_DELETION;
+ mutex_unlock(&persist->interface_state_mutex);
+
+ /* Disabling SR-IOV is not allowed while there are active vf's */
+ if (mlx4_is_master(dev) && dev->flags & MLX4_FLAG_SRIOV) {
+ active_vfs = mlx4_how_many_lives_vf(dev);
+ if (active_vfs) {
+ pr_warn("Removing PF when there are active VF's !!\n");
+ pr_warn("Will not disable SR-IOV.\n");
+ }
+ }
+
+ /* The device is marked as under deletion; run now without the lock,
+ * letting other tasks terminate.
+ */
+ if (persist->interface_state & MLX4_INTERFACE_STATE_UP)
+ mlx4_unload_one(pdev);
+ else
+ mlx4_info(dev, "%s: interface is down\n", __func__);
+ mlx4_catas_end(dev);
+ if (dev->flags & MLX4_FLAG_SRIOV && !active_vfs) {
+ mlx4_warn(dev, "Disabling SR-IOV\n");
+ pci_disable_sriov(pdev);
+ }
- mlx4_unload_one(pdev);
pci_release_regions(pdev);
pci_disable_device(pdev);
+ kfree(dev->persist);
kfree(priv);
pci_set_drvdata(pdev, NULL);
}
+static int restore_current_port_types(struct mlx4_dev *dev,
+ enum mlx4_port_type *types,
+ enum mlx4_port_type *poss_types)
+{
+ struct mlx4_priv *priv = mlx4_priv(dev);
+ int err, i;
+
+ mlx4_stop_sense(dev);
+
+ mutex_lock(&priv->port_mutex);
+ for (i = 0; i < dev->caps.num_ports; i++)
+ dev->caps.possible_type[i + 1] = poss_types[i];
+ err = mlx4_change_port_types(dev, types);
+ mlx4_start_sense(dev);
+ mutex_unlock(&priv->port_mutex);
+
+ return err;
+}
+
int mlx4_restart_one(struct pci_dev *pdev)
{
- struct mlx4_dev *dev = pci_get_drvdata(pdev);
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+ struct mlx4_dev *dev = persist->dev;
struct mlx4_priv *priv = mlx4_priv(dev);
int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0};
int pci_dev_data, err, total_vfs;
pci_dev_data = priv->pci_dev_data;
- total_vfs = dev->num_vfs;
- memcpy(nvfs, dev->nvfs, sizeof(dev->nvfs));
+ total_vfs = dev->persist->num_vfs;
+ memcpy(nvfs, dev->persist->nvfs, sizeof(dev->persist->nvfs));
mlx4_unload_one(pdev);
- err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv);
+ err = mlx4_load_one(pdev, pci_dev_data, total_vfs, nvfs, priv, 1);
if (err) {
mlx4_err(dev, "%s: ERROR: mlx4_load_one failed, pci_name=%s, err=%d\n",
__func__, pci_name(pdev), err);
return err;
}
+ err = restore_current_port_types(dev, dev->persist->curr_port_type,
+ dev->persist->curr_port_poss_type);
+ if (err)
+ mlx4_err(dev, "could not restore original port types (%d)\n",
+ err);
+
return err;
}
static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev,
pci_channel_state_t state)
{
- mlx4_unload_one(pdev);
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+
+ mlx4_err(persist->dev, "mlx4_pci_err_detected was called\n");
+ mlx4_enter_error_state(persist);
- return state == pci_channel_io_perm_failure ?
- PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET;
+ mutex_lock(&persist->interface_state_mutex);
+ if (persist->interface_state & MLX4_INTERFACE_STATE_UP)
+ mlx4_unload_one(pdev);
+
+ mutex_unlock(&persist->interface_state_mutex);
+ if (state == pci_channel_io_perm_failure)
+ return PCI_ERS_RESULT_DISCONNECT;
+
+ pci_disable_device(pdev);
+ return PCI_ERS_RESULT_NEED_RESET;
}
static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev)
{
- struct mlx4_dev *dev = pci_get_drvdata(pdev);
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+ struct mlx4_dev *dev = persist->dev;
struct mlx4_priv *priv = mlx4_priv(dev);
int ret;
+ int nvfs[MLX4_MAX_PORTS + 1] = {0, 0, 0};
+ int total_vfs;
+
+ mlx4_err(dev, "mlx4_pci_slot_reset was called\n");
+ ret = pci_enable_device(pdev);
+ if (ret) {
+ mlx4_err(dev, "Can not re-enable device, ret=%d\n", ret);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
+
+ pci_set_master(pdev);
+ pci_restore_state(pdev);
+ pci_save_state(pdev);
+
+ total_vfs = dev->persist->num_vfs;
+ memcpy(nvfs, dev->persist->nvfs, sizeof(dev->persist->nvfs));
+
+ mutex_lock(&persist->interface_state_mutex);
+ if (!(persist->interface_state & MLX4_INTERFACE_STATE_UP)) {
+ ret = mlx4_load_one(pdev, priv->pci_dev_data, total_vfs, nvfs,
+ priv, 1);
+ if (ret) {
+ mlx4_err(dev, "%s: mlx4_load_one failed, ret=%d\n",
+ __func__, ret);
+ goto end;
+ }
- ret = __mlx4_init_one(pdev, priv->pci_dev_data, priv);
+ ret = restore_current_port_types(dev, dev->persist->
+ curr_port_type, dev->persist->
+ curr_port_poss_type);
+ if (ret)
+ mlx4_err(dev, "could not restore original port types (%d)\n", ret);
+ }
+end:
+ mutex_unlock(&persist->interface_state_mutex);
return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED;
}
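
Together, the two callbacks above follow the standard PCI AER shape: quiesce
in error_detected(), then re-enable, restore config space and reload in
slot_reset(). A minimal sketch under that assumption; my_load()/my_unload()
are hypothetical driver helpers:

	#include <linux/pci.h>

	static int my_load(struct pci_dev *pdev);	/* hypothetical */
	static void my_unload(struct pci_dev *pdev);	/* hypothetical */

	static pci_ers_result_t my_err_detected(struct pci_dev *pdev,
						pci_channel_state_t state)
	{
		my_unload(pdev);
		if (state == pci_channel_io_perm_failure)
			return PCI_ERS_RESULT_DISCONNECT;
		pci_disable_device(pdev);
		return PCI_ERS_RESULT_NEED_RESET;
	}

	static pci_ers_result_t my_slot_reset(struct pci_dev *pdev)
	{
		if (pci_enable_device(pdev))
			return PCI_ERS_RESULT_DISCONNECT;
		pci_set_master(pdev);
		pci_restore_state(pdev);  /* pairs with pci_save_state() at probe */
		return my_load(pdev) ? PCI_ERS_RESULT_DISCONNECT :
				       PCI_ERS_RESULT_RECOVERED;
	}
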
+static void mlx4_shutdown(struct pci_dev *pdev)
+{
+ struct mlx4_dev_persistent *persist = pci_get_drvdata(pdev);
+
+ mlx4_info(persist->dev, "mlx4_shutdown was called\n");
+ mutex_lock(&persist->interface_state_mutex);
+ if (persist->interface_state & MLX4_INTERFACE_STATE_UP)
+ mlx4_unload_one(pdev);
+ mutex_unlock(&persist->interface_state_mutex);
+}
+
static const struct pci_error_handlers mlx4_err_handler = {
.error_detected = mlx4_pci_err_detected,
.slot_reset = mlx4_pci_slot_reset,
.name = DRV_NAME,
.id_table = mlx4_pci_table,
.probe = mlx4_init_one,
- .shutdown = mlx4_unload_one,
+ .shutdown = mlx4_shutdown,
.remove = mlx4_remove_one,
.err_handler = &mlx4_err_handler,
};
if (mlx4_verify_params())
return -EINVAL;
- mlx4_catas_init();
mlx4_wq = create_singlethread_workqueue("mlx4");
if (!mlx4_wq)
mutex_unlock(&priv->mcg_table.mutex);
mlx4_free_cmd_mailbox(dev, mailbox);
+ if (err && dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ /* If the device is in an error state, return success, as this is a closing command */
+ err = 0;
return err;
}
MLX4_CMD_WRAPPED);
mlx4_free_cmd_mailbox(dev, mailbox);
+ if (err && !attach &&
+ dev->persist->state & MLX4_DEVICE_STATE_INTERNAL_ERROR)
+ err = 0;
return err;
}
MLX4_CLR_INT_SIZE = 0x00008,
MLX4_SLAVE_COMM_BASE = 0x0,
MLX4_COMM_PAGESIZE = 0x1000,
- MLX4_CLOCK_SIZE = 0x00008
+ MLX4_CLOCK_SIZE = 0x00008,
+ MLX4_COMM_CHAN_CAPS = 0x8,
+ MLX4_COMM_CHAN_FLAGS = 0xc
};
enum {
};
#define MLX4_COMM_TIME 10000
+#define MLX4_COMM_OFFLINE_TIME_OUT 30000
+#define MLX4_COMM_CMD_NA_OP 0x0
+
enum {
MLX4_COMM_CMD_RESET,
MLX4_COMM_CMD_VHCR0,
#define mlx4_dbg(mdev, format, ...) \
do { \
if (mlx4_debug_level) \
- dev_printk(KERN_DEBUG, &(mdev)->pdev->dev, format, \
+ dev_printk(KERN_DEBUG, \
+ &(mdev)->persist->pdev->dev, format, \
##__VA_ARGS__); \
} while (0)
#define mlx4_err(mdev, format, ...) \
- dev_err(&(mdev)->pdev->dev, format, ##__VA_ARGS__)
+ dev_err(&(mdev)->persist->pdev->dev, format, ##__VA_ARGS__)
#define mlx4_info(mdev, format, ...) \
- dev_info(&(mdev)->pdev->dev, format, ##__VA_ARGS__)
+ dev_info(&(mdev)->persist->pdev->dev, format, ##__VA_ARGS__)
#define mlx4_warn(mdev, format, ...) \
- dev_warn(&(mdev)->pdev->dev, format, ##__VA_ARGS__)
+ dev_warn(&(mdev)->persist->pdev->dev, format, ##__VA_ARGS__)
extern int mlx4_log_num_mgm_entry_size;
extern int log_mtts_per_seg;
+extern int mlx4_internal_err_reset;
#define MLX4_MAX_NUM_SLAVES (MLX4_MAX_NUM_PF + MLX4_MAX_NUM_VF)
#define ALL_SLAVES 0xff
struct mlx4_cmd {
struct pci_pool *pool;
void __iomem *hcr;
- struct mutex hcr_mutex;
struct mutex slave_cmd_mutex;
struct semaphore poll_sem;
struct semaphore event_sem;
void mlx4_start_catas_poll(struct mlx4_dev *dev);
void mlx4_stop_catas_poll(struct mlx4_dev *dev);
-void mlx4_catas_init(void);
+int mlx4_catas_init(struct mlx4_dev *dev);
+void mlx4_catas_end(struct mlx4_dev *dev);
int mlx4_restart_one(struct pci_dev *pdev);
int mlx4_register_device(struct mlx4_dev *dev);
void mlx4_unregister_device(struct mlx4_dev *dev);
int mlx4_cmd_init(struct mlx4_dev *dev);
void mlx4_cmd_cleanup(struct mlx4_dev *dev, int cleanup_mask);
int mlx4_multi_func_init(struct mlx4_dev *dev);
+int mlx4_ARM_COMM_CHANNEL(struct mlx4_dev *dev);
void mlx4_multi_func_cleanup(struct mlx4_dev *dev);
void mlx4_cmd_event(struct mlx4_dev *dev, u16 token, u8 status, u64 out_param);
int mlx4_cmd_use_events(struct mlx4_dev *dev);
void mlx4_cmd_use_polling(struct mlx4_dev *dev);
int mlx4_comm_cmd(struct mlx4_dev *dev, u8 cmd, u16 param,
- unsigned long timeout);
+ u16 op, unsigned long timeout);
void mlx4_cq_tasklet_cb(unsigned long data);
void mlx4_cq_completion(struct mlx4_dev *dev, u32 cqn);
void mlx4_srq_event(struct mlx4_dev *dev, u32 srqn, int event_type);
-void mlx4_handle_catas_err(struct mlx4_dev *dev);
+void mlx4_enter_error_state(struct mlx4_dev_persistent *persist);
int mlx4_SENSE_PORT(struct mlx4_dev *dev, int port,
enum mlx4_port_type *type);
void mlx4_mr_rereg_mem_cleanup(struct mlx4_dev *dev, struct mlx4_mr *mr)
{
mlx4_mtt_cleanup(dev, &mr->mtt);
+ mr->mtt.order = -1;
}
EXPORT_SYMBOL_GPL(mlx4_mr_rereg_mem_cleanup);
{
int err;
- mpt_entry->start = cpu_to_be64(iova);
- mpt_entry->length = cpu_to_be64(size);
- mpt_entry->entity_size = cpu_to_be32(page_shift);
-
err = mlx4_mtt_init(dev, npages, page_shift, &mr->mtt);
if (err)
return err;
+ mpt_entry->start = cpu_to_be64(mr->iova);
+ mpt_entry->length = cpu_to_be64(mr->size);
+ mpt_entry->entity_size = cpu_to_be32(mr->mtt.page_shift);
+
mpt_entry->pd_flags &= cpu_to_be32(MLX4_MPT_PD_MASK |
MLX4_MPT_PD_FLAG_EN_INV);
mpt_entry->flags &= cpu_to_be32(MLX4_MPT_FLAG_FREE |
if (!mtts)
return -ENOMEM;
- dma_sync_single_for_cpu(&dev->pdev->dev, dma_handle,
+ dma_sync_single_for_cpu(&dev->persist->pdev->dev, dma_handle,
npages * sizeof (u64), DMA_TO_DEVICE);
for (i = 0; i < npages; ++i)
mtts[i] = cpu_to_be64(page_list[i] | MLX4_MTT_FLAG_PRESENT);
- dma_sync_single_for_device(&dev->pdev->dev, dma_handle,
+ dma_sync_single_for_device(&dev->persist->pdev->dev, dma_handle,
npages * sizeof (u64), DMA_TO_DEVICE);
return 0;
/* Make sure MPT status is visible before writing MTT entries */
wmb();
- dma_sync_single_for_cpu(&dev->pdev->dev, fmr->dma_handle,
+ dma_sync_single_for_cpu(&dev->persist->pdev->dev, fmr->dma_handle,
npages * sizeof(u64), DMA_TO_DEVICE);
for (i = 0; i < npages; ++i)
fmr->mtts[i] = cpu_to_be64(page_list[i] | MLX4_MTT_FLAG_PRESENT);
- dma_sync_single_for_device(&dev->pdev->dev, fmr->dma_handle,
+ dma_sync_single_for_device(&dev->persist->pdev->dev, fmr->dma_handle,
npages * sizeof(u64), DMA_TO_DEVICE);
fmr->mpt->key = cpu_to_be32(key);
return -ENOMEM;
if (mlx4_is_slave(dev))
- offset = uar->index % ((int) pci_resource_len(dev->pdev, 2) /
+ offset = uar->index % ((int)pci_resource_len(dev->persist->pdev,
+ 2) /
dev->caps.uar_page_size);
else
offset = uar->index;
- uar->pfn = (pci_resource_start(dev->pdev, 2) >> PAGE_SHIFT) + offset;
+ uar->pfn = (pci_resource_start(dev->persist->pdev, 2) >> PAGE_SHIFT)
+ + offset;
uar->map = NULL;
return 0;
}
slaves_pport_actv = mlx4_phys_to_slaves_pport_actv(
dev, &exclusive_ports);
slave_gid -= bitmap_weight(slaves_pport_actv.slaves,
- dev->num_vfs + 1);
+ dev->persist->num_vfs + 1);
}
- vfs = bitmap_weight(slaves_pport.slaves, dev->num_vfs + 1) - 1;
+ vfs = bitmap_weight(slaves_pport.slaves, dev->persist->num_vfs + 1) - 1;
if (slave_gid <= ((MLX4_ROCE_MAX_GIDS - MLX4_ROCE_PF_GIDS) % vfs))
return ((MLX4_ROCE_MAX_GIDS - MLX4_ROCE_PF_GIDS) / vfs) + 1;
return (MLX4_ROCE_MAX_GIDS - MLX4_ROCE_PF_GIDS) / vfs;
slaves_pport_actv = mlx4_phys_to_slaves_pport_actv(
dev, &exclusive_ports);
slave_gid -= bitmap_weight(slaves_pport_actv.slaves,
- dev->num_vfs + 1);
+ dev->persist->num_vfs + 1);
}
gids = MLX4_ROCE_MAX_GIDS - MLX4_ROCE_PF_GIDS;
- vfs = bitmap_weight(slaves_pport.slaves, dev->num_vfs + 1) - 1;
+ vfs = bitmap_weight(slaves_pport.slaves, dev->persist->num_vfs + 1) - 1;
if (slave_gid <= gids % vfs)
return MLX4_ROCE_PF_GIDS + ((gids / vfs) + 1) * (slave_gid - 1);
int num_eth_ports, err;
int i;
- if (slave < 0 || slave > dev->num_vfs)
+ if (slave < 0 || slave > dev->persist->num_vfs)
return;
actv_ports = mlx4_get_active_ports(dev, slave);
return -EINVAL;
slaves_pport = mlx4_phys_to_slaves_pport(dev, port);
- num_vfs = bitmap_weight(slaves_pport.slaves, dev->num_vfs + 1) - 1;
+ num_vfs = bitmap_weight(slaves_pport.slaves,
+ dev->persist->num_vfs + 1) - 1;
for (i = 0; i < MLX4_ROCE_MAX_GIDS; i++) {
if (!memcmp(priv->port[port].gid_table.roce_gids[i].raw, gid,
dev, &exclusive_ports);
num_vfs_before += bitmap_weight(
slaves_pport_actv.slaves,
- dev->num_vfs + 1);
+ dev->persist->num_vfs + 1);
}
/* candidate_slave_gid isn't necessarily the correct slave, but
dev, &exclusive_ports);
slave_gid += bitmap_weight(
slaves_pport_actv.slaves,
- dev->num_vfs + 1);
+ dev->persist->num_vfs + 1);
}
}
*slave_id = slave_gid;
goto out;
}
- pcie_cap = pci_pcie_cap(dev->pdev);
+ pcie_cap = pci_pcie_cap(dev->persist->pdev);
for (i = 0; i < 64; ++i) {
if (i == 22 || i == 23)
continue;
- if (pci_read_config_dword(dev->pdev, i * 4, hca_header + i)) {
+ if (pci_read_config_dword(dev->persist->pdev, i * 4,
+ hca_header + i)) {
err = -ENODEV;
mlx4_err(dev, "Couldn't save HCA PCI header, aborting\n");
goto out;
}
}
- reset = ioremap(pci_resource_start(dev->pdev, 0) + MLX4_RESET_BASE,
+ reset = ioremap(pci_resource_start(dev->persist->pdev, 0) +
+ MLX4_RESET_BASE,
MLX4_RESET_SIZE);
if (!reset) {
err = -ENOMEM;
end = jiffies + MLX4_RESET_TIMEOUT_JIFFIES;
do {
- if (!pci_read_config_word(dev->pdev, PCI_VENDOR_ID, &vendor) &&
- vendor != 0xffff)
+ if (!pci_read_config_word(dev->persist->pdev, PCI_VENDOR_ID,
+ &vendor) && vendor != 0xffff)
break;
msleep(1);
/* Now restore the PCI headers */
if (pcie_cap) {
devctl = hca_header[(pcie_cap + PCI_EXP_DEVCTL) / 4];
- if (pcie_capability_write_word(dev->pdev, PCI_EXP_DEVCTL,
+ if (pcie_capability_write_word(dev->persist->pdev,
+ PCI_EXP_DEVCTL,
devctl)) {
err = -ENODEV;
mlx4_err(dev, "Couldn't restore HCA PCI Express Device Control register, aborting\n");
goto out;
}
linkctl = hca_header[(pcie_cap + PCI_EXP_LNKCTL) / 4];
- if (pcie_capability_write_word(dev->pdev, PCI_EXP_LNKCTL,
+ if (pcie_capability_write_word(dev->persist->pdev,
+ PCI_EXP_LNKCTL,
linkctl)) {
err = -ENODEV;
mlx4_err(dev, "Couldn't restore HCA PCI Express Link control register, aborting\n");
if (i * 4 == PCI_COMMAND)
continue;
- if (pci_write_config_dword(dev->pdev, i * 4, hca_header[i])) {
+ if (pci_write_config_dword(dev->persist->pdev, i * 4,
+ hca_header[i])) {
err = -ENODEV;
mlx4_err(dev, "Couldn't restore HCA reg %x, aborting\n",
i);
}
}
- if (pci_write_config_dword(dev->pdev, PCI_COMMAND,
+ if (pci_write_config_dword(dev->persist->pdev, PCI_COMMAND,
hca_header[PCI_COMMAND / 4])) {
err = -ENODEV;
mlx4_err(dev, "Couldn't restore HCA COMMAND, aborting\n");
int allocated, free, reserved, guaranteed, from_free;
int from_rsvd;
- if (slave > dev->num_vfs)
+ if (slave > dev->persist->num_vfs)
return -EINVAL;
spin_lock(&res_alloc->alloc_lock);
allocated = (port > 0) ?
- res_alloc->allocated[(port - 1) * (dev->num_vfs + 1) + slave] :
+ res_alloc->allocated[(port - 1) *
+ (dev->persist->num_vfs + 1) + slave] :
res_alloc->allocated[slave];
free = (port > 0) ? res_alloc->res_port_free[port - 1] :
res_alloc->res_free;
if (!err) {
/* grant the request */
if (port > 0) {
- res_alloc->allocated[(port - 1) * (dev->num_vfs + 1) + slave] += count;
+ res_alloc->allocated[(port - 1) *
+ (dev->persist->num_vfs + 1) + slave] += count;
res_alloc->res_port_free[port - 1] -= count;
res_alloc->res_port_rsvd[port - 1] -= from_rsvd;
} else {
&priv->mfunc.master.res_tracker.res_alloc[res_type];
int allocated, guaranteed, from_rsvd;
- if (slave > dev->num_vfs)
+ if (slave > dev->persist->num_vfs)
return;
spin_lock(&res_alloc->alloc_lock);
allocated = (port > 0) ?
- res_alloc->allocated[(port - 1) * (dev->num_vfs + 1) + slave] :
+ res_alloc->allocated[(port - 1) *
+ (dev->persist->num_vfs + 1) + slave] :
res_alloc->allocated[slave];
guaranteed = res_alloc->guaranteed[slave];
}
if (port > 0) {
- res_alloc->allocated[(port - 1) * (dev->num_vfs + 1) + slave] -= count;
+ res_alloc->allocated[(port - 1) *
+ (dev->persist->num_vfs + 1) + slave] -= count;
res_alloc->res_port_free[port - 1] += count;
res_alloc->res_port_rsvd[port - 1] += from_rsvd;
} else {
enum mlx4_resource res_type,
int vf, int num_instances)
{
- res_alloc->guaranteed[vf] = num_instances / (2 * (dev->num_vfs + 1));
+ res_alloc->guaranteed[vf] = num_instances /
+ (2 * (dev->persist->num_vfs + 1));
res_alloc->quota[vf] = (num_instances / 2) + res_alloc->guaranteed[vf];
if (vf == mlx4_master_func_num(dev)) {
res_alloc->res_free = num_instances;
for (i = 0; i < MLX4_NUM_OF_RESOURCE_TYPE; i++) {
struct resource_allocator *res_alloc =
&priv->mfunc.master.res_tracker.res_alloc[i];
- res_alloc->quota = kmalloc((dev->num_vfs + 1) * sizeof(int), GFP_KERNEL);
- res_alloc->guaranteed = kmalloc((dev->num_vfs + 1) * sizeof(int), GFP_KERNEL);
+ res_alloc->quota = kmalloc((dev->persist->num_vfs + 1) *
+ sizeof(int), GFP_KERNEL);
+ res_alloc->guaranteed = kmalloc((dev->persist->num_vfs + 1) *
+ sizeof(int), GFP_KERNEL);
if (i == RES_MAC || i == RES_VLAN)
res_alloc->allocated = kzalloc(MLX4_MAX_PORTS *
- (dev->num_vfs + 1) * sizeof(int),
- GFP_KERNEL);
+ (dev->persist->num_vfs
+ + 1) *
+ sizeof(int), GFP_KERNEL);
else
- res_alloc->allocated = kzalloc((dev->num_vfs + 1) * sizeof(int), GFP_KERNEL);
+ res_alloc->allocated = kzalloc((dev->persist->
+ num_vfs + 1) *
+ sizeof(int), GFP_KERNEL);
if (!res_alloc->quota || !res_alloc->guaranteed ||
!res_alloc->allocated)
goto no_mem_err;
spin_lock_init(&res_alloc->alloc_lock);
- for (t = 0; t < dev->num_vfs + 1; t++) {
+ for (t = 0; t < dev->persist->num_vfs + 1; t++) {
struct mlx4_active_ports actv_ports =
mlx4_get_active_ports(dev, t);
switch (i) {
param = qp->pid;
break;
case QP_STATE:
- param = (u64)mlx5_qp_state_str(be32_to_cpu(ctx->flags) >> 28);
+ param = (unsigned long)mlx5_qp_state_str(be32_to_cpu(ctx->flags) >> 28);
*is_str = 1;
break;
case QP_XPORT:
- param = (u64)mlx5_qp_type_str((be32_to_cpu(ctx->flags) >> 16) & 0xff);
+ param = (unsigned long)mlx5_qp_type_str((be32_to_cpu(ctx->flags) >> 16) & 0xff);
*is_str = 1;
break;
case QP_MTU:
if (is_str)
- ret = snprintf(tbuf, sizeof(tbuf), "%s\n", (const char *)field);
+ ret = snprintf(tbuf, sizeof(tbuf), "%s\n", (const char *)(unsigned long)field);
else
ret = snprintf(tbuf, sizeof(tbuf), "0x%llx\n", field);
(void)pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(64));
mgp->cmd = dma_alloc_coherent(&pdev->dev, sizeof(*mgp->cmd),
&mgp->cmd_bus, GFP_KERNEL);
- if (mgp->cmd == NULL)
+ if (!mgp->cmd) {
+ status = -ENOMEM;
goto abort_with_enabled;
+ }
mgp->board_span = pci_resource_len(pdev, 0);
mgp->iomem_base = pci_resource_start(pdev, 0);
}
#ifdef NS83820_VLAN_ACCEL_SUPPORT
- if(vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
/* fetch the vlan tag info out of the
* ancillary data if the vlan code
* is using hw vlan acceleration
*/
- short tag = vlan_tx_tag_get(skb);
+ short tag = skb_vlan_tag_get(skb);
extsts |= (EXTSTS_VPKT | htons(tag));
}
#endif
}
queue = 0;
- if (vlan_tx_tag_present(skb))
- vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb))
+ vlan_tag = skb_vlan_tag_get(skb);
if (sp->config.tx_steering_type == TX_DEFAULT_STEERING) {
if (skb->protocol == htons(ETH_P_IP)) {
struct iphdr *ip;
dev->name, __func__, __LINE__,
fifo_hw, dtr, dtr_priv);
- if (vlan_tx_tag_present(skb)) {
- u16 vlan_tag = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ u16 vlan_tag = skb_vlan_tag_get(skb);
vxge_hw_fifo_txdl_vlan_set(dtr, vlan_tag);
}
NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0;
/* vlan tag */
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
start_tx->txvlan = cpu_to_le32(NV_TX3_VLAN_TAG_PRESENT |
- vlan_tx_tag_get(skb));
+ skb_vlan_tag_get(skb));
else
start_tx->txvlan = 0;
protocol = vh->h_vlan_encapsulated_proto;
flags = FLAGS_VLAN_TAGGED;
- } else if (vlan_tx_tag_present(skb)) {
+ } else if (skb_vlan_tag_present(skb)) {
flags = FLAGS_VLAN_OOB;
- vid = vlan_tx_tag_get(skb);
+ vid = skb_vlan_tag_get(skb);
netxen_set_tx_vlan_tci(first_desc, vid);
vlan_oob = 1;
}
{
int i = 0;
- while (i < 10) {
- if (i)
- ssleep(1);
-
+ do {
if (ql_sem_lock(qdev,
QL_DRVR_SEM_MASK,
(QL_RESOURCE_BITS_BASE_CODE | (qdev->mac_index)
"driver lock acquired\n");
return 1;
}
- }
+ ssleep(1);
+ } while (++i < 10);
netdev_err(qdev->ndev, "Timed out waiting for driver lock...\n");
return 0;
#include <net/ip.h>
#include <linux/ipv6.h>
#include <net/checksum.h>
+#include <linux/printk.h>
#include "qlcnic.h"
if (protocol == ETH_P_8021Q) {
vh = (struct vlan_ethhdr *)skb->data;
vlan_id = ntohs(vh->h_vlan_TCI);
- } else if (vlan_tx_tag_present(skb)) {
- vlan_id = vlan_tx_tag_get(skb);
+ } else if (skb_vlan_tag_present(skb)) {
+ vlan_id = skb_vlan_tag_get(skb);
}
}
flags = QLCNIC_FLAGS_VLAN_TAGGED;
vlan_tci = ntohs(vh->h_vlan_TCI);
protocol = ntohs(vh->h_vlan_encapsulated_proto);
- } else if (vlan_tx_tag_present(skb)) {
+ } else if (skb_vlan_tag_present(skb)) {
flags = QLCNIC_FLAGS_VLAN_OOB;
- vlan_tci = vlan_tx_tag_get(skb);
+ vlan_tci = skb_vlan_tag_get(skb);
}
if (unlikely(adapter->tx_pvid)) {
if (vlan_tci && !(adapter->flags & QLCNIC_TAGGING_ENABLED))
static void dump_skb(struct sk_buff *skb, struct qlcnic_adapter *adapter)
{
- int i;
- unsigned char *data = skb->data;
-
- pr_info(KERN_INFO "\n");
- for (i = 0; i < skb->len; i++) {
- QLCDB(adapter, DRV, "%02x ", data[i]);
- if ((i & 0x0f) == 8)
- pr_info(KERN_INFO "\n");
+ if (adapter->ahw->msg_enable & NETIF_MSG_DRV) {
+ char prefix[30];
+
+ scnprintf(prefix, sizeof(prefix), "%s: %s: ",
+ dev_name(&adapter->pdev->dev), __func__);
+
+ print_hex_dump_debug(prefix, DUMP_PREFIX_NONE, 16, 1,
+ skb->data, skb->len, true);
}
}
} else {
dev_err(&pdev->dev,
"%s: failed. Please Reboot\n", __func__);
+ err = -ENODEV;
goto err_out_free_hw;
}
mac_iocb_ptr->frame_len = cpu_to_le16((u16) skb->len);
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
netif_printk(qdev, tx_queued, KERN_DEBUG, qdev->ndev,
- "Adding a vlan tag %d.\n", vlan_tx_tag_get(skb));
+ "Adding a vlan tag %d.\n", skb_vlan_tag_get(skb));
mac_iocb_ptr->flags3 |= OB_MAC_IOCB_V;
- mac_iocb_ptr->vlan_tci = cpu_to_le16(vlan_tx_tag_get(skb));
+ mac_iocb_ptr->vlan_tci = cpu_to_le16(skb_vlan_tag_get(skb));
}
tso = ql_tso(skb, (struct ob_mac_tso_iocb_req *)mac_iocb_ptr);
if (tso < 0) {
static inline u32 cp_tx_vlan_tag(struct sk_buff *skb)
{
- return vlan_tx_tag_present(skb) ?
- TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+ return skb_vlan_tag_present(skb) ?
+ TxVlanTag | swab16(skb_vlan_tag_get(skb)) : 0x00;
}
static void unwind_tx_frag_mapping(struct cp_private *cp, struct sk_buff *skb,
static inline u32 rtl8169_tx_vlan_tag(struct sk_buff *skb)
{
- return (vlan_tx_tag_present(skb)) ?
- TxVlanTag | swab16(vlan_tx_tag_get(skb)) : 0x00;
+ return (skb_vlan_tag_present(skb)) ?
+ TxVlanTag | swab16(skb_vlan_tag_get(skb)) : 0x00;
}
static void rtl8169_rx_vlan_tag(struct RxDesc *desc, struct sk_buff *skb)
u32 status, len;
u32 opts[2];
int frags;
+ bool stop_queue;
if (unlikely(!TX_FRAGS_READY_FOR(tp, skb_shinfo(skb)->nr_frags))) {
netif_err(tp, drv, dev, "BUG! Tx Ring full when queue awake!\n");
tp->cur_tx += frags + 1;
- RTL_W8(TxPoll, NPQ);
+ stop_queue = !TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS);
- mmiowb();
+ if (!skb->xmit_more || stop_queue ||
+ netif_xmit_stopped(netdev_get_tx_queue(dev, 0))) {
+ RTL_W8(TxPoll, NPQ);
+
+ mmiowb();
+ }
- if (!TX_FRAGS_READY_FOR(tp, MAX_SKB_FRAGS)) {
+ if (stop_queue) {
/* Avoid wrongly optimistic queue wake-up: rtl_tx thread must
* not miss a ring update when it notices a stopped queue.
*/
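
The r8169 hunk above is the generic xmit_more doorbell-batching pattern: kick
the hardware only when the stack has flushed its batch or the queue is about
to stop. A sketch with hypothetical ring helpers (queue_desc()/ring_full()/
kick_hw() are placeholders):

	#include <linux/netdevice.h>

	static void queue_desc(struct net_device *dev, struct sk_buff *skb);
	static bool ring_full(struct net_device *dev);
	static void kick_hw(struct net_device *dev);

	static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		bool stop_queue;

		queue_desc(dev, skb);
		stop_queue = ring_full(dev);

		if (!skb->xmit_more || stop_queue ||
		    netif_xmit_stopped(netdev_get_tx_queue(dev, 0)))
			kick_hw(dev);	/* one doorbell for the whole batch */

		if (stop_queue)
			netif_stop_queue(dev);
		return NETDEV_TX_OK;
	}
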
.eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE |
EESR_ECI,
+ .fdr_value = 0x00000f0f,
.apr = 1,
.mpr = 1,
.eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE |
EESR_ECI,
+ .fdr_value = 0x00000f0f,
.apr = 1,
.mpr = 1,
EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE |
EESR_ECI,
+ .trscer_err_mask = DESC_I_RINT8,
+
.apr = 1,
.mpr = 1,
.tpauser = 1,
static void sh_eth_chip_reset_giga(struct net_device *ndev)
{
int i;
- unsigned long mahr[2], malr[2];
+ u32 mahr[2], malr[2];
/* save MAHR and MALR */
for (i = 0; i < 2; i++) {
if (!cd->eesr_err_check)
cd->eesr_err_check = DEFAULT_EESR_ERR_CHECK;
+
+ if (!cd->trscer_err_mask)
+ cd->trscer_err_mask = DEFAULT_TRSCER_ERR_MASK;
}
static int sh_eth_check_reset(struct net_device *ndev)
}
}
-static unsigned long sh_eth_get_edtrr_trns(struct sh_eth_private *mdp)
+static u32 sh_eth_get_edtrr_trns(struct sh_eth_private *mdp)
{
if (sh_eth_is_gether(mdp) || sh_eth_is_rz_fast_ether(mdp))
return EDTRR_TRNS_GETHER;
/* Frame recv control (enable multiple-packets per rx irq) */
sh_eth_write(ndev, RMCR_RNC, RMCR);
- sh_eth_write(ndev, DESC_I_RINT8 | DESC_I_RINT5 | DESC_I_TINT2, TRSCER);
+ sh_eth_write(ndev, mdp->cd->trscer_err_mask, TRSCER);
if (mdp->cd->bculr)
sh_eth_write(ndev, 0x800, BCULR); /* Burst sycle set */
}
/* error control function */
-static void sh_eth_error(struct net_device *ndev, int intr_status)
+static void sh_eth_error(struct net_device *ndev, u32 intr_status)
{
struct sh_eth_private *mdp = netdev_priv(ndev);
u32 felic_stat;
struct sh_eth_private *mdp = netdev_priv(ndev);
struct sh_eth_cpu_data *cd = mdp->cd;
irqreturn_t ret = IRQ_NONE;
- unsigned long intr_status, intr_enable;
+ u32 intr_status, intr_enable;
spin_lock(&mdp->lock);
__napi_schedule(&mdp->napi);
} else {
netdev_warn(ndev,
- "ignoring interrupt, status 0x%08lx, mask 0x%08lx.\n",
+ "ignoring interrupt, status 0x%08x, mask 0x%08x.\n",
intr_status, intr_enable);
}
}
napi);
struct net_device *ndev = napi->dev;
int quota = budget;
- unsigned long intr_status;
+ u32 intr_status;
for (;;) {
intr_status = sh_eth_read(ndev, EESR);
netif_err(mdp, timer, ndev,
"transmit timed out, status %8.8x, resetting...\n",
- (int)sh_eth_read(ndev, EESR));
+ sh_eth_read(ndev, EESR));
/* tx_errors count up */
ndev->stats.tx_errors++;
}
#ifdef CONFIG_PM
+#ifdef CONFIG_PM_SLEEP
+static int sh_eth_suspend(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (netif_running(ndev)) {
+ netif_device_detach(ndev);
+ ret = sh_eth_close(ndev);
+ }
+
+ return ret;
+}
+
+static int sh_eth_resume(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ int ret = 0;
+
+ if (netif_running(ndev)) {
+ ret = sh_eth_open(ndev);
+ if (ret < 0)
+ return ret;
+ netif_device_attach(ndev);
+ }
+
+ return ret;
+}
+#endif
+
static int sh_eth_runtime_nop(struct device *dev)
{
/* Runtime PM callback shared between ->runtime_suspend()
}
static const struct dev_pm_ops sh_eth_dev_pm_ops = {
- .runtime_suspend = sh_eth_runtime_nop,
- .runtime_resume = sh_eth_runtime_nop,
+ SET_SYSTEM_SLEEP_PM_OPS(sh_eth_suspend, sh_eth_resume)
+ SET_RUNTIME_PM_OPS(sh_eth_runtime_nop, sh_eth_runtime_nop, NULL)
};
#define SH_ETH_PM_OPS (&sh_eth_dev_pm_ops)
#else
DESC_I_RINT1 = 0x0001,
};
+#define DEFAULT_TRSCER_ERR_MASK (DESC_I_RINT8 | DESC_I_RINT5 | DESC_I_TINT2)
+
/* RPADIR */
enum RPADIR_BIT {
RPADIR_PADS1 = 0x20000, RPADIR_PADS0 = 0x10000,
/* mandatory initialize value */
int register_type;
- unsigned long eesipr_value;
+ u32 eesipr_value;
/* optional initialize value */
- unsigned long ecsr_value;
- unsigned long ecsipr_value;
- unsigned long fdr_value;
- unsigned long fcftr_value;
- unsigned long rpadir_value;
+ u32 ecsr_value;
+ u32 ecsipr_value;
+ u32 fdr_value;
+ u32 fcftr_value;
+ u32 rpadir_value;
/* interrupt checking mask */
- unsigned long tx_check;
- unsigned long eesr_err_check;
+ u32 tx_check;
+ u32 eesr_err_check;
+
+ /* Error mask */
+ u32 trscer_err_mask;
/* hardware features */
unsigned long irq_flags; /* IRQ configuration flags */
#endif
}
-static inline void sh_eth_write(struct net_device *ndev, unsigned long data,
+static inline void sh_eth_write(struct net_device *ndev, u32 data,
int enum_index)
{
struct sh_eth_private *mdp = netdev_priv(ndev);
iowrite32(data, mdp->addr + mdp->reg_offset[enum_index]);
}
-static inline unsigned long sh_eth_read(struct net_device *ndev,
- int enum_index)
+static inline u32 sh_eth_read(struct net_device *ndev, int enum_index)
{
struct sh_eth_private *mdp = netdev_priv(ndev);
return mdp->tsu_addr + mdp->reg_offset[enum_index];
}
-static inline void sh_eth_tsu_write(struct sh_eth_private *mdp,
- unsigned long data, int enum_index)
+static inline void sh_eth_tsu_write(struct sh_eth_private *mdp, u32 data,
+ int enum_index)
{
iowrite32(data, mdp->tsu_addr + mdp->reg_offset[enum_index]);
}
-static inline unsigned long sh_eth_tsu_read(struct sh_eth_private *mdp,
- int enum_index)
+static inline u32 sh_eth_tsu_read(struct sh_eth_private *mdp, int enum_index)
{
return ioread32(mdp->tsu_addr + mdp->reg_offset[enum_index]);
}
static void *rocker_desc_cookie_ptr_get(struct rocker_desc_info *desc_info)
{
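+ /* The cookie field is 64 bits wide; casting through uintptr_t keeps
+ * the conversion to and from a pointer well-defined on 32-bit hosts.
+ */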
- return (void *) desc_info->desc->cookie;
+ return (void *)(uintptr_t)desc_info->desc->cookie;
}
static void rocker_desc_cookie_ptr_set(struct rocker_desc_info *desc_info,
void *ptr)
{
- desc_info->desc->cookie = (long) ptr;
+ desc_info->desc->cookie = (uintptr_t)ptr;
}
static struct rocker_desc_info *
container_of(work, struct rocker_fdb_learn_work, work);
bool removing = (lw->flags & ROCKER_OP_FLAG_REMOVE);
bool learned = (lw->flags & ROCKER_OP_FLAG_LEARNED);
+ struct netdev_switch_notifier_fdb_info info;
+
+ info.addr = lw->addr;
+ info.vid = lw->vid;
if (learned && removing)
- br_fdb_external_learn_del(lw->dev, lw->addr, lw->vid);
+ call_netdev_switch_notifiers(NETDEV_SWITCH_FDB_DEL,
+ lw->dev, &info.info);
else if (learned && !removing)
- br_fdb_external_learn_add(lw->dev, lw->addr, lw->vid);
+ call_netdev_switch_notifiers(NETDEV_SWITCH_FDB_ADD,
+ lw->dev, &info.info);
kfree(work);
}
rocker_tlv_nest_cancel(desc_info, frags);
out:
dev_kfree_skb(skb);
+ dev->stats.tx_dropped++;
+
return NETDEV_TX_OK;
}
if (vid && nla_put_u16(skb, NDA_VLAN, vid))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
/* Cleanup tx descriptors */
while ((desc_info = rocker_desc_tail_get(&rocker_port->tx_ring))) {
+ struct sk_buff *skb;
+
err = rocker_desc_err(desc_info);
if (err && net_ratelimit())
netdev_err(rocker_port->dev, "tx desc received with err %d\n",
err);
rocker_tx_desc_frags_unmap(rocker_port, desc_info);
- dev_kfree_skb_any(rocker_desc_cookie_ptr_get(desc_info));
+
+ skb = rocker_desc_cookie_ptr_get(desc_info);
+ if (err == 0) {
+ rocker_port->dev->stats.tx_packets++;
+ rocker_port->dev->stats.tx_bytes += skb->len;
+ } else {
+ rocker_port->dev->stats.tx_errors++;
+ }
+
+ dev_kfree_skb_any(skb);
credits++;
}
rx_len = rocker_tlv_get_u16(attrs[ROCKER_TLV_RX_FRAG_LEN]);
skb_put(skb, rx_len);
skb->protocol = eth_type_trans(skb, rocker_port->dev);
+
+ rocker_port->dev->stats.rx_packets++;
+ rocker_port->dev->stats.rx_bytes += skb->len;
+
netif_receive_skb(skb);
return rocker_dma_rx_ring_skb_alloc(rocker, rocker_port, desc_info);
netdev_err(rocker_port->dev, "rx processing failed with err %d\n",
err);
}
+ if (err)
+ rocker_port->dev->stats.rx_errors++;
+
rocker_desc_gen_clear(desc_info);
rocker_desc_head_set(rocker, &rocker_port->rx_ring, desc_info);
credits++;
if (unlikely(skb_is_gso(skb) && tqueue->prev_mss != cur_mss))
ctxt_desc_req = 1;
- if (unlikely(vlan_tx_tag_present(skb) ||
+ if (unlikely(skb_vlan_tag_present(skb) ||
((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
tqueue->hwts_tx_en)))
ctxt_desc_req = 1;
config NET_VENDOR_SMSC
bool "SMC (SMSC)/Western Digital devices"
default y
- depends on ARM || ISA || MAC || ARM64 || MIPS || M32R || SUPERH || \
- BLACKFIN || MN10300 || COLDFIRE || XTENSA || NIOS2 || PCI || PCMCIA
+ depends on ARM || ARM64 || ATARI_ETHERNAT || BLACKFIN || COLDFIRE || \
+ ISA || M32R || MAC || MIPS || MN10300 || NIOS2 || PCI || \
+ PCMCIA || SUPERH || XTENSA
---help---
If you have a network (Ethernet) card belonging to this class, say Y
and read the Ethernet-HOWTO, available from
tristate "SMC 91C9x/91C1xxx support"
select CRC32
select MII
- depends on (ARM || M32R || SUPERH || MIPS || BLACKFIN || \
- MN10300 || COLDFIRE || ARM64 || XTENSA || NIOS2) && (!OF || GPIOLIB)
+ depends on !OF || GPIOLIB
+ depends on ARM || ARM64 || ATARI_ETHERNAT || BLACKFIN || COLDFIRE || \
+ M32R || MIPS || MN10300 || NIOS2 || SUPERH || XTENSA
---help---
This is a driver for SMC's 91x series of Ethernet chipsets,
including the SMC91C94 and the SMC91C111. Say Y if you want it
#include <unit/smc91111.h>
+#elif defined(CONFIG_ATARI)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_NOWAIT 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+#define RPC_LSA_DEFAULT RPC_LED_100_10
+#define RPC_LSB_DEFAULT RPC_LED_TX_RX
+
#elif defined(CONFIG_ARCH_MSM)
#define SMC_CAN_USE_8BIT 0
struct rk_priv_data {
struct platform_device *pdev;
int phy_iface;
- char regulator[32];
+ struct regulator *regulator;
bool clk_enabled;
bool clock_input;
static int phy_power_on(struct rk_priv_data *bsp_priv, bool enable)
{
- struct regulator *ldo;
- char *ldostr = bsp_priv->regulator;
+ struct regulator *ldo = bsp_priv->regulator;
int ret;
struct device *dev = &bsp_priv->pdev->dev;
- if (!ldostr) {
- dev_err(dev, "%s: no ldo found\n", __func__);
+ if (!ldo) {
+ dev_err(dev, "%s: no regulator found\n", __func__);
return -1;
}
- ldo = regulator_get(NULL, ldostr);
- if (!ldo) {
- dev_err(dev, "\n%s get ldo %s failed\n", __func__, ldostr);
+ if (enable) {
+ ret = regulator_enable(ldo);
+ if (ret)
+ dev_err(dev, "%s: fail to enable phy-supply\n",
+ __func__);
} else {
- if (enable) {
- if (!regulator_is_enabled(ldo)) {
- regulator_set_voltage(ldo, 3300000, 3300000);
- ret = regulator_enable(ldo);
- if (ret != 0)
- dev_err(dev, "%s: fail to enable %s\n",
- __func__, ldostr);
- else
- dev_info(dev, "turn on ldo done.\n");
- } else {
- dev_warn(dev, "%s is enabled before enable",
- ldostr);
- }
- } else {
- if (regulator_is_enabled(ldo)) {
- ret = regulator_disable(ldo);
- if (ret != 0)
- dev_err(dev, "%s: fail to disable %s\n",
- __func__, ldostr);
- else
- dev_info(dev, "turn off ldo done.\n");
- } else {
- dev_warn(dev, "%s is disabled before disable",
- ldostr);
- }
- }
- regulator_put(ldo);
+ ret = regulator_disable(ldo);
+ if (ret)
+ dev_err(dev, "%s: fail to disable phy-supply\n",
+ __func__);
}
return 0;
bsp_priv->phy_iface = of_get_phy_mode(dev->of_node);
- ret = of_property_read_string(dev->of_node, "phy_regulator", &strings);
- if (ret) {
- dev_warn(dev, "%s: Can not read property: phy_regulator.\n",
- __func__);
- } else {
- dev_info(dev, "%s: PHY power controlled by regulator(%s).\n",
- __func__, strings);
- strcpy(bsp_priv->regulator, strings);
+ bsp_priv->regulator = devm_regulator_get_optional(dev, "phy");
+ if (IS_ERR(bsp_priv->regulator)) {
+ if (PTR_ERR(bsp_priv->regulator) == -EPROBE_DEFER) {
+ dev_err(dev, "phy regulator is not available yet, deferred probing\n");
+ return ERR_PTR(-EPROBE_DEFER);
+ }
+ dev_err(dev, "no regulator found\n");
+ bsp_priv->regulator = NULL;
}
ret = of_property_read_string(dev->of_node, "clock_in_out", &strings);
bool ext_phyclk; /* Clock from external PHY */
u32 tx_retime_src; /* TXCLK Retiming*/
struct clk *clk; /* PHY clock */
- int ctrl_reg; /* GMAC glue-logic control register */
+ u32 ctrl_reg; /* GMAC glue-logic control register */
int clk_sel_reg; /* GMAC ext clk selection register */
struct device *dev;
struct regmap *regmap;
if (!np)
return -EINVAL;
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "sti-ethconf");
- if (!res)
- return -ENODATA;
- dwmac->ctrl_reg = res->start;
-
/* clk selection from extra syscfg register */
dwmac->clk_sel_reg = -ENXIO;
res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "sti-clkconf");
if (IS_ERR(regmap))
return PTR_ERR(regmap);
+ err = of_property_read_u32_index(np, "st,syscon", 1, &dwmac->ctrl_reg);
+ if (err) {
+ dev_err(dev, "Can't get sysconfig ctrl offset (%d)\n", err);
+ return err;
+ }
+
dwmac->dev = dev;
dwmac->interface = of_get_phy_mode(np);
dwmac->regmap = regmap;
priv->dirty_tx = 0;
priv->cur_tx = 0;
+ netdev_reset_queue(priv->dev);
stmmac_clear_descriptors(priv);
static void stmmac_tx_clean(struct stmmac_priv *priv)
{
unsigned int txsize = priv->dma_tx_size;
+ unsigned int bytes_compl = 0, pkts_compl = 0;
spin_lock(&priv->tx_lock);
priv->hw->mode->clean_desc3(priv, p);
if (likely(skb != NULL)) {
+ pkts_compl++;
+ bytes_compl += skb->len;
dev_consume_skb_any(skb);
priv->tx_skbuff[entry] = NULL;
}
priv->dirty_tx++;
}
+
+ netdev_completed_queue(priv->dev, pkts_compl, bytes_compl);
+
if (unlikely(netif_queue_stopped(priv->dev) &&
stmmac_tx_avail(priv) > STMMAC_TX_THRESH(priv))) {
netif_tx_lock(priv->dev);
(i == txsize - 1));
priv->dirty_tx = 0;
priv->cur_tx = 0;
+ netdev_reset_queue(priv->dev);
priv->hw->dma->start_tx(priv->ioaddr);
priv->dev->stats.tx_errors++;
/* Try to bump up the dma threshold on this failure */
if (unlikely(tc != SF_DMA_MODE) && (tc <= 256)) {
tc += 64;
- priv->hw->dma->dma_mode(priv->ioaddr, tc, SF_DMA_MODE);
+ if (priv->plat->force_thresh_dma_mode)
+ priv->hw->dma->dma_mode(priv->ioaddr, tc, tc);
+ else
+ priv->hw->dma->dma_mode(priv->ioaddr, tc,
+ SF_DMA_MODE);
priv->xstats.threshold = tc;
}
} else if (unlikely(status == tx_hard_error))
if (!priv->hwts_tx_en)
skb_tx_timestamp(skb);
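+ /* Report the queued bytes to BQL; paired with netdev_completed_queue()
+ * in stmmac_tx_clean() and netdev_reset_queue() on ring (re)init.
+ */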
+ netdev_sent_queue(dev, skb->len);
priv->hw->dma->enable_dma_transmission(priv->ioaddr);
spin_unlock(&priv->tx_lock);
priv->plat->enh_desc = priv->dma_cap.enh_desc;
priv->plat->pmt = priv->dma_cap.pmt_remote_wake_up;
- priv->plat->tx_coe = priv->dma_cap.tx_coe;
+ /* TXCOE doesn't work in thresh DMA mode */
+ if (priv->plat->force_thresh_dma_mode)
+ priv->plat->tx_coe = 0;
+ else
+ priv->plat->tx_coe = priv->dma_cap.tx_coe;
if (priv->dma_cap.rx_coe_type2)
priv->plat->rx_coe = STMMAC_RX_COE_TYPE2;
of_property_read_bool(np, "snps,fixed-burst");
dma_cfg->mixed_burst =
of_property_read_bool(np, "snps,mixed-burst");
+ of_property_read_u32(np, "snps,burst_len", &dma_cfg->burst_len);
+ if (dma_cfg->burst_len < 0 || dma_cfg->burst_len > 256)
+ dma_cfg->burst_len = 0;
}
plat->force_thresh_dma_mode = of_property_read_bool(np, "snps,force_thresh_dma_mode");
if (plat->force_thresh_dma_mode) {
niu_hash_page(rp, page, addr);
if (rp->rbr_blocks_per_page > 1)
- atomic_add(rp->rbr_blocks_per_page - 1,
- &compound_head(page)->_count);
+ atomic_add(rp->rbr_blocks_per_page - 1, &page->_count);
for (i = 0; i < rp->rbr_blocks_per_page; i++) {
__le32 *rbr = &rp->rbr[start_index + i];
unsigned int len = desc->size;
unsigned int copy_len;
struct sk_buff *skb;
+ int maxlen;
int err;
err = -EMSGSIZE;
- if (unlikely(len < ETH_ZLEN || len > port->rmtu)) {
+ if (port->tso && port->tsolen > port->rmtu)
+ maxlen = port->tsolen;
+ else
+ maxlen = port->rmtu;
+ if (unlikely(len < ETH_ZLEN || len > maxlen)) {
dev->stats.rx_length_errors++;
goto out_dropped;
}
txd_mss);
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
/*Cut VLAN ID to 12 bits */
- txd_vlan_id = vlan_tx_tag_get(skb) & BITS_MASK(12);
+ txd_vlan_id = skb_vlan_tag_get(skb) & BITS_MASK(12);
txd_vtag = 1;
}
config TI_CPTS
boolean "TI Common Platform Time Sync (CPTS) Support"
- depends on TI_CPSW
+ depends on TI_CPSW || TI_KEYSTONE_NETCP
select PTP_1588_CLOCK
---help---
This driver supports the Common Platform Time Sync unit of
the CPSW Ethernet Switch. The unit can time stamp PTP UDP/IPv4
and Layer 2 packets, and the driver offers a PTP Hardware Clock.
+config TI_KEYSTONE_NETCP
+ tristate "TI Keystone NETCP Ethernet subsystem Support"
+ depends on OF
+ depends on KEYSTONE_NAVIGATOR_DMA && KEYSTONE_NAVIGATOR_QMSS
+ ---help---
+ This driver supports TI's Keystone NETCP Ethernet subsystem.
+
+ To compile this driver as a module, choose M here: the module
+ will be called keystone_netcp.
+
config TLAN
tristate "TI ThunderLAN support"
depends on (PCI || EISA)
obj-$(CONFIG_TI_CPSW_PHY_SEL) += cpsw-phy-sel.o
obj-$(CONFIG_TI_CPSW) += ti_cpsw.o
ti_cpsw-y := cpsw_ale.o cpsw.o cpts.o
+
+obj-$(CONFIG_TI_KEYSTONE_NETCP) += keystone_netcp.o
+keystone_netcp-y := netcp_core.o netcp_ethss.o netcp_sgmii.o \
+ netcp_xgbepcsr.o cpsw_ale.o cpts.o
/* Clear all mcast from ALE */
cpsw_ale_flush_multicast(ale, ALE_ALL_PORTS <<
- priv->host_port);
+ priv->host_port, -1);
/* Flood All Unicast Packets to Host port */
cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 1);
static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
{
struct cpsw_priv *priv = netdev_priv(ndev);
+ int vid;
+
+ if (priv->data.dual_emac)
+ vid = priv->slaves[priv->emac_port].port_vlan;
+ else
+ vid = priv->data.default_vlan;
if (ndev->flags & IFF_PROMISC) {
/* Enable promiscuous mode */
cpsw_ale_set_allmulti(priv->ale, priv->ndev->flags & IFF_ALLMULTI);
/* Clear all mcast from ALE */
- cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port);
+ cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port,
+ vid);
if (!netdev_mc_empty(ndev)) {
struct netdev_hw_addr *ha;
dev_kfree_skb_any(new_skb);
}
-static irqreturn_t cpsw_interrupt(int irq, void *dev_id)
+static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
+{
+ struct cpsw_priv *priv = dev_id;
+
+ cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
+ cpdma_chan_process(priv->txch, 128);
+
+ priv = cpsw_get_slave_priv(priv, 1);
+ if (priv)
+ cpdma_chan_process(priv->txch, 128);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
{
struct cpsw_priv *priv = dev_id;
+ cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
+
cpsw_intr_disable(priv);
if (priv->irq_enabled == true) {
cpsw_disable_irq(priv);
int num_tx, num_rx;
num_tx = cpdma_chan_process(priv->txch, 128);
- if (num_tx)
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
num_rx = cpdma_chan_process(priv->rxch, budget);
if (num_rx < budget) {
napi_complete(napi);
cpsw_intr_enable(priv);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
prim_cpsw = cpsw_get_slave_priv(priv, 0);
if (prim_cpsw->irq_enabled == false) {
prim_cpsw->irq_enabled = true;
napi_enable(&priv->napi);
cpdma_ctlr_start(priv->dma);
cpsw_intr_enable(priv);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
prim_cpsw = cpsw_get_slave_priv(priv, 0);
if (prim_cpsw->irq_enabled == false) {
cpdma_chan_start(priv->txch);
cpdma_ctlr_int_ctrl(priv->dma, true);
cpsw_intr_enable(priv);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
-
}
static int cpsw_ndo_set_mac_address(struct net_device *ndev, void *p)
cpsw_intr_disable(priv);
cpdma_ctlr_int_ctrl(priv->dma, false);
- cpsw_interrupt(ndev->irq, priv);
+ cpsw_rx_interrupt(priv->irqs_table[0], priv);
+ cpsw_tx_interrupt(priv->irqs_table[1], priv);
cpdma_ctlr_int_ctrl(priv->dma, true);
cpsw_intr_enable(priv);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
- cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
-
}
#endif
void __iomem *ss_regs;
struct resource *res, *ss_res;
u32 slave_offset, sliver_offset, slave_size;
- int ret = 0, i, k = 0;
+ int ret = 0, i;
+ int irq;
ndev = alloc_etherdev(sizeof(struct cpsw_priv));
if (!ndev) {
goto clean_dma_ret;
}
- ndev->irq = platform_get_irq(pdev, 0);
+ ndev->irq = platform_get_irq(pdev, 1);
if (ndev->irq < 0) {
dev_err(priv->dev, "error getting irq resource\n");
ret = -ENOENT;
goto clean_ale_ret;
}
- while ((res = platform_get_resource(priv->pdev, IORESOURCE_IRQ, k))) {
- if (k >= ARRAY_SIZE(priv->irqs_table)) {
- ret = -EINVAL;
- goto clean_ale_ret;
- }
+ /* Grab the RX and TX IRQs. Note that we also have RX_THRESHOLD and
+ * MISC IRQs, which are always kept disabled by this driver, so we
+ * will not request them.
+ *
+ * If anyone wants to implement support for those, make sure to
+ * first request them and append them to the irqs_table array.
+ */
- ret = devm_request_irq(&pdev->dev, res->start, cpsw_interrupt,
- 0, dev_name(&pdev->dev), priv);
- if (ret < 0) {
- dev_err(priv->dev, "error attaching irq (%d)\n", ret);
- goto clean_ale_ret;
- }
+ /* RX IRQ */
+ irq = platform_get_irq(pdev, 1);
+ if (irq < 0)
+ goto clean_ale_ret;
- priv->irqs_table[k] = res->start;
- k++;
+ priv->irqs_table[0] = irq;
+ ret = devm_request_irq(&pdev->dev, irq, cpsw_rx_interrupt,
+ 0, dev_name(&pdev->dev), priv);
+ if (ret < 0) {
+ dev_err(priv->dev, "error attaching irq (%d)\n", ret);
+ goto clean_ale_ret;
}
- priv->num_irqs = k;
+ /* TX IRQ */
+ irq = platform_get_irq(pdev, 2);
+ if (irq < 0)
+ goto clean_ale_ret;
+
+ priv->irqs_table[1] = irq;
+ ret = devm_request_irq(&pdev->dev, irq, cpsw_tx_interrupt,
+ 0, dev_name(&pdev->dev), priv);
+ if (ret < 0) {
+ dev_err(priv->dev, "error attaching irq (%d)\n", ret);
+ goto clean_ale_ret;
+ }
+ priv->num_irqs = 2;
ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
cpsw_ale_set_entry_type(ale_entry, ALE_TYPE_FREE);
}
-int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask)
+int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid)
{
u32 ale_entry[ALE_ENTRY_WORDS];
int ret, idx;
if (ret != ALE_TYPE_ADDR && ret != ALE_TYPE_VLAN_ADDR)
continue;
+ /* If the vid passed in is -1, remove all multicast entries from
+ * the table irrespective of the VLAN id. If a valid VLAN id is
+ * passed, remove only multicast entries added for that VLAN id;
+ * if the VLAN id doesn't match, move on to the next entry.
+ */
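+ /* Illustrative calls (values hypothetical):
+ * cpsw_ale_flush_multicast(ale, mask, -1); flushes mcast on all VLANs
+ * cpsw_ale_flush_multicast(ale, mask, 100); flushes mcast on VLAN 100 only
+ */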
+ if (vid != -1 && cpsw_ale_get_vlan_id(ale_entry) != vid)
+ continue;
+
if (cpsw_ale_get_mcast(ale_entry)) {
u8 addr[6];
int cpsw_ale_set_ageout(struct cpsw_ale *ale, int ageout);
int cpsw_ale_flush(struct cpsw_ale *ale, int port_mask);
-int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask);
+int cpsw_ale_flush_multicast(struct cpsw_ale *ale, int port_mask, int vid);
int cpsw_ale_add_ucast(struct cpsw_ale *ale, u8 *addr, int port,
int flags, u16 vid);
int cpsw_ale_del_ucast(struct cpsw_ale *ale, u8 *addr, int port,
--- /dev/null
+/*
+ * NetCP driver local header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors: Sandeep Nair <sandeep_n@ti.com>
+ * Sandeep Paulraj <s-paulraj@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ * Santosh Shilimkar <santosh.shilimkar@ti.com>
+ * Wingman Kwok <w-kwok2@ti.com>
+ * Murali Karicheri <m-karicheri2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef __NETCP_H__
+#define __NETCP_H__
+
+#include <linux/netdevice.h>
+#include <linux/soc/ti/knav_dma.h>
+
+/* Maximum Ethernet frame size supported by Keystone switch */
+#define NETCP_MAX_FRAME_SIZE 9504
+
+#define SGMII_LINK_MAC_MAC_AUTONEG 0
+#define SGMII_LINK_MAC_PHY 1
+#define SGMII_LINK_MAC_MAC_FORCED 2
+#define SGMII_LINK_MAC_FIBER 3
+#define SGMII_LINK_MAC_PHY_NO_MDIO 4
+#define XGMII_LINK_MAC_PHY 10
+#define XGMII_LINK_MAC_MAC_FORCED 11
+
+struct netcp_device;
+
+struct netcp_tx_pipe {
+ struct netcp_device *netcp_device;
+ void *dma_queue;
+ unsigned int dma_queue_id;
+ u8 dma_psflags;
+ void *dma_channel;
+ const char *dma_chan_name;
+};
+
+#define ADDR_NEW BIT(0)
+#define ADDR_VALID BIT(1)
+
+enum netcp_addr_type {
+ ADDR_ANY,
+ ADDR_DEV,
+ ADDR_UCAST,
+ ADDR_MCAST,
+ ADDR_BCAST
+};
+
+struct netcp_addr {
+ struct netcp_intf *netcp;
+ unsigned char addr[ETH_ALEN];
+ enum netcp_addr_type type;
+ unsigned int flags;
+ struct list_head node;
+};
+
+struct netcp_intf {
+ struct device *dev;
+ struct device *ndev_dev;
+ struct net_device *ndev;
+ bool big_endian;
+ unsigned int tx_compl_qid;
+ void *tx_pool;
+ struct list_head txhook_list_head;
+ unsigned int tx_pause_threshold;
+ void *tx_compl_q;
+
+ unsigned int tx_resume_threshold;
+ void *rx_queue;
+ void *rx_pool;
+ struct list_head rxhook_list_head;
+ unsigned int rx_queue_id;
+ void *rx_fdq[KNAV_DMA_FDQ_PER_CHAN];
+ u32 rx_buffer_sizes[KNAV_DMA_FDQ_PER_CHAN];
+ struct napi_struct rx_napi;
+ struct napi_struct tx_napi;
+
+ void *rx_channel;
+ const char *dma_chan_name;
+ u32 rx_pool_size;
+ u32 rx_pool_region_id;
+ u32 tx_pool_size;
+ u32 tx_pool_region_id;
+ struct list_head module_head;
+ struct list_head interface_list;
+ struct list_head addr_list;
+ bool netdev_registered;
+ bool primary_module_attached;
+
+ /* Lock used for protecting Rx/Tx hook list management */
+ spinlock_t lock;
+ struct netcp_device *netcp_device;
+ struct device_node *node_interface;
+
+ /* DMA configuration data */
+ u32 msg_enable;
+ u32 rx_queue_depths[KNAV_DMA_FDQ_PER_CHAN];
+};
+
+#define NETCP_PSDATA_LEN KNAV_DMA_NUM_PS_WORDS
+struct netcp_packet {
+ struct sk_buff *skb;
+ u32 *epib;
+ u32 *psdata;
+ unsigned int psdata_len;
+ struct netcp_intf *netcp;
+ struct netcp_tx_pipe *tx_pipe;
+ bool rxtstamp_complete;
+ void *ts_context;
+
+ int (*txtstamp_complete)(void *ctx, struct netcp_packet *pkt);
+};
+
+static inline u32 *netcp_push_psdata(struct netcp_packet *p_info,
+ unsigned int bytes)
+{
+ u32 *buf;
+ unsigned int words;
+
+ if ((bytes & 0x03) != 0)
+ return NULL;
+ words = bytes >> 2;
+
+ if ((p_info->psdata_len + words) > NETCP_PSDATA_LEN)
+ return NULL;
+
+ p_info->psdata_len += words;
+ buf = &p_info->psdata[NETCP_PSDATA_LEN - p_info->psdata_len];
+ return buf;
+}
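+
+/* Example (illustrative, not part of the driver): a TX hook needing one
+ * 32-bit word of protocol-specific data could do:
+ *
+ * u32 *w = netcp_push_psdata(p_info, 4);
+ * if (!w)
+ * return -ENOMEM;
+ * *w = MY_PS_WORD; (MY_PS_WORD is a hypothetical value)
+ *
+ * Words are handed out from the end of the psdata array backward.
+ */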
+
+static inline int netcp_align_psdata(struct netcp_packet *p_info,
+ unsigned int byte_align)
+{
+ int padding;
+
+ switch (byte_align) {
+ case 0:
+ padding = -EINVAL;
+ break;
+ case 1:
+ case 2:
+ case 4:
+ padding = 0;
+ break;
+ case 8:
+ padding = (p_info->psdata_len << 2) % 8;
+ break;
+ case 16:
+ padding = (p_info->psdata_len << 2) % 16;
+ break;
+ default:
+ padding = (p_info->psdata_len << 2) % byte_align;
+ break;
+ }
+ return padding;
+}
+
+struct netcp_module {
+ const char *name;
+ struct module *owner;
+ bool primary;
+
+ /* probe/remove: called once per NETCP instance */
+ int (*probe)(struct netcp_device *netcp_device,
+ struct device *device, struct device_node *node,
+ void **inst_priv);
+ int (*remove)(struct netcp_device *netcp_device, void *inst_priv);
+
+ /* attach/release: called once per network interface */
+ int (*attach)(void *inst_priv, struct net_device *ndev,
+ struct device_node *node, void **intf_priv);
+ int (*release)(void *intf_priv);
+ int (*open)(void *intf_priv, struct net_device *ndev);
+ int (*close)(void *intf_priv, struct net_device *ndev);
+ int (*add_addr)(void *intf_priv, struct netcp_addr *naddr);
+ int (*del_addr)(void *intf_priv, struct netcp_addr *naddr);
+ int (*add_vid)(void *intf_priv, int vid);
+ int (*del_vid)(void *intf_priv, int vid);
+ int (*ioctl)(void *intf_priv, struct ifreq *req, int cmd);
+
+ /* used internally */
+ struct list_head module_list;
+ struct list_head interface_list;
+};
+
+int netcp_register_module(struct netcp_module *module);
+void netcp_unregister_module(struct netcp_module *module);
+void *netcp_module_get_intf_data(struct netcp_module *module,
+ struct netcp_intf *intf);
+
+int netcp_txpipe_init(struct netcp_tx_pipe *tx_pipe,
+ struct netcp_device *netcp_device,
+ const char *dma_chan_name, unsigned int dma_queue_id);
+int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe);
+int netcp_txpipe_close(struct netcp_tx_pipe *tx_pipe);
+
+typedef int netcp_hook_rtn(int order, void *data, struct netcp_packet *packet);
+int netcp_register_txhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data);
+int netcp_unregister_txhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data);
+int netcp_register_rxhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data);
+int netcp_unregister_rxhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data);
+void *netcp_device_find_module(struct netcp_device *netcp_device,
+ const char *name);
+
+/* SGMII functions */
+int netcp_sgmii_reset(void __iomem *sgmii_ofs, int port);
+int netcp_sgmii_get_port_link(void __iomem *sgmii_ofs, int port);
+int netcp_sgmii_config(void __iomem *sgmii_ofs, int port, u32 interface);
+
+/* XGBE SERDES init functions */
+int netcp_xgbe_serdes_init(void __iomem *serdes_regs, void __iomem *xgbe_regs);
+
+#endif /* __NETCP_H__ */
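The header above defines a small plug-in interface: a module supplies
probe/remove (called once per NETCP instance) and attach/release (called
once per network interface) callbacks and registers itself with
netcp_register_module(). A rough sketch of a hypothetical module using
this interface follows; the module name and the empty callback bodies are
illustrative only, not part of the driver:

#include <linux/module.h>
#include "netcp.h"

static int example_probe(struct netcp_device *netcp_device,
			 struct device *dev, struct device_node *node,
			 void **inst_priv)
{
	*inst_priv = NULL;	/* per-instance state would be allocated here */
	return 0;
}

static int example_remove(struct netcp_device *netcp_device, void *inst_priv)
{
	return 0;
}

static int example_attach(void *inst_priv, struct net_device *ndev,
			  struct device_node *node, void **intf_priv)
{
	*intf_priv = NULL;	/* per-interface state would be allocated here */
	return 0;
}

static int example_release(void *intf_priv)
{
	return 0;
}

static struct netcp_module example_module = {
	.name		= "example-module",
	.owner		= THIS_MODULE,
	.probe		= example_probe,
	.remove		= example_remove,
	.attach		= example_attach,
	.release	= example_release,
};

static int __init example_init(void)
{
	return netcp_register_module(&example_module);
}
module_init(example_init);

static void __exit example_exit(void)
{
	netcp_unregister_module(&example_module);
}
module_exit(example_exit);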
--- /dev/null
+/*
+ * Keystone NetCP Core driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors: Sandeep Nair <sandeep_n@ti.com>
+ * Sandeep Paulraj <s-paulraj@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ * Santosh Shilimkar <santosh.shilimkar@ti.com>
+ * Murali Karicheri <m-karicheri2@ti.com>
+ * Wingman Kwok <w-kwok2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of_net.h>
+#include <linux/of_address.h>
+#include <linux/if_vlan.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+#include <linux/soc/ti/knav_qmss.h>
+#include <linux/soc/ti/knav_dma.h>
+
+#include "netcp.h"
+
+#define NETCP_SOP_OFFSET (NET_IP_ALIGN + NET_SKB_PAD)
+#define NETCP_NAPI_WEIGHT 64
+#define NETCP_TX_TIMEOUT (5 * HZ)
+#define NETCP_MIN_PACKET_SIZE ETH_ZLEN
+#define NETCP_MAX_MCAST_ADDR 16
+
+#define NETCP_EFUSE_REG_INDEX 0
+
+#define NETCP_MOD_PROBE_SKIPPED 1
+#define NETCP_MOD_PROBE_FAILED 2
+
+#define NETCP_DEBUG (NETIF_MSG_HW | NETIF_MSG_WOL | \
+ NETIF_MSG_DRV | NETIF_MSG_LINK | \
+ NETIF_MSG_IFUP | NETIF_MSG_INTR | \
+ NETIF_MSG_PROBE | NETIF_MSG_TIMER | \
+ NETIF_MSG_IFDOWN | NETIF_MSG_RX_ERR | \
+ NETIF_MSG_TX_ERR | NETIF_MSG_TX_DONE | \
+ NETIF_MSG_PKTDATA | NETIF_MSG_TX_QUEUED | \
+ NETIF_MSG_RX_STATUS)
+
+#define knav_queue_get_id(q) knav_queue_device_control(q, \
+ KNAV_QUEUE_GET_ID, (unsigned long)NULL)
+
+#define knav_queue_enable_notify(q) knav_queue_device_control(q, \
+ KNAV_QUEUE_ENABLE_NOTIFY, \
+ (unsigned long)NULL)
+
+#define knav_queue_disable_notify(q) knav_queue_device_control(q, \
+ KNAV_QUEUE_DISABLE_NOTIFY, \
+ (unsigned long)NULL)
+
+#define knav_queue_get_count(q) knav_queue_device_control(q, \
+ KNAV_QUEUE_GET_COUNT, (unsigned long)NULL)
+
+#define for_each_netcp_module(module) \
+ list_for_each_entry(module, &netcp_modules, module_list)
+
+#define for_each_netcp_device_module(netcp_device, inst_modpriv) \
+ list_for_each_entry(inst_modpriv, \
+ &((netcp_device)->modpriv_head), inst_list)
+
+#define for_each_module(netcp, intf_modpriv) \
+ list_for_each_entry(intf_modpriv, &netcp->module_head, intf_list)
+
+/* Module management structures */
+struct netcp_device {
+ struct list_head device_list;
+ struct list_head interface_head;
+ struct list_head modpriv_head;
+ struct device *device;
+};
+
+struct netcp_inst_modpriv {
+ struct netcp_device *netcp_device;
+ struct netcp_module *netcp_module;
+ struct list_head inst_list;
+ void *module_priv;
+};
+
+struct netcp_intf_modpriv {
+ struct netcp_intf *netcp_priv;
+ struct netcp_module *netcp_module;
+ struct list_head intf_list;
+ void *module_priv;
+};
+
+static LIST_HEAD(netcp_devices);
+static LIST_HEAD(netcp_modules);
+static DEFINE_MUTEX(netcp_modules_lock);
+
+static int netcp_debug_level = -1;
+module_param(netcp_debug_level, int, 0);
+MODULE_PARM_DESC(netcp_debug_level, "Netcp debug level (NETIF_MSG bits) (0=none,...,16=all)");
+
+/* Helper functions - Get/Set */
+static void get_pkt_info(u32 *buff, u32 *buff_len, u32 *ndesc,
+ struct knav_dma_desc *desc)
+{
+ *buff_len = desc->buff_len;
+ *buff = desc->buff;
+ *ndesc = desc->next_desc;
+}
+
+static void get_pad_info(u32 *pad0, u32 *pad1, struct knav_dma_desc *desc)
+{
+ *pad0 = desc->pad[0];
+ *pad1 = desc->pad[1];
+}
+
+static void get_org_pkt_info(u32 *buff, u32 *buff_len,
+ struct knav_dma_desc *desc)
+{
+ *buff = desc->orig_buff;
+ *buff_len = desc->orig_len;
+}
+
+static void get_words(u32 *words, int num_words, u32 *desc)
+{
+ int i;
+
+ for (i = 0; i < num_words; i++)
+ words[i] = desc[i];
+}
+
+static void set_pkt_info(u32 buff, u32 buff_len, u32 ndesc,
+ struct knav_dma_desc *desc)
+{
+ desc->buff_len = buff_len;
+ desc->buff = buff;
+ desc->next_desc = ndesc;
+}
+
+static void set_desc_info(u32 desc_info, u32 pkt_info,
+ struct knav_dma_desc *desc)
+{
+ desc->desc_info = desc_info;
+ desc->packet_info = pkt_info;
+}
+
+static void set_pad_info(u32 pad0, u32 pad1, struct knav_dma_desc *desc)
+{
+ desc->pad[0] = pad0;
+ desc->pad[1] = pad1;
+}
+
+static void set_org_pkt_info(u32 buff, u32 buff_len,
+ struct knav_dma_desc *desc)
+{
+ desc->orig_buff = buff;
+ desc->orig_len = buff_len;
+}
+
+static void set_words(u32 *words, int num_words, u32 *desc)
+{
+ int i;
+
+ for (i = 0; i < num_words; i++)
+ desc[i] = words[i];
+}
+
+/* Read the e-fuse value as 32-bit words so the result is endian independent */
+static int emac_arch_get_mac_addr(char *x, void __iomem *efuse_mac)
+{
+ unsigned int addr0, addr1;
+
+ addr1 = readl(efuse_mac + 4);
+ addr0 = readl(efuse_mac);
+
+ x[0] = (addr1 & 0x0000ff00) >> 8;
+ x[1] = addr1 & 0x000000ff;
+ x[2] = (addr0 & 0xff000000) >> 24;
+ x[3] = (addr0 & 0x00ff0000) >> 16;
+ x[4] = (addr0 & 0x0000ff00) >> 8;
+ x[5] = addr0 & 0x000000ff;
+
+ return 0;
+}
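+
+/* Worked example (illustrative): reading addr0 = 0xaabbccdd and
+ * addr1 = 0x00001122 from the e-fuse yields the MAC 11:22:aa:bb:cc:dd.
+ */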
+
+static const char *netcp_node_name(struct device_node *node)
+{
+ const char *name;
+
+ if (of_property_read_string(node, "label", &name) < 0)
+ name = node->name;
+ if (!name)
+ name = "unknown";
+ return name;
+}
+
+/* Module management routines */
+static int netcp_register_interface(struct netcp_intf *netcp)
+{
+ int ret;
+
+ ret = register_netdev(netcp->ndev);
+ if (!ret)
+ netcp->netdev_registered = true;
+ return ret;
+}
+
+static int netcp_module_probe(struct netcp_device *netcp_device,
+ struct netcp_module *module)
+{
+ struct device *dev = netcp_device->device;
+ struct device_node *devices, *interface, *node = dev->of_node;
+ struct device_node *child;
+ struct netcp_inst_modpriv *inst_modpriv;
+ struct netcp_intf *netcp_intf;
+ struct netcp_module *tmp;
+ bool primary_module_registered = false;
+ int ret;
+
+ /* Find this module in the sub-tree for this device */
+ devices = of_get_child_by_name(node, "netcp-devices");
+ if (!devices) {
+ dev_err(dev, "could not find netcp-devices node\n");
+ return NETCP_MOD_PROBE_SKIPPED;
+ }
+
+ for_each_available_child_of_node(devices, child) {
+ const char *name = netcp_node_name(child);
+
+ if (!strcasecmp(module->name, name))
+ break;
+ }
+
+ of_node_put(devices);
+ /* If module not used for this device, skip it */
+ if (!child) {
+ dev_warn(dev, "module(%s) not used for device\n", module->name);
+ return NETCP_MOD_PROBE_SKIPPED;
+ }
+
+ inst_modpriv = devm_kzalloc(dev, sizeof(*inst_modpriv), GFP_KERNEL);
+ if (!inst_modpriv) {
+ of_node_put(child);
+ return -ENOMEM;
+ }
+
+ inst_modpriv->netcp_device = netcp_device;
+ inst_modpriv->netcp_module = module;
+ list_add_tail(&inst_modpriv->inst_list, &netcp_device->modpriv_head);
+
+ ret = module->probe(netcp_device, dev, child,
+ &inst_modpriv->module_priv);
+ of_node_put(child);
+ if (ret) {
+ dev_err(dev, "Probe of module(%s) failed with %d\n",
+ module->name, ret);
+ list_del(&inst_modpriv->inst_list);
+ devm_kfree(dev, inst_modpriv);
+ return NETCP_MOD_PROBE_FAILED;
+ }
+
+ /* Attach modules only if the primary module is probed */
+ for_each_netcp_module(tmp) {
+ if (tmp->primary)
+ primary_module_registered = true;
+ }
+
+ if (!primary_module_registered)
+ return 0;
+
+ /* Attach module to interfaces */
+ list_for_each_entry(netcp_intf, &netcp_device->interface_head,
+ interface_list) {
+ struct netcp_intf_modpriv *intf_modpriv;
+
+ /* If interface not registered then register now */
+ if (!netcp_intf->netdev_registered)
+ ret = netcp_register_interface(netcp_intf);
+
+ if (ret)
+ return -ENODEV;
+
+ intf_modpriv = devm_kzalloc(dev, sizeof(*intf_modpriv),
+ GFP_KERNEL);
+ if (!intf_modpriv)
+ return -ENOMEM;
+
+ interface = of_parse_phandle(netcp_intf->node_interface,
+ module->name, 0);
+
+ intf_modpriv->netcp_priv = netcp_intf;
+ intf_modpriv->netcp_module = module;
+ list_add_tail(&intf_modpriv->intf_list,
+ &netcp_intf->module_head);
+
+ ret = module->attach(inst_modpriv->module_priv,
+ netcp_intf->ndev, interface,
+ &intf_modpriv->module_priv);
+ of_node_put(interface);
+ if (ret) {
+ dev_dbg(dev, "Attach of module %s declined with %d\n",
+ module->name, ret);
+ list_del(&intf_modpriv->intf_list);
+ devm_kfree(dev, intf_modpriv);
+ continue;
+ }
+ }
+ return 0;
+}
+
+int netcp_register_module(struct netcp_module *module)
+{
+ struct netcp_device *netcp_device;
+ struct netcp_module *tmp;
+ int ret;
+
+ if (!module->name) {
+ WARN(1, "error registering netcp module: no name\n");
+ return -EINVAL;
+ }
+
+ if (!module->probe) {
+ WARN(1, "error registering netcp module: no probe\n");
+ return -EINVAL;
+ }
+
+ mutex_lock(&netcp_modules_lock);
+
+ for_each_netcp_module(tmp) {
+ if (!strcasecmp(tmp->name, module->name)) {
+ mutex_unlock(&netcp_modules_lock);
+ return -EEXIST;
+ }
+ }
+ list_add_tail(&module->module_list, &netcp_modules);
+
+ list_for_each_entry(netcp_device, &netcp_devices, device_list) {
+ ret = netcp_module_probe(netcp_device, module);
+ if (ret < 0)
+ goto fail;
+ }
+
+ mutex_unlock(&netcp_modules_lock);
+ return 0;
+
+fail:
+ mutex_unlock(&netcp_modules_lock);
+ netcp_unregister_module(module);
+ return ret;
+}
+
+static void netcp_release_module(struct netcp_device *netcp_device,
+ struct netcp_module *module)
+{
+ struct netcp_inst_modpriv *inst_modpriv, *inst_tmp;
+ struct netcp_intf *netcp_intf, *netcp_tmp;
+ struct device *dev = netcp_device->device;
+
+ /* Release the module from each interface */
+ list_for_each_entry_safe(netcp_intf, netcp_tmp,
+ &netcp_device->interface_head,
+ interface_list) {
+ struct netcp_intf_modpriv *intf_modpriv, *intf_tmp;
+
+ list_for_each_entry_safe(intf_modpriv, intf_tmp,
+ &netcp_intf->module_head,
+ intf_list) {
+ if (intf_modpriv->netcp_module == module) {
+ module->release(intf_modpriv->module_priv);
+ list_del(&intf_modpriv->intf_list);
+ devm_kfree(dev, intf_modpriv);
+ break;
+ }
+ }
+ }
+
+ /* Remove the module from each instance */
+ list_for_each_entry_safe(inst_modpriv, inst_tmp,
+ &netcp_device->modpriv_head, inst_list) {
+ if (inst_modpriv->netcp_module == module) {
+ module->remove(netcp_device,
+ inst_modpriv->module_priv);
+ list_del(&inst_modpriv->inst_list);
+ devm_kfree(dev, inst_modpriv);
+ break;
+ }
+ }
+}
+
+void netcp_unregister_module(struct netcp_module *module)
+{
+ struct netcp_device *netcp_device;
+ struct netcp_module *module_tmp;
+
+ mutex_lock(&netcp_modules_lock);
+
+ list_for_each_entry(netcp_device, &netcp_devices, device_list) {
+ netcp_release_module(netcp_device, module);
+ }
+
+ /* Remove the module from the module list */
+ for_each_netcp_module(module_tmp) {
+ if (module == module_tmp) {
+ list_del(&module->module_list);
+ break;
+ }
+ }
+
+ mutex_unlock(&netcp_modules_lock);
+}
+
+void *netcp_module_get_intf_data(struct netcp_module *module,
+ struct netcp_intf *intf)
+{
+ struct netcp_intf_modpriv *intf_modpriv;
+
+ list_for_each_entry(intf_modpriv, &intf->module_head, intf_list)
+ if (intf_modpriv->netcp_module == module)
+ return intf_modpriv->module_priv;
+ return NULL;
+}
+
+/* Module TX and RX Hook management */
+struct netcp_hook_list {
+ struct list_head list;
+ netcp_hook_rtn *hook_rtn;
+ void *hook_data;
+ int order;
+};
+
+int netcp_register_txhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data)
+{
+ struct netcp_hook_list *entry;
+ struct netcp_hook_list *next;
+ unsigned long flags;
+
+ entry = devm_kzalloc(netcp_priv->dev, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ entry->hook_rtn = hook_rtn;
+ entry->hook_data = hook_data;
+ entry->order = order;
+
+ spin_lock_irqsave(&netcp_priv->lock, flags);
+ list_for_each_entry(next, &netcp_priv->txhook_list_head, list) {
+ if (next->order > order)
+ break;
+ }
+ __list_add(&entry->list, next->list.prev, &next->list);
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+
+ return 0;
+}
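+
+/* Hooks are kept sorted by ascending order, so a hook registered with
+ * order 10 runs before one registered with order 20 (values here are
+ * only illustrative; modules pick their own ordering constants).
+ */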
+
+int netcp_unregister_txhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data)
+{
+ struct netcp_hook_list *next, *n;
+ unsigned long flags;
+
+ spin_lock_irqsave(&netcp_priv->lock, flags);
+ list_for_each_entry_safe(next, n, &netcp_priv->txhook_list_head, list) {
+ if ((next->order == order) &&
+ (next->hook_rtn == hook_rtn) &&
+ (next->hook_data == hook_data)) {
+ list_del(&next->list);
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+ devm_kfree(netcp_priv->dev, next);
+ return 0;
+ }
+ }
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+ return -ENOENT;
+}
+
+int netcp_register_rxhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data)
+{
+ struct netcp_hook_list *entry;
+ struct netcp_hook_list *next;
+ unsigned long flags;
+
+ entry = devm_kzalloc(netcp_priv->dev, sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ entry->hook_rtn = hook_rtn;
+ entry->hook_data = hook_data;
+ entry->order = order;
+
+ spin_lock_irqsave(&netcp_priv->lock, flags);
+ list_for_each_entry(next, &netcp_priv->rxhook_list_head, list) {
+ if (next->order > order)
+ break;
+ }
+ __list_add(&entry->list, next->list.prev, &next->list);
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+
+ return 0;
+}
+
+int netcp_unregister_rxhook(struct netcp_intf *netcp_priv, int order,
+ netcp_hook_rtn *hook_rtn, void *hook_data)
+{
+ struct netcp_hook_list *next, *n;
+ unsigned long flags;
+
+ spin_lock_irqsave(&netcp_priv->lock, flags);
+ list_for_each_entry_safe(next, n, &netcp_priv->rxhook_list_head, list) {
+ if ((next->order == order) &&
+ (next->hook_rtn == hook_rtn) &&
+ (next->hook_data == hook_data)) {
+ list_del(&next->list);
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+ devm_kfree(netcp_priv->dev, next);
+ return 0;
+ }
+ }
+ spin_unlock_irqrestore(&netcp_priv->lock, flags);
+
+ return -ENOENT;
+}
+
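+/* Primary RX buffers no larger than PAGE_SIZE come from the page-fragment
+ * allocator and must be released with put_page(); anything larger was
+ * kmalloc()'d, hence the is_frag distinction below.
+ */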
+static void netcp_frag_free(bool is_frag, void *ptr)
+{
+ if (is_frag)
+ put_page(virt_to_head_page(ptr));
+ else
+ kfree(ptr);
+}
+
+static void netcp_free_rx_desc_chain(struct netcp_intf *netcp,
+ struct knav_dma_desc *desc)
+{
+ struct knav_dma_desc *ndesc;
+ dma_addr_t dma_desc, dma_buf;
+ unsigned int buf_len, dma_sz = sizeof(*ndesc);
+ void *buf_ptr;
+ u32 tmp;
+
+ get_words(&dma_desc, 1, &desc->next_desc);
+
+ while (dma_desc) {
+ ndesc = knav_pool_desc_unmap(netcp->rx_pool, dma_desc, dma_sz);
+ if (unlikely(!ndesc)) {
+ dev_err(netcp->ndev_dev, "failed to unmap Rx desc\n");
+ break;
+ }
+ get_pkt_info(&dma_buf, &tmp, &dma_desc, ndesc);
+ get_pad_info((u32 *)&buf_ptr, &tmp, ndesc);
+ dma_unmap_page(netcp->dev, dma_buf, PAGE_SIZE, DMA_FROM_DEVICE);
+ __free_page(buf_ptr);
+ knav_pool_desc_put(netcp->rx_pool, ndesc);
+ }
+
+ get_pad_info((u32 *)&buf_ptr, &buf_len, desc);
+ if (buf_ptr)
+ netcp_frag_free(buf_len <= PAGE_SIZE, buf_ptr);
+ knav_pool_desc_put(netcp->rx_pool, desc);
+}
+
+static void netcp_empty_rx_queue(struct netcp_intf *netcp)
+{
+ struct knav_dma_desc *desc;
+ unsigned int dma_sz;
+ dma_addr_t dma;
+
+ for (;;) {
+ dma = knav_queue_pop(netcp->rx_queue, &dma_sz);
+ if (!dma)
+ break;
+
+ desc = knav_pool_desc_unmap(netcp->rx_pool, dma, dma_sz);
+ if (unlikely(!desc)) {
+ dev_err(netcp->ndev_dev, "%s: failed to unmap Rx desc\n",
+ __func__);
+ netcp->ndev->stats.rx_errors++;
+ continue;
+ }
+ netcp_free_rx_desc_chain(netcp, desc);
+ netcp->ndev->stats.rx_dropped++;
+ }
+}
+
+static int netcp_process_one_rx_packet(struct netcp_intf *netcp)
+{
+ unsigned int dma_sz, buf_len, org_buf_len;
+ struct knav_dma_desc *desc, *ndesc;
+ unsigned int pkt_sz = 0, accum_sz;
+ struct netcp_hook_list *rx_hook;
+ dma_addr_t dma_desc, dma_buff;
+ struct netcp_packet p_info;
+ struct sk_buff *skb;
+ void *org_buf_ptr;
+ u32 tmp;
+
+ dma_desc = knav_queue_pop(netcp->rx_queue, &dma_sz);
+ if (!dma_desc)
+ return -1;
+
+ desc = knav_pool_desc_unmap(netcp->rx_pool, dma_desc, dma_sz);
+ if (unlikely(!desc)) {
+ dev_err(netcp->ndev_dev, "failed to unmap Rx desc\n");
+ return 0;
+ }
+
+ get_pkt_info(&dma_buff, &buf_len, &dma_desc, desc);
+ get_pad_info((u32 *)&org_buf_ptr, &org_buf_len, desc);
+
+ if (unlikely(!org_buf_ptr)) {
+ dev_err(netcp->ndev_dev, "NULL bufptr in desc\n");
+ goto free_desc;
+ }
+
+ pkt_sz &= KNAV_DMA_DESC_PKT_LEN_MASK;
+ accum_sz = buf_len;
+ dma_unmap_single(netcp->dev, dma_buff, buf_len, DMA_FROM_DEVICE);
+
+ /* Build a new sk_buff for the primary buffer */
+ skb = build_skb(org_buf_ptr, org_buf_len);
+ if (unlikely(!skb)) {
+ dev_err(netcp->ndev_dev, "build_skb() failed\n");
+ goto free_desc;
+ }
+
+ /* update data, tail and len */
+ skb_reserve(skb, NETCP_SOP_OFFSET);
+ __skb_put(skb, buf_len);
+
+ /* Fill in the page fragment list */
+ while (dma_desc) {
+ struct page *page;
+
+ ndesc = knav_pool_desc_unmap(netcp->rx_pool, dma_desc, dma_sz);
+ if (unlikely(!ndesc)) {
+ dev_err(netcp->ndev_dev, "failed to unmap Rx desc\n");
+ goto free_desc;
+ }
+
+ get_pkt_info(&dma_buff, &buf_len, &dma_desc, ndesc);
+ get_pad_info((u32 *)&page, &tmp, ndesc);
+
+ if (likely(dma_buff && buf_len && page)) {
+ dma_unmap_page(netcp->dev, dma_buff, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ } else {
+ dev_err(netcp->ndev_dev, "Bad Rx desc dma_buff(%p), len(%d), page(%p)\n",
+ (void *)dma_buff, buf_len, page);
+ goto free_desc;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ offset_in_page(dma_buff), buf_len, PAGE_SIZE);
+ accum_sz += buf_len;
+
+ /* Free the descriptor */
+ knav_pool_desc_put(netcp->rx_pool, ndesc);
+ }
+
+ /* Free the primary descriptor */
+ knav_pool_desc_put(netcp->rx_pool, desc);
+
+ /* check for packet len and warn */
+ if (unlikely(pkt_sz != accum_sz))
+ dev_dbg(netcp->ndev_dev, "mismatch in packet size(%d) & sum of fragments(%d)\n",
+ pkt_sz, accum_sz);
+
+ /* Remove ethernet FCS from the packet */
+ __pskb_trim(skb, skb->len - ETH_FCS_LEN);
+
+ /* Call each of the RX hooks */
+ p_info.skb = skb;
+ p_info.rxtstamp_complete = false;
+ list_for_each_entry(rx_hook, &netcp->rxhook_list_head, list) {
+ int ret;
+
+ ret = rx_hook->hook_rtn(rx_hook->order, rx_hook->hook_data,
+ &p_info);
+ if (unlikely(ret)) {
+ dev_err(netcp->ndev_dev, "RX hook %d failed: %d\n",
+ rx_hook->order, ret);
+ netcp->ndev->stats.rx_errors++;
+ dev_kfree_skb(skb);
+ return 0;
+ }
+ }
+
+ netcp->ndev->last_rx = jiffies;
+ netcp->ndev->stats.rx_packets++;
+ netcp->ndev->stats.rx_bytes += skb->len;
+
+ /* push skb up the stack */
+ skb->protocol = eth_type_trans(skb, netcp->ndev);
+ netif_receive_skb(skb);
+ return 0;
+
+free_desc:
+ netcp_free_rx_desc_chain(netcp, desc);
+ netcp->ndev->stats.rx_errors++;
+ return 0;
+}
+
+static int netcp_process_rx_packets(struct netcp_intf *netcp,
+ unsigned int budget)
+{
+ int i;
+
+ for (i = 0; (i < budget) && !netcp_process_one_rx_packet(netcp); i++)
+ ;
+ return i;
+}
+
+/* Release descriptors and attached buffers from Rx FDQ */
+static void netcp_free_rx_buf(struct netcp_intf *netcp, int fdq)
+{
+ struct knav_dma_desc *desc;
+ unsigned int buf_len, dma_sz;
+ dma_addr_t dma;
+ void *buf_ptr;
+ u32 tmp;
+
+ /* Allocate descriptor */
+ while ((dma = knav_queue_pop(netcp->rx_fdq[fdq], &dma_sz))) {
+ desc = knav_pool_desc_unmap(netcp->rx_pool, dma, dma_sz);
+ if (unlikely(!desc)) {
+ dev_err(netcp->ndev_dev, "failed to unmap Rx desc\n");
+ continue;
+ }
+
+ get_org_pkt_info(&dma, &buf_len, desc);
+ get_pad_info((u32 *)&buf_ptr, &tmp, desc);
+
+ if (unlikely(!dma)) {
+ dev_err(netcp->ndev_dev, "NULL orig_buff in desc\n");
+ knav_pool_desc_put(netcp->rx_pool, desc);
+ continue;
+ }
+
+ if (unlikely(!buf_ptr)) {
+ dev_err(netcp->ndev_dev, "NULL bufptr in desc\n");
+ knav_pool_desc_put(netcp->rx_pool, desc);
+ continue;
+ }
+
+ if (fdq == 0) {
+ dma_unmap_single(netcp->dev, dma, buf_len,
+ DMA_FROM_DEVICE);
+ netcp_frag_free((buf_len <= PAGE_SIZE), buf_ptr);
+ } else {
+ dma_unmap_page(netcp->dev, dma, buf_len,
+ DMA_FROM_DEVICE);
+ __free_page(buf_ptr);
+ }
+
+ knav_pool_desc_put(netcp->rx_pool, desc);
+ }
+}
+
+static void netcp_rxpool_free(struct netcp_intf *netcp)
+{
+ int i;
+
+ for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN &&
+ !IS_ERR_OR_NULL(netcp->rx_fdq[i]); i++)
+ netcp_free_rx_buf(netcp, i);
+
+ if (knav_pool_count(netcp->rx_pool) != netcp->rx_pool_size)
+ dev_err(netcp->ndev_dev, "Lost Rx (%d) descriptors\n",
+ netcp->rx_pool_size - knav_pool_count(netcp->rx_pool));
+
+ knav_pool_destroy(netcp->rx_pool);
+ netcp->rx_pool = NULL;
+}
+
+static void netcp_allocate_rx_buf(struct netcp_intf *netcp, int fdq)
+{
+ struct knav_dma_desc *hwdesc;
+ unsigned int buf_len, dma_sz;
+ u32 desc_info, pkt_info;
+ struct page *page;
+ dma_addr_t dma;
+ void *bufptr;
+ u32 pad[2];
+
+ /* Allocate descriptor */
+ hwdesc = knav_pool_desc_get(netcp->rx_pool);
+ if (IS_ERR_OR_NULL(hwdesc)) {
+ dev_dbg(netcp->ndev_dev, "out of rx pool desc\n");
+ return;
+ }
+
+ if (likely(fdq == 0)) {
+ unsigned int primary_buf_len;
+ /* Allocate a primary receive queue entry */
+ buf_len = netcp->rx_buffer_sizes[0] + NETCP_SOP_OFFSET;
+ primary_buf_len = SKB_DATA_ALIGN(buf_len) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+ if (primary_buf_len <= PAGE_SIZE) {
+ bufptr = netdev_alloc_frag(primary_buf_len);
+ pad[1] = primary_buf_len;
+ } else {
+ bufptr = kmalloc(primary_buf_len, GFP_ATOMIC |
+ GFP_DMA32 | __GFP_COLD);
+ pad[1] = 0;
+ }
+
+ if (unlikely(!bufptr)) {
+ dev_warn_ratelimited(netcp->ndev_dev, "Primary RX buffer alloc failed\n");
+ goto fail;
+ }
+ dma = dma_map_single(netcp->dev, bufptr, buf_len,
+ DMA_TO_DEVICE);
+ pad[0] = (u32)bufptr;
+
+ } else {
+ /* Allocate a secondary receive queue entry */
+ page = alloc_page(GFP_ATOMIC | GFP_DMA32 | __GFP_COLD);
+ if (unlikely(!page)) {
+ dev_warn_ratelimited(netcp->ndev_dev, "Secondary page alloc failed\n");
+ goto fail;
+ }
+ buf_len = PAGE_SIZE;
+ dma = dma_map_page(netcp->dev, page, 0, buf_len, DMA_TO_DEVICE);
+ pad[0] = (u32)page;
+ pad[1] = 0;
+ }
+
+ desc_info = KNAV_DMA_DESC_PS_INFO_IN_DESC;
+ desc_info |= buf_len & KNAV_DMA_DESC_PKT_LEN_MASK;
+ pkt_info = KNAV_DMA_DESC_HAS_EPIB;
+ pkt_info |= KNAV_DMA_NUM_PS_WORDS << KNAV_DMA_DESC_PSLEN_SHIFT;
+ pkt_info |= (netcp->rx_queue_id & KNAV_DMA_DESC_RETQ_MASK) <<
+ KNAV_DMA_DESC_RETQ_SHIFT;
+ set_org_pkt_info(dma, buf_len, hwdesc);
+ set_pad_info(pad[0], pad[1], hwdesc);
+ set_desc_info(desc_info, pkt_info, hwdesc);
+
+ /* Push to FDQs */
+ knav_pool_desc_map(netcp->rx_pool, hwdesc, sizeof(*hwdesc), &dma,
+ &dma_sz);
+ knav_queue_push(netcp->rx_fdq[fdq], dma, sizeof(*hwdesc), 0);
+ return;
+
+fail:
+ knav_pool_desc_put(netcp->rx_pool, hwdesc);
+}
+
+/* Refill Rx FDQ with descriptors & attached buffers */
+static void netcp_rxpool_refill(struct netcp_intf *netcp)
+{
+ u32 fdq_deficit[KNAV_DMA_FDQ_PER_CHAN] = {0};
+ int i;
+
+ /* Calculate the FDQ deficit and refill */
+ for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN && netcp->rx_fdq[i]; i++) {
+ fdq_deficit[i] = netcp->rx_queue_depths[i] -
+ knav_queue_get_count(netcp->rx_fdq[i]);
+
+ while (fdq_deficit[i]--)
+ netcp_allocate_rx_buf(netcp, i);
+ } /* end for fdqs */
+}
+
+/* NAPI poll */
+static int netcp_rx_poll(struct napi_struct *napi, int budget)
+{
+ struct netcp_intf *netcp = container_of(napi, struct netcp_intf,
+ rx_napi);
+ unsigned int packets;
+
+ packets = netcp_process_rx_packets(netcp, budget);
+
+ if (packets < budget) {
+ napi_complete(&netcp->rx_napi);
+ knav_queue_enable_notify(netcp->rx_queue);
+ }
+
+ netcp_rxpool_refill(netcp);
+ return packets;
+}
+
+static void netcp_rx_notify(void *arg)
+{
+ struct netcp_intf *netcp = arg;
+
+ knav_queue_disable_notify(netcp->rx_queue);
+ napi_schedule(&netcp->rx_napi);
+}
+
+static void netcp_free_tx_desc_chain(struct netcp_intf *netcp,
+ struct knav_dma_desc *desc,
+ unsigned int desc_sz)
+{
+ struct knav_dma_desc *ndesc = desc;
+ dma_addr_t dma_desc, dma_buf;
+ unsigned int buf_len;
+
+ while (ndesc) {
+ get_pkt_info(&dma_buf, &buf_len, &dma_desc, ndesc);
+
+ if (dma_buf && buf_len)
+ dma_unmap_single(netcp->dev, dma_buf, buf_len,
+ DMA_TO_DEVICE);
+ else
+ dev_warn(netcp->ndev_dev, "bad Tx desc buf(%p), len(%d)\n",
+ (void *)dma_buf, buf_len);
+
+ knav_pool_desc_put(netcp->tx_pool, ndesc);
+ ndesc = NULL;
+ if (dma_desc) {
+ ndesc = knav_pool_desc_unmap(netcp->tx_pool, dma_desc,
+ desc_sz);
+ if (!ndesc)
+ dev_err(netcp->ndev_dev, "failed to unmap Tx desc\n");
+ }
+ }
+}
+
+static int netcp_process_tx_compl_packets(struct netcp_intf *netcp,
+ unsigned int budget)
+{
+ struct knav_dma_desc *desc;
+ struct sk_buff *skb;
+ unsigned int dma_sz;
+ dma_addr_t dma;
+ int pkts = 0;
+ u32 tmp;
+
+ while (budget--) {
+ dma = knav_queue_pop(netcp->tx_compl_q, &dma_sz);
+ if (!dma)
+ break;
+ desc = knav_pool_desc_unmap(netcp->tx_pool, dma, dma_sz);
+ if (unlikely(!desc)) {
+ dev_err(netcp->ndev_dev, "failed to unmap Tx desc\n");
+ netcp->ndev->stats.tx_errors++;
+ continue;
+ }
+
+ get_pad_info((u32 *)&skb, &tmp, desc);
+ netcp_free_tx_desc_chain(netcp, desc, dma_sz);
+ if (!skb) {
+ dev_err(netcp->ndev_dev, "No skb in Tx desc\n");
+ netcp->ndev->stats.tx_errors++;
+ continue;
+ }
+
+ if (netif_subqueue_stopped(netcp->ndev, skb) &&
+ netif_running(netcp->ndev) &&
+ (knav_pool_count(netcp->tx_pool) >
+ netcp->tx_resume_threshold)) {
+ u16 subqueue = skb_get_queue_mapping(skb);
+
+ netif_wake_subqueue(netcp->ndev, subqueue);
+ }
+
+ netcp->ndev->stats.tx_packets++;
+ netcp->ndev->stats.tx_bytes += skb->len;
+ dev_kfree_skb(skb);
+ pkts++;
+ }
+ return pkts;
+}
+
+static int netcp_tx_poll(struct napi_struct *napi, int budget)
+{
+ int packets;
+ struct netcp_intf *netcp = container_of(napi, struct netcp_intf,
+ tx_napi);
+
+ packets = netcp_process_tx_compl_packets(netcp, budget);
+ if (packets < budget) {
+ napi_complete(&netcp->tx_napi);
+ knav_queue_enable_notify(netcp->tx_compl_q);
+ }
+
+ return packets;
+}
+
+static void netcp_tx_notify(void *arg)
+{
+ struct netcp_intf *netcp = arg;
+
+ knav_queue_disable_notify(netcp->tx_compl_q);
+ napi_schedule(&netcp->tx_napi);
+}
+
+static struct knav_dma_desc*
+netcp_tx_map_skb(struct sk_buff *skb, struct netcp_intf *netcp)
+{
+ struct knav_dma_desc *desc, *ndesc, *pdesc;
+ unsigned int pkt_len = skb_headlen(skb);
+ struct device *dev = netcp->dev;
+ dma_addr_t dma_addr;
+ unsigned int dma_sz;
+ int i;
+
+ /* Map the linear buffer */
+ dma_addr = dma_map_single(dev, skb->data, pkt_len, DMA_TO_DEVICE);
+ if (unlikely(!dma_addr)) {
+ dev_err(netcp->ndev_dev, "Failed to map skb buffer\n");
+ return NULL;
+ }
+
+ desc = knav_pool_desc_get(netcp->tx_pool);
+ if (unlikely(IS_ERR_OR_NULL(desc))) {
+ dev_err(netcp->ndev_dev, "out of TX desc\n");
+ dma_unmap_single(dev, dma_addr, pkt_len, DMA_TO_DEVICE);
+ return NULL;
+ }
+
+ set_pkt_info(dma_addr, pkt_len, 0, desc);
+ if (skb_is_nonlinear(skb)) {
+ prefetchw(skb_shinfo(skb));
+ } else {
+ desc->next_desc = 0;
+ goto upd_pkt_len;
+ }
+
+ pdesc = desc;
+
+ /* Handle the case where skb is fragmented in pages */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ struct page *page = skb_frag_page(frag);
+ u32 page_offset = frag->page_offset;
+ u32 buf_len = skb_frag_size(frag);
+ dma_addr_t desc_dma;
+ u32 pkt_info;
+
+ dma_addr = dma_map_page(dev, page, page_offset, buf_len,
+ DMA_TO_DEVICE);
+ if (unlikely(!dma_addr)) {
+ dev_err(netcp->ndev_dev, "Failed to map skb page\n");
+ goto free_descs;
+ }
+
+ ndesc = knav_pool_desc_get(netcp->tx_pool);
+ if (unlikely(IS_ERR_OR_NULL(ndesc))) {
+ dev_err(netcp->ndev_dev, "out of TX desc for frags\n");
+ dma_unmap_page(dev, dma_addr, buf_len, DMA_TO_DEVICE);
+ goto free_descs;
+ }
+
+ desc_dma = knav_pool_desc_virt_to_dma(netcp->tx_pool,
+ (void *)ndesc);
+ pkt_info =
+ (netcp->tx_compl_qid & KNAV_DMA_DESC_RETQ_MASK) <<
+ KNAV_DMA_DESC_RETQ_SHIFT;
+ set_pkt_info(dma_addr, buf_len, 0, ndesc);
+ set_words(&desc_dma, 1, &pdesc->next_desc);
+ pkt_len += buf_len;
+ if (pdesc != desc)
+ knav_pool_desc_map(netcp->tx_pool, pdesc,
+ sizeof(*pdesc), &desc_dma, &dma_sz);
+ pdesc = ndesc;
+ }
+ if (pdesc != desc)
+ knav_pool_desc_map(netcp->tx_pool, pdesc, sizeof(*pdesc),
+ &dma_addr, &dma_sz);
+
+ /* frag list based linkage is not supported for now. */
+ if (skb_shinfo(skb)->frag_list) {
+ dev_err_ratelimited(netcp->ndev_dev, "NETIF_F_FRAGLIST not supported\n");
+ goto free_descs;
+ }
+
+upd_pkt_len:
+ WARN_ON(pkt_len != skb->len);
+
+ pkt_len &= KNAV_DMA_DESC_PKT_LEN_MASK;
+ set_words(&pkt_len, 1, &desc->desc_info);
+ return desc;
+
+free_descs:
+ netcp_free_tx_desc_chain(netcp, desc, sizeof(*desc));
+ return NULL;
+}
+
+static int netcp_tx_submit_skb(struct netcp_intf *netcp,
+ struct sk_buff *skb,
+ struct knav_dma_desc *desc)
+{
+ struct netcp_tx_pipe *tx_pipe = NULL;
+ struct netcp_hook_list *tx_hook;
+ struct netcp_packet p_info;
+ u32 packet_info = 0;
+ unsigned int dma_sz;
+ dma_addr_t dma;
+ int ret = 0;
+
+ p_info.netcp = netcp;
+ p_info.skb = skb;
+ p_info.tx_pipe = NULL;
+ p_info.psdata_len = 0;
+ p_info.ts_context = NULL;
+ p_info.txtstamp_complete = NULL;
+ p_info.epib = desc->epib;
+ p_info.psdata = desc->psdata;
+ memset(p_info.epib, 0, KNAV_DMA_NUM_EPIB_WORDS * sizeof(u32));
+
+ /* Find out where to inject the packet for transmission */
+ list_for_each_entry(tx_hook, &netcp->txhook_list_head, list) {
+ ret = tx_hook->hook_rtn(tx_hook->order, tx_hook->hook_data,
+ &p_info);
+ if (unlikely(ret != 0)) {
+ dev_err(netcp->ndev_dev, "TX hook %d rejected the packet with reason(%d)\n",
+ tx_hook->order, ret);
+ ret = (ret < 0) ? ret : NETDEV_TX_OK;
+ goto out;
+ }
+ }
+
+ /* Make sure some TX hook claimed the packet */
+ tx_pipe = p_info.tx_pipe;
+ if (!tx_pipe) {
+ dev_err(netcp->ndev_dev, "No TX hook claimed the packet!\n");
+ ret = -ENXIO;
+ goto out;
+ }
+
+ /* update descriptor */
+ if (p_info.psdata_len) {
+ u32 *psdata = p_info.psdata;
+
+ memmove(p_info.psdata, p_info.psdata + p_info.psdata_len,
+ p_info.psdata_len);
+ set_words(psdata, p_info.psdata_len, psdata);
+ packet_info |=
+ (p_info.psdata_len & KNAV_DMA_DESC_PSLEN_MASK) <<
+ KNAV_DMA_DESC_PSLEN_SHIFT;
+ }
+
+ packet_info |= KNAV_DMA_DESC_HAS_EPIB |
+ ((netcp->tx_compl_qid & KNAV_DMA_DESC_RETQ_MASK) <<
+ KNAV_DMA_DESC_RETQ_SHIFT) |
+ ((tx_pipe->dma_psflags & KNAV_DMA_DESC_PSFLAG_MASK) <<
+ KNAV_DMA_DESC_PSFLAG_SHIFT);
+
+ set_words(&packet_info, 1, &desc->packet_info);
+ set_words((u32 *)&skb, 1, &desc->pad[0]);
+
+ /* submit packet descriptor */
+ ret = knav_pool_desc_map(netcp->tx_pool, desc, sizeof(*desc), &dma,
+ &dma_sz);
+ if (unlikely(ret)) {
+ dev_err(netcp->ndev_dev, "%s() failed to map desc\n", __func__);
+ ret = -ENOMEM;
+ goto out;
+ }
+ skb_tx_timestamp(skb);
+ knav_queue_push(tx_pipe->dma_queue, dma, dma_sz, 0);
+
+out:
+ return ret;
+}
+
+/* Submit the packet */
+static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ int subqueue = skb_get_queue_mapping(skb);
+ struct knav_dma_desc *desc;
+ int desc_count, ret = 0;
+
+ if (unlikely(skb->len <= 0)) {
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(skb->len < NETCP_MIN_PACKET_SIZE)) {
+ ret = skb_padto(skb, NETCP_MIN_PACKET_SIZE);
+ if (ret < 0) {
+ /* If we get here, the skb has already been dropped */
+ dev_warn(netcp->ndev_dev, "padding failed (%d), packet dropped\n",
+ ret);
+ ndev->stats.tx_dropped++;
+ return ret;
+ }
+ skb->len = NETCP_MIN_PACKET_SIZE;
+ }
+
+ desc = netcp_tx_map_skb(skb, netcp);
+ if (unlikely(!desc)) {
+ netif_stop_subqueue(ndev, subqueue);
+ ret = -ENOBUFS;
+ goto drop;
+ }
+
+ ret = netcp_tx_submit_skb(netcp, skb, desc);
+ if (ret)
+ goto drop;
+
+ ndev->trans_start = jiffies;
+
+ /* Check Tx pool count & stop subqueue if needed */
+ desc_count = knav_pool_count(netcp->tx_pool);
+ if (desc_count < netcp->tx_pause_threshold) {
+ dev_dbg(netcp->ndev_dev, "pausing tx, count(%d)\n", desc_count);
+ netif_stop_subqueue(ndev, subqueue);
+ }
+ return NETDEV_TX_OK;
+
+drop:
+ ndev->stats.tx_dropped++;
+ if (desc)
+ netcp_free_tx_desc_chain(netcp, desc, sizeof(*desc));
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+int netcp_txpipe_close(struct netcp_tx_pipe *tx_pipe)
+{
+ if (tx_pipe->dma_channel) {
+ knav_dma_close_channel(tx_pipe->dma_channel);
+ tx_pipe->dma_channel = NULL;
+ }
+ return 0;
+}
+
+int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
+{
+ struct device *dev = tx_pipe->netcp_device->device;
+ struct knav_dma_cfg config;
+ int ret = 0;
+ u8 name[16];
+
+ memset(&config, 0, sizeof(config));
+ config.direction = DMA_MEM_TO_DEV;
+ config.u.tx.filt_einfo = false;
+ config.u.tx.filt_pswords = false;
+ config.u.tx.priority = DMA_PRIO_MED_L;
+
+ tx_pipe->dma_channel = knav_dma_open_channel(dev,
+ tx_pipe->dma_chan_name, &config);
+ if (IS_ERR_OR_NULL(tx_pipe->dma_channel)) {
+ dev_err(dev, "failed opening tx chan(%s)\n",
+ tx_pipe->dma_chan_name);
+ ret = -ENODEV;
+ goto err;
+ }
+
+ snprintf(name, sizeof(name), "tx-pipe-%s", dev_name(dev));
+ tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
+ KNAV_QUEUE_SHARED);
+ if (IS_ERR(tx_pipe->dma_queue)) {
+ ret = PTR_ERR(tx_pipe->dma_queue);
+ dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
+ name, ret);
+ goto err;
+ }
+
+ dev_dbg(dev, "opened tx pipe %s\n", name);
+ return 0;
+
+err:
+ if (!IS_ERR_OR_NULL(tx_pipe->dma_channel))
+ knav_dma_close_channel(tx_pipe->dma_channel);
+ tx_pipe->dma_channel = NULL;
+ return ret;
+}
+
+int netcp_txpipe_init(struct netcp_tx_pipe *tx_pipe,
+ struct netcp_device *netcp_device,
+ const char *dma_chan_name, unsigned int dma_queue_id)
+{
+ memset(tx_pipe, 0, sizeof(*tx_pipe));
+ tx_pipe->netcp_device = netcp_device;
+ tx_pipe->dma_chan_name = dma_chan_name;
+ tx_pipe->dma_queue_id = dma_queue_id;
+ return 0;
+}
+
+static struct netcp_addr *netcp_addr_find(struct netcp_intf *netcp,
+ const u8 *addr,
+ enum netcp_addr_type type)
+{
+ struct netcp_addr *naddr;
+
+ list_for_each_entry(naddr, &netcp->addr_list, node) {
+ if (naddr->type != type)
+ continue;
+ if (addr && memcmp(addr, naddr->addr, ETH_ALEN))
+ continue;
+ return naddr;
+ }
+
+ return NULL;
+}
+
+static struct netcp_addr *netcp_addr_add(struct netcp_intf *netcp,
+ const u8 *addr,
+ enum netcp_addr_type type)
+{
+ struct netcp_addr *naddr;
+
+ naddr = devm_kmalloc(netcp->dev, sizeof(*naddr), GFP_ATOMIC);
+ if (!naddr)
+ return NULL;
+
+ naddr->type = type;
+ naddr->flags = 0;
+ naddr->netcp = netcp;
+ if (addr)
+ ether_addr_copy(naddr->addr, addr);
+ else
+ memset(naddr->addr, 0, ETH_ALEN);
+ list_add_tail(&naddr->node, &netcp->addr_list);
+
+ return naddr;
+}
+
+static void netcp_addr_del(struct netcp_intf *netcp, struct netcp_addr *naddr)
+{
+ list_del(&naddr->node);
+ devm_kfree(netcp->dev, naddr);
+}
+
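+/* Address bookkeeping uses a mark-and-sweep scheme: clear all marks, mark
+ * every address that must remain (ADDR_VALID) or is newly requested
+ * (ADDR_NEW), then sweep to delete unmarked entries and push additions to
+ * the registered modules.
+ */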
+static void netcp_addr_clear_mark(struct netcp_intf *netcp)
+{
+ struct netcp_addr *naddr;
+
+ list_for_each_entry(naddr, &netcp->addr_list, node)
+ naddr->flags = 0;
+}
+
+static void netcp_addr_add_mark(struct netcp_intf *netcp, const u8 *addr,
+ enum netcp_addr_type type)
+{
+ struct netcp_addr *naddr;
+
+ naddr = netcp_addr_find(netcp, addr, type);
+ if (naddr) {
+ naddr->flags |= ADDR_VALID;
+ return;
+ }
+
+ naddr = netcp_addr_add(netcp, addr, type);
+ if (!WARN_ON(!naddr))
+ naddr->flags |= ADDR_NEW;
+}
+
+static void netcp_addr_sweep_del(struct netcp_intf *netcp)
+{
+ struct netcp_addr *naddr, *tmp;
+ struct netcp_intf_modpriv *priv;
+ struct netcp_module *module;
+ int error;
+
+ list_for_each_entry_safe(naddr, tmp, &netcp->addr_list, node) {
+ if (naddr->flags & (ADDR_VALID | ADDR_NEW))
+ continue;
+ dev_dbg(netcp->ndev_dev, "deleting address %pM, type %x\n",
+ naddr->addr, naddr->type);
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, priv) {
+ module = priv->netcp_module;
+ if (!module->del_addr)
+ continue;
+ error = module->del_addr(priv->module_priv,
+ naddr);
+ WARN_ON(error);
+ }
+ mutex_unlock(&netcp_modules_lock);
+ netcp_addr_del(netcp, naddr);
+ }
+}
+
+static void netcp_addr_sweep_add(struct netcp_intf *netcp)
+{
+ struct netcp_addr *naddr, *tmp;
+ struct netcp_intf_modpriv *priv;
+ struct netcp_module *module;
+ int error;
+
+ list_for_each_entry_safe(naddr, tmp, &netcp->addr_list, node) {
+ if (!(naddr->flags & ADDR_NEW))
+ continue;
+ dev_dbg(netcp->ndev_dev, "adding address %pM, type %x\n",
+ naddr->addr, naddr->type);
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, priv) {
+ module = priv->netcp_module;
+ if (!module->add_addr)
+ continue;
+ error = module->add_addr(priv->module_priv, naddr);
+ WARN_ON(error);
+ }
+ mutex_unlock(&netcp_modules_lock);
+ }
+}
+
+static void netcp_set_rx_mode(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netdev_hw_addr *ndev_addr;
+ bool promisc;
+
+ promisc = (ndev->flags & IFF_PROMISC ||
+ ndev->flags & IFF_ALLMULTI ||
+ netdev_mc_count(ndev) > NETCP_MAX_MCAST_ADDR);
+
+ /* first clear all marks */
+ netcp_addr_clear_mark(netcp);
+
+ /* next add new entries, mark existing ones */
+ netcp_addr_add_mark(netcp, ndev->broadcast, ADDR_BCAST);
+ for_each_dev_addr(ndev, ndev_addr)
+ netcp_addr_add_mark(netcp, ndev_addr->addr, ADDR_DEV);
+ netdev_for_each_uc_addr(ndev_addr, ndev)
+ netcp_addr_add_mark(netcp, ndev_addr->addr, ADDR_UCAST);
+ netdev_for_each_mc_addr(ndev_addr, ndev)
+ netcp_addr_add_mark(netcp, ndev_addr->addr, ADDR_MCAST);
+
+ if (promisc)
+ netcp_addr_add_mark(netcp, NULL, ADDR_ANY);
+
+ /* finally sweep and callout into modules */
+ netcp_addr_sweep_del(netcp);
+ netcp_addr_sweep_add(netcp);
+}
+
+static void netcp_free_navigator_resources(struct netcp_intf *netcp)
+{
+ int i;
+
+ if (netcp->rx_channel) {
+ knav_dma_close_channel(netcp->rx_channel);
+ netcp->rx_channel = NULL;
+ }
+
+ if (!IS_ERR_OR_NULL(netcp->rx_pool))
+ netcp_rxpool_free(netcp);
+
+ if (!IS_ERR_OR_NULL(netcp->rx_queue)) {
+ knav_queue_close(netcp->rx_queue);
+ netcp->rx_queue = NULL;
+ }
+
+ for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN &&
+ !IS_ERR_OR_NULL(netcp->rx_fdq[i]); ++i) {
+ knav_queue_close(netcp->rx_fdq[i]);
+ netcp->rx_fdq[i] = NULL;
+ }
+
+ if (!IS_ERR_OR_NULL(netcp->tx_compl_q)) {
+ knav_queue_close(netcp->tx_compl_q);
+ netcp->tx_compl_q = NULL;
+ }
+
+ if (!IS_ERR_OR_NULL(netcp->tx_pool)) {
+ knav_pool_destroy(netcp->tx_pool);
+ netcp->tx_pool = NULL;
+ }
+}
+
+static int netcp_setup_navigator_resources(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct knav_queue_notify_config notify_cfg;
+ struct knav_dma_cfg config;
+ u32 last_fdq = 0;
+ char name[16];
+ int ret;
+ int i;
+
+ /* Create Rx/Tx descriptor pools */
+ snprintf(name, sizeof(name), "rx-pool-%s", ndev->name);
+ netcp->rx_pool = knav_pool_create(name, netcp->rx_pool_size,
+ netcp->rx_pool_region_id);
+ if (IS_ERR_OR_NULL(netcp->rx_pool)) {
+ dev_err(netcp->ndev_dev, "Couldn't create rx pool\n");
+ ret = PTR_ERR(netcp->rx_pool);
+ goto fail;
+ }
+
+ snprintf(name, sizeof(name), "tx-pool-%s", ndev->name);
+ netcp->tx_pool = knav_pool_create(name, netcp->tx_pool_size,
+ netcp->tx_pool_region_id);
+ if (IS_ERR_OR_NULL(netcp->tx_pool)) {
+ dev_err(netcp->ndev_dev, "Couldn't create tx pool\n");
+ ret = PTR_ERR(netcp->tx_pool);
+ goto fail;
+ }
+
+ /* open Tx completion queue */
+ snprintf(name, sizeof(name), "tx-compl-%s", ndev->name);
+ netcp->tx_compl_q = knav_queue_open(name, netcp->tx_compl_qid, 0);
+ if (IS_ERR_OR_NULL(netcp->tx_compl_q)) {
+ ret = PTR_ERR(netcp->tx_compl_q);
+ goto fail;
+ }
+ netcp->tx_compl_qid = knav_queue_get_id(netcp->tx_compl_q);
+
+ /* Set notification for Tx completion */
+ notify_cfg.fn = netcp_tx_notify;
+ notify_cfg.fn_arg = netcp;
+ ret = knav_queue_device_control(netcp->tx_compl_q,
+ KNAV_QUEUE_SET_NOTIFIER,
+ (unsigned long)&notify_cfg);
+ if (ret)
+ goto fail;
+
+ knav_queue_disable_notify(netcp->tx_compl_q);
+
+ /* open Rx completion queue */
+ snprintf(name, sizeof(name), "rx-compl-%s", ndev->name);
+ netcp->rx_queue = knav_queue_open(name, netcp->rx_queue_id, 0);
+ if (IS_ERR_OR_NULL(netcp->rx_queue)) {
+ ret = PTR_ERR(netcp->rx_queue);
+ goto fail;
+ }
+ netcp->rx_queue_id = knav_queue_get_id(netcp->rx_queue);
+
+ /* Set notification for Rx completion */
+ notify_cfg.fn = netcp_rx_notify;
+ notify_cfg.fn_arg = netcp;
+ ret = knav_queue_device_control(netcp->rx_queue,
+ KNAV_QUEUE_SET_NOTIFIER,
+ (unsigned long)&notify_cfg);
+ if (ret)
+ goto fail;
+
+ knav_queue_disable_notify(netcp->rx_queue);
+
+ /* open Rx FDQs */
+ for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN &&
+ netcp->rx_queue_depths[i] && netcp->rx_buffer_sizes[i]; ++i) {
+ snprintf(name, sizeof(name), "rx-fdq-%s-%d", ndev->name, i);
+ netcp->rx_fdq[i] = knav_queue_open(name, KNAV_QUEUE_GP, 0);
+ if (IS_ERR_OR_NULL(netcp->rx_fdq[i])) {
+ ret = PTR_ERR(netcp->rx_fdq[i]);
+ goto fail;
+ }
+ }
+
+ memset(&config, 0, sizeof(config));
+ config.direction = DMA_DEV_TO_MEM;
+ config.u.rx.einfo_present = true;
+ config.u.rx.psinfo_present = true;
+ config.u.rx.err_mode = DMA_DROP;
+ config.u.rx.desc_type = DMA_DESC_HOST;
+ config.u.rx.psinfo_at_sop = false;
+ config.u.rx.sop_offset = NETCP_SOP_OFFSET;
+ config.u.rx.dst_q = netcp->rx_queue_id;
+ config.u.rx.thresh = DMA_THRESH_NONE;
+
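+ /* Point unused FDQ slots at the last opened free-descriptor queue so
+ * every slot carries a valid queue id
+ */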
+ for (i = 0; i < KNAV_DMA_FDQ_PER_CHAN; ++i) {
+ if (netcp->rx_fdq[i])
+ last_fdq = knav_queue_get_id(netcp->rx_fdq[i]);
+ config.u.rx.fdq[i] = last_fdq;
+ }
+
+ netcp->rx_channel = knav_dma_open_channel(netcp->netcp_device->device,
+ netcp->dma_chan_name, &config);
+ if (IS_ERR_OR_NULL(netcp->rx_channel)) {
+ dev_err(netcp->ndev_dev, "failed opening rx chan(%s)\n",
+ netcp->dma_chan_name);
+ ret = -ENODEV;
+ goto fail;
+ }
+
+ dev_dbg(netcp->ndev_dev, "opened RX channel: %p\n", netcp->rx_channel);
+ return 0;
+
+fail:
+ netcp_free_navigator_resources(netcp);
+ return ret;
+}
+
+/* Open the device */
+static int netcp_ndo_open(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_intf_modpriv *intf_modpriv;
+ struct netcp_module *module;
+ int ret;
+
+ netif_carrier_off(ndev);
+ ret = netcp_setup_navigator_resources(ndev);
+ if (ret) {
+ dev_err(netcp->ndev_dev, "Failed to setup navigator resources\n");
+ goto fail;
+ }
+
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if (module->open) {
+ ret = module->open(intf_modpriv->module_priv, ndev);
+ if (ret != 0) {
+ dev_err(netcp->ndev_dev, "module open failed\n");
+ goto fail_open;
+ }
+ }
+ }
+ mutex_unlock(&netcp_modules_lock);
+
+ netcp_rxpool_refill(netcp);
+ napi_enable(&netcp->rx_napi);
+ napi_enable(&netcp->tx_napi);
+ knav_queue_enable_notify(netcp->tx_compl_q);
+ knav_queue_enable_notify(netcp->rx_queue);
+ netif_tx_wake_all_queues(ndev);
+ dev_dbg(netcp->ndev_dev, "netcp device %s opened\n", ndev->name);
+ return 0;
+
+fail_open:
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if (module->close)
+ module->close(intf_modpriv->module_priv, ndev);
+ }
+ mutex_unlock(&netcp_modules_lock);
+
+fail:
+ netcp_free_navigator_resources(netcp);
+ return ret;
+}
+
+/* Close the device */
+static int netcp_ndo_stop(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_intf_modpriv *intf_modpriv;
+ struct netcp_module *module;
+ int err = 0;
+
+ netif_tx_stop_all_queues(ndev);
+ netif_carrier_off(ndev);
+ netcp_addr_clear_mark(netcp);
+ netcp_addr_sweep_del(netcp);
+ knav_queue_disable_notify(netcp->rx_queue);
+ knav_queue_disable_notify(netcp->tx_compl_q);
+ napi_disable(&netcp->rx_napi);
+ napi_disable(&netcp->tx_napi);
+
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if (module->close) {
+ err = module->close(intf_modpriv->module_priv, ndev);
+ if (err != 0)
+ dev_err(netcp->ndev_dev, "Close failed\n");
+ }
+ }
+ mutex_unlock(&netcp_modules_lock);
+
+ /* Recycle Rx descriptors from completion queue */
+ netcp_empty_rx_queue(netcp);
+
+ /* Recycle Tx descriptors from completion queue */
+ netcp_process_tx_compl_packets(netcp, netcp->tx_pool_size);
+
+ if (knav_pool_count(netcp->tx_pool) != netcp->tx_pool_size)
+ dev_err(netcp->ndev_dev, "Lost (%d) Tx descs\n",
+ netcp->tx_pool_size - knav_pool_count(netcp->tx_pool));
+
+ netcp_free_navigator_resources(netcp);
+ dev_dbg(netcp->ndev_dev, "netcp device %s stopped\n", ndev->name);
+ return 0;
+}
+
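+/* Dispatch an ioctl to every module: succeed if any module handles it,
+ * abort on the first hard error, and return -EOPNOTSUPP when no module
+ * implements ioctl.
+ */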
+static int netcp_ndo_ioctl(struct net_device *ndev,
+ struct ifreq *req, int cmd)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_intf_modpriv *intf_modpriv;
+ struct netcp_module *module;
+ int ret = -1, err = -EOPNOTSUPP;
+
+ if (!netif_running(ndev))
+ return -EINVAL;
+
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if (!module->ioctl)
+ continue;
+
+ err = module->ioctl(intf_modpriv->module_priv, req, cmd);
+ if ((err < 0) && (err != -EOPNOTSUPP)) {
+ ret = err;
+ goto out;
+ }
+ if (err == 0)
+ ret = err;
+ }
+
+out:
+ mutex_unlock(&netcp_modules_lock);
+ return (ret == 0) ? 0 : err;
+}
+
+static int netcp_ndo_change_mtu(struct net_device *ndev, int new_mtu)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+
+ /* MTU < 68 is an error for IPv4 traffic */
+ if ((new_mtu < 68) ||
+ (new_mtu > (NETCP_MAX_FRAME_SIZE - ETH_HLEN - ETH_FCS_LEN))) {
+ dev_err(netcp->ndev_dev, "Invalid mtu size = %d\n", new_mtu);
+ return -EINVAL;
+ }
+
+ ndev->mtu = new_mtu;
+ return 0;
+}
+
+static void netcp_ndo_tx_timeout(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ unsigned int descs = knav_pool_count(netcp->tx_pool);
+
+ dev_err(netcp->ndev_dev, "transmit timed out tx descs(%d)\n", descs);
+ netcp_process_tx_compl_packets(netcp, netcp->tx_pool_size);
+ ndev->trans_start = jiffies;
+ netif_tx_wake_all_queues(ndev);
+}
+
+static int netcp_rx_add_vid(struct net_device *ndev, __be16 proto, u16 vid)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_intf_modpriv *intf_modpriv;
+ struct netcp_module *module;
+ int err = 0;
+
+ dev_dbg(netcp->ndev_dev, "adding rx vlan id: %d\n", vid);
+
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if ((module->add_vid) && (vid != 0)) {
+ err = module->add_vid(intf_modpriv->module_priv, vid);
+ if (err != 0) {
+ dev_err(netcp->ndev_dev, "Could not add vlan id = %d\n",
+ vid);
+ break;
+ }
+ }
+ }
+ mutex_unlock(&netcp_modules_lock);
+ return err;
+}
+
+static int netcp_rx_kill_vid(struct net_device *ndev, __be16 proto, u16 vid)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_intf_modpriv *intf_modpriv;
+ struct netcp_module *module;
+ int err = 0;
+
+ dev_dbg(netcp->ndev_dev, "removing rx vlan id: %d\n", vid);
+
+ mutex_lock(&netcp_modules_lock);
+ for_each_module(netcp, intf_modpriv) {
+ module = intf_modpriv->netcp_module;
+ if (module->del_vid) {
+ err = module->del_vid(intf_modpriv->module_priv, vid);
+ if (err != 0) {
+ dev_err(netcp->ndev_dev, "Could not delete vlan id = %d\n",
+ vid);
+ break;
+ }
+ }
+ }
+ mutex_unlock(&netcp_modules_lock);
+ return err;
+}
+
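+/* Pin all egress traffic to hardware queue 0 */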
+static u16 netcp_select_queue(struct net_device *dev, struct sk_buff *skb,
+ void *accel_priv,
+ select_queue_fallback_t fallback)
+{
+ return 0;
+}
+
+static int netcp_setup_tc(struct net_device *dev, u8 num_tc)
+{
+ int i;
+
+ /* setup tc must be called under rtnl lock */
+ ASSERT_RTNL();
+
+ /* Sanity-check the number of traffic classes requested */
+ if ((dev->real_num_tx_queues <= 1) ||
+ (dev->real_num_tx_queues < num_tc))
+ return -EINVAL;
+
+ /* Configure traffic class to queue mappings */
+ if (num_tc) {
+ netdev_set_num_tc(dev, num_tc);
+ for (i = 0; i < num_tc; i++)
+ netdev_set_tc_queue(dev, i, 1, i);
+ } else {
+ netdev_reset_tc(dev);
+ }
+
+ return 0;
+}
+
+static const struct net_device_ops netcp_netdev_ops = {
+ .ndo_open = netcp_ndo_open,
+ .ndo_stop = netcp_ndo_stop,
+ .ndo_start_xmit = netcp_ndo_start_xmit,
+ .ndo_set_rx_mode = netcp_set_rx_mode,
+ .ndo_do_ioctl = netcp_ndo_ioctl,
+ .ndo_change_mtu = netcp_ndo_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_vlan_rx_add_vid = netcp_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = netcp_rx_kill_vid,
+ .ndo_tx_timeout = netcp_ndo_tx_timeout,
+ .ndo_select_queue = netcp_select_queue,
+ .ndo_setup_tc = netcp_setup_tc,
+};
+
+static int netcp_create_interface(struct netcp_device *netcp_device,
+ struct device_node *node_interface)
+{
+ struct device *dev = netcp_device->device;
+ struct device_node *node = dev->of_node;
+ struct netcp_intf *netcp;
+ struct net_device *ndev;
+ resource_size_t size;
+ struct resource res;
+ void __iomem *efuse = NULL;
+ u32 efuse_mac = 0;
+ const void *mac_addr;
+ u8 efuse_mac_addr[6];
+ u32 temp[2];
+ int ret = 0;
+
+ ndev = alloc_etherdev_mqs(sizeof(*netcp), 1, 1);
+ if (!ndev) {
+ dev_err(dev, "Error allocating netdev\n");
+ return -ENOMEM;
+ }
+
+ ndev->features |= NETIF_F_SG;
+ ndev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ ndev->hw_features = ndev->features;
+ ndev->vlan_features |= NETIF_F_SG;
+
+ netcp = netdev_priv(ndev);
+ spin_lock_init(&netcp->lock);
+ INIT_LIST_HEAD(&netcp->module_head);
+ INIT_LIST_HEAD(&netcp->txhook_list_head);
+ INIT_LIST_HEAD(&netcp->rxhook_list_head);
+ INIT_LIST_HEAD(&netcp->addr_list);
+ netcp->netcp_device = netcp_device;
+ netcp->dev = netcp_device->device;
+ netcp->ndev = ndev;
+ netcp->ndev_dev = &ndev->dev;
+ netcp->msg_enable = netif_msg_init(netcp_debug_level, NETCP_DEBUG);
+ netcp->tx_pause_threshold = MAX_SKB_FRAGS;
+ netcp->tx_resume_threshold = netcp->tx_pause_threshold;
+ netcp->node_interface = node_interface;
+
+ ret = of_property_read_u32(node_interface, "efuse-mac", &efuse_mac);
+ if (efuse_mac) {
+ if (of_address_to_resource(node, NETCP_EFUSE_REG_INDEX, &res)) {
+ dev_err(dev, "could not find efuse-mac reg resource\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+ size = resource_size(&res);
+
+ if (!devm_request_mem_region(dev, res.start, size,
+ dev_name(dev))) {
+ dev_err(dev, "could not reserve resource\n");
+ ret = -ENOMEM;
+ goto quit;
+ }
+
+ efuse = devm_ioremap_nocache(dev, res.start, size);
+ if (!efuse) {
+ dev_err(dev, "could not map resource\n");
+ devm_release_mem_region(dev, res.start, size);
+ ret = -ENOMEM;
+ goto quit;
+ }
+
+ emac_arch_get_mac_addr(efuse_mac_addr, efuse);
+ if (is_valid_ether_addr(efuse_mac_addr))
+ ether_addr_copy(ndev->dev_addr, efuse_mac_addr);
+ else
+ random_ether_addr(ndev->dev_addr);
+
+ devm_iounmap(dev, efuse);
+ devm_release_mem_region(dev, res.start, size);
+ } else {
+ mac_addr = of_get_mac_address(node_interface);
+ if (mac_addr)
+ ether_addr_copy(ndev->dev_addr, mac_addr);
+ else
+ random_ether_addr(ndev->dev_addr);
+ }
+
+ ret = of_property_read_string(node_interface, "rx-channel",
+ &netcp->dma_chan_name);
+ if (ret < 0) {
+ dev_err(dev, "missing \"rx-channel\" parameter\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+
+ ret = of_property_read_u32(node_interface, "rx-queue",
+ &netcp->rx_queue_id);
+ if (ret < 0) {
+ dev_warn(dev, "missing \"rx-queue\" parameter\n");
+ netcp->rx_queue_id = KNAV_QUEUE_QPEND;
+ }
+
+ ret = of_property_read_u32_array(node_interface, "rx-queue-depth",
+ netcp->rx_queue_depths,
+ KNAV_DMA_FDQ_PER_CHAN);
+ if (ret < 0) {
+ dev_err(dev, "missing \"rx-queue-depth\" parameter\n");
+ netcp->rx_queue_depths[0] = 128;
+ }
+
+ ret = of_property_read_u32_array(node_interface, "rx-buffer-size",
+ netcp->rx_buffer_sizes,
+ KNAV_DMA_FDQ_PER_CHAN);
+ if (ret) {
+ dev_err(dev, "missing \"rx-buffer-size\" parameter\n");
+ netcp->rx_buffer_sizes[0] = 1536;
+ }
+
+ ret = of_property_read_u32_array(node_interface, "rx-pool", temp, 2);
+ if (ret < 0) {
+ dev_err(dev, "missing \"rx-pool\" parameter\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+ netcp->rx_pool_size = temp[0];
+ netcp->rx_pool_region_id = temp[1];
+
+ ret = of_property_read_u32_array(node_interface, "tx-pool", temp, 2);
+ if (ret < 0) {
+ dev_err(dev, "missing \"tx-pool\" parameter\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+ netcp->tx_pool_size = temp[0];
+ netcp->tx_pool_region_id = temp[1];
+
+ if (netcp->tx_pool_size < MAX_SKB_FRAGS) {
+ dev_err(dev, "tx-pool size too small, must be atleast(%ld)\n",
+ MAX_SKB_FRAGS);
+ ret = -ENODEV;
+ goto quit;
+ }
+
+ ret = of_property_read_u32(node_interface, "tx-completion-queue",
+ &netcp->tx_compl_qid);
+ if (ret < 0) {
+ dev_warn(dev, "missing \"tx-completion-queue\" parameter\n");
+ netcp->tx_compl_qid = KNAV_QUEUE_QPEND;
+ }
+
+ /* NAPI register */
+ netif_napi_add(ndev, &netcp->rx_napi, netcp_rx_poll, NETCP_NAPI_WEIGHT);
+ netif_napi_add(ndev, &netcp->tx_napi, netcp_tx_poll, NETCP_NAPI_WEIGHT);
+
+ /* Register the network device */
+ ndev->dev_id = 0;
+ ndev->watchdog_timeo = NETCP_TX_TIMEOUT;
+ ndev->netdev_ops = &netcp_netdev_ops;
+ SET_NETDEV_DEV(ndev, dev);
+
+ list_add_tail(&netcp->interface_list, &netcp_device->interface_head);
+ return 0;
+
+quit:
+ free_netdev(ndev);
+ return ret;
+}
+
+static void netcp_delete_interface(struct netcp_device *netcp_device,
+ struct net_device *ndev)
+{
+ struct netcp_intf_modpriv *intf_modpriv, *tmp;
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct netcp_module *module;
+
+ dev_dbg(netcp_device->device, "Removing interface \"%s\"\n",
+ ndev->name);
+
+ /* Notify each of the modules that the interface is going away */
+ list_for_each_entry_safe(intf_modpriv, tmp, &netcp->module_head,
+ intf_list) {
+ module = intf_modpriv->netcp_module;
+ dev_dbg(netcp_device->device, "Releasing module \"%s\"\n",
+ module->name);
+ if (module->release)
+ module->release(intf_modpriv->module_priv);
+ list_del(&intf_modpriv->intf_list);
+ kfree(intf_modpriv);
+ }
+ WARN(!list_empty(&netcp->module_head), "%s interface module list is not empty!\n",
+ ndev->name);
+
+ list_del(&netcp->interface_list);
+
+ of_node_put(netcp->node_interface);
+ unregister_netdev(ndev);
+ netif_napi_del(&netcp->rx_napi);
+ free_netdev(ndev);
+}
+
+static int netcp_probe(struct platform_device *pdev)
+{
+ struct device_node *node = pdev->dev.of_node;
+ struct netcp_intf *netcp_intf, *netcp_tmp;
+ struct device_node *child, *interfaces;
+ struct netcp_device *netcp_device;
+ struct device *dev = &pdev->dev;
+ struct netcp_module *module;
+ int ret;
+
+ if (!node) {
+ dev_err(dev, "could not find device info\n");
+ return -ENODEV;
+ }
+
+ /* Allocate a new NETCP device instance */
+ netcp_device = devm_kzalloc(dev, sizeof(*netcp_device), GFP_KERNEL);
+ if (!netcp_device)
+ return -ENOMEM;
+
+ pm_runtime_enable(&pdev->dev);
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0) {
+ dev_err(dev, "Failed to enable NETCP power-domain\n");
+ pm_runtime_disable(&pdev->dev);
+ return ret;
+ }
+
+ /* Initialize the NETCP device instance */
+ INIT_LIST_HEAD(&netcp_device->interface_head);
+ INIT_LIST_HEAD(&netcp_device->modpriv_head);
+ netcp_device->device = dev;
+ platform_set_drvdata(pdev, netcp_device);
+
+ /* create interfaces */
+ interfaces = of_get_child_by_name(node, "netcp-interfaces");
+ if (!interfaces) {
+ dev_err(dev, "could not find netcp-interfaces node\n");
+ ret = -ENODEV;
+ goto probe_quit;
+ }
+
+ for_each_available_child_of_node(interfaces, child) {
+ ret = netcp_create_interface(netcp_device, child);
+ if (ret) {
+ dev_err(dev, "could not create interface(%s)\n",
+ child->name);
+ goto probe_quit_interface;
+ }
+ }
+
+ /* Add the device instance to the list */
+ list_add_tail(&netcp_device->device_list, &netcp_devices);
+
+ /* Probe & attach any modules already registered */
+ mutex_lock(&netcp_modules_lock);
+ for_each_netcp_module(module) {
+ ret = netcp_module_probe(netcp_device, module);
+ if (ret < 0)
+ dev_err(dev, "module(%s) probe failed\n", module->name);
+ }
+ mutex_unlock(&netcp_modules_lock);
+ return 0;
+
+probe_quit_interface:
+ list_for_each_entry_safe(netcp_intf, netcp_tmp,
+ &netcp_device->interface_head,
+ interface_list) {
+ netcp_delete_interface(netcp_device, netcp_intf->ndev);
+ }
+
+probe_quit:
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ platform_set_drvdata(pdev, NULL);
+ return ret;
+}
+
+static int netcp_remove(struct platform_device *pdev)
+{
+ struct netcp_device *netcp_device = platform_get_drvdata(pdev);
+ struct netcp_inst_modpriv *inst_modpriv, *tmp;
+ struct netcp_module *module;
+
+ list_for_each_entry_safe(inst_modpriv, tmp, &netcp_device->modpriv_head,
+ inst_list) {
+ module = inst_modpriv->netcp_module;
+ dev_dbg(&pdev->dev, "Removing module \"%s\"\n", module->name);
+ module->remove(netcp_device, inst_modpriv->module_priv);
+ list_del(&inst_modpriv->inst_list);
+ kfree(inst_modpriv);
+ }
+ WARN(!list_empty(&netcp_device->interface_head), "%s interface list not empty!\n",
+ pdev->name);
+
+ devm_kfree(&pdev->dev, netcp_device);
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ platform_set_drvdata(pdev, NULL);
+ return 0;
+}
+
+static struct of_device_id of_match[] = {
+ { .compatible = "ti,netcp-1.0", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, of_match);
+
+static struct platform_driver netcp_driver = {
+ .driver = {
+ .name = "netcp-1.0",
+ .owner = THIS_MODULE,
+ .of_match_table = of_match,
+ },
+ .probe = netcp_probe,
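+ /* Move the module-built psdata words to the front of the
+ * descriptor's psdata area and record their length in packet_info
+ */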
+ .remove = netcp_remove,
+};
+module_platform_driver(netcp_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI NETCP driver for Keystone SOCs");
+MODULE_AUTHOR("Sandeep Nair <sandeep_n@ti.com");
--- /dev/null
+/*
+ * Keystone GBE and XGBE subsystem code
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors: Sandeep Nair <sandeep_n@ti.com>
+ * Sandeep Paulraj <s-paulraj@ti.com>
+ * Cyril Chemparathy <cyril@ti.com>
+ * Santosh Shilimkar <santosh.shilimkar@ti.com>
+ * Wingman Kwok <w-kwok2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/io.h>
+#include <linux/of_mdio.h>
+#include <linux/of_address.h>
+#include <linux/if_vlan.h>
+#include <linux/ethtool.h>
+
+#include "cpsw_ale.h"
+#include "netcp.h"
+
+#define NETCP_DRIVER_NAME "TI KeyStone Ethernet Driver"
+#define NETCP_DRIVER_VERSION "v1.0"
+
+#define GBE_IDENT(reg) ((reg >> 16) & 0xffff)
+#define GBE_MAJOR_VERSION(reg) ((reg >> 8) & 0x7)
+#define GBE_MINOR_VERSION(reg) (reg & 0xff)
+#define GBE_RTL_VERSION(reg) ((reg >> 11) & 0x1f)
+
+/* 1G Ethernet SS defines */
+#define GBE_MODULE_NAME "netcp-gbe"
+#define GBE_SS_VERSION_14 0x4ed21104
+
+#define GBE13_SGMII_MODULE_OFFSET 0x100
+#define GBE13_SGMII34_MODULE_OFFSET 0x400
+#define GBE13_SWITCH_MODULE_OFFSET 0x800
+#define GBE13_HOST_PORT_OFFSET 0x834
+#define GBE13_SLAVE_PORT_OFFSET 0x860
+#define GBE13_EMAC_OFFSET 0x900
+#define GBE13_SLAVE_PORT2_OFFSET 0xa00
+#define GBE13_HW_STATS_OFFSET 0xb00
+#define GBE13_ALE_OFFSET 0xe00
+#define GBE13_HOST_PORT_NUM 0
+#define GBE13_NUM_SLAVES 4
+#define GBE13_NUM_ALE_PORTS (GBE13_NUM_SLAVES + 1)
+#define GBE13_NUM_ALE_ENTRIES 1024
+
+/* 10G Ethernet SS defines */
+#define XGBE_MODULE_NAME "netcp-xgbe"
+#define XGBE_SS_VERSION_10 0x4ee42100
+
+#define XGBE_SERDES_REG_INDEX 1
+#define XGBE10_SGMII_MODULE_OFFSET 0x100
+#define XGBE10_SWITCH_MODULE_OFFSET 0x1000
+#define XGBE10_HOST_PORT_OFFSET 0x1034
+#define XGBE10_SLAVE_PORT_OFFSET 0x1064
+#define XGBE10_EMAC_OFFSET 0x1400
+#define XGBE10_ALE_OFFSET 0x1700
+#define XGBE10_HW_STATS_OFFSET 0x1800
+#define XGBE10_HOST_PORT_NUM 0
+#define XGBE10_NUM_SLAVES 2
+#define XGBE10_NUM_ALE_PORTS (XGBE10_NUM_SLAVES + 1)
+#define XGBE10_NUM_ALE_ENTRIES 1024
+
+#define GBE_TIMER_INTERVAL (HZ / 2)
+
+/* Soft reset register values */
+#define SOFT_RESET_MASK BIT(0)
+#define SOFT_RESET BIT(0)
+#define DEVICE_EMACSL_RESET_POLL_COUNT 100
+#define GMACSL_RET_WARN_RESET_INCOMPLETE -2
+
+#define MACSL_RX_ENABLE_CSF BIT(23)
+#define MACSL_ENABLE_EXT_CTL BIT(18)
+#define MACSL_XGMII_ENABLE BIT(13)
+#define MACSL_XGIG_MODE BIT(8)
+#define MACSL_GIG_MODE BIT(7)
+#define MACSL_GMII_ENABLE BIT(5)
+#define MACSL_FULLDUPLEX BIT(0)
+
+#define GBE_CTL_P0_ENABLE BIT(2)
+#define GBE_REG_VAL_STAT_ENABLE_ALL 0xff
+#define XGBE_REG_VAL_STAT_ENABLE_ALL 0xf
+#define GBE_STATS_CD_SEL BIT(28)
+
+#define GBE_PORT_MASK(x) (BIT(x) - 1)
+#define GBE_MASK_NO_PORTS 0
+
+#define GBE_DEF_1G_MAC_CONTROL \
+ (MACSL_GIG_MODE | MACSL_GMII_ENABLE | \
+ MACSL_ENABLE_EXT_CTL | MACSL_RX_ENABLE_CSF)
+
+#define GBE_DEF_10G_MAC_CONTROL \
+ (MACSL_XGIG_MODE | MACSL_XGMII_ENABLE | \
+ MACSL_ENABLE_EXT_CTL | MACSL_RX_ENABLE_CSF)
+
+#define GBE_STATSA_MODULE 0
+#define GBE_STATSB_MODULE 1
+#define GBE_STATSC_MODULE 2
+#define GBE_STATSD_MODULE 3
+
+#define XGBE_STATS0_MODULE 0
+#define XGBE_STATS1_MODULE 1
+#define XGBE_STATS2_MODULE 2
+
+#define MAX_SLAVES GBE13_NUM_SLAVES
+/* s: 0-based slave_port */
+#define SGMII_BASE(s) \
+ (((s) < 2) ? gbe_dev->sgmii_port_regs : gbe_dev->sgmii_port34_regs)
+
+#define GBE_TX_QUEUE 648
+#define GBE_TXHOOK_ORDER 0
+#define GBE_DEFAULT_ALE_AGEOUT 30
+#define SLAVE_LINK_IS_XGMII(s) ((s)->link_interface >= XGMII_LINK_MAC_PHY)
+#define NETCP_LINK_STATE_INVALID -1
+
+#define GBE_SET_REG_OFS(p, rb, rn) p->rb##_ofs.rn = \
+ offsetof(struct gbe##_##rb, rn)
+#define XGBE_SET_REG_OFS(p, rb, rn) p->rb##_ofs.rn = \
+ offsetof(struct xgbe##_##rb, rn)
+#define GBE_REG_ADDR(p, rb, rn) (p->rb + p->rb##_ofs.rn)
+
+struct xgbe_ss_regs {
+ u32 id_ver;
+ u32 synce_count;
+ u32 synce_mux;
+ u32 control;
+};
+
+struct xgbe_switch_regs {
+ u32 id_ver;
+ u32 control;
+ u32 emcontrol;
+ u32 stat_port_en;
+ u32 ptype;
+ u32 soft_idle;
+ u32 thru_rate;
+ u32 gap_thresh;
+ u32 tx_start_wds;
+ u32 flow_control;
+ u32 cppi_thresh;
+};
+
+struct xgbe_port_regs {
+ u32 blk_cnt;
+ u32 port_vlan;
+ u32 tx_pri_map;
+ u32 sa_lo;
+ u32 sa_hi;
+ u32 ts_ctl;
+ u32 ts_seq_ltype;
+ u32 ts_vlan;
+ u32 ts_ctl_ltype2;
+ u32 ts_ctl2;
+ u32 control;
+};
+
+struct xgbe_host_port_regs {
+ u32 blk_cnt;
+ u32 port_vlan;
+ u32 tx_pri_map;
+ u32 src_id;
+ u32 rx_pri_map;
+ u32 rx_maxlen;
+};
+
+struct xgbe_emac_regs {
+ u32 id_ver;
+ u32 mac_control;
+ u32 mac_status;
+ u32 soft_reset;
+ u32 rx_maxlen;
+ u32 __reserved_0;
+ u32 rx_pause;
+ u32 tx_pause;
+ u32 em_control;
+ u32 __reserved_1;
+ u32 tx_gap;
+ u32 rsvd[4];
+};
+
+struct xgbe_host_hw_stats {
+ u32 rx_good_frames;
+ u32 rx_broadcast_frames;
+ u32 rx_multicast_frames;
+ u32 __rsvd_0[3];
+ u32 rx_oversized_frames;
+ u32 __rsvd_1;
+ u32 rx_undersized_frames;
+ u32 __rsvd_2;
+ u32 overrun_type4;
+ u32 overrun_type5;
+ u32 rx_bytes;
+ u32 tx_good_frames;
+ u32 tx_broadcast_frames;
+ u32 tx_multicast_frames;
+ u32 __rsvd_3[9];
+ u32 tx_bytes;
+ u32 tx_64byte_frames;
+ u32 tx_65_to_127byte_frames;
+ u32 tx_128_to_255byte_frames;
+ u32 tx_256_to_511byte_frames;
+ u32 tx_512_to_1023byte_frames;
+ u32 tx_1024byte_frames;
+ u32 net_bytes;
+ u32 rx_sof_overruns;
+ u32 rx_mof_overruns;
+ u32 rx_dma_overruns;
+};
+
+struct xgbe_hw_stats {
+ u32 rx_good_frames;
+ u32 rx_broadcast_frames;
+ u32 rx_multicast_frames;
+ u32 rx_pause_frames;
+ u32 rx_crc_errors;
+ u32 rx_align_code_errors;
+ u32 rx_oversized_frames;
+ u32 rx_jabber_frames;
+ u32 rx_undersized_frames;
+ u32 rx_fragments;
+ u32 overrun_type4;
+ u32 overrun_type5;
+ u32 rx_bytes;
+ u32 tx_good_frames;
+ u32 tx_broadcast_frames;
+ u32 tx_multicast_frames;
+ u32 tx_pause_frames;
+ u32 tx_deferred_frames;
+ u32 tx_collision_frames;
+ u32 tx_single_coll_frames;
+ u32 tx_mult_coll_frames;
+ u32 tx_excessive_collisions;
+ u32 tx_late_collisions;
+ u32 tx_underrun;
+ u32 tx_carrier_sense_errors;
+ u32 tx_bytes;
+ u32 tx_64byte_frames;
+ u32 tx_65_to_127byte_frames;
+ u32 tx_128_to_255byte_frames;
+ u32 tx_256_to_511byte_frames;
+ u32 tx_512_to_1023byte_frames;
+ u32 tx_1024byte_frames;
+ u32 net_bytes;
+ u32 rx_sof_overruns;
+ u32 rx_mof_overruns;
+ u32 rx_dma_overruns;
+};
+
+#define XGBE10_NUM_STAT_ENTRIES (sizeof(struct xgbe_hw_stats)/sizeof(u32))
+
+struct gbe_ss_regs {
+ u32 id_ver;
+ u32 synce_count;
+ u32 synce_mux;
+};
+
+struct gbe_ss_regs_ofs {
+ u16 id_ver;
+ u16 control;
+};
+
+struct gbe_switch_regs {
+ u32 id_ver;
+ u32 control;
+ u32 soft_reset;
+ u32 stat_port_en;
+ u32 ptype;
+ u32 soft_idle;
+ u32 thru_rate;
+ u32 gap_thresh;
+ u32 tx_start_wds;
+ u32 flow_control;
+};
+
+struct gbe_switch_regs_ofs {
+ u16 id_ver;
+ u16 control;
+ u16 soft_reset;
+ u16 emcontrol;
+ u16 stat_port_en;
+ u16 ptype;
+ u16 flow_control;
+};
+
+struct gbe_port_regs {
+ u32 max_blks;
+ u32 blk_cnt;
+ u32 port_vlan;
+ u32 tx_pri_map;
+ u32 sa_lo;
+ u32 sa_hi;
+ u32 ts_ctl;
+ u32 ts_seq_ltype;
+ u32 ts_vlan;
+ u32 ts_ctl_ltype2;
+ u32 ts_ctl2;
+};
+
+struct gbe_port_regs_ofs {
+ u16 port_vlan;
+ u16 tx_pri_map;
+ u16 sa_lo;
+ u16 sa_hi;
+ u16 ts_ctl;
+ u16 ts_seq_ltype;
+ u16 ts_vlan;
+ u16 ts_ctl_ltype2;
+ u16 ts_ctl2;
+};
+
+struct gbe_host_port_regs {
+ u32 src_id;
+ u32 port_vlan;
+ u32 rx_pri_map;
+ u32 rx_maxlen;
+};
+
+struct gbe_host_port_regs_ofs {
+ u16 port_vlan;
+ u16 tx_pri_map;
+ u16 rx_maxlen;
+};
+
+struct gbe_emac_regs {
+ u32 id_ver;
+ u32 mac_control;
+ u32 mac_status;
+ u32 soft_reset;
+ u32 rx_maxlen;
+ u32 __reserved_0;
+ u32 rx_pause;
+ u32 tx_pause;
+ u32 __reserved_1;
+ u32 rx_pri_map;
+ u32 rsvd[6];
+};
+
+struct gbe_emac_regs_ofs {
+ u16 mac_control;
+ u16 soft_reset;
+ u16 rx_maxlen;
+};
+
+struct gbe_hw_stats {
+ u32 rx_good_frames;
+ u32 rx_broadcast_frames;
+ u32 rx_multicast_frames;
+ u32 rx_pause_frames;
+ u32 rx_crc_errors;
+ u32 rx_align_code_errors;
+ u32 rx_oversized_frames;
+ u32 rx_jabber_frames;
+ u32 rx_undersized_frames;
+ u32 rx_fragments;
+ u32 __pad_0[2];
+ u32 rx_bytes;
+ u32 tx_good_frames;
+ u32 tx_broadcast_frames;
+ u32 tx_multicast_frames;
+ u32 tx_pause_frames;
+ u32 tx_deferred_frames;
+ u32 tx_collision_frames;
+ u32 tx_single_coll_frames;
+ u32 tx_mult_coll_frames;
+ u32 tx_excessive_collisions;
+ u32 tx_late_collisions;
+ u32 tx_underrun;
+ u32 tx_carrier_sense_errors;
+ u32 tx_bytes;
+ u32 tx_64byte_frames;
+ u32 tx_65_to_127byte_frames;
+ u32 tx_128_to_255byte_frames;
+ u32 tx_256_to_511byte_frames;
+ u32 tx_512_to_1023byte_frames;
+ u32 tx_1024byte_frames;
+ u32 net_bytes;
+ u32 rx_sof_overruns;
+ u32 rx_mof_overruns;
+ u32 rx_dma_overruns;
+};
+
+#define GBE13_NUM_HW_STAT_ENTRIES (sizeof(struct gbe_hw_stats)/sizeof(u32))
+#define GBE13_NUM_HW_STATS_MOD 2
+#define XGBE10_NUM_HW_STATS_MOD 3
+#define GBE_MAX_HW_STAT_MODS 3
+#define GBE_HW_STATS_REG_MAP_SZ 0x100
+
+struct gbe_slave {
+ void __iomem *port_regs;
+ void __iomem *emac_regs;
+ struct gbe_port_regs_ofs port_regs_ofs;
+ struct gbe_emac_regs_ofs emac_regs_ofs;
+ int slave_num; /* 0 based logical number */
+ int port_num; /* actual port number */
+ atomic_t link_state;
+ bool open;
+ struct phy_device *phy;
+ u32 link_interface;
+ u32 mac_control;
+ u8 phy_port_t;
+ struct device_node *phy_node;
+ struct list_head slave_list;
+};
+
+struct gbe_priv {
+ struct device *dev;
+ struct netcp_device *netcp_device;
+ struct timer_list timer;
+ u32 num_slaves;
+ u32 ale_entries;
+ u32 ale_ports;
+ bool enable_ale;
+ struct netcp_tx_pipe tx_pipe;
+
+ int host_port;
+ u32 rx_packet_max;
+ u32 ss_version;
+
+ void __iomem *ss_regs;
+ void __iomem *switch_regs;
+ void __iomem *host_port_regs;
+ void __iomem *ale_reg;
+ void __iomem *sgmii_port_regs;
+ void __iomem *sgmii_port34_regs;
+ void __iomem *xgbe_serdes_regs;
+ void __iomem *hw_stats_regs[GBE_MAX_HW_STAT_MODS];
+
+ struct gbe_ss_regs_ofs ss_regs_ofs;
+ struct gbe_switch_regs_ofs switch_regs_ofs;
+ struct gbe_host_port_regs_ofs host_port_regs_ofs;
+
+ struct cpsw_ale *ale;
+ unsigned int tx_queue_id;
+ const char *dma_chan_name;
+
+ struct list_head gbe_intf_head;
+ struct list_head secondary_slaves;
+ struct net_device *dummy_ndev;
+
+ u64 *hw_stats;
+ const struct netcp_ethtool_stat *et_stats;
+ int num_et_stats;
+ /* Lock for updating the hwstats */
+ spinlock_t hw_stats_lock;
+};
+
+struct gbe_intf {
+ struct net_device *ndev;
+ struct device *dev;
+ struct gbe_priv *gbe_dev;
+ struct netcp_tx_pipe tx_pipe;
+ struct gbe_slave *slave;
+ struct list_head gbe_intf_list;
+ unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
+};
+
+static struct netcp_module gbe_module;
+static struct netcp_module xgbe_module;
+
+/* Statistic management */
+struct netcp_ethtool_stat {
+ char desc[ETH_GSTRING_LEN];
+ int type;
+ u32 size;
+ int offset;
+};
+
+#define GBE_STATSA_INFO(field) "GBE_A:"#field, GBE_STATSA_MODULE,\
+ FIELD_SIZEOF(struct gbe_hw_stats, field), \
+ offsetof(struct gbe_hw_stats, field)
+
+#define GBE_STATSB_INFO(field) "GBE_B:"#field, GBE_STATSB_MODULE,\
+ FIELD_SIZEOF(struct gbe_hw_stats, field), \
+ offsetof(struct gbe_hw_stats, field)
+
+#define GBE_STATSC_INFO(field) "GBE_C:"#field, GBE_STATSC_MODULE,\
+ FIELD_SIZEOF(struct gbe_hw_stats, field), \
+ offsetof(struct gbe_hw_stats, field)
+
+#define GBE_STATSD_INFO(field) "GBE_D:"#field, GBE_STATSD_MODULE,\
+ FIELD_SIZEOF(struct gbe_hw_stats, field), \
+ offsetof(struct gbe_hw_stats, field)
+
+static const struct netcp_ethtool_stat gbe13_et_stats[] = {
+ /* GBE module A */
+ {GBE_STATSA_INFO(rx_good_frames)},
+ {GBE_STATSA_INFO(rx_broadcast_frames)},
+ {GBE_STATSA_INFO(rx_multicast_frames)},
+ {GBE_STATSA_INFO(rx_pause_frames)},
+ {GBE_STATSA_INFO(rx_crc_errors)},
+ {GBE_STATSA_INFO(rx_align_code_errors)},
+ {GBE_STATSA_INFO(rx_oversized_frames)},
+ {GBE_STATSA_INFO(rx_jabber_frames)},
+ {GBE_STATSA_INFO(rx_undersized_frames)},
+ {GBE_STATSA_INFO(rx_fragments)},
+ {GBE_STATSA_INFO(rx_bytes)},
+ {GBE_STATSA_INFO(tx_good_frames)},
+ {GBE_STATSA_INFO(tx_broadcast_frames)},
+ {GBE_STATSA_INFO(tx_multicast_frames)},
+ {GBE_STATSA_INFO(tx_pause_frames)},
+ {GBE_STATSA_INFO(tx_deferred_frames)},
+ {GBE_STATSA_INFO(tx_collision_frames)},
+ {GBE_STATSA_INFO(tx_single_coll_frames)},
+ {GBE_STATSA_INFO(tx_mult_coll_frames)},
+ {GBE_STATSA_INFO(tx_excessive_collisions)},
+ {GBE_STATSA_INFO(tx_late_collisions)},
+ {GBE_STATSA_INFO(tx_underrun)},
+ {GBE_STATSA_INFO(tx_carrier_sense_errors)},
+ {GBE_STATSA_INFO(tx_bytes)},
+ {GBE_STATSA_INFO(tx_64byte_frames)},
+ {GBE_STATSA_INFO(tx_65_to_127byte_frames)},
+ {GBE_STATSA_INFO(tx_128_to_255byte_frames)},
+ {GBE_STATSA_INFO(tx_256_to_511byte_frames)},
+ {GBE_STATSA_INFO(tx_512_to_1023byte_frames)},
+ {GBE_STATSA_INFO(tx_1024byte_frames)},
+ {GBE_STATSA_INFO(net_bytes)},
+ {GBE_STATSA_INFO(rx_sof_overruns)},
+ {GBE_STATSA_INFO(rx_mof_overruns)},
+ {GBE_STATSA_INFO(rx_dma_overruns)},
+ /* GBE module B */
+ {GBE_STATSB_INFO(rx_good_frames)},
+ {GBE_STATSB_INFO(rx_broadcast_frames)},
+ {GBE_STATSB_INFO(rx_multicast_frames)},
+ {GBE_STATSB_INFO(rx_pause_frames)},
+ {GBE_STATSB_INFO(rx_crc_errors)},
+ {GBE_STATSB_INFO(rx_align_code_errors)},
+ {GBE_STATSB_INFO(rx_oversized_frames)},
+ {GBE_STATSB_INFO(rx_jabber_frames)},
+ {GBE_STATSB_INFO(rx_undersized_frames)},
+ {GBE_STATSB_INFO(rx_fragments)},
+ {GBE_STATSB_INFO(rx_bytes)},
+ {GBE_STATSB_INFO(tx_good_frames)},
+ {GBE_STATSB_INFO(tx_broadcast_frames)},
+ {GBE_STATSB_INFO(tx_multicast_frames)},
+ {GBE_STATSB_INFO(tx_pause_frames)},
+ {GBE_STATSB_INFO(tx_deferred_frames)},
+ {GBE_STATSB_INFO(tx_collision_frames)},
+ {GBE_STATSB_INFO(tx_single_coll_frames)},
+ {GBE_STATSB_INFO(tx_mult_coll_frames)},
+ {GBE_STATSB_INFO(tx_excessive_collisions)},
+ {GBE_STATSB_INFO(tx_late_collisions)},
+ {GBE_STATSB_INFO(tx_underrun)},
+ {GBE_STATSB_INFO(tx_carrier_sense_errors)},
+ {GBE_STATSB_INFO(tx_bytes)},
+ {GBE_STATSB_INFO(tx_64byte_frames)},
+ {GBE_STATSB_INFO(tx_65_to_127byte_frames)},
+ {GBE_STATSB_INFO(tx_128_to_255byte_frames)},
+ {GBE_STATSB_INFO(tx_256_to_511byte_frames)},
+ {GBE_STATSB_INFO(tx_512_to_1023byte_frames)},
+ {GBE_STATSB_INFO(tx_1024byte_frames)},
+ {GBE_STATSB_INFO(net_bytes)},
+ {GBE_STATSB_INFO(rx_sof_overruns)},
+ {GBE_STATSB_INFO(rx_mof_overruns)},
+ {GBE_STATSB_INFO(rx_dma_overruns)},
+ /* GBE module C */
+ {GBE_STATSC_INFO(rx_good_frames)},
+ {GBE_STATSC_INFO(rx_broadcast_frames)},
+ {GBE_STATSC_INFO(rx_multicast_frames)},
+ {GBE_STATSC_INFO(rx_pause_frames)},
+ {GBE_STATSC_INFO(rx_crc_errors)},
+ {GBE_STATSC_INFO(rx_align_code_errors)},
+ {GBE_STATSC_INFO(rx_oversized_frames)},
+ {GBE_STATSC_INFO(rx_jabber_frames)},
+ {GBE_STATSC_INFO(rx_undersized_frames)},
+ {GBE_STATSC_INFO(rx_fragments)},
+ {GBE_STATSC_INFO(rx_bytes)},
+ {GBE_STATSC_INFO(tx_good_frames)},
+ {GBE_STATSC_INFO(tx_broadcast_frames)},
+ {GBE_STATSC_INFO(tx_multicast_frames)},
+ {GBE_STATSC_INFO(tx_pause_frames)},
+ {GBE_STATSC_INFO(tx_deferred_frames)},
+ {GBE_STATSC_INFO(tx_collision_frames)},
+ {GBE_STATSC_INFO(tx_single_coll_frames)},
+ {GBE_STATSC_INFO(tx_mult_coll_frames)},
+ {GBE_STATSC_INFO(tx_excessive_collisions)},
+ {GBE_STATSC_INFO(tx_late_collisions)},
+ {GBE_STATSC_INFO(tx_underrun)},
+ {GBE_STATSC_INFO(tx_carrier_sense_errors)},
+ {GBE_STATSC_INFO(tx_bytes)},
+ {GBE_STATSC_INFO(tx_64byte_frames)},
+ {GBE_STATSC_INFO(tx_65_to_127byte_frames)},
+ {GBE_STATSC_INFO(tx_128_to_255byte_frames)},
+ {GBE_STATSC_INFO(tx_256_to_511byte_frames)},
+ {GBE_STATSC_INFO(tx_512_to_1023byte_frames)},
+ {GBE_STATSC_INFO(tx_1024byte_frames)},
+ {GBE_STATSC_INFO(net_bytes)},
+ {GBE_STATSC_INFO(rx_sof_overruns)},
+ {GBE_STATSC_INFO(rx_mof_overruns)},
+ {GBE_STATSC_INFO(rx_dma_overruns)},
+ /* GBE module D */
+ {GBE_STATSD_INFO(rx_good_frames)},
+ {GBE_STATSD_INFO(rx_broadcast_frames)},
+ {GBE_STATSD_INFO(rx_multicast_frames)},
+ {GBE_STATSD_INFO(rx_pause_frames)},
+ {GBE_STATSD_INFO(rx_crc_errors)},
+ {GBE_STATSD_INFO(rx_align_code_errors)},
+ {GBE_STATSD_INFO(rx_oversized_frames)},
+ {GBE_STATSD_INFO(rx_jabber_frames)},
+ {GBE_STATSD_INFO(rx_undersized_frames)},
+ {GBE_STATSD_INFO(rx_fragments)},
+ {GBE_STATSD_INFO(rx_bytes)},
+ {GBE_STATSD_INFO(tx_good_frames)},
+ {GBE_STATSD_INFO(tx_broadcast_frames)},
+ {GBE_STATSD_INFO(tx_multicast_frames)},
+ {GBE_STATSD_INFO(tx_pause_frames)},
+ {GBE_STATSD_INFO(tx_deferred_frames)},
+ {GBE_STATSD_INFO(tx_collision_frames)},
+ {GBE_STATSD_INFO(tx_single_coll_frames)},
+ {GBE_STATSD_INFO(tx_mult_coll_frames)},
+ {GBE_STATSD_INFO(tx_excessive_collisions)},
+ {GBE_STATSD_INFO(tx_late_collisions)},
+ {GBE_STATSD_INFO(tx_underrun)},
+ {GBE_STATSD_INFO(tx_carrier_sense_errors)},
+ {GBE_STATSD_INFO(tx_bytes)},
+ {GBE_STATSD_INFO(tx_64byte_frames)},
+ {GBE_STATSD_INFO(tx_65_to_127byte_frames)},
+ {GBE_STATSD_INFO(tx_128_to_255byte_frames)},
+ {GBE_STATSD_INFO(tx_256_to_511byte_frames)},
+ {GBE_STATSD_INFO(tx_512_to_1023byte_frames)},
+ {GBE_STATSD_INFO(tx_1024byte_frames)},
+ {GBE_STATSD_INFO(net_bytes)},
+ {GBE_STATSD_INFO(rx_sof_overruns)},
+ {GBE_STATSD_INFO(rx_mof_overruns)},
+ {GBE_STATSD_INFO(rx_dma_overruns)},
+};
+
+#define XGBE_STATS0_INFO(field) "GBE_0:"#field, XGBE_STATS0_MODULE, \
+ FIELD_SIZEOF(struct xgbe_hw_stats, field), \
+ offsetof(struct xgbe_hw_stats, field)
+
+#define XGBE_STATS1_INFO(field) "GBE_1:"#field, XGBE_STATS1_MODULE, \
+ FIELD_SIZEOF(struct xgbe_hw_stats, field), \
+ offsetof(struct xgbe_hw_stats, field)
+
+#define XGBE_STATS2_INFO(field) "GBE_2:"#field, XGBE_STATS2_MODULE, \
+ FIELD_SIZEOF(struct xgbe_hw_stats, field), \
+ offsetof(struct xgbe_hw_stats, field)
+
+static const struct netcp_ethtool_stat xgbe10_et_stats[] = {
+ /* GBE module 0 */
+ {XGBE_STATS0_INFO(rx_good_frames)},
+ {XGBE_STATS0_INFO(rx_broadcast_frames)},
+ {XGBE_STATS0_INFO(rx_multicast_frames)},
+ {XGBE_STATS0_INFO(rx_oversized_frames)},
+ {XGBE_STATS0_INFO(rx_undersized_frames)},
+ {XGBE_STATS0_INFO(overrun_type4)},
+ {XGBE_STATS0_INFO(overrun_type5)},
+ {XGBE_STATS0_INFO(rx_bytes)},
+ {XGBE_STATS0_INFO(tx_good_frames)},
+ {XGBE_STATS0_INFO(tx_broadcast_frames)},
+ {XGBE_STATS0_INFO(tx_multicast_frames)},
+ {XGBE_STATS0_INFO(tx_bytes)},
+ {XGBE_STATS0_INFO(tx_64byte_frames)},
+ {XGBE_STATS0_INFO(tx_65_to_127byte_frames)},
+ {XGBE_STATS0_INFO(tx_128_to_255byte_frames)},
+ {XGBE_STATS0_INFO(tx_256_to_511byte_frames)},
+ {XGBE_STATS0_INFO(tx_512_to_1023byte_frames)},
+ {XGBE_STATS0_INFO(tx_1024byte_frames)},
+ {XGBE_STATS0_INFO(net_bytes)},
+ {XGBE_STATS0_INFO(rx_sof_overruns)},
+ {XGBE_STATS0_INFO(rx_mof_overruns)},
+ {XGBE_STATS0_INFO(rx_dma_overruns)},
+ /* XGBE module 1 */
+ {XGBE_STATS1_INFO(rx_good_frames)},
+ {XGBE_STATS1_INFO(rx_broadcast_frames)},
+ {XGBE_STATS1_INFO(rx_multicast_frames)},
+ {XGBE_STATS1_INFO(rx_pause_frames)},
+ {XGBE_STATS1_INFO(rx_crc_errors)},
+ {XGBE_STATS1_INFO(rx_align_code_errors)},
+ {XGBE_STATS1_INFO(rx_oversized_frames)},
+ {XGBE_STATS1_INFO(rx_jabber_frames)},
+ {XGBE_STATS1_INFO(rx_undersized_frames)},
+ {XGBE_STATS1_INFO(rx_fragments)},
+ {XGBE_STATS1_INFO(overrun_type4)},
+ {XGBE_STATS1_INFO(overrun_type5)},
+ {XGBE_STATS1_INFO(rx_bytes)},
+ {XGBE_STATS1_INFO(tx_good_frames)},
+ {XGBE_STATS1_INFO(tx_broadcast_frames)},
+ {XGBE_STATS1_INFO(tx_multicast_frames)},
+ {XGBE_STATS1_INFO(tx_pause_frames)},
+ {XGBE_STATS1_INFO(tx_deferred_frames)},
+ {XGBE_STATS1_INFO(tx_collision_frames)},
+ {XGBE_STATS1_INFO(tx_single_coll_frames)},
+ {XGBE_STATS1_INFO(tx_mult_coll_frames)},
+ {XGBE_STATS1_INFO(tx_excessive_collisions)},
+ {XGBE_STATS1_INFO(tx_late_collisions)},
+ {XGBE_STATS1_INFO(tx_underrun)},
+ {XGBE_STATS1_INFO(tx_carrier_sense_errors)},
+ {XGBE_STATS1_INFO(tx_bytes)},
+ {XGBE_STATS1_INFO(tx_64byte_frames)},
+ {XGBE_STATS1_INFO(tx_65_to_127byte_frames)},
+ {XGBE_STATS1_INFO(tx_128_to_255byte_frames)},
+ {XGBE_STATS1_INFO(tx_256_to_511byte_frames)},
+ {XGBE_STATS1_INFO(tx_512_to_1023byte_frames)},
+ {XGBE_STATS1_INFO(tx_1024byte_frames)},
+ {XGBE_STATS1_INFO(net_bytes)},
+ {XGBE_STATS1_INFO(rx_sof_overruns)},
+ {XGBE_STATS1_INFO(rx_mof_overruns)},
+ {XGBE_STATS1_INFO(rx_dma_overruns)},
+ /* XGBE module 2 */
+ {XGBE_STATS2_INFO(rx_good_frames)},
+ {XGBE_STATS2_INFO(rx_broadcast_frames)},
+ {XGBE_STATS2_INFO(rx_multicast_frames)},
+ {XGBE_STATS2_INFO(rx_pause_frames)},
+ {XGBE_STATS2_INFO(rx_crc_errors)},
+ {XGBE_STATS2_INFO(rx_align_code_errors)},
+ {XGBE_STATS2_INFO(rx_oversized_frames)},
+ {XGBE_STATS2_INFO(rx_jabber_frames)},
+ {XGBE_STATS2_INFO(rx_undersized_frames)},
+ {XGBE_STATS2_INFO(rx_fragments)},
+ {XGBE_STATS2_INFO(overrun_type4)},
+ {XGBE_STATS2_INFO(overrun_type5)},
+ {XGBE_STATS2_INFO(rx_bytes)},
+ {XGBE_STATS2_INFO(tx_good_frames)},
+ {XGBE_STATS2_INFO(tx_broadcast_frames)},
+ {XGBE_STATS2_INFO(tx_multicast_frames)},
+ {XGBE_STATS2_INFO(tx_pause_frames)},
+ {XGBE_STATS2_INFO(tx_deferred_frames)},
+ {XGBE_STATS2_INFO(tx_collision_frames)},
+ {XGBE_STATS2_INFO(tx_single_coll_frames)},
+ {XGBE_STATS2_INFO(tx_mult_coll_frames)},
+ {XGBE_STATS2_INFO(tx_excessive_collisions)},
+ {XGBE_STATS2_INFO(tx_late_collisions)},
+ {XGBE_STATS2_INFO(tx_underrun)},
+ {XGBE_STATS2_INFO(tx_carrier_sense_errors)},
+ {XGBE_STATS2_INFO(tx_bytes)},
+ {XGBE_STATS2_INFO(tx_64byte_frames)},
+ {XGBE_STATS2_INFO(tx_65_to_127byte_frames)},
+ {XGBE_STATS2_INFO(tx_128_to_255byte_frames)},
+ {XGBE_STATS2_INFO(tx_256_to_511byte_frames)},
+ {XGBE_STATS2_INFO(tx_512_to_1023byte_frames)},
+ {XGBE_STATS2_INFO(tx_1024byte_frames)},
+ {XGBE_STATS2_INFO(net_bytes)},
+ {XGBE_STATS2_INFO(rx_sof_overruns)},
+ {XGBE_STATS2_INFO(rx_mof_overruns)},
+ {XGBE_STATS2_INFO(rx_dma_overruns)},
+};
+
+#define for_each_intf(i, priv) \
+ list_for_each_entry((i), &(priv)->gbe_intf_head, gbe_intf_list)
+
+#define for_each_sec_slave(slave, priv) \
+ list_for_each_entry((slave), &(priv)->secondary_slaves, slave_list)
+
+#define first_sec_slave(priv) \
+ list_first_entry(&priv->secondary_slaves, \
+ struct gbe_slave, slave_list)
+
+static void keystone_get_drvinfo(struct net_device *ndev,
+ struct ethtool_drvinfo *info)
+{
+ strncpy(info->driver, NETCP_DRIVER_NAME, sizeof(info->driver));
+ strncpy(info->version, NETCP_DRIVER_VERSION, sizeof(info->version));
+}
+
+static u32 keystone_get_msglevel(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+
+ return netcp->msg_enable;
+}
+
+static void keystone_set_msglevel(struct net_device *ndev, u32 value)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+
+ netcp->msg_enable = value;
+}
+
+static void keystone_get_stat_strings(struct net_device *ndev,
+ uint32_t stringset, uint8_t *data)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_intf *gbe_intf;
+ struct gbe_priv *gbe_dev;
+ int i;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return;
+ gbe_dev = gbe_intf->gbe_dev;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < gbe_dev->num_et_stats; i++) {
+ memcpy(data, gbe_dev->et_stats[i].desc,
+ ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
+ }
+ break;
+ case ETH_SS_TEST:
+ break;
+ }
+}
+
+static int keystone_get_sset_count(struct net_device *ndev, int stringset)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_intf *gbe_intf;
+ struct gbe_priv *gbe_dev;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return -EINVAL;
+ gbe_dev = gbe_intf->gbe_dev;
+
+ switch (stringset) {
+ case ETH_SS_TEST:
+ return 0;
+ case ETH_SS_STATS:
+ return gbe_dev->num_et_stats;
+ default:
+ return -EINVAL;
+ }
+}
+
+static void gbe_update_stats(struct gbe_priv *gbe_dev, uint64_t *data)
+{
+ void __iomem *base = NULL;
+ u32 __iomem *p;
+ u32 tmp = 0;
+ int i;
+
+ for (i = 0; i < gbe_dev->num_et_stats; i++) {
+ base = gbe_dev->hw_stats_regs[gbe_dev->et_stats[i].type];
+ p = base + gbe_dev->et_stats[i].offset;
+ tmp = readl(p);
+ gbe_dev->hw_stats[i] = gbe_dev->hw_stats[i] + tmp;
+ if (data)
+ data[i] = gbe_dev->hw_stats[i];
+ /* write-to-decrement:
+ * new register value = old register value - write value
+ */
+ writel(tmp, p);
+ }
+}
+
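+/* On version 1.4 hardware only two of the four stats modules are visible
+ * at a time; GBE_STATS_CD_SEL in stat_port_en selects between the A/B and
+ * C/D pairs.
+ */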
+static void gbe_update_stats_ver14(struct gbe_priv *gbe_dev, uint64_t *data)
+{
+ void __iomem *gbe_statsa = gbe_dev->hw_stats_regs[0];
+ void __iomem *gbe_statsb = gbe_dev->hw_stats_regs[1];
+ u64 *hw_stats = &gbe_dev->hw_stats[0];
+ void __iomem *base = NULL;
+ u32 __iomem *p;
+ u32 tmp = 0, val, pair_size = (gbe_dev->num_et_stats / 2);
+ int i, j, pair;
+
+ for (pair = 0; pair < 2; pair++) {
+ val = readl(GBE_REG_ADDR(gbe_dev, switch_regs, stat_port_en));
+
+ if (pair == 0)
+ val &= ~GBE_STATS_CD_SEL;
+ else
+ val |= GBE_STATS_CD_SEL;
+
+ /* make the stat modules visible */
+ writel(val, GBE_REG_ADDR(gbe_dev, switch_regs, stat_port_en));
+
+ for (i = 0; i < pair_size; i++) {
+ j = pair * pair_size + i;
+ switch (gbe_dev->et_stats[j].type) {
+ case GBE_STATSA_MODULE:
+ case GBE_STATSC_MODULE:
+ base = gbe_statsa;
+ break;
+ case GBE_STATSB_MODULE:
+ case GBE_STATSD_MODULE:
+ base = gbe_statsb;
+ break;
+ }
+
+ p = base + gbe_dev->et_stats[j].offset;
+ tmp = readl(p);
+ hw_stats[j] += tmp;
+ if (data)
+ data[j] = hw_stats[j];
+ /* write-to-decrement:
+ * new register value = old register value - write value
+ */
+ writel(tmp, p);
+ }
+ }
+}
+
+static void keystone_get_ethtool_stats(struct net_device *ndev,
+ struct ethtool_stats *stats,
+ uint64_t *data)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_intf *gbe_intf;
+ struct gbe_priv *gbe_dev;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return;
+
+ gbe_dev = gbe_intf->gbe_dev;
+ spin_lock_bh(&gbe_dev->hw_stats_lock);
+ if (gbe_dev->ss_version == GBE_SS_VERSION_14)
+ gbe_update_stats_ver14(gbe_dev, data);
+ else
+ gbe_update_stats(gbe_dev, data);
+ spin_unlock_bh(&gbe_dev->hw_stats_lock);
+}
+
+static int keystone_get_settings(struct net_device *ndev,
+ struct ethtool_cmd *cmd)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct phy_device *phy = ndev->phydev;
+ struct gbe_intf *gbe_intf;
+ int ret;
+
+ if (!phy)
+ return -EINVAL;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return -EINVAL;
+
+ if (!gbe_intf->slave)
+ return -EINVAL;
+
+ ret = phy_ethtool_gset(phy, cmd);
+ if (!ret)
+ cmd->port = gbe_intf->slave->phy_port_t;
+
+ return ret;
+}
+
+static int keystone_set_settings(struct net_device *ndev,
+ struct ethtool_cmd *cmd)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct phy_device *phy = ndev->phydev;
+ struct gbe_intf *gbe_intf;
+ u32 features = cmd->advertising & cmd->supported;
+
+ if (!phy)
+ return -EINVAL;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return -EINVAL;
+
+ if (!gbe_intf->slave)
+ return -EINVAL;
+
+ if (cmd->port != gbe_intf->slave->phy_port_t) {
+ if ((cmd->port == PORT_TP) && !(features & ADVERTISED_TP))
+ return -EINVAL;
+
+ if ((cmd->port == PORT_AUI) && !(features & ADVERTISED_AUI))
+ return -EINVAL;
+
+ if ((cmd->port == PORT_BNC) && !(features & ADVERTISED_BNC))
+ return -EINVAL;
+
+ if ((cmd->port == PORT_MII) && !(features & ADVERTISED_MII))
+ return -EINVAL;
+
+ if ((cmd->port == PORT_FIBRE) && !(features & ADVERTISED_FIBRE))
+ return -EINVAL;
+ }
+
+ gbe_intf->slave->phy_port_t = cmd->port;
+ return phy_ethtool_sset(phy, cmd);
+}
+
+static const struct ethtool_ops keystone_ethtool_ops = {
+ .get_drvinfo = keystone_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_msglevel = keystone_get_msglevel,
+ .set_msglevel = keystone_set_msglevel,
+ .get_strings = keystone_get_stat_strings,
+ .get_sset_count = keystone_get_sset_count,
+ .get_ethtool_stats = keystone_get_ethtool_stats,
+ .get_settings = keystone_get_settings,
+ .set_settings = keystone_set_settings,
+};
+
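+/* Pack the 6-byte MAC address into the sa_hi/sa_lo register pair:
+ * bytes 0-3 in sa_hi, bytes 4-5 in sa_lo
+ */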
+#define mac_hi(mac) (((mac)[0] << 0) | ((mac)[1] << 8) | \
+ ((mac)[2] << 16) | ((mac)[3] << 24))
+#define mac_lo(mac) (((mac)[4] << 0) | ((mac)[5] << 8))
+
+static void gbe_set_slave_mac(struct gbe_slave *slave,
+ struct gbe_intf *gbe_intf)
+{
+ struct net_device *ndev = gbe_intf->ndev;
+
+ writel(mac_hi(ndev->dev_addr), GBE_REG_ADDR(slave, port_regs, sa_hi));
+ writel(mac_lo(ndev->dev_addr), GBE_REG_ADDR(slave, port_regs, sa_lo));
+}
+
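+/* Slave ports are numbered from 1 when the host occupies port 0 */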
+static int gbe_get_slave_port(struct gbe_priv *priv, u32 slave_num)
+{
+ if (priv->host_port == 0)
+ return slave_num + 1;
+
+ return slave_num;
+}
+
+static void netcp_ethss_link_state_action(struct gbe_priv *gbe_dev,
+ struct net_device *ndev,
+ struct gbe_slave *slave,
+ int up)
+{
+ struct phy_device *phy = slave->phy;
+ u32 mac_control = 0;
+
+ if (up) {
+ mac_control = slave->mac_control;
+ if (phy && (phy->speed == SPEED_1000)) {
+ mac_control |= MACSL_GIG_MODE;
+ mac_control &= ~MACSL_XGIG_MODE;
+ } else if (phy && (phy->speed == SPEED_10000)) {
+ mac_control |= MACSL_XGIG_MODE;
+ mac_control &= ~MACSL_GIG_MODE;
+ }
+
+ writel(mac_control, GBE_REG_ADDR(slave, emac_regs,
+ mac_control));
+
+ cpsw_ale_control_set(gbe_dev->ale, slave->port_num,
+ ALE_PORT_STATE,
+ ALE_PORT_STATE_FORWARD);
+
+ if (ndev && slave->open)
+ netif_carrier_on(ndev);
+ } else {
+ writel(mac_control, GBE_REG_ADDR(slave, emac_regs,
+ mac_control));
+ cpsw_ale_control_set(gbe_dev->ale, slave->port_num,
+ ALE_PORT_STATE,
+ ALE_PORT_STATE_DISABLE);
+ if (ndev)
+ netif_carrier_off(ndev);
+ }
+
+ if (phy)
+ phy_print_status(phy);
+}
+
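+/* A slave without an attached PHY is treated as always having link up */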
+static bool gbe_phy_link_status(struct gbe_slave *slave)
+{
+ return !slave->phy || slave->phy->link;
+}
+
+static void netcp_ethss_update_link_state(struct gbe_priv *gbe_dev,
+ struct gbe_slave *slave,
+ struct net_device *ndev)
+{
+ int sp = slave->slave_num;
+ int phy_link_state, sgmii_link_state = 1, link_state;
+
+ if (!slave->open)
+ return;
+
+ if (!SLAVE_LINK_IS_XGMII(slave))
+ sgmii_link_state = netcp_sgmii_get_port_link(SGMII_BASE(sp),
+ sp);
+ phy_link_state = gbe_phy_link_status(slave);
+ link_state = phy_link_state & sgmii_link_state;
+
+ if (atomic_xchg(&slave->link_state, link_state) != link_state)
+ netcp_ethss_link_state_action(gbe_dev, ndev, slave,
+ link_state);
+}
+
+static void xgbe_adjust_link(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_intf *gbe_intf;
+
+ gbe_intf = netcp_module_get_intf_data(&xgbe_module, netcp);
+ if (!gbe_intf)
+ return;
+
+ netcp_ethss_update_link_state(gbe_intf->gbe_dev, gbe_intf->slave,
+ ndev);
+}
+
+static void gbe_adjust_link(struct net_device *ndev)
+{
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_intf *gbe_intf;
+
+ gbe_intf = netcp_module_get_intf_data(&gbe_module, netcp);
+ if (!gbe_intf)
+ return;
+
+ netcp_ethss_update_link_state(gbe_intf->gbe_dev, gbe_intf->slave,
+ ndev);
+}
+
+static void gbe_adjust_link_sec_slaves(struct net_device *ndev)
+{
+ struct gbe_priv *gbe_dev = netdev_priv(ndev);
+ struct gbe_slave *slave;
+
+ for_each_sec_slave(slave, gbe_dev)
+ netcp_ethss_update_link_state(gbe_dev, slave, NULL);
+}
+
+/* Reset EMAC
+ * Soft reset is set and polled until clear, or until a timeout occurs
+ */
+static int gbe_port_reset(struct gbe_slave *slave)
+{
+ u32 i, v;
+
+ /* Set the soft reset bit */
+ writel(SOFT_RESET, GBE_REG_ADDR(slave, emac_regs, soft_reset));
+
+ /* Wait for the bit to clear */
+ for (i = 0; i < DEVICE_EMACSL_RESET_POLL_COUNT; i++) {
+ v = readl(GBE_REG_ADDR(slave, emac_regs, soft_reset));
+ if ((v & SOFT_RESET_MASK) != SOFT_RESET)
+ return 0;
+ }
+
+ /* Timeout on the reset */
+ return GMACSL_RET_WARN_RESET_INCOMPLETE;
+}
+
+/* Configure EMAC */
+static void gbe_port_config(struct gbe_priv *gbe_dev, struct gbe_slave *slave,
+ int max_rx_len)
+{
+ u32 xgmii_mode;
+
+ if (max_rx_len > NETCP_MAX_FRAME_SIZE)
+ max_rx_len = NETCP_MAX_FRAME_SIZE;
+
+ /* Enable correct MII mode at SS level */
+ if ((gbe_dev->ss_version == XGBE_SS_VERSION_10) &&
+ (slave->link_interface >= XGMII_LINK_MAC_PHY)) {
+ xgmii_mode = readl(GBE_REG_ADDR(gbe_dev, ss_regs, control));
+ xgmii_mode |= (1 << slave->slave_num);
+ writel(xgmii_mode, GBE_REG_ADDR(gbe_dev, ss_regs, control));
+ }
+
+ writel(max_rx_len, GBE_REG_ADDR(slave, emac_regs, rx_maxlen));
+ writel(slave->mac_control, GBE_REG_ADDR(slave, emac_regs, mac_control));
+}
+
+static void gbe_slave_stop(struct gbe_intf *intf)
+{
+ struct gbe_priv *gbe_dev = intf->gbe_dev;
+ struct gbe_slave *slave = intf->slave;
+
+ gbe_port_reset(slave);
+ /* Disable forwarding */
+ cpsw_ale_control_set(gbe_dev->ale, slave->port_num,
+ ALE_PORT_STATE, ALE_PORT_STATE_DISABLE);
+ cpsw_ale_del_mcast(gbe_dev->ale, intf->ndev->broadcast,
+ 1 << slave->port_num, 0, 0);
+
+ if (!slave->phy)
+ return;
+
+ phy_stop(slave->phy);
+ phy_disconnect(slave->phy);
+ slave->phy = NULL;
+}
+
+static void gbe_sgmii_config(struct gbe_priv *priv, struct gbe_slave *slave)
+{
+ void __iomem *sgmii_port_regs;
+
+ sgmii_port_regs = priv->sgmii_port_regs;
+ if ((priv->ss_version == GBE_SS_VERSION_14) && (slave->slave_num >= 2))
+ sgmii_port_regs = priv->sgmii_port34_regs;
+
+ if (!SLAVE_LINK_IS_XGMII(slave)) {
+ netcp_sgmii_reset(sgmii_port_regs, slave->slave_num);
+ netcp_sgmii_config(sgmii_port_regs, slave->slave_num,
+ slave->link_interface);
+ }
+}
+
+static int gbe_slave_open(struct gbe_intf *gbe_intf)
+{
+ struct gbe_priv *priv = gbe_intf->gbe_dev;
+ struct gbe_slave *slave = gbe_intf->slave;
+ phy_interface_t phy_mode;
+ bool has_phy = false;
+
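+ /* default to the 1G link handler; replaced with the 10G handler below for XGBE */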
+ void (*hndlr)(struct net_device *) = gbe_adjust_link;
+
+ gbe_sgmii_config(priv, slave);
+ gbe_port_reset(slave);
+ gbe_port_config(priv, slave, priv->rx_packet_max);
+ gbe_set_slave_mac(slave, gbe_intf);
+ /* enable forwarding */
+ cpsw_ale_control_set(priv->ale, slave->port_num,
+ ALE_PORT_STATE, ALE_PORT_STATE_FORWARD);
+ cpsw_ale_add_mcast(priv->ale, gbe_intf->ndev->broadcast,
+ 1 << slave->port_num, 0, 0, ALE_MCAST_FWD_2);
+
+ if (slave->link_interface == SGMII_LINK_MAC_PHY) {
+ has_phy = true;
+ phy_mode = PHY_INTERFACE_MODE_SGMII;
+ slave->phy_port_t = PORT_MII;
+ } else if (slave->link_interface == XGMII_LINK_MAC_PHY) {
+ has_phy = true;
+ phy_mode = PHY_INTERFACE_MODE_NA;
+ slave->phy_port_t = PORT_FIBRE;
+ }
+
+ if (has_phy) {
+ if (priv->ss_version == XGBE_SS_VERSION_10)
+ hndlr = xgbe_adjust_link;
+
+ slave->phy = of_phy_connect(gbe_intf->ndev,
+ slave->phy_node,
+ hndlr, 0,
+ phy_mode);
+ if (!slave->phy) {
+ dev_err(priv->dev, "phy not found on slave %d\n",
+ slave->slave_num);
+ return -ENODEV;
+ }
+ dev_dbg(priv->dev, "phy found: id is: 0x%s\n",
+ dev_name(&slave->phy->dev));
+ phy_start(slave->phy);
+ phy_read_status(slave->phy);
+ }
+ return 0;
+}
+
+static void gbe_init_host_port(struct gbe_priv *priv)
+{
+ int bypass_en = 1;
+ /* Max length register */
+ writel(NETCP_MAX_FRAME_SIZE, GBE_REG_ADDR(priv, host_port_regs,
+ rx_maxlen));
+
+ cpsw_ale_start(priv->ale);
+
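+ /* with the ALE in use, turn bypass off so the lookup engine switches packets */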
+ if (priv->enable_ale)
+ bypass_en = 0;
+
+ cpsw_ale_control_set(priv->ale, 0, ALE_BYPASS, bypass_en);
+
+ cpsw_ale_control_set(priv->ale, 0, ALE_NO_PORT_VLAN, 1);
+
+ cpsw_ale_control_set(priv->ale, priv->host_port,
+ ALE_PORT_STATE, ALE_PORT_STATE_FORWARD);
+
+ cpsw_ale_control_set(priv->ale, 0,
+ ALE_PORT_UNKNOWN_VLAN_MEMBER,
+ GBE_PORT_MASK(priv->ale_ports));
+
+ cpsw_ale_control_set(priv->ale, 0,
+ ALE_PORT_UNKNOWN_MCAST_FLOOD,
+ GBE_PORT_MASK(priv->ale_ports - 1));
+
+ cpsw_ale_control_set(priv->ale, 0,
+ ALE_PORT_UNKNOWN_REG_MCAST_FLOOD,
+ GBE_PORT_MASK(priv->ale_ports));
+
+ cpsw_ale_control_set(priv->ale, 0,
+ ALE_PORT_UNTAGGED_EGRESS,
+ GBE_PORT_MASK(priv->ale_ports));
+}
+
+static void gbe_add_mcast_addr(struct gbe_intf *gbe_intf, u8 *addr)
+{
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+ u16 vlan_id;
+
+ cpsw_ale_add_mcast(gbe_dev->ale, addr,
+ GBE_PORT_MASK(gbe_dev->ale_ports), 0, 0,
+ ALE_MCAST_FWD_2);
+ for_each_set_bit(vlan_id, gbe_intf->active_vlans, VLAN_N_VID) {
+ cpsw_ale_add_mcast(gbe_dev->ale, addr,
+ GBE_PORT_MASK(gbe_dev->ale_ports),
+ ALE_VLAN, vlan_id, ALE_MCAST_FWD_2);
+ }
+}
+
+static void gbe_add_ucast_addr(struct gbe_intf *gbe_intf, u8 *addr)
+{
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+ u16 vlan_id;
+
+ cpsw_ale_add_ucast(gbe_dev->ale, addr, gbe_dev->host_port, 0, 0);
+
+ for_each_set_bit(vlan_id, gbe_intf->active_vlans, VLAN_N_VID)
+ cpsw_ale_add_ucast(gbe_dev->ale, addr, gbe_dev->host_port,
+ ALE_VLAN, vlan_id);
+}
+
+static void gbe_del_mcast_addr(struct gbe_intf *gbe_intf, u8 *addr)
+{
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+ u16 vlan_id;
+
+ cpsw_ale_del_mcast(gbe_dev->ale, addr, 0, 0, 0);
+
+ for_each_set_bit(vlan_id, gbe_intf->active_vlans, VLAN_N_VID) {
+ cpsw_ale_del_mcast(gbe_dev->ale, addr, 0, ALE_VLAN, vlan_id);
+ }
+}
+
+static void gbe_del_ucast_addr(struct gbe_intf *gbe_intf, u8 *addr)
+{
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+ u16 vlan_id;
+
+ cpsw_ale_del_ucast(gbe_dev->ale, addr, gbe_dev->host_port, 0, 0);
+
+ for_each_set_bit(vlan_id, gbe_intf->active_vlans, VLAN_N_VID) {
+ cpsw_ale_del_ucast(gbe_dev->ale, addr, gbe_dev->host_port,
+ ALE_VLAN, vlan_id);
+ }
+}
+
+static int gbe_add_addr(void *intf_priv, struct netcp_addr *naddr)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+
+ dev_dbg(gbe_dev->dev, "ethss adding address %pM, type %d\n",
+ naddr->addr, naddr->type);
+
+ switch (naddr->type) {
+ case ADDR_MCAST:
+ case ADDR_BCAST:
+ gbe_add_mcast_addr(gbe_intf, naddr->addr);
+ break;
+ case ADDR_UCAST:
+ case ADDR_DEV:
+ gbe_add_ucast_addr(gbe_intf, naddr->addr);
+ break;
+ case ADDR_ANY:
+ /* nothing to do for promiscuous */
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int gbe_del_addr(void *intf_priv, struct netcp_addr *naddr)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+
+ dev_dbg(gbe_dev->dev, "ethss deleting address %pM, type %d\n",
+ naddr->addr, naddr->type);
+
+ switch (naddr->type) {
+ case ADDR_MCAST:
+ case ADDR_BCAST:
+ gbe_del_mcast_addr(gbe_intf, naddr->addr);
+ break;
+ case ADDR_UCAST:
+ case ADDR_DEV:
+ gbe_del_ucast_addr(gbe_intf, naddr->addr);
+ break;
+ case ADDR_ANY:
+ /* nothing to do for promiscuous */
+ default:
+ break;
+ }
+
+ return 0;
+}
+
+static int gbe_add_vid(void *intf_priv, int vid)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+
+ set_bit(vid, gbe_intf->active_vlans);
+
+ cpsw_ale_add_vlan(gbe_dev->ale, vid,
+ GBE_PORT_MASK(gbe_dev->ale_ports),
+ GBE_MASK_NO_PORTS,
+ GBE_PORT_MASK(gbe_dev->ale_ports),
+ GBE_PORT_MASK(gbe_dev->ale_ports - 1));
+
+ return 0;
+}
+
+static int gbe_del_vid(void *intf_priv, int vid)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+
+ cpsw_ale_del_vlan(gbe_dev->ale, vid, 0);
+ clear_bit(vid, gbe_intf->active_vlans);
+ return 0;
+}
+
+static int gbe_ioctl(void *intf_priv, struct ifreq *req, int cmd)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct phy_device *phy = gbe_intf->slave->phy;
+ int ret = -EOPNOTSUPP;
+
+ if (phy)
+ ret = phy_mii_ioctl(phy, req, cmd);
+
+ return ret;
+}
+
+static void netcp_ethss_timer(unsigned long arg)
+{
+ struct gbe_priv *gbe_dev = (struct gbe_priv *)arg;
+ struct gbe_intf *gbe_intf;
+ struct gbe_slave *slave;
+
+ /* Check & update SGMII link state of interfaces */
+ for_each_intf(gbe_intf, gbe_dev) {
+ if (!gbe_intf->slave->open)
+ continue;
+ netcp_ethss_update_link_state(gbe_dev, gbe_intf->slave,
+ gbe_intf->ndev);
+ }
+
+ /* Check & update SGMII link state of secondary ports */
+ for_each_sec_slave(slave, gbe_dev) {
+ netcp_ethss_update_link_state(gbe_dev, slave, NULL);
+ }
+
+ spin_lock_bh(&gbe_dev->hw_stats_lock);
+
+ if (gbe_dev->ss_version == GBE_SS_VERSION_14)
+ gbe_update_stats_ver14(gbe_dev, NULL);
+ else
+ gbe_update_stats(gbe_dev, NULL);
+
+ spin_unlock_bh(&gbe_dev->hw_stats_lock);
+
+ gbe_dev->timer.expires = jiffies + GBE_TIMER_INTERVAL;
+ add_timer(&gbe_dev->timer);
+}
+
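+/* tx hook: steer packets from this interface into its dedicated tx pipe */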
+static int gbe_tx_hook(int order, void *data, struct netcp_packet *p_info)
+{
+ struct gbe_intf *gbe_intf = data;
+
+ p_info->tx_pipe = &gbe_intf->tx_pipe;
+ return 0;
+}
+
+static int gbe_open(void *intf_priv, struct net_device *ndev)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct gbe_priv *gbe_dev = gbe_intf->gbe_dev;
+ struct netcp_intf *netcp = netdev_priv(ndev);
+ struct gbe_slave *slave = gbe_intf->slave;
+ int port_num = slave->port_num;
+ u32 reg;
+ int ret;
+
+ reg = readl(GBE_REG_ADDR(gbe_dev, switch_regs, id_ver));
+ dev_dbg(gbe_dev->dev, "initializing gbe version %d.%d (%d) GBE identification value 0x%x\n",
+ GBE_MAJOR_VERSION(reg), GBE_MINOR_VERSION(reg),
+ GBE_RTL_VERSION(reg), GBE_IDENT(reg));
+
+ if (gbe_dev->enable_ale)
+ gbe_intf->tx_pipe.dma_psflags = 0;
+ else
+ gbe_intf->tx_pipe.dma_psflags = port_num;
+
+ dev_dbg(gbe_dev->dev, "opened TX channel %s: %p with psflags %d\n",
+ gbe_intf->tx_pipe.dma_chan_name,
+ gbe_intf->tx_pipe.dma_channel,
+ gbe_intf->tx_pipe.dma_psflags);
+
+ gbe_slave_stop(gbe_intf);
+
+ /* disable priority elevation and enable statistics on all ports */
+ writel(0, GBE_REG_ADDR(gbe_dev, switch_regs, ptype));
+
+ /* Control register */
+ writel(GBE_CTL_P0_ENABLE, GBE_REG_ADDR(gbe_dev, switch_regs, control));
+
+ /* All statistics enabled and STAT AB visible by default */
+ writel(GBE_REG_VAL_STAT_ENABLE_ALL, GBE_REG_ADDR(gbe_dev, switch_regs,
+ stat_port_en));
+
+ ret = gbe_slave_open(gbe_intf);
+ if (ret)
+ goto fail;
+
+ netcp_register_txhook(netcp, GBE_TXHOOK_ORDER, gbe_tx_hook,
+ gbe_intf);
+
+ slave->open = true;
+ netcp_ethss_update_link_state(gbe_dev, slave, ndev);
+ return 0;
+
+fail:
+ gbe_slave_stop(gbe_intf);
+ return ret;
+}
+
+static int gbe_close(void *intf_priv, struct net_device *ndev)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+ struct netcp_intf *netcp = netdev_priv(ndev);
+
+ gbe_slave_stop(gbe_intf);
+ netcp_unregister_txhook(netcp, GBE_TXHOOK_ORDER, gbe_tx_hook,
+ gbe_intf);
+
+ gbe_intf->slave->open = false;
+ atomic_set(&gbe_intf->slave->link_state, NETCP_LINK_STATE_INVALID);
+ return 0;
+}
+
+static int init_slave(struct gbe_priv *gbe_dev, struct gbe_slave *slave,
+ struct device_node *node)
+{
+ int port_reg_num;
+ u32 port_reg_ofs, emac_reg_ofs;
+
+ if (of_property_read_u32(node, "slave-port", &slave->slave_num)) {
+ dev_err(gbe_dev->dev, "missing slave-port parameter\n");
+ return -EINVAL;
+ }
+
+ if (of_property_read_u32(node, "link-interface",
+ &slave->link_interface)) {
+ dev_warn(gbe_dev->dev,
+ "missing link-interface value defaulting to 1G mac-phy link\n");
+ slave->link_interface = SGMII_LINK_MAC_PHY;
+ }
+
+ slave->open = false;
+ slave->phy_node = of_parse_phandle(node, "phy-handle", 0);
+ slave->port_num = gbe_get_slave_port(gbe_dev, slave->slave_num);
+
+ if (slave->link_interface >= XGMII_LINK_MAC_PHY)
+ slave->mac_control = GBE_DEF_10G_MAC_CONTROL;
+ else
+ slave->mac_control = GBE_DEF_1G_MAC_CONTROL;
+
+ /* EMAC register maps are contiguous, but the port register maps are not */
+ port_reg_num = slave->slave_num;
+ if (gbe_dev->ss_version == GBE_SS_VERSION_14) {
+ if (slave->slave_num > 1) {
+ port_reg_ofs = GBE13_SLAVE_PORT2_OFFSET;
+ port_reg_num -= 2;
+ } else {
+ port_reg_ofs = GBE13_SLAVE_PORT_OFFSET;
+ }
+ } else if (gbe_dev->ss_version == XGBE_SS_VERSION_10) {
+ port_reg_ofs = XGBE10_SLAVE_PORT_OFFSET;
+ } else {
+ dev_err(gbe_dev->dev, "unknown ethss(0x%x)\n",
+ gbe_dev->ss_version);
+ return -EINVAL;
+ }
+
+ if (gbe_dev->ss_version == GBE_SS_VERSION_14)
+ emac_reg_ofs = GBE13_EMAC_OFFSET;
+ else if (gbe_dev->ss_version == XGBE_SS_VERSION_10)
+ emac_reg_ofs = XGBE10_EMAC_OFFSET;
+
+ slave->port_regs = gbe_dev->ss_regs + port_reg_ofs +
+ (0x30 * port_reg_num);
+ slave->emac_regs = gbe_dev->ss_regs + emac_reg_ofs +
+ (0x40 * slave->slave_num);
+
+ if (gbe_dev->ss_version == GBE_SS_VERSION_14) {
+ /* Initialize slave port register offsets */
+ GBE_SET_REG_OFS(slave, port_regs, port_vlan);
+ GBE_SET_REG_OFS(slave, port_regs, tx_pri_map);
+ GBE_SET_REG_OFS(slave, port_regs, sa_lo);
+ GBE_SET_REG_OFS(slave, port_regs, sa_hi);
+ GBE_SET_REG_OFS(slave, port_regs, ts_ctl);
+ GBE_SET_REG_OFS(slave, port_regs, ts_seq_ltype);
+ GBE_SET_REG_OFS(slave, port_regs, ts_vlan);
+ GBE_SET_REG_OFS(slave, port_regs, ts_ctl_ltype2);
+ GBE_SET_REG_OFS(slave, port_regs, ts_ctl2);
+
+ /* Initialize EMAC register offsets */
+ GBE_SET_REG_OFS(slave, emac_regs, mac_control);
+ GBE_SET_REG_OFS(slave, emac_regs, soft_reset);
+ GBE_SET_REG_OFS(slave, emac_regs, rx_maxlen);
+
+ } else if (gbe_dev->ss_version == XGBE_SS_VERSION_10) {
+ /* Initialize slave port register offsets */
+ XGBE_SET_REG_OFS(slave, port_regs, port_vlan);
+ XGBE_SET_REG_OFS(slave, port_regs, tx_pri_map);
+ XGBE_SET_REG_OFS(slave, port_regs, sa_lo);
+ XGBE_SET_REG_OFS(slave, port_regs, sa_hi);
+ XGBE_SET_REG_OFS(slave, port_regs, ts_ctl);
+ XGBE_SET_REG_OFS(slave, port_regs, ts_seq_ltype);
+ XGBE_SET_REG_OFS(slave, port_regs, ts_vlan);
+ XGBE_SET_REG_OFS(slave, port_regs, ts_ctl_ltype2);
+ XGBE_SET_REG_OFS(slave, port_regs, ts_ctl2);
+
+ /* Initialize EMAC register offsets */
+ XGBE_SET_REG_OFS(slave, emac_regs, mac_control);
+ XGBE_SET_REG_OFS(slave, emac_regs, soft_reset);
+ XGBE_SET_REG_OFS(slave, emac_regs, rx_maxlen);
+ }
+
+ atomic_set(&slave->link_state, NETCP_LINK_STATE_INVALID);
+ return 0;
+}
+
+static void init_secondary_ports(struct gbe_priv *gbe_dev,
+ struct device_node *node)
+{
+ struct device *dev = gbe_dev->dev;
+ phy_interface_t phy_mode;
+ struct gbe_priv **priv;
+ struct device_node *port;
+ struct gbe_slave *slave;
+ bool mac_phy_link = false;
+
+ for_each_child_of_node(node, port) {
+ slave = devm_kzalloc(dev, sizeof(*slave), GFP_KERNEL);
+ if (!slave) {
+ dev_err(dev,
+ "memomry alloc failed for secondary port(%s), skipping...\n",
+ port->name);
+ continue;
+ }
+
+ if (init_slave(gbe_dev, slave, port)) {
+ dev_err(dev,
+ "Failed to initialize secondary port(%s), skipping...\n",
+ port->name);
+ devm_kfree(dev, slave);
+ continue;
+ }
+
+ gbe_sgmii_config(gbe_dev, slave);
+ gbe_port_reset(slave);
+ gbe_port_config(gbe_dev, slave, gbe_dev->rx_packet_max);
+ list_add_tail(&slave->slave_list, &gbe_dev->secondary_slaves);
+ gbe_dev->num_slaves++;
+ if ((slave->link_interface == SGMII_LINK_MAC_PHY) ||
+ (slave->link_interface == XGMII_LINK_MAC_PHY))
+ mac_phy_link = true;
+
+ slave->open = true;
+ }
+
+ /* of_phy_connect() is needed only for MAC-PHY interface */
+ if (!mac_phy_link)
+ return;
+
+ /* Allocate dummy netdev device for attaching to phy device */
+ gbe_dev->dummy_ndev = alloc_netdev(sizeof(gbe_dev), "dummy",
+ NET_NAME_UNKNOWN, ether_setup);
+ if (!gbe_dev->dummy_ndev) {
+ dev_err(dev,
+ "Failed to allocate dummy netdev for secondary ports, skipping phy_connect()...\n");
+ return;
+ }
+ priv = netdev_priv(gbe_dev->dummy_ndev);
+ *priv = gbe_dev;
+
+ for_each_sec_slave(slave, gbe_dev) {
+ if ((slave->link_interface != SGMII_LINK_MAC_PHY) &&
+ (slave->link_interface != XGMII_LINK_MAC_PHY))
+ continue;
+
+ /* pick the PHY mode per slave; link interfaces may differ */
+ if (slave->link_interface == SGMII_LINK_MAC_PHY) {
+ phy_mode = PHY_INTERFACE_MODE_SGMII;
+ slave->phy_port_t = PORT_MII;
+ } else {
+ phy_mode = PHY_INTERFACE_MODE_NA;
+ slave->phy_port_t = PORT_FIBRE;
+ }
+
+ slave->phy =
+ of_phy_connect(gbe_dev->dummy_ndev,
+ slave->phy_node,
+ gbe_adjust_link_sec_slaves,
+ 0, phy_mode);
+ if (!slave->phy) {
+ dev_err(dev, "phy not found for slave %d\n",
+ slave->slave_num);
+ } else {
+ dev_dbg(dev, "phy found: id is: 0x%s\n",
+ dev_name(&slave->phy->dev));
+ phy_start(slave->phy);
+ phy_read_status(slave->phy);
+ }
+ }
+}
+
+static void free_secondary_ports(struct gbe_priv *gbe_dev)
+{
+ struct gbe_slave *slave;
+
+ for (;;) {
+ slave = first_sec_slave(gbe_dev);
+ if (!slave)
+ break;
+ if (slave->phy)
+ phy_disconnect(slave->phy);
+ list_del(&slave->slave_list);
+ }
+ if (gbe_dev->dummy_ndev)
+ free_netdev(gbe_dev->dummy_ndev);
+}
+
+static int set_xgbe_ethss10_priv(struct gbe_priv *gbe_dev,
+ struct device_node *node)
+{
+ struct resource res;
+ void __iomem *regs;
+ int ret, i;
+
+ ret = of_address_to_resource(node, 0, &res);
+ if (ret) {
+ dev_err(gbe_dev->dev, "Can't translate of node(%s) address for xgbe subsystem regs\n",
+ node->name);
+ return ret;
+ }
+
+ regs = devm_ioremap_resource(gbe_dev->dev, &res);
+ if (IS_ERR(regs)) {
+ dev_err(gbe_dev->dev, "Failed to map xgbe register base\n");
+ return PTR_ERR(regs);
+ }
+ gbe_dev->ss_regs = regs;
+
+ ret = of_address_to_resource(node, XGBE_SERDES_REG_INDEX, &res);
+ if (ret) {
+ dev_err(gbe_dev->dev, "Can't translate of node(%s) address for xgbe serdes regs\n",
+ node->name);
+ return ret;
+ }
+
+ regs = devm_ioremap_resource(gbe_dev->dev, &res);
+ if (IS_ERR(regs)) {
+ dev_err(gbe_dev->dev, "Failed to map xgbe serdes register base\n");
+ return PTR_ERR(regs);
+ }
+ gbe_dev->xgbe_serdes_regs = regs;
+
+ gbe_dev->hw_stats = devm_kzalloc(gbe_dev->dev,
+ XGBE10_NUM_STAT_ENTRIES *
+ (XGBE10_NUM_SLAVES + 1) * sizeof(u64),
+ GFP_KERNEL);
+ if (!gbe_dev->hw_stats) {
+ dev_err(gbe_dev->dev, "hw_stats memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ gbe_dev->ss_version = XGBE_SS_VERSION_10;
+ gbe_dev->sgmii_port_regs = gbe_dev->ss_regs +
+ XGBE10_SGMII_MODULE_OFFSET;
+ gbe_dev->switch_regs = gbe_dev->ss_regs + XGBE10_SWITCH_MODULE_OFFSET;
+ gbe_dev->host_port_regs = gbe_dev->ss_regs + XGBE10_HOST_PORT_OFFSET;
+
+ for (i = 0; i < XGBE10_NUM_HW_STATS_MOD; i++)
+ gbe_dev->hw_stats_regs[i] = gbe_dev->ss_regs +
+ XGBE10_HW_STATS_OFFSET + (GBE_HW_STATS_REG_MAP_SZ * i);
+
+ gbe_dev->ale_reg = gbe_dev->ss_regs + XGBE10_ALE_OFFSET;
+ gbe_dev->ale_ports = XGBE10_NUM_ALE_PORTS;
+ gbe_dev->host_port = XGBE10_HOST_PORT_NUM;
+ gbe_dev->ale_entries = XGBE10_NUM_ALE_ENTRIES;
+ gbe_dev->et_stats = xgbe10_et_stats;
+ gbe_dev->num_et_stats = ARRAY_SIZE(xgbe10_et_stats);
+
+ /* Subsystem registers */
+ XGBE_SET_REG_OFS(gbe_dev, ss_regs, id_ver);
+ XGBE_SET_REG_OFS(gbe_dev, ss_regs, control);
+
+ /* Switch module registers */
+ XGBE_SET_REG_OFS(gbe_dev, switch_regs, id_ver);
+ XGBE_SET_REG_OFS(gbe_dev, switch_regs, control);
+ XGBE_SET_REG_OFS(gbe_dev, switch_regs, ptype);
+ XGBE_SET_REG_OFS(gbe_dev, switch_regs, stat_port_en);
+ XGBE_SET_REG_OFS(gbe_dev, switch_regs, flow_control);
+
+ /* Host port registers */
+ XGBE_SET_REG_OFS(gbe_dev, host_port_regs, port_vlan);
+ XGBE_SET_REG_OFS(gbe_dev, host_port_regs, tx_pri_map);
+ XGBE_SET_REG_OFS(gbe_dev, host_port_regs, rx_maxlen);
+ return 0;
+}
+
+static int get_gbe_resource_version(struct gbe_priv *gbe_dev,
+ struct device_node *node)
+{
+ struct resource res;
+ void __iomem *regs;
+ int ret;
+
+ ret = of_address_to_resource(node, 0, &res);
+ if (ret) {
+ dev_err(gbe_dev->dev, "Can't translate of node(%s) address\n",
+ node->name);
+ return ret;
+ }
+
+ regs = devm_ioremap_resource(gbe_dev->dev, &res);
+ if (IS_ERR(regs)) {
+ dev_err(gbe_dev->dev, "Failed to map gbe register base\n");
+ return PTR_ERR(regs);
+ }
+ gbe_dev->ss_regs = regs;
+ gbe_dev->ss_version = readl(gbe_dev->ss_regs);
+ return 0;
+}
+
+static int set_gbe_ethss14_priv(struct gbe_priv *gbe_dev,
+ struct device_node *node)
+{
+ void __iomem *regs;
+ int i;
+
+ gbe_dev->hw_stats = devm_kzalloc(gbe_dev->dev,
+ GBE13_NUM_HW_STAT_ENTRIES *
+ GBE13_NUM_SLAVES * sizeof(u64),
+ GFP_KERNEL);
+ if (!gbe_dev->hw_stats) {
+ dev_err(gbe_dev->dev, "hw_stats memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ regs = gbe_dev->ss_regs;
+ gbe_dev->sgmii_port_regs = regs + GBE13_SGMII_MODULE_OFFSET;
+ gbe_dev->sgmii_port34_regs = regs + GBE13_SGMII34_MODULE_OFFSET;
+ gbe_dev->switch_regs = regs + GBE13_SWITCH_MODULE_OFFSET;
+ gbe_dev->host_port_regs = regs + GBE13_HOST_PORT_OFFSET;
+
+ for (i = 0; i < GBE13_NUM_HW_STATS_MOD; i++)
+ gbe_dev->hw_stats_regs[i] = regs + GBE13_HW_STATS_OFFSET +
+ (GBE_HW_STATS_REG_MAP_SZ * i);
+
+ gbe_dev->ale_reg = regs + GBE13_ALE_OFFSET;
+ gbe_dev->ale_ports = GBE13_NUM_ALE_PORTS;
+ gbe_dev->host_port = GBE13_HOST_PORT_NUM;
+ gbe_dev->ale_entries = GBE13_NUM_ALE_ENTRIES;
+ gbe_dev->et_stats = gbe13_et_stats;
+ gbe_dev->num_et_stats = ARRAY_SIZE(gbe13_et_stats);
+
+ /* Subsystem registers */
+ GBE_SET_REG_OFS(gbe_dev, ss_regs, id_ver);
+
+ /* Switch module registers */
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, id_ver);
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, control);
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, soft_reset);
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, stat_port_en);
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, ptype);
+ GBE_SET_REG_OFS(gbe_dev, switch_regs, flow_control);
+
+ /* Host port registers */
+ GBE_SET_REG_OFS(gbe_dev, host_port_regs, port_vlan);
+ GBE_SET_REG_OFS(gbe_dev, host_port_regs, rx_maxlen);
+ return 0;
+}
+
+static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
+ struct device_node *node, void **inst_priv)
+{
+ struct device_node *interfaces = NULL, *interface;
+ struct device_node *secondary_ports;
+ struct cpsw_ale_params ale_params;
+ struct gbe_priv *gbe_dev;
+ u32 slave_num;
+ int ret = 0;
+
+ if (!node) {
+ dev_err(dev, "device tree info unavailable\n");
+ return -ENODEV;
+ }
+
+ gbe_dev = devm_kzalloc(dev, sizeof(struct gbe_priv), GFP_KERNEL);
+ if (!gbe_dev)
+ return -ENOMEM;
+
+ gbe_dev->dev = dev;
+ gbe_dev->netcp_device = netcp_device;
+ gbe_dev->rx_packet_max = NETCP_MAX_FRAME_SIZE;
+
+ /* init the hw stats lock */
+ spin_lock_init(&gbe_dev->hw_stats_lock);
+
+ if (of_find_property(node, "enable-ale", NULL)) {
+ gbe_dev->enable_ale = true;
+ dev_info(dev, "ALE enabled\n");
+ } else {
+ gbe_dev->enable_ale = false;
+ dev_dbg(dev, "ALE bypass enabled*\n");
+ }
+
+ ret = of_property_read_u32(node, "tx-queue",
+ &gbe_dev->tx_queue_id);
+ if (ret < 0) {
+ dev_err(dev, "missing tx_queue parameter\n");
+ gbe_dev->tx_queue_id = GBE_TX_QUEUE;
+ }
+
+ ret = of_property_read_string(node, "tx-channel",
+ &gbe_dev->dma_chan_name);
+ if (ret < 0) {
+ dev_err(dev, "missing \"tx-channel\" parameter\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+
+ if (!strcmp(node->name, "gbe")) {
+ ret = get_gbe_resource_version(gbe_dev, node);
+ if (ret)
+ goto quit;
+
+ ret = set_gbe_ethss14_priv(gbe_dev, node);
+ if (ret)
+ goto quit;
+ } else if (!strcmp(node->name, "xgbe")) {
+ ret = set_xgbe_ethss10_priv(gbe_dev, node);
+ if (ret)
+ goto quit;
+ ret = netcp_xgbe_serdes_init(gbe_dev->xgbe_serdes_regs,
+ gbe_dev->ss_regs);
+ if (ret)
+ goto quit;
+ } else {
+ dev_err(dev, "unknown GBE node(%s)\n", node->name);
+ ret = -ENODEV;
+ goto quit;
+ }
+
+ interfaces = of_get_child_by_name(node, "interfaces");
+ if (!interfaces)
+ dev_err(dev, "could not find interfaces\n");
+
+ ret = netcp_txpipe_init(&gbe_dev->tx_pipe, netcp_device,
+ gbe_dev->dma_chan_name, gbe_dev->tx_queue_id);
+ if (ret)
+ goto quit;
+
+ ret = netcp_txpipe_open(&gbe_dev->tx_pipe);
+ if (ret)
+ goto quit;
+
+ /* Create network interfaces */
+ INIT_LIST_HEAD(&gbe_dev->gbe_intf_head);
+ for_each_child_of_node(interfaces, interface) {
+ ret = of_property_read_u32(interface, "slave-port", &slave_num);
+ if (ret) {
+ dev_err(dev, "missing slave-port parameter, skipping interface configuration for %s\n",
+ interface->name);
+ continue;
+ }
+ gbe_dev->num_slaves++;
+ }
+
+ if (!gbe_dev->num_slaves)
+ dev_warn(dev, "No network interface configured\n");
+
+ /* Initialize Secondary slave ports */
+ secondary_ports = of_get_child_by_name(node, "secondary-slave-ports");
+ INIT_LIST_HEAD(&gbe_dev->secondary_slaves);
+ if (secondary_ports)
+ init_secondary_ports(gbe_dev, secondary_ports);
+ of_node_put(secondary_ports);
+
+ if (!gbe_dev->num_slaves) {
+ dev_err(dev, "No network interface or secondary ports configured\n");
+ ret = -ENODEV;
+ goto quit;
+ }
+
+ memset(&ale_params, 0, sizeof(ale_params));
+ ale_params.dev = gbe_dev->dev;
+ ale_params.ale_regs = gbe_dev->ale_reg;
+ ale_params.ale_ageout = GBE_DEFAULT_ALE_AGEOUT;
+ ale_params.ale_entries = gbe_dev->ale_entries;
+ ale_params.ale_ports = gbe_dev->ale_ports;
+
+ gbe_dev->ale = cpsw_ale_create(&ale_params);
+ if (!gbe_dev->ale) {
+ dev_err(gbe_dev->dev, "error initializing ale engine\n");
+ ret = -ENODEV;
+ goto quit;
+ } else {
+ dev_dbg(gbe_dev->dev, "Created a gbe ale engine\n");
+ }
+
+ /* initialize host port */
+ gbe_init_host_port(gbe_dev);
+
+ init_timer(&gbe_dev->timer);
+ gbe_dev->timer.data = (unsigned long)gbe_dev;
+ gbe_dev->timer.function = netcp_ethss_timer;
+ gbe_dev->timer.expires = jiffies + GBE_TIMER_INTERVAL;
+ add_timer(&gbe_dev->timer);
+ *inst_priv = gbe_dev;
+ return 0;
+
+quit:
+ if (gbe_dev->hw_stats)
+ devm_kfree(dev, gbe_dev->hw_stats);
+ if (gbe_dev->ale)
+ cpsw_ale_destroy(gbe_dev->ale);
+ if (gbe_dev->ss_regs)
+ devm_iounmap(dev, gbe_dev->ss_regs);
+ if (interfaces)
+ of_node_put(interfaces);
+ devm_kfree(dev, gbe_dev);
+ return ret;
+}
+
+static int gbe_attach(void *inst_priv, struct net_device *ndev,
+ struct device_node *node, void **intf_priv)
+{
+ struct gbe_priv *gbe_dev = inst_priv;
+ struct gbe_intf *gbe_intf;
+ int ret;
+
+ if (!node) {
+ dev_err(gbe_dev->dev, "interface node not available\n");
+ return -ENODEV;
+ }
+
+ gbe_intf = devm_kzalloc(gbe_dev->dev, sizeof(*gbe_intf), GFP_KERNEL);
+ if (!gbe_intf)
+ return -ENOMEM;
+
+ gbe_intf->ndev = ndev;
+ gbe_intf->dev = gbe_dev->dev;
+ gbe_intf->gbe_dev = gbe_dev;
+
+ gbe_intf->slave = devm_kzalloc(gbe_dev->dev,
+ sizeof(*gbe_intf->slave),
+ GFP_KERNEL);
+ if (!gbe_intf->slave) {
+ ret = -ENOMEM;
+ goto fail;
+ }
+
+ if (init_slave(gbe_dev, gbe_intf->slave, node)) {
+ ret = -ENODEV;
+ goto fail;
+ }
+
+ gbe_intf->tx_pipe = gbe_dev->tx_pipe;
+ ndev->ethtool_ops = &keystone_ethtool_ops;
+ list_add_tail(&gbe_intf->gbe_intf_list, &gbe_dev->gbe_intf_head);
+ *intf_priv = gbe_intf;
+ return 0;
+
+fail:
+ if (gbe_intf->slave)
+ devm_kfree(gbe_dev->dev, gbe_intf->slave);
+ if (gbe_intf)
+ devm_kfree(gbe_dev->dev, gbe_intf);
+ return ret;
+}
+
+static int gbe_release(void *intf_priv)
+{
+ struct gbe_intf *gbe_intf = intf_priv;
+
+ gbe_intf->ndev->ethtool_ops = NULL;
+ list_del(&gbe_intf->gbe_intf_list);
+ devm_kfree(gbe_intf->dev, gbe_intf->slave);
+ devm_kfree(gbe_intf->dev, gbe_intf);
+ return 0;
+}
+
+static int gbe_remove(struct netcp_device *netcp_device, void *inst_priv)
+{
+ struct gbe_priv *gbe_dev = inst_priv;
+
+ del_timer_sync(&gbe_dev->timer);
+ cpsw_ale_stop(gbe_dev->ale);
+ cpsw_ale_destroy(gbe_dev->ale);
+ netcp_txpipe_close(&gbe_dev->tx_pipe);
+ free_secondary_ports(gbe_dev);
+
+ if (!list_empty(&gbe_dev->gbe_intf_head))
+ dev_alert(gbe_dev->dev, "unreleased ethss interfaces present\n");
+
+ devm_kfree(gbe_dev->dev, gbe_dev->hw_stats);
+ devm_iounmap(gbe_dev->dev, gbe_dev->ss_regs);
+ memset(gbe_dev, 0x00, sizeof(*gbe_dev));
+ devm_kfree(gbe_dev->dev, gbe_dev);
+ return 0;
+}
+
+static struct netcp_module gbe_module = {
+ .name = GBE_MODULE_NAME,
+ .owner = THIS_MODULE,
+ .primary = true,
+ .probe = gbe_probe,
+ .open = gbe_open,
+ .close = gbe_close,
+ .remove = gbe_remove,
+ .attach = gbe_attach,
+ .release = gbe_release,
+ .add_addr = gbe_add_addr,
+ .del_addr = gbe_del_addr,
+ .add_vid = gbe_add_vid,
+ .del_vid = gbe_del_vid,
+ .ioctl = gbe_ioctl,
+};
+
+static struct netcp_module xgbe_module = {
+ .name = XGBE_MODULE_NAME,
+ .owner = THIS_MODULE,
+ .primary = true,
+ .probe = gbe_probe,
+ .open = gbe_open,
+ .close = gbe_close,
+ .remove = gbe_remove,
+ .attach = gbe_attach,
+ .release = gbe_release,
+ .add_addr = gbe_add_addr,
+ .del_addr = gbe_del_addr,
+ .add_vid = gbe_add_vid,
+ .del_vid = gbe_del_vid,
+ .ioctl = gbe_ioctl,
+};
+
+static int __init keystone_gbe_init(void)
+{
+ int ret;
+
+ ret = netcp_register_module(&gbe_module);
+ if (ret)
+ return ret;
+
+ ret = netcp_register_module(&xgbe_module);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+module_init(keystone_gbe_init);
+
+static void __exit keystone_gbe_exit(void)
+{
+ netcp_unregister_module(&gbe_module);
+ netcp_unregister_module(&xgbe_module);
+}
+module_exit(keystone_gbe_exit);
--- /dev/null
+/*
+ * SGMII module initialisation
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors: Sandeep Nair <sandeep_n@ti.com>
+ * Sandeep Paulraj <s-paulraj@ti.com>
+ * Wingman Kwok <w-kwok2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include "netcp.h"
+
+#define SGMII_REG_STATUS_LOCK BIT(4)
+#define SGMII_REG_STATUS_LINK BIT(0)
+#define SGMII_REG_STATUS_AUTONEG BIT(2)
+#define SGMII_REG_CONTROL_AUTONEG BIT(0)
+
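+/* ports 0 and 1 sit at offsets 0x000/0x100 within their module; ports 2 and 3
+ * map the same way within a separate module (see gbe_sgmii_config())
+ */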
+#define SGMII23_OFFSET(x) (((x) - 2) * 0x100)
+#define SGMII_OFFSET(x) (((x) <= 1) ? ((x) * 0x100) : SGMII23_OFFSET(x))
+
+/* SGMII registers */
+#define SGMII_SRESET_REG(x) (SGMII_OFFSET(x) + 0x004)
+#define SGMII_CTL_REG(x) (SGMII_OFFSET(x) + 0x010)
+#define SGMII_STATUS_REG(x) (SGMII_OFFSET(x) + 0x014)
+#define SGMII_MRADV_REG(x) (SGMII_OFFSET(x) + 0x018)
+
+static void sgmii_write_reg(void __iomem *base, int reg, u32 val)
+{
+ writel(val, base + reg);
+}
+
+static u32 sgmii_read_reg(void __iomem *base, int reg)
+{
+ return readl(base + reg);
+}
+
+static void sgmii_write_reg_bit(void __iomem *base, int reg, u32 val)
+{
+ writel((readl(base + reg) | val), base + reg);
+}
+
+/* port is 0 based */
+int netcp_sgmii_reset(void __iomem *sgmii_ofs, int port)
+{
+ /* Soft reset */
+ sgmii_write_reg_bit(sgmii_ofs, SGMII_SRESET_REG(port), 0x1);
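+ /* the reset bit self-clears when the soft reset completes */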
+ while (sgmii_read_reg(sgmii_ofs, SGMII_SRESET_REG(port)) != 0x0)
+ ;
+ return 0;
+}
+
+int netcp_sgmii_get_port_link(void __iomem *sgmii_ofs, int port)
+{
+ u32 status = 0, link = 0;
+
+ status = sgmii_read_reg(sgmii_ofs, SGMII_STATUS_REG(port));
+ if ((status & SGMII_REG_STATUS_LINK) != 0)
+ link = 1;
+ return link;
+}
+
+int netcp_sgmii_config(void __iomem *sgmii_ofs, int port, u32 interface)
+{
+ unsigned int i, status, mask;
+ u32 mr_adv_ability;
+ u32 control;
+
+ switch (interface) {
+ case SGMII_LINK_MAC_MAC_AUTONEG:
+ mr_adv_ability = 0x9801;
+ control = 0x21;
+ break;
+
+ case SGMII_LINK_MAC_PHY:
+ case SGMII_LINK_MAC_PHY_NO_MDIO:
+ mr_adv_ability = 1;
+ control = 1;
+ break;
+
+ case SGMII_LINK_MAC_MAC_FORCED:
+ mr_adv_ability = 0x9801;
+ control = 0x20;
+ break;
+
+ case SGMII_LINK_MAC_FIBER:
+ mr_adv_ability = 0x20;
+ control = 0x1;
+ break;
+
+ default:
+ WARN_ONCE(1, "Invalid sgmii interface: %d\n", interface);
+ return -EINVAL;
+ }
+
+ sgmii_write_reg(sgmii_ofs, SGMII_CTL_REG(port), 0);
+
+ /* Wait for the SerDes pll to lock */
+ for (i = 0; i < 1000; i++) {
+ usleep_range(1000, 2000);
+ status = sgmii_read_reg(sgmii_ofs, SGMII_STATUS_REG(port));
+ if ((status & SGMII_REG_STATUS_LOCK) != 0)
+ break;
+ }
+
+ if ((status & SGMII_REG_STATUS_LOCK) == 0)
+ pr_err("serdes PLL not locked\n");
+
+ sgmii_write_reg(sgmii_ofs, SGMII_MRADV_REG(port), mr_adv_ability);
+ sgmii_write_reg(sgmii_ofs, SGMII_CTL_REG(port), control);
+
+ mask = SGMII_REG_STATUS_LINK;
+ if (control & SGMII_REG_CONTROL_AUTONEG)
+ mask |= SGMII_REG_STATUS_AUTONEG;
+
+ for (i = 0; i < 1000; i++) {
+ usleep_range(200, 500);
+ status = sgmii_read_reg(sgmii_ofs, SGMII_STATUS_REG(port));
+ if ((status & mask) == mask)
+ break;
+ }
+
+ return 0;
+}
--- /dev/null
+/*
+ * XGE PCSR module initialisation
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated
+ * Authors: Sandeep Nair <sandeep_n@ti.com>
+ * WingMan Kwok <w-kwok2@ti.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#include "netcp.h"
+
+/* XGBE registers */
+#define XGBE_CTRL_OFFSET 0x0c
+#define XGBE_SGMII_1_OFFSET 0x0114
+#define XGBE_SGMII_2_OFFSET 0x0214
+
+/* PCS-R registers */
+#define PCSR_CPU_CTRL_OFFSET 0x1fd0
+#define POR_EN BIT(29)
+
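+/* read-modify-write: update only the bits selected by mask */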
+#define reg_rmw(addr, value, mask) \
+ writel(((readl(addr) & (~(mask))) | \
+ (value & (mask))), (addr))
+
+/* bit mask of width w at offset s */
+#define MASK_WID_SH(w, s) (((1 << (w)) - 1) << (s))
+
+/* shift value v to offset s */
+#define VAL_SH(v, s) ((v) << (s))
+
+#define PHY_A(serdes) 0
+
+struct serdes_cfg {
+ u32 ofs;
+ u32 val;
+ u32 mask;
+};
+
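+/* serdes register patch tables; each entry is applied as reg_rmw(base + ofs, val, mask) */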
+static struct serdes_cfg cfg_phyb_1p25g_156p25mhz_cmu0[] = {
+ {0x0000, 0x00800002, 0x00ff00ff},
+ {0x0014, 0x00003838, 0x0000ffff},
+ {0x0060, 0x1c44e438, 0xffffffff},
+ {0x0064, 0x00c18400, 0x00ffffff},
+ {0x0068, 0x17078200, 0xffffff00},
+ {0x006c, 0x00000014, 0x000000ff},
+ {0x0078, 0x0000c000, 0x0000ff00},
+ {0x0000, 0x00000003, 0x000000ff},
+};
+
+static struct serdes_cfg cfg_phyb_10p3125g_156p25mhz_cmu1[] = {
+ {0x0c00, 0x00030002, 0x00ff00ff},
+ {0x0c14, 0x00005252, 0x0000ffff},
+ {0x0c28, 0x80000000, 0xff000000},
+ {0x0c2c, 0x000000f6, 0x000000ff},
+ {0x0c3c, 0x04000405, 0xff00ffff},
+ {0x0c40, 0xc0800000, 0xffff0000},
+ {0x0c44, 0x5a202062, 0xffffffff},
+ {0x0c48, 0x40040424, 0xffffffff},
+ {0x0c4c, 0x00004002, 0x0000ffff},
+ {0x0c50, 0x19001c00, 0xff00ff00},
+ {0x0c54, 0x00002100, 0x0000ff00},
+ {0x0c58, 0x00000060, 0x000000ff},
+ {0x0c60, 0x80131e7c, 0xffffffff},
+ {0x0c64, 0x8400cb02, 0xff00ffff},
+ {0x0c68, 0x17078200, 0xffffff00},
+ {0x0c6c, 0x00000016, 0x000000ff},
+ {0x0c74, 0x00000400, 0x0000ff00},
+ {0x0c78, 0x0000c000, 0x0000ff00},
+ {0x0c00, 0x00000003, 0x000000ff},
+};
+
+static struct serdes_cfg cfg_phyb_10p3125g_16bit_lane[] = {
+ {0x0204, 0x00000080, 0x000000ff},
+ {0x0208, 0x0000920d, 0x0000ffff},
+ {0x0204, 0xfc000000, 0xff000000},
+ {0x0208, 0x00009104, 0x0000ffff},
+ {0x0210, 0x1a000000, 0xff000000},
+ {0x0214, 0x00006b58, 0x00ffffff},
+ {0x0218, 0x75800084, 0xffff00ff},
+ {0x022c, 0x00300000, 0x00ff0000},
+ {0x0230, 0x00003800, 0x0000ff00},
+ {0x024c, 0x008f0000, 0x00ff0000},
+ {0x0250, 0x30000000, 0xff000000},
+ {0x0260, 0x00000002, 0x000000ff},
+ {0x0264, 0x00000057, 0x000000ff},
+ {0x0268, 0x00575700, 0x00ffff00},
+ {0x0278, 0xff000000, 0xff000000},
+ {0x0280, 0x00500050, 0x00ff00ff},
+ {0x0284, 0x00001f15, 0x0000ffff},
+ {0x028c, 0x00006f00, 0x0000ff00},
+ {0x0294, 0x00000000, 0xffffff00},
+ {0x0298, 0x00002640, 0xff00ffff},
+ {0x029c, 0x00000003, 0x000000ff},
+ {0x02a4, 0x00000f13, 0x0000ffff},
+ {0x02a8, 0x0001b600, 0x00ffff00},
+ {0x0380, 0x00000030, 0x000000ff},
+ {0x03c0, 0x00000200, 0x0000ff00},
+ {0x03cc, 0x00000018, 0x000000ff},
+ {0x03cc, 0x00000000, 0x000000ff},
+};
+
+static struct serdes_cfg cfg_phyb_10p3125g_comlane[] = {
+ {0x0a00, 0x00000800, 0x0000ff00},
+ {0x0a84, 0x00000000, 0x000000ff},
+ {0x0a8c, 0x00130000, 0x00ff0000},
+ {0x0a90, 0x77a00000, 0xffff0000},
+ {0x0a94, 0x00007777, 0x0000ffff},
+ {0x0b08, 0x000f0000, 0xffff0000},
+ {0x0b0c, 0x000f0000, 0x00ffffff},
+ {0x0b10, 0xbe000000, 0xff000000},
+ {0x0b14, 0x000000ff, 0x000000ff},
+ {0x0b18, 0x00000014, 0x000000ff},
+ {0x0b5c, 0x981b0000, 0xffff0000},
+ {0x0b64, 0x00001100, 0x0000ff00},
+ {0x0b78, 0x00000c00, 0x0000ff00},
+ {0x0abc, 0xff000000, 0xff000000},
+ {0x0ac0, 0x0000008b, 0x000000ff},
+};
+
+static struct serdes_cfg cfg_cm_c1_c2[] = {
+ {0x0208, 0x00000000, 0x00000f00},
+ {0x0208, 0x00000000, 0x0000001f},
+ {0x0204, 0x00000000, 0x00040000},
+ {0x0208, 0x000000a0, 0x000000e0},
+};
+
+static void netcp_xgbe_serdes_cmu_init(void __iomem *serdes_regs)
+{
+ int i;
+
+ /* cmu0 setup */
+ for (i = 0; i < ARRAY_SIZE(cfg_phyb_1p25g_156p25mhz_cmu0); i++) {
+ reg_rmw(serdes_regs + cfg_phyb_1p25g_156p25mhz_cmu0[i].ofs,
+ cfg_phyb_1p25g_156p25mhz_cmu0[i].val,
+ cfg_phyb_1p25g_156p25mhz_cmu0[i].mask);
+ }
+
+ /* cmu1 setup */
+ for (i = 0; i < ARRAY_SIZE(cfg_phyb_10p3125g_156p25mhz_cmu1); i++) {
+ reg_rmw(serdes_regs + cfg_phyb_10p3125g_156p25mhz_cmu1[i].ofs,
+ cfg_phyb_10p3125g_156p25mhz_cmu1[i].val,
+ cfg_phyb_10p3125g_156p25mhz_cmu1[i].mask);
+ }
+}
+
+/* lane is 0 based */
+static void netcp_xgbe_serdes_lane_config(
+ void __iomem *serdes_regs, int lane)
+{
+ int i;
+
+ /* lane setup */
+ for (i = 0; i < ARRAY_SIZE(cfg_phyb_10p3125g_16bit_lane); i++) {
+ reg_rmw(serdes_regs +
+ cfg_phyb_10p3125g_16bit_lane[i].ofs +
+ (0x200 * lane),
+ cfg_phyb_10p3125g_16bit_lane[i].val,
+ cfg_phyb_10p3125g_16bit_lane[i].mask);
+ }
+
+ /* disable auto-negotiation */
+ reg_rmw(serdes_regs + (0x200 * lane) + 0x0380,
+ 0x00000000, 0x00000010);
+
+ /* disable link training */
+ reg_rmw(serdes_regs + (0x200 * lane) + 0x03c0,
+ 0x00000000, 0x00000200);
+}
+
+static void netcp_xgbe_serdes_com_enable(void __iomem *serdes_regs)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cfg_phyb_10p3125g_comlane); i++) {
+ reg_rmw(serdes_regs + cfg_phyb_10p3125g_comlane[i].ofs,
+ cfg_phyb_10p3125g_comlane[i].val,
+ cfg_phyb_10p3125g_comlane[i].mask);
+ }
+}
+
+static void netcp_xgbe_serdes_lane_enable(
+ void __iomem *serdes_regs, int lane)
+{
+ /* Set Lane Control Rate */
+ writel(0xe0e9e038, serdes_regs + 0x1fe0 + (4 * lane));
+}
+
+static void netcp_xgbe_serdes_phyb_rst_clr(void __iomem *serdes_regs)
+{
+ reg_rmw(serdes_regs + 0x0a00, 0x0000001f, 0x000000ff);
+}
+
+static void netcp_xgbe_serdes_pll_disable(void __iomem *serdes_regs)
+{
+ writel(0x88000000, serdes_regs + 0x1ff4);
+}
+
+static void netcp_xgbe_serdes_pll_enable(void __iomem *serdes_regs)
+{
+ netcp_xgbe_serdes_phyb_rst_clr(serdes_regs);
+ writel(0xee000000, serdes_regs + 0x1ff4);
+}
+
+static int netcp_xgbe_wait_pll_locked(void __iomem *sw_regs)
+{
+ unsigned long timeout;
+ int ret = 0;
+ u32 val_1, val_0;
+
+ timeout = jiffies + msecs_to_jiffies(500);
+ do {
+ val_0 = (readl(sw_regs + XGBE_SGMII_1_OFFSET) & BIT(4));
+ val_1 = (readl(sw_regs + XGBE_SGMII_2_OFFSET) & BIT(4));
+
+ if (val_1 && val_0)
+ return 0;
+
+ if (time_after(jiffies, timeout)) {
+ ret = -ETIMEDOUT;
+ break;
+ }
+
+ cpu_relax();
+ } while (true);
+
+ pr_err("XGBE serdes not locked: time out.\n");
+ return ret;
+}
+
+static void netcp_xgbe_serdes_enable_xgmii_port(void __iomem *sw_regs)
+{
+ writel(0x03, sw_regs + XGBE_CTRL_OFFSET);
+}
+
+static u32 netcp_xgbe_serdes_read_tbus_val(void __iomem *serdes_regs)
+{
+ u32 tmp;
+
+ if (PHY_A(serdes_regs)) {
+ tmp = (readl(serdes_regs + 0x0ec) >> 24) & 0x0ff;
+ tmp |= ((readl(serdes_regs + 0x0fc) >> 16) & 0x00f00);
+ } else {
+ tmp = (readl(serdes_regs + 0x0f8) >> 16) & 0x0fff;
+ }
+
+ return tmp;
+}
+
+static void netcp_xgbe_serdes_write_tbus_addr(void __iomem *serdes_regs,
+ int select, int ofs)
+{
+ if (PHY_A(serdes_regs)) {
+ reg_rmw(serdes_regs + 0x0008, ((select << 5) + ofs) << 24,
+ ~0x00ffffff);
+ return;
+ }
+
+ /* For 2 lane Phy-B, lane0 is actually lane1 */
+ switch (select) {
+ case 1:
+ select = 2;
+ break;
+ case 2:
+ select = 3;
+ break;
+ default:
+ return;
+ }
+
+ reg_rmw(serdes_regs + 0x00fc, ((select << 8) + ofs) << 16, ~0xf800ffff);
+}
+
+static u32 netcp_xgbe_serdes_read_select_tbus(void __iomem *serdes_regs,
+ int select, int ofs)
+{
+ /* Set tbus address */
+ netcp_xgbe_serdes_write_tbus_addr(serdes_regs, select, ofs);
+ /* Get TBUS Value */
+ return netcp_xgbe_serdes_read_tbus_val(serdes_regs);
+}
+
+static void netcp_xgbe_serdes_reset_cdr(void __iomem *serdes_regs,
+ void __iomem *sig_detect_reg, int lane)
+{
+ u32 tmp, dlpf, tbus;
+
+ /* Get the DLPF values */
+ tmp = netcp_xgbe_serdes_read_select_tbus(
+ serdes_regs, lane + 1, 5);
+
+ dlpf = tmp >> 2;
+
+ if (dlpf < 400 || dlpf > 700) {
+ reg_rmw(sig_detect_reg, VAL_SH(2, 1), MASK_WID_SH(2, 1));
+ mdelay(1);
+ reg_rmw(sig_detect_reg, VAL_SH(0, 1), MASK_WID_SH(2, 1));
+ } else {
+ tbus = netcp_xgbe_serdes_read_select_tbus(serdes_regs, lane +
+ 1, 0xe);
+
+ pr_debug("XGBE: CDR centered, DLPF: %4d,%d,%d.\n",
+ tmp >> 2, tmp & 3, (tbus >> 2) & 3);
+ }
+}
+
+/* Call every 100 ms */
+static int netcp_xgbe_check_link_status(void __iomem *serdes_regs,
+ void __iomem *sw_regs, u32 lanes,
+ u32 *current_state, u32 *lane_down)
+{
+ void __iomem *pcsr_base = sw_regs + 0x0600;
+ void __iomem *sig_detect_reg;
+ u32 pcsr_rx_stat, blk_lock, blk_errs;
+ int loss, i, status = 1;
+
+ for (i = 0; i < lanes; i++) {
+ /* Get the Loss bit */
+ loss = readl(serdes_regs + 0x1fc0 + 0x20 + (i * 0x04)) & 0x1;
+
+ /* Get Block Errors and Block Lock bits */
+ pcsr_rx_stat = readl(pcsr_base + 0x0c + (i * 0x80));
+ blk_lock = (pcsr_rx_stat >> 30) & 0x1;
+ blk_errs = (pcsr_rx_stat >> 16) & 0x0ff;
+
+ /* Get Signal Detect Overlay Address */
+ sig_detect_reg = serdes_regs + (i * 0x200) + 0x200 + 0x04;
+
+ /* If Block errors maxed out, attempt recovery! */
+ if (blk_errs == 0x0ff)
+ blk_lock = 0;
+
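+ /* per-lane link state: 0 = acquiring block lock, 1 = linked,
+ * 2 = lock lost, awaiting confirmation
+ */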
+ switch (current_state[i]) {
+ case 0:
+ /* if good link lock the signal detect ON! */
+ if (!loss && blk_lock) {
+ pr_debug("XGBE PCSR Linked Lane: %d\n", i);
+ reg_rmw(sig_detect_reg, VAL_SH(3, 1),
+ MASK_WID_SH(2, 1));
+ current_state[i] = 1;
+ } else if (!blk_lock) {
+ /* if no lock, then reset CDR */
+ pr_debug("XGBE PCSR Recover Lane: %d\n", i);
+ netcp_xgbe_serdes_reset_cdr(serdes_regs,
+ sig_detect_reg, i);
+ }
+ break;
+
+ case 1:
+ if (!blk_lock) {
+ /* Link Lost? */
+ lane_down[i] = 1;
+ current_state[i] = 2;
+ }
+ break;
+
+ case 2:
+ if (blk_lock)
+ /* Nope just noise */
+ current_state[i] = 1;
+ else {
+ /* Lost the block lock, reset CDR if it is
+ * not centered and go back to sync state
+ */
+ netcp_xgbe_serdes_reset_cdr(serdes_regs,
+ sig_detect_reg, i);
+ current_state[i] = 0;
+ }
+ break;
+
+ default:
+ pr_err("XGBE: unknown current_state[%d] %d\n",
+ i, current_state[i]);
+ break;
+ }
+
+ if (blk_errs > 0) {
+ /* Reset the Error counts! */
+ reg_rmw(pcsr_base + 0x08 + (i * 0x80), VAL_SH(0x19, 0),
+ MASK_WID_SH(8, 0));
+
+ reg_rmw(pcsr_base + 0x08 + (i * 0x80), VAL_SH(0x00, 0),
+ MASK_WID_SH(8, 0));
+ }
+
+ status &= (current_state[i] == 1);
+ }
+
+ return status;
+}
+
+static int netcp_xgbe_serdes_check_lane(void __iomem *serdes_regs,
+ void __iomem *sw_regs)
+{
+ u32 current_state[2] = {0, 0};
+ int retries = 0, link_up;
+ u32 lane_down[2];
+
+ do {
+ lane_down[0] = 0;
+ lane_down[1] = 0;
+
+ link_up = netcp_xgbe_check_link_status(serdes_regs, sw_regs, 2,
+ current_state,
+ lane_down);
+
+ /* if the link did not come up, wait 100ms and check again */
+ if (link_up)
+ break;
+
+ if (lane_down[0])
+ pr_debug("XGBE: detected link down on lane 0\n");
+
+ if (lane_down[1])
+ pr_debug("XGBE: detected link down on lane 1\n");
+
+ if (++retries > 1) {
+ pr_debug("XGBE: timeout waiting for serdes link up\n");
+ return -ETIMEDOUT;
+ }
+ mdelay(100);
+ } while (!link_up);
+
+ pr_debug("XGBE: PCSR link is up\n");
+ return 0;
+}
+
+static void netcp_xgbe_serdes_setup_cm_c1_c2(void __iomem *serdes_regs,
+ int lane, int cm, int c1, int c2)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(cfg_cm_c1_c2); i++) {
+ reg_rmw(serdes_regs + cfg_cm_c1_c2[i].ofs + (0x200 * lane),
+ cfg_cm_c1_c2[i].val,
+ cfg_cm_c1_c2[i].mask);
+ }
+}
+
+static void netcp_xgbe_reset_serdes(void __iomem *serdes_regs)
+{
+ /* Toggle the POR_EN bit in CONFIG.CPU_CTRL */
+ /* enable POR_EN bit */
+ reg_rmw(serdes_regs + PCSR_CPU_CTRL_OFFSET, POR_EN, POR_EN);
+ usleep_range(10, 100);
+
+ /* disable POR_EN bit */
+ reg_rmw(serdes_regs + PCSR_CPU_CTRL_OFFSET, 0, POR_EN);
+ usleep_range(10, 100);
+}
+
+static int netcp_xgbe_serdes_config(void __iomem *serdes_regs,
+ void __iomem *sw_regs)
+{
+ int ret;
+ u32 i;
+
+ netcp_xgbe_serdes_pll_disable(serdes_regs);
+ netcp_xgbe_serdes_cmu_init(serdes_regs);
+
+ for (i = 0; i < 2; i++)
+ netcp_xgbe_serdes_lane_config(serdes_regs, i);
+
+ netcp_xgbe_serdes_com_enable(serdes_regs);
+ /* This is EVM + RTM-BOC specific */
+ for (i = 0; i < 2; i++)
+ netcp_xgbe_serdes_setup_cm_c1_c2(serdes_regs, i, 0, 0, 5);
+
+ netcp_xgbe_serdes_pll_enable(serdes_regs);
+ for (i = 0; i < 2; i++)
+ netcp_xgbe_serdes_lane_enable(serdes_regs, i);
+
+ /* SB PLL Status Poll */
+ ret = netcp_xgbe_wait_pll_locked(sw_regs);
+ if (ret)
+ return ret;
+
+ netcp_xgbe_serdes_enable_xgmii_port(sw_regs);
+ netcp_xgbe_serdes_check_lane(serdes_regs, sw_regs);
+ return ret;
+}
+
+int netcp_xgbe_serdes_init(void __iomem *serdes_regs, void __iomem *xgbe_regs)
+{
+ u32 val;
+
+ /* read COMLANE bits 4:0 */
+ val = readl(serdes_regs + 0xa00);
+ if (val & 0x1f) {
+ pr_debug("XGBE: serdes already in operation - reset\n");
+ netcp_xgbe_reset_serdes(serdes_regs);
+ }
+ return netcp_xgbe_serdes_config(serdes_regs, xgbe_regs);
+}
struct rhine_private *rp = netdev_priv(dev);
void __iomem *ioaddr = rp->base;
- mii_check_media(&rp->mii_if, netif_msg_link(rp), init_media);
+ if (!rp->mii_if.force_media)
+ mii_check_media(&rp->mii_if, netif_msg_link(rp), init_media);
if (rp->mii_if.full_duplex)
iowrite8(ioread8(ioaddr + ChipCmd1) | Cmd1FDuplex,
rp->tx_ring[entry].desc_length =
cpu_to_le32(TXDESC | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN));
- if (unlikely(vlan_tx_tag_present(skb))) {
- u16 vid_pcp = vlan_tx_tag_get(skb);
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ u16 vid_pcp = skb_vlan_tag_get(skb);
/* drop CFI/DEI bit, register needs VID and PCP */
vid_pcp = (vid_pcp & VLAN_VID_MASK) |
/* Non-x86 Todo: explicitly flush cache lines here. */
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
/* Tx queues are bits 7-0 (first Tx queue: bit 7) */
BYTE_REG_BITS_ON(1 << 7, ioaddr + TQWake);
td_ptr->tdesc1.cmd = TCPLS_NORMAL + (tdinfo->nskb_dma + 1) * 16;
- if (vlan_tx_tag_present(skb)) {
- td_ptr->tdesc1.vlan = cpu_to_le16(vlan_tx_tag_get(skb));
+ if (skb_vlan_tag_present(skb)) {
+ td_ptr->tdesc1.vlan = cpu_to_le16(skb_vlan_tag_get(skb));
td_ptr->tdesc1.TCR |= TCR0_VETAG;
}
lp->regs = of_iomap(op->dev.of_node, 0);
if (!lp->regs) {
dev_err(&op->dev, "could not map temac regs.\n");
+ rc = -ENOMEM;
goto nodev;
}
np = of_parse_phandle(op->dev.of_node, "llink-connected", 0);
if (!np) {
dev_err(&op->dev, "could not find DMA node\n");
+ rc = -ENODEV;
goto err_iounmap;
}
lp->regs = of_iomap(op->dev.of_node, 0);
if (!lp->regs) {
dev_err(&op->dev, "could not map Axi Ethernet regs.\n");
+ ret = -ENOMEM;
goto nodev;
}
/* Setup checksum offload, but default to off if not specified */
np = of_parse_phandle(op->dev.of_node, "axistream-connected", 0);
if (!np) {
dev_err(&op->dev, "could not find DMA node\n");
+ ret = -ENODEV;
goto err_iounmap;
}
lp->dma_regs = of_iomap(np, 0);
res = platform_get_resource(ofdev, IORESOURCE_IRQ, 0);
if (!res) {
dev_err(dev, "no IRQ found\n");
+ rc = -ENXIO;
goto error;
}
}
}
-static struct regmap_config at86rf230_regmap_spi_config = {
+static const struct regmap_config at86rf230_regmap_spi_config = {
.reg_bits = 8,
.val_bits = 8,
.write_flag_mask = CMD_REG | CMD_WRITE,
if (mtt)
{
/* Check how much time we have used already */
- do_gettimeofday(&self->now);
-
- diff = self->now.tv_usec - self->stamp.tv_usec;
+ diff = ktime_us_delta(ktime_get(), self->stamp);
/* self->stamp is set from ali_ircc_dma_receive_complete() */
pr_debug("%s(), ******* diff = %d *******\n",
__func__, diff);
-
- if (diff < 0)
- diff += 1000000;
-
+
/* Check if the mtt is larger than the time we have
* already used by all the protocol processing
*/
* reduce the min turn time a bit since we will know
* how much time we have used for protocol processing
*/
- do_gettimeofday(&self->stamp);
+ self->stamp = ktime_get();
skb = dev_alloc_skb(len+1);
if (skb == NULL)
#ifndef ALI_IRCC_H
#define ALI_IRCC_H
-#include <linux/time.h>
+#include <linux/ktime.h>
#include <linux/spinlock.h>
#include <linux/pm.h>
unsigned char rcvFramesOverflow;
- struct timeval stamp;
- struct timeval now;
+ ktime_t stamp;
spinlock_t lock; /* For serializing operations */
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
-#include <linux/time.h>
#include <linux/types.h>
#include <linux/ioport.h>
iobuff_t rx_buff;
struct net_device *netdev;
- struct timeval stamp;
- struct timeval now;
struct qos_info qos;
struct irlap_cb *irlap;
mtt = irda_get_mtt(skb);
if (mtt) {
int diff;
- do_gettimeofday(&self->now);
- diff = self->now.tv_usec - self->stamp.tv_usec;
+ diff = ktime_us_delta(ktime_get(), self->stamp);
#ifdef IU_USB_MIN_RTT
/* Factor in USB delays -> Get rid of udelay() that
* would be lost in the noise - Jean II */
diff += IU_USB_MIN_RTT;
#endif /* IU_USB_MIN_RTT */
- /* If the usec counter did wraparound, the diff will
- * go negative (tv_usec is a long), so we need to
- * correct it by one second. Jean II */
- if (diff < 0)
- diff += 1000000;
/* Check if the mtt is larger than the time we have
* already used by all the protocol processing
* reduce the min turn time a bit since we will know
* how much time we have used for protocol processing
*/
- do_gettimeofday(&self->stamp);
+ self->stamp = ktime_get();
/* Check if we need to copy the data to a new skb or not.
* For most frames, we use ZeroCopy and pass the already
*
*****************************************************************************/
-#include <linux/time.h>
+#include <linux/ktime.h>
#include <net/irda/irda.h>
#include <net/irda/irda_device.h> /* struct irlap_cb */
char *speed_buff; /* Buffer for speed changes */
char *tx_buff;
- struct timeval stamp;
- struct timeval now;
+ ktime_t stamp;
spinlock_t lock; /* For serializing Tx operations */
(usually 8) */
iobuff_t rx_buff; /* receive unwrap state machine */
- struct timeval rx_time;
spinlock_t lock;
int receiving;
&kingsun->netdev->stats,
&kingsun->rx_buff, bytes[i]);
}
- do_gettimeofday(&kingsun->rx_time);
kingsun->receiving =
(kingsun->rx_buff.state != OUTSIDE_FRAME)
? 1 : 0;
skb_reserve(kingsun->rx_buff.skb, 1);
kingsun->rx_buff.head = kingsun->rx_buff.skb->data;
- do_gettimeofday(&kingsun->rx_time);
kingsun->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!kingsun->rx_urb)
__u8 *rx_buf;
__u8 rx_variable_xormask;
iobuff_t rx_unwrap_buff;
- struct timeval rx_time;
struct usb_ctrlrequest *speed_setuprequest;
struct urb *speed_urb;
bytes[i]);
}
}
- do_gettimeofday(&kingsun->rx_time);
kingsun->receiving =
(kingsun->rx_unwrap_buff.state != OUTSIDE_FRAME) ? 1 : 0;
}
skb_reserve(kingsun->rx_unwrap_buff.skb, 1);
kingsun->rx_unwrap_buff.head = kingsun->rx_unwrap_buff.skb->data;
- do_gettimeofday(&kingsun->rx_time);
kingsun->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!kingsun->rx_urb)
skb_reserve(mcs->rx_buff.skb, 1);
mcs->rx_buff.head = mcs->rx_buff.skb->data;
- do_gettimeofday(&mcs->rx_time);
/*
* Now that everything should be initialized properly,
mcs_unwrap_fir(mcs, urb->transfer_buffer,
urb->actual_length);
}
- do_gettimeofday(&mcs->rx_time);
}
ret = usb_submit_urb(urb, GFP_ATOMIC);
__u8 *fifo_status;
iobuff_t rx_buff; /* receive unwrap state machine */
- struct timeval rx_time;
spinlock_t lock;
int receiving;
mtt = irda_get_mtt(skb);
if (mtt) {
/* Check how much time we have used already */
- do_gettimeofday(&self->now);
- diff = self->now.tv_usec - self->stamp.tv_usec;
- if (diff < 0)
- diff += 1000000;
+ diff = ktime_us_delta(ktime_get(), self->stamp);
/* Check if the mtt is larger than the time we have
* already used by all the protocol processing
* reduce the min turn time a bit since we will know
* how much time we have used for protocol processing
*/
- do_gettimeofday(&self->stamp);
+ self->stamp = ktime_get();
skb = dev_alloc_skb(len+1);
if (skb == NULL) {
#ifndef NSC_IRCC_H
#define NSC_IRCC_H
-#include <linux/time.h>
+#include <linux/ktime.h>
#include <linux/spinlock.h>
#include <linux/pm.h>
__u8 ier; /* Interrupt enable register */
- struct timeval stamp;
- struct timeval now;
+ ktime_t stamp;
spinlock_t lock; /* For serializing operations */
#include <linux/moduleparam.h>
#include <linux/kernel.h>
+#include <linux/ktime.h>
#include <linux/types.h>
#include <linux/time.h>
#include <linux/skbuff.h>
__u8 *fifo_status;
iobuff_t rx_buff; /* receive unwrap state machine */
- struct timeval rx_time;
+ ktime_t rx_time;
int receiving;
struct urb *rx_urb;
};
static void turnaround_delay(const struct stir_cb *stir, long us)
{
long ticks;
- struct timeval now;
if (us <= 0)
return;
- do_gettimeofday(&now);
- if (now.tv_sec - stir->rx_time.tv_sec > 0)
- us -= USEC_PER_SEC;
- us -= now.tv_usec - stir->rx_time.tv_usec;
+ us -= ktime_us_delta(ktime_get(), stir->rx_time);
+
if (us < 10)
return;
pr_debug("receive %d\n", urb->actual_length);
unwrap_chars(stir, urb->transfer_buffer,
urb->actual_length);
-
- do_gettimeofday(&stir->rx_time);
+
+ stir->rx_time = ktime_get();
}
/* kernel thread is stopping receiver don't resubmit */
skb_reserve(stir->rx_buff.skb, 1);
stir->rx_buff.head = stir->rx_buff.skb->data;
- do_gettimeofday(&stir->rx_time);
+ stir->rx_time = ktime_get();
stir->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
if (!stir->rx_urb)
********************************************************************/
#ifndef via_IRCC_H
#define via_IRCC_H
-#include <linux/time.h>
#include <linux/spinlock.h>
#include <linux/pm.h>
#include <linux/types.h>
__u8 ier; /* Interrupt enable register */
- struct timeval stamp;
- struct timeval now;
-
spinlock_t lock; /* For serializing operations */
__u32 flags; /* Interface flags */
/********************************************************/
#include <linux/kernel.h>
+#include <linux/ktime.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/delay.h>
-#include <linux/time.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
+#include <linux/math64.h>
#include <linux/mutex.h>
#include <asm/uaccess.h>
#include <asm/byteorder.h>
vlsi_irda_dev_t *idev = netdev_priv(ndev);
u8 byte;
u16 word;
- unsigned delta1, delta2;
- struct timeval now;
+ s32 sec, usec;
unsigned iobase = ndev->base_addr;
seq_printf(seq, "\n%s link state: %s / %s / %s / %s\n", ndev->name,
seq_printf(seq, "\nsw-state:\n");
seq_printf(seq, "IrPHY setup: %d baud - %s encoding\n", idev->baud,
(idev->mode==IFF_SIR)?"SIR":((idev->mode==IFF_MIR)?"MIR":"FIR"));
- do_gettimeofday(&now);
- if (now.tv_usec >= idev->last_rx.tv_usec) {
- delta2 = now.tv_usec - idev->last_rx.tv_usec;
- delta1 = 0;
- }
- else {
- delta2 = 1000000 + now.tv_usec - idev->last_rx.tv_usec;
- delta1 = 1;
- }
- seq_printf(seq, "last rx: %lu.%06u sec\n",
- now.tv_sec - idev->last_rx.tv_sec - delta1, delta2);
+ sec = div_s64_rem(ktime_us_delta(ktime_get(), idev->last_rx),
+ USEC_PER_SEC, &usec);
+ seq_printf(seq, "last rx: %ul.%06u sec\n", sec, usec);
seq_printf(seq, "RX: packets=%lu / bytes=%lu / errors=%lu / dropped=%lu",
ndev->stats.rx_packets, ndev->stats.rx_bytes, ndev->stats.rx_errors,
}
}
- do_gettimeofday(&idev->last_rx); /* remember "now" for later mtt delay */
+ idev->last_rx = ktime_get(); /* remember "now" for later mtt delay */
vlsi_fill_rx(r);
unsigned iobase = ndev->base_addr;
u8 status;
u16 config;
- int mtt;
+ int mtt, diff;
int len, speed;
- struct timeval now, ready;
char *msg = NULL;
speed = irda_get_next_speed(skb);
spin_unlock_irqrestore(&idev->lock, flags);
if ((mtt = irda_get_mtt(skb)) > 0) {
-
- ready.tv_usec = idev->last_rx.tv_usec + mtt;
- ready.tv_sec = idev->last_rx.tv_sec;
- if (ready.tv_usec >= 1000000) {
- ready.tv_usec -= 1000000;
- ready.tv_sec++; /* IrLAP 1.1: mtt always < 1 sec */
- }
- for(;;) {
- do_gettimeofday(&now);
- if (now.tv_sec > ready.tv_sec ||
- (now.tv_sec==ready.tv_sec && now.tv_usec>=ready.tv_usec))
- break;
- udelay(100);
+ diff = ktime_us_delta(ktime_get(), idev->last_rx);
+ if (mtt > diff)
+ udelay(mtt - diff);
/* must not sleep here - called under netif_tx_lock! */
- }
}
/* tx buffer already owned by CPU due to pci_dma_sync_single_for_cpu()
vlsi_fill_rx(idev->rx_ring);
- do_gettimeofday(&idev->last_rx); /* first mtt may start from now on */
+ idev->last_rx = ktime_get(); /* first mtt may start from now on */
outw(0, iobase+VLSI_PIO_PROMPT); /* kick hw state machine */
if (!idev->irlap)
goto errout_free_ring;
- do_gettimeofday(&idev->last_rx); /* first mtt may start from now on */
+ idev->last_rx = ktime_get(); /* first mtt may start from now on */
idev->new_baud = 9600; /* start with IrPHY using 9600(SIR) mode */
void *virtaddr;
struct vlsi_ring *tx_ring, *rx_ring;
- struct timeval last_rx;
+ ktime_t last_rx;
spinlock_t lock;
struct mutex mtx;
};
EXPORT_SYMBOL_GPL(macvlan_link_register);
+static struct net *macvlan_get_link_net(const struct net_device *dev)
+{
+ return dev_net(macvlan_dev_real_dev(dev));
+}
+
static struct rtnl_link_ops macvlan_link_ops = {
.kind = "macvlan",
.setup = macvlan_setup,
.newlink = macvlan_newlink,
.dellink = macvlan_dellink,
+ .get_link_net = macvlan_get_link_net,
};
static int macvlan_device_event(struct notifier_block *unused,
if (skb->ip_summed == CHECKSUM_PARTIAL) {
vnet_hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
vnet_hdr->csum_start = cpu_to_macvtap16(q,
skb_checksum_start_offset(skb) + VLAN_HLEN);
else
total = vnet_hdr_len;
total += skb->len;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
struct {
__be16 h_vlan_proto;
__be16 h_vlan_TCI;
} veth;
veth.h_vlan_proto = skb->vlan_proto;
- veth.h_vlan_TCI = htons(vlan_tx_tag_get(skb));
+ veth.h_vlan_TCI = htons(skb_vlan_tag_get(skb));
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
total += VLAN_HLEN;
}
/**
- * mii_check_media - check the MII interface for a duplex change
+ * mii_check_media - check the MII interface for a carrier/speed/duplex change
* @mii: the MII interface
* @ok_to_print: OK to print link up/down messages
* @init_media: OK to save duplex mode in @mii
int advertise, lpa, media, duplex;
int lpa2 = 0;
- /* if forced media, go no further */
- if (mii->force_media)
- return 0; /* duplex did not change */
-
/* check current and old link status */
old_carrier = netif_carrier_ok(mii->dev) ? 1 : 0;
new_carrier = (unsigned int) mii_link_ok(mii);
*/
netif_carrier_on(mii->dev);
+ if (mii->force_media) {
+ if (ok_to_print)
+ netdev_info(mii->dev, "link up\n");
+ return 0; /* duplex did not change */
+ }
+
/* get MII advertise and LPA values */
if ((!init_media) && (mii->advertising))
advertise = mii->advertising;
config AMD_XGBE_PHY
tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs"
- depends on OF && HAS_IOMEM
+ depends on (OF || ACPI) && HAS_IOMEM
---help---
Currently supports the AMD 10GbE PHY
#include <linux/interrupt.h>
#include <linux/init.h>
#include <linux/delay.h>
+#include <linux/workqueue.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/of_platform.h>
#include <linux/of_device.h>
#include <linux/uaccess.h>
+#include <linux/bitops.h>
+#include <linux/property.h>
+#include <linux/acpi.h>
MODULE_AUTHOR("Tom Lendacky <thomas.lendacky@amd.com>");
MODULE_LICENSE("Dual BSD/GPL");
#define XGBE_PHY_MASK 0xfffffff0
#define XGBE_PHY_SPEEDSET_PROPERTY "amd,speed-set"
+#define XGBE_PHY_BLWC_PROPERTY "amd,serdes-blwc"
+#define XGBE_PHY_CDR_RATE_PROPERTY "amd,serdes-cdr-rate"
+#define XGBE_PHY_PQ_SKEW_PROPERTY "amd,serdes-pq-skew"
+#define XGBE_PHY_TX_AMP_PROPERTY "amd,serdes-tx-amp"
+
+#define XGBE_PHY_SPEEDS 3
+#define XGBE_PHY_SPEED_1000 0
+#define XGBE_PHY_SPEED_2500 1
+#define XGBE_PHY_SPEED_10000 2
#define XGBE_AN_INT_CMPLT 0x01
#define XGBE_AN_INC_LINK 0x02
#define XGBE_AN_PG_RCV 0x04
+#define XGBE_AN_INT_MASK 0x07
#define XNP_MCF_NULL_MESSAGE 0x001
-#define XNP_ACK_PROCESSED (1 << 12)
-#define XNP_MP_FORMATTED (1 << 13)
-#define XNP_NP_EXCHANGE (1 << 15)
+#define XNP_ACK_PROCESSED BIT(12)
+#define XNP_MP_FORMATTED BIT(13)
+#define XNP_NP_EXCHANGE BIT(15)
#define XGBE_PHY_RATECHANGE_COUNT 500
+#define XGBE_PHY_KR_TRAINING_START 0x01
+#define XGBE_PHY_KR_TRAINING_ENABLE 0x02
+
+#define XGBE_PHY_FEC_ENABLE 0x01
+#define XGBE_PHY_FEC_FORWARD 0x02
+#define XGBE_PHY_FEC_MASK 0x03
+
#ifndef MDIO_PMA_10GBR_PMD_CTRL
#define MDIO_PMA_10GBR_PMD_CTRL 0x0096
#endif
+#ifndef MDIO_PMA_10GBR_FEC_ABILITY
+#define MDIO_PMA_10GBR_FEC_ABILITY 0x00aa
+#endif
+
#ifndef MDIO_PMA_10GBR_FEC_CTRL
#define MDIO_PMA_10GBR_FEC_CTRL 0x00ab
#endif
#define MDIO_AN_XNP 0x0016
#endif
+#ifndef MDIO_AN_LPX
+#define MDIO_AN_LPX 0x0019
+#endif
+
#ifndef MDIO_AN_INTMASK
#define MDIO_AN_INTMASK 0x8001
#endif
#define MDIO_AN_INT 0x8002
#endif
-#ifndef MDIO_AN_KR_CTRL
-#define MDIO_AN_KR_CTRL 0x8003
-#endif
-
#ifndef MDIO_CTRL1_SPEED1G
#define MDIO_CTRL1_SPEED1G (MDIO_CTRL1_SPEED10G & ~BMCR_SPEED100)
#endif
-#ifndef MDIO_KR_CTRL_PDETECT
-#define MDIO_KR_CTRL_PDETECT 0x01
-#endif
-
/* SerDes integration register offsets */
#define SIR0_KR_RT_1 0x002c
#define SIR0_STATUS 0x0040
#define SIR0_STATUS_RX_READY_WIDTH 1
#define SIR0_STATUS_TX_READY_INDEX 8
#define SIR0_STATUS_TX_READY_WIDTH 1
+#define SIR1_SPEED_CDR_RATE_INDEX 12
+#define SIR1_SPEED_CDR_RATE_WIDTH 4
#define SIR1_SPEED_DATARATE_INDEX 4
#define SIR1_SPEED_DATARATE_WIDTH 2
-#define SIR1_SPEED_PI_SPD_SEL_INDEX 12
-#define SIR1_SPEED_PI_SPD_SEL_WIDTH 4
#define SIR1_SPEED_PLLSEL_INDEX 3
#define SIR1_SPEED_PLLSEL_WIDTH 1
#define SIR1_SPEED_RATECHANGE_INDEX 6
#define SIR1_SPEED_WORDMODE_INDEX 0
#define SIR1_SPEED_WORDMODE_WIDTH 3
+#define SPEED_10000_BLWC 0
#define SPEED_10000_CDR 0x7
#define SPEED_10000_PLL 0x1
+#define SPEED_10000_PQ 0x1e
#define SPEED_10000_RATE 0x0
#define SPEED_10000_TXAMP 0xa
#define SPEED_10000_WORD 0x7
+#define SPEED_2500_BLWC 1
#define SPEED_2500_CDR 0x2
#define SPEED_2500_PLL 0x0
+#define SPEED_2500_PQ 0xa
#define SPEED_2500_RATE 0x1
#define SPEED_2500_TXAMP 0xf
#define SPEED_2500_WORD 0x1
+#define SPEED_1000_BLWC 1
#define SPEED_1000_CDR 0x2
#define SPEED_1000_PLL 0x0
+#define SPEED_1000_PQ 0xa
#define SPEED_1000_RATE 0x3
#define SPEED_1000_TXAMP 0xf
#define SPEED_1000_WORD 0x1
#define RXTX_REG114_PQ_REG_INDEX 9
#define RXTX_REG114_PQ_REG_WIDTH 7
-#define RXTX_10000_BLWC 0
-#define RXTX_10000_PQ 0x1e
-
-#define RXTX_2500_BLWC 1
-#define RXTX_2500_PQ 0xa
-
-#define RXTX_1000_BLWC 1
-#define RXTX_1000_PQ 0xa
-
/* Bit setting and getting macros
* The get macro will extract the current bit field value from within
* the variable
XRXTX_IOWRITE((_priv), _reg, reg_val); \
} while (0)
+static const u32 amd_xgbe_phy_serdes_blwc[] = {
+ SPEED_1000_BLWC,
+ SPEED_2500_BLWC,
+ SPEED_10000_BLWC,
+};
+
+static const u32 amd_xgbe_phy_serdes_cdr_rate[] = {
+ SPEED_1000_CDR,
+ SPEED_2500_CDR,
+ SPEED_10000_CDR,
+};
+
+static const u32 amd_xgbe_phy_serdes_pq_skew[] = {
+ SPEED_1000_PQ,
+ SPEED_2500_PQ,
+ SPEED_10000_PQ,
+};
+
+static const u32 amd_xgbe_phy_serdes_tx_amp[] = {
+ SPEED_1000_TXAMP,
+ SPEED_2500_TXAMP,
+ SPEED_10000_TXAMP,
+};
+
enum amd_xgbe_phy_an {
AMD_XGBE_AN_READY = 0,
- AMD_XGBE_AN_START,
- AMD_XGBE_AN_EVENT,
AMD_XGBE_AN_PAGE_RECEIVED,
AMD_XGBE_AN_INCOMPAT_LINK,
AMD_XGBE_AN_COMPLETE,
AMD_XGBE_AN_NO_LINK,
- AMD_XGBE_AN_EXIT,
AMD_XGBE_AN_ERROR,
};
enum amd_xgbe_phy_rx {
- AMD_XGBE_RX_READY = 0,
- AMD_XGBE_RX_BPA,
+ AMD_XGBE_RX_BPA = 0,
AMD_XGBE_RX_XNP,
AMD_XGBE_RX_COMPLETE,
+ AMD_XGBE_RX_ERROR,
};
enum amd_xgbe_phy_mode {
};
enum amd_xgbe_phy_speedset {
- AMD_XGBE_PHY_SPEEDSET_1000_10000,
+ AMD_XGBE_PHY_SPEEDSET_1000_10000 = 0,
AMD_XGBE_PHY_SPEEDSET_2500_10000,
};
struct amd_xgbe_phy_priv {
struct platform_device *pdev;
+ struct acpi_device *adev;
struct device *dev;
struct phy_device *phydev;
void __iomem *sir0_regs; /* SerDes integration registers (1/2) */
void __iomem *sir1_regs; /* SerDes integration registers (2/2) */
- /* Maintain link status for re-starting auto-negotiation */
- unsigned int link;
+ int an_irq;
+ char an_irq_name[IFNAMSIZ + 32];
+ struct work_struct an_irq_work;
+ unsigned int an_irq_allocated;
+
unsigned int speed_set;
+ /* SerDes UEFI configurable settings.
+ * Switching between modes/speeds requires new values for some
+ * SerDes settings. The values can be supplied as device
+ * properties in array format. The first array entry is for
+ * 1GbE, the second for 2.5GbE and the third for 10GbE.
+ */
+ u32 serdes_blwc[XGBE_PHY_SPEEDS];
+ u32 serdes_cdr_rate[XGBE_PHY_SPEEDS];
+ u32 serdes_pq_skew[XGBE_PHY_SPEEDS];
+ u32 serdes_tx_amp[XGBE_PHY_SPEEDS];
+
/* Auto-negotiation state machine support */
struct mutex an_mutex;
enum amd_xgbe_phy_an an_result;
enum amd_xgbe_phy_rx kx_state;
struct work_struct an_work;
struct workqueue_struct *an_workqueue;
+ unsigned int an_supported;
unsigned int parallel_detect;
+ unsigned int fec_ability;
+
+ unsigned int lpm_ctrl; /* CTRL1 for resume */
};
static int amd_xgbe_an_enable_kr_training(struct phy_device *phydev)
if (ret < 0)
return ret;
- ret |= 0x02;
+ ret |= XGBE_PHY_KR_TRAINING_ENABLE;
phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
return 0;
if (ret < 0)
return ret;
- ret &= ~0x02;
+ ret &= ~XGBE_PHY_KR_TRAINING_ENABLE;
phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
return 0;
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_10000_RATE);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_10000_WORD);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP, SPEED_10000_TXAMP);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_10000_PLL);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PI_SPD_SEL, SPEED_10000_CDR);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA, RXTX_10000_BLWC);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, RXTX_10000_PQ);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
+ priv->serdes_cdr_rate[XGBE_PHY_SPEED_10000]);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
+ priv->serdes_tx_amp[XGBE_PHY_SPEED_10000]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+ priv->serdes_blwc[XGBE_PHY_SPEED_10000]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+ priv->serdes_pq_skew[XGBE_PHY_SPEED_10000]);
amd_xgbe_phy_serdes_complete_ratechange(phydev);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_2500_RATE);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_2500_WORD);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP, SPEED_2500_TXAMP);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_2500_PLL);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PI_SPD_SEL, SPEED_2500_CDR);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA, RXTX_2500_BLWC);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, RXTX_2500_PQ);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
+ priv->serdes_cdr_rate[XGBE_PHY_SPEED_2500]);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
+ priv->serdes_tx_amp[XGBE_PHY_SPEED_2500]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+ priv->serdes_blwc[XGBE_PHY_SPEED_2500]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+ priv->serdes_pq_skew[XGBE_PHY_SPEED_2500]);
amd_xgbe_phy_serdes_complete_ratechange(phydev);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, DATARATE, SPEED_1000_RATE);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, WORDMODE, SPEED_1000_WORD);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP, SPEED_1000_TXAMP);
XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PLLSEL, SPEED_1000_PLL);
- XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, PI_SPD_SEL, SPEED_1000_CDR);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA, RXTX_1000_BLWC);
- XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG, RXTX_1000_PQ);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, CDR_RATE,
+ priv->serdes_cdr_rate[XGBE_PHY_SPEED_1000]);
+ XSIR1_IOWRITE_BITS(priv, SIR1_SPEED, TXAMP,
+ priv->serdes_tx_amp[XGBE_PHY_SPEED_1000]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG20, BLWC_ENA,
+ priv->serdes_blwc[XGBE_PHY_SPEED_1000]);
+ XRXTX_IOWRITE_BITS(priv, RXTX_REG114, PQ_REG,
+ priv->serdes_pq_skew[XGBE_PHY_SPEED_1000]);
amd_xgbe_phy_serdes_complete_ratechange(phydev);
return ret;
}
+static int amd_xgbe_phy_set_an(struct phy_device *phydev, bool enable,
+ bool restart)
+{
+ int ret;
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
+ if (ret < 0)
+ return ret;
+
+ ret &= ~MDIO_AN_CTRL1_ENABLE;
+
+ if (enable)
+ ret |= MDIO_AN_CTRL1_ENABLE;
+
+ if (restart)
+ ret |= MDIO_AN_CTRL1_RESTART;
+
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1, ret);
+
+ return 0;
+}
+
+static int amd_xgbe_phy_restart_an(struct phy_device *phydev)
+{
+ return amd_xgbe_phy_set_an(phydev, true, true);
+}
+
+static int amd_xgbe_phy_disable_an(struct phy_device *phydev)
+{
+ return amd_xgbe_phy_set_an(phydev, false, false);
+}
+
static enum amd_xgbe_phy_an amd_xgbe_an_tx_training(struct phy_device *phydev,
enum amd_xgbe_phy_rx *state)
{
/* If we're not in KR mode then we're done */
if (!amd_xgbe_phy_in_kr_mode(phydev))
- return AMD_XGBE_AN_EVENT;
+ return AMD_XGBE_AN_PAGE_RECEIVED;
/* Enable/Disable FEC */
ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
if (ret < 0)
return AMD_XGBE_AN_ERROR;
+ ret &= ~XGBE_PHY_FEC_MASK;
if ((ad_reg & 0xc000) && (lp_reg & 0xc000))
- ret |= 0x01;
- else
- ret &= ~0x01;
+ ret |= priv->fec_ability;
phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_CTRL, ret);
if (ret < 0)
return AMD_XGBE_AN_ERROR;
- XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 1);
+ if (ret & XGBE_PHY_KR_TRAINING_ENABLE) {
+ XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 1);
- ret |= 0x01;
- phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL, ret);
+ ret |= XGBE_PHY_KR_TRAINING_START;
+ phy_write_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_PMD_CTRL,
+ ret);
- XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 0);
+ XSIR0_IOWRITE_BITS(priv, SIR0_KR_RT_1, RESET, 0);
+ }
- return AMD_XGBE_AN_EVENT;
+ return AMD_XGBE_AN_PAGE_RECEIVED;
}
static enum amd_xgbe_phy_an amd_xgbe_an_tx_xnp(struct phy_device *phydev,
phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP + 1, 0);
phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP, msg);
- return AMD_XGBE_AN_EVENT;
+ return AMD_XGBE_AN_PAGE_RECEIVED;
}
static enum amd_xgbe_phy_an amd_xgbe_an_rx_bpa(struct phy_device *phydev,
int ad_reg, lp_reg;
/* Check Extended Next Page support */
- ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+ ad_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_XNP);
if (ad_reg < 0)
return AMD_XGBE_AN_ERROR;
- lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPA);
+ lp_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_LPX);
if (lp_reg < 0)
return AMD_XGBE_AN_ERROR;
amd_xgbe_an_tx_training(phydev, state);
}
-static enum amd_xgbe_phy_an amd_xgbe_an_start(struct phy_device *phydev)
+static enum amd_xgbe_phy_an amd_xgbe_an_page_received(struct phy_device *phydev)
+{
+ struct amd_xgbe_phy_priv *priv = phydev->priv;
+ enum amd_xgbe_phy_rx *state;
+ int ret;
+
+ state = amd_xgbe_phy_in_kr_mode(phydev) ? &priv->kr_state
+ : &priv->kx_state;
+
+ switch (*state) {
+ case AMD_XGBE_RX_BPA:
+ ret = amd_xgbe_an_rx_bpa(phydev, state);
+ break;
+
+ case AMD_XGBE_RX_XNP:
+ ret = amd_xgbe_an_rx_xnp(phydev, state);
+ break;
+
+ default:
+ ret = AMD_XGBE_AN_ERROR;
+ }
+
+ return ret;
+}
+
+static enum amd_xgbe_phy_an amd_xgbe_an_incompat_link(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv = phydev->priv;
int ret;
/* Be sure we aren't looping trying to negotiate */
if (amd_xgbe_phy_in_kr_mode(phydev)) {
- if (priv->kr_state != AMD_XGBE_RX_READY)
+ priv->kr_state = AMD_XGBE_RX_ERROR;
+
+ if (!(phydev->supported & SUPPORTED_1000baseKX_Full) &&
+ !(phydev->supported & SUPPORTED_2500baseX_Full))
+ return AMD_XGBE_AN_NO_LINK;
+
+ if (priv->kx_state != AMD_XGBE_RX_BPA)
return AMD_XGBE_AN_NO_LINK;
- priv->kr_state = AMD_XGBE_RX_BPA;
} else {
- if (priv->kx_state != AMD_XGBE_RX_READY)
+ priv->kx_state = AMD_XGBE_RX_ERROR;
+
+ if (!(phydev->supported & SUPPORTED_10000baseKR_Full))
+ return AMD_XGBE_AN_NO_LINK;
+
+ if (priv->kr_state != AMD_XGBE_RX_BPA)
return AMD_XGBE_AN_NO_LINK;
- priv->kx_state = AMD_XGBE_RX_BPA;
}
- /* Set up Advertisement register 3 first */
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
- if (ret < 0)
+ ret = amd_xgbe_phy_disable_an(phydev);
+ if (ret)
return AMD_XGBE_AN_ERROR;
- if (phydev->supported & SUPPORTED_10000baseR_FEC)
- ret |= 0xc000;
- else
- ret &= ~0xc000;
-
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2, ret);
-
- /* Set up Advertisement register 2 next */
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
- if (ret < 0)
+ ret = amd_xgbe_phy_switch_mode(phydev);
+ if (ret)
return AMD_XGBE_AN_ERROR;
- if (phydev->supported & SUPPORTED_10000baseKR_Full)
- ret |= 0x80;
- else
- ret &= ~0x80;
-
- if ((phydev->supported & SUPPORTED_1000baseKX_Full) ||
- (phydev->supported & SUPPORTED_2500baseX_Full))
- ret |= 0x20;
- else
- ret &= ~0x20;
-
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1, ret);
-
- /* Set up Advertisement register 1 last */
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
- if (ret < 0)
+ ret = amd_xgbe_phy_restart_an(phydev);
+ if (ret)
return AMD_XGBE_AN_ERROR;
- if (phydev->supported & SUPPORTED_Pause)
- ret |= 0x400;
- else
- ret &= ~0x400;
+ return AMD_XGBE_AN_INCOMPAT_LINK;
+}
- if (phydev->supported & SUPPORTED_Asym_Pause)
- ret |= 0x800;
- else
- ret &= ~0x800;
+static irqreturn_t amd_xgbe_an_isr(int irq, void *data)
+{
+ struct amd_xgbe_phy_priv *priv = (struct amd_xgbe_phy_priv *)data;
- /* We don't intend to perform XNP */
- ret &= ~XNP_NP_EXCHANGE;
+ /* Interrupt reason must be read and cleared outside of IRQ context */
+ disable_irq_nosync(priv->an_irq);
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE, ret);
+ queue_work(priv->an_workqueue, &priv->an_irq_work);
- /* Enable and start auto-negotiation */
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+ return IRQ_HANDLED;
+}
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_KR_CTRL);
- if (ret < 0)
- return AMD_XGBE_AN_ERROR;
+static void amd_xgbe_an_irq_work(struct work_struct *work)
+{
+ struct amd_xgbe_phy_priv *priv = container_of(work,
+ struct amd_xgbe_phy_priv,
+ an_irq_work);
- ret |= MDIO_KR_CTRL_PDETECT;
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_KR_CTRL, ret);
+ /* Avoid a race between enabling the IRQ and exiting the work by
+ * waiting for the work to finish and then queueing it
+ */
+ flush_work(&priv->an_work);
+ queue_work(priv->an_workqueue, &priv->an_work);
+}
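MDIO accesses can sleep, so the auto-negotiation interrupt cannot be serviced entirely in hard-IRQ context: the ISR masks the line and defers, and the work handler unmasks it once processing is complete. A distilled sketch of that lifecycle, with sketch_ctx as a hypothetical container for the driver state:

#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct sketch_ctx {
	int irq;
	struct workqueue_struct *wq;
	struct work_struct work;
};

static irqreturn_t sketch_isr(int irq, void *data)
{
	struct sketch_ctx *ctx = data;

	disable_irq_nosync(irq);		/* mask without waiting for handlers */
	queue_work(ctx->wq, &ctx->work);	/* defer the (sleeping) MDIO work */
	return IRQ_HANDLED;
}

static void sketch_work(struct work_struct *work)
{
	struct sketch_ctx *ctx = container_of(work, struct sketch_ctx, work);

	/* ... read and clear the interrupt source, drive the state machine ... */
	enable_irq(ctx->irq);			/* unmask once processing is done */
}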
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
- if (ret < 0)
- return AMD_XGBE_AN_ERROR;
+static void amd_xgbe_an_state_machine(struct work_struct *work)
+{
+ struct amd_xgbe_phy_priv *priv = container_of(work,
+ struct amd_xgbe_phy_priv,
+ an_work);
+ struct phy_device *phydev = priv->phydev;
+ enum amd_xgbe_phy_an cur_state = priv->an_state;
+ int int_reg, int_mask;
- ret |= MDIO_AN_CTRL1_ENABLE;
- ret |= MDIO_AN_CTRL1_RESTART;
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1, ret);
+ mutex_lock(&priv->an_mutex);
- return AMD_XGBE_AN_EVENT;
-}
+ /* Read the interrupt */
+ int_reg = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT);
+ if (!int_reg)
+ goto out;
-static enum amd_xgbe_phy_an amd_xgbe_an_event(struct phy_device *phydev)
-{
- enum amd_xgbe_phy_an new_state;
- int ret;
+next_int:
+ if (int_reg < 0) {
+ priv->an_state = AMD_XGBE_AN_ERROR;
+ int_mask = XGBE_AN_INT_MASK;
+ } else if (int_reg & XGBE_AN_PG_RCV) {
+ priv->an_state = AMD_XGBE_AN_PAGE_RECEIVED;
+ int_mask = XGBE_AN_PG_RCV;
+ } else if (int_reg & XGBE_AN_INC_LINK) {
+ priv->an_state = AMD_XGBE_AN_INCOMPAT_LINK;
+ int_mask = XGBE_AN_INC_LINK;
+ } else if (int_reg & XGBE_AN_INT_CMPLT) {
+ priv->an_state = AMD_XGBE_AN_COMPLETE;
+ int_mask = XGBE_AN_INT_CMPLT;
+ } else {
+ priv->an_state = AMD_XGBE_AN_ERROR;
+ int_mask = 0;
+ }
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT);
- if (ret < 0)
- return AMD_XGBE_AN_ERROR;
+ /* Clear the interrupt to be processed */
+ int_reg &= ~int_mask;
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, int_reg);
- new_state = AMD_XGBE_AN_EVENT;
- if (ret & XGBE_AN_PG_RCV)
- new_state = AMD_XGBE_AN_PAGE_RECEIVED;
- else if (ret & XGBE_AN_INC_LINK)
- new_state = AMD_XGBE_AN_INCOMPAT_LINK;
- else if (ret & XGBE_AN_INT_CMPLT)
- new_state = AMD_XGBE_AN_COMPLETE;
+ priv->an_result = priv->an_state;
- if (new_state != AMD_XGBE_AN_EVENT)
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+again:
+ cur_state = priv->an_state;
- return new_state;
-}
+ switch (priv->an_state) {
+ case AMD_XGBE_AN_READY:
+ priv->an_supported = 0;
+ break;
-static enum amd_xgbe_phy_an amd_xgbe_an_page_received(struct phy_device *phydev)
-{
- struct amd_xgbe_phy_priv *priv = phydev->priv;
- enum amd_xgbe_phy_rx *state;
- int ret;
+ case AMD_XGBE_AN_PAGE_RECEIVED:
+ priv->an_state = amd_xgbe_an_page_received(phydev);
+ priv->an_supported++;
+ break;
- state = amd_xgbe_phy_in_kr_mode(phydev) ? &priv->kr_state
- : &priv->kx_state;
+ case AMD_XGBE_AN_INCOMPAT_LINK:
+ priv->an_supported = 0;
+ priv->parallel_detect = 0;
+ priv->an_state = amd_xgbe_an_incompat_link(phydev);
+ break;
- switch (*state) {
- case AMD_XGBE_RX_BPA:
- ret = amd_xgbe_an_rx_bpa(phydev, state);
+ case AMD_XGBE_AN_COMPLETE:
+ priv->parallel_detect = priv->an_supported ? 0 : 1;
+ netdev_dbg(phydev->attached_dev, "%s successful\n",
+ priv->an_supported ? "Auto negotiation"
+ : "Parallel detection");
break;
- case AMD_XGBE_RX_XNP:
- ret = amd_xgbe_an_rx_xnp(phydev, state);
+ case AMD_XGBE_AN_NO_LINK:
break;
default:
- ret = AMD_XGBE_AN_ERROR;
+ priv->an_state = AMD_XGBE_AN_ERROR;
}
- return ret;
-}
+ if (priv->an_state == AMD_XGBE_AN_NO_LINK) {
+ int_reg = 0;
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+ } else if (priv->an_state == AMD_XGBE_AN_ERROR) {
+ netdev_err(phydev->attached_dev,
+ "error during auto-negotiation, state=%u\n",
+ cur_state);
-static enum amd_xgbe_phy_an amd_xgbe_an_incompat_link(struct phy_device *phydev)
-{
- int ret;
+ int_reg = 0;
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+ }
- ret = amd_xgbe_phy_switch_mode(phydev);
- if (ret)
- return AMD_XGBE_AN_ERROR;
+ if (priv->an_state >= AMD_XGBE_AN_COMPLETE) {
+ priv->an_result = priv->an_state;
+ priv->an_state = AMD_XGBE_AN_READY;
+ priv->kr_state = AMD_XGBE_RX_BPA;
+ priv->kx_state = AMD_XGBE_RX_BPA;
+ }
- return AMD_XGBE_AN_START;
-}
+ if (cur_state != priv->an_state)
+ goto again;
-static void amd_xgbe_an_state_machine(struct work_struct *work)
-{
- struct amd_xgbe_phy_priv *priv = container_of(work,
- struct amd_xgbe_phy_priv,
- an_work);
- struct phy_device *phydev = priv->phydev;
- enum amd_xgbe_phy_an cur_state;
- int sleep;
- unsigned int an_supported = 0;
+ if (int_reg)
+ goto next_int;
- /* Start in KX mode */
- if (amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX))
- priv->an_state = AMD_XGBE_AN_ERROR;
+out:
+ enable_irq(priv->an_irq);
- while (1) {
- mutex_lock(&priv->an_mutex);
+ mutex_unlock(&priv->an_mutex);
+}
- cur_state = priv->an_state;
+static int amd_xgbe_an_init(struct phy_device *phydev)
+{
+ int ret;
- switch (priv->an_state) {
- case AMD_XGBE_AN_START:
- an_supported = 0;
- priv->parallel_detect = 0;
- priv->an_state = amd_xgbe_an_start(phydev);
- break;
+ /* Set up Advertisement register 3 first */
+ ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2);
+ if (ret < 0)
+ return ret;
- case AMD_XGBE_AN_EVENT:
- priv->an_state = amd_xgbe_an_event(phydev);
- break;
+ if (phydev->supported & SUPPORTED_10000baseR_FEC)
+ ret |= 0xc000;
+ else
+ ret &= ~0xc000;
- case AMD_XGBE_AN_PAGE_RECEIVED:
- priv->an_state = amd_xgbe_an_page_received(phydev);
- an_supported++;
- break;
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 2, ret);
- case AMD_XGBE_AN_INCOMPAT_LINK:
- priv->an_state = amd_xgbe_an_incompat_link(phydev);
- break;
+ /* Set up Advertisement register 2 next */
+ ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1);
+ if (ret < 0)
+ return ret;
- case AMD_XGBE_AN_COMPLETE:
- priv->parallel_detect = an_supported ? 0 : 1;
- netdev_info(phydev->attached_dev, "%s successful\n",
- an_supported ? "Auto negotiation"
- : "Parallel detection");
- /* fall through */
+ if (phydev->supported & SUPPORTED_10000baseKR_Full)
+ ret |= 0x80;
+ else
+ ret &= ~0x80;
- case AMD_XGBE_AN_NO_LINK:
- case AMD_XGBE_AN_EXIT:
- goto exit_unlock;
+ if ((phydev->supported & SUPPORTED_1000baseKX_Full) ||
+ (phydev->supported & SUPPORTED_2500baseX_Full))
+ ret |= 0x20;
+ else
+ ret &= ~0x20;
- default:
- priv->an_state = AMD_XGBE_AN_ERROR;
- }
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE + 1, ret);
- if (priv->an_state == AMD_XGBE_AN_ERROR) {
- netdev_err(phydev->attached_dev,
- "error during auto-negotiation, state=%u\n",
- cur_state);
- goto exit_unlock;
- }
+ /* Set up Advertisement register 1 last */
+ ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE);
+ if (ret < 0)
+ return ret;
- sleep = (priv->an_state == AMD_XGBE_AN_EVENT) ? 1 : 0;
+ if (phydev->supported & SUPPORTED_Pause)
+ ret |= 0x400;
+ else
+ ret &= ~0x400;
- mutex_unlock(&priv->an_mutex);
+ if (phydev->supported & SUPPORTED_Asym_Pause)
+ ret |= 0x800;
+ else
+ ret &= ~0x800;
- if (sleep)
- usleep_range(20, 50);
- }
+ /* We don't intend to perform XNP */
+ ret &= ~XNP_NP_EXCHANGE;
-exit_unlock:
- priv->an_result = priv->an_state;
- priv->an_state = AMD_XGBE_AN_READY;
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_ADVERTISE, ret);
- mutex_unlock(&priv->an_mutex);
+ return 0;
}
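For reference, the magic masks written by amd_xgbe_an_init() correspond to IEEE 802.3 clause 73 ability bits: 0x400/0x800 in the base page (MDIO_AN_ADVERTISE) are the C0/C1 pause bits, 0x20 and 0x80 in the next register select 1000BASE-KX and 10GBASE-KR, and 0xc000 in the third register covers the two FEC ability bits.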
static int amd_xgbe_phy_soft_reset(struct phy_device *phydev)
if (ret & MDIO_CTRL1_RESET)
return -ETIMEDOUT;
- /* Make sure the XPCS and SerDes are in compatible states */
- return amd_xgbe_phy_xgmii_mode(phydev);
+ /* Disable auto-negotiation for now */
+ ret = amd_xgbe_phy_disable_an(phydev);
+ if (ret < 0)
+ return ret;
+
+ /* Clear auto-negotiation interrupts */
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
+ return 0;
}
static int amd_xgbe_phy_config_init(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv = phydev->priv;
+ struct net_device *netdev = phydev->attached_dev;
+ int ret;
+
+ if (!priv->an_irq_allocated) {
+ /* Allocate the auto-negotiation workqueue and interrupt */
+ snprintf(priv->an_irq_name, sizeof(priv->an_irq_name) - 1,
+ "%s-pcs", netdev_name(netdev));
+
+ priv->an_workqueue =
+ create_singlethread_workqueue(priv->an_irq_name);
+ if (!priv->an_workqueue) {
+ netdev_err(netdev, "phy workqueue creation failed\n");
+ return -ENOMEM;
+ }
+
+ ret = devm_request_irq(priv->dev, priv->an_irq,
+ amd_xgbe_an_isr, 0, priv->an_irq_name,
+ priv);
+ if (ret) {
+ netdev_err(netdev, "phy irq request failed\n");
+ destroy_workqueue(priv->an_workqueue);
+ return ret;
+ }
+
+ priv->an_irq_allocated = 1;
+ }
+
+ ret = phy_read_mmd(phydev, MDIO_MMD_PMAPMD, MDIO_PMA_10GBR_FEC_ABILITY);
+ if (ret < 0)
+ return ret;
+ priv->fec_ability = ret & XGBE_PHY_FEC_MASK;
/* Initialize supported features */
phydev->supported = SUPPORTED_Autoneg;
phydev->supported |= SUPPORTED_Pause | SUPPORTED_Asym_Pause;
phydev->supported |= SUPPORTED_Backplane;
- phydev->supported |= SUPPORTED_10000baseKR_Full |
- SUPPORTED_10000baseR_FEC;
+ phydev->supported |= SUPPORTED_10000baseKR_Full;
switch (priv->speed_set) {
case AMD_XGBE_PHY_SPEEDSET_1000_10000:
phydev->supported |= SUPPORTED_1000baseKX_Full;
phydev->supported |= SUPPORTED_2500baseX_Full;
break;
}
+
+ if (priv->fec_ability & XGBE_PHY_FEC_ENABLE)
+ phydev->supported |= SUPPORTED_10000baseR_FEC;
+
phydev->advertising = phydev->supported;
- /* Turn off and clear interrupts */
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INTMASK, 0);
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+ /* Set initial mode - call the mode setting routines
+ * directly to ensure we are properly configured
+ */
+ if (phydev->supported & SUPPORTED_10000baseKR_Full)
+ ret = amd_xgbe_phy_xgmii_mode(phydev);
+ else if (phydev->supported & SUPPORTED_1000baseKX_Full)
+ ret = amd_xgbe_phy_gmii_mode(phydev);
+ else if (phydev->supported & SUPPORTED_2500baseX_Full)
+ ret = amd_xgbe_phy_gmii_2500_mode(phydev);
+ else
+ ret = -EINVAL;
+ if (ret < 0)
+ return ret;
+
+ /* Set up advertisement registers based on current settings */
+ ret = amd_xgbe_an_init(phydev);
+ if (ret)
+ return ret;
+
+ /* Enable auto-negotiation interrupts */
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INTMASK, XGBE_AN_INT_MASK);
return 0;
}
int ret;
/* Disable auto-negotiation */
- ret = phy_read_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1);
+ ret = amd_xgbe_phy_disable_an(phydev);
if (ret < 0)
return ret;
- ret &= ~MDIO_AN_CTRL1_ENABLE;
- phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_CTRL1, ret);
-
/* Validate/Set specified speed */
switch (phydev->speed) {
case SPEED_10000:
- ret = amd_xgbe_phy_xgmii_mode(phydev);
+ ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
break;
case SPEED_2500:
- ret = amd_xgbe_phy_gmii_2500_mode(phydev);
- break;
-
case SPEED_1000:
- ret = amd_xgbe_phy_gmii_mode(phydev);
+ ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
break;
default:
return 0;
}
-static int amd_xgbe_phy_config_aneg(struct phy_device *phydev)
+static int __amd_xgbe_phy_config_aneg(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv = phydev->priv;
u32 mmd_mask = phydev->c45_ids.devices_in_package;
+ int ret;
if (phydev->autoneg != AUTONEG_ENABLE)
return amd_xgbe_phy_setup_forced(phydev);
if (!(mmd_mask & MDIO_DEVS_AN))
return -EINVAL;
- /* Start/Restart the auto-negotiation state machine */
- mutex_lock(&priv->an_mutex);
+ /* Disable auto-negotiation interrupt */
+ disable_irq(priv->an_irq);
+
+ /* Start auto-negotiation in a supported mode */
+ if (phydev->supported & SUPPORTED_10000baseKR_Full)
+ ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KR);
+ else if ((phydev->supported & SUPPORTED_1000baseKX_Full) ||
+ (phydev->supported & SUPPORTED_2500baseX_Full))
+ ret = amd_xgbe_phy_set_mode(phydev, AMD_XGBE_MODE_KX);
+ else
+ ret = -EINVAL;
+ if (ret < 0) {
+ enable_irq(priv->an_irq);
+ return ret;
+ }
+
+ /* Disable and stop any in-progress auto-negotiation */
+ ret = amd_xgbe_phy_disable_an(phydev);
+ if (ret < 0)
+ return ret;
+
+ /* Clear any auto-negotiation interrupts */
+ phy_write_mmd(phydev, MDIO_MMD_AN, MDIO_AN_INT, 0);
+
priv->an_result = AMD_XGBE_AN_READY;
- priv->an_state = AMD_XGBE_AN_START;
- priv->kr_state = AMD_XGBE_RX_READY;
- priv->kx_state = AMD_XGBE_RX_READY;
- mutex_unlock(&priv->an_mutex);
+ priv->an_state = AMD_XGBE_AN_READY;
+ priv->kr_state = AMD_XGBE_RX_BPA;
+ priv->kx_state = AMD_XGBE_RX_BPA;
- queue_work(priv->an_workqueue, &priv->an_work);
+ /* Re-enable auto-negotiation interrupt */
+ enable_irq(priv->an_irq);
- return 0;
+ /* Set up advertisement registers based on current settings */
+ ret = amd_xgbe_an_init(phydev);
+ if (ret)
+ return ret;
+
+ /* Enable and start auto-negotiation */
+ return amd_xgbe_phy_restart_an(phydev);
}
-static int amd_xgbe_phy_aneg_done(struct phy_device *phydev)
+static int amd_xgbe_phy_config_aneg(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv = phydev->priv;
- enum amd_xgbe_phy_an state;
+ int ret;
mutex_lock(&priv->an_mutex);
- state = priv->an_result;
+
+ ret = __amd_xgbe_phy_config_aneg(phydev);
+
mutex_unlock(&priv->an_mutex);
- return (state == AMD_XGBE_AN_COMPLETE);
+ return ret;
+}
+
+static int amd_xgbe_phy_aneg_done(struct phy_device *phydev)
+{
+ struct amd_xgbe_phy_priv *priv = phydev->priv;
+
+ return (priv->an_result == AMD_XGBE_AN_COMPLETE);
}
static int amd_xgbe_phy_update_link(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv = phydev->priv;
- enum amd_xgbe_phy_an state;
- unsigned int check_again, autoneg;
int ret;
/* If we're doing auto-negotiation don't report link down */
- mutex_lock(&priv->an_mutex);
- state = priv->an_state;
- mutex_unlock(&priv->an_mutex);
-
- if (state != AMD_XGBE_AN_READY) {
+ if (priv->an_state != AMD_XGBE_AN_READY) {
phydev->link = 1;
return 0;
}
- /* Since the device can be in the wrong mode when a link is
- * (re-)established (cable connected after the interface is
- * up, etc.), the link status may report no link. If there
- * is no link, try switching modes and checking the status
- * again if auto negotiation is enabled.
- */
- check_again = (phydev->autoneg == AUTONEG_ENABLE) ? 1 : 0;
-again:
/* Link status is latched low, so read once to clear
* and then read again to get current state
*/
phydev->link = (ret & MDIO_STAT1_LSTATUS) ? 1 : 0;
- if (!phydev->link) {
- if (check_again) {
- ret = amd_xgbe_phy_switch_mode(phydev);
- if (ret < 0)
- return ret;
- check_again = 0;
- goto again;
- }
- }
-
- autoneg = (phydev->link && !priv->link) ? 1 : 0;
- priv->link = phydev->link;
- if (autoneg) {
- /* Link is (back) up, re-start auto-negotiation */
- ret = amd_xgbe_phy_config_aneg(phydev);
- if (ret < 0)
- return ret;
- }
-
return 0;
}
static int amd_xgbe_phy_suspend(struct phy_device *phydev)
{
+ struct amd_xgbe_phy_priv *priv = phydev->priv;
int ret;
mutex_lock(&phydev->lock);
if (ret < 0)
goto unlock;
+ priv->lpm_ctrl = ret;
+
ret |= MDIO_CTRL1_LPOWER;
phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
static int amd_xgbe_phy_resume(struct phy_device *phydev)
{
- int ret;
+ struct amd_xgbe_phy_priv *priv = phydev->priv;
mutex_lock(&phydev->lock);
- ret = phy_read_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1);
- if (ret < 0)
- goto unlock;
+ priv->lpm_ctrl &= ~MDIO_CTRL1_LPOWER;
+ phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, priv->lpm_ctrl);
- ret &= ~MDIO_CTRL1_LPOWER;
- phy_write_mmd(phydev, MDIO_MMD_PCS, MDIO_CTRL1, ret);
+ mutex_unlock(&phydev->lock);
- ret = 0;
+ return 0;
+}
-unlock:
- mutex_unlock(&phydev->lock);
+static unsigned int amd_xgbe_phy_resource_count(struct platform_device *pdev,
+ unsigned int type)
+{
+ unsigned int count;
+ int i;
- return ret;
+ for (i = 0, count = 0; i < pdev->num_resources; i++) {
+ struct resource *r = &pdev->resource[i];
+
+ if (type == resource_type(r))
+ count++;
+ }
+
+ return count;
}
static int amd_xgbe_phy_probe(struct phy_device *phydev)
{
struct amd_xgbe_phy_priv *priv;
- struct platform_device *pdev;
- struct device *dev;
- char *wq_name;
- const __be32 *property;
- unsigned int speed_set;
+ struct platform_device *phy_pdev;
+ struct device *dev, *phy_dev;
+ unsigned int phy_resnum, phy_irqnum;
int ret;
- if (!phydev->dev.of_node)
+ if (!phydev->bus || !phydev->bus->parent)
return -EINVAL;
- pdev = of_find_device_by_node(phydev->dev.of_node);
- if (!pdev)
- return -EINVAL;
- dev = &pdev->dev;
-
- wq_name = kasprintf(GFP_KERNEL, "%s-amd-xgbe-phy", phydev->bus->name);
- if (!wq_name) {
- ret = -ENOMEM;
- goto err_pdev;
- }
+ dev = phydev->bus->parent;
priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
- if (!priv) {
- ret = -ENOMEM;
- goto err_name;
- }
+ if (!priv)
+ return -ENOMEM;
- priv->pdev = pdev;
+ priv->pdev = to_platform_device(dev);
+ priv->adev = ACPI_COMPANION(dev);
priv->dev = dev;
priv->phydev = phydev;
+ mutex_init(&priv->an_mutex);
+ INIT_WORK(&priv->an_irq_work, amd_xgbe_an_irq_work);
+ INIT_WORK(&priv->an_work, amd_xgbe_an_state_machine);
+
+ if (!priv->adev || acpi_disabled) {
+ struct device_node *bus_node;
+ struct device_node *phy_node;
+
+ bus_node = priv->dev->of_node;
+ phy_node = of_parse_phandle(bus_node, "phy-handle", 0);
+ if (!phy_node) {
+ dev_err(dev, "unable to parse phy-handle\n");
+ ret = -EINVAL;
+ goto err_priv;
+ }
+
+ phy_pdev = of_find_device_by_node(phy_node);
+ of_node_put(phy_node);
+
+ if (!phy_pdev) {
+ dev_err(dev, "unable to obtain phy device\n");
+ ret = -EINVAL;
+ goto err_priv;
+ }
+
+ phy_resnum = 0;
+ phy_irqnum = 0;
+ } else {
+ /* In ACPI, the XGBE and PHY resources are grouped together,
+ * with the PHY resources at the end: the PHY's MMIO regions
+ * are the last three MEM resources and its interrupt is the
+ * last IRQ resource
+ */
+ phy_pdev = priv->pdev;
+ phy_resnum = amd_xgbe_phy_resource_count(phy_pdev,
+ IORESOURCE_MEM) - 3;
+ phy_irqnum = amd_xgbe_phy_resource_count(phy_pdev,
+ IORESOURCE_IRQ) - 1;
+ }
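As the comment above notes, the PHY resources trail the MAC's: for example, if the combined ACPI device exposed seven MEM resources and two IRQs (hypothetical counts), the rxtx/sir0/sir1 regions would be MEM indices 4, 5 and 6, and the auto-negotiation interrupt would be IRQ index 1.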
+ phy_dev = &phy_pdev->dev;
/* Get the device mmio areas */
- priv->rxtx_res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ priv->rxtx_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
+ phy_resnum++);
priv->rxtx_regs = devm_ioremap_resource(dev, priv->rxtx_res);
if (IS_ERR(priv->rxtx_regs)) {
dev_err(dev, "rxtx ioremap failed\n");
ret = PTR_ERR(priv->rxtx_regs);
- goto err_priv;
+ goto err_put;
}
- priv->sir0_res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ priv->sir0_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
+ phy_resnum++);
priv->sir0_regs = devm_ioremap_resource(dev, priv->sir0_res);
if (IS_ERR(priv->sir0_regs)) {
dev_err(dev, "sir0 ioremap failed\n");
goto err_rxtx;
}
- priv->sir1_res = platform_get_resource(pdev, IORESOURCE_MEM, 2);
+ priv->sir1_res = platform_get_resource(phy_pdev, IORESOURCE_MEM,
+ phy_resnum++);
priv->sir1_regs = devm_ioremap_resource(dev, priv->sir1_res);
if (IS_ERR(priv->sir1_regs)) {
dev_err(dev, "sir1 ioremap failed\n");
goto err_sir0;
}
+ /* Get the auto-negotiation interrupt */
+ ret = platform_get_irq(phy_pdev, phy_irqnum);
+ if (ret < 0) {
+ dev_err(dev, "platform_get_irq failed\n");
+ goto err_sir1;
+ }
+ priv->an_irq = ret;
+
/* Get the device speed set property */
- speed_set = 0;
- property = of_get_property(dev->of_node, XGBE_PHY_SPEEDSET_PROPERTY,
- NULL);
- if (property)
- speed_set = be32_to_cpu(*property);
-
- switch (speed_set) {
- case 0:
- priv->speed_set = AMD_XGBE_PHY_SPEEDSET_1000_10000;
- break;
- case 1:
- priv->speed_set = AMD_XGBE_PHY_SPEEDSET_2500_10000;
+ ret = device_property_read_u32(phy_dev, XGBE_PHY_SPEEDSET_PROPERTY,
+ &priv->speed_set);
+ if (ret) {
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_SPEEDSET_PROPERTY);
+ goto err_sir1;
+ }
+
+ switch (priv->speed_set) {
+ case AMD_XGBE_PHY_SPEEDSET_1000_10000:
+ case AMD_XGBE_PHY_SPEEDSET_2500_10000:
break;
default:
- dev_err(dev, "invalid amd,speed-set property\n");
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_SPEEDSET_PROPERTY);
ret = -EINVAL;
goto err_sir1;
}
- priv->link = 1;
+ if (device_property_present(phy_dev, XGBE_PHY_BLWC_PROPERTY)) {
+ ret = device_property_read_u32_array(phy_dev,
+ XGBE_PHY_BLWC_PROPERTY,
+ priv->serdes_blwc,
+ XGBE_PHY_SPEEDS);
+ if (ret) {
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_BLWC_PROPERTY);
+ goto err_sir1;
+ }
+ } else {
+ memcpy(priv->serdes_blwc, amd_xgbe_phy_serdes_blwc,
+ sizeof(priv->serdes_blwc));
+ }
- mutex_init(&priv->an_mutex);
- INIT_WORK(&priv->an_work, amd_xgbe_an_state_machine);
- priv->an_workqueue = create_singlethread_workqueue(wq_name);
- if (!priv->an_workqueue) {
- ret = -ENOMEM;
- goto err_sir1;
+ if (device_property_present(phy_dev, XGBE_PHY_CDR_RATE_PROPERTY)) {
+ ret = device_property_read_u32_array(phy_dev,
+ XGBE_PHY_CDR_RATE_PROPERTY,
+ priv->serdes_cdr_rate,
+ XGBE_PHY_SPEEDS);
+ if (ret) {
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_CDR_RATE_PROPERTY);
+ goto err_sir1;
+ }
+ } else {
+ memcpy(priv->serdes_cdr_rate, amd_xgbe_phy_serdes_cdr_rate,
+ sizeof(priv->serdes_cdr_rate));
+ }
+
+ if (device_property_present(phy_dev, XGBE_PHY_PQ_SKEW_PROPERTY)) {
+ ret = device_property_read_u32_array(phy_dev,
+ XGBE_PHY_PQ_SKEW_PROPERTY,
+ priv->serdes_pq_skew,
+ XGBE_PHY_SPEEDS);
+ if (ret) {
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_PQ_SKEW_PROPERTY);
+ goto err_sir1;
+ }
+ } else {
+ memcpy(priv->serdes_pq_skew, amd_xgbe_phy_serdes_pq_skew,
+ sizeof(priv->serdes_pq_skew));
+ }
+
+ if (device_property_present(phy_dev, XGBE_PHY_TX_AMP_PROPERTY)) {
+ ret = device_property_read_u32_array(phy_dev,
+ XGBE_PHY_TX_AMP_PROPERTY,
+ priv->serdes_tx_amp,
+ XGBE_PHY_SPEEDS);
+ if (ret) {
+ dev_err(dev, "invalid %s property\n",
+ XGBE_PHY_TX_AMP_PROPERTY);
+ goto err_sir1;
+ }
+ } else {
+ memcpy(priv->serdes_tx_amp, amd_xgbe_phy_serdes_tx_amp,
+ sizeof(priv->serdes_tx_amp));
}
phydev->priv = priv;
- kfree(wq_name);
- of_dev_put(pdev);
+ if (!priv->adev || acpi_disabled)
+ platform_device_put(phy_pdev);
return 0;
devm_release_mem_region(dev, priv->rxtx_res->start,
resource_size(priv->rxtx_res));
+err_put:
+ if (!priv->adev || acpi_disabled)
+ platform_device_put(phy_pdev);
+
err_priv:
devm_kfree(dev, priv);
-err_name:
- kfree(wq_name);
-
-err_pdev:
- of_dev_put(pdev);
-
return ret;
}
struct amd_xgbe_phy_priv *priv = phydev->priv;
struct device *dev = priv->dev;
- /* Stop any in process auto-negotiation */
- mutex_lock(&priv->an_mutex);
- priv->an_state = AMD_XGBE_AN_EXIT;
- mutex_unlock(&priv->an_mutex);
+ if (priv->an_irq_allocated) {
+ devm_free_irq(dev, priv->an_irq, priv);
- flush_workqueue(priv->an_workqueue);
- destroy_workqueue(priv->an_workqueue);
+ flush_workqueue(priv->an_workqueue);
+ destroy_workqueue(priv->an_workqueue);
+ }
/* Release resources */
devm_iounmap(dev, priv->sir1_regs);
struct fixed_mdio_bus *fmb = &platform_fmb;
struct fixed_phy *fp;
- if (!link_update || !phydev || !phydev->bus)
+ if (!phydev || !phydev->bus)
return -EINVAL;
list_for_each_entry(fp, &fmb->phys, node) {
static int __team_option_inst_add_option(struct team *team,
struct team_option *option)
{
- struct team_port *port;
int err;
if (!option->per_port) {
if (err)
goto inst_del_option;
}
-
- list_for_each_entry(port, &team->port_list, list) {
- err = __team_option_inst_add(team, option, port);
- if (err)
- goto inst_del_option;
- }
return 0;
inst_del_option:
static void team_notify_peers_work(struct work_struct *work)
{
struct team *team;
+ int val;
team = container_of(work, struct team, notify_peers.dw.work);
schedule_delayed_work(&team->notify_peers.dw, 0);
return;
}
+ val = atomic_dec_if_positive(&team->notify_peers.count_pending);
+ if (val < 0) {
+ rtnl_unlock();
+ return;
+ }
call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, team->dev);
rtnl_unlock();
- if (!atomic_dec_and_test(&team->notify_peers.count_pending))
+ if (val)
schedule_delayed_work(&team->notify_peers.dw,
msecs_to_jiffies(team->notify_peers.interval));
}
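atomic_dec_if_positive() returns one less than the old value and only stores the result when it is non-negative, so val < 0 means count_pending was already zero and the work bails out without notifying, val == 0 means this was the last pending notification, and val > 0 re-arms the delayed work. The same pattern is applied to mcast_rejoin below.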
static void team_mcast_rejoin_work(struct work_struct *work)
{
struct team *team;
+ int val;
team = container_of(work, struct team, mcast_rejoin.dw.work);
schedule_delayed_work(&team->mcast_rejoin.dw, 0);
return;
}
+ val = atomic_dec_if_positive(&team->mcast_rejoin.count_pending);
+ if (val < 0) {
+ rtnl_unlock();
+ return;
+ }
call_netdevice_notifiers(NETDEV_RESEND_IGMP, team->dev);
rtnl_unlock();
- if (!atomic_dec_and_test(&team->mcast_rejoin.count_pending))
+ if (val)
schedule_delayed_work(&team->mcast_rejoin.dw,
msecs_to_jiffies(team->mcast_rejoin.interval));
}
unsigned char addr[FLT_EXACT_COUNT][ETH_ALEN];
};
-/* DEFAULT_MAX_NUM_RSS_QUEUES were chosen to let the rx/tx queues allocated for
- * the netdevice to be fit in one page. So we can make sure the success of
- * memory allocation. TODO: increase the limit. */
-#define MAX_TAP_QUEUES DEFAULT_MAX_NUM_RSS_QUEUES
+/* MAX_TAP_QUEUES 256 is chosen to allow rx/tx queues to be equal
+ * to the maximum number of VCPUs in a guest. */
+#define MAX_TAP_QUEUES 256
#define MAX_TAP_FLOWS 4096
#define TUN_FLOW_EXPIRE (3 * HZ)
int vlan_hlen = 0;
int vnet_hdr_sz = 0;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
vlan_hlen = VLAN_HLEN;
if (tun->flags & IFF_VNET_HDR)
} veth;
veth.h_vlan_proto = skb->vlan_proto;
- veth.h_vlan_TCI = htons(vlan_tx_tag_get(skb));
+ veth.h_vlan_TCI = htons(skb_vlan_tag_get(skb));
vlan_offset = offsetof(struct vlan_ethhdr, h_vlan_proto);
awd.done = 0;
urb->context = &awd;
- status = usb_submit_urb(urb, GFP_NOIO);
+ status = usb_submit_urb(urb, GFP_ATOMIC);
if (status) {
// something went wrong
usb_free_urb(urb);
/* default ethernet address used by the modem */
static const u8 default_modem_addr[ETH_ALEN] = {0x02, 0x50, 0xf3};
+static const u8 buggy_fw_addr[ETH_ALEN] = {0x00, 0xa0, 0xc6, 0x00, 0x00, 0x00};
+
/* Make up an ethernet header if the packet doesn't have one.
*
* A firmware bug common among several devices cause them to send raw
usb_driver_release_interface(driver, info->data);
}
- /* Never use the same address on both ends of the link, even
- * if the buggy firmware told us to.
+ /* Never use the same address on both ends of the link, even if the
+ * buggy firmware told us to. Or, if the device is assigned the
+ * well-known buggy firmware MAC address, replace it with a random one.
*/
- if (ether_addr_equal(dev->net->dev_addr, default_modem_addr))
+ if (ether_addr_equal(dev->net->dev_addr, default_modem_addr) ||
+ ether_addr_equal(dev->net->dev_addr, buggy_fw_addr))
eth_hw_addr_random(dev->net);
/* make MAC addr easily distinguishable from an IP header */
#include <linux/usb/cdc.h>
/* Version Information */
-#define DRIVER_VERSION "v1.07.0 (2014/10/09)"
+#define DRIVER_VERSION "v1.08.0 (2015/01/13)"
#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
#define DRIVER_DESC "Realtek RTL8152/RTL8153 Based USB Ethernet Adapters"
#define MODULENAME "r8152"
#define RTL8152_RMS (VLAN_ETH_FRAME_LEN + VLAN_HLEN)
#define RTL8153_RMS RTL8153_MAX_PACKET
#define RTL8152_TX_TIMEOUT (5 * HZ)
+#define RTL8152_NAPI_WEIGHT 64
/* rtl8152 flags */
enum rtl8152_flags {
RTL8152_LINK_CHG,
SELECTIVE_SUSPEND,
PHY_RESET,
- SCHEDULE_TASKLET,
+ SCHEDULE_NAPI,
};
/* Define these values to match your device */
struct r8152 {
unsigned long flags;
struct usb_device *udev;
- struct tasklet_struct tl;
+ struct napi_struct napi;
struct usb_interface *intf;
struct net_device *netdev;
struct urb *intr_urb;
struct tx_agg tx_info[RTL8152_MAX_TX];
struct rx_agg rx_info[RTL8152_MAX_RX];
struct list_head rx_done, tx_free;
- struct sk_buff_head tx_queue;
+ struct sk_buff_head tx_queue, rx_queue;
spinlock_t rx_lock, tx_lock;
struct delayed_work schedule;
struct mii_if_info mii;
spin_lock(&tp->rx_lock);
list_add_tail(&agg->list, &tp->rx_done);
spin_unlock(&tp->rx_lock);
- tasklet_schedule(&tp->tl);
+ napi_schedule(&tp->napi);
return;
case -ESHUTDOWN:
set_bit(RTL8152_UNPLUG, &tp->flags);
return;
if (!skb_queue_empty(&tp->tx_queue))
- tasklet_schedule(&tp->tl);
+ napi_schedule(&tp->napi);
}
static void intr_callback(struct urb *urb)
spin_lock_init(&tp->tx_lock);
INIT_LIST_HEAD(&tp->tx_free);
skb_queue_head_init(&tp->tx_queue);
+ skb_queue_head_init(&tp->rx_queue);
for (i = 0; i < RTL8152_MAX_RX; i++) {
buf = kmalloc_node(agg_buf_sz, GFP_KERNEL, node);
static inline void rtl_tx_vlan_tag(struct tx_desc *desc, struct sk_buff *skb)
{
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
u32 opts2;
- opts2 = TX_VLAN_TAG | swab16(vlan_tx_tag_get(skb));
+ opts2 = TX_VLAN_TAG | swab16(skb_vlan_tag_get(skb));
desc->opts2 |= cpu_to_le32(opts2);
}
}
return checksum;
}
-static void rx_bottom(struct r8152 *tp)
+static int rx_bottom(struct r8152 *tp, int budget)
{
unsigned long flags;
struct list_head *cursor, *next, rx_queue;
+ int work_done = 0;
+
+ if (!skb_queue_empty(&tp->rx_queue)) {
+ while (work_done < budget) {
+ struct sk_buff *skb = __skb_dequeue(&tp->rx_queue);
+ struct net_device *netdev = tp->netdev;
+ struct net_device_stats *stats = &netdev->stats;
+ unsigned int pkt_len;
+
+ if (!skb)
+ break;
+
+ pkt_len = skb->len;
+ napi_gro_receive(&tp->napi, skb);
+ work_done++;
+ stats->rx_packets++;
+ stats->rx_bytes += pkt_len;
+ }
+ }
if (list_empty(&tp->rx_done))
- return;
+ goto out1;
INIT_LIST_HEAD(&rx_queue);
spin_lock_irqsave(&tp->rx_lock, flags);
skb_put(skb, pkt_len);
skb->protocol = eth_type_trans(skb, netdev);
rtl_rx_vlan_tag(rx_desc, skb);
- netif_receive_skb(skb);
- stats->rx_packets++;
- stats->rx_bytes += pkt_len;
+ if (work_done < budget) {
+ napi_gro_receive(&tp->napi, skb);
+ work_done++;
+ stats->rx_packets++;
+ stats->rx_bytes += pkt_len;
+ } else {
+ __skb_queue_tail(&tp->rx_queue, skb);
+ }
find_next_rx:
rx_data = rx_agg_align(rx_data + pkt_len + CRC_SIZE);
submit:
r8152_submit_rx(tp, agg, GFP_ATOMIC);
}
+
+out1:
+ return work_done;
}
static void tx_bottom(struct r8152 *tp)
} while (res == 0);
}
-static void bottom_half(unsigned long data)
+static void bottom_half(struct r8152 *tp)
{
- struct r8152 *tp;
-
- tp = (struct r8152 *)data;
-
if (test_bit(RTL8152_UNPLUG, &tp->flags))
return;
if (!netif_carrier_ok(tp->netdev))
return;
- clear_bit(SCHEDULE_TASKLET, &tp->flags);
+ clear_bit(SCHEDULE_NAPI, &tp->flags);
- rx_bottom(tp);
tx_bottom(tp);
}
+static int r8152_poll(struct napi_struct *napi, int budget)
+{
+ struct r8152 *tp = container_of(napi, struct r8152, napi);
+ int work_done;
+
+ work_done = rx_bottom(tp, budget);
+ bottom_half(tp);
+
+ if (work_done < budget) {
+ napi_complete(napi);
+ if (!list_empty(&tp->rx_done))
+ napi_schedule(napi);
+ }
+
+ return work_done;
+}
+
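r8152_poll() follows the standard NAPI contract: consume at most budget packets and call napi_complete() only when under budget, since returning work_done == budget keeps the poll scheduled. A generic skeleton of that contract, with rx_process() as a hypothetical per-driver helper:

#include <linux/netdevice.h>

static int rx_process(struct napi_struct *napi, int budget);	/* hypothetical */

static int sketch_poll(struct napi_struct *napi, int budget)
{
	int work_done = rx_process(napi, budget);	/* never exceeds budget */

	if (work_done < budget)
		napi_complete(napi);	/* done for now; re-arm interrupts */

	return work_done;
}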
static
int r8152_submit_rx(struct r8152 *tp, struct rx_agg *agg, gfp_t mem_flags)
{
int ret;
+ /* The rx may be stopped, so skip submitting */
+ if (test_bit(RTL8152_UNPLUG, &tp->flags) ||
+ !test_bit(WORK_ENABLE, &tp->flags) || !netif_carrier_ok(tp->netdev))
+ return 0;
+
usb_fill_bulk_urb(agg->urb, tp->udev, usb_rcvbulkpipe(tp->udev, 1),
agg->head, agg_buf_sz,
(usb_complete_t)read_bulk_callback, agg);
spin_lock_irqsave(&tp->rx_lock, flags);
list_add_tail(&agg->list, &tp->rx_done);
spin_unlock_irqrestore(&tp->rx_lock, flags);
- tasklet_schedule(&tp->tl);
+
+ netif_err(tp, rx_err, tp->netdev,
+ "Couldn't submit rx[%p], ret = %d\n", agg, ret);
+
+ napi_schedule(&tp->napi);
}
return ret;
netif_wake_queue(netdev);
}
+static netdev_features_t
+rtl8152_features_check(struct sk_buff *skb, struct net_device *dev,
+ netdev_features_t features)
+{
+ u32 mss = skb_shinfo(skb)->gso_size;
+ int max_offset = mss ? GTTCPHO_MAX : TCPHO_MAX;
+ int offset = skb_transport_offset(skb);
+
+ if ((mss || skb->ip_summed == CHECKSUM_PARTIAL) && offset > max_offset)
+ features &= ~(NETIF_F_ALL_CSUM | NETIF_F_GSO_MASK);
+ else if ((skb->len + sizeof(struct tx_desc)) > agg_buf_sz)
+ features &= ~NETIF_F_GSO_MASK;
+
+ return features;
+}
+
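The new .ndo_features_check hook lets the driver veto offloads per packet: when the transport header lies beyond what the hardware's checksum/GSO offset fields can encode (TCPHO_MAX/GTTCPHO_MAX), both offloads are cleared so the stack falls back to software, and a GSO packet too large for an aggregation buffer is likewise segmented in software.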
static netdev_tx_t rtl8152_start_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
if (!list_empty(&tp->tx_free)) {
if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
- set_bit(SCHEDULE_TASKLET, &tp->flags);
+ set_bit(SCHEDULE_NAPI, &tp->flags);
schedule_delayed_work(&tp->schedule, 0);
} else {
usb_mark_last_busy(tp->udev);
- tasklet_schedule(&tp->tl);
+ napi_schedule(&tp->napi);
}
} else if (skb_queue_len(&tp->tx_queue) > tp->tx_qlen) {
netif_stop_queue(netdev);
{
int i, ret = 0;
+ napi_disable(&tp->napi);
INIT_LIST_HEAD(&tp->rx_done);
for (i = 0; i < RTL8152_MAX_RX; i++) {
INIT_LIST_HEAD(&tp->rx_info[i].list);
if (ret)
break;
}
+ napi_enable(&tp->napi);
if (ret && ++i < RTL8152_MAX_RX) {
struct list_head rx_queue;
for (i = 0; i < RTL8152_MAX_RX; i++)
usb_kill_urb(tp->rx_info[i].urb);
+ while (!skb_queue_empty(&tp->rx_queue))
+ dev_kfree_skb(__skb_dequeue(&tp->rx_queue));
+
return 0;
}
rxdy_gated_en(tp, false);
- return rtl_start_rx(tp);
+ return 0;
}
static int rtl8152_enable(struct r8152 *tp)
tp->rtl_ops.enable(tp);
set_bit(RTL8152_SET_RX_MODE, &tp->flags);
netif_carrier_on(netdev);
+ rtl_start_rx(tp);
}
} else {
if (tp->speed & LINK_STATUS) {
netif_carrier_off(netdev);
- tasklet_disable(&tp->tl);
+ napi_disable(&tp->napi);
tp->rtl_ops.disable(tp);
- tasklet_enable(&tp->tl);
+ napi_enable(&tp->napi);
}
}
tp->speed = speed;
if (test_bit(RTL8152_SET_RX_MODE, &tp->flags))
_rtl8152_set_rx_mode(tp->netdev);
- if (test_bit(SCHEDULE_TASKLET, &tp->flags) &&
+ /* don't schedule napi before linking */
+ if (test_bit(SCHEDULE_NAPI, &tp->flags) &&
(tp->speed & LINK_STATUS)) {
- clear_bit(SCHEDULE_TASKLET, &tp->flags);
- tasklet_schedule(&tp->tl);
+ clear_bit(SCHEDULE_NAPI, &tp->flags);
+ napi_schedule(&tp->napi);
}
if (test_bit(PHY_RESET, &tp->flags))
res);
free_all_mem(tp);
} else {
- tasklet_enable(&tp->tl);
+ napi_enable(&tp->napi);
}
mutex_unlock(&tp->control);
struct r8152 *tp = netdev_priv(netdev);
int res = 0;
- tasklet_disable(&tp->tl);
+ napi_disable(&tp->napi);
clear_bit(WORK_ENABLE, &tp->flags);
usb_kill_urb(tp->intr_urb);
cancel_delayed_work_sync(&tp->schedule);
res = usb_autopm_get_interface(tp->intf);
if (res < 0) {
rtl_drop_queued_tx(tp);
+ rtl_stop_rx(tp);
} else {
mutex_lock(&tp->control);
if (netif_running(netdev) && test_bit(WORK_ENABLE, &tp->flags)) {
clear_bit(WORK_ENABLE, &tp->flags);
usb_kill_urb(tp->intr_urb);
- tasklet_disable(&tp->tl);
+ napi_disable(&tp->napi);
if (test_bit(SELECTIVE_SUSPEND, &tp->flags)) {
rtl_stop_rx(tp);
rtl_runtime_suspend_enable(tp, true);
cancel_delayed_work_sync(&tp->schedule);
tp->rtl_ops.down(tp);
}
- tasklet_enable(&tp->tl);
+ napi_enable(&tp->napi);
}
out1:
mutex_unlock(&tp->control);
.ndo_set_mac_address = rtl8152_set_mac_address,
.ndo_change_mtu = rtl8152_change_mtu,
.ndo_validate_addr = eth_validate_addr,
+ .ndo_features_check = rtl8152_features_check,
};
static void r8152b_get_version(struct r8152 *tp)
if (ret)
goto out;
- tasklet_init(&tp->tl, bottom_half, (unsigned long)tp);
mutex_init(&tp->control);
INIT_DELAYED_WORK(&tp->schedule, rtl_work_func_t);
set_ethernet_addr(tp);
usb_set_intfdata(intf, tp);
+ netif_napi_add(netdev, &tp->napi, r8152_poll, RTL8152_NAPI_WEIGHT);
ret = register_netdev(netdev);
if (ret != 0) {
else
device_set_wakeup_enable(&udev->dev, false);
- tasklet_disable(&tp->tl);
-
netif_info(tp, probe, netdev, "%s\n", DRIVER_VERSION);
return 0;
out1:
+ netif_napi_del(&tp->napi);
usb_set_intfdata(intf, NULL);
- tasklet_kill(&tp->tl);
out:
free_netdev(netdev);
return ret;
if (udev->state == USB_STATE_NOTATTACHED)
set_bit(RTL8152_UNPLUG, &tp->flags);
- tasklet_kill(&tp->tl);
+ netif_napi_del(&tp->napi);
unregister_netdev(tp->netdev);
tp->rtl_ops.unload(tp);
free_netdev(tp->netdev);
int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress)
{
- int tmp, i;
+ int tmp = -1, ret;
unsigned char buf [13];
- tmp = usb_string(dev->udev, iMACAddress, buf, sizeof buf);
- if (tmp != 12) {
+ ret = usb_string(dev->udev, iMACAddress, buf, sizeof buf);
+ if (ret == 12)
+ tmp = hex2bin(dev->net->dev_addr, buf, 6);
+ if (tmp < 0) {
dev_dbg(&dev->udev->dev,
"bad MAC string %d fetch, %d\n", iMACAddress, tmp);
- if (tmp >= 0)
- tmp = -EINVAL;
- return tmp;
+ if (ret >= 0)
+ ret = -EINVAL;
+ return ret;
}
- for (i = tmp = 0; i < 6; i++, tmp += 2)
- dev->net->dev_addr [i] =
- (hex_to_bin(buf[tmp]) << 4) + hex_to_bin(buf[tmp + 1]);
return 0;
}
EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr);
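Besides shortening the code, hex2bin() validates every character, so a MAC string containing non-hex digits is now rejected instead of being silently decoded into garbage. A minimal sketch of the call, assuming a buffer of exactly 12 hex digits:

#include <linux/kernel.h>
#include <linux/etherdevice.h>

static int parse_mac_sketch(const char *hex12, u8 mac[ETH_ALEN])
{
	/* hex12 must hold exactly 12 hex digits, e.g. "0250f3000000" */
	if (hex2bin(mac, hex12, ETH_ALEN) < 0)
		return -EINVAL;	/* non-hex character in the string */
	return 0;
}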
[VETH_INFO_PEER] = { .len = sizeof(struct ifinfomsg) },
};
+static struct net *veth_get_link_net(const struct net_device *dev)
+{
+ struct veth_priv *priv = netdev_priv(dev);
+ struct net_device *peer = rtnl_dereference(priv->peer);
+
+ return peer ? dev_net(peer) : dev_net(dev);
+}
+
static struct rtnl_link_ops veth_link_ops = {
.kind = DRV_NAME,
.priv_size = sizeof(struct veth_priv),
.dellink = veth_dellink,
.policy = veth_policy,
.maxtype = VETH_INFO_MAX,
+ .get_link_net = veth_get_link_net,
};
/*
/* Free up any pending old buffers before queueing new ones. */
free_old_xmit_skbs(sq);
+ /* timestamp packet in software */
+ skb_tx_timestamp(skb);
+
/* Try to transmit */
err = xmit_skb(sq, skb);
.get_ringparam = virtnet_get_ringparam,
.set_channels = virtnet_set_channels,
.get_channels = virtnet_get_channels,
+ .get_ts_info = ethtool_op_get_ts_info,
};
#define MIN_MTU 68
#define VMXNET3_TX_RING_MAX_SIZE 4096
#define VMXNET3_TC_RING_MAX_SIZE 4096
#define VMXNET3_RX_RING_MAX_SIZE 4096
+#define VMXNET3_RX_RING2_MAX_SIZE 2048
#define VMXNET3_RC_RING_MAX_SIZE 8192
/* a list of reasons for queue stop */
le32_add_cpu(&tq->shared->txNumDeferred, 1);
}
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
gdesc->txd.ti = 1;
- gdesc->txd.tci = vlan_tx_tag_get(skb);
+ gdesc->txd.tci = skb_vlan_tag_get(skb);
}
/* finally flips the GEN bit of the SOP desc. */
ring0_size = min_t(u32, ring0_size, VMXNET3_RX_RING_MAX_SIZE /
sz * sz);
ring1_size = adapter->rx_queue[0].rx_ring[1].size;
+ ring1_size = (ring1_size + sz - 1) / sz * sz;
+ ring1_size = min_t(u32, ring1_size, VMXNET3_RX_RING2_MAX_SIZE /
+ sz * sz);
comp_size = ring0_size + ring1_size;
for (i = 0; i < adapter->num_rx_queues; i++) {
err = vmxnet3_create_queues(adapter, adapter->tx_ring_size,
adapter->rx_ring_size,
- VMXNET3_DEF_RX_RING_SIZE);
+ adapter->rx_ring2_size);
if (err)
goto queue_err;
adapter->tx_ring_size = VMXNET3_DEF_TX_RING_SIZE;
adapter->rx_ring_size = VMXNET3_DEF_RX_RING_SIZE;
+ adapter->rx_ring2_size = VMXNET3_DEF_RX_RING2_SIZE;
spin_lock_init(&adapter->cmd_lock);
adapter->adapter_pa = dma_map_single(&adapter->pdev->dev, adapter,
static int
vmxnet3_resume(struct device *device)
{
- int err, i = 0;
+ int err;
unsigned long flags;
struct pci_dev *pdev = to_pci_dev(device);
struct net_device *netdev = pci_get_drvdata(pdev);
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
- struct Vmxnet3_PMConf *pmConf;
if (!netif_running(netdev))
return 0;
- /* Destroy wake-up filters. */
- pmConf = adapter->pm_conf;
- memset(pmConf, 0, sizeof(*pmConf));
-
- adapter->shared->devRead.pmConfDesc.confVer = cpu_to_le32(1);
- adapter->shared->devRead.pmConfDesc.confLen = cpu_to_le32(sizeof(
- *pmConf));
- adapter->shared->devRead.pmConfDesc.confPA =
- cpu_to_le64(adapter->pm_conf_pa);
-
- netif_device_attach(netdev);
pci_set_power_state(pdev, PCI_D0);
pci_restore_state(pdev);
err = pci_enable_device_mem(pdev);
pci_enable_wake(pdev, PCI_D0, 0);
+ vmxnet3_alloc_intr_resources(adapter);
+
+ /* During hibernate and suspend, the device has to be reinitialized
+ * because its state may not have been preserved.
+ */
+
+ /* There is no need to check the adapter state here, as other reset
+ * tasks cannot run during device resume.
+ */
spin_lock_irqsave(&adapter->cmd_lock, flags);
VMXNET3_WRITE_BAR1_REG(adapter, VMXNET3_REG_CMD,
- VMXNET3_CMD_UPDATE_PMCFG);
+ VMXNET3_CMD_QUIESCE_DEV);
spin_unlock_irqrestore(&adapter->cmd_lock, flags);
- vmxnet3_alloc_intr_resources(adapter);
- vmxnet3_request_irqs(adapter);
- for (i = 0; i < adapter->num_rx_queues; i++)
- napi_enable(&adapter->rx_queue[i].napi);
- vmxnet3_enable_all_intrs(adapter);
+ vmxnet3_tq_cleanup_all(adapter);
+ vmxnet3_rq_cleanup_all(adapter);
+
+ vmxnet3_reset_dev(adapter);
+ err = vmxnet3_activate_dev(adapter);
+ if (err != 0) {
+ netdev_err(netdev,
+ "failed to re-activate on resume, error: %d", err);
+ vmxnet3_force_close(adapter);
+ return err;
+ }
+ netif_device_attach(netdev);
return 0;
}
static const struct dev_pm_ops vmxnet3_pm_ops = {
.suspend = vmxnet3_suspend,
.resume = vmxnet3_resume,
+ .freeze = vmxnet3_suspend,
+ .restore = vmxnet3_resume,
};
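Wiring .freeze and .restore to the same callbacks makes hibernation take the identical quiesce-and-reactivate path as suspend/resume, which the rewritten vmxnet3_resume() above enables by fully re-initializing the device rather than assuming its state survived.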
#endif
vmxnet3_tq_driver_stats[i].offset);
}
- for (j = 0; j < adapter->num_tx_queues; j++) {
+ for (j = 0; j < adapter->num_rx_queues; j++) {
base = (u8 *)&adapter->rqd_start[j].stats;
*buf++ = (u64) j;
for (i = 1; i < ARRAY_SIZE(vmxnet3_rq_dev_stats); i++)
param->rx_max_pending = VMXNET3_RX_RING_MAX_SIZE;
param->tx_max_pending = VMXNET3_TX_RING_MAX_SIZE;
param->rx_mini_max_pending = 0;
- param->rx_jumbo_max_pending = 0;
+ param->rx_jumbo_max_pending = VMXNET3_RX_RING2_MAX_SIZE;
param->rx_pending = adapter->rx_ring_size;
param->tx_pending = adapter->tx_ring_size;
param->rx_mini_pending = 0;
- param->rx_jumbo_pending = 0;
+ param->rx_jumbo_pending = adapter->rx_ring2_size;
}
struct ethtool_ringparam *param)
{
struct vmxnet3_adapter *adapter = netdev_priv(netdev);
- u32 new_tx_ring_size, new_rx_ring_size;
+ u32 new_tx_ring_size, new_rx_ring_size, new_rx_ring2_size;
u32 sz;
int err = 0;
VMXNET3_RX_RING_MAX_SIZE)
return -EINVAL;
+ if (param->rx_jumbo_pending == 0 ||
+ param->rx_jumbo_pending > VMXNET3_RX_RING2_MAX_SIZE)
+ return -EINVAL;
+
/* if adapter not yet initialized, do nothing */
if (adapter->rx_buf_per_pkt == 0) {
netdev_err(netdev, "adapter not completely initialized, "
sz) != 0)
return -EINVAL;
- if (new_tx_ring_size == adapter->tx_queue[0].tx_ring.size &&
- new_rx_ring_size == adapter->rx_queue[0].rx_ring[0].size) {
+ /* ring2 has to be a multiple of VMXNET3_RING_SIZE_ALIGN */
+ new_rx_ring2_size = (param->rx_jumbo_pending + VMXNET3_RING_SIZE_MASK) &
+ ~VMXNET3_RING_SIZE_MASK;
+ new_rx_ring2_size = min_t(u32, new_rx_ring2_size,
+ VMXNET3_RX_RING2_MAX_SIZE);
+
+ if (new_tx_ring_size == adapter->tx_ring_size &&
+ new_rx_ring_size == adapter->rx_ring_size &&
+ new_rx_ring2_size == adapter->rx_ring2_size) {
return 0;
}
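
The rounding above is the standard power-of-two align-up idiom (VMXNET3_RING_SIZE_MASK is VMXNET3_RING_SIZE_ALIGN - 1). A runnable userspace sketch of the same computation, with the alignment value assumed here to be 32:

#include <stdio.h>

#define RING_SIZE_ALIGN 32u			/* assumed power of two */
#define RING_SIZE_MASK  (RING_SIZE_ALIGN - 1u)

static unsigned int ring_align_up(unsigned int requested)
{
	/* add (align - 1), then clear the low bits */
	return (requested + RING_SIZE_MASK) & ~RING_SIZE_MASK;
}

int main(void)
{
	printf("%u -> %u\n", 100u, ring_align_up(100u));	/* 100 -> 128 */
	return 0;
}
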
vmxnet3_rq_destroy_all(adapter);
err = vmxnet3_create_queues(adapter, new_tx_ring_size,
- new_rx_ring_size, VMXNET3_DEF_RX_RING_SIZE);
+ new_rx_ring_size, new_rx_ring2_size);
if (err) {
/* failed, most likely because of OOM, try default
netdev_err(netdev, "failed to apply new sizes, "
"try the default ones\n");
new_rx_ring_size = VMXNET3_DEF_RX_RING_SIZE;
+ new_rx_ring2_size = VMXNET3_DEF_RX_RING2_SIZE;
new_tx_ring_size = VMXNET3_DEF_TX_RING_SIZE;
err = vmxnet3_create_queues(adapter,
new_tx_ring_size,
new_rx_ring_size,
- VMXNET3_DEF_RX_RING_SIZE);
+ new_rx_ring2_size);
if (err) {
netdev_err(netdev, "failed to create queues "
"with default sizes. Closing it\n");
}
adapter->tx_ring_size = new_tx_ring_size;
adapter->rx_ring_size = new_rx_ring_size;
+ adapter->rx_ring2_size = new_rx_ring2_size;
out:
clear_bit(VMXNET3_STATE_BIT_RESETTING, &adapter->state);
/*
* Version numbers
*/
-#define VMXNET3_DRIVER_VERSION_STRING "1.2.1.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING "1.3.3.0-k"
/* a 32-bit int, each byte encodes a version number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM 0x01020100
+#define VMXNET3_DRIVER_VERSION_NUM 0x01030300
#if defined(CONFIG_PCI_MSI)
/* RSS only makes sense if MSI-X is supported. */
/* Ring sizes */
u32 tx_ring_size;
u32 rx_ring_size;
+ u32 rx_ring2_size;
struct work_struct work;
/* must be a multiple of VMXNET3_RING_SIZE_ALIGN */
#define VMXNET3_DEF_TX_RING_SIZE 512
#define VMXNET3_DEF_RX_RING_SIZE 256
+#define VMXNET3_DEF_RX_RING2_SIZE 128
#define VMXNET3_MAX_ETH_HDR_SIZE 22
#define VMXNET3_MAX_SKB_BUF_SIZE (3*1024)
#define FDB_AGE_DEFAULT 300 /* 5 min */
#define FDB_AGE_INTERVAL (10 * HZ) /* rescan interval */
-#define VXLAN_N_VID (1u << 24)
-#define VXLAN_VID_MASK (VXLAN_N_VID - 1)
-#define VXLAN_HLEN (sizeof(struct udphdr) + sizeof(struct vxlanhdr))
-
-#define VXLAN_FLAGS 0x08000000 /* struct vxlanhdr.vx_flags required value. */
-
/* UDP port for VXLAN traffic.
* The IANA assigned port is 4789, but the Linux default is 8472
* for compatibility with early adopters.
return list_first_entry(&fdb->remotes, struct vxlan_rdst, list);
}
-/* Find VXLAN socket based on network namespace, address family and UDP port */
-static struct vxlan_sock *vxlan_find_sock(struct net *net,
- sa_family_t family, __be16 port)
+/* Find VXLAN socket based on network namespace, address family and UDP port
+ * and enabled unshareable flags.
+ */
+static struct vxlan_sock *vxlan_find_sock(struct net *net, sa_family_t family,
+ __be16 port, u32 flags)
{
struct vxlan_sock *vs;
+ flags &= VXLAN_F_RCV_FLAGS;
+
hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) {
if (inet_sk(vs->sock->sk)->inet_sport == port &&
- inet_sk(vs->sock->sk)->sk.sk_family == family)
+ inet_sk(vs->sock->sk)->sk.sk_family == family &&
+ vs->flags == flags)
return vs;
}
return NULL;
/* Look up VNI in a per net namespace table */
static struct vxlan_dev *vxlan_find_vni(struct net *net, u32 id,
- sa_family_t family, __be16 port)
+ sa_family_t family, __be16 port,
+ u32 flags)
{
struct vxlan_sock *vs;
- vs = vxlan_find_sock(net, family, port);
+ vs = vxlan_find_sock(net, family, port, flags);
if (!vs)
return NULL;
ndm->ndm_flags = fdb->flags;
ndm->ndm_type = RTN_UNICAST;
+ if (!net_eq(dev_net(vxlan->dev), vxlan->net) &&
+ nla_put_s32(skb, NDA_NDM_IFINDEX_NETNSID,
+ peernet2id(vxlan->net, dev_net(vxlan->dev))))
+ goto nla_put_failure;
+
if (send_eth && nla_put(skb, NDA_LLADDR, ETH_ALEN, &fdb->eth_addr))
goto nla_put_failure;
if (nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
return 1;
}
-static struct sk_buff **vxlan_gro_receive(struct sk_buff **head, struct sk_buff *skb)
+static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,
+ unsigned int off,
+ struct vxlanhdr *vh, size_t hdrlen,
+ u32 data)
+{
+ size_t start, offset, plen;
+ __wsum delta;
+
+ if (skb->remcsum_offload)
+ return vh;
+
+ if (!NAPI_GRO_CB(skb)->csum_valid)
+ return NULL;
+
+ start = (data & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
+ offset = start + ((data & VXLAN_RCO_UDP) ?
+ offsetof(struct udphdr, check) :
+ offsetof(struct tcphdr, check));
+
+ plen = hdrlen + offset + sizeof(u16);
+
+ /* Pull checksum that will be written */
+ if (skb_gro_header_hard(skb, off + plen)) {
+ vh = skb_gro_header_slow(skb, off + plen, off);
+ if (!vh)
+ return NULL;
+ }
+
+ delta = remcsum_adjust((void *)vh + hdrlen,
+ NAPI_GRO_CB(skb)->csum, start, offset);
+
+ /* Adjust skb->csum since we changed the packet */
+ skb->csum = csum_add(skb->csum, delta);
+ NAPI_GRO_CB(skb)->csum = csum_add(NAPI_GRO_CB(skb)->csum, delta);
+
+ skb->remcsum_offload = 1;
+
+ return vh;
+}
+
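
Both the GRO and non-GRO receive paths decode the remote-checksum-offload data the same way: the low bits of the VNI word carry a scaled checksum-start offset, and one bit selects whether the checksum field lives in a UDP or a TCP header. A runnable decoding sketch; the RCO_* values and header offsets below are assumptions mirroring the driver's constants:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#define RCO_SHIFT 1		/* assumed VXLAN_RCO_SHIFT */
#define RCO_UDP   0x80u		/* assumed VXLAN_RCO_UDP selector bit */
#define RCO_MASK  0x7fu		/* assumed VXLAN_RCO_MASK */

int main(void)
{
	uint32_t data = 0x80u | 0x11u;	/* example: UDP, start field 0x11 */
	size_t start = (data & RCO_MASK) << RCO_SHIFT;
	/* 6 = offsetof(struct udphdr, check), 16 = offsetof(struct tcphdr, check) */
	size_t csum_field = start + ((data & RCO_UDP) ? 6 : 16);

	printf("csum start %zu, checksum field at %zu\n", start, csum_field);
	return 0;
}
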
+static struct sk_buff **vxlan_gro_receive(struct sk_buff **head,
+ struct sk_buff *skb,
+ struct udp_offload *uoff)
{
struct sk_buff *p, **pp = NULL;
struct vxlanhdr *vh, *vh2;
unsigned int hlen, off_vx;
int flush = 1;
+ struct vxlan_sock *vs = container_of(uoff, struct vxlan_sock,
+ udp_offloads);
+ u32 flags;
off_vx = skb_gro_offset(skb);
hlen = off_vx + sizeof(*vh);
goto out;
}
+ skb_gro_pull(skb, sizeof(struct vxlanhdr)); /* pull vxlan header */
+ skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
+
+ flags = ntohl(vh->vx_flags);
+
+ if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
+ vh = vxlan_gro_remcsum(skb, off_vx, vh, sizeof(struct vxlanhdr),
+ ntohl(vh->vx_vni));
+
+ if (!vh)
+ goto out;
+ }
+
flush = 0;
for (p = *head; p; p = p->next) {
continue;
vh2 = (struct vxlanhdr *)(p->data + off_vx);
- if (vh->vx_vni != vh2->vx_vni) {
+ if (vh->vx_flags != vh2->vx_flags ||
+ vh->vx_vni != vh2->vx_vni) {
NAPI_GRO_CB(p)->same_flow = 0;
continue;
}
}
- skb_gro_pull(skb, sizeof(struct vxlanhdr));
- skb_gro_postpull_rcsum(skb, vh, sizeof(struct vxlanhdr));
pp = eth_gro_receive(head, skb);
out:
return pp;
}
-static int vxlan_gro_complete(struct sk_buff *skb, int nhoff)
+static int vxlan_gro_complete(struct sk_buff *skb, int nhoff,
+ struct udp_offload *uoff)
{
udp_tunnel_gro_complete(skb, nhoff);
dev_put(vxlan->dev);
}
+static struct vxlanhdr *vxlan_remcsum(struct sk_buff *skb, struct vxlanhdr *vh,
+ size_t hdrlen, u32 data)
+{
+ size_t start, offset, plen;
+ __wsum delta;
+
+ if (skb->remcsum_offload) {
+ /* Already processed in GRO path */
+ skb->remcsum_offload = 0;
+ return vh;
+ }
+
+ start = (data & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
+ offset = start + ((data & VXLAN_RCO_UDP) ?
+ offsetof(struct udphdr, check) :
+ offsetof(struct tcphdr, check));
+
+ plen = hdrlen + offset + sizeof(u16);
+
+ if (!pskb_may_pull(skb, plen))
+ return NULL;
+
+ vh = (struct vxlanhdr *)(udp_hdr(skb) + 1);
+
+ if (unlikely(skb->ip_summed != CHECKSUM_COMPLETE))
+ __skb_checksum_complete(skb);
+
+ delta = remcsum_adjust((void *)vh + hdrlen,
+ skb->csum, start, offset);
+
+ /* Adjust skb->csum since we changed the packet */
+ skb->csum = csum_add(skb->csum, delta);
+
+ return vh;
+}
+
/* Callback from net/ipv4/udp.c to receive packets */
static int vxlan_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
{
struct vxlan_sock *vs;
struct vxlanhdr *vxh;
+ u32 flags, vni;
+ struct vxlan_metadata md = {0};
/* Need Vxlan and inner Ethernet header to be present */
if (!pskb_may_pull(skb, VXLAN_HLEN))
goto error;
- /* Return packets with reserved bits set */
vxh = (struct vxlanhdr *)(udp_hdr(skb) + 1);
- if (vxh->vx_flags != htonl(VXLAN_FLAGS) ||
- (vxh->vx_vni & htonl(0xff))) {
- netdev_dbg(skb->dev, "invalid vxlan flags=%#x vni=%#x\n",
- ntohl(vxh->vx_flags), ntohl(vxh->vx_vni));
- goto error;
+ flags = ntohl(vxh->vx_flags);
+ vni = ntohl(vxh->vx_vni);
+
+ if (flags & VXLAN_HF_VNI) {
+ flags &= ~VXLAN_HF_VNI;
+ } else {
+ /* VNI flag always required to be set */
+ goto bad_flags;
}
if (iptunnel_pull_header(skb, VXLAN_HLEN, htons(ETH_P_TEB)))
goto drop;
+ vxh = (struct vxlanhdr *)(udp_hdr(skb) + 1);
vs = rcu_dereference_sk_user_data(sk);
if (!vs)
goto drop;
- vs->rcv(vs, skb, vxh->vx_vni);
+ if ((flags & VXLAN_HF_RCO) && (vs->flags & VXLAN_F_REMCSUM_RX)) {
+ vxh = vxlan_remcsum(skb, vxh, sizeof(struct vxlanhdr), vni);
+ if (!vxh)
+ goto drop;
+
+ flags &= ~VXLAN_HF_RCO;
+ vni &= VXLAN_VID_MASK;
+ }
+
+ /* For backwards compatibility, only allow reserved fields to be
+ * used by VXLAN extensions if explicitly requested.
+ */
+ if ((flags & VXLAN_HF_GBP) && (vs->flags & VXLAN_F_GBP)) {
+ struct vxlanhdr_gbp *gbp;
+
+ gbp = (struct vxlanhdr_gbp *)vxh;
+ md.gbp = ntohs(gbp->policy_id);
+
+ if (gbp->dont_learn)
+ md.gbp |= VXLAN_GBP_DONT_LEARN;
+
+ if (gbp->policy_applied)
+ md.gbp |= VXLAN_GBP_POLICY_APPLIED;
+
+ flags &= ~VXLAN_GBP_USED_BITS;
+ }
+
+ if (flags || (vni & ~VXLAN_VID_MASK)) {
+ /* If there are any unprocessed flags remaining, treat
+ * this as a malformed packet. This behavior diverges from
+ * the VXLAN RFC (RFC7348), which stipulates that bits in
+ * reserved fields are to be ignored. The approach here
+ * maintains compatibility with previous stack code, and also
+ * is more robust and provides a little more security in
+ * adding extensions to VXLAN.
+ */
+
+ goto bad_flags;
+ }
+
+ md.vni = vxh->vx_vni;
+ vs->rcv(vs, skb, &md);
return 0;
drop:
kfree_skb(skb);
return 0;
+bad_flags:
+ netdev_dbg(skb->dev, "invalid vxlan flags=%#x vni=%#x\n",
+ ntohl(vxh->vx_flags), ntohl(vxh->vx_vni));
+
error:
/* Return non vxlan pkt */
return 1;
}
-static void vxlan_rcv(struct vxlan_sock *vs,
- struct sk_buff *skb, __be32 vx_vni)
+static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
+ struct vxlan_metadata *md)
{
struct iphdr *oip = NULL;
struct ipv6hdr *oip6 = NULL;
int err = 0;
union vxlan_addr *remote_ip;
- vni = ntohl(vx_vni) >> 8;
+ vni = ntohl(md->vni) >> 8;
/* Is this VNI defined? */
vxlan = vxlan_vs_find_vni(vs, vni);
if (!vxlan)
goto drop;
skb_reset_network_header(skb);
+ skb->mark = md->gbp;
if (oip6)
err = IP6_ECN_decapsulate(oip6, skb);
return false;
}
+static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
+ struct vxlan_metadata *md)
+{
+ struct vxlanhdr_gbp *gbp;
+
+ gbp = (struct vxlanhdr_gbp *)vxh;
+ vxh->vx_flags |= htonl(VXLAN_HF_GBP);
+
+ if (md->gbp & VXLAN_GBP_DONT_LEARN)
+ gbp->dont_learn = 1;
+
+ if (md->gbp & VXLAN_GBP_POLICY_APPLIED)
+ gbp->policy_applied = 1;
+
+ gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
+}
+
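
On receive the GBP fields are folded into skb->mark, and on transmit skb->mark is unpacked again by vxlan_build_gbp_hdr() above. A small runnable model of that round trip; the bit positions are assumptions matching the VXLAN_GBP_* constants used here:

#include <stdio.h>
#include <stdint.h>

#define GBP_ID_MASK        0xffffu	/* 16-bit policy id */
#define GBP_DONT_LEARN     (1u << 22)	/* assumed bit position */
#define GBP_POLICY_APPLIED (1u << 23)	/* assumed bit position */

int main(void)
{
	uint32_t mark = 0x1234u | GBP_POLICY_APPLIED;	/* as set on rx */

	unsigned int policy_id = mark & GBP_ID_MASK;	/* unpacked on tx */
	int dont_learn = !!(mark & GBP_DONT_LEARN);
	int applied = !!(mark & GBP_POLICY_APPLIED);

	printf("id=0x%x dont_learn=%d applied=%d\n",
	       policy_id, dont_learn, applied);
	return 0;
}
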
#if IS_ENABLED(CONFIG_IPV6)
-static int vxlan6_xmit_skb(struct vxlan_sock *vs,
- struct dst_entry *dst, struct sk_buff *skb,
+static int vxlan6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
struct net_device *dev, struct in6_addr *saddr,
struct in6_addr *daddr, __u8 prio, __u8 ttl,
- __be16 src_port, __be16 dst_port, __be32 vni,
- bool xnet)
+ __be16 src_port, __be16 dst_port,
+ struct vxlan_metadata *md, bool xnet, u32 vxflags)
{
struct vxlanhdr *vxh;
int min_headroom;
int err;
- bool udp_sum = !udp_get_no_check6_tx(vs->sock->sk);
+ bool udp_sum = !(vxflags & VXLAN_F_UDP_ZERO_CSUM6_TX);
+ int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+ u16 hdrlen = sizeof(struct vxlanhdr);
+
+ if ((vxflags & VXLAN_F_REMCSUM_TX) &&
+ skb->ip_summed == CHECKSUM_PARTIAL) {
+ int csum_start = skb_checksum_start_offset(skb);
+
+ if (csum_start <= VXLAN_MAX_REMCSUM_START &&
+ !(csum_start & VXLAN_RCO_SHIFT_MASK) &&
+ (skb->csum_offset == offsetof(struct udphdr, check) ||
+ skb->csum_offset == offsetof(struct tcphdr, check))) {
+ udp_sum = false;
+ type |= SKB_GSO_TUNNEL_REMCSUM;
+ }
+ }
- skb = udp_tunnel_handle_offloads(skb, udp_sum);
+ skb = iptunnel_handle_offloads(skb, udp_sum, type);
if (IS_ERR(skb)) {
err = -EINVAL;
goto err;
min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+ VXLAN_HLEN + sizeof(struct ipv6hdr)
- + (vlan_tx_tag_present(skb) ? VLAN_HLEN : 0);
+ + (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
/* Need space for new headers (invalidates iph ptr) */
err = skb_cow_head(skb, min_headroom);
}
vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
- vxh->vx_flags = htonl(VXLAN_FLAGS);
- vxh->vx_vni = vni;
+ vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ vxh->vx_vni = md->vni;
+
+ if (type & SKB_GSO_TUNNEL_REMCSUM) {
+ u32 data = (skb_checksum_start_offset(skb) - hdrlen) >>
+ VXLAN_RCO_SHIFT;
+
+ if (skb->csum_offset == offsetof(struct udphdr, check))
+ data |= VXLAN_RCO_UDP;
+
+ vxh->vx_vni |= htonl(data);
+ vxh->vx_flags |= htonl(VXLAN_HF_RCO);
+
+ if (!skb_is_gso(skb)) {
+ skb->ip_summed = CHECKSUM_NONE;
+ skb->encapsulation = 0;
+ }
+ }
+
+ if (vxflags & VXLAN_F_GBP)
+ vxlan_build_gbp_hdr(vxh, vxflags, md);
skb_set_inner_protocol(skb, htons(ETH_P_TEB));
- udp_tunnel6_xmit_skb(vs->sock, dst, skb, dev, saddr, daddr, prio,
- ttl, src_port, dst_port);
+ udp_tunnel6_xmit_skb(dst, skb, dev, saddr, daddr, prio,
+ ttl, src_port, dst_port,
+ !!(vxflags & VXLAN_F_UDP_ZERO_CSUM6_TX));
return 0;
err:
dst_release(dst);
}
#endif
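
Remote checksum offload is attempted on transmit only when the checksum start is actually representable in the RCO field: it must not exceed the maximum encodable offset and must be a multiple of the scaling step (and the checksum field itself must be a UDP or TCP check word, as tested above). A runnable model of the representability test, with the limits assumed from the VXLAN_RCO_* constants:

#include <stdio.h>

#define RCO_SHIFT 1				/* assumed VXLAN_RCO_SHIFT */
#define RCO_SHIFT_MASK ((1 << RCO_SHIFT) - 1)	/* start must be even */
#define MAX_REMCSUM_START (0x7f << RCO_SHIFT)	/* assumed maximum */

static int remcsum_representable(int csum_start)
{
	return csum_start <= MAX_REMCSUM_START &&
	       !(csum_start & RCO_SHIFT_MASK);
}

int main(void)
{
	printf("%d %d\n", remcsum_representable(34),
	       remcsum_representable(35));	/* prints: 1 0 */
	return 0;
}
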
-int vxlan_xmit_skb(struct vxlan_sock *vs,
- struct rtable *rt, struct sk_buff *skb,
+int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
__be32 src, __be32 dst, __u8 tos, __u8 ttl, __be16 df,
- __be16 src_port, __be16 dst_port, __be32 vni, bool xnet)
+ __be16 src_port, __be16 dst_port,
+ struct vxlan_metadata *md, bool xnet, u32 vxflags)
{
struct vxlanhdr *vxh;
int min_headroom;
int err;
- bool udp_sum = !vs->sock->sk->sk_no_check_tx;
+ bool udp_sum = !!(vxflags & VXLAN_F_UDP_CSUM);
+ int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+ u16 hdrlen = sizeof(struct vxlanhdr);
+
+ if ((vxflags & VXLAN_F_REMCSUM_TX) &&
+ skb->ip_summed == CHECKSUM_PARTIAL) {
+ int csum_start = skb_checksum_start_offset(skb);
+
+ if (csum_start <= VXLAN_MAX_REMCSUM_START &&
+ !(csum_start & VXLAN_RCO_SHIFT_MASK) &&
+ (skb->csum_offset == offsetof(struct udphdr, check) ||
+ skb->csum_offset == offsetof(struct tcphdr, check))) {
+ udp_sum = false;
+ type |= SKB_GSO_TUNNEL_REMCSUM;
+ }
+ }
- skb = udp_tunnel_handle_offloads(skb, udp_sum);
+ skb = iptunnel_handle_offloads(skb, udp_sum, type);
if (IS_ERR(skb))
return PTR_ERR(skb);
min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ VXLAN_HLEN + sizeof(struct iphdr)
- + (vlan_tx_tag_present(skb) ? VLAN_HLEN : 0);
+ + (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
/* Need space for new headers (invalidates iph ptr) */
err = skb_cow_head(skb, min_headroom);
return -ENOMEM;
vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
- vxh->vx_flags = htonl(VXLAN_FLAGS);
- vxh->vx_vni = vni;
+ vxh->vx_flags = htonl(VXLAN_HF_VNI);
+ vxh->vx_vni = md->vni;
+
+ if (type & SKB_GSO_TUNNEL_REMCSUM) {
+ u32 data = (skb_checksum_start_offset(skb) - hdrlen) >>
+ VXLAN_RCO_SHIFT;
+
+ if (skb->csum_offset == offsetof(struct udphdr, check))
+ data |= VXLAN_RCO_UDP;
+
+ vxh->vx_vni |= htonl(data);
+ vxh->vx_flags |= htonl(VXLAN_HF_RCO);
+
+ if (!skb_is_gso(skb)) {
+ skb->ip_summed = CHECKSUM_NONE;
+ skb->encapsulation = 0;
+ }
+ }
+
+ if (vxflags & VXLAN_F_GBP)
+ vxlan_build_gbp_hdr(vxh, vxflags, md);
skb_set_inner_protocol(skb, htons(ETH_P_TEB));
- return udp_tunnel_xmit_skb(vs->sock, rt, skb, src, dst, tos,
- ttl, df, src_port, dst_port, xnet);
+ return udp_tunnel_xmit_skb(rt, skb, src, dst, tos,
+ ttl, df, src_port, dst_port, xnet,
+ !(vxflags & VXLAN_F_UDP_CSUM));
}
EXPORT_SYMBOL_GPL(vxlan_xmit_skb);
const struct iphdr *old_iph;
struct flowi4 fl4;
union vxlan_addr *dst;
+ struct vxlan_metadata md;
__be16 src_port = 0, dst_port;
u32 vni;
__be16 df = 0;
ip_rt_put(rt);
dst_vxlan = vxlan_find_vni(vxlan->net, vni,
- dst->sa.sa_family, dst_port);
+ dst->sa.sa_family, dst_port,
+ vxlan->flags);
if (!dst_vxlan)
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
ttl = ttl ? : ip4_dst_hoplimit(&rt->dst);
-
- err = vxlan_xmit_skb(vxlan->vn_sock, rt, skb,
- fl4.saddr, dst->sin.sin_addr.s_addr,
- tos, ttl, df, src_port, dst_port,
- htonl(vni << 8),
- !net_eq(vxlan->net, dev_net(vxlan->dev)));
+ md.vni = htonl(vni << 8);
+ md.gbp = skb->mark;
+
+ err = vxlan_xmit_skb(rt, skb, fl4.saddr,
+ dst->sin.sin_addr.s_addr, tos, ttl, df,
+ src_port, dst_port, &md,
+ !net_eq(vxlan->net, dev_net(vxlan->dev)),
+ vxlan->flags);
if (err < 0) {
/* skb is already freed. */
skb = NULL;
dst_release(ndst);
dst_vxlan = vxlan_find_vni(vxlan->net, vni,
- dst->sa.sa_family, dst_port);
+ dst->sa.sa_family, dst_port,
+ vxlan->flags);
if (!dst_vxlan)
goto tx_error;
vxlan_encap_bypass(skb, vxlan, dst_vxlan);
}
ttl = ttl ? : ip6_dst_hoplimit(ndst);
+ md.vni = htonl(vni << 8);
+ md.gbp = skb->mark;
- err = vxlan6_xmit_skb(vxlan->vn_sock, ndst, skb,
- dev, &fl6.saddr, &fl6.daddr, 0, ttl,
- src_port, dst_port, htonl(vni << 8),
- !net_eq(vxlan->net, dev_net(vxlan->dev)));
+ err = vxlan6_xmit_skb(ndst, skb, dev, &fl6.saddr, &fl6.daddr,
+ 0, ttl, src_port, dst_port, &md,
+ !net_eq(vxlan->net, dev_net(vxlan->dev)),
+ vxlan->flags);
#endif
}
spin_lock(&vn->sock_lock);
vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET,
- vxlan->dst_port);
+ vxlan->dst_port, vxlan->flags);
if (vs && atomic_add_unless(&vs->refcnt, 1, 0)) {
/* If we have a socket with same port already, reuse it */
vxlan_vs_add_dev(vs, vxlan);
[IFLA_VXLAN_UDP_CSUM] = { .type = NLA_U8 },
[IFLA_VXLAN_UDP_ZERO_CSUM6_TX] = { .type = NLA_U8 },
[IFLA_VXLAN_UDP_ZERO_CSUM6_RX] = { .type = NLA_U8 },
+ [IFLA_VXLAN_REMCSUM_TX] = { .type = NLA_U8 },
+ [IFLA_VXLAN_REMCSUM_RX] = { .type = NLA_U8 },
+ [IFLA_VXLAN_GBP] = { .type = NLA_FLAG, },
};
static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[])
if (ipv6) {
udp_conf.family = AF_INET6;
- udp_conf.use_udp6_tx_checksums =
- !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
udp_conf.use_udp6_rx_checksums =
!(flags & VXLAN_F_UDP_ZERO_CSUM6_RX);
} else {
udp_conf.family = AF_INET;
udp_conf.local_ip.s_addr = INADDR_ANY;
- udp_conf.use_udp_checksums =
- !!(flags & VXLAN_F_UDP_CSUM);
}
udp_conf.local_udp_port = port;
atomic_set(&vs->refcnt, 1);
vs->rcv = rcv;
vs->data = data;
+ vs->flags = (flags & VXLAN_F_RCV_FLAGS);
/* Initialize the vxlan udp offloads structure */
vs->udp_offloads.port = port;
return vs;
spin_lock(&vn->sock_lock);
- vs = vxlan_find_sock(net, ipv6 ? AF_INET6 : AF_INET, port);
+ vs = vxlan_find_sock(net, ipv6 ? AF_INET6 : AF_INET, port, flags);
if (vs && ((vs->rcv != rcv) ||
!atomic_add_unless(&vs->refcnt, 1, 0)))
vs = ERR_PTR(-EBUSY);
nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX]))
vxlan->flags |= VXLAN_F_UDP_ZERO_CSUM6_RX;
+ if (data[IFLA_VXLAN_REMCSUM_TX] &&
+ nla_get_u8(data[IFLA_VXLAN_REMCSUM_TX]))
+ vxlan->flags |= VXLAN_F_REMCSUM_TX;
+
+ if (data[IFLA_VXLAN_REMCSUM_RX] &&
+ nla_get_u8(data[IFLA_VXLAN_REMCSUM_RX]))
+ vxlan->flags |= VXLAN_F_REMCSUM_RX;
+
+ if (data[IFLA_VXLAN_GBP])
+ vxlan->flags |= VXLAN_F_GBP;
+
if (vxlan_find_vni(net, vni, use_ipv6 ? AF_INET6 : AF_INET,
- vxlan->dst_port)) {
+ vxlan->dst_port, vxlan->flags)) {
pr_info("duplicate VNI %u\n", vni);
return -EEXIST;
}
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_CSUM */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_TX */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_RX */
+ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
+ nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
0;
}
nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
!!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM6_TX)) ||
nla_put_u8(skb, IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
- !!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM6_RX)))
+ !!(vxlan->flags & VXLAN_F_UDP_ZERO_CSUM6_RX)) ||
+ nla_put_u8(skb, IFLA_VXLAN_REMCSUM_TX,
+ !!(vxlan->flags & VXLAN_F_REMCSUM_TX)) ||
+ nla_put_u8(skb, IFLA_VXLAN_REMCSUM_RX,
+ !!(vxlan->flags & VXLAN_F_REMCSUM_RX)))
goto nla_put_failure;
if (nla_put(skb, IFLA_VXLAN_PORT_RANGE, sizeof(ports), &ports))
goto nla_put_failure;
+ if (vxlan->flags & VXLAN_F_GBP &&
+ nla_put_flag(skb, IFLA_VXLAN_GBP))
+ goto nla_put_failure;
+
return 0;
nla_put_failure:
return -EMSGSIZE;
}
+static struct net *vxlan_get_link_net(const struct net_device *dev)
+{
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+
+ return vxlan->net;
+}
+
static struct rtnl_link_ops vxlan_link_ops __read_mostly = {
.kind = "vxlan",
.maxtype = IFLA_VXLAN_MAX,
.dellink = vxlan_dellink,
.get_size = vxlan_get_size,
.fill_info = vxlan_fill_info,
+ .get_link_net = vxlan_get_link_net,
};
static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn,
if (res < 0)
goto out_err;
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
out_err:
genlmsg_cancel(skb, hdr);
}
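
nlmsg_end() and genlmsg_end() used to return the message length; the hunks above follow the updated convention where they merely finalize the message and the callback returns 0 itself. A hedged kernel-style sketch of the resulting shape (example_fill and its arguments are hypothetical):

static int example_fill(struct sk_buff *skb, struct nlmsghdr *nlh)
{
	/* ... nla_put() attributes, jumping to a cancel label on error ... */
	nlmsg_end(skb, nlh);	/* finalize; no longer returns a length */
	return 0;
}
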
static int _rtl_pci_init_one_rxdesc(struct ieee80211_hw *hw,
- u8 *entry, int rxring_idx, int desc_idx)
+ struct sk_buff *new_skb, u8 *entry,
+ int rxring_idx, int desc_idx)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
u8 tmp_one = 1;
struct sk_buff *skb;
+ if (likely(new_skb)) {
+ skb = new_skb;
+ goto remap;
+ }
skb = dev_alloc_skb(rtlpci->rxbuffersize);
if (!skb)
return 0;
- rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;
+remap:
/* just set skb->cb to mapping addr for pci_unmap_single use */
*((dma_addr_t *)skb->cb) =
pci_map_single(rtlpci->pdev, skb_tail_pointer(skb),
bufferaddress = *((dma_addr_t *)skb->cb);
if (pci_dma_mapping_error(rtlpci->pdev, bufferaddress))
return 0;
+ rtlpci->rx_ring[rxring_idx].rx_buf[desc_idx] = skb;
if (rtlpriv->use_new_trx_flow) {
rtlpriv->cfg->ops->set_desc(hw, (u8 *)entry, false,
HW_DESC_RX_PREPARE,
/*rx pkt */
struct sk_buff *skb = rtlpci->rx_ring[rxring_idx].rx_buf[
rtlpci->rx_ring[rxring_idx].idx];
+ struct sk_buff *new_skb;
if (rtlpriv->use_new_trx_flow) {
rx_remained_cnt =
pci_unmap_single(rtlpci->pdev, *((dma_addr_t *)skb->cb),
rtlpci->rxbuffersize, PCI_DMA_FROMDEVICE);
+ /* get a new skb - if fail, old one will be reused */
+ new_skb = dev_alloc_skb(rtlpci->rxbuffersize);
+ if (unlikely(!new_skb)) {
+ pr_err("Allocation of new skb failed in %s\n",
+ __func__);
+ goto no_new;
+ }
if (rtlpriv->use_new_trx_flow) {
buffer_desc =
&rtlpci->rx_ring[rxring_idx].buffer_desc
schedule_work(&rtlpriv->works.lps_change_work);
}
end:
+ skb = new_skb;
+no_new:
if (rtlpriv->use_new_trx_flow) {
- _rtl_pci_init_one_rxdesc(hw, (u8 *)buffer_desc,
+ _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)buffer_desc,
rxring_idx,
- rtlpci->rx_ring[rxring_idx].idx);
+ rtlpci->rx_ring[rxring_idx].idx);
} else {
- _rtl_pci_init_one_rxdesc(hw, (u8 *)pdesc, rxring_idx,
+ _rtl_pci_init_one_rxdesc(hw, skb, (u8 *)pdesc,
+ rxring_idx,
rtlpci->rx_ring[rxring_idx].idx);
-
if (rtlpci->rx_ring[rxring_idx].idx ==
rtlpci->rxringcount - 1)
rtlpriv->cfg->ops->set_desc(hw, (u8 *)pdesc,
rtlpci->rx_ring[rxring_idx].idx = 0;
for (i = 0; i < rtlpci->rxringcount; i++) {
entry = &rtlpci->rx_ring[rxring_idx].buffer_desc[i];
- if (!_rtl_pci_init_one_rxdesc(hw, (u8 *)entry,
+ if (!_rtl_pci_init_one_rxdesc(hw, NULL, (u8 *)entry,
rxring_idx, i))
return -ENOMEM;
}
for (i = 0; i < rtlpci->rxringcount; i++) {
entry = &rtlpci->rx_ring[rxring_idx].desc[i];
- if (!_rtl_pci_init_one_rxdesc(hw, (u8 *)entry,
+ if (!_rtl_pci_init_one_rxdesc(hw, NULL, (u8 *)entry,
rxring_idx, i))
return -ENOMEM;
}
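
The refactor above implements a common rx-refill strategy: allocate the replacement buffer before handing the filled one to the stack, and on allocation failure recycle the old buffer so the ring slot never goes empty (the packet is dropped instead). A runnable userspace model of the idea:

#include <stdlib.h>

struct buf { char data[2048]; };

static struct buf *refill_slot(struct buf **slot)
{
	struct buf *old = *slot;
	struct buf *fresh = malloc(sizeof(*fresh));

	if (!fresh)
		return NULL;	/* keep old buffer in the ring, drop pkt */
	*slot = fresh;		/* ring keeps a valid buffer either way */
	return old;		/* caller passes the old buffer up the stack */
}
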
struct xenvif_rx_cb {
unsigned long expires;
int meta_slots_used;
- bool full_coalesce;
};
#define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)
}
}
-/*
- * Returns true if we should start a new receive buffer instead of
- * adding 'size' bytes to a buffer which currently contains 'offset'
- * bytes.
- */
-static bool start_new_rx_buffer(int offset, unsigned long size, int head,
- bool full_coalesce)
-{
- /* simple case: we have completely filled the current buffer. */
- if (offset == MAX_BUFFER_OFFSET)
- return true;
-
- /*
- * complex case: start a fresh buffer if the current frag
- * would overflow the current buffer but only if:
- * (i) this frag would fit completely in the next buffer
- * and (ii) there is already some data in the current buffer
- * and (iii) this is not the head buffer.
- * and (iv) there is no need to fully utilize the buffers
- *
- * Where:
- * - (i) stops us splitting a frag into two copies
- * unless the frag is too large for a single buffer.
- * - (ii) stops us from leaving a buffer pointlessly empty.
- * - (iii) stops us leaving the first buffer
- * empty. Strictly speaking this is already covered
- * by (ii) but is explicitly checked because
- * netfront relies on the first buffer being
- * non-empty and can crash otherwise.
- * - (iv) is needed for skbs which can use up more than MAX_SKB_FRAGS
- * slot
- *
- * This means we will effectively linearise small
- * frags but do not needlessly split large buffers
- * into multiple copies tend to give large frags their
- * own buffers as before.
- */
- BUG_ON(size > MAX_BUFFER_OFFSET);
- if ((offset + size > MAX_BUFFER_OFFSET) && offset && !head &&
- !full_coalesce)
- return true;
-
- return false;
-}
-
struct netrx_pending_operations {
unsigned copy_prod, copy_cons;
unsigned meta_prod, meta_cons;
BUG_ON(offset >= PAGE_SIZE);
BUG_ON(npo->copy_off > MAX_BUFFER_OFFSET);
- bytes = PAGE_SIZE - offset;
+ if (npo->copy_off == MAX_BUFFER_OFFSET)
+ meta = get_next_rx_buffer(queue, npo);
+ bytes = PAGE_SIZE - offset;
if (bytes > size)
bytes = size;
- if (start_new_rx_buffer(npo->copy_off,
- bytes,
- *head,
- XENVIF_RX_CB(skb)->full_coalesce)) {
- /*
- * Netfront requires there to be some data in the head
- * buffer.
- */
- BUG_ON(*head);
-
- meta = get_next_rx_buffer(queue, npo);
- }
-
if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
bytes = MAX_BUFFER_OFFSET - npo->copy_off;
while (xenvif_rx_ring_slots_available(queue, XEN_NETBK_RX_SLOTS_MAX)
&& (skb = xenvif_rx_dequeue(queue)) != NULL) {
- RING_IDX max_slots_needed;
RING_IDX old_req_cons;
RING_IDX ring_slots_used;
- int i;
queue->last_rx_time = jiffies;
- /* We need a cheap worse case estimate for the number of
- * slots we'll use.
- */
-
- max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
- skb_headlen(skb),
- PAGE_SIZE);
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- unsigned int size;
- unsigned int offset;
-
- size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
- offset = skb_shinfo(skb)->frags[i].page_offset;
-
- /* For a worse-case estimate we need to factor in
- * the fragment page offset as this will affect the
- * number of times xenvif_gop_frag_copy() will
- * call start_new_rx_buffer().
- */
- max_slots_needed += DIV_ROUND_UP(offset + size,
- PAGE_SIZE);
- }
-
- /* To avoid the estimate becoming too pessimal for some
- * frontends that limit posted rx requests, cap the estimate
- * at MAX_SKB_FRAGS. In this case netback will fully coalesce
- * the skb into the provided slots.
- */
- if (max_slots_needed > MAX_SKB_FRAGS) {
- max_slots_needed = MAX_SKB_FRAGS;
- XENVIF_RX_CB(skb)->full_coalesce = true;
- } else {
- XENVIF_RX_CB(skb)->full_coalesce = false;
- }
-
- /* We may need one more slot for GSO metadata */
- if (skb_is_gso(skb) &&
- (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 ||
- skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6))
- max_slots_needed++;
-
old_req_cons = queue->rx.req_cons;
XENVIF_RX_CB(skb)->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
ring_slots_used = queue->rx.req_cons - old_req_cons;
- BUG_ON(ring_slots_used > max_slots_needed);
-
__skb_queue_tail(&rxq, skb);
}
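
With the worst-case slot estimate gone, netback consumes exactly the slots it needs: a new ring buffer is opened only when the current one is completely full, and each copy fills at most up to the buffer boundary. A runnable model of the per-fragment byte computation:

#include <stdio.h>

#define MAX_BUFFER_OFFSET 4096u

/* bytes to copy into the current buffer given its fill level */
static unsigned int copy_len(unsigned int copy_off, unsigned int size)
{
	unsigned int bytes = size;

	if (copy_off + bytes > MAX_BUFFER_OFFSET)
		bytes = MAX_BUFFER_OFFSET - copy_off;	/* fill to boundary */
	return bytes;
}

int main(void)
{
	/* a 3000-byte fragment into a buffer already 2000 bytes full */
	printf("%u\n", copy_len(2000u, 3000u));	/* 2096 */
	return 0;
}
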
}
queue->remaining_credit = credit_bytes;
+ queue->credit_usec = credit_usec;
err = connect_rings(be, queue);
if (err) {
#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
struct netfront_stats {
- u64 rx_packets;
- u64 tx_packets;
- u64 rx_bytes;
- u64 tx_bytes;
+ u64 packets;
+ u64 bytes;
struct u64_stats_sync syncp;
};
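
Splitting rx and tx into separate per-cpu structures lets each direction use its own sequence counter, so a reader never spins on the other direction's writer. A hedged kernel-style sketch of the reader side (the retry loops later in this patch follow the same pattern):

	unsigned int start;
	u64 packets, bytes;

	do {
		start = u64_stats_fetch_begin_irq(&stats->syncp);
		packets = stats->packets;	/* consistent snapshot */
		bytes = stats->bytes;
	} while (u64_stats_fetch_retry_irq(&stats->syncp, start));
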
struct sk_buff *rx_skbs[NET_RX_RING_SIZE];
grant_ref_t gref_rx_head;
grant_ref_t grant_rx_ref[NET_RX_RING_SIZE];
-
- unsigned long rx_pfn_array[NET_RX_RING_SIZE];
- struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
- struct mmu_update rx_mmu[NET_RX_RING_SIZE];
};
struct netfront_info {
struct netfront_queue *queues;
/* Statistics */
- struct netfront_stats __percpu *stats;
+ struct netfront_stats __percpu *rx_stats;
+ struct netfront_stats __percpu *tx_stats;
atomic_t rx_gso_checksum_fixup;
};
xennet_maybe_wake_tx(queue);
}
-static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
- struct xen_netif_tx_request *tx)
+static struct xen_netif_tx_request *xennet_make_one_txreq(
+ struct netfront_queue *queue, struct sk_buff *skb,
+ struct page *page, unsigned int offset, unsigned int len)
{
- char *data = skb->data;
- unsigned long mfn;
- RING_IDX prod = queue->tx.req_prod_pvt;
- int frags = skb_shinfo(skb)->nr_frags;
- unsigned int offset = offset_in_page(data);
- unsigned int len = skb_headlen(skb);
unsigned int id;
+ struct xen_netif_tx_request *tx;
grant_ref_t ref;
- int i;
- /* While the header overlaps a page boundary (including being
larger than a page), split it into page-sized chunks. */
- while (len > PAGE_SIZE - offset) {
- tx->size = PAGE_SIZE - offset;
- tx->flags |= XEN_NETTXF_more_data;
- len -= tx->size;
- data += tx->size;
- offset = 0;
+ len = min_t(unsigned int, PAGE_SIZE - offset, len);
- id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
- queue->tx_skbs[id].skb = skb_get(skb);
- tx = RING_GET_REQUEST(&queue->tx, prod++);
- tx->id = id;
- ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
- BUG_ON((signed short)ref < 0);
+ id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+ tx = RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
+ ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
+ BUG_ON((signed short)ref < 0);
- mfn = virt_to_mfn(data);
- gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
- mfn, GNTMAP_readonly);
+ gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
+ page_to_mfn(page), GNTMAP_readonly);
- queue->grant_tx_page[id] = virt_to_page(data);
- tx->gref = queue->grant_tx_ref[id] = ref;
- tx->offset = offset;
- tx->size = len;
- tx->flags = 0;
- }
+ queue->tx_skbs[id].skb = skb;
+ queue->grant_tx_page[id] = page;
+ queue->grant_tx_ref[id] = ref;
- /* Grant backend access to each skb fragment page. */
- for (i = 0; i < frags; i++) {
- skb_frag_t *frag = skb_shinfo(skb)->frags + i;
- struct page *page = skb_frag_page(frag);
+ tx->id = id;
+ tx->gref = ref;
+ tx->offset = offset;
+ tx->size = len;
+ tx->flags = 0;
- len = skb_frag_size(frag);
- offset = frag->page_offset;
+ return tx;
+}
- /* Skip unused frames from start of page */
- page += offset >> PAGE_SHIFT;
- offset &= ~PAGE_MASK;
+static struct xen_netif_tx_request *xennet_make_txreqs(
+ struct netfront_queue *queue, struct xen_netif_tx_request *tx,
+ struct sk_buff *skb, struct page *page,
+ unsigned int offset, unsigned int len)
+{
+ /* Skip unused frames from start of page */
+ page += offset >> PAGE_SHIFT;
+ offset &= ~PAGE_MASK;
- while (len > 0) {
- unsigned long bytes;
-
- bytes = PAGE_SIZE - offset;
- if (bytes > len)
- bytes = len;
-
- tx->flags |= XEN_NETTXF_more_data;
-
- id = get_id_from_freelist(&queue->tx_skb_freelist,
- queue->tx_skbs);
- queue->tx_skbs[id].skb = skb_get(skb);
- tx = RING_GET_REQUEST(&queue->tx, prod++);
- tx->id = id;
- ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
- BUG_ON((signed short)ref < 0);
-
- mfn = pfn_to_mfn(page_to_pfn(page));
- gnttab_grant_foreign_access_ref(ref,
- queue->info->xbdev->otherend_id,
- mfn, GNTMAP_readonly);
-
- queue->grant_tx_page[id] = page;
- tx->gref = queue->grant_tx_ref[id] = ref;
- tx->offset = offset;
- tx->size = bytes;
- tx->flags = 0;
-
- offset += bytes;
- len -= bytes;
-
- /* Next frame */
- if (offset == PAGE_SIZE && len) {
- BUG_ON(!PageCompound(page));
- page++;
- offset = 0;
- }
- }
+ while (len) {
+ tx->flags |= XEN_NETTXF_more_data;
+ tx = xennet_make_one_txreq(queue, skb_get(skb),
+ page, offset, len);
+ page++;
+ offset = 0;
+ len -= tx->size;
}
- queue->tx.req_prod_pvt = prod;
+ return tx;
}
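
xennet_make_txreqs() replaces two open-coded loops with a single walk that emits one request per page-sized chunk; the skb head and every fragment now go through the same helper. A runnable model of the chunking walk:

#include <stdio.h>

#define PAGE_SIZE 4096u

static unsigned int one_chunk(unsigned int offset, unsigned int len)
{
	unsigned int n = PAGE_SIZE - offset;
	return n < len ? n : len;
}

int main(void)
{
	unsigned int page = 0, offset = 1000, len = 10000;

	while (len) {
		unsigned int n = one_chunk(offset, len);
		printf("page %u offset %u len %u\n", page, offset, n);
		page++;		/* the next chunk starts on a fresh page */
		offset = 0;
		len -= n;
	}
	return 0;
}
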
/*
- * Count how many ring slots are required to send the frags of this
- * skb. Each frag might be a compound page.
+ * Count how many ring slots are required to send this skb. Each frag
+ * might be a compound page.
*/
-static int xennet_count_skb_frag_slots(struct sk_buff *skb)
+static int xennet_count_skb_slots(struct sk_buff *skb)
{
int i, frags = skb_shinfo(skb)->nr_frags;
- int pages = 0;
+ int pages;
+
+ pages = PFN_UP(offset_in_page(skb->data) + skb_headlen(skb));
for (i = 0; i < frags; i++) {
skb_frag_t *frag = skb_shinfo(skb)->frags + i;
static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
- unsigned short id;
struct netfront_info *np = netdev_priv(dev);
- struct netfront_stats *stats = this_cpu_ptr(np->stats);
- struct xen_netif_tx_request *tx;
- char *data = skb->data;
- RING_IDX i;
- grant_ref_t ref;
- unsigned long mfn;
+ struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
+ struct xen_netif_tx_request *tx, *first_tx;
+ unsigned int i;
int notify;
int slots;
- unsigned int offset = offset_in_page(data);
- unsigned int len = skb_headlen(skb);
+ struct page *page;
+ unsigned int offset;
+ unsigned int len;
unsigned long flags;
struct netfront_queue *queue = NULL;
unsigned int num_queues = dev->real_num_tx_queues;
goto drop;
}
- slots = DIV_ROUND_UP(offset + len, PAGE_SIZE) +
- xennet_count_skb_frag_slots(skb);
+ slots = xennet_count_skb_slots(skb);
if (unlikely(slots > MAX_SKB_FRAGS + 1)) {
net_dbg_ratelimited("xennet: skb rides the rocket: %d slots, %d bytes\n",
slots, skb->len);
if (skb_linearize(skb))
goto drop;
- data = skb->data;
- offset = offset_in_page(data);
- len = skb_headlen(skb);
}
+ page = virt_to_page(skb->data);
+ offset = offset_in_page(skb->data);
+ len = skb_headlen(skb);
+
spin_lock_irqsave(&queue->tx_lock, flags);
if (unlikely(!netif_carrier_ok(dev) ||
goto drop;
}
- i = queue->tx.req_prod_pvt;
-
- id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
- queue->tx_skbs[id].skb = skb;
-
- tx = RING_GET_REQUEST(&queue->tx, i);
-
- tx->id = id;
- ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
- BUG_ON((signed short)ref < 0);
- mfn = virt_to_mfn(data);
- gnttab_grant_foreign_access_ref(
- ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
- queue->grant_tx_page[id] = virt_to_page(data);
- tx->gref = queue->grant_tx_ref[id] = ref;
- tx->offset = offset;
- tx->size = len;
+ /* First request for the linear area. */
+ first_tx = tx = xennet_make_one_txreq(queue, skb,
+ page, offset, len);
+ page++;
+ offset = 0;
+ len -= tx->size;
- tx->flags = 0;
if (skb->ip_summed == CHECKSUM_PARTIAL)
/* local packet? */
tx->flags |= XEN_NETTXF_csum_blank | XEN_NETTXF_data_validated;
/* remote but checksummed. */
tx->flags |= XEN_NETTXF_data_validated;
+ /* Optional extra info after the first request. */
if (skb_shinfo(skb)->gso_size) {
struct xen_netif_extra_info *gso;
gso = (struct xen_netif_extra_info *)
- RING_GET_REQUEST(&queue->tx, ++i);
+ RING_GET_REQUEST(&queue->tx, queue->tx.req_prod_pvt++);
tx->flags |= XEN_NETTXF_extra_info;
gso->flags = 0;
}
- queue->tx.req_prod_pvt = i + 1;
+ /* Requests for the rest of the linear area. */
+ tx = xennet_make_txreqs(queue, tx, skb, page, offset, len);
+
+ /* Requests for all the frags. */
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ tx = xennet_make_txreqs(queue, tx, skb,
+ skb_frag_page(frag), frag->page_offset,
+ skb_frag_size(frag));
+ }
- xennet_make_frags(skb, queue, tx);
- tx->size = skb->len;
+ /* First request has the packet length. */
+ first_tx->size = skb->len;
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
if (notify)
notify_remote_via_irq(queue->tx_irq);
- u64_stats_update_begin(&stats->syncp);
- stats->tx_bytes += skb->len;
- stats->tx_packets++;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_begin(&tx_stats->syncp);
+ tx_stats->bytes += skb->len;
+ tx_stats->packets++;
+ u64_stats_update_end(&tx_stats->syncp);
/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
xennet_tx_buf_gc(queue);
static int handle_incoming_queue(struct netfront_queue *queue,
struct sk_buff_head *rxq)
{
- struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
+ struct netfront_stats *rx_stats = this_cpu_ptr(queue->info->rx_stats);
int packets_dropped = 0;
struct sk_buff *skb;
continue;
}
- u64_stats_update_begin(&stats->syncp);
- stats->rx_packets++;
- stats->rx_bytes += skb->len;
- u64_stats_update_end(&stats->syncp);
+ u64_stats_update_begin(&rx_stats->syncp);
+ rx_stats->packets++;
+ rx_stats->bytes += skb->len;
+ u64_stats_update_end(&rx_stats->syncp);
/* Pass it up. */
napi_gro_receive(&queue->napi, skb);
int cpu;
for_each_possible_cpu(cpu) {
- struct netfront_stats *stats = per_cpu_ptr(np->stats, cpu);
+ struct netfront_stats *rx_stats = per_cpu_ptr(np->rx_stats, cpu);
+ struct netfront_stats *tx_stats = per_cpu_ptr(np->tx_stats, cpu);
u64 rx_packets, rx_bytes, tx_packets, tx_bytes;
unsigned int start;
do {
- start = u64_stats_fetch_begin_irq(&stats->syncp);
+ start = u64_stats_fetch_begin_irq(&tx_stats->syncp);
+ tx_packets = tx_stats->packets;
+ tx_bytes = tx_stats->bytes;
+ } while (u64_stats_fetch_retry_irq(&tx_stats->syncp, start));
- rx_packets = stats->rx_packets;
- tx_packets = stats->tx_packets;
- rx_bytes = stats->rx_bytes;
- tx_bytes = stats->tx_bytes;
- } while (u64_stats_fetch_retry_irq(&stats->syncp, start));
+ do {
+ start = u64_stats_fetch_begin_irq(&rx_stats->syncp);
+ rx_packets = rx_stats->packets;
+ rx_bytes = rx_stats->bytes;
+ } while (u64_stats_fetch_retry_irq(&rx_stats->syncp, start));
tot->rx_packets += rx_packets;
tot->tx_packets += tx_packets;
#endif
};
+static void xennet_free_netdev(struct net_device *netdev)
+{
+ struct netfront_info *np = netdev_priv(netdev);
+
+ free_percpu(np->rx_stats);
+ free_percpu(np->tx_stats);
+ free_netdev(netdev);
+}
+
static struct net_device *xennet_create_dev(struct xenbus_device *dev)
{
int err;
np->queues = NULL;
err = -ENOMEM;
- np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
- if (np->stats == NULL)
+ np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
+ if (np->rx_stats == NULL)
+ goto exit;
+ np->tx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
+ if (np->tx_stats == NULL)
goto exit;
netdev->netdev_ops = &xennet_netdev_ops;
return netdev;
exit:
- free_netdev(netdev);
+ xennet_free_netdev(netdev);
return ERR_PTR(err);
}
return 0;
fail:
- free_netdev(netdev);
+ xennet_free_netdev(netdev);
dev_set_drvdata(&dev->dev, NULL);
return err;
}
info->queues = NULL;
}
- free_percpu(info->stats);
-
- free_netdev(info->netdev);
+ xennet_free_netdev(info->netdev);
return 0;
}
bool pcie_tx_pol_inv;
bool sata_tx_pol_inv;
u32 sata_gen;
- u64 ctrlreg;
+ u32 ctrlreg;
u8 type;
};
bool sata = (miphy_phy->type == MIPHY_TYPE_SATA);
return regmap_update_bits(miphy_dev->regmap,
- (unsigned int)miphy_phy->ctrlreg,
+ miphy_phy->ctrlreg,
SYSCFG_SELECT_SATA_MASK,
sata << SYSCFG_SELECT_SATA_POS);
}
{
struct device_node *phynode = miphy_phy->phy->dev.of_node;
const char *name;
- const __be32 *taddr;
int type = miphy_phy->type;
int ret;
return ret;
}
- if (!strncmp(name, "syscfg", 6)) {
- taddr = of_get_address(phynode, index, NULL, NULL);
- if (!taddr) {
- dev_err(dev, "failed to fetch syscfg address\n");
- return -EINVAL;
- }
-
- miphy_phy->ctrlreg = of_translate_address(phynode, taddr);
- if (miphy_phy->ctrlreg == OF_BAD_ADDR) {
- dev_err(dev, "failed to translate syscfg address\n");
- return -EINVAL;
- }
-
- return 0;
- }
-
if (!((!strncmp(name, "sata", 4) && type == MIPHY_TYPE_SATA) ||
(!strncmp(name, "pcie", 4) && type == MIPHY_TYPE_PCIE)))
return 0;
return ret;
phy_set_drvdata(phy, miphy_dev->phys[port]);
+
port++;
+ /* sysconfig offsets are indexed from 1 */
+ ret = of_property_read_u32_index(np, "st,syscfg", port,
+ &miphy_phy->ctrlreg);
+ if (ret) {
+ dev_err(&pdev->dev, "No sysconfig offset found\n");
+ return ret;
+ }
}
provider = devm_of_phy_provider_register(&pdev->dev, miphy365x_xlate);
#include <linux/mfd/syscon.h>
#include <linux/phy/phy.h>
+#define PHYPARAM_REG 1
+#define PHYCTRL_REG 2
+
/* Default PHY_SEL and REFCLKSEL configuration */
#define STIH407_USB_PICOPHY_CTRL_PORT_CONF 0x6
#define STIH407_USB_PICOPHY_CTRL_PORT_MASK 0x1f
struct device_node *np = dev->of_node;
struct phy_provider *phy_provider;
struct phy *phy;
- struct resource *res;
+ int ret;
phy_dev = devm_kzalloc(dev, sizeof(*phy_dev), GFP_KERNEL);
if (!phy_dev)
return PTR_ERR(phy_dev->regmap);
}
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl");
- if (!res) {
- dev_err(dev, "No ctrl reg found\n");
- return -ENXIO;
+ ret = of_property_read_u32_index(np, "st,syscfg", PHYPARAM_REG,
+ &phy_dev->param);
+ if (ret) {
+ dev_err(dev, "can't get phyparam offset (%d)\n", ret);
+ return ret;
}
- phy_dev->ctrl = res->start;
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "param");
- if (!res) {
- dev_err(dev, "No param reg found\n");
- return -ENXIO;
+ ret = of_property_read_u32_index(np, "st,syscfg", PHYCTRL_REG,
+ &phy_dev->ctrl);
+ if (ret) {
+ dev_err(dev, "can't get phyctrl offset (%d)\n", ret);
+ return ret;
}
- phy_dev->param = res->start;
phy = devm_phy_create(dev, NULL, &stih407_usb2_picophy_data);
if (IS_ERR(phy)) {
* @reg_pull: optional separate register for additional pull settings
* @clk: clock of the gpio bank
* @irq: interrupt of the gpio bank
+ * @saved_enables: Saved content of GPIO_INTEN at suspend time.
* @pin_base: first pin number
* @nr_pins: number of pins in this bank
* @name: name of the bank
struct regmap *regmap_pull;
struct clk *clk;
int irq;
+ u32 saved_enables;
u32 pin_base;
u8 nr_pins;
char *name;
return 0;
}
+static void rockchip_irq_suspend(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct rockchip_pin_bank *bank = gc->private;
+
+ bank->saved_enables = irq_reg_readl(gc, GPIO_INTEN);
+ irq_reg_writel(gc, gc->wake_active, GPIO_INTEN);
+}
+
+static void rockchip_irq_resume(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ struct rockchip_pin_bank *bank = gc->private;
+
+ irq_reg_writel(gc, bank->saved_enables, GPIO_INTEN);
+}
+
+static void rockchip_irq_disable(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ irq_gc_lock(gc);
+
+ val = irq_reg_readl(gc, GPIO_INTEN);
+ val &= ~d->mask;
+ irq_reg_writel(gc, val, GPIO_INTEN);
+
+ irq_gc_unlock(gc);
+}
+
+static void rockchip_irq_enable(struct irq_data *d)
+{
+ struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d);
+ u32 val;
+
+ irq_gc_lock(gc);
+
+ val = irq_reg_readl(gc, GPIO_INTEN);
+ val |= d->mask;
+ irq_reg_writel(gc, val, GPIO_INTEN);
+
+ irq_gc_unlock(gc);
+}
+
static int rockchip_interrupts_register(struct platform_device *pdev,
struct rockchip_pinctrl *info)
{
gc = irq_get_domain_generic_chip(bank->domain, 0);
gc->reg_base = bank->reg_base;
gc->private = bank;
- gc->chip_types[0].regs.mask = GPIO_INTEN;
+ gc->chip_types[0].regs.mask = GPIO_INTMASK;
gc->chip_types[0].regs.ack = GPIO_PORTS_EOI;
gc->chip_types[0].chip.irq_ack = irq_gc_ack_set_bit;
- gc->chip_types[0].chip.irq_mask = irq_gc_mask_clr_bit;
- gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit;
+ gc->chip_types[0].chip.irq_mask = irq_gc_mask_set_bit;
+ gc->chip_types[0].chip.irq_unmask = irq_gc_mask_clr_bit;
+ gc->chip_types[0].chip.irq_enable = rockchip_irq_enable;
+ gc->chip_types[0].chip.irq_disable = rockchip_irq_disable;
gc->chip_types[0].chip.irq_set_wake = irq_gc_set_wake;
+ gc->chip_types[0].chip.irq_suspend = rockchip_irq_suspend;
+ gc->chip_types[0].chip.irq_resume = rockchip_irq_resume;
gc->chip_types[0].chip.irq_set_type = rockchip_irq_set_type;
gc->wake_enabled = IRQ_MSK(bank->nr_pins);
struct seq_file *s, unsigned pin_id)
{
unsigned long config;
- st_pinconf_get(pctldev, pin_id, &config);
+ mutex_unlock(&pctldev->mutex);
+ st_pinconf_get(pctldev, pin_id, &config);
+ mutex_lock(&pctldev->mutex);
seq_printf(s, "[OE:%ld,PU:%ld,OD:%ld]\n"
"\t\t[retime:%ld,invclk:%ld,clknotdat:%ld,"
"de:%ld,rt-clk:%ld,rt-delay:%ld]",
static struct irq_chip st_gpio_irqchip = {
.name = "GPIO",
+ .irq_disable = st_gpio_irq_mask,
.irq_mask = st_gpio_irq_mask,
.irq_unmask = st_gpio_irq_unmask,
.irq_set_type = st_gpio_irq_set_type,
*/
static inline int ap_test_config_domain(unsigned int domain)
{
- if (!ap_configuration)
- return 1;
- return ap_test_config(ap_configuration->aqm, domain);
+ if (!ap_configuration) /* QCI not supported */
+ /* then domains 0...15 are configured */
+ return domain < 16;
+ return ap_test_config(ap_configuration->aqm, domain);
}
/**
static void
claw_unregister_debug_facility(void)
{
- if (claw_dbf_setup)
- debug_unregister(claw_dbf_setup);
- if (claw_dbf_trace)
- debug_unregister(claw_dbf_trace);
+ debug_unregister(claw_dbf_setup);
+ debug_unregister(claw_dbf_trace);
}
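
The dropped guards rely on debug_unregister() tolerating a NULL argument, the same convention as kfree(NULL). A hedged sketch of the callee-side check that makes the callers' tests redundant:

void debug_unregister(debug_info_t *id)
{
	if (!id)
		return;		/* NULL is a no-op, like kfree(NULL) */
	/* ... actual teardown ... */
}
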
static int
int first = 1;
int i;
unsigned long duration;
- struct timespec done_stamp = current_kernel_time(); /* xtime */
+ unsigned long done_stamp = jiffies;
CTCM_PR_DEBUG("%s(%s): %s\n", __func__, ch->id, dev->name);
- duration =
- (done_stamp.tv_sec - ch->prof.send_stamp.tv_sec) * 1000000 +
- (done_stamp.tv_nsec - ch->prof.send_stamp.tv_nsec) / 1000;
+ duration = done_stamp - ch->prof.send_stamp;
if (duration > ch->prof.tx_time)
ch->prof.tx_time = duration;
spin_unlock(&ch->collect_lock);
ch->ccw[1].count = ch->trans_skb->len;
fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch);
- ch->prof.send_stamp = current_kernel_time(); /* xtime */
+ ch->prof.send_stamp = jiffies;
rc = ccw_device_start(ch->cdev, &ch->ccw[0],
(unsigned long)ch, 0xff, 0);
ch->prof.doios_multi++;
int rc;
struct th_header *header;
struct pdu *p_header;
- struct timespec done_stamp = current_kernel_time(); /* xtime */
+ unsigned long done_stamp = jiffies;
CTCM_PR_DEBUG("Enter %s: %s cp:%i\n",
__func__, dev->name, smp_processor_id());
- duration =
- (done_stamp.tv_sec - ch->prof.send_stamp.tv_sec) * 1000000 +
- (done_stamp.tv_nsec - ch->prof.send_stamp.tv_nsec) / 1000;
+ duration = done_stamp - ch->prof.send_stamp;
if (duration > ch->prof.tx_time)
ch->prof.tx_time = duration;
ch->ccw[1].count = ch->trans_skb->len;
fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch);
- ch->prof.send_stamp = current_kernel_time(); /* xtime */
+ ch->prof.send_stamp = jiffies;
if (do_debug_ccw)
ctcmpc_dumpit((char *)&ch->ccw[0], sizeof(struct ccw1) * 3);
rc = ccw_device_start(ch->cdev, &ch->ccw[0],
fsm_newstate(wch->fsm, CTC_STATE_TX);
spin_lock_irqsave(get_ccwdev_lock(wch->cdev), saveflags);
- wch->prof.send_stamp = current_kernel_time(); /* xtime */
+ wch->prof.send_stamp = jiffies;
rc = ccw_device_start(wch->cdev, &wch->ccw[3],
(unsigned long) wch, 0xff, 0);
spin_unlock_irqrestore(get_ccwdev_lock(wch->cdev), saveflags);
fsm_newstate(ch->fsm, CTC_STATE_TX);
fsm_addtimer(&ch->timer, CTCM_TIME_5_SEC, CTC_EVENT_TIMER, ch);
spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags);
- ch->prof.send_stamp = current_kernel_time(); /* xtime */
+ ch->prof.send_stamp = jiffies;
rc = ccw_device_start(ch->cdev, &ch->ccw[ccw_idx],
(unsigned long)ch, 0xff, 0);
spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags);
sizeof(struct ccw1) * 3);
spin_lock_irqsave(get_ccwdev_lock(ch->cdev), saveflags);
- ch->prof.send_stamp = current_kernel_time(); /* xtime */
+ ch->prof.send_stamp = jiffies;
rc = ccw_device_start(ch->cdev, &ch->ccw[ccw_idx],
(unsigned long)ch, 0xff, 0);
spin_unlock_irqrestore(get_ccwdev_lock(ch->cdev), saveflags);
unsigned long doios_multi;
unsigned long txlen;
unsigned long tx_time;
- struct timespec send_stamp;
+ unsigned long send_stamp;
};
/*
priv->channel[WRITE]->prof.doios_multi);
p += sprintf(p, " Netto bytes written: %ld\n",
priv->channel[WRITE]->prof.txlen);
- p += sprintf(p, " Max. TX IO-time: %ld\n",
- priv->channel[WRITE]->prof.tx_time);
+ p += sprintf(p, " Max. TX IO-time: %u\n",
+ jiffies_to_usecs(priv->channel[WRITE]->prof.tx_time));
printk(KERN_INFO "Statistics for %s:\n%s",
priv->channel[CTCM_WRITE]->netdev->name, sbuf);
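
Switching the send timestamp from struct timespec to jiffies turns the duration into a single unsigned subtraction, which is also safe across counter wrap; conversion to microseconds happens only at display time. A hedged kernel-style fragment of the pattern:

	unsigned long start = jiffies;			/* at submit */
	/* ... channel I/O completes ... */
	unsigned long delta = jiffies - start;		/* wrap-safe in jiffies */
	unsigned int usecs = jiffies_to_usecs(delta);	/* for display only */
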
static void
lcs_unregister_debug_facility(void)
{
- if (lcs_dbf_setup)
- debug_unregister(lcs_dbf_setup);
- if (lcs_dbf_trace)
- debug_unregister(lcs_dbf_trace);
+ debug_unregister(lcs_dbf_setup);
+ debug_unregister(lcs_dbf_trace);
}
static int
unsigned long doios_multi;
unsigned long txlen;
unsigned long tx_time;
- struct timespec send_stamp;
+ unsigned long send_stamp;
unsigned long tx_pending;
unsigned long tx_max_pending;
};
static void iucv_unregister_dbf_views(void)
{
- if (iucv_dbf_setup)
- debug_unregister(iucv_dbf_setup);
- if (iucv_dbf_data)
- debug_unregister(iucv_dbf_data);
- if (iucv_dbf_trace)
- debug_unregister(iucv_dbf_trace);
+ debug_unregister(iucv_dbf_setup);
+ debug_unregister(iucv_dbf_data);
+ debug_unregister(iucv_dbf_trace);
}
static int iucv_register_dbf_views(void)
{
header.next = 0;
memcpy(skb_put(conn->tx_buff, NETIUCV_HDRLEN), &header, NETIUCV_HDRLEN);
- conn->prof.send_stamp = current_kernel_time();
+ conn->prof.send_stamp = jiffies;
txmsg.class = 0;
txmsg.tag = 0;
rc = iucv_message_send(conn->path, &txmsg, 0, 0,
memcpy(skb_put(nskb, NETIUCV_HDRLEN), &header, NETIUCV_HDRLEN);
fsm_newstate(conn->fsm, CONN_STATE_TX);
- conn->prof.send_stamp = current_kernel_time();
+ conn->prof.send_stamp = jiffies;
msg.tag = 1;
msg.class = 0;
struct ccw1 ccw;
spinlock_t iob_lock;
wait_queue_head_t wait_q;
- struct tasklet_struct irq_tasklet;
struct ccw_device *ccwdev;
/*command buffer for control data*/
struct qeth_cmd_buffer iob[QETH_CMD_BUFFER_NO];
struct device_attribute *attr, const char *buf, size_t count)
{
struct qeth_card *card = dev_get_drvdata(dev);
- char *tmp;
int rc = 0;
if (!card)
goto out;
}
- tmp = strsep((char **) &buf, "\n");
- if (!strcmp(tmp, "prio_queueing_prec")) {
+ if (sysfs_streq(buf, "prio_queueing_prec")) {
card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_PREC;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
- } else if (!strcmp(tmp, "prio_queueing_skb")) {
+ } else if (sysfs_streq(buf, "prio_queueing_skb")) {
card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_SKB;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
- } else if (!strcmp(tmp, "prio_queueing_tos")) {
+ } else if (sysfs_streq(buf, "prio_queueing_tos")) {
card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_TOS;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
- } else if (!strcmp(tmp, "prio_queueing_vlan")) {
+ } else if (sysfs_streq(buf, "prio_queueing_vlan")) {
if (!card->options.layer2) {
rc = -ENOTSUPP;
goto out;
}
card->qdio.do_prio_queueing = QETH_PRIO_Q_ING_VLAN;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
- } else if (!strcmp(tmp, "no_prio_queueing:0")) {
+ } else if (sysfs_streq(buf, "no_prio_queueing:0")) {
card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING;
card->qdio.default_out_queue = 0;
- } else if (!strcmp(tmp, "no_prio_queueing:1")) {
+ } else if (sysfs_streq(buf, "no_prio_queueing:1")) {
card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING;
card->qdio.default_out_queue = 1;
- } else if (!strcmp(tmp, "no_prio_queueing:2")) {
+ } else if (sysfs_streq(buf, "no_prio_queueing:2")) {
card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING;
card->qdio.default_out_queue = 2;
- } else if (!strcmp(tmp, "no_prio_queueing:3")) {
+ } else if (sysfs_streq(buf, "no_prio_queueing:3")) {
card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING;
card->qdio.default_out_queue = 3;
- } else if (!strcmp(tmp, "no_prio_queueing")) {
+ } else if (sysfs_streq(buf, "no_prio_queueing")) {
card->qdio.do_prio_queueing = QETH_NO_PRIO_QUEUEING;
card->qdio.default_out_queue = QETH_DEFAULT_QUEUE;
} else
struct qeth_card *card = dev_get_drvdata(dev);
enum qeth_ipa_isolation_modes isolation;
int rc = 0;
- char *tmp, *curtoken;
- curtoken = (char *) buf;
if (!card)
return -EINVAL;
}
/* parse input into isolation mode */
- tmp = strsep(&curtoken, "\n");
- if (!strcmp(tmp, ATTR_QETH_ISOLATION_NONE)) {
+ if (sysfs_streq(buf, ATTR_QETH_ISOLATION_NONE)) {
isolation = ISOLATION_MODE_NONE;
- } else if (!strcmp(tmp, ATTR_QETH_ISOLATION_FWD)) {
+ } else if (sysfs_streq(buf, ATTR_QETH_ISOLATION_FWD)) {
isolation = ISOLATION_MODE_FWD;
- } else if (!strcmp(tmp, ATTR_QETH_ISOLATION_DROP)) {
+ } else if (sysfs_streq(buf, ATTR_QETH_ISOLATION_DROP)) {
isolation = ISOLATION_MODE_DROP;
} else {
rc = -EINVAL;
/* defer IP assist if device is offline (until discipline->set_online)*/
card->options.prev_isolation = card->options.isolation;
card->options.isolation = isolation;
- if (card->state == CARD_STATE_SOFTSETUP ||
- card->state == CARD_STATE_UP) {
+ if (qeth_card_hw_is_reachable(card)) {
int ipa_rc = qeth_set_access_ctrl_online(card, 1);
if (ipa_rc != 0)
rc = ipa_rc;
if (!card)
return -EINVAL;
- if (card->state != CARD_STATE_SOFTSETUP && card->state != CARD_STATE_UP)
+ if (!qeth_card_hw_is_reachable(card))
return sprintf(buf, "n/a\n");
rc = qeth_query_switch_attributes(card, &sw_info);
{
struct qeth_card *card = dev_get_drvdata(dev);
int rc = 0;
- char *tmp, *curtoken;
int state = 0;
- curtoken = (char *)buf;
if (!card)
return -EINVAL;
mutex_lock(&card->conf_mutex);
- if (card->state == CARD_STATE_SOFTSETUP || card->state == CARD_STATE_UP)
+ if (qeth_card_hw_is_reachable(card))
state = 1;
- tmp = strsep(&curtoken, "\n");
- if (!strcmp(tmp, "arm") && !card->info.hwtrap) {
+ if (sysfs_streq(buf, "arm") && !card->info.hwtrap) {
if (state) {
if (qeth_is_diagass_supported(card,
QETH_DIAGS_CMD_TRAP)) {
rc = -EINVAL;
} else
card->info.hwtrap = 1;
- } else if (!strcmp(tmp, "disarm") && card->info.hwtrap) {
+ } else if (sysfs_streq(buf, "disarm") && card->info.hwtrap) {
if (state) {
rc = qeth_hw_trap(card, QETH_DIAGS_TRAP_DISARM);
if (!rc)
card->info.hwtrap = 0;
} else
card->info.hwtrap = 0;
- } else if (!strcmp(tmp, "trap") && state && card->info.hwtrap)
+ } else if (sysfs_streq(buf, "trap") && state && card->info.hwtrap)
rc = qeth_hw_trap(card, QETH_DIAGS_TRAP_CAPTURE);
else
rc = -EINVAL;
if (!card)
return -ENODEV;
- if ((card->state != CARD_STATE_UP) &&
- (card->state != CARD_STATE_SOFTSETUP))
+ if (!qeth_card_hw_is_reachable(card))
return -ENODEV;
if (card->info.type == QETH_CARD_TYPE_OSN)
if (!card)
return -ENODEV;
QETH_CARD_TEXT(card, 2, "osnsdmc");
- if ((card->state != CARD_STATE_UP) &&
- (card->state != CARD_STATE_SOFTSETUP))
+ if (!qeth_card_hw_is_reachable(card))
return -ENODEV;
iob = qeth_wait_for_buffer(&card->write);
memcpy(iob->data+IPA_PDU_HEADER_SIZE, data, data_len);
QETH_CARD_TEXT(card, 2, "sdiplist");
QETH_CARD_HEX(card, 2, &card, sizeof(void *));
- if ((card->state != CARD_STATE_UP &&
- card->state != CARD_STATE_SOFTSETUP) || card->options.sniffer) {
+ if (!qeth_card_hw_is_reachable(card) || card->options.sniffer)
return;
- }
spin_lock_irqsave(&card->ip_lock, flags);
tbd_list = card->ip_tbd_list;
if (!card)
return -ENODEV;
- if ((card->state != CARD_STATE_UP) &&
- (card->state != CARD_STATE_SOFTSETUP))
+ if (!qeth_card_hw_is_reachable(card))
return -ENODEV;
switch (cmd) {
* before we're going to overwrite this location with next hop ip.
* v6 uses passthrough, v4 sets the tag in the QDIO header.
*/
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
if ((ipv == 4) || (card->info.type == QETH_CARD_TYPE_IQD))
hdr->hdr.l3.ext_flags = QETH_HDR_EXT_VLAN_FRAME;
else
hdr->hdr.l3.ext_flags = QETH_HDR_EXT_INCLUDE_VLAN_TAG;
- hdr->hdr.l3.vlan_id = vlan_tx_tag_get(skb);
+ hdr->hdr.l3.vlan_id = skb_vlan_tag_get(skb);
}
hdr->hdr.l3.length = skb->len - sizeof(struct qeth_hdr);
skb_pull(new_skb, ETH_HLEN);
}
- if (ipv != 4 && vlan_tx_tag_present(new_skb)) {
+ if (ipv != 4 && skb_vlan_tag_present(new_skb)) {
skb_push(new_skb, VLAN_HLEN);
skb_copy_to_linear_data(new_skb, new_skb->data + 4, 4);
skb_copy_to_linear_data_offset(new_skb, 4,
new_skb->data + 12, 4);
tag = (u16 *)(new_skb->data + 12);
*tag = __constant_htons(ETH_P_8021Q);
- *(tag + 1) = htons(vlan_tx_tag_get(new_skb));
+ *(tag + 1) = htons(skb_vlan_tag_get(new_skb));
}
}
const char *buf, size_t count)
{
enum qeth_routing_types old_route_type = route->type;
- char *tmp;
int rc = 0;
- tmp = strsep((char **) &buf, "\n");
mutex_lock(&card->conf_mutex);
- if (!strcmp(tmp, "no_router")) {
+ if (sysfs_streq(buf, "no_router")) {
route->type = NO_ROUTER;
- } else if (!strcmp(tmp, "primary_connector")) {
+ } else if (sysfs_streq(buf, "primary_connector")) {
route->type = PRIMARY_CONNECTOR;
- } else if (!strcmp(tmp, "secondary_connector")) {
+ } else if (sysfs_streq(buf, "secondary_connector")) {
route->type = SECONDARY_CONNECTOR;
- } else if (!strcmp(tmp, "primary_router")) {
+ } else if (sysfs_streq(buf, "primary_router")) {
route->type = PRIMARY_ROUTER;
- } else if (!strcmp(tmp, "secondary_router")) {
+ } else if (sysfs_streq(buf, "secondary_router")) {
route->type = SECONDARY_ROUTER;
- } else if (!strcmp(tmp, "multicast_router")) {
+ } else if (sysfs_streq(buf, "multicast_router")) {
route->type = MULTICAST_ROUTER;
} else {
rc = -EINVAL;
goto out;
}
- if (((card->state == CARD_STATE_SOFTSETUP) ||
- (card->state == CARD_STATE_UP)) &&
+ if (qeth_card_hw_is_reachable(card) &&
(old_route_type != route->type)) {
if (prot == QETH_PROT_IPV4)
rc = qeth_l3_setrouting_v4(card);
{
struct qeth_card *card = dev_get_drvdata(dev);
struct qeth_ipaddr *tmpipa, *t;
- char *tmp;
int rc = 0;
if (!card)
goto out;
}
- tmp = strsep((char **) &buf, "\n");
- if (!strcmp(tmp, "toggle")) {
+ if (sysfs_streq(buf, "toggle")) {
card->ipato.enabled = (card->ipato.enabled)? 0 : 1;
- } else if (!strcmp(tmp, "1")) {
+ } else if (sysfs_streq(buf, "1")) {
card->ipato.enabled = 1;
list_for_each_entry_safe(tmpipa, t, card->ip_tbd_list, entry) {
if ((tmpipa->type == QETH_IP_TYPE_NORMAL) &&
QETH_IPA_SETIP_TAKEOVER_FLAG;
}
- } else if (!strcmp(tmp, "0")) {
+ } else if (sysfs_streq(buf, "0")) {
card->ipato.enabled = 0;
list_for_each_entry_safe(tmpipa, t, card->ip_tbd_list, entry) {
if (tmpipa->set_flags &
const char *buf, size_t count)
{
struct qeth_card *card = dev_get_drvdata(dev);
- char *tmp;
int rc = 0;
if (!card)
return -EINVAL;
mutex_lock(&card->conf_mutex);
- tmp = strsep((char **) &buf, "\n");
- if (!strcmp(tmp, "toggle")) {
+ if (sysfs_streq(buf, "toggle"))
card->ipato.invert4 = (card->ipato.invert4)? 0 : 1;
- } else if (!strcmp(tmp, "1")) {
+ else if (sysfs_streq(buf, "1"))
card->ipato.invert4 = 1;
- } else if (!strcmp(tmp, "0")) {
+ else if (sysfs_streq(buf, "0"))
card->ipato.invert4 = 0;
- } else
+ else
rc = -EINVAL;
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;
struct device_attribute *attr, const char *buf, size_t count)
{
struct qeth_card *card = dev_get_drvdata(dev);
- char *tmp;
int rc = 0;
if (!card)
return -EINVAL;
mutex_lock(&card->conf_mutex);
- tmp = strsep((char **) &buf, "\n");
- if (!strcmp(tmp, "toggle")) {
+ if (sysfs_streq(buf, "toggle"))
card->ipato.invert6 = (card->ipato.invert6)? 0 : 1;
- } else if (!strcmp(tmp, "1")) {
+ else if (sysfs_streq(buf, "1"))
card->ipato.invert6 = 1;
- } else if (!strcmp(tmp, "0")) {
+ else if (sysfs_streq(buf, "0"))
card->ipato.invert6 = 0;
- } else
+ else
rc = -EINVAL;
mutex_unlock(&card->conf_mutex);
return rc ? rc : count;
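sysfs store methods receive the raw user buffer, which usually carries the
newline from echo; sysfs_streq() treats a single trailing newline on either
side as end-of-string, which is what lets all of the strsep()-based newline
stripping above (and its const-discarding casts) go away. A minimal sketch
of the resulting store pattern (names are illustrative, not from the driver):

	/* "1\n" from 'echo 1 > attr' matches "1"; "10" does not. */
	static ssize_t example_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
	{
		bool enable;

		if (sysfs_streq(buf, "1"))
			enable = true;
		else if (sysfs_streq(buf, "0"))
			enable = false;
		else
			return -EINVAL;
		pr_info("example attribute %sabled\n", enable ? "en" : "dis");
		return count;
	}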
obj-$(CONFIG_SCSI_CHELSIO_FCOE) += csiostor.o
csiostor-objs := csio_attr.o csio_init.o csio_lnode.o csio_scsi.o \
- csio_hw.o csio_hw_t4.o csio_hw_t5.o csio_isr.o \
+ csio_hw.o csio_hw_t5.o csio_isr.o \
csio_mb.o csio_rnode.o csio_wr.o
static int dev_num;
/* FCoE Adapter types & its description */
-static const struct csio_adap_desc csio_t4_fcoe_adapters[] = {
- {"T440-Dbg 10G", "Chelsio T440-Dbg 10G [FCoE]"},
- {"T420-CR 10G", "Chelsio T420-CR 10G [FCoE]"},
- {"T422-CR 10G/1G", "Chelsio T422-CR 10G/1G [FCoE]"},
- {"T440-CR 10G", "Chelsio T440-CR 10G [FCoE]"},
- {"T420-BCH 10G", "Chelsio T420-BCH 10G [FCoE]"},
- {"T440-BCH 10G", "Chelsio T440-BCH 10G [FCoE]"},
- {"T440-CH 10G", "Chelsio T440-CH 10G [FCoE]"},
- {"T420-SO 10G", "Chelsio T420-SO 10G [FCoE]"},
- {"T420-CX4 10G", "Chelsio T420-CX4 10G [FCoE]"},
- {"T420-BT 10G", "Chelsio T420-BT 10G [FCoE]"},
- {"T404-BT 1G", "Chelsio T404-BT 1G [FCoE]"},
- {"B420-SR 10G", "Chelsio B420-SR 10G [FCoE]"},
- {"B404-BT 1G", "Chelsio B404-BT 1G [FCoE]"},
- {"T480-CR 10G", "Chelsio T480-CR 10G [FCoE]"},
- {"T440-LP-CR 10G", "Chelsio T440-LP-CR 10G [FCoE]"},
- {"AMSTERDAM 10G", "Chelsio AMSTERDAM 10G [FCoE]"},
- {"HUAWEI T480 10G", "Chelsio HUAWEI T480 10G [FCoE]"},
- {"HUAWEI T440 10G", "Chelsio HUAWEI T440 10G [FCoE]"},
- {"HUAWEI STG 10G", "Chelsio HUAWEI STG 10G [FCoE]"},
- {"ACROMAG XAUI 10G", "Chelsio ACROMAG XAUI 10G [FCoE]"},
- {"ACROMAG SFP+ 10G", "Chelsio ACROMAG SFP+ 10G [FCoE]"},
- {"QUANTA SFP+ 10G", "Chelsio QUANTA SFP+ 10G [FCoE]"},
- {"HUAWEI 10Gbase-T", "Chelsio HUAWEI 10Gbase-T [FCoE]"},
- {"HUAWEI T4TOE 10G", "Chelsio HUAWEI T4TOE 10G [FCoE]"}
-};
-
static const struct csio_adap_desc csio_t5_fcoe_adapters[] = {
{"T580-Dbg 10G", "Chelsio T580-Dbg 10G [FCoE]"},
{"T520-CR 10G", "Chelsio T520-CR 10G [FCoE]"},
- {"T522-CR 10G/1G", "Chelsio T452-CR 10G/1G [FCoE]"},
+ {"T522-CR 10G/1G", "Chelsio T522-CR 10G/1G [FCoE]"},
{"T540-CR 10G", "Chelsio T540-CR 10G [FCoE]"},
{"T520-BCH 10G", "Chelsio T520-BCH 10G [FCoE]"},
{"T540-BCH 10G", "Chelsio T540-BCH 10G [FCoE]"},
{"T580-LP-CR 40G", "Chelsio T580-LP-CR 40G [FCoE]"},
{"T520-LL-CR 10G", "Chelsio T520-LL-CR 10G [FCoE]"},
{"T560-CR 40G", "Chelsio T560-CR 40G [FCoE]"},
- {"T580-CR 40G", "Chelsio T580-CR 40G [FCoE]"}
+ {"T580-CR 40G", "Chelsio T580-CR 40G [FCoE]"},
+ {"T580-SO 40G", "Chelsio T580-SO 40G [FCoE]"},
+ {"T502-BT 1G", "Chelsio T502-BT 1G [FCoE]"}
};
static void csio_mgmtm_cleanup(struct csio_mgmtm *);
csio_hw_tp_wr_bits_indirect(struct csio_hw *hw, unsigned int addr,
unsigned int mask, unsigned int val)
{
- csio_wr_reg32(hw, addr, TP_PIO_ADDR);
- val |= csio_rd_reg32(hw, TP_PIO_DATA) & ~mask;
- csio_wr_reg32(hw, val, TP_PIO_DATA);
+ csio_wr_reg32(hw, addr, TP_PIO_ADDR_A);
+ val |= csio_rd_reg32(hw, TP_PIO_DATA_A) & ~mask;
+ csio_wr_reg32(hw, val, TP_PIO_DATA_A);
}
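Most of the csiostor churn from here on is a mechanical move to the cxgb4
register-macro convention: register addresses gain an _A suffix, and every
bitfield gets generated accessors. A sketch of the assumed pattern for a
hypothetical field FOO occupying bits [7:4] of a register (the EXAMPLE_
names are ours):

	#define EXAMPLE_FOO_S    4	/* field shift */
	#define EXAMPLE_FOO_M    0xfU	/* unshifted field mask */
	#define EXAMPLE_FOO_V(x) ((x) << EXAMPLE_FOO_S)	/* compose a value */
	#define EXAMPLE_FOO_G(x) (((x) >> EXAMPLE_FOO_S) & EXAMPLE_FOO_M)
	#define EXAMPLE_FOO_F    EXAMPLE_FOO_V(1U)	/* single-bit flag */

	/* so EXAMPLE_FOO_G(EXAMPLE_FOO_V(0x9)) == 0x9 */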
void
}
pci_read_config_dword(hw->pdev, base + PCI_VPD_DATA, data);
- *data = le32_to_cpu(*data);
+ *data = le32_to_cpu(*(__le32 *)data);
return 0;
}
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
- if (csio_rd_reg32(hw, SF_OP) & SF_BUSY)
+ if (csio_rd_reg32(hw, SF_OP_A) & SF_BUSY_F)
return -EBUSY;
- cont = cont ? SF_CONT : 0;
- lock = lock ? SF_LOCK : 0;
-
- csio_wr_reg32(hw, lock | cont | BYTECNT(byte_cnt - 1), SF_OP);
- ret = csio_hw_wait_op_done_val(hw, SF_OP, SF_BUSY, 0, SF_ATTEMPTS,
- 10, NULL);
+ csio_wr_reg32(hw, SF_LOCK_V(lock) | SF_CONT_V(cont) |
+ BYTECNT_V(byte_cnt - 1), SF_OP_A);
+ ret = csio_hw_wait_op_done_val(hw, SF_OP_A, SF_BUSY_F, 0, SF_ATTEMPTS,
+ 10, NULL);
if (!ret)
- *valp = csio_rd_reg32(hw, SF_DATA);
+ *valp = csio_rd_reg32(hw, SF_DATA_A);
return ret;
}
{
if (!byte_cnt || byte_cnt > 4)
return -EINVAL;
- if (csio_rd_reg32(hw, SF_OP) & SF_BUSY)
+ if (csio_rd_reg32(hw, SF_OP_A) & SF_BUSY_F)
return -EBUSY;
- cont = cont ? SF_CONT : 0;
- lock = lock ? SF_LOCK : 0;
+ csio_wr_reg32(hw, val, SF_DATA_A);
+ csio_wr_reg32(hw, SF_CONT_V(cont) | BYTECNT_V(byte_cnt - 1) |
+ OP_V(1) | SF_LOCK_V(lock), SF_OP_A);
- csio_wr_reg32(hw, val, SF_DATA);
- csio_wr_reg32(hw, cont | BYTECNT(byte_cnt - 1) | OP_WR | lock, SF_OP);
-
- return csio_hw_wait_op_done_val(hw, SF_OP, SF_BUSY, 0, SF_ATTEMPTS,
+ return csio_hw_wait_op_done_val(hw, SF_OP_A, SF_BUSY_F, 0, SF_ATTEMPTS,
10, NULL);
}
for ( ; nwords; nwords--, data++) {
ret = csio_hw_sf1_read(hw, 4, nwords > 1, nwords == 1, data);
if (nwords == 1)
- csio_wr_reg32(hw, 0, SF_OP); /* unlock SF */
+ csio_wr_reg32(hw, 0, SF_OP_A); /* unlock SF */
if (ret)
return ret;
if (byte_oriented)
- *data = htonl(*data);
+ *data = (__force __u32) htonl(*data);
}
return 0;
}
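The serial-flash primitives above share one protocol: program SF_OP_A with
the lock/continue bits and a byte count (plus OP_V(1) for a write), poll
SF_BUSY_F until the engine goes idle, then move data through SF_DATA_A;
writing 0 to SF_OP_A drops the flash lock. A sketch of a typical locked
read built from them (assuming the SF_RD_STATUS opcode from the flash
command enum):

	/* Read the flash status register while holding the SF lock. */
	u32 status;
	int ret = csio_hw_sf1_write(hw, 1, 1, 1, SF_RD_STATUS);
	if (!ret)
		ret = csio_hw_sf1_read(hw, 1, 0, 1, &status);
	csio_wr_reg32(hw, 0, SF_OP_A);	/* unlock SF */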
if (ret)
goto unlock;
- csio_wr_reg32(hw, 0, SF_OP); /* unlock SF */
+ csio_wr_reg32(hw, 0, SF_OP_A); /* unlock SF */
/* Read the page to verify the write succeeded */
ret = csio_hw_read_flash(hw, addr & ~0xff, ARRAY_SIZE(buf), buf, 1);
return 0;
unlock:
- csio_wr_reg32(hw, 0, SF_OP); /* unlock SF */
+ csio_wr_reg32(hw, 0, SF_OP_A); /* unlock SF */
return ret;
}
if (ret)
csio_err(hw, "erase of flash sector %d failed, error %d\n",
start, ret);
- csio_wr_reg32(hw, 0, SF_OP); /* unlock SF */
+ csio_wr_reg32(hw, 0, SF_OP_A); /* unlock SF */
return 0;
}
vers, 0);
}
-/*
- * csio_hw_check_fw_version - check if the FW is compatible with
- * this driver
- * @hw: HW module
- *
- * Checks if an adapter's FW is compatible with the driver. Returns 0
- * if there's exact match, a negative error if the version could not be
- * read or there's a major/minor version mismatch/minor.
- */
-static int
-csio_hw_check_fw_version(struct csio_hw *hw)
-{
- int ret, major, minor, micro;
-
- ret = csio_hw_get_fw_version(hw, &hw->fwrev);
- if (!ret)
- ret = csio_hw_get_tp_version(hw, &hw->tp_vers);
- if (ret)
- return ret;
-
- major = FW_HDR_FW_VER_MAJOR_G(hw->fwrev);
- minor = FW_HDR_FW_VER_MINOR_G(hw->fwrev);
- micro = FW_HDR_FW_VER_MICRO_G(hw->fwrev);
-
- if (major != FW_VERSION_MAJOR(hw)) { /* major mismatch - fail */
- csio_err(hw, "card FW has major version %u, driver wants %u\n",
- major, FW_VERSION_MAJOR(hw));
- return -EINVAL;
- }
-
- if (minor == FW_VERSION_MINOR(hw) && micro == FW_VERSION_MICRO(hw))
- return 0; /* perfect match */
-
- /* Minor/micro version mismatch */
- return -EINVAL;
-}
-
/*
* csio_hw_fw_dload - download firmware.
* @hw: HW module
ret = csio_hw_sf1_write(hw, 1, 1, 0, SF_RD_ID);
if (!ret)
ret = csio_hw_sf1_read(hw, 3, 0, 1, &info);
- csio_wr_reg32(hw, 0, SF_OP); /* unlock SF */
+ csio_wr_reg32(hw, 0, SF_OP_A); /* unlock SF */
if (ret != 0)
return ret;
uint32_t reg;
int cnt = 6;
- while (((reg = csio_rd_reg32(hw, PL_WHOAMI)) == 0xFFFFFFFF) &&
- (--cnt != 0))
+ while (((reg = csio_rd_reg32(hw, PL_WHOAMI_A)) == 0xFFFFFFFF) &&
+ (--cnt != 0))
mdelay(100);
- if ((cnt == 0) && (((int32_t)(SOURCEPF_GET(reg)) < 0) ||
- (SOURCEPF_GET(reg) >= CSIO_MAX_PFN))) {
+ if ((cnt == 0) && (((int32_t)(SOURCEPF_G(reg)) < 0) ||
+ (SOURCEPF_G(reg) >= CSIO_MAX_PFN))) {
csio_err(hw, "PL_WHOAMI returned 0x%x, cnt:%d\n", reg, cnt);
return -EIO;
}
- hw->pfn = SOURCEPF_GET(reg);
+ hw->pfn = SOURCEPF_G(reg);
return 0;
}
* timeout ... and then retry if we haven't exhausted
* our retries ...
*/
- pcie_fw = csio_rd_reg32(hw, PCIE_FW);
- if (!(pcie_fw & (PCIE_FW_ERR|PCIE_FW_INIT))) {
+ pcie_fw = csio_rd_reg32(hw, PCIE_FW_A);
+ if (!(pcie_fw & (PCIE_FW_ERR_F|PCIE_FW_INIT_F))) {
if (waiting <= 0) {
if (retries-- > 0)
goto retry;
* report errors preferentially.
*/
if (state) {
- if (pcie_fw & PCIE_FW_ERR) {
+ if (pcie_fw & PCIE_FW_ERR_F) {
*state = CSIO_DEV_STATE_ERR;
rv = -ETIMEDOUT;
- } else if (pcie_fw & PCIE_FW_INIT)
+ } else if (pcie_fw & PCIE_FW_INIT_F)
*state = CSIO_DEV_STATE_INIT;
}
* there's not a valid Master PF, grab its identity
* for our caller.
*/
- if (mpfn == PCIE_FW_MASTER_MASK &&
- (pcie_fw & PCIE_FW_MASTER_VLD))
- mpfn = PCIE_FW_MASTER_GET(pcie_fw);
+ if (mpfn == PCIE_FW_MASTER_M &&
+ (pcie_fw & PCIE_FW_MASTER_VLD_F))
+ mpfn = PCIE_FW_MASTER_G(pcie_fw);
break;
}
hw->flags &= ~CSIO_HWF_MASTER;
if (!fw_rst) {
/* PIO reset */
- csio_wr_reg32(hw, PIORSTMODE | PIORST, PL_RST);
+ csio_wr_reg32(hw, PIORSTMODE_F | PIORST_F, PL_RST_A);
mdelay(2000);
return 0;
}
}
csio_mb_reset(hw, mbp, CSIO_MB_DEFAULT_TMO,
- PIORSTMODE | PIORST, 0, NULL);
+ PIORSTMODE_F | PIORST_F, 0, NULL);
if (csio_mb_issue(hw, mbp)) {
csio_err(hw, "Issue of RESET command failed.n");
* If a legitimate mailbox is provided, issue a RESET command
* with a HALT indication.
*/
- if (mbox <= PCIE_FW_MASTER_MASK) {
+ if (mbox <= PCIE_FW_MASTER_M) {
struct csio_mb *mbp;
mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
}
csio_mb_reset(hw, mbp, CSIO_MB_DEFAULT_TMO,
- PIORSTMODE | PIORST, FW_RESET_CMD_HALT_F,
+ PIORSTMODE_F | PIORST_F, FW_RESET_CMD_HALT_F,
NULL);
if (csio_mb_issue(hw, mbp)) {
* rather than a RESET ... if it's new enough to understand that ...
*/
if (retval == 0 || force) {
- csio_set_reg_field(hw, CIM_BOOT_CFG, UPCRST, UPCRST);
- csio_set_reg_field(hw, PCIE_FW, PCIE_FW_HALT, PCIE_FW_HALT);
+ csio_set_reg_field(hw, CIM_BOOT_CFG_A, UPCRST_F, UPCRST_F);
+ csio_set_reg_field(hw, PCIE_FW_A, PCIE_FW_HALT_F,
+ PCIE_FW_HALT_F);
}
/*
* doing it automatically, we need to clear the PCIE_FW.HALT
* bit.
*/
- csio_set_reg_field(hw, PCIE_FW, PCIE_FW_HALT, 0);
+ csio_set_reg_field(hw, PCIE_FW_A, PCIE_FW_HALT_F, 0);
/*
* If we've been given a valid mailbox, first try to get the
* valid mailbox or the RESET command failed, fall back to
* hitting the chip with a hammer.
*/
- if (mbox <= PCIE_FW_MASTER_MASK) {
- csio_set_reg_field(hw, CIM_BOOT_CFG, UPCRST, 0);
+ if (mbox <= PCIE_FW_MASTER_M) {
+ csio_set_reg_field(hw, CIM_BOOT_CFG_A, UPCRST_F, 0);
msleep(100);
if (csio_do_reset(hw, true) == 0)
return 0;
}
- csio_wr_reg32(hw, PIORSTMODE | PIORST, PL_RST);
+ csio_wr_reg32(hw, PIORSTMODE_F | PIORST_F, PL_RST_A);
msleep(2000);
} else {
int ms;
- csio_set_reg_field(hw, CIM_BOOT_CFG, UPCRST, 0);
+ csio_set_reg_field(hw, CIM_BOOT_CFG_A, UPCRST_F, 0);
for (ms = 0; ms < FW_CMD_MAX_TIMEOUT; ) {
- if (!(csio_rd_reg32(hw, PCIE_FW) & PCIE_FW_HALT))
+ if (!(csio_rd_reg32(hw, PCIE_FW_A) & PCIE_FW_HALT_F))
return 0;
msleep(100);
ms += 100;
uint32_t *cfg_data;
int value_to_add = 0;
- if (request_firmware(&cf, CSIO_CF_FNAME(hw), dev) < 0) {
+ if (request_firmware(&cf, FW_CFG_NAME_T5, dev) < 0) {
csio_err(hw, "could not find config file %s, err: %d\n",
- CSIO_CF_FNAME(hw), ret);
+ FW_CFG_NAME_T5, ret);
return -ENOENT;
}
}
if (ret == 0) {
csio_info(hw, "config file upgraded to %s\n",
- CSIO_CF_FNAME(hw));
- snprintf(path, 64, "%s%s", "/lib/firmware/", CSIO_CF_FNAME(hw));
+ FW_CFG_NAME_T5);
+ snprintf(path, 64, "%s%s", "/lib/firmware/", FW_CFG_NAME_T5);
}
leave:
return rv;
}
+/* Is the given firmware API compatible with the one the driver was compiled
+ * with?
+ */
+static int fw_compatible(const struct fw_hdr *hdr1, const struct fw_hdr *hdr2)
+{
+
+ /* short circuit if it's the exact same firmware version */
+ if (hdr1->chip == hdr2->chip && hdr1->fw_ver == hdr2->fw_ver)
+ return 1;
+
+#define SAME_INTF(x) (hdr1->intfver_##x == hdr2->intfver_##x)
+ if (hdr1->chip == hdr2->chip && SAME_INTF(nic) && SAME_INTF(vnic) &&
+ SAME_INTF(ri) && SAME_INTF(iscsi) && SAME_INTF(fcoe))
+ return 1;
+#undef SAME_INTF
+
+ return 0;
+}
+
+/* The firmware in the filesystem is usable, but should it be installed?
+ * When this routine decides that the filesystem firmware should be
+ * installed, it logs a detailed explanation of why.
+ */
+static int csio_should_install_fs_fw(struct csio_hw *hw, int card_fw_usable,
+ int k, int c)
+{
+ const char *reason;
+
+ if (!card_fw_usable) {
+ reason = "incompatible or unusable";
+ goto install;
+ }
+
+ if (k > c) {
+ reason = "older than the version supported with this driver";
+ goto install;
+ }
+
+ return 0;
+
+install:
+ csio_err(hw, "firmware on card (%u.%u.%u.%u) is %s, "
+ "installing firmware %u.%u.%u.%u on card.\n",
+ FW_HDR_FW_VER_MAJOR_G(c), FW_HDR_FW_VER_MINOR_G(c),
+ FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c), reason,
+ FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),
+ FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));
+
+ return 1;
+}
+
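The k > c test above compares whole packed version words. Assuming the
usual t4fw_api.h layout (major in bits 31:24, then minor, micro and build
in descending bytes), a plain unsigned compare orders releases correctly:

	/*
	 * 1.11.27.0 packs to 0x010b1b00
	 * 1.11.26.0 packs to 0x010b1a00, so 0x010b1b00 > 0x010b1a00
	 * 2.0.0.0   packs to 0x02000000 and beats any 1.x.y.z
	 */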
+static struct fw_info fw_info_array[] = {
+ {
+ .chip = CHELSIO_T5,
+ .fs_name = FW_CFG_NAME_T5,
+ .fw_mod_name = FW_FNAME_T5,
+ .fw_hdr = {
+ .chip = FW_HDR_CHIP_T5,
+ .fw_ver = __cpu_to_be32(FW_VERSION(T5)),
+ .intfver_nic = FW_INTFVER(T5, NIC),
+ .intfver_vnic = FW_INTFVER(T5, VNIC),
+ .intfver_ri = FW_INTFVER(T5, RI),
+ .intfver_iscsi = FW_INTFVER(T5, ISCSI),
+ .intfver_fcoe = FW_INTFVER(T5, FCOE),
+ },
+ }
+};
+
+static struct fw_info *find_fw_info(int chip)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(fw_info_array); i++) {
+ if (fw_info_array[i].chip == chip)
+ return &fw_info_array[i];
+ }
+ return NULL;
+}
+
+static int csio_hw_prep_fw(struct csio_hw *hw, struct fw_info *fw_info,
+ const u8 *fw_data, unsigned int fw_size,
+ struct fw_hdr *card_fw, enum csio_dev_state state,
+ int *reset)
+{
+ int ret, card_fw_usable, fs_fw_usable;
+ const struct fw_hdr *fs_fw;
+ const struct fw_hdr *drv_fw;
+
+ drv_fw = &fw_info->fw_hdr;
+
+ /* Read the header of the firmware on the card */
+ ret = csio_hw_read_flash(hw, FLASH_FW_START,
+ sizeof(*card_fw) / sizeof(uint32_t),
+ (uint32_t *)card_fw, 1);
+ if (ret == 0) {
+ card_fw_usable = fw_compatible(drv_fw, (const void *)card_fw);
+ } else {
+ csio_err(hw,
+ "Unable to read card's firmware header: %d\n", ret);
+ card_fw_usable = 0;
+ }
+
+ if (fw_data != NULL) {
+ fs_fw = (const void *)fw_data;
+ fs_fw_usable = fw_compatible(drv_fw, fs_fw);
+ } else {
+ fs_fw = NULL;
+ fs_fw_usable = 0;
+ }
+
+ if (card_fw_usable && card_fw->fw_ver == drv_fw->fw_ver &&
+ (!fs_fw_usable || fs_fw->fw_ver == drv_fw->fw_ver)) {
+ /* Common case: the firmware on the card is an exact match and
+ * the filesystem one is an exact match too, or the filesystem
+ * one is absent/incompatible.
+ */
+ } else if (fs_fw_usable && state == CSIO_DEV_STATE_UNINIT &&
+ csio_should_install_fs_fw(hw, card_fw_usable,
+ be32_to_cpu(fs_fw->fw_ver),
+ be32_to_cpu(card_fw->fw_ver))) {
+ ret = csio_hw_fw_upgrade(hw, hw->pfn, fw_data,
+ fw_size, 0);
+ if (ret != 0) {
+ csio_err(hw,
+ "failed to install firmware: %d\n", ret);
+ goto bye;
+ }
+
+ /* Installed successfully, update the cached header too. */
+ memcpy(card_fw, fs_fw, sizeof(*card_fw));
+ card_fw_usable = 1;
+ *reset = 0; /* already reset as part of load_fw */
+ }
+
+ if (!card_fw_usable) {
+ uint32_t d, c, k;
+
+ d = be32_to_cpu(drv_fw->fw_ver);
+ c = be32_to_cpu(card_fw->fw_ver);
+ k = fs_fw ? be32_to_cpu(fs_fw->fw_ver) : 0;
+
+ csio_err(hw, "Cannot find a usable firmware: "
+ "chip state %d, "
+ "driver compiled with %d.%d.%d.%d, "
+ "card has %d.%d.%d.%d, filesystem has %d.%d.%d.%d\n",
+ state,
+ FW_HDR_FW_VER_MAJOR_G(d), FW_HDR_FW_VER_MINOR_G(d),
+ FW_HDR_FW_VER_MICRO_G(d), FW_HDR_FW_VER_BUILD_G(d),
+ FW_HDR_FW_VER_MAJOR_G(c), FW_HDR_FW_VER_MINOR_G(c),
+ FW_HDR_FW_VER_MICRO_G(c), FW_HDR_FW_VER_BUILD_G(c),
+ FW_HDR_FW_VER_MAJOR_G(k), FW_HDR_FW_VER_MINOR_G(k),
+ FW_HDR_FW_VER_MICRO_G(k), FW_HDR_FW_VER_BUILD_G(k));
+ ret = -EINVAL;
+ goto bye;
+ }
+
+ /* We're using whatever's on the card and it's known to be good. */
+ hw->fwrev = be32_to_cpu(card_fw->fw_ver);
+ hw->tp_vers = be32_to_cpu(card_fw->tp_microcode_ver);
+
+bye:
+ return ret;
+}
+
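Taken together, the branches of csio_hw_prep_fw() implement this selection
policy, in order (a summary sketch, not upstream wording):

	/*
	 * 1. Card FW usable and an exact driver match, and the filesystem
	 *    FW absent, unusable or the same version: keep the card FW.
	 * 2. Filesystem FW usable, device still uninitialised, and the card
	 *    FW unusable or older: flash the filesystem FW and use it.
	 * 3. If the card FW is still unusable after that: log all three
	 *    versions (driver, card, filesystem) and fail.
	 */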
/*
 * Returns -EINVAL if the attempt to flash the firmware failed,
 * else returns 0;
 * if the card already has the latest firmware, -ECANCELED is returned.
*/
static int
-csio_hw_flash_fw(struct csio_hw *hw)
+csio_hw_flash_fw(struct csio_hw *hw, int *reset)
{
int ret = -ECANCELED;
const struct firmware *fw;
- const struct fw_hdr *hdr;
- u32 fw_ver;
+ struct fw_info *fw_info;
+ struct fw_hdr *card_fw;
struct pci_dev *pci_dev = hw->pdev;
struct device *dev = &pci_dev->dev;
+ const u8 *fw_data = NULL;
+ unsigned int fw_size = 0;
- if (request_firmware(&fw, CSIO_FW_FNAME(hw), dev) < 0) {
- csio_err(hw, "could not find firmware image %s, err: %d\n",
- CSIO_FW_FNAME(hw), ret);
+ /* This is the firmware whose headers the driver was compiled
+ * against
+ */
+ fw_info = find_fw_info(CHELSIO_CHIP_VERSION(hw->chip_id));
+ if (fw_info == NULL) {
+ csio_err(hw,
+ "unable to get firmware info for chip %d.\n",
+ CHELSIO_CHIP_VERSION(hw->chip_id));
return -EINVAL;
}
- hdr = (const struct fw_hdr *)fw->data;
- fw_ver = ntohl(hdr->fw_ver);
- if (FW_HDR_FW_VER_MAJOR_G(fw_ver) != FW_VERSION_MAJOR(hw))
- return -EINVAL; /* wrong major version, won't do */
+ if (request_firmware(&fw, FW_FNAME_T5, dev) < 0) {
+ csio_err(hw, "could not find firmware image %s, err: %d\n",
+ FW_FNAME_T5, ret);
+ return -EINVAL;
+ }
- /*
- * If the flash FW is unusable or we found something newer, load it.
+ /* allocate memory to read the header of the firmware on the
+ * card
*/
- if (FW_HDR_FW_VER_MAJOR_G(hw->fwrev) != FW_VERSION_MAJOR(hw) ||
- fw_ver > hw->fwrev) {
- ret = csio_hw_fw_upgrade(hw, hw->pfn, fw->data, fw->size,
- /*force=*/false);
- if (!ret)
- csio_info(hw,
- "firmware upgraded to version %pI4 from %s\n",
- &hdr->fw_ver, CSIO_FW_FNAME(hw));
- else
- csio_err(hw, "firmware upgrade failed! err=%d\n", ret);
- } else
- ret = -EINVAL;
+ card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
+ if (!card_fw) {
+ release_firmware(fw);
+ return -ENOMEM;
+ }
+
+ fw_data = fw->data;
+ fw_size = fw->size;
- release_firmware(fw);
+ /* upgrade FW logic */
+ ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw,
+ hw->fw_state, reset);
+ /* Cleaning up */
+ if (fw != NULL)
+ release_firmware(fw);
+ kfree(card_fw);
return ret;
}
-
/*
* csio_hw_configure - Configure HW
* @hw - HW module
}
/* HW version */
- hw->chip_ver = (char)csio_rd_reg32(hw, PL_REV);
+ hw->chip_ver = (char)csio_rd_reg32(hw, PL_REV_A);
/* Needed for FW download */
rv = csio_hw_get_flash_params(hw);
if (rv != 0)
goto out;
+ csio_hw_get_fw_version(hw, &hw->fwrev);
+ csio_hw_get_tp_version(hw, &hw->tp_vers);
if (csio_is_hw_master(hw) && hw->fw_state != CSIO_DEV_STATE_INIT) {
- rv = csio_hw_check_fw_version(hw);
- if (rv == -EINVAL) {
/* Do firmware update */
- spin_unlock_irq(&hw->lock);
- rv = csio_hw_flash_fw(hw);
- spin_lock_irq(&hw->lock);
+ spin_unlock_irq(&hw->lock);
+ rv = csio_hw_flash_fw(hw, &reset);
+ spin_lock_irq(&hw->lock);
+
+ if (rv != 0)
+ goto out;
- if (rv == 0) {
- reset = 0;
- /*
- * Note that the chip was reset as part of the
- * firmware upgrade so we don't reset it again
- * below and grab the new firmware version.
- */
- rv = csio_hw_check_fw_version(hw);
- }
- }
/*
* If the firmware doesn't support Configuration
* Files, use the old Driver-based, hard-wired
return;
}
-#define PF_INTR_MASK (PFSW | PFCIM)
+#define PF_INTR_MASK (PFSW_F | PFCIM_F)
/*
* csio_hw_intr_enable - Enable HW interrupts
csio_hw_intr_enable(struct csio_hw *hw)
{
uint16_t vec = (uint16_t)csio_get_mb_intr_idx(csio_hw_to_mbm(hw));
- uint32_t pf = SOURCEPF_GET(csio_rd_reg32(hw, PL_WHOAMI));
- uint32_t pl = csio_rd_reg32(hw, PL_INT_ENABLE);
+ uint32_t pf = SOURCEPF_G(csio_rd_reg32(hw, PL_WHOAMI_A));
+ uint32_t pl = csio_rd_reg32(hw, PL_INT_ENABLE_A);
/*
* Set aivec for MSI/MSIX. PCIE_PF_CFG.INTXType is set up
* by FW, so do nothing for INTX.
*/
if (hw->intr_mode == CSIO_IM_MSIX)
- csio_set_reg_field(hw, MYPF_REG(PCIE_PF_CFG),
- AIVEC(AIVEC_MASK), vec);
+ csio_set_reg_field(hw, MYPF_REG(PCIE_PF_CFG_A),
+ AIVEC_V(AIVEC_M), vec);
else if (hw->intr_mode == CSIO_IM_MSI)
- csio_set_reg_field(hw, MYPF_REG(PCIE_PF_CFG),
- AIVEC(AIVEC_MASK), 0);
+ csio_set_reg_field(hw, MYPF_REG(PCIE_PF_CFG_A),
+ AIVEC_V(AIVEC_M), 0);
- csio_wr_reg32(hw, PF_INTR_MASK, MYPF_REG(PL_PF_INT_ENABLE));
+ csio_wr_reg32(hw, PF_INTR_MASK, MYPF_REG(PL_PF_INT_ENABLE_A));
/* Turn on MB interrupts - this will internally flush PIO as well */
csio_mb_intr_enable(hw);
/*
* Disable the Serial FLASH interrupt, if enabled!
*/
- pl &= (~SF);
- csio_wr_reg32(hw, pl, PL_INT_ENABLE);
+ pl &= (~SF_F);
+ csio_wr_reg32(hw, pl, PL_INT_ENABLE_A);
- csio_wr_reg32(hw, ERR_CPL_EXCEED_IQE_SIZE |
- EGRESS_SIZE_ERR | ERR_INVALID_CIDX_INC |
- ERR_CPL_OPCODE_0 | ERR_DROPPED_DB |
- ERR_DATA_CPL_ON_HIGH_QID1 |
- ERR_DATA_CPL_ON_HIGH_QID0 | ERR_BAD_DB_PIDX3 |
- ERR_BAD_DB_PIDX2 | ERR_BAD_DB_PIDX1 |
- ERR_BAD_DB_PIDX0 | ERR_ING_CTXT_PRIO |
- ERR_EGR_CTXT_PRIO | INGRESS_SIZE_ERR,
- SGE_INT_ENABLE3);
- csio_set_reg_field(hw, PL_INT_MAP0, 0, 1 << pf);
+ csio_wr_reg32(hw, ERR_CPL_EXCEED_IQE_SIZE_F |
+ EGRESS_SIZE_ERR_F | ERR_INVALID_CIDX_INC_F |
+ ERR_CPL_OPCODE_0_F | ERR_DROPPED_DB_F |
+ ERR_DATA_CPL_ON_HIGH_QID1_F |
+ ERR_DATA_CPL_ON_HIGH_QID0_F | ERR_BAD_DB_PIDX3_F |
+ ERR_BAD_DB_PIDX2_F | ERR_BAD_DB_PIDX1_F |
+ ERR_BAD_DB_PIDX0_F | ERR_ING_CTXT_PRIO_F |
+ ERR_EGR_CTXT_PRIO_F | INGRESS_SIZE_ERR_F,
+ SGE_INT_ENABLE3_A);
+ csio_set_reg_field(hw, PL_INT_MAP0_A, 0, 1 << pf);
}
hw->flags |= CSIO_HWF_HW_INTR_ENABLED;
void
csio_hw_intr_disable(struct csio_hw *hw)
{
- uint32_t pf = SOURCEPF_GET(csio_rd_reg32(hw, PL_WHOAMI));
+ uint32_t pf = SOURCEPF_G(csio_rd_reg32(hw, PL_WHOAMI_A));
if (!(hw->flags & CSIO_HWF_HW_INTR_ENABLED))
return;
hw->flags &= ~CSIO_HWF_HW_INTR_ENABLED;
- csio_wr_reg32(hw, 0, MYPF_REG(PL_PF_INT_ENABLE));
+ csio_wr_reg32(hw, 0, MYPF_REG(PL_PF_INT_ENABLE_A));
if (csio_is_hw_master(hw))
- csio_set_reg_field(hw, PL_INT_MAP0, 1 << pf, 0);
+ csio_set_reg_field(hw, PL_INT_MAP0_A, 1 << pf, 0);
/* Turn off MB interrupts */
csio_mb_intr_disable(hw);
void
csio_hw_fatal_err(struct csio_hw *hw)
{
- csio_set_reg_field(hw, SGE_CONTROL, GLOBALENABLE, 0);
+ csio_set_reg_field(hw, SGE_CONTROL_A, GLOBALENABLE_F, 0);
csio_hw_intr_disable(hw);
/* Do not reset HW, we may need FW state for debugging */
* register directly.
*/
csio_err(hw, "Resetting HW and waiting 2 seconds...\n");
- csio_wr_reg32(hw, PIORSTMODE | PIORST, PL_RST);
+ csio_wr_reg32(hw, PIORSTMODE_F | PIORST_F, PL_RST_A);
mdelay(2000);
break;
{
static struct intr_info tp_intr_info[] = {
{ 0x3fffffff, "TP parity error", -1, 1 },
- { FLMTXFLSTEMPTY, "TP out of Tx pages", -1, 1 },
+ { FLMTXFLSTEMPTY_F, "TP out of Tx pages", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, TP_INT_CAUSE, tp_intr_info))
+ if (csio_handle_intr_status(hw, TP_INT_CAUSE_A, tp_intr_info))
csio_hw_fatal_err(hw);
}
uint64_t v;
static struct intr_info sge_intr_info[] = {
- { ERR_CPL_EXCEED_IQE_SIZE,
+ { ERR_CPL_EXCEED_IQE_SIZE_F,
"SGE received CPL exceeding IQE size", -1, 1 },
- { ERR_INVALID_CIDX_INC,
+ { ERR_INVALID_CIDX_INC_F,
"SGE GTS CIDX increment too large", -1, 0 },
- { ERR_CPL_OPCODE_0, "SGE received 0-length CPL", -1, 0 },
- { ERR_DROPPED_DB, "SGE doorbell dropped", -1, 0 },
- { ERR_DATA_CPL_ON_HIGH_QID1 | ERR_DATA_CPL_ON_HIGH_QID0,
+ { ERR_CPL_OPCODE_0_F, "SGE received 0-length CPL", -1, 0 },
+ { ERR_DROPPED_DB_F, "SGE doorbell dropped", -1, 0 },
+ { ERR_DATA_CPL_ON_HIGH_QID1_F | ERR_DATA_CPL_ON_HIGH_QID0_F,
"SGE IQID > 1023 received CPL for FL", -1, 0 },
- { ERR_BAD_DB_PIDX3, "SGE DBP 3 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX3_F, "SGE DBP 3 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX2, "SGE DBP 2 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX2_F, "SGE DBP 2 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX1, "SGE DBP 1 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX1_F, "SGE DBP 1 pidx increment too large", -1,
0 },
- { ERR_BAD_DB_PIDX0, "SGE DBP 0 pidx increment too large", -1,
+ { ERR_BAD_DB_PIDX0_F, "SGE DBP 0 pidx increment too large", -1,
0 },
- { ERR_ING_CTXT_PRIO,
+ { ERR_ING_CTXT_PRIO_F,
"SGE too many priority ingress contexts", -1, 0 },
- { ERR_EGR_CTXT_PRIO,
+ { ERR_EGR_CTXT_PRIO_F,
"SGE too many priority egress contexts", -1, 0 },
- { INGRESS_SIZE_ERR, "SGE illegal ingress QID", -1, 0 },
- { EGRESS_SIZE_ERR, "SGE illegal egress QID", -1, 0 },
+ { INGRESS_SIZE_ERR_F, "SGE illegal ingress QID", -1, 0 },
+ { EGRESS_SIZE_ERR_F, "SGE illegal egress QID", -1, 0 },
{ 0, NULL, 0, 0 }
};
- v = (uint64_t)csio_rd_reg32(hw, SGE_INT_CAUSE1) |
- ((uint64_t)csio_rd_reg32(hw, SGE_INT_CAUSE2) << 32);
+ v = (uint64_t)csio_rd_reg32(hw, SGE_INT_CAUSE1_A) |
+ ((uint64_t)csio_rd_reg32(hw, SGE_INT_CAUSE2_A) << 32);
if (v) {
csio_fatal(hw, "SGE parity error (%#llx)\n",
(unsigned long long)v);
csio_wr_reg32(hw, (uint32_t)(v & 0xFFFFFFFF),
- SGE_INT_CAUSE1);
- csio_wr_reg32(hw, (uint32_t)(v >> 32), SGE_INT_CAUSE2);
+ SGE_INT_CAUSE1_A);
+ csio_wr_reg32(hw, (uint32_t)(v >> 32), SGE_INT_CAUSE2_A);
}
- v |= csio_handle_intr_status(hw, SGE_INT_CAUSE3, sge_intr_info);
+ v |= csio_handle_intr_status(hw, SGE_INT_CAUSE3_A, sge_intr_info);
- if (csio_handle_intr_status(hw, SGE_INT_CAUSE3, sge_intr_info) ||
+ if (csio_handle_intr_status(hw, SGE_INT_CAUSE3_A, sge_intr_info) ||
v != 0)
csio_hw_fatal_err(hw);
}
-#define CIM_OBQ_INTR (OBQULP0PARERR | OBQULP1PARERR | OBQULP2PARERR |\
- OBQULP3PARERR | OBQSGEPARERR | OBQNCSIPARERR)
-#define CIM_IBQ_INTR (IBQTP0PARERR | IBQTP1PARERR | IBQULPPARERR |\
- IBQSGEHIPARERR | IBQSGELOPARERR | IBQNCSIPARERR)
+#define CIM_OBQ_INTR (OBQULP0PARERR_F | OBQULP1PARERR_F | OBQULP2PARERR_F |\
+ OBQULP3PARERR_F | OBQSGEPARERR_F | OBQNCSIPARERR_F)
+#define CIM_IBQ_INTR (IBQTP0PARERR_F | IBQTP1PARERR_F | IBQULPPARERR_F |\
+ IBQSGEHIPARERR_F | IBQSGELOPARERR_F | IBQNCSIPARERR_F)
/*
* CIM interrupt handler.
static void csio_cim_intr_handler(struct csio_hw *hw)
{
static struct intr_info cim_intr_info[] = {
- { PREFDROPINT, "CIM control register prefetch drop", -1, 1 },
+ { PREFDROPINT_F, "CIM control register prefetch drop", -1, 1 },
{ CIM_OBQ_INTR, "CIM OBQ parity error", -1, 1 },
{ CIM_IBQ_INTR, "CIM IBQ parity error", -1, 1 },
- { MBUPPARERR, "CIM mailbox uP parity error", -1, 1 },
- { MBHOSTPARERR, "CIM mailbox host parity error", -1, 1 },
- { TIEQINPARERRINT, "CIM TIEQ outgoing parity error", -1, 1 },
- { TIEQOUTPARERRINT, "CIM TIEQ incoming parity error", -1, 1 },
+ { MBUPPARERR_F, "CIM mailbox uP parity error", -1, 1 },
+ { MBHOSTPARERR_F, "CIM mailbox host parity error", -1, 1 },
+ { TIEQINPARERRINT_F, "CIM TIEQ outgoing parity error", -1, 1 },
+ { TIEQOUTPARERRINT_F, "CIM TIEQ incoming parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
static struct intr_info cim_upintr_info[] = {
- { RSVDSPACEINT, "CIM reserved space access", -1, 1 },
- { ILLTRANSINT, "CIM illegal transaction", -1, 1 },
- { ILLWRINT, "CIM illegal write", -1, 1 },
- { ILLRDINT, "CIM illegal read", -1, 1 },
- { ILLRDBEINT, "CIM illegal read BE", -1, 1 },
- { ILLWRBEINT, "CIM illegal write BE", -1, 1 },
- { SGLRDBOOTINT, "CIM single read from boot space", -1, 1 },
- { SGLWRBOOTINT, "CIM single write to boot space", -1, 1 },
- { BLKWRBOOTINT, "CIM block write to boot space", -1, 1 },
- { SGLRDFLASHINT, "CIM single read from flash space", -1, 1 },
- { SGLWRFLASHINT, "CIM single write to flash space", -1, 1 },
- { BLKWRFLASHINT, "CIM block write to flash space", -1, 1 },
- { SGLRDEEPROMINT, "CIM single EEPROM read", -1, 1 },
- { SGLWREEPROMINT, "CIM single EEPROM write", -1, 1 },
- { BLKRDEEPROMINT, "CIM block EEPROM read", -1, 1 },
- { BLKWREEPROMINT, "CIM block EEPROM write", -1, 1 },
- { SGLRDCTLINT , "CIM single read from CTL space", -1, 1 },
- { SGLWRCTLINT , "CIM single write to CTL space", -1, 1 },
- { BLKRDCTLINT , "CIM block read from CTL space", -1, 1 },
- { BLKWRCTLINT , "CIM block write to CTL space", -1, 1 },
- { SGLRDPLINT , "CIM single read from PL space", -1, 1 },
- { SGLWRPLINT , "CIM single write to PL space", -1, 1 },
- { BLKRDPLINT , "CIM block read from PL space", -1, 1 },
- { BLKWRPLINT , "CIM block write to PL space", -1, 1 },
- { REQOVRLOOKUPINT , "CIM request FIFO overwrite", -1, 1 },
- { RSPOVRLOOKUPINT , "CIM response FIFO overwrite", -1, 1 },
- { TIMEOUTINT , "CIM PIF timeout", -1, 1 },
- { TIMEOUTMAINT , "CIM PIF MA timeout", -1, 1 },
+ { RSVDSPACEINT_F, "CIM reserved space access", -1, 1 },
+ { ILLTRANSINT_F, "CIM illegal transaction", -1, 1 },
+ { ILLWRINT_F, "CIM illegal write", -1, 1 },
+ { ILLRDINT_F, "CIM illegal read", -1, 1 },
+ { ILLRDBEINT_F, "CIM illegal read BE", -1, 1 },
+ { ILLWRBEINT_F, "CIM illegal write BE", -1, 1 },
+ { SGLRDBOOTINT_F, "CIM single read from boot space", -1, 1 },
+ { SGLWRBOOTINT_F, "CIM single write to boot space", -1, 1 },
+ { BLKWRBOOTINT_F, "CIM block write to boot space", -1, 1 },
+ { SGLRDFLASHINT_F, "CIM single read from flash space", -1, 1 },
+ { SGLWRFLASHINT_F, "CIM single write to flash space", -1, 1 },
+ { BLKWRFLASHINT_F, "CIM block write to flash space", -1, 1 },
+ { SGLRDEEPROMINT_F, "CIM single EEPROM read", -1, 1 },
+ { SGLWREEPROMINT_F, "CIM single EEPROM write", -1, 1 },
+ { BLKRDEEPROMINT_F, "CIM block EEPROM read", -1, 1 },
+ { BLKWREEPROMINT_F, "CIM block EEPROM write", -1, 1 },
+ { SGLRDCTLINT_F, "CIM single read from CTL space", -1, 1 },
+ { SGLWRCTLINT_F, "CIM single write to CTL space", -1, 1 },
+ { BLKRDCTLINT_F, "CIM block read from CTL space", -1, 1 },
+ { BLKWRCTLINT_F, "CIM block write to CTL space", -1, 1 },
+ { SGLRDPLINT_F, "CIM single read from PL space", -1, 1 },
+ { SGLWRPLINT_F, "CIM single write to PL space", -1, 1 },
+ { BLKRDPLINT_F, "CIM block read from PL space", -1, 1 },
+ { BLKWRPLINT_F, "CIM block write to PL space", -1, 1 },
+ { REQOVRLOOKUPINT_F, "CIM request FIFO overwrite", -1, 1 },
+ { RSPOVRLOOKUPINT_F, "CIM response FIFO overwrite", -1, 1 },
+ { TIMEOUTINT_F, "CIM PIF timeout", -1, 1 },
+ { TIMEOUTMAINT_F, "CIM PIF MA timeout", -1, 1 },
{ 0, NULL, 0, 0 }
};
int fat;
- fat = csio_handle_intr_status(hw, CIM_HOST_INT_CAUSE,
- cim_intr_info) +
- csio_handle_intr_status(hw, CIM_HOST_UPACC_INT_CAUSE,
- cim_upintr_info);
+ fat = csio_handle_intr_status(hw, CIM_HOST_INT_CAUSE_A,
+ cim_intr_info) +
+ csio_handle_intr_status(hw, CIM_HOST_UPACC_INT_CAUSE_A,
+ cim_upintr_info);
if (fat)
csio_hw_fatal_err(hw);
}
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, ULP_RX_INT_CAUSE, ulprx_intr_info))
+ if (csio_handle_intr_status(hw, ULP_RX_INT_CAUSE_A, ulprx_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_ulptx_intr_handler(struct csio_hw *hw)
{
static struct intr_info ulptx_intr_info[] = {
- { PBL_BOUND_ERR_CH3, "ULPTX channel 3 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH3_F, "ULPTX channel 3 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH2, "ULPTX channel 2 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH2_F, "ULPTX channel 2 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH1, "ULPTX channel 1 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH1_F, "ULPTX channel 1 PBL out of bounds", -1,
0 },
- { PBL_BOUND_ERR_CH0, "ULPTX channel 0 PBL out of bounds", -1,
+ { PBL_BOUND_ERR_CH0_F, "ULPTX channel 0 PBL out of bounds", -1,
0 },
{ 0xfffffff, "ULPTX parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, ULP_TX_INT_CAUSE, ulptx_intr_info))
+ if (csio_handle_intr_status(hw, ULP_TX_INT_CAUSE_A, ulptx_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_pmtx_intr_handler(struct csio_hw *hw)
{
static struct intr_info pmtx_intr_info[] = {
- { PCMD_LEN_OVFL0, "PMTX channel 0 pcmd too large", -1, 1 },
- { PCMD_LEN_OVFL1, "PMTX channel 1 pcmd too large", -1, 1 },
- { PCMD_LEN_OVFL2, "PMTX channel 2 pcmd too large", -1, 1 },
- { ZERO_C_CMD_ERROR, "PMTX 0-length pcmd", -1, 1 },
+ { PCMD_LEN_OVFL0_F, "PMTX channel 0 pcmd too large", -1, 1 },
+ { PCMD_LEN_OVFL1_F, "PMTX channel 1 pcmd too large", -1, 1 },
+ { PCMD_LEN_OVFL2_F, "PMTX channel 2 pcmd too large", -1, 1 },
+ { ZERO_C_CMD_ERROR_F, "PMTX 0-length pcmd", -1, 1 },
{ 0xffffff0, "PMTX framing error", -1, 1 },
- { OESPI_PAR_ERROR, "PMTX oespi parity error", -1, 1 },
- { DB_OPTIONS_PAR_ERROR, "PMTX db_options parity error", -1,
+ { OESPI_PAR_ERROR_F, "PMTX oespi parity error", -1, 1 },
+ { DB_OPTIONS_PAR_ERROR_F, "PMTX db_options parity error", -1,
1 },
- { ICSPI_PAR_ERROR, "PMTX icspi parity error", -1, 1 },
- { C_PCMD_PAR_ERROR, "PMTX c_pcmd parity error", -1, 1},
+ { ICSPI_PAR_ERROR_F, "PMTX icspi parity error", -1, 1 },
+ { PMTX_C_PCMD_PAR_ERROR_F, "PMTX c_pcmd parity error", -1, 1},
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, PM_TX_INT_CAUSE, pmtx_intr_info))
+ if (csio_handle_intr_status(hw, PM_TX_INT_CAUSE_A, pmtx_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_pmrx_intr_handler(struct csio_hw *hw)
{
static struct intr_info pmrx_intr_info[] = {
- { ZERO_E_CMD_ERROR, "PMRX 0-length pcmd", -1, 1 },
+ { ZERO_E_CMD_ERROR_F, "PMRX 0-length pcmd", -1, 1 },
{ 0x3ffff0, "PMRX framing error", -1, 1 },
- { OCSPI_PAR_ERROR, "PMRX ocspi parity error", -1, 1 },
- { DB_OPTIONS_PAR_ERROR, "PMRX db_options parity error", -1,
+ { OCSPI_PAR_ERROR_F, "PMRX ocspi parity error", -1, 1 },
+ { DB_OPTIONS_PAR_ERROR_F, "PMRX db_options parity error", -1,
1 },
- { IESPI_PAR_ERROR, "PMRX iespi parity error", -1, 1 },
- { E_PCMD_PAR_ERROR, "PMRX e_pcmd parity error", -1, 1},
+ { IESPI_PAR_ERROR_F, "PMRX iespi parity error", -1, 1 },
+ { PMRX_E_PCMD_PAR_ERROR_F, "PMRX e_pcmd parity error", -1, 1},
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, PM_RX_INT_CAUSE, pmrx_intr_info))
+ if (csio_handle_intr_status(hw, PM_RX_INT_CAUSE_A, pmrx_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_cplsw_intr_handler(struct csio_hw *hw)
{
static struct intr_info cplsw_intr_info[] = {
- { CIM_OP_MAP_PERR, "CPLSW CIM op_map parity error", -1, 1 },
- { CIM_OVFL_ERROR, "CPLSW CIM overflow", -1, 1 },
- { TP_FRAMING_ERROR, "CPLSW TP framing error", -1, 1 },
- { SGE_FRAMING_ERROR, "CPLSW SGE framing error", -1, 1 },
- { CIM_FRAMING_ERROR, "CPLSW CIM framing error", -1, 1 },
- { ZERO_SWITCH_ERROR, "CPLSW no-switch error", -1, 1 },
+ { CIM_OP_MAP_PERR_F, "CPLSW CIM op_map parity error", -1, 1 },
+ { CIM_OVFL_ERROR_F, "CPLSW CIM overflow", -1, 1 },
+ { TP_FRAMING_ERROR_F, "CPLSW TP framing error", -1, 1 },
+ { SGE_FRAMING_ERROR_F, "CPLSW SGE framing error", -1, 1 },
+ { CIM_FRAMING_ERROR_F, "CPLSW CIM framing error", -1, 1 },
+ { ZERO_SWITCH_ERROR_F, "CPLSW no-switch error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, CPL_INTR_CAUSE, cplsw_intr_info))
+ if (csio_handle_intr_status(hw, CPL_INTR_CAUSE_A, cplsw_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_le_intr_handler(struct csio_hw *hw)
{
static struct intr_info le_intr_info[] = {
- { LIPMISS, "LE LIP miss", -1, 0 },
- { LIP0, "LE 0 LIP error", -1, 0 },
- { PARITYERR, "LE parity error", -1, 1 },
- { UNKNOWNCMD, "LE unknown command", -1, 1 },
- { REQQPARERR, "LE request queue parity error", -1, 1 },
+ { LIPMISS_F, "LE LIP miss", -1, 0 },
+ { LIP0_F, "LE 0 LIP error", -1, 0 },
+ { PARITYERR_F, "LE parity error", -1, 1 },
+ { UNKNOWNCMD_F, "LE unknown command", -1, 1 },
+ { REQQPARERR_F, "LE request queue parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, LE_DB_INT_CAUSE, le_intr_info))
+ if (csio_handle_intr_status(hw, LE_DB_INT_CAUSE_A, le_intr_info))
csio_hw_fatal_err(hw);
}
{ 0, NULL, 0, 0 }
};
static struct intr_info mps_tx_intr_info[] = {
- { TPFIFO, "MPS Tx TP FIFO parity error", -1, 1 },
- { NCSIFIFO, "MPS Tx NC-SI FIFO parity error", -1, 1 },
- { TXDATAFIFO, "MPS Tx data FIFO parity error", -1, 1 },
- { TXDESCFIFO, "MPS Tx desc FIFO parity error", -1, 1 },
- { BUBBLE, "MPS Tx underflow", -1, 1 },
- { SECNTERR, "MPS Tx SOP/EOP error", -1, 1 },
- { FRMERR, "MPS Tx framing error", -1, 1 },
+ { TPFIFO_V(TPFIFO_M), "MPS Tx TP FIFO parity error", -1, 1 },
+ { NCSIFIFO_F, "MPS Tx NC-SI FIFO parity error", -1, 1 },
+ { TXDATAFIFO_V(TXDATAFIFO_M), "MPS Tx data FIFO parity error",
+ -1, 1 },
+ { TXDESCFIFO_V(TXDESCFIFO_M), "MPS Tx desc FIFO parity error",
+ -1, 1 },
+ { BUBBLE_F, "MPS Tx underflow", -1, 1 },
+ { SECNTERR_F, "MPS Tx SOP/EOP error", -1, 1 },
+ { FRMERR_F, "MPS Tx framing error", -1, 1 },
{ 0, NULL, 0, 0 }
};
static struct intr_info mps_trc_intr_info[] = {
- { FILTMEM, "MPS TRC filter parity error", -1, 1 },
- { PKTFIFO, "MPS TRC packet FIFO parity error", -1, 1 },
- { MISCPERR, "MPS TRC misc parity error", -1, 1 },
+ { FILTMEM_V(FILTMEM_M), "MPS TRC filter parity error", -1, 1 },
+ { PKTFIFO_V(PKTFIFO_M), "MPS TRC packet FIFO parity error",
+ -1, 1 },
+ { MISCPERR_F, "MPS TRC misc parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
static struct intr_info mps_stat_sram_intr_info[] = {
{ 0, NULL, 0, 0 }
};
static struct intr_info mps_cls_intr_info[] = {
- { MATCHSRAM, "MPS match SRAM parity error", -1, 1 },
- { MATCHTCAM, "MPS match TCAM parity error", -1, 1 },
- { HASHSRAM, "MPS hash SRAM parity error", -1, 1 },
+ { MATCHSRAM_F, "MPS match SRAM parity error", -1, 1 },
+ { MATCHTCAM_F, "MPS match TCAM parity error", -1, 1 },
+ { HASHSRAM_F, "MPS hash SRAM parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
int fat;
- fat = csio_handle_intr_status(hw, MPS_RX_PERR_INT_CAUSE,
- mps_rx_intr_info) +
- csio_handle_intr_status(hw, MPS_TX_INT_CAUSE,
- mps_tx_intr_info) +
- csio_handle_intr_status(hw, MPS_TRC_INT_CAUSE,
- mps_trc_intr_info) +
- csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_SRAM,
- mps_stat_sram_intr_info) +
- csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_TX_FIFO,
- mps_stat_tx_intr_info) +
- csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_RX_FIFO,
- mps_stat_rx_intr_info) +
- csio_handle_intr_status(hw, MPS_CLS_INT_CAUSE,
- mps_cls_intr_info);
-
- csio_wr_reg32(hw, 0, MPS_INT_CAUSE);
- csio_rd_reg32(hw, MPS_INT_CAUSE); /* flush */
+ fat = csio_handle_intr_status(hw, MPS_RX_PERR_INT_CAUSE_A,
+ mps_rx_intr_info) +
+ csio_handle_intr_status(hw, MPS_TX_INT_CAUSE_A,
+ mps_tx_intr_info) +
+ csio_handle_intr_status(hw, MPS_TRC_INT_CAUSE_A,
+ mps_trc_intr_info) +
+ csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_SRAM_A,
+ mps_stat_sram_intr_info) +
+ csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_TX_FIFO_A,
+ mps_stat_tx_intr_info) +
+ csio_handle_intr_status(hw, MPS_STAT_PERR_INT_CAUSE_RX_FIFO_A,
+ mps_stat_rx_intr_info) +
+ csio_handle_intr_status(hw, MPS_CLS_INT_CAUSE_A,
+ mps_cls_intr_info);
+
+ csio_wr_reg32(hw, 0, MPS_INT_CAUSE_A);
+ csio_rd_reg32(hw, MPS_INT_CAUSE_A); /* flush */
if (fat)
csio_hw_fatal_err(hw);
}
-#define MEM_INT_MASK (PERR_INT_CAUSE | ECC_CE_INT_CAUSE | ECC_UE_INT_CAUSE)
+#define MEM_INT_MASK (PERR_INT_CAUSE_F | ECC_CE_INT_CAUSE_F | \
+ ECC_UE_INT_CAUSE_F)
/*
* EDC/MC interrupt handler.
unsigned int addr, cnt_addr, v;
if (idx <= MEM_EDC1) {
- addr = EDC_REG(EDC_INT_CAUSE, idx);
- cnt_addr = EDC_REG(EDC_ECC_STATUS, idx);
+ addr = EDC_REG(EDC_INT_CAUSE_A, idx);
+ cnt_addr = EDC_REG(EDC_ECC_STATUS_A, idx);
} else {
- addr = MC_INT_CAUSE;
- cnt_addr = MC_ECC_STATUS;
+ addr = MC_INT_CAUSE_A;
+ cnt_addr = MC_ECC_STATUS_A;
}
v = csio_rd_reg32(hw, addr) & MEM_INT_MASK;
- if (v & PERR_INT_CAUSE)
+ if (v & PERR_INT_CAUSE_F)
csio_fatal(hw, "%s FIFO parity error\n", name[idx]);
- if (v & ECC_CE_INT_CAUSE) {
- uint32_t cnt = ECC_CECNT_GET(csio_rd_reg32(hw, cnt_addr));
+ if (v & ECC_CE_INT_CAUSE_F) {
+ uint32_t cnt = ECC_CECNT_G(csio_rd_reg32(hw, cnt_addr));
- csio_wr_reg32(hw, ECC_CECNT_MASK, cnt_addr);
+ csio_wr_reg32(hw, ECC_CECNT_V(ECC_CECNT_M), cnt_addr);
csio_warn(hw, "%u %s correctable ECC data error%s\n",
cnt, name[idx], cnt > 1 ? "s" : "");
}
- if (v & ECC_UE_INT_CAUSE)
+ if (v & ECC_UE_INT_CAUSE_F)
csio_fatal(hw, "%s uncorrectable ECC data error\n", name[idx]);
csio_wr_reg32(hw, v, addr);
- if (v & (PERR_INT_CAUSE | ECC_UE_INT_CAUSE))
+ if (v & (PERR_INT_CAUSE_F | ECC_UE_INT_CAUSE_F))
csio_hw_fatal_err(hw);
}
*/
static void csio_ma_intr_handler(struct csio_hw *hw)
{
- uint32_t v, status = csio_rd_reg32(hw, MA_INT_CAUSE);
+ uint32_t v, status = csio_rd_reg32(hw, MA_INT_CAUSE_A);
- if (status & MEM_PERR_INT_CAUSE)
+ if (status & MEM_PERR_INT_CAUSE_F)
csio_fatal(hw, "MA parity error, parity status %#x\n",
- csio_rd_reg32(hw, MA_PARITY_ERROR_STATUS));
- if (status & MEM_WRAP_INT_CAUSE) {
- v = csio_rd_reg32(hw, MA_INT_WRAP_STATUS);
+ csio_rd_reg32(hw, MA_PARITY_ERROR_STATUS_A));
+ if (status & MEM_WRAP_INT_CAUSE_F) {
+ v = csio_rd_reg32(hw, MA_INT_WRAP_STATUS_A);
csio_fatal(hw,
"MA address wrap-around error by client %u to address %#x\n",
- MEM_WRAP_CLIENT_NUM_GET(v), MEM_WRAP_ADDRESS_GET(v) << 4);
+ MEM_WRAP_CLIENT_NUM_G(v), MEM_WRAP_ADDRESS_G(v) << 4);
}
- csio_wr_reg32(hw, status, MA_INT_CAUSE);
+ csio_wr_reg32(hw, status, MA_INT_CAUSE_A);
csio_hw_fatal_err(hw);
}
static void csio_smb_intr_handler(struct csio_hw *hw)
{
static struct intr_info smb_intr_info[] = {
- { MSTTXFIFOPARINT, "SMB master Tx FIFO parity error", -1, 1 },
- { MSTRXFIFOPARINT, "SMB master Rx FIFO parity error", -1, 1 },
- { SLVFIFOPARINT, "SMB slave FIFO parity error", -1, 1 },
+ { MSTTXFIFOPARINT_F, "SMB master Tx FIFO parity error", -1, 1 },
+ { MSTRXFIFOPARINT_F, "SMB master Rx FIFO parity error", -1, 1 },
+ { SLVFIFOPARINT_F, "SMB slave FIFO parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, SMB_INT_CAUSE, smb_intr_info))
+ if (csio_handle_intr_status(hw, SMB_INT_CAUSE_A, smb_intr_info))
csio_hw_fatal_err(hw);
}
static void csio_ncsi_intr_handler(struct csio_hw *hw)
{
static struct intr_info ncsi_intr_info[] = {
- { CIM_DM_PRTY_ERR, "NC-SI CIM parity error", -1, 1 },
- { MPS_DM_PRTY_ERR, "NC-SI MPS parity error", -1, 1 },
- { TXFIFO_PRTY_ERR, "NC-SI Tx FIFO parity error", -1, 1 },
- { RXFIFO_PRTY_ERR, "NC-SI Rx FIFO parity error", -1, 1 },
+ { CIM_DM_PRTY_ERR_F, "NC-SI CIM parity error", -1, 1 },
+ { MPS_DM_PRTY_ERR_F, "NC-SI MPS parity error", -1, 1 },
+ { TXFIFO_PRTY_ERR_F, "NC-SI Tx FIFO parity error", -1, 1 },
+ { RXFIFO_PRTY_ERR_F, "NC-SI Rx FIFO parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, NCSI_INT_CAUSE, ncsi_intr_info))
+ if (csio_handle_intr_status(hw, NCSI_INT_CAUSE_A, ncsi_intr_info))
csio_hw_fatal_err(hw);
}
*/
static void csio_xgmac_intr_handler(struct csio_hw *hw, int port)
{
- uint32_t v = csio_rd_reg32(hw, CSIO_MAC_INT_CAUSE_REG(hw, port));
+ uint32_t v = csio_rd_reg32(hw, T5_PORT_REG(port, MAC_PORT_INT_CAUSE_A));
- v &= TXFIFO_PRTY_ERR | RXFIFO_PRTY_ERR;
+ v &= TXFIFO_PRTY_ERR_F | RXFIFO_PRTY_ERR_F;
if (!v)
return;
- if (v & TXFIFO_PRTY_ERR)
+ if (v & TXFIFO_PRTY_ERR_F)
csio_fatal(hw, "XGMAC %d Tx FIFO parity error\n", port);
- if (v & RXFIFO_PRTY_ERR)
+ if (v & RXFIFO_PRTY_ERR_F)
csio_fatal(hw, "XGMAC %d Rx FIFO parity error\n", port);
- csio_wr_reg32(hw, v, CSIO_MAC_INT_CAUSE_REG(hw, port));
+ csio_wr_reg32(hw, v, T5_PORT_REG(port, MAC_PORT_INT_CAUSE_A));
csio_hw_fatal_err(hw);
}
static void csio_pl_intr_handler(struct csio_hw *hw)
{
static struct intr_info pl_intr_info[] = {
- { FATALPERR, "T4 fatal parity error", -1, 1 },
- { PERRVFID, "PL VFID_MAP parity error", -1, 1 },
+ { FATALPERR_F, "T4 fatal parity error", -1, 1 },
+ { PERRVFID_F, "PL VFID_MAP parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
- if (csio_handle_intr_status(hw, PL_PL_INT_CAUSE, pl_intr_info))
+ if (csio_handle_intr_status(hw, PL_PL_INT_CAUSE_A, pl_intr_info))
csio_hw_fatal_err(hw);
}
int
csio_hw_slow_intr_handler(struct csio_hw *hw)
{
- uint32_t cause = csio_rd_reg32(hw, PL_INT_CAUSE);
+ uint32_t cause = csio_rd_reg32(hw, PL_INT_CAUSE_A);
if (!(cause & CSIO_GLBL_INTR_MASK)) {
CSIO_INC_STATS(hw, n_plint_unexp);
CSIO_INC_STATS(hw, n_plint_cnt);
- if (cause & CIM)
+ if (cause & CIM_F)
csio_cim_intr_handler(hw);
- if (cause & MPS)
+ if (cause & MPS_F)
csio_mps_intr_handler(hw);
- if (cause & NCSI)
+ if (cause & NCSI_F)
csio_ncsi_intr_handler(hw);
- if (cause & PL)
+ if (cause & PL_F)
csio_pl_intr_handler(hw);
- if (cause & SMB)
+ if (cause & SMB_F)
csio_smb_intr_handler(hw);
- if (cause & XGMAC0)
+ if (cause & XGMAC0_F)
csio_xgmac_intr_handler(hw, 0);
- if (cause & XGMAC1)
+ if (cause & XGMAC1_F)
csio_xgmac_intr_handler(hw, 1);
- if (cause & XGMAC_KR0)
+ if (cause & XGMAC_KR0_F)
csio_xgmac_intr_handler(hw, 2);
- if (cause & XGMAC_KR1)
+ if (cause & XGMAC_KR1_F)
csio_xgmac_intr_handler(hw, 3);
- if (cause & PCIE)
+ if (cause & PCIE_F)
hw->chip_ops->chip_pcie_intr_handler(hw);
- if (cause & MC)
+ if (cause & MC_F)
csio_mem_intr_handler(hw, MEM_MC);
- if (cause & EDC0)
+ if (cause & EDC0_F)
csio_mem_intr_handler(hw, MEM_EDC0);
- if (cause & EDC1)
+ if (cause & EDC1_F)
csio_mem_intr_handler(hw, MEM_EDC1);
- if (cause & LE)
+ if (cause & LE_F)
csio_le_intr_handler(hw);
- if (cause & TP)
+ if (cause & TP_F)
csio_tp_intr_handler(hw);
- if (cause & MA)
+ if (cause & MA_F)
csio_ma_intr_handler(hw);
- if (cause & PM_TX)
+ if (cause & PM_TX_F)
csio_pmtx_intr_handler(hw);
- if (cause & PM_RX)
+ if (cause & PM_RX_F)
csio_pmrx_intr_handler(hw);
- if (cause & ULP_RX)
+ if (cause & ULP_RX_F)
csio_ulprx_intr_handler(hw);
- if (cause & CPL_SWITCH)
+ if (cause & CPL_SWITCH_F)
csio_cplsw_intr_handler(hw);
- if (cause & SGE)
+ if (cause & SGE_F)
csio_sge_intr_handler(hw);
- if (cause & ULP_TX)
+ if (cause & ULP_TX_F)
csio_ulptx_intr_handler(hw);
/* Clear the interrupts just processed for which we are the master. */
- csio_wr_reg32(hw, cause & CSIO_GLBL_INTR_MASK, PL_INT_CAUSE);
- csio_rd_reg32(hw, PL_INT_CAUSE); /* flush */
+ csio_wr_reg32(hw, cause & CSIO_GLBL_INTR_MASK, PL_INT_CAUSE_A);
+ csio_rd_reg32(hw, PL_INT_CAUSE_A); /* flush */
return 1;
}
prot_type = (dev_id & CSIO_ASIC_DEVID_PROTO_MASK);
adap_type = (dev_id & CSIO_ASIC_DEVID_TYPE_MASK);
- if (prot_type == CSIO_T4_FCOE_ASIC) {
- memcpy(hw->hw_ver,
- csio_t4_fcoe_adapters[adap_type].model_no, 16);
- memcpy(hw->model_desc,
- csio_t4_fcoe_adapters[adap_type].description,
- 32);
- } else if (prot_type == CSIO_T5_FCOE_ASIC) {
+ if (prot_type == CSIO_T5_FCOE_ASIC) {
memcpy(hw->hw_ver,
csio_t5_fcoe_adapters[adap_type].model_no, 16);
memcpy(hw->model_desc,
strcpy(hw->name, CSIO_HW_NAME);
- /* Initialize the HW chip ops with T4/T5 specific ops */
- hw->chip_ops = csio_is_t4(hw->chip_id) ? &t4_ops : &t5_ops;
+ /* Initialize the HW chip ops with T5-specific ops */
+ hw->chip_ops = &t5_ops;
/* Set the model & its description */
#define CSIO_ASIC_DEVID_PROTO_MASK 0xFF00
#define CSIO_ASIC_DEVID_TYPE_MASK 0x00FF
-#define CSIO_GLBL_INTR_MASK (CIM | MPS | PL | PCIE | MC | EDC0 | \
- EDC1 | LE | TP | MA | PM_TX | PM_RX | \
- ULP_RX | CPL_SWITCH | SGE | \
- ULP_TX | SF)
+#define CSIO_GLBL_INTR_MASK (CIM_F | MPS_F | PL_F | PCIE_F | MC_F | \
+ EDC0_F | EDC1_F | LE_F | TP_F | MA_F | \
+ PM_TX_F | PM_RX_F | ULP_RX_F | \
+ CPL_SWITCH_F | SGE_F | ULP_TX_F | SF_F)
/*
* Hard parameters used to initialize the card in the absence of a
SF_ERASE_SECTOR = 0xd8, /* erase sector */
FW_START_SEC = 8, /* first flash sector for FW */
- FW_END_SEC = 15, /* last flash sector for FW */
FW_IMG_START = FW_START_SEC * SF_SEC_SIZE,
- FW_MAX_SIZE = (FW_END_SEC - FW_START_SEC + 1) * SF_SEC_SIZE,
+ FW_MAX_SIZE = 16 * SF_SEC_SIZE,
FLASH_CFG_MAX_SIZE = 0x10000, /* max size of the flash config file */
FLASH_CFG_OFFSET = 0x1f0000,
* Location of firmware image in FLASH.
*/
FLASH_FW_START_SEC = 8,
- FLASH_FW_NSECS = 8,
+ FLASH_FW_NSECS = 16,
FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
#include "csio_defs.h"
/* Define MACRO values */
-#define CSIO_HW_T4 0x4000
-#define CSIO_T4_FCOE_ASIC 0x4600
#define CSIO_HW_T5 0x5000
#define CSIO_T5_FCOE_ASIC 0x5600
#define CSIO_HW_CHIP_MASK 0xF000
-#define T4_REGMAP_SIZE (160 * 1024)
#define T5_REGMAP_SIZE (332 * 1024)
-#define FW_FNAME_T4 "cxgb4/t4fw.bin"
#define FW_FNAME_T5 "cxgb4/t5fw.bin"
-#define FW_CFG_NAME_T4 "cxgb4/t4-config.txt"
#define FW_CFG_NAME_T5 "cxgb4/t5-config.txt"
-/* Define static functions */
-static inline int csio_is_t4(uint16_t chip)
-{
- return (chip == CSIO_HW_T4);
-}
+#define T5FW_VERSION_MAJOR 0x01
+#define T5FW_VERSION_MINOR 0x0B
+#define T5FW_VERSION_MICRO 0x1B
+#define T5FW_VERSION_BUILD 0x00
+
+#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
+#define CHELSIO_CHIP_FPGA 0x100
+#define CHELSIO_CHIP_VERSION(code) (((code) >> 12) & 0xf)
+#define CHELSIO_CHIP_RELEASE(code) ((code) & 0xf)
+
+#define CHELSIO_T5 0x5
+
+enum chip_type {
+ T5_A0 = CHELSIO_CHIP_CODE(CHELSIO_T5, 0),
+ T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1),
+ T5_FIRST_REV = T5_A0,
+ T5_LAST_REV = T5_A1,
+};
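Two chip encodings meet in this header and are easy to confuse:
hw->chip_id keeps the 0xY000-style ID, while enum chip_type packs version
and revision into one byte. A worked example:

	/*
	 * CHELSIO_CHIP_VERSION(CSIO_HW_T5) = (0x5000 >> 12) & 0xf = 0x5,
	 * which is CHELSIO_T5;
	 * T5_A1 = CHELSIO_CHIP_CODE(CHELSIO_T5, 1) = (0x5 << 4) | 1 = 0x51,
	 * and CHELSIO_CHIP_RELEASE(T5_A1) = 0x51 & 0xf = 1.
	 */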
static inline int csio_is_t5(uint16_t chip)
{
#define CSIO_DEVICE(devid, idx) \
{ PCI_VENDOR_ID_CHELSIO, (devid), PCI_ANY_ID, PCI_ANY_ID, 0, 0, (idx) }
-#define CSIO_HW_PIDX(hw, index) \
- (csio_is_t4(hw->chip_id) ? (PIDX(index)) : \
- (PIDX_T5(index) | DBTYPE(1U)))
+#include "t4fw_api.h"
-#define CSIO_HW_LP_INT_THRESH(hw, val) \
- (csio_is_t4(hw->chip_id) ? (LP_INT_THRESH(val)) : \
- (V_LP_INT_THRESH_T5(val)))
+#define FW_VERSION(chip) ( \
+ FW_HDR_FW_VER_MAJOR_G(chip##FW_VERSION_MAJOR) | \
+ FW_HDR_FW_VER_MINOR_G(chip##FW_VERSION_MINOR) | \
+ FW_HDR_FW_VER_MICRO_G(chip##FW_VERSION_MICRO) | \
+ FW_HDR_FW_VER_BUILD_G(chip##FW_VERSION_BUILD))
+#define FW_INTFVER(chip, intf) (FW_HDR_INTFVER_##intf)
-#define CSIO_HW_M_LP_INT_THRESH(hw) \
- (csio_is_t4(hw->chip_id) ? (LP_INT_THRESH_MASK) : (M_LP_INT_THRESH_T5))
-
-#define CSIO_MAC_INT_CAUSE_REG(hw, port) \
- (csio_is_t4(hw->chip_id) ? (PORT_REG(port, XGMAC_PORT_INT_CAUSE)) : \
- (T5_PORT_REG(port, MAC_PORT_INT_CAUSE)))
-
-#define FW_VERSION_MAJOR(hw) (csio_is_t4(hw->chip_id) ? 1 : 0)
-#define FW_VERSION_MINOR(hw) (csio_is_t4(hw->chip_id) ? 2 : 0)
-#define FW_VERSION_MICRO(hw) (csio_is_t4(hw->chip_id) ? 8 : 0)
-
-#define CSIO_FW_FNAME(hw) \
- (csio_is_t4(hw->chip_id) ? FW_FNAME_T4 : FW_FNAME_T5)
-
-#define CSIO_CF_FNAME(hw) \
- (csio_is_t4(hw->chip_id) ? FW_CFG_NAME_T4 : FW_CFG_NAME_T5)
+struct fw_info {
+ u8 chip;
+ char *fs_name;
+ char *fw_mod_name;
+ struct fw_hdr fw_hdr;
+};
/* Declare ENUMS */
enum { MEM_EDC0, MEM_EDC1, MEM_MC, MEM_MC0 = MEM_MC, MEM_MC1 };
void (*chip_dfs_create_ext_mem)(struct csio_hw *);
};
-extern struct csio_hw_chip_ops t4_ops;
extern struct csio_hw_chip_ops t5_ops;
#endif /* #ifndef __CSIO_HW_CHIP_H__ */
+++ /dev/null
-/*
- * This file is part of the Chelsio FCoE driver for Linux.
- *
- * Copyright (c) 2008-2013 Chelsio Communications, Inc. All rights reserved.
- *
- * This software is available to you under a choice of one of two
- * licenses. You may choose to be licensed under the terms of the GNU
- * General Public License (GPL) Version 2, available from the file
- * OpenIB.org BSD license below:
- *
- * Redistribution and use in source and binary forms, with or
- * without modification, are permitted provided that the following
- * conditions are met:
- *
- * - Redistributions of source code must retain the above
- * copyright notice, this list of conditions and the following
- * - Redistributions in binary form must reproduce the above
- * copyright notice, this list of conditions and the following
- * disclaimer in the documentation and/or other materials
- * provided with the distribution.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
- * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
- * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
- * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
- * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- * SOFTWARE.
- */
-
-#include "csio_hw.h"
-#include "csio_init.h"
-
-/*
- * Return the specified PCI-E Configuration Space register from our Physical
- * Function. We try first via a Firmware LDST Command since we prefer to let
- * the firmware own all of these registers, but if that fails we go for it
- * directly ourselves.
- */
-static uint32_t
-csio_t4_read_pcie_cfg4(struct csio_hw *hw, int reg)
-{
- u32 val = 0;
- struct csio_mb *mbp;
- int rv;
- struct fw_ldst_cmd *ldst_cmd;
-
- mbp = mempool_alloc(hw->mb_mempool, GFP_ATOMIC);
- if (!mbp) {
- CSIO_INC_STATS(hw, n_err_nomem);
- pci_read_config_dword(hw->pdev, reg, &val);
- return val;
- }
-
- csio_mb_ldst(hw, mbp, CSIO_MB_DEFAULT_TMO, reg);
- rv = csio_mb_issue(hw, mbp);
-
- /*
- * If the LDST Command suucceeded, exctract the returned register
- * value. Otherwise read it directly ourself.
- */
- if (rv == 0) {
- ldst_cmd = (struct fw_ldst_cmd *)(mbp->mb);
- val = ntohl(ldst_cmd->u.pcie.data[0]);
- } else
- pci_read_config_dword(hw->pdev, reg, &val);
-
- mempool_free(mbp, hw->mb_mempool);
-
- return val;
-}
-
-static int
-csio_t4_set_mem_win(struct csio_hw *hw, uint32_t win)
-{
- u32 bar0;
- u32 mem_win_base;
-
- /*
- * Truncation intentional: we only read the bottom 32-bits of the
- * 64-bit BAR0/BAR1 ... We use the hardware backdoor mechanism to
- * read BAR0 instead of using pci_resource_start() because we could be
- * operating from within a Virtual Machine which is trapping our
- * accesses to our Configuration Space and we need to set up the PCI-E
- * Memory Window decoders with the actual addresses which will be
- * coming across the PCI-E link.
- */
- bar0 = csio_t4_read_pcie_cfg4(hw, PCI_BASE_ADDRESS_0);
- bar0 &= PCI_BASE_ADDRESS_MEM_MASK;
-
- mem_win_base = bar0 + MEMWIN_BASE;
-
- /*
- * Set up memory window for accessing adapter memory ranges. (Read
- * back MA register to ensure that changes propagate before we attempt
- * to use the new values.)
- */
- csio_wr_reg32(hw, mem_win_base | BIR(0) |
- WINDOW(ilog2(MEMWIN_APERTURE) - 10),
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
- csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
- return 0;
-}
-
-/*
- * Interrupt handler for the PCIE module.
- */
-static void
-csio_t4_pcie_intr_handler(struct csio_hw *hw)
-{
- static struct intr_info sysbus_intr_info[] = {
- { RNPP, "RXNP array parity error", -1, 1 },
- { RPCP, "RXPC array parity error", -1, 1 },
- { RCIP, "RXCIF array parity error", -1, 1 },
- { RCCP, "Rx completions control array parity error", -1, 1 },
- { RFTP, "RXFT array parity error", -1, 1 },
- { 0, NULL, 0, 0 }
- };
- static struct intr_info pcie_port_intr_info[] = {
- { TPCP, "TXPC array parity error", -1, 1 },
- { TNPP, "TXNP array parity error", -1, 1 },
- { TFTP, "TXFT array parity error", -1, 1 },
- { TCAP, "TXCA array parity error", -1, 1 },
- { TCIP, "TXCIF array parity error", -1, 1 },
- { RCAP, "RXCA array parity error", -1, 1 },
- { OTDD, "outbound request TLP discarded", -1, 1 },
- { RDPE, "Rx data parity error", -1, 1 },
- { TDUE, "Tx uncorrectable data error", -1, 1 },
- { 0, NULL, 0, 0 }
- };
-
- static struct intr_info pcie_intr_info[] = {
- { MSIADDRLPERR, "MSI AddrL parity error", -1, 1 },
- { MSIADDRHPERR, "MSI AddrH parity error", -1, 1 },
- { MSIDATAPERR, "MSI data parity error", -1, 1 },
- { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
- { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
- { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
- { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
- { PIOCPLPERR, "PCI PIO completion FIFO parity error", -1, 1 },
- { PIOREQPERR, "PCI PIO request FIFO parity error", -1, 1 },
- { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
- { CCNTPERR, "PCI CMD channel count parity error", -1, 1 },
- { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
- { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
- { DCNTPERR, "PCI DMA channel count parity error", -1, 1 },
- { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
- { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
- { HCNTPERR, "PCI HMA channel count parity error", -1, 1 },
- { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
- { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
- { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
- { FIDPERR, "PCI FID parity error", -1, 1 },
- { INTXCLRPERR, "PCI INTx clear parity error", -1, 1 },
- { MATAGPERR, "PCI MA tag parity error", -1, 1 },
- { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
- { RXCPLPERR, "PCI Rx completion parity error", -1, 1 },
- { RXWRPERR, "PCI Rx write parity error", -1, 1 },
- { RPLPERR, "PCI replay buffer parity error", -1, 1 },
- { PCIESINT, "PCI core secondary fault", -1, 1 },
- { PCIEPINT, "PCI core primary fault", -1, 1 },
- { UNXSPLCPLERR, "PCI unexpected split completion error", -1,
- 0 },
- { 0, NULL, 0, 0 }
- };
-
- int fat;
- fat = csio_handle_intr_status(hw,
- PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
- sysbus_intr_info) +
- csio_handle_intr_status(hw,
- PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
- pcie_port_intr_info) +
- csio_handle_intr_status(hw, PCIE_INT_CAUSE, pcie_intr_info);
- if (fat)
- csio_hw_fatal_err(hw);
-}
-
-/*
- * csio_t4_flash_cfg_addr - return the address of the flash configuration file
- * @hw: the HW module
- *
- * Return the address within the flash where the Firmware Configuration
- * File is stored.
- */
-static unsigned int
-csio_t4_flash_cfg_addr(struct csio_hw *hw)
-{
- return FLASH_CFG_OFFSET;
-}
-
-/*
- * csio_t4_mc_read - read from MC through backdoor accesses
- * @hw: the hw module
- * @idx: not used for T4 adapter
- * @addr: address of first byte requested
- * @data: 64 bytes of data containing the requested address
- * @ecc: where to store the corresponding 64-bit ECC word
- *
- * Read 64 bytes of data from MC starting at a 64-byte-aligned address
- * that covers the requested address @addr. If @ecc is not %NULL it
- * is assigned the 64-bit ECC word for the read data.
- */
-static int
-csio_t4_mc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
- uint64_t *ecc)
-{
- int i;
-
- if (csio_rd_reg32(hw, MC_BIST_CMD) & START_BIST)
- return -EBUSY;
- csio_wr_reg32(hw, addr & ~0x3fU, MC_BIST_CMD_ADDR);
- csio_wr_reg32(hw, 64, MC_BIST_CMD_LEN);
- csio_wr_reg32(hw, 0xc, MC_BIST_DATA_PATTERN);
- csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
- MC_BIST_CMD);
- i = csio_hw_wait_op_done_val(hw, MC_BIST_CMD, START_BIST,
- 0, 10, 1, NULL);
- if (i)
- return i;
-
-#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA, i)
-
- for (i = 15; i >= 0; i--)
- *data++ = htonl(csio_rd_reg32(hw, MC_DATA(i)));
- if (ecc)
- *ecc = csio_rd_reg64(hw, MC_DATA(16));
-#undef MC_DATA
- return 0;
-}
-
-/*
- * csio_t4_edc_read - read from EDC through backdoor accesses
- * @hw: the hw module
- * @idx: which EDC to access
- * @addr: address of first byte requested
- * @data: 64 bytes of data containing the requested address
- * @ecc: where to store the corresponding 64-bit ECC word
- *
- * Read 64 bytes of data from EDC starting at a 64-byte-aligned address
- * that covers the requested address @addr. If @ecc is not %NULL it
- * is assigned the 64-bit ECC word for the read data.
- */
-static int
-csio_t4_edc_read(struct csio_hw *hw, int idx, uint32_t addr, __be32 *data,
- uint64_t *ecc)
-{
- int i;
-
- idx *= EDC_STRIDE;
- if (csio_rd_reg32(hw, EDC_BIST_CMD + idx) & START_BIST)
- return -EBUSY;
- csio_wr_reg32(hw, addr & ~0x3fU, EDC_BIST_CMD_ADDR + idx);
- csio_wr_reg32(hw, 64, EDC_BIST_CMD_LEN + idx);
- csio_wr_reg32(hw, 0xc, EDC_BIST_DATA_PATTERN + idx);
- csio_wr_reg32(hw, BIST_OPCODE(1) | BIST_CMD_GAP(1) | START_BIST,
- EDC_BIST_CMD + idx);
- i = csio_hw_wait_op_done_val(hw, EDC_BIST_CMD + idx, START_BIST,
- 0, 10, 1, NULL);
- if (i)
- return i;
-
-#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA, i) + idx)
-
- for (i = 15; i >= 0; i--)
- *data++ = htonl(csio_rd_reg32(hw, EDC_DATA(i)));
- if (ecc)
- *ecc = csio_rd_reg64(hw, EDC_DATA(16));
-#undef EDC_DATA
- return 0;
-}
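The idx *= EDC_STRIDE scaling above is the usual way to address one bank of a
replicated register block: each EDC instance exposes the same registers at a
fixed stride from the first. A minimal sketch of the idiom (names hypothetical):

/* Sketch: locate register 'reg' in bank 'idx' of a banked register block. */
#define BANK_REG(reg, idx, stride)	((reg) + (idx) * (stride))
/* e.g. EDC_BIST_CMD for EDC 1 would be BANK_REG(EDC_BIST_CMD, 1, EDC_STRIDE) */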
-
-/*
- * csio_t4_memory_rw - read/write EDC 0, EDC 1 or MC via PCIE memory window
- * @hw: the csio_hw
- * @win: PCI-E memory Window to use
- * @mtype: memory type: MEM_EDC0, MEM_EDC1, MEM_MC0 (or MEM_MC) or MEM_MC1
- * @addr: address within indicated memory type
- * @len: amount of memory to transfer
- * @buf: host memory buffer
- * @dir: direction of transfer 1 => read, 0 => write
- *
- * Reads/writes an [almost] arbitrary memory region in the firmware: the
- * firmware memory address, length and host buffer must be aligned on
- * 32-bit boundaries. The memory is transferred as a raw byte sequence
- * from/to the firmware's memory. If this memory contains data
- * structures which contain multi-byte integers, it is the caller's
- * responsibility to perform appropriate byte order conversions.
- */
-static int
-csio_t4_memory_rw(struct csio_hw *hw, u32 win, int mtype, u32 addr,
- u32 len, uint32_t *buf, int dir)
-{
- u32 pos, start, offset, memoffset, bar0;
- u32 edc_size, mc_size, mem_reg, mem_aperture, mem_base;
-
- /*
- * Argument sanity checks ...
- */
- if ((addr & 0x3) || (len & 0x3))
- return -EINVAL;
-
- /* Offset into the region of memory which is being accessed
- * MEM_EDC0 = 0
- * MEM_EDC1 = 1
- * MEM_MC = 2 -- T4
- */
- edc_size = EDRAM0_SIZE_G(csio_rd_reg32(hw, MA_EDRAM0_BAR_A));
- if (mtype != MEM_MC1)
- memoffset = (mtype * (edc_size * 1024 * 1024));
- else {
- mc_size = EXT_MEM_SIZE_G(csio_rd_reg32(hw,
- MA_EXT_MEMORY_BAR_A));
- memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024;
- }
-
- /* Determine the PCIE_MEM_ACCESS_OFFSET */
- addr = addr + memoffset;
-
- /*
- * Each PCI-E Memory Window is programmed with a window size -- or
- * "aperture" -- which controls the granularity of its mapping onto
- * adapter memory. We need to grab that aperture in order to know
- * how to use the specified window. The window is also programmed
- * with the base address of the Memory Window in BAR0's address
- * space. For T4 this is an absolute PCI-E Bus Address. For T5
- * the address is relative to BAR0.
- */
- mem_reg = csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
- mem_aperture = 1 << (WINDOW(mem_reg) + 10);
- mem_base = GET_PCIEOFST(mem_reg) << 10;
-
- bar0 = csio_t4_read_pcie_cfg4(hw, PCI_BASE_ADDRESS_0);
- bar0 &= PCI_BASE_ADDRESS_MEM_MASK;
- mem_base -= bar0;
-
- start = addr & ~(mem_aperture-1);
- offset = addr - start;
-
- csio_dbg(hw, "csio_t4_memory_rw: mem_reg: 0x%x, mem_aperture: 0x%x\n",
- mem_reg, mem_aperture);
- csio_dbg(hw, "csio_t4_memory_rw: mem_base: 0x%x, mem_offset: 0x%x\n",
- mem_base, memoffset);
- csio_dbg(hw, "csio_t4_memory_rw: bar0: 0x%x, start:0x%x, offset:0x%x\n",
- bar0, start, offset);
- csio_dbg(hw, "csio_t4_memory_rw: mtype: %d, addr: 0x%x, len: %d\n",
- mtype, addr, len);
-
- for (pos = start; len > 0; pos += mem_aperture, offset = 0) {
- /*
- * Move PCI-E Memory Window to our current transfer
- * position. Read it back to ensure that changes propagate
- * before we attempt to use the new value.
- */
- csio_wr_reg32(hw, pos,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
- csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
-
- while (offset < mem_aperture && len > 0) {
- if (dir)
- *buf++ = csio_rd_reg32(hw, mem_base + offset);
- else
- csio_wr_reg32(hw, *buf++, mem_base + offset);
-
- offset += sizeof(__be32);
- len -= sizeof(__be32);
- }
- }
- return 0;
-}
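To make the window arithmetic concrete: a WINDOW field of 6 gives an aperture
of 1 << (6 + 10) = 64KB, and each adapter address is split into a
window-aligned position plus an offset inside the aperture. A minimal sketch,
with hypothetical variable names:

/* Sketch: split an adapter address into window position + offset. */
u32 aperture = 1 << (6 + 10);		/* WINDOW field of 6 -> 64KB   */
u32 start    = addr & ~(aperture - 1);	/* window-aligned position     */
u32 offset   = addr - start;		/* byte offset inside window   */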
-
-/*
- * csio_t4_dfs_create_ext_mem - setup debugfs for MC to read the values
- * @hw: the csio_hw
- *
- * This function creates files in the debugfs with external memory region MC.
- */
-static void
-csio_t4_dfs_create_ext_mem(struct csio_hw *hw)
-{
- u32 size;
- int i = csio_rd_reg32(hw, MA_TARGET_MEM_ENABLE_A);
-
- if (i & EXT_MEM_ENABLE_F) {
- size = csio_rd_reg32(hw, MA_EXT_MEMORY_BAR_A);
- csio_add_debugfs_mem(hw, "mc", MEM_MC,
- EXT_MEM_SIZE_G(size));
- }
-}
-
-/* T4 adapter specific function */
-struct csio_hw_chip_ops t4_ops = {
- .chip_set_mem_win = csio_t4_set_mem_win,
- .chip_pcie_intr_handler = csio_t4_pcie_intr_handler,
- .chip_flash_cfg_addr = csio_t4_flash_cfg_addr,
- .chip_mc_read = csio_t4_mc_read,
- .chip_edc_read = csio_t4_edc_read,
- .chip_memory_rw = csio_t4_memory_rw,
- .chip_dfs_create_ext_mem = csio_t4_dfs_create_ext_mem,
-};
* back MA register to ensure that changes propagate before we attempt
* to use the new values.)
*/
- csio_wr_reg32(hw, mem_win_base | BIR(0) |
- WINDOW(ilog2(MEMWIN_APERTURE) - 10),
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
+ csio_wr_reg32(hw, mem_win_base | BIR_V(0) |
+ WINDOW_V(ilog2(MEMWIN_APERTURE) - 10),
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, win));
csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, win));
return 0;
}
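The read-back after the window write is the standard posted-write flush: on
PCI-E the register write may be buffered, and reading the same register back
forces it to complete before the window is used. A minimal sketch of the
idiom, assuming the driver's csio_wr_reg32()/csio_rd_reg32() accessors (the
wrapper name itself is hypothetical):

/* Sketch: flush a posted MMIO write by reading the register back. */
static inline void csio_wr_reg32_flush(struct csio_hw *hw, u32 val, u32 reg)
{
	csio_wr_reg32(hw, val, reg);	/* write may be posted         */
	(void)csio_rd_reg32(hw, reg);	/* read forces it to complete  */
}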
csio_t5_pcie_intr_handler(struct csio_hw *hw)
{
static struct intr_info sysbus_intr_info[] = {
- { RNPP, "RXNP array parity error", -1, 1 },
- { RPCP, "RXPC array parity error", -1, 1 },
- { RCIP, "RXCIF array parity error", -1, 1 },
- { RCCP, "Rx completions control array parity error", -1, 1 },
- { RFTP, "RXFT array parity error", -1, 1 },
+ { RNPP_F, "RXNP array parity error", -1, 1 },
+ { RPCP_F, "RXPC array parity error", -1, 1 },
+ { RCIP_F, "RXCIF array parity error", -1, 1 },
+ { RCCP_F, "Rx completions control array parity error", -1, 1 },
+ { RFTP_F, "RXFT array parity error", -1, 1 },
{ 0, NULL, 0, 0 }
};
static struct intr_info pcie_port_intr_info[] = {
- { TPCP, "TXPC array parity error", -1, 1 },
- { TNPP, "TXNP array parity error", -1, 1 },
- { TFTP, "TXFT array parity error", -1, 1 },
- { TCAP, "TXCA array parity error", -1, 1 },
- { TCIP, "TXCIF array parity error", -1, 1 },
- { RCAP, "RXCA array parity error", -1, 1 },
- { OTDD, "outbound request TLP discarded", -1, 1 },
- { RDPE, "Rx data parity error", -1, 1 },
- { TDUE, "Tx uncorrectable data error", -1, 1 },
+ { TPCP_F, "TXPC array parity error", -1, 1 },
+ { TNPP_F, "TXNP array parity error", -1, 1 },
+ { TFTP_F, "TXFT array parity error", -1, 1 },
+ { TCAP_F, "TXCA array parity error", -1, 1 },
+ { TCIP_F, "TXCIF array parity error", -1, 1 },
+ { RCAP_F, "RXCA array parity error", -1, 1 },
+ { OTDD_F, "outbound request TLP discarded", -1, 1 },
+ { RDPE_F, "Rx data parity error", -1, 1 },
+ { TDUE_F, "Tx uncorrectable data error", -1, 1 },
{ 0, NULL, 0, 0 }
};
static struct intr_info pcie_intr_info[] = {
- { MSTGRPPERR, "Master Response Read Queue parity error",
+ { MSTGRPPERR_F, "Master Response Read Queue parity error",
-1, 1 },
- { MSTTIMEOUTPERR, "Master Timeout FIFO parity error", -1, 1 },
- { MSIXSTIPERR, "MSI-X STI SRAM parity error", -1, 1 },
- { MSIXADDRLPERR, "MSI-X AddrL parity error", -1, 1 },
- { MSIXADDRHPERR, "MSI-X AddrH parity error", -1, 1 },
- { MSIXDATAPERR, "MSI-X data parity error", -1, 1 },
- { MSIXDIPERR, "MSI-X DI parity error", -1, 1 },
- { PIOCPLGRPPERR, "PCI PIO completion Group FIFO parity error",
+ { MSTTIMEOUTPERR_F, "Master Timeout FIFO parity error", -1, 1 },
+ { MSIXSTIPERR_F, "MSI-X STI SRAM parity error", -1, 1 },
+ { MSIXADDRLPERR_F, "MSI-X AddrL parity error", -1, 1 },
+ { MSIXADDRHPERR_F, "MSI-X AddrH parity error", -1, 1 },
+ { MSIXDATAPERR_F, "MSI-X data parity error", -1, 1 },
+ { MSIXDIPERR_F, "MSI-X DI parity error", -1, 1 },
+ { PIOCPLGRPPERR_F, "PCI PIO completion Group FIFO parity error",
-1, 1 },
- { PIOREQGRPPERR, "PCI PIO request Group FIFO parity error",
+ { PIOREQGRPPERR_F, "PCI PIO request Group FIFO parity error",
-1, 1 },
- { TARTAGPERR, "PCI PCI target tag FIFO parity error", -1, 1 },
- { MSTTAGQPERR, "PCI master tag queue parity error", -1, 1 },
- { CREQPERR, "PCI CMD channel request parity error", -1, 1 },
- { CRSPPERR, "PCI CMD channel response parity error", -1, 1 },
- { DREQWRPERR, "PCI DMA channel write request parity error",
+ { TARTAGPERR_F, "PCI PCI target tag FIFO parity error", -1, 1 },
+ { MSTTAGQPERR_F, "PCI master tag queue parity error", -1, 1 },
+ { CREQPERR_F, "PCI CMD channel request parity error", -1, 1 },
+ { CRSPPERR_F, "PCI CMD channel response parity error", -1, 1 },
+ { DREQWRPERR_F, "PCI DMA channel write request parity error",
-1, 1 },
- { DREQPERR, "PCI DMA channel request parity error", -1, 1 },
- { DRSPPERR, "PCI DMA channel response parity error", -1, 1 },
- { HREQWRPERR, "PCI HMA channel count parity error", -1, 1 },
- { HREQPERR, "PCI HMA channel request parity error", -1, 1 },
- { HRSPPERR, "PCI HMA channel response parity error", -1, 1 },
- { CFGSNPPERR, "PCI config snoop FIFO parity error", -1, 1 },
- { FIDPERR, "PCI FID parity error", -1, 1 },
- { VFIDPERR, "PCI INTx clear parity error", -1, 1 },
- { MAGRPPERR, "PCI MA group FIFO parity error", -1, 1 },
- { PIOTAGPERR, "PCI PIO tag parity error", -1, 1 },
- { IPRXHDRGRPPERR, "PCI IP Rx header group parity error",
+ { DREQPERR_F, "PCI DMA channel request parity error", -1, 1 },
+ { DRSPPERR_F, "PCI DMA channel response parity error", -1, 1 },
+ { HREQWRPERR_F, "PCI HMA channel count parity error", -1, 1 },
+ { HREQPERR_F, "PCI HMA channel request parity error", -1, 1 },
+ { HRSPPERR_F, "PCI HMA channel response parity error", -1, 1 },
+ { CFGSNPPERR_F, "PCI config snoop FIFO parity error", -1, 1 },
+ { FIDPERR_F, "PCI FID parity error", -1, 1 },
+ { VFIDPERR_F, "PCI INTx clear parity error", -1, 1 },
+ { MAGRPPERR_F, "PCI MA group FIFO parity error", -1, 1 },
+ { PIOTAGPERR_F, "PCI PIO tag parity error", -1, 1 },
+ { IPRXHDRGRPPERR_F, "PCI IP Rx header group parity error",
-1, 1 },
- { IPRXDATAGRPPERR, "PCI IP Rx data group parity error",
+ { IPRXDATAGRPPERR_F, "PCI IP Rx data group parity error",
-1, 1 },
- { RPLPERR, "PCI IP replay buffer parity error", -1, 1 },
- { IPSOTPERR, "PCI IP SOT buffer parity error", -1, 1 },
- { TRGT1GRPPERR, "PCI TRGT1 group FIFOs parity error", -1, 1 },
- { READRSPERR, "Outbound read error", -1, 0 },
+ { RPLPERR_F, "PCI IP replay buffer parity error", -1, 1 },
+ { IPSOTPERR_F, "PCI IP SOT buffer parity error", -1, 1 },
+ { TRGT1GRPPERR_F, "PCI TRGT1 group FIFOs parity error", -1, 1 },
+ { READRSPERR_F, "Outbound read error", -1, 0 },
{ 0, NULL, 0, 0 }
};
int fat;
fat = csio_handle_intr_status(hw,
- PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS,
+ PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A,
sysbus_intr_info) +
csio_handle_intr_status(hw,
- PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS,
+ PCIE_CORE_UTL_PCI_EXPRESS_PORT_STATUS_A,
pcie_port_intr_info) +
- csio_handle_intr_status(hw, PCIE_INT_CAUSE, pcie_intr_info);
+ csio_handle_intr_status(hw, PCIE_INT_CAUSE_A, pcie_intr_info);
if (fat)
csio_hw_fatal_err(hw);
}
uint32_t mc_bist_cmd_reg, mc_bist_cmd_addr_reg, mc_bist_cmd_len_reg;
uint32_t mc_bist_status_rdata_reg, mc_bist_data_pattern_reg;
- mc_bist_cmd_reg = MC_REG(MC_P_BIST_CMD, idx);
- mc_bist_cmd_addr_reg = MC_REG(MC_P_BIST_CMD_ADDR, idx);
- mc_bist_cmd_len_reg = MC_REG(MC_P_BIST_CMD_LEN, idx);
- mc_bist_status_rdata_reg = MC_REG(MC_P_BIST_STATUS_RDATA, idx);
- mc_bist_data_pattern_reg = MC_REG(MC_P_BIST_DATA_PATTERN, idx);
+ mc_bist_cmd_reg = MC_REG(MC_P_BIST_CMD_A, idx);
+ mc_bist_cmd_addr_reg = MC_REG(MC_P_BIST_CMD_ADDR_A, idx);
+ mc_bist_cmd_len_reg = MC_REG(MC_P_BIST_CMD_LEN_A, idx);
+ mc_bist_status_rdata_reg = MC_REG(MC_P_BIST_STATUS_RDATA_A, idx);
+ mc_bist_data_pattern_reg = MC_REG(MC_P_BIST_DATA_PATTERN_A, idx);
- if (csio_rd_reg32(hw, mc_bist_cmd_reg) & START_BIST)
+ if (csio_rd_reg32(hw, mc_bist_cmd_reg) & START_BIST_F)
return -EBUSY;
csio_wr_reg32(hw, addr & ~0x3fU, mc_bist_cmd_addr_reg);
csio_wr_reg32(hw, 64, mc_bist_cmd_len_reg);
csio_wr_reg32(hw, 0xc, mc_bist_data_pattern_reg);
- csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
+ csio_wr_reg32(hw, BIST_OPCODE_V(1) | START_BIST_F | BIST_CMD_GAP_V(1),
mc_bist_cmd_reg);
- i = csio_hw_wait_op_done_val(hw, mc_bist_cmd_reg, START_BIST,
+ i = csio_hw_wait_op_done_val(hw, mc_bist_cmd_reg, START_BIST_F,
0, 10, 1, NULL);
if (i)
return i;
-#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA, i)
+#define MC_DATA(i) MC_BIST_STATUS_REG(MC_BIST_STATUS_RDATA_A, i)
for (i = 15; i >= 0; i--)
*data++ = htonl(csio_rd_reg32(hw, MC_DATA(i)));
#define EDC_STRIDE_T5 (EDC_T51_BASE_ADDR - EDC_T50_BASE_ADDR)
#define EDC_REG_T5(reg, idx) (reg + EDC_STRIDE_T5 * idx)
- edc_bist_cmd_reg = EDC_REG_T5(EDC_H_BIST_CMD, idx);
- edc_bist_cmd_addr_reg = EDC_REG_T5(EDC_H_BIST_CMD_ADDR, idx);
- edc_bist_cmd_len_reg = EDC_REG_T5(EDC_H_BIST_CMD_LEN, idx);
- edc_bist_cmd_data_pattern = EDC_REG_T5(EDC_H_BIST_DATA_PATTERN, idx);
- edc_bist_status_rdata_reg = EDC_REG_T5(EDC_H_BIST_STATUS_RDATA, idx);
+ edc_bist_cmd_reg = EDC_REG_T5(EDC_H_BIST_CMD_A, idx);
+ edc_bist_cmd_addr_reg = EDC_REG_T5(EDC_H_BIST_CMD_ADDR_A, idx);
+ edc_bist_cmd_len_reg = EDC_REG_T5(EDC_H_BIST_CMD_LEN_A, idx);
+ edc_bist_cmd_data_pattern = EDC_REG_T5(EDC_H_BIST_DATA_PATTERN_A, idx);
+ edc_bist_status_rdata_reg = EDC_REG_T5(EDC_H_BIST_STATUS_RDATA_A, idx);
#undef EDC_REG_T5
#undef EDC_STRIDE_T5
- if (csio_rd_reg32(hw, edc_bist_cmd_reg) & START_BIST)
+ if (csio_rd_reg32(hw, edc_bist_cmd_reg) & START_BIST_F)
return -EBUSY;
csio_wr_reg32(hw, addr & ~0x3fU, edc_bist_cmd_addr_reg);
csio_wr_reg32(hw, 64, edc_bist_cmd_len_reg);
csio_wr_reg32(hw, 0xc, edc_bist_cmd_data_pattern);
- csio_wr_reg32(hw, BIST_OPCODE(1) | START_BIST | BIST_CMD_GAP(1),
+ csio_wr_reg32(hw, BIST_OPCODE_V(1) | START_BIST_F | BIST_CMD_GAP_V(1),
edc_bist_cmd_reg);
- i = csio_hw_wait_op_done_val(hw, edc_bist_cmd_reg, START_BIST,
+ i = csio_hw_wait_op_done_val(hw, edc_bist_cmd_reg, START_BIST_F,
0, 10, 1, NULL);
if (i)
return i;
-#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA, i) + idx)
+#define EDC_DATA(i) (EDC_BIST_STATUS_REG(EDC_BIST_STATUS_RDATA_A, i) + idx)
for (i = 15; i >= 0; i--)
*data++ = htonl(csio_rd_reg32(hw, EDC_DATA(i)));
* the address is relative to BAR0.
*/
mem_reg = csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN, win));
- mem_aperture = 1 << (WINDOW(mem_reg) + 10);
- mem_base = GET_PCIEOFST(mem_reg) << 10;
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_BASE_WIN_A, win));
+ mem_aperture = 1 << (WINDOW_V(mem_reg) + 10);
+ mem_base = PCIEOFST_G(mem_reg) << 10;
start = addr & ~(mem_aperture-1);
offset = addr - start;
- win_pf = V_PFNUM(hw->pfn);
+ win_pf = PFNUM_V(hw->pfn);
csio_dbg(hw, "csio_t5_memory_rw: mem_reg: 0x%x, mem_aperture: 0x%x\n",
mem_reg, mem_aperture);
* before we attempt to use the new value.
*/
csio_wr_reg32(hw, pos | win_pf,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, win));
csio_rd_reg32(hw,
- PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET, win));
+ PCIE_MEM_ACCESS_REG(PCIE_MEM_ACCESS_OFFSET_A, win));
while (offset < mem_aperture && len > 0) {
if (dir)
*/
#define CH_PCI_DEVICE_ID_TABLE_DEFINE_BEGIN \
static struct pci_device_id csio_pci_tbl[] = {
-/* Define for iSCSI uses PF5, FCoE uses PF6 */
-#define CH_PCI_DEVICE_ID_FUNCTION 0x5
-#define CH_PCI_DEVICE_ID_FUNCTION2 0x6
+/* FCoE uses PF6 */
+#define CH_PCI_DEVICE_ID_FUNCTION 0x6
#define CH_PCI_ID_TABLE_ENTRY(devid) \
{ PCI_VDEVICE(CHELSIO, (devid)), 0 }
MODULE_LICENSE(CSIO_DRV_LICENSE);
MODULE_DEVICE_TABLE(pci, csio_pci_tbl);
MODULE_VERSION(CSIO_DRV_VERSION);
-MODULE_FIRMWARE(FW_FNAME_T4);
MODULE_FIRMWARE(FW_FNAME_T5);
/* Disable the interrupt for this PCI function. */
if (hw->intr_mode == CSIO_IM_INTX)
- csio_wr_reg32(hw, 0, MYPF_REG(PCIE_PF_CLI));
+ csio_wr_reg32(hw, 0, MYPF_REG(PCIE_PF_CLI_A));
/*
* The read in the following function will flush the
else {
/* Program DSGL to dma payload */
dsgl.cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) |
- ULPTX_MORE | ULPTX_NSGE(1));
+ ULPTX_MORE_F | ULPTX_NSGE_V(1));
dsgl.len0 = cpu_to_be32(pld_len);
dsgl.addr0 = cpu_to_be64(pld->paddr);
csio_wr_copy_to_wrp(&dsgl, &wrp, ALIGN(wr_off, 8),
void
csio_mb_intr_enable(struct csio_hw *hw)
{
- csio_wr_reg32(hw, MBMSGRDYINTEN(1), MYPF_REG(CIM_PF_HOST_INT_ENABLE));
- csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_ENABLE));
+ csio_wr_reg32(hw, MBMSGRDYINTEN_F, MYPF_REG(CIM_PF_HOST_INT_ENABLE_A));
+ csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_ENABLE_A));
}
/*
void
csio_mb_intr_disable(struct csio_hw *hw)
{
- csio_wr_reg32(hw, MBMSGRDYINTEN(0), MYPF_REG(CIM_PF_HOST_INT_ENABLE));
- csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_ENABLE));
+ csio_wr_reg32(hw, MBMSGRDYINTEN_V(0),
+ MYPF_REG(CIM_PF_HOST_INT_ENABLE_A));
+ csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_ENABLE_A));
}
static void
{
int i;
__be64 cmd[CSIO_MB_MAX_REGS];
- uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL);
- uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA);
+ uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL_A);
+ uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA_A);
int size = sizeof(struct fw_debug_cmd);
/* Copy mailbox data */
csio_mb_dump_fw_dbg(hw, cmd);
/* Notify FW of mailbox by setting owner as UP */
- csio_wr_reg32(hw, MBMSGVALID | MBINTREQ | MBOWNER(CSIO_MBOWNER_FW),
- ctl_reg);
+ csio_wr_reg32(hw, MBMSGVALID_F | MBINTREQ_F |
+ MBOWNER_V(CSIO_MBOWNER_FW), ctl_reg);
csio_rd_reg32(hw, ctl_reg);
wmb();
__be64 *cmd = mbp->mb;
__be64 hdr;
struct csio_mbm *mbm = &hw->mbm;
- uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL);
- uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA);
+ uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL_A);
+ uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA_A);
int size = mbp->mb_size;
int rv = -EINVAL;
struct fw_cmd_hdr *fw_hdr;
}
/* Now get ownership of mailbox */
- owner = MBOWNER_GET(csio_rd_reg32(hw, ctl_reg));
+ owner = MBOWNER_G(csio_rd_reg32(hw, ctl_reg));
if (!csio_mb_is_host_owner(owner)) {
for (i = 0; (owner == CSIO_MBOWNER_NONE) && (i < 3); i++)
- owner = MBOWNER_GET(csio_rd_reg32(hw, ctl_reg));
+ owner = MBOWNER_G(csio_rd_reg32(hw, ctl_reg));
/*
* Mailbox unavailable. In immediate mode, fail the command.
* In other modes, enqueue the request.
if (mbp->mb_cbfn != NULL) {
mbm->mcurrent = mbp;
mod_timer(&mbm->timer, jiffies + msecs_to_jiffies(mbp->tmo));
- csio_wr_reg32(hw, MBMSGVALID | MBINTREQ |
- MBOWNER(CSIO_MBOWNER_FW), ctl_reg);
+ csio_wr_reg32(hw, MBMSGVALID_F | MBINTREQ_F |
+ MBOWNER_V(CSIO_MBOWNER_FW), ctl_reg);
} else
- csio_wr_reg32(hw, MBMSGVALID | MBOWNER(CSIO_MBOWNER_FW),
+ csio_wr_reg32(hw, MBMSGVALID_F | MBOWNER_V(CSIO_MBOWNER_FW),
ctl_reg);
/* Flush posted writes */
/* Check for response */
ctl = csio_rd_reg32(hw, ctl_reg);
- if (csio_mb_is_host_owner(MBOWNER_GET(ctl))) {
+ if (csio_mb_is_host_owner(MBOWNER_G(ctl))) {
- if (!(ctl & MBMSGVALID)) {
+ if (!(ctl & MBMSGVALID_F)) {
csio_wr_reg32(hw, 0, ctl_reg);
continue;
}
__be64 *cmd;
uint32_t ctl, cim_cause, pl_cause;
int i;
- uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL);
- uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA);
+ uint32_t ctl_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_CTRL_A);
+ uint32_t data_reg = PF_REG(hw->pfn, CIM_PF_MAILBOX_DATA_A);
int size;
__be64 hdr;
struct fw_cmd_hdr *fw_hdr;
- pl_cause = csio_rd_reg32(hw, MYPF_REG(PL_PF_INT_CAUSE));
- cim_cause = csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_CAUSE));
+ pl_cause = csio_rd_reg32(hw, MYPF_REG(PL_PF_INT_CAUSE_A));
+ cim_cause = csio_rd_reg32(hw, MYPF_REG(CIM_PF_HOST_INT_CAUSE_A));
- if (!(pl_cause & PFCIM) || !(cim_cause & MBMSGRDYINT)) {
+ if (!(pl_cause & PFCIM_F) || !(cim_cause & MBMSGRDYINT_F)) {
CSIO_INC_STATS(hw, n_mbint_unexp);
return -EINVAL;
}
* the upper level cause register. In other words, CIM-cause
 * first, followed by PL-Cause.
*/
- csio_wr_reg32(hw, MBMSGRDYINT, MYPF_REG(CIM_PF_HOST_INT_CAUSE));
- csio_wr_reg32(hw, PFCIM, MYPF_REG(PL_PF_INT_CAUSE));
+ csio_wr_reg32(hw, MBMSGRDYINT_F, MYPF_REG(CIM_PF_HOST_INT_CAUSE_A));
+ csio_wr_reg32(hw, PFCIM_F, MYPF_REG(PL_PF_INT_CAUSE_A));
ctl = csio_rd_reg32(hw, ctl_reg);
- if (csio_mb_is_host_owner(MBOWNER_GET(ctl))) {
+ if (csio_mb_is_host_owner(MBOWNER_G(ctl))) {
CSIO_DUMP_MB(hw, hw->pfn, data_reg);
- if (!(ctl & MBMSGVALID)) {
+ if (!(ctl & MBMSGVALID_F)) {
csio_warn(hw,
"Stray mailbox interrupt recvd,"
" mailbox data not valid\n");
struct csio_dma_buf *dma_buf;
struct scsi_cmnd *scmnd = csio_scsi_cmnd(req);
- sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) | ULPTX_MORE |
- ULPTX_NSGE(req->nsge));
+ sgl->cmd_nsge = htonl(ULPTX_CMD_V(ULP_TX_SC_DSGL) | ULPTX_MORE_F |
+ ULPTX_NSGE_V(req->nsge));
/* Now add the data SGLs */
if (likely(!req->dcopy)) {
scsi_for_each_sg(scmnd, sgel, req->nsge, i) {
static int csio_sge_timer_reg = 1;
#define CSIO_SET_FLBUF_SIZE(_hw, _reg, _val) \
- csio_wr_reg32((_hw), (_val), SGE_FL_BUFFER_SIZE##_reg)
+ csio_wr_reg32((_hw), (_val), SGE_FL_BUFFER_SIZE##_reg##_A)
static void
csio_get_flbuf_size(struct csio_hw *hw, struct csio_sge *sge, uint32_t reg)
{
- sge->sge_fl_buf_size[reg] = csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE0 +
+ sge->sge_fl_buf_size[reg] = csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE0_A +
reg * sizeof(uint32_t));
}
static inline uint32_t
csio_wr_qstat_pgsz(struct csio_hw *hw)
{
- return (hw->wrm.sge.sge_control & EGRSTATUSPAGESIZE(1)) ? 128 : 64;
+ return (hw->wrm.sge.sge_control & EGRSTATUSPAGESIZE_F) ? 128 : 64;
}
/* Ring freelist doorbell */
* 8 freelist buffer pointers (since each pointer is 8 bytes).
*/
if (flq->inc_idx >= 8) {
- csio_wr_reg32(hw, DBPRIO(1) | QID(flq->un.fl.flid) |
- CSIO_HW_PIDX(hw, flq->inc_idx / 8),
- MYPF_REG(SGE_PF_KDOORBELL));
+ csio_wr_reg32(hw, DBPRIO_F | QID_V(flq->un.fl.flid) |
+ PIDX_T5_V(flq->inc_idx / 8) | DBTYPE_F,
+ MYPF_REG(SGE_PF_KDOORBELL_A));
flq->inc_idx &= 7;
}
}
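Because the doorbell PIDX counts in units of eight 8-byte freelist pointers,
only whole multiples of 8 are rung and the residue below 8 is carried to the
next ring. A minimal sketch of that bookkeeping, with hypothetical names:

/* Sketch: post whole 8-pointer credits, carry the remainder forward. */
u32 units = inc_idx / 8;	/* doorbell PIDX increments to post  */
inc_idx  &= 7;			/* remainder kept for the next ring  */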
static void
csio_wr_sge_intr_enable(struct csio_hw *hw, uint16_t iqid)
{
- csio_wr_reg32(hw, CIDXINC(0) |
- INGRESSQID(iqid) |
- TIMERREG(X_TIMERREG_RESTART_COUNTER),
- MYPF_REG(SGE_PF_GTS));
+ csio_wr_reg32(hw, CIDXINC_V(0) |
+ INGRESSQID_V(iqid) |
+ TIMERREG_V(X_TIMERREG_RESTART_COUNTER),
+ MYPF_REG(SGE_PF_GTS_A));
}
/*
wmb();
/* Ring SGE Doorbell writing q->pidx into it */
- csio_wr_reg32(hw, DBPRIO(prio) | QID(q->un.eq.physeqid) |
- CSIO_HW_PIDX(hw, q->inc_idx),
- MYPF_REG(SGE_PF_KDOORBELL));
+ csio_wr_reg32(hw, DBPRIO_V(prio) | QID_V(q->un.eq.physeqid) |
+ PIDX_T5_V(q->inc_idx) | DBTYPE_F,
+ MYPF_REG(SGE_PF_KDOORBELL_A));
q->inc_idx = 0;
return 0;
restart:
/* Now inform SGE about our incremental index value */
- csio_wr_reg32(hw, CIDXINC(q->inc_idx) |
- INGRESSQID(q->un.iq.physiqid) |
- TIMERREG(csio_sge_timer_reg),
- MYPF_REG(SGE_PF_GTS));
+ csio_wr_reg32(hw, CIDXINC_V(q->inc_idx) |
+ INGRESSQID_V(q->un.iq.physiqid) |
+ TIMERREG_V(csio_sge_timer_reg),
+ MYPF_REG(SGE_PF_GTS_A));
q->stats.n_tot_rsps += q->inc_idx;
q->inc_idx = 0;
uint32_t ingpad = 0;
uint32_t stat_len = clsz > 64 ? 128 : 64;
- csio_wr_reg32(hw, HOSTPAGESIZEPF0(s_hps) | HOSTPAGESIZEPF1(s_hps) |
- HOSTPAGESIZEPF2(s_hps) | HOSTPAGESIZEPF3(s_hps) |
- HOSTPAGESIZEPF4(s_hps) | HOSTPAGESIZEPF5(s_hps) |
- HOSTPAGESIZEPF6(s_hps) | HOSTPAGESIZEPF7(s_hps),
- SGE_HOST_PAGE_SIZE);
+ csio_wr_reg32(hw, HOSTPAGESIZEPF0_V(s_hps) | HOSTPAGESIZEPF1_V(s_hps) |
+ HOSTPAGESIZEPF2_V(s_hps) | HOSTPAGESIZEPF3_V(s_hps) |
+ HOSTPAGESIZEPF4_V(s_hps) | HOSTPAGESIZEPF5_V(s_hps) |
+ HOSTPAGESIZEPF6_V(s_hps) | HOSTPAGESIZEPF7_V(s_hps),
+ SGE_HOST_PAGE_SIZE_A);
sge->csio_fl_align = clsz < 32 ? 32 : clsz;
ingpad = ilog2(sge->csio_fl_align) - 5;
- csio_set_reg_field(hw, SGE_CONTROL, INGPADBOUNDARY_MASK |
- EGRSTATUSPAGESIZE(1),
- INGPADBOUNDARY(ingpad) |
- EGRSTATUSPAGESIZE(stat_len != 64));
+ csio_set_reg_field(hw, SGE_CONTROL_A,
+ INGPADBOUNDARY_V(INGPADBOUNDARY_M) |
+ EGRSTATUSPAGESIZE_F,
+ INGPADBOUNDARY_V(ingpad) |
+ EGRSTATUSPAGESIZE_V(stat_len != 64));
 /* FL BUFFER SIZE#0 is Page size, i.e. already aligned to cache line */
- csio_wr_reg32(hw, PAGE_SIZE, SGE_FL_BUFFER_SIZE0);
+ csio_wr_reg32(hw, PAGE_SIZE, SGE_FL_BUFFER_SIZE0_A);
/*
* If using hard params, the following will get set correctly
*/
if (hw->flags & CSIO_HWF_USING_SOFT_PARAMS) {
csio_wr_reg32(hw,
- (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE2) +
+ (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE2_A) +
sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1),
- SGE_FL_BUFFER_SIZE2);
+ SGE_FL_BUFFER_SIZE2_A);
csio_wr_reg32(hw,
- (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE3) +
+ (csio_rd_reg32(hw, SGE_FL_BUFFER_SIZE3_A) +
sge->csio_fl_align - 1) & ~(sge->csio_fl_align - 1),
- SGE_FL_BUFFER_SIZE3);
+ SGE_FL_BUFFER_SIZE3_A);
}
- csio_wr_reg32(hw, HPZ0(PAGE_SHIFT - 12), ULP_RX_TDDP_PSZ);
+ csio_wr_reg32(hw, HPZ0_V(PAGE_SHIFT - 12), ULP_RX_TDDP_PSZ_A);
/* default value of rx_dma_offset of the NIC driver */
- csio_set_reg_field(hw, SGE_CONTROL, PKTSHIFT_MASK,
- PKTSHIFT(CSIO_SGE_RX_DMA_OFFSET));
+ csio_set_reg_field(hw, SGE_CONTROL_A,
+ PKTSHIFT_V(PKTSHIFT_M),
+ PKTSHIFT_V(CSIO_SGE_RX_DMA_OFFSET));
- csio_hw_tp_wr_bits_indirect(hw, TP_INGRESS_CONFIG,
- CSUM_HAS_PSEUDO_HDR, 0);
+ csio_hw_tp_wr_bits_indirect(hw, TP_INGRESS_CONFIG_A,
+ CSUM_HAS_PSEUDO_HDR_F, 0);
}
static void
u32 timer_value_0_and_1, timer_value_2_and_3, timer_value_4_and_5;
u32 ingress_rx_threshold;
- sge->sge_control = csio_rd_reg32(hw, SGE_CONTROL);
+ sge->sge_control = csio_rd_reg32(hw, SGE_CONTROL_A);
- ingpad = INGPADBOUNDARY_GET(sge->sge_control);
+ ingpad = INGPADBOUNDARY_G(sge->sge_control);
switch (ingpad) {
case X_INGPCIEBOUNDARY_32B:
for (i = 0; i < CSIO_SGE_FL_SIZE_REGS; i++)
csio_get_flbuf_size(hw, sge, i);
- timer_value_0_and_1 = csio_rd_reg32(hw, SGE_TIMER_VALUE_0_AND_1);
- timer_value_2_and_3 = csio_rd_reg32(hw, SGE_TIMER_VALUE_2_AND_3);
- timer_value_4_and_5 = csio_rd_reg32(hw, SGE_TIMER_VALUE_4_AND_5);
+ timer_value_0_and_1 = csio_rd_reg32(hw, SGE_TIMER_VALUE_0_AND_1_A);
+ timer_value_2_and_3 = csio_rd_reg32(hw, SGE_TIMER_VALUE_2_AND_3_A);
+ timer_value_4_and_5 = csio_rd_reg32(hw, SGE_TIMER_VALUE_4_AND_5_A);
sge->timer_val[0] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE0_GET(timer_value_0_and_1));
+ TIMERVALUE0_G(timer_value_0_and_1));
sge->timer_val[1] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE1_GET(timer_value_0_and_1));
+ TIMERVALUE1_G(timer_value_0_and_1));
sge->timer_val[2] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE2_GET(timer_value_2_and_3));
+ TIMERVALUE2_G(timer_value_2_and_3));
sge->timer_val[3] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE3_GET(timer_value_2_and_3));
+ TIMERVALUE3_G(timer_value_2_and_3));
sge->timer_val[4] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE4_GET(timer_value_4_and_5));
+ TIMERVALUE4_G(timer_value_4_and_5));
sge->timer_val[5] = (uint16_t)csio_core_ticks_to_us(hw,
- TIMERVALUE5_GET(timer_value_4_and_5));
+ TIMERVALUE5_G(timer_value_4_and_5));
- ingress_rx_threshold = csio_rd_reg32(hw, SGE_INGRESS_RX_THRESHOLD);
- sge->counter_val[0] = THRESHOLD_0_GET(ingress_rx_threshold);
- sge->counter_val[1] = THRESHOLD_1_GET(ingress_rx_threshold);
- sge->counter_val[2] = THRESHOLD_2_GET(ingress_rx_threshold);
- sge->counter_val[3] = THRESHOLD_3_GET(ingress_rx_threshold);
+ ingress_rx_threshold = csio_rd_reg32(hw, SGE_INGRESS_RX_THRESHOLD_A);
+ sge->counter_val[0] = THRESHOLD_0_G(ingress_rx_threshold);
+ sge->counter_val[1] = THRESHOLD_1_G(ingress_rx_threshold);
+ sge->counter_val[2] = THRESHOLD_2_G(ingress_rx_threshold);
+ sge->counter_val[3] = THRESHOLD_3_G(ingress_rx_threshold);
csio_init_intr_coalesce_parms(hw);
}
* Set up our basic SGE mode to deliver CPL messages to our Ingress
 * Queue and Packet Data to the Free List.
*/
- csio_set_reg_field(hw, SGE_CONTROL, RXPKTCPLMODE(1), RXPKTCPLMODE(1));
+ csio_set_reg_field(hw, SGE_CONTROL_A, RXPKTCPLMODE_F, RXPKTCPLMODE_F);
- sge->sge_control = csio_rd_reg32(hw, SGE_CONTROL);
+ sge->sge_control = csio_rd_reg32(hw, SGE_CONTROL_A);
/* sge->csio_fl_align is set up by csio_wr_fixup_host_params(). */
* Set up to drop DOORBELL writes when the DOORBELL FIFO overflows
* and generate an interrupt when this occurs so we can recover.
*/
- csio_set_reg_field(hw, SGE_DBFIFO_STATUS,
- HP_INT_THRESH(HP_INT_THRESH_MASK) |
- CSIO_HW_LP_INT_THRESH(hw, CSIO_HW_M_LP_INT_THRESH(hw)),
- HP_INT_THRESH(CSIO_SGE_DBFIFO_INT_THRESH) |
- CSIO_HW_LP_INT_THRESH(hw, CSIO_SGE_DBFIFO_INT_THRESH));
+ csio_set_reg_field(hw, SGE_DBFIFO_STATUS_A,
+ LP_INT_THRESH_T5_V(LP_INT_THRESH_T5_M),
+ LP_INT_THRESH_T5_V(CSIO_SGE_DBFIFO_INT_THRESH));
+ csio_set_reg_field(hw, SGE_DBFIFO_STATUS2_A,
+ HP_INT_THRESH_T5_V(LP_INT_THRESH_T5_M),
+ HP_INT_THRESH_T5_V(CSIO_SGE_DBFIFO_INT_THRESH));
- csio_set_reg_field(hw, SGE_DOORBELL_CONTROL, ENABLE_DROP,
- ENABLE_DROP);
+ csio_set_reg_field(hw, SGE_DOORBELL_CONTROL_A, ENABLE_DROP_F,
+ ENABLE_DROP_F);
/* SGE_FL_BUFFER_SIZE0 is set up by csio_wr_fixup_host_params(). */
CSIO_SET_FLBUF_SIZE(hw, 1, CSIO_SGE_FLBUF_SIZE1);
csio_wr_reg32(hw, (CSIO_SGE_FLBUF_SIZE2 + sge->csio_fl_align - 1)
- & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE2);
+ & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE2_A);
csio_wr_reg32(hw, (CSIO_SGE_FLBUF_SIZE3 + sge->csio_fl_align - 1)
- & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE3);
+ & ~(sge->csio_fl_align - 1), SGE_FL_BUFFER_SIZE3_A);
CSIO_SET_FLBUF_SIZE(hw, 4, CSIO_SGE_FLBUF_SIZE4);
CSIO_SET_FLBUF_SIZE(hw, 5, CSIO_SGE_FLBUF_SIZE5);
CSIO_SET_FLBUF_SIZE(hw, 6, CSIO_SGE_FLBUF_SIZE6);
sge->counter_val[2] = CSIO_SGE_INT_CNT_VAL_2;
sge->counter_val[3] = CSIO_SGE_INT_CNT_VAL_3;
- csio_wr_reg32(hw, THRESHOLD_0(sge->counter_val[0]) |
- THRESHOLD_1(sge->counter_val[1]) |
- THRESHOLD_2(sge->counter_val[2]) |
- THRESHOLD_3(sge->counter_val[3]),
- SGE_INGRESS_RX_THRESHOLD);
+ csio_wr_reg32(hw, THRESHOLD_0_V(sge->counter_val[0]) |
+ THRESHOLD_1_V(sge->counter_val[1]) |
+ THRESHOLD_2_V(sge->counter_val[2]) |
+ THRESHOLD_3_V(sge->counter_val[3]),
+ SGE_INGRESS_RX_THRESHOLD_A);
csio_wr_reg32(hw,
- TIMERVALUE0(csio_us_to_core_ticks(hw, sge->timer_val[0])) |
- TIMERVALUE1(csio_us_to_core_ticks(hw, sge->timer_val[1])),
- SGE_TIMER_VALUE_0_AND_1);
+ TIMERVALUE0_V(csio_us_to_core_ticks(hw, sge->timer_val[0])) |
+ TIMERVALUE1_V(csio_us_to_core_ticks(hw, sge->timer_val[1])),
+ SGE_TIMER_VALUE_0_AND_1_A);
csio_wr_reg32(hw,
- TIMERVALUE2(csio_us_to_core_ticks(hw, sge->timer_val[2])) |
- TIMERVALUE3(csio_us_to_core_ticks(hw, sge->timer_val[3])),
- SGE_TIMER_VALUE_2_AND_3);
+ TIMERVALUE2_V(csio_us_to_core_ticks(hw, sge->timer_val[2])) |
+ TIMERVALUE3_V(csio_us_to_core_ticks(hw, sge->timer_val[3])),
+ SGE_TIMER_VALUE_2_AND_3_A);
csio_wr_reg32(hw,
- TIMERVALUE4(csio_us_to_core_ticks(hw, sge->timer_val[4])) |
- TIMERVALUE5(csio_us_to_core_ticks(hw, sge->timer_val[5])),
- SGE_TIMER_VALUE_4_AND_5);
+ TIMERVALUE4_V(csio_us_to_core_ticks(hw, sge->timer_val[4])) |
+ TIMERVALUE5_V(csio_us_to_core_ticks(hw, sge->timer_val[5])),
+ SGE_TIMER_VALUE_4_AND_5_A);
csio_init_intr_coalesce_parms(hw);
}
#include "t4fw_api.h"
#include "l2t.h"
#include "cxgb4i.h"
+#include "clip_tbl.h"
static unsigned int dbg_level;
struct cpl_act_establish *req = (struct cpl_act_establish *)skb->data;
unsigned short tcp_opt = ntohs(req->tcp_opt);
unsigned int tid = GET_TID(req);
- unsigned int atid = GET_TID_TID(ntohl(req->tos_atid));
+ unsigned int atid = TID_TID_G(ntohl(req->tos_atid));
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
u32 rcv_isn = be32_to_cpu(req->rcv_isn);
if (cxgb4i_rcv_win > (RCV_BUFSIZ_MASK << 10))
csk->rcv_wup -= cxgb4i_rcv_win - (RCV_BUFSIZ_MASK << 10);
- csk->advmss = lldi->mtus[GET_TCPOPT_MSS(tcp_opt)] - 40;
- if (GET_TCPOPT_TSTAMP(tcp_opt))
+ csk->advmss = lldi->mtus[TCPOPT_MSS_G(tcp_opt)] - 40;
+ if (TCPOPT_TSTAMP_G(tcp_opt))
csk->advmss -= 12;
if (csk->advmss < 128)
csk->advmss = 128;
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p, mss_idx %u, advmss %u.\n",
- csk, GET_TCPOPT_MSS(tcp_opt), csk->advmss);
+ csk, TCPOPT_MSS_G(tcp_opt), csk->advmss);
cxgbi_sock_established(csk, ntohl(req->snd_isn), ntohs(req->tcp_opt));
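The advertised-MSS arithmetic above subtracts 40 bytes of IPv4 + TCP headers
from the negotiated MTU entry, and another 12 when TCP timestamps are in use.
A worked check with the standard header sizes (values assumed, not taken from
this driver):

/* Sketch: MSS derivation from MTU with standard header sizes. */
int mtu = 1500;
int mss = mtu - 40;		/* IPv4 (20) + TCP (20) -> 1460        */
int mss_ts = mss - 12;		/* minus TCP timestamp option -> 1448  */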
struct cpl_act_open_rpl *rpl = (struct cpl_act_open_rpl *)skb->data;
unsigned int tid = GET_TID(rpl);
unsigned int atid =
- GET_TID_TID(GET_AOPEN_ATID(be32_to_cpu(rpl->atid_status)));
- unsigned int status = GET_AOPEN_STATUS(be32_to_cpu(rpl->atid_status));
+ TID_TID_G(AOPEN_ATID_G(be32_to_cpu(rpl->atid_status)));
+ unsigned int status = AOPEN_STATUS_G(be32_to_cpu(rpl->atid_status));
struct cxgb4_lld_info *lldi = cxgbi_cdev_priv(cdev);
struct tid_info *t = lldi->tids;
hlen = ntohs(cpl->len);
dlen = ntohl(*(unsigned int *)(bhs + 4)) & 0xFFFFFF;
- plen = ISCSI_PDU_LEN(pdu_len_ddp);
+ plen = ISCSI_PDU_LEN_G(pdu_len_ddp);
if (is_t4(lldi->adapter_type))
plen -= 40;
static void release_offload_resources(struct cxgbi_sock *csk)
{
struct cxgb4_lld_info *lldi;
+#if IS_ENABLED(CONFIG_IPV6)
+ struct net_device *ndev = csk->cdev->ports[csk->port_id];
+#endif
log_debug(1 << CXGBI_DBG_TOE | 1 << CXGBI_DBG_SOCK,
"csk 0x%p,%u,0x%lx,%u.\n",
}
l2t_put(csk);
+#if IS_ENABLED(CONFIG_IPV6)
+ if (csk->csk_family == AF_INET6)
+ cxgb4_clip_release(ndev,
+ (const u32 *)&csk->saddr6.sin6_addr, 1);
+#endif
+
if (cxgbi_sock_flag(csk, CTPF_HAS_ATID))
free_atid(csk);
else if (cxgbi_sock_flag(csk, CTPF_HAS_TID)) {
csk->l2t = cxgb4_l2t_get(lldi->l2t, n, ndev, 0);
if (!csk->l2t) {
pr_err("%s, cannot alloc l2t.\n", ndev->name);
- goto rel_resource;
+ goto rel_resource_without_clip;
}
cxgbi_sock_get(csk);
+#if IS_ENABLED(CONFIG_IPV6)
+ if (csk->csk_family == AF_INET6)
+ cxgb4_clip_get(ndev, (const u32 *)&csk->saddr6.sin6_addr, 1);
+#endif
+
if (t4) {
size = sizeof(struct cpl_act_open_req);
size6 = sizeof(struct cpl_act_open_req6);
return 0;
rel_resource:
+#if IS_ENABLED(CONFIG_IPV6)
+ if (csk->csk_family == AF_INET6)
+ cxgb4_clip_release(ndev,
+ (const u32 *)&csk->saddr6.sin6_addr, 1);
+#endif
+rel_resource_without_clip:
if (n)
neigh_release(n);
if (skb)
req = (struct cpl_set_tcb_field *)skb->head;
INIT_TP_WR(req, csk->tid);
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_SET_TCB_FIELD, csk->tid));
- req->reply_ctrl = htons(NO_REPLY(reply) | QUEUENO(csk->rss_qid));
+ req->reply_ctrl = htons(NO_REPLY_V(reply) | QUEUENO_V(csk->rss_qid));
req->word_cookie = htons(0);
req->mask = cpu_to_be64(0x3 << 8);
req->val = cpu_to_be64(pg_idx << 8);
req = (struct cpl_set_tcb_field *)skb->head;
INIT_TP_WR(req, tid);
OPCODE_TID(req) = htonl(MK_OPCODE_TID(CPL_SET_TCB_FIELD, tid));
- req->reply_ctrl = htons(NO_REPLY(reply) | QUEUENO(csk->rss_qid));
+ req->reply_ctrl = htons(NO_REPLY_V(reply) | QUEUENO_V(csk->rss_qid));
req->word_cookie = htons(0);
req->mask = cpu_to_be64(0x3 << 4);
req->val = cpu_to_be64(((hcrc ? ULP_CRC_HEADER : 0) |
#define DRV_NAME "fnic"
#define DRV_DESCRIPTION "Cisco FCoE HBA Driver"
-#define DRV_VERSION "1.6.0.16"
+#define DRV_VERSION "1.6.0.17"
#define PFX DRV_NAME ": "
#define DFX DRV_NAME "%d: "
goto fnic_abort_cmd_end;
}
+ /* IO out of order */
+
+ if (!(CMD_FLAGS(sc) & (FNIC_IO_ABORTED | FNIC_IO_DONE))) {
+ spin_unlock_irqrestore(io_lock, flags);
+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+ "Issuing Host reset due to out of order IO\n");
+
+ if (fnic_host_reset(sc) == FAILED) {
+ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host,
+ "fnic_host_reset failed.\n");
+ }
+ ret = FAILED;
+ goto fnic_abort_cmd_end;
+ }
+
CMD_STATE(sc) = FNIC_IOREQ_ABTS_COMPLETE;
/*
}
 /* send genetlink multicast message to notify applications */
- result = genlmsg_end(skb, msg_header);
-
- if (result < 0) {
- pmcraid_err("genlmsg_end failed\n");
- nlmsg_free(skb);
- return result;
- }
+ genlmsg_end(skb, msg_header);
result = genlmsg_multicast(&pmcraid_event_family, skb,
0, 0, GFP_ATOMIC);
* Return target busy if we've received a non-zero retry_delay_timer
* in a FCP_RSP.
*/
- if (time_after(jiffies, fcport->retry_delay_timestamp))
+ if (fcport->retry_delay_timestamp == 0) {
+ /* retry delay not set */
+ } else if (time_after(jiffies, fcport->retry_delay_timestamp))
fcport->retry_delay_timestamp = 0;
else
goto qc24_target_busy;
}
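The extra timestamp == 0 branch matters because time_after() is a wrap-safe
signed comparison, not an unsigned one: for roughly half of the jiffies range
time_after(jiffies, 0) is false, so an unset (zero) timestamp could wrongly
take the busy path. A minimal sketch of the semantics, for illustration only:

/* Sketch: time_after(a, b) is ((long)((b) - (a)) < 0), which wraps. */
unsigned long now = ULONG_MAX / 2 + 2;	/* jiffies in the upper half   */
bool after = time_after(now, 0UL);	/* false, despite now > 0      */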
/* signal not to enter either branch of the if () below */
timeleft = 0;
- rtn = NEEDS_RETRY;
+ rtn = FAILED;
} else {
timeleft = wait_for_completion_timeout(&done, timeout);
rtn = SUCCESS;
rtn = FAILED;
break;
}
- } else if (!rtn) {
+ } else if (rtn != FAILED) {
scsi_abort_eh_cmnd(scmd);
rtn = FAILED;
}
sd_config_discard(sdkp, SD_LBP_WS16);
} else { /* LBP VPD page tells us what to use */
-
- if (sdkp->lbpws)
+ if (sdkp->lbpu && sdkp->max_unmap_blocks && !sdkp->lbprz)
+ sd_config_discard(sdkp, SD_LBP_UNMAP);
+ else if (sdkp->lbpws)
sd_config_discard(sdkp, SD_LBP_WS16);
else if (sdkp->lbpws10)
sd_config_discard(sdkp, SD_LBP_WS10);
goto reject;
}
if (!strncmp("=All", text_ptr, 4)) {
- cmd->cmd_flags |= IFC_SENDTARGETS_ALL;
+ cmd->cmd_flags |= ICF_SENDTARGETS_ALL;
} else if (!strncmp("=iqn.", text_ptr, 5) ||
!strncmp("=eui.", text_ptr, 5)) {
- cmd->cmd_flags |= IFC_SENDTARGETS_SINGLE;
+ cmd->cmd_flags |= ICF_SENDTARGETS_SINGLE;
} else {
pr_err("Unable to locate valid SendTargets=%s value\n", text_ptr);
goto reject;
return -ENOMEM;
}
/*
- * Locate pointer to iqn./eui. string for IFC_SENDTARGETS_SINGLE
+ * Locate pointer to iqn./eui. string for ICF_SENDTARGETS_SINGLE
* explicit case..
*/
- if (cmd->cmd_flags & IFC_SENDTARGETS_SINGLE) {
+ if (cmd->cmd_flags & ICF_SENDTARGETS_SINGLE) {
text_ptr = strchr(text_in, '=');
if (!text_ptr) {
pr_err("Unable to locate '=' string in text_in:"
spin_lock(&tiqn_lock);
list_for_each_entry(tiqn, &g_tiqn_list, tiqn_list) {
- if ((cmd->cmd_flags & IFC_SENDTARGETS_SINGLE) &&
+ if ((cmd->cmd_flags & ICF_SENDTARGETS_SINGLE) &&
strcmp(tiqn->tiqn, text_ptr)) {
continue;
}
if (end_of_buf)
break;
- if (cmd->cmd_flags & IFC_SENDTARGETS_SINGLE)
+ if (cmd->cmd_flags & ICF_SENDTARGETS_SINGLE)
break;
}
spin_unlock(&tiqn_lock);
ICF_CONTIG_MEMORY = 0x00000020,
ICF_ATTACHED_TO_RQUEUE = 0x00000040,
ICF_OOO_CMDSN = 0x00000080,
- IFC_SENDTARGETS_ALL = 0x00000100,
- IFC_SENDTARGETS_SINGLE = 0x00000200,
+ ICF_SENDTARGETS_ALL = 0x00000100,
+ ICF_SENDTARGETS_SINGLE = 0x00000200,
};
/* struct iscsi_cmd->i_state */
}
EXPORT_SYMBOL(se_dev_set_queue_depth);
-int se_dev_set_fabric_max_sectors(struct se_device *dev, u32 fabric_max_sectors)
-{
- int block_size = dev->dev_attrib.block_size;
-
- if (dev->export_count) {
- pr_err("dev[%p]: Unable to change SE Device"
- " fabric_max_sectors while export_count is %d\n",
- dev, dev->export_count);
- return -EINVAL;
- }
- if (!fabric_max_sectors) {
- pr_err("dev[%p]: Illegal ZERO value for"
- " fabric_max_sectors\n", dev);
- return -EINVAL;
- }
- if (fabric_max_sectors < DA_STATUS_MAX_SECTORS_MIN) {
- pr_err("dev[%p]: Passed fabric_max_sectors: %u less than"
- " DA_STATUS_MAX_SECTORS_MIN: %u\n", dev, fabric_max_sectors,
- DA_STATUS_MAX_SECTORS_MIN);
- return -EINVAL;
- }
- if (fabric_max_sectors > DA_STATUS_MAX_SECTORS_MAX) {
- pr_err("dev[%p]: Passed fabric_max_sectors: %u"
- " greater than DA_STATUS_MAX_SECTORS_MAX:"
- " %u\n", dev, fabric_max_sectors,
- DA_STATUS_MAX_SECTORS_MAX);
- return -EINVAL;
- }
- /*
- * Align max_sectors down to PAGE_SIZE to follow transport_allocate_data_tasks()
- */
- if (!block_size) {
- block_size = 512;
- pr_warn("Defaulting to 512 for zero block_size\n");
- }
- fabric_max_sectors = se_dev_align_max_sectors(fabric_max_sectors,
- block_size);
-
- dev->dev_attrib.fabric_max_sectors = fabric_max_sectors;
- pr_debug("dev[%p]: SE Device max_sectors changed to %u\n",
- dev, fabric_max_sectors);
- return 0;
-}
-EXPORT_SYMBOL(se_dev_set_fabric_max_sectors);
-
int se_dev_set_optimal_sectors(struct se_device *dev, u32 optimal_sectors)
{
if (dev->export_count) {
dev, dev->export_count);
return -EINVAL;
}
- if (optimal_sectors > dev->dev_attrib.fabric_max_sectors) {
+ if (optimal_sectors > dev->dev_attrib.hw_max_sectors) {
pr_err("dev[%p]: Passed optimal_sectors %u cannot be"
- " greater than fabric_max_sectors: %u\n", dev,
- optimal_sectors, dev->dev_attrib.fabric_max_sectors);
+ " greater than hw_max_sectors: %u\n", dev,
+ optimal_sectors, dev->dev_attrib.hw_max_sectors);
return -EINVAL;
}
dev->dev_attrib.unmap_granularity_alignment =
DA_UNMAP_GRANULARITY_ALIGNMENT_DEFAULT;
dev->dev_attrib.max_write_same_len = DA_MAX_WRITE_SAME_LEN;
- dev->dev_attrib.fabric_max_sectors = DA_FABRIC_MAX_SECTORS;
- dev->dev_attrib.optimal_sectors = DA_FABRIC_MAX_SECTORS;
xcopy_lun = &dev->xcopy_lun;
xcopy_lun->lun_se_dev = dev;
dev->dev_attrib.hw_max_sectors =
se_dev_align_max_sectors(dev->dev_attrib.hw_max_sectors,
dev->dev_attrib.hw_block_size);
+ dev->dev_attrib.optimal_sectors = dev->dev_attrib.hw_max_sectors;
dev->dev_index = scsi_get_new_index(SCSI_DEVICE_INDEX);
dev->creation_time = get_jiffies_64();
struct fd_prot fd_prot;
sense_reason_t rc;
int ret = 0;
-
+ /*
+ * We are currently limited by the number of iovecs (2048) per
+ * single vfs_[writev,readv] call.
+ */
+ if (cmd->data_length > FD_MAX_BYTES) {
+ pr_err("FILEIO: Not able to process I/O of %u bytes due to"
+ "FD_MAX_BYTES: %u iovec count limitiation\n",
+ cmd->data_length, FD_MAX_BYTES);
+ return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ }
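The bound follows directly from the iovec ceiling named in the comment: 2048
one-page iovecs cap a single vfs_[writev,readv] call at 2048 * 4096 = 8 MiB
with 4KiB pages. A worked check of that arithmetic (values assumed for
illustration, not taken from the header):

/* Sketch: 2048 one-page iovecs with 4KiB pages -> 8 MiB per call. */
#define FD_EXAMPLE_IOV_MAX	2048
#define FD_EXAMPLE_PAGE_SIZE	4096
#define FD_EXAMPLE_MAX_BYTES	(FD_EXAMPLE_IOV_MAX * FD_EXAMPLE_PAGE_SIZE) /* 8388608 */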
/*
* Call vectorized fileio functions to map struct scatterlist
* physical memory addresses to struct iovec virtual memory.
&fileio_dev_attrib_hw_block_size.attr,
&fileio_dev_attrib_block_size.attr,
&fileio_dev_attrib_hw_max_sectors.attr,
- &fileio_dev_attrib_fabric_max_sectors.attr,
&fileio_dev_attrib_optimal_sectors.attr,
&fileio_dev_attrib_hw_queue_depth.attr,
&fileio_dev_attrib_queue_depth.attr,
q = bdev_get_queue(bd);
dev->dev_attrib.hw_block_size = bdev_logical_block_size(bd);
- dev->dev_attrib.hw_max_sectors = UINT_MAX;
+ dev->dev_attrib.hw_max_sectors = queue_max_hw_sectors(q);
dev->dev_attrib.hw_queue_depth = q->nr_requests;
/*
&iblock_dev_attrib_hw_block_size.attr,
&iblock_dev_attrib_block_size.attr,
&iblock_dev_attrib_hw_max_sectors.attr,
- &iblock_dev_attrib_fabric_max_sectors.attr,
&iblock_dev_attrib_optimal_sectors.attr,
&iblock_dev_attrib_hw_queue_depth.attr,
&iblock_dev_attrib_queue_depth.attr,
return 0;
}
+ } else if (we && registered_nexus) {
+ /*
+ * Reads are allowed for Write Exclusive locks
+ * from all registrants.
+ */
+ if (cmd->data_direction == DMA_FROM_DEVICE) {
+ pr_debug("Allowing READ CDB: 0x%02x for %s"
+ " reservation\n", cdb[0],
+ core_scsi3_pr_dump_type(pr_reg_type));
+
+ return 0;
+ }
}
pr_debug("%s Conflict for %sregistered nexus %s CDB: 0x%2x"
" for %s reservation\n", transport_dump_cmd_direction(cmd),
&rd_mcp_dev_attrib_hw_block_size.attr,
&rd_mcp_dev_attrib_block_size.attr,
&rd_mcp_dev_attrib_hw_max_sectors.attr,
- &rd_mcp_dev_attrib_fabric_max_sectors.attr,
&rd_mcp_dev_attrib_optimal_sectors.attr,
&rd_mcp_dev_attrib_hw_queue_depth.attr,
&rd_mcp_dev_attrib_queue_depth.attr,
if (cmd->se_cmd_flags & SCF_SCSI_DATA_CDB) {
unsigned long long end_lba;
-
- if (sectors > dev->dev_attrib.fabric_max_sectors) {
- printk_ratelimited(KERN_ERR "SCSI OP %02xh with too"
- " big sectors %u exceeds fabric_max_sectors:"
- " %u\n", cdb[0], sectors,
- dev->dev_attrib.fabric_max_sectors);
- return TCM_INVALID_CDB_FIELD;
- }
- if (sectors > dev->dev_attrib.hw_max_sectors) {
- printk_ratelimited(KERN_ERR "SCSI OP %02xh with too"
- " big sectors %u exceeds backend hw_max_sectors:"
- " %u\n", cdb[0], sectors,
- dev->dev_attrib.hw_max_sectors);
- return TCM_INVALID_CDB_FIELD;
- }
check_lba:
end_lba = dev->transport->get_blocks(dev) + 1;
if (cmd->t_task_lba + sectors > end_lba) {
spc_emulate_evpd_b0(struct se_cmd *cmd, unsigned char *buf)
{
struct se_device *dev = cmd->se_dev;
- u32 max_sectors;
int have_tp = 0;
int opt, min;
/*
* Set MAXIMUM TRANSFER LENGTH
*/
- max_sectors = min(dev->dev_attrib.fabric_max_sectors,
- dev->dev_attrib.hw_max_sectors);
- put_unaligned_be32(max_sectors, &buf[8]);
+ put_unaligned_be32(dev->dev_attrib.hw_max_sectors, &buf[8]);
/*
* Set OPTIMAL TRANSFER LENGTH
if (ret < 0)
goto free_skb;
- ret = genlmsg_end(skb, msg_header);
- if (ret < 0)
- goto free_skb;
+ genlmsg_end(skb, msg_header);
ret = genlmsg_multicast(&tcmu_genl_family, skb, 0,
TCMU_MCGRP_CONFIG, GFP_KERNEL);
&tcmu_dev_attrib_hw_block_size.attr,
&tcmu_dev_attrib_block_size.attr,
&tcmu_dev_attrib_hw_max_sectors.attr,
- &tcmu_dev_attrib_fabric_max_sectors.attr,
&tcmu_dev_attrib_optimal_sectors.attr,
&tcmu_dev_attrib_hw_queue_depth.attr,
&tcmu_dev_attrib_queue_depth.attr,
continue;
result = acpi_bus_get_device(trt->source, &adev);
- if (!result)
- acpi_create_platform_device(adev);
- else
+ if (result)
pr_warn("Failed to get source ACPI device\n");
result = acpi_bus_get_device(trt->target, &adev);
- if (!result)
- acpi_create_platform_device(adev);
- else
+ if (result)
pr_warn("Failed to get target ACPI device\n");
}
if (art->source) {
result = acpi_bus_get_device(art->source, &adev);
- if (!result)
- acpi_create_platform_device(adev);
- else
+ if (result)
pr_warn("Failed to get source ACPI device\n");
}
if (art->target) {
result = acpi_bus_get_device(art->target, &adev);
- if (!result)
- acpi_create_platform_device(adev);
- else
+ if (result)
pr_warn("Failed to get source ACPI device\n");
}
}
int ret;
adev = ACPI_COMPANION(dev);
+ if (!adev)
+ return -ENODEV;
status = acpi_evaluate_object(adev->handle, "PPCC", NULL, &buf);
if (ACPI_FAILURE(status))
thermal_event->event = event;
/* send multicast genetlink message */
- result = genlmsg_end(skb, msg_header);
- if (result < 0) {
- nlmsg_free(skb);
- return result;
- }
+ genlmsg_end(skb, msg_header);
result = genlmsg_multicast(&thermal_event_genl_family, skb, 0,
0, GFP_ATOMIC);
static int vfio_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
- u8 type;
struct vfio_pci_device *vdev;
struct iommu_group *group;
int ret;
- pci_read_config_byte(pdev, PCI_HEADER_TYPE, &type);
- if ((type & PCI_HEADER_TYPE) != PCI_HEADER_TYPE_NORMAL)
+ if (pdev->hdr_type != PCI_HEADER_TYPE_NORMAL)
return -EINVAL;
group = iommu_group_get(&pdev->dev);
head = skb_peek(&sk->sk_receive_queue);
if (likely(head)) {
len = head->len;
- if (vlan_tx_tag_present(head))
+ if (skb_vlan_tag_present(head))
len += VLAN_HLEN;
}
++headcount;
seg += in;
}
- heads[headcount - 1].len = cpu_to_vhost32(vq, len - datalen);
+ heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
*iovcount = seg;
if (unlikely(log))
*log_num = nlogs;
return 0;
}
+static int vhost_scsi_to_tcm_attr(int attr)
+{
+ switch (attr) {
+ case VIRTIO_SCSI_S_SIMPLE:
+ return TCM_SIMPLE_TAG;
+ case VIRTIO_SCSI_S_ORDERED:
+ return TCM_ORDERED_TAG;
+ case VIRTIO_SCSI_S_HEAD:
+ return TCM_HEAD_TAG;
+ case VIRTIO_SCSI_S_ACA:
+ return TCM_ACA_TAG;
+ default:
+ break;
+ }
+ return TCM_SIMPLE_TAG;
+}
+
static void tcm_vhost_submission_work(struct work_struct *work)
{
struct tcm_vhost_cmd *cmd =
rc = target_submit_cmd_map_sgls(se_cmd, tv_nexus->tvn_se_sess,
cmd->tvc_cdb, &cmd->tvc_sense_buf[0],
cmd->tvc_lun, cmd->tvc_exp_data_len,
- cmd->tvc_task_attr, cmd->tvc_data_direction,
- TARGET_SCF_ACK_KREF, sg_ptr, cmd->tvc_sgl_count,
- NULL, 0, sg_prot_ptr, cmd->tvc_prot_sgl_count);
+ vhost_scsi_to_tcm_attr(cmd->tvc_task_attr),
+ cmd->tvc_data_direction, TARGET_SCF_ACK_KREF,
+ sg_ptr, cmd->tvc_sgl_count, NULL, 0, sg_prot_ptr,
+ cmd->tvc_prot_sgl_count);
if (rc < 0) {
transport_send_check_condition_and_sense(se_cmd,
TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE, 0);
r = -EFAULT;
break;
}
- if ((a.avail_user_addr & (sizeof *vq->avail->ring - 1)) ||
- (a.used_user_addr & (sizeof *vq->used->ring - 1)) ||
- (a.log_guest_addr & (sizeof *vq->used->ring - 1))) {
+
+ /* Make sure it's safe to cast pointers to vring types. */
+ BUILD_BUG_ON(__alignof__ *vq->avail > VRING_AVAIL_ALIGN_SIZE);
+ BUILD_BUG_ON(__alignof__ *vq->used > VRING_USED_ALIGN_SIZE);
+ if ((a.avail_user_addr & (VRING_AVAIL_ALIGN_SIZE - 1)) ||
+ (a.used_user_addr & (VRING_USED_ALIGN_SIZE - 1)) ||
+ (a.log_guest_addr & (sizeof(u64) - 1))) {
r = -EINVAL;
break;
}
cancel_delayed_work_sync(&info->deferred_work);
/* Run it immediately */
- err = schedule_delayed_work(&info->deferred_work, 0);
+ schedule_delayed_work(&info->deferred_work, 0);
mutex_unlock(&inode->i_mutex);
- return err;
+
+ return 0;
}
EXPORT_SYMBOL_GPL(fb_deferred_io_fsync);
.mX_max = 127,
.fint_min = 500000,
.fint_max = 2500000,
- .clkdco_max = 1800000000,
.clkdco_min = 500000000,
.clkdco_low = 1000000000,
.mX_max = 127,
.fint_min = 620000,
.fint_max = 2500000,
- .clkdco_max = 1800000000,
.clkdco_min = 750000000,
.clkdco_low = 1500000000,
return 0;
err_enable:
- regulator_disable(pll->regulator);
+ if (pll->regulator)
+ regulator_disable(pll->regulator);
err_reg:
clk_disable_unprepare(pll->clkin);
return r;
out->output_type = OMAP_DISPLAY_TYPE_SDI;
out->name = "sdi.0";
out->dispc_channel = OMAP_DSS_CHANNEL_LCD;
+ /* We have SDI only on OMAP3, where it's on port 1 */
+ out->port_num = 1;
out->ops.sdi = &sdi_ops;
out->owner = THIS_MODULE;
module_param(nologo, bool, 0);
MODULE_PARM_DESC(nologo, "Disables startup logo");
+/*
+ * Logos are located in the initdata, and will be freed in kernel_init.
+ * Use late_init to mark the logos as freed to prevent any further use.
+ */
+
+static bool logos_freed;
+
+static int __init fb_logo_late_init(void)
+{
+ logos_freed = true;
+ return 0;
+}
+
+late_initcall(fb_logo_late_init);
+
/* Logos are marked __initdata. Use __init_refok to tell
* modpost that it is intended that this function uses data
* marked __initdata.
{
const struct linux_logo *logo = NULL;
- if (nologo)
+ if (nologo || logos_freed)
return NULL;
if (depth >= 1) {
vp_free_vectors(vdev);
kfree(vp_dev->vqs);
+ vp_dev->vqs = NULL;
}
static int vp_try_to_find_vqs(struct virtio_device *vdev, unsigned nvqs,
return 0;
}
-void virtio_pci_release_dev(struct device *_d)
-{
- /*
- * No need for a release method as we allocate/free
- * all devices together with the pci devices.
- * Provide an empty one to avoid getting a warning from core.
- */
-}
-
#ifdef CONFIG_PM_SLEEP
static int virtio_pci_freeze(struct device *dev)
{
* - ignore the affinity request if we're using INTX
*/
int vp_set_vq_affinity(struct virtqueue *vq, int cpu);
-void virtio_pci_release_dev(struct device *);
int virtio_pci_legacy_probe(struct pci_dev *pci_dev,
const struct pci_device_id *id);
.set_vq_affinity = vp_set_vq_affinity,
};
+static void virtio_pci_release_dev(struct device *_d)
+{
+ struct virtio_device *vdev = dev_to_virtio(_d);
+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+
+ /* As struct device is a kobject, it's not safe to
+ * free the memory (including the reference counter itself)
+	 * until its release callback runs. */
+ kfree(vp_dev);
+}
+
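This follows the driver-model rule that storage embedding a struct device may
only be freed from the device's release callback, once the last reference is
dropped; freeing it earlier can free a still-referenced kobject. A minimal
sketch of the pattern, with hypothetical names:

/* Sketch: free an embedding structure only from its release callback. */
struct foo_device {
	struct device dev;		/* refcounted kobject lives inside */
};

static void foo_release(struct device *d)
{
	kfree(container_of(d, struct foo_device, dev));	/* last ref gone */
}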
/* the PCI probing function */
int virtio_pci_legacy_probe(struct pci_dev *pci_dev,
const struct pci_device_id *id)
pci_iounmap(pci_dev, vp_dev->ioaddr);
pci_release_regions(pci_dev);
pci_disable_device(pci_dev);
- kfree(vp_dev);
}
{
int ret;
int type;
- struct btrfs_tree_block_info *info;
struct btrfs_extent_inline_ref *eiref;
if (*ptr == (unsigned long)-1)
}
/* we can treat both ref types equally here */
- info = (struct btrfs_tree_block_info *)(ei + 1);
*out_root = btrfs_extent_inline_ref_offset(eb, eiref);
- *out_level = btrfs_tree_block_level(eb, info);
+
+ if (key->type == BTRFS_EXTENT_ITEM_KEY) {
+ struct btrfs_tree_block_info *info;
+
+ info = (struct btrfs_tree_block_info *)(ei + 1);
+ *out_level = btrfs_tree_block_level(eb, info);
+ } else {
+ ASSERT(key->type == BTRFS_METADATA_ITEM_KEY);
+ *out_level = (u8)key->offset;
+ }
if (ret == 1)
*ptr = (unsigned long)-1;
{
struct btrfs_delayed_node *delayed_node;
+ /*
+	 * We don't do delayed inode updates during log recovery because they
+	 * lead to enospc problems, which means we also can't do
+	 * delayed inode refs.
+ */
+ if (BTRFS_I(inode)->root->fs_info->log_root_recovering)
+ return -EAGAIN;
+
delayed_node = btrfs_get_or_create_delayed_node(inode);
if (IS_ERR(delayed_node))
return PTR_ERR(delayed_node);
struct extent_buffer *leaf;
ret = btrfs_search_slot(trans, extent_root, &cache->key, path, 0, 1);
- if (ret < 0)
+ if (ret) {
+ if (ret > 0)
+ ret = -ENOENT;
goto fail;
- BUG_ON(ret); /* Corruption */
+ }
leaf = path->nodes[0];
bi = btrfs_item_ptr_offset(leaf, path->slots[0]);
btrfs_mark_buffer_dirty(leaf);
btrfs_release_path(path);
fail:
- if (ret) {
+ if (ret)
btrfs_abort_transaction(trans, root, ret);
- return ret;
- }
- return 0;
+ return ret;
}
out_fail:
btrfs_end_transaction(trans, root);
- if (drop_on_err)
+ if (drop_on_err) {
+ inode_dec_link_count(inode);
iput(inode);
+ }
btrfs_balance_delayed_items(root);
btrfs_btree_balance_dirty(root);
return err;
ret = scrub_pages_for_parity(sparity, logical, l, physical, dev,
flags, gen, mirror_num,
have_csum ? csum : NULL);
-skip:
if (ret)
return ret;
+skip:
len -= l;
logical += l;
physical += l;
}
}
- dout("fill_inline_data %p %llx.%llx len %lu locked_page %p\n",
+ dout("fill_inline_data %p %llx.%llx len %zu locked_page %p\n",
inode, ceph_vinop(inode), len, locked_page);
if (len > 0) {
{
struct genlmsghdr *genlhdr = nlmsg_data((struct nlmsghdr *)skb->data);
void *data = genlmsg_data(genlhdr);
- int rv;
- rv = genlmsg_end(skb, data);
- if (rv < 0) {
- nlmsg_free(skb);
- return rv;
- }
+ genlmsg_end(skb, data);
return genlmsg_unicast(&init_net, skb, listener_nlportid);
}
/* fallback to generic here if not in extents fmt */
if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)))
- return __generic_block_fiemap(inode, fieinfo, start, len,
- ext4_get_block);
+ return generic_block_fiemap(inode, fieinfo, start, len,
+ ext4_get_block);
if (fiemap_check_flags(fieinfo, EXT4_FIEMAP_FLAGS))
return -EBADR;
 * we determine whether this extent is data or a hole according to whether
 * the page cache has data or not.
*/
-static int ext4_find_unwritten_pgoff(struct inode *inode, int whence,
- loff_t endoff, loff_t *offset)
+static int ext4_find_unwritten_pgoff(struct inode *inode,
+ int whence,
+ struct ext4_map_blocks *map,
+ loff_t *offset)
{
struct pagevec pvec;
+ unsigned int blkbits;
pgoff_t index;
pgoff_t end;
+ loff_t endoff;
loff_t startoff;
loff_t lastoff;
int found = 0;
+ blkbits = inode->i_sb->s_blocksize_bits;
startoff = *offset;
lastoff = startoff;
-
+ endoff = (loff_t)(map->m_lblk + map->m_len) << blkbits;
index = startoff >> PAGE_CACHE_SHIFT;
end = endoff >> PAGE_CACHE_SHIFT;
static loff_t ext4_seek_data(struct file *file, loff_t offset, loff_t maxsize)
{
struct inode *inode = file->f_mapping->host;
- struct fiemap_extent_info fie;
- struct fiemap_extent ext[2];
- loff_t next;
- int i, ret = 0;
+ struct ext4_map_blocks map;
+ struct extent_status es;
+ ext4_lblk_t start, last, end;
+ loff_t dataoff, isize;
+ int blkbits;
+ int ret = 0;
mutex_lock(&inode->i_mutex);
- if (offset >= inode->i_size) {
+
+ isize = i_size_read(inode);
+ if (offset >= isize) {
mutex_unlock(&inode->i_mutex);
return -ENXIO;
}
- fie.fi_flags = 0;
- fie.fi_extents_max = 2;
- fie.fi_extents_start = (struct fiemap_extent __user *) &ext;
- while (1) {
- mm_segment_t old_fs = get_fs();
-
- fie.fi_extents_mapped = 0;
- memset(ext, 0, sizeof(*ext) * fie.fi_extents_max);
-
- set_fs(get_ds());
- ret = ext4_fiemap(inode, &fie, offset, maxsize - offset);
- set_fs(old_fs);
- if (ret)
+
+ blkbits = inode->i_sb->s_blocksize_bits;
+ start = offset >> blkbits;
+ last = start;
+ end = isize >> blkbits;
+ dataoff = offset;
+
+ do {
+ map.m_lblk = last;
+ map.m_len = end - last + 1;
+ ret = ext4_map_blocks(NULL, inode, &map, 0);
+ if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {
+ if (last != start)
+ dataoff = (loff_t)last << blkbits;
break;
+ }
- /* No extents found, EOF */
- if (!fie.fi_extents_mapped) {
- ret = -ENXIO;
+ /*
+ * If there is a delayed extent at this offset,
+ * it is treated as data.
+ */
+ ext4_es_find_delayed_extent_range(inode, last, last, &es);
+ if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
+ if (last != start)
+ dataoff = (loff_t)last << blkbits;
break;
}
- for (i = 0; i < fie.fi_extents_mapped; i++) {
- next = (loff_t)(ext[i].fe_length + ext[i].fe_logical);
- if (offset < (loff_t)ext[i].fe_logical)
- offset = (loff_t)ext[i].fe_logical;
- /*
- * If extent is not unwritten, then it contains valid
- * data, mapped or delayed.
- */
- if (!(ext[i].fe_flags & FIEMAP_EXTENT_UNWRITTEN))
- goto out;
+ /*
+ * If there is an unwritten extent at this offset,
+ * it is treated as data or as a hole depending on
+ * whether the page cache has data for it.
+ */
+ if (map.m_flags & EXT4_MAP_UNWRITTEN) {
+ int unwritten;
+ unwritten = ext4_find_unwritten_pgoff(inode, SEEK_DATA,
+ &map, &dataoff);
+ if (unwritten)
+ break;
+ }
- /*
- * If there is a unwritten extent at this offset,
- * it will be as a data or a hole according to page
- * cache that has data or not.
- */
- if (ext4_find_unwritten_pgoff(inode, SEEK_DATA,
- next, &offset))
- goto out;
+ last++;
+ dataoff = (loff_t)last << blkbits;
+ } while (last <= end);
- if (ext[i].fe_flags & FIEMAP_EXTENT_LAST) {
- ret = -ENXIO;
- goto out;
- }
- offset = next;
- }
- }
- if (offset > inode->i_size)
- offset = inode->i_size;
-out:
mutex_unlock(&inode->i_mutex);
- if (ret)
- return ret;
- return vfs_setpos(file, offset, maxsize);
+ if (dataoff > isize)
+ return -ENXIO;
+
+ return vfs_setpos(file, dataoff, maxsize);
}
/*
- * ext4_seek_hole() retrieves the offset for SEEK_HOLE
+ * ext4_seek_hole() retrieves the offset for SEEK_HOLE.
*/
static loff_t ext4_seek_hole(struct file *file, loff_t offset, loff_t maxsize)
{
struct inode *inode = file->f_mapping->host;
- struct fiemap_extent_info fie;
- struct fiemap_extent ext[2];
- loff_t next;
- int i, ret = 0;
+ struct ext4_map_blocks map;
+ struct extent_status es;
+ ext4_lblk_t start, last, end;
+ loff_t holeoff, isize;
+ int blkbits;
+ int ret = 0;
mutex_lock(&inode->i_mutex);
- if (offset >= inode->i_size) {
+
+ isize = i_size_read(inode);
+ if (offset >= isize) {
mutex_unlock(&inode->i_mutex);
return -ENXIO;
}
- fie.fi_flags = 0;
- fie.fi_extents_max = 2;
- fie.fi_extents_start = (struct fiemap_extent __user *)&ext;
- while (1) {
- mm_segment_t old_fs = get_fs();
-
- fie.fi_extents_mapped = 0;
- memset(ext, 0, sizeof(*ext));
+ blkbits = inode->i_sb->s_blocksize_bits;
+ start = offset >> blkbits;
+ last = start;
+ end = isize >> blkbits;
+ holeoff = offset;
- set_fs(get_ds());
- ret = ext4_fiemap(inode, &fie, offset, maxsize - offset);
- set_fs(old_fs);
- if (ret)
- break;
+ do {
+ map.m_lblk = last;
+ map.m_len = end - last + 1;
+ ret = ext4_map_blocks(NULL, inode, &map, 0);
+ if (ret > 0 && !(map.m_flags & EXT4_MAP_UNWRITTEN)) {
+ last += ret;
+ holeoff = (loff_t)last << blkbits;
+ continue;
+ }
- /* No extents found */
- if (!fie.fi_extents_mapped)
- break;
+ /*
+ * If there is a delayed extent at this offset,
+ * skip over it.
+ */
+ ext4_es_find_delayed_extent_range(inode, last, last, &es);
+ if (es.es_len != 0 && in_range(last, es.es_lblk, es.es_len)) {
+ last = es.es_lblk + es.es_len;
+ holeoff = (loff_t)last << blkbits;
+ continue;
+ }
- for (i = 0; i < fie.fi_extents_mapped; i++) {
- next = (loff_t)(ext[i].fe_logical + ext[i].fe_length);
- /*
- * If extent is not unwritten, then it contains valid
- * data, mapped or delayed.
- */
- if (!(ext[i].fe_flags & FIEMAP_EXTENT_UNWRITTEN)) {
- if (offset < (loff_t)ext[i].fe_logical)
- goto out;
- offset = next;
+ /*
+ * If there is an unwritten extent at this offset,
+ * it is treated as data or as a hole depending on
+ * whether the page cache has data for it.
+ */
+ if (map.m_flags & EXT4_MAP_UNWRITTEN) {
+ int unwritten;
+ unwritten = ext4_find_unwritten_pgoff(inode, SEEK_HOLE,
+ &map, &holeoff);
+ if (!unwritten) {
+ last += ret;
+ holeoff = (loff_t)last << blkbits;
continue;
}
- /*
- * If there is a unwritten extent at this offset,
- * it will be as a data or a hole according to page
- * cache that has data or not.
- */
- if (ext4_find_unwritten_pgoff(inode, SEEK_HOLE,
- next, &offset))
- goto out;
-
- offset = next;
- if (ext[i].fe_flags & FIEMAP_EXTENT_LAST)
- goto out;
}
- }
- if (offset > inode->i_size)
- offset = inode->i_size;
-out:
+
+ /* find a hole */
+ break;
+ } while (last <= end);
+
mutex_unlock(&inode->i_mutex);
- if (ret)
- return ret;
- return vfs_setpos(file, offset, maxsize);
+ if (holeoff > isize)
+ holeoff = isize;
+
+ return vfs_setpos(file, holeoff, maxsize);
}
/*
if (!capable(CAP_SYS_RESOURCE))
return -EPERM;
+ /*
+ * If we are not using the primary superblock/GDT copy don't resize,
+ * because the user tools have no way of handling this. Probably a
+ * bad time to do it anyway.
+ */
+ if (EXT4_SB(sb)->s_sbh->b_blocknr !=
+ le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {
+ ext4_warning(sb, "won't resize using backup superblock at %llu",
+ (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);
+ return -EPERM;
+ }
+
/*
* We are not allowed to do online-resizing on a filesystem mounted
* with error, because it can destroy the filesystem easily.
"EXT4-fs: ext4_add_new_gdb: adding group block %lu\n",
gdb_num);
- /*
- * If we are not using the primary superblock/GDT copy don't resize,
- * because the user tools have no way of handling this. Probably a
- * bad time to do it anyways.
- */
- if (EXT4_SB(sb)->s_sbh->b_blocknr !=
- le32_to_cpu(EXT4_SB(sb)->s_es->s_first_data_block)) {
- ext4_warning(sb, "won't resize using backup superblock at %llu",
- (unsigned long long)EXT4_SB(sb)->s_sbh->b_blocknr);
- return -EPERM;
- }
-
gdb_bh = sb_bread(sb, gdblock);
if (!gdb_bh)
return -EIO;
if (EXT4_HAS_RO_COMPAT_FEATURE(sb,
EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) &&
EXT4_HAS_RO_COMPAT_FEATURE(sb, EXT4_FEATURE_RO_COMPAT_GDT_CSUM))
- ext4_warning(sb, KERN_INFO "metadata_csum and uninit_bg are "
+ ext4_warning(sb, "metadata_csum and uninit_bg are "
"redundant flags; please run fsck.");
/* Check for a known checksum algorithm */
* Exceptions: O_NONBLOCK is a two bit define on parisc; O_NDELAY
* is defined as O_NONBLOCK on some platforms and not on others.
*/
- BUILD_BUG_ON(20 - 1 /* for O_RDONLY being 0 */ != HWEIGHT32(
+ BUILD_BUG_ON(21 - 1 /* for O_RDONLY being 0 */ != HWEIGHT32(
O_RDONLY | O_WRONLY | O_RDWR |
O_CREAT | O_EXCL | O_NOCTTY |
O_TRUNC | O_APPEND | /* O_NONBLOCK | */
__O_SYNC | O_DSYNC | FASYNC |
O_DIRECT | O_LARGEFILE | O_DIRECTORY |
O_NOFOLLOW | O_NOATIME | O_CLOEXEC |
- __FMODE_EXEC | O_PATH | __O_TMPFILE
+ __FMODE_EXEC | O_PATH | __O_TMPFILE |
+ __FMODE_NONOTIFY
));
fasync_cache = kmem_cache_create("fasync_cache",
break;
}
trace_generic_delete_lease(inode, fl);
- if (fl)
+ if (fl && IS_LEASE(fl))
error = fl->fl_lmops->lm_change(before, F_UNLCK, &dispose);
spin_unlock(&inode->i_lock);
locks_dispose_list(&dispose);
status = nfs4_setlease(dp);
goto out;
}
- atomic_inc(&fp->fi_delegees);
if (fp->fi_had_conflict) {
status = -EAGAIN;
goto out_unlock;
}
+ atomic_inc(&fp->fi_delegees);
hash_delegation_locked(dp, fp);
status = 0;
out_unlock:
struct fsnotify_event *kevent;
char __user *start;
int ret;
- DEFINE_WAIT(wait);
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
start = buf;
group = file->private_data;
pr_debug("%s: group=%p\n", __func__, group);
+ add_wait_queue(&group->notification_waitq, &wait);
while (1) {
- prepare_to_wait(&group->notification_waitq, &wait, TASK_INTERRUPTIBLE);
-
mutex_lock(&group->notification_mutex);
kevent = get_one_event(group, count);
mutex_unlock(&group->notification_mutex);
if (start != buf)
break;
- schedule();
+
+ wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
continue;
}
buf += ret;
count -= ret;
}
+ remove_wait_queue(&group->notification_waitq, &wait);
- finish_wait(&group->notification_waitq, &wait);
if (start != buf && ret != -EFAULT)
ret = buf - start;
return ret;
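
The conversion above follows the generic wait_woken() pattern. As a minimal sketch of that pattern in isolation, with a hypothetical wait queue and work-available condition:

static int wait_for_work(wait_queue_head_t *wq, bool (*have_work)(void))
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	int err = 0;

	add_wait_queue(wq, &wait);
	while (!have_work()) {
		if (signal_pending(current)) {
			err = -ERESTARTSYS;
			break;
		}
		/* Sleeps unless a wakeup already marked @wait as woken,
		 * closing the race prepare_to_wait() used to cover. */
		wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
	}
	remove_wait_queue(wq, &wait);
	return err;
}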
dlm_lockres_drop_inflight_ref(dlm, res);
spin_unlock(&res->spinlock);
- if (ret < 0) {
+ if (ret < 0)
mlog_errno(ret);
- if (newlock)
- dlm_lock_put(newlock);
- }
return ret;
}
struct inode *inode,
const char *symname);
+static int ocfs2_double_lock(struct ocfs2_super *osb,
+ struct buffer_head **bh1,
+ struct inode *inode1,
+ struct buffer_head **bh2,
+ struct inode *inode2,
+ int rename);
+
+static void ocfs2_double_unlock(struct inode *inode1, struct inode *inode2);
/* An orphan dir name is an 8 byte value, printed as a hex string */
#define OCFS2_ORPHAN_NAMELEN ((int)(2 * sizeof(u64)))
{
handle_t *handle;
struct inode *inode = old_dentry->d_inode;
+ struct inode *old_dir = old_dentry->d_parent->d_inode;
int err;
struct buffer_head *fe_bh = NULL;
+ struct buffer_head *old_dir_bh = NULL;
struct buffer_head *parent_fe_bh = NULL;
struct ocfs2_dinode *fe = NULL;
struct ocfs2_super *osb = OCFS2_SB(dir->i_sb);
dquot_initialize(dir);
- err = ocfs2_inode_lock_nested(dir, &parent_fe_bh, 1, OI_LS_PARENT);
+ err = ocfs2_double_lock(osb, &old_dir_bh, old_dir,
+ &parent_fe_bh, dir, 0);
if (err < 0) {
if (err != -ENOENT)
mlog_errno(err);
return err;
}
+ /* Make sure both dirs have bhs;
+ * get an extra ref on old_dir_bh if old == new. */
+ if (!parent_fe_bh) {
+ if (old_dir_bh) {
+ parent_fe_bh = old_dir_bh;
+ get_bh(parent_fe_bh);
+ } else {
+ mlog(ML_ERROR, "%s: no old_dir_bh!\n", osb->uuid_str);
+ err = -EIO;
+ goto out;
+ }
+ }
+
if (!dir->i_nlink) {
err = -ENOENT;
goto out;
}
- err = ocfs2_lookup_ino_from_name(dir, old_dentry->d_name.name,
+ err = ocfs2_lookup_ino_from_name(old_dir, old_dentry->d_name.name,
old_dentry->d_name.len, &old_de_ino);
if (err) {
err = -ENOENT;
ocfs2_inode_unlock(inode, 1);
out:
- ocfs2_inode_unlock(dir, 1);
+ ocfs2_double_unlock(old_dir, dir);
brelse(fe_bh);
brelse(parent_fe_bh);
+ brelse(old_dir_bh);
ocfs2_free_dir_lookup_result(&lookup);
}
/*
- * The only place this should be used is rename!
+ * The only place this should be used is rename and link!
* If they have the same id, then the 1st one is the only one locked.
*/
static int ocfs2_double_lock(struct ocfs2_super *osb,
struct buffer_head **bh1,
struct inode *inode1,
struct buffer_head **bh2,
- struct inode *inode2)
+ struct inode *inode2,
+ int rename)
{
int status;
int inode1_is_ancestor, inode2_is_ancestor;
}
/* lock id2 */
status = ocfs2_inode_lock_nested(inode2, bh2, 1,
- OI_LS_RENAME1);
+ rename == 1 ? OI_LS_RENAME1 : OI_LS_PARENT);
if (status < 0) {
if (status != -ENOENT)
mlog_errno(status);
}
/* lock id1 */
- status = ocfs2_inode_lock_nested(inode1, bh1, 1, OI_LS_RENAME2);
+ status = ocfs2_inode_lock_nested(inode1, bh1, 1,
+ rename == 1 ? OI_LS_RENAME2 : OI_LS_PARENT);
if (status < 0) {
/*
* An error return must mean that no cluster locks
/* if old and new are the same, this'll just do one lock. */
status = ocfs2_double_lock(osb, &old_dir_bh, old_dir,
- &new_dir_bh, new_dir);
+ &new_dir_bh, new_dir, 1);
if (status < 0) {
mlog_errno(status);
goto bail;
struct acpi_processor {
acpi_handle handle;
u32 acpi_id;
- u32 apic_id;
- u32 id;
+ u32 phys_id; /* CPU hardware ID such as APIC ID for x86 */
+ u32 id; /* CPU logical ID allocated by OS */
u32 pblk;
int performance_platform_limit;
int throttling_platform_limit;
#endif /* CONFIG_CPU_FREQ */
/* in processor_core.c */
-int acpi_get_apicid(acpi_handle, int type, u32 acpi_id);
-int acpi_map_cpuid(int apic_id, u32 acpi_id);
+int acpi_get_phys_id(acpi_handle, int type, u32 acpi_id);
+int acpi_map_cpuid(int phys_id, u32 acpi_id);
int acpi_get_cpuid(acpi_handle, int type, u32 acpi_id);
/* in processor_pdc.c */
static inline void __tlb_reset_range(struct mmu_gather *tlb)
{
- tlb->start = TASK_SIZE;
- tlb->end = 0;
+ if (tlb->fullmm) {
+ tlb->start = tlb->end = ~0;
+ } else {
+ tlb->start = TASK_SIZE;
+ tlb->end = 0;
+ }
}
/*
#ifdef CONFIG_ACPI_HOTPLUG_CPU
/* Arch dependent functions for cpu hotplug support */
-int acpi_map_lsapic(acpi_handle handle, int physid, int *pcpu);
-int acpi_unmap_lsapic(int cpu);
+int acpi_map_cpu(acpi_handle handle, int physid, int *pcpu);
+int acpi_unmap_cpu(int cpu);
#endif /* CONFIG_ACPI_HOTPLUG_CPU */
int acpi_register_ioapic(acpi_handle handle, u64 phys_addr, u32 gsi_base);
unsigned long flags; /* BLK_MQ_F_* flags */
struct request_queue *queue;
- unsigned int queue_num;
struct blk_flush_queue *fq;
void *driver_data;
unsigned long dispatched[BLK_MQ_MAX_DISPATCH_ORDER];
unsigned int numa_node;
- unsigned int cmd_size; /* per-request extra data */
+ unsigned int queue_num;
atomic_t nr_active;
struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *, const int ctx_index);
struct blk_mq_hw_ctx *blk_mq_alloc_single_hw_queue(struct blk_mq_tag_set *, unsigned int, int);
+int blk_mq_request_started(struct request *rq);
void blk_mq_start_request(struct request *rq);
void blk_mq_end_request(struct request *rq, int error);
void __blk_mq_end_request(struct request *rq, int error);
void blk_mq_requeue_request(struct request *rq);
void blk_mq_add_to_requeue_list(struct request *rq, bool at_head);
+void blk_mq_cancel_requeue_work(struct request_queue *q);
void blk_mq_kick_requeue_list(struct request_queue *q);
+void blk_mq_abort_requeue_list(struct request_queue *q);
void blk_mq_complete_request(struct request *rq);
void blk_mq_stop_hw_queue(struct blk_mq_hw_ctx *hctx);
void blk_mq_delay_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs);
void blk_mq_tag_busy_iter(struct blk_mq_hw_ctx *hctx, busy_iter_fn *fn,
void *priv);
+void blk_mq_unfreeze_queue(struct request_queue *q);
+void blk_mq_freeze_queue_start(struct request_queue *q);
/*
* Driver command data is immediately after the request. So subtract request
__REQ_PM, /* runtime pm request */
__REQ_HASHED, /* on IO scheduler merge hash */
__REQ_MQ_INFLIGHT, /* track inflight for MQ */
+ __REQ_NO_TIMEOUT, /* requests may never expire */
__REQ_NR_BITS, /* stops here */
};
#define REQ_PM (1ULL << __REQ_PM)
#define REQ_HASHED (1ULL << __REQ_HASHED)
#define REQ_MQ_INFLIGHT (1ULL << __REQ_MQ_INFLIGHT)
+#define REQ_NO_TIMEOUT (1ULL << __REQ_NO_TIMEOUT)
#endif /* __LINUX_BLK_TYPES_H */
struct ceph_osd_data osd_data;
} extent;
struct {
- __le32 name_len;
- __le32 value_len;
+ u32 name_len;
+ u32 value_len;
__u8 cmp_op; /* CEPH_OSD_CMPXATTR_OP_* */
__u8 cmp_mode; /* CEPH_OSD_CMPXATTR_MODE_* */
struct ceph_osd_data osd_data;
}
}
-static __always_inline void __assign_once_size(volatile void *p, void *res, int size)
+static __always_inline void __write_once_size(volatile void *p, void *res, int size)
{
switch (size) {
case 1: *(volatile __u8 *)p = *(__u8 *)res; break;
/*
* Prevent the compiler from merging or refetching reads or writes. The
* compiler is also forbidden from reordering successive instances of
- * READ_ONCE, ASSIGN_ONCE and ACCESS_ONCE (see below), but only when the
+ * READ_ONCE, WRITE_ONCE and ACCESS_ONCE (see below), but only when the
* compiler is aware of some particular ordering. One way to make the
* compiler aware of ordering is to put the two invocations of READ_ONCE,
- * ASSIGN_ONCE or ACCESS_ONCE() in different C statements.
+ * WRITE_ONCE or ACCESS_ONCE() in different C statements.
*
* In contrast to ACCESS_ONCE these two macros will also work on aggregate
* data types like structs or unions. If the size of the accessed data
* type exceeds the word size of the machine (e.g., 32 bits or 64 bits)
- * READ_ONCE() and ASSIGN_ONCE() will fall back to memcpy and print a
+ * READ_ONCE() and WRITE_ONCE() will fall back to memcpy and print a
* compile-time warning.
*
* Their two major use cases are: (1) Mediating communication between
#define READ_ONCE(x) \
({ typeof(x) __val; __read_once_size(&x, &__val, sizeof(__val)); __val; })
-#define ASSIGN_ONCE(val, x) \
- ({ typeof(x) __val; __val = val; __assign_once_size(&x, &__val, sizeof(__val)); __val; })
+#define WRITE_ONCE(x, val) \
+ ({ typeof(x) __val; __val = val; __write_once_size(&x, &__val, sizeof(__val)); __val; })
#endif /* __KERNEL__ */
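
As a minimal sketch of how the renamed pair is meant to be used for a lockless flag (the variable and both functions below are illustrative only):

static int data_ready;	/* shared between producer and consumer */

static void producer(void)
{
	/* emit exactly one store; the compiler may not tear or merge it */
	WRITE_ONCE(data_ready, 1);
}

static int consumer(void)
{
	/* force a fresh load on every call instead of a cached value */
	return READ_ONCE(data_ready);
}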
#define FMODE_CAN_WRITE ((__force fmode_t)0x40000)
/* File was opened by fanotify and shouldn't generate fanotify events */
-#define FMODE_NONOTIFY ((__force fmode_t)0x1000000)
+#define FMODE_NONOTIFY ((__force fmode_t)0x4000000)
/*
* Flag for rw_copy_check_uvector and compat_rw_copy_check_uvector
typedef int br_should_route_hook_t(struct sk_buff *skb);
extern br_should_route_hook_t __rcu *br_should_route_hook;
-#if IS_ENABLED(CONFIG_BRIDGE)
-int br_fdb_external_learn_add(struct net_device *dev,
- const unsigned char *addr, u16 vid);
-int br_fdb_external_learn_del(struct net_device *dev,
- const unsigned char *addr, u16 vid);
-#else
-static inline int br_fdb_external_learn_add(struct net_device *dev,
- const unsigned char *addr, u16 vid)
-{
- return 0;
-}
-static inline int br_fdb_external_learn_del(struct net_device *dev,
- const unsigned char *addr, u16 vid)
-{
- return 0;
-}
-#endif
-
#if IS_ENABLED(CONFIG_BRIDGE) && IS_ENABLED(CONFIG_BRIDGE_IGMP_SNOOPING)
int br_multicast_list_adjacent(struct net_device *dev,
struct list_head *br_ip_list);
return dev->priv_flags & IFF_802_1Q_VLAN;
}
-#define vlan_tx_tag_present(__skb) ((__skb)->vlan_tci & VLAN_TAG_PRESENT)
-#define vlan_tx_tag_get(__skb) ((__skb)->vlan_tci & ~VLAN_TAG_PRESENT)
-#define vlan_tx_tag_get_id(__skb) ((__skb)->vlan_tci & VLAN_VID_MASK)
+#define skb_vlan_tag_present(__skb) ((__skb)->vlan_tci & VLAN_TAG_PRESENT)
+#define skb_vlan_tag_get(__skb) ((__skb)->vlan_tci & ~VLAN_TAG_PRESENT)
+#define skb_vlan_tag_get_id(__skb) ((__skb)->vlan_tci & VLAN_VID_MASK)
/**
* struct vlan_pcpu_stats - VLAN percpu rx/tx stats
static inline struct sk_buff *__vlan_hwaccel_push_inside(struct sk_buff *skb)
{
skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
- vlan_tx_tag_get(skb));
+ skb_vlan_tag_get(skb));
if (likely(skb))
skb->vlan_tci = 0;
return skb;
*/
static inline struct sk_buff *vlan_hwaccel_push_inside(struct sk_buff *skb)
{
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
skb = __vlan_hwaccel_push_inside(skb);
return skb;
}
static inline int __vlan_hwaccel_get_tag(const struct sk_buff *skb,
u16 *vlan_tci)
{
- if (vlan_tx_tag_present(skb)) {
- *vlan_tci = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ *vlan_tci = skb_vlan_tag_get(skb);
return 0;
} else {
*vlan_tci = 0;
{
__be16 protocol = 0;
- if (vlan_tx_tag_present(skb) ||
+ if (skb_vlan_tag_present(skb) ||
skb->protocol != cpu_to_be16(ETH_P_8021Q))
protocol = skb->protocol;
else {
__s32 force_tllao;
__s32 ndisc_notify;
__s32 suppress_frag_ndisc;
+ __s32 accept_ra_mtu;
void *sysctl;
};
* Copyright (C) 2009 Jason Wessel <jason.wessel@windriver.com>
*/
+/* Shifted versions of the command enable bits are used if the command
+ * has no arguments (see kdb_check_flags). This allows commands, such as
+ * go, to have different permissions depending upon whether they are
+ * called with an argument.
+ */
+#define KDB_ENABLE_NO_ARGS_SHIFT 10
+
typedef enum {
- KDB_REPEAT_NONE = 0, /* Do not repeat this command */
- KDB_REPEAT_NO_ARGS, /* Repeat the command without arguments */
- KDB_REPEAT_WITH_ARGS, /* Repeat the command including its arguments */
-} kdb_repeat_t;
+ KDB_ENABLE_ALL = (1 << 0), /* Enable everything */
+ KDB_ENABLE_MEM_READ = (1 << 1),
+ KDB_ENABLE_MEM_WRITE = (1 << 2),
+ KDB_ENABLE_REG_READ = (1 << 3),
+ KDB_ENABLE_REG_WRITE = (1 << 4),
+ KDB_ENABLE_INSPECT = (1 << 5),
+ KDB_ENABLE_FLOW_CTRL = (1 << 6),
+ KDB_ENABLE_SIGNAL = (1 << 7),
+ KDB_ENABLE_REBOOT = (1 << 8),
+ /* User-exposed values stop here; all remaining flags are
+ * used exclusively to describe a command's behaviour.
+ */
+
+ KDB_ENABLE_ALWAYS_SAFE = (1 << 9),
+ KDB_ENABLE_MASK = (1 << KDB_ENABLE_NO_ARGS_SHIFT) - 1,
+
+ KDB_ENABLE_ALL_NO_ARGS = KDB_ENABLE_ALL << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_MEM_READ_NO_ARGS = KDB_ENABLE_MEM_READ
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_MEM_WRITE_NO_ARGS = KDB_ENABLE_MEM_WRITE
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_REG_READ_NO_ARGS = KDB_ENABLE_REG_READ
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_REG_WRITE_NO_ARGS = KDB_ENABLE_REG_WRITE
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_INSPECT_NO_ARGS = KDB_ENABLE_INSPECT
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_FLOW_CTRL_NO_ARGS = KDB_ENABLE_FLOW_CTRL
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_SIGNAL_NO_ARGS = KDB_ENABLE_SIGNAL
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_REBOOT_NO_ARGS = KDB_ENABLE_REBOOT
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_ALWAYS_SAFE_NO_ARGS = KDB_ENABLE_ALWAYS_SAFE
+ << KDB_ENABLE_NO_ARGS_SHIFT,
+ KDB_ENABLE_MASK_NO_ARGS = KDB_ENABLE_MASK << KDB_ENABLE_NO_ARGS_SHIFT,
+
+ KDB_REPEAT_NO_ARGS = 0x40000000, /* Repeat the command w/o arguments */
+ KDB_REPEAT_WITH_ARGS = 0x80000000, /* Repeat the command with args */
+} kdb_cmdflags_t;
typedef int (*kdb_func_t)(int, const char **);
#define KDB_BADLENGTH (-19)
#define KDB_NOBP (-20)
#define KDB_BADADDR (-21)
+#define KDB_NOPERM (-22)
/*
* kdb_diemsg
/* Dynamic kdb shell command registration */
extern int kdb_register(char *, kdb_func_t, char *, char *, short);
-extern int kdb_register_repeat(char *, kdb_func_t, char *, char *,
- short, kdb_repeat_t);
+extern int kdb_register_flags(char *, kdb_func_t, char *, char *,
+ short, kdb_cmdflags_t);
extern int kdb_unregister(char *);
#else /* ! CONFIG_KGDB_KDB */
static inline __printf(1, 2) int kdb_printf(const char *fmt, ...) { return 0; }
static inline void kdb_init(int level) {}
static inline int kdb_register(char *cmd, kdb_func_t func, char *usage,
char *help, short minlen) { return 0; }
-static inline int kdb_register_repeat(char *cmd, kdb_func_t func, char *usage,
- char *help, short minlen,
- kdb_repeat_t repeat) { return 0; }
+static inline int kdb_register_flags(char *cmd, kdb_func_t func, char *usage,
+ char *help, short minlen,
+ kdb_cmdflags_t flags) { return 0; }
static inline int kdb_unregister(char *cmd) { return 0; }
#endif /* CONFIG_KGDB_KDB */
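
A hedged sketch of registering a command against the new flags API; the command name and handler are hypothetical, and only kdb_register_flags() and the flag values above are assumed:

static int kdb_mycmd(int argc, const char **argv)
{
	kdb_printf("mycmd: %d argument(s)\n", argc);
	return 0;
}

static void mycmd_init(void)
{
	/* permitted whenever memory reads are enabled; repeats w/o args */
	kdb_register_flags("mycmd", kdb_mycmd, "[addr]",
			   "example inspection command", 0,
			   KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
}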
enum {
#ifndef _LINUX_LIST_NULLS_H
#define _LINUX_LIST_NULLS_H
+#include <linux/poison.h>
+#include <linux/const.h>
+
/*
* Special version of lists, where end of list is not a NULL pointer,
* but a 'nulls' marker, which can have many different values.
struct hlist_nulls_node {
struct hlist_nulls_node *next, **pprev;
};
+#define NULLS_MARKER(value) (1UL | (((long)value) << 1))
#define INIT_HLIST_NULLS_HEAD(ptr, nulls) \
- ((ptr)->first = (struct hlist_nulls_node *) (1UL | (((long)nulls) << 1)))
+ ((ptr)->first = (struct hlist_nulls_node *) NULLS_MARKER(nulls))
#define hlist_nulls_entry(ptr, type, member) container_of(ptr,type,member)
/**
STMPE_IDX_GPEDR_MSB,
STMPE_IDX_GPRER_LSB,
STMPE_IDX_GPFER_LSB,
+ STMPE_IDX_GPPUR_LSB,
+ STMPE_IDX_GPPDR_LSB,
STMPE_IDX_GPAFR_U_MSB,
STMPE_IDX_IEGPIOR_LSB,
STMPE_IDX_ISGPIOR_LSB,
extern int stmpe_enable(struct stmpe *stmpe, unsigned int blocks);
extern int stmpe_disable(struct stmpe *stmpe, unsigned int blocks);
-struct matrix_keymap_data;
-
-/**
- * struct stmpe_keypad_platform_data - STMPE keypad platform data
- * @keymap_data: key map table and size
- * @debounce_ms: debounce interval, in ms. Maximum is
- * %STMPE_KEYPAD_MAX_DEBOUNCE.
- * @scan_count: number of key scanning cycles to confirm key data.
- * Maximum is %STMPE_KEYPAD_MAX_SCAN_COUNT.
- * @no_autorepeat: disable key autorepeat
- */
-struct stmpe_keypad_platform_data {
- const struct matrix_keymap_data *keymap_data;
- unsigned int debounce_ms;
- unsigned int scan_count;
- bool no_autorepeat;
-};
-
#define STMPE_GPIO_NOREQ_811_TOUCH (0xf0)
/**
* @irq_gpio: gpio number over which irq will be requested (significant only if
* irq_over_gpio is true)
* @gpio: GPIO-specific platform data
- * @keypad: keypad-specific platform data
* @ts: touchscreen-specific platform data
*/
struct stmpe_platform_data {
int autosleep_timeout;
struct stmpe_gpio_platform_data *gpio;
- struct stmpe_keypad_platform_data *keypad;
struct stmpe_ts_platform_data *ts;
};
int mlx4_set_vf_link_state(struct mlx4_dev *dev, int port, int vf, int link_state);
int mlx4_config_dev_retrieval(struct mlx4_dev *dev,
struct mlx4_config_dev_params *params);
+void mlx4_cmd_wake_completions(struct mlx4_dev *dev);
+void mlx4_report_internal_err_comm_event(struct mlx4_dev *dev);
/*
* mlx4_get_slave_default_vlan -
* return true if VST ( default vlan)
u16 *vlan, u8 *qos);
#define MLX4_COMM_GET_IF_REV(cmd_chan_ver) (u8)((cmd_chan_ver) >> 8)
+#define COMM_CHAN_EVENT_INTERNAL_ERR (1 << 17)
#endif /* MLX4_CMD_H */
MLX4_QUERY_FUNC_FLAGS_A0_RES_QP = 1LL << 1
};
+enum {
+ MLX4_VF_CAP_FLAG_RESET = 1 << 0
+};
+
/* bit enums for an 8-bit flags field indicating special use
* QPs which require special handling in qp_reserve_range.
* Currently, this only includes QPs used by the ETH interface,
MLX4_EQ_PORT_INFO_MSTR_SM_SL_CHANGE_MASK = 1 << 4,
};
+enum {
+ MLX4_DEVICE_STATE_UP = 1 << 0,
+ MLX4_DEVICE_STATE_INTERNAL_ERROR = 1 << 1,
+};
+
+enum {
+ MLX4_INTERFACE_STATE_UP = 1 << 0,
+ MLX4_INTERFACE_STATE_DELETION = 1 << 1,
+};
+
#define MSTR_SM_CHANGE_MASK (MLX4_EQ_PORT_INFO_MSTR_SM_SL_CHANGE_MASK | \
MLX4_EQ_PORT_INFO_MSTR_SM_LID_CHANGE_MASK)
u8 alloc_res_qp_mask;
u32 dmfs_high_rate_qpn_base;
u32 dmfs_high_rate_qpn_range;
+ u32 vf_caps;
};
struct mlx4_buf_list {
u8 n_ports;
};
-struct mlx4_dev {
+struct mlx4_dev_persistent {
struct pci_dev *pdev;
+ struct mlx4_dev *dev;
+ int nvfs[MLX4_MAX_PORTS + 1];
+ int num_vfs;
+ enum mlx4_port_type curr_port_type[MLX4_MAX_PORTS + 1];
+ enum mlx4_port_type curr_port_poss_type[MLX4_MAX_PORTS + 1];
+ struct work_struct catas_work;
+ struct workqueue_struct *catas_wq;
+ struct mutex device_state_mutex; /* protect HW state */
+ u8 state;
+ struct mutex interface_state_mutex; /* protect SW state */
+ u8 interface_state;
+};
+
+struct mlx4_dev {
+ struct mlx4_dev_persistent *persist;
unsigned long flags;
unsigned long num_slaves;
struct mlx4_caps caps;
struct radix_tree_root qp_table_tree;
u8 rev_id;
char board_id[MLX4_BOARD_ID_LEN];
- int num_vfs;
int numa_node;
int oper_log_mgm_entry_size;
u64 regid_promisc_array[MLX4_MAX_PORTS + 1];
u64 regid_allmulti_array[MLX4_MAX_PORTS + 1];
struct mlx4_vf_dev *dev_vfs;
- int nvfs[MLX4_MAX_PORTS + 1];
};
struct mlx4_eqe {
#if VM_GROWSUP
extern int expand_upwards(struct vm_area_struct *vma, unsigned long address);
#else
- #define expand_upwards(vma, address) do { } while (0)
+ #define expand_upwards(vma, address) (0)
#endif
/* Look up the first VMA which satisfies addr < vm_end, NULL if none. */
#define SDHCI_SDR104_NEEDS_TUNING (1<<10) /* SDR104/HS200 needs tuning */
#define SDHCI_USING_RETUNING_TIMER (1<<11) /* Host is using a retuning timer for the card */
#define SDHCI_USE_64_BIT_DMA (1<<12) /* Use 64-bit DMA */
+#define SDHCI_HS400_TUNING (1<<13) /* Tuning for HS400 */
unsigned int version; /* SDHCI spec. version */
* 3. Update dev->stats asynchronously and atomically, and define
* neither operation.
*
- * int (*ndo_vlan_rx_add_vid)(struct net_device *dev, __be16 proto, u16t vid);
+ * int (*ndo_vlan_rx_add_vid)(struct net_device *dev, __be16 proto, u16 vid);
* If device support VLAN filtering this function is called when a
* VLAN id is registered.
*
- * int (*ndo_vlan_rx_kill_vid)(struct net_device *dev, unsigned short vid);
+ * int (*ndo_vlan_rx_kill_vid)(struct net_device *dev, __be16 proto, u16 vid);
* If device support VLAN filtering this function is called when a
* VLAN id is unregistered.
*
struct sk_buff *(*gso_segment)(struct sk_buff *skb,
netdev_features_t features);
struct sk_buff **(*gro_receive)(struct sk_buff **head,
- struct sk_buff *skb);
+ struct sk_buff *skb);
int (*gro_complete)(struct sk_buff *skb, int nhoff);
};
struct list_head list;
};
+struct udp_offload;
+
+struct udp_offload_callbacks {
+ struct sk_buff **(*gro_receive)(struct sk_buff **head,
+ struct sk_buff *skb,
+ struct udp_offload *uoff);
+ int (*gro_complete)(struct sk_buff *skb,
+ int nhoff,
+ struct udp_offload *uoff);
+};
+
struct udp_offload {
__be16 port;
u8 ipproto;
- struct offload_callbacks callbacks;
+ struct udp_offload_callbacks callbacks;
};
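
A sketch of how a tunnel driver might fill the new per-UDP-port callbacks; the handlers and port are illustrative, and only the two structs above are assumed:

static struct sk_buff **my_gro_receive(struct sk_buff **head,
				       struct sk_buff *skb,
				       struct udp_offload *uoff)
{
	return head;	/* no-op: nothing aggregated */
}

static int my_gro_complete(struct sk_buff *skb, int nhoff,
			   struct udp_offload *uoff)
{
	return 0;
}

static struct udp_offload my_udp_offload = {
	.port	 = cpu_to_be16(4789),	/* e.g. the IANA VXLAN port */
	.ipproto = IPPROTO_UDP,
	.callbacks = {
		.gro_receive	= my_gro_receive,
		.gro_complete	= my_gro_complete,
	},
};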
/* often modified stats are per cpu, other are shared (netdev->stats) */
list_for_each_entry_continue_rcu(d, &(net)->dev_base_head, dev_list)
#define for_each_netdev_in_bond_rcu(bond, slave) \
for_each_netdev_rcu(&init_net, slave) \
- if (netdev_master_upper_dev_get_rcu(slave) == bond)
+ if (netdev_master_upper_dev_get_rcu(slave) == (bond))
#define net_device_entry(lh) list_entry(lh, struct net_device, dev_list)
static inline struct net_device *next_net_device(struct net_device *dev)
struct perf_branch_entry entries[0];
};
-struct perf_regs {
- __u64 abi;
- struct pt_regs *regs;
-};
-
struct task_struct;
/*
u32 reserved;
} cpu_entry;
struct perf_callchain_entry *callchain;
+
+ /*
+ * regs_user may point to task_pt_regs or to regs_user_copy, depending
+ * on arch details.
+ */
struct perf_regs regs_user;
+ struct pt_regs regs_user_copy;
+
struct perf_regs regs_intr;
u64 stack_user_size;
} ____cacheline_aligned;
#ifndef _LINUX_PERF_REGS_H
#define _LINUX_PERF_REGS_H
+struct perf_regs {
+ __u64 abi;
+ struct pt_regs *regs;
+};
+
#ifdef CONFIG_HAVE_PERF_REGS
#include <asm/perf_regs.h>
u64 perf_reg_value(struct pt_regs *regs, int idx);
int perf_reg_validate(u64 mask);
u64 perf_reg_abi(struct task_struct *task);
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy);
#else
static inline u64 perf_reg_value(struct pt_regs *regs, int idx)
{
{
return PERF_SAMPLE_REGS_ABI_NONE;
}
+
+static inline void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
#endif /* CONFIG_HAVE_PERF_REGS */
#endif /* _LINUX_PERF_REGS_H */
void (*write_mmd_indirect)(struct phy_device *dev, int ptrad,
int devnum, int regnum, u32 val);
+ /* Get the size and type of the eeprom contained within a plug-in
+ * module */
+ int (*module_info)(struct phy_device *dev,
+ struct ethtool_modinfo *modinfo);
+
+ /* Get the eeprom information from the plug-in module */
+ int (*module_eeprom)(struct phy_device *dev,
+ struct ethtool_eeprom *ee, u8 *data);
+
struct device_driver driver;
};
#define to_phy_driver(d) container_of(d, struct phy_driver, driver)
#ifndef _LINUX_RHASHTABLE_H
#define _LINUX_RHASHTABLE_H
-#include <linux/rculist.h>
+#include <linux/list_nulls.h>
+#include <linux/workqueue.h>
+#include <linux/mutex.h>
+
+/*
+ * The end of the chain is marked with a special nulls marker which has
+ * the following format:
+ *
+ * +-------+-----------------------------------------------------+-+
+ * | Base | Hash |1|
+ * +-------+-----------------------------------------------------+-+
+ *
+ * Base (4 bits) : Reserved to distinguish between multiple tables.
+ * Specified via &struct rhashtable_params.nulls_base.
+ * Hash (27 bits): Full hash (unmasked) of first element added to bucket
+ * 1 (1 bit) : Nulls marker (always set)
+ *
+ * The remaining bits of the next pointer remain unused for now.
+ */
+#define RHT_BASE_BITS 4
+#define RHT_HASH_BITS 27
+#define RHT_BASE_SHIFT RHT_HASH_BITS
struct rhash_head {
struct rhash_head __rcu *next;
};
-#define INIT_HASH_HEAD(ptr) ((ptr)->next = NULL)
-
+/**
+ * struct bucket_table - Table of hash buckets
+ * @size: Number of hash buckets
+ * @locks_mask: Mask to apply before accessing locks[]
+ * @locks: Array of spinlocks protecting individual buckets
+ * @buckets: size * hash buckets
+ */
struct bucket_table {
size_t size;
+ unsigned int locks_mask;
+ spinlock_t *locks;
struct rhash_head __rcu *buckets[];
};
* @hash_rnd: Seed to use while hashing
* @max_shift: Maximum number of shifts while expanding
* @min_shift: Minimum number of shifts while shrinking
+ * @nulls_base: Base value to generate nulls marker
+ * @locks_mul: Number of bucket locks to allocate per cpu (default: 128)
* @hashfn: Function to hash key
* @obj_hashfn: Function to hash object
* @grow_decision: If defined, may return true if table should expand
* @shrink_decision: If defined, may return true if table should shrink
- * @mutex_is_held: Must return true if protecting mutex is held
+ *
+ * Note: when implementing the grow and shrink decision functions, min/max
+ * shift must be enforced; otherwise the resizing watermarks they set may
+ * be useless.
*/
struct rhashtable_params {
size_t nelem_hint;
u32 hash_rnd;
size_t max_shift;
size_t min_shift;
+ u32 nulls_base;
+ size_t locks_mul;
rht_hashfn_t hashfn;
rht_obj_hashfn_t obj_hashfn;
bool (*grow_decision)(const struct rhashtable *ht,
size_t new_size);
bool (*shrink_decision)(const struct rhashtable *ht,
size_t new_size);
-#ifdef CONFIG_PROVE_LOCKING
- int (*mutex_is_held)(void *parent);
- void *parent;
-#endif
};
/**
* struct rhashtable - Hash table handle
* @tbl: Bucket table
+ * @future_tbl: Table under construction during expansion/shrinking
* @nelems: Number of elements in table
* @shift: Current size (1 << shift)
* @p: Configuration parameters
+ * @run_work: Deferred worker to expand/shrink asynchronously
+ * @mutex: Mutex to protect current/future table swapping
+ * @being_destroyed: True if table is set up for destruction
*/
struct rhashtable {
struct bucket_table __rcu *tbl;
- size_t nelems;
- size_t shift;
+ struct bucket_table __rcu *future_tbl;
+ atomic_t nelems;
+ atomic_t shift;
struct rhashtable_params p;
+ struct work_struct run_work;
+ struct mutex mutex;
+ bool being_destroyed;
};
+static inline unsigned long rht_marker(const struct rhashtable *ht, u32 hash)
+{
+ return NULLS_MARKER(ht->p.nulls_base + hash);
+}
+
+#define INIT_RHT_NULLS_HEAD(ptr, ht, hash) \
+ ((ptr) = (typeof(ptr)) rht_marker(ht, hash))
+
+static inline bool rht_is_a_nulls(const struct rhash_head *ptr)
+{
+ return ((unsigned long) ptr & 1);
+}
+
+static inline unsigned long rht_get_nulls_value(const struct rhash_head *ptr)
+{
+ return ((unsigned long) ptr) >> 1;
+}
+
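A minimal sketch of how the marker terminates a chain walk, assuming only the helpers defined above:

static void walk_bucket(struct rhash_head *head)
{
	struct rhash_head *pos;

	for (pos = head; !rht_is_a_nulls(pos);
	     pos = rcu_dereference_raw(pos->next))
		;	/* every pos reached here is a live element */

	/* pos is now the marker; rht_get_nulls_value(pos) yields the
	 * base + hash value the bucket head was initialized with. */
}
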
#ifdef CONFIG_PROVE_LOCKING
-int lockdep_rht_mutex_is_held(const struct rhashtable *ht);
+int lockdep_rht_mutex_is_held(struct rhashtable *ht);
+int lockdep_rht_bucket_is_held(const struct bucket_table *tbl, u32 hash);
#else
-static inline int lockdep_rht_mutex_is_held(const struct rhashtable *ht)
+static inline int lockdep_rht_mutex_is_held(struct rhashtable *ht)
+{
+ return 1;
+}
+
+static inline int lockdep_rht_bucket_is_held(const struct bucket_table *tbl,
+ u32 hash)
{
return 1;
}
int rhashtable_init(struct rhashtable *ht, struct rhashtable_params *params);
-u32 rhashtable_hashfn(const struct rhashtable *ht, const void *key, u32 len);
-u32 rhashtable_obj_hashfn(const struct rhashtable *ht, void *ptr);
-
void rhashtable_insert(struct rhashtable *ht, struct rhash_head *node);
bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *node);
-void rhashtable_remove_pprev(struct rhashtable *ht, struct rhash_head *obj,
- struct rhash_head __rcu **pprev);
bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size);
bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size);
int rhashtable_expand(struct rhashtable *ht);
int rhashtable_shrink(struct rhashtable *ht);
-void *rhashtable_lookup(const struct rhashtable *ht, const void *key);
-void *rhashtable_lookup_compare(const struct rhashtable *ht, u32 hash,
+void *rhashtable_lookup(struct rhashtable *ht, const void *key);
+void *rhashtable_lookup_compare(struct rhashtable *ht, const void *key,
bool (*compare)(void *, void *), void *arg);
-void rhashtable_destroy(const struct rhashtable *ht);
+bool rhashtable_lookup_insert(struct rhashtable *ht, struct rhash_head *obj);
+bool rhashtable_lookup_compare_insert(struct rhashtable *ht,
+ struct rhash_head *obj,
+ bool (*compare)(void *, void *),
+ void *arg);
+
+void rhashtable_destroy(struct rhashtable *ht);
#define rht_dereference(p, ht) \
rcu_dereference_protected(p, lockdep_rht_mutex_is_held(ht))
#define rht_dereference_rcu(p, ht) \
rcu_dereference_check(p, lockdep_rht_mutex_is_held(ht))
-#define rht_entry(ptr, type, member) container_of(ptr, type, member)
-#define rht_entry_safe(ptr, type, member) \
-({ \
- typeof(ptr) __ptr = (ptr); \
- __ptr ? rht_entry(__ptr, type, member) : NULL; \
-})
+#define rht_dereference_bucket(p, tbl, hash) \
+ rcu_dereference_protected(p, lockdep_rht_bucket_is_held(tbl, hash))
+
+#define rht_dereference_bucket_rcu(p, tbl, hash) \
+ rcu_dereference_check(p, lockdep_rht_bucket_is_held(tbl, hash))
+
+#define rht_entry(tpos, pos, member) \
+ ({ tpos = container_of(pos, typeof(*tpos), member); 1; })
-#define rht_next_entry_safe(pos, ht, member) \
-({ \
- pos ? rht_entry_safe(rht_dereference((pos)->member.next, ht), \
- typeof(*(pos)), member) : NULL; \
-})
+/**
+ * rht_for_each_continue - continue iterating over hash chain
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @head: the previous &struct rhash_head to continue from
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ */
+#define rht_for_each_continue(pos, head, tbl, hash) \
+ for (pos = rht_dereference_bucket(head, tbl, hash); \
+ !rht_is_a_nulls(pos); \
+ pos = rht_dereference_bucket((pos)->next, tbl, hash))
/**
* rht_for_each - iterate over hash chain
- * @pos: &struct rhash_head to use as a loop cursor.
- * @head: head of the hash chain (struct rhash_head *)
- * @ht: pointer to your struct rhashtable
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ */
+#define rht_for_each(pos, tbl, hash) \
+ rht_for_each_continue(pos, (tbl)->buckets[hash], tbl, hash)
+
+/**
+ * rht_for_each_entry_continue - continue iterating over hash chain
+ * @tpos: the type * to use as a loop cursor.
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @head: the previous &struct rhash_head to continue from
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ * @member: name of the &struct rhash_head within the hashable struct.
*/
-#define rht_for_each(pos, head, ht) \
- for (pos = rht_dereference(head, ht); \
- pos; \
- pos = rht_dereference((pos)->next, ht))
+#define rht_for_each_entry_continue(tpos, pos, head, tbl, hash, member) \
+ for (pos = rht_dereference_bucket(head, tbl, hash); \
+ (!rht_is_a_nulls(pos)) && rht_entry(tpos, pos, member); \
+ pos = rht_dereference_bucket((pos)->next, tbl, hash))
/**
* rht_for_each_entry - iterate over hash chain of given type
- * @pos: type * to use as a loop cursor.
- * @head: head of the hash chain (struct rhash_head *)
- * @ht: pointer to your struct rhashtable
- * @member: name of the rhash_head within the hashable struct.
+ * @tpos: the type * to use as a loop cursor.
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ * @member: name of the &struct rhash_head within the hashable struct.
*/
-#define rht_for_each_entry(pos, head, ht, member) \
- for (pos = rht_entry_safe(rht_dereference(head, ht), \
- typeof(*(pos)), member); \
- pos; \
- pos = rht_next_entry_safe(pos, ht, member))
+#define rht_for_each_entry(tpos, pos, tbl, hash, member) \
+ rht_for_each_entry_continue(tpos, pos, (tbl)->buckets[hash], \
+ tbl, hash, member)
/**
* rht_for_each_entry_safe - safely iterate over hash chain of given type
- * @pos: type * to use as a loop cursor.
- * @n: type * to use for temporary next object storage
- * @head: head of the hash chain (struct rhash_head *)
- * @ht: pointer to your struct rhashtable
- * @member: name of the rhash_head within the hashable struct.
+ * @tpos: the type * to use as a loop cursor.
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @next: the &struct rhash_head to use as next in loop cursor.
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ * @member: name of the &struct rhash_head within the hashable struct.
*
* This hash chain list-traversal primitive allows for the looped code to
* remove the loop cursor from the list.
*/
-#define rht_for_each_entry_safe(pos, n, head, ht, member) \
- for (pos = rht_entry_safe(rht_dereference(head, ht), \
- typeof(*(pos)), member), \
- n = rht_next_entry_safe(pos, ht, member); \
- pos; \
- pos = n, \
- n = rht_next_entry_safe(pos, ht, member))
+#define rht_for_each_entry_safe(tpos, pos, next, tbl, hash, member) \
+ for (pos = rht_dereference_bucket((tbl)->buckets[hash], tbl, hash), \
+ next = !rht_is_a_nulls(pos) ? \
+ rht_dereference_bucket(pos->next, tbl, hash) : NULL; \
+ (!rht_is_a_nulls(pos)) && rht_entry(tpos, pos, member); \
+ pos = next, \
+ next = !rht_is_a_nulls(pos) ? \
+ rht_dereference_bucket(pos->next, tbl, hash) : NULL)
+
+/**
+ * rht_for_each_rcu_continue - continue iterating over rcu hash chain
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @head: the previous &struct rhash_head to continue from
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ *
+ * This hash chain list-traversal primitive may safely run concurrently with
+ * the _rcu mutation primitives such as rhashtable_insert() as long as the
+ * traversal is guarded by rcu_read_lock().
+ */
+#define rht_for_each_rcu_continue(pos, head, tbl, hash) \
+ for (({barrier(); }), \
+ pos = rht_dereference_bucket_rcu(head, tbl, hash); \
+ !rht_is_a_nulls(pos); \
+ pos = rcu_dereference_raw(pos->next))
/**
* rht_for_each_rcu - iterate over rcu hash chain
- * @pos: &struct rhash_head to use as a loop cursor.
- * @head: head of the hash chain (struct rhash_head *)
- * @ht: pointer to your struct rhashtable
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ *
+ * This hash chain list-traversal primitive may safely run concurrently with
+ * the _rcu mutation primitives such as rhashtable_insert() as long as the
+ * traversal is guarded by rcu_read_lock().
+ */
+#define rht_for_each_rcu(pos, tbl, hash) \
+ rht_for_each_rcu_continue(pos, (tbl)->buckets[hash], tbl, hash)
+
+/**
+ * rht_for_each_entry_rcu_continue - continue iterating over rcu hash chain
+ * @tpos: the type * to use as a loop cursor.
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @head: the previous &struct rhash_head to continue from
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ * @member: name of the &struct rhash_head within the hashable struct.
*
* This hash chain list-traversal primitive may safely run concurrently with
- * the _rcu fkht mutation primitives such as rht_insert() as long as the
+ * the _rcu mutation primitives such as rhashtable_insert() as long as the
* traversal is guarded by rcu_read_lock().
*/
-#define rht_for_each_rcu(pos, head, ht) \
- for (pos = rht_dereference_rcu(head, ht); \
- pos; \
- pos = rht_dereference_rcu((pos)->next, ht))
+#define rht_for_each_entry_rcu_continue(tpos, pos, head, tbl, hash, member) \
+ for (({barrier(); }), \
+ pos = rht_dereference_bucket_rcu(head, tbl, hash); \
+ (!rht_is_a_nulls(pos)) && rht_entry(tpos, pos, member); \
+ pos = rht_dereference_bucket_rcu(pos->next, tbl, hash))
/**
* rht_for_each_entry_rcu - iterate over rcu hash chain of given type
- * @pos: type * to use as a loop cursor.
- * @head: head of the hash chain (struct rhash_head *)
- * @member: name of the rhash_head within the hashable struct.
+ * @tpos: the type * to use as a loop cursor.
+ * @pos: the &struct rhash_head to use as a loop cursor.
+ * @tbl: the &struct bucket_table
+ * @hash: the hash value / bucket index
+ * @member: name of the &struct rhash_head within the hashable struct.
*
* This hash chain list-traversal primitive may safely run concurrently with
- * the _rcu fkht mutation primitives such as rht_insert() as long as the
+ * the _rcu mutation primitives such as rhashtable_insert() as long as the
* traversal is guarded by rcu_read_lock().
*/
-#define rht_for_each_entry_rcu(pos, head, member) \
- for (pos = rht_entry_safe(rcu_dereference_raw(head), \
- typeof(*(pos)), member); \
- pos; \
- pos = rht_entry_safe(rcu_dereference_raw((pos)->member.next), \
- typeof(*(pos)), member))
+#define rht_for_each_entry_rcu(tpos, pos, tbl, hash, member) \
+ rht_for_each_entry_rcu_continue(tpos, pos, (tbl)->buckets[hash],\
+ tbl, hash, member)
#endif /* _LINUX_RHASHTABLE_H */
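
A hedged usage sketch of the reworked iteration API: walking one bucket under RCU to find an object. The object type, key, and hash handling are hypothetical:

struct my_obj {
	int key;
	struct rhash_head node;
};

static struct my_obj *my_lookup(struct rhashtable *ht, int key, u32 hash)
{
	struct bucket_table *tbl;
	struct rhash_head *pos;
	struct my_obj *obj;
	u32 idx;

	rcu_read_lock();
	tbl = rht_dereference_rcu(ht->tbl, ht);
	idx = hash & (tbl->size - 1);
	rht_for_each_entry_rcu(obj, pos, tbl, idx, node) {
		if (obj->key == key) {
			/* NB: a real caller needs its own lifetime
			 * guarantee for obj once the lock is dropped */
			rcu_read_unlock();
			return obj;
		}
	}
	rcu_read_unlock();
	return NULL;
}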
*/
atomic_t refcount;
+ /*
+ * Count of child anon_vmas and VMAs which point to this anon_vma.
+ *
+ * This counter is used when deciding whether to reuse this anon_vma
+ * instead of forking a new one. See the comments in anon_vma_clone().
+ */
+ unsigned degree;
+
+ struct anon_vma *parent; /* Parent of this anon_vma */
+
/*
* NOTE: the LSB of the rb_root.rb_node is set by
* mm_take_all_locks() _after_ taking the above lock. So the
#ifdef CONFIG_DEBUG_LOCK_ALLOC
# define raw_spin_lock_nested(lock, subclass) \
_raw_spin_lock_nested(lock, subclass)
+# define raw_spin_lock_bh_nested(lock, subclass) \
+ _raw_spin_lock_bh_nested(lock, subclass)
# define raw_spin_lock_nest_lock(lock, nest_lock) \
do { \
# define raw_spin_lock_nested(lock, subclass) \
_raw_spin_lock(((void)(subclass), (lock)))
# define raw_spin_lock_nest_lock(lock, nest_lock) _raw_spin_lock(lock)
+# define raw_spin_lock_bh_nested(lock, subclass) _raw_spin_lock_bh(lock)
#endif
#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
raw_spin_lock_nested(spinlock_check(lock), subclass); \
} while (0)
+#define spin_lock_bh_nested(lock, subclass) \
+do { \
+ raw_spin_lock_bh_nested(spinlock_check(lock), subclass);\
+} while (0)
+
#define spin_lock_nest_lock(lock, nest_lock) \
do { \
raw_spin_lock_nest_lock(spinlock_check(lock), nest_lock); \
void __lockfunc _raw_spin_lock(raw_spinlock_t *lock) __acquires(lock);
void __lockfunc _raw_spin_lock_nested(raw_spinlock_t *lock, int subclass)
__acquires(lock);
+void __lockfunc _raw_spin_lock_bh_nested(raw_spinlock_t *lock, int subclass)
+ __acquires(lock);
void __lockfunc
_raw_spin_lock_nest_lock(raw_spinlock_t *lock, struct lockdep_map *map)
__acquires(lock);
#define _raw_spin_lock(lock) __LOCK(lock)
#define _raw_spin_lock_nested(lock, subclass) __LOCK(lock)
+#define _raw_spin_lock_bh_nested(lock, subclass) __LOCK(lock)
#define _raw_read_lock(lock) __LOCK(lock)
#define _raw_write_lock(lock) __LOCK(lock)
#define _raw_spin_lock_bh(lock) __LOCK_BH(lock)
unsigned int corkflag; /* Cork is required */
__u8 encap_type; /* Is this an Encapsulation socket? */
unsigned char no_check6_tx:1,/* Send zero UDP6 checksums on TX? */
- no_check6_rx:1,/* Allow zero UDP6 checksums on RX? */
- convert_csum:1;/* On receive, convert checksum
- * unnecessary to checksum complete
- * if possible.
- */
+ no_check6_rx:1;/* Allow zero UDP6 checksums on RX? */
/*
* Following member retains the information to create a UDP header
* when the socket is uncorked.
return udp_sk(sk)->no_check6_rx;
}
-static inline void udp_set_convert_csum(struct sock *sk, bool val)
-{
- udp_sk(sk)->convert_csum = val;
-}
-
-static inline bool udp_get_convert_csum(struct sock *sk)
-{
- return udp_sk(sk)->convert_csum;
-}
-
#define udp_portaddr_for_each_entry(__sk, node, list) \
hlist_nulls_for_each_entry(__sk, node, list, __sk_common.skc_portaddr_node)
struct writeback_control *wbc, writepage_t writepage,
void *data);
int do_writepages(struct address_space *mapping, struct writeback_control *wbc);
-void set_page_dirty_balance(struct page *page);
void writeback_set_ratelimit(void);
void tag_pages_for_writeback(struct address_space *mapping,
pgoff_t start, pgoff_t end);
struct hci_dev;
-typedef void (*hci_req_complete_t)(struct hci_dev *hdev, u8 status);
+typedef void (*hci_req_complete_t)(struct hci_dev *hdev, u8 status, u16 opcode);
struct hci_req_ctrl {
bool start;
*/
HCI_QUIRK_FIXUP_BUFFER_SIZE,
+ /* When this quirk is set, then a controller that does not
+ * indicate support for Inquiry Result with RSSI is assumed to
+ * support it anyway. Some early Bluetooth 1.2 controllers had
+ * wrongly configured local features that require this mode to
+ * be forced on. Getting RSSI information with the
+ * inquiry responses is preferred since it allows for a better
+ * user experience.
+ *
+ * This quirk must be set before hci_register_dev is called.
+ */
+ HCI_QUIRK_FIXUP_INQUIRY_MODE,
+
/* When this quirk is set, then the HCI Read Local Supported
* Commands command is not supported. In general Bluetooth 1.2
* and later controllers should support this command. However
*/
enum {
HCI_DUT_MODE,
- HCI_FORCE_SC,
- HCI_FORCE_LESC,
+ HCI_FORCE_BREDR_SMP,
HCI_FORCE_STATIC_ADDR,
};
#define HCI_CONN_SETUP_AUTO_OFF 0x01
#define HCI_CONN_SETUP_AUTO_ON 0x02
+#define HCI_OP_READ_STORED_LINK_KEY 0x0c0d
+struct hci_cp_read_stored_link_key {
+ bdaddr_t bdaddr;
+ __u8 read_all;
+} __packed;
+struct hci_rp_read_stored_link_key {
+ __u8 status;
+ __u8 max_keys;
+ __u8 num_keys;
+} __packed;
+
#define HCI_OP_DELETE_STORED_LINK_KEY 0x0c12
struct hci_cp_delete_stored_link_key {
bdaddr_t bdaddr;
__u8 delete_all;
} __packed;
+struct hci_rp_delete_stored_link_key {
+ __u8 status;
+ __u8 num_keys;
+} __packed;
#define HCI_MAX_NAME_LENGTH 248
__u16 lmp_subver;
__u16 voice_setting;
__u8 num_iac;
+ __u8 stored_max_keys;
+ __u8 stored_num_keys;
__u8 io_capability;
__s8 inq_tx_power;
__u16 page_scan_interval;
int hci_conn_check_secure(struct hci_conn *conn, __u8 sec_level);
int hci_conn_security(struct hci_conn *conn, __u8 sec_level, __u8 auth_type,
bool initiator);
-int hci_conn_change_link_key(struct hci_conn *conn);
int hci_conn_switch_role(struct hci_conn *conn, __u8 role);
void hci_conn_enter_active_mode(struct hci_conn *conn, __u8 force_active);
#define hdev_is_powered(hdev) (test_bit(HCI_UP, &hdev->flags) && \
!test_bit(HCI_AUTO_OFF, &hdev->dev_flags))
-#define bredr_sc_enabled(dev) ((lmp_sc_capable(dev) || \
- test_bit(HCI_FORCE_SC, &(dev)->dbg_flags)) && \
+#define bredr_sc_enabled(dev) (lmp_sc_capable(dev) && \
test_bit(HCI_SC_ENABLED, &(dev)->dev_flags))
/* ----- HCI protocols ----- */
NLMSG_HDRLEN);
}
+/**
+ * genlmsg_parse - parse attributes of a genetlink message
+ * @nlh: netlink message header
+ * @family: genetlink message family
+ * @tb: destination array with maxtype+1 elements
+ * @maxtype: maximum attribute type to be expected
+ * @policy: validation policy
+ */
+static inline int genlmsg_parse(const struct nlmsghdr *nlh,
+ const struct genl_family *family,
+ struct nlattr *tb[], int maxtype,
+ const struct nla_policy *policy)
+{
+ return nlmsg_parse(nlh, family->hdrsize + GENL_HDRLEN, tb, maxtype,
+ policy);
+}
+
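A hedged sketch of the new helper in a dump callback; the family, policy, and attribute count (MY_ATTR_MAX) are hypothetical:

static int my_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
	struct nlattr *tb[MY_ATTR_MAX + 1];
	int err;

	err = genlmsg_parse(cb->nlh, &my_genl_family, tb,
			    MY_ATTR_MAX, my_genl_policy);
	if (err < 0)
		return err;

	/* tb[] now holds the validated attributes */
	return 0;
}
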
/**
* genl_dump_check_consistent - check if sequence is consistent and advertise if not
* @cb: netlink callback structure that stores the sequence number
* @skb: socket buffer the message is stored in
* @hdr: user specific header
*/
-static inline int genlmsg_end(struct sk_buff *skb, void *hdr)
+static inline void genlmsg_end(struct sk_buff *skb, void *hdr)
{
- return nlmsg_end(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
+ nlmsg_end(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
}
/**
typedef void (geneve_rcv_t)(struct geneve_sock *gs, struct sk_buff *skb);
struct geneve_sock {
- struct hlist_node hlist;
+ struct list_head list;
geneve_rcv_t *rcv;
void *rcv_data;
- struct work_struct del_work;
struct socket *sock;
struct rcu_head rcu;
- atomic_t refcnt;
+ int refcnt;
struct udp_offload udp_offloads;
};
struct gro_cell {
struct sk_buff_head napi_skbs;
struct napi_struct napi;
-} ____cacheline_aligned_in_smp;
+};
struct gro_cells {
- unsigned int gro_cells_mask;
- struct gro_cell *cells;
+ struct gro_cell __percpu *cells;
};
static inline void gro_cells_receive(struct gro_cells *gcells, struct sk_buff *skb)
{
- struct gro_cell *cell = gcells->cells;
+ struct gro_cell *cell;
struct net_device *dev = skb->dev;
- if (!cell || skb_cloned(skb) || !(dev->features & NETIF_F_GRO)) {
+ if (!gcells->cells || skb_cloned(skb) || !(dev->features & NETIF_F_GRO)) {
netif_rx(skb);
return;
}
- if (skb_rx_queue_recorded(skb))
- cell += skb_get_rx_queue(skb) & gcells->gro_cells_mask;
+ cell = this_cpu_ptr(gcells->cells);
if (skb_queue_len(&cell->napi_skbs) > netdev_max_backlog) {
atomic_long_inc(&dev->rx_dropped);
{
int i;
- gcells->gro_cells_mask = roundup_pow_of_two(netif_get_num_default_rss_queues()) - 1;
- gcells->cells = kcalloc(gcells->gro_cells_mask + 1,
- sizeof(struct gro_cell),
- GFP_KERNEL);
+ gcells->cells = alloc_percpu(struct gro_cell);
if (!gcells->cells)
return -ENOMEM;
- for (i = 0; i <= gcells->gro_cells_mask; i++) {
- struct gro_cell *cell = gcells->cells + i;
+ for_each_possible_cpu(i) {
+ struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
skb_queue_head_init(&cell->napi_skbs);
netif_napi_add(dev, &cell->napi, gro_cell_poll, 64);
static inline void gro_cells_destroy(struct gro_cells *gcells)
{
- struct gro_cell *cell = gcells->cells;
int i;
- if (!cell)
+ if (!gcells->cells)
return;
- for (i = 0; i <= gcells->gro_cells_mask; i++,cell++) {
+ for_each_possible_cpu(i) {
+ struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
netif_napi_del(&cell->napi);
skb_queue_purge(&cell->napi_skbs);
}
- kfree(gcells->cells);
+ free_percpu(gcells->cells);
gcells->cells = NULL;
}
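
A sketch of the per-CPU cells from a tunnel driver's point of view; the private struct is hypothetical, and gro_cells_init() is assumed to keep the (gcells, dev) signature implied by its body above:

struct my_tunnel {
	struct gro_cells gro_cells;
};

static int my_tunnel_init(struct net_device *dev)
{
	struct my_tunnel *t = netdev_priv(dev);

	return gro_cells_init(&t->gro_cells, dev);
}

static void my_tunnel_rx(struct net_device *dev, struct sk_buff *skb)
{
	struct my_tunnel *t = netdev_priv(dev);

	skb->dev = dev;
	/* queues onto this CPU's cell and schedules its NAPI poll */
	gro_cells_receive(&t->gro_cells, skb);
}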
const struct tcp_congestion_ops *icsk_ca_ops;
const struct inet_connection_sock_af_ops *icsk_af_ops;
unsigned int (*icsk_sync_mss)(struct sock *sk, u32 pmtu);
- __u8 icsk_ca_state;
+ __u8 icsk_ca_state:7,
+ icsk_ca_dst_locked:1;
__u8 icsk_retransmits;
__u8 icsk_pending;
__u8 icsk_backoff;
#ifndef _INET_SOCK_H
#define _INET_SOCK_H
-
+#include <linux/bitops.h>
#include <linux/kmemcheck.h>
#include <linux/string.h>
#include <linux/types.h>
mc_all:1,
nodefrag:1;
__u8 rcv_tos;
+ __u8 convert_csum;
int uc_index;
int mc_index;
__be32 mc_addr;
#define IPCORK_OPT 1 /* ip-options has been held in ipcork.opt */
#define IPCORK_ALLFRAG 2 /* always fragment (for ipv6 for now) */
+/* cmsg flags for inet */
+#define IP_CMSG_PKTINFO BIT(0)
+#define IP_CMSG_TTL BIT(1)
+#define IP_CMSG_TOS BIT(2)
+#define IP_CMSG_RECVOPTS BIT(3)
+#define IP_CMSG_RETOPTS BIT(4)
+#define IP_CMSG_PASSSEC BIT(5)
+#define IP_CMSG_ORIGDSTADDR BIT(6)
+#define IP_CMSG_CHECKSUM BIT(7)
+
static inline struct inet_sock *inet_sk(const struct sock *sk)
{
return (struct inet_sock *)sk;
return flags;
}
+static inline void inet_inc_convert_csum(struct sock *sk)
+{
+ inet_sk(sk)->convert_csum++;
+}
+
+static inline void inet_dec_convert_csum(struct sock *sk)
+{
+ if (inet_sk(sk)->convert_csum > 0)
+ inet_sk(sk)->convert_csum--;
+}
+
+static inline bool inet_get_convert_csum(struct sock *sk)
+{
+ return !!inet_sk(sk)->convert_csum;
+}
+
#endif /* _INET_SOCK_H */
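
A minimal sketch of the counter semantics that replace the old per-UDP-socket boolean: each encapsulation user takes a reference, and conversion stays on until the last one drops it. Only the three helpers above are assumed:

static void my_encap_attach(struct sock *sk)
{
	inet_inc_convert_csum(sk);	/* one more user wants conversion */
}

static void my_encap_detach(struct sock *sk)
{
	inet_dec_convert_csum(sk);	/* helper clamps the count at zero */
}

static bool my_encap_active(struct sock *sk)
{
	return inet_get_convert_csum(sk);
}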
*/
void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb);
-void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb);
+void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb, int offset);
int ip_cmsg_send(struct net *net, struct msghdr *msg,
struct ipcm_cookie *ipc, bool allow_ipv6);
int ip_setsockopt(struct sock *sk, int level, int optname, char __user *optval,
void ip_local_error(struct sock *sk, int err, __be32 daddr, __be16 dport,
u32 info);
+static inline void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
+{
+ ip_cmsg_recv_offset(msg, skb, 0);
+}
+
bool icmp_global_allow(void);
extern int sysctl_icmp_msgs_per_sec;
extern int sysctl_icmp_msgs_burst;
#define FIB6_SUBTREE(fn) ((fn)->subtree)
#endif
+struct mx6_config {
+ const u32 *mx;
+ DECLARE_BITMAP(mx_valid, RTAX_MAX);
+};
+
/*
* routing information
*
void fib6_clean_all(struct net *net, int (*func)(struct rt6_info *, void *arg),
void *arg);
-int fib6_add(struct fib6_node *root, struct rt6_info *rt, struct nl_info *info,
- struct nlattr *mx, int mx_len);
-
+int fib6_add(struct fib6_node *root, struct rt6_info *rt,
+ struct nl_info *info, struct mx6_config *mxc);
int fib6_del(struct rt6_info *rt, struct nl_info *info);
void inet6_rt_notify(int event, struct rt6_info *rt, struct nl_info *info);
__u16 ip6_tnl_parse_tlv_enc_lim(struct sk_buff *skb, __u8 *raw);
__u32 ip6_tnl_get_cap(struct ip6_tnl *t, const struct in6_addr *laddr,
const struct in6_addr *raddr);
+struct net *ip6_tnl_get_link_net(const struct net_device *dev);
static inline void ip6tunnel_xmit(struct sk_buff *skb, struct net_device *dev)
{
#define TUNNEL_DONT_FRAGMENT __cpu_to_be16(0x0100)
#define TUNNEL_OAM __cpu_to_be16(0x0200)
#define TUNNEL_CRIT_OPT __cpu_to_be16(0x0400)
-#define TUNNEL_OPTIONS_PRESENT __cpu_to_be16(0x0800)
+#define TUNNEL_GENEVE_OPT __cpu_to_be16(0x0800)
+#define TUNNEL_VXLAN_OPT __cpu_to_be16(0x1000)
+
+#define TUNNEL_OPTIONS_PRESENT (TUNNEL_GENEVE_OPT | TUNNEL_VXLAN_OPT)
struct tnl_ptk_info {
__be16 flags;
int ip_tunnel_init(struct net_device *dev);
void ip_tunnel_uninit(struct net_device *dev);
void ip_tunnel_dellink(struct net_device *dev, struct list_head *head);
+struct net *ip_tunnel_get_link_net(const struct net_device *dev);
int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id,
struct rtnl_link_ops *ops, char *devname);
struct list_head exit_list; /* Use only net_mutex */
struct user_namespace *user_ns; /* Owning user namespace */
+ struct idr netns_ids;
struct ns_common ns;
#define __net_initconst __initconst
#endif
+int peernet2id(struct net *net, struct net *peer);
+struct net *get_net_ns_by_id(struct net *net, int id);
+
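The two declarations above expose the namespace-id mapping used by the new RTM_NEWNSID/RTM_GETNSID machinery. As a sketch only (the behaviour for peers with no assigned id is not shown in this excerpt and is assumed here), a dump function might translate a peer namespace into the IFLA_LINK_NETNSID attribute like this:

	/* Sketch: report the link netns of a device; assumes the id is
	 * valid once peernet2id() returns. */
	static int report_link_netnsid(struct sk_buff *skb, struct net *net,
				       struct net *link_net)
	{
		int id = peernet2id(net, link_net);

		return nla_put_s32(skb, IFLA_LINK_NETNSID, id);
	}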
struct pernet_operations {
struct list_head list;
int (*init)(struct net *net);
int nf_conntrack_hash_check_insert(struct nf_conn *ct);
bool nf_ct_delete(struct nf_conn *ct, u32 pid, int report);
-void nf_conntrack_flush_report(struct net *net, u32 portid, int report);
-
bool nf_ct_get_tuplepr(const struct sk_buff *skb, unsigned int nhoff,
u_int16_t l3num, struct nf_conntrack_tuple *tuple);
bool nf_ct_invert_tuplepr(struct nf_conntrack_tuple *inverse,
* Corrects the netlink message header to include the appended
* attributes. Only necessary if attributes have been added to
* the message.
- *
- * Returns the total data length of the skb.
*/
-static inline int nlmsg_end(struct sk_buff *skb, struct nlmsghdr *nlh)
+static inline void nlmsg_end(struct sk_buff *skb, struct nlmsghdr *nlh)
{
nlh->nlmsg_len = skb_tail_pointer(skb) - (unsigned char *)nlh;
-
- return skb->len;
}
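Since nlmsg_end() now returns void, dump callbacks that used to return its value must return the length themselves. A minimal sketch of the updated caller pattern (my_fill_msg() is a placeholder name):

	static int my_fill_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
	{
		/* ... nla_put() attributes here ... */
		nlmsg_end(skb, nlh);
		return skb->len;	/* callers now compute the length themselves */
	}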
/**
*/
static inline void nlmsg_trim(struct sk_buff *skb, const void *mark)
{
- if (mark)
+ if (mark) {
+ WARN_ON((unsigned char *) mark < skb->data);
skb_trim(skb, (unsigned char *) mark - skb->data);
+ }
}
/**
#include <linux/jiffies.h>
#include <linux/ktime.h>
+#include <linux/if_vlan.h>
#include <net/sch_generic.h>
struct qdisc_walker {
int tc_classify(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res);
+static inline __be16 tc_skb_protocol(const struct sk_buff *skb)
+{
+ /* We need to take extra care in case the skb came via
+ * vlan accelerated path. In that case, use skb->vlan_proto
+ * as the original vlan header was already stripped.
+ */
+ if (skb_vlan_tag_present(skb))
+ return skb->vlan_proto;
+ return skb->protocol;
+}
+
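A classifier that matched on skb->protocol directly would mis-classify VLAN-accelerated packets; with the helper above the outer protocol is always seen. A one-line sketch (my_cls_match_ipv4() is hypothetical):

	/* Sketch: a VLAN-safe protocol check inside a classifier. */
	static bool my_cls_match_ipv4(const struct sk_buff *skb)
	{
		return tc_skb_protocol(skb) == htons(ETH_P_IP);
	}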
/* Calculate maximal size of packet seen by hard_start_xmit
routine of this device.
*/
struct fib_nh;
struct fib_info;
+struct uncached_list;
struct rtable {
struct dst_entry dst;
u32 rt_pmtu;
struct list_head rt_uncached;
+ struct uncached_list *rt_uncached_list;
};
static inline bool rt_is_input_route(const struct rtable *rt)
* to create when creating a new device.
* @get_num_rx_queues: Function to determine number of receive queues
* to create when creating a new device.
+ * @get_link_net: Function to get the i/o netns of the device
*/
struct rtnl_link_ops {
struct list_head list;
int (*fill_slave_info)(struct sk_buff *skb,
const struct net_device *dev,
const struct net_device *slave_dev);
+ struct net *(*get_link_net)(const struct net_device *dev);
};
int __rtnl_link_register(struct rtnl_link_ops *ops);
#define _LINUX_SWITCHDEV_H_
#include <linux/netdevice.h>
+#include <linux/notifier.h>
+
+enum netdev_switch_notifier_type {
+ NETDEV_SWITCH_FDB_ADD = 1,
+ NETDEV_SWITCH_FDB_DEL,
+};
+
+struct netdev_switch_notifier_info {
+ struct net_device *dev;
+};
+
+struct netdev_switch_notifier_fdb_info {
+ struct netdev_switch_notifier_info info; /* must be first */
+ const unsigned char *addr;
+ u16 vid;
+};
+
+static inline struct net_device *
+netdev_switch_notifier_info_to_dev(const struct netdev_switch_notifier_info *info)
+{
+ return info->dev;
+}
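A switch driver consuming these FDB notifications would register a notifier block and cast the payload back, relying on the "must be first" embedding above. A hedged sketch; my_port_fdb_add()/my_port_fdb_del() are hypothetical driver callbacks:

	static int my_switch_event(struct notifier_block *nb,
				   unsigned long event, void *ptr)
	{
		struct netdev_switch_notifier_fdb_info *fdb_info = ptr;
		struct net_device *dev = netdev_switch_notifier_info_to_dev(ptr);

		switch (event) {
		case NETDEV_SWITCH_FDB_ADD:
			my_port_fdb_add(dev, fdb_info->addr, fdb_info->vid);
			break;
		case NETDEV_SWITCH_FDB_DEL:
			my_port_fdb_del(dev, fdb_info->addr, fdb_info->vid);
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block my_switch_nb = {
		.notifier_call = my_switch_event,
	};
	/* register_netdev_switch_notifier(&my_switch_nb) at probe time */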
#ifdef CONFIG_NET_SWITCHDEV
int netdev_switch_parent_id_get(struct net_device *dev,
struct netdev_phys_item_id *psid);
int netdev_switch_port_stp_update(struct net_device *dev, u8 state);
+int register_netdev_switch_notifier(struct notifier_block *nb);
+int unregister_netdev_switch_notifier(struct notifier_block *nb);
+int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev,
+ struct netdev_switch_notifier_info *info);
#else
return -EOPNOTSUPP;
}
+static inline int register_netdev_switch_notifier(struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline int unregister_netdev_switch_notifier(struct notifier_block *nb)
+{
+ return 0;
+}
+
+static inline int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev,
+ struct netdev_switch_notifier_info *info)
+{
+ return NOTIFY_DONE;
+}
+
#endif
#endif /* _LINUX_SWITCHDEV_H_ */
--- /dev/null
+/*
+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __NET_TC_BPF_H
+#define __NET_TC_BPF_H
+
+#include <linux/filter.h>
+#include <net/act_api.h>
+
+struct tcf_bpf {
+ struct tcf_common common;
+ struct bpf_prog *filter;
+ struct sock_filter *bpf_ops;
+ u16 bpf_num_ops;
+};
+#define to_bpf(a) \
+ container_of(a->priv, struct tcf_bpf, common)
+
+#endif /* __NET_TC_BPF_H */
--- /dev/null
+#ifndef __NET_TC_CONNMARK_H
+#define __NET_TC_CONNMARK_H
+
+#include <net/act_api.h>
+
+struct tcf_connmark_info {
+ struct tcf_common common;
+ u16 zone;
+};
+
+#define to_connmark(a) \
+ container_of(a->priv, struct tcf_connmark_info, common)
+
+#endif /* __NET_TC_CONNMARK_H */
struct sock *tcp_create_openreq_child(struct sock *sk,
struct request_sock *req,
struct sk_buff *skb);
+void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst);
struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
struct request_sock *req,
struct dst_entry *dst);
return jiffies_to_usecs(tcp_rto_min(sk));
}
+static inline bool tcp_ca_dst_locked(const struct dst_entry *dst)
+{
+ return dst_metric_locked(dst, RTAX_CC_ALGO);
+}
+
/* Compute the actual receive window we are currently advertising.
* Rcv_nxt can be after the window if our peer push more data
* than the offered window.
#define TCP_CA_MAX 128
#define TCP_CA_BUF_MAX (TCP_CA_NAME_MAX*TCP_CA_MAX)
+#define TCP_CA_UNSPEC 0
+
/* Algorithm can be set on socket without CAP_NET_ADMIN privileges */
#define TCP_CONG_NON_RESTRICTED 0x1
/* Requires ECN/ECT set on all packets */
struct tcp_congestion_ops {
struct list_head list;
- unsigned long flags;
+ u32 key;
+ u32 flags;
/* initialize private data (optional) */
void (*init)(struct sock *sk);
void tcp_reno_cong_avoid(struct sock *sk, u32 ack, u32 acked);
extern struct tcp_congestion_ops tcp_reno;
+struct tcp_congestion_ops *tcp_ca_find_key(u32 key);
+u32 tcp_ca_get_key_by_name(const char *name);
+#ifdef CONFIG_INET
+char *tcp_ca_get_name_by_key(u32 key, char *buffer);
+#else
+static inline char *tcp_ca_get_name_by_key(u32 key, char *buffer)
+{
+ return NULL;
+}
+#endif
+
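Congestion control algorithms become addressable by a 32-bit key derived from the name, which is what the new RTAX_CC_ALGO route metric stores. A minimal round-trip sketch, assuming tcp_ca_get_key_by_name() yields TCP_CA_UNSPEC when the name is unknown:

	static void my_show_ca_key(void)
	{
		char buf[TCP_CA_NAME_MAX];
		u32 key = tcp_ca_get_key_by_name("reno");

		if (key != TCP_CA_UNSPEC && tcp_ca_get_name_by_key(key, buf))
			pr_debug("cc algo key %u -> %s\n", key, buf);
	}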
static inline bool tcp_ca_needs_ecn(const struct sock *sk)
{
const struct inet_connection_sock *icsk = inet_csk(sk);
struct udp_tunnel_sock_cfg *sock_cfg);
/* Transmit the skb using UDP encapsulation. */
-int udp_tunnel_xmit_skb(struct socket *sock, struct rtable *rt,
- struct sk_buff *skb, __be32 src, __be32 dst,
- __u8 tos, __u8 ttl, __be16 df, __be16 src_port,
- __be16 dst_port, bool xnet);
+int udp_tunnel_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ __be32 src, __be32 dst, __u8 tos, __u8 ttl,
+ __be16 df, __be16 src_port, __be16 dst_port,
+ bool xnet, bool nocheck);
#if IS_ENABLED(CONFIG_IPV6)
-int udp_tunnel6_xmit_skb(struct socket *sock, struct dst_entry *dst,
- struct sk_buff *skb, struct net_device *dev,
- struct in6_addr *saddr, struct in6_addr *daddr,
+int udp_tunnel6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ struct net_device *dev, struct in6_addr *saddr,
+ struct in6_addr *daddr,
__u8 prio, __u8 ttl, __be16 src_port,
- __be16 dst_port);
+ __be16 dst_port, bool nocheck);
#endif
void udp_tunnel_sock_release(struct socket *sock);
#define VNI_HASH_BITS 10
#define VNI_HASH_SIZE (1<<VNI_HASH_BITS)
-/* VXLAN protocol header */
+/*
+ * VXLAN Group Based Policy Extension:
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |1|-|-|-|1|-|-|-|R|D|R|R|A|R|R|R| Group Policy ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VXLAN Network Identifier (VNI) | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * D = Don't Learn bit. When set, this bit indicates that the egress
+ * VTEP MUST NOT learn the source address of the encapsulated frame.
+ *
+ * A = Indicates that the group policy has already been applied to
+ * this packet. Policies MUST NOT be applied by devices when the
+ * A bit is set.
+ *
+ * [0] https://tools.ietf.org/html/draft-smith-vxlan-group-policy
+ */
+struct vxlanhdr_gbp {
+ __u8 vx_flags;
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ __u8 reserved_flags1:3,
+ policy_applied:1,
+ reserved_flags2:2,
+ dont_learn:1,
+ reserved_flags3:1;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ __u8 reserved_flags1:1,
+ dont_learn:1,
+ reserved_flags2:2,
+ policy_applied:1,
+ reserved_flags3:3;
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+ __be16 policy_id;
+ __be32 vx_vni;
+};
+
+#define VXLAN_GBP_USED_BITS (VXLAN_HF_GBP | 0xFFFFFF)
+
+/* skb->mark mapping
+ *
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |R|R|R|R|R|R|R|R|R|D|R|R|A|R|R|R| Group Policy ID |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+#define VXLAN_GBP_DONT_LEARN (BIT(6) << 16)
+#define VXLAN_GBP_POLICY_APPLIED (BIT(3) << 16)
+#define VXLAN_GBP_ID_MASK (0xFFFF)
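Because the skb->mark layout above mirrors the header bits, building the on-wire GBP fields is mostly masking. A sketch close to what a transmit path might do (my_build_gbp_hdr() is a placeholder name):

	static void my_build_gbp_hdr(struct vxlanhdr_gbp *gbp, u32 mark)
	{
		gbp->dont_learn = !!(mark & VXLAN_GBP_DONT_LEARN);
		gbp->policy_applied = !!(mark & VXLAN_GBP_POLICY_APPLIED);
		gbp->policy_id = htons(mark & VXLAN_GBP_ID_MASK);
	}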
+
+/* VXLAN protocol header:
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * |G|R|R|R|I|R|R|C| Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VXLAN Network Identifier (VNI) | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ *
+ * G = 1 Group Policy (VXLAN-GBP)
+ * I = 1 VXLAN Network Identifier (VNI) present
+ * C = 1 Remote checksum offload (RCO)
+ */
struct vxlanhdr {
__be32 vx_flags;
__be32 vx_vni;
};
+/* VXLAN header flags. */
+#define VXLAN_HF_RCO BIT(24)
+#define VXLAN_HF_VNI BIT(27)
+#define VXLAN_HF_GBP BIT(31)
+
+/* Remote checksum offload header option */
+#define VXLAN_RCO_MASK 0x7f /* Last byte of vni field */
+#define VXLAN_RCO_UDP 0x80 /* Indicate UDP RCO (TCP when not set) */
+#define VXLAN_RCO_SHIFT 1 /* Left shift of start */
+#define VXLAN_RCO_SHIFT_MASK ((1 << VXLAN_RCO_SHIFT) - 1)
+#define VXLAN_MAX_REMCSUM_START (VXLAN_RCO_MASK << VXLAN_RCO_SHIFT)
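On receive, the RCO information rides in the low byte of the VNI field: bit 7 selects UDP versus TCP, and the remaining bits give the checksum start in two-byte units. A decode sketch, assuming vni_low is that byte already extracted by the caller:

	static u32 my_rco_start(u8 vni_low, bool *is_udp)
	{
		*is_udp = !!(vni_low & VXLAN_RCO_UDP);
		return (vni_low & VXLAN_RCO_MASK) << VXLAN_RCO_SHIFT;
	}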
+
+#define VXLAN_N_VID (1u << 24)
+#define VXLAN_VID_MASK (VXLAN_N_VID - 1)
+#define VXLAN_HLEN (sizeof(struct udphdr) + sizeof(struct vxlanhdr))
+
+struct vxlan_metadata {
+ __be32 vni;
+ u32 gbp;
+};
+
struct vxlan_sock;
-typedef void (vxlan_rcv_t)(struct vxlan_sock *vh, struct sk_buff *skb, __be32 key);
+typedef void (vxlan_rcv_t)(struct vxlan_sock *vh, struct sk_buff *skb,
+ struct vxlan_metadata *md);
/* per UDP socket information */
struct vxlan_sock {
struct hlist_head vni_list[VNI_HASH_SIZE];
atomic_t refcnt;
struct udp_offload udp_offloads;
+ u32 flags;
};
#define VXLAN_F_LEARN 0x01
#define VXLAN_F_UDP_CSUM 0x40
#define VXLAN_F_UDP_ZERO_CSUM6_TX 0x80
#define VXLAN_F_UDP_ZERO_CSUM6_RX 0x100
+#define VXLAN_F_REMCSUM_TX 0x200
+#define VXLAN_F_REMCSUM_RX 0x400
+#define VXLAN_F_GBP 0x800
+
+/* Flags that are used in the receive path. These flags must match in
+ * order for a socket to be shareable
+ */
+#define VXLAN_F_RCV_FLAGS (VXLAN_F_GBP | \
+ VXLAN_F_UDP_ZERO_CSUM6_RX | \
+ VXLAN_F_REMCSUM_RX)
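Per the comment above, a UDP socket can only be shared between VXLAN devices whose receive-relevant flags agree. A sketch of that comparison (my_vs_shareable() is hypothetical):

	static bool my_vs_shareable(const struct vxlan_sock *vs, u32 flags)
	{
		return (vs->flags & VXLAN_F_RCV_FLAGS) ==
		       (flags & VXLAN_F_RCV_FLAGS);
	}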
struct vxlan_sock *vxlan_sock_add(struct net *net, __be16 port,
vxlan_rcv_t *rcv, void *data,
void vxlan_sock_release(struct vxlan_sock *vs);
-int vxlan_xmit_skb(struct vxlan_sock *vs,
- struct rtable *rt, struct sk_buff *skb,
+int vxlan_xmit_skb(struct rtable *rt, struct sk_buff *skb,
__be32 src, __be32 dst, __u8 tos, __u8 ttl, __be16 df,
- __be16 src_port, __be16 dst_port, __be32 vni, bool xnet);
+ __be16 src_port, __be16 dst_port, struct vxlan_metadata *md,
+ bool xnet, u32 vxflags);
static inline netdev_features_t vxlan_features_check(struct sk_buff *skb,
netdev_features_t features)
}
/**
- * params_channels - Get the sample rate from the hw params
+ * params_rate - Get the sample rate from the hw params
* @p: hw params
*/
static inline unsigned int params_rate(const struct snd_pcm_hw_params *p)
}
/**
- * params_channels - Get the period size (in frames) from the hw params
+ * params_period_size - Get the period size (in frames) from the hw params
* @p: hw params
*/
static inline unsigned int params_period_size(const struct snd_pcm_hw_params *p)
}
/**
- * params_channels - Get the number of periods from the hw params
+ * params_periods - Get the number of periods from the hw params
* @p: hw params
*/
static inline unsigned int params_periods(const struct snd_pcm_hw_params *p)
}
/**
- * params_channels - Get the buffer size (in frames) from the hw params
+ * params_buffer_size - Get the buffer size (in frames) from the hw params
* @p: hw params
*/
static inline unsigned int params_buffer_size(const struct snd_pcm_hw_params *p)
}
/**
- * params_channels - Get the buffer size (in bytes) from the hw params
+ * params_buffer_bytes - Get the buffer size (in bytes) from the hw params
* @p: hw params
*/
static inline unsigned int params_buffer_bytes(const struct snd_pcm_hw_params *p)
int se_dev_set_emulate_rest_reord(struct se_device *dev, int);
int se_dev_set_queue_depth(struct se_device *, u32);
int se_dev_set_max_sectors(struct se_device *, u32);
-int se_dev_set_fabric_max_sectors(struct se_device *, u32);
int se_dev_set_optimal_sectors(struct se_device *, u32);
int se_dev_set_block_size(struct se_device *, u32);
TB_DEV_ATTR(_backend, block_size, S_IRUGO | S_IWUSR); \
DEF_TB_DEV_ATTRIB_RO(_backend, hw_max_sectors); \
TB_DEV_ATTR_RO(_backend, hw_max_sectors); \
- DEF_TB_DEV_ATTRIB(_backend, fabric_max_sectors); \
- TB_DEV_ATTR(_backend, fabric_max_sectors, S_IRUGO | S_IWUSR); \
DEF_TB_DEV_ATTRIB(_backend, optimal_sectors); \
TB_DEV_ATTR(_backend, optimal_sectors, S_IRUGO | S_IWUSR); \
DEF_TB_DEV_ATTRIB_RO(_backend, hw_queue_depth); \
#define DA_UNMAP_GRANULARITY_ALIGNMENT_DEFAULT 0
/* Default max_write_same_len, disabled by default */
#define DA_MAX_WRITE_SAME_LEN 0
-/* Default max transfer length */
-#define DA_FABRIC_MAX_SECTORS 8192
/* Use a model alias based on the configfs backend device name */
#define DA_EMULATE_MODEL_ALIAS 0
/* Emulation for Direct Page Out */
u32 hw_block_size;
u32 block_size;
u32 hw_max_sectors;
- u32 fabric_max_sectors;
u32 optimal_sectors;
u32 hw_queue_depth;
u32 queue_depth;
__assign_str(name, dev->name);
__entry->queue_mapping = skb->queue_mapping;
__entry->skbaddr = skb;
- __entry->vlan_tagged = vlan_tx_tag_present(skb);
+ __entry->vlan_tagged = skb_vlan_tag_present(skb);
__entry->vlan_proto = ntohs(skb->vlan_proto);
- __entry->vlan_tci = vlan_tx_tag_get(skb);
+ __entry->vlan_tci = skb_vlan_tag_get(skb);
__entry->protocol = ntohs(skb->protocol);
__entry->ip_summed = skb->ip_summed;
__entry->len = skb->len;
#endif
__entry->queue_mapping = skb->queue_mapping;
__entry->skbaddr = skb;
- __entry->vlan_tagged = vlan_tx_tag_present(skb);
+ __entry->vlan_tagged = skb_vlan_tag_present(skb);
__entry->vlan_proto = ntohs(skb->vlan_proto);
- __entry->vlan_tci = vlan_tx_tag_get(skb);
+ __entry->vlan_tci = skb_vlan_tag_get(skb);
__entry->protocol = ntohs(skb->protocol);
__entry->ip_summed = skb->ip_summed;
__entry->hash = skb->hash;
/*
* FMODE_EXEC is 0x20
- * FMODE_NONOTIFY is 0x1000000
+ * FMODE_NONOTIFY is 0x4000000
* These cannot be used by userspace O_* until internal and external open
* flags are split.
* -Eric Paris
header-y += netlink_diag.h
header-y += netlink.h
header-y += netrom.h
+header-y += net_namespace.h
header-y += net_tstamp.h
header-y += nfc.h
header-y += nfs2.h
#define BRIDGE_VLAN_INFO_MASTER (1<<0) /* Operate on Bridge device as well */
#define BRIDGE_VLAN_INFO_PVID (1<<1) /* VLAN is PVID, ingress untagged */
#define BRIDGE_VLAN_INFO_UNTAGGED (1<<2) /* VLAN egresses untagged */
+#define BRIDGE_VLAN_INFO_RANGE_BEGIN (1<<3) /* VLAN is start of vlan range */
+#define BRIDGE_VLAN_INFO_RANGE_END (1<<4) /* VLAN is end of vlan range */
struct bridge_vlan_info {
__u16 flags;
IFLA_PHYS_PORT_ID,
IFLA_CARRIER_CHANGES,
IFLA_PHYS_SWITCH_ID,
+ IFLA_LINK_NETNSID,
__IFLA_MAX
};
IFLA_VXLAN_UDP_CSUM,
IFLA_VXLAN_UDP_ZERO_CSUM6_TX,
IFLA_VXLAN_UDP_ZERO_CSUM6_RX,
+ IFLA_VXLAN_REMCSUM_TX,
+ IFLA_VXLAN_REMCSUM_RX,
+ IFLA_VXLAN_GBP,
__IFLA_VXLAN_MAX
};
#define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
#define IP_MINTTL 21
#define IP_NODEFRAG 22
+#define IP_CHECKSUM 23
/* IP_MTU_DISCOVER values */
#define IP_PMTUDISC_DONT 0 /* Never send DF frames */
#ifndef _UAPI_IPV6_H
#define _UAPI_IPV6_H
+#include <linux/libc-compat.h>
#include <linux/types.h>
#include <linux/in6.h>
#include <asm/byteorder.h>
* *under construction*
*/
-
+#if __UAPI_DEF_IN6_PKTINFO
struct in6_pktinfo {
struct in6_addr ipi6_addr;
int ipi6_ifindex;
};
+#endif
+#if __UAPI_DEF_IP6_MTUINFO
struct ip6_mtuinfo {
struct sockaddr_in6 ip6m_addr;
__u32 ip6m_mtu;
};
+#endif
struct in6_ifreq {
struct in6_addr ifr6_addr;
DEVCONF_SUPPRESS_FRAG_NDISC,
DEVCONF_ACCEPT_RA_FROM_LOCAL,
DEVCONF_USE_OPTIMISTIC,
+ DEVCONF_ACCEPT_RA_MTU,
DEVCONF_MAX
};
uint32_t pad;
};
-#define KFD_IOC_MAGIC 'K'
+#define AMDKFD_IOCTL_BASE 'K'
+#define AMDKFD_IO(nr) _IO(AMDKFD_IOCTL_BASE, nr)
+#define AMDKFD_IOR(nr, type) _IOR(AMDKFD_IOCTL_BASE, nr, type)
+#define AMDKFD_IOW(nr, type) _IOW(AMDKFD_IOCTL_BASE, nr, type)
+#define AMDKFD_IOWR(nr, type) _IOWR(AMDKFD_IOCTL_BASE, nr, type)
-#define KFD_IOC_GET_VERSION \
- _IOR(KFD_IOC_MAGIC, 1, struct kfd_ioctl_get_version_args)
+#define AMDKFD_IOC_GET_VERSION \
+ AMDKFD_IOR(0x01, struct kfd_ioctl_get_version_args)
-#define KFD_IOC_CREATE_QUEUE \
- _IOWR(KFD_IOC_MAGIC, 2, struct kfd_ioctl_create_queue_args)
+#define AMDKFD_IOC_CREATE_QUEUE \
+ AMDKFD_IOWR(0x02, struct kfd_ioctl_create_queue_args)
-#define KFD_IOC_DESTROY_QUEUE \
- _IOWR(KFD_IOC_MAGIC, 3, struct kfd_ioctl_destroy_queue_args)
+#define AMDKFD_IOC_DESTROY_QUEUE \
+ AMDKFD_IOWR(0x03, struct kfd_ioctl_destroy_queue_args)
-#define KFD_IOC_SET_MEMORY_POLICY \
- _IOW(KFD_IOC_MAGIC, 4, struct kfd_ioctl_set_memory_policy_args)
+#define AMDKFD_IOC_SET_MEMORY_POLICY \
+ AMDKFD_IOW(0x04, struct kfd_ioctl_set_memory_policy_args)
-#define KFD_IOC_GET_CLOCK_COUNTERS \
- _IOWR(KFD_IOC_MAGIC, 5, struct kfd_ioctl_get_clock_counters_args)
+#define AMDKFD_IOC_GET_CLOCK_COUNTERS \
+ AMDKFD_IOWR(0x05, struct kfd_ioctl_get_clock_counters_args)
-#define KFD_IOC_GET_PROCESS_APERTURES \
- _IOR(KFD_IOC_MAGIC, 6, struct kfd_ioctl_get_process_apertures_args)
+#define AMDKFD_IOC_GET_PROCESS_APERTURES \
+ AMDKFD_IOR(0x06, struct kfd_ioctl_get_process_apertures_args)
-#define KFD_IOC_UPDATE_QUEUE \
- _IOW(KFD_IOC_MAGIC, 7, struct kfd_ioctl_update_queue_args)
+#define AMDKFD_IOC_UPDATE_QUEUE \
+ AMDKFD_IOW(0x07, struct kfd_ioctl_update_queue_args)
+
+#define AMDKFD_COMMAND_START 0x01
+#define AMDKFD_COMMAND_END 0x08
#endif
#define __UAPI_DEF_IPV6_MREQ 0
#define __UAPI_DEF_IPPROTO_V6 0
#define __UAPI_DEF_IPV6_OPTIONS 0
+#define __UAPI_DEF_IN6_PKTINFO 0
+#define __UAPI_DEF_IP6_MTUINFO 0
#else
#define __UAPI_DEF_IPV6_MREQ 1
#define __UAPI_DEF_IPPROTO_V6 1
#define __UAPI_DEF_IPV6_OPTIONS 1
+#define __UAPI_DEF_IN6_PKTINFO 1
+#define __UAPI_DEF_IP6_MTUINFO 1
#endif /* _NETINET_IN_H */
#define __UAPI_DEF_IPV6_MREQ 1
#define __UAPI_DEF_IPPROTO_V6 1
#define __UAPI_DEF_IPV6_OPTIONS 1
+#define __UAPI_DEF_IN6_PKTINFO 1
+#define __UAPI_DEF_IP6_MTUINFO 1
/* Definitions for xattr.h */
#define __UAPI_DEF_XATTR 1
NDA_VNI,
NDA_IFINDEX,
NDA_MASTER,
+ NDA_NDM_IFINDEX_NETNSID,
__NDA_MAX
};
--- /dev/null
+/* Copyright (c) 2015 6WIND S.A.
+ * Author: Nicolas Dichtel <nicolas.dichtel@6wind.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ */
+#ifndef _UAPI_LINUX_NET_NAMESPACE_H_
+#define _UAPI_LINUX_NET_NAMESPACE_H_
+
+/* Attributes of RTM_NEWNSID/RTM_GETNSID messages */
+enum {
+ NETNSA_NONE,
+#define NETNSA_NSID_NOT_ASSIGNED -1
+ NETNSA_NSID,
+ NETNSA_PID,
+ NETNSA_FD,
+ __NETNSA_MAX,
+};
+
+#define NETNSA_MAX (__NETNSA_MAX - 1)
+
+#endif /* _UAPI_LINUX_NET_NAMESPACE_H_ */
OVS_PACKET_ATTR_USERDATA, /* OVS_ACTION_ATTR_USERSPACE arg. */
OVS_PACKET_ATTR_EGRESS_TUN_KEY, /* Nested OVS_TUNNEL_KEY_ATTR_*
attributes. */
+ OVS_PACKET_ATTR_UNUSED1,
+ OVS_PACKET_ATTR_UNUSED2,
+ OVS_PACKET_ATTR_PROBE, /* Packet operation is a feature probe,
+ error logging should be suppressed. */
__OVS_PACKET_ATTR_MAX
};
#define OVS_VPORT_ATTR_MAX (__OVS_VPORT_ATTR_MAX - 1)
+enum {
+ OVS_VXLAN_EXT_UNSPEC,
+ OVS_VXLAN_EXT_GBP, /* Flag or __u32 */
+ __OVS_VXLAN_EXT_MAX,
+};
+
+#define OVS_VXLAN_EXT_MAX (__OVS_VXLAN_EXT_MAX - 1)
+
+
/* OVS_VPORT_ATTR_OPTIONS attributes for tunnels.
*/
enum {
OVS_TUNNEL_ATTR_UNSPEC,
OVS_TUNNEL_ATTR_DST_PORT, /* 16-bit UDP port, used by L4 tunnels. */
+ OVS_TUNNEL_ATTR_EXTENSION,
__OVS_TUNNEL_ATTR_MAX
};
OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS, /* Array of Geneve options. */
OVS_TUNNEL_KEY_ATTR_TP_SRC, /* be16 src Transport Port. */
OVS_TUNNEL_KEY_ATTR_TP_DST, /* be16 dst Transport Port. */
+ OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS, /* Nested OVS_VXLAN_EXT_* */
__OVS_TUNNEL_KEY_ATTR_MAX
};
* a wildcarded match. Omitting attribute is treated as wildcarding all
* corresponding fields. Optional for all requests. If not present,
* all flow key bits are exact match bits.
+ * @OVS_FLOW_ATTR_UFID: A value of 1 to 16 octets specifying a unique
+ * identifier for the flow. Causes the flow to be indexed by this value rather
+ * than the value of the %OVS_FLOW_ATTR_KEY attribute. Optional for all
+ * requests. Present in notifications if the flow was created with this
+ * attribute.
+ * @OVS_FLOW_ATTR_UFID_FLAGS: A 32-bit value of OR'd %OVS_UFID_F_*
+ * flags that provide alternative semantics for flow installation and
+ * retrieval. Optional for all requests.
*
* These attributes follow the &struct ovs_header within the Generic Netlink
* payload for %OVS_FLOW_* commands.
OVS_FLOW_ATTR_MASK, /* Sequence of OVS_KEY_ATTR_* attributes. */
OVS_FLOW_ATTR_PROBE, /* Flow operation is a feature probe, error
* logging should be suppressed. */
+ OVS_FLOW_ATTR_UFID, /* Variable length unique flow identifier. */
+ OVS_FLOW_ATTR_UFID_FLAGS,/* u32 of OVS_UFID_F_*. */
__OVS_FLOW_ATTR_MAX
};
#define OVS_FLOW_ATTR_MAX (__OVS_FLOW_ATTR_MAX - 1)
+/**
+ * Omit attributes for notifications.
+ *
+ * If a datapath request contains an %OVS_UFID_F_OMIT_* flag, then the datapath
+ * may omit the corresponding %OVS_FLOW_ATTR_* from the response.
+ */
+#define OVS_UFID_F_OMIT_KEY (1 << 0)
+#define OVS_UFID_F_OMIT_MASK (1 << 1)
+#define OVS_UFID_F_OMIT_ACTIONS (1 << 2)
+
/**
* enum ovs_sample_attr - Attributes for %OVS_ACTION_ATTR_SAMPLE action.
* @OVS_SAMPLE_ATTR_PROBABILITY: 32-bit fraction of packets to sample with
RTM_GETMDB = 86,
#define RTM_GETMDB RTM_GETMDB
+ RTM_NEWNSID = 88,
+#define RTM_NEWNSID RTM_NEWNSID
+ RTM_GETNSID = 90,
+#define RTM_GETNSID RTM_GETNSID
+
__RTM_MAX,
#define RTM_MAX (((__RTM_MAX + 3) & ~3) - 1)
};
#define RTAX_INITRWND RTAX_INITRWND
RTAX_QUICKACK,
#define RTAX_QUICKACK RTAX_QUICKACK
+ RTAX_CC_ALGO,
+#define RTAX_CC_ALGO RTAX_CC_ALGO
__RTAX_MAX
};
/* New extended info filters for IFLA_EXT_MASK */
#define RTEXT_FILTER_VF (1 << 0)
#define RTEXT_FILTER_BRVLAN (1 << 1)
+#define RTEXT_FILTER_BRVLAN_COMPRESSED (1 << 2)
/* End of information exported to user level */
header-y += tc_pedit.h
header-y += tc_skbedit.h
header-y += tc_vlan.h
+header-y += tc_bpf.h
--- /dev/null
+/*
+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef __LINUX_TC_BPF_H
+#define __LINUX_TC_BPF_H
+
+#include <linux/pkt_cls.h>
+
+#define TCA_ACT_BPF 13
+
+struct tc_act_bpf {
+ tc_gen;
+};
+
+enum {
+ TCA_ACT_BPF_UNSPEC,
+ TCA_ACT_BPF_TM,
+ TCA_ACT_BPF_PARMS,
+ TCA_ACT_BPF_OPS_LEN,
+ TCA_ACT_BPF_OPS,
+ __TCA_ACT_BPF_MAX,
+};
+#define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1)
+
+#endif
--- /dev/null
+#ifndef __UAPI_TC_CONNMARK_H
+#define __UAPI_TC_CONNMARK_H
+
+#include <linux/types.h>
+#include <linux/pkt_cls.h>
+
+#define TCA_ACT_CONNMARK 14
+
+struct tc_connmark {
+ tc_gen;
+ __u16 zone;
+};
+
+enum {
+ TCA_CONNMARK_UNSPEC,
+ TCA_CONNMARK_PARMS,
+ TCA_CONNMARK_TM,
+ __TCA_CONNMARK_MAX
+};
+#define TCA_CONNMARK_MAX (__TCA_CONNMARK_MAX - 1)
+
+#endif
struct vring_used *used;
};
+/* Alignment requirements for vring elements.
+ * When using pre-virtio 1.0 layout, these fall out naturally.
+ */
+#define VRING_AVAIL_ALIGN_SIZE 2
+#define VRING_USED_ALIGN_SIZE 4
+#define VRING_DESC_ALIGN_SIZE 16
+
/* The standard layout for the ring is a contiguous chunk of memory which looks
* like this. We assume num is a power of 2.
*
--- /dev/null
+/******************************************************************************
+ * nmi.h
+ *
+ * NMI callback registration and reason codes.
+ *
+ * Copyright (c) 2005, Keir Fraser <keir@xensource.com>
+ */
+
+#ifndef __XEN_PUBLIC_NMI_H__
+#define __XEN_PUBLIC_NMI_H__
+
+#include <xen/interface/xen.h>
+
+/*
+ * NMI reason codes:
+ * Currently these are x86-specific, stored in arch_shared_info.nmi_reason.
+ */
+ /* I/O-check error reported via ISA port 0x61, bit 6. */
+#define _XEN_NMIREASON_io_error 0
+#define XEN_NMIREASON_io_error (1UL << _XEN_NMIREASON_io_error)
+ /* PCI SERR reported via ISA port 0x61, bit 7. */
+#define _XEN_NMIREASON_pci_serr 1
+#define XEN_NMIREASON_pci_serr (1UL << _XEN_NMIREASON_pci_serr)
+ /* Unknown hardware-generated NMI. */
+#define _XEN_NMIREASON_unknown 2
+#define XEN_NMIREASON_unknown (1UL << _XEN_NMIREASON_unknown)
+
+/*
+ * long nmi_op(unsigned int cmd, void *arg)
+ * NB. All ops return zero on success, else a negative error code.
+ */
+
+/*
+ * Register NMI callback for this (calling) VCPU. Currently this only makes
+ * sense for domain 0, vcpu 0. All other callers will be returned EINVAL.
+ * arg == pointer to xennmi_callback structure.
+ */
+#define XENNMI_register_callback 0
+struct xennmi_callback {
+ unsigned long handler_address;
+ unsigned long pad;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xennmi_callback);
+
+/*
+ * Deregister NMI callback for this (calling) VCPU.
+ * arg == NULL.
+ */
+#define XENNMI_unregister_callback 1
+
+#endif /* __XEN_PUBLIC_NMI_H__ */
#include <asm/xen/page.h>
+static inline unsigned long page_to_mfn(struct page *page)
+{
+ return pfn_to_mfn(page_to_pfn(page));
+}
+
struct xen_memory_region {
phys_addr_t start;
phys_addr_t size;
#include <linux/fs_struct.h>
#include <linux/compat.h>
#include <linux/ctype.h>
+#include <linux/string.h>
+#include <uapi/linux/limits.h>
#include "audit.h"
}
list_for_each_entry_reverse(n, &context->names_list, list) {
- /* does the name pointer match? */
- if (!n->name || n->name->name != name->name)
+ if (!n->name || strcmp(n->name->name, name->name))
continue;
/* match the correct record type */
n = audit_alloc_name(context, AUDIT_TYPE_UNKNOWN);
if (!n)
return;
- if (name)
- /* since name is not NULL we know there is already a matching
- * name record, see audit_getname(), so there must be a type
- * mismatch; reuse the string path since the original name
- * record will keep the string valid until we free it in
- * audit_free_names() */
- n->name = name;
+ /* unfortunately, while we may have a path name to record with the
+ * inode, we can't always rely on the string lasting until the end of
+ * the syscall, so we need to create our own copy; it may fail due to
+ * memory allocation issues, but we do our best */
+ if (name) {
+ /* we can't use getname_kernel() due to size limits */
+ size_t len = strlen(name->name) + 1;
+ struct filename *new = __getname();
+
+ if (unlikely(!new))
+ goto out;
+
+ if (len <= (PATH_MAX - sizeof(*new))) {
+ new->name = (char *)(new) + sizeof(*new);
+ new->separate = false;
+ } else if (len <= PATH_MAX) {
+ /* this looks odd, but is due to final_putname() */
+ struct filename *new2;
+ new2 = kmalloc(sizeof(*new2), GFP_KERNEL);
+ if (unlikely(!new2)) {
+ __putname(new);
+ goto out;
+ }
+ new2->name = (char *)new;
+ new2->separate = true;
+ new = new2;
+ } else {
+ /* we should never get here, but let's be safe */
+ __putname(new);
+ goto out;
+ }
+ strlcpy((char *)new->name, name->name, len);
+ new->uptr = NULL;
+ new->aname = n;
+ n->name = new;
+ n->name_put = true;
+ }
out:
if (parent) {
n->name_len = n->name ? parent_len(n->name->name) : AUDIT_NAME_FULL;
* version 2. This program is licensed "as is" without any warranty of any
* kind, whether express or implied.
*/
+
+#define pr_fmt(fmt) "KGDB: " fmt
+
#include <linux/pid_namespace.h>
#include <linux/clocksource.h>
#include <linux/serial_core.h>
return err;
err = kgdb_arch_remove_breakpoint(&tmp);
if (err)
- printk(KERN_ERR "KGDB: Critical breakpoint error, kernel "
- "memory destroyed at: %lx", addr);
+ pr_err("Critical breakpoint error, kernel memory destroyed at: %lx\n",
+ addr);
return err;
}
error = kgdb_arch_set_breakpoint(&kgdb_break[i]);
if (error) {
ret = error;
- printk(KERN_INFO "KGDB: BP install failed: %lx",
- kgdb_break[i].bpt_addr);
+ pr_info("BP install failed: %lx\n",
+ kgdb_break[i].bpt_addr);
continue;
}
continue;
error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
if (error) {
- printk(KERN_INFO "KGDB: BP remove failed: %lx\n",
- kgdb_break[i].bpt_addr);
+ pr_info("BP remove failed: %lx\n",
+ kgdb_break[i].bpt_addr);
ret = error;
}
goto setundefined;
error = kgdb_arch_remove_breakpoint(&kgdb_break[i]);
if (error)
- printk(KERN_ERR "KGDB: breakpoint remove failed: %lx\n",
+ pr_err("breakpoint remove failed: %lx\n",
kgdb_break[i].bpt_addr);
setundefined:
kgdb_break[i].state = BP_UNDEFINED;
if (print_wait) {
#ifdef CONFIG_KGDB_KDB
if (!dbg_kdb_mode)
- printk(KERN_CRIT "KGDB: waiting... or $3#33 for KDB\n");
+ pr_crit("waiting... or $3#33 for KDB\n");
#else
- printk(KERN_CRIT "KGDB: Waiting for remote debugger\n");
+ pr_crit("Waiting for remote debugger\n");
#endif
}
return 1;
exception_level = 0;
kgdb_skipexception(ks->ex_vector, ks->linux_regs);
dbg_activate_sw_breakpoints();
- printk(KERN_CRIT "KGDB: re-enter error: breakpoint removed %lx\n",
- addr);
+ pr_crit("re-enter error: breakpoint removed %lx\n", addr);
WARN_ON_ONCE(1);
return 1;
panic("Recursive entry to debugger");
}
- printk(KERN_CRIT "KGDB: re-enter exception: ALL breakpoints killed\n");
+ pr_crit("re-enter exception: ALL breakpoints killed\n");
#ifdef CONFIG_KGDB_KDB
/* Allow kdb to debug itself one level */
return 0;
int cpu;
int trace_on = 0;
int online_cpus = num_online_cpus();
+ u64 time_left;
kgdb_info[ks->cpu].enter_kgdb++;
kgdb_info[ks->cpu].exception_state |= exception_state;
/*
* Wait for the other CPUs to be notified and be waiting for us:
*/
- while (kgdb_do_roundup && (atomic_read(&masters_in_kgdb) +
- atomic_read(&slaves_in_kgdb)) != online_cpus)
+ time_left = loops_per_jiffy * HZ;
+ while (kgdb_do_roundup && --time_left &&
+ (atomic_read(&masters_in_kgdb) + atomic_read(&slaves_in_kgdb)) !=
+ online_cpus)
cpu_relax();
+ if (!time_left)
+ pr_crit("KGDB: Timed out waiting for secondary CPUs.\n");
/*
* At this point the primary processor is completely
static void sysrq_handle_dbg(int key)
{
if (!dbg_io_ops) {
- printk(KERN_CRIT "ERROR: No KGDB I/O module available\n");
+ pr_crit("ERROR: No KGDB I/O module available\n");
return;
}
if (!kgdb_connected) {
#ifdef CONFIG_KGDB_KDB
if (!dbg_kdb_mode)
- printk(KERN_CRIT "KGDB or $3#33 for KDB\n");
+ pr_crit("KGDB or $3#33 for KDB\n");
#else
- printk(KERN_CRIT "Entering KGDB\n");
+ pr_crit("Entering KGDB\n");
#endif
}
{
kgdb_break_asap = 0;
- printk(KERN_CRIT "kgdb: Waiting for connection from remote gdb...\n");
+ pr_crit("Waiting for connection from remote gdb...\n");
kgdb_breakpoint();
}
if (dbg_io_ops) {
spin_unlock(&kgdb_registration_lock);
- printk(KERN_ERR "kgdb: Another I/O driver is already "
- "registered with KGDB.\n");
+ pr_err("Another I/O driver is already registered with KGDB\n");
return -EBUSY;
}
spin_unlock(&kgdb_registration_lock);
- printk(KERN_INFO "kgdb: Registered I/O driver %s.\n",
- new_dbg_io_ops->name);
+ pr_info("Registered I/O driver %s\n", new_dbg_io_ops->name);
/* Arm KGDB now. */
kgdb_register_callbacks();
spin_unlock(&kgdb_registration_lock);
- printk(KERN_INFO
- "kgdb: Unregistered I/O driver %s, debugger disabled.\n",
+ pr_info("Unregistered I/O driver %s, debugger disabled\n",
old_dbg_io_ops->name);
}
EXPORT_SYMBOL_GPL(kgdb_unregister_io_module);
for (i = 0, bp = kdb_breakpoints; i < KDB_MAXBPT; i++, bp++)
bp->bp_free = 1;
- kdb_register_repeat("bp", kdb_bp, "[<vaddr>]",
- "Set/Display breakpoints", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("bl", kdb_bp, "[<vaddr>]",
- "Display breakpoints", 0, KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("bp", kdb_bp, "[<vaddr>]",
+ "Set/Display breakpoints", 0,
+ KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("bl", kdb_bp, "[<vaddr>]",
+ "Display breakpoints", 0,
+ KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
if (arch_kgdb_ops.flags & KGDB_HW_BREAKPOINT)
- kdb_register_repeat("bph", kdb_bp, "[<vaddr>]",
- "[datar [length]|dataw [length]] Set hw brk", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("bc", kdb_bc, "<bpnum>",
- "Clear Breakpoint", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("be", kdb_bc, "<bpnum>",
- "Enable Breakpoint", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("bd", kdb_bc, "<bpnum>",
- "Disable Breakpoint", 0, KDB_REPEAT_NONE);
-
- kdb_register_repeat("ss", kdb_ss, "",
- "Single Step", 1, KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("bph", kdb_bp, "[<vaddr>]",
+ "[datar [length]|dataw [length]] Set hw brk", 0,
+ KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("bc", kdb_bc, "<bpnum>",
+ "Clear Breakpoint", 0,
+ KDB_ENABLE_FLOW_CTRL);
+ kdb_register_flags("be", kdb_bc, "<bpnum>",
+ "Enable Breakpoint", 0,
+ KDB_ENABLE_FLOW_CTRL);
+ kdb_register_flags("bd", kdb_bc, "<bpnum>",
+ "Disable Breakpoint", 0,
+ KDB_ENABLE_FLOW_CTRL);
+
+ kdb_register_flags("ss", kdb_ss, "",
+ "Single Step", 1,
+ KDB_ENABLE_FLOW_CTRL | KDB_REPEAT_NO_ARGS);
/*
* Architecture dependent initialization.
*/
ks->pass_exception = 1;
KDB_FLAG_SET(CATASTROPHIC);
}
+ /* set CATASTROPHIC if the system contains unresponsive processors */
+ for_each_online_cpu(i)
+ if (!kgdb_info[i].enter_kgdb)
+ KDB_FLAG_SET(CATASTROPHIC);
if (KDB_STATE(SSBPT) && reason == KDB_REASON_SSTEP) {
KDB_STATE_CLEAR(SSBPT);
KDB_STATE_CLEAR(DOING_SS);
*/
#include <linux/ctype.h>
+#include <linux/types.h>
#include <linux/string.h>
#include <linux/kernel.h>
#include <linux/kmsg_dump.h>
#include <linux/vmalloc.h>
#include <linux/atomic.h>
#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <linux/mm.h>
#include <linux/init.h>
#include <linux/kallsyms.h>
#include <linux/slab.h>
#include "kdb_private.h"
+#undef MODULE_PARAM_PREFIX
+#define MODULE_PARAM_PREFIX "kdb."
+
+static int kdb_cmd_enabled = CONFIG_KDB_DEFAULT_ENABLE;
+module_param_named(cmd_enable, kdb_cmd_enabled, int, 0600);
+
#define GREP_LEN 256
char kdb_grep_string[GREP_LEN];
int kdb_grepping_flag;
KDBMSG(BADLENGTH, "Invalid length field"),
KDBMSG(NOBP, "No Breakpoint exists"),
KDBMSG(BADADDR, "Invalid address"),
+ KDBMSG(NOPERM, "Permission denied"),
};
#undef KDBMSG
return p;
}
+/*
+ * Check whether the flags of the current command and the permissions
+ * of the kdb console has allow a command to be run.
+ */
+static inline bool kdb_check_flags(kdb_cmdflags_t flags, int permissions,
+ bool no_args)
+{
+ /* permissions comes from userspace so needs massaging slightly */
+ permissions &= KDB_ENABLE_MASK;
+ permissions |= KDB_ENABLE_ALWAYS_SAFE;
+
+ /* some commands change group when launched with no arguments */
+ if (no_args)
+ permissions |= permissions << KDB_ENABLE_NO_ARGS_SHIFT;
+
+ flags |= KDB_ENABLE_ALL;
+
+ return permissions & flags;
+}
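In other words, a command runs only if its flag word intersects the massaged permission mask; for example, with kdb.cmd_enable=0x2 a memory-read command passes while a reboot command does not. The gating pattern used elsewhere in this patch looks like:

	/* Sketch: gate a hypothetical command the way kdb_parse() does. */
	if (!kdb_check_flags(KDB_ENABLE_MEM_READ, kdb_cmd_enabled, argc <= 1))
		return KDB_NOPERM;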
+
/*
* kdbgetenv - This function will return the character string value of
* an environment variable.
char *cp;
kdb_symtab_t symtab;
+ /*
+ * If the enable flags prohibit both arbitrary memory access
+ * and flow control then there are no reasonable grounds to
+ * provide symbol lookup.
+ */
+ if (!kdb_check_flags(KDB_ENABLE_MEM_READ | KDB_ENABLE_FLOW_CTRL,
+ kdb_cmd_enabled, false))
+ return KDB_NOPERM;
+
/*
* Process arguments which follow the following syntax:
*
if (!s->count)
s->usable = 0;
if (s->usable)
- kdb_register(s->name, kdb_exec_defcmd,
- s->usage, s->help, 0);
+ /* macros are always safe because when executed each
+ * internal command re-enters kdb_parse() and is
+ * safety checked individually.
+ */
+ kdb_register_flags(s->name, kdb_exec_defcmd, s->usage,
+ s->help, 0,
+ KDB_ENABLE_ALWAYS_SAFE);
return 0;
}
if (!s->usable)
if (i < kdb_max_commands) {
int result;
+
+ if (!kdb_check_flags(tp->cmd_flags, kdb_cmd_enabled, argc <= 1))
+ return KDB_NOPERM;
+
KDB_STATE_SET(CMD);
result = (*tp->cmd_func)(argc-1, (const char **)argv);
if (result && ignore_errors && result > KDB_CMD_GO)
result = 0;
KDB_STATE_CLEAR(CMD);
- switch (tp->cmd_repeat) {
- case KDB_REPEAT_NONE:
- argc = 0;
- if (argv[0])
- *(argv[0]) = '\0';
- break;
- case KDB_REPEAT_NO_ARGS:
- argc = 1;
- if (argv[1])
- *(argv[1]) = '\0';
- break;
- case KDB_REPEAT_WITH_ARGS:
- break;
- }
+
+ if (tp->cmd_flags & KDB_REPEAT_WITH_ARGS)
+ return result;
+
+ argc = tp->cmd_flags & KDB_REPEAT_NO_ARGS ? 1 : 0;
+ if (argv[argc])
+ *(argv[argc]) = '\0';
return result;
}
*/
static int kdb_sr(int argc, const char **argv)
{
+ bool check_mask =
+ !kdb_check_flags(KDB_ENABLE_ALL, kdb_cmd_enabled, false);
+
if (argc != 1)
return KDB_ARGCOUNT;
+
kdb_trap_printk++;
- __handle_sysrq(*argv[1], false);
+ __handle_sysrq(*argv[1], check_mask);
kdb_trap_printk--;
return 0;
for (start_cpu = -1, i = 0; i < NR_CPUS; i++) {
if (!cpu_online(i)) {
state = 'F'; /* cpu is offline */
+ } else if (!kgdb_info[i].enter_kgdb) {
+ state = 'D'; /* cpu is online but unresponsive */
} else {
state = ' '; /* cpu is responding to kdb */
if (kdb_task_state_char(KDB_TSK(i)) == 'I')
/*
* Validate cpunum
*/
- if ((cpunum > NR_CPUS) || !cpu_online(cpunum))
+ if ((cpunum > NR_CPUS) || !kgdb_info[cpunum].enter_kgdb)
return KDB_BADCPUNUM;
dbg_switch_cpu = cpunum;
return 0;
if (!kt->cmd_name)
continue;
+ if (!kdb_check_flags(kt->cmd_flags, kdb_cmd_enabled, true))
+ continue;
if (strlen(kt->cmd_usage) > 20)
space = "\n ";
kdb_printf("%-15.15s %-20s%s%s\n", kt->cmd_name,
}
/*
- * kdb_register_repeat - This function is used to register a kernel
+ * kdb_register_flags - This function is used to register a kernel
* debugger command.
* Inputs:
* cmd Command name
* zero for success, one if a duplicate command.
*/
#define kdb_command_extend 50 /* arbitrary */
-int kdb_register_repeat(char *cmd,
- kdb_func_t func,
- char *usage,
- char *help,
- short minlen,
- kdb_repeat_t repeat)
+int kdb_register_flags(char *cmd,
+ kdb_func_t func,
+ char *usage,
+ char *help,
+ short minlen,
+ kdb_cmdflags_t flags)
{
int i;
kdbtab_t *kp;
kp->cmd_func = func;
kp->cmd_usage = usage;
kp->cmd_help = help;
- kp->cmd_flags = 0;
kp->cmd_minlen = minlen;
- kp->cmd_repeat = repeat;
+ kp->cmd_flags = flags;
return 0;
}
-EXPORT_SYMBOL_GPL(kdb_register_repeat);
+EXPORT_SYMBOL_GPL(kdb_register_flags);
/*
* kdb_register - Compatibility register function for commands that do
* not need to specify a repeat state. Equivalent to
- * kdb_register_repeat with KDB_REPEAT_NONE.
+ * kdb_register_flags with flags set to 0.
* Inputs:
* cmd Command name
* func Function to execute the command
char *help,
short minlen)
{
- return kdb_register_repeat(cmd, func, usage, help, minlen,
- KDB_REPEAT_NONE);
+ return kdb_register_flags(cmd, func, usage, help, minlen, 0);
}
EXPORT_SYMBOL_GPL(kdb_register);
for_each_kdbcmd(kp, i)
kp->cmd_name = NULL;
- kdb_register_repeat("md", kdb_md, "<vaddr>",
+ kdb_register_flags("md", kdb_md, "<vaddr>",
"Display Memory Contents, also mdWcN, e.g. md8c1", 1,
- KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("mdr", kdb_md, "<vaddr> <bytes>",
- "Display Raw Memory", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("mdp", kdb_md, "<paddr> <bytes>",
- "Display Physical Memory", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("mds", kdb_md, "<vaddr>",
- "Display Memory Symbolically", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("mm", kdb_mm, "<vaddr> <contents>",
- "Modify Memory Contents", 0, KDB_REPEAT_NO_ARGS);
- kdb_register_repeat("go", kdb_go, "[<vaddr>]",
- "Continue Execution", 1, KDB_REPEAT_NONE);
- kdb_register_repeat("rd", kdb_rd, "",
- "Display Registers", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("rm", kdb_rm, "<reg> <contents>",
- "Modify Registers", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("ef", kdb_ef, "<vaddr>",
- "Display exception frame", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("bt", kdb_bt, "[<vaddr>]",
- "Stack traceback", 1, KDB_REPEAT_NONE);
- kdb_register_repeat("btp", kdb_bt, "<pid>",
- "Display stack for process <pid>", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("bta", kdb_bt, "[D|R|S|T|C|Z|E|U|I|M|A]",
- "Backtrace all processes matching state flag", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("btc", kdb_bt, "",
- "Backtrace current process on each cpu", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("btt", kdb_bt, "<vaddr>",
+ KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("mdr", kdb_md, "<vaddr> <bytes>",
+ "Display Raw Memory", 0,
+ KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("mdp", kdb_md, "<paddr> <bytes>",
+ "Display Physical Memory", 0,
+ KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("mds", kdb_md, "<vaddr>",
+ "Display Memory Symbolically", 0,
+ KDB_ENABLE_MEM_READ | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("mm", kdb_mm, "<vaddr> <contents>",
+ "Modify Memory Contents", 0,
+ KDB_ENABLE_MEM_WRITE | KDB_REPEAT_NO_ARGS);
+ kdb_register_flags("go", kdb_go, "[<vaddr>]",
+ "Continue Execution", 1,
+ KDB_ENABLE_REG_WRITE | KDB_ENABLE_ALWAYS_SAFE_NO_ARGS);
+ kdb_register_flags("rd", kdb_rd, "",
+ "Display Registers", 0,
+ KDB_ENABLE_REG_READ);
+ kdb_register_flags("rm", kdb_rm, "<reg> <contents>",
+ "Modify Registers", 0,
+ KDB_ENABLE_REG_WRITE);
+ kdb_register_flags("ef", kdb_ef, "<vaddr>",
+ "Display exception frame", 0,
+ KDB_ENABLE_MEM_READ);
+ kdb_register_flags("bt", kdb_bt, "[<vaddr>]",
+ "Stack traceback", 1,
+ KDB_ENABLE_MEM_READ | KDB_ENABLE_INSPECT_NO_ARGS);
+ kdb_register_flags("btp", kdb_bt, "<pid>",
+ "Display stack for process <pid>", 0,
+ KDB_ENABLE_INSPECT);
+ kdb_register_flags("bta", kdb_bt, "[D|R|S|T|C|Z|E|U|I|M|A]",
+ "Backtrace all processes matching state flag", 0,
+ KDB_ENABLE_INSPECT);
+ kdb_register_flags("btc", kdb_bt, "",
+ "Backtrace current process on each cpu", 0,
+ KDB_ENABLE_INSPECT);
+ kdb_register_flags("btt", kdb_bt, "<vaddr>",
"Backtrace process given its struct task address", 0,
- KDB_REPEAT_NONE);
- kdb_register_repeat("env", kdb_env, "",
- "Show environment variables", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("set", kdb_set, "",
- "Set environment variables", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("help", kdb_help, "",
- "Display Help Message", 1, KDB_REPEAT_NONE);
- kdb_register_repeat("?", kdb_help, "",
- "Display Help Message", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("cpu", kdb_cpu, "<cpunum>",
- "Switch to new cpu", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("kgdb", kdb_kgdb, "",
- "Enter kgdb mode", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("ps", kdb_ps, "[<flags>|A]",
- "Display active task list", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("pid", kdb_pid, "<pidnum>",
- "Switch to another task", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("reboot", kdb_reboot, "",
- "Reboot the machine immediately", 0, KDB_REPEAT_NONE);
+ KDB_ENABLE_MEM_READ | KDB_ENABLE_INSPECT_NO_ARGS);
+ kdb_register_flags("env", kdb_env, "",
+ "Show environment variables", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("set", kdb_set, "",
+ "Set environment variables", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("help", kdb_help, "",
+ "Display Help Message", 1,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("?", kdb_help, "",
+ "Display Help Message", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("cpu", kdb_cpu, "<cpunum>",
+ "Switch to new cpu", 0,
+ KDB_ENABLE_ALWAYS_SAFE_NO_ARGS);
+ kdb_register_flags("kgdb", kdb_kgdb, "",
+ "Enter kgdb mode", 0, 0);
+ kdb_register_flags("ps", kdb_ps, "[<flags>|A]",
+ "Display active task list", 0,
+ KDB_ENABLE_INSPECT);
+ kdb_register_flags("pid", kdb_pid, "<pidnum>",
+ "Switch to another task", 0,
+ KDB_ENABLE_INSPECT);
+ kdb_register_flags("reboot", kdb_reboot, "",
+ "Reboot the machine immediately", 0,
+ KDB_ENABLE_REBOOT);
#if defined(CONFIG_MODULES)
- kdb_register_repeat("lsmod", kdb_lsmod, "",
- "List loaded kernel modules", 0, KDB_REPEAT_NONE);
+ kdb_register_flags("lsmod", kdb_lsmod, "",
+ "List loaded kernel modules", 0,
+ KDB_ENABLE_INSPECT);
#endif
#if defined(CONFIG_MAGIC_SYSRQ)
- kdb_register_repeat("sr", kdb_sr, "<key>",
- "Magic SysRq key", 0, KDB_REPEAT_NONE);
+ kdb_register_flags("sr", kdb_sr, "<key>",
+ "Magic SysRq key", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
#endif
#if defined(CONFIG_PRINTK)
- kdb_register_repeat("dmesg", kdb_dmesg, "[lines]",
- "Display syslog buffer", 0, KDB_REPEAT_NONE);
+ kdb_register_flags("dmesg", kdb_dmesg, "[lines]",
+ "Display syslog buffer", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
#endif
if (arch_kgdb_ops.enable_nmi) {
- kdb_register_repeat("disable_nmi", kdb_disable_nmi, "",
- "Disable NMI entry to KDB", 0, KDB_REPEAT_NONE);
- }
- kdb_register_repeat("defcmd", kdb_defcmd, "name \"usage\" \"help\"",
- "Define a set of commands, down to endefcmd", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("kill", kdb_kill, "<-signal> <pid>",
- "Send a signal to a process", 0, KDB_REPEAT_NONE);
- kdb_register_repeat("summary", kdb_summary, "",
- "Summarize the system", 4, KDB_REPEAT_NONE);
- kdb_register_repeat("per_cpu", kdb_per_cpu, "<sym> [<bytes>] [<cpu>]",
- "Display per_cpu variables", 3, KDB_REPEAT_NONE);
- kdb_register_repeat("grephelp", kdb_grep_help, "",
- "Display help on | grep", 0, KDB_REPEAT_NONE);
+ kdb_register_flags("disable_nmi", kdb_disable_nmi, "",
+ "Disable NMI entry to KDB", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
+ }
+ kdb_register_flags("defcmd", kdb_defcmd, "name \"usage\" \"help\"",
+ "Define a set of commands, down to endefcmd", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("kill", kdb_kill, "<-signal> <pid>",
+ "Send a signal to a process", 0,
+ KDB_ENABLE_SIGNAL);
+ kdb_register_flags("summary", kdb_summary, "",
+ "Summarize the system", 4,
+ KDB_ENABLE_ALWAYS_SAFE);
+ kdb_register_flags("per_cpu", kdb_per_cpu, "<sym> [<bytes>] [<cpu>]",
+ "Display per_cpu variables", 3,
+ KDB_ENABLE_MEM_READ);
+ kdb_register_flags("grephelp", kdb_grep_help, "",
+ "Display help on | grep", 0,
+ KDB_ENABLE_ALWAYS_SAFE);
}
/* Execute any commands defined in kdb_cmds. */
kdb_func_t cmd_func; /* Function to execute command */
char *cmd_usage; /* Usage String for this command */
char *cmd_help; /* Help message for this command */
- short cmd_flags; /* Parsing flags */
short cmd_minlen; /* Minimum legal # command
* chars required */
- kdb_repeat_t cmd_repeat; /* Does command auto repeat on enter? */
+ kdb_cmdflags_t cmd_flags; /* Command behaviour flags */
} kdbtab_t;
extern int kdb_bt(int, const char **); /* KDB display back trace */
}
static void perf_sample_regs_user(struct perf_regs *regs_user,
- struct pt_regs *regs)
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
{
- if (!user_mode(regs)) {
- if (current->mm)
- regs = task_pt_regs(current);
- else
- regs = NULL;
- }
-
- if (regs) {
- regs_user->abi = perf_reg_abi(current);
+ if (user_mode(regs)) {
+ regs_user->abi = perf_reg_abi(current);
regs_user->regs = regs;
+ } else if (current->mm) {
+ perf_get_regs_user(regs_user, regs, regs_user_copy);
} else {
regs_user->abi = PERF_SAMPLE_REGS_ABI_NONE;
regs_user->regs = NULL;
}
if (sample_type & (PERF_SAMPLE_REGS_USER | PERF_SAMPLE_STACK_USER))
- perf_sample_regs_user(&data->regs_user, regs);
+ perf_sample_regs_user(&data->regs_user, regs,
+ &data->regs_user_copy);
if (sample_type & PERF_SAMPLE_REGS_USER) {
/* regs dump ABI info */
static int wait_consider_task(struct wait_opts *wo, int ptrace,
struct task_struct *p)
{
+ /*
+ * We can race with wait_task_zombie() from another thread.
+ * Ensure that EXIT_ZOMBIE -> EXIT_DEAD/EXIT_TRACE transition
+ * can't confuse the checks below.
+ */
+ int exit_state = ACCESS_ONCE(p->exit_state);
int ret;
- if (unlikely(p->exit_state == EXIT_DEAD))
+ if (unlikely(exit_state == EXIT_DEAD))
return 0;
ret = eligible_child(wo, p);
return 0;
}
- if (unlikely(p->exit_state == EXIT_TRACE)) {
+ if (unlikely(exit_state == EXIT_TRACE)) {
/*
* ptrace == 0 means we are the natural parent. In this case
* we should clear notask_error, debugger will notify us.
}
/* slay zombie? */
- if (p->exit_state == EXIT_ZOMBIE) {
+ if (exit_state == EXIT_ZOMBIE) {
/* we don't reap group leaders with subthreads */
if (!delay_group_leader(p)) {
/*
DEBUG_LOCKS_WARN_ON(lock->owner != current);
DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
- mutex_clear_owner(lock);
}
/*
* __mutex_slowpath_needs_to_unlock() is explicitly 0 for debug
* mutexes so that we can do it here after we've verified state.
*/
+ mutex_clear_owner(lock);
atomic_set(&lock->count, 1);
}
}
EXPORT_SYMBOL(_raw_spin_lock_nested);
+void __lockfunc _raw_spin_lock_bh_nested(raw_spinlock_t *lock, int subclass)
+{
+ __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
+ spin_acquire(&lock->dep_map, subclass, 0, _RET_IP_);
+ LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
+}
+EXPORT_SYMBOL(_raw_spin_lock_bh_nested);
+
unsigned long __lockfunc _raw_spin_lock_irqsave_nested(raw_spinlock_t *lock,
int subclass)
{
#endif
#ifdef CONFIG_RT_GROUP_SCHED
alloc_size += 2 * nr_cpu_ids * sizeof(void **);
-#endif
-#ifdef CONFIG_CPUMASK_OFFSTACK
- alloc_size += num_possible_cpus() * cpumask_size();
#endif
if (alloc_size) {
ptr = (unsigned long)kzalloc(alloc_size, GFP_NOWAIT);
ptr += nr_cpu_ids * sizeof(void **);
#endif /* CONFIG_RT_GROUP_SCHED */
+ }
#ifdef CONFIG_CPUMASK_OFFSTACK
- for_each_possible_cpu(i) {
- per_cpu(load_balance_mask, i) = (void *)ptr;
- ptr += cpumask_size();
- }
-#endif /* CONFIG_CPUMASK_OFFSTACK */
+ for_each_possible_cpu(i) {
+ per_cpu(load_balance_mask, i) = (cpumask_var_t)kzalloc_node(
+ cpumask_size(), GFP_KERNEL, cpu_to_node(i));
}
+#endif /* CONFIG_CPUMASK_OFFSTACK */
init_rt_bandwidth(&def_rt_bandwidth,
global_rt_period(), global_rt_runtime());
static
int dl_runtime_exceeded(struct rq *rq, struct sched_dl_entity *dl_se)
{
- int dmiss = dl_time_before(dl_se->deadline, rq_clock(rq));
- int rorun = dl_se->runtime <= 0;
-
- if (!rorun && !dmiss)
- return 0;
-
- /*
- * If we are beyond our current deadline and we are still
- * executing, then we have already used some of the runtime of
- * the next instance. Thus, if we do not account that, we are
- * stealing bandwidth from the system at each deadline miss!
- */
- if (dmiss) {
- dl_se->runtime = rorun ? dl_se->runtime : 0;
- dl_se->runtime -= rq_clock(rq) - dl_se->deadline;
- }
-
- return 1;
+ return (dl_se->runtime <= 0);
}
extern bool sched_rt_bandwidth_account(struct rt_rq *rt_rq);
* parameters of the task might need updating. Otherwise,
* we want a replenishment of its runtime.
*/
- if (!dl_se->dl_new && flags & ENQUEUE_REPLENISH)
- replenish_dl_entity(dl_se, pi_se);
- else
+ if (dl_se->dl_new || flags & ENQUEUE_WAKEUP)
update_dl_entity(dl_se, pi_se);
+ else if (flags & ENQUEUE_REPLENISH)
+ replenish_dl_entity(dl_se, pi_se);
__enqueue_dl_entity(dl_se);
}
static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
{
+ /* init_cfs_bandwidth() was not called */
+ if (!cfs_b->throttled_cfs_rq.next)
+ return;
+
hrtimer_cancel(&cfs_b->period_timer);
hrtimer_cancel(&cfs_b->slack_timer);
}
* wl = S * s'_i; see (2)
*/
if (W > 0 && w < W)
- wl = (w * tg->shares) / W;
+ wl = (w * (long)tg->shares) / W;
else
wl = tg->shares;
{
struct genlmsghdr *genlhdr = nlmsg_data(nlmsg_hdr(skb));
void *reply = genlmsg_data(genlhdr);
- int rc;
- rc = genlmsg_end(skb, reply);
- if (rc < 0) {
- nlmsg_free(skb);
- return rc;
- }
+ genlmsg_end(skb, reply);
return genlmsg_reply(skb, info);
}
void *reply = genlmsg_data(genlhdr);
int rc, delcount = 0;
- rc = genlmsg_end(skb, reply);
- if (rc < 0) {
- nlmsg_free(skb);
- return;
- }
+ genlmsg_end(skb, reply);
rc = 0;
down_read(&listeners->sem);
static __init int kdb_ftrace_register(void)
{
- kdb_register_repeat("ftdump", kdb_ftdump, "[skip_#lines] [cpu]",
- "Dump ftrace log", 0, KDB_REPEAT_NONE);
+ kdb_register_flags("ftdump", kdb_ftdump, "[skip_#lines] [cpu]",
+ "Dump ftrace log", 0, KDB_ENABLE_ALWAYS_SAFE);
return 0;
}
help
KDB frontend for kernel
+config KDB_DEFAULT_ENABLE
+ hex "KDB: Select kdb command functions to be enabled by default"
+ depends on KGDB_KDB
+ default 0x1
+ help
+ Specifies which kdb commands are enabled by default. This may
+ be set to 1 or 0 to enable all commands or disable almost all
+ commands.
+
+ Alternatively the following bitmask applies:
+
+ 0x0002 - allow arbitrary reads from memory and symbol lookup
+ 0x0004 - allow arbitrary writes to memory
+ 0x0008 - allow current register state to be inspected
+ 0x0010 - allow current register state to be modified
+ 0x0020 - allow passive inspection (backtrace, process list, lsmod)
+ 0x0040 - allow flow control management (breakpoint, single step)
+ 0x0080 - enable signalling of processes
+ 0x0100 - allow machine to be rebooted
+
+ The config option merely sets the default at boot time. Either
+ issuing 'echo X > /sys/module/kdb/parameters/cmd_enable' or
+ setting kdb.cmd_enable=X on the kernel command line will
+ override the default setting.
+
config KDB_KEYBOARD
bool "KGDB_KDB: keyboard as input device"
depends on VT && KGDB_KDB
* 2 of the Licence, or (at your option) any later version.
*/
//#define DEBUG
+#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/err.h>
#include <linux/assoc_array_priv.h>
#define HASH_DEFAULT_SIZE 64UL
#define HASH_MIN_SIZE 4UL
+#define BUCKET_LOCKS_PER_CPU 128UL
+
+/* Base bits plus 1 bit for nulls marker */
+#define HASH_RESERVED_SPACE (RHT_BASE_BITS + 1)
+
+enum {
+ RHT_LOCK_NORMAL,
+ RHT_LOCK_NESTED,
+ RHT_LOCK_NESTED2,
+};
+
+/* The bucket lock is selected based on the hash and protects mutations
+ * on a group of hash buckets.
+ *
+ * IMPORTANT: When holding the bucket lock of both the old and new table
+ * during expansions and shrinking, the old bucket lock must always be
+ * acquired first.
+ */
+static spinlock_t *bucket_lock(const struct bucket_table *tbl, u32 hash)
+{
+ return &tbl->locks[hash & tbl->locks_mask];
+}
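The ordering rule in the comment above matters during resizes, when an entry's old and new buckets must both be locked. A sketch of that ordering, assuming a spin_lock_bh_nested() wrapper matching the _raw_spin_lock_bh_nested() added elsewhere in this patch, with old_tbl/new_tbl and the two hashes in scope:

	spinlock_t *old_lock = bucket_lock(old_tbl, old_hash);
	spinlock_t *new_lock = bucket_lock(new_tbl, new_hash);

	spin_lock_bh(old_lock);			/* old table always first */
	spin_lock_bh_nested(new_lock, RHT_LOCK_NESTED);
	/* ... relink the entry ... */
	spin_unlock_bh(new_lock);
	spin_unlock_bh(old_lock);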
#define ASSERT_RHT_MUTEX(HT) BUG_ON(!lockdep_rht_mutex_is_held(HT))
+#define ASSERT_BUCKET_LOCK(TBL, HASH) \
+ BUG_ON(!lockdep_rht_bucket_is_held(TBL, HASH))
#ifdef CONFIG_PROVE_LOCKING
-int lockdep_rht_mutex_is_held(const struct rhashtable *ht)
+int lockdep_rht_mutex_is_held(struct rhashtable *ht)
{
- return ht->p.mutex_is_held(ht->p.parent);
+ return (debug_locks) ? lockdep_is_held(&ht->mutex) : 1;
}
EXPORT_SYMBOL_GPL(lockdep_rht_mutex_is_held);
+
+int lockdep_rht_bucket_is_held(const struct bucket_table *tbl, u32 hash)
+{
+ spinlock_t *lock = bucket_lock(tbl, hash);
+
+ return (debug_locks) ? lockdep_is_held(lock) : 1;
+}
+EXPORT_SYMBOL_GPL(lockdep_rht_bucket_is_held);
#endif
static void *rht_obj(const struct rhashtable *ht, const struct rhash_head *he)
return (void *) he - ht->p.head_offset;
}
-static u32 __hashfn(const struct rhashtable *ht, const void *key,
- u32 len, u32 hsize)
+static u32 rht_bucket_index(const struct bucket_table *tbl, u32 hash)
{
- u32 h;
+ return hash & (tbl->size - 1);
+}
- h = ht->p.hashfn(key, len, ht->p.hash_rnd);
+static u32 obj_raw_hashfn(const struct rhashtable *ht, const void *ptr)
+{
+ u32 hash;
+
+ if (unlikely(!ht->p.key_len))
+ hash = ht->p.obj_hashfn(ptr, ht->p.hash_rnd);
+ else
+ hash = ht->p.hashfn(ptr + ht->p.key_offset, ht->p.key_len,
+ ht->p.hash_rnd);
- return h & (hsize - 1);
+ return hash >> HASH_RESERVED_SPACE;
}
-/**
- * rhashtable_hashfn - compute hash for key of given length
- * @ht: hash table to compute for
- * @key: pointer to key
- * @len: length of key
- *
- * Computes the hash value using the hash function provided in the 'hashfn'
- * of struct rhashtable_params. The returned value is guaranteed to be
- * smaller than the number of buckets in the hash table.
- */
-u32 rhashtable_hashfn(const struct rhashtable *ht, const void *key, u32 len)
+static u32 key_hashfn(struct rhashtable *ht, const void *key, u32 len)
{
struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
+ u32 hash;
- return __hashfn(ht, key, len, tbl->size);
+ hash = ht->p.hashfn(key, len, ht->p.hash_rnd);
+ hash >>= HASH_RESERVED_SPACE;
+
+ return rht_bucket_index(tbl, hash);
}
-EXPORT_SYMBOL_GPL(rhashtable_hashfn);
-static u32 obj_hashfn(const struct rhashtable *ht, const void *ptr, u32 hsize)
+static u32 head_hashfn(const struct rhashtable *ht,
+ const struct bucket_table *tbl,
+ const struct rhash_head *he)
{
- if (unlikely(!ht->p.key_len)) {
- u32 h;
+ return rht_bucket_index(tbl, obj_raw_hashfn(ht, rht_obj(ht, he)));
+}
- h = ht->p.obj_hashfn(ptr, ht->p.hash_rnd);
+static struct rhash_head __rcu **bucket_tail(struct bucket_table *tbl, u32 n)
+{
+ struct rhash_head __rcu **pprev;
- return h & (hsize - 1);
- }
+ for (pprev = &tbl->buckets[n];
+ !rht_is_a_nulls(rht_dereference_bucket(*pprev, tbl, n));
+ pprev = &rht_dereference_bucket(*pprev, tbl, n)->next)
+ ;
- return __hashfn(ht, ptr + ht->p.key_offset, ht->p.key_len, hsize);
+ return pprev;
}
-/**
- * rhashtable_obj_hashfn - compute hash for hashed object
- * @ht: hash table to compute for
- * @ptr: pointer to hashed object
- *
- * Computes the hash value using the hash function `hashfn` respectively
- * 'obj_hashfn' depending on whether the hash table is set up to work with
- * a fixed length key. The returned value is guaranteed to be smaller than
- * the number of buckets in the hash table.
- */
-u32 rhashtable_obj_hashfn(const struct rhashtable *ht, void *ptr)
+static int alloc_bucket_locks(struct rhashtable *ht, struct bucket_table *tbl)
{
- struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
+ unsigned int i, size;
+#if defined(CONFIG_PROVE_LOCKING)
+ unsigned int nr_pcpus = 2;
+#else
+ unsigned int nr_pcpus = num_possible_cpus();
+#endif
+
+ nr_pcpus = min_t(unsigned int, nr_pcpus, 32UL);
+ size = roundup_pow_of_two(nr_pcpus * ht->p.locks_mul);
+
+ /* Never allocate more than one lock per bucket */
+ size = min_t(unsigned int, size, tbl->size);
+
+ if (sizeof(spinlock_t) != 0) {
+#ifdef CONFIG_NUMA
+ if (size * sizeof(spinlock_t) > PAGE_SIZE)
+ tbl->locks = vmalloc(size * sizeof(spinlock_t));
+ else
+#endif
+ tbl->locks = kmalloc_array(size, sizeof(spinlock_t),
+ GFP_KERNEL);
+ if (!tbl->locks)
+ return -ENOMEM;
+ for (i = 0; i < size; i++)
+ spin_lock_init(&tbl->locks[i]);
+ }
+ tbl->locks_mask = size - 1;
- return obj_hashfn(ht, ptr, tbl->size);
+ return 0;
}
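A sizing sketch under stated assumptions: with the default locks_mul of BUCKET_LOCKS_PER_CPU (128) and, say, 4 possible CPUs, this allocates roundup_pow_of_two(4 * 128) = 512 spinlocks, capped at one lock per bucket; with CONFIG_PROVE_LOCKING the CPU count is pinned to 2, presumably to bound lockdep's per-lock tracking overhead (the patch itself does not state the reason).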
-EXPORT_SYMBOL_GPL(rhashtable_obj_hashfn);
-static u32 head_hashfn(const struct rhashtable *ht,
- const struct rhash_head *he, u32 hsize)
+static void bucket_table_free(const struct bucket_table *tbl)
{
- return obj_hashfn(ht, rht_obj(ht, he), hsize);
+ if (tbl)
+ kvfree(tbl->locks);
+
+ kvfree(tbl);
}
-static struct bucket_table *bucket_table_alloc(size_t nbuckets)
+static struct bucket_table *bucket_table_alloc(struct rhashtable *ht,
+ size_t nbuckets)
{
struct bucket_table *tbl;
size_t size;
+ int i;
size = sizeof(*tbl) + nbuckets * sizeof(tbl->buckets[0]);
tbl = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
tbl->size = nbuckets;
- return tbl;
-}
+ if (alloc_bucket_locks(ht, tbl) < 0) {
+ bucket_table_free(tbl);
+ return NULL;
+ }
-static void bucket_table_free(const struct bucket_table *tbl)
-{
- kvfree(tbl);
+ for (i = 0; i < nbuckets; i++)
+ INIT_RHT_NULLS_HEAD(tbl->buckets[i], ht, i);
+
+ return tbl;
}
/**
bool rht_grow_above_75(const struct rhashtable *ht, size_t new_size)
{
/* Expand table when exceeding 75% load */
- return ht->nelems > (new_size / 4 * 3);
+ return atomic_read(&ht->nelems) > (new_size / 4 * 3) &&
+ (ht->p.max_shift && atomic_read(&ht->shift) < ht->p.max_shift);
}
EXPORT_SYMBOL_GPL(rht_grow_above_75);
bool rht_shrink_below_30(const struct rhashtable *ht, size_t new_size)
{
/* Shrink table beneath 30% load */
- return ht->nelems < (new_size * 3 / 10);
+ return atomic_read(&ht->nelems) < (new_size * 3 / 10) &&
+ (atomic_read(&ht->shift) > ht->p.min_shift);
}
EXPORT_SYMBOL_GPL(rht_shrink_below_30);
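Concretely, for a prospective new_size of 64: rht_grow_above_75() now returns true only once more than 48 entries (64 / 4 * 3) are linked and the table may still grow (shift below a non-zero max_shift, as the check is written), while rht_shrink_below_30() returns true once fewer than 19 entries (64 * 3 / 10, integer division) remain and shift is still above min_shift.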
static void hashtable_chain_unzip(const struct rhashtable *ht,
const struct bucket_table *new_tbl,
- struct bucket_table *old_tbl, size_t n)
+ struct bucket_table *old_tbl,
+ size_t old_hash)
{
struct rhash_head *he, *p, *next;
- unsigned int h;
+ spinlock_t *new_bucket_lock, *new_bucket_lock2 = NULL;
+ unsigned int new_hash, new_hash2;
+
+ ASSERT_BUCKET_LOCK(old_tbl, old_hash);
/* Old bucket empty, no work needed. */
- p = rht_dereference(old_tbl->buckets[n], ht);
- if (!p)
+ p = rht_dereference_bucket(old_tbl->buckets[old_hash], old_tbl,
+ old_hash);
+ if (rht_is_a_nulls(p))
return;
+ new_hash = new_hash2 = head_hashfn(ht, new_tbl, p);
+ new_bucket_lock = bucket_lock(new_tbl, new_hash);
+
/* Advance the old bucket pointer one or more times until it
	 * reaches a node that doesn't hash to the same new-table bucket as
	 * its predecessor; that last same-bucket node is called p below.
*/
- h = head_hashfn(ht, p, new_tbl->size);
- rht_for_each(he, p->next, ht) {
- if (head_hashfn(ht, he, new_tbl->size) != h)
+ rht_for_each_continue(he, p->next, old_tbl, old_hash) {
+ new_hash2 = head_hashfn(ht, new_tbl, he);
+ if (new_hash != new_hash2)
break;
p = he;
}
- RCU_INIT_POINTER(old_tbl->buckets[n], p->next);
+ rcu_assign_pointer(old_tbl->buckets[old_hash], p->next);
+
+ spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
+
+ /* If we have encountered an entry that maps to a different bucket in
+ * the new table, lock down that bucket as well as we might cut off
+ * the end of the chain.
+ */
+	new_bucket_lock2 = bucket_lock(new_tbl, new_hash2);
+ if (new_bucket_lock != new_bucket_lock2)
+ spin_lock_bh_nested(new_bucket_lock2, RHT_LOCK_NESTED2);
/* Find the subsequent node which does hash to the same
* bucket as node P, or NULL if no such node exists.
*/
- next = NULL;
- if (he) {
- rht_for_each(he, he->next, ht) {
- if (head_hashfn(ht, he, new_tbl->size) == h) {
+ INIT_RHT_NULLS_HEAD(next, ht, old_hash);
+ if (!rht_is_a_nulls(he)) {
+ rht_for_each_continue(he, he->next, old_tbl, old_hash) {
+ if (head_hashfn(ht, new_tbl, he) == new_hash) {
next = he;
break;
}
/* Set p's next pointer to that subsequent node pointer,
* bypassing the nodes which do not hash to p's bucket
*/
- RCU_INIT_POINTER(p->next, next);
+ rcu_assign_pointer(p->next, next);
+
+ if (new_bucket_lock != new_bucket_lock2)
+ spin_unlock_bh(new_bucket_lock2);
+ spin_unlock_bh(new_bucket_lock);
+}
+
+static void link_old_to_new(struct bucket_table *new_tbl,
+ unsigned int new_hash, struct rhash_head *entry)
+{
+ spinlock_t *new_bucket_lock;
+
+ new_bucket_lock = bucket_lock(new_tbl, new_hash);
+
+ spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
+ rcu_assign_pointer(*bucket_tail(new_tbl, new_hash), entry);
+ spin_unlock_bh(new_bucket_lock);
}
/**
* This function may only be called in a context where it is safe to call
* synchronize_rcu(), e.g. not within a rcu_read_lock() section.
*
- * The caller must ensure that no concurrent table mutations take place.
- * It is however valid to have concurrent lookups if they are RCU protected.
+ * The caller must ensure that no concurrent resizing occurs by holding
+ * ht->mutex.
+ *
+ * It is valid to have concurrent insertions and deletions protected by per
+ * bucket locks or concurrent RCU protected lookups and traversals.
*/
int rhashtable_expand(struct rhashtable *ht)
{
struct bucket_table *new_tbl, *old_tbl = rht_dereference(ht->tbl, ht);
struct rhash_head *he;
- unsigned int i, h;
- bool complete;
+ spinlock_t *old_bucket_lock;
+ unsigned int new_hash, old_hash;
+ bool complete = false;
ASSERT_RHT_MUTEX(ht);
- if (ht->p.max_shift && ht->shift >= ht->p.max_shift)
- return 0;
-
- new_tbl = bucket_table_alloc(old_tbl->size * 2);
+ new_tbl = bucket_table_alloc(ht, old_tbl->size * 2);
if (new_tbl == NULL)
return -ENOMEM;
- ht->shift++;
+ atomic_inc(&ht->shift);
+
+ /* Make insertions go into the new, empty table right away. Deletions
+ * and lookups will be attempted in both tables until we synchronize.
+ * The synchronize_rcu() guarantees for the new table to be picked up
+ * so no new additions go into the old table while we relink.
+ */
+ rcu_assign_pointer(ht->future_tbl, new_tbl);
+ synchronize_rcu();
- /* For each new bucket, search the corresponding old bucket
- * for the first entry that hashes to the new bucket, and
- * link the new bucket to that entry. Since all the entries
- * which will end up in the new bucket appear in the same
- * old bucket, this constructs an entirely valid new hash
- * table, but with multiple buckets "zipped" together into a
- * single imprecise chain.
+ /* For each new bucket, search the corresponding old bucket for the
+ * first entry that hashes to the new bucket, and link the end of
+ * newly formed bucket chain (containing entries added to future
+ * table) to that entry. Since all the entries which will end up in
+ * the new bucket appear in the same old bucket, this constructs an
+ * entirely valid new hash table, but with multiple buckets
+ * "zipped" together into a single imprecise chain.
*/
- for (i = 0; i < new_tbl->size; i++) {
- h = i & (old_tbl->size - 1);
- rht_for_each(he, old_tbl->buckets[h], ht) {
- if (head_hashfn(ht, he, new_tbl->size) == i) {
- RCU_INIT_POINTER(new_tbl->buckets[i], he);
+ for (new_hash = 0; new_hash < new_tbl->size; new_hash++) {
+ old_hash = rht_bucket_index(old_tbl, new_hash);
+ old_bucket_lock = bucket_lock(old_tbl, old_hash);
+
+ spin_lock_bh(old_bucket_lock);
+ rht_for_each(he, old_tbl, old_hash) {
+ if (head_hashfn(ht, new_tbl, he) == new_hash) {
+ link_old_to_new(new_tbl, new_hash, he);
break;
}
}
+ spin_unlock_bh(old_bucket_lock);
}
/* Publish the new table pointer. Lookups may now traverse
rcu_assign_pointer(ht->tbl, new_tbl);
/* Unzip interleaved hash chains */
- do {
+ while (!complete && !ht->being_destroyed) {
/* Wait for readers. All new readers will see the new
* table, and thus no references to the old table will
* remain.
* table): ...
*/
complete = true;
- for (i = 0; i < old_tbl->size; i++) {
- hashtable_chain_unzip(ht, new_tbl, old_tbl, i);
- if (old_tbl->buckets[i] != NULL)
+ for (old_hash = 0; old_hash < old_tbl->size; old_hash++) {
+ struct rhash_head *head;
+
+ old_bucket_lock = bucket_lock(old_tbl, old_hash);
+ spin_lock_bh(old_bucket_lock);
+
+ hashtable_chain_unzip(ht, new_tbl, old_tbl, old_hash);
+ head = rht_dereference_bucket(old_tbl->buckets[old_hash],
+ old_tbl, old_hash);
+ if (!rht_is_a_nulls(head))
complete = false;
+
+ spin_unlock_bh(old_bucket_lock);
}
- } while (!complete);
+ }
bucket_table_free(old_tbl);
return 0;
* This function may only be called in a context where it is safe to call
* synchronize_rcu(), e.g. not within a rcu_read_lock() section.
*
+ * The caller must ensure that no concurrent resizing occurs by holding
+ * ht->mutex.
+ *
- * The caller must ensure that no concurrent table mutations take place.
- * It is however valid to have concurrent lookups if they are RCU protected.
+ * It is valid to have concurrent insertions and deletions protected by per
+ * bucket locks or concurrent RCU protected lookups and traversals.
*/
int rhashtable_shrink(struct rhashtable *ht)
{
- struct bucket_table *ntbl, *tbl = rht_dereference(ht->tbl, ht);
- struct rhash_head __rcu **pprev;
- unsigned int i;
+ struct bucket_table *new_tbl, *tbl = rht_dereference(ht->tbl, ht);
+ spinlock_t *new_bucket_lock, *old_bucket_lock1, *old_bucket_lock2;
+ unsigned int new_hash;
ASSERT_RHT_MUTEX(ht);
- if (ht->shift <= ht->p.min_shift)
- return 0;
-
- ntbl = bucket_table_alloc(tbl->size / 2);
- if (ntbl == NULL)
+ new_tbl = bucket_table_alloc(ht, tbl->size / 2);
+ if (new_tbl == NULL)
return -ENOMEM;
- ht->shift--;
+ rcu_assign_pointer(ht->future_tbl, new_tbl);
+ synchronize_rcu();
- /* Link each bucket in the new table to the first bucket
- * in the old table that contains entries which will hash
- * to the new bucket.
+ /* Link the first entry in the old bucket to the end of the
+ * bucket in the new table. As entries are concurrently being
+ * added to the new table, lock down the new bucket. As we
+ * always divide the size in half when shrinking, each bucket
+ * in the new table maps to exactly two buckets in the old
+ * table.
+ *
+ * As removals can occur concurrently on the old table, we need
+ * to lock down both matching buckets in the old table.
*/
- for (i = 0; i < ntbl->size; i++) {
- ntbl->buckets[i] = tbl->buckets[i];
+ for (new_hash = 0; new_hash < new_tbl->size; new_hash++) {
+ old_bucket_lock1 = bucket_lock(tbl, new_hash);
+ old_bucket_lock2 = bucket_lock(tbl, new_hash + new_tbl->size);
+ new_bucket_lock = bucket_lock(new_tbl, new_hash);
+
+ spin_lock_bh(old_bucket_lock1);
- /* Link each bucket in the new table to the first bucket
- * in the old table that contains entries which will hash
- * to the new bucket.
+		/* Depending on the bucket-to-lock mapping, the buckets in
+		 * the lower and upper half may map to the same lock.
*/
- for (pprev = &ntbl->buckets[i]; *pprev != NULL;
- pprev = &rht_dereference(*pprev, ht)->next)
- ;
- RCU_INIT_POINTER(*pprev, tbl->buckets[i + ntbl->size]);
+ if (old_bucket_lock1 != old_bucket_lock2) {
+ spin_lock_bh_nested(old_bucket_lock2, RHT_LOCK_NESTED);
+ spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED2);
+ } else {
+ spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
+ }
+
+ rcu_assign_pointer(*bucket_tail(new_tbl, new_hash),
+ tbl->buckets[new_hash]);
+ rcu_assign_pointer(*bucket_tail(new_tbl, new_hash),
+ tbl->buckets[new_hash + new_tbl->size]);
+
+ spin_unlock_bh(new_bucket_lock);
+ if (old_bucket_lock1 != old_bucket_lock2)
+ spin_unlock_bh(old_bucket_lock2);
+ spin_unlock_bh(old_bucket_lock1);
}
/* Publish the new, valid hash table */
- rcu_assign_pointer(ht->tbl, ntbl);
+ rcu_assign_pointer(ht->tbl, new_tbl);
+ atomic_dec(&ht->shift);
/* Wait for readers. No new readers will have references to the
* old hash table.
}
EXPORT_SYMBOL_GPL(rhashtable_shrink);
-/**
- * rhashtable_insert - insert object into hash hash table
- * @ht: hash table
- * @obj: pointer to hash head inside object
- *
- * Will automatically grow the table via rhashtable_expand() if the the
- * grow_decision function specified at rhashtable_init() returns true.
- *
- * The caller must ensure that no concurrent table mutations occur. It is
- * however valid to have concurrent lookups if they are RCU protected.
- */
-void rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj)
+static void rht_deferred_worker(struct work_struct *work)
{
- struct bucket_table *tbl = rht_dereference(ht->tbl, ht);
- u32 hash;
-
- ASSERT_RHT_MUTEX(ht);
+ struct rhashtable *ht;
+ struct bucket_table *tbl;
- hash = head_hashfn(ht, obj, tbl->size);
- RCU_INIT_POINTER(obj->next, tbl->buckets[hash]);
- rcu_assign_pointer(tbl->buckets[hash], obj);
- ht->nelems++;
+ ht = container_of(work, struct rhashtable, run_work);
+ mutex_lock(&ht->mutex);
+ tbl = rht_dereference(ht->tbl, ht);
if (ht->p.grow_decision && ht->p.grow_decision(ht, tbl->size))
rhashtable_expand(ht);
+ else if (ht->p.shrink_decision && ht->p.shrink_decision(ht, tbl->size))
+ rhashtable_shrink(ht);
+
+ mutex_unlock(&ht->mutex);
+}
+
+static void rhashtable_wakeup_worker(struct rhashtable *ht)
+{
+ struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
+ struct bucket_table *new_tbl = rht_dereference_rcu(ht->future_tbl, ht);
+ size_t size = tbl->size;
+
+ /* Only adjust the table if no resizing is currently in progress. */
+ if (tbl == new_tbl &&
+ ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) ||
+ (ht->p.shrink_decision && ht->p.shrink_decision(ht, size))))
+ schedule_work(&ht->run_work);
+}
+
+static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
+ struct bucket_table *tbl, u32 hash)
+{
+ struct rhash_head *head = rht_dereference_bucket(tbl->buckets[hash],
+ tbl, hash);
+
+ if (rht_is_a_nulls(head))
+ INIT_RHT_NULLS_HEAD(obj->next, ht, hash);
+ else
+ RCU_INIT_POINTER(obj->next, head);
+
+ rcu_assign_pointer(tbl->buckets[hash], obj);
+
+ atomic_inc(&ht->nelems);
+
+ rhashtable_wakeup_worker(ht);
}
-EXPORT_SYMBOL_GPL(rhashtable_insert);
/**
- * rhashtable_remove_pprev - remove object from hash table given previous element
+ * rhashtable_insert - insert object into hash table
* @ht: hash table
* @obj: pointer to hash head inside object
- * @pprev: pointer to previous element
*
- * Identical to rhashtable_remove() but caller is alreayd aware of the element
- * in front of the element to be deleted. This is in particular useful for
- * deletion when combined with walking or lookup.
+ * Will take a per bucket spinlock to protect against mutual mutations
+ * on the same bucket. Multiple insertions may occur in parallel unless
+ * they map to the same bucket lock.
+ *
+ * It is safe to call this function from atomic context.
+ *
+ * Will trigger an automatic deferred table resizing if the size grows
+ * beyond the watermark indicated by grow_decision() which can be passed
+ * to rhashtable_init().
*/
-void rhashtable_remove_pprev(struct rhashtable *ht, struct rhash_head *obj,
- struct rhash_head __rcu **pprev)
+void rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj)
{
- struct bucket_table *tbl = rht_dereference(ht->tbl, ht);
+ struct bucket_table *tbl;
+ spinlock_t *lock;
+ unsigned hash;
- ASSERT_RHT_MUTEX(ht);
+ rcu_read_lock();
- RCU_INIT_POINTER(*pprev, obj->next);
- ht->nelems--;
+ tbl = rht_dereference_rcu(ht->future_tbl, ht);
+ hash = head_hashfn(ht, tbl, obj);
+ lock = bucket_lock(tbl, hash);
- if (ht->p.shrink_decision &&
- ht->p.shrink_decision(ht, tbl->size))
- rhashtable_shrink(ht);
+ spin_lock_bh(lock);
+ __rhashtable_insert(ht, obj, tbl, hash);
+ spin_unlock_bh(lock);
+
+ rcu_read_unlock();
}
-EXPORT_SYMBOL_GPL(rhashtable_remove_pprev);
+EXPORT_SYMBOL_GPL(rhashtable_insert);
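A minimal caller-side sketch of the reworked insert path; the struct and function names here are assumptions for illustration, modelled on the test_obj used by the self-test further down, not part of the patch:

struct example_obj {
	int value;		/* the key, at params.key_offset */
	struct rhash_head node;	/* linkage, at params.head_offset */
};

static void example_insert(struct rhashtable *ht, struct example_obj *obj)
{
	/* No ht->mutex or other external locking is required any more:
	 * rhashtable_insert() takes rcu_read_lock() and the per-bucket
	 * spinlock itself, so this is safe even from atomic context.
	 */
	rhashtable_insert(ht, &obj->node);
}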
/**
* rhashtable_remove - remove object from hash table
* walk the bucket chain upon removal. The removal operation is thus
 * considerably slow if the hash table is not correctly sized.
*
- * Will automatically shrink the table via rhashtable_expand() if the the
+ * Will automatically shrink the table via rhashtable_shrink() if the
* shrink_decision function specified at rhashtable_init() returns true.
*
* The caller must ensure that no concurrent table mutations occur. It is
*/
bool rhashtable_remove(struct rhashtable *ht, struct rhash_head *obj)
{
- struct bucket_table *tbl = rht_dereference(ht->tbl, ht);
+ struct bucket_table *tbl;
struct rhash_head __rcu **pprev;
struct rhash_head *he;
- u32 h;
+ spinlock_t *lock;
+ unsigned int hash;
+ bool ret = false;
- ASSERT_RHT_MUTEX(ht);
+ rcu_read_lock();
+ tbl = rht_dereference_rcu(ht->tbl, ht);
+ hash = head_hashfn(ht, tbl, obj);
- h = head_hashfn(ht, obj, tbl->size);
+ lock = bucket_lock(tbl, hash);
+ spin_lock_bh(lock);
- pprev = &tbl->buckets[h];
- rht_for_each(he, tbl->buckets[h], ht) {
+restart:
+ pprev = &tbl->buckets[hash];
+ rht_for_each(he, tbl, hash) {
if (he != obj) {
pprev = &he->next;
continue;
}
- rhashtable_remove_pprev(ht, he, pprev);
- return true;
+ rcu_assign_pointer(*pprev, obj->next);
+
+ ret = true;
+ break;
+ }
+
+ /* The entry may be linked in either 'tbl', 'future_tbl', or both.
+ * 'future_tbl' only exists for a short period of time during
+ * resizing. Thus traversing both is fine and the added cost is
+ * very rare.
+ */
+ if (tbl != rht_dereference_rcu(ht->future_tbl, ht)) {
+ spin_unlock_bh(lock);
+
+ tbl = rht_dereference_rcu(ht->future_tbl, ht);
+ hash = head_hashfn(ht, tbl, obj);
+
+ lock = bucket_lock(tbl, hash);
+ spin_lock_bh(lock);
+ goto restart;
+ }
+
+ spin_unlock_bh(lock);
+
+ if (ret) {
+ atomic_dec(&ht->nelems);
+ rhashtable_wakeup_worker(ht);
}
- return false;
+ rcu_read_unlock();
+
+ return ret;
}
EXPORT_SYMBOL_GPL(rhashtable_remove);
+struct rhashtable_compare_arg {
+ struct rhashtable *ht;
+ const void *key;
+};
+
+static bool rhashtable_compare(void *ptr, void *arg)
+{
+ struct rhashtable_compare_arg *x = arg;
+ struct rhashtable *ht = x->ht;
+
+ return !memcmp(ptr + ht->p.key_offset, x->key, ht->p.key_len);
+}
+
/**
* rhashtable_lookup - lookup key in hash table
* @ht: hash table
 * for an entry with an identical key. The first matching entry is returned.
*
* This lookup function may only be used for fixed key hash table (key_len
- * paramter set). It will BUG() if used inappropriately.
+ * parameter set). It will BUG() if used inappropriately.
*
- * Lookups may occur in parallel with hash mutations as long as the lookup is
- * guarded by rcu_read_lock(). The caller must take care of this.
+ * Lookups may occur in parallel with hashtable mutations and resizing.
*/
-void *rhashtable_lookup(const struct rhashtable *ht, const void *key)
+void *rhashtable_lookup(struct rhashtable *ht, const void *key)
{
- const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
- struct rhash_head *he;
- u32 h;
+ struct rhashtable_compare_arg arg = {
+ .ht = ht,
+ .key = key,
+ };
BUG_ON(!ht->p.key_len);
- h = __hashfn(ht, key, ht->p.key_len, tbl->size);
- rht_for_each_rcu(he, tbl->buckets[h], ht) {
- if (memcmp(rht_obj(ht, he) + ht->p.key_offset, key,
- ht->p.key_len))
- continue;
- return (void *) he - ht->p.head_offset;
- }
-
- return NULL;
+ return rhashtable_lookup_compare(ht, key, &rhashtable_compare, &arg);
}
EXPORT_SYMBOL_GPL(rhashtable_lookup);
/**
* rhashtable_lookup_compare - search hash table with compare function
* @ht: hash table
- * @hash: hash value of desired entry
+ * @key: the pointer to the key
* @compare: compare function, must return true on match
* @arg: argument passed on to compare function
*
 * Traverses the bucket chain for the hash of the provided key and calls the
 * specified compare function for each entry.
*
- * Lookups may occur in parallel with hash mutations as long as the lookup is
- * guarded by rcu_read_lock(). The caller must take care of this.
+ * Lookups may occur in parallel with hashtable mutations and resizing.
*
* Returns the first entry on which the compare function returned true.
*/
-void *rhashtable_lookup_compare(const struct rhashtable *ht, u32 hash,
+void *rhashtable_lookup_compare(struct rhashtable *ht, const void *key,
bool (*compare)(void *, void *), void *arg)
{
- const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
+ const struct bucket_table *tbl, *old_tbl;
struct rhash_head *he;
+ u32 hash;
- if (unlikely(hash >= tbl->size))
- return NULL;
+ rcu_read_lock();
- rht_for_each_rcu(he, tbl->buckets[hash], ht) {
+ old_tbl = rht_dereference_rcu(ht->tbl, ht);
+ tbl = rht_dereference_rcu(ht->future_tbl, ht);
+ hash = key_hashfn(ht, key, ht->p.key_len);
+restart:
+ rht_for_each_rcu(he, tbl, rht_bucket_index(tbl, hash)) {
if (!compare(rht_obj(ht, he), arg))
continue;
- return (void *) he - ht->p.head_offset;
+ rcu_read_unlock();
+ return rht_obj(ht, he);
}
+ if (unlikely(tbl != old_tbl)) {
+ tbl = old_tbl;
+ goto restart;
+ }
+ rcu_read_unlock();
+
return NULL;
}
EXPORT_SYMBOL_GPL(rhashtable_lookup_compare);
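For illustration, a compare callback matching the new key-based signature (a hypothetical helper; the patch's own memcmp-based rhashtable_compare() above is the generic equivalent):

static bool example_compare(void *ptr, void *arg)
{
	/* ptr is the full object as resolved by rht_obj(); arg is
	 * whatever the caller handed to rhashtable_lookup_compare().
	 */
	const struct example_obj *obj = ptr;
	const int *key = arg;

	return obj->value == *key;
}

A lookup then reads obj = rhashtable_lookup_compare(ht, &key, example_compare, &key); note that the function now takes the key itself, derives the hash internally, and handles rcu_read_lock() on its own.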
+/**
+ * rhashtable_lookup_insert - lookup and insert object into hash table
+ * @ht: hash table
+ * @obj: pointer to hash head inside object
+ *
+ * Locks down the bucket chain in both the old and new table if a resize
+ * is in progress to ensure that writers can't remove from the old table
+ * and can't insert to the new table during the atomic operation of search
+ * and insertion. Searches for duplicates in both the old and new table if
+ * a resize is in progress.
+ *
+ * This lookup function may only be used for fixed key hash table (key_len
+ * parameter set). It will BUG() if used inappropriately.
+ *
+ * It is safe to call this function from atomic context.
+ *
+ * Will trigger an automatic deferred table resizing if the size grows
+ * beyond the watermark indicated by grow_decision() which can be passed
+ * to rhashtable_init().
+ */
+bool rhashtable_lookup_insert(struct rhashtable *ht, struct rhash_head *obj)
+{
+ struct rhashtable_compare_arg arg = {
+ .ht = ht,
+ .key = rht_obj(ht, obj) + ht->p.key_offset,
+ };
+
+ BUG_ON(!ht->p.key_len);
+
+ return rhashtable_lookup_compare_insert(ht, obj, &rhashtable_compare,
+ &arg);
+}
+EXPORT_SYMBOL_GPL(rhashtable_lookup_insert);
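Usage sketch for the conditional insert, reusing the assumed example_obj from above; the return value distinguishes a successful insert from a key collision:

	if (!rhashtable_lookup_insert(ht, &obj->node)) {
		/* An entry with the same key already exists in either the
		 * current or the future table; obj was not linked, so the
		 * caller still owns it (here assumed to be kmalloc'ed).
		 */
		kfree(obj);
	}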
+
+/**
+ * rhashtable_lookup_compare_insert - search and insert object to hash table
+ * with compare function
+ * @ht: hash table
+ * @obj: pointer to hash head inside object
+ * @compare: compare function, must return true on match
+ * @arg: argument passed on to compare function
+ *
+ * Locks down the bucket chain in both the old and new table if a resize
+ * is in progress to ensure that writers can't remove from the old table
+ * and can't insert to the new table during the atomic operation of search
+ * and insertion. Searches for duplicates in both the old and new table if
+ * a resize is in progress.
+ *
+ * Lookups may occur in parallel with hashtable mutations and resizing.
+ *
+ * Will trigger an automatic deferred table resizing if the size grows
+ * beyond the watermark indicated by grow_decision() which can be passed
+ * to rhashtable_init().
+ */
+bool rhashtable_lookup_compare_insert(struct rhashtable *ht,
+ struct rhash_head *obj,
+ bool (*compare)(void *, void *),
+ void *arg)
+{
+ struct bucket_table *new_tbl, *old_tbl;
+ spinlock_t *new_bucket_lock, *old_bucket_lock;
+ u32 new_hash, old_hash;
+ bool success = true;
+
+ BUG_ON(!ht->p.key_len);
+
+ rcu_read_lock();
+
+ old_tbl = rht_dereference_rcu(ht->tbl, ht);
+ old_hash = head_hashfn(ht, old_tbl, obj);
+ old_bucket_lock = bucket_lock(old_tbl, old_hash);
+ spin_lock_bh(old_bucket_lock);
+
+ new_tbl = rht_dereference_rcu(ht->future_tbl, ht);
+ new_hash = head_hashfn(ht, new_tbl, obj);
+ new_bucket_lock = bucket_lock(new_tbl, new_hash);
+ if (unlikely(old_tbl != new_tbl))
+ spin_lock_bh_nested(new_bucket_lock, RHT_LOCK_NESTED);
+
+ if (rhashtable_lookup_compare(ht, rht_obj(ht, obj) + ht->p.key_offset,
+ compare, arg)) {
+ success = false;
+ goto exit;
+ }
+
+ __rhashtable_insert(ht, obj, new_tbl, new_hash);
+
+exit:
+ if (unlikely(old_tbl != new_tbl))
+ spin_unlock_bh(new_bucket_lock);
+ spin_unlock_bh(old_bucket_lock);
+
+ rcu_read_unlock();
+
+ return success;
+}
+EXPORT_SYMBOL_GPL(rhashtable_lookup_compare_insert);
+
static size_t rounded_hashtable_size(struct rhashtable_params *params)
{
return max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
* .key_offset = offsetof(struct test_obj, key),
* .key_len = sizeof(int),
* .hashfn = jhash,
- * #ifdef CONFIG_PROVE_LOCKING
- * .mutex_is_held = &my_mutex_is_held,
- * #endif
+ * .nulls_base = (1U << RHT_BASE_SHIFT),
* };
*
* Configuration Example 2: Variable length keys
* .head_offset = offsetof(struct test_obj, node),
* .hashfn = jhash,
* .obj_hashfn = my_hash_fn,
- * #ifdef CONFIG_PROVE_LOCKING
- * .mutex_is_held = &my_mutex_is_held,
- * #endif
* };
*/
int rhashtable_init(struct rhashtable *ht, struct rhashtable_params *params)
(!params->key_len && !params->obj_hashfn))
return -EINVAL;
+ if (params->nulls_base && params->nulls_base < (1U << RHT_BASE_SHIFT))
+ return -EINVAL;
+
params->min_shift = max_t(size_t, params->min_shift,
ilog2(HASH_MIN_SIZE));
if (params->nelem_hint)
size = rounded_hashtable_size(params);
- tbl = bucket_table_alloc(size);
+ memset(ht, 0, sizeof(*ht));
+ mutex_init(&ht->mutex);
+ memcpy(&ht->p, params, sizeof(*params));
+
+ if (params->locks_mul)
+ ht->p.locks_mul = roundup_pow_of_two(params->locks_mul);
+ else
+ ht->p.locks_mul = BUCKET_LOCKS_PER_CPU;
+
+ tbl = bucket_table_alloc(ht, size);
if (tbl == NULL)
return -ENOMEM;
- memset(ht, 0, sizeof(*ht));
- ht->shift = ilog2(tbl->size);
- memcpy(&ht->p, params, sizeof(*params));
+ atomic_set(&ht->nelems, 0);
+ atomic_set(&ht->shift, ilog2(tbl->size));
RCU_INIT_POINTER(ht->tbl, tbl);
+ RCU_INIT_POINTER(ht->future_tbl, tbl);
if (!ht->p.hash_rnd)
get_random_bytes(&ht->p.hash_rnd, sizeof(ht->p.hash_rnd));
+ if (ht->p.grow_decision || ht->p.shrink_decision)
+ INIT_WORK(&ht->run_work, rht_deferred_worker);
+
return 0;
}
EXPORT_SYMBOL_GPL(rhashtable_init);
* has to make sure that no resizing may happen by unpublishing the hashtable
* and waiting for the quiescent cycle before releasing the bucket array.
*/
-void rhashtable_destroy(const struct rhashtable *ht)
+void rhashtable_destroy(struct rhashtable *ht)
{
- bucket_table_free(ht->tbl);
+ ht->being_destroyed = true;
+
+ if (ht->p.grow_decision || ht->p.shrink_decision)
+ cancel_work_sync(&ht->run_work);
+
+ mutex_lock(&ht->mutex);
+ bucket_table_free(rht_dereference(ht->tbl, ht));
+ mutex_unlock(&ht->mutex);
}
EXPORT_SYMBOL_GPL(rhashtable_destroy);
#define TEST_PTR ((void *) 0xdeadbeef)
#define TEST_NEXPANDS 4
-#ifdef CONFIG_PROVE_LOCKING
-static int test_mutex_is_held(void *parent)
-{
- return 1;
-}
-#endif
-
struct test_obj {
void *ptr;
int value;
static void test_bucket_stats(struct rhashtable *ht, bool quiet)
{
unsigned int cnt, rcu_cnt, i, total = 0;
+ struct rhash_head *pos;
struct test_obj *obj;
struct bucket_table *tbl;
if (!quiet)
pr_info(" [%#4x/%zu]", i, tbl->size);
- rht_for_each_entry_rcu(obj, tbl->buckets[i], node) {
+ rht_for_each_entry_rcu(obj, pos, tbl, i, node) {
cnt++;
total++;
if (!quiet)
pr_cont(" [%p],", obj);
}
- rht_for_each_entry_rcu(obj, tbl->buckets[i], node)
+ rht_for_each_entry_rcu(obj, pos, tbl, i, node)
rcu_cnt++;
if (rcu_cnt != cnt)
i, tbl->buckets[i], cnt);
}
- pr_info(" Traversal complete: counted=%u, nelems=%zu, entries=%d\n",
- total, ht->nelems, TEST_ENTRIES);
+ pr_info(" Traversal complete: counted=%u, nelems=%u, entries=%d\n",
+ total, atomic_read(&ht->nelems), TEST_ENTRIES);
- if (total != ht->nelems || total != TEST_ENTRIES)
+ if (total != atomic_read(&ht->nelems) || total != TEST_ENTRIES)
pr_warn("Test failed: Total count mismatch ^^^");
}
static int __init test_rhashtable(struct rhashtable *ht)
{
struct bucket_table *tbl;
- struct test_obj *obj, *next;
+ struct test_obj *obj;
+ struct rhash_head *pos, *next;
int err;
unsigned int i;
for (i = 0; i < TEST_NEXPANDS; i++) {
pr_info(" Table expansion iteration %u...\n", i);
+ mutex_lock(&ht->mutex);
rhashtable_expand(ht);
+ mutex_unlock(&ht->mutex);
rcu_read_lock();
pr_info(" Verifying lookups...\n");
for (i = 0; i < TEST_NEXPANDS; i++) {
pr_info(" Table shrinkage iteration %u...\n", i);
+ mutex_lock(&ht->mutex);
rhashtable_shrink(ht);
+ mutex_unlock(&ht->mutex);
rcu_read_lock();
pr_info(" Verifying lookups...\n");
error:
tbl = rht_dereference_rcu(ht->tbl, ht);
for (i = 0; i < tbl->size; i++)
- rht_for_each_entry_safe(obj, next, tbl->buckets[i], ht, node)
+ rht_for_each_entry_safe(obj, pos, next, tbl, i, node)
kfree(obj);
return err;
.key_offset = offsetof(struct test_obj, value),
.key_len = sizeof(int),
.hashfn = jhash,
-#ifdef CONFIG_PROVE_LOCKING
- .mutex_is_held = &test_mutex_is_held,
-#endif
+ .nulls_base = (3U << RHT_BASE_SHIFT),
.grow_decision = rht_grow_above_75,
.shrink_decision = rht_shrink_below_30,
};
depends on !KMEMCHECK
select PAGE_EXTENSION
select PAGE_POISONING if !ARCH_SUPPORTS_DEBUG_PAGEALLOC
- select PAGE_GUARD if ARCH_SUPPORTS_DEBUG_PAGEALLOC
---help---
Unmap pages from the kernel linear mapping after free_pages().
This results in a large slowdown, but helps to find certain types
that would result in incorrect warnings of memory corruption after
a resume because free pages are not saved to the suspend image.
-config WANT_PAGE_DEBUG_FLAGS
- bool
-
config PAGE_POISONING
bool
- select WANT_PAGE_DEBUG_FLAGS
-
-config PAGE_GUARD
- bool
- select WANT_PAGE_DEBUG_FLAGS
if (swap_cgroup_cmpxchg(entry, old_id, new_id) == old_id) {
mem_cgroup_swap_statistics(from, false);
mem_cgroup_swap_statistics(to, true);
- /*
- * This function is only called from task migration context now.
- * It postpones page_counter and refcount handling till the end
- * of task migration(mem_cgroup_clear_mc()) for performance
- * improvement. But we cannot postpone css_get(to) because if
- * the process that has been moved to @to does swap-in, the
- * refcount of @to might be decreased to 0.
- *
- * We are in attach() phase, so the cgroup is guaranteed to be
- * alive, so we can just call css_get().
- */
- css_get(&to->css);
return 0;
}
return -EINVAL;
if (parent_css == NULL) {
root_mem_cgroup = memcg;
page_counter_init(&memcg->memory, NULL);
+ memcg->soft_limit = PAGE_COUNTER_MAX;
page_counter_init(&memcg->memsw, NULL);
page_counter_init(&memcg->kmem, NULL);
}
if (parent->use_hierarchy) {
page_counter_init(&memcg->memory, &parent->memory);
+ memcg->soft_limit = PAGE_COUNTER_MAX;
page_counter_init(&memcg->memsw, &parent->memsw);
page_counter_init(&memcg->kmem, &parent->kmem);
*/
} else {
page_counter_init(&memcg->memory, NULL);
+ memcg->soft_limit = PAGE_COUNTER_MAX;
page_counter_init(&memcg->memsw, NULL);
page_counter_init(&memcg->kmem, NULL);
/*
mem_cgroup_resize_limit(memcg, PAGE_COUNTER_MAX);
mem_cgroup_resize_memsw_limit(memcg, PAGE_COUNTER_MAX);
memcg_update_kmem_limit(memcg, PAGE_COUNTER_MAX);
- memcg->soft_limit = 0;
+ memcg->soft_limit = PAGE_COUNTER_MAX;
}
#ifdef CONFIG_MMU
static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
{
+ if (!tlb->end)
+ return;
+
tlb_flush(tlb);
mmu_notifier_invalidate_range(tlb->mm, tlb->start, tlb->end);
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
{
struct mmu_gather_batch *batch;
- for (batch = &tlb->local; batch; batch = batch->next) {
+ for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
free_pages_and_swap_cache(batch->pages, batch->nr);
batch->nr = 0;
}
void tlb_flush_mmu(struct mmu_gather *tlb)
{
- if (!tlb->end)
- return;
-
tlb_flush_mmu_tlbonly(tlb);
tlb_flush_mmu_free(tlb);
}
if (!dirty_page)
return ret;
- /*
- * Yes, Virginia, this is actually required to prevent a race
- * with clear_page_dirty_for_io() from clearing the page dirty
- * bit after it clear all dirty ptes, but before a racing
- * do_wp_page installs a dirty pte.
- *
- * do_shared_fault is protected similarly.
- */
if (!page_mkwrite) {
- wait_on_page_locked(dirty_page);
- set_page_dirty_balance(dirty_page);
+ struct address_space *mapping;
+ int dirtied;
+
+ lock_page(dirty_page);
+ dirtied = set_page_dirty(dirty_page);
+ VM_BUG_ON_PAGE(PageAnon(dirty_page), dirty_page);
+ mapping = dirty_page->mapping;
+ unlock_page(dirty_page);
+
+ if (dirtied && mapping) {
+ /*
+ * Some device drivers do not set page.mapping
+ * but still dirty their pages
+ */
+ balance_dirty_pages_ratelimited(mapping);
+ }
+
/* file_update_time outside page_lock */
if (vma->vm_file)
file_update_time(vma->vm_file);
if (prev && prev->vm_end == address)
return prev->vm_flags & VM_GROWSDOWN ? 0 : -ENOMEM;
- expand_downwards(vma, address - PAGE_SIZE);
+ return expand_downwards(vma, address - PAGE_SIZE);
}
if ((vma->vm_flags & VM_GROWSUP) && address + PAGE_SIZE == vma->vm_end) {
struct vm_area_struct *next = vma->vm_next;
if (next && next->vm_start == address + PAGE_SIZE)
return next->vm_flags & VM_GROWSUP ? 0 : -ENOMEM;
- expand_upwards(vma, address + PAGE_SIZE);
+ return expand_upwards(vma, address + PAGE_SIZE);
}
return 0;
}
if (exporter && exporter->anon_vma && !importer->anon_vma) {
int error;
+ importer->anon_vma = exporter->anon_vma;
error = anon_vma_clone(importer, exporter);
- if (error)
+ if (error) {
+ importer->anon_vma = NULL;
return error;
- importer->anon_vma = exporter->anon_vma;
+ }
}
}
{
struct mm_struct *mm = vma->vm_mm;
struct rlimit *rlim = current->signal->rlim;
- unsigned long new_start;
+ unsigned long new_start, actual_size;
/* address space limit tests */
if (!may_expand_vm(mm, grow))
return -ENOMEM;
/* Stack limit test */
- if (size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
+ actual_size = size;
+ if (size && (vma->vm_flags & (VM_GROWSUP | VM_GROWSDOWN)))
+ actual_size -= PAGE_SIZE;
+ if (actual_size > ACCESS_ONCE(rlim[RLIMIT_STACK].rlim_cur))
return -ENOMEM;
/* mlock limit tests */
bdi_start_background_writeback(bdi);
}
-void set_page_dirty_balance(struct page *page)
-{
- if (set_page_dirty(page)) {
- struct address_space *mapping = page_mapping(page);
-
- if (mapping)
- balance_dirty_pages_ratelimited(mapping);
- }
-}
-
static DEFINE_PER_CPU(int, bdp_ratelimits);
/*
* page dirty in that case, but not all the buffers. This is a "bottom-up"
* dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying.
*
- * Most callers have locked the page, which pins the address_space in memory.
- * But zap_pte_range() does not lock the page, however in that case the
- * mapping is pinned by the vma's ->vm_file reference.
- *
- * We take care to handle the case where the page was truncated from the
- * mapping by re-checking page_mapping() inside tree_lock.
+ * The caller must ensure this doesn't race with truncation. Most will simply
+ * hold the page lock, but e.g. zap_pte_range() calls with the page mapped and
+ * the pte lock held, which also locks out truncation.
*/
int __set_page_dirty_nobuffers(struct page *page)
{
if (!TestSetPageDirty(page)) {
struct address_space *mapping = page_mapping(page);
- struct address_space *mapping2;
unsigned long flags;
if (!mapping)
return 1;
spin_lock_irqsave(&mapping->tree_lock, flags);
- mapping2 = page_mapping(page);
- if (mapping2) { /* Race with truncate? */
- BUG_ON(mapping2 != mapping);
- WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
- account_page_dirtied(page, mapping);
- radix_tree_tag_set(&mapping->page_tree,
- page_index(page), PAGECACHE_TAG_DIRTY);
- }
+ BUG_ON(page_mapping(page) != mapping);
+ WARN_ON_ONCE(!PagePrivate(page) && !PageUptodate(page));
+ account_page_dirtied(page, mapping);
+ radix_tree_tag_set(&mapping->page_tree, page_index(page),
+ PAGECACHE_TAG_DIRTY);
spin_unlock_irqrestore(&mapping->tree_lock, flags);
if (mapping->host) {
/* !PageAnon && !swapper_space */
/*
* We carefully synchronise fault handlers against
* installing a dirty pte and marking the page dirty
- * at this point. We do this by having them hold the
- * page lock at some point after installing their
- * pte, but before marking the page dirty.
- * Pages are always locked coming in here, so we get
- * the desired exclusion. See mm/memory.c:do_wp_page()
- * for more comments.
+ * at this point. We do this by having them hold the
+ * page lock while dirtying the page, and pages are
+ * always locked coming in here, so we get the desired
+ * exclusion.
*/
if (TestClearPageDirty(page)) {
dec_zone_page_state(page, NR_FILE_DIRTY);
anon_vma = kmem_cache_alloc(anon_vma_cachep, GFP_KERNEL);
if (anon_vma) {
atomic_set(&anon_vma->refcount, 1);
+ anon_vma->degree = 1; /* Reference for first vma */
+ anon_vma->parent = anon_vma;
/*
* Initialise the anon_vma root to point to itself. If called
* from fork, the root will be reset to the parents anon_vma.
if (likely(!vma->anon_vma)) {
vma->anon_vma = anon_vma;
anon_vma_chain_link(vma, avc, anon_vma);
+ /* vma reference or self-parent link for new root */
+ anon_vma->degree++;
allocated = NULL;
avc = NULL;
}
/*
* Attach the anon_vmas from src to dst.
* Returns 0 on success, -ENOMEM on failure.
+ *
+ * If dst->anon_vma is NULL, this function tries to find and reuse an
+ * existing anon_vma which has no vmas and only one child anon_vma. This
+ * prevents the anon_vma hierarchy from degrading into an endless linear
+ * chain when a task forks constantly. On the other hand, an anon_vma with
+ * more than one child isn't reused even if there is no live vma, so the
+ * rmap walker has a good chance of avoiding a scan of the whole hierarchy
+ * when it searches for where a page is mapped.
*/
int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src)
{
anon_vma = pavc->anon_vma;
root = lock_anon_vma_root(root, anon_vma);
anon_vma_chain_link(dst, avc, anon_vma);
+
+ /*
+		 * Reuse the existing anon_vma if its degree is lower than
+		 * two, which means it has no vma and only one anon_vma child.
+		 *
+		 * Do not choose the parent anon_vma, otherwise the first
+		 * child will always reuse it. The root anon_vma is never
+		 * reused: it has a self-parent reference and at least one child.
+ */
+ if (!dst->anon_vma && anon_vma != src->anon_vma &&
+ anon_vma->degree < 2)
+ dst->anon_vma = anon_vma;
}
+ if (dst->anon_vma)
+ dst->anon_vma->degree++;
unlock_anon_vma_root(root);
return 0;
if (!pvma->anon_vma)
return 0;
+ /* Drop inherited anon_vma, we'll reuse existing or allocate new. */
+ vma->anon_vma = NULL;
+
/*
* First, attach the new VMA to the parent VMA's anon_vmas,
* so rmap can find non-COWed pages in child processes.
if (error)
return error;
+ /* An existing anon_vma has been reused, all done then. */
+ if (vma->anon_vma)
+ return 0;
+
/* Then add our own anon_vma. */
anon_vma = anon_vma_alloc();
if (!anon_vma)
* lock any of the anon_vmas in this anon_vma tree.
*/
anon_vma->root = pvma->anon_vma->root;
+ anon_vma->parent = pvma->anon_vma;
/*
* With refcounts, an anon_vma can stay around longer than the
* process it belongs to. The root anon_vma needs to be pinned until
vma->anon_vma = anon_vma;
anon_vma_lock_write(anon_vma);
anon_vma_chain_link(vma, avc, anon_vma);
+ anon_vma->parent->degree++;
anon_vma_unlock_write(anon_vma);
return 0;
* Leave empty anon_vmas on the list - we'll need
* to free them outside the lock.
*/
- if (RB_EMPTY_ROOT(&anon_vma->rb_root))
+ if (RB_EMPTY_ROOT(&anon_vma->rb_root)) {
+ anon_vma->parent->degree--;
continue;
+ }
list_del(&avc->same_vma);
anon_vma_chain_free(avc);
}
+ if (vma->anon_vma)
+ vma->anon_vma->degree--;
unlock_anon_vma_root(root);
/*
list_for_each_entry_safe(avc, next, &vma->anon_vma_chain, same_vma) {
struct anon_vma *anon_vma = avc->anon_vma;
+ BUG_ON(anon_vma->degree);
put_anon_vma(anon_vma);
list_del(&avc->same_vma);
return false;
/*
- * There is a potential race between when kswapd checks its watermarks
- * and a process gets throttled. There is also a potential race if
- * processes get throttled, kswapd wakes, a large process exits therby
- * balancing the zones that causes kswapd to miss a wakeup. If kswapd
- * is going to sleep, no process should be sleeping on pfmemalloc_wait
- * so wake them now if necessary. If necessary, processes will wake
- * kswapd and get throttled again
+ * The throttled processes are normally woken up in balance_pgdat() as
+ * soon as pfmemalloc_watermark_ok() is true. But there is a potential
+ * race between when kswapd checks the watermarks and a process gets
+ * throttled. There is also a potential race if processes get
+ * throttled, kswapd wakes, a large process exits thereby balancing the
+ * zones, which causes kswapd to exit balance_pgdat() before reaching
+ * the wake up checks. If kswapd is going to sleep, no process should
+ * be sleeping on pfmemalloc_wait, so wake them now if necessary. If
+ * the wake up is premature, processes will wake kswapd and get
+ * throttled again. The difference from wake ups in balance_pgdat() is
+ * that here we are under prepare_to_wait().
*/
- if (waitqueue_active(&pgdat->pfmemalloc_wait)) {
- wake_up(&pgdat->pfmemalloc_wait);
- return false;
- }
+ if (waitqueue_active(&pgdat->pfmemalloc_wait))
+ wake_up_all(&pgdat->pfmemalloc_wait);
return pgdat_balanced(pgdat, order, classzone_idx);
}
{
struct sk_buff *skb = *skbp;
__be16 vlan_proto = skb->vlan_proto;
- u16 vlan_id = vlan_tx_tag_get_id(skb);
+ u16 vlan_id = skb_vlan_tag_get_id(skb);
struct net_device *vlan_dev;
struct vlan_pcpu_stats *rx_stats;
return -EMSGSIZE;
}
+static struct net *vlan_get_link_net(const struct net_device *dev)
+{
+ struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
+
+ return dev_net(real_dev);
+}
+
struct rtnl_link_ops vlan_link_ops __read_mostly = {
.kind = "vlan",
.maxtype = IFLA_VLAN_MAX,
.dellink = unregister_vlan_dev,
.get_size = vlan_get_size,
.fill_info = vlan_fill_info,
+ .get_link_net = vlan_get_link_net,
};
int __init vlan_netlink_init(void)
config BATMAN_ADV_DEBUG
bool "B.A.T.M.A.N. debugging"
depends on BATMAN_ADV
+ depends on DEBUG_FS
help
This is an option for use by developers; most people should
say N here. This enables compilation of support for
#include "bat_algo.h"
#include "network-coding.h"
-
/**
- * batadv_dup_status - duplicate status
+ * enum batadv_dup_status - duplicate status
* @BATADV_NO_DUP: the packet is a duplicate
* @BATADV_ORIG_DUP: OGM is a duplicate in the originator (but not for the
* neighbor)
* @bat_priv: the bat priv with all the soft interface information
* @packet_len: (total) length of the OGM
* @send_time: timestamp (jiffies) when the packet is to be sent
- * @direktlink: true if this is a direct link packet
+ * @directlink: true if this is a direct link packet
* @if_incoming: interface where the packet was received
* @if_outgoing: interface for which the retransmission should be considered
* @forw_packet: the forwarded packet which should be checked
hlist_for_each_entry_rcu(orig_node, head, hash_entry) {
spin_lock_bh(&orig_node->bat_iv.ogm_cnt_lock);
word_index = hard_iface->if_num * BATADV_NUM_WORDS;
- word = &(orig_node->bat_iv.bcast_own[word_index]);
+ word = &orig_node->bat_iv.bcast_own[word_index];
batadv_bit_get_packet(bat_priv, word, 1, 0);
if_num = hard_iface->if_num;
return ret;
}
-
/**
* batadv_iv_ogm_process_per_outif - process a batman iv OGM for an outgoing if
* @skb: the skb containing the OGM
+ * @ogm_offset: offset from skb->data to start of ogm header
* @orig_node: the (cached) orig node for the originator of this OGM
* @if_incoming: the interface where this packet was received
* @if_outgoing: the interface for which the packet should be considered
offset = if_num * BATADV_NUM_WORDS;
spin_lock_bh(&orig_neigh_node->bat_iv.ogm_cnt_lock);
- word = &(orig_neigh_node->bat_iv.bcast_own[offset]);
+ word = &orig_neigh_node->bat_iv.bcast_own[offset];
bit_pos = if_incoming_seqno - 2;
bit_pos -= ntohl(ogm_packet->seqno);
batadv_set_bit(word, bit_pos);
* batadv_iv_ogm_neigh_is_eob - check if neigh1 is equally good or better than
 * neigh2 from the metric perspective
* @neigh1: the first neighbor object of the comparison
- * @if_outgoing: outgoing interface for the first neighbor
+ * @if_outgoing1: outgoing interface for the first neighbor
* @neigh2: the second neighbor object of the comparison
* @if_outgoing2: outgoing interface for the second neighbor
-
+ *
* Returns true if the metric via neigh1 is equally good or better than
* the metric via neigh2, false otherwise.
*/
bitmap_shift_left(seq_bits, seq_bits, n, BATADV_TQ_LOCAL_WINDOW_SIZE);
}
-
/* receive and process one packet within the sequence number window.
*
* returns:
diff = last_seqno - curr_seqno;
if (diff < 0 || diff >= BATADV_TQ_LOCAL_WINDOW_SIZE)
return 0;
- else
- return test_bit(diff, seq_bits) != 0;
+ return test_bit(diff, seq_bits) != 0;
}
/* turn corresponding bit on, so we can remember that we got the packet */
return hash % size;
}
-
/* compares address and vid of two backbone gws */
static int batadv_compare_backbone_gw(const struct hlist_node *node,
const void *data2)
spin_unlock_bh(list_lock);
}
- /* all claims gone, intialize CRC */
+ /* all claims gone, initialize CRC */
backbone_gw->crc = BATADV_BLA_CRC_INIT;
}
/**
* batadv_bla_send_claim - sends a claim frame according to the provided info
* @bat_priv: the bat priv with all the soft interface information
- * @orig: the mac address to be announced within the claim
+ * @mac: the mac address to be announced within the claim
* @vid: the VLAN ID
* @claimtype: the type of the claim (CLAIM, UNCLAIM, ANNOUNCE, ...)
*/
* @bat_priv: the bat priv with all the soft interface information
* @orig: the mac address of the originator
* @vid: the VLAN ID
+ * @own_backbone: set if the requested backbone is local
*
* searches for the backbone gw or creates a new one if it could not
* be found.
/**
* batadv_bla_answer_request - answer a bla request by sending own claims
* @bat_priv: the bat priv with all the soft interface information
+ * @primary_if: interface where the request came on
* @vid: the vid where the request came on
*
* Repeat all of our own claims, and finally send an ANNOUNCE frame
if (unlikely(!backbone_gw))
return 1;
-
/* handle as ANNOUNCE frame */
backbone_gw->lasttime = jiffies;
crc = ntohs(*((__be16 *)(&an_addr[4])));
/**
* batadv_check_claim_group
* @bat_priv: the bat priv with all the soft interface information
+ * @primary_if: the primary interface of this batman interface
* @hw_src: the Hardware source in the ARP Header
* @hw_dst: the Hardware destination in the ARP Header
* @ethhdr: pointer to the Ethernet header of the claim frame
return 2;
}
-
/**
* batadv_bla_process_claim
* @bat_priv: the bat priv with all the soft interface information
+ * @primary_if: the primary hard interface of this batman soft interface
* @skb: the frame to be checked
*
* Check if this is a claim frame, and process it accordingly.
goto out;
}
/* not found, add a new entry (overwrite the oldest entry)
- * and allow it, its the first occurence.
+	 * and allow it, it's the first occurrence.
*/
curr = (bat_priv->bla.bcast_duplist_curr + BATADV_DUPLIST_SIZE - 1);
curr %= BATADV_DUPLIST_SIZE;
return ret;
}
-
-
/**
* batadv_bla_is_backbone_gw_orig
* @bat_priv: the bat priv with all the soft interface information
return false;
}
-
/**
* batadv_bla_is_backbone_gw
* @skb: the frame to be checked
if (!atomic_read(&bat_priv->bridge_loop_avoidance))
goto allow;
-
if (unlikely(atomic_read(&bat_priv->bla.num_requests)))
/* don't allow broadcasts while requests are in flight */
if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast)
static void batadv_debug_log_cleanup(struct batadv_priv *bat_priv)
{
- return;
}
#endif
.release = single_release, \
}, \
}
+
static BATADV_HARDIF_DEBUGINFO(originators, S_IRUGO,
batadv_originators_hardif_open);
batadv_dat_send_data(bat_priv, skb, ip_src, BATADV_P_DAT_DHT_PUT);
batadv_dat_send_data(bat_priv, skb, ip_dst, BATADV_P_DAT_DHT_PUT);
}
+
/**
* batadv_dat_snoop_incoming_arp_reply - snoop the ARP reply and fill the local
* DAT storage only
#include <linux/if_arp.h>
-/**
- * BATADV_DAT_ADDR_MAX - maximum address value in the DHT space
- */
+/* BATADV_DAT_ADDR_MAX - maximum address value in the DHT space */
#define BATADV_DAT_ADDR_MAX ((batadv_dat_addr_t)~(batadv_dat_addr_t)0)
void batadv_dat_status_update(struct net_device *net_dev);
#include "hard-interface.h"
#include "soft-interface.h"
-
/**
* batadv_frag_clear_chain - delete entries in the fragment buffer chain
* @head: head of chain with entries.
if (!hlist_empty(&frags_entry->head) &&
batadv_has_timed_out(frags_entry->timestamp, BATADV_FRAG_TIMEOUT))
return true;
- else
- return false;
+ return false;
}
#endif /* _NET_BATMAN_ADV_FRAGMENTATION_H_ */
return ret;
}
+
/**
* batadv_gw_out_of_range - check if the dhcp request destination is the best gw
* @bat_priv: the bat priv with all the soft interface information
#include "network-coding.h"
#include "fragmentation.h"
-
/* List manipulations on hardif_list have to be rtnl_lock()'ed,
* list traversals just rcu-locked
*/
goto err_free;
}
+	/* reset control block to avoid leftovers from previous users */
+ memset(skb->cb, 0, sizeof(struct batadv_skb_cb));
+
/* all receive handlers return whether they received or reused
* the supplied skb. if not, we have to free the skb.
*/
/**
* batadv_tvlv_container_free_ref - decrement the tvlv container refcounter and
* possibly free it
- * @tvlv_handler: the tvlv container to free
+ * @tvlv: the tvlv container to free
*/
static void batadv_tvlv_container_free_ref(struct batadv_tvlv_container *tvlv)
{
}
/**
- * batadv_tvlv_realloc_packet_buff - reallocate packet buffer to accomodate
+ * batadv_tvlv_realloc_packet_buff - reallocate packet buffer to accommodate
* requested packet size
* @packet_buff: packet buffer
* @packet_buff_len: packet buffer size
- * @packet_min_len: requested packet minimum size
+ * @min_packet_len: requested packet minimum size
* @additional_packet_len: requested additional packet size on top of minimum
* size
*
#define BATADV_DRIVER_DEVICE "batman-adv"
#ifndef BATADV_SOURCE_VERSION
-#define BATADV_SOURCE_VERSION "2014.4.0"
+#define BATADV_SOURCE_VERSION "2015.0"
#endif
/* B.A.T.M.A.N. parameters */
/* numbers of originator to contact for any PUT/GET DHT operation */
#define BATADV_DAT_CANDIDATES_NUM 3
-/**
- * BATADV_TQ_SIMILARITY_THRESHOLD - TQ points that a secondary metric can differ
- * at most from the primary one in order to be still considered acceptable
+/* BATADV_TQ_SIMILARITY_THRESHOLD - TQ points that a secondary metric can differ
+ * at most from the primary one in order to be still considered acceptable
*/
#define BATADV_TQ_SIMILARITY_THRESHOLD 50
* - when adding 128 - it is neither a predecessor nor a successor,
* - after adding more than 127 to the starting value - it is a successor
*/
-#define batadv_seq_before(x, y) ({typeof(x) _d1 = (x); \
- typeof(y) _d2 = (y); \
- typeof(x) _dummy = (_d1 - _d2); \
- (void) (&_d1 == &_d2); \
+#define batadv_seq_before(x, y) ({typeof(x)_d1 = (x); \
+ typeof(y)_d2 = (y); \
+ typeof(x)_dummy = (_d1 - _d2); \
+ (void)(&_d1 == &_d2); \
_dummy > batadv_smallest_signed_int(_dummy); })
#define batadv_seq_after(x, y) batadv_seq_before(y, x)
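A worked example of the wraparound arithmetic, assuming 8-bit sequence numbers for readability: batadv_seq_before(250, 2) computes _dummy = 250 - 2 = 248 (mod 256); 248 is greater than 128, the smallest signed value of the type when viewed as unsigned, so 250 is treated as coming before 2, i.e. the window is taken to have wrapped.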
if (orig_initialized)
atomic_dec(&bat_priv->mcast.num_disabled);
orig->capabilities |= BATADV_ORIG_CAPA_HAS_MCAST;
- /* If mcast support is being switched off increase the disabled
- * mcast node counter.
+ /* If mcast support is being switched off or if this is an initial
+ * OGM without mcast support then increase the disabled mcast
+ * node counter.
*/
} else if (!orig_mcast_enabled &&
- orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) {
+ (orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST ||
+ !orig_initialized)) {
atomic_inc(&bat_priv->mcast.num_disabled);
orig->capabilities &= ~BATADV_ORIG_CAPA_HAS_MCAST;
}
{
struct batadv_priv *bat_priv = orig->bat_priv;
- if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST))
+ if (!(orig->capabilities & BATADV_ORIG_CAPA_HAS_MCAST) &&
+ orig->capa_initialized & BATADV_ORIG_CAPA_HAS_MCAST)
atomic_dec(&bat_priv->mcast.num_disabled);
batadv_mcast_want_unsnoop_update(bat_priv, orig, BATADV_NO_FLAGS);
static inline void batadv_mcast_mla_update(struct batadv_priv *bat_priv)
{
- return;
}
static inline enum batadv_forw_mode
static inline void batadv_mcast_free(struct batadv_priv *bat_priv)
{
- return;
}
static inline void batadv_mcast_purge_orig(struct batadv_orig_node *orig_node)
{
- return;
}
#endif /* CONFIG_BATMAN_ADV_MCAST */
if (!bat_priv->nc.decoding_hash)
goto err;
- batadv_hash_set_lock_class(bat_priv->nc.coding_hash,
+ batadv_hash_set_lock_class(bat_priv->nc.decoding_hash,
&batadv_nc_decoding_hash_lock_class_key);
INIT_DELAYED_WORK(&bat_priv->nc.work, batadv_nc_worker);
{
if (BATADV_SKB_CB(skb)->decoded && !batadv_compare_eth(dst, src))
return false;
- else
- return true;
+ return true;
}
/**
batadv_frag_purge_orig(orig_node, NULL);
- batadv_tt_global_del_orig(orig_node->bat_priv, orig_node, -1,
- "originator timed out");
-
if (orig_node->bat_priv->bat_algo_ops->bat_orig_free)
orig_node->bat_priv->bat_algo_ops->bat_orig_free(orig_node);
atomic_set(&orig_node->last_ttvn, 0);
orig_node->tt_buff = NULL;
orig_node->tt_buff_len = 0;
+ orig_node->last_seen = jiffies;
reset_time = jiffies - 1 - msecs_to_jiffies(BATADV_RESET_PROTECTION_MS);
orig_node->bcast_seqno_reset = reset_time;
#ifdef CONFIG_BATMAN_ADV_MCAST
return ifinfo_purged;
}
-
/**
* batadv_purge_orig_neighbors - purges neighbors from originator
* @bat_priv: the bat priv with all the soft interface information
if (batadv_purge_orig_node(bat_priv, orig_node)) {
batadv_gw_node_delete(bat_priv, orig_node);
hlist_del_rcu(&orig_node->hash_entry);
+ batadv_tt_global_del_orig(orig_node->bat_priv,
+ orig_node, -1,
+ "originator timed out");
batadv_orig_node_free_ref(orig_node);
continue;
}
unsigned short vid);
void batadv_orig_node_vlan_free_ref(struct batadv_orig_node_vlan *orig_vlan);
-
/* hashfunction to choose an entry in a hash table of given size
* hash algorithm from http://en.wikipedia.org/wiki/Hash_table
*/
uint8_t type; /* bla_claimframe */
__be16 group; /* group id */
};
+
#pragma pack()
/**
uint8_t reserved:4;
uint8_t no:4;
#else
-#error "unknown bitfield endianess"
+#error "unknown bitfield endianness"
#endif
uint8_t dest[ETH_ALEN];
uint8_t orig[ETH_ALEN];
* @src: address of the source
* @dst: address of the destination
* @tvlv_len: length of tvlv data following the unicast tvlv header
- * @align: 2 bytes to align the header to a 4 byte boundry
+ * @align: 2 bytes to align the header to a 4 byte boundary
*/
struct batadv_unicast_tvlv_packet {
uint8_t packet_type;
return ret;
}
-
int batadv_recv_icmp_packet(struct sk_buff *skb,
struct batadv_hard_iface *recv_if)
{
router = batadv_orig_router_get(orig_node, recv_if);
+ if (!router)
+ return router;
+
/* only consider bonding for recv_if == BATADV_IF_DEFAULT (first hop)
* and if activated.
*/
- if (recv_if == BATADV_IF_DEFAULT || !atomic_read(&bat_priv->bonding) ||
- !router)
+ if (!(recv_if == BATADV_IF_DEFAULT && atomic_read(&bat_priv->bonding)))
return router;
/* bonding: loop through the list of possible routers found
* the last chosen bonding candidate (next_candidate). If no such
* router is found, use the first candidate found (the previously
* chosen bonding candidate might have been the last one in the list).
- * If this can't be found either, return the previously choosen
+ * If this can't be found either, return the previously chosen
* router - obviously there are no other candidates.
*/
rcu_read_lock();
#include "bridge_loop_avoidance.h"
#include "network-coding.h"
-
static int batadv_get_settings(struct net_device *dev, struct ethtool_cmd *cmd);
static void batadv_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info);
static BATADV_ATTR(_name, _mode, batadv_show_##_name, \
batadv_store_##_name)
-
#define BATADV_ATTR_SIF_STORE_UINT(_name, _min, _max, _post_func) \
ssize_t batadv_store_##_name(struct kobject *kobj, \
struct attribute *attr, char *buff, \
batadv_tt_global_del_roaming(bat_priv, tt_global_entry,
orig_node, message);
-
out:
if (tt_global_entry)
batadv_tt_global_entry_free_ref(tt_global_entry);
{
if (batadv_is_my_mac(bat_priv, req_dst))
return batadv_send_my_tt_response(bat_priv, tt_data, req_src);
- else
- return batadv_send_other_tt_response(bat_priv, tt_data,
- req_src, req_dst);
+ return batadv_send_other_tt_response(bat_priv, tt_data, req_src,
+ req_dst);
}
static void _batadv_tt_update_changes(struct batadv_priv *bat_priv,
/**
* batadv_is_my_client - check if a client is served by the local node
* @bat_priv: the bat priv with all the soft interface information
- * @addr: the mac adress of the client to check
+ * @addr: the mac address of the client to check
* @vid: VLAN identifier
*
* Returns true if the client is served by this node, false otherwise.
/**
* struct batadv_orig_node - structure for orig_list maintaining nodes of mesh
* @orig: originator ethernet address
- * @primary_addr: hosts primary interface address
* @ifinfo_list: list for routers per outgoing interface
* @last_bonding_candidate: pointer to last ifinfo of last used router
* @batadv_dat_addr_t: address of the orig node in the distributed hash
*/
struct batadv_orig_node {
uint8_t orig[ETH_ALEN];
- uint8_t primary_addr[ETH_ALEN];
struct hlist_head ifinfo_list;
struct batadv_orig_ifinfo *last_bonding_candidate;
#ifdef CONFIG_BATMAN_ADV_DAT
};
/**
- * struct batadv_tt_change_node - structure for tt changes occured
+ * struct batadv_tt_change_node - structure for tt changes occurred
* @list: list node for batadv_priv_tt::changes_list
* @change: holds the actual translation table diff data
*/
#define VERSION "0.1"
-static struct dentry *lowpan_psm_debugfs;
+static struct dentry *lowpan_enable_debugfs;
static struct dentry *lowpan_control_debugfs;
#define IFACE_NAME_TEMPLATE "bt%d"
static LIST_HEAD(bt_6lowpan_devices);
static DEFINE_SPINLOCK(devices_lock);
-/* If psm is set to 0 (default value), then 6lowpan is disabled.
- * Other values are used to indicate a Protocol Service Multiplexer
- * value for 6lowpan.
- */
-static u16 psm_6lowpan;
+static bool enable_6lowpan;
/* We are listening for incoming connections via this channel
*/
if (hcon->type != LE_LINK)
return false;
- if (!psm_6lowpan)
+ if (!enable_6lowpan)
return false;
return true;
if (!pchan)
return -EINVAL;
- err = l2cap_chan_connect(pchan, cpu_to_le16(psm_6lowpan), 0,
+ err = l2cap_chan_connect(pchan, cpu_to_le16(L2CAP_PSM_IPSP), 0,
addr, dst_type);
BT_DBG("chan %p err %d", pchan, err);
struct l2cap_chan *pchan;
int err;
- if (psm_6lowpan == 0)
+ if (!enable_6lowpan)
return NULL;
pchan = chan_get();
atomic_set(&pchan->nesting, L2CAP_NESTING_PARENT);
- BT_DBG("psm 0x%04x chan %p src type %d", psm_6lowpan, pchan,
- pchan->src_type);
+ BT_DBG("chan %p src type %d", pchan, pchan->src_type);
- err = l2cap_add_psm(pchan, addr, cpu_to_le16(psm_6lowpan));
+ err = l2cap_add_psm(pchan, addr, cpu_to_le16(L2CAP_PSM_IPSP));
if (err) {
l2cap_chan_put(pchan);
BT_ERR("psm cannot be added err %d", err);
spin_unlock(&devices_lock);
}
-struct set_psm {
+struct set_enable {
struct work_struct work;
- u16 psm;
+ bool flag;
};
-static void do_psm_set(struct work_struct *work)
+static void do_enable_set(struct work_struct *work)
{
- struct set_psm *set_psm = container_of(work, struct set_psm, work);
+ struct set_enable *set_enable = container_of(work,
+ struct set_enable, work);
- if (set_psm->psm == 0 || psm_6lowpan != set_psm->psm)
+ if (!set_enable->flag || enable_6lowpan != set_enable->flag)
/* Disconnect existing connections if 6lowpan is
- * disabled (psm = 0), or if psm changes.
+ * disabled
*/
disconnect_all_peers();
- psm_6lowpan = set_psm->psm;
+ enable_6lowpan = set_enable->flag;
if (listen_chan) {
l2cap_chan_close(listen_chan, 0);
listen_chan = bt_6lowpan_listen();
- kfree(set_psm);
+ kfree(set_enable);
}
-static int lowpan_psm_set(void *data, u64 val)
+static int lowpan_enable_set(void *data, u64 val)
{
- struct set_psm *set_psm;
+ struct set_enable *set_enable;
- set_psm = kzalloc(sizeof(*set_psm), GFP_KERNEL);
- if (!set_psm)
+ set_enable = kzalloc(sizeof(*set_enable), GFP_KERNEL);
+ if (!set_enable)
return -ENOMEM;
- set_psm->psm = val;
- INIT_WORK(&set_psm->work, do_psm_set);
+ set_enable->flag = !!val;
+ INIT_WORK(&set_enable->work, do_enable_set);
- schedule_work(&set_psm->work);
+ schedule_work(&set_enable->work);
return 0;
}
-static int lowpan_psm_get(void *data, u64 *val)
+static int lowpan_enable_get(void *data, u64 *val)
{
- *val = psm_6lowpan;
+ *val = enable_6lowpan;
return 0;
}
-DEFINE_SIMPLE_ATTRIBUTE(lowpan_psm_fops, lowpan_psm_get,
- lowpan_psm_set, "%llu\n");
+DEFINE_SIMPLE_ATTRIBUTE(lowpan_enable_fops, lowpan_enable_get,
+ lowpan_enable_set, "%llu\n");
static ssize_t lowpan_control_write(struct file *fp,
const char __user *user_buffer,
static int __init bt_6lowpan_init(void)
{
- lowpan_psm_debugfs = debugfs_create_file("6lowpan_psm", 0644,
- bt_debugfs, NULL,
- &lowpan_psm_fops);
+ lowpan_enable_debugfs = debugfs_create_file("6lowpan_enable", 0644,
+ bt_debugfs, NULL,
+ &lowpan_enable_fops);
lowpan_control_debugfs = debugfs_create_file("6lowpan_control", 0644,
bt_debugfs, NULL,
&lowpan_control_fops);
static void __exit bt_6lowpan_exit(void)
{
- debugfs_remove(lowpan_psm_debugfs);
+ debugfs_remove(lowpan_enable_debugfs);
debugfs_remove(lowpan_control_debugfs);
if (listen_chan) {
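[Editor's note] For reference, the debugfs pattern the psm-to-enable conversion above is built on: DEFINE_SIMPLE_ATTRIBUTE() generates file_operations around a pair of u64 get/set callbacks, and debugfs_create_file() publishes them. A minimal module-style sketch under those assumptions; example_flag and the file name are chosen purely for illustration.

    #include <linux/module.h>
    #include <linux/debugfs.h>

    static u64 example_flag;                /* hypothetical state toggled via debugfs */
    static struct dentry *example_dentry;

    static int example_get(void *data, u64 *val)
    {
            *val = example_flag;
            return 0;
    }

    static int example_set(void *data, u64 val)
    {
            example_flag = !!val;           /* normalize to 0/1, like enable_6lowpan */
            return 0;
    }

    DEFINE_SIMPLE_ATTRIBUTE(example_fops, example_get, example_set, "%llu\n");

    static int __init example_init(void)
    {
            example_dentry = debugfs_create_file("example_flag", 0644, NULL,
                                                 NULL, &example_fops);
            return 0;
    }

    static void __exit example_exit(void)
    {
            debugfs_remove(example_dentry);
    }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");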
if (skb->len < CAPI_MSG_BASELEN + 15)
break;
- controller = CAPIMSG_U32(skb->data, CAPI_MSG_BASELEN + 10);
-
if (!info && ctrl) {
int len = min_t(uint, CAPI_MANUFACTURER_LEN,
skb->data[CAPI_MSG_BASELEN + 14]);
if (skb->len < CAPI_MSG_BASELEN + 32)
break;
- controller = CAPIMSG_U32(skb->data, CAPI_MSG_BASELEN + 12);
-
if (!info && ctrl) {
ctrl->version.majorversion = CAPIMSG_U32(skb->data, CAPI_MSG_BASELEN + 16);
ctrl->version.minorversion = CAPIMSG_U32(skb->data, CAPI_MSG_BASELEN + 20);
if (skb->len < CAPI_MSG_BASELEN + 17)
break;
- controller = CAPIMSG_U32(skb->data, CAPI_MSG_BASELEN + 12);
-
if (!info && ctrl) {
int len = min_t(uint, CAPI_SERIAL_LEN,
skb->data[CAPI_MSG_BASELEN + 16]);
mgmt_reenable_advertising(hdev);
}
-static void create_le_conn_complete(struct hci_dev *hdev, u8 status)
+static void create_le_conn_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct hci_conn *conn;
}
EXPORT_SYMBOL(hci_conn_check_secure);
-/* Change link key */
-int hci_conn_change_link_key(struct hci_conn *conn)
-{
- BT_DBG("hcon %p", conn);
-
- if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) {
- struct hci_cp_change_conn_link_key cp;
- cp.handle = cpu_to_le16(conn->handle);
- hci_send_cmd(conn->hdev, HCI_OP_CHANGE_CONN_LINK_KEY,
- sizeof(cp), &cp);
- }
-
- return 0;
-}
-
/* Switch role */
int hci_conn_switch_role(struct hci_conn *conn, __u8 role)
{
/* ---- HCI requests ---- */
-static void hci_req_sync_complete(struct hci_dev *hdev, u8 result)
+static void hci_req_sync_complete(struct hci_dev *hdev, u8 result, u16 opcode)
{
BT_DBG("%s result 0x%2.2x", hdev->name, result);
set_bit(HCI_LE_ENABLED, &hdev->dev_flags);
}
-static u8 hci_get_inquiry_mode(struct hci_dev *hdev)
-{
- if (lmp_ext_inq_capable(hdev))
- return 0x02;
-
- if (lmp_inq_rssi_capable(hdev))
- return 0x01;
-
- if (hdev->manufacturer == 11 && hdev->hci_rev == 0x00 &&
- hdev->lmp_subver == 0x0757)
- return 0x01;
-
- if (hdev->manufacturer == 15) {
- if (hdev->hci_rev == 0x03 && hdev->lmp_subver == 0x6963)
- return 0x01;
- if (hdev->hci_rev == 0x09 && hdev->lmp_subver == 0x6963)
- return 0x01;
- if (hdev->hci_rev == 0x00 && hdev->lmp_subver == 0x6965)
- return 0x01;
- }
-
- if (hdev->manufacturer == 31 && hdev->hci_rev == 0x2005 &&
- hdev->lmp_subver == 0x1805)
- return 0x01;
-
- return 0x00;
-}
-
-static void hci_setup_inquiry_mode(struct hci_request *req)
-{
- u8 mode;
-
- mode = hci_get_inquiry_mode(req->hdev);
-
- hci_req_add(req, HCI_OP_WRITE_INQUIRY_MODE, 1, &mode);
-}
-
static void hci_setup_event_mask(struct hci_request *req)
{
struct hci_dev *hdev = req->hdev;
}
}
- if (lmp_inq_rssi_capable(hdev))
- hci_setup_inquiry_mode(req);
+ if (lmp_inq_rssi_capable(hdev) ||
+ test_bit(HCI_QUIRK_FIXUP_INQUIRY_MODE, &hdev->quirks)) {
+ u8 mode;
+
+ /* If Extended Inquiry Result events are supported, then
+ * they are clearly preferred over Inquiry Result with RSSI
+ * events.
+ */
+ mode = lmp_ext_inq_capable(hdev) ? 0x02 : 0x01;
+
+ hci_req_add(req, HCI_OP_WRITE_INQUIRY_MODE, 1, &mode);
+ }
if (lmp_inq_tx_pwr_capable(hdev))
hci_req_add(req, HCI_OP_READ_INQ_RSP_TX_POWER, 0, NULL);
hci_setup_event_mask(req);
- /* Some Broadcom based Bluetooth controllers do not support the
- * Delete Stored Link Key command. They are clearly indicating its
- * absence in the bit mask of supported commands.
- *
- * Check the supported commands and only if the the command is marked
- * as supported send it. If not supported assume that the controller
- * does not have actual support for stored link keys which makes this
- * command redundant anyway.
- *
- * Some controllers indicate that they support handling deleting
- * stored link keys, but they don't. The quirk lets a driver
- * just disable this command.
- */
- if (hdev->commands[6] & 0x80 &&
- !test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks)) {
- struct hci_cp_delete_stored_link_key cp;
+ if (hdev->commands[6] & 0x20) {
+ struct hci_cp_read_stored_link_key cp;
bacpy(&cp.bdaddr, BDADDR_ANY);
- cp.delete_all = 0x01;
- hci_req_add(req, HCI_OP_DELETE_STORED_LINK_KEY,
- sizeof(cp), &cp);
+ cp.read_all = 0x01;
+ hci_req_add(req, HCI_OP_READ_STORED_LINK_KEY, sizeof(cp), &cp);
}
if (hdev->commands[5] & 0x10)
{
struct hci_dev *hdev = req->hdev;
+ /* Some Broadcom based Bluetooth controllers do not support the
+ * Delete Stored Link Key command. They are clearly indicating its
+ * absence in the bit mask of supported commands.
+ *
+ * Check the supported commands and only if the command is marked
+ * as supported send it. If not supported assume that the controller
+ * does not have actual support for stored link keys which makes this
+ * command redundant anyway.
+ *
+ * Some controllers indicate that they support deleting stored
+ * link keys, but they don't. The quirk lets a driver
+ * just disable this command.
+ */
+ if (hdev->commands[6] & 0x80 &&
+ !test_bit(HCI_QUIRK_BROKEN_STORED_LINK_KEY, &hdev->quirks)) {
+ struct hci_cp_delete_stored_link_key cp;
+
+ bacpy(&cp.bdaddr, BDADDR_ANY);
+ cp.delete_all = 0x01;
+ hci_req_add(req, HCI_OP_DELETE_STORED_LINK_KEY,
+ sizeof(cp), &cp);
+ }
+
/* Set event mask page 2 if the HCI command for it is supported */
if (hdev->commands[22] & 0x04)
hci_set_event_mask_page_2(req);
if (err < 0)
return err;
- /* Only create debugfs entries during the initial setup
- * phase and not every time the controller gets powered on.
+ /* This function is only called when the controller is actually in
+ * configured state. When the controller is marked as unconfigured,
+ * this initialization procedure is not run.
+ *
+ * It means that it is possible that a controller runs through its
+ * setup phase and then discovers missing settings. If that is the
+ * case, then this function will not be called. It then will only
+ * be called during the config phase.
+ *
+ * So only when in setup phase or config phase, create the debugfs
+ * entries and register the SMP channels.
*/
- if (!test_bit(HCI_SETUP, &hdev->dev_flags))
+ if (!test_bit(HCI_SETUP, &hdev->dev_flags) &&
+ !test_bit(HCI_CONFIG, &hdev->dev_flags))
return 0;
hci_debugfs_create_common(hdev);
if (lmp_bredr_capable(hdev))
hci_debugfs_create_bredr(hdev);
- if (lmp_le_capable(hdev)) {
+ if (lmp_le_capable(hdev))
hci_debugfs_create_le(hdev);
- smp_register(hdev);
- }
return 0;
}
BT_DBG("%s", hdev->name);
hci_dev_do_close(hdev);
+
+ smp_unregister(hdev);
}
static void hci_discov_off(struct work_struct *work)
BT_DBG("All LE connection parameters were removed");
}
-static void inquiry_complete(struct hci_dev *hdev, u8 status)
+static void inquiry_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
if (status) {
BT_ERR("Failed to start inquiry: status %d", status);
}
}
-static void le_scan_disable_work_complete(struct hci_dev *hdev, u8 status)
+static void le_scan_disable_work_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
/* General inquiry access code (GIAC) */
u8 lap[3] = { 0x33, 0x8b, 0x9e };
call_complete:
if (req_complete)
- req_complete(hdev, status);
+ req_complete(hdev, status, status ? opcode : HCI_OP_NOP);
}
static void hci_rx_work(struct work_struct *work)
DEFINE_SIMPLE_ATTRIBUTE(conn_info_max_age_fops, conn_info_max_age_get,
conn_info_max_age_set, "%llu\n");
+static ssize_t sc_only_mode_read(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct hci_dev *hdev = file->private_data;
+ char buf[3];
+
+ buf[0] = test_bit(HCI_SC_ONLY, &hdev->dev_flags) ? 'Y': 'N';
+ buf[1] = '\n';
+ buf[2] = '\0';
+ return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
+}
+
+static const struct file_operations sc_only_mode_fops = {
+ .open = simple_open,
+ .read = sc_only_mode_read,
+ .llseek = default_llseek,
+};
+
void hci_debugfs_create_common(struct hci_dev *hdev)
{
debugfs_create_file("features", 0444, hdev->debugfs, hdev,
&conn_info_min_age_fops);
debugfs_create_file("conn_info_max_age", 0644, hdev->debugfs, hdev,
&conn_info_max_age_fops);
+
+ if (lmp_sc_capable(hdev) || lmp_le_capable(hdev))
+ debugfs_create_file("sc_only_mode", 0444, hdev->debugfs,
+ hdev, &sc_only_mode_fops);
}
static int inquiry_cache_show(struct seq_file *f, void *p)
DEFINE_SIMPLE_ATTRIBUTE(auto_accept_delay_fops, auto_accept_delay_get,
auto_accept_delay_set, "%llu\n");
-static ssize_t sc_only_mode_read(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hci_dev *hdev = file->private_data;
- char buf[3];
-
- buf[0] = test_bit(HCI_SC_ONLY, &hdev->dev_flags) ? 'Y': 'N';
- buf[1] = '\n';
- buf[2] = '\0';
- return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static const struct file_operations sc_only_mode_fops = {
- .open = simple_open,
- .read = sc_only_mode_read,
- .llseek = default_llseek,
-};
-
-static ssize_t force_sc_support_read(struct file *file, char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hci_dev *hdev = file->private_data;
- char buf[3];
-
- buf[0] = test_bit(HCI_FORCE_SC, &hdev->dbg_flags) ? 'Y': 'N';
- buf[1] = '\n';
- buf[2] = '\0';
- return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t force_sc_support_write(struct file *file,
- const char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hci_dev *hdev = file->private_data;
- char buf[32];
- size_t buf_size = min(count, (sizeof(buf)-1));
- bool enable;
-
- if (test_bit(HCI_UP, &hdev->flags))
- return -EBUSY;
-
- if (copy_from_user(buf, user_buf, buf_size))
- return -EFAULT;
-
- buf[buf_size] = '\0';
- if (strtobool(buf, &enable))
- return -EINVAL;
-
- if (enable == test_bit(HCI_FORCE_SC, &hdev->dbg_flags))
- return -EALREADY;
-
- change_bit(HCI_FORCE_SC, &hdev->dbg_flags);
-
- return count;
-}
-
-static const struct file_operations force_sc_support_fops = {
- .open = simple_open,
- .read = force_sc_support_read,
- .write = force_sc_support_write,
- .llseek = default_llseek,
-};
-
-static ssize_t force_lesc_support_read(struct file *file,
- char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hci_dev *hdev = file->private_data;
- char buf[3];
-
- buf[0] = test_bit(HCI_FORCE_LESC, &hdev->dbg_flags) ? 'Y': 'N';
- buf[1] = '\n';
- buf[2] = '\0';
- return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
-}
-
-static ssize_t force_lesc_support_write(struct file *file,
- const char __user *user_buf,
- size_t count, loff_t *ppos)
-{
- struct hci_dev *hdev = file->private_data;
- char buf[32];
- size_t buf_size = min(count, (sizeof(buf)-1));
- bool enable;
-
- if (copy_from_user(buf, user_buf, buf_size))
- return -EFAULT;
-
- buf[buf_size] = '\0';
- if (strtobool(buf, &enable))
- return -EINVAL;
-
- if (enable == test_bit(HCI_FORCE_LESC, &hdev->dbg_flags))
- return -EALREADY;
-
- change_bit(HCI_FORCE_LESC, &hdev->dbg_flags);
-
- return count;
-}
-
-static const struct file_operations force_lesc_support_fops = {
- .open = simple_open,
- .read = force_lesc_support_read,
- .write = force_lesc_support_write,
- .llseek = default_llseek,
-};
-
static int idle_timeout_set(void *data, u64 val)
{
struct hci_dev *hdev = data;
debugfs_create_file("voice_setting", 0444, hdev->debugfs, hdev,
&voice_setting_fops);
- if (lmp_ssp_capable(hdev)) {
+ if (lmp_ssp_capable(hdev))
debugfs_create_file("auto_accept_delay", 0644, hdev->debugfs,
hdev, &auto_accept_delay_fops);
- debugfs_create_file("sc_only_mode", 0444, hdev->debugfs,
- hdev, &sc_only_mode_fops);
-
- debugfs_create_file("force_sc_support", 0644, hdev->debugfs,
- hdev, &force_sc_support_fops);
-
- if (lmp_le_capable(hdev))
- debugfs_create_file("force_lesc_support", 0644,
- hdev->debugfs, hdev,
- &force_lesc_support_fops);
- }
if (lmp_sniff_capable(hdev)) {
debugfs_create_file("idle_timeout", 0644, hdev->debugfs,
hci_bdaddr_list_clear(&hdev->le_white_list);
}
+static void hci_cc_read_stored_link_key(struct hci_dev *hdev,
+ struct sk_buff *skb)
+{
+ struct hci_rp_read_stored_link_key *rp = (void *)skb->data;
+ struct hci_cp_read_stored_link_key *sent;
+
+ BT_DBG("%s status 0x%2.2x", hdev->name, rp->status);
+
+ sent = hci_sent_cmd_data(hdev, HCI_OP_READ_STORED_LINK_KEY);
+ if (!sent)
+ return;
+
+ if (!rp->status && sent->read_all == 0x01) {
+ hdev->stored_max_keys = rp->max_keys;
+ hdev->stored_num_keys = rp->num_keys;
+ }
+}
+
+static void hci_cc_delete_stored_link_key(struct hci_dev *hdev,
+ struct sk_buff *skb)
+{
+ struct hci_rp_delete_stored_link_key *rp = (void *)skb->data;
+
+ BT_DBG("%s status 0x%2.2x", hdev->name, rp->status);
+
+ if (rp->status)
+ return;
+
+ if (rp->num_keys <= hdev->stored_num_keys)
+ hdev->stored_num_keys -= rp->num_keys;
+ else
+ hdev->stored_num_keys = 0;
+}
+
static void hci_cc_write_local_name(struct hci_dev *hdev, struct sk_buff *skb)
{
__u8 status = *((__u8 *) skb->data);
hci_cc_reset(hdev, skb);
break;
+ case HCI_OP_READ_STORED_LINK_KEY:
+ hci_cc_read_stored_link_key(hdev, skb);
+ break;
+
+ case HCI_OP_DELETE_STORED_LINK_KEY:
+ hci_cc_delete_stored_link_key(hdev, skb);
+ break;
+
case HCI_OP_WRITE_LOCAL_NAME:
hci_cc_write_local_name(hdev, skb);
break;
}
}
-static void update_background_scan_complete(struct hci_dev *hdev, u8 status)
+static void update_background_scan_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
if (status)
BT_DBG("HCI request failed to update background scanning: "
read_unlock(&hci_sk_list.lock);
}
+static void queue_monitor_skb(struct sk_buff *skb)
+{
+ struct sock *sk;
+
+ BT_DBG("len %d", skb->len);
+
+ read_lock(&hci_sk_list.lock);
+
+ sk_for_each(sk, &hci_sk_list.head) {
+ struct sk_buff *nskb;
+
+ if (sk->sk_state != BT_BOUND)
+ continue;
+
+ if (hci_pi(sk)->channel != HCI_CHANNEL_MONITOR)
+ continue;
+
+ nskb = skb_clone(skb, GFP_ATOMIC);
+ if (!nskb)
+ continue;
+
+ if (sock_queue_rcv_skb(sk, nskb))
+ kfree_skb(nskb);
+ }
+
+ read_unlock(&hci_sk_list.lock);
+}
+
/* Send frame to monitor socket */
void hci_send_to_monitor(struct hci_dev *hdev, struct sk_buff *skb)
{
- struct sock *sk;
struct sk_buff *skb_copy = NULL;
+ struct hci_mon_hdr *hdr;
__le16 opcode;
if (!atomic_read(&monitor_promisc))
return;
}
- read_lock(&hci_sk_list.lock);
-
- sk_for_each(sk, &hci_sk_list.head) {
- struct sk_buff *nskb;
-
- if (sk->sk_state != BT_BOUND)
- continue;
-
- if (hci_pi(sk)->channel != HCI_CHANNEL_MONITOR)
- continue;
-
- if (!skb_copy) {
- struct hci_mon_hdr *hdr;
-
- /* Create a private copy with headroom */
- skb_copy = __pskb_copy_fclone(skb, HCI_MON_HDR_SIZE,
- GFP_ATOMIC, true);
- if (!skb_copy)
- continue;
-
- /* Put header before the data */
- hdr = (void *) skb_push(skb_copy, HCI_MON_HDR_SIZE);
- hdr->opcode = opcode;
- hdr->index = cpu_to_le16(hdev->id);
- hdr->len = cpu_to_le16(skb->len);
- }
-
- nskb = skb_clone(skb_copy, GFP_ATOMIC);
- if (!nskb)
- continue;
-
- if (sock_queue_rcv_skb(sk, nskb))
- kfree_skb(nskb);
- }
+ /* Create a private copy with headroom */
+ skb_copy = __pskb_copy_fclone(skb, HCI_MON_HDR_SIZE, GFP_ATOMIC, true);
+ if (!skb_copy)
+ return;
- read_unlock(&hci_sk_list.lock);
+ /* Put header before the data */
+ hdr = (void *) skb_push(skb_copy, HCI_MON_HDR_SIZE);
+ hdr->opcode = opcode;
+ hdr->index = cpu_to_le16(hdev->id);
+ hdr->len = cpu_to_le16(skb->len);
+ queue_monitor_skb(skb_copy);
kfree_skb(skb_copy);
}
-static void send_monitor_event(struct sk_buff *skb)
-{
- struct sock *sk;
-
- BT_DBG("len %d", skb->len);
-
- read_lock(&hci_sk_list.lock);
-
- sk_for_each(sk, &hci_sk_list.head) {
- struct sk_buff *nskb;
-
- if (sk->sk_state != BT_BOUND)
- continue;
-
- if (hci_pi(sk)->channel != HCI_CHANNEL_MONITOR)
- continue;
-
- nskb = skb_clone(skb, GFP_ATOMIC);
- if (!nskb)
- continue;
-
- if (sock_queue_rcv_skb(sk, nskb))
- kfree_skb(nskb);
- }
-
- read_unlock(&hci_sk_list.lock);
-}
-
static struct sk_buff *create_monitor_event(struct hci_dev *hdev, int event)
{
struct hci_mon_hdr *hdr;
skb = create_monitor_event(hdev, event);
if (skb) {
- send_monitor_event(skb);
+ queue_monitor_skb(skb);
kfree_skb(skb);
}
}
{
int err;
+ BUILD_BUG_ON(sizeof(struct sockaddr_hci) > sizeof(struct sockaddr));
+
err = proto_register(&hci_sk_proto, 0);
if (err < 0)
return err;
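[Editor's note] The BUILD_BUG_ON() additions in the socket-init paths above (repeated for L2CAP, RFCOMM, and SCO below) are compile-time assertions: the build fails if a protocol's sockaddr variant ever outgrows struct sockaddr, through which these addresses get copied. A userspace sketch of the same guarantee using C11 _Static_assert; struct sockaddr_xyz is a hypothetical stand-in, and note the condition is inverted relative to BUILD_BUG_ON.

    #include <sys/socket.h>

    struct sockaddr_xyz {                   /* stand-in for sockaddr_hci et al. */
            unsigned short xyz_family;
            unsigned char  xyz_addr[6];
    };

    /* Compilation fails right here if the address type no longer fits. */
    _Static_assert(sizeof(struct sockaddr_xyz) <= sizeof(struct sockaddr),
                   "sockaddr_xyz must fit inside struct sockaddr");

    int main(void)
    {
            return 0;
    }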
static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control,
struct sk_buff_head *skbs, u8 event);
-static inline __u8 bdaddr_type(struct hci_conn *hcon, __u8 type)
+static inline u8 bdaddr_type(u8 link_type, u8 bdaddr_type)
{
- if (hcon->type == LE_LINK) {
- if (type == ADDR_LE_DEV_PUBLIC)
+ if (link_type == LE_LINK) {
+ if (bdaddr_type == ADDR_LE_DEV_PUBLIC)
return BDADDR_LE_PUBLIC;
else
return BDADDR_LE_RANDOM;
return BDADDR_BREDR;
}
+static inline u8 bdaddr_src_type(struct hci_conn *hcon)
+{
+ return bdaddr_type(hcon->type, hcon->src_type);
+}
+
+static inline u8 bdaddr_dst_type(struct hci_conn *hcon)
+{
+ return bdaddr_type(hcon->type, hcon->dst_type);
+}
+
/* ---- L2CAP channels ---- */
static struct l2cap_chan *__l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
list_for_each_entry(chan, &conn->chan_l, list) {
l2cap_chan_lock(chan);
bacpy(&chan->dst, &hcon->dst);
- chan->dst_type = bdaddr_type(hcon, hcon->dst_type);
+ chan->dst_type = bdaddr_dst_type(hcon);
l2cap_chan_unlock(chan);
}
bacpy(&chan->src, &conn->hcon->src);
bacpy(&chan->dst, &conn->hcon->dst);
- chan->src_type = bdaddr_type(conn->hcon, conn->hcon->src_type);
- chan->dst_type = bdaddr_type(conn->hcon, conn->hcon->dst_type);
+ chan->src_type = bdaddr_src_type(conn->hcon);
+ chan->dst_type = bdaddr_dst_type(conn->hcon);
chan->psm = psm;
chan->dcid = scid;
chan->local_amp_id = amp_id;
bacpy(&chan->src, &conn->hcon->src);
bacpy(&chan->dst, &conn->hcon->dst);
- chan->src_type = bdaddr_type(conn->hcon, conn->hcon->src_type);
- chan->dst_type = bdaddr_type(conn->hcon, conn->hcon->dst_type);
+ chan->src_type = bdaddr_src_type(conn->hcon);
+ chan->dst_type = bdaddr_dst_type(conn->hcon);
chan->psm = psm;
chan->dcid = scid;
chan->omtu = mtu;
*/
if (hcon->type == LE_LINK &&
hci_bdaddr_list_lookup(&hcon->hdev->blacklist, &hcon->dst,
- bdaddr_type(hcon, hcon->dst_type))) {
+ bdaddr_dst_type(hcon))) {
kfree_skb(skb);
return;
}
if (test_bit(HCI_LE_ENABLED, &hcon->hdev->dev_flags) &&
(bredr_sc_enabled(hcon->hdev) ||
- test_bit(HCI_FORCE_LESC, &hcon->hdev->dbg_flags)))
+ test_bit(HCI_FORCE_BREDR_SMP, &hcon->hdev->dbg_flags)))
conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR;
mutex_init(&conn->ident_lock);
/* Update source addr of the socket */
bacpy(&chan->src, &hcon->src);
- chan->src_type = bdaddr_type(hcon, hcon->src_type);
+ chan->src_type = bdaddr_src_type(hcon);
__l2cap_chan_add(conn, chan);
* global list (by passing NULL as first parameter).
*/
static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
- bdaddr_t *src, u8 link_type)
+ struct hci_conn *hcon)
{
+ u8 src_type = bdaddr_src_type(hcon);
+
read_lock(&chan_list_lock);
if (c)
continue;
if (c->state != BT_LISTEN)
continue;
- if (bacmp(&c->src, src) && bacmp(&c->src, BDADDR_ANY))
+ if (bacmp(&c->src, &hcon->src) && bacmp(&c->src, BDADDR_ANY))
continue;
- if (link_type == ACL_LINK && c->src_type != BDADDR_BREDR)
- continue;
- if (link_type == LE_LINK && c->src_type == BDADDR_BREDR)
+ if (src_type != c->src_type)
continue;
l2cap_chan_hold(c);
if (!conn)
return;
- dst_type = bdaddr_type(hcon, hcon->dst_type);
+ dst_type = bdaddr_dst_type(hcon);
/* If device is blocked, do not create channels for it */
if (hci_bdaddr_list_lookup(&hdev->blacklist, &hcon->dst, dst_type))
* we left off, because the list lock would prevent calling the
* potentially sleeping l2cap_chan_lock() function.
*/
- pchan = l2cap_global_fixed_chan(NULL, &hdev->bdaddr, hcon->type);
+ pchan = l2cap_global_fixed_chan(NULL, hcon);
while (pchan) {
struct l2cap_chan *chan, *next;
if (chan) {
bacpy(&chan->src, &hcon->src);
bacpy(&chan->dst, &hcon->dst);
- chan->src_type = bdaddr_type(hcon, hcon->src_type);
+ chan->src_type = bdaddr_src_type(hcon);
chan->dst_type = dst_type;
__l2cap_chan_add(conn, chan);
l2cap_chan_unlock(pchan);
next:
- next = l2cap_global_fixed_chan(pchan, &hdev->bdaddr,
- hcon->type);
+ next = l2cap_global_fixed_chan(pchan, hcon);
l2cap_chan_put(pchan);
pchan = next;
}
read_lock(&chan_list_lock);
list_for_each_entry(c, &chan_list, global_l) {
- seq_printf(f, "%pMR %pMR %d %d 0x%4.4x 0x%4.4x %d %d %d %d\n",
- &c->src, &c->dst,
+ seq_printf(f, "%pMR (%u) %pMR (%u) %d %d 0x%4.4x 0x%4.4x %d %d %d %d\n",
+ &c->src, c->src_type, &c->dst, c->dst_type,
c->state, __le16_to_cpu(c->psm),
c->scid, c->dcid, c->imtu, c->omtu,
c->sec_level, c->mode);
{
int err;
+ BUILD_BUG_ON(sizeof(struct sockaddr_l2) > sizeof(struct sockaddr));
+
err = proto_register(&l2cap_proto, 0);
if (err < 0)
return err;
settings |= MGMT_SETTING_HS;
}
- if (lmp_sc_capable(hdev) ||
- test_bit(HCI_FORCE_SC, &hdev->dbg_flags))
+ if (lmp_sc_capable(hdev))
settings |= MGMT_SETTING_SECURE_CONN;
}
sizeof(settings));
}
-static void clean_up_hci_complete(struct hci_dev *hdev, u8 status)
+static void clean_up_hci_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
BT_DBG("%s status 0x%02x", hdev->name, status);
return MGMT_STATUS_SUCCESS;
}
-static void set_discoverable_complete(struct hci_dev *hdev, u8 status)
+static void set_discoverable_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
struct pending_cmd *cmd;
struct mgmt_mode *cp;
hci_req_add(req, HCI_OP_WRITE_PAGE_SCAN_TYPE, 1, &type);
}
-static void set_connectable_complete(struct hci_dev *hdev, u8 status)
+static void set_connectable_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
struct pending_cmd *cmd;
struct mgmt_mode *cp;
return err;
}
-static void le_enable_complete(struct hci_dev *hdev, u8 status)
+static void le_enable_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct cmd_lookup match = { NULL, hdev };
hci_dev_unlock(hdev);
}
-static void add_uuid_complete(struct hci_dev *hdev, u8 status)
+static void add_uuid_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
BT_DBG("status 0x%02x", status);
return false;
}
-static void remove_uuid_complete(struct hci_dev *hdev, u8 status)
+static void remove_uuid_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
BT_DBG("status 0x%02x", status);
return err;
}
-static void set_class_complete(struct hci_dev *hdev, u8 status)
+static void set_class_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
BT_DBG("status 0x%02x", status);
hci_req_add(req, HCI_OP_WRITE_LOCAL_NAME, sizeof(cp), &cp);
}
-static void set_name_complete(struct hci_dev *hdev, u8 status)
+static void set_name_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct mgmt_cp_set_local_name *cp;
struct pending_cmd *cmd;
return true;
}
-static void start_discovery_complete(struct hci_dev *hdev, u8 status)
+static void start_discovery_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
struct pending_cmd *cmd;
unsigned long timeout;
return err;
}
-static void stop_discovery_complete(struct hci_dev *hdev, u8 status)
+static void stop_discovery_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct pending_cmd *cmd;
return err;
}
-static void set_advertising_complete(struct hci_dev *hdev, u8 status)
+static void set_advertising_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
struct cmd_lookup match = { NULL, hdev };
return err;
}
-static void fast_connectable_complete(struct hci_dev *hdev, u8 status)
+static void fast_connectable_complete(struct hci_dev *hdev, u8 status,
+ u16 opcode)
{
struct pending_cmd *cmd;
return err;
}
-static void set_bredr_complete(struct hci_dev *hdev, u8 status)
+static void set_bredr_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct pending_cmd *cmd;
err = cmd_status(sk, hdev->id, MGMT_OP_SET_BREDR,
MGMT_STATUS_REJECTED);
goto unlock;
+ } else {
+ /* When configuring a dual-mode controller to operate
+ * with LE only and using a static address, then switching
+ * BR/EDR back on is not allowed.
+ *
+ * Dual-mode controllers shall operate with the public
+ * address as its identity address for BR/EDR and LE. So
+ * reject the attempt to create an invalid configuration.
+ */
+ if (!test_bit(HCI_BREDR_ENABLED, &hdev->dev_flags) &&
+ bacmp(&hdev->static_addr, BDADDR_ANY)) {
+ err = cmd_status(sk, hdev->id, MGMT_OP_SET_BREDR,
+ MGMT_STATUS_REJECTED);
+ goto unlock;
+ }
}
if (mgmt_pending_find(MGMT_OP_SET_BREDR, hdev)) {
BT_DBG("request for %s", hdev->name);
- if (!test_bit(HCI_LE_ENABLED, &hdev->dev_flags) &&
- !lmp_sc_capable(hdev) && !test_bit(HCI_FORCE_SC, &hdev->dbg_flags))
+ if (!lmp_sc_capable(hdev) &&
+ !test_bit(HCI_LE_ENABLED, &hdev->dev_flags))
return cmd_status(sk, hdev->id, MGMT_OP_SET_SECURE_CONN,
MGMT_STATUS_NOT_SUPPORTED);
hci_dev_lock(hdev);
- if (!hdev_is_powered(hdev) ||
- (!lmp_sc_capable(hdev) &&
- !test_bit(HCI_FORCE_SC, &hdev->dbg_flags)) ||
+ if (!hdev_is_powered(hdev) || !lmp_sc_capable(hdev) ||
!test_bit(HCI_BREDR_ENABLED, &hdev->dev_flags)) {
bool changed;
return err;
}
-static void conn_info_refresh_complete(struct hci_dev *hdev, u8 hci_status)
+static void conn_info_refresh_complete(struct hci_dev *hdev, u8 hci_status,
+ u16 opcode)
{
struct hci_cp_read_rssi *cp;
struct pending_cmd *cmd;
return err;
}
-static void get_clock_info_complete(struct hci_dev *hdev, u8 status)
+static void get_clock_info_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct hci_cp_read_clock *hci_cp;
struct pending_cmd *cmd;
mgmt_event(MGMT_EV_DEVICE_ADDED, hdev, &ev, sizeof(ev), sk);
}
-static void add_device_complete(struct hci_dev *hdev, u8 status)
+static void add_device_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct pending_cmd *cmd;
mgmt_event(MGMT_EV_DEVICE_REMOVED, hdev, &ev, sizeof(ev), sk);
}
-static void remove_device_complete(struct hci_dev *hdev, u8 status)
+static void remove_device_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct pending_cmd *cmd;
__hci_update_background_scan(req);
}
-static void powered_complete(struct hci_dev *hdev, u8 status)
+static void powered_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
struct cmd_lookup match = { NULL, hdev };
BT_DBG("status 0x%02x", status);
+ if (!status) {
+ /* Register the available SMP channels (BR/EDR and LE) only
+ * when successfully powering on the controller. This late
+ * registration is required so that LE SMP can clearly
+ * decide if the public address or static address is used.
+ */
+ smp_register(hdev);
+ }
+
hci_dev_lock(hdev);
mgmt_pending_foreach(MGMT_OP_SET_POWERED, hdev, settings_rsp, &match);
mgmt_event(MGMT_EV_DISCOVERING, hdev, &ev, sizeof(ev), NULL);
}
-static void adv_enable_complete(struct hci_dev *hdev, u8 status)
+static void adv_enable_complete(struct hci_dev *hdev, u8 status, u16 opcode)
{
BT_DBG("%s status %u", hdev->name, status);
}
{
int err;
+ BUILD_BUG_ON(sizeof(struct sockaddr_rc) > sizeof(struct sockaddr));
+
err = proto_register(&rfcomm_proto, 0);
if (err < 0)
return err;
{
int err;
+ BUILD_BUG_ON(sizeof(struct sockaddr_sco) > sizeof(struct sockaddr));
+
err = proto_register(&sco_proto, 0);
if (err < 0)
return err;
delta = ktime_sub(rettime, calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
- BT_INFO("ECDH test passed in %lld usecs", duration);
+ BT_INFO("ECDH test passed in %llu usecs", duration);
return 0;
}
SOFTWARE IS DISCLAIMED.
*/
+#include <linux/debugfs.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <crypto/b128ops.h>
if (err)
return err;
- BT_DBG("res %16phN", res);
+ SMP_DBG("res %16phN", res);
return err;
}
if (conn->hcon->type == ACL_LINK) {
/* We must have a BR/EDR SC link */
if (!test_bit(HCI_CONN_AES_CCM, &conn->hcon->flags) &&
- !test_bit(HCI_FORCE_LESC, &hdev->dbg_flags))
+ !test_bit(HCI_FORCE_BREDR_SMP, &hdev->dbg_flags))
return SMP_CROSS_TRANSP_NOT_ALLOWED;
set_bit(SMP_FLAG_SC, &smp->flags);
* implementations are not known of and in order to not over
* complicate our implementation, simply pretend that we never
* received an IRK for such a device.
+ *
+ * The Identity Address must also be a Static Random or Public
+ * Address, which hci_is_identity_address() checks for.
*/
- if (!bacmp(&info->bdaddr, BDADDR_ANY)) {
+ if (!bacmp(&info->bdaddr, BDADDR_ANY) ||
+ !hci_is_identity_address(&info->bdaddr, info->addr_type)) {
BT_ERR("Ignoring IRK with no identity address");
goto distribute;
}
/* BR/EDR must use Secure Connections for SMP */
if (!test_bit(HCI_CONN_AES_CCM, &hcon->flags) &&
- !test_bit(HCI_FORCE_LESC, &hdev->dbg_flags))
+ !test_bit(HCI_FORCE_BREDR_SMP, &hdev->dbg_flags))
return;
/* If our LE support is not enabled don't do anything */
l2cap_chan_set_defaults(chan);
- bacpy(&chan->src, &hdev->bdaddr);
- if (cid == L2CAP_CID_SMP)
- chan->src_type = BDADDR_LE_PUBLIC;
- else
+ if (cid == L2CAP_CID_SMP) {
+ /* If usage of the static address is forced or if the device
+ * does not have a public address, then listen on the static
+ * address.
+ *
+ * In case BR/EDR has been disabled on a dual-mode controller
+ * and a static address has been configured, then listen on
+ * the static address instead.
+ */
+ if (test_bit(HCI_FORCE_STATIC_ADDR, &hdev->dbg_flags) ||
+ !bacmp(&hdev->bdaddr, BDADDR_ANY) ||
+ (!test_bit(HCI_BREDR_ENABLED, &hdev->dev_flags) &&
+ bacmp(&hdev->static_addr, BDADDR_ANY))) {
+ bacpy(&chan->src, &hdev->static_addr);
+ chan->src_type = BDADDR_LE_RANDOM;
+ } else {
+ bacpy(&chan->src, &hdev->bdaddr);
+ chan->src_type = BDADDR_LE_PUBLIC;
+ }
+ } else {
+ bacpy(&chan->src, &hdev->bdaddr);
chan->src_type = BDADDR_BREDR;
+ }
+
chan->state = BT_LISTEN;
chan->mode = L2CAP_MODE_BASIC;
chan->imtu = L2CAP_DEFAULT_MTU;
l2cap_chan_put(chan);
}
+static ssize_t force_bredr_smp_read(struct file *file,
+ char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct hci_dev *hdev = file->private_data;
+ char buf[3];
+
+ buf[0] = test_bit(HCI_FORCE_BREDR_SMP, &hdev->dbg_flags) ? 'Y': 'N';
+ buf[1] = '\n';
+ buf[2] = '\0';
+ return simple_read_from_buffer(user_buf, count, ppos, buf, 2);
+}
+
+static ssize_t force_bredr_smp_write(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct hci_dev *hdev = file->private_data;
+ char buf[32];
+ size_t buf_size = min(count, (sizeof(buf)-1));
+ bool enable;
+
+ if (copy_from_user(buf, user_buf, buf_size))
+ return -EFAULT;
+
+ buf[buf_size] = '\0';
+ if (strtobool(buf, &enable))
+ return -EINVAL;
+
+ if (enable == test_bit(HCI_FORCE_BREDR_SMP, &hdev->dbg_flags))
+ return -EALREADY;
+
+ if (enable) {
+ struct l2cap_chan *chan;
+
+ chan = smp_add_cid(hdev, L2CAP_CID_SMP_BREDR);
+ if (IS_ERR(chan))
+ return PTR_ERR(chan);
+
+ hdev->smp_bredr_data = chan;
+ } else {
+ struct l2cap_chan *chan;
+
+ chan = hdev->smp_bredr_data;
+ hdev->smp_bredr_data = NULL;
+ smp_del_chan(chan);
+ }
+
+ change_bit(HCI_FORCE_BREDR_SMP, &hdev->dbg_flags);
+
+ return count;
+}
+
+static const struct file_operations force_bredr_smp_fops = {
+ .open = simple_open,
+ .read = force_bredr_smp_read,
+ .write = force_bredr_smp_write,
+ .llseek = default_llseek,
+};
+
int smp_register(struct hci_dev *hdev)
{
struct l2cap_chan *chan;
BT_DBG("%s", hdev->name);
+ /* If the controller does not support Low Energy operation, then
+ * there is also no need to register any SMP channel.
+ */
+ if (!lmp_le_capable(hdev))
+ return 0;
+
+ if (WARN_ON(hdev->smp_data)) {
+ chan = hdev->smp_data;
+ hdev->smp_data = NULL;
+ smp_del_chan(chan);
+ }
+
chan = smp_add_cid(hdev, L2CAP_CID_SMP);
if (IS_ERR(chan))
return PTR_ERR(chan);
hdev->smp_data = chan;
- if (!lmp_sc_capable(hdev) &&
- !test_bit(HCI_FORCE_LESC, &hdev->dbg_flags))
+ /* If the controller does not support BR/EDR Secure Connections
+ * feature, then the BR/EDR SMP channel shall not be present.
+ *
+ * To test this with Bluetooth 4.0 controllers, create a debugfs
+ * switch that allows forcing BR/EDR SMP support and accepting
+ * cross-transport pairing on non-AES encrypted connections.
+ */
+ if (!lmp_sc_capable(hdev)) {
+ debugfs_create_file("force_bredr_smp", 0644, hdev->debugfs,
+ hdev, &force_bredr_smp_fops);
return 0;
+ }
+
+ if (WARN_ON(hdev->smp_bredr_data)) {
+ chan = hdev->smp_bredr_data;
+ hdev->smp_bredr_data = NULL;
+ smp_del_chan(chan);
+ }
chan = smp_add_cid(hdev, L2CAP_CID_SMP_BREDR);
if (IS_ERR(chan)) {
delta = ktime_sub(rettime, calltime);
duration = (unsigned long long) ktime_to_ns(delta) >> 10;
- BT_INFO("SMP test passed in %lld usecs", duration);
+ BT_INFO("SMP test passed in %llu usecs", duration);
return 0;
}
#include <linux/llc.h>
#include <net/llc.h>
#include <net/stp.h>
+#include <net/switchdev.h>
#include "br_private.h"
.notifier_call = br_device_event
};
+static int br_netdev_switch_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+{
+ struct net_device *dev = netdev_switch_notifier_info_to_dev(ptr);
+ struct net_bridge_port *p;
+ struct net_bridge *br;
+ struct netdev_switch_notifier_fdb_info *fdb_info;
+ int err = NOTIFY_DONE;
+
+ rtnl_lock();
+ p = br_port_get_rtnl(dev);
+ if (!p)
+ goto out;
+
+ br = p->br;
+
+ switch (event) {
+ case NETDEV_SWITCH_FDB_ADD:
+ fdb_info = ptr;
+ err = br_fdb_external_learn_add(br, p, fdb_info->addr,
+ fdb_info->vid);
+ if (err)
+ err = notifier_from_errno(err);
+ break;
+ case NETDEV_SWITCH_FDB_DEL:
+ fdb_info = ptr;
+ err = br_fdb_external_learn_del(br, p, fdb_info->addr,
+ fdb_info->vid);
+ if (err)
+ err = notifier_from_errno(err);
+ break;
+ }
+
+out:
+ rtnl_unlock();
+ return err;
+}
+
+static struct notifier_block br_netdev_switch_notifier = {
+ .notifier_call = br_netdev_switch_event,
+};
+
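[Editor's note] The handler above follows the standard notifier error convention: return NOTIFY_DONE or NOTIFY_OK on success, and encode a negative errno with notifier_from_errno() so the chain caller can decode it with notifier_to_errno(). A minimal sketch of that convention; my_event() and MY_EVENT are illustrative.

    #include <linux/notifier.h>
    #include <linux/errno.h>

    #define MY_EVENT 1      /* hypothetical event id */

    static int my_event(struct notifier_block *nb, unsigned long event,
                        void *ptr)
    {
            if (event != MY_EVENT)
                    return NOTIFY_DONE;     /* not interested, keep iterating */

            if (!ptr)
                    return notifier_from_errno(-EINVAL);    /* stop the chain */

            return NOTIFY_OK;
    }

    static struct notifier_block my_nb = {
            .notifier_call = my_event,
    };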
static void __net_exit br_net_exit(struct net *net)
{
struct net_device *dev;
if (err)
goto err_out3;
- err = br_netlink_init();
+ err = register_netdev_switch_notifier(&br_netdev_switch_notifier);
if (err)
goto err_out4;
+ err = br_netlink_init();
+ if (err)
+ goto err_out5;
+
brioctl_set(br_ioctl_deviceless_stub);
#if IS_ENABLED(CONFIG_ATM_LANE)
return 0;
+err_out5:
+ unregister_netdev_switch_notifier(&br_netdev_switch_notifier);
err_out4:
unregister_netdevice_notifier(&br_device_notifier);
err_out3:
{
stp_proto_unregister(&br_stp_proto);
br_netlink_fini();
+ unregister_netdev_switch_notifier(&br_netdev_switch_notifier);
unregister_netdevice_notifier(&br_device_notifier);
brioctl_set(NULL);
unregister_pernet_subsys(&br_net_ops);
if (fdb->vlan_id && nla_put(skb, NDA_VLAN, sizeof(u16), &fdb->vlan_id))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (!(dev->priv_flags & IFF_EBRIDGE))
goto out;
+ if (!filter_dev)
+ idx = ndo_dflt_fdb_dump(skb, cb, dev, NULL, idx);
+
for (i = 0; i < BR_HASH_SIZE; i++) {
struct net_bridge_fdb_entry *f;
(!f->dst || f->dst->dev != filter_dev)) {
if (filter_dev != dev)
goto skip;
- /* !f->dst is a speacial case for bridge
+ /* !f->dst is a special case for bridge
* It means the MAC belongs to the bridge
* Therefore need a little more filtering
* we only want to dump the !f->dst case
if (f->dst)
goto skip;
}
+ if (!filter_dev && f->dst)
+ goto skip;
if (fdb_fill_info(skb, br, f,
NETLINK_CB(cb->skb).portid,
}
}
-int br_fdb_external_learn_add(struct net_device *dev,
+int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
const unsigned char *addr, u16 vid)
{
- struct net_bridge_port *p;
- struct net_bridge *br;
struct hlist_head *head;
struct net_bridge_fdb_entry *fdb;
int err = 0;
- rtnl_lock();
-
- p = br_port_get_rtnl(dev);
- if (!p) {
- pr_info("bridge: %s not a bridge port\n", dev->name);
- err = -EINVAL;
- goto err_rtnl_unlock;
- }
-
- br = p->br;
-
+ ASSERT_RTNL();
spin_lock_bh(&br->hash_lock);
head = &br->hash[br_mac_hash(addr, vid)];
err_unlock:
spin_unlock_bh(&br->hash_lock);
-err_rtnl_unlock:
- rtnl_unlock();
return err;
}
-EXPORT_SYMBOL(br_fdb_external_learn_add);
-int br_fdb_external_learn_del(struct net_device *dev,
+int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
const unsigned char *addr, u16 vid)
{
- struct net_bridge_port *p;
- struct net_bridge *br;
struct hlist_head *head;
struct net_bridge_fdb_entry *fdb;
int err = 0;
- rtnl_lock();
-
- p = br_port_get_rtnl(dev);
- if (!p) {
- pr_info("bridge: %s not a bridge port\n", dev->name);
- err = -EINVAL;
- goto err_rtnl_unlock;
- }
-
- br = p->br;
-
+ ASSERT_RTNL();
spin_lock_bh(&br->hash_lock);
head = &br->hash[br_mac_hash(addr, vid)];
err = -ENOENT;
spin_unlock_bh(&br->hash_lock);
-err_rtnl_unlock:
- rtnl_unlock();
return err;
}
-EXPORT_SYMBOL(br_fdb_external_learn_del);
features = netdev_increment_features(features,
p->dev->features, mask);
}
+ features = netdev_add_tso_features(features, mask);
return features;
}
int err = 0;
bool changed_addr;
- /* Don't allow bridging non-ethernet like devices */
+ /* Don't allow bridging non-ethernet like devices, or DSA-enabled
+ * master network devices since the bridge layer rx_handler prevents
+ * the DSA fake ethertype handler from being invoked, so we do not
+ * strip off the DSA switch tag protocol header and the bridge layer
+ * just returns RX_HANDLER_CONSUMED, stopping RX processing for these
+ * frames.
+ */
if ((dev->flags & IFF_LOOPBACK) ||
dev->type != ARPHRD_ETHER || dev->addr_len != ETH_ALEN ||
- !is_valid_ether_addr(dev->dev_addr))
+ !is_valid_ether_addr(dev->dev_addr) ||
+ netdev_uses_dsa(dev))
return -EINVAL;
/* No bridging of bridges */
dst = NULL;
if (is_broadcast_ether_addr(dest)) {
- if (p->flags & BR_PROXYARP &&
+ if (IS_ENABLED(CONFIG_INET) &&
+ p->flags & BR_PROXYARP &&
skb->protocol == htons(ETH_P_ARP))
br_do_proxy_arp(skb, br, vid);
nla_nest_end(skb, nest2);
nla_nest_end(skb, nest);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
end:
nla_nest_end(skb, nest);
struct net_device *dev;
int err;
- err = nlmsg_parse(nlh, sizeof(*bpm), tb, MDBA_SET_ENTRY, NULL);
+ err = nlmsg_parse(nlh, sizeof(*bpm), tb, MDBA_SET_ENTRY_MAX, NULL);
if (err < 0)
return err;
#endif
#define IS_IP(skb) \
- (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IP))
+ (!skb_vlan_tag_present(skb) && skb->protocol == htons(ETH_P_IP))
#define IS_IPV6(skb) \
- (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_IPV6))
+ (!skb_vlan_tag_present(skb) && skb->protocol == htons(ETH_P_IPV6))
#define IS_ARP(skb) \
- (!vlan_tx_tag_present(skb) && skb->protocol == htons(ETH_P_ARP))
+ (!skb_vlan_tag_present(skb) && skb->protocol == htons(ETH_P_ARP))
static inline __be16 vlan_proto(const struct sk_buff *skb)
{
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
return skb->protocol;
else if (skb->protocol == htons(ETH_P_8021Q))
return vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
struct net_device *vlan, *br;
br = bridge_parent(dev);
- if (brnf_pass_vlan_indev == 0 || !vlan_tx_tag_present(skb))
+ if (brnf_pass_vlan_indev == 0 || !skb_vlan_tag_present(skb))
return br;
vlan = __vlan_find_dev_deep_rcu(br, skb->vlan_proto,
- vlan_tx_tag_get(skb) & VLAN_VID_MASK);
+ skb_vlan_tag_get(skb) & VLAN_VID_MASK);
return vlan ? vlan : br;
}
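[Editor's note] The mechanical rename above and in the hunks that follow (vlan_tx_tag_* to skb_vlan_tag_*) reflects that the out-of-band tag lives on the skb rather than being TX-specific; the accessors' contract is unchanged. A small sketch of typical use; read_vid() is illustrative.

    #include <linux/skbuff.h>
    #include <linux/if_vlan.h>

    /* Return the VLAN ID from the out-of-band (hw-accelerated) tag, or
     * 0 if none is attached; an in-band 802.1Q header may still sit in
     * the packet data and must be parsed separately in that case.
     */
    static u16 read_vid(const struct sk_buff *skb)
    {
            if (skb_vlan_tag_present(skb))
                    return skb_vlan_tag_get(skb) & VLAN_VID_MASK;
            return 0;
    }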
return 0;
}
+static int br_fill_ifvlaninfo_range(struct sk_buff *skb, u16 vid_start,
+ u16 vid_end, u16 flags)
+{
+ struct bridge_vlan_info vinfo;
+
+ if ((vid_end - vid_start) > 0) {
+ /* add range to skb */
+ vinfo.vid = vid_start;
+ vinfo.flags = flags | BRIDGE_VLAN_INFO_RANGE_BEGIN;
+ if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO,
+ sizeof(vinfo), &vinfo))
+ goto nla_put_failure;
+
+ vinfo.flags &= ~BRIDGE_VLAN_INFO_RANGE_BEGIN;
+
+ vinfo.vid = vid_end;
+ vinfo.flags = flags | BRIDGE_VLAN_INFO_RANGE_END;
+ if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO,
+ sizeof(vinfo), &vinfo))
+ goto nla_put_failure;
+ } else {
+ vinfo.vid = vid_start;
+ vinfo.flags = flags;
+ if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO,
+ sizeof(vinfo), &vinfo))
+ goto nla_put_failure;
+ }
+
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+
+static int br_fill_ifvlaninfo_compressed(struct sk_buff *skb,
+ const struct net_port_vlans *pv)
+{
+ u16 vid_range_start = 0, vid_range_end = 0;
+ u16 vid_range_flags = 0;
+ u16 pvid, vid, flags;
+ int err = 0;
+
+ /* Pack IFLA_BRIDGE_VLAN_INFO's for every vlan
+ * and mark vlan info with begin and end flags
+ * if vlaninfo represents a range
+ */
+ pvid = br_get_pvid(pv);
+ for_each_set_bit(vid, pv->vlan_bitmap, VLAN_N_VID) {
+ flags = 0;
+ if (vid == pvid)
+ flags |= BRIDGE_VLAN_INFO_PVID;
+
+ if (test_bit(vid, pv->untagged_bitmap))
+ flags |= BRIDGE_VLAN_INFO_UNTAGGED;
+
+ if (vid_range_start == 0) {
+ goto initvars;
+ } else if ((vid - vid_range_end) == 1 &&
+ flags == vid_range_flags) {
+ vid_range_end = vid;
+ continue;
+ } else {
+ err = br_fill_ifvlaninfo_range(skb, vid_range_start,
+ vid_range_end,
+ vid_range_flags);
+ if (err)
+ return err;
+ }
+
+initvars:
+ vid_range_start = vid;
+ vid_range_end = vid;
+ vid_range_flags = flags;
+ }
+
+ if (vid_range_start != 0) {
+ /* Call it once more to send any left over vlans */
+ err = br_fill_ifvlaninfo_range(skb, vid_range_start,
+ vid_range_end,
+ vid_range_flags);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int br_fill_ifvlaninfo(struct sk_buff *skb,
+ const struct net_port_vlans *pv)
+{
+ struct bridge_vlan_info vinfo;
+ u16 pvid, vid;
+
+ pvid = br_get_pvid(pv);
+ for_each_set_bit(vid, pv->vlan_bitmap, VLAN_N_VID) {
+ vinfo.vid = vid;
+ vinfo.flags = 0;
+ if (vid == pvid)
+ vinfo.flags |= BRIDGE_VLAN_INFO_PVID;
+
+ if (test_bit(vid, pv->untagged_bitmap))
+ vinfo.flags |= BRIDGE_VLAN_INFO_UNTAGGED;
+
+ if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO,
+ sizeof(vinfo), &vinfo))
+ goto nla_put_failure;
+ }
+
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+
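[Editor's note] br_fill_ifvlaninfo_compressed() above is run-length compression over the sorted VID space: consecutive IDs carrying identical flags collapse into one (start, end) pair, flushed whenever the run breaks. A standalone userspace sketch of the same walk; the emit() callback and the input table are illustrative.

    #include <stdio.h>

    struct vlan_range { unsigned start, end, flags; };

    static void emit(const struct vlan_range *r)
    {
            printf("vid %u-%u flags 0x%x\n", r->start, r->end, r->flags);
    }

    int main(void)
    {
            /* (vid, flags) pairs in ascending vid order, as the bitmap
             * walk yields them; vid 0 is invalid, so it doubles as the
             * "no open run" sentinel, as in the kernel loop. */
            const struct { unsigned vid, flags; } in[] = {
                    { 10, 0 }, { 11, 0 }, { 12, 0 },
                    { 20, 1 }, { 21, 1 }, { 30, 0 },
            };
            struct vlan_range cur = { 0, 0, 0 };

            for (unsigned i = 0; i < sizeof(in) / sizeof(in[0]); i++) {
                    if (cur.start && in[i].vid == cur.end + 1 &&
                        in[i].flags == cur.flags) {
                            cur.end = in[i].vid;    /* extend current run */
                            continue;
                    }
                    if (cur.start)
                            emit(&cur);             /* flush finished run */
                    cur.start = cur.end = in[i].vid;
                    cur.flags = in[i].flags;
            }
            if (cur.start)
                    emit(&cur);                     /* flush the final run */
            return 0;
    }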
/*
* Create one netlink message for one interface
* Contains port and master info as well as carrier and bridge state.
}
/* Check if the VID information is requested */
- if (filter_mask & RTEXT_FILTER_BRVLAN) {
- struct nlattr *af;
+ if ((filter_mask & RTEXT_FILTER_BRVLAN) ||
+ (filter_mask & RTEXT_FILTER_BRVLAN_COMPRESSED)) {
const struct net_port_vlans *pv;
- struct bridge_vlan_info vinfo;
- u16 vid;
- u16 pvid;
+ struct nlattr *af;
+ int err;
if (port)
pv = nbp_get_vlan_info(port);
if (!af)
goto nla_put_failure;
- pvid = br_get_pvid(pv);
- for_each_set_bit(vid, pv->vlan_bitmap, VLAN_N_VID) {
- vinfo.vid = vid;
- vinfo.flags = 0;
- if (vid == pvid)
- vinfo.flags |= BRIDGE_VLAN_INFO_PVID;
-
- if (test_bit(vid, pv->untagged_bitmap))
- vinfo.flags |= BRIDGE_VLAN_INFO_UNTAGGED;
-
- if (nla_put(skb, IFLA_BRIDGE_VLAN_INFO,
- sizeof(vinfo), &vinfo))
- goto nla_put_failure;
- }
-
+ if (filter_mask & RTEXT_FILTER_BRVLAN_COMPRESSED)
+ err = br_fill_ifvlaninfo_compressed(skb, pv);
+ else
+ err = br_fill_ifvlaninfo(skb, pv);
+ if (err)
+ goto nla_put_failure;
nla_nest_end(skb, af);
}
done:
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
int br_getlink(struct sk_buff *skb, u32 pid, u32 seq,
struct net_device *dev, u32 filter_mask)
{
- int err = 0;
struct net_bridge_port *port = br_port_get_rtnl(dev);
- if (!port && !(filter_mask & RTEXT_FILTER_BRVLAN))
- goto out;
+ if (!port && !(filter_mask & RTEXT_FILTER_BRVLAN) &&
+ !(filter_mask & RTEXT_FILTER_BRVLAN_COMPRESSED))
+ return 0;
- err = br_fill_ifinfo(skb, port, pid, seq, RTM_NEWLINK, NLM_F_MULTI,
- filter_mask, dev);
-out:
- return err;
+ return br_fill_ifinfo(skb, port, pid, seq, RTM_NEWLINK, NLM_F_MULTI,
+ filter_mask, dev);
}
-static const struct nla_policy ifla_br_policy[IFLA_MAX+1] = {
- [IFLA_BRIDGE_FLAGS] = { .type = NLA_U16 },
- [IFLA_BRIDGE_MODE] = { .type = NLA_U16 },
- [IFLA_BRIDGE_VLAN_INFO] = { .type = NLA_BINARY,
- .len = sizeof(struct bridge_vlan_info), },
-};
+static int br_vlan_info(struct net_bridge *br, struct net_bridge_port *p,
+ int cmd, struct bridge_vlan_info *vinfo)
+{
+ int err = 0;
+
+ switch (cmd) {
+ case RTM_SETLINK:
+ if (p) {
+ err = nbp_vlan_add(p, vinfo->vid, vinfo->flags);
+ if (err)
+ break;
+
+ if (vinfo->flags & BRIDGE_VLAN_INFO_MASTER)
+ err = br_vlan_add(p->br, vinfo->vid,
+ vinfo->flags);
+ } else {
+ err = br_vlan_add(br, vinfo->vid, vinfo->flags);
+ }
+ break;
+
+ case RTM_DELLINK:
+ if (p) {
+ nbp_vlan_delete(p, vinfo->vid);
+ if (vinfo->flags & BRIDGE_VLAN_INFO_MASTER)
+ br_vlan_delete(p->br, vinfo->vid);
+ } else {
+ br_vlan_delete(br, vinfo->vid);
+ }
+ break;
+ }
+
+ return err;
+}
static int br_afspec(struct net_bridge *br,
struct net_bridge_port *p,
struct nlattr *af_spec,
int cmd)
{
- struct nlattr *tb[IFLA_BRIDGE_MAX+1];
+ struct bridge_vlan_info *vinfo_start = NULL;
+ struct bridge_vlan_info *vinfo = NULL;
+ struct nlattr *attr;
int err = 0;
+ int rem;
- err = nla_parse_nested(tb, IFLA_BRIDGE_MAX, af_spec, ifla_br_policy);
- if (err)
- return err;
+ nla_for_each_nested(attr, af_spec, rem) {
+ if (nla_type(attr) != IFLA_BRIDGE_VLAN_INFO)
+ continue;
+ if (nla_len(attr) != sizeof(struct bridge_vlan_info))
+ return -EINVAL;
+ vinfo = nla_data(attr);
+ if (vinfo->flags & BRIDGE_VLAN_INFO_RANGE_BEGIN) {
+ if (vinfo_start)
+ return -EINVAL;
+ vinfo_start = vinfo;
+ continue;
+ }
- if (tb[IFLA_BRIDGE_VLAN_INFO]) {
- struct bridge_vlan_info *vinfo;
+ if (vinfo_start) {
+ struct bridge_vlan_info tmp_vinfo;
+ int v;
- vinfo = nla_data(tb[IFLA_BRIDGE_VLAN_INFO]);
+ if (!(vinfo->flags & BRIDGE_VLAN_INFO_RANGE_END))
+ return -EINVAL;
- if (!vinfo->vid || vinfo->vid >= VLAN_VID_MASK)
- return -EINVAL;
+ if (vinfo->vid <= vinfo_start->vid)
+ return -EINVAL;
- switch (cmd) {
- case RTM_SETLINK:
- if (p) {
- err = nbp_vlan_add(p, vinfo->vid, vinfo->flags);
+ memcpy(&tmp_vinfo, vinfo_start,
+ sizeof(struct bridge_vlan_info));
+
+ for (v = vinfo_start->vid; v <= vinfo->vid; v++) {
+ tmp_vinfo.vid = v;
+ err = br_vlan_info(br, p, cmd, &tmp_vinfo);
if (err)
break;
-
- if (vinfo->flags & BRIDGE_VLAN_INFO_MASTER)
- err = br_vlan_add(p->br, vinfo->vid,
- vinfo->flags);
- } else
- err = br_vlan_add(br, vinfo->vid, vinfo->flags);
-
- break;
-
- case RTM_DELLINK:
- if (p) {
- nbp_vlan_delete(p, vinfo->vid);
- if (vinfo->flags & BRIDGE_VLAN_INFO_MASTER)
- br_vlan_delete(p->br, vinfo->vid);
- } else
- br_vlan_delete(br, vinfo->vid);
- break;
+ }
+ vinfo_start = NULL;
+ } else {
+ err = br_vlan_info(br, p, cmd, vinfo);
}
+ if (err)
+ break;
}
return err;
err = br_afspec((struct net_bridge *)netdev_priv(dev), p,
afspec, RTM_DELLINK);
+ if (err == 0)
+ /* Send RTM_NEWLINK because userspace
+ * expects RTM_NEWLINK for vlan dels
+ */
+ br_ifinfo_notify(RTM_NEWLINK, p);
return err;
}
struct net_device *dev, struct net_device *fdev, int idx);
int br_fdb_sync_static(struct net_bridge *br, struct net_bridge_port *p);
void br_fdb_unsync_static(struct net_bridge *br, struct net_bridge_port *p);
+int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
+ const unsigned char *addr, u16 vid);
+int br_fdb_external_learn_del(struct net_bridge *br, struct net_bridge_port *p,
+ const unsigned char *addr, u16 vid);
/* br_forward.c */
void br_deliver(const struct net_bridge_port *to, struct sk_buff *skb);
{
int err = 0;
- if (vlan_tx_tag_present(skb))
- *vid = vlan_tx_tag_get(skb) & VLAN_VID_MASK;
+ if (skb_vlan_tag_present(skb))
+ *vid = skb_vlan_tag_get(skb) & VLAN_VID_MASK;
else {
*vid = 0;
err = -EINVAL;
* sent from vlan device on the bridge device, it does not have
* HW accelerated vlan tag.
*/
- if (unlikely(!vlan_tx_tag_present(skb) &&
+ if (unlikely(!skb_vlan_tag_present(skb) &&
skb->protocol == proto)) {
skb = skb_vlan_untag(skb);
if (unlikely(!skb))
/* Protocol-mismatch, empty out vlan_tci for new tag */
skb_push(skb, ETH_HLEN);
skb = vlan_insert_tag_set_proto(skb, skb->vlan_proto,
- vlan_tx_tag_get(skb));
+ skb_vlan_tag_get(skb));
if (unlikely(!skb))
return false;
/* VLAN encapsulated Type/Length field, given from orig frame */
__be16 encap;
- if (vlan_tx_tag_present(skb)) {
- TCI = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ TCI = skb_vlan_tag_get(skb);
encap = skb->protocol;
} else {
const struct vlan_hdr *fp;
__be16 ethproto;
int verdict, i;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
ethproto = htons(ETH_P_8021Q);
else
ethproto = h->h_proto;
goto cancel;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
cancel:
nlmsg_cancel(skb, nlh);
int ret;
char tmp_enc[40];
__le32 tmp[5] = {
- 16u, msg->hdr.crc, msg->footer.front_crc,
+ cpu_to_le32(16), msg->hdr.crc, msg->footer.front_crc,
msg->footer.middle_crc, msg->footer.data_crc,
};
ret = ceph_x_encrypt(&au->session_key, &tmp, sizeof(tmp),
if (src_len != sizeof(u32) + dst_len)
return -EINVAL;
- buf_len = le32_to_cpu(*(u32 *)src);
+ buf_len = le32_to_cpu(*(__le32 *)src);
if (buf_len != dst_len)
return -EINVAL;
if (skb->encapsulation)
features &= dev->hw_enc_features;
- if (!vlan_tx_tag_present(skb)) {
+ if (!skb_vlan_tag_present(skb)) {
if (unlikely(protocol == htons(ETH_P_8021Q) ||
protocol == htons(ETH_P_8021AD))) {
struct vlan_ethhdr *veh = (struct vlan_ethhdr *)skb->data;
static struct sk_buff *validate_xmit_vlan(struct sk_buff *skb,
netdev_features_t features)
{
- if (vlan_tx_tag_present(skb) &&
+ if (skb_vlan_tag_present(skb) &&
!vlan_hw_offload_capable(features, skb->vlan_proto))
skb = __vlan_hwaccel_push_inside(skb);
return skb;
if (pfmemalloc && !skb_pfmemalloc_protocol(skb))
goto drop;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
if (pt_prev) {
ret = deliver_skb(skb, pt_prev, orig_dev);
pt_prev = NULL;
}
}
- if (unlikely(vlan_tx_tag_present(skb))) {
- if (vlan_tx_tag_get_id(skb))
+ if (unlikely(skb_vlan_tag_present(skb))) {
+ if (skb_vlan_tag_get_id(skb))
skb->pkt_type = PACKET_OTHERHOST;
/* Note: we might in the future use prio bits
* and set skb->priority like in vlan_do_receive()
{
unsigned int i, count = dev->num_rx_queues;
struct netdev_rx_queue *rx;
+ size_t sz = count * sizeof(*rx);
BUG_ON(count < 1);
- rx = kcalloc(count, sizeof(struct netdev_rx_queue), GFP_KERNEL);
- if (!rx)
- return -ENOMEM;
-
+ rx = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ if (!rx) {
+ rx = vzalloc(sz);
+ if (!rx)
+ return -ENOMEM;
+ }
dev->_rx = rx;
for (i = 0; i < count; i++)
netif_free_tx_queues(dev);
#ifdef CONFIG_SYSFS
- kfree(dev->_rx);
+ kvfree(dev->_rx);
#endif
kfree(rcu_dereference_protected(dev->ingress_queue, 1));
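[Editor's note] The allocation change above follows the common large-table pattern: attempt a physically contiguous kzalloc first (__GFP_NOWARN keeps the expected failure quiet), fall back to vzalloc, and free with kvfree(), which dispatches to kfree() or vfree() as appropriate. A minimal sketch under those assumptions; alloc_table() is illustrative, and the __GFP_REPEAT flag matches the hunk above.

    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *alloc_table(size_t count, size_t elem_size)
    {
            size_t sz = count * elem_size;
            void *p;

            /* Prefer physically contiguous memory; don't warn on failure. */
            p = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
            if (!p)
                    p = vzalloc(sz);        /* virtually contiguous fallback */

            return p;       /* release with kvfree(p), which handles both */
    }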
return err;
}
+static int __ethtool_get_module_info(struct net_device *dev,
+ struct ethtool_modinfo *modinfo)
+{
+ const struct ethtool_ops *ops = dev->ethtool_ops;
+ struct phy_device *phydev = dev->phydev;
+
+ if (phydev && phydev->drv && phydev->drv->module_info)
+ return phydev->drv->module_info(phydev, modinfo);
+
+ if (ops->get_module_info)
+ return ops->get_module_info(dev, modinfo);
+
+ return -EOPNOTSUPP;
+}
+
static int ethtool_get_module_info(struct net_device *dev,
void __user *useraddr)
{
int ret;
struct ethtool_modinfo modinfo;
- const struct ethtool_ops *ops = dev->ethtool_ops;
-
- if (!ops->get_module_info)
- return -EOPNOTSUPP;
if (copy_from_user(&modinfo, useraddr, sizeof(modinfo)))
return -EFAULT;
- ret = ops->get_module_info(dev, &modinfo);
+ ret = __ethtool_get_module_info(dev, &modinfo);
if (ret)
return ret;
return 0;
}
+static int __ethtool_get_module_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *ee, u8 *data)
+{
+ const struct ethtool_ops *ops = dev->ethtool_ops;
+ struct phy_device *phydev = dev->phydev;
+
+ if (phydev && phydev->drv && phydev->drv->module_eeprom)
+ return phydev->drv->module_eeprom(phydev, ee, data);
+
+ if (ops->get_module_eeprom)
+ return ops->get_module_eeprom(dev, ee, data);
+
+ return -EOPNOTSUPP;
+}
+
static int ethtool_get_module_eeprom(struct net_device *dev,
void __user *useraddr)
{
int ret;
struct ethtool_modinfo modinfo;
- const struct ethtool_ops *ops = dev->ethtool_ops;
-
- if (!ops->get_module_info || !ops->get_module_eeprom)
- return -EOPNOTSUPP;
- ret = ops->get_module_info(dev, &modinfo);
+ ret = __ethtool_get_module_info(dev, &modinfo);
if (ret)
return ret;
- return ethtool_get_any_eeprom(dev, useraddr, ops->get_module_eeprom,
+ return ethtool_get_any_eeprom(dev, useraddr,
+ __ethtool_get_module_eeprom,
modinfo.eeprom_len);
}
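
With the new __ethtool_get_module_info()/__ethtool_get_module_eeprom() fallbacks, a module-EEPROM query succeeds whether the MAC driver or the attached PHY driver implements the callbacks. A minimal userspace sketch of the ioctl path that lands here, assuming an SFP-carrying interface name is passed as argv[1] (error handling trimmed):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    struct ethtool_modinfo modinfo = { .cmd = ETHTOOL_GMODULEINFO };
    struct ifreq ifr;
    int fd;

    if (argc < 2)
        return 1;			/* usage: modinfo <ifname> */
    fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, argv[1], IFNAMSIZ - 1);
    ifr.ifr_data = (void *)&modinfo;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("ETHTOOL_GMODULEINFO");
        return 1;
    }
    printf("module eeprom: type %u, %u bytes\n",
           modinfo.type, modinfo.eeprom_len);
    close(fd);
    return 0;
}
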
if (ops->fill(rule, skb, frh) < 0)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
goto nla_put_failure;
read_unlock_bh(&tbl->lock);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
read_unlock_bh(&tbl->lock);
goto errout;
read_unlock_bh(&tbl->lock);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
errout:
read_unlock_bh(&tbl->lock);
nlmsg_cancel(skb, nlh);
case NDTPA_BASE_REACHABLE_TIME:
NEIGH_VAR_SET(p, BASE_REACHABLE_TIME,
nla_get_msecs(tbp[i]));
+ /* update reachable_time as well; otherwise the change will
+ * only take effect the next time neigh_periodic_work
+ * decides to recompute it (which can take multiple minutes)
+ */
+ p->reachable_time =
+ neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
break;
case NDTPA_GC_STALETIME:
NEIGH_VAR_SET(p, GC_STALETIME,
if (neightbl_fill_info(skb, tbl, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, RTM_NEWNEIGHTBL,
- NLM_F_MULTI) <= 0)
+ NLM_F_MULTI) < 0)
break;
nidx = 0;
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNEIGHTBL,
- NLM_F_MULTI) <= 0)
+ NLM_F_MULTI) < 0)
goto out;
next:
nidx++;
nla_put(skb, NDA_CACHEINFO, sizeof(ci), &ci))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (nla_put(skb, NDA_DST, tbl->key_len, pn->key))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (neigh_fill_info(skb, n, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNEIGH,
- NLM_F_MULTI) <= 0) {
+ NLM_F_MULTI) < 0) {
rc = -1;
goto out;
}
if (pneigh_fill_info(skb, n, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNEIGH,
- NLM_F_MULTI, tbl) <= 0) {
+ NLM_F_MULTI, tbl) < 0) {
read_unlock_bh(&tbl->lock);
rc = -1;
goto out;
return ret;
}
+static int neigh_proc_base_reachable_time(struct ctl_table *ctl, int write,
+ void __user *buffer,
+ size_t *lenp, loff_t *ppos)
+{
+ struct neigh_parms *p = ctl->extra2;
+ int ret;
+
+ if (strcmp(ctl->procname, "base_reachable_time") == 0)
+ ret = neigh_proc_dointvec_jiffies(ctl, write, buffer, lenp, ppos);
+ else if (strcmp(ctl->procname, "base_reachable_time_ms") == 0)
+ ret = neigh_proc_dointvec_ms_jiffies(ctl, write, buffer, lenp, ppos);
+ else
+ ret = -1;
+
+ if (write && ret == 0) {
+ /* update reachable_time as well; otherwise the change will
+ * only take effect the next time neigh_periodic_work
+ * decides to recompute it
+ */
+ p->reachable_time =
+ neigh_rand_reach_time(NEIGH_VAR(p, BASE_REACHABLE_TIME));
+ }
+ return ret;
+}
+
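For reference, reachable_time is not base_reachable_time itself but a fresh random draw from [base/2, 3*base/2) each time it is recomputed, which is why the handlers above refresh it immediately on a sysctl write. A userspace model of the formula (the kernel's neigh_rand_reach_time() uses prandom_u32(); rand() stands in here):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static unsigned long rand_reach_time(unsigned long base)
{
    /* kernel: base ? (prandom_u32() % base) + (base >> 1) : 0 */
    return base ? ((unsigned long)rand() % base) + (base >> 1) : 0;
}

int main(void)
{
    unsigned long base = 30000;	/* base_reachable_time_ms */
    int i;

    srand(time(NULL));
    for (i = 0; i < 4; i++)
        printf("reachable_time = %lu ms\n", rand_reach_time(base));
    return 0;
}
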
#define NEIGH_PARMS_DATA_OFFSET(index) \
(&((struct neigh_parms *) 0)->data[index])
t->neigh_vars[NEIGH_VAR_RETRANS_TIME_MS].proc_handler = handler;
/* ReachableTime (in milliseconds) */
t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME_MS].proc_handler = handler;
+ } else {
+ /* These handlers update p->reachable_time after
+ * base_reachable_time(_ms) is set, so that the new interval takes
+ * effect on the next neighbour update instead of waiting for
+ * neigh_periodic_work to recompute it (which can take multiple
+ * minutes). Any handler that replaces them should do the same.
+ */
+ /* ReachableTime */
+ t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME].proc_handler =
+ neigh_proc_base_reachable_time;
+ /* ReachableTime (in milliseconds) */
+ t->neigh_vars[NEIGH_VAR_BASE_REACHABLE_TIME_MS].proc_handler =
+ neigh_proc_base_reachable_time;
}
/* Don't export sysctls to unprivileged users */
#include <linux/file.h>
#include <linux/export.h>
#include <linux/user_namespace.h>
+#include <linux/net_namespace.h>
+#include <linux/rtnetlink.h>
+#include <net/sock.h>
+#include <net/netlink.h>
#include <net/net_namespace.h>
#include <net/netns/generic.h>
}
}
+static int alloc_netid(struct net *net, struct net *peer, int reqid)
+{
+ int min = 0, max = 0;
+
+ ASSERT_RTNL();
+
+ if (reqid >= 0) {
+ min = reqid;
+ max = reqid + 1;
+ }
+
+ return idr_alloc(&net->netns_ids, peer, min, max, GFP_KERNEL);
+}
+
+/* This function is used by idr_for_each(). If net is equal to peer, the
+ * function returns the id so that idr_for_each() stops. Because we cannot
+ * return the id 0 (idr_for_each() would not stop on it), we return the
+ * magic value NET_ID_ZERO (-1) for it.
+ */
+#define NET_ID_ZERO -1
+static int net_eq_idr(int id, void *net, void *peer)
+{
+ if (net_eq(net, peer))
+ return id ? : NET_ID_ZERO;
+ return 0;
+}
+
+static int __peernet2id(struct net *net, struct net *peer, bool alloc)
+{
+ int id = idr_for_each(&net->netns_ids, net_eq_idr, peer);
+
+ ASSERT_RTNL();
+
+ /* Magic value for id 0. */
+ if (id == NET_ID_ZERO)
+ return 0;
+ if (id > 0)
+ return id;
+
+ if (alloc)
+ return alloc_netid(net, peer, -1);
+
+ return -ENOENT;
+}
+
+/* This function returns the id of a peer netns. If no id is assigned, one will
+ * be allocated and returned.
+ */
+int peernet2id(struct net *net, struct net *peer)
+{
+ int id = __peernet2id(net, peer, true);
+
+ return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED;
+}
+EXPORT_SYMBOL(peernet2id);
+
+struct net *get_net_ns_by_id(struct net *net, int id)
+{
+ struct net *peer;
+
+ if (id < 0)
+ return NULL;
+
+ rcu_read_lock();
+ peer = idr_find(&net->netns_ids, id);
+ if (peer)
+ get_net(peer);
+ rcu_read_unlock();
+
+ return peer;
+}
+
/*
* setup_net runs the initializers for the network namespace object.
*/
atomic_set(&net->passive, 1);
net->dev_base_seq = 1;
net->user_ns = user_ns;
+ idr_init(&net->netns_ids);
#ifdef NETNS_REFCNT_DEBUG
atomic_set(&net->use_count, 0);
list_for_each_entry(net, &net_kill_list, cleanup_list) {
list_del_rcu(&net->list);
list_add_tail(&net->exit_list, &net_exit_list);
+ for_each_net(tmp) {
+ int id = __peernet2id(tmp, net, false);
+
+ if (id >= 0)
+ idr_remove(&tmp->netns_ids, id);
+ }
+ idr_destroy(&net->netns_ids);
+
}
rtnl_unlock();
.exit = net_ns_net_exit,
};
+static struct nla_policy rtnl_net_policy[NETNSA_MAX + 1] = {
+ [NETNSA_NONE] = { .type = NLA_UNSPEC },
+ [NETNSA_NSID] = { .type = NLA_S32 },
+ [NETNSA_PID] = { .type = NLA_U32 },
+ [NETNSA_FD] = { .type = NLA_U32 },
+};
+
+static int rtnl_net_newid(struct sk_buff *skb, struct nlmsghdr *nlh)
+{
+ struct net *net = sock_net(skb->sk);
+ struct nlattr *tb[NETNSA_MAX + 1];
+ struct net *peer;
+ int nsid, err;
+
+ err = nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, NETNSA_MAX,
+ rtnl_net_policy);
+ if (err < 0)
+ return err;
+ if (!tb[NETNSA_NSID])
+ return -EINVAL;
+ nsid = nla_get_s32(tb[NETNSA_NSID]);
+
+ if (tb[NETNSA_PID])
+ peer = get_net_ns_by_pid(nla_get_u32(tb[NETNSA_PID]));
+ else if (tb[NETNSA_FD])
+ peer = get_net_ns_by_fd(nla_get_u32(tb[NETNSA_FD]));
+ else
+ return -EINVAL;
+ if (IS_ERR(peer))
+ return PTR_ERR(peer);
+
+ if (__peernet2id(net, peer, false) >= 0) {
+ err = -EEXIST;
+ goto out;
+ }
+
+ err = alloc_netid(net, peer, nsid);
+ if (err > 0)
+ err = 0;
+out:
+ put_net(peer);
+ return err;
+}
+
+static int rtnl_net_get_size(void)
+{
+ return NLMSG_ALIGN(sizeof(struct rtgenmsg))
+ + nla_total_size(sizeof(s32)) /* NETNSA_NSID */
+ ;
+}
+
+static int rtnl_net_fill(struct sk_buff *skb, u32 portid, u32 seq, int flags,
+ int cmd, struct net *net, struct net *peer)
+{
+ struct nlmsghdr *nlh;
+ struct rtgenmsg *rth;
+ int id;
+
+ ASSERT_RTNL();
+
+ nlh = nlmsg_put(skb, portid, seq, cmd, sizeof(*rth), flags);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ rth = nlmsg_data(nlh);
+ rth->rtgen_family = AF_UNSPEC;
+
+ id = __peernet2id(net, peer, false);
+ if (id < 0)
+ id = NETNSA_NSID_NOT_ASSIGNED;
+ if (nla_put_s32(skb, NETNSA_NSID, id))
+ goto nla_put_failure;
+
+ nlmsg_end(skb, nlh);
+ return 0;
+
+nla_put_failure:
+ nlmsg_cancel(skb, nlh);
+ return -EMSGSIZE;
+}
+
+static int rtnl_net_getid(struct sk_buff *skb, struct nlmsghdr *nlh)
+{
+ struct net *net = sock_net(skb->sk);
+ struct nlattr *tb[NETNSA_MAX + 1];
+ struct sk_buff *msg;
+ int err = -ENOBUFS;
+ struct net *peer;
+
+ err = nlmsg_parse(nlh, sizeof(struct rtgenmsg), tb, NETNSA_MAX,
+ rtnl_net_policy);
+ if (err < 0)
+ return err;
+ if (tb[NETNSA_PID])
+ peer = get_net_ns_by_pid(nla_get_u32(tb[NETNSA_PID]));
+ else if (tb[NETNSA_FD])
+ peer = get_net_ns_by_fd(nla_get_u32(tb[NETNSA_FD]));
+ else
+ return -EINVAL;
+
+ if (IS_ERR(peer))
+ return PTR_ERR(peer);
+
+ msg = nlmsg_new(rtnl_net_get_size(), GFP_KERNEL);
+ if (!msg) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ err = rtnl_net_fill(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq, 0,
+ RTM_GETNSID, net, peer);
+ if (err < 0)
+ goto err_out;
+
+ err = rtnl_unicast(msg, net, NETLINK_CB(skb).portid);
+ goto out;
+
+err_out:
+ nlmsg_free(msg);
+out:
+ put_net(peer);
+ return err;
+}
+
static int __init net_ns_init(void)
{
struct net_generic *ng;
register_pernet_subsys(&net_ns_ops);
+ rtnl_register(PF_UNSPEC, RTM_NEWNSID, rtnl_net_newid, NULL, NULL);
+ rtnl_register(PF_UNSPEC, RTM_GETNSID, rtnl_net_getid, NULL, NULL);
+
return 0;
}
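
A minimal raw-netlink sketch of the new RTM_GETNSID request, assuming a bound namespace file such as /var/run/netns/<name>: the kernel is handed an open fd on the namespace and replies with the peer's nsid attribute (-1, i.e. NETNSA_NSID_NOT_ASSIGNED, if none was allocated). Constants are defined locally in case the uapi headers predate this series; NLMSG_ERROR handling is trimmed:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

#ifndef RTM_GETNSID
#define RTM_GETNSID 90			/* added by this series */
#endif
#define NETNSA_NSID 1
#define NETNSA_FD   3

struct req {
    struct nlmsghdr nlh;
    struct rtgenmsg g;
    char pad[3];			/* NLMSG_ALIGN(sizeof(struct rtgenmsg)) */
    struct nlattr attr;
    unsigned int fd;
};

int main(int argc, char **argv)
{
    char buf[256];
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    struct req req;
    int sk, nsfd;

    if (argc < 2)
        return 1;			/* usage: getnsid /var/run/netns/<name> */
    nsfd = open(argv[1], O_RDONLY);
    sk = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (nsfd < 0 || sk < 0)
        return 1;

    memset(&req, 0, sizeof(req));
    req.nlh.nlmsg_len = sizeof(req);
    req.nlh.nlmsg_type = RTM_GETNSID;
    req.nlh.nlmsg_flags = NLM_F_REQUEST;
    req.g.rtgen_family = AF_UNSPEC;
    req.attr.nla_len = NLA_HDRLEN + sizeof(unsigned int);
    req.attr.nla_type = NETNSA_FD;
    req.fd = nsfd;

    if (send(sk, &req, sizeof(req), 0) < 0 ||
        recv(sk, buf, sizeof(buf), 0) <= 0)
        return 1;

    if (nlh->nlmsg_type == RTM_GETNSID) {	/* reply from rtnl_net_fill() */
        struct nlattr *a = (struct nlattr *)((char *)NLMSG_DATA(nlh) +
                           NLMSG_ALIGN(sizeof(struct rtgenmsg)));

        if (a->nla_type == NETNSA_NSID)
            printf("nsid = %d\n", *(int *)((char *)a + NLA_HDRLEN));
    }
    return 0;
}
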
features = netif_skb_features(skb);
- if (vlan_tx_tag_present(skb) &&
+ if (skb_vlan_tag_present(skb) &&
!vlan_hw_offload_capable(features, skb->vlan_proto)) {
skb = __vlan_hwaccel_push_inside(skb);
if (unlikely(!skb)) {
#include <net/arp.h>
#include <net/route.h>
#include <net/udp.h>
+#include <net/tcp.h>
#include <net/sock.h>
#include <net/pkt_sched.h>
#include <net/fib_rules.h>
for (i = 0; i < RTAX_MAX; i++) {
if (metrics[i]) {
+ if (i == RTAX_CC_ALGO - 1) {
+ char tmp[TCP_CA_NAME_MAX], *name;
+
+ name = tcp_ca_get_name_by_key(metrics[i], tmp);
+ if (!name)
+ continue;
+ if (nla_put_string(skb, i + 1, name))
+ goto nla_put_failure;
+ } else {
+ if (nla_put_u32(skb, i + 1, metrics[i]))
+ goto nla_put_failure;
+ }
valid++;
- if (nla_put_u32(skb, i+1, metrics[i]))
- goto nla_put_failure;
}
}
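
A toy model of the RTAX_CC_ALGO special case above: that metric slot carries a numeric key for a registered congestion-control algorithm, so dumping it means translating the key back to a name (tcp_ca_get_name_by_key() in the kernel; the 1-based table here is purely illustrative):

#include <stdio.h>
#include <string.h>

#define TCP_CA_NAME_MAX 16

static const char *tcp_ca_names[] = { "reno", "cubic", "dctcp" };

static const char *ca_get_name_by_key(unsigned int key, char *buf)
{
    if (key == 0 || key > sizeof(tcp_ca_names) / sizeof(tcp_ca_names[0]))
        return NULL;			/* unknown key: attribute is skipped */
    strncpy(buf, tcp_ca_names[key - 1], TCP_CA_NAME_MAX - 1);
    buf[TCP_CA_NAME_MAX - 1] = '\0';
    return buf;
}

int main(void)
{
    char tmp[TCP_CA_NAME_MAX];
    const char *name = ca_get_name_by_key(2, tmp);

    printf("RTAX_CC_ALGO -> %s\n", name ? name : "(none)");
    return 0;
}
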
+ nla_total_size(1) /* IFLA_OPERSTATE */
+ nla_total_size(1) /* IFLA_LINKMODE */
+ nla_total_size(4) /* IFLA_CARRIER_CHANGES */
+ + nla_total_size(4) /* IFLA_LINK_NETNSID */
+ nla_total_size(ext_filter_mask
& RTEXT_FILTER_VF ? 4 : 0) /* IFLA_NUM_VF */
+ rtnl_vfinfo_size(dev, ext_filter_mask) /* IFLA_VFINFO_LIST */
goto nla_put_failure;
}
+ if (dev->rtnl_link_ops &&
+ dev->rtnl_link_ops->get_link_net) {
+ struct net *link_net = dev->rtnl_link_ops->get_link_net(dev);
+
+ if (!net_eq(dev_net(dev), link_net)) {
+ int id = peernet2id(dev_net(dev), link_net);
+
+ if (nla_put_s32(skb, IFLA_LINK_NETNSID, id))
+ goto nla_put_failure;
+ }
+ }
+
if (!(af_spec = nla_nest_start(skb, IFLA_AF_SPEC)))
goto nla_put_failure;
nla_nest_end(skb, af_spec);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
[IFLA_PHYS_PORT_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN },
[IFLA_CARRIER_CHANGES] = { .type = NLA_U32 }, /* ignored */
[IFLA_PHYS_SWITCH_ID] = { .type = NLA_BINARY, .len = MAX_PHYS_ITEM_ID_LEN },
+ [IFLA_LINK_NETNSID] = { .type = NLA_S32 },
};
static const struct nla_policy ifla_info_policy[IFLA_INFO_MAX+1] = {
*/
WARN_ON((err == -EMSGSIZE) && (skb->len == 0));
- if (err <= 0)
+ if (err < 0)
goto out;
nl_dump_check_consistent(cb, nlmsg_hdr(skb));
struct nlattr *slave_attr[m_ops ? m_ops->slave_maxtype + 1 : 0];
struct nlattr **data = NULL;
struct nlattr **slave_data = NULL;
- struct net *dest_net;
+ struct net *dest_net, *link_net = NULL;
if (ops) {
if (ops->maxtype && linkinfo[IFLA_INFO_DATA]) {
if (IS_ERR(dest_net))
return PTR_ERR(dest_net);
- dev = rtnl_create_link(dest_net, ifname, name_assign_type, ops, tb);
+ if (tb[IFLA_LINK_NETNSID]) {
+ int id = nla_get_s32(tb[IFLA_LINK_NETNSID]);
+
+ link_net = get_net_ns_by_id(dest_net, id);
+ if (!link_net) {
+ err = -EINVAL;
+ goto out;
+ }
+ }
+
+ dev = rtnl_create_link(link_net ? : dest_net, ifname,
+ name_assign_type, ops, tb);
if (IS_ERR(dev)) {
err = PTR_ERR(dev);
goto out;
}
}
err = rtnl_configure_link(dev, ifm);
- if (err < 0)
+ if (err < 0) {
unregister_netdevice(dev);
+ goto out;
+ }
+
+ if (link_net) {
+ err = dev_change_net_namespace(dev, dest_net, ifname);
+ if (err < 0)
+ unregister_netdevice(dev);
+ }
out:
+ if (link_net)
+ put_net(link_net);
put_net(dest_net);
return err;
}
if (nla_put(skb, NDA_LLADDR, ETH_ALEN, addr))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
idx);
}
- idx = ndo_dflt_fdb_dump(skb, cb, dev, NULL, idx);
if (dev->netdev_ops->ndo_fdb_dump)
- idx = dev->netdev_ops->ndo_fdb_dump(skb, cb, bdev, dev,
+ idx = dev->netdev_ops->ndo_fdb_dump(skb, cb, dev, NULL,
idx);
+ else
+ idx = ndo_dflt_fdb_dump(skb, cb, dev, NULL, idx);
cops = NULL;
}
nla_nest_end(skb, protinfo);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
return -EMSGSIZE;
+ nla_total_size(sizeof(u16)); /* IFLA_BRIDGE_MODE */
}
-static int rtnl_bridge_notify(struct net_device *dev, u16 flags)
+static int rtnl_bridge_notify(struct net_device *dev)
{
struct net *net = dev_net(dev);
- struct net_device *br_dev = netdev_master_upper_dev_get(dev);
struct sk_buff *skb;
int err = -EOPNOTSUPP;
+ if (!dev->netdev_ops->ndo_bridge_getlink)
+ return 0;
+
skb = nlmsg_new(bridge_nlmsg_size(), GFP_ATOMIC);
if (!skb) {
err = -ENOMEM;
goto errout;
}
- if ((!flags || (flags & BRIDGE_FLAGS_MASTER)) &&
- br_dev && br_dev->netdev_ops->ndo_bridge_getlink) {
- err = br_dev->netdev_ops->ndo_bridge_getlink(skb, 0, 0, dev, 0);
- if (err < 0)
- goto errout;
- }
-
- if ((flags & BRIDGE_FLAGS_SELF) &&
- dev->netdev_ops->ndo_bridge_getlink) {
- err = dev->netdev_ops->ndo_bridge_getlink(skb, 0, 0, dev, 0);
- if (err < 0)
- goto errout;
- }
+ err = dev->netdev_ops->ndo_bridge_getlink(skb, 0, 0, dev, 0);
+ if (err < 0)
+ goto errout;
rtnl_notify(skb, net, 0, RTNLGRP_LINK, NULL, GFP_ATOMIC);
return 0;
struct net_device *dev;
struct nlattr *br_spec, *attr = NULL;
int rem, err = -EOPNOTSUPP;
- u16 oflags, flags = 0;
+ u16 flags = 0;
bool have_flags = false;
if (nlmsg_len(nlh) < sizeof(*ifm))
}
}
- oflags = flags;
-
if (!flags || (flags & BRIDGE_FLAGS_MASTER)) {
struct net_device *br_dev = netdev_master_upper_dev_get(dev);
err = -EOPNOTSUPP;
else
err = dev->netdev_ops->ndo_bridge_setlink(dev, nlh);
-
- if (!err)
+ if (!err) {
flags &= ~BRIDGE_FLAGS_SELF;
+
+ /* Generate event to notify upper layer of bridge
+ * change
+ */
+ err = rtnl_bridge_notify(dev);
+ }
}
if (have_flags)
memcpy(nla_data(attr), &flags, sizeof(flags));
- /* Generate event to notify upper layer of bridge change */
- if (!err)
- err = rtnl_bridge_notify(dev, oflags);
out:
return err;
}
struct net_device *dev;
struct nlattr *br_spec, *attr = NULL;
int rem, err = -EOPNOTSUPP;
- u16 oflags, flags = 0;
+ u16 flags = 0;
bool have_flags = false;
if (nlmsg_len(nlh) < sizeof(*ifm))
}
}
- oflags = flags;
-
if (!flags || (flags & BRIDGE_FLAGS_MASTER)) {
struct net_device *br_dev = netdev_master_upper_dev_get(dev);
else
err = dev->netdev_ops->ndo_bridge_dellink(dev, nlh);
- if (!err)
+ if (!err) {
flags &= ~BRIDGE_FLAGS_SELF;
+
+ /* Generate event to notify upper layer of bridge
+ * change
+ */
+ err = rtnl_bridge_notify(dev);
+ }
}
if (have_flags)
memcpy(nla_data(attr), &flags, sizeof(flags));
- /* Generate event to notify upper layer of bridge change */
- if (!err)
- err = rtnl_bridge_notify(dev, oflags);
out:
return err;
}
struct vlan_hdr *vhdr;
u16 vlan_tci;
- if (unlikely(vlan_tx_tag_present(skb))) {
+ if (unlikely(skb_vlan_tag_present(skb))) {
/* vlan_tci is already set-up so leave this for another time */
return skb;
}
__be16 vlan_proto;
int err;
- if (likely(vlan_tx_tag_present(skb))) {
+ if (likely(skb_vlan_tag_present(skb))) {
skb->vlan_tci = 0;
} else {
if (unlikely((skb->protocol != htons(ETH_P_8021Q) &&
int skb_vlan_push(struct sk_buff *skb, __be16 vlan_proto, u16 vlan_tci)
{
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
unsigned int offset = skb->data - skb_mac_header(skb);
int err;
*/
__skb_push(skb, offset);
err = __vlan_insert_tag(skb, skb->vlan_proto,
- vlan_tx_tag_get(skb));
+ skb_vlan_tag_get(skb));
if (err)
return err;
skb->protocol = skb->vlan_proto;
nla_put_string(skb, IFA_LABEL, ifa->ifa_label)) ||
nla_put_u32(skb, IFA_FLAGS, ifa_flags))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
int type = nla_type(attr);
if (type) {
- if (type > RTAX_MAX || nla_len(attr) < 4)
+ if (type > RTAX_MAX || type == RTAX_CC_ALGO ||
+ nla_len(attr) < 4)
goto err_inval;
fi->fib_metrics[type-1] = nla_get_u32(attr);
nla_put_u32(skb, RTA_IIF, rt->fld.flowidn_iif) < 0)
goto errout;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
errout:
nlmsg_cancel(skb, nlh);
rt->rt_flags |= RTCF_NOTIFY;
err = dn_rt_fill_info(skb, NETLINK_CB(in_skb).portid, nlh->nlmsg_seq, RTM_NEWROUTE, 0, 0);
-
- if (err == 0)
- goto out_free;
if (err < 0) {
err = -EMSGSIZE;
goto out_free;
skb_dst_set(skb, dst_clone(&rt->dst));
if (dn_rt_fill_info(skb, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, RTM_NEWROUTE,
- 1, NLM_F_MULTI) <= 0) {
+ 1, NLM_F_MULTI) < 0) {
skb_dst_drop(skb);
rcu_read_unlock_bh();
goto done;
#include <linux/route.h> /* RTF_xxx */
#include <net/neighbour.h>
#include <net/netlink.h>
+#include <net/tcp.h>
#include <net/dst.h>
#include <net/flow.h>
#include <net/fib_rules.h>
size_t payload = NLMSG_ALIGN(sizeof(struct rtmsg))
+ nla_total_size(4) /* RTA_TABLE */
+ nla_total_size(2) /* RTA_DST */
- + nla_total_size(4); /* RTA_PRIORITY */
+ + nla_total_size(4) /* RTA_PRIORITY */
+ + nla_total_size(TCP_CA_NAME_MAX); /* RTAX_CC_ALGO */
/* space for nested metrics */
payload += nla_total_size((RTAX_MAX * nla_total_size(4)));
nla_nest_end(skb, mp_head);
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
errout:
nlmsg_cancel(skb, nlh);
--- /dev/null
+#ifndef __IEEE802154_6LOWPAN_I_H__
+#define __IEEE802154_6LOWPAN_I_H__
+
+#include <linux/list.h>
+
+#include <net/ieee802154_netdev.h>
+#include <net/inet_frag.h>
+
+struct lowpan_create_arg {
+ u16 tag;
+ u16 d_size;
+ const struct ieee802154_addr *src;
+ const struct ieee802154_addr *dst;
+};
+
+/* Equivalent of ipv4's struct ipq: one reassembly queue, keyed on the
+ * 6LoWPAN datagram tag/size and the link-layer addresses.
+ */
+struct lowpan_frag_queue {
+ struct inet_frag_queue q;
+
+ u16 tag;
+ u16 d_size;
+ struct ieee802154_addr saddr;
+ struct ieee802154_addr daddr;
+};
+
+static inline u32 ieee802154_addr_hash(const struct ieee802154_addr *a)
+{
+ switch (a->mode) {
+ case IEEE802154_ADDR_LONG:
+ return (((__force u64)a->extended_addr) >> 32) ^
+ (((__force u64)a->extended_addr) & 0xffffffff);
+ case IEEE802154_ADDR_SHORT:
+ return (__force u32)(a->short_addr);
+ default:
+ return 0;
+ }
+}
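
A standalone demo of the folding above: a 64-bit extended (EUI-64) address is reduced to 32 bits by XORing its halves, while short addresses are used as-is, before the result feeds the fragment-queue jhash:

#include <stdint.h>
#include <stdio.h>

static uint32_t addr_hash64(uint64_t extended_addr)
{
    return (uint32_t)(extended_addr >> 32) ^
           (uint32_t)(extended_addr & 0xffffffff);
}

int main(void)
{
    uint64_t addr = 0x00124b000102aabbULL;	/* example EUI-64 */

    printf("hash = 0x%08x\n", addr_hash64(addr));
    return 0;
}
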
+
+struct lowpan_dev_record {
+ struct net_device *ldev;
+ struct list_head list;
+};
+
+/* private device info */
+struct lowpan_dev_info {
+ struct net_device *real_dev; /* real WPAN device ptr */
+ struct mutex dev_list_mtx; /* mutex for list ops */
+ u16 fragment_tag;
+};
+
+static inline struct
+lowpan_dev_info *lowpan_dev_info(const struct net_device *dev)
+{
+ return netdev_priv(dev);
+}
+
+extern struct list_head lowpan_devices;
+
+int lowpan_frag_rcv(struct sk_buff *skb, const u8 frag_type);
+void lowpan_net_frag_exit(void);
+int lowpan_net_frag_init(void);
+
+void lowpan_rx_init(void);
+void lowpan_rx_exit(void);
+
+int lowpan_header_create(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *_daddr,
+ const void *_saddr, unsigned int len);
+netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *dev);
+
+#endif /* __IEEE802154_6LOWPAN_I_H__ */
--- /dev/null
+config IEEE802154_6LOWPAN
+ tristate "6lowpan support over IEEE 802.15.4"
+ depends on 6LOWPAN
+ ---help---
+ IPv6 compression over IEEE 802.15.4.
--- /dev/null
+obj-$(CONFIG_IEEE802154_6LOWPAN) += ieee802154_6lowpan.o
+
+ieee802154_6lowpan-y := core.o rx.o reassembly.o tx.o
--- /dev/null
+/* Copyright 2011, Siemens AG
+ * written by Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
+ */
+
+/* Based on patches from Jon Smirl <jonsmirl@gmail.com>
+ * Copyright (c) 2011 Jon Smirl <jonsmirl@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/* Jon's code is based on 6lowpan implementation for Contiki which is:
+ * Copyright (c) 2008, Swedish Institute of Computer Science.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the Institute nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE INSTITUTE AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/ieee802154.h>
+
+#include <net/ipv6.h>
+
+#include "6lowpan_i.h"
+
+LIST_HEAD(lowpan_devices);
+static int lowpan_open_count;
+
+static __le16 lowpan_get_pan_id(const struct net_device *dev)
+{
+ struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
+
+ return ieee802154_mlme_ops(real_dev)->get_pan_id(real_dev);
+}
+
+static __le16 lowpan_get_short_addr(const struct net_device *dev)
+{
+ struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
+
+ return ieee802154_mlme_ops(real_dev)->get_short_addr(real_dev);
+}
+
+static u8 lowpan_get_dsn(const struct net_device *dev)
+{
+ struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
+
+ return ieee802154_mlme_ops(real_dev)->get_dsn(real_dev);
+}
+
+static struct header_ops lowpan_header_ops = {
+ .create = lowpan_header_create,
+};
+
+static struct lock_class_key lowpan_tx_busylock;
+static struct lock_class_key lowpan_netdev_xmit_lock_key;
+
+static void lowpan_set_lockdep_class_one(struct net_device *dev,
+ struct netdev_queue *txq,
+ void *_unused)
+{
+ lockdep_set_class(&txq->_xmit_lock,
+ &lowpan_netdev_xmit_lock_key);
+}
+
+static int lowpan_dev_init(struct net_device *dev)
+{
+ netdev_for_each_tx_queue(dev, lowpan_set_lockdep_class_one, NULL);
+ dev->qdisc_tx_busylock = &lowpan_tx_busylock;
+ return 0;
+}
+
+static const struct net_device_ops lowpan_netdev_ops = {
+ .ndo_init = lowpan_dev_init,
+ .ndo_start_xmit = lowpan_xmit,
+};
+
+static struct ieee802154_mlme_ops lowpan_mlme = {
+ .get_pan_id = lowpan_get_pan_id,
+ .get_short_addr = lowpan_get_short_addr,
+ .get_dsn = lowpan_get_dsn,
+};
+
+static void lowpan_setup(struct net_device *dev)
+{
+ dev->addr_len = IEEE802154_ADDR_LEN;
+ memset(dev->broadcast, 0xff, IEEE802154_ADDR_LEN);
+ dev->type = ARPHRD_IEEE802154;
+ /* Frame Control + Sequence Number + Address fields + Security Header */
+ dev->hard_header_len = 2 + 1 + 20 + 14;
+ dev->needed_tailroom = 2; /* FCS */
+ dev->mtu = IPV6_MIN_MTU;
+ dev->tx_queue_len = 0;
+ dev->flags = IFF_BROADCAST | IFF_MULTICAST;
+ dev->watchdog_timeo = 0;
+
+ dev->netdev_ops = &lowpan_netdev_ops;
+ dev->header_ops = &lowpan_header_ops;
+ dev->ml_priv = &lowpan_mlme;
+ dev->destructor = free_netdev;
+}
+
+static int lowpan_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+ if (tb[IFLA_ADDRESS]) {
+ if (nla_len(tb[IFLA_ADDRESS]) != IEEE802154_ADDR_LEN)
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int lowpan_newlink(struct net *src_net, struct net_device *dev,
+ struct nlattr *tb[], struct nlattr *data[])
+{
+ struct net_device *real_dev;
+ struct lowpan_dev_record *entry;
+ int ret;
+
+ ASSERT_RTNL();
+
+ pr_debug("adding new link\n");
+
+ if (!tb[IFLA_LINK])
+ return -EINVAL;
+ /* find and hold real wpan device */
+ real_dev = dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
+ if (!real_dev)
+ return -ENODEV;
+ if (real_dev->type != ARPHRD_IEEE802154) {
+ dev_put(real_dev);
+ return -EINVAL;
+ }
+
+ lowpan_dev_info(dev)->real_dev = real_dev;
+ mutex_init(&lowpan_dev_info(dev)->dev_list_mtx);
+
+ entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ dev_put(real_dev);
+ lowpan_dev_info(dev)->real_dev = NULL;
+ return -ENOMEM;
+ }
+
+ entry->ldev = dev;
+
+ /* Set the lowpan hardware address to the wpan hardware address. */
+ memcpy(dev->dev_addr, real_dev->dev_addr, IEEE802154_ADDR_LEN);
+
+ mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
+ INIT_LIST_HEAD(&entry->list);
+ list_add_tail(&entry->list, &lowpan_devices);
+ mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx);
+
+ ret = register_netdevice(dev);
+ if (ret >= 0) {
+ if (!lowpan_open_count)
+ lowpan_rx_init();
+ lowpan_open_count++;
+ }
+
+ return ret;
+}
+
+static void lowpan_dellink(struct net_device *dev, struct list_head *head)
+{
+ struct lowpan_dev_info *lowpan_dev = lowpan_dev_info(dev);
+ struct net_device *real_dev = lowpan_dev->real_dev;
+ struct lowpan_dev_record *entry, *tmp;
+
+ ASSERT_RTNL();
+
+ lowpan_open_count--;
+ if (!lowpan_open_count)
+ lowpan_rx_exit();
+
+ mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
+ list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) {
+ if (entry->ldev == dev) {
+ list_del(&entry->list);
+ kfree(entry);
+ }
+ }
+ mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx);
+
+ mutex_destroy(&lowpan_dev_info(dev)->dev_list_mtx);
+
+ unregister_netdevice_queue(dev, head);
+
+ dev_put(real_dev);
+}
+
+static struct rtnl_link_ops lowpan_link_ops __read_mostly = {
+ .kind = "lowpan",
+ .priv_size = sizeof(struct lowpan_dev_info),
+ .setup = lowpan_setup,
+ .newlink = lowpan_newlink,
+ .dellink = lowpan_dellink,
+ .validate = lowpan_validate,
+};
+
+static inline int __init lowpan_netlink_init(void)
+{
+ return rtnl_link_register(&lowpan_link_ops);
+}
+
+static inline void lowpan_netlink_fini(void)
+{
+ rtnl_link_unregister(&lowpan_link_ops);
+}
+
+static int lowpan_device_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+{
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ LIST_HEAD(del_list);
+ struct lowpan_dev_record *entry, *tmp;
+
+ if (dev->type != ARPHRD_IEEE802154)
+ goto out;
+
+ if (event == NETDEV_UNREGISTER) {
+ list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) {
+ if (lowpan_dev_info(entry->ldev)->real_dev == dev)
+ lowpan_dellink(entry->ldev, &del_list);
+ }
+
+ unregister_netdevice_many(&del_list);
+ }
+
+out:
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block lowpan_dev_notifier = {
+ .notifier_call = lowpan_device_event,
+};
+
+static int __init lowpan_init_module(void)
+{
+ int err = 0;
+
+ err = lowpan_net_frag_init();
+ if (err < 0)
+ goto out;
+
+ err = lowpan_netlink_init();
+ if (err < 0)
+ goto out_frag;
+
+ err = register_netdevice_notifier(&lowpan_dev_notifier);
+ if (err < 0)
+ goto out_pack;
+
+ return 0;
+
+out_pack:
+ lowpan_netlink_fini();
+out_frag:
+ lowpan_net_frag_exit();
+out:
+ return err;
+}
+
+static void __exit lowpan_cleanup_module(void)
+{
+ lowpan_netlink_fini();
+
+ lowpan_net_frag_exit();
+
+ unregister_netdevice_notifier(&lowpan_dev_notifier);
+}
+
+module_init(lowpan_init_module);
+module_exit(lowpan_cleanup_module);
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_RTNL_LINK("lowpan");
--- /dev/null
+/* 6LoWPAN fragment reassembly
+ *
+ *
+ * Authors:
+ * Alexander Aring <aar@pengutronix.de>
+ *
+ * Based on: net/ipv6/reassembly.c
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#define pr_fmt(fmt) "6LoWPAN: " fmt
+
+#include <linux/net.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/random.h>
+#include <linux/jhash.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/export.h>
+
+#include <net/ieee802154_netdev.h>
+#include <net/6lowpan.h>
+#include <net/ipv6.h>
+#include <net/inet_frag.h>
+
+#include "6lowpan_i.h"
+
+static const char lowpan_frags_cache_name[] = "lowpan-frags";
+
+struct lowpan_frag_info {
+ u16 d_tag;
+ u16 d_size;
+ u8 d_offset;
+};
+
+static struct lowpan_frag_info *lowpan_cb(struct sk_buff *skb)
+{
+ return (struct lowpan_frag_info *)skb->cb;
+}
+
+static struct inet_frags lowpan_frags;
+
+static int lowpan_frag_reasm(struct lowpan_frag_queue *fq,
+ struct sk_buff *prev, struct net_device *dev);
+
+static unsigned int lowpan_hash_frag(u16 tag, u16 d_size,
+ const struct ieee802154_addr *saddr,
+ const struct ieee802154_addr *daddr)
+{
+ net_get_random_once(&lowpan_frags.rnd, sizeof(lowpan_frags.rnd));
+ return jhash_3words(ieee802154_addr_hash(saddr),
+ ieee802154_addr_hash(daddr),
+ (__force u32)(tag + (d_size << 16)),
+ lowpan_frags.rnd);
+}
+
+static unsigned int lowpan_hashfn(const struct inet_frag_queue *q)
+{
+ const struct lowpan_frag_queue *fq;
+
+ fq = container_of(q, struct lowpan_frag_queue, q);
+ return lowpan_hash_frag(fq->tag, fq->d_size, &fq->saddr, &fq->daddr);
+}
+
+static bool lowpan_frag_match(const struct inet_frag_queue *q, const void *a)
+{
+ const struct lowpan_frag_queue *fq;
+ const struct lowpan_create_arg *arg = a;
+
+ fq = container_of(q, struct lowpan_frag_queue, q);
+ return fq->tag == arg->tag && fq->d_size == arg->d_size &&
+ ieee802154_addr_equal(&fq->saddr, arg->src) &&
+ ieee802154_addr_equal(&fq->daddr, arg->dst);
+}
+
+static void lowpan_frag_init(struct inet_frag_queue *q, const void *a)
+{
+ const struct lowpan_create_arg *arg = a;
+ struct lowpan_frag_queue *fq;
+
+ fq = container_of(q, struct lowpan_frag_queue, q);
+
+ fq->tag = arg->tag;
+ fq->d_size = arg->d_size;
+ fq->saddr = *arg->src;
+ fq->daddr = *arg->dst;
+}
+
+static void lowpan_frag_expire(unsigned long data)
+{
+ struct frag_queue *fq;
+ struct net *net;
+
+ fq = container_of((struct inet_frag_queue *)data, struct frag_queue, q);
+ net = container_of(fq->q.net, struct net, ieee802154_lowpan.frags);
+
+ spin_lock(&fq->q.lock);
+
+ if (fq->q.flags & INET_FRAG_COMPLETE)
+ goto out;
+
+ inet_frag_kill(&fq->q, &lowpan_frags);
+out:
+ spin_unlock(&fq->q.lock);
+ inet_frag_put(&fq->q, &lowpan_frags);
+}
+
+static inline struct lowpan_frag_queue *
+fq_find(struct net *net, const struct lowpan_frag_info *frag_info,
+ const struct ieee802154_addr *src,
+ const struct ieee802154_addr *dst)
+{
+ struct inet_frag_queue *q;
+ struct lowpan_create_arg arg;
+ unsigned int hash;
+ struct netns_ieee802154_lowpan *ieee802154_lowpan =
+ net_ieee802154_lowpan(net);
+
+ arg.tag = frag_info->d_tag;
+ arg.d_size = frag_info->d_size;
+ arg.src = src;
+ arg.dst = dst;
+
+ hash = lowpan_hash_frag(frag_info->d_tag, frag_info->d_size, src, dst);
+
+ q = inet_frag_find(&ieee802154_lowpan->frags,
+ &lowpan_frags, &arg, hash);
+ if (IS_ERR_OR_NULL(q)) {
+ inet_frag_maybe_warn_overflow(q, pr_fmt());
+ return NULL;
+ }
+ return container_of(q, struct lowpan_frag_queue, q);
+}
+
+static int lowpan_frag_queue(struct lowpan_frag_queue *fq,
+ struct sk_buff *skb, const u8 frag_type)
+{
+ struct sk_buff *prev, *next;
+ struct net_device *dev;
+ int end, offset;
+
+ if (fq->q.flags & INET_FRAG_COMPLETE)
+ goto err;
+
+ offset = lowpan_cb(skb)->d_offset << 3;
+ end = lowpan_cb(skb)->d_size;
+
+ /* Is this the final fragment? */
+ if (offset + skb->len == end) {
+ /* If we already have some bits beyond end
+ * or have different end, the segment is corrupted.
+ */
+ if (end < fq->q.len ||
+ ((fq->q.flags & INET_FRAG_LAST_IN) && end != fq->q.len))
+ goto err;
+ fq->q.flags |= INET_FRAG_LAST_IN;
+ fq->q.len = end;
+ } else {
+ if (end > fq->q.len) {
+ /* Some bits beyond end -> corruption. */
+ if (fq->q.flags & INET_FRAG_LAST_IN)
+ goto err;
+ fq->q.len = end;
+ }
+ }
+
+ /* Find out which fragments are in front and at the back of us
+ * in the chain of fragments so far. We must know where to put
+ * this fragment, right?
+ */
+ prev = fq->q.fragments_tail;
+ if (!prev || lowpan_cb(prev)->d_offset < lowpan_cb(skb)->d_offset) {
+ next = NULL;
+ goto found;
+ }
+ prev = NULL;
+ for (next = fq->q.fragments; next != NULL; next = next->next) {
+ if (lowpan_cb(next)->d_offset >= lowpan_cb(skb)->d_offset)
+ break; /* bingo! */
+ prev = next;
+ }
+
+found:
+ /* Insert this fragment in the chain of fragments. */
+ skb->next = next;
+ if (!next)
+ fq->q.fragments_tail = skb;
+ if (prev)
+ prev->next = skb;
+ else
+ fq->q.fragments = skb;
+
+ dev = skb->dev;
+ if (dev)
+ skb->dev = NULL;
+
+ fq->q.stamp = skb->tstamp;
+ if (frag_type == LOWPAN_DISPATCH_FRAG1) {
+ /* Calculate uncomp. 6lowpan header to estimate full size */
+ fq->q.meat += lowpan_uncompress_size(skb, NULL);
+ fq->q.flags |= INET_FRAG_FIRST_IN;
+ } else {
+ fq->q.meat += skb->len;
+ }
+ add_frag_mem_limit(&fq->q, skb->truesize);
+
+ if (fq->q.flags == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
+ fq->q.meat == fq->q.len) {
+ int res;
+ unsigned long orefdst = skb->_skb_refdst;
+
+ skb->_skb_refdst = 0UL;
+ res = lowpan_frag_reasm(fq, prev, dev);
+ skb->_skb_refdst = orefdst;
+ return res;
+ }
+
+ return -1;
+err:
+ kfree_skb(skb);
+ return -1;
+}
+
+/* Check if this packet is complete and, if so, reassemble it.
+ * Returns 1 on successful reassembly and -1 on failure.
+ *
+ * It is called with locked fq, and caller must check that
+ * queue is eligible for reassembly i.e. it is not COMPLETE,
+ * the last and the first frames arrived and all the bits are here.
+ */
+static int lowpan_frag_reasm(struct lowpan_frag_queue *fq, struct sk_buff *prev,
+ struct net_device *dev)
+{
+ struct sk_buff *fp, *head = fq->q.fragments;
+ int sum_truesize;
+
+ inet_frag_kill(&fq->q, &lowpan_frags);
+
+ /* Make the one we just received the head. */
+ if (prev) {
+ head = prev->next;
+ fp = skb_clone(head, GFP_ATOMIC);
+
+ if (!fp)
+ goto out_oom;
+
+ fp->next = head->next;
+ if (!fp->next)
+ fq->q.fragments_tail = fp;
+ prev->next = fp;
+
+ skb_morph(head, fq->q.fragments);
+ head->next = fq->q.fragments->next;
+
+ consume_skb(fq->q.fragments);
+ fq->q.fragments = head;
+ }
+
+ /* Head of list must not be cloned. */
+ if (skb_unclone(head, GFP_ATOMIC))
+ goto out_oom;
+
+ /* If the first fragment is fragmented itself, we split
+ * it to two chunks: the first with data and paged part
+ * and the second, holding only fragments.
+ */
+ if (skb_has_frag_list(head)) {
+ struct sk_buff *clone;
+ int i, plen = 0;
+
+ clone = alloc_skb(0, GFP_ATOMIC);
+ if (!clone)
+ goto out_oom;
+ clone->next = head->next;
+ head->next = clone;
+ skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
+ skb_frag_list_init(head);
+ for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
+ plen += skb_frag_size(&skb_shinfo(head)->frags[i]);
+ clone->len = head->data_len - plen;
+ clone->data_len = clone->len;
+ head->data_len -= clone->len;
+ head->len -= clone->len;
+ add_frag_mem_limit(&fq->q, clone->truesize);
+ }
+
+ WARN_ON(head == NULL);
+
+ sum_truesize = head->truesize;
+ for (fp = head->next; fp;) {
+ bool headstolen;
+ int delta;
+ struct sk_buff *next = fp->next;
+
+ sum_truesize += fp->truesize;
+ if (skb_try_coalesce(head, fp, &headstolen, &delta)) {
+ kfree_skb_partial(fp, headstolen);
+ } else {
+ if (!skb_shinfo(head)->frag_list)
+ skb_shinfo(head)->frag_list = fp;
+ head->data_len += fp->len;
+ head->len += fp->len;
+ head->truesize += fp->truesize;
+ }
+ fp = next;
+ }
+ sub_frag_mem_limit(&fq->q, sum_truesize);
+
+ head->next = NULL;
+ head->dev = dev;
+ head->tstamp = fq->q.stamp;
+
+ fq->q.fragments = NULL;
+ fq->q.fragments_tail = NULL;
+
+ return 1;
+out_oom:
+ net_dbg_ratelimited("lowpan_frag_reasm: no memory for reassembly\n");
+ return -1;
+}
+
+static int lowpan_get_frag_info(struct sk_buff *skb, const u8 frag_type,
+ struct lowpan_frag_info *frag_info)
+{
+ bool fail;
+ u8 pattern = 0, low = 0;
+ __be16 d_tag = 0;
+
+ fail = lowpan_fetch_skb(skb, &pattern, 1);
+ fail |= lowpan_fetch_skb(skb, &low, 1);
+ frag_info->d_size = (pattern & 7) << 8 | low;
+ fail |= lowpan_fetch_skb(skb, &d_tag, 2);
+ frag_info->d_tag = ntohs(d_tag);
+
+ if (frag_type == LOWPAN_DISPATCH_FRAGN) {
+ fail |= lowpan_fetch_skb(skb, &frag_info->d_offset, 1);
+ } else {
+ skb_reset_network_header(skb);
+ frag_info->d_offset = 0;
+ }
+
+ if (unlikely(fail))
+ return -EIO;
+
+ return 0;
+}
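
lowpan_get_frag_info() decodes the RFC 4944 fragment headers: FRAG1 is 4 bytes (a 5-bit dispatch plus an 11-bit datagram size, then a 16-bit tag) and FRAGN appends a 1-byte offset counted in 8-octet units. A standalone parser for the same layout:

#include <stdint.h>
#include <stdio.h>

#define DISPATCH_FRAG1 0xc0
#define DISPATCH_FRAGN 0xe0

static void parse_frag(const uint8_t *h)
{
    unsigned int d_size = (h[0] & 0x07) << 8 | h[1];
    unsigned int d_tag  = h[2] << 8 | h[3];	/* big endian on the wire */
    unsigned int type   = h[0] & 0xe0;

    printf("type=%s size=%u tag=%u",
           type == DISPATCH_FRAG1 ? "FRAG1" : "FRAGN", d_size, d_tag);
    if (type == DISPATCH_FRAGN)
        printf(" offset=%u bytes", h[4] * 8);
    printf("\n");
}

int main(void)
{
    const uint8_t frag1[] = { 0xc4, 0x00, 0x12, 0x34 };		/* 1024-byte dgram */
    const uint8_t fragn[] = { 0xe4, 0x00, 0x12, 0x34, 0x10 };	/* offset 128 */

    parse_frag(frag1);
    parse_frag(fragn);
    return 0;
}
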
+
+int lowpan_frag_rcv(struct sk_buff *skb, const u8 frag_type)
+{
+ struct lowpan_frag_queue *fq;
+ struct net *net = dev_net(skb->dev);
+ struct lowpan_frag_info *frag_info = lowpan_cb(skb);
+ struct ieee802154_addr source, dest;
+ int err;
+
+ source = mac_cb(skb)->source;
+ dest = mac_cb(skb)->dest;
+
+ err = lowpan_get_frag_info(skb, frag_type, frag_info);
+ if (err < 0)
+ goto err;
+
+ if (frag_info->d_size > IPV6_MIN_MTU) {
+ net_warn_ratelimited("lowpan_frag_rcv: datagram size exceeds MTU\n");
+ goto err;
+ }
+
+ fq = fq_find(net, frag_info, &source, &dest);
+ if (fq != NULL) {
+ int ret;
+
+ spin_lock(&fq->q.lock);
+ ret = lowpan_frag_queue(fq, skb, frag_type);
+ spin_unlock(&fq->q.lock);
+
+ inet_frag_put(&fq->q, &lowpan_frags);
+ return ret;
+ }
+
+err:
+ kfree_skb(skb);
+ return -1;
+}
+EXPORT_SYMBOL(lowpan_frag_rcv);
+
+#ifdef CONFIG_SYSCTL
+static int zero;
+
+static struct ctl_table lowpan_frags_ns_ctl_table[] = {
+ {
+ .procname = "6lowpanfrag_high_thresh",
+ .data = &init_net.ieee802154_lowpan.frags.high_thresh,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &init_net.ieee802154_lowpan.frags.low_thresh
+ },
+ {
+ .procname = "6lowpanfrag_low_thresh",
+ .data = &init_net.ieee802154_lowpan.frags.low_thresh,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &init_net.ieee802154_lowpan.frags.high_thresh
+ },
+ {
+ .procname = "6lowpanfrag_time",
+ .data = &init_net.ieee802154_lowpan.frags.timeout,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_jiffies,
+ },
+ { }
+};
+
+/* secret interval has been deprecated */
+static int lowpan_frags_secret_interval_unused;
+static struct ctl_table lowpan_frags_ctl_table[] = {
+ {
+ .procname = "6lowpanfrag_secret_interval",
+ .data = &lowpan_frags_secret_interval_unused,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_jiffies,
+ },
+ { }
+};
+
+static int __net_init lowpan_frags_ns_sysctl_register(struct net *net)
+{
+ struct ctl_table *table;
+ struct ctl_table_header *hdr;
+ struct netns_ieee802154_lowpan *ieee802154_lowpan =
+ net_ieee802154_lowpan(net);
+
+ table = lowpan_frags_ns_ctl_table;
+ if (!net_eq(net, &init_net)) {
+ table = kmemdup(table, sizeof(lowpan_frags_ns_ctl_table),
+ GFP_KERNEL);
+ if (table == NULL)
+ goto err_alloc;
+
+ table[0].data = &ieee802154_lowpan->frags.high_thresh;
+ table[0].extra1 = &ieee802154_lowpan->frags.low_thresh;
+ table[0].extra2 = &init_net.ieee802154_lowpan.frags.high_thresh;
+ table[1].data = &ieee802154_lowpan->frags.low_thresh;
+ table[1].extra2 = &ieee802154_lowpan->frags.high_thresh;
+ table[2].data = &ieee802154_lowpan->frags.timeout;
+
+ /* Don't export sysctls to unprivileged users */
+ if (net->user_ns != &init_user_ns)
+ table[0].procname = NULL;
+ }
+
+ hdr = register_net_sysctl(net, "net/ieee802154/6lowpan", table);
+ if (hdr == NULL)
+ goto err_reg;
+
+ ieee802154_lowpan->sysctl.frags_hdr = hdr;
+ return 0;
+
+err_reg:
+ if (!net_eq(net, &init_net))
+ kfree(table);
+err_alloc:
+ return -ENOMEM;
+}
+
+static void __net_exit lowpan_frags_ns_sysctl_unregister(struct net *net)
+{
+ struct ctl_table *table;
+ struct netns_ieee802154_lowpan *ieee802154_lowpan =
+ net_ieee802154_lowpan(net);
+
+ table = ieee802154_lowpan->sysctl.frags_hdr->ctl_table_arg;
+ unregister_net_sysctl_table(ieee802154_lowpan->sysctl.frags_hdr);
+ if (!net_eq(net, &init_net))
+ kfree(table);
+}
+
+static struct ctl_table_header *lowpan_ctl_header;
+
+static int __init lowpan_frags_sysctl_register(void)
+{
+ lowpan_ctl_header = register_net_sysctl(&init_net,
+ "net/ieee802154/6lowpan",
+ lowpan_frags_ctl_table);
+ return lowpan_ctl_header == NULL ? -ENOMEM : 0;
+}
+
+static void lowpan_frags_sysctl_unregister(void)
+{
+ unregister_net_sysctl_table(lowpan_ctl_header);
+}
+#else
+static inline int lowpan_frags_ns_sysctl_register(struct net *net)
+{
+ return 0;
+}
+
+static inline void lowpan_frags_ns_sysctl_unregister(struct net *net)
+{
+}
+
+static inline int __init lowpan_frags_sysctl_register(void)
+{
+ return 0;
+}
+
+static inline void lowpan_frags_sysctl_unregister(void)
+{
+}
+#endif
+
+static int __net_init lowpan_frags_init_net(struct net *net)
+{
+ struct netns_ieee802154_lowpan *ieee802154_lowpan =
+ net_ieee802154_lowpan(net);
+
+ ieee802154_lowpan->frags.high_thresh = IPV6_FRAG_HIGH_THRESH;
+ ieee802154_lowpan->frags.low_thresh = IPV6_FRAG_LOW_THRESH;
+ ieee802154_lowpan->frags.timeout = IPV6_FRAG_TIMEOUT;
+
+ inet_frags_init_net(&ieee802154_lowpan->frags);
+
+ return lowpan_frags_ns_sysctl_register(net);
+}
+
+static void __net_exit lowpan_frags_exit_net(struct net *net)
+{
+ struct netns_ieee802154_lowpan *ieee802154_lowpan =
+ net_ieee802154_lowpan(net);
+
+ lowpan_frags_ns_sysctl_unregister(net);
+ inet_frags_exit_net(&ieee802154_lowpan->frags, &lowpan_frags);
+}
+
+static struct pernet_operations lowpan_frags_ops = {
+ .init = lowpan_frags_init_net,
+ .exit = lowpan_frags_exit_net,
+};
+
+int __init lowpan_net_frag_init(void)
+{
+ int ret;
+
+ ret = lowpan_frags_sysctl_register();
+ if (ret)
+ return ret;
+
+ ret = register_pernet_subsys(&lowpan_frags_ops);
+ if (ret)
+ goto err_pernet;
+
+ lowpan_frags.hashfn = lowpan_hashfn;
+ lowpan_frags.constructor = lowpan_frag_init;
+ lowpan_frags.destructor = NULL;
+ lowpan_frags.skb_free = NULL;
+ lowpan_frags.qsize = sizeof(struct frag_queue);
+ lowpan_frags.match = lowpan_frag_match;
+ lowpan_frags.frag_expire = lowpan_frag_expire;
+ lowpan_frags.frags_cache_name = lowpan_frags_cache_name;
+ ret = inet_frags_init(&lowpan_frags);
+ if (ret)
+ goto err_pernet;
+
+ return ret;
+err_pernet:
+ lowpan_frags_sysctl_unregister();
+ return ret;
+}
+
+void lowpan_net_frag_exit(void)
+{
+ inet_frags_fini(&lowpan_frags);
+ lowpan_frags_sysctl_unregister();
+ unregister_pernet_subsys(&lowpan_frags_ops);
+}
--- /dev/null
+/* This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/if_arp.h>
+
+#include <net/6lowpan.h>
+#include <net/ieee802154_netdev.h>
+
+#include "6lowpan_i.h"
+
+static int lowpan_give_skb_to_devices(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct lowpan_dev_record *entry;
+ struct sk_buff *skb_cp;
+ int stat = NET_RX_SUCCESS;
+
+ skb->protocol = htons(ETH_P_IPV6);
+ skb->pkt_type = PACKET_HOST;
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(entry, &lowpan_devices, list)
+ if (lowpan_dev_info(entry->ldev)->real_dev == skb->dev) {
+ skb_cp = skb_copy(skb, GFP_ATOMIC);
+ if (!skb_cp) {
+ kfree_skb(skb);
+ rcu_read_unlock();
+ return NET_RX_DROP;
+ }
+
+ skb_cp->dev = entry->ldev;
+ stat = netif_rx(skb_cp);
+ if (stat == NET_RX_DROP)
+ break;
+ }
+ rcu_read_unlock();
+
+ consume_skb(skb);
+
+ return stat;
+}
+
+static int
+iphc_decompress(struct sk_buff *skb, const struct ieee802154_hdr *hdr)
+{
+ u8 iphc0, iphc1;
+ struct ieee802154_addr_sa sa, da;
+ void *sap, *dap;
+
+ raw_dump_table(__func__, "raw skb data dump", skb->data, skb->len);
+ /* at least two bytes will be used for the encoding */
+ if (skb->len < 2)
+ return -EINVAL;
+
+ if (lowpan_fetch_skb_u8(skb, &iphc0))
+ return -EINVAL;
+
+ if (lowpan_fetch_skb_u8(skb, &iphc1))
+ return -EINVAL;
+
+ ieee802154_addr_to_sa(&sa, &hdr->source);
+ ieee802154_addr_to_sa(&da, &hdr->dest);
+
+ if (sa.addr_type == IEEE802154_ADDR_SHORT)
+ sap = &sa.short_addr;
+ else
+ sap = &sa.hwaddr;
+
+ if (da.addr_type == IEEE802154_ADDR_SHORT)
+ dap = &da.short_addr;
+ else
+ dap = &da.hwaddr;
+
+ return lowpan_header_decompress(skb, skb->dev, sap, sa.addr_type,
+ IEEE802154_ADDR_LEN, dap, da.addr_type,
+ IEEE802154_ADDR_LEN, iphc0, iphc1);
+}
+
+static int lowpan_rcv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type *pt, struct net_device *orig_dev)
+{
+ struct ieee802154_hdr hdr;
+ int ret;
+
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ goto drop;
+
+ if (!netif_running(dev))
+ goto drop_skb;
+
+ if (skb->pkt_type == PACKET_OTHERHOST)
+ goto drop_skb;
+
+ if (dev->type != ARPHRD_IEEE802154)
+ goto drop_skb;
+
+ if (ieee802154_hdr_peek_addrs(skb, &hdr) < 0)
+ goto drop_skb;
+
+ /* check that it's our buffer */
+ if (skb->data[0] == LOWPAN_DISPATCH_IPV6) {
+ /* Pull off the 1-byte of 6lowpan header. */
+ skb_pull(skb, 1);
+ return lowpan_give_skb_to_devices(skb, NULL);
+ } else {
+ switch (skb->data[0] & 0xe0) {
+ case LOWPAN_DISPATCH_IPHC: /* ipv6 datagram */
+ ret = iphc_decompress(skb, &hdr);
+ if (ret < 0)
+ goto drop_skb;
+
+ return lowpan_give_skb_to_devices(skb, NULL);
+ case LOWPAN_DISPATCH_FRAG1: /* first fragment header */
+ ret = lowpan_frag_rcv(skb, LOWPAN_DISPATCH_FRAG1);
+ if (ret == 1) {
+ ret = iphc_decompress(skb, &hdr);
+ if (ret < 0)
+ goto drop_skb;
+
+ return lowpan_give_skb_to_devices(skb, NULL);
+ } else if (ret == -1) {
+ return NET_RX_DROP;
+ } else {
+ return NET_RX_SUCCESS;
+ }
+ case LOWPAN_DISPATCH_FRAGN: /* next fragments headers */
+ ret = lowpan_frag_rcv(skb, LOWPAN_DISPATCH_FRAGN);
+ if (ret == 1) {
+ ret = iphc_decompress(skb, &hdr);
+ if (ret < 0)
+ goto drop_skb;
+
+ return lowpan_give_skb_to_devices(skb, NULL);
+ } else if (ret == -1) {
+ return NET_RX_DROP;
+ } else {
+ return NET_RX_SUCCESS;
+ }
+ default:
+ break;
+ }
+ }
+
+drop_skb:
+ kfree_skb(skb);
+drop:
+ return NET_RX_DROP;
+}
+
+static struct packet_type lowpan_packet_type = {
+ .type = htons(ETH_P_IEEE802154),
+ .func = lowpan_rcv,
+};
+
+void lowpan_rx_init(void)
+{
+ dev_add_pack(&lowpan_packet_type);
+}
+
+void lowpan_rx_exit(void)
+{
+ dev_remove_pack(&lowpan_packet_type);
+}
--- /dev/null
+/* This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <net/6lowpan.h>
+#include <net/ieee802154_netdev.h>
+
+#include "6lowpan_i.h"
+
+/* don't save pan id, it's intra pan */
+struct lowpan_addr {
+ u8 mode;
+ union {
+ /* IPv6 needs big endian here */
+ __be64 extended_addr;
+ __be16 short_addr;
+ } u;
+};
+
+struct lowpan_addr_info {
+ struct lowpan_addr daddr;
+ struct lowpan_addr saddr;
+};
+
+static inline struct
+lowpan_addr_info *lowpan_skb_priv(const struct sk_buff *skb)
+{
+ WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct lowpan_addr_info));
+ return (struct lowpan_addr_info *)(skb->data -
+ sizeof(struct lowpan_addr_info));
+}
+
+int lowpan_header_create(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *_daddr,
+ const void *_saddr, unsigned int len)
+{
+ const u8 *saddr = _saddr;
+ const u8 *daddr = _daddr;
+ struct lowpan_addr_info *info;
+
+ /* TODO:
+ * if this packet isn't an ipv6 one, where should it be routed?
+ */
+ if (type != ETH_P_IPV6)
+ return 0;
+
+ if (!saddr)
+ saddr = dev->dev_addr;
+
+ raw_dump_inline(__func__, "saddr", (unsigned char *)saddr, 8);
+ raw_dump_inline(__func__, "daddr", (unsigned char *)daddr, 8);
+
+ info = lowpan_skb_priv(skb);
+
+ /* TODO: Currently we only support extended_addr */
+ info->daddr.mode = IEEE802154_ADDR_LONG;
+ memcpy(&info->daddr.u.extended_addr, daddr,
+ sizeof(info->daddr.u.extended_addr));
+ info->saddr.mode = IEEE802154_ADDR_LONG;
+ memcpy(&info->saddr.u.extended_addr, saddr,
+ sizeof(info->daddr.u.extended_addr));
+
+ return 0;
+}
+
+static struct sk_buff*
+lowpan_alloc_frag(struct sk_buff *skb, int size,
+ const struct ieee802154_hdr *master_hdr)
+{
+ struct net_device *real_dev = lowpan_dev_info(skb->dev)->real_dev;
+ struct sk_buff *frag;
+ int rc;
+
+ frag = alloc_skb(real_dev->hard_header_len +
+ real_dev->needed_tailroom + size,
+ GFP_ATOMIC);
+
+ if (likely(frag)) {
+ frag->dev = real_dev;
+ frag->priority = skb->priority;
+ skb_reserve(frag, real_dev->hard_header_len);
+ skb_reset_network_header(frag);
+ *mac_cb(frag) = *mac_cb(skb);
+
+ rc = dev_hard_header(frag, real_dev, 0, &master_hdr->dest,
+ &master_hdr->source, size);
+ if (rc < 0) {
+ kfree_skb(frag);
+ return ERR_PTR(rc);
+ }
+ } else {
+ frag = ERR_PTR(-ENOMEM);
+ }
+
+ return frag;
+}
+
+static int
+lowpan_xmit_fragment(struct sk_buff *skb, const struct ieee802154_hdr *wpan_hdr,
+ u8 *frag_hdr, int frag_hdrlen,
+ int offset, int len)
+{
+ struct sk_buff *frag;
+
+ raw_dump_inline(__func__, " fragment header", frag_hdr, frag_hdrlen);
+
+ frag = lowpan_alloc_frag(skb, frag_hdrlen + len, wpan_hdr);
+ if (IS_ERR(frag))
+ return PTR_ERR(frag);
+
+ memcpy(skb_put(frag, frag_hdrlen), frag_hdr, frag_hdrlen);
+ memcpy(skb_put(frag, len), skb_network_header(skb) + offset, len);
+
+ raw_dump_table(__func__, " fragment dump", frag->data, frag->len);
+
+ return dev_queue_xmit(frag);
+}
+
+static int
+lowpan_xmit_fragmented(struct sk_buff *skb, struct net_device *dev,
+ const struct ieee802154_hdr *wpan_hdr)
+{
+ u16 dgram_size, dgram_offset;
+ __be16 frag_tag;
+ u8 frag_hdr[5];
+ int frag_cap, frag_len, payload_cap, rc;
+ int skb_unprocessed, skb_offset;
+
+ dgram_size = lowpan_uncompress_size(skb, &dgram_offset) -
+ skb->mac_len;
+ frag_tag = htons(lowpan_dev_info(dev)->fragment_tag);
+ lowpan_dev_info(dev)->fragment_tag++;
+
+ frag_hdr[0] = LOWPAN_DISPATCH_FRAG1 | ((dgram_size >> 8) & 0x07);
+ frag_hdr[1] = dgram_size & 0xff;
+ memcpy(frag_hdr + 2, &frag_tag, sizeof(frag_tag));
+
+ payload_cap = ieee802154_max_payload(wpan_hdr);
+
+ frag_len = round_down(payload_cap - LOWPAN_FRAG1_HEAD_SIZE -
+ skb_network_header_len(skb), 8);
+
+ skb_offset = skb_network_header_len(skb);
+ skb_unprocessed = skb->len - skb->mac_len - skb_offset;
+
+ rc = lowpan_xmit_fragment(skb, wpan_hdr, frag_hdr,
+ LOWPAN_FRAG1_HEAD_SIZE, 0,
+ frag_len + skb_network_header_len(skb));
+ if (rc) {
+ pr_debug("%s unable to send FRAG1 packet (tag: %d)",
+ __func__, ntohs(frag_tag));
+ goto err;
+ }
+
+ frag_hdr[0] &= ~LOWPAN_DISPATCH_FRAG1;
+ frag_hdr[0] |= LOWPAN_DISPATCH_FRAGN;
+ frag_cap = round_down(payload_cap - LOWPAN_FRAGN_HEAD_SIZE, 8);
+
+ do {
+ dgram_offset += frag_len;
+ skb_offset += frag_len;
+ skb_unprocessed -= frag_len;
+ frag_len = min(frag_cap, skb_unprocessed);
+
+ frag_hdr[4] = dgram_offset >> 3;
+
+ rc = lowpan_xmit_fragment(skb, wpan_hdr, frag_hdr,
+ LOWPAN_FRAGN_HEAD_SIZE, skb_offset,
+ frag_len);
+ if (rc) {
+ pr_debug("%s unable to send a FRAGN packet. (tag: %d, offset: %d)\n",
+ __func__, ntohs(frag_tag), skb_offset);
+ goto err;
+ }
+ } while (skb_unprocessed > frag_cap);
+
+ consume_skb(skb);
+ return NET_XMIT_SUCCESS;
+
+err:
+ kfree_skb(skb);
+ return rc;
+}
+
+static int lowpan_header(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ieee802154_addr sa, da;
+ struct ieee802154_mac_cb *cb = mac_cb_init(skb);
+ struct lowpan_addr_info info;
+ void *daddr, *saddr;
+
+ memcpy(&info, lowpan_skb_priv(skb), sizeof(info));
+
+ /* TODO: Currently we only support extended_addr */
+ daddr = &info.daddr.u.extended_addr;
+ saddr = &info.saddr.u.extended_addr;
+
+ lowpan_header_compress(skb, dev, ETH_P_IPV6, daddr, saddr, skb->len);
+
+ cb->type = IEEE802154_FC_TYPE_DATA;
+
+ /* prepare wpan address data */
+ sa.mode = IEEE802154_ADDR_LONG;
+ sa.pan_id = ieee802154_mlme_ops(dev)->get_pan_id(dev);
+ sa.extended_addr = ieee802154_devaddr_from_raw(saddr);
+
+ /* intra-PAN communications */
+ da.pan_id = sa.pan_id;
+
+ /* if the destination address is the broadcast address, use the
+ * corresponding short address
+ */
+ if (lowpan_is_addr_broadcast((const u8 *)daddr)) {
+ da.mode = IEEE802154_ADDR_SHORT;
+ da.short_addr = cpu_to_le16(IEEE802154_ADDR_BROADCAST);
+ cb->ackreq = false;
+ } else {
+ da.mode = IEEE802154_ADDR_LONG;
+ da.extended_addr = ieee802154_devaddr_from_raw(daddr);
+ cb->ackreq = true;
+ }
+
+ return dev_hard_header(skb, lowpan_dev_info(dev)->real_dev,
+ ETH_P_IPV6, (void *)&da, (void *)&sa, 0);
+}
+
+netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ieee802154_hdr wpan_hdr;
+ int max_single, ret;
+
+ pr_debug("package xmit\n");
+
+ /* We must take a copy of the skb before we modify/replace the ipv6
+ * header as the header could be used elsewhere
+ */
+ skb = skb_unshare(skb, GFP_ATOMIC);
+ if (!skb)
+ return NET_XMIT_DROP;
+
+ ret = lowpan_header(skb, dev);
+ if (ret < 0) {
+ kfree_skb(skb);
+ return NET_XMIT_DROP;
+ }
+
+ if (ieee802154_hdr_peek(skb, &wpan_hdr) < 0) {
+ kfree_skb(skb);
+ return NET_XMIT_DROP;
+ }
+
+ max_single = ieee802154_max_payload(&wpan_hdr);
+
+ if (skb_tail_pointer(skb) - skb_network_header(skb) <= max_single) {
+ skb->dev = lowpan_dev_info(dev)->real_dev;
+ return dev_queue_xmit(skb);
+ } else {
+ netdev_tx_t rc;
+
+ pr_debug("frame is too big, fragmentation is needed\n");
+ rc = lowpan_xmit_fragmented(skb, dev, &wpan_hdr);
+
+ return rc < 0 ? NET_XMIT_DROP : rc;
+ }
+}
+++ /dev/null
-/* Copyright 2011, Siemens AG
- * written by Alexander Smirnov <alex.bluesman.smirnov@gmail.com>
- */
-
-/* Based on patches from Jon Smirl <jonsmirl@gmail.com>
- * Copyright (c) 2011 Jon Smirl <jonsmirl@gmail.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-/* Jon's code is based on 6lowpan implementation for Contiki which is:
- * Copyright (c) 2008, Swedish Institute of Computer Science.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * 3. Neither the name of the Institute nor the names of its contributors
- * may be used to endorse or promote products derived from this software
- * without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE INSTITUTE AND CONTRIBUTORS ``AS IS'' AND
- * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE
- * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
- * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
- * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
- * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
- * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
- * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
- * SUCH DAMAGE.
- */
-
-#include <linux/bitops.h>
-#include <linux/if_arp.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/netdevice.h>
-#include <linux/ieee802154.h>
-#include <net/af_ieee802154.h>
-#include <net/ieee802154_netdev.h>
-#include <net/6lowpan.h>
-#include <net/ipv6.h>
-
-#include "reassembly.h"
-
-static LIST_HEAD(lowpan_devices);
-static int lowpan_open_count;
-
-/* private device info */
-struct lowpan_dev_info {
- struct net_device *real_dev; /* real WPAN device ptr */
- struct mutex dev_list_mtx; /* mutex for list ops */
- u16 fragment_tag;
-};
-
-struct lowpan_dev_record {
- struct net_device *ldev;
- struct list_head list;
-};
-
-/* don't save pan id, it's intra pan */
-struct lowpan_addr {
- u8 mode;
- union {
- /* IPv6 needs big endian here */
- __be64 extended_addr;
- __be16 short_addr;
- } u;
-};
-
-struct lowpan_addr_info {
- struct lowpan_addr daddr;
- struct lowpan_addr saddr;
-};
-
-static inline struct
-lowpan_dev_info *lowpan_dev_info(const struct net_device *dev)
-{
- return netdev_priv(dev);
-}
-
-static inline struct
-lowpan_addr_info *lowpan_skb_priv(const struct sk_buff *skb)
-{
- WARN_ON_ONCE(skb_headroom(skb) < sizeof(struct lowpan_addr_info));
- return (struct lowpan_addr_info *)(skb->data -
- sizeof(struct lowpan_addr_info));
-}
-
-static int lowpan_header_create(struct sk_buff *skb, struct net_device *dev,
- unsigned short type, const void *_daddr,
- const void *_saddr, unsigned int len)
-{
- const u8 *saddr = _saddr;
- const u8 *daddr = _daddr;
- struct lowpan_addr_info *info;
-
-	/* TODO:
-	 * if this packet isn't an IPv6 one, where should it be routed?
-	 */
- if (type != ETH_P_IPV6)
- return 0;
-
- if (!saddr)
- saddr = dev->dev_addr;
-
- raw_dump_inline(__func__, "saddr", (unsigned char *)saddr, 8);
- raw_dump_inline(__func__, "daddr", (unsigned char *)daddr, 8);
-
- info = lowpan_skb_priv(skb);
-
- /* TODO: Currently we only support extended_addr */
- info->daddr.mode = IEEE802154_ADDR_LONG;
- memcpy(&info->daddr.u.extended_addr, daddr,
- sizeof(info->daddr.u.extended_addr));
- info->saddr.mode = IEEE802154_ADDR_LONG;
- memcpy(&info->saddr.u.extended_addr, saddr,
- sizeof(info->daddr.u.extended_addr));
-
- return 0;
-}
-
-static int lowpan_give_skb_to_devices(struct sk_buff *skb,
- struct net_device *dev)
-{
- struct lowpan_dev_record *entry;
- struct sk_buff *skb_cp;
- int stat = NET_RX_SUCCESS;
-
- skb->protocol = htons(ETH_P_IPV6);
- skb->pkt_type = PACKET_HOST;
-
- rcu_read_lock();
- list_for_each_entry_rcu(entry, &lowpan_devices, list)
- if (lowpan_dev_info(entry->ldev)->real_dev == skb->dev) {
- skb_cp = skb_copy(skb, GFP_ATOMIC);
- if (!skb_cp) {
- kfree_skb(skb);
- rcu_read_unlock();
- return NET_RX_DROP;
- }
-
- skb_cp->dev = entry->ldev;
- stat = netif_rx(skb_cp);
- if (stat == NET_RX_DROP)
- break;
- }
- rcu_read_unlock();
-
- consume_skb(skb);
-
- return stat;
-}
-
-static int
-iphc_decompress(struct sk_buff *skb, const struct ieee802154_hdr *hdr)
-{
- u8 iphc0, iphc1;
- struct ieee802154_addr_sa sa, da;
- void *sap, *dap;
-
- raw_dump_table(__func__, "raw skb data dump", skb->data, skb->len);
- /* at least two bytes will be used for the encoding */
- if (skb->len < 2)
- return -EINVAL;
-
- if (lowpan_fetch_skb_u8(skb, &iphc0))
- return -EINVAL;
-
- if (lowpan_fetch_skb_u8(skb, &iphc1))
- return -EINVAL;
-
- ieee802154_addr_to_sa(&sa, &hdr->source);
- ieee802154_addr_to_sa(&da, &hdr->dest);
-
- if (sa.addr_type == IEEE802154_ADDR_SHORT)
- sap = &sa.short_addr;
- else
- sap = &sa.hwaddr;
-
- if (da.addr_type == IEEE802154_ADDR_SHORT)
- dap = &da.short_addr;
- else
- dap = &da.hwaddr;
-
- return lowpan_header_decompress(skb, skb->dev, sap, sa.addr_type,
- IEEE802154_ADDR_LEN, dap, da.addr_type,
- IEEE802154_ADDR_LEN, iphc0, iphc1);
-}
-
-static struct sk_buff*
-lowpan_alloc_frag(struct sk_buff *skb, int size,
- const struct ieee802154_hdr *master_hdr)
-{
- struct net_device *real_dev = lowpan_dev_info(skb->dev)->real_dev;
- struct sk_buff *frag;
- int rc;
-
- frag = alloc_skb(real_dev->hard_header_len +
- real_dev->needed_tailroom + size,
- GFP_ATOMIC);
-
- if (likely(frag)) {
- frag->dev = real_dev;
- frag->priority = skb->priority;
- skb_reserve(frag, real_dev->hard_header_len);
- skb_reset_network_header(frag);
- *mac_cb(frag) = *mac_cb(skb);
-
- rc = dev_hard_header(frag, real_dev, 0, &master_hdr->dest,
- &master_hdr->source, size);
- if (rc < 0) {
- kfree_skb(frag);
- return ERR_PTR(rc);
- }
- } else {
- frag = ERR_PTR(-ENOMEM);
- }
-
- return frag;
-}
-
-static int
-lowpan_xmit_fragment(struct sk_buff *skb, const struct ieee802154_hdr *wpan_hdr,
- u8 *frag_hdr, int frag_hdrlen,
- int offset, int len)
-{
- struct sk_buff *frag;
-
- raw_dump_inline(__func__, " fragment header", frag_hdr, frag_hdrlen);
-
- frag = lowpan_alloc_frag(skb, frag_hdrlen + len, wpan_hdr);
- if (IS_ERR(frag))
- return -PTR_ERR(frag);
-
- memcpy(skb_put(frag, frag_hdrlen), frag_hdr, frag_hdrlen);
- memcpy(skb_put(frag, len), skb_network_header(skb) + offset, len);
-
- raw_dump_table(__func__, " fragment dump", frag->data, frag->len);
-
- return dev_queue_xmit(frag);
-}
-
-static int
-lowpan_xmit_fragmented(struct sk_buff *skb, struct net_device *dev,
- const struct ieee802154_hdr *wpan_hdr)
-{
- u16 dgram_size, dgram_offset;
- __be16 frag_tag;
- u8 frag_hdr[5];
- int frag_cap, frag_len, payload_cap, rc;
- int skb_unprocessed, skb_offset;
-
- dgram_size = lowpan_uncompress_size(skb, &dgram_offset) -
- skb->mac_len;
- frag_tag = htons(lowpan_dev_info(dev)->fragment_tag);
- lowpan_dev_info(dev)->fragment_tag++;
-
- frag_hdr[0] = LOWPAN_DISPATCH_FRAG1 | ((dgram_size >> 8) & 0x07);
- frag_hdr[1] = dgram_size & 0xff;
- memcpy(frag_hdr + 2, &frag_tag, sizeof(frag_tag));
-
- payload_cap = ieee802154_max_payload(wpan_hdr);
-
- frag_len = round_down(payload_cap - LOWPAN_FRAG1_HEAD_SIZE -
- skb_network_header_len(skb), 8);
-
- skb_offset = skb_network_header_len(skb);
- skb_unprocessed = skb->len - skb->mac_len - skb_offset;
-
- rc = lowpan_xmit_fragment(skb, wpan_hdr, frag_hdr,
- LOWPAN_FRAG1_HEAD_SIZE, 0,
- frag_len + skb_network_header_len(skb));
- if (rc) {
- pr_debug("%s unable to send FRAG1 packet (tag: %d)",
- __func__, ntohs(frag_tag));
- goto err;
- }
-
- frag_hdr[0] &= ~LOWPAN_DISPATCH_FRAG1;
- frag_hdr[0] |= LOWPAN_DISPATCH_FRAGN;
- frag_cap = round_down(payload_cap - LOWPAN_FRAGN_HEAD_SIZE, 8);
-
- do {
- dgram_offset += frag_len;
- skb_offset += frag_len;
- skb_unprocessed -= frag_len;
- frag_len = min(frag_cap, skb_unprocessed);
-
- frag_hdr[4] = dgram_offset >> 3;
-
- rc = lowpan_xmit_fragment(skb, wpan_hdr, frag_hdr,
- LOWPAN_FRAGN_HEAD_SIZE, skb_offset,
- frag_len);
- if (rc) {
- pr_debug("%s unable to send a FRAGN packet. (tag: %d, offset: %d)\n",
- __func__, ntohs(frag_tag), skb_offset);
- goto err;
- }
- } while (skb_unprocessed > frag_cap);
-
- consume_skb(skb);
- return NET_XMIT_SUCCESS;
-
-err:
- kfree_skb(skb);
- return rc;
-}
-
-static int lowpan_header(struct sk_buff *skb, struct net_device *dev)
-{
- struct ieee802154_addr sa, da;
- struct ieee802154_mac_cb *cb = mac_cb_init(skb);
- struct lowpan_addr_info info;
- void *daddr, *saddr;
-
- memcpy(&info, lowpan_skb_priv(skb), sizeof(info));
-
- /* TODO: Currently we only support extended_addr */
- daddr = &info.daddr.u.extended_addr;
- saddr = &info.saddr.u.extended_addr;
-
- lowpan_header_compress(skb, dev, ETH_P_IPV6, daddr, saddr, skb->len);
-
- cb->type = IEEE802154_FC_TYPE_DATA;
-
- /* prepare wpan address data */
- sa.mode = IEEE802154_ADDR_LONG;
- sa.pan_id = ieee802154_mlme_ops(dev)->get_pan_id(dev);
- sa.extended_addr = ieee802154_devaddr_from_raw(saddr);
-
- /* intra-PAN communications */
- da.pan_id = sa.pan_id;
-
- /* if the destination address is the broadcast address, use the
- * corresponding short address
- */
- if (lowpan_is_addr_broadcast((const u8 *)daddr)) {
- da.mode = IEEE802154_ADDR_SHORT;
- da.short_addr = cpu_to_le16(IEEE802154_ADDR_BROADCAST);
- cb->ackreq = false;
- } else {
- da.mode = IEEE802154_ADDR_LONG;
- da.extended_addr = ieee802154_devaddr_from_raw(daddr);
- cb->ackreq = true;
- }
-
- return dev_hard_header(skb, lowpan_dev_info(dev)->real_dev,
- ETH_P_IPV6, (void *)&da, (void *)&sa, 0);
-}
-
-static netdev_tx_t lowpan_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct ieee802154_hdr wpan_hdr;
- int max_single, ret;
-
-	pr_debug("packet xmit\n");
-
- /* We must take a copy of the skb before we modify/replace the ipv6
- * header as the header could be used elsewhere
- */
- skb = skb_unshare(skb, GFP_ATOMIC);
- if (!skb)
- return NET_XMIT_DROP;
-
- ret = lowpan_header(skb, dev);
- if (ret < 0) {
- kfree_skb(skb);
- return NET_XMIT_DROP;
- }
-
- if (ieee802154_hdr_peek(skb, &wpan_hdr) < 0) {
- kfree_skb(skb);
- return NET_XMIT_DROP;
- }
-
- max_single = ieee802154_max_payload(&wpan_hdr);
-
- if (skb_tail_pointer(skb) - skb_network_header(skb) <= max_single) {
- skb->dev = lowpan_dev_info(dev)->real_dev;
- return dev_queue_xmit(skb);
- } else {
- netdev_tx_t rc;
-
- pr_debug("frame is too big, fragmentation is needed\n");
- rc = lowpan_xmit_fragmented(skb, dev, &wpan_hdr);
-
- return rc < 0 ? NET_XMIT_DROP : rc;
- }
-}
-
-static __le16 lowpan_get_pan_id(const struct net_device *dev)
-{
- struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
-
- return ieee802154_mlme_ops(real_dev)->get_pan_id(real_dev);
-}
-
-static __le16 lowpan_get_short_addr(const struct net_device *dev)
-{
- struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
-
- return ieee802154_mlme_ops(real_dev)->get_short_addr(real_dev);
-}
-
-static u8 lowpan_get_dsn(const struct net_device *dev)
-{
- struct net_device *real_dev = lowpan_dev_info(dev)->real_dev;
-
- return ieee802154_mlme_ops(real_dev)->get_dsn(real_dev);
-}
-
-static struct header_ops lowpan_header_ops = {
- .create = lowpan_header_create,
-};
-
-static struct lock_class_key lowpan_tx_busylock;
-static struct lock_class_key lowpan_netdev_xmit_lock_key;
-
-static void lowpan_set_lockdep_class_one(struct net_device *dev,
- struct netdev_queue *txq,
- void *_unused)
-{
- lockdep_set_class(&txq->_xmit_lock,
- &lowpan_netdev_xmit_lock_key);
-}
-
-static int lowpan_dev_init(struct net_device *dev)
-{
- netdev_for_each_tx_queue(dev, lowpan_set_lockdep_class_one, NULL);
- dev->qdisc_tx_busylock = &lowpan_tx_busylock;
- return 0;
-}
-
-static const struct net_device_ops lowpan_netdev_ops = {
- .ndo_init = lowpan_dev_init,
- .ndo_start_xmit = lowpan_xmit,
-};
-
-static struct ieee802154_mlme_ops lowpan_mlme = {
- .get_pan_id = lowpan_get_pan_id,
- .get_short_addr = lowpan_get_short_addr,
- .get_dsn = lowpan_get_dsn,
-};
-
-static void lowpan_setup(struct net_device *dev)
-{
- dev->addr_len = IEEE802154_ADDR_LEN;
- memset(dev->broadcast, 0xff, IEEE802154_ADDR_LEN);
- dev->type = ARPHRD_IEEE802154;
- /* Frame Control + Sequence Number + Address fields + Security Header */
- dev->hard_header_len = 2 + 1 + 20 + 14;
- dev->needed_tailroom = 2; /* FCS */
- dev->mtu = IPV6_MIN_MTU;
- dev->tx_queue_len = 0;
- dev->flags = IFF_BROADCAST | IFF_MULTICAST;
- dev->watchdog_timeo = 0;
-
- dev->netdev_ops = &lowpan_netdev_ops;
- dev->header_ops = &lowpan_header_ops;
- dev->ml_priv = &lowpan_mlme;
- dev->destructor = free_netdev;
-}
-
-static int lowpan_validate(struct nlattr *tb[], struct nlattr *data[])
-{
- if (tb[IFLA_ADDRESS]) {
- if (nla_len(tb[IFLA_ADDRESS]) != IEEE802154_ADDR_LEN)
- return -EINVAL;
- }
- return 0;
-}
-
-static int lowpan_rcv(struct sk_buff *skb, struct net_device *dev,
- struct packet_type *pt, struct net_device *orig_dev)
-{
- struct ieee802154_hdr hdr;
- int ret;
-
- skb = skb_share_check(skb, GFP_ATOMIC);
- if (!skb)
- goto drop;
-
- if (!netif_running(dev))
- goto drop_skb;
-
- if (skb->pkt_type == PACKET_OTHERHOST)
- goto drop_skb;
-
- if (dev->type != ARPHRD_IEEE802154)
- goto drop_skb;
-
- if (ieee802154_hdr_peek_addrs(skb, &hdr) < 0)
- goto drop_skb;
-
- /* check that it's our buffer */
- if (skb->data[0] == LOWPAN_DISPATCH_IPV6) {
-		/* Pull off the 1-byte 6lowpan header. */
- skb_pull(skb, 1);
- return lowpan_give_skb_to_devices(skb, NULL);
- } else {
- switch (skb->data[0] & 0xe0) {
- case LOWPAN_DISPATCH_IPHC: /* ipv6 datagram */
- ret = iphc_decompress(skb, &hdr);
- if (ret < 0)
- goto drop_skb;
-
- return lowpan_give_skb_to_devices(skb, NULL);
- case LOWPAN_DISPATCH_FRAG1: /* first fragment header */
- ret = lowpan_frag_rcv(skb, LOWPAN_DISPATCH_FRAG1);
- if (ret == 1) {
- ret = iphc_decompress(skb, &hdr);
- if (ret < 0)
- goto drop_skb;
-
- return lowpan_give_skb_to_devices(skb, NULL);
- } else if (ret == -1) {
- return NET_RX_DROP;
- } else {
- return NET_RX_SUCCESS;
- }
- case LOWPAN_DISPATCH_FRAGN: /* next fragments headers */
- ret = lowpan_frag_rcv(skb, LOWPAN_DISPATCH_FRAGN);
- if (ret == 1) {
- ret = iphc_decompress(skb, &hdr);
- if (ret < 0)
- goto drop_skb;
-
- return lowpan_give_skb_to_devices(skb, NULL);
- } else if (ret == -1) {
- return NET_RX_DROP;
- } else {
- return NET_RX_SUCCESS;
- }
- default:
- break;
- }
- }
-
-drop_skb:
- kfree_skb(skb);
-drop:
- return NET_RX_DROP;
-}
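
For reference, the dispatch test in lowpan_rcv() above distinguishes uncompressed IPv6, IPHC, FRAG1 and FRAGN frames by the first payload byte. A standalone sketch of that classification follows; the dispatch values come from RFC 4944 and RFC 6282, and the macro names are shortened stand-ins, not the kernel's:

	#include <stdint.h>
	#include <stdio.h>

	/* Shortened stand-ins for the kernel's LOWPAN_DISPATCH_* values. */
	#define DISPATCH_IPV6	0x41
	#define DISPATCH_IPHC	0x60
	#define DISPATCH_FRAG1	0xc0
	#define DISPATCH_FRAGN	0xe0

	static const char *classify(uint8_t b)
	{
		if (b == DISPATCH_IPV6)
			return "uncompressed IPv6";
		switch (b & 0xe0) {
		case DISPATCH_IPHC:
			return "IPHC-compressed datagram";
		case DISPATCH_FRAG1:
			return "first fragment";
		case DISPATCH_FRAGN:
			return "subsequent fragment";
		default:
			return "not for us";
		}
	}

	int main(void)
	{
		const uint8_t samples[] = { 0x41, 0x78, 0xc5, 0xe5, 0x00 };
		unsigned int i;

		for (i = 0; i < sizeof(samples); i++)
			printf("0x%02x -> %s\n", samples[i], classify(samples[i]));
		return 0;
	}
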
-
-static struct packet_type lowpan_packet_type = {
- .type = htons(ETH_P_IEEE802154),
- .func = lowpan_rcv,
-};
-
-static int lowpan_newlink(struct net *src_net, struct net_device *dev,
- struct nlattr *tb[], struct nlattr *data[])
-{
- struct net_device *real_dev;
- struct lowpan_dev_record *entry;
- int ret;
-
- ASSERT_RTNL();
-
- pr_debug("adding new link\n");
-
- if (!tb[IFLA_LINK])
- return -EINVAL;
- /* find and hold real wpan device */
- real_dev = dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
- if (!real_dev)
- return -ENODEV;
- if (real_dev->type != ARPHRD_IEEE802154) {
- dev_put(real_dev);
- return -EINVAL;
- }
-
- lowpan_dev_info(dev)->real_dev = real_dev;
- mutex_init(&lowpan_dev_info(dev)->dev_list_mtx);
-
- entry = kzalloc(sizeof(*entry), GFP_KERNEL);
- if (!entry) {
- dev_put(real_dev);
- lowpan_dev_info(dev)->real_dev = NULL;
- return -ENOMEM;
- }
-
- entry->ldev = dev;
-
- /* Set the lowpan hardware address to the wpan hardware address. */
- memcpy(dev->dev_addr, real_dev->dev_addr, IEEE802154_ADDR_LEN);
-
- mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
- INIT_LIST_HEAD(&entry->list);
- list_add_tail(&entry->list, &lowpan_devices);
- mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx);
-
- ret = register_netdevice(dev);
- if (ret >= 0) {
- if (!lowpan_open_count)
- dev_add_pack(&lowpan_packet_type);
- lowpan_open_count++;
- }
-
- return ret;
-}
-
-static void lowpan_dellink(struct net_device *dev, struct list_head *head)
-{
- struct lowpan_dev_info *lowpan_dev = lowpan_dev_info(dev);
- struct net_device *real_dev = lowpan_dev->real_dev;
- struct lowpan_dev_record *entry, *tmp;
-
- ASSERT_RTNL();
-
- lowpan_open_count--;
- if (!lowpan_open_count)
- dev_remove_pack(&lowpan_packet_type);
-
- mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
- list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) {
- if (entry->ldev == dev) {
- list_del(&entry->list);
- kfree(entry);
- }
- }
- mutex_unlock(&lowpan_dev_info(dev)->dev_list_mtx);
-
- mutex_destroy(&lowpan_dev_info(dev)->dev_list_mtx);
-
- unregister_netdevice_queue(dev, head);
-
- dev_put(real_dev);
-}
-
-static struct rtnl_link_ops lowpan_link_ops __read_mostly = {
- .kind = "lowpan",
- .priv_size = sizeof(struct lowpan_dev_info),
- .setup = lowpan_setup,
- .newlink = lowpan_newlink,
- .dellink = lowpan_dellink,
- .validate = lowpan_validate,
-};
-
-static inline int __init lowpan_netlink_init(void)
-{
- return rtnl_link_register(&lowpan_link_ops);
-}
-
-static inline void lowpan_netlink_fini(void)
-{
- rtnl_link_unregister(&lowpan_link_ops);
-}
-
-static int lowpan_device_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
-{
- struct net_device *dev = netdev_notifier_info_to_dev(ptr);
- LIST_HEAD(del_list);
- struct lowpan_dev_record *entry, *tmp;
-
- if (dev->type != ARPHRD_IEEE802154)
- goto out;
-
- if (event == NETDEV_UNREGISTER) {
- list_for_each_entry_safe(entry, tmp, &lowpan_devices, list) {
- if (lowpan_dev_info(entry->ldev)->real_dev == dev)
- lowpan_dellink(entry->ldev, &del_list);
- }
-
- unregister_netdevice_many(&del_list);
- }
-
-out:
- return NOTIFY_DONE;
-}
-
-static struct notifier_block lowpan_dev_notifier = {
- .notifier_call = lowpan_device_event,
-};
-
-static int __init lowpan_init_module(void)
-{
- int err = 0;
-
- err = lowpan_net_frag_init();
- if (err < 0)
- goto out;
-
- err = lowpan_netlink_init();
- if (err < 0)
- goto out_frag;
-
- err = register_netdevice_notifier(&lowpan_dev_notifier);
- if (err < 0)
- goto out_pack;
-
- return 0;
-
-out_pack:
- lowpan_netlink_fini();
-out_frag:
- lowpan_net_frag_exit();
-out:
- return err;
-}
-
-static void __exit lowpan_cleanup_module(void)
-{
- lowpan_netlink_fini();
-
- lowpan_net_frag_exit();
-
- unregister_netdevice_notifier(&lowpan_dev_notifier);
-}
-
-module_init(lowpan_init_module);
-module_exit(lowpan_cleanup_module);
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_RTNL_LINK("lowpan");
-config IEEE802154
+menuconfig IEEE802154
tristate "IEEE Std 802.15.4 Low-Rate Wireless Personal Area Networks support"
---help---
IEEE Std 802.15.4 defines a low data rate, low power and low
Say Y here to compile LR-WPAN support into the kernel or say M to
compile it as modules.
-config IEEE802154_6LOWPAN
- tristate "6lowpan support over IEEE 802.15.4"
- depends on IEEE802154 && 6LOWPAN
+if IEEE802154
+
+config IEEE802154_SOCKET
+ tristate "IEEE 802.15.4 socket interface"
+ default y
---help---
- IPv6 compression over IEEE 802.15.4.
+	  Socket interface for IEEE 802.15.4. Provides a DGRAM socket
+	  interface for 802.15.4 data frames and a RAW socket interface
+	  for building the MAC header from userspace.
+
+source "net/ieee802154/6lowpan/Kconfig"
+
+endif
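
The 6LoWPAN entry deleted above is not dropped: it moves behind the new source line into net/ieee802154/6lowpan/Kconfig. Based on the removed lines (and the new "if IEEE802154" guard, which makes the explicit IEEE802154 dependency redundant), that file plausibly reduces to the following sketch, not necessarily the verbatim contents:

	config IEEE802154_6LOWPAN
		tristate "6lowpan support over IEEE 802.15.4"
		depends on 6LOWPAN
		---help---
		  IPv6 compression over IEEE 802.15.4.
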
-obj-$(CONFIG_IEEE802154) += ieee802154.o af_802154.o
-obj-$(CONFIG_IEEE802154_6LOWPAN) += ieee802154_6lowpan.o
+obj-$(CONFIG_IEEE802154) += ieee802154.o
+obj-$(CONFIG_IEEE802154_SOCKET) += ieee802154_socket.o
+obj-y += 6lowpan/
-ieee802154_6lowpan-y := 6lowpan_rtnl.o reassembly.o
ieee802154-y := netlink.o nl-mac.o nl-phy.o nl_policy.o core.o \
header_ops.o sysfs.o nl802154.o
-af_802154-y := af_ieee802154.o raw.o dgram.o
+ieee802154_socket-y := socket.o
ccflags-y += -D__CHECK_ENDIAN__
+++ /dev/null
-/*
- * Internal interfaces for ieee 802.15.4 address family.
- *
- * Copyright 2007, 2008, 2009 Siemens AG
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * Written by:
- * Sergey Lapin <slapin@ossfans.org>
- * Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
- */
-
-#ifndef AF802154_H
-#define AF802154_H
-
-struct sk_buff;
-struct net_device;
-struct ieee802154_addr;
-extern struct proto ieee802154_raw_prot;
-extern struct proto ieee802154_dgram_prot;
-void ieee802154_raw_deliver(struct net_device *dev, struct sk_buff *skb);
-int ieee802154_dgram_deliver(struct net_device *dev, struct sk_buff *skb);
-struct net_device *ieee802154_get_dev(struct net *net,
- const struct ieee802154_addr *addr);
-
-#endif
+++ /dev/null
-/*
- * IEEE 802.15.4 socket interface
- *
- * Copyright 2007, 2008 Siemens AG
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * Written by:
- * Sergey Lapin <slapin@ossfans.org>
- * Maxim Gorbachyov <maxim.gorbachev@siemens.com>
- */
-
-#include <linux/net.h>
-#include <linux/capability.h>
-#include <linux/module.h>
-#include <linux/if_arp.h>
-#include <linux/if.h>
-#include <linux/termios.h> /* For TIOCOUTQ/INQ */
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <net/datalink.h>
-#include <net/psnap.h>
-#include <net/sock.h>
-#include <net/tcp_states.h>
-#include <net/route.h>
-
-#include <net/af_ieee802154.h>
-#include <net/ieee802154_netdev.h>
-
-#include "af802154.h"
-
-/* Utility function for families */
-struct net_device*
-ieee802154_get_dev(struct net *net, const struct ieee802154_addr *addr)
-{
- struct net_device *dev = NULL;
- struct net_device *tmp;
- __le16 pan_id, short_addr;
- u8 hwaddr[IEEE802154_ADDR_LEN];
-
- switch (addr->mode) {
- case IEEE802154_ADDR_LONG:
- ieee802154_devaddr_to_raw(hwaddr, addr->extended_addr);
- rcu_read_lock();
- dev = dev_getbyhwaddr_rcu(net, ARPHRD_IEEE802154, hwaddr);
- if (dev)
- dev_hold(dev);
- rcu_read_unlock();
- break;
- case IEEE802154_ADDR_SHORT:
- if (addr->pan_id == cpu_to_le16(IEEE802154_PANID_BROADCAST) ||
- addr->short_addr == cpu_to_le16(IEEE802154_ADDR_UNDEF) ||
- addr->short_addr == cpu_to_le16(IEEE802154_ADDR_BROADCAST))
- break;
-
- rtnl_lock();
-
- for_each_netdev(net, tmp) {
- if (tmp->type != ARPHRD_IEEE802154)
- continue;
-
- pan_id = ieee802154_mlme_ops(tmp)->get_pan_id(tmp);
- short_addr =
- ieee802154_mlme_ops(tmp)->get_short_addr(tmp);
-
- if (pan_id == addr->pan_id &&
- short_addr == addr->short_addr) {
- dev = tmp;
- dev_hold(dev);
- break;
- }
- }
-
- rtnl_unlock();
- break;
- default:
- pr_warn("Unsupported ieee802154 address type: %d\n",
- addr->mode);
- break;
- }
-
- return dev;
-}
-
-static int ieee802154_sock_release(struct socket *sock)
-{
- struct sock *sk = sock->sk;
-
- if (sk) {
- sock->sk = NULL;
- sk->sk_prot->close(sk, 0);
- }
- return 0;
-}
-
-static int ieee802154_sock_sendmsg(struct kiocb *iocb, struct socket *sock,
- struct msghdr *msg, size_t len)
-{
- struct sock *sk = sock->sk;
-
- return sk->sk_prot->sendmsg(iocb, sk, msg, len);
-}
-
-static int ieee802154_sock_bind(struct socket *sock, struct sockaddr *uaddr,
- int addr_len)
-{
- struct sock *sk = sock->sk;
-
- if (sk->sk_prot->bind)
- return sk->sk_prot->bind(sk, uaddr, addr_len);
-
- return sock_no_bind(sock, uaddr, addr_len);
-}
-
-static int ieee802154_sock_connect(struct socket *sock, struct sockaddr *uaddr,
- int addr_len, int flags)
-{
- struct sock *sk = sock->sk;
-
- if (addr_len < sizeof(uaddr->sa_family))
- return -EINVAL;
-
- if (uaddr->sa_family == AF_UNSPEC)
- return sk->sk_prot->disconnect(sk, flags);
-
- return sk->sk_prot->connect(sk, uaddr, addr_len);
-}
-
-static int ieee802154_dev_ioctl(struct sock *sk, struct ifreq __user *arg,
- unsigned int cmd)
-{
- struct ifreq ifr;
- int ret = -ENOIOCTLCMD;
- struct net_device *dev;
-
- if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))
- return -EFAULT;
-
- ifr.ifr_name[IFNAMSIZ-1] = 0;
-
- dev_load(sock_net(sk), ifr.ifr_name);
- dev = dev_get_by_name(sock_net(sk), ifr.ifr_name);
-
- if (!dev)
- return -ENODEV;
-
- if (dev->type == ARPHRD_IEEE802154 && dev->netdev_ops->ndo_do_ioctl)
- ret = dev->netdev_ops->ndo_do_ioctl(dev, &ifr, cmd);
-
- if (!ret && copy_to_user(arg, &ifr, sizeof(struct ifreq)))
- ret = -EFAULT;
- dev_put(dev);
-
- return ret;
-}
-
-static int ieee802154_sock_ioctl(struct socket *sock, unsigned int cmd,
- unsigned long arg)
-{
- struct sock *sk = sock->sk;
-
- switch (cmd) {
- case SIOCGSTAMP:
- return sock_get_timestamp(sk, (struct timeval __user *)arg);
- case SIOCGSTAMPNS:
- return sock_get_timestampns(sk, (struct timespec __user *)arg);
- case SIOCGIFADDR:
- case SIOCSIFADDR:
- return ieee802154_dev_ioctl(sk, (struct ifreq __user *)arg,
- cmd);
- default:
- if (!sk->sk_prot->ioctl)
- return -ENOIOCTLCMD;
- return sk->sk_prot->ioctl(sk, cmd, arg);
- }
-}
-
-static const struct proto_ops ieee802154_raw_ops = {
- .family = PF_IEEE802154,
- .owner = THIS_MODULE,
- .release = ieee802154_sock_release,
- .bind = ieee802154_sock_bind,
- .connect = ieee802154_sock_connect,
- .socketpair = sock_no_socketpair,
- .accept = sock_no_accept,
- .getname = sock_no_getname,
- .poll = datagram_poll,
- .ioctl = ieee802154_sock_ioctl,
- .listen = sock_no_listen,
- .shutdown = sock_no_shutdown,
- .setsockopt = sock_common_setsockopt,
- .getsockopt = sock_common_getsockopt,
- .sendmsg = ieee802154_sock_sendmsg,
- .recvmsg = sock_common_recvmsg,
- .mmap = sock_no_mmap,
- .sendpage = sock_no_sendpage,
-#ifdef CONFIG_COMPAT
- .compat_setsockopt = compat_sock_common_setsockopt,
- .compat_getsockopt = compat_sock_common_getsockopt,
-#endif
-};
-
-static const struct proto_ops ieee802154_dgram_ops = {
- .family = PF_IEEE802154,
- .owner = THIS_MODULE,
- .release = ieee802154_sock_release,
- .bind = ieee802154_sock_bind,
- .connect = ieee802154_sock_connect,
- .socketpair = sock_no_socketpair,
- .accept = sock_no_accept,
- .getname = sock_no_getname,
- .poll = datagram_poll,
- .ioctl = ieee802154_sock_ioctl,
- .listen = sock_no_listen,
- .shutdown = sock_no_shutdown,
- .setsockopt = sock_common_setsockopt,
- .getsockopt = sock_common_getsockopt,
- .sendmsg = ieee802154_sock_sendmsg,
- .recvmsg = sock_common_recvmsg,
- .mmap = sock_no_mmap,
- .sendpage = sock_no_sendpage,
-#ifdef CONFIG_COMPAT
- .compat_setsockopt = compat_sock_common_setsockopt,
- .compat_getsockopt = compat_sock_common_getsockopt,
-#endif
-};
-
-/* Create a socket. Initialise the socket, blank the addresses,
- * and set the state.
- */
-static int ieee802154_create(struct net *net, struct socket *sock,
- int protocol, int kern)
-{
- struct sock *sk;
- int rc;
- struct proto *proto;
- const struct proto_ops *ops;
-
- if (!net_eq(net, &init_net))
- return -EAFNOSUPPORT;
-
- switch (sock->type) {
- case SOCK_RAW:
- proto = &ieee802154_raw_prot;
- ops = &ieee802154_raw_ops;
- break;
- case SOCK_DGRAM:
- proto = &ieee802154_dgram_prot;
- ops = &ieee802154_dgram_ops;
- break;
- default:
- rc = -ESOCKTNOSUPPORT;
- goto out;
- }
-
- rc = -ENOMEM;
- sk = sk_alloc(net, PF_IEEE802154, GFP_KERNEL, proto);
- if (!sk)
- goto out;
- rc = 0;
-
- sock->ops = ops;
-
- sock_init_data(sock, sk);
- /* FIXME: sk->sk_destruct */
- sk->sk_family = PF_IEEE802154;
-
- /* Checksums on by default */
- sock_set_flag(sk, SOCK_ZAPPED);
-
- if (sk->sk_prot->hash)
- sk->sk_prot->hash(sk);
-
- if (sk->sk_prot->init) {
- rc = sk->sk_prot->init(sk);
- if (rc)
- sk_common_release(sk);
- }
-out:
- return rc;
-}
-
-static const struct net_proto_family ieee802154_family_ops = {
- .family = PF_IEEE802154,
- .create = ieee802154_create,
- .owner = THIS_MODULE,
-};
-
-static int ieee802154_rcv(struct sk_buff *skb, struct net_device *dev,
- struct packet_type *pt, struct net_device *orig_dev)
-{
- if (!netif_running(dev))
- goto drop;
- pr_debug("got frame, type %d, dev %p\n", dev->type, dev);
-#ifdef DEBUG
- print_hex_dump_bytes("ieee802154_rcv ",
- DUMP_PREFIX_NONE, skb->data, skb->len);
-#endif
-
- if (!net_eq(dev_net(dev), &init_net))
- goto drop;
-
- ieee802154_raw_deliver(dev, skb);
-
- if (dev->type != ARPHRD_IEEE802154)
- goto drop;
-
- if (skb->pkt_type != PACKET_OTHERHOST)
- return ieee802154_dgram_deliver(dev, skb);
-
-drop:
- kfree_skb(skb);
- return NET_RX_DROP;
-}
-
-static struct packet_type ieee802154_packet_type = {
- .type = htons(ETH_P_IEEE802154),
- .func = ieee802154_rcv,
-};
-
-static int __init af_ieee802154_init(void)
-{
- int rc = -EINVAL;
-
- rc = proto_register(&ieee802154_raw_prot, 1);
- if (rc)
- goto out;
-
- rc = proto_register(&ieee802154_dgram_prot, 1);
- if (rc)
- goto err_dgram;
-
- /* Tell SOCKET that we are alive */
- rc = sock_register(&ieee802154_family_ops);
- if (rc)
- goto err_sock;
- dev_add_pack(&ieee802154_packet_type);
-
- rc = 0;
- goto out;
-
-err_sock:
- proto_unregister(&ieee802154_dgram_prot);
-err_dgram:
- proto_unregister(&ieee802154_raw_prot);
-out:
- return rc;
-}
-
-static void __exit af_ieee802154_remove(void)
-{
- dev_remove_pack(&ieee802154_packet_type);
- sock_unregister(PF_IEEE802154);
- proto_unregister(&ieee802154_dgram_prot);
- proto_unregister(&ieee802154_raw_prot);
-}
-
-module_init(af_ieee802154_init);
-module_exit(af_ieee802154_remove);
-
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_NETPROTO(PF_IEEE802154);
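
The socket-family glue above, together with the dgram and raw implementations removed below, is consolidated into the new socket.c. For orientation, here is a userspace sketch against the DGRAM interface; the struct layouts are transcribed from the in-kernel net/af_ieee802154.h, PF_IEEE802154 = 36 matches linux/socket.h, and the PAN id and short address are made-up example values:

	#include <stdint.h>
	#include <stdio.h>
	#include <sys/socket.h>
	#include <unistd.h>

	#ifndef PF_IEEE802154
	#define PF_IEEE802154 36	/* AF_IEEE802154 in linux/socket.h */
	#endif

	#define IEEE802154_ADDR_SHORT 2	/* matches the kernel enum */

	struct ieee802154_addr_sa {	/* transcribed for illustration */
		int addr_type;
		uint16_t pan_id;
		union {
			uint8_t hwaddr[8];
			uint16_t short_addr;
		};
	};

	struct sockaddr_ieee802154 {
		sa_family_t family;
		struct ieee802154_addr_sa addr;
	};

	int main(void)
	{
		struct sockaddr_ieee802154 dst = {
			.family = PF_IEEE802154,	/* AF_ == PF_ here */
			.addr = {
				.addr_type = IEEE802154_ADDR_SHORT,
				.pan_id = 0x0001,	/* made-up PAN id */
				.short_addr = 0x0002,	/* made-up peer */
			},
		};
		const char payload[] = "hello";
		int fd = socket(PF_IEEE802154, SOCK_DGRAM, 0);

		if (fd < 0) {
			perror("socket");
			return 1;
		}
		if (sendto(fd, payload, sizeof(payload), 0,
			   (struct sockaddr *)&dst, sizeof(dst)) < 0)
			perror("sendto");
		close(fd);
		return 0;
	}

With no 802.15.4 interface present, sendto() fails with ENXIO, matching the "no dev" path in the dgram code removed below.
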
+++ /dev/null
-/*
- * IEEE 802.15.4 dgram socket interface
- *
- * Copyright 2007, 2008 Siemens AG
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * Written by:
- * Sergey Lapin <slapin@ossfans.org>
- * Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
- */
-
-#include <linux/capability.h>
-#include <linux/net.h>
-#include <linux/module.h>
-#include <linux/if_arp.h>
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <linux/ieee802154.h>
-#include <net/sock.h>
-#include <net/af_ieee802154.h>
-#include <net/ieee802154_netdev.h>
-
-#include <asm/ioctls.h>
-
-#include "af802154.h"
-
-static HLIST_HEAD(dgram_head);
-static DEFINE_RWLOCK(dgram_lock);
-
-struct dgram_sock {
- struct sock sk;
-
- struct ieee802154_addr src_addr;
- struct ieee802154_addr dst_addr;
-
- unsigned int bound:1;
- unsigned int connected:1;
- unsigned int want_ack:1;
- unsigned int secen:1;
- unsigned int secen_override:1;
- unsigned int seclevel:3;
- unsigned int seclevel_override:1;
-};
-
-static inline struct dgram_sock *dgram_sk(const struct sock *sk)
-{
- return container_of(sk, struct dgram_sock, sk);
-}
-
-static void dgram_hash(struct sock *sk)
-{
- write_lock_bh(&dgram_lock);
- sk_add_node(sk, &dgram_head);
- sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
- write_unlock_bh(&dgram_lock);
-}
-
-static void dgram_unhash(struct sock *sk)
-{
- write_lock_bh(&dgram_lock);
- if (sk_del_node_init(sk))
- sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
- write_unlock_bh(&dgram_lock);
-}
-
-static int dgram_init(struct sock *sk)
-{
- struct dgram_sock *ro = dgram_sk(sk);
-
- ro->want_ack = 1;
- return 0;
-}
-
-static void dgram_close(struct sock *sk, long timeout)
-{
- sk_common_release(sk);
-}
-
-static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)
-{
- struct sockaddr_ieee802154 *addr = (struct sockaddr_ieee802154 *)uaddr;
- struct ieee802154_addr haddr;
- struct dgram_sock *ro = dgram_sk(sk);
- int err = -EINVAL;
- struct net_device *dev;
-
- lock_sock(sk);
-
- ro->bound = 0;
-
- if (len < sizeof(*addr))
- goto out;
-
- if (addr->family != AF_IEEE802154)
- goto out;
-
- ieee802154_addr_from_sa(&haddr, &addr->addr);
- dev = ieee802154_get_dev(sock_net(sk), &haddr);
- if (!dev) {
- err = -ENODEV;
- goto out;
- }
-
- if (dev->type != ARPHRD_IEEE802154) {
- err = -ENODEV;
- goto out_put;
- }
-
- ro->src_addr = haddr;
-
- ro->bound = 1;
- err = 0;
-out_put:
- dev_put(dev);
-out:
- release_sock(sk);
-
- return err;
-}
-
-static int dgram_ioctl(struct sock *sk, int cmd, unsigned long arg)
-{
- switch (cmd) {
- case SIOCOUTQ:
- {
- int amount = sk_wmem_alloc_get(sk);
-
- return put_user(amount, (int __user *)arg);
- }
-
- case SIOCINQ:
- {
- struct sk_buff *skb;
- unsigned long amount;
-
- amount = 0;
- spin_lock_bh(&sk->sk_receive_queue.lock);
- skb = skb_peek(&sk->sk_receive_queue);
- if (skb != NULL) {
- /* We will only return the amount
- * of this packet since that is all
- * that will be read.
- */
- amount = skb->len - ieee802154_hdr_length(skb);
- }
- spin_unlock_bh(&sk->sk_receive_queue.lock);
- return put_user(amount, (int __user *)arg);
- }
- }
-
- return -ENOIOCTLCMD;
-}
-
-/* FIXME: autobind */
-static int dgram_connect(struct sock *sk, struct sockaddr *uaddr,
- int len)
-{
- struct sockaddr_ieee802154 *addr = (struct sockaddr_ieee802154 *)uaddr;
- struct dgram_sock *ro = dgram_sk(sk);
- int err = 0;
-
- if (len < sizeof(*addr))
- return -EINVAL;
-
- if (addr->family != AF_IEEE802154)
- return -EINVAL;
-
- lock_sock(sk);
-
- if (!ro->bound) {
- err = -ENETUNREACH;
- goto out;
- }
-
- ieee802154_addr_from_sa(&ro->dst_addr, &addr->addr);
- ro->connected = 1;
-
-out:
- release_sock(sk);
- return err;
-}
-
-static int dgram_disconnect(struct sock *sk, int flags)
-{
- struct dgram_sock *ro = dgram_sk(sk);
-
- lock_sock(sk);
- ro->connected = 0;
- release_sock(sk);
-
- return 0;
-}
-
-static int dgram_sendmsg(struct kiocb *iocb, struct sock *sk,
- struct msghdr *msg, size_t size)
-{
- struct net_device *dev;
- unsigned int mtu;
- struct sk_buff *skb;
- struct ieee802154_mac_cb *cb;
- struct dgram_sock *ro = dgram_sk(sk);
- struct ieee802154_addr dst_addr;
- int hlen, tlen;
- int err;
-
- if (msg->msg_flags & MSG_OOB) {
- pr_debug("msg->msg_flags = 0x%x\n", msg->msg_flags);
- return -EOPNOTSUPP;
- }
-
- if (!ro->connected && !msg->msg_name)
- return -EDESTADDRREQ;
- else if (ro->connected && msg->msg_name)
- return -EISCONN;
-
- if (!ro->bound)
- dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
- else
- dev = ieee802154_get_dev(sock_net(sk), &ro->src_addr);
-
- if (!dev) {
- pr_debug("no dev\n");
- err = -ENXIO;
- goto out;
- }
- mtu = dev->mtu;
- pr_debug("name = %s, mtu = %u\n", dev->name, mtu);
-
- if (size > mtu) {
- pr_debug("size = %Zu, mtu = %u\n", size, mtu);
- err = -EMSGSIZE;
- goto out_dev;
- }
-
- hlen = LL_RESERVED_SPACE(dev);
- tlen = dev->needed_tailroom;
- skb = sock_alloc_send_skb(sk, hlen + tlen + size,
- msg->msg_flags & MSG_DONTWAIT,
- &err);
- if (!skb)
- goto out_dev;
-
- skb_reserve(skb, hlen);
-
- skb_reset_network_header(skb);
-
- cb = mac_cb_init(skb);
- cb->type = IEEE802154_FC_TYPE_DATA;
- cb->ackreq = ro->want_ack;
-
- if (msg->msg_name) {
- DECLARE_SOCKADDR(struct sockaddr_ieee802154*,
- daddr, msg->msg_name);
-
- ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
- } else {
- dst_addr = ro->dst_addr;
- }
-
- cb->secen = ro->secen;
- cb->secen_override = ro->secen_override;
- cb->seclevel = ro->seclevel;
- cb->seclevel_override = ro->seclevel_override;
-
- err = dev_hard_header(skb, dev, ETH_P_IEEE802154, &dst_addr,
- ro->bound ? &ro->src_addr : NULL, size);
- if (err < 0)
- goto out_skb;
-
- err = memcpy_from_msg(skb_put(skb, size), msg, size);
- if (err < 0)
- goto out_skb;
-
- skb->dev = dev;
- skb->sk = sk;
- skb->protocol = htons(ETH_P_IEEE802154);
-
- dev_put(dev);
-
- err = dev_queue_xmit(skb);
- if (err > 0)
- err = net_xmit_errno(err);
-
- return err ?: size;
-
-out_skb:
- kfree_skb(skb);
-out_dev:
- dev_put(dev);
-out:
- return err;
-}
-
-static int dgram_recvmsg(struct kiocb *iocb, struct sock *sk,
- struct msghdr *msg, size_t len, int noblock,
- int flags, int *addr_len)
-{
- size_t copied = 0;
- int err = -EOPNOTSUPP;
- struct sk_buff *skb;
- DECLARE_SOCKADDR(struct sockaddr_ieee802154 *, saddr, msg->msg_name);
-
- skb = skb_recv_datagram(sk, flags, noblock, &err);
- if (!skb)
- goto out;
-
- copied = skb->len;
- if (len < copied) {
- msg->msg_flags |= MSG_TRUNC;
- copied = len;
- }
-
- /* FIXME: skip headers if necessary ?! */
- err = skb_copy_datagram_msg(skb, 0, msg, copied);
- if (err)
- goto done;
-
- sock_recv_ts_and_drops(msg, sk, skb);
-
- if (saddr) {
- saddr->family = AF_IEEE802154;
- ieee802154_addr_to_sa(&saddr->addr, &mac_cb(skb)->source);
- *addr_len = sizeof(*saddr);
- }
-
- if (flags & MSG_TRUNC)
- copied = skb->len;
-done:
- skb_free_datagram(sk, skb);
-out:
- if (err)
- return err;
- return copied;
-}
-
-static int dgram_rcv_skb(struct sock *sk, struct sk_buff *skb)
-{
- skb = skb_share_check(skb, GFP_ATOMIC);
- if (!skb)
- return NET_RX_DROP;
-
- if (sock_queue_rcv_skb(sk, skb) < 0) {
- kfree_skb(skb);
- return NET_RX_DROP;
- }
-
- return NET_RX_SUCCESS;
-}
-
-static inline bool
-ieee802154_match_sock(__le64 hw_addr, __le16 pan_id, __le16 short_addr,
- struct dgram_sock *ro)
-{
- if (!ro->bound)
- return true;
-
- if (ro->src_addr.mode == IEEE802154_ADDR_LONG &&
- hw_addr == ro->src_addr.extended_addr)
- return true;
-
- if (ro->src_addr.mode == IEEE802154_ADDR_SHORT &&
- pan_id == ro->src_addr.pan_id &&
- short_addr == ro->src_addr.short_addr)
- return true;
-
- return false;
-}
-
-int ieee802154_dgram_deliver(struct net_device *dev, struct sk_buff *skb)
-{
- struct sock *sk, *prev = NULL;
- int ret = NET_RX_SUCCESS;
- __le16 pan_id, short_addr;
- __le64 hw_addr;
-
- /* Data frame processing */
- BUG_ON(dev->type != ARPHRD_IEEE802154);
-
- pan_id = ieee802154_mlme_ops(dev)->get_pan_id(dev);
- short_addr = ieee802154_mlme_ops(dev)->get_short_addr(dev);
- hw_addr = ieee802154_devaddr_from_raw(dev->dev_addr);
-
- read_lock(&dgram_lock);
- sk_for_each(sk, &dgram_head) {
- if (ieee802154_match_sock(hw_addr, pan_id, short_addr,
- dgram_sk(sk))) {
- if (prev) {
- struct sk_buff *clone;
-
- clone = skb_clone(skb, GFP_ATOMIC);
- if (clone)
- dgram_rcv_skb(prev, clone);
- }
-
- prev = sk;
- }
- }
-
- if (prev) {
- dgram_rcv_skb(prev, skb);
- } else {
- kfree_skb(skb);
- ret = NET_RX_DROP;
- }
- read_unlock(&dgram_lock);
-
- return ret;
-}
-
-static int dgram_getsockopt(struct sock *sk, int level, int optname,
- char __user *optval, int __user *optlen)
-{
- struct dgram_sock *ro = dgram_sk(sk);
-
- int val, len;
-
- if (level != SOL_IEEE802154)
- return -EOPNOTSUPP;
-
- if (get_user(len, optlen))
- return -EFAULT;
-
- len = min_t(unsigned int, len, sizeof(int));
-
- switch (optname) {
- case WPAN_WANTACK:
- val = ro->want_ack;
- break;
- case WPAN_SECURITY:
- if (!ro->secen_override)
- val = WPAN_SECURITY_DEFAULT;
- else if (ro->secen)
- val = WPAN_SECURITY_ON;
- else
- val = WPAN_SECURITY_OFF;
- break;
- case WPAN_SECURITY_LEVEL:
- if (!ro->seclevel_override)
- val = WPAN_SECURITY_LEVEL_DEFAULT;
- else
- val = ro->seclevel;
- break;
- default:
- return -ENOPROTOOPT;
- }
-
- if (put_user(len, optlen))
- return -EFAULT;
- if (copy_to_user(optval, &val, len))
- return -EFAULT;
- return 0;
-}
-
-static int dgram_setsockopt(struct sock *sk, int level, int optname,
- char __user *optval, unsigned int optlen)
-{
- struct dgram_sock *ro = dgram_sk(sk);
- struct net *net = sock_net(sk);
- int val;
- int err = 0;
-
- if (optlen < sizeof(int))
- return -EINVAL;
-
- if (get_user(val, (int __user *)optval))
- return -EFAULT;
-
- lock_sock(sk);
-
- switch (optname) {
- case WPAN_WANTACK:
- ro->want_ack = !!val;
- break;
- case WPAN_SECURITY:
- if (!ns_capable(net->user_ns, CAP_NET_ADMIN) &&
- !ns_capable(net->user_ns, CAP_NET_RAW)) {
- err = -EPERM;
- break;
- }
-
- switch (val) {
- case WPAN_SECURITY_DEFAULT:
- ro->secen_override = 0;
- break;
- case WPAN_SECURITY_ON:
- ro->secen_override = 1;
- ro->secen = 1;
- break;
- case WPAN_SECURITY_OFF:
- ro->secen_override = 1;
- ro->secen = 0;
- break;
- default:
- err = -EINVAL;
- break;
- }
- break;
- case WPAN_SECURITY_LEVEL:
- if (!ns_capable(net->user_ns, CAP_NET_ADMIN) &&
- !ns_capable(net->user_ns, CAP_NET_RAW)) {
- err = -EPERM;
- break;
- }
-
- if (val < WPAN_SECURITY_LEVEL_DEFAULT ||
- val > IEEE802154_SCF_SECLEVEL_ENC_MIC128) {
- err = -EINVAL;
- } else if (val == WPAN_SECURITY_LEVEL_DEFAULT) {
- ro->seclevel_override = 0;
- } else {
- ro->seclevel_override = 1;
- ro->seclevel = val;
- }
- break;
- default:
- err = -ENOPROTOOPT;
- break;
- }
-
- release_sock(sk);
- return err;
-}
-
-struct proto ieee802154_dgram_prot = {
- .name = "IEEE-802.15.4-MAC",
- .owner = THIS_MODULE,
- .obj_size = sizeof(struct dgram_sock),
- .init = dgram_init,
- .close = dgram_close,
- .bind = dgram_bind,
- .sendmsg = dgram_sendmsg,
- .recvmsg = dgram_recvmsg,
- .hash = dgram_hash,
- .unhash = dgram_unhash,
- .connect = dgram_connect,
- .disconnect = dgram_disconnect,
- .ioctl = dgram_ioctl,
- .getsockopt = dgram_getsockopt,
- .setsockopt = dgram_setsockopt,
-};
-
struct nlmsghdr *nlh = nlmsg_hdr(msg);
void *hdr = genlmsg_data(nlmsg_data(nlh));
- if (genlmsg_end(msg, hdr) < 0)
- goto out;
+ genlmsg_end(msg, hdr);
return genlmsg_multicast(&nl802154_family, msg, 0, group, GFP_ATOMIC);
-out:
- nlmsg_free(msg);
- return -ENOBUFS;
}
struct sk_buff *ieee802154_nl_new_reply(struct genl_info *info,
struct nlmsghdr *nlh = nlmsg_hdr(msg);
void *hdr = genlmsg_data(nlmsg_data(nlh));
- if (genlmsg_end(msg, hdr) < 0)
- goto out;
+ genlmsg_end(msg, hdr);
return genlmsg_reply(msg, info);
-out:
- nlmsg_free(msg);
- return -ENOBUFS;
}
static const struct genl_ops ieee8021154_ops[] = {
}
wpan_phy_put(phy);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
wpan_phy_put(phy);
goto nla_put_failure;
mutex_unlock(&phy->pib_lock);
kfree(buf);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
mutex_unlock(&phy->pib_lock);
goto nla_put_failure;
finish:
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
if (nla_put_u8(msg, NL802154_ATTR_LBT_MODE, wpan_dev->lbt))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
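
The hunks above all track the same netlink API change: genlmsg_end() no longer returns a value that can signal failure (it used to return the message length and could never be negative, so the "< 0" branches were dead code). Finalizing the message is now done unconditionally, and the callers either return 0 or hand the skb directly to genlmsg_multicast()/genlmsg_reply().
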
+++ /dev/null
-/*
- * Raw IEEE 802.15.4 sockets
- *
- * Copyright 2007, 2008 Siemens AG
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2
- * as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * Written by:
- * Sergey Lapin <slapin@ossfans.org>
- * Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
- */
-
-#include <linux/net.h>
-#include <linux/module.h>
-#include <linux/if_arp.h>
-#include <linux/list.h>
-#include <linux/slab.h>
-#include <net/sock.h>
-#include <net/af_ieee802154.h>
-#include <net/ieee802154_netdev.h>
-
-#include "af802154.h"
-
-static HLIST_HEAD(raw_head);
-static DEFINE_RWLOCK(raw_lock);
-
-static void raw_hash(struct sock *sk)
-{
- write_lock_bh(&raw_lock);
- sk_add_node(sk, &raw_head);
- sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
- write_unlock_bh(&raw_lock);
-}
-
-static void raw_unhash(struct sock *sk)
-{
- write_lock_bh(&raw_lock);
- if (sk_del_node_init(sk))
- sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
- write_unlock_bh(&raw_lock);
-}
-
-static void raw_close(struct sock *sk, long timeout)
-{
- sk_common_release(sk);
-}
-
-static int raw_bind(struct sock *sk, struct sockaddr *_uaddr, int len)
-{
- struct ieee802154_addr addr;
- struct sockaddr_ieee802154 *uaddr = (struct sockaddr_ieee802154 *)_uaddr;
- int err = 0;
- struct net_device *dev = NULL;
-
- if (len < sizeof(*uaddr))
- return -EINVAL;
-
- uaddr = (struct sockaddr_ieee802154 *)_uaddr;
- if (uaddr->family != AF_IEEE802154)
- return -EINVAL;
-
- lock_sock(sk);
-
- ieee802154_addr_from_sa(&addr, &uaddr->addr);
- dev = ieee802154_get_dev(sock_net(sk), &addr);
- if (!dev) {
- err = -ENODEV;
- goto out;
- }
-
- if (dev->type != ARPHRD_IEEE802154) {
- err = -ENODEV;
- goto out_put;
- }
-
- sk->sk_bound_dev_if = dev->ifindex;
- sk_dst_reset(sk);
-
-out_put:
- dev_put(dev);
-out:
- release_sock(sk);
-
- return err;
-}
-
-static int raw_connect(struct sock *sk, struct sockaddr *uaddr,
- int addr_len)
-{
- return -ENOTSUPP;
-}
-
-static int raw_disconnect(struct sock *sk, int flags)
-{
- return 0;
-}
-
-static int raw_sendmsg(struct kiocb *iocb, struct sock *sk,
- struct msghdr *msg, size_t size)
-{
- struct net_device *dev;
- unsigned int mtu;
- struct sk_buff *skb;
- int hlen, tlen;
- int err;
-
- if (msg->msg_flags & MSG_OOB) {
- pr_debug("msg->msg_flags = 0x%x\n", msg->msg_flags);
- return -EOPNOTSUPP;
- }
-
- lock_sock(sk);
- if (!sk->sk_bound_dev_if)
- dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
- else
- dev = dev_get_by_index(sock_net(sk), sk->sk_bound_dev_if);
- release_sock(sk);
-
- if (!dev) {
- pr_debug("no dev\n");
- err = -ENXIO;
- goto out;
- }
-
- mtu = dev->mtu;
- pr_debug("name = %s, mtu = %u\n", dev->name, mtu);
-
- if (size > mtu) {
- pr_debug("size = %Zu, mtu = %u\n", size, mtu);
- err = -EINVAL;
- goto out_dev;
- }
-
- hlen = LL_RESERVED_SPACE(dev);
- tlen = dev->needed_tailroom;
- skb = sock_alloc_send_skb(sk, hlen + tlen + size,
- msg->msg_flags & MSG_DONTWAIT, &err);
- if (!skb)
- goto out_dev;
-
- skb_reserve(skb, hlen);
-
- skb_reset_mac_header(skb);
- skb_reset_network_header(skb);
-
- err = memcpy_from_msg(skb_put(skb, size), msg, size);
- if (err < 0)
- goto out_skb;
-
- skb->dev = dev;
- skb->sk = sk;
- skb->protocol = htons(ETH_P_IEEE802154);
-
- dev_put(dev);
-
- err = dev_queue_xmit(skb);
- if (err > 0)
- err = net_xmit_errno(err);
-
- return err ?: size;
-
-out_skb:
- kfree_skb(skb);
-out_dev:
- dev_put(dev);
-out:
- return err;
-}
-
-static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
- size_t len, int noblock, int flags, int *addr_len)
-{
- size_t copied = 0;
- int err = -EOPNOTSUPP;
- struct sk_buff *skb;
-
- skb = skb_recv_datagram(sk, flags, noblock, &err);
- if (!skb)
- goto out;
-
- copied = skb->len;
- if (len < copied) {
- msg->msg_flags |= MSG_TRUNC;
- copied = len;
- }
-
- err = skb_copy_datagram_msg(skb, 0, msg, copied);
- if (err)
- goto done;
-
- sock_recv_ts_and_drops(msg, sk, skb);
-
- if (flags & MSG_TRUNC)
- copied = skb->len;
-done:
- skb_free_datagram(sk, skb);
-out:
- if (err)
- return err;
- return copied;
-}
-
-static int raw_rcv_skb(struct sock *sk, struct sk_buff *skb)
-{
- skb = skb_share_check(skb, GFP_ATOMIC);
- if (!skb)
- return NET_RX_DROP;
-
- if (sock_queue_rcv_skb(sk, skb) < 0) {
- kfree_skb(skb);
- return NET_RX_DROP;
- }
-
- return NET_RX_SUCCESS;
-}
-
-void ieee802154_raw_deliver(struct net_device *dev, struct sk_buff *skb)
-{
- struct sock *sk;
-
- read_lock(&raw_lock);
- sk_for_each(sk, &raw_head) {
- bh_lock_sock(sk);
- if (!sk->sk_bound_dev_if ||
- sk->sk_bound_dev_if == dev->ifindex) {
- struct sk_buff *clone;
-
- clone = skb_clone(skb, GFP_ATOMIC);
- if (clone)
- raw_rcv_skb(sk, clone);
- }
- bh_unlock_sock(sk);
- }
- read_unlock(&raw_lock);
-}
-
-static int raw_getsockopt(struct sock *sk, int level, int optname,
- char __user *optval, int __user *optlen)
-{
- return -EOPNOTSUPP;
-}
-
-static int raw_setsockopt(struct sock *sk, int level, int optname,
- char __user *optval, unsigned int optlen)
-{
- return -EOPNOTSUPP;
-}
-
-struct proto ieee802154_raw_prot = {
- .name = "IEEE-802.15.4-RAW",
- .owner = THIS_MODULE,
- .obj_size = sizeof(struct sock),
- .close = raw_close,
- .bind = raw_bind,
- .sendmsg = raw_sendmsg,
- .recvmsg = raw_recvmsg,
- .hash = raw_hash,
- .unhash = raw_unhash,
- .connect = raw_connect,
- .disconnect = raw_disconnect,
- .getsockopt = raw_getsockopt,
- .setsockopt = raw_setsockopt,
-};
+++ /dev/null
-/* 6LoWPAN fragment reassembly
- *
- * Authors:
- * Alexander Aring <aar@pengutronix.de>
- *
- * Based on: net/ipv6/reassembly.c
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
-#define pr_fmt(fmt) "6LoWPAN: " fmt
-
-#include <linux/net.h>
-#include <linux/list.h>
-#include <linux/netdevice.h>
-#include <linux/random.h>
-#include <linux/jhash.h>
-#include <linux/skbuff.h>
-#include <linux/slab.h>
-#include <linux/export.h>
-
-#include <net/ieee802154_netdev.h>
-#include <net/6lowpan.h>
-#include <net/ipv6.h>
-#include <net/inet_frag.h>
-
-#include "reassembly.h"
-
-static const char lowpan_frags_cache_name[] = "lowpan-frags";
-
-struct lowpan_frag_info {
- u16 d_tag;
- u16 d_size;
- u8 d_offset;
-};
-
-static struct lowpan_frag_info *lowpan_cb(struct sk_buff *skb)
-{
- return (struct lowpan_frag_info *)skb->cb;
-}
-
-static struct inet_frags lowpan_frags;
-
-static int lowpan_frag_reasm(struct lowpan_frag_queue *fq,
- struct sk_buff *prev, struct net_device *dev);
-
-static unsigned int lowpan_hash_frag(u16 tag, u16 d_size,
- const struct ieee802154_addr *saddr,
- const struct ieee802154_addr *daddr)
-{
- net_get_random_once(&lowpan_frags.rnd, sizeof(lowpan_frags.rnd));
- return jhash_3words(ieee802154_addr_hash(saddr),
- ieee802154_addr_hash(daddr),
- (__force u32)(tag + (d_size << 16)),
- lowpan_frags.rnd);
-}
-
-static unsigned int lowpan_hashfn(const struct inet_frag_queue *q)
-{
- const struct lowpan_frag_queue *fq;
-
- fq = container_of(q, struct lowpan_frag_queue, q);
- return lowpan_hash_frag(fq->tag, fq->d_size, &fq->saddr, &fq->daddr);
-}
-
-static bool lowpan_frag_match(const struct inet_frag_queue *q, const void *a)
-{
- const struct lowpan_frag_queue *fq;
- const struct lowpan_create_arg *arg = a;
-
- fq = container_of(q, struct lowpan_frag_queue, q);
- return fq->tag == arg->tag && fq->d_size == arg->d_size &&
- ieee802154_addr_equal(&fq->saddr, arg->src) &&
- ieee802154_addr_equal(&fq->daddr, arg->dst);
-}
-
-static void lowpan_frag_init(struct inet_frag_queue *q, const void *a)
-{
- const struct lowpan_create_arg *arg = a;
- struct lowpan_frag_queue *fq;
-
- fq = container_of(q, struct lowpan_frag_queue, q);
-
- fq->tag = arg->tag;
- fq->d_size = arg->d_size;
- fq->saddr = *arg->src;
- fq->daddr = *arg->dst;
-}
-
-static void lowpan_frag_expire(unsigned long data)
-{
- struct frag_queue *fq;
- struct net *net;
-
- fq = container_of((struct inet_frag_queue *)data, struct frag_queue, q);
- net = container_of(fq->q.net, struct net, ieee802154_lowpan.frags);
-
- spin_lock(&fq->q.lock);
-
- if (fq->q.flags & INET_FRAG_COMPLETE)
- goto out;
-
- inet_frag_kill(&fq->q, &lowpan_frags);
-out:
- spin_unlock(&fq->q.lock);
- inet_frag_put(&fq->q, &lowpan_frags);
-}
-
-static inline struct lowpan_frag_queue *
-fq_find(struct net *net, const struct lowpan_frag_info *frag_info,
- const struct ieee802154_addr *src,
- const struct ieee802154_addr *dst)
-{
- struct inet_frag_queue *q;
- struct lowpan_create_arg arg;
- unsigned int hash;
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
-
- arg.tag = frag_info->d_tag;
- arg.d_size = frag_info->d_size;
- arg.src = src;
- arg.dst = dst;
-
- hash = lowpan_hash_frag(frag_info->d_tag, frag_info->d_size, src, dst);
-
- q = inet_frag_find(&ieee802154_lowpan->frags,
- &lowpan_frags, &arg, hash);
- if (IS_ERR_OR_NULL(q)) {
- inet_frag_maybe_warn_overflow(q, pr_fmt());
- return NULL;
- }
- return container_of(q, struct lowpan_frag_queue, q);
-}
-
-static int lowpan_frag_queue(struct lowpan_frag_queue *fq,
- struct sk_buff *skb, const u8 frag_type)
-{
- struct sk_buff *prev, *next;
- struct net_device *dev;
- int end, offset;
-
- if (fq->q.flags & INET_FRAG_COMPLETE)
- goto err;
-
- offset = lowpan_cb(skb)->d_offset << 3;
- end = lowpan_cb(skb)->d_size;
-
- /* Is this the final fragment? */
- if (offset + skb->len == end) {
- /* If we already have some bits beyond end
- * or have different end, the segment is corrupted.
- */
- if (end < fq->q.len ||
- ((fq->q.flags & INET_FRAG_LAST_IN) && end != fq->q.len))
- goto err;
- fq->q.flags |= INET_FRAG_LAST_IN;
- fq->q.len = end;
- } else {
- if (end > fq->q.len) {
- /* Some bits beyond end -> corruption. */
- if (fq->q.flags & INET_FRAG_LAST_IN)
- goto err;
- fq->q.len = end;
- }
- }
-
- /* Find out which fragments are in front and at the back of us
- * in the chain of fragments so far. We must know where to put
- * this fragment, right?
- */
- prev = fq->q.fragments_tail;
- if (!prev || lowpan_cb(prev)->d_offset < lowpan_cb(skb)->d_offset) {
- next = NULL;
- goto found;
- }
- prev = NULL;
- for (next = fq->q.fragments; next != NULL; next = next->next) {
- if (lowpan_cb(next)->d_offset >= lowpan_cb(skb)->d_offset)
- break; /* bingo! */
- prev = next;
- }
-
-found:
- /* Insert this fragment in the chain of fragments. */
- skb->next = next;
- if (!next)
- fq->q.fragments_tail = skb;
- if (prev)
- prev->next = skb;
- else
- fq->q.fragments = skb;
-
- dev = skb->dev;
- if (dev)
- skb->dev = NULL;
-
- fq->q.stamp = skb->tstamp;
- if (frag_type == LOWPAN_DISPATCH_FRAG1) {
- /* Calculate uncomp. 6lowpan header to estimate full size */
- fq->q.meat += lowpan_uncompress_size(skb, NULL);
- fq->q.flags |= INET_FRAG_FIRST_IN;
- } else {
- fq->q.meat += skb->len;
- }
- add_frag_mem_limit(&fq->q, skb->truesize);
-
- if (fq->q.flags == (INET_FRAG_FIRST_IN | INET_FRAG_LAST_IN) &&
- fq->q.meat == fq->q.len) {
- int res;
- unsigned long orefdst = skb->_skb_refdst;
-
- skb->_skb_refdst = 0UL;
- res = lowpan_frag_reasm(fq, prev, dev);
- skb->_skb_refdst = orefdst;
- return res;
- }
-
- return -1;
-err:
- kfree_skb(skb);
- return -1;
-}
-
-/* Check if this packet is complete.
- * Returns -1 on failure for any reason, and 1 once the frame
- * has been fully reassembled.
- *
- * It is called with locked fq, and caller must check that
- * queue is eligible for reassembly i.e. it is not COMPLETE,
- * the last and the first fragments arrived and all the bits are here.
- */
-static int lowpan_frag_reasm(struct lowpan_frag_queue *fq, struct sk_buff *prev,
- struct net_device *dev)
-{
- struct sk_buff *fp, *head = fq->q.fragments;
- int sum_truesize;
-
- inet_frag_kill(&fq->q, &lowpan_frags);
-
- /* Make the one we just received the head. */
- if (prev) {
- head = prev->next;
- fp = skb_clone(head, GFP_ATOMIC);
-
- if (!fp)
- goto out_oom;
-
- fp->next = head->next;
- if (!fp->next)
- fq->q.fragments_tail = fp;
- prev->next = fp;
-
- skb_morph(head, fq->q.fragments);
- head->next = fq->q.fragments->next;
-
- consume_skb(fq->q.fragments);
- fq->q.fragments = head;
- }
-
- /* Head of list must not be cloned. */
- if (skb_unclone(head, GFP_ATOMIC))
- goto out_oom;
-
- /* If the first fragment is fragmented itself, we split
- * it to two chunks: the first with data and paged part
- * and the second, holding only fragments.
- */
- if (skb_has_frag_list(head)) {
- struct sk_buff *clone;
- int i, plen = 0;
-
- clone = alloc_skb(0, GFP_ATOMIC);
- if (!clone)
- goto out_oom;
- clone->next = head->next;
- head->next = clone;
- skb_shinfo(clone)->frag_list = skb_shinfo(head)->frag_list;
- skb_frag_list_init(head);
- for (i = 0; i < skb_shinfo(head)->nr_frags; i++)
- plen += skb_frag_size(&skb_shinfo(head)->frags[i]);
- clone->len = head->data_len - plen;
- clone->data_len = clone->len;
- head->data_len -= clone->len;
- head->len -= clone->len;
- add_frag_mem_limit(&fq->q, clone->truesize);
- }
-
- WARN_ON(head == NULL);
-
- sum_truesize = head->truesize;
- for (fp = head->next; fp;) {
- bool headstolen;
- int delta;
- struct sk_buff *next = fp->next;
-
- sum_truesize += fp->truesize;
- if (skb_try_coalesce(head, fp, &headstolen, &delta)) {
- kfree_skb_partial(fp, headstolen);
- } else {
- if (!skb_shinfo(head)->frag_list)
- skb_shinfo(head)->frag_list = fp;
- head->data_len += fp->len;
- head->len += fp->len;
- head->truesize += fp->truesize;
- }
- fp = next;
- }
- sub_frag_mem_limit(&fq->q, sum_truesize);
-
- head->next = NULL;
- head->dev = dev;
- head->tstamp = fq->q.stamp;
-
- fq->q.fragments = NULL;
- fq->q.fragments_tail = NULL;
-
- return 1;
-out_oom:
- net_dbg_ratelimited("lowpan_frag_reasm: no memory for reassembly\n");
- return -1;
-}
-
-static int lowpan_get_frag_info(struct sk_buff *skb, const u8 frag_type,
- struct lowpan_frag_info *frag_info)
-{
- bool fail;
- u8 pattern = 0, low = 0;
- __be16 d_tag = 0;
-
- fail = lowpan_fetch_skb(skb, &pattern, 1);
- fail |= lowpan_fetch_skb(skb, &low, 1);
- frag_info->d_size = (pattern & 7) << 8 | low;
- fail |= lowpan_fetch_skb(skb, &d_tag, 2);
- frag_info->d_tag = ntohs(d_tag);
-
- if (frag_type == LOWPAN_DISPATCH_FRAGN) {
- fail |= lowpan_fetch_skb(skb, &frag_info->d_offset, 1);
- } else {
- skb_reset_network_header(skb);
- frag_info->d_offset = 0;
- }
-
- if (unlikely(fail))
- return -EIO;
-
- return 0;
-}
-
-int lowpan_frag_rcv(struct sk_buff *skb, const u8 frag_type)
-{
- struct lowpan_frag_queue *fq;
- struct net *net = dev_net(skb->dev);
- struct lowpan_frag_info *frag_info = lowpan_cb(skb);
- struct ieee802154_addr source, dest;
- int err;
-
- source = mac_cb(skb)->source;
- dest = mac_cb(skb)->dest;
-
- err = lowpan_get_frag_info(skb, frag_type, frag_info);
- if (err < 0)
- goto err;
-
- if (frag_info->d_size > IPV6_MIN_MTU) {
- net_warn_ratelimited("lowpan_frag_rcv: datagram size exceeds MTU\n");
- goto err;
- }
-
- fq = fq_find(net, frag_info, &source, &dest);
- if (fq != NULL) {
- int ret;
-
- spin_lock(&fq->q.lock);
- ret = lowpan_frag_queue(fq, skb, frag_type);
- spin_unlock(&fq->q.lock);
-
- inet_frag_put(&fq->q, &lowpan_frags);
- return ret;
- }
-
-err:
- kfree_skb(skb);
- return -1;
-}
-EXPORT_SYMBOL(lowpan_frag_rcv);
-
-#ifdef CONFIG_SYSCTL
-static int zero;
-
-static struct ctl_table lowpan_frags_ns_ctl_table[] = {
- {
- .procname = "6lowpanfrag_high_thresh",
- .data = &init_net.ieee802154_lowpan.frags.high_thresh,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = &init_net.ieee802154_lowpan.frags.low_thresh
- },
- {
- .procname = "6lowpanfrag_low_thresh",
- .data = &init_net.ieee802154_lowpan.frags.low_thresh,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_minmax,
- .extra1 = &zero,
- .extra2 = &init_net.ieee802154_lowpan.frags.high_thresh
- },
- {
- .procname = "6lowpanfrag_time",
- .data = &init_net.ieee802154_lowpan.frags.timeout,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_jiffies,
- },
- { }
-};
-
-/* secret interval has been deprecated */
-static int lowpan_frags_secret_interval_unused;
-static struct ctl_table lowpan_frags_ctl_table[] = {
- {
- .procname = "6lowpanfrag_secret_interval",
- .data = &lowpan_frags_secret_interval_unused,
- .maxlen = sizeof(int),
- .mode = 0644,
- .proc_handler = proc_dointvec_jiffies,
- },
- { }
-};
-
-static int __net_init lowpan_frags_ns_sysctl_register(struct net *net)
-{
- struct ctl_table *table;
- struct ctl_table_header *hdr;
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
-
- table = lowpan_frags_ns_ctl_table;
- if (!net_eq(net, &init_net)) {
- table = kmemdup(table, sizeof(lowpan_frags_ns_ctl_table),
- GFP_KERNEL);
- if (table == NULL)
- goto err_alloc;
-
- table[0].data = &ieee802154_lowpan->frags.high_thresh;
- table[0].extra1 = &ieee802154_lowpan->frags.low_thresh;
- table[0].extra2 = &init_net.ieee802154_lowpan.frags.high_thresh;
- table[1].data = &ieee802154_lowpan->frags.low_thresh;
- table[1].extra2 = &ieee802154_lowpan->frags.high_thresh;
- table[2].data = &ieee802154_lowpan->frags.timeout;
-
- /* Don't export sysctls to unprivileged users */
- if (net->user_ns != &init_user_ns)
- table[0].procname = NULL;
- }
-
- hdr = register_net_sysctl(net, "net/ieee802154/6lowpan", table);
- if (hdr == NULL)
- goto err_reg;
-
- ieee802154_lowpan->sysctl.frags_hdr = hdr;
- return 0;
-
-err_reg:
- if (!net_eq(net, &init_net))
- kfree(table);
-err_alloc:
- return -ENOMEM;
-}
-
-static void __net_exit lowpan_frags_ns_sysctl_unregister(struct net *net)
-{
- struct ctl_table *table;
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
-
- table = ieee802154_lowpan->sysctl.frags_hdr->ctl_table_arg;
- unregister_net_sysctl_table(ieee802154_lowpan->sysctl.frags_hdr);
- if (!net_eq(net, &init_net))
- kfree(table);
-}
-
-static struct ctl_table_header *lowpan_ctl_header;
-
-static int __init lowpan_frags_sysctl_register(void)
-{
- lowpan_ctl_header = register_net_sysctl(&init_net,
- "net/ieee802154/6lowpan",
- lowpan_frags_ctl_table);
- return lowpan_ctl_header == NULL ? -ENOMEM : 0;
-}
-
-static void lowpan_frags_sysctl_unregister(void)
-{
- unregister_net_sysctl_table(lowpan_ctl_header);
-}
-#else
-static inline int lowpan_frags_ns_sysctl_register(struct net *net)
-{
- return 0;
-}
-
-static inline void lowpan_frags_ns_sysctl_unregister(struct net *net)
-{
-}
-
-static inline int __init lowpan_frags_sysctl_register(void)
-{
- return 0;
-}
-
-static inline void lowpan_frags_sysctl_unregister(void)
-{
-}
-#endif
-
-static int __net_init lowpan_frags_init_net(struct net *net)
-{
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
-
- ieee802154_lowpan->frags.high_thresh = IPV6_FRAG_HIGH_THRESH;
- ieee802154_lowpan->frags.low_thresh = IPV6_FRAG_LOW_THRESH;
- ieee802154_lowpan->frags.timeout = IPV6_FRAG_TIMEOUT;
-
- inet_frags_init_net(&ieee802154_lowpan->frags);
-
- return lowpan_frags_ns_sysctl_register(net);
-}
-
-static void __net_exit lowpan_frags_exit_net(struct net *net)
-{
- struct netns_ieee802154_lowpan *ieee802154_lowpan =
- net_ieee802154_lowpan(net);
-
- lowpan_frags_ns_sysctl_unregister(net);
- inet_frags_exit_net(&ieee802154_lowpan->frags, &lowpan_frags);
-}
-
-static struct pernet_operations lowpan_frags_ops = {
- .init = lowpan_frags_init_net,
- .exit = lowpan_frags_exit_net,
-};
-
-int __init lowpan_net_frag_init(void)
-{
- int ret;
-
- ret = lowpan_frags_sysctl_register();
- if (ret)
- return ret;
-
- ret = register_pernet_subsys(&lowpan_frags_ops);
- if (ret)
- goto err_pernet;
-
- lowpan_frags.hashfn = lowpan_hashfn;
- lowpan_frags.constructor = lowpan_frag_init;
- lowpan_frags.destructor = NULL;
- lowpan_frags.skb_free = NULL;
- lowpan_frags.qsize = sizeof(struct frag_queue);
- lowpan_frags.match = lowpan_frag_match;
- lowpan_frags.frag_expire = lowpan_frag_expire;
- lowpan_frags.frags_cache_name = lowpan_frags_cache_name;
- ret = inet_frags_init(&lowpan_frags);
- if (ret)
- goto err_pernet;
-
- return ret;
-err_pernet:
- lowpan_frags_sysctl_unregister();
- return ret;
-}
-
-void lowpan_net_frag_exit(void)
-{
- inet_frags_fini(&lowpan_frags);
- lowpan_frags_sysctl_unregister();
- unregister_pernet_subsys(&lowpan_frags_ops);
-}
+++ /dev/null
-#ifndef __IEEE802154_6LOWPAN_REASSEMBLY_H__
-#define __IEEE802154_6LOWPAN_REASSEMBLY_H__
-
-#include <net/inet_frag.h>
-
-struct lowpan_create_arg {
- u16 tag;
- u16 d_size;
- const struct ieee802154_addr *src;
- const struct ieee802154_addr *dst;
-};
-
-/* Equivalent of ipv4 struct ip
- */
-struct lowpan_frag_queue {
- struct inet_frag_queue q;
-
- u16 tag;
- u16 d_size;
- struct ieee802154_addr saddr;
- struct ieee802154_addr daddr;
-};
-
-static inline u32 ieee802154_addr_hash(const struct ieee802154_addr *a)
-{
- switch (a->mode) {
- case IEEE802154_ADDR_LONG:
- return (((__force u64)a->extended_addr) >> 32) ^
- (((__force u64)a->extended_addr) & 0xffffffff);
- case IEEE802154_ADDR_SHORT:
- return (__force u32)(a->short_addr);
- default:
- return 0;
- }
-}
-
-int lowpan_frag_rcv(struct sk_buff *skb, const u8 frag_type);
-void lowpan_net_frag_exit(void);
-int lowpan_net_frag_init(void);
-
-#endif /* __IEEE802154_6LOWPAN_REASSEMBLY_H__ */
--- /dev/null
+/*
+ * IEEE 802.15.4 socket interface
+ *
+ * Copyright 2007, 2008 Siemens AG
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Written by:
+ * Sergey Lapin <slapin@ossfans.org>
+ * Maxim Gorbachyov <maxim.gorbachev@siemens.com>
+ */
+
+#include <linux/net.h>
+#include <linux/capability.h>
+#include <linux/module.h>
+#include <linux/if_arp.h>
+#include <linux/if.h>
+#include <linux/termios.h> /* For TIOCOUTQ/TIOCINQ */
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <net/datalink.h>
+#include <net/psnap.h>
+#include <net/sock.h>
+#include <net/tcp_states.h>
+#include <net/route.h>
+
+#include <net/af_ieee802154.h>
+#include <net/ieee802154_netdev.h>
+
+/* Utility function for families */
+static struct net_device*
+ieee802154_get_dev(struct net *net, const struct ieee802154_addr *addr)
+{
+ struct net_device *dev = NULL;
+ struct net_device *tmp;
+ __le16 pan_id, short_addr;
+ u8 hwaddr[IEEE802154_ADDR_LEN];
+
+ switch (addr->mode) {
+ case IEEE802154_ADDR_LONG:
+ ieee802154_devaddr_to_raw(hwaddr, addr->extended_addr);
+ rcu_read_lock();
+ dev = dev_getbyhwaddr_rcu(net, ARPHRD_IEEE802154, hwaddr);
+ if (dev)
+ dev_hold(dev);
+ rcu_read_unlock();
+ break;
+ case IEEE802154_ADDR_SHORT:
+ if (addr->pan_id == cpu_to_le16(IEEE802154_PANID_BROADCAST) ||
+ addr->short_addr == cpu_to_le16(IEEE802154_ADDR_UNDEF) ||
+ addr->short_addr == cpu_to_le16(IEEE802154_ADDR_BROADCAST))
+ break;
+
+ rtnl_lock();
+
+ for_each_netdev(net, tmp) {
+ if (tmp->type != ARPHRD_IEEE802154)
+ continue;
+
+ pan_id = ieee802154_mlme_ops(tmp)->get_pan_id(tmp);
+ short_addr =
+ ieee802154_mlme_ops(tmp)->get_short_addr(tmp);
+
+ if (pan_id == addr->pan_id &&
+ short_addr == addr->short_addr) {
+ dev = tmp;
+ dev_hold(dev);
+ break;
+ }
+ }
+
+ rtnl_unlock();
+ break;
+ default:
+ pr_warn("Unsupported ieee802154 address type: %d\n",
+ addr->mode);
+ break;
+ }
+
+ return dev;
+}
+
+static int ieee802154_sock_release(struct socket *sock)
+{
+ struct sock *sk = sock->sk;
+
+ if (sk) {
+ sock->sk = NULL;
+ sk->sk_prot->close(sk, 0);
+ }
+ return 0;
+}
+
+static int ieee802154_sock_sendmsg(struct kiocb *iocb, struct socket *sock,
+ struct msghdr *msg, size_t len)
+{
+ struct sock *sk = sock->sk;
+
+ return sk->sk_prot->sendmsg(iocb, sk, msg, len);
+}
+
+static int ieee802154_sock_bind(struct socket *sock, struct sockaddr *uaddr,
+ int addr_len)
+{
+ struct sock *sk = sock->sk;
+
+ if (sk->sk_prot->bind)
+ return sk->sk_prot->bind(sk, uaddr, addr_len);
+
+ return sock_no_bind(sock, uaddr, addr_len);
+}
+
+static int ieee802154_sock_connect(struct socket *sock, struct sockaddr *uaddr,
+ int addr_len, int flags)
+{
+ struct sock *sk = sock->sk;
+
+ if (addr_len < sizeof(uaddr->sa_family))
+ return -EINVAL;
+
+ if (uaddr->sa_family == AF_UNSPEC)
+ return sk->sk_prot->disconnect(sk, flags);
+
+ return sk->sk_prot->connect(sk, uaddr, addr_len);
+}
+
+static int ieee802154_dev_ioctl(struct sock *sk, struct ifreq __user *arg,
+ unsigned int cmd)
+{
+ struct ifreq ifr;
+ int ret = -ENOIOCTLCMD;
+ struct net_device *dev;
+
+ if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))
+ return -EFAULT;
+
+ ifr.ifr_name[IFNAMSIZ-1] = 0;
+
+ dev_load(sock_net(sk), ifr.ifr_name);
+ dev = dev_get_by_name(sock_net(sk), ifr.ifr_name);
+
+ if (!dev)
+ return -ENODEV;
+
+ if (dev->type == ARPHRD_IEEE802154 && dev->netdev_ops->ndo_do_ioctl)
+ ret = dev->netdev_ops->ndo_do_ioctl(dev, &ifr, cmd);
+
+ if (!ret && copy_to_user(arg, &ifr, sizeof(struct ifreq)))
+ ret = -EFAULT;
+ dev_put(dev);
+
+ return ret;
+}
+
+static int ieee802154_sock_ioctl(struct socket *sock, unsigned int cmd,
+ unsigned long arg)
+{
+ struct sock *sk = sock->sk;
+
+ switch (cmd) {
+ case SIOCGSTAMP:
+ return sock_get_timestamp(sk, (struct timeval __user *)arg);
+ case SIOCGSTAMPNS:
+ return sock_get_timestampns(sk, (struct timespec __user *)arg);
+ case SIOCGIFADDR:
+ case SIOCSIFADDR:
+ return ieee802154_dev_ioctl(sk, (struct ifreq __user *)arg,
+ cmd);
+ default:
+ if (!sk->sk_prot->ioctl)
+ return -ENOIOCTLCMD;
+ return sk->sk_prot->ioctl(sk, cmd, arg);
+ }
+}
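
For context, the SIOCGSTAMP branch above hands back the receive timestamp that sock_recv_ts_and_drops() records for the last dequeued frame. A minimal userspace sketch, assuming <linux/sockios.h> for the ioctl number (the helper name is hypothetical):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/time.h>
    #include <linux/sockios.h>	/* SIOCGSTAMP */

    /* Print the kernel receive timestamp of the last datagram read on sk. */
    static int print_rx_stamp(int sk)
    {
            struct timeval tv;

            if (ioctl(sk, SIOCGSTAMP, &tv) < 0) {
                    perror("SIOCGSTAMP");
                    return -1;
            }
            printf("received at %ld.%06ld\n",
                   (long)tv.tv_sec, (long)tv.tv_usec);
            return 0;
    }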
+
+/* RAW sockets (802.15.4 frames crafted in userspace) */
+static HLIST_HEAD(raw_head);
+static DEFINE_RWLOCK(raw_lock);
+
+static void raw_hash(struct sock *sk)
+{
+ write_lock_bh(&raw_lock);
+ sk_add_node(sk, &raw_head);
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ write_unlock_bh(&raw_lock);
+}
+
+static void raw_unhash(struct sock *sk)
+{
+ write_lock_bh(&raw_lock);
+ if (sk_del_node_init(sk))
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+ write_unlock_bh(&raw_lock);
+}
+
+static void raw_close(struct sock *sk, long timeout)
+{
+ sk_common_release(sk);
+}
+
+static int raw_bind(struct sock *sk, struct sockaddr *_uaddr, int len)
+{
+ struct ieee802154_addr addr;
+ struct sockaddr_ieee802154 *uaddr = (struct sockaddr_ieee802154 *)_uaddr;
+ int err = 0;
+ struct net_device *dev = NULL;
+
+ if (len < sizeof(*uaddr))
+ return -EINVAL;
+
+ if (uaddr->family != AF_IEEE802154)
+ return -EINVAL;
+
+ lock_sock(sk);
+
+ ieee802154_addr_from_sa(&addr, &uaddr->addr);
+ dev = ieee802154_get_dev(sock_net(sk), &addr);
+ if (!dev) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ if (dev->type != ARPHRD_IEEE802154) {
+ err = -ENODEV;
+ goto out_put;
+ }
+
+ sk->sk_bound_dev_if = dev->ifindex;
+ sk_dst_reset(sk);
+
+out_put:
+ dev_put(dev);
+out:
+ release_sock(sk);
+
+ return err;
+}
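
raw_bind() above resolves the sockaddr to a net_device via ieee802154_get_dev() and pins the socket to its ifindex. A hedged userspace sketch; since af_ieee802154.h is not an exported UAPI header here, the struct layout (family, addr.addr_type, addr.hwaddr) is assumed from a local copy of the kernel header:

    #include <string.h>
    #include <sys/socket.h>
    #include "af_ieee802154.h"	/* local copy of the kernel header */

    /* Bind a PF_IEEE802154 SOCK_RAW socket to the device owning hwaddr. */
    static int bind_raw(int sk, const unsigned char hwaddr[IEEE802154_ADDR_LEN])
    {
            struct sockaddr_ieee802154 sa;

            memset(&sa, 0, sizeof(sa));
            sa.family = AF_IEEE802154;
            sa.addr.addr_type = IEEE802154_ADDR_LONG;
            memcpy(sa.addr.hwaddr, hwaddr, IEEE802154_ADDR_LEN);

            return bind(sk, (struct sockaddr *)&sa, sizeof(sa));
    }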
+
+static int raw_connect(struct sock *sk, struct sockaddr *uaddr,
+ int addr_len)
+{
+ return -EOPNOTSUPP;
+}
+
+static int raw_disconnect(struct sock *sk, int flags)
+{
+ return 0;
+}
+
+static int raw_sendmsg(struct kiocb *iocb, struct sock *sk,
+ struct msghdr *msg, size_t size)
+{
+ struct net_device *dev;
+ unsigned int mtu;
+ struct sk_buff *skb;
+ int hlen, tlen;
+ int err;
+
+ if (msg->msg_flags & MSG_OOB) {
+ pr_debug("msg->msg_flags = 0x%x\n", msg->msg_flags);
+ return -EOPNOTSUPP;
+ }
+
+ lock_sock(sk);
+ if (!sk->sk_bound_dev_if)
+ dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
+ else
+ dev = dev_get_by_index(sock_net(sk), sk->sk_bound_dev_if);
+ release_sock(sk);
+
+ if (!dev) {
+ pr_debug("no dev\n");
+ err = -ENXIO;
+ goto out;
+ }
+
+ mtu = dev->mtu;
+ pr_debug("name = %s, mtu = %u\n", dev->name, mtu);
+
+ if (size > mtu) {
+ pr_debug("size = %Zu, mtu = %u\n", size, mtu);
+ err = -EINVAL;
+ goto out_dev;
+ }
+
+ hlen = LL_RESERVED_SPACE(dev);
+ tlen = dev->needed_tailroom;
+ skb = sock_alloc_send_skb(sk, hlen + tlen + size,
+ msg->msg_flags & MSG_DONTWAIT, &err);
+ if (!skb)
+ goto out_dev;
+
+ skb_reserve(skb, hlen);
+
+ skb_reset_mac_header(skb);
+ skb_reset_network_header(skb);
+
+ err = memcpy_from_msg(skb_put(skb, size), msg, size);
+ if (err < 0)
+ goto out_skb;
+
+ skb->dev = dev;
+ skb->sk = sk;
+ skb->protocol = htons(ETH_P_IEEE802154);
+
+ dev_put(dev);
+
+ err = dev_queue_xmit(skb);
+ if (err > 0)
+ err = net_xmit_errno(err);
+
+ return err ?: size;
+
+out_skb:
+ kfree_skb(skb);
+out_dev:
+ dev_put(dev);
+out:
+ return err;
+}
+
+static int raw_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
+ size_t len, int noblock, int flags, int *addr_len)
+{
+ size_t copied = 0;
+ int err = -EOPNOTSUPP;
+ struct sk_buff *skb;
+
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+ goto out;
+
+ copied = skb->len;
+ if (len < copied) {
+ msg->msg_flags |= MSG_TRUNC;
+ copied = len;
+ }
+
+ err = skb_copy_datagram_msg(skb, 0, msg, copied);
+ if (err)
+ goto done;
+
+ sock_recv_ts_and_drops(msg, sk, skb);
+
+ if (flags & MSG_TRUNC)
+ copied = skb->len;
+done:
+ skb_free_datagram(sk, skb);
+out:
+ if (err)
+ return err;
+ return copied;
+}
+
+static int raw_rcv_skb(struct sock *sk, struct sk_buff *skb)
+{
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ return NET_RX_DROP;
+
+ if (sock_queue_rcv_skb(sk, skb) < 0) {
+ kfree_skb(skb);
+ return NET_RX_DROP;
+ }
+
+ return NET_RX_SUCCESS;
+}
+
+static void ieee802154_raw_deliver(struct net_device *dev, struct sk_buff *skb)
+{
+ struct sock *sk;
+
+ read_lock(&raw_lock);
+ sk_for_each(sk, &raw_head) {
+ bh_lock_sock(sk);
+ if (!sk->sk_bound_dev_if ||
+ sk->sk_bound_dev_if == dev->ifindex) {
+ struct sk_buff *clone;
+
+ clone = skb_clone(skb, GFP_ATOMIC);
+ if (clone)
+ raw_rcv_skb(sk, clone);
+ }
+ bh_unlock_sock(sk);
+ }
+ read_unlock(&raw_lock);
+}
+
+static int raw_getsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, int __user *optlen)
+{
+ return -EOPNOTSUPP;
+}
+
+static int raw_setsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, unsigned int optlen)
+{
+ return -EOPNOTSUPP;
+}
+
+static struct proto ieee802154_raw_prot = {
+ .name = "IEEE-802.15.4-RAW",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct sock),
+ .close = raw_close,
+ .bind = raw_bind,
+ .sendmsg = raw_sendmsg,
+ .recvmsg = raw_recvmsg,
+ .hash = raw_hash,
+ .unhash = raw_unhash,
+ .connect = raw_connect,
+ .disconnect = raw_disconnect,
+ .getsockopt = raw_getsockopt,
+ .setsockopt = raw_setsockopt,
+};
+
+static const struct proto_ops ieee802154_raw_ops = {
+ .family = PF_IEEE802154,
+ .owner = THIS_MODULE,
+ .release = ieee802154_sock_release,
+ .bind = ieee802154_sock_bind,
+ .connect = ieee802154_sock_connect,
+ .socketpair = sock_no_socketpair,
+ .accept = sock_no_accept,
+ .getname = sock_no_getname,
+ .poll = datagram_poll,
+ .ioctl = ieee802154_sock_ioctl,
+ .listen = sock_no_listen,
+ .shutdown = sock_no_shutdown,
+ .setsockopt = sock_common_setsockopt,
+ .getsockopt = sock_common_getsockopt,
+ .sendmsg = ieee802154_sock_sendmsg,
+ .recvmsg = sock_common_recvmsg,
+ .mmap = sock_no_mmap,
+ .sendpage = sock_no_sendpage,
+#ifdef CONFIG_COMPAT
+ .compat_setsockopt = compat_sock_common_setsockopt,
+ .compat_getsockopt = compat_sock_common_getsockopt,
+#endif
+};
+
+/* DGRAM sockets (802.15.4 data frames) */
+static HLIST_HEAD(dgram_head);
+static DEFINE_RWLOCK(dgram_lock);
+
+struct dgram_sock {
+ struct sock sk;
+
+ struct ieee802154_addr src_addr;
+ struct ieee802154_addr dst_addr;
+
+ unsigned int bound:1;
+ unsigned int connected:1;
+ unsigned int want_ack:1;
+ unsigned int secen:1;
+ unsigned int secen_override:1;
+ unsigned int seclevel:3;
+ unsigned int seclevel_override:1;
+};
+
+static inline struct dgram_sock *dgram_sk(const struct sock *sk)
+{
+ return container_of(sk, struct dgram_sock, sk);
+}
+
+static void dgram_hash(struct sock *sk)
+{
+ write_lock_bh(&dgram_lock);
+ sk_add_node(sk, &dgram_head);
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
+ write_unlock_bh(&dgram_lock);
+}
+
+static void dgram_unhash(struct sock *sk)
+{
+ write_lock_bh(&dgram_lock);
+ if (sk_del_node_init(sk))
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+ write_unlock_bh(&dgram_lock);
+}
+
+static int dgram_init(struct sock *sk)
+{
+ struct dgram_sock *ro = dgram_sk(sk);
+
+ ro->want_ack = 1;
+ return 0;
+}
+
+static void dgram_close(struct sock *sk, long timeout)
+{
+ sk_common_release(sk);
+}
+
+static int dgram_bind(struct sock *sk, struct sockaddr *uaddr, int len)
+{
+ struct sockaddr_ieee802154 *addr = (struct sockaddr_ieee802154 *)uaddr;
+ struct ieee802154_addr haddr;
+ struct dgram_sock *ro = dgram_sk(sk);
+ int err = -EINVAL;
+ struct net_device *dev;
+
+ lock_sock(sk);
+
+ ro->bound = 0;
+
+ if (len < sizeof(*addr))
+ goto out;
+
+ if (addr->family != AF_IEEE802154)
+ goto out;
+
+ ieee802154_addr_from_sa(&haddr, &addr->addr);
+ dev = ieee802154_get_dev(sock_net(sk), &haddr);
+ if (!dev) {
+ err = -ENODEV;
+ goto out;
+ }
+
+ if (dev->type != ARPHRD_IEEE802154) {
+ err = -ENODEV;
+ goto out_put;
+ }
+
+ ro->src_addr = haddr;
+
+ ro->bound = 1;
+ err = 0;
+out_put:
+ dev_put(dev);
+out:
+ release_sock(sk);
+
+ return err;
+}
+
+static int dgram_ioctl(struct sock *sk, int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case SIOCOUTQ:
+ {
+ int amount = sk_wmem_alloc_get(sk);
+
+ return put_user(amount, (int __user *)arg);
+ }
+
+ case SIOCINQ:
+ {
+ struct sk_buff *skb;
+ unsigned long amount;
+
+ amount = 0;
+ spin_lock_bh(&sk->sk_receive_queue.lock);
+ skb = skb_peek(&sk->sk_receive_queue);
+ if (skb) {
+ /* We will only return the amount
+ * of this packet since that is all
+ * that will be read.
+ */
+ amount = skb->len - ieee802154_hdr_length(skb);
+ }
+ spin_unlock_bh(&sk->sk_receive_queue.lock);
+ return put_user(amount, (int __user *)arg);
+ }
+ }
+
+ return -ENOIOCTLCMD;
+}
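
Note that SIOCINQ reports only the payload of the first queued frame (the MAC header length is already subtracted above), not the total backlog. A short userspace sketch under the same header assumption:

    #include <sys/ioctl.h>
    #include <linux/sockios.h>	/* SIOCINQ */

    /* Bytes the next recv() on sk would return, or -1 on error. */
    static int next_payload_len(int sk)
    {
            int len = 0;

            return ioctl(sk, SIOCINQ, &len) < 0 ? -1 : len;
    }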
+
+/* FIXME: autobind */
+static int dgram_connect(struct sock *sk, struct sockaddr *uaddr,
+ int len)
+{
+ struct sockaddr_ieee802154 *addr = (struct sockaddr_ieee802154 *)uaddr;
+ struct dgram_sock *ro = dgram_sk(sk);
+ int err = 0;
+
+ if (len < sizeof(*addr))
+ return -EINVAL;
+
+ if (addr->family != AF_IEEE802154)
+ return -EINVAL;
+
+ lock_sock(sk);
+
+ if (!ro->bound) {
+ err = -ENETUNREACH;
+ goto out;
+ }
+
+ ieee802154_addr_from_sa(&ro->dst_addr, &addr->addr);
+ ro->connected = 1;
+
+out:
+ release_sock(sk);
+ return err;
+}
+
+static int dgram_disconnect(struct sock *sk, int flags)
+{
+ struct dgram_sock *ro = dgram_sk(sk);
+
+ lock_sock(sk);
+ ro->connected = 0;
+ release_sock(sk);
+
+ return 0;
+}
+
+static int dgram_sendmsg(struct kiocb *iocb, struct sock *sk,
+ struct msghdr *msg, size_t size)
+{
+ struct net_device *dev;
+ unsigned int mtu;
+ struct sk_buff *skb;
+ struct ieee802154_mac_cb *cb;
+ struct dgram_sock *ro = dgram_sk(sk);
+ struct ieee802154_addr dst_addr;
+ int hlen, tlen;
+ int err;
+
+ if (msg->msg_flags & MSG_OOB) {
+ pr_debug("msg->msg_flags = 0x%x\n", msg->msg_flags);
+ return -EOPNOTSUPP;
+ }
+
+ if (!ro->connected && !msg->msg_name)
+ return -EDESTADDRREQ;
+ else if (ro->connected && msg->msg_name)
+ return -EISCONN;
+
+ if (!ro->bound)
+ dev = dev_getfirstbyhwtype(sock_net(sk), ARPHRD_IEEE802154);
+ else
+ dev = ieee802154_get_dev(sock_net(sk), &ro->src_addr);
+
+ if (!dev) {
+ pr_debug("no dev\n");
+ err = -ENXIO;
+ goto out;
+ }
+ mtu = dev->mtu;
+ pr_debug("name = %s, mtu = %u\n", dev->name, mtu);
+
+ if (size > mtu) {
+ pr_debug("size = %Zu, mtu = %u\n", size, mtu);
+ err = -EMSGSIZE;
+ goto out_dev;
+ }
+
+ hlen = LL_RESERVED_SPACE(dev);
+ tlen = dev->needed_tailroom;
+ skb = sock_alloc_send_skb(sk, hlen + tlen + size,
+ msg->msg_flags & MSG_DONTWAIT,
+ &err);
+ if (!skb)
+ goto out_dev;
+
+ skb_reserve(skb, hlen);
+
+ skb_reset_network_header(skb);
+
+ cb = mac_cb_init(skb);
+ cb->type = IEEE802154_FC_TYPE_DATA;
+ cb->ackreq = ro->want_ack;
+
+ if (msg->msg_name) {
+ DECLARE_SOCKADDR(struct sockaddr_ieee802154*,
+ daddr, msg->msg_name);
+
+ ieee802154_addr_from_sa(&dst_addr, &daddr->addr);
+ } else {
+ dst_addr = ro->dst_addr;
+ }
+
+ cb->secen = ro->secen;
+ cb->secen_override = ro->secen_override;
+ cb->seclevel = ro->seclevel;
+ cb->seclevel_override = ro->seclevel_override;
+
+ err = dev_hard_header(skb, dev, ETH_P_IEEE802154, &dst_addr,
+ ro->bound ? &ro->src_addr : NULL, size);
+ if (err < 0)
+ goto out_skb;
+
+ err = memcpy_from_msg(skb_put(skb, size), msg, size);
+ if (err < 0)
+ goto out_skb;
+
+ skb->dev = dev;
+ skb->sk = sk;
+ skb->protocol = htons(ETH_P_IEEE802154);
+
+ dev_put(dev);
+
+ err = dev_queue_xmit(skb);
+ if (err > 0)
+ err = net_xmit_errno(err);
+
+ return err ?: size;
+
+out_skb:
+ kfree_skb(skb);
+out_dev:
+ dev_put(dev);
+out:
+ return err;
+}
+
+static int dgram_recvmsg(struct kiocb *iocb, struct sock *sk,
+ struct msghdr *msg, size_t len, int noblock,
+ int flags, int *addr_len)
+{
+ size_t copied = 0;
+ int err = -EOPNOTSUPP;
+ struct sk_buff *skb;
+ DECLARE_SOCKADDR(struct sockaddr_ieee802154 *, saddr, msg->msg_name);
+
+ skb = skb_recv_datagram(sk, flags, noblock, &err);
+ if (!skb)
+ goto out;
+
+ copied = skb->len;
+ if (len < copied) {
+ msg->msg_flags |= MSG_TRUNC;
+ copied = len;
+ }
+
+ /* FIXME: skip headers if necessary ?! */
+ err = skb_copy_datagram_msg(skb, 0, msg, copied);
+ if (err)
+ goto done;
+
+ sock_recv_ts_and_drops(msg, sk, skb);
+
+ if (saddr) {
+ saddr->family = AF_IEEE802154;
+ ieee802154_addr_to_sa(&saddr->addr, &mac_cb(skb)->source);
+ *addr_len = sizeof(*saddr);
+ }
+
+ if (flags & MSG_TRUNC)
+ copied = skb->len;
+done:
+ skb_free_datagram(sk, skb);
+out:
+ if (err)
+ return err;
+ return copied;
+}
+
+static int dgram_rcv_skb(struct sock *sk, struct sk_buff *skb)
+{
+ skb = skb_share_check(skb, GFP_ATOMIC);
+ if (!skb)
+ return NET_RX_DROP;
+
+ if (sock_queue_rcv_skb(sk, skb) < 0) {
+ kfree_skb(skb);
+ return NET_RX_DROP;
+ }
+
+ return NET_RX_SUCCESS;
+}
+
+static inline bool
+ieee802154_match_sock(__le64 hw_addr, __le16 pan_id, __le16 short_addr,
+ struct dgram_sock *ro)
+{
+ if (!ro->bound)
+ return true;
+
+ if (ro->src_addr.mode == IEEE802154_ADDR_LONG &&
+ hw_addr == ro->src_addr.extended_addr)
+ return true;
+
+ if (ro->src_addr.mode == IEEE802154_ADDR_SHORT &&
+ pan_id == ro->src_addr.pan_id &&
+ short_addr == ro->src_addr.short_addr)
+ return true;
+
+ return false;
+}
+
+static int ieee802154_dgram_deliver(struct net_device *dev, struct sk_buff *skb)
+{
+ struct sock *sk, *prev = NULL;
+ int ret = NET_RX_SUCCESS;
+ __le16 pan_id, short_addr;
+ __le64 hw_addr;
+
+ /* Data frame processing */
+ BUG_ON(dev->type != ARPHRD_IEEE802154);
+
+ pan_id = ieee802154_mlme_ops(dev)->get_pan_id(dev);
+ short_addr = ieee802154_mlme_ops(dev)->get_short_addr(dev);
+ hw_addr = ieee802154_devaddr_from_raw(dev->dev_addr);
+
+ read_lock(&dgram_lock);
+ sk_for_each(sk, &dgram_head) {
+ if (ieee802154_match_sock(hw_addr, pan_id, short_addr,
+ dgram_sk(sk))) {
+ if (prev) {
+ struct sk_buff *clone;
+
+ clone = skb_clone(skb, GFP_ATOMIC);
+ if (clone)
+ dgram_rcv_skb(prev, clone);
+ }
+
+ prev = sk;
+ }
+ }
+
+ if (prev) {
+ dgram_rcv_skb(prev, skb);
+ } else {
+ kfree_skb(skb);
+ ret = NET_RX_DROP;
+ }
+ read_unlock(&dgram_lock);
+
+ return ret;
+}
+
+static int dgram_getsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, int __user *optlen)
+{
+ struct dgram_sock *ro = dgram_sk(sk);
+
+ int val, len;
+
+ if (level != SOL_IEEE802154)
+ return -EOPNOTSUPP;
+
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+ len = min_t(unsigned int, len, sizeof(int));
+
+ switch (optname) {
+ case WPAN_WANTACK:
+ val = ro->want_ack;
+ break;
+ case WPAN_SECURITY:
+ if (!ro->secen_override)
+ val = WPAN_SECURITY_DEFAULT;
+ else if (ro->secen)
+ val = WPAN_SECURITY_ON;
+ else
+ val = WPAN_SECURITY_OFF;
+ break;
+ case WPAN_SECURITY_LEVEL:
+ if (!ro->seclevel_override)
+ val = WPAN_SECURITY_LEVEL_DEFAULT;
+ else
+ val = ro->seclevel;
+ break;
+ default:
+ return -ENOPROTOOPT;
+ }
+
+ if (put_user(len, optlen))
+ return -EFAULT;
+ if (copy_to_user(optval, &val, len))
+ return -EFAULT;
+ return 0;
+}
+
+static int dgram_setsockopt(struct sock *sk, int level, int optname,
+ char __user *optval, unsigned int optlen)
+{
+ struct dgram_sock *ro = dgram_sk(sk);
+ struct net *net = sock_net(sk);
+ int val;
+ int err = 0;
+
+ if (optlen < sizeof(int))
+ return -EINVAL;
+
+ if (get_user(val, (int __user *)optval))
+ return -EFAULT;
+
+ lock_sock(sk);
+
+ switch (optname) {
+ case WPAN_WANTACK:
+ ro->want_ack = !!val;
+ break;
+ case WPAN_SECURITY:
+ if (!ns_capable(net->user_ns, CAP_NET_ADMIN) &&
+ !ns_capable(net->user_ns, CAP_NET_RAW)) {
+ err = -EPERM;
+ break;
+ }
+
+ switch (val) {
+ case WPAN_SECURITY_DEFAULT:
+ ro->secen_override = 0;
+ break;
+ case WPAN_SECURITY_ON:
+ ro->secen_override = 1;
+ ro->secen = 1;
+ break;
+ case WPAN_SECURITY_OFF:
+ ro->secen_override = 1;
+ ro->secen = 0;
+ break;
+ default:
+ err = -EINVAL;
+ break;
+ }
+ break;
+ case WPAN_SECURITY_LEVEL:
+ if (!ns_capable(net->user_ns, CAP_NET_ADMIN) &&
+ !ns_capable(net->user_ns, CAP_NET_RAW)) {
+ err = -EPERM;
+ break;
+ }
+
+ if (val < WPAN_SECURITY_LEVEL_DEFAULT ||
+ val > IEEE802154_SCF_SECLEVEL_ENC_MIC128) {
+ err = -EINVAL;
+ } else if (val == WPAN_SECURITY_LEVEL_DEFAULT) {
+ ro->seclevel_override = 0;
+ } else {
+ ro->seclevel_override = 1;
+ ro->seclevel = val;
+ }
+ break;
+ default:
+ err = -ENOPROTOOPT;
+ break;
+ }
+
+ release_sock(sk);
+ return err;
+}
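
For reference, a caller-side sketch of the simplest option handled above; SOL_IEEE802154 and WPAN_WANTACK are taken from af_ieee802154.h, and the WPAN_SECURITY* options additionally require CAP_NET_ADMIN or CAP_NET_RAW, as enforced in the switch:

    /* Request MAC-level acknowledgements for outgoing data frames. */
    static int enable_acks(int sk)
    {
            int one = 1;

            return setsockopt(sk, SOL_IEEE802154, WPAN_WANTACK,
                              &one, sizeof(one));
    }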
+
+static struct proto ieee802154_dgram_prot = {
+ .name = "IEEE-802.15.4-MAC",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct dgram_sock),
+ .init = dgram_init,
+ .close = dgram_close,
+ .bind = dgram_bind,
+ .sendmsg = dgram_sendmsg,
+ .recvmsg = dgram_recvmsg,
+ .hash = dgram_hash,
+ .unhash = dgram_unhash,
+ .connect = dgram_connect,
+ .disconnect = dgram_disconnect,
+ .ioctl = dgram_ioctl,
+ .getsockopt = dgram_getsockopt,
+ .setsockopt = dgram_setsockopt,
+};
+
+static const struct proto_ops ieee802154_dgram_ops = {
+ .family = PF_IEEE802154,
+ .owner = THIS_MODULE,
+ .release = ieee802154_sock_release,
+ .bind = ieee802154_sock_bind,
+ .connect = ieee802154_sock_connect,
+ .socketpair = sock_no_socketpair,
+ .accept = sock_no_accept,
+ .getname = sock_no_getname,
+ .poll = datagram_poll,
+ .ioctl = ieee802154_sock_ioctl,
+ .listen = sock_no_listen,
+ .shutdown = sock_no_shutdown,
+ .setsockopt = sock_common_setsockopt,
+ .getsockopt = sock_common_getsockopt,
+ .sendmsg = ieee802154_sock_sendmsg,
+ .recvmsg = sock_common_recvmsg,
+ .mmap = sock_no_mmap,
+ .sendpage = sock_no_sendpage,
+#ifdef CONFIG_COMPAT
+ .compat_setsockopt = compat_sock_common_setsockopt,
+ .compat_getsockopt = compat_sock_common_getsockopt,
+#endif
+};
+
+/* Create a socket. Initialise the socket, blank the addresses and
+ * set the state.
+ */
+static int ieee802154_create(struct net *net, struct socket *sock,
+ int protocol, int kern)
+{
+ struct sock *sk;
+ int rc;
+ struct proto *proto;
+ const struct proto_ops *ops;
+
+ if (!net_eq(net, &init_net))
+ return -EAFNOSUPPORT;
+
+ switch (sock->type) {
+ case SOCK_RAW:
+ proto = &ieee802154_raw_prot;
+ ops = &ieee802154_raw_ops;
+ break;
+ case SOCK_DGRAM:
+ proto = &ieee802154_dgram_prot;
+ ops = &ieee802154_dgram_ops;
+ break;
+ default:
+ rc = -ESOCKTNOSUPPORT;
+ goto out;
+ }
+
+ rc = -ENOMEM;
+ sk = sk_alloc(net, PF_IEEE802154, GFP_KERNEL, proto);
+ if (!sk)
+ goto out;
+ rc = 0;
+
+ sock->ops = ops;
+
+ sock_init_data(sock, sk);
+ /* FIXME: sk->sk_destruct */
+ sk->sk_family = PF_IEEE802154;
+
+ /* Start out as unbound (SOCK_ZAPPED) */
+ sock_set_flag(sk, SOCK_ZAPPED);
+
+ if (sk->sk_prot->hash)
+ sk->sk_prot->hash(sk);
+
+ if (sk->sk_prot->init) {
+ rc = sk->sk_prot->init(sk);
+ if (rc)
+ sk_common_release(sk);
+ }
+out:
+ return rc;
+}
+
+static const struct net_proto_family ieee802154_family_ops = {
+ .family = PF_IEEE802154,
+ .create = ieee802154_create,
+ .owner = THIS_MODULE,
+};
+
+static int ieee802154_rcv(struct sk_buff *skb, struct net_device *dev,
+ struct packet_type *pt, struct net_device *orig_dev)
+{
+ if (!netif_running(dev))
+ goto drop;
+ pr_debug("got frame, type %d, dev %p\n", dev->type, dev);
+#ifdef DEBUG
+ print_hex_dump_bytes("ieee802154_rcv ",
+ DUMP_PREFIX_NONE, skb->data, skb->len);
+#endif
+
+ if (!net_eq(dev_net(dev), &init_net))
+ goto drop;
+
+ ieee802154_raw_deliver(dev, skb);
+
+ if (dev->type != ARPHRD_IEEE802154)
+ goto drop;
+
+ if (skb->pkt_type != PACKET_OTHERHOST)
+ return ieee802154_dgram_deliver(dev, skb);
+
+drop:
+ kfree_skb(skb);
+ return NET_RX_DROP;
+}
+
+static struct packet_type ieee802154_packet_type = {
+ .type = htons(ETH_P_IEEE802154),
+ .func = ieee802154_rcv,
+};
+
+static int __init af_ieee802154_init(void)
+{
+ int rc = -EINVAL;
+
+ rc = proto_register(&ieee802154_raw_prot, 1);
+ if (rc)
+ goto out;
+
+ rc = proto_register(&ieee802154_dgram_prot, 1);
+ if (rc)
+ goto err_dgram;
+
+ /* Tell SOCKET that we are alive */
+ rc = sock_register(&ieee802154_family_ops);
+ if (rc)
+ goto err_sock;
+ dev_add_pack(&ieee802154_packet_type);
+
+ rc = 0;
+ goto out;
+
+err_sock:
+ proto_unregister(&ieee802154_dgram_prot);
+err_dgram:
+ proto_unregister(&ieee802154_raw_prot);
+out:
+ return rc;
+}
+
+static void __exit af_ieee802154_remove(void)
+{
+ dev_remove_pack(&ieee802154_packet_type);
+ sock_unregister(PF_IEEE802154);
+ proto_unregister(&ieee802154_dgram_prot);
+ proto_unregister(&ieee802154_raw_prot);
+}
+
+module_init(af_ieee802154_init);
+module_exit(af_ieee802154_remove);
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_NETPROTO(PF_IEEE802154);
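
Putting the family to work end to end, a minimal datagram send might look as follows. This is a sketch only: the PAN id and short address are placeholders, and the struct layout again comes from a local copy of af_ieee802154.h:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include "af_ieee802154.h"

    int main(void)
    {
            struct sockaddr_ieee802154 dst;
            const char payload[] = "hello";
            int sk;

            sk = socket(PF_IEEE802154, SOCK_DGRAM, 0);
            if (sk < 0) {
                    perror("socket");
                    return 1;
            }

            memset(&dst, 0, sizeof(dst));
            dst.family = AF_IEEE802154;
            dst.addr.addr_type = IEEE802154_ADDR_SHORT;
            dst.addr.pan_id = 0x0777;	/* placeholder PAN id */
            dst.addr.short_addr = 0x0001;	/* placeholder peer */

            if (sendto(sk, payload, sizeof(payload), 0,
                       (struct sockaddr *)&dst, sizeof(dst)) < 0)
                    perror("sendto");
            return 0;
    }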
preferred, valid))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (inet_fill_ifaddr(skb, ifa,
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
- RTM_NEWADDR, NLM_F_MULTI) <= 0) {
+ RTM_NEWADDR, NLM_F_MULTI) < 0) {
rcu_read_unlock();
goto done;
}
IPV4_DEVCONF(*devconf, PROXY_ARP)) < 0)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF,
NLM_F_MULTI,
- -1) <= 0) {
+ -1) < 0) {
rcu_read_unlock();
goto done;
}
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF, NLM_F_MULTI,
- -1) <= 0)
+ -1) < 0)
goto done;
else
h++;
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF, NLM_F_MULTI,
- -1) <= 0)
+ -1) < 0)
goto done;
else
h++;
unsigned int);
void rtmsg_fib(int event, __be32 key, struct fib_alias *fa, int dst_len,
u32 tb_id, const struct nl_info *info, unsigned int nlm_flags);
-struct fib_alias *fib_find_alias(struct list_head *fah, u8 tos, u32 prio);
static inline void fib_result_assign(struct fib_result *res,
struct fib_info *fi)
+ nla_total_size(4) /* RTA_TABLE */
+ nla_total_size(4) /* RTA_DST */
+ nla_total_size(4) /* RTA_PRIORITY */
- + nla_total_size(4); /* RTA_PREFSRC */
+ + nla_total_size(4) /* RTA_PREFSRC */
+ + nla_total_size(TCP_CA_NAME_MAX); /* RTAX_CC_ALGO */
/* space for nested metrics */
payload += nla_total_size((RTAX_MAX * nla_total_size(4)));
rtnl_set_sk_err(info->nl_net, RTNLGRP_IPV4_ROUTE, err);
}
-/* Return the first fib alias matching TOS with
- * priority less than or equal to PRIO.
- */
-struct fib_alias *fib_find_alias(struct list_head *fah, u8 tos, u32 prio)
-{
- if (fah) {
- struct fib_alias *fa;
- list_for_each_entry(fa, fah, fa_list) {
- if (fa->fa_tos > tos)
- continue;
- if (fa->fa_info->fib_priority >= prio ||
- fa->fa_tos < tos)
- return fa;
- }
- }
- return NULL;
-}
-
static int fib_detect_death(struct fib_info *fi, int order,
struct fib_info **last_resort, int *last_idx,
int dflt)
if (type > RTAX_MAX)
goto err_inval;
- val = nla_get_u32(nla);
+ if (type == RTAX_CC_ALGO) {
+ char tmp[TCP_CA_NAME_MAX];
+
+ nla_strlcpy(tmp, nla, sizeof(tmp));
+ val = tcp_ca_get_key_by_name(tmp);
+ if (val == TCP_CA_UNSPEC)
+ goto err_inval;
+ } else {
+ val = nla_get_u32(nla);
+ }
if (type == RTAX_ADVMSS && val > 65535 - 40)
val = 65535 - 40;
if (type == RTAX_MTU && val > 65535 - 15)
nla_nest_end(skb, mp);
}
#endif
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
#define MAX_STAT_DEPTH 32
-#define KEYLENGTH (8*sizeof(t_key))
+#define KEYLENGTH (8*sizeof(t_key))
+#define KEY_MAX ((t_key)~0)
typedef unsigned int t_key;
union {
/* The fields in this struct are valid if bits > 0 (TNODE) */
struct {
- unsigned int full_children; /* KEYLENGTH bits needed */
- unsigned int empty_children; /* KEYLENGTH bits needed */
+ t_key empty_children; /* KEYLENGTH bits needed */
+ t_key full_children; /* KEYLENGTH bits needed */
struct tnode __rcu *child[0];
};
/* This list pointer if valid if bits == 0 (LEAF) */
return vzalloc(size);
}
+static inline void empty_child_inc(struct tnode *n)
+{
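+ /* on wrap past KEY_MAX the carry bit lands in full_children */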
+ ++n->empty_children ? : ++n->full_children;
+}
+
+static inline void empty_child_dec(struct tnode *n)
+{
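+ /* post-decrement: borrowing out of zero comes from full_children */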
+ n->empty_children-- ? : n->full_children--;
+}
+
static struct tnode *leaf_new(t_key key)
{
struct tnode *l = kmem_cache_alloc(trie_leaf_kmem, GFP_KERNEL);
static struct tnode *tnode_new(t_key key, int pos, int bits)
{
- size_t sz = offsetof(struct tnode, child[1 << bits]);
+ size_t sz = offsetof(struct tnode, child[1ul << bits]);
struct tnode *tn = tnode_alloc(sz);
unsigned int shift = pos + bits;
tn->pos = pos;
tn->bits = bits;
tn->key = (shift < KEYLENGTH) ? (key >> shift) << shift : 0;
- tn->full_children = 0;
- tn->empty_children = 1<<bits;
+ if (bits == KEYLENGTH)
+ tn->full_children = 1;
+ else
+ tn->empty_children = 1ul << bits;
}
pr_debug("AT %p s=%zu %zu\n", tn, sizeof(struct tnode),
BUG_ON(i >= tnode_child_length(tn));
- /* update emptyChildren */
+ /* update emptyChildren, overflow into fullChildren */
if (n == NULL && chi != NULL)
- tn->empty_children++;
- else if (n != NULL && chi == NULL)
- tn->empty_children--;
+ empty_child_inc(tn);
+ if (n != NULL && chi == NULL)
+ empty_child_dec(tn);
/* update fullChildren */
wasfull = tnode_full(tn, chi);
rcu_assign_pointer(tn->child[i], n);
}
-static void put_child_root(struct tnode *tp, struct trie *t,
- t_key key, struct tnode *n)
+static void update_children(struct tnode *tn)
+{
+ unsigned long i;
+
+ /* update all of the child parent pointers */
+ for (i = tnode_child_length(tn); i;) {
+ struct tnode *inode = tnode_get_child(tn, --i);
+
+ if (!inode)
+ continue;
+
+ /* Either update the children of a tnode that
+ * already belongs to us or update the child
+ * to point to ourselves.
+ */
+ if (node_parent(inode) == tn)
+ update_children(inode);
+ else
+ node_set_parent(inode, tn);
+ }
+}
+
+static inline void put_child_root(struct tnode *tp, struct trie *t,
+ t_key key, struct tnode *n)
{
if (tp)
put_child(tp, get_index(key, tp), n);
}
}
+static void replace(struct trie *t, struct tnode *oldtnode, struct tnode *tn)
+{
+ struct tnode *tp = node_parent(oldtnode);
+ unsigned long i;
+
+ /* setup the parent pointer out of and back into this node */
+ NODE_INIT_PARENT(tn, tp);
+ put_child_root(tp, t, tn->key, tn);
+
+ /* update all of the child parent pointers */
+ update_children(tn);
+
+ /* all pointers should be clean so we are done */
+ tnode_free(oldtnode);
+
+ /* resize children now that oldtnode is freed */
+ for (i = tnode_child_length(tn); i;) {
+ struct tnode *inode = tnode_get_child(tn, --i);
+
+ /* resize child node */
+ if (tnode_full(tn, inode))
+ resize(t, inode);
+ }
+}
+
static int inflate(struct trie *t, struct tnode *oldtnode)
{
- struct tnode *inode, *node0, *node1, *tn, *tp;
- unsigned long i, j, k;
+ struct tnode *tn;
+ unsigned long i;
t_key m;
pr_debug("In inflate\n");
if (!tn)
return -ENOMEM;
+ /* prepare oldtnode to be freed */
+ tnode_free_init(oldtnode);
+
/* Assemble all of the pointers in our cluster, in this case that
* represents all of the pointers out of our allocated nodes that
* point to existing tnodes and the links between our allocated
* nodes.
*/
for (i = tnode_child_length(oldtnode), m = 1u << tn->pos; i;) {
- inode = tnode_get_child(oldtnode, --i);
+ struct tnode *inode = tnode_get_child(oldtnode, --i);
+ struct tnode *node0, *node1;
+ unsigned long j, k;
/* An empty child */
if (inode == NULL)
continue;
}
+ /* drop the node in the old tnode free list */
+ tnode_free_append(oldtnode, inode);
+
/* An internal node with two children */
if (inode->bits == 1) {
put_child(tn, 2 * i + 1, tnode_get_child(inode, 1));
node1 = tnode_new(inode->key | m, inode->pos, inode->bits - 1);
if (!node1)
goto nomem;
- tnode_free_append(tn, node1);
+ node0 = tnode_new(inode->key, inode->pos, inode->bits - 1);
- node0 = tnode_new(inode->key & ~m, inode->pos, inode->bits - 1);
+ tnode_free_append(tn, node1);
if (!node0)
goto nomem;
tnode_free_append(tn, node0);
put_child(tn, 2 * i, node0);
}
- /* setup the parent pointer into and out of this node */
- tp = node_parent(oldtnode);
- NODE_INIT_PARENT(tn, tp);
- put_child_root(tp, t, tn->key, tn);
-
- /* prepare oldtnode to be freed */
- tnode_free_init(oldtnode);
-
- /* update all child nodes parent pointers to route to us */
- for (i = tnode_child_length(oldtnode); i;) {
- inode = tnode_get_child(oldtnode, --i);
-
- /* A leaf or an internal node with skipped bits */
- if (!tnode_full(oldtnode, inode)) {
- node_set_parent(inode, tn);
- continue;
- }
-
- /* drop the node in the old tnode free list */
- tnode_free_append(oldtnode, inode);
-
- /* fetch new nodes */
- node1 = tnode_get_child(tn, 2 * i + 1);
- node0 = tnode_get_child(tn, 2 * i);
-
- /* bits == 1 then node0 and node1 represent inode's children */
- if (inode->bits == 1) {
- node_set_parent(node1, tn);
- node_set_parent(node0, tn);
- continue;
- }
-
- /* update parent pointers in child node's children */
- for (k = tnode_child_length(inode), j = k / 2; j;) {
- node_set_parent(tnode_get_child(inode, --k), node1);
- node_set_parent(tnode_get_child(inode, --j), node0);
- node_set_parent(tnode_get_child(inode, --k), node1);
- node_set_parent(tnode_get_child(inode, --j), node0);
- }
+ /* setup the parent pointers into and out of this node */
+ replace(t, oldtnode, tn);
- /* resize child nodes */
- resize(t, node1);
- resize(t, node0);
- }
-
- /* we completed without error, prepare to free old node */
- tnode_free(oldtnode);
return 0;
nomem:
/* all pointers should be clean so we are done */
static int halve(struct trie *t, struct tnode *oldtnode)
{
- struct tnode *tn, *tp, *inode, *node0, *node1;
+ struct tnode *tn;
unsigned long i;
pr_debug("In halve\n");
if (!tn)
return -ENOMEM;
+ /* prepare oldtnode to be freed */
+ tnode_free_init(oldtnode);
+
/* Assemble all of the pointers in our cluster, in this case that
* represents all of the pointers out of our allocated nodes that
* point to existing tnodes and the links between our allocated
* nodes.
*/
for (i = tnode_child_length(oldtnode); i;) {
- node1 = tnode_get_child(oldtnode, --i);
- node0 = tnode_get_child(oldtnode, --i);
+ struct tnode *node1 = tnode_get_child(oldtnode, --i);
+ struct tnode *node0 = tnode_get_child(oldtnode, --i);
+ struct tnode *inode;
/* At least one of the children is empty */
if (!node1 || !node0) {
put_child(tn, i / 2, inode);
}
- /* setup the parent pointer out of and back into this node */
- tp = node_parent(oldtnode);
- NODE_INIT_PARENT(tn, tp);
- put_child_root(tp, t, tn->key, tn);
-
- /* prepare oldtnode to be freed */
- tnode_free_init(oldtnode);
+ /* setup the parent pointers into and out of this node */
+ replace(t, oldtnode, tn);
- /* update all of the child parent pointers */
- for (i = tnode_child_length(tn); i;) {
- inode = tnode_get_child(tn, --i);
-
- /* only new tnodes will be considered "full" nodes */
- if (!tnode_full(tn, inode)) {
- node_set_parent(inode, tn);
- continue;
- }
+ return 0;
+}
- /* Two nonempty children */
- node_set_parent(tnode_get_child(inode, 1), inode);
- node_set_parent(tnode_get_child(inode, 0), inode);
+static void collapse(struct trie *t, struct tnode *oldtnode)
+{
+ struct tnode *n, *tp;
+ unsigned long i;
- /* resize child node */
- resize(t, inode);
- }
+ /* scan the tnode looking for that one child that might still exist */
+ for (n = NULL, i = tnode_child_length(oldtnode); !n && i;)
+ n = tnode_get_child(oldtnode, --i);
- /* all pointers should be clean so we are done */
- tnode_free(oldtnode);
+ /* compress one level */
+ tp = node_parent(oldtnode);
+ put_child_root(tp, t, oldtnode->key, n);
+ node_set_parent(n, tp);
- return 0;
+ /* drop dead node */
+ node_free(oldtnode);
}
static unsigned char update_suffix(struct tnode *tn)
/* Keep root node larger */
threshold *= tp ? inflate_threshold : inflate_threshold_root;
- used += tn->full_children;
used -= tn->empty_children;
+ used += tn->full_children;
+
+ /* if bits == KEYLENGTH then pos = 0, and will fail below */
- return tn->pos && ((50 * used) >= threshold);
+ return (used > 1) && tn->pos && ((50 * used) >= threshold);
}
static bool should_halve(const struct tnode *tp, const struct tnode *tn)
threshold *= tp ? halve_threshold : halve_threshold_root;
used -= tn->empty_children;
- return (tn->bits > 1) && ((100 * used) < threshold);
+ /* if bits == KEYLENGTH then used = 100% on wrap, and will fail below */
+
+ return (used > 1) && (tn->bits > 1) && ((100 * used) < threshold);
+}
+
+static bool should_collapse(const struct tnode *tn)
+{
+ unsigned long used = tnode_child_length(tn);
+
+ used -= tn->empty_children;
+
+ /* account for bits == KEYLENGTH case */
+ if ((tn->bits == KEYLENGTH) && tn->full_children)
+ used -= KEY_MAX;
+
+ /* One child or none, time to drop us from the trie */
+ return used < 2;
}
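
To make the thresholds concrete: with the in-tree default halve_threshold of 25, a non-root tnode with bits = 4 has 16 slots and a threshold of 16 * 25 = 400, so should_halve() fires once 100 * used < 400, i.e. with three or fewer non-empty children; the used > 1 guard leaves the zero- and one-child cases to should_collapse(), which removes the node from the trie entirely.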
#define MAX_WORK 10
static void resize(struct trie *t, struct tnode *tn)
{
- struct tnode *tp = node_parent(tn), *n = NULL;
+ struct tnode *tp = node_parent(tn);
struct tnode __rcu **cptr;
- int max_work;
+ int max_work = MAX_WORK;
pr_debug("In tnode_resize %p inflate_threshold=%d threshold=%d\n",
tn, inflate_threshold, halve_threshold);
cptr = tp ? &tp->child[get_index(tn->key, tp)] : &t->trie;
BUG_ON(tn != rtnl_dereference(*cptr));
- /* No children */
- if (tn->empty_children > (tnode_child_length(tn) - 1))
- goto no_children;
-
- /* One child */
- if (tn->empty_children == (tnode_child_length(tn) - 1))
- goto one_child;
-
/* Double as long as the resulting node has a number of
* nonempty nodes that are above the threshold.
*/
- max_work = MAX_WORK;
- while (should_inflate(tp, tn) && max_work--) {
+ while (should_inflate(tp, tn) && max_work) {
if (inflate(t, tn)) {
#ifdef CONFIG_IP_FIB_TRIE_STATS
this_cpu_inc(t->stats->resize_node_skipped);
break;
}
+ max_work--;
tn = rtnl_dereference(*cptr);
}
/* Halve as long as the number of empty children in this
* node is above threshold.
*/
- max_work = MAX_WORK;
- while (should_halve(tp, tn) && max_work--) {
+ while (should_halve(tp, tn) && max_work) {
if (halve(t, tn)) {
#ifdef CONFIG_IP_FIB_TRIE_STATS
this_cpu_inc(t->stats->resize_node_skipped);
break;
}
+ max_work--;
tn = rtnl_dereference(*cptr);
}
/* Only one child remains */
- if (tn->empty_children == (tnode_child_length(tn) - 1)) {
- unsigned long i;
-one_child:
- for (i = tnode_child_length(tn); !n && i;)
- n = tnode_get_child(tn, --i);
-no_children:
- /* compress one level */
- put_child_root(tp, t, tn->key, n);
- node_set_parent(n, tp);
-
- /* drop dead node */
- tnode_free_init(tn);
- tnode_free(tn);
+ if (should_collapse(tn)) {
+ collapse(t, tn);
return;
}
static void remove_leaf_info(struct tnode *l, struct leaf_info *old)
{
- struct hlist_node *prev;
-
- /* record the location of the pointer to this object */
- prev = rtnl_dereference(hlist_pprev_rcu(&old->hlist));
+ /* record the location of the previous leaf_info entry */
+ struct hlist_node **pprev = old->hlist.pprev;
+ struct leaf_info *li = hlist_entry(pprev, typeof(*li), hlist.next);
/* remove the leaf info from the list */
hlist_del_rcu(&old->hlist);
- /* if we emptied the list this leaf will be freed and we can sort
- * out parent suffix lengths as a part of trie_rebalance
- */
- if (hlist_empty(&l->list))
+ /* only access li if it is pointing at the last valid hlist_node */
+ if (hlist_empty(&l->list) || (*pprev))
return;
- /* if we removed the tail then we need to update slen */
- if (!rcu_access_pointer(hlist_next_rcu(prev))) {
- struct leaf_info *li = hlist_entry(prev, typeof(*li), hlist);
-
- l->slen = KEYLENGTH - li->plen;
- leaf_pull_suffix(l);
- }
+ /* update the trie with the latest suffix length */
+ l->slen = KEYLENGTH - li->plen;
+ leaf_pull_suffix(l);
}
static void insert_leaf_info(struct tnode *l, struct leaf_info *new)
}
/* if we added to the tail node then we need to update slen */
- if (!rcu_access_pointer(hlist_next_rcu(&new->hlist))) {
+ if (l->slen < (KEYLENGTH - new->plen)) {
l->slen = KEYLENGTH - new->plen;
leaf_push_suffix(l);
}
* prefix plus zeros for the bits in the cindex. The index
* is the difference between the key and this value. From
* this we can actually derive several pieces of data.
- * if !(index >> bits)
- * we know the value is cindex
- * else
+ * if (index & (~0ul << bits))
* we have a mismatch in skip bits and failed
+ * else
+ * we know the value is cindex
*/
- if (index >> n->bits)
+ if (index & (~0ul << n->bits))
return NULL;
/* we have found a leaf. Prefixes have already been compared */
return n;
}
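
Reading the new check with concrete numbers: for n->bits = 3 the mask ~0ul << 3 is ...11111000, so the low three bits of index select one of the eight children, while any set bit above them means the key already disagreed with n->key in one of the skipped prefix bits and the lookup must fail.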
+/* Return the first fib alias matching TOS with
+ * priority less than or equal to PRIO.
+ */
+static struct fib_alias *fib_find_alias(struct list_head *fah, u8 tos, u32 prio)
+{
+ struct fib_alias *fa;
+
+ if (!fah)
+ return NULL;
+
+ list_for_each_entry(fa, fah, fa_list) {
+ if (fa->fa_tos > tos)
+ continue;
+ if (fa->fa_info->fib_priority >= prio || fa->fa_tos < tos)
+ return fa;
+ }
+
+ return NULL;
+}
+
static void trie_rebalance(struct trie *t, struct tnode *tn)
{
struct tnode *tp;
* prefix plus zeros for the "bits" in the prefix. The index
* is the difference between the key and this value. From
* this we can actually derive several pieces of data.
- * if !(index >> bits)
- * we know the value is child index
- * else
+ * if (index & (~0ul << bits))
* we have a mismatch in skip bits and failed
+ * else
+ * we know the value is cindex
*/
- if (index >> n->bits)
+ if (index & (~0ul << n->bits))
break;
/* we have found a leaf. Prefixes have already been compared */
struct hlist_head *lih = &l->list;
struct hlist_node *tmp;
struct leaf_info *li = NULL;
+ unsigned char plen = KEYLENGTH;
hlist_for_each_entry_safe(li, tmp, lih, hlist) {
found += trie_flush_list(&li->falh);
if (list_empty(&li->falh)) {
hlist_del_rcu(&li->hlist);
free_leaf_info(li);
+ continue;
}
+
+ plen = li->plen;
}
+
+ l->slen = KEYLENGTH - plen;
+
return found;
}
for (l = trie_firstleaf(t); l; l = trie_nextleaf(l)) {
found += trie_flush_leaf(l);
- if (ll && hlist_empty(&ll->list))
- trie_leaf_remove(t, ll);
+ if (ll) {
+ if (hlist_empty(&ll->list))
+ trie_leaf_remove(t, ll);
+ else
+ leaf_pull_suffix(ll);
+ }
+
ll = l;
}
- if (ll && hlist_empty(&ll->list))
- trie_leaf_remove(t, ll);
+ if (ll) {
+ if (hlist_empty(&ll->list))
+ trie_leaf_remove(t, ll);
+ else
+ leaf_pull_suffix(ll);
+ }
pr_debug("trie_flush found=%d\n", found);
return found;
hlist_for_each_entry_rcu(li, &n->list, hlist)
++s->prefixes;
} else {
- unsigned long i;
-
s->tnodes++;
if (n->bits < MAX_STAT_DEPTH)
s->nodesizes[n->bits]++;
-
- for (i = tnode_child_length(n); i--;) {
- if (!rcu_access_pointer(n->child[i]))
- s->nullpointers++;
- }
+ s->nullpointers += n->empty_children;
}
}
rcu_read_unlock();
}
static struct sk_buff **fou_gro_receive(struct sk_buff **head,
- struct sk_buff *skb)
+ struct sk_buff *skb,
+ struct udp_offload *uoff)
{
const struct net_offload *ops;
struct sk_buff **pp = NULL;
return pp;
}
-static int fou_gro_complete(struct sk_buff *skb, int nhoff)
+static int fou_gro_complete(struct sk_buff *skb, int nhoff,
+ struct udp_offload *uoff)
{
const struct net_offload *ops;
u8 proto = NAPI_GRO_CB(skb)->proto;
}
static struct sk_buff **gue_gro_receive(struct sk_buff **head,
- struct sk_buff *skb)
+ struct sk_buff *skb,
+ struct udp_offload *uoff)
{
const struct net_offload **offloads;
const struct net_offload *ops;
return pp;
}
-static int gue_gro_complete(struct sk_buff *skb, int nhoff)
+static int gue_gro_complete(struct sk_buff *skb, int nhoff,
+ struct udp_offload *uoff)
{
const struct net_offload **offloads;
struct guehdr *guehdr = (struct guehdr *)(skb->data + nhoff);
sk->sk_user_data = fou;
fou->sock = sock;
- udp_set_convert_csum(sk, true);
+ inet_inc_convert_csum(sk);
sk->sk_allocation = GFP_ATOMIC;
#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/skbuff.h>
-#include <linux/rculist.h>
+#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/etherdevice.h>
#include <linux/if_ether.h>
#include <linux/if_vlan.h>
-#include <linux/hash.h>
#include <linux/ethtool.h>
+#include <linux/mutex.h>
#include <net/arp.h>
#include <net/ndisc.h>
#include <net/ip.h>
#include <net/ip6_checksum.h>
#endif
-#define PORT_HASH_BITS 8
-#define PORT_HASH_SIZE (1<<PORT_HASH_BITS)
+/* Protects sock_list and refcounts. */
+static DEFINE_MUTEX(geneve_mutex);
/* per-network namespace private data for this module */
struct geneve_net {
- struct hlist_head sock_list[PORT_HASH_SIZE];
- spinlock_t sock_lock; /* Protects sock_list */
+ struct list_head sock_list;
};
static int geneve_net_id;
-static struct workqueue_struct *geneve_wq;
-
static inline struct genevehdr *geneve_hdr(const struct sk_buff *skb)
{
return (struct genevehdr *)(udp_hdr(skb) + 1);
}
-static struct hlist_head *gs_head(struct net *net, __be16 port)
+static struct geneve_sock *geneve_find_sock(struct net *net,
+ sa_family_t family, __be16 port)
{
struct geneve_net *gn = net_generic(net, geneve_net_id);
-
- return &gn->sock_list[hash_32(ntohs(port), PORT_HASH_BITS)];
-}
-
-/* Find geneve socket based on network namespace and UDP port */
-static struct geneve_sock *geneve_find_sock(struct net *net, __be16 port)
-{
struct geneve_sock *gs;
- hlist_for_each_entry_rcu(gs, gs_head(net, port), hlist) {
- if (inet_sk(gs->sock->sk)->inet_sport == port)
+ list_for_each_entry(gs, &gn->sock_list, list) {
+ if (inet_sk(gs->sock->sk)->inet_sport == port &&
+ inet_sk(gs->sock->sk)->sk.sk_family == family)
return gs;
}
min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ GENEVE_BASE_HLEN + opt_len + sizeof(struct iphdr)
- + (vlan_tx_tag_present(skb) ? VLAN_HLEN : 0);
+ + (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
err = skb_cow_head(skb, min_headroom);
if (unlikely(err)) {
skb_set_inner_protocol(skb, htons(ETH_P_TEB));
- return udp_tunnel_xmit_skb(gs->sock, rt, skb, src, dst,
- tos, ttl, df, src_port, dst_port, xnet);
+ return udp_tunnel_xmit_skb(rt, skb, src, dst,
+ tos, ttl, df, src_port, dst_port, xnet,
+ gs->sock->sk->sk_no_check_tx);
}
EXPORT_SYMBOL_GPL(geneve_xmit_skb);
}
static struct sk_buff **geneve_gro_receive(struct sk_buff **head,
- struct sk_buff *skb)
+ struct sk_buff *skb,
+ struct udp_offload *uoff)
{
struct sk_buff *p, **pp = NULL;
struct genevehdr *gh, *gh2;
return pp;
}
-static int geneve_gro_complete(struct sk_buff *skb, int nhoff)
+static int geneve_gro_complete(struct sk_buff *skb, int nhoff,
+ struct udp_offload *uoff)
{
struct genevehdr *gh;
struct packet_offload *ptype;
return 1;
}
-static void geneve_del_work(struct work_struct *work)
-{
- struct geneve_sock *gs = container_of(work, struct geneve_sock,
- del_work);
-
- udp_tunnel_sock_release(gs->sock);
- kfree_rcu(gs, rcu);
-}
-
static struct socket *geneve_create_sock(struct net *net, bool ipv6,
__be16 port)
{
if (!gs)
return ERR_PTR(-ENOMEM);
- INIT_WORK(&gs->del_work, geneve_del_work);
-
sock = geneve_create_sock(net, ipv6, port);
if (IS_ERR(sock)) {
kfree(gs);
}
gs->sock = sock;
- atomic_set(&gs->refcnt, 1);
+ gs->refcnt = 1;
gs->rcv = rcv;
gs->rcv_data = data;
gs->udp_offloads.port = port;
gs->udp_offloads.callbacks.gro_receive = geneve_gro_receive;
gs->udp_offloads.callbacks.gro_complete = geneve_gro_complete;
-
- spin_lock(&gn->sock_lock);
- hlist_add_head_rcu(&gs->hlist, gs_head(net, port));
geneve_notify_add_rx_port(gs);
- spin_unlock(&gn->sock_lock);
/* Mark socket as an encapsulation socket */
tunnel_cfg.sk_user_data = gs;
tunnel_cfg.encap_destroy = NULL;
setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
+ list_add(&gs->list, &gn->sock_list);
+
return gs;
}
geneve_rcv_t *rcv, void *data,
bool no_share, bool ipv6)
{
- struct geneve_net *gn = net_generic(net, geneve_net_id);
struct geneve_sock *gs;
- gs = geneve_socket_create(net, port, rcv, data, ipv6);
- if (!IS_ERR(gs))
- return gs;
-
- if (no_share) /* Return error if sharing is not allowed. */
- return ERR_PTR(-EINVAL);
+ mutex_lock(&geneve_mutex);
- spin_lock(&gn->sock_lock);
- gs = geneve_find_sock(net, port);
- if (gs && ((gs->rcv != rcv) ||
- !atomic_add_unless(&gs->refcnt, 1, 0)))
+ gs = geneve_find_sock(net, ipv6 ? AF_INET6 : AF_INET, port);
+ if (gs) {
+ if (!no_share && gs->rcv == rcv)
+ gs->refcnt++;
+ else
gs = ERR_PTR(-EBUSY);
- spin_unlock(&gn->sock_lock);
+ } else {
+ gs = geneve_socket_create(net, port, rcv, data, ipv6);
+ }
- if (!gs)
- gs = ERR_PTR(-EINVAL);
+ mutex_unlock(&geneve_mutex);
return gs;
}
void geneve_sock_release(struct geneve_sock *gs)
{
- struct net *net = sock_net(gs->sock->sk);
- struct geneve_net *gn = net_generic(net, geneve_net_id);
+ mutex_lock(&geneve_mutex);
- if (!atomic_dec_and_test(&gs->refcnt))
- return;
+ if (--gs->refcnt)
+ goto unlock;
- spin_lock(&gn->sock_lock);
- hlist_del_rcu(&gs->hlist);
+ list_del(&gs->list);
geneve_notify_del_rx_port(gs);
- spin_unlock(&gn->sock_lock);
+ udp_tunnel_sock_release(gs->sock);
+ kfree_rcu(gs, rcu);
- queue_work(geneve_wq, &gs->del_work);
+unlock:
+ mutex_unlock(&geneve_mutex);
}
EXPORT_SYMBOL_GPL(geneve_sock_release);
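
With the RCU hash and workqueue gone, socket creation and tear-down now pair up directly under geneve_mutex. A hedged caller-side sketch, assuming the surrounding function whose tail appears above is geneve_sock_add() and using the conventional Geneve UDP port 6081 (my_rcv and my_priv are caller-supplied placeholders):

    struct geneve_sock *gs;

    gs = geneve_sock_add(net, htons(6081), my_rcv, my_priv,
                         false /* sharing allowed */, false /* IPv4 */);
    if (IS_ERR(gs))
            return PTR_ERR(gs);
    /* ... transmit via geneve_xmit_skb(gs, ...) ... */
    geneve_sock_release(gs);	/* drops the reference taken above */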
static __net_init int geneve_init_net(struct net *net)
{
struct geneve_net *gn = net_generic(net, geneve_net_id);
- unsigned int h;
- spin_lock_init(&gn->sock_lock);
-
- for (h = 0; h < PORT_HASH_SIZE; ++h)
- INIT_HLIST_HEAD(&gn->sock_list[h]);
+ INIT_LIST_HEAD(&gn->sock_list);
return 0;
}
static struct pernet_operations geneve_net_ops = {
.init = geneve_init_net,
- .exit = NULL,
.id = &geneve_net_id,
.size = sizeof(struct geneve_net),
};
{
int rc;
- geneve_wq = alloc_workqueue("geneve", 0, 0);
- if (!geneve_wq)
- return -ENOMEM;
-
rc = register_pernet_subsys(&geneve_net_ops);
if (rc)
return rc;
return 0;
}
-late_initcall(geneve_init_module);
+module_init(geneve_init_module);
static void __exit geneve_cleanup_module(void)
{
- destroy_workqueue(geneve_wq);
unregister_pernet_subsys(&geneve_net_ops);
}
module_exit(geneve_cleanup_module);
icsk->icsk_ca_ops->get_info(sk, ext, skb);
out:
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
errout:
nlmsg_cancel(skb, nlh);
}
#endif
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int sk_diag_fill(struct sock *sk, struct sk_buff *skb,
}
#endif
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int inet_diag_dump_reqs(struct sk_buff *skb, struct sock *sk,
.dellink = ip_tunnel_dellink,
.get_size = ipgre_get_size,
.fill_info = ipgre_fill_info,
+ .get_link_net = ip_tunnel_get_link_net,
};
static struct rtnl_link_ops ipgre_tap_ops __read_mostly = {
.dellink = ip_tunnel_dellink,
.get_size = ipgre_get_size,
.fill_info = ipgre_fill_info,
+ .get_link_net = ip_tunnel_get_link_net,
};
static int __net_init ipgre_tap_init_net(struct net *net)
#include <net/route.h>
#include <net/xfrm.h>
#include <net/compat.h>
+#include <net/checksum.h>
#if IS_ENABLED(CONFIG_IPV6)
#include <net/transp_v6.h>
#endif
#include <linux/errqueue.h>
#include <asm/uaccess.h>
-#define IP_CMSG_PKTINFO 1
-#define IP_CMSG_TTL 2
-#define IP_CMSG_TOS 4
-#define IP_CMSG_RECVOPTS 8
-#define IP_CMSG_RETOPTS 16
-#define IP_CMSG_PASSSEC 32
-#define IP_CMSG_ORIGDSTADDR 64
-
/*
* SOL_IP control messages.
*/
put_cmsg(msg, SOL_IP, IP_RETOPTS, opt->optlen, opt->__data);
}
+static void ip_cmsg_recv_checksum(struct msghdr *msg, struct sk_buff *skb,
+ int offset)
+{
+ __wsum csum = skb->csum;
+
+ if (skb->ip_summed != CHECKSUM_COMPLETE)
+ return;
+
+ if (offset != 0)
+ csum = csum_sub(csum, csum_partial(skb->data, offset, 0));
+
+ put_cmsg(msg, SOL_IP, IP_CHECKSUM, sizeof(__wsum), &csum);
+}
+
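
The helper above relies on the Internet checksum being a one's-complement
sum: the sum over bytes [offset, len) equals the total minus the sum of the
skipped prefix. A self-contained sketch of that arithmetic (simplified
big-endian folding, even offsets only; not the kernel's csum_partial() or
csum_sub()):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* One's-complement sum of a byte range (simplified csum_partial()). */
static uint32_t csum_add_bytes(uint32_t sum, const uint8_t *p, size_t len)
{
	while (len >= 2) {
		sum += (uint32_t)(p[0] << 8 | p[1]);
		p += 2;
		len -= 2;
	}
	if (len)
		sum += (uint32_t)p[0] << 8;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}

/* Subtract a prefix sum: one's-complement analogue of csum_sub(). */
static uint32_t csum_sub_fold(uint32_t total, uint32_t prefix)
{
	uint32_t sum = total + ((~prefix) & 0xffff);

	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}

int main(void)
{
	const uint8_t data[] = "abcd";
	uint32_t total  = csum_add_bytes(0, data, 4);
	uint32_t prefix = csum_add_bytes(0, data, 2);
	uint32_t suffix = csum_add_bytes(0, data + 2, 2);

	/* The identity the helper depends on: total - prefix == suffix. */
	printf("%s\n", csum_sub_fold(total, prefix) == suffix ? "ok" : "bad");
	return 0;
}
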
static void ip_cmsg_recv_security(struct msghdr *msg, struct sk_buff *skb)
{
char *secdata;
put_cmsg(msg, SOL_IP, IP_ORIGDSTADDR, sizeof(sin), &sin);
}
-void ip_cmsg_recv(struct msghdr *msg, struct sk_buff *skb)
+void ip_cmsg_recv_offset(struct msghdr *msg, struct sk_buff *skb,
+ int offset)
{
struct inet_sock *inet = inet_sk(skb->sk);
unsigned int flags = inet->cmsg_flags;
/* Ordered by supposed usage frequency */
- if (flags & 1)
+ if (flags & IP_CMSG_PKTINFO) {
ip_cmsg_recv_pktinfo(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_PKTINFO;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_TTL) {
ip_cmsg_recv_ttl(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_TTL;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_TOS) {
ip_cmsg_recv_tos(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_TOS;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_RECVOPTS) {
ip_cmsg_recv_opts(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_RECVOPTS;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_RETOPTS) {
ip_cmsg_recv_retopts(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_RETOPTS;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_PASSSEC) {
ip_cmsg_recv_security(msg, skb);
- if ((flags >>= 1) == 0)
- return;
- if (flags & 1)
+ flags &= ~IP_CMSG_PASSSEC;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_ORIGDSTADDR) {
ip_cmsg_recv_dstaddr(msg, skb);
+ flags &= ~IP_CMSG_ORIGDSTADDR;
+ if (!flags)
+ return;
+ }
+
+ if (flags & IP_CMSG_CHECKSUM)
+ ip_cmsg_recv_checksum(msg, skb, offset);
}
-EXPORT_SYMBOL(ip_cmsg_recv);
+EXPORT_SYMBOL(ip_cmsg_recv_offset);
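
The rewrite above trades the old shift-and-test loop for an explicit
test/clear of each named flag, returning as soon as no requested flags
remain. The shape of the pattern, reduced to a standalone sketch with
made-up flag names:

#include <stdio.h>

#define F_PKTINFO 0x01
#define F_TTL     0x02
#define F_TOS     0x04

static void handle(const char *name)
{
	printf("deliver %s\n", name);
}

static void deliver_cmsgs(unsigned int flags)
{
	if (flags & F_PKTINFO) {
		handle("pktinfo");
		flags &= ~F_PKTINFO;	/* clear the handled bit... */
		if (!flags)
			return;		/* ...and stop once none remain */
	}
	if (flags & F_TTL) {
		handle("ttl");
		flags &= ~F_TTL;
		if (!flags)
			return;
	}
	if (flags & F_TOS)
		handle("tos");
}

int main(void)
{
	deliver_cmsgs(F_PKTINFO | F_TOS);	/* skips the ttl handler */
	return 0;
}
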
int ip_cmsg_send(struct net *net, struct msghdr *msg, struct ipcm_cookie *ipc,
bool allow_ipv6)
case IP_MULTICAST_ALL:
case IP_MULTICAST_LOOP:
case IP_RECVORIGDSTADDR:
+ case IP_CHECKSUM:
if (optlen >= sizeof(int)) {
if (get_user(val, (int __user *) optval))
return -EFAULT;
else
inet->cmsg_flags &= ~IP_CMSG_ORIGDSTADDR;
break;
+ case IP_CHECKSUM:
+ if (val) {
+ if (!(inet->cmsg_flags & IP_CMSG_CHECKSUM)) {
+ inet_inc_convert_csum(sk);
+ inet->cmsg_flags |= IP_CMSG_CHECKSUM;
+ }
+ } else {
+ if (inet->cmsg_flags & IP_CMSG_CHECKSUM) {
+ inet_dec_convert_csum(sk);
+ inet->cmsg_flags &= ~IP_CMSG_CHECKSUM;
+ }
+ }
+ break;
case IP_TOS: /* This sets both TOS and Precedence */
if (sk->sk_type == SOCK_STREAM) {
val &= ~INET_ECN_MASK;
case IP_RECVORIGDSTADDR:
val = (inet->cmsg_flags & IP_CMSG_ORIGDSTADDR) != 0;
break;
+ case IP_CHECKSUM:
+ val = (inet->cmsg_flags & IP_CMSG_CHECKSUM) != 0;
+ break;
case IP_TOS:
val = inet->tos;
break;
}
EXPORT_SYMBOL_GPL(ip_tunnel_dellink);
+struct net *ip_tunnel_get_link_net(const struct net_device *dev)
+{
+ struct ip_tunnel *tunnel = netdev_priv(dev);
+
+ return tunnel->net;
+}
+EXPORT_SYMBOL(ip_tunnel_get_link_net);
+
int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id,
struct rtnl_link_ops *ops, char *devname)
{
.dellink = ip_tunnel_dellink,
.get_size = vti_get_size,
.fill_info = vti_fill_info,
+ .get_link_net = ip_tunnel_get_link_net,
};
static int __init vti_init(void)
last = &ic_first_dev;
rtnl_lock();
- /* bring loopback device up first */
+ /* bring loopback and DSA master network devices up first */
for_each_netdev(&init_net, dev) {
- if (!(dev->flags & IFF_LOOPBACK))
+ if (!(dev->flags & IFF_LOOPBACK) && !netdev_uses_dsa(dev))
continue;
if (dev_change_flags(dev, dev->flags | IFF_UP) < 0)
pr_err("IP-Config: Failed to open %s\n", dev->name);
while ((d = next)) {
next = d->next;
dev = d->dev;
- if (dev != ic_dev) {
+ if (dev != ic_dev && !netdev_uses_dsa(dev)) {
DBG(("IP-Config: Downing %s\n", dev->name));
dev_change_flags(dev, d->flags);
}
.dellink = ip_tunnel_dellink,
.get_size = ipip_get_size,
.fill_info = ipip_fill_info,
+ .get_link_net = ip_tunnel_get_link_net,
};
static struct xfrm_tunnel ipip_handler __read_mostly = {
if (err < 0 && err != -ENOENT)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
memset(&mr, 0, sizeof(mr));
if (priv->sreg_proto_min) {
- mr.range[0].min.all = (__force __be16)
- data[priv->sreg_proto_min].data[0];
- mr.range[0].max.all = (__force __be16)
- data[priv->sreg_proto_max].data[0];
+ mr.range[0].min.all =
+ *(__be16 *)&data[priv->sreg_proto_min].data[0];
+ mr.range[0].max.all =
+ *(__be16 *)&data[priv->sreg_proto_max].data[0];
mr.range[0].flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
}
return ret;
}
-static DEFINE_SPINLOCK(rt_uncached_lock);
-static LIST_HEAD(rt_uncached_list);
+struct uncached_list {
+ spinlock_t lock;
+ struct list_head head;
+};
+
+static DEFINE_PER_CPU_ALIGNED(struct uncached_list, rt_uncached_list);
static void rt_add_uncached_list(struct rtable *rt)
{
- spin_lock_bh(&rt_uncached_lock);
- list_add_tail(&rt->rt_uncached, &rt_uncached_list);
- spin_unlock_bh(&rt_uncached_lock);
+ struct uncached_list *ul = raw_cpu_ptr(&rt_uncached_list);
+
+ rt->rt_uncached_list = ul;
+
+ spin_lock_bh(&ul->lock);
+ list_add_tail(&rt->rt_uncached, &ul->head);
+ spin_unlock_bh(&ul->lock);
}
static void ipv4_dst_destroy(struct dst_entry *dst)
struct rtable *rt = (struct rtable *) dst;
if (!list_empty(&rt->rt_uncached)) {
- spin_lock_bh(&rt_uncached_lock);
+ struct uncached_list *ul = rt->rt_uncached_list;
+
+ spin_lock_bh(&ul->lock);
list_del(&rt->rt_uncached);
- spin_unlock_bh(&rt_uncached_lock);
+ spin_unlock_bh(&ul->lock);
}
}
void rt_flush_dev(struct net_device *dev)
{
- if (!list_empty(&rt_uncached_list)) {
- struct net *net = dev_net(dev);
- struct rtable *rt;
+ struct net *net = dev_net(dev);
+ struct rtable *rt;
+ int cpu;
- spin_lock_bh(&rt_uncached_lock);
- list_for_each_entry(rt, &rt_uncached_list, rt_uncached) {
+ for_each_possible_cpu(cpu) {
+ struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
+
+ spin_lock_bh(&ul->lock);
+ list_for_each_entry(rt, &ul->head, rt_uncached) {
if (rt->dst.dev != dev)
continue;
rt->dst.dev = net->loopback_dev;
dev_hold(rt->dst.dev);
dev_put(dev);
}
- spin_unlock_bh(&rt_uncached_lock);
+ spin_unlock_bh(&ul->lock);
}
}
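
Splitting the single global uncached list into per-CPU lists, each with its
own lock, removes cross-CPU contention on the add path; the cost is that
rt_flush_dev() must now walk every CPU, and each rtable keeps a back-pointer
so destruction can find the right lock. A compact userspace analogue (a
fixed array stands in for DEFINE_PER_CPU_ALIGNED):

#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct uncached_list;

struct entry {
	struct entry *next;
	struct uncached_list *home;	/* which per-CPU list we joined */
};

struct uncached_list {
	pthread_mutex_t lock;
	struct entry *head;
};

static struct uncached_list lists[NR_CPUS];

static void add_uncached(struct entry *e, int cpu)
{
	struct uncached_list *ul = &lists[cpu];

	e->home = ul;		/* removal must find this list's lock */
	pthread_mutex_lock(&ul->lock);
	e->next = ul->head;
	ul->head = e;
	pthread_mutex_unlock(&ul->lock);
}

static void flush_all(void)	/* rt_flush_dev() walks every CPU */
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_mutex_lock(&lists[cpu].lock);
		for (struct entry *e = lists[cpu].head; e; e = e->next)
			printf("flushing entry on cpu %d\n", cpu);
		pthread_mutex_unlock(&lists[cpu].lock);
	}
}

int main(void)
{
	struct entry e1, e2;

	for (int i = 0; i < NR_CPUS; i++)
		pthread_mutex_init(&lists[i].lock, NULL);

	add_uncached(&e1, 0);
	add_uncached(&e2, 3);
	flush_all();
	return 0;
}
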
if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, error) < 0)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
err = rt_fill_info(net, dst, src, &fl4, skb,
NETLINK_CB(in_skb).portid, nlh->nlmsg_seq,
RTM_NEWROUTE, 0, 0);
- if (err <= 0)
+ if (err < 0)
goto errout_free;
err = rtnl_unicast(skb, net, NETLINK_CB(in_skb).portid);
int __init ip_rt_init(void)
{
int rc = 0;
+ int cpu;
ip_idents = kmalloc(IP_IDENTS_SZ * sizeof(*ip_idents), GFP_KERNEL);
if (!ip_idents)
prandom_bytes(ip_idents, IP_IDENTS_SZ * sizeof(*ip_idents));
+ for_each_possible_cpu(cpu) {
+ struct uncached_list *ul = &per_cpu(rt_uncached_list, cpu);
+
+ INIT_LIST_HEAD(&ul->head);
+ spin_lock_init(&ul->lock);
+ }
#ifdef CONFIG_IP_ROUTE_CLASSID
ip_rt_acct = __alloc_percpu(256 * sizeof(struct ip_rt_acct), __alignof__(struct ip_rt_acct));
if (!ip_rt_acct)
#include <linux/types.h>
#include <linux/list.h>
#include <linux/gfp.h>
+#include <linux/jhash.h>
#include <net/tcp.h>
static DEFINE_SPINLOCK(tcp_cong_list_lock);
return NULL;
}
+/* Must be called with the RCU read lock held */

+static const struct tcp_congestion_ops *__tcp_ca_find_autoload(const char *name)
+{
+ const struct tcp_congestion_ops *ca = tcp_ca_find(name);
+#ifdef CONFIG_MODULES
+ if (!ca && capable(CAP_NET_ADMIN)) {
+ rcu_read_unlock();
+ request_module("tcp_%s", name);
+ rcu_read_lock();
+ ca = tcp_ca_find(name);
+ }
+#endif
+ return ca;
+}
+
+/* Simple linear search, not much in here. */
+struct tcp_congestion_ops *tcp_ca_find_key(u32 key)
+{
+ struct tcp_congestion_ops *e;
+
+ list_for_each_entry_rcu(e, &tcp_cong_list, list) {
+ if (e->key == key)
+ return e;
+ }
+
+ return NULL;
+}
+
/*
* Attach new congestion control algorithm to the list
* of available options.
return -EINVAL;
}
+ ca->key = jhash(ca->name, sizeof(ca->name), strlen(ca->name));
+
spin_lock(&tcp_cong_list_lock);
- if (tcp_ca_find(ca->name)) {
- pr_notice("%s already registered\n", ca->name);
+ if (ca->key == TCP_CA_UNSPEC || tcp_ca_find_key(ca->key)) {
+ pr_notice("%s already registered or non-unique key\n",
+ ca->name);
ret = -EEXIST;
} else {
list_add_tail_rcu(&ca->list, &tcp_cong_list);
spin_lock(&tcp_cong_list_lock);
list_del_rcu(&ca->list);
spin_unlock(&tcp_cong_list_lock);
+
+ /* Wait for outstanding readers to complete before the
+ * module gets removed entirely.
+ *
+ * A try_module_get() should fail by now as our module is
+ * in "going" state since no refs are held anymore and
+ * module_exit() handler being called.
+ */
+ synchronize_rcu();
}
EXPORT_SYMBOL_GPL(tcp_unregister_congestion_control);
+u32 tcp_ca_get_key_by_name(const char *name)
+{
+ const struct tcp_congestion_ops *ca;
+ u32 key;
+
+ might_sleep();
+
+ rcu_read_lock();
+ ca = __tcp_ca_find_autoload(name);
+ key = ca ? ca->key : TCP_CA_UNSPEC;
+ rcu_read_unlock();
+
+ return key;
+}
+EXPORT_SYMBOL_GPL(tcp_ca_get_key_by_name);
+
+char *tcp_ca_get_name_by_key(u32 key, char *buffer)
+{
+ const struct tcp_congestion_ops *ca;
+ char *ret = NULL;
+
+ rcu_read_lock();
+ ca = tcp_ca_find_key(key);
+ if (ca)
+ ret = strncpy(buffer, ca->name,
+ TCP_CA_NAME_MAX);
+ rcu_read_unlock();
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tcp_ca_get_name_by_key);
+
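
Hashing the algorithm name down to a fixed u32 key is what lets the
congestion-control choice be stored in a route metric (RTAX_CC_ALGO) rather
than as a string. A standalone sketch of the name -> key -> ops round trip,
with FNV-1a standing in for the kernel's jhash():

#include <stdint.h>
#include <stdio.h>

#define CA_UNSPEC 0u

struct ca_ops {
	const char *name;
	uint32_t key;
};

static uint32_t name_key(const char *s)	/* FNV-1a, stand-in for jhash() */
{
	uint32_t h = 2166136261u;

	while (*s) {
		h ^= (uint8_t)*s++;
		h *= 16777619u;
	}
	return h ? h : 1;	/* keep CA_UNSPEC (0) reserved */
}

static struct ca_ops cc_list[] = { { "reno", 0 }, { "cubic", 0 } };

static struct ca_ops *find_by_key(uint32_t key)	/* cf. tcp_ca_find_key() */
{
	for (unsigned i = 0; i < sizeof(cc_list) / sizeof(cc_list[0]); i++)
		if (cc_list[i].key == key)
			return &cc_list[i];
	return NULL;
}

int main(void)
{
	for (unsigned i = 0; i < 2; i++)
		cc_list[i].key = name_key(cc_list[i].name);

	uint32_t key = name_key("cubic");	/* store this u32 in a route */
	struct ca_ops *ca = find_by_key(key);	/* recover the ops later */

	printf("%s\n", ca ? ca->name : "unspec");
	return 0;
}
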
/* Assign choice of congestion control. */
void tcp_assign_congestion_control(struct sock *sk)
{
icsk->icsk_ca_ops->init(sk);
}
+static void tcp_reinit_congestion_control(struct sock *sk,
+ const struct tcp_congestion_ops *ca)
+{
+ struct inet_connection_sock *icsk = inet_csk(sk);
+
+ tcp_cleanup_congestion_control(sk);
+ icsk->icsk_ca_ops = ca;
+
+ if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
+ icsk->icsk_ca_ops->init(sk);
+}
+
/* Manage refcounts on socket close. */
void tcp_cleanup_congestion_control(struct sock *sk)
{
int tcp_set_congestion_control(struct sock *sk, const char *name)
{
struct inet_connection_sock *icsk = inet_csk(sk);
- struct tcp_congestion_ops *ca;
+ const struct tcp_congestion_ops *ca;
int err = 0;
- rcu_read_lock();
- ca = tcp_ca_find(name);
+ if (icsk->icsk_ca_dst_locked)
+ return -EPERM;
- /* no change asking for existing value */
+ rcu_read_lock();
+ ca = __tcp_ca_find_autoload(name);
+ /* No change asking for existing value */
if (ca == icsk->icsk_ca_ops)
goto out;
-
-#ifdef CONFIG_MODULES
- /* not found attempt to autoload module */
- if (!ca && capable(CAP_NET_ADMIN)) {
- rcu_read_unlock();
- request_module("tcp_%s", name);
- rcu_read_lock();
- ca = tcp_ca_find(name);
- }
-#endif
if (!ca)
err = -ENOENT;
-
else if (!((ca->flags & TCP_CONG_NON_RESTRICTED) ||
ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)))
err = -EPERM;
-
else if (!try_module_get(ca->owner))
err = -EBUSY;
-
- else {
- tcp_cleanup_congestion_control(sk);
- icsk->icsk_ca_ops = ca;
-
- if (sk->sk_state != TCP_CLOSE && icsk->icsk_ca_ops->init)
- icsk->icsk_ca_ops->init(sk);
- }
+ else
+ tcp_reinit_congestion_control(sk, ca);
out:
rcu_read_unlock();
return err;
}
/* This routine deals with acks during a TLP episode.
+ * We mark the end of a TLP episode on receiving a TLP dupack or when the
+ * ACK is after tlp_high_seq.
* Ref: loss detection algorithm in draft-dukkipati-tcpm-tcp-loss-probe.
*/
static void tcp_process_tlp_ack(struct sock *sk, u32 ack, int flag)
{
struct tcp_sock *tp = tcp_sk(sk);
- bool is_tlp_dupack = (ack == tp->tlp_high_seq) &&
- !(flag & (FLAG_SND_UNA_ADVANCED |
- FLAG_NOT_DUP | FLAG_DATA_SACKED));
- /* Mark the end of TLP episode on receiving TLP dupack or when
- * ack is after tlp_high_seq.
- */
- if (is_tlp_dupack) {
- tp->tlp_high_seq = 0;
+ if (before(ack, tp->tlp_high_seq))
return;
- }
- if (after(ack, tp->tlp_high_seq)) {
+ if (flag & FLAG_DSACKING_ACK) {
+ /* This DSACK means original and TLP probe arrived; no loss */
+ tp->tlp_high_seq = 0;
+ } else if (after(ack, tp->tlp_high_seq)) {
+ /* ACK advances: there was a loss, so reduce cwnd. Reset
+ * tlp_high_seq in tcp_init_cwnd_reduction().
+ */
+ tcp_init_cwnd_reduction(sk);
+ tcp_set_ca_state(sk, TCP_CA_CWR);
+ tcp_end_cwnd_reduction(sk);
+ tcp_try_keep_open(sk);
+ NET_INC_STATS_BH(sock_net(sk),
+ LINUX_MIB_TCPLOSSPROBERECOVERY);
+ } else if (!(flag & (FLAG_SND_UNA_ADVANCED |
+ FLAG_NOT_DUP | FLAG_DATA_SACKED))) {
+ /* Pure dupack: original and TLP probe arrived; no loss */
tp->tlp_high_seq = 0;
- /* Don't reduce cwnd if DSACK arrives for TLP retrans. */
- if (!(flag & FLAG_DSACKING_ACK)) {
- tcp_init_cwnd_reduction(sk);
- tcp_set_ca_state(sk, TCP_CA_CWR);
- tcp_end_cwnd_reduction(sk);
- tcp_try_keep_open(sk);
- NET_INC_STATS_BH(sock_net(sk),
- LINUX_MIB_TCPLOSSPROBERECOVERY);
- }
}
}
}
sk_setup_caps(newsk, dst);
+ tcp_ca_openreq_child(newsk, dst);
+
tcp_sync_mss(newsk, dst_mtu(dst));
newtp->advmss = dst_metric_advmss(dst);
if (tcp_sk(sk)->rx_opt.user_mss &&
if (tcp_metrics_fill_info(skb, tm) < 0)
goto nla_put_failure;
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
tp->ecn_flags = inet_rsk(req)->ecn_ok ? TCP_ECN_OK : 0;
}
+void tcp_ca_openreq_child(struct sock *sk, const struct dst_entry *dst)
+{
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ u32 ca_key = dst_metric(dst, RTAX_CC_ALGO);
+ bool ca_got_dst = false;
+
+ if (ca_key != TCP_CA_UNSPEC) {
+ const struct tcp_congestion_ops *ca;
+
+ rcu_read_lock();
+ ca = tcp_ca_find_key(ca_key);
+ if (likely(ca && try_module_get(ca->owner))) {
+ icsk->icsk_ca_dst_locked = tcp_ca_dst_locked(dst);
+ icsk->icsk_ca_ops = ca;
+ ca_got_dst = true;
+ }
+ rcu_read_unlock();
+ }
+
+ if (!ca_got_dst && !try_module_get(icsk->icsk_ca_ops->owner))
+ tcp_assign_congestion_control(sk);
+
+ tcp_set_ca_state(sk, TCP_CA_Open);
+}
+EXPORT_SYMBOL_GPL(tcp_ca_openreq_child);
+
/* This is not only more efficient than what we used to do, it eliminates
* a lot of code duplication between IPv4/IPv6 SYN recv processing. -DaveM
*
newtp->snd_cwnd = TCP_INIT_CWND;
newtp->snd_cwnd_cnt = 0;
- if (!try_module_get(newicsk->icsk_ca_ops->owner))
- tcp_assign_congestion_control(newsk);
-
- tcp_set_ca_state(newsk, TCP_CA_Open);
tcp_init_xmit_timers(newsk);
__skb_queue_head_init(&newtp->out_of_order_queue);
newtp->write_seq = newtp->pushed_seq = treq->snt_isn + 1;
if (unlikely(!tcp_snd_wnd_test(tp, skb, mss_now)))
break;
- if (tso_segs == 1) {
+ if (tso_segs == 1 || !max_segs) {
if (unlikely(!tcp_nagle_test(tp, skb, mss_now,
(tcp_skb_is_last(sk, skb) ?
nonagle : TCP_NAGLE_PUSH))))
}
limit = mss_now;
- if (tso_segs > 1 && !tcp_urg_mode(tp))
+ if (tso_segs > 1 && max_segs && !tcp_urg_mode(tp))
limit = tcp_mss_split_point(sk, skb, mss_now,
min_t(unsigned int,
cwnd_quota,
}
EXPORT_SYMBOL(tcp_make_synack);
+static void tcp_ca_dst_init(struct sock *sk, const struct dst_entry *dst)
+{
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ const struct tcp_congestion_ops *ca;
+ u32 ca_key = dst_metric(dst, RTAX_CC_ALGO);
+
+ if (ca_key == TCP_CA_UNSPEC)
+ return;
+
+ rcu_read_lock();
+ ca = tcp_ca_find_key(ca_key);
+ if (likely(ca && try_module_get(ca->owner))) {
+ module_put(icsk->icsk_ca_ops->owner);
+ icsk->icsk_ca_dst_locked = tcp_ca_dst_locked(dst);
+ icsk->icsk_ca_ops = ca;
+ }
+ rcu_read_unlock();
+}
+
/* Do all connect socket setups that can be done AF independent. */
static void tcp_connect_init(struct sock *sk)
{
tcp_mtup_init(sk);
tcp_sync_mss(sk, dst_mtu(dst));
+ tcp_ca_dst_init(sk, dst);
+
if (!tp->window_clamp)
tp->window_clamp = dst_metric(dst, RTAX_WINDOW);
tp->advmss = dst_metric_advmss(dst);
*addr_len = sizeof(*sin);
}
if (inet->cmsg_flags)
- ip_cmsg_recv(msg, skb);
+ ip_cmsg_recv_offset(msg, skb, sizeof(struct udphdr));
err = copied;
if (flags & MSG_TRUNC)
if (sk != NULL) {
int ret;
- if (udp_sk(sk)->convert_csum && uh->check && !IS_UDPLITE(sk))
+ if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
inet_compute_pseudo);
skb_gro_pull(skb, sizeof(struct udphdr)); /* pull encapsulating udp header */
skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr));
NAPI_GRO_CB(skb)->proto = uo_priv->offload->ipproto;
- pp = uo_priv->offload->callbacks.gro_receive(head, skb);
+ pp = uo_priv->offload->callbacks.gro_receive(head, skb,
+ uo_priv->offload);
out_unlock:
rcu_read_unlock();
if (uo_priv != NULL) {
NAPI_GRO_CB(skb)->proto = uo_priv->offload->ipproto;
- err = uo_priv->offload->callbacks.gro_complete(skb, nhoff + sizeof(struct udphdr));
+ err = uo_priv->offload->callbacks.gro_complete(skb,
+ nhoff + sizeof(struct udphdr),
+ uo_priv->offload);
}
rcu_read_unlock();
inet_sk(sk)->mc_loop = 0;
/* Enable CHECKSUM_UNNECESSARY to CHECKSUM_COMPLETE conversion */
- udp_set_convert_csum(sk, true);
+ inet_inc_convert_csum(sk);
rcu_assign_sk_user_data(sk, cfg->sk_user_data);
}
EXPORT_SYMBOL_GPL(setup_udp_tunnel_sock);
-int udp_tunnel_xmit_skb(struct socket *sock, struct rtable *rt,
- struct sk_buff *skb, __be32 src, __be32 dst,
- __u8 tos, __u8 ttl, __be16 df, __be16 src_port,
- __be16 dst_port, bool xnet)
+int udp_tunnel_xmit_skb(struct rtable *rt, struct sk_buff *skb,
+ __be32 src, __be32 dst, __u8 tos, __u8 ttl,
+ __be16 df, __be16 src_port, __be16 dst_port,
+ bool xnet, bool nocheck)
{
struct udphdr *uh;
uh->source = src_port;
uh->len = htons(skb->len);
- udp_set_csum(sock->sk->sk_no_check_tx, skb, src, dst, skb->len);
+ udp_set_csum(nocheck, skb, src, dst, skb->len);
- return iptunnel_xmit(sock->sk, rt, skb, src, dst, IPPROTO_UDP,
+ return iptunnel_xmit(skb->sk, rt, skb, src, dst, IPPROTO_UDP,
tos, ttl, df, xnet);
}
EXPORT_SYMBOL_GPL(udp_tunnel_xmit_skb);
.disable_ipv6 = 0,
.accept_dad = 1,
.suppress_frag_ndisc = 1,
+ .accept_ra_mtu = 1,
};
static struct ipv6_devconf ipv6_devconf_dflt __read_mostly = {
.disable_ipv6 = 0,
.accept_dad = 1,
.suppress_frag_ndisc = 1,
+ .accept_ra_mtu = 1,
};
/* Check if a valid qdisc is available */
nla_put_s32(skb, NETCONFA_PROXY_NEIGH, devconf->proxy_ndp) < 0)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF,
NLM_F_MULTI,
- -1) <= 0) {
+ -1) < 0) {
rcu_read_unlock();
goto done;
}
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF, NLM_F_MULTI,
- -1) <= 0)
+ -1) < 0)
goto done;
else
h++;
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
RTM_NEWNETCONF, NLM_F_MULTI,
- -1) <= 0)
+ -1) < 0)
goto done;
else
h++;
if (nla_put_u32(skb, IFA_FLAGS, ifa->flags) < 0)
goto error;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
error:
nlmsg_cancel(skb, nlh);
return -EMSGSIZE;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int inet6_fill_ifacaddr(struct sk_buff *skb, struct ifacaddr6 *ifaca,
return -EMSGSIZE;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
enum addr_type_t {
cb->nlh->nlmsg_seq,
RTM_NEWADDR,
NLM_F_MULTI);
- if (err <= 0)
+ if (err < 0)
break;
nl_dump_check_consistent(cb, nlmsg_hdr(skb));
}
cb->nlh->nlmsg_seq,
RTM_GETMULTICAST,
NLM_F_MULTI);
- if (err <= 0)
+ if (err < 0)
break;
}
break;
cb->nlh->nlmsg_seq,
RTM_GETANYCAST,
NLM_F_MULTI);
- if (err <= 0)
+ if (err < 0)
break;
}
break;
goto cont;
if (in6_dump_addrs(idev, skb, cb, type,
- s_ip_idx, &ip_idx) <= 0)
+ s_ip_idx, &ip_idx) < 0)
goto done;
cont:
idx++;
array[DEVCONF_NDISC_NOTIFY] = cnf->ndisc_notify;
array[DEVCONF_SUPPRESS_FRAG_NDISC] = cnf->suppress_frag_ndisc;
array[DEVCONF_ACCEPT_RA_FROM_LOCAL] = cnf->accept_ra_from_local;
+ array[DEVCONF_ACCEPT_RA_MTU] = cnf->accept_ra_mtu;
}
static inline size_t inet6_ifla6_size(void)
goto nla_put_failure;
nla_nest_end(skb, protoinfo);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (inet6_fill_ifinfo(skb, idev,
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq,
- RTM_NEWLINK, NLM_F_MULTI) <= 0)
+ RTM_NEWLINK, NLM_F_MULTI) < 0)
goto out;
cont:
idx++;
ci.valid_time = ntohl(pinfo->valid);
if (nla_put(skb, PREFIX_CACHEINFO, sizeof(ci), &ci))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "accept_ra_mtu",
+ .data = &ipv6_devconf.accept_ra_mtu,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec,
+ },
{
/* sentinel */
}
return -EMSGSIZE;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int ip6addrlbl_dump(struct sk_buff *skb, struct netlink_callback *cb)
cb->nlh->nlmsg_seq,
RTM_NEWADDRLABEL,
NLM_F_MULTI);
- if (err <= 0)
+ if (err < 0)
break;
}
idx++;
* Dest addr check
*/
- if ((addr_type & IPV6_ADDR_MULTICAST || skb->pkt_type != PACKET_HOST)) {
+ if (addr_type & IPV6_ADDR_MULTICAST || skb->pkt_type != PACKET_HOST) {
if (type != ICMPV6_PKT_TOOBIG &&
!(type == ICMPV6_PARAMPROB &&
code == ICMPV6_UNK_OPTION &&
w->leaf = rt;
return 1;
}
- WARN_ON(res == 0);
}
w->leaf = NULL;
return 0;
RTF_GATEWAY;
}
-static int fib6_commit_metrics(struct dst_entry *dst,
- struct nlattr *mx, int mx_len)
+static void fib6_copy_metrics(u32 *mp, const struct mx6_config *mxc)
{
- struct nlattr *nla;
- int remaining;
- u32 *mp;
+ int i;
- if (dst->flags & DST_HOST) {
- mp = dst_metrics_write_ptr(dst);
- } else {
- mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_ATOMIC);
- if (!mp)
- return -ENOMEM;
- dst_init_metrics(dst, mp, 0);
+ for (i = 0; i < RTAX_MAX; i++) {
+ if (test_bit(i, mxc->mx_valid))
+ mp[i] = mxc->mx[i];
}
+}
- nla_for_each_attr(nla, mx, mx_len, remaining) {
- int type = nla_type(nla);
+static int fib6_commit_metrics(struct dst_entry *dst, struct mx6_config *mxc)
+{
+ if (!mxc->mx)
+ return 0;
- if (type) {
- if (type > RTAX_MAX)
- return -EINVAL;
+ if (dst->flags & DST_HOST) {
+ u32 *mp = dst_metrics_write_ptr(dst);
- mp[type - 1] = nla_get_u32(nla);
- }
+ if (unlikely(!mp))
+ return -ENOMEM;
+
+ fib6_copy_metrics(mp, mxc);
+ } else {
+ dst_init_metrics(dst, mxc->mx, false);
+
+ /* We've stolen mx now. */
+ mxc->mx = NULL;
}
+
return 0;
}
*/
static int fib6_add_rt2node(struct fib6_node *fn, struct rt6_info *rt,
- struct nl_info *info, struct nlattr *mx, int mx_len)
+ struct nl_info *info, struct mx6_config *mxc)
{
struct rt6_info *iter = NULL;
struct rt6_info **ins;
pr_warn("NLM_F_CREATE should be set when creating new route\n");
add:
- if (mx) {
- err = fib6_commit_metrics(&rt->dst, mx, mx_len);
- if (err)
- return err;
- }
+ err = fib6_commit_metrics(&rt->dst, mxc);
+ if (err)
+ return err;
+
rt->dst.rt6_next = iter;
*ins = rt;
rt->rt6i_node = fn;
pr_warn("NLM_F_REPLACE set, but no existing node found!\n");
return -ENOENT;
}
- if (mx) {
- err = fib6_commit_metrics(&rt->dst, mx, mx_len);
- if (err)
- return err;
- }
+
+ err = fib6_commit_metrics(&rt->dst, mxc);
+ if (err)
+ return err;
+
*ins = rt;
rt->rt6i_node = fn;
rt->dst.rt6_next = iter->dst.rt6_next;
* with source addr info in sub-trees
*/
-int fib6_add(struct fib6_node *root, struct rt6_info *rt, struct nl_info *info,
- struct nlattr *mx, int mx_len)
+int fib6_add(struct fib6_node *root, struct rt6_info *rt,
+ struct nl_info *info, struct mx6_config *mxc)
{
struct fib6_node *fn, *pn = NULL;
int err = -ENOMEM;
}
#endif
- err = fib6_add_rt2node(fn, rt, info, mx, mx_len);
+ err = fib6_add_rt2node(fn, rt, info, mxc);
if (!err) {
fib6_start_gc(info->nl_net, rt);
if (!(rt->rt6i_flags & RTF_CACHE))
.dellink = ip6gre_dellink,
.get_size = ip6gre_get_size,
.fill_info = ip6gre_fill_info,
+ .get_link_net = ip6_tnl_get_link_net,
};
static struct rtnl_link_ops ip6gre_tap_ops __read_mostly = {
.changelink = ip6gre_changelink,
.get_size = ip6gre_get_size,
.fill_info = ip6gre_fill_info,
+ .get_link_net = ip6_tnl_get_link_net,
};
/*
return -EMSGSIZE;
}
+struct net *ip6_tnl_get_link_net(const struct net_device *dev)
+{
+ struct ip6_tnl *tunnel = netdev_priv(dev);
+
+ return tunnel->net;
+}
+EXPORT_SYMBOL(ip6_tnl_get_link_net);
+
static const struct nla_policy ip6_tnl_policy[IFLA_IPTUN_MAX + 1] = {
[IFLA_IPTUN_LINK] = { .type = NLA_U32 },
[IFLA_IPTUN_LOCAL] = { .len = sizeof(struct in6_addr) },
.dellink = ip6_tnl_dellink,
.get_size = ip6_tnl_get_size,
.fill_info = ip6_tnl_fill_info,
+ .get_link_net = ip6_tnl_get_link_net,
};
static struct xfrm6_tunnel ip4ip6_handler __read_mostly = {
}
EXPORT_SYMBOL_GPL(udp_sock_create6);
-int udp_tunnel6_xmit_skb(struct socket *sock, struct dst_entry *dst,
- struct sk_buff *skb, struct net_device *dev,
- struct in6_addr *saddr, struct in6_addr *daddr,
- __u8 prio, __u8 ttl, __be16 src_port, __be16 dst_port)
+int udp_tunnel6_xmit_skb(struct dst_entry *dst, struct sk_buff *skb,
+ struct net_device *dev, struct in6_addr *saddr,
+ struct in6_addr *daddr,
+ __u8 prio, __u8 ttl, __be16 src_port,
+ __be16 dst_port, bool nocheck)
{
struct udphdr *uh;
struct ipv6hdr *ip6h;
- struct sock *sk = sock->sk;
__skb_push(skb, sizeof(*uh));
skb_reset_transport_header(skb);
| IPSKB_REROUTED);
skb_dst_set(skb, dst);
- udp6_set_csum(udp_get_no_check6_tx(sk), skb, saddr, daddr, skb->len);
+ udp6_set_csum(nocheck, skb, saddr, daddr, skb->len);
__skb_push(skb, sizeof(*ip6h));
skb_reset_network_header(skb);
.changelink = vti6_changelink,
.get_size = vti6_get_size,
.fill_info = vti6_fill_info,
+ .get_link_net = ip6_tnl_get_link_net,
};
static void __net_exit vti6_destroy_tunnels(struct vti6_net *ip6n)
if (err < 0 && err != -ENOENT)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
lock_sock(sk);
skb = np->pktoptions;
if (skb)
- atomic_inc(&skb->users);
- release_sock(sk);
-
- if (skb) {
ip6_datagram_recv_ctl(sk, &msg, skb);
- kfree_skb(skb);
- } else {
+ release_sock(sk);
+ if (!skb) {
if (np->rxopt.bits.rxinfo) {
struct in6_pktinfo src_info;
src_info.ipi6_ifindex = np->mcast_oif ? np->mcast_oif :
}
}
- if (ndopts.nd_opts_mtu) {
+ if (ndopts.nd_opts_mtu && in6_dev->cnf.accept_ra_mtu) {
__be32 n;
u32 mtu;
memset(&range, 0, sizeof(range));
if (priv->sreg_proto_min) {
- range.min_proto.all = (__force __be16)
- data[priv->sreg_proto_min].data[0];
- range.max_proto.all = (__force __be16)
- data[priv->sreg_proto_max].data[0];
+ range.min_proto.all =
+ *(__be16 *)&data[priv->sreg_proto_min].data[0];
+ range.max_proto.all =
+ *(__be16 *)&data[priv->sreg_proto_max].data[0];
range.flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
}
*/
static int __ip6_ins_rt(struct rt6_info *rt, struct nl_info *info,
- struct nlattr *mx, int mx_len)
+ struct mx6_config *mxc)
{
int err;
struct fib6_table *table;
table = rt->rt6i_table;
write_lock_bh(&table->tb6_lock);
- err = fib6_add(&table->tb6_root, rt, info, mx, mx_len);
+ err = fib6_add(&table->tb6_root, rt, info, mxc);
write_unlock_bh(&table->tb6_lock);
return err;
int ip6_ins_rt(struct rt6_info *rt)
{
- struct nl_info info = {
- .nl_net = dev_net(rt->dst.dev),
- };
- return __ip6_ins_rt(rt, &info, NULL, 0);
+ struct nl_info info = { .nl_net = dev_net(rt->dst.dev), };
+ struct mx6_config mxc = { .mx = NULL, };
+
+ return __ip6_ins_rt(rt, &info, &mxc);
}
static struct rt6_info *rt6_alloc_cow(struct rt6_info *ort,
return entries > rt_max_size;
}
-/*
- *
- */
+static int ip6_convert_metrics(struct mx6_config *mxc,
+ const struct fib6_config *cfg)
+{
+ struct nlattr *nla;
+ int remaining;
+ u32 *mp;
+
+ if (cfg->fc_mx == NULL)
+ return 0;
+
+ mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL);
+ if (unlikely(!mp))
+ return -ENOMEM;
+
+ nla_for_each_attr(nla, cfg->fc_mx, cfg->fc_mx_len, remaining) {
+ int type = nla_type(nla);
+
+ if (type) {
+ u32 val;
+
+ if (unlikely(type > RTAX_MAX))
+ goto err;
+ if (type == RTAX_CC_ALGO) {
+ char tmp[TCP_CA_NAME_MAX];
+
+ nla_strlcpy(tmp, nla, sizeof(tmp));
+ val = tcp_ca_get_key_by_name(tmp);
+ if (val == TCP_CA_UNSPEC)
+ goto err;
+ } else {
+ val = nla_get_u32(nla);
+ }
+
+ mp[type - 1] = val;
+ __set_bit(type - 1, mxc->mx_valid);
+ }
+ }
+
+ mxc->mx = mp;
+
+ return 0;
+ err:
+ kfree(mp);
+ return -EINVAL;
+}
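
ip6_convert_metrics() materialises the netlink metric attributes once, into
a plain array plus a validity bitmap, so fib6_commit_metrics() can copy or
steal them without re-parsing. The core of that shape as a standalone sketch
(types simplified; not the kernel's mx6_config):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define RTAX_MAX 16

struct mx_config {
	uint32_t *mx;		/* one slot per metric type, 1-based */
	unsigned long mx_valid;	/* bit i set => mx[i] is meaningful */
};

struct attr {
	int type;
	uint32_t val;
};

static int convert_metrics(struct mx_config *mxc,
			   const struct attr *attrs, int n)
{
	uint32_t *mp = calloc(RTAX_MAX, sizeof(*mp));

	if (!mp)
		return -1;

	for (int i = 0; i < n; i++) {
		int type = attrs[i].type;

		if (type <= 0 || type > RTAX_MAX) {
			free(mp);
			return -1;	/* reject unknown metric types */
		}
		mp[type - 1] = attrs[i].val;
		mxc->mx_valid |= 1ul << (type - 1);
	}
	mxc->mx = mp;	/* caller owns mp and may hand it off ("steal") */
	return 0;
}

int main(void)
{
	struct mx_config mxc = { 0 };
	struct attr attrs[] = { { 2, 1400 }, { 4, 65535 } };

	if (convert_metrics(&mxc, attrs, 2) == 0) {
		for (int i = 0; i < RTAX_MAX; i++)
			if (mxc.mx_valid & (1ul << i))
				printf("metric %d = %u\n", i + 1, mxc.mx[i]);
		free(mxc.mx);
	}
	return 0;
}
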
int ip6_route_add(struct fib6_config *cfg)
{
struct net_device *dev = NULL;
struct inet6_dev *idev = NULL;
struct fib6_table *table;
+ struct mx6_config mxc = { .mx = NULL, };
int addr_type;
if (cfg->fc_dst_len > 128 || cfg->fc_src_len > 128)
cfg->fc_nlinfo.nl_net = dev_net(dev);
- return __ip6_ins_rt(rt, &cfg->fc_nlinfo, cfg->fc_mx, cfg->fc_mx_len);
+ err = ip6_convert_metrics(&mxc, cfg);
+ if (err)
+ goto out;
+ err = __ip6_ins_rt(rt, &cfg->fc_nlinfo, &mxc);
+
+ kfree(mxc.mx);
+ return err;
out:
if (dev)
dev_put(dev);
+ nla_total_size(4) /* RTA_OIF */
+ nla_total_size(4) /* RTA_PRIORITY */
+ RTAX_MAX * nla_total_size(4) /* RTA_METRICS */
- + nla_total_size(sizeof(struct rta_cacheinfo));
+ + nla_total_size(sizeof(struct rta_cacheinfo))
+ + nla_total_size(TCP_CA_NAME_MAX); /* RTAX_CC_ALGO */
}
static int rt6_fill_node(struct net *net,
if (rtnl_put_cacheinfo(skb, &rt->dst, 0, expires, rt->dst.error) < 0)
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
.get_size = ipip6_get_size,
.fill_info = ipip6_fill_info,
.dellink = ipip6_dellink,
+ .get_link_net = ip_tunnel_get_link_net,
};
static struct xfrm_tunnel sit_handler __read_mostly = {
inet_csk(newsk)->icsk_ext_hdr_len = (newnp->opt->opt_nflen +
newnp->opt->opt_flen);
+ tcp_ca_openreq_child(newsk, dst);
+
tcp_sync_mss(newsk, dst_mtu(dst));
newtp->advmss = dst_metric_advmss(dst);
if (tcp_sk(sk)->rx_opt.user_mss &&
goto csum_error;
}
- if (udp_sk(sk)->convert_csum && uh->check && !IS_UDPLITE(sk))
+ if (inet_get_convert_csum(sk) && uh->check && !IS_UDPLITE(sk))
skb_checksum_try_convert(skb, IPPROTO_UDP, uh->check,
ip6_compute_pseudo);
}
out:
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
if (l2tp_nl_tunnel_send(skb, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, NLM_F_MULTI,
- tunnel, L2TP_CMD_TUNNEL_GET) <= 0)
+ tunnel, L2TP_CMD_TUNNEL_GET) < 0)
goto out;
ti++;
goto nla_put_failure;
nla_nest_end(skb, nest);
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
if (l2tp_nl_session_send(skb, NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, NLM_F_MULTI,
- session, L2TP_CMD_SESSION_GET) <= 0)
+ session, L2TP_CMD_SESSION_GET) < 0)
break;
si++;
struct net_device *err;
err = ieee802154_if_add(local, name, type, extended_addr);
- if (IS_ERR(err))
- return PTR_ERR(err);
-
- return 0;
+ return PTR_ERR_OR_ZERO(err);
}
static int
if (ip_vs_genl_fill_service(skb, svc) < 0)
goto nla_put_failure;
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
if (ip_vs_genl_fill_dest(skb, dest) < 0)
goto nla_put_failure;
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
if (ip_vs_genl_fill_daemon(skb, state, mcast_ifn, syncid))
goto nla_put_failure;
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
struct nf_conn *ct;
struct net *net;
+ *diff = 0;
+
#ifdef CONFIG_IP_VS_IPV6
/* This application helper doesn't work with IPv6 yet,
* so turn this into a no-op for IPv6 packets
return 1;
#endif
- *diff = 0;
-
/* Only useful for established sessions */
if (cp->state != IP_VS_TCP_S_ESTABLISHED)
return 1;
struct ip_vs_conn *n_cp;
struct net *net;
+ /* no diff required for incoming packets */
+ *diff = 0;
+
#ifdef CONFIG_IP_VS_IPV6
/* This application helper doesn't work with IPv6 yet,
* so turn this into a no-op for IPv6 packets
return 1;
#endif
- /* no diff required for incoming packets */
- *diff = 0;
-
/* Only useful for established sessions */
if (cp->state != IP_VS_TCP_S_ESTABLISHED)
return 1;
*/
NF_CT_ASSERT(!nf_ct_is_confirmed(ct));
pr_debug("Confirming conntrack %p\n", ct);
- /* We have to check the DYING flag inside the lock to prevent
- a race against nf_ct_get_next_corpse() possibly called from
- user context, else we insert an already 'dead' hash, blocking
- further use of that particular connection -JM */
+ /* We have to check the DYING flag after unlink to prevent
+	 * a race against nf_ct_get_next_corpse(), possibly called from
+	 * user context; otherwise we would insert an already 'dead' hash,
+	 * blocking further use of that particular connection. -JM
+ */
+ nf_ct_del_from_dying_or_unconfirmed_list(ct);
- if (unlikely(nf_ct_is_dying(ct))) {
- nf_conntrack_double_unlock(hash, reply_hash);
- local_bh_enable();
- return NF_ACCEPT;
- }
+ if (unlikely(nf_ct_is_dying(ct)))
+ goto out;
/* See if there's one in the list already, including reverse:
NAT could have grabbed it without realizing, since we're
zone == nf_ct_zone(nf_ct_tuplehash_to_ctrack(h)))
goto out;
- nf_ct_del_from_dying_or_unconfirmed_list(ct);
-
/* Timer relative to confirmation time, not original
setting time, otherwise we'd get timer wrap in
weird delay cases. */
return NF_ACCEPT;
out:
+ nf_ct_add_to_dying_list(ct);
nf_conntrack_double_unlock(hash, reply_hash);
NF_CT_STAT_INC(net, insert_failed);
local_bh_enable();
}
EXPORT_SYMBOL_GPL(nf_ct_free_hashtable);
-void nf_conntrack_flush_report(struct net *net, u32 portid, int report)
-{
- nf_ct_iterate_cleanup(net, kill_all, NULL, portid, report);
-}
-EXPORT_SYMBOL_GPL(nf_conntrack_flush_report);
-
static int untrack_refs(void)
{
int cnt = 0, cpu;
for (i = 0; i < CONNTRACK_LOCKS; i++)
spin_lock_init(&nf_conntrack_locks[i]);
- /* Idea from tcp.c: use 1/16384 of memory. On i386: 32MB
- * machine has 512 buckets. >= 1GB machines have 16384 buckets. */
if (!nf_conntrack_htable_size) {
+ /* Idea from tcp.c: use 1/16384 of memory.
+ * On i386: 32MB machine has 512 buckets.
+ * >= 1GB machines have 16384 buckets.
+ * >= 4GB machines have 65536 buckets.
+ */
nf_conntrack_htable_size
= (((totalram_pages << PAGE_SHIFT) / 16384)
/ sizeof(struct hlist_head));
- if (totalram_pages > (1024 * 1024 * 1024 / PAGE_SIZE))
+ if (totalram_pages > (4 * (1024 * 1024 * 1024 / PAGE_SIZE)))
+ nf_conntrack_htable_size = 65536;
+ else if (totalram_pages > (1024 * 1024 * 1024 / PAGE_SIZE))
nf_conntrack_htable_size = 16384;
if (nf_conntrack_htable_size < 32)
nf_conntrack_htable_size = 32;
return 0;
}
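
The sizing comment can be checked by hand: 32 MB / 16384 = 2048 bytes of
hash heads, which with 4-byte hlist heads on i386 gives the quoted 512
buckets. A quick standalone check of the clamped rule (the head size is a
parameter here; the kernel uses sizeof(struct hlist_head)):

#include <stdio.h>

/* Sizing rule: 1/16384 of RAM worth of hash heads, clamped to [32, 65536]. */
static unsigned long long htable_size(unsigned long long ram,
				      unsigned int head_size)
{
	unsigned long long n = ram / 16384 / head_size;

	if (ram > 4ull << 30)
		n = 65536;
	else if (ram > 1ull << 30)
		n = 16384;
	if (n < 32)
		n = 32;
	return n;
}

int main(void)
{
	/* 32 MB with 4-byte heads (i386): 32M/16384/4 = 512 buckets. */
	printf("32MB -> %llu\n", htable_size(32ull << 20, 4));
	printf("2GB  -> %llu\n", htable_size(2ull << 30, 4));
	printf("8GB  -> %llu\n", htable_size(8ull << 30, 4));
	return 0;
}
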
-struct ctnetlink_dump_filter {
+struct ctnetlink_filter {
struct {
u_int32_t val;
u_int32_t mask;
} mark;
};
+static struct ctnetlink_filter *
+ctnetlink_alloc_filter(const struct nlattr * const cda[])
+{
+#ifdef CONFIG_NF_CONNTRACK_MARK
+ struct ctnetlink_filter *filter;
+
+ filter = kzalloc(sizeof(*filter), GFP_KERNEL);
+ if (filter == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK]));
+ filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK]));
+
+ return filter;
+#else
+ return ERR_PTR(-EOPNOTSUPP);
+#endif
+}
+
+static int ctnetlink_filter_match(struct nf_conn *ct, void *data)
+{
+ struct ctnetlink_filter *filter = data;
+
+ if (filter == NULL)
+ return 1;
+
+#ifdef CONFIG_NF_CONNTRACK_MARK
+ if ((ct->mark & filter->mark.mask) == filter->mark.val)
+ return 1;
+#endif
+
+ return 0;
+}
+
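
With this change the dump path and the new flush path share one predicate: a
heap-allocated filter plus a match callback handed to the iterator, where a
NULL filter matches everything. The generic iterate-with-predicate shape, as
a small standalone sketch:

#include <stdio.h>

struct item {
	unsigned int mark;
};

struct filter {
	unsigned int val, mask;
};

/* Return 1 when the item matches; a NULL filter matches everything. */
static int filter_match(const struct item *it, void *data)
{
	const struct filter *f = data;

	if (!f)
		return 1;
	return (it->mark & f->mask) == f->val;
}

static void iterate_cleanup(struct item *items, int n,
			    int (*match)(const struct item *, void *),
			    void *data)
{
	for (int i = 0; i < n; i++)
		if (match(&items[i], data))
			printf("delete item %d (mark %#x)\n", i, items[i].mark);
}

int main(void)
{
	struct item items[] = { { 0x1 }, { 0x3 }, { 0x2 } };
	struct filter f = { .val = 0x1, .mask = 0x1 };

	iterate_cleanup(items, 3, filter_match, &f);	/* deletes 0 and 1 */
	return 0;
}
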
static int
ctnetlink_dump_table(struct sk_buff *skb, struct netlink_callback *cb)
{
int res;
spinlock_t *lockp;
-#ifdef CONFIG_NF_CONNTRACK_MARK
- const struct ctnetlink_dump_filter *filter = cb->data;
-#endif
-
last = (struct nf_conn *)cb->args[1];
local_bh_disable();
continue;
cb->args[1] = 0;
}
-#ifdef CONFIG_NF_CONNTRACK_MARK
- if (filter && !((ct->mark & filter->mark.mask) ==
- filter->mark.val)) {
+ if (!ctnetlink_filter_match(ct, cb->data))
continue;
- }
-#endif
+
rcu_read_lock();
res =
ctnetlink_fill_info(skb, NETLINK_CB(cb->skb).portid,
.len = NF_CT_LABELS_MAX_SIZE },
};
+static int ctnetlink_flush_conntrack(struct net *net,
+ const struct nlattr * const cda[],
+ u32 portid, int report)
+{
+ struct ctnetlink_filter *filter = NULL;
+
+ if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
+ filter = ctnetlink_alloc_filter(cda);
+ if (IS_ERR(filter))
+ return PTR_ERR(filter);
+ }
+
+ nf_ct_iterate_cleanup(net, ctnetlink_filter_match, filter,
+ portid, report);
+ kfree(filter);
+
+ return 0;
+}
+
static int
ctnetlink_del_conntrack(struct sock *ctnl, struct sk_buff *skb,
const struct nlmsghdr *nlh,
else if (cda[CTA_TUPLE_REPLY])
err = ctnetlink_parse_tuple(cda, &tuple, CTA_TUPLE_REPLY, u3);
else {
- /* Flush the whole table */
- nf_conntrack_flush_report(net,
- NETLINK_CB(skb).portid,
- nlmsg_report(nlh));
- return 0;
+ return ctnetlink_flush_conntrack(net, cda,
+ NETLINK_CB(skb).portid,
+ nlmsg_report(nlh));
}
if (err < 0)
.dump = ctnetlink_dump_table,
.done = ctnetlink_done,
};
-#ifdef CONFIG_NF_CONNTRACK_MARK
+
if (cda[CTA_MARK] && cda[CTA_MARK_MASK]) {
- struct ctnetlink_dump_filter *filter;
+ struct ctnetlink_filter *filter;
- filter = kzalloc(sizeof(struct ctnetlink_dump_filter),
- GFP_ATOMIC);
- if (filter == NULL)
- return -ENOMEM;
+ filter = ctnetlink_alloc_filter(cda);
+ if (IS_ERR(filter))
+ return PTR_ERR(filter);
- filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK]));
- filter->mark.mask =
- ntohl(nla_get_be32(cda[CTA_MARK_MASK]));
c.data = filter;
}
-#endif
return netlink_dump_start(ctnl, skb, nlh, &c);
}
new_end_seq = htonl(ntohl(sack->end_seq) -
seq->offset_before);
- pr_debug("sack_adjust: start_seq: %d->%d, end_seq: %d->%d\n",
- ntohl(sack->start_seq), new_start_seq,
- ntohl(sack->end_seq), new_end_seq);
+ pr_debug("sack_adjust: start_seq: %u->%u, end_seq: %u->%u\n",
+ ntohl(sack->start_seq), ntohl(new_start_seq),
+ ntohl(sack->end_seq), ntohl(new_end_seq));
inet_proto_csum_replace4(&tcph->check, skb,
sack->start_seq, new_start_seq, 0);
nf_log_sysctl_table[i].procname =
nf_log_sysctl_fnames[i];
nf_log_sysctl_table[i].data = NULL;
- nf_log_sysctl_table[i].maxlen =
- NFLOGGER_NAME_LEN * sizeof(char);
+ nf_log_sysctl_table[i].maxlen = NFLOGGER_NAME_LEN;
nf_log_sysctl_table[i].mode = 0644;
nf_log_sysctl_table[i].proc_handler =
nf_log_proc_dostring;
nla_put_be32(skb, NFTA_TABLE_USE, htonl(table->use)))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
struct nft_chain *chain, *nc;
struct nft_set *set, *ns;
- list_for_each_entry_safe(chain, nc, &ctx->table->chains, list) {
+ list_for_each_entry(chain, &ctx->table->chains, list) {
ctx->chain = chain;
err = nft_delrule_by_chain(ctx);
if (err < 0)
goto out;
-
- err = nft_delchain(ctx);
- if (err < 0)
- goto out;
}
list_for_each_entry_safe(set, ns, &ctx->table->sets, list) {
goto out;
}
+ list_for_each_entry_safe(chain, nc, &ctx->table->chains, list) {
+ ctx->chain = chain;
+
+ err = nft_delchain(ctx);
+ if (err < 0)
+ goto out;
+ }
+
err = nft_deltable(ctx);
out:
return err;
if (nla_put_be32(skb, NFTA_CHAIN_USE, htonl(chain->use)))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
nla_put(skb, NFTA_RULE_USERDATA, rule->ulen, nft_userdata(rule)))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
goto nla_put_failure;
nla_nest_end(skb, desc);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
nla_nest_end(skb, nest);
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
if (nla_put_be32(skb, NFTA_GEN_ID, htonl(net->nft.base_seq)))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_trim(skb, nlh);
static void nfnetlink_rcv_batch(struct sk_buff *skb, struct nlmsghdr *nlh,
u_int16_t subsys_id)
{
- struct sk_buff *nskb, *oskb = skb;
+ struct sk_buff *oskb = skb;
struct net *net = sock_net(skb->sk);
const struct nfnetlink_subsystem *ss;
const struct nfnl_callback *nc;
if (subsys_id >= NFNL_SUBSYS_COUNT)
return netlink_ack(skb, nlh, -EINVAL);
replay:
- nskb = netlink_skb_clone(oskb, GFP_KERNEL);
- if (!nskb)
+ skb = netlink_skb_clone(oskb, GFP_KERNEL);
+ if (!skb)
return netlink_ack(oskb, nlh, -ENOMEM);
- nskb->sk = oskb->sk;
- skb = nskb;
+ skb->sk = oskb->sk;
nfnl_lock(subsys_id);
ss = rcu_dereference_protected(table[subsys_id].subsys,
{
nfnl_unlock(subsys_id);
netlink_ack(skb, nlh, -EOPNOTSUPP);
- return kfree_skb(nskb);
+ return kfree_skb(skb);
}
}
nlh = nlmsg_hdr(skb);
err = 0;
- if (nlh->nlmsg_len < NLMSG_HDRLEN) {
+ if (nlmsg_len(nlh) < sizeof(struct nfgenmsg) ||
+ skb->len < nlh->nlmsg_len) {
err = -EINVAL;
goto ack;
}
nfnl_err_reset(&err_list);
ss->abort(oskb);
nfnl_unlock(subsys_id);
- kfree_skb(nskb);
+ kfree_skb(skb);
goto replay;
}
}
nfnl_err_deliver(&err_list, oskb);
nfnl_unlock(subsys_id);
- kfree_skb(nskb);
+ kfree_skb(skb);
}
static void nfnetlink_rcv(struct sk_buff *skb)
int type;
if (group <= NFNLGRP_NONE || group > NFNLGRP_MAX)
- return -EINVAL;
+ return 0;
type = nfnl_group2type[group];
static int
nfnl_cthelper_from_nlattr(struct nlattr *attr, struct nf_conn *ct)
{
- const struct nf_conn_help *help = nfct_help(ct);
+ struct nf_conn_help *help = nfct_help(ct);
if (attr == NULL)
return -EINVAL;
if (help->helper->data_len == 0)
return -EINVAL;
- memcpy(&help->data, nla_data(attr), help->helper->data_len);
+ memcpy(help->data, nla_data(attr), help->helper->data_len);
return 0;
}
const struct nft_data *key,
struct nft_data *data)
{
- const struct rhashtable *priv = nft_set_priv(set);
+ struct rhashtable *priv = nft_set_priv(set);
const struct nft_hash_elem *he;
he = rhashtable_lookup(priv, key);
const struct nft_set_elem *elem)
{
struct rhashtable *priv = nft_set_priv(set);
- struct rhash_head *he, __rcu **pprev;
- pprev = elem->cookie;
- he = rht_dereference((*pprev), priv);
+ rhashtable_remove(priv, elem->cookie);
+ synchronize_rcu();
+ kfree(elem->cookie);
+}
- rhashtable_remove_pprev(priv, he, pprev);
+struct nft_compare_arg {
+ const struct nft_set *set;
+ struct nft_set_elem *elem;
+};
- synchronize_rcu();
- kfree(he);
+static bool nft_hash_compare(void *ptr, void *arg)
+{
+ struct nft_hash_elem *he = ptr;
+ struct nft_compare_arg *x = arg;
+
+ if (!nft_data_cmp(&he->key, &x->elem->key, x->set->klen)) {
+ x->elem->cookie = he;
+ x->elem->flags = 0;
+ if (x->set->flags & NFT_SET_MAP)
+ nft_data_copy(&x->elem->data, he->data);
+
+ return true;
+ }
+
+ return false;
}
static int nft_hash_get(const struct nft_set *set, struct nft_set_elem *elem)
{
- const struct rhashtable *priv = nft_set_priv(set);
- const struct bucket_table *tbl = rht_dereference_rcu(priv->tbl, priv);
- struct rhash_head __rcu * const *pprev;
- struct nft_hash_elem *he;
- u32 h;
-
- h = rhashtable_hashfn(priv, &elem->key, set->klen);
- pprev = &tbl->buckets[h];
- rht_for_each_entry_rcu(he, tbl->buckets[h], node) {
- if (nft_data_cmp(&he->key, &elem->key, set->klen)) {
- pprev = &he->node.next;
- continue;
- }
+ struct rhashtable *priv = nft_set_priv(set);
+ struct nft_compare_arg arg = {
+ .set = set,
+ .elem = elem,
+ };
- elem->cookie = (void *)pprev;
- elem->flags = 0;
- if (set->flags & NFT_SET_MAP)
- nft_data_copy(&elem->data, he->data);
+ if (rhashtable_lookup_compare(priv, &elem->key,
+ &nft_hash_compare, &arg))
return 0;
- }
+
return -ENOENT;
}
static void nft_hash_walk(const struct nft_ctx *ctx, const struct nft_set *set,
struct nft_set_iter *iter)
{
- const struct rhashtable *priv = nft_set_priv(set);
+ struct rhashtable *priv = nft_set_priv(set);
const struct bucket_table *tbl;
const struct nft_hash_elem *he;
struct nft_set_elem elem;
tbl = rht_dereference_rcu(priv->tbl, priv);
for (i = 0; i < tbl->size; i++) {
- rht_for_each_entry_rcu(he, tbl->buckets[i], node) {
+ struct rhash_head *pos;
+
+ rht_for_each_entry_rcu(he, pos, tbl, i, node) {
if (iter->count < iter->skip)
goto cont;
return sizeof(struct rhashtable);
}
-#ifdef CONFIG_PROVE_LOCKING
-static int lockdep_nfnl_lock_is_held(void *parent)
-{
- return lockdep_nfnl_is_held(NFNL_SUBSYS_NFTABLES);
-}
-#endif
-
static int nft_hash_init(const struct nft_set *set,
const struct nft_set_desc *desc,
const struct nlattr * const tb[])
.hashfn = jhash,
.grow_decision = rht_grow_above_75,
.shrink_decision = rht_shrink_below_30,
-#ifdef CONFIG_PROVE_LOCKING
- .mutex_is_held = lockdep_nfnl_lock_is_held,
-#endif
};
return rhashtable_init(priv, &params);
static void nft_hash_destroy(const struct nft_set *set)
{
- const struct rhashtable *priv = nft_set_priv(set);
- const struct bucket_table *tbl = priv->tbl;
- struct nft_hash_elem *he, *next;
+ struct rhashtable *priv = nft_set_priv(set);
+ const struct bucket_table *tbl;
+ struct nft_hash_elem *he;
+ struct rhash_head *pos, *next;
unsigned int i;
+	/* Stop any async resizing that may be in flight */
+ priv->being_destroyed = true;
+ mutex_lock(&priv->mutex);
+
+ tbl = rht_dereference(priv->tbl, priv);
for (i = 0; i < tbl->size; i++) {
- for (he = rht_entry(tbl->buckets[i], struct nft_hash_elem, node);
- he != NULL; he = next) {
- next = rht_entry(he->node.next, struct nft_hash_elem, node);
+ rht_for_each_entry_safe(he, pos, next, tbl, i, node)
nft_hash_elem_destroy(set, he);
- }
}
+ mutex_unlock(&priv->mutex);
+
rhashtable_destroy(priv);
}
}
if (priv->sreg_proto_min) {
- range.min_proto.all = (__force __be16)
- data[priv->sreg_proto_min].data[0];
- range.max_proto.all = (__force __be16)
- data[priv->sreg_proto_max].data[0];
+ range.min_proto.all =
+ *(__be16 *)&data[priv->sreg_proto_min].data[0];
+ range.max_proto.all =
+ *(__be16 *)&data[priv->sreg_proto_max].data[0];
range.flags |= NF_NAT_RANGE_PROTO_SPECIFIED;
}
rcu_read_lock();
list_for_each_entry_rcu(kf, &xt_osf_fingers[df], finger_entry) {
+ int foptsize, optnum;
+
f = &kf->finger;
if (!(info->flags & XT_OSF_LOG) && strcmp(info->genre, f->genre))
optp = _optp;
fmatch = FMATCH_WRONG;
- if (totlen == f->ss && xt_osf_ttl(skb, info, f->ttl)) {
- int foptsize, optnum;
+ if (totlen != f->ss || !xt_osf_ttl(skb, info, f->ttl))
+ continue;
- /*
- * Should not happen if userspace parser was written correctly.
- */
- if (f->wss.wc >= OSF_WSS_MAX)
- continue;
+ /*
+ * Should not happen if userspace parser was written correctly.
+ */
+ if (f->wss.wc >= OSF_WSS_MAX)
+ continue;
- /* Check options */
+ /* Check options */
- foptsize = 0;
- for (optnum = 0; optnum < f->opt_num; ++optnum)
- foptsize += f->opt[optnum].length;
+ foptsize = 0;
+ for (optnum = 0; optnum < f->opt_num; ++optnum)
+ foptsize += f->opt[optnum].length;
- if (foptsize > MAX_IPOPTLEN ||
- optsize > MAX_IPOPTLEN ||
- optsize != foptsize)
- continue;
+ if (foptsize > MAX_IPOPTLEN ||
+ optsize > MAX_IPOPTLEN ||
+ optsize != foptsize)
+ continue;
- check_WSS = f->wss.wc;
+ check_WSS = f->wss.wc;
- for (optnum = 0; optnum < f->opt_num; ++optnum) {
- if (f->opt[optnum].kind == (*optp)) {
- __u32 len = f->opt[optnum].length;
- const __u8 *optend = optp + len;
- int loop_cont = 0;
+ for (optnum = 0; optnum < f->opt_num; ++optnum) {
+ if (f->opt[optnum].kind == (*optp)) {
+ __u32 len = f->opt[optnum].length;
+ const __u8 *optend = optp + len;
+ int loop_cont = 0;
- fmatch = FMATCH_OK;
+ fmatch = FMATCH_OK;
- switch (*optp) {
- case OSFOPT_MSS:
- mss = optp[3];
- mss <<= 8;
- mss |= optp[2];
+ switch (*optp) {
+ case OSFOPT_MSS:
+ mss = optp[3];
+ mss <<= 8;
+ mss |= optp[2];
- mss = ntohs((__force __be16)mss);
- break;
- case OSFOPT_TS:
- loop_cont = 1;
- break;
- }
+ mss = ntohs((__force __be16)mss);
+ break;
+ case OSFOPT_TS:
+ loop_cont = 1;
+ break;
+ }
- optp = optend;
- } else
- fmatch = FMATCH_OPT_WRONG;
+ optp = optend;
+ } else
+ fmatch = FMATCH_OPT_WRONG;
- if (fmatch != FMATCH_OK)
- break;
- }
+ if (fmatch != FMATCH_OK)
+ break;
+ }
- if (fmatch != FMATCH_OPT_WRONG) {
- fmatch = FMATCH_WRONG;
+ if (fmatch != FMATCH_OPT_WRONG) {
+ fmatch = FMATCH_WRONG;
- switch (check_WSS) {
- case OSF_WSS_PLAIN:
- if (f->wss.val == 0 || window == f->wss.val)
- fmatch = FMATCH_OK;
- break;
- case OSF_WSS_MSS:
- /*
- * Some smart modems decrease mangle MSS to
- * SMART_MSS_2, so we check standard, decreased
- * and the one provided in the fingerprint MSS
- * values.
- */
+ switch (check_WSS) {
+ case OSF_WSS_PLAIN:
+ if (f->wss.val == 0 || window == f->wss.val)
+ fmatch = FMATCH_OK;
+ break;
+ case OSF_WSS_MSS:
+ /*
+				 * Some smart modems mangle (decrease) the MSS
+				 * to SMART_MSS_2, so we check the standard,
+				 * the decreased and the fingerprint-provided
+				 * MSS values.
+ */
#define SMART_MSS_1 1460
#define SMART_MSS_2 1448
- if (window == f->wss.val * mss ||
- window == f->wss.val * SMART_MSS_1 ||
- window == f->wss.val * SMART_MSS_2)
- fmatch = FMATCH_OK;
- break;
- case OSF_WSS_MTU:
- if (window == f->wss.val * (mss + 40) ||
- window == f->wss.val * (SMART_MSS_1 + 40) ||
- window == f->wss.val * (SMART_MSS_2 + 40))
- fmatch = FMATCH_OK;
- break;
- case OSF_WSS_MODULO:
- if ((window % f->wss.val) == 0)
- fmatch = FMATCH_OK;
- break;
- }
+ if (window == f->wss.val * mss ||
+ window == f->wss.val * SMART_MSS_1 ||
+ window == f->wss.val * SMART_MSS_2)
+ fmatch = FMATCH_OK;
+ break;
+ case OSF_WSS_MTU:
+ if (window == f->wss.val * (mss + 40) ||
+ window == f->wss.val * (SMART_MSS_1 + 40) ||
+ window == f->wss.val * (SMART_MSS_2 + 40))
+ fmatch = FMATCH_OK;
+ break;
+ case OSF_WSS_MODULO:
+ if ((window % f->wss.val) == 0)
+ fmatch = FMATCH_OK;
+ break;
}
+ }
- if (fmatch != FMATCH_OK)
- continue;
+ if (fmatch != FMATCH_OK)
+ continue;
- fcount++;
+ fcount++;
- if (info->flags & XT_OSF_LOG)
- nf_log_packet(net, p->family, p->hooknum, skb,
- p->in, p->out, NULL,
- "%s [%s:%s] : %pI4:%d -> %pI4:%d hops=%d\n",
- f->genre, f->version, f->subtype,
- &ip->saddr, ntohs(tcp->source),
- &ip->daddr, ntohs(tcp->dest),
- f->ttl - ip->ttl);
+ if (info->flags & XT_OSF_LOG)
+ nf_log_packet(net, p->family, p->hooknum, skb,
+ p->in, p->out, NULL,
+ "%s [%s:%s] : %pI4:%d -> %pI4:%d hops=%d\n",
+ f->genre, f->version, f->subtype,
+ &ip->saddr, ntohs(tcp->source),
+ &ip->daddr, ntohs(tcp->dest),
+ f->ttl - ip->ttl);
- if ((info->flags & XT_OSF_LOG) &&
- info->loglevel == XT_OSF_LOGLEVEL_FIRST)
- break;
- }
+ if ((info->flags & XT_OSF_LOG) &&
+ info->loglevel == XT_OSF_LOGLEVEL_FIRST)
+ break;
}
rcu_read_unlock();
if (ret_val != 0)
goto listall_cb_failure;
- return genlmsg_end(cb_arg->skb, data);
+ genlmsg_end(cb_arg->skb, data);
+ return 0;
listall_cb_failure:
genlmsg_cancel(cb_arg->skb, data);
goto listall_cb_failure;
cb_arg->seq++;
- return genlmsg_end(cb_arg->skb, data);
+ genlmsg_end(cb_arg->skb, data);
+ return 0;
listall_cb_failure:
genlmsg_cancel(cb_arg->skb, data);
if (ret_val != 0)
goto protocols_cb_failure;
- return genlmsg_end(skb, data);
+ genlmsg_end(skb, data);
+ return 0;
protocols_cb_failure:
genlmsg_cancel(skb, data);
goto list_cb_failure;
cb_arg->seq++;
- return genlmsg_end(cb_arg->skb, data);
+ genlmsg_end(cb_arg->skb, data);
+ return 0;
list_cb_failure:
genlmsg_cancel(cb_arg->skb, data);
static void netlink_skb_destructor(struct sk_buff *skb);
/* nl_table locking explained:
- * Lookup and traversal are protected with nl_sk_hash_lock or nl_table_lock
- * combined with an RCU read-side lock. Insertion and removal are protected
- * with nl_sk_hash_lock while using RCU list modification primitives and may
- * run in parallel to nl_table_lock protected lookups. Destruction of the
- * Netlink socket may only occur *after* nl_table_lock has been acquired
- * either during or after the socket has been removed from the list.
+ * Lookup and traversal are protected with an RCU read-side lock. Insertion
+ * and removal are protected with a per-bucket lock while using RCU list
+ * modification primitives and may run in parallel to RCU-protected lookups.
+ * Destruction of the Netlink socket may only occur *after* nl_table_lock has
+ * been acquired, either during or after the socket has been removed from
+ * the list and after an RCU grace period.
*/
DEFINE_RWLOCK(nl_table_lock);
EXPORT_SYMBOL_GPL(nl_table_lock);
#define nl_deref_protected(X) rcu_dereference_protected(X, lockdep_is_held(&nl_table_lock));
-/* Protects netlink socket hash table mutations */
-DEFINE_MUTEX(nl_sk_hash_lock);
-EXPORT_SYMBOL_GPL(nl_sk_hash_lock);
-
-#ifdef CONFIG_PROVE_LOCKING
-static int lockdep_nl_sk_hash_is_held(void *parent)
-{
- if (debug_locks)
- return lockdep_is_held(&nl_sk_hash_lock) || lockdep_is_held(&nl_table_lock);
- return 1;
-}
-#endif
-
static ATOMIC_NOTIFIER_HEAD(netlink_chain);
static DEFINE_SPINLOCK(netlink_tap_lock);
.net = net,
.portid = portid,
};
- u32 hash;
- hash = rhashtable_hashfn(&table->hash, &portid, sizeof(portid));
-
- return rhashtable_lookup_compare(&table->hash, hash,
+ return rhashtable_lookup_compare(&table->hash, &portid,
&netlink_compare, &arg);
}
+static bool __netlink_insert(struct netlink_table *table, struct sock *sk,
+ struct net *net)
+{
+ struct netlink_compare_arg arg = {
+ .net = net,
+ .portid = nlk_sk(sk)->portid,
+ };
+
+ return rhashtable_lookup_compare_insert(&table->hash,
+ &nlk_sk(sk)->node,
+ &netlink_compare, &arg);
+}
+
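
Previously the insert path did a lookup and an insert as two separate steps
under nl_sk_hash_lock; folding them into one compare-and-insert primitive
closes that window without a global mutex. A reduced single-bucket sketch of
the idea (one lock per bucket; names invented):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct node {
	unsigned int portid;
	struct node *next;
};

static struct node *bucket;
static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;

/* Lookup and insert in one critical section: no other insert can slip
 * in between the duplicate check and the list update. */
static bool lookup_compare_insert(struct node *n)
{
	bool ok = true;

	pthread_mutex_lock(&bucket_lock);
	for (struct node *p = bucket; p; p = p->next) {
		if (p->portid == n->portid) {
			ok = false;	/* maps to -EADDRINUSE above */
			break;
		}
	}
	if (ok) {
		n->next = bucket;
		bucket = n;
	}
	pthread_mutex_unlock(&bucket_lock);
	return ok;
}

int main(void)
{
	struct node a = { .portid = 7 }, b = { .portid = 7 };

	printf("first:  %s\n", lookup_compare_insert(&a) ? "ok" : "in use");
	printf("second: %s\n", lookup_compare_insert(&b) ? "ok" : "in use");
	return 0;
}
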
static struct sock *netlink_lookup(struct net *net, int protocol, u32 portid)
{
struct netlink_table *table = &nl_table[protocol];
struct sock *sk;
- read_lock(&nl_table_lock);
rcu_read_lock();
sk = __netlink_lookup(table, portid, net);
if (sk)
sock_hold(sk);
rcu_read_unlock();
- read_unlock(&nl_table_lock);
return sk;
}
static int netlink_insert(struct sock *sk, struct net *net, u32 portid)
{
struct netlink_table *table = &nl_table[sk->sk_protocol];
- int err = -EADDRINUSE;
+ int err;
- mutex_lock(&nl_sk_hash_lock);
- if (__netlink_lookup(table, portid, net))
- goto err;
+ lock_sock(sk);
err = -EBUSY;
if (nlk_sk(sk)->portid)
goto err;
err = -ENOMEM;
- if (BITS_PER_LONG > 32 && unlikely(table->hash.nelems >= UINT_MAX))
+ if (BITS_PER_LONG > 32 &&
+ unlikely(atomic_read(&table->hash.nelems) >= UINT_MAX))
goto err;
nlk_sk(sk)->portid = portid;
sock_hold(sk);
- rhashtable_insert(&table->hash, &nlk_sk(sk)->node);
+
err = 0;
+ if (!__netlink_insert(table, sk, net)) {
+ err = -EADDRINUSE;
+ sock_put(sk);
+ }
+
err:
- mutex_unlock(&nl_sk_hash_lock);
+ release_sock(sk);
return err;
}
{
struct netlink_table *table;
- mutex_lock(&nl_sk_hash_lock);
table = &nl_table[sk->sk_protocol];
if (rhashtable_remove(&table->hash, &nlk_sk(sk)->node)) {
WARN_ON(atomic_read(&sk->sk_refcnt) == 1);
__sock_put(sk);
}
- mutex_unlock(&nl_sk_hash_lock);
netlink_table_grab();
if (nlk_sk(sk)->subscriptions) {
goto out;
}
+static void deferred_put_nlk_sk(struct rcu_head *head)
+{
+ struct netlink_sock *nlk = container_of(head, struct netlink_sock, rcu);
+
+ sock_put(&nlk->sk);
+}
+
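
deferred_put_nlk_sk() recovers the socket from the embedded rcu_head via
container_of(), so the final sock_put() can run after a grace period. The
recovery trick in standalone form (the rcu_head here is a simplified
stand-in, and the callback is invoked directly rather than after a real
grace period):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rcu_head {	/* simplified stand-in for the kernel's rcu_head */
	void (*func)(struct rcu_head *);
};

struct my_sock {
	int refcnt;
	struct rcu_head rcu;	/* embedded, as in struct netlink_sock */
};

static void deferred_put(struct rcu_head *head)
{
	struct my_sock *sk = container_of(head, struct my_sock, rcu);

	printf("final put on sock, refcnt=%d\n", sk->refcnt);
}

int main(void)
{
	struct my_sock s = { .refcnt = 1 };

	/* A real RCU would invoke this after a grace period; calling it
	 * directly here just demonstrates the container_of() recovery. */
	deferred_put(&s.rcu);
	return 0;
}
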
static int netlink_release(struct socket *sock)
{
struct sock *sk = sock->sk;
local_bh_disable();
sock_prot_inuse_add(sock_net(sk), &netlink_proto, -1);
local_bh_enable();
- sock_put(sk);
+ call_rcu(&nlk->rcu, deferred_put_nlk_sk);
return 0;
}
retry:
cond_resched();
- netlink_table_grab();
rcu_read_lock();
if (__netlink_lookup(table, portid, net)) {
/* Bind collision, search negative portid values. */
if (rover > -4097)
rover = -4097;
rcu_read_unlock();
- netlink_table_ungrab();
goto retry;
}
rcu_read_unlock();
- netlink_table_ungrab();
err = netlink_insert(sk, net, portid);
if (err == -EADDRINUSE)
const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
for (j = 0; j < tbl->size; j++) {
- rht_for_each_entry_rcu(nlk, tbl->buckets[j], node) {
+ struct rhash_head *node;
+
+ rht_for_each_entry_rcu(nlk, node, tbl, j, node) {
s = (struct sock *)nlk;
if (sock_net(s) != seq_file_net(seq))
}
static void *netlink_seq_start(struct seq_file *seq, loff_t *pos)
- __acquires(nl_table_lock) __acquires(RCU)
+ __acquires(RCU)
{
- read_lock(&nl_table_lock);
rcu_read_lock();
return *pos ? netlink_seq_socket_idx(seq, *pos - 1) : SEQ_START_TOKEN;
}
static void *netlink_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
struct rhashtable *ht;
+ const struct bucket_table *tbl;
+ struct rhash_head *node;
struct netlink_sock *nlk;
struct nl_seq_iter *iter;
struct net *net;
i = iter->link;
ht = &nl_table[i].hash;
- rht_for_each_entry(nlk, nlk->node.next, ht, node)
+ tbl = rht_dereference_rcu(ht->tbl, ht);
+	rht_for_each_entry_rcu_continue(nlk, node, nlk->node.next, tbl,
+					iter->hash_idx, node)
if (net_eq(sock_net((struct sock *)nlk), net))
return nlk;
j = iter->hash_idx + 1;
do {
- const struct bucket_table *tbl = rht_dereference_rcu(ht->tbl, ht);
for (; j < tbl->size; j++) {
- rht_for_each_entry(nlk, tbl->buckets[j], ht, node) {
+ rht_for_each_entry_rcu(nlk, node, tbl, j, node) {
if (net_eq(sock_net((struct sock *)nlk), net)) {
iter->link = i;
iter->hash_idx = j;
}
static void netlink_seq_stop(struct seq_file *seq, void *v)
- __releases(RCU) __releases(nl_table_lock)
+ __releases(RCU)
{
rcu_read_unlock();
- read_unlock(&nl_table_lock);
}
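
With the rwlock gone, the seq iterator enters and leaves only the RCU read
section, so the sparse annotations shrink to a single __acquires/__releases
pair. Outside the kernel the annotations compile away, as this small sketch
assumes:

    #define __acquires(x) /* expands to nothing without sparse */
    #define __releases(x)

    static void seq_start(void) __acquires(RCU)
    {
            /* rcu_read_lock() would go here */
    }

    static void seq_stop(void) __releases(RCU)
    {
            /* rcu_read_unlock() would go here */
    }

    int main(void)
    {
            seq_start();
            seq_stop();
            return 0;
    }
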
.max_shift = 16, /* 64K */
.grow_decision = rht_grow_above_75,
.shrink_decision = rht_shrink_below_30,
-#ifdef CONFIG_PROVE_LOCKING
- .mutex_is_held = lockdep_nl_sk_hash_is_held,
-#endif
};
if (err != 0)
#endif /* CONFIG_NETLINK_MMAP */
struct rhash_head node;
+ struct rcu_head rcu;
};
static inline struct netlink_sock *nlk_sk(struct sock *sk)
extern struct netlink_table *nl_table;
extern rwlock_t nl_table_lock;
-extern struct mutex nl_sk_hash_lock;
#endif
sk_diag_put_rings_cfg(sk, skb))
goto out_nlmsg_trim;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
out_nlmsg_trim:
nlmsg_cancel(skb, nlh);
{
struct netlink_table *tbl = &nl_table[protocol];
struct rhashtable *ht = &tbl->hash;
- const struct bucket_table *htbl = rht_dereference(ht->tbl, ht);
+ const struct bucket_table *htbl = rht_dereference_rcu(ht->tbl, ht);
struct net *net = sock_net(skb->sk);
struct netlink_diag_req *req;
struct netlink_sock *nlsk;
req = nlmsg_data(cb->nlh);
for (i = 0; i < htbl->size; i++) {
- rht_for_each_entry(nlsk, htbl->buckets[i], ht, node) {
+ struct rhash_head *pos;
+
+ rht_for_each_entry_rcu(nlsk, pos, htbl, i, node) {
sk = (struct sock *)nlsk;
if (!net_eq(sock_net(sk), net))
req = nlmsg_data(cb->nlh);
- mutex_lock(&nl_sk_hash_lock);
+ rcu_read_lock();
read_lock(&nl_table_lock);
if (req->sdiag_protocol == NDIAG_PROTO_ALL) {
} else {
if (req->sdiag_protocol >= MAX_LINKS) {
read_unlock(&nl_table_lock);
- mutex_unlock(&nl_sk_hash_lock);
+ rcu_read_unlock();
return -ENOENT;
}
}
read_unlock(&nl_table_lock);
- mutex_unlock(&nl_sk_hash_lock);
+ rcu_read_unlock();
return skb->len;
}
nla_nest_end(skb, nla_grps);
}
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
nla_nest_end(skb, nest);
nla_nest_end(skb, nla_grps);
- return genlmsg_end(skb, hdr);
+ genlmsg_end(skb, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, hdr);
goto nla_put_failure;
}
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_put_u8(msg, NFC_ATTR_RF_MODE, dev->rf_mode))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_put_u16(msg, NFC_ATTR_LLC_PARAM_MIUX, be16_to_cpu(local->miux)))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
nla_put_u8(msg, NFC_ATTR_SE_TYPE, se->type))
goto nla_put_failure;
- if (genlmsg_end(msg, hdr) < 0)
- goto nla_put_failure;
+ genlmsg_end(msg, hdr);
}
return 0;
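
All of these hunks apply one mechanical change: genlmsg_end()/nlmsg_end() no
longer return a length, so each fill function seals the message and returns 0
itself. A toy model of the new calling convention (names are illustrative):

    #include <stdio.h>

    struct msg { int len; };

    static void msg_end(struct msg *m)
    {
            m->len += 4; /* seal the header; nothing to return anymore */
    }

    static int fill(struct msg *m)
    {
            m->len += 16; /* the attributes */
            msg_end(m);   /* was: return msg_end(m); */
            return 0;
    }

    int main(void)
    {
            struct msg m = { 0 };

            printf("%d %d\n", fill(&m), m.len);
            return 0;
    }
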
int err;
err = skb_vlan_pop(skb);
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
invalidate_flow_key(key);
else
key->eth.tci = 0;
static int push_vlan(struct sk_buff *skb, struct sw_flow_key *key,
const struct ovs_action_push_vlan *vlan)
{
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
invalidate_flow_key(key);
else
key->eth.tci = vlan->vlan_tci;
static struct genl_family dp_flow_genl_family;
static struct genl_family dp_datapath_genl_family;
+static const struct nla_policy flow_policy[];
+
static const struct genl_multicast_group ovs_dp_flow_multicast_group = {
.name = OVS_FLOW_MCGROUP,
};
if (!dp_ifindex)
return -ENODEV;
- if (vlan_tx_tag_present(skb)) {
+ if (skb_vlan_tag_present(skb)) {
nskb = skb_clone(skb, GFP_ATOMIC);
if (!nskb)
return -ENOMEM;
0, upcall_info->cmd);
upcall->dp_ifindex = dp_ifindex;
- nla = nla_nest_start(user_skb, OVS_PACKET_ATTR_KEY);
- err = ovs_nla_put_flow(key, key, user_skb);
+ err = ovs_nla_put_key(key, key, OVS_PACKET_ATTR_KEY, false, user_skb);
BUG_ON(err);
- nla_nest_end(user_skb, nla);
if (upcall_info->userdata)
__nla_put(user_skb, OVS_PACKET_ATTR_USERDATA,
struct vport *input_vport;
int len;
int err;
- bool log = !a[OVS_FLOW_ATTR_PROBE];
+ bool log = !a[OVS_PACKET_ATTR_PROBE];
err = -EINVAL;
if (!a[OVS_PACKET_ATTR_PACKET] || !a[OVS_PACKET_ATTR_KEY] ||
[OVS_PACKET_ATTR_PACKET] = { .len = ETH_HLEN },
[OVS_PACKET_ATTR_KEY] = { .type = NLA_NESTED },
[OVS_PACKET_ATTR_ACTIONS] = { .type = NLA_NESTED },
+ [OVS_PACKET_ATTR_PROBE] = { .type = NLA_FLAG },
};
static const struct genl_ops dp_packet_genl_ops[] = {
}
}
-static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts)
+static bool should_fill_key(const struct sw_flow_id *sfid, uint32_t ufid_flags)
{
- return NLMSG_ALIGN(sizeof(struct ovs_header))
- + nla_total_size(ovs_key_attr_size()) /* OVS_FLOW_ATTR_KEY */
- + nla_total_size(ovs_key_attr_size()) /* OVS_FLOW_ATTR_MASK */
- + nla_total_size(sizeof(struct ovs_flow_stats)) /* OVS_FLOW_ATTR_STATS */
- + nla_total_size(1) /* OVS_FLOW_ATTR_TCP_FLAGS */
- + nla_total_size(8) /* OVS_FLOW_ATTR_USED */
- + nla_total_size(acts->actions_len); /* OVS_FLOW_ATTR_ACTIONS */
+ return ovs_identifier_is_ufid(sfid) &&
+ !(ufid_flags & OVS_UFID_F_OMIT_KEY);
}
-/* Called with ovs_mutex or RCU read lock. */
-static int ovs_flow_cmd_fill_match(const struct sw_flow *flow,
- struct sk_buff *skb)
+static bool should_fill_mask(uint32_t ufid_flags)
{
- struct nlattr *nla;
- int err;
+ return !(ufid_flags & OVS_UFID_F_OMIT_MASK);
+}
- /* Fill flow key. */
- nla = nla_nest_start(skb, OVS_FLOW_ATTR_KEY);
- if (!nla)
- return -EMSGSIZE;
+static bool should_fill_actions(uint32_t ufid_flags)
+{
+ return !(ufid_flags & OVS_UFID_F_OMIT_ACTIONS);
+}
- err = ovs_nla_put_flow(&flow->unmasked_key, &flow->unmasked_key, skb);
- if (err)
- return err;
+static size_t ovs_flow_cmd_msg_size(const struct sw_flow_actions *acts,
+ const struct sw_flow_id *sfid,
+ uint32_t ufid_flags)
+{
+ size_t len = NLMSG_ALIGN(sizeof(struct ovs_header));
- nla_nest_end(skb, nla);
+ /* OVS_FLOW_ATTR_UFID */
+ if (sfid && ovs_identifier_is_ufid(sfid))
+ len += nla_total_size(sfid->ufid_len);
- /* Fill flow mask. */
- nla = nla_nest_start(skb, OVS_FLOW_ATTR_MASK);
- if (!nla)
- return -EMSGSIZE;
+ /* OVS_FLOW_ATTR_KEY */
+ if (!sfid || should_fill_key(sfid, ufid_flags))
+ len += nla_total_size(ovs_key_attr_size());
- err = ovs_nla_put_flow(&flow->key, &flow->mask->key, skb);
- if (err)
- return err;
+ /* OVS_FLOW_ATTR_MASK */
+ if (should_fill_mask(ufid_flags))
+ len += nla_total_size(ovs_key_attr_size());
- nla_nest_end(skb, nla);
- return 0;
+ /* OVS_FLOW_ATTR_ACTIONS */
+ if (should_fill_actions(ufid_flags))
+ len += nla_total_size(acts->actions_len);
+
+ return len
+ + nla_total_size(sizeof(struct ovs_flow_stats)) /* OVS_FLOW_ATTR_STATS */
+ + nla_total_size(1) /* OVS_FLOW_ATTR_TCP_FLAGS */
+ + nla_total_size(8); /* OVS_FLOW_ATTR_USED */
}
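
ovs_flow_cmd_msg_size() now sizes the reply from the OVS_UFID_F_OMIT_* flags,
counting an attribute only when it will actually be emitted. A compilable
sketch of that conditional-sizing idea, where nla_total() and the flag names
are simplified stand-ins:

    #include <stdint.h>
    #include <stdio.h>

    static size_t nla_total(size_t payload) /* header + padded payload */
    {
            return (4 + payload + 3) & ~(size_t)3;
    }

    #define OMIT_KEY     0x1 /* stand-ins for OVS_UFID_F_OMIT_* */
    #define OMIT_MASK    0x2
    #define OMIT_ACTIONS 0x4

    static size_t msg_size(size_t key_sz, size_t acts_sz, uint32_t flags)
    {
            size_t len = 0;

            if (!(flags & OMIT_KEY))
                    len += nla_total(key_sz);
            if (!(flags & OMIT_MASK))
                    len += nla_total(key_sz); /* mask mirrors the key size */
            if (!(flags & OMIT_ACTIONS))
                    len += nla_total(acts_sz);
            return len; /* fixed attrs (stats, flags, used) would add here */
    }

    int main(void)
    {
            printf("%zu\n", msg_size(128, 64, OMIT_MASK | OMIT_ACTIONS));
            return 0;
    }
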
/* Called with ovs_mutex or RCU read lock. */
static int ovs_flow_cmd_fill_info(const struct sw_flow *flow, int dp_ifindex,
struct sk_buff *skb, u32 portid,
- u32 seq, u32 flags, u8 cmd)
+ u32 seq, u32 flags, u8 cmd, u32 ufid_flags)
{
const int skb_orig_len = skb->len;
struct ovs_header *ovs_header;
ovs_header->dp_ifindex = dp_ifindex;
- err = ovs_flow_cmd_fill_match(flow, skb);
+ err = ovs_nla_put_identifier(flow, skb);
if (err)
goto error;
+ if (should_fill_key(&flow->id, ufid_flags)) {
+ err = ovs_nla_put_masked_key(flow, skb);
+ if (err)
+ goto error;
+ }
+
+ if (should_fill_mask(ufid_flags)) {
+ err = ovs_nla_put_mask(flow, skb);
+ if (err)
+ goto error;
+ }
+
err = ovs_flow_cmd_fill_stats(flow, skb);
if (err)
goto error;
- err = ovs_flow_cmd_fill_actions(flow, skb, skb_orig_len);
- if (err)
- goto error;
+ if (should_fill_actions(ufid_flags)) {
+ err = ovs_flow_cmd_fill_actions(flow, skb, skb_orig_len);
+ if (err)
+ goto error;
+ }
- return genlmsg_end(skb, ovs_header);
+ genlmsg_end(skb, ovs_header);
+ return 0;
error:
genlmsg_cancel(skb, ovs_header);
/* May not be called with RCU read lock. */
static struct sk_buff *ovs_flow_cmd_alloc_info(const struct sw_flow_actions *acts,
+ const struct sw_flow_id *sfid,
struct genl_info *info,
- bool always)
+ bool always,
+ uint32_t ufid_flags)
{
struct sk_buff *skb;
+ size_t len;
if (!always && !ovs_must_notify(&dp_flow_genl_family, info, 0))
return NULL;
- skb = genlmsg_new_unicast(ovs_flow_cmd_msg_size(acts), info, GFP_KERNEL);
+ len = ovs_flow_cmd_msg_size(acts, sfid, ufid_flags);
+ skb = genlmsg_new_unicast(len, info, GFP_KERNEL);
if (!skb)
return ERR_PTR(-ENOMEM);
static struct sk_buff *ovs_flow_cmd_build_info(const struct sw_flow *flow,
int dp_ifindex,
struct genl_info *info, u8 cmd,
- bool always)
+ bool always, u32 ufid_flags)
{
struct sk_buff *skb;
int retval;
- skb = ovs_flow_cmd_alloc_info(ovsl_dereference(flow->sf_acts), info,
- always);
+ skb = ovs_flow_cmd_alloc_info(ovsl_dereference(flow->sf_acts),
+ &flow->id, info, always, ufid_flags);
if (IS_ERR_OR_NULL(skb))
return skb;
retval = ovs_flow_cmd_fill_info(flow, dp_ifindex, skb,
info->snd_portid, info->snd_seq, 0,
- cmd);
+ cmd, ufid_flags);
BUG_ON(retval < 0);
return skb;
}
{
struct nlattr **a = info->attrs;
struct ovs_header *ovs_header = info->userhdr;
- struct sw_flow *flow, *new_flow;
+ struct sw_flow *flow = NULL, *new_flow;
struct sw_flow_mask mask;
struct sk_buff *reply;
struct datapath *dp;
+ struct sw_flow_key key;
struct sw_flow_actions *acts;
struct sw_flow_match match;
+ u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
int error;
bool log = !a[OVS_FLOW_ATTR_PROBE];
}
/* Extract key. */
- ovs_match_init(&match, &new_flow->unmasked_key, &mask);
+ ovs_match_init(&match, &key, &mask);
error = ovs_nla_get_match(&match, a[OVS_FLOW_ATTR_KEY],
a[OVS_FLOW_ATTR_MASK], log);
if (error)
goto err_kfree_flow;
- ovs_flow_mask_key(&new_flow->key, &new_flow->unmasked_key, &mask);
+ ovs_flow_mask_key(&new_flow->key, &key, &mask);
+
+ /* Extract flow identifier. */
+ error = ovs_nla_get_identifier(&new_flow->id, a[OVS_FLOW_ATTR_UFID],
+ &key, log);
+ if (error)
+ goto err_kfree_flow;
/* Validate actions. */
error = ovs_nla_copy_actions(a[OVS_FLOW_ATTR_ACTIONS], &new_flow->key,
goto err_kfree_flow;
}
- reply = ovs_flow_cmd_alloc_info(acts, info, false);
+ reply = ovs_flow_cmd_alloc_info(acts, &new_flow->id, info, false,
+ ufid_flags);
if (IS_ERR(reply)) {
error = PTR_ERR(reply);
goto err_kfree_acts;
error = -ENODEV;
goto err_unlock_ovs;
}
+
/* Check if this is a duplicate flow */
- flow = ovs_flow_tbl_lookup(&dp->table, &new_flow->unmasked_key);
+ if (ovs_identifier_is_ufid(&new_flow->id))
+ flow = ovs_flow_tbl_lookup_ufid(&dp->table, &new_flow->id);
+ if (!flow)
+ flow = ovs_flow_tbl_lookup(&dp->table, &key);
if (likely(!flow)) {
rcu_assign_pointer(new_flow->sf_acts, acts);
ovs_header->dp_ifindex,
reply, info->snd_portid,
info->snd_seq, 0,
- OVS_FLOW_CMD_NEW);
+ OVS_FLOW_CMD_NEW,
+ ufid_flags);
BUG_ON(error < 0);
}
ovs_unlock();
error = -EEXIST;
goto err_unlock_ovs;
}
- /* The unmasked key has to be the same for flow updates. */
- if (unlikely(!ovs_flow_cmp_unmasked_key(flow, &match))) {
- /* Look for any overlapping flow. */
- flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
+ /* The flow identifier has to be the same for flow updates.
+ * Look for any overlapping flow.
+ */
+ if (unlikely(!ovs_flow_cmp(flow, &match))) {
+ if (ovs_identifier_is_key(&flow->id))
+ flow = ovs_flow_tbl_lookup_exact(&dp->table,
+ &match);
+ else /* UFID matches but key is different */
+ flow = NULL;
if (!flow) {
error = -ENOENT;
goto err_unlock_ovs;
ovs_header->dp_ifindex,
reply, info->snd_portid,
info->snd_seq, 0,
- OVS_FLOW_CMD_NEW);
+ OVS_FLOW_CMD_NEW,
+ ufid_flags);
BUG_ON(error < 0);
}
ovs_unlock();
struct datapath *dp;
struct sw_flow_actions *old_acts = NULL, *acts = NULL;
struct sw_flow_match match;
+ struct sw_flow_id sfid;
+ u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
int error;
bool log = !a[OVS_FLOW_ATTR_PROBE];
+ bool ufid_present;
/* Extract key. */
error = -EINVAL;
goto error;
}
+ ufid_present = ovs_nla_get_ufid(&sfid, a[OVS_FLOW_ATTR_UFID], log);
ovs_match_init(&match, &key, &mask);
error = ovs_nla_get_match(&match, a[OVS_FLOW_ATTR_KEY],
a[OVS_FLOW_ATTR_MASK], log);
}
/* Can allocate before locking if have acts. */
- reply = ovs_flow_cmd_alloc_info(acts, info, false);
+ reply = ovs_flow_cmd_alloc_info(acts, &sfid, info, false,
+ ufid_flags);
if (IS_ERR(reply)) {
error = PTR_ERR(reply);
goto err_kfree_acts;
goto err_unlock_ovs;
}
/* Check that the flow exists. */
- flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
+ if (ufid_present)
+ flow = ovs_flow_tbl_lookup_ufid(&dp->table, &sfid);
+ else
+ flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
if (unlikely(!flow)) {
error = -ENOENT;
goto err_unlock_ovs;
ovs_header->dp_ifindex,
reply, info->snd_portid,
info->snd_seq, 0,
- OVS_FLOW_CMD_NEW);
+ OVS_FLOW_CMD_NEW,
+ ufid_flags);
BUG_ON(error < 0);
}
} else {
/* Could not alloc without acts before locking. */
reply = ovs_flow_cmd_build_info(flow, ovs_header->dp_ifindex,
- info, OVS_FLOW_CMD_NEW, false);
+ info, OVS_FLOW_CMD_NEW, false,
+ ufid_flags);
+
if (unlikely(IS_ERR(reply))) {
error = PTR_ERR(reply);
goto err_unlock_ovs;
struct sw_flow *flow;
struct datapath *dp;
struct sw_flow_match match;
- int err;
+ struct sw_flow_id ufid;
+ u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
+ int err = 0;
bool log = !a[OVS_FLOW_ATTR_PROBE];
+ bool ufid_present;
- if (!a[OVS_FLOW_ATTR_KEY]) {
+ ufid_present = ovs_nla_get_ufid(&ufid, a[OVS_FLOW_ATTR_UFID], log);
+ if (a[OVS_FLOW_ATTR_KEY]) {
+ ovs_match_init(&match, &key, NULL);
+ err = ovs_nla_get_match(&match, a[OVS_FLOW_ATTR_KEY], NULL,
+ log);
+ } else if (!ufid_present) {
OVS_NLERR(log,
"Flow get message rejected, Key attribute missing.");
- return -EINVAL;
+ err = -EINVAL;
}
-
- ovs_match_init(&match, &key, NULL);
- err = ovs_nla_get_match(&match, a[OVS_FLOW_ATTR_KEY], NULL, log);
if (err)
return err;
goto unlock;
}
- flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
+ if (ufid_present)
+ flow = ovs_flow_tbl_lookup_ufid(&dp->table, &ufid);
+ else
+ flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
if (!flow) {
err = -ENOENT;
goto unlock;
}
reply = ovs_flow_cmd_build_info(flow, ovs_header->dp_ifindex, info,
- OVS_FLOW_CMD_NEW, true);
+ OVS_FLOW_CMD_NEW, true, ufid_flags);
if (IS_ERR(reply)) {
err = PTR_ERR(reply);
goto unlock;
struct ovs_header *ovs_header = info->userhdr;
struct sw_flow_key key;
struct sk_buff *reply;
- struct sw_flow *flow;
+ struct sw_flow *flow = NULL;
struct datapath *dp;
struct sw_flow_match match;
+ struct sw_flow_id ufid;
+ u32 ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
int err;
bool log = !a[OVS_FLOW_ATTR_PROBE];
+ bool ufid_present;
- if (likely(a[OVS_FLOW_ATTR_KEY])) {
+ ufid_present = ovs_nla_get_ufid(&ufid, a[OVS_FLOW_ATTR_UFID], log);
+ if (a[OVS_FLOW_ATTR_KEY]) {
ovs_match_init(&match, &key, NULL);
err = ovs_nla_get_match(&match, a[OVS_FLOW_ATTR_KEY], NULL,
log);
goto unlock;
}
- if (unlikely(!a[OVS_FLOW_ATTR_KEY])) {
+ if (unlikely(!a[OVS_FLOW_ATTR_KEY] && !ufid_present)) {
err = ovs_flow_tbl_flush(&dp->table);
goto unlock;
}
- flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
+ if (ufid_present)
+ flow = ovs_flow_tbl_lookup_ufid(&dp->table, &ufid);
+ else
+ flow = ovs_flow_tbl_lookup_exact(&dp->table, &match);
if (unlikely(!flow)) {
err = -ENOENT;
goto unlock;
ovs_unlock();
reply = ovs_flow_cmd_alloc_info((const struct sw_flow_actions __force *) flow->sf_acts,
- info, false);
+ &flow->id, info, false, ufid_flags);
if (likely(reply)) {
if (likely(!IS_ERR(reply))) {
rcu_read_lock(); /*To keep RCU checker happy. */
err = ovs_flow_cmd_fill_info(flow, ovs_header->dp_ifindex,
reply, info->snd_portid,
info->snd_seq, 0,
- OVS_FLOW_CMD_DEL);
+ OVS_FLOW_CMD_DEL,
+ ufid_flags);
rcu_read_unlock();
BUG_ON(err < 0);
static int ovs_flow_cmd_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
+ struct nlattr *a[__OVS_FLOW_ATTR_MAX];
struct ovs_header *ovs_header = genlmsg_data(nlmsg_data(cb->nlh));
struct table_instance *ti;
struct datapath *dp;
+ u32 ufid_flags;
+ int err;
+
+ err = genlmsg_parse(cb->nlh, &dp_flow_genl_family, a,
+ OVS_FLOW_ATTR_MAX, flow_policy);
+ if (err)
+ return err;
+ ufid_flags = ovs_nla_get_ufid_flags(a[OVS_FLOW_ATTR_UFID_FLAGS]);
rcu_read_lock();
dp = get_dp_rcu(sock_net(skb->sk), ovs_header->dp_ifindex);
if (ovs_flow_cmd_fill_info(flow, ovs_header->dp_ifindex, skb,
NETLINK_CB(cb->skb).portid,
cb->nlh->nlmsg_seq, NLM_F_MULTI,
- OVS_FLOW_CMD_NEW) < 0)
+ OVS_FLOW_CMD_NEW, ufid_flags) < 0)
break;
cb->args[0] = bucket;
[OVS_FLOW_ATTR_ACTIONS] = { .type = NLA_NESTED },
[OVS_FLOW_ATTR_CLEAR] = { .type = NLA_FLAG },
[OVS_FLOW_ATTR_PROBE] = { .type = NLA_FLAG },
+ [OVS_FLOW_ATTR_UFID] = { .type = NLA_UNSPEC, .len = 1 },
+ [OVS_FLOW_ATTR_UFID_FLAGS] = { .type = NLA_U32 },
};
static const struct genl_ops dp_flow_genl_ops[] = {
if (nla_put_u32(skb, OVS_DP_ATTR_USER_FEATURES, dp->user_features))
goto nla_put_failure;
- return genlmsg_end(skb, ovs_header);
+ genlmsg_end(skb, ovs_header);
+ return 0;
nla_put_failure:
genlmsg_cancel(skb, ovs_header);
if (err == -EMSGSIZE)
goto error;
- return genlmsg_end(skb, ovs_header);
+ genlmsg_end(skb, ovs_header);
+ return 0;
nla_put_failure:
err = -EMSGSIZE;
{
struct flow_stats *stats;
int node = numa_node_id();
+ int len = skb->len + (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
stats = rcu_dereference(flow->stats[node]);
if (likely(new_stats)) {
new_stats->used = jiffies;
new_stats->packet_count = 1;
- new_stats->byte_count = skb->len;
+ new_stats->byte_count = len;
new_stats->tcp_flags = tcp_flags;
spin_lock_init(&new_stats->lock);
stats->used = jiffies;
stats->packet_count++;
- stats->byte_count += skb->len;
+ stats->byte_count += len;
stats->tcp_flags |= tcp_flags;
unlock:
spin_unlock(&stats->lock);
*/
key->eth.tci = 0;
- if (vlan_tx_tag_present(skb))
+ if (skb_vlan_tag_present(skb))
key->eth.tci = htons(skb->vlan_tci);
else if (eth->h_proto == htons(ETH_P_8021Q))
if (unlikely(parse_vlan(skb, key)))
BUILD_BUG_ON((1 << (sizeof(tun_info->options_len) *
8)) - 1
> sizeof(key->tun_opts));
- memcpy(GENEVE_OPTS(key, tun_info->options_len),
+ memcpy(TUN_METADATA_OPTS(key, tun_info->options_len),
tun_info->options, tun_info->options_len);
key->tun_opts_len = tun_info->options_len;
} else {
struct ovs_tunnel_info {
struct ovs_key_ipv4_tunnel tunnel;
- const struct geneve_opt *options;
+ const void *options;
u8 options_len;
};
* maximum size. This allows us to get the benefits of variable length
* matching for small options.
*/
-#define GENEVE_OPTS(flow_key, opt_len) \
- ((struct geneve_opt *)((flow_key)->tun_opts + \
- FIELD_SIZEOF(struct sw_flow_key, tun_opts) - \
- opt_len))
+#define TUN_METADATA_OFFSET(opt_len) \
+ (FIELD_SIZEOF(struct sw_flow_key, tun_opts) - opt_len)
+#define TUN_METADATA_OPTS(flow_key, opt_len) \
+ ((void *)((flow_key)->tun_opts + TUN_METADATA_OFFSET(opt_len)))
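
TUN_METADATA_OPTS() anchors variable-length option blocks at the *end* of the
fixed tun_opts buffer, so shorter options leave the leading bytes untouched
and masked comparisons stay cheap. A small stand-alone illustration; the
buffer size and struct name are stand-ins:

    #include <stdio.h>
    #include <string.h>

    #define TUN_OPTS_SIZE 255 /* stand-in for sizeof(key->tun_opts) */

    struct key_like { unsigned char tun_opts[TUN_OPTS_SIZE]; };

    #define METADATA_OFFSET(len)  (TUN_OPTS_SIZE - (len))
    #define METADATA_OPTS(k, len) \
            ((void *)((k)->tun_opts + METADATA_OFFSET(len)))

    int main(void)
    {
            struct key_like k = { { 0 } };
            const unsigned char gbp[4] = { 1, 2, 3, 4 };

            /* 4 bytes of options occupy the last 4 bytes of the buffer */
            memcpy(METADATA_OPTS(&k, sizeof(gbp)), gbp, sizeof(gbp));
            printf("offset=%d\n", METADATA_OFFSET((int)sizeof(gbp)));
            return 0;
    }
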
static inline void __ovs_flow_tun_info_init(struct ovs_tunnel_info *tun_info,
__be32 saddr, __be32 daddr,
__be16 tp_dst,
__be64 tun_id,
__be16 tun_flags,
- const struct geneve_opt *opts,
+ const void *opts,
u8 opts_len)
{
tun_info->tunnel.tun_id = tun_id;
__be16 tp_dst,
__be64 tun_id,
__be16 tun_flags,
- const struct geneve_opt *opts,
+ const void *opts,
u8 opts_len)
{
__ovs_flow_tun_info_init(tun_info, iph->saddr, iph->daddr,
struct sw_flow_mask *mask;
};
+#define MAX_UFID_LENGTH 16 /* 128 bits */
+
+struct sw_flow_id {
+ u32 ufid_len;
+ union {
+ u32 ufid[MAX_UFID_LENGTH / 4];
+ struct sw_flow_key *unmasked_key;
+ };
+};
+
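
The new sw_flow_id is a tagged union with ufid_len doubling as the tag: zero
selects the unmasked-key pointer, non-zero selects the inline UFID bytes. A
sketch with illustrative names (the anonymous union needs C11):

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_UFID_LEN 16

    struct flow_id_like {
            uint32_t ufid_len; /* 0 selects the key branch */
            union {
                    uint32_t ufid[MAX_UFID_LEN / 4];
                    void *unmasked_key;
            };
    };

    static bool is_ufid(const struct flow_id_like *id)
    {
            return id->ufid_len != 0;
    }

    static void set_ufid(struct flow_id_like *id, const void *buf,
                         uint32_t len)
    {
            id->ufid_len = len; /* length doubles as the tag */
            memcpy(id->ufid, buf, len);
    }

    int main(void)
    {
            struct flow_id_like id = { 0 };
            uint32_t raw = 0xdeadbeef;

            set_ufid(&id, &raw, sizeof(raw));
            return is_ufid(&id) ? 0 : 1;
    }
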
struct sw_flow_actions {
struct rcu_head rcu;
u32 actions_len;
struct sw_flow {
struct rcu_head rcu;
- struct hlist_node hash_node[2];
- u32 hash;
+ struct {
+ struct hlist_node node[2];
+ u32 hash;
+ } flow_table, ufid_table;
int stats_last_writer; /* NUMA-node id of the last writer on
* 'stats[0]'.
*/
struct sw_flow_key key;
- struct sw_flow_key unmasked_key;
+ struct sw_flow_id id;
struct sw_flow_mask *mask;
struct sw_flow_actions __rcu *sf_acts;
struct flow_stats __rcu *stats[]; /* One for each NUMA node. First one
unsigned char ar_tip[4]; /* target IP address */
} __packed;
+static inline bool ovs_identifier_is_ufid(const struct sw_flow_id *sfid)
+{
+ return sfid->ufid_len;
+}
+
+static inline bool ovs_identifier_is_key(const struct sw_flow_id *sfid)
+{
+ return !ovs_identifier_is_ufid(sfid);
+}
+
void ovs_flow_stats_update(struct sw_flow *, __be16 tcp_flags,
const struct sk_buff *);
void ovs_flow_stats_get(const struct sw_flow *, struct ovs_flow_stats *,
#include <net/mpls.h>
#include "flow_netlink.h"
+#include "vport-vxlan.h"
+
+struct ovs_len_tbl {
+ int len;
+ const struct ovs_len_tbl *next;
+};
+
+#define OVS_ATTR_NESTED -1
static void update_range(struct sw_flow_match *match,
size_t offset, size_t size, bool is_mask)
+ nla_total_size(0) /* OVS_TUNNEL_KEY_ATTR_CSUM */
+ nla_total_size(0) /* OVS_TUNNEL_KEY_ATTR_OAM */
+ nla_total_size(256) /* OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS */
+ /* OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS is mutually exclusive with
+ * OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS and covered by it.
+ */
+ nla_total_size(2) /* OVS_TUNNEL_KEY_ATTR_TP_SRC */
+ nla_total_size(2); /* OVS_TUNNEL_KEY_ATTR_TP_DST */
}
+ nla_total_size(28); /* OVS_KEY_ATTR_ND */
}
+static const struct ovs_len_tbl ovs_tunnel_key_lens[OVS_TUNNEL_KEY_ATTR_MAX + 1] = {
+ [OVS_TUNNEL_KEY_ATTR_ID] = { .len = sizeof(u64) },
+ [OVS_TUNNEL_KEY_ATTR_IPV4_SRC] = { .len = sizeof(u32) },
+ [OVS_TUNNEL_KEY_ATTR_IPV4_DST] = { .len = sizeof(u32) },
+ [OVS_TUNNEL_KEY_ATTR_TOS] = { .len = 1 },
+ [OVS_TUNNEL_KEY_ATTR_TTL] = { .len = 1 },
+ [OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT] = { .len = 0 },
+ [OVS_TUNNEL_KEY_ATTR_CSUM] = { .len = 0 },
+ [OVS_TUNNEL_KEY_ATTR_TP_SRC] = { .len = sizeof(u16) },
+ [OVS_TUNNEL_KEY_ATTR_TP_DST] = { .len = sizeof(u16) },
+ [OVS_TUNNEL_KEY_ATTR_OAM] = { .len = 0 },
+ [OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS] = { .len = OVS_ATTR_NESTED },
+ [OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS] = { .len = OVS_ATTR_NESTED },
+};
+
/* The size of the argument for each %OVS_KEY_ATTR_* Netlink attribute. */
-static const int ovs_key_lens[OVS_KEY_ATTR_MAX + 1] = {
- [OVS_KEY_ATTR_ENCAP] = -1,
- [OVS_KEY_ATTR_PRIORITY] = sizeof(u32),
- [OVS_KEY_ATTR_IN_PORT] = sizeof(u32),
- [OVS_KEY_ATTR_SKB_MARK] = sizeof(u32),
- [OVS_KEY_ATTR_ETHERNET] = sizeof(struct ovs_key_ethernet),
- [OVS_KEY_ATTR_VLAN] = sizeof(__be16),
- [OVS_KEY_ATTR_ETHERTYPE] = sizeof(__be16),
- [OVS_KEY_ATTR_IPV4] = sizeof(struct ovs_key_ipv4),
- [OVS_KEY_ATTR_IPV6] = sizeof(struct ovs_key_ipv6),
- [OVS_KEY_ATTR_TCP] = sizeof(struct ovs_key_tcp),
- [OVS_KEY_ATTR_TCP_FLAGS] = sizeof(__be16),
- [OVS_KEY_ATTR_UDP] = sizeof(struct ovs_key_udp),
- [OVS_KEY_ATTR_SCTP] = sizeof(struct ovs_key_sctp),
- [OVS_KEY_ATTR_ICMP] = sizeof(struct ovs_key_icmp),
- [OVS_KEY_ATTR_ICMPV6] = sizeof(struct ovs_key_icmpv6),
- [OVS_KEY_ATTR_ARP] = sizeof(struct ovs_key_arp),
- [OVS_KEY_ATTR_ND] = sizeof(struct ovs_key_nd),
- [OVS_KEY_ATTR_RECIRC_ID] = sizeof(u32),
- [OVS_KEY_ATTR_DP_HASH] = sizeof(u32),
- [OVS_KEY_ATTR_TUNNEL] = -1,
- [OVS_KEY_ATTR_MPLS] = sizeof(struct ovs_key_mpls),
+static const struct ovs_len_tbl ovs_key_lens[OVS_KEY_ATTR_MAX + 1] = {
+ [OVS_KEY_ATTR_ENCAP] = { .len = OVS_ATTR_NESTED },
+ [OVS_KEY_ATTR_PRIORITY] = { .len = sizeof(u32) },
+ [OVS_KEY_ATTR_IN_PORT] = { .len = sizeof(u32) },
+ [OVS_KEY_ATTR_SKB_MARK] = { .len = sizeof(u32) },
+ [OVS_KEY_ATTR_ETHERNET] = { .len = sizeof(struct ovs_key_ethernet) },
+ [OVS_KEY_ATTR_VLAN] = { .len = sizeof(__be16) },
+ [OVS_KEY_ATTR_ETHERTYPE] = { .len = sizeof(__be16) },
+ [OVS_KEY_ATTR_IPV4] = { .len = sizeof(struct ovs_key_ipv4) },
+ [OVS_KEY_ATTR_IPV6] = { .len = sizeof(struct ovs_key_ipv6) },
+ [OVS_KEY_ATTR_TCP] = { .len = sizeof(struct ovs_key_tcp) },
+ [OVS_KEY_ATTR_TCP_FLAGS] = { .len = sizeof(__be16) },
+ [OVS_KEY_ATTR_UDP] = { .len = sizeof(struct ovs_key_udp) },
+ [OVS_KEY_ATTR_SCTP] = { .len = sizeof(struct ovs_key_sctp) },
+ [OVS_KEY_ATTR_ICMP] = { .len = sizeof(struct ovs_key_icmp) },
+ [OVS_KEY_ATTR_ICMPV6] = { .len = sizeof(struct ovs_key_icmpv6) },
+ [OVS_KEY_ATTR_ARP] = { .len = sizeof(struct ovs_key_arp) },
+ [OVS_KEY_ATTR_ND] = { .len = sizeof(struct ovs_key_nd) },
+ [OVS_KEY_ATTR_RECIRC_ID] = { .len = sizeof(u32) },
+ [OVS_KEY_ATTR_DP_HASH] = { .len = sizeof(u32) },
+ [OVS_KEY_ATTR_TUNNEL] = { .len = OVS_ATTR_NESTED,
+ .next = ovs_tunnel_key_lens, },
+ [OVS_KEY_ATTR_MPLS] = { .len = sizeof(struct ovs_key_mpls) },
};
static bool is_all_zero(const u8 *fp, size_t size)
return -EINVAL;
}
- expected_len = ovs_key_lens[type];
- if (nla_len(nla) != expected_len && expected_len != -1) {
+ expected_len = ovs_key_lens[type].len;
+ if (nla_len(nla) != expected_len && expected_len != OVS_ATTR_NESTED) {
OVS_NLERR(log, "Key %d has unexpected len %d expected %d",
type, nla_len(nla), expected_len);
return -EINVAL;
SW_FLOW_KEY_PUT(match, tun_opts_len, 0xff, true);
}
- opt_key_offset = (unsigned long)GENEVE_OPTS((struct sw_flow_key *)0,
- nla_len(a));
+ opt_key_offset = TUN_METADATA_OFFSET(nla_len(a));
SW_FLOW_KEY_MEMCPY_OFFSET(match, opt_key_offset, nla_data(a),
nla_len(a), is_mask);
return 0;
}
+static const struct nla_policy vxlan_opt_policy[OVS_VXLAN_EXT_MAX + 1] = {
+ [OVS_VXLAN_EXT_GBP] = { .type = NLA_U32 },
+};
+
+static int vxlan_tun_opt_from_nlattr(const struct nlattr *a,
+ struct sw_flow_match *match, bool is_mask,
+ bool log)
+{
+ struct nlattr *tb[OVS_VXLAN_EXT_MAX+1];
+ unsigned long opt_key_offset;
+ struct ovs_vxlan_opts opts;
+ int err;
+
+ BUILD_BUG_ON(sizeof(opts) > sizeof(match->key->tun_opts));
+
+ err = nla_parse_nested(tb, OVS_VXLAN_EXT_MAX, a, vxlan_opt_policy);
+ if (err < 0)
+ return err;
+
+ memset(&opts, 0, sizeof(opts));
+
+ if (tb[OVS_VXLAN_EXT_GBP])
+ opts.gbp = nla_get_u32(tb[OVS_VXLAN_EXT_GBP]);
+
+ if (!is_mask)
+ SW_FLOW_KEY_PUT(match, tun_opts_len, sizeof(opts), false);
+ else
+ SW_FLOW_KEY_PUT(match, tun_opts_len, 0xff, true);
+
+ opt_key_offset = TUN_METADATA_OFFSET(sizeof(opts));
+ SW_FLOW_KEY_MEMCPY_OFFSET(match, opt_key_offset, &opts, sizeof(opts),
+ is_mask);
+ return 0;
+}
+
static int ipv4_tun_from_nlattr(const struct nlattr *attr,
struct sw_flow_match *match, bool is_mask,
bool log)
int rem;
bool ttl = false;
__be16 tun_flags = 0;
+ int opts_type = 0;
nla_for_each_nested(a, attr, rem) {
int type = nla_type(a);
int err;
- static const u32 ovs_tunnel_key_lens[OVS_TUNNEL_KEY_ATTR_MAX + 1] = {
- [OVS_TUNNEL_KEY_ATTR_ID] = sizeof(u64),
- [OVS_TUNNEL_KEY_ATTR_IPV4_SRC] = sizeof(u32),
- [OVS_TUNNEL_KEY_ATTR_IPV4_DST] = sizeof(u32),
- [OVS_TUNNEL_KEY_ATTR_TOS] = 1,
- [OVS_TUNNEL_KEY_ATTR_TTL] = 1,
- [OVS_TUNNEL_KEY_ATTR_DONT_FRAGMENT] = 0,
- [OVS_TUNNEL_KEY_ATTR_CSUM] = 0,
- [OVS_TUNNEL_KEY_ATTR_TP_SRC] = sizeof(u16),
- [OVS_TUNNEL_KEY_ATTR_TP_DST] = sizeof(u16),
- [OVS_TUNNEL_KEY_ATTR_OAM] = 0,
- [OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS] = -1,
- };
-
if (type > OVS_TUNNEL_KEY_ATTR_MAX) {
OVS_NLERR(log, "Tunnel attr %d out of range max %d",
type, OVS_TUNNEL_KEY_ATTR_MAX);
return -EINVAL;
}
- if (ovs_tunnel_key_lens[type] != nla_len(a) &&
- ovs_tunnel_key_lens[type] != -1) {
+ if (ovs_tunnel_key_lens[type].len != nla_len(a) &&
+ ovs_tunnel_key_lens[type].len != OVS_ATTR_NESTED) {
OVS_NLERR(log, "Tunnel attr %d has unexpected len %d expected %d",
- type, nla_len(a), ovs_tunnel_key_lens[type]);
+ type, nla_len(a), ovs_tunnel_key_lens[type].len);
return -EINVAL;
}
tun_flags |= TUNNEL_OAM;
break;
case OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS:
+ if (opts_type) {
+ OVS_NLERR(log, "Multiple metadata blocks provided");
+ return -EINVAL;
+ }
+
err = genev_tun_opt_from_nlattr(a, match, is_mask, log);
if (err)
return err;
- tun_flags |= TUNNEL_OPTIONS_PRESENT;
+ tun_flags |= TUNNEL_GENEVE_OPT;
+ opts_type = type;
+ break;
+ case OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS:
+ if (opts_type) {
+ OVS_NLERR(log, "Multiple metadata blocks provided");
+ return -EINVAL;
+ }
+
+ err = vxlan_tun_opt_from_nlattr(a, match, is_mask, log);
+ if (err)
+ return err;
+
+ tun_flags |= TUNNEL_VXLAN_OPT;
+ opts_type = type;
break;
default:
OVS_NLERR(log, "Unknown IPv4 tunnel attribute %d",
}
}
+ return opts_type;
+}
+
+static int vxlan_opt_to_nlattr(struct sk_buff *skb,
+ const void *tun_opts, int swkey_tun_opts_len)
+{
+ const struct ovs_vxlan_opts *opts = tun_opts;
+ struct nlattr *nla;
+
+ nla = nla_nest_start(skb, OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS);
+ if (!nla)
+ return -EMSGSIZE;
+
+ if (nla_put_u32(skb, OVS_VXLAN_EXT_GBP, opts->gbp) < 0)
+ return -EMSGSIZE;
+
+ nla_nest_end(skb, nla);
return 0;
}
static int __ipv4_tun_to_nlattr(struct sk_buff *skb,
const struct ovs_key_ipv4_tunnel *output,
- const struct geneve_opt *tun_opts,
- int swkey_tun_opts_len)
+ const void *tun_opts, int swkey_tun_opts_len)
{
if (output->tun_flags & TUNNEL_KEY &&
nla_put_be64(skb, OVS_TUNNEL_KEY_ATTR_ID, output->tun_id))
if ((output->tun_flags & TUNNEL_OAM) &&
nla_put_flag(skb, OVS_TUNNEL_KEY_ATTR_OAM))
return -EMSGSIZE;
- if (tun_opts &&
- nla_put(skb, OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS,
- swkey_tun_opts_len, tun_opts))
- return -EMSGSIZE;
+ if (tun_opts) {
+ if (output->tun_flags & TUNNEL_GENEVE_OPT &&
+ nla_put(skb, OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS,
+ swkey_tun_opts_len, tun_opts))
+ return -EMSGSIZE;
+ else if (output->tun_flags & TUNNEL_VXLAN_OPT &&
+ vxlan_opt_to_nlattr(skb, tun_opts, swkey_tun_opts_len))
+ return -EMSGSIZE;
+ }
return 0;
}
static int ipv4_tun_to_nlattr(struct sk_buff *skb,
const struct ovs_key_ipv4_tunnel *output,
- const struct geneve_opt *tun_opts,
- int swkey_tun_opts_len)
+ const void *tun_opts, int swkey_tun_opts_len)
{
struct nlattr *nla;
int err;
}
if (*attrs & (1 << OVS_KEY_ATTR_TUNNEL)) {
if (ipv4_tun_from_nlattr(a[OVS_KEY_ATTR_TUNNEL], match,
- is_mask, log))
+ is_mask, log) < 0)
return -EINVAL;
*attrs &= ~(1 << OVS_KEY_ATTR_TUNNEL);
}
return 0;
}
-static void nlattr_set(struct nlattr *attr, u8 val, bool is_attr_mask_key)
+static void nlattr_set(struct nlattr *attr, u8 val,
+ const struct ovs_len_tbl *tbl)
{
struct nlattr *nla;
int rem;
/* The nlattr stream should already have been validated */
nla_for_each_nested(nla, attr, rem) {
- /* We assume that ovs_key_lens[type] == -1 means that type is a
- * nested attribute
- */
- if (is_attr_mask_key && ovs_key_lens[nla_type(nla)] == -1)
- nlattr_set(nla, val, false);
+ if (tbl && tbl[nla_type(nla)].len == OVS_ATTR_NESTED)
+ nlattr_set(nla, val, tbl[nla_type(nla)].next);
else
memset(nla_data(nla), val, nla_len(nla));
}
static void mask_set_nlattr(struct nlattr *attr, u8 val)
{
- nlattr_set(attr, val, true);
+ nlattr_set(attr, val, ovs_key_lens);
}
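
nlattr_set() can now recurse through arbitrarily nested attributes because
each ovs_len_tbl entry either gives a fixed payload length or the
OVS_ATTR_NESTED sentinel plus a pointer to the child table. A compilable
model of that descent; the tables and lengths here are illustrative:

    #include <stdio.h>

    #define ATTR_NESTED -1

    struct len_tbl { int len; const struct len_tbl *next; };

    static const struct len_tbl tunnel_tbl[] = {
            [0] = { .len = 8 }, /* e.g. a 64-bit tunnel id */
            [1] = { .len = 4 },
    };

    static const struct len_tbl key_tbl[] = {
            [0] = { .len = 4 },
            [1] = { .len = ATTR_NESTED, .next = tunnel_tbl }, /* descend */
    };

    static void walk(const struct len_tbl *tbl, int type, int depth)
    {
            if (tbl[type].len == ATTR_NESTED) {
                    printf("%*stype %d: nested\n", depth * 2, "", type);
                    walk(tbl[type].next, 0, depth + 1); /* into children */
            } else {
                    printf("%*stype %d: len %d\n", depth * 2, "", type,
                           tbl[type].len);
            }
    }

    int main(void)
    {
            walk(key_tbl, 1, 0);
            return 0;
    }
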
/**
return err;
}
+static size_t get_ufid_len(const struct nlattr *attr, bool log)
+{
+ size_t len;
+
+ if (!attr)
+ return 0;
+
+ len = nla_len(attr);
+ if (len < 1 || len > MAX_UFID_LENGTH) {
+ OVS_NLERR(log, "ufid size %u bytes exceeds the range (1, %d)",
+ nla_len(attr), MAX_UFID_LENGTH);
+ return 0;
+ }
+
+ return len;
+}
+
+/* Initializes 'sfid', returning true if 'attr' contains a valid UFID,
+ * or false otherwise.
+ */
+bool ovs_nla_get_ufid(struct sw_flow_id *sfid, const struct nlattr *attr,
+ bool log)
+{
+ sfid->ufid_len = get_ufid_len(attr, log);
+ if (sfid->ufid_len)
+ memcpy(sfid->ufid, nla_data(attr), sfid->ufid_len);
+
+ return sfid->ufid_len;
+}
+
+int ovs_nla_get_identifier(struct sw_flow_id *sfid, const struct nlattr *ufid,
+ const struct sw_flow_key *key, bool log)
+{
+ struct sw_flow_key *new_key;
+
+ if (ovs_nla_get_ufid(sfid, ufid, log))
+ return 0;
+
+ /* If UFID was not provided, use unmasked key. */
+ new_key = kmalloc(sizeof(*new_key), GFP_KERNEL);
+ if (!new_key)
+ return -ENOMEM;
+ memcpy(new_key, key, sizeof(*key));
+ sfid->unmasked_key = new_key;
+
+ return 0;
+}
+
+u32 ovs_nla_get_ufid_flags(const struct nlattr *attr)
+{
+ return attr ? nla_get_u32(attr) : 0;
+}
+
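
When userspace supplies no UFID, ovs_nla_get_identifier() falls back to
duplicating the unmasked key on the heap and storing the pointer in the
identifier; flow_free() later releases it. A user-space sketch of that
fallback, with stand-in names:

    #include <stdlib.h>
    #include <string.h>

    struct key_like { unsigned char bytes[64]; };
    struct id_like  { unsigned int ufid_len; void *unmasked_key; };

    static int id_from_key(struct id_like *id, const struct key_like *key)
    {
            struct key_like *copy = malloc(sizeof(*copy));

            if (!copy)
                    return -1; /* -ENOMEM analogue */
            memcpy(copy, key, sizeof(*key));
            id->ufid_len = 0;        /* zero length marks the key branch */
            id->unmasked_key = copy; /* freed later, as in flow_free() */
            return 0;
    }

    int main(void)
    {
            struct key_like k = { { 0 } };
            struct id_like id = { 0, NULL };
            int ret = id_from_key(&id, &k);

            free(id.unmasked_key);
            return ret;
    }
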
/**
* ovs_nla_get_flow_metadata - parses Netlink attributes into a flow key.
* @key: Receives extracted in_port, priority, tun_key and skb_mark.
return metadata_from_nlattrs(&match, &attrs, a, false, log);
}
-int ovs_nla_put_flow(const struct sw_flow_key *swkey,
- const struct sw_flow_key *output, struct sk_buff *skb)
+static int __ovs_nla_put_key(const struct sw_flow_key *swkey,
+ const struct sw_flow_key *output, bool is_mask,
+ struct sk_buff *skb)
{
struct ovs_key_ethernet *eth_key;
struct nlattr *nla, *encap;
- bool is_mask = (swkey != output);
if (nla_put_u32(skb, OVS_KEY_ATTR_RECIRC_ID, output->recirc_id))
goto nla_put_failure;
goto nla_put_failure;
if ((swkey->tun_key.ipv4_dst || is_mask)) {
- const struct geneve_opt *opts = NULL;
+ const void *opts = NULL;
if (output->tun_key.tun_flags & TUNNEL_OPTIONS_PRESENT)
- opts = GENEVE_OPTS(output, swkey->tun_opts_len);
+ opts = TUN_METADATA_OPTS(output, swkey->tun_opts_len);
if (ipv4_tun_to_nlattr(skb, &output->tun_key, opts,
swkey->tun_opts_len))
return -EMSGSIZE;
}
+int ovs_nla_put_key(const struct sw_flow_key *swkey,
+ const struct sw_flow_key *output, int attr, bool is_mask,
+ struct sk_buff *skb)
+{
+ int err;
+ struct nlattr *nla;
+
+ nla = nla_nest_start(skb, attr);
+ if (!nla)
+ return -EMSGSIZE;
+ err = __ovs_nla_put_key(swkey, output, is_mask, skb);
+ if (err)
+ return err;
+ nla_nest_end(skb, nla);
+
+ return 0;
+}
+
+/* Called with ovs_mutex or RCU read lock. */
+int ovs_nla_put_identifier(const struct sw_flow *flow, struct sk_buff *skb)
+{
+ if (ovs_identifier_is_ufid(&flow->id))
+ return nla_put(skb, OVS_FLOW_ATTR_UFID, flow->id.ufid_len,
+ flow->id.ufid);
+
+ return ovs_nla_put_key(flow->id.unmasked_key, flow->id.unmasked_key,
+ OVS_FLOW_ATTR_KEY, false, skb);
+}
+
+/* Called with ovs_mutex or RCU read lock. */
+int ovs_nla_put_masked_key(const struct sw_flow *flow, struct sk_buff *skb)
+{
+ return ovs_nla_put_key(&flow->mask->key, &flow->key,
+ OVS_FLOW_ATTR_KEY, false, skb);
+}
+
+/* Called with ovs_mutex or RCU read lock. */
+int ovs_nla_put_mask(const struct sw_flow *flow, struct sk_buff *skb)
+{
+ return ovs_nla_put_key(&flow->key, &flow->mask->key,
+ OVS_FLOW_ATTR_MASK, true, skb);
+}
+
#define MAX_ACTIONS_BUFSIZE (32 * 1024)
static struct sw_flow_actions *nla_alloc_flow_actions(int size, bool log)
}
}
+static int validate_geneve_opts(struct sw_flow_key *key)
+{
+ struct geneve_opt *option;
+ int opts_len = key->tun_opts_len;
+ bool crit_opt = false;
+
+ option = (struct geneve_opt *)TUN_METADATA_OPTS(key, key->tun_opts_len);
+ while (opts_len > 0) {
+ int len;
+
+ if (opts_len < sizeof(*option))
+ return -EINVAL;
+
+ len = sizeof(*option) + option->length * 4;
+ if (len > opts_len)
+ return -EINVAL;
+
+ crit_opt |= !!(option->type & GENEVE_CRIT_OPT_TYPE);
+
+ option = (struct geneve_opt *)((u8 *)option + len);
+ opts_len -= len;
+ };
+
+ key->tun_key.tun_flags |= crit_opt ? TUNNEL_CRIT_OPT : 0;
+
+ return 0;
+}
+
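
validate_geneve_opts() is the TLV walk factored out of
validate_and_copy_set_tun(): reject a truncated header, then a body that runs
past the buffer, then advance by the 4-byte-granular length. A simplified,
compilable version (the 2-byte header here is illustrative; the real
geneve_opt header is larger):

    #include <stdint.h>
    #include <stdio.h>

    struct opt { uint8_t type; uint8_t length; /* in 4-byte words */ };

    static int validate(const uint8_t *buf, int len)
    {
            while (len > 0) {
                    const struct opt *o = (const struct opt *)buf;
                    int olen;

                    if (len < (int)sizeof(*o))
                            return -1; /* truncated header */
                    olen = sizeof(*o) + o->length * 4;
                    if (olen > len)
                            return -1; /* body runs past the buffer */
                    buf += olen;
                    len -= olen;
            }
            return 0;
    }

    int main(void)
    {
            uint8_t ok[6] = { 0x80, 1, 0, 0, 0, 0 };

            printf("%d\n", validate(ok, (int)sizeof(ok)));
            return 0;
    }
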
static int validate_and_copy_set_tun(const struct nlattr *attr,
struct sw_flow_actions **sfa, bool log)
{
struct sw_flow_key key;
struct ovs_tunnel_info *tun_info;
struct nlattr *a;
- int err, start;
+ int err, start, opts_type;
ovs_match_init(&match, &key, NULL);
- err = ipv4_tun_from_nlattr(nla_data(attr), &match, false, log);
- if (err)
- return err;
+ opts_type = ipv4_tun_from_nlattr(nla_data(attr), &match, false, log);
+ if (opts_type < 0)
+ return opts_type;
if (key.tun_opts_len) {
- struct geneve_opt *option = GENEVE_OPTS(&key,
- key.tun_opts_len);
- int opts_len = key.tun_opts_len;
- bool crit_opt = false;
-
- while (opts_len > 0) {
- int len;
-
- if (opts_len < sizeof(*option))
- return -EINVAL;
-
- len = sizeof(*option) + option->length * 4;
- if (len > opts_len)
- return -EINVAL;
-
- crit_opt |= !!(option->type & GENEVE_CRIT_OPT_TYPE);
-
- option = (struct geneve_opt *)((u8 *)option + len);
- opts_len -= len;
- };
-
- key.tun_key.tun_flags |= crit_opt ? TUNNEL_CRIT_OPT : 0;
+ switch (opts_type) {
+ case OVS_TUNNEL_KEY_ATTR_GENEVE_OPTS:
+ err = validate_geneve_opts(&key);
+ if (err < 0)
+ return err;
+ break;
+ case OVS_TUNNEL_KEY_ATTR_VXLAN_OPTS:
+ break;
+ }
};
start = add_nested_action_start(sfa, OVS_ACTION_ATTR_SET, log);
* everything else will go away after flow setup. We can append
* it to tun_info and then point there.
*/
- memcpy((tun_info + 1), GENEVE_OPTS(&key, key.tun_opts_len),
- key.tun_opts_len);
- tun_info->options = (struct geneve_opt *)(tun_info + 1);
+ memcpy((tun_info + 1),
+ TUN_METADATA_OPTS(&key, key.tun_opts_len), key.tun_opts_len);
+ tun_info->options = (tun_info + 1);
} else {
tun_info->options = NULL;
}
return -EINVAL;
if (key_type > OVS_KEY_ATTR_MAX ||
- (ovs_key_lens[key_type] != nla_len(ovs_key) &&
- ovs_key_lens[key_type] != -1))
+ (ovs_key_lens[key_type].len != nla_len(ovs_key) &&
+ ovs_key_lens[key_type].len != OVS_ATTR_NESTED))
return -EINVAL;
switch (key_type) {
void ovs_match_init(struct sw_flow_match *match,
struct sw_flow_key *key, struct sw_flow_mask *mask);
-int ovs_nla_put_flow(const struct sw_flow_key *,
- const struct sw_flow_key *, struct sk_buff *);
+int ovs_nla_put_key(const struct sw_flow_key *, const struct sw_flow_key *,
+ int attr, bool is_mask, struct sk_buff *);
int ovs_nla_get_flow_metadata(const struct nlattr *, struct sw_flow_key *,
bool log);
+int ovs_nla_put_identifier(const struct sw_flow *flow, struct sk_buff *skb);
+int ovs_nla_put_masked_key(const struct sw_flow *flow, struct sk_buff *skb);
+int ovs_nla_put_mask(const struct sw_flow *flow, struct sk_buff *skb);
+
int ovs_nla_get_match(struct sw_flow_match *, const struct nlattr *key,
const struct nlattr *mask, bool log);
int ovs_nla_put_egress_tunnel_key(struct sk_buff *,
const struct ovs_tunnel_info *);
+bool ovs_nla_get_ufid(struct sw_flow_id *, const struct nlattr *, bool log);
+int ovs_nla_get_identifier(struct sw_flow_id *sfid, const struct nlattr *ufid,
+ const struct sw_flow_key *key, bool log);
+u32 ovs_nla_get_ufid_flags(const struct nlattr *attr);
+
int ovs_nla_copy_actions(const struct nlattr *attr,
const struct sw_flow_key *key,
struct sw_flow_actions **sfa, bool log);
{
int node;
+ if (ovs_identifier_is_key(&flow->id))
+ kfree(flow->id.unmasked_key);
kfree((struct sw_flow_actions __force *)flow->sf_acts);
for_each_node(node)
if (flow->stats[node])
int ovs_flow_tbl_init(struct flow_table *table)
{
- struct table_instance *ti;
+ struct table_instance *ti, *ufid_ti;
ti = table_instance_alloc(TBL_MIN_BUCKETS);
if (!ti)
return -ENOMEM;
+ ufid_ti = table_instance_alloc(TBL_MIN_BUCKETS);
+ if (!ufid_ti)
+ goto free_ti;
+
rcu_assign_pointer(table->ti, ti);
+ rcu_assign_pointer(table->ufid_ti, ufid_ti);
INIT_LIST_HEAD(&table->mask_list);
table->last_rehash = jiffies;
table->count = 0;
+ table->ufid_count = 0;
return 0;
+
+free_ti:
+ __table_instance_destroy(ti);
+ return -ENOMEM;
}
static void flow_tbl_destroy_rcu_cb(struct rcu_head *rcu)
__table_instance_destroy(ti);
}
-static void table_instance_destroy(struct table_instance *ti, bool deferred)
+static void table_instance_destroy(struct table_instance *ti,
+ struct table_instance *ufid_ti,
+ bool deferred)
{
int i;
if (!ti)
return;
+ BUG_ON(!ufid_ti);
if (ti->keep_flows)
goto skip_flows;
struct hlist_head *head = flex_array_get(ti->buckets, i);
struct hlist_node *n;
int ver = ti->node_ver;
+ int ufid_ver = ufid_ti->node_ver;
- hlist_for_each_entry_safe(flow, n, head, hash_node[ver]) {
- hlist_del_rcu(&flow->hash_node[ver]);
+ hlist_for_each_entry_safe(flow, n, head, flow_table.node[ver]) {
+ hlist_del_rcu(&flow->flow_table.node[ver]);
+ if (ovs_identifier_is_ufid(&flow->id))
+ hlist_del_rcu(&flow->ufid_table.node[ufid_ver]);
ovs_flow_free(flow, deferred);
}
}
skip_flows:
- if (deferred)
+ if (deferred) {
call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb);
- else
+ call_rcu(&ufid_ti->rcu, flow_tbl_destroy_rcu_cb);
+ } else {
__table_instance_destroy(ti);
+ __table_instance_destroy(ufid_ti);
+ }
}
/* No need for locking this function is called from RCU callback or
void ovs_flow_tbl_destroy(struct flow_table *table)
{
struct table_instance *ti = rcu_dereference_raw(table->ti);
+ struct table_instance *ufid_ti = rcu_dereference_raw(table->ufid_ti);
- table_instance_destroy(ti, false);
+ table_instance_destroy(ti, ufid_ti, false);
}
struct sw_flow *ovs_flow_tbl_dump_next(struct table_instance *ti,
while (*bucket < ti->n_buckets) {
i = 0;
head = flex_array_get(ti->buckets, *bucket);
- hlist_for_each_entry_rcu(flow, head, hash_node[ver]) {
+ hlist_for_each_entry_rcu(flow, head, flow_table.node[ver]) {
if (i < *last) {
i++;
continue;
(hash & (ti->n_buckets - 1)));
}
-static void table_instance_insert(struct table_instance *ti, struct sw_flow *flow)
+static void table_instance_insert(struct table_instance *ti,
+ struct sw_flow *flow)
+{
+ struct hlist_head *head;
+
+ head = find_bucket(ti, flow->flow_table.hash);
+ hlist_add_head_rcu(&flow->flow_table.node[ti->node_ver], head);
+}
+
+static void ufid_table_instance_insert(struct table_instance *ti,
+ struct sw_flow *flow)
{
struct hlist_head *head;
- head = find_bucket(ti, flow->hash);
- hlist_add_head_rcu(&flow->hash_node[ti->node_ver], head);
+ head = find_bucket(ti, flow->ufid_table.hash);
+ hlist_add_head_rcu(&flow->ufid_table.node[ti->node_ver], head);
}
static void flow_table_copy_flows(struct table_instance *old,
- struct table_instance *new)
+ struct table_instance *new, bool ufid)
{
int old_ver;
int i;
head = flex_array_get(old->buckets, i);
- hlist_for_each_entry(flow, head, hash_node[old_ver])
- table_instance_insert(new, flow);
+ if (ufid)
+ hlist_for_each_entry(flow, head,
+ ufid_table.node[old_ver])
+ ufid_table_instance_insert(new, flow);
+ else
+ hlist_for_each_entry(flow, head,
+ flow_table.node[old_ver])
+ table_instance_insert(new, flow);
}
old->keep_flows = true;
}
static struct table_instance *table_instance_rehash(struct table_instance *ti,
- int n_buckets)
+ int n_buckets, bool ufid)
{
struct table_instance *new_ti;
if (!new_ti)
return NULL;
- flow_table_copy_flows(ti, new_ti);
+ flow_table_copy_flows(ti, new_ti, ufid);
return new_ti;
}
int ovs_flow_tbl_flush(struct flow_table *flow_table)
{
- struct table_instance *old_ti;
- struct table_instance *new_ti;
+ struct table_instance *old_ti, *new_ti;
+ struct table_instance *old_ufid_ti, *new_ufid_ti;
- old_ti = ovsl_dereference(flow_table->ti);
new_ti = table_instance_alloc(TBL_MIN_BUCKETS);
if (!new_ti)
return -ENOMEM;
+ new_ufid_ti = table_instance_alloc(TBL_MIN_BUCKETS);
+ if (!new_ufid_ti)
+ goto err_free_ti;
+
+ old_ti = ovsl_dereference(flow_table->ti);
+ old_ufid_ti = ovsl_dereference(flow_table->ufid_ti);
rcu_assign_pointer(flow_table->ti, new_ti);
+ rcu_assign_pointer(flow_table->ufid_ti, new_ufid_ti);
flow_table->last_rehash = jiffies;
flow_table->count = 0;
+ flow_table->ufid_count = 0;
- table_instance_destroy(old_ti, true);
+ table_instance_destroy(old_ti, old_ufid_ti, true);
return 0;
+
+err_free_ti:
+ __table_instance_destroy(new_ti);
+ return -ENOMEM;
}
-static u32 flow_hash(const struct sw_flow_key *key, int key_start,
- int key_end)
+static u32 flow_hash(const struct sw_flow_key *key,
+ const struct sw_flow_key_range *range)
{
+ int key_start = range->start;
+ int key_end = range->end;
const u32 *hash_key = (const u32 *)((const u8 *)key + key_start);
int hash_u32s = (key_end - key_start) >> 2;
static bool flow_cmp_masked_key(const struct sw_flow *flow,
const struct sw_flow_key *key,
- int key_start, int key_end)
+ const struct sw_flow_key_range *range)
{
- return cmp_key(&flow->key, key, key_start, key_end);
+ return cmp_key(&flow->key, key, range->start, range->end);
}
-bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow,
- const struct sw_flow_match *match)
+static bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow,
+ const struct sw_flow_match *match)
{
struct sw_flow_key *key = match->key;
int key_start = flow_key_start(key);
int key_end = match->range.end;
- return cmp_key(&flow->unmasked_key, key, key_start, key_end);
+ BUG_ON(ovs_identifier_is_ufid(&flow->id));
+ return cmp_key(flow->id.unmasked_key, key, key_start, key_end);
}
static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
{
struct sw_flow *flow;
struct hlist_head *head;
- int key_start = mask->range.start;
- int key_end = mask->range.end;
u32 hash;
struct sw_flow_key masked_key;
ovs_flow_mask_key(&masked_key, unmasked, mask);
- hash = flow_hash(&masked_key, key_start, key_end);
+ hash = flow_hash(&masked_key, &mask->range);
head = find_bucket(ti, hash);
- hlist_for_each_entry_rcu(flow, head, hash_node[ti->node_ver]) {
- if (flow->mask == mask && flow->hash == hash &&
- flow_cmp_masked_key(flow, &masked_key,
- key_start, key_end))
+ hlist_for_each_entry_rcu(flow, head, flow_table.node[ti->node_ver]) {
+ if (flow->mask == mask && flow->flow_table.hash == hash &&
+ flow_cmp_masked_key(flow, &masked_key, &mask->range))
return flow;
}
return NULL;
/* Always called under ovs-mutex. */
list_for_each_entry(mask, &tbl->mask_list, list) {
flow = masked_flow_lookup(ti, match->key, mask);
- if (flow && ovs_flow_cmp_unmasked_key(flow, match)) /* Found */
+ if (flow && ovs_identifier_is_key(&flow->id) &&
+ ovs_flow_cmp_unmasked_key(flow, match))
+ return flow;
+ }
+ return NULL;
+}
+
+static u32 ufid_hash(const struct sw_flow_id *sfid)
+{
+ return jhash(sfid->ufid, sfid->ufid_len, 0);
+}
+
+static bool ovs_flow_cmp_ufid(const struct sw_flow *flow,
+ const struct sw_flow_id *sfid)
+{
+ if (flow->id.ufid_len != sfid->ufid_len)
+ return false;
+
+ return !memcmp(flow->id.ufid, sfid->ufid, sfid->ufid_len);
+}
+
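
UFID lookup hashes the raw identifier bytes and, because UFIDs are variable
length, compares lengths before bytes. A user-space sketch in which FNV-1a
stands in for the kernel's jhash():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct ufid_like { uint32_t len; uint8_t bytes[16]; };

    static uint32_t ufid_hash(const struct ufid_like *a)
    {
            uint32_t h = 2166136261u; /* FNV-1a, not the kernel's jhash */

            for (uint32_t i = 0; i < a->len; i++)
                    h = (h ^ a->bytes[i]) * 16777619u;
            return h;
    }

    static bool ufid_equal(const struct ufid_like *a,
                           const struct ufid_like *b)
    {
            if (a->len != b->len) /* lengths first: UFIDs vary in size */
                    return false;
            return memcmp(a->bytes, b->bytes, a->len) == 0;
    }

    int main(void)
    {
            struct ufid_like a = { 4, { 1, 2, 3, 4 } };
            struct ufid_like b = { 4, { 1, 2, 3, 4 } };

            printf("%u %d\n", ufid_hash(&a), ufid_equal(&a, &b));
            return 0;
    }
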
+bool ovs_flow_cmp(const struct sw_flow *flow, const struct sw_flow_match *match)
+{
+ if (ovs_identifier_is_ufid(&flow->id))
+ return flow_cmp_masked_key(flow, match->key, &match->range);
+
+ return ovs_flow_cmp_unmasked_key(flow, match);
+}
+
+struct sw_flow *ovs_flow_tbl_lookup_ufid(struct flow_table *tbl,
+ const struct sw_flow_id *ufid)
+{
+ struct table_instance *ti = rcu_dereference_ovsl(tbl->ufid_ti);
+ struct sw_flow *flow;
+ struct hlist_head *head;
+ u32 hash;
+
+ hash = ufid_hash(ufid);
+ head = find_bucket(ti, hash);
+ hlist_for_each_entry_rcu(flow, head, ufid_table.node[ti->node_ver]) {
+ if (flow->ufid_table.hash == hash &&
+ ovs_flow_cmp_ufid(flow, ufid))
return flow;
}
return NULL;
return num;
}
-static struct table_instance *table_instance_expand(struct table_instance *ti)
+static struct table_instance *table_instance_expand(struct table_instance *ti,
+ bool ufid)
{
- return table_instance_rehash(ti, ti->n_buckets * 2);
+ return table_instance_rehash(ti, ti->n_buckets * 2, ufid);
}
/* Remove 'mask' from the mask list, if it is not needed any more. */
void ovs_flow_tbl_remove(struct flow_table *table, struct sw_flow *flow)
{
struct table_instance *ti = ovsl_dereference(table->ti);
+ struct table_instance *ufid_ti = ovsl_dereference(table->ufid_ti);
BUG_ON(table->count == 0);
- hlist_del_rcu(&flow->hash_node[ti->node_ver]);
+ hlist_del_rcu(&flow->flow_table.node[ti->node_ver]);
table->count--;
+ if (ovs_identifier_is_ufid(&flow->id)) {
+ hlist_del_rcu(&flow->ufid_table.node[ufid_ti->node_ver]);
+ table->ufid_count--;
+ }
/* RCU delete the mask. 'flow->mask' is not NULLed, as it should be
* accessible as long as the RCU read lock is held.
}
/* Must be called with OVS mutex held. */
-int ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
- const struct sw_flow_mask *mask)
+static void flow_key_insert(struct flow_table *table, struct sw_flow *flow)
{
struct table_instance *new_ti = NULL;
struct table_instance *ti;
- int err;
-
- err = flow_mask_insert(table, flow, mask);
- if (err)
- return err;
- flow->hash = flow_hash(&flow->key, flow->mask->range.start,
- flow->mask->range.end);
+ flow->flow_table.hash = flow_hash(&flow->key, &flow->mask->range);
ti = ovsl_dereference(table->ti);
table_instance_insert(ti, flow);
table->count++;
/* Expand table, if necessary, to make room. */
if (table->count > ti->n_buckets)
- new_ti = table_instance_expand(ti);
+ new_ti = table_instance_expand(ti, false);
else if (time_after(jiffies, table->last_rehash + REHASH_INTERVAL))
- new_ti = table_instance_rehash(ti, ti->n_buckets);
+ new_ti = table_instance_rehash(ti, ti->n_buckets, false);
if (new_ti) {
rcu_assign_pointer(table->ti, new_ti);
- table_instance_destroy(ti, true);
+ call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb);
table->last_rehash = jiffies;
}
+}
+
+/* Must be called with OVS mutex held. */
+static void flow_ufid_insert(struct flow_table *table, struct sw_flow *flow)
+{
+ struct table_instance *ti;
+
+ flow->ufid_table.hash = ufid_hash(&flow->id);
+ ti = ovsl_dereference(table->ufid_ti);
+ ufid_table_instance_insert(ti, flow);
+ table->ufid_count++;
+
+ /* Expand table, if necessary, to make room. */
+ if (table->ufid_count > ti->n_buckets) {
+ struct table_instance *new_ti;
+
+ new_ti = table_instance_expand(ti, true);
+ if (new_ti) {
+ rcu_assign_pointer(table->ufid_ti, new_ti);
+ call_rcu(&ti->rcu, flow_tbl_destroy_rcu_cb);
+ }
+ }
+}
+
+/* Must be called with OVS mutex held. */
+int ovs_flow_tbl_insert(struct flow_table *table, struct sw_flow *flow,
+ const struct sw_flow_mask *mask)
+{
+ int err;
+
+ err = flow_mask_insert(table, flow, mask);
+ if (err)
+ return err;
+ flow_key_insert(table, flow);
+ if (ovs_identifier_is_ufid(&flow->id))
+ flow_ufid_insert(table, flow);
+
return 0;
}
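
A flow can now sit in two hash tables at once because it embeds two
independent list nodes; insertion links the masked-key node always and the
UFID node only when an identifier is present. A minimal stand-alone model of
that double linkage (all names are illustrative):

    #include <stddef.h>

    struct node_like { struct node_like *next; };

    struct flow_like {
            struct { struct node_like node; unsigned int hash; } flow_table,
                                                                 ufid_table;
            int has_ufid;
    };

    static void push(struct node_like **head, struct node_like *n)
    {
            n->next = *head;
            *head = n;
    }

    static void insert(struct node_like **key_head,
                       struct node_like **ufid_head, struct flow_like *f)
    {
            push(key_head, &f->flow_table.node);
            if (f->has_ufid) /* second, independent link */
                    push(ufid_head, &f->ufid_table.node);
    }

    int main(void)
    {
            struct node_like *keys = NULL, *ufids = NULL;
            struct flow_like f = { .has_ufid = 1 };

            insert(&keys, &ufids, &f);
            return 0;
    }
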
struct flow_table {
struct table_instance __rcu *ti;
+ struct table_instance __rcu *ufid_ti;
struct list_head mask_list;
unsigned long last_rehash;
unsigned int count;
+ unsigned int ufid_count;
};
extern struct kmem_cache *flow_stats_cache;
const struct sw_flow_key *);
struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,
const struct sw_flow_match *match);
-bool ovs_flow_cmp_unmasked_key(const struct sw_flow *flow,
- const struct sw_flow_match *match);
+struct sw_flow *ovs_flow_tbl_lookup_ufid(struct flow_table *,
+ const struct sw_flow_id *);
+
+bool ovs_flow_cmp(const struct sw_flow *, const struct sw_flow_match *);
void ovs_flow_mask_key(struct sw_flow_key *dst, const struct sw_flow_key *src,
const struct sw_flow_mask *mask);
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-#include <linux/version.h>
-
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/net.h>
opts_len = geneveh->opt_len * 4;
- flags = TUNNEL_KEY | TUNNEL_OPTIONS_PRESENT |
+ flags = TUNNEL_KEY | TUNNEL_GENEVE_OPT |
(udp_hdr(skb)->check != 0 ? TUNNEL_CSUM : 0) |
(geneveh->oam ? TUNNEL_OAM : 0) |
(geneveh->critical ? TUNNEL_CRIT_OPT : 0);
static int geneve_tnl_send(struct vport *vport, struct sk_buff *skb)
{
- struct ovs_key_ipv4_tunnel *tun_key;
+ const struct ovs_key_ipv4_tunnel *tun_key;
struct ovs_tunnel_info *tun_info;
struct net *net = ovs_dp_get_net(vport->dp);
struct geneve_port *geneve_port = geneve_vport(vport);
__be16 sport;
struct rtable *rt;
struct flowi4 fl;
- u8 vni[3];
+ u8 vni[3], opts_len, *opts;
__be16 df;
int err;
}
tun_key = &tun_info->tunnel;
-
- /* Route lookup */
- memset(&fl, 0, sizeof(fl));
- fl.daddr = tun_key->ipv4_dst;
- fl.saddr = tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
- fl.flowi4_mark = skb->mark;
- fl.flowi4_proto = IPPROTO_UDP;
-
- rt = ip_route_output_key(net, &fl);
+ rt = ovs_tunnel_route_lookup(net, tun_key, skb->mark, &fl, IPPROTO_UDP);
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
goto error;
tunnel_id_to_vni(tun_key->tun_id, vni);
skb->ignore_df = 1;
+ if (tun_key->tun_flags & TUNNEL_GENEVE_OPT) {
+ opts = (u8 *)tun_info->options;
+ opts_len = tun_info->options_len;
+ } else {
+ opts = NULL;
+ opts_len = 0;
+ }
+
err = geneve_xmit_skb(geneve_port->gs, rt, skb, fl.saddr,
tun_key->ipv4_dst, tun_key->ipv4_tos,
tun_key->ipv4_ttl, df, sport, dport,
- tun_key->tun_flags, vni,
- tun_info->options_len, (u8 *)tun_info->options,
+ tun_key->tun_flags, vni, opts_len, opts,
false);
if (err < 0)
ip_rt_put(rt);
static int gre_tnl_send(struct vport *vport, struct sk_buff *skb)
{
struct net *net = ovs_dp_get_net(vport->dp);
- struct ovs_key_ipv4_tunnel *tun_key;
+ const struct ovs_key_ipv4_tunnel *tun_key;
struct flowi4 fl;
struct rtable *rt;
int min_headroom;
}
tun_key = &OVS_CB(skb)->egress_tun_info->tunnel;
- /* Route lookup */
- memset(&fl, 0, sizeof(fl));
- fl.daddr = tun_key->ipv4_dst;
- fl.saddr = tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
- fl.flowi4_mark = skb->mark;
- fl.flowi4_proto = IPPROTO_GRE;
-
- rt = ip_route_output_key(net, &fl);
+ rt = ovs_tunnel_route_lookup(net, tun_key, skb->mark, &fl, IPPROTO_GRE);
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
goto err_free_skb;
min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ tunnel_hlen + sizeof(struct iphdr)
- + (vlan_tx_tag_present(skb) ? VLAN_HLEN : 0);
+ + (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
if (skb_headroom(skb) < min_headroom || skb_header_cloned(skb)) {
int head_delta = SKB_DATA_ALIGN(min_headroom -
skb_headroom(skb) +
#include "datapath.h"
#include "vport.h"
+#include "vport-vxlan.h"
/**
* struct vxlan_port - Keeps track of open UDP ports
struct vxlan_port {
struct vxlan_sock *vs;
char name[IFNAMSIZ];
+ u32 exts; /* VXLAN_F_* in <net/vxlan.h> */
};
static struct vport_ops ovs_vxlan_vport_ops;
}
/* Called with rcu_read_lock and BH disabled. */
-static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb, __be32 vx_vni)
+static void vxlan_rcv(struct vxlan_sock *vs, struct sk_buff *skb,
+ struct vxlan_metadata *md)
{
struct ovs_tunnel_info tun_info;
+ struct vxlan_port *vxlan_port;
struct vport *vport = vs->data;
struct iphdr *iph;
+ struct ovs_vxlan_opts opts = {
+ .gbp = md->gbp,
+ };
__be64 key;
+ __be16 flags;
+
+ flags = TUNNEL_KEY;
+ vxlan_port = vxlan_vport(vport);
+ if (vxlan_port->exts & VXLAN_F_GBP)
+ flags |= TUNNEL_VXLAN_OPT;
/* Save outer tunnel values */
iph = ip_hdr(skb);
- key = cpu_to_be64(ntohl(vx_vni) >> 8);
+ key = cpu_to_be64(ntohl(md->vni) >> 8);
ovs_flow_tun_info_init(&tun_info, iph,
udp_hdr(skb)->source, udp_hdr(skb)->dest,
- key, TUNNEL_KEY, NULL, 0);
+ key, flags, &opts, sizeof(opts));
ovs_vport_receive(vport, skb, &tun_info);
}
if (nla_put_u16(skb, OVS_TUNNEL_ATTR_DST_PORT, ntohs(dst_port)))
return -EMSGSIZE;
+
+ if (vxlan_port->exts) {
+ struct nlattr *exts;
+
+ exts = nla_nest_start(skb, OVS_TUNNEL_ATTR_EXTENSION);
+ if (!exts)
+ return -EMSGSIZE;
+
+ if (vxlan_port->exts & VXLAN_F_GBP &&
+ nla_put_flag(skb, OVS_VXLAN_EXT_GBP))
+ return -EMSGSIZE;
+
+ nla_nest_end(skb, exts);
+ }
+
return 0;
}
ovs_vport_deferred_free(vport);
}
+static const struct nla_policy exts_policy[OVS_VXLAN_EXT_MAX+1] = {
+ [OVS_VXLAN_EXT_GBP] = { .type = NLA_FLAG, },
+};
+
+static int vxlan_configure_exts(struct vport *vport, struct nlattr *attr)
+{
+ struct nlattr *exts[OVS_VXLAN_EXT_MAX+1];
+ struct vxlan_port *vxlan_port;
+ int err;
+
+ if (nla_len(attr) < sizeof(struct nlattr))
+ return -EINVAL;
+
+ err = nla_parse_nested(exts, OVS_VXLAN_EXT_MAX, attr, exts_policy);
+ if (err < 0)
+ return err;
+
+ vxlan_port = vxlan_vport(vport);
+
+ if (exts[OVS_VXLAN_EXT_GBP])
+ vxlan_port->exts |= VXLAN_F_GBP;
+
+ return 0;
+}
+
static struct vport *vxlan_tnl_create(const struct vport_parms *parms)
{
struct net *net = ovs_dp_get_net(parms->dp);
vxlan_port = vxlan_vport(vport);
strncpy(vxlan_port->name, parms->name, IFNAMSIZ);
- vs = vxlan_sock_add(net, htons(dst_port), vxlan_rcv, vport, true, 0);
+ a = nla_find_nested(options, OVS_TUNNEL_ATTR_EXTENSION);
+ if (a) {
+ err = vxlan_configure_exts(vport, a);
+ if (err) {
+ ovs_vport_free(vport);
+ goto error;
+ }
+ }
+
+ vs = vxlan_sock_add(net, htons(dst_port), vxlan_rcv, vport, true,
+ vxlan_port->exts);
if (IS_ERR(vs)) {
ovs_vport_free(vport);
return (void *)vs;
return ERR_PTR(err);
}
+static int vxlan_ext_gbp(struct sk_buff *skb)
+{
+ const struct ovs_tunnel_info *tun_info;
+ const struct ovs_vxlan_opts *opts;
+
+ tun_info = OVS_CB(skb)->egress_tun_info;
+ opts = tun_info->options;
+
+ if (tun_info->tunnel.tun_flags & TUNNEL_VXLAN_OPT &&
+ tun_info->options_len >= sizeof(*opts))
+ return opts->gbp;
+ else
+ return 0;
+}
+
static int vxlan_tnl_send(struct vport *vport, struct sk_buff *skb)
{
struct net *net = ovs_dp_get_net(vport->dp);
struct vxlan_port *vxlan_port = vxlan_vport(vport);
__be16 dst_port = inet_sk(vxlan_port->vs->sock->sk)->inet_sport;
- struct ovs_key_ipv4_tunnel *tun_key;
+ const struct ovs_key_ipv4_tunnel *tun_key;
+ struct vxlan_metadata md = {0};
struct rtable *rt;
struct flowi4 fl;
__be16 src_port;
}
tun_key = &OVS_CB(skb)->egress_tun_info->tunnel;
- /* Route lookup */
- memset(&fl, 0, sizeof(fl));
- fl.daddr = tun_key->ipv4_dst;
- fl.saddr = tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
- fl.flowi4_mark = skb->mark;
- fl.flowi4_proto = IPPROTO_UDP;
-
- rt = ip_route_output_key(net, &fl);
+ rt = ovs_tunnel_route_lookup(net, tun_key, skb->mark, &fl, IPPROTO_UDP);
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
goto error;
skb->ignore_df = 1;
src_port = udp_flow_src_port(net, skb, 0, 0, true);
+ md.vni = htonl(be64_to_cpu(tun_key->tun_id) << 8);
+ md.gbp = vxlan_ext_gbp(skb);
- err = vxlan_xmit_skb(vxlan_port->vs, rt, skb,
- fl.saddr, tun_key->ipv4_dst,
+ err = vxlan_xmit_skb(rt, skb, fl.saddr, tun_key->ipv4_dst,
tun_key->ipv4_tos, tun_key->ipv4_ttl, df,
src_port, dst_port,
- htonl(be64_to_cpu(tun_key->tun_id) << 8),
- false);
+ &md, false, vxlan_port->exts);
if (err < 0)
ip_rt_put(rt);
return err;
--- /dev/null
+#ifndef VPORT_VXLAN_H
+#define VPORT_VXLAN_H 1
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+
+struct ovs_vxlan_opts {
+ __u32 gbp;
+};
+
+#endif
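The GBP extension travels over netlink as a flag attribute nested under OVS_TUNNEL_ATTR_EXTENSION, exactly as the dump path above emits it and vxlan_configure_exts() parses it. A minimal sketch of the writer side, assuming the same uapi attribute names (put_vxlan_gbp_ext() is illustrative, not part of this patch):

    static int put_vxlan_gbp_ext(struct sk_buff *skb)
    {
            struct nlattr *exts;

            /* Nest the GBP flag under the extension attribute. */
            exts = nla_nest_start(skb, OVS_TUNNEL_ATTR_EXTENSION);
            if (!exts)
                    return -EMSGSIZE;
            if (nla_put_flag(skb, OVS_VXLAN_EXT_GBP))
                    return -EMSGSIZE;
            nla_nest_end(skb, exts);
            return 0;
    }

Once parsed, the flag becomes VXLAN_F_GBP in vxlan_port->exts, which vxlan_rcv() reflects as TUNNEL_VXLAN_OPT in the tunnel flags.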
stats = this_cpu_ptr(vport->percpu_stats);
u64_stats_update_begin(&stats->syncp);
stats->rx_packets++;
- stats->rx_bytes += skb->len;
+ stats->rx_bytes += skb->len +
+ (skb_vlan_tag_present(skb) ? VLAN_HLEN : 0);
u64_stats_update_end(&stats->syncp);
OVS_CB(skb)->input_vport = vport;
* The process may need to be changed if the corresponding process
* in vports ops changed.
*/
- memset(&fl, 0, sizeof(fl));
- fl.daddr = tun_key->ipv4_dst;
- fl.saddr = tun_key->ipv4_src;
- fl.flowi4_tos = RT_TOS(tun_key->ipv4_tos);
- fl.flowi4_mark = skb_mark;
- fl.flowi4_proto = ipproto;
-
- rt = ip_route_output_key(net, &fl);
+ rt = ovs_tunnel_route_lookup(net, tun_key, skb_mark, &fl, ipproto);
if (IS_ERR(rt))
return PTR_ERR(rt);
int ovs_vport_ops_register(struct vport_ops *ops);
void ovs_vport_ops_unregister(struct vport_ops *ops);
+static inline struct rtable *ovs_tunnel_route_lookup(struct net *net,
+ const struct ovs_key_ipv4_tunnel *key,
+ u32 mark,
+ struct flowi4 *fl,
+ u8 protocol)
+{
+ struct rtable *rt;
+
+ memset(fl, 0, sizeof(*fl));
+ fl->daddr = key->ipv4_dst;
+ fl->saddr = key->ipv4_src;
+ fl->flowi4_tos = RT_TOS(key->ipv4_tos);
+ fl->flowi4_mark = mark;
+ fl->flowi4_proto = protocol;
+
+ rt = ip_route_output_key(net, fl);
+ return rt;
+}
#endif /* vport.h */
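The new ovs_tunnel_route_lookup() helper centralizes the flowi4 setup that both call sites above previously open-coded. A usage sketch mirroring those call sites:

    struct flowi4 fl;
    struct rtable *rt;

    rt = ovs_tunnel_route_lookup(net, tun_key, skb->mark, &fl, IPPROTO_UDP);
    if (IS_ERR(rt))
            return PTR_ERR(rt);
    /* fl.saddr now carries the source address chosen by the lookup. */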
static void prb_fill_vlan_info(struct tpacket_kbdq_core *pkc,
struct tpacket3_hdr *ppd)
{
- if (vlan_tx_tag_present(pkc->skb)) {
- ppd->hv1.tp_vlan_tci = vlan_tx_tag_get(pkc->skb);
+ if (skb_vlan_tag_present(pkc->skb)) {
+ ppd->hv1.tp_vlan_tci = skb_vlan_tag_get(pkc->skb);
ppd->hv1.tp_vlan_tpid = ntohs(pkc->skb->vlan_proto);
ppd->tp_status = TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
} else {
h.h2->tp_net = netoff;
h.h2->tp_sec = ts.tv_sec;
h.h2->tp_nsec = ts.tv_nsec;
- if (vlan_tx_tag_present(skb)) {
- h.h2->tp_vlan_tci = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ h.h2->tp_vlan_tci = skb_vlan_tag_get(skb);
h.h2->tp_vlan_tpid = ntohs(skb->vlan_proto);
status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
} else {
{
/* net device doesn't like empty head */
if (unlikely(len <= dev->hard_header_len)) {
- net_warn_ratelimited("%s: packet size is too short (%d < %d)\n",
+ net_warn_ratelimited("%s: packet size is too short (%d <= %d)\n",
current->comm, len, dev->hard_header_len);
return true;
}
err = -EINVAL;
if (sock->type == SOCK_DGRAM) {
offset = dev_hard_header(skb, dev, ntohs(proto), addr, NULL, len);
- if (unlikely(offset) < 0)
+ if (unlikely(offset < 0))
goto out_free;
} else {
if (ll_header_truncated(dev, len))
aux.tp_snaplen = skb->len;
aux.tp_mac = 0;
aux.tp_net = skb_network_offset(skb);
- if (vlan_tx_tag_present(skb)) {
- aux.tp_vlan_tci = vlan_tx_tag_get(skb);
+ if (skb_vlan_tag_present(skb)) {
+ aux.tp_vlan_tci = skb_vlan_tag_get(skb);
aux.tp_vlan_tpid = ntohs(skb->vlan_proto);
aux.tp_status |= TP_STATUS_VLAN_VALID | TP_STATUS_VLAN_TPID_VALID;
} else {
PACKET_DIAG_FILTER))
goto out_nlmsg_trim;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
out_nlmsg_trim:
nlmsg_cancel(skb, nlh);
ifm->ifa_index = dev->ifindex;
if (nla_put_u8(skb, IFA_LOCAL, addr))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
if (nla_put_u8(skb, RTA_DST, dst) ||
nla_put_u32(skb, RTA_OIF, dev->ifindex))
goto nla_put_failure;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
nla_put_failure:
nlmsg_cancel(skb, nlh);
static int route_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
{
struct net *net = sock_net(skb->sk);
- u8 addr, addr_idx = 0, addr_start_idx = cb->args[0];
+ u8 addr;
rcu_read_lock();
- for (addr = 0; addr < 64; addr++) {
- struct net_device *dev;
+ for (addr = cb->args[0]; addr < 64; addr++) {
+ struct net_device *dev = phonet_route_get_rcu(net, addr << 2);
- dev = phonet_route_get_rcu(net, addr << 2);
if (!dev)
continue;
- if (addr_idx++ < addr_start_idx)
- continue;
if (fill_route(skb, dev, addr << 2, NETLINK_CB(cb->skb).portid,
- cb->nlh->nlmsg_seq, RTM_NEWROUTE))
+ cb->nlh->nlmsg_seq, RTM_NEWROUTE) < 0)
goto out;
}
out:
rcu_read_unlock();
- cb->args[0] = addr_idx;
- cb->args[1] = 0;
+ cb->args[0] = addr;
return skb->len;
}
To compile this code as a module, choose M here: the
module will be called act_vlan.
+config NET_ACT_BPF
+ tristate "BPF based action"
+ depends on NET_CLS_ACT
+ ---help---
+ Say Y here to execute BPF code on packets. The BPF code will decide
+ if the packet should be dropped or not.
+
+ If unsure, say N.
+
+ To compile this code as a module, choose M here: the
+ module will be called act_bpf.
+
+config NET_ACT_CONNMARK
+ tristate "Netfilter Connection Mark Retriever"
+ depends on NET_CLS_ACT && NETFILTER && IP_NF_IPTABLES
+ depends on NF_CONNTRACK_MARK
+ ---help---
+ Say Y here to allow retrieving the connection tracking mark and copying it into the skb mark.
+
+ If unsure, say N.
+
+ To compile this code as a module, choose M here: the
+ module will be called act_connmark.
+
config NET_CLS_IND
bool "Incoming device classification"
depends on NET_CLS_U32 || NET_CLS_FW
obj-$(CONFIG_NET_ACT_SKBEDIT) += act_skbedit.o
obj-$(CONFIG_NET_ACT_CSUM) += act_csum.o
obj-$(CONFIG_NET_ACT_VLAN) += act_vlan.o
+obj-$(CONFIG_NET_ACT_BPF) += act_bpf.o
+obj-$(CONFIG_NET_ACT_CONNMARK) += act_connmark.o
obj-$(CONFIG_NET_SCH_FIFO) += sch_fifo.o
obj-$(CONFIG_NET_SCH_CBQ) += sch_cbq.o
obj-$(CONFIG_NET_SCH_HTB) += sch_htb.o
--- /dev/null
+/*
+ * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/filter.h>
+#include <net/netlink.h>
+#include <net/pkt_sched.h>
+
+#include <linux/tc_act/tc_bpf.h>
+#include <net/tc_act/tc_bpf.h>
+
+#define BPF_TAB_MASK 15
+
+static int tcf_bpf(struct sk_buff *skb, const struct tc_action *a,
+ struct tcf_result *res)
+{
+ struct tcf_bpf *b = a->priv;
+ int action;
+ int filter_res;
+
+ spin_lock(&b->tcf_lock);
+ b->tcf_tm.lastuse = jiffies;
+ bstats_update(&b->tcf_bstats, skb);
+ action = b->tcf_action;
+
+ filter_res = BPF_PROG_RUN(b->filter, skb);
+ if (filter_res == 0) {
+ /* A return code of 0 from the BPF program
+ * is interpreted as a packet drop here.
+ */
+ action = TC_ACT_SHOT;
+ b->tcf_qstats.drops++;
+ }
+
+ spin_unlock(&b->tcf_lock);
+ return action;
+}
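To illustrate the return-code convention (a zero result from BPF_PROG_RUN() maps to TC_ACT_SHOT above), here are two single-instruction classic BPF programs of the kind userspace could pass via TCA_ACT_BPF_OPS; the arrays are illustrative only:

    /* Non-zero return: keep the configured action. */
    static const struct sock_filter bpf_pass_all[] = {
            BPF_STMT(BPF_RET | BPF_K, 0xffff),
    };

    /* Zero return: tcf_bpf() overrides the action with TC_ACT_SHOT. */
    static const struct sock_filter bpf_drop_all[] = {
            BPF_STMT(BPF_RET | BPF_K, 0),
    };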
+
+static int tcf_bpf_dump(struct sk_buff *skb, struct tc_action *a,
+ int bind, int ref)
+{
+ unsigned char *tp = skb_tail_pointer(skb);
+ struct tcf_bpf *b = a->priv;
+ struct tc_act_bpf opt = {
+ .index = b->tcf_index,
+ .refcnt = b->tcf_refcnt - ref,
+ .bindcnt = b->tcf_bindcnt - bind,
+ .action = b->tcf_action,
+ };
+ struct tcf_t t;
+ struct nlattr *nla;
+
+ if (nla_put(skb, TCA_ACT_BPF_PARMS, sizeof(opt), &opt))
+ goto nla_put_failure;
+
+ if (nla_put_u16(skb, TCA_ACT_BPF_OPS_LEN, b->bpf_num_ops))
+ goto nla_put_failure;
+
+ nla = nla_reserve(skb, TCA_ACT_BPF_OPS, b->bpf_num_ops *
+ sizeof(struct sock_filter));
+ if (!nla)
+ goto nla_put_failure;
+
+ memcpy(nla_data(nla), b->bpf_ops, nla_len(nla));
+
+ t.install = jiffies_to_clock_t(jiffies - b->tcf_tm.install);
+ t.lastuse = jiffies_to_clock_t(jiffies - b->tcf_tm.lastuse);
+ t.expires = jiffies_to_clock_t(b->tcf_tm.expires);
+ if (nla_put(skb, TCA_ACT_BPF_TM, sizeof(t), &t))
+ goto nla_put_failure;
+ return skb->len;
+
+nla_put_failure:
+ nlmsg_trim(skb, tp);
+ return -1;
+}
+
+static const struct nla_policy act_bpf_policy[TCA_ACT_BPF_MAX + 1] = {
+ [TCA_ACT_BPF_PARMS] = { .len = sizeof(struct tc_act_bpf) },
+ [TCA_ACT_BPF_OPS_LEN] = { .type = NLA_U16 },
+ [TCA_ACT_BPF_OPS] = { .type = NLA_BINARY,
+ .len = sizeof(struct sock_filter) * BPF_MAXINSNS },
+};
+
+static int tcf_bpf_init(struct net *net, struct nlattr *nla,
+ struct nlattr *est, struct tc_action *a,
+ int ovr, int bind)
+{
+ struct nlattr *tb[TCA_ACT_BPF_MAX + 1];
+ struct tc_act_bpf *parm;
+ struct tcf_bpf *b;
+ u16 bpf_size, bpf_num_ops;
+ struct sock_filter *bpf_ops;
+ struct sock_fprog_kern tmp;
+ struct bpf_prog *fp;
+ int ret;
+
+ if (!nla)
+ return -EINVAL;
+
+ ret = nla_parse_nested(tb, TCA_ACT_BPF_MAX, nla, act_bpf_policy);
+ if (ret < 0)
+ return ret;
+
+ if (!tb[TCA_ACT_BPF_PARMS] ||
+ !tb[TCA_ACT_BPF_OPS_LEN] || !tb[TCA_ACT_BPF_OPS])
+ return -EINVAL;
+ parm = nla_data(tb[TCA_ACT_BPF_PARMS]);
+
+ bpf_num_ops = nla_get_u16(tb[TCA_ACT_BPF_OPS_LEN]);
+ if (bpf_num_ops > BPF_MAXINSNS || bpf_num_ops == 0)
+ return -EINVAL;
+
+ bpf_size = bpf_num_ops * sizeof(*bpf_ops);
+ if (bpf_size != nla_len(tb[TCA_ACT_BPF_OPS]))
+ return -EINVAL;
+
+ bpf_ops = kzalloc(bpf_size, GFP_KERNEL);
+ if (!bpf_ops)
+ return -ENOMEM;
+
+ memcpy(bpf_ops, nla_data(tb[TCA_ACT_BPF_OPS]), bpf_size);
+
+ tmp.len = bpf_num_ops;
+ tmp.filter = bpf_ops;
+
+ ret = bpf_prog_create(&fp, &tmp);
+ if (ret)
+ goto free_bpf_ops;
+
+ if (!tcf_hash_check(parm->index, a, bind)) {
+ ret = tcf_hash_create(parm->index, est, a, sizeof(*b), bind);
+ if (ret)
+ goto destroy_fp;
+
+ ret = ACT_P_CREATED;
+ } else {
+ if (bind)
+ goto destroy_fp;
+ tcf_hash_release(a, bind);
+ if (!ovr) {
+ ret = -EEXIST;
+ goto destroy_fp;
+ }
+ }
+
+ b = to_bpf(a);
+ spin_lock_bh(&b->tcf_lock);
+ b->tcf_action = parm->action;
+ b->bpf_num_ops = bpf_num_ops;
+ b->bpf_ops = bpf_ops;
+ b->filter = fp;
+ spin_unlock_bh(&b->tcf_lock);
+
+ if (ret == ACT_P_CREATED)
+ tcf_hash_insert(a);
+ return ret;
+
+destroy_fp:
+ bpf_prog_destroy(fp);
+free_bpf_ops:
+ kfree(bpf_ops);
+ return ret;
+}
+
+static void tcf_bpf_cleanup(struct tc_action *a, int bind)
+{
+ struct tcf_bpf *b = a->priv;
+
+ bpf_prog_destroy(b->filter);
+}
+
+static struct tc_action_ops act_bpf_ops = {
+ .kind = "bpf",
+ .type = TCA_ACT_BPF,
+ .owner = THIS_MODULE,
+ .act = tcf_bpf,
+ .dump = tcf_bpf_dump,
+ .cleanup = tcf_bpf_cleanup,
+ .init = tcf_bpf_init,
+};
+
+static int __init bpf_init_module(void)
+{
+ return tcf_register_action(&act_bpf_ops, BPF_TAB_MASK);
+}
+
+static void __exit bpf_cleanup_module(void)
+{
+ tcf_unregister_action(&act_bpf_ops);
+}
+
+module_init(bpf_init_module);
+module_exit(bpf_cleanup_module);
+
+MODULE_AUTHOR("Jiri Pirko <jiri@resnulli.us>");
+MODULE_DESCRIPTION("TC BPF based action");
+MODULE_LICENSE("GPL v2");
--- /dev/null
+/*
+ * net/sched/act_connmark.c netfilter connmark retriever action
+ * skb mark is overwritten
+ *
+ * Copyright (c) 2011 Felix Fietkau <nbd@openwrt.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+*/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/pkt_cls.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <net/netlink.h>
+#include <net/pkt_sched.h>
+#include <net/act_api.h>
+#include <uapi/linux/tc_act/tc_connmark.h>
+#include <net/tc_act/tc_connmark.h>
+
+#include <net/netfilter/nf_conntrack.h>
+#include <net/netfilter/nf_conntrack_core.h>
+#include <net/netfilter/nf_conntrack_zones.h>
+
+#define CONNMARK_TAB_MASK 3
+
+static int tcf_connmark(struct sk_buff *skb, const struct tc_action *a,
+ struct tcf_result *res)
+{
+ const struct nf_conntrack_tuple_hash *thash;
+ struct nf_conntrack_tuple tuple;
+ enum ip_conntrack_info ctinfo;
+ struct tcf_connmark_info *ca = a->priv;
+ struct nf_conn *c;
+ int proto;
+
+ spin_lock(&ca->tcf_lock);
+ ca->tcf_tm.lastuse = jiffies;
+ bstats_update(&ca->tcf_bstats, skb);
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ if (skb->len < sizeof(struct iphdr))
+ goto out;
+
+ proto = NFPROTO_IPV4;
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ if (skb->len < sizeof(struct ipv6hdr))
+ goto out;
+
+ proto = NFPROTO_IPV6;
+ } else {
+ goto out;
+ }
+
+ c = nf_ct_get(skb, &ctinfo);
+ if (c) {
+ skb->mark = c->mark;
+ /* using overlimits stats to count how many packets were marked */
+ ca->tcf_qstats.overlimits++;
+ nf_ct_put(c);
+ goto out;
+ }
+
+ if (!nf_ct_get_tuplepr(skb, skb_network_offset(skb),
+ proto, &tuple))
+ goto out;
+
+ thash = nf_conntrack_find_get(dev_net(skb->dev), ca->zone, &tuple);
+ if (!thash)
+ goto out;
+
+ c = nf_ct_tuplehash_to_ctrack(thash);
+ /* using overlimits stats to count how many packets were marked */
+ ca->tcf_qstats.overlimits++;
+ skb->mark = c->mark;
+ nf_ct_put(c);
+
+out:
+ skb->nfct = NULL;
+ spin_unlock(&ca->tcf_lock);
+ return ca->tcf_action;
+}
+
+static const struct nla_policy connmark_policy[TCA_CONNMARK_MAX + 1] = {
+ [TCA_CONNMARK_PARMS] = { .len = sizeof(struct tc_connmark) },
+};
+
+static int tcf_connmark_init(struct net *net, struct nlattr *nla,
+ struct nlattr *est, struct tc_action *a,
+ int ovr, int bind)
+{
+ struct nlattr *tb[TCA_CONNMARK_MAX + 1];
+ struct tcf_connmark_info *ci;
+ struct tc_connmark *parm;
+ int ret = 0;
+
+ if (!nla)
+ return -EINVAL;
+
+ ret = nla_parse_nested(tb, TCA_CONNMARK_MAX, nla, connmark_policy);
+ if (ret < 0)
+ return ret;
+
+ parm = nla_data(tb[TCA_CONNMARK_PARMS]);
+
+ if (!tcf_hash_check(parm->index, a, bind)) {
+ ret = tcf_hash_create(parm->index, est, a, sizeof(*ci), bind);
+ if (ret)
+ return ret;
+
+ ci = to_connmark(a);
+ ci->tcf_action = parm->action;
+ ci->zone = parm->zone;
+
+ tcf_hash_insert(a);
+ ret = ACT_P_CREATED;
+ } else {
+ ci = to_connmark(a);
+ if (bind)
+ return 0;
+ tcf_hash_release(a, bind);
+ if (!ovr)
+ return -EEXIST;
+ /* replacing action and zone */
+ ci->tcf_action = parm->action;
+ ci->zone = parm->zone;
+ }
+
+ return ret;
+}
+
+static inline int tcf_connmark_dump(struct sk_buff *skb, struct tc_action *a,
+ int bind, int ref)
+{
+ unsigned char *b = skb_tail_pointer(skb);
+ struct tcf_connmark_info *ci = a->priv;
+
+ struct tc_connmark opt = {
+ .index = ci->tcf_index,
+ .refcnt = ci->tcf_refcnt - ref,
+ .bindcnt = ci->tcf_bindcnt - bind,
+ .action = ci->tcf_action,
+ .zone = ci->zone,
+ };
+ struct tcf_t t;
+
+ if (nla_put(skb, TCA_CONNMARK_PARMS, sizeof(opt), &opt))
+ goto nla_put_failure;
+
+ t.install = jiffies_to_clock_t(jiffies - ci->tcf_tm.install);
+ t.lastuse = jiffies_to_clock_t(jiffies - ci->tcf_tm.lastuse);
+ t.expires = jiffies_to_clock_t(ci->tcf_tm.expires);
+ if (nla_put(skb, TCA_CONNMARK_TM, sizeof(t), &t))
+ goto nla_put_failure;
+
+ return skb->len;
+nla_put_failure:
+ nlmsg_trim(skb, b);
+ return -1;
+}
+
+static struct tc_action_ops act_connmark_ops = {
+ .kind = "connmark",
+ .type = TCA_ACT_CONNMARK,
+ .owner = THIS_MODULE,
+ .act = tcf_connmark,
+ .dump = tcf_connmark_dump,
+ .init = tcf_connmark_init,
+};
+
+static int __init connmark_init_module(void)
+{
+ return tcf_register_action(&act_connmark_ops, CONNMARK_TAB_MASK);
+}
+
+static void __exit connmark_cleanup_module(void)
+{
+ tcf_unregister_action(&act_connmark_ops);
+}
+
+module_init(connmark_init_module);
+module_exit(connmark_cleanup_module);
+MODULE_AUTHOR("Felix Fietkau <nbd@openwrt.org>");
+MODULE_DESCRIPTION("Connection tracking mark restoring");
+MODULE_LICENSE("GPL");
+
if (unlikely(action == TC_ACT_SHOT))
goto drop;
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case cpu_to_be16(ETH_P_IP):
if (!tcf_csum_ipv4(skb, update_flags))
goto drop;
if (head == NULL)
return 0UL;
- list_for_each_entry(f, &head->flist, link)
- if (f->handle == handle)
+ list_for_each_entry(f, &head->flist, link) {
+ if (f->handle == handle) {
l = (unsigned long) f;
+ break;
+ }
+ }
return l;
}
struct tcf_result res;
struct list_head link;
u32 handle;
- u16 bpf_len;
+ u16 bpf_num_ops;
struct tcf_proto *tp;
struct rcu_head rcu;
};
struct tcf_exts exts;
struct sock_fprog_kern tmp;
struct bpf_prog *fp;
- u16 bpf_size, bpf_len;
+ u16 bpf_size, bpf_num_ops;
u32 classid;
int ret;
return ret;
classid = nla_get_u32(tb[TCA_BPF_CLASSID]);
- bpf_len = nla_get_u16(tb[TCA_BPF_OPS_LEN]);
- if (bpf_len > BPF_MAXINSNS || bpf_len == 0) {
+ bpf_num_ops = nla_get_u16(tb[TCA_BPF_OPS_LEN]);
+ if (bpf_num_ops > BPF_MAXINSNS || bpf_num_ops == 0) {
ret = -EINVAL;
goto errout;
}
- bpf_size = bpf_len * sizeof(*bpf_ops);
+ bpf_size = bpf_num_ops * sizeof(*bpf_ops);
bpf_ops = kzalloc(bpf_size, GFP_KERNEL);
if (bpf_ops == NULL) {
ret = -ENOMEM;
memcpy(bpf_ops, nla_data(tb[TCA_BPF_OPS]), bpf_size);
- tmp.len = bpf_len;
+ tmp.len = bpf_num_ops;
tmp.filter = bpf_ops;
ret = bpf_prog_create(&fp, &tmp);
if (ret)
goto errout_free;
- prog->bpf_len = bpf_len;
+ prog->bpf_num_ops = bpf_num_ops;
prog->bpf_ops = bpf_ops;
prog->filter = fp;
prog->res.classid = classid;
if (nla_put_u32(skb, TCA_BPF_CLASSID, prog->res.classid))
goto nla_put_failure;
- if (nla_put_u16(skb, TCA_BPF_OPS_LEN, prog->bpf_len))
+ if (nla_put_u16(skb, TCA_BPF_OPS_LEN, prog->bpf_num_ops))
goto nla_put_failure;
- nla = nla_reserve(skb, TCA_BPF_OPS, prog->bpf_len *
+ nla = nla_reserve(skb, TCA_BPF_OPS, prog->bpf_num_ops *
sizeof(struct sock_filter));
if (nla == NULL)
goto nla_put_failure;
{
if (flow->dst)
return ntohl(flow->dst);
- return addr_fold(skb_dst(skb)) ^ (__force u16)skb->protocol;
+ return addr_fold(skb_dst(skb)) ^ (__force u16) tc_skb_protocol(skb);
}
static u32 flow_get_proto(const struct sk_buff *skb, const struct flow_keys *flow)
if (flow->ports)
return ntohs(flow->port16[1]);
- return addr_fold(skb_dst(skb)) ^ (__force u16)skb->protocol;
+ return addr_fold(skb_dst(skb)) ^ (__force u16) tc_skb_protocol(skb);
}
static u32 flow_get_iif(const struct sk_buff *skb)
static u32 flow_get_nfct_src(const struct sk_buff *skb, const struct flow_keys *flow)
{
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case htons(ETH_P_IP):
return ntohl(CTTUPLE(skb, src.u3.ip));
case htons(ETH_P_IPV6):
static u32 flow_get_nfct_dst(const struct sk_buff *skb, const struct flow_keys *flow)
{
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case htons(ETH_P_IP):
return ntohl(CTTUPLE(skb, dst.u3.ip));
case htons(ETH_P_IPV6):
struct net_device *dev, *indev = NULL;
int ret, network_offset;
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case htons(ETH_P_IP):
acpar.family = NFPROTO_IPV4;
if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
{
unsigned short tag;
- tag = vlan_tx_tag_get(skb);
+ tag = skb_vlan_tag_get(skb);
if (!tag && __vlan_get_tag(skb, &tag))
*err = -1;
else
META_COLLECTOR(int_protocol)
{
/* Let userspace take care of the byte ordering */
- dst->value = skb->protocol;
+ dst->value = tc_skb_protocol(skb);
}
META_COLLECTOR(int_pkttype)
int tc_classify_compat(struct sk_buff *skb, const struct tcf_proto *tp,
struct tcf_result *res)
{
- __be16 protocol = skb->protocol;
+ __be16 protocol = tc_skb_protocol(skb);
int err;
for (; tp; tp = rcu_dereference_bh(tp->next)) {
pr_debug("%s(skb %p,sch %p,[qdisc %p])\n", __func__, skb, sch, p);
if (p->set_tc_index) {
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case htons(ETH_P_IP):
if (skb_cow_head(skb, sizeof(struct iphdr)))
goto drop;
index = skb->tc_index & (p->indices - 1);
pr_debug("index %d->%d\n", skb->tc_index, index);
- switch (skb->protocol) {
+ switch (tc_skb_protocol(skb)) {
case htons(ETH_P_IP):
ipv4_change_dsfield(ip_hdr(skb), p->mask[index],
p->value[index]);
*/
if (p->mask[index] != 0xff || p->value[index])
pr_warn("%s: unsupported protocol %d\n",
- __func__, ntohs(skb->protocol));
+ __func__, ntohs(tc_skb_protocol(skb)));
break;
}
return NULL;
}
-static inline void
-teql_neigh_release(struct neighbour *n)
-{
- if (n)
- neigh_release(n);
-}
-
static void
teql_reset(struct Qdisc *sch)
{
char haddr[MAX_ADDR_LEN];
neigh_ha_snapshot(haddr, n, dev);
- err = dev_hard_header(skb, dev, ntohs(skb->protocol), haddr,
- NULL, skb->len);
+ err = dev_hard_header(skb, dev, ntohs(tc_skb_protocol(skb)),
+ haddr, NULL, skb->len);
if (err < 0)
err = -EINVAL;
unsigned long nr_segs)
{
struct socket *sock = file->private_data;
- size_t size = 0;
- int i;
-
- for (i = 0; i < nr_segs; i++)
- size += iov[i].iov_len;
msg->msg_name = NULL;
msg->msg_namelen = 0;
msg->msg_control = NULL;
msg->msg_controllen = 0;
- iov_iter_init(&msg->msg_iter, READ, iov, nr_segs, size);
+ iov_iter_init(&msg->msg_iter, READ, iov, nr_segs, iocb->ki_nbytes);
msg->msg_flags = (file->f_flags & O_NONBLOCK) ? MSG_DONTWAIT : 0;
- return __sock_recvmsg(iocb, sock, msg, size, msg->msg_flags);
+ return __sock_recvmsg(iocb, sock, msg, iocb->ki_nbytes, msg->msg_flags);
}
static ssize_t sock_aio_read(struct kiocb *iocb, const struct iovec *iov,
unsigned long nr_segs)
{
struct socket *sock = file->private_data;
- size_t size = 0;
- int i;
-
- for (i = 0; i < nr_segs; i++)
- size += iov[i].iov_len;
msg->msg_name = NULL;
msg->msg_namelen = 0;
msg->msg_control = NULL;
msg->msg_controllen = 0;
- iov_iter_init(&msg->msg_iter, WRITE, iov, nr_segs, size);
+ iov_iter_init(&msg->msg_iter, WRITE, iov, nr_segs, iocb->ki_nbytes);
msg->msg_flags = (file->f_flags & O_NONBLOCK) ? MSG_DONTWAIT : 0;
if (sock->type == SOCK_SEQPACKET)
msg->msg_flags |= MSG_EOR;
- return __sock_sendmsg(iocb, sock, msg, size);
+ return __sock_sendmsg(iocb, sock, msg, iocb->ki_nbytes);
}
static ssize_t sock_aio_write(struct kiocb *iocb, const struct iovec *iov,
struct kvec *head = buf->head;
struct kvec *tail = buf->tail;
int fraglen;
- int new, old;
+ int new;
if (len > buf->len) {
WARN_ON_ONCE(1);
buf->len -= fraglen;
new = buf->page_base + buf->page_len;
- old = new + fraglen;
- xdr->page_ptr -= (old >> PAGE_SHIFT) - (new >> PAGE_SHIFT);
+
+ xdr->page_ptr = buf->pages + (new >> PAGE_SHIFT);
if (buf->page_len) {
xdr->p = page_address(*xdr->page_ptr);
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/init.h>
+#include <linux/mutex.h>
+#include <linux/notifier.h>
#include <linux/netdevice.h>
#include <net/switchdev.h>
return ops->ndo_switch_port_stp_update(dev, state);
}
EXPORT_SYMBOL(netdev_switch_port_stp_update);
+
+static DEFINE_MUTEX(netdev_switch_mutex);
+static RAW_NOTIFIER_HEAD(netdev_switch_notif_chain);
+
+/**
+ * register_netdev_switch_notifier - Register notifier
+ * @nb: notifier_block
+ *
+ * Register a switch device notifier. This should be used by code
+ * which needs to monitor events happening to a particular device.
+ * Return values are the same as for atomic_notifier_chain_register().
+ */
+int register_netdev_switch_notifier(struct notifier_block *nb)
+{
+ int err;
+
+ mutex_lock(&netdev_switch_mutex);
+ err = raw_notifier_chain_register(&netdev_switch_notif_chain, nb);
+ mutex_unlock(&netdev_switch_mutex);
+ return err;
+}
+EXPORT_SYMBOL(register_netdev_switch_notifier);
+
+/**
+ * unregister_netdev_switch_notifier - Unregister notifier
+ * @nb: notifier_block
+ *
+ * Unregister a switch device notifier.
+ * Return values are the same as for atomic_notifier_chain_unregister().
+ */
+int unregister_netdev_switch_notifier(struct notifier_block *nb)
+{
+ int err;
+
+ mutex_lock(&netdev_switch_mutex);
+ err = raw_notifier_chain_unregister(&netdev_switch_notif_chain, nb);
+ mutex_unlock(&netdev_switch_mutex);
+ return err;
+}
+EXPORT_SYMBOL(unregister_netdev_switch_notifier);
+
+/**
+ * call_netdev_switch_notifiers - Call notifiers
+ * @val: value passed unmodified to notifier function
+ * @dev: port device
+ * @info: notifier information data
+ *
+ * Call all switch device notifier blocks. This should be called by a
+ * driver when it needs to propagate a hardware event.
+ * Return values are the same as for atomic_notifier_call_chain().
+ */
+int call_netdev_switch_notifiers(unsigned long val, struct net_device *dev,
+ struct netdev_switch_notifier_info *info)
+{
+ int err;
+
+ info->dev = dev;
+ mutex_lock(&netdev_switch_mutex);
+ err = raw_notifier_call_chain(&netdev_switch_notif_chain, val, info);
+ mutex_unlock(&netdev_switch_mutex);
+ return err;
+}
+EXPORT_SYMBOL(call_netdev_switch_notifiers);
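A hedged sketch of a consumer of the new chain; only info->dev is filled in by the code above, while the event value and any payload beyond it are left to whoever calls call_netdev_switch_notifiers():

    static int example_switch_event(struct notifier_block *nb,
                                    unsigned long event, void *ptr)
    {
            struct netdev_switch_notifier_info *info = ptr;

            pr_debug("switchdev event %lu on %s\n", event, info->dev->name);
            return NOTIFY_DONE;
    }

    static struct notifier_block example_nb = {
            .notifier_call = example_switch_event,
    };

    /* register_netdev_switch_notifier(&example_nb) at probe time,
     * unregister_netdev_switch_notifier(&example_nb) on teardown.
     */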
If in doubt, say N.
-config TIPC_PORTS
- int "Maximum number of ports in a node"
- depends on TIPC
- range 127 65535
- default "8191"
- help
- Specifies how many ports can be supported by a node.
- Can range from 127 to 65535 ports; default is 8191.
-
- Setting this to a smaller value saves some memory,
- setting it to higher allows for more ports.
-
config TIPC_MEDIA_IB
bool "InfiniBand media type support"
depends on TIPC && INFINIBAND_IPOIB
* POSSIBILITY OF SUCH DAMAGE.
*/
-#include "core.h"
+#include <linux/kernel.h>
#include "addr.h"
+#include "core.h"
+
+/**
+ * in_own_cluster - test for cluster inclusion; <0.0.0> always matches
+ */
+int in_own_cluster(struct net *net, u32 addr)
+{
+ return in_own_cluster_exact(net, addr) || !addr;
+}
+
+int in_own_cluster_exact(struct net *net, u32 addr)
+{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ return !((addr ^ tn->own_addr) >> 12);
+}
+
+/**
+ * in_own_node - test for node inclusion; <0.0.0> always matches
+ */
+int in_own_node(struct net *net, u32 addr)
+{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ return (addr == tn->own_addr) || !addr;
+}
+
+/**
+ * addr_domain - convert 2-bit scope value to equivalent message lookup domain
+ *
+ * Needed when the address of a named message must be looked up a second
+ * time after a network hop.
+ */
+u32 addr_domain(struct net *net, u32 sc)
+{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ if (likely(sc == TIPC_NODE_SCOPE))
+ return tn->own_addr;
+ if (sc == TIPC_CLUSTER_SCOPE)
+ return tipc_cluster_mask(tn->own_addr);
+ return tipc_zone_mask(tn->own_addr);
+}
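For the mask arithmetic above: a TIPC address <z.c.n> packs as (z << 24) | (c << 12) | n, so (addr ^ own_addr) >> 12 is zero exactly when zone and cluster match, which is what in_own_cluster_exact() tests. A worked example with illustrative values:

    u32 a = (1 << 24) | (1 << 12) | 8;      /* <1.1.8> = 0x01001008 */
    u32 b = (1 << 24) | (1 << 12) | 9;      /* <1.1.9> = 0x01001009 */
    u32 c = (1 << 24) | (2 << 12) | 8;      /* <1.2.8> = 0x01002008 */

    /* (a ^ b) >> 12 == 0: same zone and cluster.
     * (a ^ c) >> 12 == 3: different cluster, the test fails.
     */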
/**
* tipc_addr_domain_valid - validates a network domain address
#ifndef _TIPC_ADDR_H
#define _TIPC_ADDR_H
-#include "core.h"
+#include <linux/types.h>
+#include <linux/tipc.h>
+#include <net/net_namespace.h>
+#include <net/netns/generic.h>
#define TIPC_ZONE_MASK 0xff000000u
#define TIPC_CLUSTER_MASK 0xfffff000u
return addr & TIPC_CLUSTER_MASK;
}
-static inline int in_own_cluster_exact(u32 addr)
-{
- return !((addr ^ tipc_own_addr) >> 12);
-}
-
-/**
- * in_own_node - test for node inclusion; <0.0.0> always matches
- */
-static inline int in_own_node(u32 addr)
-{
- return (addr == tipc_own_addr) || !addr;
-}
-
-/**
- * in_own_cluster - test for cluster inclusion; <0.0.0> always matches
- */
-static inline int in_own_cluster(u32 addr)
-{
- return in_own_cluster_exact(addr) || !addr;
-}
-
-/**
- * addr_domain - convert 2-bit scope value to equivalent message lookup domain
- *
- * Needed when address of a named message must be looked up a second time
- * after a network hop.
- */
-static inline u32 addr_domain(u32 sc)
-{
- if (likely(sc == TIPC_NODE_SCOPE))
- return tipc_own_addr;
- if (sc == TIPC_CLUSTER_SCOPE)
- return tipc_cluster_mask(tipc_own_addr);
- return tipc_zone_mask(tipc_own_addr);
-}
-
+int in_own_cluster(struct net *net, u32 addr);
+int in_own_cluster_exact(struct net *net, u32 addr);
+int in_own_node(struct net *net, u32 addr);
+u32 addr_domain(struct net *net, u32 sc);
int tipc_addr_domain_valid(u32);
int tipc_addr_node_valid(u32 addr);
int tipc_in_scope(u32 domain, u32 addr);
* POSSIBILITY OF SUCH DAMAGE.
*/
-#include "core.h"
-#include "link.h"
#include "socket.h"
#include "msg.h"
#include "bcast.h"
#include "name_distr.h"
+#include "core.h"
#define MAX_PKT_DEFAULT_MCAST 1500 /* bcast link max packet size (fixed) */
#define BCLINK_WIN_DEFAULT 20 /* bcast link window size (default) */
-#define BCBEARER MAX_BEARERS
-
-/**
- * struct tipc_bcbearer_pair - a pair of bearers used by broadcast link
- * @primary: pointer to primary bearer
- * @secondary: pointer to secondary bearer
- *
- * Bearers must have same priority and same set of reachable destinations
- * to be paired.
- */
-
-struct tipc_bcbearer_pair {
- struct tipc_bearer *primary;
- struct tipc_bearer *secondary;
-};
-
-/**
- * struct tipc_bcbearer - bearer used by broadcast link
- * @bearer: (non-standard) broadcast bearer structure
- * @media: (non-standard) broadcast media structure
- * @bpairs: array of bearer pairs
- * @bpairs_temp: temporary array of bearer pairs used by tipc_bcbearer_sort()
- * @remains: temporary node map used by tipc_bcbearer_send()
- * @remains_new: temporary node map used tipc_bcbearer_send()
- *
- * Note: The fields labelled "temporary" are incorporated into the bearer
- * to avoid consuming potentially limited stack space through the use of
- * large local variables within multicast routines. Concurrent access is
- * prevented through use of the spinlock "bclink_lock".
- */
-struct tipc_bcbearer {
- struct tipc_bearer bearer;
- struct tipc_media media;
- struct tipc_bcbearer_pair bpairs[MAX_BEARERS];
- struct tipc_bcbearer_pair bpairs_temp[TIPC_MAX_LINK_PRI + 1];
- struct tipc_node_map remains;
- struct tipc_node_map remains_new;
-};
-
-/**
- * struct tipc_bclink - link used for broadcast messages
- * @lock: spinlock governing access to structure
- * @link: (non-standard) broadcast link structure
- * @node: (non-standard) node structure representing b'cast link's peer node
- * @flags: represent bclink states
- * @bcast_nodes: map of broadcast-capable nodes
- * @retransmit_to: node that most recently requested a retransmit
- *
- * Handles sequence numbering, fragmentation, bundling, etc.
- */
-struct tipc_bclink {
- spinlock_t lock;
- struct tipc_link link;
- struct tipc_node node;
- unsigned int flags;
- struct tipc_node_map bcast_nodes;
- struct tipc_node *retransmit_to;
-};
-
-static struct tipc_bcbearer *bcbearer;
-static struct tipc_bclink *bclink;
-static struct tipc_link *bcl;
const char tipc_bclink_name[] = "broadcast-link";
static void tipc_nmap_add(struct tipc_node_map *nm_ptr, u32 node);
static void tipc_nmap_remove(struct tipc_node_map *nm_ptr, u32 node);
-static void tipc_bclink_lock(void)
+static void tipc_bclink_lock(struct net *net)
{
- spin_lock_bh(&bclink->lock);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ spin_lock_bh(&tn->bclink->lock);
}
-static void tipc_bclink_unlock(void)
+static void tipc_bclink_unlock(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *node = NULL;
- if (likely(!bclink->flags)) {
- spin_unlock_bh(&bclink->lock);
+ if (likely(!tn->bclink->flags)) {
+ spin_unlock_bh(&tn->bclink->lock);
return;
}
- if (bclink->flags & TIPC_BCLINK_RESET) {
- bclink->flags &= ~TIPC_BCLINK_RESET;
- node = tipc_bclink_retransmit_to();
+ if (tn->bclink->flags & TIPC_BCLINK_RESET) {
+ tn->bclink->flags &= ~TIPC_BCLINK_RESET;
+ node = tipc_bclink_retransmit_to(net);
}
- spin_unlock_bh(&bclink->lock);
+ spin_unlock_bh(&tn->bclink->lock);
if (node)
tipc_link_reset_all(node);
return MAX_PKT_DEFAULT_MCAST;
}
-void tipc_bclink_set_flags(unsigned int flags)
+void tipc_bclink_set_flags(struct net *net, unsigned int flags)
{
- bclink->flags |= flags;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ tn->bclink->flags |= flags;
}
static u32 bcbuf_acks(struct sk_buff *buf)
bcbuf_set_acks(buf, bcbuf_acks(buf) - 1);
}
-void tipc_bclink_add_node(u32 addr)
+void tipc_bclink_add_node(struct net *net, u32 addr)
{
- tipc_bclink_lock();
- tipc_nmap_add(&bclink->bcast_nodes, addr);
- tipc_bclink_unlock();
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ tipc_bclink_lock(net);
+ tipc_nmap_add(&tn->bclink->bcast_nodes, addr);
+ tipc_bclink_unlock(net);
}
-void tipc_bclink_remove_node(u32 addr)
+void tipc_bclink_remove_node(struct net *net, u32 addr)
{
- tipc_bclink_lock();
- tipc_nmap_remove(&bclink->bcast_nodes, addr);
- tipc_bclink_unlock();
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ tipc_bclink_lock(net);
+ tipc_nmap_remove(&tn->bclink->bcast_nodes, addr);
+ tipc_bclink_unlock(net);
}
-static void bclink_set_last_sent(void)
+static void bclink_set_last_sent(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
+
if (bcl->next_out)
bcl->fsm_msg_cnt = mod(buf_seqno(bcl->next_out) - 1);
else
bcl->fsm_msg_cnt = mod(bcl->next_out_no - 1);
}
-u32 tipc_bclink_get_last_sent(void)
+u32 tipc_bclink_get_last_sent(struct net *net)
{
- return bcl->fsm_msg_cnt;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ return tn->bcl->fsm_msg_cnt;
}
static void bclink_update_last_sent(struct tipc_node *node, u32 seqno)
*
* Called with bclink_lock locked
*/
-struct tipc_node *tipc_bclink_retransmit_to(void)
+struct tipc_node *tipc_bclink_retransmit_to(struct net *net)
{
- return bclink->retransmit_to;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ return tn->bclink->retransmit_to;
}
/**
*
* Called with bclink_lock locked
*/
-static void bclink_retransmit_pkt(u32 after, u32 to)
+static void bclink_retransmit_pkt(struct tipc_net *tn, u32 after, u32 to)
{
struct sk_buff *skb;
+ struct tipc_link *bcl = tn->bcl;
skb_queue_walk(&bcl->outqueue, skb) {
- if (more(buf_seqno(skb), after))
+ if (more(buf_seqno(skb), after)) {
+ tipc_link_retransmit(bcl, skb, mod(to - after));
break;
+ }
}
- tipc_link_retransmit(bcl, skb, mod(to - after));
}
/**
*
* Called with no locks taken
*/
-void tipc_bclink_wakeup_users(void)
+void tipc_bclink_wakeup_users(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *skb;
- while ((skb = skb_dequeue(&bclink->link.waiting_sks)))
- tipc_sk_rcv(skb);
-
+ while ((skb = skb_dequeue(&tn->bclink->link.waiting_sks)))
+ tipc_sk_rcv(net, skb);
}
/**
struct sk_buff *skb, *tmp;
struct sk_buff *next;
unsigned int released = 0;
+ struct net *net = n_ptr->net;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
/* Bail out if tx queue is empty (no clean up is required) */
- skb = skb_peek(&bcl->outqueue);
+ skb = skb_peek(&tn->bcl->outqueue);
if (!skb)
goto exit;
* acknowledge sent messages only (if other nodes still exist)
* or both sent and unsent messages (otherwise)
*/
- if (bclink->bcast_nodes.count)
- acked = bcl->fsm_msg_cnt;
+ if (tn->bclink->bcast_nodes.count)
+ acked = tn->bcl->fsm_msg_cnt;
else
- acked = bcl->next_out_no;
+ acked = tn->bcl->next_out_no;
} else {
/*
* Bail out if specified sequence number does not correspond
* to a message that has been sent and not yet acknowledged
*/
if (less(acked, buf_seqno(skb)) ||
- less(bcl->fsm_msg_cnt, acked) ||
+ less(tn->bcl->fsm_msg_cnt, acked) ||
less_eq(acked, n_ptr->bclink.acked))
goto exit;
}
/* Skip over packets that node has previously acknowledged */
- skb_queue_walk(&bcl->outqueue, skb) {
+ skb_queue_walk(&tn->bcl->outqueue, skb) {
if (more(buf_seqno(skb), n_ptr->bclink.acked))
break;
}
/* Update packets that node is now acknowledging */
- skb_queue_walk_from_safe(&bcl->outqueue, skb, tmp) {
+ skb_queue_walk_from_safe(&tn->bcl->outqueue, skb, tmp) {
if (more(buf_seqno(skb), acked))
break;
- next = tipc_skb_queue_next(&bcl->outqueue, skb);
- if (skb != bcl->next_out) {
+ next = tipc_skb_queue_next(&tn->bcl->outqueue, skb);
+ if (skb != tn->bcl->next_out) {
bcbuf_decr_acks(skb);
} else {
bcbuf_set_acks(skb, 0);
- bcl->next_out = next;
- bclink_set_last_sent();
+ tn->bcl->next_out = next;
+ bclink_set_last_sent(net);
}
if (bcbuf_acks(skb) == 0) {
- __skb_unlink(skb, &bcl->outqueue);
+ __skb_unlink(skb, &tn->bcl->outqueue);
kfree_skb(skb);
released = 1;
}
n_ptr->bclink.acked = acked;
/* Try resolving broadcast link congestion, if necessary */
- if (unlikely(bcl->next_out)) {
- tipc_link_push_packets(bcl);
- bclink_set_last_sent();
+ if (unlikely(tn->bcl->next_out)) {
+ tipc_link_push_packets(tn->bcl);
+ bclink_set_last_sent(net);
}
- if (unlikely(released && !skb_queue_empty(&bcl->waiting_sks)))
+ if (unlikely(released && !skb_queue_empty(&tn->bcl->waiting_sks)))
n_ptr->action_flags |= TIPC_WAKEUP_BCAST_USERS;
exit:
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
}
/**
*
* RCU and node lock set
*/
-void tipc_bclink_update_link_state(struct tipc_node *n_ptr, u32 last_sent)
+void tipc_bclink_update_link_state(struct net *net, struct tipc_node *n_ptr,
+ u32 last_sent)
{
struct sk_buff *buf;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
/* Ignore "stale" link state info */
if (less_eq(last_sent, n_ptr->bclink.last_in))
struct sk_buff *skb = skb_peek(&n_ptr->bclink.deferred_queue);
u32 to = skb ? buf_seqno(skb) - 1 : n_ptr->bclink.last_sent;
- tipc_msg_init(msg, BCAST_PROTOCOL, STATE_MSG,
+ tipc_msg_init(net, msg, BCAST_PROTOCOL, STATE_MSG,
INT_H_SIZE, n_ptr->addr);
msg_set_non_seq(msg, 1);
- msg_set_mc_netid(msg, tipc_net_id);
+ msg_set_mc_netid(msg, tn->net_id);
msg_set_bcast_ack(msg, n_ptr->bclink.last_in);
msg_set_bcgap_after(msg, n_ptr->bclink.last_in);
msg_set_bcgap_to(msg, to);
- tipc_bclink_lock();
- tipc_bearer_send(MAX_BEARERS, buf, NULL);
- bcl->stats.sent_nacks++;
- tipc_bclink_unlock();
+ tipc_bclink_lock(net);
+ tipc_bearer_send(net, MAX_BEARERS, buf, NULL);
+ tn->bcl->stats.sent_nacks++;
+ tipc_bclink_unlock(net);
kfree_skb(buf);
n_ptr->bclink.oos_state++;
* Delay any upcoming NACK by this node if another node has already
* requested the first message this node is going to ask for.
*/
-static void bclink_peek_nack(struct tipc_msg *msg)
+static void bclink_peek_nack(struct net *net, struct tipc_msg *msg)
{
- struct tipc_node *n_ptr = tipc_node_find(msg_destnode(msg));
+ struct tipc_node *n_ptr = tipc_node_find(net, msg_destnode(msg));
if (unlikely(!n_ptr))
return;
/* tipc_bclink_xmit - broadcast buffer chain to all nodes in cluster
* and to identified node local sockets
+ * @net: the applicable net namespace
* @list: chain of buffers containing message
* Consumes the buffer chain, except when returning -ELINKCONG
* Returns 0 if success, otherwise errno: -ELINKCONG,-EHOSTUNREACH,-EMSGSIZE
*/
-int tipc_bclink_xmit(struct sk_buff_head *list)
+int tipc_bclink_xmit(struct net *net, struct sk_buff_head *list)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
+ struct tipc_bclink *bclink = tn->bclink;
int rc = 0;
int bc = 0;
struct sk_buff *skb;
/* Broadcast to all other nodes */
if (likely(bclink)) {
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
if (likely(bclink->bcast_nodes.count)) {
- rc = __tipc_link_xmit(bcl, list);
+ rc = __tipc_link_xmit(net, bcl, list);
if (likely(!rc)) {
u32 len = skb_queue_len(&bcl->outqueue);
- bclink_set_last_sent();
+ bclink_set_last_sent(net);
bcl->stats.queue_sz_counts++;
bcl->stats.accu_queue_sz += len;
}
bc = 1;
}
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
}
if (unlikely(!bc))
/* Deliver message clone */
if (likely(!rc))
- tipc_sk_mcast_rcv(skb);
+ tipc_sk_mcast_rcv(net, skb);
else
kfree_skb(skb);
*/
static void bclink_accept_pkt(struct tipc_node *node, u32 seqno)
{
+ struct tipc_net *tn = net_generic(node->net, tipc_net_id);
+
bclink_update_last_sent(node, seqno);
node->bclink.last_in = seqno;
node->bclink.oos_state = 0;
- bcl->stats.recv_info++;
+ tn->bcl->stats.recv_info++;
/*
* Unicast an ACK periodically, ensuring that
* all nodes in the cluster don't ACK at the same time
*/
- if (((seqno - tipc_own_addr) % TIPC_MIN_LINK_WIN) == 0) {
+ if (((seqno - tn->own_addr) % TIPC_MIN_LINK_WIN) == 0) {
tipc_link_proto_xmit(node->active_links[node->addr & 1],
STATE_MSG, 0, 0, 0, 0, 0);
- bcl->stats.sent_acks++;
+ tn->bcl->stats.sent_acks++;
}
}
*
* RCU is locked, no other locks set
*/
-void tipc_bclink_rcv(struct sk_buff *buf)
+void tipc_bclink_rcv(struct net *net, struct sk_buff *buf)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
struct tipc_msg *msg = buf_msg(buf);
struct tipc_node *node;
u32 next_in;
int deferred = 0;
/* Screen out unwanted broadcast messages */
- if (msg_mc_netid(msg) != tipc_net_id)
+ if (msg_mc_netid(msg) != tn->net_id)
goto exit;
- node = tipc_node_find(msg_prevnode(msg));
+ node = tipc_node_find(net, msg_prevnode(msg));
if (unlikely(!node))
goto exit;
if (unlikely(msg_user(msg) == BCAST_PROTOCOL)) {
if (msg_type(msg) != STATE_MSG)
goto unlock;
- if (msg_destnode(msg) == tipc_own_addr) {
+ if (msg_destnode(msg) == tn->own_addr) {
tipc_bclink_acknowledge(node, msg_bcast_ack(msg));
tipc_node_unlock(node);
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bcl->stats.recv_nacks++;
- bclink->retransmit_to = node;
- bclink_retransmit_pkt(msg_bcgap_after(msg),
+ tn->bclink->retransmit_to = node;
+ bclink_retransmit_pkt(tn, msg_bcgap_after(msg),
msg_bcgap_to(msg));
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
} else {
tipc_node_unlock(node);
- bclink_peek_nack(msg);
+ bclink_peek_nack(net, msg);
}
goto exit;
}
receive:
/* Deliver message to destination */
if (likely(msg_isdata(msg))) {
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bclink_accept_pkt(node, seqno);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
tipc_node_unlock(node);
if (likely(msg_mcast(msg)))
- tipc_sk_mcast_rcv(buf);
+ tipc_sk_mcast_rcv(net, buf);
else
kfree_skb(buf);
} else if (msg_user(msg) == MSG_BUNDLER) {
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bclink_accept_pkt(node, seqno);
bcl->stats.recv_bundles++;
bcl->stats.recv_bundled += msg_msgcnt(msg);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
tipc_node_unlock(node);
- tipc_link_bundle_rcv(buf);
+ tipc_link_bundle_rcv(net, buf);
} else if (msg_user(msg) == MSG_FRAGMENTER) {
tipc_buf_append(&node->bclink.reasm_buf, &buf);
if (unlikely(!buf && !node->bclink.reasm_buf))
goto unlock;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bclink_accept_pkt(node, seqno);
bcl->stats.recv_fragments++;
if (buf) {
bcl->stats.recv_fragmented++;
msg = buf_msg(buf);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
goto receive;
}
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
tipc_node_unlock(node);
} else if (msg_user(msg) == NAME_DISTRIBUTOR) {
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bclink_accept_pkt(node, seqno);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
tipc_node_unlock(node);
- tipc_named_rcv(buf);
+ tipc_named_rcv(net, buf);
} else {
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
bclink_accept_pkt(node, seqno);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
tipc_node_unlock(node);
kfree_skb(buf);
}
buf = NULL;
}
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
if (deferred)
bcl->stats.deferred_recv++;
else
bcl->stats.duplicates++;
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
unlock:
tipc_node_unlock(node);
u32 tipc_bclink_acks_missing(struct tipc_node *n_ptr)
{
return (n_ptr->bclink.recv_permitted &&
- (tipc_bclink_get_last_sent() != n_ptr->bclink.acked));
+ (tipc_bclink_get_last_sent(n_ptr->net) != n_ptr->bclink.acked));
}
* Returns 0 (packet sent successfully) under all circumstances,
* since the broadcast link's pseudo-bearer never blocks
*/
-static int tipc_bcbearer_send(struct sk_buff *buf, struct tipc_bearer *unused1,
+static int tipc_bcbearer_send(struct net *net, struct sk_buff *buf,
+ struct tipc_bearer *unused1,
struct tipc_media_addr *unused2)
{
int bp_index;
struct tipc_msg *msg = buf_msg(buf);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_bcbearer *bcbearer = tn->bcbearer;
+ struct tipc_bclink *bclink = tn->bclink;
/* Prepare broadcast link message for reliable transmission,
* if first time trying to send it;
if (likely(!msg_non_seq(buf_msg(buf)))) {
bcbuf_set_acks(buf, bclink->bcast_nodes.count);
msg_set_non_seq(msg, 1);
- msg_set_mc_netid(msg, tipc_net_id);
- bcl->stats.sent_info++;
+ msg_set_mc_netid(msg, tn->net_id);
+ tn->bcl->stats.sent_info++;
if (WARN_ON(!bclink->bcast_nodes.count)) {
dump_stack();
if (bp_index == 0) {
/* Use original buffer for first bearer */
- tipc_bearer_send(b->identity, buf, &b->bcast_addr);
+ tipc_bearer_send(net, b->identity, buf, &b->bcast_addr);
} else {
/* Avoid concurrent buffer access */
tbuf = pskb_copy_for_clone(buf, GFP_ATOMIC);
if (!tbuf)
break;
- tipc_bearer_send(b->identity, tbuf, &b->bcast_addr);
+ tipc_bearer_send(net, b->identity, tbuf,
+ &b->bcast_addr);
kfree_skb(tbuf); /* Bearer keeps a clone */
}
if (bcbearer->remains_new.count == 0)
/**
* tipc_bcbearer_sort - create sets of bearer pairs used by broadcast bearer
*/
-void tipc_bcbearer_sort(struct tipc_node_map *nm_ptr, u32 node, bool action)
+void tipc_bcbearer_sort(struct net *net, struct tipc_node_map *nm_ptr,
+ u32 node, bool action)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_bcbearer *bcbearer = tn->bcbearer;
struct tipc_bcbearer_pair *bp_temp = bcbearer->bpairs_temp;
struct tipc_bcbearer_pair *bp_curr;
struct tipc_bearer *b;
int b_index;
int pri;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
if (action)
tipc_nmap_add(nm_ptr, node);
rcu_read_lock();
for (b_index = 0; b_index < MAX_BEARERS; b_index++) {
- b = rcu_dereference_rtnl(bearer_list[b_index]);
+ b = rcu_dereference_rtnl(tn->bearer_list[b_index]);
if (!b || !b->nodes.count)
continue;
bp_curr++;
}
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
}
static int __tipc_nl_add_bc_link_stat(struct sk_buff *skb,
return -EMSGSIZE;
}
-int tipc_nl_add_bc_link(struct tipc_nl_msg *msg)
+int tipc_nl_add_bc_link(struct net *net, struct tipc_nl_msg *msg)
{
int err;
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
if (!bcl)
return 0;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_LINK_GET);
if (err)
goto attr_msg_full;
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
nla_nest_end(msg->skb, attrs);
genlmsg_end(msg->skb, hdr);
attr_msg_full:
nla_nest_cancel(msg->skb, attrs);
msg_full:
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
genlmsg_cancel(msg->skb, hdr);
return -EMSGSIZE;
}
-int tipc_bclink_stats(char *buf, const u32 buf_size)
+int tipc_bclink_stats(struct net *net, char *buf, const u32 buf_size)
{
int ret;
struct tipc_stats *s;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
if (!bcl)
return 0;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
s = &bcl->stats;
s->queue_sz_counts ?
(s->accu_queue_sz / s->queue_sz_counts) : 0);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
return ret;
}
-int tipc_bclink_reset_stats(void)
+int tipc_bclink_reset_stats(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
+
if (!bcl)
return -ENOPROTOOPT;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
memset(&bcl->stats, 0, sizeof(bcl->stats));
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
return 0;
}
-int tipc_bclink_set_queue_limits(u32 limit)
+int tipc_bclink_set_queue_limits(struct net *net, u32 limit)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_link *bcl = tn->bcl;
+
if (!bcl)
return -ENOPROTOOPT;
if ((limit < TIPC_MIN_LINK_WIN) || (limit > TIPC_MAX_LINK_WIN))
return -EINVAL;
- tipc_bclink_lock();
+ tipc_bclink_lock(net);
tipc_link_set_queue_limits(bcl, limit);
- tipc_bclink_unlock();
+ tipc_bclink_unlock(net);
return 0;
}
-int tipc_bclink_init(void)
+int tipc_bclink_init(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_bcbearer *bcbearer;
+ struct tipc_bclink *bclink;
+ struct tipc_link *bcl;
+
bcbearer = kzalloc(sizeof(*bcbearer), GFP_ATOMIC);
if (!bcbearer)
return -ENOMEM;
spin_lock_init(&bclink->node.lock);
__skb_queue_head_init(&bclink->node.waiting_sks);
bcl->owner = &bclink->node;
+ bcl->owner->net = net;
bcl->max_pkt = MAX_PKT_DEFAULT_MCAST;
tipc_link_set_queue_limits(bcl, BCLINK_WIN_DEFAULT);
bcl->bearer_id = MAX_BEARERS;
- rcu_assign_pointer(bearer_list[MAX_BEARERS], &bcbearer->bearer);
+ rcu_assign_pointer(tn->bearer_list[MAX_BEARERS], &bcbearer->bearer);
bcl->state = WORKING_WORKING;
strlcpy(bcl->name, tipc_bclink_name, TIPC_MAX_LINK_NAME);
+ tn->bcbearer = bcbearer;
+ tn->bclink = bclink;
+ tn->bcl = bcl;
return 0;
}
-void tipc_bclink_stop(void)
+void tipc_bclink_stop(struct net *net)
{
- tipc_bclink_lock();
- tipc_link_purge_queues(bcl);
- tipc_bclink_unlock();
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ tipc_bclink_lock(net);
+ tipc_link_purge_queues(tn->bcl);
+ tipc_bclink_unlock(net);
- RCU_INIT_POINTER(bearer_list[BCBEARER], NULL);
+ RCU_INIT_POINTER(tn->bearer_list[BCBEARER], NULL);
synchronize_net();
- kfree(bcbearer);
- kfree(bclink);
+ kfree(tn->bcbearer);
+ kfree(tn->bclink);
}
/**
#ifndef _TIPC_BCAST_H
#define _TIPC_BCAST_H
-#include "netlink.h"
+#include <linux/tipc_config.h>
+#include "link.h"
+#include "node.h"
-#define MAX_NODES 4096
-#define WSIZE 32
-#define TIPC_BCLINK_RESET 1
-
-/**
- * struct tipc_node_map - set of node identifiers
- * @count: # of nodes in set
- * @map: bitmap of node identifiers that are in the set
- */
-struct tipc_node_map {
- u32 count;
- u32 map[MAX_NODES / WSIZE];
-};
-
-#define PLSIZE 32
+#define TIPC_BCLINK_RESET 1
+#define PLSIZE 32
+#define BCBEARER MAX_BEARERS
/**
* struct tipc_port_list - set of node local destination ports
u32 ports[PLSIZE];
};
+/**
+ * struct tipc_bcbearer_pair - a pair of bearers used by broadcast link
+ * @primary: pointer to primary bearer
+ * @secondary: pointer to secondary bearer
+ *
+ * Bearers must have same priority and same set of reachable destinations
+ * to be paired.
+ */
-struct tipc_node;
+struct tipc_bcbearer_pair {
+ struct tipc_bearer *primary;
+ struct tipc_bearer *secondary;
+};
+/**
+ * struct tipc_bcbearer - bearer used by broadcast link
+ * @bearer: (non-standard) broadcast bearer structure
+ * @media: (non-standard) broadcast media structure
+ * @bpairs: array of bearer pairs
+ * @bpairs_temp: temporary array of bearer pairs used by tipc_bcbearer_sort()
+ * @remains: temporary node map used by tipc_bcbearer_send()
+ * @remains_new: temporary node map used by tipc_bcbearer_send()
+ *
+ * Note: The fields labelled "temporary" are incorporated into the bearer
+ * to avoid consuming potentially limited stack space through the use of
+ * large local variables within multicast routines. Concurrent access is
+ * prevented through use of the spinlock "bclink_lock".
+ */
+struct tipc_bcbearer {
+ struct tipc_bearer bearer;
+ struct tipc_media media;
+ struct tipc_bcbearer_pair bpairs[MAX_BEARERS];
+ struct tipc_bcbearer_pair bpairs_temp[TIPC_MAX_LINK_PRI + 1];
+ struct tipc_node_map remains;
+ struct tipc_node_map remains_new;
+};
+
+/**
+ * struct tipc_bclink - link used for broadcast messages
+ * @lock: spinlock governing access to structure
+ * @link: (non-standard) broadcast link structure
+ * @node: (non-standard) node structure representing b'cast link's peer node
+ * @flags: represent bclink states
+ * @bcast_nodes: map of broadcast-capable nodes
+ * @retransmit_to: node that most recently requested a retransmit
+ *
+ * Handles sequence numbering, fragmentation, bundling, etc.
+ */
+struct tipc_bclink {
+ spinlock_t lock;
+ struct tipc_link link;
+ struct tipc_node node;
+ unsigned int flags;
+ struct tipc_node_map bcast_nodes;
+ struct tipc_node *retransmit_to;
+};
+
+struct tipc_node;
extern const char tipc_bclink_name[];
/**
void tipc_port_list_add(struct tipc_port_list *pl_ptr, u32 port);
void tipc_port_list_free(struct tipc_port_list *pl_ptr);
-int tipc_bclink_init(void);
-void tipc_bclink_stop(void);
-void tipc_bclink_set_flags(unsigned int flags);
-void tipc_bclink_add_node(u32 addr);
-void tipc_bclink_remove_node(u32 addr);
-struct tipc_node *tipc_bclink_retransmit_to(void);
+int tipc_bclink_init(struct net *net);
+void tipc_bclink_stop(struct net *net);
+void tipc_bclink_set_flags(struct net *net, unsigned int flags);
+void tipc_bclink_add_node(struct net *net, u32 addr);
+void tipc_bclink_remove_node(struct net *net, u32 addr);
+struct tipc_node *tipc_bclink_retransmit_to(struct net *net);
void tipc_bclink_acknowledge(struct tipc_node *n_ptr, u32 acked);
-void tipc_bclink_rcv(struct sk_buff *buf);
-u32 tipc_bclink_get_last_sent(void);
+void tipc_bclink_rcv(struct net *net, struct sk_buff *buf);
+u32 tipc_bclink_get_last_sent(struct net *net);
u32 tipc_bclink_acks_missing(struct tipc_node *n_ptr);
-void tipc_bclink_update_link_state(struct tipc_node *n_ptr, u32 last_sent);
-int tipc_bclink_stats(char *stats_buf, const u32 buf_size);
-int tipc_bclink_reset_stats(void);
-int tipc_bclink_set_queue_limits(u32 limit);
-void tipc_bcbearer_sort(struct tipc_node_map *nm_ptr, u32 node, bool action);
+void tipc_bclink_update_link_state(struct net *net, struct tipc_node *n_ptr,
+ u32 last_sent);
+int tipc_bclink_stats(struct net *net, char *stats_buf, const u32 buf_size);
+int tipc_bclink_reset_stats(struct net *net);
+int tipc_bclink_set_queue_limits(struct net *net, u32 limit);
+void tipc_bcbearer_sort(struct net *net, struct tipc_node_map *nm_ptr,
+ u32 node, bool action);
uint tipc_bclink_get_mtu(void);
-int tipc_bclink_xmit(struct sk_buff_head *list);
-void tipc_bclink_wakeup_users(void);
-int tipc_nl_add_bc_link(struct tipc_nl_msg *msg);
+int tipc_bclink_xmit(struct net *net, struct sk_buff_head *list);
+void tipc_bclink_wakeup_users(struct net *net);
+int tipc_nl_add_bc_link(struct net *net, struct tipc_nl_msg *msg);
#endif
* POSSIBILITY OF SUCH DAMAGE.
*/
+#include <net/sock.h>
#include "core.h"
#include "config.h"
#include "bearer.h"
#include "link.h"
#include "discover.h"
+#include "bcast.h"
#define MAX_ADDR_STR 60
[TIPC_NLA_MEDIA_PROP] = { .type = NLA_NESTED }
};
-struct tipc_bearer __rcu *bearer_list[MAX_BEARERS + 1];
-
-static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down);
+static void bearer_disable(struct net *net, struct tipc_bearer *b_ptr,
+ bool shutting_down);
/**
* tipc_media_find - locates specified media object by name
/**
* tipc_bearer_find - locates bearer object with matching bearer name
*/
-struct tipc_bearer *tipc_bearer_find(const char *name)
+struct tipc_bearer *tipc_bearer_find(struct net *net, const char *name)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
u32 i;
for (i = 0; i < MAX_BEARERS; i++) {
- b_ptr = rtnl_dereference(bearer_list[i]);
+ b_ptr = rtnl_dereference(tn->bearer_list[i]);
if (b_ptr && (!strcmp(b_ptr->name, name)))
return b_ptr;
}
/**
* tipc_bearer_get_names - record names of bearers in buffer
*/
-struct sk_buff *tipc_bearer_get_names(void)
+struct sk_buff *tipc_bearer_get_names(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *buf;
struct tipc_bearer *b;
int i, j;
for (i = 0; media_info_array[i] != NULL; i++) {
for (j = 0; j < MAX_BEARERS; j++) {
- b = rtnl_dereference(bearer_list[j]);
+ b = rtnl_dereference(tn->bearer_list[j]);
if (!b)
continue;
if (b->media == media_info_array[i]) {
return buf;
}
-void tipc_bearer_add_dest(u32 bearer_id, u32 dest)
+void tipc_bearer_add_dest(struct net *net, u32 bearer_id, u32 dest)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
rcu_read_lock();
- b_ptr = rcu_dereference_rtnl(bearer_list[bearer_id]);
+ b_ptr = rcu_dereference_rtnl(tn->bearer_list[bearer_id]);
if (b_ptr) {
- tipc_bcbearer_sort(&b_ptr->nodes, dest, true);
+ tipc_bcbearer_sort(net, &b_ptr->nodes, dest, true);
tipc_disc_add_dest(b_ptr->link_req);
}
rcu_read_unlock();
}
-void tipc_bearer_remove_dest(u32 bearer_id, u32 dest)
+void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
rcu_read_lock();
- b_ptr = rcu_dereference_rtnl(bearer_list[bearer_id]);
+ b_ptr = rcu_dereference_rtnl(tn->bearer_list[bearer_id]);
if (b_ptr) {
- tipc_bcbearer_sort(&b_ptr->nodes, dest, false);
+ tipc_bcbearer_sort(net, &b_ptr->nodes, dest, false);
tipc_disc_remove_dest(b_ptr->link_req);
}
rcu_read_unlock();
/**
* tipc_enable_bearer - enable bearer with the given name
*/
-int tipc_enable_bearer(const char *name, u32 disc_domain, u32 priority)
+int tipc_enable_bearer(struct net *net, const char *name, u32 disc_domain,
+ u32 priority)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
struct tipc_media *m_ptr;
struct tipc_bearer_names b_names;
u32 i;
int res = -EINVAL;
- if (!tipc_own_addr) {
+ if (!tn->own_addr) {
pr_warn("Bearer <%s> rejected, not supported in standalone mode\n",
name);
return -ENOPROTOOPT;
return -EINVAL;
}
if (tipc_addr_domain_valid(disc_domain) &&
- (disc_domain != tipc_own_addr)) {
- if (tipc_in_scope(disc_domain, tipc_own_addr)) {
- disc_domain = tipc_own_addr & TIPC_CLUSTER_MASK;
+ (disc_domain != tn->own_addr)) {
+ if (tipc_in_scope(disc_domain, tn->own_addr)) {
+ disc_domain = tn->own_addr & TIPC_CLUSTER_MASK;
res = 0; /* accept any node in own cluster */
- } else if (in_own_cluster_exact(disc_domain))
+ } else if (in_own_cluster_exact(net, disc_domain))
res = 0; /* accept specified node in own cluster */
}
if (res) {
bearer_id = MAX_BEARERS;
with_this_prio = 1;
for (i = MAX_BEARERS; i-- != 0; ) {
- b_ptr = rtnl_dereference(bearer_list[i]);
+ b_ptr = rtnl_dereference(tn->bearer_list[i]);
if (!b_ptr) {
bearer_id = i;
continue;
strcpy(b_ptr->name, name);
b_ptr->media = m_ptr;
- res = m_ptr->enable_media(b_ptr);
+ res = m_ptr->enable_media(net, b_ptr);
if (res) {
pr_warn("Bearer <%s> rejected, enable failure (%d)\n",
name, -res);
b_ptr->net_plane = bearer_id + 'A';
b_ptr->priority = priority;
- res = tipc_disc_create(b_ptr, &b_ptr->bcast_addr);
+ res = tipc_disc_create(net, b_ptr, &b_ptr->bcast_addr);
if (res) {
- bearer_disable(b_ptr, false);
+ bearer_disable(net, b_ptr, false);
pr_warn("Bearer <%s> rejected, discovery object creation failed\n",
name);
return -EINVAL;
}
- rcu_assign_pointer(bearer_list[bearer_id], b_ptr);
+ rcu_assign_pointer(tn->bearer_list[bearer_id], b_ptr);
pr_info("Enabled bearer <%s>, discovery domain %s, priority %u\n",
name,
/**
* tipc_reset_bearer - Reset all links established over this bearer
*/
-static int tipc_reset_bearer(struct tipc_bearer *b_ptr)
+static int tipc_reset_bearer(struct net *net, struct tipc_bearer *b_ptr)
{
pr_info("Resetting bearer <%s>\n", b_ptr->name);
- tipc_link_reset_list(b_ptr->identity);
- tipc_disc_reset(b_ptr);
+ tipc_link_reset_list(net, b_ptr->identity);
+ tipc_disc_reset(net, b_ptr);
return 0;
}
*
* Note: This routine assumes caller holds RTNL lock.
*/
-static void bearer_disable(struct tipc_bearer *b_ptr, bool shutting_down)
+static void bearer_disable(struct net *net, struct tipc_bearer *b_ptr,
+ bool shutting_down)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 i;
pr_info("Disabling bearer <%s>\n", b_ptr->name);
b_ptr->media->disable_media(b_ptr);
- tipc_link_delete_list(b_ptr->identity, shutting_down);
+ tipc_link_delete_list(net, b_ptr->identity, shutting_down);
if (b_ptr->link_req)
tipc_disc_delete(b_ptr->link_req);
for (i = 0; i < MAX_BEARERS; i++) {
- if (b_ptr == rtnl_dereference(bearer_list[i])) {
- RCU_INIT_POINTER(bearer_list[i], NULL);
+ if (b_ptr == rtnl_dereference(tn->bearer_list[i])) {
+ RCU_INIT_POINTER(tn->bearer_list[i], NULL);
break;
}
}
kfree_rcu(b_ptr, rcu);
}
-int tipc_disable_bearer(const char *name)
+int tipc_disable_bearer(struct net *net, const char *name)
{
struct tipc_bearer *b_ptr;
int res;
- b_ptr = tipc_bearer_find(name);
+ b_ptr = tipc_bearer_find(net, name);
if (b_ptr == NULL) {
pr_warn("Attempt to disable unknown bearer <%s>\n", name);
res = -EINVAL;
} else {
- bearer_disable(b_ptr, false);
+ bearer_disable(net, b_ptr, false);
res = 0;
}
return res;
}
-int tipc_enable_l2_media(struct tipc_bearer *b)
+int tipc_enable_l2_media(struct net *net, struct tipc_bearer *b)
{
struct net_device *dev;
char *driver_name = strchr((const char *)b->name, ':') + 1;
/* Find device with specified name */
- dev = dev_get_by_name(&init_net, driver_name);
+ dev = dev_get_by_name(net, driver_name);
if (!dev)
return -ENODEV;
* @b_ptr: the bearer through which the packet is to be sent
* @dest: peer destination address
*/
-int tipc_l2_send_msg(struct sk_buff *buf, struct tipc_bearer *b,
- struct tipc_media_addr *dest)
+int tipc_l2_send_msg(struct net *net, struct sk_buff *buf,
+ struct tipc_bearer *b, struct tipc_media_addr *dest)
{
struct sk_buff *clone;
struct net_device *dev;
* The media send routine must not alter the buffer being passed in
* as it may be needed for later retransmission!
*/
-void tipc_bearer_send(u32 bearer_id, struct sk_buff *buf,
+void tipc_bearer_send(struct net *net, u32 bearer_id, struct sk_buff *buf,
struct tipc_media_addr *dest)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
rcu_read_lock();
- b_ptr = rcu_dereference_rtnl(bearer_list[bearer_id]);
+ b_ptr = rcu_dereference_rtnl(tn->bearer_list[bearer_id]);
if (likely(b_ptr))
- b_ptr->media->send_msg(buf, b_ptr, dest);
+ b_ptr->media->send_msg(net, buf, b_ptr, dest);
rcu_read_unlock();
}
{
struct tipc_bearer *b_ptr;
- if (!net_eq(dev_net(dev), &init_net)) {
- kfree_skb(buf);
- return NET_RX_DROP;
- }
-
rcu_read_lock();
b_ptr = rcu_dereference_rtnl(dev->tipc_ptr);
if (likely(b_ptr)) {
if (likely(buf->pkt_type <= PACKET_BROADCAST)) {
buf->next = NULL;
- tipc_rcv(buf, b_ptr);
+ tipc_rcv(dev_net(dev), buf, b_ptr);
rcu_read_unlock();
return NET_RX_SUCCESS;
}
static int tipc_l2_device_event(struct notifier_block *nb, unsigned long evt,
void *ptr)
{
- struct tipc_bearer *b_ptr;
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
-
- if (!net_eq(dev_net(dev), &init_net))
- return NOTIFY_DONE;
+ struct net *net = dev_net(dev);
+ struct tipc_bearer *b_ptr;
b_ptr = rtnl_dereference(dev->tipc_ptr);
if (!b_ptr)
break;
case NETDEV_DOWN:
case NETDEV_CHANGEMTU:
- tipc_reset_bearer(b_ptr);
+ tipc_reset_bearer(net, b_ptr);
break;
case NETDEV_CHANGEADDR:
b_ptr->media->raw2addr(b_ptr, &b_ptr->addr,
(char *)dev->dev_addr);
- tipc_reset_bearer(b_ptr);
+ tipc_reset_bearer(net, b_ptr);
break;
case NETDEV_UNREGISTER:
case NETDEV_CHANGENAME:
- bearer_disable(b_ptr, false);
+ bearer_disable(dev_net(dev), b_ptr, false);
break;
}
return NOTIFY_OK;
dev_remove_pack(&tipc_packet_type);
}
-void tipc_bearer_stop(void)
+void tipc_bearer_stop(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_bearer *b_ptr;
u32 i;
for (i = 0; i < MAX_BEARERS; i++) {
- b_ptr = rtnl_dereference(bearer_list[i]);
+ b_ptr = rtnl_dereference(tn->bearer_list[i]);
if (b_ptr) {
- bearer_disable(b_ptr, true);
- bearer_list[i] = NULL;
+ bearer_disable(net, b_ptr, true);
+ tn->bearer_list[i] = NULL;
}
}
}
int i = cb->args[0];
struct tipc_bearer *bearer;
struct tipc_nl_msg msg;
+ struct net *net = sock_net(skb->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
if (i == MAX_BEARERS)
return 0;
rtnl_lock();
for (i = 0; i < MAX_BEARERS; i++) {
- bearer = rtnl_dereference(bearer_list[i]);
+ bearer = rtnl_dereference(tn->bearer_list[i]);
if (!bearer)
continue;
struct tipc_bearer *bearer;
struct tipc_nl_msg msg;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
+ struct net *net = genl_info_net(info);
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
msg.seq = info->snd_seq;
rtnl_lock();
- bearer = tipc_bearer_find(name);
+ bearer = tipc_bearer_find(net, name);
if (!bearer) {
err = -EINVAL;
goto err_out;
char *name;
struct tipc_bearer *bearer;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
+ struct net *net = genl_info_net(info);
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
rtnl_lock();
- bearer = tipc_bearer_find(name);
+ bearer = tipc_bearer_find(net, name);
if (!bearer) {
rtnl_unlock();
return -EINVAL;
}
- bearer_disable(bearer, false);
+ bearer_disable(net, bearer, false);
rtnl_unlock();
return 0;
int tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info)
{
+ struct net *net = genl_info_net(info);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
int err;
char *bearer;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
u32 prio;
prio = TIPC_MEDIA_LINK_PRI;
- domain = tipc_own_addr & TIPC_CLUSTER_MASK;
+ domain = tn->own_addr & TIPC_CLUSTER_MASK;
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
}
rtnl_lock();
- err = tipc_enable_bearer(bearer, domain, prio);
+ err = tipc_enable_bearer(net, bearer, domain, prio);
if (err) {
rtnl_unlock();
return err;
char *name;
struct tipc_bearer *b;
struct nlattr *attrs[TIPC_NLA_BEARER_MAX + 1];
+ struct net *net = genl_info_net(info);
if (!info->attrs[TIPC_NLA_BEARER])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_BEARER_NAME]);
rtnl_lock();
- b = tipc_bearer_find(name);
+ b = tipc_bearer_find(net, name);
if (!b) {
rtnl_unlock();
return -EINVAL;
#ifndef _TIPC_BEARER_H
#define _TIPC_BEARER_H
-#include "bcast.h"
#include "netlink.h"
#include <net/genetlink.h>
#define MAX_BEARERS 2
#define MAX_MEDIA 2
+#define MAX_NODES 4096
+#define WSIZE 32
/* Identifiers associated with TIPC message header media address info
* - address info field is 32 bytes long
#define TIPC_MEDIA_TYPE_ETH 1
#define TIPC_MEDIA_TYPE_IB 2
+/**
+ * struct tipc_node_map - set of node identifiers
+ * @count: # of nodes in set
+ * @map: bitmap of node identifiers that are in the set
+ */
+struct tipc_node_map {
+ u32 count;
+ u32 map[MAX_NODES / WSIZE];
+};
+
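The node map packs one membership bit per node into 32-bit words. A minimal sketch of that encoding, assuming the caller passes a value already reduced to the node field (as TIPC's nmap helpers elsewhere do); example_nmap_contains is a hypothetical name:

static bool example_nmap_contains(struct tipc_node_map *nm, u32 n)
{
	/* word n / WSIZE holds bit n % WSIZE for node n */
	return nm->map[n / WSIZE] & (1u << (n % WSIZE));
}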
/**
* struct tipc_media_addr - destination address used by TIPC bearers
* @value: address info (format defined by media)
* @name: media name
*/
struct tipc_media {
- int (*send_msg)(struct sk_buff *buf,
+ int (*send_msg)(struct net *net, struct sk_buff *buf,
struct tipc_bearer *b_ptr,
struct tipc_media_addr *dest);
- int (*enable_media)(struct tipc_bearer *b_ptr);
+ int (*enable_media)(struct net *net, struct tipc_bearer *b_ptr);
void (*disable_media)(struct tipc_bearer *b_ptr);
int (*addr2str)(struct tipc_media_addr *addr,
char *strbuf,
char if_name[TIPC_MAX_IF_NAME];
};
-struct tipc_link;
-
-extern struct tipc_bearer __rcu *bearer_list[];
-
/*
* TIPC routines available to supported media types
*/
-void tipc_rcv(struct sk_buff *skb, struct tipc_bearer *tb_ptr);
-int tipc_enable_bearer(const char *bearer_name, u32 disc_domain, u32 priority);
-int tipc_disable_bearer(const char *name);
+void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b_ptr);
+int tipc_enable_bearer(struct net *net, const char *bearer_name,
+ u32 disc_domain, u32 priority);
+int tipc_disable_bearer(struct net *net, const char *name);
/*
* Routines made available to TIPC by supported media types
int tipc_media_set_window(const char *name, u32 new_value);
void tipc_media_addr_printf(char *buf, int len, struct tipc_media_addr *a);
struct sk_buff *tipc_media_get_names(void);
-int tipc_enable_l2_media(struct tipc_bearer *b);
+int tipc_enable_l2_media(struct net *net, struct tipc_bearer *b);
void tipc_disable_l2_media(struct tipc_bearer *b);
-int tipc_l2_send_msg(struct sk_buff *buf, struct tipc_bearer *b,
- struct tipc_media_addr *dest);
+int tipc_l2_send_msg(struct net *net, struct sk_buff *buf,
+ struct tipc_bearer *b, struct tipc_media_addr *dest);
-struct sk_buff *tipc_bearer_get_names(void);
-void tipc_bearer_add_dest(u32 bearer_id, u32 dest);
-void tipc_bearer_remove_dest(u32 bearer_id, u32 dest);
-struct tipc_bearer *tipc_bearer_find(const char *name);
+struct sk_buff *tipc_bearer_get_names(struct net *net);
+void tipc_bearer_add_dest(struct net *net, u32 bearer_id, u32 dest);
+void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest);
+struct tipc_bearer *tipc_bearer_find(struct net *net, const char *name);
struct tipc_media *tipc_media_find(const char *name);
int tipc_bearer_setup(void);
void tipc_bearer_cleanup(void);
-void tipc_bearer_stop(void);
-void tipc_bearer_send(u32 bearer_id, struct sk_buff *buf,
+void tipc_bearer_stop(struct net *net);
+void tipc_bearer_send(struct net *net, u32 bearer_id, struct sk_buff *buf,
struct tipc_media_addr *dest);
#endif /* _TIPC_BEARER_H */
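The recurring pattern throughout this series is that every former global is now reached through the owning namespace via net_generic(). A minimal sketch under RTNL, with example_first_bearer a hypothetical name:

static struct tipc_bearer *example_first_bearer(struct net *net)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);

	/* bearer_list is per-namespace now; caller must hold RTNL */
	return rtnl_dereference(tn->bearer_list[0]);
}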
return buf;
}
-static struct sk_buff *cfg_enable_bearer(void)
+static struct sk_buff *cfg_enable_bearer(struct net *net)
{
struct tipc_bearer_config *args;
return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
args = (struct tipc_bearer_config *)TLV_DATA(req_tlv_area);
- if (tipc_enable_bearer(args->name,
+ if (tipc_enable_bearer(net, args->name,
ntohl(args->disc_domain),
ntohl(args->priority)))
return tipc_cfg_reply_error_string("unable to enable bearer");
return tipc_cfg_reply_none();
}
-static struct sk_buff *cfg_disable_bearer(void)
+static struct sk_buff *cfg_disable_bearer(struct net *net)
{
if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_BEARER_NAME))
return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
- if (tipc_disable_bearer((char *)TLV_DATA(req_tlv_area)))
+ if (tipc_disable_bearer(net, (char *)TLV_DATA(req_tlv_area)))
return tipc_cfg_reply_error_string("unable to disable bearer");
return tipc_cfg_reply_none();
}
-static struct sk_buff *cfg_set_own_addr(void)
+static struct sk_buff *cfg_set_own_addr(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 addr;
if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_NET_ADDR))
return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
addr = ntohl(*(__be32 *)TLV_DATA(req_tlv_area));
- if (addr == tipc_own_addr)
+ if (addr == tn->own_addr)
return tipc_cfg_reply_none();
if (!tipc_addr_node_valid(addr))
return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
" (node address)");
- if (tipc_own_addr)
+ if (tn->own_addr)
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (cannot change node address once assigned)");
- if (!tipc_net_start(addr))
+ if (!tipc_net_start(net, addr))
return tipc_cfg_reply_none();
return tipc_cfg_reply_error_string("cannot change to network mode");
}
-static struct sk_buff *cfg_set_max_ports(void)
+static struct sk_buff *cfg_set_netid(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 value;
if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_UNSIGNED))
return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
value = ntohl(*(__be32 *)TLV_DATA(req_tlv_area));
- if (value == tipc_max_ports)
- return tipc_cfg_reply_none();
- if (value < 127 || value > 65535)
- return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
- " (max ports must be 127-65535)");
- return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
- " (cannot change max ports while TIPC is active)");
-}
-
-static struct sk_buff *cfg_set_netid(void)
-{
- u32 value;
-
- if (!TLV_CHECK(req_tlv_area, req_tlv_space, TIPC_TLV_UNSIGNED))
- return tipc_cfg_reply_error_string(TIPC_CFG_TLV_ERROR);
- value = ntohl(*(__be32 *)TLV_DATA(req_tlv_area));
- if (value == tipc_net_id)
+ if (value == tn->net_id)
return tipc_cfg_reply_none();
if (value < 1 || value > 9999)
return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
" (network id must be 1-9999)");
- if (tipc_own_addr)
+ if (tn->own_addr)
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (cannot change network id once TIPC has joined a network)");
- tipc_net_id = value;
+ tn->net_id = value;
return tipc_cfg_reply_none();
}
-struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd, const void *request_area,
- int request_space, int reply_headroom)
+struct sk_buff *tipc_cfg_do_cmd(struct net *net, u32 orig_node, u16 cmd,
+ const void *request_area, int request_space,
+ int reply_headroom)
{
struct sk_buff *rep_tlv_buf;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
rtnl_lock();
rep_headroom = reply_headroom;
/* Check command authorization */
- if (likely(in_own_node(orig_node))) {
+ if (likely(in_own_node(net, orig_node))) {
/* command is permitted */
} else {
rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
rep_tlv_buf = tipc_cfg_reply_none();
break;
case TIPC_CMD_GET_NODES:
- rep_tlv_buf = tipc_node_get_nodes(req_tlv_area, req_tlv_space);
+ rep_tlv_buf = tipc_node_get_nodes(net, req_tlv_area,
+ req_tlv_space);
break;
case TIPC_CMD_GET_LINKS:
- rep_tlv_buf = tipc_node_get_links(req_tlv_area, req_tlv_space);
+ rep_tlv_buf = tipc_node_get_links(net, req_tlv_area,
+ req_tlv_space);
break;
case TIPC_CMD_SHOW_LINK_STATS:
- rep_tlv_buf = tipc_link_cmd_show_stats(req_tlv_area, req_tlv_space);
+ rep_tlv_buf = tipc_link_cmd_show_stats(net, req_tlv_area,
+ req_tlv_space);
break;
case TIPC_CMD_RESET_LINK_STATS:
- rep_tlv_buf = tipc_link_cmd_reset_stats(req_tlv_area, req_tlv_space);
+ rep_tlv_buf = tipc_link_cmd_reset_stats(net, req_tlv_area,
+ req_tlv_space);
break;
case TIPC_CMD_SHOW_NAME_TABLE:
- rep_tlv_buf = tipc_nametbl_get(req_tlv_area, req_tlv_space);
+ rep_tlv_buf = tipc_nametbl_get(net, req_tlv_area,
+ req_tlv_space);
break;
case TIPC_CMD_GET_BEARER_NAMES:
- rep_tlv_buf = tipc_bearer_get_names();
+ rep_tlv_buf = tipc_bearer_get_names(net);
break;
case TIPC_CMD_GET_MEDIA_NAMES:
rep_tlv_buf = tipc_media_get_names();
break;
case TIPC_CMD_SHOW_PORTS:
- rep_tlv_buf = tipc_sk_socks_show();
+ rep_tlv_buf = tipc_sk_socks_show(net);
break;
case TIPC_CMD_SHOW_STATS:
rep_tlv_buf = tipc_show_stats();
case TIPC_CMD_SET_LINK_TOL:
case TIPC_CMD_SET_LINK_PRI:
case TIPC_CMD_SET_LINK_WINDOW:
- rep_tlv_buf = tipc_link_cmd_config(req_tlv_area, req_tlv_space, cmd);
+ rep_tlv_buf = tipc_link_cmd_config(net, req_tlv_area,
+ req_tlv_space, cmd);
break;
case TIPC_CMD_ENABLE_BEARER:
- rep_tlv_buf = cfg_enable_bearer();
+ rep_tlv_buf = cfg_enable_bearer(net);
break;
case TIPC_CMD_DISABLE_BEARER:
- rep_tlv_buf = cfg_disable_bearer();
+ rep_tlv_buf = cfg_disable_bearer(net);
break;
case TIPC_CMD_SET_NODE_ADDR:
- rep_tlv_buf = cfg_set_own_addr();
- break;
- case TIPC_CMD_SET_MAX_PORTS:
- rep_tlv_buf = cfg_set_max_ports();
+ rep_tlv_buf = cfg_set_own_addr(net);
break;
case TIPC_CMD_SET_NETID:
- rep_tlv_buf = cfg_set_netid();
- break;
- case TIPC_CMD_GET_MAX_PORTS:
- rep_tlv_buf = tipc_cfg_reply_unsigned(tipc_max_ports);
+ rep_tlv_buf = cfg_set_netid(net);
break;
case TIPC_CMD_GET_NETID:
- rep_tlv_buf = tipc_cfg_reply_unsigned(tipc_net_id);
+ rep_tlv_buf = tipc_cfg_reply_unsigned(tn->net_id);
break;
case TIPC_CMD_NOT_NET_ADMIN:
rep_tlv_buf =
case TIPC_CMD_SET_REMOTE_MNG:
case TIPC_CMD_GET_REMOTE_MNG:
case TIPC_CMD_DUMP_LOG:
+ case TIPC_CMD_SET_MAX_PORTS:
+ case TIPC_CMD_GET_MAX_PORTS:
rep_tlv_buf = tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (obsolete command)");
break;
#ifndef _TIPC_CONFIG_H
#define _TIPC_CONFIG_H
-/* ---------------------------------------------------------------------- */
-
#include "link.h"
+#define ULTRA_STRING_MAX_LEN 32768
+
struct sk_buff *tipc_cfg_reply_alloc(int payload_size);
int tipc_cfg_append_tlv(struct sk_buff *buf, int tlv_type,
void *tlv_data, int tlv_data_size);
return tipc_cfg_reply_string_type(TIPC_TLV_ULTRA_STRING, string);
}
-struct sk_buff *tipc_cfg_do_cmd(u32 orig_node, u16 cmd,
+struct sk_buff *tipc_cfg_do_cmd(struct net *net, u32 orig_node, u16 cmd,
const void *req_tlv_area, int req_tlv_space,
int headroom);
#endif
* POSSIBILITY OF SUCH DAMAGE.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include "core.h"
#include "name_table.h"
#include "subscr.h"
#include <linux/module.h>
-/* global variables used by multiple sub-systems within TIPC */
-int tipc_random __read_mostly;
-
/* configurable TIPC parameters */
-u32 tipc_own_addr __read_mostly;
-int tipc_max_ports __read_mostly;
int tipc_net_id __read_mostly;
int sysctl_tipc_rmem[3] __read_mostly; /* min/default/max */
-/**
- * tipc_buf_acquire - creates a TIPC message buffer
- * @size: message size (including TIPC header)
- *
- * Returns a new buffer with data pointers set to the specified size.
- *
- * NOTE: Headroom is reserved to allow prepending of a data link header.
- * There may also be unrequested tailroom present at the buffer's end.
- */
-struct sk_buff *tipc_buf_acquire(u32 size)
+static int __net_init tipc_init_net(struct net *net)
{
- struct sk_buff *skb;
- unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
-
- skb = alloc_skb_fclone(buf_size, GFP_ATOMIC);
- if (skb) {
- skb_reserve(skb, BUF_HEADROOM);
- skb_put(skb, size);
- skb->next = NULL;
- }
- return skb;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ int err;
+
+ tn->net_id = 4711;
+ tn->own_addr = 0;
+ get_random_bytes(&tn->random, sizeof(int));
+ INIT_LIST_HEAD(&tn->node_list);
+ spin_lock_init(&tn->node_list_lock);
+
+ err = tipc_sk_rht_init(net);
+ if (err)
+ goto out_sk_rht;
+
+ err = tipc_nametbl_init(net);
+ if (err)
+ goto out_nametbl;
+
+ err = tipc_subscr_start(net);
+ if (err)
+ goto out_subscr;
+ return 0;
+
+out_subscr:
+ tipc_nametbl_stop(net);
+out_nametbl:
+ tipc_sk_rht_destroy(net);
+out_sk_rht:
+ return err;
}
-/**
- * tipc_core_stop - switch TIPC from SINGLE NODE to NOT RUNNING mode
- */
-static void tipc_core_stop(void)
+static void __net_exit tipc_exit_net(struct net *net)
{
- tipc_net_stop();
- tipc_bearer_cleanup();
- tipc_netlink_stop();
- tipc_subscr_stop();
- tipc_nametbl_stop();
- tipc_sk_ref_table_stop();
- tipc_socket_stop();
- tipc_unregister_sysctl();
+ tipc_subscr_stop(net);
+ tipc_net_stop(net);
+ tipc_nametbl_stop(net);
+ tipc_sk_rht_destroy(net);
}
-/**
- * tipc_core_start - switch TIPC from NOT RUNNING to SINGLE NODE mode
- */
-static int tipc_core_start(void)
+static struct pernet_operations tipc_net_ops = {
+ .init = tipc_init_net,
+ .exit = tipc_exit_net,
+ .id = &tipc_net_id,
+ .size = sizeof(struct tipc_net),
+};
+
+static int __init tipc_init(void)
{
int err;
- get_random_bytes(&tipc_random, sizeof(tipc_random));
-
- err = tipc_sk_ref_table_init(tipc_max_ports, tipc_random);
- if (err)
- goto out_reftbl;
+ pr_info("Activated (version " TIPC_MOD_VER ")\n");
- err = tipc_nametbl_init();
- if (err)
- goto out_nametbl;
+ sysctl_tipc_rmem[0] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
+ TIPC_LOW_IMPORTANCE;
+ sysctl_tipc_rmem[1] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
+ TIPC_CRITICAL_IMPORTANCE;
+ sysctl_tipc_rmem[2] = TIPC_CONN_OVERLOAD_LIMIT;
err = tipc_netlink_start();
if (err)
if (err)
goto out_sysctl;
- err = tipc_subscr_start();
+ err = register_pernet_subsys(&tipc_net_ops);
if (err)
- goto out_subscr;
+ goto out_pernet;
err = tipc_bearer_setup();
if (err)
goto out_bearer;
+ pr_info("Started in single node mode\n");
return 0;
out_bearer:
- tipc_subscr_stop();
-out_subscr:
+ unregister_pernet_subsys(&tipc_net_ops);
+out_pernet:
tipc_unregister_sysctl();
out_sysctl:
tipc_socket_stop();
out_socket:
tipc_netlink_stop();
out_netlink:
- tipc_nametbl_stop();
-out_nametbl:
- tipc_sk_ref_table_stop();
-out_reftbl:
+ pr_err("Unable to start in single node mode\n");
return err;
}
-static int __init tipc_init(void)
-{
- int res;
-
- pr_info("Activated (version " TIPC_MOD_VER ")\n");
-
- tipc_own_addr = 0;
- tipc_max_ports = CONFIG_TIPC_PORTS;
- tipc_net_id = 4711;
-
- sysctl_tipc_rmem[0] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
- TIPC_LOW_IMPORTANCE;
- sysctl_tipc_rmem[1] = TIPC_CONN_OVERLOAD_LIMIT >> 4 <<
- TIPC_CRITICAL_IMPORTANCE;
- sysctl_tipc_rmem[2] = TIPC_CONN_OVERLOAD_LIMIT;
-
- res = tipc_core_start();
- if (res)
- pr_err("Unable to start in single node mode\n");
- else
- pr_info("Started in single node mode\n");
- return res;
-}
-
static void __exit tipc_exit(void)
{
- tipc_core_stop();
+ tipc_bearer_cleanup();
+ tipc_netlink_stop();
+ tipc_socket_stop();
+ tipc_unregister_sysctl();
+ unregister_pernet_subsys(&tipc_net_ops);
+
pr_info("Deactivated\n");
}
#ifndef _TIPC_CORE_H
#define _TIPC_CORE_H
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
#include <linux/tipc.h>
#include <linux/tipc_config.h>
#include <linux/tipc_netlink.h>
#include <linux/vmalloc.h>
#include <linux/rtnetlink.h>
#include <linux/etherdevice.h>
+#include <net/netns/generic.h>
+#include <linux/rhashtable.h>
-#define TIPC_MOD_VER "2.0.0"
-
-#define ULTRA_STRING_MAX_LEN 32768
-#define TIPC_MAX_SUBSCRIPTIONS 65535
-#define TIPC_MAX_PUBLICATIONS 65535
+#include "node.h"
+#include "bearer.h"
+#include "bcast.h"
+#include "netlink.h"
+#include "link.h"
+#include "node.h"
+#include "msg.h"
-struct tipc_msg; /* msg.h */
+#define TIPC_MOD_VER "2.0.0"
int tipc_snprintf(char *buf, int len, const char *fmt, ...);
-/*
- * TIPC-specific error codes
- */
-#define ELINKCONG EAGAIN /* link congestion <=> resource unavailable */
-
-/*
- * Global configuration variables
- */
-extern u32 tipc_own_addr __read_mostly;
-extern int tipc_max_ports __read_mostly;
extern int tipc_net_id __read_mostly;
extern int sysctl_tipc_rmem[3] __read_mostly;
extern int sysctl_tipc_named_timeout __read_mostly;
-/*
- * Other global variables
- */
-extern int tipc_random __read_mostly;
+struct tipc_net {
+ u32 own_addr;
+ int net_id;
+ int random;
-/*
- * Routines available to privileged subsystems
- */
-int tipc_netlink_start(void);
-void tipc_netlink_stop(void);
-int tipc_socket_init(void);
-void tipc_socket_stop(void);
-int tipc_sock_create_local(int type, struct socket **res);
-void tipc_sock_release_local(struct socket *sock);
-int tipc_sock_accept_local(struct socket *sock, struct socket **newsock,
- int flags);
+ /* Node table and node list */
+ spinlock_t node_list_lock;
+ struct hlist_head node_htable[NODE_HTABLE_SIZE];
+ struct list_head node_list;
+ u32 num_nodes;
+ u32 num_links;
+
+ /* Bearer list */
+ struct tipc_bearer __rcu *bearer_list[MAX_BEARERS + 1];
+
+ /* Broadcast link */
+ struct tipc_bcbearer *bcbearer;
+ struct tipc_bclink *bclink;
+ struct tipc_link *bcl;
+
+ /* Socket hash table */
+ struct rhashtable sk_rht;
+
+ /* Name table */
+ spinlock_t nametbl_lock;
+ struct name_table *nametbl;
+
+ /* Topology subscription server */
+ struct tipc_server *topsrv;
+ atomic_t subscription_count;
+};
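With the globals folded into struct tipc_net, a former test against tipc_own_addr becomes a per-namespace lookup. A sketch (example_is_own_addr is hypothetical; in_own_node() in this series has the same shape):

static bool example_is_own_addr(struct net *net, u32 addr)
{
	struct tipc_net *tn = net_generic(net, tipc_net_id);

	return addr == tn->own_addr;
}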
#ifdef CONFIG_SYSCTL
int tipc_register_sysctl(void);
#define tipc_unregister_sysctl()
#endif
-/*
- * TIPC timer code
- */
-typedef void (*Handler) (unsigned long);
-
-/**
- * k_init_timer - initialize a timer
- * @timer: pointer to timer structure
- * @routine: pointer to routine to invoke when timer expires
- * @argument: value to pass to routine when timer expires
- *
- * Timer must be initialized before use (and terminated when no longer needed).
- */
-static inline void k_init_timer(struct timer_list *timer, Handler routine,
- unsigned long argument)
-{
- setup_timer(timer, routine, argument);
-}
-
-/**
- * k_start_timer - start a timer
- * @timer: pointer to timer structure
- * @msec: time to delay (in ms)
- *
- * Schedules a previously initialized timer for later execution.
- * If timer is already running, the new timeout overrides the previous request.
- *
- * To ensure the timer doesn't expire before the specified delay elapses,
- * the amount of delay is rounded up when converting to the jiffies
- * then an additional jiffy is added to account for the fact that
- * the starting time may be in the middle of the current jiffy.
- */
-static inline void k_start_timer(struct timer_list *timer, unsigned long msec)
-{
- mod_timer(timer, jiffies + msecs_to_jiffies(msec) + 1);
-}
-
-/**
- * k_cancel_timer - cancel a timer
- * @timer: pointer to timer structure
- *
- * Cancels a previously initialized timer.
- * Can be called safely even if the timer is already inactive.
- *
- * WARNING: Must not be called when holding locks required by the timer's
- * timeout routine, otherwise deadlock can occur on SMP systems!
- */
-static inline void k_cancel_timer(struct timer_list *timer)
-{
- del_timer_sync(timer);
-}
-
-/**
- * k_term_timer - terminate a timer
- * @timer: pointer to timer structure
- *
- * Prevents further use of a previously initialized timer.
- *
- * WARNING: Caller must ensure timer isn't currently running.
- *
- * (Do not "enhance" this routine to automatically cancel an active timer,
- * otherwise deadlock can arise when a timeout routine calls k_term_timer.)
- */
-static inline void k_term_timer(struct timer_list *timer)
-{
-}
-
-/*
- * TIPC message buffer code
- *
- * TIPC message buffer headroom reserves space for the worst-case
- * link-level device header (in case the message is sent off-node).
- *
- * Note: Headroom should be a multiple of 4 to ensure the TIPC header fields
- * are word aligned for quicker access
- */
-#define BUF_HEADROOM LL_MAX_HEADER
-
-struct tipc_skb_cb {
- void *handle;
- struct sk_buff *tail;
- bool deferred;
- bool wakeup_pending;
- bool bundling;
- u16 chain_sz;
- u16 chain_imp;
-};
-
-#define TIPC_SKB_CB(__skb) ((struct tipc_skb_cb *)&((__skb)->cb[0]))
-
-static inline struct tipc_msg *buf_msg(struct sk_buff *skb)
-{
- return (struct tipc_msg *)skb->data;
-}
-
-struct sk_buff *tipc_buf_acquire(u32 size);
-
#endif
#include "link.h"
#include "discover.h"
-#define TIPC_LINK_REQ_INIT 125 /* min delay during bearer start up */
-#define TIPC_LINK_REQ_FAST 1000 /* max delay if bearer has no links */
-#define TIPC_LINK_REQ_SLOW 60000 /* max delay if bearer has links */
-#define TIPC_LINK_REQ_INACTIVE 0xffffffff /* indicates no timer in use */
+/* min delay during bearer start up */
+#define TIPC_LINK_REQ_INIT msecs_to_jiffies(125)
+/* max delay if bearer has no links */
+#define TIPC_LINK_REQ_FAST msecs_to_jiffies(1000)
+/* max delay if bearer has links */
+#define TIPC_LINK_REQ_SLOW msecs_to_jiffies(60000)
+/* indicates no timer in use */
+#define TIPC_LINK_REQ_INACTIVE 0xffffffff
/**
* struct tipc_link_req - information about an ongoing link setup request
* @bearer_id: identity of bearer issuing requests
+ * @net: network namespace instance
* @dest: destination address for request messages
* @domain: network domain to which links can be established
* @num_nodes: number of nodes currently discovered (i.e. with an active link)
struct tipc_link_req {
u32 bearer_id;
struct tipc_media_addr dest;
+ struct net *net;
u32 domain;
int num_nodes;
spinlock_t lock;
struct sk_buff *buf;
struct timer_list timer;
- unsigned int timer_intv;
+ unsigned long timer_intv;
};
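Since timer_intv is now held in jiffies, the backoff arithmetic in disc_timeout() feeds mod_timer() directly, with no per-call unit conversion. A sketch of that rearm step (example_rearm is a hypothetical helper):

static void example_rearm(struct tipc_link_req *req, unsigned long max_delay)
{
	req->timer_intv *= 2;			/* exponential backoff */
	if (req->timer_intv > max_delay)
		req->timer_intv = max_delay;
	mod_timer(&req->timer, jiffies + req->timer_intv);
}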
/**
* tipc_disc_init_msg - initialize a link setup message
+ * @net: the applicable net namespace
* @type: message type (request or response)
* @b_ptr: ptr to bearer issuing message
*/
-static void tipc_disc_init_msg(struct sk_buff *buf, u32 type,
+static void tipc_disc_init_msg(struct net *net, struct sk_buff *buf, u32 type,
struct tipc_bearer *b_ptr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_msg *msg;
u32 dest_domain = b_ptr->domain;
msg = buf_msg(buf);
- tipc_msg_init(msg, LINK_CONFIG, type, INT_H_SIZE, dest_domain);
+ tipc_msg_init(net, msg, LINK_CONFIG, type, INT_H_SIZE, dest_domain);
msg_set_non_seq(msg, 1);
- msg_set_node_sig(msg, tipc_random);
+ msg_set_node_sig(msg, tn->random);
msg_set_dest_domain(msg, dest_domain);
- msg_set_bc_netid(msg, tipc_net_id);
+ msg_set_bc_netid(msg, tn->net_id);
b_ptr->media->addr2msg(msg_media_addr(msg), &b_ptr->addr);
}
/**
* tipc_disc_rcv - handle incoming discovery message (request or response)
+ * @net: the applicable net namespace
* @buf: buffer containing message
* @bearer: bearer that message arrived on
*/
-void tipc_disc_rcv(struct sk_buff *buf, struct tipc_bearer *bearer)
+void tipc_disc_rcv(struct net *net, struct sk_buff *buf,
+ struct tipc_bearer *bearer)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *node;
struct tipc_link *link;
struct tipc_media_addr maddr;
kfree_skb(buf);
/* Ensure message from node is valid and communication is permitted */
- if (net_id != tipc_net_id)
+ if (net_id != tn->net_id)
return;
if (maddr.broadcast)
return;
if (!tipc_addr_node_valid(onode))
return;
- if (in_own_node(onode)) {
+ if (in_own_node(net, onode)) {
if (memcmp(&maddr, &bearer->addr, sizeof(maddr)))
- disc_dupl_alert(bearer, tipc_own_addr, &maddr);
+ disc_dupl_alert(bearer, tn->own_addr, &maddr);
return;
}
- if (!tipc_in_scope(ddom, tipc_own_addr))
+ if (!tipc_in_scope(ddom, tn->own_addr))
return;
if (!tipc_in_scope(bearer->domain, onode))
return;
/* Locate, or if necessary, create, node: */
- node = tipc_node_find(onode);
+ node = tipc_node_find(net, onode);
if (!node)
- node = tipc_node_create(onode);
+ node = tipc_node_create(net, onode);
if (!node)
return;
if (respond && (mtyp == DSC_REQ_MSG)) {
rbuf = tipc_buf_acquire(INT_H_SIZE);
if (rbuf) {
- tipc_disc_init_msg(rbuf, DSC_RESP_MSG, bearer);
- tipc_bearer_send(bearer->identity, rbuf, &maddr);
+ tipc_disc_init_msg(net, rbuf, DSC_RESP_MSG, bearer);
+ tipc_bearer_send(net, bearer->identity, rbuf, &maddr);
kfree_skb(rbuf);
}
}
if ((req->timer_intv == TIPC_LINK_REQ_INACTIVE) ||
(req->timer_intv > TIPC_LINK_REQ_FAST)) {
req->timer_intv = TIPC_LINK_REQ_INIT;
- k_start_timer(&req->timer, req->timer_intv);
+ mod_timer(&req->timer, jiffies + req->timer_intv);
}
}
}
/**
* disc_timeout - send a periodic link setup request
- * @req: ptr to link request structure
+ * @data: ptr to link request structure
*
* Called whenever a link setup request timer associated with a bearer expires.
*/
-static void disc_timeout(struct tipc_link_req *req)
+static void disc_timeout(unsigned long data)
{
+ struct tipc_link_req *req = (struct tipc_link_req *)data;
int max_delay;
spin_lock_bh(&req->lock);
* hold at fast polling rate if don't have any associated nodes,
* otherwise hold at slow polling rate
*/
- tipc_bearer_send(req->bearer_id, req->buf, &req->dest);
+ tipc_bearer_send(req->net, req->bearer_id, req->buf, &req->dest);
req->timer_intv *= 2;
if (req->timer_intv > max_delay)
req->timer_intv = max_delay;
- k_start_timer(&req->timer, req->timer_intv);
+ mod_timer(&req->timer, jiffies + req->timer_intv);
exit:
spin_unlock_bh(&req->lock);
}
/**
* tipc_disc_create - create object to send periodic link setup requests
+ * @net: the applicable net namespace
* @b_ptr: ptr to bearer issuing requests
* @dest: destination address for request messages
* @dest_domain: network domain to which links can be established
*
* Returns 0 if successful, otherwise -errno.
*/
-int tipc_disc_create(struct tipc_bearer *b_ptr, struct tipc_media_addr *dest)
+int tipc_disc_create(struct net *net, struct tipc_bearer *b_ptr,
+ struct tipc_media_addr *dest)
{
struct tipc_link_req *req;
return -ENOMEM;
}
- tipc_disc_init_msg(req->buf, DSC_REQ_MSG, b_ptr);
+ tipc_disc_init_msg(net, req->buf, DSC_REQ_MSG, b_ptr);
memcpy(&req->dest, dest, sizeof(*dest));
+ req->net = net;
req->bearer_id = b_ptr->identity;
req->domain = b_ptr->domain;
req->num_nodes = 0;
req->timer_intv = TIPC_LINK_REQ_INIT;
spin_lock_init(&req->lock);
- k_init_timer(&req->timer, (Handler)disc_timeout, (unsigned long)req);
- k_start_timer(&req->timer, req->timer_intv);
+ setup_timer(&req->timer, disc_timeout, (unsigned long)req);
+ mod_timer(&req->timer, jiffies + req->timer_intv);
b_ptr->link_req = req;
- tipc_bearer_send(req->bearer_id, req->buf, &req->dest);
+ tipc_bearer_send(net, req->bearer_id, req->buf, &req->dest);
return 0;
}
*/
void tipc_disc_delete(struct tipc_link_req *req)
{
- k_cancel_timer(&req->timer);
- k_term_timer(&req->timer);
+ del_timer_sync(&req->timer);
kfree_skb(req->buf);
kfree(req);
}
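The removed k_* wrappers map one-to-one onto the standard timer API used above; a sketch of the full lifecycle (example_timer_lifecycle is hypothetical; timer API as of this series, before timer_setup() existed):

static void example_timer_lifecycle(struct tipc_link_req *req)
{
	setup_timer(&req->timer, disc_timeout, (unsigned long)req);
	mod_timer(&req->timer, jiffies + req->timer_intv);
	del_timer_sync(&req->timer);	/* safe even if not pending */
}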
/**
* tipc_disc_reset - reset object to send periodic link setup requests
+ * @net: the applicable net namespace
* @b_ptr: ptr to bearer issuing requests
* @dest_domain: network domain to which links can be established
*/
-void tipc_disc_reset(struct tipc_bearer *b_ptr)
+void tipc_disc_reset(struct net *net, struct tipc_bearer *b_ptr)
{
struct tipc_link_req *req = b_ptr->link_req;
spin_lock_bh(&req->lock);
- tipc_disc_init_msg(req->buf, DSC_REQ_MSG, b_ptr);
+ tipc_disc_init_msg(net, req->buf, DSC_REQ_MSG, b_ptr);
+ req->net = net;
req->bearer_id = b_ptr->identity;
req->domain = b_ptr->domain;
req->num_nodes = 0;
req->timer_intv = TIPC_LINK_REQ_INIT;
- k_start_timer(&req->timer, req->timer_intv);
- tipc_bearer_send(req->bearer_id, req->buf, &req->dest);
+ mod_timer(&req->timer, jiffies + req->timer_intv);
+ tipc_bearer_send(net, req->bearer_id, req->buf, &req->dest);
spin_unlock_bh(&req->lock);
}
struct tipc_link_req;
-int tipc_disc_create(struct tipc_bearer *b_ptr, struct tipc_media_addr *dest);
+int tipc_disc_create(struct net *net, struct tipc_bearer *b_ptr,
+ struct tipc_media_addr *dest);
void tipc_disc_delete(struct tipc_link_req *req);
-void tipc_disc_reset(struct tipc_bearer *b_ptr);
+void tipc_disc_reset(struct net *net, struct tipc_bearer *b_ptr);
void tipc_disc_add_dest(struct tipc_link_req *req);
void tipc_disc_remove_dest(struct tipc_link_req *req);
-void tipc_disc_rcv(struct sk_buff *buf, struct tipc_bearer *b_ptr);
+void tipc_disc_rcv(struct net *net, struct sk_buff *buf,
+ struct tipc_bearer *b_ptr);
#endif
*/
#define START_CHANGEOVER 100000u
-static void link_handle_out_of_seq_msg(struct tipc_link *l_ptr,
+static void link_handle_out_of_seq_msg(struct net *net,
+ struct tipc_link *l_ptr,
struct sk_buff *buf);
-static void tipc_link_proto_rcv(struct tipc_link *l_ptr, struct sk_buff *buf);
-static int tipc_link_tunnel_rcv(struct tipc_node *n_ptr,
+static void tipc_link_proto_rcv(struct net *net, struct tipc_link *l_ptr,
+ struct sk_buff *buf);
+static int tipc_link_tunnel_rcv(struct net *net, struct tipc_node *n_ptr,
struct sk_buff **buf);
-static void link_set_supervision_props(struct tipc_link *l_ptr, u32 tolerance);
+static void link_set_supervision_props(struct tipc_link *l_ptr, u32 tol);
static void link_state_event(struct tipc_link *l_ptr, u32 event);
static void link_reset_statistics(struct tipc_link *l_ptr);
static void link_print(struct tipc_link *l_ptr, const char *str);
static void tipc_link_sync_xmit(struct tipc_link *l);
static void tipc_link_sync_rcv(struct tipc_node *n, struct sk_buff *buf);
-static int tipc_link_input(struct tipc_link *l, struct sk_buff *buf);
-static int tipc_link_prepare_input(struct tipc_link *l, struct sk_buff **buf);
+static int tipc_link_input(struct net *net, struct tipc_link *l,
+ struct sk_buff *buf);
+static int tipc_link_prepare_input(struct net *net, struct tipc_link *l,
+ struct sk_buff **buf);
/*
* Simple link routines
static void link_init_max_pkt(struct tipc_link *l_ptr)
{
+ struct tipc_node *node = l_ptr->owner;
+ struct tipc_net *tn = net_generic(node->net, tipc_net_id);
struct tipc_bearer *b_ptr;
u32 max_pkt;
rcu_read_lock();
- b_ptr = rcu_dereference_rtnl(bearer_list[l_ptr->bearer_id]);
+ b_ptr = rcu_dereference_rtnl(tn->bearer_list[l_ptr->bearer_id]);
if (!b_ptr) {
rcu_read_unlock();
return;
* link_timeout - handle expiration of link timer
* @l_ptr: pointer to link
*/
-static void link_timeout(struct tipc_link *l_ptr)
+static void link_timeout(unsigned long data)
{
+ struct tipc_link *l_ptr = (struct tipc_link *)data;
struct sk_buff *skb;
tipc_node_lock(l_ptr->owner);
tipc_node_unlock(l_ptr->owner);
}
-static void link_set_timer(struct tipc_link *l_ptr, u32 time)
+static void link_set_timer(struct tipc_link *link, unsigned long time)
{
- k_start_timer(&l_ptr->timer, time);
+ mod_timer(&link->timer, jiffies + time);
}
/**
struct tipc_bearer *b_ptr,
const struct tipc_media_addr *media_addr)
{
+ struct tipc_net *tn = net_generic(n_ptr->net, tipc_net_id);
struct tipc_link *l_ptr;
struct tipc_msg *msg;
char *if_name;
l_ptr->addr = peer;
if_name = strchr(b_ptr->name, ':') + 1;
sprintf(l_ptr->name, "%u.%u.%u:%s-%u.%u.%u:unknown",
- tipc_zone(tipc_own_addr), tipc_cluster(tipc_own_addr),
- tipc_node(tipc_own_addr),
+ tipc_zone(tn->own_addr), tipc_cluster(tn->own_addr),
+ tipc_node(tn->own_addr),
if_name,
tipc_zone(peer), tipc_cluster(peer), tipc_node(peer));
/* note: peer i/f name is updated by reset/activate message */
l_ptr->pmsg = (struct tipc_msg *)&l_ptr->proto_msg;
msg = l_ptr->pmsg;
- tipc_msg_init(msg, LINK_PROTOCOL, RESET_MSG, INT_H_SIZE, l_ptr->addr);
+ tipc_msg_init(n_ptr->net, msg, LINK_PROTOCOL, RESET_MSG, INT_H_SIZE,
+ l_ptr->addr);
msg_set_size(msg, sizeof(l_ptr->proto_msg));
- msg_set_session(msg, (tipc_random & 0xffff));
+ msg_set_session(msg, (tn->random & 0xffff));
msg_set_bearer_id(msg, b_ptr->identity);
strcpy((char *)msg_data(msg), if_name);
tipc_node_attach_link(n_ptr, l_ptr);
- k_init_timer(&l_ptr->timer, (Handler)link_timeout,
- (unsigned long)l_ptr);
+ setup_timer(&l_ptr->timer, link_timeout, (unsigned long)l_ptr);
link_state_event(l_ptr, STARTING_EVT);
return l_ptr;
}
-void tipc_link_delete_list(unsigned int bearer_id, bool shutting_down)
+void tipc_link_delete_list(struct net *net, unsigned int bearer_id,
+ bool shutting_down)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
rcu_read_lock();
- list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
+ list_for_each_entry_rcu(n_ptr, &tn->node_list, list) {
tipc_node_lock(n_ptr);
l_ptr = n_ptr->links[bearer_id];
if (l_ptr) {
static bool link_schedule_user(struct tipc_link *link, u32 oport,
uint chain_sz, uint imp)
{
+ struct net *net = link->owner->net;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *buf;
- buf = tipc_msg_create(SOCK_WAKEUP, 0, INT_H_SIZE, 0, tipc_own_addr,
- tipc_own_addr, oport, 0, 0);
+ buf = tipc_msg_create(net, SOCK_WAKEUP, 0, INT_H_SIZE, 0, tn->own_addr,
+ tn->own_addr, oport, 0, 0);
if (!buf)
return false;
TIPC_SKB_CB(buf)->chain_sz = chain_sz;
return;
tipc_node_link_down(l_ptr->owner, l_ptr);
- tipc_bearer_remove_dest(l_ptr->bearer_id, l_ptr->addr);
+ tipc_bearer_remove_dest(owner->net, l_ptr->bearer_id, l_ptr->addr);
if (was_active_link && tipc_node_active_links(l_ptr->owner)) {
l_ptr->reset_checkpoint = checkpoint;
link_reset_statistics(l_ptr);
}
-void tipc_link_reset_list(unsigned int bearer_id)
+void tipc_link_reset_list(struct net *net, unsigned int bearer_id)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
rcu_read_lock();
- list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
+ list_for_each_entry_rcu(n_ptr, &tn->node_list, list) {
tipc_node_lock(n_ptr);
l_ptr = n_ptr->links[bearer_id];
if (l_ptr)
rcu_read_unlock();
}
-static void link_activate(struct tipc_link *l_ptr)
+static void link_activate(struct tipc_link *link)
{
- l_ptr->next_in_no = l_ptr->stats.recv_info = 1;
- tipc_node_link_up(l_ptr->owner, l_ptr);
- tipc_bearer_add_dest(l_ptr->bearer_id, l_ptr->addr);
+ struct tipc_node *node = link->owner;
+
+ link->next_in_no = 1;
+ link->stats.recv_info = 1;
+ tipc_node_link_up(node, link);
+ tipc_bearer_add_dest(node->net, link->bearer_id, link->addr);
}
/**
static void link_state_event(struct tipc_link *l_ptr, unsigned int event)
{
struct tipc_link *other;
- u32 cont_intv = l_ptr->continuity_interval;
+ unsigned long cont_intv = l_ptr->cont_intv;
if (l_ptr->flags & LINK_STOPPED)
return;
* Only the socket functions tipc_send_stream() and tipc_send_packet() need
* to act on the return value, since they may need to do more send attempts.
*/
-int __tipc_link_xmit(struct tipc_link *link, struct sk_buff_head *list)
+int __tipc_link_xmit(struct net *net, struct tipc_link *link,
+ struct sk_buff_head *list)
{
struct tipc_msg *msg = buf_msg(skb_peek(list));
uint psz = msg_size(msg);
if (skb_queue_len(outqueue) < sndlim) {
__skb_queue_tail(outqueue, skb);
- tipc_bearer_send(link->bearer_id, skb, addr);
+ tipc_bearer_send(net, link->bearer_id,
+ skb, addr);
link->next_out = NULL;
link->unacked_window = 0;
} else if (tipc_msg_bundle(outqueue, skb, mtu)) {
link->stats.sent_bundled++;
continue;
- } else if (tipc_msg_make_bundle(outqueue, skb, mtu,
+ } else if (tipc_msg_make_bundle(net, outqueue, skb, mtu,
link->addr)) {
link->stats.sent_bundled++;
link->stats.sent_bundles++;
struct sk_buff_head head;
skb2list(skb, &head);
- return __tipc_link_xmit(link, &head);
+ return __tipc_link_xmit(link->owner->net, link, &head);
}
-int tipc_link_xmit_skb(struct sk_buff *skb, u32 dnode, u32 selector)
+int tipc_link_xmit_skb(struct net *net, struct sk_buff *skb, u32 dnode,
+ u32 selector)
{
struct sk_buff_head head;
skb2list(skb, &head);
- return tipc_link_xmit(&head, dnode, selector);
+ return tipc_link_xmit(net, &head, dnode, selector);
}
/**
* tipc_link_xmit() is the general link level function for message sending
+ * @net: the applicable net namespace
* @list: chain of buffers containing message
* @dsz: amount of user data to be sent
* @dnode: address of destination node
* Consumes the buffer chain, except when returning -ELINKCONG
* Returns 0 if success, otherwise errno: -ELINKCONG,-EHOSTUNREACH,-EMSGSIZE
*/
-int tipc_link_xmit(struct sk_buff_head *list, u32 dnode, u32 selector)
+int tipc_link_xmit(struct net *net, struct sk_buff_head *list, u32 dnode,
+ u32 selector)
{
struct tipc_link *link = NULL;
struct tipc_node *node;
int rc = -EHOSTUNREACH;
- node = tipc_node_find(dnode);
+ node = tipc_node_find(net, dnode);
if (node) {
tipc_node_lock(node);
link = node->active_links[selector & 1];
if (link)
- rc = __tipc_link_xmit(link, list);
+ rc = __tipc_link_xmit(net, link, list);
tipc_node_unlock(node);
}
if (link)
return rc;
- if (likely(in_own_node(dnode))) {
+ if (likely(in_own_node(net, dnode))) {
/* As a node local message chain never contains more than one
* buffer, we just need to dequeue one SKB buffer from the
* head list.
*/
- return tipc_sk_rcv(__skb_dequeue(list));
+ return tipc_sk_rcv(net, __skb_dequeue(list));
}
__skb_queue_purge(list);
return;
msg = buf_msg(skb);
- tipc_msg_init(msg, BCAST_PROTOCOL, STATE_MSG, INT_H_SIZE, link->addr);
+ tipc_msg_init(link->owner->net, msg, BCAST_PROTOCOL, STATE_MSG,
+ INT_H_SIZE, link->addr);
msg_set_last_bcast(msg, link->owner->bclink.acked);
__tipc_link_xmit_skb(link, skb);
}
msg_set_bcast_ack(msg, l_ptr->owner->bclink.last_in);
if (msg_user(msg) == MSG_BUNDLER)
TIPC_SKB_CB(skb)->bundling = false;
- tipc_bearer_send(l_ptr->bearer_id, skb,
+ tipc_bearer_send(l_ptr->owner->net,
+ l_ptr->bearer_id, skb,
&l_ptr->media_addr);
l_ptr->next_out = tipc_skb_queue_next(outqueue, skb);
} else {
struct sk_buff *buf)
{
struct tipc_msg *msg = buf_msg(buf);
+ struct net *net = l_ptr->owner->net;
pr_warn("Retransmission failure on link <%s>\n", l_ptr->name);
pr_cont("Outstanding acks: %lu\n",
(unsigned long) TIPC_SKB_CB(buf)->handle);
- n_ptr = tipc_bclink_retransmit_to();
+ n_ptr = tipc_bclink_retransmit_to(net);
tipc_node_lock(n_ptr);
tipc_addr_string_fill(addr_string, n_ptr->addr);
tipc_node_unlock(n_ptr);
- tipc_bclink_set_flags(TIPC_BCLINK_RESET);
+ tipc_bclink_set_flags(net, TIPC_BCLINK_RESET);
l_ptr->stale_count = 0;
}
}
msg = buf_msg(skb);
msg_set_ack(msg, mod(l_ptr->next_in_no - 1));
msg_set_bcast_ack(msg, l_ptr->owner->bclink.last_in);
- tipc_bearer_send(l_ptr->bearer_id, skb, &l_ptr->media_addr);
+ tipc_bearer_send(l_ptr->owner->net, l_ptr->bearer_id, skb,
+ &l_ptr->media_addr);
retransmits--;
l_ptr->stats.retransmitted++;
}
/**
* tipc_rcv - process TIPC packets/messages arriving from off-node
+ * @net: the applicable net namespace
* @skb: TIPC packet
* @b_ptr: pointer to bearer message arrived on
*
* Invoked with no locks held. Bearer pointer must point to a valid bearer
* structure (i.e. cannot be NULL), but bearer can be inactive.
*/
-void tipc_rcv(struct sk_buff *skb, struct tipc_bearer *b_ptr)
+void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b_ptr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff_head head;
struct tipc_node *n_ptr;
struct tipc_link *l_ptr;
if (unlikely(msg_non_seq(msg))) {
if (msg_user(msg) == LINK_CONFIG)
- tipc_disc_rcv(skb, b_ptr);
+ tipc_disc_rcv(net, skb, b_ptr);
else
- tipc_bclink_rcv(skb);
+ tipc_bclink_rcv(net, skb);
continue;
}
/* Discard unicast link messages destined for another node */
if (unlikely(!msg_short(msg) &&
- (msg_destnode(msg) != tipc_own_addr)))
+ (msg_destnode(msg) != tn->own_addr)))
goto discard;
/* Locate neighboring node that sent message */
- n_ptr = tipc_node_find(msg_prevnode(msg));
+ n_ptr = tipc_node_find(net, msg_prevnode(msg));
if (unlikely(!n_ptr))
goto discard;
tipc_node_lock(n_ptr);
/* Process the incoming packet */
if (unlikely(!link_working_working(l_ptr))) {
if (msg_user(msg) == LINK_PROTOCOL) {
- tipc_link_proto_rcv(l_ptr, skb);
+ tipc_link_proto_rcv(net, l_ptr, skb);
link_retrieve_defq(l_ptr, &head);
tipc_node_unlock(n_ptr);
continue;
/* Link is now in state WORKING_WORKING */
if (unlikely(seq_no != mod(l_ptr->next_in_no))) {
- link_handle_out_of_seq_msg(l_ptr, skb);
+ link_handle_out_of_seq_msg(net, l_ptr, skb);
link_retrieve_defq(l_ptr, &head);
tipc_node_unlock(n_ptr);
continue;
tipc_link_proto_xmit(l_ptr, STATE_MSG, 0, 0, 0, 0, 0);
}
- if (tipc_link_prepare_input(l_ptr, &skb)) {
+ if (tipc_link_prepare_input(net, l_ptr, &skb)) {
tipc_node_unlock(n_ptr);
continue;
}
tipc_node_unlock(n_ptr);
- if (tipc_link_input(l_ptr, skb) != 0)
+ if (tipc_link_input(net, l_ptr, skb) != 0)
goto discard;
continue;
unlock_discard:
*
* Node lock must be held
*/
-static int tipc_link_prepare_input(struct tipc_link *l, struct sk_buff **buf)
+static int tipc_link_prepare_input(struct net *net, struct tipc_link *l,
+ struct sk_buff **buf)
{
struct tipc_node *n;
struct tipc_msg *msg;
msg = buf_msg(*buf);
switch (msg_user(msg)) {
case CHANGEOVER_PROTOCOL:
- if (tipc_link_tunnel_rcv(n, buf))
+ if (tipc_link_tunnel_rcv(net, n, buf))
res = 0;
break;
case MSG_FRAGMENTER:
/**
 * tipc_link_input - Deliver message to higher layers
*/
-static int tipc_link_input(struct tipc_link *l, struct sk_buff *buf)
+static int tipc_link_input(struct net *net, struct tipc_link *l,
+ struct sk_buff *buf)
{
struct tipc_msg *msg = buf_msg(buf);
int res = 0;
case TIPC_HIGH_IMPORTANCE:
case TIPC_CRITICAL_IMPORTANCE:
case CONN_MANAGER:
- tipc_sk_rcv(buf);
+ tipc_sk_rcv(net, buf);
break;
case NAME_DISTRIBUTOR:
- tipc_named_rcv(buf);
+ tipc_named_rcv(net, buf);
break;
case MSG_BUNDLER:
- tipc_link_bundle_rcv(buf);
+ tipc_link_bundle_rcv(net, buf);
break;
default:
res = -EINVAL;
/*
* link_handle_out_of_seq_msg - handle arrival of out-of-sequence packet
*/
-static void link_handle_out_of_seq_msg(struct tipc_link *l_ptr,
+static void link_handle_out_of_seq_msg(struct net *net,
+ struct tipc_link *l_ptr,
struct sk_buff *buf)
{
u32 seq_no = buf_seqno(buf);
if (likely(msg_user(buf_msg(buf)) == LINK_PROTOCOL)) {
- tipc_link_proto_rcv(l_ptr, buf);
+ tipc_link_proto_rcv(net, l_ptr, buf);
return;
}
msg_set_type(msg, msg_typ);
msg_set_net_plane(msg, l_ptr->net_plane);
msg_set_bcast_ack(msg, l_ptr->owner->bclink.last_in);
- msg_set_last_bcast(msg, tipc_bclink_get_last_sent());
+ msg_set_last_bcast(msg, tipc_bclink_get_last_sent(l_ptr->owner->net));
if (msg_typ == STATE_MSG) {
u32 next_sent = mod(l_ptr->next_out_no);
skb_copy_to_linear_data(buf, msg, sizeof(l_ptr->proto_msg));
buf->priority = TC_PRIO_CONTROL;
- tipc_bearer_send(l_ptr->bearer_id, buf, &l_ptr->media_addr);
+ tipc_bearer_send(l_ptr->owner->net, l_ptr->bearer_id, buf,
+ &l_ptr->media_addr);
l_ptr->unacked_window = 0;
kfree_skb(buf);
}
* Note that network plane id propagates through the network, and may
* change at any time. The node with lowest address rules
*/
-static void tipc_link_proto_rcv(struct tipc_link *l_ptr, struct sk_buff *buf)
+static void tipc_link_proto_rcv(struct net *net, struct tipc_link *l_ptr,
+ struct sk_buff *buf)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 rec_gap = 0;
u32 max_pkt_info;
u32 max_pkt_ack;
goto exit;
if (l_ptr->net_plane != msg_net_plane(msg))
- if (tipc_own_addr > msg_prevnode(msg))
+ if (tn->own_addr > msg_prevnode(msg))
l_ptr->net_plane = msg_net_plane(msg);
switch (msg_type(msg)) {
/* Protocol message before retransmits, reduce loss risk */
if (l_ptr->owner->bclink.recv_permitted)
- tipc_bclink_update_link_state(l_ptr->owner,
+ tipc_bclink_update_link_state(net, l_ptr->owner,
msg_last_bcast(msg));
if (rec_gap || (msg_probe(msg))) {
if (!tunnel)
return;
- tipc_msg_init(&tunnel_hdr, CHANGEOVER_PROTOCOL,
- ORIGINAL_MSG, INT_H_SIZE, l_ptr->addr);
+ tipc_msg_init(l_ptr->owner->net, &tunnel_hdr, CHANGEOVER_PROTOCOL,
+ ORIGINAL_MSG, INT_H_SIZE, l_ptr->addr);
msg_set_bearer_id(&tunnel_hdr, l_ptr->peer_bearer_id);
msg_set_msgcnt(&tunnel_hdr, msgcount);
struct sk_buff *skb;
struct tipc_msg tunnel_hdr;
- tipc_msg_init(&tunnel_hdr, CHANGEOVER_PROTOCOL,
- DUPLICATE_MSG, INT_H_SIZE, l_ptr->addr);
+ tipc_msg_init(l_ptr->owner->net, &tunnel_hdr, CHANGEOVER_PROTOCOL,
+ DUPLICATE_MSG, INT_H_SIZE, l_ptr->addr);
msg_set_msgcnt(&tunnel_hdr, skb_queue_len(&l_ptr->outqueue));
msg_set_bearer_id(&tunnel_hdr, l_ptr->peer_bearer_id);
skb_queue_walk(&l_ptr->outqueue, skb) {
/* tipc_link_dup_rcv(): Receive a tunnelled DUPLICATE_MSG packet.
* Owner node is locked.
*/
-static void tipc_link_dup_rcv(struct tipc_link *l_ptr,
+static void tipc_link_dup_rcv(struct net *net, struct tipc_link *l_ptr,
struct sk_buff *t_buf)
{
struct sk_buff *buf;
}
/* Add buffer to deferred queue, if applicable: */
- link_handle_out_of_seq_msg(l_ptr, buf);
+ link_handle_out_of_seq_msg(net, l_ptr, buf);
}
/* tipc_link_failover_rcv(): Receive a tunnelled ORIGINAL_MSG packet
* returned to the active link for delivery upwards.
* Owner node is locked.
*/
-static int tipc_link_tunnel_rcv(struct tipc_node *n_ptr,
+static int tipc_link_tunnel_rcv(struct net *net, struct tipc_node *n_ptr,
struct sk_buff **buf)
{
struct sk_buff *t_buf = *buf;
goto exit;
if (msg_type(t_msg) == DUPLICATE_MSG)
- tipc_link_dup_rcv(l_ptr, t_buf);
+ tipc_link_dup_rcv(net, l_ptr, t_buf);
else if (msg_type(t_msg) == ORIGINAL_MSG)
*buf = tipc_link_failover_rcv(l_ptr, t_buf);
else
/*
* Bundler functionality:
*/
-void tipc_link_bundle_rcv(struct sk_buff *buf)
+void tipc_link_bundle_rcv(struct net *net, struct sk_buff *buf)
{
u32 msgcount = msg_msgcnt(buf_msg(buf));
u32 pos = INT_H_SIZE;
pos += align(msg_size(omsg));
if (msg_isdata(omsg)) {
if (unlikely(msg_type(omsg) == TIPC_MCAST_MSG))
- tipc_sk_mcast_rcv(obuf);
+ tipc_sk_mcast_rcv(net, obuf);
else
- tipc_sk_rcv(obuf);
+ tipc_sk_rcv(net, obuf);
} else if (msg_user(omsg) == CONN_MANAGER) {
- tipc_sk_rcv(obuf);
+ tipc_sk_rcv(net, obuf);
} else if (msg_user(omsg) == NAME_DISTRIBUTOR) {
- tipc_named_rcv(obuf);
+ tipc_named_rcv(net, obuf);
} else {
pr_warn("Illegal bundled msg: %u\n", msg_user(omsg));
kfree_skb(obuf);
kfree_skb(buf);
}
-static void link_set_supervision_props(struct tipc_link *l_ptr, u32 tolerance)
+static void link_set_supervision_props(struct tipc_link *l_ptr, u32 tol)
{
- if ((tolerance < TIPC_MIN_LINK_TOL) || (tolerance > TIPC_MAX_LINK_TOL))
+ unsigned long intv = ((tol / 4) > 500) ? 500 : tol / 4;
+
+ if ((tol < TIPC_MIN_LINK_TOL) || (tol > TIPC_MAX_LINK_TOL))
return;
- l_ptr->tolerance = tolerance;
- l_ptr->continuity_interval =
- ((tolerance / 4) > 500) ? 500 : tolerance / 4;
- l_ptr->abort_limit = tolerance / (l_ptr->continuity_interval / 4);
+ l_ptr->tolerance = tol;
+ l_ptr->cont_intv = msecs_to_jiffies(intv);
+ l_ptr->abort_limit = tol / (jiffies_to_msecs(l_ptr->cont_intv) / 4);
}
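Worked example of the supervision arithmetic, assuming HZ = 1000 so the jiffies round-trip is exact: tol = 1500 ms gives intv = 1500 / 4 = 375 ms (below the 500 ms cap), so cont_intv = msecs_to_jiffies(375) and abort_limit = 1500 / (375 / 4) = 1500 / 93 = 16 missed probes before the link is declared failed.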
void tipc_link_set_queue_limits(struct tipc_link *l_ptr, u32 window)
}
/* tipc_link_find_owner - locate owner node of link by link's name
+ * @net: the applicable net namespace
* @name: pointer to link name string
* @bearer_id: pointer to index in 'node->links' array where the link was found.
*
 * Returns pointer to node owning the link, or NULL if no matching link is found.
*/
-static struct tipc_node *tipc_link_find_owner(const char *link_name,
+static struct tipc_node *tipc_link_find_owner(struct net *net,
+ const char *link_name,
unsigned int *bearer_id)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_link *l_ptr;
struct tipc_node *n_ptr;
struct tipc_node *found_node = NULL;
*bearer_id = 0;
rcu_read_lock();
- list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
+ list_for_each_entry_rcu(n_ptr, &tn->node_list, list) {
tipc_node_lock(n_ptr);
for (i = 0; i < MAX_BEARERS; i++) {
l_ptr = n_ptr->links[i];
/**
* link_cmd_set_value - change priority/tolerance/window for link/bearer/media
+ * @net: the applicable net namespace
* @name: ptr to link, bearer, or media name
* @new_value: new value of link, bearer, or media setting
* @cmd: which link, bearer, or media attribute to set (TIPC_CMD_SET_LINK_*)
*
* Returns 0 if value updated and negative value on error.
*/
-static int link_cmd_set_value(const char *name, u32 new_value, u16 cmd)
+static int link_cmd_set_value(struct net *net, const char *name, u32 new_value,
+ u16 cmd)
{
struct tipc_node *node;
struct tipc_link *l_ptr;
int bearer_id;
int res = 0;
- node = tipc_link_find_owner(name, &bearer_id);
+ node = tipc_link_find_owner(net, name, &bearer_id);
if (node) {
tipc_node_lock(node);
l_ptr = node->links[bearer_id];
return res;
}
- b_ptr = tipc_bearer_find(name);
+ b_ptr = tipc_bearer_find(net, name);
if (b_ptr) {
switch (cmd) {
case TIPC_CMD_SET_LINK_TOL:
return res;
}
-struct sk_buff *tipc_link_cmd_config(const void *req_tlv_area, int req_tlv_space,
- u16 cmd)
+struct sk_buff *tipc_link_cmd_config(struct net *net, const void *req_tlv_area,
+ int req_tlv_space, u16 cmd)
{
struct tipc_link_config *args;
u32 new_value;
if (!strcmp(args->name, tipc_bclink_name)) {
if ((cmd == TIPC_CMD_SET_LINK_WINDOW) &&
- (tipc_bclink_set_queue_limits(new_value) == 0))
+ (tipc_bclink_set_queue_limits(net, new_value) == 0))
return tipc_cfg_reply_none();
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (cannot change setting on broadcast link)");
}
- res = link_cmd_set_value(args->name, new_value, cmd);
+ res = link_cmd_set_value(net, args->name, new_value, cmd);
if (res)
return tipc_cfg_reply_error_string("cannot change link setting");
l_ptr->stats.recv_info = l_ptr->next_in_no;
}
-struct sk_buff *tipc_link_cmd_reset_stats(const void *req_tlv_area, int req_tlv_space)
+struct sk_buff *tipc_link_cmd_reset_stats(struct net *net,
+ const void *req_tlv_area,
+ int req_tlv_space)
{
char *link_name;
struct tipc_link *l_ptr;
link_name = (char *)TLV_DATA(req_tlv_area);
if (!strcmp(link_name, tipc_bclink_name)) {
- if (tipc_bclink_reset_stats())
+ if (tipc_bclink_reset_stats(net))
return tipc_cfg_reply_error_string("link not found");
return tipc_cfg_reply_none();
}
- node = tipc_link_find_owner(link_name, &bearer_id);
+ node = tipc_link_find_owner(net, link_name, &bearer_id);
if (!node)
return tipc_cfg_reply_error_string("link not found");
/**
* tipc_link_stats - print link statistics
+ * @net: the applicable net namespace
* @name: link name
* @buf: print buffer area
* @buf_size: size of print buffer area
*
 * Returns length of print buffer data string (or 0 on error)
*/
-static int tipc_link_stats(const char *name, char *buf, const u32 buf_size)
+static int tipc_link_stats(struct net *net, const char *name, char *buf,
+ const u32 buf_size)
{
struct tipc_link *l;
struct tipc_stats *s;
int ret;
if (!strcmp(name, tipc_bclink_name))
- return tipc_bclink_stats(buf, buf_size);
+ return tipc_bclink_stats(net, buf, buf_size);
- node = tipc_link_find_owner(name, &bearer_id);
+ node = tipc_link_find_owner(net, name, &bearer_id);
if (!node)
return 0;
return ret;
}
-struct sk_buff *tipc_link_cmd_show_stats(const void *req_tlv_area, int req_tlv_space)
+struct sk_buff *tipc_link_cmd_show_stats(struct net *net,
+ const void *req_tlv_area,
+ int req_tlv_space)
{
struct sk_buff *buf;
struct tlv_desc *rep_tlv;
rep_tlv = (struct tlv_desc *)buf->data;
pb = TLV_DATA(rep_tlv);
pb_len = ULTRA_STRING_MAX_LEN;
- str_len = tipc_link_stats((char *)TLV_DATA(req_tlv_area),
+ str_len = tipc_link_stats(net, (char *)TLV_DATA(req_tlv_area),
pb, pb_len);
if (!str_len) {
kfree_skb(buf);
return buf;
}
-/**
- * tipc_link_get_max_pkt - get maximum packet size to use when sending to destination
- * @dest: network address of destination node
- * @selector: used to select from set of active links
- *
- * If no active link can be found, uses default maximum packet size.
- */
-u32 tipc_link_get_max_pkt(u32 dest, u32 selector)
-{
- struct tipc_node *n_ptr;
- struct tipc_link *l_ptr;
- u32 res = MAX_PKT_DEFAULT;
-
- if (dest == tipc_own_addr)
- return MAX_MSG_SIZE;
-
- n_ptr = tipc_node_find(dest);
- if (n_ptr) {
- tipc_node_lock(n_ptr);
- l_ptr = n_ptr->active_links[selector & 1];
- if (l_ptr)
- res = l_ptr->max_pkt;
- tipc_node_unlock(n_ptr);
- }
- return res;
-}
-
static void link_print(struct tipc_link *l_ptr, const char *str)
{
+ struct tipc_net *tn = net_generic(l_ptr->owner->net, tipc_net_id);
struct tipc_bearer *b_ptr;
rcu_read_lock();
- b_ptr = rcu_dereference_rtnl(bearer_list[l_ptr->bearer_id]);
+ b_ptr = rcu_dereference_rtnl(tn->bearer_list[l_ptr->bearer_id]);
if (b_ptr)
pr_info("%s Link %x<%s>:", str, l_ptr->addr, b_ptr->name);
rcu_read_unlock();
struct tipc_link *link;
struct tipc_node *node;
struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];
+ struct net *net = genl_info_net(info);
if (!info->attrs[TIPC_NLA_LINK])
return -EINVAL;
name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
- node = tipc_link_find_owner(name, &bearer_id);
+ node = tipc_link_find_owner(net, name, &bearer_id);
if (!node)
return -EINVAL;
}
/* Caller should hold appropriate locks to protect the link */
-static int __tipc_nl_add_link(struct tipc_nl_msg *msg, struct tipc_link *link)
+static int __tipc_nl_add_link(struct net *net, struct tipc_nl_msg *msg,
+ struct tipc_link *link)
{
int err;
void *hdr;
struct nlattr *attrs;
struct nlattr *prop;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_v2_family,
NLM_F_MULTI, TIPC_NL_LINK_GET);
if (nla_put_string(msg->skb, TIPC_NLA_LINK_NAME, link->name))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_DEST,
- tipc_cluster_mask(tipc_own_addr)))
+ tipc_cluster_mask(tn->own_addr)))
goto attr_msg_full;
if (nla_put_u32(msg->skb, TIPC_NLA_LINK_MTU, link->max_pkt))
goto attr_msg_full;
}
/* Caller should hold node lock */
-static int __tipc_nl_add_node_links(struct tipc_nl_msg *msg,
- struct tipc_node *node,
- u32 *prev_link)
+static int __tipc_nl_add_node_links(struct net *net, struct tipc_nl_msg *msg,
+ struct tipc_node *node, u32 *prev_link)
{
u32 i;
int err;
if (!node->links[i])
continue;
- err = __tipc_nl_add_link(msg, node->links[i]);
+ err = __tipc_nl_add_link(net, msg, node->links[i]);
if (err)
return err;
}
int tipc_nl_link_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
+ struct net *net = sock_net(skb->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *node;
struct tipc_nl_msg msg;
u32 prev_node = cb->args[0];
rcu_read_lock();
if (prev_node) {
- node = tipc_node_find(prev_node);
+ node = tipc_node_find(net, prev_node);
if (!node) {
/* We never set seq or call nl_dump_check_consistent()
* this means that setting prev_seq here will cause the
goto out;
}
- list_for_each_entry_continue_rcu(node, &tipc_node_list, list) {
+ list_for_each_entry_continue_rcu(node, &tn->node_list,
+ list) {
tipc_node_lock(node);
- err = __tipc_nl_add_node_links(&msg, node, &prev_link);
+ err = __tipc_nl_add_node_links(net, &msg, node,
+ &prev_link);
tipc_node_unlock(node);
if (err)
goto out;
prev_node = node->addr;
}
} else {
- err = tipc_nl_add_bc_link(&msg);
+ err = tipc_nl_add_bc_link(net, &msg);
if (err)
goto out;
- list_for_each_entry_rcu(node, &tipc_node_list, list) {
+ list_for_each_entry_rcu(node, &tn->node_list, list) {
tipc_node_lock(node);
- err = __tipc_nl_add_node_links(&msg, node, &prev_link);
+ err = __tipc_nl_add_node_links(net, &msg, node,
+ &prev_link);
tipc_node_unlock(node);
if (err)
goto out;
int tipc_nl_link_get(struct sk_buff *skb, struct genl_info *info)
{
+ struct net *net = genl_info_net(info);
struct sk_buff *ans_skb;
struct tipc_nl_msg msg;
struct tipc_link *link;
return -EINVAL;
name = nla_data(info->attrs[TIPC_NLA_LINK_NAME]);
- node = tipc_link_find_owner(name, &bearer_id);
+ node = tipc_link_find_owner(net, name, &bearer_id);
if (!node)
return -EINVAL;
goto err_out;
}
- err = __tipc_nl_add_link(&msg, link);
+ err = __tipc_nl_add_link(net, &msg, link);
if (err)
goto err_out;
struct tipc_link *link;
struct tipc_node *node;
struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1];
+ struct net *net = genl_info_net(info);
if (!info->attrs[TIPC_NLA_LINK])
return -EINVAL;
link_name = nla_data(attrs[TIPC_NLA_LINK_NAME]);
if (strcmp(link_name, tipc_bclink_name) == 0) {
- err = tipc_bclink_reset_stats();
+ err = tipc_bclink_reset_stats(net);
if (err)
return err;
return 0;
}
- node = tipc_link_find_owner(link_name, &bearer_id);
+ node = tipc_link_find_owner(net, link_name, &bearer_id);
if (!node)
return -EINVAL;
#include "msg.h"
#include "node.h"
+/* TIPC-specific error codes */
+#define ELINKCONG EAGAIN /* link congestion <=> resource unavailable */
+
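Because ELINKCONG aliases EAGAIN, congestion propagates to callers as an ordinary "resource unavailable" errno; a hypothetical caller pattern (net, link and list come from the caller's context):

	int rc = __tipc_link_xmit(net, link, &list);

	if (rc == -ELINKCONG) {
		/* link congested: same errno as EAGAIN, so back off and retry */
	}
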
/* Out-of-range value for link sequence numbers
*/
#define INVALID_LINK_SEQ 0x10000
* @peer_bearer_id: bearer id used by link's peer endpoint
* @bearer_id: local bearer id used by link
* @tolerance: minimum link continuity loss needed to reset link [in ms]
- * @continuity_interval: link continuity testing interval [in ms]
+ * @cont_intv: link continuity testing interval
* @abort_limit: # of unacknowledged continuity probes needed to reset link
* @state: current state of link FSM
* @fsm_msg_cnt: # of protocol messages link FSM has sent in current state
u32 peer_bearer_id;
u32 bearer_id;
u32 tolerance;
- u32 continuity_interval;
+ unsigned long cont_intv;
u32 abort_limit;
int state;
u32 fsm_msg_cnt;
struct tipc_link *tipc_link_create(struct tipc_node *n_ptr,
struct tipc_bearer *b_ptr,
const struct tipc_media_addr *media_addr);
-void tipc_link_delete_list(unsigned int bearer_id, bool shutting_down);
+void tipc_link_delete_list(struct net *net, unsigned int bearer_id,
+ bool shutting_down);
void tipc_link_failover_send_queue(struct tipc_link *l_ptr);
void tipc_link_dup_queue_xmit(struct tipc_link *l_ptr, struct tipc_link *dest);
void tipc_link_reset_fragments(struct tipc_link *l_ptr);
int tipc_link_is_up(struct tipc_link *l_ptr);
int tipc_link_is_active(struct tipc_link *l_ptr);
void tipc_link_purge_queues(struct tipc_link *l_ptr);
-struct sk_buff *tipc_link_cmd_config(const void *req_tlv_area,
- int req_tlv_space,
- u16 cmd);
-struct sk_buff *tipc_link_cmd_show_stats(const void *req_tlv_area,
+struct sk_buff *tipc_link_cmd_config(struct net *net, const void *req_tlv_area,
+ int req_tlv_space, u16 cmd);
+struct sk_buff *tipc_link_cmd_show_stats(struct net *net,
+ const void *req_tlv_area,
int req_tlv_space);
-struct sk_buff *tipc_link_cmd_reset_stats(const void *req_tlv_area,
+struct sk_buff *tipc_link_cmd_reset_stats(struct net *net,
+ const void *req_tlv_area,
int req_tlv_space);
void tipc_link_reset_all(struct tipc_node *node);
void tipc_link_reset(struct tipc_link *l_ptr);
-void tipc_link_reset_list(unsigned int bearer_id);
-int tipc_link_xmit_skb(struct sk_buff *skb, u32 dest, u32 selector);
-int tipc_link_xmit(struct sk_buff_head *list, u32 dest, u32 selector);
-int __tipc_link_xmit(struct tipc_link *link, struct sk_buff_head *list);
-u32 tipc_link_get_max_pkt(u32 dest, u32 selector);
-void tipc_link_bundle_rcv(struct sk_buff *buf);
+void tipc_link_reset_list(struct net *net, unsigned int bearer_id);
+int tipc_link_xmit_skb(struct net *net, struct sk_buff *skb, u32 dest,
+ u32 selector);
+int tipc_link_xmit(struct net *net, struct sk_buff_head *list, u32 dest,
+ u32 selector);
+int __tipc_link_xmit(struct net *net, struct tipc_link *link,
+ struct sk_buff_head *list);
+void tipc_link_bundle_rcv(struct net *net, struct sk_buff *buf);
void tipc_link_proto_xmit(struct tipc_link *l_ptr, u32 msg_typ, int prob,
u32 gap, u32 tolerance, u32 priority, u32 acked_mtu);
void tipc_link_push_packets(struct tipc_link *l_ptr);
* POSSIBILITY OF SUCH DAMAGE.
*/
+#include <net/sock.h>
#include "core.h"
#include "msg.h"
#include "addr.h"
return (i + 3) & ~3u;
}
-void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type, u32 hsize,
- u32 destnode)
+/**
+ * tipc_buf_acquire - creates a TIPC message buffer
+ * @size: message size (including TIPC header)
+ *
+ * Returns a new buffer with data pointers set to the specified size.
+ *
+ * NOTE: Headroom is reserved to allow prepending of a data link header.
+ * There may also be unrequested tailroom present at the buffer's end.
+ */
+struct sk_buff *tipc_buf_acquire(u32 size)
{
+ struct sk_buff *skb;
+ unsigned int buf_size = (BUF_HEADROOM + size + 3) & ~3u;
+
+ skb = alloc_skb_fclone(buf_size, GFP_ATOMIC);
+ if (skb) {
+ skb_reserve(skb, BUF_HEADROOM);
+ skb_put(skb, size);
+ skb->next = NULL;
+ }
+ return skb;
+}
+
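A minimal usage sketch (hypothetical caller, msg_size and dev assumed) of why the headroom matters: the bearer can later prepend its link-level header without reallocating:

	struct sk_buff *skb = tipc_buf_acquire(msg_size);	/* data starts past BUF_HEADROOM */

	if (skb)
		skb_push(skb, dev->hard_header_len);		/* prepend device header in place */
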
+void tipc_msg_init(struct net *net, struct tipc_msg *m, u32 user, u32 type,
+ u32 hsize, u32 destnode)
+{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
memset(m, 0, hsize);
msg_set_version(m);
msg_set_user(m, user);
msg_set_hdr_sz(m, hsize);
msg_set_size(m, hsize);
- msg_set_prevnode(m, tipc_own_addr);
+ msg_set_prevnode(m, tn->own_addr);
msg_set_type(m, type);
if (hsize > SHORT_H_SIZE) {
- msg_set_orignode(m, tipc_own_addr);
+ msg_set_orignode(m, tn->own_addr);
msg_set_destnode(m, destnode);
}
}
-struct sk_buff *tipc_msg_create(uint user, uint type, uint hdr_sz,
- uint data_sz, u32 dnode, u32 onode,
- u32 dport, u32 oport, int errcode)
+struct sk_buff *tipc_msg_create(struct net *net, uint user, uint type,
+ uint hdr_sz, uint data_sz, u32 dnode,
+ u32 onode, u32 dport, u32 oport, int errcode)
{
struct tipc_msg *msg;
struct sk_buff *buf;
return NULL;
msg = buf_msg(buf);
- tipc_msg_init(msg, user, type, hdr_sz, dnode);
+ tipc_msg_init(net, msg, user, type, hdr_sz, dnode);
msg_set_size(msg, hdr_sz + data_sz);
msg_set_prevnode(msg, onode);
msg_set_origport(msg, oport);
*
* Returns message data size or errno: -ENOMEM, -EFAULT
*/
-int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m, int offset,
- int dsz, int pktmax, struct sk_buff_head *list)
+int tipc_msg_build(struct net *net, struct tipc_msg *mhdr, struct msghdr *m,
+ int offset, int dsz, int pktmax, struct sk_buff_head *list)
{
int mhsz = msg_hdr_sz(mhdr);
int msz = mhsz + dsz;
skb = tipc_buf_acquire(msz);
if (unlikely(!skb))
return -ENOMEM;
+ skb_orphan(skb);
__skb_queue_tail(list, skb);
skb_copy_to_linear_data(skb, mhdr, mhsz);
pktpos = skb->data + mhsz;
}
/* Prepare reusable fragment header */
- tipc_msg_init(&pkthdr, MSG_FRAGMENTER, FIRST_FRAGMENT,
- INT_H_SIZE, msg_destnode(mhdr));
+ tipc_msg_init(net, &pkthdr, MSG_FRAGMENTER, FIRST_FRAGMENT, INT_H_SIZE,
+ msg_destnode(mhdr));
msg_set_size(&pkthdr, pktmax);
msg_set_fragm_no(&pkthdr, pktno);
skb = tipc_buf_acquire(pktmax);
if (!skb)
return -ENOMEM;
+ skb_orphan(skb);
__skb_queue_tail(list, skb);
pktpos = skb->data;
skb_copy_to_linear_data(skb, &pkthdr, INT_H_SIZE);
rc = -ENOMEM;
goto error;
}
+ skb_orphan(skb);
__skb_queue_tail(list, skb);
msg_set_type(&pkthdr, FRAGMENT);
msg_set_size(&pkthdr, pktsz);
 * Replaces the buffer if successful
 * Returns true on success, otherwise false
*/
-bool tipc_msg_make_bundle(struct sk_buff_head *list, struct sk_buff *skb,
- u32 mtu, u32 dnode)
+bool tipc_msg_make_bundle(struct net *net, struct sk_buff_head *list,
+ struct sk_buff *skb, u32 mtu, u32 dnode)
{
struct sk_buff *bskb;
struct tipc_msg *bmsg;
skb_trim(bskb, INT_H_SIZE);
bmsg = buf_msg(bskb);
- tipc_msg_init(bmsg, MSG_BUNDLER, 0, INT_H_SIZE, dnode);
+ tipc_msg_init(net, bmsg, MSG_BUNDLER, 0, INT_H_SIZE, dnode);
msg_set_seqno(bmsg, msg_seqno(msg));
msg_set_ack(bmsg, msg_ack(msg));
msg_set_bcast_ack(bmsg, msg_bcast_ack(msg));
 * Consumes the buffer on failure
 * Returns true on success, otherwise false
*/
-bool tipc_msg_reverse(struct sk_buff *buf, u32 *dnode, int err)
+bool tipc_msg_reverse(struct net *net, struct sk_buff *buf, u32 *dnode,
+ int err)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_msg *msg = buf_msg(buf);
uint imp = msg_importance(msg);
struct tipc_msg ohdr;
msg_set_errcode(msg, err);
msg_set_origport(msg, msg_destport(&ohdr));
msg_set_destport(msg, msg_origport(&ohdr));
- msg_set_prevnode(msg, tipc_own_addr);
+ msg_set_prevnode(msg, tn->own_addr);
if (!msg_short(msg)) {
msg_set_orignode(msg, msg_destnode(&ohdr));
msg_set_destnode(msg, msg_orignode(&ohdr));
 * Returns 0 (TIPC_OK) if the message is OK and we can try again, or a
 * negative TIPC error code if the message is to be rejected
*/
-int tipc_msg_eval(struct sk_buff *buf, u32 *dnode)
+int tipc_msg_eval(struct net *net, struct sk_buff *buf, u32 *dnode)
{
struct tipc_msg *msg = buf_msg(buf);
u32 dport;
if (msg_reroute_cnt(msg) > 0)
return -TIPC_ERR_NO_NAME;
- *dnode = addr_domain(msg_lookup_scope(msg));
- dport = tipc_nametbl_translate(msg_nametype(msg),
+ *dnode = addr_domain(net, msg_lookup_scope(msg));
+ dport = tipc_nametbl_translate(net, msg_nametype(msg),
msg_nameinst(msg),
dnode);
if (!dport)
#ifndef _TIPC_MSG_H
#define _TIPC_MSG_H
-#include "bearer.h"
+#include <linux/tipc.h>
/*
* Constants and routines used to read and write TIPC payload message headers
#define TIPC_MEDIA_ADDR_OFFSET 5
+/**
+ * TIPC message buffer code
+ *
+ * TIPC message buffer headroom reserves space for the worst-case
+ * link-level device header (in case the message is sent off-node).
+ *
+ * Note: Headroom should be a multiple of 4 to ensure the TIPC header fields
+ * are word aligned for quicker access
+ */
+#define BUF_HEADROOM LL_MAX_HEADER
+
+struct tipc_skb_cb {
+ void *handle;
+ struct sk_buff *tail;
+ bool deferred;
+ bool wakeup_pending;
+ bool bundling;
+ u16 chain_sz;
+ u16 chain_imp;
+};
+
+#define TIPC_SKB_CB(__skb) ((struct tipc_skb_cb *)&((__skb)->cb[0]))
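
struct tipc_skb_cb is overlaid on the fixed 48-byte skb->cb[] scratch area, so it must never outgrow it; a build-time guard along these lines (not in the patch) would catch that:

	BUILD_BUG_ON(sizeof(struct tipc_skb_cb) > sizeof(((struct sk_buff *)0)->cb));
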
struct tipc_msg {
__be32 hdr[15];
};
+static inline struct tipc_msg *buf_msg(struct sk_buff *skb)
+{
+ return (struct tipc_msg *)skb->data;
+}
static inline u32 msg_word(struct tipc_msg *m, u32 pos)
{
return msg_origport(m);
}
-bool tipc_msg_reverse(struct sk_buff *buf, u32 *dnode, int err);
-
-int tipc_msg_eval(struct sk_buff *buf, u32 *dnode);
-
-void tipc_msg_init(struct tipc_msg *m, u32 user, u32 type, u32 hsize,
- u32 destnode);
-
-struct sk_buff *tipc_msg_create(uint user, uint type, uint hdr_sz,
- uint data_sz, u32 dnode, u32 onode,
- u32 dport, u32 oport, int errcode);
-
+struct sk_buff *tipc_buf_acquire(u32 size);
+bool tipc_msg_reverse(struct net *net, struct sk_buff *buf, u32 *dnode,
+ int err);
+int tipc_msg_eval(struct net *net, struct sk_buff *buf, u32 *dnode);
+void tipc_msg_init(struct net *net, struct tipc_msg *m, u32 user, u32 type,
+ u32 hsize, u32 destnode);
+struct sk_buff *tipc_msg_create(struct net *net, uint user, uint type,
+ uint hdr_sz, uint data_sz, u32 dnode,
+ u32 onode, u32 dport, u32 oport, int errcode);
int tipc_buf_append(struct sk_buff **headbuf, struct sk_buff **buf);
-
bool tipc_msg_bundle(struct sk_buff_head *list, struct sk_buff *skb, u32 mtu);
-
-bool tipc_msg_make_bundle(struct sk_buff_head *list, struct sk_buff *skb,
- u32 mtu, u32 dnode);
-
-int tipc_msg_build(struct tipc_msg *mhdr, struct msghdr *m, int offset,
- int dsz, int mtu, struct sk_buff_head *list);
-
+bool tipc_msg_make_bundle(struct net *net, struct sk_buff_head *list,
+ struct sk_buff *skb, u32 mtu, u32 dnode);
+int tipc_msg_build(struct net *net, struct tipc_msg *mhdr, struct msghdr *m,
+ int offset, int dsz, int mtu, struct sk_buff_head *list);
struct sk_buff *tipc_msg_reassemble(struct sk_buff_head *list);
#endif
/**
* named_prepare_buf - allocate & initialize a publication message
*/
-static struct sk_buff *named_prepare_buf(u32 type, u32 size, u32 dest)
+static struct sk_buff *named_prepare_buf(struct net *net, u32 type, u32 size,
+ u32 dest)
{
struct sk_buff *buf = tipc_buf_acquire(INT_H_SIZE + size);
struct tipc_msg *msg;
if (buf != NULL) {
msg = buf_msg(buf);
- tipc_msg_init(msg, NAME_DISTRIBUTOR, type, INT_H_SIZE, dest);
+ tipc_msg_init(net, msg, NAME_DISTRIBUTOR, type, INT_H_SIZE,
+ dest);
msg_set_size(msg, INT_H_SIZE + size);
}
return buf;
}
-void named_cluster_distribute(struct sk_buff *skb)
+void named_cluster_distribute(struct net *net, struct sk_buff *skb)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *oskb;
struct tipc_node *node;
u32 dnode;
rcu_read_lock();
- list_for_each_entry_rcu(node, &tipc_node_list, list) {
+ list_for_each_entry_rcu(node, &tn->node_list, list) {
dnode = node->addr;
- if (in_own_node(dnode))
+ if (in_own_node(net, dnode))
continue;
if (!tipc_node_active_links(node))
continue;
if (!oskb)
break;
msg_set_destnode(buf_msg(oskb), dnode);
- tipc_link_xmit_skb(oskb, dnode, dnode);
+ tipc_link_xmit_skb(net, oskb, dnode, dnode);
}
rcu_read_unlock();
/**
* tipc_named_publish - tell other nodes about a new publication by this node
*/
-struct sk_buff *tipc_named_publish(struct publication *publ)
+struct sk_buff *tipc_named_publish(struct net *net, struct publication *publ)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *buf;
struct distr_item *item;
list_add_tail_rcu(&publ->local_list,
- &tipc_nametbl->publ_list[publ->scope]);
+ &tn->nametbl->publ_list[publ->scope]);
if (publ->scope == TIPC_NODE_SCOPE)
return NULL;
- buf = named_prepare_buf(PUBLICATION, ITEM_SIZE, 0);
+ buf = named_prepare_buf(net, PUBLICATION, ITEM_SIZE, 0);
if (!buf) {
pr_warn("Publication distribution failure\n");
return NULL;
/**
* tipc_named_withdraw - tell other nodes about a withdrawn publication by this node
*/
-struct sk_buff *tipc_named_withdraw(struct publication *publ)
+struct sk_buff *tipc_named_withdraw(struct net *net, struct publication *publ)
{
struct sk_buff *buf;
struct distr_item *item;
if (publ->scope == TIPC_NODE_SCOPE)
return NULL;
- buf = named_prepare_buf(WITHDRAWAL, ITEM_SIZE, 0);
+ buf = named_prepare_buf(net, WITHDRAWAL, ITEM_SIZE, 0);
if (!buf) {
pr_warn("Withdrawal distribution failure\n");
return NULL;
* @dnode: node to be updated
* @pls: linked list of publication items to be packed into buffer chain
*/
-static void named_distribute(struct sk_buff_head *list, u32 dnode,
- struct list_head *pls)
+static void named_distribute(struct net *net, struct sk_buff_head *list,
+ u32 dnode, struct list_head *pls)
{
struct publication *publ;
struct sk_buff *skb = NULL;
struct distr_item *item = NULL;
- uint msg_dsz = (tipc_node_get_mtu(dnode, 0) / ITEM_SIZE) * ITEM_SIZE;
+ uint msg_dsz = (tipc_node_get_mtu(net, dnode, 0) / ITEM_SIZE) *
+ ITEM_SIZE;
uint msg_rem = msg_dsz;
list_for_each_entry(publ, pls, local_list) {
/* Prepare next buffer: */
if (!skb) {
- skb = named_prepare_buf(PUBLICATION, msg_rem, dnode);
+ skb = named_prepare_buf(net, PUBLICATION, msg_rem,
+ dnode);
if (!skb) {
pr_warn("Bulk publication failure\n");
return;
/**
* tipc_named_node_up - tell specified node about all publications by this node
*/
-void tipc_named_node_up(u32 dnode)
+void tipc_named_node_up(struct net *net, u32 dnode)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff_head head;
__skb_queue_head_init(&head);
rcu_read_lock();
- named_distribute(&head, dnode,
- &tipc_nametbl->publ_list[TIPC_CLUSTER_SCOPE]);
- named_distribute(&head, dnode,
- &tipc_nametbl->publ_list[TIPC_ZONE_SCOPE]);
+ named_distribute(net, &head, dnode,
+ &tn->nametbl->publ_list[TIPC_CLUSTER_SCOPE]);
+ named_distribute(net, &head, dnode,
+ &tn->nametbl->publ_list[TIPC_ZONE_SCOPE]);
rcu_read_unlock();
- tipc_link_xmit(&head, dnode, dnode);
+ tipc_link_xmit(net, &head, dnode, dnode);
}
-static void tipc_publ_subscribe(struct publication *publ, u32 addr)
+static void tipc_publ_subscribe(struct net *net, struct publication *publ,
+ u32 addr)
{
struct tipc_node *node;
- if (in_own_node(addr))
+ if (in_own_node(net, addr))
return;
- node = tipc_node_find(addr);
+ node = tipc_node_find(net, addr);
if (!node) {
pr_warn("Node subscription rejected, unknown node 0x%x\n",
addr);
tipc_node_unlock(node);
}
-static void tipc_publ_unsubscribe(struct publication *publ, u32 addr)
+static void tipc_publ_unsubscribe(struct net *net, struct publication *publ,
+ u32 addr)
{
struct tipc_node *node;
- node = tipc_node_find(addr);
+ node = tipc_node_find(net, addr);
if (!node)
return;
* Invoked for each publication issued by a newly failed node.
* Removes publication structure from name table & deletes it.
*/
-static void tipc_publ_purge(struct publication *publ, u32 addr)
+static void tipc_publ_purge(struct net *net, struct publication *publ, u32 addr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct publication *p;
- spin_lock_bh(&tipc_nametbl_lock);
- p = tipc_nametbl_remove_publ(publ->type, publ->lower,
+ spin_lock_bh(&tn->nametbl_lock);
+ p = tipc_nametbl_remove_publ(net, publ->type, publ->lower,
publ->node, publ->ref, publ->key);
if (p)
- tipc_publ_unsubscribe(p, addr);
- spin_unlock_bh(&tipc_nametbl_lock);
+ tipc_publ_unsubscribe(net, p, addr);
+ spin_unlock_bh(&tn->nametbl_lock);
if (p != publ) {
pr_err("Unable to remove publication from failed node\n"
kfree_rcu(p, rcu);
}
-void tipc_publ_notify(struct list_head *nsub_list, u32 addr)
+void tipc_publ_notify(struct net *net, struct list_head *nsub_list, u32 addr)
{
struct publication *publ, *tmp;
list_for_each_entry_safe(publ, tmp, nsub_list, nodesub_list)
- tipc_publ_purge(publ, addr);
+ tipc_publ_purge(net, publ, addr);
}
/**
* tipc_nametbl_lock must be held.
* Returns the publication item if successful, otherwise NULL.
*/
-static bool tipc_update_nametbl(struct distr_item *i, u32 node, u32 dtype)
+static bool tipc_update_nametbl(struct net *net, struct distr_item *i,
+ u32 node, u32 dtype)
{
struct publication *publ = NULL;
if (dtype == PUBLICATION) {
- publ = tipc_nametbl_insert_publ(ntohl(i->type), ntohl(i->lower),
+ publ = tipc_nametbl_insert_publ(net, ntohl(i->type),
+ ntohl(i->lower),
ntohl(i->upper),
TIPC_CLUSTER_SCOPE, node,
ntohl(i->ref), ntohl(i->key));
if (publ) {
- tipc_publ_subscribe(publ, node);
+ tipc_publ_subscribe(net, publ, node);
return true;
}
} else if (dtype == WITHDRAWAL) {
- publ = tipc_nametbl_remove_publ(ntohl(i->type), ntohl(i->lower),
+ publ = tipc_nametbl_remove_publ(net, ntohl(i->type),
+ ntohl(i->lower),
node, ntohl(i->ref),
ntohl(i->key));
if (publ) {
- tipc_publ_unsubscribe(publ, node);
+ tipc_publ_unsubscribe(net, publ, node);
kfree_rcu(publ, rcu);
return true;
}
* tipc_named_process_backlog - try to process any pending name table updates
* from the network.
*/
-void tipc_named_process_backlog(void)
+void tipc_named_process_backlog(struct net *net)
{
struct distr_queue_item *e, *tmp;
char addr[16];
list_for_each_entry_safe(e, tmp, &tipc_dist_queue, next) {
if (time_after(e->expires, now)) {
- if (!tipc_update_nametbl(&e->i, e->node, e->dtype))
+ if (!tipc_update_nametbl(net, &e->i, e->node, e->dtype))
continue;
} else {
tipc_addr_string_fill(addr, e->node);
/**
* tipc_named_rcv - process name table update message sent by another node
*/
-void tipc_named_rcv(struct sk_buff *buf)
+void tipc_named_rcv(struct net *net, struct sk_buff *buf)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_msg *msg = buf_msg(buf);
struct distr_item *item = (struct distr_item *)msg_data(msg);
u32 count = msg_data_sz(msg) / ITEM_SIZE;
u32 node = msg_orignode(msg);
- spin_lock_bh(&tipc_nametbl_lock);
+ spin_lock_bh(&tn->nametbl_lock);
while (count--) {
- if (!tipc_update_nametbl(item, node, msg_type(msg)))
+ if (!tipc_update_nametbl(net, item, node, msg_type(msg)))
tipc_named_add_backlog(item, msg_type(msg), node);
item++;
}
- tipc_named_process_backlog();
- spin_unlock_bh(&tipc_nametbl_lock);
+ tipc_named_process_backlog(net);
+ spin_unlock_bh(&tn->nametbl_lock);
kfree_skb(buf);
}
* All name table entries published by this node are updated to reflect
* the node's new network address.
*/
-void tipc_named_reinit(void)
+void tipc_named_reinit(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct publication *publ;
int scope;
- spin_lock_bh(&tipc_nametbl_lock);
+ spin_lock_bh(&tn->nametbl_lock);
for (scope = TIPC_ZONE_SCOPE; scope <= TIPC_NODE_SCOPE; scope++)
- list_for_each_entry_rcu(publ, &tipc_nametbl->publ_list[scope],
+ list_for_each_entry_rcu(publ, &tn->nametbl->publ_list[scope],
local_list)
- publ->node = tipc_own_addr;
+ publ->node = tn->own_addr;
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
}
__be32 key;
};
-struct sk_buff *tipc_named_publish(struct publication *publ);
-struct sk_buff *tipc_named_withdraw(struct publication *publ);
-void named_cluster_distribute(struct sk_buff *buf);
-void tipc_named_node_up(u32 dnode);
-void tipc_named_rcv(struct sk_buff *buf);
-void tipc_named_reinit(void);
-void tipc_named_process_backlog(void);
-void tipc_publ_notify(struct list_head *nsub_list, u32 addr);
+struct sk_buff *tipc_named_publish(struct net *net, struct publication *publ);
+struct sk_buff *tipc_named_withdraw(struct net *net, struct publication *publ);
+void named_cluster_distribute(struct net *net, struct sk_buff *buf);
+void tipc_named_node_up(struct net *net, u32 dnode);
+void tipc_named_rcv(struct net *net, struct sk_buff *buf);
+void tipc_named_reinit(struct net *net);
+void tipc_named_process_backlog(struct net *net);
+void tipc_publ_notify(struct net *net, struct list_head *nsub_list, u32 addr);
#endif
* POSSIBILITY OF SUCH DAMAGE.
*/
+#include <net/sock.h>
#include "core.h"
#include "config.h"
#include "name_table.h"
#include "name_distr.h"
#include "subscr.h"
+#include "bcast.h"
#define TIPC_NAMETBL_SIZE 1024 /* must be a power of 2 */
struct rcu_head rcu;
};
-struct name_table *tipc_nametbl;
-DEFINE_SPINLOCK(tipc_nametbl_lock);
-
static int hash(int x)
{
return x & (TIPC_NAMETBL_SIZE - 1);
/**
* tipc_nameseq_insert_publ
*/
-static struct publication *tipc_nameseq_insert_publ(struct name_seq *nseq,
- u32 type, u32 lower, u32 upper,
- u32 scope, u32 node, u32 port, u32 key)
+static struct publication *tipc_nameseq_insert_publ(struct net *net,
+ struct name_seq *nseq,
+ u32 type, u32 lower,
+ u32 upper, u32 scope,
+ u32 node, u32 port, u32 key)
{
struct tipc_subscription *s;
struct tipc_subscription *st;
list_add(&publ->zone_list, &info->zone_list);
info->zone_list_size++;
- if (in_own_cluster(node)) {
+ if (in_own_cluster(net, node)) {
list_add(&publ->cluster_list, &info->cluster_list);
info->cluster_list_size++;
}
- if (in_own_node(node)) {
+ if (in_own_node(net, node)) {
list_add(&publ->node_list, &info->node_list);
info->node_list_size++;
}
* A failed withdraw request simply returns a failure indication and lets the
* caller issue any error or warning messages associated with such a problem.
*/
-static struct publication *tipc_nameseq_remove_publ(struct name_seq *nseq, u32 inst,
- u32 node, u32 ref, u32 key)
+static struct publication *tipc_nameseq_remove_publ(struct net *net,
+ struct name_seq *nseq,
+ u32 inst, u32 node,
+ u32 ref, u32 key)
{
struct publication *publ;
struct sub_seq *sseq = nameseq_find_subseq(nseq, inst);
info->zone_list_size--;
/* Remove publication from cluster scope list, if present */
- if (in_own_cluster(node)) {
+ if (in_own_cluster(net, node)) {
list_del(&publ->cluster_list);
info->cluster_list_size--;
}
/* Remove publication from node scope list, if present */
- if (in_own_node(node)) {
+ if (in_own_node(net, node)) {
list_del(&publ->node_list);
info->node_list_size--;
}
}
}
-static struct name_seq *nametbl_find_seq(u32 type)
+static struct name_seq *nametbl_find_seq(struct net *net, u32 type)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct hlist_head *seq_head;
struct name_seq *ns;
- seq_head = &tipc_nametbl->seq_hlist[hash(type)];
+ seq_head = &tn->nametbl->seq_hlist[hash(type)];
hlist_for_each_entry_rcu(ns, seq_head, ns_list) {
if (ns->type == type)
return ns;
return NULL;
};
-struct publication *tipc_nametbl_insert_publ(u32 type, u32 lower, u32 upper,
- u32 scope, u32 node, u32 port, u32 key)
+struct publication *tipc_nametbl_insert_publ(struct net *net, u32 type,
+ u32 lower, u32 upper, u32 scope,
+ u32 node, u32 port, u32 key)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct publication *publ;
- struct name_seq *seq = nametbl_find_seq(type);
+ struct name_seq *seq = nametbl_find_seq(net, type);
int index = hash(type);
if ((scope < TIPC_ZONE_SCOPE) || (scope > TIPC_NODE_SCOPE) ||
}
if (!seq)
- seq = tipc_nameseq_create(type,
- &tipc_nametbl->seq_hlist[index]);
+ seq = tipc_nameseq_create(type, &tn->nametbl->seq_hlist[index]);
if (!seq)
return NULL;
spin_lock_bh(&seq->lock);
- publ = tipc_nameseq_insert_publ(seq, type, lower, upper,
+ publ = tipc_nameseq_insert_publ(net, seq, type, lower, upper,
scope, node, port, key);
spin_unlock_bh(&seq->lock);
return publ;
}
-struct publication *tipc_nametbl_remove_publ(u32 type, u32 lower,
- u32 node, u32 ref, u32 key)
+struct publication *tipc_nametbl_remove_publ(struct net *net, u32 type,
+ u32 lower, u32 node, u32 ref,
+ u32 key)
{
struct publication *publ;
- struct name_seq *seq = nametbl_find_seq(type);
+ struct name_seq *seq = nametbl_find_seq(net, type);
if (!seq)
return NULL;
spin_lock_bh(&seq->lock);
- publ = tipc_nameseq_remove_publ(seq, lower, node, ref, key);
+ publ = tipc_nameseq_remove_publ(net, seq, lower, node, ref, key);
if (!seq->first_free && list_empty(&seq->subscriptions)) {
hlist_del_init_rcu(&seq->ns_list);
kfree(seq->sseqs);
* - if name translation is attempted and fails, sets 'destnode' to 0
* and returns 0
*/
-u32 tipc_nametbl_translate(u32 type, u32 instance, u32 *destnode)
+u32 tipc_nametbl_translate(struct net *net, u32 type, u32 instance,
+ u32 *destnode)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sub_seq *sseq;
struct name_info *info;
struct publication *publ;
u32 ref = 0;
u32 node = 0;
- if (!tipc_in_scope(*destnode, tipc_own_addr))
+ if (!tipc_in_scope(*destnode, tn->own_addr))
return 0;
rcu_read_lock();
- seq = nametbl_find_seq(type);
+ seq = nametbl_find_seq(net, type);
if (unlikely(!seq))
goto not_found;
spin_lock_bh(&seq->lock);
}
/* Round-Robin Algorithm */
- else if (*destnode == tipc_own_addr) {
+ else if (*destnode == tn->own_addr) {
if (list_empty(&info->node_list))
goto no_match;
publ = list_first_entry(&info->node_list, struct publication,
node_list);
list_move_tail(&publ->node_list, &info->node_list);
- } else if (in_own_cluster_exact(*destnode)) {
+ } else if (in_own_cluster_exact(net, *destnode)) {
if (list_empty(&info->cluster_list))
goto no_match;
publ = list_first_entry(&info->cluster_list, struct publication,
*
* Returns non-zero if any off-node ports overlap
*/
-int tipc_nametbl_mc_translate(u32 type, u32 lower, u32 upper, u32 limit,
- struct tipc_port_list *dports)
+int tipc_nametbl_mc_translate(struct net *net, u32 type, u32 lower, u32 upper,
+ u32 limit, struct tipc_port_list *dports)
{
struct name_seq *seq;
struct sub_seq *sseq;
int res = 0;
rcu_read_lock();
- seq = nametbl_find_seq(type);
+ seq = nametbl_find_seq(net, type);
if (!seq)
goto exit;
/*
* tipc_nametbl_publish - add name publication to network name tables
*/
-struct publication *tipc_nametbl_publish(u32 type, u32 lower, u32 upper,
- u32 scope, u32 port_ref, u32 key)
+struct publication *tipc_nametbl_publish(struct net *net, u32 type, u32 lower,
+ u32 upper, u32 scope, u32 port_ref,
+ u32 key)
{
struct publication *publ;
struct sk_buff *buf = NULL;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
- spin_lock_bh(&tipc_nametbl_lock);
- if (tipc_nametbl->local_publ_count >= TIPC_MAX_PUBLICATIONS) {
+ spin_lock_bh(&tn->nametbl_lock);
+ if (tn->nametbl->local_publ_count >= TIPC_MAX_PUBLICATIONS) {
pr_warn("Publication failed, local publication limit reached (%u)\n",
TIPC_MAX_PUBLICATIONS);
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
return NULL;
}
- publ = tipc_nametbl_insert_publ(type, lower, upper, scope,
- tipc_own_addr, port_ref, key);
+ publ = tipc_nametbl_insert_publ(net, type, lower, upper, scope,
+ tn->own_addr, port_ref, key);
if (likely(publ)) {
- tipc_nametbl->local_publ_count++;
- buf = tipc_named_publish(publ);
+ tn->nametbl->local_publ_count++;
+ buf = tipc_named_publish(net, publ);
/* Any pending external events? */
- tipc_named_process_backlog();
+ tipc_named_process_backlog(net);
}
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
if (buf)
- named_cluster_distribute(buf);
+ named_cluster_distribute(net, buf);
return publ;
}
/**
* tipc_nametbl_withdraw - withdraw name publication from network name tables
*/
-int tipc_nametbl_withdraw(u32 type, u32 lower, u32 ref, u32 key)
+int tipc_nametbl_withdraw(struct net *net, u32 type, u32 lower, u32 ref,
+ u32 key)
{
struct publication *publ;
struct sk_buff *skb = NULL;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
- spin_lock_bh(&tipc_nametbl_lock);
- publ = tipc_nametbl_remove_publ(type, lower, tipc_own_addr, ref, key);
+ spin_lock_bh(&tn->nametbl_lock);
+ publ = tipc_nametbl_remove_publ(net, type, lower, tn->own_addr,
+ ref, key);
if (likely(publ)) {
- tipc_nametbl->local_publ_count--;
- skb = tipc_named_withdraw(publ);
+ tn->nametbl->local_publ_count--;
+ skb = tipc_named_withdraw(net, publ);
/* Any pending external events? */
- tipc_named_process_backlog();
+ tipc_named_process_backlog(net);
list_del_init(&publ->pport_list);
kfree_rcu(publ, rcu);
} else {
"(type=%u, lower=%u, ref=%u, key=%u)\n",
type, lower, ref, key);
}
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
if (skb) {
- named_cluster_distribute(skb);
+ named_cluster_distribute(net, skb);
return 1;
}
return 0;
*/
void tipc_nametbl_subscribe(struct tipc_subscription *s)
{
+ struct tipc_net *tn = net_generic(s->net, tipc_net_id);
u32 type = s->seq.type;
int index = hash(type);
struct name_seq *seq;
- spin_lock_bh(&tipc_nametbl_lock);
- seq = nametbl_find_seq(type);
+ spin_lock_bh(&tn->nametbl_lock);
+ seq = nametbl_find_seq(s->net, type);
if (!seq)
- seq = tipc_nameseq_create(type,
- &tipc_nametbl->seq_hlist[index]);
+ seq = tipc_nameseq_create(type, &tn->nametbl->seq_hlist[index]);
if (seq) {
spin_lock_bh(&seq->lock);
tipc_nameseq_subscribe(seq, s);
pr_warn("Failed to create subscription for {%u,%u,%u}\n",
s->seq.type, s->seq.lower, s->seq.upper);
}
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
}
/**
*/
void tipc_nametbl_unsubscribe(struct tipc_subscription *s)
{
+ struct tipc_net *tn = net_generic(s->net, tipc_net_id);
struct name_seq *seq;
- spin_lock_bh(&tipc_nametbl_lock);
- seq = nametbl_find_seq(s->seq.type);
+ spin_lock_bh(&tn->nametbl_lock);
+ seq = nametbl_find_seq(s->net, s->seq.type);
if (seq != NULL) {
spin_lock_bh(&seq->lock);
list_del_init(&s->nameseq_list);
spin_unlock_bh(&seq->lock);
}
}
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
}
/**
/**
* nametbl_list - print specified name table contents into the given buffer
*/
-static int nametbl_list(char *buf, int len, u32 depth_info,
+static int nametbl_list(struct net *net, char *buf, int len, u32 depth_info,
u32 type, u32 lowbound, u32 upbound)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct hlist_head *seq_head;
struct name_seq *seq;
int all_types;
lowbound = 0;
upbound = ~0;
for (i = 0; i < TIPC_NAMETBL_SIZE; i++) {
- seq_head = &tipc_nametbl->seq_hlist[i];
+ seq_head = &tn->nametbl->seq_hlist[i];
hlist_for_each_entry_rcu(seq, seq_head, ns_list) {
ret += nameseq_list(seq, buf + ret, len - ret,
depth, seq->type,
}
ret += nametbl_header(buf + ret, len - ret, depth);
i = hash(type);
- seq_head = &tipc_nametbl->seq_hlist[i];
+ seq_head = &tn->nametbl->seq_hlist[i];
hlist_for_each_entry_rcu(seq, seq_head, ns_list) {
if (seq->type == type) {
ret += nameseq_list(seq, buf + ret, len - ret,
return ret;
}
-struct sk_buff *tipc_nametbl_get(const void *req_tlv_area, int req_tlv_space)
+struct sk_buff *tipc_nametbl_get(struct net *net, const void *req_tlv_area,
+ int req_tlv_space)
{
struct sk_buff *buf;
struct tipc_name_table_query *argv;
pb_len = ULTRA_STRING_MAX_LEN;
argv = (struct tipc_name_table_query *)TLV_DATA(req_tlv_area);
rcu_read_lock();
- str_len = nametbl_list(pb, pb_len, ntohl(argv->depth),
+ str_len = nametbl_list(net, pb, pb_len, ntohl(argv->depth),
ntohl(argv->type),
ntohl(argv->lowbound), ntohl(argv->upbound));
rcu_read_unlock();
return buf;
}
-int tipc_nametbl_init(void)
+int tipc_nametbl_init(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct name_table *tipc_nametbl;
int i;
tipc_nametbl = kzalloc(sizeof(*tipc_nametbl), GFP_ATOMIC);
INIT_LIST_HEAD(&tipc_nametbl->publ_list[TIPC_ZONE_SCOPE]);
INIT_LIST_HEAD(&tipc_nametbl->publ_list[TIPC_CLUSTER_SCOPE]);
INIT_LIST_HEAD(&tipc_nametbl->publ_list[TIPC_NODE_SCOPE]);
+ tn->nametbl = tipc_nametbl;
+ spin_lock_init(&tn->nametbl_lock);
return 0;
}
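
In the per-net model these init/stop pairs are driven from the namespace lifecycle hooks; roughly (a sketch, with the hook names assumed from the rest of the series):

	static int __net_init tipc_init_net(struct net *net)
	{
		return tipc_nametbl_init(net);	/* plus the other per-net subsystems */
	}

	static void __net_exit tipc_exit_net(struct net *net)
	{
		tipc_nametbl_stop(net);
	}
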
*
* tipc_nametbl_lock must be held when calling this function
*/
-static void tipc_purge_publications(struct name_seq *seq)
+static void tipc_purge_publications(struct net *net, struct name_seq *seq)
{
struct publication *publ, *safe;
struct sub_seq *sseq;
sseq = seq->sseqs;
info = sseq->info;
list_for_each_entry_safe(publ, safe, &info->zone_list, zone_list) {
- tipc_nametbl_remove_publ(publ->type, publ->lower, publ->node,
- publ->ref, publ->key);
+ tipc_nametbl_remove_publ(net, publ->type, publ->lower,
+ publ->node, publ->ref, publ->key);
kfree_rcu(publ, rcu);
}
hlist_del_init_rcu(&seq->ns_list);
kfree_rcu(seq, rcu);
}
-void tipc_nametbl_stop(void)
+void tipc_nametbl_stop(struct net *net)
{
u32 i;
struct name_seq *seq;
struct hlist_head *seq_head;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct name_table *tipc_nametbl = tn->nametbl;
/* Verify name table is empty and purge any lingering
* publications, then release the name table
*/
- spin_lock_bh(&tipc_nametbl_lock);
+ spin_lock_bh(&tn->nametbl_lock);
for (i = 0; i < TIPC_NAMETBL_SIZE; i++) {
if (hlist_empty(&tipc_nametbl->seq_hlist[i]))
continue;
seq_head = &tipc_nametbl->seq_hlist[i];
hlist_for_each_entry_rcu(seq, seq_head, ns_list) {
- tipc_purge_publications(seq);
+ tipc_purge_publications(net, seq);
}
}
- spin_unlock_bh(&tipc_nametbl_lock);
+ spin_unlock_bh(&tn->nametbl_lock);
synchronize_net();
kfree(tipc_nametbl);
return 0;
}
-static int __tipc_nl_seq_list(struct tipc_nl_msg *msg, u32 *last_type,
- u32 *last_lower, u32 *last_publ)
+static int tipc_nl_seq_list(struct net *net, struct tipc_nl_msg *msg,
+ u32 *last_type, u32 *last_lower, u32 *last_publ)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct hlist_head *seq_head;
struct name_seq *seq = NULL;
int err;
i = 0;
for (; i < TIPC_NAMETBL_SIZE; i++) {
- seq_head = &tipc_nametbl->seq_hlist[i];
+ seq_head = &tn->nametbl->seq_hlist[i];
if (*last_type) {
- seq = nametbl_find_seq(*last_type);
+ seq = nametbl_find_seq(net, *last_type);
if (!seq)
return -EPIPE;
} else {
u32 last_type = cb->args[0];
u32 last_lower = cb->args[1];
u32 last_publ = cb->args[2];
+ struct net *net = sock_net(skb->sk);
struct tipc_nl_msg msg;
if (done)
msg.seq = cb->nlh->nlmsg_seq;
rcu_read_lock();
- err = __tipc_nl_seq_list(&msg, &last_type, &last_lower, &last_publ);
+ err = tipc_nl_seq_list(net, &msg, &last_type, &last_lower, &last_publ);
if (!err) {
done = 1;
} else if (err != -EMSGSIZE) {
u32 local_publ_count;
};
-extern spinlock_t tipc_nametbl_lock;
-extern struct name_table *tipc_nametbl;
-
int tipc_nl_name_table_dump(struct sk_buff *skb, struct netlink_callback *cb);
-struct sk_buff *tipc_nametbl_get(const void *req_tlv_area, int req_tlv_space);
-u32 tipc_nametbl_translate(u32 type, u32 instance, u32 *node);
-int tipc_nametbl_mc_translate(u32 type, u32 lower, u32 upper, u32 limit,
- struct tipc_port_list *dports);
-struct publication *tipc_nametbl_publish(u32 type, u32 lower, u32 upper,
- u32 scope, u32 port_ref, u32 key);
-int tipc_nametbl_withdraw(u32 type, u32 lower, u32 ref, u32 key);
-struct publication *tipc_nametbl_insert_publ(u32 type, u32 lower, u32 upper,
- u32 scope, u32 node, u32 ref,
+struct sk_buff *tipc_nametbl_get(struct net *net, const void *req_tlv_area,
+ int req_tlv_space);
+u32 tipc_nametbl_translate(struct net *net, u32 type, u32 instance, u32 *node);
+int tipc_nametbl_mc_translate(struct net *net, u32 type, u32 lower, u32 upper,
+ u32 limit, struct tipc_port_list *dports);
+struct publication *tipc_nametbl_publish(struct net *net, u32 type, u32 lower,
+ u32 upper, u32 scope, u32 port_ref,
+ u32 key);
+int tipc_nametbl_withdraw(struct net *net, u32 type, u32 lower, u32 ref,
+ u32 key);
+struct publication *tipc_nametbl_insert_publ(struct net *net, u32 type,
+ u32 lower, u32 upper, u32 scope,
+ u32 node, u32 ref, u32 key);
+struct publication *tipc_nametbl_remove_publ(struct net *net, u32 type,
+ u32 lower, u32 node, u32 ref,
u32 key);
-struct publication *tipc_nametbl_remove_publ(u32 type, u32 lower, u32 node,
- u32 ref, u32 key);
void tipc_nametbl_subscribe(struct tipc_subscription *s);
void tipc_nametbl_unsubscribe(struct tipc_subscription *s);
-int tipc_nametbl_init(void);
-void tipc_nametbl_stop(void);
+int tipc_nametbl_init(struct net *net);
+void tipc_nametbl_stop(struct net *net);
#endif
#include "socket.h"
#include "node.h"
#include "config.h"
+#include "bcast.h"
static const struct nla_policy tipc_nl_net_policy[TIPC_NLA_NET_MAX + 1] = {
[TIPC_NLA_NET_UNSPEC] = { .type = NLA_UNSPEC },
* - A local spin_lock protecting the queue of subscriber events.
*/
-int tipc_net_start(u32 addr)
+int tipc_net_start(struct net *net, u32 addr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
char addr_string[16];
int res;
- tipc_own_addr = addr;
- tipc_named_reinit();
- tipc_sk_reinit();
- res = tipc_bclink_init();
+ tn->own_addr = addr;
+ tipc_named_reinit(net);
+ tipc_sk_reinit(net);
+ res = tipc_bclink_init(net);
if (res)
return res;
- tipc_nametbl_publish(TIPC_CFG_SRV, tipc_own_addr, tipc_own_addr,
- TIPC_ZONE_SCOPE, 0, tipc_own_addr);
+ tipc_nametbl_publish(net, TIPC_CFG_SRV, tn->own_addr, tn->own_addr,
+ TIPC_ZONE_SCOPE, 0, tn->own_addr);
pr_info("Started in network mode\n");
pr_info("Own node address %s, network identity %u\n",
- tipc_addr_string_fill(addr_string, tipc_own_addr), tipc_net_id);
+ tipc_addr_string_fill(addr_string, tn->own_addr),
+ tn->net_id);
return 0;
}
-void tipc_net_stop(void)
+void tipc_net_stop(struct net *net)
{
- if (!tipc_own_addr)
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+
+ if (!tn->own_addr)
return;
- tipc_nametbl_withdraw(TIPC_CFG_SRV, tipc_own_addr, 0, tipc_own_addr);
+ tipc_nametbl_withdraw(net, TIPC_CFG_SRV, tn->own_addr, 0,
+ tn->own_addr);
rtnl_lock();
- tipc_bearer_stop();
- tipc_bclink_stop();
- tipc_node_stop();
+ tipc_bearer_stop(net);
+ tipc_bclink_stop(net);
+ tipc_node_stop(net);
rtnl_unlock();
pr_info("Left network mode\n");
}
-static int __tipc_nl_add_net(struct tipc_nl_msg *msg)
+static int __tipc_nl_add_net(struct net *net, struct tipc_nl_msg *msg)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
void *hdr;
struct nlattr *attrs;
if (!attrs)
goto msg_full;
- if (nla_put_u32(msg->skb, TIPC_NLA_NET_ID, tipc_net_id))
+ if (nla_put_u32(msg->skb, TIPC_NLA_NET_ID, tn->net_id))
goto attr_msg_full;
nla_nest_end(msg->skb, attrs);
int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
+ struct net *net = sock_net(skb->sk);
int err;
int done = cb->args[0];
struct tipc_nl_msg msg;
msg.portid = NETLINK_CB(cb->skb).portid;
msg.seq = cb->nlh->nlmsg_seq;
- err = __tipc_nl_add_net(&msg);
+ err = __tipc_nl_add_net(net, &msg);
if (err)
goto out;
int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info)
{
- int err;
+ struct net *net = genl_info_net(info);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct nlattr *attrs[TIPC_NLA_NET_MAX + 1];
+ int err;
if (!info->attrs[TIPC_NLA_NET])
return -EINVAL;
u32 val;
/* Can't change net id once TIPC has joined a network */
- if (tipc_own_addr)
+ if (tn->own_addr)
return -EPERM;
val = nla_get_u32(attrs[TIPC_NLA_NET_ID]);
if (val < 1 || val > 9999)
return -EINVAL;
- tipc_net_id = val;
+ tn->net_id = val;
}
if (attrs[TIPC_NLA_NET_ADDR]) {
u32 addr;
/* Can't change net addr once TIPC has joined a network */
- if (tipc_own_addr)
+ if (tn->own_addr)
return -EPERM;
addr = nla_get_u32(attrs[TIPC_NLA_NET_ADDR]);
return -EINVAL;
rtnl_lock();
- tipc_net_start(addr);
+ tipc_net_start(net, addr);
rtnl_unlock();
}
#include <net/genetlink.h>
-int tipc_net_start(u32 addr);
+int tipc_net_start(struct net *net, u32 addr);
-void tipc_net_stop(void);
+void tipc_net_stop(struct net *net);
int tipc_nl_net_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_net_set(struct sk_buff *skb, struct genl_info *info);
static int handle_cmd(struct sk_buff *skb, struct genl_info *info)
{
+ struct net *net = genl_info_net(info);
struct sk_buff *rep_buf;
struct nlmsghdr *rep_nlh;
struct nlmsghdr *req_nlh = info->nlhdr;
int hdr_space = nlmsg_total_size(GENL_HDRLEN + TIPC_GENL_HDRLEN);
u16 cmd;
- if ((req_userhdr->cmd & 0xC000) && (!netlink_capable(skb, CAP_NET_ADMIN)))
+ if ((req_userhdr->cmd & 0xC000) &&
+ (!netlink_net_capable(skb, CAP_NET_ADMIN)))
cmd = TIPC_CMD_NOT_NET_ADMIN;
else
cmd = req_userhdr->cmd;
- rep_buf = tipc_cfg_do_cmd(req_userhdr->dest, cmd,
- nlmsg_data(req_nlh) + GENL_HDRLEN + TIPC_GENL_HDRLEN,
- nlmsg_attrlen(req_nlh, GENL_HDRLEN + TIPC_GENL_HDRLEN),
- hdr_space);
+ rep_buf = tipc_cfg_do_cmd(net, req_userhdr->dest, cmd,
+ nlmsg_data(req_nlh) + GENL_HDRLEN +
+ TIPC_GENL_HDRLEN,
+ nlmsg_attrlen(req_nlh, GENL_HDRLEN +
+ TIPC_GENL_HDRLEN), hdr_space);
if (rep_buf) {
skb_push(rep_buf, hdr_space);
rep_nlh = nlmsg_hdr(rep_buf);
memcpy(rep_nlh, req_nlh, hdr_space);
rep_nlh->nlmsg_len = rep_buf->len;
- genlmsg_unicast(&init_net, rep_buf, NETLINK_CB(skb).portid);
+ genlmsg_unicast(net, rep_buf, NETLINK_CB(skb).portid);
}
return 0;
.version = TIPC_GENL_VERSION,
.hdrsize = TIPC_GENL_HDRLEN,
.maxattr = 0,
+ .netnsok = true,
};
/* Legacy ASCII API */
.version = TIPC_GENL_V2_VERSION,
.hdrsize = 0,
.maxattr = TIPC_NLA_MAX,
+ .netnsok = true,
};
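
Flagging both genl families with .netnsok = true only makes sense because the state they operate on is now per-namespace; the registration backing this up looks roughly like the following (a sketch, reusing the hook names assumed above):

	static struct pernet_operations tipc_net_ops = {
		.init = tipc_init_net,
		.exit = tipc_exit_net,
		.id   = &tipc_net_id,
		.size = sizeof(struct tipc_net),
	};

	/* called once at module load: register_pernet_subsys(&tipc_net_ops) */
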
static const struct genl_ops tipc_genl_v2_ops[] = {
u32 seq;
};
+int tipc_netlink_start(void);
+void tipc_netlink_stop(void);
+
#endif
#include "name_distr.h"
#include "socket.h"
-#define NODE_HTABLE_SIZE 512
-
static void node_lost_contact(struct tipc_node *n_ptr);
static void node_established_contact(struct tipc_node *n_ptr);
-static struct hlist_head node_htable[NODE_HTABLE_SIZE];
-LIST_HEAD(tipc_node_list);
-static u32 tipc_num_nodes;
-static u32 tipc_num_links;
-static DEFINE_SPINLOCK(node_list_lock);
-
struct tipc_sock_conn {
u32 port;
u32 peer_port;
/*
* tipc_node_find - locate specified node object, if it exists
*/
-struct tipc_node *tipc_node_find(u32 addr)
+struct tipc_node *tipc_node_find(struct net *net, u32 addr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *node;
- if (unlikely(!in_own_cluster_exact(addr)))
+ if (unlikely(!in_own_cluster_exact(net, addr)))
return NULL;
rcu_read_lock();
- hlist_for_each_entry_rcu(node, &node_htable[tipc_hashfn(addr)], hash) {
+ hlist_for_each_entry_rcu(node, &tn->node_htable[tipc_hashfn(addr)],
+ hash) {
if (node->addr == addr) {
rcu_read_unlock();
return node;
return NULL;
}
-struct tipc_node *tipc_node_create(u32 addr)
+struct tipc_node *tipc_node_create(struct net *net, u32 addr)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *n_ptr, *temp_node;
- spin_lock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
n_ptr = kzalloc(sizeof(*n_ptr), GFP_ATOMIC);
if (!n_ptr) {
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
pr_warn("Node creation failed, no memory\n");
return NULL;
}
n_ptr->addr = addr;
+ n_ptr->net = net;
spin_lock_init(&n_ptr->lock);
INIT_HLIST_NODE(&n_ptr->hash);
INIT_LIST_HEAD(&n_ptr->list);
skb_queue_head_init(&n_ptr->waiting_sks);
__skb_queue_head_init(&n_ptr->bclink.deferred_queue);
- hlist_add_head_rcu(&n_ptr->hash, &node_htable[tipc_hashfn(addr)]);
+ hlist_add_head_rcu(&n_ptr->hash, &tn->node_htable[tipc_hashfn(addr)]);
- list_for_each_entry_rcu(temp_node, &tipc_node_list, list) {
+ list_for_each_entry_rcu(temp_node, &tn->node_list, list) {
if (n_ptr->addr < temp_node->addr)
break;
}
n_ptr->action_flags = TIPC_WAIT_PEER_LINKS_DOWN;
n_ptr->signature = INVALID_NODE_SIG;
- tipc_num_nodes++;
+ tn->num_nodes++;
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
return n_ptr;
}
-static void tipc_node_delete(struct tipc_node *n_ptr)
+static void tipc_node_delete(struct tipc_net *tn, struct tipc_node *n_ptr)
{
list_del_rcu(&n_ptr->list);
hlist_del_rcu(&n_ptr->hash);
kfree_rcu(n_ptr, rcu);
- tipc_num_nodes--;
+ tn->num_nodes--;
}
-void tipc_node_stop(void)
+void tipc_node_stop(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_node *node, *t_node;
- spin_lock_bh(&node_list_lock);
- list_for_each_entry_safe(node, t_node, &tipc_node_list, list)
- tipc_node_delete(node);
- spin_unlock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
+ list_for_each_entry_safe(node, t_node, &tn->node_list, list)
+ tipc_node_delete(tn, node);
+ spin_unlock_bh(&tn->node_list_lock);
}
-int tipc_node_add_conn(u32 dnode, u32 port, u32 peer_port)
+int tipc_node_add_conn(struct net *net, u32 dnode, u32 port, u32 peer_port)
{
struct tipc_node *node;
struct tipc_sock_conn *conn;
- if (in_own_node(dnode))
+ if (in_own_node(net, dnode))
return 0;
- node = tipc_node_find(dnode);
+ node = tipc_node_find(net, dnode);
if (!node) {
pr_warn("Connecting sock to node 0x%x failed\n", dnode);
return -EHOSTUNREACH;
return 0;
}
-void tipc_node_remove_conn(u32 dnode, u32 port)
+void tipc_node_remove_conn(struct net *net, u32 dnode, u32 port)
{
struct tipc_node *node;
struct tipc_sock_conn *conn, *safe;
- if (in_own_node(dnode))
+ if (in_own_node(net, dnode))
return;
- node = tipc_node_find(dnode);
+ node = tipc_node_find(net, dnode);
if (!node)
return;
tipc_node_unlock(node);
}
-void tipc_node_abort_sock_conns(struct list_head *conns)
+void tipc_node_abort_sock_conns(struct net *net, struct list_head *conns)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_sock_conn *conn, *safe;
struct sk_buff *buf;
list_for_each_entry_safe(conn, safe, conns, list) {
- buf = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE, TIPC_CONN_MSG,
- SHORT_H_SIZE, 0, tipc_own_addr,
- conn->peer_node, conn->port,
- conn->peer_port, TIPC_ERR_NO_NODE);
+ buf = tipc_msg_create(net, TIPC_CRITICAL_IMPORTANCE,
+ TIPC_CONN_MSG, SHORT_H_SIZE, 0,
+ tn->own_addr, conn->peer_node,
+ conn->port, conn->peer_port,
+ TIPC_ERR_NO_NODE);
if (likely(buf))
- tipc_sk_rcv(buf);
+ tipc_sk_rcv(net, buf);
list_del(&conn->list);
kfree(conn);
}
*/
void tipc_node_link_down(struct tipc_node *n_ptr, struct tipc_link *l_ptr)
{
+ struct tipc_net *tn = net_generic(n_ptr->net, tipc_net_id);
struct tipc_link **active;
n_ptr->working_links--;
}
/* Loopback link went down? No fragmentation needed from now on. */
- if (n_ptr->addr == tipc_own_addr) {
+ if (n_ptr->addr == tn->own_addr) {
n_ptr->act_mtus[0] = MAX_MSG_SIZE;
n_ptr->act_mtus[1] = MAX_MSG_SIZE;
}
void tipc_node_attach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr)
{
+ struct tipc_net *tn = net_generic(n_ptr->net, tipc_net_id);
+
n_ptr->links[l_ptr->bearer_id] = l_ptr;
- spin_lock_bh(&node_list_lock);
- tipc_num_links++;
- spin_unlock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
+ tn->num_links++;
+ spin_unlock_bh(&tn->node_list_lock);
n_ptr->link_cnt++;
}
void tipc_node_detach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr)
{
+ struct tipc_net *tn = net_generic(n_ptr->net, tipc_net_id);
int i;
for (i = 0; i < MAX_BEARERS; i++) {
if (l_ptr != n_ptr->links[i])
continue;
n_ptr->links[i] = NULL;
- spin_lock_bh(&node_list_lock);
- tipc_num_links--;
- spin_unlock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
+ tn->num_links--;
+ spin_unlock_bh(&tn->node_list_lock);
n_ptr->link_cnt--;
}
}
{
n_ptr->action_flags |= TIPC_NOTIFY_NODE_UP;
n_ptr->bclink.oos_state = 0;
- n_ptr->bclink.acked = tipc_bclink_get_last_sent();
- tipc_bclink_add_node(n_ptr->addr);
+ n_ptr->bclink.acked = tipc_bclink_get_last_sent(n_ptr->net);
+ tipc_bclink_add_node(n_ptr->net, n_ptr->addr);
}
static void node_lost_contact(struct tipc_node *n_ptr)
n_ptr->bclink.reasm_buf = NULL;
}
- tipc_bclink_remove_node(n_ptr->addr);
+ tipc_bclink_remove_node(n_ptr->net, n_ptr->addr);
tipc_bclink_acknowledge(n_ptr, INVALID_LINK_SEQ);
n_ptr->bclink.recv_permitted = false;
TIPC_NOTIFY_NODE_DOWN;
}
-struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space)
+struct sk_buff *tipc_node_get_nodes(struct net *net, const void *req_tlv_area,
+ int req_tlv_space)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 domain;
struct sk_buff *buf;
struct tipc_node *n_ptr;
return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
" (network address)");
- spin_lock_bh(&node_list_lock);
- if (!tipc_num_nodes) {
- spin_unlock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
+ if (!tn->num_nodes) {
+ spin_unlock_bh(&tn->node_list_lock);
return tipc_cfg_reply_none();
}
/* For now, get space for all other nodes */
- payload_size = TLV_SPACE(sizeof(node_info)) * tipc_num_nodes;
+ payload_size = TLV_SPACE(sizeof(node_info)) * tn->num_nodes;
if (payload_size > 32768u) {
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many nodes)");
}
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
buf = tipc_cfg_reply_alloc(payload_size);
if (!buf)
/* Add TLVs for all nodes in scope */
rcu_read_lock();
- list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
+ list_for_each_entry_rcu(n_ptr, &tn->node_list, list) {
if (!tipc_in_scope(domain, n_ptr->addr))
continue;
node_info.addr = htonl(n_ptr->addr);
return buf;
}
-struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space)
+struct sk_buff *tipc_node_get_links(struct net *net, const void *req_tlv_area,
+ int req_tlv_space)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
u32 domain;
struct sk_buff *buf;
struct tipc_node *n_ptr;
return tipc_cfg_reply_error_string(TIPC_CFG_INVALID_VALUE
" (network address)");
- if (!tipc_own_addr)
+ if (!tn->own_addr)
return tipc_cfg_reply_none();
- spin_lock_bh(&node_list_lock);
+ spin_lock_bh(&tn->node_list_lock);
/* Get space for all unicast links + broadcast link */
- payload_size = TLV_SPACE((sizeof(link_info)) * (tipc_num_links + 1));
+ payload_size = TLV_SPACE((sizeof(link_info)) * (tn->num_links + 1));
if (payload_size > 32768u) {
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
return tipc_cfg_reply_error_string(TIPC_CFG_NOT_SUPPORTED
" (too many links)");
}
- spin_unlock_bh(&node_list_lock);
+ spin_unlock_bh(&tn->node_list_lock);
buf = tipc_cfg_reply_alloc(payload_size);
if (!buf)
return NULL;
/* Add TLV for broadcast link */
- link_info.dest = htonl(tipc_cluster_mask(tipc_own_addr));
+ link_info.dest = htonl(tipc_cluster_mask(tn->own_addr));
link_info.up = htonl(1);
strlcpy(link_info.str, tipc_bclink_name, TIPC_MAX_LINK_NAME);
tipc_cfg_append_tlv(buf, TIPC_TLV_LINK_INFO, &link_info, sizeof(link_info));
/* Add TLVs for any other links in scope */
rcu_read_lock();
- list_for_each_entry_rcu(n_ptr, &tipc_node_list, list) {
+ list_for_each_entry_rcu(n_ptr, &tn->node_list, list) {
u32 i;
if (!tipc_in_scope(domain, n_ptr->addr))
*
* Returns 0 on success
*/
-int tipc_node_get_linkname(u32 bearer_id, u32 addr, char *linkname, size_t len)
+int tipc_node_get_linkname(struct net *net, u32 bearer_id, u32 addr,
+ char *linkname, size_t len)
{
struct tipc_link *link;
- struct tipc_node *node = tipc_node_find(addr);
+ struct tipc_node *node = tipc_node_find(net, addr);
if ((bearer_id >= MAX_BEARERS) || !node)
return -EINVAL;
void tipc_node_unlock(struct tipc_node *node)
{
+ struct net *net = node->net;
LIST_HEAD(nsub_list);
LIST_HEAD(conn_sks);
struct sk_buff_head waiting_sks;
spin_unlock_bh(&node->lock);
while (!skb_queue_empty(&waiting_sks))
- tipc_sk_rcv(__skb_dequeue(&waiting_sks));
+ tipc_sk_rcv(net, __skb_dequeue(&waiting_sks));
if (!list_empty(&conn_sks))
- tipc_node_abort_sock_conns(&conn_sks);
+ tipc_node_abort_sock_conns(net, &conn_sks);
if (!list_empty(&nsub_list))
- tipc_publ_notify(&nsub_list, addr);
+ tipc_publ_notify(net, &nsub_list, addr);
if (flags & TIPC_WAKEUP_BCAST_USERS)
- tipc_bclink_wakeup_users();
+ tipc_bclink_wakeup_users(net);
if (flags & TIPC_NOTIFY_NODE_UP)
- tipc_named_node_up(addr);
+ tipc_named_node_up(net, addr);
if (flags & TIPC_NOTIFY_LINK_UP)
- tipc_nametbl_publish(TIPC_LINK_STATE, addr, addr,
+ tipc_nametbl_publish(net, TIPC_LINK_STATE, addr, addr,
TIPC_NODE_SCOPE, link_id, addr);
if (flags & TIPC_NOTIFY_LINK_DOWN)
- tipc_nametbl_withdraw(TIPC_LINK_STATE, addr,
+ tipc_nametbl_withdraw(net, TIPC_LINK_STATE, addr,
link_id, addr);
}
int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
+ struct net *net = sock_net(skb->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
int done = cb->args[0];
int last_addr = cb->args[1];
struct tipc_node *node;
rcu_read_lock();
- if (last_addr && !tipc_node_find(last_addr)) {
+ if (last_addr && !tipc_node_find(net, last_addr)) {
rcu_read_unlock();
/* We never set seq or call nl_dump_check_consistent() this
* means that setting prev_seq here will cause the consistence
return -EPIPE;
}
- list_for_each_entry_rcu(node, &tipc_node_list, list) {
+ list_for_each_entry_rcu(node, &tn->node_list, list) {
if (last_addr) {
if (node->addr == last_addr)
last_addr = 0;
#include "bearer.h"
#include "msg.h"
-/*
- * Out-of-range value for node signature
- */
-#define INVALID_NODE_SIG 0x10000
+/* Out-of-range value for node signature */
+#define INVALID_NODE_SIG 0x10000
+
+#define NODE_HTABLE_SIZE 512
/* Flags used to take different actions according to flag type
* TIPC_WAIT_PEER_LINKS_DOWN: wait to see that peer's links are down
* struct tipc_node - TIPC node structure
* @addr: network address of node
* @lock: spinlock governing access to structure
+ * @net: the applicable net namespace
* @hash: links to adjacent nodes in unsorted hash chain
* @active_links: pointers to active links to node
* @links: pointers to all links to node
struct tipc_node {
u32 addr;
spinlock_t lock;
+ struct net *net;
struct hlist_node hash;
struct tipc_link *active_links[2];
u32 act_mtus[2];
struct rcu_head rcu;
};
-extern struct list_head tipc_node_list;
-
-struct tipc_node *tipc_node_find(u32 addr);
-struct tipc_node *tipc_node_create(u32 addr);
-void tipc_node_stop(void);
+struct tipc_node *tipc_node_find(struct net *net, u32 addr);
+struct tipc_node *tipc_node_create(struct net *net, u32 addr);
+void tipc_node_stop(struct net *net);
void tipc_node_attach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
void tipc_node_detach_link(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
void tipc_node_link_down(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
void tipc_node_link_up(struct tipc_node *n_ptr, struct tipc_link *l_ptr);
int tipc_node_active_links(struct tipc_node *n_ptr);
int tipc_node_is_up(struct tipc_node *n_ptr);
-struct sk_buff *tipc_node_get_links(const void *req_tlv_area, int req_tlv_space);
-struct sk_buff *tipc_node_get_nodes(const void *req_tlv_area, int req_tlv_space);
-int tipc_node_get_linkname(u32 bearer_id, u32 node, char *linkname, size_t len);
+struct sk_buff *tipc_node_get_links(struct net *net, const void *req_tlv_area,
+ int req_tlv_space);
+struct sk_buff *tipc_node_get_nodes(struct net *net, const void *req_tlv_area,
+ int req_tlv_space);
+int tipc_node_get_linkname(struct net *net, u32 bearer_id, u32 node,
+ char *linkname, size_t len);
void tipc_node_unlock(struct tipc_node *node);
-int tipc_node_add_conn(u32 dnode, u32 port, u32 peer_port);
-void tipc_node_remove_conn(u32 dnode, u32 port);
+int tipc_node_add_conn(struct net *net, u32 dnode, u32 port, u32 peer_port);
+void tipc_node_remove_conn(struct net *net, u32 dnode, u32 port);
int tipc_nl_node_dump(struct sk_buff *skb, struct netlink_callback *cb);
TIPC_NOTIFY_NODE_DOWN | TIPC_WAIT_OWN_LINKS_DOWN));
}
-static inline uint tipc_node_get_mtu(u32 addr, u32 selector)
+static inline uint tipc_node_get_mtu(struct net *net, u32 addr, u32 selector)
{
struct tipc_node *node;
u32 mtu;
- node = tipc_node_find(addr);
+ node = tipc_node_find(net, addr);
if (likely(node))
mtu = node->act_mtus[selector & 1];
#include "server.h"
#include "core.h"
+#include "socket.h"
#include <net/sock.h>
/* Number of messages to send before rescheduling */
goto out_close;
}
- s->tipc_conn_recvmsg(con->conid, &addr, con->usr_data, buf, ret);
+ s->tipc_conn_recvmsg(sock_net(con->sock->sk), con->conid, &addr,
+ con->usr_data, buf, ret);
kmem_cache_free(s->rcvbuf_cache, buf);
struct socket *sock = NULL;
int ret;
- ret = tipc_sock_create_local(s->type, &sock);
+ ret = tipc_sock_create_local(s->net, s->type, &sock);
if (ret < 0)
return NULL;
ret = kernel_setsockopt(sock, SOL_TIPC, TIPC_IMPORTANCE,
#ifndef _TIPC_SERVER_H
#define _TIPC_SERVER_H
-#include "core.h"
+#include <linux/idr.h>
+#include <linux/tipc.h>
+#include <net/net_namespace.h>
#define TIPC_SERVER_NAME_LEN 32
* @conn_idr: identifier set of connection
* @idr_lock: protect the connection identifier set
* @idr_in_use: amount of allocated identifier entry
+ * @net: network namespace instance
* @rcvbuf_cache: memory cache of server receive buffer
* @rcv_wq: receive workqueue
* @send_wq: send workqueue
struct idr conn_idr;
spinlock_t idr_lock;
int idr_in_use;
+ struct net *net;
struct kmem_cache *rcvbuf_cache;
struct workqueue_struct *rcv_wq;
struct workqueue_struct *send_wq;
int max_rcvbuf_size;
- void *(*tipc_conn_new) (int conid);
- void (*tipc_conn_shutdown) (int conid, void *usr_data);
- void (*tipc_conn_recvmsg) (int conid, struct sockaddr_tipc *addr,
- void *usr_data, void *buf, size_t len);
+ void *(*tipc_conn_new)(int conid);
+ void (*tipc_conn_shutdown)(int conid, void *usr_data);
+ void (*tipc_conn_recvmsg)(struct net *net, int conid,
+ struct sockaddr_tipc *addr, void *usr_data,
+ void *buf, size_t len);
struct sockaddr_tipc *saddr;
- const char name[TIPC_SERVER_NAME_LEN];
+ char name[TIPC_SERVER_NAME_LEN];
int imp;
int type;
};
* POSSIBILITY OF SUCH DAMAGE.
*/
+#include <linux/rhashtable.h>
+#include <linux/jhash.h>
#include "core.h"
#include "name_table.h"
#include "node.h"
#include "link.h"
-#include <linux/export.h>
#include "config.h"
#include "socket.h"
-#define SS_LISTENING -1 /* socket is listening */
-#define SS_READY -2 /* socket is connectionless */
+#define SS_LISTENING -1 /* socket is listening */
+#define SS_READY -2 /* socket is connectionless */
-#define CONN_TIMEOUT_DEFAULT 8000 /* default connect timeout = 8s */
-#define CONN_PROBING_INTERVAL 3600000 /* [ms] => 1 h */
-#define TIPC_FWD_MSG 1
-#define TIPC_CONN_OK 0
-#define TIPC_CONN_PROBING 1
+#define CONN_TIMEOUT_DEFAULT 8000 /* default connect timeout = 8s */
+#define CONN_PROBING_INTERVAL msecs_to_jiffies(3600000) /* [ms] => 1 h */
+#define TIPC_FWD_MSG 1
+#define TIPC_CONN_OK 0
+#define TIPC_CONN_PROBING 1
+#define TIPC_MAX_PORT 0xffffffff
+#define TIPC_MIN_PORT 1
/**
* struct tipc_sock - TIPC socket structure
* @conn_instance: TIPC instance used when connection was established
* @published: non-zero if port has one or more associated names
* @max_pkt: maximum packet size "hint" used when building messages sent by port
- * @ref: unique reference to port in TIPC object registry
+ * @portid: unique port identity in TIPC socket hash table
* @phdr: preformatted message header used when sending messages
* @port_list: adjacent ports in TIPC's global list of ports
* @publications: list of publications for port
* @pub_count: total # of publications port has made during its lifetime
 * @probing_state: connection probing state (TIPC_CONN_OK or TIPC_CONN_PROBING)
- * @probing_interval:
- * @timer:
+ * @probing_intv: connection probing interval, in jiffies
* @port: port - interacts with 'sk' and with the rest of the TIPC stack
* @peer_name: the peer of the connection, if any
* @conn_timeout: the time we can wait for an unresponded setup request
* @link_cong: non-zero if owner must sleep because of link congestion
* @sent_unacked: # messages sent by socket, and not yet acked by peer
* @rcv_unacked: # messages read by user, but not yet acked back to peer
+ * @node: hash table node
+ * @rcu: rcu struct for tipc_sock
*/
struct tipc_sock {
struct sock sk;
u32 conn_instance;
int published;
u32 max_pkt;
- u32 ref;
+ u32 portid;
struct tipc_msg phdr;
struct list_head sock_list;
struct list_head publications;
u32 pub_count;
u32 probing_state;
- u32 probing_interval;
- struct timer_list timer;
+ unsigned long probing_intv;
uint conn_timeout;
atomic_t dupl_rcvcnt;
bool link_cong;
uint sent_unacked;
uint rcv_unacked;
+ struct rhash_head node;
+ struct rcu_head rcu;
};
static int tipc_backlog_rcv(struct sock *sk, struct sk_buff *skb);
static int tipc_release(struct socket *sock);
static int tipc_accept(struct socket *sock, struct socket *new_sock, int flags);
static int tipc_wait_for_sndmsg(struct socket *sock, long *timeo_p);
-static void tipc_sk_timeout(unsigned long ref);
+static void tipc_sk_timeout(unsigned long data);
static int tipc_sk_publish(struct tipc_sock *tsk, uint scope,
struct tipc_name_seq const *seq);
static int tipc_sk_withdraw(struct tipc_sock *tsk, uint scope,
struct tipc_name_seq const *seq);
-static u32 tipc_sk_ref_acquire(struct tipc_sock *tsk);
-static void tipc_sk_ref_discard(u32 ref);
-static struct tipc_sock *tipc_sk_get(u32 ref);
-static struct tipc_sock *tipc_sk_get_next(u32 *ref);
-static void tipc_sk_put(struct tipc_sock *tsk);
+static struct tipc_sock *tipc_sk_lookup(struct net *net, u32 portid);
+static int tipc_sk_insert(struct tipc_sock *tsk);
+static void tipc_sk_remove(struct tipc_sock *tsk);
static const struct proto_ops packet_ops;
static const struct proto_ops stream_ops;
{
struct sk_buff *skb;
u32 dnode;
+ struct net *net = sock_net(sk);
while ((skb = __skb_dequeue(&sk->sk_receive_queue))) {
- if (tipc_msg_reverse(skb, &dnode, TIPC_ERR_NO_PORT))
- tipc_link_xmit_skb(skb, dnode, 0);
+ if (tipc_msg_reverse(net, skb, &dnode, TIPC_ERR_NO_PORT))
+ tipc_link_xmit_skb(net, skb, dnode, 0);
}
}
*/
static bool tsk_peer_msg(struct tipc_sock *tsk, struct tipc_msg *msg)
{
+ struct tipc_net *tn = net_generic(sock_net(&tsk->sk), tipc_net_id);
u32 peer_port = tsk_peer_port(tsk);
u32 orig_node;
u32 peer_node;
if (likely(orig_node == peer_node))
return true;
- if (!orig_node && (peer_node == tipc_own_addr))
+ if (!orig_node && (peer_node == tn->own_addr))
return true;
- if (!peer_node && (orig_node == tipc_own_addr))
+ if (!peer_node && (orig_node == tn->own_addr))
return true;
return false;
struct sock *sk;
struct tipc_sock *tsk;
struct tipc_msg *msg;
- u32 ref;
/* Validate arguments */
if (unlikely(protocol != 0))
return -ENOMEM;
tsk = tipc_sk(sk);
- ref = tipc_sk_ref_acquire(tsk);
- if (!ref) {
- pr_warn("Socket create failed; reference table exhausted\n");
- return -ENOMEM;
- }
tsk->max_pkt = MAX_PKT_DEFAULT;
- tsk->ref = ref;
INIT_LIST_HEAD(&tsk->publications);
msg = &tsk->phdr;
- tipc_msg_init(msg, TIPC_LOW_IMPORTANCE, TIPC_NAMED_MSG,
+ tipc_msg_init(net, msg, TIPC_LOW_IMPORTANCE, TIPC_NAMED_MSG,
NAMED_H_SIZE, 0);
- msg_set_origport(msg, ref);
/* Finish initializing socket data structures */
sock->ops = ops;
sock->state = state;
sock_init_data(sock, sk);
- k_init_timer(&tsk->timer, (Handler)tipc_sk_timeout, ref);
+ if (tipc_sk_insert(tsk)) {
+ pr_warn("Socket create failed; port numbrer exhausted\n");
+ return -EINVAL;
+ }
+ msg_set_origport(msg, tsk->portid);
+ setup_timer(&sk->sk_timer, tipc_sk_timeout, (unsigned long)tsk);
sk->sk_backlog_rcv = tipc_backlog_rcv;
sk->sk_rcvbuf = sysctl_tipc_rmem[1];
sk->sk_data_ready = tipc_data_ready;
*
* Returns 0 on success, errno otherwise
*/
-int tipc_sock_create_local(int type, struct socket **res)
+int tipc_sock_create_local(struct net *net, int type, struct socket **res)
{
int rc;
pr_err("Failed to create kernel socket\n");
return rc;
}
- tipc_sk_create(&init_net, *res, 0, 1);
+ tipc_sk_create(net, *res, 0, 1);
return 0;
}
return ret;
}
+static void tipc_sk_callback(struct rcu_head *head)
+{
+ struct tipc_sock *tsk = container_of(head, struct tipc_sock, rcu);
+
+ sock_put(&tsk->sk);
+}
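
Aside: tipc_sk_callback() above exists so that the final sock_put() happens only after an RCU grace period; tipc_sk_lookup() walks the hash table under rcu_read_lock(), so the socket must stay valid until every such reader is done. A minimal sketch of the idiom, assuming <linux/rcupdate.h> and <linux/slab.h>; struct foo and its helpers are hypothetical, not part of the patch:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	struct rcu_head rcu;		/* hypothetical example object */
	/* ... payload visible to RCU readers ... */
};

static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	kfree(f);			/* safe: all prior RCU readers are done */
}

static void foo_release(struct foo *f)
{
	/* unpublish f from the lookup structure first, then defer the free */
	call_rcu(&f->rcu, foo_free_rcu);
}
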
+
/**
* tipc_release - destroy a TIPC socket
* @sock: socket to destroy
static int tipc_release(struct socket *sock)
{
struct sock *sk = sock->sk;
+ struct net *net;
+ struct tipc_net *tn;
struct tipc_sock *tsk;
struct sk_buff *skb;
- u32 dnode;
+ u32 dnode, probing_state;
/*
* Exit if socket isn't fully initialized (occurs when a failed accept()
if (sk == NULL)
return 0;
+ net = sock_net(sk);
+ tn = net_generic(net, tipc_net_id);
+
tsk = tipc_sk(sk);
lock_sock(sk);
(sock->state == SS_CONNECTED)) {
sock->state = SS_DISCONNECTING;
tsk->connected = 0;
- tipc_node_remove_conn(dnode, tsk->ref);
+ tipc_node_remove_conn(net, dnode, tsk->portid);
}
- if (tipc_msg_reverse(skb, &dnode, TIPC_ERR_NO_PORT))
- tipc_link_xmit_skb(skb, dnode, 0);
+ if (tipc_msg_reverse(net, skb, &dnode,
+ TIPC_ERR_NO_PORT))
+ tipc_link_xmit_skb(net, skb, dnode, 0);
}
}
tipc_sk_withdraw(tsk, 0, NULL);
- tipc_sk_ref_discard(tsk->ref);
- k_cancel_timer(&tsk->timer);
+ probing_state = tsk->probing_state;
+ if (del_timer_sync(&sk->sk_timer) &&
+ probing_state != TIPC_CONN_PROBING)
+ sock_put(sk);
+ tipc_sk_remove(tsk);
if (tsk->connected) {
- skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE, TIPC_CONN_MSG,
- SHORT_H_SIZE, 0, dnode, tipc_own_addr,
- tsk_peer_port(tsk),
- tsk->ref, TIPC_ERR_NO_PORT);
+ skb = tipc_msg_create(net, TIPC_CRITICAL_IMPORTANCE,
+ TIPC_CONN_MSG, SHORT_H_SIZE, 0, dnode,
+ tn->own_addr, tsk_peer_port(tsk),
+ tsk->portid, TIPC_ERR_NO_PORT);
if (skb)
- tipc_link_xmit_skb(skb, dnode, tsk->ref);
- tipc_node_remove_conn(dnode, tsk->ref);
+ tipc_link_xmit_skb(net, skb, dnode, tsk->portid);
+ tipc_node_remove_conn(net, dnode, tsk->portid);
}
- k_term_timer(&tsk->timer);
/* Discard any remaining (connection-based) messages in receive queue */
__skb_queue_purge(&sk->sk_receive_queue);
/* Reject any messages that accumulated in backlog queue */
sock->state = SS_DISCONNECTING;
release_sock(sk);
- sock_put(sk);
+
+ call_rcu(&tsk->rcu, tipc_sk_callback);
sock->sk = NULL;
return 0;
{
struct sockaddr_tipc *addr = (struct sockaddr_tipc *)uaddr;
struct tipc_sock *tsk = tipc_sk(sock->sk);
+ struct tipc_net *tn = net_generic(sock_net(sock->sk), tipc_net_id);
memset(addr, 0, sizeof(*addr));
if (peer) {
addr->addr.id.ref = tsk_peer_port(tsk);
addr->addr.id.node = tsk_peer_node(tsk);
} else {
- addr->addr.id.ref = tsk->ref;
- addr->addr.id.node = tipc_own_addr;
+ addr->addr.id.ref = tsk->portid;
+ addr->addr.id.node = tn->own_addr;
}
*uaddr_len = sizeof(*addr);
struct msghdr *msg, size_t dsz, long timeo)
{
struct sock *sk = sock->sk;
+ struct net *net = sock_net(sk);
struct tipc_msg *mhdr = &tipc_sk(sk)->phdr;
struct sk_buff_head head;
uint mtu;
new_mtu:
mtu = tipc_bclink_get_mtu();
__skb_queue_head_init(&head);
- rc = tipc_msg_build(mhdr, msg, 0, dsz, mtu, &head);
+ rc = tipc_msg_build(net, mhdr, msg, 0, dsz, mtu, &head);
if (unlikely(rc < 0))
return rc;
do {
- rc = tipc_bclink_xmit(&head);
+ rc = tipc_bclink_xmit(net, &head);
if (likely(rc >= 0)) {
rc = dsz;
break;
/* tipc_sk_mcast_rcv - Deliver multicast message to all destination sockets
*/
-void tipc_sk_mcast_rcv(struct sk_buff *buf)
+void tipc_sk_mcast_rcv(struct net *net, struct sk_buff *buf)
{
struct tipc_msg *msg = buf_msg(buf);
struct tipc_port_list dports = {0, NULL, };
uint i, last, dst = 0;
u32 scope = TIPC_CLUSTER_SCOPE;
- if (in_own_node(msg_orignode(msg)))
+ if (in_own_node(net, msg_orignode(msg)))
scope = TIPC_NODE_SCOPE;
/* Create destination port list: */
- tipc_nametbl_mc_translate(msg_nametype(msg),
- msg_namelower(msg),
- msg_nameupper(msg),
- scope,
- &dports);
+ tipc_nametbl_mc_translate(net, msg_nametype(msg), msg_namelower(msg),
+ msg_nameupper(msg), scope, &dports);
last = dports.count;
if (!last) {
kfree_skb(buf);
continue;
}
msg_set_destport(msg, item->ports[i]);
- tipc_sk_rcv(b);
+ tipc_sk_rcv(net, b);
}
}
tipc_port_list_free(&dports);
if (conn_cong)
tsk->sk.sk_write_space(&tsk->sk);
} else if (msg_type(msg) == CONN_PROBE) {
- if (!tipc_msg_reverse(buf, dnode, TIPC_OK))
+ if (!tipc_msg_reverse(sock_net(&tsk->sk), buf, dnode, TIPC_OK))
return TIPC_OK;
msg_set_type(msg, CONN_PROBE_REPLY);
return TIPC_FWD_MSG;
DECLARE_SOCKADDR(struct sockaddr_tipc *, dest, m->msg_name);
struct sock *sk = sock->sk;
struct tipc_sock *tsk = tipc_sk(sk);
+ struct net *net = sock_net(sk);
struct tipc_msg *mhdr = &tsk->phdr;
u32 dnode, dport;
struct sk_buff_head head;
msg_set_nametype(mhdr, type);
msg_set_nameinst(mhdr, inst);
msg_set_lookup_scope(mhdr, tipc_addr_scope(domain));
- dport = tipc_nametbl_translate(type, inst, &dnode);
+ dport = tipc_nametbl_translate(net, type, inst, &dnode);
msg_set_destnode(mhdr, dnode);
msg_set_destport(mhdr, dport);
if (unlikely(!dport && !dnode)) {
}
new_mtu:
- mtu = tipc_node_get_mtu(dnode, tsk->ref);
+ mtu = tipc_node_get_mtu(net, dnode, tsk->portid);
__skb_queue_head_init(&head);
- rc = tipc_msg_build(mhdr, m, 0, dsz, mtu, &head);
+ rc = tipc_msg_build(net, mhdr, m, 0, dsz, mtu, &head);
if (rc < 0)
goto exit;
do {
skb = skb_peek(&head);
TIPC_SKB_CB(skb)->wakeup_pending = tsk->link_cong;
- rc = tipc_link_xmit(&head, dnode, tsk->ref);
+ rc = tipc_link_xmit(net, &head, dnode, tsk->portid);
if (likely(rc >= 0)) {
if (sock->state != SS_READY)
sock->state = SS_CONNECTING;
struct msghdr *m, size_t dsz)
{
struct sock *sk = sock->sk;
+ struct net *net = sock_net(sk);
struct tipc_sock *tsk = tipc_sk(sk);
struct tipc_msg *mhdr = &tsk->phdr;
struct sk_buff_head head;
DECLARE_SOCKADDR(struct sockaddr_tipc *, dest, m->msg_name);
- u32 ref = tsk->ref;
+ u32 portid = tsk->portid;
int rc = -EINVAL;
long timeo;
u32 dnode;
mtu = tsk->max_pkt;
send = min_t(uint, dsz - sent, TIPC_MAX_USER_MSG_SIZE);
__skb_queue_head_init(&head);
- rc = tipc_msg_build(mhdr, m, sent, send, mtu, &head);
+ rc = tipc_msg_build(net, mhdr, m, sent, send, mtu, &head);
if (unlikely(rc < 0))
goto exit;
do {
if (likely(!tsk_conn_cong(tsk))) {
- rc = tipc_link_xmit(&head, dnode, ref);
+ rc = tipc_link_xmit(net, &head, dnode, portid);
if (likely(!rc)) {
tsk->sent_unacked++;
sent += send;
goto next;
}
if (rc == -EMSGSIZE) {
- tsk->max_pkt = tipc_node_get_mtu(dnode, ref);
+ tsk->max_pkt = tipc_node_get_mtu(net, dnode,
+ portid);
goto next;
}
if (rc != -ELINKCONG)
static void tipc_sk_finish_conn(struct tipc_sock *tsk, u32 peer_port,
u32 peer_node)
{
+ struct sock *sk = &tsk->sk;
+ struct net *net = sock_net(sk);
struct tipc_msg *msg = &tsk->phdr;
msg_set_destnode(msg, peer_node);
msg_set_lookup_scope(msg, 0);
msg_set_hdr_sz(msg, SHORT_H_SIZE);
- tsk->probing_interval = CONN_PROBING_INTERVAL;
+ tsk->probing_intv = CONN_PROBING_INTERVAL;
tsk->probing_state = TIPC_CONN_OK;
tsk->connected = 1;
- k_start_timer(&tsk->timer, tsk->probing_interval);
- tipc_node_add_conn(peer_node, tsk->ref, peer_port);
- tsk->max_pkt = tipc_node_get_mtu(peer_node, tsk->ref);
+ sk_reset_timer(sk, &sk->sk_timer, jiffies + tsk->probing_intv);
+ tipc_node_add_conn(net, peer_node, tsk->portid, peer_port);
+ tsk->max_pkt = tipc_node_get_mtu(net, peer_node, tsk->portid);
}
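
Aside: the hunk above replaces TIPC's private k_timer wrappers with the generic sock timer. sk_reset_timer() takes a reference on the sock when it arms a non-pending timer, and the handler drops it with sock_put() when done, which is why tipc_release() only does sock_put() when del_timer_sync() actually cancelled a pending timer. A hedged sketch of the pattern, assuming the pre-4.15 timer API (setup_timer with an unsigned long cookie); my_sk_timeout and my_sk_arm are hypothetical:

#include <linux/jiffies.h>
#include <linux/timer.h>
#include <net/sock.h>

static void my_sk_timeout(unsigned long data)
{
	struct sock *sk = (struct sock *)data;

	bh_lock_sock(sk);
	/* ... periodic work; optionally re-arm with sk_reset_timer() ... */
	bh_unlock_sock(sk);
	sock_put(sk);		/* drop the ref sk_reset_timer() took */
}

static void my_sk_arm(struct sock *sk, unsigned long intv_ms)
{
	setup_timer(&sk->sk_timer, my_sk_timeout, (unsigned long)sk);
	/* holds a reference on sk while the timer is pending */
	sk_reset_timer(sk, &sk->sk_timer, jiffies + msecs_to_jiffies(intv_ms));
}
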
/**
static void tipc_sk_send_ack(struct tipc_sock *tsk, uint ack)
{
+ struct net *net = sock_net(&tsk->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *skb = NULL;
struct tipc_msg *msg;
u32 peer_port = tsk_peer_port(tsk);
if (!tsk->connected)
return;
- skb = tipc_msg_create(CONN_MANAGER, CONN_ACK, INT_H_SIZE, 0, dnode,
- tipc_own_addr, peer_port, tsk->ref, TIPC_OK);
+ skb = tipc_msg_create(net, CONN_MANAGER, CONN_ACK, INT_H_SIZE, 0,
+ dnode, tn->own_addr, peer_port, tsk->portid,
+ TIPC_OK);
if (!skb)
return;
msg = buf_msg(skb);
msg_set_msgcnt(msg, ack);
- tipc_link_xmit_skb(skb, dnode, msg_link_selector(msg));
+ tipc_link_xmit_skb(net, skb, dnode, msg_link_selector(msg));
}
static int tipc_wait_for_rcvmsg(struct socket *sock, long *timeop)
static int filter_connect(struct tipc_sock *tsk, struct sk_buff **buf)
{
struct sock *sk = &tsk->sk;
+ struct net *net = sock_net(sk);
struct socket *sock = sk->sk_socket;
struct tipc_msg *msg = buf_msg(*buf);
int retval = -TIPC_ERR_NO_PORT;
sock->state = SS_DISCONNECTING;
tsk->connected = 0;
/* let timer expire on its own */
- tipc_node_remove_conn(tsk_peer_node(tsk),
- tsk->ref);
+ tipc_node_remove_conn(net, tsk_peer_node(tsk),
+ tsk->portid);
}
retval = TIPC_OK;
}
int rc;
u32 onode;
struct tipc_sock *tsk = tipc_sk(sk);
+ struct net *net = sock_net(sk);
uint truesize = skb->truesize;
rc = filter_rcv(sk, skb);
return 0;
}
- if ((rc < 0) && !tipc_msg_reverse(skb, &onode, -rc))
+ if ((rc < 0) && !tipc_msg_reverse(net, skb, &onode, -rc))
return 0;
- tipc_link_xmit_skb(skb, onode, 0);
+ tipc_link_xmit_skb(net, skb, onode, 0);
return 0;
}
* Consumes buffer
* Returns 0 if success, or errno: -EHOSTUNREACH
*/
-int tipc_sk_rcv(struct sk_buff *skb)
+int tipc_sk_rcv(struct net *net, struct sk_buff *skb)
{
struct tipc_sock *tsk;
struct sock *sk;
u32 dnode;
/* Validate destination and message */
- tsk = tipc_sk_get(dport);
+ tsk = tipc_sk_lookup(net, dport);
if (unlikely(!tsk)) {
- rc = tipc_msg_eval(skb, &dnode);
+ rc = tipc_msg_eval(net, skb, &dnode);
goto exit;
}
sk = &tsk->sk;
rc = -TIPC_ERR_OVERLOAD;
}
spin_unlock_bh(&sk->sk_lock.slock);
- tipc_sk_put(tsk);
+ sock_put(sk);
if (likely(!rc))
return 0;
exit:
- if ((rc < 0) && !tipc_msg_reverse(skb, &dnode, -rc))
+ if ((rc < 0) && !tipc_msg_reverse(net, skb, &dnode, -rc))
return -EHOSTUNREACH;
- tipc_link_xmit_skb(skb, dnode, 0);
+ tipc_link_xmit_skb(net, skb, dnode, 0);
return (rc < 0) ? -EHOSTUNREACH : 0;
}
static int tipc_shutdown(struct socket *sock, int how)
{
struct sock *sk = sock->sk;
+ struct net *net = sock_net(sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_sock *tsk = tipc_sk(sk);
struct sk_buff *skb;
u32 dnode;
kfree_skb(skb);
goto restart;
}
- if (tipc_msg_reverse(skb, &dnode, TIPC_CONN_SHUTDOWN))
- tipc_link_xmit_skb(skb, dnode, tsk->ref);
- tipc_node_remove_conn(dnode, tsk->ref);
+ if (tipc_msg_reverse(net, skb, &dnode,
+ TIPC_CONN_SHUTDOWN))
+ tipc_link_xmit_skb(net, skb, dnode,
+ tsk->portid);
+ tipc_node_remove_conn(net, dnode, tsk->portid);
} else {
dnode = tsk_peer_node(tsk);
- skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE,
+ skb = tipc_msg_create(net, TIPC_CRITICAL_IMPORTANCE,
TIPC_CONN_MSG, SHORT_H_SIZE,
- 0, dnode, tipc_own_addr,
+ 0, dnode, tn->own_addr,
tsk_peer_port(tsk),
- tsk->ref, TIPC_CONN_SHUTDOWN);
- tipc_link_xmit_skb(skb, dnode, tsk->ref);
+ tsk->portid, TIPC_CONN_SHUTDOWN);
+ tipc_link_xmit_skb(net, skb, dnode, tsk->portid);
}
tsk->connected = 0;
sock->state = SS_DISCONNECTING;
- tipc_node_remove_conn(dnode, tsk->ref);
+ tipc_node_remove_conn(net, dnode, tsk->portid);
/* fall through */
case SS_DISCONNECTING:
return res;
}
-static void tipc_sk_timeout(unsigned long ref)
+static void tipc_sk_timeout(unsigned long data)
{
- struct tipc_sock *tsk;
- struct sock *sk;
+ struct tipc_sock *tsk = (struct tipc_sock *)data;
+ struct sock *sk = &tsk->sk;
+ struct net *net = sock_net(sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct sk_buff *skb = NULL;
u32 peer_port, peer_node;
- tsk = tipc_sk_get(ref);
- if (!tsk)
- return;
-
- sk = &tsk->sk;
bh_lock_sock(sk);
if (!tsk->connected) {
bh_unlock_sock(sk);
if (tsk->probing_state == TIPC_CONN_PROBING) {
/* Previous probe not answered -> self abort */
- skb = tipc_msg_create(TIPC_CRITICAL_IMPORTANCE, TIPC_CONN_MSG,
- SHORT_H_SIZE, 0, tipc_own_addr,
- peer_node, ref, peer_port,
- TIPC_ERR_NO_PORT);
+ skb = tipc_msg_create(net, TIPC_CRITICAL_IMPORTANCE,
+ TIPC_CONN_MSG, SHORT_H_SIZE, 0,
+ tn->own_addr, peer_node, tsk->portid,
+ peer_port, TIPC_ERR_NO_PORT);
} else {
- skb = tipc_msg_create(CONN_MANAGER, CONN_PROBE, INT_H_SIZE,
- 0, peer_node, tipc_own_addr,
- peer_port, ref, TIPC_OK);
+ skb = tipc_msg_create(net, CONN_MANAGER, CONN_PROBE, INT_H_SIZE,
+ 0, peer_node, tn->own_addr,
+ peer_port, tsk->portid, TIPC_OK);
tsk->probing_state = TIPC_CONN_PROBING;
- k_start_timer(&tsk->timer, tsk->probing_interval);
+ sk_reset_timer(sk, &sk->sk_timer, jiffies + tsk->probing_intv);
}
bh_unlock_sock(sk);
if (skb)
- tipc_link_xmit_skb(skb, peer_node, ref);
+ tipc_link_xmit_skb(sock_net(sk), skb, peer_node, tsk->portid);
exit:
- tipc_sk_put(tsk);
+ sock_put(sk);
}
static int tipc_sk_publish(struct tipc_sock *tsk, uint scope,
struct tipc_name_seq const *seq)
{
+ struct net *net = sock_net(&tsk->sk);
struct publication *publ;
u32 key;
if (tsk->connected)
return -EINVAL;
- key = tsk->ref + tsk->pub_count + 1;
- if (key == tsk->ref)
+ key = tsk->portid + tsk->pub_count + 1;
+ if (key == tsk->portid)
return -EADDRINUSE;
- publ = tipc_nametbl_publish(seq->type, seq->lower, seq->upper,
- scope, tsk->ref, key);
+ publ = tipc_nametbl_publish(net, seq->type, seq->lower, seq->upper,
+ scope, tsk->portid, key);
if (unlikely(!publ))
return -EINVAL;
static int tipc_sk_withdraw(struct tipc_sock *tsk, uint scope,
struct tipc_name_seq const *seq)
{
+ struct net *net = sock_net(&tsk->sk);
struct publication *publ;
struct publication *safe;
int rc = -EINVAL;
continue;
if (publ->upper != seq->upper)
break;
- tipc_nametbl_withdraw(publ->type, publ->lower,
+ tipc_nametbl_withdraw(net, publ->type, publ->lower,
publ->ref, publ->key);
rc = 0;
break;
}
- tipc_nametbl_withdraw(publ->type, publ->lower,
+ tipc_nametbl_withdraw(net, publ->type, publ->lower,
publ->ref, publ->key);
rc = 0;
}
static int tipc_sk_show(struct tipc_sock *tsk, char *buf,
int len, int full_id)
{
+ struct net *net = sock_net(&tsk->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct publication *publ;
int ret;
if (full_id)
ret = tipc_snprintf(buf, len, "<%u.%u.%u:%u>:",
- tipc_zone(tipc_own_addr),
- tipc_cluster(tipc_own_addr),
- tipc_node(tipc_own_addr), tsk->ref);
+ tipc_zone(tn->own_addr),
+ tipc_cluster(tn->own_addr),
+ tipc_node(tn->own_addr), tsk->portid);
else
- ret = tipc_snprintf(buf, len, "%-10u:", tsk->ref);
+ ret = tipc_snprintf(buf, len, "%-10u:", tsk->portid);
if (tsk->connected) {
u32 dport = tsk_peer_port(tsk);
return ret;
}
-struct sk_buff *tipc_sk_socks_show(void)
+struct sk_buff *tipc_sk_socks_show(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ const struct bucket_table *tbl;
+ struct rhash_head *pos;
struct sk_buff *buf;
struct tlv_desc *rep_tlv;
char *pb;
int pb_len;
struct tipc_sock *tsk;
int str_len = 0;
- u32 ref = 0;
+ int i;
buf = tipc_cfg_reply_alloc(TLV_SPACE(ULTRA_STRING_MAX_LEN));
if (!buf)
pb = TLV_DATA(rep_tlv);
pb_len = ULTRA_STRING_MAX_LEN;
- tsk = tipc_sk_get_next(&ref);
- for (; tsk; tsk = tipc_sk_get_next(&ref)) {
- lock_sock(&tsk->sk);
- str_len += tipc_sk_show(tsk, pb + str_len,
- pb_len - str_len, 0);
- release_sock(&tsk->sk);
- tipc_sk_put(tsk);
+ rcu_read_lock();
+ tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+ for (i = 0; i < tbl->size; i++) {
+ rht_for_each_entry_rcu(tsk, pos, tbl, i, node) {
+ spin_lock_bh(&tsk->sk.sk_lock.slock);
+ str_len += tipc_sk_show(tsk, pb + str_len,
+ pb_len - str_len, 0);
+ spin_unlock_bh(&tsk->sk.sk_lock.slock);
+ }
}
+ rcu_read_unlock();
+
str_len += 1; /* for "\0" */
skb_put(buf, TLV_SPACE(str_len));
TLV_SET(rep_tlv, TIPC_TLV_ULTRA_STRING, NULL, str_len);
/* tipc_sk_reinit: set non-zero address in all existing sockets
* when we go from standalone to network mode.
*/
-void tipc_sk_reinit(void)
+void tipc_sk_reinit(struct net *net)
{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ const struct bucket_table *tbl;
+ struct rhash_head *pos;
+ struct tipc_sock *tsk;
struct tipc_msg *msg;
- u32 ref = 0;
- struct tipc_sock *tsk = tipc_sk_get_next(&ref);
+ int i;
- for (; tsk; tsk = tipc_sk_get_next(&ref)) {
- lock_sock(&tsk->sk);
- msg = &tsk->phdr;
- msg_set_prevnode(msg, tipc_own_addr);
- msg_set_orignode(msg, tipc_own_addr);
- release_sock(&tsk->sk);
- tipc_sk_put(tsk);
+ rcu_read_lock();
+ tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+ for (i = 0; i < tbl->size; i++) {
+ rht_for_each_entry_rcu(tsk, pos, tbl, i, node) {
+ spin_lock_bh(&tsk->sk.sk_lock.slock);
+ msg = &tsk->phdr;
+ msg_set_prevnode(msg, tn->own_addr);
+ msg_set_orignode(msg, tn->own_addr);
+ spin_unlock_bh(&tsk->sk.sk_lock.slock);
+ }
}
+ rcu_read_unlock();
}
-/**
- * struct reference - TIPC socket reference entry
- * @tsk: pointer to socket associated with reference entry
- * @ref: reference value for socket (combines instance & array index info)
- */
-struct reference {
- struct tipc_sock *tsk;
- u32 ref;
-};
-
-/**
- * struct tipc_ref_table - table of TIPC socket reference entries
- * @entries: pointer to array of reference entries
- * @capacity: array index of first unusable entry
- * @init_point: array index of first uninitialized entry
- * @first_free: array index of first unused socket reference entry
- * @last_free: array index of last unused socket reference entry
- * @index_mask: bitmask for array index portion of reference values
- * @start_mask: initial value for instance value portion of reference values
- */
-struct ref_table {
- struct reference *entries;
- u32 capacity;
- u32 init_point;
- u32 first_free;
- u32 last_free;
- u32 index_mask;
- u32 start_mask;
-};
-
-/* Socket reference table consists of 2**N entries.
- *
- * State Socket ptr Reference
- * ----- ---------- ---------
- * In use non-NULL XXXX|own index
- * (XXXX changes each time entry is acquired)
- * Free NULL YYYY|next free index
- * (YYYY is one more than last used XXXX)
- * Uninitialized NULL 0
- *
- * Entry 0 is not used; this allows index 0 to denote the end of the free list.
- *
- * Note that a reference value of 0 does not necessarily indicate that an
- * entry is uninitialized, since the last entry in the free list could also
- * have a reference value of 0 (although this is unlikely).
- */
-
-static struct ref_table tipc_ref_table;
-
-static DEFINE_RWLOCK(ref_table_lock);
-
-/**
- * tipc_ref_table_init - create reference table for sockets
- */
-int tipc_sk_ref_table_init(u32 req_sz, u32 start)
+static struct tipc_sock *tipc_sk_lookup(struct net *net, u32 portid)
{
- struct reference *table;
- u32 actual_sz;
-
- /* account for unused entry, then round up size to a power of 2 */
-
- req_sz++;
- for (actual_sz = 16; actual_sz < req_sz; actual_sz <<= 1) {
- /* do nothing */
- };
-
- /* allocate table & mark all entries as uninitialized */
- table = vzalloc(actual_sz * sizeof(struct reference));
- if (table == NULL)
- return -ENOMEM;
-
- tipc_ref_table.entries = table;
- tipc_ref_table.capacity = req_sz;
- tipc_ref_table.init_point = 1;
- tipc_ref_table.first_free = 0;
- tipc_ref_table.last_free = 0;
- tipc_ref_table.index_mask = actual_sz - 1;
- tipc_ref_table.start_mask = start & ~tipc_ref_table.index_mask;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_sock *tsk;
- return 0;
-}
+ rcu_read_lock();
+ tsk = rhashtable_lookup(&tn->sk_rht, &portid);
+ if (tsk)
+ sock_hold(&tsk->sk);
+ rcu_read_unlock();
-/**
- * tipc_ref_table_stop - destroy reference table for sockets
- */
-void tipc_sk_ref_table_stop(void)
-{
- if (!tipc_ref_table.entries)
- return;
- vfree(tipc_ref_table.entries);
- tipc_ref_table.entries = NULL;
+ return tsk;
}
-/* tipc_ref_acquire - create reference to a socket
- *
- * Register an socket pointer in the reference table.
- * Returns a unique reference value that is used from then on to retrieve the
- * socket pointer, or to determine if the socket has been deregistered.
- */
-u32 tipc_sk_ref_acquire(struct tipc_sock *tsk)
+static int tipc_sk_insert(struct tipc_sock *tsk)
{
- u32 index;
- u32 index_mask;
- u32 next_plus_upper;
- u32 ref = 0;
- struct reference *entry;
-
- if (unlikely(!tsk)) {
- pr_err("Attempt to acquire ref. to non-existent obj\n");
- return 0;
- }
- if (unlikely(!tipc_ref_table.entries)) {
- pr_err("Ref. table not found in acquisition attempt\n");
- return 0;
- }
-
- /* Take a free entry, if available; otherwise initialize a new one */
- write_lock_bh(&ref_table_lock);
- index = tipc_ref_table.first_free;
- entry = &tipc_ref_table.entries[index];
-
- if (likely(index)) {
- index = tipc_ref_table.first_free;
- entry = &tipc_ref_table.entries[index];
- index_mask = tipc_ref_table.index_mask;
- next_plus_upper = entry->ref;
- tipc_ref_table.first_free = next_plus_upper & index_mask;
- ref = (next_plus_upper & ~index_mask) + index;
- entry->tsk = tsk;
- } else if (tipc_ref_table.init_point < tipc_ref_table.capacity) {
- index = tipc_ref_table.init_point++;
- entry = &tipc_ref_table.entries[index];
- ref = tipc_ref_table.start_mask + index;
+ struct sock *sk = &tsk->sk;
+ struct net *net = sock_net(sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ u32 remaining = (TIPC_MAX_PORT - TIPC_MIN_PORT) + 1;
+ u32 portid = prandom_u32() % remaining + TIPC_MIN_PORT;
+
+ while (remaining--) {
+ portid++;
+ if ((portid < TIPC_MIN_PORT) || (portid > TIPC_MAX_PORT))
+ portid = TIPC_MIN_PORT;
+ tsk->portid = portid;
+ sock_hold(&tsk->sk);
+ if (rhashtable_lookup_insert(&tn->sk_rht, &tsk->node))
+ return 0;
+ sock_put(&tsk->sk);
}
- if (ref) {
- entry->ref = ref;
- entry->tsk = tsk;
- }
- write_unlock_bh(&ref_table_lock);
- return ref;
+ return -1;
}
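
Aside: tipc_sk_insert() allocates port identities by starting at a random point in [TIPC_MIN_PORT, TIPC_MAX_PORT] and scanning linearly with wrap-around until rhashtable_lookup_insert() accepts the key, giving unpredictable portids and a scan bounded by the size of the identifier space. A user-space sketch of the same scan, with is_taken() as a toy stand-in for the hash-table insert (all names hypothetical):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PORT_MIN 1u
#define PORT_MAX 0xffffffffu

static bool is_taken(uint32_t port)
{
	return port % 7 == 0;	/* toy stand-in for a failed hash insert */
}

static uint32_t alloc_port(void)
{
	uint32_t remaining = (PORT_MAX - PORT_MIN) + 1;
	uint32_t port = rand() % remaining + PORT_MIN;

	while (remaining--) {
		port++;
		if (port < PORT_MIN || port > PORT_MAX)
			port = PORT_MIN;	/* wrap around */
		if (!is_taken(port))
			return port;		/* claimed */
	}
	return 0;	/* identifier space exhausted */
}

int main(void)
{
	printf("allocated port %u\n", alloc_port());
	return 0;
}
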
-/* tipc_sk_ref_discard - invalidate reference to an socket
- *
- * Disallow future references to an socket and free up the entry for re-use.
- */
-void tipc_sk_ref_discard(u32 ref)
+static void tipc_sk_remove(struct tipc_sock *tsk)
{
- struct reference *entry;
- u32 index;
- u32 index_mask;
-
- if (unlikely(!tipc_ref_table.entries)) {
- pr_err("Ref. table not found during discard attempt\n");
- return;
- }
-
- index_mask = tipc_ref_table.index_mask;
- index = ref & index_mask;
- entry = &tipc_ref_table.entries[index];
-
- write_lock_bh(&ref_table_lock);
+ struct sock *sk = &tsk->sk;
+ struct tipc_net *tn = net_generic(sock_net(sk), tipc_net_id);
- if (unlikely(!entry->tsk)) {
- pr_err("Attempt to discard ref. to non-existent socket\n");
- goto exit;
+ if (rhashtable_remove(&tn->sk_rht, &tsk->node)) {
+ WARN_ON(atomic_read(&sk->sk_refcnt) == 1);
+ __sock_put(sk);
}
- if (unlikely(entry->ref != ref)) {
- pr_err("Attempt to discard non-existent reference\n");
- goto exit;
- }
-
- /* Mark entry as unused; increment instance part of entry's
- * reference to invalidate any subsequent references
- */
-
- entry->tsk = NULL;
- entry->ref = (ref & ~index_mask) + (index_mask + 1);
-
- /* Append entry to free entry list */
- if (unlikely(tipc_ref_table.first_free == 0))
- tipc_ref_table.first_free = index;
- else
- tipc_ref_table.entries[tipc_ref_table.last_free].ref |= index;
- tipc_ref_table.last_free = index;
-exit:
- write_unlock_bh(&ref_table_lock);
}
-/* tipc_sk_get - find referenced socket and return pointer to it
- */
-struct tipc_sock *tipc_sk_get(u32 ref)
+int tipc_sk_rht_init(struct net *net)
{
- struct reference *entry;
- struct tipc_sock *tsk;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct rhashtable_params rht_params = {
+ .nelem_hint = 192,
+ .head_offset = offsetof(struct tipc_sock, node),
+ .key_offset = offsetof(struct tipc_sock, portid),
+ .key_len = sizeof(u32), /* portid */
+ .hashfn = jhash,
+ .max_shift = 20, /* 1M */
+ .min_shift = 8, /* 256 */
+ .grow_decision = rht_grow_above_75,
+ .shrink_decision = rht_shrink_below_30,
+ };
- if (unlikely(!tipc_ref_table.entries))
- return NULL;
- read_lock_bh(&ref_table_lock);
- entry = &tipc_ref_table.entries[ref & tipc_ref_table.index_mask];
- tsk = entry->tsk;
- if (likely(tsk && (entry->ref == ref)))
- sock_hold(&tsk->sk);
- else
- tsk = NULL;
- read_unlock_bh(&ref_table_lock);
- return tsk;
+ return rhashtable_init(&tn->sk_rht, &rht_params);
}
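
Aside: with these parameters the key is the 4-byte portid at key_offset, hashed with jhash; the table is pre-sized for roughly 192 entries and resizes between 2^8 and 2^20 buckets using the stock 75%-grow/30%-shrink policies. A hedged round-trip against a table initialized this way, using only the rhashtable calls that appear in this patch; struct item and its helpers are illustrative and must match the key_offset/head_offset given at init time:

#include <linux/rhashtable.h>

struct item {
	u32 key;			/* lives at key_offset */
	struct rhash_head node;		/* lives at head_offset */
};

static int item_store(struct rhashtable *tbl, struct item *it)
{
	/* returns false if an entry with the same key already exists */
	if (!rhashtable_lookup_insert(tbl, &it->node))
		return -EEXIST;
	return 0;
}

static bool item_exists(struct rhashtable *tbl, u32 key)
{
	bool found;

	rcu_read_lock();
	found = rhashtable_lookup(tbl, &key) != NULL;
	rcu_read_unlock();
	return found;
}
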
-/* tipc_sk_get_next - lock & return next socket after referenced one
-*/
-struct tipc_sock *tipc_sk_get_next(u32 *ref)
+void tipc_sk_rht_destroy(struct net *net)
{
- struct reference *entry;
- struct tipc_sock *tsk = NULL;
- uint index = *ref & tipc_ref_table.index_mask;
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
- read_lock_bh(&ref_table_lock);
- while (++index < tipc_ref_table.capacity) {
- entry = &tipc_ref_table.entries[index];
- if (!entry->tsk)
- continue;
- tsk = entry->tsk;
- sock_hold(&tsk->sk);
- *ref = entry->ref;
- break;
- }
- read_unlock_bh(&ref_table_lock);
- return tsk;
-}
+ /* Wait for socket readers to complete */
+ synchronize_net();
-static void tipc_sk_put(struct tipc_sock *tsk)
-{
- sock_put(&tsk->sk);
+ rhashtable_destroy(&tn->sk_rht);
}
/**
return put_user(sizeof(value), ol);
}
-static int tipc_ioctl(struct socket *sk, unsigned int cmd, unsigned long arg)
+static int tipc_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
{
+ struct sock *sk = sock->sk;
struct tipc_sioc_ln_req lnr;
void __user *argp = (void __user *)arg;
case SIOCGETLINKNAME:
if (copy_from_user(&lnr, argp, sizeof(lnr)))
return -EFAULT;
- if (!tipc_node_get_linkname(lnr.bearer_id & 0xffff, lnr.peer,
+ if (!tipc_node_get_linkname(sock_net(sk),
+ lnr.bearer_id & 0xffff, lnr.peer,
lnr.linkname, TIPC_MAX_LINK_NAME)) {
if (copy_to_user(argp, &lnr, sizeof(lnr)))
return -EFAULT;
int err;
void *hdr;
struct nlattr *attrs;
+ struct net *net = sock_net(skb->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&tipc_genl_v2_family, NLM_F_MULTI, TIPC_NL_SOCK_GET);
attrs = nla_nest_start(skb, TIPC_NLA_SOCK);
if (!attrs)
goto genlmsg_cancel;
- if (nla_put_u32(skb, TIPC_NLA_SOCK_REF, tsk->ref))
+ if (nla_put_u32(skb, TIPC_NLA_SOCK_REF, tsk->portid))
goto attr_msg_cancel;
- if (nla_put_u32(skb, TIPC_NLA_SOCK_ADDR, tipc_own_addr))
+ if (nla_put_u32(skb, TIPC_NLA_SOCK_ADDR, tn->own_addr))
goto attr_msg_cancel;
if (tsk->connected) {
{
int err;
struct tipc_sock *tsk;
- u32 prev_ref = cb->args[0];
- u32 ref = prev_ref;
-
- tsk = tipc_sk_get_next(&ref);
- for (; tsk; tsk = tipc_sk_get_next(&ref)) {
- lock_sock(&tsk->sk);
- err = __tipc_nl_add_sk(skb, cb, tsk);
- release_sock(&tsk->sk);
- tipc_sk_put(tsk);
- if (err)
- break;
+ const struct bucket_table *tbl;
+ struct rhash_head *pos;
+ struct net *net = sock_net(skb->sk);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ u32 tbl_id = cb->args[0];
+ u32 prev_portid = cb->args[1];
- prev_ref = ref;
- }
+ rcu_read_lock();
+ tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+ for (; tbl_id < tbl->size; tbl_id++) {
+ rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) {
+ spin_lock_bh(&tsk->sk.sk_lock.slock);
+ if (prev_portid && prev_portid != tsk->portid) {
+ spin_unlock_bh(&tsk->sk.sk_lock.slock);
+ continue;
+ }
- cb->args[0] = prev_ref;
+ err = __tipc_nl_add_sk(skb, cb, tsk);
+ if (err) {
+ prev_portid = tsk->portid;
+ spin_unlock_bh(&tsk->sk.sk_lock.slock);
+ goto out;
+ }
+ prev_portid = 0;
+ spin_unlock_bh(&tsk->sk.sk_lock.slock);
+ }
+ }
+out:
+ rcu_read_unlock();
+ cb->args[0] = tbl_id;
+ cb->args[1] = prev_portid;
return skb->len;
}
int tipc_nl_publ_dump(struct sk_buff *skb, struct netlink_callback *cb)
{
int err;
- u32 tsk_ref = cb->args[0];
+ u32 tsk_portid = cb->args[0];
u32 last_publ = cb->args[1];
u32 done = cb->args[2];
+ struct net *net = sock_net(skb->sk);
struct tipc_sock *tsk;
- if (!tsk_ref) {
+ if (!tsk_portid) {
struct nlattr **attrs;
struct nlattr *sock[TIPC_NLA_SOCK_MAX + 1];
if (!sock[TIPC_NLA_SOCK_REF])
return -EINVAL;
- tsk_ref = nla_get_u32(sock[TIPC_NLA_SOCK_REF]);
+ tsk_portid = nla_get_u32(sock[TIPC_NLA_SOCK_REF]);
}
if (done)
return 0;
- tsk = tipc_sk_get(tsk_ref);
+ tsk = tipc_sk_lookup(net, tsk_portid);
if (!tsk)
return -EINVAL;
if (!err)
done = 1;
release_sock(&tsk->sk);
- tipc_sk_put(tsk);
+ sock_put(&tsk->sk);
- cb->args[0] = tsk_ref;
+ cb->args[0] = tsk_portid;
cb->args[1] = last_publ;
cb->args[2] = done;
#define TIPC_FLOWCTRL_WIN (TIPC_CONNACK_INTV * 2)
#define TIPC_CONN_OVERLOAD_LIMIT ((TIPC_FLOWCTRL_WIN * 2 + 1) * \
SKB_TRUESIZE(TIPC_MAX_USER_MSG_SIZE))
-int tipc_sk_rcv(struct sk_buff *buf);
-struct sk_buff *tipc_sk_socks_show(void);
-void tipc_sk_mcast_rcv(struct sk_buff *buf);
-void tipc_sk_reinit(void);
-int tipc_sk_ref_table_init(u32 requested_size, u32 start);
-void tipc_sk_ref_table_stop(void);
+
+int tipc_socket_init(void);
+void tipc_socket_stop(void);
+int tipc_sock_create_local(struct net *net, int type, struct socket **res);
+void tipc_sock_release_local(struct socket *sock);
+int tipc_sock_accept_local(struct socket *sock, struct socket **newsock,
+ int flags);
+int tipc_sk_rcv(struct net *net, struct sk_buff *buf);
+struct sk_buff *tipc_sk_socks_show(struct net *net);
+void tipc_sk_mcast_rcv(struct net *net, struct sk_buff *buf);
+void tipc_sk_reinit(struct net *net);
+int tipc_sk_rht_init(struct net *net);
+void tipc_sk_rht_destroy(struct net *net);
int tipc_nl_sk_dump(struct sk_buff *skb, struct netlink_callback *cb);
int tipc_nl_publ_dump(struct sk_buff *skb, struct netlink_callback *cb);
struct list_head subscription_list;
};
-static void subscr_conn_msg_event(int conid, struct sockaddr_tipc *addr,
- void *usr_data, void *buf, size_t len);
-static void *subscr_named_msg_event(int conid);
-static void subscr_conn_shutdown_event(int conid, void *usr_data);
-
-static atomic_t subscription_count = ATOMIC_INIT(0);
-
-static struct sockaddr_tipc topsrv_addr __read_mostly = {
- .family = AF_TIPC,
- .addrtype = TIPC_ADDR_NAMESEQ,
- .addr.nameseq.type = TIPC_TOP_SRV,
- .addr.nameseq.lower = TIPC_TOP_SRV,
- .addr.nameseq.upper = TIPC_TOP_SRV,
- .scope = TIPC_NODE_SCOPE
-};
-
-static struct tipc_server topsrv __read_mostly = {
- .saddr = &topsrv_addr,
- .imp = TIPC_CRITICAL_IMPORTANCE,
- .type = SOCK_SEQPACKET,
- .max_rcvbuf_size = sizeof(struct tipc_subscr),
- .name = "topology_server",
- .tipc_conn_recvmsg = subscr_conn_msg_event,
- .tipc_conn_new = subscr_named_msg_event,
- .tipc_conn_shutdown = subscr_conn_shutdown_event,
-};
-
/**
* htohl - convert value to endianness used by destination
* @in: value to convert
u32 found_upper, u32 event, u32 port_ref,
u32 node)
{
+ struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
struct tipc_subscriber *subscriber = sub->subscriber;
struct kvec msg_sect;
sub->evt.found_upper = htohl(found_upper, sub->swap);
sub->evt.port.ref = htohl(port_ref, sub->swap);
sub->evt.port.node = htohl(node, sub->swap);
- tipc_conn_sendmsg(&topsrv, subscriber->conid, NULL, msg_sect.iov_base,
- msg_sect.iov_len);
+ tipc_conn_sendmsg(tn->topsrv, subscriber->conid, NULL,
+ msg_sect.iov_base, msg_sect.iov_len);
}
/**
subscr_send_event(sub, found_lower, found_upper, event, port_ref, node);
}
-static void subscr_timeout(struct tipc_subscription *sub)
+static void subscr_timeout(unsigned long data)
{
+ struct tipc_subscription *sub = (struct tipc_subscription *)data;
struct tipc_subscriber *subscriber = sub->subscriber;
+ struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
/* The spin lock per subscriber is used to protect its members */
spin_lock_bh(&subscriber->lock);
TIPC_SUBSCR_TIMEOUT, 0, 0);
/* Now destroy subscription */
- k_term_timer(&sub->timer);
kfree(sub);
- atomic_dec(&subscription_count);
+ atomic_dec(&tn->subscription_count);
}
/**
*/
static void subscr_del(struct tipc_subscription *sub)
{
+ struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
+
tipc_nametbl_unsubscribe(sub);
list_del(&sub->subscription_list);
kfree(sub);
- atomic_dec(&subscription_count);
+ atomic_dec(&tn->subscription_count);
}
/**
*
* Note: Must call it in process context since it might sleep.
*/
-static void subscr_terminate(struct tipc_subscriber *subscriber)
+static void subscr_terminate(struct tipc_subscription *sub)
{
- tipc_conn_terminate(&topsrv, subscriber->conid);
+ struct tipc_subscriber *subscriber = sub->subscriber;
+ struct tipc_net *tn = net_generic(sub->net, tipc_net_id);
+
+ tipc_conn_terminate(tn->topsrv, subscriber->conid);
}
static void subscr_release(struct tipc_subscriber *subscriber)
subscription_list) {
if (sub->timeout != TIPC_WAIT_FOREVER) {
spin_unlock_bh(&subscriber->lock);
- k_cancel_timer(&sub->timer);
- k_term_timer(&sub->timer);
+ del_timer_sync(&sub->timer);
spin_lock_bh(&subscriber->lock);
}
subscr_del(sub);
if (sub->timeout != TIPC_WAIT_FOREVER) {
sub->timeout = TIPC_WAIT_FOREVER;
spin_unlock_bh(&subscriber->lock);
- k_cancel_timer(&sub->timer);
- k_term_timer(&sub->timer);
+ del_timer_sync(&sub->timer);
spin_lock_bh(&subscriber->lock);
}
subscr_del(sub);
*
* Called with subscriber lock held.
*/
-static int subscr_subscribe(struct tipc_subscr *s,
+static int subscr_subscribe(struct net *net, struct tipc_subscr *s,
struct tipc_subscriber *subscriber,
- struct tipc_subscription **sub_p) {
+ struct tipc_subscription **sub_p)
+{
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
struct tipc_subscription *sub;
int swap;
}
/* Refuse subscription if global limit exceeded */
- if (atomic_read(&subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) {
+ if (atomic_read(&tn->subscription_count) >= TIPC_MAX_SUBSCRIPTIONS) {
pr_warn("Subscription rejected, limit reached (%u)\n",
TIPC_MAX_SUBSCRIPTIONS);
return -EINVAL;
}
/* Initialize subscription object */
+ sub->net = net;
sub->seq.type = htohl(s->seq.type, swap);
sub->seq.lower = htohl(s->seq.lower, swap);
sub->seq.upper = htohl(s->seq.upper, swap);
- sub->timeout = htohl(s->timeout, swap);
+ sub->timeout = msecs_to_jiffies(htohl(s->timeout, swap));
sub->filter = htohl(s->filter, swap);
if ((!(sub->filter & TIPC_SUB_PORTS) ==
!(sub->filter & TIPC_SUB_SERVICE)) ||
sub->subscriber = subscriber;
sub->swap = swap;
memcpy(&sub->evt.s, s, sizeof(struct tipc_subscr));
- atomic_inc(&subscription_count);
+ atomic_inc(&tn->subscription_count);
if (sub->timeout != TIPC_WAIT_FOREVER) {
- k_init_timer(&sub->timer,
- (Handler)subscr_timeout, (unsigned long)sub);
- k_start_timer(&sub->timer, sub->timeout);
+ setup_timer(&sub->timer, subscr_timeout, (unsigned long)sub);
+ mod_timer(&sub->timer, jiffies + sub->timeout);
}
*sub_p = sub;
return 0;
}
/* Handle one request to create a new subscription for the subscriber */
-static void subscr_conn_msg_event(int conid, struct sockaddr_tipc *addr,
- void *usr_data, void *buf, size_t len)
+static void subscr_conn_msg_event(struct net *net, int conid,
+ struct sockaddr_tipc *addr, void *usr_data,
+ void *buf, size_t len)
{
struct tipc_subscriber *subscriber = usr_data;
struct tipc_subscription *sub = NULL;
spin_lock_bh(&subscriber->lock);
- if (subscr_subscribe((struct tipc_subscr *)buf, subscriber, &sub) < 0) {
+ if (subscr_subscribe(net, (struct tipc_subscr *)buf, subscriber,
+ &sub) < 0) {
spin_unlock_bh(&subscriber->lock);
- subscr_terminate(subscriber);
+ subscr_terminate(sub);
return;
}
if (sub)
spin_unlock_bh(&subscriber->lock);
}
-
/* Handle one request to establish a new subscriber */
static void *subscr_named_msg_event(int conid)
{
return (void *)subscriber;
}
-int tipc_subscr_start(void)
+int tipc_subscr_start(struct net *net)
{
- return tipc_server_start(&topsrv);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ const char name[] = "topology_server";
+ struct tipc_server *topsrv;
+ struct sockaddr_tipc *saddr;
+
+ saddr = kzalloc(sizeof(*saddr), GFP_ATOMIC);
+ if (!saddr)
+ return -ENOMEM;
+ saddr->family = AF_TIPC;
+ saddr->addrtype = TIPC_ADDR_NAMESEQ;
+ saddr->addr.nameseq.type = TIPC_TOP_SRV;
+ saddr->addr.nameseq.lower = TIPC_TOP_SRV;
+ saddr->addr.nameseq.upper = TIPC_TOP_SRV;
+ saddr->scope = TIPC_NODE_SCOPE;
+
+ topsrv = kzalloc(sizeof(*topsrv), GFP_ATOMIC);
+ if (!topsrv) {
+ kfree(saddr);
+ return -ENOMEM;
+ }
+ topsrv->net = net;
+ topsrv->saddr = saddr;
+ topsrv->imp = TIPC_CRITICAL_IMPORTANCE;
+ topsrv->type = SOCK_SEQPACKET;
+ topsrv->max_rcvbuf_size = sizeof(struct tipc_subscr);
+ topsrv->tipc_conn_recvmsg = subscr_conn_msg_event;
+ topsrv->tipc_conn_new = subscr_named_msg_event;
+ topsrv->tipc_conn_shutdown = subscr_conn_shutdown_event;
+
+ strncpy(topsrv->name, name, strlen(name) + 1);
+ tn->topsrv = topsrv;
+ atomic_set(&tn->subscription_count, 0);
+
+ return tipc_server_start(topsrv);
}
-void tipc_subscr_stop(void)
+void tipc_subscr_stop(struct net *net)
{
- tipc_server_stop(&topsrv);
+ struct tipc_net *tn = net_generic(net, tipc_net_id);
+ struct tipc_server *topsrv = tn->topsrv;
+
+ tipc_server_stop(topsrv);
+ kfree(topsrv->saddr);
+ kfree(topsrv);
}
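
Aside: the topology server, like the node list and socket table earlier in the series, now lives in per-namespace state reached through net_generic(). A hedged sketch of the pernet plumbing this relies on; my_net_id, struct my_net and the callbacks are illustrative, not from the patch:

#include <net/net_namespace.h>
#include <net/netns/generic.h>

static int my_net_id __read_mostly;

struct my_net {
	struct tipc_server *topsrv;	/* per-namespace instance */
};

static int __net_init my_net_init(struct net *net)
{
	struct my_net *mn = net_generic(net, my_net_id);

	mn->topsrv = NULL;	/* set up namespace-local state here */
	return 0;
}

static void __net_exit my_net_exit(struct net *net)
{
	/* tear down namespace-local state here */
}

static struct pernet_operations my_net_ops = {
	.init = my_net_init,
	.exit = my_net_exit,
	.id   = &my_net_id,
	.size = sizeof(struct my_net),
};

/* registered once at module init: register_pernet_subsys(&my_net_ops); */
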
#include "server.h"
+#define TIPC_MAX_SUBSCRIPTIONS 65535
+#define TIPC_MAX_PUBLICATIONS 65535
+
struct tipc_subscription;
struct tipc_subscriber;
* struct tipc_subscription - TIPC network topology subscription object
* @subscriber: pointer to its subscriber
* @seq: name sequence associated with subscription
+ * @net: pointer to the network namespace
* @timeout: duration of subscription (in ms)
* @filter: event filtering to be done for subscription
* @timer: timer governing subscription duration (optional)
struct tipc_subscription {
struct tipc_subscriber *subscriber;
struct tipc_name_seq seq;
- u32 timeout;
+ struct net *net;
+ unsigned long timeout;
u32 filter;
struct timer_list timer;
struct list_head nameseq_list;
int tipc_subscr_overlap(struct tipc_subscription *sub, u32 found_lower,
u32 found_upper);
-
void tipc_subscr_report_overlap(struct tipc_subscription *sub, u32 found_lower,
u32 found_upper, u32 event, u32 port_ref,
u32 node, int must);
-
-int tipc_subscr_start(void);
-
-void tipc_subscr_stop(void);
+int tipc_subscr_start(struct net *net);
+void tipc_subscr_stop(struct net *net);
#endif
if (nla_put_u8(skb, UNIX_DIAG_SHUTDOWN, sk->sk_shutdown))
goto out_nlmsg_trim;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
out_nlmsg_trim:
nlmsg_cancel(skb, nlh);
Most distributions have a CRDA package. So if unsure, say N.
config CFG80211_WEXT
- bool
+ bool "cfg80211 wireless extensions compatibility"
depends on CFG80211
select WEXT_CORE
help
break;
}
finish:
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
goto nla_put_failure;
}
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
sinfo->assoc_req_ies))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_nest_end(msg, pinfoattr);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_put_flag(msg, NL80211_ATTR_WIPHY_SELF_MANAGED_REG))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_nest_end(msg, bss);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
fail_unlock_rcu:
rcu_read_unlock();
nla_nest_end(msg, infoattr);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
/* ignore errors and send incomplete event anyway */
nl80211_add_scan_req(msg, rdev);
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex))
goto nla_put_failure;
- return genlmsg_end(msg, hdr);
+ genlmsg_end(msg, hdr);
+ return 0;
nla_put_failure:
genlmsg_cancel(msg, hdr);
if (skb->priority >= 256 && skb->priority <= 263)
return skb->priority - 256;
- if (vlan_tx_tag_present(skb)) {
- vlan_priority = (vlan_tx_tag_get(skb) & VLAN_PRIO_MASK)
+ if (skb_vlan_tag_present(skb)) {
+ vlan_priority = (skb_vlan_tag_get(skb) & VLAN_PRIO_MASK)
>> VLAN_PRIO_SHIFT;
if (vlan_priority > 0)
return vlan_priority;
},
};
-static inline int aead_entries(void)
-{
- return ARRAY_SIZE(aead_list);
-}
-
static inline int aalg_entries(void)
{
return ARRAY_SIZE(aalg_list);
return err;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_set_spdinfo(struct sk_buff *skb, struct nlmsghdr *nlh,
return err;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_get_sadinfo(struct sk_buff *skb, struct nlmsghdr *nlh,
if (err)
goto out_cancel;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
out_cancel:
nlmsg_cancel(skb, nlh);
goto out_cancel;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
out_cancel:
nlmsg_cancel(skb, nlh);
if (err)
return err;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_exp_state_notify(struct xfrm_state *x, const struct km_event *c)
return err;
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_send_acquire(struct xfrm_state *x, struct xfrm_tmpl *xt,
}
upe->hard = !!hard;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_exp_policy_notify(struct xfrm_policy *xp, int dir, const struct km_event *c)
return err;
}
}
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_send_report(struct net *net, u8 proto,
um->old_sport = x->encap->encap_sport;
um->reqid = x->props.reqid;
- return nlmsg_end(skb, nlh);
+ nlmsg_end(skb, nlh);
+ return 0;
}
static int xfrm_send_mapping(struct xfrm_state *x, xfrm_address_t *ipaddr,
__clean-files := $(filter-out $(no-clean-files), $(__clean-files))
-# as clean-files is given relative to the current directory, this adds
-# a $(obj) prefix, except for absolute paths
+# clean-files is given relative to the current directory, unless it
+# starts with $(objtree)/ (which means "./", so do not add "./" unless
+# you want to delete a file from the toplevel object directory).
__clean-files := $(wildcard \
- $(addprefix $(obj)/, $(filter-out /%, $(__clean-files))) \
- $(filter /%, $(__clean-files)))
+ $(addprefix $(obj)/, $(filter-out $(objtree)/%, $(__clean-files))) \
+ $(filter $(objtree)/%, $(__clean-files)))
-# as clean-dirs is given relative to the current directory, this adds
-# a $(obj) prefix, except for absolute paths
+# same as clean-files
__clean-dirs := $(wildcard \
- $(addprefix $(obj)/, $(filter-out /%, $(clean-dirs))) \
- $(filter /%, $(clean-dirs)))
+ $(addprefix $(obj)/, $(filter-out $(objtree)/%, $(clean-dirs))) \
+ $(filter $(objtree)/%, $(clean-dirs)))
# ==========================================================================
if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags))
atomic_dec(&key->user->nikeys);
- key_user_put(key->user);
-
/* now throw away the key memory */
if (key->type->destroy)
key->type->destroy(key);
+ key_user_put(key->user);
+
kfree(key->description);
#ifdef KEY_DEBUGGING
spin_lock_irq(&efw->lock);
t = (struct snd_efw_transaction *)data;
- length = min_t(size_t, t->length * sizeof(t->length), length);
+ length = min_t(size_t, be32_to_cpu(t->length) * sizeof(u32), length);
if (efw->push_ptr < efw->pull_ptr)
capacity = (unsigned int)(efw->pull_ptr - efw->push_ptr);
{ .id = 0x10de0067, .name = "MCP67 HDMI", .patch = patch_nvhdmi_2ch },
{ .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi },
{ .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi },
+{ .id = 0x10de0072, .name = "GPU 72 HDMI/DP", .patch = patch_nvhdmi },
{ .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch },
{ .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi },
{ .id = 0x11069f81, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi },
MODULE_ALIAS("snd-hda-codec-id:10de0067");
MODULE_ALIAS("snd-hda-codec-id:10de0070");
MODULE_ALIAS("snd-hda-codec-id:10de0071");
+MODULE_ALIAS("snd-hda-codec-id:10de0072");
MODULE_ALIAS("snd-hda-codec-id:10de8001");
MODULE_ALIAS("snd-hda-codec-id:11069f80");
MODULE_ALIAS("snd-hda-codec-id:11069f81");
spec->gpio_mask;
}
if (get_int_hint(codec, "gpio_dir", &spec->gpio_dir))
- spec->gpio_mask &= spec->gpio_mask;
- if (get_int_hint(codec, "gpio_data", &spec->gpio_data))
spec->gpio_dir &= spec->gpio_mask;
+ if (get_int_hint(codec, "gpio_data", &spec->gpio_data))
+ spec->gpio_data &= spec->gpio_mask;
if (get_int_hint(codec, "eapd_mask", &spec->eapd_mask))
spec->eapd_mask &= spec->gpio_mask;
if (get_int_hint(codec, "gpio_mute", &spec->gpio_mute))
static int rt5677_dsp_vad_get(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
- struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
- struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ struct snd_soc_component *component = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_component_get_drvdata(component);
ucontrol->value.integer.value[0] = rt5677->dsp_vad_en;
static int rt5677_dsp_vad_put(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
- struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol);
- struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec);
+ struct snd_soc_component *component = snd_kcontrol_chip(kcontrol);
+ struct rt5677_priv *rt5677 = snd_soc_component_get_drvdata(component);
+ struct snd_soc_codec *codec = snd_soc_component_to_codec(component);
rt5677->dsp_vad_en = !!ucontrol->value.integer.value[0];
switch (config->chan_nr) {
case EIGHT_CHANNEL_SUPPORT:
- ch_reg = 3;
- break;
case SIX_CHANNEL_SUPPORT:
- ch_reg = 2;
- break;
case FOUR_CHANNEL_SUPPORT:
- ch_reg = 1;
- break;
case TWO_CHANNEL_SUPPORT:
- ch_reg = 0;
break;
default:
dev_err(dev->dev, "channel not supported\n");
i2s_disable_channels(dev, substream->stream);
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
- i2s_write_reg(dev->i2s_base, TCR(ch_reg), xfer_resolution);
- i2s_write_reg(dev->i2s_base, TFCR(ch_reg), 0x02);
- irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg));
- i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x30);
- i2s_write_reg(dev->i2s_base, TER(ch_reg), 1);
- } else {
- i2s_write_reg(dev->i2s_base, RCR(ch_reg), xfer_resolution);
- i2s_write_reg(dev->i2s_base, RFCR(ch_reg), 0x07);
- irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg));
- i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x03);
- i2s_write_reg(dev->i2s_base, RER(ch_reg), 1);
+ for (ch_reg = 0; ch_reg < (config->chan_nr / 2); ch_reg++) {
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ i2s_write_reg(dev->i2s_base, TCR(ch_reg),
+ xfer_resolution);
+ i2s_write_reg(dev->i2s_base, TFCR(ch_reg), 0x02);
+ irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg));
+ i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x30);
+ i2s_write_reg(dev->i2s_base, TER(ch_reg), 1);
+ } else {
+ i2s_write_reg(dev->i2s_base, RCR(ch_reg),
+ xfer_resolution);
+ i2s_write_reg(dev->i2s_base, RFCR(ch_reg), 0x07);
+ irq = i2s_read_reg(dev->i2s_base, IMR(ch_reg));
+ i2s_write_reg(dev->i2s_base, IMR(ch_reg), irq & ~0x03);
+ i2s_write_reg(dev->i2s_base, RER(ch_reg), 1);
+ }
}
i2s_write_reg(dev->i2s_base, CCR, ccr);
snd_soc_dai_set_dma_data(dai, substream, NULL);
}
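With the switch collapsed into a fall-through, the channel pairs are configured in a loop: the controller exposes one enable and one set of config registers per stereo pair, so chan_nr/2 iterations cover 2, 4, 6 or 8 channels instead of touching only the last pair. A hedged userspace sketch of the loop shape, with a fake register write standing in for i2s_write_reg():

#include <stdio.h>

/* Fake MMIO write standing in for i2s_write_reg(). */
static void write_reg(const char *name, int pair, unsigned int val)
{
	printf("%s(%d) <- 0x%x\n", name, pair, val);
}

/* Configure one register set per stereo pair, as the driver now does. */
static void enable_playback(int chan_nr, unsigned int xfer_resolution)
{
	int pair;

	for (pair = 0; pair < chan_nr / 2; pair++) {
		write_reg("TCR", pair, xfer_resolution);
		write_reg("TFCR", pair, 0x02);	/* FIFO trigger level */
		write_reg("TER", pair, 1);	/* enable the pair */
	}
}

int main(void)
{
	enable_playback(8, 0x5);	/* 8 channels -> pairs 0..3 */
	return 0;
}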
+static int dw_i2s_prepare(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+{
+ struct dw_i2s_dev *dev = snd_soc_dai_get_drvdata(dai);
+
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+ i2s_write_reg(dev->i2s_base, TXFFR, 1);
+ else
+ i2s_write_reg(dev->i2s_base, RXFFR, 1);
+
+ return 0;
+}
+
static int dw_i2s_trigger(struct snd_pcm_substream *substream,
int cmd, struct snd_soc_dai *dai)
{
.startup = dw_i2s_startup,
.shutdown = dw_i2s_shutdown,
.hw_params = dw_i2s_hw_params,
+ .prepare = dw_i2s_prepare,
.trigger = dw_i2s_trigger,
};
config SND_SOC_INTEL_BYTCR_RT5640_MACH
tristate "ASoC Audio DSP Support for MID BYT Platform"
- depends on X86
+ depends on X86 && I2C
select SND_SOC_RT5640
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
config SND_SOC_INTEL_CHT_BSW_RT5672_MACH
tristate "ASoC Audio driver for Intel Cherrytrail & Braswell with RT5672 codec"
- depends on X86_INTEL_LPSS
+ depends on X86_INTEL_LPSS && I2C
select SND_SOC_RT5670
select SND_SST_MFLD_PLATFORM
select SND_SST_IPC_ACPI
MODULE_DESCRIPTION("ASoC Intel(R) Baytrail CR Machine driver");
MODULE_AUTHOR("Subhransu S. Prusty <subhransu.s.prusty@intel.com>");
MODULE_LICENSE("GPL v2");
-MODULE_ALIAS("platform:bytrt5640-audio");
+MODULE_ALIAS("platform:bytt100_rt5640");
/* does the block span more than one section? */
if (ba->offset >= block->offset && ba->offset < block_end) {
+ /* add block */
+ list_move(&block->list, &dsp->used_block_list);
+ list_add(&block->module_list, block_list);
/* align ba to block boundary */
- ba->offset = block->offset;
+ ba->size -= block_end - ba->offset;
+ ba->offset = block_end;
err = block_alloc_contiguous(dsp, ba, block_list);
if (err < 0)
}
static struct sst_machines sst_acpi_bytcr[] = {
- {"10EC5640", "T100", "bytt100_rt5640", NULL, "fw_sst_0f28.bin",
+ {"10EC5640", "T100", "bytt100_rt5640", NULL, "intel/fw_sst_0f28.bin",
&byt_rvp_platform_data },
{},
};
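Firmware names are resolved relative to the loader's search directories, so prefixing the name with "intel/" makes the kernel look for /lib/firmware/intel/fw_sst_0f28.bin (the layout used by the linux-firmware tree) instead of the top level. A small userspace sketch of that resolution, assuming the default /lib/firmware search path:

#include <stdio.h>
#include <unistd.h>

/* Mimic the firmware loader's lookup for one search directory. */
static void resolve(const char *fw_name)
{
	char path[256];

	snprintf(path, sizeof(path), "/lib/firmware/%s", fw_name);
	printf("%s -> %s (%s)\n", fw_name, path,
	       access(path, R_OK) == 0 ? "present" : "missing");
}

int main(void)
{
	resolve("fw_sst_0f28.bin");		/* old, top-level name */
	resolve("intel/fw_sst_0f28.bin");	/* new, vendor subdirectory */
	return 0;
}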
i2s->playback_dma_data.addr = res->start + I2S_TXDR;
i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- i2s->playback_dma_data.maxburst = 16;
+ i2s->playback_dma_data.maxburst = 4;
i2s->capture_dma_data.addr = res->start + I2S_RXDR;
i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
- i2s->capture_dma_data.maxburst = 16;
+ i2s->capture_dma_data.maxburst = 4;
i2s->dev = &pdev->dev;
dev_set_drvdata(&pdev->dev, i2s);
#define I2S_DMACR_TDE_DISABLE (0 << I2S_DMACR_TDE_SHIFT)
#define I2S_DMACR_TDE_ENABLE (1 << I2S_DMACR_TDE_SHIFT)
#define I2S_DMACR_TDL_SHIFT 0
-#define I2S_DMACR_TDL(x) ((x - 1) << I2S_DMACR_TDL_SHIFT)
+#define I2S_DMACR_TDL(x) ((x) << I2S_DMACR_TDL_SHIFT)
#define I2S_DMACR_TDL_MASK (0x1f << I2S_DMACR_TDL_SHIFT)
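The old macro both subtracted one before shifting and left x unparenthesized, so I2S_DMACR_TDL(4) programmed a trigger level of 3 and an expression argument would have expanded incorrectly. A tiny demonstration of the difference (the field semantics are inferred from the fix: the register apparently takes the level verbatim):

#include <stdio.h>

#define TDL_SHIFT	0
#define TDL_OLD(x)	((x - 1) << TDL_SHIFT)	/* off by one, unhygienic */
#define TDL_NEW(x)	((x) << TDL_SHIFT)	/* fixed */

int main(void)
{
	int level = 4;	/* matches the new maxburst of 4 above */

	printf("old: 0x%x, new: 0x%x\n", TDL_OLD(level), TDL_NEW(level));
	return 0;
}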
/*
const char *propname)
{
struct device_node *np = card->dev->of_node;
- int num_routes, old_routes;
+ int num_routes;
struct snd_soc_dapm_route *routes;
int i, ret;
return -EINVAL;
}
- old_routes = card->num_dapm_routes;
- routes = devm_kzalloc(card->dev,
- (old_routes + num_routes) * sizeof(*routes),
+ routes = devm_kzalloc(card->dev, num_routes * sizeof(*routes),
GFP_KERNEL);
if (!routes) {
dev_err(card->dev,
return -EINVAL;
}
- memcpy(routes, card->dapm_routes, old_routes * sizeof(*routes));
-
for (i = 0; i < num_routes; i++) {
ret = of_property_read_string_index(np, propname,
- 2 * i, &routes[old_routes + i].sink);
+ 2 * i, &routes[i].sink);
if (ret) {
dev_err(card->dev,
"ASoC: Property '%s' index %d could not be read: %d\n",
return -EINVAL;
}
ret = of_property_read_string_index(np, propname,
- (2 * i) + 1, &routes[old_routes + i].source);
+ (2 * i) + 1, &routes[i].source);
if (ret) {
dev_err(card->dev,
"ASoC: Property '%s' index %d could not be read: %d\n",
}
}
- card->num_dapm_routes += num_routes;
+ card->num_dapm_routes = num_routes;
card->dapm_routes = routes;
return 0;
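The property is a flat string list parsed pairwise: element 2*i is the sink and 2*i+1 the source of route i, and the rewrite above now allocates exactly num_routes entries instead of copying any earlier routes into a grown array. A userspace sketch of the pairwise parse, with a plain string array standing in for the OF property:

#include <stdio.h>

struct route {
	const char *sink;
	const char *source;
};

int main(void)
{
	/* Stand-in for the DT property: sink/source pairs, flattened. */
	const char *prop[] = {
		"Headphone", "HPOUT",
		"Speaker",   "SPKOUT",
	};
	int n = sizeof(prop) / sizeof(prop[0]);
	int num_routes = n / 2;
	struct route routes[2];
	int i;

	for (i = 0; i < num_routes; i++) {
		routes[i].sink = prop[2 * i];
		routes[i].source = prop[2 * i + 1];
		printf("route %d: %s <- %s\n", i, routes[i].sink,
		       routes[i].source);
	}
	return 0;
}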
return -EINVAL;
}
- if (cdev->n_streams < 2) {
+ if (cdev->n_streams < 1) {
dev_err(dev, "bogus number of streams: %d\n", cdev->n_streams);
return -EINVAL;
}
*
* TODO: Hook into free() and add that check there as well.
*/
- debug_check_no_locks_freed(mutex, mutex + sizeof(*mutex));
+ debug_check_no_locks_freed(mutex, sizeof(*mutex));
__del_lock(__get_lock(mutex));
return ll_pthread_mutex_destroy(mutex);
}
{
try_init_preload();
- debug_check_no_locks_freed(rwlock, rwlock + sizeof(*rwlock));
+ debug_check_no_locks_freed(rwlock, sizeof(*rwlock));
__del_lock(__get_lock(rwlock));
return ll_pthread_rwlock_destroy(rwlock);
}
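The second argument of debug_check_no_locks_freed() is a length in bytes, not an end pointer; worse, "mutex + sizeof(*mutex)" is pointer arithmetic, so it advances by sizeof(*mutex) elements, i.e. sizeof(*mutex) squared bytes. A short standalone demonstration of why the old expression covered the wrong range:

#include <stdio.h>
#include <pthread.h>

int main(void)
{
	pthread_mutex_t m;
	/* Pointer arithmetic scales by the element size... */
	const char *start = (const char *)&m;
	const char *bogus_end = (const char *)(&m + sizeof(m));

	printf("sizeof(m)      = %zu bytes\n", sizeof(m));
	printf("&m + sizeof(m) spans %td bytes, not %zu\n",
	       bogus_end - start, sizeof(m));
	/* ...so the checker must be passed (&m, sizeof(m)) instead. */
	return 0;
}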
if (nr_samples > 0) {
total_nr_samples += nr_samples;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (symbol_conf.event_group &&
!perf_evsel__is_group_leader(pos))
return __hist_entry__cmp_compute(p_left, p_right, c);
}
+static int64_t
+hist_entry__cmp_nop(struct hist_entry *left __maybe_unused,
+ struct hist_entry *right __maybe_unused)
+{
+ return 0;
+}
+
+static int64_t
+hist_entry__cmp_baseline(struct hist_entry *left, struct hist_entry *right)
+{
+ if (sort_compute)
+ return 0;
+
+ if (left->stat.period == right->stat.period)
+ return 0;
+ return left->stat.period > right->stat.period ? 1 : -1;
+}
+
+static int64_t
+hist_entry__cmp_delta(struct hist_entry *left, struct hist_entry *right)
+{
+ return hist_entry__cmp_compute(right, left, COMPUTE_DELTA);
+}
+
+static int64_t
+hist_entry__cmp_ratio(struct hist_entry *left, struct hist_entry *right)
+{
+ return hist_entry__cmp_compute(right, left, COMPUTE_RATIO);
+}
+
+static int64_t
+hist_entry__cmp_wdiff(struct hist_entry *left, struct hist_entry *right)
+{
+ return hist_entry__cmp_compute(right, left, COMPUTE_WEIGHTED_DIFF);
+}
+
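Each helper above follows the usual three-way comparison contract (negative, zero, positive), with the operands swapped in the delta/ratio/wdiff cases, apparently so that larger values sort first. The same descending-comparator shape in a standalone qsort() sketch (perf's hooks return int64_t; qsort() wants plain int):

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

struct entry {
	uint64_t period;
};

/* Descending: compare right against left, as the perf helpers do. */
static int cmp_period(const void *a, const void *b)
{
	const struct entry *l = a, *r = b;

	if (l->period == r->period)
		return 0;
	return r->period > l->period ? 1 : -1;
}

int main(void)
{
	struct entry e[] = { { 10 }, { 30 }, { 20 } };
	size_t i;

	qsort(e, 3, sizeof(e[0]), cmp_period);
	for (i = 0; i < 3; i++)
		printf("%llu\n", (unsigned long long)e[i].period);
	return 0;
}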
static void insert_hist_entry_by_compute(struct rb_root *root,
struct hist_entry *he,
int c)
hists__precompute(hists);
hists__compute_resort(hists);
} else {
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
}
hists__fprintf(hists, true, 0, 0, 0, stdout);
fmt->header = hpp__header;
fmt->width = hpp__width;
fmt->entry = hpp__entry_global;
+ fmt->cmp = hist_entry__cmp_nop;
+ fmt->collapse = hist_entry__cmp_nop;
/* TODO more colors */
switch (idx) {
case PERF_HPP_DIFF__BASELINE:
fmt->color = hpp__color_baseline;
+ fmt->sort = hist_entry__cmp_baseline;
break;
case PERF_HPP_DIFF__DELTA:
fmt->color = hpp__color_delta;
+ fmt->sort = hist_entry__cmp_delta;
break;
case PERF_HPP_DIFF__RATIO:
fmt->color = hpp__color_ratio;
+ fmt->sort = hist_entry__cmp_ratio;
break;
case PERF_HPP_DIFF__WEIGHTED_DIFF:
fmt->color = hpp__color_wdiff;
+ fmt->sort = hist_entry__cmp_wdiff;
break;
default:
+ fmt->sort = hist_entry__cmp_nop;
break;
}
init_header(d, dfmt);
perf_hpp__column_register(fmt);
+ perf_hpp__register_sort_field(fmt);
}
static void ui_init(void)
int cmd_list(int argc, const char **argv, const char *prefix __maybe_unused)
{
int i;
- const struct option list_options[] = {
+ bool raw_dump = false;
+ struct option list_options[] = {
+ OPT_BOOLEAN(0, "raw-dump", &raw_dump, "Dump raw events"),
OPT_END()
};
const char * const list_usage[] = {
NULL
};
+ set_option_flag(list_options, 0, "raw-dump", PARSE_OPT_HIDDEN);
+
argc = parse_options(argc, argv, list_options, list_usage,
PARSE_OPT_STOP_AT_NON_OPTION);
setup_pager();
+ if (raw_dump) {
+ print_events(NULL, true);
+ return 0;
+ }
+
if (argc == 0) {
print_events(NULL, false);
return 0;
print_hwcache_events(NULL, false);
else if (strcmp(argv[i], "pmu") == 0)
print_pmu_events(NULL, false);
- else if (strcmp(argv[i], "--raw-dump") == 0)
- print_events(NULL, true);
else {
char *sep = strchr(argv[i], ':'), *s;
int sep_idx;
ui_progress__finish();
}
+static void report__output_resort(struct report *rep)
+{
+ struct ui_progress prog;
+ struct perf_evsel *pos;
+
+ ui_progress__init(&prog, rep->nr_entries, "Sorting events for output...");
+
+ evlist__for_each(rep->session->evlist, pos)
+ hists__output_resort(evsel__hists(pos), &prog);
+
+ ui_progress__finish();
+}
+
static int __cmd_report(struct report *rep)
{
int ret;
if (session_done())
return 0;
+ /*
+	 * Recalculate the number of entries after collapsing, since it
+	 * may have changed during the collapse phase.
+ */
+ rep->nr_entries = 0;
+ evlist__for_each(session->evlist, pos)
+ rep->nr_entries += evsel__hists(pos)->nr_entries;
+
if (rep->nr_entries == 0) {
ui__error("The %s file has no samples!\n", file->path);
return 0;
}
- evlist__for_each(session->evlist, pos)
- hists__output_resort(evsel__hists(pos));
+ report__output_resort(rep);
return report__browse_hists(rep);
}
}
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
hists__output_recalc_col_len(hists, top->print_entries - printed);
putchar('\n');
}
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
}
static void *display_thread_tui(void *arg)
* function since TEST_ASSERT_VAL() returns in case of failure.
*/
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("use callchain: %d, cumulate callchain: %d\n",
* 30.00% 10.00% perf perf [.] cmd_record
* 20.00% 0.00% bash libc [.] malloc
* 10.00% 10.00% bash [kernel] [k] page_fault
- * 10.00% 10.00% perf [kernel] [k] schedule
- * 10.00% 0.00% perf [kernel] [k] sys_perf_event_open
+ * 10.00% 10.00% bash bash [.] xmalloc
* 10.00% 10.00% perf [kernel] [k] page_fault
- * 10.00% 10.00% perf libc [.] free
* 10.00% 10.00% perf libc [.] malloc
- * 10.00% 10.00% bash bash [.] xmalloc
+ * 10.00% 10.00% perf [kernel] [k] schedule
+ * 10.00% 10.00% perf libc [.] free
+ * 10.00% 0.00% perf [kernel] [k] sys_perf_event_open
*/
struct result expected[] = {
{ 7000, 2000, "perf", "perf", "main" },
{ 3000, 1000, "perf", "perf", "cmd_record" },
{ 2000, 0, "bash", "libc", "malloc" },
{ 1000, 1000, "bash", "[kernel]", "page_fault" },
- { 1000, 1000, "perf", "[kernel]", "schedule" },
- { 1000, 0, "perf", "[kernel]", "sys_perf_event_open" },
+ { 1000, 1000, "bash", "bash", "xmalloc" },
{ 1000, 1000, "perf", "[kernel]", "page_fault" },
+ { 1000, 1000, "perf", "[kernel]", "schedule" },
{ 1000, 1000, "perf", "libc", "free" },
{ 1000, 1000, "perf", "libc", "malloc" },
- { 1000, 1000, "bash", "bash", "xmalloc" },
+ { 1000, 0, "perf", "[kernel]", "sys_perf_event_open" },
};
symbol_conf.use_callchain = false;
* malloc
* main
*
- * 10.00% 10.00% perf [kernel] [k] schedule
+ * 10.00% 10.00% bash bash [.] xmalloc
* |
- * --- schedule
- * run_command
+ * --- xmalloc
+ * malloc
+ * xmalloc <--- NOTE: there's a cycle
+ * malloc
+ * xmalloc
* main
*
* 10.00% 0.00% perf [kernel] [k] sys_perf_event_open
* run_command
* main
*
+ * 10.00% 10.00% perf [kernel] [k] schedule
+ * |
+ * --- schedule
+ * run_command
+ * main
+ *
* 10.00% 10.00% perf libc [.] free
* |
* --- free
* run_command
* main
*
- * 10.00% 10.00% bash bash [.] xmalloc
- * |
- * --- xmalloc
- * malloc
- * xmalloc <--- NOTE: there's a cycle
- * malloc
- * xmalloc
- * main
- *
*/
struct result expected[] = {
{ 7000, 2000, "perf", "perf", "main" },
{ 3000, 1000, "perf", "perf", "cmd_record" },
{ 2000, 0, "bash", "libc", "malloc" },
{ 1000, 1000, "bash", "[kernel]", "page_fault" },
- { 1000, 1000, "perf", "[kernel]", "schedule" },
+ { 1000, 1000, "bash", "bash", "xmalloc" },
{ 1000, 0, "perf", "[kernel]", "sys_perf_event_open" },
{ 1000, 1000, "perf", "[kernel]", "page_fault" },
+ { 1000, 1000, "perf", "[kernel]", "schedule" },
{ 1000, 1000, "perf", "libc", "free" },
{ 1000, 1000, "perf", "libc", "malloc" },
- { 1000, 1000, "bash", "bash", "xmalloc" },
};
struct callchain_result expected_callchain[] = {
{
{ "bash", "main" }, },
},
{
- 3, { { "[kernel]", "schedule" },
- { "perf", "run_command" },
- { "perf", "main" }, },
+ 6, { { "bash", "xmalloc" },
+ { "libc", "malloc" },
+ { "bash", "xmalloc" },
+ { "libc", "malloc" },
+ { "bash", "xmalloc" },
+ { "bash", "main" }, },
},
{
3, { { "[kernel]", "sys_perf_event_open" },
{ "perf", "run_command" },
{ "perf", "main" }, },
},
+ {
+ 3, { { "[kernel]", "schedule" },
+ { "perf", "run_command" },
+ { "perf", "main" }, },
+ },
{
4, { { "libc", "free" },
{ "perf", "cmd_record" },
{ "perf", "run_command" },
{ "perf", "main" }, },
},
- {
- 6, { { "bash", "xmalloc" },
- { "libc", "malloc" },
- { "bash", "xmalloc" },
- { "libc", "malloc" },
- { "bash", "xmalloc" },
- { "bash", "main" }, },
- },
};
symbol_conf.use_callchain = true;
struct hists *hists = evsel__hists(evsel);
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("Normal histogram\n");
goto out;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("[fields = %s, sort = %s]\n", field_order, sort_order);
goto out;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("[fields = %s, sort = %s]\n", field_order, sort_order);
goto out;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("[fields = %s, sort = %s]\n", field_order, sort_order);
goto out;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("[fields = %s, sort = %s]\n", field_order, sort_order);
goto out;
hists__collapse_resort(hists, NULL);
- hists__output_resort(hists);
+ hists__output_resort(hists, NULL);
if (verbose > 2) {
pr_info("[fields = %s, sort = %s]\n", field_order, sort_order);
bool need_percent;
node = rb_first(root);
- need_percent = !!rb_next(node);
+ need_percent = node && rb_next(node);
while (node) {
struct callchain_node *child = rb_entry(node, struct callchain_node, rb_node);
if (ret)
return ret;
+ if (a->thread != b->thread || !symbol_conf.use_callchain)
+ return 0;
+
ret = b->callchain->max_depth - a->callchain->max_depth;
}
return ret;
#include <signal.h>
#include <stdbool.h>
+#ifdef HAVE_BACKTRACE_SUPPORT
+#include <execinfo.h>
+#endif
#include "../../util/cache.h"
#include "../../util/debug.h"
return SLkp_getkey();
}
+#ifdef HAVE_BACKTRACE_SUPPORT
+static void ui__signal_backtrace(int sig)
+{
+ void *stackdump[32];
+ size_t size;
+
+ ui__exit(false);
+ psignal(sig, "perf");
+
+ printf("-------- backtrace --------\n");
+ size = backtrace(stackdump, ARRAY_SIZE(stackdump));
+ backtrace_symbols_fd(stackdump, size, STDOUT_FILENO);
+
+ exit(0);
+}
+#else
+# define ui__signal_backtrace ui__signal
+#endif
+
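backtrace() and backtrace_symbols_fd() come from glibc's <execinfo.h>; the _fd variant writes straight to the descriptor without calling malloc(), which matters in a best-effort crash handler. A standalone sketch of the same handler shape:

#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <unistd.h>
#include <execinfo.h>

static void crash_handler(int sig)
{
	void *stackdump[32];
	int size;

	psignal(sig, "demo");
	size = backtrace(stackdump, 32);
	/* fd variant: no malloc(), unlike backtrace_symbols() */
	backtrace_symbols_fd(stackdump, size, STDOUT_FILENO);
	_exit(1);
}

int main(void)
{
	signal(SIGSEGV, crash_handler);
	raise(SIGSEGV);	/* trigger the handler for demonstration */
	return 0;
}

Symbol names typically resolve only if the binary is linked with -rdynamic.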
static void ui__signal(int sig)
{
ui__exit(false);
ui_browser__init();
tui_progress__init();
- signal(SIGSEGV, ui__signal);
- signal(SIGFPE, ui__signal);
+ signal(SIGSEGV, ui__signal_backtrace);
+ signal(SIGFPE, ui__signal_backtrace);
signal(SIGINT, ui__signal);
signal(SIGQUIT, ui__signal);
signal(SIGTERM, ui__signal);
return bf;
}
+
+static void free_callchain_node(struct callchain_node *node)
+{
+ struct callchain_list *list, *tmp;
+ struct callchain_node *child;
+ struct rb_node *n;
+
+ list_for_each_entry_safe(list, tmp, &node->val, list) {
+ list_del(&list->list);
+ free(list);
+ }
+
+ n = rb_first(&node->rb_root_in);
+ while (n) {
+ child = container_of(n, struct callchain_node, rb_node_in);
+ n = rb_next(n);
+ rb_erase(&child->rb_node_in, &node->rb_root_in);
+
+ free_callchain_node(child);
+ free(child);
+ }
+}
+
+void free_callchain(struct callchain_root *root)
+{
+ if (!symbol_conf.use_callchain)
+ return;
+
+ free_callchain_node(&root->node);
+}
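free_callchain_node() above releases each child subtree before erasing it, and fetches rb_next() before rb_erase() so iteration survives the removal. The same post-order discipline on a plain binary tree, in miniature:

#include <stdlib.h>

struct node {
	struct node *left, *right;
	char *payload;
};

/* Post-order: release children (and their payloads) before the parent. */
static void free_tree(struct node *n)
{
	if (!n)
		return;
	free_tree(n->left);
	free_tree(n->right);
	free(n->payload);
	free(n);
}

int main(void)
{
	struct node *root = calloc(1, sizeof(*root));

	root->left = calloc(1, sizeof(*root));
	root->payload = malloc(16);
	root->left->payload = malloc(16);
	free_tree(root);
	return 0;
}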
char *callchain_list__sym_name(struct callchain_list *cl,
char *bf, size_t bfsize, bool show_dso);
+void free_callchain(struct callchain_root *root);
+
#endif /* __PERF_CALLCHAIN_H */
#include "evlist.h"
#include "evsel.h"
#include "annotate.h"
+#include "ui/progress.h"
#include <math.h>
static bool hists__filter_entry_by_dso(struct hists *hists,
size_t callchain_size = 0;
struct hist_entry *he;
- if (symbol_conf.use_callchain || symbol_conf.cumulate_callchain)
+ if (symbol_conf.use_callchain)
callchain_size = sizeof(struct callchain_root);
he = zalloc(sizeof(*he) + callchain_size);
iter->he = he;
he_cache[iter->curr++] = he;
- callchain_append(he->callchain, &callchain_cursor, sample->period);
+ hist_entry__append_callchain(he, sample);
/*
* We need to re-initialize the cursor since callchain_append()
iter->he = he;
he_cache[iter->curr++] = he;
- callchain_append(he->callchain, &cursor, sample->period);
+ if (symbol_conf.use_callchain)
+ callchain_append(he->callchain, &cursor, sample->period);
return 0;
}
zfree(&he->mem_info);
zfree(&he->stat_acc);
free_srcline(he->srcline);
+ free_callchain(he->callchain);
free(he);
}
else
p = &(*p)->rb_right;
}
+ hists->nr_entries++;
rb_link_node(&he->rb_node_in, parent, p);
rb_insert_color(&he->rb_node_in, root);
if (!sort__need_collapse)
return;
+ hists->nr_entries = 0;
+
root = hists__get_rotate_entries_in(hists);
+
next = rb_first(root);
while (next) {
rb_insert_color(&he->rb_node, entries);
}
-void hists__output_resort(struct hists *hists)
+void hists__output_resort(struct hists *hists, struct ui_progress *prog)
{
struct rb_root *root;
struct rb_node *next;
if (!n->filtered)
hists__calc_col_len(hists, n);
+
+ if (prog)
+ ui_progress__update(prog, 1);
}
}
struct hists *hists);
void hist_entry__free(struct hist_entry *);
-void hists__output_resort(struct hists *hists);
+void hists__output_resort(struct hists *hists, struct ui_progress *prog);
void hists__collapse_resort(struct hists *hists, struct ui_progress *prog);
void hists__decay_entries(struct hists *hists, bool zap_user, bool zap_kernel);
}
if (ntevs == 0) { /* No error but failed to find probe point. */
- pr_warning("Probe point '%s' not found.\n",
+ pr_warning("Probe point '%s' not found in debuginfo.\n",
synthesize_perf_probe_point(&pev->point));
- return -ENOENT;
+ if (need_dwarf)
+ return -ENOENT;
+ return 0;
}
/* Error path : ntevs < 0 */
pr_debug("An error occurred in debuginfo analysis (%d).\n", ntevs);
int ret = 0;
#if _ELFUTILS_PREREQ(0, 142)
+ Elf *elf;
+ GElf_Ehdr ehdr;
+ GElf_Shdr shdr;
+
/* Get the call frame information from this dwarf */
- pf->cfi = dwarf_getcfi_elf(dwarf_getelf(dbg->dbg));
+ elf = dwarf_getelf(dbg->dbg);
+ if (elf == NULL)
+ return -EINVAL;
+
+ if (gelf_getehdr(elf, &ehdr) == NULL)
+ return -EINVAL;
+
+ if (elf_section_by_name(elf, &ehdr, &shdr, ".eh_frame", NULL) &&
+ shdr.sh_type == SHT_PROGBITS) {
+ pf->cfi = dwarf_getcfi_elf(elf);
+ } else {
+ pf->cfi = dwarf_getcfi(dbg->dbg);
+ }
#endif
off = 0;
}
static int check_execveat_invoked_rc(int fd, const char *path, int flags,
- int expected_rc)
+ int expected_rc, int expected_rc2)
{
int status;
int rc;
child, status);
return 1;
}
- if (WEXITSTATUS(status) != expected_rc) {
- printf("[FAIL] (child %d exited with %d not %d)\n",
- child, WEXITSTATUS(status), expected_rc);
+ if ((WEXITSTATUS(status) != expected_rc) &&
+ (WEXITSTATUS(status) != expected_rc2)) {
+ printf("[FAIL] (child %d exited with %d not %d nor %d)\n",
+ child, WEXITSTATUS(status), expected_rc, expected_rc2);
return 1;
}
printf("[OK]\n");
static int check_execveat(int fd, const char *path, int flags)
{
- return check_execveat_invoked_rc(fd, path, flags, 99);
+ return check_execveat_invoked_rc(fd, path, flags, 99, 99);
}
static char *concat(const char *left, const char *right)
* Execute as a long pathname relative to ".". If this is a script,
* the interpreter will launch but fail to open the script because its
* name ("/dev/fd/5/xxx....") is longer than PATH_MAX.
+ *
+ * The failure code is usually 127 (POSIX: "If a command is not found,
+ * the exit status shall be 127."), but some systems give 126 (POSIX:
+ * "If the command name is found, but it is not an executable utility,
+ * the exit status shall be 126."), so allow either.
*/
if (is_script)
- fail += check_execveat_invoked_rc(dot_dfd, longpath, 0, 127);
+ fail += check_execveat_invoked_rc(dot_dfd, longpath, 0,
+ 127, 126);
else
fail += check_execveat(dot_dfd, longpath, 0);
{
struct mq_attr attr;
char *option, *next_option;
- int i, cpu;
+ int i, cpu, rc;
struct sigaction sa;
poptContext popt_context;
- char rc;
void *retval;
main_thread = pthread_self();
all: $(BINARIES)
%: %.c
- $(CC) $(CFLAGS) -o $@ $^
+ $(CC) $(CFLAGS) -o $@ $^ -lrt
run_tests: all
@/bin/sh ./run_vmtests || (echo "vmtests: [FAIL]"; exit 1)