This document describes the **Distributed Switch Architecture (DSA)** subsystem
design principles, limitations, interactions with other subsystems, and how to
develop drivers for this subsystem, as well as a TODO for developers interested
in joining the effort.

Design principles
=================

The Distributed Switch Architecture subsystem was primarily designed to
support Marvell Ethernet switches (MV88E6xxx, a.k.a. Link Street product
line) using Linux, but has since evolved to support other vendors as well.

The original philosophy behind this design was to let unmodified Linux tools
such as bridge, iproute2 and ifconfig work transparently whether they
configured/queried a switch port network device or a regular network device.

An Ethernet switch typically comprises multiple front-panel ports and one
or more CPU or management ports. The DSA subsystem currently relies on the
presence of a management port connected to an Ethernet controller capable of
receiving Ethernet frames from the switch. This is a very common setup for all
kinds of Ethernet switches found in Small Home and Office products: routers,
gateways, or even top-of-rack switches. This host Ethernet controller will
later be referred to as "master" and "cpu" in DSA terminology and code.

The D in DSA stands for Distributed, because the subsystem has been designed
with the ability to configure and manage cascaded switches on top of each other
using upstream and downstream Ethernet links between switches. These specific
ports are referred to as "dsa" ports in DSA terminology and code. A collection
of multiple switches connected to each other is called a "switch tree".

For each front-panel port, DSA creates specialized network devices which are
used as controlling and data-flowing endpoints for use by the Linux networking
stack. These specialized network interfaces are referred to as "slave" network
interfaces in DSA terminology and code.

The ideal case for using DSA is when an Ethernet switch supports a "switch tag",
which is a hardware feature whereby the switch inserts a specific tag for each
Ethernet frame it receives from/sends to specific ports, to help the management
interface figure out:

- what port is this frame coming from
- what was the reason why this frame got forwarded
- how to send CPU originated traffic to specific ports

The subsystem does support switches not capable of inserting/stripping tags, but
the features might be slightly limited in that case (traffic separation relies
on Port-based VLAN IDs).

Note that DSA does not currently create network interfaces for the "cpu" and
"dsa" ports:

- the "cpu" port is the Ethernet switch facing side of the management
  controller, and as such, would duplicate features, since you would get two
  interfaces for the same conduit: the master netdev and a "cpu" netdev

- the "dsa" port(s) are just conduits between two or more switches, and as such
  cannot really be used as proper network interfaces either; only the
  downstream, or the top-most upstream, interface makes sense with that model

Switch tagging protocols
------------------------

DSA supports many vendor-specific tagging protocols, one software-defined
tagging protocol, and a tag-less mode as well (``DSA_TAG_PROTO_NONE``).

The exact format of the tag is vendor specific, but in general all tags
contain something which:

- identifies which port the Ethernet frame came from/should be sent to
- provides a reason why this frame was forwarded to the management interface

All tagging protocols are in ``net/dsa/tag_*.c`` files and implement the
methods of the ``struct dsa_device_ops`` structure, which are detailed below.

Tagging protocols generally fall in one of three categories:

1. The switch-specific frame header is located before the Ethernet header,
   shifting to the right (from the perspective of the DSA master's frame
   parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.
2. The switch-specific frame header is located before the EtherType, keeping
   the MAC DA and MAC SA in place from the DSA master's perspective, but
   shifting the 'real' EtherType and L2 payload to the right.
3. The switch-specific frame header is located at the tail of the packet,
   keeping all frame headers in place and not altering the view of the packet
   that the DSA master's frame parser has.

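The first two categories can be sketched with plain byte buffers. The following
userspace snippet (not kernel code; the 4-byte tag length and contents are
invented for the example) shows how a category 1 tagger shifts the entire frame
right, while a category 2 tagger leaves the MAC addresses in place and only
shifts the EtherType and payload:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch, not kernel code: models how a category 1 tagger
 * (header before the MAC DA) and a category 2 tagger (header between the
 * MAC SA and the EtherType) rearrange a frame held in a plain byte buffer.
 * The 4-byte tag length is made up for the example.
 */
#define TAG_LEN 4

/* Category 1: shift the entire frame right, prepend the tag. */
static size_t push_tag_category1(uint8_t *buf, size_t len, const uint8_t *tag)
{
	memmove(buf + TAG_LEN, buf, len);
	memcpy(buf, tag, TAG_LEN);
	return len + TAG_LEN;
}

/* Category 2: keep MAC DA/SA (first 12 octets) in place, shift the
 * EtherType and L2 payload right, insert the tag in the gap. */
static size_t push_tag_category2(uint8_t *buf, size_t len, const uint8_t *tag)
{
	memmove(buf + 12 + TAG_LEN, buf + 12, len - 12);
	memcpy(buf + 12, tag, TAG_LEN);
	return len + TAG_LEN;
}
```

In both cases the real work in a kernel tagger is done on the skb, but the
memory movement is the same idea.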
A tagging protocol may tag all packets with switch tags of the same length, or
the tag length might vary (for example packets with PTP timestamps might
require an extended switch tag, or there might be one tag length on TX and a
different one on RX). Either way, the tagging protocol driver must populate the
``struct dsa_device_ops::needed_headroom`` and/or ``struct dsa_device_ops::needed_tailroom``
with the length in octets of the longest switch frame header/trailer. The DSA
framework will automatically adjust the MTU of the master interface to
accommodate this extra size in order for DSA user ports to support the
standard MTU (L2 payload length) of 1500 octets. The ``needed_headroom`` and
``needed_tailroom`` properties are also used to request from the network stack,
on a best-effort basis, the allocation of packets with enough extra space such
that the act of pushing the switch tag on transmission of a packet does not
cause it to reallocate due to lack of memory.

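The MTU adjustment described above is simple arithmetic; a minimal sketch (the
helper name is invented for illustration, not a real DSA function):

```c
/* Illustrative only: the master must carry the user port's MTU plus the
 * longest switch frame header and trailer the tagger may add. */
static int master_mtu_for(int user_mtu, int needed_headroom,
			  int needed_tailroom)
{
	return user_mtu + needed_headroom + needed_tailroom;
}
```

So a user port with the standard 1500-octet MTU behind an 8-octet header tag
requires a master MTU of 1508.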
Even though applications are not expected to parse DSA-specific frame headers,
the format on the wire of the tagging protocol represents an Application Binary
Interface exposed by the kernel towards user space, for decoders such as
``libpcap``. The tagging protocol driver must populate the ``proto`` member of
``struct dsa_device_ops`` with a value that uniquely describes the
characteristics of the interaction required between the switch hardware and the
data path driver: the offset of each bit field within the frame header and any
stateful processing required to deal with the frames (as may be required for
PTP timestamping).

From the perspective of the network stack, all switches within the same DSA
switch tree use the same tagging protocol. In case of a packet transiting a
fabric with more than one switch, the switch-specific frame header is inserted
by the first switch in the fabric that the packet was received on. This header
typically contains information regarding its type (whether it is a control
frame that must be trapped to the CPU, or a data frame to be forwarded).
Control frames should be decapsulated only by the software data path, whereas
data frames might also be autonomously forwarded towards other user ports of
other switches from the same fabric, and in this case, the outermost switch
ports must decapsulate the packet.

Note that in certain cases, the tagging format used by a leaf switch (not
connected directly to the CPU) is not the same as what the network stack sees.
This can be seen with Marvell switch trees, where the CPU port can be
configured to use either the DSA or the Ethertype DSA (EDSA) format, but the
DSA links are configured to use the shorter (without Ethertype) DSA frame
header, in order to reduce the autonomous packet forwarding overhead. It still
remains the case that, if the DSA switch tree is configured for the EDSA
tagging protocol, the operating system sees EDSA-tagged packets from the leaf
switches that tagged them with the shorter DSA header. This can be done
because the Marvell switch connected directly to the CPU is configured to
perform tag translation between DSA and EDSA (which is simply the operation of
adding or removing the ``ETH_P_EDSA`` EtherType and some padding octets).

It is possible to construct cascaded setups of DSA switches even if their
tagging protocols are not compatible with one another. In this case, there are
no DSA links in this fabric, and each switch constitutes a disjoint DSA switch
tree. The DSA links are viewed as simply a pair of a DSA master (the out-facing
port of the upstream DSA switch) and a CPU port (the in-facing port of the
downstream DSA switch).

The tagging protocol of the attached DSA switch tree can be viewed through the
``dsa/tagging`` sysfs attribute of the DSA master::

    cat /sys/class/net/eth0/dsa/tagging

If the hardware and driver are capable, the tagging protocol of the DSA switch
tree can be changed at runtime by writing the new tagging protocol name to the
same sysfs device attribute as above (the DSA master and all attached switch
ports must be down while doing this).

It is desirable that all tagging protocols are testable with the ``dsa_loop``
mockup driver, which can be attached to any network interface. The goal is that
any network interface should be capable of transmitting the same packet in the
same way, and the tagger should decode the same received packet in the same way
regardless of the driver used for the switch control path, and the driver used
for the DSA master.

The transmission of a packet goes through the tagger's ``xmit`` function.
The passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
``skb_mac_header(skb)``, i.e. at the destination MAC address, and the passed
``struct net_device *dev`` represents the virtual DSA user network interface
whose hardware counterpart the packet must be steered to (i.e. ``swp0``).
The job of this method is to prepare the skb in a way that the switch will
understand what egress port the packet is for (and not deliver it towards other
ports). Typically this is fulfilled by pushing a frame header. Checking for
insufficient size in the skb headroom or tailroom is unnecessary provided that
the ``needed_headroom`` and ``needed_tailroom`` properties were filled out
properly, because DSA ensures there is enough space before calling this method.

The reception of a packet goes through the tagger's ``rcv`` function. The
passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
``skb_mac_header(skb) + ETH_ALEN`` octets, i.e. to where the first octet after
the EtherType would have been, were this frame not tagged. The role of this
method is to consume the frame header, adjust ``skb->data`` to really point at
the first octet after the EtherType, and to change ``skb->dev`` to point to the
virtual DSA user network interface corresponding to the physical front-facing
switch port that the packet was received on.

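The receive-side work for a category 2 tag can again be modeled on a plain
byte buffer. In this userspace sketch (not kernel code; the tag layout and the
idea of the source port living in the tag's last byte are invented for the
example), stripping the tag closes the gap between the MAC SA and the
EtherType and reports the originating port:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the rcv-side job for a category 2 tagger: the
 * 4-byte tag (format invented for this example; real formats are vendor
 * specific) sits between the MAC SA and the EtherType, and its last byte
 * carries the source port. rcv must strip it and report the port. */
#define TAG_LEN 4

static int strip_tag_category2(uint8_t *buf, size_t *len)
{
	int src_port = buf[12 + TAG_LEN - 1];	/* made-up field layout */

	/* Close the gap left by the tag: EtherType + payload move left. */
	memmove(buf + 12, buf + 12 + TAG_LEN, *len - 12 - TAG_LEN);
	*len -= TAG_LEN;
	return src_port;
}
```

In a real tagger the returned port selects ``skb->dev``, the slave netdev for
that front-panel port.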
Since tagging protocols in category 1 and 2 break software (and most often also
hardware) packet dissection on the DSA master, features such as RPS (Receive
Packet Steering) on the DSA master would be broken. The DSA framework deals
with this by hooking into the flow dissector and shifting the offset at which
the IP header is to be found in the tagged frame as seen by the DSA master.
This behavior is automatic based on the ``overhead`` value of the tagging
protocol. If not all packets are of equal size, the tagger can implement the
``flow_dissect`` method of the ``struct dsa_device_ops`` and override this
default behavior by specifying the correct offset incurred by each individual
RX packet. Tail taggers do not cause issues to the flow dissector.

Checksum offload should work with category 1 and 2 taggers when the DSA master
driver declares ``NETIF_F_HW_CSUM`` in ``vlan_features`` and looks at
``csum_start`` and ``csum_offset``. For those cases, DSA will shift the
checksum start and offset by the tag size. If the DSA master driver still uses
the legacy ``NETIF_F_IP_CSUM`` or ``NETIF_F_IPV6_CSUM`` in ``vlan_features``,
the offload might only work if the offload hardware already expects that
specific tag (perhaps due to matching vendors). DSA slaves inherit those flags
from the master port, and it is up to the driver to correctly fall back to
software checksumming when the IP header is not where the hardware expects it.
If that check is ineffective, the packets might go to the network without a
proper checksum (the checksum field will contain only the pseudo IP header
sum). For category 3, when the offload hardware does not already expect the
switch tag in use, the checksum must be calculated before any tag is inserted
(i.e. inside the tagger). Otherwise, the DSA master would include the tail tag
in the (software or hardware) checksum calculation. Then, when the tag gets
stripped by the switch during transmission, it would leave an incorrect IP
checksum in place.

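The ordering constraint for tail taggers can be seen with a simplified ones'
complement checksum (RFC 1071 style, no pseudo-header; purely illustrative): a
checksum computed over a packet with the tail tag appended differs from one
computed over the packet alone, so if the master checksums after the tag is
pushed, the value becomes wrong the moment the switch strips the tag:

```c
#include <stdint.h>
#include <stddef.h>

/* Simplified RFC 1071 ones' complement checksum over a byte buffer (no
 * pseudo-header), just to illustrate the ordering constraint for
 * category 3 (tail) taggers. */
static uint16_t csum16(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)data[i] << 8) | data[i + 1];
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}
```

Computing the checksum inside the tagger, before appending the tail tag,
ensures the on-wire checksum still matches once the switch removes the tag.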
Due to various reasons (most common being category 1 taggers being associated
with DSA-unaware masters, mangling what the master perceives as MAC DA), the
tagging protocol may require the DSA master to operate in promiscuous mode, to
receive all frames regardless of the value of the MAC DA. This can be done by
setting the ``promisc_on_master`` property of the ``struct dsa_device_ops``.
Note that this assumes a DSA-unaware master driver, which is the norm.

Master network devices
----------------------

Master network devices are regular, unmodified Linux network device drivers for
the CPU/management Ethernet interface. Such a driver might occasionally need to
know whether DSA is enabled (e.g.: to enable/disable specific offload features),
but the DSA subsystem has been proven to work with industry standard drivers:
``e1000e``, ``mv643xx_eth`` etc. without having to introduce modifications to these
drivers. Such network devices are also often referred to as conduit network
devices since they act as a pipe between the host processor and the hardware
Ethernet switch.

Networking stack hooks
----------------------

When a master netdev is used with DSA, a small hook is placed in the
networking stack in order to have the DSA subsystem process the Ethernet
switch specific tagging protocol. DSA accomplishes this by registering a
specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
networking stack; this is also known as a ``ptype`` or ``packet_type``. A typical
Ethernet Frame receive sequence looks like this:

Master network device (e.g.: e1000e):

1. Receive interrupt fires:

   - receive function is invoked
   - basic packet processing is done: getting length, status etc.
   - packet is prepared to be processed by the Ethernet layer by calling
     ``eth_type_trans``

2. net/ethernet/eth.c::

          eth_type_trans(skb, dev)
                  if (dev->dsa_ptr != NULL)
                          -> skb->protocol = ETH_P_XDSA

3. drivers/net/ethernet/\*::

          netif_receive_skb(skb)
                  -> iterate over registered packet_type
                          -> invoke handler for ETH_P_XDSA, calls dsa_switch_rcv()

4. net/dsa/dsa.c::

          -> dsa_switch_rcv()
                  -> invoke switch tag specific protocol handler in 'net/dsa/tag_*.c'

5. net/dsa/tag_*.c:

   - inspect and strip switch tag protocol to determine originating port
   - locate per-port network device
   - invoke ``eth_type_trans()`` with the DSA slave network device
   - invoke ``netif_receive_skb()``

Past this point, the DSA slave network devices get delivered regular Ethernet
frames that can be processed by the networking stack.

Slave network devices
---------------------

Slave network devices created by DSA are stacked on top of their master network
device; each of these network interfaces is responsible for being a
controlling and data-flowing end-point for each front-panel port of the switch.
These interfaces are specialized in order to:

- insert/remove the switch tag protocol (if it exists) when sending traffic
  to/from specific switch ports
- query the switch for ethtool operations: statistics, link state,
  Wake-on-LAN, register dumps...
- manage external/internal PHY: link, auto-negotiation, etc.

These slave network devices have custom net_device_ops and ethtool_ops function
pointers which allow DSA to introduce a level of layering between the networking
stack/ethtool and the switch driver implementation.

Upon frame transmission from these slave network devices, DSA will look up which
switch tagging protocol is currently registered with these network devices and
invoke a specific transmit routine which takes care of adding the relevant
switch tag in the Ethernet frames.

These frames are then queued for transmission using the master network device
``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
Ethernet switch will be able to process these incoming frames from the
management interface and deliver them to the physical switch port.

Graphical representation
------------------------

Summarized, this is basically what DSA looks like from a network device
perspective::

                        Unaware application
                      opens and binds socket
                            |  |
                            |  |
                +-----------v--|--------------------+
                |+------+ +------+ +------+ +------+|
                || swp0 | | swp1 | | swp2 | | swp3 ||
                |+------+-+------+-+------+-+------+|
                |         DSA switch driver         |
                +-----------------------------------+
                            |        |
               Tag added by |        | Tag consumed by
              switch driver |        | switch driver
                            |        |
                +-----------------------------------+
                |  Unmodified host interface driver | Software
        --------+-----------------------------------+------------
                |       Host interface (eth0)       | Hardware
                +-----------------------------------+
                            |        |
            Tag consumed by |        | Tag added by
            switch hardware |        | switch hardware
                            |        |
                +-----------------------------------+
                |              Switch               |
                |+------+ +------+ +------+ +------+|
                || swp0 | | swp1 | | swp2 | | swp3 ||
                ++------+-+------+-+------+-+------++

Slave MDIO bus
--------------

In order to be able to read to/from a switch's built-in PHYs, DSA creates a
slave MDIO bus which allows a specific switch driver to divert and intercept
MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
switches, these functions would utilize direct or indirect PHY addressing mode
to return standard MII registers from the switch builtin PHYs, allowing the PHY
library to return link status, link partner pages, auto-negotiation results,
etc.

For Ethernet switches which have both external and internal MDIO buses, the
slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
internal or external MDIO devices this switch might be connected to: internal
PHYs, external PHYs, or even external switches.

Data structures
---------------

DSA data structures are defined in ``include/net/dsa.h`` as well as
``net/dsa/dsa_priv.h``:

- ``dsa_chip_data``: platform data configuration for a given switch device,
  this structure describes a switch device's parent device, its address, as
  well as various properties of its ports: names/labels, and finally a routing
  table indication (when cascading switches)

- ``dsa_platform_data``: platform device configuration data which can reference
  a collection of dsa_chip_data structures if multiple switches are cascaded;
  the master network device this switch tree is attached to needs to be
  referenced

- ``dsa_switch_tree``: structure assigned to the master network device under
  ``dsa_ptr``, this structure references a dsa_platform_data structure as well as
  the tagging protocol supported by the switch tree, and which receive/transmit
  function hooks should be invoked. Information about the directly attached
  switch is also provided: CPU port. Finally, a collection of dsa_switch are
  referenced to address individual switches in the tree.

- ``dsa_switch``: structure describing a switch device in the tree, referencing
  a ``dsa_switch_tree`` as a backpointer, slave network devices, master network
  device, and a reference to the backing ``dsa_switch_ops``

- ``dsa_switch_ops``: structure referencing function pointers, see below for a
  full description

Design limitations
==================

Lack of CPU/DSA network devices
-------------------------------

DSA does not currently create slave network devices for the CPU or DSA ports, as
described before. This might be an issue in the following cases:

- inability to fetch switch CPU port statistics counters using ethtool, which
  can make it harder to debug MDIO switches connected using xMII interfaces

- inability to configure the CPU port link parameters based on the Ethernet
  controller capabilities attached to it: http://patchwork.ozlabs.org/patch/509806/

- inability to configure specific VLAN IDs / trunking VLANs between switches
  when using a cascaded setup

Common pitfalls using DSA setups
--------------------------------

Once a master network device is configured to use DSA (dev->dsa_ptr becomes
non-NULL), and the switch behind it expects a tagging protocol, this network
interface can only be used exclusively as a conduit interface. Sending packets
directly through this interface (e.g.: opening a socket using this interface)
will not make us go through the switch tagging protocol transmit function, so
the Ethernet switch on the other end, expecting a tag, will typically drop this
frame.

Interactions with other subsystems
==================================

DSA currently leverages the following subsystems:

- MDIO/PHY library: ``drivers/net/phy/phy.c``, ``mdio_bus.c``
- Switchdev: ``net/switchdev/*``
- Device Tree for various of_* functions
- Devlink: ``net/core/devlink.c``

MDIO/PHY library
----------------

Slave network devices exposed by DSA may or may not be interfacing with PHY
devices (``struct phy_device`` as defined in ``include/linux/phy.h``), but the DSA
subsystem deals with all possible combinations:

- internal PHY devices, built into the Ethernet switch hardware
- external PHY devices, connected via an internal or external MDIO bus
- internal PHY devices, connected via an internal MDIO bus
- special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
  fixed PHYs

The PHY configuration is done by the ``dsa_slave_phy_setup()`` function and the
logic basically looks like this:

- if Device Tree is used, the PHY device is looked up using the standard
  "phy-handle" property; if found, this PHY device is created and registered
  using ``of_phy_connect()``

- if Device Tree is used and the PHY device is "fixed", that is, conforms to
  the definition of a non-MDIO managed PHY as defined in
  ``Documentation/devicetree/bindings/net/fixed-link.txt``, the PHY is registered
  and connected transparently using the special fixed MDIO bus driver

- finally, if the PHY is built into the switch, as is very common with
  standalone switch packages, the PHY is probed using the slave MII bus created
  by the switch driver

SWITCHDEV
---------

DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
more specifically with its VLAN filtering portion when configuring VLANs on top
of per-port slave network devices. As of today, the only SWITCHDEV objects
supported by DSA are the FDB and VLAN objects.

Devlink
-------

DSA registers one devlink device per physical switch in the fabric.
For each devlink device, every physical port (i.e. user ports, CPU ports, DSA
links or unused ports) is exposed as a devlink port.

DSA drivers can make use of the following devlink features:

- Regions: debugging feature which allows user space to dump driver-defined
  areas of hardware information in a low-level, binary format. Both global
  regions as well as per-port regions are supported. It is possible to export
  devlink regions even for pieces of data that are already exposed in some way
  to the standard iproute2 user space programs (ip-link, bridge), like address
  tables and VLAN tables. For example, this might be useful if the tables
  contain additional hardware-specific details which are not visible through
  the iproute2 abstraction, or it might be useful to inspect these tables on
  the non-user ports too, which are invisible to iproute2 because no network
  interface is registered for them.
- Params: a feature which enables users to configure certain low-level tunable
  knobs pertaining to the device. Drivers may implement applicable generic
  devlink params, or may add new device-specific devlink params.
- Resources: a monitoring feature which enables users to see the degree of
  utilization of certain hardware tables in the device, such as FDB, VLAN, etc.
- Shared buffers: a QoS feature for adjusting and partitioning memory and frame
  reservations per port and per traffic class, in the ingress and egress
  directions, such that low-priority bulk traffic does not impede the
  processing of high-priority critical traffic.

For more details, consult ``Documentation/networking/devlink/``.

Device Tree
-----------

DSA features a standardized binding which is documented in
``Documentation/devicetree/bindings/net/dsa/dsa.txt``. PHY/MDIO library helper
functions such as ``of_get_phy_mode()``, ``of_phy_connect()`` are also used to query
per-port PHY specific details: interface connection, MDIO bus location, etc.

Driver development
==================

DSA switch drivers need to implement a ``dsa_switch_ops`` structure which will
contain the various members described below.

Probing, registration and device lifetime
-----------------------------------------

DSA switches are regular ``device`` structures on buses (be they platform, SPI,
I2C, MDIO or otherwise). The DSA framework is not involved in their probing
with the device core.

Switch registration from the perspective of a driver means passing a valid
``struct dsa_switch`` pointer to ``dsa_register_switch()``, usually from the
switch driver's probing function. The following members must be valid in the
provided structure:

- ``ds->dev``: will be used to parse the switch's OF node or platform data.

- ``ds->num_ports``: will be used to create the port list for this switch, and
  to validate the port indices provided in the OF node.

- ``ds->ops``: a pointer to the ``dsa_switch_ops`` structure holding the DSA
  method implementations.

- ``ds->priv``: backpointer to a driver-private data structure which can be
  retrieved in all further DSA method callbacks.

In addition, the following flags in the ``dsa_switch`` structure may optionally
be configured to obtain driver-specific behavior from the DSA core. Their
behavior when set is documented through comments in ``include/net/dsa.h``.

- ``ds->vlan_filtering_is_global``

- ``ds->needs_standalone_vlan_filtering``

- ``ds->configure_vlan_while_not_filtering``

- ``ds->untag_bridge_pvid``

- ``ds->assisted_learning_on_cpu_port``

- ``ds->mtu_enforcement_ingress``

- ``ds->fdb_isolation``

Internally, DSA keeps an array of switch trees (groups of switches) global to
the kernel, and attaches a ``dsa_switch`` structure to a tree on registration.
The tree ID to which the switch is attached is determined by the first u32
number of the ``dsa,member`` property of the switch's OF node (0 if missing).
The switch ID within the tree is determined by the second u32 number of the
same OF property (0 if missing). Registering multiple switches with the same
switch ID and tree ID is illegal and will cause an error. Using platform data,
only a single switch and a single switch tree are permitted.

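The tree/switch ID derivation above amounts to reading two optional cells with
a default of 0. A userspace sketch (the helper name is invented; the real
parsing happens via OF property accessors in the DSA core):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only (helper name invented): derive the tree ID and switch
 * ID from an optional two-cell "dsa,member" property, defaulting both to 0
 * when the property is absent, as the text above describes. */
static void dsa_member_ids(const uint32_t *prop, size_t cells,
			   uint32_t *tree_id, uint32_t *switch_id)
{
	*tree_id = (prop && cells >= 1) ? prop[0] : 0;
	*switch_id = (prop && cells >= 2) ? prop[1] : 0;
}
```

So ``dsa,member = <1 3>`` places the switch as switch 3 of tree 1, and a
missing property yields switch 0 of tree 0.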
In case of a tree with multiple switches, probing takes place asymmetrically.
The first N-1 callers of ``dsa_register_switch()`` only add their ports to the
port list of the tree (``dst->ports``), each port having a backpointer to its
associated switch (``dp->ds``). Then, these switches exit their
``dsa_register_switch()`` call early, because ``dsa_tree_setup_routing_table()``
has determined that the tree is not yet complete (not all ports referenced by
DSA links are present in the tree's port list). The tree becomes complete when
the last switch calls ``dsa_register_switch()``, and this triggers the effective
continuation of initialization (including the call to ``ds->ops->setup()``) for
all switches within that tree, all as part of the calling context of the last
switch's probe function.

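The deferral logic above can be modeled in a few lines of userspace C. This is
only a sketch: a fixed expected switch count stands in for the real check that
all ports referenced by DSA links are present, and all names are invented:

```c
#include <stdbool.h>

/* Userspace model of the deferred setup described above: each switch that
 * registers only joins the tree; full setup runs once, in the context of
 * the last switch to register. */
struct model_tree {
	int expected;		/* switches the tree is declared to contain */
	int joined;		/* switches registered so far */
	bool setup_done;
};

/* Returns true when this registration completed the tree. */
static bool model_register_switch(struct model_tree *dst)
{
	dst->joined++;
	if (dst->joined < dst->expected)
		return false;	/* tree incomplete: exit early */
	dst->setup_done = true;	/* last switch triggers setup for all */
	return true;
}
```

The key property is that setup happens exactly once, from the probe context of
whichever switch happens to register last.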
The opposite of registration takes place when calling ``dsa_unregister_switch()``,
which removes a switch's ports from the port list of the tree. The entire tree
is torn down when the first switch unregisters.

It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
version of the full teardown performed by ``dsa_unregister_switch()``).
The reason is that DSA keeps a reference on the master net device, and if the
driver for the master device decides to unbind on shutdown, DSA's reference
will block that operation from finalizing.

Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
but not both, and the device driver model permits the bus' ``remove()`` method
to be called even if ``shutdown()`` was already called. Therefore, drivers are
expected to implement a mutual exclusion method between ``remove()`` and
``shutdown()`` by setting their drvdata to NULL after either of these has run,
and checking whether the drvdata is NULL before proceeding to take any action.

After ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` was called, no
further callbacks via the provided ``dsa_switch_ops`` may take place, and the
driver may free the data structures associated with the ``dsa_switch``.

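The drvdata-based exclusion between ``remove()`` and ``shutdown()`` can be
sketched in userspace. All names here are invented; the point is only the
guard pattern, where whichever callback runs first clears drvdata so the other
becomes a no-op:

```c
#include <stddef.h>

/* Userspace model of the remove()/shutdown() exclusion described above. */
struct model_device {
	void *drvdata;
	int teardown_calls;
};

static void model_teardown(struct model_device *dev)
{
	if (!dev->drvdata)	/* the other callback already ran */
		return;
	dev->teardown_calls++;	/* real driver: dsa_switch_shutdown() or
				 * dsa_unregister_switch() */
	dev->drvdata = NULL;
}

/* Both bus callbacks funnel into the same guarded teardown. */
static void model_shutdown(struct model_device *dev) { model_teardown(dev); }
static void model_remove(struct model_device *dev)   { model_teardown(dev); }
```

Whatever order the device core invokes the two callbacks in, teardown runs
exactly once.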
Switch configuration
--------------------

- ``get_tag_protocol``: this is to indicate what kind of tagging protocol is
  supported, should be a valid value from the ``dsa_tag_protocol`` enum.
  The returned information does not have to be static; the driver is passed the
  CPU port number, as well as the tagging protocol of a possibly stacked
  upstream switch, in case there are hardware limitations in terms of supported
  tag formats.

- ``change_tag_protocol``: when the default tagging protocol has compatibility
  problems with the master or other issues, the driver may support changing it
  at runtime, either through a device tree property or through sysfs. In that
  case, further calls to ``get_tag_protocol`` should report the protocol in
  current use.

- ``setup``: setup function for the switch, this function is responsible for setting
  up the ``dsa_switch_ops`` private structure with all it needs: register maps,
  interrupts, mutexes, locks, etc. This function is also expected to properly
  configure the switch to separate all network interfaces from each other, that
  is, they should be isolated by the switch hardware itself, typically by creating
  a Port-based VLAN ID for each port and allowing only the CPU port and the
  specific port to be in the forwarding vector. Ports that are unused by the
  platform should be disabled. Past this function, the switch is expected to be
  fully configured and ready to serve any kind of request. It is recommended
  to issue a software reset of the switch during this setup function in order to
  avoid relying on what a previous software agent such as a bootloader/firmware
  may have previously configured. The method responsible for undoing any
  applicable allocations or operations done here is ``teardown``.

- ``port_setup`` and ``port_teardown``: methods for initialization and
  destruction of per-port data structures. It is mandatory for some operations
  such as registering and unregistering devlink port regions to be done from
  these methods, otherwise they are optional. A port will be torn down only if
  it has been previously set up. It is possible for a port to be set up during
  probing only to be torn down immediately afterwards, for example in case its
  PHY cannot be found. In this case, probing of the DSA switch continues
  without that particular port.

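The port isolation rule that ``setup`` is expected to program can be sketched
with per-port forwarding vectors. This is a userspace illustration only; the
bitmask layout and helper name are invented, and real hardware registers
differ per vendor:

```c
#include <stdint.h>

/* Illustrative sketch of the isolation rule described for ``setup``: each
 * user port's forwarding vector (a bitmask of egress-allowed ports, layout
 * invented for the example) contains only the CPU port, so user ports
 * cannot reach each other until bridging is configured. */
static uint32_t isolated_fwd_vector(int port, int cpu_port)
{
	if (port == cpu_port)		/* CPU port may reach every port */
		return ~(uint32_t)(1u << cpu_port);
	return 1u << cpu_port;		/* user ports: CPU port only */
}
```

With this rule, traffic entering any user port can only egress towards the
CPU, which is exactly the standalone-port behavior DSA expects after
``setup``.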
632 PHY devices and link management
633 -------------------------------
- ``get_phy_flags``: Some switches are interfaced to various kinds of Ethernet
  PHYs; if the PHY library driver needs information it cannot obtain on its
  own (e.g. coming from switch memory-mapped registers), this function should
  return a 32-bit bitmask of "flags" that is private between the switch driver
  and the Ethernet PHY driver in ``drivers/net/phy/*``.
- ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to
  read the switch port MDIO registers. If unavailable, return 0xffff for each
  read. For builtin switch Ethernet PHYs, this function should allow reading
  the link status, auto-negotiation results, link partner pages, etc.
- ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to
  write to the switch port MDIO registers. If unavailable, return a negative
  error code.
- ``adjust_link``: Function invoked by the PHY library when a slave network
  device is attached to a PHY device. This function is responsible for
  appropriately configuring the switch port link parameters: speed, duplex and
  pause, based on what the ``phy_device`` is providing.
- ``fixed_link_update``: Function invoked by the PHY library, and specifically
  by the fixed PHY driver, asking the switch driver for link parameters that
  could not be auto-negotiated, or obtained by reading the PHY registers
  through MDIO. This is particularly useful for specific kinds of hardware
  such as QSGMII, MoCA or other kinds of non-MDIO managed PHYs, where out of
  band link information is obtained.
Ethtool operations
------------------

- ``get_strings``: ethtool function used to query the driver's strings, will
  typically return statistics strings, private flags strings, etc.
- ``get_ethtool_stats``: ethtool function used to query per-port statistics
  and return their values. DSA overlays slave network devices general
  statistics: RX/TX counters from the network device, with switch driver
  specific statistics per port.
- ``get_sset_count``: ethtool function used to query the number of statistics
  items.
- ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port,
  this function may, for certain implementations, also query the master
  network device Wake-on-LAN settings if this interface needs to participate
  in Wake-on-LAN.
- ``set_wol``: ethtool function used to configure Wake-on-LAN settings
  per-port, direct counterpart to ``get_wol`` with similar restrictions.
- ``set_eee``: ethtool function which is used to configure a switch port EEE
  (Green Ethernet) settings, can optionally invoke the PHY library to enable
  EEE at the PHY level if relevant. This function should enable EEE at the
  switch port MAC controller and data-processing logic.
- ``get_eee``: ethtool function which is used to query a switch port EEE
  settings, this function should return the EEE state of the switch port MAC
  controller and data-processing logic as well as query the PHY for its
  currently configured EEE settings.
- ``get_eeprom_len``: ethtool function returning for a given switch the EEPROM
  length in bytes.
- ``get_eeprom``: ethtool function returning for a given switch the EEPROM
  contents.

- ``set_eeprom``: ethtool function writing specified data to a given switch
  EEPROM.
- ``get_regs_len``: ethtool function returning the register length for a given
  switch.
- ``get_regs``: ethtool function returning the Ethernet switch internal
  register contents. This function might require user-land code in ethtool to
  pretty-print the register values.
Power management
----------------

- ``suspend``: function invoked by the DSA platform device when the system
  goes to suspend, should quiesce all Ethernet switch activities, but keep
  ports participating in Wake-on-LAN active, as well as additional wake-up
  logic if supported.
- ``resume``: function invoked by the DSA platform device when the system
  resumes, should resume all Ethernet switch activities and re-configure the
  switch to be in a fully active state.
- ``port_enable``: function invoked by the DSA slave network device
  ``ndo_open`` function when a port is administratively brought up, this
  function should fully enable a given switch port. DSA takes care of marking
  the port with ``BR_STATE_BLOCKING`` if the port is a bridge member, or
  ``BR_STATE_FORWARDING`` if it was not, and propagating these changes down to
  the hardware.
- ``port_disable``: function invoked by the DSA slave network device
  ``ndo_close`` function when a port is administratively brought down, this
  function should fully disable a given switch port. DSA takes care of marking
  the port with ``BR_STATE_DISABLED`` and propagating changes to the hardware
  if this port is disabled while being a bridge member.
Address databases
-----------------

Switching hardware is expected to have a table for FDB entries, however not
all of them are active at the same time. An address database is the subset
(partition) of FDB entries that is active (can be matched by address learning
on RX, or FDB lookup on TX) depending on the state of the port. An address
database may occasionally be called "FID" (Filtering ID) in this document,
although the underlying implementation may choose whatever is available to the
hardware.
For example, all ports that belong to a VLAN-unaware bridge (which is
*currently* VLAN-unaware) are expected to learn source addresses in the
database associated by the driver with that bridge (and not with other
VLAN-unaware bridges). During forwarding and FDB lookup, a packet received on
a VLAN-unaware bridge port should be able to find a VLAN-unaware FDB entry
having the same MAC DA as the packet, which is present on another port member
of the same bridge. At the same time, the FDB lookup process must be able to
not find an FDB entry having the same MAC DA as the packet, if that entry
points towards a port which is a member of a different VLAN-unaware bridge
(and is therefore associated with a different address database).
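The partitioning above can be sketched as a toy user-space model, where a
lookup only matches entries installed in the same database (called "fid"
here). The structure layout, table size and function names are invented for
illustration and do not correspond to any real driver.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of FDB partitioning into address databases. */
struct fdb_entry {
	int fid;		/* address database the entry belongs to */
	uint8_t mac[6];
	int port;		/* destination port of the entry */
};

static struct fdb_entry fdb[16];
static int fdb_count;

static void fdb_add(int fid, const uint8_t *mac, int port)
{
	fdb[fdb_count].fid = fid;
	memcpy(fdb[fdb_count].mac, mac, 6);
	fdb[fdb_count].port = port;
	fdb_count++;
}

/* Returns the destination port, or -1 for "not found, flood". */
static int fdb_lookup(int fid, const uint8_t *mac)
{
	int i;

	for (i = 0; i < fdb_count; i++)
		if (fdb[i].fid == fid && !memcmp(fdb[i].mac, mac, 6))
			return fdb[i].port;
	return -1;
}
```

An entry learned in the database of one bridge is invisible to a lookup
performed in the database of another, which is the isolation property the text
describes.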
Similarly, each VLAN of each offloaded VLAN-aware bridge should have an
associated address database, which is shared by all ports which are members of
that VLAN, but not shared by ports belonging to different bridges that are
members of the same VID.
In this context, a VLAN-unaware database means that all packets are expected
to match on it irrespective of VLAN ID (only MAC address lookup), whereas a
VLAN-aware database means that packets are supposed to match based on the VLAN
ID from the classified 802.1Q header (or the pvid if untagged).
At the bridge layer, VLAN-unaware FDB entries have the special VID value of 0,
whereas VLAN-aware FDB entries have non-zero VID values. Note that a
VLAN-unaware bridge may have VLAN-aware (non-zero VID) FDB entries, and a
VLAN-aware bridge may have VLAN-unaware FDB entries. As in hardware, the
software bridge keeps separate address databases, and offloads to hardware the
FDB entries belonging to these databases, through switchdev, asynchronously
relative to the moment when the databases become active or inactive.
When a user port operates in standalone mode, its driver should configure it
to use a separate database called a port private database. This is different
from the databases described above, and should impede operation as a
standalone port (packet in, packet out to the CPU port) as little as possible.
For example, on ingress, it should not attempt to learn the MAC SA of ingress
traffic, since learning is a bridging layer service and this is a standalone
port; learned entries would only consume table space uselessly. With no
address learning, the port private database should be empty in a naive
implementation, and in this case, all received packets should be trivially
flooded to the CPU port.
DSA (cascade) and CPU ports are also called "shared" ports because they
service multiple address databases, and the database that a packet should be
associated with is usually embedded in the DSA tag. This means that the CPU
port may simultaneously transport packets coming from a standalone port (which
were classified by hardware in one address database), and from a bridge port
(which were classified to a different address database).
Switch drivers which satisfy certain criteria are able to optimize the naive
configuration by removing the CPU port from the flooding domain of the switch,
and just programming the hardware with FDB entries pointing towards the CPU
port for those MAC addresses which software is known to be interested in.
Packets which do not match a known FDB entry will not be delivered to the CPU,
saving the CPU cycles otherwise required to create an skb just to drop it.
DSA is able to perform host address filtering for the following kinds of
addresses:
- Primary unicast MAC addresses of ports (``dev->dev_addr``). These are
  associated with the port private database of the respective user port, and
  the driver is notified to install them through ``port_fdb_add`` towards the
  CPU port.
- Secondary unicast and multicast MAC addresses of ports (addresses added
  through ``dev_uc_add()`` and ``dev_mc_add()``). These are also associated
  with the port private database of the respective user port.
- Local/permanent bridge FDB entries (``BR_FDB_LOCAL``). These are the MAC
  addresses of the bridge ports, for which packets must be terminated locally
  and not forwarded. They are associated with the address database for that
  bridge.
- Static bridge FDB entries installed towards foreign (non-DSA) interfaces
  present in the same bridge as some DSA switch ports. These are also
  associated with the address database for that bridge.
- Dynamically learned FDB entries on foreign interfaces present in the same
  bridge as some DSA switch ports, only if
  ``ds->assisted_learning_on_cpu_port`` is set to true by the driver. These
  are associated with the address database of that bridge.
For various operations detailed below, DSA provides a ``dsa_db`` structure
which can be of the following types:
- ``DSA_DB_PORT``: the FDB (or MDB) entry to be installed or deleted belongs
  to the port private database of user port ``db->dp``.
- ``DSA_DB_BRIDGE``: the entry belongs to one of the address databases of
  bridge ``db->bridge``. Separation between the VLAN-unaware database and the
  per-VID databases of this bridge is expected to be done by the driver.
- ``DSA_DB_LAG``: the entry belongs to the address database of LAG
  ``db->lag``. Note: ``DSA_DB_LAG`` is currently unused and may be removed in
  the future.
The drivers which act upon the ``dsa_db`` argument in ``port_fdb_add``,
``port_mdb_add`` etc. should declare ``ds->fdb_isolation`` as true.
DSA associates each offloaded bridge and each offloaded LAG with a one-based
ID (``struct dsa_bridge :: num``, ``struct dsa_lag :: id``) for the purposes
of refcounting addresses on shared ports. Drivers may piggyback on DSA's
numbering scheme (the ID is readable through ``db->bridge.num`` and
``db->lag.id``) or may implement their own.
Only the drivers which declare support for FDB isolation are notified of FDB
entries on the CPU port belonging to ``DSA_DB_PORT`` databases.
For compatibility/legacy reasons, ``DSA_DB_BRIDGE`` addresses are notified to
drivers even if they do not support FDB isolation. However, ``db->bridge.num``
and ``db->lag.id`` are always set to 0 in that case (to denote the lack of
isolation, for refcounting purposes).
Note that it is not mandatory for a switch driver to implement physically
separate address databases for each standalone user port. Since FDB entries
in the port private databases will always point to the CPU port, there is no
risk of incorrect forwarding decisions. In this case, all standalone ports may
share the same database, but the reference counting of host-filtered addresses
(not deleting the FDB entry for a port's MAC address if it's still in use by
another port) becomes the responsibility of the driver, because DSA is unaware
that the port databases are in fact shared. This can be achieved by calling
``dsa_fdb_present_in_other_db()`` and ``dsa_mdb_present_in_other_db()``.
The downside is that the RX filtering lists of each user port are in fact
shared, which means that user port A may accept a packet with a MAC DA it
shouldn't have, only because that MAC address was in the RX filtering list of
user port B. These packets will still be dropped in software, however.
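The reference counting a driver must do on its own when all standalone ports
share one address database can be sketched as below. This is a toy user-space
model with invented names and a fixed-size table; it only illustrates the
rule "delete the hardware entry when the last user is gone".

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy refcounting of host-filtered addresses in a shared database. */
struct host_addr {
	uint8_t mac[6];
	int refcount;
};

static struct host_addr host_fdb[8];

static struct host_addr *host_find(const uint8_t *mac)
{
	int i;

	for (i = 0; i < 8; i++)
		if (host_fdb[i].refcount && !memcmp(host_fdb[i].mac, mac, 6))
			return &host_fdb[i];
	return NULL;
}

static void host_fdb_add(const uint8_t *mac)
{
	struct host_addr *a = host_find(mac);
	int i;

	if (a) {
		a->refcount++;	/* address already known, just take a ref */
		return;
	}
	for (i = 0; i < 8; i++)
		if (!host_fdb[i].refcount) {
			memcpy(host_fdb[i].mac, mac, 6);
			host_fdb[i].refcount = 1;
			return;
		}
}

/* Returns 1 when the hardware FDB entry should actually be deleted. */
static int host_fdb_del(const uint8_t *mac)
{
	struct host_addr *a = host_find(mac);

	if (!a)
		return 0;
	return --a->refcount == 0;
}
```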
Bridge layer
------------

Offloading the bridge forwarding plane is optional and handled by the methods
below. They may be absent, return ``-EOPNOTSUPP``, or ``ds->max_num_bridges``
may be non-zero and exceeded, and in this case, joining a bridge port is still
possible, but the packet forwarding will take place in software, and the ports
under a software bridge must remain configured in the same way as for
standalone operation, i.e. have all bridging service functions (address
learning etc.) disabled, and send all received packets to the CPU port only.
Concretely, a port starts offloading the forwarding plane of a bridge once it
returns success to the ``port_bridge_join`` method, and stops doing so after
``port_bridge_leave`` has been called. Offloading the bridge means
autonomously learning FDB entries in accordance with the software bridge
port's state, and autonomously forwarding (or flooding) received packets
without CPU intervention. This is optional even when offloading a bridge port.
Tagging protocol drivers are expected to call
``dsa_default_offload_fwd_mark(skb)`` for packets which have already been
autonomously forwarded in the forwarding domain of the ingress switch port.
DSA, through ``dsa_port_devlink_setup()``, considers all switch ports part of
the same tree ID to be part of the same bridge forwarding domain (capable of
autonomous forwarding to each other).
Offloading the TX forwarding process of a bridge is a distinct concept from
simply offloading its forwarding plane, and refers to the ability of certain
driver and tag protocol combinations to transmit a single skb coming from the
bridge device's transmit function to potentially multiple egress ports (and
thereby avoid its cloning in software).
Packets for which the bridge requests this behavior are called data plane
packets and have ``skb->offload_fwd_mark`` set to true in the tag protocol
driver's ``xmit`` function. Data plane packets are subject to FDB lookup,
hardware learning on the CPU port, and do not override the port STP state.
Additionally, replication of data plane packets (multicast, flooding) is
handled in hardware and the bridge driver will transmit a single skb for each
packet that may or may not need replication.
When the TX forwarding offload is enabled, the tag protocol driver is
responsible for injecting packets into the data plane of the hardware towards
the correct bridging domain (FID) that the port is a part of. The port may be
VLAN-unaware, and in this case the FID must be equal to the FID used by the
driver for its VLAN-unaware address database associated with that bridge.
Alternatively, the bridge may be VLAN-aware, and in that case, it is
guaranteed that the packet is also VLAN-tagged with the VLAN ID that the
bridge processed this packet in. It is the responsibility of the hardware to
untag the VID on the egress-untagged ports, or keep the tag on the
egress-tagged ones.
- ``port_bridge_join``: bridge layer function invoked when a given switch port
  is added to a bridge, this function should do what's necessary at the switch
  level to permit the joining port to be added to the relevant logical domain
  for it to ingress/egress traffic with other members of the bridge. By
  setting the ``tx_fwd_offload`` argument to true, the TX forwarding process
  of this bridge is also offloaded.
- ``port_bridge_leave``: bridge layer function invoked when a given switch
  port is removed from a bridge, this function should do what's necessary at
  the switch level to prevent the leaving port from exchanging traffic with
  the remaining bridge members.
- ``port_stp_state_set``: bridge layer function invoked when a given switch
  port STP state is computed by the bridge layer and should be propagated to
  switch hardware to forward/block/learn traffic.
- ``port_bridge_flags``: bridge layer function invoked when a port must
  configure its settings for e.g. flooding of unknown traffic or source
  address learning. The switch driver is responsible for initial setup of the
  standalone ports with address learning disabled and egress flooding of all
  types of traffic, then the DSA core notifies of any change to the bridge
  port flags when the port joins and leaves a bridge. DSA does not currently
  manage the bridge port flags for the CPU port. The assumption is that
  address learning should be statically enabled (if supported by the hardware)
  on the CPU port, and flooding towards the CPU port should also be enabled,
  due to a lack of an explicit address filtering mechanism in the DSA core.
- ``port_fast_age``: bridge layer function invoked when flushing the
  dynamically learned FDB entries on the port is necessary. This is called
  when transitioning from an STP state where learning should take place to an
  STP state where it shouldn't, or when leaving a bridge, or when address
  learning is turned off via ``port_bridge_flags``.
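The STP-transition trigger for ``port_fast_age`` described above can be
expressed as a small predicate. The helper names are invented for
illustration; the state constants mirror the bridge's ``BR_STATE_*`` values.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: fast ageing is needed when leaving a state in which address
 * learning takes place for one in which it does not. */
enum {
	BR_STATE_DISABLED = 0,
	BR_STATE_LISTENING = 1,
	BR_STATE_LEARNING = 2,
	BR_STATE_FORWARDING = 3,
	BR_STATE_BLOCKING = 4,
};

static bool stp_learning(int state)
{
	return state == BR_STATE_LEARNING || state == BR_STATE_FORWARDING;
}

static bool needs_fast_age(int old_state, int new_state)
{
	return stp_learning(old_state) && !stp_learning(new_state);
}
```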
Bridge VLAN filtering
---------------------

- ``port_vlan_filtering``: bridge layer function invoked when the bridge gets
  configured for turning on or off VLAN filtering. If nothing specific needs
  to be done at the hardware level, this callback does not need to be
  implemented. When VLAN filtering is turned on, the hardware must be
  programmed to reject 802.1Q frames which have VLAN IDs outside of the
  programmed allowed VLAN ID map/rules. If there is no PVID programmed into
  the switch port, untagged frames must be rejected as well. When turned off,
  the switch must accept any 802.1Q frames irrespective of their VLAN ID, and
  untagged frames are allowed.
- ``port_vlan_add``: bridge layer function invoked when a VLAN is configured
  (tagged or untagged) for the given switch port. The CPU port becomes a
  member of a VLAN only if a foreign bridge port is also a member of it (and
  forwarding needs to take place in software), or the VLAN is installed to the
  VLAN group of the bridge device itself, for termination purposes
  (``bridge vlan add dev br0 vid 100 self``). VLANs on shared ports are
  reference counted and removed when there is no user left. Drivers do not
  need to manually install a VLAN on the CPU port.
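The reference counting of VLANs on shared ports mentioned above boils down to
"program on first use, remove on last use", which can be sketched as follows.
The function names are invented; the DSA core does this bookkeeping so that
drivers see only the first add and the last delete.

```c
#include <assert.h>
#include <stdint.h>

/* Toy refcount for VLANs on a shared (CPU/DSA) port. */
static int vlan_refs[4096];

/* Returns 1 when the VLAN must actually be installed in hardware. */
static int shared_port_vlan_add(uint16_t vid)
{
	return ++vlan_refs[vid] == 1;
}

/* Returns 1 when the VLAN must actually be removed from hardware. */
static int shared_port_vlan_del(uint16_t vid)
{
	return --vlan_refs[vid] == 0;
}
```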
- ``port_vlan_del``: bridge layer function invoked when a VLAN is removed from
  the given switch port.
- ``port_fdb_add``: bridge layer function invoked when the bridge wants to
  install a Forwarding Database entry, the switch hardware should be
  programmed with the specified address in the specified VLAN ID in the
  forwarding database associated with this VLAN ID.
- ``port_fdb_del``: bridge layer function invoked when the bridge wants to
  remove a Forwarding Database entry, the switch hardware should be programmed
  to delete the specified MAC address from the specified VLAN ID if it was
  mapped into this port forwarding database.
- ``port_fdb_dump``: bridge bypass function invoked by ``ndo_fdb_dump`` on the
  physical DSA port interfaces. Since DSA does not attempt to keep in sync its
  hardware FDB entries with the software bridge, this method is implemented as
  a means to view the entries visible on user ports in the hardware database.
  The entries reported by this function have the ``self`` flag in the output
  of the ``bridge fdb show`` command.
- ``port_mdb_add``: bridge layer function invoked when the bridge wants to
  install a multicast database entry. The switch hardware should be programmed
  with the specified address in the specified VLAN ID in the forwarding
  database associated with this VLAN ID.
- ``port_mdb_del``: bridge layer function invoked when the bridge wants to
  remove a multicast database entry, the switch hardware should be programmed
  to delete the specified MAC address from the specified VLAN ID if it was
  mapped into this port forwarding database.
Link aggregation
----------------

Link aggregation is implemented in the Linux networking stack by the bonding
and team drivers, which are modeled as virtual, stackable network interfaces.
DSA is capable of offloading a link aggregation group (LAG) to hardware that
supports the feature, and supports bridging between physical ports and LAGs,
as well as between LAGs. A bonding/team interface which holds multiple
physical ports constitutes a logical port, although DSA has no explicit
concept of a logical port at the moment. Due to this, events where a LAG
joins/leaves a bridge are treated as if all individual physical ports that are
members of that LAG join/leave the bridge. Switchdev port attributes (VLAN
filtering, STP state, etc.) and objects (VLANs, MDB entries) offloaded to a
LAG as bridge port are treated similarly: DSA offloads the same switchdev
object / port attribute on all members of the LAG. Static bridge FDB entries
on a LAG are not yet supported, since the DSA driver API does not have the
concept of a logical port.
- ``port_lag_join``: function invoked when a given switch port is added to a
  LAG. The driver may return ``-EOPNOTSUPP``, and in this case, DSA will fall
  back to a software implementation where all traffic from this port is sent
  to the CPU port.
- ``port_lag_leave``: function invoked when a given switch port leaves a LAG
  and returns to operation as a standalone port.
- ``port_lag_change``: function invoked when the link state of any member of
  the LAG changes, and the hashing function needs rebalancing to only make use
  of the subset of physical LAG member ports that are up.
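The rebalancing that ``port_lag_change`` hints at can be sketched as a TX
port selection that hashes only over the members whose link is currently up.
This is a toy user-space model with invented names; real hardware implements
the equivalent in its LAG hashing tables.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy LAG egress port selection over the subset of members that are up. */
static int lag_tx_port(uint32_t hash, const int *members,
		       const bool *link_up, int num_members)
{
	int active[16];
	int i, num_active = 0;

	for (i = 0; i < num_members; i++)
		if (link_up[i])
			active[num_active++] = members[i];
	if (!num_active)
		return -1;	/* no usable member */
	return active[hash % num_active];
}
```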
Drivers that benefit from having an ID associated with each offloaded LAG
can optionally populate ``ds->num_lag_ids`` from the ``dsa_switch_ops::setup``
method. The LAG ID associated with a bonding/team interface can then be
retrieved by a DSA switch driver using the ``dsa_lag_id`` function.
IEC 62439-2 (MRP)
-----------------

The Media Redundancy Protocol is a topology management protocol optimized for
fast fault recovery time in ring networks, which has some components
implemented as a function of the bridge driver. MRP uses management PDUs
(Test, Topology, LinkDown/Up, Option) sent to a multicast destination MAC
address range of 01:15:4e:00:00:0x and with an EtherType of 0x88e3.
Depending on the node's role in the ring (MRM: Media Redundancy Manager,
MRC: Media Redundancy Client, MRA: Media Redundancy Automanager), certain MRP
PDUs might need to be terminated locally and others might need to be
forwarded. An MRM might also benefit from offloading to hardware the creation
and transmission of certain MRP PDUs (Test).
Normally an MRP instance can be created on top of any network interface,
however in the case of a device with an offloaded data path such as DSA, it is
necessary for the hardware, even if it is not MRP-aware, to be able to extract
the MRP PDUs from the fabric before the driver can proceed with the software
implementation. DSA today has no driver which is MRP-aware, therefore it only
listens for the bare minimum switchdev objects required for the software
assist to work properly. The operations are detailed below.
- ``port_mrp_add`` and ``port_mrp_del``: notifies driver when an MRP instance
  with a certain ring ID, priority, primary port and secondary port is created
  or deleted.
- ``port_mrp_add_ring_role`` and ``port_mrp_del_ring_role``: function invoked
  when an MRP instance changes ring roles between MRM or MRC. This affects
  which MRP PDUs should be trapped to software and which should be
  autonomously forwarded.
IEC 62439-3 (HSR/PRP)
---------------------

The Parallel Redundancy Protocol (PRP) is a network redundancy protocol which
works by duplicating and sequence numbering packets through two independent L2
networks (which are unaware of the PRP tail tags carried in the packets), and
eliminating the duplicates at the receiver. The High-availability Seamless
Redundancy (HSR) protocol is similar in concept, except all nodes that carry
the redundant traffic are aware of the fact that it is HSR-tagged (because HSR
uses a header with an EtherType of 0x892f) and are physically connected in a
ring topology. Both HSR and PRP use supervision frames for monitoring the
health of the network and for discovery of other nodes.
In Linux, both HSR and PRP are implemented in the hsr driver, which
instantiates a virtual, stackable network interface with two member ports.
The driver only implements the basic roles of DANH (Doubly Attached Node
implementing HSR) and DANP (Doubly Attached Node implementing PRP); the roles
of RedBox and QuadBox are not implemented (therefore, bridging a hsr network
interface with a physical switch port does not produce the expected result).
A driver which is capable of offloading certain functions of a DANP or DANH
should declare the corresponding netdev features as indicated by the
documentation at ``Documentation/networking/netdev-features.rst``.
Additionally, the following methods must be implemented:
- ``port_hsr_join``: function invoked when a given switch port is added to a
  DANP/DANH. The driver may return ``-EOPNOTSUPP`` and in this case, DSA will
  fall back to a software implementation where all traffic from this port is
  sent to the CPU port.
- ``port_hsr_leave``: function invoked when a given switch port leaves a
  DANP/DANH and returns to normal operation as a standalone port.
TODO
====

Making SWITCHDEV and DSA converge towards a unified codebase
-------------------------------------------------------------
SWITCHDEV properly takes care of abstracting the networking stack with offload
capable hardware, but does not enforce a strict switch device driver model. On
the other hand, DSA enforces a fairly strict device driver model, and deals
with most of the switch specifics. At some point we should envision a merger
between these two subsystems and get the best of both worlds.
Other hanging fruits
--------------------

- allowing more than one CPU/management interface:
  http://comments.gmane.org/gmane.linux.network/365657