linux-2.6-block.git
Merge branch 'introduce-define_flex-macro'
Jakub Kicinski [Tue, 3 Oct 2023 19:17:13 +0000 (12:17 -0700)]
Merge branch 'introduce-define_flex-macro'

Przemek Kitszel says:

====================
introduce DEFINE_FLEX() macro

Add the DEFINE_FLEX() macro, which helps with on-stack allocation of
structures that have a trailing flexible array member.
Expose the __struct_size() macro, which reads the size of data allocated
by DEFINE_FLEX().
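
For example, a minimal usage sketch (qg_buf/txqs are names used by the
ice patches below):

  DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
  u16 buf_size = __struct_size(qg_buf); /* size of the whole allocation */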

Accompany the introduction of the new macros with actual usage
in the ice driver - hence targeting the netdev tree.

Obvious benefits include simpler resulting code, less heap usage and
less error checking. Less obvious is the fact that the compiler has
more room to optimize; as a whole, even with more data on the stack,
we end up with a better (smaller) overall report from bloat-o-meter:
add/remove: 8/6 grow/shrink: 7/18 up/down: 2211/-2270 (-59)
(individual results in each patch).
====================

Link: https://lore.kernel.org/r/20230912115937.1645707-1-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: make use of DEFINE_FLEX() in ice_switch.c
Przemek Kitszel [Tue, 12 Sep 2023 11:59:37 +0000 (07:59 -0400)]
ice: make use of DEFINE_FLEX() in ice_switch.c

Use the DEFINE_FLEX() macro for the single-element flexible array members in ice_switch.c.

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-8-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: make use of DEFINE_FLEX() for struct ice_aqc_dis_txq_item
Przemek Kitszel [Tue, 12 Sep 2023 11:59:36 +0000 (07:59 -0400)]
ice: make use of DEFINE_FLEX() for struct ice_aqc_dis_txq_item

Use the DEFINE_FLEX() macro for the single-element flexible array use case
of struct ice_aqc_dis_txq_item.

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-7-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: make use of DEFINE_FLEX() for struct ice_aqc_add_tx_qgrp
Przemek Kitszel [Tue, 12 Sep 2023 11:59:35 +0000 (07:59 -0400)]
ice: make use of DEFINE_FLEX() for struct ice_aqc_add_tx_qgrp

Use the DEFINE_FLEX() macro for the single-element flexible array use case
of struct ice_aqc_add_tx_qgrp.

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-6-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: make use of DEFINE_FLEX() in ice_ddp.c
Przemek Kitszel [Tue, 12 Sep 2023 11:59:34 +0000 (07:59 -0400)]
ice: make use of DEFINE_FLEX() in ice_ddp.c

Use the DEFINE_FLEX() macro for the constant-element-count (4)
flexible array members in ice_ddp.c.

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-5-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: drop two params of ice_aq_move_sched_elems()
Przemek Kitszel [Tue, 12 Sep 2023 11:59:33 +0000 (07:59 -0400)]
ice: drop two params of ice_aq_move_sched_elems()

Remove two arguments of ice_aq_move_sched_elems().
The last of them was always NULL, and @grps_req was always 1.

Assuming @grps_req to be one allows us to use the DEFINE_FLEX() macro,
which removes some need for heap allocations.

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-4-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ice: ice_sched_remove_elems: replace 1 elem array param by u32
Przemek Kitszel [Tue, 12 Sep 2023 11:59:32 +0000 (07:59 -0400)]
ice: ice_sched_remove_elems: replace 1 elem array param by u32

Replace the array+size params of ice_sched_remove_elems() with a single
u32, as all callers use it with "1".

This enables moving from heap-based to stack-based allocation, which is
also more elegant thanks to the DEFINE_FLEX() macro.
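
A sketch of the call-site change (names are illustrative):

  /* before: count + array */
  ice_sched_remove_elems(hw, parent, 1, &node->info.node_teid);
  /* after: a single node TEID */
  ice_sched_remove_elems(hw, parent, node->info.node_teid);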

Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-3-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
overflow: add DEFINE_FLEX() for on-stack allocs
Przemek Kitszel [Tue, 12 Sep 2023 11:59:31 +0000 (07:59 -0400)]
overflow: add DEFINE_FLEX() for on-stack allocs

Add the DEFINE_FLEX() macro for on-stack allocations of structs with a
flexible array member.

Expose the __struct_size() macro outside of fortify-string.h, as it can be
used to read the size of structs allocated by DEFINE_FLEX().
Move __member_size() alongside it.
-Kees

Using an underlying array for on-stack storage lets us declare
structures whose size is known at compile time without kzalloc().
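
A simplified sketch of the idea (the count must be a compile-time
constant so the backing array can be sized):

  #define DEFINE_FLEX(TYPE, NAME, MEMBER, COUNT)                \
          union {                                               \
                  u8 bytes[struct_size_t(TYPE, MEMBER, COUNT)]; \
                  TYPE obj;                                     \
          } NAME##_u = {};                                      \
          TYPE *NAME = (TYPE *)&NAME##_u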

Actual usage in the ice driver is in the following patches of the series.

The workaround for a missing __has_builtin() is moved up so that it also
covers assembly compilation with m68k-linux-gcc, see [1].
The error was (note the .S file extension):
In file included from ../include/linux/linkage.h:5,
                 from ../arch/m68k/fpsp040/skeleton.S:40:
../include/linux/compiler_types.h:331:5: warning: "__has_builtin" is not defined, evaluates to 0 [-Wundef]
  331 | #if __has_builtin(__builtin_dynamic_object_size)
      |     ^~~~~~~~~~~~~
../include/linux/compiler_types.h:331:18: error: missing binary operator before token "("
  331 | #if __has_builtin(__builtin_dynamic_object_size)
      |                  ^

[1] https://lore.kernel.org/netdev/202308112122.OuF0YZqL-lkp@intel.com/
Co-developed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://lore.kernel.org/r/20230912115937.1645707-2-przemyslaw.kitszel@intel.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge branch 'bpf-remove-xdp_do_flush_map'
Jakub Kicinski [Tue, 3 Oct 2023 14:34:53 +0000 (07:34 -0700)]
Merge branch 'bpf-remove-xdp_do_flush_map'

Sebastian Andrzej Siewior says:

====================
bpf: Remove xdp_do_flush_map().

I had #1 split into several patches, one per vendor, and then decided to
merge them. I can repost with one patch per vendor if that is preferred.
====================

Link: https://lore.kernel.org/r/20230908143215.869913-1-bigeasy@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
bpf: Remove xdp_do_flush_map().
Sebastian Andrzej Siewior [Fri, 8 Sep 2023 14:32:15 +0000 (16:32 +0200)]
bpf: Remove xdp_do_flush_map().

xdp_do_flush_map() can be removed because there are no more users in the tree.

Remove xdp_do_flush_map().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Link: https://lore.kernel.org/r/20230908143215.869913-3-bigeasy@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: Tree wide: Replace xdp_do_flush_map() with xdp_do_flush().
Sebastian Andrzej Siewior [Fri, 8 Sep 2023 14:32:14 +0000 (16:32 +0200)]
net: Tree wide: Replace xdp_do_flush_map() with xdp_do_flush().

xdp_do_flush_map() is deprecated and new code should use xdp_do_flush()
instead.

Replace xdp_do_flush_map() with xdp_do_flush().
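
The per-driver change is mechanical, e.g. in a NAPI poll handler
(sketch; the flag name is illustrative):

  if (xdp_redirected)
          xdp_do_flush();         /* was: xdp_do_flush_map() */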

Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
Cc: Clark Wang <xiaoning.wang@nxp.com>
Cc: Claudiu Manoil <claudiu.manoil@nxp.com>
Cc: David Arinzon <darinzon@amazon.com>
Cc: Edward Cree <ecree.xilinx@gmail.com>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: Jassi Brar <jaswinder.singh@linaro.org>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Cc: John Crispin <john@phrozen.org>
Cc: Leon Romanovsky <leon@kernel.org>
Cc: Lorenzo Bianconi <lorenzo@kernel.org>
Cc: Louis Peens <louis.peens@corigine.com>
Cc: Marcin Wojtas <mw@semihalf.com>
Cc: Mark Lee <Mark-MC.Lee@mediatek.com>
Cc: Matthias Brugger <matthias.bgg@gmail.com>
Cc: NXP Linux Team <linux-imx@nxp.com>
Cc: Noam Dagan <ndagan@amazon.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Saeed Bishara <saeedb@amazon.com>
Cc: Saeed Mahameed <saeedm@nvidia.com>
Cc: Sean Wang <sean.wang@mediatek.com>
Cc: Shay Agroskin <shayagr@amazon.com>
Cc: Shenwei Wang <shenwei.wang@nxp.com>
Cc: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
Cc: Tony Nguyen <anthony.l.nguyen@intel.com>
Cc: Vladimir Oltean <vladimir.oltean@nxp.com>
Cc: Wei Fang <wei.fang@nxp.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Arthur Kiyanovski <akiyano@amazon.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Martin Habets <habetsm.xilinx@gmail.com>
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Link: https://lore.kernel.org/r/20230908143215.869913-2-bigeasy@linutronix.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: microchip: sparx5: clean up error checking in vcap_show_admin()
Dan Carpenter [Fri, 8 Sep 2023 07:03:37 +0000 (10:03 +0300)]
net: microchip: sparx5: clean up error checking in vcap_show_admin()

vcap_decode_rule() never returns NULL, so there is no need to check
for that.  The code assumes that if it did return NULL, we should
end abruptly and return success, which is confusing.  Fix the check to
just be if (IS_ERR()) instead of if (IS_ERR_OR_NULL()).
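
A sketch of the corrected pattern (identifiers abridged):

  rule = vcap_decode_rule(elem);
  if (IS_ERR(rule))
          return PTR_ERR(rule);   /* NULL cannot happen here */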

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/r/202309070831.hTvj9ekP-lkp@intel.com/
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Daniel Machon <daniel.machon@microchip.com>
Link: https://lore.kernel.org/r/b88eba86-9488-4749-a896-7c7050132e7b@moroto.mountain
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge branch 'net-dsa-hsr-enable-hsr-hw-offloading-for-ksz9477'
Paolo Abeni [Tue, 3 Oct 2023 11:51:06 +0000 (13:51 +0200)]
Merge branch 'net-dsa-hsr-enable-hsr-hw-offloading-for-ksz9477'

Lukasz Majewski says:

====================
net: dsa: hsr: Enable HSR HW offloading for KSZ9477

This patch series provides support for HSR HW offloading in KSZ9477
switch IC.

To test this feature:
ip link add name hsr0 type hsr slave1 lan1 slave2 lan2 supervision 45 version 1
ip link set dev lan1 up
ip link set dev lan2 up
ip a add 192.168.0.1/24 dev hsr0
ip link set dev hsr0 up

To remove HSR network device:
ip link del hsr0

To test if one can adjust MAC address:
ip link set lan2 address 00:01:02:AA:BB:CC

It is also possible to create another HSR interface, but it will
only support HSR in software - e.g.
ip link add name hsr1 type hsr slave1 lan3 slave2 lan4 supervision 45 version 1

Test HW:
Two KSZ9477-EVB boards with HSR ports set to "Port1" and "Port2".

Performance SW used:
nuttcp -S --nofork
nuttcp -vv -T 60 -r 192.168.0.2
nuttcp -vv -T 60 -t 192.168.0.2

Code: v6.6.0-rc2+ Linux net-next repository
SHA1: 5a1b322cb0b7d0d33a2d13462294dc0f46911172

Tested HSR v0 and v1
Results:
With KSZ9477 offloading support added: RX: 100 Mbps  TX: 98 Mbps
With no offloading:                    RX: 63 Mbps   TX: 63 Mbps
====================

Link: https://lore.kernel.org/r/20230922133108.2090612-1-lukma@denx.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: microchip: Enable HSR offloading for KSZ9477
Lukasz Majewski [Fri, 22 Sep 2023 13:31:08 +0000 (15:31 +0200)]
net: dsa: microchip: Enable HSR offloading for KSZ9477

This patch adds functions providing HSR (High-availability Seamless
Redundancy) hardware offloading in the KSZ9477 switch.

According to the AN3474 application note, the following features are provided:
- TX packet duplication from host to switch (NETIF_F_HW_HSR_DUP)
- RX packet duplication discarding
- Prevention of packet loop

For the last two, there is a probability that some packets will not
be filtered in HW (in some special cases described in AN3474).
Hence, the HSR core code is still used to discard the frames that are not caught.

Moreover, some switch register adjustments are required - like setting
the MAC address of the HSR network interface.

Additionally, the KSZ9477 switch has been configured to forward frames
between HSR member ports (e.g. 1,2) to provide support for the
NETIF_F_HW_HSR_FWD flag.

The join and leave functions are written in such a way that they are
executed with a single port - i.e. the configuration is NOT done only
when the second HSR port is configured.

Co-developed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: microchip: move REG_SW_MAC_ADDR to dev->info->regs[]
Vladimir Oltean [Fri, 22 Sep 2023 13:31:07 +0000 (15:31 +0200)]
net: dsa: microchip: move REG_SW_MAC_ADDR to dev->info->regs[]

Defining macros which have the same name but different values is bad
practice, because it makes it hard to avoid code duplication. The same
code does different things, depending on the file it's placed in.
Case in point, we want to access REG_SW_MAC_ADDR from ksz_common.c, but
currently we can't, because we don't know which kszXXXX_reg.h to include
from the common code.

Remove the REG_SW_MAC_ADDR_{0..5} macros from ksz8795_reg.h and
ksz9477_reg.h, and re-add this register offset to the dev->info->regs[]
array.
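
With the offset in dev->info->regs[], the common code can access the
register uniformly, e.g. (sketch):

  const u16 *regs = dev->info->regs;

  ksz_write8(dev, regs[REG_SW_MAC_ADDR] + i, addr[i]);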

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: tag_ksz: Extend ksz9477_xmit() for HSR frame duplication
Lukasz Majewski [Fri, 22 Sep 2023 13:31:06 +0000 (15:31 +0200)]
net: dsa: tag_ksz: Extend ksz9477_xmit() for HSR frame duplication

The KSZ9477 has support for HSR (High-Availability Seamless Redundancy).
One of its offloading (i.e. performed in the switch IC hardware) features
is to duplicate a received frame to both HSR-aware switch ports.

To achieve this goal, the tail tag needs to be modified. To be more
specific, both ports must be marked as destination (egress) ports.

The NETIF_F_HW_HSR_DUP flag indicates that the device supports HSR and
ensures (in the HSR core code) that the frame is sent only once from the
host to the switch, with the tail tag indicating both ports.
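
A hypothetical sketch of the xmit-side idea (the real tail-tag bit
layout follows the KSZ9477 documentation):

  /* mark both HSR member ports as egress destinations so the
   * switch duplicates the frame in hardware */
  if (dev->features & NETIF_F_HW_HSR_DUP)
          val |= hsr_ports_mask;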

Signed-off-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: notify drivers of MAC address changes on user ports
Vladimir Oltean [Fri, 22 Sep 2023 13:31:05 +0000 (15:31 +0200)]
net: dsa: notify drivers of MAC address changes on user ports

In some cases, drivers may need to veto the changing of a MAC address on
a user port. Such is the case with KSZ9477 when it offloads a HSR device,
because it programs the MAC address of multiple ports to a shared
hardware register. Those ports need to have equal MAC addresses for the
lifetime of the HSR offload.

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: propagate extack to ds->ops->port_hsr_join()
Vladimir Oltean [Fri, 22 Sep 2023 13:31:04 +0000 (15:31 +0200)]
net: dsa: propagate extack to ds->ops->port_hsr_join()

Drivers can provide meaningful error messages which state a reason why
they can't perform an offload, and dsa_slave_changeupper() already has
the infrastructure to propagate these over netlink rather than printing
to the kernel log. So pass the extack argument and modify the xrs700x
driver's port_hsr_join() prototype.

Also take the opportunity to use the extack for the 2 -EOPNOTSUPP cases
in xrs700x_hsr_join().
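
The op prototype now looks roughly like (sketch):

  int (*port_hsr_join)(struct dsa_switch *ds, int port,
                       struct net_device *hsr,
                       struct netlink_ext_ack *extack);

so a driver can report, e.g. (message text illustrative):

  NL_SET_ERR_MSG_MOD(extack, "Only ports 1 and 2 can offload HSR");
  return -EOPNOTSUPP;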

Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Lukasz Majewski <lukma@denx.de>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: sfp: add quirk for FS's 2.5G copper SFP
Raju Lakkaraju [Mon, 25 Sep 2023 08:00:59 +0000 (13:30 +0530)]
net: sfp: add quirk for FS's 2.5G copper SFP

Add a quirk for a copper SFP that identifies itself as "FS" "SFP-2.5G-T".
This module's PHY is inaccessible, and the module can only run at
2500base-X with the host, without negotiation. The quirk enables the
2500base-X interface mode with 2500base-T support and disables
autonegotiation.

Signed-off-by: Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
Link: https://lore.kernel.org/r/20230925080059.266240-1-Raju.Lakkaraju@microchip.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
ipv6: mark address parameters of udp_tunnel6_xmit_skb() as const
Beniamino Galvani [Sun, 24 Sep 2023 15:30:14 +0000 (17:30 +0200)]
ipv6: mark address parameters of udp_tunnel6_xmit_skb() as const

The function doesn't modify the addresses passed as input, so mark them
as 'const' to make that clear.

Signed-off-by: Beniamino Galvani <b.galvani@gmail.com>
Reviewed-by: Guillaume Nault <gnault@redhat.com>
Link: https://lore.kernel.org/r/20230924153014.786962-1-b.galvani@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: phy: amd: Support the Altima AC101L
Linus Walleij [Sun, 24 Sep 2023 08:19:02 +0000 (10:19 +0200)]
net: phy: amd: Support the Altima AC101L

The Altima AC101L is obviously compatible with the AMD PHY,
as seen by reading the datasheet.

Datasheet: https://docs.broadcom.com/doc/AC101L-DS05-405-RDS.pdf

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lore.kernel.org/r/20230924-ac101l-phy-v1-1-5e6349e28aa4@linaro.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
udp_tunnel: Use flex array to simplify code
Christophe JAILLET [Sun, 24 Sep 2023 08:03:07 +0000 (10:03 +0200)]
udp_tunnel: Use flex array to simplify code

'n_tables' is small, UDP_TUNNEL_NIC_MAX_TABLES = 4 at most. So there
is no real point in allocating the 'entries' pointer array with a
dedicated memory allocation.

Using a flexible array for struct udp_tunnel_nic->entries avoids the
overhead of an additional memory allocation.

This also saves an indirection when the array is accessed.

Finally, __counted_by() can be used for run-time bounds checking if
configured and supported by the compiler.
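
A sketch of the struct change (other fields omitted):

  struct udp_tunnel_nic {
          /* ... */
          unsigned int n_tables;
          struct udp_tunnel_nic_table_entry *entries[] __counted_by(n_tables);
  };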

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Link: https://lore.kernel.org/r/4a096ba9cf981a588aa87235bb91e933ee162b3d.1695542544.git.christophe.jaillet@wanadoo.fr
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: ixp4xx_eth: Specify min/max MTU
Linus Walleij [Sat, 23 Sep 2023 18:38:22 +0000 (20:38 +0200)]
net: ixp4xx_eth: Specify min/max MTU

As we don't specify the MTU in the driver, the framework
will fall back to 1500 bytes and this doesn't work very
well when we try to attach a DSA switch:

  eth1: mtu greater than device maximum
  ixp4xx_eth c800a000.ethernet eth1: error -22 setting
  MTU to 1504 to include DSA overhead

I checked the developer docs and the hardware can actually
do really big frames, so update the driver accordingly.
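
A sketch of the fix (the exact upper bound comes from the developer
docs; MAX_MRU here stands for that hardware limit):

  ndev->min_mtu = ETH_MIN_MTU;
  ndev->max_mtu = MAX_MRU;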

Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://lore.kernel.org/r/20230923-ixp4xx-eth-mtu-v1-1-9e88b908e1b2@linaro.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Merge branch 'tcp_metrics-four-fixes'
Paolo Abeni [Tue, 3 Oct 2023 08:05:24 +0000 (10:05 +0200)]
Merge branch 'tcp_metrics-four-fixes'

Eric Dumazet says:

====================
tcp_metrics: four fixes

Looking at an inconclusive syzbot report, I was surprised
to see that the tcp_metrics cache on my host was full of
useless entries, even though I have
/proc/sys/net/ipv4/tcp_no_metrics_save set to 1.

While looking more closely I found a total of four issues.
====================

Link: https://lore.kernel.org/r/20230922220356.3739090-1-edumazet@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
tcp_metrics: optimize tcp_metrics_flush_all()
Eric Dumazet [Fri, 22 Sep 2023 22:03:56 +0000 (22:03 +0000)]
tcp_metrics: optimize tcp_metrics_flush_all()

This is inspired by several syzbot reports where
tcp_metrics_flush_all() was seen in the traces.

We can avoid acquiring tcp_metrics_lock for empty buckets,
and we should add one cond_resched() to break potential long loops.
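
A sketch of the resulting loop (assuming the existing bucket layout):

  for (row = 0; row < max_rows; row++, hb++) {
          if (!rcu_access_pointer(hb->chain))
                  continue;       /* empty bucket: skip the lock */
          spin_lock_bh(&tcp_metrics_lock);
          /* ... unlink the entries of this bucket ... */
          spin_unlock_bh(&tcp_metrics_lock);
          cond_resched();
  }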

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
tcp_metrics: do not create an entry from tcp_init_metrics()
Eric Dumazet [Fri, 22 Sep 2023 22:03:55 +0000 (22:03 +0000)]
tcp_metrics: do not create an entry from tcp_init_metrics()

tcp_init_metrics() only wants to get metrics if they were
previously stored in the cache. Creating an entry adds useless
cost, especially when tcp_no_metrics_save is set.
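
A sketch of the change (assuming a 'create' flag on tcp_get_metrics()):

  /* before: could instantiate a new cache entry */
  tm = tcp_get_metrics(sk, dst, true);
  /* after: lookup only */
  tm = tcp_get_metrics(sk, dst, false);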

Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
tcp_metrics: properly set tp->snd_ssthresh in tcp_init_metrics()
Eric Dumazet [Fri, 22 Sep 2023 22:03:54 +0000 (22:03 +0000)]
tcp_metrics: properly set tp->snd_ssthresh in tcp_init_metrics()

We need to set tp->snd_ssthresh to TCP_INFINITE_SSTHRESH
in the case tcp_get_metrics() fails for some reason.
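
Roughly (sketch):

  if (!tm) {
          /* no cached metrics: use the default, infinite threshold */
          tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
          goto reset;
  }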

Fixes: 9ad7c049f0f7 ("tcp: RFC2988bis + taking RTT sample from 3WHS for the passive open side")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
tcp_metrics: add missing barriers on delete
Eric Dumazet [Fri, 22 Sep 2023 22:03:53 +0000 (22:03 +0000)]
tcp_metrics: add missing barriers on delete

When removing an item from an RCU-protected list, we must prevent
store-tearing by using rcu_assign_pointer() or WRITE_ONCE().
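
I.e., for the singly-linked chain unlink (sketch):

  /* before: plain store, subject to store-tearing */
  *pp = tm->tcpm_next;
  /* after */
  WRITE_ONCE(*pp, tm->tcpm_next);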

Fixes: 04f721c671656 ("tcp_metrics: Rewrite tcp_metrics_flush_all")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Merge branch 'fix-implicit-sign-conversions-in-handshake-upcall'
Jakub Kicinski [Mon, 2 Oct 2023 19:34:23 +0000 (12:34 -0700)]
Merge branch 'fix-implicit-sign-conversions-in-handshake-upcall'

Chuck Lever says:

====================
Fix implicit sign conversions in handshake upcall

An internal static analysis tool noticed some implicit sign
conversions for some of the arguments in the handshake upcall
protocol.
====================

Link: https://lore.kernel.org/r/169530154802.8905.2645661840284268222.stgit@oracle-102.nfsv4bat.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
handshake: Fix sign of key_serial_t fields
Chuck Lever [Thu, 21 Sep 2023 13:08:07 +0000 (09:08 -0400)]
handshake: Fix sign of key_serial_t fields

key_serial_t fields are signed integers. Use nla_get/put_s32 for
those to avoid implicit signed conversion in the netlink protocol.
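
For example (attribute name illustrative):

  /* before: implicit sign conversion of a key_serial_t */
  nla_put_u32(msg, HANDSHAKE_A_ACCEPT_CERTIFICATE, cert);
  /* after */
  nla_put_s32(msg, HANDSHAKE_A_ACCEPT_CERTIFICATE, cert);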

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/169530167716.8905.645746457741372879.stgit@oracle-102.nfsv4bat.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
handshake: Fix sign of socket file descriptor fields
Chuck Lever [Thu, 21 Sep 2023 13:07:40 +0000 (09:07 -0400)]
handshake: Fix sign of socket file descriptor fields

Socket file descriptors are signed integers. Use nla_get/put_s32 for
those to avoid implicit signed conversion in the netlink protocol.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/169530165057.8905.8650469415145814828.stgit@oracle-102.nfsv4bat.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge branch 'mlxsw-annotate-structs-with-__counted_by'
Jakub Kicinski [Mon, 2 Oct 2023 18:27:17 +0000 (11:27 -0700)]
Merge branch 'mlxsw-annotate-structs-with-__counted_by'

Kees Cook says:

====================
mlxsw: Annotate structs with __counted_by

This annotates several mlxsw structures with the coming __counted_by attribute
for bounds checking of flexible arrays at run-time. For more details, see
commit dd06e72e68bc ("Compiler Attributes: Add __counted_by macro").
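
The annotation ties a flexible array to its count member, e.g. (fields
abridged):

  struct mlxsw_sp_span {
          /* ... */
          int entries_count;
          struct mlxsw_sp_span_entry entries[] __counted_by(entries_count);
  };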
====================

Link: https://lore.kernel.org/r/20230929180611.work.870-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
mlxsw: spectrum_span: Annotate struct mlxsw_sp_span with __counted_by
Kees Cook [Fri, 29 Sep 2023 18:07:44 +0000 (11:07 -0700)]
mlxsw: spectrum_span: Annotate struct mlxsw_sp_span with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_span.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Petr Machata <petrm@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20230929180746.3005922-5-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
mlxsw: spectrum_router: Annotate struct mlxsw_sp_nexthop_group_info with __counted_by
Kees Cook [Fri, 29 Sep 2023 18:07:43 +0000 (11:07 -0700)]
mlxsw: spectrum_router: Annotate struct mlxsw_sp_nexthop_group_info with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_nexthop_group_info.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Petr Machata <petrm@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20230929180746.3005922-4-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
mlxsw: spectrum: Annotate struct mlxsw_sp_counter_pool with __counted_by
Kees Cook [Fri, 29 Sep 2023 18:07:42 +0000 (11:07 -0700)]
mlxsw: spectrum: Annotate struct mlxsw_sp_counter_pool with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_counter_pool.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Petr Machata <petrm@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20230929180746.3005922-3-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
mlxsw: core: Annotate struct mlxsw_env with __counted_by
Kees Cook [Fri, 29 Sep 2023 18:07:41 +0000 (11:07 -0700)]
mlxsw: core: Annotate struct mlxsw_env with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mlxsw_env.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Petr Machata <petrm@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20230929180746.3005922-2-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
mlxsw: Annotate struct mlxsw_linecards with __counted_by
Kees Cook [Fri, 29 Sep 2023 18:07:40 +0000 (11:07 -0700)]
mlxsw: Annotate struct mlxsw_linecards with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mlxsw_linecards.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Petr Machata <petrm@nvidia.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://lore.kernel.org/r/20230929180746.3005922-1-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge branch 'batch-1-annotate-structs-with-__counted_by'
Jakub Kicinski [Mon, 2 Oct 2023 18:25:03 +0000 (11:25 -0700)]
Merge branch 'batch-1-annotate-structs-with-__counted_by'

Kees Cook says:

====================
Batch 1: Annotate structs with __counted_by

This is batch 1 of the patches touching netdev, preparing for
the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by to structs that would
benefit from the annotation.

Since the element count member must be set before accessing the annotated
flexible array member, some patches also move the member's initialization
earlier. (These are noted in the individual patches.)

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci
====================

Link: https://lore.kernel.org/r/20230922172449.work.906-kees@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: tulip: Annotate struct mediatable with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:55 +0000 (10:28 -0700)]
net: tulip: Annotate struct mediatable with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mediatable.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-13-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: openvswitch: Annotate struct dp_meter with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:54 +0000 (10:28 -0700)]
net: openvswitch: Annotate struct dp_meter with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct dp_meter.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Pravin B Shelar <pshelar@ovn.org>
Cc: dev@openvswitch.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-12-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: enetc: Annotate struct enetc_psfp_gate with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:53 +0000 (10:28 -0700)]
net: enetc: Annotate struct enetc_psfp_gate with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time checking via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct enetc_psfp_gate.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Claudiu Manoil <claudiu.manoil@nxp.com>
Cc: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-11-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: openvswitch: Annotate struct dp_meter_instance with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:52 +0000 (10:28 -0700)]
net: openvswitch: Annotate struct dp_meter_instance with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct dp_meter_instance.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Pravin B Shelar <pshelar@ovn.org>
Cc: dev@openvswitch.org
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-10-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: mana: Annotate struct hwc_dma_buf with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:51 +0000 (10:28 -0700)]
net: mana: Annotate struct hwc_dma_buf with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct hwc_dma_buf.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Long Li <longli@microsoft.com>
Cc: Ajay Sharma <sharmaajay@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-9-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: ipa: Annotate struct ipa_power with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:50 +0000 (10:28 -0700)]
net: ipa: Annotate struct ipa_power with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct ipa_power.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-8-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: mana: Annotate struct mana_rxq with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:49 +0000 (10:28 -0700)]
net: mana: Annotate struct mana_rxq with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct mana_rxq.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: Long Li <longli@microsoft.com>
Cc: Ajay Sharma <sharmaajay@microsoft.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-7-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: hisilicon: Annotate struct rcb_common_cb with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:48 +0000 (10:28 -0700)]
net: hisilicon: Annotate struct rcb_common_cb with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct rcb_common_cb.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-6-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: enetc: Annotate struct enetc_int_vector with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:47 +0000 (10:28 -0700)]
net: enetc: Annotate struct enetc_int_vector with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct enetc_int_vector.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Claudiu Manoil <claudiu.manoil@nxp.com>
Cc: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-5-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: hns: Annotate struct ppe_common_cb with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:46 +0000 (10:28 -0700)]
net: hns: Annotate struct ppe_common_cb with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct ppe_common_cb.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Yisen Zhuang <yisen.zhuang@huawei.com>
Cc: Salil Mehta <salil.mehta@huawei.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-4-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ipv6: Annotate struct ip6_sf_socklist with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:45 +0000 (10:28 -0700)]
ipv6: Annotate struct ip6_sf_socklist with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct ip6_sf_socklist.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-3-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ipv4/igmp: Annotate struct ip_sf_socklist with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:44 +0000 (10:28 -0700)]
ipv4/igmp: Annotate struct ip_sf_socklist with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct ip_sf_socklist.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: Martin KaFai Lau <martin.lau@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-2-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
ipv4: Annotate struct fib_info with __counted_by
Kees Cook [Fri, 22 Sep 2023 17:28:43 +0000 (10:28 -0700)]
ipv4: Annotate struct fib_info with __counted_by

Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).

As found with Coccinelle[1], add __counted_by for struct fib_info.

[1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

Cc: David Ahern <dsahern@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20230922172858.3822653-1-keescook@chromium.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Merge branch 'mlxsw-next'
David S. Miller [Mon, 2 Oct 2023 07:07:13 +0000 (08:07 +0100)]
Merge branch 'mlxsw-next'

Petr Machata says:

====================
mlxsw: Provide enhancements and new feature

Vadim Pasternak writes:

Patch #1 - Optimize transaction size for efficient retrieval of module
           data.
Patch #3 - Enable thermal zone binding with a new cooling device.
Patch #4 - Employ standard macros for dividing a buffer into chunks.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: i2c: Utilize standard macros for dividing buffer into chunks
Vadim Pasternak [Fri, 22 Sep 2023 17:18:38 +0000 (19:18 +0200)]
mlxsw: i2c: Utilize standard macros for dividing buffer into chunks

Use the standard macro DIV_ROUND_UP() to determine the number of chunks
required for a given buffer.
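
I.e. (sketch):

  /* before: open-coded rounding up */
  num_of_chunks = size / chunk_size + !!(size % chunk_size);
  /* after */
  num_of_chunks = DIV_ROUND_UP(size, chunk_size);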

Signed-off-by: Vadim Pasternak <vadimp@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: core: Extend allowed list of external cooling devices for thermal zone binding
Vadim Pasternak [Fri, 22 Sep 2023 17:18:37 +0000 (19:18 +0200)]
mlxsw: core: Extend allowed list of external cooling devices for thermal zone binding

Extend the list of allowed external cooling devices for thermal zone
binding to include devices of type "emc2305".

The motivation is to provide support for the system SN2201, which is
equipped with the Spectrum-1 ASIC.
The system's airflow control is managed by the EMC2305 RPM-based PWM
Fan Speed Controller as the cooling device.
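
A sketch of the change (assuming the driver's existing allow-list array):

  static const char * const mlxsw_thermal_external_allowed_cdev[] = {
          "mlxreg_fan",
          "emc2305",
  };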

Signed-off-by: Vadim Pasternak <vadimp@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: reg: Limit MTBR register payload to a single data record
Vadim Pasternak [Fri, 22 Sep 2023 17:18:36 +0000 (19:18 +0200)]
mlxsw: reg: Limit MTBR register payload to a single data record

The MTBR register is used to read temperatures from multiple sensors in
one transaction, but the driver only reads from a single sensor in each
transaction.

Restrict the payload size of the MTBR register to prevent the
transmission of redundant data to the firmware.
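
A sketch of the payload sizing (macro names follow the mlxsw reg.h
conventions and are an assumption):

  #define MLXSW_REG_MTBR_REC_MAX_COUNT 1  /* one temperature record */
  #define MLXSW_REG_MTBR_LEN (MLXSW_REG_MTBR_BASE_LEN + \
                              MLXSW_REG_MTBR_REC_LEN *  \
                              MLXSW_REG_MTBR_REC_MAX_COUNT)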

Signed-off-by: Vadim Pasternak <vadimp@nvidia.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Signed-off-by: Petr Machata <petrm@nvidia.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'inet-more-data-race-fixes'
David S. Miller [Sun, 1 Oct 2023 18:39:19 +0000 (19:39 +0100)]
Merge branch 'inet-more-data-race-fixes'

Eric Dumazet says:

====================
inet: more data-race fixes

This series fixes some existing data-races on inet fields:

inet->mc_ttl, inet->pmtudisc, inet->tos, inet->uc_index,
inet->mc_index and inet->mc_addr.

While fixing them, we convert eight socket options
to lockless implementation.

v2: addressed David Ahern's feedback on ("inet: implement lockless IP_TOS")
    Added David's Reviewed-by: tag on the other patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
inet: implement lockless getsockopt(IP_MULTICAST_IF)
Eric Dumazet [Fri, 22 Sep 2023 03:42:21 +0000 (03:42 +0000)]
inet: implement lockless getsockopt(IP_MULTICAST_IF)

Add missing annotations to inet->mc_index and inet->mc_addr
to fix data-races.

getsockopt(IP_MULTICAST_IF) can be lockless.
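
A sketch of the annotated reads (surrounding getsockopt() code
abridged):

  /* no lock_sock() needed */
  addr.s_addr = READ_ONCE(inet->mc_addr);
  index = READ_ONCE(inet->mc_index);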

setsockopt() side is left for later.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: lockless IP_PKTOPTIONS implementation
Eric Dumazet [Fri, 22 Sep 2023 03:42:20 +0000 (03:42 +0000)]
inet: lockless IP_PKTOPTIONS implementation

The current implementation is already lockless, because the socket
lock is released before reading socket fields.

Add missing READ_ONCE() annotations.

Note that corresponding WRITE_ONCE() annotations are needed; the order
of the patches does not really matter.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: implement lockless getsockopt(IP_UNICAST_IF)
Eric Dumazet [Fri, 22 Sep 2023 03:42:19 +0000 (03:42 +0000)]
inet: implement lockless getsockopt(IP_UNICAST_IF)

Add missing READ_ONCE() annotations when reading inet->uc_index.

Implementing getsockopt(IP_UNICAST_IF) locklessly seems possible;
the setsockopt() part might not be possible at the moment.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: lockless getsockopt(IP_MTU)
Eric Dumazet [Fri, 22 Sep 2023 03:42:18 +0000 (03:42 +0000)]
inet: lockless getsockopt(IP_MTU)

sk_dst_get() does not require socket lock.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: lockless getsockopt(IP_OPTIONS)
Eric Dumazet [Fri, 22 Sep 2023 03:42:17 +0000 (03:42 +0000)]
inet: lockless getsockopt(IP_OPTIONS)

inet->inet_opt being RCU protected, we can use RCU instead
of locking the socket.
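
A sketch of the pattern:

  rcu_read_lock();
  inet_opt = rcu_dereference(inet->inet_opt);
  if (inet_opt)
          memcpy(optbuf, &inet_opt->opt,
                 sizeof(struct ip_options) + inet_opt->opt.optlen);
  rcu_read_unlock();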

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: implement lockless IP_TOS
Eric Dumazet [Fri, 22 Sep 2023 03:42:16 +0000 (03:42 +0000)]
inet: implement lockless IP_TOS

Some reads of inet->tos are racy.

Add the needed READ_ONCE() annotations and convert the IP_TOS option to
a lockless implementation.

v2: missing changes in include/net/route.h (David Ahern)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: implement lockless IP_MTU_DISCOVER
Eric Dumazet [Fri, 22 Sep 2023 03:42:15 +0000 (03:42 +0000)]
inet: implement lockless IP_MTU_DISCOVER

inet->pmtudisc can be read locklessly.

Implement proper lockless reads and writes to inet->pmtudisc.

ip_sock_set_mtu_discover() can now be called from arbitrary
contexts.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet: implement lockless IP_MULTICAST_TTL
Eric Dumazet [Fri, 22 Sep 2023 03:42:14 +0000 (03:42 +0000)]
inet: implement lockless IP_MULTICAST_TTL

inet->mc_ttl can be read locklessly.

Implement proper lockless reads and writes to inet->mc_ttl.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'socket-option-lockless'
David S. Miller [Sun, 1 Oct 2023 18:09:55 +0000 (19:09 +0100)]
Merge branch 'socket-option-lockless'

Eric Dumazet says:

====================
net: more data-races fixes and lockless socket options

This is yet another round of data-race fixes
and lockless socket options.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net: annotate data-races around sk->sk_dst_pending_confirm
Eric Dumazet [Thu, 21 Sep 2023 20:28:18 +0000 (20:28 +0000)]
net: annotate data-races around sk->sk_dst_pending_confirm

This field can be read or written without the socket lock being held.

Add annotations to avoid load-store tearing.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: annotate data-races around sk->sk_tx_queue_mapping
Eric Dumazet [Thu, 21 Sep 2023 20:28:17 +0000 (20:28 +0000)]
net: annotate data-races around sk->sk_tx_queue_mapping

This field can be read or written without the socket lock being held.

Add annotations to avoid load-store tearing.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: lockless implementation of SO_TXREHASH
Eric Dumazet [Thu, 21 Sep 2023 20:28:16 +0000 (20:28 +0000)]
net: lockless implementation of SO_TXREHASH

sk->sk_txrehash readers are already safe against
concurrent change of this field.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: implement lockless SO_MAX_PACING_RATE
Eric Dumazet [Thu, 21 Sep 2023 20:28:15 +0000 (20:28 +0000)]
net: implement lockless SO_MAX_PACING_RATE

SO_MAX_PACING_RATE setsockopt() does not need to hold
the socket lock, because sk->sk_pacing_rate readers
can run fine if the value is changed by other threads,
after adding READ_ONCE() accessors.
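
I.e. (sketch):

  /* setsockopt(SO_MAX_PACING_RATE): no lock_sock() around the store */
  WRITE_ONCE(sk->sk_max_pacing_rate, val);
  /* readers tolerate concurrent updates */
  rate = READ_ONCE(sk->sk_pacing_rate);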

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: lockless implementation of SO_BUSY_POLL, SO_PREFER_BUSY_POLL, SO_BUSY_POLL_BUDGET
Eric Dumazet [Thu, 21 Sep 2023 20:28:14 +0000 (20:28 +0000)]
net: lockless implementation of SO_BUSY_POLL, SO_PREFER_BUSY_POLL, SO_BUSY_POLL_BUDGET

Setting sk->sk_ll_usec, sk_prefer_busy_poll and sk_busy_poll_budget
does not require the socket lock; readers are lockless anyway.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: lockless SO_{TYPE|PROTOCOL|DOMAIN|ERROR} setsockopt()
Eric Dumazet [Thu, 21 Sep 2023 20:28:13 +0000 (20:28 +0000)]
net: lockless SO_{TYPE|PROTOCOL|DOMAIN|ERROR} setsockopt()

These options cannot be set and return -ENOPROTOOPT; there is
no need to acquire the socket lock.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: lockless SO_PASSCRED, SO_PASSPIDFD and SO_PASSSEC
Eric Dumazet [Thu, 21 Sep 2023 20:28:12 +0000 (20:28 +0000)]
net: lockless SO_PASSCRED, SO_PASSPIDFD and SO_PASSSEC

Accesses to sock->flags are atomic, so there is no need to hold the socket
lock in sk_setsockopt() for SO_PASSCRED, SO_PASSPIDFD and SO_PASSSEC.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: implement lockless SO_PRIORITY
Eric Dumazet [Thu, 21 Sep 2023 20:28:11 +0000 (20:28 +0000)]
net: implement lockless SO_PRIORITY

This is a followup of 8bf43be799d4 ("net: annotate data-races
around sk->sk_priority").

sk->sk_priority can be read and written without holding the socket lock.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Wenjia Zhang <wenjia@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
openvswitch: reduce stack usage in do_execute_actions
Ilya Maximets [Thu, 21 Sep 2023 19:42:35 +0000 (21:42 +0200)]
openvswitch: reduce stack usage in do_execute_actions

The do_execute_actions() function can be called recursively multiple
times while executing actions that require pipeline forking or
recirculations.  It may also be re-entered multiple times if the packet
leaves the openvswitch module and re-enters it through a different port.

Currently, there is a 256-byte array allocated on stack in this
function that is supposed to hold NSH header.  Compilers tend to
pre-allocate that space right at the beginning of the function:

     a88:       48 81 ec b0 01 00 00    sub    $0x1b0,%rsp

NSH is not a very common protocol, but the space is allocated on every
recursive call or re-entry, multiplying the wasted stack space.

Move the stack allocation to the push_nsh() function, which is only used
if NSH actions are actually present.  push_nsh() is also a simple
function with no possibility of re-entry, so the stack space is returned
right away.
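
A sketch of the move (parameters abridged):

  static int push_nsh(struct sk_buff *skb, struct sw_flow_key *key,
                      const struct nlattr *attr)
  {
          u8 buf[NSH_HDR_MAX_LEN];        /* 256 B, no longer in
                                           * do_execute_actions() */
          /* ... build and push the NSH header ... */
  }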

With this change the preallocated space is reduced by 256 B per call:

     b18:       48 81 ec b0 00 00 00    sub    $0xb0,%rsp

Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eelco Chaudron <echaudro@redhat.com>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge branch 'dev-stats-virtio-l2tp_eth'
David S. Miller [Sun, 1 Oct 2023 15:33:01 +0000 (16:33 +0100)]
Merge branch 'dev-stats-virtio-l2tp_eth'

Eric Dumazet says:

====================
net: use DEV_STATS_xxx() helpers in virtio_net and l2tp_eth

Inspired by another (minor) KCSAN syzbot report.
Both virtio_net and l2tp_eth can use DEV_STATS_xxx() helpers.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
net: l2tp_eth: use generic dev->stats fields
Eric Dumazet [Thu, 21 Sep 2023 08:52:18 +0000 (08:52 +0000)]
net: l2tp_eth: use generic dev->stats fields

Core networking has an opt-in atomic variant of dev->stats;
simply use DEV_STATS_INC(), DEV_STATS_ADD() and DEV_STATS_READ().

v2: removed @priv local var in l2tp_eth_dev_recv() (Simon)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
virtio_net: avoid data-races on dev->stats fields
Eric Dumazet [Thu, 21 Sep 2023 08:52:17 +0000 (08:52 +0000)]
virtio_net: avoid data-races on dev->stats fields

Use DEV_STATS_INC() and DEV_STATS_READ() which provide
atomicity on paths that can be used concurrently.

Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet: add DEV_STATS_READ() helper
Eric Dumazet [Thu, 21 Sep 2023 08:52:16 +0000 (08:52 +0000)]
net: add DEV_STATS_READ() helper

Companion of DEV_STATS_INC() & DEV_STATS_ADD().

This is going to be used in the series.

Use it in macsec_get_stats64().
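
The helpers wrap the atomic_long_t view of the dev->stats fields;
roughly (see include/linux/netdevice.h for the authoritative
definitions):

    #define DEV_STATS_INC(DEV, FIELD) \
            atomic_long_inc(&(DEV)->stats.__##FIELD)
    #define DEV_STATS_ADD(DEV, FIELD, VAL) \
            atomic_long_add((VAL), &(DEV)->stats.__##FIELD)
    #define DEV_STATS_READ(DEV, FIELD) \
            atomic_long_read(&(DEV)->stats.__##FIELD)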

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agoocteontx2-pf: Tc flower offload support for MPLS
Hariprasad Kelam [Thu, 21 Sep 2023 08:50:55 +0000 (14:20 +0530)]
octeontx2-pf: Tc flower offload support for MPLS

This patch extends flower offload support to the MPLS protocol.
Due to a hardware limitation, the driver currently supports an
LSE (label stack entry) depth of up to 4.
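
As an illustration, a matching filter could look like this (interface
name and values are examples only):

    tc filter add dev eth0 ingress protocol mpls_uc flower \
            mpls lse depth 2 label 100 action drop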

Signed-off-by: Hariprasad Kelam <hkelam@marvell.com>
Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet: ethernet: xilinx: Drop kernel doc comment about return value
Uwe Kleine-König [Thu, 21 Sep 2023 06:35:01 +0000 (08:35 +0200)]
net: ethernet: xilinx: Drop kernel doc comment about return value

During review of the patch that became 2e0ec0afa902 ("net: ethernet:
xilinx: Convert to platform remove callback returning void") in
net-next, Radhey Shyam Pandey pointed out that the change makes the
documentation about the return value obsolete. The patch was applied
without addressing this feedback, so here comes a fix in a separate
patch.
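
The problem, sketched on a generic remove callback (names illustrative):

    /**
     * foo_remove - unregister and clean up the device
     * @pdev: platform device
     *
     * Return: Always returns 0.  <-- stale: the callback is void now
     */
    static void foo_remove(struct platform_device *pdev)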

Fixes: 2e0ec0afa902 ("net: ethernet: xilinx: Convert to platform remove callback returning void")
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet: atl1c: switch to napi_consume_skb()
Sieng-Piaw Liew [Thu, 21 Sep 2023 00:56:23 +0000 (08:56 +0800)]
net: atl1c: switch to napi_consume_skb()

Switch to napi_consume_skb() to take advantage of bulk freeing, and of
skb reuse through the skb cache in conjunction with napi_build_skb().

When the 'budget' parameter is 0, indicating a non-NAPI context,
dev_consume_skb_any() is called internally.
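
A sketch of the call site (variable names illustrative):

    /* TX completion, called from the NAPI poll function: */
    napi_consume_skb(skb, napi_budget);

    /* With budget == 0 (e.g. netpoll), napi_consume_skb() falls back
     * to dev_consume_skb_any(), so the same call remains safe. */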

Signed-off-by: Sieng-Piaw Liew <liew.s.piaw@gmail.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agoMerge branch 'sch_fq-improvements'
David S. Miller [Sun, 1 Oct 2023 12:20:36 +0000 (13:20 +0100)]
Merge branch 'sch_fq-improvements'

Eric Dumazet says:

====================
net_sched: sch_fq: round of improvements

For FQ's tenth anniversary, it was time to make it faster.

The FQ part (as in Fair Queue) is rather expensive, because
we have to classify packets and store them in a per-flow structure,
and add this per-flow structure to a hash table. The RR lists
then add further cache line misses.

Most fq qdiscs are almost idle. Trying to share NIC bandwidth has
no benefit in that case, so the qdisc can behave like a FIFO.

This series brings a 5 % throughput increase in an intensive
tcp_rr workload, and a 13 % increase for (unpaced) UDP packets.

v2: removed an extra label (build bot).
    Fixed an accidental increase of the stat_internal_packets counter
    in the fast path.
    Added the "constify qdisc_priv()" patch to allow fq_fastpath_check()'s
    first parameter to be const.
    Fixed a typo in 'eligible' (Willem)
====================

Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet_sched: sch_fq: always garbage collect
Eric Dumazet [Wed, 20 Sep 2023 20:17:15 +0000 (20:17 +0000)]
net_sched: sch_fq: always garbage collect

FQ performs garbage collection at enqueue time, and only
if the number of flows is above a given threshold, which
is only hit after the qdisc has been used for a while.

Since an RB-tree traversal is needed to locate a flow anyway,
it makes sense to perform gc all the time, to keep the
rb-trees smaller.

This reduces average storage costs in FQ by 50 %,
and avoids one cache line miss at enqueue time when the
fast path added in the prior patch can not be used.
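
A sketch of the change in the classify path (the "before" condition is
as in the pre-patch sch_fq.c, simplified):

    /* before: gc only when the qdisc is already well populated */
    if (q->flows >= (2U << q->fq_trees_log) &&
        q->inactive_flows > q->flows / 2)
            fq_gc(q, root, sk);

    /* after: the RB-tree walk is paid anyway, so gc unconditionally */
    fq_gc(q, root, sk);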

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet_sched: sch_fq: add fast path for mostly idle qdisc
Eric Dumazet [Wed, 20 Sep 2023 20:17:14 +0000 (20:17 +0000)]
net_sched: sch_fq: add fast path for mostly idle qdisc

TCQ_F_CAN_BYPASS can be used by a few qdiscs.

The idea is that if we queue a packet to an empty qdisc,
the following dequeue() will pick it up immediately.

FQ can not use the generic TCQ_F_CAN_BYPASS code,
because some additional checks need to be performed.

This patch adds a similar fast path to FQ.

Most of the time, the qdisc is not throttled,
and many packets can avoid bringing/touching
at least four cache lines, and consuming 128 bytes
of memory to store the state of a flow.

After this patch, netperf can send UDP packets about 13 % faster,
and pktgen goes 30 % faster (when FQ is in the way), on a fast NIC.

TCP traffic is also improved, thanks to a reduction in cache line misses.
I have measured a 5 % throughput increase on a tcp_rr-intensive workload.

tc -s -d qd sh dev eth1
...
qdisc fq 8004: parent 1:2 limit 10000p flow_limit 100p buckets 1024
   orphan_mask 1023 quantum 3028b initial_quantum 15140b low_rate_threshold 550Kbit
   refill_delay 40ms timer_slack 10us horizon 10s horizon_drop
 Sent 5646784384 bytes 1985161 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  flows 122 (inactive 122 throttled 0)
  gc 0 highprio 0 fastpath 659990 throttled 27762 latency 8.57us
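
An illustrative sketch of the gate (heavily simplified; the real
fq_fastpath_check() performs more checks):

    static bool fq_fastpath_check(const struct Qdisc *sch,
                                  struct sk_buff *skb)
    {
            const struct fq_sched_data *q = qdisc_priv(sch);

            if (sch->q.qlen != 0)   /* qdisc must be empty */
                    return false;
            /* pacing/throttling/ownership checks elided */
            return true;
    }

Note the const qualifiers, which is why the series also constifies
qdisc_priv().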

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet_sched: sch_fq: change how @inactive is tracked
Eric Dumazet [Wed, 20 Sep 2023 20:17:13 +0000 (20:17 +0000)]
net_sched: sch_fq: change how @inactive is tracked

Currently, when an fq qdisc has no more packets to send, it can still
have some flows stored in its RR lists (q->new_flows & q->old_flows).

This was a design choice, but it is a bit disturbing that
the inactive_flows counter does not include the count of empty flows
in the RR lists.

As the next patch needs to know precisely whether there are active flows,
this change makes inactive_flows exact.

Before the patch, the following command on an empty qdisc could have returned:

lpaa17:~# tc -s -d qd sh dev eth1 | grep inactive
  flows 1322 (inactive 1316 throttled 0)
  flows 1330 (inactive 1325 throttled 0)
  flows 1193 (inactive 1190 throttled 0)
  flows 1208 (inactive 1202 throttled 0)

After the patch, we now have:

lpaa17:~# tc -s -d qd sh dev eth1 | grep inactive
  flows 1322 (inactive 1322 throttled 0)
  flows 1330 (inactive 1330 throttled 0)
  flows 1193 (inactive 1193 throttled 0)
  flows 1208 (inactive 1208 throttled 0)

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet_sched: sch_fq: struct sched_data reorg
Eric Dumazet [Wed, 20 Sep 2023 20:17:12 +0000 (20:17 +0000)]
net_sched: sch_fq: struct sched_data reorg

q->flows is often modified, while q->timer_slack is read-mostly.

Exchange the two fields, so that the cache line containing
quantum, initial_quantum, and other critical parameters
stays clean (read-mostly).

Move q->watchdog next to q->stat_throttled.

Add comments explaining how the structure is split into
three different parts.

pahole output before the patch:

struct fq_sched_data {
struct fq_flow_head        new_flows;            /*     0  0x10 */
struct fq_flow_head        old_flows;            /*  0x10  0x10 */
struct rb_root             delayed;              /*  0x20   0x8 */
u64                        time_next_delayed_flow; /*  0x28   0x8 */
u64                        ktime_cache;          /*  0x30   0x8 */
unsigned long              unthrottle_latency_ns; /*  0x38   0x8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct fq_flow             internal __attribute__((__aligned__(64))); /*  0x40  0x80 */

/* XXX last struct has 16 bytes of padding */

/* --- cacheline 3 boundary (192 bytes) --- */
u32                        quantum;              /*  0xc0   0x4 */
u32                        initial_quantum;      /*  0xc4   0x4 */
u32                        flow_refill_delay;    /*  0xc8   0x4 */
u32                        flow_plimit;          /*  0xcc   0x4 */
unsigned long              flow_max_rate;        /*  0xd0   0x8 */
u64                        ce_threshold;         /*  0xd8   0x8 */
u64                        horizon;              /*  0xe0   0x8 */
u32                        orphan_mask;          /*  0xe8   0x4 */
u32                        low_rate_threshold;   /*  0xec   0x4 */
struct rb_root *           fq_root;              /*  0xf0   0x8 */
u8                         rate_enable;          /*  0xf8   0x1 */
u8                         fq_trees_log;         /*  0xf9   0x1 */
u8                         horizon_drop;         /*  0xfa   0x1 */

/* XXX 1 byte hole, try to pack */

<bad> u32                        flows;                /*  0xfc   0x4 */
/* --- cacheline 4 boundary (256 bytes) --- */
u32                        inactive_flows;       /* 0x100   0x4 */
u32                        throttled_flows;      /* 0x104   0x4 */
u64                        stat_gc_flows;        /* 0x108   0x8 */
u64                        stat_internal_packets; /* 0x110   0x8 */
u64                        stat_throttled;       /* 0x118   0x8 */
u64                        stat_ce_mark;         /* 0x120   0x8 */
u64                        stat_horizon_drops;   /* 0x128   0x8 */
u64                        stat_horizon_caps;    /* 0x130   0x8 */
u64                        stat_flows_plimit;    /* 0x138   0x8 */
/* --- cacheline 5 boundary (320 bytes) --- */
u64                        stat_pkts_too_long;   /* 0x140   0x8 */
u64                        stat_allocation_errors; /* 0x148   0x8 */
<bad> u32                        timer_slack;          /* 0x150   0x4 */

/* XXX 4 bytes hole, try to pack */

struct qdisc_watchdog      watchdog;             /* 0x158  0x48 */

/* size: 448, cachelines: 7, members: 34 */
/* sum members: 411, holes: 2, sum holes: 5 */
/* padding: 32 */
/* paddings: 1, sum paddings: 16 */
/* forced alignments: 1 */
};

pahole output after the patch:

struct fq_sched_data {
struct fq_flow_head        new_flows;            /*     0  0x10 */
struct fq_flow_head        old_flows;            /*  0x10  0x10 */
struct rb_root             delayed;              /*  0x20   0x8 */
u64                        time_next_delayed_flow; /*  0x28   0x8 */
u64                        ktime_cache;          /*  0x30   0x8 */
unsigned long              unthrottle_latency_ns; /*  0x38   0x8 */
/* --- cacheline 1 boundary (64 bytes) --- */
struct fq_flow             internal __attribute__((__aligned__(64))); /*  0x40  0x80 */

/* XXX last struct has 16 bytes of padding */

/* --- cacheline 3 boundary (192 bytes) --- */
u32                        quantum;              /*  0xc0   0x4 */
u32                        initial_quantum;      /*  0xc4   0x4 */
u32                        flow_refill_delay;    /*  0xc8   0x4 */
u32                        flow_plimit;          /*  0xcc   0x4 */
unsigned long              flow_max_rate;        /*  0xd0   0x8 */
u64                        ce_threshold;         /*  0xd8   0x8 */
u64                        horizon;              /*  0xe0   0x8 */
u32                        orphan_mask;          /*  0xe8   0x4 */
u32                        low_rate_threshold;   /*  0xec   0x4 */
struct rb_root *           fq_root;              /*  0xf0   0x8 */
u8                         rate_enable;          /*  0xf8   0x1 */
u8                         fq_trees_log;         /*  0xf9   0x1 */
u8                         horizon_drop;         /*  0xfa   0x1 */

/* XXX 1 byte hole, try to pack */

<good> u32                        timer_slack;          /*  0xfc   0x4 */
/* --- cacheline 4 boundary (256 bytes) --- */
<good> u32                        flows;                /* 0x100   0x4 */
u32                        inactive_flows;       /* 0x104   0x4 */
u32                        throttled_flows;      /* 0x108   0x4 */

/* XXX 4 bytes hole, try to pack */

u64                        stat_throttled;       /* 0x110   0x8 */
<better> struct qdisc_watchdog     watchdog;             /* 0x118  0x48 */
/* --- cacheline 5 boundary (320 bytes) was 32 bytes ago --- */
u64                        stat_gc_flows;        /* 0x160   0x8 */
u64                        stat_internal_packets; /* 0x168   0x8 */
u64                        stat_ce_mark;         /* 0x170   0x8 */
u64                        stat_horizon_drops;   /* 0x178   0x8 */
/* --- cacheline 6 boundary (384 bytes) --- */
u64                        stat_horizon_caps;    /* 0x180   0x8 */
u64                        stat_flows_plimit;    /* 0x188   0x8 */
u64                        stat_pkts_too_long;   /* 0x190   0x8 */
u64                        stat_allocation_errors; /* 0x198   0x8 */

/* Force padding: */
u64                        :64;
u64                        :64;
u64                        :64;
u64                        :64;

/* size: 448, cachelines: 7, members: 34 */
/* sum members: 411, holes: 2, sum holes: 5 */
/* padding: 32 */
/* paddings: 1, sum paddings: 16 */
/* forced alignments: 1 */
};
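
The intent of the reorg, as a skeleton (illustrative; see sch_fq.c for
the real layout and comments):

    struct fq_sched_data {
            /* 1) fields used by the enqueue/dequeue fast paths */
            ...
            /* 2) read-mostly parameters: quantum, initial_quantum,
             *    ..., timer_slack moved here so this region is never
             *    dirtied in the fast path */
            ...
            /* 3) write-mostly fields: flows moved here, watchdog next
             *    to stat_throttled, then the remaining counters */
            ...
    };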

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet_sched: constify qdisc_priv()
Eric Dumazet [Wed, 20 Sep 2023 20:17:11 +0000 (20:17 +0000)]
net_sched: constify qdisc_priv()

In order to propagate const qualifiers, we change qdisc_priv()
to accept a possibly const argument.
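
One way to express such a const-propagating helper in C11 is a _Generic
dispatch (a sketch of the technique, not necessarily the exact patch):

    #define qdisc_priv(q)                                              \
            _Generic((q),                                              \
                    const struct Qdisc * : (const void *)&(q)->privdata, \
                    struct Qdisc *       : (void *)&(q)->privdata)

A const struct Qdisc * caller then gets a const pointer back, while
non-const callers are unaffected.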

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agoMerge branch 'tcp_delack_max'
David S. Miller [Sun, 1 Oct 2023 12:13:01 +0000 (13:13 +0100)]
Merge branch 'tcp_delack_max'

Eric Dumazet says:

====================
tcp: add tcp_delack_max()

The first patches add const qualifiers to four existing helpers.

The third patch adds a much-needed companion feature to RTAX_RTO_MIN.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agotcp: derive delack_max from rto_min
Eric Dumazet [Wed, 20 Sep 2023 17:29:43 +0000 (17:29 +0000)]
tcp: derive delack_max from rto_min

While BPF allows setting icsk->icsk_delack_max
and/or icsk->icsk_rto_min, we have an ip route
attribute (RTAX_RTO_MIN) to tune rto_min,
but nothing to consequently adjust the max delayed ack,
which varies from 40 ms to 200 ms (TCP_DELACK_{MIN|MAX}).

This leaves RTAX_RTO_MIN with almost no practical use,
unless customers are in big trouble.

Modern datacenter communications want to set
rto_min to ~5 ms, and the max delayed ack one jiffy
smaller to avoid spurious retransmits.

After this patch, an "rto_min 5" route attribute will
effectively lower max delayed ack timers to 4 ms.

Note in the following ss output, "rto:6 ... ato:4"

$ ss -temoi dst XXXXXX
State Recv-Q Send-Q           Local Address:Port       Peer Address:Port  Process
ESTAB 0      0        [2002:a05:6608:295::]:52950   [2002:a05:6608:297::]:41597
     ino:255134 sk:1001 <->
         skmem:(r0,rb1707063,t872,tb262144,f0,w0,o0,bl0,d0) ts sack
 cubic wscale:8,8 rto:6 rtt:0.02/0.002 ato:4 mss:4096 pmtu:4500
 rcvmss:536 advmss:4096 cwnd:10 bytes_sent:54823160 bytes_acked:54823121
 bytes_received:54823120 segs_out:1370582 segs_in:1370580
 data_segs_out:1370579 data_segs_in:1370578 send 16.4Gbps
 pacing_rate 32.6Gbps delivery_rate 1.72Gbps delivered:1370579
 busy:26920ms unacked:1 rcv_rtt:34.615 rcv_space:65920
 rcv_ssthresh:65535 minrtt:0.015 snd_wnd:65536

While we could argue this patch fixes a bug with RTAX_RTO_MIN,
I do not add a Fixes: tag, so that we can soak it a bit before
asking backports to stable branches.
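
The derivation itself is short; a simplified sketch:

    static u32 tcp_delack_max(const struct sock *sk)
    {
            /* keep the delayed ACK one jiffy below rto_min, so the
             * ACK beats a retransmit timer armed with rto_min */
            u32 delack_from_rto_min = max_t(int, 1, tcp_rto_min(sk) - 1);

            return min(inet_csk(sk)->icsk_delack_max,
                       delack_from_rto_min);
    }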

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agotcp: constify tcp_rto_min() and tcp_rto_min_us() argument
Eric Dumazet [Wed, 20 Sep 2023 17:29:42 +0000 (17:29 +0000)]
tcp: constify tcp_rto_min() and tcp_rto_min_us() argument

Make clear that these functions do not change any field of the TCP socket.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agonet: constify sk_dst_get() and __sk_dst_get() argument
Eric Dumazet [Wed, 20 Sep 2023 17:29:41 +0000 (17:29 +0000)]
net: constify sk_dst_get() and __sk_dst_get() argument

Both helpers only read fields from their socket argument.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agoMerge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
David S. Miller [Sat, 30 Sep 2023 17:49:14 +0000 (18:49 +0100)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue

Tony Nguyen says:

====================
ice: add PTP auxiliary bus support

Michal Michalik says:

The auxiliary bus allows exchanging information between PFs, which
both helps fix problems and simplifies the implementation of new
features. The auxiliary bus is enabled for all devices supported by
the ice driver.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
11 months agopktgen: Introducing 'SHARED' flag for testing with non-shared skb
Liang Chen [Wed, 20 Sep 2023 12:56:58 +0000 (20:56 +0800)]
pktgen: Introducing 'SHARED' flag for testing with non-shared skb

Currently, skbs generated by pktgen always have their reference count
incremented before transmission, so the reference count is always
greater than 1, leading to two issues:
  1. Only the code paths for shared skbs can be tested.
  2. In certain situations, skbs can only be released by pktgen.
To enhance testing comprehensiveness, we are introducing the "SHARED"
flag to indicate whether an skb is shared. This flag is enabled by
default, matching the current behavior. However, disabling this
flag allows skbs with a reference count of 1 to be transmitted,
so we can test non-shared skbs and code paths where skbs are released
within the stack.
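
Assuming a device already assigned to a pktgen thread, the flag is
toggled like any other pktgen flag:

    echo "flag SHARED" > /proc/net/pktgen/eth0     # default behavior
    echo "flag !SHARED" > /proc/net/pktgen/eth0    # send with refcount 1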

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Benjamin Poirier <bpoirier@nvidia.com>
Link: https://lore.kernel.org/r/20230920125658.46978-2-liangchen.linux@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agopktgen: Automate flag enumeration for unknown flag handling
Liang Chen [Wed, 20 Sep 2023 12:56:57 +0000 (20:56 +0800)]
pktgen: Automate flag enumeration for unknown flag handling

When an unknown flag is specified, all available flags are printed.
Currently, these flags are provided as fixed strings, which requires
manual updates whenever the flags change. Replace this with automated
flag enumeration.

Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Signed-off-by: Benjamin Poirier <bpoirier@nvidia.com>
Link: https://lore.kernel.org/r/20230920125658.46978-1-liangchen.linux@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoMAINTAINERS: Add an obsolete entry for LL TEMAC driver
Harini Katakam [Wed, 20 Sep 2023 11:50:47 +0000 (17:20 +0530)]
MAINTAINERS: Add an obsolete entry for LL TEMAC driver

The LL TEMAC IP is no longer supported. Hence, add an entry marking the
driver as obsolete.
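
The stanza then carries the Obsolete status (abridged and illustrative;
see MAINTAINERS for the full entry):

    XILINX LL TEMAC ETHERNET DRIVER
    S:      Obsolete
    F:      drivers/net/ethernet/xilinx/ll_temac*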

Signed-off-by: Harini Katakam <harini.katakam@amd.com>
Link: https://lore.kernel.org/r/20230920115047.31345-1-harini.katakam@amd.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoMerge tag 'mlx5-updates-2023-09-19' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Paolo Abeni [Thu, 28 Sep 2023 13:45:25 +0000 (15:45 +0200)]
Merge tag 'mlx5-updates-2023-09-19' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
mlx5-updates-2023-09-19

Misc updates for mlx5 driver

1) From Erez, Add support for multicast forwarding to multi destination
   in bridge offloads with software steering mode (SMFS).

2) From Jianbo, Utilize the maximum aggregated link speed for police
   action rate.

3) From Moshe, Add a health error syndrome for pci data poisoned

4) From Shay, Enable 4 ports multiport E-switch

5) From Jiri, Trivial SF code cleanup

====================

Link: https://lore.kernel.org/r/20230920063552.296978-1-saeed@kernel.org
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoethernet/intel: Use list_for_each_entry() helper
Jinjie Ruan [Tue, 19 Sep 2023 17:04:09 +0000 (10:04 -0700)]
ethernet/intel: Use list_for_each_entry() helper

Convert list_for_each() to list_for_each_entry() where applicable.

No functional change.
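
The shape of the conversion (struct and list names illustrative):

    /* before: open-coded iterator plus list_entry() */
    struct list_head *pos;

    list_for_each(pos, &adapter->vlan_list) {
            struct vlan *v = list_entry(pos, struct vlan, node);
            /* use v */
    }

    /* after: list_for_each_entry() hides the container_of() step */
    struct vlan *v;

    list_for_each_entry(v, &adapter->vlan_list, node) {
            /* use v */
    }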

Signed-off-by: Jinjie Ruan <ruanjinjie@huawei.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Rafal Romanowski <rafal.romanowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/r/20230919170409.1581074-1-anthony.l.nguyen@intel.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoMerge branch 'selftests-tc-testing-parallel-tdc'
Paolo Abeni [Thu, 28 Sep 2023 07:51:09 +0000 (09:51 +0200)]
Merge branch 'selftests-tc-testing-parallel-tdc'

Pedro Tammela says:

====================
selftests/tc-testing: parallel tdc

As the number of tdc tests grows, so does our completion wall time.
One of the ideas to improve this is to run tests in parallel, as they
are self-contained.

This series allows tests to run in parallel, in batches of 32 tests.
Not all tests can run in parallel, as they might conflict with each other.
The code will still honor this requirement even when running the
tests over the worker pool.

In order to make this happen we had to localize the test resources
(patches 1 and 2): instead of having all tests share one single
namespace and veth devices, each test now gets its own local namespace
and devices.

Even though the tests serialize over rtnl_lock in the kernel, we
measured a speedup of about 3x in a test VM.
====================

Link: https://lore.kernel.org/r/20230919135404.1778595-1-pctammela@mojatatu.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoselftests/tc-testing: update tdc documentation
Pedro Tammela [Tue, 19 Sep 2023 13:54:04 +0000 (10:54 -0300)]
selftests/tc-testing: update tdc documentation

Update the documentation to reflect the changes made to tdc with regard
to minimal requirements and test definition expectations.

Tested-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
11 months agoselftests/tc-testing: implement tdc parallel test run
Pedro Tammela [Tue, 19 Sep 2023 13:54:03 +0000 (10:54 -0300)]
selftests/tc-testing: implement tdc parallel test run

Use a Python process pool to run the tests in parallel.
Not all tests can run in parallel, for instance tests that are not
namespaced and tests that use netdevsim, as they can conflict with one
another.

The code splits the tests into serial and parallel groups.
For the parallel tests, we build batches of 32 tests and queue each
batch on the process pool. The serial tests are queued as a
whole into the process pool, which in turn executes them concurrently
with the parallel tests.

Even though the tests serialize on rtnl_lock in the kernel, this feature
showed a ~3x speedup in wall time for the entire test suite
running in a VM:
   Before - 4m32.502s
   After - 1m19.202s

Examples:
   In order to run tdc using 4 processes:
      ./tdc.py -J4 <...>
   In order to run tdc using 1 process:
      ./tdc.py -J1 <...> || ./tdc.py <...>

Note that the kernel configuration will affect the speed of the tests,
especially if it slows down process creation and/or fork().

Tested-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: Pedro Tammela <pctammela@mojatatu.com>
Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>