linux-2.6-block.git
2 months ago  selftests/bpf: dummy_st_ops should reject 0 for non-nullable params
Eduard Zingerman [Wed, 24 Apr 2024 01:28:21 +0000 (18:28 -0700)]
selftests/bpf: dummy_st_ops should reject 0 for non-nullable params

Check that BPF_PROG_TEST_RUN for bpf_dummy_struct_ops programs
rejects execution if NULL is passed for a non-nullable parameter.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240424012821.595216-6-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: check bpf_dummy_struct_ops program params for test runs
Eduard Zingerman [Wed, 24 Apr 2024 01:28:20 +0000 (18:28 -0700)]
bpf: check bpf_dummy_struct_ops program params for test runs

When doing BPF_PROG_TEST_RUN for bpf_dummy_struct_ops programs,
reject execution when NULL is passed for non-nullable params.
For programs with non-nullable params, the verifier assumes that
such params are never NULL and thus might optimize out NULL checks.

Suggested-by: Kui-Feng Lee <sinquersw@gmail.com>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240424012821.595216-5-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: do not pass NULL for non-nullable params in dummy_st_ops
Eduard Zingerman [Wed, 24 Apr 2024 01:28:19 +0000 (18:28 -0700)]
selftests/bpf: do not pass NULL for non-nullable params in dummy_st_ops

dummy_st_ops.test_2 and dummy_st_ops.test_sleepable do not have their
'state' parameter marked as nullable. Update dummy_st_ops.c to avoid
passing NULL for such parameters, as the next patch will allow the
kernel to enforce this restriction.

Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240424012821.595216-4-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: adjust dummy_st_ops_success to detect additional error
Eduard Zingerman [Wed, 24 Apr 2024 01:28:18 +0000 (18:28 -0700)]
selftests/bpf: adjust dummy_st_ops_success to detect additional error

As reported by Jose E. Marchesi in off-list discussion, GCC and LLVM
generate slightly different code for dummy_st_ops_success/test_1():

  SEC("struct_ops/test_1")
  int BPF_PROG(test_1, struct bpf_dummy_ops_state *state)
  {
   int ret;

   if (!state)
   return 0xf2f3f4f5;

   ret = state->val;
   state->val = 0x5a;
   return ret;
  }

  GCC-generated                  LLVM-generated
  ----------------------------   ---------------------------
  0: r1 = *(u64 *)(r1 + 0x0)     0: w0 = -0xd0c0b0b
  1: if r1 == 0x0 goto 5f        1: r1 = *(u64 *)(r1 + 0x0)
  2: r0 = *(s32 *)(r1 + 0x0)     2: if r1 == 0x0 goto 6f
  3: *(u32 *)(r1 + 0x0) = 0x5a   3: r0 = *(u32 *)(r1 + 0x0)
  4: exit                        4: w2 = 0x5a
  5: r0 = -0xd0c0b0b             5: *(u32 *)(r1 + 0x0) = r2
  6: exit                        6: exit

If the 'state' argument is not marked as nullable in
net/bpf/bpf_dummy_struct_ops.c, the verifier would assume that
'r1 == 0x0' is never true:
- for the GCC version, this means that instructions #5-6 would be
  marked as dead and removed;
- for the LLVM version, all instructions would be marked as live.

The test dummy_st_ops/dummy_init_ret_value actually sets the 'state'
parameter to NULL.

Therefore, when the 'state' argument is not marked as nullable,
the GCC-generated version of the code would trigger a NULL pointer
dereference at instruction #3.

This patch updates the test_1() test case to always follow a shape
similar to the GCC-generated version above, in order to verify whether
the 'state' nullability is marked correctly.

Reported-by: Jose E. Marchesi <jemarch@gnu.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240424012821.595216-3-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable
Eduard Zingerman [Wed, 24 Apr 2024 01:28:17 +0000 (18:28 -0700)]
bpf: mark bpf_dummy_struct_ops.test_1 parameter as nullable

Test case dummy_st_ops/dummy_init_ret_value passes NULL as the first
parameter of the test_1() function. Mark this parameter as nullable to
make the verifier aware of this possibility.
Otherwise, the NULL check in the test_1() code:

      SEC("struct_ops/test_1")
      int BPF_PROG(test_1, struct bpf_dummy_ops_state *state)
      {
            if (!state)
                    return ...;

            ... access state ...
      }

might be removed by the verifier, triggering a NULL pointer dereference
under certain conditions.
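
For reference, a hedged sketch of how a struct_ops stub parameter can be
marked nullable; the exact mechanism in net/bpf/bpf_dummy_struct_ops.c is
assumed here to be the struct_ops "__nullable" suffix convention:

      /* The "__nullable" suffix on the parameter name tells the verifier
       * that this argument may be NULL, forcing programs to check it
       * before dereferencing. */
      static int bpf_dummy_ops__test_1(struct bpf_dummy_ops_state *state__nullable)
      {
              return 0;
      }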

Reported-by: Jose E. Marchesi <jemarch@gnu.org>
Signed-off-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240424012821.595216-2-eddyz87@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: Add ring_buffer__consume_n test.
Andrea Righi [Thu, 25 Apr 2024 14:06:27 +0000 (16:06 +0200)]
selftests/bpf: Add ring_buffer__consume_n test.

Add a testcase for the ring_buffer__consume_n() API.

The test produces multiple samples in a ring buffer, using a
sys_getpid() fentry prog, and consumes them from user-space in batches,
rather than consuming all of them greedily, like ring_buffer__consume()
does.
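
For context, a minimal usage sketch from the user-space side, assuming
libbpf's int ring_buffer__consume_n(struct ring_buffer *rb, size_t n):

  /* rb and its sample callback are assumed to be set up elsewhere.
   * Consume at most 3 samples per call instead of draining the whole
   * ring buffer; the return value is the number of records consumed
   * or a negative error. */
  int n = ring_buffer__consume_n(rb, 3);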

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/lkml/CAEf4BzaR4zqUpDmj44KNLdpJ=Tpa97GrvzuzVNO5nM6b7oWd1w@mail.gmail.com
Link: https://lore.kernel.org/bpf/20240425140627.112728-1-andrea.righi@canonical.com
2 months ago  bpf: Add bpf_guard_preempt() convenience macro
Alexei Starovoitov [Wed, 24 Apr 2024 22:55:29 +0000 (15:55 -0700)]
bpf: Add bpf_guard_preempt() convenience macro

Add a bpf_guard_preempt() macro that uses the newly introduced
bpf_preempt_disable/enable() kfuncs to guard a critical section.
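
A hedged usage sketch: the guard is built on the compiler's cleanup
attribute, so preemption is re-enabled automatically when the enclosing
scope is left; the per-CPU counter here is illustrative:

  static int update_percpu_stat(void)
  {
          bpf_guard_preempt();
          counter++;      /* non-preemptible critical section */
          return 0;       /* preemption re-enabled at scope exit */
  }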

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240424225529.16782-1-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  Merge branch 'BPF crypto API framework'
Martin KaFai Lau [Wed, 24 Apr 2024 20:23:43 +0000 (13:23 -0700)]
Merge branch 'BPF crypto API framework'

Vadim Fedorenko says:

====================
This series introduces crypto kfuncs so that BPF programs can utilize
the kernel crypto subsystem. Crypto operations are made pluggable to
avoid extensive kernel growth when they are not needed. Only skcipher is
added within this series, but it can easily be extended to other types
of operations. No hardware offload is supported, as it needs a sleepable
context, which is not available for TX or XDP programs. At the same
time, the crypto context initialization kfunc can only run in a
sleepable context; that's why it should be run separately, storing the
result in a map.

Selftests show the common way to implement crypto actions in BPF
programs. A benchmark is also added to provide a baseline.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests: bpf: crypto: add benchmark for crypto functions
Vadim Fedorenko [Mon, 22 Apr 2024 22:50:24 +0000 (15:50 -0700)]
selftests: bpf: crypto: add benchmark for crypto functions

Some simple benchmarks are added to establish a performance baseline.

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
Link: https://lore.kernel.org/r/20240422225024.2847039-5-vadfed@meta.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests: bpf: crypto skcipher algo selftests
Vadim Fedorenko [Mon, 22 Apr 2024 22:50:23 +0000 (15:50 -0700)]
selftests: bpf: crypto skcipher algo selftests

Add simple tc hook selftests to show how to work with the new crypto
BPF API. Some tricky dynptr initialization is used to provide an empty
iv dynptr. The simple AES-ECB algo is used to demonstrate encryption
and decryption of fixed-size buffers.
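
For illustration, a condensed sketch in the spirit of these selftests;
the kfunc and dynptr call signatures are assumptions based on this
series, and ctx is a crypto context created earlier in a sleepable
program:

  struct bpf_dynptr psrc, pdst, iv;

  bpf_dynptr_from_skb(skb, 0, &psrc);             /* ciphertext in packet */
  bpf_dynptr_from_mem(dst_buf, sizeof(dst_buf), 0, &pdst);
  bpf_dynptr_from_mem(dst_buf, 0, 0, &iv);        /* empty IV dynptr (ECB) */
  bpf_crypto_decrypt(ctx, &psrc, &pdst, &iv);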

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
Link: https://lore.kernel.org/r/20240422225024.2847039-4-vadfed@meta.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  bpf: crypto: add skcipher to bpf crypto
Vadim Fedorenko [Mon, 22 Apr 2024 22:50:22 +0000 (15:50 -0700)]
bpf: crypto: add skcipher to bpf crypto

Implement skcipher crypto in the BPF crypto framework.

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/20240422225024.2847039-3-vadfed@meta.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  bpf: make common crypto API for TC/XDP programs
Vadim Fedorenko [Mon, 22 Apr 2024 22:50:21 +0000 (15:50 -0700)]
bpf: make common crypto API for TC/XDP programs

Add crypto API support to BPF to be able to decrypt or encrypt packets
in TC/XDP BPF programs. Special care should be taken with the
initialization part of a crypto algo, because crypto allocation doesn't
work with preemption disabled; it can only be run in a sleepable BPF
program. Async crypto is also not supported, because of the very same
issue: TC/XDP BPF programs are not sleepable.
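
As a hedged sketch of the flow this implies (field names of
struct bpf_crypto_params and the kfunc signatures are assumptions based
on this series; 'v' is a looked-up map value with a bpf_crypto_ctx kptr
field), the context is created once in a sleepable program and stashed
in a map for later TC/XDP use:

  struct bpf_crypto_params params = {
          .type    = "skcipher",
          .algo    = "ecb(aes)",
          .key_len = 16,          /* key bytes themselves omitted here */
  };
  int err = 0;
  struct bpf_crypto_ctx *cctx, *old;

  cctx = bpf_crypto_ctx_create(&params, sizeof(params), &err);
  if (!cctx)
          return err;
  old = bpf_kptr_xchg(&v->ctx, cctx);     /* stash in a map value kptr */
  if (old)
          bpf_crypto_ctx_release(old);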

Signed-off-by: Vadim Fedorenko <vadfed@meta.com>
Link: https://lore.kernel.org/r/20240422225024.2847039-2-vadfed@meta.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  bpf: update the comment for BTF_FIELDS_MAX
Haiyue Wang [Wed, 24 Apr 2024 05:45:07 +0000 (13:45 +0800)]
bpf: update the comment for BTF_FIELDS_MAX

The commit d56b63cf0c0f ("bpf: add support for bpf_wq user type")
raised the number of supported fields to 11; sync the comment
accordingly.

Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240424054526.8031-1-haiyue.wang@intel.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: Fix wq test.
Alexei Starovoitov [Wed, 24 Apr 2024 20:57:18 +0000 (13:57 -0700)]
selftests/bpf: Fix wq test.

The wq test was missing the destroy(skel) part, which caused bpf progs
to stay loaded and test_progs to complain with a
"Failed to unload bpf_testmod.ko from kernel: -11" message. Adding
destroy() wasn't enough, since the wq callback may be delayed, so loop
on unloading bpf_testmod if errno is EAGAIN.

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Fixes: 8290dba51910 ("selftests/bpf: wq: add bpf_wq_start() checks")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  Merge branch 'use network helpers, part 2'
Martin KaFai Lau [Wed, 24 Apr 2024 15:55:30 +0000 (08:55 -0700)]
Merge branch 'use network helpers, part 2'

Geliang Tang says:

====================
This patchset uses more network helpers in test_sock_addr.c. First of
all, patch 2 is needed to make network_helpers.c independent of
test_progs.c; then network_helpers.h can be included into
test_sock_addr.c without compile errors.

Patches 1 and 2 also address Martin's comments on the previous series.

v2:
 - Only a few minor cleanups to patch 5.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use make_sockaddr in test_sock_addr
Geliang Tang [Tue, 23 Apr 2024 10:35:31 +0000 (18:35 +0800)]
selftests/bpf: Use make_sockaddr in test_sock_addr

This patch uses the public helper make_sockaddr(), exported in
network_helpers.h, instead of the locally defined function mk_sockaddr()
in test_sock_addr.c. This avoids duplicate code.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/1473e189d6ca1a3925de4c5354d191a14eca0f3f.1713868264.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use connect_to_addr in test_sock_addr
Geliang Tang [Tue, 23 Apr 2024 10:35:30 +0000 (18:35 +0800)]
selftests/bpf: Use connect_to_addr in test_sock_addr

This patch uses the public network helper connect_to_addr(), exported
in network_helpers.h, instead of the locally defined function
connect_to_server() in test_sock_addr.c. This avoids duplicate code.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/f263797712d93fdfaf2943585c5dfae56714a00b.1713868264.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use start_server_addr in test_sock_addr
Geliang Tang [Tue, 23 Apr 2024 10:35:29 +0000 (18:35 +0800)]
selftests/bpf: Use start_server_addr in test_sock_addr

Include network_helpers.h in test_sock_addr.c and use the newly added
public helper start_server_addr() instead of the locally defined
function start_server(). This avoids duplicate code.

In order to use functions defined in network_helpers.c in
test_sock_addr.c, the Makefile needs to be updated and <linux/err.h>
needs to be included in network_helpers.h to avoid compilation errors.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/3101f57bde5502383eb41723c8956cc26be06893.1713868264.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use log_err in open_netns/close_netns
Geliang Tang [Tue, 23 Apr 2024 10:35:28 +0000 (18:35 +0800)]
selftests/bpf: Use log_err in open_netns/close_netns

ASSERT helpers defined in test_progs.h shouldn't be used in public
functions like open_netns() and close_netns(), since they depend on
test__fail(), which is defined in test_progs.c. Public functions may be
used not only in test_progs.c, but also in other tests like
test_sock_addr.c in the next commit.

This patch uses log_err() to replace the ASSERT helpers in open_netns()
and close_netns() in network_helpers.c to decouple the dependencies,
then uses ASSERT_OK_PTR() to check the return values of all open_netns()
calls.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/d1dad22b2ff4909af3f8bfd0667d046e235303cb.1713868264.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Fix a fd leak in error paths in open_netns
Geliang Tang [Tue, 23 Apr 2024 10:35:27 +0000 (18:35 +0800)]
selftests/bpf: Fix a fd leak in error paths in open_netns

As Martin mentioned in a review comment, there is an existing bug where
orig_netns_fd is leaked in the later "goto fail;" cases after
open("/proc/self/ns/net") in open_netns() in network_helpers.c. This
patch adds "close(token->orig_netns_fd);" before "free(token);" to
fix it.

Fixes: a30338840fa5 ("selftests/bpf: Move open_netns() and close_netns() into network_helpers.c")
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/a104040b47c3c34c67f3f125cdfdde244a870d3c.1713868264.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  Merge branch 'introduce-bpf_preempt_-disable-enable'
Alexei Starovoitov [Wed, 24 Apr 2024 16:47:49 +0000 (09:47 -0700)]
Merge branch 'introduce-bpf_preempt_-disable-enable'

Kumar Kartikeya Dwivedi says:

====================
Introduce bpf_preempt_{disable,enable}

This set introduces two kfuncs, bpf_preempt_disable and
bpf_preempt_enable, which are wrappers around preempt_disable and
preempt_enable in the kernel. These functions allow a BPF program to
have code sections where preemption is disabled. There are multiple use
cases that are served by such a feature, a few are listed below:

1. Writing safe per-CPU algorithms/data structures that work correctly
   across different contexts.
2. Writing safe per-CPU allocators similar to bpf_mem_alloc on top of
   array/arena memory blobs.
3. Writing locking algorithms in BPF programs natively.

Note that local_irq_disable/enable equivalent is also needed for proper
IRQ context protection, but that is a more involved change and will be
sent later.

While bpf_preempt_{disable,enable} is not sufficient for all of these
usage scenarios on its own, it is still necessary.

The same effect as these kfuncs can in some sense be already achieved
using the bpf_spin_lock or rcu_read_lock APIs, therefore from the
standpoint of kernel functionality exposure in the verifier, this is
well understood territory.

Note that these helpers do allow calling kernel helpers and kfuncs from
within the non-preemptible region (unless sleepable). Otherwise, any
locks built using the preemption helpers will be as limited as
existing bpf_spin_lock.

Nesting is allowed by keeping a counter that tracks the remaining
enables still to be performed. A similar approach can be applied to
rcu_read_locks in a follow-up.

Changelog
=========
v1: https://lore.kernel.org/bpf/20240423061922.2295517-1-memxor@gmail.com

 * Move kfunc BTF ID declarations above css task kfunc for
   !CONFIG_CGROUPS config (Alexei)
 * Add test case for global function call in non-preemptible region
   (Jiri)
====================

Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20240424031315.2757363-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: Add tests for preempt kfuncs
Kumar Kartikeya Dwivedi [Wed, 24 Apr 2024 03:13:15 +0000 (03:13 +0000)]
selftests/bpf: Add tests for preempt kfuncs

Add tests for nested cases, nested count preservation upon different
subprog calls that disable/enable preemption, and a sleepable helper
call in non-preemptible regions.

182/1   preempt_lock/preempt_lock_missing_1:OK
182/2   preempt_lock/preempt_lock_missing_2:OK
182/3   preempt_lock/preempt_lock_missing_3:OK
182/4   preempt_lock/preempt_lock_missing_3_minus_2:OK
182/5   preempt_lock/preempt_lock_missing_1_subprog:OK
182/6   preempt_lock/preempt_lock_missing_2_subprog:OK
182/7   preempt_lock/preempt_lock_missing_2_minus_1_subprog:OK
182/8   preempt_lock/preempt_balance:OK
182/9   preempt_lock/preempt_balance_subprog_test:OK
182/10  preempt_lock/preempt_global_subprog_test:OK
182/11  preempt_lock/preempt_sleepable_helper:OK
182     preempt_lock:OK
Summary: 1/11 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240424031315.2757363-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: Introduce bpf_preempt_[disable,enable] kfuncs
Kumar Kartikeya Dwivedi [Wed, 24 Apr 2024 03:13:14 +0000 (03:13 +0000)]
bpf: Introduce bpf_preempt_[disable,enable] kfuncs

Introduce two new BPF kfuncs, bpf_preempt_disable and
bpf_preempt_enable. These kfuncs allow disabling preemption in BPF
programs. Nesting is allowed, since the intended use cases include
building native BPF spin locks without kernel helper involvement. Apart
from that, this can be used to protect per-CPU data structures for
cases where programs (or userspace) may preempt one another. Currently,
while per-CPU access is stable, whether it will be consistent is not
guaranteed, as only migration is disabled for BPF programs.

Global functions are disallowed from being called, but support for them
will be added as a follow-up, not just for preempt kfuncs but for
rcu_read_lock kfuncs as well. Static subprog calls are permitted.
Sleepable helpers and kfuncs are disallowed in non-preemptible regions.
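
A hedged usage sketch, with the kfunc declarations as they would appear
in a BPF program (the attach point here is illustrative):

  void bpf_preempt_disable(void) __ksym;
  void bpf_preempt_enable(void) __ksym;

  SEC("fentry/bpf_fentry_test1")
  int BPF_PROG(preempt_balance)
  {
          bpf_preempt_disable();
          /* non-preemptible region: no sleepable helpers/kfuncs here;
           * every disable must be balanced by an enable before exit */
          bpf_preempt_enable();
          return 0;
  }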

Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20240424031315.2757363-2-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: Don't check for recursion in bpf_wq_work.
Alexei Starovoitov [Wed, 24 Apr 2024 16:00:23 +0000 (09:00 -0700)]
bpf: Don't check for recursion in bpf_wq_work.

__bpf_prog_enter_sleepable_recur() does a recursion check which is not
applicable to the wq callback. The callback function is part of a bpf
program, and that bpf prog might be running on the same cpu, so the
recursion check would incorrectly prevent the callback from running. The
code could call __bpf_prog_enter_sleepable(), but run_ctx would be fake;
hence use an explicit rcu_read_lock_trace(); migrate_disable(); pair to
address this problem. Another reason to open code this is that
__bpf_prog_enter* are not available in !JIT configs.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202404241719.IIGdpAku-lkp@intel.com/
Closes: https://lore.kernel.org/oe-kbuild-all/202404241811.FFV4Bku3-lkp@intel.com/
Fixes: eb48f6cd41a0 ("bpf: wq: add bpf_wq_init")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  Merge branch 'introduce-bpf_wq'
Alexei Starovoitov [Wed, 24 Apr 2024 01:31:26 +0000 (18:31 -0700)]
Merge branch 'introduce-bpf_wq'

Benjamin Tissoires says:

====================
Introduce bpf_wq

This is a followup of sleepable bpf_timer[0].

When discussing sleepable bpf_timer, it was thought that we should give
bpf_wq a try, as the two APIs are similar but distinct enough to
justify a new one.

So here it is.

I tried to keep as much common code as possible in kernel/bpf/helpers.c,
but I couldn't avoid some code duplication in kernel/bpf/verifier.c.

This series introduces a basic bpf_wq support:
- creation is supported
- assignment is supported
- running a simple bpf_wq is also supported.

We will probably need to extend the API further with:
- a full delayed_work API (can be piggy-backed on top with a correct
  flag)
- bpf_wq_cancel() <- apparently not, this is shooting ourselves in the
  foot
- bpf_wq_cancel_sync() (for sleepable programs)
- documentation
---

For reference, the use cases I have in mind:

---

Basically, I need to be able to defer a HID-BPF program for the
following reasons (from the aforementioned patch):
1. defer an event:
   Sometimes we receive an out-of-proximity event, but the device cannot
   be trusted enough, and we need to ensure that we won't receive another
   one in the following n milliseconds. So we need to wait those n
   milliseconds, and eventually re-inject that event into the stack.

2. inject new events in reaction to one given event:
   We might want to transform one given event into several. This is the
   case for macro keys where a single key press is supposed to send
   a sequence of key presses. But this could also be used to patch a
   faulty behavior, if a device forgets to send a release event.

3. communicate with the device in reaction to one event:
   We might want to communicate back to the device after a given event.
   For example a device might send us an event saying that it came back
   from sleeping state and needs to be re-initialized.

Currently we can achieve that by keeping a userspace program around,
raising a bpf event, and letting that userspace program inject the
events and commands. However, we are just keeping that program alive as
a daemon merely to schedule commands. There is no logic in it, so it
doesn't really justify an actual userspace wakeup, and a kernel
workqueue seems simpler to handle.

bpf_timers currently run in a soft IRQ context; this patch series
implements a sleepable context for them.

Cheers,
Benjamin

[0] https://lore.kernel.org/all/20240408-hid-bpf-sleepable-v6-0-0499ddd91b94@kernel.org/

Changes in v2:
- took previous review into account
- mainly dropped BPF_F_WQ_SLEEPABLE
- Link to v1: https://lore.kernel.org/r/20240416-bpf_wq-v1-0-c9e66092f842@kernel.org

====================

Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-0-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: wq: add bpf_wq_start() checks
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:16 +0000 (11:09 +0200)]
selftests/bpf: wq: add bpf_wq_start() checks

Allows testing whether allocation/free works.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-16-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: add bpf_wq_start
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:15 +0000 (11:09 +0200)]
bpf: add bpf_wq_start

Again, this is mostly a copy/paste from bpf_timer_start().
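
For orientation, a hedged end-to-end sketch of the API introduced across
this series; the callback signature and the bpf_wq_set_callback() name
are assumptions, and 'val' is a map value embedding a struct bpf_wq:

  static int wq_cb(void *map, int *key, void *value)
  {
          /* deferred work runs here, in a sleepable context */
          return 0;
  }

  /* inside a program, with 'val' looked up from 'array_map': */
  if (bpf_wq_init(&val->wq, &array_map, 0))
          return 0;
  if (bpf_wq_set_callback(&val->wq, wq_cb, 0))
          return 0;
  bpf_wq_start(&val->wq, 0);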

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-15-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: add checks for bpf_wq_set_callback()
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:14 +0000 (11:09 +0200)]
selftests/bpf: add checks for bpf_wq_set_callback()

We assign the callback and set everything up.
The actual tests of these callbacks will be done when bpf_wq_start() is
available.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-14-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: wq: add bpf_wq_set_callback_impl
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:13 +0000 (11:09 +0200)]
bpf: wq: add bpf_wq_set_callback_impl

To support sleepable async callbacks, we need to tell push_async_cb()
whether the cb is sleepable or not.

The verifier now detects that we are in bpf_wq_set_callback_impl and
can allow a sleepable callback to happen.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-13-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: wq: add bpf_wq_init() checks
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:12 +0000 (11:09 +0200)]
selftests/bpf: wq: add bpf_wq_init() checks

Allows testing whether allocation/free works.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-12-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: wq: add bpf_wq_init
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:11 +0000 (11:09 +0200)]
bpf: wq: add bpf_wq_init

We need to teach the verifier about the second argument, which is
declared as void * but is of type KF_ARG_PTR_TO_MAP. We could have
dropped this extra case if we had declared the second argument as
struct bpf_map *, but that would mean users have to do extra casting to
get their program to compile.

We also need to duplicate the timer code for checking whether the map
argument matches the provided workqueue.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-11-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  selftests/bpf: add bpf_wq tests
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:10 +0000 (11:09 +0200)]
selftests/bpf: add bpf_wq tests

For all supported map types, we simply test whether we can store/load a bpf_wq.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-10-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: allow struct bpf_wq to be embedded in arraymaps and hashmaps
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:09 +0000 (11:09 +0200)]
bpf: allow struct bpf_wq to be embedded in arraymaps and hashmaps

Currently bpf_wq_cancel_and_free() is just a placeholder, as there is
no memory allocation for bpf_wq just yet.

Again, this duplicates the bpf_timer approach.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-9-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: add support for KF_ARG_PTR_TO_WORKQUEUE
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:08 +0000 (11:09 +0200)]
bpf: add support for KF_ARG_PTR_TO_WORKQUEUE

Introduce support for KF_ARG_PTR_TO_WORKQUEUE. The kfuncs will take
bpf_wq as an argument, and the verifier will recognize it as a workqueue
argument. Casting to bpf_wq_kern can happen inside the kfunc, but using
bpf_wq in the argument makes life easier for users who work with the
non-kern type in BPF progs.

Duplicate process_timer_func into process_wq_func.
The meta argument is only needed to ensure that bpf_wq_init's workqueue
and map arguments come from the same map (the map_uid logic is necessary
for correct inner-map handling), so also amend check_kfunc_args() to
match what the helper-function checks do.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-8-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: verifier: bail out if the argument is not a map
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:07 +0000 (11:09 +0200)]
bpf: verifier: bail out if the argument is not a map

When a kfunc is declared with a KF_ARG_PTR_TO_MAP, we should have
reg->map_ptr set to a non-NULL value; otherwise, that means the
underlying type is not a map.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-7-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  tools: sync include/uapi/linux/bpf.h
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:06 +0000 (11:09 +0200)]
tools: sync include/uapi/linux/bpf.h

cp include/uapi/linux/bpf.h tools/include/uapi/linux/bpf.h

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-6-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: add support for bpf_wq user type
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:05 +0000 (11:09 +0200)]
bpf: add support for bpf_wq user type

Mostly a copy/paste from the bpf_timer API, without the initialization
and free, as they will be done in a separate patch.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-5-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: replace bpf_timer_cancel_and_free with a generic helper
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:04 +0000 (11:09 +0200)]
bpf: replace bpf_timer_cancel_and_free with a generic helper

For the same reason as with most bpf_timer* functions, we need almost
the same thing for workqueues, so extract the generic part out of
bpf_timer_cancel_and_free() so that bpf_wq_cancel_and_free() can
reuse it.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-4-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: replace bpf_timer_set_callback with a generic helper
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:03 +0000 (11:09 +0200)]
bpf: replace bpf_timer_set_callback with a generic helper

In the same way we have a generic __bpf_async_init(), we also need
to share code between timer and workqueue for the set_callback call.

We just add an unused flags parameter, as it will be used for workqueues.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-3-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: replace bpf_timer_init with a generic helper
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:02 +0000 (11:09 +0200)]
bpf: replace bpf_timer_init with a generic helper

No code change except for the new flags argument being stored in the
local data struct.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-2-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: make timer data struct more generic
Benjamin Tissoires [Sat, 20 Apr 2024 09:09:01 +0000 (11:09 +0200)]
bpf: make timer data struct more generic

To be able to add workqueues and reuse most of the timer code, we need
to make bpf_hrtimer more generic.

There is no code change, except that the new struct gets a new u64 flags
attribute. We are still below 2 cache lines, so this shouldn't impact
currently running code.

The ordering is also changed: everything related to the async callback
is now at the top of bpf_hrtimer.

Signed-off-by: Benjamin Tissoires <bentiss@kernel.org>
Link: https://lore.kernel.org/r/20240420-bpf_wq-v2-1-6c986a5a741f@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf: Fix typos in comments
Rafael Passos [Wed, 17 Apr 2024 18:49:14 +0000 (15:49 -0300)]
bpf: Fix typos in comments

Found the following typos in comments, and fixed them:

s/unpriviledged/unprivileged/
s/reponsible/responsible/
s/possiblities/possibilities/
s/Divison/Division/
s/precsion/precision/
s/havea/have a/
s/reponsible/responsible/
s/responsibile/responsible/
s/tigher/tighter/
s/respecitve/respective/

Signed-off-by: Rafael Passos <rafael@rcpassos.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/6af7deb4-bb24-49e8-b3f1-8dd410597337@smtp-relay.sendinblue.com
2 months ago  bpf: Fix typo in function save_aux_ptr_type
Rafael Passos [Wed, 17 Apr 2024 17:52:26 +0000 (14:52 -0300)]
bpf: Fix typo in function save_aux_ptr_type

I found this typo in the save_aux_ptr_type function.
s/allow_trust_missmatch/allow_trust_mismatch/
I did not find this anywhere else in the codebase.

Signed-off-by: Rafael Passos <rafael@rcpassos.me>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/fbe1d636-8172-4698-9a5a-5a3444b55322@smtp-relay.sendinblue.com
2 months ago  bpf, docs: Fix formatting nit in instruction-set.rst
Dave Thaler [Fri, 19 Apr 2024 21:38:26 +0000 (14:38 -0700)]
bpf, docs: Fix formatting nit in instruction-set.rst

Other places that had pseudocode were prefixed with ::
so as to appear in a literal block, but one place was inconsistent.
This patch fixes that inconsistency.

Signed-off-by: Dave Thaler <dthaler1968@googlemail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240419213826.7301-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  bpf, docs: Clarify helper ID and pointer terms in instruction-set.rst
Dave Thaler [Fri, 19 Apr 2024 20:36:17 +0000 (13:36 -0700)]
bpf, docs: Clarify helper ID and pointer terms in instruction-set.rst

Per IETF 119 meeting discussion and mailing list discussion at
https://mailarchive.ietf.org/arch/msg/bpf/2JwWQwFdOeMGv0VTbD0CKWwAOEA/
the following changes are made.

First, say call by "static ID" rather than call by "address".

Second, change "pointer" to "address".

Signed-off-by: Dave Thaler <dthaler1968@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240419203617.6850-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2 months ago  Merge branch 'use network helpers, part 1'
Martin KaFai Lau [Sat, 20 Apr 2024 00:13:29 +0000 (17:13 -0700)]
Merge branch 'use network helpers, part 1'

Geliang Tang says:

====================
v5:
 - address Martin's comments for v4. (thanks)
 - drop start_server_addr_opts, add opts as an argument of
   start_server_addr.
 - add opts argument for connect_to_addr too.
 - move some patches out of this set, stay with start_server_addr()
   and connect_to_addr() only in it.

v4:
 - add more patches using make_sockaddr and get_socket_local_port
   helpers.

v3:
 - address comments of Martin and Eduard in v2. (thanks)
 - move "int type" to the first argument of start_server_addr and
   connect_to_addr.
 - add start_server_addr_opts.
 - using "sockaddr_storage" instead of "sockaddr".
 - move start_server_setsockopt patches out of this series.

v2:
 - update patch 6 only, fix errors reported by CI.

This patchset uses public helpers start_server_* and connect_to_* defined
in network_helpers.c to drop duplicate code.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use connect_to_addr in sk_assign
Geliang Tang [Thu, 18 Apr 2024 08:09:12 +0000 (16:09 +0800)]
selftests/bpf: Use connect_to_addr in sk_assign

This patch uses the public helper connect_to_addr(), exported in
network_helpers.h, instead of the locally defined function
connect_to_server() in prog_tests/sk_assign.c. This avoids duplicate
code.

The code that sets the SO_SNDTIMEO timeout as timeo_sec (3s) can be
dropped, since connect_to_addr() sets a default timeout of 3s.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/98fdd384872bda10b2adb052e900a2212c9047b9.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use connect_to_addr in cls_redirect
Geliang Tang [Thu, 18 Apr 2024 08:09:11 +0000 (16:09 +0800)]
selftests/bpf: Use connect_to_addr in cls_redirect

This patch uses the public helper connect_to_addr(), exported in
network_helpers.h, instead of the locally defined function
connect_to_server() in prog_tests/cls_redirect.c. This avoids duplicate
code.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/4a03ac92d2d392f8721f398fa449a83ac75577bc.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Update arguments of connect_to_addr
Geliang Tang [Thu, 18 Apr 2024 08:09:10 +0000 (16:09 +0800)]
selftests/bpf: Update arguments of connect_to_addr

Move the third argument "int type" of connect_to_addr() to the first one
which is closer to how the socket syscall is doing it. And add a
network_helper_opts argument as the fourth one. Then change its usages in
sock_addr.c too.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/088ea8a95055f93409c5f57d12f0e58d43059ac4.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use start_server_addr in sk_assign
Geliang Tang [Thu, 18 Apr 2024 08:09:09 +0000 (16:09 +0800)]
selftests/bpf: Use start_server_addr in sk_assign

Include network_helpers.h in prog_tests/sk_assign.c and use the newly
added public helper start_server_addr() instead of the locally defined
function start_server(). This avoids duplicate code.

The code that sets the SO_RCVTIMEO timeout as timeo_sec (3s) can be
dropped, since start_server_addr() sets a default timeout of 3s.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/2af706ffbad63b4f7eaf93a426ed1076eadf1a05.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Use start_server_addr in cls_redirect
Geliang Tang [Thu, 18 Apr 2024 08:09:08 +0000 (16:09 +0800)]
selftests/bpf: Use start_server_addr in cls_redirect

Include network_helpers.h in prog_tests/cls_redirect.c and use the
newly added public helper start_server_addr() instead of the locally
defined function start_server(). This avoids duplicate code.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/13f336cb4c6680175d50bb963d9532e11528c758.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  selftests/bpf: Add start_server_addr helper
Geliang Tang [Thu, 18 Apr 2024 08:09:07 +0000 (16:09 +0800)]
selftests/bpf: Add start_server_addr helper

In order to pair up with connect_to_addr(), this patch adds a new helper
start_server_addr(), which is a wrapper around __start_server(). It
accepts an 'addr' argument of type 'struct sockaddr_storage' instead of
a string-type argument like start_server() does, plus a
network_helper_opts argument as the last one.
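
A hedged usage sketch pairing the two helpers (the prototypes and the
fixed port are assumptions consistent with this series):

  struct sockaddr_storage addr;
  socklen_t len;
  int srv_fd, cli_fd;

  if (make_sockaddr(AF_INET, "127.0.0.1", 8080, &addr, &len))
          return;
  srv_fd = start_server_addr(SOCK_STREAM, &addr, len, NULL);
  cli_fd = connect_to_addr(SOCK_STREAM, &addr, len, NULL);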

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/2f01d48fa026467926738debe554ac452c19b86f.1713427236.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
2 months ago  bpf: Fix JIT of is_mov_percpu_addr instruction.
Alexei Starovoitov [Wed, 17 Apr 2024 21:44:06 +0000 (14:44 -0700)]
bpf: Fix JIT of is_mov_percpu_addr instruction.

The codegen for the is_mov_percpu_addr instruction works only for the
rax/r8 registers. Fix it to generate proper x86 byte code for other
registers.

Fixes: 7bdbf7446305 ("bpf: add special internal-only MOV instruction to resolve per-CPU addrs")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240417214406.15788-1-alexei.starovoitov@gmail.com
2 months ago  libbpf: Fix dump of subsequent char arrays
Quentin Deslandes [Sat, 13 Apr 2024 21:12:58 +0000 (23:12 +0200)]
libbpf: Fix dump of subsequent char arrays

When dumping a character array, libbpf will watch for a '\0' and set
is_array_terminated=true if found. This prevents libbpf from printing
the remaining characters of the array, treating it as a nul-terminated
string.

However, once this flag is set, it's never reset, leading to subsequent
character arrays not being printed properly:

.str_multi = (__u8[2][16])[
    [
        'H',
        'e',
        'l',
    ],
],

This patch saves the is_array_terminated flag, resets it to its
default (false) value before looping over the elements of an array, and
restores it afterward. This way, libbpf's behavior is unchanged when
dumping the characters of an array, but subsequent arrays are printed
properly:

.str_multi = (__u8[2][16])[
    [
        'H',
        'e',
        'l',
    ],
    [
        'l',
        'o',
    ],
],
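
A hedged sketch of the save/reset/restore pattern in
btf_dump_array_data() (the exact field location is an assumption):

  bool is_array_terminated = d->typed_dump->is_array_terminated;

  d->typed_dump->is_array_terminated = false;   /* reset per array */
  /* ... dump each element via btf_dump_dump_type_data() ... */
  d->typed_dump->is_array_terminated = is_array_terminated;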

Signed-off-by: Quentin Deslandes <qde@naccy.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240413211258.134421-3-qde@naccy.de
2 months ago  libbpf: Fix misaligned array closing bracket
Quentin Deslandes [Sat, 13 Apr 2024 21:12:57 +0000 (23:12 +0200)]
libbpf: Fix misaligned array closing bracket

In btf_dump_array_data(), libbpf will call btf_dump_dump_type_data() for
each element. For an array of characters, each element will be
processed the following way:

- btf_dump_dump_type_data() is called to print the character
- btf_dump_data_pfx() prefixes the current line with the proper number
  of indentations
- btf_dump_int_data() is called to print the character
- After the last character is printed, btf_dump_dump_type_data() calls
  btf_dump_data_pfx() before writing the closing bracket

However, for an array containing characters, btf_dump_int_data() won't
print the '\0' character or anything after it. This leads to situations
where the line prefix is written, no character is added, and then the
prefix is written again before adding the closing bracket:

(struct sk_metadata){
    .str_array = (__u8[14])[
        'H',
        'e',
        'l',
        'l',
        'o',
                ],

This change solves this issue by printing the '\0' character, which
has two benefits:

- The bracket closing the array is properly aligned
- It's clear from a user point of view that libbpf uses '\0' as a
  terminator for arrays of characters.

Signed-off-by: Quentin Deslandes <qde@naccy.de>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240413211258.134421-2-qde@naccy.de
2 months ago  bpftool: Address minor issues in bash completion
Quentin Monnet [Sat, 13 Apr 2024 01:14:27 +0000 (02:14 +0100)]
bpftool: Address minor issues in bash completion

This commit contains a series of clean-ups and fixes for bpftool's bash
completion file:

- Make sure all local variables are declared as such.
- Make sure variables are initialised before being read.
- Update ELF section ("maps" -> ".maps") for looking up map names in
  object files.
- Fix call to _init_completion.
- Move definition for MAP_TYPE and PROG_TYPE higher up in the scope to
  avoid defining them multiple times, reuse MAP_TYPE where relevant.
- Simplify completion for "duration" keyword in "bpftool prog profile".
- Fix completion for "bpftool struct_ops register" and "bpftool link
  (pin|detach)" where we would repeatedly suggest file names instead of
  suggesting just one name.
- Fix completion for "bpftool iter pin ... map MAP" to account for the
  "map" keyword.
- Add missing "detach" suggestion for "bpftool link".

Signed-off-by: Quentin Monnet <qmo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240413011427.14402-3-qmo@kernel.org
2 months ago  bpftool: Update documentation where progs/maps can be passed by name
Quentin Monnet [Sat, 13 Apr 2024 01:14:26 +0000 (02:14 +0100)]
bpftool: Update documentation where progs/maps can be passed by name

When using references to BPF programs, bpftool supports passing programs
by name on the command line. The manual pages for "bpftool prog" and
"bpftool map" (for prog_array updates) mention it, but we have a few
additional subcommands that support referencing programs by name but do
not mention it in their documentation. Let's update the pages for
subcommands "btf", "cgroup", and "net".

Similarly, we can reference maps by name when passing them to "bpftool
prog load", so we update the page for "bpftool prog" as well.

Signed-off-by: Quentin Monnet <qmo@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240413011427.14402-2-qmo@kernel.org
2 months ago  bpf: Harden and/or/xor value tracking in verifier
Harishankar Vishwanathan [Tue, 16 Apr 2024 11:53:02 +0000 (07:53 -0400)]
bpf: Harden and/or/xor value tracking in verifier

This patch addresses a latent unsoundness issue in the
scalar(32)_min_max_and/or/xor functions. While it is not a bugfix,
it ensures that the functions produce sound outputs for all inputs.

The issue occurs in these functions when setting signed bounds. The
following example illustrates the issue for scalar_min_max_and(),
but it applies to the other functions.

In scalar_min_max_and() the following clause is executed when ANDing
positive numbers:

  /* ANDing two positives gives a positive, so safe to
   * cast result into s64.
   */
  dst_reg->smin_value = dst_reg->umin_value;
  dst_reg->smax_value = dst_reg->umax_value;

However, if umin_value and umax_value of dst_reg cross the sign boundary
(i.e., if (s64)dst_reg->umin_value > (s64)dst_reg->umax_value), then we
will end up with smin_value > smax_value, which is unsound.

Previous works [1, 2] have discovered and reported this issue. Our tool
Agni [2, 3] considers it a false positive. This is because, during the
verification of the abstract operator scalar_min_max_and(), Agni
restricts its inputs to those passing through reg_bounds_sync(). This
mimics real-world verifier behavior, as reg_bounds_sync() is invariably
executed at the tail of every abstract operator. Therefore, such
behavior is unlikely in an actual verifier execution.

However, it is still unsound for an abstract operator to set signed bounds
such that smin_value > smax_value. This patch fixes it, making the abstract
operator sound for all (well-formed) inputs.

It is worth noting that while the previous code updated the signed bounds
(using the output unsigned bounds) only when the *input signed* bounds
were positive, the new code updates them whenever the *output unsigned*
bounds do not cross the sign boundary.
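
A hedged sketch of what the new clause amounts to in
scalar_min_max_and() (the surrounding verifier code is assumed):

  /* Safe to cast the unsigned bounds into s64 only when they do not
   * cross the sign boundary; otherwise fall back to unbounded signed
   * bounds and let reg_bounds_sync() refine them. */
  if ((s64)dst_reg->umin_value <= (s64)dst_reg->umax_value) {
          dst_reg->smin_value = dst_reg->umin_value;
          dst_reg->smax_value = dst_reg->umax_value;
  } else {
          dst_reg->smin_value = S64_MIN;
          dst_reg->smax_value = S64_MAX;
  }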

An alternative approach to fix this latent unsoundness would be to
unconditionally set the signed bounds to unbounded [S64_MIN, S64_MAX], and
let reg_bounds_sync() refine the signed bounds using the unsigned bounds
and the tnum. We found that our approach produces more precise (tighter)
bounds.

For example, consider these inputs to BPF_AND:

  /* dst_reg */
  var_off.value: 8608032320201083347
  var_off.mask: 615339716653692460
  smin_value: 8070450532247928832
  smax_value: 8070450532247928832
  umin_value: 13206380674380886586
  umax_value: 13206380674380886586
  s32_min_value: -2110561598
  s32_max_value: -133438816
  u32_min_value: 4135055354
  u32_max_value: 4135055354

  /* src_reg */
  var_off.value: 8584102546103074815
  var_off.mask: 9862641527606476800
  smin_value: 2920655011908158522
  smax_value: 7495731535348625717
  umin_value: 7001104867969363969
  umax_value: 8584102543730304042
  s32_min_value: -2097116671
  s32_max_value: 71704632
  u32_min_value: 1047457619
  u32_max_value: 4268683090

After going through tnum_and() -> scalar32_min_max_and() ->
scalar_min_max_and() -> reg_bounds_sync(), our patch produces the following
bounds for s32:

  s32_min_value: -1263875629
  s32_max_value: -159911942

Whereas, setting the signed bounds to unbounded in scalar_min_max_and()
produces:

  s32_min_value: -1263875629
  s32_max_value: -1

As observed, our patch produces a tighter s32 bound. We also confirmed
using Agni and SMT verification that our patch always produces signed
bounds that are equal to or more precise than setting the signed bounds to
unbounded in scalar_min_max_and().

  [1] https://sanjit-bhat.github.io/assets/pdf/ebpf-verifier-range-analysis22.pdf
  [2] https://link.springer.com/chapter/10.1007/978-3-031-37709-9_12
  [3] https://github.com/bpfverif/agni

Co-developed-by: Matan Shachnai <m.shachnai@rutgers.edu>
Signed-off-by: Matan Shachnai <m.shachnai@rutgers.edu>
Co-developed-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Signed-off-by: Srinivas Narayana <srinivas.narayana@rutgers.edu>
Co-developed-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Santosh Nagarakatte <santosh.nagarakatte@rutgers.edu>
Signed-off-by: Harishankar Vishwanathan <harishankar.vishwanathan@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240402212039.51815-1-harishankar.vishwanathan@gmail.com
Link: https://lore.kernel.org/bpf/20240416115303.331688-1-harishankar.vishwanathan@gmail.com
2 months ago  bpf, tests: Fix typos in comments
Chen Pei [Mon, 15 Apr 2024 08:19:28 +0000 (16:19 +0800)]
bpf, tests: Fix typos in comments

Currently, there are two comments with the same name "64-bit ATOMIC
magnitudes"; based on the context, the second one should be "32-bit
ATOMIC magnitudes".

Signed-off-by: Chen Pei <cp0613@linux.alibaba.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240415081928.17440-1-cp0613@linux.alibaba.com
2 months ago  btf: Avoid weak external references
Ard Biesheuvel [Mon, 15 Apr 2024 16:20:45 +0000 (18:20 +0200)]
btf: Avoid weak external references

If the BTF code is enabled in the build configuration, the start/stop
BTF markers are guaranteed to exist. Only when CONFIG_DEBUG_INFO_BTF=n
do the references in btf_parse_vmlinux() remain unsatisfied, relying on
the weak linkage of the external references to avoid breaking the build.

Avoid GOT-based relocations to these markers in the final executable by
dropping the weak attribute and, instead, making btf_parse_vmlinux()
return ERR_PTR(-ENOENT) directly if CONFIG_DEBUG_INFO_BTF is not enabled
to begin with. The compiler will drop any subsequent references to
__start_BTF and __stop_BTF in that case, allowing the link to succeed.

Note that Clang will notice that taking the address of __start_BTF can
no longer yield NULL, so testing for that condition becomes unnecessary.
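
A hedged sketch of the resulting shape of btf_parse_vmlinux():

  struct btf *btf_parse_vmlinux(void)
  {
          /* With CONFIG_DEBUG_INFO_BTF=n, bail out before the (no
           * longer weak) __start_BTF/__stop_BTF references are ever
           * evaluated, letting the compiler discard them. */
          if (!IS_ENABLED(CONFIG_DEBUG_INFO_BTF))
                  return ERR_PTR(-ENOENT);
          /* ... parse the [__start_BTF, __stop_BTF) region ... */
  }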

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20240415162041.2491523-8-ardb+git@google.com
3 months ago  selftests/bpf: Add read_trace_pipe_iter function
Jiri Olsa [Wed, 10 Apr 2024 14:09:52 +0000 (16:09 +0200)]
selftests/bpf: Add read_trace_pipe_iter function

We have two printk tests reading trace_pipe in a non-blocking way, with
the very same code. Move that into a new read_trace_pipe_iter()
function.

The current read_trace_pipe() is used from samples/bpf and needs to do
a blocking read and printf of the trace_pipe data; use the new
read_trace_pipe_iter() to implement that.

Both printk tests do early checks for the number of found messages and
could bail earlier, but I did not find any speed difference without
that condition, so I did not complicate the change further.

Some of the samples/bpf programs use the read_trace_pipe() function, so
I kept that interface untouched. I did not see any issues with the
affected samples/bpf programs other than a slight change in
read_trace_pipe() output. The current code uses puts(), which adds a
new line after the printed string, so we would occasionally see an
extra new line. With this patch we read the output line by line, so
there's no need to use puts() and we can just use printf() instead,
without the extra new line.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240410140952.292261-1-jolsa@kernel.org
3 months ago  bpftool: Fix typo in error message
Thorsten Blum [Thu, 11 Apr 2024 16:43:00 +0000 (18:43 +0200)]
bpftool: Fix typo in error message

s/at at/at a/

Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Quentin Monnet <qmo@kernel.org>
Link: https://lore.kernel.org/bpf/20240411164258.533063-3-thorsten.blum@toblux.com
3 months ago  Merge branch 'export send_recv_data'
Martin KaFai Lau [Thu, 11 Apr 2024 18:17:56 +0000 (11:17 -0700)]
Merge branch 'export send_recv_data'

Geliang Tang says:

====================
v5:
 - address Martin's comments for v4 (thanks).
 - update patch 2, use 'return err' instead of 'return -1/0'.
 - drop patch 3 in v4.

v4:
 - fix a bug in v3, it should be 'if (err)', not 'if (!err)'.
 - move "selftests/bpf: Use log_err in network_helpers" out of this
   series.

v3:
 - add two more patches.
 - use log_err instead of ASSERT in v3.
 - let send_recv_data return int as Martin suggested.

v2:

Address Martin's comments for v1 (thanks.)
 - drop patch 1, "export send_byte helper".
 - drop "WRITE_ONCE(arg.stop, 0)".
 - rebased.

send_recv_data will be re-used in MPTCP bpf tests, but not included
in this set because it depends on other patches that have not been
in the bpf-next yet. It will be sent as another set soon.
====================

Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months ago  selftests/bpf: Export send_recv_data helper
Geliang Tang [Thu, 11 Apr 2024 05:43:12 +0000 (13:43 +0800)]
selftests/bpf: Export send_recv_data helper

This patch extracts the code to send and receive data into a new helper
named send_recv_data() in network_helpers.c and exports it in
network_helpers.h.

This helper will be used for MPTCP BPF selftests.
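
A hedged usage sketch, assuming the prototype
int send_recv_data(int lfd, int fd, uint32_t total_bytes):

  /* lfd: server-side (receiving) socket, fd: connected client socket */
  err = send_recv_data(lfd, fd, total_bytes);
  if (err)
          log_err("send_recv_data failed");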

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/5231103be91fadcce3674a589542c63b6a5eedd4.1712813933.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months ago  selftests/bpf: Add struct send_recv_arg
Geliang Tang [Thu, 11 Apr 2024 05:43:11 +0000 (13:43 +0800)]
selftests/bpf: Add struct send_recv_arg

To avoid setting total_bytes and stop as global variables, this patch
adds a new struct named send_recv_arg to pass arguments between threads.
Put these two variables together with fd into this struct and pass it
to the server thread, so that the server thread can access these
variables without them being global.

Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Link: https://lore.kernel.org/r/ca1dd703b796f6810985418373e750f7068b4186.1712813933.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months ago  selftests/bpf: Fix umount cgroup2 error in test_sockmap
Geliang Tang [Tue, 9 Apr 2024 05:18:40 +0000 (13:18 +0800)]
selftests/bpf: Fix umount cgroup2 error in test_sockmap

This patch fixes the following "umount cgroup2" error in test_sockmap.c:

 (cgroup_helpers.c:353: errno: Device or resource busy) umount cgroup2

The cgroup fd cg_fd should be closed before calling cleanup_cgroup_environment().

Fixes: 13a5f3ffd202 ("bpf: Selftests, sockmap test prog run without setting cgroup")
Signed-off-by: Geliang Tang <tanggeliang@kylinos.cn>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/0399983bde729708773416b8488bac2cd5e022b8.1712639568.git.tanggeliang@kylinos.cn
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months ago  selftests/bpf: Enable tests for atomics with cpuv4
Yonghong Song [Wed, 10 Apr 2024 15:33:26 +0000 (08:33 -0700)]
selftests/bpf: Enable tests for atomics with cpuv4

When looking at Alexei's patch ([1]) which added tests for atomics,
I noticed that the tests will be skipped with cpuv4. For example,
with latest llvm19, I see:
  [root@arch-fb-vm1 bpf]# ./test_progs -t arena_atomics
  #3/1     arena_atomics/add:OK
  ...
  #3/7     arena_atomics/xchg:OK
  #3       arena_atomics:OK
  Summary: 1/7 PASSED, 0 SKIPPED, 0 FAILED
  [root@arch-fb-vm1 bpf]# ./test_progs-cpuv4 -t arena_atomics
  #3       arena_atomics:SKIP
  Summary: 1/0 PASSED, 1 SKIPPED, 0 FAILED
  [root@arch-fb-vm1 bpf]#

It is perfectly fine to enable atomics-related tests for cpuv4.
With this patch, I have
  [root@arch-fb-vm1 bpf]# ./test_progs-cpuv4 -t arena_atomics
  #3/1     arena_atomics/add:OK
  ...
  #3/7     arena_atomics/xchg:OK
  #3       arena_atomics:OK
  Summary: 1/7 PASSED, 0 SKIPPED, 0 FAILED

  [1] https://lore.kernel.org/r/20240405231134.17274-2-alexei.starovoitov@gmail.com

Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410153326.1851055-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoMerge branch 'bpf-add-bpf_link-support-for-sk_msg-and-sk_skb-progs'
Alexei Starovoitov [Thu, 11 Apr 2024 02:52:26 +0000 (19:52 -0700)]
Merge branch 'bpf-add-bpf_link-support-for-sk_msg-and-sk_skb-progs'

Yonghong Song says:

====================
bpf: Add bpf_link support for sk_msg and sk_skb progs

One of our internal services started to use sk_msg programs, and it
currently uses the existing prog attach/detach2 APIs as demonstrated in
selftests. But attach/detach of all other bpf programs is based on
bpf_link. Consistent attach/detach APIs for all programs make things
easier to understand and less error prone. So this patch set adds
bpf_link support for BPF_PROG_TYPE_SK_MSG. Based on comments on the
previous RFC patch, I added BPF_PROG_TYPE_SK_SKB support as well, as
both program types receive similar treatment w.r.t. bpf_link handling.

For the patch series, patch 1 adds kernel support, patch 2 adds libbpf
support, patch 3 adds bpftool support, and patches 4/5 add some new
tests.

Changelogs:
  v6 -> v7:
    - fix a missing mutex_unlock error.
  v5 -> v6:
    - resolve libbpf conflict due to recent upstream change.
    - add a bpf_link_create() test.
    - some code refactoring for better code quality.
  v4 -> v5:
    - increase scope of mutex protection in link_release.
    - remove previous-leftover entry in libbpf.map.
    - make some code changes for better understanding.
  v3 -> v4:
    - use a single mutex lock to protect both attach/detach/update
      and release/fill_info/show_fdinfo.
    - simplify code for sock_map_link_lookup().
    - fix a few bugs.
    - add more tests.
  v2 -> v3:
    - consolidate the link types of sk_msg and sk_skb into
      a single link type, BPF_LINK_TYPE_SOCKMAP.
    - fix a bpf_link lifecycle issue. In v2, after a bpf_link
      was attached, a subsequent prog_attach could change
      that bpf_link. This patch gives the bpf_link correct
      behavior such that it won't go away when another
      prog/link attach happens for the same map and the same
      attach type.
====================

Link: https://lore.kernel.org/r/20240410043522.3736912-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: Add some tests with new bpf_program__attach_sockmap() APIs
Yonghong Song [Wed, 10 Apr 2024 04:35:47 +0000 (21:35 -0700)]
selftests/bpf: Add some tests with new bpf_program__attach_sockmap() APIs

Add a few more tests in sockmap_basic.c and sockmap_listen.c to
test the bpf_link based APIs for SK_MSG and SK_SKB programs.
Link attach/detach/update are all tested.

All tests pass.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043547.3738448-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: Refactor out helper functions for a few tests
Yonghong Song [Wed, 10 Apr 2024 04:35:42 +0000 (21:35 -0700)]
selftests/bpf: Refactor out helper functions for a few tests

These helper functions will be used by new tests added later as well.
There is no functionality change.

Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043542.3738166-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpftool: Add link dump support for BPF_LINK_TYPE_SOCKMAP
Yonghong Song [Wed, 10 Apr 2024 04:35:37 +0000 (21:35 -0700)]
bpftool: Add link dump support for BPF_LINK_TYPE_SOCKMAP

An example output looks like:
  $ bpftool link
    1776: sk_skb  prog 49730
            map_id 0  attach_type sk_skb_verdict
            pids test_progs(8424)
    1777: sk_skb  prog 49755
            map_id 0  attach_type sk_skb_stream_verdict
            pids test_progs(8424)
    1778: sk_msg  prog 49770
            map_id 8208  attach_type sk_msg_verdict
            pids test_progs(8424)

Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Reviewed-by: Quentin Monnet <qmo@kernel.org>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043537.3737928-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agolibbpf: Add bpf_link support for BPF_PROG_TYPE_SOCKMAP
Yonghong Song [Wed, 10 Apr 2024 04:35:32 +0000 (21:35 -0700)]
libbpf: Add bpf_link support for BPF_PROG_TYPE_SOCKMAP

Introduce a libbpf API function bpf_program__attach_sockmap()
which allows users to get a bpf_link for their corresponding programs.
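
A minimal usage sketch (the prog and map_fd variables are assumptions;
error handling abbreviated):

  struct bpf_link *link;

  /* prog is a loaded SK_MSG or SK_SKB program; map_fd is the fd of
   * the corresponding sockmap/sockhash. */
  link = bpf_program__attach_sockmap(prog, map_fd);
  if (!link)
          return -errno; /* attach APIs return NULL and set errno */

  /* detaching is now just destroying the link */
  bpf_link__destroy(link);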

Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043532.3737722-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: Add bpf_link support for sk_msg and sk_skb progs
Yonghong Song [Wed, 10 Apr 2024 04:35:27 +0000 (21:35 -0700)]
bpf: Add bpf_link support for sk_msg and sk_skb progs

Add bpf_link support for sk_msg and sk_skb programs. We have an
internal request to support bpf_link for sk_msg programs so user
space can have uniform handling with the bpf_link based libbpf APIs.
Using the bpf_link based libbpf API also has the benefit of making the
system more robust by decoupling the prog life cycle from the
attachment life cycle.

Reviewed-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240410043527.3737160-1-yonghong.song@linux.dev
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: Add tests for atomics in bpf_arena.
Alexei Starovoitov [Fri, 5 Apr 2024 23:11:34 +0000 (16:11 -0700)]
selftests/bpf: Add tests for atomics in bpf_arena.

Add selftests for atomic instructions in bpf_arena.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240405231134.17274-2-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months agobpf: Add support for certain atomics in bpf_arena to x86 JIT
Alexei Starovoitov [Fri, 5 Apr 2024 23:11:33 +0000 (16:11 -0700)]
bpf: Add support for certain atomics in bpf_arena to x86 JIT

Support atomics in bpf_arena that can be JITed as a single x86 instruction.
Instructions that are JITed as loops are not supported at the moment,
since they require more complex extable and loop logic.

JITs can choose to do smarter things with bpf_jit_supports_insn().
For example, arm64 may decide to support all bpf atomic instructions
when emit_lse_atomic is available, and none in ll_sc mode.

bpf_jit_supports_percpu_insn(), bpf_jit_supports_ptr_xchg() and
other such callbacks can be replaced with bpf_jit_supports_insn()
in the future.
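
A minimal sketch of how a JIT might implement such a callback (the
opcode/imm checks below are illustrative, not the exact x86 logic):

  bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
  {
          if (!in_arena)
                  return true;
          switch (insn->code) {
          case BPF_STX | BPF_ATOMIC | BPF_W:
          case BPF_STX | BPF_ATOMIC | BPF_DW:
                  /* reject atomics that would have to be JITed as loops */
                  if (insn->imm == (BPF_AND | BPF_FETCH) ||
                      insn->imm == (BPF_OR | BPF_FETCH) ||
                      insn->imm == (BPF_XOR | BPF_FETCH))
                          return false;
          }
          return true;
  }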

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Eduard Zingerman <eddyz87@gmail.com>
Link: https://lore.kernel.org/r/20240405231134.17274-1-alexei.starovoitov@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months agoselftests/bpf: eliminate warning of get_cgroup_id_from_path()
Jason Xing [Sat, 6 Apr 2024 14:46:13 +0000 (22:46 +0800)]
selftests/bpf: eliminate warning of get_cgroup_id_from_path()

The output goes like this if I make samples/bpf:
...warning: no previous prototype for ‘get_cgroup_id_from_path’...

Making this function static solves the warning, since no one outside
of this file calls it.

Signed-off-by: Jason Xing <kernelxing@tencent.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240406144613.4434-1-kerneljasonxing@gmail.com
Signed-off-by: Martin KaFai Lau <martin.lau@kernel.org>
3 months agoMerge branch 'libbpf-api-to-partially-consume-items-from-ringbuffer'
Andrii Nakryiko [Sat, 6 Apr 2024 16:11:11 +0000 (09:11 -0700)]
Merge branch 'libbpf-api-to-partially-consume-items-from-ringbuffer'

Andrea Righi says:

====================
libbpf: API to partially consume items from ringbuffer

Introduce ring__consume_n() and ring_buffer__consume_n() API to
partially consume items from one (or more) ringbuffer(s).

This can be useful, for example, to consume just a single item or when
we need to copy multiple items to a limited user-space buffer from the
ringbuffer callback.

Practical example (where this API can be used):
https://github.com/sched-ext/scx/blob/b7c06b9ed9f72cad83c31e39e9c4e2cfd8683a55/rust/scx_rustland_core/src/bpf.rs#L217

See also:
https://lore.kernel.org/lkml/20240310154726.734289-1-andrea.righi@canonical.com/T/#u

v4:
 - open a new 1.5.0 cycle

v3:
 - rename ring__consume_max() -> ring__consume_n() and
   ring_buffer__consume_max() -> ring_buffer__consume_n()
 - add new API to a new 1.5.0 cycle
 - fixed minor nits / comments

v2:
 - introduce a new API instead of changing the callback's retcode
   behavior
====================

Link: https://lore.kernel.org/r/20240406092005.92399-1-andrea.righi@canonical.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
3 months agolibbpf: Add ring__consume_n / ring_buffer__consume_n
Andrea Righi [Sat, 6 Apr 2024 09:15:43 +0000 (11:15 +0200)]
libbpf: Add ring__consume_n / ring_buffer__consume_n

Introduce a new API to consume items from a ring buffer, limited to a
specified amount, and return to the caller the actual number of items
consumed.
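
A minimal usage sketch (ring buffer setup elided; the batch size of
32 is arbitrary):

  /* Consume at most 32 items across the rings attached to rb;
   * returns the number of items consumed or a negative error. */
  int n = ring_buffer__consume_n(rb, 32);
  if (n < 0)
          fprintf(stderr, "consume failed: %d\n", n);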

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/lkml/20240310154726.734289-1-andrea.righi@canonical.com/T
Link: https://lore.kernel.org/bpf/20240406092005.92399-4-andrea.righi@canonical.com
3 months agolibbpf: ringbuf: Allow to consume up to a certain amount of items
Andrea Righi [Sat, 6 Apr 2024 09:15:42 +0000 (11:15 +0200)]
libbpf: ringbuf: Allow to consume up to a certain amount of items

In some cases, instead of always consuming all items from ring buffers
in a greedy way, we may want to consume up to a certain amount of items,
for example when we need to copy items from the BPF ring buffer to a
limited user buffer.

This change allows setting an upper limit on the number of items
consumed from one or more ring buffers.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240406092005.92399-3-andrea.righi@canonical.com
3 months agolibbpf: Start v1.5 development cycle
Andrea Righi [Sat, 6 Apr 2024 09:15:41 +0000 (11:15 +0200)]
libbpf: Start v1.5 development cycle

Bump libbpf.map to v1.5.0 to start a new libbpf version cycle.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240406092005.92399-2-andrea.righi@canonical.com
3 months agoMerge branch 'bpf-allow-invoking-kfuncs-from-bpf_prog_type_syscall-progs'
Andrii Nakryiko [Fri, 5 Apr 2024 17:56:09 +0000 (10:56 -0700)]
Merge branch 'bpf-allow-invoking-kfuncs-from-bpf_prog_type_syscall-progs'

David Vernet says:

====================
bpf: Allow invoking kfuncs from BPF_PROG_TYPE_SYSCALL progs

Currently, a set of core BPF kfuncs (e.g. bpf_task_*, bpf_cgroup_*,
bpf_cpumask_*, etc.) cannot be invoked from BPF_PROG_TYPE_SYSCALL
programs. The whitelist approach taken for enabling kfuncs makes sense:
it is not safe to call these kfuncs from every program type. For
example, it may not be safe to call bpf_task_acquire() in an fentry to
free_task().

BPF_PROG_TYPE_SYSCALL, on the other hand, is a perfectly safe program
type from which to invoke these kfuncs, as it's a very controlled
environment, and we should never be able to run into any of the typical
problems such as recursive invocations, acquiring references on kptrs
that are being freed, etc. Being able to invoke these kfuncs would be
useful, as BPF_PROG_TYPE_SYSCALL can be invoked with BPF_PROG_RUN, and
would therefore enable user space programs to synchronously call into
BPF to manipulate these kptrs.
---

v1: https://lore.kernel.org/all/20240404010308.334604-1-void@manifault.com/
v1 -> v2:

- Create new verifier_kfunc_prog_types testcase meant to specifically
  validate calling core kfuncs from various program types. Remove the
  macros and testcases that had been added to the task, cgrp, and
  cpumask kfunc testcases (Andrii and Yonghong)
====================

Link: https://lore.kernel.org/r/20240405143041.632519-1-void@manifault.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
3 months agoselftests/bpf: Verify calling core kfuncs from BPF_PROG_TYPE_SYSCALL
David Vernet [Fri, 5 Apr 2024 14:30:41 +0000 (09:30 -0500)]
selftests/bpf: Verify calling core kfuncs from BPF_PROG_TYPE_SYSCALL

Now that we can call some kfuncs from BPF_PROG_TYPE_SYSCALL progs,
let's add some selftests that verify as much. As a bonus, let's also
verify that we can't call these kfuncs from raw tracepoint progs. To do
this, we add a new selftest suite called verifier_kfunc_prog_types.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240405143041.632519-3-void@manifault.com
3 months agobpf: Allow invoking kfuncs from BPF_PROG_TYPE_SYSCALL progs
David Vernet [Fri, 5 Apr 2024 14:30:40 +0000 (09:30 -0500)]
bpf: Allow invoking kfuncs from BPF_PROG_TYPE_SYSCALL progs

Currently, a set of core BPF kfuncs (e.g. bpf_task_*, bpf_cgroup_*,
bpf_cpumask_*, etc.) cannot be invoked from BPF_PROG_TYPE_SYSCALL
programs. The whitelist approach taken for enabling kfuncs makes sense:
it is not safe to call these kfuncs from every program type. For
example, it may not be safe to call bpf_task_acquire() in an fentry to
free_task().

BPF_PROG_TYPE_SYSCALL, on the other hand, is a perfectly safe program
type from which to invoke these kfuncs, as it's a very controlled
environment, and we should never be able to run into any of the typical
problems such as recursive invocations, acquiring references on kptrs
that are being freed, etc. Being able to invoke these kfuncs would be
useful, as BPF_PROG_TYPE_SYSCALL can be invoked with BPF_PROG_RUN, and
would therefore enable user space programs to synchronously call into
BPF to manipulate these kptrs.

This patch therefore enables invoking the aforementioned core kfuncs
from BPF_PROG_TYPE_SYSCALL progs.
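
A minimal sketch of such a program, assuming the usual selftest
conventions (kfunc prototypes declared via __ksym; includes elided):

  struct bpf_cpumask *bpf_cpumask_create(void) __ksym;
  void bpf_cpumask_release(struct bpf_cpumask *cpumask) __ksym;

  SEC("syscall")
  int kfuncs_from_syscall(void *ctx)
  {
          struct bpf_cpumask *mask;

          /* a core kfunc that was previously not callable from
           * BPF_PROG_TYPE_SYSCALL programs */
          mask = bpf_cpumask_create();
          if (!mask)
                  return 1;
          bpf_cpumask_release(mask);
          return 0;
  }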

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/bpf/20240405143041.632519-2-void@manifault.com
3 months agobpf, docs: Editorial nits in instruction-set.rst
Dave Thaler [Fri, 5 Apr 2024 15:52:45 +0000 (08:52 -0700)]
bpf, docs: Editorial nits in instruction-set.rst

This patch addresses a number of editorial nits including
spelling, punctuation, grammar, and wording consistency issues
in instruction-set.rst.

Signed-off-by: Dave Thaler <dthaler1968@gmail.com>
Acked-by: David Vernet <void@manifault.com>
Link: https://lore.kernel.org/r/20240405155245.3618-1-dthaler1968@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: Make sure libbpf doesn't enforce the signature of a func pointer.
Kui-Feng Lee [Thu, 4 Apr 2024 23:23:42 +0000 (16:23 -0700)]
selftests/bpf: Make sure libbpf doesn't enforce the signature of a func pointer.

The verifier in the kernel ensures that the struct_ops operators behave
correctly by checking that they access parameters and context
appropriately. The verifier will approve a program as long as it
correctly accesses the context/parameters, regardless of its function
signature. In contrast, libbpf should not verify the signatures of
function pointers and functions, in order to allow the flexibility of
loading various implementations of an operator even if the signature of
the function pointer does not match the one in the implementations or
the kernel.

With this flexibility, user space applications can adapt to different
kernel versions by loading a specific implementation of an operator based
on feature detection.

This is a follow-up to commit c911fc61a7ce ("libbpf: Skip zeroed or
null fields if not found in the kernel type.")

Signed-off-by: Kui-Feng Lee <thinker.li@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240404232342.991414-1-thinker.li@gmail.com
3 months agoMerge branch 'bpf-allow-bpf_for_each_map_elem-helper-with-different-input-maps'
Alexei Starovoitov [Fri, 5 Apr 2024 17:31:18 +0000 (10:31 -0700)]
Merge branch 'bpf-allow-bpf_for_each_map_elem-helper-with-different-input-maps'

Philo Lu says:

====================
bpf: allow bpf_for_each_map_elem() helper with different input maps

Currently, taking different maps within a single bpf_for_each_map_elem
call is not allowed. For example, the following code cannot pass the
verifier (with error "tail_call abusing map_ptr"):
```
static void test_by_pid(int pid)
{
	if (pid <= 100)
		bpf_for_each_map_elem(&map1, map_elem_cb, NULL, 0);
	else
		bpf_for_each_map_elem(&map2, map_elem_cb, NULL, 0);
}
```

This is because, during bpf_for_each_map_elem verification,
bpf_insn_aux_data->map_ptr_state is expected to hold a map_ptr (instead
of the poison state), which is then needed by
set_map_elem_callback_state. However, as there are two different map
ptr inputs, map_ptr_state is marked as BPF_MAP_PTR_POISON, and thus the
second map_ptr is lost. BPF_MAP_PTR_POISON is also needed by
bpf_for_each_map_elem to skip the retpoline optimization in
do_misc_fixups(). Therefore, map_ptr_state and map_ptr are both needed
for bpf_for_each_map_elem.

This patchset solves it by transforming
bpf_insn_aux_data->map_ptr_state into a new struct, storing the
poison/unpriv state and the map pointer together without additional
memory overhead. Then bpf_for_each_map_elem works well with different
input maps. It also makes the map_ptr_state logic clearer.

A test case is added to selftest, which would fail to load without this
patchset.

Changelogs
-> v1:
- PATCH 1/3:
  - make the commit log clearer
  - change poison and unpriv to bool in struct bpf_map_ptr_state, also the
    return value in bpf_map_ptr_poisoned() and bpf_map_ptr_unpriv()
- PATCH 2/3:
  - change the comments in set_map_elem_callback_state()
- PATCH 3/3:
  - remove the "skipping the last element" logic during map updating
  - change if() to ASSERT_OK()

Please review, thanks.
====================

Link: https://lore.kernel.org/r/20240405025536.18113-1-lulie@linux.alibaba.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: add test for bpf_for_each_map_elem() with different maps
Philo Lu [Fri, 5 Apr 2024 02:55:36 +0000 (10:55 +0800)]
selftests/bpf: add test for bpf_for_each_map_elem() with different maps

A test is added for bpf_for_each_map_elem() with either an arraymap or a
hashmap.
$ tools/testing/selftests/bpf/test_progs -t for_each
 #93/1    for_each/hash_map:OK
 #93/2    for_each/array_map:OK
 #93/3    for_each/write_map_key:OK
 #93/4    for_each/multi_maps:OK
 #93      for_each:OK
Summary: 1/4 PASSED, 0 SKIPPED, 0 FAILED

Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240405025536.18113-4-lulie@linux.alibaba.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: allow invoking bpf_for_each_map_elem with different maps
Philo Lu [Fri, 5 Apr 2024 02:55:35 +0000 (10:55 +0800)]
bpf: allow invoking bpf_for_each_map_elem with different maps

Taking different maps within a single bpf_for_each_map_elem call was
not allowed before, because starting from the second map,
bpf_insn_aux_data->map_ptr_state would be marked as *poison*. In fact,
both map_ptr and the state are needed to support this use case: map_ptr
is used by set_map_elem_callback_state() while the poison state is
needed to determine whether to use a direct call.

Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240405025536.18113-3-lulie@linux.alibaba.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: store both map ptr and state in bpf_insn_aux_data
Philo Lu [Fri, 5 Apr 2024 02:55:34 +0000 (10:55 +0800)]
bpf: store both map ptr and state in bpf_insn_aux_data

Currently, bpf_insn_aux_data->map_ptr_state is used to store either
the map_ptr or its poison state (i.e., BPF_MAP_PTR_POISON). Thus
BPF_MAP_PTR_POISON must be checked before reading map_ptr. In certain
cases, we may need a valid map_ptr even in the poison state.
This will be explained in the next patch with the
bpf_for_each_map_elem() helper.

This patch changes map_ptr_state into a new struct including both map
pointer and its state (poison/unpriv). It's in the same union with
struct bpf_loop_inline_state, so there is no extra memory overhead.
Besides, macros BPF_MAP_PTR_UNPRIV/BPF_MAP_PTR_POISON/BPF_MAP_PTR are no
longer needed.

This patch does not change any existing functionality.
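
A minimal sketch of the new struct implied by this description (field
names assumed):

  struct bpf_map_ptr_state {
          struct bpf_map *map_ptr;
          bool poison;
          bool unpriv;
  };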

Signed-off-by: Philo Lu <lulie@linux.alibaba.com>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240405025536.18113-2-lulie@linux.alibaba.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: fix perf_snapshot_branch_stack link failure
Arnd Bergmann [Fri, 5 Apr 2024 14:26:25 +0000 (16:26 +0200)]
bpf: fix perf_snapshot_branch_stack link failure

The newly added code to handle bpf_get_branch_snapshot fails to link when
CONFIG_PERF_EVENTS is disabled:

aarch64-linux-ld: kernel/bpf/verifier.o: in function `do_misc_fixups':
verifier.c:(.text+0x1090c): undefined reference to `__SCK__perf_snapshot_branch_stack'

Add a build-time check for that Kconfig symbol around the code to
remove the link time dependency.
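
A minimal sketch of the guard this implies (the surrounding inlining
code in do_misc_fixups() is elided; the exact condition shape is an
assumption):

  /* Reference the perf_snapshot_branch_stack static call only when
   * perf is built in, so the symbol exists at link time. */
  if (IS_ENABLED(CONFIG_PERF_EVENTS) &&
      insn->imm == BPF_FUNC_get_branch_snapshot) {
          /* ... emit the inlined instruction sequence ... */
  }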

Fixes: 314a53623cd4 ("bpf: inline bpf_get_branch_snapshot() helper")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Link: https://lore.kernel.org/r/20240405142637.577046-1-arnd@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoselftests/bpf: add fp-leaking precise subprog result tests
Andrii Nakryiko [Thu, 4 Apr 2024 21:45:36 +0000 (14:45 -0700)]
selftests/bpf: add fp-leaking precise subprog result tests

Add selftests validating that BPF verifier handles precision marking
for SCALAR registers derived from r10 (fp) register correctly.

Given `r0 = (s8)r10;` syntax is not supported by older Clang compilers,
use the raw BPF instruction syntax to maximize compatibility.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240404214536.3551295-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: prevent r10 register from being marked as precise
Andrii Nakryiko [Thu, 4 Apr 2024 21:45:35 +0000 (14:45 -0700)]
bpf: prevent r10 register from being marked as precise

r10 is a special register that is not under BPF program's control and is
always effectively precise. The rest of precision logic assumes that
only r0-r9 SCALAR registers are marked as precise, so prevent r10 from
being marked precise.

This can happen due to the signed cast instruction allowing to do
something like `r0 = (s8)r10;`, which later, if r0 needs to be precise,
would lead to an attempt to mark r10 as precise.

Prevent this with an extra check during instruction backtracking.

Fixes: 8100928c8814 ("bpf: Support new sign-extension mov insns")
Reported-by: syzbot+148110ee7cf72f39f33e@syzkaller.appspotmail.com
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240404214536.3551295-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: Pack struct bpf_fib_lookup
Anton Protopopov [Wed, 3 Apr 2024 12:33:03 +0000 (14:33 +0200)]
bpf: Pack struct bpf_fib_lookup

The struct bpf_fib_lookup is supposed to be of size 64. A recent commit
59b418c7063d ("bpf: Add a check for struct bpf_fib_lookup size") added
a static assertion to check this property so that future changes to the
structure will not accidentally break this assumption.

As it immediately turned out, on some 32-bit arm systems, when AEABI=n,
the total size of the structure was equal to 68, see [1]. This happened
because the bpf_fib_lookup structure contains a union of two 16-bit
fields:

    union {
            __u16 tot_len;
            __u16 mtu_result;
    };

which was supposed to compile to a 16-bit-aligned 16-bit field. On the
aforementioned setups it was instead both aligned and padded to 32-bits.

Declare this inner union as __attribute__((packed, aligned(2))) such
that it is always of size 2 and is aligned to 16 bits.
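
Per the description above, the fixed declaration looks like:

    union {
            __u16 tot_len;
            __u16 mtu_result;
    } __attribute__((packed, aligned(2)));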

  [1] https://lore.kernel.org/all/CA+G9fYtsoP51f-oP_Sp5MOq-Ffv8La2RztNpwvE6+R1VtFiLrw@mail.gmail.com/#t

Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Fixes: e1850ea9bd9e ("bpf: bpf_fib_lookup return MTU value as output when looked up")
Signed-off-by: Anton Protopopov <aspsk@isovalent.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240403123303.1452184-1-aspsk@isovalent.com
3 months agobpftool: Mount bpffs on provided dir instead of parent dir
Sahil Siddiq [Thu, 4 Apr 2024 19:22:19 +0000 (00:52 +0530)]
bpftool: Mount bpffs on provided dir instead of parent dir

When pinning programs/objects under PATH (e.g. during "bpftool prog
loadall"), the bpffs is mounted on the parent dir of PATH in the
following situations:
- the given dir exists but is not bpffs.
- the given dir doesn't exist and the parent dir is not bpffs.

Mounting on the parent dir can also have the unintentional side-effect
of hiding other files located under the parent dir.

If the given dir exists but is not bpffs, then the bpffs should
be mounted on the given dir and not its parent dir.

Similarly, if the given dir doesn't exist and its parent dir is not
bpffs, then the given dir should be created and the bpffs should be
mounted on this new dir.

Fixes: 2a36c26fe3b8 ("bpftool: Support bpffs mountpoint as pin path for prog loadall")
Signed-off-by: Sahil Siddiq <icegambit91@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/2da44d24-74ae-a564-1764-afccf395eeec@isovalent.com/T/#t
Link: https://lore.kernel.org/bpf/20240404192219.52373-1-icegambit91@gmail.com
Closes: https://github.com/libbpf/bpftool/issues/100

Changes since v1:
 - Split "mount_bpffs_for_pin" into two functions.
   This is done to improve maintainability and readability.

Changes since v2:
- mount_bpffs_for_pin: rename to "create_and_mount_bpffs_dir".
- mount_bpffs_given_file: rename to "mount_bpffs_for_file".
- create_and_mount_bpffs_dir:
  - introduce "dir_exists" boolean.
  - remove new dir if "mnt_fs" fails.
- improve error handling and error messages.

Changes since v3:
- Rectify function name.
- Improve error messages and formatting.
- mount_bpffs_for_file:
  - Check if dir exists before block_mount check.

Changes since v4:
- Use strdup instead of strcpy.
- create_and_mount_bpffs_dir:
  - Use S_IRWXU instead of 0700.
- Improve error handling and formatting.

3 months agoMerge branch 'inline-bpf_get_branch_snapshot-bpf-helper'
Alexei Starovoitov [Thu, 4 Apr 2024 20:08:01 +0000 (13:08 -0700)]
Merge branch 'inline-bpf_get_branch_snapshot-bpf-helper'

Andrii Nakryiko says:

====================
Inline bpf_get_branch_snapshot() BPF helper

Implement inlining of the bpf_get_branch_snapshot() BPF helper using
the generic BPF assembly approach. This allows reducing LBR record
usage right before LBR records are captured from inside the BPF
program.

See v1 cover letter ([0]) for some visual examples. I dropped them from v2
because there are multiple independent changes landing and being reviewed, all
of which remove different parts of LBR record waste, so presenting final state
of LBR "waste" gets more complicated until all of the pieces land.

  [0] https://lore.kernel.org/bpf/20240321180501.734779-1-andrii@kernel.org/

v2->v3:
  - fix BPF_MUL instruction definition;
v1->v2:
  - inlining of bpf_get_smp_processor_id() split out into a separate patch set
    implementing internal per-CPU BPF instruction;
  - add efficient divide-by-24 through multiplication logic, and leave
    comments to explain the idea behind it; this way the inlined
    version of bpf_get_branch_snapshot() has no compromises compared to
    the non-inlined version of the helper (Alexei).
====================

Link: https://lore.kernel.org/r/20240404002640.1774210-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: inline bpf_get_branch_snapshot() helper
Andrii Nakryiko [Thu, 4 Apr 2024 00:26:40 +0000 (17:26 -0700)]
bpf: inline bpf_get_branch_snapshot() helper

Inline bpf_get_branch_snapshot() helper using architecture-agnostic
inline BPF code which calls directly into underlying callback of
perf_snapshot_branch_stack static call. This callback is set early
during kernel initialization and is never updated or reset, so it's ok
to fetch actual implementation using static_call_query() and call
directly into it.

This change eliminates a full function call and saves one LBR entry
in PERF_SAMPLE_BRANCH_ANY LBR mode.

Acked-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240404002640.1774210-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf: make bpf_get_branch_snapshot() architecture-agnostic
Andrii Nakryiko [Thu, 4 Apr 2024 00:26:39 +0000 (17:26 -0700)]
bpf: make bpf_get_branch_snapshot() architecture-agnostic

perf_snapshot_branch_stack is set up in an architecture-agnostic way, so
there is no reason for BPF subsystem to keep track of which
architectures do support LBR or not. E.g., it looks like ARM64 might soon
get support for BRBE ([0]), which (with proper integration) should be
possible to utilize using this BPF helper.

perf_snapshot_branch_stack static call will point to
__static_call_return0() by default, which just returns zero, which will
lead to -ENOENT, as expected. So no need to guard anything here.

  [0] https://lore.kernel.org/linux-arm-kernel/20240125094119.2542332-1-anshuman.khandual@arm.com/

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Link: https://lore.kernel.org/r/20240404002640.1774210-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agobpf, riscv: Implement bpf_addr_space_cast instruction
Puranjay Mohan [Thu, 4 Apr 2024 11:42:03 +0000 (11:42 +0000)]
bpf, riscv: Implement bpf_addr_space_cast instruction

LLVM generates the bpf_addr_space_cast instruction while translating
pointers between the native (zero) address space and
__attribute__((address_space(N))). The addr_space=1 is reserved as the
bpf_arena address space.

rY = addr_space_cast(rX, 0, 1) is processed by the verifier and
converted to a normal 32-bit move: wY = wX

rY = addr_space_cast(rX, 1, 0) has to be converted by the JIT.

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Björn Töpel <bjorn@rivosinc.com>
Tested-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Pu Lehui <pulehui@huawei.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/bpf/20240404114203.105970-3-puranjay12@gmail.com
3 months agobpf, riscv: Implement PROBE_MEM32 pseudo instructions
Puranjay Mohan [Thu, 4 Apr 2024 11:42:02 +0000 (11:42 +0000)]
bpf, riscv: Implement PROBE_MEM32 pseudo instructions

Add support for [LDX | STX | ST], PROBE_MEM32, [B | H | W | DW]
instructions. They are similar to PROBE_MEM instructions with the
following differences:

- PROBE_MEM32 supports store.
- PROBE_MEM32 relies on the verifier to clear the upper 32 bits of the
  src/dst register.
- PROBE_MEM32 adds the 64-bit kern_vm_start address (which is stored in
  S7 in the prologue). Due to the bpf_arena construction, such an
  S7 + reg + off16 access is guaranteed to be within the arena virtual
  range, so no address check is needed at run-time.
- S11 is a free callee-saved register, so it is used to store kern_vm_start
- PROBE_MEM32 allows STX and ST. If they fault, the store is a nop.
  When LDX faults, the destination register is zeroed.

To support these on riscv, we do tmp = S7 + src/dst reg and then use
tmp2 as the new src/dst register. This allows us to reuse most of the
code for normal [LDX | STX | ST].

Signed-off-by: Puranjay Mohan <puranjay12@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Tested-by: Björn Töpel <bjorn@rivosinc.com>
Tested-by: Pu Lehui <pulehui@huawei.com>
Reviewed-by: Pu Lehui <pulehui@huawei.com>
Acked-by: Björn Töpel <bjorn@kernel.org>
Link: https://lore.kernel.org/bpf/20240404114203.105970-2-puranjay12@gmail.com
3 months agobpf: Optimize emit_mov_imm64().
Alexei Starovoitov [Mon, 1 Apr 2024 23:38:00 +0000 (16:38 -0700)]
bpf: Optimize emit_mov_imm64().

It turned out that bpf prog callback addresses, bpf prog addresses
used in bpf_trampoline, and 64-bit addresses in other cases can be
represented as a sign-extended 32-bit value.

According to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=82339
"Skylake has 0.64c throughput for mov r64, imm64, vs. 0.25 for mov r32, imm32."
So use shorter encoding and faster instruction when possible.

Special care is needed in jit_subprogs(), since bpf_pseudo_func()
instruction cannot change its size during the last step of JIT.
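
A minimal sketch of the encoding choice described above (helper names
such as is_uimm32(), is_simm32(), emit_mov_imm32() and emit_movabs()
are assumptions, not the exact JIT code):

  u64 imm64 = ((u64)imm32_hi << 32) | (u32)imm32_lo;

  if (is_uimm32(imm64))
          /* 5-byte mov r32, imm32; the CPU zero-extends to 64 bits */
          emit_mov_imm32(&prog, false, dst_reg, imm32_lo);
  else if (is_simm32(imm64))
          /* 7-byte mov r64, simm32; the CPU sign-extends to 64 bits */
          emit_mov_imm32(&prog, true, dst_reg, imm32_lo);
  else
          /* full 10-byte movabs, as before */
          emit_movabs(&prog, dst_reg, imm64);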

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/CAADnVQKFfpY-QZBrOU2CG8v2du8Lgyb7MNVmOZVK_yTyOdNbBA@mail.gmail.com
Link: https://lore.kernel.org/bpf/20240401233800.42737-1-alexei.starovoitov@gmail.com