perf trace: Split BPF skel code to util/bpf_trace_augment.c
author Namhyung Kim <namhyung@kernel.org>
Mon, 23 Jun 2025 22:57:21 +0000 (15:57 -0700)
committer Namhyung Kim <namhyung@kernel.org>
Thu, 26 Jun 2025 17:31:05 +0000 (10:31 -0700)
commit f6109fb6f5d7fb9403cecfc75302bbf47ed83b8d
tree d62a67e486f5198c47d6b64e641cb2153b42ad42
parent 2f5d370dec3f800b44bbf7b68875d521e0af43cd
perf trace: Split BPF skel code to util/bpf_trace_augment.c

And make builtin-trace.c less conditional.  Dummy functions will be
called when BUILD_BPF_SKEL=0 is used.  This makes builtin-trace.c
slightly smaller and simpler by removing the skeleton and its helpers.
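
As an illustration of the stub pattern (the helper names below are
assumptions for this sketch, not necessarily the ones in the commit),
util/trace_augment.h can expose real prototypes for BUILD_BPF_SKEL=1
and inline dummies otherwise:

    /* util/trace_augment.h -- minimal sketch */
    #include <errno.h>

    #ifdef HAVE_BPF_SKEL
    /* real implementations live in util/bpf_trace_augment.c */
    int augmented_syscalls__prepare(void);
    void augmented_syscalls__cleanup(void);
    #else
    /* dummies called when BUILD_BPF_SKEL=0 */
    static inline int augmented_syscalls__prepare(void)
    {
            return -ENOTSUP;
    }

    static inline void augmented_syscalls__cleanup(void)
    {
    }
    #endif /* HAVE_BPF_SKEL */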

The conditional guard of trace__init_syscalls_bpf_prog_array_maps() is
changed from HAVE_BPF_SKEL to HAVE_LIBBPF_SUPPORT as the function does
not reference the skeleton directly.  And a dummy function is added so
that it can be called unconditionally.  The function will succeed only
if both conditions are true.
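
A hedged sketch of that dummy (struct trace is perf's internal type;
the real version under HAVE_LIBBPF_SUPPORT obtains its BPF map fds
through the trace_augment helpers, and those helpers fail when
BUILD_BPF_SKEL=0, hence the two conditions):

    /* builtin-trace.c -- sketch of the guard change */
    struct trace;

    #ifdef HAVE_LIBBPF_SUPPORT
    int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace);
    #else
    static inline int trace__init_syscalls_bpf_prog_array_maps(struct trace *trace)
    {
            (void)trace;  /* unused in the dummy */
            return -1;    /* always fail without libbpf */
    }
    #endif /* HAVE_LIBBPF_SUPPORT */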

Do not include trace_augment.h from the BPF code and move the
definition of TRACE_AUG_MAX_BUF into the BPF code directly.
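
For example, the constant can now be defined in the BPF source itself
(32 is the value the pre-split header used; treat it as illustrative
here):

    /* util/bpf_skel/augmented_raw_syscalls.bpf.c -- sketch */
    #define TRACE_AUG_MAX_BUF 32  /* was pulled in via trace_augment.h */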

Reviewed-by: Howard Chu <howardchu95@gmail.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: https://lore.kernel.org/r/20250623225721.21553-1-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
tools/perf/builtin-trace.c
tools/perf/util/Build
tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
tools/perf/util/bpf_trace_augment.c [new file with mode: 0644]
tools/perf/util/trace_augment.h