author     Linus Torvalds <torvalds@linux-foundation.org>  2022-03-24 13:13:26 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2022-03-24 13:13:26 -0700
commit     169e77764adc041b1dacba84ea90516a895d43b2 (patch)
tree       af7124681fa65d40fccee902af5194ab9f9c95f4 /tools/bpf/bpftool
parent     7403e6d8263937dea206dd201fed1ceed190ca18 (diff)
parent     89695196f0ba78a17453f9616355f2ca6b293402 (diff)
download   linux-169e77764adc041b1dacba84ea90516a895d43b2.tar.bz2
Merge tag 'net-next-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"The sprinkling of SPI drivers is because we added a new one and Mark
sent us a SPI driver interface conversion pull request.
Core
----
- Introduce XDP multi-buffer support, allowing the use of XDP with
jumbo frame MTUs and combination with Rx coalescing offloads (LRO).
(A minimal program sketch follows this list.)
- Speed up netns dismantling (5x) and lower the memory cost a little.
Remove unnecessary per-netns sockets. Scope some lists to a netns.
Cut down RCU syncing. Use batch methods. Allow netdev registration
to complete out of order.
- Support distinguishing timestamp types (ingress vs egress) and
maintaining them across packet scrubbing points (e.g. redirect).
- Continue the work of annotating packet drop reasons throughout the
stack.
- Switch netdev error counters from an atomic to dynamically
allocated per-CPU counters.
- Rework a few preempt_disable(), local_irq_save() and busy waiting
sections problematic on PREEMPT_RT.
- Extend the ref_tracker to allow catching use-after-free bugs.
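A minimal illustrative sketch for the XDP multi-buffer item above, assuming
the "xdp.frags" section name understood by a libbpf from this cycle and the
new bpf_xdp_get_buff_len() helper:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    /* "xdp.frags" makes libbpf load the program with BPF_F_XDP_HAS_FRAGS,
     * declaring it safe to run on multi-buffer (jumbo MTU / LRO) packets.
     */
    SEC("xdp.frags")
    int xdp_mb_pass(struct xdp_md *ctx)
    {
        /* length of the linear part plus all fragments */
        __u64 len = bpf_xdp_get_buff_len(ctx);

        return len ? XDP_PASS : XDP_DROP;
    }

    char LICENSE[] SEC("license") = "GPL";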
BPF
---
- Introduce "packing allocator" for BPF JIT images. JITed code is
marked read only, and used to be allocated at page granularity.
Custom allocator allows for more efficient memory use, lower iTLB
pressure and prevents identity mapping huge pages from getting
split.
- Make use of BTF type annotations (e.g. __user, __percpu) to enforce
the correct probe read access method, add appropriate helpers.
- Convert the BPF preload to use light skeleton and drop the
user-mode-driver dependency.
- Allow XDP BPF_PROG_RUN test infra to send real packets, enabling
its use as a packet generator.
- Allow local storage memory to be allocated with GFP_KERNEL if
called from a hook allowed to sleep.
- Introduce fprobe (multi kprobe) to speed up mass attachment (arch
bits to come later). (An attach sketch follows this list.)
- Add unstable conntrack lookup helpers for BPF by using the BPF
kfunc infra.
- Allow cgroup BPF progs to return custom errors to user space.
- Add support for AF_UNIX iterator batching.
- Allow iterator programs to use sleepable helpers.
- Support JIT of add, and, or, xor and xchg atomic ops on arm64 (an
atomics sketch follows this list).
- Add BTFGen support to bpftool, which allows using CO-RE in kernels
without BTF info.
- Large number of libbpf API improvements, cleanups and deprecations.
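A minimal sketch of multi-kprobe attachment, assuming the "kprobe.multi"
section name that libbpf gains in this cycle; the glob lets one
fprobe-backed link cover every matching kernel symbol at once:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    SEC("kprobe.multi/tcp_send*")
    int BPF_PROG(trace_tcp_send)
    {
        /* one link serves every matched symbol; bpf_get_func_ip()
         * reports which function actually fired */
        bpf_printk("hit %lx", bpf_get_func_ip(ctx));
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";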
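The arm64 atomics item needs nothing new on the BPF side: the usual
compiler builtins already lower to BPF_ATOMIC instructions, which the
arm64 JIT can now translate natively. A minimal sketch:

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    static __u64 counter;

    SEC("xdp")
    int count_packets(struct xdp_md *ctx)
    {
        /* compiles to a BPF_ATOMIC | BPF_ADD instruction */
        __sync_fetch_and_add(&counter, 1);
        return XDP_PASS;
    }

    char LICENSE[] SEC("license") = "GPL";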
Protocols
---------
- Micro-optimize UDPv6 Tx, gaining up to 5% in tests on a dummy netdev.
- Adjust TSO packet sizes based on min_rtt, allowing very low latency
links (data centers) to always send full-sized TSO super-frames.
- Make IPv6 flow label changes (AKA hash rethink) more configurable,
via sysctl and setsockopt. Distinguish between server and client
behavior. (A setsockopt sketch follows this list.)
- VxLAN support for "collect metadata" devices to terminate only
configured VNIs. This is similar to VLAN filtering in the bridge.
- Support inserting IPv6 IOAM information to a fraction of frames.
- Add a protocol attribute to IP addresses to allow identifying where
a given address comes from (kernel-generated, DHCP, etc.)
- Support setting socket and IPv6 options via cmsg on ping6 sockets.
- Reject mis-use of ECN bits in IP headers as part of DSCP/TOS.
Define dscp_t and stop taking ECN bits into account in fib-rules.
- Add support for locked bridge ports (for 802.1X).
- tun: support NAPI for packets received from batched XDP buffs,
doubling the performance in some scenarios.
- IPv6 extension header handling in Open vSwitch.
- Support IPv6 control message load balancing in bonding, prevent
neighbor solicitation and advertisement from using the wrong port.
Support NS/NA monitor selection similar to existing ARP monitor.
- SMC
- improve performance with TCP_CORK and sendfile()
- support auto-corking
- support TCP_NODELAY
- MCTP (Management Component Transport Protocol)
- add user space tag control interface
- I2C binding driver (as specified by DMTF DSP0237)
- Multi-BSSID beacon handling in AP mode for WiFi.
- Bluetooth:
- handle MSFT Monitor Device Event
- add MGMT Adv Monitor Device Found/Lost events
- Multi-Path TCP:
- add support for the SO_SNDTIMEO socket option
- lots of selftest cleanups and improvements
- Increase the max PDU size in CAN ISOTP to 64 kB.
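A minimal sketch of the per-socket knob for the flow-label "hash rethink"
item above, assuming the SO_TXREHASH option and SOCK_TXREHASH_* values
from this cycle's uapi headers (older libc headers may not define them):

    #include <sys/socket.h>

    /* Pin the flow label / tx hash for this socket instead of letting
     * the kernel rehash it on negative routing feedback. */
    static int disable_txrehash(int fd)
    {
        int val = SOCK_TXREHASH_DISABLED;

        return setsockopt(fd, SOL_SOCKET, SO_TXREHASH, &val, sizeof(val));
    }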
Driver API
----------
- Add HW counters for SW netdevs, a mechanism for devices which
offload packet forwarding to report packet statistics back to
software interfaces such as tunnels.
- Select the default NIC queue count as a fraction of number of
physical CPU cores, instead of hard-coding to 8.
- Expose devlink instance locks to drivers. Allow the device layer of
drivers to use that lock directly instead of creating their own,
which always runs into ordering issues in devlink callbacks.
- Add header/data split indication to guide user space enabling of
TCP zero-copy Rx.
- Allow configuring completion queue event size.
- Refactor page_pool to enable fragmenting after allocation.
- Add allocation and page reuse statistics to page_pool (see the
driver-side sketch after this list).
- Improve Multiple Spanning Trees support in the bridge to allow
reuse of topologies across VLANs, saving HW resources in switches.
- DSA (Distributed Switch Architecture):
- replay and offload of host VLAN entries
- offload of static and local FDB entries on LAG interfaces
- FDB isolation and unicast filtering
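A minimal driver-side sketch for the page_pool statistics item above,
assuming the page_pool_get_stats() API and its CONFIG_PAGE_POOL_STATS
guard introduced this cycle:

    #include <net/page_pool.h>

    /* Pull allocation counters out of an Rx queue's pool, e.g. to feed
     * ethtool -S; the counters exist only when the kernel is built
     * with CONFIG_PAGE_POOL_STATS. */
    static u64 rxq_fast_path_allocs(struct page_pool *pool)
    {
    #ifdef CONFIG_PAGE_POOL_STATS
        struct page_pool_stats stats = {};

        if (page_pool_get_stats(pool, &stats))
            return stats.alloc_stats.fast;
    #endif
        return 0;
    }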
New hardware / drivers
----------------------
- Ethernet:
- LAN937x T1 PHYs
- Davicom DM9051 SPI NIC driver
- Realtek RTL8367S, RTL8367RB-VB switch and MDIO
- Microchip ksz8563 switches
- Netronome NFP3800 SmartNICs
- Fungible SmartNICs
- MediaTek MT8195 switches
- WiFi:
- mt76: MediaTek mt7916
- mt76: MediaTek mt7921u USB adapters
- brcmfmac: Broadcom BCM43454/6
- Mobile:
- iosm: Intel M.2 7360 WWAN card
Drivers
-------
- Convert many drivers to the new phylink API, built for split PCS
designs but also simplifying other cases.
- Intel Ethernet NICs:
- add TTY for GNSS module for E810T device
- improve AF_XDP performance
- GTP-C and GTP-U filter offload
- QinQ VLAN support
- Mellanox Ethernet NICs (mlx5):
- support xdp->data_meta
- multi-buffer XDP
- offload tc push_eth and pop_eth actions
- Netronome Ethernet NICs (nfp):
- flow-independent tc action hardware offload (police / meter)
- AF_XDP
- Other Ethernet NICs:
- at803x: fiber and SFP support
- xgmac: mdio: preamble suppression and custom MDC frequencies
- r8169: enable ASPM L1.2 if system vendor flags it as safe
- macb/gem: ZynqMP SGMII
- hns3: add TX push mode
- dpaa2-eth: software TSO
- lan743x: multi-queue, mdio, SGMII, PTP
- axienet: NAPI and GRO support
- Mellanox Ethernet switches (mlxsw):
- source and dest IP address rewrites
- RJ45 ports
- Marvell Ethernet switches (prestera):
- basic routing offload
- multi-chain TC ACL offload
- NXP embedded Ethernet switches (ocelot & felix):
- PTP over UDP with the ocelot-8021q DSA tagging protocol
- basic QoS classification on Felix DSA switch using dcbnl
- port mirroring for ocelot switches
- Microchip high-speed industrial Ethernet (sparx5):
- offloading of bridge port flooding flags
- PTP Hardware Clock
- Other embedded switches:
- lan966x: PTP Hardware Clock
- qca8k: mdio read/write operations via crafted Ethernet packets
- Qualcomm 802.11ax WiFi (ath11k):
- add LDPC FEC type and 802.11ax High Efficiency data in radiotap
- enable RX PPDU stats in monitor co-exist mode
- Intel WiFi (iwlwifi):
- UHB TAS enablement via BIOS
- band disablement via BIOS
- channel switch offload
- 32 Rx AMPDU sessions in newer devices
- MediaTek WiFi (mt76):
- background radar detection
- thermal management improvements on mt7915
- SAR support for more mt76 platforms
- MBSSID and 6 GHz band on mt7915
- RealTek WiFi:
- rtw89: AP mode
- rtw89: 160 MHz channels and 6 GHz band
- rtw89: hardware scan
- Bluetooth:
- mt7921s: wake on Bluetooth, SCO over I2S, wide-band speech (WBS)
- Microchip CAN (mcp251xfd):
- multiple RX-FIFOs and runtime configurable RX/TX rings
- internal PLL, runtime PM handling simplification
- improve chip detection and error handling after wakeup"
* tag 'net-next-5.18' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2521 commits)
llc: fix netdevice reference leaks in llc_ui_bind()
drivers: ethernet: cpsw: fix panic when interrupt coaleceing is set via ethtool
ice: don't allow to run ice_send_event_to_aux() in atomic ctx
ice: fix 'scheduling while atomic' on aux critical err interrupt
net/sched: fix incorrect vlan_push_eth dest field
net: bridge: mst: Restrict info size queries to bridge ports
net: marvell: prestera: add missing destroy_workqueue() in prestera_module_init()
drivers: net: xgene: Fix regression in CRC stripping
net: geneve: add missing netlink policy and size for IFLA_GENEVE_INNER_PROTO_INHERIT
net: dsa: fix missing host-filtered multicast addresses
net/mlx5e: Fix build warning, detected write beyond size of field
iwlwifi: mvm: Don't fail if PPAG isn't supported
selftests/bpf: Fix kprobe_multi test.
Revert "rethook: x86: Add rethook x86 implementation"
Revert "arm64: rethook: Add arm64 rethook implementation"
Revert "powerpc: Add rethook support"
Revert "ARM: rethook: Add rethook arm implementation"
netdevice: add missing dm_private kdoc
net: bridge: mst: prevent NULL deref in br_mst_info_size()
selftests: forwarding: Use same VRF for port and VLAN upper
...
Diffstat (limited to 'tools/bpf/bpftool')
-rw-r--r--   tools/bpf/bpftool/Documentation/bpftool-gen.rst     |  115
-rw-r--r--   tools/bpf/bpftool/Documentation/bpftool.rst         |   13
-rw-r--r--   tools/bpf/bpftool/Documentation/common_options.rst  |   13
-rw-r--r--   tools/bpf/bpftool/Makefile                          |   34
-rw-r--r--   tools/bpf/bpftool/bash-completion/bpftool           |   18
-rw-r--r--   tools/bpf/bpftool/btf.c                             |    2
-rw-r--r--   tools/bpf/bpftool/cgroup.c                          |    6
-rw-r--r--   tools/bpf/bpftool/common.c                          |   46
-rw-r--r--   tools/bpf/bpftool/feature.c                         |  141
-rw-r--r--   tools/bpf/bpftool/gen.c                             | 1367
-rw-r--r--   tools/bpf/bpftool/link.c                            |    3
-rw-r--r--   tools/bpf/bpftool/main.c                            |   31
-rw-r--r--   tools/bpf/bpftool/main.h                            |    8
-rw-r--r--   tools/bpf/bpftool/map.c                             |   44
-rw-r--r--   tools/bpf/bpftool/net.c                             |    2
-rw-r--r--   tools/bpf/bpftool/pids.c                            |   11
-rw-r--r--   tools/bpf/bpftool/prog.c                            |   52
-rw-r--r--   tools/bpf/bpftool/skeleton/pid_iter.bpf.c           |   22
-rw-r--r--   tools/bpf/bpftool/skeleton/pid_iter.h               |    2
-rw-r--r--   tools/bpf/bpftool/struct_ops.c                      |    4
-rw-r--r--   tools/bpf/bpftool/xlated_dumper.c                   |    5
21 files changed, 1717 insertions, 222 deletions
diff --git a/tools/bpf/bpftool/Documentation/bpftool-gen.rst b/tools/bpf/bpftool/Documentation/bpftool-gen.rst index bc276388f432..68454ef28f58 100644 --- a/tools/bpf/bpftool/Documentation/bpftool-gen.rst +++ b/tools/bpf/bpftool/Documentation/bpftool-gen.rst @@ -25,6 +25,8 @@ GEN COMMANDS | **bpftool** **gen object** *OUTPUT_FILE* *INPUT_FILE* [*INPUT_FILE*...] | **bpftool** **gen skeleton** *FILE* [**name** *OBJECT_NAME*] +| **bpftool** **gen subskeleton** *FILE* [**name** *OBJECT_NAME*] +| **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...] | **bpftool** **gen help** DESCRIPTION @@ -149,6 +151,50 @@ DESCRIPTION (non-read-only) data from userspace, with same simplicity as for BPF side. + **bpftool gen subskeleton** *FILE* + Generate BPF subskeleton C header file for a given *FILE*. + + Subskeletons are similar to skeletons, except they do not own + the corresponding maps, programs, or global variables. They + require that the object file used to generate them is already + loaded into a *bpf_object* by some other means. + + This functionality is useful when a library is included into a + larger BPF program. A subskeleton for the library would have + access to all objects and globals defined in it, without + having to know about the larger program. + + Consequently, there are only two functions defined + for subskeletons: + + - **example__open(bpf_object\*)** + Instantiates a subskeleton from an already opened (but not + necessarily loaded) **bpf_object**. + + - **example__destroy()** + Frees the storage for the subskeleton but *does not* unload + any BPF programs or maps. + + **bpftool** **gen min_core_btf** *INPUT* *OUTPUT* *OBJECT* [*OBJECT*...] + Generate a minimum BTF file as *OUTPUT*, derived from a given + *INPUT* BTF file, containing all needed BTF types so one, or + more, given eBPF objects CO-RE relocations may be satisfied. + + When kernels aren't compiled with CONFIG_DEBUG_INFO_BTF, + libbpf, when loading an eBPF object, has to rely on external + BTF files to be able to calculate CO-RE relocations. + + Usually, an external BTF file is built from existing kernel + DWARF data using pahole. It contains all the types used by + its respective kernel image and, because of that, is big. + + The min_core_btf feature builds smaller BTF files, customized + to one or multiple eBPF objects, so they can be distributed + together with an eBPF CO-RE based application, turning the + application portable to different kernel versions. + + Check examples bellow for more information how to use it. + **bpftool gen help** Print short help message. @@ -215,7 +261,9 @@ This is example BPF application with two BPF programs and a mix of BPF maps and global variables. Source code is split across two source code files. **$ clang -target bpf -g example1.bpf.c -o example1.bpf.o** + **$ clang -target bpf -g example2.bpf.c -o example2.bpf.o** + **$ bpftool gen object example.bpf.o example1.bpf.o example2.bpf.o** This set of commands compiles *example1.bpf.c* and *example2.bpf.c* @@ -329,3 +377,70 @@ BPF ELF object file *example.bpf.o*. my_static_var: 7 This is a stripped-out version of skeleton generated for above example code. 
+ +min_core_btf +------------ + +**$ bpftool btf dump file 5.4.0-example.btf format raw** + +:: + + [1] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none) + [2] CONST '(anon)' type_id=1 + [3] VOLATILE '(anon)' type_id=1 + [4] ARRAY '(anon)' type_id=1 index_type_id=21 nr_elems=2 + [5] PTR '(anon)' type_id=8 + [6] CONST '(anon)' type_id=5 + [7] INT 'char' size=1 bits_offset=0 nr_bits=8 encoding=(none) + [8] CONST '(anon)' type_id=7 + [9] INT 'unsigned int' size=4 bits_offset=0 nr_bits=32 encoding=(none) + <long output> + +**$ bpftool btf dump file one.bpf.o format raw** + +:: + + [1] PTR '(anon)' type_id=2 + [2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=4 + 'ent' type_id=3 bits_offset=0 + 'id' type_id=7 bits_offset=64 + 'args' type_id=9 bits_offset=128 + '__data' type_id=12 bits_offset=512 + [3] STRUCT 'trace_entry' size=8 vlen=4 + 'type' type_id=4 bits_offset=0 + 'flags' type_id=5 bits_offset=16 + 'preempt_count' type_id=5 bits_offset=24 + <long output> + +**$ bpftool gen min_core_btf 5.4.0-example.btf 5.4.0-smaller.btf one.bpf.o** + +**$ bpftool btf dump file 5.4.0-smaller.btf format raw** + +:: + + [1] TYPEDEF 'pid_t' type_id=6 + [2] STRUCT 'trace_event_raw_sys_enter' size=64 vlen=1 + 'args' type_id=4 bits_offset=128 + [3] STRUCT 'task_struct' size=9216 vlen=2 + 'pid' type_id=1 bits_offset=17920 + 'real_parent' type_id=7 bits_offset=18048 + [4] ARRAY '(anon)' type_id=5 index_type_id=8 nr_elems=6 + [5] INT 'long unsigned int' size=8 bits_offset=0 nr_bits=64 encoding=(none) + [6] TYPEDEF '__kernel_pid_t' type_id=8 + [7] PTR '(anon)' type_id=3 + [8] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED + <end> + +Now, the "5.4.0-smaller.btf" file may be used by libbpf as an external BTF file +when loading the "one.bpf.o" object into the "5.4.0-example" kernel. Note that +the generated BTF file won't allow other eBPF objects to be loaded, just the +ones given to min_core_btf. + +:: + + LIBBPF_OPTS(bpf_object_open_opts, opts, .btf_custom_path = "5.4.0-smaller.btf"); + struct bpf_object *obj; + + obj = bpf_object__open_file("one.bpf.o", &opts); + + ... 
diff --git a/tools/bpf/bpftool/Documentation/bpftool.rst b/tools/bpf/bpftool/Documentation/bpftool.rst index 7084dd9fa2f8..6965c94dfdaf 100644 --- a/tools/bpf/bpftool/Documentation/bpftool.rst +++ b/tools/bpf/bpftool/Documentation/bpftool.rst @@ -20,7 +20,8 @@ SYNOPSIS **bpftool** **version** - *OBJECT* := { **map** | **program** | **cgroup** | **perf** | **net** | **feature** } + *OBJECT* := { **map** | **program** | **link** | **cgroup** | **perf** | **net** | **feature** | + **btf** | **gen** | **struct_ops** | **iter** } *OPTIONS* := { { **-V** | **--version** } | |COMMON_OPTIONS| } @@ -31,6 +32,8 @@ SYNOPSIS *PROG-COMMANDS* := { **show** | **list** | **dump jited** | **dump xlated** | **pin** | **load** | **attach** | **detach** | **help** } + *LINK-COMMANDS* := { **show** | **list** | **pin** | **detach** | **help** } + *CGROUP-COMMANDS* := { **show** | **list** | **attach** | **detach** | **help** } *PERF-COMMANDS* := { **show** | **list** | **help** } @@ -39,6 +42,14 @@ SYNOPSIS *FEATURE-COMMANDS* := { **probe** | **help** } + *BTF-COMMANDS* := { **show** | **list** | **dump** | **help** } + + *GEN-COMMANDS* := { **object** | **skeleton** | **min_core_btf** | **help** } + + *STRUCT-OPS-COMMANDS* := { **show** | **list** | **dump** | **register** | **unregister** | **help** } + + *ITER-COMMANDS* := { **pin** | **help** } + DESCRIPTION =========== *bpftool* allows for inspection and simple modification of BPF objects diff --git a/tools/bpf/bpftool/Documentation/common_options.rst b/tools/bpf/bpftool/Documentation/common_options.rst index 908487b9c2ad..4107a586b68b 100644 --- a/tools/bpf/bpftool/Documentation/common_options.rst +++ b/tools/bpf/bpftool/Documentation/common_options.rst @@ -4,12 +4,13 @@ Print short help message (similar to **bpftool help**). -V, --version - Print version number (similar to **bpftool version**), and optional - features that were included when bpftool was compiled. Optional - features include linking against libbfd to provide the disassembler - for JIT-ted programs (**bpftool prog dump jited**) and usage of BPF - skeletons (some features like **bpftool prog profile** or showing - pids associated to BPF objects may rely on it). + Print bpftool's version number (similar to **bpftool version**), the + number of the libbpf version in use, and optional features that were + included when bpftool was compiled. Optional features include linking + against libbfd to provide the disassembler for JIT-ted programs + (**bpftool prog dump jited**) and usage of BPF skeletons (some + features like **bpftool prog profile** or showing pids associated to + BPF objects may rely on it). -j, --json Generate JSON output. 
For commands that cannot produce JSON, this diff --git a/tools/bpf/bpftool/Makefile b/tools/bpf/bpftool/Makefile index 83369f55df61..9800f966fd51 100644 --- a/tools/bpf/bpftool/Makefile +++ b/tools/bpf/bpftool/Makefile @@ -18,37 +18,33 @@ BPF_DIR = $(srctree)/tools/lib/bpf ifneq ($(OUTPUT),) _OUTPUT := $(OUTPUT) else - _OUTPUT := $(CURDIR) + _OUTPUT := $(CURDIR)/ endif -BOOTSTRAP_OUTPUT := $(_OUTPUT)/bootstrap/ +BOOTSTRAP_OUTPUT := $(_OUTPUT)bootstrap/ -LIBBPF_OUTPUT := $(_OUTPUT)/libbpf/ +LIBBPF_OUTPUT := $(_OUTPUT)libbpf/ LIBBPF_DESTDIR := $(LIBBPF_OUTPUT) -LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)/include +LIBBPF_INCLUDE := $(LIBBPF_DESTDIR)include LIBBPF_HDRS_DIR := $(LIBBPF_INCLUDE)/bpf LIBBPF := $(LIBBPF_OUTPUT)libbpf.a LIBBPF_BOOTSTRAP_OUTPUT := $(BOOTSTRAP_OUTPUT)libbpf/ LIBBPF_BOOTSTRAP_DESTDIR := $(LIBBPF_BOOTSTRAP_OUTPUT) -LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)/include +LIBBPF_BOOTSTRAP_INCLUDE := $(LIBBPF_BOOTSTRAP_DESTDIR)include LIBBPF_BOOTSTRAP_HDRS_DIR := $(LIBBPF_BOOTSTRAP_INCLUDE)/bpf LIBBPF_BOOTSTRAP := $(LIBBPF_BOOTSTRAP_OUTPUT)libbpf.a -# We need to copy hashmap.h and nlattr.h which is not otherwise exported by -# libbpf, but still required by bpftool. -LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h) -LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h) - -ifeq ($(BPFTOOL_VERSION),) -BPFTOOL_VERSION := $(shell make -rR --no-print-directory -sC ../../.. kernelversion) -endif +# We need to copy hashmap.h, nlattr.h, relo_core.h and libbpf_internal.h +# which are not otherwise exported by libbpf, but still required by bpftool. +LIBBPF_INTERNAL_HDRS := $(addprefix $(LIBBPF_HDRS_DIR)/,hashmap.h nlattr.h relo_core.h libbpf_internal.h) +LIBBPF_BOOTSTRAP_INTERNAL_HDRS := $(addprefix $(LIBBPF_BOOTSTRAP_HDRS_DIR)/,hashmap.h relo_core.h libbpf_internal.h) $(LIBBPF_OUTPUT) $(BOOTSTRAP_OUTPUT) $(LIBBPF_BOOTSTRAP_OUTPUT) $(LIBBPF_HDRS_DIR) $(LIBBPF_BOOTSTRAP_HDRS_DIR): $(QUIET_MKDIR)mkdir -p $@ $(LIBBPF): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_OUTPUT) $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_OUTPUT) \ - DESTDIR=$(LIBBPF_DESTDIR) prefix= $(LIBBPF) install_headers + DESTDIR=$(LIBBPF_DESTDIR:/=) prefix= $(LIBBPF) install_headers $(LIBBPF_INTERNAL_HDRS): $(LIBBPF_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_HDRS_DIR) $(call QUIET_INSTALL, $@) @@ -56,7 +52,7 @@ $(LIBBPF_INTERNAL_HDRS): $(LIBBPF_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_HDRS_ $(LIBBPF_BOOTSTRAP): $(wildcard $(BPF_DIR)/*.[ch] $(BPF_DIR)/Makefile) | $(LIBBPF_BOOTSTRAP_OUTPUT) $(Q)$(MAKE) -C $(BPF_DIR) OUTPUT=$(LIBBPF_BOOTSTRAP_OUTPUT) \ - DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR) prefix= \ + DESTDIR=$(LIBBPF_BOOTSTRAP_DESTDIR:/=) prefix= \ ARCH= CROSS_COMPILE= CC=$(HOSTCC) LD=$(HOSTLD) $@ install_headers $(LIBBPF_BOOTSTRAP_INTERNAL_HDRS): $(LIBBPF_BOOTSTRAP_HDRS_DIR)/%.h: $(BPF_DIR)/%.h | $(LIBBPF_BOOTSTRAP_HDRS_DIR) @@ -83,7 +79,9 @@ CFLAGS += -DPACKAGE='"bpftool"' -D__EXPORTED_HEADERS__ \ -I$(srctree)/kernel/bpf/ \ -I$(srctree)/tools/include \ -I$(srctree)/tools/include/uapi +ifneq ($(BPFTOOL_VERSION),) CFLAGS += -DBPFTOOL_VERSION='"$(BPFTOOL_VERSION)"' +endif ifneq ($(EXTRA_CFLAGS),) CFLAGS += $(EXTRA_CFLAGS) endif @@ -95,7 +93,7 @@ INSTALL ?= install RM ?= rm -f FEATURE_USER = .bpftool -FEATURE_TESTS = libbfd disassembler-four-args reallocarray zlib libcap \ +FEATURE_TESTS = libbfd disassembler-four-args zlib libcap \ clang-bpf-co-re FEATURE_DISPLAY = libbfd disassembler-four-args zlib libcap \ clang-bpf-co-re @@ -120,10 +118,6 @@ ifeq 
($(feature-disassembler-four-args), 1) CFLAGS += -DDISASM_FOUR_ARGS_SIGNATURE endif -ifeq ($(feature-reallocarray), 0) -CFLAGS += -DCOMPAT_NEED_REALLOCARRAY -endif - LIBS = $(LIBBPF) -lelf -lz LIBS_BOOTSTRAP = $(LIBBPF_BOOTSTRAP) -lelf -lz ifeq ($(feature-libcap), 1) diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool index 493753a4962e..5df8d72c5179 100644 --- a/tools/bpf/bpftool/bash-completion/bpftool +++ b/tools/bpf/bpftool/bash-completion/bpftool @@ -1003,9 +1003,25 @@ _bpftool() ;; esac ;; + subskeleton) + case $prev in + $command) + _filedir + return 0 + ;; + *) + _bpftool_once_attr 'name' + return 0 + ;; + esac + ;; + min_core_btf) + _filedir + return 0 + ;; *) [[ $prev == $object ]] && \ - COMPREPLY=( $( compgen -W 'object skeleton help' -- "$cur" ) ) + COMPREPLY=( $( compgen -W 'object skeleton subskeleton help min_core_btf' -- "$cur" ) ) ;; esac ;; diff --git a/tools/bpf/bpftool/btf.c b/tools/bpf/bpftool/btf.c index 59833125ac0a..a2c665beda87 100644 --- a/tools/bpf/bpftool/btf.c +++ b/tools/bpf/bpftool/btf.c @@ -902,7 +902,7 @@ static int do_show(int argc, char **argv) equal_fn_for_key_as_id, NULL); btf_map_table = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL); - if (!btf_prog_table || !btf_map_table) { + if (IS_ERR(btf_prog_table) || IS_ERR(btf_map_table)) { hashmap__free(btf_prog_table); hashmap__free(btf_map_table); if (fd >= 0) diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c index 3571a281c43f..effe136119d7 100644 --- a/tools/bpf/bpftool/cgroup.c +++ b/tools/bpf/bpftool/cgroup.c @@ -50,6 +50,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type, const char *attach_flags_str, int level) { + char prog_name[MAX_PROG_FULL_NAME]; struct bpf_prog_info info = {}; __u32 info_len = sizeof(info); int prog_fd; @@ -63,6 +64,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type, return -1; } + get_prog_full_name(&info, prog_fd, prog_name, sizeof(prog_name)); if (json_output) { jsonw_start_object(json_wtr); jsonw_uint_field(json_wtr, "id", info.id); @@ -73,7 +75,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type, jsonw_uint_field(json_wtr, "attach_type", attach_type); jsonw_string_field(json_wtr, "attach_flags", attach_flags_str); - jsonw_string_field(json_wtr, "name", info.name); + jsonw_string_field(json_wtr, "name", prog_name); jsonw_end_object(json_wtr); } else { printf("%s%-8u ", level ? 
" " : "", info.id); @@ -81,7 +83,7 @@ static int show_bpf_prog(int id, enum bpf_attach_type attach_type, printf("%-15s", attach_type_name[attach_type]); else printf("type %-10u", attach_type); - printf(" %-15s %-15s\n", attach_flags_str, info.name); + printf(" %-15s %-15s\n", attach_flags_str, prog_name); } close(prog_fd); diff --git a/tools/bpf/bpftool/common.c b/tools/bpf/bpftool/common.c index fa8eb8134344..0c1e06cf50b9 100644 --- a/tools/bpf/bpftool/common.c +++ b/tools/bpf/bpftool/common.c @@ -24,6 +24,7 @@ #include <bpf/bpf.h> #include <bpf/hashmap.h> #include <bpf/libbpf.h> /* libbpf_num_possible_cpus */ +#include <bpf/btf.h> #include "main.h" @@ -55,7 +56,6 @@ const char * const attach_type_name[__MAX_BPF_ATTACH_TYPE] = { [BPF_CGROUP_UDP6_RECVMSG] = "recvmsg6", [BPF_CGROUP_GETSOCKOPT] = "getsockopt", [BPF_CGROUP_SETSOCKOPT] = "setsockopt", - [BPF_SK_SKB_STREAM_PARSER] = "sk_skb_stream_parser", [BPF_SK_SKB_STREAM_VERDICT] = "sk_skb_stream_verdict", [BPF_SK_SKB_VERDICT] = "sk_skb_verdict", @@ -75,6 +75,7 @@ const char * const attach_type_name[__MAX_BPF_ATTACH_TYPE] = { [BPF_SK_REUSEPORT_SELECT] = "sk_skb_reuseport_select", [BPF_SK_REUSEPORT_SELECT_OR_MIGRATE] = "sk_skb_reuseport_select_or_migrate", [BPF_PERF_EVENT] = "perf_event", + [BPF_TRACE_KPROBE_MULTI] = "trace_kprobe_multi", }; void p_err(const char *fmt, ...) @@ -304,6 +305,49 @@ const char *get_fd_type_name(enum bpf_obj_type type) return names[type]; } +void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd, + char *name_buff, size_t buff_len) +{ + const char *prog_name = prog_info->name; + const struct btf_type *func_type; + const struct bpf_func_info finfo = {}; + struct bpf_prog_info info = {}; + __u32 info_len = sizeof(info); + struct btf *prog_btf = NULL; + + if (buff_len <= BPF_OBJ_NAME_LEN || + strlen(prog_info->name) < BPF_OBJ_NAME_LEN - 1) + goto copy_name; + + if (!prog_info->btf_id || prog_info->nr_func_info == 0) + goto copy_name; + + info.nr_func_info = 1; + info.func_info_rec_size = prog_info->func_info_rec_size; + if (info.func_info_rec_size > sizeof(finfo)) + info.func_info_rec_size = sizeof(finfo); + info.func_info = ptr_to_u64(&finfo); + + if (bpf_obj_get_info_by_fd(prog_fd, &info, &info_len)) + goto copy_name; + + prog_btf = btf__load_from_kernel_by_id(info.btf_id); + if (!prog_btf) + goto copy_name; + + func_type = btf__type_by_id(prog_btf, finfo.type_id); + if (!func_type || !btf_is_func(func_type)) + goto copy_name; + + prog_name = btf__name_by_offset(prog_btf, func_type->name_off); + +copy_name: + snprintf(name_buff, buff_len, "%s", prog_name); + + if (prog_btf) + btf__free(prog_btf); +} + int get_fd_type(int fd) { char path[PATH_MAX]; diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c index e999159fa28d..c2f43a5d38e0 100644 --- a/tools/bpf/bpftool/feature.c +++ b/tools/bpf/bpftool/feature.c @@ -3,6 +3,7 @@ #include <ctype.h> #include <errno.h> +#include <fcntl.h> #include <string.h> #include <unistd.h> #include <net/if.h> @@ -45,6 +46,11 @@ static bool run_as_unprivileged; /* Miscellaneous utility functions */ +static bool grep(const char *buffer, const char *pattern) +{ + return !!strstr(buffer, pattern); +} + static bool check_procfs(void) { struct statfs st_fs; @@ -135,6 +141,32 @@ static void print_end_section(void) /* Probing functions */ +static int get_vendor_id(int ifindex) +{ + char ifname[IF_NAMESIZE], path[64], buf[8]; + ssize_t len; + int fd; + + if (!if_indextoname(ifindex, ifname)) + return -1; + + snprintf(path, sizeof(path), 
"/sys/class/net/%s/device/vendor", ifname); + + fd = open(path, O_RDONLY | O_CLOEXEC); + if (fd < 0) + return -1; + + len = read(fd, buf, sizeof(buf)); + close(fd); + if (len < 0) + return -1; + if (len >= (ssize_t)sizeof(buf)) + return -1; + buf[len] = '\0'; + + return strtol(buf, NULL, 0); +} + static int read_procfs(const char *path) { char *endptr, *line = NULL; @@ -478,6 +510,40 @@ static bool probe_bpf_syscall(const char *define_prefix) return res; } +static bool +probe_prog_load_ifindex(enum bpf_prog_type prog_type, + const struct bpf_insn *insns, size_t insns_cnt, + char *log_buf, size_t log_buf_sz, + __u32 ifindex) +{ + LIBBPF_OPTS(bpf_prog_load_opts, opts, + .log_buf = log_buf, + .log_size = log_buf_sz, + .log_level = log_buf ? 1 : 0, + .prog_ifindex = ifindex, + ); + int fd; + + errno = 0; + fd = bpf_prog_load(prog_type, NULL, "GPL", insns, insns_cnt, &opts); + if (fd >= 0) + close(fd); + + return fd >= 0 && errno != EINVAL && errno != EOPNOTSUPP; +} + +static bool probe_prog_type_ifindex(enum bpf_prog_type prog_type, __u32 ifindex) +{ + /* nfp returns -EINVAL on exit(0) with TC offload */ + struct bpf_insn insns[2] = { + BPF_MOV64_IMM(BPF_REG_0, 2), + BPF_EXIT_INSN() + }; + + return probe_prog_load_ifindex(prog_type, insns, ARRAY_SIZE(insns), + NULL, 0, ifindex); +} + static void probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types, const char *define_prefix, __u32 ifindex) @@ -487,8 +553,7 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types, size_t maxlen; bool res; - if (ifindex) - /* Only test offload-able program types */ + if (ifindex) { switch (prog_type) { case BPF_PROG_TYPE_SCHED_CLS: case BPF_PROG_TYPE_XDP: @@ -497,7 +562,11 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types, return; } - res = bpf_probe_prog_type(prog_type, ifindex); + res = probe_prog_type_ifindex(prog_type, ifindex); + } else { + res = libbpf_probe_bpf_prog_type(prog_type, NULL); + } + #ifdef USE_LIBCAP /* Probe may succeed even if program load fails, for unprivileged users * check that we did not fail because of insufficient permissions @@ -526,6 +595,26 @@ probe_prog_type(enum bpf_prog_type prog_type, bool *supported_types, define_prefix); } +static bool probe_map_type_ifindex(enum bpf_map_type map_type, __u32 ifindex) +{ + LIBBPF_OPTS(bpf_map_create_opts, opts); + int key_size, value_size, max_entries; + int fd; + + opts.map_ifindex = ifindex; + + key_size = sizeof(__u32); + value_size = sizeof(__u32); + max_entries = 1; + + fd = bpf_map_create(map_type, NULL, key_size, value_size, max_entries, + &opts); + if (fd >= 0) + close(fd); + + return fd >= 0; +} + static void probe_map_type(enum bpf_map_type map_type, const char *define_prefix, __u32 ifindex) @@ -535,7 +624,19 @@ probe_map_type(enum bpf_map_type map_type, const char *define_prefix, size_t maxlen; bool res; - res = bpf_probe_map_type(map_type, ifindex); + if (ifindex) { + switch (map_type) { + case BPF_MAP_TYPE_HASH: + case BPF_MAP_TYPE_ARRAY: + break; + default: + return; + } + + res = probe_map_type_ifindex(map_type, ifindex); + } else { + res = libbpf_probe_bpf_map_type(map_type, NULL); + } /* Probe result depends on the success of map creation, no additional * check required for unprivileged users @@ -559,6 +660,33 @@ probe_map_type(enum bpf_map_type map_type, const char *define_prefix, define_prefix); } +static bool +probe_helper_ifindex(enum bpf_func_id id, enum bpf_prog_type prog_type, + __u32 ifindex) +{ + struct bpf_insn insns[2] = { + BPF_EMIT_CALL(id), + BPF_EXIT_INSN() + }; 
+ char buf[4096] = {}; + bool res; + + probe_prog_load_ifindex(prog_type, insns, ARRAY_SIZE(insns), buf, + sizeof(buf), ifindex); + res = !grep(buf, "invalid func ") && !grep(buf, "unknown func "); + + switch (get_vendor_id(ifindex)) { + case 0x19ee: /* Netronome specific */ + res = res && !grep(buf, "not supported by FW") && + !grep(buf, "unsupported function id"); + break; + default: + break; + } + + return res; +} + static void probe_helper_for_progtype(enum bpf_prog_type prog_type, bool supported_type, const char *define_prefix, unsigned int id, @@ -567,7 +695,10 @@ probe_helper_for_progtype(enum bpf_prog_type prog_type, bool supported_type, bool res = false; if (supported_type) { - res = bpf_probe_helper(id, prog_type, ifindex); + if (ifindex) + res = probe_helper_ifindex(id, prog_type, ifindex); + else + res = libbpf_probe_bpf_helper(prog_type, id, NULL); #ifdef USE_LIBCAP /* Probe may succeed even if program load fails, for * unprivileged users check that we did not fail because of diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c index b4695df2ea3d..7ba7ff55d2ea 100644 --- a/tools/bpf/bpftool/gen.c +++ b/tools/bpf/bpftool/gen.c @@ -14,6 +14,7 @@ #include <unistd.h> #include <bpf/bpf.h> #include <bpf/libbpf.h> +#include <bpf/libbpf_internal.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/mman.h> @@ -63,11 +64,11 @@ static void get_obj_name(char *name, const char *file) sanitize_identifier(name); } -static void get_header_guard(char *guard, const char *obj_name) +static void get_header_guard(char *guard, const char *obj_name, const char *suffix) { int i; - sprintf(guard, "__%s_SKEL_H__", obj_name); + sprintf(guard, "__%s_%s__", obj_name, suffix); for (i = 0; guard[i]; i++) guard[i] = toupper(guard[i]); } @@ -208,15 +209,47 @@ static int codegen_datasec_def(struct bpf_object *obj, return 0; } +static const struct btf_type *find_type_for_map(struct btf *btf, const char *map_ident) +{ + int n = btf__type_cnt(btf), i; + char sec_ident[256]; + + for (i = 1; i < n; i++) { + const struct btf_type *t = btf__type_by_id(btf, i); + const char *name; + + if (!btf_is_datasec(t)) + continue; + + name = btf__str_by_offset(btf, t->name_off); + if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident))) + continue; + + if (strcmp(sec_ident, map_ident) == 0) + return t; + } + return NULL; +} + +static bool is_internal_mmapable_map(const struct bpf_map *map, char *buf, size_t sz) +{ + if (!bpf_map__is_internal(map) || !(bpf_map__map_flags(map) & BPF_F_MMAPABLE)) + return false; + + if (!get_map_ident(map, buf, sz)) + return false; + + return true; +} + static int codegen_datasecs(struct bpf_object *obj, const char *obj_name) { struct btf *btf = bpf_object__btf(obj); - int n = btf__type_cnt(btf); struct btf_dump *d; struct bpf_map *map; const struct btf_type *sec; - char sec_ident[256], map_ident[256]; - int i, err = 0; + char map_ident[256]; + int err = 0; d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL); err = libbpf_get_error(d); @@ -225,31 +258,10 @@ static int codegen_datasecs(struct bpf_object *obj, const char *obj_name) bpf_object__for_each_map(map, obj) { /* only generate definitions for memory-mapped internal maps */ - if (!bpf_map__is_internal(map)) + if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident))) continue; - if (!(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) - continue; - - if (!get_map_ident(map, map_ident, sizeof(map_ident))) - continue; - - sec = NULL; - for (i = 1; i < n; i++) { - const struct btf_type *t = btf__type_by_id(btf, 
i); - const char *name; - - if (!btf_is_datasec(t)) - continue; - - name = btf__str_by_offset(btf, t->name_off); - if (!get_datasec_ident(name, sec_ident, sizeof(sec_ident))) - continue; - if (strcmp(sec_ident, map_ident) == 0) { - sec = t; - break; - } - } + sec = find_type_for_map(btf, map_ident); /* In some cases (e.g., sections like .rodata.cst16 containing * compiler allocated string constants only) there will be @@ -274,6 +286,96 @@ out: return err; } +static bool btf_is_ptr_to_func_proto(const struct btf *btf, + const struct btf_type *v) +{ + return btf_is_ptr(v) && btf_is_func_proto(btf__type_by_id(btf, v->type)); +} + +static int codegen_subskel_datasecs(struct bpf_object *obj, const char *obj_name) +{ + struct btf *btf = bpf_object__btf(obj); + struct btf_dump *d; + struct bpf_map *map; + const struct btf_type *sec, *var; + const struct btf_var_secinfo *sec_var; + int i, err = 0, vlen; + char map_ident[256], sec_ident[256]; + bool strip_mods = false, needs_typeof = false; + const char *sec_name, *var_name; + __u32 var_type_id; + + d = btf_dump__new(btf, codegen_btf_dump_printf, NULL, NULL); + if (!d) + return -errno; + + bpf_object__for_each_map(map, obj) { + /* only generate definitions for memory-mapped internal maps */ + if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident))) + continue; + + sec = find_type_for_map(btf, map_ident); + if (!sec) + continue; + + sec_name = btf__name_by_offset(btf, sec->name_off); + if (!get_datasec_ident(sec_name, sec_ident, sizeof(sec_ident))) + continue; + + strip_mods = strcmp(sec_name, ".kconfig") != 0; + printf(" struct %s__%s {\n", obj_name, sec_ident); + + sec_var = btf_var_secinfos(sec); + vlen = btf_vlen(sec); + for (i = 0; i < vlen; i++, sec_var++) { + DECLARE_LIBBPF_OPTS(btf_dump_emit_type_decl_opts, opts, + .indent_level = 2, + .strip_mods = strip_mods, + /* we'll print the name separately */ + .field_name = "", + ); + + var = btf__type_by_id(btf, sec_var->type); + var_name = btf__name_by_offset(btf, var->name_off); + var_type_id = var->type; + + /* static variables are not exposed through BPF skeleton */ + if (btf_var(var)->linkage == BTF_VAR_STATIC) + continue; + + /* The datasec member has KIND_VAR but we want the + * underlying type of the variable (e.g. KIND_INT). + */ + var = skip_mods_and_typedefs(btf, var->type, NULL); + + printf("\t\t"); + /* Func and array members require special handling. + * Instead of producing `typename *var`, they produce + * `typeof(typename) *var`. This allows us to keep a + * similar syntax where the identifier is just prefixed + * by *, allowing us to ignore C declaration minutiae. + */ + needs_typeof = btf_is_array(var) || btf_is_ptr_to_func_proto(btf, var); + if (needs_typeof) + printf("typeof("); + + err = btf_dump__emit_type_decl(d, var_type_id, &opts); + if (err) + goto out; + + if (needs_typeof) + printf(")"); + + printf(" *%s;\n", var_name); + } + printf(" } %s;\n", sec_ident); + } + +out: + btf_dump__free(d); + return err; +} + static void codegen(const char *template, ...) { const char *src, *end; @@ -362,6 +464,69 @@ static size_t bpf_map_mmap_sz(const struct bpf_map *map) return map_sz; } +/* Emit type size asserts for all top-level fields in memory-mapped internal maps. 
*/ +static void codegen_asserts(struct bpf_object *obj, const char *obj_name) +{ + struct btf *btf = bpf_object__btf(obj); + struct bpf_map *map; + struct btf_var_secinfo *sec_var; + int i, vlen; + const struct btf_type *sec; + char map_ident[256], var_ident[256]; + + codegen("\ + \n\ + __attribute__((unused)) static void \n\ + %1$s__assert(struct %1$s *s) \n\ + { \n\ + #ifdef __cplusplus \n\ + #define _Static_assert static_assert \n\ + #endif \n\ + ", obj_name); + + bpf_object__for_each_map(map, obj) { + if (!is_internal_mmapable_map(map, map_ident, sizeof(map_ident))) + continue; + + sec = find_type_for_map(btf, map_ident); + if (!sec) { + /* best effort, couldn't find the type for this map */ + continue; + } + + sec_var = btf_var_secinfos(sec); + vlen = btf_vlen(sec); + + for (i = 0; i < vlen; i++, sec_var++) { + const struct btf_type *var = btf__type_by_id(btf, sec_var->type); + const char *var_name = btf__name_by_offset(btf, var->name_off); + long var_size; + + /* static variables are not exposed through BPF skeleton */ + if (btf_var(var)->linkage == BTF_VAR_STATIC) + continue; + + var_size = btf__resolve_size(btf, var->type); + if (var_size < 0) + continue; + + var_ident[0] = '\0'; + strncat(var_ident, var_name, sizeof(var_ident) - 1); + sanitize_identifier(var_ident); + + printf("\t_Static_assert(sizeof(s->%s->%s) == %ld, \"unexpected size of '%s'\");\n", + map_ident, var_ident, var_size, var_ident); + } + } + codegen("\ + \n\ + #ifdef __cplusplus \n\ + #undef _Static_assert \n\ + #endif \n\ + } \n\ + "); +} + static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name) { struct bpf_program *prog; @@ -378,13 +543,16 @@ static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name) int prog_fd = skel->progs.%2$s.prog_fd; \n\ ", obj_name, bpf_program__name(prog)); - switch (bpf_program__get_type(prog)) { + switch (bpf_program__type(prog)) { case BPF_PROG_TYPE_RAW_TRACEPOINT: tp_name = strchr(bpf_program__section_name(prog), '/') + 1; - printf("\tint fd = bpf_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name); + printf("\tint fd = skel_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name); break; case BPF_PROG_TYPE_TRACING: - printf("\tint fd = bpf_raw_tracepoint_open(NULL, prog_fd);\n"); + if (bpf_program__expected_attach_type(prog) == BPF_TRACE_ITER) + printf("\tint fd = skel_link_create(prog_fd, 0, BPF_TRACE_ITER);\n"); + else + printf("\tint fd = skel_raw_tracepoint_open(NULL, prog_fd);\n"); break; default: printf("\tint fd = ((void)prog_fd, 0); /* auto-attach not supported */\n"); @@ -468,8 +636,8 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name) if (!get_map_ident(map, ident, sizeof(ident))) continue; if (bpf_map__is_internal(map) && - (bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) - printf("\tmunmap(skel->%1$s, %2$zd);\n", + (bpf_map__map_flags(map) & BPF_F_MMAPABLE)) + printf("\tskel_free_map_data(skel->%1$s, skel->maps.%1$s.initial_value, %2$zd);\n", ident, bpf_map_mmap_sz(map)); codegen("\ \n\ @@ -478,7 +646,7 @@ static void codegen_destroy(struct bpf_object *obj, const char *obj_name) } codegen("\ \n\ - free(skel); \n\ + skel_free(skel); \n\ } \n\ ", obj_name); @@ -522,7 +690,7 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h { \n\ struct %1$s *skel; \n\ \n\ - skel = calloc(sizeof(*skel), 1); \n\ + skel = skel_alloc(sizeof(*skel)); \n\ if (!skel) \n\ goto cleanup; \n\ skel->ctx.sz = (void *)&skel->links - (void *)skel; \n\ @@ -532,27 +700,22 @@ static int gen_trace(struct 
bpf_object *obj, const char *obj_name, const char *h const void *mmap_data = NULL; size_t mmap_size = 0; - if (!get_map_ident(map, ident, sizeof(ident))) - continue; - - if (!bpf_map__is_internal(map) || - !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) + if (!is_internal_mmapable_map(map, ident, sizeof(ident))) continue; codegen("\ - \n\ - skel->%1$s = \n\ - mmap(NULL, %2$zd, PROT_READ | PROT_WRITE,\n\ - MAP_SHARED | MAP_ANONYMOUS, -1, 0); \n\ - if (skel->%1$s == (void *) -1) \n\ - goto cleanup; \n\ - memcpy(skel->%1$s, (void *)\"\\ \n\ - ", ident, bpf_map_mmap_sz(map)); + \n\ + skel->%1$s = skel_prep_map_data((void *)\"\\ \n\ + ", ident); mmap_data = bpf_map__initial_value(map, &mmap_size); print_hex(mmap_data, mmap_size); - printf("\", %2$zd);\n" - "\tskel->maps.%1$s.initial_value = (__u64)(long)skel->%1$s;\n", - ident, mmap_size); + codegen("\ + \n\ + \", %1$zd, %2$zd); \n\ + if (!skel->%3$s) \n\ + goto cleanup; \n\ + skel->maps.%3$s.initial_value = (__u64) (long) skel->%3$s;\n\ + ", bpf_map_mmap_sz(map), mmap_size, ident); } codegen("\ \n\ @@ -596,21 +759,21 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h bpf_object__for_each_map(map, obj) { const char *mmap_flags; - if (!get_map_ident(map, ident, sizeof(ident))) - continue; - - if (!bpf_map__is_internal(map) || - !(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) + if (!is_internal_mmapable_map(map, ident, sizeof(ident))) continue; - if (bpf_map__def(map)->map_flags & BPF_F_RDONLY_PROG) + if (bpf_map__map_flags(map) & BPF_F_RDONLY_PROG) mmap_flags = "PROT_READ"; else mmap_flags = "PROT_READ | PROT_WRITE"; - printf("\tskel->%1$s =\n" - "\t\tmmap(skel->%1$s, %2$zd, %3$s, MAP_SHARED | MAP_FIXED,\n" - "\t\t\tskel->maps.%1$s.map_fd, 0);\n", + codegen("\ + \n\ + skel->%1$s = skel_finalize_map_data(&skel->maps.%1$s.initial_value, \n\ + %2$zd, %3$s, skel->maps.%1$s.map_fd);\n\ + if (!skel->%1$s) \n\ + return -ENOMEM; \n\ + ", ident, bpf_map_mmap_sz(map), mmap_flags); } codegen("\ @@ -632,8 +795,11 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h } \n\ return skel; \n\ } \n\ + \n\ ", obj_name); + codegen_asserts(obj, obj_name); + codegen("\ \n\ \n\ @@ -645,10 +811,95 @@ out: return err; } +static void +codegen_maps_skeleton(struct bpf_object *obj, size_t map_cnt, bool mmaped) +{ + struct bpf_map *map; + char ident[256]; + size_t i; + + if (!map_cnt) + return; + + codegen("\ + \n\ + \n\ + /* maps */ \n\ + s->map_cnt = %zu; \n\ + s->map_skel_sz = sizeof(*s->maps); \n\ + s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\ + if (!s->maps) \n\ + goto err; \n\ + ", + map_cnt + ); + i = 0; + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + codegen("\ + \n\ + \n\ + s->maps[%zu].name = \"%s\"; \n\ + s->maps[%zu].map = &obj->maps.%s; \n\ + ", + i, bpf_map__name(map), i, ident); + /* memory-mapped internal maps */ + if (mmaped && is_internal_mmapable_map(map, ident, sizeof(ident))) { + printf("\ts->maps[%zu].mmaped = (void **)&obj->%s;\n", + i, ident); + } + i++; + } +} + +static void +codegen_progs_skeleton(struct bpf_object *obj, size_t prog_cnt, bool populate_links) +{ + struct bpf_program *prog; + int i; + + if (!prog_cnt) + return; + + codegen("\ + \n\ + \n\ + /* programs */ \n\ + s->prog_cnt = %zu; \n\ + s->prog_skel_sz = sizeof(*s->progs); \n\ + s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\ + if (!s->progs) \n\ + goto err; \n\ + ", + prog_cnt + ); + i = 0; + 
bpf_object__for_each_program(prog, obj) { + codegen("\ + \n\ + \n\ + s->progs[%1$zu].name = \"%2$s\"; \n\ + s->progs[%1$zu].prog = &obj->progs.%2$s;\n\ + ", + i, bpf_program__name(prog)); + + if (populate_links) { + codegen("\ + \n\ + s->progs[%1$zu].link = &obj->links.%2$s;\n\ + ", + i, bpf_program__name(prog)); + } + i++; + } +} + static int do_skeleton(int argc, char **argv) { char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SKEL_H__")]; - size_t i, map_cnt = 0, prog_cnt = 0, file_sz, mmap_sz; + size_t map_cnt = 0, prog_cnt = 0, file_sz, mmap_sz; DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts); char obj_name[MAX_OBJ_NAME_LEN] = "", *obj_data; struct bpf_object *obj = NULL; @@ -739,7 +990,7 @@ static int do_skeleton(int argc, char **argv) prog_cnt++; } - get_header_guard(header_guard, obj_name); + get_header_guard(header_guard, obj_name, "SKEL_H"); if (use_loader) { codegen("\ \n\ @@ -748,8 +999,6 @@ static int do_skeleton(int argc, char **argv) #ifndef %2$s \n\ #define %2$s \n\ \n\ - #include <stdlib.h> \n\ - #include <bpf/bpf.h> \n\ #include <bpf/skel_internal.h> \n\ \n\ struct %1$s { \n\ @@ -827,6 +1076,16 @@ static int do_skeleton(int argc, char **argv) codegen("\ \n\ + \n\ + #ifdef __cplusplus \n\ + static inline struct %1$s *open(const struct bpf_object_open_opts *opts = nullptr);\n\ + static inline struct %1$s *open_and_load(); \n\ + static inline int load(struct %1$s *skel); \n\ + static inline int attach(struct %1$s *skel); \n\ + static inline void detach(struct %1$s *skel); \n\ + static inline void destroy(struct %1$s *skel); \n\ + static inline const void *elf_bytes(size_t *sz); \n\ + #endif /* __cplusplus */ \n\ }; \n\ \n\ static void \n\ @@ -927,7 +1186,6 @@ static int do_skeleton(int argc, char **argv) s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));\n\ if (!s) \n\ goto err; \n\ - obj->skeleton = s; \n\ \n\ s->sz = sizeof(*s); \n\ s->name = \"%1$s\"; \n\ @@ -935,71 +1193,16 @@ static int do_skeleton(int argc, char **argv) ", obj_name ); - if (map_cnt) { - codegen("\ - \n\ - \n\ - /* maps */ \n\ - s->map_cnt = %zu; \n\ - s->map_skel_sz = sizeof(*s->maps); \n\ - s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);\n\ - if (!s->maps) \n\ - goto err; \n\ - ", - map_cnt - ); - i = 0; - bpf_object__for_each_map(map, obj) { - if (!get_map_ident(map, ident, sizeof(ident))) - continue; - codegen("\ - \n\ - \n\ - s->maps[%zu].name = \"%s\"; \n\ - s->maps[%zu].map = &obj->maps.%s; \n\ - ", - i, bpf_map__name(map), i, ident); - /* memory-mapped internal maps */ - if (bpf_map__is_internal(map) && - (bpf_map__def(map)->map_flags & BPF_F_MMAPABLE)) { - printf("\ts->maps[%zu].mmaped = (void **)&obj->%s;\n", - i, ident); - } - i++; - } - } - if (prog_cnt) { - codegen("\ - \n\ - \n\ - /* programs */ \n\ - s->prog_cnt = %zu; \n\ - s->prog_skel_sz = sizeof(*s->progs); \n\ - s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);\n\ - if (!s->progs) \n\ - goto err; \n\ - ", - prog_cnt - ); - i = 0; - bpf_object__for_each_program(prog, obj) { - codegen("\ - \n\ - \n\ - s->progs[%1$zu].name = \"%2$s\"; \n\ - s->progs[%1$zu].prog = &obj->progs.%2$s;\n\ - s->progs[%1$zu].link = &obj->links.%2$s;\n\ - ", - i, bpf_program__name(prog)); - i++; - } - } + codegen_maps_skeleton(obj, map_cnt, true /*mmaped*/); + codegen_progs_skeleton(obj, prog_cnt, true /*populate_links*/); + codegen("\ \n\ \n\ s->data = (void *)%2$s__elf_bytes(&s->data_sz); \n\ \n\ + obj->skeleton = s; \n\ return 0; \n\ err: \n\ bpf_object__destroy_skeleton(s); \n\ @@ -1021,7 +1224,25 @@ 
static int do_skeleton(int argc, char **argv) \"; \n\ } \n\ \n\ - #endif /* %s */ \n\ + #ifdef __cplusplus \n\ + struct %1$s *%1$s::open(const struct bpf_object_open_opts *opts) { return %1$s__open_opts(opts); }\n\ + struct %1$s *%1$s::open_and_load() { return %1$s__open_and_load(); } \n\ + int %1$s::load(struct %1$s *skel) { return %1$s__load(skel); } \n\ + int %1$s::attach(struct %1$s *skel) { return %1$s__attach(skel); } \n\ + void %1$s::detach(struct %1$s *skel) { %1$s__detach(skel); } \n\ + void %1$s::destroy(struct %1$s *skel) { %1$s__destroy(skel); } \n\ + const void *%1$s::elf_bytes(size_t *sz) { return %1$s__elf_bytes(sz); } \n\ + #endif /* __cplusplus */ \n\ + \n\ + ", + obj_name); + + codegen_asserts(obj, obj_name); + + codegen("\ + \n\ + \n\ + #endif /* %1$s */ \n\ ", header_guard); err = 0; @@ -1033,6 +1254,310 @@ out: return err; } +/* Subskeletons are like skeletons, except they don't own the bpf_object, + * associated maps, links, etc. Instead, they know about the existence of + * variables, maps, programs and are able to find their locations + * _at runtime_ from an already loaded bpf_object. + * + * This allows for library-like BPF objects to have userspace counterparts + * with access to their own items without having to know anything about the + * final BPF object that the library was linked into. + */ +static int do_subskeleton(int argc, char **argv) +{ + char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SUBSKEL_H__")]; + size_t i, len, file_sz, map_cnt = 0, prog_cnt = 0, mmap_sz, var_cnt = 0, var_idx = 0; + DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts); + char obj_name[MAX_OBJ_NAME_LEN] = "", *obj_data; + struct bpf_object *obj = NULL; + const char *file, *var_name; + char ident[256]; + int fd, err = -1, map_type_id; + const struct bpf_map *map; + struct bpf_program *prog; + struct btf *btf; + const struct btf_type *map_type, *var_type; + const struct btf_var_secinfo *var; + struct stat st; + + if (!REQ_ARGS(1)) { + usage(); + return -1; + } + file = GET_ARG(); + + while (argc) { + if (!REQ_ARGS(2)) + return -1; + + if (is_prefix(*argv, "name")) { + NEXT_ARG(); + + if (obj_name[0] != '\0') { + p_err("object name already specified"); + return -1; + } + + strncpy(obj_name, *argv, MAX_OBJ_NAME_LEN - 1); + obj_name[MAX_OBJ_NAME_LEN - 1] = '\0'; + } else { + p_err("unknown arg %s", *argv); + return -1; + } + + NEXT_ARG(); + } + + if (argc) { + p_err("extra unknown arguments"); + return -1; + } + + if (use_loader) { + p_err("cannot use loader for subskeletons"); + return -1; + } + + if (stat(file, &st)) { + p_err("failed to stat() %s: %s", file, strerror(errno)); + return -1; + } + file_sz = st.st_size; + mmap_sz = roundup(file_sz, sysconf(_SC_PAGE_SIZE)); + fd = open(file, O_RDONLY); + if (fd < 0) { + p_err("failed to open() %s: %s", file, strerror(errno)); + return -1; + } + obj_data = mmap(NULL, mmap_sz, PROT_READ, MAP_PRIVATE, fd, 0); + if (obj_data == MAP_FAILED) { + obj_data = NULL; + p_err("failed to mmap() %s: %s", file, strerror(errno)); + goto out; + } + if (obj_name[0] == '\0') + get_obj_name(obj_name, file); + + /* The empty object name allows us to use bpf_map__name and produce + * ELF section names out of it. 
(".data" instead of "obj.data") + */ + opts.object_name = ""; + obj = bpf_object__open_mem(obj_data, file_sz, &opts); + if (!obj) { + char err_buf[256]; + + libbpf_strerror(errno, err_buf, sizeof(err_buf)); + p_err("failed to open BPF object file: %s", err_buf); + obj = NULL; + goto out; + } + + btf = bpf_object__btf(obj); + if (!btf) { + err = -1; + p_err("need btf type information for %s", obj_name); + goto out; + } + + bpf_object__for_each_program(prog, obj) { + prog_cnt++; + } + + /* First, count how many variables we have to find. + * We need this in advance so the subskel can allocate the right + * amount of storage. + */ + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + + /* Also count all maps that have a name */ + map_cnt++; + + if (!is_internal_mmapable_map(map, ident, sizeof(ident))) + continue; + + map_type_id = bpf_map__btf_value_type_id(map); + if (map_type_id <= 0) { + err = map_type_id; + goto out; + } + map_type = btf__type_by_id(btf, map_type_id); + + var = btf_var_secinfos(map_type); + len = btf_vlen(map_type); + for (i = 0; i < len; i++, var++) { + var_type = btf__type_by_id(btf, var->type); + + if (btf_var(var_type)->linkage == BTF_VAR_STATIC) + continue; + + var_cnt++; + } + } + + get_header_guard(header_guard, obj_name, "SUBSKEL_H"); + codegen("\ + \n\ + /* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\ + \n\ + /* THIS FILE IS AUTOGENERATED! */ \n\ + #ifndef %2$s \n\ + #define %2$s \n\ + \n\ + #include <errno.h> \n\ + #include <stdlib.h> \n\ + #include <bpf/libbpf.h> \n\ + \n\ + struct %1$s { \n\ + struct bpf_object *obj; \n\ + struct bpf_object_subskeleton *subskel; \n\ + ", obj_name, header_guard); + + if (map_cnt) { + printf("\tstruct {\n"); + bpf_object__for_each_map(map, obj) { + if (!get_map_ident(map, ident, sizeof(ident))) + continue; + printf("\t\tstruct bpf_map *%s;\n", ident); + } + printf("\t} maps;\n"); + } + + if (prog_cnt) { + printf("\tstruct {\n"); + bpf_object__for_each_program(prog, obj) { + printf("\t\tstruct bpf_program *%s;\n", + bpf_program__name(prog)); + } + printf("\t} progs;\n"); + } + + err = codegen_subskel_datasecs(obj, obj_name); + if (err) + goto out; + + /* emit code that will allocate enough storage for all symbols */ + codegen("\ + \n\ + \n\ + #ifdef __cplusplus \n\ + static inline struct %1$s *open(const struct bpf_object *src);\n\ + static inline void destroy(struct %1$s *skel); \n\ + #endif /* __cplusplus */ \n\ + }; \n\ + \n\ + static inline void \n\ + %1$s__destroy(struct %1$s *skel) \n\ + { \n\ + if (!skel) \n\ + return; \n\ + if (skel->subskel) \n\ + bpf_object__destroy_subskeleton(skel->subskel);\n\ + free(skel); \n\ + } \n\ + \n\ + static inline struct %1$s * \n\ + %1$s__open(const struct bpf_object *src) \n\ + { \n\ + struct %1$s *obj; \n\ + struct bpf_object_subskeleton *s; \n\ + int err; \n\ + \n\ + obj = (struct %1$s *)calloc(1, sizeof(*obj)); \n\ + if (!obj) { \n\ + errno = ENOMEM; \n\ + goto err; \n\ + } \n\ + s = (struct bpf_object_subskeleton *)calloc(1, sizeof(*s));\n\ + if (!s) { \n\ + errno = ENOMEM; \n\ + goto err; \n\ + } \n\ + s->sz = sizeof(*s); \n\ + s->obj = src; \n\ + s->var_skel_sz = sizeof(*s->vars); \n\ + obj->subskel = s; \n\ + \n\ + /* vars */ \n\ + s->var_cnt = %2$d; \n\ + s->vars = (struct bpf_var_skeleton *)calloc(%2$d, sizeof(*s->vars));\n\ + if (!s->vars) { \n\ + errno = ENOMEM; \n\ + goto err; \n\ + } \n\ + ", + obj_name, var_cnt + ); + + /* walk through each symbol and emit the runtime representation */ + 
bpf_object__for_each_map(map, obj) { + if (!is_internal_mmapable_map(map, ident, sizeof(ident))) + continue; + + map_type_id = bpf_map__btf_value_type_id(map); + if (map_type_id <= 0) + /* skip over internal maps with no type*/ + continue; + + map_type = btf__type_by_id(btf, map_type_id); + var = btf_var_secinfos(map_type); + len = btf_vlen(map_type); + for (i = 0; i < len; i++, var++) { + var_type = btf__type_by_id(btf, var->type); + var_name = btf__name_by_offset(btf, var_type->name_off); + + if (btf_var(var_type)->linkage == BTF_VAR_STATIC) + continue; + + /* Note that we use the dot prefix in .data as the + * field access operator i.e. maps%s becomes maps.data + */ + codegen("\ + \n\ + \n\ + s->vars[%3$d].name = \"%1$s\"; \n\ + s->vars[%3$d].map = &obj->maps.%2$s; \n\ + s->vars[%3$d].addr = (void **) &obj->%2$s.%1$s;\n\ + ", var_name, ident, var_idx); + + var_idx++; + } + } + + codegen_maps_skeleton(obj, map_cnt, false /*mmaped*/); + codegen_progs_skeleton(obj, prog_cnt, false /*links*/); + + codegen("\ + \n\ + \n\ + err = bpf_object__open_subskeleton(s); \n\ + if (err) \n\ + goto err; \n\ + \n\ + return obj; \n\ + err: \n\ + %1$s__destroy(obj); \n\ + return NULL; \n\ + } \n\ + \n\ + #ifdef __cplusplus \n\ + struct %1$s *%1$s::open(const struct bpf_object *src) { return %1$s__open(src); }\n\ + void %1$s::destroy(struct %1$s *skel) { %1$s__destroy(skel); }\n\ + #endif /* __cplusplus */ \n\ + \n\ + #endif /* %2$s */ \n\ + ", + obj_name, header_guard); + err = 0; +out: + bpf_object__close(obj); + if (obj_data) + munmap(obj_data, mmap_sz); + close(fd); + return err; +} + static int do_object(int argc, char **argv) { struct bpf_linker *linker; @@ -1084,6 +1609,8 @@ static int do_help(int argc, char **argv) fprintf(stderr, "Usage: %1$s %2$s object OUTPUT_FILE INPUT_FILE [INPUT_FILE...]\n" " %1$s %2$s skeleton FILE [name OBJECT_NAME]\n" + " %1$s %2$s subskeleton FILE [name OBJECT_NAME]\n" + " %1$s %2$s min_core_btf INPUT OUTPUT OBJECT [OBJECT...]\n" " %1$s %2$s help\n" "\n" " " HELP_SPEC_OPTIONS " |\n" @@ -1094,10 +1621,594 @@ static int do_help(int argc, char **argv) return 0; } +static int btf_save_raw(const struct btf *btf, const char *path) +{ + const void *data; + FILE *f = NULL; + __u32 data_sz; + int err = 0; + + data = btf__raw_data(btf, &data_sz); + if (!data) + return -ENOMEM; + + f = fopen(path, "wb"); + if (!f) + return -errno; + + if (fwrite(data, 1, data_sz, f) != data_sz) + err = -errno; + + fclose(f); + return err; +} + +struct btfgen_info { + struct btf *src_btf; + struct btf *marked_btf; /* btf structure used to mark used types */ +}; + +static size_t btfgen_hash_fn(const void *key, void *ctx) +{ + return (size_t)key; +} + +static bool btfgen_equal_fn(const void *k1, const void *k2, void *ctx) +{ + return k1 == k2; +} + +static void *u32_as_hash_key(__u32 x) +{ + return (void *)(uintptr_t)x; +} + +static void btfgen_free_info(struct btfgen_info *info) +{ + if (!info) + return; + + btf__free(info->src_btf); + btf__free(info->marked_btf); + + free(info); +} + +static struct btfgen_info * +btfgen_new_info(const char *targ_btf_path) +{ + struct btfgen_info *info; + int err; + + info = calloc(1, sizeof(*info)); + if (!info) + return NULL; + + info->src_btf = btf__parse(targ_btf_path, NULL); + if (!info->src_btf) { + err = -errno; + p_err("failed parsing '%s' BTF file: %s", targ_btf_path, strerror(errno)); + goto err_out; + } + + info->marked_btf = btf__parse(targ_btf_path, NULL); + if (!info->marked_btf) { + err = -errno; + p_err("failed parsing '%s' BTF file: %s", targ_btf_path, 
strerror(errno)); + goto err_out; + } + + return info; + +err_out: + btfgen_free_info(info); + errno = -err; + return NULL; +} + +#define MARKED UINT32_MAX + +static void btfgen_mark_member(struct btfgen_info *info, int type_id, int idx) +{ + const struct btf_type *t = btf__type_by_id(info->marked_btf, type_id); + struct btf_member *m = btf_members(t) + idx; + + m->name_off = MARKED; +} + +static int +btfgen_mark_type(struct btfgen_info *info, unsigned int type_id, bool follow_pointers) +{ + const struct btf_type *btf_type = btf__type_by_id(info->src_btf, type_id); + struct btf_type *cloned_type; + struct btf_param *param; + struct btf_array *array; + int err, i; + + if (type_id == 0) + return 0; + + /* mark type on cloned BTF as used */ + cloned_type = (struct btf_type *) btf__type_by_id(info->marked_btf, type_id); + cloned_type->name_off = MARKED; + + /* recursively mark other types needed by it */ + switch (btf_kind(btf_type)) { + case BTF_KIND_UNKN: + case BTF_KIND_INT: + case BTF_KIND_FLOAT: + case BTF_KIND_ENUM: + case BTF_KIND_STRUCT: + case BTF_KIND_UNION: + break; + case BTF_KIND_PTR: + if (follow_pointers) { + err = btfgen_mark_type(info, btf_type->type, follow_pointers); + if (err) + return err; + } + break; + case BTF_KIND_CONST: + case BTF_KIND_VOLATILE: + case BTF_KIND_TYPEDEF: + err = btfgen_mark_type(info, btf_type->type, follow_pointers); + if (err) + return err; + break; + case BTF_KIND_ARRAY: + array = btf_array(btf_type); + + /* mark array type */ + err = btfgen_mark_type(info, array->type, follow_pointers); + /* mark array's index type */ + err = err ? : btfgen_mark_type(info, array->index_type, follow_pointers); + if (err) + return err; + break; + case BTF_KIND_FUNC_PROTO: + /* mark ret type */ + err = btfgen_mark_type(info, btf_type->type, follow_pointers); + if (err) + return err; + + /* mark parameters types */ + param = btf_params(btf_type); + for (i = 0; i < btf_vlen(btf_type); i++) { + err = btfgen_mark_type(info, param->type, follow_pointers); + if (err) + return err; + param++; + } + break; + /* tells if some other type needs to be handled */ + default: + p_err("unsupported kind: %s (%d)", btf_kind_str(btf_type), type_id); + return -EINVAL; + } + + return 0; +} + +static int btfgen_record_field_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec) +{ + struct btf *btf = info->src_btf; + const struct btf_type *btf_type; + struct btf_member *btf_member; + struct btf_array *array; + unsigned int type_id = targ_spec->root_type_id; + int idx, err; + + /* mark root type */ + btf_type = btf__type_by_id(btf, type_id); + err = btfgen_mark_type(info, type_id, false); + if (err) + return err; + + /* mark types for complex types (arrays, unions, structures) */ + for (int i = 1; i < targ_spec->raw_len; i++) { + /* skip typedefs and mods */ + while (btf_is_mod(btf_type) || btf_is_typedef(btf_type)) { + type_id = btf_type->type; + btf_type = btf__type_by_id(btf, type_id); + } + + switch (btf_kind(btf_type)) { + case BTF_KIND_STRUCT: + case BTF_KIND_UNION: + idx = targ_spec->raw_spec[i]; + btf_member = btf_members(btf_type) + idx; + + /* mark member */ + btfgen_mark_member(info, type_id, idx); + + /* mark member's type */ + type_id = btf_member->type; + btf_type = btf__type_by_id(btf, type_id); + err = btfgen_mark_type(info, type_id, false); + if (err) + return err; + break; + case BTF_KIND_ARRAY: + array = btf_array(btf_type); + type_id = array->type; + btf_type = btf__type_by_id(btf, type_id); + break; + default: + p_err("unsupported kind: %s (%d)", + 
btf_kind_str(btf_type), btf_type->type); + return -EINVAL; + } + } + + return 0; +} + +static int btfgen_record_type_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec) +{ + return btfgen_mark_type(info, targ_spec->root_type_id, true); +} + +static int btfgen_record_enumval_relo(struct btfgen_info *info, struct bpf_core_spec *targ_spec) +{ + return btfgen_mark_type(info, targ_spec->root_type_id, false); +} + +static int btfgen_record_reloc(struct btfgen_info *info, struct bpf_core_spec *res) +{ + switch (res->relo_kind) { + case BPF_CORE_FIELD_BYTE_OFFSET: + case BPF_CORE_FIELD_BYTE_SIZE: + case BPF_CORE_FIELD_EXISTS: + case BPF_CORE_FIELD_SIGNED: + case BPF_CORE_FIELD_LSHIFT_U64: + case BPF_CORE_FIELD_RSHIFT_U64: + return btfgen_record_field_relo(info, res); + case BPF_CORE_TYPE_ID_LOCAL: /* BPF_CORE_TYPE_ID_LOCAL doesn't require kernel BTF */ + return 0; + case BPF_CORE_TYPE_ID_TARGET: + case BPF_CORE_TYPE_EXISTS: + case BPF_CORE_TYPE_SIZE: + return btfgen_record_type_relo(info, res); + case BPF_CORE_ENUMVAL_EXISTS: + case BPF_CORE_ENUMVAL_VALUE: + return btfgen_record_enumval_relo(info, res); + default: + return -EINVAL; + } +} + +static struct bpf_core_cand_list * +btfgen_find_cands(const struct btf *local_btf, const struct btf *targ_btf, __u32 local_id) +{ + const struct btf_type *local_type; + struct bpf_core_cand_list *cands = NULL; + struct bpf_core_cand local_cand = {}; + size_t local_essent_len; + const char *local_name; + int err; + + local_cand.btf = local_btf; + local_cand.id = local_id; + + local_type = btf__type_by_id(local_btf, local_id); + if (!local_type) { + err = -EINVAL; + goto err_out; + } + + local_name = btf__name_by_offset(local_btf, local_type->name_off); + if (!local_name) { + err = -EINVAL; + goto err_out; + } + local_essent_len = bpf_core_essential_name_len(local_name); + + cands = calloc(1, sizeof(*cands)); + if (!cands) + return NULL; + + err = bpf_core_add_cands(&local_cand, local_essent_len, targ_btf, "vmlinux", 1, cands); + if (err) + goto err_out; + + return cands; + +err_out: + bpf_core_free_cands(cands); + errno = -err; + return NULL; +} + +/* Record relocation information for a single BPF object */ +static int btfgen_record_obj(struct btfgen_info *info, const char *obj_path) +{ + const struct btf_ext_info_sec *sec; + const struct bpf_core_relo *relo; + const struct btf_ext_info *seg; + struct hashmap_entry *entry; + struct hashmap *cand_cache = NULL; + struct btf_ext *btf_ext = NULL; + unsigned int relo_idx; + struct btf *btf = NULL; + size_t i; + int err; + + btf = btf__parse(obj_path, &btf_ext); + if (!btf) { + err = -errno; + p_err("failed to parse BPF object '%s': %s", obj_path, strerror(errno)); + return err; + } + + if (!btf_ext) { + p_err("failed to parse BPF object '%s': section %s not found", + obj_path, BTF_EXT_ELF_SEC); + err = -EINVAL; + goto out; + } + + if (btf_ext->core_relo_info.len == 0) { + err = 0; + goto out; + } + + cand_cache = hashmap__new(btfgen_hash_fn, btfgen_equal_fn, NULL); + if (IS_ERR(cand_cache)) { + err = PTR_ERR(cand_cache); + goto out; + } + + seg = &btf_ext->core_relo_info; + for_each_btf_ext_sec(seg, sec) { + for_each_btf_ext_rec(seg, sec, relo_idx, relo) { + struct bpf_core_spec specs_scratch[3] = {}; + struct bpf_core_relo_res targ_res = {}; + struct bpf_core_cand_list *cands = NULL; + const void *type_key = u32_as_hash_key(relo->type_id); + const char *sec_name = btf__name_by_offset(btf, sec->sec_name_off); + + if (relo->kind != BPF_CORE_TYPE_ID_LOCAL && + !hashmap__find(cand_cache, type_key, (void 
**)&cands)) { + cands = btfgen_find_cands(btf, info->src_btf, relo->type_id); + if (!cands) { + err = -errno; + goto out; + } + + err = hashmap__set(cand_cache, type_key, cands, NULL, NULL); + if (err) + goto out; + } + + err = bpf_core_calc_relo_insn(sec_name, relo, relo_idx, btf, cands, + specs_scratch, &targ_res); + if (err) + goto out; + + /* specs_scratch[2] is the target spec */ + err = btfgen_record_reloc(info, &specs_scratch[2]); + if (err) + goto out; + } + } + +out: + btf__free(btf); + btf_ext__free(btf_ext); + + if (!IS_ERR_OR_NULL(cand_cache)) { + hashmap__for_each_entry(cand_cache, entry, i) { + bpf_core_free_cands(entry->value); + } + hashmap__free(cand_cache); + } + + return err; +} + +static int btfgen_remap_id(__u32 *type_id, void *ctx) +{ + unsigned int *ids = ctx; + + *type_id = ids[*type_id]; + + return 0; +} + +/* Generate BTF from relocation information previously recorded */ +static struct btf *btfgen_get_btf(struct btfgen_info *info) +{ + struct btf *btf_new = NULL; + unsigned int *ids = NULL; + unsigned int i, n = btf__type_cnt(info->marked_btf); + int err = 0; + + btf_new = btf__new_empty(); + if (!btf_new) { + err = -errno; + goto err_out; + } + + ids = calloc(n, sizeof(*ids)); + if (!ids) { + err = -errno; + goto err_out; + } + + /* first pass: add all marked types to btf_new and add their new ids to the ids map */ + for (i = 1; i < n; i++) { + const struct btf_type *cloned_type, *type; + const char *name; + int new_id; + + cloned_type = btf__type_by_id(info->marked_btf, i); + + if (cloned_type->name_off != MARKED) + continue; + + type = btf__type_by_id(info->src_btf, i); + + /* add members for struct and union */ + if (btf_is_composite(type)) { + struct btf_member *cloned_m, *m; + unsigned short vlen; + int idx_src; + + name = btf__str_by_offset(info->src_btf, type->name_off); + + if (btf_is_struct(type)) + err = btf__add_struct(btf_new, name, type->size); + else + err = btf__add_union(btf_new, name, type->size); + + if (err < 0) + goto err_out; + new_id = err; + + cloned_m = btf_members(cloned_type); + m = btf_members(type); + vlen = btf_vlen(cloned_type); + for (idx_src = 0; idx_src < vlen; idx_src++, cloned_m++, m++) { + /* add only members that are marked as used */ + if (cloned_m->name_off != MARKED) + continue; + + name = btf__str_by_offset(info->src_btf, m->name_off); + err = btf__add_field(btf_new, name, m->type, + btf_member_bit_offset(cloned_type, idx_src), + btf_member_bitfield_size(cloned_type, idx_src)); + if (err < 0) + goto err_out; + } + } else { + err = btf__add_type(btf_new, info->src_btf, type); + if (err < 0) + goto err_out; + new_id = err; + } + + /* add ID mapping */ + ids[i] = new_id; + } + + /* second pass: fix up type ids */ + for (i = 1; i < btf__type_cnt(btf_new); i++) { + struct btf_type *btf_type = (struct btf_type *) btf__type_by_id(btf_new, i); + + err = btf_type_visit_type_ids(btf_type, btfgen_remap_id, ids); + if (err) + goto err_out; + } + + free(ids); + return btf_new; + +err_out: + btf__free(btf_new); + free(ids); + errno = -err; + return NULL; +} + +/* Create minimized BTF file for a set of BPF objects. + * + * The BTFGen algorithm is divided in two main parts: (1) collect the + * BTF types that are involved in relocations and (2) generate the BTF + * object using the collected types. 
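+ * (For example: an object whose only CO-RE relocation is a byte-offset access to task_struct->pid ends up with a generated BTF whose task_struct carries just that one member.)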
+ * + * In order to collect the types involved in the relocations, we parse + * the BTF and BTF.ext sections of the BPF objects and use + * bpf_core_calc_relo_insn() to get the target specification, this + * indicates how the types and fields are used in a relocation. + * + * Types are recorded in different ways according to the kind of the + * relocation. For field-based relocations only the members that are + * actually used are saved in order to reduce the size of the generated + * BTF file. For type-based relocations empty struct / unions are + * generated and for enum-based relocations the whole type is saved. + * + * The second part of the algorithm generates the BTF object. It creates + * an empty BTF object and fills it with the types recorded in the + * previous step. This function takes care of only adding the structure + * and union members that were marked as used and it also fixes up the + * type IDs on the generated BTF object. + */ +static int minimize_btf(const char *src_btf, const char *dst_btf, const char *objspaths[]) +{ + struct btfgen_info *info; + struct btf *btf_new = NULL; + int err, i; + + info = btfgen_new_info(src_btf); + if (!info) { + err = -errno; + p_err("failed to allocate info structure: %s", strerror(errno)); + goto out; + } + + for (i = 0; objspaths[i] != NULL; i++) { + err = btfgen_record_obj(info, objspaths[i]); + if (err) { + p_err("error recording relocations for %s: %s", objspaths[i], + strerror(errno)); + goto out; + } + } + + btf_new = btfgen_get_btf(info); + if (!btf_new) { + err = -errno; + p_err("error generating BTF: %s", strerror(errno)); + goto out; + } + + err = btf_save_raw(btf_new, dst_btf); + if (err) { + p_err("error saving btf file: %s", strerror(errno)); + goto out; + } + +out: + btf__free(btf_new); + btfgen_free_info(info); + + return err; +} + +static int do_min_core_btf(int argc, char **argv) +{ + const char *input, *output, **objs; + int i, err; + + if (!REQ_ARGS(3)) { + usage(); + return -1; + } + + input = GET_ARG(); + output = GET_ARG(); + + objs = (const char **) calloc(argc + 1, sizeof(*objs)); + if (!objs) { + p_err("failed to allocate array for object names"); + return -ENOMEM; + } + + i = 0; + while (argc) + objs[i++] = GET_ARG(); + + err = minimize_btf(input, output, objs); + free(objs); + return err; +} + static const struct cmd cmds[] = { - { "object", do_object }, - { "skeleton", do_skeleton }, - { "help", do_help }, + { "object", do_object }, + { "skeleton", do_skeleton }, + { "subskeleton", do_subskeleton }, + { "min_core_btf", do_min_core_btf}, + { "help", do_help }, { 0 } }; diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c index 2c258db0d352..97dec81950e5 100644 --- a/tools/bpf/bpftool/link.c +++ b/tools/bpf/bpftool/link.c @@ -2,6 +2,7 @@ /* Copyright (C) 2020 Facebook */ #include <errno.h> +#include <linux/err.h> #include <net/if.h> #include <stdio.h> #include <unistd.h> @@ -306,7 +307,7 @@ static int do_show(int argc, char **argv) if (show_pinned) { link_table = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL); - if (!link_table) { + if (IS_ERR(link_table)) { p_err("failed to create hashmap for pinned paths"); return -1; } diff --git a/tools/bpf/bpftool/main.c b/tools/bpf/bpftool/main.c index 020e91a542d5..e81227761f5d 100644 --- a/tools/bpf/bpftool/main.c +++ b/tools/bpf/bpftool/main.c @@ -71,6 +71,17 @@ static int do_help(int argc, char **argv) return 0; } +#ifndef BPFTOOL_VERSION +/* bpftool's major and minor version numbers are aligned on libbpf's. 
There is + * an offset of 6 for the version number, because bpftool's version was higher + * than libbpf's when we adopted this scheme. The patch number remains at 0 + * for now. Set BPFTOOL_VERSION to override. + */ +#define BPFTOOL_MAJOR_VERSION (LIBBPF_MAJOR_VERSION + 6) +#define BPFTOOL_MINOR_VERSION LIBBPF_MINOR_VERSION +#define BPFTOOL_PATCH_VERSION 0 +#endif + static int do_version(int argc, char **argv) { #ifdef HAVE_LIBBFD_SUPPORT @@ -88,7 +99,15 @@ static int do_version(int argc, char **argv) jsonw_start_object(json_wtr); /* root object */ jsonw_name(json_wtr, "version"); +#ifdef BPFTOOL_VERSION jsonw_printf(json_wtr, "\"%s\"", BPFTOOL_VERSION); +#else + jsonw_printf(json_wtr, "\"%d.%d.%d\"", BPFTOOL_MAJOR_VERSION, + BPFTOOL_MINOR_VERSION, BPFTOOL_PATCH_VERSION); +#endif + jsonw_name(json_wtr, "libbpf_version"); + jsonw_printf(json_wtr, "\"%d.%d\"", + libbpf_major_version(), libbpf_minor_version()); jsonw_name(json_wtr, "features"); jsonw_start_object(json_wtr); /* features */ @@ -101,7 +120,13 @@ static int do_version(int argc, char **argv) } else { unsigned int nb_features = 0; +#ifdef BPFTOOL_VERSION printf("%s v%s\n", bin_name, BPFTOOL_VERSION); +#else + printf("%s v%d.%d.%d\n", bin_name, BPFTOOL_MAJOR_VERSION, + BPFTOOL_MINOR_VERSION, BPFTOOL_PATCH_VERSION); +#endif + printf("using libbpf %s\n", libbpf_version_string()); printf("features:"); if (has_libbfd) { printf(" libbfd"); @@ -478,7 +503,11 @@ int main(int argc, char **argv) } if (!legacy_libbpf) { - ret = libbpf_set_strict_mode(LIBBPF_STRICT_ALL); + /* Allow legacy map definitions for skeleton generation. + * It will still be rejected if users use LIBBPF_STRICT_ALL + * mode for loading generated skeleton. + */ + ret = libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS); if (ret) p_err("failed to enable libbpf strict mode: %d", ret); } diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h index 8d76d937a62b..6e9277ffc68c 100644 --- a/tools/bpf/bpftool/main.h +++ b/tools/bpf/bpftool/main.h @@ -8,10 +8,10 @@ #undef GCC_VERSION #include <stdbool.h> #include <stdio.h> +#include <stdlib.h> #include <linux/bpf.h> #include <linux/compiler.h> #include <linux/kernel.h> -#include <tools/libc_compat.h> #include <bpf/hashmap.h> #include <bpf/libbpf.h> @@ -113,7 +113,9 @@ struct obj_ref { struct obj_refs { int ref_cnt; + bool has_bpf_cookie; struct obj_ref *refs; + __u64 bpf_cookie; }; struct btf; @@ -140,6 +142,10 @@ struct cmd { int cmd_select(const struct cmd *cmds, int argc, char **argv, int (*help)(int argc, char **argv)); +#define MAX_PROG_FULL_NAME 128 +void get_prog_full_name(const struct bpf_prog_info *prog_info, int prog_fd, + char *name_buff, size_t buff_len); + int get_fd_type(int fd); const char *get_fd_type_name(enum bpf_obj_type type); char *get_fdinfo(int fd, const char *key); diff --git a/tools/bpf/bpftool/map.c b/tools/bpf/bpftool/map.c index cc530a229812..c26378f20831 100644 --- a/tools/bpf/bpftool/map.c +++ b/tools/bpf/bpftool/map.c @@ -504,7 +504,7 @@ static int show_map_close_json(int fd, struct bpf_map_info *info) jsonw_uint_field(json_wtr, "max_entries", info->max_entries); if (memlock) - jsonw_int_field(json_wtr, "bytes_memlock", atoi(memlock)); + jsonw_int_field(json_wtr, "bytes_memlock", atoll(memlock)); free(memlock); if (info->type == BPF_MAP_TYPE_PROG_ARRAY) { @@ -620,17 +620,14 @@ static int show_map_close_plain(int fd, struct bpf_map_info *info) u32_as_hash_field(info->id)) printf("\n\tpinned %s", (char *)entry->value); } - printf("\n"); if (frozen_str) { frozen = 
atoi(frozen_str); free(frozen_str); } - if (!info->btf_id && !frozen) - return 0; - - printf("\t"); + if (info->btf_id || frozen) + printf("\n\t"); if (info->btf_id) printf("btf_id %d", info->btf_id); @@ -699,7 +696,7 @@ static int do_show(int argc, char **argv) if (show_pinned) { map_table = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL); - if (!map_table) { + if (IS_ERR(map_table)) { p_err("failed to create hashmap for pinned paths"); return -1; } @@ -805,29 +802,30 @@ static int maps_have_btf(int *fds, int nb_fds) static struct btf *btf_vmlinux; -static struct btf *get_map_kv_btf(const struct bpf_map_info *info) +static int get_map_kv_btf(const struct bpf_map_info *info, struct btf **btf) { - struct btf *btf = NULL; + int err = 0; if (info->btf_vmlinux_value_type_id) { if (!btf_vmlinux) { btf_vmlinux = libbpf_find_kernel_btf(); - if (libbpf_get_error(btf_vmlinux)) + err = libbpf_get_error(btf_vmlinux); + if (err) { p_err("failed to get kernel btf"); + return err; + } } - return btf_vmlinux; + *btf = btf_vmlinux; } else if (info->btf_value_type_id) { - int err; - - btf = btf__load_from_kernel_by_id(info->btf_id); - err = libbpf_get_error(btf); - if (err) { + *btf = btf__load_from_kernel_by_id(info->btf_id); + err = libbpf_get_error(*btf); + if (err) p_err("failed to get btf"); - btf = ERR_PTR(err); - } + } else { + *btf = NULL; } - return btf; + return err; } static void free_map_kv_btf(struct btf *btf) @@ -862,8 +860,7 @@ map_dump(int fd, struct bpf_map_info *info, json_writer_t *wtr, prev_key = NULL; if (wtr) { - btf = get_map_kv_btf(info); - err = libbpf_get_error(btf); + err = get_map_kv_btf(info, &btf); if (err) { goto exit_free; } @@ -1054,11 +1051,8 @@ static void print_key_value(struct bpf_map_info *info, void *key, json_writer_t *btf_wtr; struct btf *btf; - btf = btf__load_from_kernel_by_id(info->btf_id); - if (libbpf_get_error(btf)) { - p_err("failed to get btf"); + if (get_map_kv_btf(info, &btf)) return; - } if (json_output) { print_entry_json(info, key, value, btf); diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c index 649053704bd7..526a332c48e6 100644 --- a/tools/bpf/bpftool/net.c +++ b/tools/bpf/bpftool/net.c @@ -551,7 +551,7 @@ static int do_attach_detach_xdp(int progfd, enum net_attach_type attach_type, if (attach_type == NET_ATTACH_TYPE_XDP_OFFLOAD) flags |= XDP_FLAGS_HW_MODE; - return bpf_set_link_xdp_fd(ifindex, progfd, flags); + return bpf_xdp_attach(ifindex, progfd, flags, NULL); } static int do_attach(int argc, char **argv) diff --git a/tools/bpf/bpftool/pids.c b/tools/bpf/bpftool/pids.c index 56b598eee043..bb6c969a114a 100644 --- a/tools/bpf/bpftool/pids.c +++ b/tools/bpf/bpftool/pids.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) /* Copyright (C) 2020 Facebook */ #include <errno.h> +#include <linux/err.h> #include <stdbool.h> #include <stdio.h> #include <stdlib.h> @@ -77,6 +78,8 @@ static void add_ref(struct hashmap *map, struct pid_iter_entry *e) ref->pid = e->pid; memcpy(ref->comm, e->comm, sizeof(ref->comm)); refs->ref_cnt = 1; + refs->has_bpf_cookie = e->has_bpf_cookie; + refs->bpf_cookie = e->bpf_cookie; err = hashmap__append(map, u32_as_hash_field(e->id), refs); if (err) @@ -101,7 +104,7 @@ int build_obj_refs_table(struct hashmap **map, enum bpf_obj_type type) libbpf_print_fn_t default_print; *map = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL); - if (!*map) { + if (IS_ERR(*map)) { p_err("failed to create hashmap for PID references"); return -1; } @@ -204,6 +207,9 @@ void 
emit_obj_refs_json(struct hashmap *map, __u32 id, if (refs->ref_cnt == 0) break; + if (refs->has_bpf_cookie) + jsonw_lluint_field(json_writer, "bpf_cookie", refs->bpf_cookie); + jsonw_name(json_writer, "pids"); jsonw_start_array(json_writer); for (i = 0; i < refs->ref_cnt; i++) { @@ -233,6 +239,9 @@ void emit_obj_refs_plain(struct hashmap *map, __u32 id, const char *prefix) if (refs->ref_cnt == 0) break; + if (refs->has_bpf_cookie) + printf("\n\tbpf_cookie %llu", (unsigned long long) refs->bpf_cookie); + printf("%s", prefix); for (i = 0; i < refs->ref_cnt; i++) { struct obj_ref *ref = &refs->refs[i]; diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c index 2a21d50516bc..bc4e05542c2b 100644 --- a/tools/bpf/bpftool/prog.c +++ b/tools/bpf/bpftool/prog.c @@ -26,6 +26,7 @@ #include <bpf/btf.h> #include <bpf/hashmap.h> #include <bpf/libbpf.h> +#include <bpf/libbpf_internal.h> #include <bpf/skel_internal.h> #include "cfg.h" @@ -424,8 +425,10 @@ out_free: free(value); } -static void print_prog_header_json(struct bpf_prog_info *info) +static void print_prog_header_json(struct bpf_prog_info *info, int fd) { + char prog_name[MAX_PROG_FULL_NAME]; + jsonw_uint_field(json_wtr, "id", info->id); if (info->type < ARRAY_SIZE(prog_type_name)) jsonw_string_field(json_wtr, "type", @@ -433,8 +436,10 @@ static void print_prog_header_json(struct bpf_prog_info *info) else jsonw_uint_field(json_wtr, "type", info->type); - if (*info->name) - jsonw_string_field(json_wtr, "name", info->name); + if (*info->name) { + get_prog_full_name(info, fd, prog_name, sizeof(prog_name)); + jsonw_string_field(json_wtr, "name", prog_name); + } jsonw_name(json_wtr, "tag"); jsonw_printf(json_wtr, "\"" BPF_TAG_FMT "\"", @@ -455,7 +460,7 @@ static void print_prog_json(struct bpf_prog_info *info, int fd) char *memlock; jsonw_start_object(json_wtr); - print_prog_header_json(info); + print_prog_header_json(info, fd); print_dev_json(info->ifindex, info->netns_dev, info->netns_ino); if (info->load_time) { @@ -480,7 +485,7 @@ static void print_prog_json(struct bpf_prog_info *info, int fd) memlock = get_fdinfo(fd, "memlock"); if (memlock) - jsonw_int_field(json_wtr, "bytes_memlock", atoi(memlock)); + jsonw_int_field(json_wtr, "bytes_memlock", atoll(memlock)); free(memlock); if (info->nr_map_ids) @@ -507,16 +512,20 @@ static void print_prog_json(struct bpf_prog_info *info, int fd) jsonw_end_object(json_wtr); } -static void print_prog_header_plain(struct bpf_prog_info *info) +static void print_prog_header_plain(struct bpf_prog_info *info, int fd) { + char prog_name[MAX_PROG_FULL_NAME]; + printf("%u: ", info->id); if (info->type < ARRAY_SIZE(prog_type_name)) printf("%s ", prog_type_name[info->type]); else printf("type %u ", info->type); - if (*info->name) - printf("name %s ", info->name); + if (*info->name) { + get_prog_full_name(info, fd, prog_name, sizeof(prog_name)); + printf("name %s ", prog_name); + } printf("tag "); fprint_hex(stdout, info->tag, BPF_TAG_SIZE, ""); @@ -534,7 +543,7 @@ static void print_prog_plain(struct bpf_prog_info *info, int fd) { char *memlock; - print_prog_header_plain(info); + print_prog_header_plain(info, fd); if (info->load_time) { char buf[32]; @@ -641,7 +650,7 @@ static int do_show(int argc, char **argv) if (show_pinned) { prog_table = hashmap__new(hash_fn_for_key_as_id, equal_fn_for_key_as_id, NULL); - if (!prog_table) { + if (IS_ERR(prog_table)) { p_err("failed to create hashmap for pinned paths"); return -1; } @@ -972,10 +981,10 @@ static int do_dump(int argc, char **argv) if (json_output && nb_fds 
> 1) { jsonw_start_object(json_wtr); /* prog object */ - print_prog_header_json(&info); + print_prog_header_json(&info, fds[i]); jsonw_name(json_wtr, "insns"); } else if (nb_fds > 1) { - print_prog_header_plain(&info); + print_prog_header_plain(&info, fds[i]); } err = prog_dump(&info, mode, filepath, opcodes, visual, linum); @@ -1264,12 +1273,12 @@ static int do_run(int argc, char **argv) { char *data_fname_in = NULL, *data_fname_out = NULL; char *ctx_fname_in = NULL, *ctx_fname_out = NULL; - struct bpf_prog_test_run_attr test_attr = {0}; const unsigned int default_size = SZ_32K; void *data_in = NULL, *data_out = NULL; void *ctx_in = NULL, *ctx_out = NULL; unsigned int repeat = 1; int fd, err; + LIBBPF_OPTS(bpf_test_run_opts, test_attr); if (!REQ_ARGS(4)) return -1; @@ -1387,14 +1396,13 @@ static int do_run(int argc, char **argv) goto free_ctx_in; } - test_attr.prog_fd = fd; test_attr.repeat = repeat; test_attr.data_in = data_in; test_attr.data_out = data_out; test_attr.ctx_in = ctx_in; test_attr.ctx_out = ctx_out; - err = bpf_prog_test_run_xattr(&test_attr); + err = bpf_prog_test_run_opts(fd, &test_attr); if (err) { p_err("failed to run program: %s", strerror(errno)); goto free_ctx_out; @@ -1551,9 +1559,9 @@ static int load_with_options(int argc, char **argv, bool first_prog_only) if (fd < 0) goto err_free_reuse_maps; - new_map_replace = reallocarray(map_replace, - old_map_fds + 1, - sizeof(*map_replace)); + new_map_replace = libbpf_reallocarray(map_replace, + old_map_fds + 1, + sizeof(*map_replace)); if (!new_map_replace) { p_err("mem alloc failed"); goto err_free_reuse_maps; @@ -2275,10 +2283,10 @@ static int do_profile(int argc, char **argv) profile_obj->rodata->num_metric = num_metric; /* adjust map sizes */ - bpf_map__resize(profile_obj->maps.events, num_metric * num_cpu); - bpf_map__resize(profile_obj->maps.fentry_readings, num_metric); - bpf_map__resize(profile_obj->maps.accum_readings, num_metric); - bpf_map__resize(profile_obj->maps.counts, 1); + bpf_map__set_max_entries(profile_obj->maps.events, num_metric * num_cpu); + bpf_map__set_max_entries(profile_obj->maps.fentry_readings, num_metric); + bpf_map__set_max_entries(profile_obj->maps.accum_readings, num_metric); + bpf_map__set_max_entries(profile_obj->maps.counts, 1); /* change target name */ profile_tgt_name = profile_target_name(profile_tgt_fd); diff --git a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c index f70702fcb224..eb05ea53afb1 100644 --- a/tools/bpf/bpftool/skeleton/pid_iter.bpf.c +++ b/tools/bpf/bpftool/skeleton/pid_iter.bpf.c @@ -38,6 +38,17 @@ static __always_inline __u32 get_obj_id(void *ent, enum bpf_obj_type type) } } +/* could be used only with BPF_LINK_TYPE_PERF_EVENT links */ +static __u64 get_bpf_cookie(struct bpf_link *link) +{ + struct bpf_perf_link *perf_link; + struct perf_event *event; + + perf_link = container_of(link, struct bpf_perf_link, link); + event = BPF_CORE_READ(perf_link, perf_file, private_data); + return BPF_CORE_READ(event, bpf_cookie); +} + SEC("iter/task_file") int iter(struct bpf_iter__task_file *ctx) { @@ -69,8 +80,19 @@ int iter(struct bpf_iter__task_file *ctx) if (file->f_op != fops) return 0; + __builtin_memset(&e, 0, sizeof(e)); e.pid = task->tgid; e.id = get_obj_id(file->private_data, obj_type); + + if (obj_type == BPF_OBJ_LINK) { + struct bpf_link *link = (struct bpf_link *) file->private_data; + + if (BPF_CORE_READ(link, type) == BPF_LINK_TYPE_PERF_EVENT) { + e.has_bpf_cookie = true; + e.bpf_cookie = get_bpf_cookie(link); + } + } + 
bpf_probe_read_kernel_str(&e.comm, sizeof(e.comm), task->group_leader->comm); bpf_seq_write(ctx->meta->seq, &e, sizeof(e));
diff --git a/tools/bpf/bpftool/skeleton/pid_iter.h b/tools/bpf/bpftool/skeleton/pid_iter.h index 5692cf257adb..bbb570d4cca6 100644 --- a/tools/bpf/bpftool/skeleton/pid_iter.h +++ b/tools/bpf/bpftool/skeleton/pid_iter.h @@ -6,6 +6,8 @@ struct pid_iter_entry { __u32 id; int pid; + __u64 bpf_cookie; + bool has_bpf_cookie; char comm[16]; };
diff --git a/tools/bpf/bpftool/struct_ops.c b/tools/bpf/bpftool/struct_ops.c index 2f693b082bdb..e08a6ff2866c 100644 --- a/tools/bpf/bpftool/struct_ops.c +++ b/tools/bpf/bpftool/struct_ops.c @@ -480,7 +480,6 @@ static int do_unregister(int argc, char **argv) static int do_register(int argc, char **argv) { LIBBPF_OPTS(bpf_object_open_opts, open_opts); - const struct bpf_map_def *def; struct bpf_map_info info = {}; __u32 info_len = sizeof(info); int nr_errs = 0, nr_maps = 0; @@ -510,8 +509,7 @@ static int do_register(int argc, char **argv) } bpf_object__for_each_map(map, obj) { - def = bpf_map__def(map); - if (def->type != BPF_MAP_TYPE_STRUCT_OPS) + if (bpf_map__type(map) != BPF_MAP_TYPE_STRUCT_OPS) continue; link = bpf_map__attach_struct_ops(map);
diff --git a/tools/bpf/bpftool/xlated_dumper.c b/tools/bpf/bpftool/xlated_dumper.c index f1f32e21d5cd..2d9cd6a7b3c8 100644 --- a/tools/bpf/bpftool/xlated_dumper.c +++ b/tools/bpf/bpftool/xlated_dumper.c @@ -8,6 +8,7 @@ #include <string.h> #include <sys/types.h> #include <bpf/libbpf.h> +#include <bpf/libbpf_internal.h> #include "disasm.h" #include "json_writer.h" @@ -32,8 +33,8 @@ void kernel_syms_load(struct dump_data *dd) return; while (fgets(buff, sizeof(buff), fp)) { - tmp = reallocarray(dd->sym_mapping, dd->sym_count + 1, - sizeof(*dd->sym_mapping)); + tmp = libbpf_reallocarray(dd->sym_mapping, dd->sym_count + 1, + sizeof(*dd->sym_mapping)); if (!tmp) { out: free(dd->sym_mapping);
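
Usage notes
-----------

The two "bpftool gen" subcommands introduced above complement each other: "gen min_core_btf" distills a kernel's BTF down to the types that a set of objects' CO-RE relocations actually reference, and the result is handed back to libbpf when the object is opened on a BTF-less target. Following the usage string added to do_help() ("min_core_btf INPUT OUTPUT OBJECT [OBJECT...]"), an invocation looks like this; all file names here and below are illustrative, not taken from the patch:

	# distill the target kernel's BTF down to what my_prog.bpf.o needs
	bpftool gen min_core_btf ./vmlinux-5.4.btf ./my_prog.min.btf my_prog.bpf.o

On the consumer side, a minimal sketch, assuming libbpf's btf_custom_path open option is used to point CO-RE relocation at the minimized file:

	#include <bpf/libbpf.h>

	static int load_with_min_btf(void)
	{
		/* Resolve CO-RE relocations against the minimized BTF
		 * instead of the (absent) kernel BTF.
		 */
		LIBBPF_OPTS(bpf_object_open_opts, opts,
			.btf_custom_path = "./my_prog.min.btf");
		struct bpf_object *obj;
		int err;

		obj = bpf_object__open_file("my_prog.bpf.o", &opts);
		err = libbpf_get_error(obj);
		if (err)
			return err;

		/* check that the object loads, then release it */
		err = bpf_object__load(obj);
		bpf_object__close(obj);
		return err;
	}

A subskeleton works in the opposite direction from a skeleton: rather than producing a bpf_object, it is opened from one that already exists, so a library can inspect the maps, programs and global variables of an object it did not itself load. A sketch of such a consumer, assuming a header generated with "bpftool gen subskeleton shared.bpf.o name shared" for an object defining a global "int counter" in .data (both names hypothetical):

	#include "shared.subskel.h"

	static int read_counter(const struct bpf_object *src)
	{
		struct shared *sub;
		int val;

		sub = shared__open(src);
		if (!sub)
			return -1;

		/* counter resolves through the mmapable .data map, via
		 * the s->vars[] entries emitted by do_subskeleton() above
		 */
		val = *sub->data.counter;

		shared__destroy(sub);
		return val;
	}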