path: root/tools/lib/bpf
2022-03-31  Merge tag 'kbuild-v5.18-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds; 1 file, -1/+1)
Pull Kbuild updates from Masahiro Yamada:

 - Add new environment variables, USERCFLAGS and USERLDFLAGS, to allow
   additional flags to be passed to user-space programs

 - Fix missing fflush() bugs in Kconfig and fixdep

 - Fix a minor bug in the comment format of the .config file

 - Make kallsyms ignore llvm's local labels, .L*

 - Fix UAPI compile-test for cross-compiling with Clang

 - Extend the LLVM= syntax to support LLVM=<suffix> form for using a
   particular version of LLVM, and LLVM=<prefix> form for using custom
   LLVM in a particular directory path

 - Clean up Makefiles

* tag 'kbuild-v5.18-v2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
  kbuild: Make $(LLVM) more flexible
  kbuild: add --target to correctly cross-compile UAPI headers with Clang
  fixdep: use fflush() and ferror() to ensure successful write to files
  arch: syscalls: simplify uapi/kapi directory creation
  usr/include: replace extra-y with always-y
  certs: simplify empty certs creation in certs/Makefile
  certs: include certs/signing_key.x509 unconditionally
  kallsyms: ignore all local labels prefixed by '.L'
  kconfig: fix missing '# end of' for empty menu
  kconfig: add fflush() before ferror() check
  kbuild: replace $(if A,A,B) with $(or A,B)
  kbuild: Add environment variables for userprogs flags
  kbuild: unify cmd_copy and cmd_shipped
2022-03-21  libbpf: Close fd in bpf_object__reuse_map  (Hengqi Chen; 1 file, -1/+1)
pin_fd is dup-ed and assigned in bpf_map__reuse_fd. Close it in bpf_object__reuse_map after reuse. Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220319030533.3132250-1-hengqi.chen@gmail.com
2022-03-20  libbpf: Avoid NULL deref when initializing map BTF info  (Andrii Nakryiko; 1 file, -0/+3)
If a BPF object doesn't have BTF info, don't attempt to search for BTF types describing the BPF map key or value layout. Fixes: 262cfb74ffda ("libbpf: Init btf_{key,value}_type_id on internal map open") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220320001911.3640917-1-andrii@kernel.org
2022-03-17  libbpf: Add subskeleton scaffolding  (Delyan Kratunov; 3 files, -21/+149)
In symmetry with bpf_object__open_skeleton(), bpf_object__open_subskeleton() performs the actual walking and linking of maps, progs, and globals described by bpf_*_skeleton objects. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/6942a46fbe20e7ebf970affcca307ba616985b15.1647473511.git.delyank@fb.com
2022-03-17  libbpf: Init btf_{key,value}_type_id on internal map open  (Delyan Kratunov; 1 file, -1/+14)
For internal and user maps, look up the key and value btf types on open() and not load(), so that `bpf_map_btf_value_type_id` is usable in `bpftool gen`. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/78dbe4e457b4a05e098fc6c8f50014b680c86e4e.1647473511.git.delyank@fb.com
2022-03-17  libbpf: .text routines are subprograms in strict mode  (Delyan Kratunov; 2 files, -0/+11)
Currently, libbpf considers a single routine in .text to be a program. This is particularly confusing when it comes to library objects - a single routine meant to be used as an extern will instead be considered a bpf_program. This patch hides this compatibility behavior behind the pre-existing SEC_NAME strict mode flag. Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/018de8d0d67c04bf436055270d35d394ba393505.1647473511.git.delyank@fb.com
2022-03-17  libbpf: Add bpf_program__attach_kprobe_multi_opts function  (Jiri Olsa; 3 files, -0/+184)
Adding bpf_program__attach_kprobe_multi_opts function for attaching a kprobe program to multiple functions.

  struct bpf_link *
  bpf_program__attach_kprobe_multi_opts(const struct bpf_program *prog,
                                        const char *pattern,
                                        const struct bpf_kprobe_multi_opts *opts);

User can specify functions to attach with the 'pattern' argument, which allows wildcards ('*' and '?' supported), or provide symbols or addresses directly through the opts argument. These 3 options are mutually exclusive. When using symbols or addresses, the user can also provide a cookie value for each symbol/address that can be retrieved later in the bpf program with the bpf_get_attach_cookie helper.

  struct bpf_kprobe_multi_opts {
          size_t sz;
          const char **syms;
          const unsigned long *addrs;
          const __u64 *cookies;
          size_t cnt;
          bool retprobe;
          size_t :0;
  };

Symbols, addresses and cookies are provided through the opts object (syms/addrs/cookies) as array pointers with a specified count (cnt). Each cookie value is paired with the provided function address or symbol at the same array index. The program can also be attached as a return probe if 'retprobe' is set.

For quick usage with a NULL opts argument, like:

  bpf_program__attach_kprobe_multi_opts(prog, "ksys_*", NULL)

the 'prog' will be attached as a kprobe to the 'ksys_*' functions.

Also adding new program sections for automatic attachment:

  kprobe.multi/<symbol_pattern>
  kretprobe.multi/<symbol_pattern>

The symbol_pattern is used as the 'pattern' argument in the bpf_program__attach_kprobe_multi_opts function.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20220316122419.933957-10-jolsa@kernel.org
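A minimal user-space sketch of the opts-based form, assuming an already loaded 'prog' handle; the symbol names and cookie values below are purely illustrative:

  #include <errno.h>
  #include <stdio.h>
  #include <bpf/libbpf.h>

  static struct bpf_link *attach_two_kprobes(struct bpf_program *prog)
  {
          const char *syms[] = { "ksys_read", "ksys_write" };
          __u64 cookies[] = { 11, 22 };
          LIBBPF_OPTS(bpf_kprobe_multi_opts, opts,
                  .syms = syms,
                  .cookies = cookies,
                  .cnt = 2,
          );
          struct bpf_link *link;

          /* pattern must be NULL when syms/addrs are used; they are exclusive */
          link = bpf_program__attach_kprobe_multi_opts(prog, NULL, &opts);
          if (!link)
                  fprintf(stderr, "attach failed: %d\n", -errno);
          return link;
  }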
2022-03-17  libbpf: Add bpf_link_create support for multi kprobes  (Jiri Olsa; 2 files, -1/+17)
Adding new kprobe_multi struct to bpf_link_create_opts object to pass multiple kprobe data to link_create attr uapi. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220316122419.933957-9-jolsa@kernel.org
2022-03-17  libbpf: Add libbpf_kallsyms_parse function  (Jiri Olsa; 2 files, -24/+43)
Move the kallsyms parsing into an internal libbpf_kallsyms_parse function so it can be used from other places. It will be used in the following changes. Signed-off-by: Jiri Olsa <jolsa@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220316122419.933957-8-jolsa@kernel.org
2022-03-09  libbpf: Support batch_size option to bpf_prog_test_run  (Toke Høiland-Jørgensen; 2 files, -1/+3)
Add support for setting the new batch_size parameter to BPF_PROG_TEST_RUN to libbpf; just add it as an option and pass it through to the kernel. Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220309105346.100053-4-toke@redhat.com
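A hedged sketch of setting the new option through the OPTS-based test-run API; the prog fd, packet buffer, and the pairing with the XDP live-frames flag from the same series are assumptions for illustration:

  #include <bpf/bpf.h>

  static int run_xdp_batched(int prog_fd, void *pkt, __u32 pkt_len)
  {
          LIBBPF_OPTS(bpf_test_run_opts, opts,
                  .data_in = pkt,
                  .data_size_in = pkt_len,
                  .flags = BPF_F_TEST_XDP_LIVE_FRAMES,
                  .repeat = 64,
                  .batch_size = 32,  /* new option, passed straight through to the kernel */
          );

          return bpf_prog_test_run_opts(prog_fd, &opts);
  }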
2022-03-07  libbpf: Fix array_size.cocci warning  (Guo Zhengkui; 2 files, -3/+4)
Fix the following coccicheck warnings:

  tools/lib/bpf/bpf.c:114:31-32: WARNING: Use ARRAY_SIZE
  tools/lib/bpf/xsk.c:484:34-35: WARNING: Use ARRAY_SIZE
  tools/lib/bpf/xsk.c:485:35-36: WARNING: Use ARRAY_SIZE

It has been tested with gcc (Debian 8.3.0-6) 8.3.0 on x86_64.

Signed-off-by: Guo Zhengkui <guozhengkui@vivo.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220306023426.19324-1-guozhengkui@vivo.com
2022-03-07  libbpf: Unmap rings when umem deleted  (lic121; 1 file, -0/+11)
xsk_umem__create() does mmap for fill/comp rings, but xsk_umem__delete() doesn't do the unmap. This works fine for regular cases, because xsk_socket__delete() does unmap for the rings. But for the case that xsk_socket__create_shared() fails, umem rings are not unmapped. fill_save/comp_save are checked to determine if the rings have already been unmapped by xsk. If fill_save and comp_save are NULL, it means that the rings have already been used by xsk; then they are supposed to be unmapped by xsk_socket__delete(). Otherwise, xsk_umem__delete() does the unmap. Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices") Signed-off-by: Cheng Li <lic121@chinatelecom.cn> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220301132623.GA19995@vscode.7~
2022-03-05  libbpf: Support custom SEC() handlers  (Andrii Nakryiko; 4 files, -53/+268)
Allow registering and unregistering custom handlers for BPF program SEC() definitions. This allows user applications and libraries to plug into libbpf's declarative SEC() definition handling logic. This makes it possible to offload complex and intricate custom logic to external libraries, while still providing a great user experience.

One such example is a USDT handling library, which has a lot of code and complexity that doesn't make sense to put into libbpf directly, but it would be really great for users to be able to specify BPF programs with something like SEC("usdt/<path-to-binary>:<usdt_provider>:<usdt_name>") and have the correct BPF program type set (BPF_PROG_TYPE_KPROBE, as it is a uprobe) and even support BPF skeleton's auto-attach logic.

In some cases, it might even be a good idea to override libbpf's default handling, like for SEC("perf_event") programs. With a custom library, it's possible to extend the logic to support specifying a perf event specification right there in the SEC() definition without burdening libbpf with lots of custom logic or extra library dependencies (e.g., libpfm4). With the current patch it's possible to override libbpf's SEC("perf_event") handling and specify completely custom handling.

Further, it's possible to specify a generic fallback handler for any SEC() that doesn't match any other custom or standard libbpf handlers. This makes it possible to accommodate whatever legacy use cases there might be, if necessary.

See doc comments for libbpf_register_prog_handler() and libbpf_unregister_prog_handler() for detailed semantics.

This patch also bumps the libbpf development version to v0.8 and adds the new APIs there.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20220305010129.1549719-3-andrii@kernel.org
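A minimal sketch of what registering a custom section handler could look like; the "myperf" section name and the chosen program type are invented for illustration, and passing NULL opts (assumed to keep libbpf's default setup/attach behavior) is an assumption rather than a documented recipe:

  #include <bpf/libbpf.h>

  static int register_custom_sec(void)
  {
          /* route SEC("myperf") programs to the perf_event program type */
          int handler_id = libbpf_register_prog_handler("myperf",
                                                        BPF_PROG_TYPE_PERF_EVENT,
                                                        0 /* expected attach type */,
                                                        NULL /* default opts */);

          /* positive handler id on success, negative error otherwise */
          return handler_id;
  }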
2022-03-05  libbpf: Allow BPF program auto-attach handlers to bail out  (Andrii Nakryiko; 1 file, -55/+85)
Allow some BPF program types to support auto-attach only in a subset of cases. Currently, if some BPF program type specifies an attach callback, it is assumed that during the skeleton attach operation all such programs either successfully attach or the entire skeleton attachment fails. If some program doesn't support auto-attachment from the skeleton, such BPF program types shouldn't have an attach callback specified.

This is limiting for cases when, depending on how full the SEC("") definition is, there could either be enough details to support auto-attach or there might not be, and the user has to use some specific API to provide more details at runtime. One specific example of such desired behavior might be SEC("uprobe"). If it's specified as just a uprobe, auto-attach isn't possible. But if it's SEC("uprobe/<some_binary>:<some_func>"), then there are enough details to support auto-attach.

Note that there is a somewhat subtle difference between the auto-attach behavior of BPF skeleton and using "generic" bpf_program__attach(prog) (which uses the same attach handlers under the cover). Skeleton allows some programs within a bpf_object to not have auto-attach implemented and doesn't treat that as an error. Instead, such BPF programs are just skipped during the skeleton's (optional) attach step. bpf_program__attach(), on the other hand, is called when the user *expects* auto-attach to work, so if the specified program doesn't implement or doesn't support auto-attach functionality, that will be treated as an error.

Another improvement to the way libbpf is handling SEC()s is to not require providing a dummy kernel function name for kprobe. Currently, SEC("kprobe/whatever") is necessary even if the actual kernel function is determined by the user at runtime and bpf_program__attach_kprobe() is used to specify it. With the changes in this patch, it's possible to support both SEC("kprobe") and SEC("kprobe/<actual_kernel_function>"), while only in the latter case will auto-attach be performed. In the former one, such a kprobe will be skipped during the skeleton attach operation.

Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Link: https://lore.kernel.org/bpf/20220305010129.1549719-2-andrii@kernel.org
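A hedged illustration of the SEC("kprobe") case above: the BPF side leaves the target unspecified and is skipped by skeleton auto-attach, and user space supplies the function at runtime. The program name, skeleton handle and target function are placeholders:

  /* BPF side */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("kprobe")
  int BPF_KPROBE(handle_entry)
  {
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";

  /* user-space side: attach explicitly, since auto-attach skips this program */
  struct bpf_link *link;

  link = bpf_program__attach_kprobe(skel->progs.handle_entry,
                                    false /* retprobe */, "tcp_v4_connect");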
2022-03-03  libbpf: Add a check to ensure that page_cnt is non-zero  (Yuntao Wang; 1 file, -2/+2)
The page_cnt parameter is used to specify the number of memory pages allocated for each per-CPU buffer; it must be non-zero and a power of 2. Currently, the __perf_buffer__new() function attempts to validate that page_cnt is a power of 2 but forgets to check for the case where page_cnt is zero. We can fix it by replacing 'page_cnt & (page_cnt - 1)' with 'page_cnt == 0 || (page_cnt & (page_cnt - 1))'. This way, we also don't need to add a check in perf_buffer__new_v0_6_0() to make sure that page_cnt is non-zero, and the check for zero in perf_buffer__new_raw_v0_6_0() can also be removed. The code will be cleaner and more readable. Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220303005921.53436-1-ytcoode@gmail.com
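For reference, the combined condition boils down to a helper along these lines (a sketch of the check described above, not the exact diff):

  #include <stdbool.h>
  #include <stddef.h>

  /* true only for a non-zero, power-of-two page count */
  static bool page_cnt_is_valid(size_t page_cnt)
  {
          return page_cnt != 0 && (page_cnt & (page_cnt - 1)) == 0;
  }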
2022-03-01  libbpf: Skip forward declaration when counting duplicated type names  (Xu Kuohai; 1 file, -0/+5)
Currently, if a declaration appears in the BTF before the definition, the definition is dumped as a conflicting name, e.g.:

  $ bpftool btf dump file vmlinux format raw | grep "'unix_sock'"
  [81287] FWD 'unix_sock' fwd_kind=struct
  [89336] STRUCT 'unix_sock' size=1024 vlen=14

  $ bpftool btf dump file vmlinux format c | grep "struct unix_sock"
  struct unix_sock;
  struct unix_sock___2 {          <--- conflict, the "___2" is unexpected
          struct unix_sock___2 *unix_sk;

This causes a compilation error if the dump output is used as a header file. Fix it by skipping declarations when counting duplicated type names.

Fixes: 351131b51c7a ("libbpf: add btf_dump API for BTF-to-C conversion")
Signed-off-by: Xu Kuohai <xukuohai@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20220301053250.1464204-2-xukuohai@huawei.com
2022-02-28  libbpf: Fix BPF_MAP_TYPE_PERF_EVENT_ARRAY auto-pinning  (Stijn Tintel; 1 file, -19/+25)
When a BPF map of type BPF_MAP_TYPE_PERF_EVENT_ARRAY doesn't have the max_entries parameter set, the map will be created with max_entries set to the number of available CPUs. When we try to reuse such a pinned map, map_is_reuse_compat will return false, as max_entries in the map definition differs from max_entries of the existing map, causing the following error: libbpf: couldn't reuse pinned map at '/sys/fs/bpf/m_logging': parameter mismatch Fix this by overwriting max_entries in the map definition. For this to work, we need to do this in bpf_object__create_maps, before calling bpf_object__reuse_map. Fixes: 57a00f41644f ("libbpf: Add auto-pinning of maps when loading BPF objects") Signed-off-by: Stijn Tintel <stijn@linux-ipv6.be> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20220225152355.315204-1-stijn@linux-ipv6.be
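The affected setup typically looks like this hedged BPF-side sketch: a perf event array declared without max_entries and pinned by name (the map name mirrors the pin path in the error above; the exact declaration style is an assumption):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
          __uint(key_size, sizeof(int));
          __uint(value_size, sizeof(int));
          /* no max_entries: libbpf sizes it to the number of available CPUs */
          __uint(pinning, LIBBPF_PIN_BY_NAME);
  } m_logging SEC(".maps");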
2022-02-23  libbpf: Simplify the find_elf_sec_sz() function  (Yuntao Wang; 1 file, -4/+2)
The check in the last return statement is unnecessary; we can just return the ret variable. But we can simplify the function further by returning 0 immediately if we find the section size and -ENOENT otherwise. Thus we can also remove the ret variable. Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220223085244.3058118-1-ytcoode@gmail.com
2022-02-22  libbpf: Remove redundant check in btf_fixup_datasec()  (Yuntao Wang; 1 file, -1/+1)
The check 't->size && t->size != size' is redundant because if t->size is non-zero, we just skip straight to sorting variables. Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220220072750.209215-1-ytcoode@gmail.com
2022-02-17  libbpf: Fix memleak in libbpf_netlink_recv()  (Andrii Nakryiko; 1 file, -3/+5)
Ensure that libbpf_netlink_recv() frees the dynamically allocated buffer in all code paths. Fixes: 9c3de619e13e ("libbpf: Use dynamically allocated buffer when receiving netlink messages") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/bpf/20220217073958.276959-1-andrii@kernel.org
2022-02-16  libbpf: Expose bpf_core_{add,free}_cands() to bpftool  (Mauricio Vásquez; 2 files, -7/+19)
Expose bpf_core_add_cands() and bpf_core_free_cands() to handle candidates list. Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com> Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co> Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220215225856.671072-3-mauricio@kinvolk.io
2022-02-16  libbpf: Split bpf_core_apply_relo()  (Mauricio Vásquez; 3 files, -93/+99)
BTFGen needs to run the core relocation logic in order to understand what types are involved in a given relocation. Currently bpf_core_apply_relo() calculates and **applies** a relocation to an instruction. Having both operations in the same function makes it difficult to only calculate the relocation without patching the instruction. This commit splits that logic into two different phases: (1) calculate the relocation and (2) patch the instruction. For the first phase bpf_core_apply_relo() is renamed to bpf_core_calc_relo_insn(), which is now only in charge of calculating the relocation; the second phase uses the already existing bpf_core_patch_insn(). bpf_object__relocate_core() uses both of them, while BTFGen will use only bpf_core_calc_relo_insn(). Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Rafael David Tinoco <rafael.tinoco@aquasec.com> Signed-off-by: Lorenzo Fontana <lorenzo.fontana@elastic.co> Signed-off-by: Leonardo Di Donato <leonardo.didonato@elastic.co> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220215225856.671072-2-mauricio@kinvolk.io
2022-02-15  kbuild: replace $(if A,A,B) with $(or A,B)  (Masahiro Yamada; 1 file, -1/+1)
$(or ...) is available since GNU Make 3.81, and useful to shorten the code in some places. Convert as follows:

  $(if A,A,B)    -->   $(or A,B)

This patch also converts:

  $(if A, A, B)  -->   $(or A, B)

Strictly speaking, the latter is not an equivalent conversion because GNU Make keeps spaces after commas; if A is not empty, $(if A, A, B) expands to " A", while $(or A, B) expands to "A". Anyway, preceding spaces are not significant in the code hunks I touched.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
2022-02-12  libbpf: Use dynamically allocated buffer when receiving netlink messages  (Toke Høiland-Jørgensen; 1 file, -4/+51)
When receiving netlink messages, libbpf was using a statically allocated stack buffer of 4k bytes. This happened to work fine on systems with a 4k page size, but on systems with larger page sizes it can lead to truncated messages. The user-visible impact of this was that libbpf would insist no XDP program was attached to some interfaces because that bit of the netlink message got chopped off. Fix this by switching to a dynamically allocated buffer; we borrow the approach from iproute2 of using recvmsg() with MSG_PEEK|MSG_TRUNC to get the actual size of the pending message before receiving it, adjusting the buffer as necessary. While we're at it, also add retries on interrupted system calls around the recvmsg() call. v2: - Move peek logic to libbpf_netlink_recv(), don't double free on ENOMEM. Fixes: 8bbb77b7c7a2 ("libbpf: Add various netlink helpers") Reported-by: Zhiqian Guan <zhguan@redhat.com> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com> Link: https://lore.kernel.org/bpf/20220211234819.612288-1-toke@redhat.com
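The peek-then-receive pattern borrowed from iproute2 looks roughly like this simplified sketch (not the exact libbpf code; error handling and EINTR retries are trimmed):

  #include <sys/socket.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  /* return the real length of the pending netlink message without consuming it */
  static ssize_t netlink_peek_msg_size(int sock)
  {
          char one;
          struct iovec iov = { .iov_base = &one, .iov_len = 1 };
          struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

          /* MSG_PEEK keeps the message queued; MSG_TRUNC makes recvmsg()
           * report the full message length even though only 1 byte is read */
          return recvmsg(sock, &msg, MSG_PEEK | MSG_TRUNC);
  }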
2022-02-11  libbpf: Fix libbpf.map inheritance chain for LIBBPF_0.7.0  (Andrii Nakryiko; 1 file, -1/+1)
Ensure that LIBBPF_0.7.0 inherits everything from LIBBPF_0.6.0. Fixes: dbdd2c7f8cec ("libbpf: Add API to get/set log_level at per-program level") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220211205235.2089104-1-andrii@kernel.org
2022-02-10  libbpf: Prepare light skeleton for the kernel.  (Alexei Starovoitov; 2 files, -21/+179)
Prepare light skeleton to be used in the kernel module and in the user space. The look and feel of lskel.h is mostly the same, with the difference that for user space the skel->rodata is the same pointer before and after the skel_load operation, while in the kernel the skel->rodata after skel_open and the skel->rodata after skel_load are different pointers. Typical usage of the skeleton remains the same for kernel and user space:

  skel = my_bpf__open();
  skel->rodata->my_global_var = init_val;
  err = my_bpf__load(skel);
  err = my_bpf__attach(skel);
  // access skel->rodata->my_global_var;
  // access skel->bss->another_var;

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220209232001.27490-3-alexei.starovoitov@gmail.com
2022-02-09  libbpf: Fix compilation warning due to mismatched printf format  (Andrii Nakryiko; 1 file, -1/+2)
On ppc64le architecture __s64 is long int and requires %ld. Cast to ssize_t and use %zd to avoid architecture-specific specifiers. Fixes: 4172843ed4a3 ("libbpf: Fix signedness bug in btf_dump_array_data()") Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220209063909.1268319-1-andrii@kernel.org
2022-02-08  libbpf: Add BPF_KPROBE_SYSCALL macro  (Hengqi Chen; 1 file, -0/+35)
Add syscall-specific variant of BPF_KPROBE named BPF_KPROBE_SYSCALL ([0]). The new macro hides the underlying way of getting syscall input arguments. With the new macro, the following code:

  SEC("kprobe/__x64_sys_close")
  int BPF_KPROBE(do_sys_close, struct pt_regs *regs)
  {
          int fd;

          fd = PT_REGS_PARM1_CORE(regs);
          /* do something with fd */
  }

can be written as:

  SEC("kprobe/__x64_sys_close")
  int BPF_KPROBE_SYSCALL(do_sys_close, int fd)
  {
          /* do something with fd */
  }

[0] Closes: https://github.com/libbpf/libbpf/issues/425

Signed-off-by: Hengqi Chen <hengqi.chen@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220207143134.2977852-2-hengqi.chen@gmail.com
2022-02-08  libbpf: Fix accessing the first syscall argument on s390  (Ilya Leoshkevich; 1 file, -0/+6)
On s390, the first syscall argument should be accessed via orig_gpr2 (see arch/s390/include/asm/syscall.h). Currently gpr[2] is used instead, leading to bpf_syscall_macro test failure. orig_gpr2 cannot be added to user_pt_regs, since its layout is a part of the ABI. Therefore provide access to it only through PT_REGS_PARM1_CORE_SYSCALL() by using a struct pt_regs flavor. Reported-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-11-iii@linux.ibm.com
2022-02-08  libbpf: Fix accessing the first syscall argument on arm64  (Ilya Leoshkevich; 1 file, -0/+6)
On arm64, the first syscall argument should be accessed via orig_x0 (see arch/arm64/include/asm/syscall.h). Currently regs[0] is used instead, leading to bpf_syscall_macro test failure. orig_x0 cannot be added to struct user_pt_regs, since its layout is a part of the ABI. Therefore provide access to it only through PT_REGS_PARM1_CORE_SYSCALL() by using a struct pt_regs flavor. Reported-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-10-iii@linux.ibm.com
2022-02-08  libbpf: Allow overriding PT_REGS_PARM1{_CORE}_SYSCALL  (Ilya Leoshkevich; 1 file, -8/+12)
arm64 and s390 need a special way to access the first syscall argument. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-9-iii@linux.ibm.com
2022-02-08  libbpf: Fix accessing syscall arguments on riscv  (Ilya Leoshkevich; 1 file, -0/+2)
riscv does not select ARCH_HAS_SYSCALL_WRAPPER, so its syscall handlers take "unpacked" syscall arguments. Indicate this to libbpf using PT_REGS_SYSCALL_REGS macro. Reported-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-7-iii@linux.ibm.com
2022-02-08  libbpf: Fix riscv register names  (Ilya Leoshkevich; 1 file, -2/+2)
riscv registers are accessed via struct user_regs_struct, not struct pt_regs. The program counter member in this struct is called pc, not epc. The frame pointer is called s0, not fp. Fixes: 3cc31d794097 ("libbpf: Normalize PT_REGS_xxx() macro definitions") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-6-iii@linux.ibm.com
2022-02-08  libbpf: Fix accessing syscall arguments on powerpc  (Ilya Leoshkevich; 1 file, -0/+2)
powerpc does not select ARCH_HAS_SYSCALL_WRAPPER, so its syscall handlers take "unpacked" syscall arguments. Indicate this to libbpf using PT_REGS_SYSCALL_REGS macro. Reported-by: Heiko Carstens <hca@linux.ibm.com> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Tested-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Link: https://lore.kernel.org/bpf/20220209021745.2215452-5-iii@linux.ibm.com
2022-02-08  libbpf: Add PT_REGS_SYSCALL_REGS macro  (Ilya Leoshkevich; 1 file, -0/+10)
Architectures that select ARCH_HAS_SYSCALL_WRAPPER pass a pointer to struct pt_regs to syscall handlers, others unpack it into individual function parameters. Introduce a macro to describe what a particular arch does. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220209021745.2215452-3-iii@linux.ibm.com
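A hedged BPF-side sketch of how the macro is typically used together with the syscall-argument accessors; the traced x86_64 function name is just an example:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  SEC("kprobe/__x64_sys_close")
  int probe_close(struct pt_regs *ctx)
  {
          /* on ARCH_HAS_SYSCALL_WRAPPER archs this dereferences the wrapped
           * pt_regs pointer; on other archs it is just ctx itself */
          struct pt_regs *regs = PT_REGS_SYSCALL_REGS(ctx);
          long fd = PT_REGS_PARM1_CORE_SYSCALL(regs);

          bpf_printk("close fd %ld", fd);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";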
2022-02-08  libbpf: Fix signedness bug in btf_dump_array_data()  (Dan Carpenter; 1 file, -2/+3)
The btf__resolve_size() function returns negative error codes so "elem_size" must be signed for the error handling to work. Fixes: 920d16af9b42 ("libbpf: BTF dumper support for typed data") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220208071552.GB10495@kili
2022-02-07  libbpf: Remove mode check in libbpf_set_strict_mode()  (Mauricio Vásquez; 1 file, -8/+0)
libbpf_set_strict_mode() checks that the passed mode doesn't contain extra bits for LIBBPF_STRICT_* flags that don't exist yet. It makes it difficult for applications to disable some strict flags as something like "LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS" is rejected by this check and they have to use a rather complicated formula to calculate it.[0] One possibility is to change LIBBPF_STRICT_ALL to only contain the bits of all existing LIBBPF_STRICT_* flags instead of 0xffffffff. However it's not possible because the idea is that applications compiled against older libbpf_legacy.h would still be opting into latest LIBBPF_STRICT_ALL features.[1] The other possibility is to remove that check so something like "LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS" is allowed. It's what this commit does. [0]: https://lore.kernel.org/bpf/20220204220435.301896-1-mauricio@kinvolk.io/ [1]: https://lore.kernel.org/bpf/CAEf4BzaTWa9fELJLh+bxnOb0P1EMQmaRbJVG0L+nXZdy0b8G3Q@mail.gmail.com/ Fixes: 93b8952d223a ("libbpf: deprecate legacy BPF map definitions") Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220207145052.124421-2-mauricio@kinvolk.io
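With the check gone, opting out of a single strict flag is the straightforward expression quoted above (a minimal usage sketch):

  #include <bpf/libbpf.h>
  #include <bpf/libbpf_legacy.h>

  static void setup_libbpf_mode(void)
  {
          /* all strict behaviors except the new map-definition rules */
          libbpf_set_strict_mode(LIBBPF_STRICT_ALL & ~LIBBPF_STRICT_MAP_DEFINITIONS);
  }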
2022-02-04  libbpf: Fix build issue with llvm-readelf  (Yonghong Song; 1 file, -2/+2)
There are cases where the clang compiler is packaged in a way that readelf is a symbolic link to llvm-readelf. In such cases, llvm-readelf will be used instead of the default binutils readelf, and the following error will appear during the libbpf build:

  Warning: Num of global symbols in /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/sharedobjs/libbpf-in.o (367)
  does NOT match with num of versioned symbols in
  /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf.so libbpf.map (383).
  Please make sure all LIBBPF_API symbols are versioned in libbpf.map.
  --- /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf_global_syms.tmp ...
  +++ /home/yhs/work/bpf-next/tools/testing/selftests/bpf/tools/build/libbpf/libbpf_versioned_syms.tmp ...
  @@ -324,6 +324,22 @@
   btf__str_by_offset
   btf__type_by_id
   btf__type_cnt
  +LIBBPF_0.0.1
  +LIBBPF_0.0.2
  +LIBBPF_0.0.3
  +LIBBPF_0.0.4
  +LIBBPF_0.0.5
  +LIBBPF_0.0.6
  +LIBBPF_0.0.7
  +LIBBPF_0.0.8
  +LIBBPF_0.0.9
  +LIBBPF_0.1.0
  +LIBBPF_0.2.0
  +LIBBPF_0.3.0
  +LIBBPF_0.4.0
  +LIBBPF_0.5.0
  +LIBBPF_0.6.0
  +LIBBPF_0.7.0
   libbpf_attach_type_by_name
   libbpf_find_kernel_btf
   libbpf_find_vmlinux_btf_id
  make[2]: *** [Makefile:184: check_abi] Error 1
  make[1]: *** [Makefile:140: all] Error 2

The above failure is due to different printouts for some ABS versioned symbols. For example, with the same libbpf.so:

  $ /bin/readelf --dyn-syms --wide tools/lib/bpf/libbpf.so | grep "LIBBPF" | grep ABS
  134: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.5.0
  202: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.6.0
  ...
  $ /opt/llvm/bin/readelf --dyn-syms --wide tools/lib/bpf/libbpf.so | grep "LIBBPF" | grep ABS
  134: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.5.0@@LIBBPF_0.5.0
  202: 0000000000000000     0 OBJECT  GLOBAL DEFAULT  ABS LIBBPF_0.6.0@@LIBBPF_0.6.0
  ...

The binutils readelf doesn't print out the symbol LIBBPF_* version and llvm-readelf does. Such a difference caused the libbpf build failure with llvm-readelf. The proposed fix filters out all ABS symbols as they are not part of the comparison. This works for both binutils readelf and llvm-readelf.

Reported-by: Delyan Kratunov <delyank@fb.com>
Signed-off-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20220204214355.502108-1-yhs@fb.com
2022-02-04  libbpf: Deprecate forgotten btf__get_map_kv_tids()  (Andrii Nakryiko; 2 files, -0/+4)
btf__get_map_kv_tids() is in the same group of APIs as btf_ext__reloc_func_info()/btf_ext__reloc_line_info(), which were only used by BCC. It was missed when those were marked as deprecated in [0]. Fixing that to complete [1]. [0] https://patchwork.kernel.org/project/netdevbpf/patch/20220201014610.3522985-1-davemarchevsky@fb.com/ [1] Closes: https://github.com/libbpf/libbpf/issues/277 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/bpf/20220203225017.1795946-1-andrii@kernel.org
2022-02-03  libbpf: Deprecate priv/set_priv storage  (Delyan Kratunov; 1 file, -1/+6)
Arbitrary storage via bpf_*__set_priv/__priv is being deprecated without a replacement ([1]). perf uses this capability, but most of that is going away with the removal of prologue generation ([2]). perf is already suppressing deprecation warnings, so the remaining cleanup will happen separately. [1]: Closes: https://github.com/libbpf/libbpf/issues/294 [2]: https://lore.kernel.org/bpf/20220123221932.537060-1-jolsa@kernel.org/ Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220203180032.1921580-1-delyank@fb.com
2022-02-03  libbpf: Stop using deprecated bpf_map__is_offload_neutral()  (Andrii Nakryiko; 1 file, -1/+1)
Open-code bpf_map__is_offload_neutral() logic in one place in to-be-deprecated bpf_prog_load_xattr2. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Reviewed-by: Quentin Monnet <quentin@isovalent.com> Link: https://lore.kernel.org/bpf/20220202225916.3313522-2-andrii@kernel.org
2022-02-02  libbpf: Deprecate bpf_prog_test_run_xattr and bpf_prog_test_run  (Delyan Kratunov; 1 file, -1/+3)
Deprecate non-extendable bpf_prog_test_run{,_xattr} in favor of OPTS-based bpf_prog_test_run_opts ([0]). [0] Closes: https://github.com/libbpf/libbpf/issues/286 Signed-off-by: Delyan Kratunov <delyank@fb.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220202235423.1097270-5-delyank@fb.com
2022-02-01  libbpf: Open code raw_tp_open and link_create commands.  (Alexei Starovoitov; 1 file, -0/+26)
Open code raw_tracepoint_open and link_create used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-4-alexei.starovoitov@gmail.com
2022-02-01  libbpf: Open code low level bpf commands.  (Alexei Starovoitov; 1 file, -2/+42)
Open code low level bpf commands used by light skeleton to be able to avoid full libbpf eventually. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20220131220528.98088-3-alexei.starovoitov@gmail.com
2022-02-01  libbpf: Deprecate xdp_cpumap, xdp_devmap and classifier sec definitions  (Lorenzo Bianconi; 1 file, -3/+11)
Deprecate xdp_cpumap, xdp_devmap and classifier sec definitions. Introduce xdp/devmap and xdp/cpumap definitions according to the standard for SEC("") in libbpf: - prog_type.prog_flags/attach_place Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/5c7bd9426b3ce6a31d9a4b1f97eb299e1467fc52.1643727185.git.lorenzo@kernel.org
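A short BPF-side sketch of the new section names (program names and the trivial bodies are placeholders):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp/devmap")
  int xdp_devmap_prog(struct xdp_md *ctx)
  {
          return XDP_PASS;
  }

  SEC("xdp/cpumap")
  int xdp_cpumap_prog(struct xdp_md *ctx)
  {
          return XDP_PASS;
  }

  char LICENSE[] SEC("license") = "GPL";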
2022-02-01  libbpf: Deprecate btf_ext rec_size APIs  (Dave Marchevsky; 1 file, -2/+4)
btf_ext__{func,line}_info_rec_size functions are used in conjunction with already-deprecated btf_ext__reloc_{func,line}_info functions. Since struct btf_ext is opaque to the user it was necessary to expose rec_size getters in the past. btf_ext__reloc_{func,line}_info were deprecated in commit 8505e8709b5ee ("libbpf: Implement generalized .BTF.ext func/line info adjustment") as they're not compatible with support for multiple programs per section. It was decided[0] that users of these APIs should implement their own .btf.ext parsing to access this data, in which case the rec_size getters are unnecessary. So deprecate them from libbpf 0.7.0 onwards. [0] Closes: https://github.com/libbpf/libbpf/issues/277 Signed-off-by: Dave Marchevsky <davemarchevsky@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20220201014610.3522985-1-davemarchevsky@fb.com
2022-01-25  libbpf: deprecate bpf_program__is_<type>() and bpf_program__set_<type>() APIs  (Andrii Nakryiko; 1 file, -0/+26)
Not sure why these APIs were added in the first place instead of a completely generic (and not requiring constantly adding new APIs with each new BPF program type) bpf_program__type() and bpf_program__set_type() APIs. But as it is right now, there are 13 such specialized is_type/set_type APIs, while latest kernel is already at 30+ BPF program types. Instead of completing the set of APIs and keep chasing kernel's bpf_prog_type enum, deprecate existing subset and recommend generic bpf_program__type() and bpf_program__set_type() APIs. Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220124194254.2051434-4-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
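The recommended generic replacements, as a usage sketch with a placeholder 'prog' handle and XDP as the example program type:

  #include <stdbool.h>
  #include <bpf/libbpf.h>

  static bool prog_is_xdp(const struct bpf_program *prog)
  {
          /* generic getter instead of bpf_program__is_xdp(prog) */
          return bpf_program__type(prog) == BPF_PROG_TYPE_XDP;
  }

  static void make_xdp(struct bpf_program *prog)
  {
          /* generic setter instead of bpf_program__set_xdp(prog) */
          bpf_program__set_type(prog, BPF_PROG_TYPE_XDP);
  }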
2022-01-25  libbpf: deprecate bpf_map__resize()  (Andrii Nakryiko; 1 file, -0/+1)
Deprecated bpf_map__resize() in favor of bpf_map__set_max_entries() setter. In addition to having a surprising name (users often don't realize that they need to use bpf_map__resize()), the name also implies some magic way of resizing BPF map after it is created, which is clearly not the case. Another minor annoyance is that bpf_map__resize() disallows 0 value for max_entries, which in some cases is totally acceptable (e.g., like for BPF perf buf case to let libbpf auto-create one buffer per each available CPU core). [0] Closes: https://github.com/libbpf/libbpf/issues/304 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220124194254.2051434-3-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
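A one-line usage sketch of the preferred setter (the map handle and size are placeholders):

  #include <bpf/libbpf.h>

  static int size_map(struct bpf_map *map)
  {
          /* replaces the deprecated bpf_map__resize(map, 4096) */
          return bpf_map__set_max_entries(map, 4096);
  }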
2022-01-25  libbpf: hide and discourage inconsistently named getters  (Andrii Nakryiko; 6 files, -16/+37)
Move a bunch of "getters" into libbpf_legacy.h to keep them there in libbpf 1.0. See [0] for discussion of "Discouraged APIs". These getters don't add any maintenance burden and are simple alias, but they are inconsistent in naming. So keep them in libbpf_legacy.h instead of libbpf.h to "hide" them in favor of preferred getters ([1]). Also add two missing getters: bpf_program__type() and bpf_program__expected_attach_type(). [0] https://github.com/libbpf/libbpf/wiki/Libbpf:-the-road-to-v1.0#handling-deprecation-of-apis-and-functionality [1] Closes: https://github.com/libbpf/libbpf/issues/307 Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/r/20220124194254.2051434-2-andrii@kernel.org Signed-off-by: Alexei Starovoitov <ast@kernel.org>
2022-01-24  libbpf: Fix the incorrect register read for syscalls on x86_64  (Kenta Tada; 1 file, -0/+34)
Currently, rcx is read as the fourth parameter of a syscall on x86_64, but the x86_64 Linux system call convention actually uses r10. This commit adds the wrapper for users who want to access syscall params to analyze user space. Signed-off-by: Kenta Tada <Kenta.Tada@sony.com> Signed-off-by: Andrii Nakryiko <andrii@kernel.org> Link: https://lore.kernel.org/bpf/20220124141622.4378-3-Kenta.Tada@sony.com