path: root/scripts/kallsyms.c
Age  Commit message  Author  Files  Lines (-/+)
2021-02-05  kallsyms: fix nonconverging kallsyms table with lld  (Arnd Bergmann)  [1 file, -0/+6]

ARM randconfig builds with lld sometimes show a build failure from kallsyms:

    Inconsistent kallsyms data
    Try make KALLSYMS_EXTRA_PASS=1 as a workaround

The problem is that the veneers/thunks added by the linker extend the symbol table, which in turn leads to more veneers being needed, so it may take a few extra iterations to converge. This bug has been fixed multiple times before, but comes back every time a new symbol name is used. lld uses a different set of identifiers from ld.bfd, so the additional ones need to be added as well.

I looked through the sources and found that arm64 and mips define similar prefixes, so I'm adding those as well, aside from the ones I observed. I'm not sure about powerpc64, which seems to already be handled through a section match, but if it comes back, the "__long_branch_" and "__plt_" prefixes would have to be added as well.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

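The fix amounts to extending the table of ignored symbol prefixes. Below is a minimal sketch of that kind of prefix filter, in the spirit of is_ignored_symbol(); the prefix spellings listed are illustrative of lld thunk names rather than the exact set added by the commit.

    #include <stdbool.h>
    #include <string.h>

    /* Illustrative only: the real list in scripts/kallsyms.c is longer. */
    static bool is_linker_thunk(const char *name)
    {
        static const char * const thunk_prefixes[] = {
            "__ARMV7PILongThunk_",      /* lld, 32-bit ARM  */
            "__ThumbV7PILongThunk_",    /* lld, Thumb2      */
            "__AArch64ADRPThunk_",      /* lld, arm64       */
            "__LA25Thunk_",             /* lld, MIPS        */
        };
        size_t i;

        for (i = 0; i < sizeof(thunk_prefixes) / sizeof(thunk_prefixes[0]); i++)
            if (strncmp(name, thunk_prefixes[i], strlen(thunk_prefixes[i])) == 0)
                return true;
        return false;
    }
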
2020-09-25  scripts/kallsyms: skip ppc compiler stub *.long_branch.* / *.plt_branch.*  (Masahiro Yamada)  [1 file, -1/+15]

PowerPC allmodconfig often fails to build as follows:

      LD      .tmp_vmlinux.kallsyms1
      KSYM    .tmp_vmlinux.kallsyms1.o
      LD      .tmp_vmlinux.kallsyms2
      KSYM    .tmp_vmlinux.kallsyms2.o
      LD      .tmp_vmlinux.kallsyms3
      KSYM    .tmp_vmlinux.kallsyms3.o
      LD      vmlinux
      SORTTAB vmlinux
      SYSMAP  System.map
    Inconsistent kallsyms data
    Try make KALLSYMS_EXTRA_PASS=1 as a workaround
    make[2]: *** [../Makefile:1162: vmlinux] Error 1

Setting KALLSYMS_EXTRA_PASS=1 does not help.

This is caused by the compiler inserting stubs such as *.long_branch.* and *.plt_branch.*

    $ powerpc-linux-nm -n .tmp_vmlinux.kallsyms2
      [ snip ]
    c00000000210c010 t 00000075.plt_branch.da9:19
    c00000000210c020 t 00000075.plt_branch.1677:5
    c00000000210c030 t 00000075.long_branch.memmove
    c00000000210c034 t 00000075.plt_branch.9e0:5
    c00000000210c044 t 00000075.plt_branch.free_initrd_mem
      ...

Actually, the problem mentioned in the scripts/link-vmlinux.sh comment, "In theory it's possible this results in even more stubs, but unlikely", is happening here, and it ends up requiring another kallsyms step.

scripts/kallsyms.c already ignores various compiler stubs. Let's do the same to make kallsyms for PowerPC always succeed in 2 steps.

Reported-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>

2020-07-05  KVM: arm64: Add build rules for separate VHE/nVHE object files  (David Brazdil)  [1 file, -0/+1]

Add new folders arch/arm64/kvm/hyp/{vhe,nvhe} and Makefiles for building code that runs in EL2 under VHE/nVHE KVM, respectively. Add an include folder for hyp-specific header files which will include code common to VHE/nVHE.

Build nVHE code with -D__KVM_NVHE_HYPERVISOR__, VHE code with -D__KVM_VHE_HYPERVISOR__.

Under nVHE, compile each source file into a `.hyp.tmp.o` object first, then prefix all its symbols with "__kvm_nvhe_" using `objcopy` and produce a `.hyp.o`. Suffixes were chosen so that it would be possible for VHE and nVHE to share some source files, but compiled with different CFLAGS.

The nVHE ELF symbol prefix is added to kallsyms.c as ignored. EL2-only symbols will never appear in EL1 stack traces.

Due to symbol prefixing, add a section in image-vars.h for aliases of symbols that are defined in nVHE EL2 and accessed by the kernel in EL1 or vice versa.

Signed-off-by: David Brazdil <dbrazdil@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20200625131420.71444-4-dbrazdil@google.com

2020-05-04  gcc-10 warnings: fix low-hanging fruit  (Linus Torvalds)  [1 file, -1/+1]

Due to a bug-report that was compiler-dependent, I updated one of my machines to gcc-10. That shows a lot of new warnings. Happily they seem to be mostly the valid kind, but it's going to cause a round of churn for getting rid of them..

This is the really low-hanging fruit of removing a couple of zero-sized arrays in some core code. We have had a round of these patches before, and we'll have many more coming, and there is nothing special about these except that they were particularly trivial, and triggered more warnings than most.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2020-03-19  scripts/kallsyms: fix wrong kallsyms_relative_base  (Mikhail Petrov)  [1 file, -4/+4]

There is this code in the read_symbol function in scripts/kallsyms.c:

    if (is_ignored_symbol(name, type))
        return NULL;

    /* Ignore most absolute/undefined (?) symbols. */
    if (strcmp(name, "_text") == 0)
        _text = addr;

But the is_ignored_symbol function returns true for name="_text" and type='A'. So the next condition is not executed and the _text variable is always zero. As a result, a wrong kallsyms_relative_base symbol is produced by the following code (when CONFIG_KALLSYMS_BASE_RELATIVE is defined):

    if (base_relative) {
        output_label("kallsyms_relative_base");
        output_address(relative_base);
        printf("\n");
    }

because the output_address function uses the _text variable. So the kallsyms_lookup function and all related functions in the kernel do not work properly. For example, the stack trace in an oops:

    Call Trace:
    [aa095e58] [809feab8] kobj_ns_ops_tbl+0x7ff09ac8/0x7ff1c1c4 (unreliable)
    [aa095e98] [80002b64] kobj_ns_ops_tbl+0x7f50db74/0x80000010
    [aa095ef8] [809c3d24] kobj_ns_ops_tbl+0x7feced34/0x7ff1c1c4
    [aa095f28] [80002ed0] kobj_ns_ops_tbl+0x7f50dee0/0x80000010
    [aa095f38] [8000f238] kobj_ns_ops_tbl+0x7f51a248/0x80000010

The right stack trace:

    Call Trace:
    [aa095e58] [809feab8] module_vdu_video_init+0x2fc/0x3bc (unreliable)
    [aa095e98] [80002b64] do_one_initcall+0x40/0x1f0
    [aa095ef8] [809c3d24] kernel_init_freeable+0x164/0x1d8
    [aa095f28] [80002ed0] kernel_init+0x14/0x124
    [aa095f38] [8000f238] ret_from_kernel_thread+0x14/0x1c

[masahiroy@kernel.org:
This issue happens on binutils <= 2.22. The following commit fixed it:
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=d2667025dd30611514810c28bee9709e4623012a
The symbol type of _text is 'T' on binutils >= 2.23. The minimal supported binutils version for the kernel build is 2.21.]

Signed-off-by: Mikhail Petrov <Mikhail.Petrov@mir.dev>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

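The fix is essentially an ordering change in read_symbol(): record _text before the ignore check, so that an 'A'-typed _text still sets the base. A compilable sketch of that ordering follows; is_ignored_symbol() is reduced to a stand-in here, so the details are illustrative rather than the actual hunk.

    #include <stdio.h>
    #include <string.h>

    static unsigned long long _text;

    /* stand-in: old binutils report _text with type 'A', which is ignored */
    static int is_ignored_symbol(const char *name, char type)
    {
        return type == 'A';
    }

    static int keep_symbol(unsigned long long addr, char type, const char *name)
    {
        if (strcmp(name, "_text") == 0)     /* capture _text first ...       */
            _text = addr;
        if (is_ignored_symbol(name, type))  /* ... then drop ignored symbols */
            return 0;
        return 1;
    }

    int main(void)
    {
        keep_symbol(0xc0000000ULL, 'A', "_text");
        printf("_text = %#llx\n", _text);   /* non-zero although the symbol was dropped */
        return 0;
    }
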
2020-02-11  scripts/kallsyms: fix memory corruption caused by write over-run  (Masahiro Yamada)  [1 file, -2/+2]

memcpy() writes one more byte than allocated.

Fixes: 8d60526999aa ("scripts/kallsyms: change table to store (struct sym_entry *)")
Reported-by: youling257 <youling257@gmail.com>
Reported-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Tested-by: Pavel Machek <pavel@ucw.cz>

2020-02-04  scripts/kallsyms: change table to store (struct sym_entry *)  (Masahiro Yamada)  [1 file, -56/+65]

The symbol table is extended every 10000 additions by using realloc(), where the data might be copied to a new buffer. To decrease the amount of data copied, let's change the table to store pointers. The symbol type + symbol name part is appended at the end of (struct sym_entry), and allocated together with the struct body.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

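A sketch of the layout this describes: one allocation holds the struct body plus the type byte and the NUL-terminated name, and the table then stores pointers to such entries. Field names follow the description above, but the sizes and bookkeeping are illustrative, not the exact kallsyms.c code; the +1 for the type byte is also what the later over-run fix is about.

    #include <stdlib.h>
    #include <string.h>

    struct sym_entry {
        unsigned long long addr;
        unsigned int len;
        unsigned char sym[];    /* [0] = type, [1..] = name + NUL */
    };

    static struct sym_entry *alloc_sym(unsigned long long addr, char type,
                                       const char *name)
    {
        size_t len = strlen(name) + 1;                           /* name + NUL          */
        struct sym_entry *sym = malloc(sizeof(*sym) + len + 1);  /* + 1 for type byte   */

        if (!sym)
            return NULL;
        sym->addr = addr;
        sym->len = len;                 /* length bookkeeping simplified */
        sym->sym[0] = type;
        memcpy(sym->sym + 1, name, len);
        return sym;
    }
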
2020-02-04  scripts/kallsyms: rename local variables in read_symbol()  (Masahiro Yamada)  [1 file, -12/+12]

I will use 'sym' for the pointer to struct sym_entry in the next commit. Rename 'sym' and 'stype' to 'name' and 'type', which are more intuitive.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

2019-12-14  scripts/kallsyms: fix offset overflow of kallsyms_relative_base  (Masahiro Yamada)  [1 file, -20/+18]

Since commit 5e5c4fa78745 ("scripts/kallsyms: shrink table before sorting it"), kallsyms_relative_base can be larger than _text, which causes overflow when building the 32-bit kernel.

https://lkml.org/lkml/2019/12/7/156

This is because _text is, unless --all-symbols is specified, now trimmed from the symbol table before record_relative_base() is called.

Handle the offset signedness also for kallsyms_relative_base. Introduce a new helper, output_address(), to reduce the code duplication.

Fixes: 5e5c4fa78745 ("scripts/kallsyms: shrink table before sorting it")
Reported-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>

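A sketch of what an output_address() helper of this kind looks like: every value is emitted as a signed offset from _text, so an address below _text does not overflow on 32-bit. The "PTR" directive and the exact formatting are assumptions here, not a quote of the actual code.

    #include <stdio.h>

    static unsigned long long _text;    /* address of the _text symbol */

    static void output_address(unsigned long long addr)
    {
        if (_text <= addr)
            printf("\tPTR\t_text + %#llx\n", addr - _text);
        else
            printf("\tPTR\t_text - %#llx\n", _text - addr);
    }
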
2019-11-25  scripts/kallsyms: remove redundant initializers  (Masahiro Yamada)  [1 file, -3/+3]

These are set to zero without the explicit initializers.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: put check_symbol_range() calls close together  (Masahiro Yamada)  [1 file, -3/+1]

Put the relevant code close together.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: make check_symbol_range() void function  (Masahiro Yamada)  [1 file, -9/+6]

There is no more reason to check the return value of check_symbol_range().

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: move ignored symbol types to is_ignored_symbol()  (Masahiro Yamada)  [1 file, -15/+15]

Collect the ignored patterns into is_ignored_symbol().

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: move more patterns to the ignored_prefixes array  (Masahiro Yamada)  [1 file, -10/+2]

Refactor to shorten the code.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: skip ignored symbols very early  (Masahiro Yamada)  [1 file, -51/+62]

Unless the address range matters, symbols can be ignored earlier, which avoids unneeded memory allocation.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: add const qualifiers where possible  (Masahiro Yamada)  [1 file, -14/+14]

Add 'const' where a function does not write to the pointer dereferences.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: make find_token() return (unsigned char *)  (Masahiro Yamada)  [1 file, -1/+2]

The callers of this function expect (unsigned char *). I do not see a good reason to make this function return (void *).

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: replace prefix_underscores_count() with strspn()  (Masahiro Yamada)  [1 file, -12/+2]

You can do equivalent things with strspn(). I do not see a noticeable performance difference.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

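For reference, counting leading underscores with strspn() looks like this; a standalone illustration, not the kallsyms.c hunk itself.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *name = "__initcall_start";

        /* number of leading '_' characters, previously counted in a hand-written loop */
        printf("%zu\n", strspn(name, "_"));    /* prints 2 */
        return 0;
    }
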
2019-11-25  scripts/kallsyms: add sym_name() to mitigate cast ugliness  (Masahiro Yamada)  [1 file, -13/+16]

sym_entry::sym is (unsigned char *) instead of (char *) because kallsyms exploits the MSB for compression, and the characters are used as the index of the token_profit array. However, it requires casting (unsigned char *) to (char *) in some places since standard library functions such as strcmp() and strlen() expect (char *).

Introduce a new helper, sym_name(), which advances the given pointer by 1 and casts it to (char *).

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

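The helper is roughly this shape; sym_entry is reduced here to the fields relevant to the discussion (at this point in the history, sym is a separately allocated string whose first byte is the symbol type), so treat it as a sketch.

    struct sym_entry {
        unsigned long long addr;
        unsigned int len;
        unsigned char *sym;    /* [0] = type, [1..] = NUL-terminated name */
    };

    /* skip the type byte and cast, so callers can use strcmp()/strlen() directly */
    static char *sym_name(const struct sym_entry *s)
    {
        return (char *)s->sym + 1;
    }
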
2019-11-25  scripts/kallsyms: remove unneeded length check for prefix matching  (Masahiro Yamada)  [1 file, -2/+1]

l <= strlen(sym_name) is unnecessary for prefix matching. strncmp() will do.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: remove redundant is_arm_mapping_symbol()  (Masahiro Yamada)  [1 file, -13/+6]

Since commit 6f00df24ee39 ("[PATCH] Strip local symbols from kallsyms"), all symbols starting with '$' are ignored. is_arm_mapping_symbol() particularly ignores $a, $t, etc., but it is redundant.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: set relative_base more effectively  (Masahiro Yamada)  [1 file, -4/+8]

Currently, record_relative_base() iterates over the entire table to find the minimum address, but it is not efficient because we sort the table anyway.

After sort_symbols(), the table is sorted by address. (kallsyms parses the 'nm -n' output, so the data is already sorted by address, but this commit does not rely on it.)

Move record_relative_base() after sort_symbols(), and take the first non-absolute symbol value.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

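A sketch of what "take the first non-absolute symbol value" means once the table is address-sorted; table, table_cnt, relative_base and symbol_absolute() stand for the surrounding kallsyms.c context, so this is illustrative rather than the exact hunk.

    static void record_relative_base(void)
    {
        unsigned int i;

        /* the table is already sorted by address, so the first
         * non-absolute entry has the lowest relative address */
        for (i = 0; i < table_cnt; i++) {
            if (!symbol_absolute(&table[i])) {
                relative_base = table[i].addr;
                return;
            }
        }
    }
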
2019-11-25  scripts/kallsyms: shrink table before sorting it  (Masahiro Yamada)  [1 file, -20/+29]

Currently, build_initial_tok_table() trims unused symbols, but it is called after sort_symbols(). It is not efficient to sort the huge table that contains unused entries. Shrink the table before sorting it.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-11-25  scripts/kallsyms: fix definitely-lost memory leak  (Masahiro Yamada)  [1 file, -0/+2]

build_initial_tok_table() overwrites unused sym_entry to shrink the table size. Before the entry is overwritten, table[i].sym must be freed since it is malloc'ed data.

This fixes the 'definitely lost' report from valgrind. I ran valgrind against x86_64_defconfig of the v5.4-rc8 kernel, and here is the summary:

[Before the fix]

    LEAK SUMMARY:
       definitely lost: 53,184 bytes in 2,874 blocks

[After the fix]

    LEAK SUMMARY:
       definitely lost: 0 bytes in 0 blocks

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

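The shape of the fix, sketched on a simplified version of the shrink loop; names follow the description above, and the real loop also feeds the kept symbols into the token table, so this fragment is an illustration only.

    /* compact the table in place, freeing the name of every dropped entry */
    unsigned int i, valid = 0;

    for (i = 0; i < table_cnt; i++) {
        if (symbol_valid(&table[i]))
            table[valid++] = table[i];
        else
            free(table[i].sym);    /* was leaked before this fix */
    }
    table_cnt = valid;
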
2019-11-25  scripts/kallsyms: remove unneeded #ifndef ARRAY_SIZE  (Masahiro Yamada)  [1 file, -2/+0]

This is not defined in the standard headers. #ifndef is unneeded.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-07-08  kallsyms: exclude kasan local symbols on s390  (Vasily Gorbik)  [1 file, -0/+3]

gcc asan instrumentation emits the following sequence to store frame pc when the kernel is built with CONFIG_RELOCATABLE:

    debug/vsprintf.s:
            .section        .data.rel.ro.local,"aw"
            .align  8
    .LC3:
            .quad   .LASANPC4826@GOTOFF
            .text
            .align  8
            .type   number, @function
    number:
    .LASANPC4826:

and in case a reloc is issued for the LASANPC label, it also gets into .symtab with the same address as the actual function symbol:

    $ nm -n vmlinux | grep 0000000001397150
    0000000001397150 t .LASANPC4826
    0000000001397150 t number

In the end kernel backtraces are almost unreadable:

    [ 143.748476] Call Trace:
    [ 143.748484] ([<000000002da3e62c>] .LASANPC2671+0x114/0x190)
    [ 143.748492]  [<000000002eca1a58>] .LASANPC2612+0x110/0x160
    [ 143.748502]  [<000000002de9d830>] print_address_description+0x80/0x3b0
    [ 143.748511]  [<000000002de9dd64>] __kasan_report+0x15c/0x1c8
    [ 143.748521]  [<000000002ecb56d4>] strrchr+0x34/0x60
    [ 143.748534]  [<000003ff800a9a40>] kasan_strings+0xb0/0x148 [test_kasan]
    [ 143.748547]  [<000003ff800a9bba>] kmalloc_tests_init+0xe2/0x528 [test_kasan]
    [ 143.748555]  [<000000002da2117c>] .LASANPC4069+0x354/0x748
    [ 143.748563]  [<000000002dbfbb16>] do_init_module+0x136/0x3b0
    [ 143.748571]  [<000000002dbff3f4>] .LASANPC3191+0x2164/0x25d0
    [ 143.748580]  [<000000002dbffc4c>] .LASANPC3196+0x184/0x1b8
    [ 143.748587]  [<000000002ecdf2ec>] system_call+0xd8/0x2d8

Since LASANPC labels are not even unique and get into .symtab only due to relocs, filter them out in kallsyms.

Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-03-10  Merge tag 'kbuild-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds)  [1 file, -8/+5]

Pull Kbuild updates from Masahiro Yamada:

 - do not generate unneeded top-level built-in.a

 - let git ignore O= directory entirely

 - optimize scripts/kallsyms slightly

 - exclude DWARF info from *.s regardless of config options

 - fix GCC toolchain search path for Clang to prepare ld.lld support

 - do not generate modules.order when CONFIG_MODULES is disabled

 - simplify single target rules and remove VPATH for external module build

 - allow to add optional flags to dpkg-buildpackage when building deb-pkg

 - move some compiler option tests from Makefile to Kconfig

 - various Makefile cleanups

* tag 'kbuild-v5.1' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (40 commits)
  kbuild: remove scripts/basic/% build target
  kbuild: use -Werror=implicit-... instead of -Werror-implicit-...
  kbuild: clean up scripts/gcc-version.sh
  kbuild: remove cc-version macro
  kbuild: update comment block of scripts/clang-version.sh
  kbuild: remove commented-out INITRD_COMPRESS
  kbuild: move -gsplit-dwarf, -gdwarf-4 option tests to Kconfig
  kbuild: [bin]deb-pkg: add DPKG_FLAGS variable
  kbuild: move ".config not found!" message from Kconfig to Makefile
  kbuild: invoke syncconfig if include/config/auto.conf.cmd is missing
  kbuild: simplify single target rules
  kbuild: remove empty rules for makefiles
  kbuild: make -r/-R effective in top Makefile for old Make versions
  kbuild: move tools_silent to a more relevant place
  kbuild: compute false-positive -Wmaybe-uninitialized cases in Kconfig
  kbuild: refactor cc-cross-prefix implementation
  kbuild: hardcode genksyms path and remove GENKSYMS variable
  scripts/gdb: refactor rules for symlink creation
  kbuild: create symlink to vmlinux-gdb.py in scripts_gdb target
  scripts/gdb: do not descend into scripts/gdb from scripts
  ...

2019-02-19  kallsyms: include <asm/bitsperlong.h> instead of <asm/types.h>  (Masahiro Yamada)  [1 file, -1/+1]

Including <asm/bitsperlong.h> is enough to get the definition of BITS_PER_LONG.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-02-19  kallsyms: remove unneeded memset() calls  (Masahiro Yamada)  [1 file, -3/+0]

Global variables in the .bss section are zeroed out before the program starts to run.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-02-19  kallsyms: add static qualifiers where missing  (Masahiro Yamada)  [1 file, -4/+4]

Fix the following sparse warnings:

    scripts/kallsyms.c:65:5: warning: symbol 'token_profit' was not declared. Should it be static?
    scripts/kallsyms.c:68:15: warning: symbol 'best_table' was not declared. Should it be static?
    scripts/kallsyms.c:69:15: warning: symbol 'best_table_len' was not declared. Should it be static?

Also, remove 'inline' from is_arm_mapping_symbol(). The compiler will inline it anyway when it is appropriate to do so.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-01-28  kallsyms: Handle too long symbols in kallsyms.c  (Eugene Loh)  [1 file, -2/+2]

When checking for symbols with excessively long names, account for the null terminating character.

Fixes: f3462aa952cf ("Kbuild: Handle longer symbols in kallsyms.c")
Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2019-01-06  kallsyms: lower alignment on ARM  (Mathias Krause)  [1 file, -2/+2]

As mentioned in the info pages of gas, the '.align' pseudo op's interpretation of the alignment value is architecture specific. It might either be a byte value or taken to the power of two. On ARM it's actually the latter, which leads to unnecessarily large alignments of 16 bytes for 32 bit builds or 256 bytes for 64 bit builds.

Fix this by switching to '.balign' instead, which is consistent across all architectures.

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

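kallsyms.c generates its assembly via printf(), so the change boils down to emitting '.balign' from the label helper. A rough sketch of that emission; the surrounding helper is simplified and the real output routine differs in detail.

    #include <stdio.h>

    static void output_label(const char *label)
    {
        printf(".globl %s\n", label);
        /* .balign takes a byte count on every architecture; ".align 4"
         * would mean 2^4 = 16-byte alignment on ARM */
        printf("\t.balign 4\n");
        printf("%s:\n", label);
    }
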
2018-09-10  kallsyms: remove left-over Blackfin code  (Masahiro Yamada)  [1 file, -2/+0]

These symbols were added by commit 028f042613c3 ("kallsyms: support kernel symbols in Blackfin on-chip memory") for Blackfin. The Blackfin support was removed by commit 4ba66a976072 ("arch: remove blackfin port").

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2018-09-10  kallsyms: reduce size a little on 64-bit  (Jan Beulich)  [1 file, -2/+2]

Both kallsyms_num_syms and kallsyms_markers[] don't really need to use unsigned long as their (base) types; unsigned int fully suffices.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2018-05-29  scripts: Fixed printf format mismatch  (nixiaoming)  [1 file, -1/+1]

scripts/kallsyms.c: function write_src: "printf", the #1 format specifier "d" needs arg type "int", but the corresponding arg "table_cnt" has type "unsigned int"

scripts/recordmcount.c: function do_file: "fprintf", the #1 format specifier "d" needs arg type "int", but the corresponding arg "(*w2)(ehdr->e_machine)" has type "unsigned int"

scripts/recordmcount.h: function find_secsym_ndx: "fprintf", the #1 format specifier "d" needs arg type "int", but the corresponding arg "txtndx" has type "unsigned int"

Signed-off-by: nixiaoming <nixiaoming@huawei.com>
Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

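The class of fix is simply matching the conversion specifier to the argument type. An isolated example; the directive string is illustrative, not the exact line in write_src().

    #include <stdio.h>

    int main(void)
    {
        unsigned int table_cnt = 123456;

        /* "%d" expects int; "%u" matches the unsigned int argument */
        printf("\t.long\t%u\n", table_cnt);
        return 0;
    }
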
2018-05-17  kallsyms: remove symbol prefix support  (Masahiro Yamada)  [1 file, -36/+11]

CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX was selected by BLACKFIN and METAG. They were removed by commit 4ba66a976072 ("arch: remove blackfin port") and commit bb6fb6dfcc17 ("metag: Remove arch/metag/"), respectively.

No remaining architecture enables CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX, hence the --symbol-prefix option is unnecessary.

Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>

2018-04-04  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)  [1 file, -0/+1]

Pull arm64 updates from Will Deacon:
 "Nothing particularly stands out here, probably because people were tied up with spectre/meltdown stuff last time around. Still, the main pieces are:

  - Rework of our CPU features framework so that we can whitelist CPUs that don't require kpti even in a heterogeneous system

  - Support for the IDC/DIC architecture extensions, which allow us to elide instruction and data cache maintenance when writing out instructions

  - Removal of the large memory model which resulted in suboptimal codegen by the compiler and increased the use of literal pools, which could potentially be used as ROP gadgets since they are mapped as executable

  - Rework of forced signal delivery so that the siginfo_t is well-formed and handling of show_unhandled_signals is consolidated and made consistent between different fault types

  - More siginfo cleanup based on the initial patches from Eric Biederman

  - Workaround for Cortex-A55 erratum #1024718

  - Some small ACPI IORT updates and cleanups from Lorenzo Pieralisi

  - Misc cleanups and non-critical fixes"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (70 commits)
  arm64: uaccess: Fix omissions from usercopy whitelist
  arm64: fpsimd: Split cpu field out from struct fpsimd_state
  arm64: tlbflush: avoid writing RES0 bits
  arm64: cmpxchg: Include linux/compiler.h in asm/cmpxchg.h
  arm64: move percpu cmpxchg implementation from cmpxchg.h to percpu.h
  arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
  arm64: fpsimd: include <linux/init.h> in fpsimd.h
  drivers/perf: arm_pmu_platform: do not warn about affinity on uniprocessor
  perf: arm_spe: include linux/vmalloc.h for vmap()
  Revert "arm64: Revert L1_CACHE_SHIFT back to 6 (64-byte cache line size)"
  arm64: cpufeature: Avoid warnings due to unused symbols
  arm64: Add work around for Arm Cortex-A55 Erratum 1024718
  arm64: Delay enabling hardware DBM feature
  arm64: Add MIDR encoding for Arm Cortex-A55 and Cortex-A35
  arm64: capabilities: Handle shared entries
  arm64: capabilities: Add support for checks based on a list of MIDRs
  arm64: Add helpers for checking CPU MIDR against a range
  arm64: capabilities: Clean up midr range helpers
  arm64: capabilities: Change scope of VHE to Boot CPU feature
  ...

2018-03-06  scripts/kallsyms: filter arm64's __efistub_ symbols  (Ard Biesheuvel)  [1 file, -0/+1]

On arm64, the EFI stub and the kernel proper are essentially the same binary, although the EFI stub executes at a different virtual address from the kernel. For this reason, the EFI stub is restricted in the symbols it can link to, which is ensured by prefixing all EFI stub symbols with __efistub_ (and emitting __efistub_ prefixed aliases for routines that may be shared between the core kernel and the stub).

These symbols are leaking into kallsyms, polluting the namespace, so let's filter them explicitly.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>

2018-03-02  kbuild/kallsyms: trivial typo fix  (Cao jin)  [1 file, -1/+1]

Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>

2017-10-13  scripts/kallsyms.c: ignore symbol type 'n'  (Guenter Roeck)  [1 file, -1/+1]

gcc on aarch64 may emit symbols of type 'n' if the kernel is built with '-frecord-gcc-switches'. In most cases, those symbols are reported with nm as

    000000000000000e n $d

and with objdump as

    0000000000000000 l    d  .GCC.command.line  0000000000000000 .GCC.command.line
    000000000000000e l       .GCC.command.line  0000000000000000 $d

Those symbols are detected in is_arm_mapping_symbol() and ignored.

However, if "--prefix-symbols=<prefix>" is configured as well, the situation is different. For example, in efi/libstub, arm64 images are built with '--prefix-alloc-sections=.init --prefix-symbols=__efistub_'. In combination with '-frecord-gcc-switches', the symbols are now reported by nm as:

    000000000000000e n __efistub_$d

and by objdump as:

    0000000000000000 l    d  .GCC.command.line  0000000000000000 .GCC.command.line
    000000000000000e l       .GCC.command.line  0000000000000000 __efistub_$d

Those symbols are no longer ignored and included in the base address calculation. This results in a base address of 000000000000000e, which in turn causes kallsyms to abort with

    kallsyms failure: relative symbol value 0xffffff900800a000 out of range in relative mode

The problem is seen in little endian arm64 builds with CONFIG_EFI enabled and with '-frecord-gcc-switches' set in KCFLAGS.

Explicitly ignore symbols of type 'n' since those are clearly debug symbols.

Link: http://lkml.kernel.org/r/1507136063-3139-1-git-send-email-linux@roeck-us.net
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2017-02-03  kbuild: modversions: add infrastructure for emitting relative CRCs  (Ard Biesheuvel)  [1 file, -0/+12]

This adds the kbuild infrastructure that will allow architectures to emit vmlinux symbol CRCs as 32-bit offsets to another location in the kernel where the actual value is stored. This works around problems with CRCs being mistaken for relocatable symbols on kernels that self relocate at runtime (i.e., powerpc with CONFIG_RELOCATABLE=y).

For the kbuild side of things, this comes down to the following:

 - introducing a Kconfig symbol MODULE_REL_CRCS

 - adding a -R switch to genksyms to instruct it to emit the CRC symbols as references into the .rodata section

 - making modpost distinguish such references from absolute CRC symbols by the section index (SHN_ABS)

 - making kallsyms disregard non-absolute symbols with a __crc_ prefix

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2016-12-11  scripts/kallsyms: remove last remnants of --page-offset option  (Ard Biesheuvel)  [1 file, -1/+0]

The implementation of the --page-offset kallsyms command line option has been removed, so remove it from the usage string as well.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.com>

2016-04-07  ARM: 8553/1: kallsyms: remove --page-offset command line option  (Ard Biesheuvel)  [1 file, -8/+0]

The --page-offset command line option was only used for ARM, to filter symbol addresses below CONFIG_PAGE_OFFSET. This is no longer needed, so remove the functionality altogether.

Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2016-04-07  ARM: 8555/1: kallsyms: ignore ARM mode switching veneers  (Ard Biesheuvel)  [1 file, -0/+2]

On ARM, the linker may emit veneers to deal with relative branch instructions that appear too far away from their targets. Since the second kallsyms pass results in an increase of the kernel size, it may result in additional veneers being emitted, potentially affecting the output of kallsyms itself if these symbols are visible to it, and for that reason, symbols whose names end in '_veneer' are ignored explicitly.

However, when building Thumb2 kernels, such veneers are named differently if they also incur a mode switch, and since they are not filtered by kallsyms, they may cause the build to fail. So filter symbols whose names end in '_from_arm' or '_from_thumb' as well.

Tested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

2016-03-15  kallsyms: add support for relative offsets in kallsyms address table  (Ard Biesheuvel)  [1 file, -10/+69]

Similar to how relative extables are implemented, it is possible to emit the kallsyms table in such a way that it contains offsets relative to some anchor point in the kernel image rather than absolute addresses.

On 64-bit architectures, it cuts the size of the kallsyms address table in half, since offsets between kernel symbols can typically be expressed in 32 bits. This saves several hundreds of kilobytes of permanent .rodata on average.

In addition, the kallsyms address table is no longer subject to dynamic relocation when CONFIG_RELOCATABLE is in effect, so the relocation work done after decompression now doesn't have to do relocation updates for all these values. This saves up to 24 bytes (i.e., the size of a ELF64 RELA relocation table entry) per value, which easily adds up to a couple of megabytes of uncompressed __init data on ppc64 or arm64. Even if these relocation entries typically compress well, the combined size reduction of 2.8 MB uncompressed for a ppc64_defconfig build (of which 2.4 MB is __init data) results in a ~500 KB space saving in the compressed image.

Since it is useful for some architectures (like x86) to retain the ability to emit absolute values as well, this patch also adds support for capturing both absolute and relative values when KALLSYMS_ABSOLUTE_PERCPU is in effect, by emitting absolute per-cpu addresses as positive 32-bit values, and addresses relative to the lowest encountered relative symbol as negative values, which are subtracted from the runtime address of this base symbol to produce the actual address.

Support for the above is enabled by default for all architectures except IA-64 and Tile-GX, whose symbols are too far apart to capture in this manner.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

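A simplified sketch of the encoding described in the last two paragraphs: per-cpu absolute addresses go out as positive 32-bit values, everything else as a negative offset from the lowest relative symbol. The overflow checks and the exact sign/bias conventions of the real generator are omitted, so treat this as an illustration of the idea only.

    #include <stdio.h>

    /* emit one 32-bit offsets-table entry for --absolute-percpu mode */
    static void output_offset(unsigned long long addr, int absolute_percpu,
                              unsigned long long relative_base)
    {
        long long v;

        if (absolute_percpu)
            v = (long long)addr;                      /* positive: final value  */
        else
            v = -(long long)(addr - relative_base);   /* <= 0: offset from base */

        printf("\t.long\t%d\n", (int)v);
    }

    int main(void)
    {
        output_offset(0x10, 1, 0);                    /* per-cpu, emitted as-is */
        output_offset(0xffffffff81000040ULL, 0, 0xffffffff81000000ULL);
        return 0;
    }
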
2016-03-15  kallsyms: don't overload absolute symbol type for percpu symbols  (Ard Biesheuvel)  [1 file, -2/+12]

Commit c6bda7c988a5 ("kallsyms: fix percpu vars on x86-64 with relocation") overloaded the 'A' (absolute) symbol type to signify that a symbol is not subject to dynamic relocation. However, the original A type does not imply that at all, and depending on the version of the toolchain, many A type symbols are emitted that are in fact relative to the kernel text, i.e., if the kernel is relocated at runtime, these symbols should be updated as well.

For instance, on sparc32, the following symbols are emitted as absolute (kindly provided by Guenter Roeck):

    f035a420 A _etext
    f03d9000 A _sdata
    f03de8c4 A jiffies
    f03f8860 A _edata
    f03fc000 A __init_begin
    f041bdc8 A __init_text_end
    f0423000 A __bss_start
    f0423000 A __init_end
    f044457d A __bss_stop
    f044457d A _end

On x86_64, similar behavior can be observed:

    ffffffff81a00000 A __end_rodata_hpage_align
    ffffffff81b19000 A __vvar_page
    ffffffff81d3d000 A _end

Even if only a couple of them pass the symbol range check that results in them being taken into account for the final kallsyms symbol table, it is obvious that 'A' does not mean the symbol does not need to be updated at relocation time, and overloading its meaning to signify that is perhaps not a good idea.

So instead, add a new percpu_absolute member to struct sym_entry, and when --absolute-percpu is in effect, use it to record symbols whose addresses should be emitted as final values rather than values that still require relocation at runtime. That way, we can drop the check against the 'A' type.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Tested-by: Kees Cook <keescook@chromium.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Michal Marek <mmarek@suse.cz>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

2015-04-07  Kbuild: kallsyms: drop special handling of pre-3.0 GCC symbols  (Ard Biesheuvel)  [1 file, -1/+0]

Since we have required at least GCC v3.2 for some time now, we can drop the special handling of the 'gcc[0-9]_compiled.' label which is not emitted anymore since GCC v3.0.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>

2015-04-07  Kbuild: kallsyms: ignore veneers emitted by the ARM linker  (Ard Biesheuvel)  [1 file, -9/+21]

When linking large kernels on ARM, the linker will insert veneers (i.e., PLT like stubs) when function symbols are out of reach for the ordinary relative branch/branch-and-link instructions.

However, due to the fact that the kallsyms region sits in .rodata, which is between .text and .init.text, additional veneers may be emitted in the second pass due to the fact that the size of the kallsyms region itself has pushed the .init.text section further apart, requiring even more veneers.

So ignore the veneers when generating the symbol table. Veneers have no corresponding source code, and they will not turn up in backtraces anyway.

This patch also lightly refactors the symbol_valid() function to use a local 'sym_name' rather than the obfuscated 'sym + 1' and 'sym + offset'.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Michal Marek <mmarek@suse.cz>

2014-10-02  aarch64: filter $x from kallsyms  (Kyle McMartin)  [1 file, -1/+1]

Similar to ARM, AArch64 is generating $x and $d syms... which isn't terribly helpful when looking at %pF output and the like. Filter those out in kallsyms, modpost and when looking at module symbols.

Seems simplest since none of these check EM_ARM anyway, to just add it to the strchr used, rather than trying to make things overly complicated.

initcall_debug improves:

    dmesg_before.txt:
    initcall $x+0x0/0x154 [sg] returned 0 after 26331 usecs

    dmesg_after.txt:
    initcall init_sg+0x0/0x154 [sg] returned 0 after 15461 usecs

Signed-off-by: Kyle McMartin <kyle@redhat.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

2014-06-10  kbuild: trivial - use tabs for code indent where possible  (Masahiro Yamada)  [1 file, -1/+1]

Signed-off-by: Masahiro Yamada <yamada.m@jp.panasonic.com>
Signed-off-by: Michal Marek <mmarek@suse.cz>