- Aug 25, 2023
-
-
Yonghong Song authored
None of the callers of cleanup_symbol_name() use its return value. So let us change the return type of cleanup_symbol_name() to 'void' to reflect the actual usage pattern. Suggested-by:
Nick Desaulniers <ndesaulniers@google.com> Signed-off-by:
Yonghong Song <yonghong.song@linux.dev> Reviewed-by:
Nick Desaulniers <ndesaulniers@google.com> Reviewed-by:
Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20230825202036.441212-1-yonghong.song@linux.dev Signed-off-by:
Kees Cook <keescook@chromium.org>
-
Yonghong Song authored
Kernel test robot reported a kallsyms_test failure when clang LTO is enabled (thin or full) and CONFIG_KALLSYMS_SELFTEST is also enabled. I can reproduce it in my local environment with the following error message with thin LTO:

  [ 1.877897] kallsyms_selftest: Test for 1750th symbol failed: (tsc_cs_mark_unstable) addr=ffffffff81038090
  [ 1.877901] kallsyms_selftest: abort

It appears that commit 8cc32a9b ("kallsyms: strip LTO-only suffixes from promoted global functions") caused the failure. That commit changed cleanup_symbol_name() to match on ".llvm." instead of '.', where ".llvm." is appended to a before-LTO-optimization local symbol name. We need to propagate that knowledge into kallsyms_selftest.c as well. Furthermore, compare_symbol_name() in kallsyms.c needs a corresponding change. In scripts/kallsyms.c, kallsyms_names and kallsyms_seqs_of_names record the symbol names themselves and the indexes into the symbol names, respectively. For example:

  kallsyms_names:
    ...
    __amd_smn_rw._entry       <== seq 1000
    __amd_smn_rw._entry.5     <== seq 1001
    __amd_smn_rw.llvm.<hash>  <== seq 1002
    ...

kallsyms_seqs_of_names, though, is sorted based on the names after cleanup_symbol_name(), so the order in kallsyms_seqs_of_names actually is:

  index 1000: seq 1002  <== __amd_smn_rw.llvm.<hash> (actual comparison uses '__amd_smn_rw')
  index 1001: seq 1000  <== __amd_smn_rw._entry
  index 1002: seq 1001  <== __amd_smn_rw._entry.5

Say that at some point the symbol at index 1000, '__amd_smn_rw.llvm.<hash>', is compared to '__amd_smn_rw._entry', where '__amd_smn_rw._entry' is the name being searched for, e.g., by kallsyms_on_each_match_symbol(). The current implementation concludes that '__amd_smn_rw._entry' is less than '__amd_smn_rw.llvm.<hash>', continues the search toward index 999, and never finds a match, although index 1001 actually matches. To fix this issue, apply cleanup_symbol_name() first and then compare. In the above case, '__amd_smn_rw._entry' compares greater than '__amd_smn_rw', so the next comparison moves above index 1000 and eventually index 1001 is hit and a match is found. For symbols without a '.llvm.' substring, there is no functional change in compare_symbol_name(). Fixes: 8cc32a9b ("kallsyms: strip LTO-only suffixes from promoted global functions") Reported-by:
kernel test robot <oliver.sang@intel.com> Closes: https://lore.kernel.org/oe-lkp/202308232200.1c932a90-oliver.sang@intel.com Signed-off-by:
Yonghong Song <yonghong.song@linux.dev> Reviewed-by:
Song Liu <song@kernel.org> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Link: https://lore.kernel.org/r/20230825034659.1037627-1-yonghong.song@linux.dev Cc: stable@vger.kernel.org Signed-off-by:
Kees Cook <keescook@chromium.org>
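A minimal sketch of the fixed comparison order (helper names follow kernel/kallsyms.c; this illustrates the idea rather than reproducing the verbatim patch):

  static int compare_symbol_name(const char *name, char *namebuf)
  {
          /*
           * The name-sorted index was built from names that already had
           * cleanup_symbol_name() applied (see scripts/kallsyms.c), so
           * clean up the expanded candidate before comparing; otherwise
           * the bisection can walk in the wrong direction.
           */
          cleanup_symbol_name(namebuf);
          return strcmp(name, namebuf);
  }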
-
- Jul 12, 2023
-
-
Yonghong Song authored
Commit 6eb4bd92 ("kallsyms: strip LTO suffixes from static functions") stripped all function/variable suffixes starting with '.', regardless of whether those suffixes were generated in LTO mode or not. In fact, as far as I know, in LTO mode, when a static function/variable is promoted to the global scope, a '.llvm.<...>' suffix is added. The existing mechanism breaks live patching for an LTO kernel even if no <symbol>.llvm.<...> symbols are involved. For example, for the following kernel symbols:

  $ grep bpf_verifier_vlog /proc/kallsyms
  ffffffff81549f60 t bpf_verifier_vlog
  ffffffff8268b430 d bpf_verifier_vlog._entry
  ffffffff8282a958 d bpf_verifier_vlog._entry_ptr
  ffffffff82e12a1f d bpf_verifier_vlog.__already_done

'bpf_verifier_vlog' is a static function. '_entry', '_entry_ptr' and '__already_done' are static variables used inside 'bpf_verifier_vlog', so llvm promotes them to file-level static with the prefix 'bpf_verifier_vlog.'. Note that this function-level to file-level static promotion also happens without LTO. Given the symbol name 'bpf_verifier_vlog', the current mechanism on an LTO kernel returns 4 symbols to the live patching subsystem, which the live patching subsystem cannot handle. With a non-LTO kernel, only one symbol is returned. In [1], we had a lengthy discussion; the suggestion was to separate two cases: (1) new symbols with suffixes that are generated regardless of whether LTO is enabled or not, and (2) new symbols with suffixes generated only when LTO is enabled. cleanup_symbol_name() should only remove suffixes for case (2). Case (1) should not be changed, so it works uniformly with or without LTO. This patch removes only the LTO-only suffix '.llvm.<...>', so live patching and tracing work the same way as on a non-LTO kernel. The cleanup_symbol_name() in scripts/kallsyms.c is changed to use the same filtering pattern, so both the kernel and the kallsyms tool have the same expectation of the symbol order. [1] https://lore.kernel.org/live-patching/20230615170048.2382735-1-song@kernel.org/T/#u Fixes: 6eb4bd92 ("kallsyms: strip LTO suffixes from static functions") Reported-by:
Song Liu <song@kernel.org> Signed-off-by:
Yonghong Song <yhs@fb.com> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by:
Nick Desaulniers <ndesaulniers@google.com> Acked-by:
Song Liu <song@kernel.org> Link: https://lore.kernel.org/r/20230628181926.4102448-1-yhs@fb.com Signed-off-by:
Kees Cook <keescook@chromium.org>
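A sketch of the narrowed filter, assuming the CONFIG_LTO_CLANG gating described above (the function still returned a char * at this point; the Aug 25, 2023 change above later makes it void):

  static char *cleanup_symbol_name(char *s)
  {
          char *res;

          if (!IS_ENABLED(CONFIG_LTO_CLANG))
                  return NULL;

          /* Strip only the LTO-generated '.llvm.<hash>' suffix. */
          res = strstr(s, ".llvm.");
          if (res)
                  *res = '\0';

          return res;
  }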
-
- Jun 14, 2023
-
-
Azeem Shaikh authored
strlcpy() reads the entire source buffer first. This read may exceed the destination size limit. This is both inefficient and can lead to linear read overflows if a source string is not NUL-terminated [1]. In an effort to remove strlcpy() completely [2], replace strlcpy() here with strscpy(). No return values were used, so direct replacement is safe. [1] https://www.kernel.org/doc/html/latest/process/deprecated.html#strlcpy [2] https://github.com/KSPP/linux/issues/89 Signed-off-by:
Azeem Shaikh <azeemshaikh38@gmail.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20230614010354.1026096-1-azeemshaikh38@gmail.com
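The mechanical shape of the change, as a hedged example (the buffer and length names are illustrative, not the exact kallsyms.c call site):

  -	strlcpy(namebuf, name, KSYM_NAME_LEN);
  +	strscpy(namebuf, name, KSYM_NAME_LEN);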
-
- Jun 08, 2023
-
-
Maninder Singh authored
The function kallsyms_show_value() is used by other parts of the kernel, such as modules_open() and kprobes_read(), which also work with !KALLSYMS. For example, as of now, lsmod does not show module addresses if KALLSYMS is disabled: since no kallsyms_show_value() definition is present, it returns false with !KALLSYMS.

  / # lsmod
  test 12288 0 - Live 0x0000000000000000 (O)

So kallsyms_show_value() can be made generic, without a dependency on KALLSYMS. Thus, move the function out to a new file, ksyms_common.c. With this patch the code is only moved to a new file; there is no functional change. Co-developed-by:
Onkarnath <onkarnath.1@samsung.com> Signed-off-by:
Onkarnath <onkarnath.1@samsung.com> Signed-off-by:
Maninder Singh <maninder1.s@samsung.com> Reviewed-by:
Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
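A sketch of the relocated function, assuming the kptr_restrict-based policy kallsyms.c already implemented (kallsyms_for_perf() is the existing perf-paranoia helper); the body is illustrative rather than the verbatim move:

  /* kernel/ksyms_common.c: no dependency on CONFIG_KALLSYMS */
  bool kallsyms_show_value(const struct cred *cred)
  {
          switch (kptr_restrict) {
          case 0:
                  if (kallsyms_for_perf())
                          return true;
                  fallthrough;
          case 1:
                  if (security_capable(cred, &init_user_ns, CAP_SYSLOG,
                                       CAP_OPT_NOAUDIT) == 0)
                          return true;
                  fallthrough;
          default:
                  return false;
          }
  }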
-
- May 26, 2023
-
-
Maninder Singh authored
With commit 7878c231 ("slab: remove /proc/slab_allocators"), the last usage of lookup_symbol_attrs() was removed, so remove the now-redundant API. Signed-off-by:
Maninder Singh <maninder1.s@samsung.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
Arnd Bergmann authored
The arch_get_kallsym() function was introduced so that x86 could override it, but that override was removed in bf904d27 ("x86/pti/64: Remove the SYSCALL64 entry trampoline"), so now it does nothing except cause a warning about a missing prototype:

  kernel/kallsyms.c:662:12: error: no previous prototype for 'arch_get_kallsym' [-Werror=missing-prototypes]
    662 | int __weak arch_get_kallsym(unsigned int symnum, unsigned long *value,

Restore the old behavior from before d83212d5 ("kallsyms, x86: Export addresses of PTI entry trampolines") to simplify the code and avoid the warning. Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Tested-by:
Alan Maguire <alan.maguire@oracle.com> [mcgrof: fold in bpf selftest fix] Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
- Mar 19, 2023
-
-
Zhen Lei authored
The parameter 'struct module *' in the hook function associated with {module_}kallsyms_on_each_symbol() is no longer used. Delete it. Suggested-by:
Petr Mladek <pmladek@suse.com> Signed-off-by:
Zhen Lei <thunder.leizhen@huawei.com> Reviewed-by:
Vincenzo Palazzo <vincenzopalazzodev@gmail.com> Acked-by:
Jiri Olsa <jolsa@kernel.org> Acked-by:
Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
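The shape of the prototype change, hedged (parameter names are illustrative):

  /* before */
  int (*fn)(void *data, const char *name, struct module *mod, unsigned long addr);

  /* after: the unused 'mod' parameter is gone */
  int (*fn)(void *data, const char *name, unsigned long addr);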
-
- Nov 15, 2022
-
-
Zhen Lei authored
Added test cases for the basic functionality and performance of the functions kallsyms_lookup_name(), kallsyms_on_each_symbol() and kallsyms_on_each_match_symbol(). It also calculates the compression rate of the kallsyms compression algorithm for the current symbol set. The basic functionality test begins by testing a set of symbols whose address values are known. Then, it traverses all symbol addresses and finds the corresponding symbol name based on the address. It's impossible to determine whether these addresses are correct, but we can use the above three functions along with the addresses to test each other. Because the traversal operation of kallsyms_on_each_symbol() is too slow (only about 60 symbols can be tested in one second), let it test on average only once every 128 symbols. The other two functions validate all symbols. If the basic functionality test passes, print only the performance test results. If the test fails, print error information, but do not perform the subsequent performance tests. Start the self-test automatically after system startup if CONFIG_KALLSYMS_SELFTEST=y. Example of output content (prefix 'kallsyms_selftest:' is omitted):

  start
   ---------------------------------------------------------
  | nr_symbols | compressed size | original size | ratio(%) |
  |---------------------------------------------------------|
  |     107543 |     1357912     |    2407433    |   56.40  |
   ---------------------------------------------------------
  kallsyms_lookup_name() looked up 107543 symbols
  The time spent on each symbol is (ns): min=630, max=35295, avg=7353
  kallsyms_on_each_symbol() traverse all: 11782628 ns
  kallsyms_on_each_match_symbol() traverse all: 9261 ns
  finish

Signed-off-by:
Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
-
- Nov 13, 2022
-
-
Zhen Lei authored
Function kallsyms_on_each_symbol() traverses all symbols and submits each symbol to the hook 'fn' for judgment and processing. In some cases, the hook actually only handles the matched symbol, such as livepatch. Because all symbols are currently sorted by name, all the symbols with the same name are clustered together, and function kallsyms_lookup_names() gets the start and end positions of the set corresponding to the specified name. So we can easily and quickly traverse all the matches. The test results are as follows (run twice):

  (x86)
  kallsyms_on_each_match_symbol:     7454,     7984
  kallsyms_on_each_symbol      : 11733809, 11785803

kallsyms_on_each_match_symbol() consumes only 0.066% of kallsyms_on_each_symbol()'s time. In other words, 1523x better performance. Signed-off-by:
Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
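A hedged usage sketch: the prototype matches the interface described above, while the callback and its caller are hypothetical:

  int kallsyms_on_each_match_symbol(int (*fn)(void *, unsigned long),
                                    const char *name, void *data);

  /* hypothetical callback: record the first address for one symbol name */
  static int record_addr(void *data, unsigned long addr)
  {
          *(unsigned long *)data = addr;
          return 1;       /* non-zero return stops the iteration */
  }

  static unsigned long lookup_one(void)
  {
          unsigned long addr = 0;

          /* only symbols actually named "schedule" reach the callback */
          kallsyms_on_each_match_symbol(record_addr, "schedule", &addr);
          return addr;
  }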
-
Zhen Lei authored
kallsyms_seqs_of_names[] records the symbol indexes sorted by address, so the maximum value in kallsyms_seqs_of_names[] is bounded by the number of symbols. Since 2^24 = 16777216 comfortably exceeds the number of kernel symbols, three bytes are enough to store each index. This saves (1 * kallsyms_num_syms) bytes of memory. Signed-off-by:
Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
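A sketch of the unpacking the 3-byte entries need, close to the helper the kernel gained but with hedged naming:

  static unsigned int get_symbol_seq(int index)
  {
          unsigned int i, seq = 0;

          /* each entry is 3 bytes, most-significant byte first */
          for (i = 0; i < 3; i++)
                  seq = (seq << 8) | kallsyms_seqs_of_names[3 * index + i];

          return seq;
  }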
-
Zhen Lei authored
Currently, to search for a symbol, we need to expand the symbols in 'kallsyms_names' one by one and then use the expanded string for comparison. That is O(n). If we sort the names in ascending order, as the addresses are, we can use binary search instead, which is O(log(n)). In order not to change the implementation of "/proc/kallsyms", the table kallsyms_names[] is still stored in one-to-one correspondence with the addresses in ascending order. Add the array kallsyms_seqs_of_names[]: it is indexed by the sequence number of the sorted names, and its content is the sequence number of the sorted addresses. For example, assume that the index of NameX in kallsyms_seqs_of_names[] is 'i' and the content of kallsyms_seqs_of_names[i] is 'k'; then the corresponding address of NameX is kallsyms_addresses[k], and its offset in kallsyms_names[] is get_symbol_offset(k). Note that memory usage will increase by (4 * kallsyms_num_syms) bytes; the next two patches will reduce that by (1 * kallsyms_num_syms) bytes and properly handle the CONFIG_LTO_CLANG=y case. Performance test results (x86):

  Before:
    min=234, max=10364402, avg=5206926
    min=267, max=11168517, avg=5207587
  After:
    min=1016, max=90894, avg=7272
    min=1014, max=93470, avg=7293

The average lookup performance of kallsyms_lookup_name() improved by 715x. Signed-off-by:
Zhen Lei <thunder.leizhen@huawei.com> Signed-off-by:
Luis Chamberlain <mcgrof@kernel.org>
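A condensed sketch of the resulting bisection, assuming the helpers named above (get_symbol_offset(), kallsyms_expand_symbol()); the widening of [start, end] across duplicate names is elided:

  static int kallsyms_lookup_names(const char *name,
                                   unsigned int *start, unsigned int *end)
  {
          int low = 0, high = kallsyms_num_syms - 1, mid, ret;
          unsigned int seq, off;
          char namebuf[KSYM_NAME_LEN];

          while (low <= high) {
                  mid = low + (high - low) / 2;
                  seq = kallsyms_seqs_of_names[mid];
                  off = get_symbol_offset(seq);
                  kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
                  ret = strcmp(name, namebuf);
                  if (ret > 0)
                          low = mid + 1;
                  else if (ret < 0)
                          high = mid - 1;
                  else {
                          *start = *end = mid;    /* duplicate widening elided */
                          return 0;
                  }
          }
          return -ESRCH;
  }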
-
- Nov 01, 2022
-
-
Peter Zijlstra authored
This is a full revert of commit f1389181 ("kallsyms: Take callthunks into account"). The commit assumes a number of things that are not quite right. Notably, it assumes every symbol has PADDING_BYTES in front of it that are not claimed by another symbol. This is not true, even when compiled with:

  -fpatchable-function-entry=${PADDING_BYTES},${PADDING_BYTES}

Notably, things like .cold subfunctions do not need to adhere to this change in ABI. It is also not true when built with CFI_CLANG, which claims these PADDING_BYTES in the __cfi_##name symbol. Once the prefix bytes are not consistent, or are otherwise claimed, the approach this patch takes goes out the window and kallsyms resolution will report invalid symbol names. Therefore revert this to make room for another approach. Reported-by:
kernel test robot <yujie.liu@intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Yujie Liu <yujie.liu@intel.com> Link: https://lore.kernel.org/r/202210241614.2ae4c1f5-yujie.liu@intel.com Link: https://lkml.kernel.org/r/20221028194453.330970755@infradead.org
-
- Oct 17, 2022
-
-
Peter Zijlstra authored
Since the pre-symbol function padding is an integral part of the symbol, make kallsyms report it as part of the symbol by reporting such an address as sym-x instead of prev_sym+y. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/r/20220915111148.409656012@infradead.org
-
- Sep 28, 2022
-
-
Miguel Ojeda authored
Rust symbols can become quite long due to namespacing introduced by modules, types, traits, generics, etc. Increasing to 255 is not enough in some cases, therefore introduce longer lengths to the symbol table. In order to avoid increasing all lengths to 2 bytes (since most of them are small, including many Rust ones), use ULEB128 to keep smaller symbols in 1 byte, with the rest in 2 bytes. Reviewed-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Co-developed-by:
Alex Gaynor <alex.gaynor@gmail.com> Signed-off-by:
Alex Gaynor <alex.gaynor@gmail.com> Co-developed-by:
Wedson Almeida Filho <wedsonaf@google.com> Signed-off-by:
Wedson Almeida Filho <wedsonaf@google.com> Co-developed-by:
Gary Guo <gary@garyguo.net> Signed-off-by:
Gary Guo <gary@garyguo.net> Co-developed-by:
Boqun Feng <boqun.feng@gmail.com> Signed-off-by:
Boqun Feng <boqun.feng@gmail.com> Co-developed-by:
Matthew Wilcox <willy@infradead.org> Signed-off-by:
Matthew Wilcox <willy@infradead.org> Signed-off-by:
Miguel Ojeda <ojeda@kernel.org>
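A sketch of the decode side: a length below 128 stays in one byte; otherwise the low 7 bits come first and the next byte carries the upper bits (mirroring what kallsyms_expand_symbol() has to do; variable names hedged):

  /* read a ULEB128-encoded symbol length out of kallsyms_names[] */
  unsigned int len = kallsyms_names[off++];

  if (len & 0x80) {
          /* "big" symbol: low 7 bits here, next byte holds bits 7..14 */
          len = (len & 0x7f) | (kallsyms_names[off++] << 7);
  }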
-
- Sep 26, 2022
-
-
Sami Tolvanen authored
With -fsanitize=kcfi, the compiler no longer renames static functions with CONFIG_CFI_CLANG + ThinLTO. Drop the code that cleans up the ThinLTO hash from the function names. Signed-off-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Nick Desaulniers <ndesaulniers@google.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Tested-by:
Kees Cook <keescook@chromium.org> Tested-by:
Nathan Chancellor <nathan@kernel.org> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220908215504.3686827-19-samitolvanen@google.com
-
- Jul 18, 2022
-
-
Stephen Brennan authored
Patch series "Expose kallsyms data in vmcoreinfo note". The kernel can be configured to contain a lot of introspection or debugging information built-in, such as ORC for unwinding stack traces, BTF for type information, and of course kallsyms. Debuggers could use this information to navigate a core dump or live system, but they need to be able to find it. This patch series adds the necessary symbols into vmcoreinfo, which would allow a debugger to find and interpret the kallsyms table. Using the kallsyms data, the debugger can then lookup any symbol, allowing it to find ORC, BTF, or any other useful data. This would allow a live kernel, or core dump, to be debugged without any DWARF debuginfo. This is useful for many cases: the debuginfo may not have been generated, or you may not want to deploy the large files everywhere you need them. I've demonstrated a proof of concept for this at LSF/MM+BPF during a lighting talk. Using a work-in-progress branch of the drgn debugger, and an extended set of BTF generated by a patched version of dwarves, I've been able to open a core dump without any DWARF info and do basic tasks such as enumerating slab caches, block devices, tasks, and doing backtraces. I hope this series can be a first step toward a new possibility of "DWARFless debugging". Related discussion around the BTF side of this: https://lore.kernel.org/bpf/586a6288-704a-f7a7-b256-e18a675927df@oracle.com/T/#u Some work-in-progress branches using this feature: https://github.com/brenns10/dwarves/tree/remove_percpu_restriction_1 https://github.com/brenns10/drgn/tree/kallsyms_plus_btf This patch (of 2): To include kallsyms data in the vmcoreinfo note, we must make the symbol declarations visible outside of kallsyms.c. Move these to a new internal header file. Link: https://lkml.kernel.org/r/20220517000508.777145-1-stephen.s.brennan@oracle.com Link: https://lkml.kernel.org/r/20220517000508.777145-2-stephen.s.brennan@oracle.com Signed-off-by:
Stephen Brennan <stephen.s.brennan@oracle.com> Acked-by:
Baoquan He <bhe@redhat.com> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Dave Young <dyoung@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Stephen Boyd <swboyd@chromium.org> Cc: Bixuan Cui <cuibixuan@huawei.com> Cc: David Vernet <void@manifault.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Jul 12, 2022
-
-
Alan Maguire authored
add a "ksym" iterator which provides access to a "struct kallsym_iter" for each symbol. Intent is to support more flexible symbol parsing as discussed in [1]. [1] https://lore.kernel.org/all/YjRPZj6Z8vuLeEZo@krava/ Suggested-by:
Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by:
Alan Maguire <alan.maguire@oracle.com> Acked-by:
Yonghong Song <yhs@fb.com> Link: https://lore.kernel.org/r/1657629105-7812-2-git-send-email-alan.maguire@oracle.com Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
-
- May 10, 2022
-
-
Jiri Olsa authored
Add an ftrace_lookup_symbols() function that resolves an array of symbols with a single pass over kallsyms. The user provides an array of string pointers with a count and a pointer to a preallocated array for the resolved values:

  int ftrace_lookup_symbols(const char **sorted_syms, size_t cnt, unsigned long *addrs)

It iterates over all kallsyms symbols and tries to look up each one in the provided symbols array with bsearch(); the symbols array needs to be sorted by name for this reason. We also check that each symbol passes ftrace_location(), because this API will be used for fprobe symbol resolving. Suggested-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Andrii Nakryiko <andrii@kernel.org> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220510122616.2652285-3-jolsa@kernel.org Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
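A hedged usage sketch; the wrapper function is hypothetical, and the array must already be sorted by name because each kallsyms entry is matched against it with bsearch():

  static int resolve_probe_targets(unsigned long *addrs)
  {
          /* must be sorted by name for the bsearch() on the kallsyms side */
          static const char *syms[] = { "schedule", "vfs_read" };

          return ftrace_lookup_symbols(syms, ARRAY_SIZE(syms), addrs);
  }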
-
Jiri Olsa authored
Making kallsyms_on_each_symbol generally available, so it can be used outside CONFIG_LIVEPATCH option in following changes. Rather than adding another ifdef option let's make the function generally available (when CONFIG_KALLSYMS option is defined). Cc: Christoph Hellwig <hch@lst.de> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Jiri Olsa <jolsa@kernel.org> Link: https://lore.kernel.org/r/20220510122616.2652285-2-jolsa@kernel.org Signed-off-by:
Alexei Starovoitov <ast@kernel.org>
-
- Mar 18, 2022
-
-
Jiri Olsa authored
When kallsyms_lookup_name() is called with an empty string, it does a futile search through all the symbols. Skip the search for an empty string. Signed-off-by:
Jiri Olsa <jolsa@kernel.org> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20220316122419.933957-3-jolsa@kernel.org
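The fix is a short early return at the top of the lookup; a sketch:

  unsigned long kallsyms_lookup_name(const char *name)
  {
          /* skip the futile walk over every symbol for "" */
          if (!*name)
                  return 0;

          /* ... existing expand-and-compare walk ... */
  }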
-
- Jan 07, 2022
-
-
David Vernet authored
When initializing a 'struct klp_object' in klp_init_object_loaded(), and performing relocations in klp_resolve_symbols(), klp_find_object_symbol() is invoked to look up the address of a symbol in an already-loaded module (or vmlinux). This, in turn, calls kallsyms_on_each_symbol() or module_kallsyms_on_each_symbol() to find the address of the symbol that is being patched. It turns out that symbol lookups often take up the most CPU time when enabling and disabling a patch, and may hog the CPU and cause other tasks on that CPU's runqueue to starve -- even in paths where interrupts are enabled. For example, under certain workloads, enabling a KLP patch with many objects or functions may cause ksoftirqd to be starved, and thus for interrupts to be backlogged and delayed. This may end up causing TCP retransmits on the host where the KLP patch is being applied, and in general, may cause any interrupts serviced by softirqd to be delayed while the patch is being applied. So as to ensure that kallsyms_on_each_symbol() does not end up hogging the CPU, this patch adds a call to cond_resched() in kallsyms_on_each_symbol() and module_kallsyms_on_each_symbol(), which are invoked when doing a symbol lookup in vmlinux and a module respectively. Without this patch, if a live-patch is applied on a 36-core Intel host with heavy TCP traffic, a ~10x spike is observed in TCP retransmits while the patch is being applied. Additionally, collecting sched events with perf indicates that ksoftirqd is awakened ~1.3 seconds before it's eventually scheduled. With the patch, no increase in TCP retransmit events is observed, and ksoftirqd is scheduled shortly after it's awakened. Signed-off-by:
David Vernet <void@manifault.com> Acked-by:
Miroslav Benes <mbenes@suse.cz> Acked-by:
Song Liu <song@kernel.org> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211229215646.830451-1-void@manifault.com
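A sketch of the reworked vmlinux-side loop, assuming the surrounding helpers in kernel/kallsyms.c (the module side is analogous):

  for (i = 0, off = 0; i < kallsyms_num_syms; i++) {
          off = kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
          ret = fn(data, namebuf, NULL, kallsyms_sym_address(i));
          if (ret != 0)
                  return ret;
          cond_resched();         /* yield so e.g. ksoftirqd is not starved */
  }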
-
- Oct 04, 2021
-
-
Nick Desaulniers authored
Similar to commit 8b8e6b5d ("kallsyms: strip ThinLTO hashes from static functions"): it's very common for compilers to modify the symbol name for static functions as part of optimizing transformations, which makes hooking static functions (that weren't inlined or DCE'd) with kprobes difficult. LLVM has yet another name mangling scheme used by thin LTO. Combine handling of the various schemes by truncating after the first '.'. Strip off these suffixes so that we can continue to hook such static functions. Clang releases prior to clang-13 would use '$' instead of '.'. Link: https://reviews.llvm.org/rGc6e5c4654bd5045fe22a1a52779e48e2038a404c Reported-by:
KE.LI(Lieke) <like1@oppo.com> Suggested-by:
Nathan Chancellor <nathan@kernel.org> Suggested-by:
Padmanabha Srinivasaiah <treasure4paddy@gmail.com> Suggested-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Nathan Chancellor <nathan@kernel.org> Reviewed-by:
Fangrui Song <maskray@google.com> Reviewed-by:
Sami Tolvanen <samitolvanen@google.com> Signed-off-by:
Nick Desaulniers <ndesaulniers@google.com> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20211004162936.21961-1-ndesaulniers@google.com
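The combined handling reduces to truncating at the first '.'; a minimal sketch (the Jul 12, 2023 change above later narrows this back to '.llvm.'):

  /* strip any compiler-generated suffix after the first '.' */
  res = strchr(s, '.');
  if (res)
          *res = '\0';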
-
- Jul 08, 2021
-
-
Stephen Boyd authored
Let's make kernel stacktraces easier to identify by including the build ID[1] of a module if the stacktrace is printing a symbol from a module. This makes it simpler for developers to locate a kernel module's full debuginfo for a particular stacktrace. Combined with scripts/decode_stracktrace.sh, a developer can download the matching debuginfo from a debuginfod[2] server and find the exact file and line number for the functions plus offsets in a stacktrace that match the module. This is especially useful for pstore crash debugging where the kernel crashes are recorded in something like console-ramoops and the recovery kernel/modules are different or the debuginfo doesn't exist on the device due to space concerns (the debuginfo can be too large for space limited devices). Originally, I put this on the %pS format, but that was quickly rejected given that %pS is used in other places such as ftrace where build IDs aren't meaningful. There were some discussions on the list about putting every module build ID into the "Modules linked in:" section of the stacktrace message, but that quickly becomes very hard to read once you have more than three or four modules linked in. It also provides too much information when we don't expect each module to be traversed in a stacktrace. Having the build ID for modules that aren't important just makes things messy. Splitting it to multiple lines for each module quickly explodes the number of lines printed in an oops too, possibly wrapping the warning off the console. And finally, trying to stash away each module used in a callstack to provide the ID of each symbol printed is cumbersome and would require changes to each architecture to stash away modules and return their build IDs once unwinding has completed. Instead, we opt for the simpler approach of introducing new printk formats '%pS[R]b' for "pointer symbolic backtrace with module build ID" and '%pBb' for "pointer backtrace with module build ID" and then updating the few places in the architecture layer where the stacktrace is printed to use this new format.

Before:

  Call trace:
   lkdtm_WARNING+0x28/0x30 [lkdtm]
   direct_entry+0x16c/0x1b4 [lkdtm]
   full_proxy_write+0x74/0xa4
   vfs_write+0xec/0x2e8

After:

  Call trace:
   lkdtm_WARNING+0x28/0x30 [lkdtm 6c2215028606bda50de823490723dc4bc5bf46f9]
   direct_entry+0x16c/0x1b4 [lkdtm 6c2215028606bda50de823490723dc4bc5bf46f9]
   full_proxy_write+0x74/0xa4
   vfs_write+0xec/0x2e8

[akpm@linux-foundation.org: fix build with CONFIG_MODULES=n, tweak code layout] [rdunlap@infradead.org: fix build when CONFIG_MODULES is not set] Link: https://lkml.kernel.org/r/20210513171510.20328-1-rdunlap@infradead.org [akpm@linux-foundation.org: make kallsyms_lookup_buildid() static] [cuibixuan@huawei.com: fix build error when CONFIG_SYSFS is disabled] Link: https://lkml.kernel.org/r/20210525105049.34804-1-cuibixuan@huawei.com Link: https://lkml.kernel.org/r/20210511003845.2429846-6-swboyd@chromium.org Link: https://fedoraproject.org/wiki/Releases/FeatureBuildId [1] Link: https://sourceware.org/elfutils/Debuginfod.html [2] Signed-off-by:
Stephen Boyd <swboyd@chromium.org> Signed-off-by:
Bixuan Cui <cuibixuan@huawei.com> Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Jessica Yu <jeyu@kernel.org> Cc: Evan Green <evgreen@chromium.org> Cc: Hsin-Yi Wang <hsinyi@chromium.org> Cc: Petr Mladek <pmladek@suse.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Matthew Wilcox <willy@infradead.org> Cc: Baoquan He <bhe@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Dave Young <dyoung@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Sasha Levin <sashal@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Apr 08, 2021
-
-
Sami Tolvanen authored
With CONFIG_CFI_CLANG and ThinLTO, Clang appends a hash to the names of all static functions not marked __used. This can break userspace tools that don't expect the function name to change, so strip out the hash from the output. Suggested-by:
Jack Pham <jackp@codeaurora.org> Signed-off-by:
Sami Tolvanen <samitolvanen@google.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Tested-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20210408182843.1754385-8-samitolvanen@google.com
-
- Feb 08, 2021
-
-
Christoph Hellwig authored
kallsyms_on_each_symbol and module_kallsyms_on_each_symbol are only used by the livepatching code, so don't build them if livepatching is not enabled. Reviewed-by:
Miroslav Benes <mbenes@suse.cz> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Jessica Yu <jeyu@kernel.org>
-
Christoph Hellwig authored
Require an explicit call to module_kallsyms_on_each_symbol to look for symbols in modules instead of the call from kallsyms_on_each_symbol, and acquire module_mutex inside of module_kallsyms_on_each_symbol instead of leaving that up to the caller. Note that this slightly changes the behavior for the livepatch code in that the symbols from vmlinux are not iterated anymore if objname is set, but that actually is the desired behavior in this case. Reviewed-by:
Petr Mladek <pmladek@suse.com> Acked-by:
Miroslav Benes <mbenes@suse.cz> Signed-off-by:
Christoph Hellwig <hch@lst.de> Signed-off-by:
Jessica Yu <jeyu@kernel.org>
-
- Oct 25, 2020
-
-
Joe Perches authored
Use a more generic form for __section that requires quotes to avoid complications with clang and gcc differences. Remove the quote operator # from compiler_attributes.h __section macro. Convert all unquoted __section(foo) uses to quoted __section("foo"). Also convert __attribute__((section("foo"))) uses to __section("foo") even if the __attribute__ has multiple list entry forms. Conversion done using the script at: https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl Signed-off-by:
Joe Perches <joe@perches.com> Reviewed-by:
Nick Desaulniers <ndesaulniers@gooogle.com> Reviewed-by:
Miguel Ojeda <ojeda@kernel.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
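The shape of the conversion, with an illustrative section name:

  /* before: unquoted, relied on the macro stringizing its argument */
  static int example __section(.data.example);

  /* after: quoted, behaves the same under both gcc and clang */
  static int example __section(".data.example");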
-
- Aug 23, 2020
-
-
Gustavo A. R. Silva authored
Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case. [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through Signed-off-by:
Gustavo A. R. Silva <gustavoars@kernel.org>
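The pattern of the conversion, with hypothetical case labels:

  switch (state) {
  case STATE_INIT:
          setup();
          fallthrough;    /* previously a bare fall-through comment */
  case STATE_RUN:
          run();
          break;
  }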
-
- Jul 08, 2020
-
-
Kees Cook authored
In order to perform future tests against the cred saved during open(), switch kallsyms_show_value() to operate on a cred, and have all current callers pass current_cred(). This makes it very obvious where callers are checking the wrong credential in their "read" contexts. These will be fixed in the coming patches. Additionally switch return value to bool, since it is always used as a direct permission check, not a 0-on-success, negative-on-error style function return. Cc: stable@vger.kernel.org Signed-off-by:
Kees Cook <keescook@chromium.org>
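The signature change, as a hedged before/after; every caller now passes its saved or current credentials explicitly:

  /* before */
  int kallsyms_show_value(void);

  /* after */
  bool kallsyms_show_value(const struct cred *cred);

  /* typical call site */
  if (!kallsyms_show_value(current_cred()))
          addr = 0;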
-
- Jun 15, 2020
-
-
Adrian Hunter authored
Symbols are needed for tools to describe instruction addresses. Pages allocated for ftrace's purposes need symbols to be created for them. Add such symbols to be visible via /proc/kallsyms. Example on x86 with CONFIG_DYNAMIC_FTRACE=y:

  # echo function > /sys/kernel/debug/tracing/current_tracer
  # cat /proc/kallsyms | grep '\[__builtin__ftrace\]'
  ffffffffc0238000 t ftrace_trampoline    [__builtin__ftrace]

Note: This patch adds "__builtin__ftrace" as a module name in /proc/kallsyms for symbols for pages allocated for ftrace's purposes, even though "__builtin__ftrace" is not a module. Signed-off-by:
Adrian Hunter <adrian.hunter@intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20200512121922.8997-7-adrian.hunter@intel.com
-
Adrian Hunter authored
Symbols are needed for tools to describe instruction addresses. Pages allocated for kprobe's purposes need symbols to be created for them. Add such symbols to be visible via /proc/kallsyms. Note: kprobe insn pages are not used if ftrace is configured. To see the effect of this patch, the kernel must be configured with:

  # CONFIG_FUNCTION_TRACER is not set
  CONFIG_KPROBES=y

and for optimised kprobes:

  CONFIG_OPTPROBES=y

Example on x86:

  # perf probe __schedule
  Added new event:
    probe:__schedule     (on __schedule)
  # cat /proc/kallsyms | grep '\[__builtin__kprobes\]'
  ffffffffc00d4000 t kprobe_insn_page     [__builtin__kprobes]
  ffffffffc00d6000 t kprobe_optinsn_page  [__builtin__kprobes]

Note: This patch adds "__builtin__kprobes" as a module name in /proc/kallsyms for symbols for pages allocated for kprobes' purposes, even though "__builtin__kprobes" is not a module. Signed-off-by:
Adrian Hunter <adrian.hunter@intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20200528080058.20230-1-adrian.hunter@intel.com
-
- Apr 07, 2020
-
-
Will Deacon authored
kallsyms_lookup_name() and kallsyms_on_each_symbol() are exported to modules despite having no in-tree users and being wide open to abuse by out-of-tree modules that can use them as a method to invoke arbitrary non-exported kernel functions. Unexport kallsyms_lookup_name() and kallsyms_on_each_symbol(). Signed-off-by:
Will Deacon <will@kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by:
Christoph Hellwig <hch@lst.de> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> Reviewed-by:
Quentin Perret <qperret@google.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Frederic Weisbecker <frederic@kernel.org> Cc: K.Prasad <prasad@linux.vnet.ibm.com> Cc: Miroslav Benes <mbenes@suse.cz> Cc: Petr Mladek <pmladek@suse.com> Cc: Joe Lawrence <joe.lawrence@redhat.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: http://lkml.kernel.org/r/20200221114404.14641-4-will@kernel.org Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Feb 05, 2020
-
-
Masahiro Yamada authored
kallsyms_token_table[] only contains ASCII characters. It should be char instead of u8. Signed-off-by:
Masahiro Yamada <masahiroy@kernel.org> Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org>
-
- Feb 04, 2020
-
-
Alexey Dobriyan authored
The most notable change is the DEFINE_SHOW_ATTRIBUTE macro split in seq_file.h. The conversion rule is:

  llseek         => proc_lseek
  unlocked_ioctl => proc_ioctl
  xxx            => proc_xxx
  delete ".owner = THIS_MODULE" line

[akpm@linux-foundation.org: fix drivers/isdn/capi/kcapi_proc.c] [sfr@canb.auug.org.au: fix kernel/sched/psi.c] Link: http://lkml.kernel.org/r/20200122180545.36222f50@canb.auug.org.au Link: http://lkml.kernel.org/r/20191225172546.GB13378@avx2 Signed-off-by:
Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
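Applied to kallsyms, the rule yields a table like this (a sketch following the conversion above; handler names assume the existing seq_file implementation):

  static const struct proc_ops kallsyms_proc_ops = {
          .proc_open      = kallsyms_open,
          .proc_read      = seq_read,
          .proc_lseek     = seq_lseek,
          .proc_release   = seq_release_private,
  };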
-
- Aug 27, 2019
-
-
Marc Zyngier authored
An arm64 kernel configured with

  CONFIG_KPROBES=y
  CONFIG_KALLSYMS=y
  # CONFIG_KALLSYMS_ALL is not set
  CONFIG_KALLSYMS_BASE_RELATIVE=y

reports the following kprobe failure:

  [ 0.032677] kprobes: failed to populate blacklist: -22
  [ 0.033376] Please take care of using kprobes.

It appears that kprobes fails to retrieve the symbol at address 0xffff000010081000, despite this symbol being in System.map:

  ffff000010081000 T __exception_text_start

This symbol is part of the first group of aliases in the kallsyms_offsets array (symbol names generated using ugly hacks in scripts/kallsyms.c):

  kallsyms_offsets:
          .long   0x1000 // do_undefinstr
          .long   0x1000 // efi_header_end
          .long   0x1000 // _stext
          .long   0x1000 // __exception_text_start
          .long   0x12b0 // do_cp15instr

Looking at the implementation of get_symbol_pos(), it returns the lowest index for aliasing symbols. In this case, it returns 0. But kallsyms_lookup_size_offset() considers 0 a failure, which is obviously wrong (there is definitely a valid symbol living there). In turn, the kprobe blacklisting stops abruptly, hence the original error. A CONFIG_KALLSYMS_ALL kernel wouldn't fail, as there are always some random symbols at the beginning of this array which are never looked up via kallsyms_lookup_size_offset(). Fix it by considering that get_symbol_pos() is always successful (which is consistent with the other uses of this function). Fixes: ffc50891 ("[PATCH] Create kallsyms_lookup_size_offset()") Reviewed-by:
Masami Hiramatsu <mhiramat@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by:
Marc Zyngier <maz@kernel.org> Signed-off-by:
Will Deacon <will@kernel.org>
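The shape of the fix inside kallsyms_lookup_size_offset(), as a hedged before/after:

  /* before: position 0 (a valid first symbol) was treated as failure */
  if (is_ksym_addr(addr))
          return !!get_symbol_pos(addr, symbolsize, offset);

  /* after: get_symbol_pos() always succeeds for a kernel address */
  if (is_ksym_addr(addr)) {
          get_symbol_pos(addr, symbolsize, offset);
          return 1;
  }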
-
- May 21, 2019
-
-
Thomas Gleixner authored
Add SPDX license identifiers to all files which: - Have no license information of any form - Have EXPORT_.*_SYMBOL_GPL inside which was used in the initial scan/conversion to ignore the file These files fall under the project license, GPL v2 only. The resulting SPDX license identifier is: GPL-2.0-only Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- Jan 21, 2019
-
-
Song Liu authored
With this patch, /proc/kallsyms will show BPF programs as <addr> t bpf_prog_<tag>_<name> [bpf] Signed-off-by:
Song Liu <songliubraving@fb.com> Reviewed-by:
Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by:
Arnaldo Carvalho de Melo <acme@redhat.com> Acked-by:
Peter Zijlstra <peterz@infradead.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: kernel-team@fb.com Cc: netdev@vger.kernel.org Link: http://lkml.kernel.org/r/20190117161521.1341602-10-songliubraving@fb.com Signed-off-by:
Arnaldo Carvalho de Melo <acme@redhat.com>
-
- Sep 10, 2018
-
-
Jan Beulich authored
Both kallsyms_num_syms and kallsyms_markers[] don't really need to use unsigned long as their (base) types; unsigned int fully suffices. Signed-off-by:
Jan Beulich <jbeulich@suse.com> Signed-off-by:
Masahiro Yamada <yamada.masahiro@socionext.com>
-
- Aug 14, 2018
-
-
Alexander Shishkin authored
Currently, the addresses of PTI entry trampolines are not exported to user space. Kernel profiling tools need these addresses to identify the kernel code, so add a symbol and address for each CPU's PTI entry trampoline. Signed-off-by:
Alexander Shishkin <alexander.shishkin@linux.intel.com> Acked-by:
Andi Kleen <ak@linux.intel.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Link: http://lkml.kernel.org/r/1528289651-4113-3-git-send-email-adrian.hunter@intel.com Signed-off-by:
Arnaldo Carvalho de Melo <acme@redhat.com>
-