- Oct 02, 2024
-
-
Al Viro authored
asm/unaligned.h is always an include of asm-generic/unaligned.h; might as well move that thing to linux/unaligned.h and include that - there's nothing arch-specific in that header. Auto-generated by the following:

    for i in `git grep -l -w asm/unaligned.h`; do
            sed -i -e "s/asm\/unaligned.h/linux\/unaligned.h/" $i
    done
    for i in `git grep -l -w asm-generic/unaligned.h`; do
            sed -i -e "s/asm-generic\/unaligned.h/linux\/unaligned.h/" $i
    done
    git mv include/asm-generic/unaligned.h include/linux/unaligned.h
    git mv tools/include/asm-generic/unaligned.h tools/include/linux/unaligned.h
    sed -i -e "/unaligned.h/d" include/asm-generic/Kbuild
    sed -i -e "s/__ASM_GENERIC/__LINUX/" include/linux/unaligned.h tools/include/linux/unaligned.h
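For users of the header nothing changes except the include path; a minimal before/after sketch (function name is illustrative):

    #include <linux/unaligned.h>    /* was: #include <asm/unaligned.h> */

    /* read a little-endian 32-bit value from a possibly unaligned pointer */
    static u32 read_le32(const void *p)
    {
            return get_unaligned_le32(p);
    }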
-
Vlastimil Babka authored
Guenter Roeck reports that the new slub kunit tests added by commit 4e1c44b3 ("kunit, slub: add test_kfree_rcu() and test_leak_destroy()") cause a lockup on boot on several architectures when the kunit tests are configured to be built-in and not modules. The test_kfree_rcu test invokes kfree_rcu(), and boot sequence inspection showed that the runner for built-in kunit tests, kunit_run_all_tests(), is called before system_state is set to SYSTEM_RUNNING and rcu_end_inkernel_boot() is called, so this seems like a likely cause. While I was unable to reproduce the problem myself, skipping the test when the slub_kunit module is built-in should avoid the issue. An alternative fix that moved the call to kunit_run_all_tests() a bit later in the boot was tried, but it broke tests with functions marked as __init because free_initmem() had already been done.

Fixes: 4e1c44b3 ("kunit, slub: add test_kfree_rcu() and test_leak_destroy()")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Closes: https://lore.kernel.org/all/6fcb1252-7990-4f0d-8027-5e83f0fb9409@roeck-us.net/
Cc: Paul E. McKenney <paulmck@kernel.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: rcu@vger.kernel.org
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: David Gow <davidgow@google.com>
Cc: Rae Moar <rmoar@google.com>
Cc: linux-kselftest@vger.kernel.org
Cc: kunit-dev@googlegroups.com
Tested-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
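A minimal sketch of the skip guard described (the actual test body in lib/slub_kunit.c is omitted):

    static void test_kfree_rcu(struct kunit *test)
    {
            /* kfree_rcu() can't make progress before rcu_end_inkernel_boot(),
             * and built-in kunit tests run before that point */
            if (IS_BUILTIN(CONFIG_SLUB_KUNIT_TEST))
                    kunit_skip(test, "can't do kfree_rcu() when test is built-in");

            /* ... allocate from a test cache and kfree_rcu() the object ... */
    }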
-
Vlastimil Babka authored
The test_leak_destroy kunit test intends to test the detection of stray objects in kmem_cache_destroy(), which normally produces a warning. The other slab kunit tests suppress the warnings in the kunit test context, so suppress warnings and related printk output in this test as well. Automated test running environments then don't need to learn to filter the warnings. Also rename the test's kmem_cache; the name was wrongly copy-pasted from test_kfree_rcu.

Fixes: 4e1c44b3 ("kunit, slub: add test_kfree_rcu() and test_leak_destroy()")
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202408251723.42f3d902-oliver.sang@intel.com
Reported-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Closes: https://lore.kernel.org/all/CAB=+i9RHHbfSkmUuLshXGY_ifEZg9vCZi3fqr99+kmmnpDus7Q@mail.gmail.com/
Reported-by: Guenter Roeck <linux@roeck-us.net>
Closes: https://lore.kernel.org/all/6fcb1252-7990-4f0d-8027-5e83f0fb9409@roeck-us.net/
Tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
-
- Oct 01, 2024
-
-
Omar Sandoval authored
iter_folioq_get_pages() decides to advance to the next folioq slot when it has reached the end of the current folio. However, it is checking offset, which is the beginning of the current part, instead of iov_offset, which is adjusted to the end of the current part, so it doesn't advance the slot when it's supposed to. As a result, on the next iteration, we'll use the same folio with an out-of-bounds offset and return an unrelated page. This manifested as various crashes and other failures in 9pfs in drgn's VM testing setup and BPF CI.

Fixes: db0aa2e9 ("mm: Define struct folio_queue and ITER_FOLIOQ to handle a sequence of folios")
Link: https://lore.kernel.org/linux-fsdevel/20240923183432.1876750-1-chantr4@gmail.com/
Tested-by: Manu Bretelle <chantr4@gmail.com>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Link: https://lore.kernel.org/r/cbaf141ba6c0e2e209717d02746584072844841a.1727722269.git.osandov@fb.com
Tested-by: Eduard Zingerman <eddyz87@gmail.com>
Tested-by: Leon Romanovsky <leon@kernel.org>
Tested-by: Joey Gouly <joey.gouly@arm.com>
Acked-by: David Howells <dhowells@redhat.com>
Signed-off-by: Christian Brauner <brauner@kernel.org>
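A sketch of the corrected advance check, simplified from the iterator logic in lib/iov_iter.c (the real code also manages extracted page counts):

    /* iov_offset has been advanced to the end of the part just consumed;
     * offset still points at where that part began, so testing offset
     * keeps us stuck on the same folio with a growing, invalid offset */
    if (iov_offset == folioq_folio_size(folioq, slot)) {
            iov_offset = 0;
            slot++;
            if (slot == folioq_nr_slots(folioq) && folioq->next) {
                    folioq = folioq->next;
                    slot = 0;
            }
    }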
-
- Sep 26, 2024
-
-
Guenter Roeck authored
This reverts commit e620799c. The commit introduces unit test failures:

    Expected cur == &entries[i], but
        cur == 0000037fffadfd80
        &entries[i] == 0000037fffadfd60
    # list_test_list_cut_position: pass:0 fail:1 skip:0 total:1
    not ok 21 list_test_list_cut_position
    # list_test_list_cut_before: EXPECTATION FAILED at lib/list-test.c:444
    Expected cur == &entries[i], but
        cur == 0000037fffa9fd70
        &entries[i] == 0000037fffa9fd60
    # list_test_list_cut_before: EXPECTATION FAILED at lib/list-test.c:444
    Expected cur == &entries[i], but
        cur == 0000037fffa9fd80
        &entries[i] == 0000037fffa9fd70

Revert it.

Link: https://lkml.kernel.org/r/20240922150507.553814-1-linux@roeck-us.net
Fixes: e620799c ("list: test: fix tests for list_cut_position()")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: I Hsin Cheng <richard120310@gmail.com>
Cc: David Gow <davidgow@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- Sep 20, 2024
-
-
Ming Lei authored
When called from sbitmap_queue_get(), sbitmap_deferred_clear() may run with preemption disabled. In an RT kernel, spin_lock() can sleep, and the warning "BUG: sleeping function called from invalid context" can then be triggered. Fix it by replacing the spin_lock with a raw_spin_lock.

Cc: Yang Yang <yang.yang@vivo.com>
Fixes: 72d04bdc ("sbitmap: fix io hung due to race on sbitmap_word::cleared")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Yang Yang <yang.yang@vivo.com>
Link: https://lore.kernel.org/r/20240919021709.511329-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
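A minimal sketch of the lock-type swap, assuming the swap_lock field in struct sbitmap_word (the body of sbitmap_deferred_clear() is condensed):

    /* was: spinlock_t swap_lock; -- a sleeping lock on PREEMPT_RT */
    raw_spinlock_t swap_lock;       /* stays a true spinlock even on RT */

    static bool sbitmap_deferred_clear(struct sbitmap_word *map)
    {
            unsigned long flags;

            /* safe with preemption disabled, unlike spin_lock_irqsave() on RT */
            raw_spin_lock_irqsave(&map->swap_lock, flags);
            /* ... move map->cleared bits back into map->word ... */
            raw_spin_unlock_irqrestore(&map->swap_lock, flags);
            return true;
    }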
-
Kris Van Hees authored
Create file module.builtin.ranges that can be used to find where built-in modules are located by their addresses. This will be useful for tracing tools to find what functions are for various built-in modules.

The offset range data for builtin modules is generated using:
 - modules.builtin: associates object files with module names
 - vmlinux.map: provides load order of sections and offset of first member per section
 - vmlinux.o.map: provides offset of object file content per section
 - .*.cmd: build cmd file with KBUILD_MODFILE

The generated data will look like:

    .text 00000000-00000000 = _text
    .text 0000baf0-0000cb10 amd_uncore
    .text 0009bd10-0009c8e0 iosf_mbi
    ...
    .text 00b9f080-00ba011a intel_skl_int3472_discrete
    .text 00ba0120-00ba03c0 intel_skl_int3472_discrete intel_skl_int3472_tps68470
    .text 00ba03c0-00ba08d6 intel_skl_int3472_tps68470
    ...
    .data 00000000-00000000 = _sdata
    .data 0000f020-0000f680 amd_uncore

For each ELF section, it lists the offset of the first symbol. This can be used to determine the base address of the section at runtime. Next, it lists (in strict ascending order) offset ranges in that section that cover the symbols of one or more builtin modules. Multiple ranges can apply to a single module, and ranges can be shared between modules.

The CONFIG_BUILTIN_MODULE_RANGES option controls whether offset range data is generated for kernel modules that are built into the kernel image.

How it works:

1. The modules.builtin file is parsed to obtain a list of built-in module names and their associated object names (the .ko file that the module would be in if it were a loadable module, hereafter referred to as <kmodfile>). This object name can be used to identify objects in the kernel compile because any C or assembler code that ends up in a built-in module will have the option -DKBUILD_MODFILE=<kmodfile> present in its build command, and those can be found in the .<obj>.cmd file in the kernel build tree. If an object is part of multiple modules, they will all be listed in the KBUILD_MODFILE option argument. This allows us to conclusively determine whether an object in the kernel build belongs to any modules, and which.

2. The vmlinux.map is parsed next to determine the base address of each top level section so that all addresses into the section can be turned into offsets. This makes it possible to handle sections getting loaded at different addresses at system boot. We also determine an 'anchor' symbol at the beginning of each section to make it possible to calculate the true base address of a section at runtime (i.e. symbol address - symbol offset). We collect start addresses of sections that are included in the top level section. This is used when vmlinux is linked using vmlinux.o, because in that case, we need to look at the vmlinux.o linker map to know what object a symbol is found in. And finally, we process each symbol that is listed in vmlinux.map (or vmlinux.o.map) based on the following structure:

    vmlinux linked from vmlinux.a:

      vmlinux.map:
        <top level section>
          <included section>   -- might be same as top level section
            <object>           -- built-in association known
              <symbol>         -- belongs to module(s) object belongs to
              ...

    vmlinux linked from vmlinux.o:

      vmlinux.map:
        <top level section>
          <included section>   -- might be same as top level section
            vmlinux.o          -- need to use vmlinux.o.map
              <symbol>         -- ignored
              ...

      vmlinux.o.map:
        <section>
          <object>             -- built-in association known
            <symbol>           -- belongs to module(s) object belongs to
            ...

3. As sections, objects, and symbols are processed, offset ranges are constructed in a straightforward way:
   - If the symbol belongs to one or more built-in modules:
     - If we were working on the same module(s), extend the range to include this object
     - If we were working on another module(s), close that range, and start the new one
   - If the symbol does not belong to any built-in modules:
     - If we were working on a module(s) range, close that range

Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com>
Reviewed-by: Nick Alcock <nick.alcock@oracle.com>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Tested-by: Sam James <sam@gentoo.org>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
-
- Sep 17, 2024
-
-
I Hsin Cheng authored
Increase the test coverage of list_test_list_replace*() by adding checks that compare the pointers "a_new.next" and "a_new.prev" to make sure a perfect circular doubly linked list is formed after the replacement.

Link: https://lkml.kernel.org/r/20240910040818.65723-1-richard120310@gmail.com
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
Cc: David Gow <davidgow@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
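A minimal sketch of the strengthened test, assuming the two-entry layout used in lib/list-test.c (head -> a -> b):

    static void list_test_list_replace(struct kunit *test)
    {
            struct list_head a_old, a_new, b;
            LIST_HEAD(list);

            list_add_tail(&a_old, &list);
            list_add_tail(&b, &list);

            list_replace(&a_old, &a_new);

            /* the added checks: a_new itself must point at both of its
             * neighbours, closing the circle in both directions */
            KUNIT_EXPECT_PTR_EQ(test, a_new.next, &b);
            KUNIT_EXPECT_PTR_EQ(test, a_new.prev, &list);
    }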
-
I Hsin Cheng authored
Fix the tests for list_cut_position*(), which were missing a check of the integer "i" after the second loop. The variable should be checked a second time to make sure both lists after the cut operation are formed as expected.

Link: https://lkml.kernel.org/r/20240910043531.71343-1-richard120310@gmail.com
Signed-off-by: I Hsin Cheng <richard120310@gmail.com>
Cc: David Gow <davidgow@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Huang Ying authored
Patch series "resource: Fix region_intersects() vs add_memory_driver_managed()", v3. The patchset fixes a bug of region_intersects() for systems with CXL memory. The details of the bug can be found in [1/3]. To avoid similar bugs in the future. A kunit test case for region_intersects() is added in [3/3]. [2/3] is a preparation patch for [3/3]. This patch (of 3): region_intersects() is important because it's used for /dev/mem permission checking. To avoid possible bug of region_intersects() in the future, a kunit test case for region_intersects() is added. Link: https://lkml.kernel.org/r/20240906030713.204292-1-ying.huang@intel.com Link: https://lkml.kernel.org/r/20240906030713.204292-4-ying.huang@intel.com Signed-off-by:
"Huang, Ying" <ying.huang@intel.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: David Hildenbrand <david@redhat.com> Cc: Davidlohr Bueso <dave@stgolabs.net> Cc: Jonathan Cameron <jonathan.cameron@huawei.com> Cc: Dave Jiang <dave.jiang@intel.com> Cc: Alison Schofield <alison.schofield@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: Ira Weiny <ira.weiny@intel.com> Cc: Alistair Popple <apopple@nvidia.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Baoquan He <bhe@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
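A minimal sketch of what such a test case can look like (the region boundaries here are hypothetical, not the values the series uses):

    static void region_intersects_test(struct kunit *test)
    {
            /* hypothetical: assume [0x100000, 0x100000 + SZ_1M) is System RAM */
            resource_size_t start = 0x100000;

            KUNIT_EXPECT_EQ(test,
                            region_intersects(start, SZ_1M,
                                              IORESOURCE_SYSTEM_RAM,
                                              IORES_DESC_NONE),
                            REGION_INTERSECTS);

            /* a range half inside, half outside the resource is mixed */
            KUNIT_EXPECT_EQ(test,
                            region_intersects(start + SZ_512K, SZ_1M,
                                              IORESOURCE_SYSTEM_RAM,
                                              IORES_DESC_NONE),
                            REGION_MIXED);
    }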
-
- Sep 14, 2024
-
-
Masahiro Yamada authored
As described in commit 42d9b379 ("lib/Kconfig.debug: Allow BTF + DWARF5 with pahole 1.21+"), the combination of CONFIG_DEBUG_INFO_BTF and CONFIG_DEBUG_INFO_DWARF5 requires pahole 1.21+. GCC 11+ and Clang 14+ default to DWARF 5 when the -g flag is passed. For the same reason, the combination of CONFIG_DEBUG_INFO_BTF and CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is also likely to require pahole 1.21+ these days. (At least, it is uncertain whether the actual requirement is pahole 1.16+ or 1.21+.)

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20240913173759.1316390-3-masahiroy@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Masahiro Yamada authored
When DEBUG_INFO_DWARF5 is selected, pahole 1.21+ is required to enable DEBUG_INFO_BTF. When DEBUG_INFO_DWARF4 or DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT is selected, DEBUG_INFO_BTF can be enabled without pahole installed, but a build error will occur in scripts/link-vmlinux.sh:

    LD      .tmp_vmlinux1
    BTF:    .tmp_vmlinux1: pahole (pahole) is not available
    Failed to generate BTF for vmlinux
    Try to disable CONFIG_DEBUG_INFO_BTF

We did not guard DEBUG_INFO_BTF by PAHOLE_VERSION when previously discussed [1]. However, commit 613fe169 ("kbuild: Add CONFIG_PAHOLE_VERSION") added CONFIG_PAHOLE_VERSION after all. Now several CONFIG options, as well as the combination of DEBUG_INFO_BTF and DEBUG_INFO_DWARF5, are guarded by PAHOLE_VERSION. The remaining compile-time check in scripts/link-vmlinux.sh now appears to be an awkward inconsistency. This commit adopts Nathan's original work.

[1]: https://lore.kernel.org/lkml/20210111180609.713998-1-natechancellor@gmail.com/

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
Acked-by: Andrii Nakryiko <andrii@kernel.org>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lore.kernel.org/r/20240913173759.1316390-2-masahiroy@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- Sep 13, 2024
-
-
Christophe Leroy authored
Depending on the architecture, building a 32-bit vDSO on a 64-bit kernel is problematic when some system headers are included. Minimise the amount of headers by moving needed items, such as __{get,put}_unaligned_t, into dedicated common headers and in general use more specific headers, similar to what was done in commit 8165b57b ("linux/const.h: Extract common header for vDSO") and commit 8c59ab83 ("lib/vdso: Enable common headers"). On some architectures this results in missing PAGE_SIZE, as was described by commit 8b3843ae ("vdso/datapage: Quick fix - use asm/page-def.h for ARM64"), so define this if necessary, in the same way as was done previously by commit cffaefd1 ("vdso: Use CONFIG_PAGE_SHIFT in vdso/datapage.h"). Removing linux/time64.h leads to missing 'struct timespec64' in x86's asm/pvclock.h. Add a forward declaration of that struct in that file.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
Christophe Leroy authored
With the current implementation, __cvdso_getrandom_data() calls memset() on certain architectures, which is unexpected in the vDSO. Rather than providing a memset(), simply rewrite the opaque data initialization to avoid memset().

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
Christophe Leroy authored
Same as for the gettimeofday CVDSO implementation, add c-getrandom-y to ease the inclusion of lib/vdso/getrandom.c in architectures' VDSO builds.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
-
Christophe Leroy authored
Performing SMP atomic operations on u64 fails on powerpc32:

    CC      drivers/char/random.o
    In file included from <command-line>:
    drivers/char/random.c: In function 'crng_reseed':
    ././include/linux/compiler_types.h:510:45: error: call to '__compiletime_assert_391' declared with attribute error: Need native word sized stores/loads for atomicity.
      510 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
          |                                             ^
    ././include/linux/compiler_types.h:491:25: note: in definition of macro '__compiletime_assert'
      491 |                 prefix ## suffix();                             \
          |                         ^~~~~~
    ././include/linux/compiler_types.h:510:9: note: in expansion of macro '_compiletime_assert'
      510 |         _compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
          |         ^~~~~~~~~~~~~~~~~~~
    ././include/linux/compiler_types.h:513:9: note: in expansion of macro 'compiletime_assert'
      513 |         compiletime_assert(__native_word(t),                    \
          |         ^~~~~~~~~~~~~~~~~~
    ./arch/powerpc/include/asm/barrier.h:74:9: note: in expansion of macro 'compiletime_assert_atomic_type'
       74 |         compiletime_assert_atomic_type(*p);                     \
          |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    ./include/asm-generic/barrier.h:172:55: note: in expansion of macro '__smp_store_release'
      172 | #define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0)
          |                                                       ^~~~~~~~~~~~~~~~~~~
    drivers/char/random.c:286:9: note: in expansion of macro 'smp_store_release'
      286 |         smp_store_release(&__arch_get_k_vdso_rng_data()->generation, next_gen + 1);
          |         ^~~~~~~~~~~~~~~~~

The kernel-side generation counter in the random driver is handled as an unsigned long, not as a u64, in base_crng and struct crng. But on the vDSO side, it needs to be a u64, not just an unsigned long, in order to support a 32-bit vDSO atop a 64-bit kernel. On the kernel side, however, it is an unsigned long, hence a 32-bit value on 32-bit architectures, so just cast it to unsigned long for the smp_store_release(). A side effect is that on big endian architectures the store will be performed in the upper 32 bits. It is not an issue on its own because the vDSO side doesn't mind the value, as it only checks differences. Just make sure that the vDSO side checks the full 64 bits. For that, the local current_generation has to be u64 as well.

Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
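A condensed sketch of the two sides described above (the store in crng_reseed() and the check in the vDSO's getrandom path; the rng_info/state locals are abbreviated assumptions):

    /* kernel side (crng_reseed()): generation is an unsigned long here,
     * so cast for the release store to satisfy powerpc32 */
    smp_store_release((unsigned long *)&__arch_get_k_vdso_rng_data()->generation,
                      next_gen + 1);

    /* vDSO side: track all 64 bits so a 32-bit vDSO atop a 64-bit
     * kernel never misses an update */
    u64 current_generation = READ_ONCE(rng_info->generation);
    if (unlikely(state->generation != current_generation))
            state->generation = current_generation;     /* ... and reseed */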
-
- Sep 12, 2024
-
-
Luis Felipe Hernandez authored
Add a test suite for the integer-based power function int_pow(), which performs integer exponentiation. The test suite is designed to verify that the implementation of int_pow correctly computes the power of a given base raised to a given exponent. The tests check various scenarios and edge cases to ensure the accuracy and reliability of the exponentiation function.

Updated commit with test information at commit time: Shuah Khan

Signed-off-by: Luis Felipe Hernandez <luis.hernandez093@gmail.com>
Reviewed-by: David Gow <davidgow@google.com>
Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
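A minimal sketch of what such kunit cases can look like (the cases chosen here are illustrative, not necessarily the ones in the suite):

    static void int_pow_test(struct kunit *test)
    {
            /* u64 int_pow(u64 base, unsigned int exp), lib/math/int_pow.c */
            KUNIT_EXPECT_EQ(test, int_pow(2, 10), 1024ULL);
            KUNIT_EXPECT_EQ(test, int_pow(10, 0), 1ULL);    /* x^0 == 1 */
            KUNIT_EXPECT_EQ(test, int_pow(0, 7), 0ULL);     /* 0^n == 0, n > 0 */
            KUNIT_EXPECT_EQ(test, int_pow(1, 1000), 1ULL);  /* 1^n == 1 */
    }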
-
David Howells authored
Define a data structure, struct folio_queue, to represent a sequence of folios and a kernel-internal I/O iterator type, ITER_FOLIOQ, to allow a list of folio_queue structures to be used to provide a buffer to iov_iter-taking functions, such as sendmsg and recvmsg.

The folio_queue structure looks like:

    struct folio_queue {
            struct folio_batch      vec;
            u8                      orders[PAGEVEC_SIZE];
            struct folio_queue      *next;
            struct folio_queue      *prev;
            unsigned long           marks;
            unsigned long           marks2;
    };

It does not use a list_head so that next and/or prev can be set to NULL at the ends of the list, allowing iov_iter-handling routines to determine that they *are* the ends without needing to store a head pointer in the iov_iter struct.

A folio_batch struct is used to hold the folio pointers, which allows the batch to be passed to batch handling functions. Two mark bits are available per slot. The intention is to use at least one of them to mark folios that need putting, but that might not be ultimately necessary. Accessor functions are used to access the slots to do the masking, and an additional accessor function is used to indicate the size of the array.

The order of each folio is also stored in the structure to avoid the need for iov_iter_advance() and iov_iter_revert() to have to query each folio to find its size.

With careful barriering, this can be used as an extending buffer with new folios inserted and new folio_queue structs added without the need for a lock. Further, provided we always keep at least one struct in the buffer, we can also remove consumed folios and consumed structs from the head end without the need for locks.

[Questions/thoughts]

(1) To manage this, I need a head pointer, a tail pointer, a tail slot number (assuming insertion happens at the tail end and the next pointers point from head to tail). Should I put these into a struct of their own, say "folio_queue_head" or "rolling_buffer"? I will end up with two of these in netfs_io_request eventually, one keeping track of the pagecache I'm dealing with for buffered I/O and the other to hold a bounce buffer when we need one.

(2) Should I make the slots {folio,off,len} or bio_vec?

(3) This is intended to replace ITER_XARRAY eventually. Using an xarray in I/O iteration requires the taking of the RCU read lock, doing copying under the RCU read lock, walking the xarray (which may change under us), handling retries and dealing with special values. The advantage of ITER_XARRAY is that when we're dealing with the pagecache directly, we don't need any allocation - but if we're doing encrypted comms, there's a good chance we'd be using a bounce buffer anyway. This will require afs, erofs, cifs, orangefs and fscache to be converted to not use this. afs still uses it for dirs and symlinks; some of erofs usages should be easy to change, but there's one which won't be so easy; ceph's use via fscache can be fixed by porting ceph to netfslib; cifs is using xarray as a bounce buffer - that can be moved to use sheaves instead; and orangefs has a similar problem to erofs - maybe orangefs could use netfslib?

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Matthew Wilcox <willy@infradead.org>
cc: Jeff Layton <jlayton@kernel.org>
cc: Steve French <sfrench@samba.org>
cc: Ilya Dryomov <idryomov@gmail.com>
cc: Gao Xiang <xiang@kernel.org>
cc: Mike Marshall <hubcap@omnibond.com>
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
cc: linux-afs@lists.infradead.org
cc: linux-cifs@vger.kernel.org
cc: ceph-devel@vger.kernel.org
cc: linux-erofs@lists.ozlabs.org
cc: devel@lists.orangefs.org
Link: https://lore.kernel.org/r/20240814203850.2240469-13-dhowells@redhat.com/ # v2
Signed-off-by: Christian Brauner <brauner@kernel.org>
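A minimal usage sketch, assuming the accessor names folioq_init()/folioq_append()/folioq_folio() that this patch introduces:

    static int folioq_example(struct folio *folio)
    {
            struct folio_queue *fq;
            unsigned int slot;

            fq = kmalloc(sizeof(*fq), GFP_KERNEL);
            if (!fq)
                    return -ENOMEM;
            folioq_init(fq);        /* next/prev start NULL: this struct is both ends */

            slot = folioq_append(fq, folio);        /* stores folio and its order */

            /* read back through the accessors, which mask off the mark bits */
            WARN_ON(folioq_folio(fq, slot) != folio);

            kfree(fq);
            return 0;
    }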
-
- Sep 11, 2024
-
-
Andrii Nakryiko authored
With freader we don't need to restrict ourselves to a single page, so let's allow ELF notes to be at any valid position within the file. We also merge parse_build_id() and parse_build_id_buf(), as now the only difference between them is the note offset overflow check, which makes sense to perform in all situations.

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-8-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Extend freader with a flag specifying whether it's OK to cause a page fault to fetch file data that is not already physically present in memory. With this, it's now easy to wait for data if the caller is running in a sleepable (faultable) context. We utilize read_cache_folio() to bring the desired folio into the page cache, after which the rest of the logic works just the same at the folio level.

Suggested-by: Omar Sandoval <osandov@fb.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
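A minimal sketch of the fallback described, assuming a may_fault flag on the reader (the real lookup and error handling in lib/buildid.c is more involved):

    /* try the page cache first without faulting anything in */
    folio = filemap_get_folio(mapping, index);
    if (IS_ERR(folio)) {
            if (!r->may_fault)      /* NMI/atomic callers must not block */
                    return NULL;
            /* sleepable context: read the folio in, waiting as needed */
            folio = read_cache_folio(mapping, index, NULL, NULL);
            if (IS_ERR(folio))
                    return NULL;
    }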
-
Andrii Nakryiko authored
Make it clear that build_id_parse() assumes that it can take no page fault by renaming it, and its current few users, to build_id_parse_nofault(). Also add a build_id_parse() stub which for now falls back to the non-sleepable implementation, but will be changed in subsequent patches to take advantage of a sleepable context. The PROCMAP_QUERY ioctl() on /proc/<pid>/maps files uses build_id_parse() and will automatically take advantage of the more reliable sleepable-context implementation.

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-6-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
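A minimal sketch of the stub as described (matching the parser's existing signature in lib/buildid.c):

    /*
     * Temporary stub: identical to the nofault variant until the
     * sleepable implementation lands in a later patch.
     */
    int build_id_parse(struct vm_area_struct *vma, unsigned char *build_id,
                       __u32 *size)
    {
            return build_id_parse_nofault(vma, build_id, size);
    }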
-
Andrii Nakryiko authored
Now that freader allows accessing multiple pages transparently, there is no need to limit program headers to the very first ELF file page. Remove this limitation, but still put a sane limit on the number of program headers that we are willing to iterate over (set arbitrarily to 256).

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
The current code assumes that program (segment) headers immediately follow the ELF header. This is a common case, but it is not guaranteed. So take into account the e_phoff field of the ELF header when accessing program headers.

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Reported-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Andrii Nakryiko authored
Add a freader abstraction that transparently manages fetching and locally mapping the underlying file page(s) and provides a simple direct data access interface. freader_fetch() is the single interface necessary. It accepts a file offset and the desired number of bytes to be accessed, and returns a kernel-mapped pointer that the caller can use to dereference data up to the requested size. The requested size can't be bigger than the size of the extra buffer provided during initialization (because, worst case, all requested data has to be copied into it, so it's better to flag a wrongly sized buffer unconditionally, regardless of whether the requested data range crosses page boundaries or not).

If the folio is not paged in, or some of the conditions are not satisfied, NULL is returned and a more detailed error code can be accessed through the freader->err field. This approach makes the usage of freader_fetch() cleaner.

To accommodate accessing file data that crosses folio boundaries, the user has to provide an extra buffer that will be used to make a local copy, if necessary. This is done to maintain a simple linear-pointer data access interface.

We switch the existing build ID parsing logic to it, without changing or lifting any of the existing constraints, yet. This will be done separately. Given the existing code was written with the assumption that it's always working with a single (first) page of the underlying ELF file, the logic passes direct pointers around, which doesn't really work well with the freader approach and would be limiting when removing the single page (folio) limitation. So we adjust all the logic to work in terms of file offsets.

There is also a memory buffer-based version (freader_init_from_mem()) for cases when the desired data is already available in kernel memory. This is used for parsing vmlinux's own build ID note. In this mode the assumption is that the provided data starts at "file offset" zero, which works great when parsing ELF notes sections, as all the parsing logic is relative to the note section's start.

Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
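A minimal usage sketch. freader_fetch() and freader_init_from_mem() are named in the description above; the file-backed init/cleanup helper names and parameters here are assumptions for illustration:

    static int read_ehdr_example(struct file *file)
    {
            struct freader r;
            char buf[64];   /* worst-case copy target for cross-folio reads */
            const Elf64_Ehdr *ehdr;

            /* hypothetical init helper: buffer, backing file, no faulting */
            freader_init_from_file(&r, buf, sizeof(buf), file, false);

            ehdr = freader_fetch(&r, 0, sizeof(*ehdr));
            if (!ehdr) {
                    int err = r.err;        /* e.g. folio not resident */

                    freader_cleanup(&r);
                    return err;
            }

            /* the pointer stays valid until the next fetch or cleanup */
            freader_cleanup(&r);
            return 0;
    }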
-
Andrii Nakryiko authored
Harden the build ID parsing logic, adding explicit READ_ONCE() where it's important to have a consistent value read and validated just once. Also, as pointed out by Andi Kleen, we need to make sure that the entire ELF note is within the page bounds, so move the overflow check up and add an extra note_size boundaries validation.

The Fixes tag below points to the code that moved this logic into lib/buildid.c, which was subsequently used in the perf subsystem, making this code exposed to perf_event_open() users in v5.12+.

Cc: stable@vger.kernel.org
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Reviewed-by: Jann Horn <jannh@google.com>
Suggested-by: Andi Kleen <ak@linux.intel.com>
Fixes: bd7525da ("bpf: Move stack_map_get_build_id into lib")
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240829174232.3133883-2-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
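A minimal sketch of the read-once-then-validate pattern, assuming note parsing over a [0, note_size) window (helper and variable names are illustrative):

    static bool note_fits(const void *note_start, u32 note_off, u32 note_size)
    {
            const Elf32_Nhdr *nhdr = note_start + note_off;
            u32 name_sz, desc_sz, new_off;

            /* the backing file page can change under us: read each
             * attacker-controllable field exactly once, then validate */
            name_sz = READ_ONCE(nhdr->n_namesz);
            desc_sz = READ_ONCE(nhdr->n_descsz);

            /* the whole note, header + name + desc, must fit in the window */
            new_off = note_off + sizeof(*nhdr) + ALIGN(name_sz, 4) + ALIGN(desc_sz, 4);
            return new_off >= note_off && new_off <= note_size;     /* wrap check */
    }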
-
- Sep 09, 2024
-
-
Alok Swaminathan authored
Add a null check for the character-class case. Previously, an inverted character class could result in a nul byte being matched, leading the function to read past the end of the input string.

Link: https://lkml.kernel.org/r/20240826155709.12383-1-swaminathanalok@gmail.com
Signed-off-by: Alok Swaminathan <swaminathanalok@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
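A minimal sketch of the guard, as the character-class arm of the switch in glob_match() (lib/glob.c), assuming c holds the current string byte:

    case '[': {     /* character class */
            bool match = false, inverted = (*pat == '!');

            /* added guard: the terminating nul must never match, even
             * for an inverted class like [!a]; otherwise matching would
             * continue past the end of str */
            if (c == '\0')
                    return false;

            /* ... scan the class, set match, compare with inverted ... */
    }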
-
Liam R. Howlett authored
People keep trying to remove three functions that are going to be used in a feature that is being developed. Dropping the functions entirely may end up with people trying to use the bit for other uses, as people have tried in the past. Adding __maybe_unused stops compilers complaining about the unused functions, so they can be silently optimised out of the compiled code and people won't try to claim the bit for another use.

Link: https://lore.kernel.org/all/20230726080916.17454-2-zhangpeng.00@bytedance.com/
Link: https://lore.kernel.org/all/202408310728.S7EE59BN-lkp@intel.com/
Link: https://lkml.kernel.org/r/20240907021506.4018676-1-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Kuan-Wei Chiu <visitorckw@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Sergey Senozhatsky authored
ZSTD_createCDict_advanced2() must ensure that ZSTD_createCDict_advanced_internal() has successfully allocated cdict. customMalloc() may be called under low-memory conditions and may be unable to allocate workspace for the cdict.

Link: https://lkml.kernel.org/r/20240902105656.1383858-4-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
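A minimal sketch of the check; the internal helper's parameter list is abbreviated and assumed, only the NULL check reflects the fix described:

    ZSTD_CDict *cdict;

    cdict = ZSTD_createCDict_advanced_internal(dictSize, dictLoadMethod,
                                               cParams, customMem);
    /* customMalloc() may fail under low memory; don't load the
     * dictionary into a NULL cdict */
    if (!cdict)
            return NULL;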
-
Sergey Senozhatsky authored
This symbol is needed to enable lz4hc dictionary support.

Link: https://lkml.kernel.org/r/20240902105656.1383858-3-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Nick Terrell <terrelln@fb.com>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Sergey Senozhatsky authored
Patch series "zram: introduce custom comp backends API", v7. This series introduces support for run-time compression algorithms tuning, so users, for instance, can adjust compression/acceleration levels and provide pre-trained compression/decompression dictionaries which certain algorithms support. At this point we stop supporting (old/deprecated) comp API. We may add new acomp API support in the future, but before that zram needs to undergo some major rework (we are not ready for async compression). Some benchmarks for reference (look at column #2) *** init zstd /sys/block/zram0/mm_stat 1750659072 504622188 514355200 0 514355200 1 0 34204 34204 *** init zstd dict=/home/ss/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750650880 465908890 475398144 0 475398144 1 0 34185 34185 *** init zstd level=8 dict=/home/ss/zstd-dict-amd64 /sys/block/zram0/mm_stat 1750654976 430803319 439873536 0 439873536 1 0 34185 34185 *** init lz4 /sys/block/zram0/mm_stat 1750646784 664266564 677060608 0 677060608 1 0 34288 34288 *** init lz4 dict=/home/ss/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750650880 619990300 632102912 0 632102912 1 0 34278 34278 *** init lz4hc /sys/block/zram0/mm_stat 1750630400 609023822 621232128 0 621232128 1 0 34288 34288 *** init lz4hc dict=/home/ss/lz4-dict-amd64 /sys/block/zram0/mm_stat 1750659072 505133172 515231744 0 515231744 1 0 34278 34278 Recompress init zram zstd (prio=0), zstd level=5 (prio 1), zstd with dict (prio 2) *** zstd /sys/block/zram0/mm_stat 1750982656 504630584 514269184 0 514269184 1 0 34204 34204 *** idle recompress priority=1 (zstd level=5) /sys/block/zram0/mm_stat 1750982656 488645601 525438976 0 514269184 1 0 34204 34204 *** idle recompress priority=2 (zstd dict) /sys/block/zram0/mm_stat 1750982656 460869640 517914624 0 514269184 1 0 34185 34204 This patch (of 24): We need to export a number of API functions that enable advanced zstd usage - C/D dictionaries, dictionaries sharing between contexts, etc. Link: https://lkml.kernel.org/r/20240902105656.1383858-1-senozhatsky@chromium.org Link: https://lkml.kernel.org/r/20240902105656.1383858-2-senozhatsky@chromium.org Signed-off-by:
Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Nick Terrell <terrelln@fb.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Sergey Senozhatsky <senozhatsky@chromium.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Wei Yang authored
This patch tries to clean up some function descriptions:
 * function name mismatch
 * parameter name mismatch
 * parameter all end up with ':'
 * not prefix '*' if parameter is a pointer

There is still some missing description of parameters; I didn't add them since I am not sure of the exact meaning.

Link: https://lkml.kernel.org/r/20240830220400.2007-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Wei Yang authored
Just do what mt_dump_range64() does and dump the error message based on format.

Link: https://lkml.kernel.org/r/20240826012422.29935-2-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Wei Yang authored
mt_dump_arange64() only applies to an entry whose type is maple_arange_64, for which mte_is_leaf() must return false. Since mte_is_leaf() here is always false, we can remove this condition check.

Link: https://lkml.kernel.org/r/20240826012422.29935-1-richard.weiyang@gmail.com
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Zhen Lei authored
fill_pool() checks locklessly at the beginning whether the pool has to be refilled. After that it checks locklessly in a loop whether the free list contains objects and repeats the refill check. If both conditions are true, it acquires the pool lock and tries to move objects from the free list to the pool, repeating the same checks again.

There are two redundancies in that scheme:

 1) The repeated check for the fill condition
 2) The loop processing

The repeated check is pointless, as it was just established that a fill is required; the condition has to be re-evaluated under the lock anyway. The loop processing is not required either, because there is practically zero chance that a repeated attempt will succeed if the checks under the lock terminate the moving of objects.

Remove the redundant check and replace the loop with a simple if condition.

[ tglx: Massaged change log ]

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240904133944.2124-4-thunder.leizhen@huawei.com
-
Zhen Lei authored
fill_pool() uses 'obj_pool_min_free' to decide whether objects should be handed back to the kmem cache. But 'obj_pool_min_free' records the lowest historical value of the number of objects in the object pool, not the minimum number of objects which should be kept in the pool. Use 'debug_objects_pool_min_level' instead, which holds the minimum number that was scaled to the number of CPUs at boot time.

[ tglx: Massage change log ]

Fixes: d26bf505 ("debugobjects: Reduce number of pool_lock acquisitions in fill_pool()")
Fixes: 36c4ead6 ("debugobjects: Add global free list and the counter")
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20240904133944.2124-3-thunder.leizhen@huawei.com
-
Zhen Lei authored
1. Both debug_objects_pool_min_level and debug_objects_pool_size are read-only after initialization, so change the attribute '__read_mostly' to '__ro_after_init' and remove '__data_racy'.

2. Many global variables are read in the debug_stats_show() function without masking KCSAN's detection. Add '__data_racy' to them.

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/all/20240904133944.2124-2-thunder.leizhen@huawei.com
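A minimal sketch of the two annotation changes (the particular stats variables in item 2 are illustrative):

    /* 1) set once during boot-time scaling, never written again */
    static unsigned int debug_objects_pool_min_level __ro_after_init;

    /* 2) read locklessly by debug_stats_show(); __data_racy tells KCSAN
     * these racy reads are intentional */
    static int __data_racy debug_objects_allocated;
    static int __data_racy debug_objects_freed;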
-
Kent Overstreet authored
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
Kent Overstreet authored
Provide an inlined fast path.

Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
-
- Sep 08, 2024
-
-
Anna-Maria Behnsen authored
There are several comments all over the place which use a wrong singular form of jiffies. Replace 'jiffie' by 'jiffy'. No functional change.

Signed-off-by: Anna-Maria Behnsen <anna-maria@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> # m68k
Link: https://lore.kernel.org/all/20240904-devel-anna-maria-b4-timers-flseep-v1-3-e98760256370@linutronix.de
-
- Sep 04, 2024
-
-
Matthew Wilcox (Oracle) authored
Patch series "Increase the number of bits available in page_type". Kent wants more than 16 bits in page_type, so I resurrected this old patch and expanded it a bit. It's a bit more efficient than our current scheme (1 4-byte insn vs 3 insns of 13 bytes total) to test a single page type. This patch (of 4): An upcoming patch will convert page type from being a bitfield to a single byte, so we will not be able to use %pG to print the page type any more. The printing of the symbolic name will be restored in that patch. Link: https://lkml.kernel.org/r/20240821173914.2270383-1-willy@infradead.org Link: https://lkml.kernel.org/r/20240821173914.2270383-2-willy@infradead.org Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Acked-by:
David Hildenbrand <david@redhat.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-