- Dec 26, 2022
-
-
YoungJun.park authored
When it fails to allocate a fragment, it does not free the already-allocated fragments and return an error, and it also checks the pointer inappropriately. Fixed merge conflicts with commit 61888776 ("kunit: update NULL vs IS_ERR() tests") Shuah Khan <skhan@linuxfoundation.org> Signed-off-by:
YoungJun.park <her0gyugyu@gmail.com> Reviewed-by:
David Gow <davidgow@google.com> Signed-off-by:
Shuah Khan <skhan@linuxfoundation.org>
-
- Dec 21, 2022
-
-
Liam Howlett authored
Add a test to the maple tree test suite so that the spanning rebalance insufficient-node issue does not go undetected again. Link: https://lkml.kernel.org/r/20221219161922.2708732-3-Liam.Howlett@oracle.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Cc: Andrei Vagin <avagin@gmail.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muhammad Usama Anjum <usama.anjum@collabora.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Liam Howlett authored
Mike Rapoport contacted me off-list with a regression in running criu. Periodic tests fail with an RCU stall during execution. Although rare, it is possible to hit this with other uses so this patch should be backported to fix the regression. This patchset adds the fix and a test case to the maple tree test suite. This patch (of 2): An insufficient node was causing an out-of-bounds access on the node in mas_leaf_max_gap(). The cause was the faulty detection of the new node being a root node when overwriting many entries at the end of the tree. Fix the detection of a new root and ensure there is sufficient data prior to entering the spanning rebalance loop. Link: https://lkml.kernel.org/r/20221219161922.2708732-1-Liam.Howlett@oracle.com Link: https://lkml.kernel.org/r/20221219161922.2708732-2-Liam.Howlett@oracle.com Fixes: 54a611b6 ("Maple Tree: add new data structure") Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Reported-by:
Mike Rapoport <rppt@kernel.org> Tested-by:
Mike Rapoport <rppt@kernel.org> Cc: Andrei Vagin <avagin@gmail.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Muhammad Usama Anjum <usama.anjum@collabora.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Dec 16, 2022
-
-
Wei Yongjun authored
The stacktrace filter is checked after the others, such as fail-nth, interval and probability, which keeps it from working as expected. Fix this by running the stacktrace filter before the other filters. This will speed up fault-injection testing for driver modules. Link: https://lkml.kernel.org/r/20220817080332.1052710-5-weiyongjun1@huawei.com Signed-off-by:
Wei Yongjun <weiyongjun1@huawei.com> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Isabella Basso <isabbasso@riseup.net> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Miguel Ojeda <ojeda@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Wei Yongjun authored
Stack filter attributes such as 'require-start' and 'require-end' are shown as unsigned decimal. Change them to be shown as unsigned hexadecimal, which is more readable.

Before:
$ echo 0xffffffffc0257000 > /sys/kernel/debug/failslab/require-start
$ cat /sys/kernel/debug/failslab/require-start
18446744072638263296

After:
$ echo 0xffffffffc0257000 > /sys/kernel/debug/failslab/require-start
$ cat /sys/kernel/debug/failslab/require-start
0xffffffffc0257000

[wangyufen@huawei.com: use debugfs_create_xul() instead of debugfs_create_xl()] Link: https://lkml.kernel.org/r/1664331299-4976-1-git-send-email-wangyufen@huawei.com Link: https://lkml.kernel.org/r/20220817080332.1052710-4-weiyongjun1@huawei.com Signed-off-by:
Wei Yongjun <weiyongjun1@huawei.com> Signed-off-by:
Wang Yufen <wangyufen@huawei.com> Reviewed-by:
Akinobu Mita <akinobu.mita@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Isabella Basso <isabbasso@riseup.net> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Miguel Ojeda <ojeda@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
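For context, a minimal sketch of how an unsigned-long debugfs attribute can be exposed in hexadecimal with debugfs_create_xul(); the module, directory and variable names here are illustrative, not the fault-injection code itself:

  #include <linux/debugfs.h>
  #include <linux/module.h>

  static unsigned long require_start;   /* illustrative attribute */
  static struct dentry *dir;

  static int __init xul_demo_init(void)
  {
          dir = debugfs_create_dir("xul_demo", NULL);
          /* debugfs_create_xul() prints the value as 0x%lx instead of %lu */
          debugfs_create_xul("require-start", 0600, dir, &require_start);
          return 0;
  }

  static void __exit xul_demo_exit(void)
  {
          debugfs_remove_recursive(dir);
  }

  module_init(xul_demo_init);
  module_exit(xul_demo_exit);
  MODULE_LICENSE("GPL");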
-
Wei Yongjun authored
If FAULT_INJECTION_STACKTRACE_FILTER is enabled, the depth defaults to 32. This means fail_stacktrace() will iterate over each entry's stacktrace even when no filter is configured. This patch changes fail_stacktrace() to return quickly when the stacktrace filter is not set. Link: https://lkml.kernel.org/r/20220817080332.1052710-3-weiyongjun1@huawei.com Signed-off-by:
Wei Yongjun <weiyongjun1@huawei.com> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Isabella Basso <isabbasso@riseup.net> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Miguel Ojeda <ojeda@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
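A minimal sketch of the early return described above, written against the public struct fault_attr fields; the filter_matches() helper is a placeholder for the existing stack-walking logic, not a real function:

  static bool fail_stacktrace(struct fault_attr *attr)
  {
          /* No stacktrace filter configured: skip the costly stack unwind. */
          if (!attr->require_start && !attr->require_end &&
              !attr->reject_start && !attr->reject_end)
                  return true;

          /* Otherwise save the stack trace and apply the address filters. */
          return filter_matches(attr);    /* placeholder for existing logic */
  }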
-
Wei Yongjun authored
This patchset allows fault injection to run on x86_64 and makes the stacktrace filter work as expected. With this, we can test a device driver module with fault injection more easily. This patch (of 4): The FAULT_INJECTION_STACKTRACE_FILTER option was apparently disallowed on x86_64 because of problems with the stack unwinder: commit 6d690dca Author: Akinobu Mita <akinobu.mita@gmail.com> Date: Sat May 12 10:36:53 2007 -0700 fault injection: disable stacktrace filter for x86-64 However, there are no problems whatsoever with this today. Let's allow it again. Link: https://lkml.kernel.org/r/20220817080332.1052710-1-weiyongjun1@huawei.com Link: https://lkml.kernel.org/r/20220817080332.1052710-2-weiyongjun1@huawei.com Signed-off-by:
Wei Yongjun <weiyongjun1@huawei.com> Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Kees Cook <keescook@chromium.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: Josh Poimboeuf <jpoimboe@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Miguel Ojeda <ojeda@kernel.org> Cc: Isabella Basso <isabbasso@riseup.net> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Zhaoyang Huang authored
Use stack_depot to record kmemleak's backtraces, as has already been done for slub, to reduce redundant information. [akpm@linux-foundation.org: fix build - remove now-unused __save_stack_trace()] [zhaoyang.huang@unisoc.com: v3] Link: https://lkml.kernel.org/r/1667101354-4669-1-git-send-email-zhaoyang.huang@unisoc.com [akpm@linux-foundation.org: fix v3 layout oddities] [akpm@linux-foundation.org: coding-style cleanups] Link: https://lkml.kernel.org/r/1666864224-27541-1-git-send-email-zhaoyang.huang@unisoc.com Signed-off-by:
Zhaoyang Huang <zhaoyang.huang@unisoc.com> Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: ke.wang <ke.wang@unisoc.com> Cc: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Zhaoyang Huang <huangzhaoyang@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
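As background, a hedged sketch of the stack_depot pattern referred to above (capture a trace once, store a compact handle, expand it only when reporting); the surrounding structure is illustrative rather than kmemleak's actual object layout:

  #include <linux/stackdepot.h>
  #include <linux/stacktrace.h>

  struct tracked_object {
          depot_stack_handle_t trace_handle;      /* compact handle, not an array */
  };

  static depot_stack_handle_t capture_backtrace(void)
  {
          unsigned long entries[16];
          unsigned int nr;

          nr = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
          /* Identical traces are deduplicated inside the depot. */
          return stack_depot_save(entries, nr, GFP_NOWAIT);
  }

  static void report_backtrace(struct tracked_object *obj)
  {
          stack_depot_print(obj->trace_handle);
  }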
-
Liam Howlett authored
mas_find_rev() uses mas_prev_entry(), not mas_next_entry(), correct comment. Link: https://lkml.kernel.org/r/20221025173756.2719616-1-Liam.Howlett@oracle.com Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Dec 12, 2022
-
-
Uladzislau Koshchanka authored
Remove the bit_reverse() function. Instead use bitrev8() from linux/bitrev.h plus a bit shift. This reduces code repetition. Signed-off-by:
Uladzislau Koshchanka <koshchanka@gmail.com> Link: https://lore.kernel.org/r/20221210004423.32332-1-koshchanka@gmail.com Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
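A small, hedged illustration of the replacement pattern described above: an open-coded bit-reversal loop can usually be replaced by the byte-wide bitrev8() helper combined with shifting. The helper name below is made up for the example:

  #include <linux/bitrev.h>
  #include <linux/types.h>

  /* Reverse all 16 bits: reverse each byte with bitrev8() and swap them. */
  static inline u16 reverse16(u16 val)
  {
          return (bitrev8(val & 0xff) << 8) | bitrev8(val >> 8);
  }

Where wider reversals are needed, linux/bitrev.h also provides bitrev16() and bitrev32() directly.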
-
Rae Moar authored
Change KUnit test output to better comply with the KTAP v1 specification found here: https://kernel.org/doc/html/latest/dev-tools/ktap.html.
1) Use "KTAP version 1" instead of "TAP version 14" as the test output header
2) Remove '-' between the test number and test name on test result lines
3) Add KTAP version lines to each subtest header as well
Note that the new KUnit output still includes the “# Subtest” line, now located after the KTAP version line. This does not completely match the KTAP v1 spec but, since it is classified as a diagnostic line, it is not expected to be disruptive or break any existing parsers. This “# Subtest” line comes from the TAP 14 spec (https://testanything.org/tap-version-14-specification.html) and it is used to define the test name before the results.

Original output:
TAP version 14
1..1
# Subtest: kunit-test-suite
1..3
ok 1 - kunit_test_1
ok 2 - kunit_test_2
ok 3 - kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 - kunit-test-suite

New output:
KTAP version 1
1..1
KTAP version 1
# Subtest: kunit-test-suite
1..3
ok 1 kunit_test_1
ok 2 kunit_test_2
ok 3 kunit_test_3
# kunit-test-suite: pass:3 fail:0 skip:0 total:3
# Totals: pass:3 fail:0 skip:0 total:3
ok 1 kunit-test-suite

Signed-off-by:
Rae Moar <rmoar@google.com> Reviewed-by:
Daniel Latypov <dlatypov@google.com> Reviewed-by:
David Gow <davidgow@google.com> Tested-by:
Anders Roxell <anders.roxell@linaro.org> Signed-off-by:
Shuah Khan <skhan@linuxfoundation.org>
-
David Gow authored
Use the newly-added function kunit_get_current_test() instead of accessing current->kunit_test directly. This function uses a static key to return more quickly when KUnit is enabled, but no tests are actively running. There should therefore be a negligible performance impact to enabling the slub KUnit tests. Other than the performance improvement, this should be a no-op. Cc: Oliver Glitta <glittao@gmail.com> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com> Cc: Christoph Lameter <cl@linux.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
David Gow <davidgow@google.com> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by:
Shuah Khan <skhan@linuxfoundation.org>
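A hedged sketch of the access pattern the commit describes: instrumented code asks KUnit for the current test and only reports to it when one is running. The function name and error handling below are illustrative, not the slub internals:

  #include <kunit/test.h>
  #include <kunit/test-bug.h>
  #include <linux/printk.h>

  static void report_corruption(void *object)
  {
          /* Behind a static key: effectively free when no test is running. */
          struct kunit *test = kunit_get_current_test();

          if (test) {
                  /* Let the running KUnit test record the failure... */
                  kunit_err(test, "object %p corrupted\n", object);
                  return;
          }
          /* ...otherwise fall back to a normal kernel error message. */
          pr_err("object %p corrupted\n", object);
  }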
-
David Gow authored
KUnit does a few expensive things when enabled. This hasn't been a problem because KUnit was only enabled on test kernels, but with a few people enabling (but not _using_) KUnit on production systems, we need a runtime way of handling this. Provide a 'kunit_running' static key (defaulting to false), which allows us to hide any KUnit code behind a static branch. This should reduce the performance impact (on other code) of having KUnit enabled to a single NOP when no tests are running. Note that, while it looks unintuitive, tests always run entirely within __kunit_test_suites_init(), so it's safe to decrement the static key at the end of this function, rather than in __kunit_test_suites_exit(), which is only there to clean up results in debugfs. Signed-off-by:
David Gow <davidgow@google.com> Reviewed-by:
Daniel Latypov <dlatypov@google.com> Signed-off-by:
Shuah Khan <skhan@linuxfoundation.org>
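A hedged sketch of the static-key arrangement described above; the key name mirrors the description ('kunit_running', default false), while run_suites() is a simplified stand-in for __kunit_test_suites_init():

  #include <kunit/test.h>
  #include <linux/jump_label.h>
  #include <linux/sched.h>

  DEFINE_STATIC_KEY_FALSE(kunit_running);

  /* Hot path used by instrumented kernel code: a single NOP until patched. */
  struct kunit *kunit_get_current_test(void)
  {
          if (!static_branch_unlikely(&kunit_running))
                  return NULL;
          return current->kunit_test;
  }

  /* Tests run entirely inside suite init, so the key brackets that window. */
  void run_suites(void)
  {
          static_branch_inc(&kunit_running);
          /* ... execute every suite and its test cases ... */
          static_branch_dec(&kunit_running);
  }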
-
- Dec 09, 2022
-
-
Tejun Heo authored
rhashtable currently only does bh-safe synchronization making it impossible to use from irq-safe contexts. Switch it to use irq-safe synchronization to remove the restriction. v2: Update the lock functions to return the ulong flags value and unlock functions to take the value directly instead of passing around the pointer. Suggested by Linus. Signed-off-by:
Tejun Heo <tj@kernel.org> Reviewed-by:
David Vernet <dvernet@meta.com> Acked-by:
Josh Don <joshdon@google.com> Acked-by:
Hao Luo <haoluo@google.com> Acked-by:
Barret Rhoden <brho@google.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
David S. Miller <davem@davemloft.net>
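A hedged illustration of the v2 interface shape described above (the lock helper returns the irq flags by value and the unlock helper takes them back); rhashtable itself uses bit spinlocks on bucket pointers, so this shows the pattern rather than the real code:

  #include <linux/spinlock.h>

  static inline unsigned long bucket_lock(spinlock_t *lock)
  {
          unsigned long flags;

          /* irq-safe, so the structure may now be used from hardirq context */
          spin_lock_irqsave(lock, flags);
          return flags;
  }

  static inline void bucket_unlock(spinlock_t *lock, unsigned long flags)
  {
          spin_unlock_irqrestore(lock, flags);
  }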
-
- Dec 06, 2022
-
-
Heiko Carstens authored
The vector instruction macros can also be used in inline assemblies. For this the magic asm(".include \"asm/vx-insn.h\"\n"); must be added to C files in order to avoid that the pre-processor eliminates the __ASSEMBLY__ guarded macros. This however comes with the problem that changes to asm/vx-insn.h do not cause a recompile of C files which have only this magic statement instead of a proper include statement. This can be observed with the arch/s390/kernel/fpu.c file. In order to fix this problem and also to avoid that the include must be specified twice, add a wrapper include header file which will do all necessary steps. This way only the vx-insn.h header file needs to be included and changes to the new vx-insn-asm.h header file cause a recompile of all dependent files like it should. Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Alexander Gordeev <agordeev@linux.ibm.com>
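A hedged sketch of what such a wrapper header can look like, so that C files get both a real include dependency (and therefore correct rebuilds) and the assembler-visible macros; the guard name is illustrative:

  /* asm/vx-insn.h -- wrapper, sketched from the commit description */
  #ifndef __ASM_S390_VX_INSN_H
  #define __ASM_S390_VX_INSN_H

  #include <asm/vx-insn-asm.h>    /* real macros; creates a make dependency */

  #ifndef __ASSEMBLY__
  /* Make the macros visible to the assembler when building inline asm in C. */
  asm(".include \"asm/vx-insn-asm.h\"\n");
  #endif

  #endif /* __ASM_S390_VX_INSN_H */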
-
- Dec 04, 2022
-
-
Gary Guo authored
The `build_error` crate provides a function `build_error` which will panic at compile-time if executed in const context and, by default, will cause a build error if not executed at compile time and the optimizer does not optimise away the call. The `CONFIG_RUST_BUILD_ASSERT_ALLOW` kernel option allows to relax the default build failure and convert it to a runtime check. If the runtime check fails, `panic!` will be called. Its functionality will be exposed to users as a couple macros in the `kernel` crate in the following patch, thus some documentation here refers to them for simplicity. Signed-off-by:
Gary Guo <gary@garyguo.net> Reviewed-by:
Wei Liu <wei.liu@kernel.org> [Reworded, adapted for upstream and applied latest changes] Signed-off-by:
Miguel Ojeda <ojeda@kernel.org>
-
- Dec 02, 2022
-
-
Anders Roxell authored
Building allmodconfig with aarch64-linux-gnu-gcc (Debian 11.3.0-6), fortify_kunit with the structleak plugin enabled makes the stack frame size grow too large: lib/fortify_kunit.c:140:1: error: the frame size of 2368 bytes is larger than 2048 bytes [-Werror=frame-larger-than=] Turn off the structleak plugin checks for fortify_kunit. Suggested-by:
Arnd Bergmann <arnd@arndb.de> Signed-off-by:
Anders Roxell <anders.roxell@linaro.org> Signed-off-by:
Kees Cook <keescook@chromium.org>
-
Kees Cook authored
Several run-time checkers (KASAN, UBSAN, KFENCE, KCSAN, sched) roll their own warnings, and each check "panic_on_warn". Consolidate this into a single function so that future instrumentation can be added in a single location. Cc: Marco Elver <elver@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Juri Lelli <juri.lelli@redhat.com> Cc: Vincent Guittot <vincent.guittot@linaro.org> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Ben Segall <bsegall@google.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Cc: Valentin Schneider <vschneid@redhat.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Vincenzo Frascino <vincenzo.frascino@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: David Gow <davidgow@google.com> Cc: tangmeng <tangmeng@uniontech.com> Cc: Jann Horn <jannh@google.com> Cc: Shuah Khan <skhan@linuxfoundation.org> Cc: Petr Mladek <pmladek@suse.com> Cc: "Paul E. McKenney" <paulmck@kernel.org> Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Cc: "Guilherme G. Piccoli" <gpiccoli@igalia.com> Cc: Tiezhu Yang <yangtiezhu@loongson.cn> Cc: kasan-dev@googlegroups.com Cc: linux-mm@kvack.org Reviewed-by:
Luis Chamberlain <mcgrof@kernel.org> Signed-off-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Marco Elver <elver@google.com> Reviewed-by:
Andrey Konovalov <andreyknvl@gmail.com> Link: https://lore.kernel.org/r/20221117234328.594699-4-keescook@chromium.org
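A minimal sketch of what a consolidated helper can look like, following the commit description (each checker calls one function instead of open-coding the panic_on_warn test); the message format and the caller shown are assumptions for illustration:

  #include <linux/kernel.h>
  #include <linux/panic.h>

  void check_panic_on_warn(const char *origin)
  {
          if (panic_on_warn)
                  panic("%s: panic_on_warn set ...\n", origin);
  }

  /* e.g. a runtime checker's end-of-report path: */
  void checker_report_done(void)
  {
          check_panic_on_warn("KASAN");
  }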
-
Stephen Boyd authored
Delayed kobject debugging (CONFIG_DEBUG_KOBJECT_RELEASE) prints the kobject pointer that's being released in kobject_release() before scheduling a randomly delayed work to do the actual release work. If the caller of kobject_put() frees the kobject upon return then this will typically emit a debugobject warning about freeing an active timer. Usually the release function is the function that does the kfree() of the struct containing the kobject. For example the following print is seen kobject: 'queue' (ffff888114236190): kobject_release, parent 0000000000000000 (delayed 1000) ------------[ cut here ]------------ ODEBUG: free active (active state 0) object type: timer_list hint: kobject_delayed_cleanup+0x0/0x390 but the kobject printk cannot be matched with the debug object printk because it could be any number of kobjects that was released around that time. The random delay for the work doesn't help either. Print the address of the object being tracked to help to figure out which kobject is the problem here. Note that this does not use %px here to match the other %p usage in debugobject debugging. Due to %p usage it is required to disable pointer hashing to correlate the two pointer printks. Signed-off-by:
Stephen Boyd <swboyd@chromium.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Guenter Roeck <linux@roeck-us.net> Link: https://lore.kernel.org/r/20220519202201.2348343-1-swboyd@chromium.org
-
- Dec 01, 2022
-
-
Steven Rostedt (Google) authored
The config to be able to inject error codes into any function annotated with ALLOW_ERROR_INJECTION() is enabled when FUNCTION_ERROR_INJECTION is enabled. But unfortunately, this is always enabled on x86 when KPROBES is enabled, and there's no way to turn it off. As kprobes is useful for observability of the kernel, it is useful to have it enabled in production environments. But error injection should be avoided. Add a prompt to the config to allow it to be disabled even when kprobes is enabled, and get rid of the "def_bool y". This is a kernel debug feature (it's in Kconfig.debug), and should have never been something enabled by default. Cc: stable@vger.kernel.org Fixes: 540adea3 ("error-injection: Separate error-injection from kprobe") Signed-off-by:
Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Yury Norov authored
In its current form, FORCE_NR_CPUS is visible to all users building their kernels, even non-experts. It is also set in allmodconfig or allyesconfig, which is not correct behavior. This patch fixes that. It also changes the parameter's short description: it removes implementation details and highlights the effect of the change. Link: https://lkml.kernel.org/r/20221116172451.274938-1-yury.norov@gmail.com Signed-off-by:
Yury Norov <yury.norov@gmail.com> Suggested-by:
Geert Uytterhoeven <geert@linux-m68k.org> Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by:
Valentin Schneider <vschneid@redhat.com> Cc: Alexey Klimov <klimov.linux@gmail.com> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Eric Biggers <ebiggers@google.com> Cc: Paul E. McKenney <paulmck@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Sander Vanheule <sander@svanheule.net> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Rong Tao authored
We need to set an initial value for offset to eliminate a compilation warning. How to reproduce the warning: $ make -C tools/testing/radix-tree radix-tree.c: In function `radix_tree_tag_clear': radix-tree.c:1046:17: warning: `offset' may be used uninitialized in this function [-Wmaybe-uninitialized] 1046 | node_tag_clear(root, parent, tag, offset); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Link: https://lkml.kernel.org/r/tencent_DF74099967595DCEA93CBDC28D062026180A@qq.com Signed-off-by:
Rong Tao <rongtao@cestc.cn> Cc: Matthew Wilcox <willy@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
The simple attribute files do not accept a negative value since the commit 488dac0c ("libfs: fix error cast of negative value in simple_attr_write()"). This restores the previous behaviour by using newly introduced DEFINE_SIMPLE_ATTRIBUTE_SIGNED instead of DEFINE_SIMPLE_ATTRIBUTE. Link: https://lkml.kernel.org/r/20220919172418.45257-3-akinobu.mita@gmail.com Fixes: 488dac0c ("libfs: fix error cast of negative value in simple_attr_write()") Signed-off-by:
Akinobu Mita <akinobu.mita@gmail.com> Reported-by:
Zhao Gongyi <zhaogongyi@huawei.com> Reviewed-by:
David Hildenbrand <david@redhat.com> Reviewed-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Oscar Salvador <osalvador@suse.de> Cc: Rafael J. Wysocki <rafael@kernel.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Wei Yongjun <weiyongjun1@huawei.com> Cc: Yicong Yang <yangyicong@hisilicon.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
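A hedged usage sketch of the signed variant mentioned above, so that writing a negative value (e.g. 'echo -1') is accepted again; the variable, function and file names are illustrative:

  #include <linux/debugfs.h>
  #include <linux/fs.h>

  static s64 demo_value;

  static int demo_get(void *data, u64 *val)
  {
          *val = *(s64 *)data;
          return 0;
  }

  static int demo_set(void *data, u64 val)
  {
          *(s64 *)data = val;
          return 0;
  }

  /* The _SIGNED variant parses "-1" instead of rejecting it with -EINVAL. */
  DEFINE_SIMPLE_ATTRIBUTE_SIGNED(demo_fops, demo_get, demo_set, "%lld\n");

  /* debugfs_create_file("demo", 0644, parent, &demo_value, &demo_fops); */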
-
- Nov 30, 2022
-
-
Randy Dunlap authored
Prevent a kconfig warning that is caused by TEST_MAPLE_TREE by adding a "depends on" clause for TEST_MAPLE_TREE since 'select' does not follow any kconfig dependencies. WARNING: unmet direct dependencies detected for DEBUG_MAPLE_TREE Depends on [n]: DEBUG_KERNEL [=n] Selected by [y]: - TEST_MAPLE_TREE [=y] && RUNTIME_TESTING_MENU [=y] Link: https://lkml.kernel.org/r/20221119055117.14094-1-rdunlap@infradead.org Fixes: 120b1162 ("maple_tree: reorganize testing to restore module testing") Signed-off-by:
Randy Dunlap <rdunlap@infradead.org> Reported-by:
Geert Uytterhoeven <geert@linux-m68k.org> Reported-by:
kernel test robot <lkp@intel.com> Reviewed-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Liam Howlett authored
mte_set_full() and mte_clear_full() were incorrectly setting a pointer to a value without returning a result. Fix this by returning the modified pointer to be used as necessary. Also add a third function to return whether the bit is set or not. Link: https://lore.kernel.org/lkml/20221026120029.12555-1-lukas.bulwahn@gmail.com/ Link: https://lkml.kernel.org/r/20221028144520.2776767-1-Liam.Howlett@oracle.com Signed-off-by:
Liam R. Howlett <Liam.Howlett@oracle.com> Suggested-by:
Lukas Bulwahn <lukas.bulwahn@gmail.com> Suggested-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
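The description above maps to small helpers that hand back the encoded pointer rather than discarding it, plus a predicate. A hedged sketch of that shape follows; the bit value and types are purely illustrative, not the real maple-node encoding:

  /* Illustrative bit only -- not the real maple-node encoding. */
  #define NODE_FULL_BIT   0x4UL

  static inline void *mte_set_full(const void *node)
  {
          return (void *)((unsigned long)node | NODE_FULL_BIT);
  }

  static inline void *mte_clear_full(const void *node)
  {
          return (void *)((unsigned long)node & ~NODE_FULL_BIT);
  }

  static inline bool mte_has_full(const void *node)
  {
          return (unsigned long)node & NODE_FULL_BIT;
  }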
-
Shakeel Butt authored
The percpu_counter is used for scenarios where performance is more important than the accuracy. For percpu_counter users, who want more accurate information in their slowpath, percpu_counter_sum is provided which traverses all the online CPUs to accumulate the data. The reason it only needs to traverse online CPUs is because percpu_counter does implement CPU offline callback which syncs the local data of the offlined CPU. However there is a small race window between the online CPUs traversal of percpu_counter_sum and the CPU offline callback. The offline callback has to traverse all the percpu_counters on the system to flush the CPU local data which can be a lot. During that time, the CPU which is going offline has already been published as offline to all the readers. So, as the offline callback is running, percpu_counter_sum can be called for one counter which has some state on the CPU going offline. Since percpu_counter_sum only traverses online CPUs, it will skip that specific CPU and the offline callback might not have flushed the state for that specific percpu_counter on that offlined CPU. Normally this is not an issue because percpu_counter users can deal with some inaccuracy for small time window. However a new user i.e. mm_struct on the cleanup path wants to check the exact state of the percpu_counter through check_mm(). For such users, this patch introduces percpu_counter_sum_all() which traverses all possible CPUs and it is used in fork.c:check_mm() to avoid the potential race. This issue is exposed by the later patch "mm: convert mm's rss stats into percpu_counter". Link: https://lkml.kernel.org/r/20221109012011.881058-1-shakeelb@google.com Signed-off-by:
Shakeel Butt <shakeelb@google.com> Reported-by:
Marek Szyprowski <m.szyprowski@samsung.com> Tested-by:
Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
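A hedged sketch of the new helper as described above (sum over all possible CPUs rather than only the online ones), written against the public percpu_counter fields:

  #include <linux/cpumask.h>
  #include <linux/percpu.h>
  #include <linux/percpu_counter.h>

  s64 percpu_counter_sum_all(struct percpu_counter *fbc)
  {
          s64 ret;
          int cpu;
          unsigned long flags;

          raw_spin_lock_irqsave(&fbc->lock, flags);
          ret = fbc->count;
          /* for_each_possible_cpu() closes the offline-callback race window
           * that for_each_online_cpu() in percpu_counter_sum() leaves open. */
          for_each_possible_cpu(cpu)
                  ret += *per_cpu_ptr(fbc->counters, cpu);
          raw_spin_unlock_irqrestore(&fbc->lock, flags);

          return ret;
  }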
-
Feng Tang authored
The kmalloc redzone check for slub has been merged, and it's better to add a kunit case for it, which is inspired by a real-world case as described in commit 120ee599 ("staging: octeon-usb: prevent memory corruption"): " octeon-hcd will crash the kernel when SLOB is used. This usually happens after the 18-byte control transfer when a device descriptor is read. The DMA engine is always transferring full 32-bit words and if the transfer is shorter, some random garbage appears after the buffer. The problem is not visible with SLUB since it rounds up the allocations to word boundary, and the extra bytes will go undetected. " To avoid interrupting the normal functioning of kmalloc caches, a kmem_cache mimicking a kmalloc cache is created with similar flags, and kmalloc_trace() is used to really test the orig_size and redzone setup. Suggested-by:
Vlastimil Babka <vbabka@suse.cz> Signed-off-by:
Feng Tang <feng.tang@intel.com> Reviewed-by:
Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by:
Vlastimil Babka <vbabka@suse.cz>
-
Lee Jones authored
When enabled, KASAN enlarges functions' stack frames, pushing quite a few over the current threshold. This can mainly be seen on 32-bit architectures, where the present limit (when !GCC) is a lowly 1024 bytes. Link: https://lkml.kernel.org/r/20221125120750.3537134-3-lee@kernel.org Signed-off-by:
Lee Jones <lee@kernel.org> Acked-by:
Arnd Bergmann <arnd@arndb.de> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: "Christian König" <christian.koenig@amd.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: David Airlie <airlied@gmail.com> Cc: Harry Wentland <harry.wentland@amd.com> Cc: Leo Li <sunpeng.li@amd.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Nathan Chancellor <nathan@kernel.org> Cc: Nick Desaulniers <ndesaulniers@google.com> Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com> Cc: Rodrigo Siqueira <Rodrigo.Siqueira@amd.com> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: Tom Rix <trix@redhat.com> Cc: <stable@vger.kernel.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
Feng Tang authored
When kfence is enabled, the buffer allocated in a test case could come from a kfence pool, and the operation could also be caught and reported by kfence first, causing the case to fail. With the default kfence settings, this is very difficult to trigger. By changing CONFIG_KFENCE_NUM_OBJECTS from 255 to 16383, and CONFIG_KFENCE_SAMPLE_INTERVAL from 100 to 5, the allocation from kfence did hit 7 times in different slub_kunit cases out of 900 boot tests. To avoid this, we initially tried is_kfence_address() to check for this and repeated the allocation until finding a non-kfence address. Vlastimil Babka suggested that the SLAB_SKIP_KFENCE flag could be used to achieve this, and that it is better to add a wrapper function to simplify cache creation. Signed-off-by:
Feng Tang <feng.tang@intel.com> Reviewed-by:
Marco Elver <elver@google.com> Reviewed-by:
Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by:
Vlastimil Babka <vbabka@suse.cz>
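A hedged sketch of the wrapper idea described above: the test's caches are created through one helper that always adds SLAB_SKIP_KFENCE, so allocations in the cases can never land in a kfence pool. The helper name is illustrative:

  #include <linux/slab.h>

  static struct kmem_cache *test_kmem_cache_create(const char *name,
                                                   unsigned int size,
                                                   slab_flags_t flags)
  {
          /* Keep kfence out of the picture so redzone/poison checks in the
           * test cases are always exercised by the slub allocator itself. */
          return kmem_cache_create(name, size, 0,
                                   flags | SLAB_SKIP_KFENCE, NULL);
  }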
-
Joel Fernandes (Google) authored
Earlier commits in this series allow battery-powered systems to build their kernels with the default-disabled CONFIG_RCU_LAZY=y Kconfig option. This Kconfig option causes call_rcu() to delay its callbacks in order to batch callbacks. This means that a given RCU grace period covers more callbacks, thus reducing the number of grace periods, in turn reducing the amount of energy consumed, which increases battery lifetime, which can be a very good thing. This is not a subtle effect: In some important use cases, the battery lifetime is increased by more than 10%. This CONFIG_RCU_LAZY=y option is available only for CPUs that offload callbacks, for example, CPUs mentioned in the rcu_nocbs kernel boot parameter passed to kernels built with CONFIG_RCU_NOCB_CPU=y. Delaying callbacks is normally not a problem because most callbacks do nothing but free memory. If the system is short on memory, a shrinker will kick all currently queued lazy callbacks out of their laziness, thus freeing their memory in short order. Similarly, the rcu_barrier() function, which blocks until all currently queued callbacks are invoked, will also kick lazy callbacks, thus enabling rcu_barrier() to complete in a timely manner. However, there are some cases where laziness is not a good option. For example, synchronize_rcu() invokes call_rcu(), and blocks until the newly queued callback is invoked. It would not be good for synchronize_rcu() to block for ten seconds, even on an idle system. Therefore, synchronize_rcu() invokes call_rcu_hurry() instead of call_rcu(). The arrival of a non-lazy call_rcu_hurry() callback on a given CPU kicks any lazy callbacks that might be already queued on that CPU. After all, if there is going to be a grace period, all callbacks might as well get full benefit from it. Yes, this could be done the other way around by creating a call_rcu_lazy(), but earlier experience with this approach and feedback at the 2022 Linux Plumbers Conference shifted the approach to call_rcu() being lazy with call_rcu_hurry() for the few places where laziness is inappropriate. And another call_rcu() instance that cannot be lazy is the one on the percpu refcounter's "per-CPU to atomic switch" code path, which uses RCU when switching to atomic mode. The enqueued callback wakes up waiters waiting in the percpu_ref_switch_waitq. Allowing this callback to be lazy would result in unacceptable slowdowns for users of per-CPU refcounts, such as blk_pre_runtime_suspend(). Therefore, make __percpu_ref_switch_to_atomic() use call_rcu_hurry() in order to revert to the old behavior. [ paulmck: Apply s/call_rcu_flush/call_rcu_hurry/ feedback from Tejun Heo. ] Signed-off-by:
Joel Fernandes (Google) <joel@joelfernandes.org> Acked-by:
Tejun Heo <tj@kernel.org> Signed-off-by:
Paul E. McKenney <paulmck@kernel.org> Cc: Dennis Zhou <dennis@kernel.org> Cc: Christoph Lameter <cl@linux.com> Cc: <linux-mm@kvack.org>
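A hedged, self-contained illustration of the rule the commit applies (a caller that blocks until its RCU callback runs should use call_rcu_hurry() so that CONFIG_RCU_LAZY batching cannot stall the wakeup); the structure below is an example, not the percpu_ref internals:

  #include <linux/completion.h>
  #include <linux/container_of.h>
  #include <linux/rcupdate.h>

  struct gp_waiter {
          struct rcu_head rcu;
          struct completion done;
  };

  static void gp_waiter_cb(struct rcu_head *rcu)
  {
          complete(&container_of(rcu, struct gp_waiter, rcu)->done);
  }

  static void wait_for_grace_period(struct gp_waiter *w)
  {
          init_completion(&w->done);
          /* call_rcu() could be lazily batched for ~seconds; the hurry variant
           * asks for a timely grace period because someone is blocked on it. */
          call_rcu_hurry(&w->rcu, gp_waiter_cb);
          wait_for_completion(&w->done);
  }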
-
- Nov 29, 2022
-
-
Jason Gunthorpe authored
The span iterator travels over the indexes of the interval_tree, not the nodes, and classifies spans of indexes as either 'used' or 'hole'. 'used' spans are fully covered by nodes in the tree and 'hole' spans have no node intersecting the span. This is done greedily such that spans are maximally sized and every iteration step switches between used/hole. As an example a trivial allocator can be written as:

  for (interval_tree_span_iter_first(&span, itree, 0, ULONG_MAX);
       !interval_tree_span_iter_done(&span);
       interval_tree_span_iter_next(&span))
          if (span.is_hole &&
              span.last_hole - span.start_hole >= allocation_size - 1)
                  return span.start_hole;

With all the tricky boundary conditions handled by the library code. The following iommufd patches have several algorithms for its overlapping node interval trees that are significantly simplified with this kind of iteration primitive. As it seems generally useful, put it into lib/. Link: https://lore.kernel.org/r/3-v6-a196d26f289e+11787-iommufd_jgg@nvidia.com Reviewed-by:
Kevin Tian <kevin.tian@intel.com> Reviewed-by:
Eric Auger <eric.auger@redhat.com> Tested-by:
Nicolin Chen <nicolinc@nvidia.com> Tested-by:
Yi Liu <yi.l.liu@intel.com> Tested-by:
Lixiao Yang <lixiao.yang@intel.com> Tested-by:
Matthew Rosato <mjrosato@linux.ibm.com> Signed-off-by:
Jason Gunthorpe <jgg@nvidia.com>
-
- Nov 27, 2022
-
-
Vlastimil Babka authored
For tiny systems that have used SLOB until now, SLUB might be impractical due to its higher memory usage. To help with that, introduce an option CONFIG_SLUB_TINY that modifies SLUB to use less memory. This is done by sacrificing scalability, security and debugging features, therefore not recommended for any system with more than 16MB RAM. This commit introduces the option and uses it to set other related options in a way that reduces memory usage. Signed-off-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Hyeonggon Yoo <42.hyeyoo@gmail.com> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Reviewed-by:
Christoph Lameter <cl@linux.com>
-
- Nov 25, 2022
-
-
Al Viro authored
instead of "don't do it to ITER_PIPE" check for ->data_source being false on copying from iterator. Check for !->data_source for copying to iterator, while we are at it. Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
Not hard to implement - we are not copying anything here, so csum_and_memcpy() is not usable, but calculating a checksum of source directly is trivial... Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Jiapeng Chong authored
Variable 'insert_retries' is not effectively used in the function, so delete it. lib/test_rhashtable.c:437:18: warning: variable 'insert_retries' set but not used. Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3242 Reported-by:
Abaci Robot <abaci@linux.alibaba.com> Signed-off-by:
Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Acked-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- Nov 23, 2022
-
-
Greg Kroah-Hartman authored
The latest version of grep claims that egrep is now obsolete, so the build now contains warnings that look like: egrep: warning: egrep is obsolescent; using grep -E Fix this up by moving the vdso Makefile to use "grep -E" instead. Cc: Andy Lutomirski <luto@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Vincenzo Frascino <vincenzo.frascino@arm.com> Link: https://lore.kernel.org/r/20220920170633.3133829-1-gregkh@linuxfoundation.org Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Zhengchao Shao authored
When misc_register() failed in test_firmware_init(), the memory pointed by test_fw_config->name is not released. The memory leak information is as follows: unreferenced object 0xffff88810a34cb00 (size 32): comm "insmod", pid 7952, jiffies 4294948236 (age 49.060s) hex dump (first 32 bytes): 74 65 73 74 2d 66 69 72 6d 77 61 72 65 2e 62 69 test-firmware.bi 6e 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 n............... backtrace: [<ffffffff81b21fcb>] __kmalloc_node_track_caller+0x4b/0xc0 [<ffffffff81affb96>] kstrndup+0x46/0xc0 [<ffffffffa0403a49>] __test_firmware_config_init+0x29/0x380 [test_firmware] [<ffffffffa040f068>] 0xffffffffa040f068 [<ffffffff81002c41>] do_one_initcall+0x141/0x780 [<ffffffff816a72c3>] do_init_module+0x1c3/0x630 [<ffffffff816adb9e>] load_module+0x623e/0x76a0 [<ffffffff816af471>] __do_sys_finit_module+0x181/0x240 [<ffffffff89978f99>] do_syscall_64+0x39/0xb0 [<ffffffff89a0008b>] entry_SYSCALL_64_after_hwframe+0x63/0xcd Fixes: c92316bf ("test_firmware: add batched firmware tests") Signed-off-by:
Zhengchao Shao <shaozhengchao@huawei.com> Acked-by:
Luis Chamberlain <mcgrof@kernel.org> Link: https://lore.kernel.org/r/20221119035721.18268-1-shaozhengchao@huawei.com Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org>
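A hedged sketch of the kind of error-path cleanup the fix needs in test_firmware_init(); the __test_firmware_config_free() helper name is an assumption for illustration, chosen to mirror __test_firmware_config_init() from the backtrace above:

  static int __init test_firmware_init(void)
  {
          int rc;

          rc = __test_firmware_config_init();     /* allocates test_fw_config->name */
          if (rc)
                  return rc;

          rc = misc_register(&test_fw_misc_device);
          if (rc) {
                  /* Hypothetical helper: free what config_init() allocated,
                   * including test_fw_config->name, before bailing out. */
                  __test_firmware_config_free();
                  pr_err("could not register misc device: %d\n", rc);
                  return rc;
          }

          return 0;
  }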
-
Li Hua authored
If KPROBES_SANITY_TEST and ARCH_CORRECT_STACKTRACE_ON_KRETPROBE are enabled but STACKTRACE is not set, the build fails as below: lib/test_kprobes.c: In function ‘stacktrace_return_handler’: lib/test_kprobes.c:228:8: error: implicit declaration of function ‘stack_trace_save’; did you mean ‘stacktrace_driver’? [-Werror=implicit-function-declaration] ret = stack_trace_save(stack_buf, STACK_BUF_SIZE, 0); ^~~~~~~~~~~~~~~~ stacktrace_driver cc1: all warnings being treated as errors scripts/Makefile.build:250: recipe for target 'lib/test_kprobes.o' failed make[2]: *** [lib/test_kprobes.o] Error 1 To fix this error, select STACKTRACE if ARCH_CORRECT_STACKTRACE_ON_KRETPROBE is enabled. Link: https://lore.kernel.org/all/20221121030620.63181-1-hucool.lihua@huawei.com/ Fixes: 1f6d3a8f ("kprobes: Add a test case for stacktrace from kretprobe handler") Cc: stable@vger.kernel.org Signed-off-by:
Li Hua <hucool.lihua@huawei.com> Acked-by:
Masami Hiramatsu (Google) <mhiramat@kernel.org> Signed-off-by:
Masami Hiramatsu (Google) <mhiramat@kernel.org>
-
Kees Cook authored
Validate the effect of the __alloc_size attribute on allocators. If the compiler doesn't support __builtin_dynamic_object_size(), skip the associated tests. (For GCC, just remove the "--make_options" line below...) $ ./tools/testing/kunit/kunit.py run --arch x86_64 \ --kconfig_add CONFIG_FORTIFY_SOURCE=y \ --make_options LLVM=1 fortify ... [15:16:30] ================== fortify (10 subtests) =================== [15:16:30] [PASSED] known_sizes_test [15:16:30] [PASSED] control_flow_split_test [15:16:30] [PASSED] alloc_size_kmalloc_const_test [15:16:30] [PASSED] alloc_size_kmalloc_dynamic_test [15:16:30] [PASSED] alloc_size_vmalloc_const_test [15:16:30] [PASSED] alloc_size_vmalloc_dynamic_test [15:16:30] [PASSED] alloc_size_kvmalloc_const_test [15:16:30] [PASSED] alloc_size_kvmalloc_dynamic_test [15:16:30] [PASSED] alloc_size_devm_kmalloc_const_test [15:16:30] [PASSED] alloc_size_devm_kmalloc_dynamic_test [15:16:30] ===================== [PASSED] fortify ===================== [15:16:30] ============================================================ [15:16:30] Testing complete. Ran 10 tests: passed: 10 [15:16:31] Elapsed time: 8.348s total, 0.002s configuring, 6.923s building, 1.075s running For earlier GCC prior to version 12, the dynamic tests will be skipped: [15:18:59] ================== fortify (10 subtests) =================== [15:18:59] [PASSED] known_sizes_test [15:18:59] [PASSED] control_flow_split_test [15:18:59] [PASSED] alloc_size_kmalloc_const_test [15:18:59] [SKIPPED] alloc_size_kmalloc_dynamic_test [15:18:59] [PASSED] alloc_size_vmalloc_const_test [15:18:59] [SKIPPED] alloc_size_vmalloc_dynamic_test [15:18:59] [PASSED] alloc_size_kvmalloc_const_test [15:18:59] [SKIPPED] alloc_size_kvmalloc_dynamic_test [15:18:59] [PASSED] alloc_size_devm_kmalloc_const_test [15:18:59] [SKIPPED] alloc_size_devm_kmalloc_dynamic_test [15:18:59] ===================== [PASSED] fortify ===================== [15:18:59] ============================================================ [15:18:59] Testing complete. Ran 10 tests: passed: 6, skipped: 4 [15:18:59] Elapsed time: 11.965s total, 0.002s configuring, 10.540s building, 1.068s running Cc: David Gow <davidgow@google.com> Cc: linux-hardening@vger.kernel.org Signed-off-by:
Kees Cook <keescook@chromium.org>
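For context, a hedged sketch of the property these tests exercise: an allocator annotated with __alloc_size() lets __builtin_dynamic_object_size() see the runtime allocation size. This is an illustration of the mechanism, not the fortify_kunit code itself:

  #include <kunit/test.h>
  #include <linux/slab.h>

  static void alloc_size_smoke_test(struct kunit *test)
  {
          size_t n = 117;
          /* kmalloc() carries __alloc_size(1), so a dynamic-object-size capable
           * compiler can report the allocation size for FORTIFY_SOURCE checks. */
          char *p = kmalloc(n, GFP_KERNEL);

          KUNIT_ASSERT_NOT_ERR_OR_NULL(test, p);
          /* Older compilers return (size_t)-1 here; that case is skipped. */
          if (__builtin_dynamic_object_size(p, 0) != (size_t)-1)
                  KUNIT_EXPECT_EQ(test, __builtin_dynamic_object_size(p, 0), n);
          kfree(p);
  }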
-