- Dec 02, 2021
-
-
Kumar Kartikeya Dwivedi authored
Vinicius Costa Gomes reported [0] that the build fails when CONFIG_DEBUG_INFO_BTF is enabled and CONFIG_BPF_SYSCALL is disabled. This leads to btf.c not being compiled, and then no symbol being present in vmlinux for the declarations in btf.h. Since BTF is not useful without enabling the BPF subsystem, disallow this combination. However, theoretically, disabling both now could still fail the build, as the symbol for the kfunc_btf_id_list variables is not available. This isn't a problem as the compiler usually optimizes away the whole register/unregister call, but at lower optimization levels it can fail the build at the linking stage. Fix that by adding dummy variables so that modules taking the address of them still work, but the whole thing is a noop. [0]: https://lore.kernel.org/bpf/20211110205418.332403-1-vinicius.gomes@intel.com Fixes: 14f267d9 ("bpf: btf: Introduce helpers for dynamic BTF set registration") Reported-by:
Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by:
Kumar Kartikeya Dwivedi <memxor@gmail.com> Signed-off-by:
Andrii Nakryiko <andrii@kernel.org> Acked-by:
Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20211122144742.477787-2-memxor@gmail.com
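For illustration, the "dummy variables plus no-op helpers" pattern described above looks roughly like the sketch below. The names follow btf.h, but the exact definitions and struct layout here are assumptions, not the verbatim diff:
```
/* When the feature is compiled out, keep the symbols and entry points so
 * modules still compile and link, while everything degenerates to a no-op. */
struct kfunc_btf_id_set;
struct kfunc_btf_id_list { /* members omitted in this sketch */ };

/* dummy objects: modules taking &bpf_tcp_ca_kfunc_list etc. still resolve */
struct kfunc_btf_id_list bpf_tcp_ca_kfunc_list;
struct kfunc_btf_id_list prog_test_kfunc_list;

static inline void register_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
					     struct kfunc_btf_id_set *s) { }
static inline void unregister_kfunc_btf_id_set(struct kfunc_btf_id_list *l,
					       struct kfunc_btf_id_set *s) { }
```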
-
- Nov 30, 2021
-
-
Arnd Bergmann authored
On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS because the ordinary load/store instructions (ldr, ldrh, ldrb) can tolerate any misalignment of the memory address. However, load/store double and load/store multiple instructions (ldrd, ldm) may still only be used on memory addresses that are 32-bit aligned, and so we have to use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we may end up with a severe performance hit due to alignment traps that require fixups by the kernel. Testing shows that this currently happens with clang-13 but not gcc-11. In theory, any compiler version can produce this bug or other problems, as we are dealing with undefined behavior in C99 even on architectures that support this in hardware, see also https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100363. Fortunately, the get_unaligned() accessors do the right thing: when building for ARMv6 or later, the compiler will emit unaligned accesses using the ordinary load/store instructions (but avoid the ones that require 32-bit alignment). When building for older ARM, those accessors will emit the appropriate sequence of ldrb/mov/orr instructions. And on architectures that can truly tolerate any kind of misalignment, the get_unaligned() accessors resolve to the leXX_to_cpup accessors that operate on aligned addresses. Since the compiler will in fact emit ldrd or ldm instructions when building this code for ARM v6 or later, the solution is to use the unaligned accessors unconditionally on architectures where this is known to be fast. The _aligned version of the hash function is however still needed to get the best performance on architectures that cannot do any unaligned access in hardware. This new version avoids the undefined behavior and should produce the fastest hash on all architectures we support. Link: https://lore.kernel.org/linux-arm-kernel/20181008211554.5355-4-ard.biesheuvel@linaro.org/ Link: https://lore.kernel.org/linux-crypto/CAK8P3a2KfmmGDbVHULWevB0hv71P2oi2ZCHEAqT=8dQfa0=cqQ@mail.gmail.com/ Reported-by:
Ard Biesheuvel <ard.biesheuvel@linaro.org> Fixes: 2c956a60 ("siphash: add cryptographically secure PRF") Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Reviewed-by:
Jason A. Donenfeld <Jason@zx2c4.com> Acked-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by:
Jakub Kicinski <kuba@kernel.org>
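The accessor behaviour described above can be summarized in a short sketch (illustrative, not the actual lib/siphash.c diff): on kernels with CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS the unaligned helpers compile down to ordinary loads, never to ldrd/ldm, so they can be used unconditionally when reading hash input words:
```
#include <linux/types.h>
#include <asm/unaligned.h>

/* Safe for any pointer alignment: on ARMv6+ this becomes plain ldr/ldrh/ldrb,
 * and on architectures that truly tolerate misalignment it resolves to a
 * le64_to_cpup()-style aligned load anyway. */
static inline u64 siphash_read_word(const u8 *p)
{
	return get_unaligned_le64(p);
}
```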
-
- Nov 22, 2021
-
-
Helge Deller authored
PA-RISC uses a much bigger frame size for functions than other architectures, so increase the frame-size warning limit to 2048 for both 32- and 64-bit kernels. This fixes e.g. a warning in lib/xxhash.c. Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Helge Deller <deller@gmx.de>
-
- Nov 20, 2021
-
-
Kees Cook authored
As done in commit d73dad4e ("kasan: test: bypass __alloc_size checks") for __write_overflow warnings, also silence some more cases that trip the __read_overflow warnings seen in 5.16-rc1[1]: In file included from include/linux/string.h:253, from include/linux/bitmap.h:10, from include/linux/cpumask.h:12, from include/linux/mm_types_task.h:14, from include/linux/mm_types.h:5, from include/linux/page-flags.h:13, from arch/arm64/include/asm/mte.h:14, from arch/arm64/include/asm/pgtable.h:12, from include/linux/pgtable.h:6, from include/linux/kasan.h:29, from lib/test_kasan.c:10: In function 'memcmp', inlined from 'kasan_memcmp' at lib/test_kasan.c:897:2: include/linux/fortify-string.h:263:25: error: call to '__read_overflow' declared with attribute error: detected read beyond size of object (1st parameter) 263 | __read_overflow(); | ^~~~~~~~~~~~~~~~~ In function 'memchr', inlined from 'kasan_memchr' at lib/test_kasan.c:872:2: include/linux/fortify-string.h:277:17: error: call to '__read_overflow' declared with attribute error: detected read beyond size of object (1st parameter) 277 | __read_overflow(); | ^~~~~~~~~~~~~~~~~ [1] http://kisskb.ellerman.id.au/kisskb/buildresult/14660585/log/ Link: https://lkml.kernel.org/r/20211116004111.3171781-1-keescook@chromium.org Fixes: d73dad4e ("kasan: test: bypass __alloc_size checks") Signed-off-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Andrey Konovalov <andreyknvl@gmail.com> Acked-by:
Marco Elver <elver@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
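A minimal sketch of the bypass, assuming the same OPTIMIZER_HIDE_VAR() approach used for the earlier __write_overflow cases: the size is laundered through an empty asm so the compiler can no longer prove the overflow at build time, letting the intentional bug reach KASAN at run time (kasan_ptr_result and KUNIT_EXPECT_KASAN_FAIL are the test-file helpers):
```
size_t size = 24;

OPTIMIZER_HIDE_VAR(size);	/* compiler no longer "knows" size == 24 */
KUNIT_EXPECT_KASAN_FAIL(test,
	kasan_ptr_result = memchr(ptr, '1', size + 1));
```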
-
- Nov 18, 2021
-
-
Nick Terrell authored
After the update to zstd-1.4.10, passing -O3 is no longer necessary to get good performance from zstd. Using the default optimization level -O2 is sufficient to get good performance. I've measured no significant change to compression speed, and a ~1% decompression speed loss, which is acceptable. This fixes the reported parisc -Wframe-larger-than=1536 errors [0]. The gcc-8-hppa-linux-gnu compiler performed very poorly with -O3, generating stacks that are ~3KB. With -O2 these same functions generate stacks in the < 100B range, completely fixing the problem. Function size deltas are listed below: ZSTD_compressBlock_fast_extDict_generic: 3800 -> 68 ZSTD_compressBlock_fast: 2216 -> 40 ZSTD_compressBlock_fast_dictMatchState: 1848 -> 64 ZSTD_compressBlock_doubleFast_extDict_generic: 3744 -> 76 ZSTD_fillDoubleHashTable: 3252 -> 0 ZSTD_compressBlock_doubleFast: 5856 -> 36 ZSTD_compressBlock_doubleFast_dictMatchState: 5380 -> 84 ZSTD_compressBlock_lazy2: 2420 -> 72 Additionally, this improves the reported code bloat [1]. With gcc-11 bloat-o-meter shows an 80KB code size improvement: ``` > ../scripts/bloat-o-meter vmlinux.old vmlinux add/remove: 31/8 grow/shrink: 24/155 up/down: 25734/-107924 (-82190) Total: Before=6418562, After=6336372, chg -1.28% ``` Compared to before the zstd-1.4.10 update we see a total code size regression of 105KB, down from 374KB at v5.16-rc1: ``` > ../scripts/bloat-o-meter vmlinux.old vmlinux add/remove: 292/62 grow/shrink: 56/88 up/down: 235009/-127487 (107522) Total: Before=6228850, After=6336372, chg +1.73% ``` [0] https://lkml.org/lkml/2021/11/15/710 [1] https://lkml.org/lkml/2021/11/14/189 Link: https://lore.kernel.org/r/20211117014949.1169186-4-nickrterrell@gmail.com/ Link: https://lore.kernel.org/r/20211117201459.1194876-4-nickrterrell@gmail.com/ Reported-by:
Geert Uytterhoeven <geert@linux-m68k.org> Tested-by:
Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Nick Terrell <terrelln@fb.com>
-
Nick Terrell authored
`zstd_opt.c` contains the match finder for the highest compression levels. These levels are already very slow, and are unlikely to be used in the kernel. If they are used, they shouldn't be used in latency sensitive workloads, so slowing them down shouldn't be a big deal. This saves 188 KB of the 288 KB regression reported by Geert Uytterhoeven [0]. I've also opened an issue upstream [1] so that we can properly tackle the code size issue in `zstd_opt.c` for all users, and can hopefully remove this hack in the next zstd version we import. Bloat-o-meter output on x86-64: ``` > ../scripts/bloat-o-meter vmlinux.old vmlinux add/remove: 6/5 grow/shrink: 1/9 up/down: 16673/-209939 (-193266) Function old new delta ZSTD_compressBlock_opt_generic.constprop - 7559 +7559 ZSTD_insertBtAndGetAllMatches - 6304 +6304 ZSTD_insertBt1 - 1731 +1731 ZSTD_storeSeq - 693 +693 ZSTD_BtGetAllMatches - 255 +255 ZSTD_updateRep - 128 +128 ZSTD_updateTree 96 99 +3 ZSTD_insertAndFindFirstIndexHash3 81 - -81 ZSTD_setBasePrices.constprop 98 - -98 ZSTD_litLengthPrice.constprop 138 - -138 ZSTD_count 362 181 -181 ZSTD_count_2segments 1407 938 -469 ZSTD_insertBt1.constprop 2689 - -2689 ZSTD_compressBlock_btultra2 19990 423 -19567 ZSTD_compressBlock_btultra 19633 15 -19618 ZSTD_initStats_ultra 19825 - -19825 ZSTD_compressBlock_btopt 20374 12 -20362 ZSTD_compressBlock_btopt_extDict 29984 12 -29972 ZSTD_compressBlock_btultra_extDict 30718 15 -30703 ZSTD_compressBlock_btopt_dictMatchState 32689 12 -32677 ZSTD_compressBlock_btultra_dictMatchState 33574 15 -33559 Total: Before=6611828, After=6418562, chg -2.92% ``` [0] https://lkml.org/lkml/2021/11/14/189 [1] https://github.com/facebook/zstd/issues/2862 Link: https://lore.kernel.org/r/20211117014949.1169186-3-nickrterrell@gmail.com/ Link: https://lore.kernel.org/r/20211117201459.1194876-3-nickrterrell@gmail.com/ Reported-by:
Geert Uytterhoeven <geert@linux-m68k.org> Tested-by:
Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by:
Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by:
Nick Terrell <terrelln@fb.com>
-
Nick Terrell authored
The variable `litLengthSum` is only used by an `assert()`, so when asserts are disabled the compiler doesn't see any usage and warns. This issue is already fixed upstream by PR #2838 [0]. It was reported by the Kernel test robot in [1]. Another approach would be to change zstd's disabled `assert()` definition to use the argument in a disabled branch, instead of ignoring the argument. I've avoided this approach because there are some small changes necessary to get zstd to build, and I would want to thoroughly re-test for performance, since that is slightly changing the code in every function in zstd. It seems like a trivial change, but some functions are pretty sensitive to small changes. However, I think it is a valid approach that I would like to see upstream take, so I've opened Issue #2868 to attempt this upstream. Lastly, I've chosen not to use __maybe_unused because all code in lib/zstd/ must eventually be upstreamed. Upstream zstd can't use __maybe_unused because it isn't portable across all compilers. [0] https://github.com/facebook/zstd/pull/2838 [1] https://lore.kernel.org/linux-mm/202111120312.833wII4i-lkp@intel.com/T/ [2] https://github.com/facebook/zstd/issues/2868 Link: https://lore.kernel.org/r/20211117014949.1169186-2-nickrterrell@gmail.com/ Link: https://lore.kernel.org/r/20211117201459.1194876-2-nickrterrell@gmail.com/ Reported-by:
kernel test robot <lkp@intel.com> Signed-off-by:
Nick Terrell <terrelln@fb.com>
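A generic illustration of the warning and the remedies discussed above (this is not the zstd diff itself; the function and names are made up for the example):
```
/* zstd's assert() compiles to nothing when debugging is disabled, so a value
 * computed only for an assert looks write-only to the compiler: */
static size_t check_table(const unsigned *table, size_t n, size_t total)
{
	size_t litLengthSum = 0, i;

	for (i = 0; i < n; i++)
		litLengthSum += table[i];
	assert(litLengthSum == total);	/* the only "use" of litLengthSum */
	return n;
}

/* The alternative floated upstream is to make the disabled assert() keep a
 * dead reference to its argument, e.g. #define assert(x) ((void)sizeof(x)),
 * so assert-only variables never appear unused. */
```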
-
- Nov 11, 2021
-
-
Alistair Popple authored
MIGRATE_PFN_LOCKED is used to indicate to migrate_vma_prepare() that a source page was already locked during migrate_vma_collect(). If it wasn't, then a second attempt is made to lock the page. However, if the first attempt failed, it's unlikely a second attempt will succeed, and the retry adds complexity. So clean this up by removing the retry and the MIGRATE_PFN_LOCKED flag. Destination pages are also meant to have the MIGRATE_PFN_LOCKED flag set, but nothing actually checks that. Link: https://lkml.kernel.org/r/20211025041608.289017-1-apopple@nvidia.com Signed-off-by:
Alistair Popple <apopple@nvidia.com> Reviewed-by:
Ralph Campbell <rcampbell@nvidia.com> Acked-by:
Felix Kuehling <Felix.Kuehling@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Cc: Jerome Glisse <jglisse@redhat.com> Cc: John Hubbard <jhubbard@nvidia.com> Cc: Zi Yan <ziy@nvidia.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Ben Skeggs <bskeggs@redhat.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Nov 10, 2021
-
-
Nicholas Piggin authored
printk from NMI context relies on irq work being raised on the local CPU to print to the console. This can be a problem if the NMI was raised by a lockup detector to print the lockup stack and regs, because the CPU may not enable irqs (because it is locked up). Introduce printk_trigger_flush(), which can be called on another CPU to try to get those messages to the console, and call it where printk_safe_flush was previously called. Fixes: 93d102f0 ("printk: remove safe buffers") Cc: stable@vger.kernel.org # 5.15 Signed-off-by:
Nicholas Piggin <npiggin@gmail.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Reviewed-by:
John Ogness <john.ogness@linutronix.de> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211107045116.1754411-1-npiggin@gmail.com
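A hedged sketch of the intended call pattern, loosely modelled on a lockup-detector path (the surrounding code here is illustrative, not a quote of the actual caller):
```
/* The locked-up CPU has printed its stack and regs into the ringbuffer from
 * NMI context, but cannot run the irq_work that pushes it to the console. */
if (!trigger_single_cpu_backtrace(cpu))
	pr_warn("could not trigger backtrace on CPU %d\n", cpu);

/* So ask printk, from this healthy CPU, to flush on the stuck CPU's behalf;
 * this is where printk_safe_flush() used to be called. */
printk_trigger_flush();
```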
-
- Nov 09, 2021
-
-
Thomas Gleixner authored
sg_miter_stop() checks for disabled preemption before unmapping a page via kunmap_atomic(). The kernel doc mentions under context that preemption must be disabled if SG_MITER_ATOMIC is set. There is no active requirement for the caller to have preemption disabled before invoking sg_miter_stop(). The sg_miter_*() implementation itself has no such requirement. In fact, preemption is disabled by kmap_atomic() as part of sg_miter_next() and remains disabled as long as there is an active SG_MITER_ATOMIC mapping. This is a consequence of kmap_atomic() and not a requirement for sg_miter_*() itself. Callers choose SG_MITER_ATOMIC either because they use the API in a context where blocking is not possible, or because blocking would be possible but they prefer a lower-weight mapping that is not available on all CPUs and may need less overhead to set up, at the price that preemption will be disabled. The kmap_atomic() implementation on PREEMPT_RT does not disable preemption. It simply disables CPU migration to ensure that the task remains on the same CPU while the caller remains preemptible. This in turn triggers the warning in sg_miter_stop() because preemption is allowed. Both the PREEMPT_RT and !PREEMPT_RT implementations of kmap_atomic() disable pagefaults as a requirement. It is sufficient to check for this instead of disabled preemption. Check for a disabled pagefault handler in the SG_MITER_ATOMIC case. Remove the "preemption disabled" part from the kernel doc as the sg_miter_*() implementation does not care. [bigeasy@linutronix.de: commit description] Link: https://lkml.kernel.org/r/20211015211409.cqopacv3pxdwn2ty@linutronix.de Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
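Roughly, the check in sg_miter_stop()'s SG_MITER_ATOMIC branch changes as sketched below (a sketch of the described change, not the verbatim diff):
```
if (miter->__flags & SG_MITER_ATOMIC) {
	/* was: a "preemption must be disabled" style check, which the
	 * migrate-disable kmap_atomic() on PREEMPT_RT no longer satisfies */
	WARN_ON_ONCE(!pagefault_disabled());
	kunmap_atomic(miter->addr);
} else {
	kunmap(miter->page);
}
```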
-
Alexey Dobriyan authored
Codegen became bloated again after the simple_strntoull() introduction. add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-224 (-224) Function old new delta simple_strtoul 5 2 -3 simple_strtol 23 20 -3 simple_strtoull 119 15 -104 simple_strtoll 155 41 -114 Link: https://lkml.kernel.org/r/YVmlB9yY4lvbNKYt@localhost.localdomain Signed-off-by:
Alexey Dobriyan <adobriyan@gmail.com> Cc: Richard Fitzgerald <rf@opensource.cirrus.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
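Assuming the bloat comes from the shared simple_strntoull() helper being inlined into each of the four wrappers (the signature shown here is an assumption, as is the exact remedy), the usual fix is to force the helper out of line so the wrappers share a single copy, consistent with the per-function shrinkage in the table above:
```
/* one out-of-line copy instead of one inlined copy per simple_strto* wrapper */
static noinline_for_stack
unsigned long long simple_strntoull(const char *startp, size_t max_chars,
				    char **endp, unsigned int base);
```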
-
Imran Khan authored
To print stack entries into a buffer, users of stackdepot first get a list of stack entries using stack_depot_fetch() and then print this list into a buffer using stack_trace_snprint(). Provide a helper in stackdepot for this purpose. Also change the above-mentioned users to use this helper. [imran.f.khan@oracle.com: fix build error] Link: https://lkml.kernel.org/r/20210915175321.3472770-4-imran.f.khan@oracle.com [imran.f.khan@oracle.com: export stack_depot_snprint() to modules] Link: https://lkml.kernel.org/r/20210916133535.3592491-4-imran.f.khan@oracle.com Link: https://lkml.kernel.org/r/20210915014806.3206938-4-imran.f.khan@oracle.com Signed-off-by:
Imran Khan <imran.f.khan@oracle.com> Suggested-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by: Jani Nikula <jani.nikula@intel.com> [i915] Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
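The resulting call pattern, sketched below with an arbitrary buffer size, replaces the two-step fetch-and-format sequence with one helper call:
```
char buf[256];
unsigned long *entries;
unsigned int nr;

/* before: every caller open-codes the pair */
nr = stack_depot_fetch(handle, &entries);
stack_trace_snprint(buf, sizeof(buf), entries, nr, 0);

/* after: the stackdepot helper does both */
stack_depot_snprint(handle, buf, sizeof(buf), 0);
```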
-
Imran Khan authored
To print stack entries, users of stackdepot first use stack_depot_fetch() to get a list of stack entries and then use stack_trace_print() to print this list. Provide a helper in stackdepot to print stack entries based on a stackdepot handle. Also change the above-mentioned users to use this helper. Link: https://lkml.kernel.org/r/20210915014806.3206938-3-imran.f.khan@oracle.com Signed-off-by:
Imran Khan <imran.f.khan@oracle.com> Suggested-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Reviewed-by:
Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: David Airlie <airlied@linux.ie> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
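Likewise for plain printing, a sketch of the before/after for callers that only hold a handle:
```
unsigned long *entries;
unsigned int nr;

/* before */
nr = stack_depot_fetch(handle, &entries);
stack_trace_print(entries, nr, 0);

/* after */
stack_depot_print(handle);
```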
-
Imran Khan authored
Patch series "lib, stackdepot: check stackdepot handle before accessing slabs", v2. PATCH-1: Checks validity of a stackdepot handle before proceeding to access stackdepot slab/objects. PATCH-2: Adds a helper in stackdepot to allow users to print stack entries just by specifying the stackdepot handle. It also changes such users to use this new interface. PATCH-3: Adds a helper in stackdepot to allow users to print stack entries into buffers just by specifying the stackdepot handle and destination buffer. It also changes such users to use this new interface. This patch (of 3): stack_depot_save allocates slabs that will be used for storing objects in the future. If this slab allocation fails, we may get to a situation where space allocation for a new stack_record fails, causing stack_depot_save to return 0 as the handle. If a user of this handle ends up invoking stack_depot_fetch with this handle value, the current implementation of stack_depot_fetch will end up using a slab from the wrong index. To avoid this, check the handle value at the beginning. Link: https://lkml.kernel.org/r/20210915175321.3472770-1-imran.f.khan@oracle.com Link: https://lkml.kernel.org/r/20210915014806.3206938-1-imran.f.khan@oracle.com Link: https://lkml.kernel.org/r/20210915014806.3206938-2-imran.f.khan@oracle.com Signed-off-by:
Imran Khan <imran.f.khan@oracle.com> Suggested-by:
Vlastimil Babka <vbabka@suse.cz> Acked-by:
Vlastimil Babka <vbabka@suse.cz> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: David Airlie <airlied@linux.ie> Cc: Daniel Vetter <daniel@ffwll.ch> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
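The guard described above amounts to bailing out on the failure handle before any slab index is computed; a minimal sketch of the entry check in stack_depot_fetch() (body elided, exact code assumed):
```
unsigned int stack_depot_fetch(depot_stack_handle_t handle,
			       unsigned long **entries)
{
	*entries = NULL;
	if (!handle)	/* 0 is what a failed stack_depot_save() returns */
		return 0;
	/* ... decode the handle and look up the slab as before ... */
}
```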
-
Nathan Chancellor authored
A new warning in clang warns that there is an instance where boolean expressions are being used with bitwise operators instead of logical ones: lib/zstd/decompress/huf_decompress.c:890:25: warning: use of bitwise '&' with boolean operands [-Wbitwise-instead-of-logical] (BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished) ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ zstd does this frequently to help with performance, as logical operators have branches whereas bitwise ones do not. To fix this warning in other cases, the expressions were placed on separate lines with the '&=' operator; however, this particular instance was moved away from that so that it could be surrounded by LIKELY, which is a macro for __builtin_expect(), to help with a performance regression, according to upstream zstd pull #1973. Aside from switching to logical operators, which is likely undesirable in this instance, or disabling the warning outright, the solution is casting one of the expressions to an integer type to make it clear to clang that the author knows what they are doing. Add a cast to U32 to silence the warning. The first U32 cast is to silence an instance of -Wshorten-64-to-32 because __builtin_expect() returns long so it cannot be moved. Link: https://github.com/ClangBuiltLinux/linux/issues/1486 Link: https://github.com/facebook/zstd/pull/1973 Reported-by:
Nick Desaulniers <ndesaulniers@google.com> Signed-off-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Nick Terrell <terrelln@fb.com>
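The warning and the cast-based fix follow this general shape (the expressions are illustrative stand-ins, not the zstd source):
```
typedef unsigned int U32;	/* zstd-style fixed-width alias */

int stream_a_done = 0, stream_b_done = 0;

/* clang: use of bitwise '&' with boolean operands */
if ((stream_a_done == 0) & (stream_b_done == 0))
	/* ... decode another symbol from each stream ... */;

/* the cast keeps the branch-free '&' (deliberate, for performance) while
 * telling clang the boolean/integer mix is intentional */
if ((U32)(stream_a_done == 0) & (U32)(stream_b_done == 0))
	/* ... */;
```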
-
Nick Terrell authored
Upgrade to the latest upstream zstd version 1.4.10. This patch is 100% generated from upstream zstd commit 20821a46f412 [0]. This patch is very large because it is transitioning from the custom kernel zstd to using upstream directly. The new zstd follows upstream's file structure, which is different. Future update patches will be much smaller because they will only contain the changes from one upstream zstd release. As an aid for review I've created a commit [1] that shows the diff between upstream zstd as-is (which doesn't compile), and the zstd code imported in this patch. The version of zstd in this patch is generated from upstream with changes applied by automation to replace upstream's libc dependencies, remove unnecessary portability macros, replace `/**` comments with `/*` comments, and use the kernel's xxhash instead of bundling it. The benefits of this patch are as follows: 1. Using upstream directly with an automated script to generate kernel code. This allows us to update the kernel every upstream release, so the kernel gets the latest bug fixes and performance improvements, and doesn't get 3 years out of date again. The automation and the translated code are tested every upstream commit to ensure it continues to work. 2. Upgrades from a custom zstd based on 1.3.1 to 1.4.10, getting 3 years of performance improvements and bug fixes. On x86_64 I've measured 15% faster BtrFS and SquashFS decompression+read speeds, 35% faster kernel decompression, and 30% faster ZRAM decompression+read speeds. 3. Zstd-1.4.10 supports negative compression levels, which allow zstd to match or subsume lzo's performance. 4. Maintains the same kernel-specific wrapper API, so no callers have to be modified with zstd version updates. One concern that was brought up was stack usage. Upstream zstd had already removed most of its heavy stack usage functions, but I just removed the last functions that allocate arrays on the stack. I've measured the high water mark for both compression and decompression before and after this patch. Decompression is approximately neutral, using about 1.2KB of stack space. Compression levels up to 3 regressed from 1.4KB -> 1.6KB, and higher compression levels regressed from 1.5KB -> 2KB. We've added unit tests upstream to prevent further regression. I believe that this is a reasonable increase, and if it does end up causing problems, this commit can be cleanly reverted, because it only touches zstd. I chose the bulk update instead of replaying upstream commits because there have been ~3500 upstream commits since the 1.3.1 release, zstd wasn't ready to be used in the kernel as-is before a month ago, and not all upstream zstd commits build. The bulk update preserves bisectability because bugs can be bisected to the zstd version update. At that point the update can be reverted, and we can work with upstream to find and fix the bug. Note that upstream zstd release 1.4.10 doesn't exist yet. I have cut a staging branch at 20821a46f412 [0] and will apply any changes requested to the staging branch. Once we're ready to merge this update I will cut a zstd release at the commit we merge, so we have a known zstd release in the kernel. The implementation of the kernel API is contained in zstd_compress_module.c and zstd_decompress_module.c. [0] https://github.com/facebook/zstd/commit/20821a46f4122f9abd7c7b245d28162dde8129c9 [1] https://github.com/terrelln/linux/commit/e0fa481d0e3df26918da0a13749740a1f6777574 Signed-off-by:
Nick Terrell <terrelln@fb.com> Tested By: Paul Jones <paul@pauljones.id.au> Tested-by:
Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64 Tested-by:
Jean-Denis Girard <jd.girard@sysnux.pf>
-
Nick Terrell authored
Adds decompress_sources.h which includes every .c file necessary for zstd decompression. This is used in decompress_unzstd.c so the internal structure of the library isn't exposed. This allows us to upgrade the zstd library version without modifying any callers. Instead we just need to update decompress_sources.h. Signed-off-by:
Nick Terrell <terrelln@fb.com> Tested By: Paul Jones <paul@pauljones.id.au> Tested-by:
Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64 Tested-by:
Jean-Denis Girard <jd.girard@sysnux.pf>
-
Nick Terrell authored
This patch: - Moves `include/linux/zstd.h` -> `include/linux/zstd_lib.h` - Updates modified zstd headers to yearless copyright - Adds a new API in `include/linux/zstd.h` that is functionally equivalent to the in-use subset of the current API. Functions are renamed to avoid symbol collisions with zstd, to make it clear it is not the upstream zstd API, and to follow the kernel style guide. - Updates all callers to use the new API. There are no functional changes in this patch. Since there are no functional changes, I felt it was okay to update all the callers in a single patch. Once the API is approved, the callers are mechanically changed. This patch is preparing for the 3rd patch in this series, which updates zstd to version 1.4.10. Since the upstream zstd API is no longer exposed to callers, the update can happen transparently. Signed-off-by:
Nick Terrell <terrelln@fb.com> Tested By: Paul Jones <paul@pauljones.id.au> Tested-by:
Oleksandr Natalenko <oleksandr@natalenko.name> Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64 Tested-by:
Jean-Denis Girard <jd.girard@sysnux.pf>
-
- Nov 06, 2021
-
-
Marco Elver authored
We have observed that on very large machines with newer CPUs, the static key/branch switching delay is on the order of milliseconds. This is due to the required broadcast IPIs, which simply does not scale well to hundreds of CPUs (cores). If done too frequently, this can adversely affect tail latencies of various workloads. One workaround is to increase the sample interval to several seconds, while decreasing sampled allocation coverage, but the problem still exists and could still increase tail latencies. As already noted in the Kconfig help text, there are trade-offs: at lower sample intervals the dynamic branch results in better performance; however, at very large sample intervals, the static keys mode can result in better performance -- careful benchmarking is recommended. Our initial benchmarking showed that with large enough sample intervals and workloads stressing the allocator, the static keys mode was slightly better. Evaluating and observing the possible system-wide side-effects of the static-key-switching induced broadcast IPIs, however, was a blind spot (in particular on large machines with 100s of cores). Therefore, a major downside of the static keys mode is, unfortunately, that it is hard to predict performance on new system architectures and topologies, but also making conclusions about performance of new workloads based on a limited set of benchmarks. Most distributions will simply select the defaults, while targeting a large variety of different workloads and system architectures. As such, the better default is CONFIG_KFENCE_STATIC_KEYS=n, and re-enabling it is only recommended after careful evaluation. For reference, on x86-64 the condition in kfence_alloc() generates exactly 2 instructions in the kmem_cache_alloc() fast-path: | ... | cmpl $0x0,0x1a8021c(%rip) # ffffffff82d560d0 <kfence_allocation_gate> | je ffffffff812d6003 <kmem_cache_alloc+0x243> | ... which, given kfence_allocation_gate is infrequently modified, should be well predicted by most CPUs. Link: https://lkml.kernel.org/r/20211019102524.2807208-2-elver@google.com Signed-off-by:
Marco Elver <elver@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: Jann Horn <jannh@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Marco Elver authored
filter_irq_stacks() has little to do with the stackdepot implementation, except that it is usually used by users (such as KASAN) of stackdepot to reduce the stack trace. However, filter_irq_stacks() itself is not useful without a stack trace as obtained by stack_trace_save() and friends. Therefore, move filter_irq_stacks() to kernel/stacktrace.c, so that new users of filter_irq_stacks() do not have to start depending on STACKDEPOT only for filter_irq_stacks(). Link: https://lkml.kernel.org/r/20210923104803.2620285-1-elver@google.com Signed-off-by:
Marco Elver <elver@google.com> Acked-by:
Dmitry Vyukov <dvyukov@google.com> Cc: Alexander Potapenko <glider@google.com> Cc: Jann Horn <jannh@google.com> Cc: Aleksandr Nogikh <nogikh@google.com> Cc: Taras Madan <tarasmadan@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
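Typical usage stays the same for existing callers; a minimal sketch of the common save-filter-depot sequence (buffer size chosen arbitrarily):
```
unsigned long entries[64];
unsigned int nr;
depot_stack_handle_t handle;

nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
nr = filter_irq_stacks(entries, nr);	/* now lives in kernel/stacktrace.c,
					 * usable without CONFIG_STACKDEPOT */
handle = stack_depot_save(entries, nr, GFP_ATOMIC);
```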
-
David Hildenbrand authored
CONFIG_MEMORY_HOTPLUG depends on CONFIG_SPARSEMEM, so there is no need for CONFIG_MEMORY_HOTPLUG_SPARSE anymore; adjust all instances to use CONFIG_MEMORY_HOTPLUG and remove CONFIG_MEMORY_HOTPLUG_SPARSE. Link: https://lkml.kernel.org/r/20210929143600.49379-3-david@redhat.com Signed-off-by:
David Hildenbrand <david@redhat.com> Acked-by: Shuah Khan <skhan@linuxfoundation.org> [kselftest] Acked-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by:
Oscar Salvador <osalvador@suse.de> Cc: Alex Shi <alexs@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Mike Rapoport <rppt@kernel.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: "Rafael J. Wysocki" <rafael@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Rapoport authored
Rename memblock_free_ptr() to memblock_free() and use memblock_free() when freeing a virtual pointer, so that memblock_free() will be a counterpart of memblock_alloc(). The callers are updated with the below semantic patch and manual addition of (void *) casts to pointers that are represented by unsigned long variables. @@ identifier vaddr; expression size; @@ ( - memblock_phys_free(__pa(vaddr), size); + memblock_free(vaddr, size); | - memblock_free_ptr(vaddr, size); + memblock_free(vaddr, size); ) [sfr@canb.auug.org.au: fixup] Link: https://lkml.kernel.org/r/20211018192940.3d1d532f@canb.auug.org.au Link: https://lkml.kernel.org/r/20210930185031.18648-7-rppt@kernel.org Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Juergen Gross <jgross@suse.com> Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
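After the two renames in this series, the alloc/free pairs line up as sketched below (illustrative usage only, error handling omitted):
```
/* virtual-address API: memblock_alloc() / memblock_free() */
void *buf = memblock_alloc(size, SMP_CACHE_BYTES);
/* ... */
memblock_free(buf, size);

/* physical-range API: memblock_phys_alloc() / memblock_phys_free() */
phys_addr_t pa = memblock_phys_alloc(size, SMP_CACHE_BYTES);
/* ... */
memblock_phys_free(pa, size);
```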
-
Mike Rapoport authored
Since memblock_free() operates on a physical range, make its name reflect it and rename it to memblock_phys_free(), so it will be a logical counterpart to memblock_phys_alloc(). The callers are updated with the below semantic patch: @@ expression addr; expression size; @@ - memblock_free(addr, size); + memblock_phys_free(addr, size); Link: https://lkml.kernel.org/r/20210930185031.18648-6-rppt@kernel.org Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Juergen Gross <jgross@suse.com> Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Mike Rapoport authored
memblock_free_early_nid() is unused and memblock_free_early() is an alias for memblock_free(). Replace calls to memblock_free_early() with calls to memblock_free() and remove memblock_free_early() and memblock_free_early_nid(). Link: https://lkml.kernel.org/r/20210930185031.18648-4-rppt@kernel.org Signed-off-by:
Mike Rapoport <rppt@linux.ibm.com> Cc: Christophe Leroy <christophe.leroy@csgroup.eu> Cc: Juergen Gross <jgross@suse.com> Cc: Shahab Vahedi <Shahab.Vahedi@synopsys.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Changcheng Deng authored
Use swap() in order to make code cleaner. Issue found by coccinelle. Link: https://lkml.kernel.org/r/20211028111443.15744-1-deng.changcheng@zte.com.cn Signed-off-by:
Changcheng Deng <deng.changcheng@zte.com.cn> Reported-by:
Zeal Robot <zealci@zte.com.cn> Reviewed-by:
Uladzislau Rezki (Sony) <urezki@gmail.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
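The transformation is the classic three-statement temporary replaced by the kernel's swap() macro, shown here with generic variables:
```
int a = 1, b = 2, tmp;

/* before */
tmp = a;
a = b;
b = tmp;

/* after: one line, no temporary in the caller */
swap(a, b);
```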
-
Kees Cook authored
Intentional overflows, as performed by the KASAN tests, are detected at compile time[1] (instead of only at run-time) with the addition of __alloc_size. Fix this by forcing the compiler into not being able to trust the size used following the kmalloc()s. [1] https://lore.kernel.org/lkml/20211005184717.65c6d8eb39350395e387b71f@linux-foundation.org Link: https://lkml.kernel.org/r/20211006181544.1670992-1-keescook@chromium.org Signed-off-by:
Kees Cook <keescook@chromium.org> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Konovalov <andreyknvl@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Peter Collingbourne authored
With HW tag-based KASAN, error checks are performed implicitly by the load and store instructions in the memcpy implementation. A failed check results in tag checks being disabled and execution will keep going. As a result, under HW tag-based KASAN, prior to commit 1b0668be ("kasan: test: disable kmalloc_memmove_invalid_size for HW_TAGS"), this memcpy would end up corrupting memory until it hits an inaccessible page and causes a kernel panic. This is a pre-existing issue that was revealed by commit 28513304 ("arm64: Import latest memcpy()/memmove() implementation") which changed the memcpy implementation from using signed comparisons (incorrectly, resulting in the memcpy being terminated early for negative sizes) to using unsigned comparisons. It is unclear how this could be handled by memcpy itself in a reasonable way. One possibility would be to add an exception handler that would force memcpy to return if a tag check fault is detected -- this would make the behavior roughly similar to generic and SW tag-based KASAN. However, this wouldn't solve the problem for asynchronous mode and also makes memcpy behavior inconsistent with manually copying data. This test was added as a part of a series that taught KASAN to detect negative sizes in memory operations, see commit 8cceeff4 ("kasan: detect negative size in memory operation function"). Therefore we should keep testing for negative sizes with generic and SW tag-based KASAN. But there is some value in testing small memcpy overflows, so let's add another test with memcpy that does not destabilize the kernel by performing out-of-bounds writes, and run it in all modes. Link: https://linux-review.googlesource.com/id/I048d1e6a9aff766c4a53f989fb0c83de68923882 Link: https://lkml.kernel.org/r/20210910211356.3603758-1-pcc@google.com Signed-off-by:
Peter Collingbourne <pcc@google.com> Reviewed-by:
Andrey Konovalov <andreyknvl@gmail.com> Acked-by:
Marco Elver <elver@google.com> Cc: Robin Murphy <robin.murphy@arm.com> Cc: Will Deacon <will@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Evgenii Stepanov <eugenis@google.com> Cc: Alexander Potapenko <glider@google.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
Marco Elver authored
Add __stack_depot_save(), which provides more fine-grained control over stackdepot's memory allocation behaviour, in case stackdepot runs out of "stack slabs". Normally stackdepot uses alloc_pages() in case it runs out of space; passing can_alloc==false to __stack_depot_save() prohibits this, at the cost of more likely failure to record a stack trace. Link: https://lkml.kernel.org/r/20210913112609.2651084-4-elver@google.com Signed-off-by:
Marco Elver <elver@google.com> Tested-by:
Shuah Khan <skhan@linuxfoundation.org> Acked-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by:
Andrey Konovalov <andreyknvl@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Taras Madan <tarasmadan@google.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vijayanand Jitta <vjitta@codeaurora.org> Cc: Vinayak Menon <vinmenon@codeaurora.org> Cc: Walter Wu <walter-zh.wu@mediatek.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
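A sketch of how a caller in an allocation-averse context might use the new flag; the capture step and fallback policy here are illustrative:
```
unsigned long entries[16];
unsigned int nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
depot_stack_handle_t handle;

/* can_alloc=false: never call alloc_pages() for a new stack slab; recording
 * may fail instead, which is the acceptable trade-off in this context */
handle = __stack_depot_save(entries, nr, GFP_ATOMIC, false);
if (!handle)
	pr_debug_once("stack depot full, dropping stack trace\n");
```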
-
Marco Elver authored
alloc_flags in depot_alloc_stack() is no longer used; remove it. Link: https://lkml.kernel.org/r/20210913112609.2651084-3-elver@google.com Signed-off-by:
Marco Elver <elver@google.com> Tested-by:
Shuah Khan <skhan@linuxfoundation.org> Acked-by:
Sebastian Andrzej Siewior <bigeasy@linutronix.de> Reviewed-by:
Andrey Konovalov <andreyknvl@gmail.com> Cc: Alexander Potapenko <glider@google.com> Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com> Cc: Dmitry Vyukov <dvyukov@google.com> Cc: "Gustavo A. R. Silva" <gustavoars@kernel.org> Cc: Lai Jiangshan <jiangshanlai@gmail.com> Cc: Taras Madan <tarasmadan@google.com> Cc: Tejun Heo <tj@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vijayanand Jitta <vjitta@codeaurora.org> Cc: Vinayak Menon <vinmenon@codeaurora.org> Cc: Walter Wu <walter-zh.wu@mediatek.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- Nov 03, 2021
-
-
Guenter Roeck authored
When building m68k:allmodconfig, recent versions of gcc generate the following error if the length of UTS_RELEASE is less than 8 bytes. In function 'memcpy_and_pad', inlined from 'nvmet_execute_disc_identify' at drivers/nvme/target/discovery.c:268:2: arch/m68k/include/asm/string.h:72:25: error: '__builtin_memcpy' reading 8 bytes from a region of size 7 Discussions around the problem suggest that this only happens if an architecture does not provide strlen(), if -ffreestanding is provided as compiler option, and if CONFIG_FORTIFY_SOURCE=n. All of this is the case for m68k. The exact reasons are unknown, but seem to be related to the ability of the compiler to evaluate the return value of strlen() and the resulting execution flow in memcpy_and_pad(). It would be possible to work around the problem by using sizeof(UTS_RELEASE) instead of strlen(UTS_RELEASE), but that would only postpone the problem until the function is called in a similar way. Uninline memcpy_and_pad() instead to solve the problem for good. Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Reviewed-by:
Geert Uytterhoeven <geert@linux-m68k.org> Acked-by:
Andy Shevchenko <andriy.shevchenko@intel.com> Signed-off-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
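The helper's behaviour is easiest to see at an nvmet-style call site: copy a short string into a fixed-width, space-padded field (a sketch; the field name is illustrative):
```
#include <generated/utsrelease.h>

char firmware_rev[8];

/* copies min(count, dest_len) bytes, then pads the rest of dest with ' ' */
memcpy_and_pad(firmware_rev, sizeof(firmware_rev),
	       UTS_RELEASE, strlen(UTS_RELEASE), ' ');
```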
-
- Oct 28, 2021
-
-
Tiezhu Yang authored
After commit 9298e63e ("bpf/tests: Add exhaustive tests of ALU operand magnitudes"), when modprobe test_bpf.ko with JIT on mips64, there exists segment fault due to the following reason: [...] ALU64_MOV_X: all register value magnitudes jited:1 Break instruction in kernel code[#1] [...] It seems that the related JIT implementations of some test cases in test_bpf() have problems. At this moment, I do not care about the segment fault while I just want to verify the test cases of tail calls. Based on the above background and motivation, add the following module parameter test_suite to the test_bpf.ko: test_suite=<string>: only the specified test suite will be run, the string can be "test_bpf", "test_tail_calls" or "test_skb_segment". If test_suite is not specified, but test_id, test_name or test_range is specified, set 'test_bpf' as the default test suite. This is useful to only test the corresponding test suite when specifying the valid test_suite string. Any invalid test suite will result in -EINVAL being returned and no tests being run. If the test_suite is not specified or specified as empty string, it does not change the current logic, all of the test cases will be run. Here are some test results: # dmesg -c # modprobe test_bpf # dmesg | grep Summary test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed] test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed] test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_bpf # dmesg | tail -1 test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed] # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_tail_calls # dmesg test_bpf: #0 Tail call leaf jited:0 21 PASS [...] test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed] # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_skb_segment # dmesg test_bpf: #0 gso_with_rx_frags PASS test_bpf: #1 gso_linear_no_head_frag PASS test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED # rmmod test_bpf # dmesg -c # modprobe test_bpf test_id=1 # dmesg test_bpf: test_bpf: set 'test_bpf' as the default test_suite. test_bpf: #1 TXA jited:0 54 51 50 PASS test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed] # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_bpf test_name=TXA # dmesg test_bpf: #1 TXA jited:0 54 50 51 PASS test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed] # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_tail_calls test_range=6,7 # dmesg test_bpf: #6 Tail call error path, NULL target jited:0 41 PASS test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS test_bpf: test_tail_calls: Summary: 2 PASSED, 0 FAILED, [0/2 JIT'ed] # rmmod test_bpf # dmesg -c # modprobe test_bpf test_suite=test_skb_segment test_id=1 # dmesg test_bpf: #1 gso_linear_no_head_frag PASS test_bpf: test_skb_segment: Summary: 1 PASSED, 0 FAILED By the way, the above segment fault has been fixed in the latest bpf-next tree which contains the mips64 JIT rework. Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Tested-by:
Johan Almbladh <johan.almbladh@anyfinetworks.com> Acked-by:
Johan Almbladh <johan.almbladh@anyfinetworks.com> Link: https://lore.kernel.org/bpf/1635384321-28128-1-git-send-email-yangtiezhu@loongson.cn
-
- Oct 27, 2021
-
-
Steven Rostedt (VMware) authored
The do-while loop continues while ret is zero, but ret is never initialized. In the normal case, ret is assigned inside the loop before the while check, but if an empty string were to be passed in, q would be NULL and p would be '\0', and the loop would be exited without ever setting ret. Initialize ret to zero; xbc_verify_tree() is then called, catches that it is an empty tree, and reports the proper error. Link: https://lkml.kernel.org/r/20211027105753.6ab9da5f@gandalf.local.home Fixes: bdac5c2b ("bootconfig: Allocate xbc_data inside xbc_init()") Reported-by:
kernel test robot <lkp@intel.com> Reported-by:
Andrew Morton <akpm@linux-foundation.org> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Steven Rostedt (VMware) <rostedt@goodmis.org>
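The shape of the bug and the fix, sketched with a hypothetical helper name and an assumed delimiter set (the real code lives in lib/bootconfig.c):
```
char *p = buf, *q;
int ret = 0;	/* the fix: without this, an empty string leaves ret uninitialized */

do {
	q = strpbrk(p, "{}=+;:\n#");		/* delimiter set assumed for illustration */
	if (!q)
		break;				/* empty input: exit before ret is ever assigned */
	ret = xbc_parse_statement(&p, q);	/* hypothetical helper name */
} while (!ret);

if (!ret)
	ret = xbc_verify_tree();		/* now runs for "" and reports the empty tree */
```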
-
Andy Shevchenko authored
There are a couple of improvements to the static asserts against the format specifier flags: - a new static assert for SIGN - a fix to the static assert for SMALL. SMALL is not equal to the ASCII code of white space; it equals the bit difference between capital and small letters (the value is the same, but semantically the expression means different things). Signed-off-by:
Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211026140356.45610-1-andriy.shevchenko@linux.intel.com
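The semantic point about SMALL can be captured directly in the assertion; the flag value shown below is the one vsprintf.c uses today and is stated here as an assumption:
```
#define SMALL	32	/* use lowercase in hex; 0x20 is the ASCII case bit */

/* before: correct value, wrong rationale -- 0x20 merely happens to be ' ' */
static_assert(SMALL == ' ');

/* after: expresses the intent -- the bit that differs between 'a' and 'A' */
static_assert(SMALL == ('a' ^ 'A'));
```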
-
Matthew Wilcox (Oracle) authored
All existing users of %pGp want the hex value as well as the decoded flag names. This looks awkward (passing the same parameter to printf twice), so move that functionality into the core. If we want, we can make that optional with flag arguments to %pGp in the future. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Yafang Shao <laoar.shao@gmail.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211019142621.2810043-6-willy@infradead.org
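Usage is unchanged for callers: the format specifier still takes a pointer to the page flags, and the output now carries the raw value up front (the output line below is illustrative):
```
/* e.g. from a dump_page()-style report */
pr_alert("page flags: %pGp\n", &page->flags);

/* prints something like:
 *   page flags: 0x17ffffc0010200(slab|head|...)
 * instead of only the decoded flag names */
```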
-
Matthew Wilcox (Oracle) authored
Use scnprintf instead of snprintf + strlen. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Petr Mladek <pmladek@suse.com> Reviewed-by:
Yafang Shao <laoar.shao@gmail.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211019142621.2810043-5-willy@infradead.org
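The pattern being replaced is the familiar one below; scnprintf() returns the number of characters actually written (excluding the NUL), so the extra strlen() and the truncation pitfall both go away:
```
const char *name = "slab", *flags = "locked";
char buf[64];
size_t len;

/* before: snprintf() may return more than it wrote when truncating */
snprintf(buf, sizeof(buf), "%s|%s", name, flags);
len = strlen(buf);

/* after */
len = scnprintf(buf, sizeof(buf), "%s|%s", name, flags);
```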
-
Matthew Wilcox (Oracle) authored
Instead of having an ifdef to decide whether to print a |, use the 'append' functionality of the main loop to print it. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Yafang Shao <laoar.shao@gmail.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211019142621.2810043-4-willy@infradead.org
-
Matthew Wilcox (Oracle) authored
Keep flags intact so that we also test what happens when unknown flags are passed to %pGp. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Yafang Shao <laoar.shao@gmail.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211019142621.2810043-3-willy@infradead.org
-
Matthew Wilcox (Oracle) authored
Instead of assigning ptf[i].value, leave the values in the on-stack array and then we can make the array const. Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Reviewed-by:
Yafang Shao <laoar.shao@gmail.com> Reviewed-by:
Petr Mladek <pmladek@suse.com> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Petr Mladek <pmladek@suse.com> Link: https://lore.kernel.org/r/20211019142621.2810043-2-willy@infradead.org
-
Tariq Toukan authored
Expose new node-aware API for bitmap allocation: bitmap_alloc_node() / bitmap_zalloc_node(). Signed-off-by:
Tariq Toukan <tariqt@nvidia.com> Reviewed-by:
Moshe Shemesh <moshe@nvidia.com> Signed-off-by:
Saeed Mahameed <saeedm@nvidia.com>
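A sketch of the new allocators in use; the node choice via dev_to_node() is illustrative:
```
unsigned long *mask;

/* NUMA-aware counterparts of bitmap_alloc()/bitmap_zalloc(): the bitmap is
 * allocated on the requested node */
mask = bitmap_zalloc_node(nbits, GFP_KERNEL, dev_to_node(dev));
if (!mask)
	return -ENOMEM;
/* ... */
bitmap_free(mask);
```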
-
- Oct 26, 2021
-
-
Tiezhu Yang authored
Since config KPROBES_SANITY_TEST is in lib/Kconfig.debug, it is better to have test_kprobes.c in lib/, just like other similar tests found in lib/. Link: https://lkml.kernel.org/r/1635213091-24387-4-git-send-email-yangtiezhu@loongson.cn Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Acked-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Steven Rostedt (VMware) <rostedt@goodmis.org>
-