  1. Nov 30, 2021
    • siphash: use _unaligned version by default · f7e5b9bf
      Arnd Bergmann authored
      On ARM v6 and later, we define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
      because the ordinary load/store instructions (ldr, ldrh, ldrb) can
      tolerate any misalignment of the memory address. However, load/store
      double and load/store multiple instructions (ldrd, ldm) may still only
      be used on memory addresses that are 32-bit aligned, and so we have to
      use the CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS macro with care, or we
      may end up with a severe performance hit due to alignment traps that
      require fixups by the kernel. Testing shows that this currently happens
      with clang-13 but not gcc-11. In theory, any compiler version can
      produce this bug or other problems, as we are dealing with undefined
      behavior in C99 even on architectures that support this in hardware,
      see also https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100363.
      
      Fortunately, the get_unaligned() accessors do the right thing: when
      building for ARMv6 or later, the compiler will emit unaligned accesses
      using the ordinary load/store instructions (but avoid the ones that
      require 32-bit alignment). When building for older ARM, those accessors
      will emit the appropriate sequence of ldrb/mov/orr instructions. And on
      architectures that can truly tolerate any kind of misalignment, the
      get_unaligned() accessors resolve to the leXX_to_cpup accessors that
      operate on aligned addresses.
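
      A minimal sketch of the contrast (illustrative helper names, not the
      patch itself; get_unaligned_le64() is the real accessor from
      <asm/unaligned.h>):

        #include <asm/unaligned.h>

        /* Only safe when p is 8-byte aligned: the compiler is free to
         * emit ldrd/ldm on ARMv6+, which trap on unaligned addresses. */
        static u64 load_aligned(const void *p)
        {
                return le64_to_cpup(p);
        }

        /* Safe for any alignment: ordinary ldr/ldrb sequences on ARM,
         * or le64_to_cpup() where hardware truly tolerates anything. */
        static u64 load_unaligned(const void *p)
        {
                return get_unaligned_le64(p);
        }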
      
      Since the compiler will in fact emit ldrd or ldm instructions when
      building this code for ARM v6 or later, the solution is to use the
      unaligned accessors unconditionally on architectures where this is
      known to be fast. The _aligned version of the hash function is
      however still needed to get the best performance on architectures
      that cannot do any unaligned access in hardware.
      
      This new version avoids the undefined behavior and should produce
      the fastest hash on all architectures we support.
      
      Link: https://lore.kernel.org/linux-arm-kernel/20181008211554.5355-4-ard.biesheuvel@linaro.org/
      Link: https://lore.kernel.org/linux-crypto/CAK8P3a2KfmmGDbVHULWevB0hv71P2oi2ZCHEAqT=8dQfa0=cqQ@mail.gmail.com/
      
      
      Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Fixes: 2c956a60 ("siphash: add cryptographically secure PRF")
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Reviewed-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Acked-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
  2. Nov 09, 2021
    • mm/scatterlist: replace the !preemptible warning in sg_miter_stop() · 723aca20
      Thomas Gleixner authored
      sg_miter_stop() checks for disabled preemption before unmapping a page
      via kunmap_atomic().  The kernel doc mentions under context that
      preemption must be disabled if SG_MITER_ATOMIC is set.
      
      There is no active requirement for the caller to have preemption
      disabled before invoking sg_miter_stop().  The sg_miter_*()
      implementation itself has no such requirement.
      
      In fact, preemption is disabled by kmap_atomic() as part of
      sg_miter_next() and remains disabled as long as there is an active
      SG_MITER_ATOMIC mapping.  This is a consequence of kmap_atomic() and not
      a requirement for sg_miter_*() itself.
      
      The user chooses SG_MITER_ATOMIC either because the API is used in a
      context where blocking is not possible, or because blocking is possible
      but a lighter-weight mapping is preferred: one that is local to the CPU
      rather than visible on all CPUs, and therefore cheaper to set up, at the
      price that preemption is disabled while it is held.
      
      The kmap_atomic() implementation on PREEMPT_RT does not disable
      preemption.  It simply disables CPU migration to ensure that the task
      remains on the same CPU while the caller remains preemptible.  This in
      turn triggers the warning in sg_miter_stop() because preemption is
      allowed.
      
      Both the PREEMPT_RT and !PREEMPT_RT implementations of kmap_atomic()
      disable pagefaults as a requirement.  It is sufficient to check for this
      instead of for disabled preemption.
      
      Check for a disabled pagefault handler in the SG_MITER_ATOMIC case.
      Remove the "preemption disabled" part from the kernel doc, as the
      sg_miter_*() implementation does not care.
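
      A minimal sketch of the resulting check (shape only, not a verbatim
      diff):

        if (miter->__flags & SG_MITER_ATOMIC) {
                /* pagefault_disabled() holds under kmap_atomic() on both
                 * PREEMPT_RT and !PREEMPT_RT, unlike a disabled-preemption
                 * check */
                WARN_ON_ONCE(!pagefault_disabled());
                kunmap_atomic(miter->addr);
        }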
      
      [bigeasy@linutronix.de: commit description]
      
      Link: https://lkml.kernel.org/r/20211015211409.cqopacv3pxdwn2ty@linutronix.de
      
      
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: uninline simple_strntoull() as well · 839b395e
      Alexey Dobriyan authored
      Codegen became bloated again after the introduction of simple_strntoull():
      
      	add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-224 (-224)
      	Function                                     old     new   delta
      	simple_strtoul                                 5       2      -3
      	simple_strtol                                 23      20      -3
      	simple_strtoull                              119      15    -104
      	simple_strtoll                               155      41    -114
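
      The shape of the fix is to move the helper out of line; a hedged
      sketch follows (the noinline marker is the point, keeping one
      out-of-line copy of the core so each wrapper shrinks to little more
      than a call; the decimal-only body and isdigit() from linux/ctype.h
      are illustrative, the real helper also handles base prefixes and
      overflow):

        static noinline unsigned long long simple_strntoull(const char *startp,
                        size_t max_chars, char **endp, unsigned int base)
        {
                unsigned long long v = 0;
                size_t i;

                /* parse at most max_chars digits of startp */
                for (i = 0; i < max_chars && isdigit(startp[i]); i++)
                        v = v * base + (startp[i] - '0');
                if (endp)
                        *endp = (char *)startp + i;
                return v;
        }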
      
      Link: https://lkml.kernel.org/r/YVmlB9yY4lvbNKYt@localhost.localdomain
      
      
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Richard Fitzgerald <rf@opensource.cirrus.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib, stackdepot: add helper to print stack entries into buffer · 0f68d45e
      Imran Khan authored
      To print stack entries into a buffer, users of stackdepot first get a
      list of stack entries using stack_depot_fetch and then print this list
      into a buffer using stack_trace_snprint.  Provide a helper in stackdepot
      for this purpose.  Also change the above-mentioned users to use this
      helper.
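
      At a call site, the change looks roughly like this (handle, buf and
      size are the caller's existing variables):

        unsigned long *entries;
        unsigned int nr;

        /* before: fetch, then print */
        nr = stack_depot_fetch(handle, &entries);
        stack_trace_snprint(buf, size, entries, nr, 0);

        /* after: one call through the new helper */
        stack_depot_snprint(handle, buf, size, 0);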
      
      [imran.f.khan@oracle.com: fix build error]
        Link: https://lkml.kernel.org/r/20210915175321.3472770-4-imran.f.khan@oracle.com
      [imran.f.khan@oracle.com: export stack_depot_snprint() to modules]
        Link: https://lkml.kernel.org/r/20210916133535.3592491-4-imran.f.khan@oracle.com
      
      Link: https://lkml.kernel.org/r/20210915014806.3206938-4-imran.f.khan@oracle.com
      
      
      Signed-off-by: Imran Khan <imran.f.khan@oracle.com>
      Suggested-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Jani Nikula <jani.nikula@intel.com>	[i915]
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Maxime Ripard <mripard@kernel.org>
      Cc: Thomas Zimmermann <tzimmermann@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib, stackdepot: add helper to print stack entries · 505be481
      Imran Khan authored
      To print stack entries, users of stackdepot first use stack_depot_fetch
      to get a list of stack entries and then use stack_trace_print to print
      this list.  Provide a helper in stackdepot to print stack entries based
      on a stackdepot handle.  Also change the above-mentioned users to use
      this helper.
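
      At a call site, the fetch-then-print pair collapses into one call
      (sketch; handle is the caller's stackdepot handle):

        unsigned long *entries;
        unsigned int nr;

        /* before */
        nr = stack_depot_fetch(handle, &entries);
        stack_trace_print(entries, nr, 0);

        /* after */
        stack_depot_print(handle);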
      
      Link: https://lkml.kernel.org/r/20210915014806.3206938-3-imran.f.khan@oracle.com
      
      
      Signed-off-by: Imran Khan <imran.f.khan@oracle.com>
      Suggested-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Reviewed-by: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Maxime Ripard <mripard@kernel.org>
      Cc: Thomas Zimmermann <tzimmermann@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib, stackdepot: check stackdepot handle before accessing slabs · 4d4712c1
      Imran Khan authored
      Patch series "lib, stackdepot: check stackdepot handle before accessing slabs", v2.
      
      PATCH-1: Checks validity of a stackdepot handle before proceeding to
      access stackdepot slab/objects.
      
      PATCH-2: Adds a helper in stackdepot, to allow users to print stack
      entries just by specifying the stackdepot handle.  It also changes such
      users to use this new interface.
      
      PATCH-3: Adds a helper in stackdepot, to allow users to print stack
      entries into buffers just by specifying the stackdepot handle and
      destination buffer.  It also changes such users to use this new interface.
      
      This patch (of 3):
      
      stack_depot_save allocates slabs that will be used for storing objects
      in the future.  If this slab allocation fails, we may get to a situation
      where space allocation for a new stack_record fails, causing
      stack_depot_save to return 0 as the handle.  If a user of this handle
      ends up invoking stack_depot_fetch with this handle value, the current
      implementation of stack_depot_fetch will end up using a slab from the
      wrong index.  To avoid this, check the handle value at the beginning.
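
      A hedged sketch of the guard (the decode of the handle into slab
      index and offset is hidden behind __fetch_entries(), a hypothetical
      helper name):

        unsigned int stack_depot_fetch(depot_stack_handle_t handle,
                                       unsigned long **entries)
        {
                *entries = NULL;
                if (!handle)    /* stack_depot_save() returned 0 on failure */
                        return 0;
                return __fetch_entries(handle, entries);  /* hypothetical */
        }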
      
      Link: https://lkml.kernel.org/r/20210915175321.3472770-1-imran.f.khan@oracle.com
      Link: https://lkml.kernel.org/r/20210915014806.3206938-1-imran.f.khan@oracle.com
      Link: https://lkml.kernel.org/r/20210915014806.3206938-2-imran.f.khan@oracle.com
      
      
      Signed-off-by: Imran Khan <imran.f.khan@oracle.com>
      Suggested-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Andrey Konovalov <andreyknvl@gmail.com>
      Cc: Dmitry Vyukov <dvyukov@google.com>
      Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
      Cc: Maxime Ripard <mripard@kernel.org>
      Cc: Thomas Zimmermann <tzimmermann@suse.de>
      Cc: David Airlie <airlied@linux.ie>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • lib: zstd: Add cast to silence clang's -Wbitwise-instead-of-logical · 0a8ea235
      Nathan Chancellor authored
      A new warning in clang warns that there is an instance where boolean
      expressions are being used with bitwise operators instead of logical
      ones:
      
      lib/zstd/decompress/huf_decompress.c:890:25: warning: use of bitwise '&' with boolean operands [-Wbitwise-instead-of-logical]
                             (BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished)
                             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      zstd does this frequently to help with performance, as logical operators
      have branches whereas bitwise ones do not.
      
      To fix this warning in other cases, the expressions were placed on
      separate lines with the '&=' operator; however, this particular instance
      was moved away from that so that it could be surrounded by LIKELY, which
      is a macro for __builtin_expect(), to help with a performance
      regression, according to upstream zstd pull #1973.
      
      Aside from switching to logical operators, which is likely undesirable
      in this instance, or disabling the warning outright, the solution is
      casting one of the expressions to an integer type to make it clear to
      clang that the author knows what they are doing. Add a cast to U32 to
      silence the warning. The first U32 cast is to silence an instance of
      -Wshorten-64-to-32 because __builtin_expect() returns long so it cannot
      be moved.
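
      A condensed sketch of the change (two of the four streams shown; the
      exact expression in the tree may differ):

        /* before: clang warns about '&' on boolean operands */
        while (LIKELY(
                (BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished)
              & (BIT_reloadDStreamFast(&bitD2) == BIT_DStream_unfinished))) {
                /* ... decode loop ... */
        }

        /* after: the U32 cast makes the bitwise intent explicit and
         * keeps __builtin_expect()'s long result in range */
        while (LIKELY((U32)(
                (BIT_reloadDStreamFast(&bitD1) == BIT_DStream_unfinished)
              & (BIT_reloadDStreamFast(&bitD2) == BIT_DStream_unfinished)))) {
                /* ... decode loop ... */
        }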
      
      Link: https://github.com/ClangBuiltLinux/linux/issues/1486
      Link: https://github.com/facebook/zstd/pull/1973
      
      
      Reported-by: Nick Desaulniers <ndesaulniers@google.com>
      Signed-off-by: Nathan Chancellor <nathan@kernel.org>
      Signed-off-by: Nick Terrell <terrelln@fb.com>
    • lib: zstd: Upgrade to latest upstream zstd version 1.4.10 · e0c1b49f
      Nick Terrell authored
      Upgrade to the latest upstream zstd version 1.4.10.
      
      This patch is 100% generated from upstream zstd commit 20821a46f412 [0].
      
      This patch is very large because it is transitioning from the custom
      kernel zstd to using upstream directly.  The new zstd follows upstream's
      file structure, which is different.  Future update patches will be much
      smaller because they will only contain the changes from one upstream
      zstd release.
      
      As an aid for review I've created a commit [1] that shows the diff
      between upstream zstd as-is (which doesn't compile) and the zstd
      code imported in this patch.  The version of zstd in this patch is
      generated from upstream with changes applied by automation to replace
      upstream's libc dependencies, remove unnecessary portability macros,
      replace `/**` comments with `/*` comments, and use the kernel's xxhash
      instead of bundling it.
      
      The benefits of this patch are as follows:
      1. Using upstream directly with automated script to generate kernel
         code. This allows us to update the kernel every upstream release, so
         the kernel gets the latest bug fixes and performance improvements,
         and doesn't get 3 years out of date again. The automation and the
         translated code are tested every upstream commit to ensure it
         continues to work.
      2. Upgrades from a custom zstd based on 1.3.1 to 1.4.10, getting 3 years
         of performance improvements and bug fixes. On x86_64 I've measured
         15% faster BtrFS and SquashFS decompression+read speeds, 35% faster
         kernel decompression, and 30% faster ZRAM decompression+read speeds.
      3. Zstd-1.4.10 supports negative compression levels, which allow zstd to
         match or subsume lzo's performance.
      4. Maintains the same kernel-specific wrapper API, so no callers have to
         be modified with zstd version updates.
      
      One concern that was brought up was stack usage. Upstream zstd had
      already removed most of its heavy stack usage functions, but I just
      removed the last functions that allocate arrays on the stack. I've
      measured the high water mark for both compression and decompression
      before and after this patch. Decompression is approximately neutral,
      using about 1.2KB of stack space. Compression levels up to 3 regressed
      from 1.4KB -> 1.6KB, and higher compression levels regressed from 1.5KB
      -> 2KB. We've added unit tests upstream to prevent further regression.
      I believe that this is a reasonable increase, and if it does end up
      causing problems, this commit can be cleanly reverted, because it only
      touches zstd.
      
      I chose the bulk update instead of replaying upstream commits because
      there have been ~3500 upstream commits since the 1.3.1 release, zstd
      wasn't ready to be used in the kernel as-is before a month ago, and not
      all upstream zstd commits build. The bulk update preserves bisectablity
      because bugs can be bisected to the zstd version update. At that point
      the update can be reverted, and we can work with upstream to find and
      fix the bug.
      
      Note that upstream zstd release 1.4.10 doesn't exist yet. I have cut a
      staging branch at 20821a46f412 [0] and will apply any changes requested
      to the staging branch. Once we're ready to merge this update I will cut
      a zstd release at the commit we merge, so we have a known zstd release
      in the kernel.
      
      The implementation of the kernel API is contained in
      zstd_compress_module.c and zstd_decompress_module.c.
      
      [0] https://github.com/facebook/zstd/commit/20821a46f4122f9abd7c7b245d28162dde8129c9
      [1] https://github.com/terrelln/linux/commit/e0fa481d0e3df26918da0a13749740a1f6777574
      
      
      
      Signed-off-by: Nick Terrell <terrelln@fb.com>
      Tested-by: Paul Jones <paul@pauljones.id.au>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64
      Tested-by: Jean-Denis Girard <jd.girard@sysnux.pf>
    • lib: zstd: Add decompress_sources.h for decompress_unzstd · 2479b523
      Nick Terrell authored
      
      Adds decompress_sources.h which includes every .c file necessary for
      zstd decompression. This is used in decompress_unzstd.c so the internal
      structure of the library isn't exposed.
      
      This allows us to upgrade the zstd library version without modifying any
      callers. Instead we just need to update decompress_sources.h.
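
      The header's shape is simply a list of the library's decompression
      translation units (illustrative, not the exact file list):

        /* decompress_sources.h */
        #include "common/entropy_common.c"
        #include "common/fse_decompress.c"
        #include "common/zstd_common.c"
        #include "decompress/huf_decompress.c"
        #include "decompress/zstd_decompress.c"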
      
      Signed-off-by: Nick Terrell <terrelln@fb.com>
      Tested-by: Paul Jones <paul@pauljones.id.au>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64
      Tested-by: Jean-Denis Girard <jd.girard@sysnux.pf>
    • lib: zstd: Add kernel-specific API · cf30f6a5
      Nick Terrell authored
      
      This patch:
      - Moves `include/linux/zstd.h` -> `include/linux/zstd_lib.h`
      - Updates modified zstd headers to yearless copyright
      - Adds a new API in `include/linux/zstd.h` that is functionally
        equivalent to the in-use subset of the current API. Functions are
        renamed to avoid symbol collisions with zstd, to make it clear it is
        not the upstream zstd API, and to follow the kernel style guide.
      - Updates all callers to use the new API.
      
      There are no functional changes in this patch.  Since there are no
      functional changes, I felt it was okay to update all the callers in a
      single patch.  Once the API is approved, the callers are changed
      mechanically.
      
      This patch is preparing for the 3rd patch in this series, which updates
      zstd to version 1.4.10. Since the upstream zstd API is no longer exposed
      to callers, the update can happen transparently.
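
      A hedged usage sketch of the kernel-style API for one-shot
      decompression (error handling and buffer setup elided to the caller;
      treat the signatures as illustrative):

        #include <linux/zstd.h>

        size_t wksp_size = zstd_dctx_workspace_bound();
        void *wksp = kvmalloc(wksp_size, GFP_KERNEL);
        zstd_dctx *dctx = wksp ? zstd_init_dctx(wksp, wksp_size) : NULL;

        if (dctx)
                ret = zstd_decompress_dctx(dctx, dst, dst_size, src, src_size);
        kvfree(wksp);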
      
      Signed-off-by: Nick Terrell <terrelln@fb.com>
      Tested-by: Paul Jones <paul@pauljones.id.au>
      Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
      Tested-by: Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v13.0.0 on x86-64
      Tested-by: Jean-Denis Girard <jd.girard@sysnux.pf>
  3. Nov 03, 2021
    • string: uninline memcpy_and_pad · 5c4e0a21
      Guenter Roeck authored
      
      When building m68k:allmodconfig, recent versions of gcc generate the
      following error if the length of UTS_RELEASE is less than 8 bytes.
      
        In function 'memcpy_and_pad',
            inlined from 'nvmet_execute_disc_identify' at drivers/nvme/target/discovery.c:268:2:
        arch/m68k/include/asm/string.h:72:25: error:
            '__builtin_memcpy' reading 8 bytes from a region of size 7
      
      Discussions around the problem suggest that this only happens if an
      architecture does not provide strlen(), if -ffreestanding is provided as
      compiler option, and if CONFIG_FORTIFY_SOURCE=n. All of this is the case
      for m68k. The exact reasons are unknown, but seem to be related to the
      ability of the compiler to evaluate the return value of strlen() and
      the resulting execution flow in memcpy_and_pad(). It would be possible
      to work around the problem by using sizeof(UTS_RELEASE) instead of
      strlen(UTS_RELEASE), but that would only postpone the problem until the
      function is called in a similar way. Uninline memcpy_and_pad() instead
      to solve the problem for good.
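
      For reference, the helper's semantics (mirroring its documented
      behavior; the fix only moves this body out of line into lib/string.c):

        void memcpy_and_pad(void *dest, size_t dest_len, const void *src,
                            size_t count, int pad)
        {
                if (dest_len > count) {
                        memcpy(dest, src, count);
                        memset(dest + count, pad, dest_len - count);
                } else {
                        memcpy(dest, src, dest_len);
                }
        }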
      
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Andy Shevchenko <andriy.shevchenko@intel.com>
      Signed-off-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. Oct 28, 2021
    • bpf, tests: Add module parameter test_suite to test_bpf module · b066abba
      Tiezhu Yang authored
      
      After commit 9298e63e ("bpf/tests: Add exhaustive tests of ALU
      operand magnitudes"), loading test_bpf.ko via modprobe with JIT
      enabled on mips64 triggers a segmentation fault:
      
        [...]
        ALU64_MOV_X: all register value magnitudes jited:1
        Break instruction in kernel code[#1]
        [...]
      
      It seems that the JIT implementations of some test cases in test_bpf()
      have problems.  For the moment, I do not care about the segmentation
      fault; I just want to verify the tail call test cases.
      
      Based on the above background and motivation, add the following
      module parameter test_suite to the test_bpf module:
      
        test_suite=<string>: only the specified test suite will be run, the
        string can be "test_bpf", "test_tail_calls" or "test_skb_segment".
      
      If test_suite is not specified but test_id, test_name or test_range
      is specified, 'test_bpf' is set as the default test suite.  This makes
      it possible to run only the corresponding test suite by specifying a
      valid test_suite string.
      
      Any invalid test suite will result in -EINVAL being returned and no
      tests being run.  If test_suite is not specified or is specified as an
      empty string, the current logic is unchanged and all of the test cases
      will be run.
      
      Here are some test results:
      
       # dmesg -c
       # modprobe test_bpf
       # dmesg | grep Summary
       test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed]
       test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed]
       test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_bpf
       # dmesg | tail -1
       test_bpf: Summary: 1009 PASSED, 0 FAILED, [0/997 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_tail_calls
       # dmesg
       test_bpf: #0 Tail call leaf jited:0 21 PASS
       [...]
       test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS
       test_bpf: test_tail_calls: Summary: 8 PASSED, 0 FAILED, [0/8 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_skb_segment
       # dmesg
       test_bpf: #0 gso_with_rx_frags PASS
       test_bpf: #1 gso_linear_no_head_frag PASS
       test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_id=1
       # dmesg
       test_bpf: test_bpf: set 'test_bpf' as the default test_suite.
       test_bpf: #1 TXA jited:0 54 51 50 PASS
       test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_bpf test_name=TXA
       # dmesg
       test_bpf: #1 TXA jited:0 54 50 51 PASS
       test_bpf: Summary: 1 PASSED, 0 FAILED, [0/1 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_tail_calls test_range=6,7
       # dmesg
       test_bpf: #6 Tail call error path, NULL target jited:0 41 PASS
       test_bpf: #7 Tail call error path, index out of range jited:0 32 PASS
       test_bpf: test_tail_calls: Summary: 2 PASSED, 0 FAILED, [0/2 JIT'ed]
      
       # rmmod test_bpf
       # dmesg -c
       # modprobe test_bpf test_suite=test_skb_segment test_id=1
       # dmesg
       test_bpf: #1 gso_linear_no_head_frag PASS
       test_bpf: test_skb_segment: Summary: 1 PASSED, 0 FAILED
      
      By the way, the above segmentation fault has been fixed in the latest
      bpf-next tree, which contains the mips64 JIT rework.
      
      Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Tested-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
      Acked-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
      Link: https://lore.kernel.org/bpf/1635384321-28128-1-git-send-email-yangtiezhu@loongson.cn