- Dec 20, 2022
-
-
Jason A. Donenfeld authored
Convert the final two users of prandom_u32_max() that slipped in during 6.2-rc1 to use get_random_u32_below(). Then, with no more users left, we can finally remove the deprecated function. Signed-off-by:
Jason A. Donenfeld <Jason@zx2c4.com>
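A minimal sketch of what such a conversion looks like at a call site; "nr_slots" and the wrapper function are illustrative, not taken from the two converted users:

  #include <linux/random.h>

  /* Pick a uniformly distributed index in [0, nr_slots). */
  static u32 pick_random_slot(u32 nr_slots)
  {
  	/* Previously: return prandom_u32_max(nr_slots); */
  	return get_random_u32_below(nr_slots);
  }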
-
Jason A. Donenfeld authored
The <asm/archrandom.h> header is a random.c private detail, not something to be called by other code. As such, don't make it automatically available by way of random.h. Cc: Michael Ellerman <mpe@ellerman.id.au> Acked-by:
Heiko Carstens <hca@linux.ibm.com> Reviewed-by:
Christophe Leroy <christophe.leroy@csgroup.eu> Signed-off-by:
Jason A. Donenfeld <Jason@zx2c4.com>
-
- Dec 17, 2022
-
-
Marc Zyngier authored
Since 2f2940d1 ("genirq/msi: Remove filter from msi_free_descs_free_range()"), the core MSI code relies on the msi_desc->irq field to have been cleared before the descriptor can be freed, as it indicates that there is no association with a device anymore. The irq domain code provides this guarantee, and so does s390, which is one of the two architectures not using irq domains for MSIs. Powerpc, however, is missing this particular requirement, leading to a splat and leaked MSI descriptors. Adding the now required irq reset to the handful of powerpc backends that implement MSIs fixes that particular problem. Reported-by:
Guenter Roeck <linux@roeck-us.net> Signed-off-by:
Marc Zyngier <maz@kernel.org> Link: https://lore.kernel.org/r/70dab88e-6119-0c12-7c6a-61bcbe239f66@roeck-us.net
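A hedged sketch of the shape of such a fix, assuming a backend's teardown path iterates its descriptors; the function name is a hypothetical stand-in, and only the desc->irq reset is the point:

  #include <linux/msi.h>
  #include <linux/irq.h>

  static void example_teardown_msi_irqs(struct pci_dev *pdev)	/* hypothetical */
  {
  	struct msi_desc *desc;

  	msi_for_each_desc(desc, &pdev->dev, MSI_DESC_ASSOCIATED) {
  		irq_dispose_mapping(desc->irq);
  		desc->irq = 0;	/* the reset the core MSI code now relies on */
  	}
  }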
-
- Dec 16, 2022
-
-
Michael Ellerman authored
Nathan reported that the new per-cpu mm patching oopses if DEBUG_VM is enabled:

  ------------[ cut here ]------------
  kernel BUG at arch/powerpc/mm/pgtable.c:333!
  Oops: Exception in kernel mode, sig: 5 [#1]
  LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA PowerNV
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 6.1.0-rc2+ #1
  Hardware name: IBM PowerNV (emulated by qemu) POWER9 0x4e1200 opal:v7.0 PowerNV
  ...
  NIP assert_pte_locked+0x180/0x1a0
  LR assert_pte_locked+0x170/0x1a0
  Call Trace:
   0x60000000 (unreliable)
   patch_instruction+0x618/0x6d0
   arch_prepare_kprobe+0xfc/0x2d0
   register_kprobe+0x520/0x7c0
   arch_init_kprobes+0x28/0x3c
   init_kprobes+0x108/0x184
   do_one_initcall+0x60/0x2e0
   kernel_init_freeable+0x1f0/0x3e0
   kernel_init+0x34/0x1d0
   ret_from_kernel_thread+0x5c/0x64

It's caused by the assert_spin_locked() failing in assert_pte_locked(). The assert fails because the PTE was unlocked in text_area_cpu_up_mm(), and never relocked. The PTE page shouldn't be freed: the patching_mm is only used for patching on this CPU, only that single PTE is ever mapped, and it's only unmapped at CPU offline. In fact assert_pte_locked() has a special case to ignore init_mm entirely, and the patching_mm is more-or-less like init_mm, so possibly the check could be skipped for patching_mm too. But for now be conservative, and use the proper PTE accessors at patching time, so that the PTE lock is held while the PTE is used. That also avoids the warning in assert_pte_locked(). With that it's no longer necessary to save the PTE in cpu_patching_context for the mm_patch_enabled() case. Fixes: c28c15b6 ("powerpc/code-patching: Use temporary mm for Radix MMU") Reported-by:
Nathan Chancellor <nathan@kernel.org> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20221216125913.990972-1-mpe@ellerman.id.au
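A minimal sketch of the conservative approach, assuming the patching code maps and unmaps a single page in patching_mm; the names around the write are placeholders for the real patching context:

  /* Map with the PTE lock held so assert_pte_locked() is satisfied. */
  pte_t *pte;
  spinlock_t *ptl;

  pte = get_locked_pte(patching_mm, patching_addr, &ptl);
  if (!pte)
  	return -ENOMEM;
  __set_pte_at(patching_mm, patching_addr, pte, pfn_pte(pfn, PAGE_KERNEL), 0);
  /* ... write the instruction through the temporary mapping ... */
  pte_clear(patching_mm, patching_addr, pte);
  pte_unmap_unlock(pte, ptl);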
-
- Dec 15, 2022
-
-
Dave Hansen authored
There are a few kernel users like kfence that require 4k pages to work correctly and do not support large mappings. They use set_memory_4k() to break down those large mappings. That, in turn, relies on the cpa_data->force_split option to indicate to the set_memory code that it should split page tables regardless of whether they need to be. But, a recent change added an optimization which would return early if a set_memory request came in that did not change permissions. It did not consult ->force_split and would mistakenly optimize away the splitting that set_memory_4k() needs. This broke kfence. Skip the same-permission optimization when ->force_split is set. Fixes: 127960a05548 ("x86/mm: Inhibit _PAGE_NX changes from cpa_process_alias()") Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Tested-by:
Marco Elver <elver@google.com> Cc: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lore.kernel.org/all/CA+G9fYuFxZTxkeS35VTZMXwQvohu73W3xbZ5NtjebsVvH6hCuA@mail.gmail.com/
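A sketch of the decision the fix restores; "cpa_can_skip" is a hypothetical helper for illustration, though struct cpa_data and its ->force_split member are real:

  /* The same-permission fast path must yield to an explicit split request. */
  static bool cpa_can_skip(struct cpa_data *cpa, pgprot_t old, pgprot_t new)
  {
  	if (cpa->force_split)
  		return false;	/* set_memory_4k() wants the split regardless */
  	return pgprot_val(old) == pgprot_val(new);
  }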
-
Sean Christopherson authored
Populate the shadow for the shared portion of the CPU entry area, i.e. the read-only IDT mapping, during KASAN initialization. A recent change modified KASAN to map the per-CPU areas on-demand, but forgot to keep a shadow for the common area that is shared amongst all CPUs. Map the common area in KASAN init instead of letting idt_map_in_cea() do the dirty work so that it Just Works in the unlikely event more shared data is shoved into the CPU entry area. The bug manifests as a not-present #PF when software attempts to lookup an IDT entry, e.g. when KVM is handling IRQs on Intel CPUs (KVM performs direct CALL to the IRQ handler to avoid the overhead of INTn):

  BUG: unable to handle page fault for address: fffffbc0000001d8
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 5 PID: 901 Comm: repro Tainted: G W 6.1.0-rc3+ #410
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
   vmx_handle_exit_irqoff+0x152/0x290 [kvm_intel]
   vcpu_run+0x1d89/0x2bd0 [kvm]
   kvm_arch_vcpu_ioctl_run+0x3ce/0xa70 [kvm]
   kvm_vcpu_ioctl+0x349/0x900 [kvm]
   __x64_sys_ioctl+0xb8/0xf0
   do_syscall_64+0x2b/0x50
   entry_SYSCALL_64_after_hwframe+0x46/0xb0

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand") Reported-by:
<syzbot+8cdd16fd5a6c0565e227@syzkaller.appspotmail.com> Signed-off-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221110203504.1985010-6-seanjc@google.com
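Using the shortened variable names from the rename commit below, the fix plausibly reduces to one extra populate call in kasan_init(); a sketch rather than the verbatim patch, and the NUMA node argument is an assumption:

  /* Shadow the shared (read-only IDT) region up to the per-CPU portion. */
  kasan_populate_shadow(shadow_cea_begin, shadow_cea_per_cpu_begin,
  		      early_pfn_to_nid(__pa(_stext)));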
-
Sean Christopherson authored
Add helpers to dedup code for aligning shadow address up/down to page boundaries when translating an address to its shadow. No functional change intended. Signed-off-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Link: https://lkml.kernel.org/r/20221110203504.1985010-5-seanjc@google.com
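A sketch of the two helpers, assuming they combine kasan_mem_to_shadow() with page rounding; the exact in-tree spelling may differ:

  static unsigned long kasan_mem_to_shadow_align_down(unsigned long va)
  {
  	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);

  	return round_down(shadow, PAGE_SIZE);
  }

  static unsigned long kasan_mem_to_shadow_align_up(unsigned long va)
  {
  	unsigned long shadow = (unsigned long)kasan_mem_to_shadow((void *)va);

  	return round_up(shadow, PAGE_SIZE);
  }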
-
Sean Christopherson authored
Rename the CPU entry area variables in kasan_init() to shorten their names, a future fix will reference the beginning of the per-CPU portion of the CPU entry area, and shadow_cpu_entry_per_cpu_begin is a bit much. No functional change intended. Signed-off-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Link: https://lkml.kernel.org/r/20221110203504.1985010-4-seanjc@google.com
-
Sean Christopherson authored
Populate a KASAN shadow for the entire possible per-CPU range of the CPU entry area instead of requiring that each individual chunk map a shadow. Mapping shadows individually is error prone, e.g. the per-CPU GDT mapping was left behind, which can lead to not-present page faults during KASAN validation if the kernel performs a software lookup into the GDT. The DS buffer is also likely affected.

The motivation for mapping the per-CPU areas on-demand was to avoid mapping the entire 512GiB range that's reserved for the CPU entry area; shaving a few bytes by not creating shadows for potentially unused memory was not a goal.

The bug is most easily reproduced by doing a sigreturn with a garbage CS in the sigcontext, e.g.

  int main(void)
  {
  	struct sigcontext regs;

  	syscall(__NR_mmap, 0x1ffff000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);
  	syscall(__NR_mmap, 0x20000000ul, 0x1000000ul, 7ul, 0x32ul, -1, 0ul);
  	syscall(__NR_mmap, 0x21000000ul, 0x1000ul, 0ul, 0x32ul, -1, 0ul);

  	memset(&regs, 0, sizeof(regs));
  	regs.cs = 0x1d0;
  	syscall(__NR_rt_sigreturn);
  	return 0;
  }

to coerce the kernel into doing a GDT lookup to compute CS.base when reading the instruction bytes on the subsequent #GP to determine whether or not the #GP is something the kernel should handle, e.g. to fixup UMIP violations or to emulate CLI/STI for IOPL=3 applications.

  BUG: unable to handle page fault for address: fffffbc8379ace00
  #PF: supervisor read access in kernel mode
  #PF: error_code(0x0000) - not-present page
  PGD 16c03a067 P4D 16c03a067 PUD 15b990067 PMD 15b98f067 PTE 0
  Oops: 0000 [#1] PREEMPT SMP KASAN
  CPU: 3 PID: 851 Comm: r2 Not tainted 6.1.0-rc3-next-20221103+ #432
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kasan_check_range+0xdf/0x190
  Call Trace:
   <TASK>
   get_desc+0xb0/0x1d0
   insn_get_seg_base+0x104/0x270
   insn_fetch_from_user+0x66/0x80
   fixup_umip_exception+0xb1/0x530
   exc_general_protection+0x181/0x210
   asm_exc_general_protection+0x22/0x30
  RIP: 0003:0x0
  Code: Unable to access opcode bytes at 0xffffffffffffffd6.
  RSP: 0003:0000000000000000 EFLAGS: 00000202
  RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000000001d0
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
  RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
  R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
  R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
   </TASK>

Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand") Reported-by:
<syzbot+ffb4f000dc2872c93f62@syzkaller.appspotmail.com> Suggested-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Link: https://lkml.kernel.org/r/20221110203504.1985010-3-seanjc@google.com
-
Sean Christopherson authored
Recompute the physical address for each per-CPU page in the CPU entry area; a recent commit inadvertently modified cea_map_percpu_pages() such that every PTE is mapped to the physical address of the first page. Fixes: 9fd429c28073 ("x86/kasan: Map shadow for percpu pages on demand") Signed-off-by:
Sean Christopherson <seanjc@google.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Link: https://lkml.kernel.org/r/20221110203504.1985010-2-seanjc@google.com
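A sketch of the corrected loop shape in cea_map_percpu_pages(), reduced to the relevant lines; the essential change is that the physical address now advances with the virtual one:

  static void cea_map_percpu_pages(void *cea_vaddr, void *ptr, int pages, pgprot_t prot)
  {
  	phys_addr_t pa = per_cpu_ptr_to_phys(ptr);

  	for (; pages; pages--, cea_vaddr += PAGE_SIZE, pa += PAGE_SIZE)
  		cea_set_pte(cea_vaddr, pa, prot);	/* was: always the first page's pa */
  }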
-
Peter Zijlstra authored
Now that the checkalias functionality is taken by CPA_NO_CHECK_ALIAS, rename the argument to better match its remaining purpose: primary, matching __change_page_attr(). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221110125544.661001508%40infradead.org
-
Peter Zijlstra authored
There is a kludge in change_page_attr_set_clr() that inhibits propagating NX changes to the aliases (directmap and highmap) -- this kludge is twofold:

 - it also inhibits the primary checks in __change_page_attr();
 - it hard depends on single bit changes.

The introduction of set_memory_rox() triggered this last issue for clearing both _PAGE_RW and _PAGE_NX. Explicitly ignore _PAGE_NX in cpa_process_alias() instead. Fixes: b38994948567 ("x86/mm: Implement native set_memory_rox()") Reported-by:
kernel test robot <oliver.sang@intel.com> Debugged-by:
Dave Hansen <dave.hansen@intel.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221110125544.594991716%40infradead.org
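A hedged sketch of "explicitly ignore _PAGE_NX": mask it out of both change masks before recursing into the alias mappings; the surrounding cpa_process_alias() code is elided:

  /* Inside cpa_process_alias(), before recursing into the alias ranges:
   * aliases keep their own NX policy, so drop NX from the requested change. */
  struct cpa_data alias_cpa = *cpa;

  alias_cpa.mask_set = __pgprot(pgprot_val(cpa->mask_set) & ~_PAGE_NX);
  alias_cpa.mask_clr = __pgprot(pgprot_val(cpa->mask_clr) & ~_PAGE_NX);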
-
Peter Zijlstra authored
The .checkalias argument to __change_page_attr_set_clr() is overloaded and serves two different purposes:

 - it inhibits the call to cpa_process_alias() -- as suggested by the name; however,
 - it also serves as 'primary' indicator for __change_page_attr() (which in turn also serves as a recursion terminator for cpa_process_alias()).

Untangle these by extending the use of CPA_NO_CHECK_ALIAS to all callsites that currently use .checkalias=0 for this purpose. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221110125544.527267183%40infradead.org
-
Peter Zijlstra authored
It's a shame to hide useful comments in Changelogs, add some to the code. Shamelessly stolen from commit: c40a56a7 ("x86/mm/init: Remove freed kernel image areas from alias mapping") Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221110125544.460677011%40infradead.org
-
Kirill A. Shutemov authored
The mask must not include bits above the physical address mask. These bits are reserved and can be used for other things. Bits 61 and 62 are used for Linear Address Masking. Signed-off-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Reviewed-by:
Rick Edgecombe <rick.p.edgecombe@intel.com> Reviewed-by:
Alexander Potapenko <glider@google.com> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by:
Alexander Potapenko <glider@google.com> Link: https://lore.kernel.org/all/20221109165140.9137-2-kirill.shutemov%40linux.intel.com
-
Pasha Tatashin authored
Other architectures and the common mm/ use P*D_MASK, and P*D_SIZE. Remove the duplicated P*D_PAGE_MASK and P*D_PAGE_SIZE which are only used in x86/*. Signed-off-by:
Pasha Tatashin <pasha.tatashin@soleen.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Acked-by:
Mike Rapoport <rppt@linux.ibm.com> Link: https://lore.kernel.org/r/20220516185202.604654-1-tatashin@google.com
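For reference, the duplicated x86-only macros and the generic spellings that replace them; the definitions shown are the standard generic ones:

  /* x86-only duplicates (removed)  ->  generic equivalents (kept):
   *   PMD_PAGE_SIZE, PMD_PAGE_MASK ->  PMD_SIZE, PMD_MASK
   *   PUD_PAGE_SIZE, PUD_PAGE_MASK ->  PUD_SIZE, PUD_MASK
   */
  #define PMD_SIZE	(_AC(1, UL) << PMD_SHIFT)
  #define PMD_MASK	(~(PMD_SIZE - 1))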
-
Peter Zijlstra authored
Since __HAVE_ARCH_* style guards have been deprecated in favour of defining the function name onto itself, convert pxxp_get(). Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/Y2EUEBlQXNgaJgoI@hirez.programming.kicks-ass.net
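The convention change, sketched for a ptep_get() override; the READ_ONCE() body is illustrative of a typical arch implementation:

  /* Old style: define __HAVE_ARCH_PTEP_GET and test it in generic code.
   * New style: the override defines its own name, so generic code can do
   * '#ifndef ptep_get' to supply the fallback. */
  static inline pte_t ptep_get(pte_t *ptep)
  {
  	return READ_ONCE(*ptep);
  }
  #define ptep_get ptep_get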
-
Peter Zijlstra authored
Recognise that set_64bit() is a special case of our previously introduced pxx_xchg64(), so use that and get rid of set_64bit(). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114425.233481884%40infradead.org
-
Peter Zijlstra authored
The use of set_64bit() in X86_64 only code is pretty pointless, seeing how it's a direct assignment. Remove all this nonsense. [nathanchance: unbreak irte] Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114425.168036718%40infradead.org
-
Peter Zijlstra authored
Given that ptep_get_and_clear() uses cmpxchg8b, and that should be by far the most common case, there's no point in having an optimized variant for pmd/pud. Introduce the pxx_xchg64() helper to implement the common logic once. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114425.103392961%40infradead.org
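A sketch of the common helper under the commit's description: one try_cmpxchg64() loop parameterized on the entry type; the exact in-tree macro may differ in detail:

  #define pxx_xchg64(_pxx, _ptr, _val) ({				\
  	_pxx##val_t *_p = (_pxx##val_t *)(_ptr);			\
  	_pxx##val_t _o = *_p;						\
  	do { } while (!try_cmpxchg64(_p, &_o, (_val)));			\
  	_o;								\
  })

  /* e.g. for a pte_t: old.pte = pxx_xchg64(pte, &ptep->pte, 0ULL); */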
-
Peter Zijlstra authored
Disallow write-tearing, that would be really unfortunate. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114425.038102604%40infradead.org
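The anti-tearing idiom this refers to, sketched for a single-word set: publish entries with WRITE_ONCE() so the compiler cannot split the store:

  static inline void native_set_pte(pte_t *ptep, pte_t pte)
  {
  	WRITE_ONCE(*ptep, pte);	/* single, non-torn store */
  }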
-
Peter Zijlstra authored
PAE implies CX8, write readable code. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114424.971450128%40infradead.org
-
Peter Zijlstra authored
Since it no longer applies to only PTEs, rename it to PXX. Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114424.776404066%40infradead.org
-
Peter Zijlstra authored
AFAICT there's no reason to do anything different than what we do for PTEs. Make it so (also affects SH). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114424.711181252%40infradead.org
-
Peter Zijlstra authored
Just like 64bit pte_t, have a low/high split in pmd_t. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114424.645657294%40infradead.org
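A sketch of the PAE pmd_t layout after this change, mirroring pte_t; field names follow the pte convention and may differ from the tree:

  typedef union {
  	struct {
  		unsigned long pmd_low;	/* written last on set */
  		unsigned long pmd_high;
  	};
  	pmdval_t pmd;
  } pmd_t;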
-
Peter Zijlstra authored
Instead of mucking about with at least 2 different ways of fudging it, do the same thing we do for pte_t. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221022114424.580310787%40infradead.org
-
Peter Zijlstra authored
Provide a native implementation of set_memory_rox(), avoiding the double set_memory_ro();set_memory_x(); calls. Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org>
-
Peter Zijlstra authored
Because endlessly repeating: set_memory_ro() set_memory_x() is getting tedious. Suggested-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/Y1jek64pXOsougmz@hirez.programming.kicks-ass.net
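A plausible shape for the generic helper (the x86 commit above supplies a native version that avoids the two separate walks); assume arch overrides take precedence in the usual way:

  static inline int set_memory_rox(unsigned long addr, int numpages)
  {
  	int ret = set_memory_ro(addr, numpages);

  	if (ret)
  		return ret;
  	return set_memory_x(addr, numpages);
  }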
-
Peter Zijlstra authored
Straight up revert of commit: a970174d ("x86/mm: Do not verify W^X at boot up") now that the root cause has been fixed. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221025201058.011279208@infradead.org
-
Peter Zijlstra authored
Now that text_poke is available before ftrace, remove the SYSTEM_BOOTING exceptions. Specifically, this cures a W+X case during boot. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221025201057.945960823@infradead.org
-
Peter Zijlstra authored
Instead of duplicating init_mm, allocate a fresh mm. The advantage is that mm_alloc() has much simpler dependencies. Additionally it makes more conceptual sense, init_mm has no (and must not have) user state to duplicate. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/20221025201057.816175235@infradead.org
-
Peter Zijlstra authored
Seth found that the CPU-entry-area, the piece of per-cpu data that is mapped into the userspace page-tables for kPTI, is not subject to any randomization -- irrespective of kASLR settings. On x86_64 a whole P4D (512 GB) of virtual address space is reserved for this structure, which is plenty large enough to randomize things a little. As such, use a straightforward randomization scheme that avoids duplicates to spread the existing CPUs over the available space. [ bp: Fix le build. ] Reported-by:
Seth Jenkins <sethjenkins@google.com> Reviewed-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de>
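A hedged sketch of such a duplicate-avoiding scheme; the bookkeeping helpers are hypothetical stand-ins for whatever the real init_cea_offsets() does:

  static void __init init_cea_offsets(void)
  {
  	unsigned int max_cea, i, cea;

  	max_cea = (CPU_ENTRY_AREA_MAP_SIZE - PAGE_SIZE) / CPU_ENTRY_AREA_SIZE;

  	for_each_possible_cpu(i) {
  		do {
  			cea = get_random_u32_below(max_cea);
  		} while (cea_offset_in_use(cea));	/* hypothetical helper */
  		mark_cea_offset_used(cea);		/* hypothetical helper */
  		per_cpu(_cea_offset, i) = cea;
  	}
  }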
-
Andrey Ryabinin authored
KASAN maps shadow for the entire CPU-entry-area: [CPU_ENTRY_AREA_BASE, CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE] This will explode once the per-cpu entry areas are randomized, since that will increase CPU_ENTRY_AREA_MAP_SIZE to 512 GB and KASAN fails to allocate shadow for such a big area. Fix this by allocating KASAN shadow only for the really used cpu entry area addresses mapped by cea_map_percpu_pages(). Thanks to the 0day folks for finding and reporting this to be an issue. [ dhansen: tweak changelog since this will get committed before peterz's actual cpu-entry-area randomization ] Signed-off-by:
Andrey Ryabinin <ryabinin.a.a@gmail.com> Signed-off-by:
Dave Hansen <dave.hansen@linux.intel.com> Tested-by:
Yujie Liu <yujie.liu@intel.com> Cc: kernel test robot <yujie.liu@intel.com> Link: https://lore.kernel.org/r/202210241508.2e203c3d-yujie.liu@intel.com
-
Will Deacon authored
This reverts commit 44ecda71. All versions of this patch on the mailing list, including the version that ended up getting merged, have portions of code guarded by the non-existent CONFIG_ARM64_WORKAROUND_2645198 option. Although Anshuman says he tested the code with some additional debug changes [1], I'm hesitant to fix the CONFIG option and light up a bunch of code right before I (and others) disappear for the end of year holidays, during which time we won't be around to deal with any fallout. So revert the change for now. We can bring back a fixed, tested version for a later -rc when folks are thinking about things other than trees and turkeys. [1] https://lore.kernel.org/r/b6f61241-e436-5db1-1053-3b441080b8d6@arm.com Reported-by:
Lukas Bulwahn <lukas.bulwahn@gmail.com> Link: https://lore.kernel.org/r/20221215094811.23188-1-lukas.bulwahn@gmail.com Signed-off-by:
Will Deacon <will@kernel.org>
-
- Dec 14, 2022
-
-
Ard Biesheuvel authored
The recent switch on arm64 from DYNAMIC_FTRACE_WITH_REGS to DYNAMIC_FTRACE_WITH_ARGS failed to take into account that we currently require the former in order to allow the function graph tracer to be enabled in combination with shadow call stacks. This means that this is no longer permitted at all, in spite of the fact that either flavour of ftrace works perfectly fine in this combination. So permit WITH_ARGS as well as WITH_REGS. Fixes: ddc9863e ("scs: Disable when function graph tracing is enabled") Acked-by:
Mark Rutland <mark.rutland@arm.com> Acked-by:
Steven Rostedt (Google) <rostedt@goodmis.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20221213132407.1485025-1-ardb@kernel.org Signed-off-by:
Will Deacon <will@kernel.org>
-
Huacai Chen authored
1. Enable suspend (ACPI S3) and hibernation (ACPI S4).
2. Enable some options for FDT-based systems (e.g., SERIAL_OF_PLATFORM).
3. Enable CONFIG_KALLSYMS_ALL and CONFIG_DEBUG_FS to make ftrace more convenient.
4. Regenerate the whole file to keep the order of options the same as in the latest source code. Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
Qing Zhang authored
This patch implements ftrace trampolines through PLT entries. Tested by forcing ftrace_make_call() to use the module PLT, and then loading up a module after setting up ftrace with:

 | echo ":mod:<module-name>" > set_ftrace_filter;
 | echo function > current_tracer;
 | modprobe <module-name>

Since FTRACE_ADDR/FTRACE_REGS_ADDR is only defined when CONFIG_DYNAMIC_FTRACE is selected, we wrap their usage in module_init_ftrace_plt() with ifdeffery rather than using IS_ENABLED(). Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
Qing Zhang authored
ftrace_graph_ret_addr() can be called by stack unwinding code to convert a found stack return address ('ret') to its original value, in case the function graph tracer has modified it to be 'return_to_handler'. If it hasn't been modified, the unchanged value of 'ret' is returned. Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
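How an unwinder typically consumes this API, as a sketch; the wrapper is a hypothetical illustration, but the four-argument signature matches the generic fgraph declaration in <linux/ftrace.h>:

  #include <linux/ftrace.h>

  /* Fold the graph tracer's return-address patching back out while
   * walking frames; returns 'ret' unchanged if it wasn't patched. */
  static unsigned long unwind_fold_graph_ret(struct task_struct *task,
  					    int *graph_idx,
  					    unsigned long ret,
  					    unsigned long *retp)
  {
  	return ftrace_graph_ret_addr(task, graph_idx, ret, retp);
  }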
-
Qing Zhang authored
Allow for arguments to be passed in to ftrace_regs by default. If this is set, then arguments and stack can be found from the pt_regs.

1. HAVE_DYNAMIC_FTRACE_WITH_ARGS doesn't need a special hook for the graph tracer entry point; instead we can use the graph_ops::func function to install the return_hooker.
2. Livepatch will require this option in the future. Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
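A sketch of what a callback sees with this option; ftrace_get_regs() is the real accessor in <linux/ftrace.h>, and by convention returns NULL when only the args-style partial register set was captured:

  static void my_trace_func(unsigned long ip, unsigned long parent_ip,
  			  struct ftrace_ops *op, struct ftrace_regs *fregs)
  {
  	struct pt_regs *regs = ftrace_get_regs(fregs);

  	if (!regs)
  		return;	/* only arguments/stack are reachable via fregs */
  	/* full pt_regs available here */
  }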
-
Qing Zhang authored
This patch implements CONFIG_DYNAMIC_FTRACE_WITH_REGS on LoongArch, which allows a traced function's arguments (and some other registers) to be captured into a struct pt_regs, allowing these to be inspected and modified. Co-developed-by:
Jinyang He <hejinyang@loongson.cn> Signed-off-by:
Jinyang He <hejinyang@loongson.cn> Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-