- Sep 04, 2024
-
-
Matthew Wilcox (Oracle) authored
Convert x86 to use PG_arch_2 instead of PG_uncached and remove PG_uncached. Link: https://lkml.kernel.org/r/20240821193445.2294269-11-willy@infradead.org Signed-off-by:
Matthew Wilcox (Oracle) <willy@infradead.org> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Jul 20, 2024
-
-
Huacai Chen authored
Add ARCH_HAS_DEBUG_VM_PGTABLE selection in Kconfig, in order to make corresponding vm debug features usable on LoongArch. Also update the corresponding arch-support.txt document. Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
- Feb 15, 2024
-
-
Andrea Parri authored
RISC-V uses xRET instructions on return from interrupt and to go back to user-space; the xRET instruction is not core serializing. Use FENCE.I for providing core serialization as follows: - by calling sync_core_before_usermode() on return from interrupt (cf. ipi_sync_core()), - via switch_mm() and sync_core_before_usermode() (respectively, for uthread->uthread and kthread->uthread transitions) before returning to user-space. On RISC-V, the serialization in switch_mm() is activated by resetting the icache_stale_mask of the mm at prepare_sync_core_cmd(). Suggested-by:
Palmer Dabbelt <palmer@dabbelt.com> Signed-off-by:
Andrea Parri <parri.andrea@gmail.com> Reviewed-by:
Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Link: https://lore.kernel.org/r/20240131144936.29190-5-parri.andrea@gmail.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
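As a rough illustration of the core-serializing barrier this entry relies on (a sketch only, not the actual kernel implementation): on RISC-V, FENCE.I orders instruction fetches on the executing hart after prior stores, which is what the membarrier SYNC_CORE semantics need before returning to user-space.

    /* Illustrative sketch only: what "core serialization via FENCE.I"
     * boils down to on a single hart.
     */
    static inline void riscv_core_serialize_example(void)
    {
    	__asm__ __volatile__ ("fence.i" : : : "memory");
    }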
-
- Jan 11, 2024
-
-
Alexandre Ghiti authored
Allow deferring the TLB flush when unmapping pages, which reduces the number of IPIs and the number of sfence.vma instructions. The microbenchmark used in commit 43b3dfdd ("arm64: support batched/deferred tlb shootdown during page reclamation/migration"), made multithreaded to force the use of IPIs, shows a good performance improvement on all platforms:

* Unmatched: ~34%
* TH1520   : ~78%
* Qemu     : ~81%

In addition, perf on qemu reports a significant decrease in the time spent dealing with IPIs:

  Before: 68.17%  main  [kernel.kallsyms]  [k] __sbi_rfence_v02_call
  After :  8.64%  main  [kernel.kallsyms]  [k] __sbi_rfence_v02_call

* Benchmark:

  int stick_this_thread_to_core(int core_id)
  {
  	int num_cores = sysconf(_SC_NPROCESSORS_ONLN);

  	if (core_id < 0 || core_id >= num_cores)
  		return EINVAL;

  	cpu_set_t cpuset;
  	CPU_ZERO(&cpuset);
  	CPU_SET(core_id, &cpuset);

  	pthread_t current_thread = pthread_self();
  	return pthread_setaffinity_np(current_thread, sizeof(cpu_set_t), &cpuset);
  }

  static void *fn_thread(void *p_data)
  {
  	int ret;
  	pthread_t thread;

  	stick_this_thread_to_core((int)p_data);

  	while (1) {
  		sleep(1);
  	}

  	return NULL;
  }

  int main()
  {
  	volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
  					 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  	pthread_t threads[4];
  	int ret;

  	for (int i = 0; i < 4; ++i) {
  		ret = pthread_create(&threads[i], NULL, fn_thread, (void *)i);
  		if (ret) {
  			printf("%s", strerror(ret));
  		}
  	}

  	memset(p, 0x88, SIZE);

  	for (int k = 0; k < 10000; k++) {
  		/* swap in */
  		for (int i = 0; i < SIZE; i += 4096) {
  			(void)p[i];
  		}

  		/* swap out */
  		madvise(p, SIZE, MADV_PAGEOUT);
  	}

  	for (int i = 0; i < 4; i++)
  		pthread_cancel(threads[i]);

  	for (int i = 0; i < 4; i++)
  		pthread_join(threads[i], NULL);

  	return 0;
  }

Signed-off-by:
Alexandre Ghiti <alexghiti@rivosinc.com> Reviewed-by:
Jisheng Zhang <jszhang@kernel.org> Tested-by: Jisheng Zhang <jszhang@kernel.org> # Tested on TH1520 Tested-by:
Nam Cao <namcao@linutronix.de> Link: https://lore.kernel.org/r/20240108193640.344929-1-alexghiti@rivosinc.com Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
-
- Sep 11, 2023
-
-
Ard Biesheuvel authored
Itanium (IA64) is going away, so drop it from the kernel feature documentation. Signed-off-by:
Ard Biesheuvel <ardb@kernel.org>
-
- Sep 06, 2023
-
-
Qing Zhang authored
1/8 of the kernel address space is reserved for shadow memory. But for LoongArch there are a lot of holes between the different segments, and the valid address space (256T available) is insufficient to map all these segments to KASAN shadow memory with the common formula provided by the KASAN core, namely

  (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

So LoongArch has an arch-specific mapping formula: different segments are mapped individually, and only limited lengths of these specific segments are mapped to shadow. At the early boot stage the whole shadow region is populated with just one physical page (kasan_early_shadow_page). Later, this page is reused as a read-only zero shadow for memory that KASAN currently does not track. After the physical memory is mapped, pages for the shadow memory are allocated and mapped. Functions like memset()/memcpy()/memmove() do a lot of memory accesses. If a bad pointer is passed to one of these functions, it is important that it be caught. The compiler's instrumentation cannot do this since these functions are written in assembly, so KASAN replaces the memory functions with manually instrumented variants. The original functions are declared as weak symbols so that the strong definitions in mm/kasan/kasan.c can replace them. The original functions also have aliases with a '__' prefix in their names, so the non-instrumented variant can be called if needed. Signed-off-by:
Qing Zhang <zhangqing@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
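As a rough illustration of the common shadow translation this message refers to, here is a minimal sketch; the shift and offset values below are hypothetical placeholders, not the LoongArch ones.

    /* Sketch of the generic KASAN shadow-address computation; the scale
     * shift and offset are architecture-specific (values here are made up).
     */
    #define EXAMPLE_KASAN_SHADOW_SCALE_SHIFT  3          /* 1 shadow byte per 8 bytes */
    #define EXAMPLE_KASAN_SHADOW_OFFSET       0xdfff000000000000UL /* hypothetical */

    static inline unsigned long example_mem_to_shadow(unsigned long addr)
    {
    	return (addr >> EXAMPLE_KASAN_SHADOW_SCALE_SHIFT) +
    	       EXAMPLE_KASAN_SHADOW_OFFSET;
    }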
-
Feiyang Chen authored
Add ARCH_HAS_KCOV and HAVE_GCC_PLUGINS to the LoongArch Kconfig. Also disable instrumentation of the vDSO. Signed-off-by:
Feiyang Chen <chenfeiyang@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
Qing Zhang authored
KGDB is intended to be used as a source-level debugger for the Linux kernel. It is used along with gdb to debug a Linux kernel. GDB can be used to "break in" to the kernel to inspect memory, variables and registers, similar to the way an application developer would use GDB to debug an application. KDB is a frontend of KGDB which is similar to GDB. At present, in addition to the generic KGDB features, the LoongArch KGDB implements the following features: - Hardware breakpoints/watchpoints; - Software single-step support for KDB. Signed-off-by: Qing Zhang <zhangqing@loongson.cn> # Framework & CoreFeature Signed-off-by: Binbin Zhou <zhoubinbin@loongson.cn> # BreakPoint & SingleStep Signed-off-by: Hui Li <lihui@loongson.cn> # Some Minor Improvements Signed-off-by: Randy Dunlap <rdunlap@infradead.org> # Some Build Error Fixes Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
- Aug 18, 2023
-
-
Bjorn Helgaas authored
Fix typos in Documentation. Signed-off-by:
Bjorn Helgaas <bhelgaas@google.com> Link: https://lore.kernel.org/r/20230814212822.193684-4-helgaas@kernel.org Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Barry Song authored
On x86, batched and deferred TLB shootdown has led to a 90% performance increase of the TLB shootdown itself. On arm64, HW can do TLB shootdown without a software IPI, but the synchronous tlbi is still quite expensive. Even running the simplest program that requires swapout can prove this is true:

  #include <sys/types.h>
  #include <unistd.h>
  #include <sys/mman.h>
  #include <string.h>

  int main()
  {
  #define SIZE (1 * 1024 * 1024)
  	volatile unsigned char *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
  					 MAP_SHARED | MAP_ANONYMOUS, -1, 0);

  	memset(p, 0x88, SIZE);

  	for (int k = 0; k < 10000; k++) {
  		/* swap in */
  		for (int i = 0; i < SIZE; i += 4096) {
  			(void)p[i];
  		}

  		/* swap out */
  		madvise(p, SIZE, MADV_PAGEOUT);
  	}
  }

Perf result on snapdragon 888 with 8 cores, using zRAM as the swap block device:

 ~ # perf record taskset -c 4 ./a.out
 [ perf record: Woken up 10 times to write data ]
 [ perf record: Captured and wrote 2.297 MB perf.data (60084 samples) ]
 ~ # perf report
 # To display the perf.data header info, please use --header/--header-only options.
 #
 # Total Lost Samples: 0
 #
 # Samples: 60K of event 'cycles'
 # Event count (approx.): 35706225414
 #
 # Overhead  Command  Shared Object      Symbol
 # ........  .......  .................  ......
     21.07%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irq
      8.23%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock_irqrestore
      6.67%  a.out    [kernel.kallsyms]  [k] filemap_map_pages
      6.16%  a.out    [kernel.kallsyms]  [k] __zram_bvec_write
      5.36%  a.out    [kernel.kallsyms]  [k] ptep_clear_flush
      3.71%  a.out    [kernel.kallsyms]  [k] _raw_spin_lock
      3.49%  a.out    [kernel.kallsyms]  [k] memset64
      1.63%  a.out    [kernel.kallsyms]  [k] clear_page
      1.42%  a.out    [kernel.kallsyms]  [k] _raw_spin_unlock
      1.26%  a.out    [kernel.kallsyms]  [k] mod_zone_state.llvm.8525150236079521930
      1.23%  a.out    [kernel.kallsyms]  [k] xas_load
      1.15%  a.out    [kernel.kallsyms]  [k] zram_slot_lock

ptep_clear_flush() takes 5.36% of CPU time in this micro-benchmark, which swaps in/out a page mapped by only one process. If the page is mapped by multiple processes, typically more than 100 on a phone, the overhead is much higher, as the TLB flush has to run 100 times for one single page. Plus, the TLB flush overhead increases with the number of CPU cores due to the bad scalability of TLB shootdown in HW, so ARM64 servers should expect much higher overhead.

Further, perf annotate shows that 95% of the CPU time of ptep_clear_flush() is actually spent in the final dsb() waiting for the completion of the TLB flush. This gives us a very good chance to leverage the existing batched TLB infrastructure in the kernel. The minimum modification is that we only send an async tlbi in the first stage, and we issue the dsb when we have to sync in the second stage.

With the above simplest micro-benchmark, the elapsed time to finish the program decreases by around 5%.

Typical elapsed time w/o patch:
 ~ # time taskset -c 4 ./a.out
 0.21user 14.34system 0:14.69elapsed

w/ patch:
 ~ # time taskset -c 4 ./a.out
 0.22user 13.45system 0:13.80elapsed

Also tested with the benchmark in the commit on a Kunpeng920 arm64 server; an improvement of around 12.5% was observed with the command `time ./swap_bench`:

        w/o             w/
 real   0m13.460s       0m11.771s
 user   0m0.248s        0m0.279s
 sys    0m12.039s       0m11.458s

Originally, a 16.99% overhead of ptep_clear_flush() was noticed, which has been eliminated by this patch:

 [root@localhost yang]# perf record -- ./swap_bench && perf report
 [...]
 16.99%  swap_bench  [kernel.kallsyms]  [k] ptep_clear_flush

It has been tested on 4-, 8- and 128-CPU platforms and shown to be beneficial on large systems, but it may not bring an improvement on small systems such as a 4-CPU platform. This patch also improves the performance of page migration: using pmbench and migrating its pages between node 0 and node 1 for 100 times over 1G of memory, this patch decreases the time used by around 20% (from 18.338318910 sec to 13.981866350 sec) by saving the time spent in ptep_clear_flush(). Link: https://lkml.kernel.org/r/20230717131004.12662-5-yangyicong@huawei.com Tested-by:
Yicong Yang <yangyicong@hisilicon.com> Tested-by:
Xin Hao <xhao@linux.alibaba.com> Tested-by:
Punit Agrawal <punit.agrawal@bytedance.com> Signed-off-by:
Barry Song <v-songbaohua@oppo.com> Signed-off-by:
Yicong Yang <yangyicong@hisilicon.com> Reviewed-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by:
Xin Hao <xhao@linux.alibaba.com> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Reviewed-by:
Catalin Marinas <catalin.marinas@arm.com> Cc: Anshuman Khandual <anshuman.khandual@arm.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Nadav Amit <namit@vmware.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Barry Song <baohua@kernel.org> Cc: Darren Hart <darren@os.amperecomputing.com> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com> Cc: lipeifeng <lipeifeng@oppo.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Steven Miao <realmz6@gmail.com> Cc: Will Deacon <will@kernel.org> Cc: Zeng Tao <prime.zeng@hisilicon.com> Signed-off-by:
Andrew Morton <akpm@linux-foundation.org>
-
- Jul 14, 2023
-
-
Tiezhu Yang authored
Run the refresh script [1] to document the recent feature additions. [1] Documentation/features/scripts/features-refresh.sh Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/1689060720-4628-3-git-send-email-yangtiezhu@loongson.cn
-
Tiezhu Yang authored
ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT selects ARCH_HAS_ELF_RANDOMIZE, so add ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT as another Kconfig check for ELF-ASLR feature, then the refresh script can be used to handle this case for all archs. Co-developed-by:
Xi Ruoyao <xry111@xry111.site> Signed-off-by:
Xi Ruoyao <xry111@xry111.site> Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by:
Jonathan Corbet <corbet@lwn.net> Link: https://lore.kernel.org/r/1689060720-4628-2-git-send-email-yangtiezhu@loongson.cn
-
- Jun 29, 2023
-
-
Youling Tang authored
Add support for jump labels based on the ARM64 version. Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Youling Tang <tangyouling@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
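For context, jump labels are consumed through the kernel's static-key API; a minimal usage sketch follows (my_feature_key and do_rare_feature_work() are hypothetical names, not part of this patch).

    #include <linux/jump_label.h>

    DEFINE_STATIC_KEY_FALSE(my_feature_key);        /* hypothetical example key */

    void hot_path(void)
    {
    	/* Compiles to a NOP that is patched into a branch once the key is enabled. */
    	if (static_branch_unlikely(&my_feature_key))
    		do_rare_feature_work();         /* hypothetical helper */
    }

    void enable_feature(void)
    {
    	static_branch_enable(&my_feature_key);  /* patches the jump sites at runtime */
    }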
-
Tiezhu Yang authored
We can see that DEBUG_KMEMLEAK depends on HAVE_DEBUG_KMEMLEAK after commit b69ec42b ("Kconfig: clean up the long arch list for the DEBUG_KMEMLEAK config option"), so just select HAVE_DEBUG_KMEMLEAK to support kmemleak on LoongArch. Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Signed-off-by:
Huacai Chen <chenhuacai@loongson.cn>
-
- Mar 27, 2023
-
-
Heiko Carstens authored
s390 trivially supports the ARCH_HAS_MEMBARRIER_SYNC_CORE requirements since the lpswe(y) instruction used to return from any kernel context to user space performs CPU serialization. This is very similar to arm, arm64 and powerpc. See commit 70216e18 ("membarrier: Provide core serializing command, *_SYNC_CORE") for further details. Signed-off-by:
Heiko Carstens <hca@linux.ibm.com> Signed-off-by:
Vasily Gorbik <gor@linux.ibm.com>
-
- Jan 30, 2023
-
-
Michael Schmitz authored
Add a secure_computing() call to syscall_trace_enter to actually filter system calls. Add the necessary arch Kconfig options, define the TIF_SECCOMP trace flag and provide basic seccomp filter support in asm/syscall.h. syscall_get_nr currently uses the syscall nr stored in orig_d0 because we change d0 to a default return code before starting a syscall trace. This may be inconsistent with syscall_rollback copying orig_d0 to d0 (which we never check upon return from trace). We use d0 for the return code from syscall_trace_enter in entry.S currently, and could perhaps expand that to store a new syscall number returned by the seccomp filter before executing the syscall. This clearly needs some discussion. The seccomp_bpf self test on ARAnyM passes 81 out of 94 tests. Signed-off-by:
Michael Schmitz <schmitzmic@gmail.com> Reviewed-by:
Geert Uytterhoeven <geert@linux-m68k.org> Link: https://lore.kernel.org/r/20230112035529.13521-3-schmitzmic@gmail.com Signed-off-by:
Geert Uytterhoeven <geert@linux-m68k.org>
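A minimal sketch of the kind of hook the message describes, simplified and not the literal m68k entry code: the seccomp check runs on the syscall-trace path before the syscall is dispatched.

    /* Simplified illustration only. secure_computing() returns -1 when the
     * task's seccomp filter rejects the syscall.
     */
    asmlinkage int syscall_trace_enter(void)
    {
    	if (secure_computing() == -1)
    		return -1;		/* seccomp rejected the syscall */

    	/* ... existing syscall-trace handling continues here ... */
    	return 0;
    }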
-
- Dec 05, 2022
-
-
Tiezhu Yang authored
The official arch name is LoongArch [1], so we should use the lower-case name loongarch instead of loong in Documentation/features. Just use features-refresh.sh to refresh all the related files. [1] https://www.kernel.org/doc/html/latest/loongarch/index.html Fixes: 5860800e ("Documentation/features: Update the arch support status files") Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Link: https://lore.kernel.org/r/1670156327-9631-3-git-send-email-yangtiezhu@loongson.cn Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Tiezhu Yang authored
The script should only sed the leading "arch" out of ARCH_DIR in features-refresh.sh; otherwise loongarch is recognized as loong, which is not what we want. Fixes: be99f610 ("Documentation/features: Add script that refreshes the arch support status files in place") Signed-off-by:
Tiezhu Yang <yangtiezhu@loongson.cn> Link: https://lore.kernel.org/r/1670156327-9631-2-git-send-email-yangtiezhu@loongson.cn Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Dec 03, 2022
-
-
Wei Li authored
Run the refresh script to document the recent feature additions on loong, um and csky as of v6.1-rc7. Signed-off-by:
Wei Li <liwei391@huawei.com> Link: https://lore.kernel.org/r/20221203093750.4145802-1-liwei391@huawei.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Oct 29, 2022
-
-
Liu Shixin authored
This sets the HAVE_ARCH_HUGE_VMAP option, and defines the required page table functions. With this feature, ioremap area will be mapped with huge page granularity according to its actual size. This feature can be disabled by kernel parameter "nohugeiomap". Signed-off-by:
Liu Shixin <liushixin2@huawei.com> Reviewed-by:
Björn Töpel <bjorn@kernel.org> Tested-by:
Björn Töpel <bjorn@kernel.org> Link: https://lore.kernel.org/r/20221012120038.1034354-2-liushixin2@huawei.com [Palmer: minor formatting] Signed-off-by:
Palmer Dabbelt <palmer@rivosinc.com>
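The "required page table functions" include the arch_vmap_*_supported() hooks that the generic huge-vmap code consults; a rough sketch of their shape (simplified, not the exact RISC-V implementation, and the PUD condition is purely illustrative):

    /* Simplified: report whether huge mappings may be used for vmalloc/ioremap. */
    bool arch_vmap_pmd_supported(pgprot_t prot)
    {
    	return true;				/* PMD-sized mappings are fine */
    }

    bool arch_vmap_pud_supported(pgprot_t prot)
    {
    	return IS_ENABLED(CONFIG_64BIT);	/* illustrative condition only */
    }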
-
- Jul 14, 2022
-
-
Max Filippov authored
Select ARCH_HAS_GCOV_PROFILE_ALL and set GCOV_PROFILE = n inside arch/xtensa/boot/lib. Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com>
-
Max Filippov authored
Select ARCH_HAS_KCOV and set KCOV_INSTRUMENT = n inside arch/xtensa/boot/lib. Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com>
-
- Jun 30, 2022
-
-
Frederic Weisbecker authored
Context tracking is going to be used not only to track user transitions but also idle/IRQs/NMIs. The user tracking part will then become a separate feature. Prepare Kconfig for that. [ frederic: Apply Max Filippov feedback. ] Signed-off-by:
Frederic Weisbecker <frederic@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Neeraj Upadhyay <quic_neeraju@quicinc.com> Cc: Uladzislau Rezki <uladzislau.rezki@sony.com> Cc: Joel Fernandes <joel@joelfernandes.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Nicolas Saenz Julienne <nsaenz@kernel.org> Cc: Marcelo Tosatti <mtosatti@redhat.com> Cc: Xiongfeng Wang <wangxiongfeng2@huawei.com> Cc: Yu Liao <liaoyu15@huawei.com> Cc: Phil Auld <pauld@redhat.com> Cc: Paul Gortmaker<paul.gortmaker@windriver.com> Cc: Alex Belits <abelits@marvell.com> Signed-off-by:
Paul E. McKenney <paulmck@kernel.org> Reviewed-by:
Nicolas Saenz Julienne <nsaenzju@redhat.com> Tested-by:
Nicolas Saenz Julienne <nsaenzju@redhat.com>
-
- Jun 27, 2022
-
-
Kefeng Wang authored
With the ioremap_prot() definition from generic ioremap, and with pte_pgprot() moved from hugetlbpage.c into pgtable.h, arm64 can have HAVE_IOREMAP_PROT, which enables the generic_access_phys() code. This is useful for debugging, e.g. with gdb. Acked-by:
Catalin Marinas <catalin.marinas@arm.com> Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Link: https://lore.kernel.org/r/20220607125027.44946-7-wangkefeng.wang@huawei.com Signed-off-by:
Will Deacon <will@kernel.org>
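For reference, pte_pgprot() extracts the attribute/permission bits from a PTE; a sketch of one common way to implement it, by masking out the physical frame number and keeping the rest (illustrative, not necessarily the exact arm64 code):

    /* Sketch: derive the pgprot bits by stripping the PFN out of the PTE value. */
    static inline pgprot_t pte_pgprot(pte_t pte)
    {
    	unsigned long pfn = pte_pfn(pte);

    	return __pgprot(pte_val(pfn_pte(pfn, __pgprot(0))) ^ pte_val(pte));
    }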
-
- Jun 09, 2022
-
-
Zheng Zengkai authored
The arch support status files don't match reality as of v5.19-rc1, use the features-refresh.sh to refresh all the arch-support.txt files in place. The main effect is to add entries for the new loong architecture. Signed-off-by:
Zheng Zengkai <zhengzengkai@huawei.com> Link: https://lore.kernel.org/r/20220609025656.143460-1-zhengzengkai@huawei.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- May 02, 2022
-
-
Max Filippov authored
xtensa kernels successfully build and run with CONFIG_DEBUG_VM_PGTABLE=y, enable arch support for it. Reviewed-by:
Anshuman Khandual <anshuman.khandual@arm.com> Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com>
-
Max Filippov authored
There's no direct cputime_t manipulation in the xtensa arch code, so generic virt CPU accounting may be enabled. Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com>
-
Max Filippov authored
Put user exit context tracking call on the common kernel entry/exit path (function calls are impossible at earlier kernel entry stages because PS.EXCM is not cleared yet). Put user entry context tracking call on the user exit path. Syscalls go through this common code too, so nothing specific needs to be done for them. Signed-off-by:
Max Filippov <jcmvbkbc@gmail.com>
-
- Mar 07, 2022
-
-
Alan Kao authored
The nds32 architecture, also known as AndeStar V3, is a custom 32-bit RISC target designed by Andes Technologies. Support was added to the kernel in 2016 as the replacement RISC-V based V5 processors were already announced, and maintained by (current or former) Andes employees. As explained by Alan Kao, new customers are now all using RISC-V, and all known nds32 users are already on longterm stable kernels provided by Andes, with no development work going into mainline support any more. While the port is still in a reasonably good shape, it only gets worse over time without active maintainers, so it seems best to remove it before it becomes unusable. As always, if it turns out that there are mainline users after all, and they volunteer to maintain the port in the future, the removal can be reverted. Link: https://lore.kernel.org/linux-mm/YhdWNLUhk+x9RAzU@yamatobi.andestech.com/ Link: https://lore.kernel.org/lkml/20220302065213.82702-1-alankao@andestech.com/ Link: https://www.andestech.com/en/products-solutions/andestar-architecture/ Signed-off-by:
Alan Kao <alankao@andestech.com> [arnd: rewrite changelog to provide more background] Signed-off-by:
Arnd Bergmann <arnd@arndb.de>
-
- Feb 23, 2022
-
-
Christoph Hellwig authored
Signed-off-by:
Christoph Hellwig <hch@lst.de>
-
- Dec 17, 2021
-
-
Ard Biesheuvel authored
Since commit bcf9033e ("sched: move CPU field back into thread_info if THREAD_INFO_IN_TASK=y"), the CPU field in thread_info went back to being managed by the core code, so we no longer have to keep it in sync in arch code. While at it, mark THREAD_INFO_IN_TASK as done for ARM in the documentation. Reviewed-by:
Linus Walleij <linus.walleij@linaro.org> Signed-off-by:
Ard Biesheuvel <ardb@kernel.org> Signed-off-by:
Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
-
- Nov 01, 2021
-
-
Helge Deller authored
This implements the CONFIG_THREAD_INFO_IN_TASK option. With this change:
- before, thread_info was part of the stack and located at the beginning of the stack;
- now the thread_info struct is moved and located inside the task_struct structure;
- the stack is allocated and handled like on most other platforms;
- the cpu field of thread_info is dropped and the one in task_struct is used instead. Signed-off-by:
Helge Deller <deller@gmx.de> Signed-off-by:
Sven Schnelle <svens@stackframe.org>
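For orientation, with CONFIG_THREAD_INFO_IN_TASK the thread_info lives at the start of task_struct instead of at the base of the kernel stack; roughly:

    struct task_struct {
    #ifdef CONFIG_THREAD_INFO_IN_TASK
    	/*
    	 * With THREAD_INFO_IN_TASK, thread_info must stay the first member
    	 * so that current_thread_info() can simply cast 'current'.
    	 */
    	struct thread_info	thread_info;
    #endif
    	/* ... remainder of task_struct ... */
    };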
-
- Sep 11, 2021
-
-
Kefeng Wang authored
This enlarges the bits available for stack randomisation on RV64 from the default of 8MiB to 1GiB, to match arm64 and x86. Also, update the documentation to reflect our support for stack randomisation. Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> [Palmer: commit text] Signed-off-by:
Palmer Dabbelt <palmerdabbelt@google.com>
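The arithmetic behind "1GiB": with 4 KiB pages, 1 GiB of randomisation corresponds to 18 random page-offset bits. A hedged sketch of the kind of mask this implies (illustrative only, not the literal riscv definition):

    /* Illustration: 0x3ffff = 2^18 - 1 random pages; 2^18 * 4 KiB = 1 GiB.
     * The old default of 8 MiB corresponds to 0x7ff (2^11 - 1 pages).
     */
    #define EXAMPLE_STACK_RND_MASK	(0x3ffffUL >> (PAGE_SHIFT - 12))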
-
- Aug 24, 2021
-
-
Mark Rutland authored
In commit bbc180a5 ("mm: HUGE_VMAP arch support cleanup") we replaced:
* ioremap_pud_enabled() with arch_vmap_pud_supported()
* ioremap_pmd_enabled() with arch_vmap_pmd_supported()
Update the documentation accordingly. Signed-off-by:
Mark Rutland <mark.rutland@arm.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Ingo Molnar <mingo@kernel.org> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Nicholas Piggin <npiggin@gmail.com> Cc: linux-doc@vger.kernel.org Link: https://lore.kernel.org/r/20210817091621.16799-1-mark.rutland@arm.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Aug 12, 2021
-
-
Jisheng Zhang authored
After commit e88b3331 ("riscv: mm: add THP support on 64-bit"), riscv can support THP. Signed-off-by:
Jisheng Zhang <jszhang@kernel.org> Link: https://lore.kernel.org/r/20210805002739.23f44d2d@xhacker Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Jul 15, 2021
-
-
Ingo Molnar authored
Signed-off-by:
Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/YN2nhV5F0hBVNPuX@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
Ingo Molnar authored
RISC-V gained support recently. Signed-off-by:
Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/YN2nqOVHgGDt4Iid@gmail.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Mar 31, 2021
-
-
Aneesh Kumar K.V authored
This reverts commit 675bceb0 ("powerpc/mm: Remove DEBUG_VM_PGTABLE support on powerpc") All the related issues are fixed as of commit: f14312e1 ("mm/debug_vm_pgtable: avoid doing memory allocation with pgtable_t mapped.") Hence re-enable it. Signed-off-by:
Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com> Signed-off-by:
Michael Ellerman <mpe@ellerman.id.au> Link: https://lore.kernel.org/r/20210318034855.74513-1-aneesh.kumar@linux.ibm.com
-
- Mar 15, 2021
-
-
Barry Song authored
BATCHED_UNMAP_TLB_FLUSH is used on x86 to do batched TLB shootdown by sending one IPI to flush all TLB entries after unmapping pages, rather than sending an IPI to flush each individual entry. On arm64, TLB shootdown is done by hardware: flush instructions are inner-shareable, and local flushes are limited to boot time (1 per CPU) and to when a task gets a new ASID. So marking this feature as "TODO" is not proper, and ".." isn't good either. So this patch adds an "N/A" for this kind of feature, which is not needed on some architectures. Signed-off-by:
Barry Song <song.bao.hua@hisilicon.com> Acked-by:
Will Deacon <will@kernel.org> Cc: Mel Gorman <mgorman@suse.de> Cc: Andy Lutomirski <luto@kernel.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20210223003230.11976-1-song.bao.hua@hisilicon.com Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-
- Feb 25, 2021
-
-
Arnd Bergmann authored
Run the update script to document the recent feature additions on riscv, mips and csky. Fixes: c109f424 ("csky: Add kmemleak support") Fixes: 8b3165e5 ("MIPS: Enable GCOV") Fixes: 1ddc96bd ("MIPS: kernel: Support extracting off-line stack traces from user-space with perf") Fixes: 74784081 ("riscv: Add uprobes supported") Fixes: 829adda5 ("riscv: Add KPROBES_ON_FTRACE supported") Fixes: c22b0bcb ("riscv: Add kprobes supported") Fixes: dcdc7a53 ("RISC-V: Implement ptrace regs and stack API") Signed-off-by:
Arnd Bergmann <arnd@arndb.de> Link: https://lore.kernel.org/r/20210225142841.3385428-2-arnd@kernel.org Signed-off-by:
Jonathan Corbet <corbet@lwn.net>
-