  1. Dec 29, 2023
    • mm: convert page_try_share_anon_rmap() to folio_try_share_anon_rmap_[pte|pmd]() · e3b4b137
      David Hildenbrand authored
      Let's convert it like we converted all the other rmap functions.  Don't
      introduce folio_try_share_anon_rmap_ptes() for now, as no user that wants
      rmap batching is in sight; it is easy to add later.
      
      All users are easy to convert -- only ksm.c doesn't use folios yet but
      that is left for future work -- so let's just do it in a single shot.
      
      While at it, turn the BUG_ON into a WARN_ON_ONCE.
      
      Note that page_try_share_anon_rmap() so far didn't care about pte/pmd
      mappings (no compound parameter).  We're changing that so we can perform
      better sanity checks and make the code actually more readable/consistent. 
      For example, __folio_rmap_sanity_checks() will make sure that a PMD range
      actually falls completely into the folio.
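
      As a rough illustration of the call-site shape (a sketch only -- names
      like "subpage" and "ret" stand in for whatever the real caller uses):

          /*
           * Before: only the page is passed, so the helper cannot tell a PTE
           * mapping from a PMD mapping.
           */
          if (page_try_share_anon_rmap(subpage))
                  ret = false;

          /*
           * After: folio + mapped page are passed, and the _pte/_pmd suffix
           * makes the mapping granularity explicit, which is what lets
           * __folio_rmap_sanity_checks() verify that the range fits the folio.
           */
          if (folio_try_share_anon_rmap_pte(folio, subpage))
                  ret = false;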
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-39-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: page_try_dup_anon_rmap() -> folio_try_dup_anon_rmap_pmd() · 96c772c2
      David Hildenbrand authored
      Let's convert copy_huge_pmd() and fix up the comment in copy_huge_pud().
      While at it, perform more folio conversion in copy_huge_pmd().
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-36-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: page_remove_rmap() -> folio_remove_rmap_pmd() · a8e61d58
      David Hildenbrand authored
      Let's convert zap_huge_pmd() and set_pmd_migration_entry().  While at it,
      perform some more folio conversion.
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-26-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: page_add_anon_rmap() -> folio_add_anon_rmap_pmd() · 395db7b1
      David Hildenbrand authored
      Let's convert remove_migration_pmd().  No need to set RMAP_COMPOUND, which
      we will remove soon.
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-17-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: batch rmap operations in __split_huge_pmd_locked() · 91b2978a
      David Hildenbrand authored
      Let's use folio_add_anon_rmap_ptes(), batching the rmap operations.
      
      While at it, use more folio operations (but only in the code branch we're
      touching), use VM_WARN_ON_FOLIO(), and pass RMAP_EXCLUSIVE instead of
      manually setting PageAnonExclusive.
      
      We should never see non-anon pages on that branch: otherwise, the existing
      page_add_anon_rmap() call would have been flawed already.
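
      A rough sketch of the batched call (illustrative, not the exact hunk;
      "page", "haddr" and "rmap_flags" stand in for the local variables used
      at that site):

          /*
           * One call covers all HPAGE_PMD_NR subpages instead of a loop of
           * page_add_anon_rmap() calls; passing RMAP_EXCLUSIVE in rmap_flags
           * replaces setting PageAnonExclusive on each subpage by hand.
           */
          folio_add_anon_rmap_ptes(folio, page, HPAGE_PMD_NR, vma, haddr,
                                   rmap_flags);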
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-16-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm/huge_memory: page_add_file_rmap() -> folio_add_file_rmap_pmd() · 14d85a6e
      David Hildenbrand authored
      Let's convert remove_migration_pmd() and while at it, perform some folio
      conversion.
      
      Link: https://lkml.kernel.org/r/20231220224504.646757-10-david@redhat.com
      
      
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
      Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • userfaultfd: UFFDIO_MOVE uABI · adef4406
      Andrea Arcangeli authored
      Implement the uABI of UFFDIO_MOVE ioctl.
      UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
      needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
      available (in userspace) for recycling, as is usually the case in heap
      compaction algorithms, then we can avoid the page allocation and memcpy
      (done by UFFDIO_COPY). Also, since the pages are recycled in the
      userspace, we avoid the need to release (via madvise) the pages back to
      the kernel [2].
      
      We see over 40% reduction (on a Google pixel 6 device) in the compacting
      thread's completion time by using UFFDIO_MOVE vs.  UFFDIO_COPY.  This was
      measured using a benchmark that emulates a heap compaction implementation
      using userfaultfd (to allow concurrent accesses by application threads). 
      More details of the usecase are explained in [2].  Furthermore,
      UFFDIO_MOVE enables moving swapped-out pages without touching them within
      the same vma.  Today this can only be done with mremap(), which, however,
      forces the vma to be split.
      
      [1] https://lore.kernel.org/all/1425575884-2574-1-git-send-email-aarcange@redhat.com/
      [2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
      
      Update for the ioctl_userfaultfd(2)  manpage:
      
         UFFDIO_MOVE
             (Since Linux xxx)  Move a continuous memory chunk into the
             userfault registered range and optionally wake up the blocked
             thread. The source and destination addresses and the number of
             bytes to move are specified by the src, dst, and len fields of
             the uffdio_move structure pointed to by argp:
      
                 struct uffdio_move {
                     __u64 dst;    /* Destination of move */
                     __u64 src;    /* Source of move */
                     __u64 len;    /* Number of bytes to move */
                     __u64 mode;   /* Flags controlling behavior of move */
                     __s64 move;   /* Number of bytes moved, or negated error */
                 };
      
             The following value may be bitwise ORed in mode to change the
             behavior of the UFFDIO_MOVE operation:
      
             UFFDIO_MOVE_MODE_DONTWAKE
                    Do not wake up the thread that waits for page-fault
                    resolution
      
             UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
                     Allow holes in the source virtual range that is being moved.
                     When not specified, the holes will result in an ENOENT
                     error.  When specified, the holes will be accounted as
                     successfully moved memory.  This is mostly useful to move
                     hugepage-aligned virtual regions without knowing whether
                     there are transparent hugepages in the regions, while
                     avoiding the risk of having to split a hugepage during
                     the operation.
      
             The move field is used by the kernel to return the number of
             bytes that was actually moved, or an error (a negated errno-
             style value).  If the value returned in move doesn't match the
             value that was specified in len, the operation fails with the
             error EAGAIN.  The move field is output-only; it is not read by
             the UFFDIO_MOVE operation.
      
              The operation may fail for various reasons. Usually, remapping of
              pages that are not exclusive to the given process fails; once KSM
              has deduplicated a page, or fork() has COW-shared it with a child
              process, it is no longer exclusive. Further, the kernel might only
              perform lightweight checks for detecting whether the pages are
              exclusive, and return -EBUSY in case that check fails. To make the
              operation more likely to succeed, KSM should be disabled, fork()
              should be avoided or MADV_DONTFORK should be configured for the
              source VMA before fork().
      
             This ioctl(2) operation returns 0 on success.  In this case, the
             entire area was moved.  On error, -1 is returned and errno is
             set to indicate the error.  Possible errors include:
      
             EAGAIN The number of bytes moved (i.e., the value returned in
                    the move field) does not equal the value that was
                    specified in the len field.
      
             EINVAL Either dst or len was not a multiple of the system page
                    size, or the range specified by src and len or dst and len
                    was invalid.
      
             EINVAL An invalid bit was specified in the mode field.
      
             ENOENT
                    The source virtual memory range has unmapped holes and
                    UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.
      
             EEXIST
                    The destination virtual memory range is fully or partially
                    mapped.
      
             EBUSY
                    The pages in the source virtual memory range are either
                    pinned or not exclusive to the process. The kernel might
                    only perform lightweight checks for detecting whether the
                    pages are exclusive. To make the operation more likely to
                    succeed, KSM should be disabled, fork() should be avoided
                    or MADV_DONTFORK should be configured for the source virtual
                    memory area before fork().
      
             ENOMEM Allocating memory needed for the operation failed.
      
             ESRCH
                    The target process has exited at the time of a UFFDIO_MOVE
                    operation.
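
      To make the uABI concrete, a minimal userspace usage sketch (not part of
      the manpage text; error handling is trimmed and the helper name is made
      up for illustration):

          #include <linux/userfaultfd.h>
          #include <sys/ioctl.h>

          /* 'uffd' is a userfaultfd fd registered over the destination range */
          static long move_range(int uffd, unsigned long dst, unsigned long src,
                                 unsigned long len)
          {
                  struct uffdio_move move = {
                          .dst  = dst,
                          .src  = src,
                          .len  = len,
                          .mode = 0,  /* or UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES, see above */
                          .move = 0,
                  };

                  if (ioctl(uffd, UFFDIO_MOVE, &move) < 0)
                          /* 'move' carries the negated errno or a partial count */
                          return move.move < 0 ? move.move : -1;
                  return move.move;       /* number of bytes actually moved */
          }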
      
      Link: https://lkml.kernel.org/r/20231206103702.3873743-3-surenb@google.com
      
      
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Suren Baghdasaryan <surenb@google.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Axel Rasmussen <axelrasmussen@google.com>
      Cc: Brian Geffon <bgeffon@google.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jann Horn <jannh@google.com>
      Cc: Kalesh Singh <kaleshsingh@google.com>
      Cc: Liam R. Howlett <Liam.Howlett@oracle.com>
      Cc: Lokesh Gidra <lokeshgidra@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Mike Rapoport (IBM) <rppt@kernel.org>
      Cc: Nicolas Geoffray <ngeoffray@google.com>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Ryan Roberts <ryan.roberts@arm.com>
      Cc: Shuah Khan <shuah@kernel.org>
      Cc: ZhangPeng <zhangpeng362@huawei.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: memcg: fix split queue list crash when large folio migration · 9bcef597
      Baolin Wang authored
      When running autonuma with enabling multi-size THP, I encountered the
      following kernel crash issue:
      
      [  134.290216] list_del corruption. prev->next should be fffff9ad42e1c490,
      but was dead000000000100. (prev=fffff9ad42399890)
      [  134.290877] kernel BUG at lib/list_debug.c:62!
      [  134.291052] invalid opcode: 0000 [#1] PREEMPT SMP NOPTI
      [  134.291210] CPU: 56 PID: 8037 Comm: numa01 Kdump: loaded Tainted:
      G            E      6.7.0-rc4+ #20
      [  134.291649] RIP: 0010:__list_del_entry_valid_or_report+0x97/0xb0
      ......
      [  134.294252] Call Trace:
      [  134.294362]  <TASK>
      [  134.294440]  ? die+0x33/0x90
      [  134.294561]  ? do_trap+0xe0/0x110
      ......
      [  134.295681]  ? __list_del_entry_valid_or_report+0x97/0xb0
      [  134.295842]  folio_undo_large_rmappable+0x99/0x100
      [  134.296003]  destroy_large_folio+0x68/0x70
      [  134.296172]  migrate_folio_move+0x12e/0x260
      [  134.296264]  ? __pfx_remove_migration_pte+0x10/0x10
      [  134.296389]  migrate_pages_batch+0x495/0x6b0
      [  134.296523]  migrate_pages+0x1d0/0x500
      [  134.296646]  ? __pfx_alloc_misplaced_dst_folio+0x10/0x10
      [  134.296799]  migrate_misplaced_folio+0x12d/0x2b0
      [  134.296953]  do_numa_page+0x1f4/0x570
      [  134.297121]  __handle_mm_fault+0x2b0/0x6c0
      [  134.297254]  handle_mm_fault+0x107/0x270
      [  134.300897]  do_user_addr_fault+0x167/0x680
      [  134.304561]  exc_page_fault+0x65/0x140
      [  134.307919]  asm_exc_page_fault+0x22/0x30
      
      The reason for the crash is that commit 85ce2c51 ("memcontrol: only
      transfer the memcg data for migration") removed the charging and
      uncharging operations of the migration folios and cleared the memcg data
      of the old folio.
      
      During the subsequent release of the old large folio in
      destroy_large_folio(), if the large folio needs to be removed from the
      split queue, the wrong split queue (pgdat->deferred_split_queue) can be
      looked up because the old folio's memcg is now NULL.  The list operations
      are then performed under the wrong split queue lock, resulting in the
      list corruption shown above.
      
      After the migration, the old folio is going to be freed, so we can remove
      it from the split queue in mem_cgroup_migrate() a bit earlier, before
      clearing the memcg data, to avoid looking up the wrong split queue.
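
      The ordering that matters in mem_cgroup_migrate() is roughly the
      following (a sketch of the idea, not the verbatim hunk):

          /* transfer the charge to the new folio first */
          commit_charge(new, memcg);
          /*
           * Drop the old (large) folio from the deferred split queue while
           * folio_memcg(old) is still valid, so the split-queue lookup does
           * not fall back to pgdat->deferred_split_queue.
           */
          folio_undo_large_rmappable(old);
          /* only now is it safe to clear the old folio's memcg data */
          old->memcg_data = 0;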
      
      [akpm@linux-foundation.org: fix comment, per Zi Yan]
      Link: https://lkml.kernel.org/r/61273e5e9b490682388377c20f52d19de4a80460.1703054559.git.baolin.wang@linux.alibaba.com
      
      
      Fixes: 85ce2c51 ("memcontrol: only transfer the memcg data for migration")
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Reviewed-by: Nhat Pham <nphamcs@gmail.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: Zi Yan <ziy@nvidia.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Shakeel Butt <shakeelb@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  2. Dec 20, 2023
    • mm: thp: introduce multi-size THP sysfs interface · 3485b883
      Ryan Roberts authored
      In preparation for adding support for anonymous multi-size THP, introduce
      new sysfs structure that will be used to control the new behaviours.  A
      new directory is added under transparent_hugepage for each supported THP
      size, and contains an `enabled` file, which can be set to "inherit" (to
      inherit the global setting), "always", "madvise" or "never".  For now, the
      kernel still only supports PMD-sized anonymous THP, so only one directory
      is populated.
      
      The first half of the change converts transhuge_vma_suitable() and
      hugepage_vma_check() so that they take a bitfield of orders for which the
      user wants to determine support, and the functions filter out all the
      orders that can't be supported, given the current sysfs configuration and
      the VMA dimensions.  The resulting functions are renamed to
      thp_vma_suitable_orders() and thp_vma_allowable_orders() respectively. 
      Convenience functions that take a single, unencoded order and return a
      boolean are also defined as thp_vma_suitable_order() and
      thp_vma_allowable_order().
      
      The second half of the change implements the new sysfs interface.  It has
      been done so that each supported THP size has a `struct thpsize`, which
      describes the relevant metadata and is itself a kobject.  This is pretty
      minimal for now, but should make it easy to add new per-thpsize files to
      the interface if needed in future (e.g.  per-size defrag).  Rather than
      keep the `enabled` state directly in the struct thpsize, I've elected to
      directly encode it into huge_anon_orders_[always|madvise|inherit]
      bitfields since this reduces the amount of work required in
      thp_vma_allowable_orders() which is called for every page fault.
      
      See Documentation/admin-guide/mm/transhuge.rst, as modified by this
      commit, for details of how the new sysfs interface works.
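
      As a rough illustration of that encoding (a hypothetical helper, not the
      kernel's exact code), the per-size `enabled` state collapses into a few
      order bitmasks that the fault path can combine cheaply:

          /* one bit per supported order, e.g. BIT(PMD_ORDER) for PMD-sized THP */
          static unsigned long huge_anon_orders_always;
          static unsigned long huge_anon_orders_madvise;
          static unsigned long huge_anon_orders_inherit;

          static unsigned long anon_orders_allowed(bool global_enabled,
                                                   bool vma_is_madvised)
          {
                  unsigned long orders = huge_anon_orders_always;

                  if (vma_is_madvised)
                          orders |= huge_anon_orders_madvise;
                  if (global_enabled)
                          /* sizes set to "inherit" follow the global setting */
                          orders |= huge_anon_orders_inherit;

                  /* the caller still filters by VMA size and alignment */
                  return orders;
          }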
      
      [ryan.roberts@arm.com: fix build warning when CONFIG_SYSFS is disabled]
        Link: https://lkml.kernel.org/r/20231211125320.3997543-1-ryan.roberts@arm.com
      Link: https://lkml.kernel.org/r/20231207161211.2374093-4-ryan.roberts@arm.com
      
      
      Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
      Reviewed-by: Barry Song <v-songbaohua@oppo.com>
      Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Tested-by: John Hubbard <jhubbard@nvidia.com>
      Acked-by: David Hildenbrand <david@redhat.com>
      Cc: Alistair Popple <apopple@nvidia.com>
      Cc: Anshuman Khandual <anshuman.khandual@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Itaru Kitayama <itaru.kitayama@gmail.com>
      Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Cc: Luis Chamberlain <mcgrof@kernel.org>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Yang Shi <shy828301@gmail.com>
      Cc: Yin Fengwei <fengwei.yin@intel.com>
      Cc: Yu Zhao <yuzhao@google.com>
      Cc: Zi Yan <ziy@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  3. Dec 11, 2023
    • mm: huge_memory: batch tlb flush when splitting a pte-mapped THP · 3027c6f8
      Baolin Wang authored
      I can observe an obvious tlb flush hotspot when splitting a pte-mapped THP
      on my ARM64 server, and the distribution of this hotspot is as follows:
      
         - 16.85% split_huge_page_to_list
            + 7.80% down_write
            - 7.49% try_to_migrate
               - 7.48% rmap_walk_anon
                    7.23% ptep_clear_flush
            + 1.52% __split_huge_page
      
      The reason is that split_huge_page_to_list() builds migration entries for
      each subpage of a pte-mapped anon THP via try_to_migrate() (or unmaps a
      file THP), clearing and TLB-flushing each subpage's pte individually.
      Moreover, split_huge_page_to_list() sets the TTU_SPLIT_HUGE_PMD flag to
      ensure the THP is already pte-mapped before splitting it into normal
      pages.
      
      Actually, there is no need to flush the TLB for each subpage immediately;
      instead we can batch the TLB flush for the whole pte-mapped THP to improve
      performance.
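
      The idea, roughly, is the following pattern in the unmap step of the
      split (a sketch of the approach, not the exact hunk):

          /* defer the per-pte flushes instead of flushing one by one */
          ttu_flags |= TTU_BATCH_FLUSH;

          if (folio_test_anon(folio))
                  try_to_migrate(folio, ttu_flags);
          else
                  try_to_unmap(folio, ttu_flags | TTU_IGNORE_MLOCK);

          /* one flush covering the whole pte-mapped THP */
          try_to_unmap_flush();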
      
      After this patch, the batched TLB flush visibly improves latency when
      running thpscale.
      
                                   k6.5-base                   patched
      Amean     fault-both-1      1071.17 (   0.00%)      901.83 *  15.81%*
      Amean     fault-both-3      2386.08 (   0.00%)     1865.32 *  21.82%*
      Amean     fault-both-5      2851.10 (   0.00%)     2273.84 *  20.25%*
      Amean     fault-both-7      3679.91 (   0.00%)     2881.66 *  21.69%*
      Amean     fault-both-12     5916.66 (   0.00%)     4369.55 *  26.15%*
      Amean     fault-both-18     7981.36 (   0.00%)     6303.57 *  21.02%*
      Amean     fault-both-24    10950.79 (   0.00%)     8752.56 *  20.07%*
      Amean     fault-both-30    14077.35 (   0.00%)    10170.01 *  27.76%*
      Amean     fault-both-32    13061.57 (   0.00%)    11630.08 *  10.96%*
      
      Link: https://lkml.kernel.org/r/431d9fb6823036369dcb1d3b2f63732f01df21a7.1698488264.git.baolin.wang@linux.alibaba.com
      
      
      Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
      Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
      Reviewed-by: Yang Shi <shy828301@gmail.com>
      Reviewed-by: Alistair Popple <apopple@nvidia.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  4. Oct 04, 2023
    • mm: migrate: convert migrate_misplaced_page() to migrate_misplaced_folio() · 73eab3ca
      Kefeng Wang authored
      At present, NUMA balancing only supports base pages and PMD-mapped THP,
      but we will expand it to migrate large folios / pte-mapped THP in the
      future.  It is better to make migrate_misplaced_page() take a folio
      instead of a page and rename it to migrate_misplaced_folio().  This is a
      preparation step, and it also removes several compound_head() calls.
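
      The call-site change is essentially the following (sketch; variable names
      are illustrative):

          /* before: the helper had to re-derive the folio via compound_head() */
          migrated = migrate_misplaced_page(page, vma, target_nid);

          /* after: callers hand over the folio they already hold */
          migrated = migrate_misplaced_folio(folio, vma, target_nid);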
      
      Link: https://lkml.kernel.org/r/20230913095131.2426871-5-wangkefeng.wang@huawei.com
      
      
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Reviewed-by: Zi Yan <ziy@nvidia.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: "Huang, Ying" <ying.huang@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    • mm: thp: dynamically allocate the thp-related shrinkers · 54d91729
      Qi Zheng authored
      Use new APIs to dynamically allocate the thp-zero and thp-deferred_split
      shrinkers.
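
      The allocation pattern introduced earlier in this series looks roughly
      like this (sketch for the thp-zero shrinker; the callback names are
      assumed to match those already used in huge_memory.c):

          struct shrinker *s;

          s = shrinker_alloc(0, "thp-zero");
          if (!s)
                  return -ENOMEM;

          s->count_objects = shrink_huge_zero_page_count;
          s->scan_objects  = shrink_huge_zero_page_scan;
          s->seeks         = DEFAULT_SEEKS;

          /* make the shrinker visible to reclaim only once fully set up */
          shrinker_register(s);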
      
      Link: https://lkml.kernel.org/r/20230911094444.68966-18-zhengqi.arch@bytedance.com
      
      
      Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
      Cc: Abhinav Kumar <quic_abhinavk@quicinc.com>
      Cc: Alasdair Kergon <agk@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
      Cc: Andreas Dilger <adilger.kernel@dilger.ca>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Anna Schumaker <anna@kernel.org>
      Cc: Arnd Bergmann <arnd@arndb.de>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Carlos Llamas <cmllamas@google.com>
      Cc: Chandan Babu R <chandan.babu@oracle.com>
      Cc: Chao Yu <chao@kernel.org>
      Cc: Chris Mason <clm@fb.com>
      Cc: Christian Brauner <brauner@kernel.org>
      Cc: Christian Koenig <christian.koenig@amd.com>
      Cc: Chuck Lever <cel@kernel.org>
      Cc: Coly Li <colyli@suse.de>
      Cc: Dai Ngo <Dai.Ngo@oracle.com>
      Cc: Daniel Vetter <daniel@ffwll.ch>
      Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
      Cc: "Darrick J. Wong" <djwong@kernel.org>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: David Airlie <airlied@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: David Sterba <dsterba@suse.com>
      Cc: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
      Cc: Gao Xiang <hsiangkao@linux.alibaba.com>
      Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Huang Rui <ray.huang@amd.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jaegeuk Kim <jaegeuk@kernel.org>
      Cc: Jani Nikula <jani.nikula@linux.intel.com>
      Cc: Jan Kara <jack@suse.cz>
      Cc: Jason Wang <jasowang@redhat.com>
      Cc: Jeff Layton <jlayton@kernel.org>
      Cc: Jeffle Xu <jefflexu@linux.alibaba.com>
      Cc: Joel Fernandes (Google) <joel@joelfernandes.org>
      Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
      Cc: Josef Bacik <josef@toxicpanda.com>
      Cc: Juergen Gross <jgross@suse.com>
      Cc: Kent Overstreet <kent.overstreet@gmail.com>
      Cc: Kirill Tkhai <tkhai@ya.ru>
      Cc: Marijn Suijten <marijn.suijten@somainline.org>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Mike Snitzer <snitzer@kernel.org>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Muchun Song <muchun.song@linux.dev>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Nadav Amit <namit@vmware.com>
      Cc: Neil Brown <neilb@suse.de>
      Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
      Cc: Olga Kornievskaia <kolga@netapp.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Rob Clark <robdclark@gmail.com>
      Cc: Rob Herring <robh@kernel.org>
      Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
      Cc: Roman Gushchin <roman.gushchin@linux.dev>
      Cc: Sean Paul <sean@poorly.run>
      Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
      Cc: Song Liu <song@kernel.org>
      Cc: Stefano Stabellini <sstabellini@kernel.org>
      Cc: Steven Price <steven.price@arm.com>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
      Cc: Tom Talpey <tom@talpey.com>
      Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
      Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
      Cc: Yue Hu <huyue2@coolpad.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>