    mm: migration: fix migration of huge PMD shared pages · 017b1660
    Mike Kravetz authored
    The page migration code employs try_to_unmap() to try and unmap the source
    page.  This is accomplished by using rmap_walk to find all vmas where the
    page is mapped.  This search stops when page mapcount is zero.  For shared
    PMD huge pages, the page map count is always 1 no matter the number of
    mappings.  Shared mappings are tracked via the reference count of the PMD
    page.  Therefore, try_to_unmap stops prematurely and does not completely
    unmap all mappings of the source page.
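
    For reference, the control structure that drives this walk in mm/rmap.c
    looks roughly like the sketch below (approximate, reconstructed from the
    code of this era rather than quoted verbatim); the .done callback is what
    terminates the walk on a zero map count:

        bool try_to_unmap(struct page *page, enum ttu_flags flags)
        {
                struct rmap_walk_control rwc = {
                        .rmap_one  = try_to_unmap_one,
                        .arg       = (void *)flags,
                        /*
                         * The walk is considered "done" once the page's map
                         * count drops to zero.  Mappings through a shared
                         * PMD do not bump the map count, so the walk can
                         * stop while such mappings still exist.
                         */
                        .done      = page_mapcount_is_zero,
                        .anon_lock = page_lock_anon_vma_read,
                };

                rmap_walk(page, &rwc);
                return !page_mapcount(page);
        }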
    
    This problem can result in data corruption as writes to the original
    source page can happen after contents of the page are copied to the target
    page.  Hence, data is lost.
    
    This problem was originally seen as DB corruption of shared global areas
    after a huge page was soft offlined due to ECC memory errors.  DB
    developers noticed they could reproduce the issue by (hotplug) offlining
    memory used to back huge pages.  A simple testcase can reproduce the
    problem by creating a shared PMD mapping (note that this must be at least
    PUD_SIZE in size and PUD_SIZE aligned (1GB on x86)), and using
    migrate_pages() to migrate process pages between nodes while continually
    writing to the huge pages being migrated.
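
    A minimal reproducer along those lines might look like the sketch below.
    This is an illustration, not the original test program: the hugetlbfs
    path, the mmap hint address, the iteration count, and the node numbers
    are assumptions, a 2MB hugetlb page size on x86_64 is assumed, and
    migrate_pages() comes from libnuma's <numaif.h> (link with -lnuma).

        /* Sketch: two processes share a PUD_SIZE hugetlbfs mapping; one keeps
         * writing while the other migrates its pages between NUMA nodes. */
        #define _GNU_SOURCE
        #include <stdio.h>
        #include <string.h>
        #include <signal.h>
        #include <unistd.h>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <numaif.h>                     /* migrate_pages() */

        #define PUD_SIZE (1UL << 30)            /* 1GB on x86_64 */

        int main(void)
        {
                /* assumes hugetlbfs is mounted here with >= 512 2MB pages free */
                int fd = open("/dev/hugepages/pmd-share-test", O_CREAT | O_RDWR, 0600);
                if (fd < 0 || ftruncate(fd, PUD_SIZE)) {
                        perror("hugetlbfs file");
                        return 1;
                }

                /* a shared mapping covering a full, PUD_SIZE aligned range is
                 * what allows the kernel to share PMD page tables for it; the
                 * hint address is PUD aligned and the result is checked */
                char *addr = mmap((void *)(4UL << 40), PUD_SIZE,
                                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (addr == MAP_FAILED || ((unsigned long)addr & (PUD_SIZE - 1))) {
                        fprintf(stderr, "no PUD_SIZE aligned mapping\n");
                        return 1;
                }

                pid_t child = fork();
                if (child < 0) {
                        perror("fork");
                        return 1;
                }
                if (child == 0) {
                        /* writer: dirty the huge pages continuously */
                        for (;;)
                                memset(addr, 0xaa, PUD_SIZE);
                }

                /* migrator: bounce this process's pages between nodes 0 and 1;
                 * moving pages mapped by more than one process needs
                 * CAP_SYS_NICE, so run as root */
                unsigned long from = 1UL << 0, to = 1UL << 1, tmp;
                for (int i = 0; i < 100; i++) {
                        if (migrate_pages(getpid(), 64, &from, &to) < 0)
                                perror("migrate_pages");
                        tmp = from; from = to; to = tmp;
                }

                kill(child, SIGKILL);
                wait(NULL);
                return 0;
        }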
    
    To fix, have the try_to_unmap_one routine check for huge PMD sharing by
    calling huge_pmd_unshare for hugetlbfs huge pages.  If it is a shared
    mapping, it will be 'unshared', which removes the page table entry and drops
    the reference on the PMD page.  After this, flush caches and TLB.
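
    In try_to_unmap_one() the check has roughly the following shape (a
    condensed sketch of the idea, not the verbatim diff; variable names
    follow the surrounding rmap walk code):

        /* inside the page_vma_mapped_walk() loop of try_to_unmap_one() */
        if (PageHuge(page)) {
                if (huge_pmd_unshare(mm, &address, pvmw.pte)) {
                        /*
                         * huge_pmd_unshare() cleared the PUD entry and dropped
                         * the reference on the shared PMD page table page, so
                         * an entire PMD's worth of mappings is gone.  Flush
                         * caches and the TLB over [start, end), the range the
                         * shared PMD could have covered (widened below).
                         */
                        flush_cache_range(vma, start, end);
                        flush_tlb_range(vma, start, end);
                        /* nothing else to unmap through this vma */
                        page_vma_mapped_walk_done(&pvmw);
                        break;
                }
        }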
    
    mmu notifiers are called before locking page tables, but we can not be
    sure of PMD sharing until page tables are locked.  Therefore, check for
    the possibility of PMD sharing before locking so that notifiers can
    prepare for the worst possible case.
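
    That "prepare for the worst" step looks roughly like this (again a
    sketch; adjust_range_if_pmd_sharing_possible() is the helper this patch
    introduces):

        /* in try_to_unmap_one(), before the page table lock is taken */
        end = min(vma->vm_end, start + (PAGE_SIZE << compound_order(page)));
        if (PageHuge(page)) {
                /*
                 * Sharing can only be confirmed under the page table lock,
                 * so if it is possible at all, widen [start, end) to the
                 * PUD_SIZE range a shared PMD could cover.
                 */
                adjust_range_if_pmd_sharing_possible(vma, &start, &end);
        }
        mmu_notifier_invalidate_range_start(mm, start, end);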
    
    Link: http://lkml.kernel.org/r/20180823205917.16297-2-mike.kravetz@oracle.com
    [mike.kravetz@oracle.com: make _range_in_vma() a static inline]
      Link: http://lkml.kernel.org/r/6063f215-a5c8-2f0c-465a-2c515ddc952d@oracle.com
    Fixes: 39dde65c ("shared page table for hugetlb page")
    Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
    Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
    Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Cc: Vlastimil Babka <vbabka@suse.cz>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Jerome Glisse <jglisse@redhat.com>
    Cc: Mike Kravetz <mike.kravetz@oracle.com>
    Cc: <stable@vger.kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>