    [PATCH] ppc64: Fix pages marked dirty abusively · 5d965515
    Benjamin Herrenschmidt authored
    
    
    While working on 64K pages, I found this little buglet in our
    update_mmu_cache() implementation.
    
    The code calls __hash_page() passing it an "access" parameter (the type
    of access that triggers the hash) containing the bits _PAGE_RW and
    _PAGE_USER of the Linux PTE.  The latter is useless in this case and the
    former is wrong.  In fact, if we have a writable PTE and we pass
    _PAGE_RW to hash_page(), it will set _PAGE_DIRTY (since we track dirty
    that way, by hash-faulting on !dirty) which is not what we want.
    
    The correct fix is to always pass 0.  That means that only
    read-only or already-dirty read-write PTEs will be preloaded.  The
    (hopefully rare) case of a non-dirty read-write PTE can't be preloaded
    this way; it will have to fault in hash_page() on the actual access.
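
    The behaviour described above can be sketched as a small stand-alone
    model.  This is not the real ppc64 hash_page() code; the flag values and
    the model_hash_page() helper are illustrative assumptions, meant only to
    show why preloading with access = _PAGE_RW spuriously dirties the page
    while access = 0 does not:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Illustrative flag values only -- not the real ppc64 PTE layout. */
    #define _PAGE_USER  0x0002UL
    #define _PAGE_RW    0x0004UL
    #define _PAGE_DIRTY 0x0080UL

    /* Toy model of the dirty-tracking scheme: hash_page(), entered with a
     * write access on a writable PTE, marks the PTE dirty (dirty is tracked
     * by hash-faulting on !dirty pages). */
    static unsigned long model_hash_page(unsigned long pte, unsigned long access)
    {
            if ((access & _PAGE_RW) && (pte & _PAGE_RW))
                    pte |= _PAGE_DIRTY;
            return pte;
    }

    int main(void)
    {
            unsigned long pte = _PAGE_RW | _PAGE_USER; /* writable, not yet dirty */

            /* Buggy preload: passing _PAGE_RW | _PAGE_USER as the access
             * parameter abusively sets the dirty bit. */
            unsigned long buggy = model_hash_page(pte, _PAGE_RW | _PAGE_USER);
            assert(buggy & _PAGE_DIRTY);

            /* Fixed preload: passing 0 leaves the dirty bit alone; a real
             * write will hash-fault and set it later. */
            unsigned long fixed = model_hash_page(pte, 0);
            assert(!(fixed & _PAGE_DIRTY));

            printf("ok\n");
            return 0;
    }
    ```
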
    
    Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Signed-off-by: Linus Torvalds <torvalds@osdl.org>