1. 30 Apr 2022 (1 commit)
  2. 29 Apr 2022 (3 commits)
  3. 22 Apr 2022 (1 commit)
  4. 16 Apr 2022 (1 commit)
  5. 23 Mar 2022 (6 commits)
  6. 22 Mar 2022 (2 commits)
  7. 27 Feb 2022 (2 commits)
  8. 18 Feb 2022 (1 commit)
    • mm/munlock: rmap call mlock_vma_page() munlock_vma_page() · cea86fe2
      Committed by Hugh Dickins
      Add vma argument to mlock_vma_page() and munlock_vma_page(), make them
      inline functions which check (vma->vm_flags & VM_LOCKED) before calling
      mlock_page() and munlock_page() in mm/mlock.c.
      
      Add bool compound to mlock_vma_page() and munlock_vma_page(): this is
      because we have understandable difficulty in accounting pte maps of THPs,
      and if passed a PageHead page, mlock_page() and munlock_page() cannot
      tell whether it's a pmd map to be counted or a pte map to be ignored.
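
      As a minimal sketch (assuming a simplified form of the checks; the exact
      code in mm/internal.h may differ), the wrapper described above could look
      like this, with munlock_vma_page() as its symmetric counterpart calling
      munlock_page():

      	static inline void mlock_vma_page(struct page *page,
      			struct vm_area_struct *vma, bool compound)
      	{
      		/* Only pages mapped into a VM_LOCKED vma are of interest. */
      		if (unlikely(vma->vm_flags & VM_LOCKED) &&
      		    /* Count pmd maps of THPs, ignore their pte maps. */
      		    (compound || !PageTransCompound(page)))
      			mlock_page(page);
      	}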
      
      Add vma arg to page_add_file_rmap() and page_remove_rmap(), like the
      others, and use that to call mlock_vma_page() at the end of the page
      adds, and munlock_vma_page() at the end of page_remove_rmap() (end or
      beginning? unimportant, but end was easier for assertions in testing).
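
      For illustration only (heavily abridged, not a literal excerpt), the
      call-site pattern this describes is roughly:

      	void page_remove_rmap(struct page *page,
      			struct vm_area_struct *vma, bool compound)
      	{
      		/* ... mapcount and statistics handling elided ... */

      		/* Done last, so the check sees the mapping fully removed. */
      		munlock_vma_page(page, vma, compound);
      	}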
      
      No page lock is required (although almost all adds happen to hold it):
      delete the "Serialize with page migration" BUG_ON(!PageLocked(page))s.
      Certainly page lock did serialize with page migration, but I'm having
      difficulty explaining why that was ever important.
      
      Mlock accounting on THPs has been hard to define, differed between anon
      and file, involved PageDoubleMap in some places and not others, required
      clear_page_mlock() at some points.  Keep it simple now: just count the
      pmds and ignore the ptes, there is no reason for ptes to undo pmd mlocks.
      
      page_add_new_anon_rmap() callers unchanged: they have long been calling
      lru_cache_add_inactive_or_unevictable(), which does its own VM_LOCKED
      handling (it also checks for not VM_SPECIAL: I think that's overcautious,
      and inconsistent with other checks, given that mmap_region() already
      prevents VM_LOCKED on VM_SPECIAL; but I haven't quite convinced myself to
      change it).
      
      Signed-off-by: Hugh Dickins <hughd@google.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
  9. 15 Jan 2022 (1 commit)
    • mm: change page type prior to adding page table entry · 1eba86c0
      Committed by Pasha Tatashin
      Patch series "page table check", v3.
      
      Prevent some memory corruptions by checking, at the time entries are
      inserted into user page tables, that no illegal sharing takes place.
      
      We recently found a problem [1] that has existed in the kernel since
      4.14.  It was caused by a broken page refcount and led to memory leaking
      from one process into another.  The problem was detected by accident
      while studying a dump of one process and noticing that a page contained
      memory that should not belong to that process.
      
      Some other page->_refcount related problems were recently fixed as well
      ([2], [3]); those could potentially also lead to illegal sharing.
      
      In addition to hardening refcount [4] itself, this work is an attempt to
      prevent this class of memory corruption issues.
      
      It uses a simple state machine, independent of the regular MM logic, to
      check for illegal sharing at the time pages are inserted into and
      removed from page tables.
      
      [1] https://lore.kernel.org/all/xr9335nxwc5y.fsf@gthelen2.svl.corp.google.com
      [2] https://lore.kernel.org/all/1582661774-30925-2-git-send-email-akaher@vmware.com
      [3] https://lore.kernel.org/all/20210622021423.154662-3-mike.kravetz@oracle.com
      [4] https://lore.kernel.org/all/20211221150140.988298-1-pasha.tatashin@soleen.com
      
      This patch (of 4):
      
      There are a few places where we first update the entry in the user page
      table, and only later change the struct page to indicate that this is an
      anonymous or file page.
      
      In most places, however, we first configure the page metadata and then
      insert entries into the page table.  Page table check will use the
      information from struct page to verify the type of entry being inserted.
      
      Change the order in all places to first update struct page, and only
      then update the page table.
      
      This means that we first do calls that may change the type of page (anon
      or file):
      
      	page_move_anon_rmap
      	page_add_anon_rmap
      	do_page_add_anon_rmap
      	page_add_new_anon_rmap
      	page_add_file_rmap
      	hugepage_add_anon_rmap
      	hugepage_add_new_anon_rmap
      
      And after that do calls that add entries to the page table (a sketch of
      the resulting order follows this list):
      
      	set_huge_pte_at
      	set_pte_at
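
      As a rough illustration (a simplified anonymous-fault path, not a
      literal excerpt from the patch), the resulting order is:

      	/* First mark the struct page as an anon page for this vma... */
      	page_add_new_anon_rmap(page, vma, addr, false);
      	lru_cache_add_inactive_or_unevictable(page, vma);
      	/* ...and only then make the pte visible in the page table. */
      	set_pte_at(vma->vm_mm, addr, ptep, entry);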
      
      Link: https://lkml.kernel.org/r/20211221154650.1047963-1-pasha.tatashin@soleen.com
      Link: https://lkml.kernel.org/r/20211221154650.1047963-2-pasha.tatashin@soleen.com
      Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Paul Turner <pjt@google.com>
      Cc: Wei Xu <weixugc@google.com>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jonathan Corbet <corbet@lwn.net>
      Cc: Will Deacon <will@kernel.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Masahiro Yamada <masahiroy@kernel.org>
      Cc: Sami Tolvanen <samitolvanen@google.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
      Cc: Jiri Slaby <jirislaby@kernel.org>
      Cc: Muchun Song <songmuchun@bytedance.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 11 Dec 2021 (1 commit)
  11. 23 Nov 2021 (2 commits)
    • hugetlbfs: flush before unlock on move_hugetlb_page_tables() · 13e4ad2c
      Committed by Nadav Amit
      The TLB must be flushed before releasing i_mmap_rwsem, to avoid the
      potential reuse of an unshared PMDs page.  move_hugetlb_page_tables()
      does not do this, so the last reference on the page table can be dropped
      before the TLB flush takes place.
      
      Prevent it by reordering the operations and flushing the TLB before
      releasing i_mmap_rwsem.
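
      In outline (variable names are illustrative and the exact flush range in
      the real code differs), the fixed tail of move_hugetlb_page_tables()
      does something like:

      	/* Flush the moved range while i_mmap_rwsem is still held... */
      	flush_tlb_range(vma, old_addr, old_end);
      	/*
      	 * ...and only then drop the lock that keeps the last reference
      	 * on the unshared PMD page from being released.
      	 */
      	i_mmap_unlock_write(mapping);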
      
      Fixes: 550a7d60 ("mm, hugepages: add mremap() support for hugepage backed vma")
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mina Almasry <almasrymina@google.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • hugetlbfs: flush TLBs correctly after huge_pmd_unshare · a4a118f2
      Committed by Nadav Amit
      When calls to huge_pmd_unshare() from __unmap_hugepage_range() succeed,
      a TLB flush is missing.  This TLB flush must be performed before
      releasing the i_mmap_rwsem, in order to prevent an unshared PMDs page
      from being released and reused before the TLB flush has taken place.
      
      Arguably, a comprehensive solution would use the mmu_gather interface to
      batch the TLB flushes and the release of the PMDs page; however, that is
      not an easy solution: (1) try_to_unmap_one() and try_to_migrate_one()
      also call huge_pmd_unshare() and cannot use the mmu_gather interface;
      and (2) deferring the release of the page reference for the PMDs page
      until after i_mmap_rwsem is dropped can confuse huge_pmd_unshare() into
      thinking PMDs are shared when they are not.
      
      Fix __unmap_hugepage_range() by adding the missing TLB flush, and
      forcing a flush when unshare is successful.
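
      Sketched against the mmu_gather-based unmap loop (illustrative only;
      this assumes the tlb_flush_pmd_range() helper and a force_flush flag
      used by kernels of that era, and is not the exact diff):

      	if (huge_pmd_unshare(mm, vma, &address, ptep)) {
      		spin_unlock(ptl);
      		/*
      		 * The unshare dropped a reference on the PMD page, so make
      		 * sure the whole PUD-sized range it mapped gets flushed
      		 * before i_mmap_rwsem can be released.
      		 */
      		tlb_flush_pmd_range(tlb, address & PUD_MASK, PUD_SIZE);
      		force_flush = true;
      		continue;
      	}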
      
      Fixes: 24669e58 ("hugetlb: use mmu_gather instead of a temporary linked list for accumulating pages") # 3.6
      Signed-off-by: Nadav Amit <namit@vmware.com>
      Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 21 Nov 2021 (2 commits)
  13. 07 Nov 2021 (12 commits)
  14. 18 Oct 2021 (1 commit)
  15. 04 Sep 2021 (4 commits)