1. 30 Jul 2020, 1 commit
  2. 15 Jun 2020, 1 commit
  3. 10 Jun 2020, 2 commits
  4. 10 Apr 2020, 1 commit
  5. 03 Apr 2020, 1 commit
      mm: allow VM_FAULT_RETRY for multiple times · 4064b982
      Authored by Peter Xu
      The idea comes from a discussion between Linus and Andrea [1].
      
      Before this patch we only allowed a page fault to retry once.  We
      achieved this by clearing the FAULT_FLAG_ALLOW_RETRY flag when calling
      handle_mm_fault() the second time.  This was mainly meant to avoid
      unexpected starvation of the system by looping forever on the page
      fault for a single page.  However, that should hardly happen: every
      code path that returns VM_FAULT_RETRY first waits for a condition
      (during which it should ideally yield the cpu) before VM_FAULT_RETRY
      is actually returned.
      
      This patch removes the restriction by keeping the
      FAULT_FLAG_ALLOW_RETRY flag set when we receive VM_FAULT_RETRY.  The
      page fault handler can now retry the page fault multiple times if
      necessary, without needing to generate another page fault event.
      Meanwhile we still keep the FAULT_FLAG_TRIED flag, so the page fault
      handler can still tell whether a page fault is the first attempt or not.
      
      Then we'll have these combinations of fault flags (considering only
      the ALLOW_RETRY and TRIED flags):
      
        - ALLOW_RETRY and !TRIED:  the page fault allows retries, and
                                   this is the first try

        - ALLOW_RETRY and TRIED:   the page fault allows retries, and
                                   this is not the first try

        - !ALLOW_RETRY and !TRIED: the page fault does not allow retries
                                   at all

        - !ALLOW_RETRY and TRIED:  forbidden; this combination should
                                   never be used
      
      Existing code has multiple places that take special care of the first
      condition above by checking (fault_flags & FAULT_FLAG_ALLOW_RETRY).
      Because even the second try will now have ALLOW_RETRY set, this patch
      introduces a simple helper that detects the first attempt of a page
      fault by checking both (fault_flags & FAULT_FLAG_ALLOW_RETRY) and
      !(fault_flags & FAULT_FLAG_TRIED), and uses that helper in all the
      existing special paths.  One example is __lock_page_or_retry(): we
      now drop the mmap_sem only on the first attempt of the page fault and
      keep it across follow-up retries, so the old locking behavior is
      retained.
      
      This is a nice enhancement for the current code [2] and at the same
      time supporting material for the future userfaultfd-writeprotect
      work, since in that work there will always be an explicit userfault
      writeprotect retry for protected pages; if that cannot resolve the
      page fault (e.g., when userfaultfd-writeprotect is used in
      conjunction with swapped pages) we'll possibly need a third retry of
      the page fault.  It might also benefit other potential users with
      similar requirements, such as userfault write-protection.
      
      GUP code is not touched yet and will be covered in a follow-up patch.
      
      Please read the thread below for more information.
      
      [1] https://lore.kernel.org/lkml/20171102193644.GB22686@redhat.com/
      [2] https://lore.kernel.org/lkml/20181230154648.GB9832@redhat.com/
      Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
      Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Tested-by: Brian Geffon <bgeffon@google.com>
      Cc: Bobby Powers <bobbypowers@gmail.com>
      Cc: David Hildenbrand <david@redhat.com>
      Cc: Denis Plotnikov <dplotnikov@virtuozzo.com>
      Cc: "Dr . David Alan Gilbert" <dgilbert@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Jerome Glisse <jglisse@redhat.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: "Kirill A . Shutemov" <kirill@shutemov.name>
      Cc: Martin Cracauer <cracauer@cons.org>
      Cc: Marty McFadden <mcfadden8@llnl.gov>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Maya Gokhale <gokhale2@llnl.gov>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Mike Kravetz <mike.kravetz@oracle.com>
      Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
      Cc: Pavel Emelyanov <xemul@openvz.org>
      Link: http://lkml.kernel.org/r/20200220160246.9790-1-peterx@redhat.com
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      4064b982
  6. 25 3月, 2020 1 次提交
      drm/ttm, drm/vmwgfx: Support huge TTM pagefaults · 314b6580
      Authored by Thomas Hellstrom (VMware)
      Support huge (PMD-size and PUD-size) page-table entries by providing a
      huge_fault() callback.
      We still support private mappings and write-notify by splitting the huge
      page-table entries on write-access.
      
      Note that for huge page faults to occur, the kernel either needs to
      be compiled with transparent huge pages always enabled, or compiled
      with transparent huge pages enabled via madvise, in which case the
      user-space app needs to call madvise() to enable transparent huge
      pages on a per-mapping basis.

      Furthermore, huge page faults will not succeed unless both the
      buffer objects and the user-space addresses are aligned on huge page
      size boundaries.
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Signed-off-by: Thomas Hellstrom (VMware) <thomas_os@shipmail.org>
      Reviewed-by: Roland Scheidegger <sroland@vmware.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      314b6580
  7. 16 1月, 2020 1 次提交
      mm, drm/ttm: Fix vm page protection handling · 5379e4dd
      Authored by Thomas Hellstrom
      TTM graphics buffer objects may move between IO and system memory
      transparently to user-space.  When that happens, all PTEs pointing to
      the old location are zapped before the move and then faulted in again
      if needed.  After such a move, the page protection caching-mode and
      encryption bits may change and differ from those of
      struct vm_area_struct::vm_page_prot.
      
      We were using an ugly hack to set the page protection correctly.
      Fix that by exporting and using vmf_insert_mixed_prot(), or by using
      vmf_insert_pfn_prot().
      Also get the default page protection from
      struct vm_area_struct::vm_page_prot rather than from
      vm_get_page_prot().  This way we catch modifications done by the vm
      system for drivers that want write-notification.
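A kernel-side sketch of the pattern this fix describes (not runnable outside the kernel; my_drv_fault() and my_pfn_for() are hypothetical driver helpers, and the pgprot_writecombine() adjustment stands in for whatever caching mode the buffer's current placement needs):

```c
/* Hypothetical driver fault handler following the commit's approach:
 * start from the vma's vm_page_prot -- not vm_get_page_prot() -- so
 * write-notify modifications made by the vm layer are preserved, then
 * adjust the caching bits for the buffer's current placement before
 * inserting the PTE with an explicit protection. */
static vm_fault_t my_drv_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	pgprot_t prot;

	/* Default protection from the vma itself */
	prot = vma->vm_page_prot;
	/* Illustrative: buffer currently lives in IO memory */
	prot = pgprot_writecombine(prot);

	return vmf_insert_pfn_prot(vma, vmf->address,
				   my_pfn_for(vmf), prot);
}
```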
      
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Ralph Campbell <rcampbell@nvidia.com>
      Cc: "Jérôme Glisse" <jglisse@redhat.com>
      Cc: "Christian König" <christian.koenig@amd.com>
      Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
      Reviewed-by: Christian König <christian.koenig@amd.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      5379e4dd
  8. 06 Dec 2019, 1 commit
  9. 08 Nov 2019, 1 commit
  10. 06 Nov 2019, 3 commits
  11. 04 Nov 2019, 2 commits
  12. 30 Oct 2019, 1 commit
  13. 25 Oct 2019, 1 commit
  14. 17 Oct 2019, 2 commits
  15. 14 Oct 2019, 1 commit
  16. 11 Sep 2019, 1 commit
  17. 13 Aug 2019, 1 commit
  18. 06 Aug 2019, 2 commits
  19. 16 Jul 2019, 1 commit
  20. 18 Jun 2019, 2 commits
  21. 20 Mar 2019, 2 commits
  22. 26 Jan 2019, 1 commit
  23. 28 Sep 2018, 1 commit
  24. 11 Jul 2018, 2 commits
  25. 20 Jun 2018, 1 commit
  26. 16 May 2018, 1 commit
  27. 27 Feb 2018, 2 commits
  28. 20 Feb 2018, 2 commits
  29. 30 Jan 2018, 1 commit