1. 23 Feb 2017 (5 commits)
  2. 11 Jan 2017 (3 commits)
  3. 26 Dec 2016 (1 commit)
    • mm: add PageWaiters indicating tasks are waiting for a page bit · 62906027
      Committed by Nicholas Piggin
      Add a new page flag, PageWaiters, to indicate the page waitqueue has
      tasks waiting. This can be tested rather than testing waitqueue_active
      which requires another cacheline load.
      
      This bit is always set when the page has tasks on page_waitqueue(page),
      and is set and cleared under the waitqueue lock. It may be set when
      there are no tasks on the waitqueue, which will cause a harmless extra
      wakeup check that clears the bit.
      
      The generic bit-waitqueue infrastructure is no longer used for pages.
      Instead, waitqueues are used directly with a custom key type. The
      generic code was not flexible enough to have PageWaiters manipulation
      under the waitqueue lock (which simplifies concurrency).
      
      This improves the performance of page lock intensive microbenchmarks by
      2-3%.
      
      Putting two bits in the same word opens the opportunity to remove the
      memory barrier between clearing the lock bit and testing the waiters
      bit, after some work on the arch primitives (e.g., ensuring memory
      operand widths match and cover both bits).
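
      As a hedged sketch (close to, but not necessarily identical to, the
      patched unlock_page(); wake_up_page_bit() is this patch's helper), the
      unlock fast path becomes a flags-word test instead of a waitqueue load:

        void unlock_page(struct page *page)
        {
        	page = compound_head(page);
        	VM_BUG_ON_PAGE(!PageLocked(page), page);
        	clear_bit_unlock(PG_locked, &page->flags);
        	/* The barrier the last paragraph hopes to remove later. */
        	smp_mb__after_atomic();
        	if (PageWaiters(page))	/* same word as PG_locked */
        		wake_up_page_bit(page, PG_locked);
        }
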
      Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Bob Peterson <rpeterso@redhat.com>
      Cc: Steven Whitehouse <swhiteho@redhat.com>
      Cc: Andrew Lutomirski <luto@kernel.org>
      Cc: Andreas Gruenbacher <agruenba@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  4. 15 Dec 2016 (10 commits)
  5. 23 Nov 2016 (1 commit)
    • ptrace: Don't allow accessing an undumpable mm · 84d77d3f
      Committed by Eric W. Biederman
      It is a reasonable expectation that if an executable file is not
      readable there will be no way for a user without special privileges to
      read the file.  This is enforced in ptrace_attach, but if ptrace is
      already attached before exec there is no enforcement for read-only
      executables.
      
      As the only way to read such an mm is through access_process_vm, spin
      a variant called ptrace_access_vm that will fail if the target process
      is not being ptraced by the current process, or if the current process
      did not have sufficient privileges to read the target process's mm when
      ptracing began.
      
      In the ptrace implementations replace access_process_vm by
      ptrace_access_vm.  There remain several ptrace sites that still use
      access_process_vm, as they are reading the target executable's
      instructions (for kernel consumption) or register stacks.  As such it
      does not appear necessary to add a permission check to those calls.
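
      A hedged sketch of what ptrace_access_vm enforces; the real kernel
      predicate also consults get_dumpable() and the tracer's capabilities,
      so treat this as an illustration rather than the exact code:

        int ptrace_access_vm(struct task_struct *tsk, unsigned long addr,
        		     void *buf, int len, unsigned int gup_flags)
        {
        	struct mm_struct *mm = get_task_mm(tsk);
        	int ret;

        	if (!mm)
        		return 0;
        	/* Only the active tracer may read the tracee's memory. */
        	if (!tsk->ptrace || current != tsk->parent) {
        		mmput(mm);
        		return 0;
        	}
        	ret = __access_remote_vm(tsk, mm, addr, buf, len, gup_flags);
        	mmput(mm);
        	return ret;
        }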
      
      This bug has always existed in Linux.
      
      Fixes: v1.0
      Cc: stable@vger.kernel.org
      Reported-by: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
  6. 25 Oct 2016 (1 commit)
    • mm: unexport __get_user_pages() · 0d731759
      Committed by Lorenzo Stoakes
      This patch unexports the low-level __get_user_pages() function.
      
      Recent refactoring of the get_user_pages* functions allows flags to be
      passed through get_user_pages(), which eliminates the need for access
      to this function from its one user, kvm.
      
      We can see that the two calls to get_user_pages() which replace
      __get_user_pages() in kvm_main.c are equivalent by examining their call
      stacks:
      
        get_user_page_nowait():
          get_user_pages(start, 1, flags, page, NULL)
          __get_user_pages_locked(current, current->mm, start, 1, page, NULL, NULL,
      			    false, flags | FOLL_TOUCH)
          __get_user_pages(current, current->mm, start, 1,
      		     flags | FOLL_TOUCH | FOLL_GET, page, NULL, NULL)
      
        check_user_page_hwpoison():
          get_user_pages(addr, 1, flags, NULL, NULL)
          __get_user_pages_locked(current, current->mm, addr, 1, NULL, NULL, NULL,
      			    false, flags | FOLL_TOUCH)
          __get_user_pages(current, current->mm, addr, 1, flags | FOLL_TOUCH, NULL,
      		     NULL, NULL)
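
      For illustration, a hedged sketch of a kvm caller after the change; the
      exact flag set is an assumption, the point being that FOLL_* flags now
      travel through the exported get_user_pages():

        static int get_user_page_nowait(unsigned long start, int write,
        				struct page **page)
        {
        	unsigned int flags = FOLL_TOUCH | FOLL_NOWAIT;

        	if (write)
        		flags |= FOLL_WRITE;

        	/* One page; flags pass straight through the exported API. */
        	return get_user_pages(start, 1, flags, page, NULL);
        }
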
      Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 20 Oct 2016 (1 commit)
  8. 19 Oct 2016 (9 commits)
  9. 08 Oct 2016 (8 commits)
    • linux/mm.h: canonicalize macro PAGE_ALIGNED() definition · 1061b0d2
      Committed by zijun_hu
      The macro PAGE_ALIGNED() is error-prone because it does not follow the
      convention of parenthesizing its parameter @addr within the macro body.
      For example, given

        unsigned long *ptr = kmalloc(...);
        PAGE_ALIGNED(ptr + 16);

      the first operand passed to IS_ALIGNED() should be
      (unsigned long)(ptr + 16), but the macro actually produces
      (unsigned long)ptr + 16.

      Fix this by simply canonicalizing the PAGE_ALIGNED() definition.
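
      Concretely, the fix is one pair of parentheses (the "before" form is
      quoted from the linux/mm.h of that era; verify against your tree):

        /* before: the cast binds to addr alone, so 'ptr + 16' expands to
         * (unsigned long)ptr + 16 -- plain integer arithmetic */
        #define PAGE_ALIGNED(addr)	IS_ALIGNED((unsigned long)addr, PAGE_SIZE)

        /* after: the whole argument is evaluated first, then cast */
        #define PAGE_ALIGNED(addr)	IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)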
      
      Link: http://lkml.kernel.org/r/57EA6AE7.7090807@zoho.com
      Signed-off-by: zijun_hu <zijun_hu@htc.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: consolidate warn_alloc_failed users · 7877cdcc
      Committed by Michal Hocko
      warn_alloc_failed is currently used from the page and vmalloc
      allocators.  This is a good reuse of the code except that vmalloc would
      appreciate a slightly different warning message.  This is already
      handled by the fmt parameter except that
      
        "%s: page allocation failure: order:%u, mode:%#x(%pGg)"
      
      is printed anyway.  This can be quite misleading, because the warning
      may stem from a vmalloc failure in which the page allocator is not the
      culprit at all.  Fix this by always using the fmt string and printing
      only the information that makes sense for the particular context (e.g.
      order makes very little sense for the vmalloc context).
      
      Rename the function so that no user is missed, and also because a
      later patch will reuse it for !failure cases.
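
      A minimal sketch of the consolidated helper, assuming the %pV and %pGg
      printk extensions; the real helper additionally ratelimits and dumps
      allocation state:

        void warn_alloc(gfp_t gfp_mask, const char *fmt, ...)
        {
        	struct va_format vaf;
        	va_list args;

        	va_start(args, fmt);
        	vaf.fmt = fmt;
        	vaf.va = &args;
        	/* Caller-supplied context first; gfp details appended once. */
        	pr_warn("%s: %pV, mode:%#x(%pGg)\n", current->comm, &vaf,
        		gfp_mask, &gfp_mask);
        	va_end(args);
        }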
      
      Link: http://lkml.kernel.org/r/20160929084407.7004-2-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vma_merge: fix vm_page_prot SMP race condition against rmap_walk · e86f15ee
      Committed by Andrea Arcangeli
      The rmap_walk can access vm_page_prot (and potentially vm_flags in the
      pte/pmd manipulations), so it is not safe to let the caller update
      vm_page_prot/vm_flags only after vma_merge has returned: vma_merge may
      have removed the "next" vma and extended the "current" vma over the
      next->vm_start,vm_end range while the "current" vma still carries its
      old vm_page_prot, and by then the rmap locks have been released.
      
      The vm_page_prot/vm_flags must be transferred from the "next" vma to the
      current vma while vma_merge still holds the rmap locks.
      
      The side effect of this race condition is pte corruption during
      migration: remove_migration_ptes, when run on an address of the "next"
      vma that got removed, used the vm_page_prot of the current vma.
      
        migrate                         mprotect
        ------------                    -------------
        migrating in "next" vma
                                        vma_merge() # removes "next" vma and
                                                    # extends "current" vma
                                                    # current vma is not with
                                                    # vm_page_prot updated
        remove_migration_ptes
        read vm_page_prot of current "vma"
        establish pte with wrong permissions
                                        vm_set_page_prot(vma) # too late!
                                        change_protection in the old vma
                                        range only, next range not updated
      
      This caused segmentation faults and potentially memory corruption in
      heavy mprotect loads with some light page migration caused by compaction
      in the background.
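
      The hazard in miniature, as a hedged sketch (simplified from the
      migration rmap walk, with locking and huge page handling omitted; the
      _sketch name is ours): the re-established pte takes its permissions
      from whatever vma now covers the address:

        static void remove_migration_pte_sketch(struct vm_area_struct *vma,
        					unsigned long addr, pte_t *ptep,
        					struct page *page)
        {
        	/* If "current" was just extended over the removed "next"
        	 * but still has the old vm_page_prot, this pte gets the
        	 * wrong permissions. */
        	pte_t pte = mk_pte(page, vma->vm_page_prot);

        	set_pte_at(vma->vm_mm, addr, ptep, pte);
        }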
      
      Hugh Dickins pointed out the comment about the odd case 8 in
      vma_merge, which confirms that case 8 is the only buggy case where the
      race can trigger; in all other vma_merge cases the above cannot happen.
      
      This fix removes the oddness factor from case 8 and converts it from:
      
            AAAA
        PPPPNNNNXXXX -> PPPPNNNNNNNN
      
      to:
      
            AAAA
        PPPPNNNNXXXX -> PPPPXXXXXXXX
      
      XXXX has the right vma properties for the whole merged vma returned by
      vma_adjust, so it solves the problem fully.  It has the added benefit
      that callers could stop updating vma properties once vma_merge
      succeeds; however, the callers are not updated by this patch (there
      are bits like VM_SOFTDIRTY that still need special care for the whole
      range, as the vma merging ignores them, but as long as they are not
      processed by rmap walks and are instead accessed with the mmap_sem
      held at least for reading, they are fine not to be updated within
      vma_adjust before releasing the rmap locks).
      
      Link: http://lkml.kernel.org/r/1474309513-20313-1-git-send-email-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Reported-by: Aditya Mandaleeka <adityam@microsoft.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Jan Vorlicek <janvorli@microsoft.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: vm_page_prot: update with WRITE_ONCE/READ_ONCE · 6d2329f8
      Committed by Andrea Arcangeli
      vma->vm_page_prot is read locklessly from the rmap_walk and may be
      updated concurrently; using WRITE_ONCE/READ_ONCE prevents the risk of
      reading intermediate values.
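
      A minimal sketch of the pattern, assuming the standard
      WRITE_ONCE/READ_ONCE macros from linux/compiler.h:

        /* Writer, with mmap_sem held for writing: publish the new
         * protection as a single store the compiler may not tear. */
        WRITE_ONCE(vma->vm_page_prot, vm_get_page_prot(vma->vm_flags));

        /* Lockless reader, e.g. an rmap walk: one load, so never an
         * intermediate value. */
        pgprot_t prot = READ_ONCE(vma->vm_page_prot);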
      
      Link: http://lkml.kernel.org/r/1474660305-19222-1-git-send-email-aarcange@redhat.com
      Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Jan Vorlicek <janvorli@microsoft.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: remove page_file_index · 8cd79788
      Committed by Huang Ying
      After using the offset of the swap entry as the key of the swap cache,
      page_index() becomes exactly the same as page_file_index().  So
      page_file_index() is removed and the callers are changed to use
      page_index() instead.
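
      For context, a hedged sketch of why the two helpers converge, modelled
      on the linux/mm.h helpers of that kernel:

        static inline pgoff_t page_index(struct page *page)
        {
        	/* With swp_offset() as the swap cache key, a swap cache
        	 * page's index is already its offset in the swap file --
        	 * exactly what page_file_index() used to compute. */
        	if (unlikely(PageSwapCache(page)))
        		return __page_file_index(page);
        	return page->index;
        }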
      
      Link: http://lkml.kernel.org/r/1473270649-27229-2-git-send-email-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Trond Myklebust <trond.myklebust@primarydata.com>
      Cc: Anna Schumaker <anna.schumaker@netapp.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Michal Hocko <mhocko@suse.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm, swap: use offset of swap entry as key of swap cache · f6ab1f7f
      Committed by Huang Ying
      This patch improves the performance of swap cache operations when
      the type of the swap device is not 0.  Originally, the whole swap entry
      value is used as the key of the swap cache, even though there is one
      radix tree for each swap device.  If the type of the swap device is not
      0, the height of the radix tree of the swap cache will be increased
      unnecessary, especially on 64bit architecture.  For example, for a 1GB
      swap device on the x86_64 architecture, the height of the radix tree of
      the swap cache is 11.  But if the offset of the swap entry is used as
      the key of the swap cache, the height of the radix tree of the swap
      cache is 4.  The increased height causes unnecessary radix tree
      descending and increased cache footprint.
      
      This patch reduces the height of the radix tree of the swap cache by
      using the offset of the swap entry instead of the whole swap entry value
      as the key of the swap cache.  In 32 processes sequential swap out test
      case on a Xeon E5 v3 system with RAM disk as swap, the lock contention
      for the spinlock of the swap cache is reduced from 20.15% to 12.19%,
      when the type of the swap device is 1.
      
      Use the whole swap entry as key,
      
        perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list: 10.37,
        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg: 9.78,
      
      Use the swap offset as key,
      
        perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.__add_to_swap_cache.add_to_swap_cache.add_to_swap.shrink_page_list: 6.25,
        perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg: 5.94,
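
      A hedged sketch of the keying change; the named helpers exist in this
      kernel, but the real insertion happens under the tree lock inside
      __add_to_swap_cache(), so treat the body as an illustration:

        static int add_to_swap_cache_sketch(struct page *page,
        				    swp_entry_t entry)
        {
        	/* One radix tree per swap device ... */
        	struct address_space *mapping = swap_address_space(entry);

        	/* ... keyed by the narrow offset rather than the full entry
        	 * value, so the tree stays shallow for any swap type. */
        	return radix_tree_insert(&mapping->page_tree,
        				 swp_offset(entry), page);
        }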
      
      Link: http://lkml.kernel.org/r/1473270649-27229-1-git-send-email-ying.huang@intel.com
      Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
      Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@techsingularity.net>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Aaron Lu <aaron.lu@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: introduce arch_reserved_kernel_pages() · f6f34b43
      Committed by Srikar Dronamraju
      Currently arch specific code can reserve memory blocks but
      alloc_large_system_hash() may not take it into consideration when sizing
      the hashes.  This can lead to bigger hash than required and lead to no
      available memory for other purposes.  This is specifically true for
      systems with CONFIG_DEFERRED_STRUCT_PAGE_INIT enabled.
      
      One approach to solving this problem would be to walk through the
      memblock regions, calculate the available memory, and base the sizing
      of the hashes on that.
      
      The other approach would be to depend on the architecture to provide the
      number of pages that are reserved.  This change provides hooks to allow
      the architecture to provide the required info.
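
      A minimal sketch of the hook, following the kernel's usual
      weak-default pattern; the discounting site in alloc_large_system_hash()
      is paraphrased:

        /* Architectures that reserve memblock regions override this. */
        unsigned long __weak arch_reserved_kernel_pages(void)
        {
        	return 0;
        }

        /* In alloc_large_system_hash(), discount those pages when sizing: */
        numentries -= arch_reserved_kernel_pages();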
      
      Link: http://lkml.kernel.org/r/1472476010-4709-2-git-send-email-srikar@linux.vnet.ibm.com
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Suggested-by: Mel Gorman <mgorman@techsingularity.net>
      Cc: Vlastimil Babka <vbabka@suse.cz>
      Cc: Michal Hocko <mhocko@kernel.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: pagewalk: fix the comment for test_walk · f7e2355f
      Committed by James Morse
      Modify the comment describing struct mm_walk->test_walk()'s behaviour to
      match the comment on walk_page_test() and the behaviour of
      walk_page_vma().
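
      For reference, a hedged paraphrase of the corrected kernel-doc (wording
      ours; semantics per walk_page_test() and walk_page_vma()):

        /*
         * @test_walk: caller-specific callback.  Return 0 to walk the page
         * tables of the current vma, a positive value to skip the vma (the
         * walk continues without reporting an error), or a negative error
         * code to abort the whole walk.
         */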
      
      Fixes: fafaa426 ("pagewalk: improve vma handling")
      Link: http://lkml.kernel.org/r/1471622518-21980-1-git-send-email-james.morse@arm.com
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 15 Sep 2016 (1 commit)