1. 01 Aug, 2012 11 commits
  2. 30 May, 2012 3 commits
  3. 26 May, 2012 1 commit
  4. 11 May, 2012 1 commit
  5. 26 Apr, 2012 1 commit
  6. 13 Apr, 2012 1 commit
    • hugetlb: fix race condition in hugetlb_fault() · 66aebce7
      Committed by Chris Metcalf
      The race is as follows:
      
      Suppose a multi-threaded task forks a new process (on cpu A), thus
      bumping up the ref count on all the pages.  While the fork is occurring
      (and thus we have marked all the PTEs as read-only), another thread in
      the original process (on cpu B) tries to write to a huge page, taking an
      access violation from the write-protect and calling hugetlb_cow().  Now,
      suppose the fork() fails.  It will undo the COW and decrement the ref
      count on the pages, so the ref count on the huge page drops back to 1.
      Meanwhile hugetlb_cow() also decrements the ref count by one on the
      original page, since the original address space doesn't need it any
      more, having copied a new page to replace the original page.  This
      leaves the ref count at zero, and when we call unlock_page(), we panic.
      
      	fork on CPU A				fault on CPU B
      	=============				==============
      	...
      	down_write(&parent->mmap_sem);
      	down_write_nested(&child->mmap_sem);
      	...
      	while duplicating vmas
      		if error
      			break;
      	...
      	up_write(&child->mmap_sem);
      	up_write(&parent->mmap_sem);		...
      						down_read(&parent->mmap_sem);
      						...
      						lock_page(page);
      						handle COW
      						page_mapcount(old_page) == 2
      						alloc and prepare new_page
      	...
      	handle error
      	page_remove_rmap(page);
      	put_page(page);
      	...
      						fold new_page into pte
      						page_remove_rmap(page);
      						put_page(page);
      						...
      				oops ==>	unlock_page(page);
      						up_read(&parent->mmap_sem);
      
      The solution is to take an extra reference to the page while we are
      holding the lock on it.
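
      A minimal sketch of that idea in the shape of a hugetlb fault path follows;
      handle_huge_cow() is a hypothetical stand-in for the hugetlb_cow() work and
      the surrounding locking is simplified:

      /*
       * Hedged sketch: pin the huge page with an extra reference before
       * locking it, so a racing fork() error path that drops its own
       * references cannot take the refcount to zero while we hold the lock.
       */
      static int hugetlb_fault_cow_sketch(struct mm_struct *mm,
      				    struct vm_area_struct *vma,
      				    unsigned long address,
      				    pte_t *ptep, pte_t entry)
      {
      	struct page *page = pte_page(entry);
      	int ret;

      	get_page(page);		/* extra reference held across the page lock */
      	lock_page(page);

      	ret = handle_huge_cow(mm, vma, address, ptep, entry, page);

      	unlock_page(page);	/* safe: our extra reference keeps the page alive */
      	put_page(page);		/* drop the extra reference */
      	return ret;
      }
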
      Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  7. 24 Mar, 2012 1 commit
  8. 22 Mar, 2012 4 commits
    • hugepages: fix use after free bug in "quota" handling · 90481622
      Committed by David Gibson
      hugetlbfs_{get,put}_quota() are badly named.  They don't interact with the
      general quota handling code, and they don't much resemble its behaviour.
      Rather than being about maintaining limits on on-disk block usage by
      particular users, they are instead about maintaining limits on in-memory
      page usage (including anonymous MAP_PRIVATE copied-on-write pages)
      associated with a particular hugetlbfs filesystem instance.
      
      Worse, they work by having callbacks to the hugetlbfs filesystem code from
      the low-level page handling code, in particular from free_huge_page().
      This is a layering violation of itself, but more importantly, if the
      kernel does a get_user_pages() on hugepages (which can happen from KVM
      amongst others), then the free_huge_page() can be delayed until after the
      associated inode has already been freed.  If an unmount occurs at the
      wrong time, even the hugetlbfs superblock where the "quota" limits are
      stored may have been freed.
      
      Andrew Barry proposed a patch to fix this by having hugepages store
      pointers directly to the superblock, instead of storing a pointer to their
      address_space and reaching the superblock from there, bumping the
      reference count as appropriate to avoid the superblock being freed.
      Andrew Morton rejected that version, however, on the grounds that it made
      the existing layering violation worse.
      
      This is a reworked version of Andrew's patch, which removes the extra, and
      some of the existing, layering violation.  It works by introducing the
      concept of a hugepage "subpool" at the lower hugepage mm layer - that is a
      finite logical pool of hugepages to allocate from.  hugetlbfs now creates
      a subpool for each filesystem instance with a page limit set, and a
      pointer to the subpool gets added to each allocated hugepage, instead of
      the address_space pointer used now.  The subpool has its own lifetime and
      is only freed once all pages in it _and_ all other references to it (i.e.
      superblocks) are gone.
      
      subpools are optional - a NULL subpool pointer is taken by the code to
      mean that no subpool limits are in effect.
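
      As a rough illustration, the data structure this describes could look like
      the sketch below; the field names are assumptions drawn from the text, not
      necessarily the exact ones merged:

      /*
       * Sketch of a hugepage "subpool": a finite logical pool of huge pages
       * tied to one hugetlbfs instance, with its own reference count so it
       * can outlive the superblock that created it.
       */
      struct hugepage_subpool_sketch {
      	spinlock_t lock;
      	long count;		/* refs: the superblock plus pages pointing here */
      	long max_hpages;	/* page limit for this instance, -1 for no limit */
      	long used_hpages;	/* pages currently charged against max_hpages */
      };

      /* A NULL subpool pointer means no limits are in effect for the page. */
      static inline bool subpool_has_limit(struct hugepage_subpool_sketch *spool)
      {
      	return spool && spool->max_hpages != -1;
      }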
      
      Previous discussion of this bug can be found in "Fix refcounting in hugetlbfs
      quota handling.". See:  https://lkml.org/lkml/2011/8/11/28 or
      http://marc.info/?l=linux-mm&m=126928970510627&w=1
      
      v2: Fixed a bug spotted by Hillf Danton, and removed the extra parameter to
      alloc_huge_page() - since it already takes the vma, it is not necessary.
      Signed-off-by: Andrew Barry <abarry@cray.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Hillf Danton <dhillf@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • cpuset: mm: reduce large amounts of memory barrier related damage v3 · cc9a6c87
      Committed by Mel Gorman
      Commit c0ff7453 ("cpuset,mm: fix no node to alloc memory when
      changing cpuset's mems") wins a super prize for the largest number of
      memory barriers entered into fast paths for one commit.
      
      [get|put]_mems_allowed is incredibly heavy with pairs of full memory
      barriers inserted into a number of hot paths.  This was detected while
      investigating a large page allocator slowdown introduced some time
      after 2.6.32.  The largest portion of this overhead was shown by
      oprofile to be at an mfence introduced by this commit into the page
      allocator hot path.
      
      For extra style points, the commit introduced the use of yield() in an
      implementation of what looks like a spinning mutex.
      
      This patch replaces the full memory barriers on both read and write
      sides with a sequence counter with just read barriers on the fast path
      side.  This is much cheaper on some architectures, including x86.  The
      main bulk of the patch is the retry logic if the nodemask changes in a
      manner that can cause a false failure.
      
      While updating the nodemask, a check is made to see if a false failure
      is a risk.  If it is, the sequence number gets bumped and parallel
      allocators will briefly stall while the nodemask update takes place.
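
      The fast-path read side described above amounts to a pattern like the
      following sketch; the mems_allowed_seq field name is an assumption used
      here for illustration:

      /*
       * Sketch of the retry logic: sample the sequence count before building
       * the allocation nodemask, and retry only if the allocation failed and
       * a cpuset update raced with us.  The fast path pays read barriers
       * only; writers bump the sequence count around nodemask updates.
       */
      static struct page *alloc_pages_mems_allowed_sketch(gfp_t gfp,
      						     unsigned int order)
      {
      	struct page *page;
      	unsigned int seq;

      	do {
      		seq = read_seqcount_begin(&current->mems_allowed_seq);
      		page = __alloc_pages_nodemask(gfp, order,
      					      node_zonelist(numa_node_id(), gfp),
      					      &current->mems_allowed);
      	} while (!page && read_seqcount_retry(&current->mems_allowed_seq, seq));

      	return page;
      }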
      
      In a page fault test microbenchmark, oprofile samples from
      __alloc_pages_nodemask went from 4.53% of all samples to 1.15%.  The
      actual results were
      
                                   3.3.0-rc3          3.3.0-rc3
                                   rc3-vanilla        nobarrier-v2r1
          Clients   1 UserTime       0.07 (  0.00%)   0.08 (-14.19%)
          Clients   2 UserTime       0.07 (  0.00%)   0.07 (  2.72%)
          Clients   4 UserTime       0.08 (  0.00%)   0.07 (  3.29%)
          Clients   1 SysTime        0.70 (  0.00%)   0.65 (  6.65%)
          Clients   2 SysTime        0.85 (  0.00%)   0.82 (  3.65%)
          Clients   4 SysTime        1.41 (  0.00%)   1.41 (  0.32%)
          Clients   1 WallTime       0.77 (  0.00%)   0.74 (  4.19%)
          Clients   2 WallTime       0.47 (  0.00%)   0.45 (  3.73%)
          Clients   4 WallTime       0.38 (  0.00%)   0.37 (  1.58%)
          Clients   1 Flt/sec/cpu  497620.28 (  0.00%) 520294.53 (  4.56%)
          Clients   2 Flt/sec/cpu  414639.05 (  0.00%) 429882.01 (  3.68%)
          Clients   4 Flt/sec/cpu  257959.16 (  0.00%) 258761.48 (  0.31%)
          Clients   1 Flt/sec      495161.39 (  0.00%) 517292.87 (  4.47%)
          Clients   2 Flt/sec      820325.95 (  0.00%) 850289.77 (  3.65%)
          Clients   4 Flt/sec      1020068.93 (  0.00%) 1022674.06 (  0.26%)
          MMTests Statistics: duration
          Sys Time Running Test (seconds)             135.68    132.17
          User+Sys Time Running Test (seconds)         164.2    160.13
          Total Elapsed Time (seconds)                123.46    120.87
      
      The overall improvement is small but the System CPU time is much
      improved and roughly in correlation to what oprofile reported (these
      performance figures are without profiling so skew is expected).  The
      actual number of page faults is noticeably improved.
      
      For benchmarks like kernel builds, the overall benefit is marginal but
      the system CPU time is slightly reduced.
      
      To test the actual bug the commit fixed I opened two terminals.  The
      first ran within a cpuset and continually ran a small program that
      faulted 100M of anonymous data.  In a second window, the nodemask of the
      cpuset was continually randomised in a loop.
      
      Without the commit, the program would fail every so often (usually
      within 10 seconds) and obviously with the commit everything worked fine.
      With this patch applied, it also worked fine so the fix should be
      functionally equivalent.
      Signed-off-by: Mel Gorman <mgorman@suse.de>
      Cc: Miao Xie <miaox@cn.fujitsu.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <cl@linux.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: bail out unmapping after serving reference page · 9e81130b
      Committed by Hillf Danton
      When unmapping a given VM range, we could bail out if a reference page is
      supplied and is unmapped, which is a minor optimization.
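
      As a hedged sketch of where that bail-out sits, with unmap_one_huge_page()
      as a hypothetical helper standing in for the PTE-clearing work:

      /*
       * Illustrative shape of __unmap_hugepage_range() with the early exit:
       * once the supplied reference page has been unmapped there is nothing
       * left for this call to do, so stop walking the rest of the range.
       */
      static void unmap_hugepage_range_sketch(struct vm_area_struct *vma,
      					unsigned long start, unsigned long end,
      					struct page *ref_page)
      {
      	struct hstate *h = hstate_vma(vma);
      	unsigned long address;

      	for (address = start; address < end; address += huge_page_size(h)) {
      		struct page *page = unmap_one_huge_page(vma, address, ref_page);

      		if (page && ref_page)
      			break;	/* the reference page has been served */
      	}
      }
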
      Signed-off-by: Hillf Danton <dhillf@gmail.com>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • mm: hugetlb: defer freeing pages when gathering surplus pages · 28073b02
      Committed by Hillf Danton
      When gathering surplus pages, the number of needed pages is recomputed
      after reacquiring hugetlb lock to catch changes in resv_huge_pages and
      free_huge_pages.  Plus it is recomputed with the number of newly allocated
      pages involved.
      
      Thus freeing pages can be deferred a bit, to see whether the final page
      request is satisfied even when fewer pages than needed could be allocated.
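
      A hedged sketch of that flow follows; helper and field names loosely
      follow mm/hugetlb.c of that era, with enqueue_huge_page() standing in for
      putting a page on the hstate free list:

      /*
       * Hedged sketch of gather_surplus_pages(): allocate with hugetlb_lock
       * dropped, recompute the shortfall after retaking it, and only then
       * decide which gathered pages are kept and which are freed.
       */
      static int gather_surplus_pages_sketch(struct hstate *h, int delta)
      {
      	LIST_HEAD(surplus_list);
      	struct page *page, *tmp;
      	int i, needed, allocated = 0, to_keep;
      	bool alloc_ok = true;

      	/* entered with hugetlb_lock held */
      	needed = (h->resv_huge_pages + delta) - h->free_huge_pages;
      	if (needed <= 0)
      		return 0;

      retry:
      	spin_unlock(&hugetlb_lock);
      	for (i = 0; i < needed; i++) {
      		page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
      		if (!page) {
      			alloc_ok = false;	/* got fewer pages than asked for */
      			break;
      		}
      		list_add(&page->lru, &surplus_list);
      	}
      	allocated += i;

      	/*
      	 * Recompute after retaking the lock: resv_huge_pages and
      	 * free_huge_pages may have changed meanwhile, and the pages just
      	 * allocated now count towards the request.
      	 */
      	spin_lock(&hugetlb_lock);
      	needed = (h->resv_huge_pages + delta) -
      		 (h->free_huge_pages + allocated);
      	if (needed > 0 && alloc_ok)
      		goto retry;

      	/*
      	 * Freeing was deferred to this point: the final recount decides how
      	 * many gathered pages are kept.  The rest are released with the lock
      	 * dropped, since put_page() reaches free_huge_page(), which takes
      	 * hugetlb_lock itself.
      	 */
      	to_keep = (needed <= 0) ? allocated + needed : 0;
      	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
      		if (to_keep-- <= 0)
      			break;
      		list_del(&page->lru);
      		enqueue_huge_page(h, page);
      	}
      	spin_unlock(&hugetlb_lock);
      	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
      		list_del(&page->lru);
      		put_page(page);
      	}
      	spin_lock(&hugetlb_lock);
      	return needed > 0 ? -ENOMEM : 0;
      }
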
      Signed-off-by: Hillf Danton <dhillf@gmail.com>
      Reviewed-by: Michal Hocko <mhocko@suse.cz>
      Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Hugh Dickins <hughd@google.com>
      Cc: Mel Gorman <mel@csn.ul.ie>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 06 Mar, 2012 1 commit
    • flush_tlb_range() needs ->page_table_lock when ->mmap_sem is not held · cd2934a3
      Committed by Al Viro
      All other callers already hold either ->mmap_sem (exclusive) or
      ->page_table_lock.  And we need it because some page table flushing
      instances do work explicitly with page tables.
      
      See e.g.  arch/powerpc/mm/tlb_hash32.c, flush_tlb_range() and
      flush_range() in there.  The same goes for uml, with a lot more
      extensive playing with page tables.
      
      Almost all callers are actually fine - flush_tlb_range() may have no
      need to bother playing with page tables, but it can do so safely; again,
      this caller is the sole exception - everything else either has exclusive
      ->mmap_sem on the mm in question, or mm->page_table_lock is held.
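
      A hedged sketch of the call shape this implies for the hugetlb unmap path
      (the PTE-clearing work is elided to a comment):

      /*
       * Keep mm->page_table_lock held across flush_tlb_range() here: some
       * implementations (e.g. powerpc hash32, uml) read or modify page
       * tables and rely on exclusive ->mmap_sem or this lock being held.
       */
      static void unmap_and_flush_sketch(struct mm_struct *mm,
      				   struct vm_area_struct *vma,
      				   unsigned long start, unsigned long end)
      {
      	spin_lock(&mm->page_table_lock);

      	/* ... walk and clear the huge PTEs for [start, end) ... */

      	flush_tlb_range(vma, start, end);	/* still under page_table_lock */
      	spin_unlock(&mm->page_table_lock);
      }
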
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  10. 24 Jan, 2012 1 commit
  11. 11 Jan, 2012 5 commits
  12. 30 Dec, 2011 1 commit
  13. 22 Dec, 2011 1 commit
  14. 09 Dec, 2011 1 commit
    • thp: set compound tail page _count to zero · 58a84aa9
      Committed by Youquan Song
      Commit 70b50f94 ("mm: thp: tail page refcounting fix") keeps all
      page_tail->_count zero at all times.  But the current kernel does not
      set page_tail->_count to zero if a 1GB page is utilized.  So when an
      IOMMU 1GB page is used by KVM, it will result in a kernel oops because a
      tail page's _count does not equal zero.
      
        kernel BUG at include/linux/mm.h:386!
        invalid opcode: 0000 [#1] SMP
        Call Trace:
          gup_pud_range+0xb8/0x19d
          get_user_pages_fast+0xcb/0x192
          ? trace_hardirqs_off+0xd/0xf
          hva_to_pfn+0x119/0x2f2
          gfn_to_pfn_memslot+0x2c/0x2e
          kvm_iommu_map_pages+0xfd/0x1c1
          kvm_iommu_map_memslots+0x7c/0xbd
          kvm_iommu_map_guest+0xaa/0xbf
          kvm_vm_ioctl_assigned_device+0x2ef/0xa47
          kvm_vm_ioctl+0x36c/0x3a2
          do_vfs_ioctl+0x49e/0x4e4
          sys_ioctl+0x5a/0x7c
          system_call_fastpath+0x16/0x1b
        RIP  gup_huge_pud+0xf2/0x159
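
      The kind of one-line change this points at, as a hedged sketch of gigantic
      compound-page setup (the loop shape is assumed from the description; the
      real code walks the mem_map more carefully):

      /*
       * Sketch: when assembling a gigantic (e.g. 1GB) compound page, force
       * every tail page's _count to zero so the tail-page refcounting
       * invariants that gup relies on hold for these pages too.
       */
      static void prep_compound_gigantic_page_sketch(struct page *page,
      					       unsigned int order)
      {
      	int i, nr_pages = 1 << order;
      	struct page *p = page + 1;

      	set_compound_order(page, order);
      	__SetPageHead(page);
      	for (i = 1; i < nr_pages; i++, p++) {
      		__SetPageTail(p);
      		set_page_count(p, 0);	/* the missing step described above */
      		p->first_page = page;
      	}
      }
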
      Signed-off-by: Youquan Song <youquan.song@intel.com>
      Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  15. 16 Nov, 2011 1 commit
  16. 26 Jul, 2011 2 commits
  17. 16 Jun, 2011 1 commit
    • mm: fix negative commitlimit when gigantic hugepages are allocated · b0320c7b
      Committed by Rafael Aquini
      When 1GB hugepages are allocated on a system, free(1) reports less
      available memory than what really is installed in the box.  Also, if the
      total size of hugepages allocated on a system is over half of the total
      memory size, CommitLimit becomes a negative number.
      
      The problem is that gigantic hugepages (order > MAX_ORDER) can only be
      allocated at boot with bootmem, thus their frames are not accounted to
      'totalram_pages'.  However, they are accounted to hugetlb_total_pages().
      
      What happens to turn CommitLimit into a negative number is this
      calculation, in fs/proc/meminfo.c:
      
              allowed = ((totalram_pages - hugetlb_total_pages())
                      * sysctl_overcommit_ratio / 100) + total_swap_pages;
      
      A similar calculation occurs in __vm_enough_memory() in mm/mmap.c.
      
      Also, every vm statistic which depends on 'totalram_pages' will render
      confusing values, as if the system were 'missing' some part of its memory.
      
      Impact of this bug:
      
      When gigantic hugepages are allocated and sysctl_overcommit_memory ==
      OVERCOMMIT_NEVER, __vm_enough_memory() goes through the mentioned
      'allowed' calculation and might end up mistakenly returning -ENOMEM, thus
      forcing the system to start reclaiming pages earlier than usual, which
      could have a detrimental impact on overall system performance, depending
      on the workload.
      
      Besides the aforementioned scenario, I can only think of this causing
      annoyances with memory reports from /proc/meminfo and free(1).
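
      A fix consistent with this description, as a hedged sketch (the exact hook
      point in the boot-time hugepage gathering code is an assumption):

      /*
       * Sketch: gigantic pages come straight from bootmem and were never
       * counted in totalram_pages, yet hugetlb_total_pages() counts them;
       * crediting their frames back keeps the CommitLimit and
       * __vm_enough_memory() arithmetic from going negative.
       */
      static void account_gigantic_pages_sketch(struct hstate *h,
      					  unsigned long nr_pages)
      {
      	if (h->order >= MAX_ORDER)	/* gigantic: allocated from bootmem */
      		totalram_pages += nr_pages << h->order;
      }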
      
      [akpm@linux-foundation.org: standardize comment layout]
      Reported-by: Russ Anderson <rja@sgi.com>
      Signed-off-by: Rafael Aquini <aquini@linux.com>
      Acked-by: Russ Anderson <rja@sgi.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 06 Jun, 2011 1 commit
  19. 27 May, 2011 1 commit
  20. 25 May, 2011 1 commit