1. 15 November 2006, 2 commits
    • [PATCH] hugetlb: check for brk() entering a hugepage region · cb07c9a1
      David Gibson authored
      Unlike mmap(), the codepath for brk() creates a vma without first checking
      that it doesn't touch a region exclusively reserved for hugepages.  On
      powerpc, this can allow it to create a normal page vma in a hugepage
      region, causing oopses and other badness.
      
      Add a test to prevent this.  With this patch, brk() will simply fail if it
      attempts to move the break into a hugepage reserved region.
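      
      A minimal sketch of the shape of the test, assuming the existing
      is_hugepage_only_range() arch hook is the predicate used (exact
      placement in the brk() path may differ; oldbrk/newbrk name the
      page-aligned old and new break here):
      
          /* refuse to move the break into a hugepage-only region */
          if (is_hugepage_only_range(mm, oldbrk, newbrk - oldbrk))
              goto out;            /* brk() fails, old break is returned */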
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Adam Litke <agl@us.ibm.com>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] hugetlb: prepare_hugepage_range check offset too · 68589bc3
      Hugh Dickins authored
      (David:)
      
      If hugetlbfs_file_mmap() returns a failure to do_mmap_pgoff() - for example,
      because the given file offset is not hugepage aligned - then do_mmap_pgoff
      will go to the unmap_and_free_vma backout path.
      
      But at this stage the vma hasn't been marked as hugepage, and the backout path
      will call unmap_region() on it.  That will eventually call down to the
      non-hugepage version of unmap_page_range().  On ppc64, at least, that will
      cause serious problems if there are any existing hugepage pagetable entries in
      the vicinity - for example if there are any other hugepage mappings under the
      same PUD.  unmap_page_range() will trigger a bad_pud() on the hugepage pud
      entries.  I suspect this will also cause bad problems on ia64, though I don't
      have a machine to test it on.
      
      (Hugh:)
      
      prepare_hugepage_range() should check file offset alignment when it checks
      virtual address and length, to stop MAP_FIXED with a bad huge offset from
      unmapping before it fails further down.  PowerPC should apply the same
      prepare_hugepage_range alignment checks as ia64 and all the others do.
      
      Then none of the alignment checks in hugetlbfs_file_mmap are required (nor
      is the check for too small a mapping); but even so, move up setting of
      VM_HUGETLB and add a comment to warn of what David Gibson discovered - if
      hugetlbfs_file_mmap fails before setting it, do_mmap_pgoff's unmap_region
      when unwinding from error will go the non-huge way, which may cause bad
      behaviour on architectures (powerpc and ia64) which segregate their huge
      mappings into a separate region of the address space.
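      
      For illustration, the generic check after this change plausibly takes
      the following shape (pgoff is in PAGE_SIZE units, so its low bits that
      fall within a hugepage must be zero):
      
          static inline int
          prepare_hugepage_range(unsigned long addr, unsigned long len,
                                 pgoff_t pgoff)
          {
              if (pgoff & (~HPAGE_MASK >> PAGE_SHIFT))
                  return -EINVAL;      /* file offset not hugepage aligned */
              if (addr & ~HPAGE_MASK)
                  return -EINVAL;      /* virtual address misaligned */
              if (len & ~HPAGE_MASK)
                  return -EINVAL;      /* length not a hugepage multiple */
              return 0;
          }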
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: "Luck, Tony" <tony.luck@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Acked-by: Adam Litke <agl@us.ibm.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 16 October 2006, 1 commit
  3. 26 September 2006, 2 commits
    • [PATCH] ZVC: Support NR_SLAB_RECLAIMABLE / NR_SLAB_UNRECLAIMABLE · 972d1a7b
      Christoph Lameter authored
      Remove the atomic counter for slab_reclaim_pages and replace the counter
      and NR_SLAB with two ZVC counters that account for unreclaimable and
      reclaimable slab pages: NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE.
      
      Change the check in vmscan.c to refer to NR_SLAB_RECLAIMABLE.  The
      intent seems to be to check for slab pages that could be freed.
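      
      For illustration, a caller interested in freeable slab now reads only
      the reclaimable counter (both accessors are part of the ZVC API):
      
          /* system-wide count of reclaimable slab pages */
          unsigned long slab_rec = global_page_state(NR_SLAB_RECLAIMABLE);
      
          /* or per zone, given a struct zone *zone */
          unsigned long zone_rec = zone_page_state(zone, NR_SLAB_RECLAIMABLE);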
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: tracking shared dirty pages · d08b3851
      Peter Zijlstra authored
      Tracking of dirty pages in shared writeable mmap()s.
      
      The idea is simple: write protect clean shared writeable pages, catch the
      write-fault, make writeable and set dirty.  On page write-back clean all the
      PTE dirty bits and write protect them once again.
      
      The implementation is a tad harder, mainly because the default
      backing_dev_info capabilities were too loosely maintained.  Hence it is not
      enough to test the backing_dev_info for cap_account_dirty.
      
      The current heuristic is as follows; a VMA is eligible when (see the
      predicate sketch after the list):
       - it is shared writeable
          (vm_flags & (VM_WRITE|VM_SHARED)) == (VM_WRITE|VM_SHARED)
       - it is not a 'special' mapping
          (vm_flags & (VM_PFNMAP|VM_INSERTPAGE)) == 0
       - the backing_dev_info is cap_account_dirty
          mapping_cap_account_dirty(vma->vm_file->f_mapping)
       - f_op->mmap() didn't change the default page protection
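      
      As a predicate, the heuristic reads roughly as below (the helper name
      is hypothetical; the last test is written against the protection_map
      table of this era):
      
          static int vma_tracks_dirty(struct vm_area_struct *vma)
          {
              unsigned long vm_flags = vma->vm_flags;
      
              /* shared writeable */
              if ((vm_flags & (VM_WRITE|VM_SHARED)) != (VM_WRITE|VM_SHARED))
                  return 0;
              /* not a 'special' mapping */
              if (vm_flags & (VM_PFNMAP|VM_INSERTPAGE))
                  return 0;
              /* backing_dev_info accounts dirty pages */
              if (!vma->vm_file ||
                  !mapping_cap_account_dirty(vma->vm_file->f_mapping))
                  return 0;
              /* f_op->mmap() left the default page protection alone */
              return pgprot_val(vma->vm_page_prot) ==
                     pgprot_val(protection_map[vm_flags &
                                (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
          }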
      
      Pages from remap_pfn_range() are explicitly excluded because their COW
      semantics are already horrid enough (see vm_normal_page() in do_wp_page()) and
      because they don't have a backing store anyway.
      
      mprotect() is taught about the new behaviour as well.  However, it
      overrides the last condition.
      
      Cleaning the pages on write-back is done with page_mkclean(), a new rmap
      call.  It can be called on any page, but is currently only implemented
      for mapped pages; if the page is found to be of a VMA that accounts
      dirty pages, it will also wrprotect the PTE.
      
      Finally, in fs/buffer.c:try_to_free_buffers(), remove clear_page_dirty() from
      under ->private_lock.  This seems to be safe, since ->private_lock is used to
      serialize access to the buffers, not the page itself.  This is needed because
      clear_page_dirty() will call into page_mkclean() and would thereby violate
      locking order.
      
      [dhowells@redhat.com: Provide a page_mkclean() implementation for NOMMU]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: David Howells <dhowells@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 08 September 2006, 1 commit
  5. 27 July 2006, 1 commit
  6. 01 July 2006, 1 commit
  7. 23 June 2006, 1 commit
    • [PATCH] add page_mkwrite() vm_operations method · 9637a5ef
      David Howells authored
      Add a new VMA operation to notify a filesystem or other driver about the
      MMU generating a fault because userspace attempted to write to a page
      mapped through a read-only PTE.
      
      This facility permits the filesystem or driver to:
      
       (*) Implement storage allocation/reservation on attempted write, and so to
           deal with problems such as ENOSPC more gracefully (perhaps by generating
           SIGBUS).
      
       (*) Delay making the page writable until the contents have been written to a
           backing cache. This is useful for NFS/AFS when using FS-Cache/CacheFS.
           It permits the filesystem to have some guarantee about the state of the
           cache.
      
       (*) Account and limit number of dirty pages. This is one piece of the puzzle
           needed to make shared writable mapping work safely in FUSE.
      
      Needed by cachefs (Or is it cachefiles?  Or fscache? <head spins>).
      
      At least four other groups have stated an interest in it or a desire to use
      the functionality it provides: FUSE, OCFS2, NTFS and JFFS2.  Also, things like
      EXT3 really ought to use it to deal with the case of shared-writable mmap
      encountering ENOSPC before we permit the page to be dirtied.
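      
      A sketch of a filesystem hooking the method (the myfs_* names are
      hypothetical; the callback signature is the one added here):
      
          static int myfs_page_mkwrite(struct vm_area_struct *vma,
                                       struct page *page)
          {
              /* reserve backing store now, so ENOSPC surfaces at fault
               * time rather than being discovered at writeback */
              if (myfs_reserve_block(vma->vm_file, page->index) < 0)
                  return -ENOSPC;      /* fault path may raise SIGBUS */
              return 0;
          }
      
          static struct vm_operations_struct myfs_vm_ops = {
              .nopage       = filemap_nopage,
              .page_mkwrite = myfs_page_mkwrite,
          };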
      
      From: Peter Zijlstra <a.p.zijlstra@chello.nl>
      
        get_user_pages(.write=1, .force=1) can generate COW hits on read-only
        shared mappings; this patch traps those as page_mkwrite candidates and
        no longer handles them the old way.
      Signed-off-by: David Howells <dhowells@redhat.com>
      Cc: Miklos Szeredi <miklos@szeredi.hu>
      Cc: Joel Becker <Joel.Becker@oracle.com>
      Cc: Mark Fasheh <mark.fasheh@oracle.com>
      Cc: Anton Altaparmakov <aia21@cantab.net>
      Cc: David Woodhouse <dwmw2@infradead.org>
      Cc: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 11 April 2006, 2 commits
  9. 01 April 2006, 1 commit
  10. 26 March 2006, 1 commit
  11. 22 March 2006, 1 commit
    • [PATCH] remove VM_DONTCOPY bogosities · a6f563db
      Hugh Dickins authored
      Now that it's madvisable, remove two pieces of VM_DONTCOPY bogosity:
      
      1. There was and is no logical reason why VM_DONTCOPY should be in the
         list of flags which forbid vma merging (and those drivers which set
         it are also setting VM_IO, which itself forbids the merge).
      
      2. It's hard to understand the purpose of the VM_HUGETLB, VM_DONTCOPY
         block in vm_stat_account: but never mind, it's under CONFIG_HUGETLB,
         which (unlike CONFIG_HUGETLB_PAGE or CONFIG_HUGETLBFS) has never been
         defined.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 12 January 2006, 1 commit
  13. 17 December 2005, 1 commit
  14. 23 November 2005, 1 commit
    • [PATCH] unpaged: private write VM_RESERVED · 83e9b7e9
      Hugh Dickins authored
      The PageReserved removal in 2.6.15-rc1 issued a "deprecated" message when you
      tried to mmap or mprotect MAP_PRIVATE PROT_WRITE a VM_RESERVED, and failed
      with -EACCES: because do_wp_page lacks the refinement to COW pages in those
      areas, nor do we expect to find anonymous pages in them; and it seemed just
      bloat to add code for handling such a peculiar case.  But immediately it
      caused vbetool and ddcprobe (using lrmi) to fail.
      
      So revert the "deprecated" messages, letting mmap and mprotect succeed.  But
      leave do_wp_page's BUG_ON(vma->vm_flags & VM_RESERVED) in place until we've
      added the code to do it right: so this particular patch is only good if the
      app doesn't really need to write to that private area.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  15. 19 November 2005, 1 commit
  16. 07 November 2005, 1 commit
  17. 31 October 2005, 1 commit
  18. 30 October 2005, 9 commits
    • [PATCH] mm: unmap_vmas with inner ptlock · 508034a3
      Hugh Dickins authored
      Remove the page_table_lock from around the calls to unmap_vmas, and replace
      the pte_offset_map in zap_pte_range by pte_offset_map_lock: all callers are
      now safe to descend without page_table_lock.
      
      Don't attempt fancy locking for hugepages, just take page_table_lock in
      unmap_hugepage_range.  Which makes zap_hugepage_range, and the hugetlb test in
      zap_page_range, redundant: unmap_vmas calls unmap_hugepage_range anyway.  Nor
      does unmap_vmas have much use for its mm arg now.
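      
      The resulting pattern in zap_pte_range is roughly:
      
          spinlock_t *ptl;
          pte_t *pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
          do {
              /* ... zap one entry ... */
          } while (pte++, addr += PAGE_SIZE, addr != end);
          pte_unmap_unlock(pte - 1, ptl);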
      
      The tlb_start_vma and tlb_end_vma in unmap_page_range are now called without
      page_table_lock: if they're implemented at all, they typically come down to
      flush_cache_range (usually done outside page_table_lock) and flush_tlb_range
      (which we already audited for the mprotect case).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: unlink vma before pagetables · 8f4f8c16
      Hugh Dickins authored
      In most places the descent from pgd to pud to pmd to pte holds mmap_sem
      (exclusively or not), which ensures that free_pgtables cannot be freeing page
      tables from any level at the same time.  But truncation and reverse mapping
      descend without mmap_sem.
      
      No problem: just make sure that a vma is unlinked from its prio_tree (or
      nonlinear list) and from its anon_vma list, after zapping the vma, but before
      freeing its page tables.  Then neither vmtruncate nor rmap can reach that vma
      whose page tables are now volatile (nor do they need to reach it, since all
      its page entries have been zapped by this stage).
      
      The i_mmap_lock and anon_vma->lock already serialize this correctly; but the
      locking hierarchy is such that we cannot take them while holding
      page_table_lock.  Well, we're trying to push that down anyway.  So in this
      patch, move anon_vma_unlink and unlink_file_vma into free_pgtables, at the
      same time as moving page_table_lock around calls to unmap_vmas.
      
      tlb_gather_mmu and tlb_finish_mmu then fall outside the page_table_lock, but
      we made them preempt_disable and preempt_enable earlier; and a long source
      audit of all the architectures has shown no problem with removing
      page_table_lock from them.  free_pgtables doesn't need page_table_lock for
      itself, nor for what it calls; tlb->mm->nr_ptes is usually protected by
      page_table_lock, but partly by non-exclusive mmap_sem - here it's decremented
      with exclusive mmap_sem, or mm_users 0.  update_hiwater_rss and
      vm_unacct_memory don't need page_table_lock either.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: ia64 use expand_upwards · 46dea3d0
      Hugh Dickins authored
      ia64 has expand_backing_store function for growing its Register Backing Store
      vma upwards.  But more complete code for this purpose is found in the
      CONFIG_STACK_GROWSUP part of mm/mmap.c.  Uglify its #ifdefs further to provide
      expand_upwards for ia64 as well as expand_stack for parisc.
      
      The Register Backing Store vma should be marked VM_ACCOUNT.  Implement the
      intention of growing it only a page at a time, instead of passing an address
      outside of the vma to handle_mm_fault, with unknown consequences.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: update_hiwaters just in time · 365e9c87
      Hugh Dickins authored
      update_mem_hiwater has attracted various criticisms, in particular from those
      concerned with mm scalability.  Originally it was called whenever rss or
      total_vm got raised.  Then many of those callsites were replaced by a timer
      tick call from account_system_time.  Now Frank van Maarseveen reports that to
      be found inadequate.  How about this?  Works for Frank.
      
      Replace update_mem_hiwater, a poor combination of two unrelated ops, by macros
      update_hiwater_rss and update_hiwater_vm.  Don't attempt to keep
      mm->hiwater_rss up to date at timer tick, nor every time we raise rss (usually
      by 1): those are hot paths.  Do the opposite, update only when about to lower
      rss (usually by many), or just before final accounting in do_exit.  Handle
      mm->hiwater_vm in the same way, though it's much less of an issue.  Demand
      that whoever collects these hiwater statistics do the work of taking the
      maximum with rss or total_vm.
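      
      Plausibly the macros reduce to a take-the-maximum update, e.g.:
      
          #define update_hiwater_rss(mm)  do {                \
              if ((mm)->hiwater_rss < get_mm_rss(mm))         \
                  (mm)->hiwater_rss = get_mm_rss(mm);         \
          } while (0)
      
          #define update_hiwater_vm(mm)   do {                \
              if ((mm)->hiwater_vm < (mm)->total_vm)          \
                  (mm)->hiwater_vm = (mm)->total_vm;          \
          } while (0)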
      
      And there has been no collector of these hiwater statistics in the tree.  The
      new convention needs an example, so match Frank's usage by adding a VmPeak
      line above VmSize to /proc/<pid>/status, and also a VmHWM line above VmRSS
      (High-Water-Mark or High-Water-Memory).
      
      There was a particular anomaly during mremap move, that hiwater_vm might be
      captured too high.  A fleeting such anomaly remains, but it's quickly
      corrected now, whereas before it would stick.
      
      What locking?  None: if the app is racy then these statistics will be racy,
      it's not worth any overhead to make them exact.  But whenever it suits,
      hiwater_vm is updated under exclusive mmap_sem, and hiwater_rss under
      page_table_lock (for now) or with preemption disabled (later on): without
      going to any trouble, minimize the time between reading current values and
      updating, to minimize those occasions when a racing thread bumps a count up
      and back down in between.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] core remove PageReserved · b5810039
      Nick Piggin authored
      Remove PageReserved() calls from core code by tightening VM_RESERVED
      handling in mm/ to cover PageReserved functionality.
      
      PageReserved special casing is removed from get_page and put_page.
      
      All setting and clearing of PageReserved is retained, and it is now flagged
      in the page_alloc checks to help ensure we don't introduce any refcount
      based freeing of Reserved pages.
      
      MAP_PRIVATE, PROT_WRITE of VM_RESERVED regions is tentatively being
      deprecated.  We never completely handled it correctly anyway, and it may
      be reintroduced in future if required (Hugh has a proof of concept).
      
      Once PageReserved() calls are removed from kernel/power/swsusp.c, and all
      arch/ and driver code, the Set and Clear calls, and the PG_reserved bit can
      be trivially removed.
      
      Last real user of PageReserved is swsusp, which uses PageReserved to
      determine whether a struct page points to valid memory or not.  This still
      needs to be addressed (a generic page_is_ram() should work).
      
      A last caveat: the ZERO_PAGE is now refcounted and managed with rmap (and
      thus mapcounted and counted towards shared rss).  These writes to the struct
      page could cause excessive cacheline bouncing on big systems.  There are a
      number of ways this could be addressed if it is an issue.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      
      Refcount bug fix for filemap_xip.c
      Signed-off-by: Carsten Otte <cotte@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: exit_mmap need not reset · 7c1fd6b9
      Hugh Dickins authored
      exit_mmap resets various mm_struct fields, but the mm is well on its way out,
      and none of those fields matter by this point.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: unlink_file_vma, remove_vma · a8fb5618
      Hugh Dickins authored
      Divide remove_vm_struct into two parts: first anon_vma_unlink plus
      unlink_file_vma, to unlink the vma from the list and tree by which rmap or
      vmtruncate might find it; then remove_vma to close, fput and free.
      
      The intention here is to do the anon_vma_unlink and unlink_file_vma earlier,
      in free_pgtables before freeing any page tables: so we can be sure that any
      page tables traversed by rmap and vmtruncate are stable (and other, ordinary
      cases are stabilized by holding mmap_sem).
      
      This will be crucial to traversing pgd,pud,pmd without page_table_lock.  But
      testing the split-out patch showed that lifting the page_table_lock is
      symbiotically necessary to make this change - the lock ordering is wrong to
      move those unlinks into free_pgtables while it's under ptlock.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: remove_vma_list consolidation · 2c0b3814
      Hugh Dickins authored
      unmap_vma doesn't amount to much, let's put it inside unmap_vma_list.  Except
      it doesn't unmap anything, unmap_region just did the unmapping: rename it to
      remove_vma_list.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: vm_stat_account unshackled · ab50b8ed
      Hugh Dickins authored
      The original vm_stat_account has fallen into disuse, with only one user, and
      only one user of vm_stat_unaccount.  It's easier to keep track if we convert
      them all to __vm_stat_account, then free it from its __shackles.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 22 September 2005, 1 commit
  20. 15 September 2005, 1 commit
    • [PATCH] error path in setup_arg_pages() misses vm_unacct_memory() · 2fd4ef85
      Hugh Dickins authored
      Pavel Emelianov and Kirill Korotaev observe that fs and arch users of
      security_vm_enough_memory tend to forget to vm_unacct_memory when a
      failure occurs further down (typically in setup_arg_pages variants).
      
      These are all users of insert_vm_struct, and that reservation will only
      be unaccounted on exit if the vma is marked VM_ACCOUNT: which in some
      cases it is (hidden inside VM_STACK_FLAGS) and in some cases it isn't.
      
      So x86_64 32-bit and ppc64 vDSO ELFs have been leaking memory into
      Committed_AS each time they're run.  But don't add VM_ACCOUNT to them,
      it's inappropriate to reserve against the very unlikely case that gdb
      be used to COW a vDSO page - we ought to do something about that in
      do_wp_page, but there are yet other inconsistencies to be resolved.
      
      The safe and economical way to fix this is to let insert_vm_struct do
      the security_vm_enough_memory check when it finds VM_ACCOUNT is set.
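      
      Sketched inside insert_vm_struct (error handling elided), the check
      looks something like:
      
          if ((vma->vm_flags & VM_ACCOUNT) &&
              security_vm_enough_memory((vma->vm_end - vma->vm_start)
                                        >> PAGE_SHIFT))
              return -ENOMEM;      /* reservation refused, nothing leaks */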
      
      And the MIPS irix_brk has been calling security_vm_enough_memory before
      calling do_brk which repeats it, doubly accounting and so also leaking.
      Remove that, and all the fs and arch calls to security_vm_enough_memory:
      give it a less misleading name later on.
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Kirill Korotaev <dev@sw.ru>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  21. 08 September 2005, 2 commits
  22. 05 August 2005, 1 commit
    • [PATCH] __vm_enough_memory() signedness fix · 2f60f8d3
      Simon Derr authored
      We have found what seems to be a small bug in __vm_enough_memory() when
      sysctl_overcommit_memory is set to OVERCOMMIT_NEVER.
      
      When this bug occurs the system fails to boot, with /sbin/init whining
      about fork() returning ENOMEM.
      
      We hunted down the problem to this:
      
      The deferred update mechanism used in vm_acct_memory(), on an SMP system,
      allows the vm_committed_space counter to have a negative value.
      
      This should not be a problem since this counter is known to be inaccurate.
      
      But in __vm_enough_memory() this counter is compared to the `allowed'
      variable, which is an unsigned long.  This comparison is broken since it
      will consider the negative values of vm_committed_space to be huge positive
      values, resulting in a memory allocation failure.
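      
      The failure mode is plain C integer conversion; a standalone demo of
      the same comparison (values are illustrative):
      
          #include <stdio.h>
      
          int main(void)
          {
              long committed = -3;           /* counter drifted negative */
              unsigned long allowed = 1000;  /* the overcommit limit */
      
              /* committed is converted to unsigned long here, and
               * (unsigned long)-3 is a huge positive value */
              if (committed > allowed)
                  printf("bogus ENOMEM: %lu > %lu\n",
                         (unsigned long)committed, allowed);
      
              /* the fix amounts to comparing as signed, e.g. by clamping
               * the inaccurate counter at zero first */
              if (committed < 0)
                  committed = 0;
              if ((unsigned long)committed <= allowed)
                  printf("allocation allowed\n");
              return 0;
          }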
      
      Signed-off-by: <Jean-Marc.Saffroy@ext.bull.net>
      Signed-off-by: <Simon.Derr@bull.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  23. 22 June 2005, 2 commits
    • [PATCH] mmap topdown fix for large stack limit, large allocation · 73219d17
      Chris Wright authored
      The topdown changes in 2.6.12-rc1 can cause large allocations with a
      large stack limit to fail, despite there being space available.  The
      address mmap_base - len is only valid when len <= mmap_base, but nothing
      in the topdown allocator checks this.  It's only (now) caught at a higher
      level, which will cause the allocation to simply fail.  The following
      change restores the fallback to the bottom-up path, which will allow
      large allocations with a large stack limit to potentially still succeed.
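      
      The shape of the restored fallback (a sketch, not the exact diff):
      
          if (len > mm->mmap_base)     /* mmap_base - len would underflow */
              goto bottomup;           /* retry with the bottom-up search */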
      Signed-off-by: Chris Wright <chrisw@osdl.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Avoiding mmap fragmentation · 1363c3cd
      Wolfgang Wander authored
      Ingo recently introduced a great speedup for allocating new mmaps using the
      free_area_cache pointer which boosts the specweb SSL benchmark by 4-5% and
      causes huge performance increases in thread creation.
      
      The downside of this patch is that it does lead to fragmentation in the
      mmap-ed areas (visible via /proc/self/maps), such that some applications
      that work fine under 2.4 kernels quickly run out of memory on any 2.6
      kernel.
      
      The problem is twofold:
      
        1) the free_area_cache is used to continue a search for memory where
           the last search ended.  Before the change new areas were always
           searched from the base address on.
      
           So now new small areas are cluttering holes of all sizes
           throughout the whole mmap-able region whereas before small holes
           tended to close holes near the base leaving holes far from the base
           large and available for larger requests.
      
        2) the free_area_cache also is set to the location of the last
           munmap-ed area so in scenarios where we allocate e.g.  five regions of
           1K each, then free regions 4, 2, 3 in this order the next request for 1K
           will be placed in the position of the old region 3, whereas before we
           appended it to the still active region 1, placing it at the location
           of the old region 2.  Before we had 1 free region of 2K, now we only
           get two free regions of 1K -> fragmentation.
      
      The patch addresses these issues by introducing yet another cache
      descriptor, cached_hole_size, that contains the largest known hole size
      below the current free_area_cache.  If a new request comes in, its size
      is compared against cached_hole_size and, if the request can be filled
      with a hole below free_area_cache, the search is started from the base
      instead.
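      
      A sketch of the bottom-up search with the new descriptor (simplified
      from the description above, not the exact diff):
      
          if (len <= mm->cached_hole_size) {
              /* a big-enough hole is known to exist below the cache point:
               * restart from the base instead of fragmenting further up */
              mm->cached_hole_size = 0;
              mm->free_area_cache = TASK_UNMAPPED_BASE;
          }
          addr = mm->free_area_cache;
          for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
              if (!vma || addr + len <= vma->vm_start)
                  break;                         /* hole found at addr */
              /* remember the largest hole skipped over */
              if (addr + mm->cached_hole_size < vma->vm_start)
                  mm->cached_hole_size = vma->vm_start - addr;
              addr = vma->vm_end;
          }
          mm->free_area_cache = addr + len;      /* continue here next time */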
      
      The results look promising: Whereas 2.6.12-rc4 fragments quickly and my
      (earlier posted) leakme.c test program terminates after 50000+ iterations
      with 96 distinct and fragmented maps in /proc/self/maps it performs nicely
      (as expected) with thread creation, Ingo's test_str02 with 20000 threads
      requires 0.7s system time.
      
      Taking out Ingo's patch (un-patch available per request) by basically
      deleting all mentions of free_area_cache from the kernel and starting the
      search for new memory always at the respective bases we observe: leakme
      terminates successfully with 11 distinctive hardly fragmented areas in
      /proc/self/maps but thread creation is grindingly slow: 30+s(!) system
      time for Ingo's test_str02 with 20000 threads.
      
      Now - drumroll ;-) the appended patch works fine with leakme: it ends with
      only 7 distinct areas in /proc/self/maps and also thread creation seems
      sufficiently fast with 0.71s for 20000 threads.
      Signed-off-by: Wolfgang Wander <wwc@rentec.com>
      Credit-to: "Richard Purdie" <rpurdie@rpsys.net>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu> (partly)
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  24. 20 May 2005, 1 commit
    • Fix get_unmapped_area sanity tests · 07ab67c8
      Linus Torvalds authored
      As noted by Chris Wright, we need to do the full range of tests regardless
      of whether MAP_FIXED is set or not, so re-organize get_unmapped_area()
      slightly to do the sanity checks unconditionally.
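      
      After the reorganization the tests run on the result whether or not
      MAP_FIXED supplied the address, along these lines:
      
          if (addr > TASK_SIZE - len)
              return -ENOMEM;      /* would run past the address space */
          if (addr & ~PAGE_MASK)
              return -EINVAL;      /* not page aligned */
          return addr;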
  25. 19 May 2005, 1 commit
    • [PATCH] prevent NULL mmap in topdown model · 49a43876
      Linus Torvalds authored
      Prevent the topdown allocator from allocating mmap areas all the way
      down to address zero.
      
      We still allow a MAP_FIXED mapping of page 0 (needed for various things,
      ranging from Wine and DOSEMU to people who want to allow speculative
      loads off a NULL pointer).
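      
      Conceptually the topdown walk now keeps its candidate strictly above
      zero (a sketch; it also assumes len <= mm->mmap_base, the case handled
      by the 73219d17 entry above):
      
          addr = mm->mmap_base;
          do {
              addr -= len;                       /* candidate start */
              vma = find_vma(mm, addr);
              if (!vma || addr + len <= vma->vm_start)
                  return addr;                   /* a hole, never page 0 */
              addr = vma->vm_start;
          } while (addr > len);                  /* keeps addr - len > 0 */
          return -ENOMEM;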
      
      Tested by Chris Wright.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  26. 01 May 2005, 2 commits