1. 22 Mar 2006, 1 commit
    • A
      [PATCH] __get_page_state() cpumask cleanup and fix · b40607fc
      Committed by Andrew Morton
      __get_page_state() has an open-coded for_each_cpu_mask() loop in it.
      
      Tidy that up, then notice that the code was buggy:
      
      	while (cpu < NR_CPUS) {
      		unsigned long *in, *out, off;
      
      		if (!cpu_isset(cpu, *cpumask))
      			continue;
      
      an obvious infinite loop.  I guess we just never call it with a holey cpu
      mask.
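      
      For reference, a minimal sketch of the tidied loop, assuming the
      2.6.16-era cpumask API and the existing per-cpu page_states variable
      (the real patch may differ in detail):
      
      	#include <linux/cpumask.h>
      	#include <linux/percpu.h>
      	#include <linux/string.h>
      
      	static void __get_page_state(struct page_state *ret, int nr,
      				     cpumask_t *cpumask)
      	{
      		int cpu;
      
      		memset(ret, 0, nr * sizeof(unsigned long));
      
      		/* for_each_cpu_mask() advances cpu itself and skips unset
      		 * bits, so a holey cpumask can no longer hang the walk. */
      		for_each_cpu_mask(cpu, *cpumask) {
      			unsigned long *in = (unsigned long *)&per_cpu(page_states, cpu);
      			unsigned long *out = (unsigned long *)ret;
      			int off;
      
      			for (off = 0; off < nr; off++)
      				out[off] += in[off];
      		}
      	}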
      
      Even after my cpumask size-reduction work, this patch increases code size :(
      
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b40607fc
  2. 17 Mar 2006, 3 commits
  3. 15 Mar 2006, 2 commits
  4. 10 Mar 2006, 4 commits
    • C
      [PATCH] slab: Node rotor for freeing alien caches and remote per cpu pages. · 8fce4d8e
      Committed by Christoph Lameter
      The cache reaper currently tries to free all alien caches and all remote
      per cpu pages in each pass of cache_reap.  On machines with a large number
      of nodes (such as Altix) this may lead to sporadic delays of around ~10ms.
      Interrupts are disabled while reclaiming, creating unacceptable delays.
      
      This patch changes that behavior by adding a per cpu reap_node variable.
      Instead of attempting to free all caches, we free only one alien cache and
      the per cpu pages from one remote node.  That reduces the time spent in
      cache_reap.  However, doing so will lengthen the time it takes to
      completely drain all remote per cpu pagesets and all alien caches.  The
      time needed will grow with the number of nodes in the system.  All caches
      are drained when they overflow their respective capacity.  So the drawback
      here is only that a bit of memory may be wasted for a while longer.
      
      Details:
      
      1. Rename drain_remote_pages to drain_node_pages, allowing the caller to
         specify which node's pcp pages to drain.
      
      2. Add additional functions init_reap_node and next_reap_node for NUMA
         that manage a per cpu reap_node counter (see the sketch after this list).
      
      3. Add a reap_alien function that reaps only from the current reap_node.
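      
      A hedged sketch of the rotor these functions implement (the names come
      from the list above; the implementation details are assumed):
      
      	#include <linux/nodemask.h>
      	#include <linux/percpu.h>
      	#include <linux/topology.h>
      
      	static DEFINE_PER_CPU(int, reap_node);
      
      	static void init_reap_node(int cpu)
      	{
      		/* Start each cpu's rotor away from its own node. */
      		int node = next_node(cpu_to_node(cpu), node_online_map);
      
      		if (node == MAX_NUMNODES)
      			node = first_node(node_online_map);
      		per_cpu(reap_node, cpu) = node;
      	}
      
      	static void next_reap_node(void)
      	{
      		int node = __get_cpu_var(reap_node);
      
      		/* Advance to the next online node, wrapping around. */
      		node = next_node(node, node_online_map);
      		if (node == MAX_NUMNODES)
      			node = first_node(node_online_map);
      		__get_cpu_var(reap_node) = node;
      	}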
      
      For us this seems to be a critical issue.  Holdoffs averaging ~7ms cause
      some HPC benchmarks to slow down significantly.  E.g. NAS parallel slows
      down dramatically: 12-16 seconds runtime w/o the rotor compared to 5.8 secs
      with the rotor patches.  It gets down to 5.05 secs with the additional
      interrupt holdoff reductions.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8fce4d8e
    • Y
      [PATCH] memory hotadd: pgdat->node_present_pages fix · f2937be5
      Committed by Yasunori Goto
      When pages are onlined, not only zone->present_pages but also
      pgdat->node_present_pages should be refreshed.
      
      This field is used to show information at
      /sys/devices/system/node/nodeX/meminfo via si_meminfo_node().
      
      Without the fix it shows a strange value for MemUsed, which is
      calculated as (node_present_pages - all zones' free pages).
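      
      The fix itself is small; a sketch of the relevant lines in
      online_pages() (mm/memory_hotplug.c), with context assumed from the
      description:
      
      	zone->present_pages += onlined_pages;
      	/* the fix: keep the per-node total in sync as well */
      	zone->zone_pgdat->node_present_pages += onlined_pages;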
      Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f2937be5
    • C
      [PATCH] vmscan: no zone_reclaim if PF_MEMALLOC is set · a6bf5270
      Committed by Christoph Lameter
      If the process has already set PF_MEMALLOC and is already using
      current->reclaim_state, do not try to reclaim memory from the zone.
      These are set by kswapd and/or synchronous global reclaim, which will not
      take it lightly if we zap the reclaim_state.
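      
      A sketch of the guard this implies in zone_reclaim() (mm/vmscan.c);
      the exact placement in the function is assumed:
      
      	/* kswapd and synchronous global reclaim run with PF_MEMALLOC set
      	 * and own current->reclaim_state; refuse rather than zap it. */
      	if (current->flags & PF_MEMALLOC)
      		return 0;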
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a6bf5270
    • H
      [PATCH] page_add_file_rmap(): remove BUG_ON()s · 85a6cd03
      Committed by Hugh Dickins
      Remove two early-development BUG_ONs from page_add_file_rmap.
      
      The pfn_valid test (originally useful for checking that nobody passed an
      artificial struct page) comes too late, since we already have the struct
      page.
      
      The PageAnon test (useful when anon was first distinguished from file rmap)
      prevents ->nopage implementations from reusing ->mapping, which would
      otherwise be available.
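      
      For reference, a sketch of the function with the two removed
      assertions, reconstructed from the description (the surrounding code
      is assumed from 2.6.16):
      
      	void page_add_file_rmap(struct page *page)
      	{
      		BUG_ON(PageAnon(page));			/* removed */
      		BUG_ON(!pfn_valid(page_to_pfn(page)));	/* removed */
      
      		if (atomic_inc_and_test(&page->_mapcount))
      			__inc_page_state(nr_mapped);
      	}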
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      85a6cd03
  5. 09 Mar 2006, 4 commits
  6. 07 Mar 2006, 3 commits
    • C
      [PATCH] numa_maps update · 397874df
      Committed by Christoph Lameter
      Change the format of numa_maps to be more compact and contain additional
      information that is useful for managing and troubleshooting memory on a
      NUMA system.  Numa_maps can now also support huge pages.
      
      Fixes:
      
      1. More compact format. Only display fields if they contain additional
      	information.
      
      2. Always display information for all vmas. The old numa_maps did not display
      	vmas with no mapped entries. This was a bit confusing because page
      	migration removes ptes for file backed vmas, so after page migration
      	part of the vmas seemed to vanish.
      
      3. Rename maxref to mapmax. This is the maximum mapcount of all the pages
      	in a vma and may be used as an indicator of how many processes
      	may be using a certain vma.
      
      4. Include the ability to scan over huge page vmas.
      
      New items shown:
      
      dirty
      	Number of pages in a vma that have the dirty bit set either in the
      	struct page or in the pte.
      
      file=<filename>
      	The file backing the pages, if any
      
      stack
      	Stack area
      
      heap
      	Heap area
      
      huge
      	Huge page area. The number of pages shown is the number of huge
      	pages, not the number of regular sized pages.
      
      swapcache
      	Number of pages with swap references. Must be >0 in order to
      	be shown.
      
      active
      	Number of active pages. Only displayed if different from the number
      	of pages mapped.
      
      writeback
      	Number of pages under writeback. Only displayed if >0.
      
      Sample output of a process using huge pages:
      
      00000000 default
      2000000000000000 default file=/lib/ld-2.3.90.so mapped=13 mapmax=30 N0=13
      2000000000044000 default file=/lib/ld-2.3.90.so anon=2 dirty=2 swapcache=2 N2=2
      2000000000064000 default file=/lib/librt-2.3.90.so mapped=2 active=1 N1=1 N3=1
      2000000000074000 default file=/lib/librt-2.3.90.so
      2000000000080000 default file=/lib/librt-2.3.90.so anon=1 swapcache=1 N2=1
      2000000000084000 default
      2000000000088000 default file=/lib/libc-2.3.90.so mapped=52 mapmax=32 active=48 N0=52
      20000000002bc000 default file=/lib/libc-2.3.90.so
      20000000002c8000 default file=/lib/libc-2.3.90.so anon=3 dirty=2 swapcache=3 active=2 N1=1 N2=2
      20000000002d4000 default anon=1 swapcache=1 N1=1
      20000000002d8000 default file=/lib/libpthread-2.3.90.so mapped=8 mapmax=3 active=7 N2=2 N3=6
      20000000002fc000 default file=/lib/libpthread-2.3.90.so
      2000000000308000 default file=/lib/libpthread-2.3.90.so anon=1 dirty=1 swapcache=1 N1=1
      200000000030c000 default anon=1 dirty=1 swapcache=1 N1=1
      2000000000320000 default anon=1 dirty=1 N1=1
      200000000071c000 default
      2000000000720000 default anon=2 dirty=2 swapcache=1 N1=1 N2=1
      2000000000f1c000 default
      2000000000f20000 default anon=2 dirty=2 swapcache=1 active=1 N2=1 N3=1
      200000000171c000 default
      2000000001720000 default anon=1 dirty=1 swapcache=1 N1=1
      2000000001b20000 default
      2000000001b38000 default file=/lib/libgcc_s.so.1 mapped=2 N1=2
      2000000001b48000 default file=/lib/libgcc_s.so.1
      2000000001b54000 default file=/lib/libgcc_s.so.1 anon=1 dirty=1 active=0 N1=1
      2000000001b58000 default file=/lib/libunwind.so.7.0.0 mapped=2 active=1 N1=2
      2000000001b74000 default file=/lib/libunwind.so.7.0.0
      2000000001b80000 default file=/lib/libunwind.so.7.0.0
      2000000001b84000 default
      4000000000000000 default file=/media/huge/test9 mapped=1 N1=1
      6000000000000000 default file=/media/huge/test9 anon=1 dirty=1 active=0 N1=1
      6000000000004000 default heap
      607fffff7fffc000 default anon=1 dirty=1 swapcache=1 N2=1
      607fffffff06c000 default stack anon=1 dirty=1 active=0 N1=1
      8000000060000000 default file=/mnt/huge/test0 huge dirty=3 N1=3
      8000000090000000 default file=/mnt/huge/test1 huge dirty=3 N0=1 N2=2
      80000000c0000000 default file=/mnt/huge/test2 huge dirty=3 N1=1 N3=2
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      397874df
    • L
      slab: clarify and fix calculate_slab_order() · 9888e6fa
      Committed by Linus Torvalds
      If we triggered the 'offslab_limit' test, we would return with
      cachep->gfporder incremented once too many times.
      
      This clarifies the logic somewhat, and fixes that bug.
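      
      A simplified sketch of the clarified loop (names as in 2.6.16
      mm/slab.c; the fragmentation test shown is illustrative): the
      candidate order is committed to the cache only after it passes every
      test, so breaking out on offslab_limit can no longer leave gfporder
      over-incremented.
      
      	for (gfporder = 0; gfporder <= MAX_GFP_ORDER; gfporder++) {
      		unsigned int num;
      		size_t remainder;
      
      		cache_estimate(gfporder, size, align, flags, &remainder, &num);
      		if (!num)
      			continue;	/* order too small for even one object */
      		if ((flags & CFLGS_OFF_SLAB) && num > offslab_limit)
      			break;		/* off-slab slab can't manage this many */
      
      		/* This order works: record it before deciding to stop. */
      		cachep->gfporder = gfporder;
      		left_over = remainder;
      
      		if (left_over * 8 <= (PAGE_SIZE << gfporder))
      			break;		/* fragmentation is acceptable */
      	}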
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9888e6fa
    • L
      Fix "check_slabp" printout size calculation · 264132bc
      Committed by Linus Torvalds
      We want to use the "struct slab" size, not the size of the pointer to
      same.  As it is, we'd not print out the last <n> entry pointers in the
      slab (where <n> is ~10, depending on whether it's a 32-bit or 64-bit
      kernel).
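      
      The pitfall in miniature (an illustrative fragment, not the actual
      diff; struct slab is the kernel's own, in mm/slab.c):
      
      	struct slab *slabp;
      	size_t limit;
      
      	limit = sizeof(slabp);		/* wrong: size of the pointer, 4 or 8 */
      	limit = sizeof(struct slab);	/* right: size of the structure itself */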
      
      Gaah, that slab code was written by somebody who likes unreadable crud.
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      264132bc
  7. 03 Mar 2006, 2 commits
  8. 01 Mar 2006, 4 commits
  9. 25 Feb 2006, 2 commits
  10. 22 Feb 2006, 1 commit
    • H
      [PATCH] tmpfs: fix mount mpol nodelist parsing · b00dc3ad
      Committed by Hugh Dickins
      I've been dissatisfied with the mpol_nodelist mount option which was
      added to tmpfs earlier in -rc.  Replace it by mpol=policy:nodelist.
      
      And it was broken: a nodelist is a comma-separated list of numbers and
      ranges; the mount options are a comma-separated list of token=values.
      Whoops, blindly strsep'ing on commas doesn't work so well.  Since we have
      no numeric tokens, and are unlikely to add them, use that to distinguish.
      
      Move the mpol= parsing to shmem_parse_mpol under CONFIG_NUMA, reject
      all its options as invalid if not NUMA.  /proc shows MPOL_PREFERRED
      as "prefer", so use that name for the policy instead of "preferred".
      
      Enforce that mpol=default has no nodelist; that mpol=prefer has one
      node only; that mpol=bind has a nodelist; but let mpol=interleave use
      node_online_map if no nodelist given.  Describe this in tmpfs.txt.
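      
      A sketch of the resulting policy-name mapping in shmem_parse_mpol
      (the structure is assumed from the description, not the actual diff):
      
      	if (!strcmp(value, "default"))
      		policy = MPOL_DEFAULT;		/* must have no nodelist */
      	else if (!strcmp(value, "prefer"))
      		policy = MPOL_PREFERRED;	/* exactly one node */
      	else if (!strcmp(value, "bind"))
      		policy = MPOL_BIND;		/* nodelist required */
      	else if (!strcmp(value, "interleave"))
      		policy = MPOL_INTERLEAVE;	/* node_online_map if none given */
      	else
      		return 1;			/* unknown policy: reject */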
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Acked-by: Robin Holt <holt@sgi.com>
      Acked-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      b00dc3ad
  11. 21 Feb 2006, 5 commits
    • A
      [PATCH] mm/mempolicy.c: fix 'if ();' typo · fcab6f35
      Committed by Alexey Dobriyan
      [akpm: it happens that the code was still correct, only inefficient]
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fcab6f35
    • L
      7a9166e3
    • A
      [PATCH] Fix units in mbind check · a9c930ba
      Committed by Andi Kleen
      maxnode is a bit index and can't be directly compared against a byte
      length like PAGE_SIZE.
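      
      The fix, in essence (a sketch; the exact expression upstream may
      differ):
      
      	/* maxnode counts bits; PAGE_SIZE counts bytes.  Convert the
      	 * bound to bits before comparing. */
      	if (maxnode > PAGE_SIZE * 8)	/* was: maxnode > PAGE_SIZE */
      		return -EINVAL;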
      Signed-off-by: Andi Kleen <ak@suse.de>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a9c930ba
    • C
      [PATCH] Terminate process that fails on a constrained allocation · 9b0f8b04
      Committed by Christoph Lameter
      Some allocations are restricted to a limited set of nodes (due to memory
      policies or cpuset constraints).  If the page allocator is not able to find
      enough memory then that does not mean that overall system memory is low.
      
      In particular, going postal and more or less randomly shooting at processes
      is not likely to help the situation, but may just lead to suicide (the
      whole system coming down).
      
      It is better to signal to the process that no memory exists given the
      constraints that the process (or the configuration of the process) has
      placed on the allocation behavior.  The process may be killed but then the
      sysadmin or developer can investigate the situation.  The solution is
      similar to what we do when running out of hugepages.
      
      This patch adds a check before we kill processes.  At that point
      performance considerations do not matter much so we just scan the zonelist
      and reconstruct a list of nodes.  If the list of nodes does not contain all
      online nodes then this is a constrained allocation and we should kill the
      current process.
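      
      A hedged sketch of that scan (a hypothetical simplification, not the
      actual patch):
      
      	static int is_constrained_alloc(struct zonelist *zonelist)
      	{
      		nodemask_t nodes = NODE_MASK_NONE;
      		struct zone **z;
      
      		/* Reconstruct the set of nodes the zonelist may use. */
      		for (z = zonelist->zones; *z; z++)
      			node_set((*z)->zone_pgdat->node_id, nodes);
      
      		/* Missing online nodes mean a mempolicy or cpuset
      		 * constrained this allocation. */
      		return !nodes_equal(nodes, node_online_map);
      	}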
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9b0f8b04
    • K
      [PATCH] OOM kill: children accounting · 9827b781
      Committed by Kurt Garloff
      In the badness() calculation, there's currently this piece of code:
      
              /*
               * Processes which fork a lot of child processes are likely
               * a good choice. We add the vmsize of the children if they
               * have an own mm. This prevents forking servers to flood the
               * machine with an endless amount of children
               */
              list_for_each(tsk, &p->children) {
                      struct task_struct *chld;
                      chld = list_entry(tsk, struct task_struct, sibling);
                      if (chld->mm != p->mm && chld->mm)
                              points += chld->mm->total_vm;
              }
      
      The intention is clear: If some server (apache) keeps spawning new children
      and we run OOM, we want to kill the father rather than picking a child.
      
      This -- to some degree -- also helps a bit with getting fork bombs under
      control, though I'd consider this a desirable side-effect rather than a
      feature.
      
      There's one problem with this: No matter how many or few children there are,
      if just one of them misbehaves, and all others (including the father) do
      everything right, we still always kill the whole family.  This hits in real
      life; whether it's javascript in konqueror resulting in kdeinit (and thus the
      whole KDE session) being hit or just a classical server that spawns children.
      
      Sidenote: The killer does kill all direct children as well, not only the
      selected father, see oom_kill_process().
      
      The idea in attached patch is that we do want to account the memory
      consumption of the (direct) children to the father -- however not fully.
      This maintains the property that fathers with too many children will still
      very likely be picked, whereas a single misbehaving child has the chance to
      be picked by the OOM killer.
      
      In the patch I account only half (rounded up) of the children's vm_size to
      the parent.  This means that if one child eats more mem than the rest of
      the family, it will be picked, otherwise it's still the father and thus the
      whole family that gets selected.
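      
      In code, the change amounts to one line in the loop quoted above
      (sketch; the rounding follows the description, using the chld, p and
      points names from that loop):
      
      	if (chld->mm != p->mm && chld->mm)
      		points += chld->mm->total_vm / 2 + 1;	/* about half, never zero */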
      
      This is heuristics -- we could debate whether accounting for a fourth would
      be better than for half of it.  Or -- if people would consider it worth the
      trouble -- make it a sysctl.  For now I stuck to accounting for half,
      which should IMHO be a significant improvement.
      
      The patch does one more thing: As users tend to be irritated by the choice
      of killed processes (mainly because the children are killed first, despite
      some of them having a very low OOM score), I added some more output: The
      selected (father) process will be reported first and its oom_score printed
      to syslog.
      
      Description:
      
      Only account for half of children's vm size in oom score calculation
      
      This should still give the parent enough points in case of fork bombs.  If
      any child however has more than 50% of the vm size of all children
      together, it'll get a higher score and be elected.
      
      This patch also makes the kernel display the oom_score.
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9827b781
  12. 18 Feb 2006, 4 commits
  13. 15 Feb 2006, 3 commits
    • M
      [PATCH] madvise MADV_DONTFORK/MADV_DOFORK · f8225661
      Committed by Michael S. Tsirkin
      Currently, copy-on-write may change the physical address of a page even if the
      user requested that the page be pinned in memory (either by mlock or by
      get_user_pages).  This happens if the process forks meanwhile, and the parent
      writes to that page.  As a result, the page is orphaned: in the case of
      get_user_pages, the application will never see any data the hardware DMAs into
      this page after the COW.  In the case of mlock'd memory, the parent is not
      getting the realtime/security benefits of mlock.
      
      In particular, this affects the Infiniband modules which do DMA from and into
      user pages all the time.
      
      This patch adds madvise options to control whether memory range is inherited
      across fork.  Useful e.g.  for when hardware is doing DMA from/into these
      pages.  Could also be useful to an application wanting to speed up its forks
      by cutting large areas out of consideration.
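      
      A userspace usage sketch (hypothetical function and buffer names;
      error handling kept minimal):
      
      	#include <stddef.h>
      	#include <stdio.h>
      	#include <sys/mman.h>
      
      	/* Keep a pinned DMA buffer out of the child across fork(),
      	 * then restore normal COW inheritance afterwards. */
      	void pin_for_dma(void *buf, size_t len)
      	{
      		if (madvise(buf, len, MADV_DONTFORK))
      			perror("madvise(MADV_DONTFORK)");
      	}
      
      	void unpin_for_dma(void *buf, size_t len)
      	{
      		if (madvise(buf, len, MADV_DOFORK))
      			perror("madvise(MADV_DOFORK)");
      	}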
      Signed-off-by: Michael S. Tsirkin <mst@mellanox.co.il>
      Acked-by: Hugh Dickins <hugh@veritas.com>
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f8225661
    • H
      [PATCH] compound page: default destructor · d98c7a09
      Committed by Hugh Dickins
      Somehow I imagined that calling a NULL destructor would free a compound page
      rather than oopsing.  No, we must supply a default destructor, __free_pages_ok
      using the order noted by prep_compound_page.  hugetlb can still replace this
      as before with its own free_huge_page pointer.
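      
      A sketch of what the default destructor amounts to (the storage
      location follows the companion page[1].lru patch below; details
      assumed):
      
      	static void free_compound_page(struct page *page)
      	{
      		/* Free at the order noted by prep_compound_page(). */
      		__free_pages_ok(page, (unsigned long)page[1].lru.prev);
      	}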
      
      The case that needs this is not common: rarely does put_compound_page's
      put_page_testzero bring the count down to 0.  But if get_user_pages is applied
      to some part of a compound page, without immediate release (e.g.  AIO or
      Infiniband), then it's possible for its put_page to come after the containing
      vma has been unmapped and the driver done its free_pages.
      
      That's just the kind of case compound pages are supposed to be guarding
      against (but Nick points out, nor did PageReserved handle this right).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d98c7a09
    • H
      [PATCH] compound page: use page[1].lru · 41d78ba5
      Committed by Hugh Dickins
      If a compound page has its own put_page_testzero destructor (the only current
      example is free_huge_page), that is noted in page[1].mapping of the compound
      page.  But that's rather a poor place to keep it: functions which call
      set_page_dirty_lock after get_user_pages (e.g.  Infiniband's
      __ib_umem_release) ought to be checking first, otherwise set_page_dirty is
      liable to crash on what's not the address of a struct address_space.
      
      And now I'm about to make that worse: it turns out that every compound page
      needs a destructor, so we can no longer rely on hugetlb pages going their own
      special way, to avoid further problems of page->mapping reuse.  For example,
      not many people know that: on 50% of i386 -Os builds, the first tail page of a
      compound page purports to be PageAnon (when its destructor has an odd
      address), which surprises page_add_file_rmap.
      
      Keep the compound page destructor in page[1].lru.next instead.  And to free up
      the common pairing of mapping and index, also move compound page order from
      index to lru.prev.  Slab reuses page->lru too: but if we ever need slab to use
      compound pages, it can easily stack its use above this.
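      
      Where the metadata now lives, in sketch form (2.6.16 layout assumed;
      tail-page setup elided):
      
      	static void prep_compound_page(struct page *page, unsigned long order)
      	{
      		page[1].lru.next = (void *)free_compound_page;	/* destructor */
      		page[1].lru.prev = (void *)order;		/* compound order */
      		/* ... tail-page setup elided ... */
      	}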
      
      (akpm: decoded version of the above: the tail pages of a compound page now
      have ->mapping==NULL, so there's no need for the set_page_dirty[_lock]()
      caller to check that they're not compound pages before doing the dirty).
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      41d78ba5
  14. 12 Feb 2006, 2 commits