1. 08 May, 2007 1 commit
  2. 24 Apr, 2007 2 commits
  3. 17 Mar, 2007 1 commit
  4. 06 Jan, 2007 1 commit
  5. 31 Dec, 2006 1 commit
  6. 14 Dec, 2006 1 commit
      [PATCH] cpuset: rework cpuset_zone_allowed api · 02a0e53d
      Committed by Paul Jackson
      Elaborate the API for calling cpuset_zone_allowed(), so that users have to
      explicitly choose between the two variants:
      
        cpuset_zone_allowed_hardwall()
        cpuset_zone_allowed_softwall()
      
      Until now, whether or not you got the hardwall flavor depended solely on
      whether or not you OR'd the __GFP_HARDWALL gfp flag into the gfp_mask
      argument.
      
      If you didn't specify __GFP_HARDWALL, you implicitly got the softwall
      version.
      
      Unfortunately, this meant that users would end up with the softwall version
      without thinking about it.  Since only the softwall version might sleep,
      this led to bugs with possible sleeping in interrupt context on more than
      one occasion.
      
      The hardwall version requires that the current task's mems_allowed allows
      the node of the specified zone (or that you're in interrupt, that
      __GFP_THISNODE is set, or that you're on a one-cpuset system.)
      
      The softwall version, depending on the gfp_mask, might allow a node if it
      was allowed in the nearest enclosing cpuset marked mem_exclusive (which
      requires taking the cpuset lock 'callback_mutex' to evaluate.)
      
      This patch removes the cpuset_zone_allowed() call, and forces the caller to
      explicitly choose between the hardwall and the softwall case.
      
      If the caller wants the gfp_mask to determine this choice, they should (1)
      be sure they can sleep or that __GFP_HARDWALL is set, and (2) invoke the
      cpuset_zone_allowed_softwall() routine.
      
      This adds another 100 or 200 bytes to the kernel text space, due to the few
      lines of nearly duplicate code at the top of both cpuset_zone_allowed_*
      routines.  It should save a few instructions executed for the calls that
      turned into calls of cpuset_zone_allowed_hardwall, thanks to not having to
      set (before the call) then check (within the call) the __GFP_HARDWALL flag.
      
      For the most critical call, from get_page_from_freelist(), the same
      instructions are executed as before -- the old cpuset_zone_allowed()
      routine it used to call is the same code as the
      cpuset_zone_allowed_softwall() routine that it calls now.
      
      Not a perfect win, but it seems worth it, to reduce the chance of hitting a
      sleeping-with-irqs-off complaint again.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  7. 08 Dec, 2006 3 commits
  8. 21 Oct, 2006 1 commit
  9. 30 Sep, 2006 7 commits
  10. 26 Sep, 2006 9 commits
  11. 04 Jul, 2006 1 commit
  12. 23 Jun, 2006 2 commits
  13. 20 Apr, 2006 2 commits
  14. 03 Mar, 2006 1 commit
  15. 01 Mar, 2006 1 commit
  16. 21 Feb, 2006 2 commits
      [PATCH] Terminate process that fails on a constrained allocation · 9b0f8b04
      Committed by Christoph Lameter
      Some allocations are restricted to a limited set of nodes (due to memory
      policies or cpuset constraints).  If the page allocator is not able to find
      enough memory then that does not mean that overall system memory is low.
      
      In particular, going postal and more or less randomly shooting at processes
      is not likely to help the situation, and may just lead to suicide (the
      whole system coming down).
      
      It is better to signal to the process that no memory exists given the
      constraints that the process (or the configuration of the process) has
      placed on the allocation behavior.  The process may be killed but then the
      sysadmin or developer can investigate the situation.  The solution is
      similar to what we do when running out of hugepages.
      
      This patch adds a check before we kill processes.  At that point
      performance considerations do not matter much so we just scan the zonelist
      and reconstruct a list of nodes.  If the list of nodes does not contain all
      online nodes then this is a constrained allocation and we should kill the
      current process.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      [PATCH] OOM kill: children accounting · 9827b781
      Committed by Kurt Garloff
      In the badness() calculation, there's currently this piece of code:
      
              /*
               * Processes which fork a lot of child processes are likely
               * a good choice. We add the vmsize of the children if they
               * have an own mm. This prevents forking servers to flood the
               * machine with an endless amount of children
               */
              list_for_each(tsk, &p->children) {
                      struct task_struct *chld;
                      chld = list_entry(tsk, struct task_struct, sibling);
                      if (chld->mm != p->mm && chld->mm)
                              points += chld->mm->total_vm;
              }
      
      The intention is clear: If some server (apache) keeps spawning new children
      and we run OOM, we want to kill the father rather than picking a child.
      
      This -- to some degree -- also helps a bit with getting fork bombs under
      control, though I'd consider this a desirable side-effect rather than a
      feature.
      
      There's one problem with this: No matter how many or few children there are,
      if just one of them misbehaves, and all others (including the father) do
      everything right, we still always kill the whole family.  This hits in real
      life; whether it's javascript in konqueror resulting in kdeinit (and thus the
      whole KDE session) being hit or just a classical server that spawns children.
      
      Sidenote: The killer does kill all direct children as well, not only the
      selected father, see oom_kill_process().
      
      The idea in attached patch is that we do want to account the memory
      consumption of the (direct) children to the father -- however not fully.
      This maintains the property that fathers with too many children will still
      very likely be picked, whereas a single misbehaving child has the chance to
      be picked by the OOM killer.
      
      In the patch I account only half (rounded up) of the children's vm_size to
      the parent.  This means that if one child eats more mem than the rest of
      the family, it will be picked, otherwise it's still the father and thus the
      whole family that gets selected.
      
      This is a heuristic -- we could debate whether accounting for a fourth would
      be better than for half.  Or -- if people consider it worth the
      trouble -- make it a sysctl.  For now I stuck with accounting for half,
      which should IMHO be a significant improvement.
      
      The patch does one more thing: as users tend to be irritated by the choice
      of killed processes (mainly because the children are killed first, despite
      some of them having a very low OOM score), I added some more output: the
      selected (father) process is reported first and its oom_score printed
      to syslog.
      
      Description:
      
      Only account for half of children's vm size in oom score calculation
      
      This should still give the parent enough points in case of fork bombs.  If
      any child, however, has more than 50% of the vm size of all children
      together, it'll get a higher score and be selected.
      
      This patch also makes the kernel display the oom_score.
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  17. 02 Feb, 2006 1 commit
  18. 15 Jan, 2006 1 commit
      [PATCH] cpuset oom lock fix · 505970b9
      Committed by Paul Jackson
      The problem, reported in:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=5859
      
      and by various other email messages and lkml posts, is that the cpuset hook
      in the oom (out of memory) code can try to take a cpuset semaphore while
      holding the tasklist_lock (a spinlock).
      
      One must not sleep while holding a spinlock.
      
      The fix seems easy enough - move the cpuset semaphore region outside the
      tasklist_lock region.
      
      This required a few lines of mechanism to implement.  The oom code where
      the locking needs to be changed does not have access to the cpuset locks,
      which are internal to kernel/cpuset.c only.  So I provided a couple more
      cpuset interface routines, available to the rest of the kernel, which
      simply take and drop the lock needed here (cpuset's callback_sem).
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  19. 09 Jan, 2006 1 commit
  20. 09 Oct, 2005 1 commit