1. 21 Feb 2006, 1 commit
    • [PATCH] OOM kill: children accounting · 9827b781
      Committed by Kurt Garloff
      In the badness() calculation, there's currently this piece of code:
      
              /*
               * Processes which fork a lot of child processes are likely
               * a good choice. We add the vmsize of the children if they
               * have an own mm. This prevents forking servers to flood the
               * machine with an endless amount of children
               */
              list_for_each(tsk, &p->children) {
                      struct task_struct *chld;
                      chld = list_entry(tsk, struct task_struct, sibling);
                      if (chld->mm != p->mm && chld->mm)
                              points += chld->mm->total_vm;
              }
      
      The intention is clear: If some server (apache) keeps spawning new children
      and we run OOM, we want to kill the father rather than picking a child.
      
      This -- to some degree -- also helps a bit with getting fork bombs under
      control, though I'd consider this a desirable side-effect rather than a
      feature.
      
      There's one problem with this: No matter how many or few children there are,
      if just one of them misbehaves, and all others (including the father) do
      everything right, we still always kill the whole family.  This hits in real
      life; whether it's javascript in konqueror resulting in kdeinit (and thus the
      whole KDE session) being hit or just a classical server that spawns children.
      
      Sidenote: The killer does kill all direct children as well, not only the
      selected father, see oom_kill_process().
      
      The idea in attached patch is that we do want to account the memory
      consumption of the (direct) children to the father -- however not fully.
      This maintains the property that fathers with too many children will still
      very likely be picked, whereas a single misbehaving child has the chance to
      be picked by the OOM killer.
      
      In the patch I account only half (rounded up) of the children's vm_size to
      the parent.  This means that if one child eats more mem than the rest of
      the family, it will be picked, otherwise it's still the father and thus the
      whole family that gets selected.
      
      This is heuristics -- we could debate whether accounting for a fourth would
      be better than for half of it.  Or -- if people would consider it worth the
      trouble -- make it a sysctl.  For now I stuck to accounting for half,
      which should IMHO be a significant improvement.
      
      The patch does one more thing: As users tend to be irritated by the choice
      of killed processes (mainly because the children are killed first, despite
      some of them having a very low OOM score), I added some more output: The
      selected (father) process will be reported first and its oom_score printed
      to syslog.
      
      Description:
      
      Only account for half of children's vm size in oom score calculation
      
      This should still give the parent enough points in case of fork bombs.  If
      any child however has more than 50% of the vm size of all children
      together, it'll get a higher score and be elected.
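
      As a standalone illustration of this accounting rule (the struct and field
      names below are simplified stand-ins for task_struct and mm_struct, not
      kernel code), the half-rounded-up scheme could be sketched as:

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* Simplified stand-in for task_struct: each task carries its own vm
       * size and a list of direct children. A child with total_vm == 0
       * models a thread sharing the parent's mm, which is skipped. */
      struct task {
          unsigned long total_vm;     /* this task's vm size */
          struct task **children;     /* direct children */
          size_t nchildren;
      };

      /* Badness sketch: the parent gets its own total_vm plus half
       * (rounded up) of each child's. A single child using more memory
       * than the rest of the family combined can now outscore the
       * parent, while a prolific forker still accumulates a high score. */
      unsigned long badness_points(const struct task *p)
      {
          unsigned long points = p->total_vm;
          for (size_t i = 0; i < p->nchildren; i++) {
              const struct task *chld = p->children[i];
              if (chld->total_vm)     /* skip children sharing the mm */
                  points += (chld->total_vm + 1) / 2;
          }
          return points;
      }
      ```

      With a parent of vm size 100 and children of 50 and 7, the parent
      scores 100 + 25 + 4 = 129 here, so the family as a whole still
      outweighs any single well-behaved child.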
      
      This patch also makes the kernel display the oom_score.
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 02 Feb 2006, 1 commit
  3. 15 Jan 2006, 1 commit
    • [PATCH] cpuset oom lock fix · 505970b9
      Committed by Paul Jackson
      The problem, reported in:
      
        http://bugzilla.kernel.org/show_bug.cgi?id=5859
      
      and by various other email messages and lkml posts is that the cpuset hook
      in the oom (out of memory) code can try to take a cpuset semaphore while
      holding the tasklist_lock (a spinlock).
      
      One must not sleep while holding a spinlock.
      
      The fix seems easy enough - move the cpuset semaphore region outside the
      tasklist_lock region.
      
      This required a few lines of mechanism to implement.  The oom code where
      the locking needs to be changed does not have access to the cpuset locks,
      which are internal to kernel/cpuset.c only.  So I provided a couple more
      cpuset interface routines, available to the rest of the kernel, which
      simply take and drop the lock needed here (the cpuset callback_sem).
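
      A userspace sketch of the resulting lock nesting (two pthread mutexes
      stand in for the kernel's callback_sem and tasklist_lock; only the
      cpuset_lock/cpuset_unlock names come from the patch, the rest is
      illustrative):

      ```c
      #include <pthread.h>

      /* Stand-in locks: callback_sem models the cpuset semaphore (a
       * sleeping lock), tasklist_lock models the spinlock (no sleeping
       * allowed while it is held). */
      static pthread_mutex_t callback_sem  = PTHREAD_MUTEX_INITIALIZER;
      static pthread_mutex_t tasklist_lock = PTHREAD_MUTEX_INITIALIZER;

      /* The two interface routines the patch exports from kernel/cpuset.c,
       * so code outside cpuset.c can bracket a region with the cpuset lock. */
      void cpuset_lock(void)   { pthread_mutex_lock(&callback_sem); }
      void cpuset_unlock(void) { pthread_mutex_unlock(&callback_sem); }

      /* After the fix, the oom path nests the spinlock region strictly
       * inside the cpuset region, so it never sleeps under the spinlock. */
      int oom_path(void)
      {
          cpuset_lock();                       /* sleeping lock taken first */
          pthread_mutex_lock(&tasklist_lock);  /* "spinlock" taken inside */
          /* ... scan the task list, pick a victim ... */
          pthread_mutex_unlock(&tasklist_lock);
          cpuset_unlock();
          return 0;
      }
      ```

      The point of the ordering is simply that the lock which may sleep is
      acquired before, and released after, the one that may not.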
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 09 Jan 2006, 1 commit
  5. 09 Oct 2005, 1 commit
  6. 11 Sep 2005, 1 commit
  7. 08 Sep 2005, 2 commits
    • [PATCH] cpusets: confine oom_killer to mem_exclusive cpuset · ef08e3b4
      Committed by Paul Jackson
      Now the real motivation for this cpuset mem_exclusive patch series seems
      trivial.
      
      This patch keeps a task in or under one mem_exclusive cpuset from provoking an
      oom kill of a task under a non-overlapping mem_exclusive cpuset.  Since only
      interrupt and GFP_ATOMIC allocations are allowed to escape mem_exclusive
      containment, there is little to gain from oom killing a task under a
      non-overlapping mem_exclusive cpuset, as almost all kernel and user memory
      allocation must come from disjoint memory nodes.
      
      This patch enables configuring a system so that a runaway job under one
      mem_exclusive cpuset cannot cause the killing of a job in another such cpuset
      that might be using very high compute and memory resources for a prolonged
      time.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] cpusets: oom_kill tweaks · a49335cc
      Committed by Paul Jackson
      This patch series extends the use of the cpuset attribute 'mem_exclusive'
      to support cpuset configurations that:
       1) allow GFP_KERNEL allocations to come from a potentially larger
          set of memory nodes than GFP_USER allocations, and
       2) can constrain the oom killer to tasks running in cpusets in
          a specified subtree of the cpuset hierarchy.
      
      Here's an example usage scenario.  For a few hours or more, a large NUMA
      system at a University is to be divided in two halves, with a bunch of student
      jobs running in half the system under some form of batch manager, and with a
      big research project running in the other half.  Each of the student jobs is
      placed in a small cpuset, but should share the classic Unix time share
      facilities, such as buffered pages of files in /bin and /usr/lib.  The big
      research project wants no interference whatsoever from the student jobs, and
      has highly tuned, unusual memory and i/o patterns that intend to make full use
      of all the main memory on the nodes available to it.
      
      In this example, we have two big sibling cpusets, one of which is further
      divided into a more dynamic set of child cpusets.
      
      We want kernel memory allocations constrained by the two big cpusets, and user
      allocations constrained by the smaller child cpusets where present.  And we
      require that the oom killer not operate across the two halves of this system,
      or else the first time a student job runs amuck, the big research project will
      likely be first in line to get shot.
      
      Tweaking /proc/<pid>/oom_adj is not ideal -- if the big research project
      really does run amuck allocating memory, it should be shot, not some other
      task outside the research project's mem_exclusive cpuset.
      
      I propose to extend the use of the 'mem_exclusive' flag of cpusets to manage
      such scenarios.  Let memory allocations for user space (GFP_USER) be
      constrained by a task's current cpuset, but memory allocations for kernel
      space (GFP_KERNEL) be constrained by the nearest mem_exclusive ancestor of
      the current cpuset, even though kernel space allocations will still _prefer_
      to remain within the current task's cpuset, if memory is easily available.
      
      Let the oom killer be constrained to consider only tasks that are in
      overlapping mem_exclusive cpusets (it won't help much to kill a task that
      normally cannot allocate memory on any of the same nodes as the ones on which
      the current task can allocate.)
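
      Both constraints hinge on walking up to the nearest mem_exclusive
      ancestor.  A hedged sketch, with cpusets modelled as bitmasks of memory
      nodes (the struct layout and the names nearest_exclusive,
      hardwall_fallback_mems and mem_exclusive_overlap are illustrative
      stand-ins, not the kernel's exact API):

      ```c
      #include <assert.h>
      #include <stddef.h>

      /* A cpuset as a bitmask of allowed memory nodes: bit i set means
       * memory node i is in the set. */
      struct cpuset {
          unsigned long mems;
          int mem_exclusive;
          struct cpuset *parent;     /* NULL at the root */
      };

      /* Walk up to the nearest mem_exclusive ancestor; the top cpuset
       * covers all nodes, so the walk always terminates usefully. */
      static const struct cpuset *nearest_exclusive(const struct cpuset *cs)
      {
          while (cs->parent && !cs->mem_exclusive)
              cs = cs->parent;
          return cs;
      }

      /* Non-HARDWALL (e.g. GFP_KERNEL) requests, when short on memory,
       * may draw from the ancestor's larger node set rather than only
       * the task's own cpuset. */
      unsigned long hardwall_fallback_mems(const struct cpuset *cs)
      {
          return nearest_exclusive(cs)->mems;
      }

      /* The oom killer skips candidates whose exclusive region shares
       * no memory nodes with the allocating task's. */
      int mem_exclusive_overlap(const struct cpuset *a, const struct cpuset *b)
      {
          return (nearest_exclusive(a)->mems & nearest_exclusive(b)->mems) != 0;
      }
      ```

      In the university example above, a student cpuset under one exclusive
      half overlaps its own half (so oom killing stays inside it) but not
      the research half, which is therefore never considered.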
      
      The current constraints imposed on setting mem_exclusive are unchanged.  A
      cpuset may only be mem_exclusive if its parent is also mem_exclusive, and a
      mem_exclusive cpuset may not overlap any of its siblings' memory nodes.
      
      This patch was presented on linux-mm in early July 2005, though did not
      generate much feedback at that time.  It has been built for a variety of
      arch's using cross tools, and built, booted and tested for function on SN2
      (ia64).
      
      There are 4 patches in this set:
        1) Some minor cleanup, and some improvements to the code layout
           of one routine to make subsequent patches cleaner.
        2) Add another GFP flag - __GFP_HARDWALL.  It marks memory
           requests for USER space, which are tightly confined by the
           current task's cpuset.
        3) Now memory requests (such as KERNEL) that are not marked HARDWALL can,
           if short on memory, look in the potentially larger pool of memory
           defined by the nearest mem_exclusive ancestor cpuset of the current
           task's cpuset.
        4) Finally, modify the oom killer to skip any task whose mem_exclusive
           cpuset doesn't overlap ours.
      
      Patch (1), the one time I looked on an SN2 (ia64) build, actually saved 32
      bytes of kernel text space.  Patch (2) has no effect on the size of kernel
      text space (it just adds a preprocessor flag).  Patches (3) and (4) added
      about 600 bytes each of kernel text space, mostly in kernel/cpuset.c, which
      matters only if CONFIG_CPUSET is enabled.
      
      This patch:
      
      This patch applies a few comment and code cleanups to mm/oom_kill.c prior to
      applying a few small patches to improve cpuset management of memory placement.
      
      The comment changed in oom_kill.c was seriously misleading.  The code layout
      change in select_bad_process() makes room for adding another condition on
      which a process can be spared the oom killer (see the subsequent
      cpuset_nodes_overlap patch for this addition).
      
      Also a couple typos and spellos that bugged me, while I was here.
      
      This patch should have no material effect.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  8. 08 Jul 2005, 2 commits
  9. 22 Jun 2005, 1 commit
  10. 17 Apr 2005, 2 commits