1. 21 Feb 2006 (7 commits)
    • [PATCH] cpu hotplug documentation fix · 6303dbf5
      Committed by Heiko Carstens
      It looks like there was a merge conflict when patches
      8f8b1138 and
      255acee7 were applied that was never properly
      resolved. Fix this and add some additional description.
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] x86_64: Don't set CONFIG_DEBUG_INFO in defconfig · 2e2b4263
      Committed by Andi Kleen
      Undo the setting of CONFIG_DEBUG_INFO in the previous defconfig update.  It
      makes every build much slower, needs more disk space, and isn't a good
      default.
      Signed-off-by: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] spi: Fix modular master driver remove and device suspend/remove · d2799f08
      Committed by Stephen Street
      Fix two problems in the SPI subsystem (a sketch of the guard for the
      second follows the list):

      1) The SPI subsystem core dumps when a modular SPI master driver is
         unloaded.
      2) The SPI subsystem core dumps when an SPI slave device is
         suspended/resumed while the modular slave driver is not loaded.
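      The second oops comes from calling into a slave driver that isn't bound.
      A minimal sketch of the guard, assuming the bus-level suspend hook and
      the to_spi_driver()/to_spi_device() helpers from <linux/spi/spi.h>; the
      function shape here is illustrative, not the actual patch:

              #include <linux/device.h>
              #include <linux/spi/spi.h>

              /* Bus-level suspend for an SPI slave.  If no slave driver
               * module is bound, dev->driver is NULL and there is nothing
               * to quiesce; dereferencing it unconditionally is the crash. */
              static int spi_suspend_sketch(struct device *dev, pm_message_t msg)
              {
                      int value = 0;

                      if (dev->driver) {
                              struct spi_driver *drv = to_spi_driver(dev->driver);

                              if (drv->suspend)
                                      value = drv->suspend(to_spi_device(dev), msg);
                      }
                      return value;
              }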
      Signed-off-by: Stephen Street <stephen@streetfiresound.com>
      Signed-off-by: David Brownell <dbrownell@users.sourceforge.net>
      Cc: Greg KH <greg@kroah.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] cfi_cmdset_0001: fix range for cache invalidation · d86d4370
      Committed by Alexey Korolev
      I found an issue in cfi_cmdset_0001.c related to cache-region
      invalidation in the buffered write procedure.

      do_write_buffer() invalidates the cache from "cmd_adr" to
      "cmd_adr + len", while the region actually modified runs from "adr" to
      "adr + len"; see the sketch below.

      This issue affects writes followed by reads of data in small chunks.
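      For illustration, assuming the invalidation goes through the
      INVALIDATE_CACHED_RANGE() helper from linux/mtd/map.h, the fix amounts
      to passing the destination address instead of the command address:

              /* before: starts at the aligned command address and misses
               * the tail of the written range whenever adr > cmd_adr */
              INVALIDATE_CACHED_RANGE(map, cmd_adr, len);

              /* after: invalidate exactly the range that was modified */
              INVALIDATE_CACHED_RANGE(map, adr, len);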
      Signed-off-by: Nicolas Pitre <nico@cam.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] i386: need to pass virtual address to smp_read_mpc() · 7d4c8e56
      Committed by Daniel Yeisley
      I'm seeing a kernel panic on an ES7000-600 when booting in virtual wire
      mode.  The panic happens because smp_read_mpc() is passed a physical
      address where it expects a virtual one.  I tested the attached patch on
      the ES7000-600 and on a 2-CPU Dell box, and saw no problems on either.
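      A sketch of the fix at the call site, assuming the 2.6-era i386 naming
      where the MP floating pointer structure carries the physical address of
      the MP configuration table:

              #include <asm/io.h>     /* phys_to_virt() */

              /* smp_read_mpc() parses the table through its pointer
               * argument, so translate the physical address first
               * rather than passing it through unchanged. */
              smp_read_mpc(phys_to_virt(mpf->mpf_physptr));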
      Signed-off-by: Dan Yeisley <dan.yeisley@unisys.com>
      Acked-by: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Terminate process that fails on a constrained allocation · 9b0f8b04
      Committed by Christoph Lameter
      Some allocations are restricted to a limited set of nodes (due to memory
      policies or cpuset constraints).  If the page allocator is not able to find
      enough memory then that does not mean that overall system memory is low.
      
      In particular, going postal and more or less randomly shooting at
      processes is not likely to help the situation; it may just amount to
      suicide (the whole system coming down).
      
      It is better to signal to the process that no memory exists given the
      constraints that the process (or the configuration of the process) has
      placed on the allocation behavior.  The process may be killed but then the
      sysadmin or developer can investigate the situation.  The solution is
      similar to what we do when running out of hugepages.
      
      This patch adds a check before we kill processes.  At that point
      performance considerations do not matter much, so we just scan the
      zonelist and reconstruct the list of allowed nodes.  If that list does
      not contain all online nodes, this is a constrained allocation and we
      should kill the current process instead; a sketch of the check follows.
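      A minimal sketch of that check, assuming the 2.6-era zonelist layout
      and the nodemask helpers; the function name is illustrative, not the
      exact kernel implementation:

              #include <linux/mmzone.h>
              #include <linux/nodemask.h>

              /* Collect every node the failed allocation was allowed to
               * touch; if that set misses an online node, the allocation
               * was constrained by a mempolicy or cpuset. */
              static int allocation_was_constrained(struct zonelist *zonelist)
              {
                      nodemask_t nodes = NODE_MASK_NONE;
                      struct zone **z;

                      for (z = zonelist->zones; *z; z++)
                              node_set((*z)->zone_pgdat->node_id, nodes);

                      return !nodes_equal(nodes, node_online_map);
              }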
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] OOM kill: children accounting · 9827b781
      Committed by Kurt Garloff
      In the badness() calculation, there's currently this piece of code:
      
              /*
               * Processes which fork a lot of child processes are likely
               * a good choice. We add the vmsize of the children if they
               * have an own mm. This prevents forking servers to flood the
               * machine with an endless amount of children
               */
              list_for_each(tsk, &p->children) {
                      struct task_struct *chld;
                      chld = list_entry(tsk, struct task_struct, sibling);
                      if (chld->mm != p->mm && chld->mm)
                              points += chld->mm->total_vm;
              }
      
      The intention is clear: If some server (apache) keeps spawning new children
      and we run OOM, we want to kill the father rather than picking a child.
      
      To some degree this also helps with getting fork bombs under control,
      though I'd consider that a desirable side effect rather than a feature.
      
      There's one problem with this: no matter how many or few children there
      are, if just one of them misbehaves while all the others (including the
      father) do everything right, we still always kill the whole family.  This
      hits in real life, whether it's JavaScript in konqueror resulting in
      kdeinit (and thus the whole KDE session) being hit, or just a classical
      server that spawns children.
      
      Sidenote: The killer does kill all direct children as well, not only the
      selected father, see oom_kill_process().
      
      The idea in attached patch is that we do want to account the memory
      consumption of the (direct) children to the father -- however not fully.
      This maintains the property that fathers with too many children will still
      very likely be picked, whereas a single misbehaving child has the chance to
      be picked by the OOM killer.
      
      In the patch I account only half (rounded up) of the children's vm_size to
      the parent.  This means that if one child eats more mem than the rest of
      the family, it will be picked, otherwise it's still the father and thus the
      whole family that gets selected.
      
      This is a heuristic -- we could debate whether accounting for a fourth
      would be better than for half, or -- if people consider it worth the
      trouble -- make it a sysctl.  For now I stuck to accounting for half,
      which should IMHO be a significant improvement; the resulting loop is
      sketched below.
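      Applied to the loop quoted above, the change reduces to one line; the
      exact arithmetic in the patch may differ, but the idea is:

              /* charge half (rounded up) of each child's VM size: many
               * small children still doom the forking server, but one
               * child holding over half of the family's memory will
               * outscore its father */
              if (chld->mm != p->mm && chld->mm)
                      points += chld->mm->total_vm / 2 + 1;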
      
      The patch does one more thing: as users tend to be irritated by the
      choice of killed processes (mainly because the children are killed first,
      despite some of them having a very low OOM score), I added some more
      output: the selected (father) process is reported first and its
      oom_score is printed to syslog.
      
      Description:
      
      Only account for half of children's vm size in oom score calculation
      
      This should still give the parent enough points in case of fork bombs.  If
      any child, however, has more than 50% of the VM size of all children
      together, it'll get a higher score and be selected.
      
      This patch also makes the kernel display the oom_score.
      Signed-off-by: Kurt Garloff <garloff@suse.de>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 18 Feb 2006 (33 commits)