1. 03 Oct, 2006 7 commits
  2. 02 Oct, 2006 1 commit
  3. 01 Oct, 2006 1 commit
    • [PATCH] csa: convert CONFIG tag for extended accounting routines · 8f0ab514
      Authored by Jay Lan
      There were a few accounting data/macros that are used in CSA but are #ifdef'ed
      inside CONFIG_BSD_PROCESS_ACCT.  This patch is to change those ifdef's from
      CONFIG_BSD_PROCESS_ACCT to CONFIG_TASK_XACCT.  A few defines are moved from
      kernel/acct.c and include/linux/acct.h to kernel/tsacct.c and
      include/linux/tsacct_kern.h.
      Signed-off-by: Jay Lan <jlan@sgi.com>
      Cc: Shailabh Nagar <nagar@watson.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Cc: Jes Sorensen <jes@sgi.com>
      Cc: Chris Sturtivant <csturtiv@sgi.com>
      Cc: Tony Ernst <tee@sgi.com>
      Cc: Guillaume Thouvenin <guillaume.thouvenin@bull.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  4. 30 Sep, 2006 7 commits
  5. 26 Sep, 2006 1 commit
    • [PATCH] Fix longstanding load balancing bug in the scheduler · 0a2966b4
      Authored by Christoph Lameter
      The scheduler will stop load balancing if the most busy processor contains
      processes pinned via processor affinity.
      
      The scheduler currently does only one search for the busiest cpu.  If it
      cannot pull any tasks away from the busiest cpu because they are pinned,
      the scheduler goes into a corner and sulks, leaving the idle processors
      idle.
      
      E.g. if processor 0 is busy running four tasks pinned via taskset,
      processor 1 has none, and someone has just started two processes on
      processor 2, then the scheduler will not move one of those two processes
      away from processor 2.
      
      This patch fixes that issue by forcing the scheduler to come out of its
      corner and retrying the load balancing by considering other processors for
      load balancing.
      
      This patch was originally developed by John Hawkes and discussed at
      
          http://marc.theaimsgroup.com/?l=linux-kernel&m=113901368523205&w=2.
      
      I have removed extraneous material and gone back to equipping struct rq
      with the cpu the queue is associated with since this makes the patch much
      easier and it is likely that others in the future will have the same
      difficulty of figuring out which processor owns which runqueue.
      
      The overhead added through these patches is a single word on the stack if
      the kernel is configured to support 32 cpus or fewer (32 bit).  For 32-bit
      environments the maximum number of cpus that can be configured is 255,
      which would result in the use of 32 additional bytes on the stack.  On
      IA64 up to 1k cpus can be configured, which will result in the use of 128
      additional bytes on the stack.  The maximum additional cache footprint is
      one cacheline.  Typically memory use will be much less than a cacheline,
      and the additional cpumask will be placed on the stack in a cacheline that
      already contains other local variables.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Cc: John Hawkes <hawkes@sgi.com>
      Cc: "Siddha, Suresh B" <suresh.b.siddha@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Peter Williams <pwil3058@bigpond.net.au>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  6. 28 Aug, 2006 1 commit
  7. 01 Aug, 2006 3 commits
  8. 15 Jul, 2006 3 commits
  9. 11 Jul, 2006 2 commits
  10. 04 Jul, 2006 7 commits
  11. 01 Jul, 2006 1 commit
    • [PATCH] cond_resched() fix · e7b38404
      Authored by Andrew Morton
      Fix a bug identified by Zou Nan hai <nanhai.zou@intel.com>:
      
      If the system is in state SYSTEM_BOOTING, and need_resched() is true,
      cond_resched() returns true even though it didn't reschedule.  Consequently
      need_resched() remains true and JBD locks up.
      
      Fix that by teaching cond_resched() to only return true if it really did call
      schedule().
      
      cond_resched_lock() and cond_resched_softirq() have a problem too.  If we're
      in SYSTEM_BOOTING state and need_resched() is true, these functions will drop
      the lock and will then try to call schedule(), but the SYSTEM_BOOTING state
      will prevent schedule() from being called.  So on return, need_resched() will
      still be true, but cond_resched_lock() has to return 1 to tell the caller that
      the lock was dropped.  The caller will probably lock up.
      
      Bottom line: if these functions dropped the lock, they _must_ call schedule()
      to clear need_resched().   Make it so.
      
      Also, uninline __cond_resched().  It's largish and slow-path.
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  12. 28 Jun, 2006 6 commits