1. 11 Sep 2005, 16 commits
    • [PATCH] kernel: fix-up schedule_timeout() usage · 75bcc8c5
      Nishanth Aravamudan authored
      Use schedule_timeout_{,un}interruptible() instead of
      set_current_state()/schedule_timeout() to reduce kernel size.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      75bcc8c5
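      A hedged sketch of the call-site change this commit describes; the timeout
      value and surrounding code are hypothetical, only the sleep pattern comes
      from the commit message:

          /* before: open-coded sleep */
          set_current_state(TASK_INTERRUPTIBLE);
          schedule_timeout(HZ);

          /* after: the helper sets the task state itself */
          schedule_timeout_interruptible(HZ);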
    • [PATCH] add schedule_timeout_{,un}interruptible() interfaces · 64ed93a2
      Nishanth Aravamudan authored
      Add schedule_timeout_{,un}interruptible() interfaces so that
      schedule_timeout() callers don't have to worry about forgetting to add the
      set_current_state() call beforehand.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      64ed93a2
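      A minimal sketch of what these wrappers plausibly amount to, assuming they
      simply combine the two calls named above (not verified against the tree):

          signed long __sched schedule_timeout_interruptible(signed long timeout)
          {
                  __set_current_state(TASK_INTERRUPTIBLE);
                  return schedule_timeout(timeout);
          }

          signed long __sched schedule_timeout_uninterruptible(signed long timeout)
          {
                  __set_current_state(TASK_UNINTERRUPTIBLE);
                  return schedule_timeout(timeout);
          }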
    • [PATCH] kernel/acct: add kerneldoc · 417ef531
      Randy Dunlap authored
      for kernel/acct.c:
      - fix typos
      - add kerneldoc for non-static functions
      Signed-off-by: Randy Dunlap <rdunlap@xenotime.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      417ef531
    • [PATCH] sched: allow the load to grow upto its cpu_power · 0c117f1b
      Siddha, Suresh B authored
      Don't pull tasks from a group if that would cause the group's total load to
      drop below its total cpu_power (ie.  cause the group to start going idle).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0c117f1b
    • [PATCH] sched: don't kick ALB in the presence of pinned task · fa3b6ddc
      Siddha, Suresh B authored
      Jack Steiner brought up this issue at my OLS talk.
      
      Take a scenario where two tasks are pinned to two HT threads in a physical
      package.  Idle packages in the system will keep kicking migration_thread on
      the busy package without any success.
      
      We will run into similar scenarios in the presence of CMP/NUMA.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fa3b6ddc
    • [PATCH] sched: use cached variable in sys_sched_yield() · 5927ad78
      Renaud Lienhart authored
      In sys_sched_yield(), we cache current->array in the "array" variable, thus
      there's no need to dereference "current" again later.
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      5927ad78
    • [PATCH] sched: HT optimisation · 5969fe06
      Nick Piggin authored
      If an idle sibling of an HT queue encounters a busy sibling, then perform
      higher-level load balancing of the non-idle variety.
      
      Performance of multiprocessor HT systems with low numbers of tasks
      (generally < number of virtual CPUs) can be significantly worse than the
      exact same workloads when running in non-HT mode.  The reason is largely
      due to poor scheduling behaviour.
      
      This patch improves the situation, making the performance gap far less
      significant on one problematic test case (tbench).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      5969fe06
    • [PATCH] sched: less locking · e17224bf
      Nick Piggin authored
      During periodic load balancing, don't hold this runqueue's lock while
      scanning remote runqueues, which can take a non trivial amount of time
      especially on very large systems.
      
      Holding the runqueue lock will only help to stabilise ->nr_running, however
      this doesn't do much to help because tasks being woken will simply get held
      up on the runqueue lock, so ->nr_running would not provide a really
      accurate picture of runqueue load in that case anyway.
      
      What's more, ->nr_running (and possibly the cpu_load averages) of remote
      runqueues won't be stable anyway, so load balancing is always an inexact
      operation.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e17224bf
    • [PATCH] sched: less newidle locking · d6d5cfaf
      Nick Piggin authored
      Similarly to the earlier change in load_balance, only lock the runqueue in
      load_balance_newidle if the busiest queue found has a nr_running > 1.  This
      will reduce the frequency of expensive remote runqueue lock acquisitions in the
      schedule() path on some workloads.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d6d5cfaf
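      A conceptual sketch of the locking pattern described above; the helper names
      follow the 2.6-era scheduler, but the snippet is illustrative rather than
      the actual patch:

          /* in load_balance_newidle(): find the busiest queue without holding
           * its lock, and take the remote lock only if there is work to pull */
          busiest = find_busiest_queue(group);
          if (busiest && busiest->nr_running > 1) {
                  double_lock_balance(this_rq, busiest);
                  nr_moved = move_tasks(this_rq, this_cpu, busiest,
                                        imbalance, sd, NEWLY_IDLE, NULL);
                  spin_unlock(&busiest->lock);
          }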
    • [PATCH] sched: fix SMT scheduler latency bug · 67f9a619
      Ingo Molnar authored
      William Weston reported unusually high scheduling latencies on his x86 HT
      box, on the -RT kernel.  I managed to reproduce it on my HT box and the
      latency tracer shows the incident in action:
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup (try_to_wake_up)
              ..............................................................
              ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
              ..............................................................
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup <<...>-2778> (73 1)
            du-2803  3Dnh2    0us : _raw_spin_unlock (try_to_wake_up)
              ................................................
              ... still on CPU#3, we send an IPI to CPU#1: ...
              ................................................
            du-2803  3Dnh1    0us : resched_task (try_to_wake_up)
            du-2803  3Dnh1    1us : smp_send_reschedule (try_to_wake_up)
            du-2803  3Dnh1    1us : send_IPI_mask_bitmask (smp_send_reschedule)
            du-2803  3Dnh1    2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
              ...............................................
              ... 1 usec later, the IPI arrives on CPU#1: ...
              ...............................................
        <idle>-0     1Dnh.    2us : smp_reschedule_interrupt (c0100c5a 0 0)
      
      So far so good, this is the normal wakeup/preemption mechanism.  But here
      comes the scheduler anomaly on CPU#1:
      
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    3us : __schedule (preempt_schedule_irq)
        <idle>-0     1Dnh.    3us : profile_hit (__schedule)
        <idle>-0     1Dnh1    3us : sched_clock (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irq (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irqsave (__schedule)
        <idle>-0     1Dnh2    5us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh1    5us : preempt_schedule (__schedule)
        <idle>-0     1Dnh1    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh2    6us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    8us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh2    8us : preempt_schedule (__schedule)
        <idle>-0     1Dnh2    8us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    9us : trace_stop_sched_switched (__schedule)
        <idle>-0     1Dnh2    9us : _raw_spin_lock (trace_stop_sched_switched)
        <idle>-0     1Dnh3   10us : trace_stop_sched_switched <<...>-2778> (73 8c)
        <idle>-0     1Dnh3   10us : _raw_spin_unlock (trace_stop_sched_switched)
        <idle>-0     1Dnh1   10us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh.   11us : local_irq_enable_noresched (preempt_schedule_irq)
        <idle>-0     1Dnh.   11us < (0)
      
      We didn't pick up pid 2778! It only gets scheduled much later:
      
         <...>-2778  1Dnh2  412us : __switch_to (__schedule)
         <...>-2778  1Dnh2  413us : __schedule <<idle>-0> (8c 73)
         <...>-2778  1Dnh2  413us : _raw_spin_unlock (__schedule)
         <...>-2778  1Dnh1  413us : trace_stop_sched_switched (__schedule)
         <...>-2778  1Dnh1  414us : _raw_spin_lock (trace_stop_sched_switched)
         <...>-2778  1Dnh2  414us : trace_stop_sched_switched <<...>-2778> (73 1)
         <...>-2778  1Dnh2  414us : _raw_spin_unlock (trace_stop_sched_switched)
         <...>-2778  1Dnh1  415us : trace_stop_sched_switched (__schedule)
      
      The reason for this anomaly is the following code in dependent_sleeper():
      
                      /*
                       * If a user task with lower static priority than the
                       * running task on the SMT sibling is trying to schedule,
                       * delay it till there is proportionately less timeslice
                       * left of the sibling task to prevent a lower priority
                       * task from using an unfair proportion of the
                       * physical cpu's resources. -ck
                       */
      [...]
                              if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
                                      100) > task_timeslice(p)))
                                              ret = 1;
      
      Note that in contrast to the comment above, we don't actually do the check
      based on static priority, we do the check based on timeslices.  But
      timeslices go up and down, and even highprio tasks can randomly have very
      low timeslices (just before their next refill) and can thus be judged as
      'lowprio' by the above piece of code.  This condition is clearly buggy.
      The correct test is to check for static_prio _and_ to check for the
      preemption priority.  Even on different static priority levels, a
      higher-prio interactive task should not be delayed due to a
      higher-static-prio CPU hog.
      
      There is a symmetric bug in the 'kick SMT sibling' code of this function as
      well, which can be solved in a similar way.
      
      The patch below (against the current scheduler queue in -mm) fixes both
      bugs.  I have build and boot-tested this on x86 SMT, and nice +20 tasks
      still get properly throttled - so the dependent-sleeper logic is still in
      action.
      
      Btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
      property was applied too liberally, so this fix is likely a throughput
      improvement as well.
      
      I separated out a smt_slice() function to make the code easier to read.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      67f9a619
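      A hedged sketch of the smt_slice() helper mentioned above, reconstructed
      from the quoted expression rather than copied from the final patch:

          /*
           * Portion of a sibling's timeslice, scaled by per_cpu_gain, that the
           * other task's timeslice is compared against when deciding whether
           * to delay it or to kick the sibling.
           */
          static inline unsigned int smt_slice(task_t *p, struct sched_domain *sd)
          {
                  return p->time_slice * (100 - sd->per_cpu_gain) / 100;
          }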
    • [PATCH] sched: TASK_NONINTERACTIVE · d79fc0fc
      Ingo Molnar authored
      This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
      used by blocking points to mark the task's wait as "non-interactive".  This
      does not mean the task will be considered a CPU-hog - the wait will simply
      not have an effect on the waiting task's priority - positive or negative
      alike.  Right now only pipe_wait() will make use of it, because it's a
      common source of not-so-interactive waits (kernel compilation jobs, etc.).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d79fc0fc
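      A hedged sketch of how a blocking point such as pipe_wait() might use the
      new bit; the wait-queue plumbing shown here is assumed, only the state flag
      is taken from the description:

          DEFINE_WAIT(wait);

          /* mark the wait as non-interactive: it neither boosts nor penalises
           * the waiter's dynamic priority */
          prepare_to_wait(&pipe_wait_queue, &wait,
                          TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
          schedule();
          finish_wait(&pipe_wait_queue, &wait);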
    • [PATCH] sched cleanups · 95cdf3b7
      Ingo Molnar authored
      Whitespace cleanups.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      95cdf3b7
    • [PATCH] sched: make idlest_group/cpu cpus_allowed-aware · da5a5522
      M.Baris Demiray authored
      Add relevant checks to find_idlest_group() and find_idlest_cpu() so that
      they return only groups that contain allowed CPUs, and only allowed CPUs,
      respectively.
      Signed-off-by: M.Baris Demiray <baris@labristeknoloji.com>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      da5a5522
    • [PATCH] sched: run SCHED_NORMAL tasks with real time tasks on SMT siblings · fc38ed75
      Con Kolivas authored
      The hyperthread aware nice handling currently puts to sleep any non real
      time task when a real time task is running on its sibling cpu.  This can
      lead to prolonged starvation by having the non real time task pegged to the
      cpu with load balancing not pulling that task away.
      
      Currently we force lower priority hyperthread tasks to run a percentage of
      time difference based on timeslice differences which is meaningless when
      comparing real time tasks to SCHED_NORMAL tasks.  We can allow non real
      time tasks to run with real time tasks on the sibling up to per_cpu_gain%
      if we use jiffies as a counter.
      
      Cleanups and micro-optimisations to the relevant code section should make
      it more understandable as well.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fc38ed75
    • [PATCH] cpuset semaphore depth check deadlock fix · 4247bdc6
      Paul Jackson authored
      The cpusets-formalize-intermediate-gfp_kernel-containment patch
      has a deadlock problem.
      
      This patch was part of a set of four patches to make more
      extensive use of the cpuset 'mem_exclusive' attribute to
      manage kernel GFP_KERNEL memory allocations and to constrain
      the out-of-memory (oom) killer.
      
      A task that is changing cpusets in particular ways on a system
      when it is very short of free memory could double trip over
      the global cpuset_sem semaphore (get the lock and then deadlock
      trying to get it again).
      
      The second attempt to get cpuset_sem would be in the routine
      cpuset_zone_allowed().  This was discovered by code inspection.
      I cannot reproduce the problem except with an artificially
      hacked kernel and a specialized stress test.
      
      In real life you cannot hit this unless you are manipulating
      cpusets, and are very unlikely to hit it unless you are rapidly
      modifying cpusets on a memory tight system.  Even then it would
      be a rare occurrence.
      
      If you did hit it, the task double tripping over cpuset_sem
      would deadlock in the kernel, and any other task also trying
      to manipulate cpusets would deadlock there too, on cpuset_sem.
      Your batch manager would be wedged solid (if it was cpuset
      savvy), but classic Unix shells and utilities would work well
      enough to reboot the system.
      
      The unusual condition that led to this bug is that unlike most
      semaphores, cpuset_sem _can_ be acquired while in the page
      allocation code, when __alloc_pages() calls cpuset_zone_allowed.
      So it is easy to mistakenly perform the following sequence:
        1) task makes system call to alter a cpuset
        2) take cpuset_sem
        3) try to allocate memory
        4) memory allocator, via cpuset_zone_allowed, tries to take cpuset_sem
        5) deadlock
      
      The reason that this is not a serious bug for most users
      is that almost all calls to allocate memory don't require
      taking cpuset_sem.  Only some code paths off the beaten
      track require taking cpuset_sem -- which is good.  Taking
      a global semaphore on the main code path for allocating
      memory would not scale well.
      
      This patch fixes this deadlock by wrapping the up() and down()
      calls on cpuset_sem in kernel/cpuset.c with code that tracks
      the nesting depth of the current task on that semaphore, and
      only does the real down() if the task doesn't hold the lock
      already, and only does the real up() if the nesting depth
      (number of unmatched downs) is exactly one.
      
      The previous required use of refresh_mems(), anytime that
      the cpuset_sem semaphore was acquired and the code executed
      while holding that semaphore might try to allocate memory, is
      no longer required.  Two refresh_mems() calls were removed
      thanks to this.  This is a good change, as failing to get
      all the necessary refresh_mems() calls placed was a primary
      source of bugs in this cpuset code.  The only remaining call
      to refresh_mems() is made while doing a memory allocation,
      if certain task memory placement data needs to be updated
      from its cpuset, due to the cpuset having been changed behind
      the task's back.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4247bdc6
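      A hedged sketch of the nesting-depth wrapper described above; the
      cpuset_sem_nest_depth field is a hypothetical name used only to illustrate
      the scheme:

          static void cpuset_down(struct semaphore *psem)
          {
                  /* only the outermost caller actually takes the semaphore */
                  if (current->cpuset_sem_nest_depth == 0)
                          down(psem);
                  current->cpuset_sem_nest_depth++;
          }

          static void cpuset_up(struct semaphore *psem)
          {
                  /* only the matching outermost up() actually releases it */
                  current->cpuset_sem_nest_depth--;
                  if (current->cpuset_sem_nest_depth == 0)
                          up(psem);
          }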
    • [PATCH] spinlock consolidation · fb1c8f93
      Ingo Molnar authored
      This patch (written by me and also containing many suggestions of Arjan van
      de Ven) does a major cleanup of the spinlock code.  It does the following
      things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds)
      
      Also, I've enhanced the rwlock debugging facility, it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_types_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
      cross-compilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
        some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
        are well tested and contained entirely inside arch specific code.  I do NOT
        expect any new issues to arise with them.
      
       If someone does ever need to use debug/metrics with them, then they will
        need to unravel this hairball between spinlocks, atomic ops, and bit ops
        that exist only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fb1c8f93
  2. 10 Sep 2005, 7 commits
  3. 08 Sep 2005, 17 commits
    • [PATCH] kprobes: fix bug when probed on task and isr functions · deac66ae
      Keshavamurthy Anil S authored
      This patch fixes a race condition in which the system used to hang, or
      sometimes crash within minutes, when kprobes were inserted on an ISR
      routine and a task routine.
      
      The fix has been stress tested on i386, ia64, ppc64 and x86_64.  To
      reproduce the problem, insert kprobes on the schedule() and do_IRQ()
      functions and you should see a hang or system crash.
      Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Acked-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      deac66ae
    • [PATCH] Kprobes: prevent possible race conditions generic · d0aaff97
      Prasanna S Panchamukhi authored
      There are possible race conditions if probes are placed on routines within
      the kprobes files or on routines used by kprobes.  For example, if you put
      a probe on get_kprobe(), the system can hang while inserting probes on any
      routine such as do_fork(): while inserting probes on do_fork(),
      register_kprobes() grabs the kprobes spinlock and executes get_kprobe();
      to handle the probe of get_kprobe(), kprobes_handler() gets executed and
      tries to grab the kprobes spinlock, and spins forever.  This patch avoids
      such race conditions by preventing probes on routines within the kprobes
      file and on routines used by kprobes.
      
      I have modified the patches as per Andi Kleen's suggestion to move the
      kprobes routines and other routines used by kprobes to a separate
      section, .kprobes.text.
      
      Also moved page fault and exception handlers, general protection fault to
      .kprobes.text section.
      
      These patches have been tested on i386, x86_64 and ppc64 architectures, also
      compiled on ia64 and sparc64 architectures.
      Signed-off-by: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d0aaff97
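      A hedged sketch of the mechanism described above; the __kprobes annotation
      is an assumption about how functions end up in the dedicated section:

          /* tag a function so the linker places it in .kprobes.text, a range
           * that probe registration can then refuse to touch */
          #define __kprobes  __attribute__((__section__(".kprobes.text")))

          static struct kprobe * __kprobes get_kprobe(void *addr)
          {
                  /* lookup body elided in this sketch */
                  return NULL;
          }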
    • [PATCH] introduce and use kzalloc · dd392710
      Pekka J Enberg authored
      This patch introduces a kzalloc wrapper and converts kernel/ to use it.  It
      saves a little program text.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      dd392710
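      A minimal sketch of such a wrapper, assuming it simply combines kmalloc()
      and memset() (the in-tree version may differ in placement and annotations):

          void *kzalloc(size_t size, unsigned int flags)
          {
                  void *ret = kmalloc(size, flags);
                  if (ret)
                          memset(ret, 0, size);
                  return ret;
          }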
    • [PATCH] remove duplicated code from proc and ptrace · ab8d11be
      Miklos Szeredi authored
      Extract common code used by ptrace_attach() and may_ptrace_attach()
      into a separate function.
      Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
      Cc: <viro@parcelfarce.linux.theplanet.co.uk>
      Cc: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ab8d11be
    • [PATCH] cpusets: re-enable "dynamic sched domains" · 0811bab2
      John Hawkes authored
      Revert the hack introduced last week.
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0811bab2
    • [PATCH] cpusets: fix the "dynamic sched domains" bug · d1b55138
      John Hawkes authored
      For a NUMA system with multiple CPUs per node, declaring a cpu-exclusive
      cpuset that includes only some, but not all, of the CPUs in a node will mangle
      the sched domain structures.
      Signed-off-by: John Hawkes <hawkes@sgi.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d1b55138
    • 9c1cfda2
    • [PATCH] cpusets: confine oom_killer to mem_exclusive cpuset · ef08e3b4
      Paul Jackson authored
      Now the real motivation for this cpuset mem_exclusive patch series seems
      trivial.
      
      This patch keeps a task in or under one mem_exclusive cpuset from provoking an
      oom kill of a task under a non-overlapping mem_exclusive cpuset.  Since only
      interrupt and GFP_ATOMIC allocations are allowed to escape mem_exclusive
      containment, there is little to gain from oom killing a task under a
      non-overlapping mem_exclusive cpuset, as almost all kernel and user memory
      allocation must come from disjoint memory nodes.
      
      This patch enables configuring a system so that a runaway job under one
      mem_exclusive cpuset cannot cause the killing of a job in another such cpuset
      that might be using very high compute and memory resources for a prolonged
      time.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ef08e3b4
    • [PATCH] cpusets: formalize intermediate GFP_KERNEL containment · 9bf2229f
      Paul Jackson authored
      This patch makes use of the previously underutilized cpuset flag
      'mem_exclusive' to provide what amounts to another layer of memory placement
      resolution.  With this patch, there are now the following four layers of
      memory placement available:
      
       1) The whole system (interrupt and GFP_ATOMIC allocations can use this),
       2) The nearest enclosing mem_exclusive cpuset (GFP_KERNEL allocations can use),
       3) The current task's cpuset (GFP_USER allocations constrained to here), and
       4) Specific node placement, using mbind and set_mempolicy.
      
      These nest - each layer is a subset (same or within) of the previous.
      
      Layer (2) above is new, with this patch.  The call used to check whether a
      zone (its node, actually) is in a cpuset (in its mems_allowed, actually) is
      extended to take a gfp_mask argument, and its logic is extended, in the case
      that __GFP_HARDWALL is not set in the flag bits, to look up the cpuset
      hierarchy for the nearest enclosing mem_exclusive cpuset, to determine if
      placement is allowed.  The definition of GFP_USER, which used to be identical
      to GFP_KERNEL, is changed to also set the __GFP_HARDWALL bit, in the previous
      cpuset_gfp_hardwall_flag patch.
      
      GFP_ATOMIC and GFP_KERNEL allocations will stay within the current task's
      cpuset, so long as any node therein is not too tight on memory, but will
      escape to the larger layer, if need be.
      
      The intended use is to allow something like a batch manager to handle several
      jobs, each job in its own cpuset, but using common kernel memory for caches
      and such.  Swapper and oom_kill activity is also constrained to Layer (2).  A
      task in or below one mem_exclusive cpuset should not cause swapping on nodes
      in another non-overlapping mem_exclusive cpuset, nor provoke oom_killing of a
      task in another such cpuset.  Heavy use of kernel memory for i/o caching and
      such by one job should not impact the memory available to jobs in other
      non-overlapping mem_exclusive cpusets.
      
      This patch enables providing hardwall, inescapable cpusets for memory
      allocations of each job, while sharing kernel memory allocations between
      several jobs, in an enclosing mem_exclusive cpuset.
      
      Like Dinakar's patch earlier to enable administering sched domains using the
      cpu_exclusive flag, this patch also provides a useful meaning to a cpuset flag
      that had previously done nothing much useful other than restrict what cpuset
      configurations were allowed.
      Signed-off-by: Paul Jackson <pj@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9bf2229f
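      A hedged, simplified sketch of the hardwall check described above;
      nearest_exclusive_ancestor_allows() is a hypothetical helper standing in
      for the cpuset-hierarchy walk:

          int cpuset_zone_allowed(struct zone *z, unsigned int gfp_mask)
          {
                  int node = z->zone_pgdat->node_id;

                  if (in_interrupt())                /* layer 1: whole system */
                          return 1;
                  if (node_isset(node, current->mems_allowed))
                          return 1;                  /* layer 3: task's cpuset */
                  if (gfp_mask & __GFP_HARDWALL)     /* GFP_USER never escapes */
                          return 0;
                  /* layer 2: GFP_KERNEL may escape to the nearest enclosing
                   * mem_exclusive cpuset */
                  return nearest_exclusive_ancestor_allows(current->cpuset, node);
          }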
    • [PATCH] futex: remove duplicate code · 39ed3fde
      Pekka Enberg authored
      This patch cleans up the error path of futex_fd() by removing duplicate
      code.
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      39ed3fde
    • [PATCH] fix send_sigqueue() vs thread exit race · e752dd6c
      Oleg Nesterov authored
      posix_timer_event() first checks that the thread (SIGEV_THREAD_ID case)
      does not have the PF_EXITING flag, then it calls send_sigqueue(), which
      locks the task list.  But if the thread exits in between, the kernel will
      oops (->sighand == NULL after __exit_sighand).
      
      This patch moves the PF_EXITING check into send_sigqueue(); it must be
      done atomically under tasklist_lock.  When send_sigqueue() detects an
      exiting thread it returns -1.  In that case posix_timer_event() will send
      the signal to the thread group.
      
      Also, this patch fixes a task_struct use-after-free in posix_timer_event().
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e752dd6c
    • [PATCH] remove a redundant variable in sys_prctl() · 0730ded5
      Jesper Juhl authored
      The patch removes a redundant variable `sig' from sys_prctl().
      
      For some reason, when sys_prctl is called with option == PR_SET_PDEATHSIG
      then the value of arg2 is assigned to an int variable named sig.  Then sig
      is tested with valid_signal() and later used to set the value of
      current->pdeath_signal .
      
      There is no reason to use this intermediate variable, since valid_signal()
      takes an unsigned long argument and can therefore be passed arg2 directly.
      If the call to valid_signal() succeeds, we know the value of arg2 is in
      the range zero to _NSIG, so it easily fits in a plain int and can safely
      be assigned later to current->pdeath_signal (which is an int).
      
      The patch gets rid of the pointless variable `sig'.
      This reduces the size of kernel/sys.o in 2.6.13-rc6-mm1 by 32 bytes on my
      system.
      
      Patch has been compile tested, boot tested, and just to make damn sure I
      didn't break anything I wrote a quick test app that calls
      prctl(PR_SET_PDEATHSIG ...) with the entire range of values for an
      unsigned long, and it behaves as expected with and without the patch.
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0730ded5
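      A hedged sketch of what the simplified case plausibly looks like after the
      patch, reconstructed from the description rather than the actual diff:

          case PR_SET_PDEATHSIG:
                  /* arg2 is validated and assigned directly; no intermediate
                   * 'sig' variable is needed */
                  if (!valid_signal(arg2)) {
                          error = -EINVAL;
                          break;
                  }
                  current->pdeath_signal = arg2;
                  break;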
    • [PATCH] largefile support for accounting · 6c9c0b52
      Peter Staubach authored
      There is a problem in the accounting subsystem: the kernel cannot
      correctly handle files larger than 2GB.  The output file containing the
      process accounting data can grow very large if the system is large enough
      and active enough.  If the 2GB limit is reached, the system simply stops
      storing process accounting data.
      
      Another annoying problem is that once the system reaches this 2GB limit,
      then every process which exits will receive a signal, SIGXFSZ.  This signal
      is generated because an attempt was made to write beyond the limit for the
      file descriptor.  This signal makes it look like every process has exited
      due to a signal, when in fact, they have not.
      
      The solution is to add the O_LARGEFILE flag to the list of flags used to
      open the accounting file.  The rest of the accounting support is already
      largefile safe.
      
      The changes were tested by constructing a large file (just short of 2GB),
      enabling accounting, and then running enough commands to cause the
      accounting data generated to increase the size of the file to 2GB.  Without
      the changes, the file grows to 2GB and the last command run in the test
      script appears to exit due to a signal when it has not.  With the changes,
      things work as expected and quietly.
      
      There are some user level changes required so that it can deal with
      largefiles, but those are being handled separately.
      Signed-off-by: Peter Staubach <staubach@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      6c9c0b52
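      A hedged sketch of the described change where the accounting file is
      opened; only the added O_LARGEFILE flag is taken from the commit, the rest
      of the flag set is an assumption:

          /* open the accounting file so writes past 2GB succeed */
          file = filp_open(name, O_WRONLY | O_APPEND | O_LARGEFILE, 0);
          if (IS_ERR(file))
                  return PTR_ERR(file);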
    • [PATCH] do_notify_parent_cldstop() cleanup · bc505a47
      Oleg Nesterov authored
      This patch simplifies the usage of do_notify_parent_cldstop(), it lessens
      the source and .text size slightly, and makes the code (in my opinion) a
      bit more readable.
      
      I am sending this patch now because I'm afraid Paul will touch
      do_notify_parent_cldstop() really soon; it's better to clean up first.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      bc505a47
    • [PATCH] CHECK_IRQ_PER_CPU() to avoid dead code in __do_IRQ() · f26fdd59
      Karsten Wiese authored
      IRQ_PER_CPU is not used by all architectures.  This patch introduces the
      macros ARCH_HAS_IRQ_PER_CPU and CHECK_IRQ_PER_CPU() to avoid the generation
      of dead code in __do_IRQ().
      
      ARCH_HAS_IRQ_PER_CPU is defined by architectures using IRQ_PER_CPU in their
      include/asm_ARCH/irq.h file.
      
      Through grepping the tree I found the following architectures currently use
      IRQ_PER_CPU:
      
              cris, ia64, ppc, ppc64 and parisc.
      Signed-off-by: Karsten Wiese <annabellesgarden@yahoo.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      f26fdd59
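      A hedged sketch of what the macro pair plausibly looks like; the exact
      definitions are assumed from the description:

          #ifdef ARCH_HAS_IRQ_PER_CPU
          # define CHECK_IRQ_PER_CPU(var) ((var) & IRQ_PER_CPU)
          #else
          /* architectures without per-CPU IRQs: the test constant-folds to 0,
           * so the corresponding branch in __do_IRQ() becomes dead code */
          # define CHECK_IRQ_PER_CPU(var) 0
          #endif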
    • [PATCH] create_workqueue_thread() signedness fix · 230649da
      Mika Kukkonen authored
      With "-W -Wno-unused -Wno-sign-compare" I get the following compile warning:
      
        CC      kernel/workqueue.o
      kernel/workqueue.c: In function `workqueue_cpu_callback':
      kernel/workqueue.c:504: warning: ordered comparison of pointer with integer zero
      
      On error, create_workqueue_thread() returns NULL, not a negative pointer,
      so the following trivial patch suggests itself.
      Signed-off-by: Mika Kukkonen <mikukkon@gmail.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      230649da
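      A hedged sketch of the kind of fix the warning points at; the call site is
      illustrative, not the exact hunk:

          /* before: ordered comparison of a pointer against zero */
          if (create_workqueue_thread(wq, hotcpu) < 0)
                  return NOTIFY_BAD;

          /* after: the function returns NULL on failure, so test for that */
          if (create_workqueue_thread(wq, hotcpu) == NULL)
                  return NOTIFY_BAD;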
    • [PATCH] flush icache early when loading module · 378bac82
      Thomas Koeller authored
      Change the sequence of operations performed during module loading to flush
      the instruction cache before module parameters are processed.  If a module
      has parameters of an unusual type that cannot be handled using the standard
      accessor functions param_set_xxx and param_get_xxx, it has to provide a
      set of accessor functions for this type.  This requires module code to be
      executed during parameter processing, which is of course only possible
      after the icache has been flushed.
      Signed-off-by: Thomas Koeller <thomas@koeller.dyndns.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      378bac82
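      A hedged sketch of the reordering in the module loader; the field names
      follow the 2.6-era struct module, but the snippet is illustrative rather
      than the actual diff:

          /* flush the icache over the freshly written module image before
           * module parameters are parsed, because custom param_set_xxx
           * handlers already execute module code at that point */
          flush_icache_range((unsigned long)mod->module_init,
                             (unsigned long)mod->module_init + mod->init_size);
          flush_icache_range((unsigned long)mod->module_core,
                             (unsigned long)mod->module_core + mod->core_size);

          /* ...only then parse the module parameters and finish load_module() */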