1. 27 Oct 2005, 1 commit
  2. 14 Sep 2005, 1 commit
  3. 12 Sep 2005, 2 commits
  4. 11 Sep 2005, 12 commits
    • [PATCH] sched: allow the load to grow upto its cpu_power · 0c117f1b
      Committed by Siddha, Suresh B
      Don't pull tasks from a group if that would cause the group's total load to
      drop below its total cpu_power (i.e. cause the group to start going idle).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      0c117f1b
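      The guard this commit describes can be sketched as a standalone check.
      This is an illustrative model, not the kernel's code; the names
      can_pull_from_group, group_load, group_power and task_load are all
      hypothetical:

```c
#include <assert.h>

/* Hypothetical sketch of the guard described above: a group is left
 * alone if pulling one task's worth of load would push its total load
 * below its total cpu_power, i.e. would start idling the group. */
static int can_pull_from_group(long group_load, long group_power,
                               long task_load)
{
    return group_load - task_load >= group_power;
}
```

      With this check, a group carrying exactly its cpu_power worth of load is
      never raided, which is the "don't make it go idle" rule in the changelog.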
    • [PATCH] sched: don't kick ALB in the presence of pinned task · fa3b6ddc
      Committed by Siddha, Suresh B
      Jack Steiner brought up this issue at my OLS talk.

      Take a scenario where two tasks are pinned to two HT threads in a physical
      package.  Idle packages in the system will keep kicking migration_thread on
      the busy package without any success.
      
      We will run into similar scenarios in the presence of CMP/NUMA.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fa3b6ddc
    • [PATCH] sched: use cached variable in sys_sched_yield() · 5927ad78
      Committed by Renaud Lienhart
      In sys_sched_yield(), we cache current->array in the "array" variable, thus
      there's no need to dereference "current" again later.
      Signed-off-by: Renaud Lienhart <renaud.lienhart@free.fr>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      5927ad78
    • [PATCH] sched: HT optimisation · 5969fe06
      Committed by Nick Piggin
      If an idle sibling of an HT queue encounters a busy sibling, then perform
      higher-level load balancing of the non-idle variety.
      
      Performance of multiprocessor HT systems with low numbers of tasks
      (generally < number of virtual CPUs) can be significantly worse than the
      exact same workloads when running in non-HT mode.  The reason is largely
      due to poor scheduling behaviour.
      
      This patch improves the situation, making the performance gap far less
      significant on one problematic test case (tbench).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      5969fe06
    • [PATCH] sched: less locking · e17224bf
      Committed by Nick Piggin
      During periodic load balancing, don't hold this runqueue's lock while
      scanning remote runqueues, which can take a non-trivial amount of time,
      especially on very large systems.
      
      Holding the runqueue lock will only help to stabilise ->nr_running, however
      this doesn't do much to help because tasks being woken will simply get held
      up on the runqueue lock, so ->nr_running would not provide a really
      accurate picture of runqueue load in that case anyway.
      
      What's more, ->nr_running (and possibly the cpu_load averages) of remote
      runqueues won't be stable anyway, so load balancing is always an inexact
      operation.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      e17224bf
    • [PATCH] sched: less newidle locking · d6d5cfaf
      Committed by Nick Piggin
      Similarly to the earlier change in load_balance, only lock the runqueue in
      load_balance_newidle if the busiest queue found has a nr_running > 1.  This
      will reduce the frequency of expensive remote runqueue lock acquisitions in
      the schedule() path on some workloads.
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d6d5cfaf
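      The locking policy of the two commits above reduces to a one-line
      predicate: only pay for the remote runqueue lock when the busiest queue
      actually has a task to spare.  A minimal sketch (the helper name is
      illustrative, not the kernel's):

```c
#include <assert.h>

/* Sketch of the newidle locking rule described above: taking the
 * remote runqueue lock is only worthwhile when the busiest queue has
 * more than its one running task, i.e. something we could pull. */
static int should_lock_busiest(int busiest_nr_running)
{
    return busiest_nr_running > 1;
}
```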
    • [PATCH] sched: fix SMT scheduler latency bug · 67f9a619
      Committed by Ingo Molnar
      William Weston reported unusually high scheduling latencies on his x86 HT
      box, on the -RT kernel.  I managed to reproduce it on my HT box and the
      latency tracer shows the incident in action:
      
                       _------=> CPU#
                      / _-----=> irqs-off
                     | / _----=> need-resched
                     || / _---=> hardirq/softirq
                     ||| / _--=> preempt-depth
                     |||| /
                     |||||     delay
         cmd     pid ||||| time  |   caller
            \   /    |||||   \   |   /
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup (try_to_wake_up)
              ..............................................................
              ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
              ..............................................................
            du-2803  3Dnh2    0us : __trace_start_sched_wakeup <<...>-2778> (73 1)
            du-2803  3Dnh2    0us : _raw_spin_unlock (try_to_wake_up)
              ................................................
              ... still on CPU#3, we send an IPI to CPU#1: ...
              ................................................
            du-2803  3Dnh1    0us : resched_task (try_to_wake_up)
            du-2803  3Dnh1    1us : smp_send_reschedule (try_to_wake_up)
            du-2803  3Dnh1    1us : send_IPI_mask_bitmask (smp_send_reschedule)
            du-2803  3Dnh1    2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
              ...............................................
              ... 1 usec later, the IPI arrives on CPU#1: ...
              ...............................................
        <idle>-0     1Dnh.    2us : smp_reschedule_interrupt (c0100c5a 0 0)
      
      So far so good, this is the normal wakeup/preemption mechanism.  But here
      comes the scheduler anomaly on CPU#1:
      
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
        <idle>-0     1Dnh.    3us : __schedule (preempt_schedule_irq)
        <idle>-0     1Dnh.    3us : profile_hit (__schedule)
        <idle>-0     1Dnh1    3us : sched_clock (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irq (__schedule)
        <idle>-0     1Dnh1    4us : _raw_spin_lock_irqsave (__schedule)
        <idle>-0     1Dnh2    5us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh1    5us : preempt_schedule (__schedule)
        <idle>-0     1Dnh1    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh2    6us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    6us : _raw_spin_lock (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
        <idle>-0     1Dnh3    8us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh2    8us : preempt_schedule (__schedule)
        <idle>-0     1Dnh2    8us : find_next_bit (__schedule)
        <idle>-0     1Dnh2    9us : trace_stop_sched_switched (__schedule)
        <idle>-0     1Dnh2    9us : _raw_spin_lock (trace_stop_sched_switched)
        <idle>-0     1Dnh3   10us : trace_stop_sched_switched <<...>-2778> (73 8c)
        <idle>-0     1Dnh3   10us : _raw_spin_unlock (trace_stop_sched_switched)
        <idle>-0     1Dnh1   10us : _raw_spin_unlock (__schedule)
        <idle>-0     1Dnh.   11us : local_irq_enable_noresched (preempt_schedule_irq)
        <idle>-0     1Dnh.   11us < (0)
      
      we didn't pick up PID 2778!  It only gets scheduled much later:
      
         <...>-2778  1Dnh2  412us : __switch_to (__schedule)
         <...>-2778  1Dnh2  413us : __schedule <<idle>-0> (8c 73)
         <...>-2778  1Dnh2  413us : _raw_spin_unlock (__schedule)
         <...>-2778  1Dnh1  413us : trace_stop_sched_switched (__schedule)
         <...>-2778  1Dnh1  414us : _raw_spin_lock (trace_stop_sched_switched)
         <...>-2778  1Dnh2  414us : trace_stop_sched_switched <<...>-2778> (73 1)
         <...>-2778  1Dnh2  414us : _raw_spin_unlock (trace_stop_sched_switched)
         <...>-2778  1Dnh1  415us : trace_stop_sched_switched (__schedule)
      
      the reason for this anomaly is the following code in dependent_sleeper():
      
                      /*
                       * If a user task with lower static priority than the
                       * running task on the SMT sibling is trying to schedule,
                       * delay it till there is proportionately less timeslice
                       * left of the sibling task to prevent a lower priority
                       * task from using an unfair proportion of the
                       * physical cpu's resources. -ck
                       */
      [...]
                              if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
                                      100) > task_timeslice(p)))
                                              ret = 1;
      
      Note that in contrast to the comment above, we don't actually do the check
      based on static priority; we do the check based on timeslices.  But
      timeslices go up and down, and even high-prio tasks can randomly have very
      low timeslices (just before their next refill) and can thus be judged as
      'lowprio' by the above piece of code.  This condition is clearly buggy.
      The correct test is to check for static_prio _and_ to check for the
      preemption priority.  Even on different static priority levels, a
      higher-prio interactive task should not be delayed due to a
      higher-static-prio CPU hog.
      
      There is a symmetric bug in the 'kick SMT sibling' code of this function as
      well, which can be solved in a similar way.
      
      The patch below (against the current scheduler queue in -mm) fixes both
      bugs.  I have build and boot-tested this on x86 SMT, and nice +20 tasks
      still get properly throttled - so the dependent-sleeper logic is still in
      action.
      
      Btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
      property was applied too liberally, so this fix is likely a throughput
      improvement as well.
      
      I separated out a smt_slice() function to make the code easier to read.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      67f9a619
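      The buggy and fixed conditions above can be modelled with plain integer
      arithmetic.  smt_slice() mirrors the helper the changelog says was
      factored out; the two delay_wakeup_* wrappers and all parameter values
      are illustrative, and this sketch assumes the kernel convention that a
      numerically larger static_prio means a lower priority:

```c
#include <assert.h>

/* The timeslice proportion from the quoted snippet: the fraction of
 * the sibling's timeslice protected by per_cpu_gain (in percent). */
static int smt_slice(int time_slice, int per_cpu_gain)
{
    return time_slice * (100 - per_cpu_gain) / 100;
}

/* Buggy test: judged purely on (randomly fluctuating) timeslices, so
 * a high-prio task just before its refill looks 'lowprio'. */
static int delay_wakeup_buggy(int sibling_slice, int per_cpu_gain,
                              int task_slice)
{
    return smt_slice(sibling_slice, per_cpu_gain) > task_slice;
}

/* Fixed test: only delay a task whose static priority is genuinely
 * lower (numerically greater) than the sibling's. */
static int delay_wakeup_fixed(int sibling_slice, int per_cpu_gain,
                              int task_slice, int task_static_prio,
                              int sibling_static_prio)
{
    return task_static_prio > sibling_static_prio &&
           smt_slice(sibling_slice, per_cpu_gain) > task_slice;
}
```

      With equal static priorities the fixed check never delays the wakeup,
      which is exactly the 412us anomaly the trace above exposes.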
    • [PATCH] sched: TASK_NONINTERACTIVE · d79fc0fc
      Committed by Ingo Molnar
      This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
      used by blocking points to mark the task's wait as "non-interactive".  This
      does not mean the task will be considered a CPU-hog - the wait will simply
      not have an effect on the waiting task's priority - positive or negative
      alike.  Right now only pipe_wait() will make use of it, because it's a
      common source of not-so-interactive waits (kernel compilation jobs, etc.).
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d79fc0fc
    • [PATCH] sched cleanups · 95cdf3b7
      Committed by Ingo Molnar
      Whitespace cleanups.
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      95cdf3b7
    • [PATCH] sched: make idlest_group/cpu cpus_allowed-aware · da5a5522
      Committed by M.Baris Demiray
      Add the relevant checks to find_idlest_group() and find_idlest_cpu() so
      that they return only groups containing allowed CPUs and only allowed
      CPUs, respectively.
      Signed-off-by: M.Baris Demiray <baris@labristeknoloji.com>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      da5a5522
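      The filtering this commit adds can be sketched as a scan that skips CPUs
      outside the task's allowed mask.  A plain bitmask stands in for cpumask_t,
      and every name here (find_idlest_allowed_cpu, demo_load, demo_idlest) is
      illustrative rather than the kernel's:

```c
#include <assert.h>

/* Sketch of cpus_allowed-aware idlest-CPU selection: CPUs outside the
 * allowed mask are ignored, so the result is always usable by the task. */
static int find_idlest_allowed_cpu(const long *load, int ncpus,
                                   unsigned long cpus_allowed)
{
    int cpu, idlest = -1;

    for (cpu = 0; cpu < ncpus; cpu++) {
        if (!(cpus_allowed & (1UL << cpu)))
            continue;               /* not allowed: skip it */
        if (idlest < 0 || load[cpu] < load[idlest])
            idlest = cpu;
    }
    return idlest;                  /* -1 if the mask excludes all CPUs */
}

/* demo: four CPUs with fixed per-cpu loads */
static long demo_load[4] = { 5, 1, 0, 9 };

static int demo_idlest(unsigned long cpus_allowed)
{
    return find_idlest_allowed_cpu(demo_load, 4, cpus_allowed);
}
```

      Without the mask check, the scan would pick CPU 2 even for a task whose
      mask only allows CPUs 0 and 1, and the choice would later be rejected.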
    • [PATCH] sched: run SCHED_NORMAL tasks with real time tasks on SMT siblings · fc38ed75
      Committed by Con Kolivas
      The hyperthread-aware nice handling currently puts to sleep any
      non-real-time task when a real-time task is running on its sibling cpu.
      This can lead to prolonged starvation by having the non-real-time task
      pegged to the cpu with load balancing not pulling that task away.

      Currently we force lower-priority hyperthread tasks to run a percentage
      of the time difference based on timeslice differences, which is
      meaningless when comparing real-time tasks to SCHED_NORMAL tasks.  We can
      allow non-real-time tasks to run with real-time tasks on the sibling for
      up to per_cpu_gain% of the time if we use jiffies as a counter.
      
      Cleanups and micro-optimisations to the relevant code section should make
      it more understandable as well.
      Signed-off-by: Con Kolivas <kernel@kolivas.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fc38ed75
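      One way to read the "jiffies as a counter" idea is as a duty cycle: out of
      every 100 jiffies, the SCHED_NORMAL task may run during per_cpu_gain of
      them.  This is a speculative model of the idea, not the commit's actual
      code, and normal_task_may_run is a hypothetical name:

```c
#include <assert.h>

/* Speculative sketch: when the SMT sibling runs a real-time task, let a
 * SCHED_NORMAL task run only during per_cpu_gain% of jiffies, since a
 * timeslice comparison against an RT task is meaningless. */
static int normal_task_may_run(unsigned long jiffies, int per_cpu_gain)
{
    return (jiffies % 100) < (unsigned long)per_cpu_gain;
}
```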
    • [PATCH] spinlock consolidation · fb1c8f93
      Committed by Ingo Molnar
      This patch (written by me and also containing many suggestions of Arjan van
      de Ven) does a major cleanup of the spinlock code.  It does the following
      things:
      
       - consolidates and enhances the spinlock/rwlock debugging code
      
       - simplifies the asm/spinlock.h files
      
       - encapsulates the raw spinlock type and moves generic spinlock
         features (such as ->break_lock) into the generic code.
      
       - cleans up the spinlock code hierarchy to get rid of the spaghetti.
      
      Most notably there's now only a single variant of the debugging code,
      located in lib/spinlock_debug.c.  (previously we had one SMP debugging
      variant per architecture, plus a separate generic one for UP builds)
      
      Also, I've enhanced the rwlock debugging facility: it will now track
      write-owners.  There is new spinlock-owner/CPU-tracking on SMP builds too.
      All locks have lockup detection now, which will work for both soft and hard
      spin/rwlock lockups.
      
      The arch-level include files now only contain the minimally necessary
      subset of the spinlock code - all the rest that can be generalized now
      lives in the generic headers:
      
       include/asm-i386/spinlock_types.h       |   16
       include/asm-x86_64/spinlock_types.h     |   16
      
      I have also split up the various spinlock variants into separate files,
      making it easier to see which does what. The new layout is:
      
         SMP                         |  UP
         ----------------------------|-----------------------------------
         asm/spinlock_types_smp.h    |  linux/spinlock_types_up.h
         linux/spinlock_types.h      |  linux/spinlock_types.h
         asm/spinlock_smp.h          |  linux/spinlock_up.h
         linux/spinlock_api_smp.h    |  linux/spinlock_api_up.h
         linux/spinlock.h            |  linux/spinlock.h
      
      /*
       * here's the role of the various spinlock/rwlock related include files:
       *
       * on SMP builds:
       *
       *  asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
       *                        initializers
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  asm/spinlock.h:       contains the __raw_spin_*()/etc. lowlevel
       *                        implementations, mostly inline assembly code
       *
       *   (also included on UP-debug builds:)
       *
       *  linux/spinlock_api_smp.h:
       *                        contains the prototypes for the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       *
       * on UP builds:
       *
       *  linux/spinlock_type_up.h:
       *                        contains the generic, simplified UP spinlock type.
       *                        (which is an empty structure on non-debug builds)
       *
       *  linux/spinlock_types.h:
       *                        defines the generic type and initializers
       *
       *  linux/spinlock_up.h:
       *                        contains the __raw_spin_*()/etc. version of UP
       *                        builds. (which are NOPs on non-debug, non-preempt
       *                        builds)
       *
       *   (included on UP-non-debug builds:)
       *
       *  linux/spinlock_api_up.h:
       *                        builds the _spin_*() APIs.
       *
       *  linux/spinlock.h:     builds the final spin_*() APIs.
       */
      
      All SMP and UP architectures are converted by this patch.
      
      arm, i386, ia64, ppc, ppc64, s390/s390x and x86-64 were build-tested via
      crosscompilers.  m32r, mips, sh and sparc have not been tested yet, but
      should be mostly fine.
      
      From: Grant Grundler <grundler@parisc-linux.org>
      
        Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
        Builds 32-bit SMP kernel (not booted or tested).  I did not try to build
        non-SMP kernels.  That should be trivial to fix up later if necessary.
      
        I converted bit ops atomic_hash lock to raw_spinlock_t.  Doing so avoids
        some ugly nesting of linux/*.h and asm/*.h files.  Those particular locks
        are well tested and contained entirely inside arch specific code.  I do NOT
        expect any new issues to arise with them.
      
        If someone ever needs to use debug/metrics with them, then they will
        need to unravel this hairball between spinlocks, atomic ops, and bit ops
        that exists only because parisc has exactly one atomic instruction: LDCW
        (load and clear word).
      
      From: "Luck, Tony" <tony.luck@intel.com>
      
         ia64 fix
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjanv@infradead.org>
      Signed-off-by: Grant Grundler <grundler@parisc-linux.org>
      Cc: Matthew Wilcox <willy@debian.org>
      Signed-off-by: Hirokazu Takata <takata@linux-m32r.org>
      Signed-off-by: Mikael Pettersson <mikpe@csd.uu.se>
      Signed-off-by: Benoit Boissinot <benoit.boissinot@ens-lyon.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      fb1c8f93
  5. 10 Sep 2005, 1 commit
  6. 08 Sep 2005, 2 commits
  7. 07 Sep 2005, 1 commit
  8. 19 Aug 2005, 1 commit
  9. 27 Jul 2005, 2 commits
    • [PATCH] fix MAX_USER_RT_PRIO and MAX_RT_PRIO · d46523ea
      Committed by Steven Rostedt
      Here's the patch again, fixing the code to handle the case where
      MAX_USER_RT_PRIO and MAX_RT_PRIO have different values.
      
      Without this patch, an SMP system will crash if the values are
      different.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Dean Nelson <dcn@sgi.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      d46523ea
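      The distinction behind this fix is that user-supplied RT priorities are
      bounded by MAX_USER_RT_PRIO, while kernel-internal structures must be
      sized for the full MAX_RT_PRIO range; if the two differ and an array is
      sized by the user limit, internal priorities index out of bounds.  A
      sketch under that assumption, with an illustrative MAX_RT_PRIO value
      (128 is not the kernel's default, where the two constants are equal):

```c
#include <assert.h>

#define MAX_USER_RT_PRIO 100  /* highest RT prio user space may request */
#define MAX_RT_PRIO      128  /* illustrative: kernel may reserve more  */

/* Validate a user request against the user-visible limit only. */
static int valid_user_rt_prio(int prio)
{
    return prio >= 1 && prio < MAX_USER_RT_PRIO;
}

/* Size queue/bitmap structures by the kernel-wide maximum, so internal
 * priorities above the user limit still fit. */
static int rt_queue_slots(void)
{
    return MAX_RT_PRIO;
}
```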
    • [PATCH] Fix RLIMIT_RTPRIO breakage · 18586e72
      Committed by Andreas Steinmetz
      RLIMIT_RTPRIO is supposed to grant non-privileged users the right to use
      SCHED_FIFO/SCHED_RR scheduling policies, with priorities bounded by the
      RLIMIT_RTPRIO value, via sched_setscheduler().  This is usually used by
      audio users.
      
      Unfortunately this is broken in 2.6.13rc3 as you can see in the excerpt
      from sched_setscheduler below:
      
              /*
               * Allow unprivileged RT tasks to decrease priority:
               */
              if (!capable(CAP_SYS_NICE)) {
                      /* can't change policy */
                      if (policy != p->policy)
                              return -EPERM;
      
      After the above unconditional test, which causes sched_setscheduler() to
      fail regardless of the RLIMIT_RTPRIO value, the following check is made:
      
                     /* can't increase priority */
                      if (policy != SCHED_NORMAL &&
                          param->sched_priority > p->rt_priority &&
                          param->sched_priority >
                                      p->signal->rlim[RLIMIT_RTPRIO].rlim_cur)
                              return -EPERM;
      
      Thus I do believe that the RLIMIT_RTPRIO value must be taken into
      account for the policy check, especially as the RLIMIT_RTPRIO limit is
      of no use without this change.
      
      The attached patch fixes this problem.
      Signed-off-by: Andreas Steinmetz <ast@domdv.de>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      18586e72
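      The corrected permission logic can be sketched as follows.  This is a
      simplified model of the rule the changelog argues for, not the patch
      itself; may_set_rt and its parameters are hypothetical names, and the
      real code checks more conditions:

```c
#include <assert.h>

#define SCHED_NORMAL 0
#define SCHED_FIFO   1

/* Sketch of the fixed check: instead of rejecting any policy change
 * outright for callers without CAP_SYS_NICE, allow switching to an RT
 * policy as long as the requested priority fits within RLIMIT_RTPRIO. */
static int may_set_rt(int capable_sys_nice, int requested_prio,
                      unsigned long rlim_rtprio)
{
    if (capable_sys_nice)
        return 1;                     /* privileged: anything goes */
    /* unprivileged: only within the RLIMIT_RTPRIO bound */
    return (unsigned long)requested_prio <= rlim_rtprio;
}
```

      This is what makes RLIMIT_RTPRIO useful at all: without it, the
      unconditional policy test quoted above wins before the limit is consulted.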
  10. 08 Jul 2005, 1 commit
  11. 29 Jun 2005, 1 commit
  12. 28 Jun 2005, 1 commit
    • [PATCH] Update cfq io scheduler to time sliced design · 22e2c507
      Committed by Jens Axboe
      This updates the CFQ io scheduler to the new time sliced design (cfq
      v3).  It provides full process fairness, while giving excellent
      aggregate system throughput even for many competing processes.  It
      supports io priorities, either inherited from the cpu nice value or set
      directly with the ioprio_get/set syscalls.  The latter closely mimic
      set/getpriority.
      
      This import is based on my latest from -mm.
      Signed-off-by: Jens Axboe <axboe@suse.de>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      22e2c507
  13. 26 Jun 2005, 14 commits
    • [PATCH] Cleanup patch for process freezing · 3e1d1d28
      Committed by Christoph Lameter
      1. Establish a simple API for process freezing defined in linux/include/sched.h:

         frozen(process)          Check for a frozen process
         freezing(process)        Check if a process is being frozen
         freeze(process)          Tell a process to freeze (go to refrigerator)
         thaw_process(process)    Restart process
         frozen_process(process)  Process is frozen now
      
      2. Remove all references to PF_FREEZE and PF_FROZEN from all
         kernel sources except sched.h
      
      3. Fix numerous locations where try_to_freeze is manually done by a driver
      
      4. Remove the argument that is no longer necessary from two function calls.
      
      5. Some whitespace cleanup
      
      6. Close a potential race in the refrigerator (there was an open window with
         PF_FREEZE cleared before PF_FROZEN was set, and recalc_sigpending does
         not check PF_FROZEN).
      
      This patch does not address the problem of freeze_processes() violating the
      rule that a task may only modify its own flags by setting PF_FREEZE.  This
      is not clean in an SMP environment; freeze(process) is therefore not SMP-safe!
      Signed-off-by: Christoph Lameter <christoph@lameter.com>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3e1d1d28
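      The API listed in point 1 can be modelled as a small flag-based state
      machine.  This is a toy model of the semantics, not the kernel's
      implementation; the struct, flag values and demo_freeze_cycle helper are
      all illustrative:

```c
#include <assert.h>

/* Toy model of the freezer API above, with two flag bits playing the
 * role of PF_FREEZE (freeze requested) and PF_FROZEN (in refrigerator). */
#define PF_FREEZE 0x1
#define PF_FROZEN 0x2

struct task { unsigned flags; };

static int  frozen(struct task *p)   { return p->flags & PF_FROZEN; }
static int  freezing(struct task *p) { return p->flags & PF_FREEZE; }
static void freeze(struct task *p)   { p->flags |= PF_FREEZE; }

/* refrigerator entry: swap FREEZE for FROZEN in one update (the race
 * the cleanup's point 6 closes sits between these two flag changes) */
static void frozen_process(struct task *p)
{
    p->flags = (p->flags & ~PF_FREEZE) | PF_FROZEN;
}

static void thaw_process(struct task *p) { p->flags &= ~PF_FROZEN; }

/* run a full freeze -> frozen -> thaw cycle, checking each state */
static int demo_freeze_cycle(void)
{
    struct task t = { 0 };

    freeze(&t);
    if (!freezing(&t) || frozen(&t))
        return 0;                 /* requested, but not yet frozen */
    frozen_process(&t);
    if (freezing(&t) || !frozen(&t))
        return 0;                 /* now frozen, request cleared   */
    thaw_process(&t);
    return !frozen(&t);           /* back to running               */
}
```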
    • [PATCH] Dynamic sched domains: sched changes · 1a20ff27
      Committed by Dinakar Guniguntala
      The following patches add the dynamic sched domains functionality that was
      extensively discussed on lkml and lse-tech.  I would like to see this added
      to -mm.

      o The main advantage of this feature is that it ensures that the scheduler
        load-balancing code only balances against the cpus that are in the sched
        domain as defined by an exclusive cpuset, and not all of the cpus in the
        system.  This removes any overhead due to the load-balancing code trying
        to pull tasks outside of the cpu-exclusive cpuset only to be prevented by
        the tasks' cpus_allowed mask.
      o cpu-exclusive cpusets are useful for servers running orthogonal
        workloads such as RT applications requiring low latency and HPC
        applications that are throughput-sensitive.
      
      o It provides a new API partition_sched_domains in sched.c
        that makes dynamic sched domains possible.
      o cpu_exclusive cpusets are now associated with a sched domain,
        which means that users can dynamically modify the sched domains
        through the cpuset file system interface.
      o ia64 sched domain code has been updated to support this feature as well
      o Currently, this does not support hotplug. (However some of my tests
        indicate hotplug+preempt is currently broken)
      o I have tested it extensively on x86.
      o This should have very minimal impact on performance as none of
        the fast paths are affected
      Signed-off-by: Dinakar Guniguntala <dino@in.ibm.com>
      Acked-by: Paul Jackson <pj@sgi.com>
      Acked-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Matthew Dobson <colpatch@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      1a20ff27
    • [PATCH] Changing RT priority without CAP_SYS_NICE · 37e4ab3f
      Committed by Olivier Croquette
      Presently, a process without the capability CAP_SYS_NICE cannot change
      its own policy, which is OK.

      But it also cannot decrease its RT priority (if scheduled with policy
      SCHED_RR or SCHED_FIFO), which is what this patch changes.
      
      The rationale is the same as for the nice value: a process should be
      able to require less priority for itself. Increasing the priority is
      still not allowed.
      
      This is for example useful if you give a multithreaded user process a RT
      priority, and the process would like to organize its internal threads
      using priorities also. Then you can give the process the highest
      priority needed N, and the process starts its threads with lower
      priorities: N-1, N-2...
      
      The POSIX standard says that the permissions are implementation-specific,
      so I think we can do that.
      
      In a sense, it makes the permissions consistent whatever the policy is:
      with this patch, processes scheduled with SCHED_FIFO, SCHED_RR and
      SCHED_OTHER can all decrease their priority.
      
      From: Ingo Molnar <mingo@elte.hu>
      
      cleaned up and merged to -mm.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      37e4ab3f
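      The relaxed rule can be sketched as a single comparison.  This is an
      illustrative model (may_change_rt_prio is a hypothetical name), assuming
      the kernel convention that a numerically larger rt_priority means a
      higher priority, so "decrease" means requesting a smaller number:

```c
#include <assert.h>

/* Sketch of the relaxed permission rule: an unprivileged RT task may
 * lower (or keep) its own rt_priority, but never raise it; holders of
 * CAP_SYS_NICE may do anything. */
static int may_change_rt_prio(int capable_sys_nice,
                              int current_prio, int requested_prio)
{
    if (capable_sys_nice)
        return 1;
    return requested_prio <= current_prio;  /* only a decrease */
}
```

      This matches the multithreaded example above: a process granted priority
      N can start its internal threads at N-1, N-2 and so on without needing
      CAP_SYS_NICE.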
    • [PATCH] sched: micro-optimize task requeueing in schedule() · a3464a10
      Committed by Chen Shang
      Micro-optimize task requeueing in schedule() and clean up recalc_task_prio().
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      a3464a10
    • [PATCH] sched: relax pinned balancing · 77391d71
      Committed by Nick Piggin
      The maximum rebalance interval allowed by the multiprocessor balancing
      backoff is often not large enough to handle corner cases where there are
      lots of tasks pinned on a CPU.  Suresh reported:
      
      	I see system livelocks if, for example, I have 7000 processes
      	pinned onto one cpu (this is on the fastest 8-way system I
      	have access to).
      
      After this patch, the machine is reported to go well above this number.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      77391d71
    • [PATCH] sched: consolidate sbe sbf · 476d139c
      Committed by Nick Piggin
      Consolidate balance-on-exec with balance-on-fork.  This is made easy by the
      sched-domains RCU patches.
      
      As well as the general goodness of code reduction, this allows the runqueues
      to be unlocked during balance-on-fork.
      
      schedstats is a problem.  Maybe just have balance-on-event instead of
      distinguishing fork and exec?
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      476d139c
    • [PATCH] sched: RCU domains · 674311d5
      Committed by Nick Piggin
      One of the problems with the multilevel balance-on-fork/exec is that it
      needs to jump through hoops to satisfy sched-domains' locking semantics
      (that is, you may traverse your own domain when not preemptible, and you
      may traverse others' domains when holding their runqueue lock).
      
      balance-on-exec had to potentially migrate between more than one CPU before
      finding a final CPU to migrate to, and balance-on-fork needed to potentially
      take multiple runqueue locks.
      
      So bite the bullet and make sched-domains go completely RCU.  This actually
      simplifies the code quite a bit.
      
      From: Ingo Molnar <mingo@elte.hu>
      
      schedstats RCU fix, and a nice comment on for_each_domain, from Ingo.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      674311d5
    • [PATCH] sched: multilevel sbe sbf · 3dbd5342
      Committed by Nick Piggin
      The fundamental problem that Suresh has with balance on exec and fork is that
      it only tries to balance the top level domain with the flag set.
      
      This was worked around by removing degenerate domains, but is still a problem
      if people want to start using more complex sched-domains, especially
      multilevel NUMA that ia64 is already using.
      
      This patch makes balance on fork and exec try balancing over not just the top
      most domain with the flag set, but all the way down the domain tree.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      3dbd5342
    • [PATCH] sched: remove degenerate domains · 245af2c7
      Committed by Suresh Siddha
      Remove degenerate scheduler domains during the sched-domain init.
      
      For example, on x86_64 we always have NUMA configured in.  On Intel EM64T
      systems, the topmost sched domain will be the NUMA domain, with only one
      sched_group in it.
      
      With fork/exec balancing (Nick's recent fixes in the -mm tree), we always
      end up making wrong decisions because of this topmost domain (as it
      contains only one group, find_idlest_group always returns NULL).  We end
      up loading the HT package completely first, then letting active load
      balancing kick in and correct it.

      In general, this patch also makes sense without Nick's recent fixes in -mm.
      
      From: Nick Piggin <nickpiggin@yahoo.com.au>
      
      Modified to account for more than just sched_groups when scanning for
      degenerate domains by Nick Piggin.  And allow a runqueue's sd to go NULL
      rather than keep a single degenerate domain around (this happens when you run
      with maxcpus=1).
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      245af2c7
    • [PATCH] sched: null domains · 41c7ce9a
      Committed by Nick Piggin
      Fix the last 2 places that directly access a runqueue's sched-domain and
      assume it cannot be NULL.
      
      That allows the use of NULL for domain, instead of a dummy domain, to signify
      no balancing is to happen.  No functional changes.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      41c7ce9a
    • [PATCH] sched: cleanup context switch locking · 4866cde0
      Committed by Nick Piggin
      Instead of requiring architecture code to interact with the scheduler's
      locking implementation, provide a couple of defines that can be used by the
      architecture to request runqueue unlocked context switches, and ask for
      interrupts to be enabled over the context switch.
      
      Also replaces the "switch_lock" used by these architectures with an oncpu
      flag (note, not a potentially slow bitflag).  This eliminates one bus
      locked memory operation when context switching, and simplifies the
      task_running function.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      4866cde0
    • [PATCH] sched: uninline task_timeslice · 48c08d3f
      Committed by Ingo Molnar
      From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>

      Uninline task_timeslice() - it reduces the code footprint noticeably, and
      it's slowpath code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      48c08d3f
    • [PATCH] sched: schedstats update for balance on fork · 68767a0a
      Committed by Nick Piggin
      Add SCHEDSTAT statistics for sched-balance-fork.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      68767a0a
    • [PATCH] sched: balance on fork · 147cbb4b
      Committed by Nick Piggin
      Reimplement balance-on-exec to be sched-domains aware, and use this to
      also do balance-on-fork balancing.  Make x86_64 do balance-on-fork over
      the NUMA domain.

      The problem with the non-sched-domains-aware balancing became apparent on
      dual-core, multi-socket Opterons.  What we want is for new tasks to be
      sent to a different socket, but more often than not we would first load
      up our sibling core, or fill two cores of a single remote socket, before
      selecting a new one.
      
      This gives large improvements to STREAM on such systems.
      Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      147cbb4b