1. 13 Feb 2008, 7 commits
  2. 09 Feb 2008, 1 commit
  3. 01 Feb 2008, 1 commit
  4. 30 Jan 2008, 1 commit
    • spinlock: lockbreak cleanup · 95c354fe
      Committed by Nick Piggin
      The break_lock data structure and code for spinlocks is quite nasty.
      Not only does it double the size of a spinlock but it changes locking to
      a potentially less optimal trylock.
      
      Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
      __raw_spin_is_contended that uses the lock data itself to determine whether
      there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
      not set.
      
      Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
      decouple it from the spinlock implementation, and make it typesafe (rwlocks
      do not have any need_lockbreak sites -- why do they even get bloated up
      with that break_lock then?).
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
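      A minimal sketch of how the pieces described above fit together, simplified
      from the commit text rather than copied from the kernel source:

        #ifdef CONFIG_GENERIC_LOCKBREAK
        /* old scheme: a separate break_lock field doubles the lock's size */
        #define spin_is_contended(lock)  ((lock)->break_lock)
        #else
        /* new scheme: ask the lock word itself whether anyone is waiting */
        #define spin_is_contended(lock)  __raw_spin_is_contended(&(lock)->raw_lock)
        #endif

        /* typesafe replacement for need_lockbreak(), decoupled from the
         * spinlock implementation */
        static inline int spin_needbreak(spinlock_t *lock)
        {
        #ifdef CONFIG_PREEMPT
                return spin_is_contended(lock);
        #else
                return 0;
        #endif
        }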
  5. 26 Jan 2008, 30 commits
    • sched: print backtrace of running tasks too · 5fb5e6de
      Committed by Nick Piggin
      The attached patch is something really simple that can sometimes help
      in getting more info out of a hung system.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: monitor clock underflows in /proc/sched_debug · cc203d24
      Committed by Guillaume Chazarain
      We monitor clock overflows, let's also monitor clock underflows.
      Signed-off-by: Guillaume Chazarain <guichaz@yahoo.fr>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: fix rq->clock warps on frequency changes · 782daeee
      Committed by Guillaume Chazarain
      Fix 2bacec8c (sched: touch softlockup watchdog after idling), which
      reintroduced clock warps on frequency changes. touch_softlockup_watchdog()
      calls __update_rq_clock, which checks rq->clock for warps, so call it
      after adjusting rq->clock.
      Signed-off-by: Guillaume Chazarain <guichaz@yahoo.fr>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: remove the !PREEMPT_BKL code · 6478d880
      Committed by Ingo Molnar
      remove the !PREEMPT_BKL code.
      
      this removes 160 lines of legacy code.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: rt throttling vs no_hz · 48d5e258
      Committed by Peter Zijlstra
      We need to teach no_hz about RT throttling, because it is tick driven.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: rt group scheduling · 6f505b16
      Committed by Peter Zijlstra
      Extend group scheduling to also cover the realtime classes. It uses the time
      limiting introduced by the previous patch to allow multiple realtime groups.
      
      The hard time limit is required to keep behaviour deterministic.
      
      The algorithms used make the realtime scheduler O(tg), i.e. linear scaling
      with the number of task groups. This is the worst-case behaviour I can't
      seem to get rid of; the average case of the algorithms can be improved, but
      I focused on correctness and the worst case.
      
      [ akpm@linux-foundation.org: move side-effects out of BUG_ON(). ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: rt time limit · fa85ae24
      Committed by Peter Zijlstra
      A very simple time limit on the realtime scheduling classes: allow the
      rq's realtime class to consume sched_rt_ratio of every sched_rt_period
      slice. If the class exceeds this quota, the fair class will preempt the
      realtime class.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
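      A standalone model of the quota check described above (plain C, not kernel
      code; the structure, field names and the ratio representation are
      illustrative stand-ins for the rq state the commit refers to):

        #include <stdbool.h>
        #include <stdio.h>

        struct rt_quota_model {
                unsigned long rt_time;    /* ns of RT execution this period   */
                unsigned long period;     /* length of sched_rt_period, in ns */
                unsigned long ratio_num;  /* sched_rt_ratio as num/den        */
                unsigned long ratio_den;
        };

        /* true => the RT class exceeded its quota; the fair class may preempt it */
        static bool rt_throttled(const struct rt_quota_model *rt)
        {
                unsigned long quota = rt->period / rt->ratio_den * rt->ratio_num;
                return rt->rt_time > quota;
        }

        int main(void)
        {
                /* example: RT may use 95% of a 1s period, and has used 960ms */
                struct rt_quota_model rt = { 960000000UL, 1000000000UL, 95, 100 };
                printf("throttled: %d\n", rt_throttled(&rt));
                return 0;
        }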
    • sched: high-res preemption tick · 8f4d37ec
      Committed by Peter Zijlstra
      Use HR-timers (when available) to deliver an accurate preemption tick.
      
      The regular scheduler tick that runs at 1/HZ can be too coarse when nice
      levels are used. The fairness system will still keep CPU utilisation 'fair'
      by later delaying the task that got an excessive amount of CPU time, but we
      try to minimize this by delivering preemption points spot-on.
      
      The average frequency of this extra interrupt is sched_latency / nr_latency,
      which need not be higher than 1/HZ; it's just that the distribution within
      the sched_latency period is important.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
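      A small standalone calculation of the average interval mentioned above; the
      values below are illustrative examples, not kernel defaults:

        #include <stdio.h>

        int main(void)
        {
                double sched_latency_ms = 20.0;   /* assumed example value */
                int    nr_latency       = 5;      /* assumed example value */
                double hz               = 250.0;  /* regular tick: 1/HZ = 4 ms */

                printf("extra preemption point every %.1f ms (regular tick %.1f ms)\n",
                       sched_latency_ms / nr_latency, 1000.0 / hz);
                return 0;
        }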
    • sched: do not do cond_resched() when CONFIG_PREEMPT · 02b67cc3
      Committed by Herbert Xu
      Why do we even have cond_resched() when real preemption is on? It seems to
      be a waste of space and time.
      
      Remove cond_resched() when CONFIG_PREEMPT is on.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
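      A rough sketch of the resulting shape, simplified for illustration rather
      than copied from the scheduler headers:

        #ifdef CONFIG_PREEMPT
        /* kernel preemption already provides timely rescheduling points,
         * so the explicit ones can compile away */
        static inline int cond_resched(void) { return 0; }
        #else
        extern int cond_resched(void);    /* may call schedule() */
        #endif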
    • sched: documentation, whitespace fixes · 03319ec8
      Committed by Ingo Molnar
      whitespace fixes.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: sched_rt_entity · fa717060
      Committed by Peter Zijlstra
      Move the task_struct members specific to rt scheduling together.
      A future optimization could be to put sched_entity and sched_rt_entity
      into a union.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: dynamically update the root-domain span/online maps · dc938520
      Committed by Gregory Haskins
      The baseline code statically builds the span maps when the domain is formed.
      Previous attempts at dynamically updating the maps caused a suspend-to-ram
      regression, which should now be fixed.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Gautham R Shenoy <ego@in.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: RT-balance, add new methods to sched_class · cb469845
      Committed by Steven Rostedt
      Dmitry Adamushko found that the current implementation of the RT
      balancing code left out changes to the sched_setscheduler and
      rt_mutex_setprio.
      
      This patch addresses this issue by adding methods to the scheduling classes
      to handle being switched out of (switched_from) and into (switched_to) a
      sched_class. A method for priority changes (prio_changed) is also added.
      
      This patch also removes some duplicate logic between rt_mutex_setprio and
      sched_setscheduler.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
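      A sketch of the interface addition: the three method names come from the
      message above, while the signatures are assumptions made for illustration:

        struct rq;
        struct task_struct;

        struct sched_class_sketch {
                /* a task is switched out of / into this scheduling class */
                void (*switched_from)(struct rq *rq, struct task_struct *p);
                void (*switched_to)(struct rq *rq, struct task_struct *p);
                /* a task's priority changes while staying in this class */
                void (*prio_changed)(struct rq *rq, struct task_struct *p,
                                     int oldprio);
        };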
    • sched: RT-balance, replace hooks with pre/post schedule and wakeup methods · 9a897c5a
      Committed by Steven Rostedt
      Make the main sched.c code more agnostic of the scheduling classes: instead
      of having RT-class-specific balancing hooks in the schedule code, they are
      replaced with pre_schedule, post_schedule and task_wake_up methods. These
      methods may be used by any of the classes, but currently only the sched_rt
      class implements them.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
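      A sketch of the generic hooks: the method names come from the message above,
      the signatures are illustrative assumptions, and only sched_rt implements
      them at this point for its balancing work:

        struct rq;
        struct task_struct;

        struct sched_class_hooks_sketch {
                void (*pre_schedule)(struct rq *rq, struct task_struct *prev);
                void (*post_schedule)(struct rq *rq);
                void (*task_wake_up)(struct rq *rq, struct task_struct *p);
        };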
    • sched: get rid of 'new_cpu' in try_to_wake_up() · 5d2f5a61
      Committed by Dmitry Adamushko
      Clean-up try_to_wake_up().
      
      Get rid of the 'new_cpu' variable in try_to_wake_up() [ that is one
      #ifdef section less ].  Also remove a few redundant blank lines.
      Signed-off-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: add credits for RT balancing improvements · b9131769
      Committed by Ingo Molnar
      add credits for RT balancing improvements.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: style cleanup, #2 · 0eab9146
      Committed by Ingo Molnar
      style cleanup of various changes that were done recently.
      
      no code changed:
      
            text    data     bss     dec     hex filename
           26399    2578      48   29025    7161 sched.o.before
           26399    2578      48   29025    7161 sched.o.after
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: remove unused JIFFIES_TO_NS() macro · d7876a08
      Committed by Ingo Molnar
      remove unused JIFFIES_TO_NS() macro.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: only balance our RT tasks within our domain · 637f5085
      Committed by Gregory Haskins
      We move the rt-overload data, as the first piece of global state to be
      reclassified per-domain.  This limits the scope of overload-related
      cache-line bouncing to a specified partition instead of affecting all
      cpus in the system.
      
      Finally, we limit the scope of find_lowest_cpu searches to the domain
      instead of the entire system.  Note that we would always respect domain
      boundaries even without this patch, but we first would scan potentially
      all cpus before whittling the list down.  Now we can avoid looking at
      RQs that are out of scope, again reducing cache-line hits.
      
      Note: In some cases, task->cpus_allowed will effectively reduce our search
      to within our domain.  However, I believe there are cases where the
      cpus_allowed mask may be all ones and therefore we err on the side of
      caution.  If it can be optimized later, so be it.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: add sched-domain roots · 57d885fe
      Committed by Gregory Haskins
      We add the notion of a root-domain which will be used later to rescope
      global variables to per-domain variables.  Each exclusive cpuset
      essentially defines an island domain by fully partitioning the member cpus
      from any other cpuset.  However, we currently still maintain some
      policy/state as global variables which transcend all cpusets.  Consider,
      for instance, rt-overload state.
      
      Whenever a new exclusive cpuset is created, we also create a new
      root-domain object and move each cpu member to the root-domain's span.
      By default the system creates a single root-domain with all cpus as
      members (mimicking the global state we have today).
      
      We add some plumbing for storing class specific data in our root-domain.
      Whenever a RQ is switching root-domains (because of repartitioning) we
      give each sched_class the opportunity to remove any state from its old
      domain and add state to the new one.  This logic doesn't have any clients
      yet but it will later in the series.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      CC: Christoph Lameter <clameter@sgi.com>
      CC: Paul Jackson <pj@sgi.com>
      CC: Simon Derr <simon.derr@bull.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
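      A sketch of the per-partition state described above; cpumask_model_t is a
      stand-in for the kernel's cpumask type, and the rt-overload field models
      the example of rescoped class state mentioned in the message:

        typedef unsigned long cpumask_model_t;   /* one bit per cpu (stand-in) */

        struct root_domain_sketch {
                cpumask_model_t span;        /* every cpu in this exclusive partition */
                cpumask_model_t online;      /* the subset that is currently online   */
                cpumask_model_t rt_overload; /* cpus in the partition that are
                                                currently RT-overloaded               */
        };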
    • sched: RT-balance on new task · 0d1311a5
      Committed by Steven Rostedt
      rt-balance when creating new tasks.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: wake-balance fixes · a22d7fc1
      Committed by Gregory Haskins
      We have logic to detect whether the system has migratable tasks, but we are
      not using it when deciding whether to push tasks away.  So we add support
      for considering this new information.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: de-SCHED_OTHER-ize the RT path · e7693a36
      Committed by Gregory Haskins
      The current wake-up code path tries to determine if it can optimize the
      wake-up to "this_cpu" by computing load calculations.  The problem is that
      these calculations are only relevant to SCHED_OTHER tasks where load is king.
      For RT tasks, priority is king.  So the load calculation is completely wasted
      bandwidth.
      
      Therefore, we create a new sched_class interface to help with pre-wakeup
      routing decisions, and move the load calculation into the CFS class.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
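      A sketch of the kind of pre-wakeup routing hook described above; the hook
      name and signature below are placeholders, not taken from the commit:

        struct task_struct;

        struct sched_class_wakeup_sketch {
                /* pick the CPU the waking task should run on: the RT class can
                 * route purely by priority, while CFS keeps its load-based logic */
                int (*select_wakeup_cpu)(struct task_struct *p, int sync);
        };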
    • sched: add RT-balance cpu-weight · 73fe6aae
      Committed by Gregory Haskins
      Some RT tasks (particularly kthreads) are bound to one specific CPU.
      It is fairly common for two or more bound tasks to get queued up at the
      same time.  Consider, for instance, softirq_timer and softirq_sched.  A
      timer goes off in an ISR which schedules softirq_thread to run at RT50.
      Then the timer handler determines that it's time to smp-rebalance the
      system so it schedules softirq_sched to run.  So we are in a situation
      where we have two RT50 tasks queued, and the system will go into
      rt-overload condition to request other CPUs for help.
      
      This causes two problems in the current code:
      
      1) If a high-priority bound task and a low-priority unbound task queue
         up behind the running task, we will fail to ever relocate the unbound
         task because we terminate the search on the first unmovable task.
      
      2) We spend precious, futile cycles in the fast path trying to pull
         overloaded tasks over.  It is therefore optimal to strive to avoid the
         overhead altogether if we can cheaply detect the condition before
         overload even occurs.
      
      This patch tries to achieve this optimization by utilizing the Hamming
      weight of the task->cpus_allowed mask.  A weight of 1 indicates that
      the task cannot be migrated.  We will then utilize this information to
      skip non-migratable tasks and to eliminate unnecessary rebalance attempts.
      
      We introduce a per-rq variable to count the number of migratable tasks
      that are currently running.  We only go into overload if we have more
      than one rt task, AND at least one of them is migratable.
      
      In addition, we introduce a per-task variable to cache the cpus_allowed
      weight, since the hamming calculation is probably relatively expensive.
      We only update the cached value when the mask is updated which should be
      relatively infrequent, especially compared to scheduling frequency
      in the fast path.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
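      A standalone model (plain C, not kernel code) of the accounting described
      above: the cached cpus_allowed weight and the refined overload test:

        #include <stdbool.h>
        #include <stdio.h>

        static int mask_weight(unsigned long cpus_allowed)   /* Hamming weight */
        {
                return __builtin_popcountl(cpus_allowed);
        }

        struct rq_overload_model {
                int rt_nr_running;     /* number of queued RT tasks             */
                int rt_nr_migratory;   /* how many of them have mask weight > 1 */
        };

        /* only signal overload if there is someone we could actually move */
        static bool rt_overloaded(const struct rq_overload_model *rq)
        {
                return rq->rt_nr_running > 1 && rq->rt_nr_migratory > 0;
        }

        int main(void)
        {
                /* two RT50 kthreads, both bound to cpu 2: no point asking for help */
                struct rq_overload_model rq = { .rt_nr_running = 2,
                                                .rt_nr_migratory = 0 };
                printf("overloaded: %d (bound task weight: %d)\n",
                       rt_overloaded(&rq), mask_weight(1UL << 2));
                return 0;
        }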
    • sched: push RT tasks from overloaded CPUs · 4642dafd
      Committed by Steven Rostedt
      This patch adds pushing of overloaded RT tasks from a runqueue that is
      having tasks (most likely RT tasks) added to the run queue.
      
      TODO: We don't cover the case of waking of new RT tasks (yet).
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: pull RT tasks from overloaded runqueues · f65eda4f
      Committed by Steven Rostedt
      This patch adds the algorithm to pull tasks from RT overloaded runqueues.
      
      When a pull of RT tasks is initiated, all overloaded runqueues are examined
      for an RT task that is higher in priority than the highest-priority task
      queued on the target runqueue. If such a task is found, it is pulled to the
      target runqueue.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
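      A standalone model of the pull test described above. Lower numbers mean
      higher priority, mirroring kernel RT priority ordering; the structures are
      stand-ins, not kernel types:

        #include <stdbool.h>
        #include <stdio.h>

        struct rq_prio_view {
                int highest_prio;   /* best (lowest-numbered) prio queued here */
        };

        /* pull from 'src' only if it holds an RT task strictly better than
         * anything queued on the target runqueue */
        static bool should_pull(const struct rq_prio_view *target,
                                const struct rq_prio_view *src)
        {
                return src->highest_prio < target->highest_prio;
        }

        int main(void)
        {
                struct rq_prio_view target = { .highest_prio = 60 };
                struct rq_prio_view src    = { .highest_prio = 50 };
                printf("pull: %d\n", should_pull(&target, &src));
                return 0;
        }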
    • sched: add RT task pushing · e8fa1362
      Committed by Steven Rostedt
      This patch adds an algorithm to push extra RT tasks off a run queue to
      other CPU runqueues.
      
      When more than one RT task is added to a run queue, this algorithm takes
      an assertive approach to push the RT tasks that are not running onto other
      run queues that have lower priority.  The way this works is that the highest
      RT task that is not running is looked at, and we examine the runqueues of
      the CPUs in that task's affinity mask. We find the runqueue with the lowest
      priority in the CPU affinity of the picked task, and if it is lower in
      priority than the picked task, we push the task onto that CPU's runqueue.
      
      We continue pushing RT tasks off the current runqueue until we don't push any
      more.  The algorithm stops when the next highest RT task can't preempt any
      other processes on other CPUS.
      
      TODO: The algorithm may stop when there are still RT tasks that can be
       migrated. Specifically, if the highest non-running RT task's CPU affinity
       is restricted to CPUs that are running higher-priority tasks, there may
       be a lower-priority task queued that has affinity with a CPU running a
       lower-priority task that it could be migrated to.  This patch set does
       not address this issue.
      
      Note: checkpatch reveals two over 80 character instances. I'm not sure
       that breaking them up will help visually, so I left them as is.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
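      A standalone model of the push decision described above: scan the CPUs in
      the waiting task's affinity mask for the runqueue running the lowest-priority
      work and push only if our task beats it. Lower number = higher priority;
      all names here are stand-ins for the kernel's:

        #include <stdio.h>

        #define NR_CPUS_MODEL 4

        struct rq_curr_model { int curr_prio; };   /* prio of the running task */

        struct task_model {
                int prio;
                unsigned long cpus_allowed;   /* bit n set => may run on cpu n */
        };

        /* return the target cpu, or -1 if no runqueue is worth pushing to */
        static int find_push_target(const struct task_model *p,
                                    const struct rq_curr_model rqs[NR_CPUS_MODEL])
        {
                int best_cpu = -1, lowest_prio = p->prio;

                for (int cpu = 0; cpu < NR_CPUS_MODEL; cpu++) {
                        if (!(p->cpus_allowed & (1UL << cpu)))
                                continue;
                        if (rqs[cpu].curr_prio > lowest_prio) {  /* lower priority */
                                lowest_prio = rqs[cpu].curr_prio;
                                best_cpu = cpu;
                        }
                }
                return best_cpu;
        }

        int main(void)
        {
                struct rq_curr_model rqs[NR_CPUS_MODEL] = { {10}, {50}, {80}, {95} };
                struct task_model waiting = { .prio = 60, .cpus_allowed = 0xbUL };
                printf("push to cpu %d\n", find_push_target(&waiting, rqs));
                return 0;
        }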
    • sched: track highest prio task queued · 764a9d6f
      Committed by Steven Rostedt
      This patch adds accounting to each runqueue to keep track of the
      highest prio task queued on the run queue. We only care about
      RT tasks, so if the run queue does not contain any active RT tasks
      its priority will be considered MAX_RT_PRIO.
      
      This information will be used for later patches.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
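      A standalone sketch of the per-runqueue accounting described above: the
      tracked value defaults to a stand-in for MAX_RT_PRIO when no RT task is
      queued, and moves to the best (numerically lowest) RT priority enqueued:

        #include <stdio.h>

        #define MAX_RT_PRIO_MODEL 100   /* stand-in for the kernel's MAX_RT_PRIO */

        struct rq_highest_prio { int highest_prio; };

        static void note_rt_enqueue(struct rq_highest_prio *rq, int prio)
        {
                if (prio < rq->highest_prio)   /* lower number = higher priority */
                        rq->highest_prio = prio;
        }

        int main(void)
        {
                struct rq_highest_prio rq = { .highest_prio = MAX_RT_PRIO_MODEL };
                note_rt_enqueue(&rq, 50);
                note_rt_enqueue(&rq, 80);
                printf("highest prio queued: %d\n", rq.highest_prio);   /* 50 */
                return 0;
        }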
    • sched: count # of queued RT tasks · 63489e45
      Committed by Steven Rostedt
      This patch adds accounting to keep track of the number of RT tasks running
      on a runqueue. This information will be used in later patches.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • softlockup: automatically detect hung TASK_UNINTERRUPTIBLE tasks · 82a1fcb9
      Committed by Ingo Molnar
      this patch extends the soft-lockup detector to automatically
      detect hung TASK_UNINTERRUPTIBLE tasks. Such hung tasks are
      printed the following way:
      
       ------------------>
       INFO: task prctl:3042 blocked for more than 120 seconds.
       "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message
       prctl         D fd5e3793     0  3042   2997
              f6050f38 00000046 00000001 fd5e3793 00000009 c06d8264 c06dae80 00000286
              f6050f40 f6050f00 f7d34d90 f7d34fc8 c1e1be80 00000001 f6050000 00000000
              f7e92d00 00000286 f6050f18 c0489d1a f6050f40 00006605 00000000 c0133a5b
       Call Trace:
        [<c04883a5>] schedule_timeout+0x6d/0x8b
        [<c04883d8>] schedule_timeout_uninterruptible+0x15/0x17
        [<c0133a76>] msleep+0x10/0x16
        [<c0138974>] sys_prctl+0x30/0x1e2
        [<c0104c52>] sysenter_past_esp+0x5f/0xa5
        =======================
       2 locks held by prctl/3042:
       #0:  (&sb->s_type->i_mutex_key#5){--..}, at: [<c0197d11>] do_fsync+0x38/0x7a
       #1:  (jbd_handle){--..}, at: [<c01ca3d2>] journal_start+0xc7/0xe9
       <------------------
      
      the current default timeout is 120 seconds. Such messages are printed
      up to 10 times per bootup. If the system has crashed already then the
      messages are not printed.
      
      if lockdep is enabled then all held locks are printed as well.
      
      this feature is a natural extension to the softlockup-detector (kernel
      locked up without scheduling) and to the NMI watchdog (kernel locked up
      with IRQs disabled).
      
      [ Gautham R Shenoy <ego@in.ibm.com>: CPU hotplug fixes. ]
      [ Andrew Morton <akpm@linux-foundation.org>: build warning fix. ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>