1. 16 Jan 2009 (1 commit)
    • sched: make plist a library facility · ceacc2c1
      By Peter Zijlstra
      Ingo Molnar wrote:
      
      > here's a new build failure with tip/sched/rt:
      >
      >   LD      .tmp_vmlinux1
      > kernel/built-in.o: In function `set_curr_task_rt':
      > sched.c:(.text+0x3675): undefined reference to `plist_del'
      > kernel/built-in.o: In function `pick_next_task_rt':
      > sched.c:(.text+0x37ce): undefined reference to `plist_del'
      > kernel/built-in.o: In function `enqueue_pushable_task':
      > sched.c:(.text+0x381c): undefined reference to `plist_del'
      
      Eliminate the plist library Kconfig option and make plist available
      unconditionally.  (A small user-space model of the priority-sorted-list
      idea follows this entry.)
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      ceacc2c1
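
      A minimal user-space model of the idea behind plist: a list kept sorted
      by priority so the head is always the highest-priority entry.  The names
      here are invented for the illustration; the kernel's real implementation
      lives in lib/plist.c and include/linux/plist.h.

      #include <stdio.h>

      struct pnode {
      	int prio;                        /* lower value = higher priority, as in the kernel */
      	struct pnode *next;
      };

      /* Insert keeping ascending prio order. */
      static void model_plist_add(struct pnode **head, struct pnode *n)
      {
      	while (*head && (*head)->prio <= n->prio)
      		head = &(*head)->next;
      	n->next = *head;
      	*head = n;
      }

      /* Unlink a known node. */
      static void model_plist_del(struct pnode **head, struct pnode *n)
      {
      	while (*head && *head != n)
      		head = &(*head)->next;
      	if (*head)
      		*head = n->next;
      }

      int main(void)
      {
      	struct pnode a = { .prio = 10 }, b = { .prio = 5 }, *head = NULL;

      	model_plist_add(&head, &a);
      	model_plist_add(&head, &b);      /* b sorts ahead of a */
      	model_plist_del(&head, &b);
      	printf("head prio: %d\n", head->prio);   /* prints 10 */
      	return 0;
      }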
  2. 14 Jan 2009 (2 commits)
  3. 04 Jan 2009 (1 commit)
    • sched: put back some stack hog changes that were undone in kernel/sched.c · 6ca09dfc
      By Mike Travis
      Impact: prevents panic from stack overflow on numa-capable machines.
      
      Some of the "removal of stack hogs" changes in kernel/sched.c by using
      node_to_cpumask_ptr were undone by the early cpumask API updates, and
      causes a panic due to stack overflow.  This patch undoes those changes
      by using cpumask_of_node() which returns a 'const struct cpumask *'.
      
      In addition, cpu_coregoup_map is replaced with cpu_coregroup_mask further
      reducing stack usage.  (Both of these updates removed 9 FIXME's!)
      
      Also:
         Pick up some remaining changes from the old 'cpumask_t' functions to
         the new 'struct cpumask *' functions.
      
         Optimize memory traffic by allocating each percpu local_cpu_mask on the
         same node as the referring cpu.
      Signed-off-by: Mike Travis <travis@sgi.com>
      Acked-by: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6ca09dfc
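
      A small, self-contained illustration (not kernel code) of why returning a
      'const struct cpumask *' instead of a mask by value matters for stack
      usage; the NR_CPUS value and the names are made up for the example.

      #include <stdio.h>

      #define NR_CPUS 4096
      struct mask { unsigned long bits[NR_CPUS / (8 * sizeof(unsigned long))]; };

      static struct mask node_mask[2];             /* stand-in for per-node masks */

      /* Old pattern: the caller ends up with a full mask copy on its stack. */
      static struct mask mask_by_value(int node) { return node_mask[node]; }

      /* cpumask_of_node()-style pattern: only a pointer lives on the stack. */
      static const struct mask *mask_by_ptr(int node) { return &node_mask[node]; }

      int main(void)
      {
      	struct mask m = mask_by_value(0);        /* 512 bytes of stack on a 64-bit build */
      	const struct mask *p = mask_by_ptr(0);   /* pointer-sized stack cost */

      	printf("by value: %zu bytes, by pointer: %zu bytes\n",
      	       sizeof(m), sizeof(p));
      	return 0;
      }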
  4. 29 Dec 2008 (8 commits)
    • RT: fix push_rt_task() to handle dequeue_pushable properly · 1563513d
      By Gregory Haskins
      Chirag Jog discovered a panic where a BUG_ON sanity check in the new
      "pushable_task" logic would trigger under certain circumstances:
      
      http://lkml.org/lkml/2008/9/25/189
      
      Gilles Carry discovered that the root cause was attributed to the
      pushable_tasks list getting corrupted in the push_rt_task logic.
      This was the result of a dropped rq lock in double_lock_balance
      allowing a task in the process of being pushed to potentially migrate
      away, and thus corrupt the pushable_tasks() list.
      
      I traced the problem back to the pushable_tasks patch that went in
      recently.  There is a "retry" path in push_rt_task() that used a compound
      conditional to decide whether to retry or exit.  I missed the rationale
      behind the implicit "if (!task) goto out;" part of that compound
      statement and thus did not handle it properly.  The new pushable_tasks
      logic actually creates three distinct conditions (sketched after this entry):
      
      1) an untouched and unpushable task should be dequeued
      2) a migrated task where more pushable tasks remain should be retried
      3) a migrated task where no more pushable tasks exist should exit
      
      The original logic mushed (1) and (3) together, resulting in the
      system dequeuing a migrated task (against an unlocked foreign run-queue
      nonetheless).
      
      To fix this, we get rid of the notion of "paranoid" and we support the
      three unique conditions properly.  The paranoid feature is no longer
      relevant with the new pushable logic (since pushable naturally limits
      the loop) anyway, so let's just remove it.
      Reported-by: Chirag Jog <chirag@linux.vnet.ibm.com>
      Found-by: Gilles Carry <gilles.carry@bull.net>
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      1563513d
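
      A compact sketch (ordinary C, not the scheduler source) of the three-way
      decision the fixed retry path has to make; the helper names and inputs
      are hypothetical.

      #include <stdio.h>

      enum push_outcome { DEQUEUE_UNPUSHABLE, RETRY_NEXT_PUSHABLE, STOP_PUSHING };

      /* task_moved:     the candidate migrated while the rq lock was dropped.
       * more_pushable:  other tasks remain on the pushable list.             */
      static enum push_outcome classify(int task_moved, int more_pushable)
      {
      	if (!task_moved)
      		return DEQUEUE_UNPUSHABLE;    /* case 1: untouched, unpushable task */
      	if (more_pushable)
      		return RETRY_NEXT_PUSHABLE;   /* case 2: migrated, others remain    */
      	return STOP_PUSHING;                  /* case 3: migrated, nothing left     */
      }

      int main(void)
      {
      	/* The old code folded cases 1 and 3 together; the fix keeps them apart. */
      	printf("%d %d %d\n", classify(0, 1), classify(1, 1), classify(1, 0));
      	return 0;
      }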
    • sched: create "pushable_tasks" list to limit pushing to one attempt · 917b627d
      By Gregory Haskins
      The RT scheduler employs a "push/pull" design to actively balance tasks
      within the system (on a per disjoint cpuset basis).  When a task is
      awoken, it is immediately determined if there are any lower priority
      cpus which should be preempted.  This is opposed to the way normal
      SCHED_OTHER tasks behave, which will wait for a periodic rebalancing
      operation to occur before spreading out load.
      
      When a particular RQ has more than 1 active RT task, it is said to
      be in an "overloaded" state.  Once this occurs, the system enters
      the active balancing mode, where it will try to push the task away,
      or persuade a different cpu to pull it over.  The system will stay
      in this state until it falls back to at most one queued RT
      task per RQ.
      
      However, the current implementation suffers from a limitation in the
      push logic.  Once overloaded, all tasks (other than current) on the
      RQ are analyzed on every push operation, even if they were previously
      found unpushable (due to affinity, etc.).  What's more, the operation
      stops at the first task that is unpushable and will not look at items
      lower in the queue.  This causes two problems:
      
      1) We can have the same tasks analyzed over and over again during each
         push, which extends out the fast path in the scheduler for no
         gain.  Consider a RQ that has dozens of tasks that are bound to a
         core.  Each one of those tasks will be encountered and skipped
         for each push operation while they are queued.
      
      2) There may be lower-priority tasks under the unpushable task that
         could have been successfully pushed, but will never be considered
         until either the unpushable task is cleared, or a pull operation
         succeeds.  The net result is a potential latency source for
         mid-priority tasks.
      
      This patch aims to rectify these two conditions by introducing a new
      priority-sorted list: "pushable_tasks" (modeled in the sketch after this
      entry).  A task is added to the list each time it is activated or
      preempted, and it is removed from the list any time it is deactivated,
      made current, or fails to push.
      
      This works because a push only needs to be attempted once per task.
      After an initial failure to push, the other cpus will eventually try to
      pull the task when the conditions are proper.  This also solves the
      problem that we don't completely analyze all tasks due to encountering
      an unpushable task.  Now every task will have a push attempted (when
      appropriate).
      
      This reduces latency both by shortening the rq->lock critical section
      for certain workloads, and by making sure the algorithm
      considers all eligible tasks in the system.
      
      [ rostedt: added a couple more BUG_ONs ]
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
      917b627d
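
      The sketch below (plain C with invented names) models the bookkeeping the
      commit describes: tasks sit in a priority-ordered pushable list, each one
      gets a single push attempt, and a failed push removes it so later push
      operations never re-examine it.

      #include <stdio.h>

      #define MAXN 8

      /* Priority-ordered "pushable" queue; index 0 is the highest priority. */
      static int pushable[MAXN] = { 3, 7, 9 };     /* task priorities, for illustration */
      static int npushable = 3;

      static int try_push(int prio)
      {
      	return prio != 7;                        /* pretend prio 7 is CPU-bound */
      }

      static void push_tasks(void)
      {
      	while (npushable > 0) {
      		int prio = pushable[0];          /* highest-priority pushable task */

      		if (try_push(prio))
      			printf("pushed prio %d\n", prio);
      		else
      			printf("push of prio %d failed, dropping it from the list\n", prio);

      		/* Either way the task leaves the list: it was pushed away, or it
      		 * had its one attempt and another cpu will pull it later, so the
      		 * unpushable task no longer hides the ones behind it.          */
      		for (int j = 0; j + 1 < npushable; j++)
      			pushable[j] = pushable[j + 1];
      		npushable--;
      	}
      }

      int main(void) { push_tasks(); return 0; }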
    • sched: add sched_class->needs_post_schedule() member · 967fc046
      By Gregory Haskins
      We currently run class->post_schedule() outside of the rq->lock, which
      means that we need to test for the need to post_schedule outside of
      the lock to avoid a forced reacquisition.  This is currently not a problem
      as we only look at rq->rt.overloaded.  However, we want to enhance this
      going forward to look at more state, to reduce the need to post_schedule
      to a bare minimum.  Therefore, we introduce a new member function called
      needs_post_schedule() which tests for the post_schedule condition without
      actually performing the work.  It is therefore safe to call this
      function before the rq->lock is released, because we are guaranteed not
      to drop the lock at an intermediate point (such as what post_schedule()
      may do).
      
      We will use this later in the series.  (A sketch of the hook's shape
      follows this entry.)
      
      [ rostedt: removed paranoid BUG_ON ]
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      967fc046
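
      A sketch of the shape of the new hook: query cheaply while the lock is
      held, do the heavy work after the lock is dropped.  This is ordinary C
      with hypothetical names, not the kernel's sched_class definition.

      #include <stdio.h>
      #include <stdbool.h>

      struct sched_class_model {
      	bool (*needs_post_schedule)(void);   /* must not drop any lock */
      	void (*post_schedule)(void);         /* may drop and retake locks */
      };

      static bool rt_needs_post_schedule(void) { return true; /* e.g. rq overloaded */ }
      static void rt_post_schedule(void) { puts("push tasks away"); }

      static const struct sched_class_model rt_class = {
      	.needs_post_schedule = rt_needs_post_schedule,
      	.post_schedule       = rt_post_schedule,
      };

      int main(void)
      {
      	bool need;

      	/* spin_lock(&rq->lock) in the real code ... */
      	need = rt_class.needs_post_schedule();   /* safe: no lock dropping here */
      	/* spin_unlock(&rq->lock) ... */
      	if (need)
      		rt_class.post_schedule();         /* lock already released */
      	return 0;
      }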
    • sched: only try to push a task on wakeup if it is migratable · 777c2f38
      By Gregory Haskins
      There is no sense in wasting time trying to push a task away that
      cannot move anywhere else.  We gain no benefit from trying to push
      other tasks at this point, so if the task being woken up is
      non-migratable, just skip the whole operation.  This reduces overhead
      in the wakeup path for certain tasks.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      777c2f38
    • sched: use highest_prio.next to optimize pull operations · 74ab8e4f
      By Gregory Haskins
      We currently take the rq->lock for every cpu in an overload state during
      pull_rt_tasks().  However, we now have enough information via the
      highest_prio.[curr|next] fields to determine whether there are any tasks
      of interest before we actually pay the overhead of taking the rq->lock.
      So we use this information to reduce lock contention during the pull for
      the case where the source rq doesn't have tasks that would preempt the
      current task.  (A sketch of the check follows this entry.)
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      74ab8e4f
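
      A sketch, with invented names, of the check the commit adds: consult the
      cached next-highest priority of the source runqueue and skip the expensive
      locking entirely when nothing there could preempt us (lower numeric value
      means higher RT priority).

      #include <stdio.h>

      struct rq_model {
      	int locked;                   /* stand-in for rq->lock */
      	int highest_prio_next;        /* cached prio of the 2nd-highest queued RT task */
      };

      static void pull_from(struct rq_model *src, int my_curr_prio)
      {
      	/* Cheap test first: only take the remote lock if something on the
      	 * source rq could actually preempt our current task.              */
      	if (src->highest_prio_next >= my_curr_prio) {
      		puts("skip: nothing of interest on the source rq");
      		return;
      	}
      	src->locked = 1;              /* double_lock_balance() in the real code */
      	puts("locked source rq, attempt the pull");
      	src->locked = 0;
      }

      int main(void)
      {
      	struct rq_model src = { 0, 40 };

      	pull_from(&src, 30);          /* prio 40 cannot preempt 30: skip the lock */
      	pull_from(&src, 50);          /* prio 40 would preempt 50: take the lock  */
      	return 0;
      }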
    • sched: use highest_prio.curr for pull threshold · a8728944
      By Gregory Haskins
      highest_prio.curr is actually a more accurate way to keep track of
      the pull_rt_task() threshold since it is always up to date, even
      if the "next" task migrates during double_lock.  Therefore, stop
      looking at the "next" task object and simply use the highest_prio.curr.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      a8728944
    • sched: track the next-highest priority on each runqueue · e864c499
      By Gregory Haskins
      We will use this later in the series to reduce the amount of rq-lock
      contention during a pull operation.  (A toy model of tracking the top two
      priorities follows this entry.)
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      e864c499
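
      A toy model (not the scheduler code) of what tracking the two highest
      priorities per runqueue looks like: updates are O(1) on enqueue, and
      readers get both "curr" and "next" without scanning the queue.  Dequeue
      in the real code rescans the priority array; that part is omitted here.

      #include <stdio.h>
      #include <limits.h>

      struct prio_pair {
      	int curr;                     /* highest queued RT priority (lower = higher) */
      	int next;                     /* second-highest queued RT priority           */
      };

      static void track_enqueue(struct prio_pair *p, int prio)
      {
      	if (prio < p->curr) {         /* new highest: old highest becomes "next" */
      		p->next = p->curr;
      		p->curr = prio;
      	} else if (prio < p->next) {  /* slots in as the new second-highest */
      		p->next = prio;
      	}
      }

      int main(void)
      {
      	struct prio_pair p = { INT_MAX, INT_MAX };

      	track_enqueue(&p, 50);
      	track_enqueue(&p, 30);
      	track_enqueue(&p, 40);
      	printf("curr=%d next=%d\n", p.curr, p.next);   /* curr=30 next=40 */
      	return 0;
      }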
    • sched: cleanup inc/dec_rt_tasks · 4d984277
      By Gregory Haskins
      Move some common definitions up to the function prologue to simplify the
      body logic.
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      4d984277
  5. 17 Dec 2008 (1 commit)
  6. 29 Nov 2008 (1 commit)
    • sched: move double_unlock_balance() higher · 70574a99
      By Alexey Dobriyan
      Move double_lock_balance()/double_unlock_balance() higher to fix the following
      with gcc-3.4.6:
      
         CC      kernel/sched.o
       In file included from kernel/sched.c:1605:
       kernel/sched_rt.c: In function `find_lock_lowest_rq':
       kernel/sched_rt.c:914: sorry, unimplemented: inlining failed in call to 'double_unlock_balance': function body not available
       kernel/sched_rt.c:1077: sorry, unimplemented: called from here
       make[2]: *** [kernel/sched.o] Error 1
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      70574a99
  7. 26 Nov 2008 (1 commit)
  8. 25 Nov 2008 (5 commits)
  9. 07 Nov 2008 (1 commit)
    • sched, lockdep: inline double_unlock_balance() · cf7f8690
      By Sripathi Kodi
      We have a test case which measures the variation in the amount of time
      needed to perform a fixed amount of work on the preempt_rt kernel.  We
      started seeing deterioration in its performance recently.  The test
      should never take more than 10 microseconds, but we started seeing a
      5-10% failure rate.
      
      By elimination, we traced the problem to commit
      1b12bbc7 (lockdep: re-annotate
      scheduler runqueues).
      
      When LOCKDEP is disabled, this patch only adds an additional function
      call to double_unlock_balance(). Hence I inlined double_unlock_balance()
      and the problem went away. Here is a patch to make this change.
      Signed-off-by: Sripathi Kodi <sripathik@in.ibm.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cf7f8690
  10. 03 Nov 2008 (1 commit)
    • sched/rt: small optimization to update_curr_rt() · e113a745
      By Dimitri Sivanich
      Impact: micro-optimization to SCHED_FIFO/RR scheduling
      
      A very minor improvement, but might it be better to check sched_rt_runtime(rt_rq)
      before taking the rt_runtime_lock?  (A sketch of this check-before-lock
      pattern follows this entry.)
      
      Peter Zijlstra observes:
      
      > Yes, I think its ok to do so.
      >
      > Like pointed out in the other thread, there are two races:
      >
      >  - sched_rt_runtime() going to RUNTIME_INF, and that will be handled
      >    properly by sched_rt_runtime_exceeded()
      >
      >  - sched_rt_runtime() going to !RUNTIME_INF, and here we can miss an
      >    accounting cycle, but I don't think that is something to worry too
      >    much about.
      Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      
      --
      
       kernel/sched_rt.c |    4 ++--
       1 file changed, 2 insertions(+), 2 deletions(-)
      e113a745
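
      A generic sketch of the pattern being applied here: test the cheap,
      lock-free condition first and only take the lock when accounting can
      actually matter.  The names are invented and this is not the sched_rt.c
      code; the races it opens are the benign ones discussed above.

      #include <stdio.h>

      #define RUNTIME_INF (-1)

      static long long rt_runtime = RUNTIME_INF;   /* bandwidth control disabled */
      static int lock_taken;                       /* stand-in for rt_runtime_lock */

      static void account_runtime(long long delta)
      {
      	/* Cheap test first: if bandwidth control is off there is nothing
      	 * to account, so don't touch the lock at all.                    */
      	if (rt_runtime == RUNTIME_INF)
      		return;

      	lock_taken = 1;                      /* spin_lock(...) in the real code */
      	printf("accounting %lld ns under the lock\n", delta);
      	lock_taken = 0;                      /* spin_unlock(...) */
      }

      int main(void)
      {
      	account_runtime(1000);               /* skipped: RUNTIME_INF */
      	rt_runtime = 950000;
      	account_runtime(1000);               /* accounted under the lock */
      	return 0;
      }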
  11. 22 Oct 2008 (1 commit)
  12. 04 Oct 2008 (1 commit)
    • sched_rt.c: resched needed in rt_rq_enqueue() for the root rt_rq · f6121f4f
      By Dario Faggioli
      While working on the new version of the code for SCHED_SPORADIC I
      noticed something strange in the present throttling mechanism. More
      specifically in the throttling timer handler in sched_rt.c
      (do_sched_rt_period_timer()) and in rt_rq_enqueue().
      
      The problem is that, when unthrottling a runqueue, rt_rq_enqueue() only
      asks for rescheduling if the runqueue has a sched_entity associated with
      it (i.e., rt_rq->rt_se != NULL).
      Now, if the runqueue is the root rq (which has rt_se == NULL),
      rescheduling does not take place and is delayed to some undefined
      instant in the future.  (A sketch of the missing check follows this entry.)
      
      This implies erratic bandwidth usage by the RT tasks under throttling.
      For instance, with rt_runtime_us/rt_period_us = 950ms/1000ms, an RT
      task will get less than 95%; in our tests we got something varying
      between 70% and 95%.
      With smaller time values, e.g., 95ms/100ms, things are even worse, and
      we have seen values go down to 20-25%.
      
      The tests we performed are simply running 'yes' as a SCHED_FIFO task,
      and checking the CPU usage with top, but we can investigate thoroughly
      if you think it is needed.
      
      Things go much better, for us, with the attached patch... Don't know if
      it is the best approach, but it solved the issue for us.
      Signed-off-by: Dario Faggioli <raistlin@linux.it>
      Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f6121f4f
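
      A sketch of the behavioral fix in plain C with hypothetical names: when a
      runqueue is unthrottled, compare the newly runnable RT priority against
      whatever is currently running and request a reschedule if the running
      task loses, even for the root rt_rq where there is no rt_se to enqueue.

      #include <stdio.h>

      #define IDLE_PRIO 140                 /* anything beats the idle task */

      static int need_resched;              /* stand-in for TIF_NEED_RESCHED */

      static void rt_rq_unthrottle(int highest_rt_prio, int curr_prio)
      {
      	/* Root rt_rq case: there is no parent sched_entity to enqueue, but a
      	 * reschedule must still be requested or the unthrottled tasks sit
      	 * behind whatever is running (often the idle task) until some later
      	 * event happens to reschedule the cpu.                              */
      	if (highest_rt_prio < curr_prio)   /* lower value = higher RT priority */
      		need_resched = 1;
      }

      int main(void)
      {
      	rt_rq_unthrottle(30, IDLE_PRIO);
      	printf("need_resched=%d\n", need_resched);   /* 1: preempt the idle task */
      	return 0;
      }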
  13. 23 Sep 2008 (1 commit)
  14. 22 Sep 2008 (1 commit)
  15. 14 Sep 2008 (1 commit)
    • timers: fix itimer/many thread hang · f06febc9
      By Frank Mayhar
      Overview
      
      This patch reworks the handling of POSIX CPU timers, including the
      ITIMER_PROF, ITIMER_VIRT timers and rlimit handling.  It was put together
      with the help of Roland McGrath, the owner and original writer of this code.
      
      The problem we ran into, and the reason for this rework, has to do with using
      a profiling timer in a process with a large number of threads.  It appears
      that the performance of the old implementation of run_posix_cpu_timers() was
      at least O(n*3) (where "n" is the number of threads in a process) or worse.
      Everything is fine with an increasing number of threads until the time taken
      for that routine to run becomes the same as or greater than the tick time, at
      which point things degrade rather quickly.
      
      This patch fixes bug 9906, "Weird hang with NPTL and SIGPROF."
      
      Code Changes
      
      This rework corrects the implementation of run_posix_cpu_timers() to make it
      run in constant time for a particular machine.  (Performance may vary between
      one machine and another depending upon whether the kernel is built as single-
      or multiprocessor and, in the latter case, depending upon the number of
      running processors.)  To do this, at each tick we now update fields in
      signal_struct as well as task_struct.  The run_posix_cpu_timers() function
      uses those fields to make its decisions.
      
      We define a new structure, "task_cputime," to contain user, system and
      scheduler times and use these in appropriate places:
      
      struct task_cputime {
      	cputime_t utime;
      	cputime_t stime;
      	unsigned long long sum_exec_runtime;
      };
      
      This is included in the structure "thread_group_cputime," which is a new
      substructure of signal_struct and which varies for uniprocessor versus
      multiprocessor kernels.  For uniprocessor kernels, it uses "task_cputime" as
      a simple substructure, while for multiprocessor kernels it is a pointer:
      
      struct thread_group_cputime {
      	struct task_cputime totals;
      };
      
      struct thread_group_cputime {
      	struct task_cputime *totals;
      };
      
      We also add a new task_cputime substructure directly to signal_struct, to
      cache the earliest expiration of process-wide timers, and task_cputime also
      replaces the it_*_expires fields of task_struct (used for earliest expiration
      of thread timers).  The "thread_group_cputime" structure contains process-wide
      timers that are updated via account_user_time() and friends.  In the non-SMP
      case the structure is a simple aggregator; unfortunately in the SMP case that
      simplicity was not achievable due to cache-line contention between CPUs (in
      one measured case performance was actually _worse_ on a 16-cpu system than
      the same test on a 4-cpu system, due to this contention).  For SMP, the
      thread_group_cputime counters are maintained as a per-cpu structure allocated
      using alloc_percpu().  The timer functions update only the timer field in
      the structure corresponding to the running CPU, obtained using per_cpu_ptr().
      
      We define a set of inline functions in sched.h that we use to maintain the
      thread_group_cputime structure and hide the differences between UP and SMP
      implementations from the rest of the kernel.  The thread_group_cputime_init()
      function initializes the thread_group_cputime structure for the given task.
      The thread_group_cputime_alloc() is a no-op for UP; for SMP it calls the
      out-of-line function thread_group_cputime_alloc_smp() to allocate and fill
      in the per-cpu structures and fields.  The thread_group_cputime_free()
      function, also a no-op for UP, in SMP frees the per-cpu structures.  The
      thread_group_cputime_clone_thread() function (also a UP no-op) for SMP calls
      thread_group_cputime_alloc() if the per-cpu structures haven't yet been
      allocated.  The thread_group_cputime() function fills the task_cputime
      structure it is passed with the contents of the thread_group_cputime fields;
      in UP it's that simple but in SMP it must also safely check that tsk->signal
      is non-NULL (if it is it just uses the appropriate fields of task_struct) and,
      if so, sums the per-cpu values for each online CPU.  Finally, the three
      functions account_group_user_time(), account_group_system_time() and
      account_group_exec_runtime() are used by timer functions to update the
      respective fields of the thread_group_cputime structure.
      
      Non-SMP operation is trivial and will not be mentioned further.
      
      The per-cpu structure is always allocated when a task creates its first new
      thread, via a call to thread_group_cputime_clone_thread() from copy_signal().
      It is freed at process exit via a call to thread_group_cputime_free() from
      cleanup_signal().
      
      All functions that formerly summed utime/stime/sum_sched_runtime values
      from all threads in the thread group now use thread_group_cputime() to
      snapshot the values in the thread_group_cputime structure or the values in
      the task structure itself if the per-cpu structure hasn't been allocated.
      
      Finally, the code in kernel/posix-cpu-timers.c has changed quite a bit.
      The run_posix_cpu_timers() function has been split into a fast path and a
      slow path; the former safely checks whether there are any expired thread
      timers and, if not, just returns, while the slow path does the heavy lifting.
      With the dedicated thread group fields, timers are no longer "rebalanced" and
      the process_timer_rebalance() function and related code has gone away.  All
      summing loops are gone and all code that used them now uses the
      thread_group_cputime() inline.  When process-wide timers are set, the new
      task_cputime structure in signal_struct is used to cache the earliest
      expiration; this is checked in the fast path.
      
      Performance
      
      The fix appears not to add significant overhead to existing operations.  It
      generally performs the same as the current code except in two cases, one in
      which it performs slightly worse (Case 5 below) and one in which it performs
      very significantly better (Case 2 below).  Overall it's a wash except in those
      two cases.
      
      I've since done somewhat more involved testing on a dual-core Opteron system.
      
      Case 1: With no itimer running, for a test with 100,000 threads, the fixed
      	kernel took 1428.5 seconds, 513 seconds more than the unfixed system,
      	all of which was spent in the system.  There were twice as many
      	voluntary context switches with the fix as without it.
      
      Case 2: With an itimer running at .01 second ticks and 4000 threads (the most
      	an unmodified kernel can handle), the fixed kernel ran the test in
      	eight percent of the time (5.8 seconds as opposed to 70 seconds) and
      	had better tick accuracy (.012 seconds per tick as opposed to .023
      	seconds per tick).
      
      Case 3: A 4000-thread test with an initial timer tick of .01 second and an
      	interval of 10,000 seconds (i.e. a timer that ticks only once) had
      	very nearly the same performance in both cases:  6.3 seconds elapsed
      	for the fixed kernel versus 5.5 seconds for the unfixed kernel.
      
      With fewer threads (eight in these tests), the Case 1 test ran in essentially
      the same time on both the modified and unmodified kernels (5.2 seconds versus
      5.8 seconds).  The Case 2 test ran in about the same time as well, 5.9 seconds
      versus 5.4 seconds but again with much better tick accuracy, .013 seconds per
      tick versus .025 seconds per tick for the unmodified kernel.
      
      Since the fix affected the rlimit code, I also tested soft and hard CPU limits.
      
      Case 4: With a hard CPU limit of 20 seconds and eight threads (and an itimer
      	running), the modified kernel was very slightly favored in that while
      	it killed the process in 19.997 seconds of CPU time (5.002 seconds of
      	wall time), only .003 seconds of that was system time, the rest was
      	user time.  The unmodified kernel killed the process in 20.001 seconds
      	of CPU (5.014 seconds of wall time) of which .016 seconds was system
      	time.  Really, though, the results were too close to call.  The results
      	were essentially the same with no itimer running.
      
      Case 5: With a soft limit of 20 seconds and a hard limit of 2000 seconds
      	(where the hard limit would never be reached) and an itimer running,
      	the modified kernel exhibited worse tick accuracy than the unmodified
      	kernel: .050 seconds/tick versus .028 seconds/tick.  Otherwise,
      	performance was almost indistinguishable.  With no itimer running this
      	test exhibited virtually identical behavior and times in both cases.
      
      In times past I did some limited performance testing.  Those results are below.
      
      On a four-cpu Opteron system without this fix, a sixteen-thread test executed
      in 3569.991 seconds, of which user was 3568.435s and system was 1.556s.  On
      the same system with the fix, user and elapsed time were about the same, but
      system time dropped to 0.007 seconds.  Performance with eight, four and one
      thread were comparable.  Interestingly, the timer ticks with the fix seemed
      more accurate:  The sixteen-thread test with the fix received 149543 ticks
      for 0.024 seconds per tick, while the same test without the fix received
      58720 ticks for 0.061 seconds per tick.  Both cases were configured for an interval of
      0.01 seconds.  Again, the other tests were comparable.  Each thread in this
      test computed the primes up to 25,000,000.
      
      I also did a test with a large number of threads, 100,000 threads, which is
      impossible without the fix.  In this case each thread computed the primes only
      up to 10,000 (to make the runtime manageable).  System time dominated, at
      1546.968 seconds out of a total 2176.906 seconds (giving a user time of
      629.938s).  It received 147651 ticks for 0.015 seconds per tick, still quite
      accurate.  There is obviously no comparable test without the fix.
      Signed-off-by: Frank Mayhar <fmayhar@google.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Alexey Dobriyan <adobriyan@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f06febc9
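
      A user-space model of the SMP design described above: each CPU updates
      only its own slot of the group totals, and readers sum the slots, trading
      a slightly more expensive read for contention-free per-tick updates.  The
      names echo the commit but the types, sizes, and signatures here are
      illustrative only.

      #include <stdio.h>

      #define NCPUS 4

      struct task_cputime_model {
      	unsigned long long utime, stime, sum_exec_runtime;
      };

      /* One slot per cpu, as if allocated with alloc_percpu(). */
      static struct task_cputime_model per_cpu_totals[NCPUS];

      /* Tick path: touch only the running cpu's slot (no shared cache line). */
      static void account_group_user_time(int cpu, unsigned long long cputime)
      {
      	per_cpu_totals[cpu].utime += cputime;
      }

      /* Reader (e.g. timer expiry check): sum the per-cpu slots. */
      static struct task_cputime_model thread_group_cputime(void)
      {
      	struct task_cputime_model sum = { 0, 0, 0 };

      	for (int cpu = 0; cpu < NCPUS; cpu++) {
      		sum.utime            += per_cpu_totals[cpu].utime;
      		sum.stime            += per_cpu_totals[cpu].stime;
      		sum.sum_exec_runtime += per_cpu_totals[cpu].sum_exec_runtime;
      	}
      	return sum;
      }

      int main(void)
      {
      	account_group_user_time(0, 10);
      	account_group_user_time(3, 5);
      	printf("group utime = %llu\n", thread_group_cputime().utime);   /* 15 */
      	return 0;
      }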
  16. 11 Sep 2008 (1 commit)
    • sched: fix 2.6.27-rc5 couldn't boot on tulsa machine randomly · baf25731
      By Zhang Yanmin
      On my tulsa x86-64 machine, kernel 2.6.25-rc5 couldn't boot randomly.
      
      Basically, __enable_runtime() forgets to reset rt_rq->rt_throttled
      to 0.  When every cpu is brought up, a per-cpu migration_thread is created;
      it runs very fast and sometimes marks the corresponding rt_rq->rt_throttled
      as 1 very quickly.  After all cpus are up, with the calling chain below:
      
         sched_init_smp => arch_init_sched_domains => build_sched_domains => ...
      => cpu_attach_domain => rq_attach_root => set_rq_online => ...
      => __enable_runtime
      
      __enable_runtime() is called against every rt_rq again, so rt_rq->rt_time is
      reset to 0, but rt_rq->rt_throttled might still be 1.  Later on,
      do_sched_rt_period_timer() cannot reset it, and no RT task can be
      scheduled to run on that cpu.  Here the RT task is migration_thread,
      which is woken up when a task is migrated to another cpu.
      
      Below patch fixes it against 2.6.27-rc5.
      Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      baf25731
  17. 28 Aug 2008 (2 commits)
    • sched: rt-bandwidth accounting fix · cc2991cf
      By Peter Zijlstra
      This fixes an accounting bug where we would continue accumulating runtime
      even though bandwidth control was disabled, which would lead to very long
      throttle periods once bandwidth control got turned on again.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cc2991cf
    • sched: fix sched_rt_rq_enqueue() resched idle · f3ade837
      By John Blackwood
      When sysctl_sched_rt_runtime is set to something other than -1 and the
      CONFIG_RT_GROUP_SCHED kernel option is NOT enabled, we get into a state
      where we see one or more CPUs idling forever even though there are
      real-time tasks in their rt runqueue that are able to run (no longer
      throttled).
      
      The sequence is:
      
      - A real-time task is running when the timer sets the rt runqueue
          to throttled, and the rt task is resched_task()ed and switched
          out, and idle is switched in since there are no non-rt tasks to
          run on that cpu.
      
      - Eventually the do_sched_rt_period_timer() runs and un-throttles
          the rt runqueue, but we just exit the timer interrupt and go back
          to executing the idle task in the idle loop forever.
      
      If we change the sched_rt_rq_enqueue() routine to use some of the code
      from the CONFIG_RT_GROUP_SCHED enabled version of this same routine and
      resched_task() the currently executing task (idle in our case) if it is
      a lower priority task than the higher rt task in the now un-throttled
      runqueue, the problem is no longer observed.
      Signed-off-by: John Blackwood <john.blackwood@ccur.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f3ade837
  18. 19 Aug 2008 (2 commits)
  19. 14 Aug 2008 (1 commit)
  20. 11 Aug 2008 (1 commit)
    • lockdep: re-annotate scheduler runqueues · 1b12bbc7
      By Peter Zijlstra
      Instead of using a per-rq lock class, use the regular nesting operations.
      
      However, take extra care with double_lock_balance() as it can release the
      already held rq->lock (and therefore change its nesting class).
      
      So what can happen is:
      
       spin_lock(rq->lock);	// this rq subclass 0
      
       double_lock_balance(rq, other_rq);
         // release rq
         // acquire other_rq->lock subclass 0
         // acquire rq->lock subclass 1
      
       spin_unlock(other_rq->lock);
      
      leaving you with rq->lock in subclass 1
      
      So a subsequent double_lock_balance() call can try to nest a subclass 1
      lock while already holding a subclass 1 lock.
      
      Fix this by introducing double_unlock_balance(), which releases the other
      rq's lock but also resets the subclass of this rq's lock to 0.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1b12bbc7
  21. 24 Jul 2008 (1 commit)
  22. 18 Jul 2008 (3 commits)
  23. 27 Jun 2008 (2 commits)