1. 15 May 2009, 1 commit
    • perf_counter: per user mlock gift · 789f90fc
      Peter Zijlstra committed
      Instead of a per-process mlock gift for perf-counters, use a
      per-user gift so that there is less of a DoS potential.  (A minimal
      model of the per-user charge follows this entry.)
      
      [ Impact: reduce worst-case unprivileged memory consumption ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <20090515132018.496182835@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
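      A minimal user-space model of the per-user charge (all names below are
      invented for illustration; this is not the kernel code): every locked
      page is charged against a single allowance shared by all of a user's
      processes, so spawning more processes does not multiply the gift.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical per-user accounting, modelled on the commit's idea:
       * all counters created by one user charge the same allowance, so
       * forking more processes does not multiply the mlock gift. */
      struct user_struct {
              atomic_long locked_vm;          /* pages this user has pinned */
      };

      static const long user_lock_limit = 512;        /* assumed gift, in pages */

      static bool charge_user_locked_vm(struct user_struct *u, long pages)
      {
              long cur = atomic_load(&u->locked_vm);

              do {
                      if (cur + pages > user_lock_limit)
                              return false;   /* would exceed the per-user gift */
              } while (!atomic_compare_exchange_weak(&u->locked_vm, &cur,
                                                     cur + pages));
              return true;
      }

      int main(void)
      {
              struct user_struct u;

              atomic_init(&u.locked_vm, 0);
              printf("first 400 pages:   %s\n", charge_user_locked_vm(&u, 400) ? "ok" : "denied");
              printf("another 400 pages: %s\n", charge_user_locked_vm(&u, 400) ? "ok" : "denied");
              return 0;
      }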
  2. 09 Apr 2009, 1 commit
    • sched: do not count frozen tasks toward load · e3c8ca83
      Nathan Lynch committed
      Freezing tasks via the cgroup freezer causes the load average to climb
      because the freezer's current implementation puts frozen tasks in
      uninterruptible sleep (D state).
      
      Some applications which perform job-scheduling functions consult the
      load average when making decisions.  If a cgroup is frozen, the load
      average does not provide a useful measure of the system's utilization
      to such applications.  This is especially inconvenient if the job
      scheduler employs the cgroup freezer as a mechanism for preempting low
      priority jobs.  Contrast this with using SIGSTOP for the same purpose:
      the stopped tasks do not count toward system load.
      
      Change task_contributes_to_load() to return false if the task is
      frozen.  This results in /proc/loadavg behavior that better meets
      users' expectations.
      Signed-off-by: Nathan Lynch <ntl@pobox.com>
      Acked-by: Andrew Morton <akpm@linux-foundation.org>
      Acked-by: Nigel Cunningham <nigel@tuxonice.net>
      Tested-by: Nigel Cunningham <nigel@tuxonice.net>
      Cc: <stable@kernel.org>
      Cc: containers@lists.linux-foundation.org
      Cc: linux-pm@lists.linux-foundation.org
      Cc: Matt Helsley <matthltc@us.ibm.com>
      LKML-Reference: <20090408194512.47a99b95@manatee.lan>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
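      A small stand-alone model of the predicate described in the entry above,
      using illustrative names rather than the kernel's: a task in
      uninterruptible sleep normally counts toward the load average, but not
      when it is only sleeping because the freezer parked it.

      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative task states and flags; the real kernel uses
       * TASK_UNINTERRUPTIBLE and a frozen() test on the task. */
      enum task_state { TASK_RUNNING, TASK_INTERRUPTIBLE, TASK_UNINTERRUPTIBLE };

      struct task {
              enum task_state state;
              bool frozen;            /* parked by the cgroup freezer */
      };

      /* Frozen tasks sleep in D state but should not inflate /proc/loadavg. */
      static bool task_contributes_to_load(const struct task *t)
      {
              return t->state == TASK_UNINTERRUPTIBLE && !t->frozen;
      }

      int main(void)
      {
              struct task io_wait = { TASK_UNINTERRUPTIBLE, false };
              struct task frozen  = { TASK_UNINTERRUPTIBLE, true };

              printf("blocked on I/O counts: %d\n", task_contributes_to_load(&io_wait));
              printf("frozen task counts:    %d\n", task_contributes_to_load(&frozen));
              return 0;
      }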
  3. 06 Apr 2009, 1 commit
  4. 03 Apr 2009, 5 commits
  5. 01 Apr 2009, 2 commits
  6. 24 Mar 2009, 2 commits
    • function-graph: ignore times across schedule · 8aef2d28
      Steven Rostedt committed
      Impact: more accurate timings
      
      The current method of function graph tracing does not take into
      account the time spent while a task is not running. This makes functions
      that call schedule() appear to have inflated costs:
      
       3) + 18.664 us   |      }
       ------------------------------------------
       3)    <idle>-0    =>  kblockd-123
       ------------------------------------------
      
       3)               |      finish_task_switch() {
       3)   1.441 us    |        _spin_unlock_irq();
       3)   3.966 us    |      }
       3) ! 2959.433 us |    }
       3) ! 2961.465 us |  }
      
      This patch uses the tracepoint in the scheduling context switch to
      account for time that has elapsed while a task is scheduled out.
      Now we see:
      
       ------------------------------------------
       3)    <idle>-0    =>  edac-po-1067
       ------------------------------------------
      
       3)               |      finish_task_switch() {
       3)   0.685 us    |        _spin_unlock_irq();
       3)   2.331 us    |      }
       3) + 41.439 us   |    }
       3) + 42.663 us   |  }
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
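      A rough user-space model of the compensation, assuming a scheduler hook
      reports how long the task was switched out (everything below is invented
      for illustration, not the tracer's code): the slept interval is credited
      to every function still on the task's call stack and later subtracted
      from its reported duration.

      #include <stdio.h>

      /* Toy timeline in microseconds; a real tracer would read a clock. */
      static unsigned long long now_us;

      struct graph_frame {
              const char *func;
              unsigned long long calltime;    /* entry timestamp */
              unsigned long long sleeptime;   /* time scheduled out since entry */
      };

      /* Model of the sched_switch hook: credit the slept interval to every
       * frame still on the stack so it is not billed to those functions. */
      static void graph_sched_in(struct graph_frame *stack, int depth,
                                 unsigned long long slept)
      {
              for (int i = 0; i < depth; i++)
                      stack[i].sleeptime += slept;
      }

      int main(void)
      {
              struct graph_frame stack[4];
              int depth = 0;

              now_us = 100;
              stack[depth++] = (struct graph_frame){ "do_io", now_us, 0 };

              now_us += 5;                            /* 5 us of real work */
              graph_sched_in(stack, depth, 2900);     /* schedule() slept ~2.9 ms */
              now_us += 2900 + 3;                     /* wall clock moved on, 3 more us of work */

              depth--;
              printf("%s: wall %llu us, reported %llu us\n", stack[depth].func,
                     now_us - stack[depth].calltime,
                     now_us - stack[depth].calltime - stack[depth].sleeptime);
              return 0;
      }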
    • genirq: add threaded interrupt handler support · 3aa551c9
      Thomas Gleixner committed
      Add support for threaded interrupt handlers:
      
      A device driver can request that its main interrupt handler runs in a
      thread. To achieve this the device driver requests the interrupt with
      request_threaded_irq() and provides, in addition to the handler, a
      thread function. The handler function is called in hard interrupt
      context and needs to check whether the interrupt originated from the
      device. If the interrupt originated from the device then the handler
      can either return IRQ_HANDLED or IRQ_WAKE_THREAD. IRQ_HANDLED is
      returned when no further action is required. IRQ_WAKE_THREAD causes
      the genirq code to invoke the threaded (main) handler. When
      IRQ_WAKE_THREAD is returned, the handler must have disabled the
      interrupt on the device level. This is mandatory for shared interrupt
      handlers, but we need to do it as well for obscure x86 hardware where
      disabling an interrupt on the IO_APIC level redirects the interrupt to
      the legacy PIC interrupt lines.  (A hedged usage sketch follows this
      entry.)
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
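      A hedged driver-side sketch of the API described above; the device,
      register offsets and bit masks are made up, only request_threaded_irq()
      and the IRQ_* return values come from the kernel. The quick handler runs
      in hard interrupt context, silences the device, and defers the real work
      to the interrupt thread.

      #include <linux/interrupt.h>
      #include <linux/io.h>

      #define FOO_STATUS      0x00    /* hypothetical registers */
      #define FOO_MASK        0x04
      #define FOO_IRQ_PENDING 0x01

      struct foo_dev {
              void __iomem *regs;
      };

      /* Hard-irq context: only check and silence the device. */
      static irqreturn_t foo_quick_check(int irq, void *dev_id)
      {
              struct foo_dev *foo = dev_id;

              if (!(readl(foo->regs + FOO_STATUS) & FOO_IRQ_PENDING))
                      return IRQ_NONE;                /* not ours (shared line) */

              writel(0, foo->regs + FOO_MASK);        /* mask at device level */
              return IRQ_WAKE_THREAD;                 /* run foo_thread_fn() */
      }

      /* Process context: may sleep, take mutexes, do the heavy lifting. */
      static irqreturn_t foo_thread_fn(int irq, void *dev_id)
      {
              struct foo_dev *foo = dev_id;

              /* ... process the data ... */
              writel(FOO_IRQ_PENDING, foo->regs + FOO_MASK);  /* unmask again */
              return IRQ_HANDLED;
      }

      static int foo_setup_irq(struct foo_dev *foo, int irq)
      {
              return request_threaded_irq(irq, foo_quick_check, foo_thread_fn,
                                          IRQF_SHARED, "foo", foo);
      }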
  7. 12 Mar 2009, 1 commit
  8. 27 Feb 2009, 2 commits
  9. 15 Feb 2009, 1 commit
    • lockdep: annotate reclaim context (__GFP_NOFS) · cf40bd16
      Nick Piggin committed
      Here is another version, with the incremental patch rolled up, and
      added reclaim context annotation to kswapd, and allocation tracing
      to slab allocators (which may only ever reach the page allocator
      in rare cases, so it is good to put annotations here too).
      
      Haven't tested this version as such, but it should be getting closer
      to merge worthy ;)
      
      --
      After noticing some code in mm/filemap.c accidentally perform a __GFP_FS
      allocation when it should not have been, I thought it might be a good idea to
      try to catch this kind of thing with lockdep.
      
      I coded up a little idea that seems to work. Unfortunately the system has to
      actually be in __GFP_FS page reclaim, then take the lock, before it will mark
      it. But at least that might still be some orders of magnitude more common
      (and more debuggable) than an actual deadlock condition, so we have some
      improvement I hope (the concept is no less complete than discovery of a lock's
      interrupt contexts).
      
      I guess we could even do the same thing with __GFP_IO (normal reclaim), and
      even GFP_NOIO locks too... but filesystems will have the most locks and fiddly
      code paths, so let's start there and see how it goes.
      
      It *seems* to work. I did a quick test.
      
      =================================
      [ INFO: inconsistent lock state ]
      2.6.28-rc6-00007-ged313489-dirty #26
      ---------------------------------
      inconsistent {in-reclaim-W} -> {ov-reclaim-W} usage.
      modprobe/8526 [HC0[0]:SC0[0]:HE1:SE1] takes:
       (testlock){--..}, at: [<ffffffffa0020055>] brd_init+0x55/0x216 [brd]
      {in-reclaim-W} state was registered at:
        [<ffffffff80267bdb>] __lock_acquire+0x75b/0x1a60
        [<ffffffff80268f71>] lock_acquire+0x91/0xc0
        [<ffffffff8070f0e1>] mutex_lock_nested+0xb1/0x310
        [<ffffffffa002002b>] brd_init+0x2b/0x216 [brd]
        [<ffffffff8020903b>] _stext+0x3b/0x170
        [<ffffffff80272ebf>] sys_init_module+0xaf/0x1e0
        [<ffffffff8020c3fb>] system_call_fastpath+0x16/0x1b
        [<ffffffffffffffff>] 0xffffffffffffffff
      irq event stamp: 3929
      hardirqs last  enabled at (3929): [<ffffffff8070f2b5>] mutex_lock_nested+0x285/0x310
      hardirqs last disabled at (3928): [<ffffffff8070f089>] mutex_lock_nested+0x59/0x310
      softirqs last  enabled at (3732): [<ffffffff8061f623>] sk_filter+0x83/0xe0
      softirqs last disabled at (3730): [<ffffffff8061f5b6>] sk_filter+0x16/0xe0
      
      other info that might help us debug this:
      1 lock held by modprobe/8526:
       #0:  (testlock){--..}, at: [<ffffffffa0020055>] brd_init+0x55/0x216 [brd]
      
      stack backtrace:
      Pid: 8526, comm: modprobe Not tainted 2.6.28-rc6-00007-ged313489-dirty #26
      Call Trace:
       [<ffffffff80265483>] print_usage_bug+0x193/0x1d0
       [<ffffffff80266530>] mark_lock+0xaf0/0xca0
       [<ffffffff80266735>] mark_held_locks+0x55/0xc0
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffff802667ca>] trace_reclaim_fs+0x2a/0x60
       [<ffffffff80285005>] __alloc_pages_internal+0x475/0x580
       [<ffffffff8070f29e>] ? mutex_lock_nested+0x26e/0x310
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffffa002006a>] brd_init+0x6a/0x216 [brd]
       [<ffffffffa0020000>] ? brd_init+0x0/0x216 [brd]
       [<ffffffff8020903b>] _stext+0x3b/0x170
       [<ffffffff8070f8b9>] ? mutex_unlock+0x9/0x10
       [<ffffffff8070f83d>] ? __mutex_unlock_slowpath+0x10d/0x180
       [<ffffffff802669ec>] ? trace_hardirqs_on_caller+0x12c/0x190
       [<ffffffff80272ebf>] sys_init_module+0xaf/0x1e0
       [<ffffffff8020c3fb>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
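      A toy model of the annotation idea in the entry above (names invented;
      this is not the lockdep implementation): reclaim is treated like another
      context, so a lock that is ever taken from inside __GFP_FS reclaim must
      never be held across an allocation that may itself enter that reclaim.

      #include <stdbool.h>
      #include <stdio.h>

      /* Toy lockdep-style state for one lock class. */
      struct lock_class {
              const char *name;
              bool taken_in_reclaim;  /* acquired while reclaim was in progress */
      };

      static bool in_reclaim;         /* "current is inside __GFP_FS reclaim" */

      static void lock_acquire(struct lock_class *l)
      {
              if (in_reclaim)
                      l->taken_in_reclaim = true;
      }

      /* Called from the allocator model for a __GFP_FS allocation made while
       * the lock is held: if reclaim can also want this lock, we may deadlock. */
      static void check_fs_alloc_while_held(const struct lock_class *l)
      {
              if (l->taken_in_reclaim)
                      printf("inconsistent reclaim usage of %s: held over a "
                             "__GFP_FS allocation, but also taken from within "
                             "reclaim\n", l->name);
      }

      int main(void)
      {
              struct lock_class testlock = { "testlock", false };

              in_reclaim = true;      /* e.g. a shrinker path takes the lock */
              lock_acquire(&testlock);
              in_reclaim = false;

              /* Later: a normal path holds the lock and allocates GFP_KERNEL. */
              lock_acquire(&testlock);
              check_fs_alloc_while_held(&testlock);
              return 0;
      }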
  10. 12 Feb 2009, 1 commit
  11. 11 Feb 2009, 2 commits
  12. 09 Feb 2009, 2 commits
    • perf_counters: make software counters work as per-cpu counters · 23a185ca
      Paul Mackerras committed
      Impact: kernel crash fix
      
      Yanmin Zhang reported that using a PERF_COUNT_TASK_CLOCK software
      counter as a per-cpu counter would reliably crash the system, because
      it calls __task_delta_exec with a null pointer.  The page fault,
      context switch and cpu migration counters also won't function
      correctly as per-cpu counters since they reference the current task.
      
      This fixes the problem by redirecting the task_clock counter to the
      cpu_clock counter when used as a per-cpu counter, and by implementing
      per-cpu page fault, context switch and cpu migration counters.
      
      Along the way, this:
      
      - Initializes counter->ctx earlier, in perf_counter_alloc, so that
        sw_perf_counter_init can use it
      - Adds code to kernel/sched.c to count task migrations into each
        cpu, in rq->nr_migrations_in
      - Exports the per-cpu context switch and task migration counts
        via new functions added to kernel/sched.c
      - Makes sure that if sw_perf_counter_init fails, we don't try to
        initialize the counter as a hardware counter.  Since the user has
        passed a negative, non-raw event type, they clearly don't intend
        for it to be interpreted as a hardware event.
      Reported-by: "Zhang Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
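      A toy model of the fix in the entry above, with invented names: a
      counter is bound either to a task or to a CPU, and a per-cpu "task
      clock" silently falls back to the cpu clock because there is no task
      whose runtime it could read.

      #include <stdio.h>

      /* Toy software-counter model: a counter is either bound to a task or
       * to a CPU.  A task-clock read needs a task to charge time to, so a
       * per-cpu instance uses the cpu-clock source instead. */
      struct task { unsigned long long runtime_ns; };
      struct cpu  { unsigned long long clock_ns; };

      enum sw_event { SW_TASK_CLOCK, SW_CPU_CLOCK };

      struct sw_counter {
              enum sw_event event;
              struct task *task;      /* NULL when opened as a per-cpu counter */
              struct cpu  *cpu;
      };

      static void sw_counter_init(struct sw_counter *c, enum sw_event ev,
                                  struct task *t, struct cpu *cpu)
      {
              /* The redirect from the commit, in miniature: a per-cpu
               * "task clock" has no task to read, so use the cpu clock. */
              if (ev == SW_TASK_CLOCK && !t)
                      ev = SW_CPU_CLOCK;
              *c = (struct sw_counter){ ev, t, cpu };
      }

      static unsigned long long sw_counter_read(const struct sw_counter *c)
      {
              return c->event == SW_TASK_CLOCK ? c->task->runtime_ns
                                               : c->cpu->clock_ns;
      }

      int main(void)
      {
              struct cpu cpu0 = { 123456789 };
              struct sw_counter c;

              sw_counter_init(&c, SW_TASK_CLOCK, NULL, &cpu0);  /* per-cpu open */
              printf("per-cpu task clock reads %llu ns (no NULL dereference)\n",
                     sw_counter_read(&c));
              return 0;
      }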
    • softlockup: remove timestamp checking from hung_task · 17406b82
      Mandeep Singh Baines committed
      Impact: saves sizeof(long) bytes per task_struct
      
      By guaranteeing that at least sysctl_hung_task_timeout_secs seconds
      have elapsed between tasklist scans, we can avoid using timestamps.
      Signed-off-by: Mandeep Singh Baines <msb@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  13. 06 Feb 2009, 1 commit
  14. 05 Feb 2009, 2 commits
    • timers: split process wide cpu clocks/timers · 4cd4c1b4
      Peter Zijlstra committed
      Change the process wide cpu timers/clocks so that we:
      
       1) don't mess up the kernel with too many threads,
       2) don't have a per-cpu allocation for each process,
       3) have no impact when not used.
      
      In order to accomplish this we're going to split it into two parts:
      
       - clocks; which can take all the time they want since they run
                 from user context -- ie. sys_clock_gettime(CLOCK_PROCESS_CPUTIME_ID)
      
       - timers; which need constant time sampling but since they're
                 explicitly used, the user can pay the overhead.
      
      The clock readout will go back to a full sum of the thread group, while the
      timers will run off a global 'clock' that only runs when needed, so only
      programs that make use of the facility pay the price.  (A toy model of the
      split follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
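      A toy model of the split, with invented names: the clock side does a
      full (possibly slow) sum over the thread group from user context, while
      the timer side maintains a cheap running sum from the tick only while a
      CPU timer is actually armed.

      #include <stdbool.h>
      #include <stdio.h>

      struct thread { unsigned long long cputime_ns; };

      struct process {
              struct thread threads[3];
              int nthreads;
              /* Timer side: cheap running sum, maintained from the tick,
               * but only while a CPU timer is actually armed. */
              bool timer_active;
              unsigned long long running_sum_ns;
      };

      /* Clock side (e.g. CLOCK_PROCESS_CPUTIME_ID): full, possibly slow,
       * sum over the thread group, done in user context. */
      static unsigned long long process_clock_read(const struct process *p)
      {
              unsigned long long sum = 0;
              for (int i = 0; i < p->nthreads; i++)
                      sum += p->threads[i].cputime_ns;
              return sum;
      }

      /* Tick path: constant time, and a near no-op when no timer is armed. */
      static void account_tick(struct process *p, int thread, unsigned long long delta)
      {
              p->threads[thread].cputime_ns += delta;
              if (p->timer_active)
                      p->running_sum_ns += delta;
      }

      int main(void)
      {
              struct process p = { .nthreads = 3 };

              account_tick(&p, 0, 1000);      /* no timer armed: sum untouched */
              p.timer_active = true;          /* someone arms ITIMER_PROF */
              account_tick(&p, 1, 2000);

              printf("clock read: %llu ns, timer sum: %llu ns\n",
                     process_clock_read(&p), p.running_sum_ns);
              return 0;
      }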
    • signal: re-add dead task accumulation stats. · 32bd671d
      Peter Zijlstra committed
      We're going to split the process wide cpu accounting into two parts:
      
       - clocks; which can take all the time they want since they run
                 from user context.
      
       - timers; which need constant time tracing but can afford the overhead
                 because they're default off -- and rare.
      
      The clock readout will go back to a full sum of the thread group, for this
      we need to re-add the exit stats that were removed in the initial itimer
      rework (f06febc9: timers: fix itimer/many thread hang).
      
      Furthermore, since that full sum can be rather slow for large thread groups
      and we have the complete dead task stats, revert the do_notify_parent time
      computation.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Reviewed-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
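      A small model of the re-added accumulation, with invented names: an
      exiting thread folds its times into a shared per-process record, so a
      later full sum over the live threads plus that residue still covers the
      whole group.

      #include <stdbool.h>
      #include <stdio.h>

      struct thread {
              unsigned long long utime, stime;
              bool alive;
      };

      /* Stand-in for the exit stats kept in the shared signal struct. */
      struct signal_acct {
              unsigned long long dead_utime, dead_stime;
      };

      static void thread_exit(struct thread *t, struct signal_acct *sig)
      {
              sig->dead_utime += t->utime;
              sig->dead_stime += t->stime;
              t->alive = false;
      }

      static unsigned long long group_utime(const struct thread *ts, int n,
                                            const struct signal_acct *sig)
      {
              unsigned long long sum = sig->dead_utime;
              for (int i = 0; i < n; i++)
                      if (ts[i].alive)
                              sum += ts[i].utime;
              return sum;
      }

      int main(void)
      {
              struct thread ts[2] = { { 300, 40, true }, { 500, 10, true } };
              struct signal_acct sig = { 0, 0 };

              thread_exit(&ts[1], &sig);
              printf("group utime: %llu (300 from the live thread + 500 folded in at exit)\n",
                     group_utime(ts, 2, &sig));
              return 0;
      }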
  15. 03 Feb 2009, 1 commit
  16. 01 Feb 2009, 1 commit
  17. 30 Jan 2009, 1 commit
  18. 23 Jan 2009, 1 commit
    • trace, lockdep: manual preempt count adding for local_bh_disable · 7e49fcce
      Steven Rostedt committed
      Impact: fix to preempt trace triggering lockdep check_flag failure
      
      In local_bh_disable, the use of add_preempt_count causes the
      preempt tracer to start recording the time preemption is off.
      But because the preempt_count has already been modified to show
      softirqs disabled while the lockdep code has not yet been told
      about it, this creates a state that lockdep cannot handle.
      
      The preempt tracer will reset the ring buffer on start of a trace,
      and the ring buffer reset code does a spin_lock_irqsave. This
      calls into lockdep and lockdep will fail when it detects the
      invalid state of having softirqs disabled but the internal
      current->softirqs_enabled is still set.
      
      The fix is to manually add SOFTIRQ_OFFSET to the preempt count
      and call the preempt tracer code outside the lockdep critical
      area.  (A toy model of the ordering follows this entry.)
      
      Thanks to Peter Zijlstra for suggesting this solution.
      Signed-off-by: Steven Rostedt <srostedt@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
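      A toy model of the ordering, with invented names (this is not the
      kernel's local_bh_disable): the tracer stand-in asserts what lockdep
      would check, namely that the preempt count and the lockdep-side softirq
      flag agree, so both the raw count update and the lockdep update must
      happen before the tracer is invoked.

      #include <assert.h>
      #include <stdio.h>

      #define SOFTIRQ_OFFSET 0x100

      static int preempt_count;
      static int lockdep_softirqs_enabled = 1;

      /* Stands in for the preempt tracer's ring-buffer reset, which takes a
       * lock and thereby lets lockdep compare both views of softirq state. */
      static void preempt_tracer_start(void)
      {
              int count_says_softirqs_off = preempt_count >= SOFTIRQ_OFFSET;

              assert(!(count_says_softirqs_off && lockdep_softirqs_enabled));
      }

      static void local_bh_disable_model(void)
      {
              preempt_count += SOFTIRQ_OFFSET;   /* raw add, no tracer hook yet */
              lockdep_softirqs_enabled = 0;      /* tell lockdep first ... */
              preempt_tracer_start();            /* ... then let the tracer run */
      }

      int main(void)
      {
              local_bh_disable_model();
              printf("tracer saw a consistent softirq state\n");
              return 0;
      }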
  19. 16 Jan 2009, 2 commits
    • softlockup: decouple hung tasks check from softlockup detection · e162b39a
      Mandeep Singh Baines committed
      Decoupling allows:
      
      * hung tasks check to happen at very low priority
      
      * hung tasks check and softlockup to be enabled/disabled independently
        at compile and/or run-time
      
      * individual panic settings to be enabled/disabled independently
        at compile and/or run-time
      
      * softlockup threshold to be reduced without increasing hung tasks
        poll frequency (hung task check is expensive relative to the softlockup watchdog)
      
      * hung task check to be zero-overhead when disabled at run-time
      Signed-off-by: Mandeep Singh Baines <msb@google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • sched: fix !CONFIG_SCHEDSTATS build failure · 34cb6135
      Ingo Molnar committed
      Stephen Rothwell reported this linux-next build failure with !CONFIG_SCHEDSTATS:
      
      | In file included from kernel/sched.c:1703:
      | kernel/sched_fair.c: In function 'adaptive_gran':
      | kernel/sched_fair.c:1324: error: 'struct sched_entity' has no member named 'avg_wakeup'
      
      The start_runtime and avg_wakeup metrics are now not just for statistics,
      but also for scheduling - so they always need to be available. (Also
      move out the nr_migrations field - for future perfcounters usage.)  A
      sketch of the resulting field placement follows this entry.
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
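      A compile-only sketch of the layout change implied by the fix (the
      struct is heavily abridged and the comments are my reading of the
      report, not the kernel source): fields that the scheduler or
      perfcounters read unconditionally have to live outside the
      CONFIG_SCHEDSTATS block.

      typedef unsigned long long u64;         /* stand-in for the kernel type */

      struct sched_entity_sketch {
              /* ... load weight, run-queue linkage, vruntime, ... */

              u64 nr_migrations;      /* moved out: wanted for future perfcounters use */
              u64 start_runtime;      /* moved out: used for scheduling decisions */
              u64 avg_wakeup;         /* moved out: read by adaptive_gran() */

      #ifdef CONFIG_SCHEDSTATS
              u64 wait_start;         /* pure statistics can stay behind the ifdef */
              /* ... */
      #endif
      };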
  20. 15 Jan 2009, 3 commits
    • sched: introduce avg_wakeup · 831451ac
      Peter Zijlstra committed
      Introduce a new avg_wakeup statistic.
      
      avg_wakeup is a measure of how frequently a task wakes up other tasks; it
      represents the average time between wakeups, capped at avg_runtime for
      when it doesn't wake up anybody.  (A hedged sketch of maintaining such a
      statistic follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Mike Galbraith <efault@gmx.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
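      A hedged sketch of maintaining an avg_wakeup-style statistic; the exact
      kernel update rule is not reproduced here. A simple exponentially
      weighted average of the interval between wakeups is kept, and a slice
      that ends without any wakeup feeds in the runtime instead, which bounds
      the average by avg_runtime.

      #include <stdio.h>

      struct se_stats {
              long long avg_wakeup_ns;
              long long avg_runtime_ns;
      };

      static void update_avg(long long *avg, long long sample)
      {
              *avg += (sample - *avg) / 8;    /* simple EWMA, 1/8 weight */
      }

      /* The task woke somebody up: feed in the time since its last wakeup. */
      static void account_wakeup(struct se_stats *s, long long since_last_wakeup_ns)
      {
              update_avg(&s->avg_wakeup_ns, since_last_wakeup_ns);
      }

      /* A slice ended: track runtime, and if nobody was woken, let the
       * runtime stand in for the wakeup interval so avg_wakeup stays bounded. */
      static void account_slice_end(struct se_stats *s, long long runtime_ns,
                                    int woke_anybody)
      {
              update_avg(&s->avg_runtime_ns, runtime_ns);
              if (!woke_anybody)
                      update_avg(&s->avg_wakeup_ns, runtime_ns);
      }

      int main(void)
      {
              struct se_stats s = { 1000000, 1000000 };

              account_wakeup(&s, 200000);     /* wakes another task after 0.2 ms */
              account_slice_end(&s, 800000, 1);
              printf("avg_wakeup ~ %lld ns, avg_runtime ~ %lld ns\n",
                     s.avg_wakeup_ns, s.avg_runtime_ns);
              return 0;
      }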
    • mutex: implement adaptive spinning · 0d66bf6d
      Peter Zijlstra committed
      Change mutex contention behaviour such that it will sometimes busy wait on
      acquisition - moving its behaviour closer to that of spinlocks.
      
      This concept got ported to mainline from the -rt tree, where it was originally
      implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.
      
      Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
      gave a 345% boost for VFS scalability on my testbox:
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               296604
      
       # ./test-mutex-shm V 16 10 | grep "^avg ops"
       avg ops/sec:               85870
      
      The key criterion for the busy wait is that the lock owner has to be running on
      a (different) cpu. The idea is that as long as the owner is running, there is a
      fair chance it'll release the lock soon, and thus we'll be better off spinning
      instead of blocking/scheduling.  (A toy model of this decision follows the entry.)
      
      Since regular mutexes (as opposed to rtmutexes) do not atomically track the
      owner, we add the owner in a non-atomic fashion and deal with the races in
      the slowpath.
      
      Furthermore, to ease the testing of the performance impact of this new code,
      there is a means to disable this behaviour at runtime (without having to reboot
      the system), when scheduler debugging is enabled (CONFIG_SCHED_DEBUG=y),
      by issuing the following command:
      
       # echo NO_OWNER_SPIN > /debug/sched_features
      
      This command re-enables spinning again (this is also the default):
      
       # echo OWNER_SPIN > /debug/sched_features
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
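      A toy model of the spin-or-sleep decision, with invented names (nothing
      here is the kernel's mutex): the waiter keeps busy-waiting only while
      the current owner is running on a CPU, and would otherwise queue itself
      and schedule.

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdio.h>

      struct task { _Atomic bool on_cpu; };

      struct mutex_model {
              _Atomic bool locked;
              struct task *_Atomic owner;   /* tracked non-atomically in the real code */
      };

      /* Spin only while the current owner is running on a CPU: as long as it
       * runs, there is a fair chance it releases the lock soon. */
      static bool should_keep_spinning(struct mutex_model *m)
      {
              struct task *owner = atomic_load(&m->owner);

              return owner && atomic_load(&owner->on_cpu);
      }

      static void mutex_lock_model(struct mutex_model *m, struct task *self)
      {
              while (atomic_exchange(&m->locked, true)) {
                      if (should_keep_spinning(m))
                              continue;       /* cpu_relax() and retry */
                      /* owner is not on a CPU: the real slowpath would queue
                       * on the wait list and schedule() here */
              }
              atomic_store(&m->owner, self);
      }

      int main(void)
      {
              struct task t0 = { true };
              struct mutex_model m = { false, NULL };

              mutex_lock_model(&m, &t0);      /* uncontended: acquired at once */

              /* A second task finding the mutex held would decide like this: */
              printf("owner running  -> keep spinning: %d\n", should_keep_spinning(&m));
              atomic_store(&t0.on_cpu, false);
              printf("owner sleeping -> block instead: %d\n", !should_keep_spinning(&m));
              return 0;
      }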
    • mutex: preemption fixes · 41719b03
      Peter Zijlstra committed
      The problem is that dropping the spinlock right before schedule is a voluntary
      preemption point and can cause a schedule, right after which we schedule again.
      
      Fix this inefficiency by keeping preemption disabled until we schedule; do this
      by explicitly disabling preemption and providing a schedule() variant that
      assumes preemption is already disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  21. 14 Jan 2009, 1 commit
  22. 08 Jan 2009, 1 commit
    • itimers: remove the per-cpu-ish-ness · 490dea45
      Peter Zijlstra committed
      Either we bounce one cacheline per cpu per tick, yielding n^2 bounces,
      or we just bounce a single one.
      
      Also, using per-cpu allocations for the thread-groups complicates the
      per-cpu allocator in that it's currently aimed to be a fixed-sized
      allocator and the only possible extension to that would be vmap based,
      which is seriously constrained on 32 bit archs.
      
      So making the per-cpu memory requirement depend on the number of
      processes is an issue.
      
      Lastly, it didn't deal with cpu-hotplug, although admittedly that might
      be fixable.
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 07 Jan 2009, 1 commit
  24. 05 Jan 2009, 1 commit
  25. 31 Dec 2008, 1 commit
    • [PATCH] idle cputime accounting · 79741dd3
      Martin Schwidefsky committed
      The cpu time spent by the idle process actually doing something is
      currently accounted as idle time. This is plain wrong; the architectures
      that support VIRT_CPU_ACCOUNTING=y can do better: distinguish between the
      time spent doing nothing and the time spent by idle doing work. The first
      is accounted with account_idle_time and the second with account_system_time.
      The architectures that use the account_xxx_time interface directly and not
      the account_xxx_ticks interface now need to do the check for the idle
      process in their arch code. In particular to improve the system vs true
      idle time accounting the arch code needs to measure the true idle time
      instead of just testing for the idle process.
      To improve the tick based accounting as well we would need an architecture
      primitive that can tell us if the pt_regs of the interrupted context
      points to the magic instruction that halts the cpu.
      
      In addition, idle time is no longer added to the stime of the idle process.
      This field now contains the system time of the idle process as it should
      be. On systems without VIRT_CPU_ACCOUNTING this will always be zero as
      every tick that occurs while idle is running will be accounted as idle
      time.
      
      This patch contains the necessary common code changes to be able to
      distinguish idle system time and true idle time. The architectures with
      support for VIRT_CPU_ACCOUNTING need some changes to exploit this.
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
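      A toy illustration of the accounting split described above, with
      invented names: time during which the idle task had the CPU is only
      booked as idle time if the CPU was truly doing nothing, otherwise it
      becomes system time of the idle task.

      #include <stdio.h>

      /* Toy per-cpu accounting buckets mirroring the split the commit
       * introduces. */
      struct cpu_usage {
              unsigned long long idle_ns;
              unsigned long long system_ns;
      };

      static void account_idle_time(struct cpu_usage *u, unsigned long long ns)
      {
              u->idle_ns += ns;
      }

      static void account_system_time(struct cpu_usage *u, unsigned long long ns)
      {
              u->system_ns += ns;
      }

      /* What precise accounting would do for a stretch spent in the idle
       * task: split it into truly halted time and time idle spent working. */
      static void account_idle_slice(struct cpu_usage *u,
                                     unsigned long long halted_ns,
                                     unsigned long long working_ns)
      {
              account_idle_time(u, halted_ns);
              account_system_time(u, working_ns);     /* stime of the idle task */
      }

      int main(void)
      {
              struct cpu_usage u = { 0, 0 };

              account_idle_slice(&u, 950000, 50000);  /* 0.95 ms halted, 50 us work */
              printf("idle %llu ns, idle's system time %llu ns\n",
                     u.idle_ns, u.system_ns);
              return 0;
      }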
  26. 30 Dec 2008, 1 commit
    • sched: sched.c declare variables before they get used · 47fea2ad
      Jaswinder Singh Rajput committed
      Impact: cleanup, avoid sparse warnings
      
      In linux/sched.h, move sysctl_sched_latency, sysctl_sched_min_granularity,
      sysctl_sched_wakeup_granularity, sysctl_sched_shares_ratelimit and
      sysctl_sched_shares_thresh out of #ifdef CONFIG_SCHED_DEBUG, as these
      variables are needed with and without CONFIG_SCHED_DEBUG.
      
      Fixes these sparse warnings:
        kernel/sched.c:825:14: warning: symbol 'sysctl_sched_shares_ratelimit' was not declared. Should it be static?
        kernel/sched.c:832:14: warning: symbol 'sysctl_sched_shares_thresh' was not declared. Should it be static?
        kernel/sched_fair.c:37:14: warning: symbol 'sysctl_sched_latency' was not declared. Should it be static?
        kernel/sched_fair.c:43:14: warning: symbol 'sysctl_sched_min_granularity' was not declared. Should it be static?
        kernel/sched_fair.c:72:14: warning: symbol 'sysctl_sched_wakeup_granularity' was not declared. Should it be static?
      Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  27. 29 Dec 2008, 1 commit
    • sched: create "pushable_tasks" list to limit pushing to one attempt · 917b627d
      Gregory Haskins committed
      The RT scheduler employs a "push/pull" design to actively balance tasks
      within the system (on a per disjoint cpuset basis).  When a task is
      awoken, it is immediately determined if there are any lower priority
      cpus which should be preempted.  This is opposed to the way normal
      SCHED_OTHER tasks behave, which will wait for a periodic rebalancing
      operation to occur before spreading out load.
      
      When a particular RQ has more than 1 active RT task, it is said to
      be in an "overloaded" state.  Once this occurs, the system enters
      the active balancing mode, where it will try to push the task away,
      or persuade a different cpu to pull it over.  The system will stay
      in this state until the system falls back to having at most one
      queued RT task per RQ.
      
      However, the current implementation suffers from a limitation in the
      push logic.  Once overloaded, all tasks (other than current) on the
      RQ are analyzed on every push operation, even if a task was previously
      found unpushable (due to affinity, etc).  What's more, the operation
      stops at the first task that is unpushable and will not look at items
      lower in the queue.  This causes two problems:
      
      1) We can have the same tasks analyzed over and over again during each
         push, which extends out the fast path in the scheduler for no
         gain.  Consider a RQ that has dozens of tasks that are bound to a
         core.  Each one of those tasks will be encountered and skipped
         for each push operation while they are queued.
      
      2) There may be lower-priority tasks under the unpushable task that
         could have been successfully pushed, but will never be considered
         until either the unpushable task is cleared, or a pull operation
         succeeds.  The net result is a potential latency source for mid
         priority tasks.
      
      This patch aims to rectify these two conditions by introducing a new
      priority sorted list: "pushable_tasks".  A task is added to the list
      each time a task is activated or preempted.  It is removed from the
      list any time it is deactivated, made current, or fails to push.
      
      This works because a task only needs one push attempt.  After an
      initial failure to push, the other cpus will eventually try to pull
      the task when the conditions are proper.  This also solves the
      problem that we don't completely analyze all tasks due to encountering
      an unpushable task.  Now every task will have a push attempted (when
      appropriate).
      
      This reduces latency both by shortening the critical section of the
      rq->lock for certain workloads, and by making sure the algorithm
      considers all eligible tasks in the system.  (A toy model of the
      pushable list follows this entry.)
      
      [ rostedt: added a couple more BUG_ONs ]
      Signed-off-by: Gregory Haskins <ghaskins@novell.com>
      Acked-by: Steven Rostedt <srostedt@redhat.com>
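      A toy model of the pushable list, with invented names and a plain sorted
      linked list standing in for the kernel's priority list: tasks are
      enqueued by priority, a failed candidate is dropped from the list rather
      than blocking everything behind it, and the next push attempt starts
      from the highest-priority task that is still pushable.

      #include <stdbool.h>
      #include <stdio.h>

      /* RT tasks that might be pushed to another CPU, kept sorted by
       * priority (lower value = higher priority). */
      struct rt_task {
              const char *name;
              int prio;
              bool pushable;                  /* affinity allows another CPU */
              struct rt_task *next;           /* link in the pushable list */
      };

      static void enqueue_pushable(struct rt_task **head, struct rt_task *t)
      {
              while (*head && (*head)->prio <= t->prio)
                      head = &(*head)->next;
              t->next = *head;
              *head = t;
      }

      static void dequeue_pushable(struct rt_task **head, struct rt_task *t)
      {
              while (*head && *head != t)
                      head = &(*head)->next;
              if (*head)
                      *head = t->next;
      }

      /* A push attempt only looks at this list; a task that cannot be moved
       * is dequeued instead of stopping the scan. */
      static struct rt_task *pick_next_pushable(struct rt_task **head)
      {
              while (*head) {
                      struct rt_task *t = *head;

                      if (t->pushable)
                              return t;
                      dequeue_pushable(head, t);      /* failed once: stop retrying it */
              }
              return NULL;
      }

      int main(void)
      {
              struct rt_task bound = { "bound", 10, false, NULL };
              struct rt_task low   = { "low",   20, true,  NULL };
              struct rt_task *pushable = NULL;

              enqueue_pushable(&pushable, &low);
              enqueue_pushable(&pushable, &bound);

              struct rt_task *victim = pick_next_pushable(&pushable);
              printf("push candidate: %s\n", victim ? victim->name : "none");
              return 0;
      }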