1. 13 April 2016, 4 commits
  2. 31 March 2016, 7 commits
    • sched/fair: Initiate a new task's util avg to a bounded value · 2b8c41da
      Committed by Yuyang Du
      A new task's util_avg is set to full utilization of a CPU (100% time
      running). This accelerates a new task's utilization ramp-up, useful to
      boost its execution in early time. However, it may result in
      (insanely) high utilization for a transient time period when a flood
      of tasks are spawned. Importantly, it violates the "fundamentally
      bounded" CPU utilization, and its side effect is negative if we don't
      take any measure to bound it.
      
      This patch proposes an algorithm to address this issue. It has
      two methods to approach a sensible initial util_avg:
      
      (1) An expected (or average) util_avg based on its cfs_rq's util_avg:
      
        util_avg = cfs_rq->util_avg / (cfs_rq->load_avg + 1) * se.load.weight
      
      (2) A trajectory of how successive new tasks' util develops, which
      gives 1/2 of the remaining utilization budget to a new task such that
      the additional util is noticeably large (when overall util is low) or
      unnoticeably small (when overall util is high enough). In the meantime,
      the aggregate utilization is well bounded:
      
        util_avg_cap = (1024 - cfs_rq->avg.util_avg) / 2^n
      
      where n denotes the nth task.
      
      If util_avg is larger than util_avg_cap, then the effective util is
      clamped to the util_avg_cap.
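      
      A sketch of how the two methods combine, close to what the patch adds in
      post_init_entity_util_avg() (treat the exact scale constants and helper
      names as illustrative rather than authoritative):
      
        void post_init_entity_util_avg(struct sched_entity *se)
        {
        	struct cfs_rq *cfs_rq = cfs_rq_of(se);
        	struct sched_avg *sa = &se->avg;
        	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
        
        	if (cap > 0) {
        		if (cfs_rq->avg.util_avg != 0) {
        			/* method (1): scale the cfs_rq average by the task's weight */
        			sa->util_avg  = cfs_rq->avg.util_avg * se->load.weight;
        			sa->util_avg /= (cfs_rq->avg.load_avg + 1);
        
        			/* clamp to method (2)'s bounded budget */
        			if (sa->util_avg > cap)
        				sa->util_avg = cap;
        		} else {
        			/* empty runqueue: hand out half of the leftover budget */
        			sa->util_avg = cap;
        		}
        		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
        	}
        }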
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Yuyang Du <yuyang.du@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: bsegall@google.com
      Cc: morten.rasmussen@arm.com
      Cc: pjt@google.com
      Cc: steve.muckle@linaro.org
      Link: http://lkml.kernel.org/r/1459283456-21682-1-git-send-email-yuyang.du@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2b8c41da
    • sched/fair: Update comments after a variable rename · 1c3de5e1
      Committed by Yuyang Du
      The following commit:
      
        ed82b8a1 ("sched/core: Move the sched_to_prio[] arrays out of line")
      
      renamed prio_to_weight to sched_prio_to_weight, but the old name was not
      updated in comments.
      Signed-off-by: Yuyang Du <yuyang.du@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1459292871-22531-1-git-send-email-yuyang.du@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1c3de5e1
    • sched/core: Add preempt checks in preempt_schedule() code · 47252cfb
      Committed by Steven Rostedt
      While testing the tracer preemptoff, I hit this strange trace:
      
         <...>-259     0...1    0us : schedule <-worker_thread
         <...>-259     0d..1    0us : rcu_note_context_switch <-__schedule
         <...>-259     0d..1    0us : rcu_sched_qs <-rcu_note_context_switch
         <...>-259     0d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
         <...>-259     0d..1    0us : _raw_spin_lock <-__schedule
         <...>-259     0d..1    0us : preempt_count_add <-_raw_spin_lock
         <...>-259     0d..2    0us : do_raw_spin_lock <-_raw_spin_lock
         <...>-259     0d..2    1us : deactivate_task <-__schedule
         <...>-259     0d..2    1us : update_rq_clock.part.84 <-deactivate_task
         <...>-259     0d..2    1us : dequeue_task_fair <-deactivate_task
         <...>-259     0d..2    1us : dequeue_entity <-dequeue_task_fair
         <...>-259     0d..2    1us : update_curr <-dequeue_entity
         <...>-259     0d..2    1us : update_min_vruntime <-update_curr
         <...>-259     0d..2    1us : cpuacct_charge <-update_curr
         <...>-259     0d..2    1us : __rcu_read_lock <-cpuacct_charge
         <...>-259     0d..2    1us : __rcu_read_unlock <-cpuacct_charge
         <...>-259     0d..2    1us : clear_buddies <-dequeue_entity
         <...>-259     0d..2    1us : account_entity_dequeue <-dequeue_entity
         <...>-259     0d..2    2us : update_min_vruntime <-dequeue_entity
         <...>-259     0d..2    2us : update_cfs_shares <-dequeue_entity
         <...>-259     0d..2    2us : hrtick_update <-dequeue_task_fair
         <...>-259     0d..2    2us : wq_worker_sleeping <-__schedule
         <...>-259     0d..2    2us : kthread_data <-wq_worker_sleeping
         <...>-259     0d..2    2us : pick_next_task_fair <-__schedule
         <...>-259     0d..2    2us : check_cfs_rq_runtime <-pick_next_task_fair
         <...>-259     0d..2    2us : pick_next_entity <-pick_next_task_fair
         <...>-259     0d..2    2us : clear_buddies <-pick_next_entity
         <...>-259     0d..2    2us : pick_next_entity <-pick_next_task_fair
         <...>-259     0d..2    2us : clear_buddies <-pick_next_entity
         <...>-259     0d..2    2us : set_next_entity <-pick_next_task_fair
         <...>-259     0d..2    3us : put_prev_entity <-pick_next_task_fair
         <...>-259     0d..2    3us : check_cfs_rq_runtime <-put_prev_entity
         <...>-259     0d..2    3us : set_next_entity <-pick_next_task_fair
      gnome-sh-1031    0d..2    3us : finish_task_switch <-__schedule
      gnome-sh-1031    0d..2    3us : _raw_spin_unlock_irq <-finish_task_switch
      gnome-sh-1031    0d..2    3us : do_raw_spin_unlock <-_raw_spin_unlock_irq
      gnome-sh-1031    0...2    3us!: preempt_count_sub <-_raw_spin_unlock_irq
      gnome-sh-1031    0...1  582us : do_raw_spin_lock <-_raw_spin_lock
      gnome-sh-1031    0...1  583us : _raw_spin_unlock <-drm_gem_object_lookup
      gnome-sh-1031    0...1  583us : do_raw_spin_unlock <-_raw_spin_unlock
      gnome-sh-1031    0...1  583us : preempt_count_sub <-_raw_spin_unlock
      gnome-sh-1031    0...1  584us : _raw_spin_unlock <-drm_gem_object_lookup
      gnome-sh-1031    0...1  584us+: trace_preempt_on <-drm_gem_object_lookup
      gnome-sh-1031    0...1  603us : <stack trace>
       => preempt_count_sub
       => _raw_spin_unlock
       => drm_gem_object_lookup
       => i915_gem_madvise_ioctl
       => drm_ioctl
       => do_vfs_ioctl
       => SyS_ioctl
       => entry_SYSCALL_64_fastpath
      
      As I'm tracing preemption disabled, it seemed incorrect that the trace
      would go across a schedule and report not being in the scheduler.
      Looking into this I discovered the problem.
      
      schedule() calls preempt_disable(), but preempt_schedule() calls
      preempt_enable_notrace(). What happened above was that the gnome-shell
      task was preempted on another CPU and migrated over to the idle CPU. The
      tracer started with idle calling schedule(), which called
      preempt_disable(), but then gnome-shell finished and enabled preemption
      with preempt_enable_notrace(), which does not stop the trace, even
      though preemption was enabled.
      
      The purpose of the preempt_disable_notrace() in preempt_schedule() is to
      prevent function tracing from going into an infinite loop, because
      function tracing can itself trace the traced variants of
      preempt_enable/disable(). The problem with function tracing is:
      
        NEED_RESCHED set
        preempt_schedule()
          preempt_disable()
            preempt_count_inc()
              function trace (before incrementing preempt count)
                preempt_disable_notrace()
                preempt_enable_notrace()
                  sees NEED_RESCHED set
                     preempt_schedule() (repeat)
      
      Now, by breaking the preempt off/on tracing out into its own helpers,
      preempt_disable_check() and preempt_enable_check(), we can add these to
      the preempt_schedule() code. As preemption would then already be
      disabled, even if the helpers were traced by the function tracer, the
      disabled preemption would prevent the recursion.
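      
      A sketch of the resulting shape (helper names as used in the text above;
      the exact mainline code and call sites may differ):
      
        static void __sched notrace preempt_schedule_common(void)
        {
        	do {
        		preempt_disable_notrace();	/* bump count without tracing, cannot recurse */
        		preempt_disable_check(1);	/* preempt-off event, count already raised */
        		__schedule(true);		/* the actual preemption */
        		preempt_enable_check(1);	/* preempt-on event, still under the notrace count */
        		preempt_enable_no_resched_notrace();
        	} while (need_resched());
        }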
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160321112339.6dc78ad6@gandalf.local.home
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      47252cfb
    • sched/numa: Remove unnecessary NUMA dequeue update from non-SMP kernels · bfdb198c
      Committed by Tim Chen
      In account_entity_enqueue(), we do not do account_numa_enqueue()
      as NUMA balancing is not needed for UP kernels.
      
      Hence, we should remove the account_numa_dequeue() call from
      account_entity_dequeue() for UP kernels.
      Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Mel Gorman <mgorman@suse.de>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1454366879.21738.29.camel@schen9-desk2.jf.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      bfdb198c
    • sched/fair: Reset nr_balance_failed after active balancing · d02c0711
      Committed by Srikar Dronamraju
      To force a task migration during active balancing, nr_balance_failed is set
      to cache_nice_tries + 1. However, nr_balance_failed is not reset. As a side
      effect, during the next regular load balance under the same sd, a cache-hot
      task might be migrated just because the nr_balance_failed count is high.
      
      Resetting nr_balance_failed after a successful active balance ensures
      that a hot task is not unreasonably migrated. This can be verified by
      looking at the number of hot task migrations reported by /proc/schedstat.
      Signed-off-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1458735884-30105-1-git-send-email-srikar@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d02c0711
    • sched/cpuacct: Split usage accounting into user_usage and sys_usage · d740037f
      Committed by Dongsheng Yang
      Sometimes, cpuacct.usage is not detailed enough to see how much CPU
      usage a group had. We want to know how much time it used in user mode
      and how much in kernel mode.
      
      This patch introduces more files to give this information:
      
       # ls /sys/fs/cgroup/cpuacct/cpuacct.usage*
       /sys/fs/cgroup/cpuacct/cpuacct.usage
       /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu
       /sys/fs/cgroup/cpuacct/cpuacct.usage_user
       /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu_user
       /sys/fs/cgroup/cpuacct/cpuacct.usage_sys
       /sys/fs/cgroup/cpuacct/cpuacct.usage_percpu_sys
      
      ... while keeping the ABI with the existing counter.
      Signed-off-by: Dongsheng Yang <yangds.fnst@cn.fujitsu.com>
      [ Ported to newer kernels. ]
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Tejun Heo <tj@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/aa171da036b520b51c79549e9b3215d29473f19d.1458635566.git.zhaolei@cn.fujitsu.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d740037f
    • sched/cpuacct: Show all possible CPUs in cpuacct output · 5ca3726a
      Committed by Zhao Lei
      The current code shows stats for online CPUs in cpuacct.stat, shows stats
      for present CPUs in cpuacct.usage(_percpu), and uses present CPUs when
      setting cpuacct.usage.
      
      This causes inconsistent results when a CPU is brought online or offline
      (hotplugged).
      
      We should always use possible CPUs to avoid the above problem.
      
      Here are the contents of a cpuacct.usage_percpu sysfs file,
      on a 4 CPU system with maxcpus=32:
      
      Before the patch:
       # cat cpuacct.usage_percpu
       2456565 411435 1052897 832584
      
      After the patch:
       # cat cpuacct.usage_percpu
       2456565 411435 1052897 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
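      
      The gist of the change, as a sketch (the helper name below is illustrative;
      the cpuacct read/write paths are what actually switch iterators):
      
        static u64 cpuacct_usage_sum(struct cpuacct *ca)
        {
        	u64 total = 0;
        	int i;
        
        	/* was: iterate present/online CPUs only */
        	for_each_possible_cpu(i)
        		total += *per_cpu_ptr(ca->cpuusage, i);
        	return total;
        }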
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Tejun Heo <htejun@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/a11d56cef12d0b4807f8be3a46bf9798c3014d59.1458635566.git.zhaolei@cn.fujitsu.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5ca3726a
  3. 29 March 2016, 1 commit
  4. 23 March 2016, 1 commit
    • kernel: add kcov code coverage · 5c9a8750
      Committed by Dmitry Vyukov
      kcov provides code coverage collection for coverage-guided fuzzing
      (randomized testing).  Coverage-guided fuzzing is a testing technique
      that uses coverage feedback to determine new interesting inputs to a
      system.  A notable user-space example is AFL
      (http://lcamtuf.coredump.cx/afl/).  However, this technique is not
      widely used for kernel testing due to missing compiler and kernel
      support.
      
      kcov does not aim to collect as much coverage as possible.  It aims to
      collect more or less stable coverage that is a function of syscall inputs.
      To achieve this goal it does not collect coverage in soft/hard interrupts,
      and instrumentation of some inherently non-deterministic or uninteresting
      parts of the kernel is disabled (e.g. the scheduler, locking).
      
      Currently there is a single coverage collection mode (tracing), but the
      API anticipates additional collection modes.  Initially I also
      implemented a second mode which exposes coverage in a fixed-size hash
      table of counters (what Quentin used in his original patch).  I've
      dropped the second mode for simplicity.
      
      This patch adds the necessary support on the kernel side.  The complementary
      compiler support was added in GCC revision 231296.
      
      We've used this support to build syzkaller system call fuzzer, which has
      found 90 kernel bugs in just 2 months:
      
        https://github.com/google/syzkaller/wiki/Found-Bugs
      
      We've also found 30+ bugs in our internal systems with syzkaller.
      Another (yet unexplored) direction where kcov coverage would greatly
      help is more traditional "blob mutation".  For example, mounting a
      random blob as a filesystem, or receiving a random blob over wire.
      
      Why not gcov?  A typical fuzzing loop looks as follows: (1) reset
      coverage, (2) execute a bit of code, (3) collect coverage, repeat.  A
      typical coverage set can be just a dozen basic blocks (e.g. for an
      invalid input).  In such a context gcov becomes prohibitively expensive,
      as the reset/collect steps depend on the total number of basic
      blocks/edges in the program (for the kernel that is about 2M), whereas
      the cost of kcov depends only on the number of executed basic
      blocks/edges.  On top of that, the kernel requires per-thread coverage,
      because there are always background threads and unrelated processes that
      also produce coverage.  With inlined gcov instrumentation, per-thread
      coverage is not possible.
      
      kcov exposes kernel PCs and control flow to user-space which is
      insecure.  But debugfs should not be mapped as user accessible.
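      
      A minimal user-space usage sketch, based on the interface added by this
      patch (the ioctl definitions are copied here for self-containment; verify
      them against the uapi header in your tree):
      
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>
        #include <unistd.h>
        
        #define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
        #define KCOV_ENABLE	_IO('c', 100)
        #define KCOV_DISABLE	_IO('c', 101)
        #define COVER_SIZE	(64 << 10)
        
        int main(void)
        {
        	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
        	unsigned long *cover, n, i;
        
        	if (fd < 0)
        		return 1;
        	/* size the per-task coverage buffer (in unsigned longs) */
        	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
        		return 1;
        	cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
        		     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        	if (cover == MAP_FAILED)
        		return 1;
        	ioctl(fd, KCOV_ENABLE, 0);		/* start collecting for this task */
        	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
        	read(-1, NULL, 0);			/* the syscall we want coverage for */
        	n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
        	for (i = 0; i < n; i++)			/* cover[0] holds the PC count */
        		printf("0x%lx\n", cover[i + 1]);
        	ioctl(fd, KCOV_DISABLE, 0);
        	close(fd);
        	return 0;
        }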
      
      Based on a patch by Quentin Casasnovas.
      
      [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
      [akpm@linux-foundation.org: unbreak allmodconfig]
      [akpm@linux-foundation.org: follow x86 Makefile layout standards]
      Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
      Reviewed-by: Kees Cook <keescook@chromium.org>
      Cc: syzkaller <syzkaller@googlegroups.com>
      Cc: Vegard Nossum <vegard.nossum@oracle.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Tavis Ormandy <taviso@google.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Kostya Serebryany <kcc@google.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Kees Cook <keescook@google.com>
      Cc: Bjorn Helgaas <bhelgaas@google.com>
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: David Drysdale <drysdale@google.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
      Cc: Kirill A. Shutemov <kirill@shutemov.name>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5c9a8750
  5. 21 March 2016, 5 commits
  6. 11 March 2016, 1 commit
  7. 10 March 2016, 1 commit
  8. 09 March 2016, 1 commit
  9. 08 March 2016, 2 commits
  10. 05 March 2016, 1 commit
    • sched/cputime: Fix steal time accounting vs. CPU hotplug · e9532e69
      Committed by Thomas Gleixner
      On CPU hotplug the steal time accounting can keep a stale rq->prev_steal_time
      value across a CPU going down and coming back up. So after the CPU comes up
      again, the delta calculation in steal_account_process_tick() wrecks itself
      due to the unsigned math:
      
      	 u64 steal = paravirt_steal_clock(smp_processor_id());
      
      	 steal -= this_rq()->prev_steal_time;
      
      So if steal is smaller than rq->prev_steal_time we end up with an insanely
      large value, which then gets added to rq->prev_steal_time, resulting in
      permanent wreckage of the accounting. As a consequence the per-CPU stats in
      /proc/stat become stale.
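      
      A toy user-space illustration of that unsigned underflow (the numbers are
      made up):
      
        #include <stdint.h>
        #include <stdio.h>
        
        int main(void)
        {
        	uint64_t prev_steal_time = 5000;	/* stale value kept across the hotplug */
        	uint64_t steal = 100;			/* clock restarted below the stale value */
        
        	steal -= prev_steal_time;		/* wraps to ~2^64, poisoning the stats */
        	printf("%llu\n", (unsigned long long)steal);
        	return 0;
        }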
      
      Nice trick to tell the world how idle the system is (100%) while the CPU is
      100% busy running tasks. Though we prefer realistic numbers.
      
      None of the accounting values which use a previous value to account for
      fractions are reset at CPU hotplug time. update_rq_clock_task() has a sanity
      check for prev_irq_time and prev_steal_time_rq, but that sanity check solely
      deals with clock warps and limits the wreckage visible in /proc/stat. The
      prev_time values are still wrong.
      
      Solution is simple: Reset rq->prev_*_time when the CPU is plugged in again.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Rik van Riel <riel@redhat.com>
      Cc: <stable@vger.kernel.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Glauber Costa <glommer@parallels.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Fixes: commit 095c0aa8 "sched: adjust scheduler cpu power for stolen time"
      Fixes: commit aa483808 "sched: Remove irq time from available CPU power"
      Fixes: commit e6e6685a "KVM guest: Steal time accounting"
      Link: http://lkml.kernel.org/r/alpine.DEB.2.11.1603041539490.3686@nanos
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e9532e69
  11. 02 March 2016, 8 commits
    • sched-clock: Migrate to use new tick dependency mask model · 4f49b90a
      Committed by Frederic Weisbecker
      Instead of checking sched_clock_stable from the nohz subsystem to verify
      its tick dependency, migrate it to the new mask in order to include it
      in the all-in-one check.
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      4f49b90a
    • sched: Migrate sched to use new tick dependency mask model · 76d92ac3
      Committed by Frederic Weisbecker
      Instead of providing asynchronous checks for the nohz subsystem to verify
      sched tick dependency, migrate sched to the new mask.
      
      Every time a task is enqueued or dequeued, we evaluate the state of the
      tick dependency on top of the policy of the tasks in the runqueue, by
      order of priority:
      
      SCHED_DEADLINE: Need the tick in order to periodically check for runtime
      SCHED_FIFO    : Don't need the tick (no round-robin)
      SCHED_RR      : Need the tick if more than 1 task of the same priority
                      for round robin (simplified with checking if more than
                      one SCHED_RR task no matter what priority).
      SCHED_NORMAL  : Need the tick if more than 1 task for round-robin.
      
      We could optimize that further with one flag per sched policy on the tick
      dependency mask and perform only the checks relevant to the policy
      concerned by an enqueue/dequeue operation.
      
      Since the checks aren't based on the current task anymore, we could get
      rid of the task switch hook but it's still needed for posix cpu
      timers.
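      
      A sketch of that priority-ordered check (illustrative; mainline implements
      it in sched_can_stop_tick(), with slightly different edge-case handling):
      
        static bool tick_needed(struct rq *rq)
        {
        	if (rq->dl.dl_nr_running)		/* SCHED_DEADLINE: always */
        		return true;
        	if (rq->rt.rr_nr_running > 1)		/* SCHED_RR: more than one RR task */
        		return true;
        	if (rq->rt.rt_nr_running)		/* remaining RT tasks are FIFO: no tick */
        		return false;
        	return rq->nr_running > 1;		/* SCHED_NORMAL: more than one runnable task */
        }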
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      76d92ac3
    • sched: Account rr tasks · 01d36d0a
      Committed by Frederic Weisbecker
      In order to evaluate the scheduler tick dependency without probing
      context switches, we need to know how many SCHED_RR and SCHED_FIFO tasks
      are enqueued, as those policies don't have the same preemption
      requirements.
      
      To prepare for that, let's account SCHED_RR tasks; we'll be able to
      deduce the number of SCHED_FIFO tasks from it and the total number of RT
      tasks in the runqueue.
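      
      For example, once rr_nr_running is maintained next to rt_nr_running, the
      FIFO count falls out by subtraction (a sketch of what the tick dependency
      evaluation above can then use):
      
        unsigned int fifo_nr_running = rt_rq->rt_nr_running - rt_rq->rr_nr_running;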
      Reviewed-by: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Chris Metcalf <cmetcalf@ezchip.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Luiz Capitulino <lcapitulino@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      01d36d0a
    • sched/core: Get rid of 'cpu' argument in wq_worker_sleeping() · 9b7f6597
      Committed by Alexander Gordeev
      Given that wq_worker_sleeping() can only be called for the CPU it is
      running on, we do not need to pass a CPU ID as an argument.
      Suggested-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Alexander Gordeev <agordeev@redhat.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      9b7f6597
    • rcu: Make CPU_DYING_IDLE an explicit call · 27d50c7e
      Committed by Thomas Gleixner
      Make the RCU CPU_DYING_IDLE callback an explicit function call, so it gets
      invoked at the proper place.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.870167933@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      27d50c7e
    • cpu/hotplug: Make wait for dead cpu completion based · e69aab13
      Committed by Thomas Gleixner
      Kill the busy spinning on the control side and just wait for the hotplugged
      CPU to signal that it has reached the dead state.
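      
      The generic shape of that change, as a sketch (not the exact hunk; mainline
      routes the completion through the hotplug state machine):
      
        static DECLARE_COMPLETION(cpu_dead_done);
        
        /* control side -- replaces a cpu_relax() busy-wait loop */
        static void wait_for_dead_cpu(void)
        {
        	wait_for_completion(&cpu_dead_done);
        }
        
        /* dying CPU side, once it has reached the dead state */
        static void report_cpu_dead(void)
        {
        	complete(&cpu_dead_done);
        }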
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.776157858@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e69aab13
    • cpu/hotplug: Let upcoming cpu bring itself fully up · 8df3e07e
      Committed by Thomas Gleixner
      Let the upcoming CPU kick the hotplug thread and complete the bringup
      itself. That way the control side can just wait for the completion, or,
      later when we make the hotplug machinery async, not care at all.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.697655464@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      8df3e07e
    • cpu/hotplug: Move scheduler cpu_online notifier to hotplug core · 949338e3
      Committed by Thomas Gleixner
      Move the scheduler cpu online notifier part to the hotplug core. This is
      anyway the highest priority callback and we need that functionality right now
      for the next changes.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.200791046@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      949338e3
  12. 29 February 2016, 8 commits
    • sched/deadline: Remove superfluous call to switched_to_dl() · 801ccdbf
      Committed by Peter Zijlstra
      	if (A || B) {
      
      	} else if (A && !B) {
      
      	}
      
      If A we'll take the first branch, if !A we will not satisfy the second.
      Therefore the second branch will never be taken.
      Reported-by: luca abeni <luca.abeni@unitn.it>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160225140149.GK6357@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      801ccdbf
    • sched/debug: Fix preempt_disable_ip recording for preempt_disable() · f904f582
      Committed by Sebastian Andrzej Siewior
      preempt_disable() invokes preempt_count_add(), which saves the caller
      in ->preempt_disable_ip. It uses CALLER_ADDR1, which does not record
      its own caller but the caller's parent, which means we get the correct
      caller for something like spin_lock() unless the architecture inlines
      those invocations. It is always wrong for preempt_disable() or
      local_bh_disable().
      
      This patch adds the function get_lock_parent_ip(), which tries
      CALLER_ADDR0, 1 and 2 while the former points into a locking function.
      This seems to record the preempt_disable() caller properly for
      preempt_disable() itself, as well as for get_cpu_var() or
      local_bh_disable().
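      
      Roughly what the new helper looks like (a sketch; see get_lock_parent_ip()
      in <linux/ftrace.h> for the authoritative version):
      
        static inline unsigned long get_lock_parent_ip(void)
        {
        	unsigned long addr = CALLER_ADDR0;
        
        	if (!in_lock_functions(addr))	/* caller is not a locking function */
        		return addr;
        	addr = CALLER_ADDR1;
        	if (!in_lock_functions(addr))
        		return addr;
        	return CALLER_ADDR2;		/* two levels of locking wrappers */
        }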
      
      Steven asked for the get_parent_ip() -> get_lock_parent_ip() rename.
      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160226135456.GB18244@linutronix.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      f904f582
    • sched, time: Switch VIRT_CPU_ACCOUNTING_GEN to jiffy granularity · ff9a9b4c
      Committed by Rik van Riel
      When profiling syscall overhead on nohz-full kernels,
      after removing __acct_update_integrals() from the profile,
      native_sched_clock() remains as the top CPU user. This can be
      reduced by moving VIRT_CPU_ACCOUNTING_GEN to jiffy granularity.
      
      This will reduce timing accuracy on nohz_full CPUs to jiffy
      based sampling, just like on normal CPUs. It results in
      totally removing native_sched_clock from the profile, and
      significantly speeding up the syscall entry and exit path,
      as well as irq entry and exit, and KVM guest entry & exit.
      
      Additionally, only call the more expensive functions (and
      advance the seqlock) when jiffies actually changed.
      
      This code relies on another CPU advancing jiffies when the
      system is busy. On a nohz_full system, this is done by a
      housekeeping CPU.
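      
      The core of the idea, as an illustrative fragment (field and helper names
      here are placeholders, not the exact mainline code):
      
        static void vtime_account_if_jiffies_changed(struct task_struct *tsk)
        {
        	unsigned long now = READ_ONCE(jiffies);
        
        	if (now == tsk->vtime_snap)	/* nothing advanced: skip the       */
        		return;			/* seqcount write and sched_clock() */
        
        	write_seqcount_begin(&tsk->vtime_seqcount);
        	tsk->vtime_snap = now;
        	/* ... the expensive accounting now runs at most once per jiffy ... */
        	write_seqcount_end(&tsk->vtime_seqcount);
        }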
      
      A microbenchmark calling an invalid syscall number 10 million
      times in a row speeds up an additional 30% over the numbers
      with just the previous patches, for a total speedup of about
      40% over 4.4 and 4.5-rc1.
      
      Run times for the microbenchmark:
      
       4.4				3.8 seconds
       4.5-rc1			3.7 seconds
       4.5-rc1 + first patch		3.3 seconds
       4.5-rc1 + first 3 patches	3.1 seconds
       4.5-rc1 + all patches		2.3 seconds
      
      A non-NOHZ_FULL cpu (not the housekeeping CPU):
      
       all kernels			1.86 seconds
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: clark@redhat.com
      Cc: eric.dumazet@gmail.com
      Cc: fweisbec@gmail.com
      Cc: luto@amacapital.net
      Link: http://lkml.kernel.org/r/1455152907-18495-5-git-send-email-riel@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ff9a9b4c
    • sched/rt: Kick RT bandwidth timer immediately on start up · c3a990dc
      Committed by Steven Rostedt
      I've been debugging why deadline tasks can cause the RT scheduler to
      throttle, even when the deadline tasks are only taking up 50% of the
      CPU and RT tasks are not even using 1% of the CPU. Here's what I found.
      
      In order to keep a CPU from being hogged by RT tasks, the deadline
      scheduler adds its run time (delta_exec) to the rt_time of the RT
      bandwidth. That way, if the two use more than 95% of the CPU within one
      second (default settings), the RT tasks are throttled to allow non RT
      tasks to run.
      
      Although the deadline tasks add their run time to the RT bandwidth, it
      lets the RT tasks do the accounting. This is where the problem lies. If
      a deadline task runs for a bit, and no RT tasks are running, then it
      will continually add to the RT rt_time that is used to calculate how
      much CPU the RT tasks use. But no RT period is in play, and this
      accumulation of the runtime never gets reset.
      
      When an RT task finally gets to run, and the watchdog goes off, it can
      see that the RT task has used more than it should have, because the
      deadline task added all this runtime to its rt_time. Then the RT task
      that just woke up gets throttled for no good reason.
      
      I also noticed that when an RT task is queued, it starts the timer to
      account for overload and such. But that timer goes off one period
      later, which may be too late and the extra rt_time will trigger a
      throttle.
      
      This is a quick work around to the problem. When a new RT task is
      queued, the bandwidth timer is set to go off immediately. Then the
      timer can clear out the extra time added to the rt_time while there was
      no RT task running. This stops my tests from triggering the throttle,
      and it will still throttle if an RT task runs too much, even while a
      deadline task is running.
      
      A better solution may be to subtract the bandwidth that the deadline
      task uses from the rt_runtime, and add it back when it's finished. Then
      there won't be a need for runtime tracking of the time used by deadline
      tasks.
      
      I may play with that solution tomorrow.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <juri.lelli@gmail.com>
      Cc: <williams@redhat.com>
      Cc: Clark Williams
      Cc: Daniel Bristot de Oliveira <bristot@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      Cc: Juri Lelli
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160216183746.349ec98b@gandalf.local.home
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c3a990dc
    • sched/debug: Add deadline scheduler bandwidth ratio to /proc/sched_debug · ef477183
      Committed by Steven Rostedt (Red Hat)
      Playing with SCHED_DEADLINE and cpusets, I found that I was unable to create
      new SCHED_DEADLINE tasks, getting an EBUSY error as if the bandwidth was
      already used up. I then realized there was no way to see what bandwidth is
      used by the runqueues to debug the issue.
      
      Adding dl_bw->bw and dl_bw->total_bw to the output of the deadline
      info in /proc/sched_debug allows us to see what bandwidth has been
      reserved and where a problem may exist.
      
      For example, before the issue we see the ratio of the bandwidth:
      
       # cat /proc/sys/kernel/sched_rt_runtime_us
       950000
       # cat /proc/sys/kernel/sched_rt_period_us
       1000000
      
        # grep dl /proc/sched_debug
        dl_rq[0]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[1]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[2]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[3]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[4]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[5]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[6]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
        dl_rq[7]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 0
      
      Note: (950000 / 1000000) << 20 == 996147
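      
      That ratio comes from the scheduler's fixed-point conversion, roughly
      (a sketch of to_ratio(); 1 << 20 represents 100%):
      
        static u64 to_ratio(u64 period, u64 runtime)
        {
        	if (runtime == RUNTIME_INF)
        		return 1ULL << 20;
        	/* (950000 << 20) / 1000000 == 996147 */
        	return div64_u64(runtime << 20, period);
        }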
      
      After I played with cpusets and hit the issue, the result is now:
      
        # grep dl /proc/sched_debug
        dl_rq[0]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -104857
        dl_rq[1]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 104857
        dl_rq[2]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 104857
        dl_rq[3]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : 104857
        dl_rq[4]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -104857
        dl_rq[5]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -104857
        dl_rq[6]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -104857
        dl_rq[7]:
          .dl_nr_running                 : 0
          .dl_bw->bw                     : 996147
          .dl_bw->total_bw               : -104857
      
      This shows that there is definitely a problem as we should never have a
      negative total bandwidth.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Juri Lelli <juri.lelli@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160222212825.756849091@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ef477183
    • sched/debug: Move sched_domain_sysctl to debug.c · 3866e845
      Committed by Steven Rostedt (Red Hat)
      The sched_domain_sysctl setup is only enabled when SCHED_DEBUG is
      configured. As debug.c is only compiled when SCHED_DEBUG is configured as
      well, move the setup of sched_domain_sysctl into that file.
      
      Note, the (un)register_sched_domain_sysctl() functions had to be changed
      from static to allow access to them from core.c.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Juri Lelli <juri.lelli@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160222212825.599278093@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      3866e845
    • sched/debug: Move the /sys/kernel/debug/sched_features file setup into debug.c · d6ca41d7
      Committed by Steven Rostedt (Red Hat)
      As /sys/kernel/debug/sched_features is only created when SCHED_DEBUG is enabled, and the file
      debug.c is only compiled when SCHED_DEBUG is enabled, it makes sense to move
      sched_feature setup into that file and get rid of the #ifdef.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Juri Lelli <juri.lelli@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20160222212825.464193063@goodmis.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d6ca41d7
    • sched/rt: Fix PI handling vs. sched_setscheduler() · ff77e468
      Committed by Peter Zijlstra
      Andrea Parri reported:
      
      > I found that the following scenario (with CONFIG_RT_GROUP_SCHED=y) is not
      > handled correctly:
      >
      >     T1 (prio = 20)
      >        lock(rtmutex);
      >
      >     T2 (prio = 20)
      >        blocks on rtmutex  (rt_nr_boosted = 0 on T1's rq)
      >
      >     T1 (prio = 20)
      >        sys_set_scheduler(prio = 0)
      >           [new_effective_prio == oldprio]
      >           T1 prio = 20    (rt_nr_boosted = 0 on T1's rq)
      >
      > The last step is incorrect as T1 is now boosted (c.f., rt_se_boosted());
      > in particular, if we continue with
      >
      >    T1 (prio = 20)
      >       unlock(rtmutex)
      >          wakeup(T2)
      >          adjust_prio(T1)
      >             [prio != rt_mutex_getprio(T1)]
      >	    dequeue(T1)
      >	       rt_nr_boosted = (unsigned long)(-1)
      >	       ...
      >             T1 prio = 0
      >
      > then we end up leaving rt_nr_boosted in an "inconsistent" state.
      >
      > The simple program attached could reproduce the previous scenario; note
      > that, as a consequence of the presence of this state, the "assertion"
      >
      >     WARN_ON(!rt_nr_running && rt_nr_boosted)
      >
      > from dec_rt_group() may trigger.
      
      So normally we dequeue/enqueue tasks in sched_setscheduler(), which
      would ensure the accounting stays correct. However in the early PI path
      we fail to do so.
      
      So this was introduced at around v3.14, by:
      
        c365c292 ("sched: Consider pi boosting in setscheduler()")
      
      which fixed another problem exactly because of that dequeue/enqueue, joy.
      
      Fix this by teaching rt about DEQUEUE_SAVE/ENQUEUE_RESTORE and having it
      preserve the runqueue location with that option. This requires decoupling
      the on_rt_rq() state from being on the list.
      
      In order to allow for explicit movement during the SAVE/RESTORE,
      introduce {DE,EN}QUEUE_MOVE. We still must use SAVE/RESTORE in these
      cases to preserve other invariants.
      
      Respecting the SAVE/RESTORE flags also has the (nice) side-effect that
      things like sys_nice()/sys_sched_setaffinity() also do not reorder
      FIFO tasks (whereas they used to before this patch).
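      
      The SAVE/RESTORE pattern described above, as a sketch of the setscheduler
      path (flag plumbing simplified; not the exact hunk):
      
        queued = task_on_rq_queued(p);
        running = task_current(rq, p);
        if (queued)
        	dequeue_task(rq, p, DEQUEUE_SAVE | DEQUEUE_MOVE);
        if (running)
        	put_prev_task(rq, p);
        
        __setscheduler(rq, p, attr, pi);	/* change policy / priority */
        
        if (running)
        	p->sched_class->set_curr_task(rq);
        if (queued)
        	enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_MOVE);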
      Reported-by: Andrea Parri <parri.andrea@gmail.com>
      Tested-by: Andrea Parri <parri.andrea@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      ff77e468