1. 06 Oct 2015 (3 commits)
    • sched/core: Create preempt_count invariant · 609ca066
      Peter Zijlstra authored
      Assuming units of PREEMPT_DISABLE_OFFSET for preempt_count() numbers.
      
      Now that TASK_DEAD no longer results in preempt_count() == 3 during
      scheduling, we will always call context_switch() with preempt_count()
      == 2.
      
      However, we don't always end up with preempt_count() == 2 in
      finish_task_switch() because new tasks get created with
      preempt_count() == 1.
      
      Create FORK_PREEMPT_COUNT and set it to 2 and use that in the right
      places. Note that we cannot use INIT_PREEMPT_COUNT as that serves
      another purpose (boot).
      
      After this, preempt_count() is invariant across the context switch,
      with exception of PREEMPT_ACTIVE.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      609ca066
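
      To make the invariant above concrete, a rough sketch in units of
      PREEMPT_DISABLE_OFFSET (not the actual patch; the WARN placement and the
      exact definition are assumptions for illustration):

        /* A freshly forked task starts with the same count that a regular
         * context switch arrives with, so finish_task_switch() can rely on
         * one fixed value in both paths. */
        #define FORK_PREEMPT_COUNT	(2*PREEMPT_DISABLE_OFFSET)

        static struct rq *finish_task_switch(struct task_struct *prev)
        {
        	/* Illustrative check: fork path and schedule path must agree. */
        	WARN_ON_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET);
        	/* ... original body unchanged ... */
        }
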
    • sched/core: Rework TASK_DEAD preemption exception · b99def8b
      Peter Zijlstra authored
      TASK_DEAD is special in that the final schedule call from do_exit()
      must be done with preemption disabled.
      
      This means we end up scheduling with a preempt_count() higher than
      usual (3 instead of the 'expected' 2).
      
      Since future patches will want to rely on an invariant
      preempt_count() value during schedule, fix this up.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
      Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      b99def8b
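
      For reference, a sketch of where the counts in the message come from
      (call sites simplified; only the arithmetic is the point):

        /*
         * Normal path, count inside __schedule():
         *   schedule()
         *     preempt_disable()              -> 1
         *     raw_spin_lock_irq(&rq->lock)   -> 2  (spin_lock implies preempt_disable)
         *
         * Old TASK_DEAD path added one more level on top:
         *   do_exit()
         *     preempt_disable()              -> 1
         *     tsk->state = TASK_DEAD;
         *     schedule()                     -> 2, then 3 once rq->lock is held
         *
         * After this rework the exit path schedules with the same count (2)
         * as everyone else.
         */
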
    • sched/core: Fix TASK_DEAD race in finish_task_switch() · 95913d97
      Peter Zijlstra authored
      So the problem this patch is trying to address is as follows:
      
              CPU0                            CPU1
      
              context_switch(A, B)
                                              ttwu(A)
                                                LOCK A->pi_lock
                                                A->on_cpu == 0
              finish_task_switch(A)
                prev_state = A->state  <-.
                WMB                      |
                A->on_cpu = 0;           |
                UNLOCK rq0->lock         |
                                         |    context_switch(C, A)
                                         `--  A->state = TASK_DEAD
                prev_state == TASK_DEAD
                  put_task_struct(A)
                                              context_switch(A, C)
                                              finish_task_switch(A)
                                                A->state == TASK_DEAD
                                                  put_task_struct(A)
      
      The argument being that the WMB will allow the load of A->state on CPU0
      to cross over and observe CPU1's store of A->state, which will then
      result in a double-drop and use-after-free.
      
      Now the comment states (and this was true once upon a long time ago)
      that we need to observe A->state while holding rq->lock because that
      will order us against the wakeup; however the wakeup will not in fact
      acquire (that) rq->lock; it takes A->pi_lock these days.
      
      We can obviously fix this by upgrading the WMB to an MB, but that is
      expensive, so we'd rather avoid that.
      
      The alternative this patch takes is: smp_store_release(&A->on_cpu, 0),
      which avoids the MB on some archs, but not important ones like ARM.
      Reported-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: <stable@vger.kernel.org> # v3.1+
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Cc: manfred@colorfullife.com
      Cc: will.deacon@arm.com
      Fixes: e4a52bcb ("sched: Remove rq->lock from the first half of ttwu()")
      Link: http://lkml.kernel.org/r/20150929124509.GG3816@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      95913d97
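
      Schematically, the release/acquire pairing the fix relies on looks like
      this (a sketch; the wait loop stands in for the ttwu() side):

        /* finish_task_switch() side: the read of prev->state above cannot be
         * reordered past this store, so it can never observe the TASK_DEAD
         * that the other CPU writes later. */
        smp_store_release(&prev->on_cpu, 0);

        /* try_to_wake_up() side: wait until the task is off its old CPU; the
         * acquire pairs with the release above, ordering everything the waker
         * does afterwards (including A->state = TASK_DEAD) after it. */
        while (smp_load_acquire(&p->on_cpu))
        	cpu_relax();
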
  2. 23 Sep 2015 (1 commit)
  3. 18 Sep 2015 (3 commits)
  4. 13 Sep 2015 (7 commits)
  5. 11 Sep 2015 (1 commit)
    • sched: 'Annotate' migrate_tasks() · 5473e0cc
      Wanpeng Li authored
      Kernel testing triggered this warning:
      
      | WARNING: CPU: 0 PID: 13 at kernel/sched/core.c:1156 do_set_cpus_allowed+0x7e/0x80()
      | Modules linked in:
      | CPU: 0 PID: 13 Comm: migration/0 Not tainted 4.2.0-rc1-00049-g25834c73 #2
      | Call Trace:
      |   dump_stack+0x4b/0x75
      |   warn_slowpath_common+0x8b/0xc0
      |   warn_slowpath_null+0x22/0x30
      |   do_set_cpus_allowed+0x7e/0x80
      |   cpuset_cpus_allowed_fallback+0x7c/0x170
      |   select_fallback_rq+0x221/0x280
      |   migration_call+0xe3/0x250
      |   notifier_call_chain+0x53/0x70
      |   __raw_notifier_call_chain+0x1e/0x30
      |   cpu_notify+0x28/0x50
      |   take_cpu_down+0x22/0x40
      |   multi_cpu_stop+0xd5/0x140
      |   cpu_stopper_thread+0xbc/0x170
      |   smpboot_thread_fn+0x174/0x2f0
      |   kthread+0xc4/0xe0
      |   ret_from_kernel_thread+0x21/0x30
      
      As Peterz pointed out:
      
      | So the normal rules for changing task_struct::cpus_allowed are holding
      | both pi_lock and rq->lock, such that holding either stabilizes the mask.
      |
      | This is so that wakeup can happen without rq->lock and load-balance
      | without pi_lock.
      |
      | From this we already get the relaxation that we can omit acquiring
      | rq->lock if the task is not on the rq, because in that case
      | load-balancing will not apply to it.
      |
      | ** these are the rules currently tested in do_set_cpus_allowed() **
      |
      | Now, since __set_cpus_allowed_ptr() uses task_rq_lock() which
      | unconditionally acquires both locks, we could get away with holding just
      | rq->lock when on_rq for modification because that'd still exclude
      | __set_cpus_allowed_ptr(), it would also work against
      | __kthread_bind_mask() because that assumes !on_rq.
      |
      | That said, this is all somewhat fragile.
      |
      | Now, I don't think dropping rq->lock is quite as disastrous as it
      | usually is because !cpu_active at this point, which means load-balance
      | will not interfere, but that too is somewhat fragile.
      |
      | So we end up with a choice of two fragile..
      
      This patch fixes it by following the rules for changing
      task_struct::cpus_allowed with both pi_lock and rq->lock held.
      Reported-by: kernel test robot <ying.huang@intel.com>
      Reported-by: Sasha Levin <sasha.levin@oracle.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      [ Modified changelog and patch. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/BLU436-SMTP1660820490DE202E3934ED3806E0@phx.gbl
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5473e0cc
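
      A sketch of the rule the fix follows inside migrate_tasks() (simplified;
      lock order is pi_lock first, then rq->lock, and the re-check that 'next'
      is still the right task after re-taking rq->lock is omitted):

        int dest_cpu;

        raw_spin_unlock(&rq->lock);		/* can't take pi_lock under rq->lock */
        raw_spin_lock(&next->pi_lock);
        raw_spin_lock(&rq->lock);

        /* Both locks held: task_struct::cpus_allowed may now be changed, so
         * do_set_cpus_allowed() (reached via select_fallback_rq() ->
         * cpuset_cpus_allowed_fallback()) no longer triggers the warning
         * shown above. */
        dest_cpu = select_fallback_rq(rq->cpu, next);

        raw_spin_unlock(&next->pi_lock);
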
  6. 02 Sep 2015 (1 commit)
  7. 25 Aug 2015 (1 commit)
    • sched: Fix cpu_active_mask/cpu_online_mask race · dd9d3843
      Jan H. Schönherr authored
      There is a race condition in SMP bootup code, which may result
      in
      
          WARNING: CPU: 0 PID: 1 at kernel/workqueue.c:4418
          workqueue_cpu_up_callback()
      or
          kernel BUG at kernel/smpboot.c:135!
      
      It can be triggered with a bit of luck in Linux guests running
      on busy hosts.
      
      	CPU0                        CPUn
      	====                        ====
      
      	_cpu_up()
      	  __cpu_up()
      				    start_secondary()
      				      set_cpu_online()
      					cpumask_set_cpu(cpu,
      						   to_cpumask(cpu_online_bits));
      	  cpu_notify(CPU_ONLINE)
      	    <do stuff, see below>
      					cpumask_set_cpu(cpu,
      						   to_cpumask(cpu_active_bits));
      
      During the various CPU_ONLINE callbacks CPUn is online but not
      active. Several things can go wrong at that point, depending on
      the scheduling of tasks on CPU0.
      
      Variant 1:
      
        cpu_notify(CPU_ONLINE)
          workqueue_cpu_up_callback()
            rebind_workers()
              set_cpus_allowed_ptr()
      
        This call fails because it requires an active CPU; rebind_workers()
        ends with a warning:
      
          WARNING: CPU: 0 PID: 1 at kernel/workqueue.c:4418
          workqueue_cpu_up_callback()
      
      Variant 2:
      
        cpu_notify(CPU_ONLINE)
          smpboot_thread_call()
            smpboot_unpark_threads()
             ..
              __kthread_unpark()
                __kthread_bind()
                wake_up_state()
                 ..
                  select_task_rq()
                    select_fallback_rq()
      
        The ->wake_cpu of the unparked thread is not allowed, making a call
        to select_fallback_rq() necessary. Then, select_fallback_rq() cannot
        find an allowed, active CPU and promptly resets the allowed CPUs, so
        that the task in question ends up on CPU0.
      
        When those unparked tasks are eventually executed, they run
        immediately into a BUG:
      
          kernel BUG at kernel/smpboot.c:135!
      
      Just changing the order in which the online/active bits are set
      (and adding some memory barriers), would solve the two issues
      above. However, it would change the order of operations back to
      the one before commit 6acbfb96 ("sched: Fix hotplug vs.
      set_cpus_allowed_ptr()"), thus, reintroducing that particular
      problem.
      
      Going further back into history, we have at least the following
      commits touching this topic:
      - commit 2baab4e9 ("sched: Fix select_fallback_rq() vs cpu_active/cpu_online")
      - commit 5fbd036b ("sched: Cleanup cpu_active madness")
      
      Together, these give us the following non-working solutions:
      
        - secondary CPU sets active before online, because active is assumed to
          be a subset of online;
      
        - secondary CPU sets online before active, because the primary CPU
          assumes that an online CPU is also active;
      
        - secondary CPU sets online and waits for primary CPU to set active,
          because it might deadlock.
      
      Commit 875ebe94 ("powerpc/smp: Wait until secondaries are
      active & online") introduces an arch-specific solution to this
      arch-independent problem.
      
      Now, go for a more general solution without explicit waiting and
      simply set active twice: once on the secondary CPU after online
      was set and once on the primary CPU after online was seen.
      
      Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: <stable@vger.kernel.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Joerg Roedel <jroedel@suse.de>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Wilson <msw@amazon.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 6acbfb96 ("sched: Fix hotplug vs. set_cpus_allowed_ptr()")
      Link: http://lkml.kernel.org/r/1439408156-18840-1-git-send-email-jschoenh@amazon.de
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dd9d3843
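
      The resulting approach, schematically (hook points and names simplified;
      set_cpu_active() just sets a bit, so the second call is harmless):

        /* Secondary CPU, start_secondary()-style path (sketch): */
        set_cpu_online(smp_processor_id(), true);
        set_cpu_active(smp_processor_id(), true);	/* first set */

        /* Primary CPU, _cpu_up()-style path (sketch): */
        while (!cpu_online(cpu))
        	cpu_relax();
        set_cpu_active(cpu, true);	/* second set: idempotent, and makes sure
        				 * the CPU_ONLINE callbacks always run
        				 * against an online *and* active CPU */
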
  8. 12 Aug 2015 (4 commits)
  9. 04 Aug 2015 (1 commit)
  10. 03 Aug 2015 (6 commits)
    • sched/fair: Init cfs_rq's sched_entity load average · 540247fb
      Yuyang Du authored
      The runnable load and utilization averages of a cfs_rq's sched_entity
      were not initialized. As is done for a new task, give a new cfs_rq's
      sched_entity initial values so that its load is weighted up during its
      infancy.
      Signed-off-by: Yuyang Du <yuyang.du@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arjan@linux.intel.com
      Cc: bsegall@google.com
      Cc: dietmar.eggemann@arm.com
      Cc: fengguang.wu@intel.com
      Cc: len.brown@intel.com
      Cc: morten.rasmussen@arm.com
      Cc: pjt@google.com
      Cc: rafael.j.wysocki@intel.com
      Cc: umgwanakikbuti@gmail.com
      Cc: vincent.guittot@linaro.org
      Link: http://lkml.kernel.org/r/1436918682-4971-5-git-send-email-yuyang.du@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      540247fb
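
      A sketch of the kind of initialization meant here (field names follow
      the load-tracking rewrite below; treat the exact values as illustrative):

        void init_entity_runnable_average(struct sched_entity *se)
        {
        	struct sched_avg *sa = &se->avg;

        	sa->last_update_time = 0;
        	/* Start heavy: assume full weight/utilization and let decay
        	 * bring it down as real history accumulates. */
        	sa->load_avg = scale_load_down(se->load.weight);
        	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
        	sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
        	sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
        }
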
    • sched/fair: Rewrite runnable load and utilization average tracking · 9d89c257
      Yuyang Du authored
      The idea of runnable load average (let runnable time contribute to weight)
      was proposed by Paul Turner and Ben Segall, and it is still followed by
      this rewrite. This rewrite aims to solve the following issues:
      
      1. cfs_rq's load average (namely runnable_load_avg and blocked_load_avg) is
         updated at the granularity of one entity at a time, which results in the
         cfs_rq's load average being stale or partially updated: at any time, only
         one entity is up to date, while all other entities are effectively lagging
         behind. This is undesirable.
      
         To illustrate, if we have n runnable entities in the cfs_rq, as time
         elapses, they certainly become outdated:
      
           t0: cfs_rq { e1_old, e2_old, ..., en_old }
      
         and when we update:
      
           t1: update e1, then we have cfs_rq { e1_new, e2_old, ..., en_old }
      
           t2: update e2, then we have cfs_rq { e1_old, e2_new, ..., en_old }
      
           ...
      
         We solve this by combining all runnable entities' load averages together
         in cfs_rq's avg, and update the cfs_rq's avg as a whole. This is based
         on the fact that if we regard the update as a function, then:
      
         w * update(e) = update(w * e) and
      
         update(e1) + update(e2) = update(e1 + e2), then
      
         w1 * update(e1) + w2 * update(e2) = update(w1 * e1 + w2 * e2)
      
         therefore, by this rewrite, we have an entirely updated cfs_rq at the
         time we update it:
      
           t1: update cfs_rq { e1_new, e2_new, ..., en_new }
      
           t2: update cfs_rq { e1_new, e2_new, ..., en_new }
      
           ...
      
      2. cfs_rq's load average differs between the top-level rq->cfs_rq and other
         task_groups' per-CPU cfs_rqs in whether or not blocked_load_avg
         contributes to the load.
      
         The basic idea behind runnable load average (the same for utilization)
         is that the blocked state is taken into account as opposed to only
         accounting for the currently runnable state. Therefore, the average
         should include both the runnable/running and blocked load averages.
         This rewrite does that.
      
         In addition, we also combine runnable/running and blocked averages
         of all entities into the cfs_rq's average, and update it together at
         once. This is based on the fact that:
      
           update(runnable) + update(blocked) = update(runnable + blocked)
      
         This significantly reduces the code as we don't need to separately
         maintain/update runnable/running load and blocked load.
      
      3. How task_group entities' share is calculated is complex and imprecise.
      
         We reduce the complexity in this rewrite to allow a very simple rule:
         the task_group's load_avg is aggregated from its per CPU cfs_rqs's
         load_avgs. Then group entity's weight is simply proportional to its
         own cfs_rq's load_avg / task_group's load_avg. To illustrate,
      
         if a task_group has { cfs_rq1, cfs_rq2, ..., cfs_rqn }, then,
      
         task_group_avg = cfs_rq1_avg + cfs_rq2_avg + ... + cfs_rqn_avg, then
      
         cfs_rqx's entity's share = cfs_rqx_avg / task_group_avg * task_group's share
      
      To sum up, this rewrite in principle is equivalent to the current one, but
      fixes the issues described above. Turns out, it significantly reduces the
      code complexity and hence increases clarity and efficiency. In addition,
      the new averages are more smooth/continuous (no spurious spikes and valleys)
      and updated more consistently and quickly to reflect the load dynamics.
      
      As a result, we have less load tracking overhead, better performance,
      and especially better power efficiency due to more balanced load.
      Signed-off-by: Yuyang Du <yuyang.du@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: arjan@linux.intel.com
      Cc: bsegall@google.com
      Cc: dietmar.eggemann@arm.com
      Cc: fengguang.wu@intel.com
      Cc: len.brown@intel.com
      Cc: morten.rasmussen@arm.com
      Cc: pjt@google.com
      Cc: rafael.j.wysocki@intel.com
      Cc: umgwanakikbuti@gmail.com
      Cc: vincent.guittot@linaro.org
      Link: http://lkml.kernel.org/r/1436918682-4971-3-git-send-email-yuyang.du@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9d89c257
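
      Rule 3 above, as a sketch in code (schematic; the helper name and the
      'tg_load_avg' argument stand for the aggregated sum described in the
      message):

        /* The group entity's share on this CPU is proportional to how much of
         * the task_group's total load lives in this CPU's cfs_rq. */
        static long calc_group_shares(struct task_group *tg, struct cfs_rq *cfs_rq,
        			      unsigned long tg_load_avg)
        {
        	long shares = tg->shares;

        	if (tg_load_avg)
        		shares = shares * cfs_rq->avg.load_avg / tg_load_avg;

        	return clamp_t(long, shares, MIN_SHARES, tg->shares);
        }
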
    • sched/preempt: Fix cond_resched_lock() and cond_resched_softirq() · fe32d3cd
      Konstantin Khlebnikov authored
      These functions check should_resched() before unlocking the spinlock or
      re-enabling bottom halves; at that point preempt_count is still non-zero,
      so should_resched() always returns false. cond_resched_lock() only did
      anything when spin_needbreak() was set.
      
      This patch adds argument "preempt_offset" to should_resched().
      
      It also adds preempt_count offset constants for the common cases:
      
        PREEMPT_DISABLE_OFFSET  - offset after preempt_disable()
        PREEMPT_LOCK_OFFSET     - offset after spin_lock()
        SOFTIRQ_DISABLE_OFFSET  - offset after local_bh_disable()
        SOFTIRQ_LOCK_OFFSET     - offset after spin_lock_bh()
      Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Graf <agraf@suse.de>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: bdb43806 ("sched: Extract the basic add/sub preempt_count modifiers")
      Link: http://lkml.kernel.org/r/20150715095204.12246.98268.stgit@buzz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fe32d3cd
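
      After the change, the check and its callers look roughly like this
      (a sketch consistent with the constants listed above):

        static __always_inline bool should_resched(int preempt_offset)
        {
        	/* Resched only when exactly the expected preempt_count is left
        	 * (the lock/bh section we are about to leave) and a reschedule
        	 * is actually pending. */
        	return unlikely(preempt_count() == preempt_offset &&
        			tif_need_resched());
        }

        /* cond_resched_lock() then checks should_resched(PREEMPT_LOCK_OFFSET),
         * cond_resched_softirq() checks should_resched(SOFTIRQ_DISABLE_OFFSET). */
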
    • sched: Introduce the 'trace_sched_waking' tracepoint · fbd705a0
      Peter Zijlstra authored
      Mathieu reported that since 317f3941 ("sched: Move the second half
      of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of
      context of the waker.
      
      This is a problem when you want to analyse wakeup paths because it is
      now very hard to correlate the wakeup event to whoever issued the
      wakeup.
      
      OTOH trace_sched_wakeup() is issued at the point where we set
      p->state = TASK_RUNNING, which is right where we hand the task off to
      the scheduler, so this is an important point when looking at
      scheduling behaviour: up to here it has been the wakeup path, everything
      hereafter is due to scheduler policy.
      
      To bridge this gap, introduce a second tracepoint: trace_sched_waking.
      It is guaranteed to be called in the waker context.
      Reported-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Francis Giraldeau <francis.giraldeau@gmail.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      fbd705a0
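
      The placement of the two tracepoints, schematically (argument lists and
      the surrounding locking are simplified):

        /* try_to_wake_up(p, state, wake_flags) -- always the waker's context: */
        trace_sched_waking(p);

        /* ttwu_do_wakeup(rq, p, wake_flags) -- possibly on the remote CPU,
         * right where the task is handed to the scheduler: */
        p->state = TASK_RUNNING;
        trace_sched_wakeup(p);
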
    • sched, sysctl: Delete an unnecessary check before unregister_sysctl_table() · 781b0203
      Markus Elfring authored
      The unregister_sysctl_table() function tests whether its argument is NULL
      and then returns immediately. Thus the test around the call is not needed.
      
      This issue was detected by using the Coccinelle software.
      Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/5597877E.3060503@users.sourceforge.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      781b0203
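
      The shape of the cleanup, schematically (call-site names omitted):

        /* before */
        if (table)
        	unregister_sysctl_table(table);

        /* after: unregister_sysctl_table() already returns early on NULL */
        unregister_sysctl_table(table);
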
    • locking/static_keys: Add static_key_{en,dis}able() helpers · e33886b3
      Peter Zijlstra authored
      Add two helpers to make it easier to treat the refcount as boolean.
      Suggested-by: Jason Baron <jasonbaron0@gmail.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      e33886b3
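
      The helpers treat the refcount as a boolean, roughly like this (a sketch
      of the idea; the WARN bounds reflect the boolean-only usage):

        static inline void static_key_enable(struct static_key *key)
        {
        	int count = static_key_count(key);

        	WARN_ON_ONCE(count < 0 || count > 1);	/* boolean use only */
        	if (!count)
        		static_key_slow_inc(key);
        }

        static inline void static_key_disable(struct static_key *key)
        {
        	int count = static_key_count(key);

        	WARN_ON_ONCE(count < 0 || count > 1);
        	if (count)
        		static_key_slow_dec(key);
        }
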
  11. 29 Jul 2015 (1 commit)
  12. 23 Jul 2015 (1 commit)
  13. 15 Jul 2015 (1 commit)
    • cgroup: allow a cgroup subsystem to reject a fork · 7e47682e
      Aleksa Sarai authored
      Add a new cgroup subsystem callback, can_fork, that states whether
      the fork is accepted or rejected by a cgroup policy. In addition,
      add a cancel_fork callback so that if an error
      occurs later in the forking process, any state modified by can_fork can
      be reverted.
      
      Allow for a private opaque pointer to be passed from cgroup_can_fork to
      cgroup_post_fork, allowing for the fork state to be stored by each
      subsystem separately.
      
      Also add a tagging system for cgroup_subsys.h to allow for CGROUP_<TAG>
      enumerations to be defined and used. In addition, explicitly add a
      CGROUP_CANFORK_COUNT macro to make arrays easier to define.
      
      This is in preparation for implementing the pids cgroup subsystem.
      Signed-off-by: Aleksa Sarai <cyphar@cyphar.com>
      Signed-off-by: Tejun Heo <tj@kernel.org>
      7e47682e
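
      The resulting hook ordering in the fork path, schematically (the opaque
      state array and the error label are simplified assumptions):

        /* copy_process(), schematic: */
        void *ss_state[CGROUP_CANFORK_COUNT];

        retval = cgroup_can_fork(p, ss_state);	/* subsystems may veto the fork */
        if (retval)
        	goto bad_fork_free;

        /* ... on a later failure, before the task becomes visible ... */
        cgroup_cancel_fork(p, ss_state);	/* undo whatever can_fork() set up */

        /* ... on success, once the task is committed ... */
        cgroup_post_fork(p, ss_state);		/* same per-subsystem state handed through */
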
  14. 04 Jul 2015 (2 commits)
  15. 19 Jun 2015 (7 commits)
    • timer: Reduce timer migration overhead if disabled · bc7a34b8
      Thomas Gleixner authored
      Eric reported that the timer_migration sysctl is not really nice
      performance wise as it needs to check at every timer insertion whether
      the feature is enabled or not. Further the check does not live in the
      timer code, so we have an extra function call which checks an extra
      cache line to figure out that it is disabled.
      
      We can do better and store that information in the per cpu (hr)timer
      bases. I pondered to use a static key, but that's a nightmare to
      update from the nohz code and the timer base cache line is hot anyway
      when we select a timer base.
      
      The old logic enabled the timer migration unconditionally if
      CONFIG_NO_HZ was set even if nohz was disabled on the kernel command
      line.
      
      With this modification, we start off with migration disabled, while the
      user-visible sysctl still reads as enabled. When the kernel switches to
      NOHZ mode, migration is enabled, provided the user did not disable it via
      the sysctl prior to the switch. If nohz=off is on the kernel command line,
      migration stays disabled no matter what.
      
      Before:
        47.76%  hog       [.] main
        14.84%  [kernel]  [k] _raw_spin_lock_irqsave
         9.55%  [kernel]  [k] _raw_spin_unlock_irqrestore
         6.71%  [kernel]  [k] mod_timer
         6.24%  [kernel]  [k] lock_timer_base.isra.38
         3.76%  [kernel]  [k] detach_if_pending
         3.71%  [kernel]  [k] del_timer
         2.50%  [kernel]  [k] internal_add_timer
         1.51%  [kernel]  [k] get_nohz_timer_target
         1.28%  [kernel]  [k] __internal_add_timer
         0.78%  [kernel]  [k] timerfn
         0.48%  [kernel]  [k] wake_up_nohz_cpu
      
      After:
        48.10%  hog       [.] main
        15.25%  [kernel]  [k] _raw_spin_lock_irqsave
         9.76%  [kernel]  [k] _raw_spin_unlock_irqrestore
         6.50%  [kernel]  [k] mod_timer
         6.44%  [kernel]  [k] lock_timer_base.isra.38
         3.87%  [kernel]  [k] detach_if_pending
         3.80%  [kernel]  [k] del_timer
         2.67%  [kernel]  [k] internal_add_timer
         1.33%  [kernel]  [k] __internal_add_timer
         0.73%  [kernel]  [k] timerfn
         0.54%  [kernel]  [k] wake_up_nohz_cpu
      Reported-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Viresh Kumar <viresh.kumar@linaro.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Joonwoo Park <joonwoop@codeaurora.org>
      Cc: Wenbo Wang <wenbo.wang@memblaze.com>
      Link: http://lkml.kernel.org/r/20150526224512.127050787@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      bc7a34b8
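
      The gist of the change, as a sketch (field and helper names are close to,
      but not necessarily identical to, the actual patch):

        struct tvec_base {
        	spinlock_t		lock;
        	/* ... */
        	bool			migration_enabled;	/* updated from sysctl / nohz code */
        } ____cacheline_aligned;

        static inline struct tvec_base *get_target_base(struct tvec_base *base, int pinned)
        {
        	/* The flag lives in the per-CPU base we already touch on every
        	 * insertion, so no extra function call or extra cache line. */
        	if (pinned || !base->migration_enabled)
        		return this_cpu_ptr(&tvec_bases);
        	return per_cpu_ptr(&tvec_bases, get_nohz_timer_target());
        }
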
    • sched: Remove superfluous resetting of the p->dl_throttled flag · 6713c3aa
      Wanpeng Li authored
      Resetting the p->dl_throttled flag in rt_mutex_setprio() (for a task that is going
      to be boosted) is superfluous, as the natural place to do so is in
      replenish_dl_entity().
      
      If the task was on the runqueue and it is boosted by a DL task, it will be enqueued
      back with ENQUEUE_REPLENISH flag set, which can guarantee that dl_throttled is
      reset in replenish_dl_entity().
      
      This patch drops the resetting of throttled status in function rt_mutex_setprio().
      Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Juri Lelli <juri.lelli@arm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1431496867-4194-6-git-send-email-wanpeng.li@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6713c3aa
    • sched/preempt: Add static_key() to preempt_notifiers · 1cde2930
      Peter Zijlstra authored
      Avoid touching the curr->preempt_notifier cacheline when not needed.
      
      Provides a small improvement on pipe-bench:
      
        taskset 01 perf stat --repeat 10 -- perf bench sched pipe
      
      before:
      
       Performance counter stats for 'perf bench sched pipe' (10 runs):
      
            12385.016204      task-clock (msec)         #    1.001 CPUs utilized            ( +-  0.34% )
               2,000,023      context-switches          #    0.161 M/sec                    ( +-  0.00% )
                       0      cpu-migrations            #    0.000 K/sec
                     175      page-faults               #    0.014 K/sec                    ( +-  0.26% )
          41,376,162,250      cycles                    #    3.341 GHz                      ( +-  0.11% )
          17,389,139,321      stalled-cycles-frontend   #   42.03% frontend cycles idle     ( +-  0.25% )
         <not supported>      stalled-cycles-backend
          68,788,588,003      instructions              #    1.66  insns per cycle
                                                        #    0.25  stalled cycles per insn  ( +-  0.02% )
          13,449,387,620      branches                  # 1085.940 M/sec                    ( +-  0.02% )
              20,880,690      branch-misses             #    0.16% of all branches          ( +-  0.98% )
      
            12.372646094 seconds time elapsed                                          ( +-  0.34% )
      
      after:
      
       Performance counter stats for 'perf bench sched pipe' (10 runs):
      
            12180.936528      task-clock (msec)         #    1.001 CPUs utilized            ( +-  0.33% )
               2,000,077      context-switches          #    0.164 M/sec                    ( +-  0.00% )
                       0      cpu-migrations            #    0.000 K/sec
                     174      page-faults               #    0.014 K/sec                    ( +-  0.27% )
          40,691,545,577      cycles                    #    3.341 GHz                      ( +-  0.06% )
          16,446,333,371      stalled-cycles-frontend   #   40.42% frontend cycles idle     ( +-  0.18% )
         <not supported>      stalled-cycles-backend
          68,570,100,387      instructions              #    1.69  insns per cycle
                                                        #    0.24  stalled cycles per insn  ( +-  0.01% )
          13,389,740,014      branches                  # 1099.237 M/sec                    ( +-  0.01% )
              20,175,440      branch-misses             #    0.15% of all branches          ( +-  0.52% )
      
            12.169253010 seconds time elapsed                                          ( +-  0.33% )
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      1cde2930
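
      The fast path then reduces to a static-branch check, roughly as below
      (a sketch; registration bumps the key, unregistration drops it):

        static struct static_key preempt_notifier_key = STATIC_KEY_INIT_FALSE;

        static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
        {
        	/* Patched out (a NOP) while no notifier is registered, so the
        	 * common context switch never touches the notifier list's
        	 * cacheline. */
        	if (static_key_false(&preempt_notifier_key))
        		__fire_sched_in_preempt_notifiers(curr);
        }
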
    • sched/preempt: Fix preempt notifiers documentation about hlist_del() within unsafe iteration · d84525a8
      Mathieu Desnoyers authored
      preempt_notifier_unregister() documents:
      
        "This is safe to call from within a preemption notifier."
      
      However, both fire_sched_in_preempt_notifiers() and
      fire_sched_out_preempt_notifiers() are using hlist_for_each_entry(),
      which is not safe against entry removal during iteration.
      
      Inspection of the KVM code does not reveal any use of
      preempt_notifier_unregister() within the preempt notifiers.
      
      Therefore, fix the comment.
      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/1431881590-1456-1-git-send-email-mathieu.desnoyers@efficios.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      d84525a8
    • sched,lockdep: Employ lock pinning · cbce1a68
      Peter Zijlstra authored
      Employ the new lockdep lock pinning annotation to ensure no
      'accidental' lock-breaks happen with rq->lock.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124744.003233193@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      cbce1a68
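
      Usage pattern, schematically (the 2015-era API takes no cookie; pin and
      unpin bracket the region that must not drop the lock):

        raw_spin_lock(&rq->lock);
        lockdep_pin_lock(&rq->lock);	/* any unlock of rq->lock now complains */

        /* ... scheduler code that must not release rq->lock behind our back ... */

        lockdep_unpin_lock(&rq->lock);
        raw_spin_unlock(&rq->lock);
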
    • sched: Streamline the task migration locking a little · 5e16bbc2
      Peter Zijlstra authored
      The whole migrate_task{,s}() locking seems a little shaky; there's a
      lot of dropping and reacquiring happening. Pull the locking up into the
      callers as far as possible to streamline the lot.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124743.755256708@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5e16bbc2
    • sched: Move code around · 5cc389bc
      Peter Zijlstra authored
      In preparation to reworking set_cpus_allowed_ptr() move some code
      around. This also removes some superfluous #ifdefs and adds comments
      to some #endifs.
      
         text    data     bss     dec     hex filename
      12211532        1738144 1081344 15031020         e55aec defconfig-build/vmlinux.pre
      12211532        1738144 1081344 15031020         e55aec defconfig-build/vmlinux.post
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: ktkhai@parallels.com
      Cc: rostedt@goodmis.org
      Cc: juri.lelli@gmail.com
      Cc: pang.xunlei@linaro.org
      Cc: oleg@redhat.com
      Cc: wanpeng.li@linux.intel.com
      Cc: umgwanakikbuti@gmail.com
      Link: http://lkml.kernel.org/r/20150611124743.662086684@infradead.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5cc389bc