1. 24 Dec 2013, 2 commits
    • timekeeping: Fix potential lost pv notification of time change · 5258d3f2
      John Stultz authored
      In 780427f0 (Indicate that clock was set in the pvclock
      gtod notifier), logic was added to pass a CLOCK_WAS_SET
      notification to the pvclock notifier chain.
      
      While that patch added an action flag returned from
      accumulate_nsecs_to_secs(), it only uses the returned value
      in one location, and not in the logarithmic accumulation.
      
      This means if a leap second triggered during the logarithmic
      accumulation (which is most likely where it would happen),
      the notification that the clock was set would not make it to
      the pv notifiers.
      
      This patch extends logarithmic_accumulation() to pass down that
      action flag so proper notification will occur.
      
      This patch also renames the variable from action to clock_set
      per Ingo's suggestion.
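
      As a plain C illustration of the bug class (not the kernel's code;
      accumulate_step() and the loop below are hypothetical stand-ins): if
      the flag returned by the inner accumulation step is dropped instead
      of being passed back up, a leap second that fires inside the loop
      never reaches the notifier.

          #include <stdbool.h>
          #include <stdio.h>

          /* Hypothetical stand-in for accumulate_nsecs_to_secs(): reports
           * whether this step set the clock (e.g. a leap second). */
          static bool accumulate_step(int step)
          {
                  return step == 3;   /* pretend a leap second fires on step 3 */
          }

          int main(void)
          {
                  bool clock_set = false;
                  int i;

                  for (i = 0; i < 5; i++)
                          clock_set |= accumulate_step(i); /* propagate, don't drop */

                  if (clock_set)
                          printf("notify pvclock chain: CLOCK_WAS_SET\n");
                  return 0;
          }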
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: <xen-devel@lists.xen.org>
      Cc: stable <stable@vger.kernel.org> #3.11+
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      5258d3f2
    • timekeeping: Fix lost updates to tai adjustment · f55c0760
      John Stultz authored
      Since 48cdc135 (Implement a shadow timekeeper), we have to
      call timekeeping_update() after any adjustment to the timekeeping
      structure in order to make sure that any adjustments to the structure
      persist.
      
      Unfortunately, the updates to the tai offset via adjtimex do not
      trigger this update, causing adjustments to the tai offset to be
      made and then over-written by the previous value at the next
      update_wall_time() call.
      
      This patch resolves the issue by calling timekeeping_update()
      right after setting the tai offset.
      
      Cc: Sasha Levin <sasha.levin@oracle.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: stable <stable@vger.kernel.org> #3.10+
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      f55c0760
  2. 03 Dec 2013, 1 commit
    • nohz: Convert a few places to use local per cpu accesses · e8fcaa5c
      Frederic Weisbecker authored
      A few functions use remote per CPU access APIs when they
      deal with local values.
      
      Just do the right conversion to improve performance, code
      readability and debug checks.
      
      While at it, let's extend some of these function names with a *_this_cpu()
      suffix in order to make their purpose more explicit.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      e8fcaa5c
  3. 23 Oct 2013, 1 commit
    • clockevents: Sanitize ticks to nsec conversion · 97b94106
      Thomas Gleixner authored
      Marc Kleine-Budde pointed out that commit 77cc982f "clocksource: use
      clockevents_config_and_register() where possible" caused a regression
      for some of the converted subarchs.
      
      The reason is, that the clockevents core code converts the minimal
      hardware tick delta to a nanosecond value for core internal
      usage. This conversion is affected by integer math rounding loss, so
      the backwards conversion to hardware ticks will likely result in a
      value which is less than the configured hardware limitation. The
      affected subarchs used their own workaround (SIGH!) which got lost in
      the conversion.
      
      The solution for the issue at hand is simple: adding evt->mult - 1 to
      the shifted value before the integer division in the core conversion
      function takes care of it. But this only works for the case where for
      the scaled math mult/shift pair "mult <= 1 << shift" is true. For the
      case where "mult > 1 << shift" we can apply the rounding add only for
      the minimum delta value to make sure that the backward conversion is
      not less than the given hardware limit. For the upper bound we need to
      omit the rounding add, because the backwards conversion is always
      larger than the original latch value. That would violate the upper
      bound of the hardware device.
      
      Though looking closer at the details of that function reveals another
      bogosity: The upper bounds check is broken as well. Checking for a
      resulting "clc" value greater than KTIME_MAX after the conversion is
      pointless. The conversion does:
      
            u64 clc = (latch << evt->shift) / evt->mult;
      
      So there is no sanity check for (latch << evt->shift) exceeding the
      64bit boundary. The latch argument is "unsigned long", so on a 64bit
      arch the handed in argument could easily lead to an unnoticed shift
      overflow. With the above rounding fix applied the calculation before
      the division is:
      
             u64 clc = (latch << evt->shift) + evt->mult - 1;
      
      So we need to make sure that neither the shift nor the rounding add
      is overflowing the u64 boundary.
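
      As a small stand-alone sketch of the rounding and overflow handling
      described above (plain C; the function name and the is_min_delta
      parameter are illustrative, not the kernel's exact interface):

          #include <stdint.h>

          #define NSEC_MAX UINT64_MAX

          /* Ticks -> nanoseconds with the rules above: always round up when
           * mult <= 1 << shift; when mult > 1 << shift, round up only the
           * minimum delta so the backwards conversion never drops below the
           * hardware limit, and never round the maximum delta so the upper
           * bound of the device is not violated. */
          static uint64_t ticks_to_ns(unsigned long latch, uint32_t mult,
                                      uint32_t shift, int is_min_delta)
          {
                  uint64_t clc = (uint64_t)latch << shift;
                  uint64_t rnd = (uint64_t)mult - 1;

                  /* neither the shift nor the rounding add may exceed 64 bits */
                  if ((clc >> shift) != (uint64_t)latch || clc > NSEC_MAX - rnd)
                          return NSEC_MAX;

                  if (mult <= (1ULL << shift) || is_min_delta)
                          clc += rnd;

                  return clc / mult;
          }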
      
      [ukl: move assignment to rnd after eventually changing mult, fix build
       issue and correct comment with the right math]
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Russell King - ARM Linux <linux@arm.linux.org.uk>
      Cc: Marc Kleine-Budde <mkl@pengutronix.de>
      Cc: nicolas.ferre@atmel.com
      Cc: Marc Pignat <marc.pignat@hevs.ch>
      Cc: john.stultz@linaro.org
      Cc: kernel@pengutronix.de
      Cc: Ronald Wahl <ronald.wahl@raritan.com>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: Ludovic Desroches <ludovic.desroches@atmel.com>
      Cc: stable@vger.kernel.org
      Link: http://lkml.kernel.org/r/1380052223-24139-1-git-send-email-u.kleine-koenig@pengutronix.de
      Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
      97b94106
  4. 19 Oct 2013, 3 commits
  5. 18 Oct 2013, 1 commit
  6. 10 Oct 2013, 2 commits
  7. 02 Oct 2013, 1 commit
    • tick: broadcast: Deny per-cpu clockevents from being broadcast sources · 245a3496
      Soren Brinkmann authored
      On most ARM systems the per-cpu clockevents are truly per-cpu in
      the sense that they can't be controlled on any other CPU besides
      the CPU that they interrupt. If one of these clockevents were to
      become a broadcast source we would run into a lot of trouble
      because the broadcast source is enabled on the first CPU to go
      into deep idle (if that CPU suffers from FEAT_C3_STOP) and that
      could be a different CPU than what the clockevent is interrupting
      (or even worse the CPU that the clockevent interrupts could be
      offline).
      
      Theoretically it's possible to support per-cpu clockevents as the
      broadcast source but so far we haven't needed this and supporting
      it is rather complicated. Let's just deny the possibility for now
      until this becomes a reality (let's hope it never does!).
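
      A rough sketch of the kind of check this implies (plain C; the flag
      and function names below are illustrative assumptions, not the
      kernel's identifiers):

          #include <stdbool.h>

          /* device can only be programmed by the CPU it interrupts */
          #define FEAT_PERCPU 0x1

          struct clock_event_dev {
                  unsigned int features;
          };

          /* Refuse to install a truly per-cpu device as the broadcast source,
           * since the broadcast source may be driven from a different CPU. */
          static bool may_become_broadcast_source(const struct clock_event_dev *dev)
          {
                  return !(dev->features & FEAT_PERCPU);
          }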
      Signed-off-by: Soren Brinkmann <soren.brinkmann@xilinx.com>
      Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
      Acked-by: Michal Simek <michal.simek@xilinx.com>
      245a3496
  8. 30 Sep 2013, 2 commits
    • nohz: Drop generic vtime obsolete dependency on CONFIG_64BIT · ff3fb254
      Kevin Hilman authored
      The CONFIG_64BIT requirement on vtime can finally be removed
      since we now depend on HAVE_VIRT_CPU_ACCOUNTING_GEN which
      already takes care of the arch ability to handle nsecs based
      cputime_t safely.
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Arm Linux <linux-arm-kernel@lists.infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      ff3fb254
    • vtime: Add HAVE_VIRT_CPU_ACCOUNTING_GEN Kconfig · 554b0004
      Kevin Hilman authored
      With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit. In order
      to use that feature, arch code should be audited to ensure there are no
      races in concurrent read/write of cputime_t. For example,
      reading/writing 64-bit cputime_t on some 32-bit arches may require
      multiple accesses for low and high value parts, so proper locking
      is needed to protect against concurrent accesses.
      
      Therefore, add CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN which arches can
      enable after they've been audited for potential races.
      
      This option is automatically enabled on 64-bit platforms.
      
      Feature requested by Frederic Weisbecker.
      Signed-off-by: Kevin Hilman <khilman@linaro.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Arm Linux <linux-arm-kernel@lists.infradead.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      554b0004
  9. 18 Sep 2013, 1 commit
  10. 12 Sep 2013, 1 commit
    • timekeeping: Fix HRTICK related deadlock from ntp lock changes · 7bd36014
      John Stultz authored
      Gerlando Falauto reported that when HRTICK is enabled, it is
      possible to trigger system deadlocks. These were hard to
      reproduce, as HRTICK has been broken in the past, but seemed
      to be connected to the timekeeping_seq lock.
      
      Since seqlocks/seqcounts aren't supported by lockdep, I added
      some extra spinlock-based locking and triggered the following
      lockdep output:
      
      [   15.849182] ntpd/4062 is trying to acquire lock:
      [   15.849765]  (&(&pool->lock)->rlock){..-...}, at: [<ffffffff810aa9b5>] __queue_work+0x145/0x480
      [   15.850051]
      [   15.850051] but task is already holding lock:
      [   15.850051]  (timekeeper_lock){-.-.-.}, at: [<ffffffff810df6df>] do_adjtimex+0x7f/0x100
      
      <snip>
      
      [   15.850051] Chain exists of: &(&pool->lock)->rlock --> &p->pi_lock --> timekeeper_lock
      [   15.850051]  Possible unsafe locking scenario:
      [   15.850051]
      [   15.850051]        CPU0                    CPU1
      [   15.850051]        ----                    ----
      [   15.850051]   lock(timekeeper_lock);
      [   15.850051]                                lock(&p->pi_lock);
      [   15.850051]                                lock(timekeeper_lock);
      [   15.850051]   lock(&(&pool->lock)->rlock);
      [   15.850051]
      [   15.850051]  *** DEADLOCK ***
      
      The deadlock was introduced by 06c017fd ("timekeeping:
      Hold timekeepering locks in do_adjtimex and hardpps") in 3.10
      
      This patch avoids this deadlock by moving the call to
      schedule_delayed_work() outside of the timekeeper lock
      critical section.
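
      A simplified userspace analogue of that change (pthreads; the names
      below are illustrative, not the kernel's): the work is queued only
      after the hot lock has been dropped, so no other lock is taken
      inside the critical section.

          #include <pthread.h>
          #include <stdbool.h>

          static pthread_mutex_t tk_lock = PTHREAD_MUTEX_INITIALIZER;
          static bool clock_was_set_pending;

          static void kick_notification_worker(void)
          {
                  /* stand-in for scheduling the delayed work */
          }

          void adjust_time(void)
          {
                  bool notify;

                  pthread_mutex_lock(&tk_lock);
                  /* ... update timekeeping state, note whether work is due ... */
                  notify = clock_was_set_pending;
                  clock_was_set_pending = false;
                  pthread_mutex_unlock(&tk_lock);

                  /* No locks held here, so the pool->lock -> pi_lock ->
                   * timekeeper_lock chain from the report can no longer close. */
                  if (notify)
                          kick_notification_worker();
          }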
      Reported-by: Gerlando Falauto <gerlando.falauto@keymile.com>
      Tested-by: Lin Ming <minggr@gmail.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: stable <stable@vger.kernel.org> #3.11, 3.10
      Link: http://lkml.kernel.org/r/1378943457-27314-1-git-send-email-john.stultz@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7bd36014
  11. 01 Sep 2013, 1 commit
    • nohz_full: Add full-system-idle state machine · 0edd1b17
      Paul E. McKenney authored
      This commit adds the state machine that takes the per-CPU idle data
      as input and produces a full-system-idle indication as output.  This
      state machine is driven out of RCU's quiescent-state-forcing
      mechanism, which invokes rcu_sysidle_check_cpu() to collect per-CPU
      idle state and then rcu_sysidle_report() to drive the state machine.
      
      The full-system-idle state is sampled using rcu_sys_is_idle(), which
      also drives the state machine if RCU is idle (and does so by forcing
      RCU to become non-idle).  This function returns true if all but the
      timekeeping CPU (tick_do_timer_cpu) are idle and have been idle long
      enough to avoid memory contention on the full_sysidle_state state
      variable.  The rcu_sysidle_force_exit() may be called externally
      to reset the state machine back into non-idle state.
      
      For large systems the state machine is driven out of RCU's
      force-quiescent-state logic, which provides good scalability at the price
      of millisecond-scale latencies on the transition to full-system-idle
      state.  This is not so good for battery-powered systems, which are usually
      small enough that they don't need to care about scalability, but which
      do care deeply about energy efficiency.  Small systems therefore drive
      the state machine directly out of the idle-entry code.  The number of
      CPUs in a "small" system is defined by a new NO_HZ_FULL_SYSIDLE_SMALL
      Kconfig parameter, which defaults to 8.  Note that this is a build-time
      definition.
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
      [ paulmck: Use true and false for boolean constants per Lai Jiangshan. ]
      Reviewed-by: Josh Triplett <josh@joshtriplett.org>
      [ paulmck: Simplify logic and provide better comments for memory barriers,
        based on review comments and questions by Lai Jiangshan. ]
      0edd1b17
  12. 29 Aug 2013, 1 commit
  13. 23 Aug 2013, 1 commit
    • ntp: Make periodic RTC update more reliable · a97ad0c4
      Miroslav Lichvar authored
      The current code requires that the scheduled update of the RTC happens
      in the closest tick to the half of the second. This seems to be
      difficult to achieve reliably. The scheduled work may be missing the
      target time by a tick or two and be constantly rescheduled every second.
      
      Relax the limit to 10 ticks. As a typical RTC drifts in the 11-minute
      update interval by several milliseconds, this shouldn't affect the
      overall accuracy of the RTC much.
      Signed-off-by: Miroslav Lichvar <mlichvar@redhat.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      a97ad0c4
  14. 19 Aug 2013, 1 commit
  15. 16 Aug 2013, 1 commit
  16. 14 Aug 2013, 3 commits
    • nohz: Optimize full dynticks's sched hooks with static keys · d13508f9
      Frederic Weisbecker authored
      Scheduler IPIs and task context switches are serious fast paths.
      Let's try to hide as much as we can the impact of the full
      dynticks APIs' off case on these call sites
      through the use of static keys.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      d13508f9
    • nohz: Optimize full dynticks state checks with static keys · 460775df
      Frederic Weisbecker authored
      These APIs are frequently accessed and priority is given
      to optimizing the full dynticks off case in order to let
      distros enable this feature without suffering from
      significant performance regressions.
      
      Let's inline these APIs and optimize them with static keys.
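
      As a rough kernel-style sketch of the technique (the key and helper
      names below are made up for illustration, not the identifiers added
      by this commit), a static key keeps the disabled case on a single
      patched-out branch:

          #include <linux/jump_label.h>

          /* off by default; enabling it patches the branch sites at runtime */
          struct static_key nohz_full_enabled_key = STATIC_KEY_INIT_FALSE;

          static inline bool my_nohz_full_enabled(void)
          {
                  if (static_key_false(&nohz_full_enabled_key))
                          return true;    /* slow path, only when enabled */
                  return false;           /* fast path: no memory load or compare */
          }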
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      460775df
    • nohz: Rename a few state variables · 73867dcd
      Frederic Weisbecker authored
      Rename the full dynticks' cpumask and cpumask state variables
      to some more exportable names.
      
      These will be used later from global headers to optimize
      the main full dynticks APIs in conjunction with static keys.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      73867dcd
  17. 13 Aug 2013, 2 commits
    • context_tracking: Remove full dynticks' hacky dependency on wide context tracking · d84d27a4
      Frederic Weisbecker authored
      Now that the full dynticks subsystem only enables the context tracking
      on full dynticks CPUs, let's remove the dependency on CONTEXT_TRACKING_FORCE.
      
      This dependency was a hack to enable the context tracking widely for the
      full dynticks subsystem until the latter becomes able to enable it in a
      more fine-grained, per-CPU fashion.
      
      Now CONTEXT_TRACKING_FORCE only stands for testing on archs that
      work on support for the context tracking while full dynticks can't be
      used yet due to unmet dependencies. It simulates a system where all CPUs
      are full dynticks so that RCU user extended quiescent states and dynticks
      cputime accounting can be tested on the given arch.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      d84d27a4
    • nohz: Only enable context tracking on full dynticks CPUs · 2e709338
      Frederic Weisbecker authored
      The context tracking subsystem has the ability to selectively
      enable the tracking on any defined subset of CPUs. This means that
      we can define a CPU range that doesn't run the context tracking
      and another range that does.
      
      Now what we want in practice is to enable the tracking on full
      dynticks CPUs only. In order to perform this, we just need to pass
      our full dynticks CPU range selection from the full dynticks
      subsystem to the context tracking.
      
      This way we can spare the overhead of RCU user extended quiescent
      state and vtime maintenance on the CPUs that are outside the
      full dynticks range. Just keep in mind the raw context tracking
      itself is still necessary everywhere.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      2e709338
  18. 31 Jul 2013, 4 commits
    • sched_clock: Add support for >32 bit sched_clock · e7e3ff1b
      Stephen Boyd authored
      The ARM architected system counter has at least 56 usable bits.
      Add support for counters with more than 32 bits to the generic
      sched_clock implementation so we can increase the time between
      wakeups due to dealing with wrap-around on these devices while
      benefiting from the irqtime accounting and suspend/resume
      handling that the generic sched_clock code already has. On my
      system using 56 bits over 32 bits changes the wraparound time
      from a few minutes to an hour. For faster running counters (GHz
      range) this is even more important because we may not be able to
      execute the timer in time to deal with the wraparound if only 32
      bits are used.
      
      We choose a maxsec value of 3600 seconds because we assume no
      system will go idle for more than an hour. In the future we may
      need to increase this value.
      
      Note: All users should switch over to the 64-bit read function so
      we can remove setup_sched_clock() in favor of sched_clock_register().
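
      For a sense of the numbers, a small stand-alone calculation (plain C;
      the 24 MHz rate is only an example, not the system mentioned above):

          #include <stdio.h>

          int main(void)
          {
                  const double rate_hz = 24e6;                         /* example rate */
                  const double wrap32 = 4294967296.0 / rate_hz;        /* 2^32 ticks */
                  const double wrap56 = 72057594037927936.0 / rate_hz; /* 2^56 ticks */

                  /* ~179 s for 32 bits vs. roughly 95 years for 56 bits; the
                   * generic code additionally caps the wakeup interval at
                   * maxsec = 3600 seconds. */
                  printf("32-bit wrap: %.0f s, 56-bit wrap: %.0f s\n", wrap32, wrap56);
                  return 0;
          }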
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      e7e3ff1b
    • sched_clock: Use an hrtimer instead of timer · a08ca5d1
      Stephen Boyd authored
      In the next patch we're going to increase the number of bits that
      the generic sched_clock can handle to be greater than 32. With
      more than 32 bits the wraparound time can be larger than what can
      fit into the units that msecs_to_jiffies takes (unsigned int).
      Luckily, the wraparound is initially calculated in nanoseconds
      which we can easily use with hrtimers, so switch to using an
      hrtimer.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      [jstultz: Fixup hrtimer initialization order issue]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      a08ca5d1
    • sched_clock: Use seqcount instead of rolling our own · 85c3d2dd
      Stephen Boyd authored
      We're going to increase the cyc value to 64 bits in the near
      future. Doing that is going to break the custom seqcount
      implementation in the sched_clock code because 64 bit numbers
      aren't guaranteed to be atomic. Replace the cyc_copy with a
      seqcount to avoid this problem.
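
      A hedged sketch of the reader side of that pattern, using the kernel
      seqcount API (cd and epoch_cyc are illustrative names, not necessarily
      the sched_clock code's exact fields):

          unsigned seq;
          u64 cyc;

          /* Retry if a writer updated the clock data concurrently, so a torn
           * read of the 64-bit cycle value can never be observed. */
          do {
                  seq = read_seqcount_begin(&cd.seq);
                  cyc = cd.epoch_cyc;
          } while (read_seqcount_retry(&cd.seq, seq));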
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      85c3d2dd
    • clocksource: Extract max nsec calculation into separate function · 87d8b9eb
      Stephen Boyd authored
      We need to calculate the same number in the clocksource code and
      the sched_clock code, so extract this code into its own function.
      We also drop the min_t and just use min() because the two types
      are the same.
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      87d8b9eb
  19. 29 Jul 2013, 1 commit
    • Revert "cpuidle: Quickly notice prediction failure for repeat mode" · 14851912
      Rafael J. Wysocki authored
      Revert commit 69a37bea (cpuidle: Quickly notice prediction failure for
      repeat mode), because it has been identified as the source of a
      significant performance regression in v3.8 and later as explained by
      Jeremy Eder:
      
        We believe we've identified a particular commit to the cpuidle code
        that seems to be impacting performance of variety of workloads.
        The simplest way to reproduce is using netperf TCP_RR test, so
        we're using that, on a pair of Sandy Bridge based servers.  We also
        have data from a large database setup where performance is also
        measurably/positively impacted, though that test data isn't easily
        share-able.
      
        Included below are test results from 3 test kernels:
      
        kernel       reverts
        -----------------------------------------------------------
        1) vanilla   upstream (no reverts)
      
        2) perfteam2 reverts e11538d1
      
        3) test      reverts 69a37bea
                             e11538d1
      
        In summary, netperf TCP_RR numbers improve by approximately 4%
        after reverting 69a37bea.  When
        69a37bea is included, C0 residency
        never seems to get above 40%.  Taking that patch out gets C0 near
        100% quite often, and performance increases.
      
        The below data are histograms representing the %c0 residency @
        1-second sample rates (using turbostat), while under netperf test.
      
        - If you look at the first 4 histograms, you can see %c0 residency
          almost entirely in the 30,40% bin.
        - The last pair, which reverts 69a37bea,
          shows %c0 in the 80,90,100% bins.
      
        Below each kernel name are netperf TCP_RR trans/s numbers for the
        particular kernel that can be disclosed publicly, comparing the 3
        test kernels.  We ran a 4th test with the vanilla kernel where
        we've also set /dev/cpu_dma_latency=0 to show overall impact
        boosting single-threaded TCP_RR performance over 11% above
        baseline.
      
        3.10-rc2 vanilla RX + c0 lock (/dev/cpu_dma_latency=0):
        TCP_RR trans/s 54323.78
      
        -----------------------------------------------------------
        3.10-rc2 vanilla RX (no reverts)
        TCP_RR trans/s 48192.47
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
         30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     1]: *
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
         40.0000 -    50.0000 [    49]: *************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 perfteam2 RX (reverts commit
        e11538d1)
        TCP_RR trans/s 49698.69
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
         30.0000 -    40.0000 [    59]: ***********************************************************
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        Sender %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [     2]: **
         40.0000 -    50.0000 [    58]: **********************************************************
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [     0]:
      
        -----------------------------------------------------------
        3.10-rc2 test RX (reverts 69a37bea
        and e11538d1)
        TCP_RR trans/s 47766.95
      
        Receiver %c0
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     1]: *
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    27]: ***************************
           40.0000 -    50.0000 [     2]: **
           50.0000 -    60.0000 [     0]:
           60.0000 -    70.0000 [     2]: **
           70.0000 -    80.0000 [     0]:
           80.0000 -    90.0000 [     0]:
           90.0000 -   100.0000 [    28]: ****************************
      
        Sender:
            0.0000 -    10.0000 [     1]: *
           10.0000 -    20.0000 [     0]:
           20.0000 -    30.0000 [     0]:
           30.0000 -    40.0000 [    11]: ***********
           40.0000 -    50.0000 [     0]:
           50.0000 -    60.0000 [     1]: *
           60.0000 -    70.0000 [     0]:
           70.0000 -    80.0000 [     3]: ***
           80.0000 -    90.0000 [     7]: *******
           90.0000 -   100.0000 [    38]: **************************************
      
        These results demonstrate gaining back the tendency of the CPU to
        stay in more responsive, performant C-states (and thus yield
        measurably better performance), by reverting commit
        69a37bea.
      Requested-by: Jeremy Eder <jeder@redhat.com>
      Tested-by: Len Brown <len.brown@intel.com>
      Cc: 3.8+ <stable@vger.kernel.org>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
      14851912
  20. 25 Jul 2013, 2 commits
    • nohz: fix compile warning in tick_nohz_init() · ca06416b
      Li Zhong authored
      cpu is not used after commit 5b8621a6.
      Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      ca06416b
    • nohz: Do not warn about unstable tsc unless user uses nohz_full · 543487c7
      Steven Rostedt authored
      If the user enables CONFIG_NO_HZ_FULL and runs the kernel on a machine
      with an unstable TSC, it will produce a WARN_ON dump as well as taint
      the kernel. This is a bit extreme for a kernel that just enables a
      feature but doesn't use it.
      
      The warning should only happen if the user tries to use the feature by
      either adding nohz_full to the kernel command line, or by enabling
      CONFIG_NO_HZ_FULL_ALL that makes nohz used on all CPUs at boot up. Note,
      this second feature should not (yet) be used by distros or anyone that
      doesn't care if NO_HZ is used or not.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Kevin Hilman <khilman@linaro.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      543487c7
  21. 23 Jul 2013, 2 commits
  22. 15 Jul 2013, 1 commit
    • kernel: delete __cpuinit usage from all core kernel files · 0db0628d
      Paul Gortmaker authored
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the uses of the __cpuinit macros from C files in
      the core kernel directories (kernel, init, lib, mm, and include)
      that don't really have a specific maintainer.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      0db0628d
  23. 12 Jul 2013, 1 commit
    • tick: broadcast: Check broadcast mode on CPU hotplug · a272dcca
      Stephen Boyd authored
      On ARM systems the dummy clockevent is registered with the cpu
      hotplug notifier chain before any other per-cpu clockevent. This
      has the side-effect of causing the dummy clockevent to be
      registered first in every hotplug sequence. Because the dummy is
      first, we'll try to turn the broadcast source on but the code in
      tick_device_uses_broadcast() assumes the broadcast source is in
      periodic mode and calls tick_broadcast_start_periodic()
      unconditionally.
      
      On boot this isn't a problem because we typically haven't
      switched into oneshot mode yet (if at all). During hotplug, if
      the broadcast source isn't in periodic mode we'll replace the
      broadcast oneshot handler with the broadcast periodic handler and
      start emulating oneshot mode when we shouldn't. Due to the way
      the broadcast oneshot handler programs the next_event it's
      possible for it to contain KTIME_MAX and cause us to hang the
      system when the periodic handler tries to program the next tick.
      Fix this by using the appropriate function to start the broadcast
      source.
      Reported-by: Stephen Warren <swarren@nvidia.com>
      Tested-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Cc: Mark Rutland <Mark.Rutland@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: ARM kernel mailing list <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Joseph Lo <josephl@nvidia.com>
      Link: http://lkml.kernel.org/r/20130711140059.GA27430@codeaurora.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a272dcca
  24. 05 Jul 2013, 1 commit
    • clocksource: Reselect clocksource when watchdog validated high-res capability · 332962f2
      Thomas Gleixner authored
      Up to commit 5d33b883 (clocksource: Always verify highres capability)
      there was no sanity check when selecting a clocksource, so nothing
      prevented a non highres capable clocksource from being used when the
      system had already switched to highres/nohz mode.
      
      The new sanity check works as Alex and Tim found out. It prevents the
      TSC from being used. This happens because on x86 the boot process
      looks like this:
      
       tsc_start_freqency_validation(TSC);
       clocksource_register(HPET);
       clocksource_done_booting();
      	clocksource_select()
      		Selects HPET which is valid for high-res
      
       switch_to_highres();
      
       clocksource_register(TSC);
       	TSC is not selected, because it is not yet
      	flagged as VALID_HIGH_RES
      
       clocksource_watchdog()
      	Validates TSC for highres, but that does not make TSC
      	the current clocksource.
      
      Before the sanity check was added, we installed TSC unvalidated which
      worked most of the time. If the TSC was really detected as unstable,
      then the unstable logic removed it and installed HPET again.
      
      The sanity check is correct and needed. So the watchdog needs to kick
      a reselection of the clocksource, when it qualifies TSC as a valid
      high res clocksource.
      
      To solve this, we mark the clocksource which got the flag
      CLOCK_SOURCE_VALID_FOR_HRES set by the watchdog with a new flag
      CLOCK_SOURCE_RESELECT and trigger the watchdog thread. The watchdog
      thread evaluates the flag and invokes clocksource_select() when set.
      
      To avoid that the clocksource_done_booting() code, which is about to
      install the first real clocksource anyway, needs to go through
      clocksource_select and tick_oneshot_notify() pointlessly, split out
      the clocksource_watchdog_kthread() list walk code and invoke the
      select/notify only when called from clocksource_watchdog_kthread().
      
      So clocksource_done_booting() can utilize the same splitout code
      without the select/notify invocation and the clocksource_mutex
      unlock/relock dance.
      Reported-and-tested-by: Alex Shi <alex.shi@intel.com>
      Cc: Hans Peter Anvin <hpa@linux.intel.com>
      Cc: Tim Chen <tim.c.chen@linux.intel.com>
      Cc: Andi Kleen <andi.kleen@intel.com>
      Tested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Davidlohr Bueso <davidlohr.bueso@hp.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307042239150.11637@ionos.tec.linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      332962f2
  25. 02 Jul 2013, 3 commits
    • tick: Sanitize broadcast control logic · 07bd1172
      Thomas Gleixner authored
      The recent implementation of a generic dummy timer resulted in a
      different registration order of per cpu local timers which made the
      broadcast control logic go belly up.
      
      If the dummy timer is the first clock event device which is registered
      for a CPU, then it is installed, the broadcast timer is initialized
      and the CPU is marked as broadcast target.
      
      If a real clock event device is installed after that, we can fail to
      take the CPU out of the broadcast mask. In the worst case we end up
      with two periodic timer events firing for the same CPU. One from the
      per cpu hardware device and one from the broadcast.
      
      Now the problem is that we have no way to distinguish whether the
      system is in a state which makes broadcasting necessary or the
      broadcast bit was set due to the nonfunctional dummy timer
      installment.
      
      To solve this we need to keep track of the system state separately and
      provide a more detailed decision logic whether we keep the CPU in
      broadcast mode or not.
      
      The old decision logic only clears the broadcast mode, if the newly
      installed clock event device is not affected by power states.
      
      The new logic clears the broadcast mode if one of the following is
      true:
      
        - The new device is not affected by power states.
      
        - The system is not in a power state affected mode
      
        - The system has switched to oneshot mode. The oneshot broadcast is
          controlled from the deep idle state. The CPU is not in idle at
          this point, so it's safe to remove it from the mask.
      
      If we clear the broadcast bit for the CPU when a new device is
      installed, we also shutdown the broadcast device when this was the
      last CPU in the broadcast mask.
      
      If the broadcast bit is kept, then we leave the new device in shutdown
      state and rely on the broadcast to deliver the timer interrupts via
      the broadcast ipis.
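
      As a compact restatement of the decision logic above, a hedged sketch
      in plain C (parameter names are illustrative, not the kernel's
      identifiers):

          #include <stdbool.h>

          /* The CPU stays in broadcast mode only if none of the three
           * clearing conditions listed above holds. */
          static bool keep_cpu_in_broadcast(bool dev_stops_in_deep_idle,
                                            bool system_in_power_state_affected_mode,
                                            bool system_in_oneshot_mode)
          {
                  if (!dev_stops_in_deep_idle)              /* device unaffected by power states */
                          return false;
                  if (!system_in_power_state_affected_mode) /* no power state problem right now */
                          return false;
                  if (system_in_oneshot_mode)               /* deep idle broadcast handles it */
                          return false;
                  return true;
          }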
      Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
      Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
      Cc: John Stultz <john.stultz@linaro.org>,
      Cc: Mark Rutland <mark.rutland@arm.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      07bd1172
    • tick: Prevent uncontrolled switch to oneshot mode · 1f73a980
      Thomas Gleixner authored
      When the system switches from periodic to oneshot mode, the broadcast
      logic causes a possibility that a CPU which has not yet switched to
      oneshot mode puts its own clock event device into oneshot mode without
      updating the state and the timer handler.
      
      CPU0				CPU1
      				per cpu tickdev is in periodic mode
      				and switched to broadcast
      
      Switch to oneshot mode
       tick_broadcast_switch_to_oneshot()
        cpumask_copy(tick_oneshot_broacast_mask,
      	       tick_broadcast_mask);
      
        broadcast device mode = oneshot
      
      				Timer interrupt
      						
      				irq_enter()
      				 tick_check_oneshot_broadcast()
      				  dev->set_mode(ONESHOT);
      
      				tick_handle_periodic()
      				 if (dev->mode == ONESHOT)
      				   dev->next_event += period;
      				   FAIL.
      
      We fail, because dev->next_event contains KTIME_MAX, if the device was
      in periodic mode before the uncontrolled switch to oneshot happened.
      
      We must copy the broadcast bits over to the oneshot mask, because
      otherwise a CPU which relies on the broadcast would not be woken up
      anymore after the broadcast device switched to oneshot mode.
      
      So we need to verify in tick_check_oneshot_broadcast() whether the CPU
      has already switched to oneshot mode. If not, leave the device
      untouched and let the CPU perform a controlled switch into oneshot mode.
      
      This is a long standing bug, which was never noticed, because the main
      user of the broadcast, x86, cannot run into that scenario, AFAICT. The
      nonarchitected timer mess of ARM creates a gazillion of differently
      broken abominations which trigger the shortcomings of that broadcast
      code, which better had never been necessary in the first place.
      Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
      Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
      Cc: John Stultz <john.stultz@linaro.org>,
      Cc: Mark Rutland <mark.rutland@arm.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
      Cc: stable@vger.kernel.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      1f73a980
    • tick: Make oneshot broadcast robust vs. CPU offlining · c9b5a266
      Thomas Gleixner authored
      In periodic mode we remove offline cpus from the broadcast propagation
      mask. In oneshot mode we fail to do so. This was not a problem so far,
      but the recent changes to the broadcast propagation introduced a
      constellation which can result in a NULL pointer dereference.
      
      What happens is:
      
      CPU0			CPU1
      			idle()
      			  arch_idle()
      			    tick_broadcast_oneshot_control(OFF);
      			      set cpu1 in tick_broadcast_force_mask
      			  if (cpu_offline())
      			     arch_cpu_dead()
      
      cpu_dead_cleanup(cpu1)
       cpu1 tickdevice pointer = NULL
      
      broadcast interrupt
        dereference cpu1 tickdevice pointer -> OOPS
      
      We dereference the pointer because cpu1 is still set in
      tick_broadcast_force_mask and tick_do_broadcast() expects a valid
      cpumask and therefore lacks any further checks.
      
      Remove the cpu from the tick_broadcast_force_mask before we set the
      tick device pointer to NULL. Also add a sanity check to the oneshot
      broadcast function, so we can detect such issues w/o crashing the
      machine.
      Reported-by: Prarit Bhargava <prarit@redhat.com>
      Cc: athorlton@sgi.com
      Cc: CAI Qian <caiqian@redhat.com>
      Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1306261303260.4013@ionos.tec.linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c9b5a266