1. 05 Apr, 2013 · 5 commits
  2. 26 Mar, 2013 · 1 commit
  3. 25 Mar, 2013 · 1 commit
  4. 23 Mar, 2013 · 7 commits
  5. 16 Mar, 2013 · 1 commit
    • timekeeping: utilize the suspend-nonstop clocksource to count suspended time · e445cf1c
      Feng Tang authored
      There are some new processors whose TSC clocksource won't stop during
      suspend. Currently, after the system resumes, the kernel uses the
      persistent clock or the RTC to compensate for the sleep time, but with
      these nonstop clocksources we can skip that compensation from external
      sources and just use the current clocksource to account the suspended
      time.

      This can solve time drift bugs caused by not-so-accurate or error-prone
      RTC devices.

      The current way to count suspended time is to first try the persistent
      clock, and then fall back to the RTC if the persistent clock can't be
      used. This patch changes the order to:
          suspend-nonstop clocksource -> persistent clock -> RTC

      When counting the sleep time with a nonstop clocksource, use an accurate
      method suggested by Jason Gunthorpe to cover very large delta cycles (a
      sketch of that conversion follows after this entry).
      Signed-off-by: Feng Tang <feng.tang@intel.com>
      [jstultz: Small optimization, avoiding re-reading the clocksource]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      e445cf1c
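      A minimal sketch of the overflow-safe cycles-to-nanoseconds conversion
      hinted at above (the chunking approach attributed to Jason Gunthorpe).
      The function name and placement are illustrative assumptions, not the
      actual timekeeping code; only the arithmetic idea is taken from the
      commit message, and the kernel's div64 helpers are assumed.

      /* Convert a possibly huge cycle delta to nanoseconds without
       * overflowing 64 bits: the naive (delta * mult) >> shift breaks once
       * delta * mult exceeds 2^64, so consume the delta in chunks that fit.
       */
      static u64 sleep_cycles_to_nsec(u64 cycle_delta, u32 mult, u32 shift)
      {
              u64 max = ULLONG_MAX;   /* largest chunk whose product with mult fits */
              u64 nsec = 0;

              do_div(max, mult);

              if (cycle_delta > max) {
                      u64 chunks = div64_u64(cycle_delta, max);

                      nsec = (((u64)max * mult) >> shift) * chunks;
                      cycle_delta -= chunks * max;
              }
              nsec += ((u64)cycle_delta * mult) >> shift;

              return nsec;
      }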
  6. 13 Mar, 2013 · 3 commits
    • tick: Provide a check for a forced broadcast pending · eaa907c5
      Thomas Gleixner authored
      On the CPU which gets woken along with the target CPU of the broadcast
      the following happens:
      
        deep_idle()
      			<-- spurious wakeup
        broadcast_exit()
          set forced bit
        
        enable interrupts
          
      			<-- Nothing happens
      
        disable interrupts
      
        broadcast_enter()
      			<-- Here we observe the forced bit is set
        deep_idle()
      
      Now after that the target CPU of the broadcast runs the broadcast
      handler and finds the other CPU in both the broadcast and the forced
      mask, sends the IPI and stuff gets back to normal.
      
      So it's not actually harmful, just more evidence for the theory that
      hardware designers have access to very special drug supplies.

      Now there is no point in going back to deep idle just to wake up again
      right away via an IPI. Provide a check which allows the idle code to
      avoid the deep idle transition (a sketch of how an idle path could use
      such a check follows after this entry).
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Cc: Jason Liu <liu.h.jason@gmail.com>
      Link: http://lkml.kernel.org/r/20130306111537.565418308@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      eaa907c5
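      A hedged sketch of how an idle entry path could consume such a check.
      The helper name tick_check_broadcast_expired() and the surrounding idle
      state handling are assumptions made for illustration and may not match
      the patch verbatim.

      /* Idle entry: if the forced-broadcast bit for this CPU is already set,
       * the broadcast IPI is imminent, so a deep idle transition would be
       * undone right away. Pick a shallow state instead.
       */
      static int choose_idle_state(int deep_state, int shallow_state)
      {
              if (tick_check_broadcast_expired())     /* assumed helper from this patch */
                      return shallow_state;

              return deep_state;
      }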
    • tick: Handle broadcast wakeup of multiple cpus · 989dcb64
      Thomas Gleixner authored
      Some brilliant hardware implementations wake multiple cores when the
      broadcast timer fires. This leads to the following interesting
      problem:
      
      CPU0				CPU1
      wakeup from idle		wakeup from idle
      
      leave broadcast mode		leave broadcast mode
       restart per cpu timer		 restart per cpu timer
       	     	 		go back to idle
      handle broadcast
       (empty mask)			
      				enter broadcast mode
      				program broadcast device
      enter broadcast mode
      program broadcast device
      
      So what happens is that, due to the forced reprogramming of the cpu
      local timer, we need to set an event in the future. Now if we manage to
      go back to idle before the timer fires, we switch off the timer and
      arm the broadcast device with an already expired time (covered by
      forced mode). So in the worst case we repeat the above ping pong
      forever.

      Unfortunately we have no information about what caused the wakeup, but
      we can check the current time against the expiry time of the local cpu.
      If the local event is already in the past, we know that the broadcast
      timer is about to fire and send an IPI. So we mark ourselves as an IPI
      target even if we left broadcast mode, and avoid reprogramming the
      local cpu timer (a sketch of this check follows after this entry).
      
      This still leaves the possibility that a CPU which is not handling the
      broadcast interrupt is going to reach idle again before the IPI
      arrives. This can't be solved in the core code and will be handled in
      follow up patches.
      Reported-by: Jason Liu <liu.h.jason@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Link: http://lkml.kernel.org/r/20130306111537.492045206@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      989dcb64
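      A hedged sketch of the "is the broadcast IPI already on its way?" test
      described above. The mask name tick_broadcast_pending_mask and the
      exact call site are assumptions for illustration only.

      /* Called when a CPU leaves broadcast mode on wakeup. If the local event
       * was already due, the broadcast timer is firing right now and will IPI
       * us, so mark this CPU as a pending IPI target and tell the caller to
       * skip reprogramming the local timer.
       */
      static bool broadcast_ipi_imminent(struct clock_event_device *dev, int cpu)
      {
              if (ktime_to_ns(dev->next_event) <= ktime_to_ns(ktime_get())) {
                      cpumask_set_cpu(cpu, tick_broadcast_pending_mask);  /* assumed mask name */
                      return true;
              }
              return false;
      }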
    • tick: Avoid programming the local cpu timer if broadcast pending · 26517f3e
      Thomas Gleixner authored
      If the local cpu timer stops in deep idle, we arm the broadcast device
      and get woken by an IPI. Now when we return from deep idle we reenable
      the local cpu timer unconditionally before handling the IPI. But
      that's a pointless exercise: the timer is already expired and the IPI
      is on the way. And it's an expensive exercise as we use the forced
      reprogramming mode so that we do not lose a timer event. This forced
      reprogramming will loop at least once in the retry.
      
      To avoid this reprogramming, we mark the cpu in a pending bit mask
      before we send the IPI. Now when the IPI target cpu wakes up, it will
      see the pending bit set and skip the reprogramming. The reprogramming
      of the cpu local timer will happen in the IPI handler, which runs the
      cpu local timer interrupt function (a sketch of the wakeup-side check
      follows after this entry).
      Reported-by: Jason Liu <liu.h.jason@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: LAK <linux-arm-kernel@lists.infradead.org>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Arjan van de Veen <arjan@infradead.org>
      Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
      Link: http://lkml.kernel.org/r/20130306111537.431082074@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      26517f3e
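      A hedged sketch of the wakeup side described above: the woken CPU checks
      the pending bit set by the broadcast sender and skips the forced
      reprogramming. The mask and helper names are illustrative assumptions.

      /* On return from deep idle: if our bit is set in the pending mask, the
       * event is already expired and the IPI handler will run the timer
       * interrupt code and program the next event, so a forced reprogram here
       * would only spin in its retry loop.
       */
      static void local_timer_resume(struct clock_event_device *dev, int cpu)
      {
              if (cpumask_test_and_clear_cpu(cpu, tick_broadcast_pending_mask))
                      return; /* broadcast IPI pending: leave it to the IPI handler */

              tick_program_event(dev->next_event, 1); /* otherwise reprogram as before */
      }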
  7. 07 Mar, 2013 · 3 commits
  8. 22 Feb, 2013 · 2 commits
    • Revert "nohz: Make tick_nohz_irq_exit() irq safe" · af7bdbaf
      Thomas Gleixner authored
      This reverts commit 351429b2e62b6545bb10c756686393f29ba268a1. The
      extra local_irq_save() is no longer needed as the call site now
      always calls with interrupts disabled.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linuxfoundation.org>
      af7bdbaf
    • nohz: Make tick_nohz_irq_exit() irq safe · e5ab012c
      Frederic Weisbecker authored
      As it stands, irq_exit() may or may not be called with
      irqs disabled, depending on __ARCH_IRQ_EXIT_IRQS_DISABLED,
      which the arch can define.

      That makes tick_nohz_irq_exit() unsafe. For example, two
      interrupts can race in tick_nohz_stop_sched_tick(): the innermost
      one computes the expiry time from the timer list, then it's
      interrupted right before reprogramming the clock. The new interrupt
      enqueues a new timer list timer, reprograms the clock to take it
      into account and exits. The CPU then resumes the innermost interrupt
      and performs the clock reprogramming without considering the new
      timer list timer.

      This regression was introduced by:
           280f0677
           ("nohz: Separate out irq exit and idle loop dyntick logic")

      Let's fix it right now with the appropriate protections (a sketch
      follows after this entry).

      A saner long term solution will be to remove
      __ARCH_IRQ_EXIT_IRQS_DISABLED and mandate that irq_exit() is called
      with interrupts disabled.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Linus Torvalds <torvalds@linuxfoundation.org>
      Cc: <stable@vger.kernel.org> #v3.2+
      Link: http://lkml.kernel.org/r/1361373336-11337-1-git-send-email-fweisbec@gmail.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      e5ab012c
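      A hedged sketch of the protection described above: run the whole nohz
      irq-exit path with interrupts disabled so two interrupts can no longer
      race in the stop-tick logic. __tick_nohz_irq_exit() is an assumed name
      for the existing body of the function, not a quote of the patch.

      void tick_nohz_irq_exit(void)
      {
              unsigned long flags;

              /* Interrupts may still be enabled here depending on
               * __ARCH_IRQ_EXIT_IRQS_DISABLED, so disable them around the
               * expiry computation and clock reprogramming.
               */
              local_irq_save(flags);
              __tick_nohz_irq_exit();         /* assumed helper holding the old body */
              local_irq_restore(flags);
      }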
  9. 19 Feb, 2013 · 1 commit
  10. 13 Feb, 2013 · 1 commit
    • clockevents: Fix generic broadcast for FEAT_C3STOP · 5d1d9a29
      Mark Rutland authored
      Commit 12ad1000: "clockevents: Add generic timer broadcast function"
      made tick_device_uses_broadcast set up the generic broadcast function
      for dummy devices (where !tick_device_is_functional(dev)), but neglected
      to set up the broadcast function for devices that stop in low power
      states (with the CLOCK_EVT_FEAT_C3STOP flag).
      
      When these devices enter low power states they will not have the generic
      broadcast function assigned, and will bring down the system when an
      attempt is made to broadcast to them.
      
      This patch ensures that the broadcast function is also assigned for
      devices which require broadcast in low power states (a sketch of the
      condition follows after this entry).
      Reported-by: Stephen Warren <swarren@nvidia.com>
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Stephen Warren <swarren@nvidia.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Cc: nico@linaro.org
      Cc: Marc.Zyngier@arm.com
      Cc: Will.Deacon@arm.com
      Cc: santosh.shilimkar@ti.com
      Cc: john.stultz@linaro.org
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5d1d9a29
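      A hedged sketch of the fix described above; the wrapper function is
      illustrative, only the C3STOP condition itself is taken from the commit
      message.

      /* Assign the generic broadcast handler both to dummy devices and to
       * devices that stop in deep power states, so broadcasting to them
       * never hits a NULL dev->broadcast.
       */
      static void assign_generic_broadcast(struct clock_event_device *dev)
      {
              if (!tick_device_is_functional(dev) ||
                  (dev->features & CLOCK_EVT_FEAT_C3STOP))
                      dev->broadcast = tick_broadcast;        /* generic broadcast function */
      }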
  11. 09 Feb, 2013 · 1 commit
    • time, Fix setting of hardware clock in NTP code · 84e345e4
      Prarit Bhargava authored
      At init time, if the system time is "warped" forward in warp_clock()
      it will differ from the hardware clock by sys_tz.tz_minuteswest.  This time
      difference is not taken into account when ntp updates the hardware clock,
      and this causes the system time to jump forward by this offset every reboot.
      
      The kernel must take this offset into account when writing the system time
      to the hardware clock in the ntp code.  This patch adds
      persistent_clock_is_local, which indicates that an offset has been applied
      in warp_clock(), and accounts for the "warp" before writing the hardware
      clock (a sketch of this correction follows after this entry).

      x86 does not have this problem as rtc writes are software limited to a
      +/-15 minute window relative to the current rtc time.  Other arches, such
      as powerpc, however, do a full synchronization of the system time to the
      rtc and will see this problem.
      
      [v2]: generated against tip/timers/core
      Signed-off-by: Prarit Bhargava <prarit@redhat.com>
      Cc: John Stultz <john.stultz@linaro.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      84e345e4
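      A hedged sketch of the correction described above, as it could look in
      the path that syncs the system time to the hardware clock. The
      surrounding function is illustrative; only the persistent_clock_is_local
      / tz_minuteswest idea comes from the commit message.

      static void sync_hardware_clock(struct timespec now)
      {
              /* If warp_clock() shifted the system time by the timezone offset
               * at boot, the hardware clock runs in local time. Remove that
               * offset before writing, otherwise the system time jumps forward
               * by tz_minuteswest on every reboot.
               */
              if (persistent_clock_is_local)
                      now.tv_sec -= sys_tz.tz_minuteswest * 60;

              update_persistent_clock(now);
      }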
  12. 01 Feb, 2013 · 2 commits
  13. 30 Jan, 2013 · 1 commit
  14. 28 Jan, 2013 · 1 commit
    • cputime: Allow dynamic switch between tick/virtual based cputime accounting · 3f4724ea
      Frederic Weisbecker authored
      Allow dynamically switching between tick based and virtual based
      cputime accounting. This way we can provide a kind of "on-demand"
      virtual based cputime accounting. In this mode, the kernel relies
      on the context tracking subsystem to dynamically probe kernel
      boundaries.

      This is in preparation for being able to stop the timer tick in
      more places than just the idle state. Doing so will depend on
      CONFIG_VIRT_CPU_ACCOUNTING_GEN, which makes it possible to account
      the cputime without the tick by hooking on kernel/user boundaries.

      Depending on whether the tick is stopped or not, we can switch between
      tick and vtime based accounting at any time, in order to minimize the
      overhead associated with the user hooks (a sketch of the accounting
      decision follows after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Namhyung Kim <namhyung.kim@lge.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      3f4724ea
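      A hedged sketch of the on-demand accounting decision described above.
      The helper names and the exact switch point are assumptions for
      illustration; the commit only describes the idea of switching between
      tick based and vtime based accounting.

      static void account_cputime(struct task_struct *p, int user_tick)
      {
              if (vtime_accounting_enabled())
                      vtime_account_user(p);                  /* hooked at kernel/user boundaries */
              else
                      account_process_tick(p, user_tick);     /* classic tick based path */
      }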
  15. 17 Jan, 2013 · 1 commit
  16. 16 Jan, 2013 · 4 commits
    • timekeeping: Add CONFIG_HAS_PERSISTENT_CLOCK option · 05ad717c
      Feng Tang authored
      Make the persistent clock check a kernel config option, so that
      platforms can explicitly select it. Also make CONFIG_RTC_HCTOSYS and
      RTC_SYSTOHC depend on its absence, which prevents the persistent clock
      and RTC code from doing similar work twice during the system's
      init/suspend/resume phases.

      If CONFIG_HAS_PERSISTENT_CLOCK=n, nothing changes: the kernel still
      does the persistent clock check in timekeeping_init().
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Suggested-by: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Feng Tang <feng.tang@intel.com>
      [jstultz: Added dependency for RTC_SYSTOHC as well]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      05ad717c
    • timekeeping: Add persistent_clock_exist flag · 31ade306
      Feng Tang authored
      In the current kernel, there are several places which need to check
      whether the platform has a persistent clock. Currently the check is
      done by calling read_persistent_clock() and validating its return
      value.

      So one optimization is to do the check only once in timekeeping_init()
      and use a flag, persistent_clock_exist, to record it (a sketch follows
      after this entry).

      v2: Add a has_persistent_clock() helper function, as suggested by John.
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: John Stultz <john.stultz@linaro.org>
      Signed-off-by: Feng Tang <feng.tang@intel.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      31ade306
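      A hedged sketch of the cached check described above; the zero-timespec
      validation is a simplification for illustration and may not match the
      patch exactly.

      static bool persistent_clock_exist;

      static inline bool has_persistent_clock(void)
      {
              return persistent_clock_exist;
      }

      void __init timekeeping_init(void)
      {
              struct timespec now;

              read_persistent_clock(&now);
              /* Platforms without a persistent clock report a zero timespec */
              if (now.tv_sec || now.tv_nsec)
                      persistent_clock_exist = true;

              /* ... remaining timekeeping initialization ... */
      }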
    • NTP: Add a CONFIG_RTC_SYSTOHC configuration · 023f333a
      Jason Gunthorpe authored
      The purpose of this option is to allow ARM/etc systems that rely on the
      class RTC subsystem to have the same kind of automatic NTP based
      synchronization that we have on PC platforms. Today ARM does not
      implement update_persistent_clock and makes extensive use of the class
      RTC system.
      
      When enabled, CONFIG_RTC_SYSTOHC provides a generic
      rtc_update_persistent_clock that stores the current time in the RTC and
      is intended to complement the existing CONFIG_RTC_HCTOSYS option that
      loads the RTC at boot.

      As with RTC_HCTOSYS, the platform's update_persistent_clock is used
      first, if it works. Platforms with mixed class RTC and non-RTC drivers
      need to return ENODEV when class RTC should be used (a sketch of that
      convention follows after this entry). Such an update for PPC is
      included in this patch.

      Long term, implementations of update_persistent_clock should migrate to
      proper class RTC drivers and use CONFIG_RTC_SYSTOHC instead.
      
      Tested on ARM kirkwood and PPC405
      Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      023f333a
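      A hedged sketch of the mixed-platform convention described above. The
      have_legacy_rtc() and legacy_rtc_write() helpers are made-up
      placeholders; only the "return -ENODEV to defer to the class RTC path"
      idea comes from the commit message.

      int update_persistent_clock(struct timespec now)
      {
              /* A class RTC driver owns the hardware clock: report ENODEV so
               * the generic CONFIG_RTC_SYSTOHC code performs the write.
               */
              if (!have_legacy_rtc())                 /* hypothetical platform predicate */
                      return -ENODEV;

              return legacy_rtc_write(now);           /* hypothetical non-class-RTC path */
      }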
    • time: create __getnstimeofday for WARNless calls · 1e817fb6
      Kees Cook authored
      The pstore RAM backend can get called during resume, and must be defensive
      against a suspended time source. Expose getnstimeofday logic that returns
      an error instead of a WARN. This can be detected and the timestamp can
      be zeroed out (a sketch of such a caller follows after this entry).
      Reported-by: Doug Anderson <dianders@chromium.org>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Anton Vorontsov <anton.vorontsov@linaro.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      1e817fb6
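      A hedged sketch of how a caller such as the pstore RAM backend could use
      the WARN-less accessor; the wrapper name is illustrative, only the
      "error return instead of WARN, then zero the timestamp" behaviour is
      taken from the commit message.

      static u64 ram_record_timestamp_ns(void)
      {
              struct timespec ts;

              if (__getnstimeofday(&ts) != 0) {
                      /* Timekeeping is suspended: store a zero timestamp */
                      ts.tv_sec = 0;
                      ts.tv_nsec = 0;
              }
              return (u64)ts.tv_sec * NSEC_PER_SEC + ts.tv_nsec;
      }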
  17. 15 Jan, 2013 · 1 commit
  18. 25 Dec, 2012 · 1 commit
    • time: convert arch_gettimeoffset to a pointer · 7b1f6207
      Stephen Warren authored
      Currently, whenever CONFIG_ARCH_USES_GETTIMEOFFSET is enabled, each
      arch core provides a single implementation of arch_gettimeoffset(). In
      many cases, different sub-architectures, different machines, or
      different timer providers exist, and so the arch ends up implementing
      arch_gettimeoffset() as a call-through-pointer anyway. Examples are
      ARM, Cris, and M68K, and it's arguable that the remaining architectures,
      M32R and Blackfin, should be doing this anyway.

      Modify arch_gettimeoffset so that it is itself a function pointer, which
      the arch initializes (a sketch of the before/after shape follows after
      this entry). This will allow later changes to move the initialization
      of this function into individual machine support or timer drivers. This
      is particularly useful for code in drivers/clocksource, which should
      rely on an arch-independent mechanism to register its implementation of
      arch_gettimeoffset().
      
      This patch also converts the Cris architecture to set arch_gettimeoffset
      directly to the final implementation in time_init(), because Cris already
      had separate time_init() functions per sub-architecture. M68K and ARM
      are converted to set arch_gettimeoffset to the final implementation in
      later patches, because they already have function pointers in place for
      this purpose.
      
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Mike Frysinger <vapier@gentoo.org>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Hirokazu Takata <takata@linux-m32r.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Jesper Nilsson <jesper.nilsson@axis.com>
      Acked-by: John Stultz <johnstul@us.ibm.com>
      Signed-off-by: Stephen Warren <swarren@nvidia.com>
      7b1f6207
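      A hedged sketch of the shape of the conversion described above; the
      default provider and the driver hook names are illustrative assumptions.

      /* Core: under CONFIG_ARCH_USES_GETTIMEOFFSET the offset provider is now
       * a pointer that machine or timer code fills in at init time.
       */
      static u32 default_gettimeoffset(void)
      {
              return 0;       /* no offset until a provider registers */
      }

      u32 (*arch_gettimeoffset)(void) = default_gettimeoffset;

      /* A machine/timer driver installs its provider during time_init() */
      static u32 my_timer_gettimeoffset(void)         /* illustrative */
      {
              return 0;       /* real drivers derive this from their hardware counter */
      }

      void __init my_machine_time_init(void)          /* illustrative */
      {
              arch_gettimeoffset = my_timer_gettimeoffset;
      }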
  19. 28 Nov, 2012 · 1 commit
  20. 18 Nov, 2012 · 2 commits
    • printk: Wake up klogd using irq_work · 74876a98
      Frederic Weisbecker authored
      klogd is woken up asynchronously from the tick in order
      to do it safely.

      However, if printk is called while the tick is stopped, the reader
      won't be woken up until the next interrupt, which might not fire
      for a while. As a result, the user may miss some messages.

      To fix this, let's implement the printk tick using a lazy irq work
      (a sketch follows after this entry). That subsystem takes care of the
      timer tick state and can fix things up accordingly.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      74876a98
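      A hedged sketch of a lazy irq_work based klogd wakeup along the lines
      described above. The IRQ_WORK_LAZY flag and the per-cpu layout follow
      the irq_work API of that era as best recalled and should be treated as
      assumptions, not a quote of the patch.

      extern wait_queue_head_t log_wait;      /* printk's reader wait queue */

      static void wake_up_klogd_work_func(struct irq_work *work)
      {
              wake_up_interruptible(&log_wait);
      }

      static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
              .func  = wake_up_klogd_work_func,
              .flags = IRQ_WORK_LAZY, /* run from the next tick; IPI only if the tick is stopped */
      };

      void wake_up_klogd(void)
      {
              if (waitqueue_active(&log_wait))
                      irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
      }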
    • irq_work: Don't stop the tick with pending works · 00b42959
      Frederic Weisbecker authored
      Don't stop the tick if we have pending irq works on the
      queue; otherwise, if the arch can't raise self-IPIs, we may not
      find an opportunity to execute the pending works for a while
      (a sketch of the guard follows after this entry).
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      00b42959
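      A hedged sketch of the tick-stop guard described above; the helper name
      irq_work_needs_cpu() and the surrounding condition are assumptions
      showing where such a check would sit.

      static bool can_stop_sched_tick(void)
      {
              /* If irq works are queued here and the arch cannot raise
               * self-IPIs, stopping the tick could delay them for a long
               * time. Keep the tick running so the next one runs the queue.
               */
              if (irq_work_needs_cpu())
                      return false;

              return true;    /* other nohz conditions omitted */
      }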