1. 26 July 2007 (2 commits)
    • Cache xtime every call to update_wall_time · 17c38b74
      Authored by john stultz
      This avoids xtime lag seen with dynticks, because while 'xtime' itself
      is still not updated often, we keep a 'xtime_cache' variable around that
      contains the approximate real-time that _is_ updated each time we do a
      'update_wall_time()', and is thus never off by more than one tick.
      
      IOW, this restores the original semantics for 'xtime' users, as long as
      you use the proper abstraction functions (ie 'current_kernel_time()' or
      'get_seconds()' depending on whether you want a timespec or just the
      seconds field).
      
      [ Updated Patch.  As penance for my sins I've also yanked another #ifdef
        that was added to avoid the xtime lag w/ hrtimers.  ]
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
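      The scheme described above boils down to keeping a second, frequently
      refreshed copy of the wall-clock time. A simplified, fragment-style sketch
      of that idea (not the actual patch; locking and the real update path are
      omitted):

        /* xtime may lag under NO_HZ, but xtime_cache is refreshed on every
         * update_wall_time() call, so readers that go through the helpers
         * are never more than one tick behind. */
        static struct timespec xtime;        /* updated only occasionally */
        static struct timespec xtime_cache;  /* refreshed each tick */

        static void update_xtime_cache(u64 nsec)
        {
                xtime_cache = xtime;
                timespec_add_ns(&xtime_cache, nsec);
        }

        struct timespec current_kernel_time(void)
        {
                return xtime_cache;          /* full timespec */
        }

        unsigned long get_seconds(void)
        {
                return xtime_cache.tv_sec;   /* just the seconds field */
        }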
    • Cleanup non-arch xtime uses, use get_seconds() or current_kernel_time(). · 2c6b47de
      Authored by john stultz
      This avoids use of the kernel-internal "xtime" variable directly outside
      of the actual time-related functions.  Instead, use the helper functions
      that we already have available to us.
      
      This doesn't actually change any behaviour, but this will allow us to
      fix the fact that "xtime" isn't updated very often with CONFIG_NO_HZ
      (because much of the realtime information is maintained as separate
      offsets to 'xtime'), which has caused interfaces that use xtime directly
      to get a time that is out of sync with the real-time clock by up to a
      third of a second or so.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
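      The shape of the conversion, shown at a hypothetical call site
      (illustrative only, not a hunk from this patch; the old_get_timestamp()
      and new_get_timestamp() names are made up):

        /* Before the cleanup: reads the internal variable directly and can
         * therefore lag the real-time clock under CONFIG_NO_HZ. */
        static unsigned long old_get_timestamp(void)
        {
                return xtime.tv_sec;
        }

        /* After the cleanup: the same information via the public helpers. */
        static unsigned long new_get_timestamp(void)
        {
                return get_seconds();
        }

        static struct timespec new_get_timestamp_ts(void)
        {
                return current_kernel_time();
        }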
  2. 22 July 2007 (2 commits)
  3. 17 July 2007 (1 commit)
  4. 10 May 2007 (1 commit)
    • Add suspend-related notifications for CPU hotplug · 8bb78442
      Authored by Rafael J. Wysocki
      Since nonboot CPUs are now disabled after tasks and devices have been
      frozen and the CPU hotplug infrastructure is used for this purpose, we need
      special CPU hotplug notifications that will help the CPU-hotplug-aware
      subsystems distinguish normal CPU hotplug events from CPU hotplug events
      related to a system-wide suspend or resume operation in progress.  This
      patch introduces such notifications and causes them to be used during
      suspend and resume transitions.  It also changes all of the
      CPU-hotplug-aware subsystems to take these notifications into consideration
      (for now they are handled in the same way as the corresponding "normal"
      ones).
      
      [oleg@tv-sign.ru: cleanups]
      Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
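      A sketch of how a CPU-hotplug-aware subsystem consumes the new
      notifications while treating them like the normal ones, which is how this
      patch adjusts the existing subsystems for now. The example_* helpers are
      made up, and the CPU_*_FROZEN action names are assumed here:

        static int example_cpu_callback(struct notifier_block *nfb,
                                        unsigned long action, void *hcpu)
        {
                long cpu = (long)hcpu;

                switch (action) {
                case CPU_ONLINE:
                case CPU_ONLINE_FROZEN:         /* CPU brought back during resume */
                        example_init_cpu(cpu);
                        break;
                case CPU_DEAD:
                case CPU_DEAD_FROZEN:           /* CPU taken down during suspend */
                        example_cleanup_cpu(cpu);
                        break;
                }
                return NOTIFY_OK;
        }

        static struct notifier_block example_cpu_notifier = {
                .notifier_call = example_cpu_callback,
        };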
  5. 09 May 2007 (1 commit)
  6. 28 April 2007 (1 commit)
  7. 26 April 2007 (1 commit)
  8. 08 April 2007 (1 commit)
    • [PATCH] high-res timers: resume fix · 995f054f
      Authored by Ingo Molnar
      Soeren Sonnenburg reported that upon resume he is getting
      this backtrace:
      
       [<c0119637>] smp_apic_timer_interrupt+0x57/0x90
       [<c0142d30>] retrigger_next_event+0x0/0xb0
       [<c0104d30>] apic_timer_interrupt+0x28/0x30
       [<c0142d30>] retrigger_next_event+0x0/0xb0
       [<c0140068>] __kfifo_put+0x8/0x90
       [<c0130fe5>] on_each_cpu+0x35/0x60
       [<c0143538>] clock_was_set+0x18/0x20
       [<c0135cdc>] timekeeping_resume+0x7c/0xa0
       [<c02aabe1>] __sysdev_resume+0x11/0x80
       [<c02ab0c7>] sysdev_resume+0x47/0x80
       [<c02b0b05>] device_power_up+0x5/0x10
      
      It turns out that on resume we mistakenly re-enable interrupts too
      early.  Do the timer retrigger only on the current CPU.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Soeren Sonnenburg <kernel@nn7.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
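      The essence of the fix, as a simplified sketch (not the exact patch, and
      the helper name is illustrative): instead of clock_was_set(), which
      retriggers every CPU via on_each_cpu() and therefore needs interrupts
      enabled, the resume path reprograms only the local CPU's high-resolution
      timer event:

        /* Called on resume with interrupts still disabled; the nonboot CPUs
         * are not running yet at this point, so retriggering only the local
         * CPU's clock event device is sufficient and safe. */
        void hres_timers_resume(void)
        {
                retrigger_next_event(NULL);
        }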
  9. 29 March 2007 (1 commit)
  10. 17 March 2007 (2 commits)
  11. 07 March 2007 (1 commit)
  12. 05 March 2007 (1 commit)
    • [PATCH] timer/hrtimer: take per cpu locks in sane order · e81ce1f7
      Authored by Heiko Carstens
      Doing something like this on a two cpu system
      
        # echo 0 > /sys/devices/system/cpu/cpu0/online
        # echo 1 > /sys/devices/system/cpu/cpu0/online
        # echo 0 > /sys/devices/system/cpu/cpu1/online
      
      will give me this:
      
        =======================================================
        [ INFO: possible circular locking dependency detected ]
        2.6.21-rc2-g562aa1d4-dirty #7
        -------------------------------------------------------
        bash/1282 is trying to acquire lock:
         (&cpu_base->lock_key){.+..}, at: [<000000000005f17e>] hrtimer_cpu_notify+0xc6/0x240
      
        but task is already holding lock:
         (&cpu_base->lock_key#2){.+..}, at: [<000000000005f174>] hrtimer_cpu_notify+0xbc/0x240
      
        which lock already depends on the new lock.
      
      This happens because we have the following code in kernel/hrtimer.c:
      
        migrate_hrtimers(int cpu)
        [...]
        old_base = &per_cpu(hrtimer_bases, cpu);
        new_base = &get_cpu_var(hrtimer_bases);
        [...]
        spin_lock(&new_base->lock);
        spin_lock(&old_base->lock);
      
      Which means the spinlocks are taken in an order which depends on which cpu
      gets shut down from which other cpu. Therefore lockdep complains that there
      might be an ABBA deadlock. Since migrate_hrtimers() gets only called on
      cpu hotplug, it's safe to assume that it isn't executed concurrently, so
      the deadlock cannot actually occur in practice.
      
      The same problem exists in kernel/timer.c: migrate_timers().
      
      As pointed out by Christian Borntraeger one possible solution to avoid
      the locking order complaints would be to make sure that the locks are
      always taken in the same order. E.g. by taking the lock of the cpu with
      the lower number first.
      
      To achieve this we introduce two new spinlock functions double_spin_lock
      and double_spin_unlock which lock or unlock two locks in a given order.
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Christian Borntraeger <cborntra@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
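      Helpers of that kind would look roughly like this (a sketch, not
      necessarily the exact code that was merged): the boolean picks which lock
      is taken first, and the unlock side releases in the reverse order.

        /* Take two spinlocks in a caller-chosen, globally consistent order
         * (e.g. lock of the lower-numbered cpu first) so lockdep never sees
         * an ABBA pattern. */
        static inline void double_spin_lock(spinlock_t *l1, spinlock_t *l2,
                                            bool l1_first)
        {
                if (l1_first) {
                        spin_lock(l1);
                        spin_lock(l2);
                } else {
                        spin_lock(l2);
                        spin_lock(l1);
                }
        }

        static inline void double_spin_unlock(spinlock_t *l1, spinlock_t *l2,
                                              bool l1_taken_first)
        {
                /* release in reverse order of acquisition */
                if (l1_taken_first) {
                        spin_unlock(l2);
                        spin_unlock(l1);
                } else {
                        spin_unlock(l1);
                        spin_unlock(l2);
                }
        }

      In migrate_hrtimers() the two bare spin_lock() calls would then become
      something like double_spin_lock(&new_base->lock, &old_base->lock,
      smp_processor_id() < cpu), so the lower-numbered cpu's lock is always
      taken first.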
  13. 17 February 2007 (11 commits)
  14. 12 February 2007 (1 commit)
  15. 30 September 2006 (1 commit)
  16. 15 August 2006 (1 commit)
  17. 01 August 2006 (1 commit)
  18. 04 July 2006 (2 commits)
  19. 28 June 2006 (2 commits)
  20. 26 June 2006 (2 commits)
  21. 01 June 2006 (1 commit)
  22. 26 April 2006 (2 commits)
  23. 22 April 2006 (1 commit)