1. 17 April 2008, 1 commit
  2. 26 March 2008, 1 commit
    • NOHZ: reevaluate idle sleep length after add_timer_on() · 06d8308c
      Committed by Thomas Gleixner
      add_timer_on() can add a timer on a CPU which is currently in a long
      idle sleep, but the timer wheel is not reevaluated by the nohz code on
      that CPU. So a timer can be delayed for quite a long time. This
      triggered a false positive in the clocksource watchdog code.
      
      To avoid this we need to wake up the idle CPU and enforce the
      reevaluation of the timer wheel for the next timer event.
      
      Add a function that checks whether a given CPU is idle, marks the
      idle task with NEED_RESCHED and sends a reschedule IPI to notify
      that CPU of the change in its timer wheel.
      
      Call this function from add_timer_on().
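      
      A minimal sketch of such a helper, assuming the usual scheduler
      primitives (the name wake_up_idle_cpu() and the exact checks are
      illustrative, not a verbatim copy of the patch):
      
         /* Wake an idle CPU so its nohz code re-evaluates the timer wheel. */
         void wake_up_idle_cpu(int cpu)
         {
                 struct rq *rq = cpu_rq(cpu);
         
                 if (cpu == smp_processor_id())
                         return;
                 /* Only poke CPUs that are actually sitting in the idle loop. */
                 if (rq->curr != rq->idle)
                         return;
                 /* Make the idle task drop out of the idle loop... */
                 set_tsk_need_resched(rq->idle);
                 /* ...and make sure it notices: send a reschedule IPI. */
                 smp_send_reschedule(cpu);
         }
      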
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Cc: stable@kernel.org
      
      --
       include/linux/sched.h |    6 ++++++
       kernel/sched.c        |   43 +++++++++++++++++++++++++++++++++++++++++++
       kernel/timer.c        |   10 +++++++++-
       3 files changed, 58 insertions(+), 1 deletion(-)
  3. 09 February 2008, 2 commits
  4. 07 February 2008, 1 commit
    • taskstats scaled time cleanup · 06b8e878
      Committed by Michael Neuling
      This moves the ability to scale cputime into generic code.  This allows us
      to fix the issue in kernel/timer.c (noticed by Balbir) where we could only
      add an unscaled value to the scaled utime/stime.
      
      This adds a cputime_to_scaled function.  As before, the POWERPC version
      does the scaling based on the last SPURR/PURR ratio calculated.  The
      generic and s390 (only other arch to implement asm/cputime.h) versions are
      both NOPs.
      
      Also moves the SPURR and PURR snapshots closer.
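      
      For illustration, the generic no-op could look roughly like the sketch
      below (hedged; the exact form in include/asm-generic/cputime.h may
      differ), while POWERPC substitutes its SPURR/PURR-based scaling:
      
         /* Generic/no-op sketch: scaled time equals unscaled time. */
         #define cputime_to_scaled(ct)   (ct)
      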
      Signed-off-by: Michael Neuling <mikey@neuling.org>
      Cc: Jay Lan <jlan@engr.sgi.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  5. 30 January 2008, 2 commits
  6. 26 January 2008, 1 commit
  7. 22 January 2008, 1 commit
    • timer: fix section mismatch · 48ccf3da
      Committed by Randy Dunlap
      The caller is __cpuinit, so annotate this code to match.
      Also, this code block and its caller are inside #ifdef CONFIG_HOTPLUG_CPU
      blocks, so the annotation should reflect that config symbol's usage.
      
      WARNING: vmlinux.o(.text+0x4252f): Section mismatch: reference to .init.text: (between 'timer_cpu_notify' and 'msleep')
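      
      The shape of the fix is roughly the following sketch (the function name
      is taken from kernel/timer.c; the exact hunk may differ):
      
         #ifdef CONFIG_HOTPLUG_CPU
         /* Annotated to match its __cpuinit caller, timer_cpu_notify(). */
         static void __cpuinit migrate_timers(int cpu)
         {
                 /* ... move pending timers off the dead CPU ... */
         }
         #endif /* CONFIG_HOTPLUG_CPU */
      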
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 14 January 2008, 1 commit
    • remove task_ppid_nr_ns · 84427eae
      Committed by Roland McGrath
      task_ppid_nr_ns is called in three places.  One of these should never
      have called it.  In the other two, using it broke the existing
      semantics.  This was presumably accidental.  If the function had not
      been there, it would have been much more obvious to the eye that those
      patches were changing the behavior.  We don't need this function.
      
      In task_state, the pid of the ptracer is not the ppid of the ptracer.
      
      In do_task_stat, ppid is the tgid of the real_parent, not its pid.
      I also moved the call outside of lock_task_sighand, since it doesn't
      need it.
      
      In sys_getppid, ppid is the tgid of the real_parent, not its pid.
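      
      A hedged sketch of the corrected derivation in the sys_getppid case
      (helper names such as task_tgid_vnr() are assumptions, not quoted from
      the patch):
      
         pid_t ppid;
         
         rcu_read_lock();
         /* ppid is the tgid of the real parent, not its pid. */
         ppid = task_tgid_vnr(current->real_parent);
         rcu_read_unlock();
      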
      Signed-off-by: Roland McGrath <roland@redhat.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  9. 19 December 2007, 1 commit
  10. 07 December 2007, 1 commit
  11. 10 November 2007, 1 commit
    • sched: restore deterministic CPU accounting on powerpc · fa13a5a1
      Committed by Paul Mackerras
      Since powerpc started using CONFIG_GENERIC_CLOCKEVENTS, the
      deterministic CPU accounting (CONFIG_VIRT_CPU_ACCOUNTING) has been
      broken on powerpc, because we end up counting user time twice: once in
      timer_interrupt() and once in update_process_times().
      
      This fixes the problem by pulling the code in update_process_times
      that updates utime and stime into a separate function called
      account_process_tick.  If CONFIG_VIRT_CPU_ACCOUNTING is not defined,
      there is a version of account_process_tick in kernel/timer.c that
      simply accounts a whole tick to either utime or stime as before.  If
      CONFIG_VIRT_CPU_ACCOUNTING is defined, then arch code gets to
      implement account_process_tick.
      
      This also lets us simplify the s390 code a bit; it means that the s390
      timer interrupt can now call update_process_times even when
      CONFIG_VIRT_CPU_ACCOUNTING is turned on, and can just implement a
      suitable account_process_tick().
      
      account_process_tick() now takes the task_struct * as an argument.
      Tested both with and without CONFIG_VIRT_CPU_ACCOUNTING.
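      
      A hedged sketch of the generic (!CONFIG_VIRT_CPU_ACCOUNTING) variant
      that ends up in kernel/timer.c; the committed version may also account
      scaled time:
      
         void account_process_tick(struct task_struct *p, int user_tick)
         {
                 /* Charge the whole tick to either user or system time. */
                 if (user_tick)
                         account_user_time(p, jiffies_to_cputime(1));
                 else
                         account_system_time(p, HARDIRQ_OFFSET, jiffies_to_cputime(1));
         }
      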
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  12. 06 November 2007, 1 commit
  13. 20 October 2007, 1 commit
    • pid namespaces: changes to show virtual ids to user · b488893a
      Committed by Pavel Emelyanov
      This is the largest patch in the set. Make all (I hope) the places where
      a pid is shown to or obtained from userspace operate on virtual pids.
      
      The idea is:
       - all in-kernel data structures must store either struct pid itself
         or the pid's global nr, obtained with a pid_nr() call;
       - when looking up a task from kernel code with the stored id, one
         should use find_task_by_pid(), which works with global pids;
       - when showing a pid's numerical value to the user, the virtual one
         should be used; however, when a task's pid is shown outside that
         task's namespace, the global one is to be used;
       - when getting a pid from userspace, one needs to treat it as a
         virtual id and use the appropriate task/pid-searching functions
         (see the sketch below).
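      
      A hedged illustration of those rules; helper names such as
      task_pid_vnr() and find_task_by_pid_ns() are assumptions and may not
      match this patch exactly:
      
         pid_t stored = pid_nr(task_pid(tsk));  /* in kernel: global id      */
         pid_t shown  = task_pid_vnr(tsk);      /* shown to user: virtual id */
         /* an id coming in from userspace is looked up in the caller's ns */
         struct task_struct *t =
                 find_task_by_pid_ns(nr_from_user, current->nsproxy->pid_ns);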
      
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: nuther build fix]
      [akpm@linux-foundation.org: yet nuther build fix]
      [akpm@linux-foundation.org: remove unneeded casts]
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Alexey Dobriyan <adobriyan@openvz.org>
      Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Paul Menage <menage@google.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 19 October 2007, 2 commits
  15. 21 July 2007, 1 commit
  16. 20 July 2007, 1 commit
  17. 18 July 2007, 1 commit
  18. 17 July 2007, 2 commits
  19. 30 May 2007, 1 commit
  20. 15 May 2007, 1 commit
  21. 11 May 2007, 1 commit
  22. 10 May 2007, 3 commits
  23. 09 May 2007, 3 commits
    • Introduce a handy list_first_entry macro · b5e61818
      Committed by Pavel Emelianov
      There are many places in the kernel where a construction like
      
         foo = list_entry(head->next, struct foo_struct, list);
      
      is used.
      The code might look more descriptive and neat if using the macro
      
         list_first_entry(head, type, member) \
                   list_entry((head)->next, type, member)
      
      Here is the macro itself and examples of its usage in the generic code.
      If it turns out to be useful, I can prepare a set of patches to inject
      it into arch-specific code, drivers, networking, etc.
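      
      For illustration, the construction above then becomes:
      
         foo = list_first_entry(head, struct foo_struct, list);
      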
      Signed-off-by: Pavel Emelianov <xemul@openvz.org>
      Signed-off-by: Kirill Korotaev <dev@openvz.org>
      Cc: Randy Dunlap <randy.dunlap@oracle.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Zach Brown <zach.brown@oracle.com>
      Cc: Davide Libenzi <davidel@xmailserver.org>
      Cc: John McCutchan <ttb@tentacle.dhs.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: john stultz <johnstul@us.ibm.com>
      Cc: Ram Pai <linuxram@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Move timekeeping code to timekeeping.c · 8524070b
      Committed by john stultz
      Move the timekeeping code out of kernel/timer.c and into
      kernel/time/timekeeping.c.  I made no cleanups or other changes in transit.
      
      [akpm@linux-foundation.org: build fix]
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Add support for deferrable timers · 6e453a67
      Committed by Venki Pallipadi
      Introduce a new flag for timers - deferrable: a deferrable timer works
      normally when the system is busy, but will not cause the CPU to come
      out of idle just to service it.  Instead, the timer is serviced when
      the CPU eventually wakes up for a subsequent non-deferrable timer.
      
      The main advantage of this is to avoid unnecessary timer interrupts
      when the CPU is idle.  If the routine currently called by a timer can
      wait until the next event without any issues, this new timer can be
      used to set up the timer event for that routine.  This, with dynticks,
      allows CPUs to be lazy, letting them stay idle for extended periods of
      time by reducing unnecessary wakeups and thereby reducing power
      consumption.
      
      This patch:
      
      Builds this new timer on top of the existing timer infrastructure.  It
      uses the last bit in the 'base' pointer of the timer_list structure to
      store the deferrable flag.  The __next_timer_interrupt() function skips
      over these deferrable timers when a CPU looks for the next timer event
      it has to wake up for.
      
      This is exported by a new interface init_timer_deferrable() that can be
      called in place of regular init_timer().
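      
      A hedged sketch of the encoding described above (macro and helper names
      are illustrative and may not match the patch verbatim):
      
         #define TBASE_DEFERRABLE_FLAG   (0x1)
         
         /* The flag rides on the otherwise-unused low bit of timer->base. */
         static inline unsigned int tbase_get_deferrable(tvec_base_t *base)
         {
                 return (unsigned int)(unsigned long)base & TBASE_DEFERRABLE_FLAG;
         }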
      
      [akpm@linux-foundation.org: Privatise a #define]
      Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Dave Jones <davej@codemonkey.org.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  24. 27 April 2007, 1 commit
  25. 08 April 2007, 1 commit
    • [PATCH] high-res timers: resume fix · 995f054f
      Committed by Ingo Molnar
      Soeren Sonnenburg reported that upon resume he is getting
      this backtrace:
      
       [<c0119637>] smp_apic_timer_interrupt+0x57/0x90
       [<c0142d30>] retrigger_next_event+0x0/0xb0
       [<c0104d30>] apic_timer_interrupt+0x28/0x30
       [<c0142d30>] retrigger_next_event+0x0/0xb0
       [<c0140068>] __kfifo_put+0x8/0x90
       [<c0130fe5>] on_each_cpu+0x35/0x60
       [<c0143538>] clock_was_set+0x18/0x20
       [<c0135cdc>] timekeeping_resume+0x7c/0xa0
       [<c02aabe1>] __sysdev_resume+0x11/0x80
       [<c02ab0c7>] sysdev_resume+0x47/0x80
       [<c02b0b05>] device_power_up+0x5/0x10
      
      It turns out that on resume we mistakenly re-enable interrupts too
      early.  Do the timer retrigger only on the current CPU.
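      
      A hedged sketch of the idea: no cross-CPU broadcast from the resume
      path; only the local clock event device is retriggered, with interrupts
      still disabled (the function name is an assumption):
      
         void hres_timers_resume(void)
         {
                 /* Retrigger the next expiry on this CPU only. */
                 retrigger_next_event(NULL);
         }
      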
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Soeren Sonnenburg <kernel@nn7.de>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  26. 26 March 2007, 1 commit
  27. 07 March 2007, 2 commits
  28. 05 March 2007, 1 commit
    • [PATCH] timer/hrtimer: take per cpu locks in sane order · e81ce1f7
      Committed by Heiko Carstens
      Doing something like this on a two cpu system
      
        # echo 0 > /sys/devices/system/cpu/cpu0/online
        # echo 1 > /sys/devices/system/cpu/cpu0/online
        # echo 0 > /sys/devices/system/cpu/cpu1/online
      
      will give me this:
      
        =======================================================
        [ INFO: possible circular locking dependency detected ]
        2.6.21-rc2-g562aa1d4-dirty #7
        -------------------------------------------------------
        bash/1282 is trying to acquire lock:
         (&cpu_base->lock_key){.+..}, at: [<000000000005f17e>] hrtimer_cpu_notify+0xc6/0x240
      
        but task is already holding lock:
         (&cpu_base->lock_key#2){.+..}, at: [<000000000005f174>] hrtimer_cpu_notify+0xbc/0x240
      
        which lock already depends on the new lock.
      
      This happens because we have the following code in kernel/hrtimer.c:
      
        migrate_hrtimers(int cpu)
        [...]
        old_base = &per_cpu(hrtimer_bases, cpu);
        new_base = &get_cpu_var(hrtimer_bases);
        [...]
        spin_lock(&new_base->lock);
        spin_lock(&old_base->lock);
      
      Which means the spinlocks are taken in an order which depends on which cpu
      gets shut down from which other cpu. Therefore lockdep complains that there
      might be an ABBA deadlock. Since migrate_hrtimers() gets only called on
      cpu hotplug it's safe to assume that it isn't executed concurrently on a
      
      The same problem exists in kernel/timer.c: migrate_timers().
      
      As pointed out by Christian Borntraeger, one possible solution to avoid
      the locking order complaints would be to make sure that the locks are
      always taken in the same order, e.g. by taking the lock of the cpu with
      the lower number first.
      
      To achieve this we introduce two new spinlock functions double_spin_lock
      and double_spin_unlock which lock or unlock two locks in a given order.
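      
      A sketch of the locking helper described above, close to but not
      necessarily verbatim the interface added to include/linux/spinlock.h
      (double_spin_unlock() mirrors it, releasing in reverse order):
      
         static inline void double_spin_lock(spinlock_t *l1, spinlock_t *l2,
                                             bool l1_first)
         {
                 /* Always acquire in the caller-specified order. */
                 if (l1_first) {
                         spin_lock(l1);
                         spin_lock(l2);
                 } else {
                         spin_lock(l2);
                         spin_lock(l1);
                 }
         }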
      
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Roman Zippel <zippel@linux-m68k.org>
      Cc: John Stultz <johnstul@us.ibm.com>
      Cc: Christian Borntraeger <cborntra@de.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  29. 02 March 2007, 2 commits
  30. 17 February 2007, 1 commit