1. 26 Feb 2011, 1 commit
    • clockevents: Prevent oneshot mode when broadcast device is periodic · 3a142a06
      Committed by Thomas Gleixner
      When the per-cpu timer is marked CLOCK_EVT_FEAT_C3STOP, we can only
      switch into oneshot mode when the backup broadcast device supports
      oneshot mode as well. Otherwise we would unconditionally try to
      switch the broadcast device into an unsupported mode. This went
      unnoticed so far because the currently available broadcast devices
      support oneshot mode. Seth unearthed this problem while debugging
      and working around hpet-related BIOS wreckage.
      
      Add the necessary check to tick_is_oneshot_available().
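      A minimal sketch of the shape of that check, with names approximated
      from the clockevents code of that era (not the exact upstream diff):
      
          /*
           * Oneshot mode is only usable when the per-cpu device supports
           * it and, if that device stops in deep C-states (C3STOP), when
           * the broadcast device can take over in oneshot mode too.
           */
          int tick_is_oneshot_available(void)
          {
                  struct clock_event_device *dev =
                          __this_cpu_read(tick_cpu_device.evtdev);
      
                  if (!dev || !(dev->features & CLOCK_EVT_FEAT_ONESHOT))
                          return 0;
                  if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
                          return 1;
                  /* C3STOP: broadcast device must do oneshot as well */
                  return tick_broadcast_oneshot_available();
          }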
      Reported-and-tested-by: Seth Forshee <seth.forshee@canonical.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <alpine.LFD.2.00.1102252231200.2701@localhost6.localdomain6>
      Cc: stable@kernel.org # .21 ->
  2. 12 Feb 2011, 1 commit
  3. 20 Jan 2011, 1 commit
    • hrtimers: Notify hrtimer users of switches to NOHZ mode · 2d0640b4
      Committed by Stephen Boyd
      When NOHZ=y and high-res timers are disabled (via cmdline or
      Kconfig), tick_nohz_switch_to_nohz() notifies the user about
      switching into NOHZ mode. Nothing is printed for the case where
      HIGH_RES_TIMERS=y. Fix this for the HIGH_RES_TIMERS=y case by
      duplicating the printk from the low-res NOHZ path in the high-res
      NOHZ path.
      
      This confused me, since I was thinking 'dmesg | grep -i NOHZ' would
      tell me whether NOHZ was enabled, but with hrtimers enabled there is
      nothing.
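      The notice itself is a single printk; a sketch of the duplicated
      line on the high-res path (message string taken from the low-res
      path, exact placement assumed):
      
          /* Same notice the low-res NOHZ switch already prints */
          printk(KERN_INFO "Switched to NOHz mode on CPU #%d\n",
                 smp_processor_id());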
      Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1295419594-13085-1-git-send-email-sboyd@codeaurora.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 14 Jan 2011, 2 commits
  5. 12 Jan 2011, 2 commits
  6. 23 Dec 2010, 1 commit
  7. 17 Dec 2010, 1 commit
  8. 11 Dec 2010, 1 commit
  9. 02 Nov 2010, 1 commit
  10. 21 Oct 2010, 2 commits
  11. 10 Sep 2010, 1 commit
  12. 14 Aug 2010, 1 commit
    • time: Workaround gcc loop optimization that causes 64bit div errors · c7dcf87a
      Committed by John Stultz
      Early 4.3 versions of gcc apparently aggressively optimize the raw
      time accumulation loop, replacing it with a divide.
      
      On 32bit systems, this causes the following link errors:
      	undefined reference to `__umoddi3'
      	undefined reference to `__udivdi3'
      
      The gcc issue has been fixed in gcc 4.4 and later.
      
      This patch replaces the accumulation loop with a do_div, as suggested
      by Linus.
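      A sketch of the do_div form, assuming the accumulation happens in
      logarithmic_accumulation() (variable names approximate):
      
          /*
           * Accumulate raw time in 64 bits; do_div() replaces the
           * open-coded normalization loop that gcc 4.3 turned into a
           * libgcc 64-bit divide.
           */
          u64 raw_nsecs = (u64)timekeeper.raw_interval << shift;
      
          raw_nsecs += raw_time.tv_nsec;
          if (raw_nsecs >= NSEC_PER_SEC) {
                  u64 raw_secs = raw_nsecs;
                  raw_nsecs = do_div(raw_secs, NSEC_PER_SEC); /* remainder */
                  raw_time.tv_sec += raw_secs;                /* quotient */
          }
          raw_time.tv_nsec = raw_nsecs;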
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      CC: Jason Wessel <jason.wessel@windriver.com>
      CC: Larry Finger <Larry.Finger@lwfinger.net>
      CC: Ingo Molnar <mingo@elte.hu>
      CC: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 13 Aug 2010, 1 commit
    • timekeeping: Fix overflow in rawtime tv_nsec on 32 bit archs · deda2e81
      Committed by Jason Wessel
      tv_nsec is a long, and when the shifted interval is added to it, it
      can wrap and become negative, which later causes looping problems
      in getrawmonotonic(). The edge case occurs when the system has
      slept for a short period of about 2 seconds.
      
      A trace printk of the values in this patch illustrates the problem:
      
      ftrace time stamp: log
      43.716079: logarithmic_accumulation: raw: 3d0913 tv_nsec d687faa
      43.718513: logarithmic_accumulation: raw: 3d0913 tv_nsec da588bd
      43.722161: logarithmic_accumulation: raw: 3d0913 tv_nsec de291d0
      46.349925: logarithmic_accumulation: raw: 7a122600 tv_nsec e1f9ae3b
      46.349930: logarithmic_accumulation: raw: 1e848980 tv_nsec 8831c0e3
      
      The kernel starts looping at 46.349925 in getrawmonotonic() due to
      the negative value from adding the raw value to tv_nsec.
      
      A simple solution is to accumulate into a u64, and then normalize
      it to a timespec.
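      A sketch of that approach (variable names approximate); this loop
      form is what the do_div workaround in the entry above later
      replaced:
      
          u64 raw_nsecs;
      
          /*
           * Accumulate in 64 bits so interval << shift cannot wrap the
           * 32-bit long tv_nsec negative, then normalize by whole
           * seconds.
           */
          raw_nsecs = (u64)timekeeper.raw_interval << shift;
          raw_nsecs += raw_time.tv_nsec;
          while (raw_nsecs >= NSEC_PER_SEC) {
                  raw_nsecs -= NSEC_PER_SEC;
                  raw_time.tv_sec++;
          }
          raw_time.tv_nsec = raw_nsecs;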
      Signed-off-by: Jason Wessel <jason.wessel@windriver.com>
       [ Reworked variable names and simplified some of the code. - John ]
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  14. 03 Aug 2010, 1 commit
  15. 27 Jul 2010, 6 commits
  16. 17 Jul 2010, 1 commit
  17. 12 Jul 2010, 1 commit
  18. 01 Jul 2010, 1 commit
    • sched: Cure nr_iowait_cpu() users · 8c215bd3
      Committed by Peter Zijlstra
      Commit 0224cf4c (sched: Intoduce get_cpu_iowait_time_us()) broke
      things by not making sure preemption was disabled in the callers of
      nr_iowait_cpu(), which reads the iowait value of the current cpu.
      
      This resulted in a heap of preempt warnings. Cure this by making
      nr_iowait_cpu() take a cpu number and fixing up the callers to pass
      in the right number.
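      A sketch of the new signature (runqueue details assumed from the
      scheduler of that era):
      
          /*
           * The caller names the cpu explicitly, so reading the iowait
           * count no longer depends on preemption being disabled around
           * the call.
           */
          unsigned long nr_iowait_cpu(int cpu)
          {
                  struct rq *this = cpu_rq(cpu);
      
                  return atomic_read(&this->nr_iowait);
          }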
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Rafael J. Wysocki <rjw@sisk.pl>
      Cc: Maxim Levitsky <maximlevitsky@gmail.com>
      Cc: Len Brown <len.brown@intel.com>
      Cc: Pavel Machek <pavel@ucw.cz>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: linux-pm@lists.linux-foundation.org
      LKML-Reference: <1277968037.1868.120.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  19. 18 Jun 2010, 1 commit
  20. 09 Jun 2010, 1 commit
    • sched: Change nohz idle load balancing logic to push model · 83cd4fe2
      Committed by Venkatesh Pallipadi
      In the new push model, all idle CPUs indeed go into nohz mode.
      There is still the concept of an idle load balancer (performing the
      load balancing on behalf of all the idle CPUs in the system). A
      busy CPU kicks the nohz balancer when any of the nohz CPUs need
      idle load balancing. The kicked CPU then does the idle load
      balancing on behalf of all idle CPUs instead of the normal idle
      balance.
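      A rough sketch of the kick path on a busy CPU (helper names
      approximate):
      
          /*
           * On the scheduler tick of a busy cpu: if any nohz-idle cpu
           * needs balancing, kick the nominated idle load balancer,
           * which then balances on behalf of all idle cpus.
           */
          static void trigger_load_balance(struct rq *rq, int cpu)
          {
                  if (time_after_eq(jiffies, rq->next_balance))
                          raise_softirq(SCHED_SOFTIRQ);
                  else if (nohz_kick_needed(rq, cpu))
                          nohz_balancer_kick(cpu);
          }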
      
      This addresses two problems with the current nohz ilb logic:
      * The idle load balancer continued to have periodic ticks during
        idle and woke up frequently, even though it did not have any
        rebalancing to do on behalf of any of the idle CPUs.
      * On x86, and on CPUs whose APIC timer stops in idle, this periodic
        wakeup can result in an additional periodic interrupt on the CPU
        doing the timer broadcast.
      
      Also, currently we migrate unpinned timers from an idle cpu to the
      cpu doing idle load balancing. (When all the cpus in the system are
      idle, there is no idle load balancing cpu, and timers get added to
      the same idle cpu where the request was made; so the existing
      optimization works only on a semi-idle system.)
      
      In a semi-idle system, we no longer have periodic ticks on the idle
      load balancer CPU. Using that cpu will add more delay to the timers
      than intended (as that cpu's timer base may not be up to date wrt
      jiffies etc). This was causing mysterious slowdowns during boot
      etc.
      
      For now, in the semi-idle case, use the nearest busy cpu for
      migrating timers from an idle cpu. This is good for power savings
      anyway.
      Signed-off-by: Venkatesh Pallipadi <venki@google.com>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      LKML-Reference: <1274486981.2840.46.camel@sbs-t61.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  21. 10 May 2010, 7 commits
  22. 13 Apr 2010, 1 commit
    • time: Remove xtime_cache · 6a867a39
      Committed by John Stultz
      With the earlier logarithmic time accumulation patch, xtime will now
      always be within one "tick" of the current time, instead of possibly
      half a second off.
      
      This removes the need for the xtime_cache value, which always
      stored the time at the last interrupt, so this patch cleans that
      up, removing the xtime_cache related code.
      
      This patch also addresses an issue with an earlier version of this
      change, where xtime_cache was normalizing xtime, which could in
      some cases be invalid (i.e. tv_nsec == NSEC_PER_SEC). This is fixed
      by handling the edge case in update_wall_time().
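      A sketch of that edge-case handling in update_wall_time(), assuming
      second_overflow() is where second-rollover work happens:
      
          /*
           * After accumulation, xtime.tv_nsec may transiently equal
           * NSEC_PER_SEC; normalize before readers can see it.
           */
          if (unlikely(xtime.tv_nsec >= NSEC_PER_SEC)) {
                  xtime.tv_nsec -= NSEC_PER_SEC;
                  xtime.tv_sec++;
                  second_overflow();
          }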
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Petr Titěra <P.Titera@century.cz>
      LKML-Reference: <1270589451-30773-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  23. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h, and thus ends up
      being included when building most .c files. percpu.h includes
      slab.h, which in turn includes gfp.h, making everything defined by
      the two files universally available and complicating inclusion
      dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed. Prepare
      for this change by updating users of gfp and slab facilities to
      include those headers directly instead of assuming availability.
      As this conversion needs to touch a large number of source files,
      the following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h (a before/after sketch follows
        this list).
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings. It is put in the include block
        which contains core kernel includes, in the same order that the
        rest are ordered: alphabetical, Christmas tree, rev-Xmas-tree, or
        at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        an error message indicating which .h file needs to be added to
        the file.
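      A hypothetical before/after showing the kind of edit the sweep
      makes (the file and its includes are invented for illustration):
      
          /* before: kmalloc()/kfree() compiled only because percpu.h
           * dragged in slab.h implicitly */
          #include <linux/sched.h>
      
          /* after: the slab user includes slab.h itself */
          #include <linux/sched.h>
          #include <linux/slab.h> /* kmalloc(), kfree() */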
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked. Some didn't need the
         inclusion, some needed manual addition, while for others adding
         it to an implementation .h or an embedding .c file was more
         appropriate. This step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files, but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around
         .h files could easily lead to inclusion dependency hell. Most
         gfp.h inclusion directives were ignored, as stuff from gfp.h
         was usually widely available and often used in preprocessor
         macros. Each slab.h inclusion directive was examined and added
         manually as necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and
         failures were fixed. CONFIG_GCOV_KERNEL was turned off for all
         tests (as my distributed build env didn't work with gcov
         compiles), and a few more options had to be turned off,
         depending on arch, to make things build (like ipr on powerpc/64,
         which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that they could be
         applied as a separate patch and serve as a bisection point.
      
      Given that I had only a couple of failures from the build tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch. If there is a breakage, it's likely to be something in one
      of the arch headers, which should be easily discoverable on most
      builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  24. 24 Mar 2010, 1 commit
  25. 23 Mar 2010, 1 commit
    • time: Fix accumulation bug triggered by long delay. · 830ec045
      Committed by John Stultz
      The logarithmic accumulation done in the timekeeping has some
      overflow protection that limits the max shift value. That means it
      will take more than shift iterations of the loop to accumulate all
      of the cycles. This causes the shift decrement to underflow, which
      causes the loop to never exit.
      
      The simplest fix would be simply to do a:
      	if (shift)
      		shift--;
      
      However, that is not optimal: since we know the cycle offset is
      larger than interval << shift, the above would make shift drop to
      zero, and we would then spin for quite a while accumulating one
      interval chunk at a time.
      
      Instead, this patch only decreases shift if the offset is smaller
      than cycle_interval << shift. This makes sure we accumulate using
      the largest chunks possible without overflowing tick_length, and
      limits the number of iterations through the loop.
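      A sketch of the resulting loop (field names approximate):
      
          while (offset >= timekeeper.cycle_interval) {
                  offset = logarithmic_accumulation(offset, shift);
                  /*
                   * Only move to a smaller chunk once the current one
                   * no longer fits, so shift never underflows past zero.
                   */
                  if (offset < timekeeper.cycle_interval << shift)
                          shift--;
          }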
      
      This issue was found and reported by Sonic Zhang, who also tested
      the fix. Many thanks for the explanation and testing!
      Reported-by: Sonic Zhang <sonic.adi@gmail.com>
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Tested-by: Sonic Zhang <sonic.adi@gmail.com>
      LKML-Reference: <1268948850-5225-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  26. 13 Mar 2010, 1 commit