1. 27 Jul 2010, 3 commits
    • timekeeping: Fix update_vsyscall to provide wall_to_monotonic offset · 7615856e
      Committed by John Stultz
      update_vsyscall() did not provide the wall_to_monotonic offset,
      so arch-specific implementations tend to reference wall_to_monotonic
      directly. This limits future cleanups in the timekeeping core, so
      this patch fixes the update_vsyscall interface to provide
      wall_to_monotonic (see the sketch below), allowing wall_to_monotonic
      to be made static as planned in Documentation/feature-removal-schedule.txt.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Tony Luck <tony.luck@intel.com>
      LKML-Reference: <1279068988-21864-7-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      7615856e
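      A sketch of the widened interface described above; the exact parameter
      names are assumptions based on the log:

      	/* arch code now receives the wall_to_monotonic value as an
      	 * argument instead of referencing the global directly */
      	void update_vsyscall(struct timespec *wall_time, struct timespec *wtm,
      			     struct clocksource *clock, u32 mult);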
    • time: Kill off CONFIG_GENERIC_TIME · 592913ec
      Committed by John Stultz
      Now that all arches have been converted over to use generic time via
      clocksources or arch_gettimeoffset(), we can remove the GENERIC_TIME
      config option and simplify the generic code.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      LKML-Reference: <1279068988-21864-4-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      592913ec
    • time: Implement timespec_add · ce3bf7ab
      Committed by John Stultz
      After accidentally misusing timespec_add_safe, I wanted to make sure
      we don't accidentally trip over that issue again, so I created a simple
      timespec_add() function (sketched below) which we can use to replace
      the instances of timespec_add_safe() that don't want the overflow
      detection.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      LKML-Reference: <1279068988-21864-3-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      ce3bf7ab
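      A minimal sketch of timespec_add() as described, reusing the existing
      set_normalized_timespec() helper to handle the nanosecond carry:

      	/* add two timespecs; no overflow detection, unlike timespec_add_safe() */
      	static inline struct timespec timespec_add(struct timespec lhs,
      						   struct timespec rhs)
      	{
      		struct timespec ts_delta;

      		set_normalized_timespec(&ts_delta, lhs.tv_sec + rhs.tv_sec,
      					lhs.tv_nsec + rhs.tv_nsec);
      		return ts_delta;
      	}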
  2. 13 Apr 2010, 1 commit
    • time: Remove xtime_cache · 6a867a39
      Committed by John Stultz
      With the earlier logarithmic time accumulation patch, xtime will now
      always be within one "tick" of the current time, instead of possibly
      half a second off.
      
      This removes the need for the xtime_cache value, which always stored the
      time at the last interrupt, so this patch cleans that up by removing the
      xtime_cache-related code.
      
      This patch also addresses an issue with an earlier version of this change,
      where xtime_cache was normalizing xtime, which could in some cases not be
      valid (i.e. tv_nsec == NSEC_PER_SEC). This is fixed by handling the edge
      case in update_wall_time(), as sketched below.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Petr Titěra <P.Titera@century.cz>
      LKML-Reference: <1270589451-30773-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6a867a39
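      A sketch of the edge-case handling described above, assuming the
      update_wall_time() names of that era:

      	/* the round-up may have pushed tv_nsec to a full second;
      	 * normalize instead of storing an invalid xtime */
      	if (unlikely(xtime.tv_nsec >= NSEC_PER_SEC)) {
      		xtime.tv_nsec -= NSEC_PER_SEC;
      		xtime.tv_sec++;
      		second_overflow();
      	}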
  3. 23 Mar 2010, 1 commit
    • time: Fix accumulation bug triggered by long delay. · 830ec045
      Committed by John Stultz
      The logarithmic accumulation done in the timekeeping code has some
      overflow protection that limits the max shift value. That means it will
      take more than shift loops to accumulate all of the cycles. This causes
      the shift decrement to underflow, which causes the loop to never exit.
      
      The simplest fix would be to simply do:
      	if (shift)
      		shift--;
      
      However, that is not optimal: as we know the cycle offset is larger
      than the interval << shift, the above would make shift drop to zero,
      and then we would be spinning for quite a while, accumulating one
      interval-sized chunk at a time.
      
      Instead, this patch only decreases shift if the offset is smaller
      than cycle_interval << shift (see the sketch below). This makes sure we
      accumulate using the largest chunks possible without overflowing
      tick_length, and limits the number of iterations through the loop.
      
      This issue was found and reported by Sonic Zhang, who also tested the fix.
      Many thanks for the explanation and testing!
      Reported-by: Sonic Zhang <sonic.adi@gmail.com>
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Tested-by: Sonic Zhang <sonic.adi@gmail.com>
      LKML-Reference: <1268948850-5225-1-git-send-email-johnstul@us.ibm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      830ec045
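      A sketch of the fixed accumulation loop, assuming the timekeeper naming
      used by update_wall_time() at the time:

      	while (offset >= timekeeper.cycle_interval) {
      		offset = logarithmic_accumulation(offset, shift);
      		/* only drop to a smaller chunk size once the remaining
      		 * offset no longer covers a full interval << shift chunk */
      		if (offset < timekeeper.cycle_interval << shift)
      			shift--;
      	}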
  4. 10 Feb 2010, 1 commit
  5. 05 Feb 2010, 1 commit
    • clocksource: add suspend callback · c54a42b1
      Committed by Magnus Damm
      Add a clocksource suspend callback. This callback can be used by the
      clocksource driver to shut down and perform any kind of late suspend
      activity, even though the clocksource driver itself is a non-sysdev
      driver (see the sketch below).
      
      One example where this is useful is to fix the sh_cmt.c platform driver
      that today suspends using the platform bus and shuts down the clocksource
      too early.
      
      With this callback in place the sh_cmt driver will suspend using the
      clocksource and clockevent hooks and leave the platform device pm
      callbacks unused.
      Signed-off-by: Magnus Damm <damm@opensource.se>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: john stultz <johnstul@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      c54a42b1
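      A sketch of the new hook and how the core might invoke it late in the
      suspend path; the iteration details here are assumptions:

      	struct clocksource {
      		/* ... existing fields ... */
      		void (*suspend)(struct clocksource *cs);	/* new callback */
      	};

      	/* called from the timekeeping suspend path for all registered
      	 * clocksources */
      	void clocksource_suspend(void)
      	{
      		struct clocksource *cs;

      		list_for_each_entry_reverse(cs, &clocksource_list, list)
      			if (cs->suspend)
      				cs->suspend(cs);
      	}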
  6. 23 Dec 2009, 1 commit
    • Revert "time: Remove xtime_cache" · 83f57a11
      Committed by Linus Torvalds
      This reverts commit 7bc7d637, as
      requested by John Stultz. Quoting John:
      
       "Petr Titěra reported an issue where he saw odd atime regressions with
        2.6.33 where there were a full second worth of nanoseconds in the
        nanoseconds field.
      
        He also reviewed the time code and narrowed down the problem: unhandled
        overflow of the nanosecond field caused by rounding up the
        sub-nanosecond accumulated time.
      
        Details:
      
         * At the end of update_wall_time(), we currently round up the
        sub-nanosecond portion of accumulated time when storing it into xtime.
        This was added to avoid time inconsistencies caused when the
        sub-nanosecond portion was truncated when storing into xtime.
        Unfortunately we don't handle the possible second overflow caused by
        that rounding.
      
         * Previously the xtime_cache code hid this overflow by normalizing the
        xtime value when storing into the xtime_cache.
      
         * We could try to handle the second overflow after the rounding up, but
        since this affects the timekeeping's internal state, this would further
        complicate the next accumulation cycle, causing small errors in ntp
        steering. As much as I'd like to get rid of it, the xtime_cache code is
        known to work.
      
         * The correct fix is really to include the sub-nanosecond portion in the
        timekeeping accessor function, so we don't need to round up during
        accumulation. This would greatly simplify the accumulation code.
        Unfortunately, we can't do this safely until the last three
        non-GENERIC_TIME arches (sparc32, arm, cris) are converted (those
        patches are in -mm) and we kill off the spots where arches set xtime
        directly. This is all 2.6.34 material, so I think reverting the
        xtime_cache change is the best approach for now.
      
        Many thanks to Petr for both reporting and finding the issue!"
      Reported-by: Petr Titěra <P.Titera@century.cz>
      Requested-by: john stultz <johnstul@us.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      83f57a11
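      A simplified sketch of the interaction described above: the round-up at
      the end of update_wall_time() can leave tv_nsec at a full second, and
      the restored xtime_cache code hides that by normalizing when caching
      (names assumed from kernels of that era):

      	/* round up the sub-nanosecond remainder; tv_nsec may now
      	 * reach NSEC_PER_SEC */
      	xtime.tv_nsec = ((s64)timekeeper.xtime_nsec >> timekeeper.shift) + 1;

      	static void update_xtime_cache(u64 nsec)
      	{
      		xtime_cache = xtime;
      		timespec_add_ns(&xtime_cache, nsec);	/* normalizes tv_nsec */
      	}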
  7. 17 Nov 2009, 1 commit
    • timekeeping: Fix clock_gettime vsyscall time warp · 0696b711
      Committed by Lin Ming
      Since commit 0a544198 "timekeeping: Move NTP adjusted clock multiplier
      to struct timekeeper", the vsyscall clock multiplier is updated with
      the unmodified clock multiplier of the clock source and not with the
      NTP-adjusted multiplier of the timekeeper.
      
      This causes user space observable time warps:
      new CLOCK-warp maximum: 120 nsecs,  00000025c337c537 -> 00000025c337c4bf
      
      Add a new argument "mult" to update_vsyscall() and hand in the
      timekeeping-internal NTP-adjusted multiplier (see the sketch below).
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Cc: "Zhang Yanmin" <yanmin_zhang@linux.intel.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Tony Luck <tony.luck@intel.com>
      LKML-Reference: <1258436990.17765.83.camel@minggr.sh.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      0696b711
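      A sketch of the change as described; names are assumptions based on
      the log:

      	/* update_vsyscall() grows a "mult" argument ... */
      	void update_vsyscall(struct timespec *wall_time,
      			     struct clocksource *clock, u32 mult);

      	/* ... and the timekeeping core hands in its NTP-adjusted
      	 * multiplier instead of the raw clocksource multiplier */
      	update_vsyscall(&xtime, timekeeper.clock, timekeeper.mult);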
  8. 14 Nov 2009, 1 commit
    • nohz: Prevent clocksource wrapping during idle · 98962465
      Committed by Jon Hunter
      The dynamic tick allows the kernel to sleep for periods longer than a
      single tick, but it currently does not limit the sleep time. In the
      worst case the kernel could sleep longer than the wrap-around time of
      the timekeeping clock source, which would result in losing track of
      time.
      
      Prevent this by limiting the sleep time to the safe maximum of the
      current timekeeping clock source. The value is calculated when the
      clock source is registered (see the sketch below).
      
      [ tglx: simplified the code a bit and massaged the commit msg ]
      Signed-off-by: Jon Hunter <jon-hunter@ti.com>
      Cc: John Stultz <johnstul@us.ibm.com>
      LKML-Reference: <1250617512-23567-2-git-send-email-jon-hunter@ti.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      98962465
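      A sketch of the registration-time calculation, following the
      description above; helper names are assumptions:

      	static u64 clocksource_max_deferment(struct clocksource *cs)
      	{
      		u64 max_nsecs, max_cycles;

      		/* cap cycles so that cycles * mult cannot overflow 64 bits */
      		max_cycles = 1ULL << (63 - (ilog2(cs->mult) + 1));
      		/* and so that the hardware counter itself cannot wrap */
      		max_cycles = min_t(u64, max_cycles, (u64)cs->mask);
      		max_nsecs = clocksource_cyc2ns(max_cycles, cs->mult, cs->shift);

      		/* leave a small safety margin below the theoretical maximum */
      		return max_nsecs - (max_nsecs >> 5);
      	}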
  9. 12 Oct 2009, 1 commit
  10. 05 Oct 2009, 2 commits
    • time: Remove xtime_cache · 7bc7d637
      Committed by john stultz
      With the prior logarithmic time accumulation patch, xtime will now
      always be within one "tick" of the current time, instead of
      possibly half a second off.
      
      This removes the need for the xtime_cache value, which always
      stored the time at the last interrupt, so this patch cleans that up
      by removing the xtime_cache-related code.
      
      This is a bit simpler, but still could use some wider testing.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: John Kacur <jkacur@redhat.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1254525855.7741.95.camel@localhost.localdomain>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      7bc7d637
    • time: Implement logarithmic time accumulation · a092ff0f
      Committed by john stultz
      Accumulating one tick at a time works well unless we're using NOHZ.
      Then it can be an issue, since we may have to run through the loop
      a few thousand times, which can increase the latency caused by the
      timer interrupt.
      
      The current solution was to accumulate in half-second intervals
      with NOHZ. This kept the number of loops down; however, it did
      slightly change how we make NTP adjustments. While not an issue
      for NTPd users, as NTPd makes adjustments over a longer period of
      time, other adjtimex() users have noticed the half-second
      granularity with which we can apply frequency changes to the clock.
      
      For instance, if an application tries to apply a 100ppm frequency
      correction for 20ms to correct a 2us offset, with NOHZ they either
      get no correction, or a 50us correction.
      
      Now, there will always be some granularity error when applying
      frequency corrections. However, users sensitive to this error have
      seen a 50-500x increase with NOHZ compared to running without
      NOHZ.
      
      So I figured I'd try another approach than simply increasing the
      interval. My approach is to consume the time interval
      logarithmically (see the sketch below). This reduces the number of
      times through the loop needed, keeping latency down, while still
      preserving the original granularity error for adjtimex() changes.
      
      Further, this change allows us to remove the xtime_cache code
      (patch to follow), as xtime is always within one tick of the
      current time, instead of the half-second updates it saw before.
      
      An earlier version of this patch has been shipping to x86 users in
      the Red Hat MRG releases for a while without issue, but I've reworked
      this version to be even more careful about avoiding possible
      overflows if the shift value gets too large.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Reviewed-by: John Kacur <jkacur@redhat.com>
      Cc: Clark Williams <williams@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      LKML-Reference: <1254525473.7741.88.camel@localhost.localdomain>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a092ff0f
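      A trimmed sketch of the logarithmic helper, assuming the timekeeper
      naming of that era (raw-time and NTP-error bookkeeping omitted):

      	static cycle_t logarithmic_accumulation(cycle_t offset, int shift)
      	{
      		u64 nsecps = (u64)NSEC_PER_SEC << timekeeper.shift;

      		/* nothing to do if less than a shifted interval remains */
      		if (offset < timekeeper.cycle_interval << shift)
      			return offset;

      		/* accumulate 2^shift intervals in one step */
      		offset -= timekeeper.cycle_interval << shift;
      		timekeeper.clock->cycle_last += timekeeper.cycle_interval << shift;
      		timekeeper.xtime_nsec += timekeeper.xtime_interval << shift;

      		while (timekeeper.xtime_nsec >= nsecps) {
      			timekeeper.xtime_nsec -= nsecps;
      			xtime.tv_sec++;
      			second_overflow();
      		}
      		return offset;
      	}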
  11. 25 Aug 2009, 1 commit
  12. 22 Aug 2009, 1 commit
    • time: Introduce CLOCK_REALTIME_COARSE · da15cfda
      Committed by john stultz
      After talking with some application writers who want very fast, but not
      fine-grained, timestamps, I decided to try to implement new clock_ids
      for clock_gettime(): CLOCK_REALTIME_COARSE and CLOCK_MONOTONIC_COARSE,
      which return the time at the last tick. This is very fast as we don't
      have to access any hardware (which can be very painful if you're using
      something like the acpi_pm clocksource), and we can even use the vdso
      clock_gettime() method to avoid the syscall. The only trade-off is that
      you only get low-res, tick-grained time resolution.
      
      This isn't a new idea; I know Ingo has a patch in the -rt tree that made
      the vsyscall gettimeofday() return coarse-grained time when the
      vsyscall64 sysctl was set to 2. However, that affects all applications
      on a system.
      
      With this method, applications can choose the proper speed/granularity
      trade-off for themselves, as in the usage sketch below.
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: nikolag@ca.ibm.com
      Cc: Darren Hart <dvhltc@us.ibm.com>
      Cc: arjan@infradead.org
      Cc: jonathan@jonmasters.org
      LKML-Reference: <1250734414.6897.5.camel@localhost.localdomain>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      da15cfda
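      A minimal userspace usage sketch:

      	#include <stdio.h>
      	#include <time.h>

      	int main(void)
      	{
      		struct timespec ts;

      		/* tick-granularity timestamp, no clocksource hardware access */
      		clock_gettime(CLOCK_REALTIME_COARSE, &ts);
      		printf("coarse: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);
      		return 0;
      	}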
  13. 15 Aug 2009, 11 commits
  14. 07 Jul 2009, 2 commits
    • timekeeping: Move ktime_get() functions to timekeeping.c · a40f262c
      Committed by Thomas Gleixner
      The ktime_get() functions for GENERIC_TIME=n are still located in
      hrtimer.c. Move them to time/timekeeping.c where they belong.
      
      LKML-Reference: <new-submission>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      a40f262c
    • timekeeping: optimized ktime_get[_ts] for GENERIC_TIME=y · 951ed4d3
      Committed by Martin Schwidefsky
      The generic ktime_get function defined in kernel/hrtimer.c is suboptimal
      for GENERIC_TIME=y:
      
       0)               |  ktime_get() {
       0)               |    ktime_get_ts() {
       0)               |      getnstimeofday() {
       0)               |        read_tod_clock() {
       0)   0.601 us    |        }
       0)   1.938 us    |      }
       0)               |      set_normalized_timespec() {
       0)   0.602 us    |      }
       0)   4.375 us    |    }
       0)   5.523 us    |  }
      
      Overall there are two read_seqbegin/read_seqretry loops and a lot of
      unnecessary struct timespec calculations. ktime_get returns a nanosecond
      value which is the sum of xtime, wall_to_monotonic and the nanosecond
      delta from the clock source.
      
      ktime_get can be optimized for GENERIC_TIME=y. The new version only calls
      clocksource_read:
      
       0)               |  ktime_get() {
       0)               |    read_tod_clock() {
       0)   0.610 us    |    }
       0)   1.977 us    |  }
      
      It uses a single read_seqbegin/read_seqretry loop and just adds
      everything to a nanosecond value (see the sketch below).
      
      ktime_get_ts is optimized in a similar fashion.
      
      [ tglx: added WARN_ON(timekeeping_suspended) as in getnstimeofday() ]
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Acked-by: john stultz <johnstul@us.ibm.com>
      LKML-Reference: <20090707112728.3005244d@skybase>
      Signed-off-by: NThomas Gleixner <tglx@linutronix.de>
      951ed4d3
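      A sketch of the optimized ktime_get(), reconstructed from the
      description above (pre-refactor globals assumed):

      	ktime_t ktime_get(void)
      	{
      		cycle_t cycle_now, cycle_delta;
      		unsigned int seq;
      		s64 secs, nsecs;

      		do {
      			seq = read_seqbegin(&xtime_lock);
      			secs = xtime.tv_sec + wall_to_monotonic.tv_sec;
      			nsecs = xtime.tv_nsec + wall_to_monotonic.tv_nsec;

      			/* single clocksource read for the delta */
      			cycle_now = clocksource_read(clock);
      			cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
      			nsecs += cyc2ns(clock, cycle_delta);
      		} while (read_seqretry(&xtime_lock, seq));

      		return ktime_add_ns(ktime_set(secs, 0), nsecs);
      	}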
  15. 15 May 2009, 1 commit
    • sched, timers: move calc_load() to scheduler · dce48a84
      Committed by Thomas Gleixner
      Dimitri Sivanich noticed that xtime_lock is held write locked across
      calc_load() which iterates over all online CPUs. That can cause long
      latencies for xtime_lock readers on large SMP systems. 
      
      The load average calculation is a rough estimate anyway, so there is
      no real need to protect the readers vs. the update. It's not a problem
      when the avenrun array is updated while a reader copies the values.
      
      Instead of iterating over all online CPUs let the scheduler_tick code
      update the number of active tasks shortly before the avenrun update
      happens. The avenrun update itself is handled by the CPU which calls
      do_timer().
      
      [ Impact: reduce xtime_lock write locked section ]
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      dce48a84
  16. 02 May 2009, 1 commit
  17. 22 Apr 2009, 1 commit
  18. 31 Dec 2008, 1 commit
    • sched_clock: prevent scd->clock from moving backwards, take #2 · 1c5745aa
      Committed by Thomas Gleixner
      Redo:
      
        5b7dba4f: sched_clock: prevent scd->clock from moving backwards
      
      which had to be reverted due to s2ram hangs:
      
        ca7e716c: Revert "sched_clock: prevent scd->clock from moving backwards"
      
      ... this time with resume restoring GTOD later in the sequence
      taken into account as well.
      
      The "timekeeping_suspended" flag is not very nice but we cannot call into
      GTOD before it has been properly resumed and the scheduler will run very
      early in the resume sequence.
      
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1c5745aa
  19. 04 Dec 2008, 1 commit
    • time: catch xtime_nsec underflows and fix them · 6c9bacb4
      Committed by john stultz
      Impact: fix time warp bug
      
      Alex Shi and Yanmin Zhang have been noticing occasional time
      inconsistencies recently. Through their great diagnosis, they found that
      the xtime_nsec value used in update_wall_time was occasionally going
      negative. After looking through the code for a while, I realized we have
      the possibility of an underflow when three conditions are met in
      update_wall_time():
      
        1) We have accumulated a second's worth of nanoseconds, so we
           increment xtime.tv_sec and appropriately decrement xtime_nsec.
           (This doesn't cause xtime_nsec to go negative, but it can cause
           it to be small.)
      
        2) The remaining offset value is large, but just slightly less than
           cycle_interval.
      
        3) clocksource_adjust() is speeding up the clock, causing a
           corrective amount (compensating for the increase in the multiplier
           being multiplied against the unaccumulated offset value) to be
           subtracted from xtime_nsec.
      
      This can cause xtime_nsec to underflow.
      
      Unfortunately, since we notify the NTP subsystem via second_overflow()
      whenever we accumulate a full second, and this affects the error
      accumulation that has already occurred, we cannot simply revert the
      accumulated second from xtime nor move the second accumulation to after
      the clocksource_adjust call without a change in behavior.
      
      This leaves us with (at least) two options:
      
      1) Simply return from clocksource_adjust() without making a change if we
         notice the adjustment would cause xtime_nsec to go negative.
      
      This would work, but I'm concerned that if a large adjustment were needed
      (due to the error being large), it may be possible to get stuck with an
      ever-increasing error that becomes too large to correct (since it may
      always force xtime_nsec negative). This may just be paranoia on my part.
      
      2) Catch xtime_nsec if it is negative, then add the amount by which it
         is negative back to both xtime_nsec and the error (sketched below).
      
      This second method is consistent with how we've handled earlier rounding
      issues, and also has the benefit that the error being added is always in
      the opposite direction and always equal to or smaller than the correction
      being applied. So the risk of a corner case where things get out of
      control is lessened.
      
      This patch fixes bug 11970, as tested by Yanmin Zhang
      http://bugzilla.kernel.org/show_bug.cgi?id=11970
      
      Reported-by: alex.shi@intel.com
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Acked-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      6c9bacb4
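      A sketch of option 2 as described, with field names assumed from
      kernels of that era:

      	if (unlikely((s64)clock->xtime_nsec < 0)) {
      		s64 neg = -(s64)clock->xtime_nsec;

      		/* clamp back to zero ... */
      		clock->xtime_nsec = 0;
      		/* ... and feed the shortfall back into the error term */
      		clock->error += neg << (NTP_SCALE_SHIFT - clock->shift);
      	}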
  20. 24 Sep 2008, 1 commit
    • timekeeping: fix rounding problem during clock update · 5cd1c9c5
      Committed by Roman Zippel
      Due to a rounding problem during a clock update it's possible for readers
      to observe the clock jumping back by 1 nsec.  The following simplified
      example demonstrates the problem:
      
      cycle	xtime
      0	0
      1000	999999.6
      2000	1999999.2
      3000	2999998.8
      ...
      
      1500	= 1499999.4
      	= 0.0 + 1499999.4
      	= 999999.6 + 499999.8
      
      When reading the clock, only the full nanosecond part is used, while
      timekeeping internally keeps nanosecond fractions.  If the clock is now
      updated at cycle 1500 here, a nanosecond is missing due to the truncation.
      
      The simple fix is to round up the xtime value during the update (see the
      sketch below); this also changes the distance to the reference time, but
      the adjustment will automatically take care that it stays under control.
      Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5cd1c9c5
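      A sketch of the round-up during the update, per the description above
      (names assumed):

      	/* round the visible nanosecond value up ... */
      	xtime.tv_nsec = ((s64)clock->xtime_nsec >> clock->shift) + 1;
      	/* ... and keep the now negative fraction, so the adjustment
      	 * code automatically pulls the clock back under control */
      	clock->xtime_nsec -= (s64)xtime.tv_nsec << clock->shift;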
  21. 21 Aug 2008, 2 commits
    • clocksource: introduce CLOCK_MONOTONIC_RAW · 2d42244a
      Committed by John Stultz
      In talking with Josip Loncaric about his work on clock synchronization
      (see btime.sf.net), he mentioned that for really close synchronization
      it is useful to have access to "hardware time", that is, a notion of
      time that is not in any way adjusted by the clock slewing done to keep
      close time sync.
      
      Part of the issue is that if we are using the kernel's NTP-adjusted
      representation of time in order to measure how we should correct time,
      we can run into what Paul McKenney aptly described as "painting a road
      using the lines we're painting as the guide".
      
      I had been thinking of a similar problem, and was trying to come up with a
      way to give users access to a purely hardware based time representation
      that avoided users having to know the underlying frequency and mask values
      needed to deal with the wide variety of possible underlying hardware
      counters.
      
      My solution is to introduce CLOCK_MONOTONIC_RAW.  This exposes a
      nanosecond-based time value that increments starting at bootup and has
      no frequency adjustments made to it whatsoever.
      
      The time is accessed from userspace via the posix_clock_gettime()
      syscall, passing CLOCK_MONOTONIC_RAW as the clock_id (see the usage
      sketch below).
      Signed-off-by: John Stultz <johnstul@us.ibm.com>
      Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2d42244a
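      A minimal userspace usage sketch, reading the raw clock next to the
      NTP-slewed monotonic clock:

      	#include <stdio.h>
      	#include <time.h>

      	int main(void)
      	{
      		struct timespec raw, mono;

      		clock_gettime(CLOCK_MONOTONIC_RAW, &raw);  /* no slewing */
      		clock_gettime(CLOCK_MONOTONIC, &mono);     /* NTP-adjusted */
      		printf("raw  %ld.%09ld\nmono %ld.%09ld\n",
      		       (long)raw.tv_sec, raw.tv_nsec,
      		       (long)mono.tv_sec, mono.tv_nsec);
      		return 0;
      	}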
    • clocksource: introduce clocksource_forward_now() · 9a055117
      Committed by Roman Zippel
      To keep the raw monotonic patch simple, first introduce
      clocksource_forward_now() (sketched below), which takes care of the
      offset since the last update_wall_time() call and adds it to the clock,
      so there is no longer a need to deal with it explicitly at the various
      places that need to make significant changes to the clock.
      
      This also gets rid of timekeeping_suspend_nsecs: instead of waiting
      until resume, the value is accumulated during suspend. In the end there
      is only a single user of __get_nsec_offset() left, so I integrated it
      back into getnstimeofday().
      Signed-off-by: Roman Zippel <zippel@linux-m68k.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9a055117
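      A sketch of clocksource_forward_now(), reconstructed from the
      description (pre-refactor globals assumed):

      	static void clocksource_forward_now(void)
      	{
      		cycle_t cycle_now, cycle_delta;
      		s64 nsec;

      		cycle_now = clocksource_read(clock);
      		cycle_delta = (cycle_now - clock->cycle_last) & clock->mask;
      		clock->cycle_last = cycle_now;

      		/* fold the time elapsed since the last update into the clock */
      		nsec = cyc2ns(clock, cycle_delta);
      		timespec_add_ns(&xtime, nsec);
      	}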
  22. 01 May 2008, 3 commits
  23. 20 Apr 2008, 1 commit
    • x86: tsc prevent time going backwards · d8bb6f4c
      Committed by Thomas Gleixner
      We already catch most of the TSC problems by sanity checks, but there
      is a subtle bug which has been in the code forever. This can cause
      time jumps in the range of hours.
      
      This was reported in:
           http://lkml.org/lkml/2007/8/23/96
      and
           http://lkml.org/lkml/2008/3/31/23
      
      I was able to reproduce the problem with a gettimeofday loop test on a
      dual-core and a quad-core machine which both have synchronized
      TSCs. The TSCs seem not to be perfectly in sync though, but the
      kernel is not able to detect the slight delta in the sync check. Still
      there exists an extremely small window where this delta can be observed
      as a really big time jump. So far I was only able to reproduce this
      with the vsyscall gettimeofday implementation, but in theory this
      might be observable with the syscall-based version as well.
      
      CPU 0 updates the clock source variables under the xtime/vsyscall lock,
      and CPU1, where the TSC is slightly behind CPU0, is reading the time
      right after the seqlock was unlocked.
      
      The clocksource reference data was updated with the TSC from CPU0 and
      the value which is read from TSC on CPU1 is less than the reference
      data. This results in a huge delta value due to the unsigned
      subtraction of the TSC value and the reference value. This algorithm
      cannot be changed due to the support of wrapping clock sources like
      the pm timer.
      
      The huge delta is converted to nanoseconds and added to xtime, which
      is then observable by the caller. The next gettimeofday call on CPU1
      will show the correct time again, as the TSC has by then advanced above
      the reference value.
      
      To prevent this TSC-specific wreckage we need to compare the TSC value
      against the reference value and return the latter when it is larger
      than the actual TSC value (see the sketch below).
      
      I pondered marking the TSC unstable when the readout is smaller than
      the reference value, but this would render an otherwise good and fast
      clocksource unusable without a really good reason.
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d8bb6f4c
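      A sketch of the guard described above, applied in the TSC read
      function (names assumed):

      	static cycle_t read_tsc(void)
      	{
      		cycle_t ret = (cycle_t)get_cycles();

      		/* never return a value below the last update reference, so
      		 * the unsigned delta in the timekeeping code cannot blow up */
      		return ret >= clocksource_tsc.cycle_last ?
      			ret : clocksource_tsc.cycle_last;
      	}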