1. 15 Jul, 2016 · 1 commit
  2. 12 Jul, 2016 · 2 commits
  3. 13 Apr, 2016 · 1 commit
  4. 04 Mar, 2016 · 1 commit
    • x86/tsc: Always Running Timer (ART) correlated clocksource · f9677e0f
      Christopher S. Hall committed
      On modern Intel systems the TSC is derived from the new Always Running
      Timer (ART). ART can be captured simultaneously with the capture of
      audio and network device clocks, allowing a correlation between timebases
      to be constructed. Upon capture, the driver converts the captured ART
      value to the appropriate system clock using the correlated clocksource
      mechanism.
      
      On systems that support ART, a new CPUID leaf (0x15) returns parameters
      "m" and "n" such that:
      
      TSC_value = (ART_value * m) / n + k [n >= 1]
      
      [k is an offset that can be adjusted by a privileged agent. The
      IA32_TSC_ADJUST MSR is an example of an interface to adjust k.
      See Section 17.14.4 of the Intel SDM for more details.]
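      
      As an illustration of the formula above, here is a hedged userspace
      sketch that reads m and n from CPUID leaf 0x15 (EBX is the numerator m,
      EAX the denominator n) and converts a captured ART value. The ART value
      and the offset k are hypothetical placeholders; __get_cpuid() is the
      GCC/Clang builtin wrapper from <cpuid.h>:
      
        #include <cpuid.h>
        #include <stdint.h>
        #include <stdio.h>
        
        int main(void)
        {
            unsigned int eax, ebx, ecx, edx;
        
            /* CPUID.15H: EAX = denominator (n), EBX = numerator (m) */
            if (!__get_cpuid(0x15, &eax, &ebx, &ecx, &edx) || !eax || !ebx) {
                fprintf(stderr, "ART not enumerated on this CPU\n");
                return 1;
            }
        
            uint64_t art_value = 123456789ULL; /* hypothetical captured ART value */
            uint64_t k = 0;                    /* offset; adjustable via IA32_TSC_ADJUST */
            uint64_t tsc_value = art_value * ebx / eax + k;
        
            printf("TSC = %llu (m = %u, n = %u)\n",
                   (unsigned long long)tsc_value, ebx, eax);
            return 0;
        }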
      
      Cc: Prarit Bhargava <prarit@redhat.com>
      Cc: Richard Cochran <richardcochran@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: kevin.b.stanton@intel.com
      Cc: kevin.j.clarke@intel.com
      Cc: hpa@zytor.com
      Cc: jeffrey.t.kirsher@intel.com
      Cc: netdev@vger.kernel.org
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Christopher S. Hall <christopher.s.hall@intel.com>
      [jstultz: Tweaked to fix build issue, also reworked math for
      64bit division on 32bit systems, as well as !CONFIG_CPU_FREQ build
      fixes]
      Signed-off-by: John Stultz <john.stultz@linaro.org>
      f9677e0f
  5. 04 Aug, 2015 · 1 commit
  6. 06 Jul, 2015 · 4 commits
  7. 20 Feb, 2014 · 1 commit
    • x86, tsc: Fallback to normal calibration if fast MSR calibration fails · 5f0e0309
      Thomas Gleixner committed
      If we cannot calibrate the TSC via MSR-based calibration,
      try_msr_calibrate_tsc() stores zero to fast_calibrate and returns that
      to the caller. This value then gets propagated further to the
      clockevents code, resulting in a division-by-zero oops like the one below:
      
       divide error: 0000 [#1] PREEMPT SMP
       Modules linked in:
       CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W    3.13.0+ #47
       task: ffff880075508000 ti: ffff880075506000 task.ti: ffff880075506000
       RIP: 0010:[<ffffffff810aec14>]  [<ffffffff810aec14>] clockevents_config.part.3+0x24/0xa0
       RSP: 0000:ffff880075507e58  EFLAGS: 00010246
       RAX: ffffffffffffffff RBX: ffff880079c0cd80 RCX: 0000000000000000
       RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffffffffff
       RBP: ffff880075507e70 R08: 0000000000000001 R09: 00000000000000be
       R10: 00000000000000bd R11: 0000000000000003 R12: 000000000000b008
       R13: 0000000000000008 R14: 000000000000b010 R15: 0000000000000000
       FS:  0000000000000000(0000) GS:ffff880079c00000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
       CR2: ffff880079fff000 CR3: 0000000001c0b000 CR4: 00000000001006f0
       Stack:
        ffff880079c0cd80 000000000000b008 0000000000000008 ffff880075507e88
        ffffffff810aecb0 ffff880079c0cd80 ffff880075507e98 ffffffff81030168
        ffff880075507ed8 ffffffff81d1104f 00000000000000c3 0000000000000000
       Call Trace:
        [<ffffffff810aecb0>] clockevents_config_and_register+0x20/0x30
        [<ffffffff81030168>] setup_APIC_timer+0xc8/0xd0
        [<ffffffff81d1104f>] setup_boot_APIC_clock+0x4cc/0x4d8
        [<ffffffff81d0f5de>] native_smp_prepare_cpus+0x3dd/0x3f0
        [<ffffffff81d02ee9>] kernel_init_freeable+0xc3/0x205
        [<ffffffff8177c910>] ? rest_init+0x90/0x90
        [<ffffffff8177c91e>] kernel_init+0xe/0x120
        [<ffffffff8178deec>] ret_from_fork+0x7c/0xb0
        [<ffffffff8177c910>] ? rest_init+0x90/0x90
      
      Prevent this from happening by:
       1) Modifying try_msr_calibrate_tsc() to return the calibration value,
          or zero if it fails.
       2) Checking this return value in native_calibrate_tsc() and, if it is
          zero, falling back to the normal non-MSR-based calibration (see the
          sketch below).
      
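      A minimal sketch of the resulting call pattern, not the literal kernel
      code: slow_calibrate_tsc() is a hypothetical stand-in for the normal
      PIT/HPET-based calibration path.
      
        unsigned long native_calibrate_tsc(void)
        {
            /* MSR-based fast path: now returns the TSC frequency in kHz,
             * or 0 if MSR-based calibration is not usable. */
            unsigned long fast_calibrate = try_msr_calibrate_tsc();
        
            if (fast_calibrate)
                return fast_calibrate;
        
            /* Fall back to the normal, slower calibration. */
            return slow_calibrate_tsc();
        }
      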
      [mw: Added subject and changelog]
      Reported-and-tested-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: Bin Gao <bin.gao@linux.intel.com>
      Cc: One Thousand Gnomes <gnomes@lxorguk.ukuu.org.uk>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Link: http://lkml.kernel.org/r/1392810750-18660-1-git-send-email-mika.westerberg@linux.intel.com
      Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      5f0e0309
  8. 16 Jan, 2014 · 1 commit
  9. 23 Jul, 2013 · 1 commit
  10. 20 Mar, 2012 · 1 commit
    • x86: kvmclock: abstract save/restore sched_clock_state · b74f05d6
      Marcelo Tosatti committed
      Upon resume from hibernation, CPU 0's hvclock area contains the old
      values for system_time and tsc_timestamp. It is necessary for the
      hypervisor to update these values with up-to-date ones before the CPU
      uses them.
      
      Abstract TSC's save/restore sched_clock_state functions and use
      restore_state to write to KVM_SYSTEM_TIME MSR, forcing an update.
      
      Also move restore_sched_clock_state before __restore_processor_state,
      since the latter calls CONFIG_LOCK_STAT's lockstat_clock (which also
      uses the TSC). Thanks to Igor Mammedov for tracking it down.
      
      Fixes suspend-to-disk with kvmclock.
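      
      A minimal sketch of the abstraction described above, assuming ops and
      function names along these lines (the actual kernel code may differ in
      detail):
      
        /* TSC save/restore becomes overridable platform ops. */
        struct x86_platform_ops {
            void (*save_sched_clock_state)(void);
            void (*restore_sched_clock_state)(void);
        };
        
        /* kvmclock's restore op rewrites the KVM_SYSTEM_TIME MSR (via clock
         * re-registration), forcing the hypervisor to refresh system_time
         * and tsc_timestamp before the CPU reads them. */
        static void kvm_restore_sched_clock_state(void)
        {
            kvm_register_clock("primary cpu clock, resume");
        }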
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Avi Kivity <avi@redhat.com>
      b74f05d6
  11. 06 Dec, 2011 · 1 commit
  12. 15 Jul, 2011 · 1 commit
  13. 24 May, 2011 · 1 commit
  14. 18 Mar, 2011 · 1 commit
  15. 20 Aug, 2010 · 1 commit
    • x86, tsc, sched: Recompute cyc2ns_offset's during resume from sleep states · cd7240c0
      Suresh Siddha committed
      TSCs get reset after suspend/resume (even on CPUs with an invariant
      TSC, which runs at a constant rate across ACPI P-, C- and T-states). And
      on some systems the BIOS seems to reinit the TSC to an arbitrarily large
      value (still synced across CPUs) during resume.
      
      This leads to a scenario where the scheduler's rq->clock
      (sched_clock_cpu()) is less than rq->age_stamp (introduced in 2.6.32).
      That makes scale_rt_power() return a big value, and the resulting big
      group power set by update_group_power() causes improper load balancing
      between busy and idle CPUs after suspend/resume.
      
      This resulted in multi-threaded workloads (like kernel compilation)
      running slower after a suspend/resume cycle on Core i5 laptops.
      
      Fix this by recomputing the cyc2ns_offset values during resume (sketched
      below), so that sched_clock() continues from the point where it left off
      at suspend.
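      
      A simplified sketch of the idea, assuming the usual cyc2ns arithmetic
      (IRQ disabling, stability checks, and the exact per-CPU accessors are
      elided; names follow the kernel's style but this is not the literal
      patch):
      
        /* sched_clock() is essentially:
         *   ns = ((tsc * cyc2ns_scale) >> CYC2NS_SCALE_FACTOR) + cyc2ns_offset
         *
         * Save sched_clock() at suspend; on resume pick cyc2ns_offset so the
         * clock continues from that value despite the reset TSC. */
        static unsigned long long cyc2ns_suspend;
        
        static void tsc_save_sched_clock_state(void)
        {
            cyc2ns_suspend = sched_clock();     /* ns value to resume from */
        }
        
        static void tsc_restore_sched_clock_state(void)
        {
            unsigned long long offset;
            int cpu;
        
            this_cpu_write(cyc2ns_offset, 0);   /* read the raw cyc2ns(tsc) */
            offset = cyc2ns_suspend - sched_clock();
        
            for_each_possible_cpu(cpu)          /* same offset on every CPU */
                per_cpu(cyc2ns_offset, cpu) = offset;
        }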
      Reported-by: Florian Pritz <flo@xssn.at>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: <stable@kernel.org> # [v2.6.32+]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1282262618.2675.24.camel@sbsiddha-MOBL3.sc.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      cd7240c0
  16. 31 Aug, 2009 · 1 commit
  17. 10 Nov, 2008 · 1 commit
    • x86: clean up vget_cycles() · 4fcc50ab
      Ingo Molnar committed
      Impact: remove unused variable
      
      I forgot to remove the now unused "cycles_t cycles" parameter from
      vget_cycles() - which triggers build warnings as tsc.h is included
      in a number of files.
      
      Remove it.
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4fcc50ab
  18. 09 Nov, 2008 · 1 commit
  19. 08 Nov, 2008 · 1 commit
    • sched: improve sched_clock() performance · 0d12cdd5
      Ingo Molnar committed
      In scheduler-intensive workloads, native_read_tsc() accounts for 20%
      of the system overhead:
      
       659567 system_call                              41222.9375
       686796 schedule                                 435.7843
       718382 __switch_to                              665.1685
       823875 switch_mm                                4526.7857
       1883122 native_read_tsc                          55385.9412
       9761990 total                                      2.8468
      
      This is in large part due to the rdtsc_barrier() that is done before
      and after reading the TSC.
      
      But sched_clock() is not a precise clock in the GTOD sense; using such
      barriers there is completely pointless. So remove the barriers from
      sched_clock() and only use them in vget_cycles().
      
      This improves lat_ctx performance by about 5%.
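      
      For illustration, a hedged sketch of the two flavours of TSC read (the
      barrier is shown here simply as LFENCE; the kernel actually selects
      MFENCE or LFENCE per CPU vendor via alternatives):
      
        #include <stdint.h>
        
        /* Unordered read: good enough for sched_clock(), which tolerates
         * small reorderings. */
        static inline uint64_t rdtsc_plain(void)
        {
            uint32_t lo, hi;
            asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
            return ((uint64_t)hi << 32) | lo;
        }
        
        /* Ordered read: keeps the barriers for precise, GTOD-style reads
         * such as vget_cycles(). */
        static inline uint64_t rdtsc_ordered(void)
        {
            uint32_t lo, hi;
            asm volatile("lfence; rdtsc; lfence"
                         : "=a" (lo), "=d" (hi) : : "memory");
            return ((uint64_t)hi << 32) | lo;
        }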
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d12cdd5
  20. 23 Oct, 2008 · 2 commits
  21. 23 Jul, 2008 · 1 commit
    • x86: consolidate header guards · 77ef50a5
      Vegard Nossum committed
      This patch is the result of an automatic script that consolidates the
      format of all the headers in include/asm-x86/.
      
      The format (an example follows the list):
      
      1. No leading underscore. Names with leading underscores are reserved.
      2. Pathname components are separated by two underscores. So we can
         distinguish between mm_types.h and mm/types.h.
      3. Everything except letters and numbers is turned into single
         underscores.
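      
      For example, under these rules the guard for include/asm-x86/tsc.h
      comes out as:
      
        #ifndef ASM_X86__TSC_H
        #define ASM_X86__TSC_H
        
        /* ... declarations ... */
        
        #endif /* ASM_X86__TSC_H */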
      Signed-off-by: Vegard Nossum <vegard.nossum@gmail.com>
      77ef50a5
  22. 09 Jul, 2008 · 2 commits
  23. 29 Apr, 2008 · 1 commit
    • x86: vget_cycles() __always_inline · 97520825
      Hugh Dickins committed
      Mark vget_cycles() as __always_inline, so gcc is never tempted to make
      the vsyscall vread_tsc() dive into kernel text, with a resulting SIGSEGV.
      
      This was a self-inflicted wound: I've not seen that happen with unhacked
      sources; but for debug reasons I'd changed my x86/Makefile to compile
      no-unit-at-a-time, and that in conjunction with OPTIMIZE_INLINING=y
      ended up with vget_cycles() in kernel text.  Perhaps it can happen
      in other ways: safer to use __always_inline.
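      
      A minimal sketch of the pattern (simplified; native_read_tsc() stands
      in for the real TSC read):
      
        /* vread_tsc() executes from the vsyscall page in a user context, so
         * everything it uses must be inlined into it. Plain "inline" is only
         * a hint that OPTIMIZE_INLINING lets gcc ignore; __always_inline is
         * binding. */
        static __always_inline unsigned long long vget_cycles(void)
        {
            return (unsigned long long)native_read_tsc();
        }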
      Signed-off-by: Hugh Dickins <hugh@veritas.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      97520825
  24. 25 Apr, 2008 · 1 commit
  25. 20 Apr, 2008 · 1 commit
    • x86: implement prctl PR_GET_TSC and PR_SET_TSC · 529e25f6
      Erik Bosman committed
      This patch implements the PR_GET_TSC and PR_SET_TSC prctl()
      commands on the x86 platform (both 32- and 64-bit). These
      commands control the ability to read the timestamp counter
      from userspace (the RDTSC instruction).
      
      While the RDTSC instruction is a useful profiling tool,
      it is also the source of some non-determinism in ring 3.
      For deterministic-replay applications it is useful to be
      able to trap and emulate (and record the outcome of) this
      instruction.
      
      This patch reuses the code previously used to disable the timestamp
      counter for the SECCOMP framework. A side effect of this
      patch is that the SECCOMP environment will now also disable
      the timestamp counter on x86_64, due to the addition of the
      TIF_NOTSC define on this platform.
      
      The code which enables/disables the RDTSC instruction during
      context switches is in the __switch_to_xtra function, which
      already handles other unusual conditions, so normal
      performance should not have to suffer from this change.
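      
      A small userspace sketch of the new interface (PR_GET_TSC/PR_SET_TSC
      and the PR_TSC_* constants come from <linux/prctl.h>, pulled in by
      <sys/prctl.h>):
      
        #include <stdio.h>
        #include <sys/prctl.h>
        
        int main(void)
        {
            int state;
        
            if (prctl(PR_GET_TSC, &state, 0, 0, 0) == 0)
                printf("RDTSC currently %s\n",
                       state == PR_TSC_ENABLE ? "enabled" : "trapping");
        
            /* From here on, RDTSC in this task raises SIGSEGV instead of
             * returning a timestamp; a deterministic-replay tool would
             * install a handler and emulate the instruction. */
            prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0);
            return 0;
        }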
      Signed-off-by: Erik Bosman <ejbosman@cs.vu.nl>
      Acked-by: Arjan van de Ven <arjan@linux.intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      529e25f6
  26. 17 Apr, 2008 · 1 commit
  27. 30 Jan, 2008 · 7 commits
  28. 13 Oct, 2007 · 1 commit