1. 25 Mar 2013 · 1 commit
  2. 16 Feb 2013 · 1 commit
  3. 14 Feb 2013 · 1 commit
    • tools/power turbostat: display SMI count by default · 1ed51011
      Authored by Len Brown
      The SMI counter is popular -- so display it by default
      rather than requiring an option.  What the heck,
      we've blown the 80 column budget on many systems already...
      
      Note that the value displayed is the delta
      during the measurement interval.
      The absolute value of the counter can still be seen with
      the generic 32-bit MSR option, i.e. -m 0x34.
      Signed-off-by: Len Brown <len.brown@intel.com>
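The "delta during the measurement interval" behavior can be sketched in C; `smi_delta` is a hypothetical helper for illustration, not turbostat's actual code, assuming the SMI counter in MSR 0x34 is a 32-bit up-counter:

```c
#include <stdint.h>

/* Hypothetical helper, not turbostat's actual code: the per-interval
 * SMI count is the difference between two snapshots of the 32-bit
 * counter in MSR 0x34.  Unsigned subtraction handles a single
 * wraparound of the counter between snapshots. */
static uint32_t smi_delta(uint32_t before, uint32_t after)
{
    return after - before;
}
```

Unsigned modular arithmetic is why no explicit wraparound branch is needed: if the counter wraps once between samples, the subtraction still yields the true delta.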
  4. 09 Feb 2013 · 1 commit
  5. 01 Feb 2013 · 1 commit
  6. 15 Dec 2012 · 1 commit
  7. 01 Dec 2012 · 1 commit
    • KVM: x86: Emulate IA32_TSC_ADJUST MSR · ba904635
      Authored by Will Auld
      CPUID.7.0.EBX[1]=1 indicates IA32_TSC_ADJUST MSR 0x3b is supported
      
      The basic design is to emulate the MSR by allowing reads and writes to a
      vcpu-specific location that stores the value of the emulated MSR, while
      adding that value to the vmcs tsc_offset. In this way the IA32_TSC_ADJUST
      value will be included in all reads of the TSC, whether through rdmsr or
      rdtsc. This holds as long as the "use TSC counter offsetting" VM-execution
      control is enabled, as well as the IA32_TSC_ADJUST control.
      
      However, because hardware will only return TSC + IA32_TSC_ADJUST +
      vmcs tsc_offset for a guest process when it does a rdtsc (with the correct
      settings), the value of our virtualized IA32_TSC_ADJUST must be stored in one
      of these three locations. The argument against storing it in the actual MSR
      is performance: the MSR is likely to be seldom used, while the save/restore
      would be required on every transition. IA32_TSC_ADJUST was created as a way
      to solve some issues with writing the TSC itself, so that is not an option
      either.
      
      The remaining option, described above as our solution, has the problem of
      returning incorrect vmcs tsc_offset values (unless we intercept and fix them,
      not done here) as mentioned above. More problematic, however, is that storing
      the data in vmcs tsc_offset can have a different semantic effect on the
      system than using the actual MSR. This is illustrated in the following
      example:
      
      The hypervisor sets IA32_TSC_ADJUST, then the guest sets it, and a guest
      process performs a rdtsc. In this case the guest process will get
      TSC + IA32_TSC_ADJUST_hypervisor + vmcs tsc_offset, the latter including
      IA32_TSC_ADJUST_guest. While the total system semantics changed, the
      semantics as seen by the guest do not, and hence this will not cause a
      problem.
      Signed-off-by: Will Auld <will.auld@intel.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
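A minimal model of the scheme described above, with hypothetical names (`vcpu_tsc_state`, `wrmsr_tsc_adjust`) that are not the actual kvm code: a write to the emulated MSR folds the change into the vmcs tsc_offset, so a guest rdtsc (modeled as host TSC plus tsc_offset) observes it, while rdmsr still returns the guest's stored value.

```c
#include <stdint.h>

/* Hypothetical model of the commit's design, not the actual kvm code. */
struct vcpu_tsc_state {
    uint64_t tsc_offset;       /* vmcs tsc_offset */
    int64_t  ia32_tsc_adjust;  /* guest-visible emulated MSR value */
};

/* Emulated WRMSR: remember the guest value for later rdmsr, and fold
 * the change into the vmcs tsc_offset so guest rdtsc reflects it. */
static void wrmsr_tsc_adjust(struct vcpu_tsc_state *v, int64_t val)
{
    v->tsc_offset += (uint64_t)(val - v->ia32_tsc_adjust);
    v->ia32_tsc_adjust = val;
}

/* With "use TSC counter offsetting" enabled, a guest rdtsc observes
 * the host TSC plus the vmcs tsc_offset. */
static uint64_t guest_rdtsc(const struct vcpu_tsc_state *v, uint64_t host_tsc)
{
    return host_tsc + v->tsc_offset;
}
```

Note how the delta-based update makes repeated guest writes idempotent in effect: only the difference from the previously stored value is applied to tsc_offset.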
  8. 24 Nov 2012 · 2 commits
  9. 02 Nov 2012 · 1 commit
  10. 04 Oct 2012 · 1 commit
  11. 10 Sep 2012 · 2 commits
  12. 08 Mar 2012 · 1 commit
  13. 05 Mar 2012 · 1 commit
  14. 26 Sep 2011 · 1 commit
  15. 15 Jul 2011 · 2 commits
  16. 12 Jul 2011 · 1 commit
  17. 11 May 2011 · 1 commit
  18. 16 Apr 2011 · 1 commit
  19. 18 Mar 2011 · 1 commit
    • KVM: x86: handle guest access to BBL_CR_CTL3 MSR · 91c9c3ed
      Authored by john cooper
      A correction to Intel cpu model CPUID data (patch queued)
      caused winxp to BSOD when booted with a Penryn model.
      This was traced to the CPUID "model" field correction from
      6 -> 23 (as is proper for a Penryn class of cpu).  Only in
      this case does the problem surface.
      
      The cause of this failure is winxp accessing the BBL_CR_CTL3
      MSR, which is unsupported by current kvm. It appears to be a
      legacy MSR, not fully characterized yet still existing in current
      silicon, apparently carried forward in MSR space to
      accommodate vintage code as here. It is not yet conclusive
      whether this MSR implements any of its legacy functionality
      or is just an ornamental dud for compatibility. While I
      found no silicon-version-specific documentation link to
      this MSR, a general description exists in Intel's developer's
      reference, which agrees with the functional behavior of
      other bootloader/kernel code I've examined that accesses
      BBL_CR_CTL3. Regrettably, winxp appears to be setting bit #19,
      called out as "reserved" in the above document.
      
      So to minimally accommodate this MSR, kvm msr reads will provide
      equivalent mock data and kvm msr writes will simply toss the
      guest-passed data without interpretation. While this treatment
      of BBL_CR_CTL3 addresses the immediate problem, the approach may
      be modified pending clarification from Intel.
      Signed-off-by: john cooper <john.cooper@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
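The accommodation described above can be sketched as follows. This is an illustrative stand-in, not the actual kvm code; the mock read value is an assumption chosen only for illustration (the MSR address 0x11e for BBL_CR_CTL3 is documented):

```c
#include <stdbool.h>
#include <stdint.h>

#define MSR_BBL_CR_CTL3 0x11e  /* legacy MSR address */

/* Illustrative stand-in for the handling described above: reads return
 * fixed mock data (the value below is an assumption, for illustration
 * only), writes are accepted and the guest data is tossed without
 * interpretation.  Returns false for MSRs not handled here. */
static bool handle_bbl_cr_ctl3(uint32_t msr, bool is_write,
                               uint64_t wval, uint64_t *rval)
{
    if (msr != MSR_BBL_CR_CTL3)
        return false;
    if (is_write) {
        (void)wval;            /* discard guest-passed data */
        return true;
    }
    *rval = 1ULL << 0;         /* mock data; assumption for illustration */
    return true;
}
```

The point of the design is that the guest (winxp here) sees a successful rdmsr/wrmsr round trip instead of a #GP, even though no real state is kept.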
  20. 04 Mar 2011 · 1 commit
    • perf: Add support for supplementary event registers · a7e3ed1e
      Authored by Andi Kleen
      Change logs against Andi's original version:
      
      - Extends perf_event_attr:config to config{,1,2} (Peter Zijlstra)
      - Fixed a major event scheduling issue. There cannot be a ref++ on an
        event that has already done ref++ once without calling
        put_constraint() in between. (Stephane Eranian)
      - Use thread_cpumask for percore allocation. (Lin Ming)
      - Use MSR names in the extra reg lists. (Lin Ming)
      - Remove redundant "c = NULL" in intel_percore_constraints
      - Fix comment of perf_event_attr::config1
      
      Intel Nehalem/Westmere have a special OFFCORE_RESPONSE event
      that can be used to monitor any offcore accesses from a core.
      This is a very useful event for various tunings, and it's
      also needed to implement the generic LLC-* events correctly.
      
      Unfortunately this event requires programming a mask in a separate
      register. And worse this separate register is per core, not per
      CPU thread.
      
      This patch:
      
      - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
        The extra parameters are passed by user space in the
        perf_event_attr::config1 field.
      
      - Adds support to the Intel perf_event core to schedule per-core
        resources. This adds fairly generic infrastructure that
        can also be used for other per-core resources.
        The basic code is patterned after the similar AMD northbridge
        constraints code.
      
      Thanks to Stephane Eranian who pointed out some problems
      in the original version and suggested improvements.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1299119690-13991-2-git-send-email-ming.m.lin@intel.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
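How user space passes the extra OFFCORE_RESPONSE mask can be sketched with a trimmed-down stand-in for perf_event_attr. The struct here is an abbreviated illustration, not the real ABI layout, and the event/mask values in the usage are assumptions for the example:

```c
#include <stdint.h>
#include <string.h>

/* Trimmed-down stand-in for struct perf_event_attr, keeping only the
 * fields relevant here; the real ABI struct has many more fields. */
struct attr_sketch {
    uint32_t type;
    uint64_t config;   /* raw event encoding */
    uint64_t config1;  /* extra parameter: the offcore response mask */
};

/* Fill the attribute for an OFFCORE_RESPONSE-style event: the event
 * encoding goes in config, and the mask destined for the separate
 * per-core MSR is passed in config1, as the patch describes. */
static void setup_offcore(struct attr_sketch *a, uint64_t event, uint64_t mask)
{
    memset(a, 0, sizeof(*a));
    a->type = 4;       /* PERF_TYPE_RAW */
    a->config = event;
    a->config1 = mask; /* kernel programs this into the extra register */
}
```

Because the extra register is per core rather than per CPU thread, two hyperthread siblings asking for different config1 masks cannot be scheduled simultaneously; that conflict is what the per-core constraint infrastructure in this patch resolves.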
  21. 18 Feb 2011 · 2 commits
  22. 04 Jan 2011 · 1 commit
  23. 19 Dec 2010 · 1 commit
  24. 18 Nov 2010 · 1 commit
  25. 24 Oct 2010 · 1 commit
  26. 15 Oct 2010 · 1 commit
    • oprofile, x86: Add support for IBS branch target address reporting · 25da6950
      Authored by Robert Richter
      This patch adds support for IBS branch target address reporting. A new
      MSR (MSRC001_103B IBS Branch Target Address) has been added that
      provides the logical address in canonical form for the branch
      target. The size of the IBS sample that is transferred to the userland
      has been increased.
      
      For backward compatibility, the userland daemon must explicitly enable
      the feature by writing to the oprofilefs file
      
       ibs_op/branch_target
      
      After enabling branch target address reporting, the userland daemon
      must handle the extended size of the IBS sample.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
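The daemon-side enable amounts to writing 1 to that oprofilefs control file. A hedged sketch, assuming oprofilefs is mounted at the path passed in (conventionally /dev/oprofile); the function name is hypothetical:

```c
#include <stdio.h>

/* Sketch of the daemon-side enable described above: write 1 to
 * <oprofilefs>/ibs_op/branch_target.  The mount point is a parameter;
 * /dev/oprofile is the conventional location.  Returns 0 on success,
 * -1 if the control file cannot be opened (e.g. no oprofilefs, or
 * insufficient privileges). */
static int enable_ibs_branch_target(const char *oprofilefs_root)
{
    char path[512];
    FILE *f;

    snprintf(path, sizeof(path), "%s/ibs_op/branch_target", oprofilefs_root);
    f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "1\n");
    fclose(f);
    return 0;
}
```

After this write succeeds, the daemon must also switch to parsing the extended IBS sample size, as the commit notes.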
  27. 01 Aug 2010 · 1 commit
  28. 31 Jul 2010 · 1 commit
  29. 22 Jul 2010 · 1 commit
  30. 17 Jun 2010 · 1 commit
    • x86: Look for IA32_ENERGY_PERF_BIAS support · 23016bf0
      Authored by Venkatesh Pallipadi
      The new IA32_ENERGY_PERF_BIAS MSR allows system software to give
      hardware a hint whether OS policy favors more power saving,
      or more performance.  This allows the OS to have some influence
      on internal hardware power/performance tradeoffs where the OS
      has previously had no influence.
      
      The support for this feature is indicated by CPUID.06H.ECX.bit3,
      as documented in the Intel Architectures Software Developer's Manual.
      
      This patch discovers support of this feature and displays it
      as "epb" in /proc/cpuinfo.
      Signed-off-by: Venkatesh Pallipadi <venki@google.com>
      LKML-Reference: <alpine.LFD.2.00.1006032310160.6669@localhost.localdomain>
      Signed-off-by: Len Brown <len.brown@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
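The detection reduces to testing one CPUID bit. A minimal sketch that checks bit 3 of an already-fetched CPUID.06H ECX value; actually executing CPUID would need inline asm or a compiler intrinsic, which is omitted here:

```c
#include <stdbool.h>
#include <stdint.h>

/* IA32_ENERGY_PERF_BIAS support is advertised in CPUID.06H:ECX bit 3.
 * The ECX value is passed in already fetched; the kernel surfaces the
 * detected feature as "epb" in /proc/cpuinfo. */
static bool has_energy_perf_bias(uint32_t cpuid_06_ecx)
{
    return (cpuid_06_ecx >> 3) & 1;
}
```

On a live system the same result can be confirmed by looking for the "epb" flag in /proc/cpuinfo, as this patch adds.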
  31. 11 Jun 2010 · 1 commit
  32. 09 Jun 2010 · 1 commit
  33. 25 May 2010 · 1 commit
  34. 19 May 2010 · 1 commit
  35. 26 Mar 2010 · 1 commit
  36. 20 Mar 2010 · 1 commit
    • x86, amd: Restrict usage of c1e_idle() · 035a02c1
      Authored by Andreas Herrmann
      Currently c1e_idle() is selected for all CPUs greater than or equal to
      family 0xf model 0x40. This covers too many CPUs.
      
      Meanwhile, a corresponding erratum for the underlying problem was filed
      (#400). This patch adds the logic to check whether erratum #400
      applies to a given CPU.
      In particular, for CPUs where SMI/HW-triggered C1e is not supported,
      c1e_idle() doesn't need to be used. We can check this by looking at
      the respective OSVW bit for erratum #400.
      
      Cc: <stable@kernel.org> # .32.x .33.x
      Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com>
      LKML-Reference: <20100319110922.GA19614@alberich.amd.com>
      Signed-off-by: H. Peter Anvin <hpa@zytor.com>
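The OSVW check can be sketched as below. The OSVW id of 1 for erratum #400 is taken from the kernel's AMD erratum tables (an assumption worth verifying against the AMD OSVW documentation), and the MSR values are passed in as parameters since rdmsr requires kernel context:

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the OSVW-based check described above.  Erratum #400 is
 * assumed to be OSVW id 1 (per the kernel's erratum tables).  The OSVW
 * length and status MSR values are passed in pre-read.  If OSVW does
 * not cover the id, fall back conservatively to "affected" (the real
 * code falls back to a family/model range check instead). */
static bool erratum_400_applies(uint64_t osvw_len, uint64_t osvw_status)
{
    const unsigned int osvw_id = 1;

    if (osvw_len <= osvw_id)
        return true;                     /* not covered: assume affected */
    return (osvw_status >> osvw_id) & 1; /* status bit set => affected */
}
```

With this check, CPUs whose OSVW status bit for the erratum is clear can skip c1e_idle() entirely, which is the restriction the patch title refers to.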