1. 22 Jul 2011, 2 commits
  2. 16 Jul 2011, 1 commit
  3. 15 Jul 2011, 2 commits
    • perf, x86: P4 PMU - Introduce event alias feature · f9129870
      Committed by Cyrill Gorcunov
      Instead of the hw_nmi_watchdog_set_attr() weak function and the
      corresponding x86_pmu::hw_watchdog_set_attr() call, we introduce an
      event alias mechanism which allows us to drop these routines
      completely and isolate the quirks of the Netburst architecture
      inside the P4 PMU code only.
      
      The main idea remains the same though -- to allow the nmi-watchdog
      and perf top to run simultaneously.
      
      Note that the aliasing mechanism applies only to the generic
      PERF_COUNT_HW_CPU_CYCLES event, because an arbitrary event (say, one
      passed in as RAW initially) might have additional bits set inside the
      ESCR register that change the behaviour of the event, so we can no
      longer guarantee that the aliased event will give the same result.
      A rough sketch of the alias lookup follows this entry.
      
      P.S. A huge thanks to Don and Steven for testing and early review.
      Acked-by: Don Zickus <dzickus@redhat.com>
      Tested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      CC: Ingo Molnar <mingo@elte.hu>
      CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Stephane Eranian <eranian@google.com>
      CC: Lin Ming <ming.m.lin@intel.com>
      CC: Arnaldo Carvalho de Melo <acme@redhat.com>
      CC: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/20110708201712.GS23657@sun
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f9129870
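      The alias idea above could look roughly like the following standalone C
      sketch. It is a hedged illustration, not the actual perf_event_p4.c code:
      the struct and function names echo the commit, but the config values are
      placeholders, not real ESCR/CCCR encodings.

          /* Hedged sketch: map the generic cycles config to an equivalent
           * Netburst config so the NMI watchdog and a user cycles event can
           * coexist. Config values below are placeholders only. */
          #include <stdint.h>
          #include <stddef.h>

          #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

          struct p4_event_alias {
              uint64_t original;     /* config the user asked for */
              uint64_t alternative;  /* equivalent config on other PMU resources */
          };

          static const struct p4_event_alias p4_event_aliases[] = {
              { .original = 0x1, .alternative = 0x2 },   /* placeholder pair */
          };

          /* Return the alias for 'config', or 0 if none is known. */
          static uint64_t p4_get_alias_event(uint64_t config)
          {
              size_t i;

              for (i = 0; i < ARRAY_SIZE(p4_event_aliases); i++)
                  if (config == p4_event_aliases[i].original)
                      return p4_event_aliases[i].alternative;
              return 0;
          }

          int main(void)
          {
              return p4_get_alias_event(0x1) == 0x2 ? 0 : 1;
          }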
    • x86, intel, power: Initialize MSR_IA32_ENERGY_PERF_BIAS · abe48b10
      Committed by Len Brown
      Since 2.6.36 (23016bf0), Linux has reported the existence of "epb" in
      /proc/cpuinfo. Since 2.6.38 (d5532ee7), the x86_energy_perf_policy(8)
      utility has been available in-tree to update MSR_IA32_ENERGY_PERF_BIAS.
      
      However, the typical BIOS fails to initialize the MSR, presumably
      because this is handled by high-volume shrink-wrap operating systems...
      
      Linux distros, on the other hand, do not yet invoke x86_energy_perf_policy(8).
      As a result, WSM-EP, SNB, and later Intel hardware will run in their
      default hardware power-on state (performance), which assumes that users
      care about performance at all costs rather than energy efficiency.
      While that is fine for performance benchmarks, the hardware's intended default
      operating point is "normal" mode...
      
      Initialize the MSR to the "normal" setting by default during kernel boot.
      (A hedged sketch of the same adjustment, done from userspace, follows
      this entry.)
      
      x86_energy_perf_policy(8) is available to change the default after boot,
      should the user have a different preference.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107140051020.18606@x980
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@kernel.org>
      abe48b10
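      The kernel patch programs the MSR at boot; the following is a hedged
      userspace sketch of the same read-check-write, using the msr driver
      (/dev/cpu/0/msr) rather than the in-kernel rdmsrl/wrmsrl path. It
      assumes root and the msr module; register 0x1b0 is
      MSR_IA32_ENERGY_PERF_BIAS per the Intel SDM, and 0/6 are the
      "performance"/"normal" values.

          /* Hedged sketch: nudge CPU 0's energy-performance bias from the
           * hardware default "performance" (0) to "normal" (6), the same
           * value the kernel patch writes at boot. */
          #include <fcntl.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <unistd.h>

          #define MSR_IA32_ENERGY_PERF_BIAS 0x1b0
          #define EPB_PERFORMANCE 0
          #define EPB_NORMAL      6

          int main(void)
          {
              uint64_t epb;
              int fd = open("/dev/cpu/0/msr", O_RDWR);

              if (fd < 0 || pread(fd, &epb, sizeof(epb), MSR_IA32_ENERGY_PERF_BIAS) != sizeof(epb)) {
                  perror("msr");
                  return 1;
              }
              printf("cpu0 EPB = %llu\n", (unsigned long long)(epb & 0xf));

              if ((epb & 0xf) == EPB_PERFORMANCE) {
                  /* Preserve reserved bits, replace only the 4-bit bias field. */
                  epb = (epb & ~0xfULL) | EPB_NORMAL;
                  if (pwrite(fd, &epb, sizeof(epb), MSR_IA32_ENERGY_PERF_BIAS) != sizeof(epb))
                      perror("pwrite");
              }
              close(fd);
              return 0;
          }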
  4. 09 Jul 2011, 1 commit
  5. 02 Jul 2011, 1 commit
  6. 01 Jul 2011, 10 commits
  7. 28 Jun 2011, 2 commits
    • x86, mtrr: use stop_machine APIs for doing MTRR rendezvous · 192d8857
      Committed by Suresh Siddha
      The MTRR rendezvous sequence was not implemented using stop_machine()
      before, as it gets called both from process context as well as from the
      CPU online path (where the CPU has not yet come online and interrupts
      are disabled, etc.).
      
      Now that we have a new stop_machine_from_inactive_cpu() API, use it for
      rendezvous during mtrr init of a logical processor that is coming online.
      
      For the rest (runtime MTRR modification, system boot, resume paths), use
      stop_machine() to implement the rendezvous sequence. This consolidates
      and cleans up the code. A compilable sketch of the call-site split
      follows this entry.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      192d8857
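      Below is a hedged, standalone C sketch of that split. stop_machine() and
      stop_machine_from_inactive_cpu() are stubbed stand-ins for the kernel
      APIs so the example compiles on its own; set_mtrr_data and the helper
      names are illustrative, not necessarily the kernel's exact symbols.

          #include <stdio.h>

          struct set_mtrr_data { unsigned int reg; unsigned long base, size, type; };

          /* Stub stand-ins for the kernel APIs, just so the sketch compiles. */
          static int stop_machine(int (*fn)(void *), void *data, void *cpus)
          { printf("rendezvous via stop_machine()\n"); return fn(data); }
          static int stop_machine_from_inactive_cpu(int (*fn)(void *), void *data, void *cpus)
          { printf("rendezvous via stop_machine_from_inactive_cpu()\n"); return fn(data); }

          /* Runs on every CPU during the rendezvous; real code writes the MTRR MSRs. */
          static int mtrr_rendezvous_handler(void *info)
          {
              struct set_mtrr_data *d = info;
              printf("program MTRR %u\n", d->reg);
              return 0;
          }

          /* Boot / runtime / resume paths: calling CPU is online, use stop_machine(). */
          static void set_mtrr(struct set_mtrr_data *d)
          {
              stop_machine(mtrr_rendezvous_handler, d, NULL /* cpu_online_mask */);
          }

          /* CPU-online path: calling CPU is not yet online, use the inactive-cpu variant. */
          static void set_mtrr_from_inactive_cpu(struct set_mtrr_data *d)
          {
              stop_machine_from_inactive_cpu(mtrr_rendezvous_handler, d, NULL);
          }

          int main(void)
          {
              struct set_mtrr_data d = { .reg = 0 };
              set_mtrr(&d);
              set_mtrr_from_inactive_cpu(&d);
              return 0;
          }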
    • x86, mtrr: lock stop machine during MTRR rendezvous sequence · 6d3321e8
      Committed by Suresh Siddha
      The MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially
      run in parallel with another system-wide rendezvous using stop_machine().
      This can lead to a deadlock: the order in which the works are queued can
      differ between CPUs, so some CPUs end up running the first rendezvous
      handler while others run the second, and each set waits for the other to
      join the system-wide rendezvous.
      
      The MTRR rendezvous sequence is not implemented using stop_machine(), as
      it gets called both from process context as well as from the CPU online
      path (where the CPU has not yet come online and interrupts are disabled,
      etc.), and stop_machine() works only with online CPUs.
      
      For now, take the stop_machine mutex in the MTRR rendezvous sequence
      that is called from an online CPU (here we are in process context and
      can sleep while taking the mutex); a hedged sketch of this split follows
      the entry. The MTRR rendezvous that is triggered during CPU online does
      not need to take this stop_machine lock, as stop_machine() already
      ensures that no CPU hotplug is going on in parallel by doing
      get_online_cpus().
      
          TBD: Pursue a cleaner solution of extending the stop_machine()
               infrastructure to handle the case where the calling cpu is
               still not online and use this for MTRR rendezvous sequence.
      
      fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
      Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
      Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      6d3321e8
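      The following is a hedged, compilable sketch of the interim fix
      described above. The lock name and helpers are illustrative stand-ins,
      not the kernel's actual symbols; the point is only that the
      process-context path serializes with stop_machine() while the CPU-online
      path relies on stop_machine()'s own get_online_cpus().

          #include <pthread.h>
          #include <stdio.h>

          static pthread_mutex_t stop_machine_lock = PTHREAD_MUTEX_INITIALIZER;

          static void mtrr_rendezvous(const char *who)
          {
              printf("%s: run MTRR rendezvous handlers on all CPUs\n", who);
          }

          /* Process-context caller (boot, runtime update, resume): may sleep,
           * so it takes the lock and excludes a concurrent stop_machine(). */
          static void set_mtrr(void)
          {
              pthread_mutex_lock(&stop_machine_lock);
              mtrr_rendezvous("process context");
              pthread_mutex_unlock(&stop_machine_lock);
          }

          /* CPU-online caller: cannot take the lock here, but any concurrent
           * stop_machine() already excludes CPU hotplug via get_online_cpus(). */
          static void set_mtrr_cpu_online(void)
          {
              mtrr_rendezvous("cpu online path");
          }

          int main(void)
          {
              set_mtrr();
              set_mtrr_cpu_online();
              return 0;
          }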
  8. 16 Jun 2011, 12 commits
  9. 29 May 2011, 2 commits
    • x86 idle: deprecate "no-hlt" cmdline param · cdaab4a0
      Committed by Len Brown
      We'd rather that modern machines not check whether HLT works on every
      entry into idle, for the benefit of machines that had marginal
      electricals 15 years ago. If those machines are still running the
      upstream kernel, they can use "idle=poll". The only difference will be
      that they'll now invoke HLT in machine_halt().
      
      cc: x86@kernel.org # .39.x
      Signed-off-by: Len Brown <len.brown@intel.com>
      cdaab4a0
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      Committed by Len Brown
      The workaround for AMD erratum 400 uses the term "c1e", falsely suggesting that:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      02c68a02
  10. 27 May 2011, 1 commit
  11. 26 May 2011, 1 commit
  12. 23 May 2011, 1 commit
    • x86: setup_smep needs to be __cpuinit · 82da65da
      Committed by Linus Torvalds
      The setup_smep function gets called at resume time too, and is thus not a
      pure __init function.  When marked as __init, it gets thrown out after
      the kernel has initialized, and when the kernel is suspended and
      resumed, the code will no longer be around, and we'll get a nice "kernel
      tried to execute NX-protected page" oops because the page is no longer
      marked executable.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      82da65da
  13. 21 May 2011, 1 commit
  14. 20 May 2011, 2 commits
  15. 18 May 2011, 1 commit
    • x86, cpu: Enable/disable Supervisor Mode Execution Protection · de5397ad
      Committed by Fenghua Yu
      Enable/disable the newly documented SMEP (Supervisor Mode Execution
      Protection) CPU feature in the kernel. CR4.SMEP (bit 20) is 0 at
      power-on. If the feature is supported by the CPU (X86_FEATURE_SMEP),
      enable SMEP by setting CR4.SMEP. The new kernel option "nosmep" disables
      the feature even if it is supported by the CPU. A hedged sketch of this
      logic follows the entry.
      
      [ hpa: moved the call to setup_smep() until after the vendor-specific
        initialization; that ensures that CPUID features are unmasked.  We
        will still run it before we have userspace (never mind uncontrolled
        userspace). ]
      Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
      LKML-Reference: <1305157865-31727-1-git-send-email-fenghua.yu@intel.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      de5397ad
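      A hedged, compilable sketch of the setup described above. CR4 access and
      the CPUID feature test are stubbed so the example runs in userspace; the
      real setup_smep() lives in the kernel's CPU init code, uses the in-kernel
      CR4 helpers, and, per the 82da65da entry above, must be __cpuinit rather
      than __init because it also runs when a CPU is brought back up (e.g. at
      resume).

          #include <stdbool.h>
          #include <stdio.h>
          #include <string.h>

          #define X86_CR4_SMEP (1UL << 20)   /* bit 20, as stated in the commit */

          static unsigned long fake_cr4;      /* stand-in for the real CR4 */
          static bool cpu_has_smep = true;    /* stand-in for X86_FEATURE_SMEP */
          static bool disable_smep;           /* set by the "nosmep" option */

          static void parse_cmdline(const char *cmdline)
          {
              if (strstr(cmdline, "nosmep"))
                  disable_smep = true;
          }

          static void setup_smep(void)
          {
              if (cpu_has_smep && !disable_smep)
                  fake_cr4 |= X86_CR4_SMEP;   /* kernel: set CR4.SMEP */
              else
                  fake_cr4 &= ~X86_CR4_SMEP;  /* kernel: clear CR4.SMEP */
          }

          int main(void)
          {
              parse_cmdline("root=/dev/sda1 nosmep");
              setup_smep();
              printf("CR4.SMEP = %d\n", !!(fake_cr4 & X86_CR4_SMEP));
              return 0;
          }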