1. 31 Aug 2011, 1 commit
    • x86, perf: Check that current->mm is alive before getting user callchain · 20afc60f
      Authored by Andrey Vagin
      An event may occur when an mm is already released.
      
      I added an event in dequeue_entity() and caught a panic with
      the following backtrace:
      
      [  434.421110] BUG: unable to handle kernel NULL pointer dereference at 0000000000000050
      [  434.421258] IP: [<ffffffff810464ac>] __get_user_pages_fast+0x9c/0x120
      ...
      [  434.421258] Call Trace:
      [  434.421258]  [<ffffffff8101ae81>] copy_from_user_nmi+0x51/0xf0
      [  434.421258]  [<ffffffff8109a0d5>] ? sched_clock_local+0x25/0x90
      [  434.421258]  [<ffffffff8101b048>] perf_callchain_user+0x128/0x170
      [  434.421258]  [<ffffffff811154cd>] ? __perf_event_header__init_id+0xed/0x100
      [  434.421258]  [<ffffffff81116690>] perf_prepare_sample+0x200/0x280
      [  434.421258]  [<ffffffff81118da8>] __perf_event_overflow+0x1b8/0x290
      [  434.421258]  [<ffffffff81065240>] ? tg_shares_up+0x0/0x670
      [  434.421258]  [<ffffffff8104fe1a>] ? walk_tg_tree+0x6a/0xb0
      [  434.421258]  [<ffffffff81118f44>] perf_swevent_overflow+0xc4/0xf0
      [  434.421258]  [<ffffffff81119150>] do_perf_sw_event+0x1e0/0x250
      [  434.421258]  [<ffffffff81119204>] perf_tp_event+0x44/0x70
      [  434.421258]  [<ffffffff8105701f>] ftrace_profile_sched_block+0xdf/0x110
      [  434.421258]  [<ffffffff8106121d>] dequeue_entity+0x2ad/0x2d0
      [  434.421258]  [<ffffffff810614ec>] dequeue_task_fair+0x1c/0x60
      [  434.421258]  [<ffffffff8105818a>] dequeue_task+0x9a/0xb0
      [  434.421258]  [<ffffffff810581e2>] deactivate_task+0x42/0xe0
      [  434.421258]  [<ffffffff814bc019>] thread_return+0x191/0x808
      [  434.421258]  [<ffffffff81098a44>] ? switch_task_namespaces+0x24/0x60
      [  434.421258]  [<ffffffff8106f4c4>] do_exit+0x464/0x910
      [  434.421258]  [<ffffffff8106f9c8>] do_group_exit+0x58/0xd0
      [  434.421258]  [<ffffffff8106fa57>] sys_exit_group+0x17/0x20
      [  434.421258]  [<ffffffff8100b202>] system_call_fastpath+0x16/0x1b
      Signed-off-by: Andrey Vagin <avagin@openvz.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: stable@kernel.org
      Link: http://lkml.kernel.org/r/1314693156-24131-1-git-send-email-avagin@openvz.org
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      20afc60f
  2. 26 Aug 2011, 1 commit
  3. 09 Aug 2011, 1 commit
  4. 27 Jul 2011, 1 commit
  5. 22 Jul 2011, 2 commits
  6. 16 Jul 2011, 1 commit
  7. 15 Jul 2011, 2 commits
    • perf, x86: P4 PMU - Introduce event alias feature · f9129870
      Authored by Cyrill Gorcunov
      Instead of the hw_nmi_watchdog_set_attr() weak function
      and the corresponding x86_pmu::hw_watchdog_set_attr() call,
      we introduce an event alias mechanism which allows us
      to drop these routines completely and isolate the quirks
      of the Netburst architecture inside the P4 PMU code only.
      
      The main idea remains the same though: to allow the
      nmi-watchdog and perf top to run simultaneously.
      
      Note that the aliasing mechanism applies to the generic
      PERF_COUNT_HW_CPU_CYCLES event only, because an arbitrary
      event (say, one passed as RAW initially) might have
      additional bits set inside the ESCR register, changing
      the behaviour of the event, so we can no longer guarantee
      that the alias event will give the same result.
      
      P.S. A huge thanks to Don and Steven for testing
           and the early review.
      Acked-by: Don Zickus <dzickus@redhat.com>
      Tested-by: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
      CC: Ingo Molnar <mingo@elte.hu>
      CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
      CC: Stephane Eranian <eranian@google.com>
      CC: Lin Ming <ming.m.lin@intel.com>
      CC: Arnaldo Carvalho de Melo <acme@redhat.com>
      CC: Frederic Weisbecker <fweisbec@gmail.com>
      Link: http://lkml.kernel.org/r/20110708201712.GS23657@sun
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      f9129870
    • x86, intel, power: Initialize MSR_IA32_ENERGY_PERF_BIAS · abe48b10
      Authored by Len Brown
      Since 2.6.36 (23016bf0), Linux has reported the existence of "epb" in /proc/cpuinfo,
      and since 2.6.38 (d5532ee7) the x86_energy_perf_policy(8) utility has
      been available in-tree to update MSR_IA32_ENERGY_PERF_BIAS.
      
      However, the typical BIOS fails to initialize the MSR, presumably
      because this is handled by high-volume shrink-wrap operating systems...
      
      Linux distros, on the other hand, do not yet invoke x86_energy_perf_policy(8).
      As a result, WSM-EP, SNB, and later Intel hardware will run in its
      default hardware power-on state (performance), which assumes that users
      care about performance at all costs rather than energy efficiency.
      While that is fine for performance benchmarks, the hardware's intended default
      operating point is "normal" mode...
      
      Initialize the MSR to "normal" by default during kernel boot.
      
      x86_energy_perf_policy(8) is available to change the default after boot,
      should the user have a different preference.
      Signed-off-by: Len Brown <len.brown@intel.com>
      Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1107140051020.18606@x980
      Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      Cc: <stable@kernel.org>
      abe48b10
  8. 09 Jul 2011, 1 commit
  9. 02 Jul 2011, 1 commit
  10. 01 Jul 2011, 10 commits
  11. 28 Jun 2011, 2 commits
    • x86, mtrr: use stop_machine APIs for doing MTRR rendezvous · 192d8857
      Authored by Suresh Siddha
      The MTRR rendezvous sequence was not implemented using stop_machine() before, as it
      gets called both from process context as well as from the cpu online paths
      (where the cpu has not yet come online and interrupts are disabled, etc.).
      
      Now that we have a new stop_machine_from_inactive_cpu() API, use it for
      rendezvous during mtrr init of a logical processor that is coming online.
      
      For the rest (runtime MTRR modification, system boot, resume paths), use
      stop_machine() to implement the rendezvous sequence. This will consolidate and
      clean up the code.
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      192d8857
    • x86, mtrr: lock stop machine during MTRR rendezvous sequence · 6d3321e8
      Authored by Suresh Siddha
      An MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially
      happen in parallel with another system-wide rendezvous using
      stop_machine(). This can lead to deadlock: the order in which the
      works are queued can differ between cpus, so some cpus will be running
      the first rendezvous handler while others run the second, with each
      set waiting for the other to join the system-wide rendezvous.
      
      The MTRR rendezvous sequence is not implemented using stop_machine(), as it
      gets called both from process context as well as from the cpu online paths
      (where the cpu has not yet come online and interrupts are disabled, etc.),
      while stop_machine() works only with online cpus.
      
      For now, take the stop_machine mutex in the MTRR rendezvous sequence that
      gets called from an online cpu (here we are in process context
      and can potentially sleep while taking the mutex). The MTRR rendezvous
      that gets triggered during cpu online doesn't need to take this stop_machine
      lock, as stop_machine() already ensures that no cpu hotplug is going on
      in parallel, by doing get_online_cpus().
      
          TBD: Pursue a cleaner solution of extending the stop_machine()
               infrastructure to handle the case where the calling cpu is
               still not online and use this for MTRR rendezvous sequence.
      
      Fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
      Reported-by: Vadim Kotelnikov <vadimuzzz@inbox.ru>
      Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
      Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
      Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      6d3321e8
  12. 16 Jun 2011, 12 commits
  13. 29 May 2011, 2 commits
    • x86 idle: deprecate "no-hlt" cmdline param · cdaab4a0
      Authored by Len Brown
      We'd rather that modern machines not check whether HLT works on
      every entry into idle, for the benefit of machines that had
      marginal electricals 15 years ago.  If those machines are still running
      the upstream kernel, they can use "idle=poll".  The only difference
      will be that they'll now invoke HLT in machine_hlt().
      
      cc: x86@kernel.org # .39.x
      Signed-off-by: Len Brown <len.brown@intel.com>
      cdaab4a0
    • x86 idle: clarify AMD erratum 400 workaround · 02c68a02
      Authored by Len Brown
      The workaround for AMD erratum 400 uses the term "c1e", falsely suggesting that:
      1. Intel C1E is somehow involved
      2. All AMD processors with C1E are involved
      
      Use the string "amd_c1e" instead of simply "c1e" to clarify that
      this workaround is specific to AMD's version of C1E.
      Use the string "e400" to clarify that the workaround is specific
      to AMD processors with Erratum 400.
      
      This patch is text-substitution only, with no functional change.
      
      cc: x86@kernel.org
      Acked-by: Borislav Petkov <borislav.petkov@amd.com>
      Signed-off-by: Len Brown <len.brown@intel.com>
      02c68a02
  14. 27 May 2011, 1 commit
  15. 26 May 2011, 1 commit
  16. 23 May 2011, 1 commit
    • x86: setup_smep needs to be __cpuinit · 82da65da
      Authored by Linus Torvalds
      The setup_smep function gets called at resume time too, and is thus not a
      pure __init function.  When marked as __init, it gets thrown out after
      the kernel has initialized; when the kernel is suspended and
      resumed, the code will no longer be around, and we'll get a nice "kernel
      tried to execute NX-protected page" oops because the page is no longer
      marked executable.
      Reported-and-tested-by: Parag Warudkar <parag.lkml@gmail.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: "H. Peter Anvin" <hpa@linux.intel.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      82da65da