1. 27 Jun, 2013 · 1 commit
    • perf/x86: Fix shared register mutual exclusion enforcement · 2f7f73a5
      Stephane Eranian authored
      This patch fixes a problem with the shared registers mutual
      exclusion code and incremental event scheduling by the
      generic perf_event code.
      
      There was a bug whereby mutual exclusion on the shared
      registers was not enforced because an incremental scheduling
      pass was aborted due to event constraints. As an example on
      Intel Nehalem, consider the following events:
      
      group1= L1D_CACHE_LD:E_STATE,OFFCORE_RESPONSE_0:PF_RFO,L1D_CACHE_LD:I_STATE
      group2= L1D_CACHE_LD:I_STATE
      
      The L1D_CACHE_LD event can only be measured by 2 counters. Yet, there
      are 3 instances here. The first group can be scheduled and is committed.
      Then, the generic code tries to schedule group2 and this fails (because
      there is no more counter to support the 3rd instance of L1D_CACHE_LD).
      But in the x86_schedule_events() error path, put_event_constraints() is
      invoked on ALL the events, not just the ones that failed. That causes the
      "lock" on the shared offcore_response MSR to be released. Yet the first group
      is actually scheduled, and is exposed to reprogramming of that shared MSR by
      the sibling HT thread. In other words, there is no guarantee on what is
      measured.
      
      This patch fixes the problem by tagging committed events with the
      PERF_X86_EVENT_COMMITTED tag. In the error path of x86_schedule_events(),
      only the events NOT tagged have their constraint released. The tag
      is eventually removed when the event is descheduled.
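      
      As a rough illustration of the tagging scheme, here is a hedged,
      self-contained toy: the struct layout, flag value, and helper names
      below are invented, while the real logic lives in
      arch/x86/kernel/cpu/perf_event.c.
      
      ```c
      #include <errno.h>
      #include <stdio.h>
      
      #define PERF_X86_EVENT_COMMITTED 0x8   /* illustrative flag value */
      
      struct event {
              unsigned int flags;
              int counter;                   /* assigned counter, -1 if none */
      };
      
      /* Stand-in for put_event_constraints(): drops the shared-MSR "lock". */
      static void release_constraints(struct event *e)
      {
              e->counter = -1;
      }
      
      static int schedule_events(struct event **ev, int n, int free_counters)
      {
              int i;
      
              if (n > free_counters) {
                      /*
                       * Error path: release constraints ONLY for events not
                       * tagged by an earlier successful pass. Releasing all
                       * of them was the bug this patch fixes.
                       */
                      for (i = 0; i < n; i++)
                              if (!(ev[i]->flags & PERF_X86_EVENT_COMMITTED))
                                      release_constraints(ev[i]);
                      return -EINVAL;
              }
      
              for (i = 0; i < n; i++) {
                      ev[i]->counter = i;
                      ev[i]->flags |= PERF_X86_EVENT_COMMITTED; /* tag on success */
              }
              return 0;
      }
      
      int main(void)
      {
              struct event a = {0, -1}, b = {0, -1}, c = {0, -1};
              struct event *group1[] = { &a, &b };
              struct event *group2[] = { &a, &b, &c };   /* one event too many */
      
              schedule_events(group1, 2, 2);             /* commits a and b */
              int err = schedule_events(group2, 3, 2);   /* fails, but a's and
                                                            b's locks stay held */
              printf("err=%d, a committed=%d\n", err,
                     !!(a.flags & PERF_X86_EVENT_COMMITTED));
              return 0;
      }
      ```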
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20130620164254.GA3556@quad
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 26 Jun, 2013 · 1 commit
  3. 23 Jun, 2013 · 1 commit
    • perf: Drop sample rate when sampling is too slow · 14c63f17
      Dave Hansen authored
      This patch keeps track of how long perf's NMI handler is taking,
      and also calculates how many samples perf can take a second.  If
      the sample length times the expected max number of samples
      exceeds a configurable threshold, it drops the sample rate.
      
      This way, we don't have a runaway sampling process eating up the
      CPU.
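      
      A hedged sketch of that check (all names, the moving average, and the
      25% budget below are invented for illustration; the real, tunable
      logic lives in kernel/events/core.c):
      
      ```c
      #include <stdio.h>
      
      static unsigned long long avg_sample_ns;         /* smoothed NMI cost */
      static unsigned int sample_rate_hz = 100000;
      static const unsigned int cpu_budget_pct = 25;   /* hypothetical limit */
      
      static void note_sample_took(unsigned long long ns)
      {
              avg_sample_ns = (avg_sample_ns * 7 + ns) / 8;
      
              /* projected ns/second spent sampling at the current rate */
              unsigned long long projected = avg_sample_ns * sample_rate_hz;
              /* allowed budget: a percentage of one second, in ns */
              unsigned long long allowed = 10000000ULL * cpu_budget_pct;
      
              if (projected > allowed && sample_rate_hz > 1000) {
                      sample_rate_hz /= 2;             /* back off, don't hang */
                      printf("perf: dropping sample rate to %u Hz\n",
                             sample_rate_hz);
              }
      }
      
      int main(void)
      {
              for (int i = 0; i < 8; i++)
                      note_sample_took(50000);         /* 50us per sample */
              return 0;
      }
      ```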
      
      This patch can tend to drop the sample rate down to a level where
      perf doesn't work very well.  *BUT* the alternative is that my
      system hangs because it spends all of its time handling NMIs.
      
      I'll take a busted performance tool over an entire system that's
      busted and undebuggable any day.
      
      BTW, my suspicion is that there's still an underlying bug here.
      Using the HPET instead of the TSC is definitely a contributing
      factor, but I suspect there are some other things going on.
      But, I can't go dig down on a bug like that with my machine
      hanging all the time.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus@samba.org
      Cc: acme@ghostprotocols.net
      Cc: Dave Hansen <dave@sr71.net>
      [ Prettified it a bit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 19 Jun, 2013 · 11 commits
  5. 28 May, 2013 · 3 commits
  6. 05 May, 2013 · 1 commit
  7. 04 May, 2013 · 2 commits
  8. 30 Apr, 2013 · 4 commits
  9. 26 Apr, 2013 · 1 commit
    • perf/x86/intel/P4: Robustify P4 PMU types · 5ac2b5c2
      Ingo Molnar authored
      Linus found, while extending integer type extension checks in the
      sparse static code checker, various fragile patterns of mixed
      signed/unsigned 64-bit/32-bit integer use in perf_event_p4.c.
      
      The relevant hardware register ABI is 64-bit wide on 32-bit
      kernels as well, so clean it all up a bit, remove unnecessary
      casts, and make sure we use 64-bit unsigned integers in these
      places.
      
      [ Unfortunately this patch was not tested on real P4 hardware,
        those are pretty rare already. If this patch causes any
        problems on P4 hardware then please holler ... ]
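      
      The class of pattern being cleaned up looks roughly like this (a
      generic sketch, not code taken from perf_event_p4.c):
      
      ```c
      #include <stdint.h>
      #include <stdio.h>
      
      typedef uint64_t u64;  /* MSR values are 64-bit even on 32-bit kernels */
      
      int main(void)
      {
              int bit = 40;
      
              /* Fragile: "1 << bit" shifts a signed 32-bit int, which is
               * undefined for bit >= 32 and silently truncates the mask. */
              /* u64 bad = 1 << bit; */
      
              /* Robust: do the arithmetic in unsigned 64-bit from the
               * start, with no casts needed at the use sites. */
              u64 good = 1ULL << bit;
      
              printf("mask = %#llx\n", (unsigned long long)good);
              return 0;
      }
      ```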
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20130424072630.GB1780@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  10. 22 Apr, 2013 · 1 commit
  11. 21 Apr, 2013 · 6 commits
  12. 18 Apr, 2013 · 1 commit
  13. 16 Apr, 2013 · 4 commits
  14. 12 Apr, 2013 · 1 commit
    • x86: Use a read-only IDT alias on all CPUs · 4eefbe79
      Kees Cook authored
      Make a copy of the IDT (as seen via the "sidt" instruction) read-only.
      This primarily removes the IDT from being a target for arbitrary memory
      write attacks, and has the added benefit of also not leaking the kernel
      base offset, if it has been relocated.
      
      We already did this on vendor == Intel and family == 5 because of the
      F0 0F bug -- regardless of whether a particular CPU had the F0 0F bug
      or not.  Since the workaround was so cheap, there simply was no reason
      to be very specific.  This patch extends the read-only alias to all
      CPUs, but does not activate the #PF to #UD conversion code needed to
      deliver the proper exception in the F0 0F case except on Intel family
      5 processors.
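      
      Mechanically, the idea is a read-only fixmap alias that the IDT
      register is pointed at. A kernel-context sketch, close in spirit to
      the patch but not the literal diff (and not a standalone program):
      
      ```c
      #include <linux/init.h>
      #include <asm/fixmap.h>   /* __set_fixmap, fix_to_virt, FIX_RO_IDT */
      #include <asm/desc.h>     /* idt_descr, idt_table, load_idt */
      
      static void __init map_ro_idt_alias(void)
      {
              /* Create a read-only virtual alias of the IDT page. */
              __set_fixmap(FIX_RO_IDT, __pa_symbol(idt_table), PAGE_KERNEL_RO);
      
              /*
               * Load the IDT register through the alias: writes to the
               * address that "sidt" reports now fault, and that address
               * no longer leaks the kernel's (possibly relocated) base.
               */
              idt_descr.address = fix_to_virt(FIX_RO_IDT);
              load_idt(&idt_descr);
      }
      ```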
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Link: http://lkml.kernel.org/r/20130410192422.GA17344@www.outflux.net
      Cc: Eric Northup <digitaleric@google.com>
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
  15. 10 Apr, 2013 · 2 commits