1. 29 January 2010, 3 commits
    • perf_events, x86: Improve x86 event scheduling · 1da53e02
      Authored by Stephane Eranian
      This patch improves event scheduling by maximizing the use of PMU
      registers regardless of the order in which events are created in a group.
      
      The algorithm takes into account the list of counter constraints for each
      event. It assigns events to counters from the most constrained (i.e., an
      event that works on only one counter) to the least constrained (i.e., one
      that works on any counter).
      
      Intel fixed counter events and the BTS special event are also handled via
      this algorithm, which is designed to be fairly generic.
      
      The patch also updates event validation to use the scheduling algorithm,
      so a group that cannot be scheduled fails early in perf_event_open().
      
      The 2nd version of this patch follows the model used by PPC, by running
      the scheduling algorithm and the actual assignment separately. Actual
      assignment takes place in hw_perf_enable() whereas scheduling is
      implemented in hw_perf_group_sched_in() and x86_pmu_enable().
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ fixup whitespace and style nits as well as adding is_x86_event() ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <4b5430c6.0f975e0a.1bf9.ffff85fe@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      1da53e02
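
      The "most constrained first" assignment described above can be illustrated with a small
      user-space sketch in C. This is only an illustration under assumed names (struct evt,
      sched_events and NUM_COUNTERS are made up, not the kernel's); per the commit, the real
      assignment happens in hw_perf_enable() and works on per-event constraint masks.

      #include <stdbool.h>
      #include <stdlib.h>

      #define NUM_COUNTERS 4

      struct evt {
              unsigned long cmask;    /* bitmask of counters this event may use */
              int weight;             /* popcount of cmask: fewer bits = more constrained */
              int assigned;           /* counter index, or -1 if unplaced */
      };

      /* Sort ascending by weight so the most constrained events are placed first. */
      static int cmp_weight(const void *a, const void *b)
      {
              return ((const struct evt *)a)->weight - ((const struct evt *)b)->weight;
      }

      /* Returns true if every event in the group fits on a free counter. */
      static bool sched_events(struct evt *evts, int n)
      {
              unsigned long used = 0;
              int i, c;

              qsort(evts, n, sizeof(*evts), cmp_weight);

              for (i = 0; i < n; i++) {
                      evts[i].assigned = -1;
                      for (c = 0; c < NUM_COUNTERS; c++) {
                              if ((evts[i].cmask & (1UL << c)) && !(used & (1UL << c))) {
                                      used |= 1UL << c;
                                      evts[i].assigned = c;
                                      break;
                              }
                      }
                      if (evts[i].assigned < 0)
                              return false;   /* group cannot be scheduled: fail early */
              }
              return true;
      }

      Validation works in the same spirit: if sched_events() returns false for a group, the
      group is rejected up front instead of silently under-counting later.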
    • x86/hw-breakpoints: Optimize return code from notifier chain in hw_breakpoint_handler · e0e53db6
      Authored by K.Prasad
      In most cases, do_debug() can stop processing a debug exception that
      originated from a hw-breakpoint by returning NOTIFY_STOP.
      
      But in certain cases, such as:
      
      a) user-space breakpoints with a pending SIGTRAP signal delivery (as with
      ptrace-induced breakpoints), and
      
      b) exceptions due to causes other than breakpoints,
      
      we continue processing the exception by returning NOTIFY_DONE.
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      LKML-Reference: <20100128111415.GC13935@in.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      e0e53db6
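
      The return-value policy above boils down to a small decision. A hedged sketch follows,
      using made-up names (struct dbg_event, hw_breakpoint_policy); only NOTIFY_DONE and
      NOTIFY_STOP mirror the real <linux/notifier.h> values, and this is not the actual
      hw_breakpoint_handler() code.

      #define NOTIFY_DONE     0x0000          /* keep processing: let ptrace etc. run */
      #define NOTIFY_STOP     0x8001          /* consumed: stop the notifier chain */

      struct dbg_event {
              int caused_by_breakpoint;       /* DR6 says one of our breakpoints fired */
              int user_bp_needs_sigtrap;      /* user-space bp with SIGTRAP still pending */
      };

      static int hw_breakpoint_policy(const struct dbg_event *ev)
      {
              if (!ev->caused_by_breakpoint)
                      return NOTIFY_DONE;     /* case (b): not a breakpoint exception */
              if (ev->user_bp_needs_sigtrap)
                      return NOTIFY_DONE;     /* case (a): let the SIGTRAP be delivered */
              return NOTIFY_STOP;             /* fully handled here; stop the chain */
      }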
    • x86/debug: Clear reserved bits of DR6 in do_debug() · 40f9249a
      Authored by K.Prasad
      Clear the reserved bits from the stored copy of the debug status register
      (DR6).
      This makes simple bitwise operations possible, such as quickly testing
      where a debug event originated.
      Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
      Cc: Roland McGrath <roland@redhat.com>
      Cc: Jan Kiszka <jan.kiszka@siemens.com>
      Cc: Alan Stern <stern@rowland.harvard.edu>
      Cc: Ingo Molnar <mingo@elte.hu>
      LKML-Reference: <20100128111401.GB13935@in.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      40f9249a
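
      A minimal sketch of that sanitation, assuming a mask that keeps only the architecturally
      defined DR6 status bits (B0-B3, BD, BS, BT); the kernel's actual DR6_RESERVED constant
      and helper names may differ, so treat the values below as illustrative.

      /* Bits 0-3 (B0-B3), 13 (BD), 14 (BS), 15 (BT); everything else is dropped. */
      #define DR6_DEFINED_BITS        0x0000e00fUL

      static unsigned long sanitize_dr6(unsigned long dr6)
      {
              /* With reserved bits cleared, callers can test the event origin bitwise. */
              return dr6 & DR6_DEFINED_BITS;
      }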
  2. 28 January 2010, 1 commit
    • perf: Fix inconsistency between IP and callchain sampling · 339ce1a4
      Authored by Anton Blanchard
      When running perf across all cpus with backtracing (-a -g), sometimes we
      get samples without associated backtraces:
      
          23.44%         init  [kernel]                     [k] restore
          11.46%         init                       eeba0c  [k] 0x00000000eeba0c
           6.77%      swapper  [kernel]                     [k] .perf_ctx_adjust_freq
           5.73%         init  [kernel]                     [k] .__trace_hcall_entry
           4.69%         perf  libc-2.9.so                  [.] 0x0000000006bb8c
                             |
                             |--11.11%-- 0xfffa941bbbc
      
      It turns out the backtrace code has a check for the idle task that the IP
      sampling does not. This creates problems when profiling an interrupt-heavy
      workload (in my case 10Gbit Ethernet), since we get no backtraces for
      interrupts received while idle (i.e. most of the workload).
      
      Right now x86 and sh also check that current is not NULL, which should
      never happen, so remove that check too.
      
      Exclusion of the idle task must be performed in the core code, based on
      perf_event_attr::exclude_idle.
      Signed-off-by: Anton Blanchard <anton@samba.org>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mundt <lethal@linux-sh.org>
      LKML-Reference: <20100118054707.GT12666@kryten>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      339ce1a4
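
      The fix amounts to making the idle-task decision in one place. A hedged sketch of that
      core-side policy follows; the names are made up (attr_sketch, task_is_idle,
      sample_allowed) and only the exclude_idle bit corresponds to the real perf_event_attr
      field.

      #include <stdbool.h>

      struct attr_sketch {
              unsigned int exclude_idle : 1;  /* mirrors perf_event_attr.exclude_idle */
      };

      /* Stand-in for "current is the idle task". */
      static bool task_is_idle(void)
      {
              return false;
      }

      /*
       * One core-code decision covers both the IP sample and its callchain,
       * so the two can never disagree the way the report above shows.
       */
      static bool sample_allowed(const struct attr_sketch *attr)
      {
              if (attr->exclude_idle && task_is_idle())
                      return false;
              return true;
      }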
  3. 13 January 2010, 28 commits
  4. 12 January 2010, 8 commits