1. 03 Apr 2010, 1 commit
    • perf, x86: Undo some *_counter* -> *_event* renames · 948b1bb8
      Committed by Robert Richter
      The big rename:
      
       cdd6c482 perf: Do the big rename: Performance Counters -> Performance Events
      
      accidentally renamed some members of structs that were named after
      registers in the spec. To avoid confusion this patch reverts some
      changes. The related specs are MSR descriptions in AMD's BKDGs and the
      ARCHITECTURAL PERFORMANCE MONITORING section in the Intel 64 and IA-32
      Architectures Software Developer's Manuals.
      
      This patch does:
      
       $ sed -i -e 's:num_events:num_counters:g' \
         arch/x86/include/asm/perf_event.h \
         arch/x86/kernel/cpu/perf_event_amd.c \
         arch/x86/kernel/cpu/perf_event.c \
         arch/x86/kernel/cpu/perf_event_intel.c \
         arch/x86/kernel/cpu/perf_event_p6.c \
         arch/x86/kernel/cpu/perf_event_p4.c \
         arch/x86/oprofile/op_model_ppro.c
      
       $ sed -i -e 's:event_bits:cntval_bits:g' -e 's:event_mask:cntval_mask:g' \
         arch/x86/kernel/cpu/perf_event_amd.c \
         arch/x86/kernel/cpu/perf_event.c \
         arch/x86/kernel/cpu/perf_event_intel.c \
         arch/x86/kernel/cpu/perf_event_p6.c \
         arch/x86/kernel/cpu/perf_event_p4.c
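
       The net effect of the renames, as a minimal user-space sketch: the
       field names below (num_counters, cntval_bits, cntval_mask) come from
       the sed runs above; the struct itself and its example values are
       illustrative assumptions, not the kernel's actual x86_pmu definition.

        #include <stdint.h>
        #include <stdio.h>

        /* Field names follow the renames above; everything else here is
         * an illustration only, not kernel code. */
        struct pmu_caps {
                int      num_counters;  /* number of counter registers */
                int      cntval_bits;   /* width of a counter register in bits */
                uint64_t cntval_mask;   /* mask of valid counter value bits */
        };

        int main(void)
        {
                /* Example values in the style of a 4-counter, 48-bit-wide PMU. */
                struct pmu_caps pmu = { .num_counters = 4, .cntval_bits = 48 };

                pmu.cntval_mask = (1ULL << pmu.cntval_bits) - 1;
                printf("counters=%d width=%d mask=%#llx\n", pmu.num_counters,
                       pmu.cntval_bits, (unsigned long long)pmu.cntval_mask);
                return 0;
        }
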
       Signed-off-by: Robert Richter <robert.richter@amd.com>
       Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
       LKML-Reference: <1269880612-25800-2-git-send-email-robert.richter@amd.com>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      948b1bb8
  2. 26 Mar 2010, 1 commit
  3. 12 Mar 2010, 1 commit
    • perf, x86: Implement initial P4 PMU driver · a072738e
      Committed by Cyrill Gorcunov
      The NetBurst PMU is way different from the "architectural
      performance monitoring" specification that current CPUs use.
      P4 uses a tuple of ESCR+CCCR+COUNTER MSR registers to handle
      performance monitoring events.
      
      A few implementation details:
      
      1) We need a separate x86_pmu::hw_config helper in struct
         x86_pmu since register bit-fields are quite different from P6,
         Core and later cpu series.
      
      2) For the same reason an x86_pmu::schedule_events helper is
         introduced.
      
      3) hw_perf_event::config consists of packed ESCR+CCCR values.
         This is possible since in reality both registers only use half
         of their bits. Of course, before making a real write into a
         particular MSR we need to unpack the value and extend it to
         its proper size (see the packing sketch after this list).
      
      4) The tuple of packed ESCR+CCCR values in hw_perf_event::config
         doesn't describe the memory address of the ESCR MSR register,
         so we need to keep a mapping between the tuples in use and the
         available ESCRs (various P4 events may use the same ESCR, but
         not simultaneously). For this purpose every active event has a
         per-cpu map of hw_perf_event::idx <--> ESCR addresses.
      
      5) Since hw_perf_event::idx is an offset into the counter/control
         registers we need to lift X86_PMC_MAX_GENERIC up, otherwise the
         kernel strips it down to 8 registers and an armed event may never
         be turned off (ie the bit in active_mask is set but the loop never
         reaches this index to check it); thanks to Peter Zijlstra.
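
      A minimal, self-contained sketch of the packing described in 3) and 4);
      the helper names, the 32/32 bit split and the example values are
      illustrative assumptions, not the driver's actual encoding:

       #include <stdint.h>
       #include <stdio.h>

       /* Pack the significant halves of ESCR and CCCR into one 64-bit
        * config word; unpack (and zero-extend) before any real MSR write.
        * Names and layout here are for illustration only. */
       static uint64_t pack_config(uint32_t escr, uint32_t cccr)
       {
               return ((uint64_t)escr << 32) | cccr;
       }

       static uint32_t unpack_escr(uint64_t config) { return config >> 32; }
       static uint32_t unpack_cccr(uint64_t config) { return (uint32_t)config; }

       int main(void)
       {
               uint64_t config = pack_config(0x0003b000u, 0x00039000u); /* example bits */

               printf("escr=%#x cccr=%#x\n", unpack_escr(config), unpack_cccr(config));
               return 0;
       }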
      
      Restrictions:
      
       - No cascaded counters support (do we ever need them?)
       - No dependent events support (so PERF_COUNT_HW_INSTRUCTIONS
         doesn't work for now)
       - There are events sharing the same counters, which can't work
         simultaneously (we need to use the intersected ones, due to broken
         counter 1)
       - No PERF_COUNT_HW_CACHE_ events yet
      
      Todo:
      
       - Implement dependent events
       - Need proper hashing for event opcodes (no linear search; fine for
         the debugging stage but not under real loads)
       - Some events are counted per clock cycle -- we need to set a threshold
         for them and count every clock cycle just to get summary statistics
         (ie to behave the same way as other PMUs do)
       - Need to switch to using event_constraints
       - To support RAW events we need to encode a global list of P4 events
         into p4_templates
       - Cache events need to be added
      
      Event support status matrix:
      
        Event                   status
        ------------------------------
        cycles                  works
        cache-references        works
        cache-misses            works
        branch-misses           works
        bus-cycles              partially (does not work on 64-bit CPUs with HT enabled)
        instructions            doesn't work (needs a dependent event [mop tagging])
        branches                doesn't work
       Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
       Signed-off-by: Lin Ming <ming.m.lin@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100311165439.GB5129@lenovo>
       Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a072738e
  4. 10 Mar 2010, 1 commit
  5. 01 Mar 2010, 1 commit
    • perf, x86: rename macro in ARCH_PERFMON_EVENTSEL_ENABLE · bb1165d6
      Committed by Robert Richter
      For consistency reasons this patch renames
      ARCH_PERFMON_EVENTSEL0_ENABLE to ARCH_PERFMON_EVENTSEL_ENABLE.
      
      The following is performed:
      
       $ sed -i -e s/ARCH_PERFMON_EVENTSEL0_ENABLE/ARCH_PERFMON_EVENTSEL_ENABLE/g \
         arch/x86/include/asm/perf_event.h arch/x86/kernel/cpu/perf_event.c \
         arch/x86/kernel/cpu/perf_event_p6.c \
         arch/x86/kernel/cpu/perfctr-watchdog.c \
         arch/x86/oprofile/op_model_amd.c arch/x86/oprofile/op_model_ppro.c
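
       For context, the renamed macro is the per-counter enable (EN) bit,
       bit 22 in the architectural EVENTSEL layout, and it applies to every
       EVENTSEL register rather than only EVENTSEL0, hence the dropped '0'.
       A small user-space sketch of toggling it in a config word; only the
       macro name comes from the patch, the event encoding and the program
       around it are illustrative assumptions:

        #include <stdint.h>
        #include <stdio.h>

        /* Bit 22 of an IA32_PERFEVTSELx-style register is the per-counter
         * enable bit; this is an illustration, not kernel code. */
        #define ARCH_PERFMON_EVENTSEL_ENABLE (1ULL << 22)

        int main(void)
        {
                uint64_t evtsel = 0x000300c0ULL; /* example: event 0xc0 with USR+OS set */

                evtsel |= ARCH_PERFMON_EVENTSEL_ENABLE;  /* counter starts counting */
                printf("enabled:  %#llx\n", (unsigned long long)evtsel);

                evtsel &= ~ARCH_PERFMON_EVENTSEL_ENABLE; /* counter stops */
                printf("disabled: %#llx\n", (unsigned long long)evtsel);
                return 0;
        }
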
       Signed-off-by: Robert Richter <robert.richter@amd.com>
      bb1165d6
  6. 26 Feb 2010, 1 commit