    perf_events, x86: Improve x86 event scheduling · 1da53e02
    Submitted by Stephane Eranian
    This patch improves event scheduling by maximizing the use of PMU
    registers regardless of the order in which events are created in a group.
    
    The algorithm takes into account the list of counter constraints for each
    event. It assigns events to counters from the most constrained (an event
    that can run on only one counter) to the least constrained (an event that
    can run on any counter).
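
    The ordering can be illustrated with a small user-space sketch; the
    struct sched_event, schedule_events() and the example events below are
    simplified stand-ins, not the kernel's actual perf_event.c data
    structures or constraint encoding:

        /* Greedy, constraint-ordered counter assignment (illustrative only). */
        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_COUNTERS 4

        struct sched_event {
            const char *name;
            uint32_t counter_mask;  /* bit i set => event may run on counter i */
            int assigned;           /* chosen counter, filled in by the scheduler */
        };

        /* Place events in order of increasing constraint weight: events that
         * fit on a single counter go first, fully flexible events last, so an
         * early choice never steals the only counter a later event could use. */
        static bool schedule_events(struct sched_event *evts, int n)
        {
            uint32_t used = 0;

            for (int w = 1; w <= NUM_COUNTERS; w++) {
                for (int i = 0; i < n; i++) {
                    if (__builtin_popcount(evts[i].counter_mask) != w)
                        continue;
                    uint32_t free_mask = evts[i].counter_mask & ~used;
                    if (!free_mask)
                        return false;   /* the group cannot be scheduled */
                    evts[i].assigned = __builtin_ctz(free_mask);
                    used |= 1u << evts[i].assigned;
                }
            }
            return true;
        }

        int main(void)
        {
            struct sched_event evts[] = {
                { "cycles",     0xfu, -1 },     /* any of the 4 counters */
                { "fixed-like", 0x1u, -1 },     /* counter 0 only */
                { "pair-01",    0x3u, -1 },     /* counter 0 or 1 */
            };
            int n = (int)(sizeof(evts) / sizeof(evts[0]));

            if (!schedule_events(evts, n)) {
                printf("group does not fit on this PMU\n");
                return 1;
            }
            for (int i = 0; i < n; i++)
                printf("%-10s -> counter %d\n", evts[i].name, evts[i].assigned);
            return 0;
        }

    Regardless of the order in which the three events were created, the
    single-counter event gets counter 0 and the flexible events are pushed
    to the remaining counters.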
    
    Intel fixed-counter events and the special BTS event are also handled by
    this algorithm, which is designed to be fairly generic.
    
    The patch also updates event validation to use the scheduling algorithm,
    so that a group which cannot be scheduled fails early in perf_event_open().
    
    The second version of this patch follows the model used by PPC, running
    the scheduling algorithm and the actual assignment as separate steps.
    Actual assignment takes place in hw_perf_enable(), whereas scheduling is
    implemented in hw_perf_group_sched_in() and x86_pmu_enable().
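
    A hedged sketch of that split (the names cpu_sched_state, schedule_group()
    and commit_assignment() below are hypothetical, not the kernel's): the
    first phase only decides where each event would go, and the second phase
    later programs the hardware from that decision.

        #include <stdbool.h>

        #define MAX_EVENTS 8

        struct cpu_sched_state {
            int n_events;
            int assign[MAX_EVENTS]; /* counter chosen for each event */
        };

        /* Phase 1: runs from the validation / group sched-in path. It applies
         * the scheduling algorithm and records the result without touching
         * hardware, so an impossible group can be rejected early. */
        static bool schedule_group(struct cpu_sched_state *st)
        {
            for (int i = 0; i < st->n_events; i++)
                st->assign[i] = i;      /* placeholder for the real algorithm */
            return true;
        }

        /* Phase 2: runs from the enable path and programs each event onto the
         * counter chosen during phase 1 (in the kernel, via MSR writes). */
        static void commit_assignment(const struct cpu_sched_state *st)
        {
            for (int i = 0; i < st->n_events; i++)
                (void)st->assign[i];    /* program counter st->assign[i] here */
        }

        int main(void)
        {
            struct cpu_sched_state st = { .n_events = 2 };

            if (schedule_group(&st))        /* scheduling / validation step */
                commit_assignment(&st);     /* later, from the enable step */
            return 0;
        }
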
    Signed-off-by: Stephane Eranian <eranian@google.com>
    [ fixup whitespace and style nits as well as adding is_x86_event() ]
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Paul Mackerras <paulus@samba.org>
    Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    LKML-Reference: <4b5430c6.0f975e0a.1bf9.ffff85fe@mx.google.com>
    Signed-off-by: Ingo Molnar <mingo@elte.hu>