1. 26 February 2010 (4 commits)
    • perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in() · 6e37738a
      Committed by Peter Zijlstra
      Since the cpu argument to hw_perf_group_sched_in() is always
      smp_processor_id(), simplify the code a little by removing the argument
      and using the current cpu where needed. (An illustrative sketch follows
      this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1265890918.5396.3.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
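      A minimal sketch of the simplification described above, with invented
      function names and a stubbed smp_processor_id(); this is not the actual
      kernel patch, only an illustration of dropping an argument that every
      caller passed as smp_processor_id():

          #include <stdio.h>

          static int smp_processor_id(void) { return 0; }    /* stub for the sketch */

          /* Before: every call site passed smp_processor_id() explicitly. */
          static int group_sched_in_old(int cpu)
          {
              printf("scheduling group on cpu %d\n", cpu);
              return 0;
          }

          /* After: the callee asks for the current cpu itself where needed. */
          static int group_sched_in_new(void)
          {
              int cpu = smp_processor_id();

              printf("scheduling group on cpu %d\n", cpu);
              return 0;
          }

          int main(void)
          {
              group_sched_in_old(smp_processor_id());
              group_sched_in_new();
              return 0;
          }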
    • perf_events, x86: AMD event scheduling · 38331f62
      Committed by Stephane Eranian
      This patch adds correct AMD Northbridge (NB) event scheduling.

      NB events measure L3 cache and HyperTransport traffic. They are
      identified by an event code >= 0xe0 and measure activity on the
      Northbridge, which is shared by all cores on a package. NB events are
      counted on a shared set of counters: when an NB event is programmed into
      a counter, the data actually comes from a shared counter, so access to
      those counters needs to be synchronized.

      The synchronization is implemented such that no two cores can measure
      NB events using the same counters. To that end, a per-NB allocation
      table is maintained, and the available slot is propagated through the
      event_constraint structure. (A sketch of the allocation idea follows
      this entry.)
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
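      A minimal sketch of the per-NB allocation idea, assuming invented names
      (nb_alloc, claim_nb_counter) and a fixed table size: each core claims a
      shared counter slot with a compare-and-swap, so no two cores can program
      the same Northbridge counter. The kernel's actual bookkeeping lives
      behind the event_constraint machinery; this only illustrates the
      synchronization.

          #include <stdatomic.h>
          #include <stdio.h>

          #define NB_COUNTERS 4

          /* One table per Northbridge; -1 means "free", otherwise the owner id. */
          static _Atomic int nb_alloc[NB_COUNTERS];

          static int claim_nb_counter(int owner)
          {
              for (int i = 0; i < NB_COUNTERS; i++) {
                  int expected = -1;

                  if (atomic_compare_exchange_strong(&nb_alloc[i], &expected, owner))
                      return i;              /* got counter i exclusively */
              }
              return -1;                     /* all shared counters busy */
          }

          static void release_nb_counter(int i)
          {
              atomic_store(&nb_alloc[i], -1);
          }

          int main(void)
          {
              for (int i = 0; i < NB_COUNTERS; i++)
                  atomic_init(&nb_alloc[i], -1);

              int c = claim_nb_counter(7);

              printf("owner 7 got NB counter %d\n", c);
              release_nb_counter(c);
              return 0;
          }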
    • perf_events: Add new start/stop PMU callbacks · d76a0812
      Committed by Stephane Eranian
      In certain situations, the kernel may need to stop and start the same
      event rapidly. The current PMU callbacks do not distinguish between stop
      and release (i.e., stop + free the resource). Thus, a counter may be
      released and then immediately re-acquired. Event scheduling will take
      place again with no guarantee of being assigned the same counter. On
      some processors, this may even lead to a failure to assign the event
      back, due to competition between cores.

      This patch adds a new pair of callbacks to stop and restart a counter
      without actually releasing the underlying counter resource. On stop, the
      counter is stopped and its value saved, and that's it. On start, the
      value is reloaded and the counter restarted (on x86, the actual restart
      is delayed until perf_enable()). (A hedged sketch of the callback pair
      follows this entry.)
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ added fallback to ->enable/->disable for all other PMUs
        fixed x86_pmu_start() to call x86_pmu.enable()
        merged __x86_pmu_disable into x86_pmu_stop() ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
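      A hedged sketch of the callback pair and of the ->enable/->disable
      fallback mentioned in the bracketed note; the struct and function names
      here are illustrative stand-ins, not the perf core's actual API.

          #include <stdio.h>

          struct pmu_ops {
              void (*enable)(void);     /* acquire and start a counter         */
              void (*disable)(void);    /* stop and release a counter          */
              void (*start)(void);      /* optional: restart, keep the counter */
              void (*stop)(void);       /* optional: pause, keep the counter   */
          };

          /* Stop without releasing if the PMU supports it, else fall back. */
          static void event_stop(const struct pmu_ops *p)
          {
              if (p->stop)
                  p->stop();            /* value saved, counter kept */
              else
                  p->disable();         /* old behaviour: counter released */
          }

          static void event_start(const struct pmu_ops *p)
          {
              if (p->start)
                  p->start();           /* value reloaded, same counter restarted */
              else
                  p->enable();          /* old behaviour: full re-acquisition */
          }

          static void demo_enable(void)  { printf("enable\n");  }
          static void demo_disable(void) { printf("disable\n"); }

          int main(void)
          {
              /* A PMU that only provides enable/disable uses the fallback. */
              struct pmu_ops legacy = { .enable = demo_enable,
                                        .disable = demo_disable };

              event_stop(&legacy);
              event_start(&legacy);
              return 0;
          }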
    • perf_events: Report the MMAP pgoff value in bytes · 3a0304e9
      Committed by Peter Zijlstra
      DaveM reported that perf currently interprets the pgoff value reported
      by the MMAP events as a byte range, while the kernel reports it as a
      page offset.

      Since it is broken (and unusable) anyway, change the kernel behaviour
      (ABI) to report bytes instead, avoiding the need for userspace to deal
      with PAGE_SIZE. (A tiny sketch of the conversion follows this entry.)
      Reported-by: David Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
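      A tiny sketch, assuming 4 KiB pages and made-up variable names, of what
      the ABI change amounts to: the offset is now reported already shifted
      into bytes, so userspace no longer multiplies by the page size itself.

          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>

          #define PAGE_SHIFT 12            /* 4 KiB pages assumed for the example */

          int main(void)
          {
              uint64_t pgoff_pages = 3;                          /* old report: pages */
              uint64_t pgoff_bytes = pgoff_pages << PAGE_SHIFT;  /* new report: bytes */

              printf("page offset %" PRIu64 " -> byte offset %" PRIu64 "\n",
                     pgoff_pages, pgoff_bytes);
              return 0;
          }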
  2. 04 February 2010 (1 commit)
  3. 29 January 2010 (1 commit)
    • perf_events: Fix sample_period transfer on inherit · 75c9f328
      Committed by Peter Zijlstra
      One problem with frequency driven counters is that we cannot predict
      the rate at which they trigger, so we have to start them at period=1,
      which causes a ramp-up effect. However, if we fail to propagate the
      stable state on fork, each new child has to ramp up again. This can
      lead to significant artifacts in sample data. (A sketch of the
      inheritance fix follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: eranian@google.com
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1264752266.4283.2121.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
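      A minimal sketch of the inheritance fix, assuming an invented struct
      layout and helper name; it only mirrors the idea of copying the parent's
      ramped-up period instead of restarting the child at period = 1.

          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>

          struct hw_state {
              uint64_t sample_period;    /* current period, adjusted at runtime */
          };

          struct event {
              uint64_t initial_period;   /* 1 for frequency-driven events */
              struct hw_state hw;
          };

          static void inherit_event(const struct event *parent, struct event *child)
          {
              child->initial_period = parent->initial_period;
              /* Propagate the stable period so the child skips the ramp-up. */
              child->hw.sample_period = parent->hw.sample_period;
          }

          int main(void)
          {
              struct event parent = { .initial_period = 1,
                                      .hw = { .sample_period = 250000 } };
              struct event child;

              inherit_event(&parent, &child);
              printf("child starts at period %" PRIu64 "\n", child.hw.sample_period);
              return 0;
          }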
  4. 27 January 2010 (1 commit)
    • perf: Reimplement frequency driven sampling · abd50713
      Committed by Peter Zijlstra
      There was a bug in the old period code that caused intel_pmu_enable_all()
      or native_write_msr_safe() to show up quite high in the profiles.

      Staring at that code made my head hurt, so I rewrote it in a hopefully
      simpler fashion. It is now fully symmetric between tick- and
      overflow-driven adjustments, and uses less data to boot.

      The only complication is that it basically wants to do a u128 division.
      The code approximates that in a rather simple truncate-until-it-fits
      fashion, taking care to balance the terms while truncating. (A sketch of
      that approximation follows this entry.)

      This version does not generate that sampling artifact.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
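      A hedged sketch of the truncate-until-it-fits approximation described
      above; the function name and the choice of which operands to shift are
      illustrative, not the kernel's exact code. The idea: to compute a*b/c
      where a*b would need 128 bits, shift one numerator term and the
      denominator down together until the multiply fits in 64 bits, which
      keeps the quotient roughly unchanged.

          #include <inttypes.h>
          #include <stdint.h>
          #include <stdio.h>

          /* Approximate a * b / c without a 128-bit divide. */
          static uint64_t approx_mul_div(uint64_t a, uint64_t b, uint64_t c)
          {
              /* Truncate a numerator term and the denominator in step so the
               * ratio stays balanced while the product shrinks. */
              while (b && a > UINT64_MAX / b) {
                  a >>= 1;
                  c >>= 1;
              }
              if (!c)
                  c = 1;
              return a * b / c;
          }

          int main(void)
          {
              /* e.g. new period = old period * target count / observed count */
              uint64_t period = approx_mul_div(1000000000ULL, 4000ULL, 3000ULL);

              printf("adjusted period: %" PRIu64 "\n", period);
              return 0;
          }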
  5. 21 January 2010 (1 commit)
  6. 17 January 2010 (4 commits)
    • perf: Better order flexible and pinned scheduling · 329c0e01
      Committed by Frederic Weisbecker
      When a task gets scheduled in, we don't touch the cpu bound events,
      so the priority order becomes:

      	cpu pinned, cpu flexible, task pinned, task flexible.

      So schedule out the cpu flexible groups when a new task context gets
      in, and order the groups to schedule in correctly:

      	task pinned, cpu flexible, task flexible.

      Cpu pinned groups don't need to be touched at this time. (A sketch of
      the resulting call order follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
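      A toy sketch of the call order this gives on task switch-in, assuming
      invented helper names (the real scheduling paths are more involved):
      cpu pinned groups are left alone, cpu flexible groups are swapped out,
      and the remaining groups are scheduled back in the order listed above.

          #include <stdio.h>

          enum group_type { PINNED, FLEXIBLE };

          static const char *name(enum group_type t)
          {
              return t == PINNED ? "pinned" : "flexible";
          }

          static void cpu_ctx_sched_out(enum group_type t) { printf("out: cpu %s\n",  name(t)); }
          static void cpu_ctx_sched_in(enum group_type t)  { printf("in:  cpu %s\n",  name(t)); }
          static void task_ctx_sched_in(enum group_type t) { printf("in:  task %s\n", name(t)); }

          /* On task switch-in, cpu pinned groups are not touched; cpu flexible
           * groups are scheduled out and the rest are scheduled back in the
           * order described in the commit message. */
          static void task_sched_in(void)
          {
              cpu_ctx_sched_out(FLEXIBLE);   /* make room, keep cpu pinned running */
              task_ctx_sched_in(PINNED);
              cpu_ctx_sched_in(FLEXIBLE);
              task_ctx_sched_in(FLEXIBLE);
          }

          int main(void)
          {
              task_sched_in();
              return 0;
          }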
    • perf: Don't schedule out/in pinned events on task tick · 7defb0f8
      Committed by Frederic Weisbecker
      We don't need to schedule pinned events in and out on the task tick,
      now that pinned and flexible groups can be scheduled separately.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
    • perf: Allow pinned and flexible groups to be scheduled separately · 5b0311e1
      Committed by Frederic Weisbecker
      Tune the scheduling helpers so that we can choose to schedule either
      pinned and/or flexible groups from a context.

      While at it, refactor the naming of these helpers a bit to make them
      more consistent and flexible.

      There is no (intended) change in scheduling behaviour in this patch.
      (A sketch of the helper interface follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
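      A minimal sketch of what such helpers can look like, with an event-type
      mask selecting pinned groups, flexible groups, or both; the enum values
      and function names are illustrative assumptions, not necessarily the
      kernel's.

          #include <stdio.h>

          enum event_type {
              EVENT_PINNED   = 0x1,
              EVENT_FLEXIBLE = 0x2,
              EVENT_ALL      = EVENT_PINNED | EVENT_FLEXIBLE,
          };

          struct perf_context { const char *name; };

          static void sched_in_pinned(struct perf_context *ctx)
          {
              printf("%s: schedule in pinned groups\n", ctx->name);
          }

          static void sched_in_flexible(struct perf_context *ctx)
          {
              printf("%s: schedule in flexible groups\n", ctx->name);
          }

          /* One helper, parameterised by which kind of groups to touch. */
          static void ctx_sched_in(struct perf_context *ctx, int type)
          {
              if (type & EVENT_PINNED)
                  sched_in_pinned(ctx);
              if (type & EVENT_FLEXIBLE)
                  sched_in_flexible(ctx);
          }

          int main(void)
          {
              struct perf_context task_ctx = { "task" };

              ctx_sched_in(&task_ctx, EVENT_PINNED);   /* pinned only */
              ctx_sched_in(&task_ctx, EVENT_ALL);      /* both kinds  */
              return 0;
          }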
    • perf: Make __perf_event_sched_out static · 42cce92f
      Committed by Frederic Weisbecker
      __perf_event_sched_out doesn't need to be globally available, so make
      it static.
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
  7. 16 January 2010 (3 commits)
    • perf: Export software-only event group characteristic as a flag · d6f962b5
      Committed by Frederic Weisbecker
      Before scheduling an event group, we first check whether the group can
      go on. We check whether the group is made of software-only events, in
      which case it is enough to know that the group can be scheduled in.

      For that purpose, we iterate through the whole group, which is wasteful:
      we could do this check when we add or delete an event to or from a
      group.

      So we create a group_flags field in perf_event that can host
      characteristics of a group of events, starting with a first
      PERF_GROUP_SOFTWARE flag that reduces the check on the fast path.
      (A sketch of the flag maintenance follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
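      A minimal sketch of maintaining such a flag when events join a group, so
      the fast-path check becomes a single bit test; the field and flag names
      below are illustrative stand-ins rather than the kernel's definitions.

          #include <stdio.h>

          #define GROUP_SOFTWARE 0x1       /* every member is a software event */

          struct event {
              int is_software;
              int group_flags;             /* meaningful on the group leader */
          };

          static void init_leader(struct event *leader)
          {
              leader->group_flags = leader->is_software ? GROUP_SOFTWARE : 0;
          }

          static void add_to_group(struct event *leader, const struct event *member)
          {
              /* One hardware member is enough to clear the flag. */
              if (!member->is_software)
                  leader->group_flags &= ~GROUP_SOFTWARE;
          }

          /* Fast path: a single bit test instead of walking all siblings. */
          static int group_is_software_only(const struct event *leader)
          {
              return leader->group_flags & GROUP_SOFTWARE;
          }

          int main(void)
          {
              struct event leader = { .is_software = 1 };
              struct event hw     = { .is_software = 0 };

              init_leader(&leader);
              printf("software only: %d\n", group_is_software_only(&leader));
              add_to_group(&leader, &hw);
              printf("software only: %d\n", group_is_software_only(&leader));
              return 0;
          }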
    • perf: Round robin flexible groups of events using list_rotate_left() · e2864173
      Committed by Frederic Weisbecker
      This is more proper than doing it through a list_for_each_entry()
      that breaks after the first entry.

      v2: Don't rotate pinned groups, as there is no need to time-share
      them. (A sketch of the rotation effect follows this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
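      A toy sketch of what rotating the flexible group list achieves; a plain
      array stands in here for the kernel's list_rotate_left() on a linked
      list. The head group moves to the tail, so a different group gets first
      pick of the counters on the next scheduling pass.

          #include <stdio.h>

          static void rotate_left(int *groups, int n)
          {
              if (n < 2)
                  return;

              int first = groups[0];

              for (int i = 1; i < n; i++)
                  groups[i - 1] = groups[i];
              groups[n - 1] = first;
          }

          int main(void)
          {
              int flexible[] = { 1, 2, 3, 4 };

              rotate_left(flexible, 4);
              for (int i = 0; i < 4; i++)
                  printf("%d ", flexible[i]);    /* prints: 2 3 4 1 */
              printf("\n");
              return 0;
          }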
    • perf/core: Split context's event group list into pinned and non-pinned lists · 889ff015
      Committed by Frederic Weisbecker
      Split up struct perf_event_context::group_list into pinned_groups
      and flexible_groups (non-pinned).

      At first this appears to be useless, as it duplicates various loops
      around the group list handling.

      But it scales better in the fast path in perf_sched_in(): we no longer
      iterate twice through the entire list to separate pinned and non-pinned
      scheduling; instead we iterate through two distinct lists.

      The other desired effect is that it makes it easier to define distinct
      scheduling rules for the two. (A sketch of the split follows this
      entry.)

      Changes in v2:
      - Rename pinned_grp_list and volatile_grp_list to pinned_groups and
        flexible_groups respectively, as per Ingo's suggestion.
      - Various cleanups
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
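      A simplified sketch of the data-structure change, using plain singly
      linked lists instead of the kernel's list_head and invented helper
      names: the context keeps two group lists, and an event goes onto one or
      the other based on its pinned attribute, so scheduling never has to
      filter a mixed list.

          #include <stdio.h>

          struct event {
              const char   *name;
              int           pinned;
              struct event *next;
          };

          struct context {
              struct event *pinned_groups;
              struct event *flexible_groups;
          };

          static void list_add_group(struct context *ctx, struct event *ev)
          {
              struct event **head = ev->pinned ? &ctx->pinned_groups
                                               : &ctx->flexible_groups;

              ev->next = *head;
              *head = ev;
          }

          static void print_list(const char *label, const struct event *ev)
          {
              printf("%s:", label);
              for (; ev; ev = ev->next)
                  printf(" %s", ev->name);
              printf("\n");
          }

          int main(void)
          {
              struct context ctx = { 0 };
              struct event a = { "cycles",     1, 0 };
              struct event b = { "cache-miss", 0, 0 };

              list_add_group(&ctx, &a);
              list_add_group(&ctx, &b);
              print_list("pinned", ctx.pinned_groups);
              print_list("flexible", ctx.flexible_groups);
              return 0;
          }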
  8. 31 December 2009 (1 commit)
  9. 28 December 2009 (2 commits)
    • perf events: Remove CONFIG_EVENT_PROFILE · 07b139c8
      Committed by Li Zefan
      Quoted from Ingo:

      | This reminds me - i think we should eliminate CONFIG_EVENT_PROFILE -
      | it's an unnecessary Kconfig complication. If both PERF_EVENTS and
      | EVENT_TRACING is enabled we should expose generic tracepoints.
      |
      | Nor is it limited to event 'profiling', so it has become a misnomer as
      | well.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <4B2F1557.2050705@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf events: Remove arg from perf sched hooks · 49f47433
      Committed by Peter Zijlstra
      Since we only ever schedule the local cpu, there is no need to pass the
      cpu number to the perf sched hooks.

      This micro-optimizes things a bit.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  10. 23 December 2009 (1 commit)
  11. 17 December 2009 (3 commits)
  12. 16 December 2009 (1 commit)
    • perf_events: Fix perf_event_attr layout · f13c12c6
      Committed by Peter Zijlstra
      The misalignment of bp_addr created a 32-bit hole, causing different
      structure packings on 32-bit and 64-bit machines.

      Fix that by moving __reserve_2 into that hole.

      Further, remove the useless struct and the redundant __bp_reserve muck.
      (A sketch of the alignment-hole effect follows this entry.)
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <1260902591.8023.781.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
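      A small standalone illustration of the packing problem described; the
      struct and field names are made up, not perf_event_attr's. A lone 32-bit
      field in front of a 64-bit one leaves a 4-byte hole on ABIs that align
      u64 to 8 bytes, but not on i386 where u64 is 4-byte aligned, so the
      layouts diverge; filling the hole with another 32-bit field makes them
      identical.

          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          struct leaky {                 /* 4-byte hole after 'type' on x86-64 */
              uint32_t type;
              uint64_t addr;
          };

          struct hole_filled {           /* the hole is taken by 'reserved' */
              uint32_t type;
              uint32_t reserved;
              uint64_t addr;
          };

          int main(void)
          {
              printf("leaky:  sizeof=%zu, addr at offset %zu\n",
                     sizeof(struct leaky), offsetof(struct leaky, addr));
              printf("filled: sizeof=%zu, addr at offset %zu\n",
                     sizeof(struct hole_filled), offsetof(struct hole_filled, addr));
              return 0;
          }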
  13. 15 December 2009 (2 commits)
  14. 11 December 2009 (1 commit)
  15. 10 December 2009 (1 commit)
    • perf_event: Fix perf_swevent_hrtimer() variable initialization · 21140f4d
      Committed by Xiao Guangrong
      fix:

       [<c0477471>] ? printk+0x1d/0x24
       [<c01c98f9>] ? perf_prepare_sample+0x269/0x280
       [<c0149231>] warn_slowpath_common+0x71/0xd0
       [<c01c98f9>] ? perf_prepare_sample+0x269/0x280
       [<c01492aa>] warn_slowpath_null+0x1a/0x20
       [<c01c98f9>] perf_prepare_sample+0x269/0x280
       [<c016e9f3>] ? cpu_clock+0x53/0x90
       [<c01cc368>] __perf_event_overflow+0x2a8/0x300
       [<c01ccc3b>] perf_event_overflow+0x1b/0x30
       [<c01ccccf>] perf_swevent_hrtimer+0x7f/0x120

      This happens because the 'data.raw' variable is not initialized.
      Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <4B208E93.1010801@cn.fujitsu.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  16. 09 December 2009 (4 commits)
  17. 06 December 2009 (1 commit)
  18. 04 December 2009 (1 commit)
  19. 02 December 2009 (1 commit)
    • perf: Don't free perf_mmap_data until work has been done · ec70ccd8
      Committed by Kristian Høgsberg
      In the CONFIG_PERF_USE_VMALLOC case, perf_mmap_data_free() only
      schedules the cleanup of the perf_mmap_data struct. In that case we
      have to wait until that work has been done before we free the data.
      (A userspace sketch of the pattern follows this entry.)
      Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: <stable@kernel.org>
      LKML-Reference: <1259697901-1747-1-git-send-email-krh@bitplanet.net>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
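      A userspace sketch of the bug class being fixed, with a pthread standing
      in for the kernel's deferred work; the struct and function names are
      invented. The point: the object must not be freed until the asynchronous
      cleanup that still uses it has finished.

          #include <pthread.h>
          #include <stdio.h>
          #include <stdlib.h>

          struct mmap_data { int pages; };

          static void *deferred_cleanup(void *p)
          {
              struct mmap_data *data = p;

              printf("cleaning up %d pages\n", data->pages);   /* still uses 'data' */
              return NULL;
          }

          int main(void)
          {
              struct mmap_data *data = malloc(sizeof(*data));
              pthread_t worker;

              data->pages = 8;
              pthread_create(&worker, NULL, deferred_cleanup, data);

              /* The bug class: freeing 'data' right here would race with the
               * worker.  The fix: wait for the scheduled work first. */
              pthread_join(worker, NULL);
              free(data);
              return 0;
          }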
  20. 01 December 2009 (1 commit)
  21. 27 November 2009 (1 commit)
  22. 26 November 2009 (2 commits)
  23. 25 November 2009 (1 commit)
    • perf_events: Fix bad software/trace event recursion counting · fe612672
      Committed by Frederic Weisbecker
      Commit 4ed7c92d (perf_events: Undo some recursion damage) introduced
      bad reference counting of the recursion context: putting the context
      behaves like getting it, which drops every software/trace event after
      the first one in a context. (A sketch of the get/put pairing follows
      this entry.)
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arjan van de Ven <arjan@infradead.org>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <1259091502-5171-1-git-send-email-fweisbec@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
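      A minimal sketch of the get/put pairing involved, with an invented
      per-context flag array: "get" marks the context busy and refuses nested
      entry, "put" must clear the mark again. If put behaved like get, every
      event after the first one in that context would be dropped.

          #include <stdio.h>

          #define NR_CONTEXTS 4            /* e.g. task, softirq, hardirq, nmi */

          static int recursion[NR_CONTEXTS];

          /* Returns the context index, or -1 if we are already inside it. */
          static int get_recursion_context(int ctx)
          {
              if (recursion[ctx])
                  return -1;               /* nested: drop this event */
              recursion[ctx] = 1;
              return ctx;
          }

          static void put_recursion_context(int ctx)
          {
              recursion[ctx] = 0;          /* must undo the get, not repeat it */
          }

          int main(void)
          {
              for (int i = 0; i < 3; i++) {
                  int rctx = get_recursion_context(0);

                  if (rctx < 0) {
                      printf("event %d dropped\n", i);
                      continue;
                  }
                  printf("event %d handled\n", i);
                  put_recursion_context(rctx);
              }
              return 0;
          }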
  24. 24 November 2009 (1 commit)