1. 26 February 2010, 4 commits
    • perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in() · 6e37738a
      Committed by Peter Zijlstra
      Since the cpu argument to hw_perf_group_sched_in() is always
      smp_processor_id(), simplify the code a little by removing this argument
      and using the current cpu where needed.
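      As a hedged sketch of the shape of this change (the parameter list below
      is reconstructed from the description above, not copied from the tree),
      the trailing cpu argument simply disappears and smp_processor_id() is
      used inside the implementation instead:

          /* Sketch only; types and parameter names are illustrative.
           *
           * Old form: hw_perf_group_sched_in(leader, cpuctx, ctx, cpu);
           * New form: the cpu parameter is gone; where a CPU index is still
           *           needed, the implementation calls smp_processor_id(),
           *           since callers always ran on that CPU anyway.
           */
          int hw_perf_group_sched_in(struct perf_event *group_leader,
                                     struct perf_cpu_context *cpuctx,
                                     struct perf_event_context *ctx);
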
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <1265890918.5396.3.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events, x86: AMD event scheduling · 38331f62
      Committed by Stephane Eranian
      This patch adds correct AMD NorthBridge event scheduling.
      
      NB events are events measuring L3 cache and HyperTransport traffic. They
      are identified by an event code >= 0xe0. They measure events on the
      NorthBridge, which is shared by all cores on a package. NB events are
      counted on a shared set of counters. When an NB event is programmed in a
      counter, the data actually comes from a shared counter. Thus, access to
      those counters needs to be synchronized.
      
      We implement the synchronization such that no two cores can be measuring
      NB events using the same counters. Thus, we maintain a per-NB allocation
      table. The available slot is propagated using the event_constraint
      structure.
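      A minimal, hypothetical sketch of that idea follows (structure and
      function names are illustrative, not the actual arch/x86 code): each
      NorthBridge holds an ownership table for its shared counters, and a core
      must claim a slot atomically before programming an NB event on it.

          #include <stdatomic.h>
          #include <stdbool.h>

          #define NB_NUM_COUNTERS 4      /* assumption: number of shared NB counters */

          /* One table per NorthBridge (i.e., per package), shared by all its cores. */
          struct amd_nb_alloc {
              _Atomic(void *) owner[NB_NUM_COUNTERS];  /* event owning each shared counter */
          };

          /* NB events (L3 cache, HyperTransport) have an event code >= 0xe0. */
          static bool amd_is_nb_event(unsigned int event_code)
          {
              return event_code >= 0xe0;
          }

          /* Claim shared counter 'idx' for 'event'; only one core can win the slot. */
          static bool amd_nb_claim_counter(struct amd_nb_alloc *nb, int idx, void *event)
          {
              void *expected = NULL;
              return atomic_compare_exchange_strong(&nb->owner[idx], &expected, event);
          }
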
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703957.0702d00a.6bf2.7b7d@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Add new start/stop PMU callbacks · d76a0812
      Committed by Stephane Eranian
      In certain situations, the kernel may need to stop and start the same
      event rapidly. The current PMU callbacks do not distinguish between stop
      and release (i.e., stop + free the resource). Thus, a counter may be
      released, only to be immediately re-acquired. Event scheduling will then
      take place again with no guarantee of assigning the same counter. On some
      processors, this may even lead to a failure to assign the event back, due
      to competition between cores.
      
      This patch adds a new pair of callbacks to stop and restart a counter
      without actually releasing the underlying counter resource. On stop, the
      counter is stopped and its value saved, and that's it. On start, the
      value is reloaded and the counter is restarted (on x86, the actual
      restart is delayed until perf_enable()).
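      A hedged sketch of the resulting callback split (field names are
      illustrative; the real struct pmu differs): enable/disable acquire and
      release a hardware counter, while start/stop only pause and resume it,
      keeping the counter assigned.

          struct perf_event;   /* opaque for this sketch */

          struct pmu_callbacks {
              int  (*enable)(struct perf_event *event);   /* allocate and program a counter */
              void (*disable)(struct perf_event *event);  /* stop counting and free the counter */
              void (*start)(struct perf_event *event);    /* reload the saved value and resume */
              void (*stop)(struct perf_event *event);     /* pause counting and save the value */
          };
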
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ added fallback to ->enable/->disable for all other PMUs;
        fixed x86_pmu_start() to call x86_pmu.enable();
        merged __x86_pmu_disable() into x86_pmu_stop() ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4b703875.0a04d00a.7896.ffffb824@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf_events: Report the MMAP pgoff value in bytes · 3a0304e9
      Committed by Peter Zijlstra
      DaveM reported that currently perf interprets the pgoff value reported by
      the MMAP events as a byte range, but the kernel reports it as a page
      offset.
      
      Since it's broken (and unusable) anyway, change the kernel behaviour
      (ABI) to indeed report bytes, avoiding the need for userspace to deal
      with PAGE_SIZE arithmetic.
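      In other words (a hedged sketch of the conversion, assuming 4 KiB pages;
      not the exact kernel code), what gets recorded changes from a page index
      to a byte offset:

          #include <stdint.h>

          #define PAGE_SHIFT 12   /* assumption: 4 KiB pages */

          /* Old ABI: the raw page offset was recorded in the mmap event. */
          static uint64_t mmap_pgoff_old(uint64_t vm_pgoff)
          {
              return vm_pgoff;
          }

          /* New ABI: report bytes, so userspace needs no PAGE_SIZE arithmetic. */
          static uint64_t mmap_pgoff_new(uint64_t vm_pgoff)
          {
              return vm_pgoff << PAGE_SHIFT;
          }
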
      Reported-by: David Miller <davem@davemloft.net>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 05 February 2010, 1 commit
    • kprobes: Add mcount to the kprobes blacklist · 5ecaafdb
      Committed by Masami Hiramatsu
      Since the mcount function can be called from everywhere, it should be
      blacklisted. Moreover, "mcount" is a special symbol name, so it is
      better to put it in the generic blacklist.
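      Conceptually (a sketch; the actual table lives in kernel/kprobes.c and
      its layout may differ), "mcount" simply joins the symbols on which a
      probe may never be placed:

          /* Hypothetical illustration of a symbol blacklist consulted at
           * kprobe registration time. */
          static const char * const kprobe_blacklist_syms[] = {
              /* ... existing blacklisted symbols ... */
              "mcount",   /* callable from everywhere, so never allow a probe on it */
          };
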
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100205062433.3745.36726.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  3. 04 February 2010, 5 commits
    • perf_events: Optimize perf_event_task_tick() · 9717e6cd
      Committed by Peter Zijlstra
      Pretty much all of the calls do perf_disable/perf_enable cycles; pull
      that out to cut back on hardware programming.
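      A hedged sketch of the shape of the change (the real function takes more
      context; the point is the single bracket around all the per-tick work):

          void perf_disable(void);
          void perf_enable(void);

          /* Before: each per-tick helper did its own perf_disable()/perf_enable()
           * pair, reprogramming the PMU hardware several times per tick.
           * After: one pair brackets everything done in the tick. */
          static void perf_event_task_tick_sketch(void)
          {
              perf_disable();
              /* ... adjust sampling periods, rotate contexts, reschedule events ... */
              perf_enable();
          }
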
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Mike Galbraith <efault@gmx.de>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace: Remove record freezing · f24bb999
      Committed by Masami Hiramatsu
      Remove record freezing. Because kprobes never puts a probe on ftrace's
      mcount call anymore, ftrace no longer needs to check whether a kprobe
      is on it.
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <20100202214925.4694.73469.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • kprobes: Check probe address is reserved · 4554dbcb
      Committed by Masami Hiramatsu
      When registering a new probe, check whether its address is already
      reserved by ftrace or by alternatives (on x86). If it is reserved,
      return an error and do not register the probe.
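      A sketch of the registration-time check (the helper names follow the
      *_text_reserved entry below; the exact call site and error value are
      assumptions):

          #include <errno.h>

          /* Interface per the *_text_reserved functions introduced below. */
          int ftrace_text_reserved(void *start, void *end);
          int alternatives_text_reserved(void *start, void *end);

          /* Refuse to place a probe at an address that ftrace or the x86
           * alternatives code has reserved for its own text patching. */
          static int check_probe_address(void *addr)
          {
              if (ftrace_text_reserved(addr, addr) ||
                  alternatives_text_reserved(addr, addr))
                  return -EBUSY;   /* assumption about the exact error value */
              return 0;
          }
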
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100202214918.4694.94179.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • ftrace/alternatives: Introducing *_text_reserved functions · 2cfa1978
      Committed by Masami Hiramatsu
      Introduce *_text_reserved functions for checking whether a text address
      range is partially reserved. This patch provides checking routines for
      x86 SMP alternatives and dynamic ftrace. Since both subsystems modify
      fixed pieces of kernel text, they should reserve and protect those
      ranges from other dynamic text modifiers, such as kprobes.
      
      This can also be extended when other subsystems that modify fixed pieces
      of kernel text are introduced. Dynamic text modifiers should avoid those
      ranges.
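      A hedged sketch of the interface (the prototypes are reconstructed from
      this description and may not match the tree exactly): each
      text-modifying subsystem reports whether any byte in [start, end]
      overlaps text it has reserved for its own patching.

          /* Non-zero means some address in [start, end] is reserved by that subsystem. */
          int ftrace_text_reserved(void *start, void *end);
          int alternatives_text_reserved(void *start, void *end);
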
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: przemyslaw@pawelczyk.it
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Jason Baron <jbaron@redhat.com>
      LKML-Reference: <20100202214911.4694.16587.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • kprobes: Disable booster when CONFIG_PREEMPT=y · 615d0ebb
      Committed by Masami Hiramatsu
      Disable the kprobe booster when CONFIG_PREEMPT=y for now, because it
      cannot be ensured that all kernel threads preempted on a kprobe's
      boosted slot have run out of the slot, even when using
      freeze_processes().
      
      The booster on preemptive kernels will be re-enabled once
      synchronize_tasks() or something like it is introduced.
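      Conceptually (a sketch; the real gating in the kprobes code may be
      expressed differently), boosting is simply compiled out on preemptible
      kernels:

          /* A boosted probe jumps back from its out-of-line slot without a
           * single-step trap; the slot can only be recycled safely if no
           * preempted task can still be executing inside it, which cannot be
           * guaranteed with CONFIG_PREEMPT=y. */
          #if !defined(CONFIG_PREEMPT)
          # define KPROBES_CAN_BOOST 1
          #else
          # define KPROBES_CAN_BOOST 0
          #endif
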
      Signed-off-by: Masami Hiramatsu <mhiramat@redhat.com>
      Cc: systemtap <systemtap@sources.redhat.com>
      Cc: DLE <dle-develop@lists.sourceforge.net>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jim Keniston <jkenisto@us.ibm.com>
      Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      LKML-Reference: <20100202214904.4694.24330.stgit@dhcp-100-2-132.bos.redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 29 January 2010, 3 commits
  5. 28 January 2010, 1 commit
    • hw_breakpoints: Release the bp slot if arch_validate_hwbkpt_settings() fails. · b23ff0e9
      Committed by Mahesh Salgaonkar
      On a given architecture, when hardware breakpoint registration fails
      due to an unsupported access type (read/write/execute), we lose the bp
      slot, since register_perf_hw_breakpoint() does not release the bp slot
      on failure.
      Hence, any subsequent hardware breakpoint registration starts failing
      with a 'no space left on device' error.
      
      This patch introduces error handling in the register_perf_hw_breakpoint()
      function and releases the bp slot on error.
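      A hedged sketch of the fixed error path (the validation helper is a
      simplified stand-in for arch_validate_hwbkpt_settings()): the slot taken
      by reserve_bp_slot() is now given back when validation fails.

          struct perf_event;                                    /* opaque for this sketch */
          int  reserve_bp_slot(struct perf_event *bp);
          void release_bp_slot(struct perf_event *bp);
          int  validate_hwbkpt_settings(struct perf_event *bp); /* hypothetical stand-in */

          static int register_perf_hw_breakpoint_sketch(struct perf_event *bp)
          {
              int ret = reserve_bp_slot(bp);
              if (ret)
                  return ret;

              ret = validate_hwbkpt_settings(bp);
              if (ret)
                  release_bp_slot(bp);   /* previously the slot leaked on this path */

              return ret;
          }
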
      Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: K. Prasad <prasad@linux.vnet.ibm.com>
      Cc: Maneesh Soni <maneesh@in.ibm.com>
      LKML-Reference: <20100121125516.GA32521@in.ibm.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
  6. 27 January 2010, 1 commit
    • perf: Reimplement frequency driven sampling · abd50713
      Committed by Peter Zijlstra
      There was a bug in the old period code that caused intel_pmu_enable_all()
      or native_write_msr_safe() to show up quite high in the profiles.
      
      Staring at that code made my head hurt, so I rewrote it in a hopefully
      simpler fashion. It's now fully symmetric between tick- and
      overflow-driven adjustments, and uses less data to boot.
      
      The only complication is that it basically wants to do a u128 division.
      The code approximates that in a rather simple truncate-until-it-fits
      fashion, taking care to balance the terms while truncating.
      
      This version does not generate that sampling artefact.
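      A user-space sketch of the truncate-until-it-fits idea follows (an
      illustration of the approach, not the kernel's perf_calculate_period()):
      the ideal period is count * NSEC_PER_SEC / (nsec * sample_freq), but both
      products may need 128 bits, so one bit is dropped from the larger factor
      on each side until the 64-bit multiplications are safe.

          #include <stdint.h>

          /* Conservative check: might a * b need more than 64 bits? */
          static int product_may_overflow(uint64_t a, uint64_t b)
          {
              int bits_a = 64 - __builtin_clzll(a | 1);   /* GCC/Clang builtin */
              int bits_b = 64 - __builtin_clzll(b | 1);
              return bits_a + bits_b > 64;
          }

          /* period ~= (count * NSEC_PER_SEC) / (nsec * freq), truncated to fit. */
          static uint64_t approx_period(uint64_t count, uint64_t nsec, uint64_t freq)
          {
              const uint64_t NSEC_PER_SEC = 1000000000ULL;
              uint64_t num_a = count, num_b = NSEC_PER_SEC;   /* numerator factors   */
              uint64_t den_a = nsec,  den_b = freq;           /* denominator factors */

              while (product_may_overflow(num_a, num_b) ||
                     product_may_overflow(den_a, den_b)) {
                  /* Drop one bit from the larger factor on each side, keeping
                   * the numerator/denominator ratio roughly balanced. */
                  if (num_a >= num_b) num_a >>= 1; else num_b >>= 1;
                  if (den_a >= den_b) den_a >>= 1; else den_b >>= 1;
              }

              if (den_a * den_b == 0)
                  return 0;
              return (num_a * num_b) / (den_a * den_b);
          }
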
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Cc: <stable@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  7. 21 January 2010, 4 commits
  8. 18 January 2010, 1 commit
  9. 17 January 2010, 10 commits
  10. 16 January 2010, 3 commits
    • perf: Export software-only event group characteristic as a flag · d6f962b5
      Committed by Frederic Weisbecker
      Before scheduling an event group, we first check whether the group can
      go on. We check whether the group is made of software-only events, in
      which case it is enough to know whether the group can be scheduled in.
      
      For that purpose, we iterate through the whole group, which is wasteful,
      as we could do this check when we add an event to, or delete one from,
      a group.
      
      So we create a group_flags field in the perf event that can host
      characteristics of a group of events, starting with a first
      PERF_GROUP_SOFTWARE flag that reduces the check on the fast path.
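      A sketch of the fast-path effect (field and flag names follow the
      changelog; the surrounding structures are abridged): the answer is
      cached in group_flags, maintained when events are added to or removed
      from the group, instead of being recomputed by walking every sibling.

          #define PERF_GROUP_SOFTWARE 0x1   /* group consists of software events only */

          struct perf_event_sketch {
              /* ... other fields ... */
              unsigned int group_flags;
          };

          /* Fast path: software-only groups can always be scheduled in. */
          static int group_is_software_only(const struct perf_event_sketch *leader)
          {
              return leader->group_flags & PERF_GROUP_SOFTWARE;
          }
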
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
    • perf: Round robin flexible groups of events using list_rotate_left() · e2864173
      Committed by Frederic Weisbecker
      This is more proper than doing it through a list_for_each_entry() that
      breaks after the first entry.
      
      v2: Don't rotate pinned groups, as it is not needed to time-share them.
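      A sketch of the rotation itself (the list name follows the split-lists
      entry below; locking omitted): list_rotate_left() moves the first group
      to the tail, so a different group leads the list, and hence gets first
      pick of the counters, on the next schedule-in.

          #include <linux/list.h>   /* list_empty(), list_rotate_left() */

          struct perf_ctx_sketch {
              struct list_head flexible_groups;
          };

          static void rotate_flexible_groups(struct perf_ctx_sketch *ctx)
          {
              if (!list_empty(&ctx->flexible_groups))
                  list_rotate_left(&ctx->flexible_groups);
          }
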
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
    • perf/core: Split context's event group list into pinned and non-pinned lists · 889ff015
      Committed by Frederic Weisbecker
      Split up struct perf_event_context::group_list into pinned_groups
      and flexible_groups (non-pinned).
      
      At first this appears to be useless, as it duplicates various loops
      around the group list handling.
      
      But it scales better in the fast path in perf_sched_in(). We no longer
      iterate twice through the entire list to separate pinned and non-pinned
      scheduling. Instead, we iterate through two distinct lists.
      
      The other desired effect is that it makes it easier to define distinct
      scheduling rules for each.
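      A sketch of the resulting layout (the context structure is abridged to
      the two relevant fields):

          #include <linux/list.h>

          struct perf_event_context_sketch {
              /* ... locks, counts, and other fields ... */
              struct list_head pinned_groups;    /* must be scheduled in, never rotated */
              struct list_head flexible_groups;  /* time-shared; rotated round-robin */
          };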
      
      Changes in v2:
      - Rename pinned_grp_list and volatile_grp_list to pinned_groups and
        flexible_groups, respectively, as per Ingo's suggestion.
      - Various cleanups
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
  11. 15 January 2010, 6 commits
  12. 13 January 2010, 1 commit