1. 19 October 2010, 1 commit
    • irq_work: Add generic hardirq context callbacks · e360adbe
      Committed by Peter Zijlstra
      Provide a mechanism that allows running code in IRQ context. It is
      most useful for NMI code that needs to interact with the rest of the
      system -- such as waking up a task to drain buffers.
      
      Perf currently has such a mechanism, so extract that and provide it as
      a generic feature, independent of perf so that others may also
      benefit.
      
      The IRQ context callback is generated through self-IPIs where
      possible, or on architectures like powerpc the decrementer (the
      built-in timer facility) is set to generate an interrupt immediately.
      
      Architectures that don't have anything like this make do with a
      callback from the timer tick. These architectures can call
      irq_work_run() at the tail of any IRQ handler that might enqueue such
      work (like the perf IRQ handler) to avoid undue latency in
      processing the work.
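
      A minimal usage sketch of the interface this commit introduces
      (drain_task and the handler names are hypothetical stand-ins):

          #include <linux/irq_work.h>
          #include <linux/sched.h>

          static struct task_struct *drain_task;  /* hypothetical consumer */
          static struct irq_work drain_work;

          /* Runs in hardirq context, shortly after being queued. */
          static void drain_func(struct irq_work *work)
          {
                  wake_up_process(drain_task);    /* safe here, unlike in NMI */
          }

          static void drain_setup(void)
          {
                  init_irq_work(&drain_work, drain_func);
          }

          /* Called from NMI context, where a direct wakeup is unsafe: */
          static void nmi_path(void)
          {
                  irq_work_queue(&drain_work);    /* self-IPI (or decrementer) */
          }
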
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Acked-by: Kyle McMartin <kyle@mcmartin.ca>
      Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      [ various fixes ]
      Signed-off-by: Huang Ying <ying.huang@intel.com>
      LKML-Reference: <1287036094.7768.291.camel@yhuang-dev>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  2. 11 October 2010, 2 commits
  3. 17 September 2010, 2 commits
    • perf: Undo the per cpu-context timer stuff · e9d2b064
      Committed by Peter Zijlstra
      Revert the per cpu-context timers because of an unfortunate nohz
      interaction. Fixing that would have been somewhat ugly, so go
      back to driving things from the regular tick. Provide a jiffies
      interval feature for people who want slower rotations.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Yinghai Lu <yinghai@kernel.org>
      LKML-Reference: <20100917093009.519845633@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Complete software pmu grouping · b04243ef
      Committed by Peter Zijlstra
      Aside from allowing software events into a !software group,
      allow adding !software events to pure software groups.
      
      Once we've moved the software group and attached the first
      !software event, the group will no longer be a pure software
      group and hence no longer be eligible for movement, at which
      point the straight ctx comparison is correct again.
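
      A paraphrased sketch of the resulting placement rule (illustrative,
      not the literal upstream diff; only is_software_event() is assumed
      from the real API):

          /* A software event can join any group; otherwise the group
           * leader must live in the same context -- the straight ctx
           * comparison mentioned above. */
          static bool group_placement_ok(struct perf_event *event,
                                         struct perf_event *group_leader,
                                         struct perf_event_context *ctx)
          {
                  if (is_software_event(event) &&
                      !is_software_event(group_leader))
                          return true;
                  return group_leader->ctx == ctx;
          }
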
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20100917093009.410784731@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  4. 15 September 2010, 1 commit
    • perf events: Clean up pid passing · 38a81da2
      Committed by Matt Helsley
      The kernel perf event creation path shouldn't use find_task_by_vpid(),
      because a vpid exists in a specific namespace. find_task_by_vpid() uses
      current's pid namespace, which isn't always the correct namespace for
      the vpid in all the places perf_event_create_kernel_counter() (and
      thus find_get_context()) is called from.
      
      The goal is to clean up pid namespace handling and prevent bugs like:
      
      	https://bugzilla.kernel.org/show_bug.cgi?id=17281
      
      Instead of using pids, switch find_get_context() to use task struct
      pointers directly. The syscall is responsible for resolving the pid to
      a task struct. This moves the pid namespace resolution into the
      syscall, much like every other syscall that takes pid parameters.
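
      A sketch of the resolution step that now lives in the syscall path
      (shape follows the helper this patch introduces; treat the details
      as approximate):

          static struct task_struct *find_lively_task_by_vpid(pid_t vpid)
          {
                  struct task_struct *task;

                  rcu_read_lock();
                  if (!vpid)
                          task = current;         /* pid 0 means "myself" */
                  else
                          task = find_task_by_vpid(vpid);
                  if (task)
                          get_task_struct(task);  /* pin it across the call */
                  rcu_read_unlock();

                  if (!task)
                          return ERR_PTR(-ESRCH);

                  return task;    /* handed to find_get_context() */
          }
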
      Signed-off-by: Matt Helsley <matthltc@us.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Robin Green <greenrd@greenrd.org>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      LKML-Reference: <a134e5e392ab0204961fd1a62c84a222bf5874a9.1284407763.git.matthltc@us.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 10 September 2010, 14 commits
    • perf: Fix up delayed_put_task_struct() · 4e231c79
      Committed by Peter Zijlstra
      I missed a perf_event_ctxp user when converting it to an array. Pull this
      last user into perf_event.c as well and fix it up.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Provide a separate task context for swevents · 89a1e187
      Committed by Peter Zijlstra
      Since software events are always schedulable, mixing them up with
      hardware events (which are not) can lead to funny scheduling oddities.
      
      Giving them their own context solves this.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Multiple task contexts · 8dc85d54
      Committed by Peter Zijlstra
      Provide the infrastructure for multiple task contexts.
      
      A more flexible approach would have resulted in more pointer chases
      in the scheduling hot-paths. This approach has the limitation of a
      static number of task contexts.
      
      Since I expect most external PMUs to be system wide, or at least node
      wide (as per the Intel uncore unit), they won't actually need a task
      context.
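
      The static limitation shows up as a fixed-size array; per the
      upstream commit, the single perf_event_ctxp pointer in
      struct task_struct becomes:

          enum perf_event_task_context {
                  perf_invalid_context = -1,
                  perf_hw_context = 0,
                  perf_sw_context,
                  perf_nr_task_contexts,
          };

          /* in struct task_struct: */
          struct perf_event_context *perf_event_ctxp[perf_nr_task_contexts];
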
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per-pmu-per-cpu contexts · 108b02cf
      Committed by Peter Zijlstra
      Allocate per-cpu contexts per pmu.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per cpu-context rotation timer · b5ab4cd5
      Committed by Peter Zijlstra
      Give each cpu-context its own timer so that it is a self-contained
      entity. This eases the way for per-pmu-per-cpu contexts and also
      provides the basic infrastructure to allow different rotation
      times per pmu.
      
      Things to look at:
       - folding the tick and these TICK_NSEC timers
       - separate task context rotation
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Remove the swevent hash-table from the cpu context · b28ab83c
      Committed by Peter Zijlstra
      Separate the swevent hash-table from the cpu_context bits in
      preparation for per pmu cpu contexts.
      
      This keeps the swevent hash a global entity.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Remove the sysfs bits · 15ac9a39
      Committed by Peter Zijlstra
      Neither the overcommit nor the reservation sysfs parameter was
      actually working; remove them, as they'll only get in the way.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Rework the PMU methods · a4eaf7f1
      Committed by Peter Zijlstra
      Replace pmu::{enable,disable,start,stop,unthrottle} with
      pmu::{add,del,start,stop}, all of which take a flags argument.
      
      The new interface extends the capability to stop a counter while
      keeping it scheduled on the PMU. We replace the throttled state with
      the generic stopped state.
      
      This also allows us to efficiently stop/start counters over certain
      code paths (like IRQ handlers).
      
      It also allows scheduling a counter without it starting, allowing for
      a generic frozen state (useful for rotating stopped counters).
      
      The stopped state is implemented in two different ways, depending on
      how the architecture implemented the throttled state:
      
       1) We disable the counter:
          a) the pmu has per-counter enable bits, we flip that
          b) we program a NOP event, preserving the counter state
      
       2) We store the counter state and ignore all read/overflow events
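
      The reworked method table and its flags, as introduced by this
      commit (abridged):

          #define PERF_EF_START   0x01    /* start the counter when adding    */
          #define PERF_EF_RELOAD  0x02    /* reload the counter when starting */
          #define PERF_EF_UPDATE  0x04    /* update the counter when stopping */

          struct pmu {
                  /* ... */
                  int  (*add)(struct perf_event *event, int flags);   /* was enable  */
                  void (*del)(struct perf_event *event, int flags);   /* was disable */
                  void (*start)(struct perf_event *event, int flags); /* PERF_EF_RELOAD */
                  void (*stop)(struct perf_event *event, int flags);  /* PERF_EF_UPDATE */
          };
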
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Shrink hw_perf_event · fa407f35
      Committed by Peter Zijlstra
      Use hw_perf_event::period_left instead of hw_perf_event::remaining
      and win back 8 bytes.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Default PMU ops · ad5133b7
      Committed by Peter Zijlstra
      Provide default implementations for the pmu txn methods; this
      allows us to remove some conditional code.
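
      A sketch of the defaults, per the patch: the txn methods fall back
      to bracketing the group with a pmu-wide disable/enable.

          static void perf_pmu_start_txn(struct pmu *pmu)
          {
                  perf_pmu_disable(pmu);
          }

          static int perf_pmu_commit_txn(struct pmu *pmu)
          {
                  perf_pmu_enable(pmu);
                  return 0;
          }

          static void perf_pmu_cancel_txn(struct pmu *pmu)
          {
                  perf_pmu_enable(pmu);
          }
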
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Per PMU disable · 33696fc0
      Committed by Peter Zijlstra
      Changes perf_disable() into perf_pmu_disable().
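
      The replacement is reference-counted per pmu, per cpu; the core of
      the change looks like this (per the patch):

          void perf_pmu_disable(struct pmu *pmu)
          {
                  int *count = this_cpu_ptr(pmu->pmu_disable_count);

                  if (!(*count)++)
                          pmu->pmu_disable(pmu);
          }

          void perf_pmu_enable(struct pmu *pmu)
          {
                  int *count = this_cpu_ptr(pmu->pmu_disable_count);

                  if (!--(*count))
                          pmu->pmu_enable(pmu);
          }
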
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Reduce perf_disable() usage · 24cd7f54
      Committed by Peter Zijlstra
      Since the current perf_disable() usage is only an optimization,
      remove it for now. This eases the removal of the __weak
      hw_perf_enable() interface.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Register PMU implementations · b0a873eb
      Committed by Peter Zijlstra
      Simple registration interface for struct pmu, this provides the
      infrastructure for removing all the weak functions.
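
      The interface boils down to a pair of calls (my_pmu is a
      hypothetical, fully filled-in struct pmu):

          int perf_pmu_register(struct pmu *pmu);
          void perf_pmu_unregister(struct pmu *pmu);

          /* e.g. from a driver init path: */
          static int __init my_pmu_init(void)
          {
                  return perf_pmu_register(&my_pmu);
          }
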
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: Deconstify struct pmu · 51b0fe39
      Committed by Peter Zijlstra
      sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus <paulus@samba.org>
      Cc: stephane eranian <eranian@googlemail.com>
      Cc: Robert Richter <robert.richter@amd.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Cyrill Gorcunov <gorcunov@gmail.com>
      Cc: Lin Ming <ming.m.lin@intel.com>
      Cc: Yanmin <yanmin_zhang@linux.intel.com>
      Cc: Deng-Cheng Zhu <dengcheng.zhu@gmail.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Michael Cree <mcree@orcon.net.nz>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 19 August 2010, 4 commits
    • perf: Humanize the number of contexts · 7ae07ea3
      Committed by Frederic Weisbecker
      Instead of hardcoding the number of contexts for the recursion
      barriers, define a cpp constant to make the code more
      self-explanatory.
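
      Per the commit, the constant simply names the four recursion
      contexts:

          #define PERF_NR_CONTEXTS        4       /* task, softirq, hardirq, nmi */
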
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
    • perf: Fix race in callchains · 927c7a9e
      Committed by Frederic Weisbecker
      Now that software events no longer have interrupts disabled in the
      event path, callchains can nest on any context. So separating nmi
      and other contexts into two buffers has become racy.
      
      Fix this by providing one buffer per nesting level. Given the size
      of the callchain entries (2040 bytes * 4), we now need to allocate
      them dynamically.
      
      v2: Fixed put_callchain_entry call after recursion.
          Fixed the type of the recursion variable; it must be an array.
      
      v3: Use a manual per-cpu allocation (temporary solution until NMIs
          can safely access vmalloc'ed memory).
          Do a better separation between callchain reference tracking and
          allocation. Make the "put" path lockless for non-release cases.
      
      v4: Protect the callchain buffers with rcu.
      
      v5: Do the cpu buffers allocations node affine.
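
      The shape of the per-cpu storage, per the patch: each cpu gets
      PERF_NR_CONTEXTS callchain entries, and the whole block is freed
      under rcu.

          struct callchain_cpus_entries {
                  struct rcu_head                 rcu_head;
                  struct perf_callchain_entry     *cpu_entries[0];
          };
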
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Borislav Petkov <bp@amd64.org>
    • perf: Generalize some arch callchain code · 56962b44
      Committed by Frederic Weisbecker
      - Most archs use one callchain buffer per cpu, except x86, which
        needs to deal with NMIs. Provide a default perf_callchain_buffer()
        implementation that x86 overrides.
      
      - Centralize all the kernel/user regs handling and invoke new arch
        handlers from there: perf_callchain_user() / perf_callchain_kernel().
        This avoids all the user_mode() and current->mm checks, and so on.
      
      - Invert some parameters in the perf_callchain_*() helpers: entry to
        the left, regs to the right, following the traditional (dst, src)
        order.
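
      The resulting entry points (signatures per the commit;
      perf_callchain_buffer() is the weak default that x86 overrides):

          extern struct perf_callchain_entry *perf_callchain_buffer(void);

          extern void perf_callchain_user(struct perf_callchain_entry *entry,
                                          struct pt_regs *regs);
          extern void perf_callchain_kernel(struct perf_callchain_entry *entry,
                                            struct pt_regs *regs);
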
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
    • perf: Generalize callchain_store() · 70791ce9
      Committed by Frederic Weisbecker
      callchain_store() is the same on every arch; inline it in
      perf_event.h and rename it to perf_callchain_store() to avoid
      any collision.
      
      This removes repetitive code.
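
      The now-shared helper, per the commit:

          static inline void
          perf_callchain_store(struct perf_callchain_entry *entry, u64 ip)
          {
                  if (entry->nr < PERF_MAX_STACK_DEPTH)
                          entry->ip[entry->nr++] = ip;
          }
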
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Paul Mackerras <paulus@samba.org>
      Tested-by: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: Borislav Petkov <bp@amd64.org>
  7. 25 June 2010, 2 commits
    • perf: Fix argument of perf_arch_fetch_caller_regs · 5cfaf214
      Committed by Nobuhiro Iwamatsu
      "struct regs" was set to argument of perf_arch_fetch_caller_regs
      off-case. It should be "struct pt_regs".
      
      This fixes various build errors in archs that have CONFIG_PERF_EVENTS=y
      but no overriden implementation of perf_arch_fetch_caller_regs.
      
      cc1: warnings being treated as errors
      In file included from include/linux/ftrace_event.h:8,
                       from include/trace/syscall.h:6,
                       from include/linux/syscalls.h:75,
                       from arch/sh/kernel/sys_sh32.c:9:
      include/linux/perf_event.h:937: error: 'struct regs' declared inside parameter list
      include/linux/perf_event.h:937: error: its scope is only this definition or declaration, which is probably not what you want
      include/linux/perf_event.h: In function 'perf_fetch_caller_regs':
      include/linux/perf_event.h:952: error: passing argument 1 of 'perf_arch_fetch_caller_regs' from incompatible pointer type
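
      The corrected stub declaration (a one-word fix, sketched):

          static inline void
          perf_arch_fetch_caller_regs(struct pt_regs *regs, unsigned long ip)
          {
          }
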
      Signed-off-by: Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
      Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: David Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <AANLkTinKKFKEBQrZ3Hkj-XCaMwaTqulb-XnFzqEYiFRr@mail.gmail.com>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
    • hw_breakpoints: Fix per task breakpoint tracking · 45a73372
      Committed by Frederic Weisbecker
      Freeing a perf event can happen in several ways. A task
      calls perf_event_exit_task() right before exiting. This helper
      will detach all the events from the task context and queue their
      removal through free_event() if they are child tasks. The task
      also loses its context reference there.
      
      Releasing the breakpoint slot from the constraint table is done from
      free_event(), which calls release_bp_slot(). We count the number
      of breakpoints this task is running by looking at the task's
      perf_event_ctxp and iterating through its attached events.
      But at this point, the reference to this context has already been
      cleaned up.
      
      So looking at the event->ctx instead of task->perf_event_ctxp
      to count the remaining breakpoints should solve the problem.
      At least it would for child breakpoints, but not for parent ones.
      If the parent exits before the child, it will remove all its
      events from the context but free_event() will be called later,
      on fd release time. And checking the number of breakpoints the
      task has attached to its context at this time is unreliable as all
      events have been removed from the context.
      
      To solve this, we keep track of the list of per task breakpoints.
      On top of it, we maintain our array of numbers of breakpoints used
      by the tasks. We use the context address as a task id.
      
      So, instead of looking at the number of events attached to a context,
      we walk through our list of per task breakpoints and count the number
      of breakpoints that use the same ctx as the one to be reserved or
      released from the constraint table, and update the count on top of
      this result.
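
      An illustrative sketch of that walk (names approximate, not the
      literal diff): count the pinned breakpoints sharing @bp's ctx,
      using the ctx pointer as the task id.

          static LIST_HEAD(bp_task_head);         /* all per task breakpoints */

          static unsigned int task_bp_pinned(struct perf_event *bp)
          {
                  struct perf_event *iter;
                  unsigned int count = 0;

                  list_for_each_entry(iter, &bp_task_head, hw.bp_list)
                          if (iter->ctx == bp->ctx)
                                  count++;

                  return count;
          }
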
      
      Besides fixing the bad refcounting, this also fixes a warning
      reported by Paul:
      
      Badness at /home/paulus/kernel/perf/kernel/hw_breakpoint.c:114
      NIP: c0000000000cb470 LR: c0000000000cb46c CTR: c00000000032d9b8
      REGS: c000000118e7b570 TRAP: 0700   Not tainted  (2.6.35-rc3-perf-00008-g76b0f133
      )
      MSR: 9000000000029032 <EE,ME,CE,IR,DR>  CR: 44004424  XER: 000fffff
      TASK = c0000001187dcad0[3143] 'perf' THREAD: c000000118e78000 CPU: 1
      GPR00: c0000000000cb46c c000000118e7b7f0 c0000000009866a0 0000000000000020
      GPR04: 0000000000000000 000000000000001d 0000000000000000 0000000000000001
      GPR08: c0000000009bed68 c00000000086dff8 c000000000a5bf10 0000000000000001
      GPR12: 0000000024004422 c00000000ffff200 0000000000000000 0000000000000000
      GPR16: 0000000000000000 0000000000000000 0000000000000018 00000000101150f4
      GPR20: 0000000010206b40 0000000000000000 0000000000000000 00000000101150f4
      GPR24: c0000001199090c0 0000000000000001 0000000000000000 0000000000000001
      GPR28: 0000000000000000 0000000000000000 c0000000008ec290 0000000000000000
      NIP [c0000000000cb470] .task_bp_pinned+0x5c/0x12c
      LR [c0000000000cb46c] .task_bp_pinned+0x58/0x12c
      Call Trace:
      [c000000118e7b7f0] [c0000000000cb46c] .task_bp_pinned+0x58/0x12c (unreliable)
      [c000000118e7b8a0] [c0000000000cb584] .toggle_bp_task_slot+0x44/0xe4
      [c000000118e7b940] [c0000000000cb6c8] .toggle_bp_slot+0xa4/0x164
      [c000000118e7b9f0] [c0000000000cbafc] .release_bp_slot+0x44/0x6c
      [c000000118e7ba80] [c0000000000c4178] .bp_perf_event_destroy+0x10/0x24
      [c000000118e7bb00] [c0000000000c4aec] .free_event+0x180/0x1bc
      [c000000118e7bbc0] [c0000000000c54c4] .perf_event_release_kernel+0x14c/0x170
      Reported-by: Paul Mackerras <paulus@samba.org>
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Prasad <prasad@linux.vnet.ibm.com>
      Cc: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Jason Wessel <jason.wessel@windriver.com>
  8. 09 June 2010, 10 commits
  9. 31 May 2010, 2 commits
  10. 21 May 2010, 2 commits