1. 24 Aug 2014, 1 commit
  2. 13 Aug 2014, 2 commits
    • perf/x86: Fix data source encoding issues for load latency/precise store · 770eee1f
      Committed by Stephane Eranian
      This patch fixes issues introduced by Andi's previous 'Revamp PEBS'
      patch series.
      
      This patch fixes the following:
      
       - precise_store_data_hsw() now encodes the mem op type whenever it can
       - precise_store_data_hsw() sets the default data source correctly
       
       - 0 is not a valid init value for the data source. Define PERF_MEM_NA as
         the default value (see the sketch below)
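
      A hedged sketch of one plausible PERF_MEM_NA encoding, composed from the
      PERF_MEM_S() helper in the uapi <linux/perf_event.h>; the exact field set
      is an assumption, not necessarily the patch's definition:

      	#include <linux/perf_event.h>
      	
      	/* "not available" in every data_src field (assumed composition) */
      	#define PERF_MEM_NA (PERF_MEM_S(OP, NA)   | \
      	                     PERF_MEM_S(LVL, NA)  | \
      	                     PERF_MEM_S(SNOOP, NA)| \
      	                     PERF_MEM_S(LOCK, NA) | \
      	                     PERF_MEM_S(TLB, NA))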
      
      This bug was actually introduced by
      
          commit 722e76e6
          Author: Stephane Eranian <eranian@google.com>
          Date:   Thu May 15 17:56:44 2014 +0200
      
              fix Haswell precise store data source encoding
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1407785233-32193-4-git-send-email-eranian@google.com
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: ak@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Add queued work to remove orphaned child events · fadfe7be
      Committed by Jiri Olsa
      In cases where the owner task exits before the workload and the
      workload has forked, all the events stay around until the last
      workload process exits. That's because each child event holds a
      parent reference.
      
      We want to release all children events once the parent is gone,
      because at that time there's no process to read them anyway, so
      they're just eating resources.
      
      This removal races with process exit (which removes all events)
      and with fork (which clones events). To stay clear of those two,
      add a work queue that removes orphaned child events for a context
      whenever such an event is detected.
      
      Use a delayed work queue (with delay == 1), because we queue this
      work from perf scheduler callbacks. A normal work queue tries to
      wake up the worker process, which deadlocks on rq->lock in this
      context.
      
      Also prevent cloning from an abandoned parent event.
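
      A minimal sketch of the mechanism, assuming the struct, field, and
      handler names used here (INIT_DELAYED_WORK()/schedule_delayed_work()
      are the real workqueue API):

      	#include <linux/workqueue.h>
      	
      	struct ctx_sketch {
      		struct delayed_work orphans_remove;	/* assumed field */
      	};
      	
      	static void orphans_remove_work(struct work_struct *work)
      	{
      		/* walk the context and release orphaned child events */
      	}
      	
      	static void ctx_sketch_init(struct ctx_sketch *ctx)
      	{
      		INIT_DELAYED_WORK(&ctx->orphans_remove, orphans_remove_work);
      	}
      	
      	static void orphan_detected(struct ctx_sketch *ctx)
      	{
      		/* delay == 1: queueing from a scheduler callback must not
      		 * wake a kworker while rq->lock is held */
      		schedule_delayed_work(&ctx->orphans_remove, 1);
      	}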
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: http://lkml.kernel.org/r/1406896382-18404-4-git-send-email-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 06 Jun 2014, 2 commits
    • perf: Differentiate exec() and non-exec() comm events · 82b89778
      Committed by Adrian Hunter
      perf tools like 'perf report' can aggregate samples by comm strings,
      which generally works.  However, there are other potential use-cases.
      For example, to pair up 'calls' with 'returns' accurately (from branch
      events like Intel BTS) it is necessary to identify whether the process
      has exec'd.  Although a comm event is generated when an 'exec' happens,
      it is also generated whenever the comm string is changed on a whim
      (e.g. by prctl PR_SET_NAME).  This patch adds a flag to the comm event
      to differentiate one case from the other.
      
      In order to determine whether the kernel supports the new flag, a
      selection bit named 'exec' is added to struct perf_event_attr.  The
      bit does nothing but will cause perf_event_open() to fail if the bit
      is set on kernels that do not have it defined.
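
      A hedged userspace sketch of probing for the flag. The changelog calls
      the bit 'exec'; the uapi spelling used below, attr.comm_exec, is an
      assumption:

      	#include <linux/perf_event.h>
      	#include <sys/syscall.h>
      	#include <unistd.h>
      	#include <string.h>
      	#include <errno.h>
      	
      	static int open_comm_exec_event(void)
      	{
      		struct perf_event_attr attr;
      	
      		memset(&attr, 0, sizeof(attr));
      		attr.size = sizeof(attr);
      		attr.type = PERF_TYPE_SOFTWARE;
      		attr.config = PERF_COUNT_SW_DUMMY;
      		attr.comm = 1;
      		attr.comm_exec = 1;	/* the new selection bit (assumed name) */
      	
      		int fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
      		if (fd < 0 && errno == EINVAL)
      			return -1;	/* kernel predates the flag */
      		return fd;
      	}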
      Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/537D9EBE.7030806@intel.com
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Fix perf_event_comm() vs. exec() assumption · e041e328
      Committed by Peter Zijlstra
      perf_event_comm() assumes that set_task_comm() is only called on
      exec(), and in particular that it's only called on current.
      
      Neither is true, as Dave reported a WARN triggered by set_task_comm()
      being called on !current.
      
      Separate the exec() hook from the comm hook.
      Reported-by: Dave Jones <davej@redhat.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: linux-fsdevel@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/20140521153219.GH5226@laptop.programming.kicks-ass.net
      [ Build fix. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 05 Jun 2014, 1 commit
  5. 19 May 2014, 1 commit
    • perf: Fix a race between ring_buffer_detach() and ring_buffer_attach() · b69cf536
      Committed by Peter Zijlstra
      Alexander noticed that we use RCU iteration on rb->event_list but do
      not use list_{add,del}_rcu() to add/remove entries to that list, nor
      do we observe proper grace periods when re-using the entries.
      
      Merge ring_buffer_detach() into ring_buffer_attach() such that
      attaching to the NULL buffer is detaching.
      
      Furthermore, ensure that between any 'detach' and 'attach' of the same
      event we observe the required grace period, but only when strictly
      required. In effect this means that only ioctl(.request =
      PERF_EVENT_IOC_SET_OUTPUT) will wait for a grace period, while the
      normal initial attach and final detach will not be delayed.
      
      This patch should, I think, do the right thing under all
      circumstances: the 'normal' cases should never see the extra grace
      period, but the following two cases will:
      
       1) PERF_EVENT_IOC_SET_OUTPUT on an event which already has a
          ring_buffer set, will now observe the required grace period between
          removing itself from the old and attaching itself to the new buffer.
      
          This case is 'simple' in that both buffers are present in
          perf_event_set_output(), so one could think an unconditional
          synchronize_rcu() would be sufficient; however...
      
       2) an event that has a buffer attached, the buffer is destroyed
          (munmap) and then the event is attached to a new/different buffer
          using PERF_EVENT_IOC_SET_OUTPUT.
      
          This case is more complex because the buffer destruction does:
            ring_buffer_attach(.rb = NULL)
          followed by the ioctl() doing:
            ring_buffer_attach(.rb = foo);
      
          and we still need to observe the grace period between these two
          calls due to us reusing the event->rb_entry list_head.
      
      In order to make case 2 work we use Paul's latest
      cond_synchronize_rcu() call.
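
      A hedged sketch of the conditional grace period; the struct and field
      names are assumptions, while get_state_synchronize_rcu() and
      cond_synchronize_rcu() are the real RCU API:

      	#include <linux/rcupdate.h>
      	
      	struct event_sketch {
      		unsigned long	rcu_batches;	/* assumed field names */
      		int		rcu_pending;
      	};
      	
      	static void detach_note_grace_period(struct event_sketch *event)
      	{
      		event->rcu_batches = get_state_synchronize_rcu();
      		event->rcu_pending = 1;
      	}
      	
      	static void attach_wait_if_needed(struct event_sketch *event)
      	{
      		if (event->rcu_pending) {
      			/* blocks only if no grace period has already
      			 * elapsed since the matching detach */
      			cond_synchronize_rcu(event->rcu_batches);
      			event->rcu_pending = 0;
      		}
      	}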
      
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Mike Galbraith <efault@gmx.de>
      Reported-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/20140507123526.GD13658@twins.programming.kicks-ass.net
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  6. 18 Apr 2014, 1 commit
  7. 20 Mar 2014, 1 commit
    • CPU hotplug, perf: Fix CPU hotplug callback registration · f0bdb5e0
      Committed by Srivatsa S. Bhat
      Subsystems that want to register CPU hotplug callbacks, as well as perform
      initialization for the CPUs that are already online, often do it as shown
      below:
      
      	get_online_cpus();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	register_cpu_notifier(&foobar_cpu_notifier);
      
      	put_online_cpus();
      
      This is wrong, since it is prone to ABBA deadlocks involving the
      cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
      with CPU hotplug operations).
      
      Instead, the correct and race-free way of performing the callback
      registration is:
      
      	cpu_notifier_register_begin();
      
      	for_each_online_cpu(cpu)
      		init_cpu(cpu);
      
      	/* Note the use of the double underscored version of the API */
      	__register_cpu_notifier(&foobar_cpu_notifier);
      
      	cpu_notifier_register_done();
      
      Fix the perf subsystem's hotplug notifier by using this latter form of
      callback registration.
      
      Also provide a bare-bones version of perf_cpu_notifier() that doesn't
      invoke the notifiers for the already online CPUs. This would be useful
      for subsystems that need to perform a different set of initialization
      for the already online CPUs, or don't need the initialization altogether.
      
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  8. 12 Jan 2014, 1 commit
    • perf/x86: Fix active_entry initialization · f3ae75de
      Committed by Stephane Eranian
      This patch fixes a problem with the initialization of the
      struct perf_event active_entry field. It is defined inside
      an anonymous union and was initialized in perf_event_alloc()
      using INIT_LIST_HEAD(). However at that time, we do not know
      whether the event is going to use active_entry or hlist_entry (SW).
      Or at least, we don't want to make that determination there.
      The problem is that hlist and list_head are not initialized
      the same way. One is okay with NULL pointers (from kzalloc);
      the other needs its pointers to point to itself.
      
      This patch resolves this problem by dropping the union.
      This will avoid problems later on, if someone starts using
      active_entry or hlist_entry without verifying that they
      actually overlap. This also solves the initialization
      problem.
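
      A short illustrative contrast (not from the patch) of why zeroed memory
      works for one but not the other:

      	#include <linux/list.h>
      	
      	static void init_contrast(void)
      	{
      		struct list_head lh;
      		struct hlist_node hn;
      	
      		/* list_head: "empty" is self-referencing, so memory
      		 * fresh from kzalloc() is NOT a valid empty list */
      		INIT_LIST_HEAD(&lh);	/* lh.next = lh.prev = &lh */
      	
      		/* hlist_node: "not on a list" is encoded with NULL
      		 * pointers, so zeroed memory is already valid */
      		INIT_HLIST_NODE(&hn);	/* hn.next = hn.pprev = NULL */
      	}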
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: ak@linux.intel.com
      Cc: acme@redhat.com
      Cc: jolsa@redhat.com
      Cc: zheng.z.yan@intel.com
      Cc: bp@alien8.de
      Cc: vincent.weaver@maine.edu
      Cc: maria.n.dimakopoulou@gmail.com
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1389176153-3128-2-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  9. 27 Nov 2013, 1 commit
  10. 04 Oct 2013, 2 commits
    • perf: Add generic transaction flags · fdfbbd07
      Committed by Andi Kleen
      Add a generic qualifier for transaction events, as a new sample
      type that returns a flag word. This is particularly useful
      for qualifying aborts: to distinguish aborts which happen
      due to asynchronous events (like conflicts caused by another
      CPU) versus instructions that lead to an abort.
      
      The tuning strategies are very different for those cases,
      so it's important to distinguish them easily and early.
      
      Since it's inconvenient and inflexible to filter for this
      in the kernel we report all the events out and allow
      some post processing in user space.
      
      The flags are based on the Intel TSX events, but should be fairly
      generic and mostly applicable to other HTM architectures too. In addition
      to the various flag words there's also reserved space to report a
      program-supplied abort code. For TSX this is used to distinguish specific
      classes of aborts, like a lock busy abort when doing lock elision.
      
      Flags:
      
      Elision and generic transactions 		   (ELISION vs TRANSACTION)
      (HLE vs RTM on TSX; IBM etc.  would likely only use TRANSACTION)
      Aborts caused by current thread vs aborts caused by others (SYNC vs ASYNC)
      Retryable transaction				   (RETRY)
      Conflicts with other threads			   (CONFLICT)
      Transaction write capacity overflow		   (CAPACITY WRITE)
      Transaction read capacity overflow		   (CAPACITY READ)
      
      Transactions implicitly aborted can also return an abort code.
      This can be used to signal specific events to the profiler. A common
      case is an abort on lock busy in an RTM eliding library (code 0xff).
      To handle this case we include the TSX abort code.
      
      Common example aborts in TSX would be:
      
      - Data conflict with another thread on memory read.
                                            Flags: TRANSACTION|ASYNC|CONFLICT
      - executing a WRMSR in a transaction. Flags: TRANSACTION|SYNC
      - HLE transaction in user space is too large
                                            Flags: ELISION|SYNC|CAPACITY-WRITE
      
      The only flag that is somewhat TSX specific is ELISION.
      
      This adds the perf core glue needed for reporting the new flag word out.
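
      A hedged userspace sketch of decoding the flag word; the PERF_TXN_* names
      below follow the uapi header as eventually merged and should be treated
      as assumptions when reading this changelog:

      	#include <stdio.h>
      	#include <linux/perf_event.h>
      	
      	static void print_txn_flags(unsigned long long txn)
      	{
      		if (txn & PERF_TXN_ELISION)		printf("ELISION ");
      		if (txn & PERF_TXN_TRANSACTION)		printf("TRANSACTION ");
      		if (txn & PERF_TXN_SYNC)		printf("SYNC ");
      		if (txn & PERF_TXN_ASYNC)		printf("ASYNC ");
      		if (txn & PERF_TXN_RETRY)		printf("RETRY ");
      		if (txn & PERF_TXN_CONFLICT)		printf("CONFLICT ");
      		if (txn & PERF_TXN_CAPACITY_WRITE)	printf("CAPACITY-WRITE ");
      		if (txn & PERF_TXN_CAPACITY_READ)	printf("CAPACITY-READ ");
      	
      		/* program-supplied abort code, e.g. 0xff for lock busy */
      		printf("code=%llx\n",
      		       (txn & PERF_TXN_ABORT_MASK) >> PERF_TXN_ABORT_SHIFT);
      	}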
      
      v2: Add MEM/MISC
      v3: Move transaction to the end
      v4: Separate capacity-read/write and remove misc
      v5: Remove _SAMPLE. Move abort flags to 32bit. Rename
          transaction to txn
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/r/1379688044-14173-2-git-send-email-andi@firstfloor.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Fix perf_pmu_migrate_context · 9886167d
      Committed by Peter Zijlstra
      While auditing the list_entry usage due to a trinity bug I found that
      perf_pmu_migrate_context violates the rules for
      perf_event::event_entry.
      
      The problem is that perf_event::event_entry is a RCU list element, and
      hence we must wait for a full RCU grace period before re-using the
      element after deletion.
      
      Therefore the usage in perf_pmu_migrate_context() which re-uses the
      entry immediately is broken. For now introduce another list_head into
      perf_event for this specific usage.
      
      This doesn't actually fix the trinity report because that never goes
      through this code.
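
      For illustration, what the RCU list rules require when recycling an
      element (a generic sketch, not the perf_pmu_migrate_context() code):

      	#include <linux/rculist.h>
      	#include <linux/rcupdate.h>
      	
      	static void move_entry_safely(struct list_head *entry,
      				      struct list_head *dst)
      	{
      		list_del_rcu(entry);
      		/* a full grace period must pass before the element is
      		 * relinked, or lockless readers still walking the old
      		 * list could be led into the destination list */
      		synchronize_rcu();
      		list_add_rcu(entry, dst);
      	}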
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-mkj72lxagw1z8fvjm648iznw@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  11. 02 Sep 2013, 1 commit
  12. 09 Aug 2013, 1 commit
  13. 15 Jul 2013, 1 commit
    • kernel: delete __cpuinit usage from all core kernel files · 0db0628d
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      This removes all the uses of the __cpuinit macros from C files in
      the core kernel directories (kernel, init, lib, mm, and include)
      that don't really have a specific maintainer.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  14. 23 Jun 2013, 1 commit
    • perf: Drop sample rate when sampling is too slow · 14c63f17
      Committed by Dave Hansen
      This patch keeps track of how long perf's NMI handler is taking,
      and also calculates how many samples perf can take a second.  If
      the sample length times the expected max number of samples
      exceeds a configurable threshold, it drops the sample rate.
      
      This way, we don't have a runaway sampling process eating up the
      CPU.
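
      A hedged sketch of the idea; the variable names and knobs here are
      illustrative assumptions, not the patch's exact code:

      	#include <linux/types.h>
      	#include <linux/time.h>
      	
      	static u64 max_samples_per_sec = 100000;	/* assumed knob */
      	static u64 cpu_time_max_percent = 25;		/* assumed knob */
      	
      	static void check_sample_rate(u64 avg_sample_ns)
      	{
      		/* NMI-time budget per second of CPU time */
      		u64 allowed_ns = NSEC_PER_SEC * cpu_time_max_percent / 100;
      	
      		if (avg_sample_ns * max_samples_per_sec > allowed_ns)
      			max_samples_per_sec /= 2;	/* drop the sample rate */
      	}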
      
      This patch can tend to drop the sample rate down to a level where
      perf doesn't work very well.  *BUT* the alternative is that my
      system hangs because it spends all of its time handling NMIs.
      
      I'll take a busted performance tool over an entire system that's
      busted and undebuggable any day.
      
      BTW, my suspicion is that there's still an underlying bug here.
      Using the HPET instead of the TSC is definitely a contributing
      factor, but I suspect there are some other things going on.
      But, I can't go dig down on a bug like that with my machine
      hanging all the time.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: paulus@samba.org
      Cc: acme@ghostprotocols.net
      Cc: Dave Hansen <dave@sr71.net>
      [ Prettified it a bit. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  15. 19 Jun 2013, 4 commits
  16. 28 May 2013, 4 commits
    • perf: Fix perf mmap bugs · 26cb63ad
      Committed by Peter Zijlstra
      Vince reported a problem found by his perf specific trinity
      fuzzer.
      
      Al noticed 2 problems with perf's mmap():
      
       - it has issues against fork() since we use vma->vm_mm for accounting.
       - it has an rb refcount leak on double mmap().
      
      We fix the issues against fork() by using VM_DONTCOPY; I don't
      think there's code out there that uses this; we didn't hear
      about weird accounting problems/crashes. If we do need this to
      work, the previously proposed VM_PINNED could make this work.
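
      Illustration only (the surrounding perf_mmap() code is omitted and the
      helper name is made up): opting the mapping out of fork() copying.

      	#include <linux/mm.h>
      	
      	static void mark_perf_mapping(struct vm_area_struct *vma)
      	{
      		/* children no longer inherit the mapping, so the
      		 * vma->vm_mm based accounting cannot be fooled by fork() */
      		vma->vm_flags |= VM_DONTCOPY;
      	}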
      
      Aside from the rb reference leak spotted by Al, Vince's example
      prog was indeed doing a double mmap() through the use of
      perf_event_set_output().
      
      This exposes another problem: since we now have 2 events with
      one buffer, the accounting gets screwy because we account per
      event. Fix this by making the buffer responsible for its own
      accounting.
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Link: http://lkml.kernel.org/r/20130528085548.GA12193@twins.programming.kicks-ass.net
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Add sysfs entry to adjust multiplexing interval per PMU · 62b85639
      Committed by Stephane Eranian
      This patch adds /sys/device/xxx/perf_event_mux_interval_ms to adjust
      the multiplexing interval per PMU. The unit is milliseconds. The
      value has to be >= 1.
      
      In the 4th version, we renamed the sysfs file to be more consistent
      with the other /proc/sys/kernel entries for perf_events.
      
      In the 5th version, we handle the reprogramming of the hrtimer using
      hrtimer_forward_now(). That way, we sync up to new timer value quickly
      (suggested by Jiri Olsa).
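
      A hedged sketch of that v5 reprogramming step (the function and argument
      names are assumptions; hrtimer_forward_now()/ns_to_ktime() are the real
      API):

      	#include <linux/hrtimer.h>
      	#include <linux/ktime.h>
      	
      	static void mux_interval_changed(struct hrtimer *timer, u64 new_ms)
      	{
      		/* resync the live timer to the freshly written interval
      		 * instead of waiting for the old period to expire */
      		hrtimer_forward_now(timer, ns_to_ktime(new_ms * NSEC_PER_MSEC));
      	}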
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Link: http://lkml.kernel.org/r/1364991694-5876-3-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Use hrtimers for event multiplexing · 9e630205
      Committed by Stephane Eranian
      The current scheme of using the timer tick was fine for per-thread
      events. However, it was causing bias issues in system-wide mode
      (including for uncore PMUs). Event groups would not get their fair
      share of runtime on the PMU. With tickless kernels, if a core is idle
      there is no timer tick, and thus no event rotation (multiplexing).
      However, there are events (especially uncore events) which do count
      even though cores are asleep.
      
      This patch changes the timer source for multiplexing.  It introduces a
      per-PMU per-cpu hrtimer. The advantage is that even when a core goes
      idle, it will come back to service the hrtimer, thus multiplexing on
      system-wide events works much better.
      
      The per-PMU implementation (suggested by PeterZ) enables adjusting the
      multiplexing interval per PMU. The preferred interval is stashed into
      the struct pmu. If not set, it will be forced to the default interval
      value.
      
      In order to minimize the impact of the hrtimer, it is turned on and
      off on demand. When the PMU on a CPU is overcommitted, the hrtimer
      is activated. It is stopped when the PMU is not overcommitted.
      
      In order for this to work properly, we had to change the order of
      initialization in start_kernel() such that hrtimer_init() is run
      before perf_event_init().
      
      The default interval in milliseconds is set to a timer tick just like
      with the old code. We will provide a sysctl to tune this in another
      patch.
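
      A minimal sketch of such a timer (the rotation hook and interval
      variable are assumptions; the hrtimer calls are the real API):

      	#include <linux/hrtimer.h>
      	
      	static ktime_t mux_interval;			/* assumed per-PMU interval */
      	
      	static void rotate_ctx(struct hrtimer *hr) { }	/* hypothetical rotation hook */
      	
      	static enum hrtimer_restart mux_hrtimer_handler(struct hrtimer *hr)
      	{
      		rotate_ctx(hr);
      		hrtimer_forward_now(hr, mux_interval);
      		return HRTIMER_RESTART;		/* keep firing while overcommitted */
      	}
      	
      	static void mux_hrtimer_setup(struct hrtimer *hr)
      	{
      		/* pinned: the timer stays on this CPU, and an idle core
      		 * still wakes up to service it */
      		hrtimer_init(hr, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
      		hr->function = mux_hrtimer_handler;
      	}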
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <peterz@infradead.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Link: http://lkml.kernel.org/r/1364991694-5876-2-git-send-email-eranian@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf: Fix hw breakpoints overflow period sampling · ab573844
      Committed by Jiri Olsa
      The hw breakpoint pmu 'add' function is missing the
      period_left update needed for SW events.
      
      The perf HW breakpoint events use the SW events framework
      to process the overflow, so the period needs to be properly
      initialized in the PMU 'add' method (sketched below).
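
      A hedged sketch of what such an 'add' initialization could look like
      (the period helper is an assumption; is_sampling_event() and
      arch_install_hw_breakpoint() are existing kernel interfaces):

      	#include <linux/perf_event.h>
      	#include <linux/hw_breakpoint.h>
      	
      	static int hw_breakpoint_add_sketch(struct perf_event *bp, int flags)
      	{
      		if (is_sampling_event(bp)) {
      			/* (re)arm the SW-event overflow period */
      			bp->hw.last_period = bp->hw.sample_period;
      			perf_swevent_set_period(bp);	/* assumed helper */
      		}
      		return arch_install_hw_breakpoint(bp);
      	}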
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Reviewed-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1367421944-19082-5-git-send-email-jolsa@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  17. 23 Apr 2013, 1 commit
    • perf: New helper to prevent full dynticks CPUs from stopping tick · 026249ef
      Committed by Frederic Weisbecker
      Provide a new helper that helps full dynticks CPUs avoid stopping
      their tick in case there are events in the local rotation list.
      
      This way we make sure that perf_event_task_tick() is serviced
      on demand.
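
      A hedged sketch of such a helper (the per-cpu rotation_list name is an
      assumption):

      	#include <linux/types.h>
      	#include <linux/list.h>
      	#include <linux/percpu.h>
      	
      	static DEFINE_PER_CPU(struct list_head, rotation_list);
      	
      	bool perf_event_can_stop_tick(void)
      	{
      		/* the tick may only stop when no events sit on this
      		 * CPU's rotation list awaiting multiplexing */
      		return list_empty(this_cpu_ptr(&rotation_list));
      	}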
      Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Chris Metcalf <cmetcalf@tilera.com>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Gilad Ben Yossef <gilad@benyossef.com>
      Cc: Hakan Akkan <hakanakkan@gmail.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Kevin Hilman <khilman@linaro.org>
      Cc: Li Zhong <zhong@linux.vnet.ibm.com>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
  18. 01 Apr 2013, 3 commits
  19. 27 Mar 2013, 1 commit
  20. 18 Mar 2013, 1 commit
  21. 16 Mar 2013, 1 commit
  22. 06 Mar 2013, 1 commit
  23. 09 Feb 2013, 1 commit
    • perf: Introduce hw_perf_event->tp_target and ->tp_list · f22c1bb6
      Committed by Oleg Nesterov
      sys_perf_event_open()->perf_init_event(event) is called before
      find_get_context(event); this means that event->ctx == NULL when
      class->reg(TRACE_REG_PERF_REGISTER/OPEN) is called and thus it
      can't know if this event is per-task or system-wide.
      
      This patch adds hw_perf_event->tp_target for PERF_TYPE_TRACEPOINT;
      this is analogous to the PERF_TYPE_BREAKPOINT/bp_target we already
      have. The patch also moves ->bp_target up so that it can overlap
      with the new member; this can help the compiler generate better code.
      
      trace_uprobe_register() will use it for prefiltering to avoid the
      unnecessary breakpoints in mm's we do not want to trace.
      
      ->tp_target doesn't have its own reference, but we can rely on the
      fact that either sys_perf_event_open() holds a reference, or it is
      equal to event->ctx->task. So this pointer is always valid until
      free_event().
      
      Also add the "struct list_head tp_list" into this union. It is not
      strictly necessary, but it can simplify the next changes and we can
      add it for free.
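
      An illustrative layout of the overlap (member spellings are assumptions
      based on this changelog):

      	#include <linux/list.h>
      	
      	struct task_struct;
      	
      	struct hw_perf_event_sketch {
      		union {
      			struct {	/* breakpoint */
      				struct task_struct	*bp_target;
      				struct list_head	bp_list;
      			};
      			struct {	/* tracepoint */
      				struct task_struct	*tp_target;
      				struct list_head	tp_list;
      			};
      		};	/* bp_target and tp_target share the same offset */
      	};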
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
  24. 01 Feb 2013, 1 commit
  25. 24 Oct 2012, 2 commits
  26. 13 Oct 2012, 1 commit
  27. 05 Oct 2012, 1 commit
  28. 13 Sep 2012, 1 commit