1. 26 May 2009 (2 commits)
  2. 25 May 2009 (2 commits)
  3. 24 May 2009 (4 commits)
    • perf_counter: Remove perf_counter_context::nr_enabled · 475c5579
      Committed by Peter Zijlstra
      Now that pctrl() no longer disables other people's counters,
      remove the PMU cache code that deals with that.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163013.032998331@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      475c5579
    • perf_counter: Change pctrl() behaviour · 082ff5a2
      Committed by Peter Zijlstra
      Instead of enabling/disabling all counters acting on a particular
      task, enable/disable all the counters we created.
      
      [ v2: fix crash on first counter enable ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.916937244@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      082ff5a2
    • perf_counter: Sanitize counter->mutex · fccc714b
      Committed by Peter Zijlstra
      s/counter->mutex/counter->child_mutex/ and make sure it's only
      used to protect child_list.
      
      The usage in __perf_counter_exit_task() doesn't appear to be
      problematic since ctx->mutex also covers anything related to fd
      tear-down.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.533186528@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      fccc714b
    • perf_counter: Fix dynamic irq_period logging · e220d2dc
      Committed by Peter Zijlstra
      We call perf_adjust_freq() from perf_counter_task_tick(), which
      is called under the rq->lock, causing lock recursion. However,
      it is no longer required to be called under the rq->lock, so
      remove it from under it.
      
      Also, fix up some related comments.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090523163012.476197912@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      e220d2dc
  4. 22 May 2009 (3 commits)
    • perf_counter: fix !PERF_COUNTERS build failure · 910431c7
      Committed by Ingo Molnar
      Update the !CONFIG_PERF_COUNTERS prototype too, for
      perf_counter_task_sched_out().
      
      [ Impact: build fix ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18966.10666.517218.332164@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      910431c7
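      A minimal sketch of the updated stub, assuming the three-argument
      prototype introduced by the lazy-PMU-switch patch below (check the
      actual declaration in include/linux/perf_counter.h of this tree):

        /* !CONFIG_PERF_COUNTERS: empty inline matching the new prototype */
        static inline void
        perf_counter_task_sched_out(struct task_struct *task,
                                    struct task_struct *next, int cpu) { }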
    • perf_counter: Optimize context switch between identical inherited contexts · 564c2b21
      Committed by Paul Mackerras
      When monitoring a process and its descendants with a set of inherited
      counters, we can often get the situation in a context switch where
      both the old (outgoing) and new (incoming) process have the same set
      of counters, and their values are ultimately going to be added together.
      In that situation it doesn't matter which set of counters are used to
      count the activity for the new process, so there is really no need to
      go through the process of reading the hardware counters and updating
      the old task's counters and then setting up the PMU for the new task.
      
      This optimizes the context switch in this situation.  Instead of
      scheduling out the perf_counter_context for the old task and
      scheduling in the new context, we simply transfer the old context
      to the new task and keep using it without interruption.  The new
      context gets transferred to the old task.  This means that both
      tasks still have a valid perf_counter_context, so no special case
      is introduced when the old task gets scheduled in again, either on
      this CPU or another CPU.
      
      The equivalence of contexts is detected by keeping a pointer in
      each cloned context pointing to the context it was cloned from.
      To cope with the situation where a context is changed by adding
      or removing counters after it has been cloned, we also keep a
      generation number on each context which is incremented every time
      a context is changed.  When a context is cloned we take a copy
      of the parent's generation number, and two cloned contexts are
      equivalent only if they have the same parent and the same
      generation number.  In order that the parent context pointer
      remains valid (and is not reused), we increment the parent
      context's reference count for each context cloned from it.
      
      Since we don't have individual fds for the counters in a cloned
      context, the only thing that can make two clones of a given parent
      different after they have been cloned is enabling or disabling all
      counters with prctl.  To account for this, we keep a count of the
      number of enabled counters in each context.  Two contexts must have
      the same number of enabled counters to be considered equivalent.
      
      Here are some measurements of the context switch time as measured with
      the lat_ctx benchmark from lmbench, comparing the times obtained with
      and without this patch series:
      
                          -----Unmodified-----      With this patch series
      Counters:           none    2 HW    4H+4S     none    2 HW    4H+4S

      2 processes:
      Average             3.44    6.45    11.24     3.12    3.39    3.60
      St dev              0.04    0.04    0.13      0.05    0.17    0.19

      8 processes:
      Average             6.45    8.79    14.00     5.57    6.23    7.57
      St dev              1.27    1.04    0.88      1.42    1.46    1.42

      32 processes:
      Average             5.56    8.43    13.78     5.28    5.55    7.15
      St dev              0.41    0.47    0.53      0.54    0.57    0.81
      
      The numbers are the mean and standard deviation of 20 runs of
      lat_ctx.  The "none" columns are lat_ctx run directly without any
      counters.  The "2 HW" columns are with lat_ctx run under perfstat,
      counting cycles and instructions.  The "4H+4S" columns are lat_ctx run
      under perfstat with 4 hardware counters and 4 software counters
      (cycles, instructions, cache references, cache misses, task
      clock, context switch, cpu migrations, and page faults).
      
      [ Impact: performance optimization of counter context-switches ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18966.10666.517218.332164@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      564c2b21
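      A rough illustration of the equivalence test described above; this is a
      sketch, not the literal helper in kernel/perf_counter.c:

        static int context_equiv(struct perf_counter_context *ctx1,
                                 struct perf_counter_context *ctx2)
        {
                /* Both must be clones of the same parent context ... */
                return ctx1->parent_ctx && ctx1->parent_ctx == ctx2->parent_ctx
                        /* ... and unchanged (same generation) since cloning. */
                        && ctx1->parent_gen == ctx2->parent_gen;
        }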
    • perf_counter: Dynamically allocate tasks' perf_counter_context struct · a63eaf34
      Committed by Paul Mackerras
      This replaces the struct perf_counter_context in the task_struct with
      a pointer to a dynamically allocated perf_counter_context struct.  The
      main reason for doing this is to allow us to transfer a
      perf_counter_context from one task to another when we do lazy PMU
      switching in a later patch.
      
      This has a few side-benefits: the task_struct becomes a little smaller,
      we save some memory because only tasks that have perf_counters attached
      get a perf_counter_context allocated for them, and we can remove the
      inclusion of <linux/perf_counter.h> in sched.h, meaning that we don't
      end up recompiling nearly everything whenever perf_counter.h changes.
      
      The perf_counter_context structures are reference-counted and freed
      when the last reference is dropped.  A context can have references
      from its task and the counters on its task.  Counters can outlive the
      task so it is possible that a context will be freed well after its
      task has exited.
      
      Contexts are allocated on fork if the parent had a context, or
      otherwise the first time that a per-task counter is created on a task.
      In the latter case, we set the context pointer in the task struct
      locklessly using an atomic compare-and-exchange operation in case we
      raced with some other task in creating a context for the subject task.
      
      This also removes the task pointer from the perf_counter struct.  The
      task pointer was not used anywhere and would make it harder to move a
      context from one task to another.  Anything that needed to know which
      task a counter was attached to was already using counter->ctx->task.
      
      The __perf_counter_init_context function moves up in perf_counter.c
      so that it can be called from find_get_context, and now initializes
      the refcount, but is otherwise unchanged.
      
      We were potentially calling list_del_counter twice: once from
      __perf_counter_exit_task when the task exits and once from
      __perf_counter_remove_from_context when the counter's fd gets closed.
      This adds a check in list_del_counter so it doesn't do anything if
      the counter has already been removed from the lists.
      
      Since perf_counter_task_sched_in doesn't do anything if the task doesn't
      have a context, and leaves cpuctx->task_ctx = NULL, this adds code to
      __perf_install_in_context to set cpuctx->task_ctx if necessary, i.e. in
      the case where the current task adds the first counter to itself and
      thus creates a context for itself.
      
      This also adds similar code to __perf_counter_enable to handle a
      similar situation which can arise when the counters have been disabled
      using prctl; that also leaves cpuctx->task_ctx = NULL.
      
      [ Impact: refactor counter context management to prepare for new feature ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18966.10075.781053.231153@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a63eaf34
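      A simplified sketch of the lockless install described above;
      find_get_context() is named in the commit, but the alloc_perf_context()
      helper and the exact field names here are illustrative:

        static struct perf_counter_context *
        find_get_context(struct task_struct *task)
        {
                struct perf_counter_context *ctx, *old;

                ctx = task->perf_counter_ctxp;
                if (ctx)
                        return ctx;

                ctx = alloc_perf_context(task);   /* kzalloc + init */
                if (!ctx)
                        return NULL;

                /* Install atomically; if another task raced us, use theirs. */
                old = cmpxchg(&task->perf_counter_ctxp, NULL, ctx);
                if (old) {
                        kfree(ctx);
                        ctx = old;
                }
                return ctx;
        }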
  5. 20 May 2009 (3 commits)
    • perf_counter: Log irq_period changes · 26b119bc
      Committed by Peter Zijlstra
      For the dynamic irq_period code, log whenever we change the period so that
      analyzing code can normalize the event flow.
      
      [ Impact: add new feature to allow more precise profiling ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090520102553.298769743@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      26b119bc
    • perf_counter: Solve the rotate_ctx vs inherit race differently · d7b629a3
      Committed by Peter Zijlstra
      Instead of disabling RR scheduling of the counters, use a different list
      that does not get rotated to iterate the counters on inheritance.
      
      [ Impact: cleanup, optimization ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: John Kacur <jkacur@redhat.com>
      LKML-Reference: <20090520102553.237504544@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      d7b629a3
    • perf_counter: fix counter inheritance race · c44d70a3
      Committed by Ingo Molnar
      Context rotation should not occur when we are in the middle of
      walking the counter list when inheriting counters ...
      
      [ Impact: fix occasionally incorrect perf stat results ]
      
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c44d70a3
  6. 15 May 2009 (3 commits)
    • perf_counter: allow arch to supply event misc flags and instruction pointer · 9d23a90a
      Committed by Paul Mackerras
      At present the values we put in overflow events for the misc
      flags indicating processor mode and the instruction pointer are
      obtained using the standard user_mode() and
      instruction_pointer() functions. Those functions tell you where
      the performance monitor interrupt was taken, which might not be
      exactly where the counter overflow occurred, for example
      because interrupts were disabled at the point where the
      overflow occurred, or because the processor had many
      instructions in flight and chose to complete some more
      instructions beyond the one that caused the counter overflow.
      
      Some architectures (e.g. powerpc) can supply more precise
      information about where the counter overflow occurred and the
      processor mode at that point.  This introduces new functions,
      perf_misc_flags() and perf_instruction_pointer(), which arch
      code can override to provide more precise information if
      available.  They have default implementations which are
      identical to the existing code.
      
      This also adds a new misc flag value,
      PERF_EVENT_MISC_HYPERVISOR, for the case where a counter
      overflow occurred in the hypervisor.  We encode the processor
      mode in the 2 bits previously used to indicate user or kernel
      mode; the values for user and kernel mode are unchanged and
      hypervisor mode is indicated by both bits being set.
      
      [ Impact: generalize perfcounter core facilities ]
      Signed-off-by: Paul Mackerras <paulus@samba.org>
      Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <18956.1272.818511.561835@cargo.ozlabs.ibm.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9d23a90a
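      The default implementations mentioned above amount to something like
      the following sketch (arch code overrides these hooks when it can
      report a more precise overflow location):

        #ifndef perf_misc_flags
        #define perf_misc_flags(regs)   (user_mode(regs) ? \
                                         PERF_EVENT_MISC_USER : \
                                         PERF_EVENT_MISC_KERNEL)
        #define perf_instruction_pointer(regs)  instruction_pointer(regs)
        #endif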
    • perf_counter: frequency based adaptive irq_period · 60db5e09
      Committed by Peter Zijlstra
      Instead of specifying the irq_period for a counter, provide a target interrupt
      frequency and dynamically adapt the irq_period to match this frequency.
      
      [ Impact: new perf-counter attribute/feature ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      LKML-Reference: <20090515132018.646195868@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      60db5e09
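      The adaptation boils down to rescaling the period so the observed
      interrupt rate converges on the requested frequency; a hedged sketch
      (field and helper names are illustrative, not the literal kernel code):

        static void adjust_period(struct perf_counter *counter, u64 events_per_tick)
        {
                u64 freq = counter->hw_event.irq_freq;  /* wanted interrupts/sec */
                u64 rate = events_per_tick * HZ;        /* extrapolated events/sec */

                if (freq)
                        counter->hw.irq_period = div64_u64(rate, freq);
        }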
    • perf_counter: Rework the perf counter disable/enable · 9e35ad38
      Committed by Peter Zijlstra
      The current disable/enable mechanism is:
      
      	token = hw_perf_save_disable();
      	...
      	/* do bits */
      	...
      	hw_perf_restore(token);
      
      This works well, provided that the use nests properly. Except we don't.
      
      x86 NMI/INT throttling has non-nested use of this, breaking things. Therefore
      provide a reference counter disable/enable interface, where the first disable
      disables the hardware, and the last enable enables the hardware again.
      
      [ Impact: refactor, simplify the PMU disable/enable logic ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      9e35ad38
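      A sketch of the reference-counted scheme described above, assuming the
      interface is named perf_disable()/perf_enable() with per-CPU bookkeeping:

        static DEFINE_PER_CPU(int, perf_disable_count);

        void perf_disable(void)
        {
                if (__get_cpu_var(perf_disable_count)++ == 0)
                        hw_perf_disable();      /* first disable stops the PMU */
        }

        void perf_enable(void)
        {
                if (--__get_cpu_var(perf_disable_count) == 0)
                        hw_perf_enable();       /* last enable re-arms it */
        }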
  7. 09 May 2009 (3 commits)
    • perf_counter: add PERF_RECORD_CPU · f370e1e2
      Committed by Peter Zijlstra
      Allow recording the CPU number the event was generated on.
      
      RFC: this leaves a u32 as reserved; should we fill in node_id()
           there, or leave this open for future extension, since
           userspace can already do the cpu->node mapping easily if
           needed?
      
      [ Impact: extend perfcounter output record format ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090508170029.008627711@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      f370e1e2
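      In record-layout terms the addition is small; a sketch of the extra
      sample payload (the struct name is illustrative, and the reserved word
      is the open question from the RFC note above):

        struct perf_record_cpu {
                u32     cpu;            /* CPU the event was generated on */
                u32     reserved;       /* could carry node_id() later */
        };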
    • perf_counter: add PERF_RECORD_CONFIG · a85f61ab
      Committed by Peter Zijlstra
      Much like CONFIG_RECORD_GROUP records the hw_event.config to
      identify the values, allow recording this for all counters.
      
      [ Impact: extend perfcounter output record format ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090508170028.923228280@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      a85f61ab
    • perf_counter: rework ioctl()s · 3df5edad
      Committed by Peter Zijlstra
      Corey noticed that ioctl()s on grouped counters didn't work on
      the whole group. This extends the ioctl() interface to take a
      second argument that is interpreted as a flags field. We then
      provide PERF_IOC_FLAG_GROUP to toggle the behaviour.
      
      Having this flag gives the greatest flexibility, allowing you
      to individually enable/disable/reset counters in a group, or
      all together.
      
      [ Impact: fix group counter enable/disable semantics ]
      Reported-by: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      LKML-Reference: <20090508170028.837558214@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3df5edad
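      Usage sketch, assuming a group-leader fd and the ioctl constants named
      in this series (error handling omitted):

        #include <sys/ioctl.h>

        static void toggle_group(int group_fd, int enable)
        {
                /* The ioctl() argument is now interpreted as a flags field;
                 * PERF_IOC_FLAG_GROUP applies the operation to the whole group. */
                ioctl(group_fd,
                      enable ? PERF_COUNTER_IOC_ENABLE : PERF_COUNTER_IOC_DISABLE,
                      PERF_IOC_FLAG_GROUP);
        }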
  8. 06 May 2009 (3 commits)
  9. 05 May 2009 (1 commit)
    • perf_counter: initialize the per-cpu context earlier · 0d905bca
      Committed by Ingo Molnar
      Per-cpu scheduling for perfcounters wants to take the context lock,
      but that lock first needs to be initialized. Currently the
      initialization is an early_initcall() - but that is too late: the
      task tick runs much sooner than that.
      
      Call it explicitly from the scheduler init sequence instead.
      
      [ Impact: fix access-before-init crash ]
      
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0d905bca
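      A sketch of the fix described above, assuming the setup hook is called
      perf_counter_init() (the existing sched_init() body is elided):

        void __init sched_init(void)
        {
                /* ... existing scheduler setup ... */

                /* Initialize each CPU's perf context and its lock before
                 * the first scheduler tick can run. */
                perf_counter_init();
        }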
  10. 01 May 2009 (1 commit)
    • perf_counter: fix race in perf_output_* · c33a0bc4
      Committed by Peter Zijlstra
      When two (or more) contexts output to the same buffer, it is possible
      to observe half-written output.
      
      Suppose we have CPU0 doing perf_counter_mmap(), CPU1 doing
      perf_counter_overflow(). If CPU1 does a wakeup and exposes head to
      user-space, then CPU2 can observe the data CPU0 is still writing.
      
      [ Impact: fix occasionally corrupted profiling records ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090501102533.007821627@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      c33a0bc4
  11. 29 April 2009 (3 commits)
  12. 09 April 2009 (7 commits)
  13. 07 April 2009 (5 commits)
    • perf_counter: rework context time · 4af4998b
      Committed by Peter Zijlstra
      Since perf_counter_context is switched along with tasks, we can
      maintain the context time without using the task runtime clock.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.353552838@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4af4998b
    • perf_counter: change event definition · 4c9e2542
      Committed by Peter Zijlstra
      Currently the definition of an event is slightly ambiguous. We have
      wakeup events, for poll() and SIGIO, which are either generated
      when a record crosses a page boundary (hw_events.wakeup_events == 0),
      or every wakeup_events new records.
      
      Now a record can be either a counter overflow record, or a number of
      different things, like the mmap PROT_EXEC region notifications.
      
      Then there is the PERF_COUNTER_IOC_REFRESH event limit, which only
      considers counter overflows.
      
      This patch changes the wakeup_events and SIGIO notification to only
      consider overflow events. Furthermore, it changes the SIGIO
      notification to report SIGHUP when the event limit is reached and
      the counter will be disabled.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.266679874@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      4c9e2542
    • perf_counter: comment the perf_event_type stuff · 0c593b34
      Committed by Peter Zijlstra
      Describe the event format.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.211174347@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      0c593b34
    • perf_counter: counter overflow limit · 79f14641
      Committed by Peter Zijlstra
      Provide means to auto-disable the counter after 'n' overflow events.
      
      Create the counter with hw_event.disabled = 1, and then issue an
      ioctl(fd, PERF_COUNTER_IOC_REFRESH, n); to set the limit and enable
      the counter.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.083139737@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      79f14641
    • perf_counter: PERF_RECORD_TIME · 339f7c90
      Committed by Peter Zijlstra
      By popular request, provide means to log a timestamp along with the
      counter overflow event.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
      LKML-Reference: <20090406094518.024173282@chello.nl>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      339f7c90