1. 01 Jul, 2011 (3 commits)
  2. 09 Jun, 2011 (1 commit)
  3. 07 Jun, 2011 (1 commit)
  4. 31 May, 2011 (1 commit)
  5. 29 May, 2011 (9 commits)
  6. 28 May, 2011 (1 commit)
  7. 04 May, 2011 (1 commit)
  8. 03 May, 2011 (1 commit)
    • perf: Start the restructuring · fae85b7c
      Committed by Borislav Petkov
      mv kernel/perf_event.c -> kernel/events/core.c. From there, all further
      sensible splitting can happen. The idea is that, with perf_event.c
      becoming pretty sizable and with the advent of the marriage with ftrace,
      splitting the functionality into its logical parts should help speed up
      the unification and manage the complexity of the subsystem.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
  9. 11 Apr, 2011 (1 commit)
  10. 05 Apr, 2011 (1 commit)
    • jump label: Introduce static_branch() interface · d430d3d7
      Committed by Jason Baron
      Introduce:
      
      static __always_inline bool static_branch(struct jump_label_key *key);
      
      instead of the old JUMP_LABEL(key, label) macro.
      
      In this way, jump labels become really easy to use:
      
      Define:
      
              struct jump_label_key jump_key;
      
      Can be used as:
      
              if (static_branch(&jump_key))
                      do unlikely code
      
      enable/disable via:
      
              jump_label_inc(&jump_key);
              jump_label_dec(&jump_key);
      
      that's it!
      
      For the jump labels disabled case, the static_branch() becomes an
      atomic_read(), and jump_label_inc()/dec() are simply atomic_inc(),
      atomic_dec() operations. We show testing results for this change below.
      
      Thanks to H. Peter Anvin for suggesting the 'static_branch()' construct.
      
      Since we now require a 'struct jump_label_key *key', we can store a pointer into
      the jump table addresses. In this way, we can enable/disable jump labels, in
      basically constant time. This change allows us to completely remove the previous
      hashtable scheme. Thanks to Peter Zijlstra for this re-write.
      
      Testing:
      
      I ran a series of 'tbench 20' runs 5 times (with reboots) for 3
      configurations, where tracepoints were disabled.
      
      jump label configured in
      avg: 815.6
      
      jump label *not* configured in (using atomic reads)
      avg: 800.1
      
      jump label *not* configured in (regular reads)
      avg: 803.4
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20110316212947.GA8792@redhat.com>
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Suggested-by: H. Peter Anvin <hpa@linux.intel.com>
      Tested-by: David Daney <ddaney@caviumnetworks.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
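      A minimal sketch of the interface in use, assuming built-in kernel code
      (my_feature_key, do_unlikely_work() and the enable/disable helpers are
      illustrative names, not part of the patch):

              #include <linux/jump_label.h>

              static struct jump_label_key my_feature_key; /* starts disabled */

              void hot_path(void)
              {
                      /* compiles to a straight-line no-op while the key is disabled */
                      if (static_branch(&my_feature_key))
                              do_unlikely_work();
              }

              /* flip the branch at runtime, e.g. from a sysctl handler */
              void my_feature_enable(void)  { jump_label_inc(&my_feature_key); }
              void my_feature_disable(void) { jump_label_dec(&my_feature_key); }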
  11. 31 Mar, 2011 (2 commits)
  12. 24 Mar, 2011 (1 commit)
  13. 23 Mar, 2011 (1 commit)
    • perf_events: Fix stale ->cgrp pointer in update_cgrp_time_from_cpuctx() · 68cacd29
      Committed by Stephane Eranian
      This patch solves a stale pointer problem in
      update_cgrp_time_from_cpuctx(). The cpuctx->cgrp
      was not cleared on all possible event exit paths,
      including:
      
         close()
           perf_release()
             perf_release_kernel()
               list_del_event()
      
      This patch fixes list_del_event() to clear cpuctx->cgrp
      when there are no cgroup events left in the context.
      
      [ This second version makes the code compile when
        CONFIG_CGROUP_PERF is not enabled. We unconditionally define
        perf_cpu_context->cgrp. ]
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Cc: perfmon2-devel@lists.sf.net
      Cc: paulus@samba.org
      Cc: davem@davemloft.net
      LKML-Reference: <20110323150306.GA1580@quad>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
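      A hedged sketch of the shape of the fix inside list_del_event(); the
      helpers and fields match the perf cgroup code of that era, but the
      exact placement is an assumption:

              if (is_cgroup_event(event)) {
                      ctx->nr_cgroups--;
                      cpuctx = __get_cpu_context(ctx);
                      /*
                       * If no cgroup events remain in this context, clear
                       * cpuctx->cgrp so update_cgrp_time_from_cpuctx() cannot
                       * dereference a stale pointer.
                       */
                      if (!ctx->nr_cgroups)
                              cpuctx->cgrp = NULL;
              }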
  14. 16 Mar, 2011 (3 commits)
  15. 04 Mar, 2011 (5 commits)
  16. 23 Feb, 2011 (2 commits)
  17. 16 Feb, 2011 (4 commits)
    • perf: Optimize hrtimer events · ba3dd36c
      Committed by Peter Zijlstra
      There is no need to re-initialize the hrtimer every time we start it,
      so don't do that (shaves a few cycles). Also, since we know hrtimers
      run at a fixed rate (nanoseconds) we can pre-compute the desired
      frequency at which they tick. This avoids us having to go through the
      whole adaptive frequency feedback logic (shaves another few cycles).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <1297448589.5226.47.camel@laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
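      A hedged illustration of both savings (the function names and the
      cached_period_ns field are assumptions, not the patch's identifiers):
      initialize the hrtimer once at event creation, cache the fixed period,
      and make every start a plain hrtimer_start():

              /* runs once, at event creation */
              static void sw_event_timer_init(struct hw_perf_event *hwc, u32 sample_freq)
              {
                      hrtimer_init(&hwc->hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
                      /* hrtimers tick in nanoseconds, so the period is computable up front */
                      hwc->cached_period_ns = div_u64(NSEC_PER_SEC, sample_freq);
              }

              /* runs on every start: no re-init, no adaptive-frequency feedback */
              static void sw_event_timer_start(struct hw_perf_event *hwc)
              {
                      hrtimer_start(&hwc->hrtimer,
                                    ns_to_ktime(hwc->cached_period_ns),
                                    HRTIMER_MODE_REL);
              }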
    • perf: Optimize throttling code · 163ec435
      Committed by Peter Zijlstra
      By pre-computing the maximum number of samples per tick we can avoid a
      multiplication and a conditional since MAX_INTERRUPTS >
      max_samples_per_tick.
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
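      A hedged sketch of the pre-computation (the helper name and the default
      sample rate are assumptions). Because MAX_INTERRUPTS is larger than any
      legal max_samples_per_tick, an already-throttled event passes the same
      single compare, so no extra conditional is needed:

              /* refreshed only when the sample-rate sysctl changes */
              static u64 max_samples_per_tick __read_mostly =
                      DIV_ROUND_UP(100000 /* assumed default rate */, HZ);

              /* hot path: one compare, no multiply */
              static inline int over_sample_limit(struct hw_perf_event *hwc)
              {
                      return hwc->interrupts >= max_samples_per_tick;
              }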
    • perf: Add cgroup support · e5d1367f
      Committed by Stephane Eranian
      This kernel patch adds the ability to filter monitoring based on
      container groups (cgroups). This is for use in per-cpu mode only.
      
      The cgroup to monitor is passed as a file descriptor in the pid
      argument to the syscall. The file descriptor must be opened to
      the cgroup name in the cgroup filesystem. For instance, if the
      cgroup name is foo and cgroupfs is mounted in /cgroup, then the
      file descriptor is opened to /cgroup/foo. Cgroup mode is
      activated by passing PERF_FLAG_PID_CGROUP in the flags argument
      to the syscall.
      
      For instance to measure in cgroup foo on CPU1 assuming
      cgroupfs is mounted under /cgroup:
      
      struct perf_event_attr attr;
      int cgroup_fd, fd;

      memset(&attr, 0, sizeof(attr)); /* attr must be zeroed before use */
      attr.size = sizeof(attr);

      cgroup_fd = open("/cgroup/foo", O_RDONLY);
      fd = perf_event_open(&attr, cgroup_fd, 1, -1, PERF_FLAG_PID_CGROUP);
      close(cgroup_fd);
      Signed-off-by: Stephane Eranian <eranian@google.com>
      [ added perf_cgroup_{exit,attach} ]
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <4d590250.114ddf0a.689e.4482@mx.google.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
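      A self-contained version of the snippet above, hedged: glibc ships no
      perf_event_open() wrapper, so a raw syscall is used, and the event type
      and the /cgroup mount point are assumptions for illustration:

              #include <string.h>
              #include <unistd.h>
              #include <fcntl.h>
              #include <sys/syscall.h>
              #include <linux/perf_event.h>

              static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                         int cpu, int group_fd, unsigned long flags)
              {
                      return syscall(__NR_perf_event_open, attr, pid, cpu,
                                     group_fd, flags);
              }

              int main(void)
              {
                      struct perf_event_attr attr;
                      int cgroup_fd, fd;

                      memset(&attr, 0, sizeof(attr));
                      attr.size   = sizeof(attr);
                      attr.type   = PERF_TYPE_HARDWARE;       /* assumed event */
                      attr.config = PERF_COUNT_HW_CPU_CYCLES;

                      cgroup_fd = open("/cgroup/foo", O_RDONLY); /* assumed mount */
                      if (cgroup_fd < 0)
                              return 1;

                      /* the pid argument carries the cgroup fd; measure CPU 1 */
                      fd = perf_event_open(&attr, cgroup_fd, 1, -1,
                                           PERF_FLAG_PID_CGROUP);
                      close(cgroup_fd);
                      if (fd < 0)
                              return 1;

                      /* ... read(fd, ...) here to sample the counter ... */
                      close(fd);
                      return 0;
              }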
    • perf: Fix throttle logic · 4fe757dd
      Committed by Peter Zijlstra
      It was possible to call pmu::start() on an already running event. In
      particular, this led to some wreckage, as the hrtimer events would
      re-initialize active timers.

      This was due to throttled events being activated again by scheduling.
      Scheduling in a context would add and force-start events, resulting in
      running events with a possible throttle status. The next tick to hit
      that task would then try to unthrottle the event and call ->start() on
      an already running event.
      Reported-by: Jeff Moyer <jmoyer@redhat.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <new-submission>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
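      A hedged sketch of the invariant the fix restores (illustrative, not
      the actual diff): on the unthrottling tick, only events that were
      actually throttled, and therefore stopped, get ->start() again, so a
      running event is never started twice:

              /* per-tick unthrottle path */
              if (hwc->interrupts == MAX_INTERRUPTS) {
                      hwc->interrupts = 0;
                      perf_log_throttle(event, 1);
                      event->pmu->start(event, 0); /* was stopped when throttled */
              }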
  18. 03 Feb, 2011 (2 commits)