1. 13 Dec 2011, 1 commit
    • cgroup: don't use subsys->can_attach_task() or ->attach_task() · bb9d97b6
      Authored by Tejun Heo
      Now that subsys->can_attach() and attach() take @tset instead of
      @task, they can handle per-task operations.  Convert
      ->can_attach_task() and ->attach_task() users to use ->can_attach()
      and attach() instead.  Most conversions are straightforward.
      Noteworthy changes are:
      
      * In cgroup_freezer, remove the unnecessary NULL assignments to
        unused methods.  They are useless and very prone to getting out
        of sync, which has already happened.
      
      * In cpuset, the PF_THREAD_BOUND test is now performed for each
        task.  This makes no practical difference but is conceptually
        cleaner; see the sketch below.
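      As a rough illustration, a converted ->can_attach() iterates the
      taskset itself.  This is only a hedged sketch of the callback shape,
      assuming the cgroup_taskset_for_each() iterator introduced in the
      same series; it is not the verbatim code from this commit:

              static int example_can_attach(struct cgroup_subsys *ss,
                                            struct cgroup *cgrp,
                                            struct cgroup_taskset *tset)
              {
                      struct task_struct *task;

                      /* per-task check, formerly in ->can_attach_task() */
                      cgroup_taskset_for_each(task, NULL, tset) {
                              if (task->flags & PF_THREAD_BOUND)
                                      return -EINVAL;
                      }
                      return 0;
              }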
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Paul Menage <paul@paulmenage.org>
      Cc: Balbir Singh <bsingharora@gmail.com>
      Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      Cc: James Morris <jmorris@namei.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Peter Zijlstra <peterz@infradead.org>
  2. 03 Nov 2011, 1 commit
  3. 01 Nov 2011, 2 commits
    • mm: distinguish between mlocked and pinned pages · bc3e53f6
      Authored by Christoph Lameter
      Some kernel components (InfiniBand and perf) pin user space memory
      by increasing the page count and account that memory as "mlocked".
      
      The difference between mlocking and pinning is:
      
      A. mlocked pages are marked with PG_mlocked and are exempt from
         swapping. Page migration may move them around though.
         They are kept on a special LRU list.
      
      B. Pinned pages cannot be moved because something needs to
         directly access physical memory. They may not be on any
         LRU list.
      
      I recently saw an mlockall()ed process where mm->locked_vm became
      bigger than the virtual size of the process (!) because some
      memory was accounted for twice:

      once when the page was mlocked, and once when the InfiniBand
      layer increased the refcount because it needed to pin the RDMA
      memory.
      
      This patch introduces a separate counter for pinned pages and
      accounts them separately.
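      As a hedged sketch of the resulting accounting split (simplified,
      with a hypothetical npages variable; not the verbatim diff), a
      pinning path now charges the new counter instead of locked_vm:

              down_write(&mm->mmap_sem);
              mm->pinned_vm += npages;  /* was: mm->locked_vm += npages; */
              up_write(&mm->mmap_sem);

      so an mlockall()ed process with pinned RDMA buffers is no longer
      counted twice in locked_vm.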
      Signed-off-by: Christoph Lameter <cl@linux.com>
      Cc: Mike Marciniszyn <infinipath@qlogic.com>
      Cc: Roland Dreier <roland@kernel.org>
      Cc: Sean Hefty <sean.hefty@intel.com>
      Cc: Hugh Dickins <hughd@google.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kernel: Fix files explicitly needing EXPORT_SYMBOL infrastructure · 6e5fdeed
      Authored by Paul Gortmaker
      These files were getting <linux/module.h> via an implicit, non-obvious
      path, but we want to crush those out of existence since they cost
      compile time, processing thousands of lines of headers for no reason.
      Give them the lightweight header that contains just the
      EXPORT_SYMBOL infrastructure.
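      Concretely, the pattern is to include the lightweight header in
      files that only export symbols.  A hedged sketch with a hypothetical
      helper, not a file from the actual patch:

              #include <linux/export.h>  /* just EXPORT_SYMBOL, not all of module.h */

              int my_helper(void)
              {
                      return 0;
              }
              EXPORT_SYMBOL(my_helper);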
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
  4. 31 Aug 2011, 2 commits
    • perf_event: Fix broken calc_timer_values() · 7f310a5d
      Authored by Eric B Munson
      We detected a serious issue with PERF_SAMPLE_READ and
      timing information when events were being multiplexed.
      
      Samples would have time_running > time_enabled. That
      was easy to reproduce with a libpfm4 example (ran 3
      times to cause multiplexing on Core 2):
      
       $ syst_smpl -e uops_retired:freq=1 &
       $ syst_smpl -e uops_retired:freq=1 &
       $ syst_smpl -e uops_retired:freq=1 &
       IIP:0x0000000040062d ... PERIOD:2355332948 ENA=40144625315 RUN=60014875184
       syst_smpl: WARNING: time_running > time_enabled
      	63277537998 uops_retired:freq=1 , scaled
      
      The bug was not present in kernels up to (and including) 3.0. It
      turns out the bug was introduced by the following commit:
      
      commit c4794295
      
          events: Move lockless timer calculation into helper function
      
      The parameters of the function were reversed, yet the call sites
      were not updated to reflect the change. That led to time_running
      and time_enabled being swapped. It had no effect when there was
      no multiplexing, because in that case time_running = time_enabled,
      but it would show up in any other scenario; see the sketch below.
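      The bug class is easy to picture.  This is only a hedged sketch
      with hypothetical signatures, not the actual diff:

              /* the helper's parameter order changed ... */
              static void calc_timer_values(struct perf_event *event,
                                            u64 *enabled, u64 *running);

              /* ... but a call site kept the old order, silently
               * swapping the two out-parameters: */
              calc_timer_values(event, &running, &enabled);  /* buggy */
              calc_timer_values(event, &enabled, &running);  /* fixed */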
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20110829124112.GA4828@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • perf: provide PMU when initing events · 5f12a761
      Authored by Mark Rutland
      Currently, an event's 'pmu' field is set after pmu::event_init() is
      called. This means that pmu::event_init() must figure out which struct
      pmu the event was initialised from. This makes it difficult to
      consolidate common event initialisation code for similar PMUs, and
      very difficult to implement drivers for PMUs which can have multiple
      instances (e.g. a USB controller PMU, a GPU PMU, etc).
      
      This patch sets the 'pmu' field before initialising the event, allowing
      event init code to identify the struct pmu instance easily. In the
      event of failure to initialise an event, the event is destroyed via
      kfree() without calling perf_event::destroy(), so this shouldn't
      result in bad behaviour even if the destroy field was set before
      failure to initialise was noted.
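      In other words, the initialisation order becomes (a hedged sketch
      of the shape of the change, not the verbatim kernel code):

              event->pmu = pmu;              /* now visible inside event_init() */
              ret = pmu->event_init(event);
              if (ret)
                      goto err;              /* event freed with kfree();
                                                event->destroy() is not called */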
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/1313062280-19123-1-git-send-email-mark.rutland@arm.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  5. 29 Aug 2011, 1 commit
    • perf events: Fix slow and broken cgroup context switch code · a8d757ef
      Authored by Stephane Eranian
      The current cgroup context switch code was incorrect, leading
      to bogus counts. Furthermore, as soon as there was an active
      cgroup event on a CPU, the context switch cost on that CPU
      would increase by a significant amount as demonstrated by a
      simple ping/pong example:
      
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10684.51 ctxsw/s
      
      Now start a cgroup perf stat:
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 100
      
      $ ./pong
       Both processes pinned to CPU1, running for 10s
       6674.61 ctxsw/s
      
      That's a 37% penalty.
      
      Note that pong is not even in the monitored cgroup.
      
      The results shown by perf stat are bogus:
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 100
      
       Performance counter stats for 'sleep 100':
      
       CPU1 <not counted> cycles   test
       CPU1 16,984,189,138 cycles  #    0.000 GHz
      
      The second 'cycles' event should report a count @ CPU clock
      (here 2.4GHz) as it is counting across all cgroups.
      
      The patch below fixes the bogus accounting and bypasses any
      cgroup switch in case the outgoing and incoming tasks are
      in the same cgroup, as in the sketch that follows.
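      The bypass boils down to comparing the two tasks' perf cgroups on
      the context-switch path.  A hedged sketch of the idea (simplified;
      the helper name comes from the perf cgroup code, but this is not
      the verbatim patch):

              /* in the sched-out/sched-in path: */
              if (perf_cgroup_from_task(prev) == perf_cgroup_from_task(next))
                      return;  /* same cgroup: no cgroup event switch needed */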
      
      With this patch the same test now yields:
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10775.30 ctxsw/s
      
      Start perf stat with cgroup:
      
       $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
      Run pong outside the cgroup:
       $ ./pong
       Both processes pinned to CPU1, running for 10s
       10687.80 ctxsw/s
      
      The penalty is now less than 2%.
      
      And the results for perf stat are correct:
      
      $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
       Performance counter stats for 'sleep 10':
      
       CPU1 <not counted> cycles test #    0.000 GHz
       CPU1 23,933,981,448 cycles      #    0.000 GHz
      
      Now perf stat reports the correct count for
      the non-cgroup event.
      
      If we run pong inside the cgroup, then we also get the
      correct counts:
      
      $ perf stat -e cycles,cycles -A -a -G test  -C 1 -- sleep 10
      
       Performance counter stats for 'sleep 10':
      
       CPU1 22,297,726,205 cycles test #    0.000 GHz
       CPU1 23,933,981,448 cycles      #    0.000 GHz
      
            10.001457237 seconds time elapsed
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Link: http://lkml.kernel.org/r/20110825135803.GA4697@quad
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  6. 14 Aug 2011, 2 commits
  7. 22 Jul 2011, 1 commit
  8. 01 Jul 2011, 8 commits
  9. 09 Jun 2011, 1 commit
  10. 07 Jun 2011, 1 commit
  11. 31 May 2011, 1 commit
  12. 29 May 2011, 9 commits
  13. 28 May 2011, 1 commit
  14. 04 May 2011, 1 commit
  15. 03 May 2011, 1 commit
    • perf: Start the restructuring · fae85b7c
      Authored by Borislav Petkov
      mv kernel/perf_event.c -> kernel/events/core.c. From there, all further
      sensible splitting can happen. The idea is that, with perf_event.c
      becoming quite sizable and with the advent of the marriage with ftrace,
      splitting its functionality into logical parts should help speed up
      the unification and manage the complexity of the subsystem.
      Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
  16. 11 Apr 2011, 1 commit
  17. 05 Apr 2011, 1 commit
    • jump label: Introduce static_branch() interface · d430d3d7
      Authored by Jason Baron
      Introduce:
      
      static __always_inline bool static_branch(struct jump_label_key *key);
      
      instead of the old JUMP_LABEL(key, label) macro.
      
      In this way, jump labels become really easy to use:
      
      Define:
      
              struct jump_label_key jump_key;
      
      Can be used as:
      
              if (static_branch(&jump_key))
                      do unlikely code
      
      enable/disable via:
      
              jump_label_inc(&jump_key);
              jump_label_dec(&jump_key);
      
      that's it!
      
      For the jump-labels-disabled case, static_branch() becomes an
      atomic_read(), and jump_label_inc()/dec() are simply atomic_inc()/
      atomic_dec() operations, as sketched below. We show testing results
      for this change further down.
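      A hedged sketch of that fallback (reconstructed from the description
      above, so treat the exact definitions as illustrative):

              struct jump_label_key {
                      atomic_t enabled;
              };

              static __always_inline bool static_branch(struct jump_label_key *key)
              {
                      if (unlikely(atomic_read(&key->enabled)))
                              return true;
                      return false;
              }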
      
      Thanks to H. Peter Anvin for suggesting the 'static_branch()' construct.
      
      Since we now require a 'struct jump_label_key *key', we can store a pointer into
      the jump table addresses. In this way, we can enable/disable jump labels, in
      basically constant time. This change allows us to completely remove the previous
      hashtable scheme. Thanks to Peter Zijlstra for this re-write.
      
      Testing:
      
      I ran a series of 'tbench 20' runs 5 times (with reboots) for 3
      configurations, where tracepoints were disabled.
      
      jump label configured in
      avg: 815.6
      
      jump label *not* configured in (using atomic reads)
      avg: 800.1
      
      jump label *not* configured in (regular reads)
      avg: 803.4
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      LKML-Reference: <20110316212947.GA8792@redhat.com>
      Signed-off-by: Jason Baron <jbaron@redhat.com>
      Suggested-by: H. Peter Anvin <hpa@linux.intel.com>
      Tested-by: David Daney <ddaney@caviumnetworks.com>
      Acked-by: Ralf Baechle <ralf@linux-mips.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
  18. 31 Mar 2011, 2 commits
  19. 24 Mar 2011, 1 commit
  20. 23 Mar 2011, 1 commit
    • perf_events: Fix stale ->cgrp pointer in update_cgrp_time_from_cpuctx() · 68cacd29
      Authored by Stephane Eranian
      This patch solves a stale pointer problem in
      update_cgrp_time_from_cpuctx(). The cpuctx->cgrp
      was not cleared on all possible event exit paths,
      including:
      
         close()
           perf_release()
             perf_event_release_kernel()
               list_del_event()
      
      This patch fixes list_del_event() to clear cpuctx->cgrp
      when there are no cgroup events left in the context, as
      sketched below.
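      A hedged sketch of the shape of the fix in list_del_event()
      (simplified; not the verbatim diff):

              if (is_cgroup_event(event)) {
                      ctx->nr_cgroups--;
                      cpuctx = __get_cpu_context(ctx);
                      /* no cgroup events left: drop the cached pointer so
                       * update_cgrp_time_from_cpuctx() cannot see it stale */
                      if (!ctx->nr_cgroups)
                              cpuctx->cgrp = NULL;
              }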
      
      [ This second version makes the code compile when
        CONFIG_CGROUP_PERF is not enabled. We unconditionally define
        perf_cpu_context->cgrp. ]
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Cc: peterz@infradead.org
      Cc: perfmon2-devel@lists.sf.net
      Cc: paulus@samba.org
      Cc: davem@davemloft.net
      LKML-Reference: <20110323150306.GA1580@quad>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  21. 16 Mar 2011, 1 commit