1. 08 Jan, 2009 4 commits
  2. 30 Dec, 2008 2 commits
  3. 29 Dec, 2008 1 commit
  4. 10 Dec, 2008 6 commits
    • oprofile: port to the new ring_buffer · 6dad828b
      Robert Richter authored
      This patch replaces the current oprofile cpu buffer implementation
      with the ring buffer provided by the tracing framework. The motivation
      here is to leave the pain of implementing ring buffers to others,
      though there are more advantages than that. The main one is support
      for different sample sizes stored in the buffer; use cases for this
      are IBS and Cell spu profiling. Using the new ring buffer ensures
      valid and complete samples and allows the cpu buffer to be copied
      statelessly, without knowing its content. Second, it uses a generic
      kernel API and also reduces code size. And hopefully, there are
      fewer bugs.
      
      Since the new tracing ring buffer implementation uses spin locks to
      protect the buffer during read/write access, it is difficult to use
      the buffer in an NMI handler. In this case, writes to the buffer by
      the NMI handler (x86) could also occur during critical sections when
      reading the buffer. To avoid this, there are two buffers with
      independent read and write access: read access happens in process
      context only, write access only in the NMI handler. If the read
      buffer runs empty, the two buffers are swapped atomically. There is
      potentially a small window during swapping where the buffers are
      disabled and samples could be lost.

      Using two buffers adds a little overhead, but the solution is clear
      and does not require changes in the ring buffer implementation. It
      can be reduced to a single-buffer solution once ring buffer access
      is implemented as non-locking atomic code.
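
      To make the scheme concrete, here is a minimal userspace C model of
      the two-buffer swap, assuming invented names (sample_buf, write_idx,
      BUF_SLOTS) and a C11-atomics framing; it is a sketch of the idea
      above, not the actual oprofile code:

      	#include <stdatomic.h>
      	#include <stddef.h>

      	#define BUF_SLOTS 1024

      	struct sample_buf {
      		unsigned long data[BUF_SLOTS];
      		atomic_size_t head;	/* next free slot, bumped by the writer */
      		size_t tail;		/* next slot to read, reader-private */
      	};

      	static struct sample_buf bufs[2];
      	static atomic_int write_idx;	/* buffer the writer appends to */

      	/* Modelled NMI handler: lock-free append, never blocks. */
      	static void nmi_write_sample(unsigned long sample)
      	{
      		struct sample_buf *b = &bufs[atomic_load(&write_idx)];
      		size_t slot = atomic_fetch_add(&b->head, 1);

      		if (slot < BUF_SLOTS)
      			b->data[slot] = sample;
      		/* else: dropped; mirrors the "samples could be lost" window */
      	}

      	/* Process context: swap buffer roles once the read side is drained. */
      	static struct sample_buf *reader_get_buffer(void)
      	{
      		int ridx = atomic_load(&write_idx) ^ 1;
      		struct sample_buf *rb = &bufs[ridx];

      		if (rb->tail >= atomic_load(&rb->head)) {
      			rb->head = 0;	/* recycle the drained buffer */
      			rb->tail = 0;
      			atomic_store(&write_idx, ridx);	/* atomic role swap */
      			rb = &bufs[ridx ^ 1];
      		}
      		return rb;
      	}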
      
      The new buffer requires more space to store the same number of
      samples, because each sample includes a u32 header. There is also
      more code to execute for buffer access. Nonetheless, the buffer
      implementation is proven in the ftrace environment and worth using
      in oprofile as well.

      Patches that change the internal IBS buffer usage will follow.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: moving cpu_buffer_reset() to cpu_buffer.h · fbc9bf9f
      Robert Richter authored
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: adding cpu_buffer_entries() · bf589e32
      Robert Richter authored
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: adding cpu buffer r/w access functions · 7d468abe
      Robert Richter authored
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • fd7826d5
    • oprofile: fix typo · 8dbc50c3
      Robert Richter authored
      Signed-off-by: Robert Richter <robert.richter@amd.com>
  5. 21 Oct, 2008 1 commit
    • powerpc/oprofile: Fix mutex locking for cell spu-oprofile · a5598ca0
      Carl Love authored
      The issue is that the SPU code does not hold the kernel mutex lock
      while adding samples to the kernel buffer.
      
      This patch creates per-SPU buffers to hold the data. Data is added
      to the buffers in interrupt context. The data is periodically pushed
      to the kernel buffer via a new OProfile function, oprofile_put_buff(),
      which is called via a work queue, enabling the function to run in
      process context and acquire the mutex lock (see the sketch below).
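
      The deferral pattern involved is sketched here with hypothetical
      names (spu_flush_work, flush_spu_buffers, spu_sample_interrupt,
      buffer_mutex); a minimal model of interrupt-to-workqueue handoff,
      not the actual cell spu-oprofile code:

      	#include <linux/init.h>
      	#include <linux/mutex.h>
      	#include <linux/workqueue.h>

      	static DEFINE_MUTEX(buffer_mutex);
      	static struct work_struct spu_flush_work;

      	/* Runs in process context: may sleep, so the mutex is safe. */
      	static void flush_spu_buffers(struct work_struct *work)
      	{
      		mutex_lock(&buffer_mutex);
      		/* ... copy per-SPU buffers into the kernel buffer ... */
      		mutex_unlock(&buffer_mutex);
      	}

      	/* Runs in interrupt context: must not sleep, so only queue work. */
      	static void spu_sample_interrupt(void)
      	{
      		/* ... append the sample to a per-SPU buffer ... */
      		schedule_work(&spu_flush_work);
      	}

      	static int __init spu_prof_init(void)
      	{
      		INIT_WORK(&spu_flush_work, flush_spu_buffers);
      		return 0;
      	}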
      
      The existing user controls for adjusting the per-CPU buffer size
      are used to control the size of the per-SPU buffers. Similarly,
      overflows of the SPU buffers are reported by incrementing the
      per-CPU buffer stats. This eliminates the need for
      architecture-specific controls for the per-SPU buffers, which would
      not be acceptable to the OProfile user tool maintainer.
      
      The export of the oprofile add_event_entry() function is removed,
      as it is no longer needed given this patch.
      
      Note, this patch does not address the issue of indexing arrays by
      the spu number. This still needs to be fixed, as the spu numbering
      is not guaranteed to be 0 to max_num_spus-1.
      Signed-off-by: Carl Love <carll@us.ibm.com>
      Signed-off-by: Maynard Johnson <maynardj@us.ibm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Robert Richter <robert.richter@amd.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  6. 20 Oct, 2008 1 commit
  7. 16 Oct, 2008 2 commits
  8. 26 Jul, 2008 4 commits
  9. 28 Apr, 2008 1 commit
  10. 15 Feb, 2008 1 commit
  11. 21 Jul, 2007 1 commit
  12. 22 May, 2007 1 commit
    • Detach sched.h from mm.h · e8edc6e0
      Alexey Dobriyan authored
      The first thing mm.h does is include sched.h, solely for the
      can_do_mlock() inline function, which dereferences "current". By
      dealing with can_do_mlock(), mm.h can be detached from sched.h,
      which is good. See below for why.
      
      This patch
      a) removes the unconditional inclusion of sched.h from mm.h,
      b) makes can_do_mlock() a normal function in mm/mlock.c (sketched
         below),
      c) exports can_do_mlock() so as not to break compilation,
      d) adds sched.h inclusions back to files that were getting it
         indirectly, and
      e) adds less bloated headers (asm/signal.h, jiffies.h) to some files
         that were getting them indirectly.
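
      For steps (b) and (c), the moved function looks roughly like the
      following in mm/mlock.c; a simplified sketch consistent with the
      description above, not necessarily the exact body:

      	#include <linux/capability.h>
      	#include <linux/module.h>
      	#include <linux/sched.h>

      	/* mm.h now only needs: extern int can_do_mlock(void); */
      	int can_do_mlock(void)
      	{
      		if (capable(CAP_IPC_LOCK))
      			return 1;
      		if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
      			return 1;
      		return 0;
      	}
      	/* Exported so modular users keep compiling (step c). */
      	EXPORT_SYMBOL(can_do_mlock);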
      
      The net result is:
      a) mm.h users get less code to open, read, preprocess, and parse if
         they don't need sched.h, and
      b) sched.h stops being a dependency for a significant number of
         files: on x86_64 allmodconfig, touching sched.h results in a
         recompile of 4083 files; after this patch it's only 3744 (-8.3%).
      
      Cross-compile tested on
      
      	all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
      	alpha alpha-up
      	arm
      	i386 i386-up i386-defconfig i386-allnoconfig
      	ia64 ia64-up
      	m68k
      	mips
      	parisc parisc-up
      	powerpc powerpc-up
      	s390 s390-up
      	sparc sparc-up
      	sparc64 sparc64-up
      	um-x86_64
      	x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig
      
      as well as my two usual configs.
      Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  13. 09 Dec, 2006 1 commit
  14. 26 Jun, 2006 1 commit
  15. 09 Jan, 2006 1 commit
    • [PATCH] Make RCU task_struct safe for oprofile · 4369ef3c
      Paul E. McKenney authored
      Applying RCU to the task structure broke oprofile, because
      free_task_notify() can now be called from softirq context. This
      means that the task_mortuary lock must be acquired with irqs
      disabled in order to avoid intermittent self-deadlock. Since irqs
      are now disabled, the critical section within
      process_task_mortuary() has been restructured to be O(1) in order
      to maximize scalability and minimize realtime latency degradation.
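
      The resulting locking rule, in a minimal sketch with hypothetical
      function names (not the exact oprofile code):

      	#include <linux/spinlock.h>

      	static DEFINE_SPINLOCK(task_mortuary);

      	/*
      	 * Process context: disable irqs so a softirq on this CPU
      	 * cannot interrupt us and try to re-take the held lock.
      	 */
      	static void process_context_user(void)
      	{
      		unsigned long flags;

      		spin_lock_irqsave(&task_mortuary, flags);
      		/* ... O(1) list splicing of dead tasks ... */
      		spin_unlock_irqrestore(&task_mortuary, flags);
      	}

      	/* Softirq context: a plain spin_lock() is sufficient here. */
      	static void softirq_user(void)
      	{
      		spin_lock(&task_mortuary);
      		/* ... add the freed task to the mortuary list ... */
      		spin_unlock(&task_mortuary);
      	}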
      
      Kudos to Wu Fengguang for finding this problem!
      
      Cc: Wu Fengguang <wfg@mail.ustc.edu.cn>
      Cc: Philippe Elie <phil.el@wanadoo.fr>
      Cc: John Levon <levon@movementarian.org>
      Signed-off-by: "Paul E. McKenney" <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  16. 24 Jun, 2005 1 commit
  17. 22 Jun, 2005 1 commit
    • [PATCH] smp_processor_id() cleanup · 39c715b7
      Ingo Molnar authored
      This patch implements a number of smp_processor_id() cleanup ideas that
      Arjan van de Ven and I came up with.
      
      The previous __smp_processor_id/_smp_processor_id/smp_processor_id
      API spaghetti was hard to follow, both on the implementation side
      and on the usage side.

      Some of the complexity arose from poorly chosen names, and some
      comes from the fact that not all architectures defined
      __smp_processor_id.
      
      In the new code, there are two externally visible symbols:
      
       - smp_processor_id(): debug variant.
      
       - raw_smp_processor_id(): nondebug variant. Replaces all existing
         uses of _smp_processor_id() and __smp_processor_id(). Defined
         by every SMP architecture in include/asm-*/smp.h.
      
      There is one new internal symbol, dependent on DEBUG_PREEMPT:
      
       - debug_smp_processor_id(): internal debug variant, mapped to
                                   smp_processor_id().
      
      Also, I moved debug_smp_processor_id() from lib/kernel_lock.c into
      a new lib/smp_processor_id.c file. All related comments got updated
      and/or clarified.
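
      In condensed form, the API shape described above looks like this;
      a sketch distilled from the description, not a verbatim copy of the
      kernel sources:

      	#include <linux/kernel.h>
      	#include <linux/preempt.h>
      	#include <linux/smp.h>

      	#ifdef CONFIG_DEBUG_PREEMPT
      	# define smp_processor_id() debug_smp_processor_id()
      	#else
      	# define smp_processor_id() raw_smp_processor_id()
      	#endif

      	/* lib/smp_processor_id.c: warn if the caller could migrate. */
      	unsigned int debug_smp_processor_id(void)
      	{
      		unsigned long preempt_count = preempt_count();
      		int this_cpu = raw_smp_processor_id();

      		if (likely(preempt_count))	/* preemption disabled */
      			goto out;
      		if (irqs_disabled())		/* irqs off: cannot migrate */
      			goto out;
      		printk(KERN_ERR
      		       "BUG: using smp_processor_id() in preemptible code\n");
      		dump_stack();
      	out:
      		return this_cpu;
      	}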
      
      I have build/boot tested the following 8 .config combinations on x86:
      
       {SMP,UP} x {PREEMPT,!PREEMPT} x {DEBUG_PREEMPT,!DEBUG_PREEMPT}
      
      I have also build/boot tested x64 on UP/PREEMPT/DEBUG_PREEMPT.  (Other
      architectures are untested, but should work just fine.)
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Arjan van de Ven <arjan@infradead.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  18. 17 Apr, 2005 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Linus Torvalds authored
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!