1. 08 Jan, 2009 (5 commits)
  2. 30 Dec, 2008 (4 commits)
  3. 29 Dec, 2008 (2 commits)
  4. 17 Dec, 2008 (1 commit)
  5. 11 Dec, 2008 (2 commits)
    • oprofile: fix lost sample counter · 211117ff
      Committed by Robert Richter
      The number of lost samples could be greater than the number of
      received samples. This patch fixes that by introducing return
      values for add_sample() and add_code().
      Signed-off-by: Robert Richter <robert.richter@amd.com>
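      The return-value scheme described in this commit can be sketched in
      plain C. This is a minimal userspace model, not the actual oprofile
      code; the buffer layout and names are illustrative only.

```c
/* Minimal sketch: add_sample() reports success or failure, so the caller
 * increments the lost-sample counter only when an insert really failed.
 * This keeps the lost count from ever exceeding the number of received
 * samples. */
#include <assert.h>
#include <stddef.h>

#define BUF_SLOTS 4

struct cpu_buffer {
    unsigned long samples[BUF_SLOTS];
    size_t head;                /* next free slot */
    unsigned long sample_lost;  /* bumped only on failed adds */
};

/* Returns 1 on success, 0 if the buffer is full and the sample is dropped. */
static int add_sample(struct cpu_buffer *b, unsigned long pc)
{
    if (b->head >= BUF_SLOTS)
        return 0;
    b->samples[b->head++] = pc;
    return 1;
}

static void log_sample(struct cpu_buffer *b, unsigned long pc)
{
    if (!add_sample(b, pc))
        b->sample_lost++;
}
```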
    • oprofile: remove nr_available_slots() · 1d7503b5
      Committed by Robert Richter
      This function is no longer usable after the port to the new ring
      buffer. Its removal can lead to incomplete sampling sequences, since
      IBS samples and backtraces are transferred in multiple samples and,
      with a full buffer, samples may be lost at any time. The userspace
      daemon has to live with such incomplete sampling sequences as long
      as the data within one sample is consistent.

      This will be fixed by changing the internal buffer handling so that
      all data of one IBS sample or backtrace is packed into a single ring
      buffer entry. This is possible since the new ring buffer supports
      variable data sizes.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
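      The planned fix mentioned above, packing all data of one IBS sample
      into a single variable-size ring buffer entry, can be illustrated
      roughly as follows. The layout and names are assumptions for the
      sketch, not the real implementation.

```c
/* Rough sketch of variable-size entries: each entry is a u32 length header
 * followed by the payload, and a write stores either the whole sample or
 * nothing, so a full buffer can only drop complete samples, never the tail
 * of a multi-part sequence. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RB_SIZE 64

struct ring {
    uint8_t data[RB_SIZE];
    uint32_t tail;              /* write position in bytes */
};

/* Write one complete entry or nothing at all; returns 1 on success. */
static int rb_write(struct ring *rb, const void *payload, uint32_t len)
{
    if (rb->tail + sizeof(uint32_t) + len > RB_SIZE)
        return 0;               /* whole sample dropped, never split */
    memcpy(rb->data + rb->tail, &len, sizeof(uint32_t));
    memcpy(rb->data + rb->tail + sizeof(uint32_t), payload, len);
    rb->tail += sizeof(uint32_t) + len;
    return 1;
}
```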
  6. 10 Dec, 2008 (6 commits)
    • oprofile: port to the new ring_buffer · 6dad828b
      Committed by Robert Richter
      This patch replaces the current oprofile cpu buffer implementation
      with the ring buffer provided by the tracing framework. The
      motivation is to leave the pain of implementing ring buffers to
      others, but there are more advantages. The main reason is support
      for different sample sizes that can be stored in the buffer; use
      cases for this are IBS and Cell spu profiling. Using the new ring
      buffer ensures valid and complete samples and allows copying the
      cpu buffer statelessly without knowing its content. Second, it uses
      a generic kernel API and reduces code size. And hopefully, there
      are fewer bugs.
      
      Since the new tracing ring buffer implementation uses spin locks to
      protect the buffer during read/write access, it is difficult to use
      the buffer from an NMI handler. In this case, writing to the buffer
      by the NMI handler (x86) could also occur during critical sections
      when reading the buffer. To avoid this, there are two buffers for
      independent read and write access. Read access happens in process
      context only, write access only in the NMI handler. If the read
      buffer runs empty, both buffers are swapped atomically. There is
      potentially a small window during swapping where the buffers are
      disabled and samples could be lost.
      
      Using two buffers adds a little overhead, but the solution is clear
      and does not require changes in the ring buffer implementation. It
      can be changed to a single-buffer solution once the ring buffer
      access is implemented as non-locking atomic code.
      
      The new buffer requires more space to store the same number of
      samples, because each sample includes a u32 header, and there is
      more code to execute for buffer access. Nonetheless, the buffer
      implementation is proven in the ftrace environment and worth using
      in oprofile as well.
      
      Patches that change the internal IBS buffer usage will follow.
      
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Signed-off-by: Robert Richter <robert.richter@amd.com>
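      The two-buffer scheme described above can be modeled in userspace C.
      This is an illustrative sketch under simplifying assumptions: the
      swap here is a plain pointer exchange in a single thread, whereas
      the real code must perform it atomically with the buffers briefly
      disabled.

```c
/* Illustrative model: the NMI writer only ever touches write_buf, the
 * process-context reader only touches read_buf, and the reader swaps the
 * two when it runs empty. The reader pops newest-first for brevity; the
 * real cpu buffer is FIFO. */
#include <assert.h>
#include <stddef.h>

#define SLOTS 8

struct buf {
    unsigned long entry[SLOTS];
    size_t count;
};

static struct buf a, b;
static struct buf *write_buf = &a;   /* NMI side */
static struct buf *read_buf  = &b;   /* process-context side */

static int nmi_write(unsigned long v)
{
    if (write_buf->count >= SLOTS)
        return 0;                    /* buffer full, sample lost */
    write_buf->entry[write_buf->count++] = v;
    return 1;
}

static int reader_read(unsigned long *v)
{
    if (read_buf->count == 0) {
        /* read buffer ran empty: swap roles (atomic in the real code) */
        struct buf *t = read_buf;
        read_buf = write_buf;
        write_buf = t;
        if (read_buf->count == 0)
            return 0;                /* both buffers empty */
    }
    *v = read_buf->entry[--read_buf->count];
    return 1;
}
```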
    • oprofile: moving cpu_buffer_reset() to cpu_buffer.h · fbc9bf9f
      Committed by Robert Richter
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: adding cpu_buffer_write_commit() · 229234ae
      Committed by Robert Richter
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: adding cpu buffer r/w access functions · 7d468abe
      Committed by Robert Richter
      This is in preparation for changes in the cpu buffer implementation.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: whitespace changes only · cdc1834d
      Committed by Robert Richter
      Signed-off-by: Robert Richter <robert.richter@amd.com>
    • oprofile: comment cleanup · fd13f6c8
      Committed by Robert Richter
      This fixes the coding style of some comments.
      Signed-off-by: Robert Richter <robert.richter@amd.com>
  7. 21 Oct, 2008 (1 commit)
    • powerpc/oprofile: Fix mutex locking for cell spu-oprofile · a5598ca0
      Committed by Carl Love
      The issue is that the SPU code does not hold the kernel mutex lock
      while adding samples to the kernel buffer.
      
      This patch creates per-SPU buffers to hold the data. Data is added
      to the buffers in interrupt context. The data is periodically pushed
      to the kernel buffer via a new OProfile function,
      oprofile_put_buff(), which is called via a work queue, enabling the
      function to acquire the mutex lock.
      
      The existing user controls for adjusting the per-CPU buffer size
      are used to control the size of the per-SPU buffers. Similarly,
      overflows of the SPU buffers are reported by incrementing the
      per-CPU buffer stats. This eliminates the need for
      architecture-specific controls for the per-SPU buffers, which would
      not be acceptable to the OProfile user tool maintainer.
      
      The export of the oprofile add_event_entry() is removed as it
      is no longer needed given this patch.
      
      Note, this patch has not addressed the issue of indexing arrays by
      the spu number. This still needs to be fixed, as the spu numbering
      is not guaranteed to be 0 to max_num_spus-1.
      Signed-off-by: Carl Love <carll@us.ibm.com>
      Signed-off-by: Maynard Johnson <maynardj@us.ibm.com>
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Robert Richter <robert.richter@amd.com>
      Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
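      The pattern described in this commit, filling a buffer in interrupt
      context and draining it under a mutex from deferred context, can be
      sketched in userspace C. All names are illustrative, and the mutex
      is a stand-in, not the kernel's real buffer_mutex.

```c
/* Hedged sketch: interrupt context appends to a per-SPU buffer without
 * taking the mutex (it may not sleep); a deferred work function, standing
 * in for oprofile_put_buff() run from a work queue, takes the mutex and
 * drains the per-SPU buffer into the shared kernel buffer. */
#include <assert.h>
#include <stddef.h>

#define SPU_BUF 16
#define KERN_BUF 64

static unsigned long spu_buf[SPU_BUF];
static size_t spu_len;

static unsigned long kernel_buf[KERN_BUF];
static size_t kernel_len;

static int buffer_mutex;  /* stand-in for the kernel's mutex */

static void mutex_lock(int *m)   { *m = 1; }  /* may sleep in the kernel */
static void mutex_unlock(int *m) { *m = 0; }

/* Interrupt context: may not sleep, so it never takes the mutex. */
static void spu_irq_add(unsigned long sample)
{
    if (spu_len < SPU_BUF)
        spu_buf[spu_len++] = sample;
    /* a real implementation would bump an overflow stat here otherwise */
}

/* Work-queue context: may sleep, so it can hold the mutex while copying. */
static void put_buff_work(void)
{
    mutex_lock(&buffer_mutex);
    for (size_t i = 0; i < spu_len && kernel_len < KERN_BUF; i++)
        kernel_buf[kernel_len++] = spu_buf[i];
    spu_len = 0;
    mutex_unlock(&buffer_mutex);
}
```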
  8. 17 Oct, 2008 (1 commit)
  9. 16 Oct, 2008 (3 commits)
  10. 26 Aug, 2008 (1 commit)
  11. 26 Jul, 2008 (3 commits)
  12. 15 May, 2008 (1 commit)
  13. 28 Apr, 2008 (1 commit)
  14. 15 Nov, 2007 (1 commit)
  15. 22 Nov, 2006 (1 commit)
  16. 29 Mar, 2006 (1 commit)
  17. 23 Mar, 2006 (1 commit)
    • [PATCH] more for_each_cpu() conversions · 394e3902
      Committed by Andrew Morton
      When we stop allocating percpu memory for not-possible CPUs, we
      must not touch the percpu data for not-possible CPUs at all. The
      correct way of doing this is to test cpu_possible() or to use
      for_each_cpu().

      This patch is a kernel-wide sweep of all instances of NR_CPUS. I
      found very few instances of this bug, if any, but the patch
      converts lots of open-coded tests to use the preferred helper
      macros.
      
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: David Howells <dhowells@redhat.com>
      Acked-by: Kyle McMartin <kyle@parisc-linux.org>
      Cc: Anton Blanchard <anton@samba.org>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Paul Mundt <lethal@linux-sh.org>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: William Lee Irwin III <wli@holomorphy.com>
      Cc: Andi Kleen <ak@muc.de>
      Cc: Christian Zankel <chris@zankel.net>
      Cc: Philippe Elie <phil.el@wanadoo.fr>
      Cc: Nathan Scott <nathans@sgi.com>
      Cc: Jens Axboe <axboe@suse.de>
      Cc: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
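      The conversion described above can be illustrated with a small
      userspace model. Here cpu_possible() is mocked with a bitmask, and
      the for_each_possible_cpu() macro is a simplified stand-in for the
      kernel's helper, not its real definition.

```c
/* Old, buggy pattern:
 *     for (cpu = 0; cpu < NR_CPUS; cpu++)
 *         touch(percpu_data[cpu]);   // touches not-possible CPUs too
 * Preferred pattern: iterate only over possible CPUs. */
#include <assert.h>

#define NR_CPUS 8

static unsigned int possible_mask = 0x0b;  /* CPUs 0, 1 and 3 possible */

static int cpu_possible(int cpu)
{
    return (possible_mask >> cpu) & 1;
}

/* Simplified stand-in for the kernel macro: skips not-possible CPUs. */
#define for_each_possible_cpu(cpu) \
    for ((cpu) = 0; (cpu) < NR_CPUS; (cpu)++) \
        if (!cpu_possible(cpu)) continue; else

static int count_possible(void)
{
    int cpu, n = 0;
    for_each_possible_cpu(cpu)
        n++;
    return n;
}
```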
  18. 09 Jan, 2006 (1 commit)
  19. 28 Jul, 2005 (1 commit)
  20. 17 Apr, 2005 (1 commit)
    • Linux-2.6.12-rc2 · 1da177e4
      Committed by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!