1. 05 Jan, 2011 (3 commits)
  2. 10 Nov, 2010 (2 commits)
  3. 29 Oct, 2010 (1 commit)
  4. 27 May, 2010 (1 commit)
  5. 30 Mar, 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h, and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        include gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order conforms
        to its surroundings.  It is put in the include block that contains
        core kernel includes, in the same order as the rest: alphabetical,
        Christmas tree, reverse-Christmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
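
      In a typical .c file, the resulting change is just an explicit
      include.  A minimal sketch (the file contents and function name here
      are illustrative, not taken from the patch):

        #include <linux/gfp.h>    /* needed if gfp flags are used directly */
        #include <linux/slab.h>   /* was implicitly pulled in via percpu.h */

        static void *example_alloc(size_t len)
        {
                /* kmalloc() and GFP_KERNEL now resolve via explicit
                 * includes instead of percpu.h -> slab.h -> gfp.h */
                return kmalloc(len, GFP_KERNEL);
        }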
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed,
         e.g. lib/decompress_*.c used malloc()/free() wrappers around slab
         APIs, requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the build tests in
      step 7, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  6. 22 Jun, 2009 (1 commit)
  7. 12 Jun, 2009 (1 commit)
  8. 07 Jan, 2009 (1 commit)
  9. 06 Jan, 2009 (1 commit)
  10. 28 Jul, 2008 (1 commit)
  11. 26 Jul, 2008 (1 commit)
    • kprobes: improve kretprobe scalability with hashed locking · ef53d9c5
      Authored by Srinivasa D S
      Currently, kretprobe instances are stored in the kretprobe object (as
      used_instances, free_instances) and in the kretprobe hash table.  We
      have one global kretprobe lock to serialise access to these lists,
      which allows only one kretprobe handler to execute at a time.  This
      hurts system performance, particularly on SMP systems and when return
      probes are set on a lot of functions (like on all system calls).

      The solution proposed here uses fine-grained locks that perform
      better on SMP systems than the present kretprobe implementation.
      
      Solution:
      
       1) Instead of having one global lock to protect the kretprobe
          instances present in the kretprobe object and the kretprobe hash
          table, we will have two locks: one lock protecting the kretprobe
          hash table and another lock for the kretprobe object (sketched
          below).

       2) We hold the lock in the kretprobe object while we modify a
          kretprobe instance in the kretprobe object, and we hold the
          per-hash-list lock while modifying kretprobe instances present in
          that hash list.  To prevent deadlock, we never grab a
          per-hash-list lock while holding a kretprobe lock.

       3) We can remove used_instances from struct kretprobe, as we can
          track used kretprobe instances via the kretprobe hash table.
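
      A minimal sketch of the hashed-locking idea (the names, hash key, and
      table size here are illustrative, not necessarily those of the
      patch):

        #include <linux/hash.h>
        #include <linux/sched.h>
        #include <linux/spinlock.h>

        #define KRETPROBE_TABLE_BITS 6
        #define KRETPROBE_TABLE_SIZE (1 << KRETPROBE_TABLE_BITS)

        /* one lock per hash bucket instead of one global kretprobe lock */
        static spinlock_t kretprobe_table_locks[KRETPROBE_TABLE_SIZE];

        static spinlock_t *kretprobe_bucket_lock(struct task_struct *tsk)
        {
                /* instances are hashed by the probed task, so handlers
                 * running on different tasks contend on different locks */
                return &kretprobe_table_locks[hash_ptr(tsk, KRETPROBE_TABLE_BITS)];
        }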
      
      Time duration for kernel compilation ("make -j 8") on an 8-way ppc64
      system with return probes set on all system calls looks like this.
      
                cacheline-aligned  non-cacheline-aligned  unpatched
                patch              patch                  kernel
      ==================================================================
      real      9m46.784s          9m54.412s              10m2.450s
      user      40m5.715s          40m7.142s              40m4.273s
      sys       2m57.754s          2m58.583s              3m17.430s
      ==================================================================
      
      Time duration for kernel compilation ("make -j 8") on the same
      system, when the kernel is not probed.

      =========================
      real    9m26.389s
      user    40m8.775s
      sys     2m7.283s
      =========================
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 14 Jul, 2008 (1 commit)
  13. 17 Apr, 2008 (1 commit)
  14. 17 Oct, 2007 (1 commit)
  15. 22 Aug, 2007 (1 commit)
  16. 21 May, 2007 (1 commit)
  17. 09 May, 2007 (3 commits)
    • Kprobes: The ON/OFF knob thru debugfs · bf8f6e5b
      Authored by Ananth N Mavinakayanahalli
      This patch provides a debugfs knob to turn kprobes on/off.
      
      o A new file /debug/kprobes/enabled indicates whether kprobes is
        enabled (default: enabled)
      o Echoing 0 to this file will disarm all installed probes
      o Any new probe registration when disabled will register the probe but
        not arm it; a message is printed in such a case
      o When a value of 1 is echoed to the file, all probes (including ones
        registered in the intervening period) will be enabled
      o Unregistration will happen irrespective of whether probes are
        globally enabled or not
      o Update Documentation/kprobes.txt to reflect these changes; while
        there, also update the doc to make it current
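
      A minimal sketch of how such a debugfs knob can be wired up (the
      names and the disarm/re-arm hooks are illustrative, not the literal
      code of this patch):

        #include <linux/debugfs.h>
        #include <linux/fs.h>
        #include <linux/uaccess.h>

        static bool kprobes_enabled = true;

        static ssize_t enabled_write(struct file *file, const char __user *buf,
                                     size_t count, loff_t *ppos)
        {
                char c;

                if (!count)
                        return 0;
                if (get_user(c, buf))
                        return -EFAULT;
                if (c == '0')
                        kprobes_enabled = false;  /* disarm all probes here */
                else if (c == '1')
                        kprobes_enabled = true;   /* re-arm all probes here */
                return count;
        }

        static const struct file_operations enabled_fops = {
                .write = enabled_write,
        };

        /* creates the "enabled" file under the kprobes debugfs directory */
        static int __init kprobes_debugfs_init(void)
        {
                struct dentry *dir = debugfs_create_dir("kprobes", NULL);

                debugfs_create_file("enabled", 0600, dir, NULL, &enabled_fops);
                return 0;
        }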
      
      We are also looking at providing sysrq key support to tie to the disabling
      feature provided by this patch.
      
      [akpm@linux-foundation.org: Use bool like a bool!]
      [akpm@linux-foundation.org: add printk facility levels]
      [cornelia.huck@de.ibm.com: Add the missing arch_trampoline_kprobe() for s390]
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • kprobes: kretprobes simplifications · 4c4308cb
      Authored by Christoph Hellwig
       - consolidate duplicate code from all arch_prepare_kretprobe
         instances into common code
       - replace various odd helpers that use hlist_for_each_entry to get
         the first element of a list with either a hlist_for_each_entry_safe
         or an opencoded access to the first element in the caller (see the
         sketch below)
       - inline add_rp_inst into its only remaining caller
       - use kretprobe_inst_table_head instead of opencoding it
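
      For reference, a minimal sketch of an "opencoded access to the first
      element" of an hlist (the struct layout and member names here are
      illustrative):

        #include <linux/list.h>

        struct kretprobe_instance_example {
                struct hlist_node uflist;  /* illustrative member name */
        };

        static struct kretprobe_instance_example *
        first_instance(struct hlist_head *head)
        {
                if (hlist_empty(head))
                        return NULL;
                /* opencoded "first element": follow the head pointer and
                 * convert the embedded node back to its container */
                return hlist_entry(head->first,
                                   struct kretprobe_instance_example, uflist);
        }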
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • move die notifier handling to common code · 1eeb66a1
      Authored by Christoph Hellwig
      This patch moves the die notifier handling to common code.  Previously
      the various architectures had exactly the same code for it.  Note that
      the new code is compiled unconditionally; this should be understood as
      an appeal to the other architecture maintainers to implement support
      for it as well (aka sprinkling a notify_die or two in the proper
      places).
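
      A minimal sketch of a client of the now-common die notifier chain (a
      hypothetical handler, not code from this patch; register_die_notifier
      and the DIE_* events are the real interface):

        #include <linux/kdebug.h>
        #include <linux/kernel.h>
        #include <linux/notifier.h>

        static int my_die_handler(struct notifier_block *nb,
                                  unsigned long val, void *data)
        {
                /* val is a DIE_* event, data points to a struct die_args */
                if (val == DIE_OOPS)
                        printk(KERN_INFO "oops seen on the die chain\n");
                return NOTIFY_DONE;  /* let other handlers run */
        }

        static struct notifier_block my_die_nb = {
                .notifier_call = my_die_handler,
        };

        static int __init my_init(void)
        {
                return register_die_notifier(&my_die_nb);
        }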
      
      arm had a notify_die that did something totally different; I renamed
      it to arm_notify_die as part of the patch and made it static to the
      file in which it is declared and used.  avr32 used to pass slightly
      less information through this interface, and I brought it into line
      with the other architectures.
      
      [akpm@linux-foundation.org: build fix]
      [akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
      [bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Cc: <linux-arch@vger.kernel.org>
      Cc: Russell King <rmk@arm.linux.org.uk>
      Signed-off-by: Bryan Wu <bryan.wu@analog.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 05 May, 2007 (1 commit)
  19. 27 Mar, 2007 (1 commit)
    • [S390] kprobes: Align probe address. · b70842df
      Authored by David Wilder
      Running a probe on s390 with a probe address that is not 4-byte
      aligned results in a kernel BUG.  The problem is that the stura
      instruction used by swap_instruction requires the destination address
      to be 4-byte aligned.  As stura always writes 4 bytes, aligning
      forward to the next 4-byte aligned address would store the breakpoint
      instruction past the probe address.  The fix is to align the address
      backward (to the previous 4-byte aligned address) and write the
      two-byte breakpoint instruction into the appropriate bytes.
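
      The address arithmetic, as a minimal sketch (illustrative code, not
      the actual patch; it assumes a two-byte breakpoint opcode):

        #include <linux/string.h>

        static void place_breakpoint(unsigned long probe_addr,
                                     unsigned short bp_opcode)
        {
                unsigned long aligned = probe_addr & ~3UL;  /* previous 4-byte boundary */
                unsigned long offset  = probe_addr & 3UL;   /* 0 or 2 for 2-byte insns */
                unsigned char word[4];

                memcpy(word, (void *)aligned, 4);      /* read the aligned word */
                memcpy(word + offset, &bp_opcode, 2);  /* patch in the breakpoint */
                /* store the whole 4-byte word back via the stura-based
                 * swap_instruction path */
        }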
      
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Signed-off-by: David Wilder <dwilder@us.ibm.com>
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  20. 06 Mar, 2007 (1 commit)
    • [S390] kprobes breaks BUG_ON · f794c827
      Authored by Martin Schwidefsky
      The illegal operation handler calls the die notifier with DIE_BPT to
      let kprobes pick up its breakpoint.  If kprobes does not find its
      breakpoint, it returns NOTIFY_STOP instead of NOTIFY_DONE.  Since we
      use stop_machine_run on s390 to arm/disarm the kprobes breakpoints,
      the race that kprobe_handler tries to solve by checking for the
      kprobes breakpoints does not exist.  Removing the check makes BUG_ON
      work again.
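
      In notifier terms, the behavior boils down to the following (a hedged
      sketch, not the literal diff):

        #include <linux/kprobes.h>
        #include <linux/notifier.h>

        static int bpt_event_is_ours(unsigned long addr)
        {
                /* if the trapping address is not one of our breakpoints,
                 * pass the event on so that BUG_ON's illegal-op trap is
                 * handled normally instead of being swallowed */
                if (!get_kprobe((void *)addr))
                        return NOTIFY_DONE;  /* not ours: fall through to BUG */
                return NOTIFY_STOP;          /* ours: kprobe handler consumed it */
        }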
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
  21. 06 Feb, 2007 (3 commits)
  22. 08 Dec, 2006 (1 commit)
    • [PATCH] kprobes: enable booster on the preemptible kernel · b4c6c34a
      Authored by Masami Hiramatsu
      When we are unregistering a kprobe-booster, we can't release its
      instruction buffer immediately on a preemptible kernel, because some
      processes might be preempted on the buffer.  The freeze_processes()
      and thaw_processes() functions can clear most processes off the
      buffer, but there are still some non-frozen threads that have the
      PF_NOFREEZE flag.  If those threads are sleeping (not preempted) at a
      known place outside the buffer, we can ensure that freeing is safe.

      However, this check routine takes a long time.  So this patch
      introduces a garbage collection mechanism for insn_slot.  It also
      introduces a "dirty" flag to free_insn_slot for efficiency.
      
      The "clean" instruction slots (dirty flag is cleared) are released
      immediately.  But the "dirty" slots which are used by boosted kprobes, are
      marked as garbages.  collect_garbage_slots() will be invoked to release
      "dirty" slots if there are more than INSNS_PER_PAGE garbage slots or if
      there are no unused slots.
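
      A minimal sketch of the dirty-flag idea (simplified; the types,
      member names, and slot-count constant are illustrative, not the
      kernel's exact structures):

        #define INSNS_PER_PAGE 64  /* illustrative value */

        enum slot_state { SLOT_FREE, SLOT_CLEAN, SLOT_DIRTY };

        struct insn_page_example {
                enum slot_state slot_used[INSNS_PER_PAGE];
        };

        static int garbage_slots;  /* slots awaiting collection */

        static void collect_garbage_slots(void)
        {
                /* scan all pages, verify no task is still executing in a
                 * dirty slot, then mark those slots SLOT_FREE (elided) */
                garbage_slots = 0;
        }

        /* dirty means a boosted probe may still be executing in the slot */
        static void free_insn_slot_example(struct insn_page_example *page,
                                           int idx, int dirty)
        {
                if (!dirty) {
                        page->slot_used[idx] = SLOT_FREE;  /* reusable now */
                        return;
                }
                page->slot_used[idx] = SLOT_DIRTY;         /* defer release */
                if (++garbage_slots > INSNS_PER_PAGE)
                        collect_garbage_slots();
        }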
      
      Cc: "Keshavamurthy, Anil S" <anil.s.keshavamurthy@intel.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: "bibo,mao" <bibo.mao@intel.com>
      Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
      Cc: Yumiko Sugita <yumiko.sugita.yf@hitachi.com>
      Cc: Satoshi Oshima <soshima@redhat.com>
      Cc: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  23. 04 Oct, 2006 (1 commit)
  24. 02 Oct, 2006 (1 commit)
  25. 20 Sep, 2006 (1 commit)
  26. 17 Aug, 2006 (1 commit)
    • [POWERPC] kprobes: Fix possible system crash during out-of-line single-stepping · 83db3dde
      Authored by Ananth N Mavinakayanahalli
      - On archs that have no-exec support, we vmalloc() an executable
        scratch area of PAGE_SIZE and divide it up into an array of slots
        of the maximum instruction size for that arch
      - On kprobe registration, the original instruction is copied to the
        first available free slot; so if multiple kprobes are registered,
        chances are they get contiguous slots
      - On POWER4, which lacks coherent icaches, we could hit a situation
        where a probe that is registered on one processor is hit
        immediately on another.  This second processor could have fetched
        the stream of text from the out-of-line single-stepping area
        *before* the probe registration completed, possibly due to an
        earlier (and different) kprobe hit, and hence would see stale data
        in the slot.

      Executing such an arbitrary instruction led to a problem as reported
      in LTC bugzilla 23555.
      
      The correct solution is to call flush_icache_range() as soon as the
      instruction is copied for out-of-line single-stepping, so the correct
      instruction is seen on all processors.
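
      A minimal sketch of the fix (the helper is illustrative;
      flush_icache_range() is the real kernel API):

        #include <linux/kprobes.h>
        #include <linux/string.h>
        #include <asm/cacheflush.h>

        /* copy the probed instruction into its out-of-line slot and make
         * the copy visible to instruction fetches on all processors */
        static void copy_insn_to_slot(kprobe_opcode_t *slot,
                                      kprobe_opcode_t insn)
        {
                memcpy(slot, &insn, sizeof(insn));
                flush_icache_range((unsigned long)slot,
                                   (unsigned long)slot + sizeof(insn));
        }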
      
      Thanks to Will Schmidt who tracked this down.
      Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Acked-by: Will Schmidt <will_schmidt@vnet.ibm.com>
      Signed-off-by: Paul Mackerras <paulus@samba.org>
  27. 01 Jul, 2006 (1 commit)
  28. 03 May, 2006 (1 commit)
  29. 20 Apr, 2006 (1 commit)
  30. 27 Mar, 2006 (2 commits)
  31. 23 Mar, 2006 (1 commit)
  32. 10 Feb, 2006 (1 commit)