1. 16 October 2008, 2 commits
  2. 14 October 2008, 1 commit
  3. 11 October 2008, 8 commits
  4. 04 October 2008, 1 commit
    • [S390] nohz: Fix __udelay. · d3d238c7
      Committed by Heiko Carstens
      This fixes a regression that came with 934b2857
      ("[S390] nohz/sclp: disable timer on synchronous waits.").
      If udelay() gets called from a disabled context, it sets the clock comparator
      to the value at which it expects the next interrupt. When that interrupt
      happens, the clock comparator does not get reset, so the interrupt condition
      never gets cleared. The result is an endless timer interrupt loop.
      
      In addition, this patch also fixes the following:
      
      rcutorture reveals that our __udelay implementation is still buggy,
      since it might schedule tasklets but then prevents their execution:
      
      NOHZ: local_softirq_pending 42
      NOHZ: local_softirq_pending 02
      NOHZ: local_softirq_pending 142
      NOHZ: local_softirq_pending 02
      
      To fix this, we make sure that only the clock comparator interrupt
      is enabled when the enabled wait PSW is loaded. In addition, no code
      that might schedule tasklets gets called anymore.
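
      A minimal sketch of that flow, reconstructed from the description above
      using the usual s390 helpers (set_clock_comparator(), __ctl_store()/
      __ctl_load()); load_enabled_wait_psw() and the CR0 bit masks are
      illustrative placeholders, not the actual arch/s390 code:

          /* Sketch: delay while interrupts are disabled on a NOHZ CPU. */
          static void udelay_disabled_sketch(unsigned long usecs)
          {
                  unsigned long cr0, cr0_saved;

                  /* Arm the clock comparator; 1 us == 4096 TOD clock units. */
                  set_clock_comparator(get_clock() + ((u64) usecs << 12));

                  /* Allow only the clock comparator external interrupt.
                   * CR0_ALL_EXT_SUBMASKS / CR0_CLOCK_COMPARATOR are placeholders. */
                  __ctl_store(cr0_saved, 0, 0);
                  cr0 = (cr0_saved & ~CR0_ALL_EXT_SUBMASKS) | CR0_CLOCK_COMPARATOR;
                  __ctl_load(cr0, 0, 0);

                  /* Wait with an enabled wait PSW; nothing else can wake us,
                   * and no code that might schedule tasklets runs here. */
                  load_enabled_wait_psw();

                  /* Restore the old subclass mask and reset the clock
                   * comparator so the interrupt condition is cleared. */
                  __ctl_load(cr0_saved, 0, 0);
                  set_clock_comparator(S390_lowcore.clock_comparator);
          }
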
      Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
      d3d238c7
  5. 09 September 2008, 2 commits
  6. 07 September 2008, 1 commit
  7. 26 August 2008, 1 commit
  8. 22 August 2008, 4 commits
  9. 15 August 2008, 1 commit
  10. 02 August 2008, 1 commit
  11. 01 August 2008, 6 commits
  12. 31 July 2008, 1 commit
  13. 28 July 2008, 1 commit
  14. 27 July 2008, 6 commits
  15. 26 July 2008, 2 commits
    • S390 topology: don't use kthread() for arch_reinit_sched_domains() · 69b895fd
      Committed by Oleg Nesterov
      Now that it is safe to use get_online_cpus(), we can revert
      
      	[S390] cpu topology: Fix possible deadlock.
      	commit: fd781fa2
      
      and call arch_reinit_sched_domains() directly from topology_work_fn().
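
      A hedged sketch of the resulting shape, reconstructed from this description
      rather than copied from arch/s390/kernel/topology.c:

          /* Workqueue callback: rebuild the scheduler domains directly. */
          static void topology_work_fn(struct work_struct *work)
          {
                  /*
                   * get_online_cpus() is safe from workqueue context now, so the
                   * kthread() indirection added by fd781fa2 is no longer needed.
                   */
                  arch_reinit_sched_domains();
          }
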
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Cc: Gautham R Shenoy <ego@in.ibm.com>
      Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Max Krasnyansky <maxk@qualcomm.com>
      Cc: Paul Jackson <pj@sgi.com>
      Cc: Paul Menage <menage@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Vegard Nossum <vegard.nossum@gmail.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      69b895fd
    • kprobes: improve kretprobe scalability with hashed locking · ef53d9c5
      Committed by Srinivasa D S
      Currently the list of kretprobe instances is stored in the kretprobe object
      (as used_instances, free_instances) and in the kretprobe hash table, and one
      global kretprobe lock serialises access to these lists. This allows only one
      kretprobe handler to execute at a time, which hurts system performance,
      particularly on SMP systems and when return probes are set on a lot of
      functions (like on all system calls).
      
      The solution proposed here uses fine-grained locks that perform better on
      SMP systems than the present kretprobe implementation.
      
      Solution:
      
       1) Instead of having one global lock protecting the kretprobe instances
          present in the kretprobe object and in the kretprobe hash table, we
          use two locks: one lock per hash list of the kretprobe hash table and
          another lock in each kretprobe object.
      
       2) We hold the lock in the kretprobe object while we modify a kretprobe
          instance in the kretprobe object, and we hold the per-hash-list lock
          while modifying kretprobe instances present in that hash list. To
          prevent deadlock, we never grab a per-hash-list lock while holding a
          kretprobe lock (see the sketch after this list).
      
       3) We can remove used_instances from struct kretprobe, as used instances
          of a kretprobe can now be tracked through the kretprobe hash table.
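
      A hedged sketch of the hashed-locking arrangement described in points 1)
      and 2); the constants and helpers below mirror what the description
      implies rather than the verbatim kernel/kprobes.c code:

          #include <linux/hash.h>
          #include <linux/init.h>
          #include <linux/list.h>
          #include <linux/sched.h>
          #include <linux/spinlock.h>

          #define KPROBE_HASH_BITS   6
          #define KPROBE_TABLE_SIZE  (1 << KPROBE_HASH_BITS)

          /* One lock per hash bucket instead of one global kretprobe lock. */
          static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
          static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];

          static void __init kretprobe_table_init_sketch(void)
          {
                  int i;

                  for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
                          INIT_HLIST_HEAD(&kretprobe_inst_table[i]);
                          spin_lock_init(&kretprobe_table_locks[i]);
                  }
          }

          /* Hash on the probed task so concurrent return-probe hits contend
           * only when they land in the same bucket. */
          static spinlock_t *kretprobe_bucket_lock(struct task_struct *tsk)
          {
                  return &kretprobe_table_locks[hash_ptr(tsk, KPROBE_HASH_BITS)];
          }

          /*
           * Lock ordering per point 2): rp->lock is taken only to pop an
           * instance off rp->free_instances; the bucket lock is taken only to
           * add or remove that instance in kretprobe_inst_table. A bucket
           * lock is never acquired while rp->lock is held.
           */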
      
      Time for a kernel compilation ("make -j 8") on an 8-way ppc64 system
      with return probes set on all system calls:

              cacheline-aligned    non-cacheline-aligned    un-patched
              patch                patch                    kernel
      =======================================================================
      real    9m46.784s            9m54.412s                10m2.450s
      user    40m5.715s            40m7.142s                40m4.273s
      sys     2m57.754s            2m58.583s                3m17.430s
      =======================================================================
      
      Time for a kernel compilation ("make -j 8") on the same system when the
      kernel is not probed:

      =========================
      real    9m26.389s
      user    40m8.775s
      sys     2m7.283s
      =========================
      Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com>
      Signed-off-by: Jim Keniston <jkenisto@us.ibm.com>
      Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Masami Hiramatsu <mhiramat@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef53d9c5
  16. 25 July 2008, 2 commits