1. 04 Oct 2006, 1 commit
    • [PATCH] rcu: simplify/improve batch tuning · 20e9751b
      Authored by Oleg Nesterov
      Kill the hard-to-calculate 'rsinterval' boot parameter and the per-cpu
      rcu_data.last_rs_qlen.  Instead, add a flag rcu_ctrlblk.signaled, which
      records the fact that one of the CPUs has sent a resched IPI since the
      last rcu_start_batch().
      
      Roughly speaking, we need two rcu_start_batch()s in order to move callbacks
      from ->nxtlist to ->donelist.  This means that when ->qlen exceeds qhimark
      and continues to grow, we should send a resched IPI, and then do it again
      after we have gone through a quiescent state.
      
      On the other hand, if the IPI was already sent, we don't need to send it
      again when another CPU detects overflow of the queue.
      Signed-off-by: Oleg Nesterov <oleg@tv-sign.ru>
      Acked-by: Paul E. McKenney <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      20e9751b
  2. 01 Jul 2006, 1 commit
  3. 28 Jun 2006, 1 commit
  4. 23 Jun 2006, 1 commit
  5. 16 May 2006, 1 commit
  6. 23 Mar 2006, 1 commit
  7. 09 Mar 2006, 1 commit
    • [PATCH] rcu batch tuning · 21a1ea9e
      Authored by Dipankar Sarma
      This patch adds new tunables for the RCU queue and finished batches.  There
      are two types of controls: the number of completed RCU updates invoked in a
      batch (blimit), and monitoring for a high rate of incoming RCU callbacks on
      a CPU (qhimark, qlowmark).
      
      By default, the per-cpu batch limit is set to a small value.  If the
      incoming RCU rate exceeds the high watermark, we do two things: force a
      quiescent state on all CPUs, and set the batch limit of the CPU to INTMAX.
      Setting the batch limit to INTMAX forces all finished RCUs to be processed
      in one shot.  If we have more than INTMAX RCUs queued up, we have bigger
      problems anyway.  Once the number of queued incoming RCUs falls below the
      low watermark, the batch limit is reset to the default.
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      21a1ea9e
  8. 04 Feb 2006, 1 commit
  9. 11 Jan 2006, 1 commit
  10. 10 Jan 2006, 1 commit
  11. 09 Jan 2006, 1 commit
  12. 13 Dec 2005, 1 commit
    • [PATCH] add rcu_barrier() synchronization point · ab4720ec
      Authored by Dipankar Sarma
      This introduces a new interface, rcu_barrier(), which waits until all the
      RCU callbacks queued before this call have completed.
      
      Reiser4 needs this, because we do more than just free the memory object in
      our RCU callback: we also remove it from the list hanging off the
      super-block.  This means that before freeing the reiser4-specific portion
      of the super-block (during umount) we have to wait until all pending RCU
      callbacks have executed.
      
      The only change reiser4 made to the original patch is the export of
      rcu_barrier().
      
      Cc: Hans Reiser <reiser@namesys.com>
      Cc: Vladimir V. Saveliev <vs@namesys.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      ab4720ec
  13. 31 Oct 2005, 1 commit
  14. 18 Oct 2005, 1 commit
  15. 10 Sep 2005, 1 commit
    • [PATCH] files: fix rcu initializers · 8b6490e5
      Authored by Dipankar Sarma
      First of a number of files_lock scalability patches.
      
       Here are the x86 numbers -
      
       tiobench on a 4(8)-way (HT) P4 system on ramdisk :
      
                                               (lockfree)
       Test            2.6.10-vanilla  Stdev   2.6.10-fd       Stdev
       -------------------------------------------------------------
       Seqread         1400.8          11.52   1465.4          34.27
       Randread        1594            8.86    2397.2          29.21
       Seqwrite        242.72          3.47    238.46          6.53
       Randwrite       445.74          9.15    446.4           9.75
      
       The performance improvement is very significant.
       We are getting killed by the cacheline bouncing of the files_struct
       lock here. Writes on ramdisk (ext2) seem to vary too much to get
       any meaningful number.
      
       Also, with Tridge's thread_perf test on a 4(8)-way (HT) P4 Xeon system:
      
       2.6.12-rc5-vanilla :
      
       Running test 'readwrite' with 8 tasks
       Threads     0.34 +/- 0.01 seconds
       Processes   0.16 +/- 0.00 seconds
      
       2.6.12-rc5-fd :
      
       Running test 'readwrite' with 8 tasks
       Threads     0.17 +/- 0.02 seconds
       Processes   0.17 +/- 0.02 seconds
      
       I repeated the measurements on ramfs (as opposed to ext2 on ramdisk in
       the earlier measurement) and I got more consistent results from tiobench :
      
       4(8) way xeon P4
       -----------------
                                               (lock-free)
       Test            2.6.12-rc5      Stdev   2.6.12-rc5-fd   Stdev
       -------------------------------------------------------------
       Seqread         1282            18.59   1343.6          26.37
       Randread        1517            7       2415            34.27
       Seqwrite        702.2           5.27    709.46           5.9
       Randwrite       846.86          15.15   919.68          21.4
      
       4-way ppc64
       ------------
                                               (lock-free)
       Test            2.6.12-rc5      Stdev   2.6.12-rc5-fd   Stdev
       -------------------------------------------------------------
       Seqread         1549            91.16   1569.6          47.2
       Randread        1473.6          25.11   1585.4          69.99
       Seqwrite        1096.8          20.03   1136            29.61
       Randwrite       1189.6           4.04   1275.2          32.96
      
       Also running Tridge's thread_perf test on ppc64 :
      
       2.6.12-rc5-vanilla
       --------------------
       Running test 'readwrite' with 4 tasks
       Threads     0.20 +/- 0.02 seconds
       Processes   0.16 +/- 0.01 seconds
      
       2.6.12-rc5-fd
       --------------------
       Running test 'readwrite' with 4 tasks
       Threads     0.18 +/- 0.04 seconds
       Processes   0.16 +/- 0.01 seconds
      
       The benefits are huge (up to ~60%) in some cases on x86, primarily
       due to the atomic operations during acquisition of ->file_lock
       and cacheline bouncing in the fast path. The ppc64 benefits are
       modest due to LL/SC-based locking, but still statistically significant.
      
      This patch:
      
      The RCU head initializer no longer needs the head variable name, since we
      don't use list.h lists anymore.
      Signed-off-by: Dipankar Sarma <dipankar@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8b6490e5
  16. 01 May 2005, 1 commit
    • [PATCH] Deprecate synchronize_kernel, GPL replacement · 9b06e818
      Authored by Paul E. McKenney
      The synchronize_kernel() primitive is used for quite a few different purposes:
      waiting for RCU readers, waiting for NMIs, waiting for interrupts, and so on.
      This makes RCU code harder to read, since synchronize_kernel() might or might
      not have matching rcu_read_lock()s.  This patch creates a new
      synchronize_rcu() that is to be used for RCU readers and a new
      synchronize_sched() that is used for the rest.  These two new primitives
      currently have the same implementation, but this might well change with
      additional real-time support.  Both new primitives are GPL-only; the old
      primitive is deprecated.
      Signed-off-by: Paul E. McKenney <paulmck@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      9b06e818
  17. 17 Apr 2005, 1 commit
    • Linux-2.6.12-rc2 · 1da177e4
      Authored by Linus Torvalds
      Initial git repository build. I'm not bothering with the full history,
      even though we have it. We can create a separate "historical" git
      archive of that later if we want to, and in the meantime it's about
      3.2GB when imported into git - space that would just make the early
      git days unnecessarily complicated, when we don't have a lot of good
      infrastructure for it.
      
      Let it rip!
      1da177e4