1. 13 Mar 2006, 2 commits
    • Merge master.kernel.org:/home/rmk/linux-2.6-arm · 7cafae52
      Committed by Linus Torvalds
      * master.kernel.org:/home/rmk/linux-2.6-arm:
        [ARM] iwmmxt thread state alignment
        [ARM] 3350/1: Enable 1-wire on ARM
        [ARM] 3356/1: Workaround for the ARM1136 I-cache invalidation problem
        [ARM] 3355/1: NSLU2: remove propmt depends
        [ARM] 3354/1: NAS100d: fix power led handling
        [ARM] Fix muldi3.S
      7cafae52
    • [ARM] iwmmxt thread state alignment · cdaabbd7
      Committed by Russell King
      This patch removes the reliance of iwmmxt on hand-coded alignments.
      Since thread_info is always 8K aligned, specifying that fpstate is
      8-byte aligned achieves the same effect without needing to resort
      to hand-coded alignments.
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
      cdaabbd7
  2. 12 Mar 2006, 31 commits
  3. 11 Mar 2006, 6 commits
  4. 10 Mar 2006, 1 commit
    • [PATCH] slab: Node rotor for freeing alien caches and remote per cpu pages. · 8fce4d8e
      Committed by Christoph Lameter
      The cache reaper currently tries to free all alien caches and all remote
      per cpu pages in each pass of cache_reap.  On machines with a large number
      of nodes (such as Altix) this may lead to sporadic delays of around ~10ms.
      Interrupts are disabled while reclaiming, creating unacceptable delays.
      
      This patch changes that behavior by adding a per cpu reap_node variable.
      Instead of attempting to free all caches, we free only one alien cache and
      the per cpu pages from one remote node.  That reduces the time spent in
      cache_reap.  However, doing so lengthens the time it takes to
      completely drain all remote per cpu pagesets and all alien caches; the
      time needed grows with the number of nodes in the system.  All caches
      are drained when they overflow their respective capacity, so the drawback
      is only that a bit of memory may be wasted for a while longer.
      
      Details:
      
      1. Rename drain_remote_pages to drain_node_pages to allow the specification
         of the node to drain of pcp pages.
      
      2. Add additional functions init_reap_node, next_reap_node for NUMA
         that manage a per cpu reap_node counter.
      
      3. Add a reap_alien function that reaps only from the current reap_node.
      
      For us this seems to be a critical issue.  Holdoffs averaging ~7ms
      cause some HPC benchmarks to slow down significantly; e.g. NAS parallel
      slows down dramatically, with a 12-16 second runtime without the rotor
      compared to 5.8 secs with the rotor patches.  It gets down to 5.05 secs
      with the additional interrupt holdoff reductions.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
      8fce4d8e