1. 10 Sep 2006, 2 commits
  2. 07 Sep 2006, 1 commit
  3. 03 Sep 2006, 1 commit
    • [ARM] 3762/1: Fix ptrace cache coherency bug for ARM1136 VIPT nonaliasing Harvard caches · a188ad2b
      Committed by George G. Davis
      Patch from George G. Davis
      
      Resolve ARM1136 VIPT non-aliasing cache coherency issues observed when
      using ptrace to set breakpoints, and clean up copy_{to,from}_user_page()
      while we're here, as requested by Russell King, because "it's also far
      too heavy on non-v6 CPUs".
      
      NOTES:
      
      1. Only access_process_vm() calls copy_{to,from}_user_page().
      2. access_process_vm() calls get_user_pages() to pin down the "page".
      3. get_user_pages() calls flush_dcache_page(page) which ensures cache
         coherency between kernel and userspace mappings of "page".  However
         flush_dcache_page(page) may not invalidate the I-Cache over this range
         in all cases; specifically, the I-Cache is not invalidated in the VIPT
         non-aliasing case.  So memory is consistent between kernel and user
         space mappings of "page" but I-Cache may still be hot over this
         range.  IOW, we don't have to worry about flush_cache_page() before
         memcpy().
      4. Now, for the copy_to_user_page() case, after memcpy(), we must flush
         the caches so memory is consistent with kernel cache entries and
         invalidate the I-Cache if this mm region is executable.  We don't
         need to do anything after memcpy() for the copy_from_user_page()
         case since kernel cache entries will be invalidated via the same
         process above if we access "page" again.  The flush_ptrace_access()
         function (borrowed from the SPARC64 implementation) is added to handle
         cache flushing after memcpy() for the copy_to_user_page() case, as
         sketched below.
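
      A minimal sketch of the shape this gives copy_{to,from}_user_page() (the
      real kernel definitions are macros in the ARM headers; the exact argument
      list of flush_ptrace_access() here is an assumption, not quoted from the
      patch):

          /* Sketch only: write through the kernel mapping, then repair caches.
           * flush_ptrace_access() makes memory consistent with kernel D-cache
           * entries and invalidates the I-cache when the vma is executable. */
          #define copy_to_user_page(vma, page, uaddr, dst, src, len)       \
                  do {                                                     \
                          memcpy(dst, src, len);                           \
                          flush_ptrace_access(vma, page, uaddr, dst, len); \
                  } while (0)

          /* Per note 4, no post-memcpy cache maintenance is needed here. */
          #define copy_from_user_page(vma, page, uaddr, dst, src, len)     \
                  memcpy(dst, src, len)
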
      Signed-off-by: George G. Davis <gdavis@mvista.com>
      Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
  4. 02 Sep 2006, 3 commits
    • [PATCH] x86: increase MAX_MP_BUSSES on default arch · a9aa141c
      Committed by Andrew Morton
      Vitezslav Samel <samel@mail.cz> reports that an HP DL380 g4 fails using the
      default arch due to the ISA bus having an ID of 32.
      
      It would have worked OK with the generic arch; for some reason the default
      arch doesn't support as many busses.
      
      So bump that up to support 256 busses, but leave it at 32 if we're building a
      tiny system to save a bit of memory.
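
      A plausible sketch of the change (whether CONFIG_BASE_SMALL was the switch
      used, and the exact new limit, are assumptions based on the description
      above, not quoted from the patch):

          #if CONFIG_BASE_SMALL == 0
          #define MAX_MP_BUSSES 256   /* default arch: cope with bus IDs >= 32 */
          #else
          #define MAX_MP_BUSSES 32    /* tiny build: save a bit of memory */
          #endif
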
      
      Cc: Vitezslav Samel <samel@mail.cz>
      Acked-by: Andi Kleen <ak@muc.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] task delay accounting fixes · 35df17c5
      Committed by Shailabh Nagar
      Clean up the allocation and freeing of tsk->delays, used by delay accounting.
      This solves two problems reported for delay accounting:
      
      1. oops in __delayacct_blkio_ticks
      http://www.uwsg.indiana.edu/hypermail/linux/kernel/0608.2/1844.html
      
      Currently tsk->delays is freed too early during task exit, which can cause
      a NULL tsk->delays to be accessed via reads of /proc/<tgid>/stats.  The
      patch fixes this problem by freeing tsk->delays closer to when the
      task_struct itself is freed.  As a result, it also eliminates the use of
      tsk->delays_lock, which was only being used (inadequately) to safeguard
      access to tsk->delays while a task was exiting.
      
      2. Possible memory leak in kernel/delayacct.c
      http://www.uwsg.indiana.edu/hypermail/linux/kernel/0608.2/1389.html
      
      The patch frees the tsk->delays allocation after a bad fork, a cleanup
      that was missing earlier; a sketch of the freeing helper follows below.
      
      The patch has been tested to confirm it fixes the problems listed above,
      and stress-tested with rapid calls to delay accounting's taskstats command
      interface (which is the other path that can access the same data, besides
      the /proc interface that caused the oops above).
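
      A minimal sketch of the lifetime fix described above (the helper and slab
      names follow the delay-accounting code of that era, but treat the details
      as assumptions):

          /* Called when the task_struct itself is being freed, i.e. from
           * free_task() and from the bad-fork error path -- so readers of
           * /proc/<tgid>/stats can no longer race with the free. */
          static inline void delayacct_tsk_free(struct task_struct *tsk)
          {
                  if (tsk->delays)
                          kmem_cache_free(delayacct_cache, tsk->delays);
                  tsk->delays = NULL;
          }
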
      Signed-off-by: Shailabh Nagar <nagar@watson.ibm.com>
      Cc: Balbir Singh <balbir@in.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ZVC: Scale thresholds depending on the size of the system · df9ecaba
      Committed by Christoph Lameter
      The ZVC counter update threshold is currently set to a fixed value of 32.
      This patch sets up the threshold depending on the number of processors and
      the sizes of the zones in the system.
      
      With the current threshold of 32, I was able to observe slight contention
      when more than 130-140 processors concurrently updated the counters.  The
      contention vanished when I either increased the threshold to 64 or used
      Andrew's idea of overstepping the interval (see ZVC overstep patch).
      
      However, we saw contention again at 220-230 processors.  So we need higher
      values for larger systems.
      
      But the current default is already a bit of overkill for smaller
      systems.  Some systems have tiny zones where precision matters.  For
      example, i386 and x86_64 have 16M DMA zones plus either a 900M ZONE_NORMAL
      or ZONE_DMA32, and these are present even on SMP and NUMA systems.
      
      The patch here sets up a threshold based on the number of processors in the
      system and the size of the zone that these counters are used for.  The
      threshold should grow logarithmically, so we use fls() as a cheap
      approximation (a sketch of the resulting function follows the results
      below).
      
      Results of tests on a system with 1024 processors (4TB RAM)
      
      The following output is from a test allocating 1GB of memory concurrently
      on each processor (Forking the process.  So contention on mmap_sem and the
      pte locks is not a factor):
      
      TYPE:               CPUS   MAX WALL   MIN WALL        SYS     USER     TOTCPU
      fork                   1      0.552      0.552      0.540    0.012      0.552
      fork                   4      0.552      0.548      2.164    0.036      2.200
      fork                  16      0.564      0.548      8.812    0.164      8.976
      fork                 128      0.580      0.572     72.204    1.208     73.412
      fork                 256      1.300      0.660    310.400    2.160    312.560
      fork                 512      3.512      0.696   1526.836    4.816   1531.652
      fork                1020     20.024      0.700  17243.176    6.688  17249.863
      
      So a threshold of 32 is fine up to 128 processors. At 256 processors contention
      becomes a factor.
      
      Overstepping the counter (earlier patch) improves the numbers a bit:
      
      fork                   4      0.552      0.548      2.164    0.040      2.204
      fork                  16      0.552      0.548      8.640    0.148      8.788
      fork                 128      0.556      0.548     69.676    0.956     70.632
      fork                 256      0.876      0.636    212.468    2.108    214.576
      fork                 512      2.276      0.672    997.324    4.260   1001.584
      fork                1020     13.564      0.680  11586.436    6.088  11592.523
      
      There is still contention at 512 and 1020 processors, though at 1020 it is
      down by a third; at 256 only a slight bit of contention remains.
      
      After this patch the counter threshold on this system is set to 125, which
      reduces contention significantly:
      
      fork                 128      0.560      0.548     69.776    0.932     70.708
      fork                 256      0.636      0.556    143.460    2.036    145.496
      fork                 512      0.640      0.548    284.244    4.236    288.480
      fork                1020      1.500      0.588   1326.152    8.892   1335.044
      
      [akpm@osdl.org: !SMP build fix]
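
      A sketch of the scaling function described above (the fls()-based formula
      and the 125 cap come from this description; the calculate_threshold name
      and other details are assumptions):

          /* Per-zone/per-cpu counter drift threshold: grows roughly
           * logarithmically with both CPU count and zone size. */
          static int calculate_threshold(struct zone *zone)
          {
                  int threshold;
                  int mem;        /* zone memory in units of 128MB */

                  mem = zone->present_pages >> (27 - PAGE_SHIFT);

                  threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));

                  /* Cap the threshold so per-cpu deltas stay bounded. */
                  return min(125, threshold);
          }
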
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  5. 01 Sep 2006, 2 commits
  6. 31 Aug 2006, 11 commits
  7. 30 Aug 2006, 6 commits
  8. 28 Aug 2006, 8 commits
  9. 27 Aug 2006, 3 commits
  10. 25 Aug 2006, 3 commits