1. 26 Sep 2006, 2 commits
  2. 23 Sep 2006, 2 commits
  3. 09 Sep 2006, 1 commit
    • [PATCH] invalidate_complete_page() race fix · 016eb4a0
      Committed by Andrew Morton
      If a CPU faults this page into pagetables after invalidate_mapping_pages()
      checked page_mapped(), invalidate_complete_page() will still proceed to remove
      the page from pagecache.  This leaves the page-faulting process with a
      detached page.  If it was MAP_SHARED then file data loss will ensue.
      
      Fix that up by checking the page's refcount after taking tree_lock.
      
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Hugh Dickins <hugh@veritas.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
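      The fix is the classic re-check-under-the-lock pattern: the earlier
      page_mapped() test is only advisory, so the refcount must be validated
      again once tree_lock is held.  A minimal userspace sketch of the idea
      (the names and the simulated two-reference protocol are illustrative,
      not the kernel's actual invalidate_complete_page()):

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

struct page_sim {
    atomic_int refcount;        /* pagecache ref + caller ref + any mappers */
    pthread_mutex_t tree_lock;  /* stands in for mapping->tree_lock */
    bool in_pagecache;
};

/* Expected count when only the pagecache and our caller hold the page. */
enum { UNMAPPED_REFS = 2 };

static bool invalidate_complete_page_sim(struct page_sim *p)
{
    pthread_mutex_lock(&p->tree_lock);
    /* Re-check under the lock: a concurrent fault may have taken a
     * reference after the caller's earlier page_mapped() check. */
    if (atomic_load(&p->refcount) != UNMAPPED_REFS) {
        pthread_mutex_unlock(&p->tree_lock);
        return false;           /* raced with a fault: keep the page */
    }
    p->in_pagecache = false;    /* safe to detach now */
    pthread_mutex_unlock(&p->tree_lock);
    return true;
}
```

      Without the re-check, a fault landing between the unlocked test and the
      removal would be left holding a detached page.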
  4. 08 Sep 2006, 1 commit
  5. 02 Sep 2006, 4 commits
    • [PATCH] fix NUMA interleaving for huge pages · 3b98b087
      Committed by Nishanth Aravamudan
      Since vma->vm_pgoff is in units of small pages, VMAs for huge pages have the
      lower HPAGE_SHIFT - PAGE_SHIFT bits always cleared, which results in bad
      offsets to the interleave functions.  Take this difference from small pages
      into account when calculating the offset.  This does add a 0-bit shift into
      the small-page path (via alloc_page_vma()), but I think that is negligible.
      Also add a BUG_ON to prevent the offset from growing due to a negative
      right-shift, which probably shouldn't be allowed anyway.
      
      Tested on an 8-memory node ppc64 NUMA box and got the interleaving I
      expected.
      Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
      Signed-off-by: Adam Litke <agl@us.ibm.com>
      Cc: Andi Kleen <ak@muc.de>
      Acked-by: Christoph Lameter <clameter@engr.sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
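      The corrected arithmetic can be sketched in plain C.  Constants and names
      here are illustrative (2 MB x86-style huge pages rather than the 16 MB
      ppc64 pages used in the test above):

```c
#include <assert.h>

#define PAGE_SHIFT  12
#define HPAGE_SHIFT 21   /* 2 MB huge pages, for illustration */

/* Compute the interleave offset for the huge page covering 'addr'.
 * vm_pgoff is in small-page units, so for huge-page VMAs its low
 * HPAGE_SHIFT - PAGE_SHIFT bits are always zero; shifting them out
 * first (the fix) keeps consecutive huge pages at consecutive offsets,
 * instead of every offset being a multiple of 512 (the bug). */
static unsigned long interleave_offset(unsigned long vm_pgoff,
                                       unsigned long vm_start,
                                       unsigned long addr)
{
    unsigned long off = vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT);
    off += (addr - vm_start) >> HPAGE_SHIFT;
    return off;
}

/* Map the offset onto one of nnodes memory nodes round-robin. */
static int interleave_node(unsigned long off, int nnodes)
{
    return (int)(off % (unsigned long)nnodes);
}
```

      With the fix, faulting successive huge pages of a MAP_SHARED mapping
      walks the nodes one by one, which is the interleaving the changelog
      reports observing on the 8-node box.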
    • [PATCH] dm: work around mempool_alloc, bio_alloc_bioset deadlocks · 0b1d647a
      Committed by Pavel Mironchik
      This patch works around a complex dm-related deadlock/livelock down in the
      mempool allocator.
      
      Alasdair said:
      
        Several dm targets suffer from this.
      
        Mempools are not yet used correctly everywhere in device-mapper: they can
        get shared when devices are stacked, and some targets share them across
        multiple instances.  I made fixing this one of the prerequisites for this
        patch:
      
          md-dm-reduce-stack-usage-with-stacked-block-devices.patch
      
        which in some cases makes people more likely to hit the problem.
      
        There's been some progress on this recently with (unfinished) dm-crypt
        patches at:
      
          http://www.kernel.org/pub/linux/kernel/people/agk/patches/2.6/editing/
            (dm-crypt-move-io-to-workqueue.patch plus dependencies)
      
      and:
      
        I've no problems with a temporary workaround like that, but Milan Broz (a
        new Red Hat developer in the Czech Republic) has started reviewing all the
        mempool usage in device-mapper, so I'm expecting we'll soon have a proper
        fix for this and the associated problems.  [He's back from holiday at the
        start of next week.]
      
      For now, this sad-but-safe little patch will allow the machine to recover.
      
      [akpm@osdl.org: rewrote changelog]
      Cc: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ZVC: Scale thresholds depending on the size of the system · df9ecaba
      Committed by Christoph Lameter
      The ZVC counter update threshold is currently set to a fixed value of 32.
      This patch sets up the threshold depending on the number of processors and
      the sizes of the zones in the system.
      
      With the current threshold of 32, I was able to observe slight contention
      when more than 130-140 processors concurrently updated the counters.  The
      contention vanished when I either increased the threshold to 64 or used
      Andrew's idea of overstepping the interval (see ZVC overstep patch).
      
      However, we saw contention again at 220-230 processors.  So we need higher
      values for larger systems.
      
      But the current default is already a bit of overkill for smaller
      systems.  Some systems have tiny zones where precision matters.  For
      example i386 and x86_64 have 16M DMA zones and either 900M ZONE_NORMAL or
      ZONE_DMA32.  These are even present on SMP and NUMA systems.
      
      The patch here sets up a threshold based on the number of processors in the
      system and the size of the zone that these counters are used for.  The
      threshold should grow logarithmically, so we use fls() as an easy
      approximation.
      
      Results of tests on a system with 1024 processors (4TB RAM)
      
      The following output is from a test allocating 1GB of memory concurrently
      on each processor (Forking the process.  So contention on mmap_sem and the
      pte locks is not a factor):
      
                             X                   MIN
      TYPE:               CPUS       WALL       WALL        SYS     USER     TOTCPU
      fork                   1      0.552      0.552      0.540    0.012      0.552
      fork                   4      0.552      0.548      2.164    0.036      2.200
      fork                  16      0.564      0.548      8.812    0.164      8.976
      fork                 128      0.580      0.572     72.204    1.208     73.412
      fork                 256      1.300      0.660    310.400    2.160    312.560
      fork                 512      3.512      0.696   1526.836    4.816   1531.652
      fork                1020     20.024      0.700  17243.176    6.688  17249.863
      
      So a threshold of 32 is fine up to 128 processors. At 256 processors contention
      becomes a factor.
      
      Overstepping the counter (earlier patch) improves the numbers a bit:
      
      fork                   4      0.552      0.548      2.164    0.040      2.204
      fork                  16      0.552      0.548      8.640    0.148      8.788
      fork                 128      0.556      0.548     69.676    0.956     70.632
      fork                 256      0.876      0.636    212.468    2.108    214.576
      fork                 512      2.276      0.672    997.324    4.260   1001.584
      fork                1020     13.564      0.680  11586.436    6.088  11592.523
      
      Still contention at 512 and 1020. Contention at 1020 is down by a third.
      256 still has a slight bit of contention.
      
      After this patch the counter threshold will be set to 125 which reduces
      contention significantly:
      
      fork                 128      0.560      0.548     69.776    0.932     70.708
      fork                 256      0.636      0.556    143.460    2.036    145.496
      fork                 512      0.640      0.548    284.244    4.236    288.480
      fork                1020      1.500      0.588   1326.152    8.892   1335.044
      
      [akpm@osdl.org: !SMP build fix]
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
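      The fls()-based scaling described above can be sketched as follows.  The
      exact formula is an assumption modeled on the changelog (logarithmic
      growth in both CPU count and zone size, capped at 125); fls_approx()
      stands in for the kernel's fls():

```c
#include <assert.h>

/* find-last-set: 1-based position of the highest set bit, 0 for 0.
 * __builtin_clz is undefined for 0, hence the guard. */
static int fls_approx(unsigned int x)
{
    return x ? 32 - __builtin_clz(x) : 0;
}

/* Pick a per-cpu counter threshold from the number of online CPUs and
 * the zone size (given here in 128 MB units).  Grows logarithmically in
 * both, and is capped at 125 as the changelog states for the
 * 1024-processor, 4 TB test system. */
static int calculate_threshold(int ncpus, unsigned int zone_pages_128mb)
{
    int threshold = 2 * fls_approx((unsigned int)ncpus)
                      * (1 + fls_approx(zone_pages_128mb));
    return threshold < 125 ? threshold : 125;
}
```

      A tiny 16 MB DMA zone thus gets a threshold of just a few counts
      (keeping its counters precise), while the big zones on the 1024-CPU box
      hit the 125 cap.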
    • [PATCH] ZVC: Overstep counters · a302eb4e
      Committed by Christoph Lameter
      Increments and decrements are usually grouped rather than mixed.  We can
      optimize the inc and dec functions for that case.
      
      Increment and decrement the counters by 50% more than the threshold in
      those cases and set the differential accordingly.  This decreases the need
      to update the atomic counters.
      
      The idea came originally from Andrew Morton.  The overstepping alone was
      sufficient to address the contention issue found when updating the global
      and the per zone counters from 160 processors.
      
      Also remove some code in dec_zone_page_state.
      Signed-off-by: Christoph Lameter <clameter@sgi.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
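      A minimal sketch of the overstep idea: a per-cpu differential folds into
      the shared counter only when it exceeds the threshold, overshooting by
      50% so that a following run of same-direction updates stays local for
      longer.  Names and the fixed threshold are illustrative, not the actual
      kernel code:

```c
#include <assert.h>

#define STAT_THRESHOLD 32

struct zone_sim   { long global; };  /* the shared (atomic in-kernel) counter */
struct percpu_sim { long diff; };    /* this cpu's local differential */

/* On overflow, fold threshold + 50% into the global counter and leave
 * the differential at -overstep: the next run of increments then takes
 * ~1.5x threshold steps before touching the shared counter again.
 * Invariant: global + diff == true count. */
static void inc_zone_state_sim(struct zone_sim *z, struct percpu_sim *p)
{
    if (++p->diff > STAT_THRESHOLD) {
        int overstep = STAT_THRESHOLD / 2;
        z->global += p->diff + overstep;
        p->diff = -overstep;
    }
}
```

      Grouped increments therefore hit the shared cacheline roughly once per
      48 updates instead of once per 32, which is where the contention relief
      in the numbers above comes from.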
  6. 28 Aug 2006, 1 commit
  7. 15 Aug 2006, 1 commit
  8. 06 Aug 2006, 4 commits
    • [PATCH] memory hotadd fixes: enhance collision check · ebd15302
      Committed by KAMEZAWA Hiroyuki
      This patch enhances the collision check for memory hot add.
      
      It's better to do the resource collision check before doing the memory hot
      add, which will touch memory management structures.
      
      add_section() should also check whether the section already exists before
      calling sparse_add_one_section().  (sparse_add_one_section() will do another
      check anyway, but checking in memory_hotplug.c is easier to understand.)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: keith mannthey <kmannth@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] memory hotadd fixes: find_next_system_ram catch range fix · 58c1b5b0
      Committed by KAMEZAWA Hiroyuki
      find_next_system_ram() is used to find an available memory resource when
      onlining newly added memory.  This patch fixes the following case, which
      find_next_system_ram() cannot catch:
      
      Resource:      (start)-------------(end)
      Section :                (start)-------------(end)
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Keith Mannthey <kmannth@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
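      The missed case in the diagram is a plain interval-overlap problem: the
      resource starts before the section does, yet they still intersect.  A
      sketch with hypothetical names (half-open ranges, whereas the kernel
      works with struct resource):

```c
#include <assert.h>
#include <stdbool.h>

/* A half-open address range [start, end). */
struct range_sim { unsigned long start, end; };

/* True when the resource overlaps the section at all, including the
 * previously missed case where the resource begins before the section.
 * On overlap, *usable_start is clamped to the later of the two starts. */
static bool ram_overlaps(struct range_sim res, struct range_sim sec,
                         unsigned long *usable_start)
{
    if (res.start >= sec.end || sec.start >= res.end)
        return false;
    *usable_start = res.start > sec.start ? res.start : sec.start;
    return true;
}
```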
    • [PATCH] memory hotadd fixes: not-aligned memory hotadd handling fix · 6f712711
      Committed by KAMEZAWA Hiroyuki
      The ioresource handling code in memory hotplug allows not-aligned memory
      hot add.  But when memmap and other memory structures are initialized, the
      parameters should be aligned.  (If they are not, initialization of mem_map
      will go wrong, since it assumes aligned parameters.)  This patch fixes
      that.
      
      It also allows the ioresource collision check to handle -EEXIST.
      Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      Cc: Keith Mannthey <kmannth@gmail.com>
      Cc: Yasunori Goto <y-goto@jp.fujitsu.com>
      Cc: Dave Hansen <haveblue@us.ibm.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
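      The alignment requirement can be sketched as a simple check on the
      hot-added range.  The section size and the names are illustrative
      (SECTION_SIZE_BITS varies by architecture):

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SHIFT        12
#define SECTION_SIZE_BITS 27   /* 128 MB sections, a common value */
#define PAGES_PER_SECTION (1UL << (SECTION_SIZE_BITS - PAGE_SHIFT))

/* mem_map initialization assumes section-aligned ranges, so a hot-added
 * range must start on a section boundary and cover whole sections. */
static bool hotadd_range_aligned(unsigned long start_pfn,
                                 unsigned long nr_pages)
{
    return (start_pfn % PAGES_PER_SECTION) == 0 &&
           (nr_pages  % PAGES_PER_SECTION) == 0;
}
```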
    • [PATCH] fadvise() make POSIX_FADV_NOREUSE a no-op · 60c371bc
      Committed by Andrew Morton
      The POSIX_FADV_NOREUSE hint means "the application will use this range of the
      file a single time".  It seems to be intended that the implementation will use
      this hint to perform drop-behind of that part of the file when the application
      gets around to reading or writing it.
      
      However, for reasons which aren't obvious (or sane?) I mapped
      POSIX_FADV_NOREUSE onto POSIX_FADV_WILLNEED, i.e. it does readahead.
      
      That's daft.  So for now, make POSIX_FADV_NOREUSE a no-op.
      
      This is a non-back-compatible change.  If someone was using POSIX_FADV_NOREUSE
      to perform readahead, they lose.  The likelihood is low.
      
      If/when we later implement POSIX_FADV_NOREUSE things will get interesting:
      to do it fully we'll need to maintain file offset/length ranges and perform
      all sorts of complex tricks, and managing the lifetime of those ranges'
      data structures will be interesting.
      
      A sensible implementation would probably ignore the file range and would
      simply mark the entire file as needing some form of drop-behind treatment.
      
      Cc: Michael Kerrisk <mtk-manpages@gmx.net>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  9. 01 Aug 2006, 2 commits
  10. 30 Jul 2006, 1 commit
  11. 27 Jul 2006, 1 commit
  12. 15 Jul 2006, 4 commits
  13. 14 Jul 2006, 3 commits
  14. 11 Jul 2006, 5 commits
  15. 04 Jul 2006, 6 commits
  16. 01 Jul 2006, 2 commits