1. 29 October 2006, 9 commits
    • [PATCH] md: simplify checking of available size when resizing an array · 01ab5662
      Authored by NeilBrown
      When "mdadm --grow --size=xxx" is used to resize an array (use more or less of
      each device), we check the new siza against the available space in each
      device.
      
      We already have that number recorded in rdev->size, so calculating it is
      pointless (and wrong in one obscure case).
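      
      A rough userspace sketch of the idea (the struct and helper here are
      illustrative stand-ins, not the kernel's): compare the requested size
      against the space already recorded per device instead of recomputing it.
      
      ```c
      #include <stdio.h>
      
      /* Illustrative stand-in for the kernel's per-device record. */
      struct rdev {
          unsigned long long size;   /* usable sectors, already recorded */
      };
      
      /* Check the new size against rdev->size for every member device,
       * rather than re-deriving available space from device geometry. */
      static int check_resize(const struct rdev *rdevs, int n,
                              unsigned long long new_size)
      {
          for (int i = 0; i < n; i++)
              if (rdevs[i].size < new_size)
                  return -1;   /* at least one device is too small */
          return 0;
      }
      
      int main(void)
      {
          struct rdev devs[] = { { 1000 }, { 900 } };
          printf("%d\n", check_resize(devs, 2, 950));  /* -1: 900 < 950 */
          return 0;
      }
      ```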
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] md: fix bug where spares don't always get rebuilt properly when they become live · 2b6e8459
      Authored by NeilBrown
      If saved_raid_disk is >= 0, then the device could be a device that is already
      in sync that is being re-added.  So we need to default this value to -1.
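      
      A minimal sketch of the distinction (types and function are hypothetical
      stand-ins): the field must default to -1 so that only genuinely re-added,
      already-in-sync devices carry a previous slot number.
      
      ```c
      #include <stdio.h>
      
      /* Illustrative stand-in for the kernel's rdev. */
      struct rdev {
          int saved_raid_disk;   /* -1 unless this device is being re-added */
      };
      
      static void import_device(struct rdev *rdev)
      {
          /* Default to -1: a fresh spare has no previous slot.  Only a
           * device re-added after a clean removal keeps its old slot here,
           * which lets recovery treat it as possibly already in sync. */
          rdev->saved_raid_disk = -1;
      }
      
      int main(void)
      {
          struct rdev r;
          import_device(&r);
          printf("saved_raid_disk = %d\n", r.saved_raid_disk);  /* -1 */
          return 0;
      }
      ```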
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] fix efi_memory_present_wrapper() · ae74589c
      Authored by bibo, mao
      efi_memory_present_wrapper() takes physical addresses as its start/end
      parameters, but memory_present() expects PFNs; this patch converts the
      physical addresses to PFNs.
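      
      A compilable sketch of the conversion (memory_present() here is a stub,
      and PAGE_SHIFT is fixed at 12 for illustration):
      
      ```c
      #include <stdio.h>
      
      #define PAGE_SHIFT 12   /* 4 KiB pages, as on i386 */
      
      /* Stub standing in for the kernel's memory_present(), which expects
       * page-frame numbers, not physical addresses. */
      static void memory_present(int nid, unsigned long start_pfn,
                                 unsigned long end_pfn)
      {
          printf("node %d: PFN %lu..%lu\n", nid, start_pfn, end_pfn);
      }
      
      /* The wrapper receives physical addresses; shift them down to PFNs
       * before calling memory_present(), which is the essence of the fix. */
      static void efi_memory_present_wrapper(unsigned long long start,
                                             unsigned long long end)
      {
          memory_present(0, start >> PAGE_SHIFT, end >> PAGE_SHIFT);
      }
      
      int main(void)
      {
          efi_memory_present_wrapper(0x100000, 0x200000);  /* 1 MiB..2 MiB */
          return 0;
      }
      ```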
      Signed-off-by: bibo, mao <bibo.mao@intel.com>
      Cc: Andi Kleen <ak@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] jbd2: journal_dirty_data re-check for unmapped buffers · 9b57988d
      Authored by Eric Sandeen
      When running several fsx's and other filesystem stress tests, we found
      cases where an unmapped buffer was still being sent to submit_bh by the
      ext3 dirty data journaling code.
      
      I saw this happen in two ways, both related to another thread doing a
      truncate which would unmap the buffer in question.
      
      Either we would get into journal_dirty_data with a bh which was already
      unmapped (although journal_dirty_data_fn had checked for this earlier, the
      state was not locked at that point), or it would get unmapped in the middle
      of journal_dirty_data when we dropped locks to call sync_dirty_buffer.
      
      By re-checking for mapped state after we've acquired the bh state lock, we
      should avoid these races.  If we find a buffer which is no longer mapped,
      we essentially ignore it, because journal_unmap_buffer has already decided
      that this buffer can go away.
      
      I've also added tracepoints in these two cases, and made a couple other
      tracepoint changes that I found useful in debugging this.
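      
      A compilable userspace sketch of the locking pattern, with a pthread
      mutex standing in for the bh state lock (names and types here are
      illustrative, not jbd's):
      
      ```c
      #include <pthread.h>
      #include <stdbool.h>
      #include <stdio.h>
      
      /* Minimal stand-in for a buffer head: a mapped flag guarded by a
       * state lock. */
      struct buffer_head {
          pthread_mutex_t state_lock;
          bool mapped;
      };
      
      /* The pattern from the fix: any earlier buffer_mapped() check is
       * stale once locks have been dropped, so re-check after taking the
       * state lock, and quietly skip buffers a racing truncate unmapped
       * (journal_unmap_buffer already decided they can go away). */
      static int journal_dirty_data_sketch(struct buffer_head *bh)
      {
          pthread_mutex_lock(&bh->state_lock);
          if (!bh->mapped) {
              pthread_mutex_unlock(&bh->state_lock);
              return 0;               /* ignore: buffer is going away */
          }
          /* ... only here is it safe to hand the buffer to submit_bh() ... */
          pthread_mutex_unlock(&bh->state_lock);
          return 1;
      }
      
      int main(void)
      {
          struct buffer_head bh = { PTHREAD_MUTEX_INITIALIZER, false };
          printf("submitted: %d\n", journal_dirty_data_sketch(&bh));  /* 0 */
          return 0;
      }
      ```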
      Signed-off-by: Eric Sandeen <esandeen@redhat.com>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] jbd: journal_dirty_data re-check for unmapped buffers · f58a74dc
      Authored by Eric Sandeen
      When running several fsx's and other filesystem stress tests, we found
      cases where an unmapped buffer was still being sent to submit_bh by the
      ext3 dirty data journaling code.
      
      I saw this happen in two ways, both related to another thread doing a
      truncate which would unmap the buffer in question.
      
      Either we would get into journal_dirty_data with a bh which was already
      unmapped (although journal_dirty_data_fn had checked for this earlier, the
      state was not locked at that point), or it would get unmapped in the middle
      of journal_dirty_data when we dropped locks to call sync_dirty_buffer.
      
      By re-checking for mapped state after we've acquired the bh state lock, we
      should avoid these races.  If we find a buffer which is no longer mapped,
      we essentially ignore it, because journal_unmap_buffer has already decided
      that this buffer can go away.
      
      I've also added tracepoints in these two cases, and made a couple other
      tracepoint changes that I found useful in debugging this.
      Signed-off-by: Eric Sandeen <esandeen@redhat.com>
      Cc: <linux-ext4@vger.kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] ext4: fix printk format warnings · 1939e49a
      Authored by Randy Dunlap
      fs/ext4/resize.c:72: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:76: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:81: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:85: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:89: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:89: warning: long long unsigned int format, __u64 arg (arg 5)
      fs/ext4/resize.c:93: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:93: warning: long long unsigned int format, __u64 arg (arg 5)
      fs/ext4/resize.c:98: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:103: warning: long long unsigned int format, __u64 arg (arg 4)
      fs/ext4/resize.c:109: warning: long long unsigned int format, __u64 arg (arg 4)
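      
      These warnings arise because __u64 is 'unsigned long' rather than
      'unsigned long long' on some 64-bit architectures, so it does not match
      a %llu format.  A small sketch of the usual fix, an explicit cast at
      each call site (the typedef below mimics such an architecture and is
      illustrative only):
      
      ```c
      #include <stdio.h>
      
      /* Mimics an architecture where the 64-bit type is a plain long. */
      typedef unsigned long __u64;
      
      int main(void)
      {
          __u64 block = 123456789ULL;
      
          /* Cast makes the argument match %llu on every architecture. */
          printf("added group at block %llu\n", (unsigned long long)block);
          return 0;
      }
      ```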
      Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] Use min of two prio settings in calculating distress for reclaim · bbdb396a
      Authored by Martin Bligh
      If try_to_free_pages / balance_pgdat are called with a gfp_mask specifying
      GFP_IO and/or GFP_FS, they will reclaim the requisite number of pages and
      then reset prev_priority to DEF_PRIORITY (or to some other high, i.e.
      non-urgent, value).
      
      However, another reclaimer without those gfp_mask flags set (say, GFP_NOIO)
      may still be struggling to reclaim pages.  The concurrent overwrite of
      zone->prev_priority will cause this GFP_NOIO thread to unexpectedly cease
      deactivating mapped pages, thus causing reclaim difficulties.
      
      The fix is to key the distress calculation not off zone->prev_priority
      alone, but to also take the local caller's priority into account, using
      min(zone->prev_priority, sc->priority).
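      
      A compilable sketch of the calculation described above (the
      distress formula and the min() follow the commit message; the
      surrounding scaffolding is illustrative):
      
      ```c
      #include <stdio.h>
      
      #define DEF_PRIORITY 12
      
      static int min_int(int a, int b) { return a < b ? a : b; }
      
      /* Distress grows as priority drops toward 0.  Keying it off the
       * minimum of the zone's recorded priority and the current caller's
       * priority means a struggling GFP_NOIO reclaimer still sees high
       * distress even after another reclaimer reset prev_priority to
       * DEF_PRIORITY. */
      static int distress(int zone_prev_priority, int sc_priority)
      {
          return 100 >> min_int(zone_prev_priority, sc_priority);
      }
      
      int main(void)
      {
          /* Zone was just reset to DEF_PRIORITY, but this caller is at 1. */
          printf("distress = %d\n", distress(DEF_PRIORITY, 1));  /* 50 */
          return 0;
      }
      ```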
      Signed-off-by: Martin J. Bligh <mbligh@google.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] vmscan: Fix temp_priority race · 3bb1a852
      Authored by Martin Bligh
      The temp_priority field in zone is racy, as we can walk through a reclaim
      path, and just before we copy it into prev_priority, it can be overwritten
      (say with DEF_PRIORITY) by another reclaimer.
      
      The same bug is contained in both try_to_free_pages and balance_pgdat, but
      it is fixed slightly differently.  In balance_pgdat, we keep a separate
      priority record per zone in a local array.  In try_to_free_pages there is
      no need to do this, as the priority level is the same for all zones that we
      reclaim from.
      
      Impact of this bug is that temp_priority is copied into prev_priority, and
      setting this artificially high causes reclaimers to set distress
      artificially low.  They then fail to reclaim mapped pages, when they are,
      in fact, under severe memory pressure (their priority may be as low as 0).
      This causes the OOM killer to fire incorrectly.
      
      From: Andrew Morton <akpm@osdl.org>
      
      __zone_reclaim() isn't modifying zone->prev_priority.  But zone->prev_priority
      is used in the decision whether or not to bring mapped pages onto the inactive
      list.  Hence there's a risk here that __zone_reclaim() will fail because
      zone->prev_priority is large (ie: low urgency) and lots of mapped pages end up
      stuck on the active list.
      
      Fix that up by decreasing (ie making more urgent) zone->prev_priority as
      __zone_reclaim() scans the zone's pages.
      
      This bug perhaps explains why ZONE_RECLAIM_PRIORITY was created.  It should be
      possible to remove that now, and to just start out at DEF_PRIORITY?
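      
      A compilable sketch of the balance_pgdat() half of the fix: the
      scanning priority lives in a local per-zone array, and is published
      to zone->prev_priority only through a helper that can make the
      recorded value more urgent, never less.  The helper name follows the
      patch; the scaffolding around it is illustrative.
      
      ```c
      #include <stdio.h>
      
      #define MAX_ZONES    4
      #define DEF_PRIORITY 12
      
      struct zone { int prev_priority; };
      
      /* Publish a scanning priority, but only ever lower (make more
       * urgent) the recorded value; a racing reclaimer can no longer
       * overwrite a low priority with DEF_PRIORITY mid-scan. */
      static void note_zone_scanning_priority(struct zone *z, int priority)
      {
          if (priority < z->prev_priority)
              z->prev_priority = priority;
      }
      
      int main(void)
      {
          struct zone zones[MAX_ZONES];
          int temp_priority[MAX_ZONES];   /* local to this pass: unshared */
      
          for (int i = 0; i < MAX_ZONES; i++)
              zones[i].prev_priority = DEF_PRIORITY;
      
          /* Pretend we had to scan down to priority 9 to reclaim enough. */
          for (int priority = DEF_PRIORITY; priority >= 9; priority--)
              for (int i = 0; i < MAX_ZONES; i++) {
                  temp_priority[i] = priority;            /* our own view */
                  note_zone_scanning_priority(&zones[i], priority);
              }
      
          printf("scanned down to %d, zone 0 prev_priority = %d\n",
                 temp_priority[0], zones[0].prev_priority);   /* 9 and 9 */
          return 0;
      }
      ```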
      
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Cc: Christoph Lameter <clameter@engr.sgi.com>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
    • [PATCH] mm: clean up pagecache allocation · 2ae88149
      Authored by Nick Piggin
      - Consolidate page_cache_alloc
      
      - Fix splice: only the pagecache pages and filesystem data need to use
        mapping_gfp_mask.
      
      - Fix grab_cache_page_nowait: same as splice, also honour NUMA placement.
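      
      A compilable userspace sketch of the consolidation described above:
      both allocation variants funnel through one __page_cache_alloc() that
      applies mapping_gfp_mask and picks a NUMA node, instead of each caller
      open-coding the allocation.  Types, flag values, and the malloc-based
      allocator are illustrative stand-ins, not the kernel's.
      
      ```c
      #include <stdio.h>
      #include <stdlib.h>
      
      typedef unsigned gfp_t;
      #define __GFP_COLD 0x100u   /* illustrative value */
      
      struct address_space { gfp_t gfp_mask; };
      struct page { int nid; };
      
      static gfp_t mapping_gfp_mask(struct address_space *m) { return m->gfp_mask; }
      static int numa_node_id(void) { return 0; }   /* stub */
      
      /* Single allocation path: NUMA placement decided in one place. */
      static struct page *__page_cache_alloc(gfp_t gfp)
      {
          (void)gfp;   /* a real allocator would honour the mask */
          struct page *p = malloc(sizeof(*p));
          if (p)
              p->nid = numa_node_id();
          return p;
      }
      
      static struct page *page_cache_alloc(struct address_space *m)
      {
          return __page_cache_alloc(mapping_gfp_mask(m));
      }
      
      static struct page *page_cache_alloc_cold(struct address_space *m)
      {
          return __page_cache_alloc(mapping_gfp_mask(m) | __GFP_COLD);
      }
      
      int main(void)
      {
          struct address_space m = { 0xd0u };
          struct page *p = page_cache_alloc(&m);
          printf("allocated on node %d\n", p ? p->nid : -1);
          free(p);
          return 0;
      }
      ```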
      Signed-off-by: Nick Piggin <npiggin@suse.de>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
  2. 28 October 2006, 10 commits
  3. 27 October 2006, 1 commit
  4. 26 October 2006, 20 commits