1. 09 December 2010 (2 commits)
    • md: move code in to submit_flushes. · a7a07e69
      Committed by NeilBrown
      submit_flushes is called from exactly one place.
      Move the code that is before and after that call into
      submit_flushes.
      
      This has no functional change, but will make the next patch smaller
      and easier to follow.
      Signed-off-by: NeilBrown <neilb@suse.de>
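      A minimal sketch of the kind of move described above, assuming
      simplified field names (flush_pending, flush_work, md_wq) and using
      struct mddev in place of the mddev_t typedef of that era; the real
      body of submit_flushes() is elided:

        /* Before: the single caller brackets the call with setup and teardown. */
        void md_flush_request(struct mddev *mddev, struct bio *bio)
        {
                /* ... elided: record bio in mddev->flush_bio under the lock ... */
                atomic_set(&mddev->flush_pending, 1);            /* setup */
                submit_flushes(mddev);                           /* the one call site */
                if (atomic_dec_and_test(&mddev->flush_pending))  /* teardown */
                        queue_work(md_wq, &mddev->flush_work);
        }

        /* After: submit_flushes() absorbs that code, so the caller simply
         * calls it.  Behaviour is unchanged, but the next patch only has
         * to touch one function. */
        static void submit_flushes(struct mddev *mddev)
        {
                atomic_set(&mddev->flush_pending, 1);
                /* ... original body: issue a flush to every active member device ... */
                if (atomic_dec_and_test(&mddev->flush_pending))
                        queue_work(md_wq, &mddev->flush_work);
        }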
    • md: remove handling of flush_pending in md_submit_flush_data · 2b74e12e
      Committed by NeilBrown
      None of the functions called between setting flush_pending to 1 and
      the atomic_dec_and_test can change flush_pending, nor can anything
      running in any other thread (as ->flush_bio is not NULL), so the
      atomic_dec_and_test will always succeed.
      So remove the atomic_set and the atomic_dec_and_test.
      Signed-off-by: NeilBrown <neilb@suse.de>
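      A hedged sketch of the resulting function after the change; struct
      mddev stands in for the mddev_t typedef of the time, and the body
      follows the shape of md.c in 2.6.37 rather than quoting it:

        static void md_submit_flush_data(struct work_struct *ws)
        {
                struct mddev *mddev = container_of(ws, struct mddev, flush_work);
                struct bio *bio = mddev->flush_bio;

                if (bio->bi_size == 0)
                        /* an empty flush: nothing more to submit */
                        bio_endio(bio, 0);
                else {
                        /* the flush itself is done; strip the flag and
                         * submit the data part of the request */
                        bio->bi_rw &= ~REQ_FLUSH;
                        if (mddev->pers->make_request(mddev, bio))
                                generic_make_request(bio);
                }

                /* While ->flush_bio is non-NULL nothing else touches
                 * flush_pending, so the removed atomic_dec_and_test()
                 * could only ever succeed: complete unconditionally. */
                mddev->flush_bio = NULL;
                wake_up(&mddev->sb_wait);
        }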
2. 24 November 2010 (3 commits)
    • md: Call blk_queue_flush() to establish flush/fua support · be20e6c6
      Committed by Darrick J. Wong
      Before 2.6.37, the md layer had a mechanism for catching I/Os with the
      barrier flag set, and translating the barrier into barriers for all
      the underlying devices.  With 2.6.37, I/O barriers have become plain
      old flushes, and the md code was updated to reflect this.  However,
      one piece was left out -- the md layer does not tell the block layer
      that it supports flushes or FUA access at all, which results in md
      silently dropping flush requests.
      
      Since the support already seems to be there, just add this one piece
      of bookkeeping.
      Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
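      The missing piece is a single capability declaration on md's request
      queue. A minimal sketch, assuming it is placed where the array's
      queue is set up (the exact spot in md.c is an assumption):

        /* Advertise that this queue honours flush and FUA requests;
         * without this, REQ_FLUSH/REQ_FUA bios sent to the md device are
         * completed without ever reaching the member disks. */
        blk_queue_flush(mddev->queue, REQ_FLUSH | REQ_FUA);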
    • md/raid1: really fix recovery looping when single good device fails. · 8f9e0ee3
      Committed by NeilBrown
      Commit 4044ba58 supposedly fixed a
      problem where if a raid1 with just one good device gets a read-error
      during recovery, the recovery would abort and immediately restart in
      an infinite loop.
      
      However, it depended on raid1_remove_disk removing the spare device
      from the array, which does not happen in this case.  So add a test so
      that in the 'recovery_disabled' case the device will be removed.
      
      This is suitable for any kernel since 2.6.29, which is when
      recovery_disabled was introduced.
      
      Cc: stable@kernel.org
      Reported-by: Sebastian Färber <faerber@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
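      A hedged sketch of the shape of the added test in raid1_remove_disk();
      the surrounding condition approximates drivers/md/raid1.c of that
      period rather than quoting it:

        /* Normally a non-faulty device is kept while recovery is still
         * possible.  Once recovery_disabled is set, recovery has already
         * given up, so refusing the removal here would leave the spare in
         * place and let the failed recovery restart forever. */
        if (!test_bit(Faulty, &rdev->flags) &&
            !mddev->recovery_disabled &&
            mddev->degraded < conf->raid_disks) {
                err = -EBUSY;
                goto abort;
        }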
    • md: fix return value of rdev_size_change() · c26a44ed
      Committed by Justin Maggard
      When trying to grow an array by enlarging component devices,
      rdev_size_store() expects the return value of rdev_size_change() to be
      in sectors, but the actual value is returned in KB.
      
      This functionality was broken by commit dd8ac336, so this patch is
      suitable for any kernel since 2.6.30.
      
      Cc: stable@kernel.org
      Signed-off-by: Justin Maggard <jmaggard10@gmail.com>
      Signed-off-by: NeilBrown <neilb@suse.de>
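      The underlying bug is a unit mix-up: a sector is 512 bytes, so
      num_sectors / 2 is the size in KiB, which rdev_size_store() then
      misreads as a sector count. A hedged sketch of the fix in one of the
      per-superblock handlers; the body is elided and the rdev type name
      approximates the md.c of the time:

        static unsigned long long
        super_1_rdev_size_change(struct md_rdev *rdev, sector_t num_sectors)
        {
                /* ... elided: update the superblock for the larger device ... */
                md_super_wait(rdev->mddev);

                /* rdev_size_store() expects sectors, so return the sector
                 * count directly instead of "num_sectors / 2" (the KiB
                 * value), which left the caller with half the real size. */
                return num_sectors;
        }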
3. 22 November 2010 (1 commit)
4. 20 November 2010 (13 commits)
5. 19 November 2010 (21 commits)