1. 02 Aug 2008 (1 commit)
  2. 21 Jul 2008 (1 commit)
    •
      md: Protect access to mddev->disks list using RCU · 4b80991c
      Committed by NeilBrown
      All modifications, and most accesses, to the mddev->disks list are
      made under the reconfig_mutex lock.  However, there are three places
      where the list is walked without any locking.  If a reconfiguration
      happens at the same time, havoc (and an oops) can ensue.
      
      So use RCU to protect these accesses:
        - wrap them in rcu_read_{,un}lock()
        - use list_for_each_entry_rcu
        - add to the list with list_add_rcu
        - delete from the list with list_del_rcu
        - delay the 'free' with call_rcu rather than schedule_work
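
      As a minimal sketch of that pattern (illustrative names only, not
      the actual drivers/md/md.c code; writers still run under
      reconfig_mutex):

        #include <linux/rculist.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        struct rdev_example {
                struct list_head same_set;
                struct rcu_head rcu;
        };

        /* Lockless reader: safe against concurrent add/delete. */
        static void walk_disks(struct list_head *disks)
        {
                struct rdev_example *rdev;

                rcu_read_lock();
                list_for_each_entry_rcu(rdev, disks, same_set)
                        ; /* examine rdev; must not sleep here */
                rcu_read_unlock();
        }

        /* Writer: add under the mutex; immediately visible to readers. */
        static void add_disk(struct list_head *disks,
                             struct rdev_example *rdev)
        {
                list_add_rcu(&rdev->same_set, disks);
        }

        static void free_rdev_rcu(struct rcu_head *head)
        {
                kfree(container_of(head, struct rdev_example, rcu));
        }

        /* Writer: unlink, then defer the free past all RCU readers. */
        static void del_disk(struct rdev_example *rdev)
        {
                list_del_rcu(&rdev->same_set);
                call_rcu(&rdev->rcu, free_rdev_rcu);
        }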
      
      Note that export_rdev did a list_del_init on this list.  In almost all
      cases the entry was not on the list anymore, so it was a no-op and
      therefore safe.  It is no longer safe because, after list_del_rcu, we
      may not touch the list_head.
      An audit shows that export_rdev is called:
        - after unbind_rdev_from_array, in which case the delete has
           already been done,
        - after bind_rdev_to_array fails, in which case the delete isn't needed.
        - before the device has been put on a list at all (e.g. in
            add_new_disk where reading the superblock fails).
        - and in autorun_devices after a failure, when the device is on a
            different list.
      
      So remove the list_del_init call from export_rdev, and add it back
      immediately before the call to export_rdev in that last case.
      
      Note also that ->same_set is sometimes used for lists other than
      mddev->disks (e.g. candidates).  In these cases RCU is not needed.
      Signed-off-by: NeilBrown <neilb@suse.de>
  3. 11 Jul 2008 (1 commit)
  4. 28 Jun 2008 (1 commit)
    •
      Improve setting of "events_cleared" for write-intent bitmaps. · a0da84f3
      Committed by Neil Brown
      When an array is degraded, bits in the write-intent bitmap are not
      cleared, so that if the missing device is re-added, it can be synced
      by updating only those parts of the device that have changed since
      it was removed.
      
      To enable this, an 'events_cleared' value is stored.  It is the value
      of the array's event counter at the last time any bits were cleared.
      
      Sometimes, if a device disappears from an array while it is 'clean',
      the events_cleared value gets updated incorrectly (there are subtle
      ordering issues between updating events in the main metadata and in
      the bitmap metadata), resulting in the missing device appearing to
      require a full resync when it is re-added.
      
      With this patch, we update events_cleared precisely when we are about
      to clear a bit in the bitmap.  We record events_cleared when we clear
      the bit internally, and copy that to the superblock, which is written
      out before the cleared bit reaches storage.  This makes it more
      "obviously correct".
      
      We also need to update events_cleared when the event_count is going
      backwards (as happens on a dirty->clean transition of a non-degraded
      array).
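
      A hedged sketch of that ordering (the struct layout and function
      names are illustrative, not the actual drivers/md/bitmap.c code):

        #include <linux/types.h>

        struct bitmap_example {
                u64 events_cleared;     /* in-memory copy */
                u64 *sb_events_cleared; /* superblock image; written out
                                         * before the cleared bit itself
                                         * reaches storage */
        };

        /* Called precisely when a bit is about to be cleared. */
        static void note_bit_cleared(struct bitmap_example *bm, u64 events)
        {
                if (events > bm->events_cleared)
                        bm->events_cleared = events;
                /* Copy to the superblock so the on-disk value is
                 * updated before the bit is cleared on disk. */
                *bm->sb_events_cleared = bm->events_cleared;
        }

        /* On a dirty->clean transition of a non-degraded array the
         * event count goes backwards; events_cleared must follow it. */
        static void note_events_backwards(struct bitmap_example *bm,
                                          u64 events)
        {
                if (events < bm->events_cleared) {
                        bm->events_cleared = events;
                        *bm->sb_events_cleared = events;
                }
        }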
      
      Thanks to Mike Snitzer for identifying this problem and testing early
      "fixes".
      
      Cc:  "Mike Snitzer" <snitzer@gmail.com>
      Signed-off-by: Neil Brown <neilb@suse.de>
  5. 25 May 2008 (1 commit)
  6. 11 Mar 2008 (1 commit)
  7. 05 Mar 2008 (1 commit)
  8. 15 Feb 2008 (1 commit)
  9. 07 Feb 2008 (2 commits)
  10. 09 Nov 2007 (1 commit)
  11. 23 Oct 2007 (1 commit)
  12. 18 Jul 2007 (2 commits)
  13. 24 May 2007 (1 commit)
  14. 09 May 2007 (1 commit)
  15. 13 Apr 2007 (1 commit)
  16. 12 Feb 2007 (1 commit)
  17. 10 Feb 2007 (1 commit)
    •
      [PATCH] md: avoid possible BUG_ON in md bitmap handling · da6e1a32
      Committed by Neil Brown
      md/bitmap tracks how many active write requests are pending on the
      blocks associated with each bit in the bitmap, so that it knows when
      it can clear the bit (when the count hits zero).
      
      The counter has 14 bits of space, so if there are ever more than
      16383 pending writes against one block, we cannot cope.
      
      Currently the code just calls BUG_ON, as "all" drivers have request
      queue limits much smaller than this.

      However, it seems that some don't.  Apparently some multipath
      configurations can allow more than 16383 concurrent write requests.
      
      So, in this unlikely situation, instead of calling BUG_ON we now wait
      for the count to drop down a bit.  This requires a new wait_queue_head,
      some waiting code, and a wakeup call.
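
      Roughly, the shape is as follows (a hedged sketch: COUNTER_MAX and
      the helper names are illustrative, not the actual bitmap code):

        #include <linux/spinlock.h>
        #include <linux/wait.h>

        #define COUNTER_MAX ((1 << 14) - 1)   /* 16383: 14-bit count */

        static DECLARE_WAIT_QUEUE_HEAD(overflow_wait);

        /* Writer path: instead of BUG_ON(*count == COUNTER_MAX),
         * sleep until enough pending writes complete. */
        static void inc_pending(unsigned int *count, spinlock_t *lock)
        {
                spin_lock_irq(lock);
                while (*count == COUNTER_MAX) {
                        spin_unlock_irq(lock);
                        wait_event(overflow_wait, *count < COUNTER_MAX);
                        spin_lock_irq(lock);
                }
                (*count)++;
                spin_unlock_irq(lock);
        }

        /* Completion path: drop the count and wake throttled writers. */
        static void dec_pending(unsigned int *count, spinlock_t *lock)
        {
                spin_lock_irq(lock);
                if (--(*count) == COUNTER_MAX - 1)
                        wake_up(&overflow_wait);
                spin_unlock_irq(lock);
        }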
      
      Tested by limiting the counter to 20 instead of 16383 (writes go a lot slower
      in that case...).
      Signed-off-by: Neil Brown <neilb@suse.de>
      Cc: <stable@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  18. 27 Jan 2007 (1 commit)
  19. 09 Dec 2006 (1 commit)
  20. 22 Oct 2006 (1 commit)
  21. 12 Oct 2006 (1 commit)
  22. 03 Oct 2006 (2 commits)
  23. 01 Jul 2006 (1 commit)
  24. 27 Jun 2006 (8 commits)
  25. 27 Mar 2006 (1 commit)
  26. 26 Mar 2006 (1 commit)
  27. 25 Mar 2006 (1 commit)
  28. 15 Jan 2006 (1 commit)
  29. 07 Jan 2006 (2 commits)