1. 24 Jul 2007, 1 commit
  2. 18 Jul 2007, 5 commits
  3. 13 Jul 2007, 1 commit
    • xor: make 'xor_blocks' a library routine for use with async_tx · 685784aa
      Authored by Dan Williams
      The async_tx API tries to use a DMA engine for an operation, but will fall
      back to an optimized software routine otherwise.  Xor support is
      implemented using the raid5 xor routines.  For organizational purposes
      these routines are moved to a common area.
      
      The following fixes are also made:
      * rename xor_block => xor_blocks, suggested by Adrian Bunk
      * ensure that xor.o initializes before md.o in the built-in case
      * checkpatch.pl fixes
      * mark calibrate_xor_blocks __init, Adrian Bunk
      
      Cc: Adrian Bunk <bunk@stusta.de>
      Cc: NeilBrown <neilb@suse.de>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
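      As context for the fallback path, here is a minimal userspace sketch of
      the generic word-at-a-time XOR loop that a routine like xor_blocks wraps;
      the in-kernel version picks a calibrated, architecture-optimized template
      at boot, and the function name below is illustrative, not the exported
      symbol:

          #include <stddef.h>

          /* XOR each of src_count source blocks into dest, one machine word
           * at a time; 'bytes' is assumed to be a multiple of the word size. */
          static void xor_blocks_sketch(unsigned int src_count, unsigned int bytes,
                                        unsigned long *dest, unsigned long **srcs)
          {
                  size_t words = bytes / sizeof(unsigned long);

                  for (unsigned int i = 0; i < src_count; i++)
                          for (size_t w = 0; w < words; w++)
                                  dest[w] ^= srcs[i][w];
          }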
  4. 24 May 2007, 1 commit
  5. 11 May 2007, 1 commit
    • md: improve the is_mddev_idle test · 435b71be
      Authored by NeilBrown
      During a 'resync' or similar activity, md checks if the devices in the
      array are otherwise active and winds back resync activity when they are.
      This test is done in is_mddev_idle, and it is somewhat fragile - it
      sometimes thinks there is non-sync io when there isn't.
      
      The test compares the total sectors of io (disk_stat_read) with the sectors
      of resync io (disk->sync_io).  This has problems because total sectors gets
      updated when a request completes, while resync io gets updated when the
      request is submitted.  The time difference can cause large differences
      between the two which do not actually imply non-resync activity.  The test
      currently allows for some fuzz (+/- 4096) but there are some cases when it
      is not enough.
      
      The test currently looks for any (non-fuzz) difference, either positive or
      negative.  This clearly is not needed.  Any non-sync activity will cause
      the total sectors to grow faster than the sync_io count (never slower) so
      we only need to look for a positive difference.
      
      If we do this then the amount of in-flight sync io will never cause the
      appearance of non-sync IO.  Once enough non-sync IO to worry about starts
      happening, resync will be slowed down and the measurements will thus be
      more precise (as there is less in-flight) and control of resync will still
      be suitably responsive.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
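      A minimal sketch of the relaxed test described above, using simplified
      per-device counters (the struct and field names are illustrative, not
      the kernel's): because total sectors can only ever run ahead of the
      sync count, only a positive excess beyond the fuzz signals non-sync io.

          #include <stdbool.h>

          struct dev_counters {
                  long total_sectors;  /* bumped when a request completes */
                  long sync_sectors;   /* bumped when a resync request is submitted */
                  long last_events;    /* snapshot from the previous idle check */
          };

          static bool dev_is_idle(struct dev_counters *d)
          {
                  long curr_events = d->total_sectors - d->sync_sectors;

                  /* The old test flagged any |difference| > fuzz; now only
                   * growth beyond the fuzz counts as non-sync activity. */
                  if (curr_events - d->last_events > 4096) {
                          d->last_events = curr_events;
                          return false;
                  }
                  return true;
          }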
  6. 10 May 2007, 7 commits
  7. 08 May 2007, 1 commit
  8. 05 Apr 2007, 1 commit
  9. 28 Mar 2007, 2 commits
  10. 02 Mar 2007, 3 commits
  11. 15 Feb 2007, 2 commits
  12. 13 Feb 2007, 1 commit
  13. 27 Jan 2007, 2 commits
  14. 23 Dec 2006, 1 commit
    • [PATCH] md: fix a few problems with the interface (sysfs and ioctl) to md · 3f9d7b0d
      Authored by NeilBrown
      While developing more functionality in mdadm I found some bugs in md...
      
      - When we remove a device from an inactive array (write 'remove' to
        the 'state' sysfs file - see 'state_store') we should not
        update the superblock information - as we may not have
        read and processed it all properly yet.
      
      - initialise all raid_disk entries to '-1' else the 'slot' sysfs file
        will claim '0' for all devices in an array before the array is
        started.
      
      - allow '\n' to be present at the end of words written to
        sysfs files (see the sketch after this list)
      
      - when we use SET_ARRAY_INFO to set the md metadata version,
        set the flag to say that there is persistent metadata.
      
      - allow GET_BITMAP_FILE to be called on an array that hasn't
        been started yet.
      Signed-off-by: Neil Brown <neilb@suse.de>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>
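      As a sketch of the trailing-newline point above: echo(1) appends '\n'
      to anything written into sysfs, so a store handler can ignore one
      trailing newline before comparing the word.  The helper below is
      hypothetical in name and shape, not the kernel's actual code:

          #include <string.h>

          /* Return nonzero if buf[0..len) equals 'word', tolerating one
           * trailing '\n' (as produced by `echo remove > .../state`). */
          static int word_matches(const char *buf, size_t len, const char *word)
          {
                  size_t wlen = strlen(word);

                  if (len && buf[len - 1] == '\n')  /* ignore trailing newline */
                          len--;
                  return len == wlen && memcmp(buf, word, wlen) == 0;
          }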
  15. 11 Dec 2006, 3 commits
  16. 09 Dec 2006, 3 commits
  17. 08 Dec 2006, 1 commit
  18. 09 Nov 2006, 2 commits
  19. 04 Nov 2006, 1 commit
  20. 29 Oct 2006, 1 commit