1. 11 December 2009, 6 commits
  2. 22 June 2009, 2 commits
  3. 23 May 2009, 1 commit
  4. 03 April 2009, 2 commits
  5. 06 January 2009, 4 commits
  6. 22 October 2008, 1 commit
  7. 21 July 2008, 1 commit
  8. 25 April 2008, 7 commits
  9. 08 February 2008, 1 commit
  10. 20 October 2007, 1 commit
  11. 10 May 2007, 4 commits
  12. 09 December 2006, 2 commits
  13. 27 June 2006, 5 commits
  14. 03 February 2006, 1 commit
  15. 02 February 2006, 1 commit
  16. 07 January 2006, 1 commit
      [PATCH] make dm-mirror not issue invalid resync requests · ac81b2ee
      Authored by Darrick J. Wong
      I've been attempting to set up a (Host)RAID mirror with dm_mirror on
      2.6.14.3, and I've been having a strange little problem.  The configuration
      in question is a set of 9GB SCSI disks that have 17942584 sectors.  I set
      up the dm_mirror table as such:
      
      0 17942528 mirror core 2 2048 nosync 2 8:48 0 8:64 0
      
      If I'm not mistaken, this sets up a 9GB RAID1 mirror with a 1MB
      (2048-sector) region size across both SCSI disks.  The sector count of
      the dm device is less than the
      size of the disks, so we shouldn't fall off the end.  However, I always get
      the messages like this in dmesg when I set up the dm table:
      
      attempt to access beyond end of device
      sdd: rw=0, want=17958656, limit=17942584
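
      (For reference, the numbers are consistent with the analysis below:
      the table maps 17942528 sectors with a 2048-sector region size, i.e.
      17942528 / 2048 = 8761 regions; 8761 rounded up to the next multiple
      of 32 is 8768; and region 8768 would span sectors 17956864 to 17958912,
      past the 17942584-sector limit and covering the "want" sector reported
      above.)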
      
      Clearly, something is trying to read sectors past the end of the drive.  I
      traced it down to the __rh_recovery_prepare function in dm-raid1.c, which
      gets called when we're putting the mirror set together.  This function
      calls the dirty region log's get_resync_work function to see if there's any
      resync that needs to be done, and queues up any areas that are out of sync.
       The log's get_resync_work function is actually a pointer to the
      core_get_resync_work function in dm-log.c.
      
      The core_get_resync_work function queries a bitset lc->sync_bits to find
      out if there are any regions that are out of date (i.e.  the bit is 0),
      which is where the problem occurs.  If every bit in lc->sync_bits is 1
      (which is the case when we've just configured a new RAID1 with the nosync
      option), find_next_zero_bit does NOT return the size parameter
      (lc->region_count in this case), it returns the size parameter rounded up
      to the nearest multiple of 32!  I don't know if this is intentional, but
      i386 and x86_64 both exhibit this behavior.
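
      To make the effect concrete, here is a minimal userspace sketch of the
      behaviour described above.  It is not the kernel's find_next_zero_bit,
      just a word-granular search over a bitset that has been memset to all
      ones, pad bits of the last 32-bit word included (an assumption about
      how the nosync log is initialised, but one that matches the rounded-up
      result described above).  With the 8761 regions from the table above
      it reports 8768, i.e. the size rounded up to a multiple of 32:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <stdint.h>

      /* Word-granular zero-bit search: if every word (pad bits of the final,
       * partial word included) is all ones, the result is nwords * 32, i.e.
       * the bitset size rounded up to a multiple of 32. */
      static unsigned int word_granular_find_zero(const uint32_t *bits,
                                                  unsigned int nwords)
      {
              unsigned int w, b;

              for (w = 0; w < nwords; w++) {
                      if (bits[w] != 0xffffffffu)
                              for (b = 0; b < 32; b++)
                                      if (!(bits[w] & (1u << b)))
                                              return w * 32 + b;
              }
              return nwords * 32;
      }

      int main(void)
      {
              unsigned int region_count = 8761;      /* 17942528 / 2048 */
              unsigned int nwords = (region_count + 31) / 32;
              uint32_t *sync_bits = malloc(nwords * sizeof(*sync_bits));

              if (!sync_bits)
                      return 1;

              /* "nosync": every byte of the bitset is set, pad bits included */
              memset(sync_bits, 0xff, nwords * sizeof(*sync_bits));

              /* Prints 8768, not 8761: an index past the last valid region */
              printf("first zero bit: %u (region_count = %u)\n",
                     word_granular_find_zero(sync_bits, nwords), region_count);

              free(sync_bits);
              return 0;
      }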
      
      In any case, the statement "if (*region == lc->region_count)" looks like
      it's supposed to catch the case where there are no regions to resync and
      return 0.  Since find_next_zero_bit apparently has a habit of returning
      a value that's larger than lc->region_count, the enclosed patch changes
      the equality test to a greater-than test so that we don't try to resync
      areas outside of the RAID1 region.  Seeing as the HostRAID metadata
      lives just past the end of the RAID1 data, mucking around in that area
      is not a good idea.
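
      In code terms the change is just that comparison (shown schematically
      here, not as the literal diff; the replacement still has to catch the
      exact-equality case, so "greater-than" in practice means at-or-past
      region_count):

              /* before: only an exact match meant "nothing left to resync",
               * so a word-rounded index like 8768 slipped through and was
               * queued for recovery */
              if (*region == lc->region_count)
                      return 0;

              /* after: any index at or past region_count means there is no
               * valid resync work, so out-of-range regions are never handed
               * to the mirror */
              if (*region >= lc->region_count)
                      return 0;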
      
      I suppose another way to fix this would be to amend find_next_zero_bit so
      that it doesn't return values larger than "size", but I don't know if
      there's a reason for the current behavior.
      Signed-off-by: Darrick J. Wong <djwong@us.ibm.com>
      Acked-by: Alasdair G Kergon <agk@redhat.com>
      Signed-off-by: Andrew Morton <akpm@osdl.org>
      Signed-off-by: Linus Torvalds <torvalds@osdl.org>