1. 17 Dec 2018, 11 commits
2. 21 Oct 2018, 2 commits
3. 15 Oct 2018, 4 commits
4. 18 Aug 2018, 1 commit
5. 06 Aug 2018, 12 commits
6. 19 Jul 2018, 1 commit
    • Btrfs: fix file data corruption after cloning a range and fsync · bd3599a0
      Filipe Manana committed
      When we clone a range into a file we can end up dropping existing
      extent maps (or trimming them) and replacing them with new ones if the
      range to be cloned overlaps with a range in the destination inode.
      When that happens we add the new extent maps to the list of modified
      extents in the inode's extent map tree, so that a "fast" fsync (the flag
      BTRFS_INODE_NEEDS_FULL_SYNC not set in the inode) will see the extent maps
      and log the corresponding extent items. However, at the end of the
      range cloning operation we truncate all the pages in the affected range
      (in order to ensure future reads will not get stale data). Sometimes
      this truncation releases the corresponding extent maps as well as the
      pages from the page cache. If this happens, then a "fast" fsync
      operation will miss logging
      some extent items, because it relies exclusively on the extent maps being
      present in the inode's extent tree, leading to data loss/corruption if
      the fsync ends up using the same transaction used by the clone operation
      (that transaction was not committed in the meanwhile). An extent map is
      released through the callback btrfs_invalidatepage(), which gets called by
      truncate_inode_pages_range(), and it calls __btrfs_releasepage(). The
      latter ends up calling try_release_extent_mapping(), which will release
      the extent map if some conditions are met, such as the file size being
      greater than 16MiB, the gfp flags allowing blocking, the range not
      being locked (which is the case during the clone operation) and the
      extent map not being flagged as pinned (also the case for cloning).
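      Those conditions can be summarized with a purely illustrative helper
      (the function name is hypothetical and the checks are paraphrased from
      this description, not copied from the kernel sources):

        /*
         * Illustration only: the conditions under which the page
         * invalidation path ends up dropping an extent map.  The real
         * checks live in try_release_extent_mapping() and its callers.
         */
        static bool may_release_extent_map(struct inode *inode, gfp_t mask,
                                           struct extent_map *em,
                                           bool range_locked)
        {
                if (i_size_read(inode) <= SZ_16M)   /* small files are kept */
                        return false;
                if (!gfpflags_allow_blocking(mask)) /* must be able to block */
                        return false;
                if (range_locked)                   /* not the case for clone */
                        return false;
                if (test_bit(EXTENT_FLAG_PINNED, &em->flags))
                        return false;               /* not the case for clone */
                return true;
        }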
      
      The following example, turned into a test for fstests, reproduces the
      issue:
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
      
        $ xfs_io -f -c "pwrite -S 0x18 9000K 6908K" /mnt/foo
        $ xfs_io -f -c "pwrite -S 0x20 2572K 156K" /mnt/bar
      
        $ xfs_io -c "fsync" /mnt/bar
        # reflink destination offset corresponds to the size of file bar,
        # 2728Kb minus 4Kb.
        $ xfs_io -c ""reflink ${SCRATCH_MNT}/foo 0 2724K 15908K" /mnt/bar
        $ xfs_io -c "fsync" /mnt/bar
      
        $ md5sum /mnt/bar
        95a95813a8c2abc9aa75a6c2914a077e  /mnt/bar
      
        <power fail>
      
        $ mount /dev/sdb /mnt
        $ md5sum /mnt/bar
        207fd8d0b161be8a84b945f0df8d5f8d  /mnt/bar
        # digest should be 95a95813a8c2abc9aa75a6c2914a077e like before the
        # power failure
      
      In the above example, the destination offset of the clone operation
      corresponds to the size of the "bar" file minus 4Kb. So during the clone
      operation, the extent map covering the range from 2572Kb to 2728Kb gets
      trimmed so that it ends at offset 2724Kb, and a new extent map covering
      the range from 2724Kb to 11724Kb is created. So at the end of the clone
      operation when we ask to truncate the pages in the range from 2724Kb to
      2724Kb + 15908Kb, the page invalidation callback ends up removing the new
      extent map (through try_release_extent_mapping()) when the page at offset
      2724Kb is passed to that callback.
      
      Fix this by setting the bit BTRFS_INODE_NEEDS_FULL_SYNC whenever an
      extent map is removed at try_release_extent_mapping(), forcing the next
      fsync to search for modified extents in the fs/subvolume tree instead
      of relying on the presence of extent maps in memory. This way we can
      continue doing a "fast" fsync if the destination range of a clone
      operation does not overlap with an existing range or if any of the
      criteria necessary to remove an extent map at
      try_release_extent_mapping() is not met (file size not bigger than
      16MiB or gfp flags that do not allow blocking).
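      A minimal sketch of that change, shown as a fragment of the extent map
      removal path inside try_release_extent_mapping() (names such as page,
      map and em come from the surrounding function and are paraphrased from
      this description, not the verbatim upstream diff):

        /*
         * Sketch: right before the extent map is dropped, mark the inode
         * so the next fsync falls back to a full search of the
         * fs/subvolume tree instead of trusting in-memory extent maps.
         */
        set_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
                &BTRFS_I(page->mapping->host)->runtime_flags);
        remove_extent_mapping(map, em);
        /* once for the rb tree */
        free_extent_map(em);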
      
      CC: stable@vger.kernel.org # 3.16+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
7. 22 Jun 2018, 1 commit
    • Btrfs: fix physical offset reported by fiemap for inline extents · f0986318
      Filipe Manana committed
      Commit 9d311e11 ("Btrfs: fiemap: pass correct bytenr when
      fm_extent_count is zero") introduced a regression where we no longer
      report 0 as the physical offset for inline extents (and other extents
      with a special block_start value). This is because it always sets the
      variable used to report the physical offset ("disko") to em->block_start
      plus some offset, and em->block_start has the value 18446744073709551614
      ((u64) -2) for inline extents.
      
      This made the btrfs test 004 (from fstests) often fail. For example, for
      a file with an inline extent we have the following items in the
      subvolume tree:
      
          item 101 key (418 INODE_ITEM 0) itemoff 11029 itemsize 160
                 generation 25 transid 38 size 1525 nbytes 1525
                 block group 0 mode 100666 links 1 uid 0 gid 0 rdev 0
                 sequence 0 flags 0x2(none)
                 atime 1529342058.461891730 (2018-06-18 18:14:18)
                 ctime 1529342058.461891730 (2018-06-18 18:14:18)
                 mtime 1529342058.461891730 (2018-06-18 18:14:18)
                 otime 1529342055.869892885 (2018-06-18 18:14:15)
          item 102 key (418 INODE_REF 264) itemoff 11016 itemsize 13
                 index 25 namelen 3 name: fc7
          item 103 key (418 EXTENT_DATA 0) itemoff 9470 itemsize 1546
                 generation 38 type 0 (inline)
                 inline extent data size 1525 ram_bytes 1525 compression 0 (none)
      
      Then when test 004 invoked fiemap against the file it got a non-zero
      physical offset:
      
       $ filefrag -v /mnt/p0/d4/d7/fc7
       Filesystem type is: 9123683e
       File size of /mnt/p0/d4/d7/fc7 is 1525 (1 block of 4096 bytes)
        ext:     logical_offset:        physical_offset: length:   expected: flags:
          0:        0..    4095: 18446744073709551614..      4093:   4096:             last,not_aligned,inline,eof
       /mnt/p0/d4/d7/fc7: 1 extent found
      
      This resulted in the test failing like this:
      
      btrfs/004 49s ... [failed, exit status 1]- output mismatch (see /home/fdmanana/git/hub/xfstests/results//btrfs/004.out.bad)
          --- tests/btrfs/004.out	2016-08-23 10:17:35.027012095 +0100
          +++ /home/fdmanana/git/hub/xfstests/results//btrfs/004.out.bad	2018-06-18 18:15:02.385872155 +0100
          @@ -1,3 +1,10 @@
           QA output created by 004
           *** test backref walking
          -*** done
          +./tests/btrfs/004: line 227: [: 7.55578637259143e+22: integer expression expected
          +ERROR: 7.55578637259143e+22 is not a valid numeric value.
          +unexpected output from
          +	/home/fdmanana/git/hub/btrfs-progs/btrfs inspect-internal logical-resolve -s 65536 -P 7.55578637259143e+22 /home/fdmanana/btrfs-tests/scratch_1
          ...
          (Run 'diff -u tests/btrfs/004.out /home/fdmanana/git/hub/xfstests/results//btrfs/004.out.bad'  to see the entire diff)
      Ran: btrfs/004
      
      The large number in scientific notation reported as an invalid numeric
      value comes from the filter passed to perl, which multiplies the
      physical offset by the block size reported by fiemap
      (18446744073709551614 * 4096 is approximately 7.5557863725914e+22).
      
      So fix this by ensuring the physical offset is always set to 0 when we
      are processing an extent with a special block_start value.
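      A sketch of that idea, using the sentinel names from the extent map
      code (paraphrased, not the exact upstream diff):

        /*
         * Sketch: extents whose block_start is one of the special
         * sentinels have no physical location on disk, so report a
         * physical offset of 0 for them and only use block_start for
         * regular, allocated extents.
         */
        if (em->block_start == EXTENT_MAP_HOLE ||
            em->block_start == EXTENT_MAP_INLINE ||
            em->block_start == EXTENT_MAP_DELALLOC)
                disko = 0;
        else
                disko = em->block_start + offset_in_extent;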
      
      Fixes: 9d311e11 ("Btrfs: fiemap: pass correct bytenr when fm_extent_count is zero")
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
8. 07 Jun 2018, 1 commit
    • Btrfs: fiemap: pass correct bytenr when fm_extent_count is zero · 9d311e11
      Robbie Ko committed
      [BUG]
      fm_mapped_extents is not correct when fm_extent_count is 0.
      For example:
         # mount /dev/vdb5 /mnt/btrfs
         # dd if=/dev/zero bs=16K count=4 oflag=dsync of=/mnt/btrfs/file
         # xfs_io -c "fiemap -v" /mnt/btrfs/file
         /mnt/btrfs/file:
         EXT: FILE-OFFSET      BLOCK-RANGE      TOTAL FLAGS
           0: [0..127]:        25088..25215       128   0x1
      
      When user space wants to get the number of file extents, it sets
      fm_extent_count to 0 when calling fiemap and then reads
      fm_mapped_extents.
      
      In the above example, fiemap will return with fm_mapped_extents set to 4,
      but it should be 1 since there's only one entry in the output.
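      For reference, a small stand-alone user-space program exercising that
      pattern could look like the following (illustrative only, not part of
      the patch):

        /*
         * Ask FIEMAP only for the number of mapped extents by passing
         * fm_extent_count == 0 and reading fm_mapped_extents back.  With
         * the bug described above this printed 4 for the example file,
         * while the expected answer is 1.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>
        #include <linux/fiemap.h>

        int main(int argc, char **argv)
        {
                struct fiemap fm;
                int fd;

                if (argc != 2) {
                        fprintf(stderr, "usage: %s <file>\n", argv[0]);
                        return 1;
                }
                fd = open(argv[1], O_RDONLY);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                memset(&fm, 0, sizeof(fm));
                fm.fm_start = 0;
                fm.fm_length = ~0ULL;      /* map the whole file */
                fm.fm_extent_count = 0;    /* count only, no extent records */
                if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
                        perror("FS_IOC_FIEMAP");
                        close(fd);
                        return 1;
                }
                printf("fm_mapped_extents = %u\n", fm.fm_mapped_extents);
                close(fd);
                return 0;
        }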
      
      [REASON]
      The problem is that disko is only set if fieinfo->fi_extents_max is
      set, and this member is initialized, in the generic ioctl_fiemap()
      function, to the value of the user-passed fm_extent_count. So when the
      user passes 0, fi_extents_max is also zero and btrfs never initializes
      disko at all. Eventually this leads to emit_fiemap_extent() being
      called with a bogus 'phys' argument, preventing proper merging of
      fiemap entries.
      
      [FIX]
      Move the disko initialization earlier in extent_fiemap(), making it
      independent of user-passed arguments and allowing emit_fiemap_extent()
      to properly handle consecutive extent entries.
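      A rough sketch of that change (paraphrased from the description above,
      not the exact upstream diff):

        /*
         * Sketch: previously this assignment was guarded by
         * "if (fieinfo->fi_extents_max)", so it never ran when user space
         * passed fm_extent_count == 0 and emit_fiemap_extent() saw a bogus
         * physical offset.  Computing it unconditionally for regular
         * extents lets adjacent entries be merged either way.
         */
        disko = em->block_start + offset_in_extent;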
      Signed-off-by: Robbie Ko <robbieko@synology.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
9. 31 May 2018, 1 commit
10. 29 May 2018, 5 commits
11. 12 Apr 2018, 1 commit