1. 25 Jan 2013, 3 commits
    • Btrfs: use right range to find checksum for compressed extents · 192000dd
      Committed by Liu Bo
      For compressed extents, the checksum range is covered by the disk length,
      and the disk length differs from the ram length, so we need to use the
      disk length to look up the right checksums (see the sketch after this entry).
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
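      A minimal standalone sketch of the idea, using an illustrative struct rather
      than the real btrfs_file_extent_item: checksums cover the bytes actually
      written to disk, so for a compressed extent the lookup range must be the
      disk length, not the uncompressed ram length.

      #include <stdint.h>

      /* Illustrative model of a file extent; field names only loosely mirror
       * the real btrfs item. */
      struct file_extent {
              uint64_t disk_bytenr;     /* where the extent starts on disk */
              uint64_t disk_num_bytes;  /* compressed (on-disk) length */
              uint64_t ram_bytes;       /* uncompressed (in-memory) length */
              int compressed;           /* nonzero for a compressed extent */
      };

      /* Checksums are stored for the on-disk bytes, so a compressed extent's
       * checksum lookup must use the disk length; the larger ram length would
       * walk past the stored checksums. */
      static uint64_t csum_lookup_len(const struct file_extent *fe)
      {
              return fe->compressed ? fe->disk_num_bytes : fe->ram_bytes;
      }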
    • Btrfs: fix panic when recovering tree log · b0175117
      Committed by Josef Bacik
      A user reported a BUG_ON(ret) that occurred during tree log replay.  ret was
      -EAGAIN, so what I think happened is that we removed an extent that covered
      both a bitmap entry and an extent entry.  We remove the part from the bitmap
      and return -EAGAIN, then search for the next piece we want to remove, which
      happens to be an entire extent entry, so we just free the sucker and return.
      The problem is that ret is still set to -EAGAIN, so we trip the BUG_ON()
      (the stale-error pattern is sketched after this entry).  The user used
      btrfs-zero-log, so I'm not 100% sure this is what happened, so I've added a
      WARN_ON() to catch the other possibility.  Thanks,
      Reported-by: Jan Steffens <jan.steffens@gmail.com>
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
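      A toy sketch of the stale-error pattern described above, with hypothetical
      helper names standing in for the free-space-cache code; the point is only
      that the transient -EAGAIN must be cleared once the rest of the range is
      freed from a plain extent entry, or it leaks out and trips BUG_ON(ret).

      #include <errno.h>
      #include <stdint.h>

      /* Hypothetical stand-ins: the bitmap helper may return -EAGAIN to ask
       * the caller to retry with the updated offset/bytes. */
      int remove_from_bitmap(uint64_t *offset, uint64_t *bytes);
      int free_extent_entry(uint64_t offset, uint64_t bytes);

      static int remove_free_space(uint64_t offset, uint64_t bytes)
      {
              int ret;

      again:
              ret = remove_from_bitmap(&offset, &bytes);
              if (ret == -EAGAIN) {
                      /* Only part of the range lived in a bitmap; -EAGAIN is a
                       * request to retry, not a real failure, so reset it before
                       * continuing or it escapes to the caller's BUG_ON(ret). */
                      ret = 0;
                      goto again;
              }
              if (ret)
                      return ret;

              /* Whatever is left sits in a plain extent entry. */
              if (bytes)
                      ret = free_extent_entry(offset, bytes);
              return ret;
      }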
    • Btrfs: do not allow logged extents to be merged or removed · 201a9038
      Committed by Josef Bacik
      We drop the extent map tree lock while we are logging extents, so somebody
      could come in and merge another extent into this one and screw up our
      logging, or they could even remove us from the list, which would keep us
      from logging the extent or freeing our ref on it.  So we need to make sure
      not to clear LOGGING until after the extent is logged; only then can it be
      merged with adjacent extents (a minimal sketch of the guard follows this
      entry).  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
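      A minimal sketch of the guard, using an illustrative flag bit rather than
      the kernel's real extent-map flags: while the LOGGING bit is set, the map
      is neither merged with its neighbours nor removed, and the merge is only
      attempted after logging clears the bit.

      #include <stdbool.h>
      #include <stdint.h>

      #define EM_LOGGING (1u << 0)   /* illustrative stand-in for LOGGING */

      struct ext_map {
              uint64_t start;
              uint64_t len;
              unsigned int flags;
      };

      /* An extent map still being copied into the log must stay pinned in the
       * tree: merging it into a neighbour or dropping it would change (or free)
       * the very extent the log is recording. */
      static bool can_merge_or_remove(const struct ext_map *em)
      {
              return !(em->flags & EM_LOGGING);
      }

      /* Once the extent has been written to the log, clear the bit and only
       * then try to merge with adjacent maps. */
      static void finish_logging(struct ext_map *em)
      {
              em->flags &= ~EM_LOGGING;
              /* deferred merge with adjacent maps would happen here */
      }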
  2. 22 Jan 2013, 3 commits
  3. 20 Jan 2013, 5 commits
  4. 15 Jan 2013, 14 commits
  5. 19 Dec 2012, 2 commits
  6. 18 Dec 2012, 2 commits
    • Btrfs: fix a bug of per-file nocow · 213490b3
      Committed by Liu Bo
      Users reported a bug; the reproducer is:
      $ mkfs.btrfs /dev/loop0
      $ mount /dev/loop0 /mnt/btrfs/
      $ mkdir /mnt/btrfs/dir
      $ chattr +C /mnt/btrfs/dir/
      $ dd if=/dev/zero of=/mnt/btrfs/dir/foo bs=4K count=10;
      $ lsattr /mnt/btrfs/dir/foo
      ---------------C- /mnt/btrfs/dir/foo
      $ filefrag /mnt/btrfs/dir/foo
      /mnt/btrfs/dir/foo: 1 extent found    ---> an extent
      $ dd if=/dev/zero of=/mnt/btrfs/dir/foo bs=4K count=1 seek=5 conv=notrunc,nocreat; sync
      $ filefrag /mnt/btrfs/dir/foo
      /mnt/btrfs/dir/foo: 3 extents found   ---> with nocow, btrfs breaks the extent into three parts
      
      The newly created file should not only inherit the NODATACOW flag but also
      honor the NODATASUM flag, because we must do COW on a file extent that
      carries a checksum (a sketch of the flag inheritance follows this entry).
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
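      A minimal sketch of the inheritance rule, with illustrative flag and field
      names: a file created inside a +C (NODATACOW) directory has to take
      NODATASUM along with NODATACOW, because an extent that carries checksums
      can never be overwritten in place.

      /* Illustrative inode-flag bits; the real kernel values differ. */
      #define FL_NODATACOW  (1u << 0)
      #define FL_NODATASUM  (1u << 1)

      struct toy_inode {
              unsigned int flags;
      };

      /* On file creation, inherit nocow from the parent directory.  NODATASUM
       * must come with it: if the new file's extents had checksums, every
       * overwrite would still be forced through COW to keep those checksums
       * valid, defeating the +C attribute. */
      static void inherit_nocow(struct toy_inode *inode, const struct toy_inode *dir)
      {
              if (dir->flags & FL_NODATACOW)
                      inode->flags |= FL_NODATACOW | FL_NODATASUM;
      }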
    • Btrfs: fix hash overflow handling · 9c52057c
      Committed by Chris Mason
      The handling for directory crc hash overflows was fairly obscure:
      split_leaf returns EOVERFLOW when we try to extend the item, and that is
      supposed to bubble up to userland.  For a while it did, but along the
      way we added better handling of errors and forced the FS readonly if we
      hit IO errors during the directory insertion.
      
      Along the way, we started testing only for EEXIST, and the EOVERFLOW case
      was dropped.  The end result is that we may force the FS readonly if we
      catch a directory hash bucket overflow.
      
      This fixes a few problem spots.  First, I add tests for EOVERFLOW in the
      places where we can safely just return the error up the chain.
      
      btrfs_rename is harder, though, because it tries to insert the new
      directory item only after it has already unlinked anything the rename
      was going to overwrite.  Rather than adding very complex logic, I added
      a helper to test for the hash overflow case early, while it is still safe
      to bail out.
      
      Snapshot and subvolume creation had a similar problem, so they now use
      the new helper too (a simplified sketch of the error split follows this
      entry).
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      Reported-by: Pascal Junod <pascal@junod.info>
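      A simplified sketch of the error split, with hypothetical function names:
      EEXIST and EOVERFLOW are conditions to hand back to the caller (and
      ultimately userland), while only genuine failures force the filesystem
      read-only; an early overflow check lets rename bail out before it has
      unlinked anything.

      #include <errno.h>
      #include <stdbool.h>

      /* Hypothetical stand-ins for the real insertion / overflow-check paths. */
      int insert_dir_item(const char *name);      /* may return -EEXIST/-EOVERFLOW */
      bool dir_name_would_overflow(const char *name);
      void force_readonly(void);

      static int add_dir_entry(const char *name)
      {
              int ret = insert_dir_item(name);

              /* A hash-bucket overflow (or an existing name) is not corruption:
               * report it to userland instead of forcing the FS read-only. */
              if (ret == -EEXIST || ret == -EOVERFLOW)
                      return ret;
              if (ret < 0)
                      force_readonly();   /* real IO error or corruption */
              return ret;
      }

      static int do_rename(const char *new_name)
      {
              /* Check the overflow case up front, while it is still safe to
               * bail out -- before the destination has been unlinked. */
              if (dir_name_would_overflow(new_name))
                      return -EOVERFLOW;
              /* ... unlink any existing target, then insert the new name ... */
              return add_dir_entry(new_name);
      }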
  7. 17 Dec 2012, 11 commits