1. 11 April 2015, 4 commits
    • Btrfs: reserve space for block groups · cb723e49
      Committed by Josef Bacik
      This changes our delayed refs calculations to include the space needed
      to write back dirty block groups.
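
      As a rough illustration (the helper name and per-item cost below are
      assumptions, not the kernel code), the reservation now has to grow with
      the number of dirty block groups as well as with the pending ref heads:

        #include <stdint.h>

        /* Illustrative only: reserve one worst-case metadata block per pending
         * delayed ref head plus one per block group item that will be written
         * back dirty at commit time. */
        static uint64_t delayed_refs_reserve(uint64_t nr_ref_heads,
                                             uint64_t nr_dirty_block_groups,
                                             uint64_t nodesize)
        {
                return (nr_ref_heads + nr_dirty_block_groups) * nodesize;
        }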
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: refill block reserves during truncate · 28f75a0e
      Committed by Chris Mason
      When truncate starts, it allocates some space in the block reserves so
      that we'll have enough to update metadata along the way.
      
      For very large files, we can easily go through all of that space as we
      loop through the extents.  This changes truncate to refill the space
      reservation as it progresses through the file.
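
      A minimal sketch of the refill idea (the names and threshold handling are
      hypothetical, not the btrfs implementation): rather than reserving
      everything up front, top the reservation back up whenever it runs low
      while walking the extents:

        #include <stdbool.h>
        #include <stdint.h>

        struct reserve { uint64_t size; uint64_t used; };

        /* Hypothetical refill step, run once per truncate loop iteration. */
        static bool refill_if_low(struct reserve *rsv, uint64_t need,
                                  bool (*alloc_more)(uint64_t bytes))
        {
                if (rsv->size - rsv->used >= need)
                        return true;            /* still enough for this step */
                if (!alloc_more(need))
                        return false;           /* caller must flush and retry */
                rsv->size += need;
                return true;
        }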
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: account for crcs in delayed ref processing · 1262133b
      Committed by Josef Bacik
      As we delete large extents, we end up doing huge amounts of COW in order
      to delete the corresponding crcs.  This adds accounting so that we keep
      track of that space and flushing of delayed refs so that we don't build
      up too much delayed crc work.
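
      To get a feel for the numbers (the constants below are assumptions for a
      4K sector size and crc32c checksums, not values taken from the kernel),
      the checksum metadata that has to be COWed away scales linearly with the
      size of the deleted extent:

        #include <stdint.h>

        /* Roughly one 4-byte crc32c per 4K sector, rounded up to the number of
         * csum-tree leaves that must be COWed to delete them. */
        static uint64_t csum_leaves_to_cow(uint64_t extent_bytes)
        {
                const uint64_t sectorsize = 4096;
                const uint64_t csum_size = 4;              /* crc32c */
                const uint64_t leaf_payload = 16 * 1024;   /* assumed usable leaf space */
                uint64_t csum_bytes = (extent_bytes / sectorsize) * csum_size;

                return (csum_bytes + leaf_payload - 1) / leaf_payload;
        }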
      
      This helps limit the delayed work that must be done at commit time and
      tries to avoid ENOSPC aborts because the crcs eat all the global
      reserves.
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: actively run the delayed refs while deleting large files · 28ed1345
      Committed by Chris Mason
      When we are deleting large files with large extents, we are building up
      a huge set of delayed refs for processing.  Truncate isn't checking
      often enough to see if we need to back off and process those, or let
      a commit proceed.
      
      The end result is long stalls after the rm, and very long commit times.
      During the commits, other processes back up waiting to start new
      transactions and we get into trouble.
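
      A sketch of the kind of throttling this introduces (the function names
      and threshold are illustrative assumptions, not the actual truncate
      code): every so often the loop checks how much delayed-ref work has
      piled up and processes part of it before continuing:

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical periodic check inside the truncate loop. */
        static bool maybe_run_delayed_refs(uint64_t pending_refs,
                                           uint64_t threshold,
                                           void (*run_refs)(uint64_t count))
        {
                if (pending_refs < threshold)
                        return false;
                run_refs(threshold / 2);   /* chip away instead of hoarding work */
                return true;               /* caller may also let a commit proceed */
        }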
      Signed-off-by: Chris Mason <clm@fb.com>
  2. 02 April 2015, 1 commit
  3. 01 April 2015, 1 commit
    • Btrfs: free and unlock our path before btrfs_free_and_pin_reserved_extent() · dd825259
      Committed by Chris Mason
      The error handling path for alloc_reserved_tree_block is calling
      btrfs_free_and_pin_reserved_extent with a spinning tree lock held.  This
      might sleep as we allocate extent_state objects:
      
       BUG: sleeping function called from invalid context at mm/slub.c:1268
       in_atomic(): 1, irqs_disabled(): 0, pid: 11093, name: kworker/u4:7
       5 locks held by kworker/u4:7/11093:
        #0:  ("%s-%s""btrfs", name){++++.+}, at: [<ffffffff81091d51>] process_one_work+0x151/0x520
        #1:  ((&work->normal_work)){+.+.+.}, at: [<ffffffff81091d51>] process_one_work+0x151/0x520
        #2:  (sb_internal){++++.+}, at: [<ffffffffa003a70e>] start_transaction+0x43e/0x590 [btrfs]
        #3:  (&head_ref->mutex){+.+...}, at: [<ffffffffa0089f8c>] btrfs_delayed_ref_lock+0x4c/0x240 [btrfs]
        #4:  (btrfs-extent-00){++++..}, at: [<ffffffffa007697b>] btrfs_clear_lock_blocking_rw+0x9b/0x150 [btrfs]
       CPU: 0 PID: 11093 Comm: kworker/u4:7 Tainted: G        W 4.0.0-rc6-default+ #246
       Hardware name: Intel Corporation Santa Rosa platform/Matanzas, BIOS TSRSCRB1.86C.0047.B00.0610170821 10/17/06
       Workqueue: btrfs-extent-refs btrfs_extent_refs_helper [btrfs]
        00000000000004f4 ffff88006dd17848 ffffffff81ab0e3b ffff88006dd17848
        ffff88007a944760 ffff88006dd17868 ffffffff8109d516 ffff88006dd17898
        0000000000000000 ffff88006dd17898 ffffffff8109d5b2 ffffffff81aba2bb
       Call Trace:
        [<ffffffff81ab0e3b>] dump_stack+0x4f/0x6c
        [<ffffffff8109d516>] ___might_sleep+0xf6/0x140
        [<ffffffff8109d5b2>] __might_sleep+0x52/0x90
        [<ffffffff81aba2bb>] ? ftrace_call+0x5/0x34
        [<ffffffff81196363>] kmem_cache_alloc+0x163/0x1b0
        [<ffffffffa0056f31>] ? alloc_extent_state+0x31/0x150 [btrfs]
        [<ffffffffa0056f20>] ? alloc_extent_state+0x20/0x150 [btrfs]
        [<ffffffffa0056f31>] alloc_extent_state+0x31/0x150 [btrfs]
        [<ffffffffa005805b>] __set_extent_bit+0x37b/0x5d0 [btrfs]
        [<ffffffff81aba2bb>] ? ftrace_call+0x5/0x34
        [<ffffffffa005888d>] ? set_extent_bit+0xd/0x30 [btrfs]
        [<ffffffffa00588a3>] set_extent_bit+0x23/0x30 [btrfs]
        [<ffffffffa0058e80>] set_extent_dirty+0x20/0x30 [btrfs]
        [<ffffffffa00195ba>] pin_down_extent+0xaa/0x170 [btrfs]
        [<ffffffffa001d8ef>] __btrfs_free_reserved_extent+0xcf/0x160 [btrfs]
        [<ffffffffa0023856>] btrfs_free_and_pin_reserved_extent+0x16/0x20 [btrfs]
        [<ffffffffa002482a>] __btrfs_run_delayed_refs+0xfca/0x1290 [btrfs]
        [<ffffffffa0026eae>] btrfs_run_delayed_refs+0x6e/0x2e0 [btrfs]
        [<ffffffffa0027378>] delayed_ref_async_start+0x48/0xb0 [btrfs]
        [<ffffffffa006c883>] normal_work_helper+0x83/0x350 [btrfs]
        [<ffffffffa006cd79>] ? btrfs_extent_refs_helper+0x9/0x20 [btrfs]
        [<ffffffffa006cd82>] btrfs_extent_refs_helper+0x12/0x20 [btrfs]
        [<ffffffff81091dcb>] process_one_work+0x1cb/0x520
        [<ffffffff81091d51>] ? process_one_work+0x151/0x520
        [<ffffffff811c7abf>] ? seq_read+0x3f/0x400
        [<ffffffff8109260b>] worker_thread+0x5b/0x4e0
        [<ffffffff81097be2>] ? __kthread_parkme+0x12/0xa0
        [<ffffffff810925b0>] ? rescuer_thread+0x450/0x450
        [<ffffffff81098686>] kthread+0xf6/0x120
        [<ffffffff81098590>] ? flush_kthread_worker+0x1b0/0x1b0
        [<ffffffff81ab8088>] ret_from_fork+0x58/0x90
        [<ffffffff81098590>] ? flush_kthread_worker+0x1b0/0x1b0
       ------------[ cut here ]------------
      
      This changes things to free the path first, which will also unlock the
      extent buffer.
      Signed-off-by: Chris Mason <clm@fb.com>
      Reported-by: Dave Sterba <dsterba@suse.cz>
      Tested-by: Dave Sterba <dsterba@suse.cz>
  4. 27 March 2015, 14 commits
    • Btrfs: Remove the check for old-style mkfs · e56a951e
      Committed by Liu Bo
      This check was used to make sure a fresh btrfs created by an older
      mkfs.btrfs could still be mounted, but it also allows us to mount a
      buggy btrfs whose superblock header part is valid while something is
      wrong with its chunk tree part[1], and after that we can hit BUG_ON()s
      set in the code to prevent something impossible.
      
      Since David has released "Btrfs progs v3.19-rc2", just remove the check;
      anyone who wants to make a fresh btrfs should use the latest version.
      
      [1]: http://www.spinics.net/lists/linux-btrfs/msg42358.html
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Reviewed-by: Omar Sandoval <osandov@osandov.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: cleanup orphans while looking up default subvolume · 727b9784
      Committed by Jeff Mahoney
      Orphans in the fs tree are cleaned up via open_ctree and subvolume
      orphans are cleaned via btrfs_lookup_dentry -- except when a default
      subvolume is in use.  The default subvolume's name is resolved with a
      manual lookup that doesn't trigger orphan cleanup, so that cleanup needs
      to be triggered explicitly as well.  This doesn't apply to the remount
      case, since the subvolumes are cleaned up by walking the root radix tree.
      Signed-off-by: Jeff Mahoney <jeffm@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: explicitly set control file's private_data · d8620958
      Committed by Tom Van Braeckel
      The private_data member of the Btrfs control device file
      (/dev/btrfs-control) is used to hold the current transaction and needs
      to be initialized to NULL to signify that no transaction is in progress.
      
      We explicitly set the control file's private_data to NULL to be
      independent of whatever value the misc subsystem initializes it to.
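
      A hedged sketch of that explicit initialization (kernel-style fragment;
      the surrounding details are illustrative rather than the exact btrfs
      code):

        static int btrfs_control_open(struct inode *inode, struct file *filp)
        {
                filp->private_data = NULL;   /* no transaction in progress yet */
                return 0;
        }

        static const struct file_operations btrfs_ctl_fops = {
                .owner          = THIS_MODULE,
                .open           = btrfs_control_open,
                .unlocked_ioctl = btrfs_control_ioctl,
                .llseek         = noop_llseek,
        };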
      
      Backstory:
      ----------
      
      The misc subsystem (which is used by /dev/btrfs-control) initializes
      a file's private_data to point to the misc device when a driver has
      registered a custom open file operation and initializes it to NULL
      when a custom open file operation has *not* been provided.
      
      This subtle quirk is confusing, to the point where kernel code registers
      *empty* file open operations to have private_data point to the misc
      device structure.
      
      And it leads to bugs, where the addition or removal of a custom open
      file operation surprisingly changes the initial contents of a file's
      private_data structure.
      
      To simplify things in the misc subsystem, a patch [1] has been proposed
      to *always* set private_data to point to the misc device instead of
      only doing this when a custom open file operation has been registered.
      
      But before we can fix this in the misc subsystem itself, we need to
      modify the (few) drivers that rely on this very subtle behavior.
      
      [1] https://lkml.org/lkml/2014/12/4/939
      Signed-off-by: Martin Kepplinger <martink@posteo.de>
      Signed-off-by: Tom Van Braeckel <tomvanbraeckel@gmail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: incorrect handling for fiemap_fill_next_extent return · 26e726af
      Committed by Chengyu Song
      fiemap_fill_next_extent returns 0 on success, -errno on error, and 1 if
      this was the last extent that will fit in the user array.  If 1 is
      returned, that value may eventually be returned to user space, which
      should not happen according to the ioctl man page.
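
      In call-pattern terms (a hedged fragment of the expected handling, not
      the actual extent_fiemap code), the caller has to fold the value 1 back
      into 0 before it can reach user space:

        ret = fiemap_fill_next_extent(fieinfo, logical, phys, len, flags);
        if (ret < 0)
                goto out;        /* real error, propagate it */
        if (ret == 1) {
                ret = 0;         /* user array is full: stop, but report success */
                goto out;
        }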
      Signed-off-by: Chengyu Song <csong84@gatech.edu>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • btrfs: don't accept bare namespace as a valid xattr · 3c3b04d1
      Committed by David Sterba
      Due to insufficient check in btrfs_is_valid_xattr, this unexpectedly
      works:
      
       $ touch file
       $ setfattr -n user. -v 1 file
       $ getfattr -d file
      user.="1"
      
      i.e. the attribute name after the namespace prefix is missing.
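
      The missing check boils down to requiring a non-empty name after the
      namespace prefix; a minimal userspace sketch of that rule (not the
      btrfs_is_valid_xattr code itself):

        #include <stdbool.h>
        #include <string.h>

        /* Valid only if the name starts with the prefix AND something follows it. */
        static bool xattr_name_valid(const char *name, const char *prefix)
        {
                size_t plen = strlen(prefix);

                return strncmp(name, prefix, plen) == 0 && name[plen] != '\0';
        }

        /* xattr_name_valid("user.", "user.")        -> false (bare namespace)
         * xattr_name_valid("user.comment", "user.") -> true */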
      
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=94291
      Reported-by: William Douglas <william.douglas@intel.com>
      CC: <stable@vger.kernel.org> # 2.6.29+
      Signed-off-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix log tree corruption when fs mounted with -o discard · dcc82f47
      Committed by Filipe Manana
      While committing a transaction we free the log roots before we write the
      new super block. Freeing the log roots implies marking the disk location
      of every node/leaf (metadata extent) as pinned before the new super block
      is written. This is to prevent the disk location of log metadata extents
      from being reused before the new super block is written, otherwise we
      would have a corrupted log tree if before the new super block is written
      a crash/reboot happens and the location of any log tree metadata extent
      ended up being reused and rewritten.
      
      Even though we pinned the log tree's metadata extents, we were issuing a
      discard against them if the fs was mounted with the -o discard option,
      resulting in corruption of the log tree if a crash/reboot happened before
      writing the new super block - the next time the fs was mounted, during
      the log replay process we would find nodes/leaves of the log btree with
      contents full of zeroes, causing the replay to fail and requiring the
      btrfs-zero-log tool to wipe out the log tree (losing forever all the
      data that had previously been fsynced).
      
      Fix this by not doing a discard when pinning an extent. The discard will
      be done later when it's safe (after the new super block is committed) at
      extent-tree.c:btrfs_finish_extent_commit().
      
      Fixes: e688b725 (Btrfs: fix extent pinning bugs in the tree log)
      CC: <stable@vger.kernel.org>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix metadata inconsistencies after directory fsync · 2f2ff0ee
      Committed by Filipe Manana
      We can get into inconsistency between inodes and directory entries
      after fsyncing a directory. The issue is that while a directory gets
      the new dentries persisted in the fsync log and replayed at mount time,
      the link count of the inode that directory entries point to doesn't
      get updated, staying at an incorrect value (smaller than the
      correct one).  This later leads to stale file handle errors when
      accessing (including attempting to delete) some of the links once all
      the other ones are removed, and it also makes it impossible to delete
      the parent directories, since their dentries cannot be removed.
      
      Another issue is that (unlike ext3/4, xfs, f2fs, reiserfs and nilfs2),
      when fsyncing a directory, new files aren't logged (neither their
      metadata nor their dentries), nor are any child directories.  This
      patch fixes that issue too, since it has the same resolution as the
      incorrect inode link count issue mentioned before.
      
      This is very easy to reproduce, and the following excerpt from my test
      case for xfstests shows how:
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our main test file and directory.
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 8K" $SCRATCH_MNT/foo | _filter_xfs_io
        mkdir $SCRATCH_MNT/mydir
      
        # Make sure all metadata and data are durably persisted.
        sync
      
        # Add a hard link to 'foo' inside our test directory and fsync only the
        # directory. The btrfs fsync implementation had a bug that caused the new
        # directory entry to be visible after the fsync log replay, but the inode
        # of our file remained with a link count of 1.
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/foo_2
      
        # Add a few more links and new files.
        # This is just to verify nothing breaks or gives incorrect results after the
        # fsync log is replayed.
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/foo_3
        $XFS_IO_PROG -f -c "pwrite -S 0xff 0 64K" $SCRATCH_MNT/hello | _filter_xfs_io
        ln $SCRATCH_MNT/hello $SCRATCH_MNT/mydir/hello_2
      
        # Add some subdirectories and new files and links to them. This is to verify
        # that after fsyncing our top level directory 'mydir', all the subdirectories
        # and their files/links are registered in the fsync log and exist after the
        # fsync log is replayed.
        mkdir -p $SCRATCH_MNT/mydir/x/y/z
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/x/y/foo_y_link
        ln $SCRATCH_MNT/foo $SCRATCH_MNT/mydir/x/y/z/foo_z_link
        touch $SCRATCH_MNT/mydir/x/y/z/qwerty
      
        # Now fsync only our top directory.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/mydir
      
        # And fsync now our new file named 'hello', just to verify later that it has
        # the expected content and that the previous fsync on the directory 'mydir' had
        # no bad influence on this fsync.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/hello
      
        # Simulate a crash/power loss.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # Verify the content of our file 'foo' remains the same as before, 8192 bytes,
        # all with the value 0xaa.
        echo "File 'foo' content after log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        # Remove the first name of our inode. Because of the directory fsync bug, the
        # inode's link count was 1 instead of 5, so removing the 'foo' name ended up
        # deleting the inode and the other names became stale directory entries (still
        # visible to applications). Attempting to remove or access the remaining
        # dentries pointing to that inode resulted in stale file handle errors and
        # made it impossible to remove the parent directories since it was impossible
        # for them to become empty.
        echo "file 'foo' link count after log replay: $(stat -c %h $SCRATCH_MNT/foo)"
        rm -f $SCRATCH_MNT/foo
      
        # Now verify that all files, links and directories created before fsyncing our
        # directory exist after the fsync log was replayed.
        [ -f $SCRATCH_MNT/mydir/foo_2 ] || echo "Link mydir/foo_2 is missing"
        [ -f $SCRATCH_MNT/mydir/foo_3 ] || echo "Link mydir/foo_3 is missing"
        [ -f $SCRATCH_MNT/hello ] || echo "File hello is missing"
        [ -f $SCRATCH_MNT/mydir/hello_2 ] || echo "Link mydir/hello_2 is missing"
        [ -f $SCRATCH_MNT/mydir/x/y/foo_y_link ] || \
            echo "Link mydir/x/y/foo_y_link is missing"
        [ -f $SCRATCH_MNT/mydir/x/y/z/foo_z_link ] || \
            echo "Link mydir/x/y/z/foo_z_link is missing"
        [ -f $SCRATCH_MNT/mydir/x/y/z/qwerty ] || \
            echo "File mydir/x/y/z/qwerty is missing"
      
        # We expect our file here to have a size of 64Kb and all the bytes having the
        # value 0xff.
        echo "file 'hello' content after log replay:"
        od -t x1 $SCRATCH_MNT/hello
      
        # Now remove all files/links, under our test directory 'mydir', and verify we
        # can remove all the directories.
        rm -f $SCRATCH_MNT/mydir/x/y/z/*
        rmdir $SCRATCH_MNT/mydir/x/y/z
        rm -f $SCRATCH_MNT/mydir/x/y/*
        rmdir $SCRATCH_MNT/mydir/x/y
        rmdir $SCRATCH_MNT/mydir/x
        rm -f $SCRATCH_MNT/mydir/*
        rmdir $SCRATCH_MNT/mydir
      
        # An fsck, run by the fstests framework every time a test finishes, also detected
        # the inconsistency and printed the following error message:
        #
        # root 5 inode 257 errors 2001, no inode item, link count wrong
        #    unresolved ref dir 258 index 2 namelen 5 name foo_2 filetype 1 errors 4, no inode ref
        #    unresolved ref dir 258 index 3 namelen 5 name foo_3 filetype 1 errors 4, no inode ref
      
        status=0
        exit
      
      The expected golden output for the test is:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 65536/65536 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File 'foo' content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000
        file 'foo' link count after log replay: 5
        file 'hello' content after log replay:
        0000000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        *
        0200000
      
      Which is the output after this patch and when running the test against
      ext3/4, xfs, f2fs, reiserfs or nilfs2. Without this patch, the test's
      output is:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 65536/65536 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File 'foo' content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000
        file 'foo' link count after log replay: 1
        Link mydir/foo_2 is missing
        Link mydir/foo_3 is missing
        Link mydir/x/y/foo_y_link is missing
        Link mydir/x/y/z/foo_z_link is missing
        File mydir/x/y/z/qwerty is missing
        file 'hello' content after log replay:
        0000000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
        *
        0200000
        rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x/y/z': No such file or directory
        rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x/y': No such file or directory
        rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/x': No such file or directory
        rm: cannot remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/foo_2': Stale file handle
        rm: cannot remove '/home/fdmanana/btrfs-tests/scratch_1/mydir/foo_3': Stale file handle
        rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/mydir': Directory not empty
      
      Fsck, without this fix, also complains about the wrong link count:
      
        root 5 inode 257 errors 2001, no inode item, link count wrong
            unresolved ref dir 258 index 2 namelen 5 name foo_2 filetype 1 errors 4, no inode ref
            unresolved ref dir 258 index 3 namelen 5 name foo_3 filetype 1 errors 4, no inode ref
      
      So fix this by logging the inodes that the dentries point to when
      fsyncing a directory.
      
      A test case for xfstests follows.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: change the insertion criteria for the qgroup operations rbtree · bf691960
      Committed by Filipe Manana
      After looking at Liu Bo's recent patch (titled
      "Btrfs: fix comp_oper to get right order") I realized the search made by
      qgroup_oper_exists() was buggy because its rbtree navigation comparison
      function, comp_oper_exist(), only looks at the fields bytenr and ref_root
      of a tree node, ignoring the seq field completely. This was wrong because
      when we insert a node into the rbtree we use comp_oper(), which takes a
      decision based first on bytenr, then on seq and then on the ref_root field.
      That means qgroup_oper_exists() could miss the fact that at least one
      operation with given bytenr and ref_root exists.
      
      Consider the following simple example of a 3 nodes qgroup operations
      rbtree (created using comp_oper before this patch), where each node's key
      is a tuple with the shape (bytenr, seq, ref_root, op):
      
                                [ (4096, 2, 20, op X) ]
                               /                       \
                              /                         \
         [ (4096, 1, 5, op Y) ]                         [ (4096, 3, 10, op Z) ]
      
      qgroup_oper_exists() when called to search for an existing operation for
      bytenr 4096 and ref root 10 wouldn't find anything because it would go to
      the left subtree instead of the right subtree, since comp_oper_exist()
      ignores the seq field completely.
      
      Fix this by changing the insertion navigation function to use the ref_root
      field right after using the bytenr field and before using the seq field,
      so that qgroup_oper_exists() / comp_oper_exist() work as expected.
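
      A minimal sketch of the fixed ordering (struct and field names assumed
      for illustration only):

        #include <stdint.h>

        struct qgroup_oper_key {
                uint64_t bytenr;
                uint64_t ref_root;
                uint64_t seq;
        };

        /* Compare by bytenr, then ref_root, then seq, so a lookup that only
         * knows (bytenr, ref_root) can follow the same path and never miss. */
        static int comp_oper(const struct qgroup_oper_key *a,
                             const struct qgroup_oper_key *b)
        {
                if (a->bytenr != b->bytenr)
                        return a->bytenr < b->bytenr ? -1 : 1;
                if (a->ref_root != b->ref_root)
                        return a->ref_root < b->ref_root ? -1 : 1;
                if (a->seq != b->seq)
                        return a->seq < b->seq ? -1 : 1;
                return 0;
        }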
      
      This patch applies on top of the patch mentioned above from Liu.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: add missing inode item update in fallocate() · 3d850dd4
      Committed by Filipe Manana
      If we fallocate(), without the keep size flag, into an area already covered
      by an extent previously fallocated, we were updating the inode's i_size but
      we weren't updating the inode item in the fs/subvol tree. A following umount
      + mount would result in a loss of the inode's size (and an fsync would
      also miss the fact that the inode changed).
      
      Reproducer:
      
        $ mkfs.btrfs -f /dev/sdd
        $ mount /dev/sdd /mnt
        $ fallocate -n -l 1M /mnt/foobar
        $ fallocate -l 512K /mnt/foobar
        $ umount /mnt
        $ mount /dev/sdd /mnt
        $ od -t x1 /mnt/foobar
        0000000
      
      The expected result is:
      
        $ od -t x1 /mnt/foobar
        0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
        *
        2000000
      
      A test case for fstests follows soon.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: incremental send, remove dead code · 5f806c3a
      Committed by Filipe Manana
      The logic to detect path loops when attempting to apply a pending
      directory rename, introduced in commit
      f959492f (Btrfs: send, fix more issues related to directory renames)
      is no longer needed, and the respective fstests test case for that commit,
      btrfs/045, now passes without this code (as well as all the other test
      cases for send/receive).
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: incremental send, clear name from cache after orphanization · 8996a48c
      Committed by Filipe Manana
      If a directory's reference ends up being orphanized, because the inode
      currently being processed has a new path that matches that directory's
      path, make sure we evict the name of the directory from the name cache.
      This is because there might be descendent inodes (either directories or
      regular files) that will be orphanized later too, and therefore the
      orphan name of the ancestor must be used, otherwise we issue rename
      operations with a wrong path in the send stream.
      
      Reproducer:
      
        $ mkfs.btrfs -f /dev/sdb
        $ mount /dev/sdb /mnt
      
        $ mkdir -p /mnt/data/n1/n2/p1/p2
        $ mkdir /mnt/data/n4
        $ mkdir -p /mnt/data/p1/p2
      
        $ btrfs subvolume snapshot -r /mnt /mnt/snap1
      
        $ mv /mnt/data/p1/p2 /mnt/data
        $ mv /mnt/data/n1/n2/p1/p2 /mnt/data/p1
        $ mv /mnt/data/p2 /mnt/data/n1/n2/p1
        $ mv /mnt/data/n1/n2 /mnt/data/p1
        $ mv /mnt/data/p1 /mnt/data/n4
        $ mv /mnt/data/n4/p1/n2/p1 /mnt/data
      
        $ btrfs subvolume snapshot -r /mnt /mnt/snap2
      
        $ btrfs send /mnt/snap1 -f /tmp/1.send
        $ btrfs send -p /mnt/snap1 /mnt/snap2 -f /tmp/2.send
      
        $ mkfs.btrfs -f /dev/sdc
        $ mount /dev/sdc /mnt2
        $ btrfs receive /mnt2 -f /tmp/1.send
        $ btrfs receive /mnt2 -f /tmp/2.send
        ERROR: rename data/p1/p2 -> data/n4/p1/p2 failed. no such file or directory
      
      Directories data/p1 (inode 263) and data/p1/p2 (inode 264) in the parent
      snapshot are both orphanized during the incremental send, and as soon as
      data/p1 is orphanized, we must make sure that when orphanizing data/p1/p2
      we use a source path of o263-6-o/p2 for the rename operation instead of
      the old path data/p1/p2 (the one before the orphanization of inode 263).
      
      A test case for xfstests follows soon.
      Reported-by: Robbie Ko <robbieko@synology.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: send, don't leave without decrementing clone root's send_progress · 2f1f465a
      Committed by Filipe Manana
      If the clone root was not readonly or the dead flag was set on it, we were
      leaving without decrementing the root's send_in_progress counter (which
      we had just incremented).  If a concurrent snapshot deletion was in
      progress and ended up being aborted, it would become impossible to later
      attempt to delete the snapshot again, since the root's send_in_progress
      counter could never go back to 0.
      
      We were also setting clone_sources_to_rollback to i + 1 too early - if we
      bailed out because the clone root we got is not readonly or flagged as dead
      we ended up later dereferencing a null pointer because we didn't assign
      the clone root to sctx->clone_roots[i].root:
      
      		for (i = 0; sctx && i < clone_sources_to_rollback; i++)
      			btrfs_root_dec_send_in_progress(
      					sctx->clone_roots[i].root);
      
      So just don't increment the send_in_progress counter if the root is readonly
      or flagged as dead.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: send, add missing check for dead clone root · 5cc2b17e
      Committed by Filipe Manana
      After we locked the root's root item, a concurrent snapshot deletion
      call might have set the dead flag on it. So check if the dead flag
      is set and abort if it is, just like we do for the parent root.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: remove deleted xattrs on fsync log replay · 4f764e51
      Committed by Filipe Manana
      If we deleted xattrs from a file and fsynced the file, after a log replay
      the xattrs would remain associated with the file.  This was unexpected
      behaviour and differs from what other filesystems do, such as
      xfs and ext3/4.
      
      Fix this by, on fsync log replay, checking whether each xattr in the
      fs/subvol tree (that belongs to a logged inode) has a matching xattr in
      the log, and deleting it from the fs/subvol tree if it does not.  This
      is a similar approach to what we do for dentries when we replay a
      directory from the fsync log.
      
      This issue is trivial to reproduce, and the following excerpt from my
      test for xfstests triggers the issue:
      
        _crash_and_mount()
        {
             # Simulate a crash/power loss.
             _load_flakey_table $FLAKEY_DROP_WRITES
             _unmount_flakey
             _load_flakey_table $FLAKEY_ALLOW_WRITES
             _mount_flakey
        }
      
        rm -f $seqres.full
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our test file and add 3 xattrs to it.
        touch $SCRATCH_MNT/foobar
        $SETFATTR_PROG -n user.attr1 -v val1 $SCRATCH_MNT/foobar
        $SETFATTR_PROG -n user.attr2 -v val2 $SCRATCH_MNT/foobar
        $SETFATTR_PROG -n user.attr3 -v val3 $SCRATCH_MNT/foobar
      
        # Make sure everything is durably persisted.
        sync
      
        # Now delete the second xattr and fsync the inode.
        $SETFATTR_PROG -x user.attr2 $SCRATCH_MNT/foobar
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
      
        _crash_and_mount
      
        # After the fsync log is replayed, the file should have only 2 xattrs, the ones
        # named user.attr1 and user.attr3. The btrfs fsync log replay bug left the file
        # with the 3 xattrs that we had before deleting the second one and fsyncing the
        # file.
        echo "xattr names and values after first fsync log replay:"
        $GETFATTR_PROG --absolute-names --dump $SCRATCH_MNT/foobar | _filter_scratch
      
        # Now write some data to our file, fsync it, remove the first xattr, add a new
        # hard link to our file and commit the fsync log by fsyncing some other new
        # file. This is to verify that after log replay our first xattr does not exist
        # anymore.
        echo "hello world!" >> $SCRATCH_MNT/foobar
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
        $SETFATTR_PROG -x user.attr1 $SCRATCH_MNT/foobar
        ln $SCRATCH_MNT/foobar $SCRATCH_MNT/foobar_link
        touch $SCRATCH_MNT/qwerty
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/qwerty
      
        _crash_and_mount
      
        # Now only the xattr with name user.attr3 should be set in our file.
        echo "xattr names and values after second fsync log replay:"
        $GETFATTR_PROG --absolute-names --dump $SCRATCH_MNT/foobar | _filter_scratch
      
        status=0
        exit
      
      The expected golden output, which is produced with this patch applied or
      when testing against xfs or ext3/4, is:
      
        xattr names and values after first fsync log replay:
        # file: SCRATCH_MNT/foobar
        user.attr1="val1"
        user.attr3="val3"
      
        xattr names and values after second fsync log replay:
        # file: SCRATCH_MNT/foobar
        user.attr3="val3"
      
      Without this patch applied, the output is:
      
        xattr names and values after first fsync log replay:
        # file: SCRATCH_MNT/foobar
        user.attr1="val1"
        user.attr2="val2"
        user.attr3="val3"
      
        xattr names and values after second fsync log replay:
        # file: SCRATCH_MNT/foobar
        user.attr1="val1"
        user.attr2="val2"
        user.attr3="val3"
      
      A patch with a test case for xfstests follows soon.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  5. 18 March 2015, 4 commits
    • Btrfs: fix outstanding_extents accounting in DIO · e1cbbfa5
      Committed by Josef Bacik
      We are keeping track of how many extents we need to reserve properly based on
      the amount we want to write, but we were still incrementing outstanding_extents
      if we wrote less than what we requested.  This isn't quite right since we will
      be limited to our max extent size.  So instead let's do something horrible!  Keep
      track of how many outstanding_extents we reserved, and decrement each time we
      allocate an extent.  If we use our entire reserve make sure to jack up
      outstanding_extents on the inode so the accounting works out properly.  Thanks,
      Reported-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
    • Btrfs: add sanity test for outstanding_extents accounting · 6a3891c5
      Committed by Josef Bacik
      I introduced a regression wrt outstanding_extents accounting.  These are tricky
      areas that aren't easily covered by xfstests as we could change MAX_EXTENT_SIZE
      at any time.  So add sanity tests to cover the various conditions that are
      tricky in order to make sure we don't introduce regressions in the future.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
    • Btrfs: just free dummy extent buffers · bcb7e449
      Committed by Josef Bacik
      If we fail during our sanity tests we could get NULL derefs because we unload
      the module before the dummy extent buffers are freed via RCU.  So check for
      this case and just free the things directly.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
    • Btrfs: account merges/splits properly · ba117213
      Committed by Josef Bacik
      My fix
      
      Btrfs: fix merge delalloc logic
      
      only fixed half of the problems; it didn't fix the case where we have two large
      extents on either side and then join them together with a new small extent.  We
      need to instead keep track of how many extents we have accounted for with each
      side of the new extent, and then see how many extents we need for the new large
      extent.  If they match then we know we need to keep our reservation, otherwise
      we need to drop our reservation.  This shows up with a case like this
      
      [BTRFS_MAX_EXTENT_SIZE+4K][4K HOLE][BTRFS_MAX_EXTENT_SIZE+4K]
      
      Previously the logic would have said that the number of extents required for the
      new size (3) is larger than the number of extents required for the largest side
      (2) therefore we need to keep our reservation.  But this isn't the case, since
      both sides require a reservation of 2 which leads to 4 for the whole range
      currently reserved, but we only need 3, so we need to drop one of the
      reservations.  The same problem existed for splits, we'd think we only need 3
      extents when creating the hole but in reality we need 4.  Thanks,
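
      The counts above follow from dividing the range length by the per-extent
      cap; a small self-contained check (assuming the cap, BTRFS_MAX_EXTENT_SIZE,
      is 128M):

        #include <stdint.h>
        #include <stdio.h>

        #define MAX_EXTENT_SIZE (128ULL * 1024 * 1024)   /* assumed per-extent cap */

        static uint64_t extents_needed(uint64_t len)
        {
                return (len + MAX_EXTENT_SIZE - 1) / MAX_EXTENT_SIZE;
        }

        int main(void)
        {
                uint64_t side = MAX_EXTENT_SIZE + 4096;
                uint64_t merged = 2 * side + 4096;   /* both sides plus the 4K hole */

                /* Each side needs 2 extents (4 reserved in total) but the merged
                 * range needs only 3, so one reservation must be dropped. */
                printf("per side: %llu, merged: %llu\n",
                       (unsigned long long)extents_needed(side),
                       (unsigned long long)extents_needed(merged));
                return 0;
        }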
      Signed-off-by: Josef Bacik <jbacik@fb.com>
  6. 17 March 2015, 1 commit
    • Btrfs: prepare block group cache before writing · dcdf7f6d
      Committed by Josef Bacik
      Writing the block group cache will modify the extent tree quite a bit because it
      truncates the old space cache and pre-allocates new stuff.  To try and cut down
      on the churn let's do the setup dance first, then later on hopefully we can avoid
      looping with newly dirtied roots.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
  7. 14 March 2015, 6 commits
  8. 06 March 2015, 3 commits
    • Btrfs: __add_inode_ref: out of bounds memory read when looking for extended ref · dd9ef135
      Committed by Quentin Casasnovas
      Improper arithmetic when calculating the address of the extended ref could
      lead to an out of bounds memory read and kernel panic.
      Signed-off-by: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.cz>
      cc: stable@vger.kernel.org # v3.7+
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: fix data loss in the fast fsync path · 3a8b36f3
      Committed by Filipe Manana
      When using the fast file fsync code path we can miss the fact that new
      writes happened since the last file fsync and therefore return without
      waiting for the IO to finish and write the new extents to the fsync log.
      
      Here's an example scenario where the fsync will miss the fact that new
      file data exists that wasn't yet durably persisted:
      
      1. fs_info->last_trans_committed == N - 1 and current transaction is
         transaction N (fs_info->generation == N);
      
      2. do a buffered write;
      
      3. fsync our inode, this clears our inode's full sync flag, starts
         an ordered extent and waits for it to complete - when it completes
         at btrfs_finish_ordered_io(), the inode's last_trans is set to the
         value N (via btrfs_update_inode_fallback -> btrfs_update_inode ->
         btrfs_set_inode_last_trans);
      
      4. transaction N is committed, so fs_info->last_trans_committed is now
         set to the value N and fs_info->generation remains with the value N;
      
      5. do another buffered write, when this happens btrfs_file_write_iter
         sets our inode's last_trans to the value N + 1 (that is
         fs_info->generation + 1 == N + 1);
      
      6. transaction N + 1 is started and fs_info->generation now has the
         value N + 1;
      
      7. transaction N + 1 is committed, so fs_info->last_trans_committed
         is set to the value N + 1;
      
      8. fsync our inode - because it doesn't have the full sync flag set,
         we only start the ordered extent, we don't wait for it to complete
         (only in a later phase) therefore its last_trans field has the
         value N + 1 set previously by btrfs_file_write_iter(), and so we
         have:
      
             inode->last_trans <= fs_info->last_trans_committed
                 (N + 1)              (N + 1)
      
         Which made us not log the last buffered write and exit the fsync
         handler immediately, returning success (0) to user space and resulting
         in data loss after a crash.
      
      This can actually be triggered deterministically and the following excerpt
      from a testcase I made for xfstests triggers the issue. It moves a dummy
      file across directories and then fsyncs the old parent directory - this
      is just to trigger a transaction commit, so moving files around isn't
      directly related to the issue but it was chosen because running 'sync' for
      example does more than just committing the current transaction, as it
      flushes/waits for all file data to be persisted. The issue can also happen
      at random periods, since the transaction kthread periodically commits the
      current transaction (about every 30 seconds by default).
      The body of the test is:
      
        _scratch_mkfs >> $seqres.full 2>&1
        _init_flakey
        _mount_flakey
      
        # Create our main test file 'foo', the one we check for data loss.
        # By doing an fsync against our file, it makes btrfs clear the 'needs_full_sync'
        # bit from its flags (btrfs inode specific flags).
        $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 8K" \
                        -c "fsync" $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Now create one other file and 2 directories. We will move this second file
        # from one directory to the other later because it forces btrfs to commit its
        # currently open transaction if we fsync the old parent directory. This is
        # necessary to trigger the data loss bug that affected btrfs.
        mkdir $SCRATCH_MNT/testdir_1
        touch $SCRATCH_MNT/testdir_1/bar
        mkdir $SCRATCH_MNT/testdir_2
      
        # Make sure everything is durably persisted.
        sync
      
        # Write more 8Kb of data to our file.
        $XFS_IO_PROG -c "pwrite -S 0xbb 8K 8K" $SCRATCH_MNT/foo | _filter_xfs_io
      
        # Move our 'bar' file into a new directory.
        mv $SCRATCH_MNT/testdir_1/bar $SCRATCH_MNT/testdir_2/bar
      
        # Fsync our first directory. Because it had a file moved into some other
        # directory, this made btrfs commit the currently open transaction. This is
        # a condition necessary to trigger the data loss bug.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir_1
      
        # Now fsync our main test file. If the fsync succeeds, we expect the 8Kb of
        # data we wrote previously to be persisted and available if a crash happens.
        # This did not happen with btrfs, because of the transaction commit that
        # happened when we fsynced the parent directory.
        $XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
      
        # Simulate a crash/power loss.
        _load_flakey_table $FLAKEY_DROP_WRITES
        _unmount_flakey
      
        _load_flakey_table $FLAKEY_ALLOW_WRITES
        _mount_flakey
      
        # Now check that all data we wrote before are available.
        echo "File content after log replay:"
        od -t x1 $SCRATCH_MNT/foo
      
        status=0
        exit
      
      The expected golden output for the test, which is what we get with this
      fix applied (or when running against ext3/4 and xfs), is:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 8192/8192 bytes at offset 8192
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
        *
        0040000
      
      Without this fix applied, the output shows the test file does not have
      the second 8Kb extent that we successfully fsynced:
      
        wrote 8192/8192 bytes at offset 0
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        wrote 8192/8192 bytes at offset 8192
        XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
        File content after log replay:
        0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
        *
        0020000
      
      So fix this by skipping the fsync only if we're doing a full sync and
      if the inode's last_trans is <= fs_info->last_trans_committed, or if
      the inode is already in the log. Also remove setting the inode's
      last_trans in btrfs_file_write_iter since it's useless/unreliable.
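
      Put as a predicate (a toy model that follows the wording above, not the
      kernel function itself):

        #include <stdbool.h>
        #include <stdint.h>

        struct toy_inode {
                uint64_t last_trans;   /* transaction that last changed the inode */
                bool     full_sync;    /* inode still flagged as needing a full sync */
                bool     in_log;       /* inode already logged in the running log */
        };

        /* Skip writing to the fsync log only when it is provably unnecessary. */
        static bool can_skip_logging(const struct toy_inode *inode,
                                     uint64_t last_trans_committed)
        {
                if (inode->in_log)
                        return true;
                return inode->full_sync &&
                       inode->last_trans <= last_trans_committed;
        }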
      
      Also because btrfs_file_write_iter no longer sets inode->last_trans to
      fs_info->generation + 1, don't set last_trans to 0 if we bail out and don't
      bail out if last_trans is 0, otherwise something as simple as the following
      example wouldn't log the second write on the last fsync:
      
        1. write to file
      
        2. fsync file
      
        3. fsync file
             |--> btrfs_inode_in_log() returns true and it set last_trans to 0
      
        4. write to file
             |--> btrfs_file_write_iter() no longer sets last_trans, so it
                  remained with a value of 0
        5. fsync
             |--> inode->last_trans == 0, so it bails out without logging the
                  second write
      
      A test case for xfstests will be sent soon.
      
      CC: <stable@vger.kernel.org>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
    • Btrfs: remove extra run_delayed_refs in update_cowonly_root · f5c0a122
      Committed by Josef Bacik
      This got added with my dirty_bgs patch; it's not needed.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
  9. 04 March 2015, 6 commits