1. 24 March 2020, 40 commits
    • btrfs: move the root freeing stuff into btrfs_put_root · 8c38938c
      Committed by Josef Bacik
      There are a few different ways to free roots, either you allocated them
      yourself and you just do
      
      free_extent_buffer(root->node);
      free_extent_buffer(root->commit_root);
      btrfs_put_root(root);
      
      Which is the pattern for log roots.  Or for snapshots/subvolumes that
      are being dropped you simply call btrfs_free_fs_root() which does all
      the cleanup for you.
      
      Unify this all into btrfs_put_root(), so that we don't free up things
      associated with the root until the last reference is dropped.  This
      makes the root freeing code much more significant.
      
      The only caveat is at close_ctree() time we have to free the extent
      buffers for all of our main roots (extent_root, chunk_root, etc) because
      we have to drop the btree_inode and we'll run into issues if we hold
      onto those nodes until ->kill_sb() time.  This will be addressed in the
      future when we kill the btree_inode.
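
      As an illustration of the unified pattern described above, a minimal
      sketch of a refcounted put helper could look like this (simplified and
      hypothetical, not the exact btrfs implementation):

      /* Sketch: drop one reference; free the root's extent buffers and the
       * root itself only when the last reference goes away. */
      void btrfs_put_root(struct btrfs_root *root)
      {
              if (!root)
                      return;
              if (refcount_dec_and_test(&root->refs)) {
                      free_extent_buffer(root->node);
                      free_extent_buffer(root->commit_root);
                      kfree(root);
              }
      }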
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8c38938c
    • btrfs: move ino_cache_inode dropping out of btrfs_free_fs_root · 0e996e7f
      Committed by Josef Bacik
      We are going to make root life be controlled solely by refcounting, and
      inodes will be one of the things that hold a ref on the root.  This
      means we need to handle dropping the ino_cache_inode outside of the root
      freeing logic, so move it into btrfs_drop_and_free_fs_root() so it is
      cleaned up properly on unmount.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      0e996e7f
    • btrfs: make the extent buffer leak check per fs info · 3fd63727
      Committed by Josef Bacik
      I'm going to make the entire destruction of btrfs_root's controlled by
      their refcount, so it will be helpful to notice if we're leaking their
      eb's on umount.
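
      For illustration, the general shape of a per-fs_info leak check might be
      the following sketch (the list, lock and field names are assumptions, not
      the exact implementation):

      /* Sketch: track every live extent buffer on a per-filesystem list so
       * that leaks can be reported when that filesystem unmounts, rather
       * than keeping one global list shared by all mounted filesystems. */
      spin_lock(&fs_info->eb_leak_lock);
      list_add(&eb->leak_list, &fs_info->allocated_ebs);
      spin_unlock(&fs_info->eb_leak_lock);

      /* ... and at unmount time: */
      list_for_each_entry(eb, &fs_info->allocated_ebs, leak_list)
              pr_err("BTRFS: leaked extent buffer, start %llu\n", eb->start);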
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      3fd63727
    • btrfs: remove a BUG_ON() from merge_reloc_roots() · 7b7b7431
      Committed by Josef Bacik
      This was pretty subtle, we default to reloc roots having 0 root refs, so
      if we crash in the middle of the relocation they can just be deleted.
      If we successfully complete the relocation operations we'll set our root
      refs to 1 in prepare_to_merge() and then go on to merge_reloc_roots().
      
      At prepare_to_merge() time if any of the reloc roots have a 0 reference
      still, we will remove that reloc root from our reloc root rb tree, and
      then clean it up later.
      
      However this only happens if we successfully start a transaction.  If
      we've aborted previously we will skip this step completely, and only
      have reloc roots with a reference count of 0, but were never properly
      removed from the reloc control's rb tree.
      
      This isn't a problem per se, our references are held by the list the
      reloc roots are on, and by the original root the reloc root belongs to.
      If we end up in this situation all the reloc roots will be added to the
      dirty_reloc_list, and then properly dropped at that point.  The reloc
      control will be free'd and the rb tree is no longer used.
      
      There were two options when fixing this, one was to remove the BUG_ON(),
      the other was to make prepare_to_merge() handle the case where we
      couldn't start a trans handle.
      
      IMO this is the cleaner solution.  I started with handling the error in
      prepare_to_merge(), but it turned out super ugly.  And in the end this
      BUG_ON() simply doesn't matter, the cleanup was happening properly, we
      were just panicking because this BUG_ON() only matters in the success
      case.  So I've opted to just remove it and add a comment where it was.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7b7b7431
    • btrfs: hold a ref on the root->reloc_root · f44deb74
      Committed by Josef Bacik
      We previously were relying on root->reloc_root to be cleaned up by the
      drop snapshot, or the error handling.  However if btrfs_drop_snapshot()
      failed it wouldn't drop the ref for the root.  Also we sort of depend on
      the right thing to happen with moving reloc roots between lists and the
      fs root they belong to, which makes it hard to figure out who owns the
      reference.
      
      Fix this by explicitly holding a reference on the reloc root for
      root->reloc_root.  This means that we hold two references on reloc roots,
      one for whichever reloc_roots list it's attached to, and the
      root->reloc_root we're on.
      
      This makes it easier to reason out who owns a reference on the root, and
      when it needs to be dropped.
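
      A minimal sketch of the ownership rule described above, assuming the
      btrfs_grab_root()/btrfs_put_root() get/put pair used elsewhere in this
      series (illustrative, not the exact patch):

      /* Sketch: the root->reloc_root pointer owns its own reference,
       * separate from the one held by whichever reloc_roots list the
       * reloc root currently sits on. */
      root->reloc_root = btrfs_grab_root(reloc_root);

      /* Sketch: when tearing the association down, drop that reference. */
      btrfs_put_root(root->reloc_root);
      root->reloc_root = NULL;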
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f44deb74
    • btrfs: clear DEAD_RELOC_TREE before dropping the reloc root · f28de8d8
      Committed by Josef Bacik
      The DEAD_RELOC_TREE flag is in place in order to avoid a use after free
      in init_reloc_root, tracking the presence of reloc_root.  However adding
      the explicit tree references in previous patches makes the use after
      free impossible because at this point we no longer have a reloc_control
      set on the fs_info and thus cannot enter the function.
      
      So move this to be coupled with clearing the root->reloc_root so we're
      consistent with all other operations of the reloc root.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ update changelog ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      f28de8d8
    • btrfs: free the reloc_control in a consistent way · 1a0afa0e
      Committed by Josef Bacik
      If we have an error while processing the reloc roots we could leak roots
      that were added to rc->reloc_roots before we hit the error.  We could
      have also not removed the reloc tree mapping from our rb_tree, so clean
      up any remaining nodes in the reloc root rb_tree.
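
      The rbtree_postorder_for_each_entry_safe() iterator referenced in the
      note below makes it possible to free every node still left in the tree
      without rebalancing it; a hedged sketch (structure and field names are
      assumptions):

      struct mapping_node *node, *tmp;

      /* Sketch: walk the reloc root rb_tree in post order and release
       * every entry still linked, since the tree is about to go away. */
      rbtree_postorder_for_each_entry_safe(node, tmp,
                      &rc->reloc_root_tree.rb_root, rb_node)
              kfree(node);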
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ use rbtree_postorder_for_each_entry_safe ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      1a0afa0e
    • btrfs: do not init a reloc root if we aren't relocating · 2abc726a
      Committed by Josef Bacik
      We previously were checking if the root had a dead root before accessing
      root->reloc_root in order to avoid a use-after-free type bug.  However
      this scenario happens after we've unset the reloc control, so we would
      have been saved if we'd simply checked for fs_info->reloc_control.  At
      this point during relocation we no longer need to be creating new reloc
      roots, so simply move this check above the reloc_root checks to avoid
      any future races and confusion.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2abc726a
    • btrfs: reloc: clean dirty subvols if we fail to start a transaction · 6217b0fa
      Committed by Josef Bacik
      If we do merge_reloc_roots() we could insert a few roots onto the dirty
      subvol roots list, where we hold a ref on them.  If we fail to start the
      transaction we need to run clean_dirty_subvols() in order to cleanup the
      refs.
      
      CC: stable@vger.kernel.org # 5.4+
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6217b0fa
    • btrfs: unset reloc control if we fail to recover · fb2d83ee
      Committed by Josef Bacik
      If we fail to load an fs root, or fail to start a transaction we can
      bail without unsetting the reloc control, which leads to problems later
      when we free the reloc control but still have it attached to the file
      system.
      
      In the normal path we'll end up calling unset_reloc_control() twice, but
      all it does is set fs_info->reloc_control = NULL, and we can only have
      one balance at a time so it's not racy.
      
      CC: stable@vger.kernel.org # 5.4+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      fb2d83ee
    • btrfs: drop block from cache on error in relocation · 8e19c973
      Committed by Josef Bacik
      If we have an error while building the backref tree in relocation we'll
      process all the pending edges and then free the node.  However if we
      integrated some edges into the cache we'll lose our link to those edges
      by simply freeing this node, which means we'll leak memory and
      references to any roots that we've found.
      
      Instead we need to use remove_backref_node(), which walks through all of
      the edges that are still linked to this node and frees them up and
      drops any root references we may be holding.
      
      CC: stable@vger.kernel.org # 4.9+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      8e19c973
    • btrfs: relocation: Use btrfs_find_all_leafs to locate data extent parent tree leaves · 19b546d7
      Committed by Qu Wenruo
      In relocation, we need to locate all parent tree leaves referring to one
      data extent, thus we have a complex mechanism to iterate through the extent
      tree and subvolume trees to locate the related leaves.
      
      However this is already done in backref.c, we have
      btrfs_find_all_leafs(), which can return a ulist containing all leaves
      referring to that data extent.
      
      Use btrfs_find_all_leafs() to replace find_data_references().
      
      There is a special handling for v1 space cache data extents, where we
      need to delete the v1 space cache data extents, to prevent those data
      extents from hanging the data relocation.
      
      In this patch, the special handling is done by re-iterating the root
      tree leaf.  Although it's a little less efficient than the old handling,
      considering we can reuse a lot of code, it should be acceptable.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      19b546d7
    • btrfs: fix ref-verify to catch operations on 0 ref extents · b39c8f5a
      Committed by Josef Bacik
      While debugging I noticed I wasn't getting ref verify errors before
      everything blew up.  Turns out it's because we don't warn when we try to
      add a normal ref via btrfs_inc_ref() if the block entry exists but has 0
      references.  This is incorrect, we should never be doing anything other
      than adding a new extent once a block entry drops to 0 references.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      b39c8f5a
    • btrfs: make ranged full fsyncs more efficient · 0a8068a3
      Committed by Filipe Manana
      Commit 0c713cba ("Btrfs: fix race between ranged fsync and writeback
      of adjacent ranges") fixed a bug where we could end up with file extent
      items in a log tree that represent file ranges that overlap due to a race
      between the hole detection of a ranged full fsync and writeback for a
      different file range.
      
      The problem was solved by forcing any ranged full fsync to become a
      non-ranged full fsync - setting the range start to 0 and the end offset to
      LLONG_MAX. This was a simple solution because the code that detected and
      marked holes was very complex, it used to be done at copy_items() and
      implied several searches on the fs/subvolume tree. The drawback of that
      solution was that we started to flush delalloc for the entire file and
      wait for all the ordered extents to complete for ranged full fsyncs
      (including ordered extents covering ranges completely outside the given
      range). Fortunately ranged full fsyncs are not the most common case
      (hopefully for most workloads).
      
      However a later fix for detecting and marking holes was made by commit
      0e56315c ("Btrfs: fix missing hole after hole punching and fsync
      when using NO_HOLES") and it simplified a lot the detection of holes,
      and now copy_items() no longer does it and we do it in a much more simple
      way at btrfs_log_holes().
      
      This makes it now possible to simply make the code that detects holes to
      operate only on the initial range and no longer need to operate on the
      whole file, while also avoiding the need to flush delalloc for the entire
      file and wait for ordered extents that cover ranges that don't overlap the
      given range.
      
      Another special care is that we must skip file extent items that fall
      entirely outside the fsync range when copying inode items from the
      fs/subvolume tree into the log tree - this is to avoid races with ordered
      extent completion for extents falling outside the fsync range, which could
      cause us to end up with file extent items in the log tree that have
      overlapping ranges - for example if the fsync range is [1Mb, 2Mb], when
      we copy inode items we could copy an extent item for the range [0, 512K],
      then release the search path and before moving to the next leaf, an
      ordered extent for a range of [256Kb, 512Kb] completes - this would
      cause us to copy the new extent item for range [256Kb, 512Kb] into the
      log tree after we have copied one for the range [0, 512Kb] - the extents
      overlap, resulting in a corruption.
      
      So this change just does these steps:
      
      1) When the NO_HOLES feature is enabled it leaves the initial range
         intact - no longer sets it to [0, LLONG_MAX] when the full sync bit
         is set in the inode. If NO_HOLES is not enabled, always set the range
         to the full file range, just like before this change, to avoid missing file extent
         items representing holes after replaying the log (for both full and
         fast fsyncs);
      
      2) Make the hole detection code to operate only on the fsync range;
      
      3) Make the code that copies items from the fs/subvolume tree to skip
         copying file extent items that cover a range completely outside the
         range of the fsync.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      0a8068a3
    • btrfs: factor out inode items copy loop from btrfs_log_inode() · da447009
      Committed by Filipe Manana
      The function btrfs_log_inode() is quite large and so is its loop which
      iterates the inode items from the fs/subvolume tree and copies them into
      a log tree. Because this is a large loop inside a very large function
      and because an upcoming patch in this series needs to add some more logic
      inside that loop, move the loop into a helper function to make it a bit
      more manageable.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      da447009
    • btrfs: add helper to get the end offset of a file extent item · a5eeb3d1
      Committed by Filipe Manana
      Getting the end offset for a file extent item requires a bit of code since
      the extent can be either inline or regular/prealloc. There are some places
      all over the code base that open code this logic and in another patch
      later in this series it will be needed again. Therefore encapsulate this
      logic in a helper function and use it.
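
      A hedged sketch of what such a helper can look like (accessor names
      follow the usual btrfs item accessors; the sector size handling here is
      simplified and the exact details are an assumption):

      /* Sketch: return the exclusive end offset of the file extent item in
       * the given leaf slot, handling inline and regular/prealloc extents. */
      static u64 file_extent_item_end(struct extent_buffer *leaf, int slot,
                                      u64 key_offset)
      {
              struct btrfs_file_extent_item *fi;

              fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
              if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE)
                      /* inline data length, rounded up to the sector size */
                      return round_up(key_offset +
                                      btrfs_file_extent_ram_bytes(leaf, fi),
                                      4096);
              return key_offset + btrfs_file_extent_num_bytes(leaf, fi);
      }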
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a5eeb3d1
    • btrfs: fix missing file extent item for hole after ranged fsync · 95418ed1
      Committed by Filipe Manana
      When doing a fast fsync for a range that starts at an offset greater than
      zero, we can end up with a log that when replayed causes the respective
      inode to miss a file extent item representing a hole if we are not using the
      NO_HOLES feature. This is because for fast fsyncs we don't log any extents
      that cover a range different from the one requested in the fsync.
      
      Example scenario to trigger it:
      
        $ mkfs.btrfs -O ^no-holes -f /dev/sdd
        $ mount /dev/sdd /mnt
      
        # Create a file with a single 256K extent and fsync it to clear the full sync
        # bit in the inode - we want the msync below to trigger a fast fsync.
        $ xfs_io -f -c "pwrite -S 0xab 0 256K" -c "fsync" /mnt/foo
      
        # Force a transaction commit and wipe out the log tree.
        $ sync
      
        # Dirty 768K of data, increasing the file size to 1Mb, and flush only
        # the range from 256K to 512K without updating the log tree
        # (sync_file_range() does not trigger fsync, it only starts writeback
        # and waits for it to finish).
      
        $ xfs_io -c "pwrite -S 0xcd 256K 768K" /mnt/foo
        $ xfs_io -c "sync_range -abw 256K 256K" /mnt/foo
      
        # Now dirty the range from 768K to 1M again and sync that range.
        $ xfs_io -c "mmap -w 768K 256K"        \
                 -c "mwrite -S 0xef 768K 256K" \
                 -c "msync -s 768K 256K"       \
                 -c "munmap"                   \
                 /mnt/foo
      
        <power fail>
      
        # Mount to replay the log.
        $ mount /dev/sdd /mnt
        $ umount /mnt
      
        $ btrfs check /dev/sdd
        Opening filesystem to check...
        Checking filesystem on /dev/sdd
        UUID: 482fb574-b288-478e-a190-a9c44a78fca6
        [1/7] checking root items
        [2/7] checking extents
        [3/7] checking free space cache
        [4/7] checking fs roots
        root 5 inode 257 errors 100, file extent discount
        Found file extent holes:
             start: 262144, len: 524288
        ERROR: errors found in fs roots
        found 720896 bytes used, error(s) found
        total csum bytes: 512
        total tree bytes: 131072
        total fs tree bytes: 32768
        total extent tree bytes: 16384
        btree space waste bytes: 123514
        file data blocks allocated: 589824
          referenced 589824
      
      Fix this issue by setting the range to full (0 to LLONG_MAX) when the
      NO_HOLES feature is not enabled. This results in extra work being done
      but it gives the guarantee we don't end up with missing holes after
      replaying the log.
      
      CC: stable@vger.kernel.org # 4.19+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      95418ed1
    • btrfs: account ticket size at add/delete time · db161806
      Committed by Nikolay Borisov
      Instead of iterating all pending tickets on the normal/priority list to
      sum their total size the cost can be amortized across ticket addition/
      removal. This turns O(n) + O(m) (where n is the size of the normal list
      and m of the priority list) into O(1). This will mostly have effect in
      workloads that experience heavy flushing.
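
      Conceptually the change amortizes the sum as in the sketch below (the
      reclaim_size bookkeeping field is illustrative):

      /* Sketch: maintain a running total instead of walking both lists. */
      static void add_ticket(struct btrfs_space_info *space_info,
                             struct reserve_ticket *ticket)
      {
              list_add_tail(&ticket->list, &space_info->tickets);
              space_info->reclaim_size += ticket->bytes;      /* O(1) */
      }

      static void remove_ticket(struct btrfs_space_info *space_info,
                                struct reserve_ticket *ticket)
      {
              list_del_init(&ticket->list);
              space_info->reclaim_size -= ticket->bytes;      /* stays in sync */
      }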
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      db161806
    • btrfs: implement migratepage callback for data pages · f8e66081
      Committed by Roman Gushchin
      Currently btrfs doesn't provide a migratepage callback for data pages.
      It means that fallback_migrate_page() is used to migrate btrfs pages.
      
      fallback_migrate_page() cannot move dirty pages, instead it tries to
      flush them (in sync mode) or just fails (in async mode).
      
      In the sync mode pages which are scheduled to be processed by
      btrfs_writepage_fixup_worker() can't be effectively flushed by the
      migration code, because there is no established way to wait for the
      completion of the delayed work.
      
      It all leads to page migration failures.
      
      To fix it the patch implements a btrfs-specific migratepage callback,
      which is similar to iomap_migrate_page() used by some other fs, except
      it does take care of the PagePrivate2 flag which is used for data
      ordering purposes.
      Reviewed-by: Chris Mason <clm@fb.com>
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f8e66081
    • btrfs: Remove block_rsv parameter from btrfs_drop_snapshot · 0078a9f9
      Committed by Nikolay Borisov
      It's no longer used following 30d40577 ("btrfs: reloc: Also queue
      orphan reloc tree for cleanup to avoid BUG_ON()"), so just remove it.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      0078a9f9
    • btrfs: Remove __ prefix from btrfs_block_rsv_release · 63f018be
      Committed by Nikolay Borisov
      Currently the non-prefixed version is a simple wrapper used to hide
      the 4th argument of the prefixed version. This doesn't bring much value
      in practice and only makes the code harder to follow by adding another
      level of indirection. Rectify this by removing the __ prefix and
      have only one public function to release bytes from a block reservation.
      No semantic changes.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      63f018be
    • btrfs: relocation: Check cancel request after each extent found · f31ea088
      Committed by Qu Wenruo
      When relocating data block groups with tons of small extents, or large
      metadata block groups, there can be over 200,000 extents.
      
      We will iterate all extents of such block group in relocate_block_group(),
      where iteration itself can be kinda time-consuming.
      
      So when the user wants to cancel the balance, the extent iteration loop can
      be another target.
      
      This patch will add the cancelling check in the extent iteration loop of
      relocate_block_group() to make balance cancelling faster.
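
      A minimal sketch of the kind of check added inside that loop (the error
      code follows the usual balance-cancel convention; treat the placement as
      illustrative):

      /* Sketch: inside the per-extent loop of relocate_block_group() */
      while (1) {
              if (atomic_read(&fs_info->balance_cancel_req)) {
                      err = -ECANCELED;
                      break;
              }
              /* find and relocate the next extent of the block group */
      }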
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f31ea088
    • btrfs: relocation: Check cancel request after each data page read · 7f913c7c
      Committed by Qu Wenruo
      When relocating a data block group with large data extents, we spend
      most of our time in relocate_file_extent_cluster() at stage "moving data
      extents":
      
       1)               |  btrfs_relocate_block_group [btrfs]() {
       1)               |    relocate_file_extent_cluster [btrfs]() {
       1) $ 6586769 us  |    }
       1) + 18.260 us   |    relocate_file_extent_cluster [btrfs]();
       1) + 15.770 us   |    relocate_file_extent_cluster [btrfs]();
       1) $ 8916340 us  |  }
       1)               |  btrfs_relocate_block_group [btrfs]() {
       1)               |    relocate_file_extent_cluster [btrfs]() {
       1) $ 11611586 us |    }
       1) + 16.930 us   |    relocate_file_extent_cluster [btrfs]();
       1) + 15.870 us   |    relocate_file_extent_cluster [btrfs]();
       1) $ 14986130 us |  }
      
      To make data relocation cancelling quicker, add extra balance cancelling
      check after each page read in relocate_file_extent_cluster().
      
      Cleanup and error handling use the same mechanism as if the whole
      process had finished.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7f913c7c
    • btrfs: relocation: add error injection points for cancelling balance · 726a3421
      Committed by Qu Wenruo
      Introduce a new error injection point, should_cancel_balance().
      
      It's just a wrapper of atomic_read(&fs_info->balance_cancel_req), but
      allows us to override the return value.
      
      Currently there is only one location using this function:
      
      - btrfs_balance()
        It checks cancel before each block group.
      
      There are other locations checking fs_info->balance_cancel_req, but they
      are not used as an indicator to exit, so there is no need to use the
      wrapper.
      
      But there will be more locations coming, and some locations can cause
      kernel panic if not handled properly.  So introduce this error injection
      to provide better test interface.
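
      A hedged sketch of such an injection point, using the kernel's
      error-injection annotation (the exact wrapper and annotation type are
      assumptions based on the description above):

      #include <linux/error-injection.h>

      /* Sketch: a trivial wrapper whose boolean return value can be
       * overridden at runtime through the error injection framework. */
      static noinline int should_cancel_balance(struct btrfs_fs_info *fs_info)
      {
              return atomic_read(&fs_info->balance_cancel_req);
      }
      ALLOW_ERROR_INJECTION(should_cancel_balance, TRUE);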
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      726a3421
    • Btrfs: implement full reflink support for inline extents · 05a5a762
      Committed by Filipe Manana
      There are a few cases where we don't allow cloning an inline extent into
      the destination inode, returning -EOPNOTSUPP to user space. This was done
      to prevent several types of file corruption and because it's not very
      straightforward to deal with these cases, as they can't rely on simply
      copying the inline extent between leaves. Such cases require copying the
      inline extent's data into the respective page of the destination inode.
      
      Not supporting these cases makes it harder and more cumbersome to write
      applications/libraries that work on any filesystem with reflink support,
      since all these cases for which btrfs fails with -EOPNOTSUPP work just
      fine on xfs for example. These unsupported cases are also not documented
      anywhere, and explaining exactly which cases fail requires a rather
      technical understanding of btrfs internals (inline extents and when and
      where they can exist in a file), so it's not really user friendly.
      
      Also some test cases from fstests that use fsx, such as generic/522 for
      example, can sporadically fail because they trigger one of these cases,
      and fsx expects all operations to succeed.
      
      This change adds support for cloning all these cases by copying the
      inline extent's data into the respective page of the destination inode.
      
      With this change test case btrfs/112 from fstests fails because it
      expects some clone operations to fail, so it will be updated. Also a
      new test case that exercises all these previously unsupported cases
      will be added to fstests.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      05a5a762
    • Btrfs: simplify inline extent handling when doing reflinks · a61e1e0d
      Committed by Filipe Manana
      We can not reflink parts of an inline extent, we must always reflink the
      whole inline extent. We know that inline extents always start at file
      offset 0 and that they can never represent an amount of data larger than
      the filesystem's sector size (both compressed and uncompressed). We have
      also had the constraint that reflink operations must have a start offset
      that is aligned to the sector size and an end offset that is also aligned
      or that ends at the inode's i_size, so there's no way for user space to be able
      to do a reflink operation that will refer to only a part of an inline
      extent.
      
      Initially there was a bug in the inlining code that could allow compressed
      inline extents that encoded more than 1 page, but that was fixed in 2008
      by commit 70b99e69 ("Btrfs: Compression corner fixes") since that
      was problematic.
      
      So remove all the extent cloning code that deals with the possibility
      of cloning only partial inline extents.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a61e1e0d
    • Btrfs: move all reflink implementation code into its own file · 6a177381
      Committed by Filipe Manana
      The reflink code is quite large and has been living in ioctl.c since ever.
      It has grown over the years after many bug fixes and improvements, and
      since I'm planning on making some further improvements on it, it's time
      to get it better organized by moving into its own file, reflink.c
      (similar to what xfs does for example).
      
      This change only moves the code out of ioctl.c into the new file, it
      doesn't do any other change.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6a177381
    • btrfs: scrub: Replace zero-length array with flexible-array member · a8753ee3
      Committed by Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language
      extension to the C90 standard, but the preferred mechanism to declare
      variable-length types such as these ones is a flexible array
      member[1][2], introduced in C99:
      
      struct foo {
              int stuff;
              struct boo array[];
      };
      
      By making use of the mechanism above, we will get a compiler warning in
      case the flexible array does not occur last in the structure, which will
      help us prevent some kind of undefined behavior bugs from being
      inadvertently introduced[3] to the codebase from now on.
      
      Also, notice that, dynamic memory allocations won't be affected by this
      change:
      
        "Flexible array members have incomplete type, and so the sizeof operator
         may not be applied. As a quirk of the original implementation of
         zero-length arrays, sizeof evaluates to zero." [1]
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
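
      In diff form, the conversion is typically just the following (shown on
      the illustrative struct from the example above):

       struct foo {
               int stuff;
      -        struct boo array[0];
      +        struct boo array[];
       };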
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      a8753ee3
    • btrfs: rcu-string: Replace zero-length array with flexible-array member · 7593f4c5
      Committed by Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language
      extension to the C90 standard, but the preferred mechanism to declare
      variable-length types such as these ones is a flexible array
      member[1][2], introduced in C99:
      
      struct foo {
              int stuff;
              struct boo array[];
      };
      
      By making use of the mechanism above, we will get a compiler warning in
      case the flexible array does not occur last in the structure, which will
      help us prevent some kind of undefined behavior bugs from being
      inadvertently introduced[3] to the codebase from now on.
      
      Also, notice that, dynamic memory allocations won't be affected by this
      change:
      
       "Flexible array members have incomplete type, and so the sizeof operator
        may not be applied. As a quirk of the original implementation of
        zero-length arrays, sizeof evaluates to zero." [1]
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      7593f4c5
    • btrfs: delayed-inode: Replace zero-length array with flexible-array member · 17b238ac
      Committed by Gustavo A. R. Silva
      The current codebase makes use of the zero-length array language
      extension to the C90 standard, but the preferred mechanism to declare
      variable-length types such as these ones is a flexible array
      member[1][2], introduced in C99:
      
      struct foo {
              int stuff;
              struct boo array[];
      };
      
      By making use of the mechanism above, we will get a compiler warning in
      case the flexible array does not occur last in the structure, which will
      help us prevent some kind of undefined behavior bugs from being
      inadvertently introduced[3] to the codebase from now on.
      
      Also, notice that, dynamic memory allocations won't be affected by this
      change:
      
       "Flexible array members have incomplete type, and so the sizeof operator
        may not be applied. As a quirk of the original implementation of
        zero-length arrays, sizeof evaluates to zero." [1]
      
      This issue was found with the help of Coccinelle.
      
      [1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
      [2] https://github.com/KSPP/linux/issues/21
      [3] commit 76497732 ("cxgb3/l2t: Fix undefined behaviour")
      Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      17b238ac
    • btrfs: add RCU locks around block group initialization · 29566c9c
      Committed by Madhuparna Bhowmik
      The space_info list is normally RCU protected and should be traversed
      with rcu_read_lock held. There's a warning
      
        [29.104756] WARNING: suspicious RCU usage
        [29.105046] 5.6.0-rc4-next-20200305 #1 Not tainted
        [29.105231] -----------------------------
        [29.105401] fs/btrfs/block-group.c:2011 RCU-list traversed in non-reader section!!
      
      pointing out that the locking is missing in btrfs_read_block_groups.
      However this is not necessary as the list traversal happens at mount
      time when there's no other thread potentially accessing the list.
      
      To fix the warning and for consistency let's add the RCU lock/unlock,
      the code won't be affected much as it's doing some lightweight
      operations.
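
      The fix amounts to wrapping the traversal roughly as in this sketch
      (list and member names as used by the space_info code; illustrative):

      /* Sketch: take the RCU read lock around the space_info list walk in
       * btrfs_read_block_groups(), since the list is normally RCU protected. */
      rcu_read_lock();
      list_for_each_entry_rcu(space_info, &fs_info->space_info, list) {
              /* lightweight per-space_info setup done at mount time */
      }
      rcu_read_unlock();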
      Reported-by: Guenter Roeck <linux@roeck-us.net>
      Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      29566c9c
    • btrfs: Open code insert_extent_backref · 65cd6d9e
      Committed by Nikolay Borisov
      No need to add a level of indirection for hiding a simple 'if'. Open
      code insert_extent_backref in its sole caller. No functional changes.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      65cd6d9e
    • btrfs: Remove impossible BUG_ON in get_tree_block_key · c6600d9a
      Committed by Nikolay Borisov
      relocate_tree_blocks calls get_tree_block_key for a block iff that block
      has its ->key_ready equal false. Thus the BUG_ON in the latter function
      cannot ever be triggered so remove it.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c6600d9a
    • btrfs: balance: factor out convert profile validation · 5ba366c3
      Committed by David Sterba
      The validation follows the same steps for all three block group types,
      the existing helper validate_convert_profile can be enhanced and do more
      of the common things.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5ba366c3
    • btrfs: return void from csum_tree_block · c67b3892
      Committed by David Sterba
      Now that csum_tree_block is not returning any errors, we can make
      csum_tree_block return void and simplify callers.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      c67b3892
    • btrfs: simplify tree block checksumming loop · e9be5a30
      Committed by David Sterba
      The whole point of csum_tree_block is to iterate over all extent buffer
      pages and pass them to the checksumming functions. The bytes where the checksum is
      stored must be skipped, thus map_private_extent_buffer. This complicates
      further offset calculations.
      
      As the first page will be always present, checksum the relevant bytes
      unconditionally and then do a simple iteration over the remaining pages.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      e9be5a30
    • btrfs: inline checksum name and driver definitions · 59a0fcdb
      Committed by David Sterba
      There's an unnecessary indirection in the checksum definition table,
      pointer and the string itself. The strings are short and the overall
      size of one entry is now 24 bytes.
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      59a0fcdb
    • btrfs: Rename __btrfs_alloc_chunk to btrfs_alloc_chunk · 11c67b1a
      Committed by Nikolay Borisov
      Having btrfs_alloc_chunk doesn't bring any value since it
      encapsulates a lockdep assert and a call to find_next_chunk. Simply
      rename the internal __btrfs_alloc_chunk function to the public one
      and remove its 2nd parameter as all callers always pass the return
      value of find_next_chunk. Finally, migrate the call to
      lockdep_assert_held so as to not lose the check.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      11c67b1a
    • btrfs: fix btrfs_calc_reclaim_metadata_size calculation · fa121a26
      Committed by Josef Bacik
      I noticed while running my snapshot torture test that we were getting a
      lot of metadata chunks allocated with very little actually used.
      Digging into this we would commit the transaction, still not have enough
      space, and then force a chunk allocation.
      
      I noticed that we were barely flushing any delalloc at all, despite the
      fact that we had around 13gib of outstanding delalloc reservations.  It
      turns out this is because of our btrfs_calc_reclaim_metadata_size()
      calculation.  It _only_ takes into account the outstanding ticket sizes,
      which isn't the whole story.  In this particular workload we're slowly
      filling up the disk, which means our overcommit space will suddenly
      become a lot less, and our outstanding reservations will be well more
      than what we can handle.  However we are only flushing based on our
      ticket size, which is much less than we need to actually reclaim.
      
      So fix btrfs_calc_reclaim_metadata_size() to take into account the
      overage in the case that we've gotten less available space suddenly.
      This makes it so we attempt to reclaim a lot more delalloc space, which
      allows us to make our reservations and we no longer are allocating a
      bunch of needless metadata chunks.
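
      Conceptually the fix adds the shortfall on top of the ticket sum,
      roughly as in this simplified sketch (variable names are illustrative,
      not the exact formula used by the patch):

      /* Sketch: if outstanding reservations exceed what the space_info can
       * still cover (including allowed overcommit), reclaim that overage
       * too, not just the bytes the pending tickets are waiting for. */
      u64 to_reclaim = pending_ticket_bytes;          /* as before */
      u64 can_cover  = total_bytes + overcommit_allowed - used_and_reserved;

      if (outstanding_reservations > can_cover)
              to_reclaim += outstanding_reservations - can_cover;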
      
      CC: stable@vger.kernel.org # 4.4+
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      fa121a26
    • Btrfs: fix crash during unmount due to race with delayed inode workers · f0cc2cd7
      Committed by Filipe Manana
      During unmount we can have a job from the delayed inode items work queue
      still running, that can lead to at least two bad things:
      
      1) A crash, because the worker can try to create a transaction just
         after the fs roots were freed;
      
      2) A transaction leak, because the worker can create a transaction
         before the fs roots are freed and just after we committed the last
         transaction and after we stopped the transaction kthread.
      
      A stack trace example of the crash:
      
       [79011.691214] kernel BUG at lib/radix-tree.c:982!
       [79011.692056] invalid opcode: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
       [79011.693180] CPU: 3 PID: 1394 Comm: kworker/u8:2 Tainted: G        W         5.6.0-rc2-btrfs-next-54 #2
       (...)
       [79011.696789] Workqueue: btrfs-delayed-meta btrfs_work_helper [btrfs]
       [79011.697904] RIP: 0010:radix_tree_tag_set+0xe7/0x170
       (...)
       [79011.702014] RSP: 0018:ffffb3c84a317ca0 EFLAGS: 00010293
       [79011.702949] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
       [79011.704202] RDX: ffffb3c84a317cb0 RSI: ffffb3c84a317ca8 RDI: ffff8db3931340a0
       [79011.705463] RBP: 0000000000000005 R08: 0000000000000005 R09: ffffffff974629d0
       [79011.706756] R10: ffffb3c84a317bc0 R11: 0000000000000001 R12: ffff8db393134000
       [79011.708010] R13: ffff8db3931340a0 R14: ffff8db393134068 R15: 0000000000000001
       [79011.709270] FS:  0000000000000000(0000) GS:ffff8db3b6a00000(0000) knlGS:0000000000000000
       [79011.710699] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       [79011.711710] CR2: 00007f22c2a0a000 CR3: 0000000232ad4005 CR4: 00000000003606e0
       [79011.712958] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
       [79011.714205] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
       [79011.715448] Call Trace:
       [79011.715925]  record_root_in_trans+0x72/0xf0 [btrfs]
       [79011.716819]  btrfs_record_root_in_trans+0x4b/0x70 [btrfs]
       [79011.717925]  start_transaction+0xdd/0x5c0 [btrfs]
       [79011.718829]  btrfs_async_run_delayed_root+0x17e/0x2b0 [btrfs]
       [79011.719915]  btrfs_work_helper+0xaa/0x720 [btrfs]
       [79011.720773]  process_one_work+0x26d/0x6a0
       [79011.721497]  worker_thread+0x4f/0x3e0
       [79011.722153]  ? process_one_work+0x6a0/0x6a0
       [79011.722901]  kthread+0x103/0x140
       [79011.723481]  ? kthread_create_worker_on_cpu+0x70/0x70
       [79011.724379]  ret_from_fork+0x3a/0x50
       (...)
      
      The following diagram shows a sequence of steps that lead to the crash
      during unmount of the filesystem:
      
              CPU 1                                             CPU 2                                CPU 3
      
       btrfs_punch_hole()
         btrfs_btree_balance_dirty()
           btrfs_balance_delayed_items()
             --> sees
                 fs_info->delayed_root->items
                 with value 200, which is greater
                 than
                 BTRFS_DELAYED_BACKGROUND (128)
                 and smaller than
                 BTRFS_DELAYED_WRITEBACK (512)
             btrfs_wq_run_delayed_node()
               --> queues a job for
                   fs_info->delayed_workers to run
                   btrfs_async_run_delayed_root()
      
                                                                                                  btrfs_async_run_delayed_root()
                                                                                                    --> job queued by CPU 1
      
                                                                                                    --> starts picking and running
                                                                                                        delayed nodes from the
                                                                                                        prepare_list list
      
                                                       close_ctree()
      
                                                         btrfs_delete_unused_bgs()
      
                                                         btrfs_commit_super()
      
                                                           btrfs_join_transaction()
                                                             --> gets transaction N
      
                                                           btrfs_commit_transaction(N)
                                                             --> set transaction state
                                                                 to TRANS_STATE_COMMIT_START
      
                                                                                                   btrfs_first_prepared_delayed_node()
                                                                                                     --> picks delayed node X through
                                                                                                         the prepared_list list
      
                                                             btrfs_run_delayed_items()
      
                                                               btrfs_first_delayed_node()
                                                                 --> also picks delayed node X
                                                                     but through the node_list
                                                                     list
      
                                                               __btrfs_commit_inode_delayed_items()
                                                                  --> runs all delayed items from
                                                                      this node and drops the
                                                                      node's item count to 0
                                                                      through call to
                                                                      btrfs_release_delayed_inode()
      
                                                               --> finishes running any remaining
                                                                   delayed nodes
      
                                                             --> finishes transaction commit
      
                                                         --> stops cleaner and transaction threads
      
                                                         btrfs_free_fs_roots()
                                                           --> frees all roots and removes them
                                                               from the radix tree
                                                               fs_info->fs_roots_radix
      
                                                                                                   btrfs_join_transaction()
                                                                                                     start_transaction()
                                                                                                       btrfs_record_root_in_trans()
                                                                                                         record_root_in_trans()
                                                                                                           radix_tree_tag_set()
                                                                                                             --> crashes because
                                                                                                                 the root is not in
                                                                                                                 the radix tree
                                                                                                                 anymore
      
      If the worker is able to call btrfs_join_transaction() before the unmount
      task frees the fs roots, we end up leaking a transaction and all its
      resources, since after the call to btrfs_commit_super() and stopping the
      transaction kthread, we don't expect to have any transaction open anymore.
      
      When this situation happens the worker has a delayed node that has no
      more items to run, since the task calling btrfs_run_delayed_items(),
      which is doing a transaction commit, picks the same node and runs all
      its items first.
      
      We can not wait for the worker to complete when running delayed items
      through btrfs_run_delayed_items(), because we call that function in
      several phases of a transaction commit, and that could cause a deadlock
      because the worker calls btrfs_join_transaction() and the task doing the
      transaction commit may have already set the transaction state to
      TRANS_STATE_COMMIT_DOING.
      
      Also it's not possible to get into a situation where only some of the
      items of a delayed node are added to the fs/subvolume tree in the current
      transaction and the remaining ones in the next transaction, because when
      running the items of a delayed inode we lock its mutex, effectively
      waiting for the worker if the worker is running the items of the delayed
      node already.
      
      Since this can only cause issues when unmounting a filesystem, fix it in
      a simple way by waiting for any jobs on the delayed workers queue before
      calling btrfs_commit_super() at close_ctree(). This works because at this
      point no one can call btrfs_btree_balance_dirty() or
      btrfs_balance_delayed_items(), and if we end up waiting for any worker to
      complete, btrfs_commit_super() will commit the transaction created by the
      worker.
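
      A hedged sketch of the ordering described above, in close_ctree() (the
      flush helper name is an assumption; the point is only to drain the
      delayed-items workqueue before the final commit):

      /* Sketch: make sure no delayed-items job can still be running and
       * creating a transaction, then commit whatever it may have created. */
      btrfs_flush_workqueue(fs_info->delayed_workers);

      if (!sb_rdonly(fs_info->sb))
              btrfs_commit_super(fs_info);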
      
      CC: stable@vger.kernel.org # 4.4+
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f0cc2cd7