1. 23 September 2015, 1 commit
    • Btrfs: keep dropped roots in cache until transaction commit · 2b9dbef2
      Josef Bacik committed
      When dropping a snapshot we need to account for the qgroup changes.  If we drop
      the snapshot all in one go, the backref code will fail to find blocks from the
      snapshot we dropped, since it won't be able to find the root in the fs root
      cache.  This can lead to us failing to find refs from other roots that pointed
      at blocks in the now deleted root.  To handle this we must not remove the fs
      roots from the cache until after we process the qgroup operations.  Do this by
      adding dropped roots to a list on the transaction, and letting the transaction
      remove the roots at the same time it drops the commit roots (a sketch of the
      idea follows this entry).  This keeps all of the backref searching code properly
      in sync, and fixes a problem Mark was seeing with snapshot delete and qgroups.
      Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Tested-by: Holger Hoffstätte <holger.hoffstaette@googlemail.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      2b9dbef2
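      A minimal kernel-style sketch of the approach, assuming simplified struct
      layouts; the field and helper names are illustrative, not the literal patch:

        #include <linux/list.h>
        #include <linux/spinlock.h>

        struct txn_sketch {
                spinlock_t dropped_roots_lock;
                struct list_head dropped_roots; /* roots kept alive until commit */
        };

        /* called instead of freeing the fs root right away during snapshot drop */
        static void record_dropped_root(struct txn_sketch *trans,
                                        struct list_head *root_entry)
        {
                spin_lock(&trans->dropped_roots_lock);
                list_add_tail(root_entry, &trans->dropped_roots);
                spin_unlock(&trans->dropped_roots_lock);
        }

        /* called from the commit path, at the point the commit roots are dropped */
        static void free_dropped_roots(struct txn_sketch *trans,
                                       void (*drop_fn)(struct list_head *entry))
        {
                LIST_HEAD(local);
                struct list_head *entry, *tmp;

                spin_lock(&trans->dropped_roots_lock);
                list_splice_init(&trans->dropped_roots, &local);
                spin_unlock(&trans->dropped_roots_lock);

                list_for_each_safe(entry, tmp, &local) {
                        list_del_init(entry);
                        drop_fn(entry); /* remove from the fs-root cache and free */
                }
        }

      Until free_dropped_roots() runs, the backref code can still look the dropped
      root up in the cache, which is what the qgroup accounting needs.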
  2. 20 August 2015, 1 commit
    • Btrfs: check if previous transaction aborted to avoid fs corruption · 1f9b8c8f
      Filipe Manana committed
      While we are committing a transaction, it's possible the previous one is
      still finishing its commit and therefore we wait for it to finish first.
      However, we were not checking whether that previous transaction ended up
      getting aborted after we waited for it to commit, so we ended up committing
      the current transaction, which can lead to fs corruption because the new
      superblock can point to trees with one or more nodes/leaves that were never
      durably persisted.
      The following sequence diagram exemplifies how this is possible:
      
                CPU 0                                                        CPU 1
      
        transaction N starts
      
        (...)
      
        btrfs_commit_transaction(N)
      
          cur_trans->state = TRANS_STATE_COMMIT_START;
          (...)
          cur_trans->state = TRANS_STATE_COMMIT_DOING;
          (...)
      
          cur_trans->state = TRANS_STATE_UNBLOCKED;
          root->fs_info->running_transaction = NULL;
      
                                                                    btrfs_start_transaction()
                                                                       --> starts transaction N + 1
      
          btrfs_write_and_wait_transaction(trans, root);
            --> starts writing all new or COWed ebs created
                at transaction N
      
                                                                    creates some new ebs, COWs some
                                                                    existing ebs but doesn't COW or
                                                                    deletes eb X
      
                                                                    btrfs_commit_transaction(N + 1)
                                                                      (...)
                                                                      cur_trans->state = TRANS_STATE_COMMIT_START;
                                                                      (...)
                                                                      wait_for_commit(root, prev_trans);
                                                                        --> prev_trans == transaction N
      
          btrfs_write_and_wait_transaction() continues
          writing ebs
             --> fails writing eb X, we abort transaction N
                 and set bit BTRFS_FS_STATE_ERROR on
                 fs_info->fs_state, so no new transactions
                 can start after setting that bit
      
             cleanup_transaction()
               btrfs_cleanup_one_transaction()
                 wakes up task at CPU 1
      
                                                                      continues, doesn't abort because
                                                                      cur_trans->aborted (transaction N + 1)
                                                                      is zero, and no checks for bit
                                                                      BTRFS_FS_STATE_ERROR in fs_info->fs_state
                                                                      are made
      
                                                                      btrfs_write_and_wait_transaction(trans, root);
                                                                        --> succeeds, no errors during writeback
      
                                                                      write_ctree_super(trans, root, 0);
                                                                        --> succeeds
                                                                        --> we have now a superblock that points us
                                                                            to some root that uses eb X, which was
                                                                            never written to disk
      
      In this scenario, future attempts to read eb X from disk result in an
      error message like "parent transid verify failed on X wanted Y found Z".
      
      So fix this by aborting the current transaction if, after waiting for the
      previous transaction, we find that it was aborted (a sketch follows this
      entry).
      
      Cc: stable@vger.kernel.org
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: Josef Bacik <jbacik@fb.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      1f9b8c8f
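      A simplified sketch of the fix, with hypothetical names standing in for the
      real btrfs structures; the point is to propagate the previous transaction's
      abort status instead of committing on top of it:

        struct prev_txn_sketch {
                int aborted;            /* non-zero errno once the commit fails */
        };

        /* In btrfs_commit_transaction(), after wait_for_commit() returns for the
         * previous transaction, its abort status is stable and must be checked
         * before we publish a new superblock. */
        static int check_prev_after_wait(struct prev_txn_sketch *prev)
        {
                int ret = prev->aborted;

                if (ret)
                        return ret;     /* caller aborts: goto cleanup_transaction */
                return 0;
        }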
  3. 15 August 2015, 1 commit
  4. 09 August 2015, 1 commit
  5. 29 July 2015, 1 commit
  6. 23 July 2015, 1 commit
    • btrfs: Fix lockdep warning of btrfs_run_delayed_iputs() · 8a733013
      Zhao Lei committed
      Liu Bo <bo.li.liu@oracle.com> reported a lockdep warning of
      delayed_iput_sem in xfstests generic/241:
        [ 2061.345955] =============================================
        [ 2061.346027] [ INFO: possible recursive locking detected ]
        [ 2061.346027] 4.1.0+ #268 Tainted: G        W
        [ 2061.346027] ---------------------------------------------
        [ 2061.346027] btrfs-cleaner/3045 is trying to acquire lock:
        [ 2061.346027]  (&fs_info->delayed_iput_sem){++++..}, at:
        [<ffffffff814063ab>] btrfs_run_delayed_iputs+0x6b/0x100
        [ 2061.346027] but task is already holding lock:
        [ 2061.346027]  (&fs_info->delayed_iput_sem){++++..}, at: [<ffffffff814063ab>] btrfs_run_delayed_iputs+0x6b/0x100
        [ 2061.346027] other info that might help us debug this:
        [ 2061.346027]  Possible unsafe locking scenario:
      
        [ 2061.346027]        CPU0
        [ 2061.346027]        ----
        [ 2061.346027]   lock(&fs_info->delayed_iput_sem);
        [ 2061.346027]   lock(&fs_info->delayed_iput_sem);
        [ 2061.346027]
         *** DEADLOCK ***
      It happens rarely, about 1 in 400 runs in my test environment.
      
      The reason is recursion of btrfs_run_delayed_iputs():
        cleaner_kthread
        -> btrfs_run_delayed_iputs() *1
        -> get delayed_iput_sem lock *2
        -> iput()
        -> ...
        -> btrfs_commit_transaction()
        -> btrfs_run_delayed_iputs() *1
        -> get delayed_iput_sem lock (dead lock) *2
        *1: recursion of btrfs_run_delayed_iputs()
        *2: warning of lockdep about delayed_iput_sem
      
      When the fs is under high stress, new iputs may be added to the
      fs_info->delayed_iputs list while btrfs_run_delayed_iputs() is running, which
      causes a second btrfs_run_delayed_iputs() to run into
      down_read(&fs_info->delayed_iput_sem) again and triggers the above lockdep
      warning.
      
      This does not cause a real problem because both acquisitions are read locks,
      but we can fix it to silence the lockdep warning.
      
      Fix:
        Don't call btrfs_run_delayed_iputs() in btrfs_commit_transaction() for the
        cleaner_kthread thread, which breaks the recursion path above (see the
        sketch after this entry).
        cleaner_kthread already calls btrfs_run_delayed_iputs() explicitly, so it
        doesn't need btrfs_commit_transaction() to call it again; as a bonus this
        also avoids deepening the stack.
      
      Test:
        The above lockdep warning no longer appears after 1200 runs of generic/241
        with the patch applied.
      Reported-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      8a733013
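      A minimal sketch of the guard, assuming a simplified fs_info layout; names
      are illustrative and this is not the literal diff:

        #include <linux/sched.h>

        struct fs_info_sketch {
                struct task_struct *transaction_kthread;
                struct task_struct *cleaner_kthread;
        };

        /* In btrfs_commit_transaction(): the cleaner kthread already runs the
         * delayed iputs itself, so skip the recursive call for it (the
         * transaction kthread was already excluded before this change). */
        static void maybe_run_delayed_iputs(struct fs_info_sketch *fs_info,
                                            void (*run_iputs)(struct fs_info_sketch *))
        {
                if (current != fs_info->transaction_kthread &&
                    current != fs_info->cleaner_kthread)
                        run_iputs(fs_info); /* stands in for btrfs_run_delayed_iputs() */
        }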
  7. 12 July 2015, 1 commit
    • Btrfs: fix list transaction->pending_ordered corruption · d3efe084
      Filipe Manana committed
      When we call btrfs_commit_transaction(), we splice the "ordered" list of our
      transaction handle into the transaction's "pending_ordered" list, but we
      don't re-initialize the handle's "ordered" list, which means it still points
      to the same elements it did before the splice.  We then check if the current
      transaction's state is >= TRANS_STATE_COMMIT_START, and if it is we end up
      calling btrfs_end_transaction(), which simply splices the handle's "ordered"
      list into the transaction's "pending_ordered" list again, leaving multiple
      pointers to the same ordered extents.  This results in list corruption when
      we iterate, remove and free ordered extents at btrfs_wait_pending_ordered(),
      leading to dangling pointers / use-after-free issues.
      Similarly, btrfs_end_transaction() can in some cases end up calling
      btrfs_commit_transaction(), and both did a list splice of the transaction
      handle's "ordered" list into the transaction's "pending_ordered" list without
      re-initializing the handle's "ordered" list, resulting in exactly the same
      problem.
      
      This produces the following warning on a kernel with linked list
      debugging enabled:
      
      [109749.265416] ------------[ cut here ]------------
      [109749.266410] WARNING: CPU: 7 PID: 324 at lib/list_debug.c:59 __list_del_entry+0x5a/0x98()
      [109749.267969] list_del corruption. prev->next should be ffff8800ba087e20, but was fffffff8c1f7c35d
      (...)
      [109749.287505] Call Trace:
      [109749.288135]  [<ffffffff8145f077>] dump_stack+0x4f/0x7b
      [109749.298080]  [<ffffffff81095de5>] ? console_unlock+0x356/0x3a2
      [109749.331605]  [<ffffffff8104b3b0>] warn_slowpath_common+0xa1/0xbb
      [109749.334849]  [<ffffffff81260642>] ? __list_del_entry+0x5a/0x98
      [109749.337093]  [<ffffffff8104b410>] warn_slowpath_fmt+0x46/0x48
      [109749.337847]  [<ffffffff81260642>] __list_del_entry+0x5a/0x98
      [109749.338678]  [<ffffffffa053e8bf>] btrfs_wait_pending_ordered+0x46/0xdb [btrfs]
      [109749.340145]  [<ffffffffa058a65f>] ? __btrfs_run_delayed_items+0x149/0x163 [btrfs]
      [109749.348313]  [<ffffffffa054077d>] btrfs_commit_transaction+0x36b/0xa10 [btrfs]
      [109749.349745]  [<ffffffff81087310>] ? trace_hardirqs_on+0xd/0xf
      [109749.350819]  [<ffffffffa055370d>] btrfs_sync_file+0x36f/0x3fc [btrfs]
      [109749.351976]  [<ffffffff8118ec98>] vfs_fsync_range+0x8f/0x9e
      [109749.360341]  [<ffffffff8118ecc3>] vfs_fsync+0x1c/0x1e
      [109749.368828]  [<ffffffff8118ee1d>] do_fsync+0x34/0x4e
      [109749.369790]  [<ffffffff8118f045>] SyS_fsync+0x10/0x14
      [109749.370925]  [<ffffffff81465197>] system_call_fastpath+0x12/0x6f
      [109749.382274] ---[ end trace 48e0d07f7c03d95a ]---
      
      On a non-debug kernel this leads to invalid memory accesses, causing a
      crash.  Fix this by using list_splice_init() instead of list_splice() in
      btrfs_commit_transaction() and btrfs_end_transaction() (a sketch follows
      this entry).
      
      Cc: stable@vger.kernel.org
      Fixes: 50d9aa99 ("Btrfs: make sure logged extents complete in the current transaction V3")
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      d3efe084
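      A short sketch of the difference, using the real list helpers from
      <linux/list.h> but an illustrative caller:

        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* Move the handle's ordered extents onto the transaction.  list_splice()
         * alone would leave handle_ordered still pointing at the moved entries,
         * so a second splice from the same handle would corrupt the list;
         * list_splice_init() also reinitializes the (now empty) source list. */
        static void move_pending_ordered(struct list_head *handle_ordered,
                                         struct list_head *txn_pending_ordered,
                                         spinlock_t *txn_lock)
        {
                spin_lock(txn_lock);
                list_splice_init(handle_ordered, txn_pending_ordered);
                spin_unlock(txn_lock);
        }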
  8. 11 June 2015, 4 commits
  9. 03 June 2015, 1 commit
    • Btrfs: fix -ENOSPC when finishing block group creation · 4fbcdf66
      Filipe Manana committed
      While creating a block group, we often end up getting ENOSPC while updating
      the chunk tree, which leads to a transaction abort that produces a trace
      like the following:
      
      [30670.116368] WARNING: CPU: 4 PID: 20735 at fs/btrfs/super.c:260 __btrfs_abort_transaction+0x52/0x106 [btrfs]()
      [30670.117777] BTRFS: Transaction aborted (error -28)
      (...)
      [30670.163567] Call Trace:
      [30670.163906]  [<ffffffff8142fa46>] dump_stack+0x4f/0x7b
      [30670.164522]  [<ffffffff8108b6a2>] ? console_unlock+0x361/0x3ad
      [30670.165171]  [<ffffffff81045ea5>] warn_slowpath_common+0xa1/0xbb
      [30670.166323]  [<ffffffffa035daa7>] ? __btrfs_abort_transaction+0x52/0x106 [btrfs]
      [30670.167213]  [<ffffffff81045f05>] warn_slowpath_fmt+0x46/0x48
      [30670.167862]  [<ffffffffa035daa7>] __btrfs_abort_transaction+0x52/0x106 [btrfs]
      [30670.169116]  [<ffffffffa03743d7>] btrfs_create_pending_block_groups+0x101/0x130 [btrfs]
      [30670.170593]  [<ffffffffa038426a>] __btrfs_end_transaction+0x84/0x366 [btrfs]
      [30670.171960]  [<ffffffffa038455c>] btrfs_end_transaction+0x10/0x12 [btrfs]
      [30670.174649]  [<ffffffffa036eb6b>] btrfs_check_data_free_space+0x11f/0x27c [btrfs]
      [30670.176092]  [<ffffffffa039450d>] btrfs_fallocate+0x7c8/0xb96 [btrfs]
      [30670.177218]  [<ffffffff812459f2>] ? __this_cpu_preempt_check+0x13/0x15
      [30670.178622]  [<ffffffff81152447>] vfs_fallocate+0x14c/0x1de
      [30670.179642]  [<ffffffff8116b915>] ? __fget_light+0x2d/0x4f
      [30670.180692]  [<ffffffff81152863>] SyS_fallocate+0x47/0x62
      [30670.186737]  [<ffffffff81435b32>] system_call_fastpath+0x12/0x17
      [30670.187792] ---[ end trace 0373e6b491c4a8cc ]---
      
      This is because we don't do proper space reservation for the chunk block
      reserve when we have multiple tasks allocating chunks in parallel.
      
      Block group creation has two phases: the first phase essentially checks
      whether there is enough space in the system space_info, allocating a new
      system chunk if there isn't, while the second phase updates the device,
      extent and chunk trees.  However, because the updates to the chunk tree
      happen in the second phase, if we have N tasks, each with its own
      transaction handle, allocating new chunks in parallel, and there is only
      enough space in the system space_info to allocate M chunks, where M < N,
      then none of the tasks allocates a new system chunk in the first phase and
      N - M tasks get -ENOSPC when attempting to update the chunk tree in phase 2
      if they need to COW any nodes/leaves of the chunk tree.
      
      Fix this by doing proper reservation in the chunk block reserve (a sketch of
      the idea follows this entry).
      
      The issue could be reproduced by running fstests generic/038 in a loop,
      which eventually triggers the problem.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      4fbcdf66
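      A conceptual sketch only, with illustrative names (it shows the
      reserve-before-phase-2 idea, not the actual btrfs reservation API):

        #include <linux/errno.h>
        #include <linux/types.h>

        struct chunk_rsv_sketch {
                u64 reserved;   /* bytes set aside for chunk/device/extent tree COW */
        };

        /* Phase 1: while checking the system space_info, also reserve the
         * metadata needed for this task's phase-2 tree updates.  If that
         * reservation fails, we fail here instead of aborting the transaction
         * later in phase 2. */
        static int reserve_chunk_metadata(struct chunk_rsv_sketch *rsv,
                                          u64 bytes, u64 *available)
        {
                if (*available < bytes)
                        return -ENOSPC;         /* caught before phase 2 */
                *available -= bytes;
                rsv->reserved += bytes;
                return 0;
        }

        /* Phase 2 consumes rsv->reserved; whatever is left over is released
         * once the pending block groups have been created. */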
  10. 11 April 2015, 4 commits
    • Btrfs: allow block group cache writeout outside critical section in commit · 1bbc621e
      Chris Mason committed
      We loop through all of the dirty block groups during commit and write
      the free space cache.  In order to make sure the cache is correct, we do
      this while no other writers are allowed in the commit.
      
      If a large number of block groups are dirty, this can introduce long
      stalls during the final stages of the commit, which can block new procs
      trying to change the filesystem.
      
      This commit changes the block group cache writeout to take appropriate
      locks and allow it to run earlier in the commit.  We'll still have to
      redo some of the block groups, but it means we can get most of the work
      out of the way without blocking the entire FS.
      Signed-off-by: Chris Mason <clm@fb.com>
      1bbc621e
    • Btrfs: reserve space for block groups · cb723e49
      Josef Bacik committed
      This changes our delayed refs calculations to include the space needed
      to write back dirty block groups.
      Signed-off-by: Chris Mason <clm@fb.com>
      cb723e49
    • Btrfs: account for crcs in delayed ref processing · 1262133b
      Josef Bacik committed
      As we delete large extents, we end up doing huge amounts of COW in order
      to delete the corresponding crcs.  This adds accounting so that we keep
      track of that space, and flushing of delayed refs so that we don't build
      up too much delayed crc work (rough arithmetic follows this entry).
      
      This helps limit the delayed work that must be done at commit time and
      tries to avoid ENOSPC aborts because the crcs eat all the global
      reserves.
      Signed-off-by: Chris Mason <clm@fb.com>
      1262133b
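      Rough arithmetic behind the accounting, as an illustrative sketch (the
      constants and helper are not the real btrfs code): deleting an extent means
      deleting one checksum per sector it covered, and those checksum items are
      packed into btree leaves, so a large extent can force a lot of csum-tree COW.

        #include <linux/kernel.h>
        #include <linux/types.h>

        /* e.g. bytes = 1 GiB, sectorsize = 4096, with 4-byte crc32c items packed
         * a few thousand to a 16K leaf; the exact packing is illustrative here */
        static u64 csum_leaves_for_bytes(u64 bytes, u32 sectorsize,
                                         u32 csums_per_leaf)
        {
                u64 nr_csums = DIV_ROUND_UP(bytes, (u64)sectorsize);

                return DIV_ROUND_UP(nr_csums, (u64)csums_per_leaf);
        }

      Tracking an estimate like this lets the delayed-ref code flush early instead
      of letting all of that COW land on the global reserve at commit time.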
    • btrfs: actively run the delayed refs while deleting large files · 28ed1345
      Chris Mason committed
      When we are deleting large files with large extents, we are building up
      a huge set of delayed refs for processing.  Truncate isn't checking
      often enough to see if we need to back off and process those, or let
      a commit proceed.
      
      The end result is long stalls after the rm, and very long commit times.
      During the commits, other processes back up waiting to start new
      transactions and we get into trouble.
      Signed-off-by: Chris Mason <clm@fb.com>
      28ed1345
  11. 17 March 2015, 1 commit
    • Btrfs: prepare block group cache before writing · dcdf7f6d
      Josef Bacik committed
      Writing the block group cache will modify the extent tree quite a bit because it
      truncates the old space cache and pre-allocates new stuff.  To try and cut down
      on the churn, let's do the setup dance first; then later on hopefully we can
      avoid looping with newly dirtied roots.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      dcdf7f6d
  12. 14 March 2015, 2 commits
    • Btrfs: fix ASSERT(list_empty(&cur_trans->dirty_bgs_list) · ea526d18
      Josef Bacik committed
      Dave could hit this assert consistently running btrfs/078.  This happens because
      when we update the block groups we could truncate the free space, which tries
      to delete the csums for that range and dirties the csum root.  For this to
      happen we have to have already written out the csum root, so it's kind of hard
      to hit this case.  This patch fixes it by changing the logic to only write the
      dirty block groups if the dirty_cowonly_roots list is empty (a sketch follows
      this entry).  This gives us the same effect as before, since we add the extent
      root last, and covers the case where we dirty some other root again but not the
      extent root.  Thanks,
      Reported-by: David Sterba <dsterba@suse.cz>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      ea526d18
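      A minimal sketch of the changed condition, with illustrative names (not the
      literal diff from the cow-only root processing loop):

        #include <linux/list.h>

        /* Only write the dirty block groups once every other cow-only root has
         * been processed; if some root gets re-dirtied, we loop again and the
         * block groups are written on a later pass, instead of tripping the
         * list_empty(&cur_trans->dirty_bgs_list) assertion. */
        static int maybe_write_dirty_bgs(struct list_head *dirty_cowonly_roots,
                                         int (*write_bgs)(void))
        {
                if (!list_empty(dirty_cowonly_roots))
                        return 0;       /* not yet, another pass will follow */
                return write_bgs();     /* stand-in for btrfs_write_dirty_block_groups() */
        }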
    • Btrfs: catch transaction abortion after waiting for it · b4924a0f
      Liu Bo committed
      This problem is uncovered by a test case: http://patchwork.ozlabs.org/patch/244297.
      
      Fsync() can report success when it actually doesn't.  When we have several
      threads running fsync() at the same time, and in one fsync() we get a
      transaction abort due to some problem (in the test case, disk failures),
      the other fsync()s may still return successfully, which makes userspace
      programs think that their data is now safely flushed to disk.
      
      This happens because after those fsync()s fail btrfs_sync_log() due to the
      disk failures, they fall back to btrfs_commit_transaction(), where they find
      that a transaction is already being committed, so they just call
      wait_for_commit() and return.  Note that we do check "trans->aborted" in
      btrfs_end_transaction(), but it's likely that the error has not been raised
      yet at that point; only after wait_for_commit() are we sure whether the
      transaction was committed successfully.
      
      This adds the necessary check, and the test now passes.
      Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      b4924a0f
  13. 06 March 2015, 1 commit
  14. 04 March 2015, 1 commit
  15. 17 February 2015, 1 commit
  16. 15 February 2015, 1 commit
    • btrfs: Fix out-of-space bug · 13212b54
      Zhao Lei committed
      Btrfs reports NO_SPACE when we create and remove files several times, and we
      can't write to the filesystem until it is mounted again.
      
      Steps to reproduce:
       1: Create a single-dev btrfs fs with default options
       2: Write a file into it to take up most of the fs space
       3: Delete the above file
       4: Wait about 100s for the chunk to be removed
       5: goto 2
      
      The script is as follows:
       #!/bin/bash
      
       # Recommend 1.2G space, too large disk will make test slow
       DEV="/dev/sda16"
       MNT="/mnt/tmp"
      
       dev_size="$(lsblk -bn -o SIZE "$DEV")" || exit 2
       file_size_m=$((dev_size * 75 / 100 / 1024 / 1024))
      
       echo "Loop write ${file_size_m}M file on $((dev_size / 1024 / 1024))M dev"
      
       for ((i = 0; i < 10; i++)); do umount "$MNT" 2>/dev/null; done
       echo "mkfs $DEV"
       mkfs.btrfs -f "$DEV" >/dev/null || exit 2
       echo "mount $DEV $MNT"
       mount "$DEV" "$MNT" || exit 2
      
       for ((loop_i = 0; loop_i < 20; loop_i++)); do
           echo
           echo "loop $loop_i"
      
           echo "dd file..."
           cmd=(dd if=/dev/zero of="$MNT"/file0 bs=1M count="$file_size_m")
           "${cmd[@]}" 2>/dev/null || {
               # NO_SPACE error triggered
               echo "dd failed: ${cmd[*]}"
               exit 1
           }
      
           echo "rm file..."
           rm -f "$MNT"/file0 || exit 2
      
           for ((i = 0; i < 10; i++)); do
               df "$MNT" | tail -1
               sleep 10
           done
       done
      
      Reason:
       It is triggered by commit 47ab2a6c, which removes empty block groups
       automatically, but the root cause is not in that patch.  The code before it
       worked well only because btrfs did not have to create and delete chunks so
       many times.  The bug has several causes, any of which can trigger it.
      
      Reason 1:
       When we remove some contiguous chunks but leave other chunks after them,
       that disk space should be reusable when chunks are recreated, but in the
       current code only the first creation succeeds.
       Fixed by Forrest Liu <forrestl@synology.com> in:
       Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
      
      Reason 2:
       contains_pending_extent() returns a wrong value in its calculation.
       Fixed by Forrest Liu <forrestl@synology.com> in:
       Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
      
      Reason 3:
       btrfs_check_data_free_space() tries to commit the transaction and retry the
       chunk allocation when the first allocation fails, but space_info->full was
       set by the first allocation, preventing the retry from allocating.
       Fixed in this patch by clearing space_info->full at transaction commit
       (see the sketch after this entry).
      
       Tested several times with the above script.
      
      Changelog v3->v4:
       Use a lightweight int instead of atomic_t to record have_remove_bgs in the
       transaction, suggested by:
       Josef Bacik <jbacik@fb.com>
      
      Changelog v2->v3:
       v2 fixed the bug by adding more transaction commits, but we only need to
       reclaim space when we really have no space for a new chunk, noticed by:
       Filipe David Manana <fdmanana@gmail.com>
      
       Actually, our code already has this type of commit-and-retry; we only need
       to make it work with removed block groups.
       v3 fixes the bug that way.
      
      Changelog v1->v2:
       v1 introduced a new bug when deleting and creating a chunk in the same disk
       space within the same transaction, noticed by:
       Filipe David Manana <fdmanana@gmail.com>
       v2 fixes this bug by committing the transaction after removing block groups.
      Reported-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Suggested-by: Filipe David Manana <fdmanana@gmail.com>
      Suggested-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Zhao Lei <zhaolei@cn.fujitsu.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      13212b54
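      A minimal sketch of the Reason-3 part of the fix, with illustrative names
      (the idea: record on the transaction whether block groups were removed and
      clear the full flags of the space_infos at commit time):

        struct space_info_sketch {
                int full;       /* set once chunk allocation failed for lack of
                                 * unallocated device space */
        };

        /* Called from the commit path.  If this transaction removed block
         * groups, device space was freed, so a previously "full" space_info
         * may be able to allocate a chunk again on retry. */
        static void clear_full_after_bg_removal(struct space_info_sketch *infos,
                                                int nr_infos, int have_removed_bgs)
        {
                int i;

                if (!have_removed_bgs)
                        return;
                for (i = 0; i < nr_infos; i++)
                        infos[i].full = 0;
        }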
  17. 22 January 2015, 2 commits
    • Btrfs: track dirty block groups on their own list · ce93ec54
      Josef Bacik committed
      Currently any time we try to update the block groups on disk we walk _all_
      block groups and check the ->dirty flag to see if it is set.  This function
      can get called several times during a commit.  So if you have several terabytes
      of data you will be a very sad panda, as we loop through _all_ of the block
      groups several times, which makes the commit take a while and slows down the
      rest of the file system operations.
      
      This patch introduces a dirty list that block groups get added to when they
      are dirtied for the first time (a sketch follows this entry).  Then we simply
      update any block groups that have been dirtied since the last time we called
      btrfs_write_dirty_block_groups.  This also allows us to clean up how we write
      the free space cache out, making it much cleaner.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      ce93ec54
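      A small sketch of the "dirty on first touch" pattern, assuming simplified
      structures; the fields mirror but are not the real btrfs definitions:

        #include <linux/list.h>
        #include <linux/spinlock.h>

        struct bg_sketch {
                struct list_head dirty_list;    /* empty while the bg is clean */
        };

        struct txn_bg_sketch {
                spinlock_t dirty_bgs_lock;
                struct list_head dirty_bgs;     /* only bgs touched this transaction */
        };

        /* Called whenever a block group's usage changes; the bg is queued only
         * the first time, so the commit walks just the dirty ones instead of
         * every block group in the filesystem. */
        static void mark_bg_dirty(struct txn_bg_sketch *trans, struct bg_sketch *bg)
        {
                spin_lock(&trans->dirty_bgs_lock);
                if (list_empty(&bg->dirty_list))
                        list_add_tail(&bg->dirty_list, &trans->dirty_bgs);
                spin_unlock(&trans->dirty_bgs_lock);
        }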
    • Btrfs: change how we track dirty roots · e7070be1
      Josef Bacik committed
      I've been overloading root->dirty_list to keep track of dirty roots and of which
      roots need to have their commit roots switched at transaction commit time.  This
      could cause us to lose an update to the root, which could corrupt the file
      system.  To fix this, use a state bit to know if the root is dirty, and if it
      isn't set go ahead and move the root to the dirty list (see the sketch after
      this entry).  This way, if we re-dirty the root after adding it to the
      switch_commit list, we make sure to update it.  This also makes the extent root
      always the last root on the dirty list, to try and keep the amount of churn
      down at this point in the commit.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      e7070be1
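      A sketch of the state-bit idea using the kernel's atomic bit helpers; the
      bit name and surrounding structure are illustrative:

        #include <linux/bitops.h>
        #include <linux/list.h>

        #define ROOT_DIRTY_BIT_SKETCH  0        /* hypothetical bit number */

        struct root_sketch {
                unsigned long state;
                struct list_head dirty_list;
        };

        /* Dirtying is idempotent: the bit records "already on the dirty list",
         * so a root that is dirtied again is only queued once per round. */
        static void add_dirty_root(struct root_sketch *root,
                                   struct list_head *dirty_roots)
        {
                if (!test_and_set_bit(ROOT_DIRTY_BIT_SKETCH, &root->state))
                        list_move_tail(&root->dirty_list, dirty_roots);
        }

        /* Clearing the bit when the commit root is switched means a later
         * re-dirty of the same root queues it again instead of being lost. */
        static void finish_dirty_root(struct root_sketch *root)
        {
                clear_bit(ROOT_DIRTY_BIT_SKETCH, &root->state);
        }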
  18. 21 January 2015, 1 commit
  19. 22 November 2014, 1 commit
    • Btrfs: make sure logged extents complete in the current transaction V3 · 50d9aa99
      Josef Bacik committed
      Liu Bo pointed out that my previous fix would lose the generation update in the
      scenario I described.  It is actually much worse than that: we could lose the
      entire extent if we lose power right after the transaction commits.  Consider
      the following
      
      write extent 0-4k
      log extent in log tree
      commit transaction
      	< power fail happens here
      ordered extent completes
      
      We would lose the 0-4k extent because it hasn't updated the actual fs tree, and
      the transaction commit will reset the log so it isn't replayed.  If we lose
      power before the transaction commit we are safe, otherwise we are not.
      
      Fix this by keeping track of all extents we logged in this transaction.  Then,
      when we go to commit the transaction, make sure we wait for all of those
      ordered extents to complete before proceeding (a sketch follows this entry).
      This makes sure that if we lose power after the transaction commit we still
      have our data.  This also fixes the problem of the improperly updated extent
      generation.  Thanks,
      
      cc: stable@vger.kernel.org
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      50d9aa99
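      A sketch of the tracking, with illustrative names (this is the mechanism
      that the later d3efe084 fix above adjusts: ordered extents whose metadata
      was logged get queued on the transaction and the commit waits for them):

        #include <linux/list.h>

        struct logged_ordered_sketch {
                struct list_head trans_list;    /* linked on the transaction */
                int complete;                   /* set when the ordered IO finishes */
        };

        /* At log time: remember the ordered extent on the current transaction. */
        static void track_logged_extent(struct list_head *txn_pending_ordered,
                                        struct logged_ordered_sketch *oe)
        {
                list_add_tail(&oe->trans_list, txn_pending_ordered);
        }

        /* At commit time: don't write the superblock (which resets the log)
         * until every logged ordered extent has completed; the caller would
         * block until this returns true. */
        static int all_logged_extents_complete(struct list_head *txn_pending_ordered)
        {
                struct logged_ordered_sketch *oe;

                list_for_each_entry(oe, txn_pending_ordered, trans_list)
                        if (!oe->complete)
                                return 0;
                return 1;
        }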
  20. 21 November 2014, 2 commits
    • Btrfs: make find_first_extent_bit be able to cache any state · e38e2ed7
      Filipe Manana committed
      Right now the only caller of find_first_extent_bit() that is interested
      in caching extent states (transaction or log commit) never gets an extent
      state cached.  This is because find_first_extent_bit() only caches states
      that have at least one of the flags EXTENT_IOBITS or EXTENT_BOUNDARY, and
      the transaction/log commit caller always passes a tree that never has
      extent states with any of those flags (they can only have one of the
      following flags: EXTENT_DIRTY, EXTENT_NEW or EXTENT_NEED_WAIT).
      
      This change, together with the following one in the patch series (titled
      "Btrfs: avoid returning -ENOMEM in convert_extent_bit() too early"), will
      help significantly reduce the chances of calls to convert_extent_bit()
      failing with -ENOMEM when called from the transaction/log commit code.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      e38e2ed7
    • Btrfs: deal with convert_extent_bit errors to avoid fs corruption · 663dfbb0
      Filipe Manana committed
      When committing a transaction or a log, we look for btree extents that
      need to be durably persisted by searching for ranges in an io tree that
      have some bits set (EXTENT_DIRTY or EXTENT_NEW).  We then attempt to clear
      those bits and set the EXTENT_NEED_WAIT bit, with calls to the function
      convert_extent_bit, and then start writeback for the extents.
      
      That function however can return an error (at the moment only -ENOMEM
      is possible, especially when it does GFP_ATOMIC allocation requests
      through alloc_extent_state_atomic).  That means the ranges didn't get
      the EXTENT_NEED_WAIT bit set (or at least not for the whole range),
      which in turn means a call to btrfs_wait_marked_extents() won't find
      those ranges for which we started writeback, causing a transaction
      commit or a log commit to persist a new superblock without waiting
      for the writeback of extents in that range to finish first.
      
      Therefore, if a crash happens after persisting the new superblock and
      before writeback finishes, we have a superblock pointing to roots that
      weren't fully persisted, or roots that point to nodes or leaves that weren't
      fully persisted, causing all sorts of unexpected/bad behaviour as we end up
      reading garbage from disk, or the content of some node/leaf from a past
      generation that got cowed or deleted and is no longer valid (for this latter
      case we end up getting error messages like "parent transid verify failed on
      X wanted Y found Z" when reading btree nodes/leaves from disk).
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      663dfbb0
  21. 12 November 2014, 3 commits
    • btrfs: introduce pending action: commit · d51033d0
      David Sterba committed
      In some contexts, like in sysfs handlers, we don't want to trigger a
      transaction commit: it's a heavy operation and we don't know what external
      locks may be taken.  Instead, make it possible to finish the operation
      through the sync syscall or the SYNC_FS ioctl.
      Signed-off-by: David Sterba <dsterba@suse.cz>
      d51033d0
    • btrfs: switch inode_cache option handling to pending changes · 7e1876ac
      David Sterba committed
      The pending mount option(s) now share a namespace and bits with the normal
      options, and the existing one (inode_cache) is unset unconditionally at
      each transaction commit.
      
      Introduce a separate namespace for pending changes and enhance the
      descriptions of the intended changes to use separate bits for each
      action.
      Signed-off-by: David Sterba <dsterba@suse.cz>
      7e1876ac
    • btrfs: add support for processing pending changes · 572d9ab7
      David Sterba committed
      There are some actions that modify global filesystem state but cannot be
      performed at the time of the request, only later at transaction commit
      time, when the filesystem is in a known state.
      
      Examples include enabling new incompat features on the fly or issuing a
      transaction commit from unsafe contexts such as sysfs handlers (a sketch of
      the pattern follows this entry).
      Signed-off-by: David Sterba <dsterba@suse.cz>
      572d9ab7
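      A sketch of the deferred-change pattern using the kernel's atomic bit
      helpers; the bit names and the apply function are illustrative:

        #include <linux/bitops.h>

        #define PENDING_SET_INODE_CACHE_SKETCH  0       /* hypothetical bits */
        #define PENDING_COMMIT_SKETCH           1

        static void apply_inode_cache_option(void)
        {
                /* placeholder: flip the runtime mount option here */
        }

        /* Request side: safe from sysfs handlers and other unsafe contexts,
         * since it only flips a bit. */
        static void request_pending_change(unsigned long *pending, int bit)
        {
                set_bit(bit, pending);
        }

        /* Commit side: the filesystem is in a known state here, so each
         * requested change is applied exactly once. */
        static void apply_pending_changes(unsigned long *pending)
        {
                if (test_and_clear_bit(PENDING_SET_INODE_CACHE_SKETCH, pending))
                        apply_inode_cache_option();
                /* a pending "commit" needs no extra work: reaching this point
                 * means the requested transaction commit is happening */
                test_and_clear_bit(PENDING_COMMIT_SKETCH, pending);
        }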
  22. 04 October 2014, 2 commits
    • Btrfs: fix race in WAIT_SYNC ioctl · 42383020
      Sage Weil committed
      We check whether transid is already committed via last_trans_committed and
      then search through trans_list for pending transactions.  If
      last_trans_committed is updated by btrfs_commit_transaction after we check
      it (there is no locking), we will fail to find the committed transaction
      and return EINVAL to the caller.  This has been observed occasionally by
      ceph-osd (which uses this ioctl heavily).
      
      Fix by rechecking whether the provided transid <= last_trans_committed
      after the search fails, and if so returning 0 (a sketch follows this entry).
      Signed-off-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      42383020
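      A sketch of the recheck, with illustrative names; the real code lives in
      the wait-for-commit path behind the WAIT_SYNC ioctl:

        #include <linux/compiler.h>
        #include <linux/errno.h>
        #include <linux/types.h>

        /* last_trans_committed points at a value updated concurrently by the
         * committing task; the second READ_ONCE() is what closes the race. */
        static int wait_sync_lookup(u64 transid, u64 *last_trans_committed,
                                    int found_in_trans_list)
        {
                if (transid <= READ_ONCE(*last_trans_committed))
                        return 0;               /* already durable */
                if (found_in_trans_list)
                        return 1;               /* caller waits on that transaction */
                /* Not found on the list: it may have committed between the
                 * first check and the search, so recheck before failing. */
                if (transid <= READ_ONCE(*last_trans_committed))
                        return 0;
                return -EINVAL;
        }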
    • Btrfs: be aware of btree inode write errors to avoid fs corruption · 656f30db
      Filipe Manana committed
      While we have a transaction ongoing, the VM might decide at any time
      to call btree_inode->i_mapping->a_ops->writepages(), which will start
      writeback of dirty pages belonging to btree nodes/leaves.  This call
      might return an error, or the writeback might finish with an error,
      before we attempt to commit the running transaction.  If this happens,
      we might have no way of knowing that such an error happened when we are
      committing the transaction, because the pages might no longer be marked
      dirty nor tagged for writeback (if a subsequent modification to the
      extent buffer didn't happen before the transaction commit), which makes
      filemap_fdata[write|wait]_range unable to find such pages (even if
      they're marked with SetPageError).
      So if this happens we must abort the transaction, otherwise we commit
      a super block with btree roots that point to btree nodes/leaves whose
      content on disk is invalid - either garbage or the content of some
      node/leaf from a past generation that got cowed or deleted and is no
      longer valid (for this latter case we end up getting error messages like
      "parent transid verify failed on 10826481664 wanted 25748 found 29562"
      when reading btree nodes/leaves from disk).
      
      Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
      i_mapping would not be enough because we need to distinguish between
      log tree extents (not fatal) and non-log tree extents (fatal), and
      because the next call to filemap_fdatawait_range() will catch and clear
      such errors in the mapping - and that call might be from a log sync and
      not from a transaction commit, which means we would not know about the
      error at transaction commit time.  Also, checking for the eb flag
      EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
      not be completely reliable, as the eb might be removed from memory and
      read back when trying to get it, which clears that flag right before
      reading the eb's pages from disk, making us not know about the previous
      write error.
      
      Using the 3 new flags for the btree inode (a sketch follows this entry)
      also covers the case where writepages() returns success and has started
      writeback for all dirty pages, but that writeback finishes with errors
      before filemap_fdatawait_range() is called - because we were not using
      AS_EIO/AS_ENOSPC, filemap_fdatawait_range() would return success, as it
      could not know that writeback errors happened (the pages were no longer
      tagged for writeback).
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      656f30db
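      A sketch of the error-tracking idea, with illustrative flag names and a
      simplified flags word (the real patch keeps three bits on the btree inode
      so log-tree and non-log-tree write errors can be told apart):

        #include <linux/bitops.h>

        #define BTREE_WRITE_ERR_SKETCH       0   /* non-log btree eb failed */
        #define BTREE_LOG1_WRITE_ERR_SKETCH  1   /* eb of log tree 1 failed */
        #define BTREE_LOG2_WRITE_ERR_SKETCH  2   /* eb of log tree 2 failed */

        /* Writeback completion path: record the failure in a place that is
         * not cleared by a later filemap_fdatawait_range() call. */
        static void note_btree_write_error(unsigned long *flags, int is_log,
                                           int log_index)
        {
                if (!is_log)
                        set_bit(BTREE_WRITE_ERR_SKETCH, flags);
                else
                        set_bit(log_index ? BTREE_LOG2_WRITE_ERR_SKETCH
                                          : BTREE_LOG1_WRITE_ERR_SKETCH, flags);
        }

        /* Transaction commit path: a recorded non-log error means the commit
         * must be aborted instead of publishing a superblock that references
         * extent buffers that never reached disk. */
        static int btree_write_error_fatal(unsigned long *flags)
        {
                return test_and_clear_bit(BTREE_WRITE_ERR_SKETCH, flags);
        }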
  23. 02 October 2014, 2 commits
  24. 18 September 2014, 4 commits