1. 07 July 2021, 2 commits
    • btrfs: rework chunk allocation to avoid exhaustion of the system chunk array · 79bd3712
      Committed by Filipe Manana
      Commit eafa4fd0 ("btrfs: fix exhaustion of the system chunk array
      due to concurrent allocations") fixed a problem that resulted in
      exhausting the system chunk array in the superblock when there are many
      tasks allocating chunks in parallel. Basically too many tasks enter the
      first phase of chunk allocation without previous tasks having finished
      their second phase of allocation, resulting in too many system chunks
      being allocated. That was originally observed when running the fallocate
      tests of stress-ng on a PowerPC machine, using a node size of 64K.
      
      However that commit also introduced a deadlock where a task in phase 1 of
      the chunk allocation waited for another task that had allocated a system
      chunk to finish its phase 2, but that other task was waiting on an extent
      buffer lock held by the first task, therefore resulting in both tasks not
      making any progress. That change was later reverted by a patch with the
      subject "btrfs: fix deadlock with concurrent chunk allocations involving
      system chunks", since there is no simple and short solution to address it
      and the deadlock is relatively easy to trigger on zoned filesystems, while
      the system chunk array exhaustion is not so common.
      
      This change reworks the chunk allocation to avoid the system chunk array
      exhaustion. It accomplishes that by making the first phase of chunk
      allocation do the updates of the device items in the chunk btree and the
      insertion of the new chunk item in the chunk btree. This is done while
      under the protection of the chunk mutex (fs_info->chunk_mutex), in the
      same critical section that checks for available system space, allocates
      a new system chunk if needed and reserves system chunk space. This way
      we no longer keep system chunk space reserved until the second phase
      completes.
      
      The same logic is applied to chunk removal as well, since it keeps
      reserved system space long after it is done updating the chunk btree.
      
      For direct allocation of system chunks, the previous behaviour remains,
      because otherwise we would deadlock on extent buffers of the chunk btree.
      Changes to the chunk btree are by and large done by chunk allocation and chunk
      removal, which first reserve chunk system space and then later do changes
      to the chunk btree. The other remaining cases are uncommon and correspond
      to adding a device, removing a device and resizing a device. All these
      other cases do not pre-reserve system space, they modify the chunk btree
      right away, so they don't hold reserved space for a long period like chunk
      allocation and chunk removal do.
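
      As a rough sketch of the reworked flow (the two helpers below are
      illustrative placeholders, not the exact functions touched by the patch),
      phase 1 now reserves system space and updates the chunk btree inside the
      same chunk_mutex critical section:

         /* Simplified model of the reworked phase 1 of chunk allocation. */
         static int chunk_alloc_phase1(struct btrfs_fs_info *fs_info, u64 flags)
         {
                 int ret;

                 mutex_lock(&fs_info->chunk_mutex);

                 /* As before: check system space, allocate a new system chunk
                  * if needed and reserve system chunk space. */
                 ret = check_and_reserve_system_space(fs_info);
                 if (ret)
                         goto out;

                 /* New: update the device items and insert the new chunk item
                  * in the chunk btree right away, so reserved system space is
                  * not held until phase 2 completes. */
                 ret = update_device_items_and_insert_chunk(fs_info, flags);
         out:
                 mutex_unlock(&fs_info->chunk_mutex);
                 return ret;
         }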
      
      The diff of this change is huge, but more than half of it is just addition
      of comments describing both how things work regarding chunk allocation and
      removal, including both the new behavior and the parts of the old behavior
      that did not change.
      
      CC: stable@vger.kernel.org # 5.12+
      Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
      Tested-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Tested-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      79bd3712
    • btrfs: fix deadlock with concurrent chunk allocations involving system chunks · 1cb3db1c
      Committed by Filipe Manana
      When a task attempting to allocate a new chunk verifies that there is not
      currently enough free space in the system space_info, and there is another
      task that allocated a new system chunk but has not yet finished the
      creation of the respective block group, it waits for that other task to
      finish creating the block group. This is to avoid exhaustion of the system
      chunk array in the superblock, which is limited, when we have a thundering
      herd of tasks allocating new chunks. This problem was described and fixed
      by commit eafa4fd0 ("btrfs: fix exhaustion of the system chunk array
      due to concurrent allocations").
      
      However there are two very similar scenarios where this can lead to a
      deadlock:
      
      1) Task B allocated a new system chunk and task A is waiting on task B
         to finish creation of the respective system block group. However before
         task B ends its transaction handle and finishes the creation of the
         system block group, it attempts to allocate another chunk (like a data
         chunk for an fallocate operation for a very large range). Task B will
         be unable to progress and allocate the new chunk, because task A set
         space_info->chunk_alloc to 1 and therefore it loops at
         btrfs_chunk_alloc() waiting for task A to finish its chunk allocation
         and set space_info->chunk_alloc to 0, but task A is waiting on task B
         to finish creation of the new system block group, therefore resulting
         in a deadlock;
      
      2) Task B allocated a new system chunk and task A is waiting on task B to
         finish creation of the respective system block group. By the time that
   task B enters the final phase of block group allocation, which happens
         at btrfs_create_pending_block_groups(), when it modifies the extent
         tree, the device tree or the chunk tree to insert the items for some
         new block group, it needs to allocate a new chunk, so it ends up at
         btrfs_chunk_alloc() and keeps looping there because task A has set
         space_info->chunk_alloc to 1, but task A is waiting for task B to
         finish creation of the new system block group and release the reserved
         system space, therefore resulting in a deadlock.
      
      In short, the problem is if a task B needs to allocate a new chunk after
      it previously allocated a new system chunk and if another task A is
      currently waiting for task B to complete the allocation of the new system
      chunk.
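
      The cycle can be reproduced outside the kernel with a tiny, self-contained
      model (userspace C with pthreads, not kernel code; it hangs by design,
      which is exactly the point):

         #include <pthread.h>
         #include <stdatomic.h>
         #include <stdio.h>
         #include <unistd.h>

         static atomic_int chunk_alloc;   /* models space_info->chunk_alloc */
         static atomic_int sys_bg_done;   /* models task B finishing phase 2 */

         static void *task_a(void *arg)
         {
                 atomic_store(&chunk_alloc, 1);     /* A owns chunk allocation */
                 while (!atomic_load(&sys_bg_done)) /* waits on B's phase 2 ... */
                         usleep(1000);
                 atomic_store(&chunk_alloc, 0);
                 return NULL;
         }

         static void *task_b(void *arg)
         {
                 while (atomic_load(&chunk_alloc))  /* ... but B waits on A */
                         usleep(1000);
                 atomic_store(&sys_bg_done, 1);     /* never reached */
                 return NULL;
         }

         int main(void)
         {
                 pthread_t a, b;
                 pthread_create(&a, NULL, task_a, NULL);
                 sleep(1);        /* let A set its flag first, as in scenario 1 */
                 pthread_create(&b, NULL, task_b, NULL);
                 pthread_join(a, NULL);
                 pthread_join(b, NULL);
                 puts("done");    /* never printed */
                 return 0;
         }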
      
      Unfortunately this deadlock scenario introduced by the previous fix for
      the system chunk array exhaustion problem does not have a simple and short
      fix, and requires a big change to rework the chunk allocation code so that
      chunk btree updates are all made in the first phase of chunk allocation.
      And since this deadlock regression is being frequently hit on zoned
      filesystems and the system chunk array exhaustion problem is triggered
      in more extreme cases (originally observed on PowerPC with a node size
      of 64K when running the fallocate tests from stress-ng), revert the
      changes from that commit. The next patch in the series, with a subject
      of "btrfs: rework chunk allocation to avoid exhaustion of the system
      chunk array" does the necessary changes to fix the system chunk array
      exhaustion problem.
      Reported-by: Naohiro Aota <naohiro.aota@wdc.com>
      Link: https://lore.kernel.org/linux-btrfs/20210621015922.ewgbffxuawia7liz@naota-xeon/
      Fixes: eafa4fd0 ("btrfs: fix exhaustion of the system chunk array due to concurrent allocations")
      CC: stable@vger.kernel.org # 5.12+
      Tested-by: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
      Tested-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Tested-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1cb3db1c
  2. 22 June 2021, 2 commits
    • btrfs: send: fix crash when memory allocations trigger reclaim · 35b22c19
      Committed by Filipe Manana
      When doing a send we don't expect the task to ever start a transaction
      after the initial check that verifies if commit roots match the regular
      roots. This is because after that we set current->journal_info with a
      stub (special value) that signals we are in send context, so that we take
      a read lock on an extent buffer when reading it from disk and verifying
      it is valid (its generation matches the generation stored in the parent).
      This stub was introduced in 2014 by commit a26e8c9f ("Btrfs: don't
      clear uptodate if the eb is under IO") in order to fix a concurrency issue
      between send and balance.
      
      However there is one particular exception where we end up needing to start
      a transaction and when this happens it results in a crash with a stack
      trace like the following:
      
      [60015.902283] kernel: WARNING: CPU: 3 PID: 58159 at arch/x86/include/asm/kfence.h:44 kfence_protect_page+0x21/0x80
      [60015.902292] kernel: Modules linked in: uinput rfcomm snd_seq_dummy (...)
      [60015.902384] kernel: CPU: 3 PID: 58159 Comm: btrfs Not tainted 5.12.9-300.fc34.x86_64 #1
      [60015.902387] kernel: Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./F2A88XN-WIFI, BIOS F6 12/24/2015
      [60015.902389] kernel: RIP: 0010:kfence_protect_page+0x21/0x80
      [60015.902393] kernel: Code: ff 0f 1f 84 00 00 00 00 00 55 48 89 fd (...)
      [60015.902396] kernel: RSP: 0018:ffff9fb583453220 EFLAGS: 00010246
      [60015.902399] kernel: RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffff9fb583453224
      [60015.902401] kernel: RDX: ffff9fb583453224 RSI: 0000000000000000 RDI: 0000000000000000
      [60015.902402] kernel: RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
      [60015.902404] kernel: R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
      [60015.902406] kernel: R13: ffff9fb583453348 R14: 0000000000000000 R15: 0000000000000001
      [60015.902408] kernel: FS:  00007f158e62d8c0(0000) GS:ffff93bd37580000(0000) knlGS:0000000000000000
      [60015.902410] kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      [60015.902412] kernel: CR2: 0000000000000039 CR3: 00000001256d2000 CR4: 00000000000506e0
      [60015.902414] kernel: Call Trace:
      [60015.902419] kernel:  kfence_unprotect+0x13/0x30
      [60015.902423] kernel:  page_fault_oops+0x89/0x270
      [60015.902427] kernel:  ? search_module_extables+0xf/0x40
      [60015.902431] kernel:  ? search_bpf_extables+0x57/0x70
      [60015.902435] kernel:  kernelmode_fixup_or_oops+0xd6/0xf0
      [60015.902437] kernel:  __bad_area_nosemaphore+0x142/0x180
      [60015.902440] kernel:  exc_page_fault+0x67/0x150
      [60015.902445] kernel:  asm_exc_page_fault+0x1e/0x30
      [60015.902450] kernel: RIP: 0010:start_transaction+0x71/0x580
      [60015.902454] kernel: Code: d3 0f 84 92 00 00 00 80 e7 06 0f 85 63 (...)
      [60015.902456] kernel: RSP: 0018:ffff9fb5834533f8 EFLAGS: 00010246
      [60015.902458] kernel: RAX: 0000000000000001 RBX: 0000000000000001 RCX: 0000000000000000
      [60015.902460] kernel: RDX: 0000000000000801 RSI: 0000000000000000 RDI: 0000000000000039
      [60015.902462] kernel: RBP: ffff93bc0a7eb800 R08: 0000000000000001 R09: 0000000000000000
      [60015.902463] kernel: R10: 0000000000098a00 R11: 0000000000000001 R12: 0000000000000001
      [60015.902464] kernel: R13: 0000000000000000 R14: ffff93bc0c92b000 R15: ffff93bc0c92b000
      [60015.902468] kernel:  btrfs_commit_inode_delayed_inode+0x5d/0x120
      [60015.902473] kernel:  btrfs_evict_inode+0x2c5/0x3f0
      [60015.902476] kernel:  evict+0xd1/0x180
      [60015.902480] kernel:  inode_lru_isolate+0xe7/0x180
      [60015.902483] kernel:  __list_lru_walk_one+0x77/0x150
      [60015.902487] kernel:  ? iput+0x1a0/0x1a0
      [60015.902489] kernel:  ? iput+0x1a0/0x1a0
      [60015.902491] kernel:  list_lru_walk_one+0x47/0x70
      [60015.902495] kernel:  prune_icache_sb+0x39/0x50
      [60015.902497] kernel:  super_cache_scan+0x161/0x1f0
      [60015.902501] kernel:  do_shrink_slab+0x142/0x240
      [60015.902505] kernel:  shrink_slab+0x164/0x280
      [60015.902509] kernel:  shrink_node+0x2c8/0x6e0
      [60015.902512] kernel:  do_try_to_free_pages+0xcb/0x4b0
      [60015.902514] kernel:  try_to_free_pages+0xda/0x190
      [60015.902516] kernel:  __alloc_pages_slowpath.constprop.0+0x373/0xcc0
      [60015.902521] kernel:  ? __memcg_kmem_charge_page+0xc2/0x1e0
      [60015.902525] kernel:  __alloc_pages_nodemask+0x30a/0x340
      [60015.902528] kernel:  pipe_write+0x30b/0x5c0
      [60015.902531] kernel:  ? set_next_entity+0xad/0x1e0
      [60015.902534] kernel:  ? switch_mm_irqs_off+0x58/0x440
      [60015.902538] kernel:  __kernel_write+0x13a/0x2b0
      [60015.902541] kernel:  kernel_write+0x73/0x150
      [60015.902543] kernel:  send_cmd+0x7b/0xd0
      [60015.902545] kernel:  send_extent_data+0x5a3/0x6b0
      [60015.902549] kernel:  process_extent+0x19b/0xed0
      [60015.902551] kernel:  btrfs_ioctl_send+0x1434/0x17e0
      [60015.902554] kernel:  ? _btrfs_ioctl_send+0xe1/0x100
      [60015.902557] kernel:  _btrfs_ioctl_send+0xbf/0x100
      [60015.902559] kernel:  ? enqueue_entity+0x18c/0x7b0
      [60015.902562] kernel:  btrfs_ioctl+0x185f/0x2f80
      [60015.902564] kernel:  ? psi_task_change+0x84/0xc0
      [60015.902569] kernel:  ? _flat_send_IPI_mask+0x21/0x40
      [60015.902572] kernel:  ? check_preempt_curr+0x2f/0x70
      [60015.902576] kernel:  ? selinux_file_ioctl+0x137/0x1e0
      [60015.902579] kernel:  ? expand_files+0x1cb/0x1d0
      [60015.902582] kernel:  ? __x64_sys_ioctl+0x82/0xb0
      [60015.902585] kernel:  __x64_sys_ioctl+0x82/0xb0
      [60015.902588] kernel:  do_syscall_64+0x33/0x40
      [60015.902591] kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xae
      [60015.902595] kernel: RIP: 0033:0x7f158e38f0ab
      [60015.902599] kernel: Code: ff ff ff 85 c0 79 9b (...)
      [60015.902602] kernel: RSP: 002b:00007ffcb2519bf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
      [60015.902605] kernel: RAX: ffffffffffffffda RBX: 00007ffcb251ae00 RCX: 00007f158e38f0ab
      [60015.902607] kernel: RDX: 00007ffcb2519cf0 RSI: 0000000040489426 RDI: 0000000000000004
      [60015.902608] kernel: RBP: 0000000000000004 R08: 00007f158e297640 R09: 00007f158e297640
      [60015.902610] kernel: R10: 0000000000000008 R11: 0000000000000246 R12: 0000000000000000
      [60015.902612] kernel: R13: 0000000000000002 R14: 00007ffcb251aee0 R15: 0000558c1a83e2a0
      [60015.902615] kernel: ---[ end trace 7bbc33e23bb887ae ]---
      
      This happens because when writing to the pipe, by calling kernel_write(),
      we end up doing page allocations using GFP_HIGHUSER | __GFP_ACCOUNT as the
      gfp flags, which allow reclaim to happen if there is memory pressure. This
      allocation happens at fs/pipe.c:pipe_write().
      
      If the reclaim is triggered, inode eviction can be triggered and that in
      turn can result in starting a transaction if the inode has a link count
      of 0. The transaction start happens early on during eviction, when we call
      btrfs_commit_inode_delayed_inode() at btrfs_evict_inode(). This happens if
      there is currently an open file descriptor for an inode with a link count
      of 0 and the reclaim task gets a reference on the inode before that
      descriptor is closed, in which case the reclaim task ends up doing the
      final iput that triggers the inode eviction.
      
      When we have assertions enabled (CONFIG_BTRFS_ASSERT=y), this triggers
      the following assertion at transaction.c:start_transaction():
      
          /* Send isn't supposed to start transactions. */
          ASSERT(current->journal_info != BTRFS_SEND_TRANS_STUB);
      
      And when assertions are not enabled, it triggers a crash since after that
      assertion we cast current->journal_info into a transaction handle pointer
      and then dereference it:
      
         if (current->journal_info) {
             WARN_ON(type & TRANS_EXTWRITERS);
             h = current->journal_info;
             refcount_inc(&h->use_count);
             (...)
      
      Which obviously results in a crash due to an invalid memory access.
      
      The same type of issue can happen during other memory allocations we
      do directly in the send code with kmalloc (and friends) as they use
      GFP_KERNEL and therefore may trigger reclaim too, which started to
      happen since 2016 after commit e780b0d1 ("btrfs: send: use
      GFP_KERNEL everywhere").
      
      The issue could be solved by setting up a NOFS context for the entire
      send operation so that reclaim could not be triggered when allocating
      memory or pages through kernel_write(). However that is not very friendly
      and we can in fact get rid of the send stub because:
      
      1) The stub was introduced way back in 2014 by commit a26e8c9f
         ("Btrfs: don't clear uptodate if the eb is under IO") to solve an
         issue exclusive to when send and balance are running in parallel,
         however there were other problems between balance and send and we do
         not allow anymore to have balance and send run concurrently since
         commit 9e967495 ("Btrfs: prevent send failures and crashes due
         to concurrent relocation"). More generically the issues are between
         send and relocation, and that last commit eliminated only the
         possibility of having send and balance run concurrently, but shrinking
         a device also can trigger relocation, and on zoned filesystems we have
         relocation of partially used block groups triggered automatically as
         well. The previous patch that has a subject of:
      
         "btrfs: ensure relocation never runs while we have send operations running"
      
         addresses all the remaining cases that can trigger relocation.
      
      2) We can actually allow starting and even committing transactions while
         in a send context if needed because send is not holding any locks that
         would block the start or the commit of a transaction.
      
      So get rid of all the logic added by commit a26e8c9f ("Btrfs: don't
      clear uptodate if the eb is under IO"). We can now always call
      clear_extent_buffer_uptodate() at verify_parent_transid() since send is
      the only case that uses commit roots without having a transaction open or
      without holding the commit_root_sem.
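
      For context, the NOFS-scope alternative mentioned above (and not taken)
      would have looked roughly like the sketch below, using the kernel's
      memalloc_nofs_save()/memalloc_nofs_restore() helpers; the function and
      the do_send_work() call are purely illustrative:

         static long btrfs_send_under_nofs(struct send_ctx *sctx)
         {
                 unsigned int nofs_flags;
                 long ret;

                 /* Make every allocation in this scope implicitly GFP_NOFS so
                  * reclaim cannot recurse into the filesystem. */
                 nofs_flags = memalloc_nofs_save();
                 ret = do_send_work(sctx);
                 memalloc_nofs_restore(nofs_flags);
                 return ret;
         }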
      Reported-by: Chris Murphy <lists@colorremedies.com>
      Link: https://lore.kernel.org/linux-btrfs/CAJCQCtRQ57=qXo3kygwpwEBOU_CA_eKvdmjP52sU=eFvuVOEGw@mail.gmail.com/
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      35b22c19
    • btrfs: fix unbalanced unlock in qgroup_account_snapshot() · 44365827
      Committed by Naohiro Aota
      qgroup_account_snapshot() tries to unlock tree_log_mutex in an error
      path even though the mutex was not taken. Since ret != 0 in this case,
      we can just return from here.
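
      A minimal sketch of the fix pattern (the failing call is shown as a
      placeholder; only the error-handling shape matters):

         ret = some_failing_step(trans);        /* illustrative placeholder */
         if (ret)
                 return ret;    /* tree_log_mutex was never taken here, so do
                                 * not jump to the label that unlocks it */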
      
      Fixes: 2a4d84c1 ("btrfs: move delayed ref flushing for qgroup into qgroup helper")
      CC: stable@vger.kernel.org # 5.12+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      44365827
  3. 21 June 2021, 4 commits
    • btrfs: inline wait_current_trans_commit_start in its caller · ae5d29d4
      Committed by David Sterba
      Function wait_current_trans_commit_start is now fairly trivial so it can
      be inlined in its only caller.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ae5d29d4
    • btrfs: sink wait_for_unblock parameter to async commit · 32cc4f87
      Committed by David Sterba
      There's only one caller left, btrfs_ioctl_start_sync, and it passes 0, so
      we can remove the switch in btrfs_commit_transaction_async.
      
      A cleanup 9babda9f ("btrfs: Remove async_transid from
      btrfs_mksubvol/create_subvol/create_snapshot") removed calls that passed
      1, so this is a followup.
      
      As this removes last call of wait_current_trans_commit_start_and_unblock,
      remove the function as well.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      32cc4f87
    • btrfs: clear defrag status of a root if starting transaction fails · 6819703f
      Committed by David Sterba
      The defrag loop processes leaves in batches, starting a transaction for
      each one. The whole defragmentation on a given root is protected by a bit,
      but if the transaction fails the bit is not cleared.
      
      A bit left set would prevent starting defragmentation again, so make sure
      it's cleared.
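
      A sketch of the shape of the fix inside the defrag loop (the flag name is
      shown as used in btrfs, but treat it as an approximation):

         trans = btrfs_start_transaction(root, 0);
         if (IS_ERR(trans)) {
                 ret = PTR_ERR(trans);
                 /* Clear the in-progress bit so defrag can be started again. */
                 clear_bit(BTRFS_ROOT_DEFRAG_RUNNING, &root->state);
                 break;
         }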
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      6819703f
    • btrfs: always abort the transaction if we abort a trans handle · 5963ffca
      Committed by Josef Bacik
      While stress testing our error handling I noticed that sometimes we
      would still commit the transaction even though we had aborted the
      transaction.
      
      Currently we track if a trans handle has dirtied any metadata, and if it
      hasn't we mark the filesystem as having an error (so no new transactions
      can be started), but we will allow the current transaction to complete
      as we do not mark the transaction itself as having been aborted.
      
      This sounds good in theory, but we were not properly tracking IO errors
      in btrfs_finish_ordered_io, and thus committing the transaction with
      bogus free space data.  This isn't necessarily a problem per-se with the
      free space cache, as the other guards in place would have kept us from
      accepting the free space cache as valid, but highlights a real world
      case where we had a bug and could have corrupted the filesystem because
      of it.
      
      This "skip abort on empty trans handle" is nice in theory, but assumes
      we have perfect error handling everywhere, which we clearly do not.
      Also we do not allow further transactions to be started, so all this
      does is save the last transaction that was happening, which doesn't
      necessarily gain us anything other than the potential for real
      corruption.
      
      Remove this particular bit of code: if we decide we need to abort the
      transaction, then abort the current one and keep us from doing real harm
      to the file system, regardless of whether this specific trans handle
      dirtied anything or not.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      5963ffca
  4. 19 April 2021, 8 commits
    • btrfs: fix race between transaction aborts and fsyncs leading to use-after-free · 061dde82
      Committed by Filipe Manana
      There is a race between a task aborting a transaction during a commit,
      a task doing an fsync and the transaction kthread, which leads to a
      use-after-free of the log root tree. When this happens, it results in a
      stack trace like the following:
      
        BTRFS info (device dm-0): forced readonly
        BTRFS warning (device dm-0): Skipping commit of aborted transaction.
        BTRFS: error (device dm-0) in cleanup_transaction:1958: errno=-5 IO failure
        BTRFS warning (device dm-0): lost page write due to IO error on /dev/mapper/error-test (-5)
        BTRFS warning (device dm-0): Skipping commit of aborted transaction.
        BTRFS warning (device dm-0): direct IO failed ino 261 rw 0,0 sector 0xa4e8 len 4096 err no 10
        BTRFS error (device dm-0): error writing primary super block to device 1
        BTRFS warning (device dm-0): direct IO failed ino 261 rw 0,0 sector 0x12e000 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 261 rw 0,0 sector 0x12e008 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 261 rw 0,0 sector 0x12e010 len 4096 err no 10
        BTRFS: error (device dm-0) in write_all_supers:4110: errno=-5 IO failure (1 errors while writing supers)
        BTRFS: error (device dm-0) in btrfs_sync_log:3308: errno=-5 IO failure
        general protection fault, probably for non-canonical address 0x6b6b6b6b6b6b6b68: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
        CPU: 2 PID: 2458471 Comm: fsstress Not tainted 5.12.0-rc5-btrfs-next-84 #1
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014
        RIP: 0010:__mutex_lock+0x139/0xa40
        Code: c0 74 19 (...)
        RSP: 0018:ffff9f18830d7b00 EFLAGS: 00010202
        RAX: 6b6b6b6b6b6b6b68 RBX: 0000000000000001 RCX: 0000000000000002
        RDX: ffffffffb9c54d13 RSI: 0000000000000000 RDI: 0000000000000000
        RBP: ffff9f18830d7bc0 R08: 0000000000000000 R09: 0000000000000000
        R10: ffff9f18830d7be0 R11: 0000000000000001 R12: ffff8c6cd199c040
        R13: ffff8c6c95821358 R14: 00000000fffffffb R15: ffff8c6cbcf01358
        FS:  00007fa9140c2b80(0000) GS:ffff8c6fac600000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007fa913d52000 CR3: 000000013d2b4003 CR4: 0000000000370ee0
        DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        Call Trace:
         ? __btrfs_handle_fs_error+0xde/0x146 [btrfs]
         ? btrfs_sync_log+0x7c1/0xf20 [btrfs]
         ? btrfs_sync_log+0x7c1/0xf20 [btrfs]
         btrfs_sync_log+0x7c1/0xf20 [btrfs]
         btrfs_sync_file+0x40c/0x580 [btrfs]
         do_fsync+0x38/0x70
         __x64_sys_fsync+0x10/0x20
         do_syscall_64+0x33/0x80
         entry_SYSCALL_64_after_hwframe+0x44/0xae
        RIP: 0033:0x7fa9142a55c3
        Code: 8b 15 09 (...)
        RSP: 002b:00007fff26278d48 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
        RAX: ffffffffffffffda RBX: 0000563c83cb4560 RCX: 00007fa9142a55c3
        RDX: 00007fff26278cb0 RSI: 00007fff26278cb0 RDI: 0000000000000005
        RBP: 0000000000000005 R08: 0000000000000001 R09: 00007fff26278d5c
        R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000340
        R13: 00007fff26278de0 R14: 00007fff26278d96 R15: 0000563c83ca57c0
        Modules linked in: btrfs dm_zero dm_snapshot dm_thin_pool (...)
        ---[ end trace ee2f1b19327d791d ]---
      
      The steps that lead to this crash are the following:
      
      1) We are at transaction N;
      
      2) We have two tasks with a transaction handle attached to transaction N.
         Task A and Task B. Task B is doing an fsync;
      
      3) Task B is at btrfs_sync_log(), and has saved fs_info->log_root_tree
         into a local variable named 'log_root_tree' at the top of
         btrfs_sync_log(). Task B is about to call write_all_supers(), but
         before that...
      
      4) Task A calls btrfs_commit_transaction(), and after it sets the
         transaction state to TRANS_STATE_COMMIT_START, an error happens before
         it waits for the transaction's 'num_writers' counter to reach a value
         of 1 (no one else attached to the transaction), so it jumps to the
         label "cleanup_transaction";
      
      5) Task A then calls cleanup_transaction(), where it aborts the
         transaction, setting BTRFS_FS_STATE_TRANS_ABORTED on fs_info->fs_state,
         setting the ->aborted field of the transaction and the handle to an
         errno value and also setting BTRFS_FS_STATE_ERROR on fs_info->fs_state.
      
         After that, at cleanup_transaction(), it deletes the transaction from
         the list of transactions (fs_info->trans_list), sets the transaction
         to the state TRANS_STATE_COMMIT_DOING and then waits for the number
         of writers to go down to 1, as it's currently 2 (1 for task A and 1
         for task B);
      
      6) The transaction kthread is running and sees that BTRFS_FS_STATE_ERROR
         is set in fs_info->fs_state, so it calls btrfs_cleanup_transaction().
      
         There it sees the list fs_info->trans_list is empty, and then proceeds
         into calling btrfs_drop_all_logs(), which frees the log root tree with
         a call to btrfs_free_log_root_tree();
      
      7) Task B calls write_all_supers() and, shortly after, under the label
   'out_wake_log_root', it dereferences the pointer stored in
         'log_root_tree', which was already freed in the previous step by the
         transaction kthread. This results in a use-after-free leading to a
         crash.
      
      Fix this by deleting the transaction from the list of transactions at
      cleanup_transaction() only after setting the transaction state to
      TRANS_STATE_COMMIT_DOING and waiting for all existing tasks that are
      attached to the transaction to release their transaction handles.
      This makes the transaction kthread wait for all the tasks attached to
      the transaction to be done with the transaction before dropping the
      log roots and doing other cleanups.
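
      Schematically (simplified, with locking details trimmed), the reordered
      part of cleanup_transaction() becomes:

         cur_trans->state = TRANS_STATE_COMMIT_DOING;
         wait_event(cur_trans->writer_wait,
                    atomic_read(&cur_trans->num_writers) == 1);

         /* Only now remove the transaction from fs_info->trans_list, so the
          * transaction kthread cannot see an empty list and free the log root
          * tree while an fsync task still holds a handle. */
         spin_lock(&fs_info->trans_lock);
         list_del_init(&cur_trans->list);
         spin_unlock(&fs_info->trans_lock);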
      
      Fixes: ef67963d ("btrfs: drop logs when we've aborted a transaction")
      CC: stable@vger.kernel.org # 5.10+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      061dde82
    • btrfs: handle btrfs_update_reloc_root failure in commit_fs_roots · 2dd8298e
      Committed by Josef Bacik
      btrfs_update_reloc_root will return errors in the future, so handle
      the error properly in commit_fs_roots.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2dd8298e
    • btrfs: return an error from btrfs_record_root_in_trans · 03a7e111
      Committed by Josef Bacik
      We can create a reloc root when we record the root in the trans, which
      can fail for all sorts of different reasons.  Propagate this error up
      the chain of callers.  Future patches will fix the callers of
      btrfs_record_root_in_trans() to handle the error.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      03a7e111
    • btrfs: handle record_root_in_trans failure in create_pending_snapshot · f0118cb6
      Committed by Josef Bacik
      record_root_in_trans can currently fail, so handle this failure
      properly.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      f0118cb6
    • btrfs: handle record_root_in_trans failure in btrfs_record_root_in_trans · 1409e6cc
      Committed by Josef Bacik
      record_root_in_trans can currently fail, so handle this failure properly.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1409e6cc
    • btrfs: handle record_root_in_trans failure in qgroup_account_snapshot · 1c442d22
      Committed by Josef Bacik
      record_root_in_trans can fail currently, so handle this failure
      properly.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      1c442d22
    • btrfs: handle btrfs_record_root_in_trans failure in start_transaction · 68075ea8
      Committed by Josef Bacik
      btrfs_record_root_in_trans will return errors in the future, so handle
      the error properly in start_transaction.
      Reviewed-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ add comment ]
      Signed-off-by: David Sterba <dsterba@suse.com>
      68075ea8
    • btrfs: fix exhaustion of the system chunk array due to concurrent allocations · eafa4fd0
      Committed by Filipe Manana
      When we are running out of space for updating the chunk tree, that is,
      when we are low on available space in the system space info, if we have
      many tasks concurrently allocating block groups, via fallocate for example,
      many of them can end up all allocating new system chunks when only one is
      needed. In extreme cases this can lead to exhaustion of the system chunk
      array, which has a size limit of 2048 bytes, and results in a transaction
      abort with errno EFBIG, producing a trace in dmesg like the following,
      which was triggered on a PowerPC machine with a node/leaf size of 64K:
      
        [1359.518899] ------------[ cut here ]------------
        [1359.518980] BTRFS: Transaction aborted (error -27)
        [1359.519135] WARNING: CPU: 3 PID: 16463 at ../fs/btrfs/block-group.c:1968 btrfs_create_pending_block_groups+0x340/0x3c0 [btrfs]
        [1359.519152] Modules linked in: (...)
        [1359.519239] Supported: Yes, External
        [1359.519252] CPU: 3 PID: 16463 Comm: stress-ng Tainted: G               X    5.3.18-47-default #1 SLE15-SP3
        [1359.519274] NIP:  c008000000e36fe8 LR: c008000000e36fe4 CTR: 00000000006de8e8
        [1359.519293] REGS: c00000056890b700 TRAP: 0700   Tainted: G               X     (5.3.18-47-default)
        [1359.519317] MSR:  800000000282b033 <SF,VEC,VSX,EE,FP,ME,IR,DR,RI,LE>  CR: 48008222  XER: 00000007
        [1359.519356] CFAR: c00000000013e170 IRQMASK: 0
        [1359.519356] GPR00: c008000000e36fe4 c00000056890b990 c008000000e83200 0000000000000026
        [1359.519356] GPR04: 0000000000000000 0000000000000000 0000d52a3b027651 0000000000000007
        [1359.519356] GPR08: 0000000000000003 0000000000000001 0000000000000007 0000000000000000
        [1359.519356] GPR12: 0000000000008000 c00000063fe44600 000000001015e028 000000001015dfd0
        [1359.519356] GPR16: 000000000000404f 0000000000000001 0000000000010000 0000dd1e287affff
        [1359.519356] GPR20: 0000000000000001 c000000637c9a000 ffffffffffffffe5 0000000000000000
        [1359.519356] GPR24: 0000000000000004 0000000000000000 0000000000000100 ffffffffffffffc0
        [1359.519356] GPR28: c000000637c9a000 c000000630e09230 c000000630e091d8 c000000562188b08
        [1359.519561] NIP [c008000000e36fe8] btrfs_create_pending_block_groups+0x340/0x3c0 [btrfs]
        [1359.519613] LR [c008000000e36fe4] btrfs_create_pending_block_groups+0x33c/0x3c0 [btrfs]
        [1359.519626] Call Trace:
        [1359.519671] [c00000056890b990] [c008000000e36fe4] btrfs_create_pending_block_groups+0x33c/0x3c0 [btrfs] (unreliable)
        [1359.519729] [c00000056890ba90] [c008000000d68d44] __btrfs_end_transaction+0xbc/0x2f0 [btrfs]
        [1359.519782] [c00000056890bae0] [c008000000e309ac] btrfs_alloc_data_chunk_ondemand+0x154/0x610 [btrfs]
        [1359.519844] [c00000056890bba0] [c008000000d8a0fc] btrfs_fallocate+0xe4/0x10e0 [btrfs]
        [1359.519891] [c00000056890bd00] [c0000000004a23b4] vfs_fallocate+0x174/0x350
        [1359.519929] [c00000056890bd50] [c0000000004a3cf8] ksys_fallocate+0x68/0xf0
        [1359.519957] [c00000056890bda0] [c0000000004a3da8] sys_fallocate+0x28/0x40
        [1359.519988] [c00000056890bdc0] [c000000000038968] system_call_exception+0xe8/0x170
        [1359.520021] [c00000056890be20] [c00000000000cb70] system_call_common+0xf0/0x278
        [1359.520037] Instruction dump:
        [1359.520049] 7d0049ad 40c2fff4 7c0004ac 71490004 40820024 2f83fffb 419e0048 3c620000
        [1359.520082] e863bcb8 7ec4b378 48010d91 e8410018 <0fe00000> 3c820000 e884bcc8 7ec6b378
        [1359.520122] ---[ end trace d6c186e151022e20 ]---
      
      The following steps explain how we can end up in this situation:
      
      1) Task A is at check_system_chunk(), either because it is allocating a
         new data or metadata block group, at btrfs_chunk_alloc(), or because
         it is removing a block group or turning a block group RO. It does not
         matter why;
      
      2) Task A sees that there is not enough free space in the system
         space_info object, that is 'left' is < 'thresh'. And at this point
         the system space_info has a value of 0 for its 'bytes_may_use'
         counter;
      
      3) As a consequence task A calls btrfs_alloc_chunk() in order to allocate
         a new system block group (chunk) and then reserves 'thresh' bytes in
         the chunk block reserve with the call to btrfs_block_rsv_add(). This
         changes the chunk block reserve's 'reserved' and 'size' counters by an
         amount of 'thresh', and changes the 'bytes_may_use' counter of the
         system space_info object from 0 to 'thresh'.
      
         Also during its call to btrfs_alloc_chunk(), we end up increasing the
         value of the 'total_bytes' counter of the system space_info object by
         8MiB (the size of a system chunk stripe). This happens through the
         call chain:
      
         btrfs_alloc_chunk()
             create_chunk()
                 btrfs_make_block_group()
                     btrfs_update_space_info()
      
      4) After it finishes the first phase of the block group allocation, at
         btrfs_chunk_alloc(), task A unlocks the chunk mutex;
      
      5) At this point the new system block group was added to the transaction
         handle's list of new block groups, but its block group item, device
         items and chunk item were not yet inserted in the extent, device and
         chunk trees, respectively. That only happens later when we call
         btrfs_finish_chunk_alloc() through a call to
         btrfs_create_pending_block_groups();
      
         Note that only when we update the chunk tree, through the call to
         btrfs_finish_chunk_alloc(), we decrement the 'reserved' counter
         of the chunk block reserve as we COW/allocate extent buffers,
         through:
      
         btrfs_alloc_tree_block()
            btrfs_use_block_rsv()
               btrfs_block_rsv_use_bytes()
      
   And the system space_info's 'bytes_may_use' is decremented every time
         we allocate an extent buffer for COW operations on the chunk tree,
         through:
      
         btrfs_alloc_tree_block()
            btrfs_reserve_extent()
               find_free_extent()
                  btrfs_add_reserved_bytes()
      
         If we end up COWing less chunk btree nodes/leaves than expected, which
         is the typical case since the amount of space we reserve is always
         pessimistic to account for the worst possible case, we release the
         unused space through:
      
         btrfs_create_pending_block_groups()
            btrfs_trans_release_chunk_metadata()
               btrfs_block_rsv_release()
                  block_rsv_release_bytes()
                      btrfs_space_info_free_bytes_may_use()
      
         But before task A gets into btrfs_create_pending_block_groups()...
      
      6) Many other tasks start allocating new block groups through fallocate,
         each one does the first phase of block group allocation in a
         serialized way, since btrfs_chunk_alloc() takes the chunk mutex
         before calling check_system_chunk() and btrfs_alloc_chunk().
      
         However before everyone enters the final phase of the block group
         allocation, that is, before calling btrfs_create_pending_block_groups(),
         new tasks keep coming to allocate new block groups and while at
         check_system_chunk(), the system space_info's 'bytes_may_use' keeps
         increasing each time a task reserves space in the chunk block reserve.
         This means that eventually some other task can end up not seeing enough
         free space in the system space_info and decide to allocate yet another
         system chunk.
      
         This may repeat several times if yet more new tasks keep allocating
         new block groups before task A, and all the other tasks, finish the
         creation of the pending block groups, which is when reserved space
         in excess is released. Eventually this can result in exhaustion of
         system chunk array in the superblock, with btrfs_add_system_chunk()
         returning EFBIG, resulting later in a transaction abort.
      
         Even when we don't reach the extreme case of exhausting the system
         array, most, if not all, unnecessarily created system block groups
         end up being unused since when finishing creation of the first
         pending system block group, the creation of the following ones end
         up not needing to COW nodes/leaves of the chunk tree, so we never
         allocate and deallocate from them, resulting in them never being
         added to the list of unused block groups - as a consequence they
         don't get deleted by the cleaner kthread - the only exceptions are
         if we unmount and mount the filesystem again, which adds any unused
         block groups to the list of unused block groups, if a scrub is
         run, which also adds unused block groups to the unused list, and
         under some circumstances when using a zoned filesystem or async
         discard, which may also add unused block groups to the unused list.
      
      So fix this by:
      
      *) Tracking the number of reserved bytes for the chunk tree per
         transaction, which is the sum of reserved chunk bytes by each
         transaction handle currently being used;
      
      *) When there is not enough free space in the system space_info,
         if there are other transaction handles which reserved chunk space,
         wait for some of them to complete in order to have enough excess
         reserved space released, and then try again. Otherwise proceed with
         the creation of a new system chunk.
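
      Conceptually, the waiting described in the second point looks roughly
      like the following sketch (the counter and wait queue names are
      approximations of what the patch adds):

         if (left < thresh &&
             atomic64_read(&cur_trans->chunk_bytes_reserved) > 0) {
                 /*
                  * Other transaction handles still hold reserved chunk space
                  * that will be released once they finish creating their
                  * pending block groups. Wait for that instead of allocating
                  * yet another system chunk, then re-check the free space.
                  */
                 wait_event(cur_trans->chunk_reserve_wait,
                            atomic64_read(&cur_trans->chunk_bytes_reserved) == 0);
                 goto again;
         }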
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      eafa4fd0
  5. 09 February 2021, 9 commits
    • btrfs: zoned: redirty released extent buffers · d3575156
      Committed by Naohiro Aota
      Tree manipulating operations like merging nodes often release
      once-allocated tree nodes. Such nodes are cleaned so that pages in the
      node are not uselessly written out. On zoned volumes, however, such
      optimization blocks the following IOs as the cancellation of the write
      out of the freed blocks breaks the sequential write sequence expected by
      the device.
      
      Introduce a list of clean and unwritten extent buffers that have been
      released in a transaction. Redirty the buffers so that
      btree_write_cache_pages() can send proper bios to the devices.
      
      Besides redirtying, clear the entire content of the extent buffer so as
      not to confuse raw block scanners, e.g. 'btrfs check'. Since clearing the
      content makes csum_dirty_buffer() complain about a bytenr mismatch, skip
      that check and the checksum using the newly introduced buffer flag
      EXTENT_BUFFER_NO_CHECK.
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d3575156
    • btrfs: make concurrent fsyncs wait less when waiting for a transaction commit · d0c2f4fa
      Committed by Filipe Manana
      Often an fsync needs to fallback to a transaction commit for several
      reasons (to ensure consistency after a power failure, a new block group
      was allocated or a temporary error such as ENOMEM or ENOSPC happened).
      
      In that case the log is marked as needing a full commit and any concurrent
      tasks attempting to log inodes or commit the log will also fallback to the
      transaction commit. When this happens they all wait for the task that first
      started the transaction commit to finish the transaction commit - however
      they wait until the full transaction commit happens, which is not needed,
      as they only need to wait for the superblocks to be persisted and not for
      unpinning all the extents pinned during the transaction's lifetime, which
      even for short lived transactions can be a few thousand and take some
      significant amount of time to complete - for dbench workloads I have
      observed up to 4~5 milliseconds of time spent unpinning extents in the
      worst cases, and the number of pinned extents was between 2 to 3 thousand.
      
      So allow fsync tasks to skip waiting for the unpinning of extents when
      they call btrfs_commit_transaction() and they were not the task that
      started the transaction commit (that one has to do it, the alternative
      would be to offload the transaction commit to another task so that it
      could avoid waiting for the extent unpinning or offload the extent
      unpinning to another task).
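
      Schematically (the state name and helper are approximations), a committer
      that did not start the commit now waits only for an earlier transaction
      state that is set right after the superblocks are written:

         if (commit_started_by_another_task) {
                 /* Superblocks are persisted at this point, which is all an
                  * fsync needs; do not also wait for extent unpinning. */
                 wait_for_commit(cur_trans, TRANS_STATE_SUPER_COMMITTED);
                 return 0;
         }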
      
      This patch is part of a patchset comprised of the following patches:
      
        btrfs: remove unnecessary directory inode item update when deleting dir entry
        btrfs: stop setting nbytes when filling inode item for logging
        btrfs: avoid logging new ancestor inodes when logging new inode
        btrfs: skip logging directories already logged when logging all parents
        btrfs: skip logging inodes already logged when logging new entries
        btrfs: remove unnecessary check_parent_dirs_for_sync()
        btrfs: make concurrent fsyncs wait less when waiting for a transaction commit
      
      After applying the entire patchset, dbench shows improvements in respect
      to throughput and latency. The script used to measure it is the following:
      
        $ cat dbench-test.sh
        #!/bin/bash
      
        DEV=/dev/sdk
        MNT=/mnt/sdk
        MOUNT_OPTIONS="-o ssd"
        MKFS_OPTIONS="-m single -d single"
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        umount $DEV &> /dev/null
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
      
        dbench -D $MNT -t 300 64
      
        umount $MNT
      
      The test was run on a physical machine with 12 cores (Intel corei7), 64G
      of ram, using a NVMe device and a non-debug kernel configuration (Debian's
      default configuration).
      
      Before applying patchset, 32 clients:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    9627107     0.153    61.938
       Close        7072076     0.001     3.175
       Rename        407633     1.222    44.439
       Unlink       1943895     0.658    44.440
       Deltree          256    17.339   110.891
       Mkdir            128     0.003     0.009
       Qpathinfo    8725406     0.064    17.850
       Qfileinfo    1529516     0.001     2.188
       Qfsinfo      1599884     0.002     1.457
       Sfileinfo     784200     0.005     3.562
       Find         3373513     0.411    30.312
       WriteX       4802132     0.053    29.054
       ReadX       15089959     0.002     5.801
       LockX          31344     0.002     0.425
       UnlockX        31344     0.001     0.173
       Flush         674724     5.952   341.830
      
      Throughput 1008.02 MB/sec  32 clients  32 procs  max_latency=341.833 ms
      
      After applying patchset, 32 clients:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    9931568     0.111    25.597
       Close        7295730     0.001     2.171
       Rename        420549     0.982    49.714
       Unlink       2005366     0.497    39.015
       Deltree          256    11.149    89.242
       Mkdir            128     0.002     0.014
       Qpathinfo    9001863     0.049    20.761
       Qfileinfo    1577730     0.001     2.546
       Qfsinfo      1650508     0.002     3.531
       Sfileinfo     809031     0.005     5.846
       Find         3480259     0.309    23.977
       WriteX       4952505     0.043    41.283
       ReadX       15568127     0.002     5.476
       LockX          32338     0.002     0.978
       UnlockX        32338     0.001     2.032
       Flush         696017     7.485   228.835
      
      Throughput 1049.91 MB/sec  32 clients  32 procs  max_latency=228.847 ms
      
       --> +4.1% throughput, -39.6% max latency
      
      Before applying patchset, 64 clients:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    8956748     0.342   108.312
       Close        6579660     0.001     3.823
       Rename        379209     2.396    81.897
       Unlink       1808625     1.108   131.148
       Deltree          256    25.632   172.176
       Mkdir            128     0.003     0.018
       Qpathinfo    8117615     0.131    55.916
       Qfileinfo    1423495     0.001     2.635
       Qfsinfo      1488496     0.002     5.412
       Sfileinfo     729472     0.007     8.643
       Find         3138598     0.855    78.321
       WriteX       4470783     0.102    79.442
       ReadX       14038139     0.002     7.578
       LockX          29158     0.002     0.844
       UnlockX        29158     0.001     0.567
       Flush         627746    14.168   506.151
      
      Throughput 924.738 MB/sec  64 clients  64 procs  max_latency=506.154 ms
      
      After applying patchset, 64 clients:
      
       Operation      Count    AvgLat    MaxLat
       ----------------------------------------
       NTCreateX    9069003     0.303    43.193
       Close        6662328     0.001     3.888
       Rename        383976     2.194    46.418
       Unlink       1831080     1.022    43.873
       Deltree          256    24.037   155.763
       Mkdir            128     0.002     0.005
       Qpathinfo    8219173     0.137    30.233
       Qfileinfo    1441203     0.001     3.204
       Qfsinfo      1507092     0.002     4.055
       Sfileinfo     738775     0.006     5.431
       Find         3177874     0.936    38.170
       WriteX       4526152     0.084    39.518
       ReadX       14213562     0.002    24.760
       LockX          29522     0.002     1.221
       UnlockX        29522     0.001     0.694
       Flush         635652    14.358   422.039
      
      Throughput 990.13 MB/sec  64 clients  64 procs  max_latency=422.043 ms
      
       --> +6.8% throughput, -18.1% max latency
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      d0c2f4fa
    • btrfs: run delayed refs less often in commit_cowonly_roots · 488bc2a2
      Committed by Josef Bacik
      We love running delayed refs in commit_cowonly_roots, but it is a bit
      excessive.  I was seeing cases of running 3 or 4 refs a few times in a
      row during this time.  Instead simply:
      
      - update all of the roots first
      - then run delayed refs
      - then handle the empty block groups case
      - and then if we have any more dirty roots do the whole thing again
      
      This allows us to be much more efficient with our delayed ref running,
      as we can batch a few more operations at once.
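
      In rough pseudo-C (the two helpers are placeholders), the reworked loop
      looks like:

         do {
                 /* 1) update all of the dirty cow-only roots first */
                 while (!list_empty(&fs_info->dirty_cowonly_roots))
                         update_one_cowonly_root(trans, fs_info);

                 /* 2) then run the delayed refs those updates generated */
                 ret = btrfs_run_delayed_refs(trans, (unsigned long)-1);
                 if (ret)
                         return ret;

                 /* 3) then handle the empty block groups case */
                 handle_empty_block_groups(trans);

                 /* 4) and repeat if the above dirtied more roots */
         } while (!list_empty(&fs_info->dirty_cowonly_roots));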
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      488bc2a2
    • btrfs: stop running all delayed refs during snapshot · dac348e9
      Committed by Josef Bacik
      This was added in commit 361048f5 ("Btrfs: fix full backref problem
      when inserting shared block reference") to address a problem where we
      hit the following BUG_ON() in alloc_reserved_tree_block
      
              if (node->type == BTRFS_SHARED_BLOCK_REF_KEY) {
                      BUG_ON(!(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF));
      
      However this BUG_ON() is bogus, and was removed by previous commit:
      
        btrfs: remove bogus BUG_ON in alloc_reserved_tree_block
      
      We no longer need to run delayed refs because of this, and can remove
      this flushing here.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      dac348e9
    • btrfs: move delayed ref flushing for qgroup into qgroup helper · 2a4d84c1
      Committed by Josef Bacik
      The commit d6726335 ("btrfs: qgroup: Make snapshot accounting work
      with new extent-oriented qgroup.") added a flush of the delayed refs
      during snapshot creation in order to get the qgroup accounting properly.
      However this code has changed and been moved to its own helper that is
      skipped if qgroups are turned off.  Move the flushing to the helper, as
      we do not need it when qgroups are turned off.
      
      Also add a comment explaining why it exists, and why it doesn't actually
      save us.  This will be helpful later when we try to fix qgroup
      accounting properly.
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      2a4d84c1
    • btrfs: only run delayed refs once before committing · ad368f33
      Committed by Josef Bacik
      We try to pre-flush the delayed refs when committing, because we want to
      do as little work as possible in the critical section of the transaction
      commit.
      
      However doing this twice can lead to very long transaction commit delays
      as other threads are allowed to continue to generate more delayed refs,
      which potentially delays the commit by multiple minutes in very extreme
      cases.
      
      So simply stick to one pre-flush, and then continue the rest of the
      transaction commit.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
      ad368f33
    • btrfs: only let one thread pre-flush delayed refs in commit · e19eb11f
      Committed by Josef Bacik
      I've been running a stress test that runs 20 workers in their own
      subvolume, which are running an fsstress instance with 4 threads per
      worker, which is 80 total fsstress threads.  In addition to this I'm
      running balance in the background as well as creating and deleting
      snapshots.  This test takes around 12 hours to run normally, going
      slower and slower as the test goes on.
      
      The reason for this is because fsstress is running fsync sometimes, and
      because we're messing with block groups we often fall through to
      btrfs_commit_transaction, so will often have 20-30 threads all calling
      btrfs_commit_transaction at the same time.
      
      These all get stuck contending on the extent tree while they try to run
      delayed refs during the initial part of the commit.
      
      This is suboptimal, really because the extent tree is a single point of
      failure we only want one thread acting on that tree at once to reduce
      lock contention.
      
      Fix this by making the flushing mechanism a bit operation, to make it
      easy to use test_and_set_bit() in order to make sure only one task does
      this initial flush.
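
      A minimal sketch of that gating (the flag name is approximate):

         /* Only the first task to arrive does the pre-flush of delayed refs;
          * everyone else skips it and goes on with the commit. */
         if (!test_and_set_bit(BTRFS_DELAYED_REFS_FLUSHING,
                               &cur_trans->delayed_refs.flags)) {
                 ret = btrfs_run_delayed_refs(trans, 0);
                 if (ret)
                         return ret;
         }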
      
      Once we're into the transaction commit we only have one thread doing
      delayed ref running, it's just this initial pre-flush that is
      problematic.  With this patch my stress test takes around 90 minutes to
      run, instead of 12 hours.
      
      The memory barrier is not necessary for the flushing bit as it's
      ordered, unlike plain int. The transaction state accessed in
      btrfs_should_end_transaction could be affected by that too as it's not
      always used under transaction lock. Upon Nikolay's analysis in [1]
      it's not necessary:
      
        In should_end_transaction it's read without holding any locks. (U)
      
        It's modified in btrfs_cleanup_transaction without holding the
        fs_info->trans_lock (U), but the STATE_ERROR flag is going to be set.
      
        set in cleanup_transaction under fs_info->trans_lock (L)
        set in btrfs_commit_trans to COMMIT_START under fs_info->trans_lock.(L)
        set in btrfs_commit_trans to COMMIT_DOING under fs_info->trans_lock.(L)
        set in btrfs_commit_trans to COMMIT_UNBLOCK under
        fs_info->trans_lock.(L)
      
        set in btrfs_commit_trans to COMMIT_COMPLETED without locks but at this
        point the transaction is finished and fs_info->running_trans is NULL (U
        but irrelevant).
      
        So by the looks of it we can have a concurrent READ race with a WRITE,
        due to reads not taking a lock. In this case what we want to ensure is
        we either see new or old state. I consulted with Will Deacon and he said
        that in such a case we'd want to annotate the accesses to ->state with
        (READ|WRITE)_ONCE so as to avoid a theoretical tear, in this case I
        don't think this could happen but I imagine at some point KCSAN would
        flag such an access as racy (which it is).
      
      [1] https://lore.kernel.org/linux-btrfs/e1fd5cc1-0f28-f670-69f4-e9958b4964e6@suse.com
      Reviewed-by: NNikolay Borisov <nborisov@suse.com>
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      [ add comments regarding memory barrier ]
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      e19eb11f
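      A condensed sketch of the bit-based gating described above. The flag
      name BTRFS_DELAYED_REFS_FLUSHING and the flags member on the delayed
      ref root are taken from the changelog's description and should be
      treated as assumptions; test_and_set_bit() is the standard kernel
      primitive:

        static int maybe_preflush_delayed_refs(struct btrfs_trans_handle *trans)
        {
                struct btrfs_delayed_ref_root *delayed_refs;

                delayed_refs = &trans->transaction->delayed_refs;
                /* Only the first task to set the bit does the pre-flush. */
                if (test_and_set_bit(BTRFS_DELAYED_REFS_FLUSHING,
                                     &delayed_refs->flags))
                        return 0;

                return btrfs_run_delayed_refs(trans, 0);
        }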
    • N
      btrfs: rename btrfs_find_free_objectid to btrfs_get_free_objectid · 543068a2
      Committed by Nikolay Borisov
      This better reflects the semantics of the function, i.e. no search is
      performed whatsoever.
      Reviewed-by: NJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: NNikolay Borisov <nborisov@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      543068a2
    • J
      btrfs: fix error handling in commit_fs_roots · 4f4317c1
      Committed by Josef Bacik
      While doing error injection I would sometimes get a corrupt file system.
      This is because I was injecting errors at btrfs_search_slot, but would
      only do it one time per stack.  This uncovered a problem in
      commit_fs_roots, where if we get an error we would just break.  However,
      we're in a nested loop, with the outer loop iterating over all the
      dirty fs roots, so subsequent root updates would succeed and clear the
      error value.
      
      This isn't likely to happen in real scenarios, however we could
      potentially get a random ENOMEM once and then not again, and we'd end up
      with a corrupted file system.  Fix this by moving the error checking
      around a bit to the main loop, as this is the only place where something
      will fail, and return the error as soon as it occurs.
      
      With this patch my reproducer no longer corrupts the file system.
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      4f4317c1
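      A minimal sketch of the control-flow fix described above: return the
      error from the update loop immediately instead of breaking and letting
      a later, successful iteration clear it. The iterator
      next_dirty_fs_root() is hypothetical; btrfs_update_root() and the
      root_key/root_item members are the real ones:

        static int commit_fs_roots_sketch(struct btrfs_trans_handle *trans)
        {
                struct btrfs_fs_info *fs_info = trans->fs_info;
                struct btrfs_root *root;
                int ret;

                /* Walk all dirty fs roots (iterator is hypothetical). */
                while ((root = next_dirty_fs_root(fs_info)) != NULL) {
                        ret = btrfs_update_root(trans, fs_info->tree_root,
                                                &root->root_key,
                                                &root->root_item);
                        if (ret)
                                return ret; /* don't let later successes hide it */
                }
                return 0;
        }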
  6. 12 Jan 2021, 1 commit
  7. 10 Dec 2020, 2 commits
    • B
      btrfs: keep sb cache_generation consistent with space_cache · 94846229
      Committed by Boris Burkov
      When mounting, btrfs uses the cache_generation in the super block to
      determine if space cache v1 is in use. However, by mounting with
      nospace_cache or space_cache=v2, it is possible to disable space cache
      v1, which does not result in un-setting cache_generation back to 0.
      
      In order to base some logic, like mount option printing in /proc/mounts,
      on the current state of the space cache rather than just the values of
      the mount option, keep the value of cache_generation consistent with the
      status of space cache v1.
      
      We ensure that cache_generation > 0 iff the file system is using
      space_cache v1. This requires committing a transaction on any mount
      which changes whether we are using v1. (v1->nospace_cache, v1->v2,
      nospace_cache->v1, v2->v1).
      
      Since the mechanism for writing out the cache generation is the
      transaction commit, but we want some finer grained control over when we
      unset it, we can't just rely on the SPACE_CACHE mount option, so we
      introduce an fs_info flag that mount can use when it wants to unset the
      generation.
      Reviewed-by: NJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: NBoris Burkov <boris@bur.io>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      94846229
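      A hedged sketch of the invariant above as the commit path might
      enforce it. The accessors btrfs_test_opt(), btrfs_super_generation()
      and btrfs_set_super_cache_generation() exist; the
      BTRFS_FS_CLEANUP_SPACE_CACHE_V1 flag name is inferred from the
      changelog and is an assumption:

        static void update_cache_generation_sketch(struct btrfs_fs_info *fs_info,
                                                   struct btrfs_super_block *super)
        {
                if (btrfs_test_opt(fs_info, SPACE_CACHE))
                        /* space cache v1 in use: keep cache_generation > 0 */
                        btrfs_set_super_cache_generation(super,
                                        btrfs_super_generation(super));
                else if (test_bit(BTRFS_FS_CLEANUP_SPACE_CACHE_V1,
                                  &fs_info->flags))
                        /* v1 disabled on this mount: clear it once */
                        btrfs_set_super_cache_generation(super, 0);
        }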
    • N
      btrfs: remove inode number cache feature · 5297199a
      Committed by Nikolay Borisov
      It's been deprecated since commit b547a88e ("btrfs: start
      deprecation of mount option inode_cache") which enumerates the reasons.
      
      A filesystem that uses the feature (mount -o inode_cache) tracks the
      inode numbers in bitmaps; that data stays on the filesystem after this
      patch. The size is roughly 5MiB for 1M inodes [1], which is considered
      small enough to be left there. Removal of the change can be implemented
      in btrfs-progs if needed.
      
      [1] https://lore.kernel.org/linux-btrfs/20201127145836.GZ6430@twin.jikos.cz/
      Signed-off-by: NNikolay Borisov <nborisov@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      [ update changelog ]
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      5297199a
  8. 08 Dec 2020, 7 commits
    • N
      btrfs: return bool from btrfs_should_end_transaction · a2633b6a
      Committed by Nikolay Borisov
      Results in slightly smaller code.
      
      add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-11 (-11)
      Function                                     old     new   delta
      btrfs_should_end_transaction                  96      85     -11
      Total: Before=20070, After=20059, chg -0.05%
      Signed-off-by: NNikolay Borisov <nborisov@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      a2633b6a
    • N
      8a8f4dea
    • N
    • J
      btrfs: protect fs_info->caching_block_groups by block_group_cache_lock · bbb86a37
      Committed by Josef Bacik
      I got the following lockdep splat
      
        ======================================================
        WARNING: possible circular locking dependency detected
        5.9.0+ #101 Not tainted
        ------------------------------------------------------
        btrfs-cleaner/3445 is trying to acquire lock:
        ffff89dbec39ab48 (btrfs-root-00){++++}-{3:3}, at: __btrfs_tree_read_lock+0x32/0x170
      
        but task is already holding lock:
        ffff89dbeaf28a88 (&fs_info->commit_root_sem){++++}-{3:3}, at: btrfs_find_all_roots+0x41/0x80
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #2 (&fs_info->commit_root_sem){++++}-{3:3}:
      	 down_write+0x3d/0x70
      	 btrfs_cache_block_group+0x2d5/0x510
      	 find_free_extent+0xb6e/0x12f0
      	 btrfs_reserve_extent+0xb3/0x1b0
      	 btrfs_alloc_tree_block+0xb1/0x330
      	 alloc_tree_block_no_bg_flush+0x4f/0x60
      	 __btrfs_cow_block+0x11d/0x580
      	 btrfs_cow_block+0x10c/0x220
      	 commit_cowonly_roots+0x47/0x2e0
      	 btrfs_commit_transaction+0x595/0xbd0
      	 sync_filesystem+0x74/0x90
      	 generic_shutdown_super+0x22/0x100
      	 kill_anon_super+0x14/0x30
      	 btrfs_kill_super+0x12/0x20
      	 deactivate_locked_super+0x36/0xa0
      	 cleanup_mnt+0x12d/0x190
      	 task_work_run+0x5c/0xa0
      	 exit_to_user_mode_prepare+0x1df/0x200
      	 syscall_exit_to_user_mode+0x54/0x280
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #1 (&space_info->groups_sem){++++}-{3:3}:
      	 down_read+0x40/0x130
      	 find_free_extent+0x2ed/0x12f0
      	 btrfs_reserve_extent+0xb3/0x1b0
      	 btrfs_alloc_tree_block+0xb1/0x330
      	 alloc_tree_block_no_bg_flush+0x4f/0x60
      	 __btrfs_cow_block+0x11d/0x580
      	 btrfs_cow_block+0x10c/0x220
      	 commit_cowonly_roots+0x47/0x2e0
      	 btrfs_commit_transaction+0x595/0xbd0
      	 sync_filesystem+0x74/0x90
      	 generic_shutdown_super+0x22/0x100
      	 kill_anon_super+0x14/0x30
      	 btrfs_kill_super+0x12/0x20
      	 deactivate_locked_super+0x36/0xa0
      	 cleanup_mnt+0x12d/0x190
      	 task_work_run+0x5c/0xa0
      	 exit_to_user_mode_prepare+0x1df/0x200
      	 syscall_exit_to_user_mode+0x54/0x280
      	 entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        -> #0 (btrfs-root-00){++++}-{3:3}:
      	 __lock_acquire+0x1167/0x2150
      	 lock_acquire+0xb9/0x3d0
      	 down_read_nested+0x43/0x130
      	 __btrfs_tree_read_lock+0x32/0x170
      	 __btrfs_read_lock_root_node+0x3a/0x50
      	 btrfs_search_slot+0x614/0x9d0
      	 btrfs_find_root+0x35/0x1b0
      	 btrfs_read_tree_root+0x61/0x120
      	 btrfs_get_root_ref+0x14b/0x600
      	 find_parent_nodes+0x3e6/0x1b30
      	 btrfs_find_all_roots_safe+0xb4/0x130
      	 btrfs_find_all_roots+0x60/0x80
      	 btrfs_qgroup_trace_extent_post+0x27/0x40
      	 btrfs_add_delayed_data_ref+0x3fd/0x460
      	 btrfs_free_extent+0x42/0x100
      	 __btrfs_mod_ref+0x1d7/0x2f0
      	 walk_up_proc+0x11c/0x400
      	 walk_up_tree+0xf0/0x180
      	 btrfs_drop_snapshot+0x1c7/0x780
      	 btrfs_clean_one_deleted_snapshot+0xfb/0x110
      	 cleaner_kthread+0xd4/0x140
      	 kthread+0x13a/0x150
      	 ret_from_fork+0x1f/0x30
      
        other info that might help us debug this:
      
        Chain exists of:
          btrfs-root-00 --> &space_info->groups_sem --> &fs_info->commit_root_sem
      
         Possible unsafe locking scenario:
      
      	 CPU0                    CPU1
      	 ----                    ----
          lock(&fs_info->commit_root_sem);
      				 lock(&space_info->groups_sem);
      				 lock(&fs_info->commit_root_sem);
          lock(btrfs-root-00);
      
         *** DEADLOCK ***
      
        3 locks held by btrfs-cleaner/3445:
         #0: ffff89dbeaf28838 (&fs_info->cleaner_mutex){+.+.}-{3:3}, at: cleaner_kthread+0x6e/0x140
         #1: ffff89dbeb6c7640 (sb_internal){.+.+}-{0:0}, at: start_transaction+0x40b/0x5c0
         #2: ffff89dbeaf28a88 (&fs_info->commit_root_sem){++++}-{3:3}, at: btrfs_find_all_roots+0x41/0x80
      
        stack backtrace:
        CPU: 0 PID: 3445 Comm: btrfs-cleaner Not tainted 5.9.0+ #101
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-2.fc32 04/01/2014
        Call Trace:
         dump_stack+0x8b/0xb0
         check_noncircular+0xcf/0xf0
         __lock_acquire+0x1167/0x2150
         ? __bfs+0x42/0x210
         lock_acquire+0xb9/0x3d0
         ? __btrfs_tree_read_lock+0x32/0x170
         down_read_nested+0x43/0x130
         ? __btrfs_tree_read_lock+0x32/0x170
         __btrfs_tree_read_lock+0x32/0x170
         __btrfs_read_lock_root_node+0x3a/0x50
         btrfs_search_slot+0x614/0x9d0
         ? find_held_lock+0x2b/0x80
         btrfs_find_root+0x35/0x1b0
         ? do_raw_spin_unlock+0x4b/0xa0
         btrfs_read_tree_root+0x61/0x120
         btrfs_get_root_ref+0x14b/0x600
         find_parent_nodes+0x3e6/0x1b30
         btrfs_find_all_roots_safe+0xb4/0x130
         btrfs_find_all_roots+0x60/0x80
         btrfs_qgroup_trace_extent_post+0x27/0x40
         btrfs_add_delayed_data_ref+0x3fd/0x460
         btrfs_free_extent+0x42/0x100
         __btrfs_mod_ref+0x1d7/0x2f0
         walk_up_proc+0x11c/0x400
         walk_up_tree+0xf0/0x180
         btrfs_drop_snapshot+0x1c7/0x780
         ? btrfs_clean_one_deleted_snapshot+0x73/0x110
         btrfs_clean_one_deleted_snapshot+0xfb/0x110
         cleaner_kthread+0xd4/0x140
         ? btrfs_alloc_root+0x50/0x50
         kthread+0x13a/0x150
         ? kthread_create_worker_on_cpu+0x40/0x40
         ret_from_fork+0x1f/0x30
      
      while testing another lockdep fix.  This happens because we're using the
      commit_root_sem to protect fs_info->caching_block_groups, which creates
      a groups_sem -> commit_root_sem dependency.  That is problematic because
      we will allocate blocks while holding tree root locks.  Fix this by
      protecting the list itself with fs_info->block_group_cache_lock.
      Reviewed-by: NFilipe Manana <fdmanana@suse.com>
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      bbb86a37
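      The locking change above boils down to manipulating the
      caching_block_groups list under fs_info->block_group_cache_lock (a
      spinlock) instead of commit_root_sem; a fragment illustrating the new
      rule, with the surrounding call sites omitted:

        /* Sketch: list membership is protected by the spinlock, not the rwsem. */
        spin_lock(&fs_info->block_group_cache_lock);
        list_add_tail(&caching_ctl->list, &fs_info->caching_block_groups);
        spin_unlock(&fs_info->block_group_cache_lock);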
    • J
      btrfs: update last_byte_to_unpin in switch_commit_roots · 27d56e62
      Committed by Josef Bacik
      While writing an explanation for the need of the commit_root_sem for
      btrfs_prepare_extent_commit, I realized we have a slight hole that could
      result in leaked space if we have to do the old style caching.  Consider
      the following scenario:
      
       commit root
       +----+----+----+----+----+----+----+
       |\\\\|    |\\\\|\\\\|    |\\\\|\\\\|
       +----+----+----+----+----+----+----+
       0    1    2    3    4    5    6    7
      
       new commit root
       +----+----+----+----+----+----+----+
       |    |    |    |\\\\|    |    |\\\\|
       +----+----+----+----+----+----+----+
       0    1    2    3    4    5    6    7
      
      Prior to this patch, we run btrfs_prepare_extent_commit, which updates
      the last_byte_to_unpin, and then we subsequently run
      switch_commit_roots.  In this example lets assume that
      caching_ctl->progress == 1 at btrfs_prepare_extent_commit() time, which
      means that cache->last_byte_to_unpin == 1.  Then we go and do the
      switch_commit_roots(), but in the meantime the caching thread has made
      some more progress, because we drop the commit_root_sem and re-acquired
      it.  Now caching_ctl->progress == 3.  We swap out the commit root and
      carry on to unpin.
      
      The race can happen like:
      
        1) The caching thread was running using the old commit root when it
           found the extent for [2, 3);
      
        2) Then it released the commit_root_sem because it was in the last
           item of a leaf and the semaphore was contended, and set ->progress
           to 3 (value of 'last'), as the last extent item in the current leaf
           was for the extent for range [2, 3);
      
        3) Next time it gets the commit_root_sem, will start using the new
           commit root and search for a key with offset 3, so it never finds
           the hole for [2, 3).
      
        So the caching thread never saw [2, 3) as free space in any of the
        commit roots, and by the time finish_extent_commit() was called for
        the range [0, 3), ->last_byte_to_unpin was 1, so it only returned the
        subrange [0, 1) to the free space cache, skipping [2, 3).
      
      In the unpin code we have last_byte_to_unpin == 1, so we unpin [0,1),
      but do not unpin [2,3).  However because caching_ctl->progress == 3 we
      do not see the newly freed section of [2,3), and thus do not add it to
      our free space cache.  This results in us missing a chunk of free space
      in memory (on disk too, unless we have a power failure before writing
      the free space cache to disk).
      
      Fix this by making sure the ->last_byte_to_unpin is set at the same time
      that we swap the commit roots, this ensures that we will always be
      consistent.
      
      CC: stable@vger.kernel.org # 5.8+
      Reviewed-by: NFilipe Manana <fdmanana@suse.com>
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      [ update changelog with Filipe's review comments ]
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      27d56e62
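      A simplified fragment of the idea above, not the exact upstream hunk:
      update last_byte_to_unpin inside the same commit_root_sem write
      section that swaps the commit roots, so the caching progress snapshot
      and the root switch can never interleave:

        down_write(&fs_info->commit_root_sem);
        list_for_each_entry(caching_ctl, &fs_info->caching_block_groups, list) {
                struct btrfs_block_group *cache = caching_ctl->block_group;

                if (btrfs_block_group_done(cache))
                        cache->last_byte_to_unpin = (u64)-1;
                else
                        cache->last_byte_to_unpin = caching_ctl->progress;
        }
        /* ...swap the commit roots here, still under commit_root_sem... */
        up_write(&fs_info->commit_root_sem);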
    • J
      btrfs: locking: remove all the blocking helpers · ac5887c8
      Committed by Josef Bacik
      Now that we're using a rw_semaphore we no longer need to indicate if a
      lock is blocking or not, nor do we need to flip the entire path from
      blocking to spinning.  Remove these helpers and all the places they are
      called.
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      ac5887c8
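      With the tree locks backed by an rw_semaphore there is no
      blocking/spinning distinction left to manage; usage reduces to the
      plain read/write pairs below (the btrfs wrappers additionally handle
      lockdep nesting and owner tracking):

        /* Readers of an extent buffer: */
        btrfs_tree_read_lock(eb);
        /* ...walk or read the buffer... */
        btrfs_tree_read_unlock(eb);

        /* Writers: */
        btrfs_tree_lock(eb);
        /* ...modify the buffer... */
        btrfs_tree_unlock(eb);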
    • F
      btrfs: do not start and wait for delalloc on snapshot roots on transaction commit · 88090ad3
      Committed by Filipe Manana
      We do not need anymore to start writeback for delalloc of roots that are
      being snapshotted and wait for it to complete. This was done in commit
      609e804d ("Btrfs: fix file corruption after snapshotting due to mix
      of buffered/DIO writes") to fix a type of file corruption where files in a
      snapshot end up having their i_size updated in a non-ordered way, leaving
      implicit file holes, when buffered IO writes that increase a file's size
      are followed by direct IO writes that also increase the file's size.
      
      This is not needed anymore because we now have a more generic mechanism
      to prevent a non-ordered i_size update since commit 9ddc959e
      ("btrfs: use the file extent tree infrastructure"), which addresses this
      scenario involving snapshots as well.
      Reviewed-by: NJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: NFilipe Manana <fdmanana@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      88090ad3
  9. 07 Oct 2020, 2 commits
    • J
      btrfs: introduce BTRFS_NESTING_COW for cow'ing blocks · 9631e4cc
      Committed by Josef Bacik
      When we COW a block we are holding a lock on the original block, and
      then we lock the new COW block.  Because our lockdep maps are based on
      root + level, this will make lockdep complain.  We need a way to
      indicate a subclass for locking the COW'ed block, so plumb through our
      btrfs_lock_nesting from btrfs_cow_block down to btrfs_init_new_buffer,
      and then introduce BTRFS_NESTING_COW to be used for cow'ing blocks.
      
      The reason I've added all this extra infrastructure is that there will
      be a need for different nesting classes in follow-up patches.
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      9631e4cc
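      A sketch of the shape of the new infrastructure: a nesting enum that
      flows from btrfs_cow_block() down to where the new buffer is locked,
      giving lockdep a distinct subclass for the COW copy. The enum values
      come from the changelog; the locking call shown is illustrative rather
      than the exact upstream signature:

        enum btrfs_lock_nesting {
                BTRFS_NESTING_NORMAL,
                BTRFS_NESTING_COW,
                /* more classes are added by follow-up patches */
        };

        /* The COW copy gets its own lockdep subclass when first locked: */
        lock_new_buffer(cow, BTRFS_NESTING_COW);   /* illustrative call */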
    • F
      btrfs: make fast fsyncs wait only for writeback · 48778179
      Committed by Filipe Manana
      Currently regardless of a full or a fast fsync we always wait for ordered
      extents to complete, and then start logging the inode after that. However
      for fast fsyncs we can just wait for the writeback to complete, we don't
      need to wait for the ordered extents to complete since we use the list of
      modified extent maps to figure out which extents we must log, and we can
      get their checksums directly from the ordered extents that are still in
      flight, otherwise look them up from the checksums tree.
      
      Until commit b5e6c3e1 ("btrfs: always wait on ordered extents at
      fsync time"), for fast fsyncs, we used to start logging without even
      waiting for the writeback to complete first, we would wait for it to
      complete after logging, while holding a transaction open, which led to
      performance issues when using cgroups and probably for other cases too,
      as waiting for IO while holding a transaction handle should be avoided
      as much as possible. After that, for fast fsyncs, we started to wait for
      ordered extents to complete before starting to log, which adds some
      latency to fsyncs and we even got at least one report about a performance
      drop which bisected to that particular change:
      
      https://lore.kernel.org/linux-btrfs/20181109215148.GF23260@techsingularity.net/
      
      This change makes fast fsyncs only wait for writeback to finish before
      starting to log the inode, instead of waiting for both the writeback to
      finish and for the ordered extents to complete. This brings back part of
      the logic we had that extracts checksums from in flight ordered extents,
      which are not yet in the checksums tree, and making sure transaction
      commits wait for the completion of ordered extents previously logged
      (by far most of the time they have already completed by the time a
      transaction commit starts, resulting in no wait at all), to avoid any
      data loss if an ordered extent completes after the transaction used to
      log an inode is committed, followed by a power failure.
      
      When there are no other tasks accessing the checksums and the subvolume
      btrees, the ordered extent completion is pretty fast, typically taking
      100 to 200 microseconds only in my observations. However when there are
      other tasks accessing these btrees, ordered extent completion can take a
      lot more time due to lock contention on nodes and leaves of these btrees.
      I've seen cases over 2 milliseconds, which starts to be significant. In
      particular when we do have concurrent fsyncs against different files there
      is a lot of contention on the checksums btree, since we have many tasks
      writing the checksums into the btree and other tasks that already started
      the logging phase are doing lookups for checksums in the btree.
      
      This change also turns all ranged fsyncs into full ranged fsyncs, which
      is something we already did when not using the NO_HOLES feature or when
      doing a full fsync. This is to guarantee we never miss checksums due to
      writeback having been triggered only for a part of an extent, and we end
      up logging the full extent but only checksums for the written range, which
      results in missing checksums after log replay. Allowing ranged fsyncs to
      operate again only in the original range, when using the NO_HOLES feature
      and doing a fast fsync is doable but requires some non-trivial changes to
      the writeback path, which can always be worked on later if needed, but I
      don't think they are a very common use case.
      
      Several tests were performed using fio for different numbers of concurrent
      jobs, each writing and fsyncing its own file, for both sequential and
      random file writes. The tests were run on bare metal, no virtualization,
      on a box with 12 cores (Intel i7-8700), 64Gb of RAM and a NVMe device,
      with a kernel configuration that is the default of typical distributions
      (debian in this case), without debug options enabled (kasan, kmemleak,
      slub debug, debug of page allocations, lock debugging, etc).
      
      The following script that calls fio was used:
      
        $ cat test-fsync.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/btrfs
        MOUNT_OPTIONS="-o ssd -o space_cache=v2"
        MKFS_OPTIONS="-d single -m single"
      
        if [ $# -ne 5 ]; then
          echo "Use $0 NUM_JOBS FILE_SIZE FSYNC_FREQ BLOCK_SIZE [write|randwrite]"
          exit 1
        fi
      
        NUM_JOBS=$1
        FILE_SIZE=$2
        FSYNC_FREQ=$3
        BLOCK_SIZE=$4
        WRITE_MODE=$5
      
        if [ "$WRITE_MODE" != "write" ] && [ "$WRITE_MODE" != "randwrite" ]; then
          echo "Invalid WRITE_MODE, must be 'write' or 'randwrite'"
          exit 1
        fi
      
        cat <<EOF > /tmp/fio-job.ini
        [writers]
        rw=$WRITE_MODE
        fsync=$FSYNC_FREQ
        fallocate=none
        group_reporting=1
        direct=0
        bs=$BLOCK_SIZE
        ioengine=sync
        size=$FILE_SIZE
        directory=$MNT
        numjobs=$NUM_JOBS
        EOF
      
        echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
      
        echo
        echo "Using config:"
        echo
        cat /tmp/fio-job.ini
        echo
      
        umount $MNT &> /dev/null
        mkfs.btrfs -f $MKFS_OPTIONS $DEV
        mount $MOUNT_OPTIONS $DEV $MNT
        fio /tmp/fio-job.ini
        umount $MNT
      
      The results were the following:
      
      *************************
      *** sequential writes ***
      *************************
      
      ==== 1 job, 8GiB file, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=36.6MiB/s (38.4MB/s), 36.6MiB/s-36.6MiB/s (38.4MB/s-38.4MB/s), io=8192MiB (8590MB), run=223689-223689msec
      
      After patch:
      
      WRITE: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=8192MiB (8590MB), run=203980-203980msec
      (+9.8%, -8.8% runtime)
      
      ==== 2 jobs, 4GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=35.8MiB/s (37.5MB/s), 35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=8192MiB (8590MB), run=228950-228950msec
      
      After patch:
      
      WRITE: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=8192MiB (8590MB), run=188272-188272msec
      (+21.5% throughput, -17.8% runtime)
      
      ==== 4 jobs, 2GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=8192MiB (8590MB), run=163446-163446msec
      
      After patch:
      
      WRITE: bw=64.5MiB/s (67.6MB/s), 64.5MiB/s-64.5MiB/s (67.6MB/s-67.6MB/s), io=8192MiB (8590MB), run=126987-126987msec
      (+28.7% throughput, -22.3% runtime)
      
      ==== 8 jobs, 1GiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=64.0MiB/s (68.1MB/s), 64.0MiB/s-64.0MiB/s (68.1MB/s-68.1MB/s), io=8192MiB (8590MB), run=126075-126075msec
      
      After patch:
      
      WRITE: bw=86.8MiB/s (91.0MB/s), 86.8MiB/s-86.8MiB/s (91.0MB/s-91.0MB/s), io=8192MiB (8590MB), run=94358-94358msec
      (+35.6% throughput, -25.2% runtime)
      
      ==== 16 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=79.8MiB/s (83.6MB/s), 79.8MiB/s-79.8MiB/s (83.6MB/s-83.6MB/s), io=8192MiB (8590MB), run=102694-102694msec
      
      After patch:
      
      WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=8192MiB (8590MB), run=76446-76446msec
      (+34.1% throughput, -25.6% runtime)
      
      ==== 32 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=16.0GiB (17.2GB), run=175836-175836msec
      
      After patch:
      
      WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=16.0GiB (17.2GB), run=147001-147001msec
      (+19.1% throughput, -16.4% runtime)
      
      ==== 64 jobs, 512MiB files, fsync frequency 1, block size 64KiB ====
      
      Before patch:
      
      WRITE: bw=108MiB/s (114MB/s), 108MiB/s-108MiB/s (114MB/s-114MB/s), io=32.0GiB (34.4GB), run=302656-302656msec
      
      After patch:
      
      WRITE: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=32.0GiB (34.4GB), run=246003-246003msec
      (+23.1% throughput, -18.7% runtime)
      
      ************************
      ***   random writes  ***
      ************************
      
      ==== 1 job, 8GiB file, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=11.5MiB/s (12.0MB/s), 11.5MiB/s-11.5MiB/s (12.0MB/s-12.0MB/s), io=8192MiB (8590MB), run=714281-714281msec
      
      After patch:
      
      WRITE: bw=11.6MiB/s (12.2MB/s), 11.6MiB/s-11.6MiB/s (12.2MB/s-12.2MB/s), io=8192MiB (8590MB), run=705959-705959msec
      (+0.9% throughput, -1.7% runtime)
      
      ==== 2 jobs, 4GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=12.8MiB/s (13.5MB/s), 12.8MiB/s-12.8MiB/s (13.5MB/s-13.5MB/s), io=8192MiB (8590MB), run=638101-638101msec
      
      After patch:
      
      WRITE: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=8192MiB (8590MB), run=625374-625374msec
      (+2.3% throughput, -2.0% runtime)
      
      ==== 4 jobs, 2GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=15.4MiB/s (16.2MB/s), 15.4MiB/s-15.4MiB/s (16.2MB/s-16.2MB/s), io=8192MiB (8590MB), run=531146-531146msec
      
      After patch:
      
      WRITE: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=8192MiB (8590MB), run=460431-460431msec
      (+15.6% throughput, -13.3% runtime)
      
      ==== 8 jobs, 1GiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=19.9MiB/s (20.8MB/s), 19.9MiB/s-19.9MiB/s (20.8MB/s-20.8MB/s), io=8192MiB (8590MB), run=412664-412664msec
      
      After patch:
      
      WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=8192MiB (8590MB), run=368589-368589msec
      (+11.6% throughput, -10.7% runtime)
      
      ==== 16 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=29.3MiB/s (30.7MB/s), 29.3MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=8192MiB (8590MB), run=279924-279924msec
      
      After patch:
      
      WRITE: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=8192MiB (8590MB), run=269258-269258msec
      (+3.8% throughput, -3.8% runtime)
      
      ==== 32 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=16.0GiB (17.2GB), run=443581-443581msec
      
      After patch:
      
      WRITE: bw=41.6MiB/s (43.6MB/s), 41.6MiB/s-41.6MiB/s (43.6MB/s-43.6MB/s), io=16.0GiB (17.2GB), run=394114-394114msec
      (+12.7% throughput, -11.2% runtime)
      
      ==== 64 jobs, 512MiB files, fsync frequency 16, block size 4KiB ====
      
      Before patch:
      
      WRITE: bw=45.9MiB/s (48.1MB/s), 45.9MiB/s-45.9MiB/s (48.1MB/s-48.1MB/s), io=32.0GiB (34.4GB), run=714614-714614msec
      
      After patch:
      
      WRITE: bw=48.8MiB/s (51.1MB/s), 48.8MiB/s-48.8MiB/s (51.1MB/s-51.1MB/s), io=32.0GiB (34.4GB), run=672087-672087msec
      (+6.3% throughput, -6.0% runtime)
      Signed-off-by: NFilipe Manana <fdmanana@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      48778179
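      A high-level sketch of the before/after waiting behaviour described
      above, using btrfs_wait_ordered_range() and the generic
      filemap_fdatawait_range() to stand for the two strategies; the real
      fsync path has much more around it:

        /* Before: wait for writeback *and* for ordered extents to complete. */
        ret = btrfs_wait_ordered_range(inode, start, len);

        /*
         * After, for fast fsyncs: wait only for writeback.  Checksums of
         * still-running ordered extents are pulled from the in-flight
         * ordered extents when the inode is logged.
         */
        ret = filemap_fdatawait_range(inode->i_mapping, start, start + len - 1);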
  10. 08 Sep 2020, 1 commit
    • F
      btrfs: fix NULL pointer dereference after failure to create snapshot · 2d892ccd
      Committed by Filipe Manana
      When trying to get a new fs root for a snapshot during the transaction
      at transaction.c:create_pending_snapshot(), if btrfs_get_new_fs_root()
      fails we leave "pending->snap" pointing to an error pointer, and then
      later at ioctl.c:create_snapshot() we dereference that pointer, resulting
      in a crash:
      
        [12264.614689] BUG: kernel NULL pointer dereference, address: 00000000000007c4
        [12264.615650] #PF: supervisor write access in kernel mode
        [12264.616487] #PF: error_code(0x0002) - not-present page
        [12264.617436] PGD 0 P4D 0
        [12264.618328] Oops: 0002 [#1] PREEMPT SMP DEBUG_PAGEALLOC PTI
        [12264.619150] CPU: 0 PID: 2310635 Comm: fsstress Tainted: G        W         5.9.0-rc3-btrfs-next-67 #1
        [12264.619960] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
        [12264.621769] RIP: 0010:btrfs_mksubvol+0x438/0x4a0 [btrfs]
        [12264.622528] Code: bc ef ff ff (...)
        [12264.624092] RSP: 0018:ffffaa6fc7277cd8 EFLAGS: 00010282
        [12264.624669] RAX: 00000000fffffff4 RBX: ffff9d3e8f151a60 RCX: 0000000000000000
        [12264.625249] RDX: 0000000000000001 RSI: ffffffff9d56c9be RDI: fffffffffffffff4
        [12264.625830] RBP: ffff9d3e8f151b48 R08: 0000000000000000 R09: 0000000000000000
        [12264.626413] R10: 0000000000000000 R11: 0000000000000000 R12: 00000000fffffff4
        [12264.626994] R13: ffff9d3ede380538 R14: ffff9d3ede380500 R15: ffff9d3f61b2eeb8
        [12264.627582] FS:  00007f140d5d8200(0000) GS:ffff9d3fb5e00000(0000) knlGS:0000000000000000
        [12264.628176] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        [12264.628773] CR2: 00000000000007c4 CR3: 000000020f8e8004 CR4: 00000000003706f0
        [12264.629379] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        [12264.629994] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
        [12264.630594] Call Trace:
        [12264.631227]  btrfs_mksnapshot+0x7b/0xb0 [btrfs]
        [12264.631840]  __btrfs_ioctl_snap_create+0x16f/0x1a0 [btrfs]
        [12264.632458]  btrfs_ioctl_snap_create_v2+0xb0/0xf0 [btrfs]
        [12264.633078]  btrfs_ioctl+0x1864/0x3130 [btrfs]
        [12264.633689]  ? do_sys_openat2+0x1a7/0x2d0
        [12264.634295]  ? kmem_cache_free+0x147/0x3a0
        [12264.634899]  ? __x64_sys_ioctl+0x83/0xb0
        [12264.635488]  __x64_sys_ioctl+0x83/0xb0
        [12264.636058]  do_syscall_64+0x33/0x80
        [12264.636616]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
        (gdb) list *(btrfs_mksubvol+0x438)
        0x7c7b8 is in btrfs_mksubvol (fs/btrfs/ioctl.c:858).
        853		ret = 0;
        854		pending_snapshot->anon_dev = 0;
        855	fail:
        856		/* Prevent double freeing of anon_dev */
        857		if (ret && pending_snapshot->snap)
        858			pending_snapshot->snap->anon_dev = 0;
        859		btrfs_put_root(pending_snapshot->snap);
        860		btrfs_subvolume_release_metadata(root, &pending_snapshot->block_rsv);
        861	free_pending:
        862		if (pending_snapshot->anon_dev)
      
      So fix this by setting "pending->snap" to NULL if we get an error from the
      call to btrfs_get_new_fs_root() at transaction.c:create_pending_snapshot().
      
      Fixes: 2dfb1e43 ("btrfs: preallocate anon block device at first phase of snapshot creation")
      Signed-off-by: NFilipe Manana <fdmanana@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      2d892ccd
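      The fix amounts to not leaving an ERR_PTR behind in pending->snap; a
      sketch of the pattern, with the error path simplified and the
      btrfs_get_new_fs_root() arguments shown as in that era's code:

        pending->snap = btrfs_get_new_fs_root(fs_info, objectid,
                                              pending->anon_dev);
        if (IS_ERR(pending->snap)) {
                ret = PTR_ERR(pending->snap);
                /* Don't let create_snapshot() dereference an ERR_PTR later. */
                pending->snap = NULL;
                btrfs_abort_transaction(trans, ret);
                goto fail;
        }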
  11. 27 Jul 2020, 2 commits
    • J
      btrfs: return EROFS for BTRFS_FS_STATE_ERROR cases · fbabd4a3
      Committed by Josef Bacik
      Eric reported seeing this message while running generic/475
      
        BTRFS: error (device dm-3) in btrfs_sync_log:3084: errno=-117 Filesystem corrupted
      
      Full stack trace:
      
        BTRFS: error (device dm-0) in btrfs_commit_transaction:2323: errno=-5 IO failure (Error while writing out transaction)
        BTRFS info (device dm-0): forced readonly
        BTRFS warning (device dm-0): Skipping commit of aborted transaction.
        ------------[ cut here ]------------
        BTRFS: error (device dm-0) in cleanup_transaction:1894: errno=-5 IO failure
        BTRFS: Transaction aborted (error -117)
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c6480 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c6488 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c6490 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c6498 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c64a0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c64a8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c64b0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c64b8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3555 rw 0,0 sector 0x1c64c0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3572 rw 0,0 sector 0x1b85e8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3572 rw 0,0 sector 0x1b85f0 len 4096 err no 10
        WARNING: CPU: 3 PID: 23985 at fs/btrfs/tree-log.c:3084 btrfs_sync_log+0xbc8/0xd60 [btrfs]
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d4288 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d4290 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d4298 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42a0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42a8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42b0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42b8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42c0 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42c8 len 4096 err no 10
        BTRFS warning (device dm-0): direct IO failed ino 3548 rw 0,0 sector 0x1d42d0 len 4096 err no 10
        CPU: 3 PID: 23985 Comm: fsstress Tainted: G        W    L    5.8.0-rc4-default+ #1181
        Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba527-rebuilt.opensuse.org 04/01/2014
        RIP: 0010:btrfs_sync_log+0xbc8/0xd60 [btrfs]
        RSP: 0018:ffff909a44d17bd0 EFLAGS: 00010286
        RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000001
        RDX: ffff8f3be41cb940 RSI: ffffffffb0108d2b RDI: ffffffffb0108ff7
        RBP: ffff909a44d17e70 R08: 0000000000000000 R09: 0000000000000000
        R10: 0000000000000000 R11: 0000000000037988 R12: ffff8f3bd20e4000
        R13: ffff8f3bd20e4428 R14: 00000000ffffff8b R15: ffff909a44d17c70
        FS:  00007f6a6ed3fb80(0000) GS:ffff8f3c3dc00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007f6a6ed3e000 CR3: 00000000525c0003 CR4: 0000000000160ee0
        Call Trace:
         ? finish_wait+0x90/0x90
         ? __mutex_unlock_slowpath+0x45/0x2a0
         ? lock_acquire+0xa3/0x440
         ? lockref_put_or_lock+0x9/0x30
         ? dput+0x20/0x4a0
         ? dput+0x20/0x4a0
         ? do_raw_spin_unlock+0x4b/0xc0
         ? _raw_spin_unlock+0x1f/0x30
         btrfs_sync_file+0x335/0x490 [btrfs]
         do_fsync+0x38/0x70
         __x64_sys_fsync+0x10/0x20
         do_syscall_64+0x50/0xe0
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
        RIP: 0033:0x7f6a6ef1b6e3
        Code: Bad RIP value.
        RSP: 002b:00007ffd01e20038 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
        RAX: ffffffffffffffda RBX: 000000000007a120 RCX: 00007f6a6ef1b6e3
        RDX: 00007ffd01e1ffa0 RSI: 00007ffd01e1ffa0 RDI: 0000000000000003
        RBP: 0000000000000003 R08: 0000000000000001 R09: 00007ffd01e2004c
        R10: 0000000000000000 R11: 0000000000000246 R12: 000000000000009f
        R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
        irq event stamp: 0
        hardirqs last  enabled at (0): [<0000000000000000>] 0x0
        hardirqs last disabled at (0): [<ffffffffb007fe0b>] copy_process+0x67b/0x1b00
        softirqs last  enabled at (0): [<ffffffffb007fe0b>] copy_process+0x67b/0x1b00
        softirqs last disabled at (0): [<0000000000000000>] 0x0
        ---[ end trace af146e0e38433456 ]---
        BTRFS: error (device dm-0) in btrfs_sync_log:3084: errno=-117 Filesystem corrupted
      
      This ret came from btrfs_write_marked_extents().  If we get an aborted
      transaction via EIO before, we'll see it in btree_write_cache_pages()
      and return EUCLEAN, which gets printed as "Filesystem corrupted".
      
      Except we shouldn't be returning EUCLEAN here, we need to be returning
      EROFS because EUCLEAN is reserved for actual corruption, not IO errors.
      
      We are inconsistent about our handling of BTRFS_FS_STATE_ERROR
      elsewhere, but we want to use EROFS for this particular case.  The
      original transaction abort has the real error code for why we ended up
      with an aborted transaction, all subsequent actions just need to return
      EROFS because they may not have a trans handle and have no idea about
      the original cause of the abort.
      
      After patch "btrfs: don't WARN if we abort a transaction with EROFS" the
      stacktrace will not be dumped either.
      Reported-by: NEric Sandeen <esandeen@redhat.com>
      CC: stable@vger.kernel.org # 5.4+
      Signed-off-by: NJosef Bacik <josef@toxicpanda.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      [ add full test stacktrace ]
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      fbabd4a3
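      A sketch of the convention the entry above settles on: once the
      filesystem has been flagged with BTRFS_FS_STATE_ERROR, later failures
      report EROFS, leaving EUCLEAN for genuine corruption:

        if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
                /* The original transaction abort already logged the real cause. */
                return -EROFS;
        }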
    • Q
      btrfs: qgroup: remove ASYNC_COMMIT mechanism in favor of reserve retry-after-EDQUOT · adca4d94
      Committed by Qu Wenruo
      Commit a514d638 ("btrfs: qgroup: Commit transaction in advance to
      reduce early EDQUOT") tries to reduce the early EDQUOT problems by
      checking the free qgroup space against a threshold and waking up the
      commit kthread to free some space.
      
      The problem with that mechanism is that it can only free qgroup
      per-trans metadata space; it can't do anything for data or prealloc
      qgroup space.
      
      Now since we have the ability to flush qgroup space, and implemented
      retry-after-EDQUOT behavior, such mechanism can be completely replaced.
      
      So this patch cleans up that mechanism in favor of retry-after-EDQUOT.
      Reviewed-by: NJosef Bacik <josef@toxicpanda.com>
      Signed-off-by: NQu Wenruo <wqu@suse.com>
      Reviewed-by: NDavid Sterba <dsterba@suse.com>
      Signed-off-by: NDavid Sterba <dsterba@suse.com>
      adca4d94
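      A rough sketch of the retry-after-EDQUOT shape that replaces the old
      ASYNC_COMMIT kick; both helpers named below are hypothetical stand-ins
      for btrfs' qgroup reservation and flushing machinery:

        static int qgroup_reserve_retry_sketch(struct btrfs_root *root,
                                               u64 num_bytes)
        {
                int ret;

                ret = qgroup_reserve(root, num_bytes);       /* hypothetical */
                if (ret != -EDQUOT)
                        return ret;

                /*
                 * Flush what we can (delalloc, ordered extents, possibly
                 * commit the transaction) and retry the reservation once.
                 */
                qgroup_flush(root);                          /* hypothetical */
                return qgroup_reserve(root, num_bytes);
        }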