27 October 2021, 40 commits
    • btrfs: use btrfs_get_dev_args_from_path in dev removal ioctls · 1a15eb72
      Authored by Josef Bacik
      For device removal and replace we call btrfs_find_device_by_devspec,
      which, if we give it a device path and nothing else, will call
      btrfs_get_dev_args_from_path. That opens the block device, reads the
      super block, and then looks up our device based on that.
      
      However at this point we're holding the sb write "lock", so reading the
      block device pulls in the dependency on ->open_mutex, which produces the
      following lockdep splat:
      
      ======================================================
      WARNING: possible circular locking dependency detected
      5.14.0-rc2+ #405 Not tainted
      ------------------------------------------------------
      losetup/11576 is trying to acquire lock:
      ffff9bbe8cded938 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x67/0x5e0
      
      but task is already holding lock:
      ffff9bbe88e4fc68 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x660 [loop]
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #4 (&lo->lo_mutex){+.+.}-{3:3}:
             __mutex_lock+0x7d/0x750
             lo_open+0x28/0x60 [loop]
             blkdev_get_whole+0x25/0xf0
             blkdev_get_by_dev.part.0+0x168/0x3c0
             blkdev_open+0xd2/0xe0
             do_dentry_open+0x161/0x390
             path_openat+0x3cc/0xa20
             do_filp_open+0x96/0x120
             do_sys_openat2+0x7b/0x130
             __x64_sys_openat+0x46/0x70
             do_syscall_64+0x38/0x90
             entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      -> #3 (&disk->open_mutex){+.+.}-{3:3}:
             __mutex_lock+0x7d/0x750
             blkdev_get_by_dev.part.0+0x56/0x3c0
             blkdev_get_by_path+0x98/0xa0
             btrfs_get_bdev_and_sb+0x1b/0xb0
             btrfs_find_device_by_devspec+0x12b/0x1c0
             btrfs_rm_device+0x127/0x610
             btrfs_ioctl+0x2a31/0x2e70
             __x64_sys_ioctl+0x80/0xb0
             do_syscall_64+0x38/0x90
             entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      -> #2 (sb_writers#12){.+.+}-{0:0}:
             lo_write_bvec+0xc2/0x240 [loop]
             loop_process_work+0x238/0xd00 [loop]
             process_one_work+0x26b/0x560
             worker_thread+0x55/0x3c0
             kthread+0x140/0x160
             ret_from_fork+0x1f/0x30
      
      -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
             process_one_work+0x245/0x560
             worker_thread+0x55/0x3c0
             kthread+0x140/0x160
             ret_from_fork+0x1f/0x30
      
      -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
             __lock_acquire+0x10ea/0x1d90
             lock_acquire+0xb5/0x2b0
             flush_workqueue+0x91/0x5e0
             drain_workqueue+0xa0/0x110
             destroy_workqueue+0x36/0x250
             __loop_clr_fd+0x9a/0x660 [loop]
             block_ioctl+0x3f/0x50
             __x64_sys_ioctl+0x80/0xb0
             do_syscall_64+0x38/0x90
             entry_SYSCALL_64_after_hwframe+0x44/0xae
      
      other info that might help us debug this:
      
      Chain exists of:
        (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(&lo->lo_mutex);
                                     lock(&disk->open_mutex);
                                     lock(&lo->lo_mutex);
        lock((wq_completion)loop0);
      
       *** DEADLOCK ***
      
      1 lock held by losetup/11576:
       #0: ffff9bbe88e4fc68 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x660 [loop]
      
      stack backtrace:
      CPU: 0 PID: 11576 Comm: losetup Not tainted 5.14.0-rc2+ #405
      Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
      Call Trace:
       dump_stack_lvl+0x57/0x72
       check_noncircular+0xcf/0xf0
       ? stack_trace_save+0x3b/0x50
       __lock_acquire+0x10ea/0x1d90
       lock_acquire+0xb5/0x2b0
       ? flush_workqueue+0x67/0x5e0
       ? lockdep_init_map_type+0x47/0x220
       flush_workqueue+0x91/0x5e0
       ? flush_workqueue+0x67/0x5e0
       ? verify_cpu+0xf0/0x100
       drain_workqueue+0xa0/0x110
       destroy_workqueue+0x36/0x250
       __loop_clr_fd+0x9a/0x660 [loop]
       ? blkdev_ioctl+0x8d/0x2a0
       block_ioctl+0x3f/0x50
       __x64_sys_ioctl+0x80/0xb0
       do_syscall_64+0x38/0x90
       entry_SYSCALL_64_after_hwframe+0x44/0xae
      RIP: 0033:0x7f31b02404cb
      
      Instead what we want to do is populate our device lookup args before we
      grab any locks, and then pass these args into btrfs_rm_device().  From
      there we can find the device and do the appropriate removal.
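      
      A rough sketch of the resulting ordering, based on the helpers named in
      this series (the wrapper below is hypothetical and the signatures are
      simplified):
      
        /* Sketch: resolve the path with no fs locks held, then remove. */
        static int rm_device_by_path(struct btrfs_fs_info *fs_info,
                                     const char *path)
        {
                BTRFS_DEV_LOOKUP_ARGS(args);
                struct block_device *bdev = NULL;
                fmode_t mode;
                int ret;
        
                /* Opens the block device and reads the super block. */
                ret = btrfs_get_dev_args_from_path(fs_info, &args, path);
                if (ret)
                        return ret;
        
                /* Only now take the locks the removal itself needs. */
                ret = btrfs_rm_device(fs_info, &args, &bdev, &mode);
        
                btrfs_put_dev_args_from_path(&args);
                return ret;
        }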
      Suggested-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add a btrfs_get_dev_args_from_path helper · faa775c4
      Authored by Josef Bacik
      We are going to want to populate our device lookup args outside of any
      locks and then do the actual device lookup later, so add a helper to do
      this work and make btrfs_find_device_by_devspec() use this helper for
      now.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle device lookup with btrfs_dev_lookup_args · 562d7b15
      Authored by Josef Bacik
      We have a lot of device lookup functions that all do something slightly
      different.  Clean this up by adding a struct to hold the different
      lookup criteria, and then pass this around to btrfs_find_device() so it
      can do the proper matching based on the lookup criteria.
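      
      The struct is roughly the following (a sketch; the exact field list may
      differ slightly):
      
        struct btrfs_dev_lookup_args {
                u64 devid;
                u8 *uuid;
                u8 *fsid;
                bool missing;
        };
        
        /* Initializer: devid == (u64)-1 means "do not match by devid". */
        #define BTRFS_DEV_LOOKUP_ARGS(name) \
                struct btrfs_dev_lookup_args name = { .devid = (u64)-1 }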
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not call close_fs_devices in btrfs_rm_device · 8b41393f
      Authored by Josef Bacik
      There's a subtle case where if we're removing the seed device from a
      file system we need to free its private copy of the fs_devices.  However
      we do not need to call close_fs_devices(), because at this point there
      are no devices left to close as we've closed the last one.  The only
      thing that close_fs_devices() does is decrement ->opened, which should
      be 1.  We want to avoid calling close_fs_devices() here because it has a
      lockdep_assert_held(&uuid_mutex), and we are going to stop holding the
      uuid_mutex in this path.
      
      So simply decrement the ->opened counter like we should, and then clean
      up like normal.  Also add a comment explaining what we're doing here, as
      I initially removed this code erroneously.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add comments for device counts in struct btrfs_fs_devices · add9745a
      Authored by Anand Jain
      A bug was checking a wrong device count before we delete the struct
      btrfs_fs_devices in btrfs_rm_device(). To avoid future confusion and for
      easy reference, add a comment about the various device counts that we
      have in the struct btrfs_fs_devices.
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use num_device to check for the last surviving seed device · 8e906945
      Authored by Anand Jain
      For both sprout and seed fsids,
       btrfs_fs_devices::num_devices provides device count including missing
       btrfs_fs_devices::open_devices provides device count excluding missing
      
      We create a dummy struct btrfs_device for the missing device, so
      num_devices != open_devices when there is a missing device.
      
      In btrfs_rm_device() we wrongly check for %cur_devices->open_devices
      before freeing the seed fs_devices. Instead we should check for
      %cur_devices->num_devices.
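      
      A sketch of the corrected condition:
      
        /*
         * Sketch: free the seed fs_devices only once every device,
         * including missing ones, is gone.
         */
        if (cur_devices->num_devices == 0) {
                /* free cur_devices */
        }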
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: fix lost error handling when replaying directory deletes · 10adb115
      Authored by Filipe Manana
      At replay_dir_deletes(), if find_dir_range() returns an error we break out
      of the main while loop and then assign a value of 0 (success) to the 'ret'
      variable, resulting in completely ignoring that an error happened. Fix
      that by jumping to the 'out' label when find_dir_range() returns an error
      (negative value).
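      
      In other words, a sketch of the fix (argument list abbreviated):
      
        ret = find_dir_range(log, path, dirid, key_type,
                             &range_start, &range_end);
        if (ret < 0)
                goto out;       /* propagate instead of overwriting with 0 */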
      
      CC: stable@vger.kernel.org # 4.4+
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove btrfs_bio::logical member · f4f39fc5
      Authored by Qu Wenruo
      The member btrfs_bio::logical is only initialized by two call sites:
      
      - btrfs_repair_one_sector()
        No corresponding site to utilize it.
      
      - btrfs_submit_direct()
        The corresponding site to utilize it is btrfs_check_read_dio_bio().
      
      However for btrfs_check_read_dio_bio(), we can grab the file_offset from
      btrfs_dio_private::file_offset directly.
      
      Thus it turns out we don't really need that btrfs_bio::logical member at
      all.
      
      For btrfs_bio, the logical bytenr can be fetched from its
      bio->bi_iter.bi_sector directly.
      
      So let's just remove the member to save 8 bytes for structure btrfs_bio.
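      
      So a caller that still needs the logical bytenr can derive it on the
      fly, e.g.:
      
        u64 logical = bio->bi_iter.bi_sector << SECTOR_SHIFT;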
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename btrfs_dio_private::logical_offset to file_offset · 47926ab5
      Authored by Qu Wenruo
      The naming "logical_offset" can be confused with the logical bytenr of
      the dio range.
      
      In fact it's file offset, and the naming "file_offset" is already widely
      used in all other sites.
      
      Just do the rename to avoid confusion.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use bvec_kmap_local in btrfs_csum_one_bio · 3dcfbcce
      Authored by Christoph Hellwig
      Using local kmaps slightly reduces the chances of stray writes, and
      the bvec interface cleans up the code a little bit.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: reduce btrfs_update_block_group alloc argument to bool · 11b66fa6
      Authored by Anand Jain
      btrfs_update_block_group() accounts for the number of bytes allocated or
      freed. Argument @alloc specifies whether the call is for alloc or free.
      Convert the argument @alloc type from int to bool.
      Reviewed-by: Su Yue <l@damenly.su>
      Signed-off-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make btrfs_ref::real_root optional · eed2037f
      Authored by Nikolay Borisov
      Now that real_root is only used in the ref-verify core, gate it behind a
      CONFIG_BTRFS_FS_REF_VERIFY ifdef. This shrinks the size of pending
      delayed refs by 8 bytes per ref, of which we can have many at any one
      time depending on the intensity of the workload. Also change the comment
      about the member as it no longer deals with qgroups.
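      
      Roughly, in struct btrfs_ref (sketch):
      
        struct btrfs_ref {
                /* ... */
        #ifdef CONFIG_BTRFS_FS_REF_VERIFY
                /* Kept only for the ref-verify tool. */
                u64 real_root;
        #endif
                /* ... */
        };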
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: pull up qgroup checks from delayed-ref core to init time · 681145d4
      Authored by Nikolay Borisov
      Instead of checking whether qgroup processing for a delayed ref has to
      happen in the core of delayed refs, simply pull the check up to init
      time of the respective delayed ref structures. This eliminates the final
      use of real_root in the delayed-ref core, paving the way to making this
      member optional.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add additional parameters to btrfs_init_tree_ref/btrfs_init_data_ref · f42c5da6
      Authored by Nikolay Borisov
      In order to make 'real_root' used only in ref-verify, the callers must
      provide the necessary context to perform the same checks that this
      member is used for. So add 'mod_root', which will contain the root on
      behalf of which a delayed ref was created, and a 'skip_group' parameter
      which will contain a callsite-specific override of skip_qgroup.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rely on owning_root field in btrfs_add_delayed_tree_ref to detect CHUNK_ROOT · d55b9e68
      Authored by Nikolay Borisov
      The real_root field is going to be used only by the ref-verify tool, so
      limit its use outside of it. Blocks belonging to the chunk root will
      always have it as an owner, so the check is equivalent.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename root fields in delayed refs structs · 113479d5
      Authored by Nikolay Borisov
      Both data and metadata delayed ref structures have fields named
      root/ref_root respectively. Those are somewhat cryptic and don't really
      convey the real meaning. In fact those roots are really the original
      owners of the respective block (i.e. in the case of a snapshot, a data
      delayed ref will contain the original root that owns the given block).
      Rename those fields accordingly and adjust the comments.
      Signed-off-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: do not infinite loop in data reclaim if we aborted · 0e24f6d8
      Authored by Josef Bacik
      Error injection stressing uncovered a busy loop in our data reclaim
      loop.  There are two cases here: one where we loop creating block groups
      until space_info->full is set, and one in the main loop where we skip
      erroring out any tickets if space_info->full == 0.  Unfortunately if we
      aborted the transaction then we will never allocate chunks or reclaim
      any space and thus never get ->full, and you'll see stack traces like
      this:
      
        watchdog: BUG: soft lockup - CPU#0 stuck for 26s! [kworker/u4:4:139]
        CPU: 0 PID: 139 Comm: kworker/u4:4 Tainted: G        W         5.13.0-rc1+ #328
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
        Workqueue: events_unbound btrfs_async_reclaim_data_space
        RIP: 0010:btrfs_join_transaction+0x12/0x20
        RSP: 0018:ffffb2b780b77de0 EFLAGS: 00000246
        RAX: ffffb2b781863d58 RBX: 0000000000000000 RCX: 0000000000000000
        RDX: 0000000000000801 RSI: ffff987952b57400 RDI: ffff987940aa3000
        RBP: ffff987954d55000 R08: 0000000000000001 R09: ffff98795539e8f0
        R10: 000000000000000f R11: 000000000000000f R12: ffffffffffffffff
        R13: ffff987952b574c8 R14: ffff987952b57400 R15: 0000000000000008
        FS:  0000000000000000(0000) GS:ffff9879bbc00000(0000) knlGS:0000000000000000
        CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
        CR2: 00007f0703da4000 CR3: 0000000113398004 CR4: 0000000000370ef0
        Call Trace:
         flush_space+0x4a8/0x660
         btrfs_async_reclaim_data_space+0x55/0x130
         process_one_work+0x1e9/0x380
         worker_thread+0x53/0x3e0
         ? process_one_work+0x380/0x380
         kthread+0x118/0x140
         ? __kthread_bind_mask+0x60/0x60
         ret_from_fork+0x1f/0x30
      
      Fix this by checking to see if we have a btrfs fs error in either of the
      reclaim loops, and if so fail the tickets and bail.  In addition to
      this, fix maybe_fail_all_tickets() to not try to grant tickets if we've
      aborted, simply fail everything.
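      
      A sketch of the added check, assuming the BTRFS_FS_ERROR() helper from
      this series:
      
        /* Sketch: inside the reclaim loop. */
        if (BTRFS_FS_ERROR(fs_info)) {
                aborted = true;   /* hypothetical local flag */
                break;            /* fail all tickets and bail */
        }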
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: add a BTRFS_FS_ERROR helper · 84961539
      Authored by Josef Bacik
      We have a few flags that are inconsistently used to describe the fs in
      different states of failure.  As of 5963ffca ("btrfs: always abort
      the transaction if we abort a trans handle") we will always set
      BTRFS_FS_STATE_ERROR if we abort, so we don't have to check both ABORTED
      and ERROR to see if things have gone wrong.  Add a helper to check
      BTRFS_FS_STATE_ERROR and then convert all checkers of FS_STATE_ERROR to
      use the helper.
      
      The TRANS_ABORTED bit check was added in af722733 ("Btrfs: clean up
      resources during umount after trans is aborted") but is not actually
      specific.
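      
      The new helper is essentially a one-liner (sketch):
      
        #define BTRFS_FS_ERROR(fs_info) \
                (unlikely(test_bit(BTRFS_FS_STATE_ERROR, &(fs_info)->fs_state)))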
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: change error handling for btrfs_delete_*_in_log · 9a35fc95
      Authored by Josef Bacik
      Currently we will abort the transaction if we get a random error (like
      -EIO) while trying to remove the directory entries from the root log
      during rename.
      
      However since these are simply log tree related errors, we can mark the
      trans as needing a full commit.  Then if the error was truly
      catastrophic we'll hit it during the normal commit and abort as
      appropriate.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: change handle_fs_error in recover_log_trees to aborts · ba51e2a1
      Authored by Josef Bacik
      During inspection of the return path for replay I noticed that we don't
      actually abort the transaction if we get a failure during replay.  This
      isn't a problem necessarily, as we properly return the error and will
      fail to mount.  However we still leave this dangling transaction that
      could conceivably be committed without thinking there was an error.
      
      We were using btrfs_handle_fs_error() here, but that pre-dates the
      transaction abort code.  Simply replace the btrfs_handle_fs_error()
      calls with transaction aborts, so we still know exactly where things
      went wrong, and add a few aborts in some other unhandled error cases.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: use kmemdup() to replace kmalloc + memcpy · 64259baa
      Authored by Kai Song
      Fix memdup.cocci warning:
      fs/btrfs/zoned.c:1198:23-30: WARNING opportunity for kmemdup
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Kai Song <songkai01@inspur.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: only allow compression if the range is fully page aligned · 0cf9b244
      Authored by Qu Wenruo
      For compressed write, we use a mechanism called async COW, which unlike
      regular run_delalloc_cow() or cow_file_range() will also unlock the
      first page.
      
      This mechanism allows us to continue handling next ranges, without
      waiting for the time consuming compression.
      
      But this has a problem for subpage case, as we could have the following
      delalloc range for a page:
      
      0		32K		64K
      |	|///////|	|///////|
      		\- A		\- B
      
      In the above case, if we pass both ranges to cow_file_range_async(),
      both range A and range B will try to unlock the full page [0, 64K).
      
      Whichever one finishes later will then try to do other page operations
      like end_page_writeback() on an unlocked page, triggering a VM layer
      BUG_ON().
      
      To make subpage compression work at least partially, here we add another
      restriction for it: only allow compression if the delalloc range is
      fully page aligned.
      
      By that, the async extent is always ensured to unlock the first page
      exclusively, just as it used to be for regular sectorsize.
      
      In theory, we only need to make sure the delalloc range fully covers its
      first page, but the tail page will be locked anyway, blocking later
      writeback until the compression finishes.
      
      Thus here we choose to make sure the range is fully page aligned before
      doing the compression.
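      
      A sketch of the new restriction:
      
        /* Sketch: for subpage, only compress fully page-aligned ranges. */
        if (fs_info->sectorsize < PAGE_SIZE &&
            (!PAGE_ALIGNED(start) || !PAGE_ALIGNED(end + 1)))
                return false;   /* fall back to regular, uncompressed COW */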
      
      In the future, we could optimize the situation by properly increasing
      the subpage::writers count for the locked page, but that also means we
      need to change how we run the delalloc ranges of a page (instead of
      running each delalloc range we hit, we need to find and lock all
      delalloc ranges covering the page, then run each of them).
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: avoid potential deadlock with compression and delalloc · 2749f7ef
      Authored by Qu Wenruo
      [BUG]
      With experimental subpage compression enabled, a simple fsstress can
      lead to self deadlock on page 720896:
      
              mkfs.btrfs -f -s 4k $dev > /dev/null
              mount $dev -o compress $mnt
              $fsstress -p 1 -n 100 -w -d $mnt -v -s 1625511156
      
      [CAUSE]
      If we have a file layout that looks like below:
      
      	0	32K	64K	96K	128K
      	|//|		|///////////////|
      	   4K
      
      Then we run delalloc range for the inode, it will:
      
      - Call find_lock_delalloc_range() with @delalloc_start = 0
        Then we got a delalloc range [0, 4K).
      
        This range will be COWed.
      
      - Call find_lock_delalloc_range() again with @delalloc_start = 4K
        Since find_lock_delalloc_range() never cares whether the range
        is still inside page range [0, 64K), it will return range [64K, 128K).
      
        This range meets the condition for subpage compression, so it will go
        through the async COW path.
      
        And the async COW path will return @page_started.
      
        But that @page_started is now for range [64K, 128K), not for range
        [0, 64K).
      
      - writepage_delalloc() returned 1 for page [0, 64K)
        Thus page [0, 64K) will not be unlocked, nor will its page dirty
        status be cleared.
      
      Next time when we try to lock page [0, 64K) we will deadlock, as there
      is no one to release page [0, 64K).
      
      This problem will never happen for regular page size, as one page only
      contains one sector.  After the first find_lock_delalloc_range() call,
      @delalloc_end will go beyond @page_end no matter if we found a delalloc
      range or not.
      
      Thus this bug only happens for subpage, as now we need multiple runs to
      exhaust the delalloc range of a page.
      
      [FIX]
      Fix the problem by ensuring the delalloc range we ran at least started
      inside @locked_page.
      
      So that we will never get incorrect @page_started.
      
      And to prevent such problem from happening again:
      
      - Make find_lock_delalloc_range() return false if the found range is
        beyond @end value passed in.
      
        Since @end will be utilized now, add an ASSERT() to ensure we pass
        correct @end into find_lock_delalloc_range().
      
        This also means that for selftests we need to populate @end before
        calling find_lock_delalloc_range() (a sketch of the new check follows
        this list).
      
      - New ASSERT() in find_lock_delalloc_range()
        Now we will make sure the @start/@end passed in at least covers part
        of the page.
      
      - New ASSERT() in run_delalloc_range()
        To make sure the range at least starts inside @locked page.
      
      - Use @delalloc_start as the proper cursor, while @delalloc_end is
        always reset to @page_end.
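      
      The core of the boundary check described above can be sketched as
      follows (variable names approximate):
      
        /* Sketch, inside find_lock_delalloc_range(): */
        ASSERT(orig_end > orig_start);
        /* The range must at least overlap the locked page. */
        ASSERT(orig_start < page_offset(locked_page) + PAGE_SIZE &&
               orig_end > page_offset(locked_page));
        
        if (delalloc_start > orig_end)  /* found range is beyond @end */
                found = false;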
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle page locking in btrfs_page_end_writer_lock with no writers · 164674a7
      Authored by Qu Wenruo
      There are several call sites of extent_clear_unlock_delalloc() which get
      @locked_page = NULL.
      In that case extent_clear_unlock_delalloc() will try to call
      process_one_page() to unlock every page, even if the first page is not
      locked by btrfs_page_start_writer_lock().
      
      This will trigger an ASSERT() in btrfs_subpage_end_and_test_writer(), as
      previously we required every page passed to
      btrfs_subpage_end_and_test_writer() to be locked by
      btrfs_page_start_writer_lock().
      
      But the compression path doesn't go that way.
      
      Thankfully it's not hard to distinguish pages locked by lock_page() from
      those locked by btrfs_page_start_writer_lock().
      
      So do the check in btrfs_subpage_end_and_test_writer(), so that it can
      now handle both cases well.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rework page locking in __extent_writepage() · e55a0de1
      Authored by Qu Wenruo
      Pages passed to __extent_writepage() are always locked, but they may be
      locked by different functions.
      
      There are two types of locked page for __extent_writepage():
      
      - Page locked by plain lock_page()
        It should not have any subpage::writers count.
        Can be unlocked by unlock_page().
        This is the most common locked page for __extent_writepage() called
        inside extent_write_cache_pages() or extent_write_full_page().
        Rarer cases include the @locked_page from extent_write_locked_range().
      
      - Page locked by lock_delalloc_pages()
        There is only one caller; this covers all pages except @locked_page
        for extent_write_locked_range().
        In this case, we have to call the subpage helper to handle it.
      
      So here we introduce a helper, btrfs_page_unlock_writer(), to allow
      __extent_writepage() to unlock different locked pages.
      
      And since the pages of all other callers of __extent_writepage() are
      ensured to be locked by lock_page(), also add an extra check for
      epd::extent_locked to unlock such pages directly.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make lzo_compress_pages() compatible · d4088803
      Authored by Qu Wenruo
      There are several problems in lzo_compress_pages() preventing it from
      being subpage compatible:
      
      - No page offset is calculated when reading from inode pages
        For subpage case, we could have @start which is not aligned to
        PAGE_SIZE.
      
        Thus the location we read data from must take the offset within the
        page into consideration.
      
      - The padding for segment header is bound to PAGE_SIZE
        This means, for subpage case we can skip several corners where on x86
        machines we need to add padding zeros.
      
      The rework will:
      
      - Update the comment to replace "page" with "sector"
      
      - Introduce a new helper, copy_compressed_data_to_page(), to do the copy
        So that we don't need to bother page switching for both input and
        output.
      
        Now in lzo_compress_pages() we only care about page switching for
        input, while in copy_compressed_data_to_page() we only care about the
        page switching for output.
      
      - Only one main cursor
        For lzo_compress_pages() we use @cur_in as the main cursor.
        It will be the file offset we are currently at.
      
        All other helper variables will be only declared inside the loop.
      
        For copy_compressed_data_to_page() it's similar: we will have
        @cur_out as the main cursor, which records how many bytes are in the
        output.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: factor uncompressed async extent submission code into a new helper · 2b83a0ee
      Authored by Qu Wenruo
      Introduce a new helper, submit_uncompressed_range(), for async cow cases
      where we fall back to COW.
      
      There are some new updates introduced to the helper:
      
      - Proper locked_page detection
        It's possible that the async_extent range doesn't cover the locked
        page.  In that case we shouldn't unlock the locked page.
      
        In the new helper, we will ensure that we only unlock the locked page
        when:
      
        * The locked page covers part of the async_extent range
        * The locked page is not unlocked by cow_file_range() nor
          extent_write_locked_range()
      
        This also means extra comments are added focusing on the page locking.
      
      - Add an extra comment on a rarely used parameter.
        We use @unlock_page = 0 for cow_file_range(), where only two call
        sites do the same thing, including the new helper.
      
        It's definitely worth some comments.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make extent_write_locked_range() compatible · 66448b9d
      Authored by Qu Wenruo
      There are two sites that are not yet subpage compatible in
      extent_write_locked_range():
      
      - How @nr_pages are calculated
        For subpage we can have the following range with 64K page size:
      
        0   32K  64K   96K 128K
        |   |////|/////|   |
      
        In that case, although 96K - 32K == 64K and it looks like one page is
        enough, the range spans two pages, not one; see the sketch after this
        list.
      
        Fix it by doing proper round_up() and round_down() to calculate
        @nr_pages.
      
        Also add some extra ASSERT()s to ensure the range passed in is already
        aligned.
      
      - How the page end is calculated
        Currently we just use cur + PAGE_SIZE - 1 to calculate the page end.
      
        This can't handle the above range layout, and will trigger an ASSERT()
        in btrfs_writepage_endio_finish_ordered(), as the range is no longer
        covered by the page range.
      
        Fix it by taking page end into consideration.
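      
      The @nr_pages fix amounts to (sketch):
      
        /* Sketch: count the pages the range spans, not bytes / PAGE_SIZE. */
        const u64 page_start = round_down(start, PAGE_SIZE);
        const u64 page_end = round_up(end + 1, PAGE_SIZE) - 1;
        const unsigned long nr_pages = (page_end + 1 - page_start) >> PAGE_SHIFT;
        
        ASSERT(IS_ALIGNED(start, fs_info->sectorsize));
        ASSERT(IS_ALIGNED(end + 1, fs_info->sectorsize));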
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make end_compressed_bio_writeback() compatible · 741ec653
      Authored by Qu Wenruo
      In end_compressed_writeback() we just clear writeback on the full page.
      For the subpage case, if there are two delalloc ranges in the same page,
      the 2nd range will trigger a BUG_ON() as the page writeback was already
      cleared by the previous range.
      
      Fix it by using the btrfs_page_clamp_clear_writeback() helper.
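      
      i.e. (sketch, assuming the existing subpage helper and the compressed_bio
      fields):
      
        btrfs_page_clamp_clear_writeback(fs_info, page, cb->start, cb->len);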
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make btrfs_submit_compressed_write() compatible · bbbff01a
      Authored by Qu Wenruo
      There is a WARN_ON() checking if @start is aligned to PAGE_SIZE, not
      sectorsize, which will cause false alerts for subpage.  Fix it to check
      against sectorsize, as sketched below.
      
      Furthermore:
      
      - Use ASSERT() to do the check
        So that in the future we may skip the check for production build
      
      - Also check alignment for @len
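      
      The check then becomes (sketch):
      
        ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
               IS_ALIGNED(len, fs_info->sectorsize));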
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make compress_file_range() compatible · 4c162778
      Authored by Qu Wenruo
      In function compress_file_range(), when the compression is finished, the
      function just rounds up @total_in to PAGE_SIZE.  This is fine for
      regular sectorsize == PAGE_SIZE case, but not for subpage.
      
      Just change the ALIGN(, PAGE_SIZE) to round_up(, sectorsize) so that
      both regular sectorsize and subpage sectorsize will be happy.
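      
      i.e. the one-line change (sketch):
      
        /* was: total_in = ALIGN(total_in, PAGE_SIZE); */
        total_in = round_up(total_in, fs_info->sectorsize);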
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: cleanup for extent_write_locked_range() · 2bd0fc93
      Authored by Qu Wenruo
      There are several cleanups for extent_write_locked_range(), most of them
      are pure cleanups, but with some preparation for future subpage support.
      
      - Add a proper comment for which call sites are suitable
        Unlike regular synchronized extent write back, if async COW or zoned
        COW happens, we have all pages in the range still locked.
      
        Thus for those (only) two call sites, we need this function to submit
        page content into bios and submit them.
      
      - Remove @mode parameter
        Both existing call sites pass WB_SYNC_ALL, so there is no need for a
        @mode parameter.
      
      - Better error handling
        Currently if we hit an error during the page iteration loop, we
        overwrite @ret, so only the last error can be recorded.
      
        Here we add @found_error and @first_error variables to record whether
        we hit any error, and the first error we hit, so the first error won't
        get lost.
      
      - Don't reuse @start as the cursor
        We were reusing the parameter @start as the cursor to iterate the
        range; not a big problem, but since we're here, introduce a proper
        @cur as the cursor.
      
      - Remove impossible branch
        Since all pages are still locked after the ordered extent is inserted,
        there is no way that a page can get its dirty bit cleared.
        Remove the branch where the page is not dirty and replace it with an
        ASSERT().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: refactor submit_compressed_extents() · b4ccace8
      Authored by Qu Wenruo
      We have a big chunk of code inside a while() loop, with tons of strange
      jumps for error handling.  It's definitely not up to today's coding
      standards.  Move the code into a new function, submit_one_async_extent().
      
      Since we're here, also do the following changes:
      
      - Comment style change
        To follow the current scheme
      
      - Don't fall back to non-compressed write when hitting ENOSPC
        If we hit ENOSPC for a compressed write, how could we reserve more
        space for a non-compressed write?
        Thus we go down the error path directly.
        This removes the retry: label.
      
      - Add more comments for the super long parameter list
        Explain what each parameter is for, so we don't need to check the
        prototype.
      
      - Move the error handling to submit_one_async_extent()
        Thus no strange code like:
      
        out_free:
      	...
      	goto again;
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unused function btrfs_bio_fits_in_stripe() · 6aabd858
      Authored by Qu Wenruo
      As the last caller in compression.c has been removed, we don't need that
      function anymore.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_write · 91507240
      Authored by Qu Wenruo
      Currently btrfs_submit_compressed_write() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do that
      for every page.
      
      Align the behavior to extent_io.c, by determining the stripe boundary
      when allocating a bio.
      
      Unlike extent_io.c, in compression.c we don't need to bother with things
      like different bio flags, thus no need to re-use bio_ctrl.
      
      Here we just manually introduce a new local variable, next_stripe_start,
      and use the value returned from alloc_compressed_bio() to calculate
      the stripe boundary.
      
      Then each time we add some page range into the bio, we check if we
      reached the boundary.  And if reached, submit it.
      
      Also, since we have @cur_disk_bytenr to determine whether we're at the
      last bio, we don't need an explicit last_bio: tag for error handling any
      more.
      
      And since we use @cur_disk_bytenr to wait, there is no need for
      pending_bios either; remove it to save some memory in compressed_bio.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_read · f472c28f
      Authored by Qu Wenruo
      Currently btrfs_submit_compressed_read() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do that
      for every page.
      
      This patch will align the behavior to extent_io.c, by determining the
      stripe boundary when allocating a bio.
      
      Unlike extent_io.c, in compression.c we don't need to bother with things
      like different bio flags, thus no need to re-use bio_ctrl.
      
      Here we just manually introduce a new local variable, next_stripe_start,
      and teach alloc_compressed_bio() to calculate the stripe boundary.
      
      Then each time we add some page range into the bio, we check if we
      reached the boundary.  And if reached, submit it.
      
      Also, since we have @cur_disk_byte to determine whether we're at the
      last bio, we don't need an explicit last_bio: tag for error handling any
      more.
      
      And since we can use @cur_disk_byte to track which ranges have been
      added to the bio, we can also use it to calculate the wait condition,
      with no need for @pending_bios.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce alloc_compressed_bio() for compression · 22c306fe
      Authored by Qu Wenruo
      Just aggregate the bio allocation code into one helper, so that we can
      replace 4 call sites.
      
      There is one special note for zoned write.
      
      Currently btrfs_submit_compressed_write() will only allocate the first
      bio using ZONE_APPEND.  If we have to submit the current bio due to a
      stripe boundary, the newly allocated bio will not use ZONE_APPEND.
      
      In theory this should be a bug, but considering that zoned mode
      currently only supports the SINGLE profile, which doesn't have any
      stripe boundary limit, it should never be a problem and we have
      assertions in place.
      
      This function will provide a good entrance for any work which needs to
      be done at bio allocation time. Like determining the stripe boundary.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce submit_compressed_bio() for compression · 2d4e0b84
      Authored by Qu Wenruo
      The new helper, submit_compressed_bio(), will aggregate the following
      work:
      
      - Increase compressed_bio::pending_bios
      - Remap the endio function
      - Map and submit the bio
      
      This slightly reorders the calls to btrfs_csum_one_bio() and
      btrfs_lookup_bio_sums(), but none of them does anything regarding IO
      submission, so this is effectively no change. We mainly care about the
      order of:
      
      - atomic_inc
      - btrfs_bio_wq_end_io
      - btrfs_map_bio
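      
      A sketch of the helper, assuming the pre-existing interfaces named
      above:
      
        static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
                                                  struct compressed_bio *cb,
                                                  struct bio *bio, int mirror_num)
        {
                blk_status_t ret;
        
                /* Take a ref on cb before handing the bio off. */
                atomic_inc(&cb->pending_bios);
                ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
                if (ret)
                        return ret;
                return btrfs_map_bio(fs_info, bio, mirror_num);
        }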
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly inside btrfs_submit_compressed_write() · 6853c64a
      Authored by Qu Wenruo
      Just like btrfs_submit_compressed_read(), there are quite some BUG_ON()s
      inside btrfs_submit_compressed_write() for the bio submission path.
      
      Fix them using the same method:
      
      - For the last bio, just endio the bio
        As in that case, one of the endio functions of all these submitted
        bios will be able to free the compressed_bio
      
      - For a half-submitted bio, wait and finish the compressed_bio manually
        In this case, as long as all the other bios finish, we're the only one
        referring to the compressed_bio, and can finish it manually.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly inside btrfs_submit_compressed_read() · 86ccbb4d
      Authored by Qu Wenruo
      There are quite some BUG_ON()s inside btrfs_submit_compressed_read();
      namely, all errors inside the for() loop rely on BUG_ON() to handle
      -ENOMEM.
      
      Handle these errors properly by:
      
      - Wait for submitted bios to finish first
        Using the wait_var_event() APIs to wait without introducing extra
        memory overhead inside compressed_bio.
        This allows us to wait for any submitted bio to finish, while still
        keeping the compressed_bio from being freed.
      
      - Introduce finish_compressed_bio_read() to finish the compressed_bio
      
      - Properly end the bio and finish compressed_bio when error happens
      
      Now in btrfs_submit_compressed_read() even when the bio submission
      failed, we can properly handle the error without triggering BUG_ON().
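      
      The waiting can be sketched with the wait_var_event()/wake_up_var()
      pair, which needs no extra waitqueue stored in compressed_bio (field
      names approximate):
      
        /* Submitter: wait for all already-submitted bios to complete. */
        wait_var_event(&cb->pending_bios,
                       atomic_read(&cb->pending_bios) == 0);
        
        /* Endio side: after dropping a reference, wake any waiter. */
        if (atomic_dec_and_test(&cb->pending_bios))
                wake_up_var(&cb->pending_bios);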
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>