27 Oct 2021, 40 commits
    • btrfs: add a BTRFS_FS_ERROR helper · 84961539
      Committed by Josef Bacik
      We have a few flags that are inconsistently used to describe the fs in
      different states of failure.  As of 5963ffca ("btrfs: always abort
      the transaction if we abort a trans handle") we will always set
      BTRFS_FS_STATE_ERROR if we abort, so we don't have to check both ABORTED
      and ERROR to see if things have gone wrong.  Add a helper to check
      BTRFS_FS_STATE_ERROR and then convert all checkers of FS_STATE_ERROR to
      use the helper.
      
      The TRANS_ABORTED bit check was added in af722733 ("Btrfs: clean up
      resources during umount after trans is aborted") but is not actually
      specific.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
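
      A minimal sketch of such a helper, assuming the error bit lives in
      fs_info->fs_state as described above (the exact macro body in the
      patch may differ):

        #define BTRFS_FS_ERROR(fs_info)                                \
                (unlikely(test_bit(BTRFS_FS_STATE_ERROR,               \
                                   &(fs_info)->fs_state)))

        /* Callers then collapse to a single check: */
        if (BTRFS_FS_ERROR(fs_info))
                return -EROFS;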
    • btrfs: change error handling for btrfs_delete_*_in_log · 9a35fc95
      Committed by Josef Bacik
      Currently we will abort the transaction if we get a random error (like
      -EIO) while trying to remove the directory entries from the root log
      during rename.
      
      However since these are simply log tree related errors, we can mark the
      trans as needing a full commit.  Then if the error was truly
      catastrophic we'll hit it during the normal commit and abort as
      appropriate.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
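
      A hedged sketch of the resulting pattern; do_log_tree_deletion() is
      a hypothetical stand-in for the log tree removal, and
      btrfs_set_log_full_commit() is the existing helper that forces a
      full transaction commit:

        ret = do_log_tree_deletion(trans, dir, index); /* hypothetical */
        if (ret < 0 && ret != -ENOENT) {
                /*
                 * Log tree errors are not fatal on their own: mark the
                 * log for a full transaction commit instead of aborting.
                 * A truly catastrophic error resurfaces at that commit.
                 */
                btrfs_set_log_full_commit(trans);
                ret = 0;
        }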
    • btrfs: change handle_fs_error in recover_log_trees to aborts · ba51e2a1
      Committed by Josef Bacik
      During inspection of the return path for replay I noticed that we don't
      actually abort the transaction if we get a failure during replay.  This
      isn't a problem necessarily, as we properly return the error and will
      fail to mount.  However we still leave this dangling transaction that
      could conceivably be committed without thinking there was an error.
      
      We were using btrfs_handle_fs_error() here, but that pre-dates the
      transaction abort code.  Simply replace the btrfs_handle_fs_error()
      calls with transaction aborts, so we still know exactly where things
      went wrong, and add a few more aborts in other unhandled error cases.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: zoned: use kmemdup() to replace kmalloc + memcpy · 64259baa
      Committed by Kai Song
      Fix memdup.cocci warning:
      fs/btrfs/zoned.c:1198:23-30: WARNING opportunity for kmemdup
      Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
      Signed-off-by: Kai Song <songkai01@inspur.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
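
      The pattern the coccinelle warning points at, with hypothetical
      names (dst/src/len stand in for the actual buffer in
      fs/btrfs/zoned.c):

        /* Before: allocate, check, then copy. */
        dst = kmalloc(len, GFP_KERNEL);
        if (!dst)
                return -ENOMEM;
        memcpy(dst, src, len);

        /* After: one call with the same semantics. */
        dst = kmemdup(src, len, GFP_KERNEL);
        if (!dst)
                return -ENOMEM;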
    • btrfs: subpage: only allow compression if the range is fully page aligned · 0cf9b244
      Committed by Qu Wenruo
      For compressed write, we use a mechanism called async COW, which unlike
      regular run_delalloc_cow() or cow_file_range() will also unlock the
      first page.
      
      This mechanism allows us to continue handling the next ranges without
      waiting for the time-consuming compression.
      
      But this has a problem for subpage case, as we could have the following
      delalloc range for a page:
      
      0		32K		64K
      |	|///////|	|///////|
      		\- A		\- B
      
      In the above case, if we pass both ranges to cow_file_range_async(),
      both range A and range B will try to unlock the full page [0, 64K).
      
      Whichever range finishes later will then try other page operations,
      such as end_page_writeback(), on an already unlocked page, triggering
      a VM layer BUG_ON().
      
      To make subpage compression work at least partially, add another
      restriction: only allow compression if the delalloc range is fully
      page aligned.
      
      By that, async extent is always ensured to unlock the first page
      exclusively, just like it used to be for regular sectorsize.
      
      In theory, we only need to make sure the delalloc range fully covers its
      first page, but the tail page will be locked anyway, blocking later
      writeback until the compression finishes.
      
      Thus here we choose to make sure the range is fully page aligned before
      doing the compression.
      
      In the future, we could optimize the situation by properly increasing
      subpage::writers number for the locked page, but that also means we need
      to change how we run delalloc range of page.
      (Instead of running each delalloc range we hit, we need to find and lock
      all delalloc ranges covering the page, then run each of them).
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
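
      A sketch of the added restriction, assuming an inclusive @end as in
      the surrounding delalloc code (the exact placement in the patch may
      differ):

        /*
         * Only compress when the delalloc range is fully page aligned,
         * so the async extent owns its first page exclusively and no
         * two ranges race to unlock it.
         */
        if (!PAGE_ALIGNED(start) || !PAGE_ALIGNED(end + 1))
                return 0;       /* fall back to the regular COW path */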
    • btrfs: subpage: avoid potential deadlock with compression and delalloc · 2749f7ef
      Committed by Qu Wenruo
      [BUG]
      With experimental subpage compression enabled, a simple fsstress can
      lead to self deadlock on page 720896:
      
              mkfs.btrfs -f -s 4k $dev > /dev/null
              mount $dev -o compress $mnt
              $fsstress -p 1 -n 100 -w -d $mnt -v -s 1625511156
      
      [CAUSE]
      If we have a file layout like the one below:
      
      	0	32K	64K	96K	128K
      	|//|		|///////////////|
      	   4K
      
      Then we run the delalloc range for the inode, and it will:
      
      - Call find_lock_delalloc_range() with @delalloc_start = 0
        Then we got a delalloc range [0, 4K).
      
        This range will be COWed.
      
      - Call find_lock_delalloc_range() again with @delalloc_start = 4K
        Since find_lock_delalloc_range() never cares whether the range
        is still inside page range [0, 64K), it will return range [64K, 128K).
      
        This range meets the condition for subpage compression and will go
        through the async COW path.
      
        And async COW path will return @page_started.
      
        But that @page_started is now for range [64K, 128K), not for range
        [0, 64K).
      
      - writepage_delalloc() returned 1 for page [0, 64K)
        Thus page [0, 64K) will not be unlocked, nor its page dirty status
        will be cleared.
      
      Next time when we try to lock page [0, 64K) we will deadlock, as there
      is no one to release page [0, 64K).
      
      This problem never happens for regular page size, as one page only
      contains one sector.  After the first find_lock_delalloc_range() call,
      @delalloc_end will go beyond @page_end whether or not we found a
      delalloc range.
      
      Thus this bug only happens for subpage, as now we need multiple runs to
      exhaust the delalloc range of a page.
      
      [FIX]
      Fix the problem by ensuring the delalloc range we run at least starts
      inside @locked_page, so that we never get an incorrect @page_started.
      
      And to prevent such problem from happening again:
      
      - Make find_lock_delalloc_range() return false if the found range is
        beyond @end value passed in.
      
        Since @end will be utilized now, add an ASSERT() to ensure we pass
        correct @end into find_lock_delalloc_range().
      
        This also means, for selftests, we need to populate @end before
        calling find_lock_delalloc_range().
      
      - New ASSERT() in find_lock_delalloc_range()
        Now we will make sure the @start/@end passed in at least covers part
        of the page.
      
      - New ASSERT() in run_delalloc_range()
        To make sure the range at least starts inside @locked_page.
      
      - Use @delalloc_start as proper cursor, while @delalloc_end is always
        reset to @page_end.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
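
      A sketch of the new checks, using the cursor names from the
      changelog (approximate, not the literal patch):

        /* @start/@end must cover at least part of the locked page. */
        ASSERT(orig_start < orig_end);

        /* Reject a found range that starts beyond the passed @end. */
        if (delalloc_start > orig_end)
                return false;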
    • btrfs: handle page locking in btrfs_page_end_writer_lock with no writers · 164674a7
      Committed by Qu Wenruo
      There are several call sites of extent_clear_unlock_delalloc() which
      get @locked_page = NULL.  In that case extent_clear_unlock_delalloc()
      will call process_one_page() to unlock every page, even though the
      first page is not locked by btrfs_page_start_writer_lock().
      
      This will trigger an ASSERT() in btrfs_subpage_end_and_test_writer() as
      previously we require every page passed to
      btrfs_subpage_end_and_test_writer() to be locked by
      btrfs_page_start_writer_lock().
      
      But compression path doesn't go that way.
      
      Thankfully it's not hard to distinguish pages locked by lock_page()
      from those locked by btrfs_page_start_writer_lock().
      
      So do the check in btrfs_subpage_end_and_test_writer() so now it can
      handle both cases well.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rework page locking in __extent_writepage() · e55a0de1
      Committed by Qu Wenruo
      Pages passed to __extent_writepage() are always locked, but they may be
      locked by different functions.
      
      There are two types of locked page for __extent_writepage():
      
      - Page locked by plain lock_page()
        It should not have any subpage::writers count.
        Can be unlocked by unlock_page().
        This is the most common locked page for __extent_writepage() called
        inside extent_write_cache_pages() or extent_write_full_page().
        Rarer cases include the @locked_page from extent_write_locked_range().
      
      - Page locked by lock_delalloc_pages()
        There is only one caller, all pages except @locked_page for
        extent_write_locked_range().
        In this case, we have to call the subpage helper to handle it.
      
      So here we introduce a helper, btrfs_page_unlock_writer(), to allow
      __extent_writepage() to unlock different locked pages.
      
      And since for all other callers of __extent_writepage() their pages are
      ensured to be locked by lock_page(), also add an extra check for
      epd::extent_locked to unlock such pages directly.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make lzo_compress_pages() compatible · d4088803
      Committed by Qu Wenruo
      There are several problems in lzo_compress_pages() preventing it from
      being subpage compatible:
      
      - No page offset is calculated when reading from inode pages
        For subpage case, we could have @start which is not aligned to
        PAGE_SIZE.
      
        Thus, when reading from the inode pages, we must take the offset
        within the page into consideration.
      
      - The padding for segment header is bound to PAGE_SIZE
        This means, for the subpage case, we can skip several corner cases
        where on x86 machines we need to add padding zeros.
      
      The rework will:
      
      - Update the comment to replace "page" with "sector"
      
      - Introduce a new helper, copy_compressed_data_to_page(), to do the copy
        So that we don't need to bother page switching for both input and
        output.
      
        Now in lzo_compress_pages() we only care about page switching for
        input, while in copy_compressed_data_to_page() we only care about the
        page switching for output.
      
      - Only one main cursor
        For lzo_compress_pages() we use @cur_in as main cursor.
        It will be the file offset we are currently at.
      
        All other helper variables will be only declared inside the loop.
      
        For copy_compressed_data_to_page() it's similar: we will have
        @cur_out as the main cursor, which records how many bytes are in
        the output.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: factor uncompressed async extent submission code into a new helper · 2b83a0ee
      Committed by Qu Wenruo
      Introduce a new helper, submit_uncompressed_range(), for async cow cases
      where we fallback to COW.
      
      There are some new updates introduced to the helper:
      
      - Proper locked_page detection
        It's possible that the async_extent range doesn't cover the locked
        page.  In that case we shouldn't unlock the locked page.
      
        In the new helper, we will ensure that we only unlock the locked page
        when:
      
        * The locked page covers part of the async_extent range
        * The locked page is not unlocked by cow_file_range() nor
          extent_write_locked_range()
      
        This also means extra comments are added focusing on the page locking.
      
      - Add an extra comment on a rarely used parameter
        We use @unlock_page = 0 for cow_file_range(), where only two call
        sites do the same thing, including the new helper.
      
        It's definitely worth some comments.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make extent_write_locked_range() compatible · 66448b9d
      Committed by Qu Wenruo
      There are two spots in extent_write_locked_range() that are not yet
      subpage compatible:
      
      - How @nr_pages are calculated
        For subpage we can have the following range with 64K page size:
      
        0   32K  64K   96K 128K
        |   |////|/////|   |
      
        In that case, although 96K - 32K == 64K and it looks like one page
        is enough, the range actually spans two pages, not one.
      
        Fix it by doing proper round_up() and round_down() to calculate
        @nr_pages.
      
        Also add some extra ASSERT()s to ensure the range passed in is already
        aligned.
      
      - How the page end is calculated
        Currently we just use cur + PAGE_SIZE - 1 to calculate the page end.
      
        Which can't handle the above range layout, and will trigger ASSERT()
        in btrfs_writepage_endio_finish_ordered(), as the range is no longer
        covered by the page range.
      
        Fix it by taking page end into consideration.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
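
      A sketch of the corrected page count, rounding the range out to page
      boundaries before dividing:

        const unsigned long nr_pages = (round_up(end + 1, PAGE_SIZE) -
                                        round_down(start, PAGE_SIZE)) >>
                                       PAGE_SHIFT;

      For the 32K-96K range in the diagram above this yields 2 pages, not
      the 1 page that (96K - 32K) / PAGE_SIZE would suggest.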
    • btrfs: subpage: make end_compressed_bio_writeback() compatible · 741ec653
      Committed by Qu Wenruo
      In end_compressed_writeback() we just clear the full page writeback.
      For subpage case, if there are two delalloc ranges in the same page, the
      2nd range will trigger a BUG_ON() as the page writeback is already
      cleared by previous range.
      
      Fix it by using btrfs_page_clamp_clear_writeback() helper.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
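
      The fix in sketch form; btrfs_page_clamp_clear_writeback() clamps
      the range to the page before touching the subpage bitmap:

        /* Before: the second range in the page hits an already
         * cleared writeback bit. */
        end_page_writeback(page);

        /* After: only clear writeback for the bytes this range covers. */
        btrfs_page_clamp_clear_writeback(fs_info, page, start, len);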
    • btrfs: subpage: make btrfs_submit_compressed_write() compatible · bbbff01a
      Committed by Qu Wenruo
      There is a WARN_ON() checking that @start is aligned to PAGE_SIZE,
      not sectorsize, which will cause a false alert for subpage.  Fix it
      to check against sectorsize.
      
      Furthermore:
      
      - Use ASSERT() to do the check
        So that in the future we may skip the check for production builds
      
      - Also check alignment for @len
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
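
      The alignment check after the change, in sketch form:

        /* Check @start and @len against sectorsize, not PAGE_SIZE. */
        ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
               IS_ALIGNED(len, fs_info->sectorsize));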
    • btrfs: subpage: make compress_file_range() compatible · 4c162778
      Committed by Qu Wenruo
      In function compress_file_range(), when the compression is finished, the
      function just rounds up @total_in to PAGE_SIZE.  This is fine for
      regular sectorsize == PAGE_SIZE case, but not for subpage.
      
      Just change the ALIGN(, PAGE_SIZE) to round_up(, sectorsize) so that
      both regular sectorsize and subpage sectorsize will be happy.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
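
      The change in sketch form:

        /* Before: only correct when sectorsize == PAGE_SIZE. */
        total_in = ALIGN(total_in, PAGE_SIZE);

        /* After: correct for both regular and subpage sectorsize. */
        total_in = round_up(total_in, fs_info->sectorsize);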
    • btrfs: cleanup for extent_write_locked_range() · 2bd0fc93
      Committed by Qu Wenruo
      There are several cleanups for extent_write_locked_range(), most of them
      are pure cleanups, but with some preparation for future subpage support.
      
      - Add a proper comment for which call sites are suitable
        Unlike regular synchronized extent write back, if async COW or zoned
        COW happens, we have all pages in the range still locked.
      
        Thus for those (only) two call sites, we need this function to submit
        page content into bios and submit them.
      
      - Remove @mode parameter
        All the existing two call sites pass WB_SYNC_ALL. No need for @mode
        parameter.
      
      - Better error handling
        Currently if we hit an error during the page iteration loop, we
        overwrite @ret, so only the last error gets recorded.
      
        Here we add @found_error and @first_error variables to record
        whether we hit any error, and the first error we hit, so the first
        error won't get lost (see the sketch after this entry).
      
      - Don't reuse @start as the cursor
        We reuse the parameter @start as the cursor to iterate the range, not
        a big problem, but since we're here, introduce a proper @cur as the
        cursor.
      
      - Remove impossible branch
        Since all pages are still locked after the ordered extent is inserted,
        there is no way that pages can get its dirty bit cleared.
        Remove the branch where page is not dirty and replace it with an
        ASSERT().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
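
      A sketch of the first-error bookkeeping, using the variable names
      from the changelog (loop details approximate):

        bool found_error = false;
        int first_error = 0;

        while (cur <= end) {
                ret = __extent_writepage(page, &wbc_writepages, &epd);
                if (ret < 0 && !found_error) {
                        found_error = true;
                        first_error = ret;  /* keep only the first error */
                }
                cur += PAGE_SIZE;
        }
        return found_error ? first_error : 0;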
    • btrfs: refactor submit_compressed_extents() · b4ccace8
      Committed by Qu Wenruo
      We have a big chunk of code inside a while() loop, with tons of
      strange jumps for error handling.  It's definitely not up to today's
      coding standards.  Move the code into a new function,
      submit_one_async_extent().
      
      Since we're here, also do the following changes:
      
      - Comment style change
        To follow the current scheme
      
      - Don't fall back to non-compressed write when hitting ENOSPC
        If we hit ENOSPC for a compressed write, how could we reserve more
        space for a non-compressed write?
        Thus we go down the error path directly.
        This removes the retry: label.
      
      - Add more comments for the super long parameter list
        Explain what each parameter is for, so we don't need to check the
        prototype.
      
      - Move the error handling to submit_one_async_extent()
        Thus no strange code like:
      
        out_free:
      	...
      	goto again;
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unused function btrfs_bio_fits_in_stripe() · 6aabd858
      Committed by Qu Wenruo
      As the last caller in compression.c has been removed, we don't need that
      function anymore.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_write · 91507240
      Committed by Qu Wenruo
      Currently btrfs_submit_compressed_write() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do
      that for every page.
      
      Align the behavior to extent_io.c, by determining the stripe boundary
      when allocating a bio.
      
      Unlike extent_io.c, in compressed.c we don't need to bother things like
      different bio flags, thus no need to re-use bio_ctrl.
      
      Here we just manually introduce a new local variable,
      next_stripe_start, and use the value returned from
      alloc_compressed_bio() to calculate the stripe boundary.
      
      Then each time we add some page range into the bio, we check if we
      reached the boundary.  And if reached, submit it.
      
      Also, since we have @cur_disk_bytenr to determine whether we're at
      the last bio, we don't need an explicit last_bio: tag for error
      handling any more.
      
      And since we use @cur_disk_bytenr to wait, there is no need for
      pending_bios; remove it to save some memory in compressed_bio.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_read · f472c28f
      Committed by Qu Wenruo
      Currently btrfs_submit_compressed_read() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do
      that for every page.
      
      This patch will align the behavior to extent_io.c, by determining the
      stripe boundary when allocating a bio.
      
      Unlike extent_io.c, in compressed.c we don't need to bother things like
      different bio flags, thus no need to re-use bio_ctrl.
      
      Here we just manually introduce a new local variable,
      next_stripe_start, and teach alloc_compressed_bio() to calculate the
      stripe boundary.
      
      Then each time we add some page range into the bio, we check if we
      reached the boundary.  And if reached, submit it.
      
      Also, since we have @cur_disk_byte to determine whether we're at the
      last bio, we don't need an explicit last_bio: tag for error handling
      any more.
      
      And since we can use @cur_disk_byte both to track which range has
      been added to the bio and to calculate the wait condition, there is
      no need for @pending_bios.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce alloc_compressed_bio() for compression · 22c306fe
      Committed by Qu Wenruo
      Just aggregate the bio allocation code into one helper, so that we can
      replace 4 call sites.
      
      There is one special note for zoned write.
      
      Currently btrfs_submit_compressed_write() will only allocate the
      first bio using ZONE_APPEND.  If we have to submit the current bio
      due to a stripe boundary, the new bio allocated will not use
      ZONE_APPEND.
      
      In theory this should be a bug, but considering zoned mode currently
      only supports the SINGLE profile, which doesn't have any stripe
      boundary limit, it should never be a problem and we have assertions
      in place.
      
      This function will provide a good entry point for any work which
      needs to be done at bio allocation time, like determining the stripe
      boundary.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce submit_compressed_bio() for compression · 2d4e0b84
      Committed by Qu Wenruo
      The new helper, submit_compressed_bio(), will aggregate the following
      work:
      
      - Increase compressed_bio::pending_bios
      - Remap the endio function
      - Map and submit the bio
      
      This slightly reorders the calls to btrfs_csum_one_bio or
      btrfs_lookup_bio_sums, but none of them does anything regarding IO
      submission, so this is effectively no change.  We mainly care about
      the order of:
      
      - atomic_inc
      - btrfs_bio_wq_end_io
      - btrfs_map_bio
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
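
      A sketch of the aggregated helper, assuming its signature follows
      the call sites it replaces (the part that matters is the order of
      the three calls):

        static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
                                                  struct compressed_bio *cb,
                                                  struct bio *bio, int mirror_num)
        {
                blk_status_t ret;

                atomic_inc(&cb->pending_bios);
                ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
                if (ret)
                        return ret;
                return btrfs_map_bio(fs_info, bio, mirror_num);
        }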
    • btrfs: handle errors properly inside btrfs_submit_compressed_write() · 6853c64a
      Committed by Qu Wenruo
      Just like btrfs_submit_compressed_read(), there are quite a few
      BUG_ON()s inside btrfs_submit_compressed_write() for the bio
      submission path.
      
      Fix them using the same method:
      
      - For the last bio, just end it
        In that case, one of the endio functions of all the submitted bios
        will be able to free the compressed_bio.
      
      - For a half-submitted bio, wait and finish the compressed_bio
        manually
        In this case, as long as all the other bios finish, we're the only
        one referring to the compressed_bio, and can manually finish it.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly inside btrfs_submit_compressed_read() · 86ccbb4d
      Committed by Qu Wenruo
      There are quite a few BUG_ON()s inside btrfs_submit_compressed_read();
      namely, all errors inside the for() loop rely on BUG_ON() to handle
      -ENOMEM.
      
      Handle these errors properly by:
      
      - Wait for submitted bios to finish first
        Using the wait_var_event() APIs to wait without introducing extra
        memory overhead inside compressed_bio.
        This allows us to wait for any submitted bio to finish, while still
        keeping the compressed_bio from being freed.
      
      - Introduce finish_compressed_bio_read() to finish the compressed_bio
      
      - Properly end the bio and finish the compressed_bio when an error
        happens
      
      Now in btrfs_submit_compressed_read(), even when the bio submission
      fails, we can properly handle the error without triggering BUG_ON().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: add bitmap for PageChecked flag · e4f94347
      Committed by Qu Wenruo
      Although in btrfs we have very limited usage of the PageChecked flag,
      it's still a page flag that is not yet subpage compatible.
      
      Fix it by introducing btrfs_subpage::checked_offset to do the
      conversion.
      
      For most call sites, especially for free-space cache, COW fixup and
      btrfs_invalidatepage(), they all work in full page mode anyway.
      
      For other call sites, they work as subpage compatible mode.
      
      Some call sites need extra modification:
      
      - btrfs_drop_pages()
        Needs extra parameter to get the real range we need to clear checked
        flag.
      
        Also, since btrfs_drop_pages() will accept pages beyond the dirtied
        range, update btrfs_subpage_clamp_range() to handle such a case by
        setting @len to 0 if the page is beyond the target range.
      
      - btrfs_invalidatepage()
        We need to call subpage helper before calling __btrfs_releasepage(),
        or it will trigger ASSERT() as page->private will be cleared.
      
      - btrfs_verify_data_csum()
        In theory we don't need the io_bio->csum check anymore, but it
        won't hurt.  Just change the comment.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce compressed_bio::pending_sectors to trace compressed bio · 6ec9765d
      Committed by Qu Wenruo
      For btrfs_submit_compressed_read() and btrfs_submit_compressed_write(),
      we have a pretty weird dance around compressed_bio::pending_bios:
      
        btrfs_submit_compressed_read/write()
        {
      	cb = kmalloc()
      	refcount_set(&cb->pending_bios, 0);
      	bio = btrfs_alloc_bio();
      
      	/* NOTE here, we haven't yet submitted any bio */
      	refcount_set(&cb->pending_bios, 1);
      
      	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
      		if (submit) {
      			/* Here we submit bio, but we always have one
      			 * extra pending_bios */
      			refcount_inc(&cb->pending_bios);
      			ret = btrfs_map_bio();
      		}
      	}
      
      	/* Submit the last bio */
      	ret = btrfs_map_bio();
        }
      
      There are two reasons why we do this:
      
      - compressed_bio::pending_bios is a refcount
        Thus if it's reduced to 0, it can not be increased again.
      
      - To ensure the compressed_bio is not freed by some submitted bios
        If the submitted bio is finished before the next bio submitted,
        we can free the compressed_bio completely.
      
      But the above code is sometimes confusing, and we can do it better by
      introducing a new member, compressed_bio::pending_sectors.
      
      Now we use compressed_bio::pending_sectors to indicate whether we have
      any pending sectors under IO or not yet submitted.
      
      If pending_sectors == 0, we're definitely at the last bio of the
      compressed_bio, and it is OK to release the compressed bio.
      
      Now the workflow looks like this:
      
        btrfs_submit_compressed_read/write()
        {
      	cb = kmalloc()
      	atomic_set(&cb->pending_bios, 0);
      	refcount_set(&cb->pending_sectors,
      		     compressed_len >> sectorsize_bits);
      	bio = btrfs_alloc_bio();
      
      	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
      		if (submit) {
      			atomic_inc(&cb->pending_bios);
      			ret = btrfs_map_bio();
      		}
      	}
      
      	/* Submit the last bio */
      	atomic_inc(&cb->pending_bios);
      	ret = btrfs_map_bio();
        }
      
      For now we still need pending_bios for later error handling, but will
      remove pending_bios eventually after properly handling the errors.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make add_ra_bio_pages() compatible · 6a404910
      Committed by Qu Wenruo
      [BUG]
      If we remove the subpage limitation in add_ra_bio_pages() and then
      read a compressed extent which has part of its range in the next
      page, like the following inode layout:
      
      	0	32K	64K	96K	128K
      	|<--------------|-------------->|
      
      Btrfs will trigger ASSERT() in endio function:
      
        assertion failed: atomic_read(&subpage->readers) >= nbits
        ------------[ cut here ]------------
        kernel BUG at fs/btrfs/ctree.h:3431!
        Internal error: Oops - BUG: 0 [#1] SMP
        Workqueue: btrfs-endio btrfs_work_helper [btrfs]
        Call trace:
         assertfail.constprop.0+0x28/0x2c [btrfs]
         btrfs_subpage_end_reader+0x148/0x14c [btrfs]
         end_page_read+0x8c/0x100 [btrfs]
         end_bio_extent_readpage+0x320/0x6b0 [btrfs]
         bio_endio+0x15c/0x1dc
         end_workqueue_fn+0x44/0x64 [btrfs]
         btrfs_work_helper+0x74/0x250 [btrfs]
         process_one_work+0x1d4/0x47c
         worker_thread+0x180/0x400
         kthread+0x11c/0x120
         ret_from_fork+0x10/0x30
        ---[ end trace c8b7b552d3bb408c ]---
      
      [CAUSE]
      When we read the page range [0, 64K), we find it's a compressed extent,
      and we will try to add extra pages in add_ra_bio_pages() to avoid
      reading the same compressed extent.
      
      But when we add such a page into the read bio, it doesn't follow the
      behavior of btrfs_do_readpage() to properly set subpage::readers.
      
      This means, for page [64K, 128K), its subpage::readers is still 0.
      
      And when endio is executed on both pages, since page [64K, 128K) has
      0 subpage::readers, it triggers the above ASSERT().
      
      [FIX]
      Function add_ra_bio_pages() is far from subpage compatible: it always
      assumes PAGE_SIZE == sectorsize, thus when it skips to the next range
      it always just skips PAGE_SIZE bytes.
      
      Make it subpage compatible by:
      
      - Skip to the next page properly when needed
        If we find there is already a page in the page cache, we need to
        skip to the next page.  For that case, we shouldn't just skip
        PAGE_SIZE bytes, but use @pg_index to calculate the next bytenr
        and continue.
      
      - Only add the page range covered by current extent map
        We need to calculate which range is covered by current extent map and
        only add that part into the read bio.
      
      - Update subpage::readers before submitting the bio
      
      - Use proper cursor other than confusing @last_offset
      
      - Calculate the missed threshold based on sector size
        It's no longer using missed pages, as for 64K page size, we have at
        most 3 pages to skip (if aligned, only 2 pages).
      
      - Add ASSERT() to make sure our bytenr is always aligned
      
      - Add comment for the function
        Add a special note for subpage case, as the function won't really
        work well for subpage cases.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: don't pass compressed pages to btrfs_writepage_endio_finish_ordered() · 58469174
      Committed by Qu Wenruo
      Since async_extent holds the compressed page, it would trigger the new
      ASSERT() in btrfs_mark_ordered_io_finished() which checks that the range
      is inside the page.
      
      Now btrfs_writepage_endio_finish_ordered() can accept @page == NULL,
      just pass NULL to btrfs_writepage_endio_finish_ordered().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use async_chunk::async_cow to replace the confusing pending pointer · 9e895a8f
      Committed by Qu Wenruo
      For structure async_chunk, we use a very strange member layout to
      grab the structure async_cow which owns this async_chunk.
      
      At initialization, it goes like this:
      
      		async_chunk[i].pending = &ctx->num_chunks;
      
      Then at async_cow_free() we do a super weird freeing:
      
      	/*
      	 * Since the pointer to 'pending' is at the beginning of the array of
      	 * async_chunk's, freeing it ensures the whole array has been freed.
      	 */
      	if (atomic_dec_and_test(async_chunk->pending))
      		kvfree(async_chunk->pending);
      
      This is absolutely an abuse of kvfree().
      
      Replace async_chunk::pending with async_chunk::async_cow, so that we can
      grab the async_cow structure directly, without this strange dancing.
      
      And with this change, there is no requirement for any specific member
      location.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unnecessary parameter delalloc_start for writepage_delalloc() · cf3075fb
      Committed by Qu Wenruo
      In function __extent_writepage() we always pass page start to
      @delalloc_start for writepage_delalloc().
      
      Thus we don't really need @delalloc_start parameter as we can extract it
      from @page.
      
      Remove the @delalloc_start parameter and make __extent_writepage()
      declare @page_start and @page_end as const.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
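
      The simplified call site in sketch form, with the page range derived
      from @page itself:

        const u64 page_start = page_offset(page);
        const u64 page_end = page_start + PAGE_SIZE - 1;

        /* No @delalloc_start: the helper starts at page_offset(page). */
        ret = writepage_delalloc(BTRFS_I(inode), page, wbc);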
    • btrfs: remove unused parameter nr_pages in add_ra_bio_pages() · cd9255be
      Committed by Qu Wenruo
      Variable @nr_pages only gets increased but never used.  Remove it.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: use single bulk copy operations when logging directories · da1b811f
      Committed by Filipe Manana
      When logging a directory and inserting a batch of directory items, we are
      copying the data of each item from a leaf in the fs/subvolume tree to a
      leaf in a log tree, separately. This is not really needed, since we are
      copying from a contiguous memory area into another one, so we can use a
      single copy operation to copy all items at once.
      
      This patch is part of a small patchset that is comprised of the following
      patches:
      
        btrfs: loop only once over data sizes array when inserting an item batch
        btrfs: unexport setup_items_for_insert()
        btrfs: use single bulk copy operations when logging directories
      
      This is patch 3/3.
      
      The following test was used to compare performance of a branch without the
      patchset versus one branch that has the whole patchset applied:
      
        $ cat dir-fsync-test.sh
        #!/bin/bash
      
        DEV=/dev/nvme0n1
        MNT=/mnt/nvme0n1
      
        NUM_NEW_FILES=1000000
        NUM_FILE_DELETES=1000
        LEAF_SIZE=16K
      
        mkfs.btrfs -f -n $LEAF_SIZE $DEV
        mount -o ssd $DEV $MNT
      
        mkdir $MNT/testdir
      
        for ((i = 1; i <= $NUM_NEW_FILES; i++)); do
            echo -n > $MNT/testdir/file_$i
        done
      
        # Fsync the directory, this will log the new dir items and the inodes
        # they point to, because these are new inodes.
        start=$(date +%s%N)
        xfs_io -c "fsync" $MNT/testdir
        end=$(date +%s%N)
      
        dur=$(( (end - start) / 1000000 ))
        echo "dir fsync took $dur ms after adding $NUM_NEW_FILES files"
      
        # sync to force transaction commit and wipeout the log.
        sync
      
        del_inc=$(( $NUM_NEW_FILES / $NUM_FILE_DELETES ))
        for ((i = 1; i <= $NUM_NEW_FILES; i += $del_inc)); do
            rm -f $MNT/testdir/file_$i
        done
      
        # Fsync the directory, this will only log dir items, there are no
        # dentries pointing to new inodes.
        start=$(date +%s%N)
        xfs_io -c "fsync" $MNT/testdir
        end=$(date +%s%N)
      
        dur=$(( (end - start) / 1000000 ))
        echo "dir fsync took $dur ms after deleting $NUM_FILE_DELETES files"
      
        umount $MNT
      
      The tests were run on a non-debug kernel (Debian's default kernel config)
      and were the following:
      
      *** with a leaf size of 16K, before patchset ***
      
      dir fsync took 8482 ms after adding 1000000 files
      dir fsync took 166 ms after deleting 1000 files
      
      *** with a leaf size of 16K, after patchset ***
      
      dir fsync took 8196 ms after adding 1000000 files  (-3.4%)
      dir fsync took 143 ms after deleting 1000 files    (-14.9%)
      
      *** with a leaf size of 64K, before patchset ***
      
      dir fsync took 12851 ms after adding 1000000 files
      dir fsync took 466 ms after deleting 1000 files
      
      *** with a leaf size of 64K, after patchset ***
      
      dir fsync took 12287 ms after adding 1000000 files (-4.5%)
      dir fsync took 414 ms after deleting 1000 files    (-11.8%)
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: unexport setup_items_for_insert() · f0641656
      Committed by Filipe Manana
      Since setup_items_for_insert() is not used anymore outside of ctree.c,
      make it static and remove its prototype from ctree.h.  This also
      requires moving the definition of setup_item_for_insert() from
      ctree.h to ctree.c and moving btrfs_duplicate_item() down so that
      it's defined after setup_items_for_insert().
      
      Further, since setup_item_for_insert() is used outside ctree.c, rename it
      to btrfs_setup_item_for_insert().
      
      This patch is part of a small patchset that is comprised of the following
      patches:
      
        btrfs: loop only once over data sizes array when inserting an item batch
        btrfs: unexport setup_items_for_insert()
        btrfs: use single bulk copy operations when logging directories
      
      This is patch 2/3 and performance results, and the specific tests, are
      included in the changelog of patch 3/3.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: loop only once over data sizes array when inserting an item batch · b7ef5f3a
      Committed by Filipe Manana
      When inserting a batch of items into a btree, we end up looping over the
      data sizes array 3 times:
      
      1) Once in the caller of btrfs_insert_empty_items(), when it populates the
         array with the data sizes for each item;
      
      2) Once at btrfs_insert_empty_items() to sum the elements of the data
         sizes array and compute the total data size;
      
      3) And then once again at setup_items_for_insert(), where we do exactly
         the same as what we do at btrfs_insert_empty_items(), to compute the
         total data size.
      
      That is not bad for small arrays, but when the arrays have hundreds of
      elements, the time spent on looping is not negligible. For example when
      doing batch inserts of delayed items for dir index items or when logging
      a directory, it's common to have 200 to 260 dir index items in a single
      batch when using a leaf size of 16K and using file names between 8 and 12
      characters. For a 64K leaf size, multiply that by 4. Taking into account
      that during directory logging or when flushing delayed dir index items we
      can have many of those large batches, the time spent on the looping adds
      up quickly.
      
      It's also more important to avoid it at setup_items_for_insert(), since
      we are holding a write lock on a leaf and, in some cases, on upper nodes
      of the btree, which causes us to block other tasks that want to access
      the leaf and nodes for longer than necessary.
      
      So change the code so that setup_items_for_insert() and
      btrfs_insert_empty_items() no longer compute the total data size, and
      instead rely on the caller to supply it. This makes us loop over the
      array only once, where we can both populate the data size array and
      compute the total data size, taking advantage of spatial and temporal
      locality. To make this more manageable, use a structure to contain
      all the relevant details for a batch of items (keys array, data sizes
      array, total data size, number of items), and use it as an argument
      for btrfs_insert_empty_items() and setup_items_for_insert().
      
      This patch is part of a small patchset that is comprised of the following
      patches:
      
        btrfs: loop only once over data sizes array when inserting an item batch
        btrfs: unexport setup_items_for_insert()
        btrfs: use single bulk copy operations when logging directories
      
      This is patch 1/3 and performance results, and the specific tests, are
      included in the changelog of patch 3/3.
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
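
      A sketch of the batch descriptor this introduces (field names
      approximate):

        struct btrfs_item_batch {
                const struct btrfs_key *keys;   /* keys for all items */
                const u32 *data_sizes;          /* data size of each item */
                u32 total_data_size;            /* precomputed sum */
                int nr;                         /* number of items */
        };

      The caller fills keys/data_sizes and sums total_data_size in the
      same loop, so nothing has to walk the array again.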
    • btrfs: remove btrfs_raid_bio::fs_info member · 6a258d72
      Committed by Qu Wenruo
      We can grab fs_info reliably from btrfs_raid_bio::bioc, as the bioc is
      always passed into alloc_rbio(), and only get released when the raid bio
      is released.
      
      Remove btrfs_raid_bio::fs_info member, and cleanup all the @fs_info
      parameters for alloc_rbio() callers.
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: make sure btrfs_io_context::fs_info is always initialized · 731ccf15
      Committed by Qu Wenruo
      Currently btrfs_io_context::fs_info is only initialized in
      btrfs_map_bio, but there are call sites like btrfs_map_sblock() which
      calls __btrfs_map_block() directly, leaving bioc::fs_info uninitialized
      (NULL).
      
      Currently this is fine, but later cleanup will rely on bioc::fs_info to
      grab fs_info, and this can be a hidden problem for such usage.
      
      This patch will remove such hidden uninitialized member by always
      assigning bioc::fs_info at alloc_btrfs_io_context().
      Reviewed-by: Nikolay Borisov <nborisov@suse.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: assert that extent buffers are write locked instead of only locked · 49d0c642
      Committed by Filipe Manana
      We currently use lockdep_assert_held() at btrfs_assert_tree_locked(), and
      that checks that we hold a lock either in read mode or write mode.
      
      However in all contexts we use btrfs_assert_tree_locked(), we actually
      want to check if we are holding a write lock on the extent buffer's rw
      semaphore - it would be a bug if in any of those contexts we were holding
      a read lock instead.
      
      So change btrfs_assert_tree_locked() to use lockdep_assert_held_write()
      instead and, to make it more explicit, rename btrfs_assert_tree_locked()
      to btrfs_assert_tree_write_locked(), so that it's clear we want to check
      we are holding a write lock.
      
      For now there are no contexts where we want to assert that we must have
      a read lock, but in case that is needed in the future, we can add a new
      helper function that just calls out lockdep_assert_held_read().
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
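
      The renamed assertion in sketch form, assuming the extent buffer's
      rw semaphore is the eb->lock member:

        static inline void btrfs_assert_tree_write_locked(struct extent_buffer *eb)
        {
                lockdep_assert_held_write(&eb->lock);
        }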
    • btrfs: do not take the uuid_mutex in btrfs_rm_device · 8ef9dc0f
      Committed by Josef Bacik
      We got the following lockdep splat while running fstests (specifically
      btrfs/003 and btrfs/020 in a row) with the new rc.  This was uncovered
      by 87579e9b ("loop: use worker per cgroup instead of kworker") which
      converted loop to using workqueues, which comes with lockdep
      annotations that don't exist with kworkers.  The lockdep splat is as
      follows:
      
        WARNING: possible circular locking dependency detected
        5.14.0-rc2-custom+ #34 Not tainted
        ------------------------------------------------------
        losetup/156417 is trying to acquire lock:
        ffff9c7645b02d38 ((wq_completion)loop0){+.+.}-{0:0}, at: flush_workqueue+0x84/0x600
      
        but task is already holding lock:
        ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x650 [loop]
      
        which lock already depends on the new lock.
      
        the existing dependency chain (in reverse order) is:
      
        -> #5 (&lo->lo_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0xba/0x7c0
      	 lo_open+0x28/0x60 [loop]
      	 blkdev_get_whole+0x28/0xf0
      	 blkdev_get_by_dev.part.0+0x168/0x3c0
      	 blkdev_open+0xd2/0xe0
      	 do_dentry_open+0x163/0x3a0
      	 path_openat+0x74d/0xa40
      	 do_filp_open+0x9c/0x140
      	 do_sys_openat2+0xb1/0x170
      	 __x64_sys_openat+0x54/0x90
      	 do_syscall_64+0x3b/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xae
      
        -> #4 (&disk->open_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0xba/0x7c0
      	 blkdev_get_by_dev.part.0+0xd1/0x3c0
      	 blkdev_get_by_path+0xc0/0xd0
      	 btrfs_scan_one_device+0x52/0x1f0 [btrfs]
      	 btrfs_control_ioctl+0xac/0x170 [btrfs]
      	 __x64_sys_ioctl+0x83/0xb0
      	 do_syscall_64+0x3b/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xae
      
        -> #3 (uuid_mutex){+.+.}-{3:3}:
      	 __mutex_lock+0xba/0x7c0
      	 btrfs_rm_device+0x48/0x6a0 [btrfs]
      	 btrfs_ioctl+0x2d1c/0x3110 [btrfs]
      	 __x64_sys_ioctl+0x83/0xb0
      	 do_syscall_64+0x3b/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xae
      
        -> #2 (sb_writers#11){.+.+}-{0:0}:
      	 lo_write_bvec+0x112/0x290 [loop]
      	 loop_process_work+0x25f/0xcb0 [loop]
      	 process_one_work+0x28f/0x5d0
      	 worker_thread+0x55/0x3c0
      	 kthread+0x140/0x170
      	 ret_from_fork+0x22/0x30
      
        -> #1 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}:
      	 process_one_work+0x266/0x5d0
      	 worker_thread+0x55/0x3c0
      	 kthread+0x140/0x170
      	 ret_from_fork+0x22/0x30
      
        -> #0 ((wq_completion)loop0){+.+.}-{0:0}:
      	 __lock_acquire+0x1130/0x1dc0
      	 lock_acquire+0xf5/0x320
      	 flush_workqueue+0xae/0x600
      	 drain_workqueue+0xa0/0x110
      	 destroy_workqueue+0x36/0x250
      	 __loop_clr_fd+0x9a/0x650 [loop]
      	 lo_ioctl+0x29d/0x780 [loop]
      	 block_ioctl+0x3f/0x50
      	 __x64_sys_ioctl+0x83/0xb0
      	 do_syscall_64+0x3b/0x90
      	 entry_SYSCALL_64_after_hwframe+0x44/0xae
      
        other info that might help us debug this:
        Chain exists of:
          (wq_completion)loop0 --> &disk->open_mutex --> &lo->lo_mutex
         Possible unsafe locking scenario:
      	 CPU0                    CPU1
      	 ----                    ----
          lock(&lo->lo_mutex);
      				 lock(&disk->open_mutex);
      				 lock(&lo->lo_mutex);
          lock((wq_completion)loop0);
      
         *** DEADLOCK ***
        1 lock held by losetup/156417:
         #0: ffff9c7647395468 (&lo->lo_mutex){+.+.}-{3:3}, at: __loop_clr_fd+0x41/0x650 [loop]
      
        stack backtrace:
        CPU: 8 PID: 156417 Comm: losetup Not tainted 5.14.0-rc2-custom+ #34
        Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
        Call Trace:
         dump_stack_lvl+0x57/0x72
         check_noncircular+0x10a/0x120
         __lock_acquire+0x1130/0x1dc0
         lock_acquire+0xf5/0x320
         ? flush_workqueue+0x84/0x600
         flush_workqueue+0xae/0x600
         ? flush_workqueue+0x84/0x600
         drain_workqueue+0xa0/0x110
         destroy_workqueue+0x36/0x250
         __loop_clr_fd+0x9a/0x650 [loop]
         lo_ioctl+0x29d/0x780 [loop]
         ? __lock_acquire+0x3a0/0x1dc0
         ? update_dl_rq_load_avg+0x152/0x360
         ? lock_is_held_type+0xa5/0x120
         ? find_held_lock.constprop.0+0x2b/0x80
         block_ioctl+0x3f/0x50
         __x64_sys_ioctl+0x83/0xb0
         do_syscall_64+0x3b/0x90
         entry_SYSCALL_64_after_hwframe+0x44/0xae
        RIP: 0033:0x7f645884de6b
      
      Usually the uuid_mutex exists to protect the fs_devices that map
      together all of the devices that match a specific uuid.  In rm_device
      we're messing with the uuid of a device, so it makes sense to protect
      that here.
      
      However in doing that it pulls in a whole host of lockdep dependencies,
      as we call mnt_may_write() on the sb before we grab the uuid_mutex, thus
      we end up with the dependency chain under the uuid_mutex being added
      under the normal sb write dependency chain, which causes problems with
      loop devices.
      
      We don't need the uuid mutex here however.  If we call
      btrfs_scan_one_device() before we scratch the super block we will find
      the fs_devices and not find the device itself and return EBUSY because
      the fs_devices is open.  If we call it after the scratch happens it will
      not appear to be a valid btrfs file system.
      
      We do not need to worry about other fs_devices modifying operations here
      because we're protected by the exclusive operations locking.
      
      So drop the uuid_mutex here in order to fix the lockdep splat.
      
      A more detailed explanation from the discussion:
      
      We are worried about rm and scan racing with each other, before this
      change we'll zero the device out under the UUID mutex so when scan does
      run it'll make sure that it can go through the whole device scan thing
      without rm messing with us.
      
      We aren't worried if the scratch happens first, because the result is we
      don't think this is a btrfs device and we bail out.
      
      The only case we are concerned with is we scratch _after_ scan is able
      to read the superblock and gets a seemingly valid super block, so lets
      consider this case.
      
      Scan will call device_list_add() with the device we're removing.  We'll
      call find_fsid_with_metadata_uuid() and get our fs_devices for this
      UUID.  At this point we lock the fs_devices->device_list_mutex.  This is
      what protects us in this case, but we have two cases here.
      
      1. We haven't reached the device removal part of the rm.  We found
         our device, and the device name matches our path; we go down and
         set total_devices to our super's number of devices, which doesn't
         affect anything because we haven't done the remove yet.
      
      2. We are past the device removal part, which is protected by the
         device_list_mutex.  Scan doesn't find the device, it goes down and
         does the
      
         if (fs_devices->opened)
      	   return -EBUSY;
      
         check and we bail out.
      
      Nothing about this situation is ideal, but the lockdep splat is real,
      and the fix is safe, though admittedly a bit scary looking.
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Josef Bacik <josef@toxicpanda.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      [ copy more from the discussion ]
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename struct btrfs_io_bio to btrfs_bio · c3a3b19b
      Committed by Qu Wenruo
      Previously we had "struct btrfs_bio", which records IO context for
      mirrored IO and RAID56, and "strcut btrfs_io_bio", which records extra
      btrfs specific info for logical bytenr bio.
      
      With "btrfs_bio" renamed to "btrfs_io_context", we are safe to rename
      "btrfs_io_bio" to "btrfs_bio" which is a more suitable name now.
      
      The struct btrfs_bio changes meaning by this commit. There was a
      suggested name like btrfs_logical_bio but it's a bit long and we'd
      prefer to use a shorter name.
      
      This could be a concern for backports to older kernels where the
      different meaning could possibly cause confusion or bugs. Comparing the
      new and old structures, there's no overlap among the struct members so a
      build would break in case of incorrect backport.
      
      We haven't had many backports to bio code anyway so this is more of a
      theoretical cause of bugs and a matter of precaution but we'll need to
      keep the semantic change in mind.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove btrfs_bio_alloc() helper · cd8e0cca
      Committed by Qu Wenruo
      The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
      except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initializes
      bio->bi_iter.bi_sector.
      
      However the naming itself is not using "btrfs_io_bio" to indicate its
      parameter is "strcut btrfs_io_bio" and can be easily confused with
      "struct btrfs_bio".
      
      Considering assigned bio->bi_iter.bi_sector is such a simple work and
      there are already tons of call sites doing that manually, there is no
      need to do that in a helper.
      
      Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
      function to provide a fail-safe value for its @nr_iovecs.
      
      And then replace all btrfs_bio_alloc() callers with
      btrfs_io_bio_alloc().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename btrfs_bio to btrfs_io_context · 4c664611
      Committed by Qu Wenruo
      The structure btrfs_bio is used by two different sites:
      
      - bio->bi_private for mirror based profiles
        For those profiles (SINGLE/DUP/RAID1*/RAID10), this structures records
        how many mirrors are still pending, and save the original endio
        function of the bio.
      
      - RAID56 code
        In that case, RAID56 only utilizes the stripes info, and no longer
        uses it to track the pending mirrors.
      
      So btrfs_bio is not always bound to a bio and contains more info
      about the IO context, thus renaming it will make the naming less
      confusing.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>