1. 25 Jul 2022, 3 commits
  2. 16 May 2022, 4 commits
  3. 06 Apr 2022, 1 commit
  4. 14 Mar 2022, 6 commits
  5. 07 Jan 2022, 1 commit
  6. 03 Jan 2022, 1 commit
  7. 27 Oct 2021, 15 commits
    • Revert "btrfs: compression: drop kmap/kunmap from generic helpers" · 3a60f653
      David Sterba committed
      This reverts commit 4c2bf276.
      
      The kmaps in the compression code are still needed and their removal
      causes crashes on 32bit machines (ARM, x86). Reproducible e.g. by
      running fstest btrfs/004 with LZO or ZSTD compression enabled.
      
      Link: https://lore.kernel.org/all/CAJCQCtT+OuemovPO7GZk8Y8=qtOObr0XTDp8jh4OHD6y84AFxw@mail.gmail.com/
      Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=214839
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make end_compressed_bio_writeback() compatible · 741ec653
      Qu Wenruo committed
      In end_compressed_writeback() we just clear writeback on the full page.
      For the subpage case, if there are two delalloc ranges in the same page,
      the second range will trigger a BUG_ON() because the page writeback has
      already been cleared by the previous range.
      
      Fix it by using the btrfs_page_clamp_clear_writeback() helper, as in the
      sketch below.
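
      A minimal sketch of the idea, assuming the clamp helper takes the
      fs_info, the page and the byte range of this compressed_bio (the loop
      shape is illustrative, not the exact upstream code):

        static void end_compressed_writeback(struct inode *inode,
                                             const struct compressed_bio *cb)
        {
                struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
                unsigned long index = cb->start >> PAGE_SHIFT;
                unsigned long end_index = (cb->start + cb->len - 1) >> PAGE_SHIFT;
                struct page *page;

                for (; index <= end_index; index++) {
                        page = find_get_page(inode->i_mapping, index);
                        if (!page)
                                continue;
                        /*
                         * Clear writeback only for this cb's byte range, so a
                         * second delalloc range in the same (sub)page does not
                         * hit an already-cleared page and trip the BUG_ON().
                         */
                        btrfs_page_clamp_clear_writeback(fs_info, page,
                                                         cb->start, cb->len);
                        put_page(page);
                }
        }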
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make btrfs_submit_compressed_write() compatible · bbbff01a
      Qu Wenruo committed
      There is a WARN_ON() checking if @start is aligned to PAGE_SIZE rather
      than sectorsize, which causes a false alert for subpage.  Fix it to
      check against sectorsize.
      
      Furthermore:
      
      - Use ASSERT() to do the check
        so that in the future we may skip the check in production builds
      
      - Also check the alignment of @len (see the sketch below)
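
      A minimal sketch of the two checks, assuming fs_info is reachable from
      the btrfs_inode passed to btrfs_submit_compressed_write() (names are
      illustrative):

        struct btrfs_fs_info *fs_info = inode->root->fs_info;

        /*
         * Alignment must be to the sector size, not PAGE_SIZE, so that
         * subpage setups (e.g. 64K pages with 4K sectors) don't warn falsely.
         * ASSERT() compiles out when CONFIG_BTRFS_ASSERT is not set.
         */
        ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
               IS_ALIGNED(len, fs_info->sectorsize));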
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_write · 91507240
      Qu Wenruo committed
      Currently btrfs_submit_compressed_write() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do that
      for every page.
      
      Align the behavior to extent_io.c, by determining the stripe boundary
      when allocating a bio.
      
      Unlike extent_io.c, in compression.c we don't need to bother with things
      like different bio flags, thus there is no need to reuse bio_ctrl.
      
      Here we just manually introduce a new local variable, next_stripe_start,
      and use the value returned from alloc_compressed_bio() to calculate
      the stripe boundary.
      
      Then each time we add some page range into the bio, we check whether we
      have reached the boundary, and if so, submit the bio.
      
      Also, since we have @cur_disk_bytenr to determine whether we're at the
      last bio, we don't need an explicit last_bio: tag for error handling any
      more.
      
      And since we use @cur_disk_bytenr to wait, there is no need for
      pending_bios either; remove it to save some memory in compressed_bio.
      A sketch of the resulting submission loop follows.
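
      A rough sketch of the loop shape after this change; alloc_compressed_bio()
      and submit_compressed_bio() are the helpers introduced by the commits
      further down, shown with assumed signatures, and error handling is
      omitted:

        u64 cur_disk_bytenr = disk_start;
        u64 next_stripe_start;
        struct bio *bio = NULL;

        while (cur_disk_bytenr < disk_start + compressed_len) {
                u64 offset = cur_disk_bytenr - disk_start;
                unsigned int index = offset >> PAGE_SHIFT;
                unsigned int added;
                u32 real_size;

                if (!bio) {
                        /* The helper also reports where the stripe ends. */
                        bio = alloc_compressed_bio(cb, cur_disk_bytenr,
                                                   REQ_OP_WRITE,
                                                   end_compressed_bio_write,
                                                   &next_stripe_start);
                }
                /* Never add bytes past the boundary decided at alloc time. */
                real_size = min_t(u64, next_stripe_start - cur_disk_bytenr,
                                  PAGE_SIZE - offset_in_page(offset));

                added = bio_add_page(bio, cb->compressed_pages[index],
                                     real_size, offset_in_page(offset));
                if (added == 0) {
                        /* bio is full: submit it and retry this range. */
                        submit_compressed_bio(fs_info, cb, bio, 0);
                        bio = NULL;
                        continue;
                }
                cur_disk_bytenr += added;

                /* Boundary reached: submit and start a new bio next round. */
                if (cur_disk_bytenr == next_stripe_start) {
                        submit_compressed_bio(fs_info, cb, bio, 0);
                        bio = NULL;
                }
        }
        if (bio)
                submit_compressed_bio(fs_info, cb, bio, 0);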
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: determine stripe boundary at bio allocation time in btrfs_submit_compressed_read · f472c28f
      Qu Wenruo committed
      Currently btrfs_submit_compressed_read() will check
      btrfs_bio_fits_in_stripe() each time a new page is going to be added.
      Even if the compressed extent is small, we don't really need to do that
      for every page.
      
      This patch will align the behavior to extent_io.c, by determining the
      stripe boundary when allocating a bio.
      
      Unlike extent_io.c, in compression.c we don't need to bother with things
      like different bio flags, thus there is no need to reuse bio_ctrl.
      
      Here we just manually introduce a new local variable, next_stripe_start,
      and teach alloc_compressed_bio() to calculate the stripe boundary.
      
      Then each time we add some page range into the bio, we check whether we
      have reached the boundary, and if so, submit the bio.
      
      Also, since we have @cur_disk_byte to determine whether we're at the
      last bio, we don't need an explicit last_bio: tag for error handling any
      more.
      
      And since we can use @cur_disk_byte to track which range has been added
      to the bio, we can also use @cur_disk_byte to calculate the wait
      condition, so there is no need for @pending_bios.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce alloc_compressed_bio() for compression · 22c306fe
      Qu Wenruo committed
      Just aggregate the bio allocation code into one helper, so that we can
      replace 4 call sites.
      
      There is one special note for zoned write.
      
      Currently btrfs_submit_compressed_write() will only allocate the first
      bio using ZONE_APPEND.  If we have to submit the current bio due to a
      stripe boundary, the newly allocated bio will not use ZONE_APPEND.
      
      In theory this should be a bug, but considering that zoned mode
      currently only supports the SINGLE profile, which doesn't have any
      stripe boundary limit, it should never be a problem, and we have
      assertions in place.
      
      This function will provide a good entry point for any work which needs
      to be done at bio allocation time, like determining the stripe boundary.
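
      A sketch of the shape such a helper can take.  The bio setup mirrors
      what the four call sites already do; stripe_boundary_of() is a
      hypothetical stand-in for the chunk/io-geometry lookup, the allocation
      helper is shown under its post-rename name btrfs_bio_alloc() (it may
      have been named differently at this point in the series), and the exact
      upstream signature may differ:

        static struct bio *alloc_compressed_bio(struct compressed_bio *cb,
                                                u64 disk_bytenr, unsigned int opf,
                                                bio_end_io_t endio_func,
                                                u64 *next_stripe_start)
        {
                struct bio *bio;

                bio = btrfs_bio_alloc(BIO_MAX_VECS);
                bio->bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
                bio->bi_opf = opf;
                bio->bi_private = cb;
                bio->bi_end_io = endio_func;

                /*
                 * Do the stripe lookup once per bio instead of once per page.
                 * stripe_boundary_of() is a placeholder for querying the io
                 * geometry of the chunk containing @disk_bytenr.
                 */
                *next_stripe_start = stripe_boundary_of(cb, disk_bytenr);

                return bio;
        }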
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce submit_compressed_bio() for compression · 2d4e0b84
      Qu Wenruo committed
      The new helper, submit_compressed_bio(), will aggregate the following
      work:
      
      - Increase compressed_bio::pending_bios
      - Remap the endio function
      - Map and submit the bio
      
      This slightly reorders the calls to btrfs_csum_one_bio() and
      btrfs_lookup_bio_sums(), but none of them does anything regarding IO
      submission, so this is effectively no change. We mainly care about the
      order of the following calls (a sketch of the helper follows the list):
      
      - atomic_inc
      - btrfs_bio_wq_end_io
      - btrfs_map_bio
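
      A sketch of the aggregated helper with an assumed signature; the real
      code picks the endio workqueue type depending on whether the bio is a
      read or a write:

        static blk_status_t submit_compressed_bio(struct btrfs_fs_info *fs_info,
                                                  struct compressed_bio *cb,
                                                  struct bio *bio, int mirror_num)
        {
                blk_status_t ret;

                /* Keep the order: count the bio, remap endio, then map and submit. */
                atomic_inc(&cb->pending_bios);
                ret = btrfs_bio_wq_end_io(fs_info, bio, BTRFS_WQ_ENDIO_DATA);
                if (ret)
                        return ret;
                return btrfs_map_bio(fs_info, bio, mirror_num);
        }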
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly inside btrfs_submit_compressed_write() · 6853c64a
      Qu Wenruo committed
      Just like btrfs_submit_compressed_read(), there are quite a few
      BUG_ON()s inside btrfs_submit_compressed_write() in the bio submission
      path.
      
      Fix them using the same method:
      
      - For the last bio, just end it
        In that case, the endio function of one of the submitted bios will be
        able to free the compressed_bio.
      
      - For a half-submitted bio, wait and finish the compressed_bio manually
        In this case, once all the other bios finish, we're the only one
        referring to the compressed_bio and can finish it manually.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: handle errors properly inside btrfs_submit_compressed_read() · 86ccbb4d
      Qu Wenruo committed
      There are quite a few BUG_ON()s inside btrfs_submit_compressed_read();
      namely, all errors inside the for() loop rely on BUG_ON() to handle
      -ENOMEM.
      
      Handle these errors properly by:
      
      - Wait for submitted bios to finish first
        Use the wait_var_event()/wake_up_var() APIs to wait without
        introducing extra memory overhead inside compressed_bio.
        This allows us to wait for any submitted bio to finish, while still
        keeping the compressed_bio from being freed.
      
      - Introduce finish_compressed_bio_read() to finish the compressed_bio
      
      - Properly end the bio and finish the compressed_bio when an error
        happens
      
      Now in btrfs_submit_compressed_read(), even when bio submission fails,
      we can properly handle the error without triggering BUG_ON().
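
      A sketch of the wait/finish pattern described above.  The helper
      handle_submit_error() is purely illustrative, finish_compressed_bio_read()
      is shown with a simplified parameter list, and pending_sectors refers to
      the counter introduced by the commit further down this list:

        static void finish_compressed_bio_read(struct compressed_bio *cb);

        /*
         * Submission error path: wait until every bio that did get submitted
         * has finished, so @cb cannot be freed under us, then finish it.
         * On the endio side, each completed bio updates pending_sectors and
         * calls wake_up_var(cb) to wake this waiter.
         */
        static void handle_submit_error(struct compressed_bio *cb,
                                        u64 cur_disk_byte, u64 disk_end,
                                        u32 sectorsize_bits)
        {
                u64 unsubmitted = (disk_end - cur_disk_byte) >> sectorsize_bits;

                wait_var_event(cb, refcount_read(&cb->pending_sectors) ==
                                   unsubmitted);
                finish_compressed_bio_read(cb);
        }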
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: add bitmap for PageChecked flag · e4f94347
      Qu Wenruo committed
      Although btrfs makes very limited use of the PageChecked flag, it's
      still a page flag that is not yet subpage compatible.
      
      Fix it by introducing btrfs_subpage::checked_offset to do the
      conversion.
      
      Most call sites, especially the free-space cache, COW fixup and
      btrfs_invalidatepage(), work in full page mode anyway.
      
      The other call sites work in subpage compatible mode.
      
      Some call sites need extra modification:
      
      - btrfs_drop_pages()
        Needs an extra parameter to get the real range where we need to clear
        the checked flag.
      
        Also, since btrfs_drop_pages() will accept pages beyond the dirtied
        range, update btrfs_subpage_clamp_range() to handle such a case
        by setting @len to 0 if the page is beyond the target range.
      
      - btrfs_invalidatepage()
        We need to call the subpage helper before calling
        __btrfs_releasepage(), or it will trigger an ASSERT() as
        page->private will already be cleared.
      
      - btrfs_verify_data_csum()
        In theory we don't need the io_bio->csum check anymore, but it won't
        hurt.  Just update the comment.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: introduce compressed_bio::pending_sectors to trace compressed bio · 6ec9765d
      Qu Wenruo committed
      For btrfs_submit_compressed_read() and btrfs_submit_compressed_write(),
      we have a pretty weird dance around compressed_bio::pending_bios:
      
        btrfs_submit_compressed_read/write()
        {
      	cb = kmalloc()
      	refcount_set(&cb->pending_bios, 0);
      	bio = btrfs_alloc_bio();
      
      	/* NOTE here, we haven't yet submitted any bio */
      	refcount_set(&cb->pending_bios, 1);
      
      	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
      		if (submit) {
      			/* Here we submit bio, but we always have one
      			 * extra pending_bios */
      			refcount_inc(&cb->pending_bios);
      			ret = btrfs_map_bio();
      		}
      	}
      
      	/* Submit the last bio */
      	ret = btrfs_map_bio();
        }
      
      There are two reasons why we do this:
      
      - compressed_bio::pending_bios is a refcount
        Thus once it's reduced to 0, it cannot be increased again.
      
      - To ensure the compressed_bio is not freed by some submitted bios
        If a submitted bio finished before the next bio was submitted, the
        compressed_bio could otherwise be freed prematurely.
      
      But the above code is sometimes confusing, and we can do better by
      introducing a new member, compressed_bio::pending_sectors.
      
      Now we use compressed_bio::pending_sectors to indicate whether we have
      any pending sectors under IO, or not yet submitted.
      
      If pending_sectors == 0, we're definitely handling the last bio of the
      compressed_bio, and it is OK to release the compressed_bio.
      
      Now the workflow looks like this:
      
        btrfs_submit_compressed_read/write()
        {
      	cb = kmalloc()
      	atomic_set(&cb->pending_bios, 0);
      	refcount_set(&cb->pending_sectors,
      		     compressed_len >> sectorsize_bits);
      	bio = btrfs_alloc_bio();
      
      	for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) {
      		if (submit) {
      			refcount_inc(&cb->pending_bios);
      			ret = btrfs_map_bio();
      		}
      	}
      
      	/* Submit the last bio */
      	refcount_inc(&cb->pending_bios);
      	ret = btrfs_map_bio();
        }
      
      For now we still need pending_bios for later error handling, but it will
      be removed eventually once the errors are handled properly.
      A sketch of the endio-side accounting that pending_sectors enables
      follows.
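
      The helper name below is illustrative; the iteration over segments is
      needed because bi_iter has already been advanced by the block layer at
      endio time, so bi_iter.bi_size is not reliable there:

        static bool dec_and_test_compressed_bio(struct compressed_bio *cb,
                                                struct bio *bio)
        {
                struct btrfs_fs_info *fs_info = btrfs_sb(cb->inode->i_sb);
                unsigned int bi_size = 0;
                struct bio_vec *bvec;
                struct bvec_iter_all iter_all;

                /* Sum the segment lengths to recover how many bytes this bio
                 * carried. */
                bio_for_each_segment_all(bvec, bio, iter_all)
                        bi_size += bvec->bv_len;

                if (bio->bi_status)
                        cb->errors = 1;

                /* The bio that finishes the last sectors releases the cb. */
                return refcount_sub_and_test(bi_size >> fs_info->sectorsize_bits,
                                             &cb->pending_sectors);
        }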
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: subpage: make add_ra_bio_pages() compatible · 6a404910
      Qu Wenruo committed
      [BUG]
      If we remove the subpage limitation in add_ra_bio_pages() and then read
      a compressed extent which has part of its range in the next page, like
      the following inode layout:
      
      	0	32K	64K	96K	128K
      	|<--------------|-------------->|
      
      Btrfs will trigger ASSERT() in endio function:
      
        assertion failed: atomic_read(&subpage->readers) >= nbits
        ------------[ cut here ]------------
        kernel BUG at fs/btrfs/ctree.h:3431!
        Internal error: Oops - BUG: 0 [#1] SMP
        Workqueue: btrfs-endio btrfs_work_helper [btrfs]
        Call trace:
         assertfail.constprop.0+0x28/0x2c [btrfs]
         btrfs_subpage_end_reader+0x148/0x14c [btrfs]
         end_page_read+0x8c/0x100 [btrfs]
         end_bio_extent_readpage+0x320/0x6b0 [btrfs]
         bio_endio+0x15c/0x1dc
         end_workqueue_fn+0x44/0x64 [btrfs]
         btrfs_work_helper+0x74/0x250 [btrfs]
         process_one_work+0x1d4/0x47c
         worker_thread+0x180/0x400
         kthread+0x11c/0x120
         ret_from_fork+0x10/0x30
        ---[ end trace c8b7b552d3bb408c ]---
      
      [CAUSE]
      When we read the page range [0, 64K), we find it's a compressed extent,
      and we will try to add extra pages in add_ra_bio_pages() to avoid
      reading the same compressed extent.
      
      But when we add such a page into the read bio, it doesn't follow the
      behavior of btrfs_do_readpage() and properly set subpage::readers.
      
      This means that for page [64K, 128K), its subpage::readers is still 0.
      
      And when endio is executed on both pages, since page [64K, 128K) has 0
      subpage::readers, it triggers the above ASSERT().
      
      [FIX]
      The function add_ra_bio_pages() is far from subpage compatible, as it
      always assumes PAGE_SIZE == sectorsize, so when it skips to the next
      range it always just skips PAGE_SIZE bytes.
      
      Make it subpage compatible by:
      
      - Skip to next page properly when needed
        If we find there is already a page cache, we need to skip to next page.
        For that case, we shouldn't just skip PAGE_SIZE bytes, but use
        @pg_index to calculate the next bytenr and continue.
      
      - Only add the page range covered by current extent map
        We need to calculate which range is covered by current extent map and
        only add that part into the read bio.
      
      - Update subpage::readers before submitting the bio
      
      - Use a proper cursor instead of the confusing @last_offset
      
      - Calculate the missed threshold based on the sector size
        It no longer uses missed pages, as for a 64K page size we have at
        most 3 pages to skip (only 2 if aligned).
      
      - Add ASSERT() to make sure our bytenr is always aligned
      
      - Add comment for the function
        Add a special note for subpage case, as the function won't really
        work well for subpage cases.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove unused parameter nr_pages in add_ra_bio_pages() · cd9255be
      Qu Wenruo committed
      The variable @nr_pages only gets increased but is never used.  Remove it.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: rename struct btrfs_io_bio to btrfs_bio · c3a3b19b
      Qu Wenruo committed
      Previously we had "struct btrfs_bio", which records the IO context for
      mirrored IO and RAID56, and "struct btrfs_io_bio", which records extra
      btrfs specific info for logical bytenr bios.
      
      With "btrfs_bio" renamed to "btrfs_io_context", we are safe to rename
      "btrfs_io_bio" to "btrfs_bio" which is a more suitable name now.
      
      The struct btrfs_bio changes its meaning with this commit. There was a
      suggested name like btrfs_logical_bio, but it's a bit long and we'd
      prefer to use a shorter name.
      
      This could be a concern for backports to older kernels where the
      different meaning could possibly cause confusion or bugs. Comparing the
      new and old structures, there's no overlap among the struct members so a
      build would break in case of incorrect backport.
      
      We haven't had many backports to bio code anyway so this is more of a
      theoretical cause of bugs and a matter of precaution but we'll need to
      keep the semantic change in mind.
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: remove btrfs_bio_alloc() helper · cd8e0cca
      Qu Wenruo committed
      The helper btrfs_bio_alloc() is almost the same as btrfs_io_bio_alloc(),
      except it's allocating using BIO_MAX_VECS as @nr_iovecs, and initializes
      bio->bi_iter.bi_sector.
      
      However the name itself does not use "btrfs_io_bio" to indicate that
      its parameter is "struct btrfs_io_bio", and it can easily be confused
      with "struct btrfs_bio".
      
      Considering that assigning bio->bi_iter.bi_sector is such simple work
      and there are already tons of call sites doing it manually, there is no
      need to do it in a helper.
      
      Remove btrfs_bio_alloc() helper, and enhance btrfs_io_bio_alloc()
      function to provide a fail-safe value for its @nr_iovecs.
      
      And then replace all btrfs_bio_alloc() callers with
      btrfs_io_bio_alloc().
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
  8. 18 Oct 2021, 1 commit
  9. 23 Aug 2021, 5 commits
    • btrfs: rework btrfs_decompress_buf2page() · 1c3dc173
      Qu Wenruo committed
      There are several bugs inside the function btrfs_decompress_buf2page()
      
      - @start_byte doesn't take bvec.bv_offset into consideration
        Thus it can't handle case where the target range is not page aligned.
      
      - Too many helper variables
        There are tons of helper variables, @buf_offset, @current_buf_start,
        @start_byte, @prev_start_byte, @working_bytes, @bytes.
        This hurts anyone who wants to read the function.
      
      - No obvious main cursor for the iteration
        A new problem caused by the previous one.
      
      - The comments for the parameter list make no sense
        Is @buf_start the offset into @buf, or the offset inside the full
        decompressed extent? (Spoiler alert: the latter.)
        And @total_out acts more like @buf_start + @size_of_buf.
      
        The worst is @disk_start.
        Its real meaning is the file offset of the full decompressed extent.
      
      This patch will rework the whole function by:
      
      - Add a proper comment with ASCII art to explain the parameter list
      
      - Rework parameter list
        The old @buf_start is renamed to @decompressed, to show how many bytes
        are already decompressed inside the full decompressed extent.
        The old @total_out is replaced by @buf_len, which is the decompressed
        data size.
        For old @disk_start and @bio, just pass @compressed_bio in.
      
      - Use single main cursor
        The main cursor will be @cur_file_offset, to show what's the current
        file offset.
        Other helper variables will be declared inside the main loop, and only
        minimal amount of helper variables:
        * offset_inside_decompressed_buf:	The only real helper
        * copy_start_file_offset:		File offset we start memcpy
        * bvec_file_offset:			File offset of current bvec
      
      Even with all these extensive comments, the final function is still
      smaller than the original function, which is definitely a win.
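
      The resulting interface described above takes roughly this shape (treat
      the prototype and the cursor note as a sketch, not a verbatim quote):

        /*
         * Copy decompressed data into the pages of cb->orig_bio.
         *
         * @buf:          buffer holding freshly decompressed data
         * @buf_len:      number of valid bytes in @buf
         * @decompressed: bytes of the full decompressed extent produced
         *                before @buf
         */
        int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
                                      struct compressed_bio *cb, u32 decompressed);

        /* The single main cursor inside the function: */
        u64 cur_file_offset = cb->start + decompressed;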
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: grab correct extent map for subpage compressed extent read · 557023ea
      Qu Wenruo committed
      [BUG]
      When subpage compressed read/write support is enabled, btrfs/038 always
      fails with EIO.
      
      A simplified script can easily trigger the problem:
      
        mkfs.btrfs -f -s 4k $dev
        mount $dev $mnt -o compress=lzo
      
        xfs_io -f -c "truncate 118811" $mnt/foo
        xfs_io -c "pwrite -S 0x0d -b 39987 92267 39987" $mnt/foo > /dev/null
      
        sync
        btrfs subvolume snapshot -r $mnt $mnt/mysnap1
      
        xfs_io -c "pwrite -S 0x3e -b 80000 200000 80000" $mnt/foo > /dev/null
        sync
      
        xfs_io -c "pwrite -S 0xdc -b 10000 250000 10000" $mnt/foo > /dev/null
        xfs_io -c "pwrite -S 0xff -b 10000 300000 10000" $mnt/foo > /dev/null
      
        sync
        btrfs subvolume snapshot -r $mnt $mnt/mysnap2
      
        cat $mnt/mysnap2/foo
        # Above cat will fail due to EIO
      
      [CAUSE]
      The problem is in btrfs_submit_compressed_read().
      
      When it tries to grab the extent map of the read range, it uses the
      following call:
      
      	em = lookup_extent_mapping(em_tree,
        				   page_offset(bio_first_page_all(bio)),
      				   fs_info->sectorsize);
      
      The problem is in the page_offset(bio_first_page_all(bio)) part.
      
      The offending inode has the following file extent layout
      
              item 10 key (257 EXTENT_DATA 131072) itemoff 15639 itemsize 53
                      generation 8 type 1 (regular)
                      extent data disk byte 13680640 nr 4096
                      extent data offset 0 nr 4096 ram 4096
                      extent compression 0 (none)
              item 11 key (257 EXTENT_DATA 135168) itemoff 15586 itemsize 53
                      generation 8 type 1 (regular)
                      extent data disk byte 0 nr 0
              item 12 key (257 EXTENT_DATA 196608) itemoff 15533 itemsize 53
                      generation 8 type 1 (regular)
                      extent data disk byte 13676544 nr 4096
                      extent data offset 0 nr 53248 ram 86016
                      extent compression 2 (lzo)
      
      And the bio passed in has the following parameters:
      
      page_offset(bio_first_page_all(bio))	= 131072
      bio_first_bvec_all(bio)->bv_offset	= 65536
      
      If we use page_offset(bio_first_page_all(bio)) without adding bv_offset,
      we will get an extent map for file offset 131072, not 196608.
      
      This means we read uncompressed data from disk, and later decompression
      will definitely fail.
      
      [FIX]
      Take bv_offset into consideration when trying to grab an extent map.
      
      And add an ASSERT() to ensure we're really getting a compressed extent.
      
      Thankfully this won't affect anything but subpage, thus we only need to
      ensure this patch gets merged before we enable basic subpage support.
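
      A sketch of the corrected lookup, following the fix description (the
      error return and the ASSERT condition are illustrative):

        struct page *first_page = bio_first_page_all(bio);
        /* bv_offset matters on subpage: the bio may start mid-page. */
        u64 file_offset = page_offset(first_page) +
                          bio_first_bvec_all(bio)->bv_offset;
        struct extent_map *em;

        read_lock(&em_tree->lock);
        em = lookup_extent_mapping(em_tree, file_offset, fs_info->sectorsize);
        read_unlock(&em_tree->lock);
        if (!em)
                return BLK_STS_IOERR;

        /* We must have landed inside a compressed extent. */
        ASSERT(em->compress_type != BTRFS_COMPRESS_NONE);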
      Reviewed-by: Anand Jain <anand.jain@oracle.com>
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: disable compressed readahead for subpage · ca62e85d
      Qu Wenruo committed
      For current subpage support, we only support 64K page size with 4K
      sector size.
      
      This makes compressed readahead less effective, as the maximum
      compressed extent size is only 128K, just 2x the page size.
      
      On the other hand, the function add_ra_bio_pages() still assumes
      sectorsize == PAGE_SIZE, and changing that code may affect 4K page size
      systems.
      
      So let's disable compressed readahead for subpage for now, with a guard
      like the sketch below.
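
      A minimal sketch of the guard at the top of add_ra_bio_pages(), assuming
      fs_info is reachable from the inode being read:

        /* Compressed readahead is not worth it (and not yet safe) when
         * sectorsize < PAGE_SIZE, so bail out early. */
        if (fs_info->sectorsize < PAGE_SIZE)
                return 0;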
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: compression: drop kmap/kunmap from generic helpers · 4c2bf276
      David Sterba committed
      The pages in compressed_pages are not from highmem anymore, so we can
      drop the mapping for the checksum calculation and the inline extent.
      Signed-off-by: David Sterba <dsterba@suse.com>
    • btrfs: drop from __GFP_HIGHMEM all allocations · b0ee5e1e
      David Sterba committed
      The highmem flag is used for allocating pages for compression and for
      raid56 pages. The high memory makes sense on 32bit systems but is not
      without problems. On 64bit systems it's just another layer of wrappers.
      
      The time the pages are allocated for compression or raid56 is relatively
      short (about a transaction commit), so the pages are not blocked
      indefinitely. As the number of pages depends on the amount of data being
      written/read, there's a theoretical problem. A fast device on a 32bit
      system could use most of the low memory pool, while with the highmem
      allocation that would not happen. This was possibly the original idea
      long time ago, but nowadays we optimize for 64bit systems.
      
      This patch removes all usage of the __GFP_HIGHMEM flag for page
      allocation; the kmap/kunmap calls are still in place and will be removed
      in followup patches. What remains is masking out the bit in
      alloc_extent_state and __lookup_free_space_inode, which can safely stay.
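
      As a sketch, the allocation sites change along these lines (illustrative
      of the compression and raid56 page allocations):

        /* before */
        page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);

        /* after: plain lowmem pages, so the remaining kmap()/kunmap() calls
         * become trivial on 64bit and can go away in followup patches */
        page = alloc_page(GFP_NOFS);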
      Signed-off-by: David Sterba <dsterba@suse.com>
  10. 29 Jul 2021, 1 commit
  11. 22 Jun 2021, 1 commit
  12. 21 Jun 2021, 1 commit
    • btrfs: pass btrfs_inode to btrfs_writepage_endio_finish_ordered() · 38a39ac7
      Qu Wenruo committed
      There is a pretty bad abuse of btrfs_writepage_endio_finish_ordered() in
      end_compressed_bio_write().
      
      It passes compressed pages to btrfs_writepage_endio_finish_ordered(),
      which is only supposed to accept inode pages.
      
      Thankfully the important info here is the inode, so let's pass
      btrfs_inode directly into btrfs_writepage_endio_finish_ordered() and
      make the @page parameter optional.
      
      With this, end_compressed_bio_write() can happily pass page=NULL while
      still getting everything done properly.
      
      Also, to go along with this modification, replace the @page parameter of
      trace_btrfs_writepage_end_io_hook() with btrfs_inode.
      Although this removes the page_index info, the existing start/len should
      be enough for most usage.
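
      A sketch of the resulting interface; the exact prototype may differ
      slightly, but the message above implies something like:

        /* @page may be NULL (e.g. when called from end_compressed_bio_write());
         * the inode now comes in explicitly instead of via page->mapping. */
        void btrfs_writepage_endio_finish_ordered(struct btrfs_inode *inode,
                                                  struct page *page,
                                                  u64 start, u64 end,
                                                  int uptodate);

        /* Caller in end_compressed_bio_write(), roughly: */
        btrfs_writepage_endio_finish_ordered(BTRFS_I(cb->inode), NULL,
                                             cb->start,
                                             cb->start + cb->len - 1,
                                             !cb->errors);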
      Signed-off-by: Qu Wenruo <wqu@suse.com>
      Reviewed-by: David Sterba <dsterba@suse.com>
      Signed-off-by: David Sterba <dsterba@suse.com>