  1. 05 October 2016 (2 commits)
    • xfs: allocate delayed extents in CoW fork · ef473667
      Authored by Darrick J. Wong
      Modify the writepage handler to find and convert pending delalloc
      extents to real allocations.  Furthermore, when we're doing non-CoW
      writes to a part of a file that already has a CoW reservation (the
      cowextsz hint that we set up in a subsequent patch facilitates this),
      promote the write to copy-on-write so that the entire extent can get
      written out as a single extent on disk, thereby reducing post-CoW
      fragmentation.
      
      Christoph moved the CoW support code in _map_blocks to a separate helper
      function, refactored other functions, and reduced the number of CoW fork
      lookups, so I merged those changes here to reduce churn.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      ef473667
    • xfs: support allocating delayed extents in CoW fork · 60b4984f
      Authored by Darrick J. Wong
      Modify xfs_bmap_add_extent_delay_real() so that we can convert delayed
      allocation extents in the CoW fork to real allocations, and wire this
      up all the way back to xfs_iomap_write_allocate().  In a subsequent
      patch, we'll modify the writepage handler to call this.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      60b4984f
  2. 19 September 2016 (1 commit)
  3. 22 July 2016 (2 commits)
    • xfs: bufferhead chains are invalid after end_page_writeback · 28b783e4
      Authored by Dave Chinner
      In xfs_finish_page_writeback(), we have a loop that looks like this:
      
              do {
                      if (off < bvec->bv_offset)
                              goto next_bh;
                      if (off > end)
                              break;
                      bh->b_end_io(bh, !error);
      next_bh:
                      off += bh->b_size;
              } while ((bh = bh->b_this_page) != head);
      
      The b_end_io function is end_buffer_async_write(), which will call
      end_page_writeback() once all the buffers have been marked as no longer
      under IO.  The issue here is that the only thing currently
      protecting both the bufferhead chain and the page from being
      reclaimed is the PageWriteback state held on the page.
      
      While we attempt to limit the loop to just the buffers covered by
      the IO, we still read from the buffer size and follow the next
      pointer in the bufferhead chain. There is no guarantee that either
      of these are valid after the PageWriteback flag has been cleared.
      Hence, loops like this are completely unsafe, and result in
      use-after-free issues. One such problem was caught by Calvin Owens
      with KASAN:
      
      .....
       INFO: Freed in 0x103fc80ec age=18446651500051355200 cpu=2165122683 pid=-1
        free_buffer_head+0x41/0x90
        __slab_free+0x1ed/0x340
        kmem_cache_free+0x270/0x300
        free_buffer_head+0x41/0x90
        try_to_free_buffers+0x171/0x240
        xfs_vm_releasepage+0xcb/0x3b0
        try_to_release_page+0x106/0x190
        shrink_page_list+0x118e/0x1a10
        shrink_inactive_list+0x42c/0xdf0
        shrink_zone_memcg+0xa09/0xfa0
        shrink_zone+0x2c3/0xbc0
      .....
       Call Trace:
        <IRQ>  [<ffffffff81e8b8e4>] dump_stack+0x68/0x94
        [<ffffffff8153a995>] print_trailer+0x115/0x1a0
        [<ffffffff81541174>] object_err+0x34/0x40
        [<ffffffff815436e7>] kasan_report_error+0x217/0x530
        [<ffffffff81543b33>] __asan_report_load8_noabort+0x43/0x50
        [<ffffffff819d651f>] xfs_destroy_ioend+0x3bf/0x4c0
        [<ffffffff819d69d4>] xfs_end_bio+0x154/0x220
        [<ffffffff81de0c58>] bio_endio+0x158/0x1b0
        [<ffffffff81dff61b>] blk_update_request+0x18b/0xb80
        [<ffffffff821baf57>] scsi_end_request+0x97/0x5a0
        [<ffffffff821c5558>] scsi_io_completion+0x438/0x1690
        [<ffffffff821a8d95>] scsi_finish_command+0x375/0x4e0
        [<ffffffff821c3940>] scsi_softirq_done+0x280/0x340
      
      
      Here the access is occurring during IO completion after the buffer
      had been freed by direct memory reclaim.
      
      Prevent use-after-free accidents in this end_io processing loop by
      pre-calculating the loop conditionals before calling bh->b_end_io().
      The loop is already limited to just the bufferheads covered by the
      IO in progress, so the offset checks are sufficient to prevent
      accessing buffers in the chain after end_page_writeback() has been
      called by the bh->b_end_io() callout (a sketch of the reworked loop
      follows this entry).
      
      Yet another example of why Bufferheads Must Die.
      
      cc: <stable@vger.kernel.org> # 4.7
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reported-and-Tested-by: Calvin Owens <calvinowens@fb.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      28b783e4
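
      A minimal sketch of the reworked loop, assuming the same variable names as
      the excerpt above: the chain pointer and buffer size are read before
      b_end_io() is called, so nothing on the bufferhead chain is dereferenced
      after end_page_writeback() may have run.

              do {
                      struct buffer_head *next = bh->b_this_page; /* read before b_end_io() */
                      unsigned int bsize = bh->b_size;            /* read before b_end_io() */

                      if (off < bvec->bv_offset)
                              goto next_bh;
                      if (off > end)
                              break;
                      bh->b_end_io(bh, !error);
              next_bh:
                      off += bsize;
              } while ((bh = next) != head);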
    • xfs: skip dirty pages in ->releasepage() · 99579cce
      Authored by Brian Foster
      XFS has had scattered reports of delalloc blocks present at
      ->releasepage() time. This results in a warning with a stack trace
      similar to the following:
      
       ...
       Call Trace:
        [<ffffffffa23c5b8f>] dump_stack+0x63/0x84
        [<ffffffffa20837a7>] warn_slowpath_common+0x97/0xe0
        [<ffffffffa208380a>] warn_slowpath_null+0x1a/0x20
        [<ffffffffa2326caf>] xfs_vm_releasepage+0x10f/0x140
        [<ffffffffa218c680>] ? page_mkclean_one+0xd0/0xd0
        [<ffffffffa218d3a0>] ? anon_vma_prepare+0x150/0x150
        [<ffffffffa21521c2>] try_to_release_page+0x32/0x50
        [<ffffffffa2166b2e>] shrink_active_list+0x3ce/0x3e0
        [<ffffffffa21671c7>] shrink_lruvec+0x687/0x7d0
        [<ffffffffa21673ec>] shrink_zone+0xdc/0x2c0
        [<ffffffffa2168539>] kswapd+0x4f9/0x970
        [<ffffffffa2168040>] ? mem_cgroup_shrink_node_zone+0x1a0/0x1a0
        [<ffffffffa20a0d99>] kthread+0xc9/0xe0
        [<ffffffffa20a0cd0>] ? kthread_stop+0x100/0x100
        [<ffffffffa26b404f>] ret_from_fork+0x3f/0x70
        [<ffffffffa20a0cd0>] ? kthread_stop+0x100/0x100
      
      This occurs because it is possible for shrink_active_list() to send
      pages marked dirty to ->releasepage() when certain buffer_head threshold
      conditions are met. shrink_active_list() doesn't check the page dirty
      state, apparently to handle an old ext3 corner case where clean pages
      would not always have the dirty bit cleared, so it is up to the
      filesystem to determine how to handle the page.
      
      XFS currently handles the delalloc case properly, but this behavior
      makes the warning spurious. Update the XFS ->releasepage() handler to
      explicitly skip dirty pages (a sketch of the check follows this entry).
      Retain the existing delalloc/unwritten checks so we continue to warn if
      such buffers exist on clean pages when they shouldn't.
      Diagnosed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      99579cce
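
      A sketch of the added check, assuming it sits at the top of
      xfs_vm_releasepage() ahead of the existing delalloc/unwritten warnings:

              /* dirty pages can legitimately reach ->releasepage(); just decline */
              if (PageDirty(page))
                      return 0;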
  4. 20 July 2016 (1 commit)
  5. 21 June 2016 (2 commits)
  6. 08 June 2016 (2 commits)
  7. 20 May 2016 (1 commit)
  8. 02 May 2016 (1 commit)
  9. 06 April 2016 (4 commits)
    • xfs: better xfs_trans_alloc interface · 253f4911
      Authored by Christoph Hellwig
      Merge xfs_trans_reserve and xfs_trans_alloc into a single function call
      that returns a transaction with all the required log and block reservations,
      and which allows passing transaction flags directly to avoid the cumbersome
      _xfs_trans_alloc interface.
      
      While we're at it we also get rid of the transaction type argument, which has
      been superfluous since we stopped supporting the non-CIL logging mode.  The
      guts of it will be removed in another patch.  (An example of the new calling
      convention follows this entry.)
      
      [dchinner: fixed transaction leak in error path in xfs_setattr_nonsize]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      253f4911
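
      An illustrative call under the new interface (a sketch; tr_ichange is just
      one entry from the existing per-mount reservation table, picked as an
      example):

              struct xfs_trans        *tp;
              int                     error;

              /* one call allocates the transaction and takes the log/block reservation */
              error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
              if (error)
                      return error;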
    • xfs: optimize bio handling in the buffer writeback path · 0e51a8e1
      Authored by Christoph Hellwig
      This patch implements two closely related changes:  First it embeds
      a bio the ioend structure so that we don't have to allocate one
      separately.  Second it uses the block layer bio chaining mechanism
      to chain additional bios off this first one if needed instead of
      manually accounting for multiple bio completions in the ioend
      structure.  Together this removes a memory allocation per ioend and
      greatly simplifies the ioend setup and I/O completion path (a sketch of
      the resulting ioend layout follows this entry).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      0e51a8e1
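
      A sketch of the resulting ioend layout (field names as they appear in the
      XFS tree around this series; treat it as illustrative rather than a
      verbatim copy):

              struct xfs_ioend {
                      struct list_head        io_list;        /* next ioend in chain */
                      unsigned int            io_type;        /* delalloc/unwritten/overwrite */
                      struct inode            *io_inode;      /* file being written to */
                      size_t                  io_size;        /* size of the extent */
                      xfs_off_t               io_offset;      /* offset in the file */
                      struct xfs_trans        *io_append_trans; /* transaction for size update */
                      struct bio              *io_bio;        /* bio being built */
                      struct bio              io_inline_bio;  /* must be last! */
              };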
    • xfs: don't release bios on completion immediately · 37992c18
      Authored by Dave Chinner
      Completion of an ioend requires us to walk the bufferhead list to
      end writeback on all the bufferheads. This, in turn, is needed so
      that we can end writeback on all the pages we just did IO on.
      
      To remove our dependency on bufferheads in writeback, we need to
      turn this around the other way - we need to walk the pages we've
      just completed IO on, and then walk the buffers attached to the
      pages and complete their IO. In doing this, we remove the
      requirement for the ioend to track bufferheads directly.
      
      To enable IO completion to walk all the pages we've submitted IO on,
      we need to keep the bios that we used for IO around until the ioend
      has been completed. We can do this simply by chaining the bios to
      the ioend at completion time, and then walking their pages directly
      just before destroying the ioend (a sketch of that walk follows this
      entry).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      [hch: changed the xfs_finish_page_writeback calling convention]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      37992c18
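
      A sketch of the completion-side walk described above, assuming the chained
      bios are linked through bi_private and the first bio is the one embedded in
      the ioend (as in the layout sketched under the 0e51a8e1 entry above);
      illustrative, not the exact upstream code:

              struct bio *bio, *next;
              struct bio_vec *bvec;
              int i;

              /* walk every bio submitted for this ioend and finish its pages */
              for (bio = &ioend->io_inline_bio; bio; bio = next) {
                      next = bio->bi_private;         /* link to the next chained bio */
                      bio_for_each_segment_all(bvec, bio, i)
                              xfs_finish_page_writeback(ioend->io_inode, bvec, error);
                      bio_put(bio);
              }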
    • xfs: build bios directly in xfs_add_to_ioend · bb18782a
      Authored by Dave Chinner
      Currently adding a buffer to the ioend and then building a bio from
      the buffer list are two separate operations. We don't build the bios
      and submit them until the ioend is submitted, and this places a
      fixed dependency on bufferhead chaining in the ioend.
      
      The first step to removing the bufferhead chaining in the ioend is
      on the IO submission side. We can build the bio directly as we add
      the buffers to the ioend chain, thereby removing the need for a
      later "buffer-to-bio" submission loop (a sketch of the idea follows
      this entry). This allows us to submit bios on large ioends as soon
      as we cannot add more data to the bio.
      
      These bios then get captured by the active plug, and hence will be
      dispatched as soon as either the plug overflows or we schedule away
      from the writeback context. This will reduce submission latency for
      large IOs, but will also allow more timely request queue based
      writeback blocking when the device becomes congested.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      [hch: various small updates]
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      bb18782a
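
      A sketch of the submission-side idea, assuming the bio currently being built
      is reachable from the writeback context's ioend; xfs_chain_bio() is a
      stand-in name here for "allocate a new bio, chain it to the previous one,
      and carry on":

              /* try to add this buffer to the bio we are currently building */
              if (bio_add_page(wpc->ioend->io_bio, bh->b_page, bh->b_size,
                               bh_offset(bh)) != bh->b_size) {
                      /*
                       * The bio is full: chain a fresh one off it and retry.  The
                       * full bio can be handed to the block layer right away and
                       * sits on the current plug until the plug is flushed.
                       */
                      xfs_chain_bio(wpc->ioend, wbc, bh);
              }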
  10. 05 April 2016 (2 commits)
    • mm, fs: remove remaining PAGE_CACHE_* and page_cache_{get,release} usage · ea1754a0
      Authored by Kirill A. Shutemov
      Mostly direct substitution with occasional adjustment or removing
      outdated comments.
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ea1754a0
    • mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros · 09cbfeaf
      Authored by Kirill A. Shutemov
      The PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time
      ago with the promise that one day it would be possible to implement the
      page cache with bigger chunks than PAGE_SIZE.
      
      This promise never materialized, and it is unlikely it ever will.
      
      We have many places where PAGE_CACHE_SIZE is assumed to be equal to
      PAGE_SIZE.  And it's a constant source of confusion about whether the
      PAGE_CACHE_* or the PAGE_* constant should be used in a particular case,
      especially on the border between fs and mm.
      
      Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much
      breakage to be doable.
      
      Let's stop pretending that pages in page cache are special.  They are
      not.
      
      The changes are pretty straightforward:
      
       - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
      
       - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
      
       - page_cache_get() -> get_page();
      
       - page_cache_release() -> put_page();
      
      This patch contains automated changes generated with coccinelle using the
      script below.  For some reason, coccinelle doesn't patch header files, so
      I've called spatch on them manually.
      
      The only adjustment after coccinelle is the revert of the changes to the
      PAGE_CACHE_ALIGN definition: we are going to drop it later.
      
      There are a few places in the code that coccinelle didn't reach.  I'll
      fix them manually in a separate patch.  Comments and documentation will
      also be addressed in a separate patch.  (An illustrative before/after
      example follows this entry.)
      
      virtual patch
      
      @@
      expression E;
      @@
      - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      expression E;
      @@
      - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
      + E
      
      @@
      @@
      - PAGE_CACHE_SHIFT
      + PAGE_SHIFT
      
      @@
      @@
      - PAGE_CACHE_SIZE
      + PAGE_SIZE
      
      @@
      @@
      - PAGE_CACHE_MASK
      + PAGE_MASK
      
      @@
      expression E;
      @@
      - PAGE_CACHE_ALIGN(E)
      + PAGE_ALIGN(E)
      
      @@
      expression E;
      @@
      - page_cache_get(E)
      + get_page(E)
      
      @@
      expression E;
      @@
      - page_cache_release(E)
      + put_page(E)
      Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      09cbfeaf
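
      An illustrative before/after of the substitution in ordinary filesystem code
      (hypothetical snippet, not taken from any particular file):

              /* before */
              pgoff_t index = pos >> PAGE_CACHE_SHIFT;
              size_t  len   = min_t(size_t, count, PAGE_CACHE_SIZE - offset);
              page_cache_release(page);

              /* after */
              pgoff_t index = pos >> PAGE_SHIFT;
              size_t  len   = min_t(size_t, count, PAGE_SIZE - offset);
              put_page(page);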
  11. 16 March 2016 (2 commits)
  12. 15 March 2016 (1 commit)
    • xfs: debug mode forced buffered write failure · 801cc4e1
      Authored by Brian Foster
      Add a DEBUG mode-only sysfs knob to enable forced buffered write
      failure. An additional side effect of this mode is brute force killing
      of delayed allocation blocks in the range of the write. The latter is
      the prime motivation behind this patch, as userspace test
      infrastructure requires a reliable mechanism to create and split
      delalloc extents without causing extent conversion.
      
      Certain fallocate operations (i.e., zero range) were used for this in
      the past, but the implementations have changed such that delalloc
      extents are flushed and converted to real blocks, rendering the test
      useless.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      801cc4e1
  13. 07 March 2016 (1 commit)
  14. 28 February 2016 (2 commits)
    • dax: move writeback calls into the filesystems · 7f6d5b52
      Authored by Ross Zwisler
      Previously calls to dax_writeback_mapping_range() for all DAX filesystems
      (ext2, ext4 & xfs) were centralized in filemap_write_and_wait_range().
      
      dax_writeback_mapping_range() needs a struct block_device, and it used
      to get that from inode->i_sb->s_bdev.  This is correct for normal inodes
      mounted on ext2, ext4 and XFS filesystems, but is incorrect for DAX raw
      block devices and for XFS real-time files.
      
      Instead, call dax_writeback_mapping_range() directly from the filesystem
      ->writepages function so that it can supply us with a valid block
      device.  This also fixes DAX code to properly flush caches in response
      to sync(2).  (A sketch of the XFS hook-up follows this entry.)
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Jan Kara <jack@suse.cz>
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Dan Williams <dan.j.williams@intel.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      7f6d5b52
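
      A sketch of the XFS side of this change, assuming the existing
      xfs_find_bdev_for_inode() helper supplies the right block device (data or
      real-time); illustrative rather than the exact diff:

              STATIC int
              xfs_vm_writepages(
                      struct address_space    *mapping,
                      struct writeback_control *wbc)
              {
                      xfs_iflags_clear(XFS_I(mapping->host), XFS_ITRUNCATED);
                      if (dax_mapping(mapping))
                              return dax_writeback_mapping_range(mapping,
                                              xfs_find_bdev_for_inode(mapping->host), wbc);

                      return generic_writepages(mapping, wbc);
              }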
    • dax: give DAX clearing code correct bdev · 20a90f58
      Authored by Ross Zwisler
      dax_clear_blocks() needs a valid struct block_device and previously it
      was using inode->i_sb->s_bdev in all cases.  This is correct for normal
      inodes on mounted ext2, ext4 and XFS filesystems, but is incorrect for
      DAX raw block devices and for XFS real-time devices.
      
      Instead, rename dax_clear_blocks() to dax_clear_sectors(), and change
      its arguments to take a bdev and a sector instead of an inode and a
      block.  This better reflects what the function does, and it allows the
      filesystem and raw block device code to pass in an appropriate struct
      block_device (the new shape is sketched after this entry).
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Suggested-by: Dan Williams <dan.j.williams@intel.com>
      Reviewed-by: Jan Kara <jack@suse.cz>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Cc: Al Viro <viro@ftp.linux.org.uk>
      Cc: Dave Chinner <david@fromorbit.com>
      Cc: Jens Axboe <axboe@fb.com>
      Cc: Matthew Wilcox <matthew.r.wilcox@intel.com>
      Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
      Cc: Theodore Ts'o <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      20a90f58
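
      A sketch of the renamed helper's prototype as described above (callers then
      choose the bdev themselves, e.g. the real-time device for XFS rt files):

              /* previously dax_clear_blocks(), which took an inode and a block */
              int dax_clear_sectors(struct block_device *bdev, sector_t _sector, long _size);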
  15. 15 February 2016 (6 commits)
    • xfs: don't chain ioends during writepage submission · e10de372
      Authored by Dave Chinner
      Currently we can build a long ioend chain during ->writepages that
      gets attached to the writepage context. IO submission only then
      occurs when we finish all the writepage processing. This means we
      can have many ioends allocated and pending, and this violates the
      mempool guarantees that we need to give about forwards progress.
      i.e. we really should only have one ioend being built at a time,
      otherwise we may drain the mempool trying to allocate a new ioend
      and that blocks submission, completion and freeing of ioends that
      are already in progress.
      
      To prevent this situation from happening, we need to submit ioends
      for IO as soon as they are ready for dispatch rather than queuing
      them for later submission. This means the ioends have bios built
      immediately and they get queued on any plug that is currently active.
      Hence if we schedule away from writeback, the ioends that have been
      built will make forwards progress due to the plug flushing on
      context switch. This will also prevent context switches from
      creating unnecessary IO submission latency.
      
      We can't completely avoid having nested IO allocation - when we have
      a block size smaller than a page size, we still need to hold the
      ioend submission until after we have marked the current page dirty.
      Hence we may need multiple ioends to be held while the current page
      is completely mapped and made ready for IO dispatch. We cannot avoid
      this problem - the current code already has this ioend chaining
      within a page so we can mostly ignore that it occurs.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      e10de372
    • xfs: factor mapping out of xfs_do_writepage · bfce7d2e
      Authored by Dave Chinner
      Separate out the bufferhead based mapping from the writepage code so
      that we have a clear separation of the page operations and the
      bufferhead state.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      bfce7d2e
    • xfs: xfs_cluster_write is redundant · ad68972a
      Authored by Dave Chinner
      xfs_cluster_write() is not necessary now that xfs_vm_writepages()
      aggregates writepage calls across a single mapping. This means we no
      longer need to do page lookups in xfs_cluster_write, so writeback
      only needs to look up the page cache once per page being written.
      This also removes a large amount of mostly duplicate code between
      xfs_do_writepage() and xfs_convert_page().
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      ad68972a
    • xfs: Introduce writeback context for writepages · fbcc0256
      Authored by Dave Chinner
      xfs_vm_writepages() calls generic_writepages to write back a range of
      a file, but then xfs_vm_writepage() clusters pages itself as it does
      not have any context it can pass between ->writepage calls from
      write_cache_pages().
      
      Introduce a writeback context for xfs_vm_writepages() and call
      write_cache_pages directly with our own writepage callback so that
      we can pass that context to each writepage invocation. This
      encapsulates the current mapping, whether it is valid or not, the
      current ioend and its IO type, and the ioend chain being built (a
      sketch of the context follows this entry).
      
      This requires us to move the ioend submission up to the level where
      the writepage context is declared. This does mean we do not submit
      IO until we have packaged the entire writeback range, but with the block
      plugging in the writepages call this is the way IO is submitted,
      anyway.
      
      It also means that we need to handle discontiguous page ranges.  If
      the pages sent down by write_cache_pages to the writepage callback
      are discontiguous, we need to detect this and put each discontiguous
      page range into individual ioends. This is needed to ensure that the
      ioend accurately represents the range of the file that it covers so
      that file size updates during IO completion set the size correctly.
      Failure to take into account the discontiguous ranges results in
      files being too small when writeback patterns are non-sequential.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      fbcc0256
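
      A sketch of the writeback context and how xfs_vm_writepages() drives it
      (names as in the XFS tree around this series; treat it as illustrative):

              struct xfs_writepage_ctx {
                      struct xfs_bmbt_irec    imap;           /* current block mapping */
                      bool                    imap_valid;     /* is the mapping still usable? */
                      unsigned int            io_type;        /* delalloc/unwritten/overwrite */
                      struct xfs_ioend        *ioend;         /* ioend chain being built */
                      sector_t                last_block;
              };

              STATIC int
              xfs_vm_writepages(
                      struct address_space    *mapping,
                      struct writeback_control *wbc)
              {
                      struct xfs_writepage_ctx wpc = {
                              .io_type = XFS_IO_INVALID,
                      };
                      int                     ret;

                      ret = write_cache_pages(mapping, wbc, xfs_do_writepage, &wpc);
                      if (wpc.ioend)
                              ret = xfs_submit_ioend(wbc, wpc.ioend, ret);
                      return ret;
              }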
    • xfs: remove xfs_cancel_ioend · 150d5be0
      Authored by Dave Chinner
      We currently have code to cancel ioends being built because we
      change bufferhead state as we build the ioend. On error, this needs
      to be unwound and so we have cancelling code that walks the buffers
      on the ioend chain and undoes these state changes.
      
      However, the IO submission path already handles state changes for
      buffers when a submission error occurs, so we don't really need a
      separate cancel function to do this - we can simply submit the
      ioend chain with the specific error and it will be cancelled rather
      than submitted.
      
      Hence we can remove the explicit cancel code and just rely on
      submission to deal with the error correctly.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      150d5be0
    • xfs: remove nonblocking mode from xfs_vm_writepage · 988ef927
      Authored by Dave Chinner
      Remove the nonblocking optimisation done for mapping lookups during
      writeback. It's not clear that leaving a hole in the writeback range
      just because we couldn't get a lock is really a win, as it makes us
      do another small random IO later on rather than a large sequential
      IO now.
      
      As this gets in the way of sane error handling later on, just remove it
      for the moment; we can re-introduce an equivalent optimisation in the
      future if we see problems due to extent map lock contention.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      988ef927
  16. 08 February 2016 (5 commits)
    • xfs: fix xfs_log_ticket leak in xfs_end_io() after fs shutdown · af055e37
      Authored by Brian Foster
      If the filesystem has shut down, xfs_end_io() currently sets an
      error on the ioend and proceeds to ioend destruction. The ioend
      might contain a truncate transaction if the I/O extended the size of
      the file. This transaction is only cleaned up in
      xfs_setfilesize_ioend(), however, which is skipped in this case.
      This results in an xfs_log_ticket leak message when the associated
      cache slab is destroyed (e.g., on rmmod).
      
      This was originally reproduced by xfs/141 on a distro kernel. The
      problem is reproducible on an upstream kernel, but not easily
      detected in current upstream if the xfs_log_ticket cache happens to
      be merged with another cache. This can be reproduced more
      deterministically with the 'slab_nomerge' kernel boot option.
      
      Update xfs_end_io() to proceed with normal end I/O processing after
      an error is set on an ioend due to fs shutdown. The I/O type-based
      processing is already designed to handle an I/O error and ensure
      that the ioend is cleaned up correctly.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      af055e37
    • xfs: clean up unwritten buffers on write failure · 60630fe6
      Authored by Brian Foster
      The xfs_vm_write_failed() handler is currently responsible for cleaning
      up any delalloc blocks over the range of a failed write beyond EOF.
      Failure to do so results in warning messages and other inconsistencies
      between buffer and extent state. The ->releasepage() handler currently
      warns in the event of a page being released with either unwritten or
      delalloc buffers, as neither is ever expected by the time a page is
      released.
      
      As has been reproduced by generic/083 on a -bsize=1k fs, it is currently
      possible to trigger the ->releasepage() warning for a page with
      unwritten buffers when a filesystem is near ENOSPC. This is reproduced
      by the following sequence:
      
        $ mkfs.xfs -f -b size=1k -d size=100m <dev>
        $ mount <dev> /mnt/
        $
        $ xfs_io -fc "falloc -k 0 1k" /mnt/file
        $ dd if=/dev/zero of=/mnt/enospc conv=notrunc oflag=append
        $
        $ xfs_io -c "pwrite 512 1k" /mnt/file
        $ xfs_io -d -c "pwrite 16k 1k" /mnt/file
      
      The first pwrite command attempts a block unaligned write across an
      unwritten block and a hole. The delalloc for the hole fails with ENOSPC
      and the subsequent error handling does not clean up the unwritten buffer
      that was instantiated during the first ->get_block() call.
      
      The second pwrite triggers a warning as part of the inode mapping
      invalidation that occurs prior to direct I/O. The releasepage() handler
      detects the unwritten buffer at this time, warns and prevents the
      release of the page.
      
      To deal with this problem, update xfs_vm_write_failed() to clean up
      unwritten as well as delalloc buffers that are beyond EOF and within the
      range of the failed write.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      60630fe6
    • c19b104a
    • xfs: don't use ioends for direct write completions · 273dda76
      Authored by Christoph Hellwig
      We only need to communicate two bits of information to the direct I/O
      completion handler:
      
       (1) do we need to convert any unwritten extents in the range
       (2) do we need to check if we need to update the inode size based
           on the range passed to the completion handler
      
      We can use the private data passed to the get_block handler and the
      completion handler as a simple bitmask to communicate this information
      instead of the current complicated infrastructure reusing the ioends
      from the buffer I/O path, thus avoiding a memory allocation and
      a context switch for any non-trivial direct write (the flag bits are
      sketched after this entry).  As a nice side effect we also decouple the
      direct I/O path implementation from that of the buffered I/O path.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      
      273dda76
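
      A sketch of the bitmask idea; the flag names follow the two cases listed
      above and should be treated as illustrative:

              /*
               * Flags packed into the dio ->private value instead of allocating an
               * ioend for direct I/O completion.
               */
              #define XFS_DIO_FLAG_UNWRITTEN  (1 << 0)        /* unwritten extent conversion needed */
              #define XFS_DIO_FLAG_APPEND     (1 << 1)        /* completion may have to update i_size */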
    • direct-io: always call ->end_io if non-NULL · 187372a3
      Authored by Christoph Hellwig
      This way we can pass back errors to the file system, and allow for
      cleanup required for all direct I/O invocations.
      
      Also allow the ->end_io handlers to return errors on their own, so that
      I/O completion errors can be passed on to the callers (the resulting
      callback shape is sketched after this entry).
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      187372a3
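
      A sketch of the completion callback's shape after this change: it is now
      invoked for every direct I/O that supplies one, and may return an error of
      its own.

              /* direct I/O completion callback type (sketch) */
              typedef int (dio_iodone_t)(struct kiocb *iocb, loff_t offset,
                                         ssize_t bytes, void *private);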
  17. 08 January 2016 (1 commit)
  18. 03 November 2015 (3 commits)
    • xfs: DAX does not use IO completion callbacks · 01a155e6
      Authored by Dave Chinner
      For DAX, we are now doing block zeroing during allocation. This
      means we no longer need a special DAX fault IO completion callback
      to do unwritten extent conversion. Because mmap never extends the
      file size (it SEGVs the process) we don't need a callback to update
      the file size, either. Hence we can remove the completion callbacks
      from the __dax_fault and __dax_mkwrite calls.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      01a155e6
    • xfs: Don't use unwritten extents for DAX · 1ca19157
      Authored by Dave Chinner
      DAX has a page fault serialisation problem with block allocation.
      Because it allows concurrent page faults and does not have a page
      lock to serialise faults to the same page, it can get two concurrent
      faults to the page that race.
      
      When two read faults race, this isn't a huge problem as the data
      underlying the page is not changing and so "detect and drop" works
      just fine. The issues are to do with write faults.
      
      When two write faults occur, we serialise block allocation in
      get_blocks() so only one fault will allocate the extent. It will,
      however, be marked as an unwritten extent, and that is where the
      problem lies - the DAX fault code cannot differentiate between a
      block that was just allocated and a block that was preallocated and
      needs zeroing. The result is that both write faults end up zeroing
      the block and attempting to convert it back to written.
      
      The problem is that the first fault can zero and convert before the
      second fault starts zeroing, resulting in the zeroing for the second
      fault overwriting the data that the first fault wrote with zeros.
      The second fault then attempts to convert the unwritten extent,
      which is then a no-op because it's already written. Data loss occurs
      as a result of this race.
      
      Because there is no sane locking construct in the page fault code
      that we can use for serialisation across the page faults, we need to
      ensure block allocation and zeroing occurs atomically in the
      filesystem. This means we can still take concurrent page faults and
      the only time they will serialise is in the filesystem
      mapping/allocation callback. The page fault code will always see
      written, initialised extents, so we will be able to remove the
      unwritten extent handling from the DAX code when all filesystems are
      converted.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      1ca19157
    • xfs: fix inode size update overflow in xfs_map_direct() · 3e12dbbd
      Authored by Dave Chinner
      Both direct IO and DAX pass an offset and count into get_blocks that
      will overflow a s64 variable when an IO goes into the last supported
      block in a file (i.e. at offset 2^63 - 1FSB bytes). This can be seen
      from the tracing:
      
      xfs_get_blocks_alloc: [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct:     [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct_none:[...] offset 0x7ffffffffffff000 count 4096
      
      0x7ffffffffffff000 + 4096 = 0x8000000000000000, and hence that
      overflows the s64 offset, we fail to detect the need for a
      filesize update, and an ioend is not allocated (a wrap-safe way to
      phrase the check is sketched after this entry).
      
      This is *mostly* avoided for direct IO because such extending IOs
      occur with full block allocation, and so the "IS_UNWRITTEN()" check
      still evaluates as true and we get an ioend that way. However, doing
      single sector extending IOs to this last block will expose the fact
      that file size updates will not occur after the first allocating
      direct IO as the overflow will then be exposed.
      
      There is one further complexity: the DAX page fault path also
      exposes the same issue in block allocation. However, page faults
      cannot extend the file size, so in this case we want to allocate the
      block but do not want to allocate an ioend to enable file size
      update at IO completion. Hence we now need to distinguish between
      the direct IO path allocation and the DAX fault path allocation to
      avoid leaking ioend structures.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      3e12dbbd
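
      A small illustration of the overflow described above and a wrap-safe way to
      phrase the size-update check; this is a sketch of the idea, not the actual
      patch:

              /*
               * offset = 0x7ffffffffffff000, count = 4096: the sum is
               * 0x8000000000000000, which is negative as a signed 64-bit offset,
               * so a naive "offset + count > i_size" test misses the need for a
               * size update.  Comparing against the remaining space never forms
               * the overflowing sum:
               */
              bool extends_isize = (xfs_off_t)count > i_size_read(inode) - offset;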
  19. 12 October 2015 (1 commit)
    • xfs: add missing ilock around dio write last extent alignment · 009c6e87
      Authored by Brian Foster
      The iomap codepath (via get_blocks()) acquires and releases the inode
      lock in the case of a direct write that requires block allocation. This
      is because xfs_iomap_write_direct() allocates a transaction, which means
      the ilock must be dropped and reacquired after the transaction is
      allocated and reserved.
      
      xfs_iomap_write_direct() invokes xfs_iomap_eof_align_last_fsb() before
      the transaction is created and thus before the ilock is reacquired. This
      can lead to calls to xfs_iread_extents() and reads of the in-core extent
      list without any synchronization (via xfs_bmap_eof() and
      xfs_bmap_last_extent()). xfs_iread_extents() assert fails if the ilock
      is not held, but this is not currently seen in practice as the current
      callers had already invoked xfs_bmapi_read().
      
      What has been seen in practice are reports of crashes down in the
      xfs_bmap_eof() codepath on direct writes due to seemingly bogus pointer
      references from xfs_iext_get_ext(). While an explicit reproducer is not
      currently available to confirm the cause of the problem, crash analysis
      and code inspection from David Jeffrey had identified the insufficient
      locking.
      
      xfs_iomap_eof_align_last_fsb() is called from other contexts with the
      inode lock already held, so we cannot acquire it therein.
      __xfs_get_blocks() acquires and drops the ilock with variable flags to
      cover the event that the extent list must be read in. The common case is
      that __xfs_get_blocks() acquires the shared ilock. To provide locking
      around the last extent alignment call without adding more lock cycles to
      the dio path, update xfs_iomap_write_direct() to expect the shared ilock
      held on entry and do the extent alignment under its protection. Demote
      the lock, if necessary, from __xfs_get_blocks() and push the
      xfs_qm_dqattach() call outside of the shared lock critical section (the
      resulting locking pattern is sketched after this entry).
      Also, add an assert to document that the extent list is always expected
      to be present in this path. Otherwise, we risk a call to
      xfs_iread_extents() while under the shared ilock. This is safe as all
      current callers have executed an xfs_bmapi_read() call under the current
      iolock context.
      Reported-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      009c6e87
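
      A sketch of the locking pattern described above, using the existing
      xfs_ilock_data_map_shared()/xfs_ilock_demote() helpers; illustrative, not
      the exact diff:

              error = xfs_qm_dqattach(ip, 0);         /* moved outside the ilock */
              if (error)
                      return error;

              /* may take the ilock exclusively if the extent list must be read in */
              lockmode = xfs_ilock_data_map_shared(ip);
              if (lockmode == XFS_ILOCK_EXCL)
                      xfs_ilock_demote(ip, XFS_ILOCK_EXCL);

              /* xfs_iomap_write_direct() now expects the shared ilock held on entry */
              error = xfs_iomap_write_direct(ip, offset, size, &imap, nimaps);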