1. 04 Jan 2016 (1 commit)
    • xfs: Don't use reserved blocks for data blocks with DAX · 3b0fe478
      Dave Chinner authored
      Commit 1ca19157 ("xfs: Don't use unwritten extents for DAX") enabled
      the DAX allocation call to dip into the reserve pool in case it was
      converting unwritten extents rather than allocating blocks. This was
      a direct copy of the unwritten extent conversion code, but had an
      unintended side effect of allowing normal data block allocation to
      use the reserve pool. Hence normal block allocation could deplete
      the reserve pool and prevent unwritten extent conversion at ENOSPC,
      thereby violating fallocate guarantees on preallocated space.
      
      Fix it by checking whether the incoming map from __xfs_get_blocks()
      spans an unwritten extent and only use the reserve pool if the
      allocation covers an unwritten extent.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Tested-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
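
      A hedged sketch of the check (names follow the surrounding XFS code,
      but this is not the exact diff): only a DAX conversion of an existing
      unwritten extent may dip into the reserve pool.

          /* plain data block allocation must not use the reserve pool */
          if (IS_DAX(inode) && imap && ISUNWRITTEN(imap))
                  tp->t_flags |= XFS_TRANS_RESERVE;
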
  2. 14 Nov 2015 (1 commit)
  3. 11 Nov 2015 (1 commit)
    • vfs: remove unused wrapper block_page_mkwrite() · 5c500029
      Ross Zwisler authored
      The function currently called "__block_page_mkwrite()" used to be called
      "block_page_mkwrite()" until a wrapper for this function was added by:
      
      commit 24da4fab ("vfs: Create __block_page_mkwrite() helper passing
      	error values back")
      
      This wrapper, the current "block_page_mkwrite()", is now unused, while
      __block_page_mkwrite() is used directly by ext4, nilfs2 and xfs.
      
      Remove the unused wrapper, rename __block_page_mkwrite() back to
      block_page_mkwrite() and update the comment above block_page_mkwrite().
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Reviewed-by: Jan Kara <jack@suse.com>
      Cc: Jan Kara <jack@suse.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  4. 10 Nov 2015 (3 commits)
    • xfs: give all workqueues rescuer threads · 7a29ac47
      Chris Mason authored
      We're consistently hitting deadlocks here with XFS on recent kernels.
      After some digging through the crash files, it looks like everyone in
      the system is waiting for XFS to reclaim memory.
      
      Something like this:
      
      PID: 2733434  TASK: ffff8808cd242800  CPU: 19  COMMAND: "java"
       #0 [ffff880019c53588] __schedule at ffffffff818c4df2
       #1 [ffff880019c535d8] schedule at ffffffff818c5517
       #2 [ffff880019c535f8] _xfs_log_force_lsn at ffffffff81316348
       #3 [ffff880019c53688] xfs_log_force_lsn at ffffffff813164fb
       #4 [ffff880019c536b8] xfs_iunpin_wait at ffffffff8130835e
       #5 [ffff880019c53728] xfs_reclaim_inode at ffffffff812fd453
       #6 [ffff880019c53778] xfs_reclaim_inodes_ag at ffffffff812fd8c7
       #7 [ffff880019c53928] xfs_reclaim_inodes_nr at ffffffff812fe433
       #8 [ffff880019c53958] xfs_fs_free_cached_objects at ffffffff8130d3b9
       #9 [ffff880019c53968] super_cache_scan at ffffffff811a6f73
      #10 [ffff880019c539c8] shrink_slab at ffffffff811460e6
      #11 [ffff880019c53aa8] shrink_zone at ffffffff8114a53f
      #12 [ffff880019c53b48] do_try_to_free_pages at ffffffff8114a8ba
      #13 [ffff880019c53be8] try_to_free_pages at ffffffff8114ad5a
      #14 [ffff880019c53c78] __alloc_pages_nodemask at ffffffff8113e1b8
      #15 [ffff880019c53d88] alloc_kmem_pages_node at ffffffff8113e671
      #16 [ffff880019c53dd8] copy_process at ffffffff8104f781
      #17 [ffff880019c53ec8] do_fork at ffffffff8105129c
      #18 [ffff880019c53f38] sys_clone at ffffffff810515b6
      #19 [ffff880019c53f48] stub_clone at ffffffff818c8e4d
      
      xfs_log_force_lsn is waiting for logs to get cleaned, which is waiting
      for IO, which is waiting for workers to complete the IO, which is
      waiting for worker threads that don't exist yet:
      
      PID: 2752451  TASK: ffff880bd6bdda00  CPU: 37  COMMAND: "kworker/37:1"
       #0 [ffff8808d20abbb0] __schedule at ffffffff818c4df2
       #1 [ffff8808d20abc00] schedule at ffffffff818c5517
       #2 [ffff8808d20abc20] schedule_timeout at ffffffff818c7c6c
       #3 [ffff8808d20abcc0] wait_for_completion_killable at ffffffff818c6495
       #4 [ffff8808d20abd30] kthread_create_on_node at ffffffff8106ec82
       #5 [ffff8808d20abdf0] create_worker at ffffffff8106752f
       #6 [ffff8808d20abe40] worker_thread at ffffffff810699be
       #7 [ffff8808d20abec0] kthread at ffffffff8106ef59
       #8 [ffff8808d20abf50] ret_from_fork at ffffffff818c8ac8
      
      I think we should be using WQ_MEM_RECLAIM to make sure this thread
      pool makes progress when we're not able to allocate new workers.
      
      [dchinner: make all workqueues WQ_MEM_RECLAIM]
      Signed-off-by: Chris Mason <clm@fb.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
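
      A minimal sketch of the change, following the xfs_super.c workqueue
      setup of that era: WQ_MEM_RECLAIM pre-forks a rescuer thread per
      workqueue, so queued work can still run when new kworkers cannot be
      allocated under memory pressure.

          mp->m_buf_workqueue = alloc_workqueue("xfs-buf/%s",
                          WQ_MEM_RECLAIM | WQ_FREEZABLE, 1, mp->m_fsname);
          if (!mp->m_buf_workqueue)
                  goto out;               /* rescuer allocation failed */
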
    • xfs: fix log recovery op header validation assert · 848ccfc8
      Brian Foster authored
      Commit 89cebc84 ("xfs: validate transaction header length on log
      recovery") added additional validation of the on-disk op header length
      to protect from buffer overflow during log recovery. It accounts for the
      fact that the transaction header can be split across multiple op
      headers. It added an assert for when this occurs that verifies the
      length of the second part of a split transaction header is less than a
      full transaction header. In other words, it expects that the first op
      header of a split transaction header includes at least some portion of
      the transaction header.
      
      This expectation is not always valid as a zero-length op header can
      exist for the first op header of a split transaction header (see
      xlog_recover_add_to_trans() for details). This means that the second op
      header can have a valid, full length transaction header and thus the
      full header is copied in xlog_recover_add_to_cont_trans(). Fix the
      assert in xlog_recover_add_to_cont_trans() to handle this case correctly
      and require that the op header length is less than or equal to a full
      transaction header.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
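
      The corrected assert is a one-liner; a sketch of its shape in
      xlog_recover_add_to_cont_trans():

          /*
           * A zero-length first op header means the continuation op can
           * carry a full transaction header, hence "<=" rather than "<".
           */
          ASSERT(len <= sizeof(struct xfs_trans_header));
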
    • xfs: Fix error path in xfs_get_acl · edfb8ebc
      Andreas Gruenbacher authored
      Error codes from xfs_attr_get other than -ENOATTR were not properly
      reported.  Fix that.
      
      In addition, the declaration of struct xfs_inode in xfs_acl.h isn't needed.
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
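
      A hedged sketch of the intended error handling, using that era's
      xfs_attr_get() signature: only -ENOATTR means "no ACL here"; any
      other error must reach the caller.

          error = xfs_attr_get(ip, ea_name, (unsigned char *)xfs_acl,
                               &len, ATTR_ROOT);
          if (error) {
                  /* -ENOATTR simply means no ACL is set on this inode */
                  acl = (error == -ENOATTR) ? NULL : ERR_PTR(error);
                  goto out;
          }
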
  5. 07 Nov 2015 (1 commit)
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd · d0164adc
      Mel Gorman authored
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites:
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
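
      The helper the commit points callers at is tiny; a sketch of its shape
      (per the include/linux/gfp.h change) together with a typical
      conversion:

          static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
          {
                  return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
          }

          /* instead of the historical, now misleading, __GFP_WAIT test */
          if (!gfpflags_allow_blocking(gfp_mask))
                  return NULL;            /* caller cannot sleep: fail fast */
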
  6. 03 Nov 2015 (12 commits)
    • xfs: optimise away log forces on timestamp updates for fdatasync · fc0561ce
      Dave Chinner authored
      xfs: timestamp updates cause excessive fdatasync log traffic
      
      Sage Weil reported that a ceph test workload was writing to the
      log on every fdatasync during an overwrite workload. Event tracing
      showed that the only metadata modification being made was the
      timestamp updates during the write(2) syscall, but fdatasync(2)
      is supposed to ignore them. The key observation was that the
      transactions in the log all looked like this:
      
      INODE: #regs: 4   ino: 0x8b  flags: 0x45   dsize: 32
      
      And contained a flags field of 0x45 or 0x85, and had data and
      attribute forks following the inode core. This means that the
      timestamp updates were triggering dirty relogging of previously
      logged parts of the inode that hadn't yet been flushed back to
      disk.
      
      There are two parts to this problem. The first is that XFS relogs
      dirty regions in subsequent transactions, so it carries around the
      fields that have been dirtied since the last time the inode was
      written back to disk, not since the last time the inode was forced
      into the log.
      
      The second part is that on v5 filesystems, the inode change count
      update during inode dirtying also sets the XFS_ILOG_CORE flag, so
      on v5 filesystems this makes a timestamp update dirty the entire
      inode.
      
      As a result when fdatasync is run, it looks at the dirty fields in
      the inode, and sees more than just the timestamp flag, even though
      the only metadata change since the last fdatasync was just the
      timestamps. Hence we force the log on every subsequent fdatasync
      even though it is not needed.
      
      To fix this, add a new field to the inode log item that tracks
      changes since the last time fsync/fdatasync forced the log to flush
      the changes to the journal. This flag is updated when we dirty the
      inode, but we do it before updating the change count so it does not
      carry the "core dirty" flag from timestamp updates. The fields are
      zeroed when the inode is marked clean (due to writeback/freeing) or
      when an fsync/fdatasync forces the log. Hence if we only dirty the
      timestamps on the inode between fsync/fdatasync calls, the fdatasync
      will not trigger another log force.
      
      Over 100 runs of the test program:
      
      Ext4 baseline:
              runtime:    1.63s  +/- 0.24s
              avg lat:    1.59ms +/- 0.24ms
              iops:       ~2000
      
      XFS, vanilla kernel:
              runtime:    2.45s  +/- 0.18s
              avg lat:    2.39ms +/- 0.18ms
              log forces: ~400/s
              iops:       ~1000
      
      XFS, patched kernel:
              runtime:    1.49s  +/- 0.26s
              avg lat:    1.46ms +/- 0.25ms
              log forces: ~30/s
              iops:       ~1500
      Reported-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
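
      A hedged sketch of the fsync-side payoff (field names per the upstream
      change; surrounding code simplified): when only XFS_ILOG_TIMESTAMP is
      set in the new tracking field, fdatasync skips the log force.

          if (xfs_ipincount(ip)) {
                  if (!datasync ||
                      (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP))
                          lsn = ip->i_itemp->ili_last_lsn;
          }

          if (lsn) {
                  error = _xfs_log_force_lsn(mp, lsn, XFS_LOG_SYNC,
                                             &log_flushed);
                  ip->i_itemp->ili_fsync_fields = 0; /* changes now on disk */
          }
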
    • xfs: don't leak uuid table on rmmod · af3b6382
      Darrick J. Wong authored
      Don't leak the UUID table when the module is unloaded.
      (Found with kmemleak.)
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: invalidate cached acl if set via ioctl · 47e1bf64
      Andreas Gruenbacher authored
      Setting or removing the "SGI_ACL_[FILE|DEFAULT]" attributes via the
      XFS_IOC_ATTRMULTI_BY_HANDLE ioctl completely bypasses the POSIX ACL
      infrastructure, like setting the "trusted.SGI_ACL_[FILE|DEFAULT]" xattrs
      did until commit 6caa1056.  Similar to that commit, invalidate cached
      acls when setting/removing them via the ioctl as well.
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Plug memory leak in xfs_attrmulti_attr_set · 09cb22d2
      Andreas Gruenbacher authored
      When setting attributes via XFS_IOC_ATTRMULTI_BY_HANDLE, the user-space
      buffer is copied into a new kernel-space buffer via memdup_user; that
      buffer then isn't freed.
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
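
      A minimal sketch of the fixed pattern (xfs_attr_set() signature per
      that era; error handling trimmed): memdup_user() returns a kmalloc'd
      copy that must be freed on every path.

          kbuf = memdup_user(ubuf, len);
          if (IS_ERR(kbuf))
                  return PTR_ERR(kbuf);

          error = xfs_attr_set(ip, name, kbuf, len, flags);
          kfree(kbuf);                    /* the previously missing free */
          return error;
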
    • xfs: Validate the length of on-disk ACLs · 86a21c79
      Andreas Gruenbacher authored
      In xfs_acl_from_disk, instead of trusting that xfs_acl.acl_cnt is correct,
      make sure that the length of the attributes is correct as well.  Also, turn
      the aclp parameter into a const pointer.
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
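
      A hedged sketch of the kind of bounds check involved (struct layout
      per xfs_acl.h; the exact comparison in the real patch may differ):
      the on-disk entry count is cross-checked against the byte length
      actually returned for the attribute.

          int count = be32_to_cpu(aclp->acl_cnt);

          if (count > max_entries ||
              len < (int)(offsetof(struct xfs_acl, acl_entry) +
                          count * sizeof(struct xfs_acl_entry)))
                  return ERR_PTR(-EFSCORRUPTED);
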
    • xfs: invalidate cached acl if set directly via xattr · 67d8e04e
      Brian Foster authored
      ACLs are stored as extended attributes of the inode to which they apply.
      XFS converts the standard "system.posix_acl_[access|default]" attribute
      names used to control ACLs to "trusted.SGI_ACL_[FILE|DEFAULT]" as stored
      on-disk. These xattrs are directly exposed in on-disk format via
      getxattr/setxattr, without any ACL aware code in the path to perform
      validation, etc. This is partly historical and supports backup/restore
      applications such as xfsdump to back up and restore the binary blob that
      represents ACLs as-is.
      
      Andreas reports that the ACLs observed via the getfacl interface are not
      consistent when ACLs are set directly via the setxattr path. This occurs
      because the ACLs are cached in-core against the inode and the xattr path
      has no knowledge that the operation relates to ACLs.
      
      Update the xattr set codepath to trap writes of the special XFS ACL
      attributes and invalidate the associated cached ACL when this occurs.
      This ensures that the correct ACLs are used on a subsequent operation
      through the actual ACL interface.
      
      Note that this does not update or add support for setting the ACL xattrs
      directly beyond the restore use case that requires a correctly formatted
      binary blob and to restore a consistent i_mode at the same time. It is
      still possible for a root user to set an invalid or inconsistent (with
      i_mode) ACL blob on-disk and potentially cause corruption.
      
      [ With fixes from Andreas Gruenbacher. ]
      Reported-by: Andreas Gruenbacher <agruenba@redhat.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
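
      A hedged sketch of the trap, close to the upstream helper but
      simplified (SGI_ACL_FILE/SGI_ACL_DEFAULT are the real xattr names;
      forget_cached_acl() is the VFS cache invalidator):

          static void xfs_forget_acl(struct inode *inode, const char *name,
                                     int xflags)
          {
                  if (xflags & ATTR_ROOT) {
                          if (!strcmp(name, SGI_ACL_FILE))
                                  forget_cached_acl(inode, ACL_TYPE_ACCESS);
                          else if (!strcmp(name, SGI_ACL_DEFAULT))
                                  forget_cached_acl(inode, ACL_TYPE_DEFAULT);
                  }
          }
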
    • xfs: xfs_filemap_pmd_fault treats read faults as write faults · 13ad4fe3
      Dave Chinner authored
      The code initially committed didn't have the same checks for write
      faults as the dax_pmd_fault code and hence treats all faults as
      write faults. We can get read faults through this path because there
      is no pmd_mkwrite path for write faults similar to the normal page
      fault path. Hence we need to ensure that we only do c/mtime updates
      on write faults, and freeze protection is unnecessary for read
      faults.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
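
      A minimal sketch of the distinction in the fault handler
      (FAULT_FLAG_WRITE is the standard mm flag; the dax call itself is
      elided): only write faults take freeze protection and update the
      timestamps.

          if (flags & FAULT_FLAG_WRITE) {
                  sb_start_pagefault(inode->i_sb);
                  file_update_time(vma->vm_file);
          }

          /* ... hand off to the dax pmd fault path ... */

          if (flags & FAULT_FLAG_WRITE)
                  sb_end_pagefault(inode->i_sb);
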
    • xfs: add ->pfn_mkwrite support for DAX · 3af49285
      Dave Chinner authored
      ->pfn_mkwrite support is needed so that when a page with allocated
      backing store takes a write fault we can check that the fault has
      not raced with a truncate and is pointing to a region beyond the
      current end of file.
      
      This also allows us to update the timestamp on the inode, too, which
      fixes a generic/080 failure.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
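
      A hedged sketch of what the handler needs to do (simplified relative
      to the real xfs_file.c version, which also takes the mmap lock and
      freeze protection):

          static int xfs_filemap_pfn_mkwrite(struct vm_area_struct *vma,
                                             struct vm_fault *vmf)
          {
                  struct inode    *inode = file_inode(vma->vm_file);
                  pgoff_t         size;
                  int             ret = VM_FAULT_NOPAGE;

                  file_update_time(vma->vm_file);

                  /* check that the fault hasn't raced with a truncate */
                  size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
                  if (vmf->pgoff >= size)
                          ret = VM_FAULT_SIGBUS;
                  return ret;
          }
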
    • xfs: DAX does not use IO completion callbacks · 01a155e6
      Dave Chinner authored
      For DAX, we are now doing block zeroing during allocation. This
      means we no longer need a special DAX fault IO completion callback
      to do unwritten extent conversion. Because mmap never extends the
      file size (it SEGVs the process) we don't need a callback to update
      the file size, either. Hence we can remove the completion callbacks
      from the __dax_fault and __dax_mkwrite calls.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
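
      The __dax_fault() entry point of that kernel took a completion
      callback as its final argument; a hedged sketch of the call after
      the change (get_blocks callback name per the neighbouring commits):

          /* block zeroing at allocation: no conversion callback needed */
          ret = __dax_fault(vma, vmf, xfs_get_blocks_dax_fault, NULL);
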
    • xfs: Don't use unwritten extents for DAX · 1ca19157
      Dave Chinner authored
      DAX has a page fault serialisation problem with block allocation.
      Because it allows concurrent page faults and does not have a page
      lock to serialise faults to the same page, it can get two concurrent
      faults to the page that race.
      
      When two read faults race, this isn't a huge problem as the data
      underlying the page is not changing and so "detect and drop" works
      just fine. The issues are to do with write faults.
      
      When two write faults occur, we serialise block allocation in
      get_blocks() so only one fault will allocate the extent. It will,
      however, be marked as an unwritten extent, and that is where the
      problem lies - the DAX fault code cannot differentiate between a
      block that was just allocated and a block that was preallocated and
      needs zeroing. The result is that both write faults end up zeroing
      the block and attempting to convert it back to written.
      
      The problem is that the first fault can zero and convert before the
      second fault starts zeroing, resulting in the zeroing for the second
      fault overwriting the data that the first fault wrote with zeros.
      The second fault then attempts to convert the unwritten extent,
      which is then a no-op because it's already written. Data loss occurs
      as a result of this race.
      
      Because there is no sane locking construct in the page fault code
      that we can use for serialisation across the page faults, we need to
      ensure block allocation and zeroing occurs atomically in the
      filesystem. This means we can still take concurrent page faults and
      the only time they will serialise is in the filesystem
      mapping/allocation callback. The page fault code will always see
      written, initialised extents, so we will be able to remove the
      unwritten extent handling from the DAX code when all filesystems are
      converted.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: introduce BMAPI_ZERO for allocating zeroed extents · 3fbbbea3
      Dave Chinner authored
      To enable DAX to do atomic allocation of zeroed extents, we need to
      drive the block zeroing deep into the allocator. Because
      xfs_bmapi_write() can return merged extents on allocation that were
      only partially allocated (i.e. requested range spans allocated and
      hole regions, allocation into the hole was contiguous), we cannot
      zero the extent returned from xfs_bmapi_write() as that can
      overwrite existing data with zeros.
      
      Hence we have to drive the extent zeroing into the allocation code,
      prior to where we merge the extents into the BMBT and return the
      resultant map. This means we need to propagate this need down to
      xfs_alloc_vextent() and issue the block zeroing at this point.
      
      While this functionality is being introduced for DAX, there is no
      reason why it is specific to DAX - we can pre-zero blocks during the
      allocation transaction on any type of device. It's just slow (and
      usually slower than unwritten allocation and conversion) on
      traditional block devices so doesn't tend to get used. We can,
      however, hook hardware zeroing optimisations via sb_issue_zeroout()
      to this operation, so it may be useful in future and hence the
      "allocate zeroed blocks" API needs to be implementation neutral.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
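
      A hedged sketch of the zeroing hook, assuming filesystem-block units
      have already been converted to what sb_issue_zeroout() expects (the
      variable names are illustrative, not the exact upstream helper):

          /* zero the freshly allocated range before exposing it as written */
          error = sb_issue_zeroout(mp->m_super, block, nblocks, GFP_NOFS);
          if (error)
                  return error;
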
    • xfs: fix inode size update overflow in xfs_map_direct() · 3e12dbbd
      Dave Chinner authored
      Both direct IO and DAX pass an offset and count into get_blocks that
      will overflow a s64 variable when an IO goes into the last supported
      block in a file (i.e. at offset 2^63 - 1FSB bytes). This can be seen
      from the tracing:
      
      xfs_get_blocks_alloc: [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct:     [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct_none:[...] offset 0x7ffffffffffff000 count 4096
      
      0x7ffffffffffff000 + 4096 = 0x8000000000000000, and hence that
      overflows the s64 offset and we fail to detect the need for a
      filesize update and an ioend is not allocated.
      
      This is *mostly* avoided for direct IO because such extending IOs
      occur with full block allocation, and so the "IS_UNWRITTEN()" check
      still evaluates as true and we get an ioend that way. However, doing
      single sector extending IOs to this last block will expose the fact
      that file size updates will not occur after the first allocating
      direct IO as the overflow will then be exposed.
      
      There is one further complexity: the DAX page fault path also
      exposes the same issue in block allocation. However, page faults
      cannot extend the file size, so in this case we want to allocate the
      block but do not want to allocate an ioend to enable file size
      update at IO completion. Hence we now need to distinguish between
      the direct IO path allocation and dax fault path allocation to
      avoid leaking ioend structures.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
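
      A hedged illustration of the arithmetic (values from the trace above;
      names illustrative): the sum must be formed in unsigned 64-bit
      arithmetic, because in a signed s64 it overflows at the last supported
      block and the extends-EOF test never fires.

          loff_t  offset = 0x7ffffffffffff000LL;  /* last supported block */
          size_t  count  = 4096;

          /* (u64)offset + count == 0x8000000000000000: fine as a u64,
           * but it overflows a signed s64 and defeats a "> i_size" test */
          u64 end = (u64)offset + count;
          bool update_isize = end > (u64)i_size_read(inode);
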
  7. 02 Nov 2015 (1 commit)
  8. 19 Oct 2015 (2 commits)
  9. 12 Oct 2015 (18 commits)
    • xfs: simplify /proc teardown & error handling · 9e92054e
      Eric Sandeen authored
      remove_proc_subtree() was added in 3.9, and can be
      used to simplify our procfile creation error handling
      and cleanup, removing the nested gotos.  It simply
      removes fs/xfs and everything created under it.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
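
      The teardown collapses to a single call; a sketch matching the
      helper's documented behaviour (it removes fs/xfs and everything
      registered beneath it):

          static void xfs_cleanup_procfs(void)
          {
                  remove_proc_subtree("fs/xfs", NULL);
          }
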
    • xfs: per-filesystem stats counter implementation · ff6d6af2
      Bill O'Donnell authored
      This patch modifies the stats counting macros and the callers
      to those macros to properly increment, decrement, and add-to
      the xfs stats counts. The counts for global and per-fs stats
      are correctly advanced, and cleared by writing a "1" to the
      corresponding clear file.
      
      global counts: /sys/fs/xfs/stats/stats
      per-fs counts: /sys/fs/xfs/sda*/stats/stats
      
      global clear:  /sys/fs/xfs/stats/stats_clear
      per-fs clear:  /sys/fs/xfs/sda*/stats/stats_clear
      
      [dchinner: cleaned up macro variables, removed CONFIG_FS_PROC around
       stats structures and macros. ]
      Signed-off-by: Bill O'Donnell <billodo@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
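
      A hedged sketch of the reworked macro shape (the percpu access is
      spelled with raw_smp_processor_id() here; the real header may differ
      in detail): every event bumps both the global and the per-mount
      counter.

          #define XFS_STATS_INC(mp, v) \
          do { \
                  per_cpu_ptr(xfsstats.xs_stats, raw_smp_processor_id())->v++; \
                  per_cpu_ptr((mp)->m_stats.xs_stats, \
                              raw_smp_processor_id())->v++; \
          } while (0)
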
    • xfs: per-filesystem stats in sysfs · 225e4635
      Bill O'Donnell authored
      This patch implements per-filesystem stats objects in sysfs. It
      depends on the application of the previous patch series that
      develops the infrastructure to support both xfs global stats and
      xfs per-fs stats in sysfs.
      
      Stats objects are instantiated when an xfs filesystem is mounted
      and deleted on unmount. With this patch, the stats directory is
      created and populated with the familiar stats and stats_clear files.
      Example:
              /sys/fs/xfs/sda9/stats/stats
              /sys/fs/xfs/sda9/stats/stats_clear
      
      With this patch, the individual counts within the new per-fs
      stats file(s) remain at zero. Functions that use the macros
      to increment, decrement, and add-to the per-fs stats counts will
      be covered in a separate patch to follow this one. Note that
      the counts within the global stats file (/sys/fs/xfs/stats/stats)
      advance normally and can be cleared as before.
      
      [dchinner: move setup/teardown to xfs_fs_{fill|put}_super() so
      it is done before/after any path that uses the per-mount stats. ]
      Signed-off-by: Bill O'Donnell <billodo@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: more info from kmem deadlocks and high-level error msgs · 847f9f68
      Eric Sandeen authored
      In an effort to get more useful information out of "possible memory
      allocation deadlock" messages, print the size of the
      requested allocation, and dump the stack if the xfs error
      level is tuned high.
      
      The stack dump is implemented in define_xfs_printk_level()
      for error levels >= LOGLEVEL_ERR, partly because it
      seems generically useful, and also because kmem.c has
      no knowledge of xfs error level tunables or other such bits;
      it's very kmem-specific.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: avoid null *src in memcpy call in xlog_write · 91f9f5fe
      Eric Sandeen authored
      The gcc undefined behavior sanitizer caught this; surely
      any sane memcpy implementation will no-op if size == 0,
      but behavior with a *src of NULL is technically undefined
      (declared nonnull), so avoid it here.
      
      We are actually in this situation frequently via
      xlog_commit_record(), because:
      
              struct xfs_log_iovec reg = {
                      .i_addr = NULL,
                      .i_len = 0,
                      .i_type = XLOG_REG_TYPE_COMMIT,
              };
      Reported-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
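
      The guard is a one-liner; a sketch of its shape in xlog_write()
      (variable names illustrative):

          /* skip zero-length regions so memcpy never sees a NULL source */
          if (copy_len > 0)
                  memcpy(ptr, reg->i_addr + copied, copy_len);
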
    • xfs: pass total block res. as total xfs_bmapi_write() parameter · dbd5c8c9
      Brian Foster authored
      The total field from struct xfs_alloc_arg is a bit of an unknown
      commodity. It is documented as the total block requirement for the
      transaction and is used in this manner from most call sites by virtue of
      passing the total block reservation of the transaction associated with
      an allocation. Several xfs_bmapi_write() callers pass hardcoded values
      of 0 or 1 for the total block requirement, which is a historical oddity
      without any clear reasoning.
      
      The xfs_iomap_write_direct() caller, for example, passes 0 for the total
      block requirement. This has been determined to cause problems in the
      form of ABBA deadlocks of AGF buffers due to incorrect AG selection in
      the block allocator. Specifically, the xfs_alloc_space_available()
      function incorrectly selects an AG that doesn't actually have sufficient
      space for the allocation. This occurs because the args.total field is 0
      and thus the remaining free space check on the AG doesn't actually
      consider the size of the allocation request. This locks the AGF buffer,
      the allocation attempt proceeds and ultimately fails (in
      xfs_alloc_fix_minleft()), and xfs_alloc_vextent() moves on to the next
      AG. In turn, this can lead to incorrect AG locking order (if the
      allocator wraps around, attempting to lock AG 0 after acquiring AG N)
      and thus deadlock if racing with another operation. This problem has
      been reproduced via generic/299 on smallish (1GB) ramdisk test devices.
      
      To avoid this problem, replace the undocumented hardcoded total
      parameters from the iomap and utility callers to pass the block
      reservation used for the associated transaction. This is consistent with
      other xfs_bmapi_write() callers throughout XFS. The assumption is that
      the total field allows the selection of an AG that can handle the entire
      operation rather than simply the allocation/range being requested (e.g.,
      resulting btree splits, etc.). This addresses the aforementioned
      generic/299 hang by ensuring AG selection only occurs when the
      allocation can be satisfied by the AG.
      Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
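
      A hedged sketch of the calling convention after the change
      (ten-argument xfs_bmapi_write() signature per that era): the
      transaction's block reservation goes in as the total rather than a
      hardcoded 0 or 1.

          error = xfs_bmapi_write(tp, ip, offset_fsb, count_fsb,
                                  bmapi_flags, &firstfsb,
                                  resblks,        /* was a hardcoded 0 */
                                  imap, &nimaps, &free_list);
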
    • xfs: avoid dependency on Linux XATTR_SIZE_MAX · 51fcbfe7
      Jan Tulak authored
      Currently, we depend on the Linux XATTR value for the on-disk
      definition, which causes trouble on other platforms and
      could also cause trouble if this value were to change.
      
      Fix it by creating a custom definition independent from
      those in Linux (although with the same values), so it is
      known to fit the be16 fields used for holding these attributes.
      
      This patch reflects a change in xfsprogs.
      Signed-off-by: Jan Tulak <jtulak@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
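
      A sketch of the kind of definition involved, assuming the value
      mirrors Linux's XATTR_SIZE_MAX (64k, which the on-disk be16 handling
      can accommodate):

          /* XFS-owned limit, deliberately equal to Linux's XATTR_SIZE_MAX */
          #define XFS_XATTR_SIZE_MAX      (1 << 16)
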
    • xfs: prefix XATTR_LIST_MAX with XFS_ · 4e247614
      Jan Tulak authored
      Remove a hard dependency on the Linux XATTR_LIST_MAX value by using
      a prefixed version. This patch reflects the same change in xfsprogs.
      Signed-off-by: Jan Tulak <jtulak@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • libxfs: fix two comment typos · fef4ded8
      Geliang Tang authored
      Just fix two typos in code comments.
      Signed-off-by: Geliang Tang <geliangtang@163.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: add an xfs_zero_eof() tracepoint · 0a50f162
      Brian Foster authored
      Add a tracepoint in xfs_zero_eof() to facilitate tracking and debugging
      EOF zeroing events. This has proven useful in the context of other
      direct I/O tracepoints to ensure EOF zeroing occurs within appropriate
      file ranges.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
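
      XFS defines this class of tracepoint with a helper macro in
      xfs_trace.h; a hedged sketch of the definition and the call site
      (arguments per the simple-IO event class: inode, offset, count):

          /* xfs_trace.h */
          DEFINE_SIMPLE_IO_EVENT(xfs_zero_eof);

          /* xfs_zero_eof(), before the zeroing work starts */
          trace_xfs_zero_eof(ip, isize, offset - isize);
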
    • xfs: always drain dio before extending aio write submission · 3136e8bb
      Brian Foster authored
      XFS supports and typically allows concurrent asynchronous direct I/O
      submission to a single file. One exception to the rule is that file
      extending dio writes that start beyond the current EOF (e.g.,
      potentially create a hole at EOF) require exclusive I/O access to the
      file. This is because such writes must zero any pre-existing blocks
      beyond EOF that are exposed by virtue of now residing within EOF as a
      result of the write about to be submitted.
      
      Before EOF zeroing can occur, the current file i_size must be stabilized
      to avoid data corruption. In this scenario, XFS upgrades the iolock to
      exclude any further I/O submission, waits on in-flight I/O to complete
      to ensure i_size is up to date (i_size is updated on dio write
      completion) and restarts the various checks against the state of the
      file. The problem is that this protection sequence is triggered only
      when the iolock is currently held shared. While this is true for async
      dio in most cases, the caller may upgrade the lock in advance based on
      arbitrary circumstances with respect to EOF zeroing. For example, the
      iolock is always acquired exclusively if the start offset is not block
      aligned. This means that even though the iolock is already held
      exclusive for such I/Os, pending I/O is not drained and thus EOF zeroing
      can occur based on an unstable i_size.
      
      This problem has been reproduced as guest data corruption in virtual
      machines with file-backed qcow2 virtual disks hosted on an XFS
      filesystem. The virtual disks must be configured with aio=native mode
      and must not be truncated out to the maximum file size (as some virt
      managers will do).
      
      Update xfs_file_aio_write_checks() to unconditionally drain in-flight
      dio before EOF zeroing can occur. Rather than trigger the wait based on
      iolock state, use a new flag and upgrade the iolock when necessary. Note
      that this results in a full restart of the inode checks even when the
      iolock was already held exclusive when technically it is only required
      to recheck i_size. This should be a rare enough occurrence that it is
      preferable to keep the code simple rather than create an alternate
      restart jump target.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
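
      The drain itself is the standard VFS helper; a hedged sketch of the
      core of the fix (lock upgrade handling elided): it now runs
      unconditionally before any EOF zeroing decision is made.

          /* wait out all in-flight dio so EOF zeroing sees a stable i_size */
          inode_dio_wait(VFS_I(ip));
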
    • xfs: validate metadata LSNs against log on v5 superblocks · a45086e2
      Brian Foster authored
      Since the onset of v5 superblocks, the LSN of the last modification has
      been included in a variety of on-disk data structures. This LSN is used
      to provide log recovery ordering guarantees (e.g., to ensure an older
      log recovery item is not replayed over a newer target data structure).
      
      While this works correctly from the point a filesystem is formatted and
      mounted, userspace tools have some problematic behaviors that defeat
      this mechanism. For example, xfs_repair historically zeroes out the log
      unconditionally (regardless of whether corruption is detected). If this
      occurs, the LSN of the filesystem is reset and the log is now in a
      problematic state with respect to on-disk metadata structures that might
      have a larger LSN. Until either the log catches up to the highest
      previously used metadata LSN or each affected data structure is modified
      and written out without incident (which resets the metadata LSN), log
      recovery is susceptible to filesystem corruption.
      
      This problem is ultimately addressed and repaired in the associated
      userspace tools. The kernel is still responsible to detect the problem
      and notify the user that something is wrong. Check the superblock LSN at
      mount time and fail the mount if it is invalid. From that point on,
      trigger verifier failure on any metadata I/O where an invalid LSN is
      detected. This results in a filesystem shutdown and guarantees that we
      do not log metadata changes with invalid LSNs on disk. Since this is a
      known issue with a known recovery path, present a warning to instruct
      the user how to recover.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
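
      A hedged sketch of the mount-time gate (xfs_log_check_lsn() is the
      helper this commit introduces; call-site details simplified):

          /* refuse to mount if the superblock LSN is ahead of the log */
          if (!xfs_log_check_lsn(mp, mp->m_sb.sb_lsn))
                  return -EINVAL;
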
    • xfs: Print name and pid when memory allocation loops · 5bf97b1c
      Tetsuo Handa authored
      This patch adds the comm name and pid to warning messages printed by
      kmem_alloc(), kmem_zone_alloc() and xfs_buf_allocate_memory().
      This helps tell which memory allocations (e.g. kernel worker
      threads, OOM victim tasks, or neither) are stalling, because these
      functions pass __GFP_NOWARN, which suppresses not only the backtrace
      but also the comm name and pid.
      Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
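
      A hedged sketch of the enriched warning (format string approximate;
      current->comm and current->pid are the standard task fields):

          xfs_err(NULL,
          "%s(%u) possible memory allocation deadlock size %u in %s (mode:0x%x)",
                  current->comm, current->pid,
                  (unsigned int)size, __func__, lflags);
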
    • xfs: log local to remote symlink conversions correctly on v5 supers · b7cdc66b
      Brian Foster authored
      A local format symlink inode is converted to extent format when an
      extended attribute is set on an inode as part of the attribute fork
      creation. This means a block is allocated, the local symlink target name
      is copied to the block and the block is logged. Currently,
      xfs_bmap_local_to_extents() handles logging the remote block data based
      on the size of the data fork prior to the conversion. This is not
      correct on v5 superblock filesystems, which add an additional header to
      remote symlink blocks that is nonexistent in local format inodes.
      
      As a result, the full length of the remote symlink block content is not
      logged. This can lead to corruption should a crash occur and log
      recovery replay this transaction.
      
      Since a callout is already used to initialize the new remote symlink
      block, update the local-to-extents conversion mechanism to make the
      callout also responsible for logging the block. It is already required
      to set the log buffer type and format the block appropriately based on
      the superblock version. This ensures the remote symlink is always logged
      correctly. Note that xfs_bmap_local_to_extents() is only called for
      symlinks so there are no other callouts that require modification.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: add missing ilock around dio write last extent alignment · 009c6e87
      Brian Foster authored
      The iomap codepath (via get_blocks()) acquires and releases the inode
      lock in the case of a direct write that requires block allocation. This
      is because xfs_iomap_write_direct() allocates a transaction, which means
      the ilock must be dropped and reacquired after the transaction is
      allocated and reserved.
      
      xfs_iomap_write_direct() invokes xfs_iomap_eof_align_last_fsb() before
      the transaction is created and thus before the ilock is reacquired. This
      can lead to calls to xfs_iread_extents() and reads of the in-core extent
      list without any synchronization (via xfs_bmap_eof() and
      xfs_bmap_last_extent()). xfs_iread_extents() assert fails if the ilock
      is not held, but this is not currently seen in practice as the current
      callers had already invoked xfs_bmapi_read().
      
      What has been seen in practice are reports of crashes down in the
      xfs_bmap_eof() codepath on direct writes due to seemingly bogus pointer
      references from xfs_iext_get_ext(). While an explicit reproducer is not
      currently available to confirm the cause of the problem, crash analysis
      and code inspection from David Jeffrey had identified the insufficient
      locking.
      
      xfs_iomap_eof_align_last_fsb() is called from other contexts with the
      inode lock already held, so we cannot acquire it therein.
      __xfs_get_blocks() acquires and drops the ilock with variable flags to
      cover the event that the extent list must be read in. The common case is
      that __xfs_get_blocks() acquires the shared ilock. To provide locking
      around the last extent alignment call without adding more lock cycles to
      the dio path, update xfs_iomap_write_direct() to expect the shared ilock
      held on entry and do the extent alignment under its protection. Demote
      the lock, if necessary, from __xfs_get_blocks() and push the
      xfs_qm_dqattach() call outside of the shared lock critical section.
      Also, add an assert to document that the extent list is always expected
      to be present in this path. Otherwise, we risk a call to
      xfs_iread_extents() while under the shared ilock. This is safe as all
      current callers have executed an xfs_bmapi_read() call under the current
      iolock context.
      Reported-by: David Jeffery <djeffery@redhat.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
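
      The demotion mentioned above uses a standard XFS locking primitive;
      a hedged sketch (flag bookkeeping simplified):

          /* drop from exclusive to shared without ever releasing the ilock */
          if (lockmode == XFS_ILOCK_EXCL)
                  xfs_ilock_demote(ip, XFS_ILOCK_EXCL);
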
    • xfs: cancel the setfilesize transaction when an IO error happens · 5cb13dcd
      Zhaohongjiang authored
      When I ran xfstests case 073, the remount process blocked waiting for
      the transaction count to reach zero. An IO error had occurred, and
      the setfilesize transaction was not released properly. Add the
      missing handling to cancel the transaction in this case.
      
      Reproduction steps:
      1. dd if=/dev/zero of=xfs1.img bs=1M count=2048
      2. mkfs.xfs xfs1.img
      3. losetup -f ./xfs1.img /dev/loop0
      4. mount -t xfs /dev/loop0 /home/test_dir/
      5. mkdir /home/test_dir/test
      6. mkfs.xfs -dfile,name=image,size=2g
      7. mount -t xfs -o loop image /home/test_dir/test
      8. cp a file bigger than 2g to /home/test_dir/test
      9. mount -t xfs -o remount,ro /home/test_dir/test
      
      [ dchinner: moved io error detection to xfs_setfilesize_ioend() after
        transaction context restoration. ]
      Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
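
      A hedged sketch of the fix in xfs_setfilesize_ioend() (ioend field
      name per that era): cancel the transaction instead of leaking it when
      the ioend carries an error.

          /* abort the size update if the ioend saw an IO error */
          if (ioend->io_error) {
                  xfs_trans_cancel(tp);
                  return ioend->io_error;
          }
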
    • xfs: pass xfsstats structures to handlers and macros · 80529c45
      Bill O'Donnell authored
      This patch is the next step toward per-fs xfs stats. The patch makes
      the show and clear routines able to handle any stats structure
      associated with a kobject.
      
      Instead of a single global xfsstats structure, add kobject and a pointer
      to a per-cpu struct xfsstats. Modify the macros that manipulate the stats
      accordingly: XFS_STATS_INC, XFS_STATS_DEC, and XFS_STATS_ADD now access
      xfsstats->xs_stats.
      
      The sysfs functions need to get from the kobject back to the xfsstats
      structure which contains it, and pass the pointer to the ->xs_stats
      percpu structure into the show & clear routines.
      Signed-off-by: Bill O'Donnell <billodo@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
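
      A hedged sketch of the kobject-to-stats recovery (struct and macro
      names illustrative; struct xfs_kobj is the existing XFS sysfs
      wrapper):

          struct xstats {
                  struct xfsstats __percpu        *xs_stats;
                  struct xfs_kobj                 xs_kobj;
          };

          /* sysfs show/clear handlers get back to their counters with */
          #define to_xstats(k)    container_of(k, struct xstats, xs_kobj.kobject)
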
    • xfs: consolidate sysfs ops · a27c2640
      Bill O'Donnell authored
      As a part of the series to move xfs global stats from procfs to sysfs,
      this patch consolidates the sysfs ops functions and removes redundancy.
      Signed-off-by: Bill O'Donnell <billodo@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>