1. 11 January 2016, 1 commit
    • xfs: eliminate committed arg from xfs_bmap_finish · f6106efa
      Committed by Eric Sandeen
      Calls to xfs_bmap_finish() and xfs_trans_ijoin(), and the
      associated comments were replicated several times across
      the attribute code, all dealing with what to do if the
      transaction was or wasn't committed.
      
      And in that replicated code, an ASSERT() test of an
      uninitialized variable occurs in several locations:
      
      	error = xfs_attr_thing(&args);
      	if (!error) {
      		error = xfs_bmap_finish(&args.trans, args.flist,
      					&committed);
      	}
      	if (error) {
      		ASSERT(committed);
      
      If the first xfs_attr_thing() failed, we'd skip the xfs_bmap_finish,
      never set "committed", and then test it in the ASSERT.
      
      Fix this up by moving the committed state internal to xfs_bmap_finish,
      and add a new inode argument.  If an inode is passed in, it is passed
      through to __xfs_trans_roll() and joined to the transaction there if
      the transaction was committed.
      
      xfs_qm_dqalloc() was a little unique in that it called bjoin rather
      than ijoin, but as Dave points out we can detect the committed state
      by checking whether (*tpp != tp).
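      
      As a rough illustration (variable and label names are not from the
      patch), the simplified call site becomes the following; no caller-side
      "committed" tracking or ASSERT remains:
      
      	error = xfs_attr_thing(&args);
      	if (!error)
      		error = xfs_bmap_finish(&args.trans, args.flist, dp);
      	if (error)
      		goto out_cancel;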
      
      Addresses-Coverity-Id: 102360
      Addresses-Coverity-Id: 102361
      Addresses-Coverity-Id: 102363
      Addresses-Coverity-Id: 102364
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  2. 08 January 2016, 2 commits
    • xfs: bmapbt checking on debug kernels too expensive · e3543819
      Committed by Dave Chinner
      For large sparse or fragmented files, checking every single entry in
      the bmapbt on every operation is prohibitively expensive, especially
      as such checks rarely discover problems during normal operations on
      high extent count files. Our regression tests don't tend to exercise
      files with hundreds of thousands to millions of extents, so mostly
      this isn't noticed.
      
      However, trying to run things like xfs_mdrestore of large filesystem
      dumps on a debug kernel quickly becomes impossible as the CPU is
      completely burnt up repeatedly walking the sparse file bmapbt that
      is generated for every allocation that is made.
      
      Hence, if the file has more than 10,000 extents, just don't bother
      with walking the tree to check it exhaustively. The btree code has
      checks that ensure that the newly inserted/removed/modified record
      is correctly ordered, so the entire tree walk in these cases has
      limited additional value.
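      
      A minimal sketch of the guard this adds (field access and threshold
      shown approximately, not verbatim):
      
      	/* Skip the exhaustive walk for very large extent counts. */
      	if (XFS_IFORK_NEXTENTS(ip, whichfork) > 10000)
      		return;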
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: add tracepoints to readpage calls · 121e213e
      Committed by Dave Chinner
      This allows us to see page cache driven readahead in action as it
      passes through XFS. This helps to understand buffered read
      throughput problems such as readahead IO sizes being too small
      for the underlying device to reach max throughput.
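      
      For illustration, the readpage hook with its new tracepoint looks
      roughly like this (a sketch, not necessarily the literal patch):
      
      	STATIC int
      	xfs_vm_readpage(
      		struct file		*unused,
      		struct page		*page)
      	{
      		trace_xfs_vm_readpage(page->mapping->host, 1);
      		return mpage_readpage(page, xfs_get_blocks);
      	}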
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  3. 04 January 2016, 11 commits
  4. 14 November 2015, 1 commit
  5. 11 November 2015, 1 commit
    • vfs: remove unused wrapper block_page_mkwrite() · 5c500029
      Committed by Ross Zwisler
      The function currently called "__block_page_mkwrite()" used to be called
      "block_page_mkwrite()" until a wrapper for this function was added by:
      
      commit 24da4fab ("vfs: Create __block_page_mkwrite() helper passing
      	error values back")
      
      This wrapper, the current "block_page_mkwrite()", is currently unused.
      __block_page_mkwrite() is used directly by ext4, nilfs2 and xfs.
      
      Remove the unused wrapper, rename __block_page_mkwrite() back to
      block_page_mkwrite() and update the comment above block_page_mkwrite().
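      
      After the rename, callers use the single remaining entry point, whose
      signature is roughly:
      
      	int block_page_mkwrite(struct vm_area_struct *vma, struct vm_fault *vmf,
      			       get_block_t get_block);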
      Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
      Reviewed-by: Jan Kara <jack@suse.com>
      Cc: Jan Kara <jack@suse.com>
      Cc: Christoph Hellwig <hch@lst.de>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  6. 10 November 2015, 3 commits
    • xfs: give all workqueues rescuer threads · 7a29ac47
      Committed by Chris Mason
      We're consistently hitting deadlocks here with XFS on recent kernels.
      After some digging through the crash files, it looks like everyone in
      the system is waiting for XFS to reclaim memory.
      
      Something like this:
      
      PID: 2733434  TASK: ffff8808cd242800  CPU: 19  COMMAND: "java"
       #0 [ffff880019c53588] __schedule at ffffffff818c4df2
       #1 [ffff880019c535d8] schedule at ffffffff818c5517
       #2 [ffff880019c535f8] _xfs_log_force_lsn at ffffffff81316348
       #3 [ffff880019c53688] xfs_log_force_lsn at ffffffff813164fb
       #4 [ffff880019c536b8] xfs_iunpin_wait at ffffffff8130835e
       #5 [ffff880019c53728] xfs_reclaim_inode at ffffffff812fd453
       #6 [ffff880019c53778] xfs_reclaim_inodes_ag at ffffffff812fd8c7
       #7 [ffff880019c53928] xfs_reclaim_inodes_nr at ffffffff812fe433
       #8 [ffff880019c53958] xfs_fs_free_cached_objects at ffffffff8130d3b9
       #9 [ffff880019c53968] super_cache_scan at ffffffff811a6f73
      #10 [ffff880019c539c8] shrink_slab at ffffffff811460e6
      #11 [ffff880019c53aa8] shrink_zone at ffffffff8114a53f
      #12 [ffff880019c53b48] do_try_to_free_pages at ffffffff8114a8ba
      #13 [ffff880019c53be8] try_to_free_pages at ffffffff8114ad5a
      #14 [ffff880019c53c78] __alloc_pages_nodemask at ffffffff8113e1b8
      #15 [ffff880019c53d88] alloc_kmem_pages_node at ffffffff8113e671
      #16 [ffff880019c53dd8] copy_process at ffffffff8104f781
      #17 [ffff880019c53ec8] do_fork at ffffffff8105129c
      #18 [ffff880019c53f38] sys_clone at ffffffff810515b6
      #19 [ffff880019c53f48] stub_clone at ffffffff818c8e4d
      
      xfs_log_force_lsn is waiting for logs to get cleaned, which is waiting
      for IO, which is waiting for workers to complete the IO which is waiting
      for worker threads that don't exist yet:
      
      PID: 2752451  TASK: ffff880bd6bdda00  CPU: 37  COMMAND: "kworker/37:1"
       #0 [ffff8808d20abbb0] __schedule at ffffffff818c4df2
       #1 [ffff8808d20abc00] schedule at ffffffff818c5517
       #2 [ffff8808d20abc20] schedule_timeout at ffffffff818c7c6c
       #3 [ffff8808d20abcc0] wait_for_completion_killable at ffffffff818c6495
       #4 [ffff8808d20abd30] kthread_create_on_node at ffffffff8106ec82
       #5 [ffff8808d20abdf0] create_worker at ffffffff8106752f
       #6 [ffff8808d20abe40] worker_thread at ffffffff810699be
       #7 [ffff8808d20abec0] kthread at ffffffff8106ef59
       #8 [ffff8808d20abf50] ret_from_fork at ffffffff818c8ac8
      
      I think we should be using WQ_MEM_RECLAIM to make sure this thread
      pool makes progress when we're not able to allocate new workers.
      
      [dchinner: make all workqueues WQ_MEM_RECLAIM]
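      
      The change pattern is roughly the following (the particular workqueue
      and flags shown are one example, not an exhaustive list):
      
      	mp->m_unwritten_workqueue = alloc_workqueue("xfs-conv/%s",
      			WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, mp->m_fsname);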
      Signed-off-by: Chris Mason <clm@fb.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: fix log recovery op header validation assert · 848ccfc8
      Committed by Brian Foster
      Commit 89cebc84 ("xfs: validate transaction header length on log
      recovery") added additional validation of the on-disk op header length
      to protect from buffer overflow during log recovery. It accounts for the
      fact that the transaction header can be split across multiple op
      headers. It added an assert for when this occurs that verifies the
      length of the second part of a split transaction header is less than a
      full transaction header. In other words, it expects that the first op
      header of a split transaction header includes at least some portion of
      the transaction header.
      
      This expectation is not always valid as a zero-length op header can
      exist for the first op header of a split transaction header (see
      xlog_recover_add_to_trans() for details). This means that the second op
      header can have a valid, full length transaction header and thus the
      full header is copied in xlog_recover_add_to_cont_trans(). Fix the
      assert in xlog_recover_add_to_cont_trans() to handle this case correctly
      and require that the op header length is less than or equal to a full
      transaction header.
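      
      Expressed as a sketch (variable names approximate), the relaxed check
      becomes:
      
      	/* the first op header may be zero length, so a full trans header can follow */
      	ASSERT(len <= sizeof(struct xfs_trans_header));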
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Fix error path in xfs_get_acl · edfb8ebc
      Committed by Andreas Gruenbacher
      Error codes from xfs_attr_get other than -ENOATTR were not properly
      reported.  Fix that.
      
      In addition, the declaration of struct xfs_inode in xfs_acl.h isn't needed.
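      
      A sketch of the corrected error handling (simplified, not the verbatim
      patch):
      
      	error = xfs_attr_get(ip, ea_name, (unsigned char *)xfs_acl, &len, ATTR_ROOT);
      	if (error) {
      		kmem_free(xfs_acl);
      		if (error == -ENOATTR)
      			return NULL;		/* no ACL stored: not an error */
      		return ERR_PTR(error);		/* report everything else */
      	}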
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  7. 07 November 2015, 1 commit
    • mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep... · d0164adc
      Committed by Mel Gorman
      mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd
      
      __GFP_WAIT has been used to identify atomic context in callers that hold
      spinlocks or are in interrupts.  They are expected to be high priority and
      have access to one of two watermarks lower than "min" which can be referred
      to as the "atomic reserve".  __GFP_HIGH users get access to the first
      lower watermark and can be called the "high priority reserve".
      
      Over time, callers had a requirement to not block when fallback options
      were available.  Some have abused __GFP_WAIT leading to a situation where
      an optimistic allocation with a fallback option can access atomic
      reserves.
      
      This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
      cannot sleep and have no alternative.  High priority users continue to use
      __GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
      are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
      callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
      redefined as a caller that is willing to enter direct reclaim and wake
      kswapd for background reclaim.
      
      This patch then converts a number of sites
      
      o __GFP_ATOMIC is used by callers that are high priority and have memory
        pools for those requests. GFP_ATOMIC uses this flag.
      
      o Callers that have a limited mempool to guarantee forward progress clear
        __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
        into this category where kswapd will still be woken but atomic reserves
        are not used as there is a one-entry mempool to guarantee progress.
      
      o Callers that are checking if they are non-blocking should use the
        helper gfpflags_allow_blocking() where possible. This is because
        checking for __GFP_WAIT as was done historically now can trigger false
        positives. Some exceptions like dm-crypt.c exist where the code intent
        is clearer if __GFP_DIRECT_RECLAIM is used instead of the helper due to
        flag manipulations.
      
      o Callers that built their own GFP flags instead of starting with GFP_KERNEL
        and friends now also need to specify __GFP_KSWAPD_RECLAIM.
      
      The first key hazard to watch out for is callers that removed __GFP_WAIT
      and were depending on access to atomic reserves for inconspicuous reasons.
      In some cases it may be appropriate for them to use __GFP_HIGH.
      
      The second key hazard is callers that assembled their own combination of
      GFP flags instead of starting with something like GFP_KERNEL.  They may
      now wish to specify __GFP_KSWAPD_RECLAIM.  It's almost certainly harmless
      if it's missed in most cases as other activity will wake kswapd.
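      
      For reference, the helper amounts to a single flag test, something like:
      
      	static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
      	{
      		return !!(gfp_flags & __GFP_DIRECT_RECLAIM);
      	}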
      Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Acked-by: Michal Hocko <mhocko@suse.com>
      Acked-by: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Vitaly Wool <vitalywool@gmail.com>
      Cc: Rik van Riel <riel@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 03 November 2015, 12 commits
    • xfs: optimise away log forces on timestamp updates for fdatasync · fc0561ce
      Committed by Dave Chinner
      xfs: timestamp updates cause excessive fdatasync log traffic
      
      Sage Weil reported that a ceph test workload was writing to the
      log on every fdatasync during an overwrite workload. Event tracing
      showed that the only metadata modification being made was the
      timestamp updates during the write(2) syscall, but fdatasync(2)
      is supposed to ignore them. The key observation was that the
      transactions in the log all looked like this:
      
      INODE: #regs: 4   ino: 0x8b  flags: 0x45   dsize: 32
      
      And contained a flags field of 0x45 or 0x85, and had data and
      attribute forks following the inode core. This means that the
      timestamp updates were triggering dirty relogging of previously
      logged parts of the inode that hadn't yet been flushed back to
      disk.
      
      There are two parts to this problem. The first is that XFS relogs
      dirty regions in subsequent transactions, so it carries around the
      fields that have been dirtied since the last time the inode was
      written back to disk, not since the last time the inode was forced
      into the log.
      
      The second part is that on v5 filesystems, the inode change count
      update during inode dirtying also sets the XFS_ILOG_CORE flag, so
      on v5 filesystems this makes a timestamp update dirty the entire
      inode.
      
      As a result when fdatasync is run, it looks at the dirty fields in
      the inode, and sees more than just the timestamp flag, even though
      the only metadata change since the last fdatasync was just the
      timestamps. Hence we force the log on every subsequent fdatasync
      even though it is not needed.
      
      To fix this, add a new field to the inode log item that tracks
      changes since the last time fsync/fdatasync forced the log to flush
      the changes to the journal. This flag is updated when we dirty the
      inode, but we do it before updating the change count so it does not
      carry the "core dirty" flag from timestamp updates. The fields are
      zeroed when the inode is marked clean (due to writeback/freeing) or
      when an fsync/datasync forces the log. Hence if we only dirty the
      timestamps on the inode between fsync/fdatasync calls, the fdatasync
      will not trigger another log force.
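      
      A rough sketch of how fsync then decides whether a log force is needed
      (details approximate):
      
      	/* datasync can skip the force if only timestamps changed since the last sync */
      	if (xfs_ipincount(ip) &&
      	    (!datasync ||
      	     (ip->i_itemp->ili_fsync_fields & ~XFS_ILOG_TIMESTAMP)))
      		lsn = ip->i_itemp->ili_last_lsn;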
      
      Over 100 runs of the test program:
      
      Ext4 baseline:
      	runtime: 1.63s +/- 0.24s
      	avg lat: 1.59ms +/- 0.24ms
      	iops: ~2000
      
      XFS, vanilla kernel:
              runtime: 2.45s +/- 0.18s
      	avg lat: 2.39ms +/- 0.18ms
      	log forces: ~400/s
      	iops: ~1000
      
      XFS, patched kernel:
              runtime: 1.49s +/- 0.26s
      	avg lat: 1.46ms +/- 0.25ms
      	log forces: ~30/s
      	iops: ~1500
      Reported-by: Sage Weil <sage@redhat.com>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: don't leak uuid table on rmmod · af3b6382
      Committed by Darrick J. Wong
      Don't leak the UUID table when the module is unloaded.
      (Found with kmemleak.)
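      
      The fix amounts to freeing the table on module unload, roughly:
      
      	STATIC void
      	xfs_uuid_table_free(void)
      	{
      		if (xfs_uuid_table_size == 0)
      			return;
      		kmem_free(xfs_uuid_table);
      		xfs_uuid_table = NULL;
      		xfs_uuid_table_size = 0;
      	}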
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: invalidate cached acl if set via ioctl · 47e1bf64
      Committed by Andreas Gruenbacher
      Setting or removing the "SGI_ACL_[FILE|DEFAULT]" attributes via the
      XFS_IOC_ATTRMULTI_BY_HANDLE ioctl completely bypasses the POSIX ACL
      infrastructure, like setting the "trusted.SGI_ACL_[FILE|DEFAULT]" xattrs
      did until commit 6caa1056.  Similar to that commit, invalidate cached
      acls when setting/removing them via the ioctl as well.
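      
      Conceptually, the ioctl set/remove path now drops the cached ACL for the
      matching attribute name, along these lines (sketch only):
      
      	if (!strcmp(name, SGI_ACL_FILE))
      		forget_cached_acl(inode, ACL_TYPE_ACCESS);
      	else if (!strcmp(name, SGI_ACL_DEFAULT))
      		forget_cached_acl(inode, ACL_TYPE_DEFAULT);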
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Plug memory leak in xfs_attrmulti_attr_set · 09cb22d2
      Committed by Andreas Gruenbacher
      When setting attributes via XFS_IOC_ATTRMULTI_BY_HANDLE, the user-space
      buffer is copied into a new kernel-space buffer via memdup_user; that
      buffer then isn't freed.
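      
      A simplified sketch of the plugged path:
      
      	kbuf = memdup_user(ubuf, len);
      	if (IS_ERR(kbuf))
      		return PTR_ERR(kbuf);
      
      	error = xfs_attr_set(XFS_I(inode), name, kbuf, len, flags);
      	kfree(kbuf);			/* previously leaked */
      	return error;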
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Validate the length of on-disk ACLs · 86a21c79
      Committed by Andreas Gruenbacher
      In xfs_acl_from_disk, instead of trusting that xfs_acl.acl_cnt is correct,
      make sure that the length of the attributes is correct as well.  Also, turn
      the aclp parameter into a const pointer.
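      
      Illustrative only (the exact struct layout and limits differ in the
      patch): the attribute length must cover the entry count the header
      claims, roughly:
      
      	count = be32_to_cpu(aclp->acl_cnt);
      	if (count > max_entries ||
      	    len < sizeof(aclp->acl_cnt) + count * sizeof(aclp->acl_entry[0]))
      		return ERR_PTR(-EFSCORRUPTED);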
      Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: invalidate cached acl if set directly via xattr · 67d8e04e
      Committed by Brian Foster
      ACLs are stored as extended attributes of the inode to which they apply.
      XFS converts the standard "system.posix_acl_[access|default]" attribute
      names used to control ACLs to "trusted.SGI_ACL_[FILE|DEFAULT]" as stored
      on-disk. These xattrs are directly exposed in on-disk format via
      getxattr/setxattr, without any ACL aware code in the path to perform
      validation, etc. This is partly historical and supports backup/restore
      applications such as xfsdump to back up and restore the binary blob that
      represents ACLs as-is.
      
      Andreas reports that the ACLs observed via the getfacl interface are not
      consistent when ACLs are set directly via the setxattr path. This occurs
      because the ACLs are cached in-core against the inode and the xattr path
      has no knowledge that the operation relates to ACLs.
      
      Update the xattr set codepath to trap writes of the special XFS ACL
      attributes and invalidate the associated cached ACL when this occurs.
      This ensures that the correct ACLs are used on a subsequent operation
      through the actual ACL interface.
      
      Note that this does not update or add support for setting the ACL xattrs
      directly beyond the restore use case that requires a correctly formatted
      binary blob and to restore a consistent i_mode at the same time. It is
      still possible for a root user to set an invalid or inconsistent (with
      i_mode) ACL blob on-disk and potentially cause corruption.
      
      [ With fixes from Andreas Gruenbacher. ]
      Reported-by: Andreas Gruenbacher <agruenba@redhat.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: xfs_filemap_pmd_fault treats read faults as write faults · 13ad4fe3
      Committed by Dave Chinner
      The code initially committed didn't have the same checks for write
      faults as the dax_pmd_fault code and hence treated all faults as
      write faults. We can get read faults through this path because there
      is no pmd_mkwrite path for write faults similar to the normal page
      fault path. Hence we need to ensure that we only do c/mtime updates
      on write faults, and freeze protection is unnecessary for read
      faults.
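      
      The fix essentially gates the timestamp update and freeze protection on
      the fault being a write, along these lines (sketch):
      
      	/* only write faults dirty the inode and need freeze protection */
      	if (flags & FAULT_FLAG_WRITE) {
      		sb_start_pagefault(inode->i_sb);
      		file_update_time(vma->vm_file);
      	}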
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: add ->pfn_mkwrite support for DAX · 3af49285
      Committed by Dave Chinner
      ->pfn_mkwrite support is needed so that when a page with allocated
      backing store takes a write fault we can check that the fault has
      not raced with a truncate and is pointing to a region beyond the
      current end of file.
      
      This also allows us to update the timestamp on the inode, too, which
      fixes a generic/080 failure.
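      
      A sketch of what the new handler checks (locking and return-code details
      omitted):
      
      	file_update_time(vma->vm_file);
      	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
      	if (vmf->pgoff >= size)
      		ret = VM_FAULT_SIGBUS;		/* fault raced with a truncate */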
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: DAX does not use IO completion callbacks · 01a155e6
      Committed by Dave Chinner
      For DAX, we are now doing block zeroing during allocation. This
      means we no longer need a special DAX fault IO completion callback
      to do unwritten extent conversion. Because mmap never extends the
      file size (it SEGVs the process) we don't need a callback to update
      the file size, either. Hence we can remove the completion callbacks
      from the __dax_fault and __dax_mkwrite calls.
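      
      In other words, the fault path can now pass a NULL completion callback,
      roughly (the get_block callback name here is approximate):
      
      	ret = __dax_fault(vma, vmf, xfs_get_blocks_dax_fault, NULL);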
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: Don't use unwritten extents for DAX · 1ca19157
      Committed by Dave Chinner
      DAX has a page fault serialisation problem with block allocation.
      Because it allows concurrent page faults and does not have a page
      lock to serialise faults to the same page, it can get two concurrent
      faults to the page that race.
      
      When two read faults race, this isn't a huge problem as the data
      underlying the page is not changing and so "detect and drop" works
      just fine. The issues are to do with write faults.
      
      When two write faults occur, we serialise block allocation in
      get_blocks() so only one fault will allocate the extent. It will,
      however, be marked as an unwritten extent, and that is where the
      problem lies - the DAX fault code cannot differentiate between a
      block that was just allocated and a block that was preallocated and
      needs zeroing. The result is that both write faults end up zeroing
      the block and attempting to convert it back to written.
      
      The problem is that the first fault can zero and convert before the
      second fault starts zeroing, resulting in the zeroing for the second
      fault overwriting the data that the first fault wrote with zeros.
      The second fault then attempts to convert the unwritten extent,
      which is then a no-op because it's already written. Data loss occurs
      as a result of this race.
      
      Because there is no sane locking construct in the page fault code
      that we can use for serialisation across the page faults, we need to
      ensure block allocation and zeroing occurs atomically in the
      filesystem. This means we can still take concurrent page faults and
      the only time they will serialise is in the filesystem
      mapping/allocation callback. The page fault code will always see
      written, initialised extents, so we will be able to remove the
      unwritten extent handling from the DAX code when all filesystems are
      converted.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: introduce BMAPI_ZERO for allocating zeroed extents · 3fbbbea3
      Committed by Dave Chinner
      To enable DAX to do atomic allocation of zeroed extents, we need to
      drive the block zeroing deep into the allocator. Because
      xfs_bmapi_write() can return merged extents on allocation that were
      only partially allocated (i.e. requested range spans allocated and
      hole regions, allocation into the hole was contiguous), we cannot
      zero the extent returned from xfs_bmapi_write() as that can
      overwrite existing data with zeros.
      
      Hence we have to drive the extent zeroing into the allocation code,
      prior to where we merge the extents into the BMBT and return the
      resultant map. This means we need to propagate this need down to
      the xfs_alloc_vextent() and issue the block zeroing at this point.
      
      While this functionality is being introduced for DAX, there is no
      reason why it is specific to DAX - we can pre-zero blocks during the
      allocation transaction on any type of device. It's just slow (and
      usually slower than unwritten allocation and conversion) on
      traditional block devices so doesn't tend to get used. We can,
      however, hook hardware zeroing optimisations via sb_issue_zeroout()
      to this operation, so it may be useful in future and hence the
      "allocate zeroed blocks" API needs to be implementation neutral.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: fix inode size update overflow in xfs_map_direct() · 3e12dbbd
      Committed by Dave Chinner
      Both direct IO and DAX pass an offset and count into get_blocks that
      will overflow a s64 variable when an IO goes into the last supported
      block in a file (i.e. at offset 2^63 - 1FSB bytes). This can be seen
      from the tracing:
      
      xfs_get_blocks_alloc: [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct:     [...] offset 0x7ffffffffffff000 count 4096
      xfs_gbmap_direct_none:[...] offset 0x7ffffffffffff000 count 4096
      
      0x7ffffffffffff000 + 4096 = 0x8000000000000000, and hence that
      overflows the s64 offset and we fail to detect the need for a
      filesize update and an ioend is not allocated.
      
      This is *mostly* avoided for direct IO because such extending IOs
      occur with full block allocation, and so the "IS_UNWRITTEN()" check
      still evaluates as true and we get an ioend that way. However, doing
      single sector extending IOs to this last block will expose the fact
      that file size updates will not occur after the first allocating
      direct IO as the overflow will then be exposed.
      
      There is one further complexity: the DAX page fault path also
      exposes the same issue in block allocation. However, page faults
      cannot extend the file size, so in this case we want to allocate the
      block but do not want to allocate an ioend to enable file size
      update at IO completion. Hence we now need to distinguish between
      the direct IO path allocation and dax fault path allocation to
      avoid leaking ioend structures.
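      
      Worked through explicitly (types illustrative), the failing comparison is:
      
      	xfs_off_t	offset = 0x7ffffffffffff000LL;	/* last supported block */
      	xfs_off_t	count  = 4096;
      
      	/*
      	 * offset + count == 0x8000000000000000, which is negative as an s64,
      	 * so a naive "offset + count > i_size" end-of-file test never fires.
      	 */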
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
  9. 02 November 2015, 1 commit
  10. 19 October 2015, 2 commits
  11. 12 October 2015, 5 commits