1. 02 Mar, 2017 1 commit
  2. 02 Feb, 2017 1 commit
  3. 26 Jan, 2017 1 commit
    • D
      xfs: clear _XBF_PAGES from buffers when readahead page · 2aa6ba7b
      Committed by Darrick J. Wong
      If we try to allocate memory pages to back an xfs_buf that we're trying
      to read, it's possible that we'll be so short on memory that the page
      allocation fails.  For a blocking read we'll just wait, but for
      readahead we simply dump all the pages we've collected so far.
      
      Unfortunately, after dumping the pages we neglect to clear the
      _XBF_PAGES state, which means that the subsequent call to xfs_buf_free
      thinks that b_pages still points to pages we own.  It then double-frees
      the b_pages pages.
      
      This results in screaming about negative page refcounts from the memory
      manager, which xfs oughtn't be triggering.  To reproduce this case,
      mount a filesystem where the size of the inodes far outweighs the
      available memory (a ~500M inode filesystem on a VM with 300MB memory
      did the trick here) and run bulkstat in parallel with other memory
      eating processes to put a huge load on the system.  The "check summary"
      phase of xfs_scrub also works for this purpose.
      Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      2aa6ba7b
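      As an illustration of the fix described above, a minimal sketch of the readahead
      bail-out, assuming a hypothetical helper called from the page allocation loop;
      only the flag clearing is the point, the surrounding code is elided:

      /*
       * Hypothetical helper for the readahead bail-out: give back the pages
       * gathered so far and, crucially, clear _XBF_PAGES so a later
       * xfs_buf_free() does not try to free b_pages a second time.
       */
      static void
      xfs_buf_readahead_alloc_failed(struct xfs_buf *bp, int pages_gathered)
      {
      	int	i;

      	for (i = 0; i < pages_gathered; i++)
      		__free_page(bp->b_pages[i]);
      	bp->b_flags &= ~_XBF_PAGES;	/* the step the original code missed */
      }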
  4. 09 Dec, 2016 1 commit
  5. 07 Dec, 2016 1 commit
    • L
      xfs: use rhashtable to track buffer cache · 6031e73a
      Committed by Lucas Stach
      On filesystems with a lot of metadata and in metadata intensive workloads
      xfs_buf_find() is showing up at the top of the CPU cycles trace. Most of
      the CPU time is spent on CPU cache misses while traversing the rbtree.
      
      As the buffer cache does not need any kind of ordering, but does need
      fast lookups, a hashtable is the natural data structure to use. The rhashtable
      infrastructure provides a self-scaling hashtable implementation and
      allows lookups to proceed while the table is going through a resize
      operation.
      
      This reduces the CPU time spent on the lookups to 1/3 even for small
      filesystems with a relatively small number of cached buffers, and with
      possibly much larger gains on more heavily loaded filesystems.
      
      [dchinner: reduce minimum hash size to an acceptable size for large
      	   filesystems with many AGs with no active use.]
      [dchinner: remove stale rbtree asserts.]
      [dchinner: use xfs_buf_map for compare function argument.]
      [dchinner: make functions static.]
      [dchinner: remove redundant comments.]
      Signed-off-by: Lucas Stach <dev@lynxeye.de>
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      6031e73a
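      A sketch of what an rhashtable-backed per-AG lookup can look like; the key and
      head field names and the parameter values are assumptions for illustration (the
      actual patch uses a custom compare function taking an xfs_buf_map, per the note
      above):

      #include <linux/rhashtable.h>

      static const struct rhashtable_params xfs_buf_hash_params = {
      	.min_size		= 32,	/* rhashtable resizes itself on demand */
      	.key_len		= sizeof(xfs_daddr_t),
      	.key_offset		= offsetof(struct xfs_buf, b_bn),
      	.head_offset		= offsetof(struct xfs_buf, b_rhash_head),
      	.automatic_shrinking	= true,
      };

      /* per-AG lookup, replacing the old rbtree walk */
      static struct xfs_buf *
      xfs_buf_hash_find(struct xfs_perag *pag, xfs_daddr_t blkno)
      {
      	return rhashtable_lookup_fast(&pag->pag_buf_hash, &blkno,
      				      xfs_buf_hash_params);
      }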
  6. 01 Nov, 2016 1 commit
  7. 26 Aug, 2016 1 commit
    • B
      xfs: prevent dropping ioend completions during buftarg wait · 800b2694
      Committed by Brian Foster
      xfs_wait_buftarg() waits for all pending I/O, drains the ioend
      completion workqueue and walks the LRU until all buffers in the cache
      have been released. This is traditionally an unmount operation, but the
      mechanism is also reused during filesystem freeze.
      
      xfs_wait_buftarg() invokes drain_workqueue() as part of the quiesce,
      which is intended more for a shutdown sequence in that it indicates to
      the queue that new operations are not expected once the drain has begun.
      New work jobs after this point result in a WARN_ON_ONCE() and are
      otherwise dropped.
      
      With filesystem freeze, however, read operations are allowed and can
      proceed during or after the workqueue drain. If such a read occurs
      during the drain sequence, the workqueue infrastructure complains about
      the queued ioend completion work item and drops it on the floor. As a
      result, the buffer remains on the LRU and the freeze never completes.
      
      Despite the fact that the overall buffer cache cleanup is not necessary
      during freeze, fix up this operation such that it is safe to invoke
      during non-unmount quiesce operations. Replace the drain_workqueue()
      call with flush_workqueue(), which runs a similar serialization on
      pending workqueue jobs without causing new jobs to be dropped. This is
      safe for unmount as unmount independently locks out new operations by
      the time xfs_wait_buftarg() is invoked.
      
      cc: <stable@vger.kernel.org>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      800b2694
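      In code terms the change boils down to the following substitution in
      xfs_wait_buftarg(); the workqueue pointer name is assumed here:

      	/* before: rejects (and warns about) any ioend queued after this */
      	drain_workqueue(btp->bt_mount->m_buf_workqueue);

      	/* after: waits for pending ioends but keeps accepting new ones */
      	flush_workqueue(btp->bt_mount->m_buf_workqueue);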
  8. 17 Aug, 2016 1 commit
    • B
      xfs: don't assert fail on non-async buffers on ioacct decrement · 4dd3fd71
      Committed by Brian Foster
      The buffer I/O accounting mechanism tracks async buffers under I/O.  As
      an optimization, the buffer I/O count is incremented only once on the
      first async I/O for a given hold cycle of a buffer and decremented once
      the buffer is released to the LRU (or freed).
      
      xfs_buf_ioacct_dec() has an ASSERT() check for an XBF_ASYNC buffer, but
      we have one or two corner cases where a buffer can be submitted for I/O
      multiple times via different methods in a single hold cycle. If an async
      I/O occurs first, the I/O count is incremented. If a sync I/O occurs
      before the hold count drops, XBF_ASYNC is cleared by the time the I/O
      count is decremented.
      
      Remove the async assert check from xfs_buf_ioacct_dec() as this is a
      perfectly valid scenario. For the purposes of I/O accounting, we really
      only care about the buffer async state at I/O submission time.
      Discovered-and-analyzed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      4dd3fd71
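      A sketch of the decrement keyed off an in-flight state flag rather than
      XBF_ASYNC; flag and counter names are assumptions for illustration:

      static inline void
      xfs_buf_ioacct_dec(struct xfs_buf *bp)
      {
      	/* not accounted in this hold cycle, nothing to undo */
      	if (!(bp->b_flags & _XBF_IN_FLIGHT))
      		return;

      	/* no XBF_ASYNC assert: a later sync I/O may have cleared the flag */
      	bp->b_flags &= ~_XBF_IN_FLIGHT;
      	percpu_counter_dec(&bp->b_target->bt_io_count);
      }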
  9. 20 Jul, 2016 3 commits
    • B
      xfs: track and serialize in-flight async buffers against unmount · 9c7504aa
      Committed by Brian Foster
      Newly allocated XFS metadata buffers are added to the LRU once the hold
      count is released, which typically occurs after I/O completion. There is
      currently no other mechanism that tracks the existence or I/O state of
      a new buffer. Further, readahead I/O tends to be submitted
      asynchronously by nature, which means the I/O can remain in flight and
      actually complete long after the calling context is gone. This means
      that file descriptors or any other holds on the filesystem can be
      released, allowing the filesystem to be unmounted while I/O is still in
      flight. When I/O completion occurs, core data structures may have been
      freed, causing completion to run into invalid memory accesses and likely
      to panic.
      
      This problem is reproduced on XFS via directory readahead. A filesystem
      is mounted, a directory is opened/closed and the filesystem immediately
      unmounted. The open/close cycle triggers a directory readahead that if
      delayed long enough, runs buffer I/O completion after the unmount has
      completed.
      
      To address this problem, add a mechanism to track all in-flight,
      asynchronous buffers using per-cpu counters in the buftarg. The buffer
      is accounted on the first I/O submission after the current reference is
      acquired and unaccounted once the buffer is returned to the LRU or
      freed. Update xfs_wait_buftarg() to wait on all in-flight I/O before
      walking the LRU list. Once in-flight I/O has completed and the workqueue
      has drained, all new buffers should have been released onto the LRU.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      9c7504aa
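      A sketch of the two halves of the mechanism described above: account a buffer
      once per hold cycle at async submission, and have xfs_wait_buftarg() wait for
      the per-cpu sum to drain before walking the LRU. Flag, counter, and sleep
      interval are assumptions for illustration:

      static inline void
      xfs_buf_ioacct_inc(struct xfs_buf *bp)
      {
      	if (!(bp->b_flags & XBF_ASYNC) || (bp->b_flags & _XBF_IN_FLIGHT))
      		return;		/* only async I/O, only once per hold cycle */
      	bp->b_flags |= _XBF_IN_FLIGHT;
      	percpu_counter_inc(&bp->b_target->bt_io_count);
      }

      	/* in xfs_wait_buftarg(), before walking the LRU */
      	while (percpu_counter_sum(&btp->bt_io_count))
      		msleep(100);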
    • B
      xfs: exclude never-released buffers from buftarg I/O accounting · c891c30a
      Committed by Brian Foster
      The upcoming buftarg I/O accounting mechanism maintains a count of
      all buffers that have undergone I/O in the current hold-release
      cycle.  Certain buffers associated with core infrastructure (e.g.,
      the xfs_mount superblock buffer, log buffers) are never released,
      however. This means that accounting I/O submission on such buffers
      elevates the buftarg count indefinitely and could lead to lockup on
      unmount.
      
      Define a new buffer flag to explicitly exclude buffers from buftarg
      I/O accounting. Set the flag on the superblock and associated log
      buffers.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      c891c30a
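      In outline, the opt-out is one extra test in the accounting increment plus
      setting the flag on buffers that never reach the LRU; the call site shown is
      an assumption for illustration:

      	/* e.g. when setting up a never-released buffer such as the superblock */
      	bp->b_flags |= XBF_NO_IOACCT;	/* exclude from bt_io_count */

      	/* and in the accounting increment sketched above */
      	if (bp->b_flags & (XBF_NO_IOACCT | _XBF_IN_FLIGHT))
      		return;		/* excluded, or already counted */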
    • E
      xfs: remove extraneous buffer flag changes · 0b4db5df
      Committed by Eric Sandeen
      Fix up a couple places where extra flag manipulation occurs.
      
      In the first case we clear XBF_ASYNC and then immediately reset it -
      so don't bother clearing in the first place.
      
      In the 2nd case we are at a point in the function where the buffer
      must already be async, so there is no need to reset it.
      
      Add consistent spacing around the " | " while we're at it.
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      0b4db5df
  10. 21 Jun, 2016 1 commit
  11. 10 Jun, 2016 1 commit
  12. 08 Jun, 2016 3 commits
  13. 01 Jun, 2016 1 commit
    • D
      xfs: reduce lock hold times in buffer writeback · 26f1fe85
      Committed by Dave Chinner
      When we have a lot of metadata to flush from the AIL, the buffer
      list can get very long. The current submission code tries to batch
      submission to optimise IO order of the metadata (i.e. ascending
      block order) to maximise block layer merging or IO to adjacent
      metadata blocks.
      
      Unfortunately, the method used can result in long lock times
      occurring as buffers locked early on in the buffer list might not be
      dispatched until the end of the IO list processing. This is because
      sorting does not occur until after the buffer list has been processed
      and the buffers that are going to be submitted are locked. Hence
      when the buffer list is several thousand buffers long, the lock hold
      times before IO dispatch can be significant.
      
      To fix this, sort the buffer list before we start trying to lock and
      submit buffers. This means we can now submit buffers immediately
      after they are locked, allowing merging to occur immediately on the
      plug and dispatch to occur as quickly as possible. This means there
      is minimal delay between locking the buffer and IO submission
      occurring, hence reducing the worst case lock hold times seen during
      delayed write buffer IO submission significantly.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      26f1fe85
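      A sketch of the reordered submission: sort the delwri list by disk address up
      front, then lock and submit each buffer as it is reached. The comparison helper
      is modelled on what an ascending block-order sort needs; names are illustrative:

      #include <linux/list_sort.h>

      static int
      xfs_buf_cmp(void *priv, struct list_head *a, struct list_head *b)
      {
      	struct xfs_buf	*ap = container_of(a, struct xfs_buf, b_list);
      	struct xfs_buf	*bp = container_of(b, struct xfs_buf, b_list);

      	if (ap->b_maps[0].bm_bn < bp->b_maps[0].bm_bn)
      		return -1;
      	if (ap->b_maps[0].bm_bn > bp->b_maps[0].bm_bn)
      		return 1;
      	return 0;
      }

      	/* in the delwri submit path: sort first, then lock-and-submit */
      	list_sort(NULL, buffer_list, xfs_buf_cmp);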
  14. 18 May, 2016 1 commit
    • B
      xfs: buffer ->bi_end_io function requires irq-safe lock · 9bdd9bd6
      Committed by Brian Foster
      Reports have surfaced of a lockdep splat complaining about an
      irq-safe -> irq-unsafe locking order in the xfs_buf_bio_end_io() bio
      completion handler. This only occurs when I/O errors are present
      because bp->b_lock is only acquired in this context to protect
      setting an error on the buffer. The problem is that this lock can be
      acquired with the (request_queue) q->queue_lock held. See
      scsi_end_request() or ata_qc_schedule_eh(), for example.
      
      Replace the locked test/set of b_io_error with a cmpxchg() call.
      This eliminates the need for the lock and thus the lock ordering
      problem goes away.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      9bdd9bd6
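      The replacement is a single atomic compare-and-exchange that latches only the
      first error, so no spinlock is needed in the (possibly irq) completion context;
      a sketch of the pattern:

      	/* in the bio completion handler, instead of spin_lock(&bp->b_lock) */
      	if (error)
      		cmpxchg(&bp->b_io_error, 0, error);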
  15. 10 Feb, 2016 1 commit
  16. 19 Jan, 2016 1 commit
    • D
      xfs: log mount failures don't wait for buffers to be released · 85bec546
      Committed by Dave Chinner
      Recently I've been seeing xfs/051 fail on 1k block size filesystems.
      Trying to trace the events during the test led to the problem going
      away, indicating that it was a race condition that led to this
      ASSERT failure:
      
      XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 156
      .....
      [<ffffffff814e1257>] xfs_free_perag+0x87/0xb0
      [<ffffffff814e21b9>] xfs_mountfs+0x4d9/0x900
      [<ffffffff814e5dff>] xfs_fs_fill_super+0x3bf/0x4d0
      [<ffffffff811d8800>] mount_bdev+0x180/0x1b0
      [<ffffffff814e3ff5>] xfs_fs_mount+0x15/0x20
      [<ffffffff811d90a8>] mount_fs+0x38/0x170
      [<ffffffff811f4347>] vfs_kern_mount+0x67/0x120
      [<ffffffff811f7018>] do_mount+0x218/0xd60
      [<ffffffff811f7e5b>] SyS_mount+0x8b/0xd0
      
      When I finally caught it with tracing enabled, I saw that AG 2 had
      an elevated reference count and a buffer was responsible for it. I
      tracked down the specific buffer, and found that it was missing the
      final reference count release that would put it back on the LRU and
      hence be found by xfs_wait_buftarg() calls in the log mount failure
      handling.
      
      The last four traces for the buffer before the assert were (trimmed
      for relevance)
      
      kworker/0:1-5259   xfs_buf_iodone:        hold 2  lock 0 flags ASYNC
      kworker/0:1-5259   xfs_buf_ioerror:       hold 2  lock 0 error -5
      mount-7163	   xfs_buf_lock_done:     hold 2  lock 0 flags ASYNC
      mount-7163	   xfs_buf_unlock:        hold 2  lock 1 flags ASYNC
      
      This is an async write that is completing, so there's nobody waiting
      for it directly.  Hence we call xfs_buf_relse() once all the
      processing is complete. That does:
      
      static inline void xfs_buf_relse(xfs_buf_t *bp)
      {
      	xfs_buf_unlock(bp);
      	xfs_buf_rele(bp);
      }
      
      Now, it's clear that mount is waiting on the buffer lock, and that
      it has been released by xfs_buf_relse() and gained by mount. This is
      expected, because at this point the mount process is in
      xfs_buf_delwri_submit() waiting for all the IO it submitted to
      complete.
      
      The mount process, however, is waiting on the lock for the buffer
      because it is in xfs_buf_delwri_submit(). This waits for IO
      completion, but it doesn't wait for the buffer reference owned by
      the IO to go away. The mount process collects all the completions,
      fails the log recovery, and the higher level code then calls
      xfs_wait_buftarg() to free all the remaining buffers in the
      filesystem.
      
      The issue is that on unlocking the buffer, the scheduler has decided
      that the mount process has higher priority than the kworker
      thread that is running the IO completion, and so immediately
      switched contexts to the mount process from the semaphore unlock
      code, hence preventing the kworker thread from finishing the IO
      completion and releasing the IO reference to the buffer.
      
      Hence by the time that xfs_wait_buftarg() is run, the buffer still
      has an active reference and so isn't on the LRU list that the
      function walks to free the remaining buffers. Hence we miss that
      buffer and continue onwards to tear down the mount structures,
      at which time we find a stray reference count on the perag
      structure. On a non-debug kernel, this will be ignored and the
      structure torn down and freed. Hence when the kworker thread is then
      rescheduled and the buffer released and freed, it will access a
      freed perag structure.
      
      The problem here is that when the log mount fails, we still need to
      quiesce the log to ensure that the IO workqueues have returned to
      idle before we run xfs_wait_buftarg(). By synchronising the
      workqueues, we ensure that all IO completions are fully processed,
      not just to the point where buffers have been unlocked. This ensures
      we don't end up in the situation above.
      
      cc: <stable@vger.kernel.org> # 3.18
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      85bec546
  17. 12 Jan, 2016 1 commit
    • D
      xfs: inode recovery readahead can race with inode buffer creation · b79f4a1c
      Committed by Dave Chinner
      When we do inode readahead in log recovery, we can do the
      readahead before we've replayed the icreate transaction that stamps
      the buffer with inode cores. The inode readahead verifier catches
      this and marks the buffer as !done to indicate that it doesn't yet
      contain valid inodes.
      
      In adding buffer error notification (i.e. setting b_error = -EIO at
      the same time as we clear the done flag) to such a readahead
      verifier failure, we can then get subsequent inode recovery failing
      with this error:
      
      XFS (dm-0): metadata I/O error: block 0xa00060 ("xlog_recover_do..(read#2)") error 5 numblks 32
      
      This occurs when readahead completion races with icreate item replay
      such as:
      
      	inode readahead
      		find buffer
      		lock buffer
      		submit RA io
      	....
      	icreate recovery
      	    xfs_trans_get_buffer
      		find buffer
      		lock buffer
      		<blocks on RA completion>
      	.....
      	<ra completion>
      		fails verifier
      		clear XBF_DONE
      		set bp->b_error = -EIO
      		release and unlock buffer
      	<icreate gains lock>
      	icreate initialises buffer
      	marks buffer as done
      	adds buffer to delayed write queue
      	releases buffer
      
      At this point, we have an initialised inode buffer that is up to
      date but has an -EIO state registered against it. When we finally
      get to recovering an inode in that buffer:
      
      	inode item recovery
      	    xfs_trans_read_buffer
      		find buffer
      		lock buffer
      		sees XBF_DONE is set, returns buffer
      	    sees bp->b_error is set
      		fail log recovery!
      
      Essentially, we need xfs_trans_get_buf_map() to clear the error status of
      the buffer when doing a lookup. This function returns uninitialised
      buffers, so the buffer returned can not be in an error state and
      none of the code that uses this function expects b_error to be set
      on return. Indeed, there is an ASSERT(!bp->b_error); in the
      transaction case in xfs_trans_get_buf_map() that would have caught
      this if log recovery used transactions....
      
      This patch firstly changes the inode readahead failure to set -EIO
      on the buffer, and secondly changes xfs_buf_get_map() to never
      return a buffer with an error state set so this first change doesn't
      cause unexpected log recovery failures.
      
      cc: <stable@vger.kernel.org> # 3.12 - current
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Brian Foster <bfoster@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      b79f4a1c
  18. 07 Jan, 2016 1 commit
  19. 04 Jan, 2016 1 commit
  20. 12 Oct, 2015 2 commits
  21. 25 Aug, 2015 1 commit
  22. 29 Jul, 2015 2 commits
    • C
      block: add a bi_error field to struct bio · 4246a0b6
      Committed by Christoph Hellwig
      Currently we have two different ways to signal an I/O error on a BIO:
      
       (1) by clearing the BIO_UPTODATE flag
       (2) by returning a Linux errno value to the bi_end_io callback
      
      The first one has the drawback of only communicating a single possible
      error (-EIO), and the second one has the drawback of not being persistent
      when bios are queued up, and are not passed along from child to parent
      bio in the ever more popular chaining scenario.  Having both mechanisms
      available has the additional drawback of utterly confusing driver authors
      and introducing bugs where various I/O submitters only deal with one of
      them, and the others have to add boilerplate code to deal with both kinds
      of error returns.
      
      So add a new bi_error field to store an errno value directly in struct
      bio and remove the existing mechanisms to clean all this up.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: NeilBrown <neilb@suse.com>
      Signed-off-by: Jens Axboe <axboe@fb.com>
      4246a0b6
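      A sketch of a completion handler under the new convention: ->bi_end_io takes
      only the bio, and the error is read from bio->bi_error (the 4.3-era interface
      this patch introduces); the handler itself is illustrative:

      static void
      my_end_io(struct bio *bio)
      {
      	if (bio->bi_error)
      		pr_err("I/O failed: %d\n", bio->bi_error);
      	bio_put(bio);
      }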
    • J
      xfs: Use consistent logging message prefixes · f41febd2
      Committed by Joe Perches
      The second and subsequent lines of multi-line logging messages
      are not prefixed with the same information as the first line.
      
      Separate messages with newlines into multiple calls to ensure
      consistent prefixing and allow easier grep use.
      Signed-off-by: Joe Perches <joe@perches.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      f41febd2
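      The conversion pattern, with an illustrative message:

      	/* before: only the first line gets the "XFS (dev):" prefix */
      	xfs_warn(mp,
      		"device reports unusual geometry\nperformance may suffer");

      	/* after: one call per line, every line prefixed and greppable */
      	xfs_warn(mp, "device reports unusual geometry");
      	xfs_warn(mp, "performance may suffer");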
  23. 22 Jun, 2015 1 commit
  24. 13 Feb, 2015 2 commits
    • V
      list_lru: add helpers to isolate items · 3f97b163
      Committed by Vladimir Davydov
      Currently, the isolate callback passed to the list_lru_walk family of
      functions is supposed to just delete an item from the list upon returning
      LRU_REMOVED or LRU_REMOVED_RETRY, while the nr_items counter is fixed by
      __list_lru_walk_one after the callback returns.  Since the callback is
      allowed to drop the lock after removing an item (it has to return
      LRU_REMOVED_RETRY then), the nr_items can be less than the actual number
      of elements on the list even if we check them under the lock.  This makes
      it difficult to move items from one list_lru_one to another, which is
      required for per-memcg list_lru reparenting - we can't just splice the
      lists, we have to move entries one by one.
      
      This patch therefore introduces helpers that must be used by callback
      functions to isolate items instead of raw list_del/list_move.  These are
      list_lru_isolate and list_lru_isolate_move.  They not only remove the
      entry from the list, but also fix the nr_items counter, making sure
      nr_items always reflects the actual number of elements on the list if
      checked under the appropriate lock.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      3f97b163
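      A sketch of an isolate callback written against the new helpers; the callback
      signature and helper names follow this patch, while the dispose-list handling
      is illustrative:

      static enum lru_status
      my_lru_isolate(struct list_head *item, struct list_lru_one *lru,
      	       spinlock_t *lru_lock, void *arg)
      {
      	struct list_head *dispose = arg;

      	/* removes the item and fixes lru->nr_items in one step */
      	list_lru_isolate_move(lru, item, dispose);
      	return LRU_REMOVED;
      }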
    • V
      list_lru: introduce list_lru_shrink_{count,walk} · 503c358c
      Committed by Vladimir Davydov
      Kmem accounting of memcg is unusable now, because it lacks slab shrinker
      support.  That means when we hit the limit we will get ENOMEM w/o any
      chance to recover.  What we should do then is to call shrink_slab, which
      would reclaim old inode/dentry caches from this cgroup.  This is what
      this patch set is intended to do.
      
      Basically, it does two things.  First, it introduces the notion of
      per-memcg slab shrinker.  A shrinker that wants to reclaim objects per
      cgroup should mark itself as SHRINKER_MEMCG_AWARE.  Then it will be
      passed the memory cgroup to scan from in shrink_control->memcg.  For
      such shrinkers shrink_slab iterates over the whole cgroup subtree under
      the target cgroup and calls the shrinker for each kmem-active memory
      cgroup.
      
      Secondly, this patch set makes the list_lru structure per-memcg.  It's
      done transparently to list_lru users - everything they have to do is to
      tell list_lru_init that they want memcg-aware list_lru.  Then the
      list_lru will automatically distribute objects among per-memcg lists
      based on which cgroup the object is accounted to. This way, to make FS
      shrinkers (icache, dcache) memcg-aware we only need to make them use
      memcg-aware list_lru, and this is what this patch set does.
      
      As before, this patch set only enables per-memcg kmem reclaim when the
      pressure goes from memory.limit, not from memory.kmem.limit.  Handling
      memory.kmem.limit is going to be tricky due to GFP_NOFS allocations, and
      it is still unclear whether we will have this knob in the unified
      hierarchy.
      
      This patch (of 9):
      
      NUMA aware slab shrinkers use the list_lru structure to distribute
      objects coming from different NUMA nodes to different lists.  Whenever
      such a shrinker needs to count or scan objects from a particular node,
      it issues commands like this:
      
              count = list_lru_count_node(lru, sc->nid);
              freed = list_lru_walk_node(lru, sc->nid, isolate_func,
                                         isolate_arg, &sc->nr_to_scan);
      
      where sc is an instance of the shrink_control structure passed to it
      from vmscan.
      
      To simplify this, let's add special list_lru functions to be used by
      shrinkers, list_lru_shrink_count() and list_lru_shrink_walk(), which
      consolidate the nid and nr_to_scan arguments in the shrink_control
      structure.
      
      This will also allow us to avoid patching shrinkers that use list_lru
      when we make shrink_slab() per-memcg - all we will have to do is extend
      the shrink_control structure to include the target memcg and make
      list_lru_shrink_{count,walk} handle this appropriately.
      Signed-off-by: Vladimir Davydov <vdavydov@parallels.com>
      Suggested-by: Dave Chinner <david@fromorbit.com>
      Cc: Johannes Weiner <hannes@cmpxchg.org>
      Cc: Michal Hocko <mhocko@suse.cz>
      Cc: Greg Thelen <gthelen@google.com>
      Cc: Glauber Costa <glommer@gmail.com>
      Cc: Alexander Viro <viro@zeniv.linux.org.uk>
      Cc: Christoph Lameter <cl@linux.com>
      Cc: Pekka Enberg <penberg@kernel.org>
      Cc: David Rientjes <rientjes@google.com>
      Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
      Cc: Tejun Heo <tj@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      503c358c
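      A sketch of a shrinker converted to the new helpers (reusing the isolate
      callback sketched above); the lru object and the dispose handling are
      placeholders:

      static unsigned long
      my_cache_count(struct shrinker *shrink, struct shrink_control *sc)
      {
      	return list_lru_shrink_count(&my_lru, sc);
      }

      static unsigned long
      my_cache_scan(struct shrinker *shrink, struct shrink_control *sc)
      {
      	LIST_HEAD(dispose);
      	unsigned long freed;

      	freed = list_lru_shrink_walk(&my_lru, sc, my_lru_isolate, &dispose);
      	/* ... free everything collected on the dispose list ... */
      	return freed;
      }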
  25. 04 Dec, 2014 1 commit
    • B
      xfs: split metadata and log buffer completion to separate workqueues · b29c70f5
      Committed by Brian Foster
      XFS traditionally sends all buffer I/O completion work to a single
      workqueue. This includes metadata buffer completion and log buffer
      completion. The log buffer completion requires a high priority queue to
      prevent stalls due to log forces getting stuck behind other queued work.
      
      Rather than continue to prioritize all buffer I/O completion due to the
      needs of log completion, split log buffer completion off to
      m_log_workqueue and move the high priority flag from m_buf_workqueue to
      m_log_workqueue.
      
      Add a b_ioend_wq wq pointer to xfs_buf to allow completion workqueue
      customization on a per-buffer basis. Initialize b_ioend_wq to
      m_buf_workqueue by default in the generic buffer I/O submission path.
      Finally, override the default wq with the high priority m_log_workqueue
      in the log buffer I/O submission path.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      b29c70f5
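      In outline, the per-buffer workqueue selection looks like the following; the
      field names follow the description above but are not verified here:

      	/* generic submission path: default to the per-mount buffer workqueue */
      	bp->b_ioend_wq = bp->b_target->bt_mount->m_buf_workqueue;

      	/* log buffer submission: override with the high-priority queue */
      	bp->b_ioend_wq = mp->m_log_workqueue;

      	/* I/O completion then queues to whichever workqueue was chosen */
      	queue_work(bp->b_ioend_wq, &bp->b_ioend_work);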
  26. 28 Nov, 2014 3 commits
    • C
      xfs: merge xfs_ag.h into xfs_format.h · 4fb6e8ad
      Committed by Christoph Hellwig
      More on-disk format consolidation.  A few declarations that weren't on-disk
      format related move into more suitable spots.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      4fb6e8ad
    • E
      xfs: catch invalid negative blknos in _xfs_buf_find() · db52d09e
      Committed by Eric Sandeen
      Here blkno is a daddr_t, which is a __s64; it's possible to hold
      a value which is negative, and thus pass the (blkno >= eofs)
      test.  Then we try to do a xfs_perag_get() for a ridiculous
      agno via xfs_daddr_to_agno(), and bad things happen when that
      fails, and returns a null pag which is dereferenced shortly
      thereafter.
      
      Found via a user-supplied fuzzed image...
      Signed-off-by: Eric Sandeen <sandeen@redhat.com>
      Reviewed-by: Mark Tinguely <tinguely@sgi.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      db52d09e
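      A sketch of the kind of guard this adds in the buffer lookup path; the message
      text and surrounding variables are illustrative:

      	xfs_daddr_t	eofs = XFS_FSB_TO_BB(mp, mp->m_sb.sb_dblocks);

      	if (blkno < 0 || blkno >= eofs) {
      		xfs_alert(mp, "%s: daddr 0x%llx out of range",
      			  __func__, (unsigned long long)blkno);
      		WARN_ON_ONCE(1);
      		return NULL;
      	}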
    • B
      xfs: replace global xfslogd wq with per-mount wq · 78c931b8
      Committed by Brian Foster
      The xfslogd workqueue is a global, single-job workqueue for buffer ioend
      processing. This means we allow for a single work item at a time for all
      possible XFS mounts on a system. fsstress testing in loopback XFS over
      XFS configurations has reproduced xfslogd deadlocks due to the single
      threaded nature of the queue and dependencies introduced between the
      separate XFS instances by online discard (-o discard).
      
      Discard over a loopback device converts the discard request to a hole
      punch (fallocate) on the underlying file. Online discard requests are
      issued synchronously and from xfslogd context in XFS, hence the xfslogd
      workqueue is blocked in the upper fs waiting on a hole punch request to
      be serviced in the lower fs. If the lower fs issues I/O that depends on
      xfslogd to complete, both filesystems end up hung indefinitely. This is
      reproduced reliably by generic/013 on XFS->loop->XFS test devices with
      the '-o discard' mount option.
      
      Further, docker implementations appear to use this kind of configuration
      for container instance filesystems by default (container fs->dm->
      loop->base fs) and therefore are subject to this deadlock when running
      on XFS.
      
      Replace the global xfslogd workqueue with a per-mount variant. This
      guarantees each mount access to a single worker and prevents deadlocks
      due to inter-fs dependencies introduced by discard. Since the queue is
      only responsible for buffer iodone processing at this point in time,
      rename xfslogd to xfs-buf.
      Signed-off-by: Brian Foster <bfoster@redhat.com>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      78c931b8
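      The allocation then happens once per mount, along these lines (flags and
      max_active are illustrative):

      	mp->m_buf_workqueue = alloc_workqueue("xfs-buf/%s",
      			WQ_MEM_RECLAIM | WQ_FREEZABLE, 1, mp->m_fsname);
      	if (!mp->m_buf_workqueue)
      		return -ENOMEM;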
  27. 02 Oct, 2014 5 commits
    • D
      xfs: check xfs_buf_read_uncached returns correctly · ba372674
      Committed by Dave Chinner
      xfs_buf_read_uncached() has two failure modes. It can either return
      NULL or bp->b_error != 0 depending on the type of failure, and not
      all callers check for both. Fix it so that xfs_buf_read_uncached()
      always returns the error status, and the buffer is returned as a
      function parameter. The buffer will only be returned on success.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      ba372674
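      Callers then follow a single pattern; a sketch of the converted calling
      convention, with the argument order reproduced from memory rather than the
      patch itself:

      	struct xfs_buf	*bp;
      	int		error;

      	error = xfs_buf_read_uncached(mp->m_ddev_targp, daddr, numblks,
      				      0, &bp, ops);
      	if (error)
      		return error;
      	/* ... use bp, then xfs_buf_relse(bp) ... */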
    • D
      xfs: introduce xfs_buf_submit[_wait] · 595bff75
      Committed by Dave Chinner
      There is a lot of cookie-cutter code that looks like:
      
      	if (shutdown)
      		handle buffer error
      	xfs_buf_iorequest(bp)
      	error = xfs_buf_iowait(bp)
      	if (error)
      		handle buffer error
      
      spread through XFS. There's significant complexity now in
      xfs_buf_iorequest() to specifically handle this sort of synchronous
      IO pattern, but there's all sorts of nasty surprises in different
      error handling code dependent on who owns the buffer references and
      the locks.
      
      Pull this pattern into a single helper, where we can hide all the
      synchronous IO warts and hence make the error handling for all the
      callers much saner. This removes the need for a special extra
      reference to protect IO completion processing, as we can now hold a
      single reference across dispatch and waiting, simplifying the sync
      IO semantics and error handling.
      
      In doing this, also rename xfs_buf_iorequest to xfs_buf_submit and
      make it explicitly handle only asynchronous IO. This forces all users
      to be switched specifically to one interface or the other and
      removes any ambiguity between how the interfaces are to be used. It
      also means that xfs_buf_iowait() goes away.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      595bff75
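      A sketch of the caller-side effect, with the before and after shapes side by
      side; error handling details are elided:

      	/* before: open-coded shutdown check, dispatch, then wait */
      	if (XFS_FORCED_SHUTDOWN(mp))
      		return -EIO;
      	xfs_buf_iorequest(bp);
      	error = xfs_buf_iowait(bp);

      	/* after: one synchronous helper ... */
      	error = xfs_buf_submit_wait(bp);

      	/* ... or, for fire-and-forget async IO */
      	xfs_buf_submit(bp);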
    • D
      xfs: kill xfs_bioerror_relse · 8b131973
      Committed by Dave Chinner
      There is only one caller now - xfs_trans_read_buf_map() - and it has
      very well defined call semantics - read, synchronous, and b_iodone
      is NULL. Hence it's pretty clear what error handling is necessary
      for this case. The bigger problem of untangling
      xfs_trans_read_buf_map error handling is left to a future patch.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      8b131973
    • D
      xfs: xfs_bioerror can die. · 27187754
      Committed by Dave Chinner
      Internal buffer write error handling is a mess due to the unnatural
      split between xfs_bioerror and xfs_bioerror_relse().
      
      xfs_bwrite() only does sync IO and determines the handler to
      call based on b_iodone, so for this caller the only difference
      between xfs_bioerror() and xfs_bioerror_relse() is the XBF_DONE
      flag. We don't care what the XBF_DONE flag state is because we stale
      the buffer in both paths - the next buffer lookup will clear
      XBF_DONE because XBF_STALE is set. Hence we can use common
      error handling for xfs_bwrite().
      
      __xfs_buf_delwri_submit() is similar - it's only ever called
      on writes - all sync or async - and again there's no reason to
      handle them any differently at all.
      
      Clean up the nasty error handling and remove xfs_bioerror().
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      27187754
    • D
      xfs: kill xfs_bdstrat_cb · 8dac3921
      Committed by Dave Chinner
      Only has two callers, and is just a shutdown check and error handler
      around xfs_buf_iorequest. However, the error handling is a mess of
      read and write semantics, and both internal callers only call it for
      writes. Hence kill the wrapper, and follow up with a patch to
      sanitise the error handling.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      
      8dac3921