1. 23 Feb, 2012 (9 commits)
  2. 09 Dec, 2011 (1 commit)
  3. 07 Dec, 2011 (1 commit)
    • xfs: fix the logspace waiting algorithm · 9f9c19ec
      Committed by Christoph Hellwig
      Apply the scheme used in log_regrant_write_log_space to wake up any other
      threads waiting for log space before the newly added one to
      log_grant_log_space as well, and factor the code into readable
      helpers.  For each of the queues we add two helpers:
      
       - one to try to wake up all waiting threads.  This helper will also be
         usable by xfs_log_move_tail once we remove the current opportunistic
         wakeups in it.
       - one to sleep on t_wait until enough log space is available, loosely
         modelled after Linux waitqueues.
       
      And use them to reimplement the guts of log_grant_log_space and
      log_regrant_write_log_space.  These two functions now use one and the same
      algorithm for waiting on log space instead of the subtly different ones
      used before, with an option to completely unify them in the near future.
      
      Also move the filesystem shutdown handling to the common caller given
      that we had to touch it anyway.
      
      Based on hard debugging and an earlier patch from
      Chandra Seetharaman <sekharan@us.ibm.com>.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Tested-by: Chandra Seetharaman <sekharan@us.ibm.com>
      Signed-off-by: Ben Myers <bpm@sgi.com>
      9f9c19ec
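
      A minimal userspace sketch of the sleep/wake pattern described above, using
      pthread condition variables in place of the kernel waitqueue primitives; the
      names (space_queue_t, queue_wake_all, queue_wait_for_space) are illustrative
      and are not the helpers added by the patch:

          #include <pthread.h>

          /* Illustrative stand-in for a per-queue grant head; not the XFS types. */
          typedef struct {
              pthread_mutex_t lock;
              pthread_cond_t  wait;        /* plays the role of t_wait */
              long            free_space;  /* log space currently available */
          } space_queue_t;

          /* Wake every thread waiting on this queue, analogous to the wake-all
           * helper: each waiter re-checks the space condition for itself. */
          static void queue_wake_all(space_queue_t *q)
          {
              pthread_mutex_lock(&q->lock);
              pthread_cond_broadcast(&q->wait);
              pthread_mutex_unlock(&q->lock);
          }

          /* Sleep until enough space is available, analogous to the sleep
           * helper loosely modelled after Linux waitqueues. */
          static void queue_wait_for_space(space_queue_t *q, long need)
          {
              pthread_mutex_lock(&q->lock);
              while (q->free_space < need)
                  pthread_cond_wait(&q->wait, &q->lock);
              q->free_space -= need;
              pthread_mutex_unlock(&q->lock);
          }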
  4. 09 Nov, 2011 (1 commit)
  5. 12 Oct, 2011 (3 commits)
  6. 26 Jul, 2011 (4 commits)
  7. 13 Jul, 2011 (3 commits)
  8. 08 Jul, 2011 (5 commits)
  9. 16 Jun, 2011 (1 commit)
    • xfs: make log devices with write back caches work · a27a263b
      Committed by Christoph Hellwig
      There's no reason not to support cache flushing on external log devices.
      The only thing this really requires is flushing the data device first
      both in fsync and log commits.  A side effect is that we also have to
      remove the barrier write test during mount, which has been superfluous
      since the new FLUSH+FUA code anyway.  Also take the opportunity to flush the
      RT subvolume write cache before the fsync commit, which is required
      for correct semantics.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      a27a263b
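
      The ordering requirement is easiest to see in a userspace analogue, where
      fsync() stands in for a block-device cache flush; this is a sketch of the
      rule, not the patch itself:

          #include <unistd.h>

          /* With an external log device, the data device's write cache must
           * be flushed before the log record that references the data is
           * made durable, both for fsync and for log commits. */
          static int commit_with_external_log(int data_fd, int log_fd)
          {
              if (fsync(data_fd) < 0)   /* flush the data device first */
                  return -1;
              /* ... write the log record to the log device here ... */
              if (fsync(log_fd) < 0)    /* then make the log durable */
                  return -1;
              return 0;
          }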
  10. 20 May, 2011 (1 commit)
    • xfs: reset buffer pointers before freeing them · 44396476
      Committed by Dave Chinner
      When we free a vmapped buffer, we need to ensure the vmap address
      and length we free is the same as when it was allocated. In various
      places in the log code we change the memory the buffer is pointing
      to before issuing IO, but we never reset the buffer to point back to
      it's original memory (or no memory, if that is the case for the
      buffer).
      
      As a result, when we free the buffer it points to memory that is
      owned by something else, and the free path attempts to unmap and
      free that memory. Because
      the range does not match any known mapped range, it can trigger
      BUG_ON() traps in the vmap code, and potentially corrupt the vmap
      area tracking.
      
      Fix this by always resetting these buffers to their original state
      before freeing them.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      44396476
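
      A small sketch of the fix's idea, assuming a buffer that records its
      original allocation; the struct and helpers are illustrative, not xfs_buf:

          #include <stdlib.h>
          #include <stddef.h>

          struct buf {
              void   *addr;       /* current data pointer; may be redirected */
              size_t  len;
              void   *orig_addr;  /* the memory originally allocated/mapped */
              size_t  orig_len;
          };

          /* Temporarily point the buffer at other memory before IO. */
          static void buf_set_ptr(struct buf *bp, void *mem, size_t len)
          {
              bp->addr = mem;
              bp->len  = len;
          }

          /* The fix: always restore the original mapping before teardown, so
           * the free/unmap sees the same address and length that were
           * originally allocated. */
          static void buf_free(struct buf *bp)
          {
              bp->addr = bp->orig_addr;
              bp->len  = bp->orig_len;
              free(bp->addr);
              free(bp);
          }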
  11. 29 Apr, 2011 (1 commit)
    • xfs: exact busy extent tracking · 97d3ac75
      Committed by Christoph Hellwig
      Update the extent tree in case we have to reuse a busy extent, so that it
      is always kept up to date.  This is done by replacing the busy list searches
      with a new xfs_alloc_busy_reuse helper, which updates the busy extent tree
      in case of a reuse.  This allows us to reuse metadata extents
      unconditionally, and thus avoid log forces, especially for allocation btree
      blocks.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      97d3ac75
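
      The reuse logic amounts to trimming the tracked busy range so the tree
      stays exact; a simplified single-extent sketch (the split case and the
      tree walk are omitted, and the names are not the XFS ones):

          #include <stdbool.h>

          struct busy_extent {
              unsigned long bno;  /* start block */
              unsigned long len;  /* length in blocks */
          };

          /* Mark [bno, bno + len) as reused, shrinking the busy record so
           * later searches only see the part that is still busy. */
          static bool busy_reuse(struct busy_extent *be,
                                 unsigned long bno, unsigned long len)
          {
              unsigned long end  = bno + len;
              unsigned long bend = be->bno + be->len;

              if (end <= be->bno || bno >= bend)
                  return false;              /* no overlap, nothing to do */
              if (bno <= be->bno && end >= bend)
                  be->len = 0;               /* fully consumed */
              else if (bno <= be->bno) {
                  be->len = bend - end;      /* trim the front */
                  be->bno = end;
              } else
                  be->len = bno - be->bno;   /* trim the tail */
              return true;
          }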
  12. 08 Apr, 2011 (2 commits)
    • xfs: convert log tail checking to a warning · da8a1a4a
      Committed by Dave Chinner
      On the Power platform, the log tail debug checks fire excessively
      causing the system to panic early in testing. The debug checks are
      known to be racy, though on x86_64 there is no evidence that they
      trigger at all.
      
      We want to keep the checks active on debug systems to alert us to
      problems with log space accounting, but we need to reduce the impact
      of a racy check on testing on the Power platform.
      
      As a result, convert the ASSERT conditions to warnings, and
      allow them to fire only once per filesystem mount. This will prevent
      false positives from interfering with testing, whilst still
      providing us with an indication that there may be a problem with log
      space accounting should that occur.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      da8a1a4a
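
      The warn-once-per-mount behaviour boils down to a test-and-set flag; a
      hedged userspace sketch (the kernel patch uses XFS warning macros and a
      per-mount flag, not these names):

          #include <stdio.h>
          #include <stdatomic.h>

          struct mount_info {
              atomic_flag tail_warned;  /* initialise with ATOMIC_FLAG_INIT */
          };

          /* Replaces ASSERT(cond): only the first failing check per mount
           * prints, so a racy debug check can no longer panic the box or
           * flood the log during testing. */
          static void warn_tail_check_once(struct mount_info *mp, const char *msg)
          {
              if (!atomic_flag_test_and_set(&mp->tail_warned))
                  fprintf(stderr, "XFS warning: %s\n", msg);
          }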
    • xfs: push the AIL from memory reclaim and periodic sync · fd074841
      Committed by Dave Chinner
      When we are short on memory, we want to expedite the cleaning of
      dirty objects.  Hence when we run short on memory, we need to kick
      the AIL flushing into action to clean as many dirty objects as
      quickly as possible.  To implement this, sample the lsn of the log
      item at the head of the AIL and use that as the push target for the
      AIL flush.
      
      Further, we keep dirty items in the AIL that are not tracked in
      any other way, so we can get objects sitting in the AIL that
      don't get written back until the AIL is pushed. Hence to get the
      filesystem to the idle state, we might need to push the AIL to flush
      out any remaining dirty objects sitting in the AIL. This requires
      the same push mechanism as the reclaim push.
      
      This patch also renames xfs_trans_ail_tail() to xfs_ail_min_lsn() to
      match the new xfs_ail_max_lsn() function introduced in this patch.
      Similarly for xfs_trans_ail_push -> xfs_ail_push.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      fd074841
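
      The push-everything mechanism just samples the highest LSN in the AIL and
      hands it to the pusher as a target; an illustrative sketch with a sorted
      list (the names do not match the kernel functions):

          #include <stdint.h>

          typedef uint64_t lsn_t;

          struct ail_item { lsn_t lsn; struct ail_item *next; };
          struct ail {
              struct ail_item *head;  /* sorted ascending by LSN */
              lsn_t push_target;
          };

          /* Highest LSN currently in the AIL (0 when empty). */
          static lsn_t ail_max_lsn(struct ail *a)
          {
              lsn_t max = 0;
              for (struct ail_item *it = a->head; it; it = it->next)
                  max = it->lsn;      /* sorted, so the last item wins */
              return max;
          }

          /* Aim the background pusher at everything currently dirty, as
           * reclaim and the idle/sync paths described above both need. */
          static void ail_push_all(struct ail *a)
          {
              lsn_t target = ail_max_lsn(a);
              if (target)
                  a->push_target = target;
          }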
  13. 07 Mar, 2011 (1 commit)
  14. 12 Jan, 2011 (1 commit)
    • xfs: prevent NMI timeouts in cmn_err · 73efe4a4
      Committed by Dave Chinner
      We currently have a global error message buffer in cmn_err that is
      protected by a spin lock that disables interrupts.  Recently there
      have been reports of NMI timeouts occurring when the console is
      being flooded by SCSI error reports due to cmn_err() getting stuck
      trying to print to the console while holding this lock (i.e. with
      interrupts disabled). The NMI watchdog is seeing this CPU as
      non-responding and so is triggering a panic.  While the trigger for
      the reported case is SCSI errors, pretty much anything that spams
      the kernel log could cause this to occur.
      
      Realistically the only reason that we have the intermediate message
      buffer is to prepend the correct kernel log level prefix to the log
      message. The only reason we have the lock is to protect the global
      message buffer and the only reason the message buffer is global is
      to keep it off the stack. Hence if we can avoid needing a global
      message buffer we avoid needing the lock, and we can do this with a
      small amount of cleanup and some preprocessor tricks:
      
      	1. clean up xfs_cmn_err() panic mask functionality to avoid
      	   needing debug code in xfs_cmn_err()
      	2. remove the couple of "!" message prefixes that still exist that
      	   the existing cmn_err() code steps over.
      	3. redefine CE_* levels directly to KERN_*
      	4. redefine cmn_err() and friends to use printk() directly
      	   via variable argument length macros.
      
      By doing this, we can completely remove the cmn_err() code and the
      lock that is causing the problems, and rely solely on printk()
      serialisation to ensure that we don't get garbled messages.
      
      A series of followup patches is really needed to clean up all the
      cmn_err() calls and related messages properly, but that results in a
      series that is not easily backportable to enterprise kernels. Hence
      this initial fix is only to address the direct problem in the lowest
      impact way possible.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      73efe4a4
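
      The preprocessor trick in steps 3 and 4 pastes the level prefix onto the
      format string at compile time, so no shared buffer (and hence no lock) is
      needed; a userspace sketch with printf standing in for printk:

          #include <stdio.h>

          #define CE_DEBUG "<7>"   /* stands in for KERN_DEBUG */
          #define CE_NOTE  "<5>"   /* stands in for KERN_NOTICE */
          #define CE_WARN  "<4>"   /* stands in for KERN_WARNING */
          #define CE_ALERT "<1>"   /* stands in for KERN_ALERT */

          /* String-literal concatenation joins lvl and fmt at compile time;
           * ##__VA_ARGS__ (a GNU extension the kernel relies on) swallows
           * the comma when no arguments are passed. */
          #define cmn_err(lvl, fmt, ...) \
                  printf(lvl fmt "\n", ##__VA_ARGS__)

      For example, cmn_err(CE_WARN, "device %s offline", dev) expands to a single
      printf call, with printk-style serialisation replacing the global buffer
      and its interrupt-disabling lock.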
  15. 21 Dec, 2010 (2 commits)
    • xfs: convert grant head manipulations to lockless algorithm · d0eb2f38
      Committed by Dave Chinner
      The only thing that the grant lock remains to protect is the grant head
      manipulations when adding or removing space from the log. These calculations
      are already based on atomic variables, so we can already update them safely
      without locks. However, the grant head manipulations require atomic multi-step
      calculations to be executed, which the algorithms currently don't allow.
      
      To make these multi-step calculations atomic, convert the algorithms to
      compare-and-exchange loops on the atomic variables. That is, we sample the old
      value, perform the calculation and use atomic64_cmpxchg() to attempt to update
      the head with the new value. If the head has not changed since we sampled it,
      it will succeed and we are done. Otherwise, we rerun the calculation again from
      a new sample of the head.
      
      This allows us to remove the grant lock from around all the grant head space
      manipulations, which effectively removes the grant lock from the log
      completely.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      d0eb2f38
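
      The compare-and-exchange loop described here is the standard lockless
      update idiom; a C11 sketch (the kernel uses atomic64_cmpxchg rather than
      <stdatomic.h>, and the real calculation wraps cycle/space fields):

          #include <stdatomic.h>
          #include <stdint.h>

          static void grant_head_add(_Atomic uint64_t *head, int64_t bytes)
          {
              uint64_t old = atomic_load(head);
              uint64_t new;

              do {
                  new = old + bytes;  /* recompute from the latest sample */
              } while (!atomic_compare_exchange_weak(head, &old, new));
              /* on failure, 'old' is reloaded with the current head value,
               * so the loop reruns the calculation from a fresh sample */
          }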
    • xfs: introduce new locks for the log grant ticket wait queues · 3f16b985
      Committed by Dave Chinner
      The log grant ticket wait queues are currently protected by the log
      grant lock.  However, the queues are functionally independent from
      each other, and operations on them only require serialisation
      against other queue operations now that all of the other log
      variables they use are atomic values.
      
      Hence, we can make them independent of the grant lock by introducing
      new locks just to protect the list operations. Because the lists
      are independent, we can use a lock per list and ensure that reserve
      and write head queuing do not contend.
      
      To ensure forced shutdowns work correctly in conjunction with the
      new fast paths, ensure that we check whether the log has been shut
      down in the grant functions once we hold the relevant spin locks but
      before we go to sleep. This is needed to co-ordinate correctly with
      the wakeups that are issued on the ticket queues so we don't leave
      any processes sleeping on the queues during a shutdown.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      3f16b985
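
      The shutdown check matters because it must happen after taking the queue's
      own lock and before sleeping; a pthread sketch of that ordering (types and
      names are illustrative, not the kernel's):

          #include <pthread.h>
          #include <stdbool.h>

          struct grant_queue {
              pthread_mutex_t lock;      /* protects only this queue's list */
              pthread_cond_t  wait;
              bool            shutdown;  /* stands in for the log shutdown flag */
              long            free_space;
          };

          /* Returns false if the log was shut down; re-checking under the
           * lock before every sleep means a shutdown wakeup issued on the
           * queue can never be missed. */
          static bool grant_queue_sleep(struct grant_queue *q, long need)
          {
              pthread_mutex_lock(&q->lock);
              while (q->free_space < need) {
                  if (q->shutdown) {
                      pthread_mutex_unlock(&q->lock);
                      return false;
                  }
                  pthread_cond_wait(&q->wait, &q->lock);
              }
              q->free_space -= need;
              pthread_mutex_unlock(&q->lock);
              return true;
          }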
  16. 03 Dec, 2010 (1 commit)
  17. 21 Dec, 2010 (1 commit)
    • xfs: convert l_tail_lsn to an atomic variable. · 1c3cb9ec
      Committed by Dave Chinner
      log->l_tail_lsn is currently protected by the log grant lock. The
      lock is only needed for serialising readers against writers, so we
      don't really need the lock if we make the l_tail_lsn variable an
      atomic. Converting the l_tail_lsn variable to an atomic64_t means we
      can start to peel back the grant lock from various operations.
      
      Also, provide functions to safely crack an atomic LSN variable into
      its component pieces and to recombine the components into an
      atomic variable. Use them where appropriate.
      
      This also removes the need for explicitly holding a spinlock to read
      the l_tail_lsn on 32 bit platforms.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      1c3cb9ec
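
      Cracking and recombining pack the 32-bit cycle number into the high half
      of the 64-bit LSN and the block number into the low half; a C11 sketch
      (the kernel uses atomic64_t and its own helper names):

          #include <stdatomic.h>
          #include <stdint.h>

          typedef _Atomic uint64_t atomic_lsn_t;

          /* One atomic read replaces the spinlock that 32-bit platforms
           * previously needed to see a consistent 64-bit LSN. */
          static void lsn_crack(atomic_lsn_t *lsn, uint32_t *cycle, uint32_t *block)
          {
              uint64_t val = atomic_load(lsn);
              *cycle = (uint32_t)(val >> 32);
              *block = (uint32_t)val;
          }

          /* Recombine the components and publish them in one atomic store. */
          static void lsn_assign(atomic_lsn_t *lsn, uint32_t cycle, uint32_t block)
          {
              atomic_store(lsn, ((uint64_t)cycle << 32) | block);
          }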
  18. 03 Dec, 2010 (1 commit)
    • xfs: convert l_last_sync_lsn to an atomic variable · 84f3c683
      Committed by Dave Chinner
      log->l_last_sync_lsn is updated in only one critical spot - log
      buffer IO completion - and is protected by the grant lock here. This
      requires the grant lock to be taken for every log buffer IO
      completion. Converting the l_last_sync_lsn variable to an atomic64_t
      means that we do not need to take the grant lock in log buffer IO
      completion to update it.
      
      This also removes the need for explicitly holding a spinlock to read
      the l_last_sync_lsn on 32 bit platforms.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      84f3c683
  19. 21 Dec, 2010 (1 commit)
    • xfs: make AIL tail pushing independent of the grant lock · 2ced19cb
      Committed by Dave Chinner
      xlog_grant_push_ail() currently takes the grant lock internally to sample
      the tail lsn, last sync lsn and the reserve grant head. Most of the callers
      already hold the grant lock but have to drop it before calling
      xlog_grant_push_ail(). This is a leftover from when the AIL tail pushing was
      done inline and hence xlog_grant_push_ail had to drop the grant lock. AIL push
      is now done in another thread and hence we can safely hold the grant lock over
      the entire xlog_grant_push_ail call.
      
      Push the grant lock outside of xlog_grant_push_ail() to simplify the locking
      and synchronisation needed for tail pushing.  This will reduce traffic on the
      grant lock by itself, but this is only one step in preparing for the complete
      removal of the grant lock.
      
      While there, clean up the formatting of xlog_grant_push_ail() to match the
      rest of the XFS code.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      2ced19cb
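
      The locking change is simply hoisting the lock from callee to caller so
      the sample happens under a lock the caller already holds; a schematic
      pthread sketch with illustrative names and a placeholder calculation:

          #include <pthread.h>

          static pthread_mutex_t grant_lock = PTHREAD_MUTEX_INITIALIZER;
          static long tail_lsn, reserve_head;  /* stand-ins for the sampled state */

          /* After the change: the caller must hold grant_lock, so all the
           * values are sampled consistently without retaking the lock. */
          static long grant_push_target_locked(long need)
          {
              return tail_lsn + reserve_head + need;
          }

          static void reserve_space(long need)
          {
              pthread_mutex_lock(&grant_lock);
              /* ... grant head bookkeeping that already holds the lock ... */
              long target = grant_push_target_locked(need);
              pthread_mutex_unlock(&grant_lock);
              (void)target;  /* handed off to the background AIL push thread */
          }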