1. 27 Jul 2010, 10 commits
    • xfs: remove obsolete osyncisosync mount option · a64afb05
      Christoph Hellwig committed
      Since Linux 2.6.33 the kernel has support for real O_SYNC, which made
      the osyncisosync option a no-op.  Warn the users about this and remove
      the mount flag for it.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
      Signed-off-by: Dave Chinner <david@fromorbit.com>
    • xfs: move inode shrinker unregister even earlier · a4190f90
      Dave Chinner committed
      I missed Dave Chinner's second revision of this change, and pushed
      his first version out to the repository instead.
      
      	commit a476c59ebb279d738718edc0e3fb76aab3687114
      	Author: Dave Chinner <dchinner@redhat.com>
      
      This commit compensates for that by moving a block of code up a bit
      further, with a result that matches the effect of Dave's second
      version.
      
      Dave's first version was:
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Dave's second version was:
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
    • xfs: unregister inode shrinker before freeing filesystem structures · 2727ccc9
      Dave Chinner committed
      Currently we don't remove the XFS mount from the shrinker list until
      late in the unmount path. By this time, we have already torn down
      the internals of the filesystem (e.g. the per-ag structures), and
      hence if the shrinker is executed between the teardown and the
      unregistering, the shrinker will get NULL per-ag structure pointers
      and panic trying to dereference them.
      
      Fix this by removing the xfs mount from the shrinker list before
      tearing down its internal structures (see the ordering sketch below).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Eric Sandeen <sandeen@redhat.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
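      A minimal sketch of the unmount-path ordering this fix establishes;
      xfs_teardown_internals() is a placeholder standing in for the per-ag
      and other internal teardown, not a real XFS function:

          STATIC void
          xfs_fs_put_super(
              struct super_block      *sb)
          {
              struct xfs_mount        *mp = XFS_M(sb);

              /* Stop the shrinker first so it can no longer run ... */
              xfs_inode_shrinker_unregister(mp);

              /* ... and only then free the structures it would walk. */
              xfs_teardown_internals(mp);     /* placeholder for per-ag teardown */
          }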
    • xfs: split xfs_itrace_entry · cca28fb8
      Christoph Hellwig committed
      Replace the xfs_itrace_entry catchall with specific trace points.  For
      most simple callers we now use the simple inode class, which used to
      be the iget class, but add more detailed tracing for namespace events,
      which now includes the name of the directory entries manipulated.
      
      Remove the xfs_inactive trace point, which is a duplicate of the clear_inode
      one, and the xfs_change_file_space trace point, which is immediately
      followed by the more specific alloc/free space trace points.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
    • xfs: remove explicit xfs_sync_data/xfs_sync_attr calls on umount · 64c86149
      Christoph Hellwig committed
      On the final put of a superblock the VFS already calls sync_filesystem
      for us to write out all data and wait for it.  No need to start another
      asynchronous writeback inside ->put_super.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
    • xfs: avoid synchronous transaction in xfs_fs_write_inode · 7a36c8a9
      Christoph Hellwig committed
      We already rely on the fact that the sync code will cause a synchronous
      log force later on (currently via xfs_fs_sync_fs -> xfs_quiesce_data ->
      xfs_sync_data), so no need to do this here.  This allows us to avoid
      a lot of synchronous log forces during sync, which pays off especially
      with delayed logging enabled.  Some compilebench numbers that show
      this:
      
      xfs (delayed logging, 256k logbufs)
      ===================================
      
      initial create		  25.94 MB/s	  25.75 MB/s	  25.64 MB/s
      create			   8.54 MB/s	   9.12 MB/s	   9.15 MB/s
      patch			   2.47 MB/s	   2.47 MB/s	   3.17 MB/s
      compile			  29.65 MB/s	  30.51 MB/s	  27.33 MB/s
      clean			  90.92 MB/s	  98.83 MB/s	 128.87 MB/s
      read tree		  11.90 MB/s	  11.84 MB/s	   8.56 MB/s
      read compiled		  28.75 MB/s	  29.96 MB/s	  24.25 MB/s
      delete tree		8.39 seconds	8.12 seconds	8.46 seconds
      delete compiled		8.35 seconds	8.44 seconds	5.11 seconds
      stat tree		6.03 seconds	5.59 seconds	5.19 seconds
      stat compiled tree	9.00 seconds	9.52 seconds	8.49 seconds
      
      xfs + write_inode log_force removal
      ===================================
      initial create		  25.87 MB/s	  25.76 MB/s	  25.87 MB/s
      create			  15.18 MB/s	  14.80 MB/s	  14.94 MB/s
      patch			   3.13 MB/s	   3.14 MB/s	   3.11 MB/s
      compile			  36.74 MB/s	  37.17 MB/s	  36.84 MB/s
      clean			 226.02 MB/s	 222.58 MB/s	 217.94 MB/s
      read tree		  15.14 MB/s	  15.02 MB/s	  15.14 MB/s
      read compiled tree	  29.30 MB/s	  29.31 MB/s	  29.32 MB/s
      delete tree		6.22 seconds	6.14 seconds	6.15 seconds
      delete compiled tree	5.75 seconds	5.92 seconds	5.81 seconds
      stat tree		4.60 seconds	4.51 seconds	4.56 seconds
      stat compiled tree	4.07 seconds	3.87 seconds	3.96 seconds
      
      In addition to that, also remove the delwri inode flush that is unnecessary
      now that bulkstat is always coherent.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
    • xfs: simplify inode to transaction joining · 898621d5
      Christoph Hellwig committed
      Currently we need to either call IHOLD or xfs_trans_ihold on an inode when
      joining it to a transaction via xfs_trans_ijoin.
      
      This patch instead makes xfs_trans_ijoin usable on its own by doing
      an implicit xfs_trans_ihold, which also allows us to drop the third
      argument.  For the case where we want to hold a reference on the inode
      an xfs_trans_ijoin_ref wrapper is added which does the IHOLD and marks
      the inode for needing an xfs_iput.  In addition to the cleaner interface
      to the caller this also simplifies the implementation.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
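      A rough sketch of the wrapper shape described above; the body is
      simplified and illustrative rather than a copy of the real commit:

          /*
           * Join the inode to the transaction and take a reference so the
           * inode stays held until commit; the commit path then owes the
           * inode an xfs_iput().
           */
          void
          xfs_trans_ijoin_ref(
              struct xfs_trans        *tp,
              struct xfs_inode        *ip,
              uint                    lock_flags)
          {
              xfs_trans_ijoin(tp, ip, lock_flags);
              IHOLD(ip);              /* reference held across the transaction */
              /* mark the inode so transaction completion does the xfs_iput()
               * (the exact bookkeeping field is elided here) */
          }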
    • xfs: simplify log item descriptor tracking · e98c414f
      Christoph Hellwig committed
      Currently we track the log item descriptors belonging to a transaction using a
      complex opencoded chunk allocator.  This code has been there since day one
      and seems to work around the lack of an efficient slab allocator.
      
      This patch replaces it with dynamically allocated log item descriptors
      from a dedicated slab pool, linked to the transaction by a linked list.
      
      This allows us to greatly simplify the log item descriptor tracking to the
      point where it's just a couple hundred lines in xfs_trans.c instead of
      a separate file.  The external API has also been simplified while we're
      at it - the xfs_trans_add_item and xfs_trans_del_item functions to add/
      delete items from a transaction have been simplified to the bare minimum,
      and the xfs_trans_find_item function is replaced with a direct dereference
      of the li_desc field.  All debug code walking the list of log items in
      a transaction is down to a simple list_for_each_entry.
      
      Note that we could easily use a singly linked list here instead of the
      doubly linked list from list.h, as the fastpath only does deletion during
      sequential traversal.  But given that we don't have one available as
      a library function yet I use the list.h functions for simplicity.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
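      A minimal sketch of the descriptor scheme described above; names are
      close to, but not guaranteed identical to, what the commit added:

          struct xfs_log_item_desc {
              struct xfs_log_item     *lid_item;      /* the log item itself */
              struct list_head        lid_trans;      /* chained off tp->t_items */
          };

          void
          xfs_trans_add_item(struct xfs_trans *tp, struct xfs_log_item *lip)
          {
              struct xfs_log_item_desc *lidp;

              /* descriptors now come from a dedicated slab pool */
              lidp = kmem_zone_zalloc(xfs_log_item_desc_zone, KM_SLEEP);
              lidp->lid_item = lip;
              lip->li_desc = lidp;            /* direct back pointer, no find needed */
              list_add_tail(&lidp->lid_trans, &tp->t_items);
          }

          void
          xfs_trans_del_item(struct xfs_log_item *lip)
          {
              list_del_init(&lip->li_desc->lid_trans);
              kmem_zone_free(xfs_log_item_desc_zone, lip->li_desc);
              lip->li_desc = NULL;
          }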
    • xfs: remove unneeded #include statements · 3400777f
      Christoph Hellwig committed
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
    • xfs: drop dmapi hooks · 288699fe
      Christoph Hellwig committed
      Dmapi support was never merged upstream, but we still have a lot of hooks
      bloating XFS for it, all over the fast paths of the filesystem.

      This patch drops over 700 lines of dmapi overhead.  If we ever get HSM
      support in mainline, at least the namespace events can be done much more
      sanely in the VFS instead of in the individual filesystems, so it's not
      like this is much help for future work.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <dchinner@redhat.com>
  2. 20 Jul 2010, 1 commit
  3. 24 May 2010, 2 commits
    • xfs: Introduce delayed logging core code · 71e330b5
      Dave Chinner committed
      The delayed logging code only changes in-memory structures and as
      such can be enabled and disabled with a mount option. Add the mount
      option and emit a warning that this is an experimental feature that
      should not be used in production yet.
      
      We also need infrastructure to track committed items that have not
      yet been written to the log. This is what the Committed Item List
      (CIL) is for.
      
      The log item also needs to be extended to track the current log
      vector, the associated memory buffer and its location in the Committed
      Item List. Extend the log item and log vector structures to enable
      this tracking.
      
      To maintain the current log format for transactions with delayed
      logging, we need to introduce a checkpoint transaction and a context
      for tracking each checkpoint from initiation to transaction
      completion.  This includes adding a log ticket for tracking the log
      space required/used by the checkpoint context.
      
      To track all the changes we need an io vector array per log item,
      rather than a single array for the entire transaction. Using the new
      log vector structure for this requires two passes - the first to
      allocate the log vector structures and chain them together, and the
      second to fill them out.  This log vector chain can then be passed
      to the CIL for formatting, pinning and insertion into the CIL.
      
      Formatting of the log vector chain is relatively simple - it's just
      a loop over the iovecs on each log vector, but it is made slightly
      more complex because we re-write the iovec after the copy to point
      back at the memory buffer we just copied into.
      
      This code also needs to pin log items. If the log item is not
      already tracked in this checkpoint context, then it needs to be
      pinned. Otherwise it is already pinned and we don't need to pin it
      again.
      
      The only other complexity is calculating the amount of new log space
      the formatting has consumed. This needs to be accounted to the
      transaction in progress, and the accounting is made more complex
      because we also need to steal space from it for log metadata in the
      checkpoint transaction. Calculate all this at insert time and update
      all the tickets, counters, etc correctly.
      
      Once we've formatted all the log items in the transaction, attach
      the busy extents to the checkpoint context so the busy extents live
      until checkpoint completion and can be processed at that point in
      time. Transactions can then be freed at this point in time.
      
      Now we need to issue checkpoints - we are tracking the amount of log space
      used by the items in the CIL, so we can trigger background checkpoints when the
      space usage gets to a certain threshold. Otherwise, checkpoints need to be
      triggered when a log synchronisation point is reached - a log force event.
      
      Because the log write code already handles chained log vectors, writing the
      transaction is trivial, too. Construct a transaction header, add it
      to the head of the chain and write it into the log, then issue a
      commit record write. Then we can release the checkpoint log ticket
      and attach the context to the log buffer so it can be called during
      IO completion to complete the checkpoint.
      
      We also need to allow for synchronising multiple in-flight
      checkpoints. This is needed for two things - the first is to ensure
      that checkpoint commit records appear in the log in the correct
      sequence order (so they are replayed in the correct order). The
      second is so that xfs_log_force_lsn() operates correctly and only
      flushes and/or waits for the specific sequence it was provided with.
      
      To do this we need a wait variable and a list tracking the
      checkpoint commits in progress. We can walk this list and wait for
      the checkpoints to change state or complete easily, and this provides
      the necessary synchronisation for correct operation in both cases.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
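      An illustrative sketch of the second formatting pass described above;
      the function is a simplified stand-in for the CIL formatting code, not
      a copy of it:

          /*
           * Format one log item into its log vector: let the item write its
           * iovecs, copy the data into the vector's private buffer, then
           * re-point the iovecs at the copy so the checkpoint can be written
           * from memory later.
           */
          static void
          xlog_cil_format_item(
              struct xfs_log_item     *lip,
              struct xfs_log_vec      *lv,
              char                    *buf)
          {
              int     i;

              IOP_FORMAT(lip, lv->lv_iovecp);
              for (i = 0; i < lv->lv_niovecs; i++) {
                  memcpy(buf, lv->lv_iovecp[i].i_addr, lv->lv_iovecp[i].i_len);
                  lv->lv_iovecp[i].i_addr = buf;      /* point back at the copy */
                  buf += lv->lv_iovecp[i].i_len;
              }
          }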
    • xfs: Clean up XFS_BLI_* flag namespace · c1155410
      Dave Chinner committed
      Clean up the buffer log format (XFS_BLI_*) flags because they have a
      polluted namespace. The XFS_BLI_ prefix is used for both in-memory
      and on-disk flag fields, which have overlapping values for different
      flags. Rename the buffer log format flags to use the XFS_BLF_*
      prefix to avoid confusing them with the in-memory XFS_BLI_* prefixed
      flags.
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
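      For illustration, the kind of rename this describes; the values shown
      are representative rather than checked against the tree:

          /* on-disk buffer log format flags, now XFS_BLF_* */
          #define XFS_BLF_INODE_BUF       (1 << 0)    /* was XFS_BLI_INODE_BUF */
          #define XFS_BLF_CANCEL          (1 << 1)    /* was XFS_BLI_CANCEL */
          #define XFS_BLF_UDQUOT_BUF      (1 << 2)    /* was XFS_BLI_UDQUOT_BUF */

          /* in-memory buffer log item state keeps the XFS_BLI_* prefix */
          #define XFS_BLI_HOLD            0x01
          #define XFS_BLI_DIRTY           0x02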
  4. 19 May 2010, 2 commits
  5. 30 Apr 2010, 1 commit
    • xfs: add a shrinker to background inode reclaim · 9bf729c0
      Dave Chinner committed
      On low memory boxes or those with highmem, the kernel can OOM before
      the background xfssyncd thread reclaims inodes. Add a shrinker to run
      inode reclaim so that inode reclaim is expedited when memory is low.
      
      This is more complex than it needs to be because the VM folk don't
      want a context added to the shrinker infrastructure. Hence we need
      to add a global list of XFS mount structures so the shrinker can
      traverse them (a rough sketch follows).
      Signed-off-by: Dave Chinner <dchinner@redhat.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
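      A rough sketch of the global-list approach described above; the
      callback signature is simplified and the list, lock, and m_mplist
      field names are illustrative, since the shrinker API of that era
      carried no private context pointer:

          static LIST_HEAD(xfs_mount_list);
          static DEFINE_MUTEX(xfs_mount_list_lock);

          /* called by the VM under memory pressure */
          static int
          xfs_reclaim_inode_shrink(int nr_to_scan, gfp_t gfp_mask)
          {
              struct xfs_mount        *mp;

              mutex_lock(&xfs_mount_list_lock);
              list_for_each_entry(mp, &xfs_mount_list, m_mplist)
                  xfs_reclaim_inodes(mp, 0);      /* kick inode reclaim */
              mutex_unlock(&xfs_mount_list_lock);

              return 0;       /* count of reclaimable inodes elided */
          }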
  6. 29 Apr 2010, 1 commit
  7. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Tejun Heo committed
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. gfp.h if only gfp is used,
        slab.h if slab is used.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
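      An illustrative before/after of the kind of edit the script makes,
      for a file that uses kmalloc() but previously relied on the implicit
      slab.h pull-in:

          /* before: compiles only because percpu.h dragged in slab.h */
          #include <linux/percpu.h>

          /* after: the slab user includes what it needs directly */
          #include <linux/percpu.h>
          #include <linux/slab.h>         /* for kmalloc()/kfree() */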
  8. 06 Mar 2010, 2 commits
  9. 09 Feb 2010, 1 commit
    • xfs: log changed inodes instead of writing them synchronously · 07fec736
      Christoph Hellwig committed
      When an inode has already been flushed by a delayed write,
      xfs_inode_clean() returns true and hence xfs_fs_write_inode() can
      return on a synchronous inode write without having written the
      inode. Currently these synchronous writes only come from sync(1),
      unmount, a synchronous NFS export and cachefiles, so they should be
      relatively rare and out of common performance paths.
      
      Realistically, a synchronous inode write is not necessary here; we
      can avoid writing the inode by logging any non-transactional changes
      that are pending.  This needs to be done with synchronous
      transactions, but it avoids seeking between the log and inode
      clusters as we do now. We don't force the log if the inode is
      pinned, though, so this differs from the fsync case.  For normal
      sys_sync and unmount behaviour this is fine because we do a
      synchronous log force in xfs_sync_data which is called from the
      ->sync_fs code.
      
      It does however break the NFS synchronous export guarantees for now,
      but work is under way to fix this at a higher level or for the
      higher level to provide an additional flag in the writeback control
      to tell us that a log force is needed.
      
      Portions of this patch are based on work from Dave Chinner.
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Alex Elder <aelder@sgi.com>
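      A condensed sketch of the approach described above; error handling is
      trimmed, and the transaction type and reservation macro names are from
      memory, so treat them as assumptions rather than the exact code:

          STATIC int
          xfs_fs_write_inode(struct inode *inode, int sync)
          {
              struct xfs_inode        *ip = XFS_I(inode);
              struct xfs_mount        *mp = ip->i_mount;
              struct xfs_trans        *tp;
              int                     error;

              if (!sync)
                  return 0;       /* async writeback handled elsewhere */

              /*
               * Instead of writing the inode buffer synchronously, log the
               * pending non-transactional (timestamp) changes; the log force
               * done later in the sync path makes them stable.
               */
              tp = xfs_trans_alloc(mp, XFS_TRANS_FSYNC_TS);     /* assumed type */
              error = xfs_trans_reserve(tp, 0, XFS_FSYNC_TS_LOG_RES(mp), 0, 0, 0);
              if (error) {
                  xfs_trans_cancel(tp, 0);
                  return -error;
              }

              xfs_ilock(ip, XFS_ILOCK_SHARED);
              xfs_trans_ijoin(tp, ip, XFS_ILOCK_SHARED);
              xfs_trans_log_inode(tp, ip, XFS_ILOG_TIMESTAMP);
              return -xfs_trans_commit(tp, 0);
          }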
  10. 06 Feb 2010, 1 commit
    • xfs: Use delayed write for inodes rather than async V2 · c854363e
      Dave Chinner committed
      We currently do background inode flush asynchronously, resulting in
      inodes being written in whatever order the background writeback
      issues them. Not only that, there are also blocking and non-blocking
      asynchronous inode flushes, depending on where the flush comes from.
      
      This patch completely removes asynchronous inode writeback. It
      removes all the strange writeback modes and replaces them with
      either a synchronous flush or a non-blocking delayed write flush.
      That is, inode flushes will only issue IO directly if they are
      synchronous, and background flushing may do nothing if the operation
      would block (e.g. on a pinned inode or buffer lock).
      
      Delayed write flushes will now result in the inode buffer sitting in
      the delwri queue of the buffer cache to be flushed by either an AIL
      push or by the xfsbufd timing out the buffer. This will allow
      accumulation of dirty inode buffers in memory and allow optimisation
      of inode cluster writeback at the xfsbufd level where we have much
      greater queue depths than the block layer elevators. We will also
      get adjacent inode cluster buffer IO merging for free when a later
      patch in the series allows sorting of the delayed write buffers
      before dispatch.
      
      This effectively means that any inode that is written back by
      background writeback will be seen as flush locked during AIL
      pushing, and will result in the buffers being pushed from there.
      This writeback path is currently non-optimal, but the next patch
      in the series will fix that problem.
      
      A side effect of this delayed write mechanism is that background
      inode reclaim will no longer directly flush inodes, nor can it wait
      on the flush lock. The result is that inode reclaim must leave the
      inode in the reclaimable state until it is clean. Hence attempts to
      reclaim a dirty inode in the background will simply skip the inode
      until it is clean and this allows other mechanisms (i.e. xfsbufd) to
      do more optimal writeback of the dirty buffers. As a result, the
      inode reclaim code has been rewritten so that it no longer relies on
      the ambiguous return values of xfs_iflush() to determine whether it
      is safe to reclaim an inode.
      
      Portions of this patch are derived from patches by Christoph
      Hellwig.
      
      Version 2:
      - cleanup reclaim code as suggested by Christoph
      - log background reclaim inode flush errors
      - just pass sync flags to xfs_iflush
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
  11. 09 Feb 2010, 1 commit
  12. 26 Jan 2010, 1 commit
    • xfs: don't hold onto reserved blocks on remount,ro · cbe132a8
      Dave Chinner committed
      If we hold onto reserved blocks when doing a remount,ro we end
      up writing the blocks used count to disk that includes the reserved
      blocks. Reserved blocks are not actually used, so this results in
      the values in the superblock being incorrect.
      
      Hence if we run xfs_check or xfs_repair -n while the filesystem is
      mounted remount,ro we end up with an inconsistent filesystem being
      reported. Also, running xfs_copy on the remount,ro filesystem will
      result in an inconsistent image being generated.
      
      To fix this, unreserve the blocks when doing the remount,ro, and
      reserve them again on remount,rw. This way a remount,ro filesystem
      will appear consistent on disk to all utilities.
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
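      A sketch of the remount handling described above, written from the
      description rather than the patch, so field and helper names are
      approximate:

          static void
          xfs_save_resvblks(struct xfs_mount *mp)
          {
              __uint64_t      resblks = 0;

              mp->m_resblks_save = mp->m_resblks;     /* remember the size */
              xfs_reserve_blocks(mp, &resblks, NULL); /* give the blocks back */
          }

          static void
          xfs_restore_resvblks(struct xfs_mount *mp)
          {
              __uint64_t      resblks;

              if (mp->m_resblks_save) {
                  resblks = mp->m_resblks_save;
                  mp->m_resblks_save = 0;
              } else {
                  resblks = xfs_default_resblks(mp);
              }
              xfs_reserve_blocks(mp, &resblks, NULL); /* re-reserve on remount,rw */
          }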
  13. 16 Jan 2010, 2 commits
    • xfs: Don't wake the aild once per second · 453eac8a
      Dave Chinner committed
      Now that the AIL push algorithm is traversal safe, we don't need a
      watchdog function in the xfsaild to catch pushes that fail to make
      progress. Remove the watchdog timeout and make pushes purely driven
      by demand. This will remove the once-per-second wakeup that is seen
      when the filesystem is idle and make laptop power misers happy.
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
    • xfs: reclaim all inodes by background tree walks · 57817c68
      Dave Chinner committed
      We cannot do direct inode reclaim without taking the flush lock to
      ensure that we do not reclaim an inode under IO. We check the inode
      is clean before doing direct reclaim, but this is not good enough
      because the inode flush code marks the inode clean once it has
      copied the in-core dirty state to the backing buffer.
      
      It is the flush lock that determines whether the inode is still
      under IO, even though it is marked clean, and the inode is still
      required at IO completion so we can't reclaim it even though it is
      clean in core. Hence the requirement that we need to take the flush
      lock even on clean inodes because this guarantees that the inode
      writeback IO has completed and it is safe to reclaim the inode.
      
      With delayed write inode flushing, we could end up waiting a long
      time on the flush lock even for a clean inode. The background
      reclaim already handles this efficiently, so avoid all the problems
      by killing the direct reclaim path altogether.
      Signed-off-by: Dave Chinner <david@fromorbit.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
  14. 15 Dec 2009, 1 commit
    • xfs: event tracing support · 0b1b213f
      Christoph Hellwig committed
      Convert the old xfs tracing support that could only be used with the
      out of tree kdb and xfsidbg patches to use the generic event tracer.
      
      To use it make sure CONFIG_EVENT_TRACING is enabled and then enable
      all xfs trace channels by:
      
         echo 1 > /sys/kernel/debug/tracing/events/xfs/enable
      
      or alternatively enable single events by just doing the same in one
      event subdirectory, e.g.
      
         echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_ihold/enable
      
      or set more complex filters, etc. In Documentation/trace/events.txt
      all this is described in more detail.  To read the events do a
      
         cat /sys/kernel/debug/tracing/trace
      
      Compared to the last posting this patch converts the tracing mostly to
      the one tracepoint per callsite model that other users of the new
      tracing facility also employ.  This allows a very fine-grained control
      of the tracing, a cleaner output of the traces and also enables the
      perf tool to use each tracepoint as a virtual performance counter,
      allowing us to e.g. count how often certain workloads hit various
      spots in XFS.  Take a look at
      
          http://lwn.net/Articles/346470/
      
      for some examples.
      
      Also the btree tracing isn't included at all yet, as it will require
      additional core tracing features not in mainline yet, I plan to
      deliver it later.
      
      And the really nice thing about this patch is that it actually removes
      many lines of code while adding this nice functionality:
      
       fs/xfs/Makefile                |    8
       fs/xfs/linux-2.6/xfs_acl.c     |    1
       fs/xfs/linux-2.6/xfs_aops.c    |   52 -
       fs/xfs/linux-2.6/xfs_aops.h    |    2
       fs/xfs/linux-2.6/xfs_buf.c     |  117 +--
       fs/xfs/linux-2.6/xfs_buf.h     |   33
       fs/xfs/linux-2.6/xfs_fs_subr.c |    3
       fs/xfs/linux-2.6/xfs_ioctl.c   |    1
       fs/xfs/linux-2.6/xfs_ioctl32.c |    1
       fs/xfs/linux-2.6/xfs_iops.c    |    1
       fs/xfs/linux-2.6/xfs_linux.h   |    1
       fs/xfs/linux-2.6/xfs_lrw.c     |   87 --
       fs/xfs/linux-2.6/xfs_lrw.h     |   45 -
       fs/xfs/linux-2.6/xfs_super.c   |  104 ---
       fs/xfs/linux-2.6/xfs_super.h   |    7
       fs/xfs/linux-2.6/xfs_sync.c    |    1
       fs/xfs/linux-2.6/xfs_trace.c   |   75 ++
       fs/xfs/linux-2.6/xfs_trace.h   | 1369 +++++++++++++++++++++++++++++++++++++++++
       fs/xfs/linux-2.6/xfs_vnode.h   |    4
       fs/xfs/quota/xfs_dquot.c       |  110 ---
       fs/xfs/quota/xfs_dquot.h       |   21
       fs/xfs/quota/xfs_qm.c          |   40 -
       fs/xfs/quota/xfs_qm_syscalls.c |    4
       fs/xfs/support/ktrace.c        |  323 ---------
       fs/xfs/support/ktrace.h        |   85 --
       fs/xfs/xfs.h                   |   16
       fs/xfs/xfs_ag.h                |   14
       fs/xfs/xfs_alloc.c             |  230 +-----
       fs/xfs/xfs_alloc.h             |   27
       fs/xfs/xfs_alloc_btree.c       |    1
       fs/xfs/xfs_attr.c              |  107 ---
       fs/xfs/xfs_attr.h              |   10
       fs/xfs/xfs_attr_leaf.c         |   14
       fs/xfs/xfs_attr_sf.h           |   40 -
       fs/xfs/xfs_bmap.c              |  507 +++------------
       fs/xfs/xfs_bmap.h              |   49 -
       fs/xfs/xfs_bmap_btree.c        |    6
       fs/xfs/xfs_btree.c             |    5
       fs/xfs/xfs_btree_trace.h       |   17
       fs/xfs/xfs_buf_item.c          |   87 --
       fs/xfs/xfs_buf_item.h          |   20
       fs/xfs/xfs_da_btree.c          |    3
       fs/xfs/xfs_da_btree.h          |    7
       fs/xfs/xfs_dfrag.c             |    2
       fs/xfs/xfs_dir2.c              |    8
       fs/xfs/xfs_dir2_block.c        |   20
       fs/xfs/xfs_dir2_leaf.c         |   21
       fs/xfs/xfs_dir2_node.c         |   27
       fs/xfs/xfs_dir2_sf.c           |   26
       fs/xfs/xfs_dir2_trace.c        |  216 ------
       fs/xfs/xfs_dir2_trace.h        |   72 --
       fs/xfs/xfs_filestream.c        |    8
       fs/xfs/xfs_fsops.c             |    2
       fs/xfs/xfs_iget.c              |  111 ---
       fs/xfs/xfs_inode.c             |   67 --
       fs/xfs/xfs_inode.h             |   76 --
       fs/xfs/xfs_inode_item.c        |    5
       fs/xfs/xfs_iomap.c             |   85 --
       fs/xfs/xfs_iomap.h             |    8
       fs/xfs/xfs_log.c               |  181 +----
       fs/xfs/xfs_log_priv.h          |   20
       fs/xfs/xfs_log_recover.c       |    1
       fs/xfs/xfs_mount.c             |    2
       fs/xfs/xfs_quota.h             |    8
       fs/xfs/xfs_rename.c            |    1
       fs/xfs/xfs_rtalloc.c           |    1
       fs/xfs/xfs_rw.c                |    3
       fs/xfs/xfs_trans.h             |   47 +
       fs/xfs/xfs_trans_buf.c         |   62 -
       fs/xfs/xfs_vnodeops.c          |    8
       70 files changed, 2151 insertions(+), 2592 deletions(-)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Alex Elder <aelder@sgi.com>
  15. 12 Dec 2009, 3 commits
    • xfs: cleanup dmapi macros in the umount path · 30ac0683
      Christoph Hellwig committed
      Stop the flag saving as we never mangle those in the unmount path, and
      hide all the weird arguments to the dmapi code inside the
      XFS_SEND_PREUNMOUNT / XFS_SEND_UNMOUNT macros.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
    • xfs: reset the i_iolock lock class in the reclaim path · 033da48f
      Christoph Hellwig committed
      The iolock is used for protecting reads, writes and block truncates
      against each other.  We have two classes of callers, the first one is
      induced by a file operation and requires that a reference to the inode
      be held and not dropped after the operation is done:
      
       - xfs_vm_vmap, xfs_vn_fallocate, xfs_read, xfs_write, xfs_splice_read,
         xfs_splice_write and xfs_setattr are all implementations of VFS
         methods that require a live inode
       - xfs_getbmap and xfs_swap_extents are ioctl subcommands for which the
         same is true
       - xfs_truncate_file is only called on quota inodes just returned from
         xfs_iget
       - xfs_sync_inode_data does the lock just after an igrab()
       - xfs_filestream_associate and xfs_filestream_new_ag take the iolock
         on the parent inode of an inode which by VFS rules must be referenced
      
      And we have various calls to truncate blocks past EOF or the whole
      file when dropping the last reference to an inode.  Unfortunately
      lockdep complains when we do memory allocations that can recurse into
      the filesystem in the first class because the second class happens to
      take the same lock.  To avoid this re-init the iolock in the beginning
      of xfs_fs_clear_inode to get a new lock class.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
    • xfs: simplify inode teardown · 848ce8f7
      Christoph Hellwig committed
      Currently the reclaim code for the case where we don't reclaim the
      final reclaim is overly complicated.  We know that the inode is clean
      but instead of just directly reclaiming the clean inode we go through
      the whole process of marking the inode reclaimable just to directly
      reclaim it from the calling context.  Besides being overly complicated,
      this introduces a race where iget could recycle an inode between being
      marked reclaimable and actually being reclaimed, leading to panics.
      
      This patch gets rid of the existing reclaim path, and replaces it with
      a simple call to xfs_ireclaim if the inode was clean.  While we're at
      it we also use the slightly more lax xfs_inode_clean check we'd use
      later to determine if we need to flush the inode here.
      
      Finally, get rid of the xfs_reclaim function and place the remaining small
      bits of reclaim code directly into xfs_fs_destroy_inode.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reported-by: Patrick Schreurs <patrick@news-service.com>
      Reported-by: Tommy van Leeuwen <tommy@news-service.com>
      Tested-by: Patrick Schreurs <patrick@news-service.com>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
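      A condensed sketch of the simplified teardown this describes; the
      helpers named here existed in the XFS of that era, but the body is
      abbreviated from the real change:

          STATIC void
          xfs_fs_destroy_inode(struct inode *inode)
          {
              struct xfs_inode        *ip = XFS_I(inode);

              if (xfs_inode_clean(ip)) {
                  /* clean: reclaim right here, no mark-then-reclaim dance */
                  xfs_ireclaim(ip);
                  return;
              }

              /* dirty: hand the inode to background reclaim */
              xfs_inode_set_reclaim_tag(ip);
          }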
  16. 09 Oct 2009, 2 commits
    • xfs: cleanup ->sync_fs · 69961a26
      Christoph Hellwig committed
      Sort out ->sync_fs to not perform a superblock writeback for the wait = 0 case
      as that is just an optional first pass and the superblock will be written back
      properly in the next call with wait = 1.  Instead perform an opportunistic
      quota writeback to have less work later.  Also remove the freeze special case
      as we do a proper wait = 1 call in the freeze code anyway.
      
      Also rename the function to xfs_fs_sync_fs to match the normal naming
      convention, update comments and avoid calling into the laptop_mode logic on
      an error.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
    • xfs: implement ->dirty_inode to fix timestamp handling · f9581b14
      Christoph Hellwig committed
      This is picking up on Felix's repost of Dave's patch to implement a
      .dirty_inode method.  We really need this notification because
      the VFS keeps writing directly into the inode structure instead
      of going through methods to update this state.  In addition to
      the long-known atime issue we now also have a caller in VM code
      that updates c/mtime that way for shared writeable mmaps.  And
      I found another one that no one has noticed in practice in the FIFO
      code.
      
      So implement ->dirty_inode to set i_update_core whenever the
      inode gets externally dirtied, and switch the c/mtime handling to
      the same scheme we already use for atime (always picking up
      the value from the Linux inode).
      
      Note that this patch also removes the xfs_synchronize_atime call
      in xfs_reclaim; it was superfluous as we already synchronize the time
      when writing the inode via the log (xfs_inode_item_format) or the
      normal buffers (xfs_iflush_int).
      
      In addition also remove the I_CLEAR check before copying the Linux
      timestamps - now that we always have the Linux inode available
      we can always use the timestamps in it.
      
      Also switch to just using file_update_time for regular reads/writes -
      that will get us all optimization done to it for free and make
      sure we notice early when it breaks.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Felix Blyakher <felixb@sgi.com>
      Reviewed-by: Alex Elder <aelder@sgi.com>
      Signed-off-by: Alex Elder <aelder@sgi.com>
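      The core of a .dirty_inode hook as described above; a minimal sketch
      that only sets the i_update_core flag discussed in the message:

          /*
           * The VFS dirtied the Linux inode behind our back (atime, or
           * c/mtime from shared writable mmaps); remember that so a later
           * inode write/flush picks the timestamps up from the Linux inode.
           */
          STATIC void
          xfs_fs_dirty_inode(struct inode *inode)
          {
              XFS_I(inode)->i_update_core = 1;
          }

          static const struct super_operations xfs_super_operations = {
              /* ...other methods unchanged... */
              .dirty_inode            = xfs_fs_dirty_inode,
          };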
  17. 22 Sep 2009, 1 commit
  18. 03 Sep 2009, 1 commit
    • xfs: xfs_showargs() reports group *and* project quotas enabled · 988abe40
      Alex Elder committed
      If you enable group or project quotas on an XFS file system, then the
      mount table presented through /proc/self/mounts erroneously shows
      that both options are in effect for the file system.  The root of
      the problem is some bad logic in the xfs_showargs() function, which
      is used to format the file system type-specific options in effect
      for a file system.
      
      The problem originated in this GIT commit:
          Move platform specific mount option parse out of core XFS code
          Date: 11/22/07
          Author: Dave Chinner
          SHA1 ID: a67d7c5f
      
      For XFS quotas, project and group quota management are mutually
      exclusive--only one can be in effect at a time.  There are two
      parts to managing quotas:  aggregating usage information; and
      enforcing limits.  It is possible to have a quota in effect
      (aggregating usage) but not enforced.
      
      These features are recorded on an XFS mount point using these flags:
          XFS_PQUOTA_ACCT - Project quotas are aggregated
          XFS_GQUOTA_ACCT - Group quotas are aggregated
          XFS_OQUOTA_ENFD - Project/group quotas are enforced
      
      The code in error is in fs/xfs/linux-2.6/xfs_super.c:
      
              if (mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_OQUOTA_ENFD))
                      seq_puts(m, "," MNTOPT_PRJQUOTA);
              else if (mp->m_qflags & XFS_PQUOTA_ACCT)
                      seq_puts(m, "," MNTOPT_PQUOTANOENF);
      
              if (mp->m_qflags & (XFS_GQUOTA_ACCT|XFS_OQUOTA_ENFD))
                      seq_puts(m, "," MNTOPT_GRPQUOTA);
              else if (mp->m_qflags & XFS_GQUOTA_ACCT)
                      seq_puts(m, "," MNTOPT_GQUOTANOENF);
      
      The problem is that XFS_OQUOTA_ENFD will be set in mp->m_qflags
      if either group or project quotas are enforced, and as a result
      both MNTOPT_PRJQUOTA and MNTOPT_GRPQUOTA will be shown as mount
      options.
      Signed-off-by: Alex Elder <aelder@sgi.com>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Felix Blyakher <felixb@sgi.com>
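      For contrast, a sketch of the corrected test: an enforced quota option
      is only reported when the matching accounting flag is set as well.
      This is written from the description above, not copied from the fix:

              if ((mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_OQUOTA_ENFD)) ==
                                  (XFS_PQUOTA_ACCT|XFS_OQUOTA_ENFD))
                      seq_puts(m, "," MNTOPT_PRJQUOTA);
              else if (mp->m_qflags & XFS_PQUOTA_ACCT)
                      seq_puts(m, "," MNTOPT_PQUOTANOENF);

              if ((mp->m_qflags & (XFS_GQUOTA_ACCT|XFS_OQUOTA_ENFD)) ==
                                  (XFS_GQUOTA_ACCT|XFS_OQUOTA_ENFD))
                      seq_puts(m, "," MNTOPT_GRPQUOTA);
              else if (mp->m_qflags & XFS_GQUOTA_ACCT)
                      seq_puts(m, "," MNTOPT_GQUOTANOENF);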
  19. 01 Sep 2009, 1 commit
  20. 02 Jul 2009, 1 commit
  21. 19 Jun 2009, 1 commit
  22. 12 Jun 2009, 1 commit
  23. 10 Jun 2009, 1 commit
  24. 08 Jun 2009, 1 commit
    • xfs: split xfs_sync_inodes · 075fe102
      Christoph Hellwig committed
      xfs_sync_inodes is used to write back either file data or inode metadata.
      In general we always do these separately, except for one fishy case in
      xfs_fs_put_super that does both.  So split xfs_sync_inodes into
      separate xfs_sync_data and xfs_sync_attr functions.  In xfs_fs_put_super
      we first call the data sync and then the attr sync as that was the previous
      order.  The moved log force in that path doesn't make a difference because
      we will force the log again as part of the real unmount process.
      
      The filesystem readonly checks are not performed by the new functions but
      instead moved into the callers, given that most callers already have them
      further up in the stack.  Also add debug checks that we do not pass in
      incorrect flags to the new xfs_sync_data and xfs_sync_attr functions and
      fix the one place that did pass in a wrong flag.
      
      Also remove a comment mentioning xfs_sync_inodes that has been incorrect
      for a while because we always take either the iolock or ilock in the
      sync path these days.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
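      A minimal sketch of the split described above; the per-AG inode walk
      is elided, the SYNC_* flag names follow the convention used in the XFS
      sync code, and the xfs_sync_*_inodes() helpers are placeholder names:

          int
          xfs_sync_data(struct xfs_mount *mp, int flags)
          {
              /* debug check: only flags that make sense for data writeback */
              ASSERT((flags & ~(SYNC_TRYLOCK | SYNC_WAIT)) == 0);

              /* write back (and optionally wait on) dirty data pages */
              return xfs_sync_data_inodes(mp, flags);     /* placeholder */
          }

          int
          xfs_sync_attr(struct xfs_mount *mp, int flags)
          {
              /* debug check: only flags that make sense for inode writeback */
              ASSERT((flags & ~SYNC_WAIT) == 0);

              /* flush dirty inode metadata */
              return xfs_sync_attr_inodes(mp, flags);     /* placeholder */
          }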