1. 04 Dec, 2008 (1 commit)
  2. 01 Dec, 2008 (1 commit)
    • [XFS] kill xfs_dinode_core_t · 81591fe2
      Committed by Christoph Hellwig
      Now that we have a separate xfs_icdinode_t for the in-core inode, which
      is what gets logged, there is no longer any need for the xfs_dinode vs
      xfs_dinode_core split; the fact that part of the structure gets logged
      through the inode log item and a small part does not can better be
      described in a comment.
      
      All sizeof operations on the dinode_core either really wanted the
      icdinode, and are switched to it, or had already added the size of the
      AGI unlinked list pointer.  Both will later be replaced with helpers
      once we get the larger CRC-enabled dinode.
      
      Removing the data and attribute fork unions also has the advantage that
      xfs_dinode.h doesn't need to pull in every header under the sun.
      
      While we're at it also add some more comments describing the dinode
      structure.
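
      As a rough illustration, a much-simplified sketch of the merged layout
      follows; the field list is abridged and the type name is made up for
      this example, so it is not the verbatim kernel definition:

          /*
           * Simplified sketch of a merged on-disk inode: one structure now
           * carries both the portion logged through the inode log item and
           * the small tail that is not.  Abridged, illustrative only.
           */
          #include <stdint.h>

          typedef struct xfs_dinode_sketch {
                  /*
                   * di_magic through di_gen is the former "core"; this part
                   * is logged through the inode log item.
                   */
                  uint16_t        di_magic;       /* inode magic number */
                  uint16_t        di_mode;        /* mode and type of file */
                  /* ... remaining core fields elided ... */
                  uint32_t        di_gen;         /* generation number */

                  /*
                   * Not logged with the core: the AGI unlinked list pointer
                   * is maintained separately when inodes are unlinked.
                   */
                  uint32_t        di_next_unlinked;

                  /* data and attribute forks follow, no longer as unions */
          } xfs_dinode_sketch_t;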
      
      (First sent on October 7th)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Dave Chinner <david@fromorbit.com>
      Signed-off-by: Niv Sardi <xaiki@sgi.com>
  3. 10 Nov, 2008 (1 commit)
  4. 30 Oct, 2008 (6 commits)
  5. 13 Aug, 2008 (8 commits)
  6. 28 Jul, 2008 (8 commits)
  7. 29 Apr, 2008 (3 commits)
  8. 18 Apr, 2008 (6 commits)
  9. 10 Apr, 2008 (3 commits)
  10. 07 Feb, 2008 (3 commits)
    • [XFS] Move AIL pushing into its own thread · 249a8c11
      Committed by David Chinner
      When many hundreds to thousands of threads all try to do simultaneous
      transactions and the log is in a tail-pushing situation (i.e. full), we
      can get multiple threads walking the AIL list and contending on the AIL
      lock.
      
      The AIL push is, in effect, a simple I/O dispatch algorithm complicated by
      the ordering constraints placed on it by the transaction subsystem. It
      really does not need multiple threads to push on it - even when only a
      single CPU is pushing the AIL, it can push the I/O out far faster than
      pretty much any disk subsystem can handle.
      
      So, to avoid contention problems stemming from multiple list walkers, move
      the list walk off into another thread and simply provide a "target" to
      push to. When a thread requires a push, it sets the target and wakes the
      push thread, then goes to sleep waiting for the required amount of space
      to become available in the log.
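
      As a rough sketch of that mechanism (userspace pthreads standing in for
      the kernel thread and wait queues, and names such as ail_pushd,
      push_target and wait_for_log_space being illustrative rather than the
      actual kernel symbols), the pattern looks roughly like this:

          #include <pthread.h>
          #include <stdbool.h>
          #include <stdint.h>

          static pthread_mutex_t ail_lock = PTHREAD_MUTEX_INITIALIZER;
          static pthread_cond_t push_wait = PTHREAD_COND_INITIALIZER;  /* wakes the pusher */
          static pthread_cond_t space_wait = PTHREAD_COND_INITIALIZER; /* wakes the waiters */
          static uint64_t push_target;   /* highest LSN callers want pushed to */
          static uint64_t pushed_lsn;    /* how far the pusher has actually got */
          static bool shutting_down;

          /* The single push thread: walk the list up to the target, record
           * the progress, wake any waiters, and go back to sleep. */
          static void *ail_pushd(void *arg)
          {
                  (void)arg;
                  pthread_mutex_lock(&ail_lock);
                  while (!shutting_down) {
                          while (pushed_lsn >= push_target && !shutting_down)
                                  pthread_cond_wait(&push_wait, &ail_lock);
                          uint64_t target = push_target;
                          pthread_mutex_unlock(&ail_lock);

                          /* ... walk the AIL and dispatch I/O up to 'target' ... */

                          pthread_mutex_lock(&ail_lock);
                          pushed_lsn = target;
                          pthread_cond_broadcast(&space_wait);
                  }
                  pthread_mutex_unlock(&ail_lock);
                  return NULL;
          }

          /* Called by a transaction needing log space: set the target, kick
           * the pusher, then sleep until enough of the tail has moved. */
          static void wait_for_log_space(uint64_t needed_lsn)
          {
                  pthread_mutex_lock(&ail_lock);
                  if (needed_lsn > push_target) {
                          push_target = needed_lsn;
                          pthread_cond_signal(&push_wait);
                  }
                  while (pushed_lsn < needed_lsn && !shutting_down)
                          pthread_cond_wait(&space_wait, &ail_lock);
                  pthread_mutex_unlock(&ail_lock);
          }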
      
      This mechanism should also be a lot fairer under heavy load as the waiters
      will queue in arrival order, rather than queuing in "who completed a push
      first" order.
      
      Also, by moving the pushing to a separate thread we can do overload
      detection and prevention more effectively, because we can keep context
      from one loop iteration to the next.  That is, we can push only part of
      the list each loop and not have to loop back to the start of the list
      every time we run.  This should also help by reducing the number of
      items we try to lock, and by avoiding repeated attempts to push items
      that we cannot move.
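
      A minimal sketch of that restartable walk, with made-up types and names
      (ail_item, last_pushed_lsn and push_some are illustrative, not the
      kernel's), might look like:

          #include <stdint.h>

          struct ail_item {
                  uint64_t lsn;
                  struct ail_item *next;
          };

          static uint64_t last_pushed_lsn;        /* survives between wakeups */

          static void push_some(struct ail_item *head, uint64_t target, int batch)
          {
                  struct ail_item *item = head;

                  /* Skip items already handled on a previous pass. */
                  while (item && item->lsn <= last_pushed_lsn)
                          item = item->next;

                  /* Push at most 'batch' items this pass, then yield. */
                  while (item && item->lsn <= target && batch-- > 0) {
                          /* try to lock and dispatch I/O; skip it if busy */
                          last_pushed_lsn = item->lsn;
                          item = item->next;
                  }
          }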
      
      Note that this patch is not intended to solve the inefficiencies in the
      AIL structure and the associated issues with extremely large list
      contents.  That needs to be addressed separately; parallel access would
      cause problems for any new structure as well, so I'm only aiming to
      isolate the structure from unbounded parallelism here.
      
      SGI-PV: 972759
      SGI-Modid: xfs-linux-melb:xfs-kern:30371a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
    • [XFS] Fix up sparse warnings. · a8272ce0
      Committed by David Chinner
      These are mostly locking annotations, marking things static, adding
      casts where needed, and declaring things in header files.
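
      For reference, the locking annotations referred to are the sparse
      context annotations; a small hedged sketch with made-up function and
      lock names (not the ones touched by this patch) would be:

          /* Under sparse (__CHECKER__) these check lock balance; otherwise
           * they compile away. */
          #ifdef __CHECKER__
          #define __acquires(x)   __attribute__((context(x, 0, 1)))
          #define __releases(x)   __attribute__((context(x, 1, 0)))
          #else
          #define __acquires(x)
          #define __releases(x)
          #endif

          /* Tell sparse this helper returns with the lock held ... */
          void grab_ail_lock(void) __acquires(ail_lock);

          /* ... and that this one drops it, so unbalanced callers warn. */
          void drop_ail_lock(void) __releases(ail_lock);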
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30002a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
    • [XFS] Refactor xfs_mountfs · 0771fb45
      Committed by Eric Sandeen
      Refactoring xfs_mountfs() to call sub-functions for logical chunks can
      help save a bit of stack, and can make it easier to read this long
      function.
      
      The mount path is one of the longest common callchains, easily getting
      within a few bytes of the end of a 4k stack when running over lvm with
      quotas enabled and quotacheck required.
      
      With this change on top of the other stack-related changes I've sent, I
      can get xfs to survive a normal xfsqa run on 4k stacks over lvm.
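
      The pattern is simply splitting the phases out so that each one's large
      locals live only for the duration of its call; a small illustrative
      sketch (the helper names are hypothetical, not the actual xfs_mountfs()
      sub-functions) is:

          /* Keep the helpers out of line so their frames are not merged
           * back into the caller's. */
          static __attribute__((noinline)) int mount_validate_sb(void)
          {
                  char scratch[512];      /* big buffer lives only in this frame */
                  /* ... verify superblock fields using 'scratch' ... */
                  (void)scratch;
                  return 0;
          }

          static __attribute__((noinline)) int mount_init_quotas(void)
          {
                  char qbuf[512];         /* released before the next phase runs */
                  /* ... run quotacheck ... */
                  (void)qbuf;
                  return 0;
          }

          int mountfs(void)
          {
                  int error = mount_validate_sb();
                  if (error)
                          return error;
                  return mount_init_quotas();  /* each phase uses, then frees, its stack */
          }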
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:29834a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>