1. 13 Aug 2008, 8 commits
  2. 28 Jul 2008, 8 commits
  3. 29 Apr 2008, 3 commits
  4. 18 Apr 2008, 6 commits
  5. 10 Apr 2008, 3 commits
  6. 07 Feb 2008, 7 commits
    • [XFS] Move AIL pushing into its own thread · 249a8c11
      Authored by David Chinner
      When many hundreds to thousands of threads all try to do simultaneous
      transactions and the log is in a tail-pushing situation (i.e. full), we
      can get multiple threads walking the AIL list and contending on the AIL
      lock.
      
      The AIL push is, in effect, a simple I/O dispatch algorithm complicated by
      the ordering constraints placed on it by the transaction subsystem. It
      really does not need multiple threads to push on it - even when only a
single CPU is pushing the AIL, it can push the I/O out far faster than
      pretty much any disk subsystem can handle.
      
      So, to avoid contention problems stemming from multiple list walkers, move
      the list walk off into another thread and simply provide a "target" to
      push to. When a thread requires a push, it sets the target and wakes the
      push thread, then goes to sleep waiting for the required amount of space
      to become available in the log.
      
      This mechanism should also be a lot fairer under heavy load as the waiters
      will queue in arrival order, rather than queuing in "who completed a push
      first" order.
      
      Also, by moving the pushing to a separate thread we can do overload
      detection and prevention more effectively, as we can keep context from
      one loop iteration to the next. That is, we can push only part of the
      list each loop and not have to loop back to the start of the list every
      time we run. This should also help by reducing the number of items we
      try to lock, and by not repeatedly trying to push items that we cannot
      move.
      
      Note that this patch is not intended to solve the inefficiencies in the
      AIL structure and the associated issues with extremely large list
      contents. That needs to be addressed separately; parallel access would
      cause problems for any new structure as well, so I'm only aiming to
      isolate the structure from unbounded parallelism here. (A minimal
      user-space sketch of this single-pusher pattern follows this entry.)
      
      SGI-PV: 972759
      SGI-Modid: xfs-linux-melb:xfs-kern:30371a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
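      Below is a hedged, user-space sketch of the single-pusher pattern this commit describes, using pthreads rather than the kernel's kthread and log-grant machinery; all names (ail_lock, push_target, pushed_lsn, ail_pusher, wait_for_push) are hypothetical stand-ins, not the real XFS code. The point is only the shape: one dedicated thread walks and pushes, every other thread just raises a target, wakes the pusher, and sleeps until enough progress has been made.

```c
/*
 * Hedged sketch of the single-pusher pattern from the commit above.
 * All names are hypothetical stand-ins, not the real XFS code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ail_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t pusher_wake = PTHREAD_COND_INITIALIZER;
static pthread_cond_t space_avail = PTHREAD_COND_INITIALIZER;
static long push_target;   /* highest "LSN" any waiter has asked for */
static long pushed_lsn;    /* how far the pusher has gotten so far   */
static int shutting_down;

/* The one push thread: only it walks the (imaginary) AIL, so transaction
 * threads never contend on the list walk itself. */
static void *ail_pusher(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&ail_lock);
    while (!shutting_down) {
        while (pushed_lsn >= push_target && !shutting_down)
            pthread_cond_wait(&pusher_wake, &ail_lock);
        if (shutting_down)
            break;
        /* Push a bounded chunk per iteration; context (pushed_lsn) is
         * kept between iterations instead of restarting the walk. */
        pushed_lsn += 10;
        pthread_cond_broadcast(&space_avail);
    }
    pthread_mutex_unlock(&ail_lock);
    return NULL;
}

/* A transaction thread: raise the target, wake the pusher, then sleep
 * until enough "log space" has been made available. */
static void wait_for_push(long target)
{
    pthread_mutex_lock(&ail_lock);
    if (target > push_target)
        push_target = target;
    pthread_cond_signal(&pusher_wake);
    while (pushed_lsn < target)
        pthread_cond_wait(&space_avail, &ail_lock);
    pthread_mutex_unlock(&ail_lock);
}

int main(void)
{
    pthread_t tid;

    pthread_create(&tid, NULL, ail_pusher, NULL);
    wait_for_push(25);

    pthread_mutex_lock(&ail_lock);
    shutting_down = 1;
    pthread_cond_signal(&pusher_wake);
    pthread_mutex_unlock(&ail_lock);
    pthread_join(tid, NULL);

    printf("pushed to %ld\n", pushed_lsn);
    return 0;
}
```

      In the real patch the walk is done by a dedicated kernel thread and the waiters sleep until the required log space becomes available; the sketch only mirrors that control flow.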
    • [XFS] Fix up sparse warnings. · a8272ce0
      Authored by David Chinner
      These are mostly locking annotations, marking things static, adding
      casts where needed, and declaring things in header files. (A small
      sketch of such locking annotations follows this entry.)
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30002a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
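      As a rough illustration of what such lock-context annotations look like, here is a hedged, self-contained sketch; the helper names and the fallback macro definitions are assumptions for illustration, not the XFS code. When the file is run through sparse, the __acquires/__releases annotations let it warn about "context imbalance" (paths that return with a lock held, or release one they never took), while a normal compile sees empty macros.

```c
/*
 * Hedged sketch of sparse lock-context annotations; the demo_* names and
 * the fallback defines are assumptions, not the kernel's own headers.
 */
#include <pthread.h>

#ifdef __CHECKER__                      /* defined only when sparse runs */
# define __acquires(x)  __attribute__((context(x, 0, 1)))
# define __releases(x)  __attribute__((context(x, 1, 0)))
#else
# define __acquires(x)
# define __releases(x)
#endif

static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

/* Annotated declarations: sparse tracks the lock context across calls. */
static void demo_lock_take(void) __acquires(demo_lock);
static void demo_lock_drop(void) __releases(demo_lock);

static void demo_lock_take(void)
{
    pthread_mutex_lock(&demo_lock);
}

static void demo_lock_drop(void)
{
    pthread_mutex_unlock(&demo_lock);
}

int main(void)
{
    demo_lock_take();
    demo_lock_drop();
    return 0;
}
```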
    • [XFS] Refactor xfs_mountfs · 0771fb45
      Authored by Eric Sandeen
      Refactoring xfs_mountfs() to call sub-functions for logical chunks can
      help save a bit of stack, and can make it easier to read this long
      function.
      
      The mount path is one of the longest common callchains, easily getting
      to within a few bytes of the end of a 4k stack when running over lvm
      with quotas enabled and quotacheck to be done.
      
      With this change on top of the other stack-related changes I've sent, I
      can get xfs to survive a normal xfsqa run on 4k stacks over lvm. (A
      small sketch of the idea follows this entry.)
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:29834a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
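      A hedged sketch of the stack-saving idea behind the refactor described above; all names and sizes here are hypothetical, not the real xfs_mountfs() code. Moving large temporaries into separate, non-inlined helpers means each helper's locals are live only while that helper runs, so the long mount-time callchain keeps a much smaller footprint on a 4k stack.

```c
/*
 * Hedged sketch of splitting a long function into sub-functions to reduce
 * live stack.  Names and sizes are hypothetical, not xfs_mountfs().
 */
#include <stdio.h>
#include <string.h>

struct big_scratch { char buf[1024]; };   /* stand-in for large locals */

/* Large temporaries live in separate, non-inlined helpers, so their stack
 * space is only in use while the helper itself runs. */
static __attribute__((noinline)) int read_superblock_step(void)
{
    struct big_scratch scratch;

    memset(&scratch, 1, sizeof(scratch));
    return scratch.buf[0] == 1 ? 0 : -1;
}

static __attribute__((noinline)) int quotacheck_step(void)
{
    struct big_scratch scratch;

    memset(&scratch, 0, sizeof(scratch));
    return (int)scratch.buf[0];
}

/* The long entry point now only chains small calls, so its own frame stays
 * tiny and the deep mount-time callchain fits a 4k stack more easily. */
static int mount_like_path(void)
{
    int error = read_superblock_step();

    if (error)
        return error;
    return quotacheck_step();
}

int main(void)
{
    printf("mount-like path returned %d\n", mount_like_path());
    return 0;
}
```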
    • [XFS] Remove spin.h · 007c61c6
      Authored by Eric Sandeen
      Remove the spinlock init abstraction macro in spin.h, remove the
      callers, and remove the file. Move the no-op spinlock_destroy to
      xfs_linux.h. Clean up spinlock locals in xfs_mount.c.
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29751a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] Unwrap XFS_SB_LOCK. · 3685c2a1
      Authored by Eric Sandeen
      Un-obfuscate XFS_SB_LOCK: remove the XFS_SB_LOCK->mutex_lock->spin_lock
      macros, call spin_lock directly, remove the extraneous cookie holdover
      from old xfs code, and change the lock type to spinlock_t. (A small
      sketch of this kind of unwrapping follows this entry.)
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29746a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
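      A hedged illustration of the kind of unwrapping described above (and of the related spin.h cleanup two entries earlier), using a user-space mutex as a stand-in for the kernel's spinlock_t; the DEMO_* names are hypothetical, not the real XFS macros. The wrapper pretends to return a lock cookie that nothing needs, and the cleanup is simply to call the lock primitive directly.

```c
/*
 * Hedged sketch of unwrapping a lock macro.  A pthread mutex stands in for
 * spinlock_t; the DEMO_* names are hypothetical, not the XFS macros.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t demo_sb_lock = PTHREAD_MUTEX_INITIALIZER;
static int demo_counter;

/* Before: an abstraction macro that also hands back a "cookie" nobody
 * needs, a holdover from an older locking interface. */
#define DEMO_SB_LOCK()     (pthread_mutex_lock(&demo_sb_lock), 0UL)
#define DEMO_SB_UNLOCK(s)  ((void)(s), pthread_mutex_unlock(&demo_sb_lock))

static void update_with_wrapper(void)
{
    unsigned long cookie = DEMO_SB_LOCK();   /* extraneous cookie */

    demo_counter++;
    DEMO_SB_UNLOCK(cookie);
}

/* After: call the lock primitive directly and drop the cookie, which is
 * the shape of unwrapping XFS_SB_LOCK into plain spin_lock()/spin_unlock(). */
static void update_direct(void)
{
    pthread_mutex_lock(&demo_sb_lock);
    demo_counter++;
    pthread_mutex_unlock(&demo_sb_lock);
}

int main(void)
{
    update_with_wrapper();
    update_direct();
    printf("counter = %d\n", demo_counter);
    return 0;
}
```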
    • [XFS] Unwrap AIL_LOCK · 287f3dad
      Authored by Donald Douwsma
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29739a
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] kill unnecessary ioops indirection · 541d7d3c
      Authored by Lachlan McIlroy
      Currently there is an indirection called ioops in the XFS data I/O
      path. Various functions are called through function pointers, but there
      is no coherence in what this is for, and of course for XFS itself it is
      entirely unused. This patch removes the indirection, significantly
      reducing the source and binary size of XFS while making maintenance
      easier. (A small sketch of this kind of cleanup follows this entry.)
      
      SGI-PV: 970841
      SGI-Modid: xfs-linux-melb:xfs-kern:29737a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
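      A hedged sketch of what removing a single-implementation ops table looks like in general; the demo_* names are hypothetical and unrelated to the real XFS ioops interfaces. When only one implementation ever exists, the function-pointer indirection adds code and obscures the call chain, and replacing it with direct calls shrinks both.

```c
/*
 * Hedged sketch of removing a single-implementation ops table.  The demo_*
 * names are hypothetical, not the real XFS ioops interfaces.
 */
#include <stdio.h>

/* Before: every call goes through a function-pointer table, even though
 * only one implementation ever exists. */
struct demo_ioops {
    int (*bmap)(int block);
};

static int demo_bmap_impl(int block)
{
    return block * 8;                 /* pretend block mapping */
}

static const struct demo_ioops demo_ops = { .bmap = demo_bmap_impl };

static int map_via_ops(const struct demo_ioops *ops, int block)
{
    return ops->bmap(block);          /* indirect call */
}

/* After: the indirection is gone and the implementation is called directly,
 * which shrinks the code and makes the call chain obvious. */
static int map_direct(int block)
{
    return demo_bmap_impl(block);     /* direct call */
}

int main(void)
{
    printf("%d %d\n", map_via_ops(&demo_ops, 4), map_direct(4));
    return 0;
}
```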
  7. 16 Oct 2007, 5 commits