1. 13 Aug, 2008 (4 commits)
  2. 28 Jul, 2008 (8 commits)
  3. 29 Apr, 2008 (3 commits)
  4. 18 Apr, 2008 (6 commits)
  5. 10 Apr, 2008 (3 commits)
  6. 07 Feb, 2008 (7 commits)
    • [XFS] Move AIL pushing into its own thread · 249a8c11
      Committed by David Chinner
      When many hundreds to thousands of threads all try to do simultaneous
      transactions and the log is in a tail-pushing situation (i.e. full), we
      can get multiple threads walking the AIL list and contending on the AIL
      lock.
      
      The AIL push is, in effect, a simple I/O dispatch algorithm complicated by
      the ordering constraints placed on it by the transaction subsystem. It
      really does not need multiple threads to push on it - even when only a
      single CPU is pushing the AIL, it can push the I/O out far faster than
      pretty much any disk subsystem can handle.
      
      So, to avoid contention problems stemming from multiple list walkers, move
      the list walk off into another thread and simply provide a "target" to
      push to. When a thread requires a push, it sets the target and wakes the
      push thread, then goes to sleep waiting for the required amount of space
      to become available in the log.
      
      This mechanism should also be a lot fairer under heavy load as the waiters
      will queue in arrival order, rather than queuing in "who completed a push
      first" order.
      
      Also, by moving the pushing to a separate thread we can do overload
      detection and prevention more effectively, because we can keep context
      from loop iteration to loop iteration. That is, we can push only part of
      the list each loop and not have to loop back to the start of the list
      every time we run. This should also help by reducing the number of items
      we try to lock, and the number of items we try to push but cannot move.
      
      Note that this patch is not intended to solve the inefficiencies in the
      AIL structure and the associated issues with extremely large list
      contents. That needs to be addressed separately; parallel access would
      cause problems for any new structure as well, so I'm only aiming to
      isolate the structure from unbounded parallelism here. (A minimal
      userspace sketch of the push-thread pattern follows this entry.)
      
      SGI-PV: 972759
      SGI-Modid: xfs-linux-melb:xfs-kern:30371a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
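      A minimal userspace sketch of the mechanism described above, using POSIX
      threads rather than a kernel thread: waiters publish a push target and
      wake a single dedicated push thread, which is the only walker of the
      list. All names here (push_target, request_push, and so on) are
      illustrative, not XFS symbols.

      /* Waiters only set a target and sleep; one thread does all the pushing. */
      #include <pthread.h>
      #include <stdint.h>

      static pthread_mutex_t push_mutex = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  push_wait  = PTHREAD_COND_INITIALIZER;
      static uint64_t push_target;   /* highest point anyone has asked us to push to */
      static uint64_t pushed_lsn;    /* how far the push thread has actually pushed  */

      /* Called by a transaction thread that needs log space. */
      static void request_push(uint64_t target_lsn)
      {
              pthread_mutex_lock(&push_mutex);
              if (target_lsn > push_target)
                      push_target = target_lsn;        /* target only moves forward */
              pthread_cond_broadcast(&push_wait);      /* wake the push thread      */
              while (pushed_lsn < target_lsn)          /* sleep until space appears */
                      pthread_cond_wait(&push_wait, &push_mutex);
              pthread_mutex_unlock(&push_mutex);
      }

      /* The single push thread: the only place the list is ever walked. */
      static void *push_thread(void *arg)
      {
              (void)arg;
              for (;;) {
                      pthread_mutex_lock(&push_mutex);
                      while (pushed_lsn >= push_target)
                              pthread_cond_wait(&push_wait, &push_mutex);
                      uint64_t target = push_target;
                      pthread_mutex_unlock(&push_mutex);

                      /* ... walk part of the list and issue I/O up to 'target' ... */

                      pthread_mutex_lock(&push_mutex);
                      pushed_lsn = target;                 /* record progress       */
                      pthread_cond_broadcast(&push_wait);  /* wake sleeping waiters */
                      pthread_mutex_unlock(&push_mutex);
              }
              return NULL;
      }
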
    • [XFS] Fix up sparse warnings. · a8272ce0
      Committed by David Chinner
      These are mostly locking annotations, functions marked static, casts
      where needed, and declarations moved into header files. (A small
      illustration of the sparse locking annotations follows this entry.)
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:30002a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
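      For readers unfamiliar with sparse, a tiny kernel-style illustration (not
      taken from this commit) of the lock-context annotations it refers to;
      demo_lock and the function names are made up.

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(demo_lock);

      /* Tell sparse ("make C=1") that this function returns with the lock held. */
      static void demo_enter(void) __acquires(demo_lock)
      {
              spin_lock(&demo_lock);
      }

      /* ...and that this one releases it before returning. */
      static void demo_exit(void) __releases(demo_lock)
      {
              spin_unlock(&demo_lock);
      }
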
    • [XFS] Refactor xfs_mountfs · 0771fb45
      Committed by Eric Sandeen
      Refactoring xfs_mountfs() to call sub-functions for logical chunks can
      help save a bit of stack, and can make it easier to read this long
      function.
      
      The mount path is one of the longest common callchains, easily getting to
      within a few bytes of the end of a 4k stack when over lvm, quotas are
      enabled, and quotacheck must be done.
      
      With this change on top of the other stack-related changes I've sent, I
      can get xfs to survive a normal xfsqa run on 4k stacks over lvm. (A
      sketch of the general refactoring idea follows this entry.)
      
      SGI-PV: 971186
      SGI-Modid: xfs-linux-melb:xfs-kern:29834a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
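      A hypothetical sketch of the refactoring idea, not the actual
      xfs_mountfs() split: each logical chunk becomes its own non-inlined
      helper, so its locals occupy the stack only while that step runs.
      mount_validate_sb, mount_setup_quotas and struct mount_ctx are invented
      names.

      struct mount_ctx;                       /* stand-in for the mount structure */

      static __attribute__((noinline)) int mount_validate_sb(struct mount_ctx *mc)
      {
              char scratch[256];              /* on the stack only during this step */

              (void)mc;
              (void)scratch;
              /* ... validate superblock fields ... */
              return 0;
      }

      static __attribute__((noinline)) int mount_setup_quotas(struct mount_ctx *mc)
      {
              (void)mc;
              /* ... quotacheck and quota setup, with its own short-lived locals ... */
              return 0;
      }

      static int demo_mountfs(struct mount_ctx *mc)
      {
              int error;

              error = mount_validate_sb(mc);  /* each helper's frame is gone before */
              if (error)                      /* the next deep call chain starts    */
                      return error;
              return mount_setup_quotas(mc);
      }
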
    • [XFS] Remove spin.h · 007c61c6
      Committed by Eric Sandeen
      Remove the spinlock init abstraction macro in spin.h, remove the callers,
      and remove the file. Move the no-op spinlock_destroy to xfs_linux.h and
      clean up spinlock locals in xfs_mount.c. (A short illustration of the
      resulting direct spinlock usage follows this entry.)
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29751a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
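      A kernel-style illustration (not the commit's diff) of the end state: a
      plain spinlock_t field initialised and used through the generic API with
      no wrapper macros. struct demo_mount is a made-up stand-in.

      #include <linux/spinlock.h>

      struct demo_mount {
              spinlock_t      d_lock;         /* plain spinlock_t, no wrapper type */
      };

      static void demo_mount_init(struct demo_mount *dm)
      {
              spin_lock_init(&dm->d_lock);    /* was: a spin.h init macro */
      }

      static void demo_mount_update(struct demo_mount *dm)
      {
              spin_lock(&dm->d_lock);
              /* ... modify fields under the lock ... */
              spin_unlock(&dm->d_lock);
      }
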
    • [XFS] Unwrap XFS_SB_LOCK. · 3685c2a1
      Committed by Eric Sandeen
      Un-obfuscate XFS_SB_LOCK: remove the XFS_SB_LOCK->mutex_lock->spin_lock
      macro layering, call spin_lock directly, remove the extraneous lock
      cookie held over from old xfs code, and change the lock type to
      spinlock_t. (A rough before/after sketch follows this entry.)
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29746a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
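      A rough before/after sketch of what unwrapping such a lock macro looks
      like; the structure and field names are illustrative rather than the
      exact XFS ones, and the "was" comments are only approximate.

      #include <linux/spinlock.h>

      struct demo_sbinfo {
              spinlock_t      sb_lock;        /* now a plain spinlock_t */
              unsigned int    sb_icount;
      };

      static void demo_mod_incore_sb(struct demo_sbinfo *sbp)
      {
              spin_lock(&sbp->sb_lock);       /* was roughly: s = XFS_SB_LOCK(mp);  */
              sbp->sb_icount++;
              spin_unlock(&sbp->sb_lock);     /* was roughly: XFS_SB_UNLOCK(mp, s); */
      }
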
    • [XFS] Unwrap AIL_LOCK · 287f3dad
      Committed by Donald Douwsma
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29739a
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] kill unnecessary ioops indirection · 541d7d3c
      Committed by Lachlan McIlroy
      Currently there is an indirection called ioops in the XFS data I/O path.
      Various functions are called through function pointers, but there is no
      coherence in what this is for, and of course for XFS itself it's entirely
      unused. This patch removes the indirection, significantly reducing the
      source and binary size of XFS while making maintenance easier. (An
      illustrative sketch of removing such an indirection follows this entry.)
      
      SGI-PV: 970841
      SGI-Modid: xfs-linux-melb:xfs-kern:29737a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
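      An illustrative sketch (hypothetical names, not the real ioops vector) of
      the general change: a function-pointer table with only one implementation
      is replaced by direct calls, which shrinks the code and is easier to
      follow.

      #include <stddef.h>

      struct demo_io_request { size_t len; };

      /* Before: every call bounced through a table like this. */
      struct demo_ioops {
              int (*submit)(struct demo_io_request *req);
      };

      static int demo_submit_io(struct demo_io_request *req)
      {
              /* ... build and issue the I/O ... */
              (void)req;
              return 0;
      }

      /* After: the single implementation is called directly. */
      static int demo_write(struct demo_io_request *req)
      {
              return demo_submit_io(req);     /* was: ops->submit(req) */
      }
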
  7. 16 Oct, 2007 (6 commits)
  8. 15 Oct, 2007 (3 commits)
    • [XFS] Radix tree based inode caching · da353b0d
      Committed by David Chinner
      One of the perpetual scaling problems XFS has is indexing its incore
      inodes. We currently use hashes, and the default hash sizes chosen can
      only ever be a tradeoff between memory consumption and the maximum
      realistic size of the cache.
      
      As a result, anyone who has millions of inodes cached on a filesystem
      needs to tune the size of the cache via the ihashsize mount option to
      allow decent scalability with inode cache operations.
      
      A further problem is the separate inode cluster hash, whose size is based
      on the ihashsize but is smaller, and so under certain conditions (sparse
      cluster cache population) this can become a limitation long before the
      inode hash is causing issues.
      
      The following patchset removes the inode hash and cluster hash and
      replaces them with radix trees to avoid the scalability limitations of
      the hashes. It also reduces the size of each incore inode by three
      pointers. (A kernel-style sketch of the radix-tree indexing follows this
      entry.)
      
      SGI-PV: 969561
      SGI-Modid: xfs-linux-melb:xfs-kern:29481a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
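      A kernel-style sketch of the replacement indexing, not the actual per-AG
      XFS code: incore inodes keyed by inode number in a radix tree that grows
      on demand, so there is no hash size to tune. demo_inode_tree and friends
      are invented names; real code would also use radix_tree_preload() and
      one tree per allocation group.

      #include <linux/radix_tree.h>
      #include <linux/spinlock.h>

      struct demo_inode {
              unsigned long   i_ino;
              /* ... */
      };

      static RADIX_TREE(demo_inode_tree, GFP_ATOMIC);  /* no fixed size to pick */
      static DEFINE_SPINLOCK(demo_tree_lock);

      static int demo_icache_insert(struct demo_inode *ip)
      {
              int error;

              spin_lock(&demo_tree_lock);
              error = radix_tree_insert(&demo_inode_tree, ip->i_ino, ip);
              spin_unlock(&demo_tree_lock);
              return error;                   /* -EEXIST if already cached */
      }

      static struct demo_inode *demo_icache_lookup(unsigned long ino)
      {
              struct demo_inode *ip;

              spin_lock(&demo_tree_lock);
              ip = radix_tree_lookup(&demo_inode_tree, ino);
              spin_unlock(&demo_tree_lock);
              return ip;
      }
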
    • [XFS] superblock endianness annotations · 2bdf7cd0
      Committed by Christoph Hellwig
      Creates a new xfs_dsb_t that is __be annotated and keeps xfs_sb_t for the
      incore one. xfs_xlatesb is renamed to xfs_sb_to_disk and only handles the
      incore -> disk conversion. A new helper xfs_sb_from_disk handles the other
      direction and doesn't need the slightly hacky table-driven approach
      because we only ever read the full sb from disk.
      
      The handling of shared r/o filesystems has been buggy on little endian
      systems, and fixing this required shuffling some code around in that
      area. (A small illustration of the conversion helpers follows this
      entry.)
      
      SGI-PV: 968563
      SGI-Modid: xfs-linux-melb:xfs-kern:29477a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
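      A small illustration of the split the commit describes, with made-up
      field names: the on-disk structure carries __be types so sparse can
      check conversions, and explicit helpers translate to and from the
      native-endian incore structure.

      #include <linux/types.h>
      #include <asm/byteorder.h>

      struct demo_dsb {               /* on-disk layout, always big-endian */
              __be32  sb_blocksize;
              __be64  sb_dblocks;
      };

      struct demo_sb {                /* incore copy, CPU-native endianness */
              u32     sb_blocksize;
              u64     sb_dblocks;
      };

      static void demo_sb_from_disk(struct demo_sb *to, struct demo_dsb *from)
      {
              to->sb_blocksize = be32_to_cpu(from->sb_blocksize);
              to->sb_dblocks   = be64_to_cpu(from->sb_dblocks);
      }

      static void demo_sb_to_disk(struct demo_dsb *to, struct demo_sb *from)
      {
              to->sb_blocksize = cpu_to_be32(from->sb_blocksize);
              to->sb_dblocks   = cpu_to_be64(from->sb_dblocks);
      }
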
    • [XFS] Fix a potential NULL pointer deref in XFS on failed mount. · 49ee6c91
      Committed by Jesper Juhl
      If we fail to open the log device buftarg, we can fall through to error
      handling code that fails to check for a NULL log device buftarg before
      calling xfs_free_buftarg().
      
      This patch fixes the issue by checking mp->m_logdev_targp against NULL in
      xfs_unmountfs_close(), and by doing the proper xfs_blkdev_put(logdev) and
      xfs_blkdev_put(rtdev) calls when mp->m_rtdev_targp is NULL in
      xfs_mount(). (A generic sketch of this teardown rule follows this entry.)
      
      Discovered by the Coverity checker.
      
      SGI-PV: 968563
      SGI-Modid: xfs-linux-melb:xfs-kern:29328a
      Signed-off-by: Jesper Juhl <jesper.juhl@gmail.com>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
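      A generic userspace sketch of the teardown rule the fix enforces, with
      invented names: on the error or unmount path, only release resources
      that were actually set up.

      #include <stdio.h>
      #include <stdlib.h>

      struct demo_buftarg {
              FILE    *bt_dev;        /* stand-in for an opened device */
      };

      static void demo_free_buftarg(struct demo_buftarg *btp)
      {
              fclose(btp->bt_dev);    /* would crash if btp were NULL */
              free(btp);
      }

      static void demo_unmount_close(struct demo_buftarg *ddev_targ,
                                     struct demo_buftarg *logdev_targ)
      {
              demo_free_buftarg(ddev_targ);
              if (logdev_targ)        /* NULL if opening the log device failed */
                      demo_free_buftarg(logdev_targ);
      }
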