  1. 18 Apr 2008 (2 commits)
  2. 29 Feb 2008 (1 commit)
  3. 07 Feb 2008 (6 commits)
    • [XFS] Move AIL pushing into its own thread · 249a8c11
      Authored by David Chinner
      When many hundreds to thousands of threads all try to do simultaneous
      transactions and the log is in a tail-pushing situation (i.e. full), we
      can get multiple threads walking the AIL list and contending on the AIL
      lock.
      
      The AIL push is, in effect, a simple I/O dispatch algorithm complicated by
      the ordering constraints placed on it by the transaction subsystem. It
      really does not need multiple threads to push on it - even when only a
      single CPU is pushing the AIL, it can push the I/O out far faster than
      pretty much any disk subsystem can handle.
      
      So, to avoid contention problems stemming from multiple list walkers, move
      the list walk off into another thread and simply provide a "target" to
      push to. When a thread requires a push, it sets the target and wakes the
      push thread, then goes to sleep waiting for the required amount of space
      to become available in the log.
      
      This mechanism should also be a lot fairer under heavy load as the waiters
      will queue in arrival order, rather than queuing in "who completed a push
      first" order.
      
      Also, by moving the pushing to a separate thread we can do overload
      detection and prevention more effectively, as we can keep context from
      one loop iteration to the next. That is, we can push only part of the
      list each loop and not have to loop back to the start of the list every
      time we run. This should also help by reducing the number of items we
      try to lock and/or push that we cannot actually move.
      
      Note that this patch is not intended to solve the inefficiencies in the
      AIL structure and the associated issues with extremely large list
      contents. That needs to be addressed separately; parallel access would
      cause problems for any new structure as well, so I'm only aiming to
      isolate the structure from unbounded parallelism here.
      
      SGI-PV: 972759
      SGI-Modid: xfs-linux-melb:xfs-kern:30371a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
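
      A minimal user-space sketch of the push-target pattern described above,
      using pthreads instead of the kernel's wait queues; the names
      push_target, wait_for_space and pusher_main are illustrative, not the
      actual xfs_trans_ail interfaces. Waiters only publish a target and wake
      the single pusher (the role the dedicated push thread, xfsaild, plays in
      the real patch), then sleep until enough log space is available:

      /* Sketch of a single-pusher, many-waiters pattern (illustrative only). */
      #include <pthread.h>
      #include <stdint.h>

      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
      static pthread_cond_t  push_wanted = PTHREAD_COND_INITIALIZER; /* wakes the pusher  */
      static pthread_cond_t  space_freed = PTHREAD_COND_INITIALIZER; /* wakes the waiters */
      static uint64_t push_target;   /* highest LSN anyone has asked to be pushed to */
      static uint64_t pushed_lsn;    /* how far the pusher has actually pushed       */

      /* Called by transaction threads: publish a target, then sleep for space. */
      static void wait_for_space(uint64_t target_lsn)
      {
              pthread_mutex_lock(&lock);
              if (target_lsn > push_target)
                      push_target = target_lsn;   /* the target only ever moves forward */
              pthread_cond_signal(&push_wanted);  /* kick the single pusher thread */
              while (pushed_lsn < target_lsn)     /* waiters queue in arrival order */
                      pthread_cond_wait(&space_freed, &lock);
              pthread_mutex_unlock(&lock);
      }

      /* The dedicated pusher: the only thread that ever walks the AIL. */
      static void *pusher_main(void *arg)
      {
              (void)arg;
              pthread_mutex_lock(&lock);
              for (;;) {
                      while (pushed_lsn >= push_target)
                              pthread_cond_wait(&push_wanted, &lock);
                      uint64_t target = push_target;
                      pthread_mutex_unlock(&lock);

                      /* Walk part of the AIL and dispatch I/O here; restart state
                       * can be kept across iterations because only this thread
                       * ever walks the list. */

                      pthread_mutex_lock(&lock);
                      pushed_lsn = target;        /* pretend the push reached the target */
                      pthread_cond_broadcast(&space_freed);
              }
              return NULL;
      }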
    • [XFS] kill xfs_iocore_t · 613d7043
      Authored by Christoph Hellwig
      xfs_iocore_t is a structure embedded in xfs_inode. Except for one field
      it just duplicates fields already in xfs_inode, and there is nothing
      this abstraction buys us on XFS/Linux. This patch removes it and shrinks
      the source and binary size of XFS, as well as shrinking xfs_inode by
      60/44 bytes in debug/non-debug builds.
      
      SGI-PV: 970852
      SGI-Modid: xfs-linux-melb:xfs-kern:29754a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
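
      A schematic of the kind of duplication being removed; the field subsets
      below are invented for illustration and do not match the real xfs_inode
      layout:

      /* Illustrative only: invented field subsets, not the real structures. */
      struct xfs_mount;

      /* Before: a nested "iocore" carrying copies of data already in the inode. */
      struct iocore_sketch {
              struct xfs_mount *io_mount;       /* duplicate of i_mount */
              unsigned int      io_flags;       /* duplicate of i_flags */
              void             *io_unique;      /* the one field with no twin */
      };

      struct inode_before {
              struct xfs_mount    *i_mount;
              unsigned int         i_flags;
              struct iocore_sketch i_iocore;    /* extra bytes in every cached inode */
      };

      /* After: the single unique field moves into the inode and the wrapper
       * disappears, shrinking every incore inode and the I/O-path code that
       * used to dereference it. */
      struct inode_after {
              struct xfs_mount *i_mount;
              unsigned int      i_flags;
              void             *i_unique;       /* former io_unique, now direct */
      };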
    • [XFS] Cleanup lock goop. · 36e41eeb
      Authored by Eric Sandeen
      Switch the last couple of lock_t's to spinlock_t's. Remove the
      now-unused spinlock-related macros and types.
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29748a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] Unwrap XFS_SB_LOCK. · 3685c2a1
      Authored by Eric Sandeen
      Un-obfuscate XFS_SB_LOCK, remove XFS_SB_LOCK->mutex_lock->spin_lock
      macros, call spin_lock directly, remove extraneous cookie holdover from
      old xfs code, and change lock type to spinlock_t.
      
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29746a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
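
      A before/after sketch of the unwrapping described here; the field and
      function names approximate the old wrappers rather than the exact XFS
      definitions:

      #include <linux/spinlock.h>

      /* Sketch only: field names approximate the real mount structure. */
      struct mount_sketch {
              spinlock_t      m_sb_lock;        /* was a lock_t behind XFS_SB_LOCK() */
              unsigned long   m_sb_icount;
      };

      /* Before (conceptually): callers went through XFS_SB_LOCK()/XFS_SB_UNLOCK()
       * macros that returned and consumed an interrupt-state "cookie", a
       * holdover from the old IRIX-derived spinlock wrappers. */

      /* After: take the spinlock_t directly - no cookie, no macro layers. */
      static void sb_icount_add(struct mount_sketch *mp, unsigned long delta)
      {
              spin_lock(&mp->m_sb_lock);
              mp->m_sb_icount += delta;
              spin_unlock(&mp->m_sb_lock);
      }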
    • [XFS] Unwrap AIL_LOCK · 287f3dad
      Authored by Donald Douwsma
      SGI-PV: 970382
      SGI-Modid: xfs-linux-melb:xfs-kern:29739a
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
    • [XFS] kill unnecessary ioops indirection · 541d7d3c
      Authored by Lachlan McIlroy
      Currently there is an indirection called ioops in the XFS data I/O path.
      Various functions are called through function pointers, but there is no
      coherent purpose to this indirection, and for XFS itself it is entirely
      unused. This patch removes it, significantly reducing the source and
      binary size of XFS while making maintenance easier.
      
      SGI-PV: 970841
      SGI-Modid: xfs-linux-melb:xfs-kern:29737a
      Signed-off-by: Lachlan McIlroy <lachlan@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
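
      A sketch of what removing an ops-table indirection looks like; the
      structure and function names are illustrative, not the removed ioops
      interface itself:

      #include <stddef.h>

      struct inode_sketch;        /* stand-in for the inode the I/O path works on */

      /* Before: an ops table full of pointers with exactly one implementation. */
      struct ioops_sketch {
              int (*iomap_write)(struct inode_sketch *ip, long long off, size_t len);
      };

      static int iomap_write_impl(struct inode_sketch *ip, long long off,
                                  size_t len)
      {
              /* ... the only implementation that ever existed ... */
              return 0;
      }

      static const struct ioops_sketch ops = {
              .iomap_write = iomap_write_impl,
      };

      static int write_via_ops(struct inode_sketch *ip, long long off, size_t len)
      {
              return ops.iomap_write(ip, off, len);     /* indirect call */
      }

      /* After: call the single implementation directly; smaller code, and the
       * call chain is obvious when reading the source. */
      static int write_direct(struct inode_sketch *ip, long long off, size_t len)
      {
              return iomap_write_impl(ip, off, len);
      }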
  4. 16 Oct 2007 (10 commits)
  5. 15 Oct 2007 (3 commits)
    • [XFS] Radix tree based inode caching · da353b0d
      Authored by David Chinner
      One of the perpetual scaling problems XFS has is indexing its incore
      inodes. We currently use hashes, and the default hash sizes chosen can
      only ever be a tradeoff between memory consumption and the maximum
      realistic size of the cache.
      
      As a result, anyone who has millions of inodes cached on a filesystem
      needs to tune the size of the cache via the ihashsize mount option to
      allow decent scalability of inode cache operations.
      
      A further problem is the separate inode cluster hash, whose size is based
      on the ihashsize but is smaller, and so under certain conditions (sparse
      cluster cache population) this can become a limitation long before the
      inode hash is causing issues.
      
      The following patchset removes the inode hash and cluster hash and
      replaces them with radix trees to avoid the scalability limitations of the
      hashes. It also reduces the size of the inodes by 3 pointers....
      
      SGI-PV: 969561
      SGI-Modid: xfs-linux-melb:xfs-kern:29481a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
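
      A minimal sketch of indexing incore inodes with the kernel's generic
      radix tree. Keying one tree per allocation group by the AG-relative
      inode number, and the structure and lock names, are assumptions made
      for illustration rather than details stated in the message:

      #include <linux/radix-tree.h>
      #include <linux/spinlock.h>

      struct xfs_inode;                         /* opaque here */

      /* One tree per allocation group replaces the fixed-size inode hash. */
      struct perag_sketch {
              spinlock_t              ici_lock;
              struct radix_tree_root  ici_root;
      };

      static void perag_init(struct perag_sketch *pag)
      {
              spin_lock_init(&pag->ici_lock);
              INIT_RADIX_TREE(&pag->ici_root, GFP_ATOMIC);
      }

      static int icache_insert(struct perag_sketch *pag, unsigned long agino,
                               struct xfs_inode *ip)
      {
              int error;

              spin_lock(&pag->ici_lock);
              error = radix_tree_insert(&pag->ici_root, agino, ip);
              spin_unlock(&pag->ici_lock);
              return error;                     /* -EEXIST if a racing insert won */
      }

      static struct xfs_inode *icache_lookup(struct perag_sketch *pag,
                                             unsigned long agino)
      {
              struct xfs_inode *ip;

              spin_lock(&pag->ici_lock);
              ip = radix_tree_lookup(&pag->ici_root, agino);
              spin_unlock(&pag->ici_lock);
              return ip;                        /* NULL if not cached */
      }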
    • [XFS] superblock endianness annotations · 2bdf7cd0
      Authored by Christoph Hellwig
      Creates a new xfs_dsb_t that is __be annotated and keeps xfs_sb_t for the
      incore one. xfs_xlatesb is renamed to xfs_sb_to_disk and only handles the
      incore -> disk conversion. A new helper xfs_sb_from_disk handles the other
      direction and doesn't need the slightly hacky table-driven approach
      because we only ever read the full sb from disk.
      
      The handling of shared r/o filesystems has been buggy on little endian
      systems, and fixing this required shuffling some code around in that
      area.
      
      SGI-PV: 968563
      SGI-Modid: xfs-linux-melb:xfs-kern:29477a
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
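
      A sketch of the split between the __be-annotated on-disk superblock and
      the CPU-endian incore one, with just two representative fields; the real
      structures have dozens of fields, and the real xfs_sb_to_disk() still
      handles selective field conversion for logging, which this sketch
      ignores:

      #include <linux/types.h>
      #include <asm/byteorder.h>

      /* On-disk form: every multi-byte field annotated big-endian. */
      typedef struct dsb_sketch {
              __be32  sb_magicnum;
              __be64  sb_dblocks;
      } dsb_sketch_t;

      /* Incore form: plain CPU byte order, used everywhere else in the code. */
      typedef struct sb_sketch {
              __u32   sb_magicnum;
              __u64   sb_dblocks;
      } sb_sketch_t;

      /* disk -> incore: we only ever read the whole superblock, so a simple
       * field-by-field copy replaces the old table-driven xfs_xlatesb(). */
      static void sb_from_disk(sb_sketch_t *to, dsb_sketch_t *from)
      {
              to->sb_magicnum = be32_to_cpu(from->sb_magicnum);
              to->sb_dblocks  = be64_to_cpu(from->sb_dblocks);
      }

      /* incore -> disk: the direction the real xfs_sb_to_disk() handles. */
      static void sb_to_disk(dsb_sketch_t *to, sb_sketch_t *from)
      {
              to->sb_magicnum = cpu_to_be32(from->sb_magicnum);
              to->sb_dblocks  = cpu_to_be64(from->sb_dblocks);
      }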
    • [XFS] Remove m_nreadaheads · 40906630
      Authored by Eric Sandeen
      m_nreadaheads in the mount struct is never used; remove it and the various
      macros assigned to it. Also remove a couple other unused macros in the
      same areas.
      
      Removes one user of xfs_physmem.
      
      SGI-PV: 968563
      SGI-Modid: xfs-linux-melb:xfs-kern:29322a
      Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
  6. 14 Jul 2007 (3 commits)
    • [XFS] Concurrent Multi-File Data Streams · 2a82b8be
      Authored by David Chinner
      In media spaces, video is often stored in a frame-per-file format. When
      dealing with uncompressed realtime HD video streams in this format, it
      is crucial that files do not get fragmented and that multiple files are
      placed contiguously on disk.
      
      When multiple streams are being ingested and played out at the same time,
      it is critical that the filesystem does not cross the streams and
      interleave them together as this creates seek and readahead cache miss
      latency and prevents both ingest and playout from meeting frame rate
      targets.
      
      This patch set introduces a "stream of files" concept in the allocator
      to place all the data from a single stream contiguously on disk so that
      RAID array readahead can be used effectively. Each additional stream
      gets placed in a different allocation group within the filesystem,
      thereby ensuring that we don't cross any streams. When an AG fills up,
      we select a new AG for the stream that is not already in use.
      
      The core of the functionality is the stream tracking - each inode that
      we create in a directory needs to be associated with the directory's
      stream. Hence every time we create a file, we look up the directory's
      stream object and associate the new file with that object.
      
      Once we have a stream object for a file, we use the AG that the stream
      object points to for allocations. If we can't allocate in that AG (e.g.
      it is full) we move the entire stream to another AG. Other inodes in
      the same stream are moved to the new AG on their next allocation (i.e.
      lazy update).
      
      Stream objects are kept in a cache and hold a reference on the inode.
      Hence the inode cannot be reclaimed while there is an outstanding stream
      reference. This means that on unlink we need to remove the stream
      association and we also need to flush all the associations on certain
      events that want to reclaim all unreferenced inodes (e.g. filesystem
      freeze).
      
      SGI-PV: 964469
      SGI-Modid: xfs-linux-melb:xfs-kern:29096a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Barry Naujok <bnaujok@sgi.com>
      Signed-off-by: Donald Douwsma <donaldd@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
      Signed-off-by: Vlad Apostolov <vapo@sgi.com>
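
      An illustrative toy of the per-directory stream association described
      above; it uses a flat array where the real patch uses a reference-counted
      cache of stream objects, and every name below is invented:

      #include <stdbool.h>
      #include <stdint.h>

      #define AG_COUNT      16
      #define MAX_STREAMS   64

      struct stream_sketch {
              uint64_t     dir_ino;     /* directory that owns this stream */
              unsigned int agno;        /* AG the stream currently fills   */
              bool         in_use;
      };

      static struct stream_sketch streams[MAX_STREAMS];
      static bool ag_busy[AG_COUNT];    /* AGs already owned by some stream */

      static unsigned int pick_unused_ag(void)
      {
              for (unsigned int ag = 0; ag < AG_COUNT; ag++) {
                      if (!ag_busy[ag]) {
                              ag_busy[ag] = true;
                              return ag;
                      }
              }
              return 0;                 /* everything busy: fall back to AG 0 */
      }

      /* Each allocation asks the parent directory's stream which AG to use.
       * If that AG is full, the whole stream moves; sibling files follow
       * lazily on their next allocation. */
      static unsigned int stream_pick_ag(uint64_t dir_ino, bool current_ag_full)
      {
              struct stream_sketch *s = NULL;

              for (int i = 0; i < MAX_STREAMS; i++) {
                      if (streams[i].in_use && streams[i].dir_ino == dir_ino)
                              s = &streams[i];
              }
              if (!s) {                 /* first file created in this directory */
                      for (int i = 0; i < MAX_STREAMS; i++) {
                              if (!streams[i].in_use) {
                                      s = &streams[i];
                                      break;
                              }
                      }
                      if (!s)
                              return pick_unused_ag();  /* table full: no tracking */
                      s->in_use = true;
                      s->dir_ino = dir_ino;
                      s->agno = pick_unused_ag();
              } else if (current_ag_full) {
                      ag_busy[s->agno] = false;
                      s->agno = pick_unused_ag();
              }
              return s->agno;
      }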
    • [XFS] Lazy Superblock Counters · 92821e2b
      Authored by David Chinner
      When we have a couple of hundred transactions in flight at once, they
      all typically modify the on-disk superblock in some way:
      create/unlink/mkdir/rmdir modify inode counts, and allocation/freeing
      modify free block counts.
      
      When these counts are modified in a transaction, they must eventually lock
      the superblock buffer and apply the mods. The buffer then remains locked
      until the transaction is committed into the incore log buffer. The result
      of this is that with enough transactions on the fly the incore superblock
      buffer becomes a bottleneck.
      
      The result of contention on the incore superblock buffer is that
      transaction rates fall - the more pressure that is put on the superblock
      buffer, the slower things go.
      
      The key to removing the contention is to not require the superblock fields
      in question to be locked. We do that by not marking the superblock dirty
      in the transaction. IOWs, we modify the incore superblock but do not
      modify the cached superblock buffer. In short, we do not log superblock
      modifications to critical fields in the superblock on every transaction.
      In fact we only do it just before we write the superblock to disk every
      sync period or just before unmount.
      
      This creates an interesting problem - if we don't log or write out the
      fields in every transaction, then how do the values get recovered after a
      crash? The answer is simple - we keep enough duplicate, logged information
      in other structures that we can reconstruct the correct count after log
      recovery has been performed.
      
      It is the AGF and AGI structures that contain the duplicate information;
      after recovery, we walk every AGI and AGF and sum their individual
      counters to get the correct value, and we do a transaction into the log to
      correct them. An optimisation of this is that if we have a clean unmount
      record, we know the value in the superblock is correct, so we can avoid
      the summation walk under normal conditions and so mount/recovery times do
      not change under normal operation.
      
      One wrinkle that was discovered during development was that the blocks
      used in the freespace btrees are never accounted for in the AGF counters.
      This was once a valid optimisation to make; when the filesystem is full,
      the free space btrees are empty and consume no space. Hence when it
      matters, the "accounting" is correct. But that means that when we do the
      AGF summations, we would not have a correct count and xfs_check would
      complain. Hence a new counter was added to track the number of blocks used
      by the free space btrees. This is an *on-disk format change*.
      
      As a result of this, lazy superblock counters are a mkfs option and at the
      moment on linux there is no way to convert an old filesystem. This is
      possible - xfs_db can be used to twiddle the right bits and then
      xfs_repair will do the format conversion for you. Similarly, you can
      convert backwards as well. At some point we'll add functionality to
      xfs_admin to do the bit twiddling easily....
      
      SGI-PV: 964999
      SGI-Modid: xfs-linux-melb:xfs-kern:28652a
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
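
      A sketch of the post-recovery summation walk described above; the field
      names loosely follow the on-disk AGF/AGI headers (agf_btreeblks being
      the newly added counter), and the buffer reading plus the transaction
      that logs the corrected values are omitted:

      #include <stdint.h>

      struct agf_sketch {
              uint32_t agf_freeblks;    /* free blocks in this AG */
              uint32_t agf_btreeblks;   /* NEW: blocks held by the free-space
                                           btrees - the on-disk format change */
      };

      struct agi_sketch {
              uint32_t agi_count;       /* allocated inodes in this AG */
              uint32_t agi_freecount;   /* free inodes in this AG      */
      };

      struct sb_counts {
              uint64_t icount;          /* allocated inodes  */
              uint64_t ifree;           /* free inodes       */
              uint64_t fdblocks;        /* free data blocks  */
      };

      /* After an unclean unmount with lazy counters enabled, walk every AG
       * once, rebuild the global counts, then log the corrected superblock.
       * With a clean unmount record the walk is skipped entirely. */
      static void rebuild_sb_counts(const struct agf_sketch *agf,
                                    const struct agi_sketch *agi,
                                    unsigned int ag_count,
                                    struct sb_counts *sb)
      {
              sb->icount = sb->ifree = sb->fdblocks = 0;
              for (unsigned int agno = 0; agno < ag_count; agno++) {
                      sb->icount   += agi[agno].agi_count;
                      sb->ifree    += agi[agno].agi_freecount;
                      sb->fdblocks += agf[agno].agf_freeblks +
                                      agf[agno].agf_btreeblks;
              }
      }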
    • [XFS] Don't grow filesystems past the size they can index. · 4cc929ee
      Authored by Nathan Scott
      When growing a filesystem we don't check to see if the new size
      overflows the page cache index range, so we can do silly things like
      grow a filesystem past 16TB on a 32-bit machine. Check new filesystem
      sizes against the limits the kernel can support.
      
      SGI-PV: 957886
      SGI-Modid: xfs-linux-melb:xfs-kern:28563a
      Signed-off-by: Nathan Scott <nscott@aconex.com>
      Signed-off-by: David Chinner <dgc@sgi.com>
      Signed-off-by: Tim Shimmin <tes@sgi.com>
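
      A sketch of the limit being enforced: the number of pages needed to map
      the filesystem must fit in the page cache index (an unsigned long),
      which is what caps a 32-bit machine with 4k pages at 16TB. The function
      name and the 4k page assumption below are illustrative:

      #include <errno.h>
      #include <limits.h>
      #include <stdint.h>

      #define PAGE_SHIFT_SKETCH  12     /* assume 4k pages */

      /* blocklog is log2 of the filesystem block size (<= page shift here);
       * nblocks is the proposed new size in filesystem blocks. */
      static int validate_fsb_count(unsigned int blocklog, uint64_t nblocks)
      {
              uint64_t npages = nblocks >> (PAGE_SHIFT_SKETCH - blocklog);

              /* On a 32-bit build ULONG_MAX is 2^32 - 1, so anything past
               * 2^32 pages (16TB at 4k) is rejected; on a 64-bit build this
               * check can never trip. */
              if (npages > ULONG_MAX)
                      return E2BIG;
              return 0;
      }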
  7. 10 Feb 2007 (7 commits)
  8. 28 Sep 2006 (2 commits)
  9. 20 Jun 2006 (1 commit)
  10. 09 Jun 2006 (5 commits)