1. 29 October 2010 (3 commits)
    • Btrfs: load free space cache if it exists · 9d66e233
      Committed by Josef Bacik
      This patch actually loads the free space cache if it exists.  The only thing
      that really changes here is that we need to cache the block group if we're going
      to remove an extent from it.  Previously we did not do this since the caching
      kthread would pick it up.  With the on-disk cache we don't have this luxury, so
      we need to make sure we read the on-disk cache in first and then remove the
      extent, so that when the extent is unpinned the free space is added to the
      block group.  This has been tested with all sorts of things.
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: write out free space cache · 0cb59c99
      Committed by Josef Bacik
      This is a simple bit, just dump the free space cache out to our preallocated
      inode when we're writing out dirty block groups.  There are a bunch of changes
      in inode.c in order to account for special cases.  Mostly when we're doing the
      writeout we're holding trans_mutex, so we need to use the nolock transaction
      functions.  Also we can't do asynchronous completions since the async thread
      could be blocked on already completed IO waiting for the transaction lock.  This
      has been tested with xfstests and btrfs filesystem balance, as well as my ENOSPC
      tests.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
    • Btrfs: create special free space cache inode · 0af3d00b
      Committed by Josef Bacik
      In order to save free space cache, we need an inode to hold the data, and we
      need a special item to point at the right inode for the right block group.  So
      first, create a special item that will point to the right inode, and the number
      of extent entries we will have and the number of bitmaps we will have.  We
      truncate and pre-allocate space every time to make sure it's up to date.
      
      This feature will be turned on as soon as you mount with -o space_cache; however,
      it is safe to boot into old kernels, as they will just generate the cache the
      old-fashioned way.  When you boot back into a newer kernel we will notice that the
      filesystem was modified but the cache was not, and automatically discard the cache.
      Signed-off-by: Josef Bacik <josef@redhat.com>
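      To make the on-disk linkage this commit describes a bit more concrete, here is a
      small standalone C sketch of such an item: something that locates the cache inode
      for a block group and records how many extent entries and bitmaps were written,
      plus the generation check that lets a newer kernel discard a stale cache.  The
      field names, widths and values are illustrative assumptions, not the exact layout
      that was added.

        #include <stdint.h>
        #include <stdio.h>

        /* Sketch of the per-block-group cache pointer this commit describes.
         * Names and widths are illustrative, not the exact on-disk format. */
        struct free_space_header_sketch {
                uint64_t cache_inode_objectid;  /* inode holding the serialized cache  */
                uint64_t generation;            /* if this lags the fs generation, the
                                                   cache is stale and is discarded     */
                uint64_t num_entries;           /* extent entries written to the inode */
                uint64_t num_bitmaps;           /* bitmap entries written to the inode */
        };

        int main(void)
        {
                struct free_space_header_sketch h = {
                        .cache_inode_objectid = 256,   /* hypothetical inode number */
                        .generation  = 42,
                        .num_entries = 128,
                        .num_bitmaps = 2,
                };

                /* An old kernel that modified the fs bumps the generation without
                 * updating the cache; a new kernel notices the mismatch and throws
                 * the cache away instead of trusting it. */
                uint64_t fs_generation = 43;

                if (h.generation != fs_generation)
                        printf("cache stale (gen %llu vs %llu), regenerating\n",
                               (unsigned long long)h.generation,
                               (unsigned long long)fs_generation);
                return 0;
        }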
  2. 30 March 2010 (1 commit)
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking
      implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Committed by Tejun Heo
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        include gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surrounding.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, and for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given that I had only a couple of failures from the tests in step
      7, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  3. 09 March 2010 (1 commit)
  4. 12 November 2009 (1 commit)
    • Btrfs: fix how we set max_size for free space clusters · 01dea1ef
      Committed by Josef Bacik
      This patch fixes a problem where max_size can be set to 0 even though we
      filled the cluster properly.  We set max_size to 0 if we restart the cluster
      window, but if the new start entry is big enough to be our new cluster then we
      could return with a max_size set to 0, which will mean the next time we try to
      allocate from this cluster it will fail.  So set max_extent to the entry's
      size.  Tested this on my box and now we actually allocate from the cluster
      after we fill it.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
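      The window logic is easier to see in a standalone model.  The toy cluster builder
      below accumulates free extents until a target size is reached; the bug described
      above corresponds to leaving max_size at 0 when the window is restarted, and the
      fix is to seed it with the new start entry's size.  All names and sizes here are
      illustrative, not the kernel code.

        #include <stdint.h>
        #include <stdio.h>

        struct free_extent { uint64_t offset, bytes; };

        /* Toy cluster builder: accumulate consecutive free extents until we have
         * 'target' bytes.  'max_size' tracks the largest extent in the current
         * window; the fix modeled here is to reset it to the new start entry's
         * size (not to 0) whenever the window is restarted. */
        static int build_cluster(const struct free_extent *e, int n, uint64_t target,
                                 uint64_t max_gap, uint64_t *max_size)
        {
                uint64_t total = 0;

                *max_size = 0;
                for (int i = 0; i < n; i++) {
                        int restart = (i > 0 &&
                            e[i].offset - (e[i - 1].offset + e[i - 1].bytes) > max_gap);

                        if (restart) {
                                total = e[i].bytes;
                                *max_size = e[i].bytes;   /* the fix: not 0 */
                        } else {
                                total += e[i].bytes;
                                if (e[i].bytes > *max_size)
                                        *max_size = e[i].bytes;
                        }
                        if (total >= target)
                                return 0;                 /* cluster filled */
                }
                return -1;                                /* not enough space */
        }

        int main(void)
        {
                struct free_extent e[] = { {0, 4096}, {8192, 4096}, {1048576, 131072} };
                uint64_t max_size;

                if (build_cluster(e, 3, 65536, 65536, &max_size) == 0)
                        printf("cluster filled, max_size=%llu\n",
                               (unsigned long long)max_size);
                return 0;
        }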
  5. 22 September 2009 (2 commits)
    • Btrfs: fix extent entry threshold calculation · 25891f79
      Committed by Josef Bacik
      There is a slight problem with the extent entry threshold calculation for the
      free space cache.  We only adjust the threshold down as we add bitmaps, but
      never adjust it back up as we free bitmaps.  This means we could
      fragment the free space so badly that we end up using all bitmaps to describe
      the free space, use all the free space which would result in the bitmaps being
      freed, but then go to add free space again as we delete things and immediately
      add bitmaps since the extent threshold would still be 0.  Now as we free
      bitmaps the extent threshold will be ratcheted up to allow more extent entries
      to be added.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
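      A rough standalone model of the recalculation this commit adds: the extent-entry
      threshold is derived from a fixed per-block-group RAM budget, so it has to be
      recomputed both when a bitmap is added (threshold drops) and when one is freed
      (threshold rises again).  The 16k budget comes from the related commit messages;
      the per-entry sizes are assumptions.

        #include <stdio.h>

        /* Toy threshold model: a block group gets ~16k of RAM for cache entries.
         * Each bitmap consumes a fixed slice of that budget, so the number of
         * plain extent entries we allow shrinks as bitmaps are added and must be
         * ratcheted back up as bitmaps are freed.  Sizes are illustrative. */
        #define CACHE_RAM_BUDGET   16384
        #define EXTENT_ENTRY_SIZE  32      /* assumed size of one extent entry     */
        #define BITMAP_ENTRY_COST  4096    /* assumed RAM cost of one bitmap entry */

        static int recalc_extent_threshold(int num_bitmaps)
        {
                long budget = CACHE_RAM_BUDGET - (long)num_bitmaps * BITMAP_ENTRY_COST;

                if (budget < 0)
                        budget = 0;
                return budget / EXTENT_ENTRY_SIZE;
        }

        int main(void)
        {
                /* Adding bitmaps pushes the threshold down ... */
                for (int b = 0; b <= 4; b++)
                        printf("bitmaps=%d -> extent threshold=%d\n",
                               b, recalc_extent_threshold(b));

                /* ... and the fix is simply to run the same calculation again when
                 * bitmaps are freed, so the threshold does not stay pinned at 0. */
                printf("after freeing back to 1 bitmap -> %d\n",
                       recalc_extent_threshold(1));
                return 0;
        }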
    • Btrfs: fix bitmap size tracking · f019f426
      Committed by Josef Bacik
      When we first go to add free space, we allocate a new info and set the offset
      and bytes to the space we are adding.  This is fine, except we actually set the
      size of a bitmap as we set the bits in it, so if we add space to a bitmap, we'd
      end up counting the same space twice.  This isn't a huge deal; it just makes
      the allocator behave weirdly since it will think that a bitmap entry has more
      space than it ends up actually having.  I used a BUG_ON() to catch when this
      problem happened, and with this patch I no longer get the BUG_ON().
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
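      The accounting rule the commit settles on can be modeled in a few lines: a bitmap
      entry's byte count grows only as previously clear bits are set, so adding space
      that is already represented cannot be counted twice.  The names and the 4k sector
      granularity below are assumptions for illustration.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define SECTOR  4096u
        #define NBITS   256              /* toy bitmap covering NBITS * SECTOR bytes */

        struct bitmap_entry {
                uint64_t      start;     /* byte offset the bitmap covers            */
                uint64_t      bytes;     /* accounted free bytes, grown bit by bit   */
                unsigned char bits[NBITS / 8];
        };

        /* Set the bits for [offset, offset+len) and account only bits that were
         * previously clear, so overlapping additions are not counted twice. */
        static void bitmap_add_free(struct bitmap_entry *b, uint64_t offset, uint64_t len)
        {
                for (uint64_t o = offset; o < offset + len; o += SECTOR) {
                        unsigned bit = (o - b->start) / SECTOR;

                        if (!(b->bits[bit / 8] & (1u << (bit % 8)))) {
                                b->bits[bit / 8] |= 1u << (bit % 8);
                                b->bytes += SECTOR;      /* count each sector once */
                        }
                }
        }

        int main(void)
        {
                struct bitmap_entry b;

                memset(&b, 0, sizeof(b));
                bitmap_add_free(&b, 0, 8 * SECTOR);
                bitmap_add_free(&b, 4 * SECTOR, 8 * SECTOR);   /* overlaps the first add */
                printf("accounted bytes: %llu (expect %u)\n",
                       (unsigned long long)b.bytes, 12 * SECTOR);
                return 0;
        }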
  6. 31 July 2009 (1 commit)
    • Btrfs: fix btrfs_remove_from_free_space corner case · 6606bb97
      Committed by Josef Bacik
      Yan Zheng hit a problem where we tried to remove some free space but failed
      because we couldn't find the free space entry.  This is because the free space
      was held within a bitmap that had a starting offset well before the actual
      offset of the free space, and there were free space extents that were in the
      same range as that offset, so tree_search_offset returned NULL because we
      couldn't find a free space extent that had that offset.  This is fixed by
      making sure that if we fail to find the entry, we search again with
      bitmap_only set to 1 and do an offset_to_bitmap so we can get the appropriate
      bitmap.  A similar problem happens in btrfs_alloc_from_bitmap for the
      clustering code, but that is not as bad since we will just go and redo our
      cluster allocation.
      
      Also this adds some debugging checks to make sure that the free space we are
      trying to remove from the bitmap is in fact there.  This can probably go away
      after a while, but since this code is only used by the tree-logging stuff it
      would be nice to run with it for a while to make sure there are no problems.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
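      The corner case is about which entry owns an offset.  A minimal model of the
      lookup the fix adds: if no plain extent entry is found at the offset, round the
      offset down to the start of the bitmap window that covers it and look that bitmap
      up instead.  The window size used below is an assumption, not the kernel's value.

        #include <stdint.h>
        #include <stdio.h>

        #define SECTOR           4096ull
        #define BITS_PER_BITMAP  (8 * 4096ull)                /* assumed: a page of bits */
        #define BYTES_PER_BITMAP (BITS_PER_BITMAP * SECTOR)

        /* A bitmap entry is keyed by the start of the window it covers, which can be
         * well before the offset we are trying to remove.  If a direct lookup by
         * offset fails, compute the covering bitmap's start and search again. */
        static uint64_t offset_to_bitmap(uint64_t block_group_start, uint64_t offset)
        {
                uint64_t rel = offset - block_group_start;

                return block_group_start + (rel / BYTES_PER_BITMAP) * BYTES_PER_BITMAP;
        }

        int main(void)
        {
                uint64_t bg_start = 1024ull * 1024 * 1024;    /* hypothetical block group */
                uint64_t offset   = bg_start + 300ull * 1024 * 1024;

                printf("offset %llu is covered by the bitmap starting at %llu\n",
                       (unsigned long long)offset,
                       (unsigned long long)offset_to_bitmap(bg_start, offset));
                return 0;
        }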
  7. 24 July 2009 (2 commits)
    • Btrfs: async block group caching · 817d52f8
      Committed by Josef Bacik
      This patch moves the caching of the block group off to a kthread in order to
      allow people to allocate sooner.  Instead of blocking up behind the caching
      mutex, we instead kick off the caching kthread and then attempt to make an
      allocation.  If we cannot, we wait on the block group's caching waitqueue, which
      the caching kthread wakes every time it finds 2 meg worth of space, and then
      again when it's finished caching.  This is how I tested the speedup from this:
      
      mkfs the disk
      mount the disk
      fill the disk up with fs_mark
      unmount the disk
      mount the disk
      time touch /mnt/foo
      
      Without my changes this took 11 seconds on my box, with these changes it now
      takes 1 second.
      
      Another change that's been put in place is that we lock the super mirrors in the
      pinned extent map in order to keep us from adding that stuff as free space when
      caching the block group.  This doesn't really change anything else as far as the
      pinned extent map is concerned, since for actual pinned extents we use
      EXTENT_DIRTY, but it does mean that when we unmount we have to go in and unlock
      those extents to keep from leaking memory.
      
      I've also added a check where when we are reading block groups from disk, if the
      amount of space used == the size of the block group, we go ahead and mark the
      block group as cached.  This drastically reduces the amount of time it takes to
      cache the block groups.  Using the same test as above, except doing a dd to a
      file and then unmounting, it used to take 33 seconds to umount, now it takes 3
      seconds.
      
      This version uses the commit_root in the caching kthread, and then keeps track
      of how many async caching threads are running at any given time so if one of the
      async threads is still running as we cross transactions we can wait until it's
      finished before handling the pinned extents.  Thank you,
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
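      The wait/wake pattern described above can be modeled in user space with pthreads:
      a caching thread publishes progress and wakes waiters every 2 meg of free space it
      finds, and again when it finishes, while an allocator waits on the condition
      instead of blocking behind a caching mutex.  This is purely an illustration of the
      pattern, not the kernel implementation.

        #include <pthread.h>
        #include <stdio.h>

        #define WAKE_STEP   (2ull * 1024 * 1024)
        #define TOTAL_FREE  (16ull * 1024 * 1024)

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  progress = PTHREAD_COND_INITIALIZER;
        static unsigned long long cached_free;          /* free bytes found so far */
        static int caching_done;

        static void *caching_thread(void *arg)
        {
                (void)arg;
                for (unsigned long long found = 0; found < TOTAL_FREE; found += WAKE_STEP) {
                        pthread_mutex_lock(&lock);
                        cached_free = found + WAKE_STEP;
                        pthread_cond_broadcast(&progress);      /* wake every 2MB */
                        pthread_mutex_unlock(&lock);
                }
                pthread_mutex_lock(&lock);
                caching_done = 1;
                pthread_cond_broadcast(&progress);              /* and once when finished */
                pthread_mutex_unlock(&lock);
                return NULL;
        }

        /* Allocator side: wait until enough space has been cached or caching ends. */
        static int wait_for_space(unsigned long long want)
        {
                int ok;

                pthread_mutex_lock(&lock);
                while (cached_free < want && !caching_done)
                        pthread_cond_wait(&progress, &lock);
                ok = cached_free >= want;
                pthread_mutex_unlock(&lock);
                return ok;
        }

        int main(void)
        {
                pthread_t t;

                pthread_create(&t, NULL, caching_thread, NULL);
                printf("allocation %s\n",
                       wait_for_space(6ull * 1024 * 1024) ? "can proceed" : "must fall back");
                pthread_join(t, NULL);
                return 0;
        }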
    • Btrfs: use hybrid extents+bitmap rb tree for free space · 96303081
      Committed by Josef Bacik
      Currently btrfs has a problem where it can use a ridiculous amount of RAM simply
      tracking free space.  As free space gets fragmented, we end up with thousands of
      entries on an rb-tree per block group, which usually spans 1 gig of area.  Since
      we currently don't ever flush free space cache back to disk this gets to be a
      bit unwieldy on large fs's with lots of fragmentation.
      
      This patch solves this problem by using PAGE_SIZE bitmaps for parts of the free
      space cache.  Initially we calculate a threshold of extent entries we can
      handle, which is however many extent entries we can cram into 16k of RAM.  The
      maximum amount of RAM that should ever be used to track 1 gigabyte of disk space
      will be 32k of RAM, which scales much better than we did before.
      
      Once we pass the extent threshold, we start adding bitmaps and using those
      instead for tracking the free space.  This patch also makes it so that any free
      space that's less than 4 * sectorsize we go ahead and put into a bitmap.  This is
      nice since we try and allocate out of the front of a block group, so if the
      front of a block group is heavily fragmented and then has a huge chunk of free
      space at the end, we go ahead and add the fragmented areas to bitmaps and use a
      normal extent entry to track the big chunk at the back of the block group.
      
      I've also taken the opportunity to revamp how we search for free space.
      Previously we indexed free space via an offset indexed rb tree and a bytes
      indexed rb tree.  I've dropped the bytes indexed rb tree and use only the offset
      indexed rb tree.  This cuts the number of tree operations we were doing
      previously down by half, and gives us a little bit of a better allocation
      pattern since we will always start from a specific offset and search forward
      from there, instead of searching for the size we need and trying to get it as
      close as possible to the offset we want.
      
      I've given this a healthy amount of testing pre-new format stuff, as well as
      post-new format stuff.  I've booted up my fedora box which is installed on btrfs
      with this patch and ran with it for a few days without issues.  I've not seen
      any performance regressions in any of my tests.
      
      Since the last patch Yan Zheng fixed a problem where we could have overlapping
      entries, so updating their offset inline would cause problems.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
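      A small standalone model of the two decisions described above: the initial
      threshold (however many extent entries fit in 16k of RAM) and the rule that
      chunks smaller than 4 * sectorsize go straight into a bitmap.  The per-entry
      footprint is an assumed value; everything else follows the commit message.

        #include <stdint.h>
        #include <stdio.h>

        #define RAM_BUDGET        16384
        #define EXTENT_ENTRY_SIZE 32            /* assumed per-entry footprint */
        #define SECTORSIZE        4096u

        /* The extent-entry threshold is however many entries fit in 16k of RAM. */
        static int extents_thresh(void)
        {
                return RAM_BUDGET / EXTENT_ENTRY_SIZE;
        }

        /* Once past the threshold, and for any chunk smaller than 4 * sectorsize,
         * free space goes into a bitmap instead of a plain extent entry. */
        static const char *classify(uint64_t bytes, int cur_extent_entries)
        {
                if (bytes < 4ull * SECTORSIZE)
                        return "bitmap (small chunk)";
                if (cur_extent_entries >= extents_thresh())
                        return "bitmap (over threshold)";
                return "extent entry";
        }

        int main(void)
        {
                printf("threshold: %d extent entries\n", extents_thresh());
                printf("8k chunk, 10 entries  -> %s\n", classify(8192, 10));
                printf("1M chunk, 10 entries  -> %s\n", classify(1 << 20, 10));
                printf("1M chunk, 600 entries -> %s\n", classify(1 << 20, 600));
                return 0;
        }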
  8. 10 June 2009 (3 commits)
    • Btrfs: add mount -o ssd_spread to spread allocations out · 451d7585
      Committed by Chris Mason
      Some SSDs perform best when reusing block numbers often, while
      others perform much better when clustering strictly allocates
      big chunks of unused space.
      
      The default mount -o ssd will find rough groupings of blocks
      where there are a bunch of free blocks that might have some
      allocated blocks mixed in.
      
      mount -o ssd_spread will make sure there are no allocated blocks
      mixed in.  It should perform better on lower end SSDs.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: avoid allocation clusters that are too spread out · c6044801
      Committed by Chris Mason
      In SSD mode for data, and all the time for metadata, the allocator
      will try to find a cluster of nearby blocks for allocations.  This
      commit adds extra checks to make sure that each free block in the
      cluster is close to the last one.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: reduce mount -o ssd CPU usage · 2c943de6
      Committed by Chris Mason
      The block allocator in SSD mode will try to find groups of free blocks
      that are close together.  This commit makes it loop less on a given
      group size before bumping it.
      
      The end result is that we are less likely to fill small holes in the
      available free space, but we don't waste as much CPU building the
      large cluster used by ssd mode.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  9. 27 April 2009 (1 commit)
  10. 03 April 2009 (4 commits)
    • Btrfs: BUG to BUG_ON changes · c293498b
      Committed by Stoyan Gaydarov
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: rework allocation clustering · fa9c0d79
      Committed by Chris Mason
      Because btrfs is copy-on-write, we end up picking new locations for
      blocks very often.  This makes it fairly difficult to maintain perfect
      read patterns over time, but we can at least do some optimizations
      for writes.
      
      This is done today by remembering the last place we allocated and
      trying to find a free space hole big enough to hold more than just one
      allocation.  The end result is that we tend to write sequentially to
      the drive.
      
      This happens all the time for metadata and it happens for data
      when mounted -o ssd.  But, the way we record it is fairly racy
      and it tends to fragment the free space over time because we are trying
      to allocate fairly large areas at once.
      
      This commit gets rid of the races by adding a free space cluster object
      with dedicated locking to make sure that only one process at a time
      is out replacing the cluster.
      
      The free space fragmentation is somewhat solved by allowing a cluster
      to be comprised of smaller free space extents.  This part definitely
      adds some CPU time to the cluster allocations, but it allows the allocator
      to consume the small holes left behind by cow.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: kill the block group alloc mutex · 6226cb0a
      Committed by Josef Bacik
      This patch removes the block group alloc mutex used to protect the free space
      tree for allocations and replaces it with a spin lock which is used only to
      protect the free space rb tree.  This means we only take the lock when we are
      directly manipulating the tree, which makes us a touch faster with
      multi-threaded workloads.
      
      This patch also gets rid of btrfs_find_free_space and replaces it with
      btrfs_find_space_for_alloc, which takes the number of bytes you want to
      allocate, and empty_size, which is used to indicate how much free space should
      be at the end of the allocation.
      
      It will return an offset for the allocator to use.  If we don't end up using it
      we _must_ call btrfs_add_free_space to put it back.  This is the tradeoff to
      kill the alloc_mutex, since we need to make sure nobody else comes along and
      takes our space.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
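      The contract described above (take the offset, and give it back with
      btrfs_add_free_space if you end up not using it) can be sketched without the
      kernel.  The toy free-space pool below mirrors the shape of that API; the function
      names echo the commit message but the implementation is purely illustrative.

        #include <stdint.h>
        #include <stdio.h>

        /* Toy model of the allocation contract: find space returns an offset carved
         * out of the free pool; if the caller ends up not using it, the space must
         * be handed back, because nothing else protects it once the per-allocation
         * mutex is gone.  Purely illustrative. */
        struct free_pool { uint64_t start, bytes; };

        static uint64_t find_space_for_alloc(struct free_pool *p, uint64_t bytes,
                                             uint64_t empty_size)
        {
                uint64_t need = bytes + empty_size;
                uint64_t offset;

                if (p->bytes < need)
                        return UINT64_MAX;               /* no space */
                offset = p->start;
                p->start += need;                        /* space is now "ours" */
                p->bytes -= need;
                return offset;
        }

        static void add_free_space(struct free_pool *p, uint64_t offset, uint64_t bytes)
        {
                /* Toy give-back: only handles returning the chunk we just carved
                 * off the front, which is all this model needs. */
                if (offset + bytes == p->start) {
                        p->start = offset;
                        p->bytes += bytes;
                }
        }

        int main(void)
        {
                struct free_pool pool = { .start = 0, .bytes = 1 << 20 };
                uint64_t off = find_space_for_alloc(&pool, 65536, 4096);

                if (off != UINT64_MAX) {
                        /* pretend the allocation was abandoned: we must give it back */
                        add_free_space(&pool, off, 65536 + 4096);
                }
                printf("pool has %llu bytes free again\n",
                       (unsigned long long)pool.bytes);
                return 0;
        }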
    • Btrfs: free space cache cleanups · 70cb0743
      Committed by Josef Bacik
      This patch cleans up the free space cache code a bit.  It better documents the
      idiosyncrasies of tree_search_offset and makes the code make a bit more sense.
      I took out the info allocation at the start of __btrfs_add_free_space and put it
      where it makes more sense.  This was left over cruft from when alloc_mutex
      existed.  I also took out all of the re-searches we did to make sure we
      inserted properly.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
  11. 06 January 2009 (1 commit)
  12. 09 December 2008 (1 commit)
    • Btrfs: superblock duplication · a512bbf8
      Committed by Yan Zheng
      This patch implements superblock duplication. Superblocks
      are stored at offsets 16K, 64M and 256G on every device.
      Space used by superblocks is preserved by the allocator,
      which uses a reverse mapping function to find the logical
      addresses that correspond to superblocks. Thank you,
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
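      The mirror locations listed above follow a simple geometric progression that a
      reverse mapping can invert.  A minimal sketch, assuming each copy sits at 16K
      shifted left by 12 bits per mirror (16K, 64M, 256G, matching the offsets in the
      message), together with the kind of overlap check an allocator could use to keep
      those ranges reserved:

        #include <stdint.h>
        #include <stdio.h>

        /* Sketch of the mirror placement: copy i of the superblock lives at
         * 16K << (12 * i), i.e. 16K, 64M, 256G.  The shift value is an assumption
         * chosen to reproduce the offsets listed in the commit message. */
        #define SB_BASE_OFFSET   16384ull
        #define SB_MIRROR_SHIFT  12
        #define SB_COPIES        3

        static uint64_t sb_offset(int mirror)
        {
                return SB_BASE_OFFSET << (SB_MIRROR_SHIFT * mirror);
        }

        /* Reverse mapping: does this byte range on a device overlap a superblock?
         * The allocator uses the same idea to keep those ranges reserved. */
        static int overlaps_superblock(uint64_t start, uint64_t len, uint64_t sb_size)
        {
                for (int i = 0; i < SB_COPIES; i++) {
                        uint64_t sb = sb_offset(i);

                        if (start < sb + sb_size && sb < start + len)
                                return 1;
                }
                return 0;
        }

        int main(void)
        {
                for (int i = 0; i < SB_COPIES; i++)
                        printf("superblock copy %d at %llu\n", i,
                               (unsigned long long)sb_offset(i));
                printf("range [60M, 70M) overlaps a superblock: %d\n",
                       overlaps_superblock(60ull << 20, 10ull << 20, 4096));
                return 0;
        }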
  13. 02 December 2008 (1 commit)
  14. 30 October 2008 (1 commit)
    • Btrfs: nuke fs wide allocation mutex V2 · 25179201
      Committed by Josef Bacik
      This patch removes the giant fs_info->alloc_mutex and replaces it with a bunch
      of little locks.
      
      There is now a pinned_mutex, which is used when messing with the pinned_extents
      extent io tree, and the extent_ins_mutex which is used with the pending_del and
      extent_ins extent io trees.
      
      The locking for the extent tree stuff was inspired by a patch that Yan Zheng
      wrote to fix a race condition, I cleaned it up some and changed the locking
      around a little bit, but the idea remains the same.  Basically instead of
      holding the extent_ins_mutex throughout the processing of an extent on the
      extent_ins or pending_del trees, we just hold it while we're searching and when
      we clear the bits on those trees, and lock the extent for the duration of the
      operations on the extent.
      
      Also to keep from getting hung up waiting to lock an extent, I've added a
      try_lock_extent so if we cannot lock the extent, move on to the next one in the
      tree and we'll come back to that one.  I have tested this heavily and it does
      not appear to break anything.  This has to be applied on top of my
      find_free_extent redo patch.
      
      I tested this patch on top of Yan's space rebalancing code and it worked fine.
      The only thing that has changed since the last version is I pulled out all my
      debugging stuff, apparently I forgot to run guilt refresh before I sent the
      last patch out.  Thank you,
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
  15. 10 October 2008 (1 commit)
    • Btrfs: make tree_search_offset more flexible in its searching · 37d3cddd
      Committed by Josef Bacik
      Sometimes we end up freeing a reserved extent because we don't need it; however,
      this means that it's possible for transaction->last_alloc to point to the middle
      of a free area.
      
      When we search for free space in find_free_space we do a tree_search_offset
      with contains set to 0, because we want it to find the next best free area if
      we do not have an offset starting on the given offset.
      
      Unfortunately that currently means that if the offset we were given as a hint
      points to the middle of a free area, we won't find anything.  This is especially
      bad if we happened to last allocate from the big huge chunk of a newly formed
      block group, since we won't find anything and have to go back and search the
      long way around.
      
      This fixes this problem by making it so that we return the free space area
      regardless of the contains variable.  This made cache misses happen a lot
      less and sped things up considerably.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
  16. 26 September 2008 (1 commit)
    • Btrfs: Fix allocation completions in tree log replay · 9b49c9b9
      Committed by Chris Mason
      After a crash, the tree log code uses btrfs_alloc_logged_extent to
      record allocations of data extents that it finds in the log tree.  These
      come in basically random order, which does not fit how
      btrfs_remove_free_space() expects to be called.
      
      btrfs_remove_free_space was changed to support recording an extent
      allocation in the middle of a region of free space.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
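      Recording an allocation in the middle of a region of free space means the region
      may have to be split.  A standalone sketch of that operation, assuming a simple
      (offset, bytes) free-extent record; this models the idea, not the actual
      btrfs_remove_free_space code:

        #include <stdint.h>
        #include <stdio.h>

        struct free_extent { uint64_t offset, bytes; };

        /* Remove [start, start+len) from a free extent that fully contains it.
         * Depending on where the allocation lands, the extent shrinks from the
         * front, shrinks from the back, or splits into two pieces (*tail). */
        static int remove_from_free_extent(struct free_extent *e, uint64_t start,
                                           uint64_t len, struct free_extent *tail)
        {
                uint64_t end = e->offset + e->bytes;

                if (start < e->offset || start + len > end)
                        return -1;                       /* not contained here */

                tail->offset = 0;
                tail->bytes = 0;
                if (start == e->offset) {                /* carve off the front */
                        e->offset += len;
                        e->bytes  -= len;
                } else if (start + len == end) {         /* carve off the back */
                        e->bytes -= len;
                } else {                                 /* middle: split in two */
                        tail->offset = start + len;
                        tail->bytes  = end - (start + len);
                        e->bytes     = start - e->offset;
                }
                return 0;
        }

        int main(void)
        {
                struct free_extent e = { .offset = 0, .bytes = 1 << 20 }, tail;

                if (remove_from_free_extent(&e, 65536, 4096, &tail) == 0)
                        printf("left [%llu,+%llu), right [%llu,+%llu)\n",
                               (unsigned long long)e.offset, (unsigned long long)e.bytes,
                               (unsigned long long)tail.offset, (unsigned long long)tail.bytes);
                return 0;
        }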
  17. 25 September 2008 (1 commit)
    • Btrfs: free space accounting redo · 0f9dd46c
      Committed by Josef Bacik
      1) replace the per fs_info extent_io_tree that tracked free space with two
      rb-trees per block group to track free space areas via offset and size.  The
      reason to do this is because most allocations come with a hint byte where to
      start, so we can usually find a chunk of free space at that hint byte to satisfy
      the allocation and get good space packing.  If we cannot find free space at or
      after the given offset we fall back on looking for a chunk of the given size as
      close to that given offset as possible.  When we fall back on the size search we
      also try to find a slot as close to the size we want as possible, to avoid
      breaking small chunks off of huge areas if possible.
      
      2) remove the extent_io_tree that tracked the block group cache from fs_info and
      replaced it with an rb-tree that tracks block group cache via offset.  Also
      added a per space_info list that tracks the block group cache for the particular
      space so we can lookup related block groups easily.
      
      3) cleaned up the allocation code to make it a little easier to read and a
      little less complicated.  Basically there are 3 steps: first, look from our
      provided hint.  If we couldn't find anything from that given hint, start back at our
      original search start and look for space from there.  If that fails try to
      allocate space if we can and start looking again.  If not we're screwed and need
      to start over again.
      
      4) small fixes.  There were some issues in volumes.c where we wouldn't allocate
      the rest of the disk.  Fixed cow_file_range to actually pass the alloc_hint,
      which has helped a good bit in making the fs_mark test I run have semi-normal
      results as we run out of space.  Generally with data allocations we don't track
      where we last allocated from, so every time we did a data allocation we'd search
      through every block group that we have looking for free space.  Now, searching a
      block group with no free space isn't terribly time consuming, but it was causing a
      slight degradation as we got more data block groups.  The alloc_hint has fixed
      this slight degradation and made things semi-normal.
      
      There is still one nagging problem I'm working on where we will get ENOSPC when
      there is definitely plenty of space.  This only happens with metadata
      allocations, and only when we are almost full.  So you generally hit the 85%
      mark first, but sometimes you'll hit the BUG before you hit the 85% wall.  I'm
      still tracking it down, but until then this seems to be pretty stable and make a
      significant performance gain.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
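      The dual-index scheme in point 1 can be modeled with two searches over the same
      free-space records: try the offset view at the hint first, then fall back to a
      best-fit search by size.  A compact sketch with assumed sizes; the real code uses
      rb-trees rather than the linear scans shown here:

        #include <stdint.h>
        #include <stdio.h>

        struct free_space { uint64_t offset, bytes; };

        /* First strategy: find a free chunk at or after the hint that is big enough.
         * The records are kept sorted by offset, like the offset-indexed rb-tree. */
        static const struct free_space *search_by_offset(const struct free_space *v,
                                                         int n, uint64_t hint,
                                                         uint64_t bytes)
        {
                for (int i = 0; i < n; i++)
                        if (v[i].offset >= hint && v[i].bytes >= bytes)
                                return &v[i];
                return NULL;
        }

        /* Fallback: best fit by size, preferring the chunk closest to the request so
         * huge areas are not broken up for small allocations. */
        static const struct free_space *search_by_size(const struct free_space *v,
                                                       int n, uint64_t bytes)
        {
                const struct free_space *best = NULL;

                for (int i = 0; i < n; i++)
                        if (v[i].bytes >= bytes && (!best || v[i].bytes < best->bytes))
                                best = &v[i];
                return best;
        }

        int main(void)
        {
                struct free_space v[] = { {4096, 8192}, {65536, 1 << 20}, {4 << 20, 65536} };
                const struct free_space *hit;

                hit = search_by_offset(v, 3, 2 << 20, 32768);   /* try the hint */
                if (!hit)
                        hit = search_by_size(v, 3, 32768);      /* fall back    */
                if (hit)
                        printf("allocating at %llu from a %llu byte chunk\n",
                               (unsigned long long)hit->offset,
                               (unsigned long long)hit->bytes);
                return 0;
        }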