1. 07 Jan 2016, 1 commit
  2. 25 Nov 2015, 1 commit
  3. 22 Oct 2015, 1 commit
      Btrfs: add fragment=* debug mount option · d0bd4560
      Authored by Josef Bacik
      In tracking down these weird bitmap problems it was helpful to artificially
      create an extremely fragmented file system.  These mount options let us
      fragment data, metadata, or both.  With these options I could reproduce all
      sorts of weird latencies and hangs that occur under extreme fragmentation and
      get them fixed.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      d0bd4560
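      As a rough illustration of the option described in the commit above, here is a
      minimal userspace sketch of how a fragment=data|metadata|all value could be
      parsed into two debug flags. The flag names and the parsing are assumptions
      made for the sketch, not the kernel's mount-option code.

      /*
       * Minimal userspace sketch, not the btrfs implementation: parse a
       * fragment= mount option value into two debug flags.
       */
      #include <stdio.h>
      #include <string.h>

      #define FRAGMENT_DATA     (1u << 0)
      #define FRAGMENT_METADATA (1u << 1)

      static unsigned int parse_fragment_opt(const char *val)
      {
              if (strcmp(val, "data") == 0)
                      return FRAGMENT_DATA;
              if (strcmp(val, "metadata") == 0)
                      return FRAGMENT_METADATA;
              if (strcmp(val, "all") == 0)
                      return FRAGMENT_DATA | FRAGMENT_METADATA;
              return 0;       /* unknown value: real option parsing would reject it */
      }

      int main(void)
      {
              unsigned int flags = parse_fragment_opt("all");

              printf("fragment data=%d metadata=%d\n",
                     !!(flags & FRAGMENT_DATA), !!(flags & FRAGMENT_METADATA));
              return 0;
      }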
  4. 23 Sep 2014, 1 commit
      Btrfs: remove empty block groups automatically · 47ab2a6c
      Authored by Josef Bacik
      One problem that has plagued us is that a user will use up all of his space with
      data, remove a bunch of that data, and then try to create a bunch of small files
      and run out of space.  This happens because all the chunks were allocated for
      data since the metadata requirements were so low.  But now there's a bunch of
      empty data block groups and not enough metadata space to do anything.  This
      patch solves this problem by automatically deleting empty block groups.  If we
      notice the used count drop to 0 while deleting, or notice on mount that a block
      group has a used count of 0, then we will queue it to be deleted.
      
      When the cleaner thread runs we will double check to make sure the block group
      is still empty and then we will delete it.  This patch has the side effect of no
      longer having a bunch of BUG_ON()'s in the chunk delete code, which will be
      helpful for both this and relocate.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      47ab2a6c
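      The queue-then-recheck flow described in the commit above can be illustrated
      with a small userspace sketch. The structure and function names below are
      assumptions made for the illustration, not the btrfs code.

      /*
       * Userspace sketch of the "queue when empty, re-check before deleting"
       * pattern; illustrative only, not the btrfs structures.
       */
      #include <stdio.h>

      struct block_group {
              unsigned long long used;          /* bytes currently allocated */
              int queued;
              struct block_group *next_unused;  /* simple singly linked queue */
      };

      static struct block_group *unused_bgs;    /* queue consumed by the cleaner */

      /* Called when space is freed: queue the group once its used count hits 0. */
      static void maybe_queue_unused(struct block_group *bg)
      {
              if (bg->used == 0 && !bg->queued) {
                      bg->queued = 1;
                      bg->next_unused = unused_bgs;
                      unused_bgs = bg;
              }
      }

      /* Cleaner pass: re-check emptiness, since allocations may have landed since. */
      static void delete_unused_bgs(void)
      {
              while (unused_bgs) {
                      struct block_group *bg = unused_bgs;

                      unused_bgs = bg->next_unused;
                      bg->queued = 0;
                      if (bg->used == 0)
                              printf("deleting empty block group %p\n", (void *)bg);
                      else
                              printf("skipping %p, reused before the cleaner ran\n",
                                     (void *)bg);
              }
      }

      int main(void)
      {
              struct block_group a = { .used = 0 }, b = { .used = 0 };

              maybe_queue_unused(&a);
              maybe_queue_unused(&b);
              b.used = 4096;          /* b gets reused before the cleaner runs */
              delete_unused_bgs();
              return 0;
      }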
  5. 18 Sep 2014, 1 commit
      Btrfs: improve free space cache management and space allocation · 20005523
      Authored by Filipe Manana
      While under random IO, a block group's free space cache eventually reaches
      a state where it has a mix of extent entries and bitmap entries representing
      free space regions.
      
      As later free space regions are returned to the cache, some of them are merged
      with existing extent entries if they are contiguous with them. But others are
      not merged: despite adjacent free space regions existing in the cache, the
      merging doesn't happen when the existing free space regions are represented
      in bitmap entries. And even when new free space regions are merged with
      existing extent entries (enlarging the free space range they represent), we
      can end up with an enlarged region that is contiguous with some other region
      represented in a bitmap entry.
      
      Both clustered and non-clustered space allocation work by iterating over our
      extent and bitmap entries and skipping any that represents a region smaller
      than the allocation request (and giving preference to extent entries before
      bitmap entries). When a contiguous free space region is represented by 2 (or
      more) entries (a mix of extent and bitmap entries), we end up not satisfying
      an allocation request whose size is larger than the size of any of the entries
      but no larger than the sum of their sizes. This makes the caller assume we're
      under an ENOSPC condition, or forces it to allocate multiple smaller space
      regions (as we do for file data writes), which adds extra overhead and more
      chances of causing fragmentation, since the smaller regions end up spread
      apart from each other (more likely when under concurrency).
      
      For example, if we have the following in the cache:
      
      * extent entry representing free space range: [128Mb - 256Kb, 128Mb[
      
      * bitmap entry covering the range [128Mb, 256Mb[, but only with the bits
        representing the range [128Mb, 128Mb + 768Kb[ set - that is, only that
        space in this 128Mb area is marked as free
      
      An allocation request for 1Mb, starting at an offset not greater than
      128Mb - 256Kb, would fail before this change, despite the existence of such a
      contiguous free space area in the cache. The caller could only allocate up to
      768Kb of space at once and later another 256Kb (or vice-versa). In between each
      smaller allocation request, another task working on a different file/inode might
      come in and take that space, preventing the former task from getting a
      contiguous 1Mb region of free space.
      
      Therefore this change implements the ability to move free space from bitmap
      entries into existing and new free space regions represented with extent
      entries. This is done when a space region is added to the cache.
      
      A test was added to the sanity tests that explains in detail the issue too.
      
      Some performance test results with compilebench on a 4-core machine with
      32GB of RAM and an HDD follow.
      
      Test: compilebench -D /mnt -i 30 -r 1000 --makej
      
      Before this change:
      
         intial create total runs 30 avg 69.02 MB/s (user 0.28s sys 0.57s)
         compile total runs 30 avg 314.96 MB/s (user 0.12s sys 0.25s)
         read compiled tree total runs 3 avg 27.14 MB/s (user 1.52s sys 0.90s)
         delete compiled tree total runs 30 avg 3.14 seconds (user 0.15s sys 0.66s)
      
      After this change:
      
         intial create total runs 30 avg 68.37 MB/s (user 0.29s sys 0.55s)
         compile total runs 30 avg 382.83 MB/s (user 0.12s sys 0.24s)
         read compiled tree total runs 3 avg 27.82 MB/s (user 1.45s sys 0.97s)
         delete compiled tree total runs 30 avg 3.18 seconds (user 0.17s sys 0.65s)
      Signed-off-by: Filipe Manana <fdmanana@suse.com>
      Signed-off-by: Chris Mason <clm@fb.com>
      20005523
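      To make the idea in the commit above concrete, here is a small userspace sketch
      of folding the leading free bits of an adjacent bitmap entry into an extent
      entry, using the 256Kb + 768Kb example from the commit message. The structures,
      names and block granularity are simplifying assumptions for the sketch, not the
      btrfs free space cache code.

      /*
       * Sketch: when a free space extent ends where a bitmap entry begins,
       * pull contiguous free bits out of the bitmap and fold them into the
       * extent, so a single entry describes the whole contiguous region.
       */
      #include <stdio.h>

      #define BLOCK_SIZE      4096ULL
      #define BITMAP_BYTES    32              /* 256 bits = 1Mb at 4Kb blocks */

      struct extent_entry {
              unsigned long long start;
              unsigned long long bytes;
      };

      struct bitmap_entry {
              unsigned long long start;               /* offset covered by bit 0 */
              unsigned char bits[BITMAP_BYTES];       /* one bit per free block */
      };

      static int test_bit(const struct bitmap_entry *bm, unsigned int i)
      {
              return (bm->bits[i / 8] >> (i % 8)) & 1;
      }

      static void clear_bit(struct bitmap_entry *bm, unsigned int i)
      {
              bm->bits[i / 8] &= (unsigned char)~(1u << (i % 8));
      }

      /* Steal leading free blocks from @bm that are contiguous with @ext. */
      static void merge_bitmap_into_extent(struct extent_entry *ext,
                                           struct bitmap_entry *bm)
      {
              unsigned int i = 0;

              if (ext->start + ext->bytes != bm->start)
                      return;         /* not adjacent, nothing to merge */

              while (i < BITMAP_BYTES * 8 && test_bit(bm, i)) {
                      clear_bit(bm, i);
                      ext->bytes += BLOCK_SIZE;
                      i++;
              }
      }

      int main(void)
      {
              /* Extent [128Mb - 256Kb, 128Mb), bitmap at 128Mb with its
               * first 768Kb (192 blocks) marked free. */
              struct extent_entry ext = { 128ULL * 1024 * 1024 - 256 * 1024,
                                          256 * 1024 };
              struct bitmap_entry bm = { .start = 128ULL * 1024 * 1024 };
              unsigned int i;

              for (i = 0; i < 192; i++)
                      bm.bits[i / 8] |= (unsigned char)(1u << (i % 8));

              merge_bitmap_into_extent(&ext, &bm);
              printf("extent now %llu Kb; enough for the 1Mb request: %s\n",
                     ext.bytes / 1024, ext.bytes >= 1024 * 1024 ? "yes" : "no");
              return 0;
      }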
  6. 07 Jan 2014, 1 commit
  7. 19 Dec 2013, 1 commit
  8. 01 Sep 2013, 1 commit
      Btrfs: separate out tests into their own directory · dc11dd5d
      Authored by Josef Bacik
      The plan is to have a bunch of unit tests that run when btrfs is loaded, if you
      build with the appropriate config option.  My ultimate goal is to have a test
      for every non-static function we have, but at first I'm going to focus on the
      things that cause us the most problems.  To start out, this just adds a
      tests/ directory, moves the existing free space cache tests into that
      directory, and sets up all of the infrastructure.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@fusionio.com>
      Signed-off-by: Chris Mason <chris.mason@fusionio.com>
      dc11dd5d
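      The load-time self-test infrastructure described in the commit above can be
      pictured with a toy userspace sketch: a table of named tests that a module
      would run from its init path when a debug config option is set. The names
      below are made up for the sketch, not the btrfs test infrastructure.

      /*
       * Toy illustration of a tests/ style table of unit tests run at load
       * time; illustrative only.
       */
      #include <stdio.h>

      static int test_free_space_cache(void)
      {
              /* a real test would exercise the cache and return 0 on success */
              return 0;
      }

      struct self_test {
              const char *name;
              int (*run)(void);
      };

      static const struct self_test tests[] = {
              { "free-space-cache", test_free_space_cache },
      };

      static int run_self_tests(void)
      {
              unsigned int i;
              int failures = 0;

              for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
                      int ret = tests[i].run();

                      printf("test %-20s %s\n", tests[i].name,
                             ret ? "FAILED" : "ok");
                      if (ret)
                              failures++;
              }
              return failures;
      }

      int main(void)
      {
              return run_self_tests();        /* a module would call this from init */
      }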