1. 29 October 2010, 7 commits
    • Btrfs: let the user know space caching is enabled · 8216ef86
      Authored by Josef Bacik
      If you mount with -o space_cache, the option will be persistent across mounts.  To
      make sure the user knows this, emit a message telling them when they did not mount
      with -o space_cache but the feature is still in use.
      Signed-off-by: Josef Bacik <josef@redhat.com>
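      A minimal sketch of the mount-time check this describes; the cache_generation
      variable is illustrative, while btrfs_test_opt() is the usual btrfs mount
      option test macro:

        /* sketch: the on-disk cache is in use, but -o space_cache was not given */
        if (cache_generation > 0 && !btrfs_test_opt(root, SPACE_CACHE))
                printk(KERN_INFO "btrfs: disk space caching is enabled\n");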
    • Btrfs: Add a clear_cache mount option · 88c2ba3b
      Authored by Josef Bacik
      If something goes wrong with the free space cache we need a way to make sure
      it's not loaded on mount and that it's cleared for everybody.  When you pass the
      clear_cache option it will make it so all block groups are set up to be cleared,
      which keeps them from being loaded and then they will be truncated when the
      transaction is committed.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
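      A hedged sketch of the per-block-group effect described above (the field name
      and the BTRFS_DC_CLEAR state are illustrative, not the verbatim patch):

        /* sketch: with -o clear_cache, flag every block group so its on-disk cache
         * is never loaded and gets truncated when the transaction commits */
        if (btrfs_test_opt(root, CLEAR_CACHE))
                cache->disk_cache_state = BTRFS_DC_CLEAR;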
    • Btrfs: add support for mixed data+metadata block groups · 67377734
      Authored by Josef Bacik
      There are just a few things that need to be fixed in the kernel to support mixed
      data+metadata block groups.  Mostly we just need to make sure that if we are
      using mixed block groups, we continue to allocate mixed block groups as we
      need them.  Also we need to make sure __find_space_info will find our space info
      if we search for DATA or METADATA only.  Tested this with xfstests and it works
      nicely.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
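      A rough sketch of the __find_space_info() behaviour described above (loop and
      variable names simplified): a mixed DATA|METADATA space_info has to satisfy a
      lookup that asks for only one of the two flags.

        struct btrfs_space_info *found;

        flags &= BTRFS_BLOCK_GROUP_DATA | BTRFS_BLOCK_GROUP_METADATA;
        list_for_each_entry(found, &info->space_info, list) {
                /* a mixed group's flags are a superset of a DATA-only or
                 * METADATA-only request, so the match still succeeds */
                if ((found->flags & flags) == flags)
                        return found;
        }
        return NULL;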
    • Btrfs: check cache->caching_ctl before returning if caching has started · dde5abee
      Authored by Josef Bacik
      With free space disk caching, a block group can be marked as having started
      caching before its caching ctl exists.  This can race with anybody else who
      tries to get the caching ctl before we cache (this is very hard to hit, btw).  So
      instead check to see if cache->caching_ctl is set, and if not return NULL.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
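      The shape of the check, roughly (field and lock names follow the btrfs code of
      the period, but this is a sketch rather than the verbatim hunk):

        struct btrfs_caching_control *ctl;

        spin_lock(&cache->lock);
        if (!cache->caching_ctl) {
                /* caching was flagged as started, but no ctl exists yet */
                spin_unlock(&cache->lock);
                return NULL;
        }
        ctl = cache->caching_ctl;
        atomic_inc(&ctl->count);
        spin_unlock(&cache->lock);
        return ctl;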
    • Btrfs: load free space cache if it exists · 9d66e233
      Authored by Josef Bacik
      This patch actually loads the free space cache if it exists.  The only thing
      that really changes here is that we need to cache the block group if we're going
      to remove an extent from it.  Previously we did not do this, since the caching
      kthread would pick it up.  With the on-disk cache we don't have this luxury, so
      we need to make sure we read the on-disk cache in first and then remove the
      extent; that way, when the extent is unpinned, the free space is added to the
      block group.  This has been tested with all sorts of things.
      Signed-off-by: Josef Bacik <josef@redhat.com>
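      The ordering described above, as a sketch (argument lists simplified and the
      helper names treated as assumptions):

        /* sketch: read the on-disk free space cache in before touching free space,
         * since no caching kthread will pick this block group up for us later */
        if (block_group->cached == BTRFS_CACHE_NO)
                cache_block_group(block_group, trans, 1);
        btrfs_remove_free_space(block_group, start, num_bytes);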
    • Btrfs: write out free space cache · 0cb59c99
      Authored by Josef Bacik
      This is a simple bit, just dump the free space cache out to our preallocated
      inode when we're writing out dirty block groups.  There are a bunch of changes
      in inode.c in order to account for special cases.  Mostly when we're doing the
      writeout we're holding trans_mutex, so we need to use the nolock transaction
      functions.  Also we can't do asynchronous completions since the async thread
      could be blocked on already completed IO waiting for the transaction lock.  This
      has been tested with xfstests and btrfs filesystem balance, as well as my ENOSPC
      tests.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
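      A sketch of the constraint described above; the "_nolock" names are one reading
      of "the nolock transaction functions" and should be treated as assumptions:

        /* The cache writeout runs while trans_mutex is already held, so any
         * transaction work it triggers has to use the nolock variants, and the IO
         * must be waited on synchronously -- an async completion thread could
         * itself be blocked waiting for the transaction lock. */
        trans = btrfs_join_transaction_nolock(root, 1);   /* assumed helper name */
        /* update the cache inode, write and wait on its pages */
        btrfs_end_transaction_nolock(trans, root);        /* assumed helper name */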
    • Btrfs: create special free space cache inode · 0af3d00b
      Authored by Josef Bacik
      In order to save free space cache, we need an inode to hold the data, and we
      need a special item to point at the right inode for the right block group.  So
      first, create a special item that will point to the right inode, and the number
      of extent entries we will have and the number of bitmaps we will have.  We
      truncate and pre-allocate space every time to make sure it's up to date.
      
      This feature will be turned on as soon as you mount with -o space_cache; however,
      it is safe to boot into old kernels, as they will just generate the cache the
      old-fashioned way.  When you boot back into a newer kernel we will notice that the
      filesystem was modified but the cache was not, and automatically discard the cache.
      Signed-off-by: Josef Bacik <josef@redhat.com>
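      The staleness detection mentioned above boils down to a generation comparison;
      a hedged sketch with illustrative variable names:

        /* sketch: an old kernel (which knows nothing about the cache) may have
         * modified the fs without rewriting the cache, so the generations differ */
        if (cache_generation != fs_generation)
                use_disk_cache = 0;   /* stale: rescan the extent tree instead,
                                       * and let the next commit rewrite the cache */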
  2. 12 June 2010, 1 commit
  3. 27 May 2010, 1 commit
  4. 25 May 2010, 9 commits
  5. 29 April 2010, 1 commit
  6. 06 April 2010, 2 commits
    • Btrfs: fix data enospc check overflow · ab6e2410
      Authored by Josef Bacik
      Because we account for reserved space we get from the allocator before we
      actually account for allocating delalloc space, we can have a small window where
      the amount of "used" space in a space_info is more than the total amount of
      space in the space_info.  This will cause an overflow in our check, so it will
      seem like we have _tons_ of free space, and we'll allow reservations to occur
      that will end up larger than the amount of space we have.  I've seen users
      report ENOSPC panics in cow_file_range a few times recently, so I tried to
      reproduce this problem and found I could reproduce it if I ran one of my tests
      in a loop for like 20 minutes.  With this patch my test ran all night without
      issues.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
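      The hazard here is unsigned wrap-around: "used" can transiently exceed the
      total, and then total - used becomes an enormous u64.  A sketch with
      illustrative variable names, contrasting the two forms of the check:

        u64 total = space_info->total_bytes;
        u64 used  = used_bytes_including_reservations;   /* illustrative */
        int ok;

        /* unsafe: if used > total, the unsigned subtraction wraps around and the
         * check sees an enormous amount of "free" space */
        ok = (total - used >= bytes);

        /* safe: compare by addition, so a transiently inflated "used" simply fails */
        ok = (used + bytes <= total);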
    • Btrfs: fix small race with delalloc flushing waitqueue's · b5cb1600
      Authored by Josef Bacik
      Every time we start a new flushing thread, we init the waitqueue if there isn't a
      flushing thread running.  The problem with this is we check
      space_info->flushing, which we clear right before doing a wake_up on the
      flushing waitqueue, which causes problems if we init the waitqueue in the middle
      of clearing the flushing flag and calling wake_up.  This is hard to hit, but
      the code is wrong anyway, so init the flushing/allocating waitqueue when
      creating the space info and let it be.  I haven't seen the panic since I've been
      using this patch.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
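      The fix pattern is simply to do the one-time initialization when the space_info
      is created; a sketch (the waitqueue field names follow the description above
      and are illustrative):

        found = kzalloc(sizeof(*found), GFP_NOFS);
        if (!found)
                return -ENOMEM;
        /* init once, at creation time; never re-init from the flushing path, so a
         * concurrent wake_up() can no longer race with a fresh initialization */
        init_waitqueue_head(&found->flush_wait);
        init_waitqueue_head(&found->allocate_wait);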
  7. 31 March 2010, 3 commits
    • Btrfs: kill max_extent mount option · 287a0ab9
      Authored by Josef Bacik
      As Yan pointed out, there's not much reason for all this complicated math to
      account for file extents being split up into max_extent chunks, since they are
      likely to all end up in the same leaf anyway.  Since there isn't much reason to
      use max_extent, just remove the option altogether so we have one less thing we
      need to test.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fail to mount if we have problems reading the block groups · 1b1d1f66
      Authored by Josef Bacik
      We don't actually check the return value of btrfs_read_block_groups, so we can
      possibly succeed in mounting, but then fail to, say, read the superblock xattr for
      selinux, which will cause the vfs code to deactivate the super.
      
      This is a problem because in find_free_extent we just assume that we
      will find the right space_info for the allocation we want.  But if we
      failed to read the block groups, we won't have setup any space_info's,
      and we'll hit a NULL pointer deref in find_free_extent.
      
      This patch fixes that problem by checking the return value of
      btrfs_read_block_groups, and failing out properly.  I've also added a
      check in find_free_extent so if for some reason we don't find an
      appropriate space_info, we just return -ENOSPC.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
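      Both checks, sketched with simplified call sites (the error label and message
      are illustrative):

        /* mount path: don't ignore a block group read failure */
        ret = btrfs_read_block_groups(extent_root);
        if (ret) {
                printk(KERN_ERR "btrfs: failed to read block groups: %d\n", ret);
                goto fail_block_groups;
        }

        /* find_free_extent(): don't assume the space_info exists */
        space_info = __find_space_info(root->fs_info, data);
        if (!space_info)
                return -ENOSPC;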
    • Btrfs: add NULL check for do_walk_down() · 90d2c51d
      Authored by Miao Xie
      btrfs_find_create_tree_block() may return NULL, so we must check the returned
      value, or we will access a NULL pointer.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
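      The fix is a plain NULL check on the allocation; roughly (the error code here
      is illustrative):

        next = btrfs_find_create_tree_block(root, bytenr, blocksize);
        if (!next)
                return -ENOMEM;   /* fail the walk instead of dereferencing NULL */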
  8. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking... · 5a0e3ad6
      Authored by Tejun Heo
      include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include those
      headers directly instead of assuming availability.  As this conversion
      needs to touch a large number of source files, the following script is
      used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following:
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surrounding.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition, while for others adding it to an
         implementation .h or embedding .c file was more appropriate.  This
         step added inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as a bisection point.
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
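      For an individual file the change is mechanical: anything that uses the slab or
      gfp interfaces now includes the right header itself instead of inheriting it
      through percpu.h.  For example:

        #include <linux/slab.h>   /* kmalloc(), kfree(), kmem_cache_*: now explicit */
        #include <linux/gfp.h>    /* only if the file itself uses GFP_* flags */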
  9. 15 March 2010, 1 commit
    • Btrfs: cache the extent state everywhere we possibly can V2 · 2ac55d41
      Authored by Josef Bacik
      This patch just goes through and fixes everybody that does
      
      lock_extent()
      blah
      unlock_extent()
      
      to use
      
      lock_extent_bits()
      blah
      unlock_extent_cached()
      
      and pass around an extent_state so we only have to do the searches once per
      function.  This gives me about a 3 MB/s boost on my random write test.  I have
      not converted some things, like the relocation and ioctls, since they aren't
      heavily used and the relocation stuff is in the middle of being re-written.  I
      also changed the clear_extent_bit() to only unset the cached state if we are
      clearing EXTENT_LOCKED and related stuff, so we can do things like this
      
      lock_extent_bits()
      clear delalloc bits
      unlock_extent_cached()
      
      without losing our cached state.  I tested this thoroughly and turned on
      LEAK_DEBUG to make sure we weren't leaking extent states; everything worked out
      fine.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
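      A sketch of the converted pattern with the cached state threaded through (the
      argument lists match one reading of the extent-io API of that time; error
      handling omitted):

        struct extent_state *cached_state = NULL;

        lock_extent_bits(&BTRFS_I(inode)->io_tree, start, end, 0,
                         &cached_state, GFP_NOFS);
        /* clear delalloc bits, set dirty bits, etc. -- the tree search done by the
         * lock is reused by every call that takes the cached state */
        unlock_extent_cached(&BTRFS_I(inode)->io_tree, start, end,
                             &cached_state, GFP_NOFS);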
  10. 05 February 2010, 1 commit
  11. 18 January 2010, 1 commit
    • Btrfs: fix possible panic on unmount · 11dfe35a
      Authored by Josef Bacik
      We can race with the unmount of an fs and the stopping of a kthread where we
      will free the block group before we're done using it.  The reason for this is
      because we do not hold a reference on the block group while it's caching, since
      the allocator drops its reference once it exits or moves on to the next block
      group.  This patch fixes the problem by taking a reference to the block group
      before we start caching and dropping it when we're done to make sure all
      accesses to the block group are safe.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
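      The pattern is a straight reference count around the caching work; a sketch
      (btrfs_get_block_group()/btrfs_put_block_group() are the obvious helpers for
      this, the rest is illustrative):

        btrfs_get_block_group(cache);    /* pin the group before caching starts */
        /* the caching thread scans the extent tree and fills in free space */
        btrfs_put_block_group(cache);    /* drop the ref only once caching is done,
                                          * so unmount can't free the group under us */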
  12. 18 December 2009, 4 commits
  13. 16 December 2009, 1 commit
  14. 12 November 2009, 2 commits
    • Btrfs: allow more metadata chunk preallocation · 33b25808
      Authored by Chris Mason
      On an FS where all of the space has not been allocated into chunks yet,
      the enospc code can return ENOSPC just because the existing metadata chunks
      are full.
      
      We get around this by allowing more metadata chunks to be allocated up
      to a certain limit, and finding the right limit is a little fuzzy.  The
      problem is the reservations for delalloc would preallocate way too much
      of the FS as metadata.  We need to start saying no and just force some
      IO to happen.
      
      But we also need to let a reasonable amount of the FS become metadata.
      This bumps the hard limit up; later releases will have a better system.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: find ideal block group for caching · ccf0e725
      Authored by Josef Bacik
      This patch changes a few things.  Hopefully the comments are helpful, but
      I'll try to be verbose here as well.
      
      Problem:
      
      My fedora box was taking 1 minute and 21 seconds to boot with btrfs as root.
      Part of this problem was we pick the first block group we can find and start
      caching it, even if it may not have enough free space.  The other problem is
      we only search for cached block groups the first time around; since this is a
      newly mounted fs we won't find any, so we end up caching several block groups
      during bootup, which with a lot of fragmentation takes around 30-45 seconds to
      complete, which bogs down the system.  So
      
      Solution:
      
      1) Don't cache block groups willy-nilly at first.  Instead try and figure out
      which block group has the most free space, and therefore will take the least amount
      of time to cache.
      
      2) Don't be so picky about cached block groups.  The other problem is once
      we've filled up a cluster, if the block group isn't finished caching the next
      time we try and do the allocation we'll completely ignore the cluster and
      start searching from the beginning of the space, which makes us cache more
      block groups, which slows us down even more.  So instead of skipping block
      groups that are not finished caching when we have a hint, only skip the block
      group if it hasn't started caching yet.
      
      There is one other tweak in here.  Before if we allocated a chunk and still
      couldn't find new space, we'd end up switching the space info to force another
      chunk allocation.  This could make us end up with way too many chunks, so keep
      track of this particular case.
      
      With this patch and my previous cluster fixes my fedora box now boots in 43
      seconds, and according to the bootchart is not held up by our block group
      caching at all.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  15. 14 October 2009, 3 commits
  16. 09 October 2009, 2 commits