1. 18 December 2009, 13 commits
  2. 16 December 2009, 3 commits
  3. 12 November 2009, 10 commits
    • Btrfs: fix panic when trying to destroy a newly allocated inode · a6dbd429
      Committed by Josef Bacik
      There is a problem where iget5_locked will look for an inode, not find it, and
      then subsequently try to allocate it.  Another CPU will have raced in and
      allocated the inode instead, so when iget5_locked gets the inode spin lock again
      and does a search, it finds the new inode.  So it goes ahead and calls
      destroy_inode on the inode it just allocated.  The problem is we don't set
      BTRFS_I(inode)->root until the new inode is completely initialized.  This patch
      makes us set root to NULL when alloc'ing a new inode, so when we get to
      btrfs_destroy_inode and we see that root is NULL we can just free up the memory
      and continue on.  This fixes the panic
      
      http://www.kerneloops.org/submitresult.php?number=812690
      
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
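      As a rough illustration of the guard described above, here is a minimal
      userspace sketch of the pattern: leave the field NULL at allocation time,
      and bail out early in the destroy path when it was never filled in.  The
      struct and function names are simplified stand-ins, not the actual btrfs
      symbols.

```c
#include <stdlib.h>

/* Simplified stand-in for the inode; only the field relevant to the fix. */
struct inode_stub {
	void *root;    /* NULL until the inode is fully initialized */
	char *data;
};

/* Allocation path: leave root NULL so a half-built inode is detectable. */
static struct inode_stub *alloc_inode_stub(void)
{
	struct inode_stub *inode = calloc(1, sizeof(*inode));
	if (!inode)
		return NULL;
	inode->data = malloc(64);
	return inode;
}

/* Destroy path: if root was never set, this inode lost the race and was never
 * hooked up, so just free the memory and return instead of panicking. */
static void destroy_inode_stub(struct inode_stub *inode)
{
	if (!inode)
		return;
	if (!inode->root) {
		free(inode->data);
		free(inode);
		return;
	}
	/* full teardown that dereferences inode->root would go here */
	free(inode->data);
	free(inode);
}

int main(void)
{
	struct inode_stub *loser = alloc_inode_stub(); /* allocation that lost the race */
	destroy_inode_stub(loser);                     /* safe: root is still NULL */
	return 0;
}
```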
    • Btrfs: allow more metadata chunk preallocation · 33b25808
      Committed by Chris Mason
      On an FS where not all of the space has been allocated into chunks yet,
      the enospc code can return -ENOSPC just because the existing metadata
      chunks are full.
      
      We get around this by allowing more metadata chunks to be allocated up
      to a certain limit, and finding the right limit is a little fuzzy.  The
      problem is the reservations for delalloc would preallocate way too much
      of the FS as metadata.  We need to start saying no and just force some
      IO to happen.
      
      But we also need to let a reasonable amount of the FS become metadata.
      This bumps the hard limit up; later releases will have a better system.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
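      A hedged sketch of the kind of hard-limit check the message describes:
      keep allocating metadata chunks until metadata reaches some fraction of
      the filesystem, then refuse and force IO instead.  The 20% figure and the
      function name are placeholders for illustration, not the values or
      symbols used by the real patch.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative policy check: allow another metadata chunk only while total
 * metadata stays under some fraction of the filesystem.  The 20% figure is
 * an arbitrary placeholder, not the limit the real patch uses. */
static bool may_allocate_meta_chunk(uint64_t meta_bytes, uint64_t fs_bytes)
{
	return meta_bytes * 5 < fs_bytes;   /* i.e. metadata < 20% of the FS */
}

int main(void)
{
	uint64_t fs_bytes = 100ULL << 30;    /* 100 GiB filesystem */
	uint64_t meta_bytes = 10ULL << 30;   /* 10 GiB already metadata */

	if (may_allocate_meta_chunk(meta_bytes, fs_bytes))
		printf("allocate another metadata chunk\n");
	else
		printf("refuse: force some delalloc IO instead\n");
	return 0;
}
```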
    • Btrfs: fallback on uncompressed io if compressed io fails · f5a84ee3
      Committed by Josef Bacik
      Currently compressed IO does not handle the case where its entire extent
      cannot be allocated in one piece.  So if we have enough free space to
      allocate for the extent, but it's not contiguous, it will fail
      spectacularly.  This patch fixes this by falling back on uncompressed IO,
      which lets us spread the delalloc extent across multiple extents.  I
      tested this by making us randomly think the reservation had failed, to
      force the fallback onto the uncompressed path, and it seemed to work
      fine.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
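      A minimal sketch of the fallback flow, using hypothetical helper names;
      it only mirrors the "try compressed, retry uncompressed if the
      reservation fails" shape described above, not the actual btrfs
      submission code.

```c
#include <errno.h>
#include <stdio.h>

/* Stand-ins for the two submission paths; names are illustrative only. */
static int submit_compressed(int force_fail)
{
	return force_fail ? -ENOSPC : 0;   /* pretend the contiguous reservation failed */
}

static int submit_uncompressed(void)
{
	return 0;   /* uncompressed delalloc can be spread across smaller extents */
}

static int submit_extent(int force_fail)
{
	int ret = submit_compressed(force_fail);
	if (ret == -ENOSPC) {
		fprintf(stderr, "compressed reservation failed, falling back\n");
		ret = submit_uncompressed();
	}
	return ret;
}

int main(void)
{
	/* Mimics the test described in the message: force the compressed path
	 * to fail and make sure the fallback still succeeds. */
	return submit_extent(1);
}
```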
    • Btrfs: find ideal block group for caching · ccf0e725
      Committed by Josef Bacik
      This patch changes a few things.  Hopefully the comments are helpful, but
      I'll try to be verbose here as well.
      
      Problem:
      
      My Fedora box was taking 1 minute and 21 seconds to boot with btrfs as root.
      Part of the problem is that we pick the first block group we can find and
      start caching it, even if it may not have enough free space.  The other
      problem is that we only search for cached block groups the first time
      around, and since this is a newly mounted fs there are no cached block
      groups, so we end up caching several block groups during bootup, which
      with a lot of fragmentation takes around 30-45 seconds to complete and
      bogs down the system.
      
      Solution:
      
      1) Don't cache block groups willy-nilly at first.  Instead, try to figure
      out which block group has the most free space, and therefore will take the
      least amount of time to cache.
      
      2) Don't be so picky about cached block groups.  The other problem is that
      once we've filled up a cluster, if the block group isn't finished caching,
      the next time we try to do the allocation we'll completely ignore the
      cluster and start searching from the beginning of the space, which makes
      us cache more block groups and slows us down even more.  So instead of
      skipping block groups that are not finished caching when we have a hint,
      only skip a block group if it hasn't started caching yet.
      
      There is one other tweak in here.  Before, if we allocated a chunk and
      still couldn't find new space, we'd end up switching the space info to
      force another chunk allocation.  This could leave us with way too many
      chunks, so keep track of this particular case.
      
      With this patch and my previous cluster fixes, my Fedora box now boots in
      43 seconds, and according to the bootchart it is not held up by our block
      group caching at all.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
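      A simplified sketch of the two heuristics above: pick the block group
      with the most free space, and, when we have a hint, only skip groups that
      have not even started caching.  All types and names here are illustrative
      stand-ins, not the btrfs block group structures.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a block group's caching state. */
enum cache_state { CACHE_NOT_STARTED, CACHE_IN_PROGRESS, CACHE_DONE };

struct bg_stub {
	uint64_t free_bytes;
	enum cache_state state;
};

/* Point 1 of the message: prefer the group with the most free space, so the
 * one we do cache is the cheapest to finish. */
static struct bg_stub *pick_ideal_group(struct bg_stub *groups, size_t n)
{
	struct bg_stub *best = NULL;
	for (size_t i = 0; i < n; i++) {
		if (!best || groups[i].free_bytes > best->free_bytes)
			best = &groups[i];
	}
	return best;
}

/* Point 2: with a hint, only skip a group that has not even started caching;
 * a partially cached group is still worth allocating from. */
static int should_skip_hinted_group(const struct bg_stub *bg)
{
	return bg->state == CACHE_NOT_STARTED;
}

int main(void)
{
	struct bg_stub groups[] = {
		{ 1 << 20, CACHE_NOT_STARTED },
		{ 8 << 20, CACHE_IN_PROGRESS },
		{ 4 << 20, CACHE_DONE },
	};
	struct bg_stub *best = pick_ideal_group(groups, 3);
	printf("best group has %llu free bytes, skip=%d\n",
	       (unsigned long long)best->free_bytes,
	       should_skip_hinted_group(best));
	return 0;
}
```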
    • Btrfs: avoid null deref in unpin_extent_cache() · 4eb3991c
      Committed by Dan Carpenter
      I re-ordered the checks to avoid dereferencing "em" when it is NULL.
      
      Found by smatch static checker.
      Signed-off-by: Dan Carpenter <error27@gmail.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
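      A generic example of this kind of reordering, not the exact expression
      from the patch: the NULL test has to come first so short-circuit
      evaluation prevents the dereference.

```c
#include <stddef.h>

/* Toy stand-in for an extent_map; fields and flag are illustrative. */
struct em_stub {
	unsigned long flags;
	unsigned long start;
};

#define EM_PINNED_FLAG 0x1UL   /* placeholder flag bit, not the real one */

/* Buggy shape (in a comment so it never runs):
 *   if (!(em->flags & EM_PINNED_FLAG) || !em) ...   // derefs em before the NULL check
 * Fixed shape: check for NULL first, then dereference. */
static int em_is_usable(const struct em_stub *em, unsigned long start)
{
	if (!em || em->start != start || !(em->flags & EM_PINNED_FLAG))
		return 0;
	return 1;
}

int main(void)
{
	return em_is_usable(NULL, 0);   /* safe: short-circuits on the NULL check */
}
```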
    • Btrfs: skip btrfs_release_path in btrfs_update_root and btrfs_del_root · df66916e
      Committed by Li Dongyang
      We don't need to call btrfs_release_path because btrfs_free_path will do
      that for us.
      Signed-off-by: Li Dongyang <Jerry87905@gmail.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
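      A toy model of the convention the message relies on: the "free" helper
      already performs the "release" step, so a caller that is about to free
      the path can drop the explicit release call.  The names below are
      stand-ins for btrfs_release_path/btrfs_free_path, not the real API.

```c
#include <stdlib.h>

struct path_stub { int refs_held; };

/* Drops whatever the path is holding on to. */
static void release_path_stub(struct path_stub *p)
{
	p->refs_held = 0;
}

/* Frees the path; it releases first, so callers need not do it themselves. */
static void free_path_stub(struct path_stub *p)
{
	if (!p)
		return;
	release_path_stub(p);   /* free implies release, as the message notes */
	free(p);
}

int main(void)
{
	struct path_stub *p = calloc(1, sizeof(*p));
	if (!p)
		return 1;
	p->refs_held = 1;
	free_path_stub(p);      /* no separate release call needed here */
	return 0;
}
```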
    • Btrfs: fix some metadata enospc issues · 5df6a9f6
      Committed by Josef Bacik
      We weren't reserving metadata space for rename, rmdir and unlink, which could
      cause problems.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix how we set max_size for free space clusters · 01dea1ef
      Committed by Josef Bacik
      This patch fixes a problem where max_size can be set to 0 even though we
      filled the cluster properly.  We set max_size to 0 if we restart the cluster
      window, but if the new start entry is big enough to be our new cluster then
      we could return with max_size still set to 0, which means the next time we
      try to allocate from this cluster it will fail.  So set max_extent to the
      entry's size.  Tested this on my box and now we actually allocate from the
      cluster after we fill it.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
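      A small sketch of the window-restart fix as described: max_size is
      cleared on restart, and when the very first entry already satisfies the
      cluster, max_size is taken from that entry instead of being returned as
      0.  The struct and field names are illustrative, not the real free-space
      cluster code.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified model of the cluster window restart described above. */
struct cluster_stub {
	uint64_t window_size;
	uint64_t max_size;
};

static void restart_window(struct cluster_stub *c, uint64_t entry_size,
			   uint64_t needed)
{
	c->window_size = entry_size;
	c->max_size = 0;                     /* cleared on restart */
	if (c->window_size >= needed)
		c->max_size = entry_size;    /* the fix: record the entry's size */
}

int main(void)
{
	struct cluster_stub c = { 0, 0 };
	restart_window(&c, 8 << 20, 4 << 20);
	/* Without the fix this would print 0 and later allocations would fail. */
	printf("max_size = %llu\n", (unsigned long long)c.max_size);
	return 0;
}
```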
    • Btrfs: cleanup transaction starting and fix journal_info usage · 249ac1e5
      Committed by Josef Bacik
      We use journal_info to tell whether we're in a nested transaction, to make
      sure we don't commit the transaction from within a nested transaction.  We
      use another method to see if there are any outstanding ioctl trans handles,
      so if we're starting one, do not set current->journal_info, since that
      would interfere with other filesystems.  This patch also cleans up the
      transaction-starting code so there aren't any magic numbers.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
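      A very loose userspace model of the rule above: journal_info marks a
      running in-kernel transaction (and so detects nesting), but an
      ioctl-initiated handle must leave it alone.  The structs here are
      stand-ins, not the kernel's task_struct or btrfs_trans_handle.

```c
#include <stdio.h>

struct task_stub  { void *journal_info; };
struct trans_stub { int dummy; };

static struct task_stub current_task;   /* plays the role of "current" */

static struct trans_stub *start_transaction(struct trans_stub *t, int is_ioctl_trans)
{
	if (current_task.journal_info)
		printf("nested transaction: do not commit from here\n");
	if (!is_ioctl_trans)
		current_task.journal_info = t;   /* only tag kernel-internal handles */
	return t;
}

static void end_transaction(struct trans_stub *t)
{
	if (current_task.journal_info == t)
		current_task.journal_info = NULL;
}

int main(void)
{
	struct trans_stub t1, t2;
	start_transaction(&t1, 0);
	start_transaction(&t2, 1);   /* ioctl handle: journal_info left alone */
	end_transaction(&t2);
	end_transaction(&t1);
	return 0;
}
```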
    • Btrfs: fix data allocation hint start · 6346c939
      Committed by Josef Bacik
      Sometimes our start allocation hint when we cow a file can be EXTENT_HOLE
      or some other such placeholder, which is not optimal.  So if we find that
      our em->block_start is one of these special values, check to see where the
      first block of the inode is stored, and use that as a hint.  If that block
      is also a special value, just fall back on a hint of 0 and let the
      allocator figure out a good place to put the data.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
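      A sketch of the hint-selection order described above, using made-up
      sentinel values in place of the real EXTENT_HOLE-style constants.

```c
#include <stdint.h>
#include <stdio.h>

/* Placeholder sentinels standing in for the EXTENT_HOLE-style values the
 * message mentions; the real constants live in the btrfs extent_map code. */
#define FAKE_HOLE   ((uint64_t)-3)
#define FAKE_INLINE ((uint64_t)-2)

static int is_placeholder(uint64_t block_start)
{
	return block_start == FAKE_HOLE || block_start == FAKE_INLINE;
}

/* Hint selection as described: prefer the extent's own block_start, then the
 * inode's first block, then fall back to 0. */
static uint64_t pick_alloc_hint(uint64_t em_block_start, uint64_t first_block)
{
	if (!is_placeholder(em_block_start))
		return em_block_start;
	if (!is_placeholder(first_block))
		return first_block;
	return 0;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)pick_alloc_hint(FAKE_HOLE, 4096));
	printf("%llu\n", (unsigned long long)pick_alloc_hint(FAKE_HOLE, FAKE_INLINE));
	return 0;
}
```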
  4. 14 October 2009, 9 commits
    • Btrfs: always pin metadata in discard mode · 444528b3
      Committed by Chris Mason
      We have an optimization in btrfs to allow blocks to be
      immediately freed if they were allocated in this transaction and never
      written.  Otherwise they are pinned and freed when the transaction
      commits.
      
      This isn't optimal for discard mode because immediately freeing
      them means immediately discarding them.  It is better to give the
      block to the pinning code and let the (slow) discard happen later.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
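      A sketch of the policy only: the "allocated and freed in the same
      transaction" fast path is taken when discard is off; otherwise the block
      is handed to the pinning code so the discard happens at commit time.  The
      function is illustrative, not the btrfs free path.

```c
#include <stdbool.h>
#include <stdio.h>

static void free_tree_block(bool born_this_transaction, bool discard_mode)
{
	if (born_this_transaction && !discard_mode) {
		printf("free and reuse immediately\n");
		return;
	}
	printf("pin; free (and discard) when the transaction commits\n");
}

int main(void)
{
	free_tree_block(true, false);   /* fast path still taken without discard */
	free_tree_block(true, true);    /* discard mode: defer via pinning */
	return 0;
}
```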
    • Btrfs: enable discard support · 06348574
      Committed by Christoph Hellwig
      The discard support code in btrfs is currently guarded by ifdefs for
      BIO_RW_DISCARD, which is never defined, as it's the name of an enum
      member.  Just remove the useless ifdefs to actually enable the code.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
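      A tiny self-contained example of the pitfall: an enum member is invisible
      to the preprocessor, so #ifdef on it always takes the #else branch.  The
      enum below is made up; it only mirrors the BIO_RW_DISCARD situation.

```c
#include <stdio.h>

/* An enumerator is not a preprocessor macro, so #ifdef never sees it and the
 * guarded code silently disappears, which is what happened with the
 * BIO_RW_DISCARD guards. */
enum rw_flags { RW_READ, RW_WRITE, RW_TRIM };

int main(void)
{
#ifdef RW_TRIM
	printf("this branch is never compiled in\n");
#else
	printf("enum members are invisible to the preprocessor\n");
#endif
	return 0;
}
```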
    • Btrfs: add -o discard option · e244a0ae
      Committed by Christoph Hellwig
      Enabling discard by default is not a good idea given the trim speed of
      the SSD prototypes we've seen and the characteristics of many high-end
      arrays.  Turn off discards by default and require the -o discard mount
      option to enable them.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
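      A toy option parser showing the opt-in behaviour (as with
      "mount -o discard"); the struct and function here are illustrative, not
      btrfs's real mount-option parsing code.

```c
#include <stdio.h>
#include <string.h>

struct mount_opts { int discard; };

/* Discard stays off unless the option is given explicitly. */
static void parse_one_option(struct mount_opts *opts, const char *opt)
{
	if (strcmp(opt, "discard") == 0)
		opts->discard = 1;           /* opt-in only */
}

int main(void)
{
	struct mount_opts opts = { 0 };      /* default: discard disabled */
	parse_one_option(&opts, "discard");  /* e.g. mount -o discard */
	printf("discard %s\n", opts.discard ? "enabled" : "disabled");
	return 0;
}
```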
    • Btrfs: properly wait log writers during log sync · 86df7eb9
      Committed by Yan, Zheng
      A recent fsync optimization made btrfs_sync_log skip calling
      wait_for_writer in the single log writer case.  This is incorrect,
      since the writer count can also be increased by btrfs_pin_log.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix possible ENOSPC problems with truncate · 5d5e103a
      Committed by Josef Bacik
      There's a problem where we don't do any space reservation for truncates,
      which can cause an oops: you will be allowed to go off into the weeds a
      bit because we don't account for the delalloc bytes that are created as a
      result of the truncate.
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: fix btrfs acl #ifdef checks · 0eda294d
      Committed by Chris Mason
      The btrfs acl code was #ifdefing for a define
      that didn't exist.  This correctly matches it
      to the values used by the Kconfig file.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: streamline tree-log btree block writeout · 690587d1
      Committed by Chris Mason
      Syncing the tree log is a 3 phase operation.
      
      1) write and wait for all the tree log blocks for a given root.
      
      2) write and wait for all the tree log blocks for the
      tree of tree log roots.
      
      3) write and wait for the super blocks (barriers here)
      
      This isn't as efficient as it could be because there is
      no requirement to wait for the blocks from step one to hit the disk
      before we start writing the blocks from step two.  This commit
      changes the sequence so that we don't start waiting until
      all the tree blocks from both steps one and two have been sent
      to disk.
      
      We do this by breaking up btrfs_write_wait_marked_extents into
      two functions, which is trivial because it was already broken
      up into two parts.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
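      A sketch of the reordered sequence only: submit the writes for both trees
      first, then wait for both, then move on to the super blocks.  The helpers
      below stand in for the write and wait halves the message says
      btrfs_write_wait_marked_extents was split into; they are not the real
      functions.

```c
#include <stdio.h>

static void write_marked_extents(const char *what)
{
	printf("submit writes for %s\n", what);
}

static void wait_marked_extents(const char *what)
{
	printf("wait for %s to hit disk\n", what);
}

int main(void)
{
	/* Old sequence: write+wait for the log tree, then write+wait for the
	 * tree of log roots.  New sequence sketched here: submit both sets of
	 * writes before waiting for either. */
	write_marked_extents("log tree blocks");
	write_marked_extents("log-root tree blocks");
	wait_marked_extents("log tree blocks");
	wait_marked_extents("log-root tree blocks");
	/* super block writes (with barriers) would follow here */
	return 0;
}
```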
    • Btrfs: avoid tree log commit when there are no changes · 257c62e1
      Committed by Chris Mason
      rpm has a habit of running fdatasync when the file hasn't
      changed.  We already detect if a file hasn't been changed
      in the current transaction but it might have been sent to
      the tree-log in this transaction and not changed since
      the last call to fsync.
      
      In this case, we want to avoid a tree log sync, which includes
      a number of synchronous writes and barriers.  This commit
      extends the existing tracking of the last transaction to change
      a file to also track the last sub-transaction.
      
      The end result is that rpm -ivh and -Uvh are roughly twice as fast,
      and on par with ext3.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
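      A simplified sketch of the decision the message describes: skip the
      tree-log sync when the file's last change is already covered by a logged
      sub-transaction.  Field and parameter names are illustrative, not the
      actual btrfs fields.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct inode_sync_stub {
	uint64_t last_trans;       /* last transaction that changed the file */
	uint64_t last_sub_trans;   /* last sub-transaction that changed it */
};

static bool need_log_sync(const struct inode_sync_stub *inode,
			  uint64_t running_trans, uint64_t last_synced_sub_trans)
{
	if (inode->last_trans < running_trans)
		return false;   /* not touched in the current transaction at all */
	/* Touched this transaction: only sync if it changed after the last
	 * sub-transaction we already sent to the tree log. */
	return inode->last_sub_trans > last_synced_sub_trans;
}

int main(void)
{
	struct inode_sync_stub inode = { .last_trans = 7, .last_sub_trans = 3 };
	/* fdatasync on an unchanged file: already logged in sub-trans 3. */
	printf("sync needed: %d\n", need_log_sync(&inode, 7, 3));   /* 0 */
	printf("sync needed: %d\n", need_log_sync(&inode, 7, 2));   /* 1 */
	return 0;
}
```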
    • Btrfs: only write one super copy during fsync · 4722607d
      Committed by Chris Mason
      During a tree-log commit for fsync, we've been writing at least
      two copies of the super block and forcing them to disk.
      
      The other filesystems write only one, and this change brings us on
      par with them.  A full transaction commit will write all the super
      copies, so we still have redundant info written on a regular
      basis.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  5. 09 October 2009, 5 commits