1. 22 Dec 2011 (2 commits)
  2. 01 Dec 2011 (1 commit)
    • Btrfs: fix deadlock on metadata reservation when evicting a inode · aa38a711
      Miao Xie authored
      When I ran xfstests, I found the test tasks were blocked on metadata
      reservation.
      
      By debugging, I found the cause of this bug:
         start transaction
                |
                v
         reserve metadata space
                |
                v
         flush delayed allocation -> iput inode -> evict inode
                ^                                      |
                |                                      v
         wait for delayed allocation flush <- reserve metadata space
      
      Besides that, the flush during inode eviction blocks the thread that is
      reclaiming memory, which makes OOM happen easily.
      
      Fix this bug by skipping the flush step when evicting an inode.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      aa38a711
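
      A minimal sketch of the idea behind this fix, with simplified stand-in types and
      names: the reservation path takes a "may flush" flag, and the eviction path always
      passes 0 so it can never wait on the delalloc flush it may itself have been called
      from. The flag and signatures are assumptions for illustration, not the actual patch.

      #include <errno.h>

      typedef unsigned long long u64;

      struct block_rsv {
          u64 size;      /* how much space this reserve may hand out */
          u64 reserved;  /* how much is currently handed out */
      };

      static int reserve_metadata_bytes(struct block_rsv *rsv, u64 bytes, int may_flush)
      {
          if (rsv->reserved + bytes <= rsv->size) {
              rsv->reserved += bytes;
              return 0;
          }
          if (may_flush) {
              /* flush delayed allocation and retry; this is the step that
               * can recurse into iput -> evict and deadlock */
          }
          return -ENOSPC;
      }

      static int evict_reserve(struct block_rsv *rsv, u64 bytes)
      {
          /* eviction runs during memory reclaim: never flush from here */
          return reserve_metadata_bytes(rsv, bytes, 0);
      }
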
  3. 20 Nov 2011 (1 commit)
    • Btrfs: wait on caching if we're loading the free space cache · 291c7d2f
      Josef Bacik authored
      We've been hitting panics when running xfstest 13 in a loop for long periods of
      time.  And actually this problem has always existed so we've been hitting these
      things randomly for a while.  Basically what happens is we get a thread coming
      into the allocator and reading the space cache off of disk and adding the
      entries to the free space cache as we go.  Then we get another thread that comes
      in and tries to allocate from that block group.  Since block_group->cached !=
      BTRFS_CACHE_NO it goes ahead and tries to do the allocation.  We do this because
      if we're doing the old slow way of caching we don't want to hold people up and
      wait for everything to finish.  The problem with this is that we could end up
      discarding the space cache at some arbitrary point in the future, which means
      we could very well end up allocating space that is either bad, or that the
      real caching, when it happens, thinks is still free when it really isn't,
      causing all sorts of other problems.
      
      The solution is to add a new flag to indicate we are loading the free space
      cache from disk, and always try to cache the block group if cache->cached !=
      BTRFS_CACHE_FINISHED.  That way if we are loading the space cache anybody else
      who tries to allocate from the block group will have to wait until it's finished
      to make sure it completes successfully.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      291c7d2f
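
      A hedged sketch of the caching states this commit describes: a new "loading" state
      makes allocators wait until the on-disk free space cache has been fully read, rather
      than allocating from a half-populated cache. The state names follow the commit text;
      pthread locking below is only a stand-in for the kernel's wait mechanism.

      #include <pthread.h>

      enum cache_state {
          CACHE_NO,        /* not cached at all yet */
          CACHE_STARTED,   /* old slow caching in progress, allocation allowed */
          CACHE_LOADING,   /* new state: reading the space cache off disk */
          CACHE_FINISHED,  /* fully cached */
      };

      struct block_group {
          enum cache_state cached;
          pthread_mutex_t  lock;
          pthread_cond_t   cond;
      };

      /* allocators call this before trusting the free space entries */
      static void wait_for_space_cache(struct block_group *bg)
      {
          pthread_mutex_lock(&bg->lock);
          while (bg->cached == CACHE_LOADING)
              pthread_cond_wait(&bg->cond, &bg->lock);
          pthread_mutex_unlock(&bg->lock);
      }

      /* the loader flips the state and wakes everybody once the load is done */
      static void finish_space_cache_load(struct block_group *bg, int ok)
      {
          pthread_mutex_lock(&bg->lock);
          bg->cached = ok ? CACHE_FINISHED : CACHE_NO;
          pthread_cond_broadcast(&bg->cond);
          pthread_mutex_unlock(&bg->lock);
      }
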
  4. 15 Nov 2011 (1 commit)
    • Btrfs: fix tree corruption after multi-thread snapshots and inode_cache flush · f1ebcc74
      Liu Bo authored
      The btrfs snapshotting code requires that once a root has been
      snapshotted, we don't change it during a commit.
      
      But there are two cases that lead to tree corruption:
      
      1) multi-threaded snapshots can commit several snapshots in a transaction,
         and this may change the src root when processing the following pending
         snapshots, which corrupts the earlier snapshots;
      
      2) the free inode cache was changing the roots when it wrote the cache,
         which led to corruption.
      
      This fixes things by making sure we force COW of the block after we create a
      snapshot while committing a transaction; then any changes to the roots
      will result in COW, and all the fs roots and snapshot roots stay
      consistent.
      Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      f1ebcc74
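
      One way to picture the fix, as a hedged sketch: once a root has been snapshotted
      inside the current commit, a per-root flag forces every further modification of its
      blocks through COW, even blocks written earlier in this same transaction. The field
      and function names below are illustrative, not the patch itself.

      typedef unsigned long long u64;

      struct root_state {
          int force_cow;   /* set once a pending snapshot of this root is created */
      };

      struct tree_block {
          u64 generation;  /* transaction that allocated/last wrote this block */
          int written;     /* already written out in that transaction */
      };

      /* decide whether a block may be modified in place or must be COWed */
      static int must_cow_block(const struct root_state *root,
                                const struct tree_block *buf, u64 trans_gen)
      {
          /* normally a block allocated in this transaction and not yet written
           * out can be changed in place ... */
          if (buf->generation == trans_gen && !buf->written && !root->force_cow)
              return 0;
          /* ... but once this root was snapshotted during the commit, every
           * further change must COW so the snapshot stays untouched */
          return 1;
      }
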
  5. 06 Nov 2011 (5 commits)
    • Btrfs: fix delayed insertion reservation · c06a0e12
      Josef Bacik authored
      We all keep getting those stupid warnings from use_block_rsv when running
      stress.sh, and it's because the delayed insertion stuff is being stupid.  It's
      not the delayed insertion stuff's fault, it's all just stupid.  When marking an
      inode dirty for, oh say, updating the time on it, we just do a
      btrfs_join_transaction, which doesn't reserve any space.  This is stupid because
      we're going to have to have space reserved to make this change, but we do it
      because it's fast because chances are we're going to call it over and over again
      and it doesn't matter.  Well thanks to the delayed insertion stuff this is
      mostly the case, so we do actually need to make this reservation.  So if
      trans->bytes_reserved is 0 then try to do a normal reservation.  If not return
      ENOSPC which will make the btrfs_dirty_inode start a proper transaction which
      will let it do the whole ENOSPC dance and reserve enough space for the delayed
      insertion to steal the reservation from the transaction.
      
      The other stupid thing we do is not reserve space for the inode when writing to
      the thing.  Usually this is ok since we have to update the time so we'd have
      already done all this work before we get to the endio stuff, so it doesn't
      matter.  But this is stupid because we could write the data after the
      transaction commits where we changed the mtime of the inode so we have to cow
      all the way down to the inode anyway.  This used to be masked by the delalloc
      reservation stuff, but because we delay the update it doesn't get masked in this
      case.  So again the delayed insertion stuff bites us in the ass.  So if our
      trans->block_rsv is delalloc, just steal the reservation from the delalloc
      reserve.  Hopefully this won't bite us in the ass, but I've said that before.
      
      With this patch stress.sh no longer spits out those stupid warnings (famous last
      words).  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      c06a0e12
    • Btrfs: make a delayed_block_rsv for the delayed item insertion · 6d668dda
      Josef Bacik authored
      I've been hitting warnings in use_block_rsv when running the delayed insertion
      stuff.  It's because we will readjust global block rsv based on what is in use,
      which means we could end up discarding reservations that are for the delayed
      insertion stuff.  So instead create a separate block rsv for the delayed
      insertion stuff.  This will also make it easier to debug problems with the
      delayed insertion reservations since we will know that only the delayed
      insertion code touches this block_rsv.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      6d668dda
    • Btrfs: add a log of past tree roots · af31f5e5
      Chris Mason authored
      This takes some of the free space in the btrfs super block
      to record information about most of the roots in the last four
      commits.
      
      It also adds a -o recovery mount option to use the root history log when
      we're not able to read the tree of tree roots, the extent
      tree root, the device tree root or the csum root.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      af31f5e5
    • btrfs: separate superblock items out of fs_info · 6c41761f
      David Sterba authored
      fs_info is now ~9kb, more than fits into one page.  This will cause
      mount failure when memory is too fragmented. Top space consumers are
      super block structures super_copy and super_for_commit, ~2.8kb each.
      Allocate them dynamically. fs_info will be ~3.5kb. (measured on x86_64)
      
      Add a wrapper for freeing fs_info and all of its dynamically allocated
      members.
      Signed-off-by: David Sterba <dsterba@suse.cz>
      6c41761f
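
      A sketch of the shape of the change: the two large super block copies become
      separately allocated members of fs_info, and one helper frees fs_info together with
      everything it owns. The layouts below are placeholders; only the idea (dynamic
      allocation plus a single free wrapper) mirrors the commit.

      #include <stdlib.h>

      struct super_copy {
          char data[4096];   /* stand-in for the ~2.8kb on-disk super block */
      };

      struct fs_info {
          struct super_copy *super_copy;        /* allocated at mount */
          struct super_copy *super_for_commit;  /* allocated at mount */
          /* ... the remaining fs_info members, now well under a page ... */
      };

      /* free fs_info and all of its dynamically allocated members in one place */
      static void free_fs_info(struct fs_info *info)
      {
          if (!info)
              return;
          free(info->super_copy);
          free(info->super_for_commit);
          free(info);
      }
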
    • Btrfs: fix extent pinning bugs in the tree log · e688b725
      Chris Mason authored
      The tree log had two important bugs that could cause corruptions after a
      crash.  Sometimes we were allowing tree log blocks to be reused after
      the tree log was committed but before the transaction commit was done.
      
      This allowed a future metadata write to overwrite the tree log data.  It
      is fixed by adding a new variant of freeing reserved extents that always
      pins them.  Credit goes to Stefan Behrens and Arne Jansen for many many
      hours spent tracking this bug down.
      
      During tree log replay, we do a pass through the tree log and pin all
      the extents we find.  This makes sure the replay code won't go in and
      use any of those blocks for new allocations during replay.  The problem
      is the free space cache isn't honoring these pinned extents.  So the
      allocator can end up handing them out, leading to all kinds of problems
      during replay.
      
      The fix here is to force any free space cache to load while we pin the
      extents, and then to make sure we remove the pinned extents from the
      free space rbtree.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      Reported-by: Stefan Behrens <sbehrens@giantdisaster.de>
      e688b725
  6. 20 Oct 2011 (12 commits)
    • Btrfs: seperate out btrfs_block_rsv_check out into 2 different functions · 36ba022a
      Josef Bacik authored
      Currently btrfs_block_rsv_check does two things: it will either refill a block
      reserve like in the truncate or refill case, or it will check to see if there is
      enough space in the global reserve and possibly refill it.  However because of
      overcommit we could be well overcommitting ourselves just to try and refill the
      global reserve, when really we should just be committing the transaction.  So
      break this out into btrfs_block_rsv_refill and btrfs_block_rsv_check.  Refill
      will try to reserve more metadata if it can, and btrfs_block_rsv_check will not;
      it will only tell you whether the given factor of the total space is still reserved.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      36ba022a
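
      A hedged model of the split: refill may actually reserve more space (and can
      therefore overcommit), while check becomes a pure predicate on how much of the
      reserve is still intact. Signatures and the percentage convention are assumptions
      for illustration only.

      typedef unsigned long long u64;

      struct block_rsv {
          u64 size;      /* target size of the reserve */
          u64 reserved;  /* what is actually backed by space right now */
      };

      /* refill: try to top the reserve up to at least min_bytes; this is the
       * variant that may go out and reserve (or overcommit) more metadata */
      static int block_rsv_refill(struct block_rsv *rsv, u64 min_bytes)
      {
          if (rsv->reserved >= min_bytes)
              return 0;
          /* ... reserve_metadata_bytes(min_bytes - rsv->reserved) ... */
          return -1; /* -ENOSPC if that fails */
      }

      /* check: purely report whether min_percent of the reserve is still covered;
       * it never reserves anything, so it can't push us further into overcommit */
      static int block_rsv_check(const struct block_rsv *rsv, int min_percent)
      {
          return rsv->reserved * 100 >= rsv->size * (u64)min_percent;
      }
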
    • Btrfs: inline checksums into the disk free space cache · 5b0e95bf
      Josef Bacik authored
      Yeah yeah I know this is how we used to do it and then I changed it, but damnit
      I'm changing it back.  The fact is that writing out checksums will modify
      metadata, which could cause us to dirty a block group we've already written out,
      so we have to truncate it and all of it's checksums and re-write it which will
      write new checksums which could dirty a blockg roup that has already been
      written and you see where I'm going with this?  This can cause unmount or really
      anything that depends on a transaction to commit to take it's sweet damned time
      to happen.  So go back to the way it was, only this time we're specifically
      setting NODATACOW because we can't go through the COW pathway anyway and we're
      doing our own built-in cow'ing by truncating the free space cache.  The other
      new thing is once we truncate the old cache and preallocate the new space, we
      don't need to do that song and dance at all for the rest of the transaction, we
      can just overwrite the existing space with the new cache if the block group
      changes for whatever reason, and the NODATACOW will let us do this fine.  So
      keep track of which transaction we last cleared our cache in and if we cleared
      it in this transaction just say we're all setup and carry on.  This survives
      xfstests and stress.sh.
      
      The inode cache will continue to use the normal csum infrastructure since it
      only gets written once and there will be no more modifications to the fs tree in
      a transaction commit.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      5b0e95bf
    • Btrfs: allow us to overcommit our enospc reservations · 2bf64758
      Josef Bacik authored
      One of the things that kills us is the fact that our ENOSPC reservations are
      horribly over the top in most normal cases.  There isn't too much that can be
      done about this because when we are completely full we really need them to work
      like this so we don't under-reserve.  However, if there are plenty of unallocated
      chunks on the disk, we can use that to gauge how much we can overcommit.  So this
      patch adds chunk free space accounting so we always know how much unallocated
      space we have.  Then if we fail to make a reservation within our allocated
      space, check to see if we can overcommit.  In the normal flushing case (like
      with delalloc metadata reservations) we'll take the free space and divide it by
      2 if our metadata profile is set up for DUP or any of those, and then divide it
      by 8 to make sure we don't overcommit too much.  Then if we're in a non-flushing
      case (we really need this reservation now!) we only limit ourselves to half of
      the free space.  This makes this fio test
      
      [torrent]
      filename=torrent-test
      rw=randwrite
      size=4g
      ioengine=sync
      directory=/mnt/btrfs-test
      
      go from taking around 45 minutes to 10 seconds on my freshly formatted 3 TiB
      file system.  This doesn't seem to break my other enospc tests, but could really
      use some more testing as this is a super scary change.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      2bf64758
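
      A worked model of the overcommit limit as the commit message describes it: halve
      the unallocated space for DUP-style metadata profiles, then allow 1/8 of it when
      we can still flush, or 1/2 when the caller needs the reservation right now. The
      helper name and numbers are only a model of the described policy.

      #include <stdio.h>

      typedef unsigned long long u64;

      static u64 overcommit_limit(u64 free_chunk_space, int profile_is_dup, int can_flush)
      {
          u64 avail = free_chunk_space;

          if (profile_is_dup)
              avail /= 2;           /* metadata is written twice */
          if (can_flush)
              avail /= 8;           /* be conservative, we can still flush */
          else
              avail /= 2;           /* need the space now, allow more */
          return avail;
      }

      int main(void)
      {
          u64 free_space = 3ULL << 40;  /* ~3 TiB of unallocated chunks */

          printf("flushing limit:     %llu\n", overcommit_limit(free_space, 1, 1));
          printf("non-flushing limit: %llu\n", overcommit_limit(free_space, 1, 0));
          return 0;
      }
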
    • Btrfs: use the inode's mapping mask for allocating pages · 3b16a4e3
      Josef Bacik authored
      Johannes pointed out we were allocating only kernel pages for doing writes,
      which is kind of a big deal if you are on 32-bit and have more than a gig of RAM.
      So fix our allocations to use the mapping's gfp but still clear __GFP_FS so we
      don't re-enter.  Thanks,
      Reported-by: Johannes Weiner <jweiner@redhat.com>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      3b16a4e3
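
      A kernel-context sketch of the mask described above: take the gfp mask from the
      inode's mapping (so highmem pages are usable on 32-bit) but clear __GFP_FS so page
      allocation can't re-enter the filesystem. The helper name here is an assumption;
      btrfs wraps the same logic in its own small helper.

      #include <linux/pagemap.h>

      static inline gfp_t write_alloc_mask(struct address_space *mapping)
      {
          /* mapping's mask allows highmem, ~__GFP_FS prevents fs re-entry */
          return mapping_gfp_mask(mapping) & ~__GFP_FS;
      }

      /* used wherever pages are grabbed for buffered writes, e.g.:
       *   page = find_or_create_page(mapping, index, write_alloc_mask(mapping));
       */
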
    • Btrfs: stop passing a trans handle all around the reservation code · 4a92b1b8
      Josef Bacik authored
      The only thing that we need to have a trans handle for is in
      reserve_metadata_bytes, and that's to know how much flushing we can do.  So
      instead of passing it around, just check current->journal_info for a
      trans_handle so we know if we can commit a transaction to try and free up space
      or not.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      4a92b1b8
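
      A small kernel-context sketch of the check this commit relies on: instead of
      threading a trans handle through every reservation call, look at
      current->journal_info, which is set while a task has a transaction joined. The
      wrapper name below is illustrative.

      #include <linux/sched.h>

      static inline bool joined_a_transaction(void)
      {
          /* non-NULL means this task already holds a transaction handle, so
           * the reservation code must not try to commit one to free space */
          return current->journal_info != NULL;
      }
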
    • Btrfs: allow callers to specify if flushing can occur for btrfs_block_rsv_check · 482e6dc5
      Josef Bacik authored
      If you run xfstest 224 you will get lots of messages about not being able to
      delete inodes and that they will be cleaned up next mount.  This is because
      btrfs_block_rsv_check was not calling reserve_metadata_bytes with the ability to
      flush, so if there was not enough space, it simply failed.  But in the truncate
      and evict cases we could easily flush space to try and get enough space to do our
      work, so make btrfs_block_rsv_check take a flush argument to pass down to
      reserve_metadata_bytes.  Now xfstest 224 runs fine without all those
      complaints.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      482e6dc5
    • Btrfs: reduce the amount of space needed for truncates · 07127184
      Josef Bacik authored
      With btrfs_truncate_inode_items we always return if we have to go to another
      leaf, which makes us do our reservation again.  This means we will only ever
      modify one leaf at a time, so we only need 1 item's worth of slack space.  Also,
      since we are deleting we will not be creating nodes as we go down; if anything
      we'll be freeing them as we merge them together, so make a different
      calculation for truncate which will only have the worst-case usage of COWing
      the entire path down to the leaf.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      07127184
    • Btrfs: kill btrfs_truncate_reserve_metadata · 5e962c78
      Josef Bacik authored
      Since we've optimized the truncate path, we no longer require this function.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      5e962c78
    • Btrfs: kill unused parts of block_rsv · dabdb640
      Josef Bacik authored
      The priority and refill_used flags are not used anymore, and neither is the
      usage counter, so just remove them from btrfs_block_rsv.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      dabdb640
    • Btrfs: kill the durable block rsv stuff · 37be25bc
      Josef Bacik authored
      This is confusing code and isn't used by anything anymore, so delete it.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      37be25bc
    • Btrfs: kill the orphan space calculation for snapshots · dba68306
      Josef Bacik authored
      This patch kills off the calculation for the amount of space needed for the
      orphan operations during a snapshot.  The thing is we only do snapshots on
      commit, so any space that is in the block_rsv->freed[] isn't going to be in the
      new snapshot anyway, so there isn't any reason to require that space to be
      reserved for the snapshot to occur.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      dba68306
    • Btrfs: use bytes_may_use for all ENOSPC reservations · fb25e914
      Josef Bacik authored
      We have been using bytes_reserved for metadata reservations, which is wrong
      since we use that to keep track of outstanding reservations from the allocator.
      This resulted in us doing a lot of silly things to make sure we don't allocate a
      bunch of metadata chunks since we never had a real view of how much space was
      actually in use by metadata.
      
      This passes Arne's enospc test and xfstests as well as my own enospc tests.
      Hopefully this will get us moving in the right direction.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      fb25e914
  7. 02 Oct 2011 (2 commits)
    • btrfs: initial readahead code and prototypes · 7414a03f
      Arne Jansen authored
      This is the implementation for the generic read ahead framework.
      
      To trigger a readahead, btrfs_reada_add must be called. It will start
      a read ahead for the given range [start, end) on tree root. The returned
      handle can either be used to wait on the readahead to finish
      (btrfs_reada_wait), or to send it to the background (btrfs_reada_detach).
      
      The read ahead works as follows:
      On btrfs_reada_add, the root of the tree is inserted into a radix_tree.
      reada_start_machine will then search for extents to prefetch and trigger
      some reads. When a read finishes for a node, all contained node/leaf
      pointers that lie in the given range will also be enqueued. The reads will
      be triggered in sequential order, thus giving a big win over a naive
      enumeration. It will also make use of multi-device layouts. Each disk
      will have its own read pointer and all disks will be utilized in parallel.
      Also, no two disks will read both sides of a mirror simultaneously, as this
      would waste seeking capacity. Instead both disks will read different parts
      of the filesystem.
      Any number of readaheads can be started in parallel. The read order will be
      determined globally, i.e. 2 parallel readaheads will normally finish faster
      than 2 started one after another.
      
      Changes v2:
       - protect root->node by transaction instead of node_lock
       - fix missed branches:
          The readahead had too simple a check to determine whether a branch from
          a node should be checked or not. It now also records the upper bound
          of each node to see if the requested RA range lies within.
       - use KERN_CONT to debug output, to avoid line breaks
       - defer reada_start_machine to worker to avoid deadlock
      
      Changes v3:
       - protect root->node by rcu
      
      Changes v5:
       - changed EIO-semantics of reada_tree_block_flagged
       - remove spin_lock from reada_control and make elems an atomic_t
       - remove unused read_total from reada_control
       - kill reada_key_cmp, use btrfs_comp_cpu_keys instead
       - use kref-style release functions where possible
       - return struct reada_control * instead of void * from btrfs_reada_add
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      7414a03f
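
      A hedged usage sketch of the interface described above, for code inside the btrfs
      tree: start a readahead over a key range of a tree, then either wait for it to
      finish or detach it to the background. The key-range handling and error checking
      are assumptions based on the commit text, not the exact kernel signatures.

      /* kernel-context sketch; struct layouts and full error handling elided */
      static void prefetch_tree_range(struct btrfs_root *root,
                                      struct btrfs_key *start,
                                      struct btrfs_key *end, int wait)
      {
          struct reada_control *rc;

          /* start readahead of [start, end) on this tree */
          rc = btrfs_reada_add(root, start, end);
          if (!rc)
              return;

          if (wait)
              btrfs_reada_wait(rc);    /* block until the range has been read */
          else
              btrfs_reada_detach(rc);  /* drop our handle, reads continue */
      }
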
    • btrfs: state information for readahead · 90519d66
      Arne Jansen authored
      Add state information for readahead to btrfs_fs_info and btrfs_device
      
      Changes v2:
       - don't wait in radix_trees
       - add own set of workers for readahead
      Reviewed-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      90519d66
  8. 17 Aug 2011 (2 commits)
  9. 02 Aug 2011 (3 commits)
  10. 28 Jul 2011 (3 commits)
    • Btrfs: switch the btrfs tree locks to reader/writer · bd681513
      Chris Mason authored
      The btrfs metadata btree is the source of significant
      lock contention, especially in the root node.   This
      commit changes our locking to use a reader/writer
      lock.
      
      The lock is built on top of rw spinlocks, and it
      extends the lock tracking to remember if we have a
      read lock or a write lock when we go to blocking.  Atomics
      count the number of blocking readers or writers at any
      given time.
      
      It removes all of the adaptive spinning from the old code
      and uses only the spinning/blocking hints inside of btrfs
      to decide when it should continue spinning.
      
      In read heavy workloads this is dramatically faster.  In write
      heavy workloads we're still faster because of less contention
      on the root node lock.
      
      We suffer slightly in dbench because we schedule more often
      during write locks, but all other benchmarks so far are improved.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      bd681513
    • Btrfs: fix enospc problems with delalloc · 9e0baf60
      Josef Bacik authored
      So I had this brilliant idea to use atomic counters for outstanding and reserved
      extents, but this turned out to be a bad idea.  Consider this case, where we
      have 1 outstanding extent and 1 reserved extent:
      
      Reserver                                  Releaser
                                                atomic_dec(outstanding)  now 0
      atomic_read(outstanding) + 1  gets 1
      atomic_read(reserved)         gets 1
      don't actually reserve anything
      because they are the same
                                                atomic_cmpxchg(reserved, 1, 0)
      atomic_inc(outstanding)
      atomic_add(0, reserved)
                                                free reserved space for 1 extent
      
      Then the reserver now has no actual space reserved for it, and when it goes to
      finish the ordered IO it won't have enough space to do its allocation, and you
      get those lovely warnings.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      9e0baf60
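
      A hedged model of the direction of the fix: read and update the two counters under
      a single lock, so "how many extents still need a reservation" is answered
      atomically and the window shown in the race above disappears. Field and lock names
      are illustrative, and pthread locking stands in for the kernel's.

      #include <pthread.h>

      struct inode_space {
          pthread_mutex_t lock;
          unsigned int outstanding_extents;
          unsigned int reserved_extents;
      };

      /* returns how many extents still need a reservation, decided atomically */
      static unsigned int extents_to_reserve(struct inode_space *s,
                                             unsigned int new_extents)
      {
          unsigned int need = 0;

          pthread_mutex_lock(&s->lock);
          s->outstanding_extents += new_extents;
          if (s->outstanding_extents > s->reserved_extents) {
              need = s->outstanding_extents - s->reserved_extents;
              s->reserved_extents = s->outstanding_extents;
          }
          pthread_mutex_unlock(&s->lock);
          return need;
      }
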
    • Btrfs: use a worker thread to do caching · bab39bf9
      Josef Bacik authored
      A user reported a deadlock when copying a bunch of files.  This is because they
      were low on memory and kthreadd got hung up trying to migrate pages for an
      allocation when starting the caching kthread.  The page was locked by the person
      starting the caching kthread.  To fix this we just need to use the async thread
      stuff so that the threads are already created and we don't have to worry about
      deadlocks.  Thanks,
      Reported-by: Roman Mamedov <rm@romanrm.ru>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      bab39bf9
  11. 26 Jul 2011 (1 commit)
  12. 21 Jul 2011 (2 commits)
    • fs: push i_mutex and filemap_write_and_wait down into ->fsync() handlers · 02c24a82
      Josef Bacik authored
      Btrfs needs to be able to control how filemap_write_and_wait_range() is called
      in fsync to make it less of a painful operation, so push taking i_mutex and
      the calling of filemap_write_and_wait() down into the ->fsync() handlers.  Some
      file systems can drop taking the i_mutex altogether, it seems, like ext3 and
      ocfs2.  For correctness' sake I just pushed everything down in all cases to make
      sure that we keep the current behavior the same for everybody, and then each
      individual fs maintainer can make up their mind about what to do from there.
      Thanks,
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      02c24a82
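
      A sketch of the general shape an ->fsync() handler takes after this change: the
      handler itself writes back and waits on the range and takes i_mutex, instead of the
      VFS doing it up front. The range arguments follow the fsync prototype of that era;
      exactly which lock is needed, and in what order, varies per filesystem, and the
      function name here is a placeholder.

      #include <linux/fs.h>

      static int example_fsync(struct file *file, loff_t start, loff_t end,
                               int datasync)
      {
          struct inode *inode = file->f_mapping->host;
          int ret;

          /* writeback is now the handler's job, over just the given range */
          ret = filemap_write_and_wait_range(inode->i_mapping, start, end);
          if (ret)
              return ret;

          mutex_lock(&inode->i_mutex);
          /* ... flush the journal / tree log for this inode ... */
          mutex_unlock(&inode->i_mutex);
          return 0;
      }
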
    • Btrfs: implement our own ->llseek · b2675157
      Josef Bacik authored
      In order to handle SEEK_HOLE/SEEK_DATA we need to implement our own llseek.
      Basically for the normal SEEK_*'s we will just defer to the generic helper, and
      for SEEK_HOLE/SEEK_DATA we will use our fiemap helper to figure out the nearest
      hole or data.  Currently this helper doesn't check for delalloc bytes for
      prealloc space, so for now treat prealloc as data until that is fixed.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      b2675157
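
      A sketch of the ->llseek split described above: the normal whence values go to the
      generic helper, while SEEK_HOLE/SEEK_DATA fall through to a filesystem-specific
      lookup (btrfs uses its extent mapping code for this). find_hole_or_data() below is
      a placeholder for that lookup, not a real kernel function.

      #include <linux/fs.h>

      static loff_t find_hole_or_data(struct inode *inode, loff_t offset, int whence);

      static loff_t example_llseek(struct file *file, loff_t offset, int whence)
      {
          struct inode *inode = file->f_mapping->host;

          switch (whence) {
          case SEEK_SET:
          case SEEK_CUR:
          case SEEK_END:
              return generic_file_llseek(file, offset, whence);
          case SEEK_DATA:
          case SEEK_HOLE:
              /* placeholder: walk the extent mappings to find the next data
               * or hole at/after offset (prealloc treated as data for now) */
              return find_hole_or_data(inode, offset, whence);
          default:
              return -EINVAL;
          }
      }
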
  13. 20 Jul 2011 (2 commits)
  14. 11 Jul 2011 (2 commits)
    • Btrfs: serialize flushers in reserve_metadata_bytes · fdb5effd
      Josef Bacik authored
      We keep having problems with early enospc, and that's because our method of
      making space is inherently racy.  The problem is we can have one guy trying to
      make space for himself, and in the meantime people come in and steal his
      reservation.  In order to stop this we make a waitqueue and put anybody who
      comes into reserve_metadata_bytes on that waitqueue if somebody is trying to
      make more space.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      fdb5effd
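
      A model of the serialization described above: the first task to start making space
      sets a flag, and later tasks sleep on a wait queue until the flusher is done,
      instead of racing in and stealing the space it frees. Names and layout are
      simplified stand-ins, with pthread primitives in place of the kernel waitqueue.

      #include <pthread.h>

      struct space_info {
          pthread_mutex_t lock;
          pthread_cond_t  wait;
          int flushing;           /* somebody is already making space */
      };

      static void reserve_or_wait(struct space_info *si)
      {
          pthread_mutex_lock(&si->lock);
          while (si->flushing)
              pthread_cond_wait(&si->wait, &si->lock);  /* queue behind the flusher */
          si->flushing = 1;
          pthread_mutex_unlock(&si->lock);

          /* ... try to reserve, flushing to make space if needed ... */

          pthread_mutex_lock(&si->lock);
          si->flushing = 0;
          pthread_cond_broadcast(&si->wait);   /* let the next waiter try */
          pthread_mutex_unlock(&si->lock);
      }
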
    • Btrfs: do transaction space reservation before joining the transaction · b5009945
      Josef Bacik authored
      We have to do weird things when handling enospc in the transaction joining code.
      Because we've already joined the transaction we cannot commit the transaction
      within the reservation code since it will deadlock, so we have to return EAGAIN
      and then make sure we don't retry too many times.  Instead of doing this, just
      do the reservation the normal way before we join the transaction, that way we
      can do whatever we want to try and reclaim space, and then if it fails we know
      for sure we are out of space and we can return ENOSPC.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      b5009945
  15. 07 Jul 2011 (1 commit)