1. 28 Apr 2012, 1 commit
  2. 22 Mar 2012, 6 commits
  3. 17 Jan 2012, 1 commit
    • J
      Btrfs: add a delalloc mutex to inodes for delalloc reservations · f248679e
      Committed by Josef Bacik
      I was using i_mutex for this, but we're getting bogus lockdep warnings by
      doing that and there's no real way to get rid of them, so just stop using
      i_mutex to protect delalloc metadata reservations and use a dedicated
      delalloc mutex instead.  This shouldn't be contended often at all, only if
      you are writing and mmap writing to the file at the same time.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      f248679e
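      A minimal sketch of the change described above, assuming the mutex lives in
      struct btrfs_inode and wraps the metadata reservation path; the field
      placement and the reservation helper name are placeholders, not the
      verbatim patch:

        /* Sketch only: members and helper names are illustrative. */
        struct btrfs_inode {
                /* ... existing members elided ... */
                struct mutex delalloc_mutex;    /* guards delalloc metadata reservations */
        };

        static int reserve_delalloc_metadata(struct inode *inode, u64 num_bytes)
        {
                int ret;

                /* i_mutex used to be taken here and triggered bogus lockdep
                 * warnings; a dedicated mutex avoids that, and contention is
                 * rare (write vs. mmap-write on the same file). */
                mutex_lock(&BTRFS_I(inode)->delalloc_mutex);
                ret = __reserve_metadata_bytes(inode, num_bytes);  /* hypothetical helper */
                mutex_unlock(&BTRFS_I(inode)->delalloc_mutex);
                return ret;
        }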
  4. 22 Dec 2011, 1 commit
    • A
      Btrfs: mark delayed refs as for cow · 66d7e7f0
      Committed by Arne Jansen
      Add a for_cow parameter to add_delayed_*_ref and pass the appropriate value
      from every call site. The for_cow parameter will later on be used to
      determine if a ref will change anything with respect to qgroups.
      
      Delayed refs coming from relocation are always counted as for_cow, as they
      don't change subvol quota.
      
      Also pass in the fs_info for later use.
      
      btrfs_find_all_roots() will use this as an optimization, as changes that are
      for_cow will not change anything with respect to which root points to a
      certain leaf. Thus, we don't need to add the current sequence number to
      those delayed refs.
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      Signed-off-by: Jan Schmidt <list.btrfs@jan-o-sch.net>
      66d7e7f0
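      A hedged sketch of what the call-site change described above looks like;
      the argument list is abbreviated and is not the exact upstream prototype:

        /* Abbreviated prototype: fs_info and for_cow are the new parameters. */
        int btrfs_add_delayed_tree_ref(struct btrfs_fs_info *fs_info,
                                       struct btrfs_trans_handle *trans,
                                       u64 bytenr, u64 num_bytes, u64 parent,
                                       u64 ref_root, int level, int action,
                                       int for_cow);

        /* Relocation call sites always pass for_cow = 1, since those refs do
         * not change subvol quota; qgroup accounting can later skip them. */
        ret = btrfs_add_delayed_tree_ref(fs_info, trans, bytenr, num_bytes,
                                         parent, ref_root, level,
                                         BTRFS_ADD_DELAYED_REF, 1 /* for_cow */);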
  5. 16 Dec 2011, 1 commit
    • J
      Btrfs: fix how we do delalloc reservations and how we free reservations on error · 660d3f6c
      Committed by Josef Bacik
      Running xfstests 269 with some tracing, my scripts kept spitting out
      errors about releasing bytes that we didn't actually have reserved.  This
      took me down a huge rabbit hole, and it turns out the way we deal with
      reserved_extents is wrong: we need to set it only if the reservation
      succeeds, otherwise the free() side will come in and unreserve space that
      isn't actually reserved yet, which can lead to other warnings and such.
      The math was all working out right in the end, but it caused all sorts of
      other issues and made it impossible for me to track down the original
      problem I was looking for.  The other problem is with our error handling
      in the reservation code.  There are two cases that we need to deal with:
      
      1) We raced with free.  In this case free won't free anything because
      csum_bytes is modified before we drop the lock in our reservation path, so
      free rightly doesn't release any space because the reservation code may be
      depending on that reservation.  However, if we fail, we need the
      reservation side to do the free at that point since that space is no
      longer in use.  So as it stands the code was doing this fine and it worked
      out, except in case #2
      
      2) We don't race with free.  Nobody comes in and changes anything, and our
      reservation fails.  In this case we didn't reserve anything anyway and we just
      need to clean up csum_bytes but not free anything.  So we keep track of
      csum_bytes before we drop the lock and if it hasn't changed we know we can just
      decrement csum_bytes and carry on.
      
      Because we can race with free() (since we have to drop our spin_lock to do
      the reservation), I'm going to serialize all reservations with the
      i_mutex.  We already get this for free in the heavy-use paths: truncate
      and file write all hold the i_mutex, so it just needed to be added to
      page_mkwrite and various ioctl/balance paths.  With this patch my space
      leak scripts no longer scream bloody murder.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      660d3f6c
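      A rough sketch of the two cases above.  The per-inode ->lock and
      csum_bytes fields are as described in the commit message; the helpers
      reserve_metadata_bytes(), undo_csum_bytes() and release_reserved_bytes()
      are hypothetical stand-ins for the real reservation code:

        spin_lock(&BTRFS_I(inode)->lock);
        saved_csum_bytes = BTRFS_I(inode)->csum_bytes;  /* remember before dropping the lock */
        /* ... bump csum_bytes for this reservation ... */
        spin_unlock(&BTRFS_I(inode)->lock);

        ret = reserve_metadata_bytes(to_reserve);       /* may sleep, so no spinlock held */
        if (ret) {
                spin_lock(&BTRFS_I(inode)->lock);
                if (BTRFS_I(inode)->csum_bytes == saved_csum_bytes) {
                        /* case 2: nobody raced with us; just undo our
                         * csum_bytes bump, nothing was actually reserved */
                        undo_csum_bytes(inode, num_bytes);
                } else {
                        /* case 1: free() raced with us and deliberately left
                         * this space alone, so the reservation side frees it */
                        to_free = undo_csum_bytes(inode, num_bytes);
                        release_reserved_bytes(inode, to_free);
                }
                spin_unlock(&BTRFS_I(inode)->lock);
        }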
  6. 11 Nov 2011, 1 commit
    • M
      Btrfs: fix orphan backref nodes · 76b9e23d
      Committed by Miao Xie
      If the root node of a fs/file tree is in the block group that is being
      relocated but the other nodes are not, and we create a snapshot of this
      tree in the window between the relocation tree being created and
      ->create_reloc_tree being set to 0, Btrfs will create some backref nodes
      that are the lowest nodes of the backref cache.  But we forget to add them
      to the ->leaves list of the backref cache and deal with them, so they
      eventually trigger a BUG_ON():
      
        kernel BUG at fs/btrfs/relocation.c:239!
      
      This patch fixes it by adding them to the ->leaves list of the backref
      cache.
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      76b9e23d
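      The essence of the fix, sketched under the assumption that a "lowest"
      backref node is one with no lower entries (illustrative, not the literal
      diff):

        /* A lowest backref node must also sit on the cache's ->leaves list so
         * that later cleanup finds it instead of hitting the BUG_ON(). */
        if (list_empty(&new_node->lower))
                list_add_tail(&new_node->lower, &cache->leaves);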
  7. 21 Oct 2011, 1 commit
  8. 20 Oct 2011, 6 commits
  9. 28 Jul 2011, 1 commit
  10. 18 Jun 2011, 1 commit
    • C
      Btrfs: fix relocation races · 7585717f
      Committed by Chris Mason
      The recent commit to get rid of our trans_mutex introduced
      some races with block group relocation.  The problem is that relocation
      needs to do some record keeping about each root, and it was relying
      on the transaction mutex to coordinate things in subtle ways.
      
      This fix adds a mutex just for the relocation code and makes sure
      it doesn't have a big impact on normal operations.  The race is
      really fixed in btrfs_record_root_in_trans, which is where we
      step back and wait for the relocation code to finish accounting
      setup.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      7585717f
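      A hedged sketch of the shape of the fix, assuming the new mutex is called
      reloc_mutex and the existing bookkeeping helper is record_root_in_trans;
      not the verbatim patch:

        int btrfs_record_root_in_trans(struct btrfs_trans_handle *trans,
                                       struct btrfs_root *root)
        {
                if (!root->ref_cows)
                        return 0;

                /* Wait here for the relocation code to finish its per-root
                 * accounting setup before recording the root in this
                 * transaction; normal operation takes this mutex only briefly. */
                mutex_lock(&root->fs_info->reloc_mutex);
                record_root_in_trans(trans, root);
                mutex_unlock(&root->fs_info->reloc_mutex);
                return 0;
        }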
  11. 24 May 2011, 3 commits
    • J
      Btrfs: don't always do readahead · 026fd317
      Committed by Josef Bacik
      Our readahead is sort of sloppy, and really isn't always needed.  For
      example, if ls is stat()ing every entry (which is the default), it's going
      to stat in non-disk order, so if you have a directory with a huge number
      of files, readahead does nothing but waste time for the stat case.  Taking
      the unconditional readahead out made my test go from 57 minutes to 36
      minutes.  This means that everywhere we loop through the tree we want to
      make sure we set path->reada properly, so I went through and found all of
      the places where we loop through the path and set reada to 1.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      026fd317
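      A small sketch of what a caller looks like after this change; the search
      itself is generic, the point is that readahead is now enabled explicitly
      and only for genuinely sequential tree walks:

        struct btrfs_path *path;
        int ret;

        path = btrfs_alloc_path();
        if (!path)
                return -ENOMEM;
        path->reada = 1;        /* opt in to readahead for this sequential walk only */

        ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
        /* ... iterate forward over the leaves ... */
        btrfs_free_path(path);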
    • J
      Btrfs: kill trans_mutex · a4abeea4
      Committed by Josef Bacik
      We use trans_mutex for lots of things, here's a basic list
      
      1) To serialize trans_handles joining the currently running transaction
      2) To make sure that no new trans handles are started while we are committing
      3) To protect the dead_roots list and the transaction lists
      
      Really, serializing trans_handles joining is not too hard; it mostly boils
      down to safely acquiring a reference to the transaction.  So replace the
      trans_mutex with a trans_lock spinlock and use it to do the following:
      
      1) Protect fs_info->running_transaction.  All trans handles have to do is check
      this, and then take a reference of the transaction and keep on going.
      2) Protect the fs_info->trans_list.  This doesn't get used too much, basically
      it just holds the current transactions, which will usually just be the currently
      committing transaction and the currently running transaction at most.
      3) Protect the dead roots list.  This is only ever processed by splicing the
      list so this is relatively simple.
      4) Protect the fs_info->reloc_ctl stuff.  This is very lightweight and was using
      the trans_mutex before, so this is a pretty straightforward change.
      5) Protect fs_info->no_trans_join.  Because we don't hold the trans_lock over
      the entirety of the commit we need to have a way to block new people from
      creating a new transaction while we're doing our work.  So we set no_trans_join
      and in join_transaction we test to see if that is set, and if it is we do a
      wait_on_commit.
      6) Make the transaction use count atomic so we don't need to take locks to
      modify it when we're dropping references.
      7) Add a commit_lock to the transaction to make sure multiple people trying to
      commit the same transaction don't race and commit at the same time.
      8) Make open_ioctl_trans an atomic so we don't have to take any locks for ioctl
      trans.
      
      I have tested this with xfstests, but obviously it is a pretty hairy change so
      lots of testing is greatly appreciated.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      a4abeea4
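      A condensed sketch of the join path under the new spinlock, using the
      names from the list above (no_trans_join, the atomic use_count); the real
      join_transaction does considerably more:

        static int join_transaction(struct btrfs_root *root)
        {
                struct btrfs_fs_info *info = root->fs_info;
                struct btrfs_transaction *cur;

                spin_lock(&info->trans_lock);
                if (info->no_trans_join) {
                        /* a commit is in progress; the caller backs off and
                         * waits on the commit instead of joining */
                        spin_unlock(&info->trans_lock);
                        return -EBUSY;
                }
                cur = info->running_transaction;
                if (cur) {
                        /* taking a reference is now just an atomic op */
                        atomic_inc(&cur->use_count);
                        spin_unlock(&info->trans_lock);
                        return 0;
                }
                spin_unlock(&info->trans_lock);
                /* ... otherwise allocate a new transaction and install it ... */
                return 0;
        }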
    • J
      Btrfs: take away the num_items argument from btrfs_join_transaction · 7a7eaa40
      Committed by Josef Bacik
      I keep forgetting that btrfs_join_transaction() just ignores the num_items
      argument, which leads me to sending pointless patches and looking stupid :).  So
      just kill the num_items argument from btrfs_join_transaction and
      btrfs_start_ioctl_transaction, since neither of them uses it.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      7a7eaa40
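      The prototype change, sketched from the commit's description (illustrative
      rather than the literal diff):

        /* before: the argument was accepted but never used */
        struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root,
                                                          int num_items);

        /* after: the unused parameter is gone */
        struct btrfs_trans_handle *btrfs_join_transaction(struct btrfs_root *root);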
  12. 12 May 2011, 1 commit
    • A
      btrfs: scrub · a2de733c
      Committed by Arne Jansen
      This adds an initial implementation for scrub.  It works quite
      straightforwardly: usermode issues an ioctl for each device in the fs.
      For each device, it enumerates the allocated device chunks.  For each
      chunk, the contained extents are enumerated and the data checksums
      fetched.  The extents are read sequentially and the checksums verified.
      If an error occurs (checksum mismatch or EIO), a good copy is searched
      for; if one is found, the bad copy is rewritten.
      All enumerations happen from the commit roots. During a transaction
      commit, the scrubs get paused and afterwards continue from the new
      roots.
      
      This commit is based on the series originally posted to linux-btrfs
      with some improvements that resulted from comments from David Sterba,
      Ilya Dryomov and Jan Schmidt.
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      a2de733c
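      A high-level sketch of the per-device flow described above; every helper
      named here is a hypothetical stand-in, not a real scrub function:

        static int scrub_one_device(struct btrfs_device *dev)
        {
                u64 chunk_offset = 0;
                int ret;

                /* walk the allocated device chunks, enumerating from the
                 * commit roots */
                while (next_allocated_chunk(dev, &chunk_offset)) {
                        /* enumerate the extents and checksums in this chunk,
                         * read them sequentially and verify; on a checksum
                         * error or EIO, look for a good copy and rewrite the
                         * bad one */
                        ret = scrub_chunk_extents(dev, chunk_offset);
                        if (ret)
                                return ret;
                        /* a transaction commit pauses scrub here; it resumes
                         * afterwards from the new commit roots */
                }
                return 0;
        }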
  13. 10 May 2011, 1 commit
  14. 06 May 2011, 1 commit
  15. 02 May 2011, 4 commits
  16. 25 Apr 2011, 2 commits
    • L
      Btrfs: Always use 64bit inode number · 33345d01
      Committed by Li Zefan
      There's a potential problem on 32-bit systems when we exhaust 32-bit inode
      numbers and start to allocate big inode numbers, because btrfs uses
      inode->i_ino in many places.
      
      So here we always use BTRFS_I(inode)->location.objectid, which is a
      u64 variable.
      
      There are 2 exceptions where BTRFS_I(inode)->location.objectid !=
      inode->i_ino: the btree inode (0 vs 1) and empty subvol dirs (256 vs 2);
      inode->i_ino will still be used in those cases.
      
      Another reason to make this change is that I'm going to use a special
      inode to save the free ino cache, and its inode number must be > (u64)-256.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      33345d01
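      A plausible helper capturing the rule above; the fallback condition is
      reconstructed from the commit message, so the real helper may differ in
      detail:

        static inline u64 btrfs_ino(struct inode *inode)
        {
                u64 ino = BTRFS_I(inode)->location.objectid;

                /* the btree inode (0 vs 1) and empty subvol dirs (256 vs 2)
                 * keep using the VFS inode number */
                if (ino == 0 || ino == BTRFS_FIRST_FREE_OBJECTID)
                        ino = inode->i_ino;
                return ino;
        }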
    • L
      Btrfs: Cache free inode numbers in memory · 581bb050
      Committed by Li Zefan
      Currently btrfs stores the highest objectid of the fs tree, and it always
      returns (highest+1) as the inode number when we create a file, so inode
      numbers won't be reclaimed when we delete files, and we'll run out of
      inode numbers as we keep creating/deleting files on 32-bit machines.
      
      This fixes it, and it works similarly to how we cache free space in block
      groups.
      
      We start a kernel thread to read the file tree. By scanning inode items,
      we know which chunks of inode numbers are free, and we cache them in
      an rb-tree.
      
      Because we are searching the commit root, we have to carefully handle the
      cross-transaction case.
      
      The rb-tree is a hybrid extent+bitmap tree, so if we have too many small
      chunks of inode numbers, we'll use bitmaps.  Initially we allow 16K of RAM
      for extents, and a bitmap will be used if we exceed this threshold.  The
      extents threshold is adjusted at runtime.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      581bb050
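      A sketch of how the scanning thread could feed the cache; next_inode_item()
      and add_free_ino_range() are hypothetical stand-ins for the free-space-cache
      machinery the commit describes:

        /* Walk inode items in the commit root and record the free objectid
         * gaps.  add_free_ino_range() stands in for the hybrid extent+bitmap
         * cache: once the 16K extent budget is exceeded, ranges degrade to
         * bitmaps, and the threshold is tuned at runtime. */
        static void cache_free_inos(struct btrfs_root *root)
        {
                u64 last = BTRFS_FIRST_FREE_OBJECTID;
                u64 objectid;

                while (next_inode_item(root, &objectid)) {      /* hypothetical iterator */
                        if (objectid > last)
                                add_free_ino_range(root, last, objectid - last);
                        last = objectid + 1;
                }
                /* everything past the last inode item is free as well */
                add_free_ino_range(root, last, (u64)-256 - last);
        }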
  17. 31 Mar 2011, 1 commit
  18. 28 Mar 2011, 1 commit
  19. 18 Mar 2011, 1 commit
    • J
      Btrfs: handle errors in btrfs_orphan_cleanup · 66b4ffd1
      Committed by Josef Bacik
      If we cannot truncate an inode for some reason we will never delete the
      orphan item associated with that inode, which means that we will loop
      forever in btrfs_orphan_cleanup.  Instead of doing this, just return the
      error so the mount fails.  It sucks, but hey, it's better than hanging.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      66b4ffd1
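      The gist of the change as an illustrative fragment: the truncate failure
      is propagated instead of being ignored, so btrfs_orphan_cleanup can return
      it and the mount fails rather than spinning on the same orphan item:

        /* inside the orphan cleanup loop (sketch) */
        ret = btrfs_truncate(inode);
        iput(inode);
        if (ret)
                goto out;       /* propagate the error; previously we looped forever */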
  20. 17 Feb 2011, 1 commit
    • C
      Btrfs: allow balance to explicitly allocate chunks as it relocates · c87f08ca
      Committed by Chris Mason
      Btrfs device shrinking and balancing ends up reallocating all the blocks
      in order to allow COW to move them to new destinations.  It is somewhat
      awkward in terms of ENOSPC because most of the enospc code is built
      around the idea that some operation on a reference counted tree triggers
      allocations in the non-reference counted trees.
      
      This commit changes the balancing code to deal with enospc by trying to
      allocate a new chunk.  If that allocation succeeds, we go ahead and
      retry whatever failed due to enospc.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      c87f08ca
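      A sketch of the retry-on-ENOSPC idea; relocate_one_step() is a hypothetical
      stand-in for whichever relocation step failed, and the btrfs_alloc_chunk
      arguments are simplified:

        ret = relocate_one_step(rc);
        if (ret == -ENOSPC) {
                /* explicitly allocate a chunk of the needed type, then retry */
                ret = btrfs_alloc_chunk(trans, extent_root, flags);
                if (!ret)
                        ret = relocate_one_step(rc);
        }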
  21. 15 Feb 2011, 1 commit
  22. 01 Feb 2011, 1 commit
  23. 29 Jan 2011, 1 commit
    • T
      btrfs: fix return value check of btrfs_join_transaction() · 3612b495
      Committed by Tsutomu Itoh
      This adds error checking for btrfs_join_transaction() and
      btrfs_join_transaction_nolock(), and corrects mistaken error checks in
      several places.
      
      To make Btrfs more stable, I think we should reduce the use of BUG_ON(),
      but that will take a long time, so I propose this patch as a short-term
      solution.
      
      With this patch:
       - The parts that need to be corrected to make Btrfs more stable are
         clarified.
       - We no longer panic on a NULL pointer dereference etc. (even if the
         number of BUG_ON()s temporarily increases).
       - The error code is returned in the places where an error can easily be
         returned.
      
      As a long-term plan:
       - Reduce BUG_ON() by using the forced-readonly framework, etc.
      Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      3612b495
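      The typical shape of the corrected check: btrfs_join_transaction() reports
      failure via an ERR_PTR, so IS_ERR()/PTR_ERR() is the right test rather
      than a NULL comparison:

        trans = btrfs_join_transaction(root, 1);
        if (IS_ERR(trans))
                return PTR_ERR(trans);  /* was previously assumed to never fail */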
  24. 30 Oct 2010, 1 commit