1. 28 July 2011, 1 commit
    • Btrfs: fix deadlock when throttling transactions · 81317fde
      Committed by Josef Bacik
      Hit this nice little deadlock.  What happens is this
      
      __btrfs_end_transaction with throttle set, --use_count so it equals 0
        btrfs_commit_transaction
          <somebody else actually manages to start the commit>
          btrfs_end_transaction --use_count so now its -1 <== BAD
            we just return and wait on the transaction
      
      This is bad because we just return while our use_count is -1 and don't let go
      of our num_writers count on the transaction, so the task committing the
      transaction just sits there forever.  Fix this by incrementing our use_count
      if we're going to call commit_transaction, so that the nested call to
      btrfs_end_transaction stays valid.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      81317fde
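
      A minimal userspace model of the refcount fix described above; txn, txn_end and
      txn_commit are illustrative stand-ins for the btrfs handles, and the racing
      committer is only simulated:

        #include <assert.h>
        #include <stdio.h>

        struct txn {
            int use_count;   /* references held on this trans handle     */
            int num_writers; /* writers the real committer would wait on */
        };

        static void txn_end(struct txn *t, int throttle);

        /* Somebody else already started this commit, so all the commit path
         * does here is end its own handle and wait for the other committer. */
        static void txn_commit(struct txn *t)
        {
            txn_end(t, 0);
        }

        static void txn_end(struct txn *t, int throttle)
        {
            if (--t->use_count)
                return;     /* early return: num_writers is NOT dropped */
            if (throttle) {
                /* The fix: take an extra reference before calling into the
                 * commit path so the nested txn_end above stays balanced.
                 * Without it, use_count goes to -1 and the early return
                 * leaks our num_writers count, deadlocking the committer. */
                t->use_count++;
                txn_commit(t);
                return;
            }
            t->num_writers--;   /* really release the handle */
        }

        int main(void)
        {
            struct txn t = { .use_count = 1, .num_writers = 1 };
            txn_end(&t, 1);
            assert(t.use_count == 0 && t.num_writers == 0);
            puts("handle released; the committer would not block");
            return 0;
        }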
  2. 11 July 2011, 1 commit
    • Btrfs: do transaction space reservation before joining the transaction · b5009945
      Committed by Josef Bacik
      We have to do weird things when handling ENOSPC in the transaction joining
      code.  Because we've already joined the transaction, we cannot commit the
      transaction within the reservation code, since that would deadlock, so we have
      to return EAGAIN and then make sure we don't retry too many times.  Instead of
      doing this, just do the reservation the normal way before we join the
      transaction; that way we can do whatever we want to try and reclaim space, and
      if it fails we know for sure we are out of space and can return ENOSPC.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      b5009945
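
      A sketch of the reordering described above, with invented helper names
      (reserve_metadata, join_transaction); the only point is that the reservation
      runs while no transaction is joined, so it is free to flush or commit:

        #include <errno.h>
        #include <stdio.h>

        /* May flush delalloc or even commit a transaction, because the caller
         * has not joined one yet; returns -ENOSPC only after every reclaim
         * option has been tried. */
        static int reserve_metadata(long bytes)
        {
            (void)bytes;
            return 0;
        }

        static int join_transaction(void)
        {
            return 0;
        }

        /* New ordering: reserve first, then join; no EAGAIN retry loop. */
        static int start_transaction(long bytes)
        {
            int ret = reserve_metadata(bytes);
            if (ret)
                return ret;            /* definitely out of space */
            return join_transaction();
        }

        int main(void)
        {
            printf("start_transaction: %d\n", start_transaction(4096));
            return 0;
        }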
  3. 18 June 2011, 2 commits
    • Btrfs: avoid delayed metadata items during commits · e999376f
      Committed by Chris Mason
      Snapshot creation has two phases.  One is the initial snapshot setup,
      and the second is done during commit, while nobody is allowed to modify
      the root we are snapshotting.
      
      The delayed metadata insertion code can break that rule: it does a
      delayed inode update on the inode of the parent of the snapshot,
      and delayed directory item insertion.

      This patch makes sure to run the pending delayed operations before we
      record the snapshot root, which avoids corruption.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      e999376f
    • Btrfs: fix relocation races · 7585717f
      Committed by Chris Mason
      The recent commit to get rid of our trans_mutex introduced
      some races with block group relocation.  The problem is that relocation
      needs to do some record keeping about each root, and it was relying
      on the transaction mutex to coordinate things in subtle ways.
      
      This fix adds a mutex just for the relocation code and makes sure
      it doesn't have a big impact on normal operations.  The race is
      really fixed in btrfs_record_root_in_trans, which is where we
      step back and wait for the relocation code to finish accounting
      setup.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      7585717f
  4. 16 June 2011, 1 commit
    • Btrfs: set no_trans_join after trying to expand the transaction · ed0ca140
      Committed by Josef Bacik
      We can lock up if we try to allow new writers to join the transaction and we
      have flushoncommit set or have a pending snapshot.  This is because we set
      no_trans_join and then loop around and try to wait for ordered extents again.
      The problem is the ordered endio stuff needs to join the transaction, which it
      can't do because no_trans_join is set.  So instead wait until after this loop to
      set no_trans_join and then make sure to wait for num_writers == 1 in case
      anybody got started in between us exiting the loop and setting no_trans_join.
      This could easily be reproduced by mounting -o flushoncommit and running xfstest
      13.  It cannot be reproduced with this patch.  Thanks,
      Reported-by: Jim Schutt <jaschut@sandia.gov>
      Signed-off-by: Josef Bacik <josef@redhat.com>
      ed0ca140
  5. 11 June 2011, 1 commit
  6. 09 June 2011, 1 commit
  7. 04 June 2011, 1 commit
  8. 24 May 2011, 4 commits
    • Btrfs: kill BTRFS_I(inode)->block_group · d82a6f1d
      Committed by Josef Bacik
      Originally this was going to be used as a way to give hints to the allocator,
      but frankly we can get much better hints elsewhere, and it's not even used at
      all for anything useful.  In addition to being completely useless, when we
      initialize an inode we try to find a freeish block group to set as the inode's
      block group, and with a completely full 40GB fs this takes _forever_, so I
      imagine with say a 1TB fs this is just unbearable.  So just axe the thing
      altogether; we don't need it, and it saves us 8 bytes in the inode and saves
      us 500 microseconds per inode lookup in my testcase.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      d82a6f1d
    • Btrfs: kill trans_mutex · a4abeea4
      Committed by Josef Bacik
      We use trans_mutex for lots of things, here's a basic list
      
      1) To serialize trans_handles joining the currently running transaction
      2) To make sure that no new trans handles are started while we are committing
      3) To protect the dead_roots list and the transaction lists
      
      Really, serializing trans_handles joining is not too hard, and mostly comes
      down to acquiring a reference to the transaction.  So replace the
      trans_mutex with a trans_lock spinlock and use it to do the following:
      
      1) Protect fs_info->running_transaction.  All trans handles have to do is check
      this, and then take a reference of the transaction and keep on going.
      2) Protect the fs_info->trans_list.  This doesn't get used too much, basically
      it just holds the current transactions, which will usually just be the currently
      committing transaction and the currently running transaction at most.
      3) Protect the dead roots list.  This is only ever processed by splicing the
      list so this is relatively simple.
      4) Protect the fs_info->reloc_ctl stuff.  This is very lightweight and was using
      the trans_mutex before, so this is a pretty straightforward change.
      5) Protect fs_info->no_trans_join.  Because we don't hold the trans_lock over
      the entirety of the commit we need to have a way to block new people from
      creating a new transaction while we're doing our work.  So we set no_trans_join
      and in join_transaction we test to see if that is set, and if it is we do a
      wait_on_commit.
      6) Make the transaction use count atomic so we don't need to take locks to
      modify it when we're dropping references.
      7) Add a commit_lock to the transaction to make sure multiple people trying to
      commit the same transaction don't race and commit at the same time.
      8) Make open_ioctl_trans an atomic so we don't have to take any locks for ioctl
      trans.
      
      I have tested this with xfstests, but obviously it is a pretty hairy change so
      lots of testing is greatly appreciated.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      a4abeea4
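
      A minimal pthread model of the scheme described above: a short spinlock
      protects the running-transaction pointer, handles take an atomic reference
      under it, and dropping references needs no lock at all.  The fs_info/txn
      names and the cleanup in main() are simplifications, not the real btrfs code:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct txn {
            atomic_int use_count;
            atomic_int num_writers;
        };

        struct fs_info {
            pthread_spinlock_t trans_lock;   /* protects running_transaction */
            struct txn *running_transaction;
            int no_trans_join;               /* set while a commit forbids joins */
        };

        static struct txn *join_transaction(struct fs_info *fs)
        {
            struct txn *t;

            pthread_spin_lock(&fs->trans_lock);
            if (fs->no_trans_join) {
                pthread_spin_unlock(&fs->trans_lock);
                return NULL;                      /* caller would wait on the commit */
            }
            t = fs->running_transaction;
            if (!t) {
                t = calloc(1, sizeof(*t));
                atomic_store(&t->use_count, 1);   /* trans_list reference */
                fs->running_transaction = t;
            }
            atomic_fetch_add(&t->use_count, 1);   /* this handle's reference */
            atomic_fetch_add(&t->num_writers, 1);
            pthread_spin_unlock(&fs->trans_lock);
            return t;
        }

        static void end_transaction(struct txn *t)
        {
            /* No lock needed: just drop our counts atomically. */
            atomic_fetch_sub(&t->num_writers, 1);
            atomic_fetch_sub(&t->use_count, 1);
        }

        int main(void)
        {
            struct fs_info fs = { .running_transaction = NULL, .no_trans_join = 0 };
            pthread_spin_init(&fs.trans_lock, PTHREAD_PROCESS_PRIVATE);

            struct txn *t = join_transaction(&fs);
            printf("writers: %d\n", atomic_load(&t->num_writers));
            end_transaction(t);
            free(t);   /* the real code drops the list reference at commit time */
            return 0;
        }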
    • Btrfs: if we've already started a trans handle, use that one · 2a1eb461
      Committed by Josef Bacik
      We currently track trans handles in current->journal_info, but we don't
      actually use it.  This patch fixes that.  It covers the case where we have
      multiple people starting transactions down the call chain: instead of having
      to allocate a new handle and all of that, we just increase the use count of
      the current handle, save the old block_rsv, and return.  I tested this with
      xfstests and it worked out fine.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      2a1eb461
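
      A sketch of the handle reuse, with a thread-local variable standing in for
      current->journal_info; struct and function names are illustrative, not the
      kernel ones:

        #include <stdio.h>
        #include <stdlib.h>

        struct trans_handle {
            int use_count;
            long block_rsv;   /* stands in for trans->block_rsv         */
            long orig_rsv;    /* saved so the outer caller gets it back */
        };

        /* Stand-in for current->journal_info. */
        static _Thread_local struct trans_handle *journal_info;

        static struct trans_handle *start_transaction(void)
        {
            struct trans_handle *h = journal_info;

            if (h) {
                /* Nested start: reuse the running handle, no new allocation. */
                h->use_count++;
                h->orig_rsv = h->block_rsv;
                return h;
            }
            h = calloc(1, sizeof(*h));
            h->use_count = 1;
            journal_info = h;
            return h;
        }

        static void end_transaction(struct trans_handle *h)
        {
            if (--h->use_count) {
                h->block_rsv = h->orig_rsv;   /* restore for the outer caller */
                return;
            }
            journal_info = NULL;
            free(h);
        }

        int main(void)
        {
            struct trans_handle *outer = start_transaction();
            struct trans_handle *inner = start_transaction();  /* same handle */

            printf("same handle: %s\n", outer == inner ? "yes" : "no");
            end_transaction(inner);
            end_transaction(outer);
            return 0;
        }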
    • Btrfs: take away the num_items argument from btrfs_join_transaction · 7a7eaa40
      Committed by Josef Bacik
      I keep forgetting that btrfs_join_transaction() just ignores the num_items
      argument, which leads me to sending pointless patches and looking stupid :).  So
      just kill the num_items argument from btrfs_join_transaction and
      btrfs_start_ioctl_transaction, since neither of them use it.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      7a7eaa40
  9. 21 May 2011, 1 commit
    • btrfs: implement delayed inode items operation · 16cdcec7
      Committed by Miao Xie
      Changelog V5 -> V6:
      - Fix oom when the memory load is high, by storing the delayed nodes into the
        root's radix tree, and letting btrfs inodes go.
      
      Changelog V4 -> V5:
      - Fix the race on adding the delayed node to the inode, which was spotted by
        Chris Mason.
      - Merge Chris Mason's incremental patch into this patch.
      - Fix a deadlock between readdir() and memory fault, which was reported by
        Itaru Kitayama.

      Changelog V3 -> V4:
      - Fix a nested lock, which was reported by Itaru Kitayama, by updating the
        space cache inode in time.

      Changelog V2 -> V3:
      - Fix the race between the delayed worker and the task that does delayed
        items balance, which was reported by Tsutomu Itoh.
      - Modify the patch to address David Sterba's comments.
      - Fix the bug of the cpu recursion spinlock, reported by Chris Mason
      
      Changelog V1 -> V2:
      - break up the global rb-tree; use a list to manage the delayed nodes,
        which are created for every directory and file, and which are used to
        manage the delayed directory name index items and the delayed inode item.
      - introduce a worker to deal with the delayed nodes.
      
      Compared with Ext3/4, the performance of file creation and deletion on btrfs
      is very poor.  The reason is that btrfs must do a lot of b+ tree insertions,
      such as the inode item, the directory name item, the directory name index
      and so on.

      If we can do some delayed b+ tree insertions or deletions, we can improve the
      performance, so we made this patch, which implements delayed directory name
      index insertion/deletion and delayed inode updates.
      
      Implementation:
      - Introduce a delayed root object into the filesystem, which uses two lists
        to manage the delayed nodes that are created for every file/directory.
        One list manages all the delayed nodes that have delayed items, and the
        other manages the delayed nodes that are waiting to be dealt with by the
        work thread.
      - Every delayed node has two rb-trees: one manages the directory name
        indexes that are going to be inserted into the b+ tree, and the other
        manages the directory name indexes that are going to be deleted from the
        b+ tree.
      - Introduce a worker to deal with the delayed operations, that is, the
        delayed directory name index insertions and deletions and the delayed
        inode updates.
        When the number of delayed items goes beyond the lower limit, we create
        works for some delayed nodes, insert them into the worker's work queue,
        and then go back.
        When the number of delayed items goes beyond the upper bound, we create
        works for all the delayed nodes that haven't been dealt with, insert them
        into the worker's work queue, and then wait until the number of untreated
        items drops below some threshold value.
      - When we want to insert a directory name index into the b+ tree, we just
        add the information to the delayed insertion rb-tree.
        Then we check the number of delayed items and do delayed items balance
        (the balance policy is described above).
      - When we want to delete a directory name index from the b+ tree, we first
        search for it in the insertion rb-tree. If we find it there, we just drop
        it. If not, we add its key to the delayed deletion rb-tree.
        As with the insertion rb-tree, we then check the number of delayed items
        and do delayed items balance.
      - When we want to update the metadata of some inode, we cache the inode's
        data in the delayed node. The worker will flush it into the b+ tree after
        dealing with the delayed insertions and deletions.
      - We move a delayed node to the tail of the list after we access it, so
        that we can cache more delayed items and merge more inode updates.
      - When we commit a transaction, we deal with all the delayed nodes.
      - A delayed node is freed when we free the btrfs inode.
      - Before we log the inode items, we commit all the directory name index
        items and the delayed inode update.
      
      I did a quick test with the benchmark tool [1] and found that we can improve
      the performance of file creation by ~15%, and file deletion by ~20%.
      
      Before applying this patch:
      Create files:
              Total files: 50000
              Total time: 1.096108
              Average time: 0.000022
      Delete files:
              Total files: 50000
              Total time: 1.510403
              Average time: 0.000030
      
      After applying this patch:
      Create files:
              Total files: 50000
              Total time: 0.932899
              Average time: 0.000019
      Delete files:
              Total files: 50000
              Total time: 1.215732
              Average time: 0.000024
      
      [1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3
      
      Many thanks for Kitayama-san's help!
      Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
      Reviewed-by: David Sterba <dave@jikos.cz>
      Tested-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Tested-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      16cdcec7
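
      A toy model of the insert/delete interplay described above: each delayed node
      keeps a pending-insert set and a pending-delete set, and a deletion first
      tries to cancel a not-yet-flushed insertion.  Real btrfs uses per-node
      rb-trees keyed by the directory index; a small array stands in for them here
      and all names are invented:

        #include <stdio.h>
        #include <string.h>

        #define MAX_ITEMS 64

        struct delayed_node {
            unsigned long ins[MAX_ITEMS]; size_t n_ins;  /* pending insertions */
            unsigned long del[MAX_ITEMS]; size_t n_del;  /* pending deletions  */
        };

        static int find(const unsigned long *a, size_t n, unsigned long key)
        {
            for (size_t i = 0; i < n; i++)
                if (a[i] == key)
                    return (int)i;
            return -1;
        }

        static void delayed_insert_index(struct delayed_node *node, unsigned long idx)
        {
            node->ins[node->n_ins++] = idx;
            /* ...then check the item counts and maybe kick the balance worker... */
        }

        static void delayed_delete_index(struct delayed_node *node, unsigned long idx)
        {
            int i = find(node->ins, node->n_ins, idx);

            if (i >= 0) {
                /* The insertion never reached the b+ tree, so just drop it. */
                node->ins[i] = node->ins[--node->n_ins];
                return;
            }
            node->del[node->n_del++] = idx;  /* real deletion done by the worker */
        }

        int main(void)
        {
            struct delayed_node node;

            memset(&node, 0, sizeof(node));
            delayed_insert_index(&node, 2);   /* create a dir entry          */
            delayed_delete_index(&node, 2);   /* delete it before it flushes */
            delayed_delete_index(&node, 7);   /* delete an on-disk entry     */
            printf("pending inserts: %zu, pending deletes: %zu\n",
                   node.n_ins, node.n_del);   /* prints 0 and 1 */
            return 0;
        }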
  10. 12 May 2011, 1 commit
    • btrfs: scrub · a2de733c
      Committed by Arne Jansen
      This adds an initial implementation of scrub.  It works quite
      straightforwardly.  Usermode issues an ioctl for each device in the
      fs.  For each device, it enumerates the allocated device chunks.  For
      each chunk, the contained extents are enumerated and the data checksums
      fetched.  The extents are read sequentially and the checksums verified.
      If an error occurs (checksum mismatch or EIO), a good copy is searched for.
      If one is found, the bad copy is rewritten.
      All enumerations happen from the commit roots.  During a transaction
      commit, the scrubs get paused and afterwards continue from the new
      roots.
      
      This commit is based on the series originally posted to linux-btrfs
      with some improvements that resulted from comments from David Sterba,
      Ilya Dryomov and Jan Schmidt.
      Signed-off-by: Arne Jansen <sensille@gmx.net>
      a2de733c
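
      A toy model of the per-extent verify-and-repair loop described above.  The
      mirror layout, the FNV-style checksum, and the helper names are all invented
      for the sketch; real scrub walks device chunks and uses the btrfs checksum
      tree:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define NCOPIES 2
        #define BLK     16

        struct extent {
            uint8_t copy[NCOPIES][BLK];  /* data as stored on each mirror           */
            uint32_t csum;               /* checksum fetched from the checksum tree */
        };

        static uint32_t csum(const uint8_t *d, size_t n)   /* stand-in checksum */
        {
            uint32_t c = 2166136261u;
            for (size_t i = 0; i < n; i++)
                c = (c ^ d[i]) * 16777619u;                /* FNV-1a, model only */
            return c;
        }

        static int scrub_extent(struct extent *e)
        {
            int good = -1, bad = -1;

            for (int i = 0; i < NCOPIES; i++) {
                if (csum(e->copy[i], BLK) == e->csum)
                    good = i;
                else
                    bad = i;                    /* checksum error (or EIO)     */
            }
            if (good < 0)
                return -1;                      /* no good copy: unrecoverable */
            if (bad >= 0)
                memcpy(e->copy[bad], e->copy[good], BLK);  /* rewrite bad copy */
            return 0;
        }

        int main(void)
        {
            struct extent e;

            memset(e.copy, 'A', sizeof(e.copy));
            e.csum = csum(e.copy[0], BLK);
            e.copy[1][3] = 'X';                 /* corrupt one mirror */
            printf("scrub: %s\n", scrub_extent(&e) ? "failed" : "repaired");
            return 0;
        }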
  11. 06 May 2011, 1 commit
  12. 02 May 2011, 1 commit
    • btrfs: drop unused argument from extent_io_tree_init · f993c883
      Committed by David Sterba
      All callers pass GFP_NOFS, but the GFP mask argument is not used in the
      function; GFP_ATOMIC is passed to the radix tree initialization and it's the
      only correct one, since we're using the preload/insert mechanism of the
      radix tree.
      Let's drop the gfp mask from the btrfs function; this will not change
      behaviour.
      Signed-off-by: David Sterba <dsterba@suse.cz>
      f993c883
  13. 25 April 2011, 3 commits
    • Btrfs: Support reading/writing on disk free ino cache · 82d5902d
      Committed by Li Zefan
      This is similar to block group caching.

      We dedicate a special inode in the fs tree to save the free ino cache.

      The very first time we create/delete a file after mount, the free ino
      cache will be loaded from disk into memory. When the fs tree is committed,
      the cache will be written back to disk.

      To keep compatibility, we check the root generation against the generation
      of the special inode when loading the cache, so the load will fail
      if the btrfs filesystem was mounted in an older kernel beforehand.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      82d5902d
    • Btrfs: Always use 64bit inode number · 33345d01
      Committed by Li Zefan
      There's a potential problem on 32-bit systems when we exhaust the 32-bit
      inode numbers and start to allocate big inode numbers, because btrfs uses
      inode->i_ino in many places.

      So here we always use BTRFS_I(inode)->location.objectid, which is a
      u64 variable.

      There are 2 exceptions where BTRFS_I(inode)->location.objectid !=
      inode->i_ino: the btree inode (0 vs 1) and empty subvol dirs (256 vs 2),
      and inode->i_ino will be used in those cases.

      Another reason to make this change is that I'm going to use a special inode
      to save the free ino cache, and its inode number must be > (u64)-256.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      33345d01
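
      The rule above can be captured in a tiny helper; the sketch below is a
      userspace model with simplified structs and constants, not the actual kernel
      helper:

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #define BTREE_INO_OBJECTID      0   /* btree inode: objectid 0, i_ino 1 */
        #define FIRST_FREE_OBJECTID   256   /* empty subvol dir: 256, i_ino 2   */

        struct inode_model {
            uint64_t location_objectid;  /* BTRFS_I(inode)->location.objectid  */
            unsigned long i_ino;         /* VFS inode number (32 bits on i386) */
            int is_empty_subvol_dir;
        };

        static uint64_t model_ino(const struct inode_model *bi)
        {
            /* The two exceptions keep using inode->i_ino; everything else
             * uses the 64-bit key objectid. */
            if (bi->location_objectid == BTREE_INO_OBJECTID ||
                (bi->is_empty_subvol_dir &&
                 bi->location_objectid == FIRST_FREE_OBJECTID))
                return bi->i_ino;
            return bi->location_objectid;
        }

        int main(void)
        {
            struct inode_model big = { .location_objectid = 1ULL << 40,
                                       .i_ino = 12345 };
            printf("ino = %" PRIu64 "\n", model_ino(&big));  /* the objectid */
            return 0;
        }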
    • Btrfs: Cache free inode numbers in memory · 581bb050
      Committed by Li Zefan
      Currently btrfs stores the highest objectid of the fs tree, and it always
      returns (highest+1) as the inode number when we create a file, so inode
      numbers won't be reclaimed when we delete files, and we'll run out of inode
      numbers as we keep creating/deleting files on 32-bit machines.

      This fixes it, and it works similarly to how we cache free space in block
      groups.

      We start a kernel thread to read the file tree. By scanning inode items,
      we know which chunks of inode numbers are free, and we cache them in
      an rb-tree.

      Because we are searching the commit root, we have to carefully handle the
      cross-transaction case.

      The rb-tree is a hybrid extent+bitmap tree, so if we have too many small
      chunks of inode numbers, we'll use bitmaps. Initially we allow 16K of RAM
      for extents, and a bitmap will be used if we exceed this threshold. The
      extents threshold is adjusted at runtime.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      581bb050
  14. 12 April 2011, 1 commit
    • Btrfs: avoid taking the trans_mutex in btrfs_end_transaction · 13c5a93e
      Committed by Josef Bacik
      I've been working on making our O_DIRECT latency not suck and I noticed we
      were taking the trans_mutex in btrfs_end_transaction.  To avoid that, we
      convert num_writers and use_count to atomic_t's and just decrement them in
      btrfs_end_transaction.  Instead of deleting the transaction from the trans
      list in put_transaction, we do that in btrfs_commit_transaction(), since
      that's the only time it actually needs to be removed from the list.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      13c5a93e
  15. 09 April 2011, 1 commit
    • Btrfs: only retry transaction reservation once · 06d5a589
      Committed by Josef Bacik
      I saw a lockup where we kept getting into this start transaction -> commit
      transaction loop because of ENOSPC.  The fact is, if we fail to make our
      reservation we've already tried _everything_ several times, so we only need
      to try to commit the transaction once, and if that doesn't work then we
      really are out of space and need to just exit.  Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      06d5a589
  16. 05 April 2011, 2 commits
  17. 28 March 2011, 2 commits
    • Btrfs: cleanup some BUG_ON() · db5b493a
      Committed by Tsutomu Itoh
      This patch changes some BUG_ON() calls into error returns
      (but most callers still use BUG_ON()).
      Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      db5b493a
    • Btrfs: add initial tracepoint support for btrfs · 1abe9b8a
      Committed by liubo
      Tracepoints can provide insight into why btrfs hits bugs and can be greatly
      helpful for debugging, e.g.:
                    dd-7822  [000]  2121.641088: btrfs_inode_request: root = 5(FS_TREE), gen = 4, ino = 256, blocks = 8, disk_i_size = 0, last_trans = 8, logged_trans = 0
                    dd-7822  [000]  2121.641100: btrfs_inode_new: root = 5(FS_TREE), gen = 8, ino = 257, blocks = 0, disk_i_size = 0, last_trans = 0, logged_trans = 0
       btrfs-transacti-7804  [001]  2146.935420: btrfs_cow_block: root = 2(EXTENT_TREE), refs = 2, orig_buf = 29368320 (orig_level = 0), cow_buf = 29388800 (cow_level = 0)
       btrfs-transacti-7804  [001]  2146.935473: btrfs_cow_block: root = 1(ROOT_TREE), refs = 2, orig_buf = 29364224 (orig_level = 0), cow_buf = 29392896 (cow_level = 0)
       btrfs-transacti-7804  [001]  2146.972221: btrfs_transaction_commit: root = 1(ROOT_TREE), gen = 8
         flush-btrfs-2-7821  [001]  2155.824210: btrfs_chunk_alloc: root = 3(CHUNK_TREE), offset = 1103101952, size = 1073741824, num_stripes = 1, sub_stripes = 0, type = DATA
         flush-btrfs-2-7821  [001]  2155.824241: btrfs_cow_block: root = 2(EXTENT_TREE), refs = 2, orig_buf = 29388800 (orig_level = 0), cow_buf = 29396992 (cow_level = 0)
         flush-btrfs-2-7821  [001]  2155.824255: btrfs_cow_block: root = 4(DEV_TREE), refs = 2, orig_buf = 29372416 (orig_level = 0), cow_buf = 29401088 (cow_level = 0)
         flush-btrfs-2-7821  [000]  2155.824329: btrfs_cow_block: root = 3(CHUNK_TREE), refs = 2, orig_buf = 20971520 (orig_level = 0), cow_buf = 20975616 (cow_level = 0)
       btrfs-endio-wri-7800  [001]  2155.898019: btrfs_cow_block: root = 5(FS_TREE), refs = 2, orig_buf = 29384704 (orig_level = 0), cow_buf = 29405184 (cow_level = 0)
       btrfs-endio-wri-7800  [001]  2155.898043: btrfs_cow_block: root = 7(CSUM_TREE), refs = 2, orig_buf = 29376512 (orig_level = 0), cow_buf = 29409280 (cow_level = 0)
      
      Here is what I have added:
      
      1) ordered_extent:
              btrfs_ordered_extent_add
              btrfs_ordered_extent_remove
              btrfs_ordered_extent_start
              btrfs_ordered_extent_put
      
      These provide critical information to understand how ordered_extents are
      updated.
      
      2) extent_map:
              btrfs_get_extent
      
      extent_map is used in both the read and write cases, and it is useful for
      tracking how btrfs-specific IO is running.
      
      3) writepage:
              __extent_writepage
              btrfs_writepage_end_io_hook
      
      Pages are critical resources and produce a lot of corner cases during
      writeback, so it is valuable to know how a page is written to disk.
      
      4) inode:
              btrfs_inode_new
              btrfs_inode_request
              btrfs_inode_evict
      
      These can show where and when an inode is created and when an inode is evicted.
      
      5) sync:
              btrfs_sync_file
              btrfs_sync_fs
      
      These show sync arguments.
      
      6) transaction:
              btrfs_transaction_commit
      
      In a transaction-based filesystem, it is useful to know the generation and
      who does the commit.
      
      7) back reference and cow:
      	btrfs_delayed_tree_ref
      	btrfs_delayed_data_ref
      	btrfs_delayed_ref_head
      	btrfs_cow_block
      
      Btrfs natively supports back references; these tracepoints are helpful for
      understanding btrfs's COW mechanism.
      
      8) chunk:
      	btrfs_chunk_alloc
      	btrfs_chunk_free
      
      A chunk is a link between a physical offset and a logical offset, and stands
      for space information in btrfs; these are helpful for tracing space usage.
      
      9) reserved_extent:
      	btrfs_reserved_extent_alloc
      	btrfs_reserved_extent_free
      
      These can show how btrfs uses its space.
      Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      1abe9b8a
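
      Once the events are in place they can be consumed from userspace like any
      other ftrace events; a small example that enables the btrfs event group and
      tails the trace pipe is sketched below (the tracing path is an assumption:
      newer kernels expose it at /sys/kernel/tracing instead of the debugfs path
      used here):

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        #define TRACING "/sys/kernel/debug/tracing"

        static int write_str(const char *path, const char *s)
        {
            int fd = open(path, O_WRONLY);
            ssize_t n;

            if (fd < 0)
                return -1;
            n = write(fd, s, strlen(s));
            close(fd);
            return n < 0 ? -1 : 0;
        }

        int main(void)
        {
            char buf[4096];
            ssize_t n;
            int fd;

            /* Enable every tracepoint in the btrfs event group. */
            if (write_str(TRACING "/events/btrfs/enable", "1")) {
                perror("enable btrfs events");
                return 1;
            }
            fd = open(TRACING "/trace_pipe", O_RDONLY);
            if (fd < 0) {
                perror("open trace_pipe");
                return 1;
            }
            /* Stream whatever btrfs activity the kernel records. */
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)n, stdout);
            close(fd);
            return 0;
        }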
  18. 29 January 2011, 1 commit
    • btrfs: fix return value check of btrfs_join_transaction() · 3612b495
      Committed by Tsutomu Itoh
      Error checking of btrfs_join_transaction()/btrfs_join_transaction_nolock()
      is added, and mistaken error checks in several places are
      corrected.

      For a more stable Btrfs, I think that we should reduce BUG_ON().
      But I think that a long time is necessary for this.
      So I propose this patch as a short-term solution.

      With this patch:
       - The parts that should be corrected for a more stable Btrfs are clarified.
       - We no longer panic on a NULL pointer dereference etc. (even if
         BUG_ON() is increased temporarily).
       - The error code is returned in the places where the error can easily be
         returned.

      As a long-term plan:
       - BUG_ON() is reduced by using the forced-readonly framework, etc.
      Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      3612b495
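
      The corrected call sites follow the usual kernel pattern of testing the
      returned pointer with IS_ERR()/PTR_ERR() instead of hitting a BUG_ON(); a
      self-contained userspace model of that pattern, with toy ERR_PTR macros and a
      made-up join function, looks like this:

        #include <errno.h>
        #include <stdio.h>

        #define MAX_ERRNO 4095
        #define ERR_PTR(err)  ((void *)(long)(err))
        #define PTR_ERR(ptr)  ((long)(ptr))
        #define IS_ERR(ptr)   ((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)

        struct trans_handle { int dummy; };

        static struct trans_handle *join_transaction(int fail)
        {
            static struct trans_handle h;

            if (fail)
                return ERR_PTR(-ENOMEM);     /* propagate, don't BUG() */
            return &h;
        }

        static int do_something(int fail)
        {
            struct trans_handle *trans = join_transaction(fail);

            if (IS_ERR(trans))
                return (int)PTR_ERR(trans);  /* the caller decides what to do */
            /* ... use trans ... */
            return 0;
        }

        int main(void)
        {
            printf("ok path:   %d\n", do_something(0));
            printf("fail path: %d\n", do_something(1));
            return 0;
        }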
  19. 18 January 2011, 1 commit
    • Btrfs: forced readonly mounts on errors · acce952b
      Committed by liubo
      This patch comes from the "Forced readonly mounts on errors" ideas.

      As we know, this is the first step in being more fault tolerant of disk
      corruptions instead of just using BUG() statements.

      The major content:
      - add a framework for generating errors that should result in filesystems
        going readonly.
      - keep the FS state in the on-disk super block.
      - make sure that all resources are freed and released at umount time.
      - make sure that after the FS is forced readonly on error, there will be no
        more disk changes before the FS is corrected. For this, we stop write
        operations.

      After this patch is applied, the conversion from BUG() to such a framework can
      happen incrementally.
      Signed-off-by: Liu Bo <liubo2009@cn.fujitsu.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      acce952b
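
      A toy model of the framework's idea: one error helper records the failure in
      the FS state and flips the filesystem to readonly, and write paths check that
      state instead of calling BUG().  Names and fields are invented for the sketch:

        #include <errno.h>
        #include <stdio.h>

        #define FS_STATE_ERROR  (1u << 0)

        struct fs_info_model {
            unsigned int fs_state;   /* persisted in the super block on error */
            int readonly;
        };

        static void fs_error(struct fs_info_model *fs, int err, const char *what)
        {
            fprintf(stderr, "error %d in %s: forcing readonly\n", err, what);
            fs->fs_state |= FS_STATE_ERROR;   /* remembered across mounts */
            fs->readonly = 1;                 /* stop all further writes  */
        }

        static int fs_write(struct fs_info_model *fs, const char *data)
        {
            if (fs->fs_state & FS_STATE_ERROR)
                return -EROFS;                /* no more disk changes allowed */
            printf("wrote: %s\n", data);
            return 0;
        }

        int main(void)
        {
            struct fs_info_model fs = { 0, 0 };

            fs_write(&fs, "ok");
            fs_error(&fs, -EIO, "extent allocation");          /* instead of BUG() */
            printf("second write: %d\n", fs_write(&fs, "no")); /* -EROFS */
            return 0;
        }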
  20. 23 December 2010, 1 commit
    • Btrfs: Add readonly snapshots support · b83cc969
      Committed by Li Zefan
      Usage:
      
      Set BTRFS_SUBVOL_RDONLY in btrfs_ioctl_vol_args_v2->flags, and call
      ioctl(BTRFS_IOC_SNAP_CREATE_V2).
      
      Implementation:
      
      - Set readonly bit of btrfs_root_item->flags.
      - Add readonly checks in btrfs_permission (inode_permission),
      btrfs_setattr, btrfs_set/remove_xattr and some ioctls.
      
      Changelog for v3:
      
      - Eliminate btrfs_root->readonly, but check btrfs_root->root_item.flags.
      - Rename BTRFS_ROOT_SNAP_RDONLY to BTRFS_ROOT_SUBVOL_RDONLY.
      Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
      b83cc969
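
      A userspace sketch of the usage described above, creating a readonly snapshot
      of /mnt/subvol as /mnt/snap; the paths are examples, and <linux/btrfs.h> is
      assumed to provide the definitions (older systems got them from btrfs-progs'
      ioctl.h):

        #include <fcntl.h>
        #include <linux/btrfs.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            struct btrfs_ioctl_vol_args_v2 args;
            int src, dst, ret;

            src = open("/mnt/subvol", O_RDONLY);      /* subvolume to snapshot  */
            dst = open("/mnt", O_RDONLY);             /* parent of the snapshot */
            if (src < 0 || dst < 0) {
                perror("open");
                return 1;
            }

            memset(&args, 0, sizeof(args));
            args.fd = src;
            args.flags = BTRFS_SUBVOL_RDONLY;         /* the new v2 flag */
            strncpy(args.name, "snap", BTRFS_SUBVOL_NAME_MAX);

            ret = ioctl(dst, BTRFS_IOC_SNAP_CREATE_V2, &args);
            if (ret < 0)
                perror("BTRFS_IOC_SNAP_CREATE_V2");
            else
                puts("readonly snapshot created");

            close(src);
            close(dst);
            return ret < 0;
        }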
  21. 22 November 2010, 1 commit
  22. 30 October 2010, 3 commits
    • Btrfs: add START_SYNC, WAIT_SYNC ioctls · 46204592
      Committed by Sage Weil
      START_SYNC will start a sync/commit, but not wait for it to
      complete.  Any modification started after the ioctl returns is
      guaranteed not to be included in the commit.  If a non-NULL
      pointer is passed, the transaction id will be returned to
      userspace.
      
      WAIT_SYNC will wait for any in-progress commit to complete.  If a
      transaction id is specified, the ioctl will block and then
      return (success) when the specified transaction has committed.
      If it has already committed when we call the ioctl, it returns
      immediately.  If the specified transaction doesn't exist, it
      returns EINVAL.
      
      If no transaction id is specified, WAIT_SYNC will wait for the
      currently committing transaction to finish its commit to disk.
      If there is no currently committing transaction, it returns
      success.
      
      These ioctls are useful for applications which want to impose an
      ordering on when fs modifications reach disk, but do not want to
      wait for the full (slow) commit process to do so.
      
      Picky callers can take the transid returned by START_SYNC and
      feed it to WAIT_SYNC, and be certain to wait only as long as
      necessary for the transaction _they_ started to reach disk.
      
      Sloppy callers can START_SYNC and WAIT_SYNC without a transid,
      and provided they didn't wait too long between the calls, they
      will get the same result.  However, if a second commit starts
      before they call WAIT_SYNC, they may end up waiting longer for
      it to commit as well.  Even so, a START_SYNC+WAIT_SYNC still
      guarantees that any operation completed before the START_SYNC
      reaches disk.
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      46204592
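
      A userspace sketch of the "picky caller" pattern described above: start a
      commit, remember the returned transid, do other work, then wait only for that
      transaction.  /mnt is an example mount point, and <linux/btrfs.h> is assumed
      to provide the ioctl definitions:

        #include <fcntl.h>
        #include <linux/btrfs.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <unistd.h>

        int main(void)
        {
            __u64 transid = 0;
            int fd = open("/mnt", O_RDONLY);

            if (fd < 0) {
                perror("open");
                return 1;
            }
            /* Kick off a commit without waiting; the transid comes back to us. */
            if (ioctl(fd, BTRFS_IOC_START_SYNC, &transid) < 0) {
                perror("BTRFS_IOC_START_SYNC");
                return 1;
            }
            printf("commit of transaction %llu started\n",
                   (unsigned long long)transid);

            /* ... do unrelated work here ... */

            /* Block until exactly that transaction is on disk. */
            if (ioctl(fd, BTRFS_IOC_WAIT_SYNC, &transid) < 0) {
                perror("BTRFS_IOC_WAIT_SYNC");
                return 1;
            }
            puts("transaction committed to disk");
            close(fd);
            return 0;
        }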
    • Btrfs: async transaction commit · bb9c12c9
      Committed by Sage Weil
      Add support for an async transaction commit that is ordered such that any
      subsequent operations will join the following transaction, but does not
      wait until the current commit is fully on disk.  This avoids much of the
      latency associated with the btrfs_commit_transaction for callers concerned
      with serialization and not safety.
      
      The wait_for_unblock flag controls whether we wait for the 'middle' portion
      of commit_transaction to complete, which is necessary if the caller expects
      some of the modifications contained in the commit to be available (this is
      the case for subvol/snapshot creation).
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      bb9c12c9
    • Btrfs: fix deadlock in btrfs_commit_transaction · 99d16cbc
      Committed by Sage Weil
      We calculate timeout (either 1 or MAX_SCHEDULE_TIMEOUT) based on whether
      num_writers > 1 or should_grow at the top of the loop.  Then, much much
      later, we wait for that timeout if either num_writers or should_grow is
      true.  However, it's possible for a racing process (calling
      btrfs_end_transaction()) to decrement num_writers such that we wait
      forever instead of for 1.
      
      Fix this by deciding how long to wait when we wait.  Include a smp_mb()
      before checking if the waitqueue is active to ensure the num_writers
      is visible.
      Signed-off-by: Sage Weil <sage@newdream.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      99d16cbc
  23. 29 October 2010, 1 commit
    • Btrfs: create special free space cache inode · 0af3d00b
      Committed by Josef Bacik
      In order to save the free space cache, we need an inode to hold the data, and
      we need a special item to point at the right inode for the right block group.
      So first, create a special item that will point to the right inode and record
      the number of extent entries and the number of bitmaps we will have.  We
      truncate and pre-allocate space every time to make sure it's up to date.

      This feature will be turned on as soon as you mount with -o space_cache;
      however, it is safe to boot into old kernels, as they will just generate the
      cache the old-fashioned way.  When you boot back into a newer kernel, we will
      notice that the filesystem was modified but the cache was not, and we will
      automatically discard the cache.
      Signed-off-by: Josef Bacik <josef@redhat.com>
      0af3d00b
  24. 23 October 2010, 1 commit
    • Btrfs: rework how we reserve metadata bytes · 8bb8ab2e
      Committed by Josef Bacik
      With multi-threaded writes we were getting ENOSPC early because somebody would
      come in and start flushing delalloc because they couldn't make their
      reservation, and in the meantime other threads would come in and use the space
      that was getting freed up; so when the original thread went back to check
      whether it had space, it didn't, and it would return ENOSPC.  So instead, if
      we have some free space but not enough for our reservation, take the
      reservation and then start doing the flushing.  The only time we don't take
      reservations is when we've already overcommitted our space; that way we don't
      have people who come late to the party overcommitting ourselves even further.
      This also moves all of the retrying and flushing code into
      reserve_metadata_bytes so it's all uniform.  This keeps my fs_mark test from
      returning -ENOSPC as soon as it starts and actually lets me fill up the disk.
      Thanks,
      Signed-off-by: Josef Bacik <josef@redhat.com>
      8bb8ab2e
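
      A toy model of the change described above: if there is some free space but
      not enough, claim the reservation first and only then flush, so concurrent
      threads cannot steal the space being freed; the early reservation is skipped
      only when we are already overcommitted.  All numbers and names are invented:

        #include <errno.h>
        #include <stdio.h>

        struct space_info_model {
            long total;
            long used;
            long reserved;
        };

        static long flush_delalloc(struct space_info_model *s)
        {
            long freed = s->used / 2;   /* pretend flushing frees some space */
            s->used -= freed;
            return freed;
        }

        static int reserve_metadata_bytes(struct space_info_model *s, long bytes)
        {
            long avail = s->total - s->used - s->reserved;

            if (avail >= bytes) {
                s->reserved += bytes;   /* plenty of room, done */
                return 0;
            }
            if (avail <= 0)
                return -ENOSPC;         /* already overcommitted: don't pile on */

            s->reserved += bytes;       /* claim it now so nobody steals it     */
            flush_delalloc(s);          /* then make the claim come true        */
            if (s->total - s->used - s->reserved >= 0)
                return 0;
            s->reserved -= bytes;       /* flushing wasn't enough */
            return -ENOSPC;
        }

        int main(void)
        {
            struct space_info_model s = { .total = 100, .used = 70, .reserved = 20 };

            printf("reserve 20: %d (reserved=%ld)\n",
                   reserve_metadata_bytes(&s, 20), s.reserved);
            return 0;
        }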
  25. 25 May 2010, 6 commits