1. 18 December 2008, 1 commit
    • Btrfs: shift all end_io work to thread pools · cad321ad
      Authored by Chris Mason
      bio_end_io processing for reads with checksumming disabled and for btree
      writes was happening outside the async thread pools.  This meant the
      extent_io.c code had to use spin_lock_irq and friends on the rb tree
      locks for extent state.
      
      There were some irq-safe vs unsafe lock inversions between the delalloc
      lock and the extent state locks.  This patch gets rid of them by moving
      all end_io code into the thread pools.
      
      To avoid contention and deadlocks between the data end_io processing and
      the metadata end_io processing, yet another thread pool is added to
      finish off metadata writes.
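      A minimal sketch of the pattern, assuming the async-thread helpers of the
      time (struct btrfs_work, btrfs_queue_worker) and an illustrative wrapper
      struct; this is not the literal patch:

          /* Hedged sketch: defer bio completion into a worker pool so that
           * the extent-state rb tree only ever needs plain spin_lock. */
          struct end_io_wq {
                  struct bio *bio;
                  int error;
                  int metadata;               /* route metadata writes to their own pool */
                  struct btrfs_fs_info *info;
                  struct btrfs_work work;     /* queued to a btrfs worker pool */
          };

          static void end_workqueue_bio(struct bio *bio, int err)
          {
                  struct end_io_wq *end_io_wq = bio->bi_private;

                  end_io_wq->error = err;
                  /* no real work in irq context; hand off to process context */
                  btrfs_queue_worker(&end_io_wq->info->endio_workers,
                                     &end_io_wq->work);
          }

      With every completion running in these pools, extent_io.c can drop the
      irq-disabling lock variants.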
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  2. 09 December 2008, 2 commits
    • Btrfs: Use map_private_extent_buffer during generic_bin_search · 934d375b
      Authored by Chris Mason
      It is possible that generic_bin_search will be called on a tree block
      that has not been locked.  This happens because cache_block_group skips
      locking on the tree blocks.
      
      Since the tree block isn't locked, we aren't allowed to change
      the extent_buffer->map_token field.  Using map_private_extent_buffer
      avoids any changes to the internal extent buffer fields.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: move data checksumming into a dedicated tree · d20f7043
      Authored by Chris Mason
      Btrfs stores checksums for each data block.  Until now, they have
      been stored in the subvolume trees, indexed by the inode that is
      referencing the data block.  This means that when we read the inode,
      we've probably read in at least some checksums as well.
      
      But, this has a few problems:
      
      * The checksums are indexed by logical offset in the file.  When
      compression is on, this means we have to do the expensive checksumming
      on the uncompressed data.  It would be faster if we could checksum
      the compressed data instead.
      
      * If we implement encryption, we'll be checksumming the plain text and
      storing that on disk.  This is significantly less secure.
      
      * For either compression or encryption, we have to get the plain text
      back before we can verify the checksum as correct.  This makes the raid
      layer balancing and extent moving much more expensive.
      
      * It makes the front end caching code more complex, as we have to touch
      the subvolume and inodes as we cache extents.
      
      * There is potentially one copy of the checksum in each subvolume
      referencing an extent.
      
      The solution used here is to store the extent checksums in a dedicated
      tree.  This allows us to index the checksums by physical extent
      start and length.  It means:
      
      * The checksum is against the data stored on disk, after any compression
      or encryption is done.
      
      * The checksum is stored in a central location, and can be verified without
      following back references, or reading inodes.
      
      This makes compression significantly faster by reducing the amount of
      data that needs to be checksummed.  It will also allow much faster
      raid management code in general.
      
      The checksums are indexed by a key with a fixed objectid (a magic value
      in ctree.h) and offset set to the starting byte of the extent.  This
      allows us to copy the checksum items into the fsync log tree directly (or
      any other tree), without having to invent a second format for them.
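      As a rough illustration of the new indexing (assuming the
      BTRFS_EXTENT_CSUM_OBJECTID and BTRFS_EXTENT_CSUM_KEY names from ctree.h),
      a checksum lookup now builds its key from the on-disk location instead of
      the owning inode:

          /* hedged sketch: checksum items are addressed by disk bytenr */
          struct btrfs_key key;

          key.objectid = BTRFS_EXTENT_CSUM_OBJECTID;  /* fixed magic objectid */
          key.type     = BTRFS_EXTENT_CSUM_KEY;
          key.offset   = disk_bytenr;                 /* start of the on-disk extent */
          /* search the dedicated csum root (or the log tree) with this key */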
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  3. 02 December 2008, 2 commits
  4. 20 November 2008, 3 commits
    • Btrfs: only flush down bios for writeback pages · 0e6bd956
      Authored by Chris Mason
      The btrfs write_cache_pages call has a flush function so that it submits
      the bio it has been building before it waits on any writeback pages.
      
      This adds a check so that flush only happens on writeback pages.
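      A hedged sketch of the check inside the btrfs copy of write_cache_pages
      (flush_fn and data are the callback and opaque pointer that routine
      already carries; the names are illustrative):

          if (wbc->sync_mode != WB_SYNC_NONE) {
                  if (PageWriteback(page))
                          flush_fn(data);   /* submit the bio built so far first */
                  wait_on_page_writeback(page);
          }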
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Fixes for 2.6.28-rc API changes · 15916de8
      Authored by Chris Mason
      * open/close_bdev_excl -> open/close_bdev_exclusive
      * blkdev_issue_discard takes a GFP mask now
      * Fix blkdev_issue_discard usage now that it is enabled
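      A hedged sketch of the updated calls on a 2.6.28-rc kernel (error
      handling omitted):

          struct block_device *bdev;

          bdev = open_bdev_exclusive(path, mode, holder);    /* was open_bdev_excl */
          blkdev_issue_discard(bdev, start >> 9, len >> 9,   /* sector units */
                               GFP_KERNEL);                  /* new gfp_t argument */
          close_bdev_exclusive(bdev, mode);                  /* was close_bdev_excl */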
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Avoid writeback stalls · d2c3f4f6
      Authored by Chris Mason
      While building large bios in writepages, btrfs may end up waiting
      for other page writeback to finish if WB_SYNC_ALL is used.
      
      While it is waiting, the bio it is building has a number of pages with the
      writeback bit set and they aren't getting to the disk any time soon.  This
      lowers the latencies of writeback in general by sending down the bio being
      built before waiting for other pages.
      
      The bio submission code tries to limit the total number of async bios in
      flight by waiting when we're over a certain number of async bios.  But,
      the waits are happening while writepages is building bios, and this can easily
      lead to stalls and other problems for people calling wait_on_page_writeback.
      
      The current fix is to let the congestion tests take care of waiting.
      
      sync() and others drain the current async requests so that everything
      that was pending when the sync started really gets to disk.  The code
      would drain pending requests both before and after submitting a new
      request.
      
      But, if one of the requests is waiting for page writeback to finish,
      the draining waits might block that page writeback.  This changes the
      draining code to only wait after submitting the bio being processed.
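      A hedged sketch of the reordered throttle in the async submission path
      (nr_async_submits, async_submit_wait and btrfs_async_submit_limit follow
      the helpers of that era; treat the exact names as illustrative):

          /* submit first ... */
          btrfs_queue_worker(&fs_info->workers, &async->work);

          /* ... then throttle, so the bio we just built keeps making progress */
          limit = btrfs_async_submit_limit(fs_info);
          if (atomic_read(&fs_info->nr_async_submits) > limit)
                  wait_event(fs_info->async_submit_wait,
                             atomic_read(&fs_info->nr_async_submits) < limit);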
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  5. 11 November 2008, 3 commits
  6. 10 November 2008, 1 commit
  7. 07 November 2008, 2 commits
    • Btrfs: enforce metadata allocation clustering · 3b7885bf
      Authored by Chris Mason
      The allocator uses the last allocation as a starting point for metadata
      allocations, and tries to allocate in clusters of at least 256k.
      
      If the search for a free block fails to find the expected block, this patch
      forces a new cluster to be found in the free list.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Optimize compressed writeback and reads · 771ed689
      Authored by Chris Mason
      When reading compressed extents, try to put pages into the page cache
      for any pages covered by the compressed extent that readpages didn't already
      preload.
      
      Add an async work queue to handle transformations at delayed allocation processing
      time.  Right now this is just compression.  The workflow is:
      
      1) Find offsets in the file marked for delayed allocation
      2) Lock the pages
      3) Lock the state bits
      4) Call the async delalloc code
      
      The async delalloc code clears the state lock bits and delalloc bits.  It is
      important this happens before the range goes into the work queue because
      otherwise it might deadlock with other work queue items that try to lock
      those extent bits.
      
      The file pages are compressed, and if the compression doesn't work the
      pages are written back directly.
      
      An ordered work queue is used to make sure the inodes are written in the same
      order that pdflush or writepages sent them down.
      
      This changes extent_write_cache_pages to let the writepage function
      update the wbc nr_written count.
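      A hedged sketch of the hand-off (the async_cow structure, the
      delalloc_workers pool and the clear_extent_bit arguments follow the code
      of that era and are meant as illustration, not a verbatim excerpt):

          /* clear the state lock and delalloc bits before the range is queued */
          clear_extent_bit(&BTRFS_I(inode)->io_tree, start, end,
                           EXTENT_LOCKED | EXTENT_DELALLOC, 1, 0, GFP_NOFS);

          async_cow->work.func = async_cow_start;   /* compress, then submit pages */
          btrfs_queue_worker(&fs_info->delalloc_workers, &async_cow->work);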
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  8. 01 November 2008, 1 commit
    • Btrfs: Compression corner fixes · 70b99e69
      Authored by Chris Mason
      Make sure we keep page->mapping NULL on the pages we're getting
      via alloc_page.  It gets set so a few of the callbacks can do the right
      thing, but in general these pages don't have a mapping.
      
      Don't try to truncate compressed inline items in btrfs_drop_extents.
      The whole compressed item must be preserved.
      
      Don't try to create multipage inline compressed items.  When we try to
      overwrite just the first page of the file, we would have to read in and recow
      all the pages after it in the same compressed inline item.  For now, only
      create single page inline items.
      
      Make sure we lock pages in the correct order during delalloc.  The
      search into the state tree for delalloc bytes can return bytes before
      the page we already have locked.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  9. 31 October 2008, 2 commits
    • Btrfs: Add fallocate support v2 · d899e052
      Authored by Yan Zheng
      This patch updates btrfs-progs for fallocate support.
      
      fallocate is a little different in Btrfs because we need to tell the
      COW system that a given preallocated extent doesn't need to be
      cow'd as long as there are no snapshots of it.  This leverages the
      -o nodatacow checks.
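      A small userspace illustration of the feature (the mount point path is
      hypothetical; fallocate(2) needs _GNU_SOURCE with glibc, and
      posix_fallocate can be substituted):

          #define _GNU_SOURCE
          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>

          int main(void)
          {
                  int fd = open("/mnt/btrfs/prealloc.dat", O_CREAT | O_WRONLY, 0644);

                  if (fd < 0) {
                          perror("open");
                          return 1;
                  }
                  /* reserve 1 GiB up front; with no snapshots of the file,
                   * later writes to this range can go in place instead of
                   * being COWed */
                  if (fallocate(fd, 0, 0, 1024LL * 1024 * 1024) != 0)
                          perror("fallocate");
                  close(fd);
                  return 0;
          }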
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
    • Btrfs: Fix bookend extent race v2 · 6643558d
      Authored by Yan Zheng
      When dropping the middle part of an extent, btrfs_drop_extents first
      truncates the extent, then inserts a bookend extent.
      
      Since truncation and insertion can't be done atomically, there is a small
      window during which the bookend extent isn't in the tree.  This causes
      problems for functions that search the tree for a file extent item.  The
      fix is to lock the range of the bookend extent before truncation.
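      A hedged sketch of the fix, using the lock_extent/unlock_extent helpers
      on the inode's io_tree as they looked at the time:

          /* keep readers out of the range while the tree is inconsistent */
          lock_extent(&BTRFS_I(inode)->io_tree, start, end, GFP_NOFS);

          /* 1) truncate the existing file extent item       */
          /* 2) insert the bookend extent item for the tail  */

          unlock_extent(&BTRFS_I(inode)->io_tree, start, end, GFP_NOFS);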
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
  10. 30 October 2008, 2 commits
    • Btrfs: nuke fs wide allocation mutex V2 · 25179201
      Authored by Josef Bacik
      This patch removes the giant fs_info->alloc_mutex and replaces it with a bunch
      of little locks.
      
      There is now a pinned_mutex, which is used when messing with the pinned_extents
      extent io tree, and the extent_ins_mutex which is used with the pending_del and
      extent_ins extent io trees.
      
      The locking for the extent tree stuff was inspired by a patch that Yan Zheng
      wrote to fix a race condition, I cleaned it up some and changed the locking
      around a little bit, but the idea remains the same.  Basically instead of
      holding the extent_ins_mutex throughout the processing of an extent on the
      extent_ins or pending_del trees, we just hold it while we're searching and when
      we clear the bits on those trees, and lock the extent for the duration of the
      operations on the extent.
      
      Also, to keep from getting hung up waiting to lock an extent, I've added a
      try_lock_extent, so if we cannot lock the extent we move on to the next one
      in the tree and come back to that one later.  I have tested this heavily
      and it does not appear to break anything.  This has to be applied on top
      of my find_free_extent redo patch.
      
      I tested this patch on top of Yan's space rebalancing code and it worked
      fine.  The only thing that has changed since the last version is that I
      pulled out all my debugging stuff; apparently I forgot to run guilt
      refresh before I sent the last patch out.  Thank you,
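      A hedged sketch of the try-lock pattern (the io_tree pointer and the
      range variables are illustrative; try_lock_extent returns nonzero when
      the range was locked):

          /* instead of blocking on a busy extent, skip it and revisit later */
          if (!try_lock_extent(io_tree, start, end, GFP_NOFS)) {
                  /* someone else holds this extent; try the next one */
                  continue;
          }

          /* ... process this extent ... */

          unlock_extent(io_tree, start, end, GFP_NOFS);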
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
    • Btrfs: Add zlib compression support · c8b97818
      Authored by Chris Mason
      This is a large change for adding compression on reading and writing,
      both for inline and regular extents.  It does some fairly large
      surgery to the writeback paths.
      
      Compression is off by default and enabled by mount -o compress.  Even
      when the -o compress mount option is not used, it is possible to read
      compressed extents off the disk.
      
      If compression for a given set of pages fails to make them smaller, the
      file is flagged to avoid future compression attempts.
      
      * While finding delalloc extents, the pages are locked before being sent down
      to the delalloc handler.  This allows the delalloc handler to do complex things
      such as cleaning the pages, marking them writeback and starting IO on their
      behalf.
      
      * Inline extents are inserted at delalloc time now.  This allows us to compress
      the data before inserting the inline extent, and it allows us to insert
      an inline extent that spans multiple pages.
      
      * All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
      are changed to record both an in-memory size and an on disk size, as well
      as a flag for compression.
      
      From a disk format point of view, the extent pointers in the file are changed
      to record the on disk size of a given extent and some encoding flags.
      Space in the disk format is allocated for compression encoding, as well
      as encryption and a generic 'other' field.  Neither the encryption nor the
      'other' field is currently used.
      
      In order to limit the amount of data read for a single random read in the
      file, the size of a compressed extent is limited to 128k.  This is a
      software only limit, the disk format supports u64 sized compressed extents.
      
      In order to limit the ram consumed while processing extents, the uncompressed
      size of a compressed extent is limited to 256k.  This is a software only limit
      and will be subject to tuning later.
      
      Checksumming is still done on compressed extents, and it is done on the
      uncompressed version of the data.  This way additional encodings can be
      layered on without having to figure out which encoding to checksum.
      
      Compression happens at delalloc time, which is basically single threaded
      because it is usually done by a single pdflush thread.  This makes it tricky to
      spread the compression load across all the cpus on the box.  We'll have to
      look at parallel pdflush walks of dirty inodes at a later time.
      
      Decompression is hooked into readpages and it does spread across CPUs nicely.
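      For reference, a sketch of the on-disk fields this adds or repurposes in
      the file extent item (paraphrased from ctree.h; treat it as illustrative
      rather than a byte-exact copy):

          struct btrfs_file_extent_item {
                  __le64 generation;
                  __le64 ram_bytes;        /* uncompressed, in-memory extent size */
                  u8     compression;      /* 0 == none, 1 == zlib */
                  u8     encryption;       /* reserved, currently unused */
                  __le16 other_encoding;   /* reserved, currently unused */
                  u8     type;             /* inline vs regular extent */
                  __le64 disk_bytenr;
                  __le64 disk_num_bytes;   /* on-disk (possibly compressed) size */
                  __le64 offset;
                  __le64 num_bytes;        /* logical length within the file */
          } __attribute__ ((__packed__));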
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  11. 30 September 2008, 1 commit
    • Btrfs: add and improve comments · d352ac68
      Authored by Chris Mason
      This improves the comments at the top of many functions.  It didn't
      dive into the guts of functions because I was trying to
      avoid merging problems with the new allocator and back reference work.
      
      extent-tree.c and volumes.c were both skipped, and there is definitely
      more work to do in cleaning and commenting the code.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  12. 26 September 2008, 2 commits
    • Btrfs: extent_map and data=ordered fixes for space balancing · 5b21f2ed
      Authored by Zheng Yan
      * Add an EXTENT_BOUNDARY state bit to keep the writepage code
      from merging data extents that are in the process of being
      relocated.  This allows us to do accounting for them properly.
      
      * The balancing code relocates data extents independent of the underlying
      inode.  The extent_map code was modified to properly account for
      things moving around (invalidating extent_map caches in the inode).
      
      * Don't take the drop_mutex in the create_subvol ioctl.  It isn't
      required.
      
      * Fix walking of the ordered extent list to avoid races with sys_unlink
      
      * Change the lock ordering rules.  Transaction start goes outside
      the drop_mutex.  This allows btrfs_commit_transaction to directly
      drop the relocation trees.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Remove Btrfs compat code for older kernels · 2b1f55b0
      Authored by Chris Mason
      Btrfs had compatibility code for kernels back to 2.6.18.  These have
      been removed, and will be maintained in a separate backport
      git tree from now on.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
  13. 25 September 2008, 18 commits
    • Btrfs: Full back reference support · 31840ae1
      Authored by Zheng Yan
      This patch makes the back reference system explicitly record the
      location of the parent node for all types of extents.  The location of
      the parent node is placed into the offset field of the backref key.  Every
      time a tree block is balanced, the back references for the affected
      lower level extents are updated.
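      A rough illustration, assuming the BTRFS_EXTENT_REF_KEY item type from
      the ctree.h of that era (the variable names are illustrative):

          /* a back reference keyed by the tree block that points at the extent */
          struct btrfs_key key;

          key.objectid = extent_bytenr;         /* the extent being referenced */
          key.type     = BTRFS_EXTENT_REF_KEY;  /* back reference item         */
          key.offset   = parent_bytenr;         /* location of the parent node */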
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: free space accounting redo · 0f9dd46c
      Authored by Josef Bacik
      1) replace the per fs_info extent_io_tree that tracked free space with two
      rb-trees per block group to track free space areas via offset and size.  The
      reason to do this is because most allocations come with a hint byte where to
      start, so we can usually find a chunk of free space at that hint byte to satisfy
      the allocation and get good space packing.  If we cannot find free space at or
      after the given offset we fall back on looking for a chunk of the given size as
      close to that given offset as possible.  When we fall back on the size search we
      also try to find a slot as close to the size we want as possible, to avoid
      breaking small chunks off of huge areas if possible.
      
      2) remove the extent_io_tree that tracked the block group cache from fs_info
      and replace it with an rb-tree that tracks the block group cache via offset.
      Also add a per space_info list that tracks the block group cache for the
      particular space so we can look up related block groups easily.
      
      3) cleaned up the allocation code to make it a little easier to read and a
      little less complicated.  Basically there are 3 steps: first look from our
      provided hint.  If we couldn't find space from that given hint, start back
      at our original search start and look for space from there.  If that fails,
      try to allocate space if we can and start looking again.  If not, we're
      screwed and need to start over again.
      
      4) small fixes.  There were some issues in volumes.c where we wouldn't
      allocate the rest of the disk.  Fixed cow_file_range to actually pass the
      alloc_hint, which has helped a good bit in making the fs_mark test I run
      have semi-normal results as we run out of space.  Generally with data
      allocations we don't track where we last allocated from, so every time we
      did a data allocation we'd search through every block group that we have
      looking for free space.  Now searching a block group with no free space
      isn't terribly time consuming, but it was causing a slight degradation as
      we got more data block groups.  The alloc_hint has fixed this slight
      degradation and made things semi-normal.
      
      There is still one nagging problem I'm working on where we will get ENOSPC when
      there is definitely plenty of space.  This only happens with metadata
      allocations, and only when we are almost full.  So you generally hit the 85%
      mark first, but sometimes you'll hit the BUG before you hit the 85% wall.  I'm
      still tracking it down, but until then this seems to be pretty stable and make a
      significant performance gain.
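      A hedged sketch of the per-block-group free space entry this introduces
      (field names paraphrased; each entry sits in both rb-trees so it can be
      found either by offset or by size):

          struct btrfs_free_space {
                  struct rb_node offset_index;  /* keyed by start offset (hint search) */
                  struct rb_node bytes_index;   /* keyed by size (fallback search)     */
                  u64 offset;
                  u64 bytes;
          };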
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Tree logging fixes · 4bef0848
      Authored by Chris Mason
      * Pin down data blocks to prevent them from being reallocated like so:
      
      trans 1: allocate file extent
      trans 2: free file extent
      trans 3: free file extent during old snapshot deletion
      trans 3: allocate file extent to new file
      trans 3: fsync new file
      
      Before the tree logging code, this was legal because the fsync
      would commit the transaction that did the final data extent free
      and the transaction that allocated the extent to the new file
      at the same time.
      
      With the tree logging code, the tree log subtransaction can commit
      before the transaction that freed the extent.  If we crash,
      we're left with two different files using the extent.
      
      * Don't wait in start_transaction if log replay is going on.  This
      avoids deadlocks from iput while we're cleaning up link counts in the
      replay code.
      
      * Don't deadlock in replay_one_name by trying to read an inode off
      the disk while holding paths for the directory
      
      * Hold the buffer lock while we mark a buffer as written.  This
      closes a race where someone is changing a buffer while we write it.
      They are supposed to mark it dirty again after they change it, but
      this violates the cow rules.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: trivial sparse fixes · b214107e
      Authored by Christoph Hellwig
      Fix a bunch of trivial sparse complaints.
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • a1b32a59
    • Btrfs: Remove broken optimisations in end_bio functions. · 902b22f3
      Authored by David Woodhouse
      These ended up freeing objects while they were still in use.  Under
      guidance from Chris, just rip out the 'clever' bits and do things the
      simple way.
      Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Change TestSetPageLocked() to trylock_page() · 2db04966
      Authored by David Woodhouse
      Add backwards compatibility in compat.h
      Signed-off-by: NDavid Woodhouse <David.Woodhouse@intel.com>
      ---
       compat.h    |    3 +++
       extent_io.c |    3 ++-
       2 files changed, 5 insertions(+), 1 deletions(-)
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Add compatibility for kernels >= 2.6.27-rc1 · 0ee0fda0
      Authored by Sven Wegener
      Add a couple of #if's to follow API changes.
      Signed-off-by: Sven Wegener <sven.wegener@stealer.net>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: implement memory reclaim for leaf reference cache · bcc63abb
      Authored by Yan
      The memory reclaiming issue happens when snapshots exist.  In that
      case, some cache entries may not be used during old snapshot dropping,
      so they will remain in the cache until umount.
      
      The patch adds a field to struct btrfs_leaf_ref to record the create time.
      It also links all dead roots of a given snapshot together in order of
      create time.  After an old snapshot is completely dropped, we check the
      dead root list and remove all cache entries created before the oldest
      dead root in the list.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Fix verify_parent_transid · 33958dc6
      Authored by Chris Mason
      It was incorrectly clearing the up to date flag on the buffer even
      when the buffer verified properly.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Search data ordered extents first for checksums on read · 89642229
      Authored by Chris Mason
      Checksum items are not inserted into the tree until all of the io from a
      given extent is complete.  This means one dirty page from an extent may
      be written, freed, and then read again before the entire extent is on disk
      and the checksum item is inserted.
      
      The checksums themselves are stored in the ordered extent so they can
      be inserted in bulk when IO is complete.  On read, if a checksum item isn't
      found, the ordered extents were being searched for a checksum record.
      
      This all worked most of the time, but the checksum insertion code tries
      to reduce the number of tree operations by pre-inserting checksum items
      based on i_size and a few other factors.  This means the read code might
      find a checksum item that hasn't yet really been filled in.
      
      This commit changes things to check the ordered extents first and only
      dive into the btree if nothing was found.  This removes the need for
      extra locking and is more reliable.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Fix some data=ordered related data corruptions · f421950f
      Authored by Chris Mason
      Stress testing was showing data checksum errors, most of which were caused
      by a lookup bug in the extent_map tree.  The tree was caching the last
      pointer returned, and searches would check the last pointer first.
      
      But, search callers also expect the search to return the very first
      matching extent in the range, which wasn't always true with the last
      pointer usage.
      
      For now, the code to cache the last return value is just removed.  It is
      easy to fix, but I think lookups are rare enough that it isn't required anymore.
      
      This commit also replaces do_sync_mapping_range with a local copy of the
      related functions.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Use a mutex in the extent buffer for tree block locking · a61e6f29
      Authored by Chris Mason
      This replaces the use of the page cache lock bit for locking, which wasn't
      suitable for block size < page size and couldn't be used recursively.
      
      The mutexes alone don't fix either problem, but they are the first step.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Index extent buffers in an rbtree · 6af118ce
      Authored by Chris Mason
      Before, extent buffers were a temporary object, meant to map a number of pages
      at once and collect operations on them.
      
      But, a few extra fields have crept in, and they are also the best place to
      store a per-tree block lock field as well.  This commit puts the extent
      buffers into an rbtree, and ensures a single extent buffer for each
      tree block.
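      A hedged sketch of the lookup this enables (the rb_node field name is
      illustrative; the real tree is keyed by the block's start offset):

          static struct extent_buffer *find_extent_buffer(struct rb_root *root,
                                                          u64 start)
          {
                  struct rb_node *n = root->rb_node;

                  while (n) {
                          struct extent_buffer *eb =
                                  rb_entry(n, struct extent_buffer, rb_node);

                          if (start < eb->start)
                                  n = n->rb_left;
                          else if (start > eb->start)
                                  n = n->rb_right;
                          else
                                  return eb;   /* the one buffer for this block */
                  }
                  return NULL;
          }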
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Keep extent mappings in ram until pending ordered extents are done · 7f3c74fb
      Authored by Chris Mason
      It was possible for stale mappings from disk to be used instead of the
      new pending ordered extent.  This adds a flag to the extent map struct
      to keep it pinned until the pending ordered extent is actually on disk.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: Use async helpers to deal with pages that have been improperly dirtied · 247e743c
      Authored by Chris Mason
      Higher layers sometimes call set_page_dirty without asking the filesystem
      to help.  This causes many problems for the data=ordered and cow code.
      This commit detects pages that haven't been properly setup for IO and
      kicks off an async helper to deal with them.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
    • Btrfs: New data=ordered implementation · e6dcd2dc
      Authored by Chris Mason
      The old data=ordered code would force commit to wait until
      all the data extents from the transaction were fully on disk.  This
      introduced large latencies into the commit and stalled new writers
      in the transaction for a long time.
      
      The new code changes the way data allocations and extents work:
      
      * When delayed allocation is filled, data extents are reserved, and
        the extent bit EXTENT_ORDERED is set on the entire range of the extent.
        A struct btrfs_ordered_extent is allocated and inserted into a per-inode
        rbtree to track the pending extents (see the sketch after this list).
      
      * As each page is written EXTENT_ORDERED is cleared on the bytes corresponding
        to that page.
      
      * When all of the bytes corresponding to a single struct btrfs_ordered_extent
        are written, the previously reserved extent is inserted into the FS
        btree and into the extent allocation trees.  The checksums for the file
        data are also updated.
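      A hedged, simplified sketch of the per-inode bookkeeping the bullets
      above describe (the real struct btrfs_ordered_extent in ordered-data.h
      carries more fields than this):

          struct btrfs_ordered_extent {
                  u64 file_offset;         /* logical start in the file        */
                  u64 start;               /* reserved disk byte number        */
                  u64 len;
                  atomic_t refs;
                  struct rb_node rb_node;  /* lives in the per-inode rbtree    */
                  struct list_head list;   /* checksums gathered for the range */
          };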
      Signed-off-by: Chris Mason <chris.mason@oracle.com>