1. 11 November 2008 (2 commits)
  2. 10 November 2008 (3 commits)
  3. 08 November 2008 (3 commits)
    • Btrfs: Avoid unplug storms during commit · 5f2cc086
      Chris Mason authored
      While doing a commit, btrfs makes sure all the metadata blocks
      were properly written to disk, calling wait_on_page_writeback for
      each page.  This writeback happens after allowing another transaction
      to start, so it competes for the disk with other processes in the FS.
      
      If the page writeback bit is still set, each wait_on_page_writeback might
      trigger an unplug, even though the page might be waiting for checksumming
      to finish or might be waiting for the async work queue to submit the
      bio.
      
      This trades wait_on_page_writeback for waiting on the extent writeback
      bits.  It won't trigger any unplugs and substantially improves performance
      in a number of workloads.
      
      This also changes the async bio submission to avoid requeueing if there
      is only one device.  The requeue just wastes CPU time because there are
      no other devices to service.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      5f2cc086
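      As a rough sketch of the second change above, the single-device short cut can be
      modelled in a few lines of user-space C; the struct and helper names here are
      invented for the example and are not the actual btrfs symbols.

      #include <stdio.h>

      /* toy stand-in for the set of devices backing the filesystem */
      struct fs_devices { int num_devices; };

      static void submit_inline(int bio)     { printf("bio %d submitted directly\n", bio); }
      static void requeue_to_worker(int bio) { printf("bio %d handed to async worker\n", bio); }

      static void async_submit_bio(struct fs_devices *devs, int bio)
      {
          /* with one device there is nothing for a helper thread to batch or
           * reorder, so bouncing the bio back onto the queue only burns CPU */
          if (devs->num_devices == 1) {
              submit_inline(bio);
              return;
          }
          requeue_to_worker(bio);
      }

      int main(void)
      {
          struct fs_devices single = { .num_devices = 1 };
          struct fs_devices raid   = { .num_devices = 4 };

          async_submit_bio(&single, 1);   /* single spindle: submit in place */
          async_submit_bio(&raid, 2);     /* multi-device: the async queue still pays off */
          return 0;
      }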
    • Btrfs: Fix more false enospc errors and an oops from empty clustering · 42e70e7a
      Chris Mason authored
      In some cases the empty cluster was added twice to the total number of
      bytes the allocator was trying to find.
      
      With empty clustering on, the hint byte was sometimes outside of the
      block group.  Add an extra goto to find the correct block group.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      42e70e7a
    • Btrfs: make sure compressed bios don't complete too soon · af09abfe
      Chris Mason authored
      When writing a compressed extent, a number of bios are created that
      point to a single struct compressed_bio.  At end_io time an atomic counter in
      the compressed_bio struct makes sure that all of the bios have finished
      before final end_io processing is done.
      
      But when multiple bios are needed to write a compressed extent, the
      counter was being incremented after the first bio was sent to submit_bio.
      It is possible the bio will complete before the counter is incremented,
      making the end_io handler free the compressed_bio struct before
      processing is finished.
      
      The fix is to increment the atomic counter before bio submission,
      both for compressed reads and writes.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      af09abfe
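      The bug above is a classic increment-after-submit race.  A minimal user-space model
      of the fix, using C11 atomics (struct cbio and its helpers are stand-ins, not the
      btrfs compressed_bio code), shows why the reference has to be taken before the bio
      is handed to the block layer: once submitted, the completion can run at any moment.

      #include <stdatomic.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* stand-in for struct compressed_bio: one count shared by every bio
       * that covers the same compressed extent */
      struct cbio {
          atomic_int pending_bios;
      };

      /* completion side: the last reference dropped frees the shared struct */
      static void cbio_end_io(struct cbio *cb)
      {
          if (atomic_fetch_sub(&cb->pending_bios, 1) == 1) {
              printf("all bios finished, running final end_io\n");
              free(cb);
          }
      }

      /* submission side: bump the count BEFORE the bio goes down */
      static void cbio_submit_one(struct cbio *cb)
      {
          atomic_fetch_add(&cb->pending_bios, 1);
          /* ... submit_bio() would happen here in the kernel ... */
      }

      int main(void)
      {
          struct cbio *cb = malloc(sizeof(*cb));
          atomic_init(&cb->pending_bios, 1);   /* the submitter holds its own reference */

          cbio_submit_one(cb);                 /* first bio of the extent */
          cbio_submit_one(cb);                 /* second bio of the extent */

          cbio_end_io(cb);                     /* the bios complete ... */
          cbio_end_io(cb);
          cbio_end_io(cb);                     /* ... and the submitter drops its reference */
          return 0;
      }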
  4. 07 November 2008 (4 commits)
    • Btrfs: More metadata allocator optimizations · 4366211c
      Chris Mason authored
      This lowers the empty cluster target for metadata allocations.  The lower
      target makes it easier to do allocations and still seems to perform well.
      
      It also fixes the allocator loop to drop the empty cluster when things
      start getting difficult, avoiding false enospc warnings.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      4366211c
    • Btrfs: enforce metadata allocation clustering · 3b7885bf
      Chris Mason authored
      The allocator uses the last allocation as a starting point for metadata
      allocations, and tries to allocate in clusters of at least 256k.
      
      If the search for a free block fails to find the expected block, this patch
      forces a new cluster to be found in the free list.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      3b7885bf
    • Btrfs: Optimize compressed writeback and reads · 771ed689
      Chris Mason authored
      When reading compressed extents, try to put pages into the page cache
      for any pages covered by the compressed extent that readpages didn't already
      preload.
      
      Add an async work queue to handle transformations at delayed allocation processing
      time.  Right now this is just compression.  The workflow is:
      
      1) Find offsets in the file marked for delayed allocation
      2) Lock the pages
      3) Lock the state bits
      4) Call the async delalloc code
      
      The async delalloc code clears the state lock bits and delalloc bits.  It is
      important this happens before the range goes into the work queue because
      otherwise it might deadlock with other work queue items that try to lock
      those extent bits.
      
      The file pages are compressed, and if the compression doesn't work the
      pages are written back directly.
      
      An ordered work queue is used to make sure the inodes are written in the same
      order that pdflush or writepages sent them down.
      
      This changes extent_write_cache_pages to let the writepage function
      update the wbc nr_written count.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      771ed689
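      The fallback mentioned above ("if the compression doesn't work the pages are written
      back directly") boils down to a size comparison.  A hedged user-space sketch, with
      fake_compress() standing in for zlib rather than being the real call:

      #include <stdio.h>
      #include <string.h>

      /* toy "compressor": drops runs of repeated bytes, nothing more */
      static size_t fake_compress(const unsigned char *in, size_t len, unsigned char *out)
      {
          size_t out_len = 0;
          for (size_t i = 0; i < len; i++)
              if (i == 0 || in[i] != in[i - 1])
                  out[out_len++] = in[i];
          return out_len;
      }

      /* compress the delalloc range; if that did not shrink it, write it back raw */
      static void write_delalloc_range(const unsigned char *data, size_t len)
      {
          unsigned char out[4096];
          size_t compressed = fake_compress(data, len, out);

          if (compressed < len)
              printf("submitting %zu compressed bytes for a %zu byte range\n",
                     compressed, len);
          else
              printf("compression did not help, writing %zu bytes as-is\n", len);
      }

      int main(void)
      {
          unsigned char repetitive[256], incompressible[4] = { 1, 2, 3, 4 };

          memset(repetitive, 'A', sizeof(repetitive));
          write_delalloc_range(repetitive, sizeof(repetitive));         /* compressed path */
          write_delalloc_range(incompressible, sizeof(incompressible)); /* falls back */
          return 0;
      }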
    • Btrfs: Add ordered async work queues · 4a69a410
      Chris Mason authored
      Btrfs uses kernel threads to create async work queues for cpu intensive
      operations such as checksumming and decompression.  These work well,
      but they make it difficult to keep IO order intact.
      
      A single writepages call from pdflush or fsync will turn into a number
      of bios, and each bio is checksummed in parallel.  Once the checksum is
      computed, the bio is sent down to the disk, and since we don't control
      the order in which the parallel operations happen, they might go down to
      the disk in almost any order.
      
      The code deals with this somewhat by having deep work queues for a single
      kernel thread, making it very likely that a single thread will process all
      the bios for a single inode.
      
      This patch introduces an explicitly ordered work queue.  As work structs
      are placed into the queue they are put onto the tail of a list.  They have
      three callbacks:
      
      ->func (cpu intensive processing here)
      ->ordered_func (order sensitive processing here)
      ->ordered_free (free the work struct, all processing is done)
      
      The func callback does the cpu intensive work, and when it completes the work
      struct is marked as done.
      
      Every time a work struct completes, the list is checked to see if the head
      is marked as done.  If so the ordered_func callback is used to do the
      order sensitive processing and the ordered_free callback is used to do
      any cleanup.  Then we loop back and check the head of the list again.
      
      This patch also changes the checksumming code to use the ordered workqueues.
      On a 4 drive array, it increases streaming writes from 280MB/s to 350MB/s.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      4a69a410
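      The ordering trick above is easy to model outside the kernel.  The sketch below is a
      single-threaded, user-space approximation (real btrfs runs ->func on worker threads);
      entries are marked done out of order, but ordered_func only ever runs from the head
      of the list, so order-sensitive work happens in submission order.

      #include <stdbool.h>
      #include <stdio.h>
      #include <stdlib.h>

      struct work {
          int id;
          bool done;              /* set when ->func has finished */
          struct work *next;
      };

      static struct work *head, *tail;

      static void queue_work(struct work *w)
      {
          if (tail) tail->next = w; else head = w;
          tail = w;
      }

      static void ordered_func(struct work *w) { printf("ordered step for work %d\n", w->id); }
      static void ordered_free(struct work *w) { free(w); }

      /* run after any ->func completes: drain the head while it is marked done */
      static void run_ordered_completions(void)
      {
          while (head && head->done) {
              struct work *w = head;
              head = w->next;
              if (!head) tail = NULL;
              ordered_func(w);
              ordered_free(w);
          }
      }

      static void complete(struct work *w)   /* pretend ->func just finished on w */
      {
          w->done = true;
          run_ordered_completions();
      }

      int main(void)
      {
          struct work *a = calloc(1, sizeof(*a)); a->id = 1;
          struct work *b = calloc(1, sizeof(*b)); b->id = 2;

          queue_work(a);
          queue_work(b);

          complete(b);   /* b finished first, but nothing runs: the head (a) is not done */
          complete(a);   /* now a runs, then b, preserving submission order */
          return 0;
      }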
  5. 01 November 2008 (2 commits)
    • Btrfs: rev the disk format for fallocate · 537fb067
      Chris Mason authored
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      537fb067
    • Btrfs: Compression corner fixes · 70b99e69
      Chris Mason authored
      Make sure we keep page->mapping NULL on the pages we're getting
      via alloc_page.  It gets set so a few of the callbacks can do the right
      thing, but in general these pages don't have a mapping.
      
      Don't try to truncate compressed inline items in btrfs_drop_extents.
      The whole compressed item must be preserved.
      
      Don't try to create multipage inline compressed items.  When we try to
      overwrite just the first page of the file, we would have to read in and recow
      all the pages after it in the same compressed inline item.  For now, only
      create single page inline items.
      
      Make sure we lock pages in the correct order during delalloc.  The
      search into the state tree for delalloc bytes can return bytes before
      the page we already have locked.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      70b99e69
  6. 31 October 2008 (6 commits)
  7. 30 October 2008 (7 commits)
    • Btrfs: prevent looping forever in finish_current_insert and del_pending_extents · 87ef2bb4
      Chris Mason authored
      finish_current_insert and del_pending_extents process extent tree modifications
      that build up while we are changing the extent tree.  It is a confusing
      bit of code that prevents recursion.
      
      Both functions run through a list of pending operations, and both add to that
      list.  If two processes are in either one of them, they can end up looping
      forever, each making more work for the other.
      
      This patch makes them walk forward through the list of pending changes instead
      of always trying to process the entire list.  At transaction commit
      time, we catch any changes that were left over.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      87ef2bb4
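      A toy model of the change: rather than re-scanning the whole pending list until it
      drains (which livelocks when concurrent callers keep feeding it), each caller only
      walks forward over the work that existed when it started, and anything queued later
      is left for the next caller or for transaction commit.  Everything below is
      illustrative user-space C, not the btrfs extent-tree code.

      #include <stdio.h>

      #define MAX_PENDING 16

      static int pending[MAX_PENDING];
      static int pending_count;

      static void add_pending(int extent) { pending[pending_count++] = extent; }

      static void process_pending_once(void)
      {
          int snapshot = pending_count;           /* only the work queued before we started */

          for (int i = 0; i < snapshot; i++) {
              printf("processing pending extent %d\n", pending[i]);
              if (pending[i] % 2 == 0)
                  add_pending(pending[i] + 1);    /* processing can queue more work */
          }
          /* entries past 'snapshot' are left over, so two callers can no
           * longer feed each other forever */
      }

      int main(void)
      {
          add_pending(2);
          add_pending(4);
          process_pending_once();   /* handles 2 and 4; the 3 and 5 it queued wait */
          return 0;
      }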
    • Btrfs: Add root tree pointer transaction ids · 84234f3a
      Yan Zheng authored
      This patch adds transaction IDs to root tree pointers.
      Transaction IDs in tree pointers are compared with the
      generation numbers in block headers when reading root
      blocks of trees. This can detect some types of IO errors.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
      
      84234f3a
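      The check itself is a simple comparison between the generation recorded next to the
      pointer and the generation stamped into the block when it was written.  The structs
      below are illustrative, not the on-disk btrfs layout.

      #include <stdbool.h>
      #include <stdint.h>
      #include <stdio.h>

      struct root_ptr {
          uint64_t blocknr;
          uint64_t transid;      /* generation recorded alongside the pointer */
      };

      struct block_header {
          uint64_t generation;   /* generation written into the block itself */
      };

      /* a root block whose header generation does not match the pointer's transid
       * is not the block we expected, which catches some classes of IO error */
      static bool root_block_ok(const struct root_ptr *ptr, const struct block_header *hdr)
      {
          return hdr->generation == ptr->transid;
      }

      int main(void)
      {
          struct root_ptr ptr = { .blocknr = 1024, .transid = 77 };
          struct block_header good  = { .generation = 77 };
          struct block_header stale = { .generation = 42 };

          printf("good block:  %s\n", root_block_ok(&ptr, &good)  ? "ok" : "bad");
          printf("stale block: %s\n", root_block_ok(&ptr, &stale) ? "ok" : "bad");
          return 0;
      }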
    • Btrfs: nuke fs wide allocation mutex V2 · 25179201
      Josef Bacik authored
      This patch removes the giant fs_info->alloc_mutex and replaces it with a bunch
      of little locks.
      
      There is now a pinned_mutex, which is used when messing with the pinned_extents
      extent io tree, and the extent_ins_mutex which is used with the pending_del and
      extent_ins extent io trees.
      
      The locking for the extent tree stuff was inspired by a patch that Yan Zheng
      wrote to fix a race condition.  I cleaned it up some and changed the locking
      around a little bit, but the idea remains the same.  Basically, instead of
      holding the extent_ins_mutex throughout the processing of an extent on the
      extent_ins or pending_del trees, we just hold it while we're searching and when
      we clear the bits on those trees, and lock the extent for the duration of the
      operations on the extent.
      
      Also, to keep from getting hung up waiting to lock an extent, I've added a
      try_lock_extent so that if we cannot lock an extent we move on to the next one in the
      tree and come back to it later.  I have tested this heavily and it does
      not appear to break anything.  This has to be applied on top of my
      find_free_extent redo patch.
      
      I tested this patch on top of Yan's space rebalancing code and it worked fine.
      The only thing that has changed since the last version is that I pulled out all my
      debugging stuff; apparently I forgot to run guilt refresh before I sent the
      last patch out.  Thank you,
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
      
      25179201
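      The try-lock idea can be sketched with ordinary pthread mutexes: walk the extents,
      skip whatever is busy rather than sleeping on it, and come back to the skipped ones
      afterwards.  This is a user-space illustration only (compile with -pthread); the
      sleeps exist purely to stage some contention for the demo.

      #include <pthread.h>
      #include <stdio.h>
      #include <unistd.h>

      #define NR_EXTENTS 4

      static pthread_mutex_t extent_lock[NR_EXTENTS];

      static void *hold_extent2(void *arg)
      {
          (void)arg;
          pthread_mutex_lock(&extent_lock[2]);
          usleep(200 * 1000);                        /* pretend to work on extent 2 */
          pthread_mutex_unlock(&extent_lock[2]);
          return NULL;
      }

      static void process_extents(void)
      {
          int deferred[NR_EXTENTS], n_deferred = 0;

          /* first pass: trylock, so a busy extent never stalls the whole walk */
          for (int i = 0; i < NR_EXTENTS; i++) {
              if (pthread_mutex_trylock(&extent_lock[i]) != 0) {
                  deferred[n_deferred++] = i;
                  continue;
              }
              printf("processed extent %d\n", i);
              pthread_mutex_unlock(&extent_lock[i]);
          }

          /* second pass: now we are willing to wait for whatever was skipped */
          for (int j = 0; j < n_deferred; j++) {
              pthread_mutex_lock(&extent_lock[deferred[j]]);
              printf("processed deferred extent %d\n", deferred[j]);
              pthread_mutex_unlock(&extent_lock[deferred[j]]);
          }
      }

      int main(void)
      {
          pthread_t holder;

          for (int i = 0; i < NR_EXTENTS; i++)
              pthread_mutex_init(&extent_lock[i], NULL);

          pthread_create(&holder, NULL, hold_extent2, NULL);
          usleep(50 * 1000);                         /* let the holder grab extent 2 */
          process_extents();                         /* the first pass skips extent 2 */
          pthread_join(holder, NULL);
          return 0;
      }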
    • Btrfs: fix enospc when there is plenty of space · 80eb234a
      Josef Bacik authored
      So there is an odd case where we can possibly return -ENOSPC when there is in
      fact space to be had.  It only happens with metadata writes, and happens _very_
      infrequently.  What has to happen is we have to have allocated out of
      the first logical byte on the disk, which would set last_alloc to
      first_logical_byte(root, 0), so search_start == orig_search_start.  We then
      need to allocate for normal metadata, so BTRFS_BLOCK_GROUP_METADATA |
      BTRFS_BLOCK_GROUP_DUP.  We will do a block lookup for the given search_start,
      block_group_bits() won't match and we'll go to choose another block group.
      However because search_start matches orig_search_start we go to see if we can
      allocate a chunk.
      
      If we are in the situation that we cannot allocate a chunk, we fail and return -ENOSPC.
      This is kind of a big flaw in the way find_free_extent works, as it, along with
      find_free_space, loops through _all_ of the block groups, not just the ones that
      we want to allocate out of.  This patch completely kills find_free_space and
      rolls it into find_free_extent.  I've introduced a sort of state machine into
      this, which will make it easier to get cache miss information out of the
      allocator, and will work well with my locking changes.
      
      The basic flow is this: we have a variable, loop, which starts at 0, meaning we are
      in the hint phase.  We look up the block group for the hint, and look up the
      space_info for what we want to allocate out of.  If the block group we were
      pointed at by the hint either isn't of the correct type, or just doesn't have
      the space we need, we set head to space_info->block_groups, so we start at the
      beginning of the block groups for this particular space info, and loop through.
      
      This is also where we add the empty_cluster to total_needed.  At this point
      loop is set to 1 and we just loop through all of the block groups for this
      particular space_info looking for the space we need, just as find_free_space
      would have done, except we only hit the block groups we want and not _all_ of
      the block groups.  If we come full circle we see if we can allocate a chunk.
      If we cannot, we exit with -ENOSPC.  If we can, we allocate the chunk, start
      over at space_info->block_groups, and loop through again with loop == 2.  If we
      come full circle and still haven't found what we need, we exit with -ENOSPC.
      I've been running this for a couple of days now and it seems stable, and I
      haven't yet hit a -ENOSPC when there was plenty of space left.
      
      Also I've made a groups_sem to handle the group list for the space_info.  This
      is part of my locking changes, but is relatively safe and seems better than
      holding the space_info spinlock over that entire search time.  Thanks,
      Signed-off-by: Josef Bacik <jbacik@redhat.com>
       
      80eb234a
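      Read as a state machine, the flow above is roughly the loop below.  This is a toy
      rendering of the description, with the block-group search and chunk allocation
      stubbed out; none of the names are the real btrfs functions.

      #include <stdbool.h>
      #include <stdio.h>

      enum { LOOP_HINT = 0, LOOP_ALL_GROUPS = 1, LOOP_AFTER_CHUNK = 2 };

      static bool search_hint_group(void)     { return false; }               /* hint didn't fit */
      static bool search_all_groups(int loop) { return loop == LOOP_AFTER_CHUNK; }
      static bool allocate_chunk(void)        { return true; }                /* pretend we had room */

      static int find_free_extent(void)
      {
          for (int loop = LOOP_HINT; loop <= LOOP_AFTER_CHUNK; loop++) {
              bool found;

              if (loop == LOOP_HINT)
                  found = search_hint_group();
              else
                  found = search_all_groups(loop);   /* walk space_info->block_groups */

              if (found) {
                  printf("allocation satisfied in phase %d\n", loop);
                  return 0;
              }

              /* full circle with nothing found: try to add a chunk before the
               * final pass; if even that fails, it really is out of space */
              if (loop == LOOP_ALL_GROUPS && !allocate_chunk())
                  return -1;                         /* -ENOSPC */
          }
          return -1;                                 /* -ENOSPC */
      }

      int main(void)
      {
          return find_free_extent() ? 1 : 0;
      }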
    • Btrfs: Improve space balancing code · f82d02d9
      Yan Zheng authored
      This patch improves the space balancing code to keep more sharing
      of tree blocks. The only case that breaks sharing of tree blocks is when
      data extents get fragmented during balancing. The main changes in
      this patch are:
      
      Add a 'drop sub-tree' function. This solves the problem in the old code
      where the BTRFS_HEADER_FLAG_WRITTEN check breaks sharing of tree blocks.
      
      Remove relocation mapping tree. Relocation mappings are stored in
      struct btrfs_ref_path and updated dynamically during walking up/down
      the reference path. This reduces CPU usage and simplifies code.
      
      This patch also fixes a bug. Root items for reloc trees should be
      updated in btrfs_free_reloc_root.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
      
      f82d02d9
    • Btrfs: Add zlib compression support · c8b97818
      Chris Mason authored
      This is a large change for adding compression on reading and writing,
      both for inline and regular extents.  It does some fairly large
      surgery to the writeback paths.
      
      Compression is off by default and enabled by mount -o compress.  Even
      when the -o compress mount option is not used, it is possible to read
      compressed extents off the disk.
      
      If compression for a given set of pages fails to make them smaller, the
      file is flagged to avoid future compression attempts.
      
      * While finding delalloc extents, the pages are locked before being sent down
      to the delalloc handler.  This allows the delalloc handler to do complex things
      such as cleaning the pages, marking them writeback and starting IO on their
      behalf.
      
      * Inline extents are inserted at delalloc time now.  This allows us to compress
      the data before inserting the inline extent, and it allows us to insert
      an inline extent that spans multiple pages.
      
      * All of the in-memory extent representations (extent_map.c, ordered-data.c etc)
      are changed to record both an in-memory size and an on disk size, as well
      as a flag for compression.
      
      From a disk format point of view, the extent pointers in the file are changed
      to record the on disk size of a given extent and some encoding flags.
      Space in the disk format is allocated for compression encoding, as well
      as encryption and a generic 'other' field.  Neither the encryption nor the
      'other' field is currently used.
      
      In order to limit the amount of data read for a single random read in the
      file, the size of a compressed extent is limited to 128k.  This is a
      software only limit, the disk format supports u64 sized compressed extents.
      
      In order to limit the ram consumed while processing extents, the uncompressed
      size of a compressed extent is limited to 256k.  This is a software only limit
      and will be subject to tuning later.
      
      Checksumming is still done on compressed extents, and it is done on the
      uncompressed version of the data.  This way additional encodings can be
      layered on without having to figure out which encoding to checksum.
      
      Compression happens at delalloc time, which is basically single threaded because
      it is usually done by a single pdflush thread.  This makes it tricky to
      spread the compression load across all the cpus on the box.  We'll have to
      look at parallel pdflush walks of dirty inodes at a later time.
      
      Decompression is hooked into readpages and it does spread across CPUs nicely.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
      c8b97818
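      The two software limits described above are easy to see as a simple splitting rule:
      cap the uncompressed input fed to one extent, and cap the compressed extent that
      comes out.  The constants below are taken from the commit text, not read out of the
      current source tree, and the function is an illustration rather than btrfs code.

      #include <stdio.h>

      #define MAX_COMPRESSED_EXTENT   (128u * 1024)   /* bounds a single random read */
      #define MAX_UNCOMPRESSED_INPUT  (256u * 1024)   /* bounds RAM used per extent */

      /* split a dirty range into per-extent chunks that respect the input cap */
      static void plan_compressed_extents(unsigned long start, unsigned long len)
      {
          while (len) {
              unsigned long chunk = len < MAX_UNCOMPRESSED_INPUT ? len
                                                                 : MAX_UNCOMPRESSED_INPUT;
              printf("compress [%lu, %lu) into an extent of at most %u bytes\n",
                     start, start + chunk, MAX_COMPRESSED_EXTENT);
              start += chunk;
              len -= chunk;
          }
      }

      int main(void)
      {
          plan_compressed_extents(0, 600 * 1024);   /* 600k of dirty data -> three extents */
          return 0;
      }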
  8. 16 October 2008 (1 commit)
  9. 10 October 2008 (9 commits)
  10. 09 October 2008 (3 commits)