1. 27 March 2012, 1 commit
    • Btrfs: allow metadata blocks larger than the page size · 727011e0
      Committed by Chris Mason
      A few years ago, the btrfs code to support blocks larger than
      the page size was disabled to fix a few corner cases in the
      page cache handling.  This fixes the code to properly support
      large metadata blocks again.
      
      Since current kernels will crash early and often with larger
      metadata blocks, this adds an incompat bit so that older kernels
      can't mount it.
      
      This also does away with different blocksizes for nodes and leaves.
      You get a single block size for all tree blocks.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
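The incompat bit mentioned above is the standard mechanism for making a filesystem unmountable by kernels that predate a format change: a kernel refuses the mount if the superblock carries any incompat flag it does not recognize. Below is a minimal sketch of that check; the flag values are illustrative, though the real kernel does define a `BTRFS_FEATURE_INCOMPAT_BIG_METADATA` bit for this feature.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative flag values; the real definitions live in the kernel's
 * btrfs headers. */
#define FEATURE_INCOMPAT_BIG_METADATA  (1ULL << 5)  /* large metadata blocks */
#define SUPPORTED_INCOMPAT_FLAGS       ((1ULL << 0) | (1ULL << 1))

/* An older kernel rejects a filesystem whose superblock carries any
 * incompat flag outside the set it understands. */
static bool can_mount(uint64_t sb_incompat_flags)
{
    return (sb_incompat_flags & ~SUPPORTED_INCOMPAT_FLAGS) == 0;
}
```

So a filesystem created with large metadata blocks sets the new bit, and any kernel whose supported mask lacks it fails the mount instead of crashing later.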
  2. 24 May 2011, 1 commit
  3. 25 May 2010, 1 commit
  4. 24 September 2009, 1 commit
    • Btrfs: check size of inode backref before adding hardlink · a5719521
      Committed by Yan, Zheng
      For every hardlink in btrfs, there is a corresponding inode back
      reference. All inode back references for hardlinks in a given
      directory are stored in a single b-tree item. The size of a b-tree
      item is limited by the size of a b-tree leaf, so we can only create
      a limited number of hardlinks to a given file in a directory.
      
      The original code lacks this check, so it oopses when the number of
      hardlinks goes over the limit. This patch fixes the issue by adding
      checks to btrfs_link and btrfs_rename.
      Signed-off-by: Yan Zheng <zheng.yan@oracle.com>
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
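The fix described above amounts to a size check before growing the shared back-reference item: each new hardlink appends a reference header plus the link name, and the total must still fit in a leaf. A minimal sketch of that check, with illustrative sizes rather than the real on-disk constants:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Illustrative limits, not the real btrfs constants: each hardlink adds
 * one reference (a small fixed header plus the name bytes) to a single
 * shared b-tree item, which can never exceed the leaf's item capacity. */
#define MAX_ITEM_SIZE   3949u   /* hypothetical leaf payload limit */
#define INODE_REF_SIZE  10u     /* hypothetical per-reference header size */

/* Would adding one more hardlink with this name overflow the item? */
static bool link_would_overflow(uint32_t cur_item_size, size_t name_len)
{
    return cur_item_size + INODE_REF_SIZE + name_len > MAX_ITEM_SIZE;
}
```

With such a check in btrfs_link and btrfs_rename, the operation can return an error (e.g. too many links) instead of oopsing when the item is full.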
  5. 22 September 2009, 1 commit
  6. 25 March 2009, 1 commit
    • Btrfs: leave btree locks spinning more often · b9473439
      Committed by Chris Mason
      btrfs_mark_buffer_dirty would set dirty bits in the extent_io tree
      for the buffers it was dirtying.  This could require a kmalloc and
      was not atomic.  So, anyone who called btrfs_mark_buffer_dirty had
      to set any btree locks they were holding to blocking first.
      
      This commit changes dirty tracking for extent buffers to just use a flag
      in the extent buffer.  Now that we have one and only one extent buffer
      per page, this can be safely done without losing dirty bits along the way.
      
      This also introduces a path->leave_spinning flag that callers of
      btrfs_search_slot can use to indicate they will properly deal with a
      path returned where all the locks are spinning instead of blocking.
      
      Many of the btree search callers now expect spinning paths,
      resulting in better btree concurrency overall.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
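The key move in this commit is replacing tree-wide dirty tracking (which could allocate memory and therefore could not run under a spinning lock) with a single flag bit on the extent buffer itself, which is cheap and safe to flip atomically. A simplified sketch of that idea, with illustrative names:

```c
#include <stdbool.h>

/* Illustrative sketch: marking an extent buffer dirty becomes a single
 * bit flip on a per-buffer flags word, rather than an update of a
 * separate extent_io tree that might kmalloc.  Because nothing here can
 * sleep, the caller may keep its btree locks spinning. */
#define EB_FLAG_DIRTY (1u << 0)

struct extent_buffer_sketch {
    unsigned flags;             /* per-buffer state bits */
};

/* Returns the previous dirty state, in the spirit of test_and_set_bit(). */
static bool set_extent_buffer_dirty(struct extent_buffer_sketch *eb)
{
    bool was_dirty = (eb->flags & EB_FLAG_DIRTY) != 0;
    eb->flags |= EB_FLAG_DIRTY;
    return was_dirty;
}
```

Since one page now backs exactly one extent buffer, this per-buffer bit loses no information relative to the old tree-based tracking, and callers who set path->leave_spinning never need to upgrade their locks to blocking just to dirty a buffer.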
  7. 02 December 2008, 1 commit
  8. 25 September 2008, 3 commits
  9. 11 July 2007, 1 commit
  10. 14 June 2007, 1 commit
    • btrfs: Code cleanup · f1ace244
      Committed by Aneesh
      Below are some code cleanups that I came across while reading
      the code.
      
      a) alloc_path already calls init_path.
      b) Mention that btrfs_inode is the in-memory copy. Ext4 has
      ext4_inode_info as the in-memory copy and ext4_inode as the disk copy.
      Signed-off-by: Chris Mason <chris.mason@oracle.com>
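The split the cleanup comment documents — a packed on-disk inode versus an in-memory wrapper carrying runtime-only state — is a common filesystem pattern. A minimal illustrative sketch (the structures and fields here are invented for illustration, not the real btrfs or ext4 definitions):

```c
#include <stdint.h>

/* Plays the role of the on-disk inode (btrfs_inode_item / ext4_inode):
 * only what is persisted in the tree. */
struct disk_inode {
    uint64_t size;
    uint32_t nlink;
};

/* Plays the role of the in-memory inode (btrfs_inode / ext4_inode_info):
 * a cached copy of the disk form plus state that never hits disk. */
struct mem_inode {
    struct disk_inode item;     /* cached copy of the on-disk fields */
    int runtime_flags;          /* in-memory-only state */
};

/* Reading an inode fills the in-memory copy from the disk form. */
static void read_inode(struct mem_inode *mi, const struct disk_inode *di)
{
    mi->item = *di;
    mi->runtime_flags = 0;
}
```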
  11. 12 June 2007, 1 commit
  12. 11 April 2007, 1 commit
  13. 07 April 2007, 1 commit
  14. 02 April 2007, 1 commit
  15. 21 March 2007, 1 commit
  16. 17 March 2007, 1 commit
  17. 16 March 2007, 1 commit