1. 14 Sep 2009 (5 commits)
  2. 11 Sep 2009 (3 commits)
    • writeback: switch to per-bdi threads for flushing data · 03ba3782
      Committed by Jens Axboe
      This gets rid of pdflush for bdi writeout and kupdated style cleaning.
      pdflush writeout suffers from lack of locality and also requires more
      threads to handle the same workload, since it has to work in a
      non-blocking fashion against each queue. This also introduces lumpy
      behaviour and potential request starvation, since pdflush can be starved
      for queue access if others are accessing it. A sample ffsb workload that
      does random writes to files is about 8% faster here on a simple SATA drive
      during the benchmark phase. File layout also looks a LOT smoother in
      vmstat:
      
       r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
       0  1      0 608848   2652 375372    0    0     0 71024  604    24  1 10 48 42
       0  1      0 549644   2712 433736    0    0     0 60692  505    27  1  8 48 44
       1  0      0 476928   2784 505192    0    0     4 29540  553    24  0  9 53 37
       0  1      0 457972   2808 524008    0    0     0 54876  331    16  0  4 38 58
       0  1      0 366128   2928 614284    0    0     4 92168  710    58  0 13 53 34
       0  1      0 295092   3000 684140    0    0     0 62924  572    23  0  9 53 37
       0  1      0 236592   3064 741704    0    0     4 58256  523    17  0  8 48 44
       0  1      0 165608   3132 811464    0    0     0 57460  560    21  0  8 54 38
       0  1      0 102952   3200 873164    0    0     4 74748  540    29  1 10 48 41
       0  1      0  48604   3252 926472    0    0     0 53248  469    29  0  7 47 45
      
      where vanilla tends to fluctuate a lot in the creation phase:
      
       r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
       1  1      0 678716   5792 303380    0    0     0 74064  565    50  1 11 52 36
       1  0      0 662488   5864 319396    0    0     4   352  302   329  0  2 47 51
       0  1      0 599312   5924 381468    0    0     0 78164  516    55  0  9 51 40
       0  1      0 519952   6008 459516    0    0     4 78156  622    56  1 11 52 37
       1  1      0 436640   6092 541632    0    0     0 82244  622    54  0 11 48 41
       0  1      0 436640   6092 541660    0    0     0     8  152    39  0  0 51 49
       0  1      0 332224   6200 644252    0    0     4 102800  728    46  1 13 49 36
       1  0      0 274492   6260 701056    0    0     4 12328  459    49  0  7 50 43
       0  1      0 211220   6324 763356    0    0     0 106940  515    37  1 10 51 39
       1  0      0 160412   6376 813468    0    0     0  8224  415    43  0  6 49 45
       1  1      0  85980   6452 886556    0    0     4 113516  575    39  1 11 54 34
       0  2      0  85968   6452 886620    0    0     0  1640  158   211  0  0 46 54
      
      A 10 disk test with btrfs performs 26% faster with per-bdi flushing. An
      SSD based writeback test on XFS performs over 20% better as well, with
      the throughput being very stable around 1GB/sec, where pdflush only
      manages 750MB/sec and fluctuates wildly while doing so. Random buffered
      writes to many files behave a lot better as well, as do random mmap'ed
      writes.
      
      A separate thread is added to sync the super blocks. In the long term,
      adding sync_supers_bdi() functionality could get rid of this thread
      again. (A sketch of the per-bdi flusher idea follows this entry.)
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
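      To make the structural change concrete, here is a minimal, hypothetical
      sketch of the per-bdi flusher idea: one kthread per backing device that
      is free to block on its own queue. The thread body and the
      flush_one_bdi() helper are illustrative assumptions, not code from this
      commit.

        #include <linux/kthread.h>
        #include <linux/sched.h>
        #include <linux/backing-dev.h>
        #include <linux/writeback.h>
        #include <linux/jiffies.h>

        /* Hypothetical stand-in for the commit's writeback entry point. */
        void flush_one_bdi(struct backing_dev_info *bdi);

        /*
         * Illustrative only: one flusher kthread per backing_dev_info, so
         * writeback for one device may block without starving the others
         * (the core problem with the shared, non-blocking pdflush pool).
         */
        static int bdi_flusher_sketch(void *data)
        {
                struct backing_dev_info *bdi = data;

                while (!kthread_should_stop()) {
                        flush_one_bdi(bdi);

                        /* kupdated-style periodic wakeup (centisecs to ms). */
                        set_current_state(TASK_INTERRUPTIBLE);
                        schedule_timeout(
                                msecs_to_jiffies(dirty_writeback_interval * 10));
                }
                return 0;
        }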
    • writeback: move dirty inodes from super_block to backing_dev_info · 66f3b8e2
      Committed by Jens Axboe
      This is a first step at introducing per-bdi flusher threads. We should
      have no change in behaviour, although sb_has_dirty_inodes() is now
      ridiculously expensive, as there's no easy way to answer that question.
      Not a huge problem, since it'll be deleted in subsequent patches; the
      sketch below illustrates why the check got expensive.
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
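      Why sb_has_dirty_inodes() turns expensive is easy to see once dirty
      inodes are keyed by bdi rather than by super_block: the only way to
      answer the question is to walk dirty inodes and filter by owner. A
      hedged illustration of that shape (the b_dirty list, the i_list linkage
      and the global inode_lock match this era's kernel, but the body is a
      sketch, not the patch):

        #include <linux/backing-dev.h>
        #include <linux/writeback.h>
        #include <linux/fs.h>

        /*
         * Illustrative: with dirty inodes on per-bdi lists, "does this
         * super_block have dirty inodes?" means scanning a shared list and
         * filtering by inode->i_sb, not checking a per-sb list for emptiness.
         */
        static int bdi_has_dirty_sb_inodes(struct backing_dev_info *bdi,
                                           struct super_block *sb)
        {
                struct inode *inode;
                int dirty = 0;

                spin_lock(&inode_lock);
                list_for_each_entry(inode, &bdi->b_dirty, i_list) {
                        if (inode->i_sb == sb) {
                                dirty = 1;
                                break;
                        }
                }
                spin_unlock(&inode_lock);
                return dirty;
        }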
    • writeback: get rid of generic_sync_sb_inodes() export · d8a8559c
      Committed by Jens Axboe
      This adds two new exported functions:
      
      - writeback_inodes_sb(), which only attempts to write back dirty inodes
        on this super_block, for WB_SYNC_NONE writeout.
      - sync_inodes_sb(), which writes out all dirty inodes on this super_block
        and also waits for the IO to complete. (A usage sketch follows below.)
      Acked-by: Jan Kara <jack@suse.cz>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
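      A brief sketch of when each export fits, assuming a filesystem sync
      path as the caller (the surrounding function is hypothetical):

        #include <linux/fs.h>
        #include <linux/writeback.h>

        /* Hypothetical ->sync_fs-style caller. */
        static void example_sync_path(struct super_block *sb, int wait)
        {
                if (!wait)
                        /* WB_SYNC_NONE: start writeback, don't wait on it. */
                        writeback_inodes_sb(sb);
                else
                        /* Data integrity: write everything, wait for the IO. */
                        sync_inodes_sb(sb);
        }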
  3. 09 Sep 2009 (1 commit)
  4. 24 Aug 2009 (1 commit)
  5. 10 Aug 2009 (1 commit)
  6. 08 Aug 2009 (2 commits)
    • vfs: add __destroy_inode · 2e00c97e
      Committed by Christoph Hellwig
      When we want to tear down an inode that lost the add to the cache race
      in XFS we must not call into ->destroy_inode because that would delete
      the inode that won the race from the inode cache radix tree.
      
      This patch provides the __destroy_inode helper needed to fix this;
      the actual fix will be in the next patch.  As XFS was the only reason
      destroy_inode was exported, we shift the export to the new
      __destroy_inode. (The resulting split is sketched below.)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
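      A hedged sketch of the relationship between the two helpers after the
      split (the generic teardown is elided and the wrapper body is
      illustrative, not quoted from the patch; inode_cachep stands for the
      VFS inode slab):

        #include <linux/fs.h>
        #include <linux/slab.h>

        static struct kmem_cache *inode_cachep; /* VFS inode slab (sketch) */

        /*
         * Illustrative: __destroy_inode() does the generic teardown without
         * freeing, so XFS's lost-the-race path can call it and then free its
         * own inode without ->destroy_inode ever touching the radix tree.
         * The normal path remains a thin wrapper:
         */
        static void destroy_inode_sketch(struct inode *inode)
        {
                __destroy_inode(inode);
                if (inode->i_sb->s_op->destroy_inode)
                        inode->i_sb->s_op->destroy_inode(inode);
                else
                        kmem_cache_free(inode_cachep, inode);
        }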
    • vfs: fix inode_init_always calling convention · 54e34621
      Committed by Christoph Hellwig
      Currently inode_init_always calls into ->destroy_inode if the additional
      initialization fails.  That's not only counter-intuitive because
      inode_init_always did not allocate the inode structure, but in case of
      XFS it's actively harmful as ->destroy_inode might delete the inode from
      a radix-tree that has never been added.  This in turn might end up
      deleting the inode for the same inum that has been instantiated by
      another process and cause lots of subtle problems.
      
      Also in the case of re-initializing a reclaimable inode in XFS it would
      free an inode we still want to keep alive. (The new calling convention
      is sketched below.)
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Eric Sandeen <sandeen@sandeen.net>
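      After the change, inode_init_always() only reports failure, and whoever
      allocated the inode also disposes of it. A hedged sketch of the new
      convention, modelled on alloc_inode() (details illustrative):

        #include <linux/fs.h>
        #include <linux/slab.h>

        static struct kmem_cache *inode_cachep; /* VFS inode slab (sketch) */

        static struct inode *alloc_inode_sketch(struct super_block *sb)
        {
                struct inode *inode;

                if (sb->s_op->alloc_inode)
                        inode = sb->s_op->alloc_inode(sb);
                else
                        inode = kmem_cache_alloc(inode_cachep, GFP_KERNEL);
                if (!inode)
                        return NULL;

                if (unlikely(inode_init_always(sb, inode))) {
                        /*
                         * Free via the path matching the allocation above.
                         * Callers like XFS's inode re-init path can handle
                         * failure their own way instead of being forced
                         * through ->destroy_inode.
                         */
                        if (sb->s_op->destroy_inode)
                                sb->s_op->destroy_inode(inode);
                        else
                                kmem_cache_free(inode_cachep, inode);
                        return NULL;
                }
                return inode;
        }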
  7. 30 Jul 2009 (1 commit)
  8. 24 Jun 2009 (2 commits)
  9. 17 Jun 2009 (2 commits)
  10. 15 Jun 2009 (1 commit)
  11. 12 Jun 2009 (12 commits)
  12. 11 May 2009 (1 commit)
  13. 09 May 2009 (3 commits)
    • Fix races around the access to ->s_options · 2a32cebd
      Committed by Al Viro
      Put generic_show_options() read access to s_options under rcu_read_lock,
      split save_mount_options() into "we are setting it the first time"
      (used in foo_fill_super()) and "we are replacing and freeing the old
      one", with synchronize_rcu() before the kfree() in the latter. (The
      replace side is sketched below.)
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
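      A hedged sketch of the replace-and-free side (the shape follows the
      commit message; treat the body as illustrative):

        #include <linux/fs.h>
        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        /*
         * Illustrative: publish the new options, then wait out RCU readers
         * (generic_show_options() runs under rcu_read_lock()) before
         * freeing the old string.
         */
        static void replace_mount_options_sketch(struct super_block *sb,
                                                 char *options)
        {
                char *old = sb->s_options;

                rcu_assign_pointer(sb->s_options, options);
                if (old) {
                        synchronize_rcu();
                        kfree(old);
                }
        }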
    • Switch open_exec() and sys_uselib() to do_filp_open() · 6e8341a1
      Committed by Al Viro
      ... and make path_lookup_open() static
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • New helper: deactivate_locked_super() · 74dbbdd7
      Committed by Al Viro
      Does the equivalent of up_write(&s->s_umount); deactivate_super(s);
      however, it does not unlock s_umount until it's all over.
      As a result, it's safe to use to dispose of a new superblock on ->get_sb()
      failure exits - nobody will see the sucker until it's all over.
      The equivalent up_write/deactivate_super sequence is only safe for that
      purpose if the superblock is either safe to use or has a NULL ->s_root
      when we unlock. Normally filesystems take the required precautions, but
      	a) we do have bugs in that area in some of them, and
      	b) the up_write/deactivate_super sequence is extremely common,
      so the helper makes sense anyway. (A usage sketch follows below.)
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
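      A usage sketch of the failure exit this enables. example_fill_super()
      and the surrounding ->get_sb() body are hypothetical; only the helper
      calls are real:

        #include <linux/fs.h>
        #include <linux/err.h>

        /* Hypothetical fill_super for illustration. */
        static int example_fill_super(struct super_block *s, void *data,
                                      int silent);

        static int example_get_sb(struct file_system_type *fs_type, int flags,
                                  const char *dev_name, void *data,
                                  struct vfsmount *mnt)
        {
                struct super_block *s;
                int err;

                /* sget() returns with s->s_umount held for writing. */
                s = sget(fs_type, NULL, set_anon_super, data);
                if (IS_ERR(s))
                        return PTR_ERR(s);

                err = example_fill_super(s, data, flags & MS_SILENT);
                if (err) {
                        /*
                         * Tear the half-built superblock down while still
                         * holding s_umount: nobody can see it in between.
                         */
                        deactivate_locked_super(s);
                        return err;
                }

                s->s_flags |= MS_ACTIVE;
                simple_set_mnt(mnt, s);
                return 0;
        }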
  14. 25 Apr 2009 (1 commit)
    • lockd: call locks_release_private to cleanup per-filesystem state · a9e61e25
      Committed by Felix Blyakher
      For every lock request lockd creates a new file_lock object
      in nlmsvc_setgrantargs() by copying the passed in file_lock with
      locks_copy_lock(). A filesystem can attach its own lock_operations
      vector to the file_lock. It has to be cleaned up at the end of the
      file_lock's life. However, lockd doesn't do it today, yet it
      asserts in nlmclnt_release_lockargs() that the per-filesystem
      state is clean.
      This patch fixes it by exporting locks_release_private() and adding
      it to nlmsvc_freegrantargs(), to be symmetrical to creating a
      file_lock in nlmsvc_setgrantargs(). (The symmetry is sketched below.)
      Signed-off-by: Felix Blyakher <felixb@sgi.com>
      Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu>
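      The fix amounts to making teardown mirror setup; a hedged sketch (the
      real nlmsvc_freegrantargs() also frees other per-request state, elided
      here):

        #include <linux/fs.h>
        #include <linux/lockd/lockd.h>

        /*
         * Illustrative: nlmsvc_setgrantargs() copied the file_lock with
         * locks_copy_lock(), which may carry filesystem-private fl_ops
         * state, so the free side must let the filesystem release it.
         */
        static void nlmsvc_freegrantargs_sketch(struct nlm_rqst *call)
        {
                locks_release_private(&call->a_args.lock.fl);
                /* ... other per-request cleanup elided ... */
        }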
  15. 21 Apr 2009 (3 commits)
  16. 15 Apr 2009 (1 commit)
    • splice: add helpers for locking pipe inode · 61e0d47c
      Committed by Miklos Szeredi
      There are lots of sequences like this, especially in splice code:
      
      	if (pipe->inode)
      		mutex_lock(&pipe->inode->i_mutex);
      	/* do something */
      	if (pipe->inode)
      		mutex_unlock(&pipe->inode->i_mutex);
      
      so introduce helpers which do the conditional locking and unlocking.
      Also replace the inode_double_lock() call with a pipe_double_lock()
      helper to avoid spreading the use of this functionality beyond the
      pipe code.
      
      This patch is just a cleanup and should cause no behavioral changes.
      (The helpers are sketched below.)
      Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
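      The helpers then reduce to the obvious wrappers; a lightly hedged
      sketch (the real ones live in fs/pipe.c):

        #include <linux/pipe_fs_i.h>
        #include <linux/mutex.h>

        /* A NULL inode means an anonymous kernel pipe: nothing to lock. */
        static void pipe_lock_sketch(struct pipe_inode_info *pipe)
        {
                if (pipe->inode)
                        mutex_lock(&pipe->inode->i_mutex);
        }

        static void pipe_unlock_sketch(struct pipe_inode_info *pipe)
        {
                if (pipe->inode)
                        mutex_unlock(&pipe->inode->i_mutex);
        }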