    writeback: switch to per-bdi threads for flushing data · 03ba3782
    Committed by Jens Axboe
    This gets rid of pdflush for bdi writeout and kupdated style cleaning.
    pdflush writeout suffers from lack of locality and also requires more
    threads to handle the same workload, since it has to work in a
    non-blocking fashion against each queue. This also introduces lumpy
    behaviour and potential request starvation, since pdflush can be starved
    for queue access if others are accessing it. A sample ffsb workload that
    does random writes to files is about 8% faster here on a simple SATA drive
    during the benchmark phase. File layout also looks a LOT smoother in
    vmstat:
    
     r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
     0  1      0 608848   2652 375372    0    0     0 71024  604    24  1 10 48 42
     0  1      0 549644   2712 433736    0    0     0 60692  505    27  1  8 48 44
     1  0      0 476928   2784 505192    0    0     4 29540  553    24  0  9 53 37
     0  1      0 457972   2808 524008    0    0     0 54876  331    16  0  4 38 58
     0  1      0 366128   2928 614284    0    0     4 92168  710    58  0 13 53 34
     0  1      0 295092   3000 684140    0    0     0 62924  572    23  0  9 53 37
     0  1      0 236592   3064 741704    0    0     4 58256  523    17  0  8 48 44
     0  1      0 165608   3132 811464    0    0     0 57460  560    21  0  8 54 38
     0  1      0 102952   3200 873164    0    0     4 74748  540    29  1 10 48 41
     0  1      0  48604   3252 926472    0    0     0 53248  469    29  0  7 47 45
    
    where vanilla tends to fluctuate a lot in the creation phase:
    
     r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
     1  1      0 678716   5792 303380    0    0     0 74064  565    50  1 11 52 36
     1  0      0 662488   5864 319396    0    0     4   352  302   329  0  2 47 51
     0  1      0 599312   5924 381468    0    0     0 78164  516    55  0  9 51 40
     0  1      0 519952   6008 459516    0    0     4 78156  622    56  1 11 52 37
     1  1      0 436640   6092 541632    0    0     0 82244  622    54  0 11 48 41
     0  1      0 436640   6092 541660    0    0     0     8  152    39  0  0 51 49
     0  1      0 332224   6200 644252    0    0     4 102800  728    46  1 13 49 36
     1  0      0 274492   6260 701056    0    0     4 12328  459    49  0  7 50 43
     0  1      0 211220   6324 763356    0    0     0 106940  515    37  1 10 51 39
     1  0      0 160412   6376 813468    0    0     0  8224  415    43  0  6 49 45
     1  1      0  85980   6452 886556    0    0     4 113516  575    39  1 11 54 34
     0  2      0  85968   6452 886620    0    0     0  1640  158   211  0  0 46 54
    
    A 10-disk test with btrfs performs 26% faster with per-bdi flushing. An
    SSD-based writeback test on XFS performs over 20% better as well, with
    the throughput being very stable around 1GB/sec, where pdflush only
    manages 750MB/sec and fluctuates wildly while doing so. Random buffered
    writes to many files behave a lot better as well, as do random mmap'ed
    writes.
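    
    Conceptually, the change replaces the shared pdflush pool with one
    dedicated flusher thread per backing device, which can block on its own
    queue without starving anyone else. A minimal sketch of such a thread's
    main loop (the helper wb_do_writeback() and the interval handling are
    illustrative simplifications, not the exact code introduced here):
    
        /*
         * Sketch of a per-bdi flusher thread main loop: flush only this
         * device's dirty data, then sleep until the next interval.
         */
        static int bdi_flusher_thread(void *data)
        {
                struct bdi_writeback *wb = data;
    
                while (!kthread_should_stop()) {
                        /* Flush dirty inodes for this bdi only; blocking
                         * on the queue is fine, no other device waits on
                         * this thread. */
                        wb_do_writeback(wb);
    
                        /* Sleep until the next kupdated-style interval or
                         * an explicit wakeup (e.g. from
                         * balance_dirty_pages()); the interval sysctl is
                         * in centisecs. */
                        schedule_timeout_interruptible(
                                msecs_to_jiffies(dirty_writeback_interval * 10));
                }
                return 0;
        }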
    
    A separate thread is added to sync the super blocks. In the long term,
    adding sync_supers_bdi() functionality could get rid of this thread again.
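    
    A rough sketch of that super block sync thread, assuming a periodic
    timer wakeup (sync_supers() is the existing kernel interface; the
    thread name and wakeup scheme here are illustrative):
    
        /*
         * Sketch of the dedicated super block sync thread; woken
         * periodically, it simply writes back dirty super blocks.
         */
        static int bdi_sync_supers_thread(void *unused)
        {
                while (!kthread_should_stop()) {
                        set_current_state(TASK_INTERRUPTIBLE);
                        schedule();     /* a timer wakes us periodically */
    
                        /* Write back dirty super blocks; with pdflush
                         * gone, nothing else performs this periodic
                         * sync. */
                        sync_supers();
                }
                return 0;
        }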
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>