    Fix vm's outbound traffic control problem
    Hello,
    
    This is a patch to fix vm's outbound traffic control problem.
    
    Currently, vm's outbound traffic control by libvirt does not work correctly.
    This problem was previously discussed on the libvir-list ML, but it seems
    there is still no answer to it.
    http://www.redhat.com/archives/libvir-list/2011-August/msg00333.html
    
    I measured Guest (with virtio-net) to Host TCP throughput with the
    command "netperf -H".
    Here are the outbound QoS parameters and the results.
    
    outbound average rate[kilobytes/s] : Guest to Host throughput[Mbit/s]
    ======================================================================
    1024  (8Mbit/s)                    : 4.56
    2048  (16Mbit/s)                   : 3.29
    4096  (32Mbit/s)                   : 3.35
    8192  (64Mbit/s)                   : 3.95
    16384 (128Mbit/s)                  : 4.08
    32768 (256Mbit/s)                  : 3.94
    65536 (512Mbit/s)                  : 3.23
    
    The outbound throughput drops unreasonably and is not controlled at all.
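
    For reference, the outbound average rate used in these tests corresponds
    to a domain interface <bandwidth> setting along the following lines (a
    sketch of the libvirt XML for the 1024 kilobytes/s case; the network name
    is illustrative, and average is given in kilobytes/s):

      <interface type='network'>
        <source network='default'/>
        <bandwidth>
          <outbound average='1024'/>
        </bandwidth>
      </interface>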
    
    The cause of this problem is the too large mtu value in the "tc filter"
    command run by libvirt. The command uses the burst value as the mtu, and
    the burst defaults to the average rate value if it is not set. This value
    is too large: for example, if the average rate is set to 1024 kilobytes/s,
    the mtu is set to 1024 kilobytes, which is huge compared to the size of
    network packets.
    Here libvirt applies a tc ingress filter to the Host's vnet (tun) device.
    The tc ingress filter is implemented with the TBF (Token Bucket Filter)
    algorithm. TBF uses the mtu value to calculate the amount of tokens
    consumed by each packet (tc derives a rate table from the rate and mtu,
    and with a huge mtu each table cell covers far more bytes than a real
    packet, so every packet is charged many more tokens than its actual size).
    With a too large mtu value, the token consumption rate is therefore set
    too high, which leads to token starvation and deterioration of TCP
    throughput.
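
    For illustration, with an outbound average rate of 1024 kilobytes/s the
    policing filter built by libvirt in virnetdevbandwidth.c ends up roughly
    like this (a simplified sketch; the device name vnet0 and the exact
    argument order are illustrative):

      # ingress qdisc plus policing filter on the host-side vnet (tun) device
      tc qdisc add dev vnet0 ingress
      tc filter add dev vnet0 parent ffff: protocol ip u32 \
          match ip src 0.0.0.0/0 \
          police rate 1024kbps burst 1024kb mtu 1024kb drop flowid :1

    In tc units "kbps" means kilobytes/s and "kb" means kilobytes, so this
    policer sizes its rate table as if a single packet could be up to 1024
    kilobytes long.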
    
    Then, should we use the default mtu value of 2 kilobytes?
    The answer is No, because a Guest with a virtio-net device uses 65536
    bytes as the mtu to transmit packets to the Host, and the tc filter with
    the default mtu value of 2k drops packets whose size is larger than 2k.
    So most packets are dropped, which again leads to deterioration of TCP
    throughput.
    
    The appropriate mtu value is 65536 bytes, which is equal to the maximum
    value for a network interface device defined in <linux/netdevice.h>. This
    value is not so large that it causes token starvation, and not so small
    that it drops most packets.
    Therefore this patch sets the mtu value to 64kb (== 65536 bytes).
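
    With the patch applied, the same filter is generated with a fixed mtu,
    along these lines (again a sketch, continuing the vnet0 example above):

      tc filter add dev vnet0 parent ffff: protocol ip u32 \
          match ip src 0.0.0.0/0 \
          police rate 1024kbps burst 1024kb mtu 64kb drop flowid :1

    Only the mtu argument changes; the rate and burst are still derived from
    the configured average rate.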
    
    Again, here are the outbound QoS parameters and the TCP throughput with
    libvirt patched.
    
    outbound average rate[kilobytes/s] : Guest to Host throughput[Mbit/s]
    ======================================================================
    1024  (8Mbit/s)                    : 8.22
    2048  (16Mbit/s)                   : 16.42
    4096  (32Mbit/s)                   : 32.93
    8192  (64Mbit/s)                   : 66.85
    16384 (128Mbit/s)                  : 133.88
    32768 (256Mbit/s)                  : 271.01
    65536 (512Mbit/s)                  : 547.32
    
    The outbound traffic conforms to the given limit.
    
    Thank you,
    Signed-off-by: Eiichi Tsukata <eiichi.tsukata.xh@hitachi.com>