1. 05 Aug 2014, 1 commit
• Drivers: net-next: hyperv: Increase the size of the sendbuf region · 06b47aac
KY Srinivasan authored
Intel did some benchmarking of our network throughput when Linux on Hyper-V
is used as a gateway. This fix gave us almost 1 Gbps of additional throughput
on top of the roughly 5 Gbps base throughput we had prior to increasing the
sendbuf size. The sendbuf mechanism is a copy-based transport that is clearly
more efficient than the copy-free page-flipping mechanism for small packets.
In the forwarding scenario we deal only with MTU-sized packets, and increasing
the size of the sendbuf area gave us the additional performance. For what it
is worth, I am told Windows guests on Hyper-V use a similar sendbuf size as
well.
      
The exact value of the sendbuf is, I think, less important than the fact that
it needs to be larger than what Linux can allocate as physically contiguous
memory; hence the change to allocating it via vmalloc() (see the allocation
sketch after this entry).
      
We currently allocate a 16MB receive buffer, and we already use vmalloc for
that allocation. The low-level channel code has also already been modified to
deal with physically discontiguous memory in the ring-buffer setup.
      
Based on the experimentation Intel did, throughput improved as the sendbuf
size was increased up to 16MB and did not change beyond 16MB. I have therefore
chosen 16MB here.
      
Increasing the sendbuf size makes a material difference in small-packet
handling.
      
      In this version of the patch, based on David's feedback, I have added
      additional details in the commit log.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      06b47aac
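
The following is a minimal, hedged sketch (kernel-style C, not the driver's
literal diff) of the allocation change this commit describes: a 16MB send
buffer obtained with vzalloc() rather than a physically contiguous kzalloc()
allocation. The structure, field, and helper names are assumptions made for
illustration.

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/vmalloc.h>

#define NETVSC_SEND_BUFFER_SIZE	(16 * 1024 * 1024)	/* 16 MB, per the commit */

/* Illustrative stand-in for the driver's per-device state. */
struct netvsc_send_buf_example {
	void *send_buf;
	u32 send_buf_size;
};

static int example_alloc_send_buf(struct netvsc_send_buf_example *nvdev)
{
	/*
	 * vzalloc() returns zeroed, virtually contiguous memory; the backing
	 * pages may be physically scattered, which is acceptable because the
	 * low-level channel/ring-buffer code already copes with physically
	 * discontiguous memory.
	 */
	nvdev->send_buf = vzalloc(NETVSC_SEND_BUFFER_SIZE);
	if (!nvdev->send_buf)
		return -ENOMEM;

	nvdev->send_buf_size = NETVSC_SEND_BUFFER_SIZE;
	return 0;
}

static void example_free_send_buf(struct netvsc_send_buf_example *nvdev)
{
	vfree(nvdev->send_buf);	/* vmalloc memory must be freed with vfree() */
	nvdev->send_buf = NULL;
	nvdev->send_buf_size = 0;
}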
2. 24 Jul 2014, 1 commit
3. 10 Jul 2014, 1 commit
4. 03 Jul 2014, 1 commit
5. 20 Jun 2014, 1 commit
6. 17 Jun 2014, 1 commit
7. 24 May 2014, 1 commit
8. 14 May 2014, 1 commit
9. 12 May 2014, 1 commit
10. 01 May 2014, 2 commits
11. 24 Apr 2014, 2 commits
12. 22 Apr 2014, 1 commit
13. 12 Apr 2014, 3 commits
14. 11 Mar 2014, 7 commits
15. 06 Mar 2014, 1 commit
16. 20 Feb 2014, 1 commit
17. 18 Feb 2014, 3 commits
18. 17 Feb 2014, 1 commit
19. 28 Jan 2014, 1 commit
20. 22 Dec 2013, 1 commit
21. 18 Dec 2013, 1 commit
• netvsc: don't flush peers notifying work during setting mtu · 50dc875f
Jason Wang authored
There is a possible deadlock if we flush the peers-notifying work while
setting the MTU:
      
      [   22.991149] ======================================================
      [   22.991173] [ INFO: possible circular locking dependency detected ]
      [   22.991198] 3.10.0-54.0.1.el7.x86_64.debug #1 Not tainted
      [   22.991219] -------------------------------------------------------
      [   22.991243] ip/974 is trying to acquire lock:
      [   22.991261]  ((&(&net_device_ctx->dwork)->work)){+.+.+.}, at: [<ffffffff8108af95>] flush_work+0x5/0x2e0
      [   22.991307]
      but task is already holding lock:
      [   22.991330]  (rtnl_mutex){+.+.+.}, at: [<ffffffff81539deb>] rtnetlink_rcv+0x1b/0x40
      [   22.991367]
      which lock already depends on the new lock.
      
      [   22.991398]
      the existing dependency chain (in reverse order) is:
      [   22.991426]
      -> #1 (rtnl_mutex){+.+.+.}:
      [   22.991449]        [<ffffffff810dfdd9>] __lock_acquire+0xb19/0x1260
      [   22.991477]        [<ffffffff810e0d12>] lock_acquire+0xa2/0x1f0
      [   22.991501]        [<ffffffff81673659>] mutex_lock_nested+0x89/0x4f0
      [   22.991529]        [<ffffffff815392b7>] rtnl_lock+0x17/0x20
      [   22.991552]        [<ffffffff815230b2>] netdev_notify_peers+0x12/0x30
      [   22.991579]        [<ffffffffa0340212>] netvsc_send_garp+0x22/0x30 [hv_netvsc]
      [   22.991610]        [<ffffffff8108d251>] process_one_work+0x211/0x6e0
      [   22.991637]        [<ffffffff8108d83b>] worker_thread+0x11b/0x3a0
      [   22.991663]        [<ffffffff81095e5d>] kthread+0xed/0x100
      [   22.991686]        [<ffffffff81681c6c>] ret_from_fork+0x7c/0xb0
      [   22.991715]
      -> #0 ((&(&net_device_ctx->dwork)->work)){+.+.+.}:
      [   22.991715]        [<ffffffff810de817>] check_prevs_add+0x967/0x970
      [   22.991715]        [<ffffffff810dfdd9>] __lock_acquire+0xb19/0x1260
      [   22.991715]        [<ffffffff810e0d12>] lock_acquire+0xa2/0x1f0
      [   22.991715]        [<ffffffff8108afde>] flush_work+0x4e/0x2e0
      [   22.991715]        [<ffffffff8108e1b5>] __cancel_work_timer+0x95/0x130
      [   22.991715]        [<ffffffff8108e303>] cancel_delayed_work_sync+0x13/0x20
      [   22.991715]        [<ffffffffa03404e4>] netvsc_change_mtu+0x84/0x200 [hv_netvsc]
      [   22.991715]        [<ffffffff815233d4>] dev_set_mtu+0x34/0x80
      [   22.991715]        [<ffffffff8153bc2a>] do_setlink+0x23a/0xa00
      [   22.991715]        [<ffffffff8153d054>] rtnl_newlink+0x394/0x5e0
      [   22.991715]        [<ffffffff81539eac>] rtnetlink_rcv_msg+0x9c/0x260
      [   22.991715]        [<ffffffff8155cdd9>] netlink_rcv_skb+0xa9/0xc0
      [   22.991715]        [<ffffffff81539dfa>] rtnetlink_rcv+0x2a/0x40
      [   22.991715]        [<ffffffff8155c41d>] netlink_unicast+0xdd/0x190
      [   22.991715]        [<ffffffff8155c807>] netlink_sendmsg+0x337/0x750
      [   22.991715]        [<ffffffff8150d219>] sock_sendmsg+0x99/0xd0
      [   22.991715]        [<ffffffff8150d63e>] ___sys_sendmsg+0x39e/0x3b0
      [   22.991715]        [<ffffffff8150eba2>] __sys_sendmsg+0x42/0x80
      [   22.991715]        [<ffffffff8150ebf2>] SyS_sendmsg+0x12/0x20
      [   22.991715]        [<ffffffff81681d19>] system_call_fastpath+0x16/0x1b
      
This happens because we hold the rtnl_lock() before ndo_change_mtu() and then
try to flush the work in netvsc_change_mtu(); in the meantime,
netdev_notify_peers() may be called from the worker, which also tries to take
the rtnl_lock, so the flush can never succeed. Solve this by not canceling and
flushing the work. This is safe because the transmission done by
NETDEV_NOTIFY_PEERS is synchronized with the netif_tx_disable() called by
netvsc_change_mtu() (a simplified sketch of this pattern follows after this
entry).
Reported-by: Yaju Cao <yacao@redhat.com>
Tested-by: Yaju Cao <yacao@redhat.com>
      Cc: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Acked-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
      50dc875f
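
As a simplified, hedged sketch of the deadlock and of the chosen fix:
ndo_change_mtu() is invoked with the rtnl lock already held, so waiting in it
for a work item that itself needs the rtnl lock can never complete. The
structure and function names below are illustrative stand-ins, not the
driver's exact code.

#include <linux/netdevice.h>
#include <linux/workqueue.h>

/* Simplified stand-in for the driver's per-netdev private context. */
struct example_net_device_context {
	struct net_device *ndev;	/* back-pointer, illustrative */
	struct delayed_work dwork;	/* the peers-notifying work */
};

/* Worker: netdev_notify_peers() acquires rtnl_lock() internally. */
static void example_notify_peers_work(struct work_struct *w)
{
	struct example_net_device_context *ndev_ctx =
		container_of(w, struct example_net_device_context, dwork.work);

	netdev_notify_peers(ndev_ctx->ndev);
}

/* ndo_change_mtu: the networking core calls this with rtnl_lock held. */
static int example_change_mtu(struct net_device *ndev, int mtu)
{
	struct example_net_device_context *ndev_ctx = netdev_priv(ndev);

	/*
	 * Deadlock before the fix: cancelling/flushing here waits for
	 * example_notify_peers_work() while rtnl_lock is held, but that
	 * worker cannot finish until it acquires rtnl_lock itself:
	 *
	 *	cancel_delayed_work_sync(&ndev_ctx->dwork);
	 *
	 * Fix: do not cancel or flush the work at all; netif_tx_disable()
	 * below already synchronizes against the transmissions that the
	 * NETDEV_NOTIFY_PEERS work triggers.
	 */
	(void)ndev_ctx;		/* referenced only in the comment above */

	netif_tx_disable(ndev);

	/* ... tear the device down and bring it back up with the new MTU ... */
	ndev->mtu = mtu;

	return 0;
}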
22. 07 Dec 2013, 1 commit
23. 02 Aug 2013, 1 commit
24. 17 Jul 2013, 1 commit
25. 18 Jun 2013, 1 commit
26. 01 Jun 2013, 1 commit
27. 30 Apr 2013, 1 commit
28. 20 Apr 2013, 1 commit