1. 03 Dec 2015, 5 commits
2. 02 Dec 2015, 1 commit
• hv_netvsc: rework link status change handling · 27a70af3
  Authored by Vitaly Kuznetsov
There are several issues in the hv_netvsc driver with regard to link status
change handling:
- RNDIS_STATUS_NETWORK_CHANGE results in calling a userspace helper that does
  '/etc/init.d/network restart', which is inappropriate and broken for many
  reasons.
- The link_watch infrastructure sends at most one notification per second, so
  for e.g. a paired disconnect/connect event we get only a single notification
  carrying the last status. This makes it impossible to handle such situations
  in userspace.
      
Redo link status change handling in the following way (see the sketch after the list):
      - Create a list of reconfig events in network device context.
      - On a reconfig event add it to the list of events and schedule
        netvsc_link_change().
- In netvsc_link_change(), ensure a 2-second delay between consecutive link
  status changes.
      - Handle RNDIS_STATUS_NETWORK_CHANGE as a paired disconnect/connect event.
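
A minimal sketch of the queued approach described above, in the spirit of the commit but not its actual code; the structure fields, the EV_LINK_DOWN/EV_LINK_UP events, and the LINKCHANGE_DELAY constant are assumptions for illustration:

```c
/* Illustrative sketch only: structure fields, event names and the
 * LINKCHANGE_DELAY constant are assumptions, not the driver's real code.
 */
#include <linux/jiffies.h>
#include <linux/list.h>
#include <linux/netdevice.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

#define LINKCHANGE_DELAY (2 * HZ)	/* at least 2 s between carrier changes */

enum reconfig_event { EV_LINK_DOWN, EV_LINK_UP };

struct netvsc_reconfig {
	struct list_head list;
	enum reconfig_event event;
};

struct net_device_context {
	struct net_device *ndev;
	struct delayed_work dwork;		/* runs netvsc_link_change() */
	spinlock_t lock;
	struct list_head reconfig_events;	/* FIFO of pending events */
	unsigned long last_change;		/* jiffies of last carrier change */
};

/* Status-callback path: queue one event and kick the worker.  For
 * RNDIS_STATUS_NETWORK_CHANGE the caller queues EV_LINK_DOWN immediately
 * followed by EV_LINK_UP, i.e. a paired disconnect/connect.
 */
static void netvsc_queue_reconfig(struct net_device_context *ndc,
				  enum reconfig_event event)
{
	struct netvsc_reconfig *rc = kzalloc(sizeof(*rc), GFP_ATOMIC);
	unsigned long flags;

	if (!rc)
		return;
	rc->event = event;

	spin_lock_irqsave(&ndc->lock, flags);
	list_add_tail(&rc->list, &ndc->reconfig_events);
	spin_unlock_irqrestore(&ndc->lock, flags);

	schedule_delayed_work(&ndc->dwork, 0);
}

/* Worker: apply one queued event per run, at most one every 2 seconds. */
static void netvsc_link_change(struct work_struct *w)
{
	struct net_device_context *ndc =
		container_of(to_delayed_work(w), struct net_device_context, dwork);
	unsigned long flags, due = ndc->last_change + LINKCHANGE_DELAY;
	struct netvsc_reconfig *rc;

	if (time_before(jiffies, due)) {	/* too soon after the last change */
		schedule_delayed_work(&ndc->dwork, due - jiffies);
		return;
	}

	spin_lock_irqsave(&ndc->lock, flags);
	rc = list_first_entry_or_null(&ndc->reconfig_events,
				      struct netvsc_reconfig, list);
	if (rc)
		list_del(&rc->list);
	spin_unlock_irqrestore(&ndc->lock, flags);

	if (!rc)
		return;

	if (rc->event == EV_LINK_UP)
		netif_carrier_on(ndc->ndev);
	else
		netif_carrier_off(ndc->ndev);
	ndc->last_change = jiffies;
	kfree(rc);

	/* More queued events (e.g. the "up" half of a pair)? Re-arm. */
	if (!list_empty(&ndc->reconfig_events))
		schedule_delayed_work(&ndc->dwork, LINKCHANGE_DELAY);
}
```

Because each queued event is applied in its own worker pass with the enforced gap, link_watch forwards both halves of a disconnect/connect pair to userspace instead of collapsing them into one notification.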
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
3. 13 Aug 2015, 1 commit
4. 27 Jul 2015, 2 commits
5. 09 Jul 2015, 1 commit
6. 31 May 2015, 1 commit
• hv_netvsc: Properly size the vrss queues · e01ec219
  Authored by KY Srinivasan
The current algorithm for deciding on the number of VRSS channels is not
optimal: we open the minimum of the number of CPUs online and the number of
VRSS channels the host is offering, so on a 32-VCPU guest we could potentially
open 32 VRSS subchannels. Experimentation has shown that it is best to limit
the number of VRSS channels to the number of CPUs within a NUMA node.
      
Here is the new algorithm for deciding on the number of subchannels to open
(see the sketch after the list):
        1) Pick the minimum of what the host is offering and what the driver
           in the guest is specifying as the default value.
        2) Pick the minimum of (1) and the number of CPUs in the NUMA
           node the primary channel is bound to.
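
A sketch of that selection logic; the parameter names are assumptions, and cpumask_weight()/cpumask_of_node() are used here simply to count the CPUs in the primary channel's NUMA node:

```c
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/topology.h>

/* Illustrative sketch: pick how many VRSS subchannels to request.
 * 'offered' is what the host advertises, 'driver_default' is the guest
 * driver's built-in cap, and 'numa_node' is the node of the CPU the primary
 * channel is bound to.  Names are assumptions, not the driver's own.
 */
static unsigned int netvsc_pick_num_channels(unsigned int offered,
					     unsigned int driver_default,
					     int numa_node)
{
	/* 1) min(host offer, driver default) */
	unsigned int num = min(offered, driver_default);

	/* 2) further cap by the CPU count of the primary channel's NUMA node */
	return min_t(unsigned int, num,
		     cpumask_weight(cpumask_of_node(numa_node)));
}
```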
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
7. 18 May 2015, 1 commit
8. 15 May 2015, 1 commit
9. 30 Apr 2015, 2 commits
10. 15 Apr 2015, 1 commit
11. 08 Apr 2015, 3 commits
12. 01 Apr 2015, 1 commit
13. 30 Mar 2015, 1 commit
14. 01 Mar 2015, 1 commit
15. 23 Dec 2014, 1 commit
16. 23 Aug 2014, 1 commit
17. 07 Aug 2014, 1 commit
18. 05 Aug 2014, 1 commit
• Drivers: net-next: hyperv: Increase the size of the sendbuf region · 06b47aac
  Authored by KY Srinivasan
Intel did some benchmarking of our network throughput when Linux on Hyper-V is
used as a gateway. This change gave us almost 1 Gbps of additional throughput
on top of the roughly 5 Gbps base throughput we had prior to increasing the
sendbuf size. The sendbuf mechanism is a copy-based transport which, for small
packets, is clearly more efficient than the copy-free page-flipping mechanism.
In the forwarding scenario we deal only with MTU-sized packets, and increasing
the size of the sendbuf area gave us the additional performance. For what it is
worth, I am told that Windows guests on Hyper-V use a similar sendbuf size as
well.
      
The exact sendbuf size is, I think, less important than the fact that it needs
to be larger than what Linux can allocate as physically contiguous memory.
Hence the change to allocating it via vmalloc().
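
A sketch of what the allocation change amounts to; the constant and helper names below are illustrative placeholders, not the driver's actual identifiers:

```c
#include <linux/vmalloc.h>

/* 16 MB, per the benchmarking described above; a placeholder name. */
#define NETVSC_SEND_BUF_SIZE (16 * 1024 * 1024)

/* A region this large cannot be assumed to be available as physically
 * contiguous memory, so back it with vzalloc() (virtually contiguous,
 * physically scattered), as is already done for the 16 MB receive buffer.
 */
static void *netvsc_alloc_send_buf(void)
{
	return vzalloc(NETVSC_SEND_BUF_SIZE);
}

static void netvsc_free_send_buf(void *buf)
{
	vfree(buf);
}
```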
      
We currently allocate the 16MB receive buffer via vmalloc as well, and the
low-level channel code has already been modified to deal with physically
discontiguous memory in the ring-buffer setup.
      
Based on Intel's experimentation, throughput improved as the sendbuf size was
increased up to 16MB and showed no further improvement beyond 16MB, so I have
chosen 16MB here.
      
Increasing the sendbuf size makes a material difference in small-packet handling.
      
      In this version of the patch, based on David's feedback, I have added
      additional details in the commit log.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
19. 20 Jun 2014, 1 commit
20. 24 May 2014, 1 commit
21. 01 May 2014, 1 commit
• hyperv: Enable sendbuf mechanism on the send path · c25aaf81
  Authored by KY Srinivasan
We currently send packets using a copy-free mechanism (the guest-to-host
transport via VMBus). While this is clearly optimal for large packets, it may
not be optimal for small packets. The Hyper-V host supports a second,
"copy-based" mechanism for sending packets; we implement that mechanism in
this patch.
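
A sketch of that send-path split; the section size, the helper names, and the callback-style fallbacks are assumptions used only to illustrate the copy vs. copy-free decision:

```c
#include <linux/kernel.h>
#include <linux/skbuff.h>

#define SEND_SECTION_SIZE 2048	/* assumed size of one sendbuf section */

/* Illustrative sketch of the send-path split.  If the packet fits in a
 * pre-allocated sendbuf section, copy it there and let the host read that
 * section (copy-based path); otherwise hand the guest pages to the host
 * directly (the existing copy-free path).  The two callbacks stand in for
 * the real VMBus send routines.
 */
static int netvsc_xmit_sketch(void *send_buf, u32 section_index,
			      struct sk_buff *skb,
			      int (*send_via_sendbuf)(u32 section, u32 len),
			      int (*send_copy_free)(struct sk_buff *skb))
{
	if (section_index != U32_MAX && skb->len <= SEND_SECTION_SIZE) {
		void *dest = (char *)send_buf +
			     (size_t)section_index * SEND_SECTION_SIZE;

		/* Small packet: copy it into its sendbuf section. */
		skb_copy_bits(skb, 0, dest, skb->len);
		return send_via_sendbuf(section_index, skb->len);
	}

	/* Large packet: keep using the copy-free transport. */
	return send_copy_free(skb);
}
```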
      
      In this version of the patch I have addressed a comment from David Miller.
      
With this patch (and all of the other offload and VRSS patches), we are now
able to almost saturate a 10G interface between Linux VMs on Hyper-V on
different hosts: close to 9 Gbps as measured via iperf.
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
22. 24 Apr 2014, 2 commits
23. 22 Apr 2014, 1 commit
24. 12 Apr 2014, 1 commit
25. 11 Mar 2014, 6 commits
26. 20 Feb 2014, 1 commit