1. 23 Jun, 2021 (1 commit)
  2. 30 Mar, 2021 (1 commit)
  3. 02 Mar, 2021 (1 commit)
  4. 15 Feb, 2021 (1 commit)
  5. 05 Feb, 2021 (1 commit)
  6. 30 Jan, 2021 (1 commit)
  7. 17 Nov, 2020 (1 commit)
  8. 18 Sep, 2020 (1 commit)
  9. 11 Sep, 2020 (1 commit)
  10. 23 Jul, 2020 (1 commit)
    • hv_netvsc: add support for vlans in AF_PACKET mode · fdd8fac4
      Committed by Sriram Krishnan
      VLAN-tagged packets are dropped when they are sent with DPDK, which uses
      the AF_PACKET interface, on a Hyper-V guest.
      
      The packet layer uses the tpacket interface to communicate VLAN
      information to the upper layers. On the Rx path the VLAN info can be read
      from the tpacket header, but on the Tx path this information is still
      inside the packet frame, and the paravirtual driver has to push it back
      into the NDIS header, which the host OS then uses to form the packet.
      
      This transfer from the packet frame to the NDIS header is currently
      missing, so the host OS drops all VLAN-tagged packets sent by users of
      AF_PACKET (ETH_P_ALL) such as DPDK.
      
      Here is an overview of how the VLAN header is handled along the packet path:
      
      The RX path (user space handles everything):
        1. The host OS strips the VLAN tag from the received packet and places
           it in the NDIS header.
        2. The guest kernel (hv_netvsc) receives the packet and moves the VLAN
           info from the NDIS header into the kernel SKB.
        3. The kernel shares the packet with the user space application via
           PACKET_MMAP; the SKB VLAN info is copied into the tpacket header and
           TP_STATUS_VLAN_VALID is set.
        4. The user space application re-inserts the VLAN info into the frame
           (see the sketch after this list).
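
      A minimal user space sketch of steps 3-4, assuming a TPACKET_V2 RX ring
      (the helper name and the ring setup around it are not part of this patch):

        #include <linux/if_packet.h>
        #include <linux/if_ether.h>
        #include <arpa/inet.h>
        #include <stdint.h>
        #include <string.h>

        /* Copy one frame out of the RX ring, re-inserting the 802.1Q tag that
         * the kernel stripped into the tpacket2 header. */
        static size_t copy_frame_with_vlan(const struct tpacket2_hdr *hdr,
                                           uint8_t *out)
        {
                const uint8_t *frame = (const uint8_t *)hdr + hdr->tp_mac;
                size_t len = hdr->tp_snaplen;
                uint16_t tpid, tag[2];

                if (!(hdr->tp_status & TP_STATUS_VLAN_VALID)) {
                        memcpy(out, frame, len);
                        return len;
                }

                tpid = (hdr->tp_status & TP_STATUS_VLAN_TPID_VALID) ?
                       hdr->tp_vlan_tpid : ETH_P_8021Q;
                tag[0] = htons(tpid);
                tag[1] = htons(hdr->tp_vlan_tci);

                memcpy(out, frame, 2 * ETH_ALEN);              /* dst + src MAC */
                memcpy(out + 2 * ETH_ALEN, tag, sizeof(tag));  /* TPID + TCI    */
                memcpy(out + 2 * ETH_ALEN + sizeof(tag),       /* rest of frame */
                       frame + 2 * ETH_ALEN, len - 2 * ETH_ALEN);
                return len + sizeof(tag);
        }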
      
      The TX path:
        1. The user space application has the VLAN info in the frame.
        2. The guest kernel gets the packets from the application via PACKET_MMAP.
        3. The kernel then hands the frame to the hv_netvsc driver. The only way
           to send the VLAN is to strip it from the frame and set it up in the
           SKB (see the sketch after this list).
        4. The host OS re-inserts the TX VLAN based on the NDIS header; if it
           still sees a VLAN tag inside the frame, the packet is dropped.
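
      A sketch of the TX-side handling this implies in the hv_netvsc transmit
      path (exact placement and error handling in the patch may differ):

        /* If the AF_PACKET user left the 802.1Q tag inline in the frame, pop
         * it into skb metadata so the normal NDIS per-packet-info path can
         * carry it to the host. */
        if (skb->protocol == htons(ETH_P_8021Q)) {
                u16 vlan_tci;

                skb_reset_mac_header(skb);
                if (eth_type_vlan(eth_hdr(skb)->h_proto) &&
                    __skb_vlan_pop(skb, &vlan_tci) == 0)
                        __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
                                               vlan_tci);
                /* skb_vlan_tag_present(skb) is now true, so the existing code
                 * that fills the NDIS VLAN per-packet info picks up the tag. */
        }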
      
      Cc: xe-linux-external@cisco.com
      Cc: Sriram Krishnan <srirakr2@cisco.com>
      Signed-off-by: Sriram Krishnan <srirakr2@cisco.com>
      Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 25 Jan, 2020 (1 commit)
    • hv_netvsc: Add XDP support · 351e1581
      Committed by Haiyang Zhang
      This patch adds support for XDP in native mode to the hv_netvsc driver
      and transparently sets the XDP program on the associated VF NIC as well.

      Setting or unsetting the XDP program on the synthetic NIC (netvsc)
      propagates to the VF NIC automatically. Setting or unsetting the XDP
      program on the VF NIC directly is not recommended; it is not propagated
      to the synthetic NIC and may be overwritten when the synthetic NIC is set.
      
      The Azure/Hyper-V synthetic NIC receive buffer doesn't provide headroom
      for XDP. We considered reusing the RNDIS header space, but it is too
      small, so we decided to copy the packets to a page buffer for XDP.
      Most of our VMs on Azure have Accelerated Networking (SR-IOV) enabled, so
      most packets run on the VF NIC and the synthetic NIC is only a fallback
      data path; the data copy on netvsc therefore won't impact performance
      significantly.
      
      An XDP program cannot run with LRO (RSC) enabled, so disable LRO before
      running XDP:
              ethtool -K eth0 lro off
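
      A minimal example of a program that can then be attached (not part of this
      patch; eth0 and the file names are placeholders):

        /* xdp_pass.c: the smallest useful XDP program, it passes every packet. */
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        SEC("xdp")
        int xdp_pass_all(struct xdp_md *ctx)
        {
                return XDP_PASS;
        }

        char _license[] SEC("license") = "GPL";

      Built with "clang -O2 -target bpf -c xdp_pass.c -o xdp_pass.o", attached
      in native mode with "ip link set dev eth0 xdpdrv obj xdp_pass.o sec xdp",
      and detached again with "ip link set dev eth0 xdpdrv off".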
      
      XDP actions not yet supported:
              XDP_REDIRECT
      Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 21 Dec, 2019 (1 commit)
  13. 24 Nov, 2019 (1 commit)
  14. 22 Nov, 2019 (2 commits)
  15. 06 Nov, 2019 (1 commit)
  16. 07 Sep, 2019 (1 commit)
  17. 31 May, 2019 (1 commit)
  18. 30 Mar, 2019 (1 commit)
  19. 24 Jan, 2019 (2 commits)
  20. 23 Sep, 2018 (2 commits)
  21. 31 Jul, 2018 (1 commit)
    • hv_netvsc: Add per-cpu ethtool stats for netvsc · 6ae74671
      Committed by Yidong Ren
      This patch implements the following ethtool stats fields for netvsc:
      cpu<n>_tx/rx_packets/bytes
      cpu<n>_vf_tx/rx_packets/bytes
      
      The corresponding per-cpu counters already exist in the current code.
      Exposing these counters will help with troubleshooting performance issues.
      
      for_each_present_cpu() is used instead of for_each_possible_cpu(), because
      for_each_possible_cpu() would create very long and mostly useless output.
      for_each_possible_cpu() is still used for the internal buffer, but not for
      the ethtool output.
      
      There could be an overflow if a CPU were added between the ethtool calls
      to netvsc_get_sset_count(), netvsc_get_ethtool_stats() and
      netvsc_get_strings() (it is still safe if a CPU is removed), because
      ethtool makes these three calls separately. As long as we use ethtool,
      I can't see any clean solution.
      
      Currently, and for the foreseeable short term, Hyper-V doesn't support CPU
      hot-plug. Also, ethtool is for admin use; it is unlikely that an admin
      would perform such a combination of operations.
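
      A rough sketch of the counting side, assuming a fixed table of per-CPU
      stat name formats (the names below are illustrative, not the driver's
      actual symbols):

        /* Report stats only for present CPUs so the ethtool string table stays
         * short; internal buffers can still be sized for all possible CPUs. */
        static const char pcpu_stat_fmts[][ETH_GSTRING_LEN] = {
                "cpu%u_rx_packets", "cpu%u_rx_bytes",
                "cpu%u_tx_packets", "cpu%u_tx_bytes",
                "cpu%u_vf_rx_packets", "cpu%u_vf_rx_bytes",
                "cpu%u_vf_tx_packets", "cpu%u_vf_tx_bytes",
        };

        static int pcpu_stats_count(void)
        {
                /* One block of names per present CPU. */
                return num_present_cpus() * ARRAY_SIZE(pcpu_stat_fmts);
        }

      The resulting counters appear in the "ethtool -S <dev>" output alongside
      the existing per-queue stats.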
      
      Changes in v2:
        - Remove cpp style comment
        - Resubmit after freeze
      
      Changes in v3:
        - Reimplemented with kvmalloc instead of alloc_percpu
      
      Changes in v4:
        - Fixed inconsistent array size
        - Use kvmalloc_array instead of kvmalloc
      Signed-off-by: Yidong Ren <yidren@microsoft.com>
      Reviewed-by: Stephen Hemminger <sthemmin@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 30 Jun, 2018 (1 commit)
  23. 15 Jun, 2018 (1 commit)
  24. 13 Jun, 2018 (2 commits)
  25. 29 May, 2018 (1 commit)
  26. 11 May, 2018 (1 commit)
  27. 19 Apr, 2018 (2 commits)
  28. 26 Mar, 2018 (1 commit)
  29. 23 Mar, 2018 (1 commit)
    • hv_netvsc: common detach logic · 7b2ee50c
      Committed by Stephen Hemminger
      Make a common function for detaching the internals of the device during
      MTU and RSS changes. Make sure no more packets are transmitted and all
      outstanding packets have been received before doing the device teardown.

      Change the wait logic to be common and to use usleep_range().
      
      Change the transmit-enable logic so that the transmit queues are disabled
      while the lower device is being changed and are re-enabled only after the
      sub-channels are set up. This avoids the case where a packet could be sent
      while a sub-channel was not yet initialized.
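
      A simplified sketch of the ordering this enforces (helper and field names
      below are illustrative, not the driver's actual symbols):

        /* Stop new transmits, drain what is in flight, then tear down; the
         * queues come back only after the new sub-channels are up again. */
        static void netvsc_detach_sketch(struct net_device *ndev,
                                         struct netvsc_device *nvdev)
        {
                netif_tx_disable(ndev);           /* no new packets queued */

                while (atomic_read(&nvdev->outstanding_sends))
                        usleep_range(1000, 2000); /* wait for host completions */

                /* ...tear down channels and buffers here; the matching attach
                 * path re-enables the transmit queues only once the new
                 * sub-channels are open. */
        }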
      
      Fixes: 8195b139 ("hv_netvsc: fix deadlock on hotplug")
      Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 09 Mar, 2018 (1 commit)
  31. 14 Dec, 2017 (5 commits)