1. 17 May 2016, 1 commit
  2. 25 April 2016, 1 commit
  3. 19 April 2016, 1 commit
    • hv_netvsc: Implement support for VF drivers on Hyper-V · 84bf9cef
      KY Srinivasan committed
      Support VF drivers on Hyper-V. On Hyper-V, each VF instance presented to
      the guest has an associated synthetic interface that shares the MAC address
      with the VF instance. Typically these are bonded together to support
      live migration. By default, the host delivers all the incoming packets
      on the synthetic interface. Once the VF is up, we need to explicitly switch
      the data path on the host to divert traffic onto the VF interface. Even after
      switching the data path, broadcast and multicast packets are still always
      delivered on the synthetic interface and must be injected back onto the
      VF interface (if the VF is up).
      This patch implements the necessary support in netvsc for Linux VF
      drivers.
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
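      To make the receive-side policy concrete, here is a minimal user-space C
      sketch of the behavior this message describes. The names (nic_state,
      synthetic_rx, vf_inject) are illustrative stand-ins, not the driver's
      actual symbols:

          #include <stdbool.h>
          #include <stdio.h>

          /* Toy model: the host delivers everything on the synthetic NIC until
           * the data path is switched; broadcast/multicast always arrives on
           * the synthetic NIC and is re-injected onto the VF when it is up. */
          struct nic_state {
              bool vf_up;          /* the Linux VF driver brought the VF up  */
              bool datapath_on_vf; /* host was asked to divert traffic to VF */
          };

          static void vf_inject(const char *pkt)
          {
              printf("re-injected onto VF: %s\n", pkt);
          }

          /* Called for every packet arriving on the synthetic interface. */
          static void synthetic_rx(struct nic_state *s, const char *pkt,
                                   bool bcast_or_mcast)
          {
              if (s->vf_up && s->datapath_on_vf && bcast_or_mcast) {
                  vf_inject(pkt);  /* keep the bonded pair's traffic coherent */
                  return;
              }
              printf("delivered on synthetic: %s\n", pkt);
          }

          int main(void)
          {
              struct nic_state s = { .vf_up = true, .datapath_on_vf = true };
              synthetic_rx(&s, "ARP broadcast", true);  /* re-injected onto VF  */
              s.datapath_on_vf = false;                 /* before the switch... */
              synthetic_rx(&s, "TCP segment", false);   /* ...stays synthetic   */
              return 0;
          }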
  4. 24 March 2016, 1 commit
  5. 01 March 2016, 1 commit
  6. 20 February 2016, 1 commit
  7. 13 February 2016, 1 commit
    • hv_netvsc: Restore needed_headroom request · 14a03cf8
      Vitaly Kuznetsov committed
      Commit c0eb4540 ("hv_netvsc: Don't ask for additional head room in the
      skb") got rid of the needed_headroom setting for the driver. With that
      change I hit the following issue trying to use the pktgen module:
      
      [   57.522021] kernel BUG at net/core/skbuff.c:1128!
      [   57.522021] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
      ...
      [   58.721068] Call Trace:
      [   58.721068]  [<ffffffffa0144e86>] netvsc_start_xmit+0x4c6/0x8e0 [hv_netvsc]
      ...
      [   58.721068]  [<ffffffffa02f87fc>] ? pktgen_finalize_skb+0x25c/0x2a0 [pktgen]
      [   58.721068]  [<ffffffff814f5760>] ? __netdev_alloc_skb+0xc0/0x100
      [   58.721068]  [<ffffffffa02f9907>] pktgen_thread_worker+0x257/0x1920 [pktgen]
      
      Basically, we're calling skb_cow_head(skb, RNDIS_AND_PPI_SIZE) and crashing on
          if (skb_shared(skb))
              BUG();
      
      We probably need to restore the needed_headroom setting (but shrunk to
      RNDIS_AND_PPI_SIZE, as we don't need more) to request the required
      headroom. In theory, this should not cost us any performance.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
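      Why advertising needed_headroom avoids this BUG() can be shown with a
      small self-contained C model of skb headroom; the buffer layout and the
      names (pkt_alloc, pkt_push) are simplified illustrations, not kernel APIs:

          #include <assert.h>
          #include <stdio.h>
          #include <stdlib.h>

          #define RNDIS_HDR 90  /* illustrative stand-in for RNDIS_AND_PPI_SIZE */

          struct pkt {
              unsigned char *head, *data; /* data - head == reserved headroom */
              size_t len;
          };

          /* Allocate with headroom up front, like honoring needed_headroom. */
          static struct pkt *pkt_alloc(size_t len, size_t headroom)
          {
              struct pkt *p = malloc(sizeof(*p));
              p->head = malloc(headroom + len);
              p->data = p->head + headroom;  /* cf. skb_reserve() */
              p->len = len;
              return p;
          }

          /* Prepend a header in place. If the headroom were too small, a real
           * skb would need reallocation, which skb_cow_head() must refuse for
           * a shared skb -- that is the BUG() in the trace above. */
          static unsigned char *pkt_push(struct pkt *p, size_t hdr)
          {
              assert((size_t)(p->data - p->head) >= hdr && "shared skb: BUG()");
              p->data -= hdr;
              p->len += hdr;
              return p->data;
          }

          int main(void)
          {
              struct pkt *p = pkt_alloc(1500, RNDIS_HDR);
              pkt_push(p, RNDIS_HDR);  /* succeeds: no reallocation needed */
              printf("prepended %d bytes in place\n", RNDIS_HDR);
              free(p->head);
              free(p);
              return 0;
          }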
  8. 11 February 2016, 1 commit
  9. 26 January 2016, 1 commit
  10. 03 December 2015, 14 commits
  11. 02 December 2015, 1 commit
    • hv_netvsc: rework link status change handling · 27a70af3
      Vitaly Kuznetsov committed
      There are several issues in the hv_netvsc driver with regard to link
      status change handling:
      - RNDIS_STATUS_NETWORK_CHANGE results in calling a userspace helper that
        does '/etc/init.d/network restart', which is inappropriate and broken
        for many reasons.
      - The link_watch infrastructure sends only one notification per second,
        so for e.g. paired disconnect/connect events we get a single
        notification carrying only the last status. This makes such situations
        impossible to handle in userspace.
      
      Redo link status change handling in the following way:
      - Create a list of reconfig events in network device context.
      - On a reconfig event add it to the list of events and schedule
        netvsc_link_change().
      - In netvsc_link_change() ensure 2-second delay between link status
        changes.
      - Handle RNDIS_STATUS_NETWORK_CHANGE as a paired disconnect/connect event.
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
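      The queued-event scheme above can be modeled in a few lines of user-space
      C. This is a single-threaded toy, not the driver's code: the names
      (reconfig_event, link_change_worker) and the ring buffer are invented for
      illustration, and rescheduling is simulated by the caller retrying:

          #include <stdio.h>
          #include <time.h>
          #include <unistd.h>

          #define LINKCHANGE_INT 2   /* seconds between reported changes */
          #define MAXEV 16

          enum ev { EV_LINK_UP, EV_LINK_DOWN, EV_NETWORK_CHANGE };

          static enum ev evq[MAXEV];
          static int qhead, qtail;
          static time_t last_report;

          /* Producer side: record the event; the driver would then schedule
           * the worker. */
          static void reconfig_event(enum ev e)
          {
              evq[qtail++ % MAXEV] = e;
          }

          static void report(const char *carrier)
          {
              last_report = time(NULL);
              printf("carrier %s\n", carrier);
          }

          /* Consumer side: drain events, enforcing the delay between reported
           * transitions so userspace sees every flap. */
          static void link_change_worker(void)
          {
              while (qhead != qtail) {
                  if (time(NULL) - last_report < LINKCHANGE_INT)
                      return;  /* the driver would reschedule itself here */
                  switch (evq[qhead++ % MAXEV]) {
                  case EV_LINK_UP:   report("on");  break;
                  case EV_LINK_DOWN: report("off"); break;
                  case EV_NETWORK_CHANGE:  /* treat as disconnect + connect */
                      report("off");
                      reconfig_event(EV_LINK_UP);
                      break;
                  }
              }
          }

          int main(void)
          {
              reconfig_event(EV_NETWORK_CHANGE);
              link_change_worker();   /* reports "off" immediately...  */
              sleep(LINKCHANGE_INT);
              link_change_worker();   /* ...and "on" two seconds later */
              return 0;
          }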
  12. 02 September 2015, 1 commit
  13. 19 August 2015, 1 commit
  14. 13 August 2015, 2 commits
  15. 16 July 2015, 1 commit
  16. 09 July 2015, 1 commit
  17. 31 May 2015, 1 commit
    • hv_netvsc: Properly size the vrss queues · e01ec219
      KY Srinivasan committed
      The current algorithm for deciding on the number of VRSS channels is not
      optimal, since we open the minimum of the number of online CPUs and the
      number of VRSS channels the host is offering. So on a 32-VCPU guest we
      could potentially open 32 VRSS subchannels. Experimentation has shown
      that it is best to limit the number of VRSS channels to the number of
      CPUs within a NUMA node.
      
      Here is the new algorithm for deciding on the number of sub-channels we
      would open up:
              1) Pick the minimum of what the host is offering and what the driver
                 in the guest is specifying as the default value.
        2) Pick the minimum of (1) and the number of CPUs in the NUMA
           node the primary channel is bound to.
      Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
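      The policy reduces to two min() operations. A small self-contained C
      sketch (the function names and the sample numbers are illustrative, not
      taken from the driver):

          #include <stdio.h>

          static unsigned int min_u(unsigned int a, unsigned int b)
          {
              return a < b ? a : b;
          }

          static unsigned int vrss_channels(unsigned int host_offered,
                                            unsigned int driver_default,
                                            unsigned int cpus_in_node)
          {
              /* 1) min(host offer, driver default); 2) cap by the number of
               * CPUs in the NUMA node the primary channel is bound to. */
              return min_u(min_u(host_offered, driver_default), cpus_in_node);
          }

          int main(void)
          {
              /* e.g. a 32-VCPU guest with 8 CPUs per NUMA node: open 8
               * subchannels instead of 32. */
              printf("%u\n", vrss_channels(32, 8, 8));
              return 0;
          }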
  18. 18 May 2015, 1 commit
  19. 15 May 2015, 1 commit
  20. 14 May 2015, 1 commit
  21. 30 April 2015, 2 commits
  22. 15 April 2015, 1 commit
  23. 09 April 2015, 2 commits
  24. 08 April 2015, 1 commit