1. 02 Nov, 2020 1 commit
  2. 27 Dec, 2019 3 commits
  3. 26 Sep, 2018 1 commit
  4. 10 Jul, 2018 2 commits
  5. 07 Dec, 2017 1 commit
  6. 18 Oct, 2017 1 commit
  7. 17 Oct, 2017 1 commit
  8. 22 Sep, 2017 1 commit
  9. 29 Aug, 2017 1 commit
    •
      xen-netback: update ubuf_info initialization to anonymous union · cc8737a5
      Committed by Willem de Bruijn
      The xen driver initializes struct ubuf_info fields using designated
      initializers. I recently moved these fields inside a nested anonymous
      struct inside an anonymous union. I had missed this use case.
      
      This breaks compilation of xen-netback with older compilers.
      From kbuild bot with gcc-4.4.7:
      
         drivers/net//xen-netback/interface.c: In function
         'xenvif_init_queue':
         >> drivers/net//xen-netback/interface.c:554: error: unknown field 'ctx' specified in initializer
         >> drivers/net//xen-netback/interface.c:554: warning: missing braces around initializer
            drivers/net//xen-netback/interface.c:554: warning: (near initialization for '(anonymous).<anonymous>')
         >> drivers/net//xen-netback/interface.c:554: warning: initialization makes integer from pointer without a cast
         >> drivers/net//xen-netback/interface.c:555: error: unknown field 'desc' specified in initializer
      
      Add double braces around the designated initializers to match their
      nested position in the struct. After this, compilation succeeds again.
      
      Fixes: 4ab6c99d ("sock: MSG_ZEROCOPY notification coalescing")
      Reported-by: kbuild bot <lpk@intel.com>
      Signed-off-by: Willem de Bruijn <willemb@google.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cc8737a5
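The compile error above can be reproduced outside the kernel. Below is a minimal, hypothetical C sketch (the struct and names are illustrative, not the kernel's actual `struct ubuf_info`) showing the fix the commit describes: when designated-initialized fields move into a nested anonymous struct inside an anonymous union, older compilers need the initializer braces to mirror that nesting.

```c
/* Illustrative stand-in for a struct whose fields moved into a
 * nested anonymous struct inside an anonymous union. */
#include <stdint.h>

struct ubuf_like {
    void (*callback)(struct ubuf_like *);
    union {
        struct {
            uintptr_t ctx;    /* previously a top-level field */
            uintptr_t desc;
        };
        uintptr_t mmp;
    };
};

static void noop_cb(struct ubuf_like *u) { (void)u; }

/* Newer gcc accepts `.ctx = 0` directly on the anonymous members;
 * for older compilers (e.g. gcc-4.4) the initializer must carry
 * double braces matching the union-then-struct nesting: */
static struct ubuf_like make_ubuf(uintptr_t desc)
{
    struct ubuf_like u = {
        .callback = noop_cb,
        { { .ctx = 0,
            .desc = desc } },
    };
    return u;
}
```

The outer brace pair selects the anonymous union; the inner pair initializes its first member, the anonymous struct, which is where `ctx` and `desc` now live.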
  10. 22 Jun, 2017 1 commit
  11. 13 Mar, 2017 1 commit
  12. 02 Mar, 2017 1 commit
  13. 14 Feb, 2017 1 commit
  14. 31 Jan, 2017 1 commit
  15. 19 Jan, 2017 1 commit
  16. 21 Oct, 2016 1 commit
    •
      net: use core MTU range checking in virt drivers · d0c2c997
      Committed by Jarod Wilson
      hyperv_net:
      - set min/max_mtu, per Haiyang, after rndis_filter_device_add
      
      virtio_net:
      - set min/max_mtu
      - remove virtnet_change_mtu
      
      vmxnet3:
      - set min/max_mtu
      
      xen-netback:
      - min_mtu = 0, max_mtu = 65517
      
      xen-netfront:
      - min_mtu = 0, max_mtu = 65535
      
      unisys/visor:
      - clean up defines a little to not clash with network core or add
        redundant definitions
      
      CC: netdev@vger.kernel.org
      CC: virtualization@lists.linux-foundation.org
      CC: "K. Y. Srinivasan" <kys@microsoft.com>
      CC: Haiyang Zhang <haiyangz@microsoft.com>
      CC: "Michael S. Tsirkin" <mst@redhat.com>
      CC: Shrikrishna Khare <skhare@vmware.com>
      CC: "VMware, Inc." <pv-drivers@vmware.com>
      CC: Wei Liu <wei.liu2@citrix.com>
      CC: Paul Durrant <paul.durrant@citrix.com>
      CC: David Kershner <david.kershner@unisys.com>
      Signed-off-by: Jarod Wilson <jarod@redhat.com>
      Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d0c2c997
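The idea of core MTU range checking can be sketched in a few lines. This is a hypothetical, self-contained model (the struct and function names are illustrative, not the kernel's exact API): each driver only publishes `min_mtu`/`max_mtu`, and the core rejects out-of-range requests, which is why per-driver callbacks like `virtnet_change_mtu` could be removed.

```c
/* Simplified model of a device that advertises an MTU range. */
struct net_device_sketch {
    unsigned int mtu;
    unsigned int min_mtu;   /* xen-netback above advertises 0 */
    unsigned int max_mtu;   /* xen-netback above advertises 65517 */
};

/* Core-side check: returns 0 on success, -1 when the requested
 * value falls outside the driver's advertised range. */
static int dev_set_mtu_sketch(struct net_device_sketch *dev,
                              unsigned int new_mtu)
{
    if (new_mtu < dev->min_mtu || new_mtu > dev->max_mtu)
        return -1;
    dev->mtu = new_mtu;
    return 0;
}

/* Demonstration helper using the xen-netback range from the log. */
static int try_mtu(unsigned int new_mtu)
{
    struct net_device_sketch dev = {
        .mtu = 1500, .min_mtu = 0, .max_mtu = 65517,
    };
    return dev_set_mtu_sketch(&dev, new_mtu);
}
```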
  17. 08 Oct, 2016 1 commit
    •
      xen-netback: make sure that hashes are not sent to unaware frontends · 912e27e8
      Committed by Paul Durrant
      In the case when a frontend only negotiates a single queue with xen-
      netback it is possible for a skbuff with a s/w hash to result in a
      hash extra_info segment being sent to the frontend even when no hash
      algorithm has been configured. (The ndo_select_queue() entry point makes
      sure the hash is not set if no algorithm is configured, but this entry
      point is not called when there is only a single queue). This can result
      in a frontend that is unable to handle extra_info segments being given
      such a segment, causing it to crash.
      
      This patch fixes the problem by clearing the hash in ndo_start_xmit()
      instead, which is clearly guaranteed to be called irrespective of the
      number of queues.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      912e27e8
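The fix's logic can be modeled in a few lines of plain C. This is an illustrative sketch only (the types and names are invented, not xen-netback's real ones): because the transmit path runs for every packet regardless of queue count, clearing the software hash there guarantees a frontend that never negotiated a hash algorithm never sees one.

```c
#include <stdbool.h>
#include <stdint.h>

struct skb_sketch {
    uint32_t hash;      /* software-computed flow hash */
    bool     sw_hash;
};

struct vif_sketch {
    int hash_alg;       /* 0: no hash algorithm configured */
};

/* Runs unconditionally on transmit, unlike the select-queue hook,
 * which is skipped when there is only a single queue. */
static void start_xmit_sketch(struct vif_sketch *vif,
                              struct skb_sketch *skb)
{
    if (vif->hash_alg == 0) {
        skb->hash = 0;      /* never leak a hash to an unaware frontend */
        skb->sw_hash = false;
    }
    /* ... hand the packet to the chosen queue ... */
}

/* Demonstration helper: the hash value the frontend would observe. */
static uint32_t hash_after_xmit(int hash_alg, uint32_t sw_hash_val)
{
    struct vif_sketch vif = { .hash_alg = hash_alg };
    struct skb_sketch skb = { .hash = sw_hash_val, .sw_hash = true };
    start_xmit_sketch(&vif, &skb);
    return skb.hash;
}
```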
  18. 07 Oct, 2016 2 commits
  19. 22 Sep, 2016 1 commit
  20. 21 May, 2016 1 commit
  21. 17 May, 2016 3 commits
  22. 16 Jan, 2016 2 commits
  23. 03 Sep, 2015 1 commit
    •
      xen-netback: add support for multicast control · 210c34dc
      Committed by Paul Durrant
      Xen's PV network protocol includes messages to add/remove ethernet
      multicast addresses to/from a filter list in the backend. This allows
      the frontend to request the backend only forward multicast packets
      which are of interest thus preventing unnecessary noise on the shared
      ring.
      
      The canonical netif header in git://xenbits.xen.org/xen.git specifies
      the message format (two more XEN_NETIF_EXTRA_TYPEs) so the minimal
      necessary changes have been pulled into include/xen/interface/io/netif.h.
      
      To prevent the frontend from extending the multicast filter list
      arbitrarily a limit (XEN_NETBK_MCAST_MAX) has been set to 64 entries.
      This limit is not specified by the protocol and so may change in future.
      If the limit is reached then the next XEN_NETIF_EXTRA_TYPE_MCAST_ADD
      sent by the frontend will be failed with NETIF_RSP_ERROR.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      210c34dc
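The bounded filter list described above can be sketched as follows. This is a simplified, hypothetical model (a flat array instead of the backend's real list handling, and invented helper names): adds succeed until XEN_NETBK_MCAST_MAX entries are present, after which further adds fail, mapping to NETIF_RSP_ERROR on the ring.

```c
#include <string.h>

#define XEN_NETBK_MCAST_MAX 64   /* limit from the commit message */
#define ETH_ALEN 6

struct mcast_filter {
    unsigned char addr[XEN_NETBK_MCAST_MAX][ETH_ALEN];
    int count;
};

/* Returns 0 on success, -1 (maps to NETIF_RSP_ERROR) when the
 * frontend has already filled the filter list. */
static int mcast_add(struct mcast_filter *f,
                     const unsigned char mac[ETH_ALEN])
{
    if (f->count >= XEN_NETBK_MCAST_MAX)
        return -1;
    memcpy(f->addr[f->count++], mac, ETH_ALEN);
    return 0;
}

/* Demonstration helper: how many of n add requests are accepted. */
static int mcast_accepted(int attempts)
{
    struct mcast_filter f = { .count = 0 };
    const unsigned char mac[ETH_ALEN] = { 0x01, 0x00, 0x5e, 0, 0, 1 };
    int ok = 0;
    for (int i = 0; i < attempts; i++)
        if (mcast_add(&f, mac) == 0)
            ok++;
    return ok;
}
```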
  24. 07 Aug, 2015 1 commit
  25. 21 Mar, 2015 1 commit
  26. 06 Mar, 2015 1 commit
  27. 04 Mar, 2015 1 commit
  28. 06 Feb, 2015 1 commit
  29. 03 Feb, 2015 1 commit
  30. 28 Jan, 2015 1 commit
  31. 19 Dec, 2014 1 commit
  32. 30 Oct, 2014 1 commit
  33. 26 Oct, 2014 1 commit
    •
      xen-netback: reintroduce guest Rx stall detection · ecf08d2d
      Committed by David Vrabel
      If a frontend is not receiving packets, it is useful to detect this and
      turn off the carrier so packets are dropped early instead of being
      queued and drained when they expire.
      
      A to-guest queue is stalled if it doesn't have enough free slots for
      an extended period of time (default 60 s).
      
      If at least one queue is stalled, the carrier is turned off (in the
      expectation that the other queues will soon stall as well).  The
      carrier is only turned on once all queues are ready.
      
      When the frontend connects, all the queues start in the stalled state
      and only become ready once the frontend queues enough Rx requests.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ecf08d2d
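The carrier policy described in this commit can be sketched as a small predicate. This is an illustrative model with invented names, and the timing side (the 60 s stall timeout) is elided: the carrier is off if at least one queue is stalled, and comes back only when every queue is ready.

```c
#include <stdbool.h>

enum queue_state { QUEUE_STALLED, QUEUE_READY };

/* Carrier is on only when all queues are ready; a single stalled
 * queue turns it off, on the expectation that the others will soon
 * stall as well. */
static bool carrier_ok(const enum queue_state *queues, int num_queues)
{
    for (int i = 0; i < num_queues; i++)
        if (queues[i] == QUEUE_STALLED)
            return false;
    return true;
}

/* Demonstration helper: queues given as a stalled-bitmask
 * (bit i set means queue i is stalled), up to 32 queues. */
static bool carrier_ok_mask(unsigned stalled_bits, int num_queues)
{
    enum queue_state q[32];
    for (int i = 0; i < num_queues; i++)
        q[i] = (stalled_bits >> i) & 1u ? QUEUE_STALLED : QUEUE_READY;
    return carrier_ok(q, num_queues);
}
```

Note how this matches the connect behavior described above: with all queues starting stalled (all bits set), the carrier stays off until the frontend posts enough Rx requests to mark every queue ready.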