1. 17 May 2016, 4 commits
  2. 13 May 2016, 1 commit
  3. 14 March 2016, 1 commit
  4. 16 January 2016, 1 commit
  5. 18 December 2015, 2 commits
  6. 23 October 2015, 2 commits
  7. 11 September 2015, 1 commit
  8. 10 September 2015, 1 commit
    • xen-netback: require fewer guest Rx slots when not using GSO · 1d5d4852
      Committed by David Vrabel
      Commit f48da8b1 (xen-netback: fix
      unlimited guest Rx internal queue and carrier flapping) introduced a
      regression.
      
      The PV frontend in IPXE only places 4 requests on the guest Rx ring.
      Since netback required at least (MAX_SKB_FRAGS + 1) slots, IPXE could
      not receive any packets.
      
      a) If GSO is not enabled on the VIF, fewer guest Rx slots are required
         for the largest possible packet.  Calculate the required slots
         based on the maximum GSO size or the MTU (a sketch of this
         calculation follows this entry).

         This calculation of the number of required slots relies on
         1650d545 (xen-netback: always fully coalesce guest Rx packets),
         which is present in 4.0-rc1 and later.
      
      b) Reduce the Rx stall detection to checking for at least one
         available Rx request.  This is fine since we're predominantly
         concerned with detecting interfaces which are down and thus have
         zero available Rx requests.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
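      A minimal, standalone C sketch of the slot calculation described in (a);
      the names and constants (rx_slots_needed, PAGE_SIZE) are illustrative
      placeholders, not the actual netback code:

          #include <stdio.h>

          #define PAGE_SIZE 4096u
          #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

          /* Slots needed for the largest possible packet: bounded by the
           * maximum GSO size when GSO is enabled, otherwise by the MTU. */
          static unsigned rx_slots_needed(int gso_enabled,
                                          unsigned gso_max_size, unsigned mtu)
          {
              unsigned max_len = gso_enabled ? gso_max_size : mtu;
              return DIV_ROUND_UP(max_len, PAGE_SIZE);
          }

          int main(void)
          {
              /* With GSO a 64 KiB packet needs 16 slots; without GSO and a
               * 1500-byte MTU a single slot suffices. */
              printf("GSO on:  %u slots\n", rx_slots_needed(1, 65536, 1500));
              printf("GSO off: %u slots\n", rx_slots_needed(0, 65536, 1500));
              return 0;
          }

      With a 1500-byte MTU this sketch yields a single required slot, which
      illustrates why a frontend such as IPXE that posts only 4 Rx requests
      can receive packets again.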
  9. 09 September 2015, 1 commit
  10. 03 September 2015, 1 commit
    • xen-netback: add support for multicast control · 210c34dc
      Committed by Paul Durrant
      Xen's PV network protocol includes messages to add/remove ethernet
      multicast addresses to/from a filter list in the backend.  This allows
      the frontend to request that the backend only forward multicast packets
      which are of interest, thus preventing unnecessary noise on the shared
      ring.
      
      The canonical netif header in git://xenbits.xen.org/xen.git specifies
      the message format (two more XEN_NETIF_EXTRA_TYPEs) so the minimal
      necessary changes have been pulled into include/xen/interface/io/netif.h.
      
      To prevent the frontend from extending the multicast filter list
      arbitrarily, a limit (XEN_NETBK_MCAST_MAX) has been set to 64 entries.
      This limit is not specified by the protocol and so may change in future.
      If the limit is reached then the next XEN_NETIF_EXTRA_TYPE_MCAST_ADD
      sent by the frontend will be failed with NETIF_RSP_ERROR.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
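      A rough sketch of the bounded filter-list behaviour described above,
      using a hypothetical mcast_filter structure and an MCAST_MAX constant
      standing in for XEN_NETBK_MCAST_MAX; an add beyond the limit fails,
      mirroring the NETIF_RSP_ERROR response:

          #include <stdbool.h>
          #include <string.h>

          #define ETH_ALEN   6
          #define MCAST_MAX 64    /* stands in for XEN_NETBK_MCAST_MAX */

          struct mcast_filter {
              unsigned char addr[MCAST_MAX][ETH_ALEN];
              unsigned int  count;
          };

          /* Add an address; returns false (the request would be failed with
           * NETIF_RSP_ERROR) once the list is full. */
          static bool mcast_add(struct mcast_filter *f, const unsigned char *mac)
          {
              if (f->count >= MCAST_MAX)
                  return false;
              memcpy(f->addr[f->count++], mac, ETH_ALEN);
              return true;
          }

          /* Forward a multicast packet only if its destination is in the
           * filter list, keeping uninteresting traffic off the shared ring. */
          static bool mcast_match(const struct mcast_filter *f,
                                  const unsigned char *mac)
          {
              for (unsigned int i = 0; i < f->count; i++)
                  if (!memcmp(f->addr[i], mac, ETH_ALEN))
                      return true;
              return false;
          }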
  11. 07 August 2015, 1 commit
  12. 04 August 2015, 1 commit
  13. 15 July 2015, 1 commit
  14. 22 June 2015, 2 commits
  15. 17 June 2015, 1 commit
  16. 02 June 2015, 1 commit
  17. 26 May 2015, 1 commit
  18. 15 April 2015, 1 commit
  19. 21 March 2015, 1 commit
  20. 12 March 2015, 1 commit
  21. 06 March 2015, 2 commits
  22. 25 February 2015, 1 commit
  23. 03 February 2015, 1 commit
  24. 28 January 2015, 2 commits
  25. 24 January 2015, 1 commit
  26. 19 December 2014, 1 commit
  27. 07 November 2014, 1 commit
  28. 30 October 2014, 1 commit
  29. 26 October 2014, 2 commits
    • xen-netback: reintroduce guest Rx stall detection · ecf08d2d
      Committed by David Vrabel
      If a frontend is not receiving packets, it is useful to detect this and
      turn off the carrier so packets are dropped early instead of being
      queued and drained when they expire.
      
      A to-guest queue is stalled if it doesn't have enough free slots for
      an extended period of time (default 60 s).
      
      If at least one queue is stalled, the carrier is turned off (in the
      expectation that the other queues will soon stall as well).  The
      carrier is only turned on once all queues are ready.
      
      When the frontend connects, all the queues start in the stalled state
      and only become ready once the frontend queues enough Rx requests.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
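      A simplified sketch of the stall/carrier logic described above; the
      structures and the 60-second constant are illustrative, not the actual
      netback types:

          #include <stdbool.h>
          #include <time.h>

          #define STALL_TIMEOUT_SECS 60   /* default stall timeout per the text */

          struct rx_queue {
              bool   ready;          /* enough free Rx requests available */
              time_t last_ready;     /* last time the queue was ready */
          };

          struct vif {
              struct rx_queue *queues;
              int              num_queues;
              bool             carrier_on;
          };

          /* A queue is stalled if it has not been ready for an extended period. */
          static bool queue_stalled(const struct rx_queue *q, time_t now)
          {
              return !q->ready && (now - q->last_ready) >= STALL_TIMEOUT_SECS;
          }

          /* Carrier goes down as soon as one queue stalls; it only comes back
           * up once every queue is ready again. */
          static void update_carrier(struct vif *vif, time_t now)
          {
              bool any_stalled = false, all_ready = true;

              for (int i = 0; i < vif->num_queues; i++) {
                  if (queue_stalled(&vif->queues[i], now))
                      any_stalled = true;
                  if (!vif->queues[i].ready)
                      all_ready = false;
              }

              if (vif->carrier_on && any_stalled)
                  vif->carrier_on = false;
              else if (!vif->carrier_on && all_ready)
                  vif->carrier_on = true;
          }

      On connect every queue starts with ready == false, so the carrier stays
      off until the frontend has posted enough Rx requests on all queues.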
    • xen-netback: fix unlimited guest Rx internal queue and carrier flapping · f48da8b1
      Committed by David Vrabel
      Netback needs to discard old to-guest skbs (guest Rx queue drain) and
      it needs to detect guest Rx stalls (to disable the carrier so packets
      are discarded earlier), but the current implementation is very broken.
      
      1. The check in hard_start_xmit of the slot availability did not
         consider the number of packets that were already in the guest Rx
         queue.  This could allow the queue to grow without bound.
      
         The guest stops consuming packets and the ring is allowed to fill,
         leaving S slots free.  Netback queues a packet requiring more than
         S slots (ensuring that the ring stays with S slots free).  Netback
         then queues packets indefinitely, provided they require S or fewer
         slots.
      
      2. The Rx stall detection is not triggered in this case since the
         (host) Tx queue is not stopped.
      
      3. If the Tx queue is stopped and a guest Rx interrupt occurs, netback
         will consider this an Rx purge event which may result in it taking
         the carrier down unnecessarily.  It also considers a queue with
         only 1 slot free as unstalled (even though the next packet might
         not fit in it).
      
      The internal guest Rx queue is limited by a byte length (to 512 KiB,
      enough for half the ring).  The (host) Tx queue is stopped and started
      based on this limit.  This sets an upper bound on the amount of memory
      used by packets on the internal queue.
      
      This allows the estimation of the number of slots needed for an skb to
      be removed (it wasn't a very good estimate anyway).  Instead, the guest
      Rx thread just waits for enough free slots for a maximum sized packet.
      
      skbs queued on the internal queue have an 'expires' time (set to the
      current time plus the drain timeout).  The guest Rx thread will detect
      when the skb at the head of the queue has expired and discard expired
      skbs.  This sets a clear upper bound on the length of time an skb can
      be queued for.  For a guest being destroyed, the maximum time needed to
      wait for all the packets it sent to be dropped is still the drain
      timeout (10 s), since it will not be sending new packets.
      
      Rx stall detection is reintroduced in a later commit.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
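      A standalone sketch of the two mechanisms described above (the byte-length
      limit on the internal queue and per-skb expiry); the pkt/rx_queue types
      and constants are illustrative placeholders, not the netback ones:

          #include <stdbool.h>
          #include <stddef.h>
          #include <time.h>

          #define QUEUE_BYTE_LIMIT   (512 * 1024)   /* 512 KiB, ~half the ring */
          #define DRAIN_TIMEOUT_SECS 10

          struct pkt {
              struct pkt *next;
              size_t      len;
              time_t      expires;     /* queue time + drain timeout */
          };

          struct rx_queue {
              struct pkt *head, *tail;
              size_t      queued_bytes;
              bool        tx_stopped;  /* (host) Tx queue stopped */
          };

          /* Enqueue a packet; once the byte limit is reached the host Tx queue
           * is stopped, bounding the memory held by the internal queue. */
          static void rx_enqueue(struct rx_queue *q, struct pkt *p, time_t now)
          {
              p->expires = now + DRAIN_TIMEOUT_SECS;
              p->next = NULL;
              if (q->tail)
                  q->tail->next = p;
              else
                  q->head = p;
              q->tail = p;
              q->queued_bytes += p->len;
              if (q->queued_bytes >= QUEUE_BYTE_LIMIT)
                  q->tx_stopped = true;
          }

          /* Drop expired packets from the head of the queue; the caller frees
           * the returned list.  Restart the host Tx queue once we are back
           * under the byte limit. */
          static struct pkt *rx_drop_expired(struct rx_queue *q, time_t now)
          {
              struct pkt *dropped = NULL;

              while (q->head && q->head->expires <= now) {
                  struct pkt *p = q->head;

                  q->head = p->next;
                  if (!q->head)
                      q->tail = NULL;
                  q->queued_bytes -= p->len;
                  p->next = dropped;
                  dropped = p;
              }
              if (q->queued_bytes < QUEUE_BYTE_LIMIT)
                  q->tx_stopped = false;
              return dropped;
          }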
  30. 14 August 2014, 1 commit
  31. 08 August 2014, 1 commit