1. 06 Aug 2014, 2 commits
    • xen-netback: Turn off the carrier if the guest is not able to receive · f34a4cf9
      Committed by Zoltan Kiss
      Currently, when the guest is not able to receive more packets, the qdisc layer
      starts a timer, and when it fires, the qdisc is restarted to try delivering a
      packet again. This is a very slow way to drain the queues, consumes unnecessary
      resources and slows down the shutdown of other guests.
      This patch changes the behaviour by turning the carrier off when that timer
      fires, so all the packets that were stuck waiting for that vif are freed.
      Instead of the rx_queue_purge bool it uses the VIF_STATUS_RX_PURGE_EVENT bit to
      signal the thread that either the timeout happened or an RX interrupt arrived,
      so the thread can check what it should do. It also disables NAPI, so the guest
      can't transmit, but leaves the interrupts on, so the vif can be resurrected.
      Only the queues which brought the interface down can enable it again; the
      QUEUE_STATUS_RX_STALLED bit makes sure of that. (A hedged sketch of this
      handshake follows this entry.)
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f34a4cf9
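      A minimal kernel-style sketch of the handshake described above, under a
      simplified vif/queue layout: the bit names VIF_STATUS_RX_PURGE_EVENT and
      QUEUE_STATUS_RX_STALLED come from the commit message, while the demo_*
      structs and helpers are illustrative assumptions, not the actual
      xen-netback code.

      #include <linux/bitops.h>
      #include <linux/netdevice.h>

      enum state_bit_shift {
          VIF_STATUS_RX_PURGE_EVENT,  /* drain timer fired (or an RX interrupt arrived) */
      };

      enum queue_state_bit_shift {
          QUEUE_STATUS_RX_STALLED,    /* this queue brought the carrier down */
      };

      struct demo_vif {
          unsigned long status;
          struct net_device *dev;
      };

      struct demo_queue {
          unsigned long status;
          struct napi_struct napi;
          struct demo_vif *vif;
      };

      /* Called when the RX drain timer fires; the real code also sets this bit
       * from the RX interrupt so the thread can tell the two cases apart. */
      static void demo_signal_purge(struct demo_vif *vif)
      {
          set_bit(VIF_STATUS_RX_PURGE_EVENT, &vif->status);
          /* waking up the per-queue kthread would follow here */
      }

      /* Called from the per-queue kthread once it notices the event bit;
       * simplified here to always treat the event as a timeout. */
      static void demo_handle_purge(struct demo_queue *queue)
      {
          if (!test_and_clear_bit(VIF_STATUS_RX_PURGE_EVENT, &queue->vif->status))
              return;

          /* The guest cannot receive: drop the carrier so the stuck packets are
           * freed, disable NAPI so the guest can't transmit, but leave interrupts
           * on so the vif can be resurrected later. */
          netif_carrier_off(queue->vif->dev);
          napi_disable(&queue->napi);
          set_bit(QUEUE_STATUS_RX_STALLED, &queue->status);
      }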
    • xen-netback: Using a new state bit instead of carrier · 3d1af1df
      Committed by Zoltan Kiss
      This patch introduces a new state bit, VIF_STATUS_CONNECTED, to track whether
      the vif is in a connected state (see the sketch after this entry). Using the
      carrier for this will not work with the next patch in this series, which aims
      to turn the carrier off temporarily if the guest doesn't seem to be able to
      receive packets.
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      
      v2:
      - rename the bitshift type to "enum state_bit_shift" here, not in the next patch
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3d1af1df
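      A minimal sketch of that state bit, for illustration only: VIF_STATUS_CONNECTED
      and the "enum state_bit_shift" name are taken from the commit message, while
      the demo_vif struct and the helpers are assumptions rather than the real
      xenvif code.

      #include <linux/bitops.h>
      #include <linux/types.h>

      enum state_bit_shift {
          VIF_STATUS_CONNECTED,   /* vif is connected to its frontend */
      };

      struct demo_vif {
          unsigned long status;   /* bit field indexed by enum state_bit_shift */
      };

      /* Checked instead of the carrier, so a later patch can turn the carrier
       * off temporarily without that looking like a disconnect. */
      static inline bool demo_vif_is_connected(const struct demo_vif *vif)
      {
          return test_bit(VIF_STATUS_CONNECTED, &vif->status);
      }

      static inline void demo_vif_set_connected(struct demo_vif *vif, bool up)
      {
          if (up)
              set_bit(VIF_STATUS_CONNECTED, &vif->status);
          else
              clear_bit(VIF_STATUS_CONNECTED, &vif->status);
      }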
  2. 21 Jul 2014, 4 commits
  3. 09 Jul 2014, 1 commit
    • xen-netback: Adding debugfs "io_ring_qX" files · f51de243
      Committed by Zoltan Kiss
      This patch adds debugfs capabilities to netback. There used to be a similar
      patch floating around for the classic kernel, but it used procfs. This one is
      based on a very similar blkback patch.
      It creates xen-netback/[vifname]/io_ring_q[queueno] files; reading them outputs
      various ring variables etc. Writing "kick" into a file imitates an interrupt,
      which can be useful to check whether the ring is just stalled due to a missed
      interrupt. (A hedged sketch of such a file follows this entry.)
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Cc: netdev@vger.kernel.org
      Cc: linux-kernel@vger.kernel.org
      Cc: xen-devel@lists.xenproject.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f51de243
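      A hedged sketch of how such a per-queue debugfs file can be created. The
      "io_ring_q%u" name matches the commit; the dumped ring variable, the use of
      DEFINE_SHOW_ATTRIBUTE, and the omission of the "kick" write handler are
      illustrative assumptions, not the exact netback implementation.

      #include <linux/debugfs.h>
      #include <linux/kernel.h>
      #include <linux/module.h>
      #include <linux/seq_file.h>

      struct demo_queue {
          unsigned int id;
          unsigned int rx_req_cons;   /* example ring variable to dump */
      };

      static int demo_io_ring_show(struct seq_file *m, void *unused)
      {
          struct demo_queue *queue = m->private;

          seq_printf(m, "rx req_cons = %u\n", queue->rx_req_cons);
          return 0;
      }
      DEFINE_SHOW_ATTRIBUTE(demo_io_ring);

      static void demo_debugfs_add_queue(struct dentry *vif_dir,
                                         struct demo_queue *queue)
      {
          char name[32];

          snprintf(name, sizeof(name), "io_ring_q%u", queue->id);
          /* The real file also accepts "kick" on write to imitate an interrupt;
           * that write handler is left out of this read-only sketch. */
          debugfs_create_file(name, 0400, vif_dir, queue, &demo_io_ring_fops);
      }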
  4. 06 Jun 2014, 1 commit
    • xen-netback: Fix handling of skbs requiring too many slots · 59ae9fc6
      Committed by Zoltan Kiss
      A recent commit (a02eb4 "xen-netback: worse-case estimate in xenvif_rx_action is
      underestimating") capped the slot estimation to MAX_SKB_FRAGS, but that triggers
      the next BUG_ON a few lines down, as the packet consumes more slots than
      estimated.
      This patch introduces full_coalesce on the skb callback buffer, which is used in
      start_new_rx_buffer() to decide whether netback needs to coalesce more
      aggressively. With that, no packet should need more than
      (XEN_NETIF_MAX_TX_SIZE + 1) / PAGE_SIZE data slots (excluding the optional GSO
      slot, which carries no data and is therefore irrelevant here), as the provided
      buffers are fully utilized. (The arithmetic behind that bound is sketched after
      this entry.)
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Cc: Paul Durrant <paul.durrant@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Paul Durrant <paul.durrant@gmail.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      59ae9fc6
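      With the usual constants (XEN_NETIF_MAX_TX_SIZE = 0xFFFF, 4 KiB pages) the
      bound quoted above works out to 16 data slots. A quick user-space check of
      that arithmetic, with the constants hard-coded as assumptions:

      #include <stdio.h>

      #define XEN_NETIF_MAX_TX_SIZE 0xFFFFu   /* largest packet a frontend may send */
      #define DEMO_PAGE_SIZE        4096u

      int main(void)
      {
          /* With full coalescing every data slot carries a whole page, so the
           * worst case is the maximum packet size rounded up to whole pages. */
          unsigned int max_data_slots =
              (XEN_NETIF_MAX_TX_SIZE + 1) / DEMO_PAGE_SIZE;

          printf("max data slots per packet (GSO slot excluded): %u\n",
                 max_data_slots);   /* prints 16 */
          return 0;
      }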
  5. 05 Jun 2014, 2 commits
  6. 17 May 2014, 1 commit
  7. 16 May 2014, 1 commit
    • xen-netback: Fix grant ref resolution in RX path · 58375744
      Committed by Zoltan Kiss
      The original series for reintroducing grant mapping for netback had a patch [1]
      to handle receiving packets from another VIF. Grant copy on the receiving side
      needs the grant ref of the page to set up the op.
      The original patch assumed (wrongly) that the frags array hadn't changed. In
      the case reported by Sander, the sending guest sent a packet where the linear
      buffer and the first frag were under PKT_PROT_LEN (=128) bytes.
      xenvif_tx_submit() then pulled the linear area up to 128 bytes and ditched the
      first frag. The receiving side therefore had an off-by-one problem when
      gathering the grant refs.
      This patch fixes that by checking whether the actual frag's page pointer is the
      same as the page in the original frag list. It can handle any kind of change to
      the original frags array, such as:
      - removing granted frags from the array at any point
      - adding local pages to the frags list anywhere
      - reordering the frags
      It is optimized for the most common case, where there is a 1:1 relation between
      the frags and the list, and it also works well when frags are removed from the
      end or the beginning. (A small model of this lookup follows this entry.)
      
      [1]: 3e2234: xen-netback: Handle foreign mapped pages on the guest RX path
      Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      58375744
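      A small user-space model of the lookup described above: for each frag of the
      skb being forwarded, find the entry in the originally recorded (foreign) frag
      array whose page pointer matches, starting from where the previous match was
      found so the common 1:1 case costs one comparison per frag. The types and
      names are stand-ins, not the xen-netback structures.

      #include <stddef.h>
      #include <stdio.h>

      struct foreign_frag {
          const void *page;    /* page pointer recorded before xenvif_tx_submit() */
          unsigned int gref;   /* grant reference for the grant-copy op */
      };

      /* Returns the grant ref for 'page', or -1 if the page is a local one added
       * after the foreign array was recorded. 'hint' remembers where the last
       * match was found, so in-order frags match on the first probe even when
       * earlier entries were pulled or the array was reordered. */
      static int lookup_gref(const struct foreign_frag *orig, size_t n,
                             const void *page, size_t *hint)
      {
          for (size_t probed = 0; probed < n; probed++) {
              size_t i = (*hint + probed) % n;

              if (orig[i].page == page) {
                  *hint = (i + 1) % n;
                  return (int)orig[i].gref;
              }
          }
          return -1;
      }

      int main(void)
      {
          char a, b, c;
          struct foreign_frag orig[] = { { &a, 10 }, { &b, 11 }, { &c, 12 } };
          size_t hint = 0;

          /* The sender's first frag was pulled into the linear area, so the skb
           * now starts at what used to be the second foreign frag. */
          printf("%d\n", lookup_gref(orig, 3, &b, &hint));   /* 11 */
          printf("%d\n", lookup_gref(orig, 3, &c, &hint));   /* 12 */
          return 0;
      }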
  8. 04 Apr 2014, 3 commits
  9. 02 Apr 2014, 1 commit
    • xen-netback: disable rogue vif in kthread context · e9d8b2c2
      Committed by Wei Liu
      When netback discovers that the frontend is sending malformed packets, it
      disables the interface which serves that frontend.
      
      However, disabling a network interface involves taking a mutex, which cannot
      be done in softirq context, so we need to defer this work to kthread context.
      
      This patch does the following:
      1. introduce a flag to indicate the interface is disabled.
      2. check that flag in the TX path; don't do any work if it's true.
      3. check that flag in the RX path; turn off the interface if it's true.
      
      The reason to disable it in the RX path is that RX runs in a kthread. After
      this change the behavior of netback is still consistent -- it won't do any
      TX work for a rogue frontend, and the interface will eventually be turned
      off. (A hedged sketch of this split follows this entry.)
      
      Also change a "continue" to "break" after xenvif_fatal_tx_err, as it doesn't
      make sense to continue processing packets if the frontend is rogue.
      
      This is a fix for XSA-90.
      
      This is a fix for XSA-90.
      Reported-by: Török Edwin <edwin@etorok.net>
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e9d8b2c2
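      A hedged sketch of the split described above: the TX (softirq) path only
      checks a flag, and the sleepable RX kthread performs the actual shutdown.
      The field and function names are assumptions, and rtnl_lock()/dev_close()
      merely stand in for "taking a mutex"; this is not the code the patch
      actually adds.

      #include <linux/compiler.h>
      #include <linux/netdevice.h>
      #include <linux/rtnetlink.h>

      struct demo_vif {
          bool disabled;            /* set when a malformed packet is seen */
          struct net_device *dev;
      };

      /* TX path, softirq/NAPI context: must not sleep, so just refuse to do any
       * further work for a rogue frontend. */
      static bool demo_tx_work_allowed(const struct demo_vif *vif)
      {
          return !READ_ONCE(vif->disabled);
      }

      /* The RX path runs in the per-vif kthread, where sleeping is allowed, so
       * the interface is actually brought down here. */
      static void demo_rx_check_rogue(struct demo_vif *vif)
      {
          if (!READ_ONCE(vif->disabled))
              return;

          rtnl_lock();              /* a mutex softirq context may not take */
          dev_close(vif->dev);
          rtnl_unlock();
      }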
  10. 30 Mar 2014, 3 commits
  11. 27 Mar 2014, 3 commits
  12. 26 Mar 2014, 2 commits
  13. 11 Mar 2014, 1 commit
  14. 08 Mar 2014, 9 commits
  15. 06 Feb 2014, 1 commit
    • xen-netback: Fix Rx stall due to race condition · 9ab9831b
      Committed by Zoltan Kiss
      The recent patch to fix receive side flow control
      (11b57f90: xen-netback: stop vif thread spinning if frontend is unresponsive)
      solved the spinning thread problem, but it caused another one. The receive side
      can stall if:
      - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
      - [INTERRUPT] an interrupt happens and sets rx_event to true
      - [THREAD] then xenvif_kthread sets rx_event to false
      - [THREAD] rx_work_todo doesn't return true anymore
      
      Also, if an interrupt was sent but there is still no room in the ring, it takes
      quite a long time until xenvif_rx_action realizes it. This patch ditches those
      two variables and reworks rx_work_todo. If the thread finds it can't fit more
      skbs into the ring, it saves the last slot estimate into rx_last_skb_slots;
      otherwise it is kept at 0. Then rx_work_todo checks whether:
      - there is something to send to the ring (like before)
      - there is space for the topmost packet in the queue
      
      I think that's a more natural and optimal thing to test than two bools which
      are set somewhere else. (A sketch of the reworked condition follows this entry.)
      Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
      Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9ab9831b
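      A small sketch of the reworked wait condition, with plain counters standing
      in for the skb queue and the shared-ring indices. rx_last_skb_slots follows
      the commit message; the rest of the layout is an assumption rather than the
      actual xenvif code.

      #include <stdbool.h>
      #include <stdio.h>

      struct demo_vif {
          unsigned int rx_queue_len;        /* stand-in for skb_queue_len(&rx_queue) */
          unsigned int rx_last_skb_slots;   /* slots the stalled topmost packet needs, 0 if none */
          unsigned int req_prod;            /* requests the frontend has posted */
          unsigned int req_cons;            /* requests netback has consumed */
      };

      static bool demo_ring_slots_available(const struct demo_vif *vif,
                                            unsigned int needed)
      {
          return vif->req_prod - vif->req_cons >= needed;
      }

      /* The receive thread has work only if something is queued AND the ring has
       * room for the packet that last failed to fit. */
      static bool demo_rx_work_todo(const struct demo_vif *vif)
      {
          if (vif->rx_queue_len == 0)
              return false;                 /* nothing to send to the ring */
          if (vif->rx_last_skb_slots == 0)
              return true;                  /* no stall recorded */
          return demo_ring_slots_available(vif, vif->rx_last_skb_slots);
      }

      int main(void)
      {
          struct demo_vif vif = { .rx_queue_len = 1, .rx_last_skb_slots = 4,
                                  .req_prod = 10, .req_cons = 8 };

          /* Only 2 slots are free but 4 are needed: the thread keeps sleeping. */
          printf("work todo: %d\n", demo_rx_work_todo(&vif));   /* 0 */
          return 0;
      }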
  16. 15 Jan 2014, 1 commit
  17. 10 Jan 2014, 1 commit
    • xen-netback: stop vif thread spinning if frontend is unresponsive · 11b57f90
      Committed by Paul Durrant
      The recent patch to improve guest receive side flow control (ca2f09f2) had a
      slight flaw in the wait condition for the vif thread: any remaining skbs in
      netback's internal guest-receive-side queue would prevent the thread from
      sleeping. An unresponsive frontend can lead to a permanently non-empty internal
      queue, and thus the thread will spin. In this case the thread should really
      sleep until the frontend becomes responsive again.
      
      This patch adds an extra flag to the vif which is set if the shared ring is
      full and cleared when skbs are drained into the shared ring. Thus, if the
      thread runs, finds the shared ring full and can make no progress, the flag
      remains set. If the flag remains set then the thread will sleep, regardless of
      a non-empty queue, until the next event from the frontend. (A sketch of this
      wait condition follows this entry.)
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      11b57f90
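      A sketch of the wait condition with the extra flag described above (this is
      the scheme that commit 9ab9831b further up reworks). The field names are
      illustrative stand-ins, not the actual xenvif fields.

      #include <stdbool.h>
      #include <stdio.h>

      struct demo_vif {
          bool rx_queue_empty;   /* stand-in for skb_queue_empty(&rx_queue) */
          bool rx_ring_full;     /* set when no progress was possible, cleared on drain */
          bool rx_event;         /* set by the interrupt handler on a frontend event */
      };

      /* A non-empty internal queue alone no longer wakes the thread: while the
       * shared ring stays full it sleeps until the next frontend event. */
      static bool demo_rx_work_todo(const struct demo_vif *vif)
      {
          return (!vif->rx_queue_empty && !vif->rx_ring_full) || vif->rx_event;
      }

      int main(void)
      {
          struct demo_vif vif = { .rx_queue_empty = false, .rx_ring_full = true,
                                  .rx_event = false };

          printf("work todo: %d\n", demo_rx_work_todo(&vif));   /* 0: thread sleeps */
          return 0;
      }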
  18. 30 Dec 2013, 1 commit
    • xen-netback: fix guest-receive-side array sizes · ac3d5ac2
      Committed by Paul Durrant
      The sizes chosen for the metadata and grant_copy_op arrays on the guest
      receive side are wrong:
      
      - The meta array is needlessly twice the ring size, when we only ever
        consume a single array element per RX ring slot.
      - The grant_copy_op array is way too small. It's sized based on a bogus
        assumption: that at most two copy ops will be used per ring slot. This
        may have been true at some point in the past, but it's clear from looking
        at start_new_rx_buffer() that a new ring slot is only consumed if a frag
        would overflow the current slot (plus some other conditions), so the actual
        limit is MAX_SKB_FRAGS grant_copy_ops per ring slot.
      
      This patch fixes those two sizing issues and, because the grant_copy_op array
      grows so much, pulls it out into a separate chunk of vmalloc()ed memory. (A
      hedged sketch of the new sizing follows this entry.)
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      ac3d5ac2
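      A hedged sketch of the sizing described above: one meta entry per RX ring
      slot, up to MAX_SKB_FRAGS copy ops per slot, and the now much larger copy-op
      array allocated separately with vmalloc(). The ring-size constant, the
      stand-in copy-op struct and the field names are assumptions, not the actual
      netback definitions.

      #include <linux/errno.h>
      #include <linux/vmalloc.h>

      #define DEMO_MAX_SKB_FRAGS 17    /* stand-in for MAX_SKB_FRAGS */
      #define DEMO_RX_RING_SIZE  256   /* stand-in for the RX ring size */

      struct demo_rx_meta {
          int id;
          int gso_size;
      };

      struct demo_copy_op {            /* stand-in for struct gnttab_copy */
          unsigned long source, dest;
          unsigned int len;
      };

      struct demo_vif {
          /* One meta entry per ring slot: each RX slot consumes exactly one. */
          struct demo_rx_meta meta[DEMO_RX_RING_SIZE];
          /* Up to MAX_SKB_FRAGS copy ops per slot, so the array is large and
           * lives in vmalloc()ed memory instead of a fixed array in this struct. */
          struct demo_copy_op *grant_copy_op;
      };

      static int demo_alloc_copy_ops(struct demo_vif *vif)
      {
          vif->grant_copy_op = vzalloc(sizeof(*vif->grant_copy_op) *
                                       DEMO_MAX_SKB_FRAGS * DEMO_RX_RING_SIZE);
          return vif->grant_copy_op ? 0 : -ENOMEM;
      }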
  19. 20 Dec 2013, 2 commits