1. 06 December 2013, 1 commit
  2. 29 November 2013, 1 commit
  3. 29 October 2013, 1 commit
  4. 18 October 2013, 3 commits
  5. 09 October 2013, 1 commit
  6. 01 October 2013, 1 commit
    • xen-netback: improve ring efficiency for guest RX · 4f0581d2
      Committed by Wei Liu
      There was a bug where the netback routines netbk/xenvif_skb_count_slots and
      netbk/xenvif_gop_frag_copy disagreed with each other, which caused
      netback to push the wrong number of responses to netfront, which caused
      netfront to eventually crash. The bug was fixed in 6e43fc04
      ("xen-netback: count number required slots for an skb more carefully").
      
      Commit 6e43fc04 focused on backportability. The drawback of the
      existing packing scheme is that the ring is not used efficiently, as
      stated in 6e43fc04.
      
      skb->data like:
          |        1111|222222222222|3333        |
      
      is arranged as:
          |1111        |222222222222|3333        |
      
      If we can do this:
          |111122222222|22223333    |
      That would save one ring slot, which improves ring efficiency.
      
      This patch effectively reverts 6e43fc04. That patch made count_slots
      agree with gop_frag_copy, while this patch goes the other way around --
      making gop_frag_copy agree with count_slots. The end result is that they
      still agree with each other, and the ring is now arranged like:
          |111122222222|22223333    |
      
      The patch that improves packing was first posted by Xi Xiong and Matt
      Wilson. I only rebased it on top of net-next and rewrote the commit message,
      so I retain all their SoBs. For more information about the original bug
      please refer to the email listed below and the commit message of 6e43fc04
      (see also the sketch after this entry).
      
      Original patch:
      http://lists.xen.org/archives/html/xen-devel/2013-07/msg00760.html
      Signed-off-by: Xi Xiong <xixiong@amazon.com>
      Reviewed-by: Matt Wilson <msw@amazon.com>
      [ msw: minor code cleanups, rewrote commit message, adjusted code
        to count RX slots instead of meta structures ]
      Signed-off-by: Matt Wilson <msw@amazon.com>
      Cc: Annie Li <annie.li@oracle.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      [ liuw: rebased on top of net-next tree, rewrote commit message, coding
        style cleanup. ]
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4f0581d2
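      A minimal standalone C sketch of the packed-slot arithmetic described
      above; the SLOT_SIZE value, the packed_slot_count() helper and the
      example layout are illustrative assumptions, not netback code:

          #include <stdio.h>

          #define SLOT_SIZE 4096u  /* assume one ring slot holds a page of data */

          /* With fully packed slots, the counting pass and the copying pass can
           * share one trivial rule: total bytes rounded up to the slot size. */
          static unsigned int packed_slot_count(unsigned long headlen,
                                                const unsigned long *frags,
                                                unsigned int nr_frags)
          {
              unsigned long total = headlen;
              unsigned int i;

              for (i = 0; i < nr_frags; i++)
                  total += frags[i];

              return (total + SLOT_SIZE - 1) / SLOT_SIZE;
          }

          int main(void)
          {
              /* the head layout from the diagrams above: 4 + 4096 + 4 bytes,
               * no frags -> 2 slots once packed, 3 under the old arrangement */
              printf("%u slots\n", packed_slot_count(4 + 4096 + 4, NULL, 0));
              return 0;
          }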
  7. 13 September 2013, 1 commit
    • xen-netback: count number required slots for an skb more carefully · 6e43fc04
      Committed by David Vrabel
      When a VM is providing an iSCSI target and the LUN is used by the
      backend domain, the generated skbs for direct I/O writes to the disk
      have large, multi-page skb->data but no frags.
      
      With some lengths and starting offsets, xen_netbk_count_skb_slots()
      would be one short because the simple calculation of
      DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) was not accounting for the
      decisions made by start_new_rx_buffer(), which does not guarantee
      responses are fully packed.
      
      For example, an skb with length < 2 pages but which spans 3 pages would
      be counted as requiring 2 slots but would actually use 3 slots.
      
      skb->data:
      
          |        1111|222222222222|3333        |
      
      Fully packed, this would need 2 slots:
      
          |111122222222|22223333    |
      
      But because the 2nd page wholly fits into a slot, it is not split across
      slots and goes into a slot of its own:
      
          |1111        |222222222222|3333        |
      
      Miscounting the number of slots means netback may push more responses
      than the number of available requests.  This will cause the frontend
      to get very confused and report "Too many frags/slots".  The frontend
      never recovers and will eventually BUG.
      
      Fix this by counting the number of required slots more carefully.  In
      xen_netbk_count_skb_slots(), more closely follow the algorithm used by
      xen_netbk_gop_skb() by introducing xen_netbk_count_frag_slots(), which
      is the dry-run equivalent of netbk_gop_frag_copy() (see the sketch
      after this entry).
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6e43fc04
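      A standalone C sketch of why the naive count falls short; PAGE_SIZE,
      count_head_slots() and the example offsets are assumptions made for
      illustration, not the actual netback code:

          #include <stdio.h>

          #define PAGE_SIZE 4096UL
          #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

          /* A simplified model of the old copy path for this layout: data is
           * walked page by page, and a piece that does not fit in the remaining
           * room of the current slot opens a new slot instead of being split. */
          static unsigned int count_head_slots(unsigned long offset, unsigned long len)
          {
              unsigned int slots = 0;
              unsigned long room = 0;

              while (len > 0) {
                  /* bytes of this piece up to the end of the current page */
                  unsigned long bytes = PAGE_SIZE - (offset % PAGE_SIZE);

                  if (bytes > len)
                      bytes = len;
                  if (bytes > room) {  /* piece does not fit: open a new slot */
                      slots++;
                      room = PAGE_SIZE;
                  }
                  room -= bytes;
                  offset += bytes;
                  len -= bytes;
              }
              return slots;
          }

          int main(void)
          {
              /* the example above: head starts near the end of a page and
               * spans 3 pages while being shorter than 2 pages */
              unsigned long offset = PAGE_SIZE - 4, len = 4 + PAGE_SIZE + 4;

              printf("naive: %lu slots, dry-run: %u slots\n",
                     DIV_ROUND_UP(len, PAGE_SIZE), count_head_slots(offset, len));
              return 0;
          }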
  8. 29 August 2013, 3 commits
  9. 02 July 2013, 1 commit
  10. 24 June 2013, 1 commit
  11. 13 June 2013, 1 commit
  12. 24 May 2013, 1 commit
  13. 18 May 2013, 2 commits
  14. 03 May 2013, 3 commits
  15. 23 April 2013, 2 commits
    • xen-netback: don't disconnect frontend when seeing oversize packet · 03393fd5
      Committed by Wei Liu
      Some frontend drivers are sending packets > 64 KiB in length. This length
      overflows the length field in the first slot, making the following slots
      have an invalid length.
      
      Turn this error back into a non-fatal error by dropping the packet. To avoid
      the following slots triggering fatal errors, consume all slots in the
      packet.
      
      This does not reopen the security hole in XSA-39: if the packet has an
      invalid number of slots it will still hit the fatal error case (see the
      short illustration after this entry).
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      03393fd5
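      A short standalone illustration of the overflow itself; the assumption is
      that the per-slot size field is a 16-bit value, so a > 64 KiB packet wraps
      around (the variable names are illustrative):

          #include <stdio.h>
          #include <stdint.h>

          int main(void)
          {
              unsigned long claimed = 65536UL + 200;        /* a > 64 KiB packet */
              uint16_t first_slot_size = (uint16_t)claimed; /* 16-bit field wraps */

              printf("real length %lu, length seen in first slot %u\n",
                     claimed, first_slot_size);
              /* netback now believes the packet is tiny and would misread the
               * remaining slots; the fix drops the packet but still consumes
               * every slot it occupies so the ring stays in sync. */
              return 0;
          }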
    • xen-netback: coalesce slots in TX path and fix regressions · 2810e5b9
      Committed by Wei Liu
      This patch tries to coalesce tx requests when constructing grant copy
      structures. It enables netback to deal with the situation when the frontend's
      MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
      
      With the help of coalescing, this patch tries to address two regressions
      while avoiding reopening the security hole in XSA-39.
      
      Regression 1. The reduction of the number of supported ring entries (slots)
      per packet (from 18 to 17). This regression has been around for some time but
      remained unnoticed until the XSA-39 security fix. This is fixed by coalescing
      slots.
      
      Regression 2. The XSA-39 security fix turned "too many frags" errors from
      just dropping the packet into a fatal error that disables the VIF. This is
      fixed by coalescing slots (handling 18 slots when the backend's MAX_SKB_FRAGS
      is 17), which rules out false positives (using 18 slots is legitimate), and by
      dropping packets that use 19 to `max_skb_slots` slots.
      
      To avoid reopening the security hole in XSA-39, a frontend sending a packet
      using more than max_skb_slots slots is considered malicious.
      
      The behavior of netback for a packet is thus:
      
          1-18             slots: valid
          19-max_skb_slots slots: drop and respond with an error
          max_skb_slots+   slots: fatal error
      
      max_skb_slots is configurable by the admin; the default value is 20.
      
      Also change variable name from "frags" to "slots" in netbk_count_requests.
      
      Please note that the RX path still has a dependency on MAX_SKB_FRAGS. This
      will be fixed in a separate patch. (A sketch of the slot-count policy above
      follows this entry.)
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2810e5b9
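      A small standalone sketch of the per-packet policy in the table above; the
      18-slot limit and the default max_skb_slots of 20 come from the commit
      message, while the macro, enum and function names are illustrative:

          #include <stdio.h>

          #define NR_SLOTS_MIN 18  /* what frontends have always been able to send */

          enum pkt_verdict { PKT_VALID, PKT_DROP, PKT_FATAL };

          static enum pkt_verdict classify(unsigned int slots,
                                           unsigned int max_skb_slots)
          {
              if (slots <= NR_SLOTS_MIN)
                  return PKT_VALID;   /* 1-18: process normally */
              if (slots <= max_skb_slots)
                  return PKT_DROP;    /* 19-max_skb_slots: drop, respond with error */
              return PKT_FATAL;       /* beyond that: treated as malicious */
          }

          int main(void)
          {
              unsigned int max_skb_slots = 20;  /* admin-tunable default */
              unsigned int tests[] = { 17, 18, 19, 20, 21 };
              unsigned int i;

              for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
                  printf("%u slots -> verdict %d\n", tests[i],
                         classify(tests[i], max_skb_slots));
              return 0;
          }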
  16. 13 April 2013, 1 commit
  17. 11 April 2013, 1 commit
  18. 28 March 2013, 1 commit
  19. 27 March 2013, 1 commit
    • netback: set transport header before passing it to kernel · f9ca8f74
      Committed by Jason Wang
      Currently, for packets received from netback, the kernel just resets the
      transport header in netif_receive_skb() before doing the header check, which
      pretends there is no l4 header. This is suboptimal for precise packet length
      estimation (introduced in 1def9238: "net_sched: more precise pkt_len
      computation"), which needs a correct l4 header for gso packets.
      
      The patch just reuses the header probed by netback for partial checksum
      packets and tries to use skb_flow_dissect() for other cases; if both fail,
      just pretend there is no l4 header (see the sketch after this entry).
      
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f9ca8f74
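      A hedged sketch of this probing order, assuming the skb_flow_dissect() /
      struct flow_keys interface of that kernel era; the function name is
      hypothetical and this is not necessarily the exact hunk from the patch:

          #include <linux/skbuff.h>
          #include <net/flow_keys.h>

          static void xenvif_probe_transport_header(struct sk_buff *skb)
          {
              struct flow_keys keys;

              if (skb->ip_summed == CHECKSUM_PARTIAL)
                  /* partial csum: reuse the header start netback already probed */
                  skb_set_transport_header(skb, skb_checksum_start_offset(skb));
              else if (skb_flow_dissect(skb, &keys))
                  /* otherwise let the flow dissector find the l4 offset */
                  skb_set_transport_header(skb, keys.thoff);
              else
                  /* no usable l4 header: keep the old behaviour */
                  skb_reset_transport_header(skb);
          }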
  20. 26 March 2013, 1 commit
  21. 20 February 2013, 1 commit
  22. 19 February 2013, 1 commit
  23. 15 February 2013, 1 commit
  24. 08 February 2013, 4 commits
  25. 11 October 2012, 1 commit
  26. 21 September 2012, 1 commit
    • xen/gntdev: Xen backend support for paged out grant targets V4. · c571898f
      Committed by Andres Lagar-Cavilla
      Since Xen 4.2, hvm domains may have portions of their memory paged out. When a
      foreign domain (such as dom0) attempts to map these frames, the map will
      initially fail. The hypervisor returns a suitable errno and kicks off an
      asynchronous page-in operation carried out by a helper. The foreign domain is
      expected to retry the mapping operation until it eventually succeeds. The
      foreign domain is not put to sleep because it could itself be the one running
      the pager assist (a typical scenario for dom0).
      
      This patch adds support for this mechanism for backend drivers using grant
      mapping and copying operations. Specifically, this covers the blkback and
      gntdev drivers (which map foreign grants), and the netback driver (which copies
      foreign grants).
      
      * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
        target foreign frame is paged out).
      * Insert hooks with appropriate wrappers in the aforementioned drivers.
      
      The retry loop is only invoked if the grant operation status is GNTST_eagain.
      It is guaranteed to leave a new status code different from GNTST_eagain. Any
      other status code results in code execution identical to before.
      
      The retry loop performs 256 attempts with increasing time intervals over a
      32 second period. It uses msleep to yield while waiting for the next retry
      (a simplified sketch of this loop follows this entry).
      
      V2 after feedback from David Vrabel:
      * Explicit MAX_DELAY instead of wrap-around delay into zero
      * Abstract GNTST_eagain check into core grant table code for netback module.
      
      V3 after feedback from Ian Campbell:
      * Add placeholder in array of grant table error descriptions for unrelated
        error code we jump over.
      * Eliminate single map and retry macro in favor of a generic batch flavor.
      * Some renaming.
      * Bury most implementation in grant_table.c, cleaner interface.
      
      V4 rebased on top of the sync of the Xen grant table interface headers.
      Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      [v5: Fixed whitespace issues]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      c571898f
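      A simplified, hypothetical sketch of the retry policy described above (the
      real helper lives in the core grant table code); the function name and the
      status-pointer convention are assumptions made for illustration:

          #include <linux/bug.h>
          #include <linux/delay.h>
          #include <linux/kernel.h>
          #include <asm/xen/hypercall.h>
          #include <xen/interface/grant_table.h>

          #define MAX_DELAY 256  /* explicit cap instead of wrapping to zero */

          /* Reissue one grant operation while it keeps returning GNTST_eagain,
           * sleeping a little longer each round: msleep(1) + ... + msleep(255)
           * spreads the attempts over roughly 32 seconds. */
          static void retry_eagain_gop(unsigned int cmd, void *gop, int16_t *status)
          {
              unsigned int delay = 1;

              do {
                  BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
                  if (*status == GNTST_eagain)
                      msleep(delay++);
              } while (*status == GNTST_eagain && delay < MAX_DELAY);

              if (delay >= MAX_DELAY)
                  pr_err("grant op still GNTST_eagain after %u attempts\n", delay);
          }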
  27. 09 August 2012, 1 commit
  28. 29 June 2012, 1 commit
  29. 25 May 2012, 1 commit