1. 29 October 2013, 1 commit
  2. 18 October 2013, 5 commits
  3. 09 October 2013, 2 commits
  4. 01 October 2013, 2 commits
    • xen-netback: improve ring efficiency for guest RX · 4f0581d2
      Wei Liu committed
      There was a bug where the netback routines netbk/xenvif_skb_count_slots
      and netbk/xenvif_gop_frag_copy disagreed with each other, which caused
      netback to push the wrong number of responses to netfront, which caused
      netfront to eventually crash. The bug was fixed in 6e43fc04
      ("xen-netback: count number required slots for an skb more carefully").
      
      Commit 6e43fc04 focused on backport-ability. The drawback of the
      existing packing scheme is that the ring is not used efficiently, as
      stated in 6e43fc04.
      
      skb->data laid out like:
          |        1111|222222222222|3333        |
      
      is arranged as:
          |1111        |222222222222|3333        |
      
      If we can do this:
          |111122222222|22223333    |
      That would save one ring slot, which improves ring efficiency.
      
      This patch effectively reverts 6e43fc04. That patch made count_slots
      agree with gop_frag_copy, while this patch goes the other way around --
      making gop_frag_copy agree with count_slots. The end result is that they
      still agree with each other, and the ring is now arranged like:
          |111122222222|22223333    |
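      As an illustration, here is a minimal standalone C sketch (not the
      netback source; PAGE_SIZE and the helper names are assumptions for the
      example) contrasting the two counting schemes for the layout above:

          #define PAGE_SIZE 4096u
          #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

          /* Old scheme: a slot boundary is forced at every source page
           * boundary, so the count depends on how many pages the data
           * spans. */
          static unsigned int slots_page_aligned(unsigned int off,
                                                 unsigned int len)
          {
                  return DIV_ROUND_UP(off + len, PAGE_SIZE) - off / PAGE_SIZE;
          }

          /* Packed scheme: bytes are copied contiguously into slots, so
           * only the total length matters. */
          static unsigned int slots_packed(unsigned int len)
          {
                  return DIV_ROUND_UP(len, PAGE_SIZE);
          }

          /* For the example above (two pages of data starting half-way into
           * a page): slots_page_aligned(2048, 8192) == 3 while
           * slots_packed(8192) == 2, i.e. packing saves one slot. */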
      
      The patch that improves packing was first posted by Xi Xiong and Matt
      Wilson. I only rebased it on top of net-next and rewrote the commit
      message, so I retain all their SoBs. For more information about the
      original bug please refer to the email listed below and the commit
      message of 6e43fc04.
      
      Original patch:
      http://lists.xen.org/archives/html/xen-devel/2013-07/msg00760.html
      Signed-off-by: Xi Xiong <xixiong@amazon.com>
      Reviewed-by: Matt Wilson <msw@amazon.com>
      [ msw: minor code cleanups, rewrote commit message, adjusted code
        to count RX slots instead of meta structures ]
      Signed-off-by: Matt Wilson <msw@amazon.com>
      Cc: Annie Li <annie.li@oracle.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
      [ liuw: rebased on top of net-next tree, rewrote commit message, coding
        style cleanup. ]
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xen-netback: Handle backend state transitions in a more robust way · ea732dff
      Paul Durrant committed
      When the frontend state changes, netback now passes its desired state to
      a new function, set_backend_state(), which transitions through any
      necessary intermediate states.
      This fixes an issue observed with some old Windows frontend drivers where
      they failed to transition through the Closing state and netback would not
      behave correctly.
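      A hedged sketch of the approach (the state set and helpers are
      simplified here, not the actual driver code): instead of jumping
      straight to the target state, walk through legal intermediate XenBus
      states so a frontend always observes a sequence such as
      Connected -> Closing -> Closed:

          enum xenbus_state {
                  XenbusStateInitWait,
                  XenbusStateConnected,
                  XenbusStateClosing,
                  XenbusStateClosed,
          };

          struct backend_info {
                  enum xenbus_state state;
          };

          static void backend_switch_state(struct backend_info *be,
                                           enum xenbus_state st)
          {
                  be->state = st; /* the real code also publishes via xenstore */
          }

          static void set_backend_state(struct backend_info *be,
                                        enum xenbus_state target)
          {
                  while (be->state != target) {
                          switch (be->state) {
                          case XenbusStateInitWait:
                                  backend_switch_state(be, XenbusStateConnected);
                                  break;
                          case XenbusStateConnected:
                                  /* never jump straight to Closed */
                                  backend_switch_state(be, XenbusStateClosing);
                                  break;
                          case XenbusStateClosing:
                                  backend_switch_state(be, XenbusStateClosed);
                                  break;
                          case XenbusStateClosed:
                                  backend_switch_state(be, XenbusStateInitWait);
                                  break;
                          }
                  }
          }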
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  5. 20 September 2013, 1 commit
    • xen-netback: Don't destroy the netdev until the vif is shut down · 279f438e
      Paul Durrant committed
      Without this patch, if a frontend cycles through the Closing and Closed
      states (which Windows frontends need to do) then the netdev will be
      destroyed, and hotplug scripts must be re-invoked to restore state
      before the frontend can move to Connected. Thus, when udev is not in
      use, the backend gets stuck in InitWait.
      
      With this patch, the netdev is left alone whilst the backend is
      still online and is only de-registered and freed just prior to
      destroying the vif (which is also nicely symmetrical with the
      netdev allocation and registration being done during probe) so
      no re-invocation of hotplug scripts is required.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 13 September 2013, 2 commits
    • xen-netback: count number required slots for an skb more carefully · 6e43fc04
      David Vrabel committed
      When a VM is providing an iSCSI target and the LUN is used by the
      backend domain, the generated skbs for direct I/O writes to the disk
      have large, multi-page skb->data but no frags.
      
      With some lengths and starting offsets, xen_netbk_count_skb_slots()
      would be one slot short, because the simple calculation of
      DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) did not account for the
      decisions made by start_new_rx_buffer(), which does not guarantee
      that responses are fully packed.
      
      For example, an skb with length < 2 pages but which spans 3 pages would
      be counted as requiring 2 slots but would actually use 3 slots.
      
      skb->data:
      
          |        1111|222222222222|3333        |
      
      Fully packed, this would need 2 slots:
      
          |111122222222|22223333    |
      
      But because the 2nd page wholly fits into a slot it is not split across
      slots and goes into a slot of its own:
      
          |1111        |222222222222|3333        |
      
      Miscounting the number of slots means netback may push more responses
      than the number of available requests.  This will cause the frontend
      to get very confused and report "Too many frags/slots".  The frontend
      never recovers and will eventually BUG.
      
      Fix this by counting the number of required slots more carefully.  In
      xen_netbk_count_skb_slots(), more closely follow the algorithm used by
      xen_netbk_gop_skb() by introducing xen_netbk_count_frag_slots() which
      is the dry-run equivalent of netbk_gop_frag_copy().
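      A self-contained sketch of that dry run (simplified: MAX_BUFFER_OFFSET
      is PAGE_SIZE in netback, and these functions only mirror the ones named
      above). The key point is that the counter must take the same
      start-a-new-slot decisions as the copy path:

          #define PAGE_SIZE         4096u
          #define MAX_BUFFER_OFFSET PAGE_SIZE

          static int start_new_rx_buffer(unsigned long copy_off,
                                         unsigned long size, int head)
          {
                  if (copy_off == MAX_BUFFER_OFFSET)
                          return 1;
                  if (copy_off + size > MAX_BUFFER_OFFSET &&
                      (size <= MAX_BUFFER_OFFSET || !head))
                          return 1;
                  return 0;
          }

          static unsigned int count_data_slots(unsigned long offset,
                                               unsigned long size)
          {
                  unsigned int count = 1;     /* the head slot is already open */
                  unsigned long copy_off = 0; /* fill level of current slot */
                  int head = 1;

                  offset &= PAGE_SIZE - 1;

                  while (size > 0) {
                          /* the copy path works one source page at a time */
                          unsigned long bytes = PAGE_SIZE - offset;

                          if (bytes > size)
                                  bytes = size;

                          /* the same decision the copy routine takes */
                          if (start_new_rx_buffer(copy_off, bytes, head)) {
                                  count++;
                                  copy_off = 0;
                          }

                          if (copy_off + bytes > MAX_BUFFER_OFFSET)
                                  bytes = MAX_BUFFER_OFFSET - copy_off;

                          copy_off += bytes;
                          offset = (offset + bytes) & (PAGE_SIZE - 1);
                          size -= bytes;
                          head = 0;
                  }
                  return count;
          }

          /* For the example above: count_data_slots(2048, 8192) == 3 slots,
           * while the naive DIV_ROUND_UP(8192, PAGE_SIZE) == 2 is one
           * short. */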
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xen-netback: fix possible format string flaw · a9677bc0
      Kees Cook committed
      This makes sure a format string cannot accidentally leak into the
      kthread_run() call.
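      The shape of such a fix (the names here are illustrative, not the exact
      netback call site): kthread_run() takes a printf-style name format, so
      an externally influenced name must be passed as an argument to an
      explicit "%s" rather than as the format itself:

          /* Risky: a '%' in the interface name would be interpreted
           * as a format specifier. */
          task = kthread_run(xenvif_kthread, vif, vif->dev->name);

          /* Safe: the name is data, "%s" is the format. */
          task = kthread_run(xenvif_kthread, vif, "%s", vif->dev->name);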
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 29 August 2013, 3 commits
  8. 02 July 2013, 2 commits
  9. 24 June 2013, 1 commit
  10. 13 June 2013, 1 commit
  11. 24 May 2013, 1 commit
  12. 18 May 2013, 2 commits
  13. 03 May 2013, 3 commits
  14. 23 April 2013, 2 commits
    • xen-netback: don't disconnect frontend when seeing oversize packet · 03393fd5
      Wei Liu committed
      Some frontend drivers are sending packets > 64 KiB in length. This
      length overflows the length field in the first slot, making the
      following slots have an invalid length.
      
      Turn this error back into a non-fatal error by dropping the packet. To
      avoid the following slots triggering fatal errors, consume all slots in
      the packet.
      
      This does not reopen the security hole of XSA-39, as a packet with an
      invalid number of slots will still hit the fatal error case.
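      A simplified sketch of that recovery logic (hypothetical types, not the
      driver source): on an oversize packet, remember the error but keep
      walking the packet's remaining slots so the ring pointer stays
      consistent, and only then report a droppable, non-fatal error:

          struct tx_slot {
                  unsigned int size;  /* first slot: total packet length */
                  int more_data;      /* another slot follows */
          };

          /* Returns 0 if the packet is fine, -1 if it must be dropped with
           * a non-fatal error response. Either way every slot of the packet
           * is consumed, so a bogus first slot cannot poison the slots that
           * follow it. */
          static int check_packet(struct tx_slot *first)
          {
                  struct tx_slot *txp = first;
                  unsigned int remaining = first->size;
                  int drop_err = 0;

                  while (txp->more_data) {
                          txp++;
                          if (!drop_err && txp->size > remaining)
                                  drop_err = -1; /* oversize: drop, not fatal */
                          if (!drop_err)
                                  remaining -= txp->size;
                  }
                  return drop_err;
          }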
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xen-netback: coalesce slots in TX path and fix regressions · 2810e5b9
      Wei Liu committed
      This patch coalesces tx requests when constructing grant copy
      structures. It enables netback to deal with the situation where the
      frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.
      
      With the help of coalescing, this patch addresses two regressions
      without reopening the security hole in XSA-39.
      
      Regression 1: the reduction of the number of supported ring entries
      (slots) per packet (from 18 to 17). This regression had been around for
      some time but remained unnoticed until the XSA-39 security fix. It is
      fixed by coalescing slots.
      
      Regression 2: the XSA-39 security fix turned "too many frags" errors
      from just dropping the packet into a fatal error that disables the VIF.
      This is fixed by coalescing slots (handling 18 slots when the backend's
      MAX_SKB_FRAGS is 17), which rules out false positives (using 18 slots is
      legitimate), and by dropping packets that use 19 to `max_skb_slots`
      slots.
      
      To avoid reopening the security hole of XSA-39, a frontend sending a
      packet that uses more than max_skb_slots slots is considered malicious.
      
      The behavior of netback for a packet is thus:
      
          1-18            slots: valid
         19-max_skb_slots slots: drop and respond with an error
         max_skb_slots+   slots: fatal error
      
      max_skb_slots is configurable by the admin; the default value is 20.
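      In code, that policy reads roughly as follows (a sketch; the constant
      reflects the commit's historical value of 18, and max_skb_slots is the
      admin-configurable module parameter defaulting to 20):

          #define XEN_NETIF_NR_SLOTS_MIN 18

          static unsigned int max_skb_slots = 20; /* module parameter */

          enum pkt_verdict { PKT_OK, PKT_DROP, PKT_FATAL };

          static enum pkt_verdict classify_by_slots(unsigned int slots)
          {
                  if (slots <= XEN_NETIF_NR_SLOTS_MIN)
                          return PKT_OK;    /* 1-18: valid */
                  if (slots <= max_skb_slots)
                          return PKT_DROP;  /* 19-max_skb_slots: drop + error */
                  return PKT_FATAL;         /* beyond: malicious, disable vif */
          }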
      
      Also change the variable name from "frags" to "slots" in
      netbk_count_requests.
      
      Please note that the RX path still has a dependency on MAX_SKB_FRAGS.
      This will be fixed in a separate patch.
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 13 April 2013, 1 commit
  16. 11 April 2013, 1 commit
  17. 28 March 2013, 1 commit
  18. 27 March 2013, 1 commit
    • netback: set transport header before passing it to kernel · f9ca8f74
      Jason Wang committed
      Currently, for packets received from netback, the kernel just resets the
      transport header in netif_receive_skb() before doing the header check,
      which pretends there is no L4 header. This is suboptimal for precise
      packet length estimation (introduced in 1def9238: net_sched: more
      precise pkt_len computation), which needs a correct L4 header for GSO
      packets.

      This patch reuses the header probed by netback for partial-checksum
      packets and tries skb_flow_dissect() for other cases; if both fail, it
      just pretends there is no L4 header.
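      A sketch close to the helper this patch adds (simplified and renamed
      here rather than claiming to be verbatim), using the skb_flow_dissect()
      and flow_keys API of that era:

          #include <linux/skbuff.h>
          #include <net/flow_keys.h>

          static inline void probe_transport_header(struct sk_buff *skb,
                                                    const int offset_hint)
          {
                  struct flow_keys keys;

                  /* partial-checksum packets already had their header set
                   * by netback */
                  if (skb_transport_header_was_set(skb))
                          return;
                  if (skb_flow_dissect(skb, &keys))
                          skb_set_transport_header(skb, keys.thoff);
                  else
                          skb_set_transport_header(skb, offset_hint);
          }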
      
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 26 March 2013, 1 commit
  20. 20 February 2013, 1 commit
  21. 19 February 2013, 1 commit
  22. 15 February 2013, 2 commits
  23. 08 February 2013, 3 commits