1. 01 October 2013 (1 commit)
  2. 20 September 2013 (1 commit)
    • xen-netback: Don't destroy the netdev until the vif is shut down · 279f438e
      By Paul Durrant
      Without this patch, if a frontend cycles through the Closing and
      Closed states (which Windows frontends need to do), the netdev is
      destroyed and the hotplug scripts must be re-invoked to restore
      state before the frontend can move to Connected. Thus, when udev is
      not in use, the backend gets stuck in InitWait.
      
      With this patch, the netdev is left alone whilst the backend is
      still online and is only de-registered and freed just prior to
      destroying the vif (which is also nicely symmetrical with the
      netdev allocation and registration being done during probe), so no
      re-invocation of hotplug scripts is required. A hedged sketch of
      the resulting teardown order follows this entry.
      Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
      Cc: David Vrabel <david.vrabel@citrix.com>
      Cc: Wei Liu <wei.liu2@citrix.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Acked-by: Wei Liu <wei.liu2@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      279f438e
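      A minimal sketch of the lifecycle this commit describes, assuming
      hypothetical helper names (backend_close() and vif_destroy() are
      illustrative, not the driver's actual entry points):

          /* Hedged sketch: the netdev now survives backend Closing/Closed
           * cycles and is torn down only together with the vif itself. */
          static void backend_close(struct xenvif *vif)
          {
              xenvif_disconnect(vif);  /* shut down rings and interrupts */
              /* netdev intentionally left registered: a Windows frontend
               * may cycle Closing -> Closed and reconnect without
               * re-running hotplug scripts */
          }

          static void vif_destroy(struct xenvif *vif)
          {
              unregister_netdev(vif->dev); /* mirrors register_netdev() at probe */
              free_netdev(vif->dev);       /* mirrors alloc_netdev() at probe */
          }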
  3. 13 September 2013 (2 commits)
    • xen-netback: count number required slots for an skb more carefully · 6e43fc04
      By David Vrabel
      When a VM is providing an iSCSI target and the LUN is used by the
      backend domain, the generated skbs for direct I/O writes to the disk
      have large, multi-page skb->data but no frags.
      
      With some lengths and starting offsets, xen_netbk_count_skb_slots()
      would be one short because the simple calculation of
      DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) was not accounting for the
      decisions made by start_new_rx_buffer() which does not guarantee
      responses are fully packed.
      
      For example, an skb with length < 2 pages but which spans 3 pages
      would be counted as requiring 2 slots but would actually use 3 slots.
      
      skb->data:
      
          |        1111|222222222222|3333        |
      
      Fully packed, this would need 2 slots:
      
          |111122222222|22223333    |
      
      But because the 2nd page wholly fits into a slot, it is not split
      across slots and goes into a slot of its own:
      
          |1111        |222222222222|3333        |
      
      Miscounting the number of slots means netback may push more responses
      than the number of available requests.  This will cause the frontend
      to get very confused and report "Too many frags/slots".  The frontend
      never recovers and will eventually BUG.
      
      Fix this by counting the number of required slots more carefully.
      In xen_netbk_count_skb_slots(), more closely follow the algorithm
      used by xen_netbk_gop_skb() by introducing
      xen_netbk_count_frag_slots(), the dry-run equivalent of
      netbk_gop_frag_copy() (a hedged sketch follows this entry).
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      6e43fc04
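      A hedged sketch of the dry-run counting idea: xen_netbk_count_frag_slots()
      is the real function name, but the body below is an approximation of
      the technique, and count_slots() is an illustrative wrapper:

          /* Walk the data page by page, exactly as the copy path does,
           * opening a new slot whenever the copy path would. Unlike
           * DIV_ROUND_UP(len, PAGE_SIZE), this never splits a whole page
           * across two slots, matching start_new_rx_buffer()'s decisions. */
          static unsigned int count_slots(unsigned long offset, unsigned long len)
          {
              unsigned int slots = 0;
              unsigned long copy_off = 0;  /* fill level of the current slot */

              offset &= ~PAGE_MASK;        /* offset within the first page */

              while (len > 0) {
                  /* a chunk never crosses a source page boundary */
                  unsigned long bytes = PAGE_SIZE - offset;

                  if (bytes > len)
                      bytes = len;

                  /* open a fresh slot if this chunk would overflow the
                   * current one (a chunk always fits an empty slot whole) */
                  if (slots == 0 || copy_off + bytes > PAGE_SIZE) {
                      slots++;
                      copy_off = 0;
                  }

                  copy_off += bytes;
                  len -= bytes;
                  offset = 0;              /* later chunks are page-aligned */
              }
              return slots;
          }

      For the example above, this returns 3 slots where
      DIV_ROUND_UP(skb_headlen(), PAGE_SIZE) returns 2.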
    • xen-netback: fix possible format string flaw · a9677bc0
      By Kees Cook
      This makes sure a format string cannot accidentally leak into the
      kthread_run() call (illustrated after this entry).
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a9677bc0
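      The class of flaw being closed, in hedged form (the call site is
      illustrative, but kthread_run()'s trailing arguments really are
      printf-style):

          /* Risky: if the device name ever contained a '%', kthread_run()
           * would parse it as a conversion specifier. */
          vif->task = kthread_run(xenvif_kthread, vif, vif->dev->name);

          /* Safe: a literal format string; the name is passed as data. */
          vif->task = kthread_run(xenvif_kthread, vif, "%s", vif->dev->name);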
  4. 29 August 2013 (3 commits)
  5. 02 July 2013 (2 commits)
  6. 24 June 2013 (1 commit)
  7. 13 June 2013 (1 commit)
  8. 24 May 2013 (1 commit)
  9. 18 May 2013 (2 commits)
  10. 03 May 2013 (3 commits)
  11. 23 April 2013 (2 commits)
    • xen-netback: don't disconnect frontend when seeing oversize packet · 03393fd5
      By Wei Liu
      Some frontend drivers are sending packets > 64 KiB in length. This
      length overflows the length field in the first slot, making the
      following slots appear to have invalid lengths.
      
      Turn this error back into a non-fatal one by dropping the packet.
      To avoid the following slots triggering fatal errors, consume all
      slots in the packet (a hedged toy model follows this entry).

      This does not reopen the security hole of XSA-39: if the packet has
      an invalid number of slots, it will still hit the fatal error case.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      03393fd5
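      A hedged toy model of the consume-then-drop idea (the struct and the
      oversize test are simplified stand-ins for the real tx ring requests):

          struct slot { unsigned int size; int more; }; /* simplified request */

          /* When a slot's size field is inconsistent (e.g. an overflowed
           * 16-bit length), mark the packet for dropping but still consume
           * every slot it occupies, so later slots are not misread as the
           * start of fresh packets. */
          static int consume_packet(const struct slot *ring, unsigned int idx,
                                    unsigned int max_size, int *drop_err)
          {
              int consumed = 0;

              *drop_err = 0;
              do {
                  if (ring[idx + consumed].size > max_size && !*drop_err)
                      *drop_err = -EIO;    /* drop; don't disable the vif */
                  consumed++;
              } while (ring[idx + consumed - 1].more);

              /* on *drop_err the caller responds with an error for all
               * 'consumed' slots instead of taking the fatal error path */
              return consumed;
          }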
    • xen-netback: coalesce slots in TX path and fix regressions · 2810e5b9
      By Wei Liu
      This patch tries to coalesce tx requests when constructing grant
      copy structures. It enables netback to deal with the situation
      where the frontend's MAX_SKB_FRAGS is larger than the backend's
      MAX_SKB_FRAGS.

      With the help of coalescing, this patch tries to address two
      regressions while avoiding reopening the security hole of XSA-39.
      
      Regression 1: the reduction of the number of supported ring entries
      (slots) per packet (from 18 to 17). This regression had been around
      for some time but remained unnoticed until the XSA-39 security fix.
      It is fixed by coalescing slots.

      Regression 2: the XSA-39 security fix turned "too many frags"
      errors from just dropping the packet into a fatal error that
      disables the VIF. This is fixed by coalescing slots (handling 18
      slots when the backend's MAX_SKB_FRAGS is 17), which rules out
      false positives (using 18 slots is legitimate) while dropping
      packets that use 19 up to max_skb_slots slots.

      To avoid reopening the security hole of XSA-39, a frontend sending
      a packet using more than max_skb_slots slots is considered
      malicious.
      
      The behavior of netback for a packet is thus (a toy model follows
      this entry):

          1 .. 18             slots: valid
          19 .. max_skb_slots slots: drop and respond with an error
          > max_skb_slots     slots: fatal error

      max_skb_slots is configurable by the admin; the default value is 20.
      
      Also change the variable name from "frags" to "slots" in
      netbk_count_requests().
      
      Please note that the RX path still has a dependency on
      MAX_SKB_FRAGS; this will be fixed in a separate patch.
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2810e5b9
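      A toy model of the slot-count policy above (the verdict enum is
      illustrative; 18 corresponds to the real XEN_NETBK_LEGACY_SLOTS_MAX
      limit):

          enum verdict { VALID, DROP_WITH_ERROR, FATAL };

          static enum verdict classify(unsigned int slots,
                                       unsigned int max_skb_slots)
          {
              if (slots <= 18)             /* XEN_NETBK_LEGACY_SLOTS_MAX */
                  return VALID;
              if (slots <= max_skb_slots)  /* 19..20 with the default */
                  return DROP_WITH_ERROR;  /* drop and respond with error */
              return FATAL;                /* treated as malicious */
          }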
  12. 13 April 2013 (1 commit)
  13. 11 April 2013 (1 commit)
  14. 28 March 2013 (1 commit)
  15. 27 March 2013 (1 commit)
    • netback: set transport header before passing it to kernel · f9ca8f74
      By Jason Wang
      Currently, for packets received from netback, the kernel just
      resets the transport header in netif_receive_skb() before doing the
      header check, which pretends there is no L4 header. This is
      suboptimal for precise packet length estimation (introduced in
      1def9238: net_sched: more precise pkt_len computation), which needs
      the correct L4 header for GSO packets.

      The patch reuses the header probed by netback for partial-checksum
      packets and tries skb_flow_dissect() for the other cases; if both
      fail, it just pretends there is no L4 header (a hedged sketch
      follows this entry).
      
      Cc: Eric Dumazet <edumazet@google.com>
      Cc: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f9ca8f74
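      A hedged reconstruction of the probing helper of that kernel era
      (modelled on skb_probe_transport_header() as it existed around
      v3.9; treat the exact body as an approximation):

          #include <linux/skbuff.h>
          #include <net/flow_keys.h>

          /* Set skb->transport_header before the skb enters the stack:
           * keep a header already probed during checksum setup, else
           * dissect the flow, else fall back to the hint (in effect,
           * pretend there is no L4 header). */
          static inline void probe_transport_header(struct sk_buff *skb,
                                                    const int offset_hint)
          {
              struct flow_keys keys;

              if (skb_transport_header_was_set(skb))
                  return;                  /* netback already probed it */
              else if (skb_flow_dissect(skb, &keys))
                  skb_set_transport_header(skb, keys.thoff);
              else
                  skb_set_transport_header(skb, offset_hint);
          }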
  16. 26 March 2013 (1 commit)
  17. 20 February 2013 (1 commit)
  18. 19 February 2013 (1 commit)
  19. 15 February 2013 (2 commits)
  20. 08 February 2013 (4 commits)
  21. 24 January 2013 (1 commit)
  22. 11 October 2012 (1 commit)
  23. 21 September 2012 (1 commit)
    • xen/gndev: Xen backend support for paged out grant targets V4. · c571898f
      By Andres Lagar-Cavilla
      Since Xen 4.2, HVM domains may have portions of their memory paged
      out. When a foreign domain (such as dom0) attempts to map these
      frames, the map will initially fail. The hypervisor returns a
      suitable errno and kicks off an asynchronous page-in operation
      carried out by a helper. The foreign domain is expected to retry
      the mapping operation until it eventually succeeds. The foreign
      domain is not put to sleep because it could itself be the one
      running the pager assist (a typical scenario for dom0).
      
      This patch adds support for this mechanism for backend drivers using grant
      mapping and copying operations. Specifically, this covers the blkback and
      gntdev drivers (which map foreign grants), and the netback driver (which copies
      foreign grants).
      
      * Add a retry method for grants that fail with GNTST_eagain (i.e. because the
        target foreign frame is paged out).
      * Insert hooks with appropriate wrappers in the aforementioned drivers.
      
      The retry loop is only invoked if the grant operation status is GNTST_eagain.
      It guarantees to leave a new status code different from GNTST_eagain. Any other
      status code results in identical code execution as before.
      
      The retry loop performs up to 256 attempts with increasing time
      intervals over roughly a 32 second period. It uses msleep() to
      yield while waiting for the next retry (a hedged sketch follows
      this entry).
      
      V2 after feedback from David Vrabel:
      * Explicit MAX_DELAY instead of wrap-around delay into zero
      * Abstract GNTST_eagain check into core grant table code for netback module.
      
      V3 after feedback from Ian Campbell:
      * Add placeholder in array of grant table error descriptions for unrelated
        error code we jump over.
      * Eliminate single map and retry macro in favor of a generic batch flavor.
      * Some renaming.
      * Bury most implementation in grant_table.c, cleaner interface.
      
      V4 rebased on top of sync of Xen grant table interface headers.
      Signed-off-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      [v5: Fixed whitespace issues]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      c571898f
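      A hedged sketch of the retry loop (MAX_DELAY and the bookkeeping
      are assumptions; GNTST_eagain, GNTST_bad_page and
      HYPERVISOR_grant_table_op() are the real interfaces):

          #include <linux/delay.h>
          #include <xen/grant_table.h>

          #define MAX_DELAY 256  /* msleep(1..255) sums to ~32 seconds */

          /* Re-issue a single grant op while it reports GNTST_eagain
           * (target frame paged out); guarantees a non-eagain status on
           * return. */
          static void retry_eagain_gop(unsigned int cmd, void *gop,
                                       int16_t *status)
          {
              unsigned int delay = 1;

              do {
                  BUG_ON(HYPERVISOR_grant_table_op(cmd, gop, 1));
                  if (*status == GNTST_eagain)
                      msleep(delay++);     /* yield; linear back-off */
              } while (*status == GNTST_eagain && delay < MAX_DELAY);

              if (delay >= MAX_DELAY)
                  *status = GNTST_bad_page; /* give up with a hard error */
          }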
  24. 09 August 2012 (1 commit)
  25. 29 June 2012 (1 commit)
  26. 25 May 2012 (1 commit)
  27. 01 February 2012 (1 commit)
  28. 06 January 2012 (1 commit)