1. 17 Jul 2009, 1 commit
  2. 02 May 2009, 1 commit
  3. 05 Feb 2009, 4 commits
  4. 31 Jan 2009, 1 commit
  5. 22 Jan 2009, 1 commit
    • virtio_net: add link status handling · 9f4d26d0
      Authored by Mark McLoughlin
      Allow the host to inform us that the link is down by adding
      a VIRTIO_NET_F_STATUS feature bit which indicates that a device
      status field is available in the virtio_net config space.
      
      This is currently useful for simulating link down conditions
      (e.g. using proposed qemu 'set_link' monitor command) but
      would also be needed if we were to support device assignment
      via virtio.
      Signed-off-by: Mark McLoughlin <markmc@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (added future masking)
      Signed-off-by: David S. Miller <davem@davemloft.net>
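      A minimal standalone C sketch of the idea above: a feature bit tells the
      driver that a status word exists in config space, and the driver mirrors
      its LINK_UP bit into the carrier state.  The read_config_status() and
      set_carrier() helpers are hypothetical stand-ins for the real virtio
      config-read and netdev carrier calls; only the constants mirror the
      virtio_net ABI.

      #include <stdint.h>
      #include <stdio.h>

      /* Feature bit and status flag as defined for virtio_net. */
      #define VIRTIO_NET_F_STATUS   16     /* config space carries a status field */
      #define VIRTIO_NET_S_LINK_UP   1     /* link is up */

      /* Hypothetical stand-ins for the real config read and carrier updates. */
      static uint16_t read_config_status(void) { return VIRTIO_NET_S_LINK_UP; }
      static void set_carrier(int up) { printf("carrier %s\n", up ? "on" : "off"); }

      /* If VIRTIO_NET_F_STATUS was negotiated, honour the status word;
       * otherwise assume the link is always up (the pre-feature behaviour). */
      static void update_link_status(uint64_t features)
      {
          int up = 1;

          if (features & (1ULL << VIRTIO_NET_F_STATUS))
              up = !!(read_config_status() & VIRTIO_NET_S_LINK_UP);
          set_carrier(up);
      }

      int main(void)
      {
          update_link_status(1ULL << VIRTIO_NET_F_STATUS);
          return 0;
      }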
  6. 17 Nov 2008, 1 commit
    • virtio_net: VIRTIO_NET_F_MRG_RXBUF (improve rcv buffer allocation) · 3f2c31d9
      Authored by Mark McLoughlin
      If segmentation offload is enabled by the host, we currently allocate
      maximum sized packet buffers and pass them to the host. This uses up
      20 ring entries per buffer, allowing us to supply only 12 packet
      buffers to the host with a 256 entry ring. This is a huge overhead
      when receiving small packets, and is most keenly felt when receiving
      MTU sized packets from off-host.
      
      The VIRTIO_NET_F_MRG_RXBUF feature flag is set by hosts which support
      using receive buffers which are smaller than the maximum packet size.
      In order to transfer large packets to the guest, the host merges
      together multiple receive buffers to form a larger logical buffer.
      The number of merged buffers is returned to the guest via a field in
      the virtio_net_hdr.
      
      Make use of this support by supplying single page receive buffers to
      the host. On receive, we extract the virtio_net_hdr, copy 128 bytes of
      the payload to the skb's linear data buffer and adjust the fragment
      offset to point to the remaining data. This ensures proper alignment
      and allows us to not use any paged data for small packets. If the
      payload occupies multiple pages, we simply append those pages as
      fragments and free the associated skbs.
      
      This scheme allows us to be efficient in our use of ring entries
      while still supporting large packets. Benchmarking using netperf from
      an external machine to a guest over a 10Gb/s network shows a 100%
      improvement from ~1Gb/s to ~2Gb/s. With a local host->guest benchmark
      with GSO disabled on the host side, throughput was seen to increase
      from 700Mb/s to 1.7Gb/s.
      
      Based on a patch from Herbert Xu.
      Signed-off-by: Mark McLoughlin <markmc@redhat.com>
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (use netdev_priv)
      Signed-off-by: David S. Miller <davem@davemloft.net>
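      A toy, standalone C model of the receive path described above (not the
      kernel code): the header layout mirrors the merged-buffer variant of
      virtio_net_hdr, the 128-byte linear copy matches the commit text, and
      the struct and function names here are illustrative only.

      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Header used when VIRTIO_NET_F_MRG_RXBUF is negotiated: the usual
       * virtio_net_hdr fields followed by the number of merged buffers. */
      struct virtio_net_hdr_mrg {
          uint8_t  flags;
          uint8_t  gso_type;
          uint16_t hdr_len;
          uint16_t gso_size;
          uint16_t csum_start;
          uint16_t csum_offset;
          uint16_t num_buffers;   /* page-sized buffers holding this packet */
      };

      #define COPY_BYTES 128u     /* bytes copied into the skb's linear area */

      /* First buffer starts with the header; the first COPY_BYTES of payload
       * go to linear storage and the remaining pages would become fragments. */
      static void receive_packet(uint8_t *pages[], size_t pkt_len)
      {
          struct virtio_net_hdr_mrg hdr;
          uint8_t linear[COPY_BYTES];
          size_t copy = pkt_len < COPY_BYTES ? pkt_len : COPY_BYTES;

          memcpy(&hdr, pages[0], sizeof(hdr));
          memcpy(linear, pages[0] + sizeof(hdr), copy);
          printf("%zu byte packet in %u buffer(s): %zu bytes linear, rest as frags\n",
                 pkt_len, (unsigned)hdr.num_buffers, copy);
          /* pages[1 .. num_buffers-1] would be appended as page fragments here. */
      }

      int main(void)
      {
          static uint8_t page0[4096];
          uint8_t *pages[1] = { page0 };
          struct virtio_net_hdr_mrg hdr = { .num_buffers = 1 };

          memcpy(page0, &hdr, sizeof(hdr));
          receive_packet(pages, 64);
          return 0;
      }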
  7. 25 Jul 2008, 1 commit
  8. 11 Jun 2008, 1 commit
  9. 02 May 2008, 1 commit
    • virtio: finer-grained features for virtio_net · 5539ae96
      Authored by Rusty Russell
      So, we previously had a 'VIRTIO_NET_F_GSO' bit which meant 'the
      host can handle csum offload, and any TSO (v4&v6 incl ECN) or UFO
      packets you might want to send'.  I thought this was good enough for
      Linux, but it actually isn't, since we don't do UFO in software.
      
      So, add separate feature bits for what the host can handle.  Add
      equivalent ones for the guest to say what it can handle, because LRO
      is coming too (thanks Herbert!).
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
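      A sketch of the negotiation this enables, as standalone C.  The bit
      numbers follow the virtio_net feature layout (HOST_* = what the host can
      receive from us, GUEST_* = what we can receive); the negotiate() helper
      is illustrative, not the real virtio core routine.

      #include <stdint.h>
      #include <stdio.h>

      #define VIRTIO_NET_F_GUEST_TSO4   7
      #define VIRTIO_NET_F_GUEST_TSO6   8
      #define VIRTIO_NET_F_GUEST_UFO   10
      #define VIRTIO_NET_F_HOST_TSO4   11
      #define VIRTIO_NET_F_HOST_TSO6   12
      #define VIRTIO_NET_F_HOST_UFO    14

      #define F(bit) (1ULL << (bit))

      /* Only the features both sides offer survive negotiation. */
      static uint64_t negotiate(uint64_t host_offers, uint64_t guest_wants)
      {
          return host_offers & guest_wants;
      }

      int main(void)
      {
          uint64_t host  = F(VIRTIO_NET_F_HOST_TSO4) | F(VIRTIO_NET_F_HOST_TSO6);
          uint64_t guest = F(VIRTIO_NET_F_HOST_TSO4) | F(VIRTIO_NET_F_HOST_UFO);
          uint64_t acked = negotiate(host, guest);

          printf("TSOv4 to host: %s\n", (acked & F(VIRTIO_NET_F_HOST_TSO4)) ? "yes" : "no");
          printf("UFO to host:   %s\n", (acked & F(VIRTIO_NET_F_HOST_UFO))  ? "yes" : "no");
          return 0;
      }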
  10. 04 Feb 2008, 3 commits
    • virtio: Tweak virtio_net defines · 34a48579
      Authored by Rusty Russell
      1) Turn GSO on virtio net into an all-or-nothing (keep checksumming
         separate).  Having multiple bits is a pain: if you can't support something
         you should handle it in software, which is still a performance win.
      
      2) Make VIRTIO_NET_HDR_GSO_ECN a flag in the header, so it can apply to
         IPv6 or v4.
      
      3) Rename VIRTIO_NET_F_NO_CSUM to VIRTIO_NET_F_CSUM (ie. means we do
         checksumming).
      
      4) Add csum and gso params to virtio_net to allow more testing.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
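      A small standalone C illustration of points 1 and 2: the GSO kind lives
      in gso_type and ECN is a flag ORed on top, so it applies to the IPv4 and
      IPv6 cases alike.  The numeric values mirror the virtio_net header
      defines; describe() is just an illustrative helper.

      #include <stdint.h>
      #include <stdio.h>

      #define VIRTIO_NET_HDR_GSO_NONE   0
      #define VIRTIO_NET_HDR_GSO_TCPV4  1
      #define VIRTIO_NET_HDR_GSO_UDP    3
      #define VIRTIO_NET_HDR_GSO_TCPV6  4
      #define VIRTIO_NET_HDR_GSO_ECN    0x80   /* flag, not a separate type */

      static void describe(uint8_t gso_type)
      {
          int ecn = gso_type & VIRTIO_NET_HDR_GSO_ECN;

          switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
          case VIRTIO_NET_HDR_GSO_TCPV4: printf("TSOv4%s\n", ecn ? " + ECN" : ""); break;
          case VIRTIO_NET_HDR_GSO_TCPV6: printf("TSOv6%s\n", ecn ? " + ECN" : ""); break;
          case VIRTIO_NET_HDR_GSO_UDP:   printf("UFO\n");                          break;
          default:                       printf("no GSO\n");                       break;
          }
      }

      int main(void)
      {
          describe(VIRTIO_NET_HDR_GSO_TCPV6 | VIRTIO_NET_HDR_GSO_ECN);
          describe(VIRTIO_NET_HDR_GSO_NONE);
          return 0;
      }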
    • virtio: Net header needs hdr_len · 50c8ea80
      Authored by Rusty Russell
      It's far easier to deal with packets if we don't have to parse the
      packet to figure out the header length to know how much to pull into
      the skb data.  Add the field to the virtio_net_hdr struct (and fix the
      spaces that somehow crept in there).
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
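      A tiny standalone C sketch of why hdr_len helps: the sender already
      knows where its protocol headers end, so it can record that length for
      the other side instead of forcing it to re-parse the packet.  The
      struct and helper below are hypothetical, just to show the arithmetic.

      #include <stdint.h>
      #include <stdio.h>

      struct pkt_layout {
          uint16_t eth_hlen;   /* Ethernet header bytes */
          uint16_t ip_hlen;    /* IP header bytes       */
          uint16_t l4_hlen;    /* TCP/UDP header bytes  */
      };

      /* What the sender would place in virtio_net_hdr.hdr_len. */
      static uint16_t fill_hdr_len(const struct pkt_layout *p)
      {
          return p->eth_hlen + p->ip_hlen + p->l4_hlen;
      }

      int main(void)
      {
          struct pkt_layout tcp4 = { 14, 20, 20 };   /* plain TCPv4, no options */

          /* The receiver can pull exactly this many bytes into linear data. */
          printf("hdr_len = %u\n", (unsigned)fill_hdr_len(&tcp4));
          return 0;
      }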
    • virtio: simplify config mechanism. · a586d4f6
      Authored by Rusty Russell
      Previously we used a type/len pair within the config space, but this
      seems overkill.  We now simply define a structure which represents the
      layout in the config space: the config space can now only be extended
      at the end.
      
      The main driver-visible changes:
      1) We indicate what fields are present with an explicit feature bit.
      2) Virtqueues are explicitly numbered, and not in the config space.
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
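      A standalone C sketch of the resulting scheme: the config space has one
      fixed struct layout that only grows at the end, and a feature bit (here
      VIRTIO_NET_F_MAC) tells the driver whether a given field is meaningful.
      The access code is illustrative rather than the kernel's config helpers.

      #include <stdint.h>
      #include <stdio.h>

      #define VIRTIO_NET_F_MAC  5   /* the mac field below is valid */

      struct virtio_net_config {
          uint8_t mac[6];           /* valid only if VIRTIO_NET_F_MAC negotiated */
          /* later revisions append further fields (e.g. a status word) here */
      } __attribute__((packed));

      int main(void)
      {
          struct virtio_net_config cfg = { .mac = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 } };
          uint64_t features = 1ULL << VIRTIO_NET_F_MAC;

          if (features & (1ULL << VIRTIO_NET_F_MAC))
              printf("MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
                     cfg.mac[0], cfg.mac[1], cfg.mac[2],
                     cfg.mac[3], cfg.mac[4], cfg.mac[5]);
          else
              printf("no MAC in config space, generate a random one\n");
          return 0;
      }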
  11. 23 Oct 2007, 1 commit
    • Net driver using virtio · 296f96fc
      Authored by Rusty Russell
      The network driver uses two virtqueues: one for input packets and one
      for output packets.  This has nice locking properties (ie. we don't
      need any locking between the recv and send paths).
      
      TODO:
      	1) Big packets.
      	2) Multi-client devices (maybe separate driver?).
      	3) Resolve freeing of old xmit skbs (Christian Borntraeger)
      Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Cc: netdev@vger.kernel.org
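      A toy standalone C model of the two-virtqueue split (not the kernel
      driver): the receive path touches only the input queue and the transmit
      path only the output queue, which is the locking property the commit
      mentions.  All names here are illustrative.

      #include <stdio.h>

      struct virtqueue { const char *name; int num_free; };

      struct virtnet_dev {
          struct virtqueue rx;   /* host -> guest packets */
          struct virtqueue tx;   /* guest -> host packets */
      };

      static void rx_path(struct virtnet_dev *dev) { printf("refill %s queue\n", dev->rx.name); }
      static void tx_path(struct virtnet_dev *dev) { printf("kick %s queue\n", dev->tx.name); }

      int main(void)
      {
          struct virtnet_dev dev = {
              .rx = { "input",  256 },
              .tx = { "output", 256 },
          };

          rx_path(&dev);   /* could run concurrently with ... */
          tx_path(&dev);   /* ... this, with no shared lock */
          return 0;
      }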