1. 08 Jun, 2013 5 commits
  2. 29 May, 2013 1 commit
  3. 28 Mar, 2013 1 commit
  4. 27 Mar, 2013 1 commit
  5. 28 Feb, 2013 1 commit
  6. 14 Feb, 2013 1 commit
    • net: Fix possible wrong checksum generation. · c9af6db4
      Pravin B Shelar committed
      Patch cef401de (net: fix possible wrong checksum
      generation) fixed a wrong checksum calculation, but it broke TSO by
      defining a new GSO type without a corresponding netdev feature.
      net_gso_ok() would not allow hardware checksum/segmentation
      offload of such packets without that feature.
      
      The following patch fixes both TSO and the wrong checksum, using the
      same logic Eric Dumazet used. It introduces a new flag,
      SKBTX_SHARED_FRAG, set if at least one frag can be modified by
      the user, but the SKBTX_SHARED_FRAG flag is kept in the skb shared
      info tx_flags rather than in gso_type.
      
      tx_flags is a better place than gso_type, since an skb can have
      shared frags without being a GSO packet. This decouples SHARED_FRAG
      from GSO, so there is no need to define a netdev feature for it.
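      
      As a hedged illustration (not the literal diff), the flag and its
      helper might look like this; the bit value is illustrative:
      
          /* Set in skb_shinfo(skb)->tx_flags when at least one frag may be
           * written by the user after the skb has been queued. */
          #define SKBTX_SHARED_FRAG (1 << 5)
          
          static inline bool skb_has_shared_frag(const struct sk_buff *skb)
          {
                  return skb_shinfo(skb)->tx_flags & SKBTX_SHARED_FRAG;
          }
      
      A path that must touch the payload (software checksumming, for
      example) can then test skb_has_shared_frag() and linearize the skb
      before writing into frags it might share with user space.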
      Signed-off-by: Pravin B Shelar <pshelar@nicira.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c9af6db4
  7. 28 Jan, 2013 1 commit
    • net: fix possible wrong checksum generation · cef401de
      Eric Dumazet committed
      Pravin Shelar mentioned that GSO could potentially generate
      a wrong TX checksum if the skb has fragments that are overwritten
      by the user between the checksum computation and transmit.
      
      He suggested linearizing skbs, but this extra copy can be
      avoided for normal TCP skbs cooked by tcp_sendmsg().
      
      This patch introduces a new SKB_GSO_SHARED_FRAG flag, set
      in skb_shinfo(skb)->gso_type if at least one frag can be
      modified by the user.
      
      Typical sources of such possible overwrites are {vm}splice(),
      sendfile(), and macvtap/tun/virtio_net drivers.
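      
      As a hedged sketch of the pattern (the call site here is illustrative,
      not the exact diff), a zero-copy path would tag the skb at the moment
      it attaches a user-owned page:
      
          /* Attach a page the user can still write to, then mark the skb so
           * that checksum/GSO code knows the payload may change under it. */
          skb_fill_page_desc(skb, i, page, offset, copy);
          skb_shinfo(skb)->gso_type |= SKB_GSO_SHARED_FRAG;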
      
      Tested:
      
      $ netperf -H 7.7.8.84
      MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
      7.7.8.84 () port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    10.00    3959.52
      
      $ netperf -H 7.7.8.84 -t TCP_SENDFILE
      TCP SENDFILE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 7.7.8.84 ()
      port 0 AF_INET
      Recv   Send    Send
      Socket Socket  Message  Elapsed
      Size   Size    Size     Time     Throughput
      bytes  bytes   bytes    secs.    10^6bits/sec
      
       87380  16384  16384    10.00    3216.80
      
      Performance of the SENDFILE test is impacted by the extra allocation
      and copy, and by the fact that it uses order-0 pages, while the
      TCP_STREAM test uses bigger pages.
      Reported-by: Pravin Shelar <pshelar@nicira.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cef401de
  8. 13 Aug, 2012 1 commit
  9. 08 Jun, 2012 1 commit
  10. 12 May, 2012 1 commit
  11. 02 May, 2012 5 commits
  12. 14 Feb, 2012 1 commit
  13. 21 Dec, 2011 1 commit
  14. 24 Nov, 2011 1 commit
  15. 21 Oct, 2011 5 commits
  16. 21 Sep, 2011 1 commit
  17. 16 Sep, 2011 1 commit
  18. 07 Jul, 2011 1 commit
  19. 12 Jun, 2011 1 commit
    • virtio_net: introduce VIRTIO_NET_HDR_F_DATA_VALID · 10a8d94a
      Jason Wang committed
      There's no need for the guest to validate a checksum that has already
      been validated by the host NICs. So this patch introduces a new flag,
      VIRTIO_NET_HDR_F_DATA_VALID, which is used to bypass checksum
      examination in the guest. The backend (tap/macvtap) may set this flag
      when it meets skbs with CHECKSUM_UNNECESSARY, to save CPU utilization.
      
      No feature negotiation is needed, as old drivers simply ignore this flag.
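      
      A hedged sketch of both halves of the protocol (illustrative, not the
      exact diff):
      
          /* Backend (tap/macvtap), while building the virtio-net header:
           * tell the guest the host already verified this checksum. */
          if (skb->ip_summed == CHECKSUM_UNNECESSARY)
                  hdr->flags |= VIRTIO_NET_HDR_F_DATA_VALID;
          
          /* Guest (virtio_net), on receive: trust the host's verification
           * and skip checksum examination. */
          if (hdr->flags & VIRTIO_NET_HDR_F_DATA_VALID)
                  skb->ip_summed = CHECKSUM_UNNECESSARY;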
      
      Iperf shows a 12%-30% performance improvement for UDP traffic. For
      TCP, there is no difference when GRO is on, as GRO produces skbs with
      a partial checksum. But when GRO is disabled, a 20% or even higher
      improvement can be measured with netperf.
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Acked-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      10a8d94a
  20. 08 Mar, 2011 1 commit
  21. 28 Jan, 2011 1 commit
  22. 14 Jan, 2011 1 commit
    • net: remove dev_txq_stats_fold() · 1ac9ad13
      Eric Dumazet committed
      After recent changes (percpu stats on vlan/tunnels...), we no longer
      need the per struct netdev_queue tx_bytes/tx_packets/tx_dropped counters.
      
      The only remaining users are ixgbe, sch_teql, gianfar & macvlan:
      
      1) ixgbe can be converted to use the existing tx_ring counters.
      
      2) macvlan incremented txq->tx_dropped; it can use the
      dev->stats.tx_dropped counter instead.
      
      3) sch_teql: almost a revert of ab35cd4b (Use net_device internal stats).
          Now we have ndo_get_stats64(), use it, even for "unsigned long"
      fields (no need to bring back a struct net_device_stats); a sketch of
      the ndo_get_stats64 pattern follows this list.
      
      4) gianfar adds a stats structure per tx queue to hold
      tx_bytes/tx_packets.
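      
      As a hedged sketch (not part of the original patch), the
      ndo_get_stats64 pattern the commit points drivers to might look like
      this; "foo" and its per-queue fields are hypothetical:
      
          /* Fold per-tx-queue counters into the 64bit stats structure;
           * the signature matches the ndo_get_stats64 op of this era. */
          static struct rtnl_link_stats64 *foo_get_stats64(struct net_device *dev,
                                                           struct rtnl_link_stats64 *stats)
          {
                  struct foo_priv *priv = netdev_priv(dev);  /* hypothetical */
                  int i;
          
                  for (i = 0; i < priv->num_tx_queues; i++) {
                          stats->tx_bytes   += priv->tx_ring[i].bytes;
                          stats->tx_packets += priv->tx_ring[i].packets;
                  }
                  stats->tx_dropped = dev->stats.tx_dropped;
                  return stats;
          }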
      
      This removes a lockdep warning (and possible lockup) in the rndis
      gadget, which calls dev_get_stats() from hard IRQ context.
      
      Ref: http://www.spinics.net/lists/netdev/msg149202.html
      Reported-by: Neil Jones <neiljay@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Jarek Poplawski <jarkao2@gmail.com>
      CC: Alexander Duyck <alexander.h.duyck@intel.com>
      CC: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
      CC: Sandeep Gopalpet <sandeep.kumar@freescale.com>
      CC: Michal Nazarewicz <mina86@mina86.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1ac9ad13
  23. 17 Dec, 2010 1 commit
  24. 17 Aug, 2010 1 commit
  25. 23 Jul, 2010 1 commit
    • macvtap: Limit packet queue length · 8a35747a
      Herbert Xu committed
      Mark Wagner reported OOM symptoms when sending UDP traffic over
      a macvtap link to a kvm receiver.
      
      This appears to be caused by the fact that macvtap packet queues
      are unlimited in length.  This means that if the receiver can't
      keep up with the rate of flow, then we will hit OOM. Of course
      it gets worse if the OOM killer then decides to kill the receiver.
      
      This patch imposes a cap on the packet queue length, in the same
      way as the tuntap driver, using the device TX queue length.
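      
      A hedged sketch of the check (simplified relative to the actual
      macvtap code, which looks the receiving queue up first):
      
          /* On the forward path: once the socket receive queue holds as
           * many packets as the device TX queue length allows, drop
           * instead of queueing without bound. */
          if (skb_queue_len(&q->sk.sk_receive_queue) >= dev->tx_queue_len) {
                  kfree_skb(skb);
                  return NET_RX_DROP;
          }
          skb_queue_tail(&q->sk.sk_receive_queue, skb);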
      
      Please note that macvtap currently has no way of giving congestion
      notification; that means the software device TX queue cannot be
      used, and packets will always be dropped once the macvtap driver
      queue fills up.
      
      This shouldn't be a great problem for the scenario where macvtap
      is used to feed a kvm receiver, as the traffic is most likely
      external in origin so congestion notification can't be applied
      anyway.
      
      Of course, if anybody decides to complain about guest-to-guest
      UDP packet loss down the track, then we may have to revisit this.
      
      Incidentally, this patch also fixes a real memory leak when
      macvtap_get_queue fails.
      
      Chris Wright noticed that for this patch to work, we need a
      non-zero TX queue length.  This patch includes his work to change
      the default macvtap TX queue length to 500.
      Reported-by: Mark Wagner <mwagner@redhat.com>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Chris Wright <chrisw@sous-sol.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8a35747a
  26. 11 Jul, 2010 1 commit
  27. 04 May, 2010 1 commit
  28. 02 May, 2010 1 commit
    • net: sock_def_readable() and friends RCU conversion · 43815482
      Eric Dumazet committed
      The sk_callback_lock rwlock actually protects the sk->sk_sleep
      pointer, so we need two atomic operations (and the associated
      dirtying) per incoming packet.
      
      An RCU conversion is pretty much needed:
      
      1) Add a new structure, called "struct socket_wq" to hold all fields
      that will need rcu_read_lock() protection (currently: a
      wait_queue_head_t and a struct fasync_struct pointer).
      
      [Future patch will add a list anchor for wakeup coalescing]
      
      2) Attach one of such structure to each "struct socket" created in
      sock_alloc_inode().
      
      3) Respect the RCU grace period when freeing a "struct socket_wq"
      
      4) Replace the sk_sleep pointer in "struct sock" with sk_wq, a pointer
      to "struct socket_wq"
      
      5) Change the sk_sleep() function to use the new sk->sk_wq instead of
      sk->sk_sleep
      
      6) Change sk_has_sleeper() to wq_has_sleeper(), which must be used
      inside an rcu_read_lock() section.
      
      7) Change all sk_has_sleeper() callers to (see the sketch after this
      list):
        - Use rcu_read_lock() instead of read_lock(&sk->sk_callback_lock)
        - Use wq_has_sleeper() to decide whether to wake up tasks.
        - Use rcu_read_unlock() instead of read_unlock(&sk->sk_callback_lock)
      
      8) sock_wake_async() is modified to use rcu protection as well.
      
      9) Exceptions:
        macvtap, drivers/net/tun.c and af_unix use an embedded "struct
      socket_wq" instead of dynamically allocated ones. They don't need RCU
      freeing.
      
      Some cleanups or follow-ups are probably needed (a possible
      sk_callback_lock conversion to a spinlock, for example...).
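      
      A hedged sketch of the converted wakeup pattern from step 7
      (simplified relative to the actual sock_def_readable(), which also
      handles async wakeups):
      
          static void sock_def_readable(struct sock *sk, int len)
          {
                  struct socket_wq *wq;
          
                  rcu_read_lock();          /* was read_lock(&sk->sk_callback_lock) */
                  wq = rcu_dereference(sk->sk_wq);
                  if (wq_has_sleeper(wq))   /* was sk_has_sleeper(sk) */
                          wake_up_interruptible(&wq->wait);
                  rcu_read_unlock();        /* was read_unlock(&sk->sk_callback_lock) */
          }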
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      43815482