1. 12 Aug 2011, 2 commits
    • net: cleanup some rcu_dereference_raw · 33d480ce
      Committed by Eric Dumazet
      The RCU API has been completed, and rcu_access_pointer() or
      rcu_dereference_protected() are better choices than the generic
      rcu_dereference_raw().
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
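The distinction this commit relies on can be sketched in a toy Python model (illustrative only, not kernel code; the lock flag stands in for lockdep state): rcu_access_pointer() only fetches the pointer value without implying any protection, rcu_dereference_protected() is the update-side accessor and should assert the caller holds the protecting lock, while rcu_dereference_raw() performs no checking at all.

```python
# Toy model of the three RCU accessors (illustrative, not kernel code).
class RcuPointer:
    def __init__(self, value=None):
        self._value = value
        self.update_lock_held = False  # stands in for lockdep state

    def rcu_access_pointer(self):
        """Fetch the pointer value only (e.g. for a NULL test); the
        pointee is not dereferenced, so no protection is required."""
        return self._value

    def rcu_dereference_protected(self):
        """Update-side dereference: legal only while the stated
        protection (e.g. the writer lock) is held."""
        assert self.update_lock_held, "caller must hold the update-side lock"
        return self._value

    def rcu_dereference_raw(self):
        """No checking at all -- which is why the commit replaces it
        with one of the two explicit accessors above."""
        return self._value

p = RcuPointer("entry")
# NULL-style test: rcu_access_pointer() is enough, no lock required.
assert p.rcu_access_pointer() is not None
# Update side: rcu_dereference_protected() demands the lock.
p.update_lock_held = True
assert p.rcu_dereference_protected() == "entry"
```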
    • neigh: reduce arp latency · cd28ca0a
      Committed by Eric Dumazet
      Remove the artificial HZ latency on arp resolution.
      
      Instead of firing a timer in one jiffy (up to 10 ms if HZ=100), let's
      send the ARP message immediately.
      
      Before the patch:
      
      # arp -d 192.168.20.108 ; ping -c 3 192.168.20.108
      PING 192.168.20.108 (192.168.20.108) 56(84) bytes of data.
      64 bytes from 192.168.20.108: icmp_seq=1 ttl=64 time=9.91 ms
      64 bytes from 192.168.20.108: icmp_seq=2 ttl=64 time=0.065 ms
      64 bytes from 192.168.20.108: icmp_seq=3 ttl=64 time=0.061 ms
      
      After the patch:
      
      $ arp -d 192.168.20.108 ; ping -c 3 192.168.20.108
      PING 192.168.20.108 (192.168.20.108) 56(84) bytes of data.
      64 bytes from 192.168.20.108: icmp_seq=1 ttl=64 time=0.152 ms
      64 bytes from 192.168.20.108: icmp_seq=2 ttl=64 time=0.064 ms
      64 bytes from 192.168.20.108: icmp_seq=3 ttl=64 time=0.074 ms
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 11 Aug 2011, 1 commit
  3. 10 Aug 2011, 1 commit
  4. 07 Aug 2011, 1 commit
    • net: Compute protocol sequence numbers and fragment IDs using MD5. · 6e5714ea
      Committed by David S. Miller
      Computers have become a lot faster since we compromised on the
      partial MD4 hash which we currently use for performance reasons.
      
      MD5 is a much safer choice, and is in line with both RFC1948 and
      other ISS generators (OpenBSD, Solaris, etc.).
      
      Furthermore, only having 24-bits of the sequence number be truly
      unpredictable is a very serious limitation.  So the periodic
      regeneration and 8-bit counter have been removed.  We compute and
      use a full 32-bit sequence number.
      
      For ipv6, DCCP was found to use a 32-bit truncated initial sequence
      number (it needs 43-bits) and that is fixed here as well.
      Reported-by: Dan Kaminsky <dan@doxpara.com>
      Tested-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
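The RFC1948-style scheme this commit adopts can be sketched in Python (a model of the idea, not the kernel implementation; the helper name, the secret, and the clock parameter are made up for illustration): hash the connection 4-tuple together with a boot-time secret using MD5, take a full 32 bits of the digest as the unpredictable part, and add a clock component so sequence numbers still advance over time.

```python
# Sketch of an RFC1948-style ISN generator using MD5 (illustrative;
# in the kernel this logic lives in secure_tcp_sequence_number()).
import hashlib
import struct

SECRET = b"boot-time random secret"  # hypothetical per-boot secret

def isn(saddr, daddr, sport, dport, clock=0):
    """Full 32-bit unpredictable part plus a clock offset, mod 2**32.
    Addresses are IPv4 addresses as integers, ports are 16-bit."""
    msg = struct.pack("!IIHH", saddr, daddr, sport, dport) + SECRET
    digest = hashlib.md5(msg).digest()
    unpredictable = struct.unpack("!I", digest[:4])[0]
    return (unpredictable + clock) & 0xFFFFFFFF

a = isn(0xC0A8140A, 0xC0A8146C, 12345, 80)
b = isn(0xC0A8140A, 0xC0A8146C, 12345, 80)
assert a == b                  # deterministic for one connection + secret
assert 0 <= a <= 0xFFFFFFFF    # full 32-bit range, no 24-bit cap
```

Unlike the old scheme, all 32 bits come from the keyed hash, so there is no 8-bit counter or periodic regeneration to work around.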
  5. 02 Aug 2011, 2 commits
  6. 28 Jul 2011, 1 commit
    • net: add IFF_SKB_TX_SHARED flag to priv_flags · d8873315
      Committed by Neil Horman
      Pktgen attempts to transmit shared skbs to net devices, which some
      drivers cannot handle because they keep state information in skbs.
      This patch adds a flag marking drivers as able to handle shared skbs
      in their tx path.  Drivers default to not supporting this, but
      calling ether_setup enables the flag, as 90% of the drivers calling
      ether_setup touch real hardware and can handle shared skbs.  A
      subsequent patch will audit drivers to ensure that the flag is set
      properly.
      Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
      Reported-by: Jiri Pirko <jpirko@redhat.com>
      CC: Robert Olsson <robert.olsson@its.uu.se>
      CC: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Alexey Dobriyan <adobriyan@gmail.com>
      CC: David S. Miller <davem@davemloft.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
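The opt-in logic can be modeled in a few lines of Python (the flag and function names IFF_SKB_TX_SHARED and ether_setup come from the commit; the Device class, the bit value, and the pktgen check are made up for illustration):

```python
# Toy model of the priv_flags opt-in (illustrative, not kernel code).
IFF_SKB_TX_SHARED = 0x1  # hypothetical bit value

class Device:
    def __init__(self):
        self.priv_flags = 0  # safe default: cannot take shared skbs

def ether_setup(dev):
    # Most drivers calling ether_setup touch real hardware and can
    # handle shared skbs, so ether_setup opts them in.
    dev.priv_flags |= IFF_SKB_TX_SHARED

def pktgen_can_use(dev):
    """pktgen may only hand a shared skb to a device with the flag set."""
    return bool(dev.priv_flags & IFF_SKB_TX_SHARED)

d = Device()
assert not pktgen_can_use(d)  # default: opted out
ether_setup(d)
assert pktgen_can_use(d)      # ether_setup opted the driver in
```

Defaulting the bit to off means a driver that was never audited stays protected until someone explicitly enables it.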
  7. 27 Jul 2011, 1 commit
  8. 26 Jul 2011, 1 commit
  9. 23 Jul 2011, 1 commit
    • net: allow netif_carrier to be called safely from IRQ · 1821f7cd
      Committed by stephen hemminger
      As reported by Ben Greear and Francois Romieu, the code path in
      the netif_carrier code leads it to try to cancel a delayed work
      item only to re-enable it immediately:
      netif_carrier_on
      -> linkwatch_fire_event
         -> linkwatch_schedule_work
            -> cancel_delayed_work
               -> del_timer_sync
      
      If __cancel_delayed_work is used instead, there is no waiting for a
      running linkwatch_event.
      
      There is a race with a running linkwatch_event re-scheduling itself,
      but it is harmless to schedule an extra scan of the linkwatch queue.
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 22 Jul 2011, 2 commits
  11. 18 Jul 2011, 2 commits
  12. 17 Jul 2011, 4 commits
  13. 15 Jul 2011, 3 commits
  14. 14 Jul 2011, 1 commit
    • net: Embed hh_cache inside of struct neighbour. · f6b72b62
      Committed by David S. Miller
      Now that there is a one-to-one correspondence between neighbour
      and hh_cache entries, we no longer need:
      
      1) dynamic allocation
      2) attachment to dst->hh
      3) refcounting
      
      Initialization of the hh_cache entry is indicated by hh_len
      being non-zero, and such initialization is always done with
      the neighbour's lock held as a writer.
      Signed-off-by: David S. Miller <davem@davemloft.net>
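The initialization protocol described above can be modeled in Python (illustrative only; field names hh and hh_len mirror the kernel structs, everything else is made up): the hh_cache is embedded directly in the neighbour, hh_len == 0 means "not yet initialized", and initialization is only legal while holding the neighbour's lock as a writer.

```python
# Toy model of the embedded hh_cache (illustrative, not kernel code).
class HHCache:
    def __init__(self):
        self.hh_len = 0        # 0 == not yet initialised
        self.hh_data = b""

class Neighbour:
    def __init__(self):
        self.hh = HHCache()    # embedded: no allocation, no refcount
        self.write_locked = False  # stands in for the neighbour lock

def hh_init(neigh, header):
    """Initialisation is only legal under the neighbour write lock,
    and is signalled to readers by hh_len becoming non-zero."""
    assert neigh.write_locked, "must hold the neighbour lock as writer"
    neigh.hh.hh_data = header
    neigh.hh.hh_len = len(header)  # published last: marks 'initialised'

n = Neighbour()
assert n.hh.hh_len == 0            # readers see 'not initialised'
n.write_locked = True
hh_init(n, b"\x00\x11\x22\x33\x44\x55")
assert n.hh.hh_len == 6            # non-zero: header is now valid
```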
  15. 13 Jul 2011, 2 commits
  16. 11 Jul 2011, 2 commits
  17. 09 Jul 2011, 1 commit
  18. 07 Jul 2011, 1 commit
    • skbuff: skb supports zero-copy buffers · a6686f2f
      Committed by Shirley Ma
      This patch adds userspace buffer support in skb shared info. A new
      struct skb_ubuf_info is needed to maintain the userspace buffer
      argument and index; a callback is used to notify userspace to release
      the buffers once the lower device has completed DMA (the last
      reference to that skb is gone).
      
      If any userspace app still references these userspace buffers, the
      buffers are copied into the kernel. This way we can prevent userspace
      apps from holding these buffers for too long.
      
      Use destructor_arg to point to the userspace buffer info; a new tx
      flag, SKBTX_DEV_ZEROCOPY, is added for the zero-copy buffer check.
      Signed-off-by: Shirley Ma <xma@...ibm.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
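The completion-notification flow can be sketched as a toy Python model (illustrative; SKBTX_DEV_ZEROCOPY, destructor_arg, and the ubuf-info shape mirror the commit, while the bit value, class names, and kfree_skb model are made up): the callback fires only when the last reference to the skb is dropped, telling userspace its pages can be reused.

```python
# Toy model of the zero-copy completion callback (illustrative).
SKBTX_DEV_ZEROCOPY = 0x8  # hypothetical bit value

class UbufInfo:
    def __init__(self, callback, arg):
        self.callback = callback   # how userspace gets notified
        self.arg = arg             # identifies the userspace buffers

class Skb:
    def __init__(self, ubuf=None):
        self.tx_flags = SKBTX_DEV_ZEROCOPY if ubuf else 0
        self.destructor_arg = ubuf  # points at the ubuf info
        self.refcount = 1

def kfree_skb(skb):
    """When the last reference is dropped and the skb carries
    userspace pages, fire the callback so userspace can reuse them."""
    skb.refcount -= 1
    if skb.refcount == 0 and skb.tx_flags & SKBTX_DEV_ZEROCOPY:
        skb.destructor_arg.callback(skb.destructor_arg.arg)

released = []
skb = Skb(UbufInfo(released.append, "buf-0"))
skb.refcount += 1          # e.g. the device is still doing DMA
kfree_skb(skb)             # not the last reference: no notification
assert released == []
kfree_skb(skb)             # DMA done, last reference gone
assert released == ["buf-0"]
```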
  19. 06 Jul 2011, 2 commits
  20. 04 Jul 2011, 2 commits
  21. 02 Jul 2011, 2 commits
    • ipv6: Don't put artificial limit on routing table size. · 957c665f
      Committed by David S. Miller
      IPV6, unlike IPV4, doesn't have a routing cache.
      
      Routing table entries, as well as clones made in response
      to route lookup requests, all live in the same table.  And
      all of these things are together collected in the destination
      cache table for ipv6.
      
      This means that routing table entries count against the garbage
      collection limits, even though such entries cannot ever be reclaimed
      and are added explicitly by the administrator (rather than being
      created in response to lookups).
      
      Therefore it makes no sense to count ipv6 routing table entries
      against the GC limits.
      
      Add a DST_NOCOUNT destination cache entry flag, and skip the counting
      if it is set.  Use this flag bit in ipv6 when adding routing table
      entries.
      Signed-off-by: David S. Miller <davem@davemloft.net>
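The accounting skip is simple enough to model in Python (illustrative; only the flag name DST_NOCOUNT comes from the commit, the bit value and table class are made up): entries added with the flag never contribute to the count the garbage collector compares against its limits.

```python
# Toy model of the DST_NOCOUNT accounting skip (illustrative).
DST_NOCOUNT = 0x4  # hypothetical bit value

class DstTable:
    def __init__(self):
        self.gc_entries = 0  # what the GC compares against its limits

    def add(self, flags=0):
        # Administrator-added ipv6 routing table entries pass
        # DST_NOCOUNT, so they never push the table toward GC limits;
        # clones created by lookups are counted as before.
        if not (flags & DST_NOCOUNT):
            self.gc_entries += 1

t = DstTable()
t.add()                    # clone created by a lookup: counted
t.add(flags=DST_NOCOUNT)   # static routing table entry: not counted
assert t.gc_entries == 1
```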
    • rtnl: provide link dump consistency info · 4e985ada
      Committed by Thomas Graf
      This patch adds a change sequence counter to each net namespace
      which is bumped whenever a netdevice is added or removed from
      the list. If such a change occurred while a link dump took place,
      the dump will have the NLM_F_DUMP_INTR flag set in the first
      message which has been interrupted and in all subsequent messages
      of the same dump.
      
      Note that links may still be modified or renamed while a dump is
      taking place, but we can guarantee that userspace receives a
      complete list of links and does not miss any.
      
      Testing:
      I have added 500 VLAN netdevices to make sure the dump is split
      over multiple messages. Then while continuously dumping links in
      one process I also continuously deleted and re-added a dummy
      netdevice in another process. Multiple dumps per second had
      the NLM_F_DUMP_INTR flag set.
      
      I guess we can wait for Johannes' patch to hit net-next via the
      wireless tree.  I just wanted to give this some testing right away.
      Signed-off-by: Thomas Graf <tgraf@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
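The sequence-counter check can be sketched as a toy Python model (illustrative; only the NLM_F_DUMP_INTR semantics come from the commit, and the one-shot dump below is a simplification of the real multi-message case, where only messages from the interruption onward are flagged): the namespace bumps a counter on every netdevice add/remove, and a dump compares the counter against the value recorded when it started.

```python
# Toy model of the dump-consistency check (illustrative).
NLM_F_DUMP_INTR = 0x10  # hypothetical bit value

class NetNS:
    def __init__(self):
        self.dev_change_seq = 0  # bumped on netdevice add/remove
        self.links = []

    def add_link(self, name):
        self.links.append(name)
        self.dev_change_seq += 1

def dump_links(ns, start_seq):
    """Emit one message per link; if the namespace changed since the
    dump began, flag the messages so userspace knows to re-dump."""
    flags = 0 if ns.dev_change_seq == start_seq else NLM_F_DUMP_INTR
    return [(name, flags) for name in ns.links]

ns = NetNS()
ns.add_link("eth0")
seq = ns.dev_change_seq          # dump starts here
assert dump_links(ns, seq) == [("eth0", 0)]   # consistent dump
ns.add_link("dummy0")            # change while "dumping"
assert all(f == NLM_F_DUMP_INTR for _, f in dump_links(ns, seq))
```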
  22. 22 Jun 2011, 3 commits
  23. 21 Jun 2011, 1 commit
  24. 14 Jun 2011, 1 commit