1. 06 Jul, 2011 1 commit
  2. 24 Jun, 2011 1 commit
  3. 22 Jun, 2011 1 commit
  4. 09 Jun, 2011 1 commit
    • inetpeer: remove unused list · 4b9d9be8
      Eric Dumazet authored
      Andi Kleen and Tim Chen reported huge contention on the inetpeer
      unused_peers.lock, on a memcached workload on a 40-core machine with
      the route cache disabled.
      
      It appears we constantly flip a peer's refcnt between 0 and 1, and
      must insert/remove peers from unused_peers.list while holding a
      contended spinlock.
      
      Remove this list completely and perform a garbage collection on-the-fly,
      at lookup time, using the expired nodes we met during the tree
      traversal.
      
      This removes a lot of code, makes locking more standard, and obsoletes
      two sysctls (inet_peer_gc_mintime and inet_peer_gc_maxtime). This also
      removes two pointers in inet_peer structure.
      
      There is still a false-sharing effect because refcnt is in the first
      cache line of the object [where the links and keys used by lookups
      are located]; we might move it to the end of the inet_peer structure
      so that this first cache line stays mostly read-only for cpus.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Andi Kleen <andi@firstfloor.org>
      CC: Tim Chen <tim.c.chen@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4b9d9be8
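
      The commit above replaces a dedicated "unused" list with garbage
      collection performed during the lookup walk itself. A minimal, hedged
      sketch of that idea in plain C follows; peer_node, gc_stack, PEER_TTL
      and peer_expired() are illustrative names, not the kernel's inetpeer
      implementation, and locking/unlinking are left to the caller.

      #include <stdbool.h>
      #include <stddef.h>
      #include <time.h>

      struct peer_node {
          struct peer_node *left, *right;
          unsigned int key;
          int refcnt;                 /* 0 means "unused", eligible for GC   */
          time_t dtime;               /* when the refcount last dropped to 0 */
      };

      #define GC_STACK_MAX 16
      #define PEER_TTL     60         /* seconds an unused peer may linger */

      static bool peer_expired(const struct peer_node *p, time_t now)
      {
          return p->refcnt == 0 && now - p->dtime > PEER_TTL;
      }

      /* Walk the tree looking for key; remember expired nodes met on the way,
       * so the caller can unlink and free them after the lookup instead of
       * maintaining a global unused_peers list under a contended spinlock. */
      struct peer_node *peer_lookup(struct peer_node *root, unsigned int key,
                                    struct peer_node **gc_stack, int *gc_cnt,
                                    time_t now)
      {
          struct peer_node *p = root;

          *gc_cnt = 0;
          while (p) {
              if (peer_expired(p, now) && *gc_cnt < GC_STACK_MAX)
                  gc_stack[(*gc_cnt)++] = p;
              if (key == p->key)
                  return p;
              p = key < p->key ? p->left : p->right;
          }
          return NULL;
      }
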
  5. 11 May, 2011 1 commit
  6. 09 May, 2011 3 commits
  7. 07 May, 2011 1 commit
  8. 29 Apr, 2011 1 commit
    • inet: add RCU protection to inet->opt · f6d8bd05
      Eric Dumazet authored
      We lack proper synchronization to manipulate inet->opt ip_options.
      
      The problem is that ip_make_skb() calls ip_setup_cork(), and
      ip_setup_cork() possibly makes a copy of ipc->opt (struct ip_options)
      without any protection against another thread manipulating inet->opt.
      
      Another thread can change the inet->opt pointer and free the old one
      under us.
      
      Use RCU to protect inet->opt (changed to inet->inet_opt).
      
      Instead of handling atomic refcounts, just copy ip_options when
      necessary, to avoid cache line dirtying.
      
      We can't insert an rcu_head in struct ip_options since it's included
      in skb->cb[], so this patch is large because I had to introduce a new
      ip_options_rcu structure.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f6d8bd05
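
      A hedged sketch of the reader/updater pattern the commit describes,
      using standard RCU primitives. The my_ip_options_rcu/my_sock types and
      both helpers are illustrative stand-ins, not the real
      ip_setup_cork()/do_ip_setsockopt() code, and the writer-side lock is
      assumed to be held by the caller of set_options().

      #include <linux/errno.h>
      #include <linux/rcupdate.h>
      #include <linux/slab.h>
      #include <linux/string.h>

      struct my_ip_options_rcu {
          struct rcu_head rcu;
          int optlen;
          unsigned char data[40];
      };

      struct my_sock {
          struct my_ip_options_rcu __rcu *inet_opt;
      };

      /* Reader: copy the options under rcu_read_lock(); no refcount needed. */
      static int copy_options(struct my_sock *msk, struct my_ip_options_rcu *dst)
      {
          struct my_ip_options_rcu *opt;
          int ret = 0;

          rcu_read_lock();
          opt = rcu_dereference(msk->inet_opt);
          if (opt)
              memcpy(dst, opt, sizeof(*opt));
          else
              ret = -ENOENT;
          rcu_read_unlock();
          return ret;
      }

      /* Updater: publish the new pointer, free the old one after a grace
       * period so readers never see it disappear under them. */
      static void set_options(struct my_sock *msk, struct my_ip_options_rcu *new)
      {
          struct my_ip_options_rcu *old;

          old = rcu_dereference_protected(msk->inet_opt, 1); /* lock held */
          rcu_assign_pointer(msk->inet_opt, new);
          if (old)
              kfree_rcu(old, rcu);
      }
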
  9. 30 Mar, 2011 1 commit
  10. 02 Mar, 2011 1 commit
    • inet: Add ip_make_skb and ip_finish_skb · 1c32c5ad
      Herbert Xu authored
      This patch adds the helper ip_make_skb which is like ip_append_data
      and ip_push_pending_frames all rolled into one, except that it does
      not send the skb produced.  The sending part is carried out by
      ip_send_skb, which the transport protocol can call after it has
      tweaked the skb.
      
      It is meant to be called in cases where corking is not used and
      should have a one-to-one correspondence to sendmsg.
      
      This patch also adds the helper ip_finish_skb, which is meant to
      replace ip_push_pending_frames when corking is required.
      Previously the protocol stack would peek at the socket write
      queue and add its header to the first packet.  With ip_finish_skb,
      the protocol stack can directly operate on the final skb instead,
      just like the non-corking case with ip_make_skb.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1c32c5ad
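
      A hedged sketch of the intended call pattern for a corkless transport
      path. The parameter lists are deliberately simplified: build_ip_skb()
      and send_ip_skb() stand in for ip_make_skb() and ip_send_skb(), whose
      real signatures take a getfrag callback, an ipcm_cookie, a route and
      flags that are elided here.

      #include <linux/err.h>
      #include <linux/skbuff.h>
      #include <linux/udp.h>

      /* Simplified stand-ins for ip_make_skb()/ip_send_skb(). */
      struct sk_buff *build_ip_skb(struct sock *sk, void *payload, size_t len);
      int send_ip_skb(struct sk_buff *skb);

      static int udp_like_sendmsg(struct sock *sk, void *payload, size_t len)
      {
          struct sk_buff *skb;
          struct udphdr *uh;

          /* One call replaces ip_append_data() + ip_push_pending_frames()
           * when the socket is not corked; nothing is queued on the socket. */
          skb = build_ip_skb(sk, payload, len);
          if (IS_ERR_OR_NULL(skb))
              return skb ? PTR_ERR(skb) : 0;

          /* The transport tweaks the final skb directly... */
          uh = udp_hdr(skb);
          uh->len = htons(len + sizeof(*uh));
          uh->check = 0;

          /* ...and only then hands it to the IP layer for transmission. */
          return send_ip_skb(skb);
      }
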
  11. 13 Dec, 2010 1 commit
    • ipv4: Don't pre-seed hoplimit metric. · 323e126f
      David S. Miller authored
      Always go through a new ip4_dst_hoplimit() helper, just like ipv6.
      
      This allowed several simplifications:
      
      1) The interim dst_metric_hoplimit() can go as it's no longer
         used.
      
      2) The sysctl_ip_default_ttl entry no longer needs to use
         ipv4_doint_and_flush, since the sysctl is not cached in
         routing cache metrics any longer.
      
      3) ipv4_doint_and_flush no longer needs to be exported and
         therefore can be marked static.
      
      When ipv4_doint_and_flush_strategy was removed some time ago,
      the external declaration in ip.h was mistakenly left around
      so kill that off too.
      
      We have to move the sysctl_ip_default_ttl declaration into ipv4's
      route cache definition header net/route.h, because net/ip.h (where
      the declaration currently lives) has a back dependency on
      net/route.h.
      Signed-off-by: David S. Miller <davem@davemloft.net>
      323e126f
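
      A hedged sketch of roughly what such a helper boils down to: prefer a
      per-route RTAX_HOPLIMIT metric and fall back to sysctl_ip_default_ttl
      at lookup time, so the sysctl no longer has to be cached in (and
      flushed from) routing cache metrics. The _sketch suffix marks this as
      an approximation of ip4_dst_hoplimit(), not the exact kernel code.

      #include <net/dst.h>
      #include <net/route.h>

      static inline int ip4_dst_hoplimit_sketch(const struct dst_entry *dst)
      {
          int hoplimit = dst_metric(dst, RTAX_HOPLIMIT);

          if (hoplimit == 0)
              hoplimit = sysctl_ip_default_ttl;   /* default TTL, e.g. 64 */
          return hoplimit;
      }
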
  12. 26 Oct, 2010 1 commit
  13. 24 Sep, 2010 1 commit
  14. 19 Aug, 2010 1 commit
  15. 01 Jul, 2010 1 commit
    • snmp: 64bit ipstats_mib for all arches · 4ce3c183
      Eric Dumazet authored
      /proc/net/snmp and /proc/net/netstat expose SNMP counters.
      
      The width of these counters is either 32 or 64 bits, depending on the
      size of "unsigned long" in the kernel.

      This means a user program parsing these files must already be
      prepared to deal with 64-bit values, regardless of whether the
      program itself is 32-bit or 64-bit.
      
      This patch introduces 64bit snmp values for IPSTAT mib, where some
      counters can wrap pretty fast if they are 32bit wide.
      
      # netstat -s|egrep "InOctets|OutOctets"
          InOctets: 244068329096
          OutOctets: 244069348848
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      4ce3c183
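
      The user-space implication is that counters should always be parsed
      as 64-bit integers. A minimal, hedged C example (no error handling,
      only the "Ip:" line) that reads /proc/net/snmp with strtoull():

      #include <ctype.h>
      #include <inttypes.h>
      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      int main(void)
      {
          char line[1024];
          FILE *fp = fopen("/proc/net/snmp", "r");

          if (!fp)
              return 1;
          while (fgets(line, sizeof(line), fp)) {
              /* Two "Ip:" lines exist: header names, then the values. */
              if (strncmp(line, "Ip: ", 4) != 0 ||
                  !isdigit((unsigned char)line[4]))
                  continue;
              char *p = line + 4, *end;
              int col = 0;
              for (;;) {
                  uint64_t v = strtoull(p, &end, 10);   /* 64-bit parse */
                  if (end == p)
                      break;                            /* no more numbers */
                  printf("Ip column %d = %" PRIu64 "\n", col++, v);
                  p = end;
              }
          }
          fclose(fp);
          return 0;
      }
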
  16. 26 Jun, 2010 1 commit
    • snmp: add align parameter to snmp_mib_init() · 1823e4c8
      Eric Dumazet authored
      In preparation for 64bit snmp counters for some mibs,
      add an 'align' parameter to snmp_mib_init(), instead
      of assuming mibs only contain 'unsigned long' fields.
      
      Callers can use __alignof__(type) to provide correct
      alignment.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      CC: Herbert Xu <herbert@gondor.apana.org.au>
      CC: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      CC: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
      CC: Vlad Yasevich <vladislav.yasevich@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1823e4c8
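
      A hedged sketch of the resulting shape: the per-cpu mib is allocated
      with an explicit alignment supplied by the caller via __alignof__(),
      instead of assuming every mib is an array of unsigned long. The
      _sketch name and the single-pointer layout are illustrative, not the
      exact kernel snmp_mib_init().

      #include <linux/errno.h>
      #include <linux/percpu.h>

      static int snmp_mib_init_sketch(void __percpu **ptr, size_t mibsize,
                                      size_t align)
      {
          ptr[0] = __alloc_percpu(mibsize, align);
          if (!ptr[0])
              return -ENOMEM;
          return 0;
      }

      /*
       * Caller side, once a mib may contain u64 fields:
       *
       *     snmp_mib_init_sketch(mib, sizeof(struct ipstats_mib),
       *                          __alignof__(struct ipstats_mib));
       */
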
  17. 11 Jun, 2010 1 commit
    • ip: ip_ra_control() rcu fix · 592fcb9d
      Eric Dumazet authored
      Commit 66018506 (ip: Router Alert RCU conversion) introduced RCU
      lookups to ip_call_ra_chain(). It missed a proper deinit phase:
      when ip_ra_control() deletes an ip_ra_chain, it should make sure
      ip_call_ra_chain() users cannot start to use the socket during the
      RCU grace period. It should also delay the sock_put() until after
      the grace period, or we risk premature socket freeing and
      corruption, as raw sockets are not RCU protected yet.
      
      This delay avoids using expensive atomic_inc_not_zero() in
      ip_call_ra_chain().
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      592fcb9d
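
      A hedged sketch of the deinit ordering described above: unlink the
      entry, then defer both the kfree() and the sock_put() until after the
      RCU grace period, so readers in ip_call_ra_chain() never run into a
      socket whose last reference is already gone. ra_entry and the helper
      names are illustrative; the writer-side lock is assumed to be held.

      #include <linux/kernel.h>
      #include <linux/rcupdate.h>
      #include <linux/slab.h>
      #include <net/sock.h>

      struct ra_entry {
          struct ra_entry __rcu *next;
          struct sock *sk;
          struct rcu_head rcu;
      };

      static void ra_destroy_rcu(struct rcu_head *head)
      {
          struct ra_entry *ra = container_of(head, struct ra_entry, rcu);

          /* Safe now: no RCU reader can still be walking this entry. */
          sock_put(ra->sk);
          kfree(ra);
      }

      static void ra_unlink(struct ra_entry __rcu **prev, struct ra_entry *ra)
      {
          /* Unlink under the writer-side lock (not shown)... */
          rcu_assign_pointer(*prev, rcu_dereference_protected(ra->next, 1));
          /* ...and delay the free and the sock_put() past the grace period. */
          call_rcu(&ra->rcu, ra_destroy_rcu);
      }
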
  18. 08 Jun, 2010 1 commit
  19. 25 May, 2010 1 commit
  20. 16 May, 2010 1 commit
  21. 29 Apr, 2010 1 commit
  22. 16 Apr, 2010 1 commit
  23. 17 Feb, 2010 1 commit
    • percpu: add __percpu sparse annotations to net · 7d720c3e
      Tejun Heo authored
      Add __percpu sparse annotations to net.
      
      These annotations are to make sparse consider percpu variables to be
      in a different address space and warn if accessed without going
      through percpu accessors.  This patch doesn't affect normal builds.
      
      The macro and type tricks around snmp stats make things a bit
      interesting.  DEFINE/DECLARE_SNMP_STAT() macros mark the target field
      as __percpu and SNMP_UPD_PO_STATS() macro is updated accordingly.  All
      snmp_mib_*() users which used to cast the argument to (void **) are
      updated to cast it to (void __percpu **).
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Acked-by: David S. Miller <davem@davemloft.net>
      Cc: Patrick McHardy <kaber@trash.net>
      Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
      Cc: Vlad Yasevich <vladislav.yasevich@hp.com>
      Cc: netdev@vger.kernel.org
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7d720c3e
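
      A hedged sketch of what the annotation buys: a __percpu pointer lives
      in a separate sparse address space, so dereferencing it directly warns
      under "make C=1", while this_cpu_*()/per_cpu_ptr() accessors stay
      clean. my_mib and my_stats are illustrative names, not kernel ones.

      #include <linux/percpu.h>

      struct my_mib {
          unsigned long counters[8];
      };

      static struct my_mib __percpu *my_stats;

      static void my_stats_inc(int field)
      {
          /* Fine: goes through a per-cpu accessor. */
          this_cpu_inc(my_stats->counters[field]);

          /*
           * Not fine (sparse would warn): my_stats->counters[field]++;
           * the pointer must first go through per_cpu_ptr()/this_cpu_ptr().
           */
      }
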
  24. 16 Feb, 2010 1 commit
  25. 14 Jan, 2010 1 commit
  26. 07 Jan, 2010 1 commit
    • ip: fix mc_loop checks for tunnels with multicast outer addresses · 7ad6848c
      Octavian Purdila authored
      When we have L3 tunnels with different inner/outer families
      (i.e. IPV4/IPV6) which use a multicast address as the outer tunnel
      destination address, multicast packets will be looped back to the
      sending socket even if IP*_MULTICAST_LOOP is set to disabled.
      
      The mc_loop flag is present in the family-specific part of the socket
      (e.g. the IPv4 or IPv6 specific part). setsockopt sets the inner
      family's mc_loop flag. When the packet is pushed through the L3
      tunnel it will eventually be processed by the outer family which, if
      different, will check the flag in a different part of the socket than
      the one where it was set.
      Signed-off-by: Octavian Purdila <opurdila@ixiacom.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      7ad6848c
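
      A hedged sketch of a family-aware check along the lines of the fix:
      the output path asks the socket which family's mc_loop flag applies
      instead of blindly reading the outer family's copy. This is a
      simplified approximation of an sk_mc_loop()-style helper, with config
      guards and warnings trimmed.

      #include <linux/ipv6.h>
      #include <net/inet_sock.h>
      #include <net/sock.h>

      static inline int mc_loop_sketch(const struct sock *sk)
      {
          if (!sk)
              return 1;   /* no socket: keep the historic loopback default */
          switch (sk->sk_family) {
          case AF_INET:
              return inet_sk(sk)->mc_loop;
          case AF_INET6:
              return inet6_sk(sk)->mc_loop;
          }
          return 1;
      }
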
  27. 15 Dec, 2009 1 commit
  28. 04 Nov, 2009 1 commit
  29. 19 Oct, 2009 1 commit
    • inet: rename some inet_sock fields · c720c7e8
      Eric Dumazet authored
      In order to have better cache layouts of struct sock (separate zones
      for rx/tx paths), we need this preliminary patch.
      
      The goal is to transfer fields used at lookup time into the first
      read-mostly cache line (inside struct sock_common) and to move
      sk_refcnt to a separate cache line (only written by the rx path).
      
      This patch adds inet_ prefix to daddr, rcv_saddr, dport, num, saddr,
      sport and id fields. This allows a future patch to define these
      fields as macros, like sk_refcnt, without name clashes.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      c720c7e8
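
      A hedged illustration (hypothetical layout, not the kernel's) of the
      macro trick the rename enables: once the fields carry an inet_ prefix,
      a later patch can move one into the shared read-mostly sock_common
      area and keep every user compiling via a macro alias, without
      clashing with local variables named daddr, dport, etc.

      #include <linux/types.h>

      struct sock_common_sketch {
          __be32 skc_daddr;               /* lookup key, read-mostly line */
      };

      struct sock_sketch {
          struct sock_common_sketch __sk_common;
      };

      struct inet_sock_sketch {
          struct sock_sketch sk;
          __be32 inet_daddr;              /* was "daddr" before the rename */
          __be16 inet_dport;              /* was "dport" before the rename */
      };

      /*
       * A later patch could then drop the field and alias it instead, e.g.:
       *
       *     #define inet_daddr  sk.__sk_common.skc_daddr
       *
       * so that inet->inet_daddr keeps working while the data moves into
       * sock_common; the old name "daddr" would have clashed with locals.
       */
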
  30. 01 Oct, 2009 1 commit
  31. 24 Sep, 2009 1 commit
  32. 27 Apr, 2009 1 commit
  33. 16 Feb, 2009 1 commit
  34. 26 Nov, 2008 1 commit
  35. 25 Nov, 2008 1 commit
    • net: avoid a pair of dst_hold()/dst_release() in ip_append_data() · 2e77d89b
      Eric Dumazet authored
      We can reduce pressure on the dst entry refcount that slows down the
      UDP transmit path on SMP machines. This pressure is visible on RTP
      servers when delivering content to media gateways, especially big
      ones handling thousands of streams. Several cpus send UDP frames to
      the same destination, hence use the same dst entry.
      
      This patch makes ip_append_data() eventually steal the refcount its
      callers had to take on the dst entry.
      
      This doesn't avoid all refcounting, but still gives speedups on SMP
      on the UDP/RAW transmit path.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      2e77d89b
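
      A hedged sketch of the reference-transfer pattern: instead of the
      callee taking its own dst_hold() on an entry the caller already holds,
      the caller hands its reference over and must not release it afterwards
      on success. cork_sketch and queue_frame() are illustrative stand-ins
      for the cork state and ip_append_data().

      #include <net/dst.h>
      #include <net/route.h>

      struct cork_sketch {
          struct dst_entry *dst;  /* owns one reference while frames queue */
      };

      static int queue_frame(struct cork_sketch *cork, struct rtable **rtp)
      {
          /*
           * Steal the caller's reference instead of dst_hold()/dst_release():
           * one pair of atomic ops saved per packet, which matters when many
           * cpus hammer the same dst entry.
           */
          cork->dst = &(*rtp)->dst;
          *rtp = NULL;            /* the caller no longer owns the reference */
          return 0;
      }
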
  36. 17 Oct, 2008 1 commit
  37. 09 Oct, 2008 1 commit
    • inet: cleanup of local_port_range · 3c689b73
      Eric Dumazet authored
      I noticed sysctl_local_port_range[] and its associated seqlock
      sysctl_local_port_range_lock were on separate cache lines.
      Moreover, sysctl_local_port_range[] was close to unrelated, highly
      modified variables, leading to cache misses.
      
      Moving these two variables into a structure helps data locality, and
      moving this structure to the read_mostly section helps sharing of
      this data among cpus.
      
      Clean up extern declarations (moved into the include file where they
      belong), and use the inet_get_local_port_range() accessor instead of
      direct access to the port values.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      3c689b73
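
      A hedged sketch of the layout and accessor described above: the port
      range and the seqlock that guards it share one read-mostly structure,
      and readers go through an accessor rather than touching the array
      directly. The _sketch names approximate inet_get_local_port_range();
      they are not the exact kernel definitions.

      #include <linux/cache.h>
      #include <linux/seqlock.h>

      static struct {
          seqlock_t lock;
          int range[2];
      } local_ports_sketch __read_mostly = {
          .lock  = __SEQLOCK_UNLOCKED(local_ports_sketch.lock),
          .range = { 32768, 61000 },
      };

      /* Readers retry if a writer updated the range while they were reading. */
      static void get_local_port_range_sketch(int *low, int *high)
      {
          unsigned int seq;

          do {
              seq   = read_seqbegin(&local_ports_sketch.lock);
              *low  = local_ports_sketch.range[0];
              *high = local_ports_sketch.range[1];
          } while (read_seqretry(&local_ports_sketch.lock, seq));
      }
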
  38. 01 Oct, 2008 1 commit