1. 29 April 2010, 2 commits
    • net: ip_queue_rcv_skb() helper · f84af32c
      Authored by Eric Dumazet
      When queueing an skb to a socket, we can immediately release its dst if
      the target socket does not use IP_CMSG_PKTINFO.
      
      tcp_data_queue() can drop the dst too.
      
      This lets us benefit from a hot cache line and avoids having the receiver,
      possibly on another CPU, dirty this cache line itself.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: speedup udp receive path · 4b0b72f7
      Authored by Eric Dumazet
      Since commit 95766fff ("[UDP]: Add memory accounting."),
      each received packet needs one extra sock_lock()/sock_release() pair.
      
      This added latency because of possible backlog handling. Later,
      ticket spinlocks added yet another latency source in case of DDoS.
      
      This patch introduces the lock_sock_bh() and unlock_sock_bh()
      synchronization primitives, avoiding one atomic operation and backlog
      processing.
      
      skb_free_datagram_locked() uses them instead of the full-blown
      lock_sock()/release_sock(). The skb is orphaned inside the locked
      section for proper socket memory reclaim, and finally freed outside
      of it.
      
      The UDP receive path now takes the socket spinlock only once.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 28 April 2010, 1 commit
    • net: sk_add_backlog() take rmem_alloc into account · c377411f
      Authored by Eric Dumazet
      The current socket backlog limit is not enough to really stop DDoS
      attacks, because the user thread spends a long time processing a full
      backlog each round, and other threads may spin wildly on the socket lock.
      
      We should add the backlog size and receive-queue size (aka rmem_alloc)
      together to pace writers, and let the user thread run without being
      slowed down too much.
      
      Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock
      in stress situations.
      
      Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow UDP
      receiver can now process ~200,000 pps (instead of ~100 pps before the
      patch) on an 8-core machine.
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  3. 26 April 2010, 3 commits
  4. 24 April 2010, 3 commits
  5. 23 April 2010, 3 commits
    • tcp: bind() fix when many ports are bound · fda48a0d
      Authored by Eric Dumazet
      Port autoselection done by the kernel only works when the number of
      bound sockets is under a threshold (typically 30000).
      
      When this threshold is exceeded, we must check if there is a conflict
      before exiting the first loop in inet_csk_get_port().
      
      Change inet_csk_bind_conflict() to forbid two reuse-enabled sockets
      from binding to the same (address, port) tuple (with a non-ANY address).
      
      Make the same change in inet6_csk_bind_conflict().
      Reported-by: Gaspar Chilingarov <gasparch@gmail.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Evgeniy Polyakov <zbr@ioremap.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • IPv6: Generic TTL Security Mechanism (final version) · e802af9c
      Authored by Stephen Hemminger
      This patch adds IPv6 support for the RFC 5082 Generalized TTL Security
      Mechanism.
      
      It is not available to users of mapped addresses; the IPv6 and IPv4
      socket options are separate. The server does have to deal with both the
      IPv4 and IPv6 socket options, and the client has to handle the
      difference for each family.
      
      On client:
      	int ttl = 255;
      	getaddrinfo(argv[1], argv[2], &hint, &result);
      
      	for (rp = result; rp != NULL; rp = rp->ai_next) {
      		s = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
      		if (s < 0) continue;
      
      		if (rp->ai_family == AF_INET) {
      			setsockopt(s, IPPROTO_IP, IP_TTL, &ttl, sizeof(ttl));
      		} else if (rp->ai_family == AF_INET6) {
      			setsockopt(s, IPPROTO_IPV6, IPV6_UNICAST_HOPS,
      					&ttl, sizeof(ttl));
      		}
      			
      		if (connect(s, rp->ai_addr, rp->ai_addrlen) == 0) {
      		   ...
      
      On server:
      	int minttl = 255 - maxhops;
         
      	getaddrinfo(NULL, port, &hints, &result);
      	for (rp = result; rp != NULL; rp = rp->ai_next) {
      		s = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
      		if (s < 0) continue;
      
      		if (rp->ai_family == AF_INET6)
      			setsockopt(s, IPPROTO_IPV6,  IPV6_MINHOPCOUNT,
      					&minttl, sizeof(minttl));
      		setsockopt(s, IPPROTO_IP, IP_MINTTL, &minttl, sizeof(minttl));
      			
      		if (bind(s, rp->ai_addr, rp->ai_addrlen) == 0)
      			break;
      ...
      Signed-off-by: Stephen Hemminger <shemminger@vyatta.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 22 April 2010, 4 commits
    • net: ipv6 bind to device issue · f4f914b5
      Authored by Jiri Olsa
      The issue arises when two NICs are both assigned the same
      IPv6 global address.
      
      If a sender binds to a particular NIC (SO_BINDTODEVICE),
      the outgoing traffic is sent via the first device found.
      The bound device is thus not taken into account during
      routing.
      
      From the ip6_route_output() function:
      
      If the binding address is multicast, link-local or loopback,
      the RT6_LOOKUP_F_IFACE bit is set, but not for a global address.
      
      So binding a global address will neglect the SO_BINDTODEVICE-bound
      device, because the fib6_rule_lookup() path won't check the
      flowi::oif field and will take the first route that fits.
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Scott Otto <scott.otto@alcatel-lucent.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: allow to send packet after receiving ICMPv6 Too Big message with MTU field less than IPV6_MIN_MTU · f2228f78
      Authored by Shan Wei
      
      According to RFC 2460, the PMTU is set to the IPv6 Minimum Link
      MTU (1280) and a fragment header should always be included
      after a node receives a Too Big message reporting a PMTU
      less than the IPv6 Minimum Link MTU.
      
      After receiving an ICMPv6 Too Big message reporting a PMTU
      less than the IPv6 Minimum Link MTU, SCTP *can't* send any
      data/control chunk whose total length, including the IPv6 header
      and IPv6 extension headers, is less than IPV6_MIN_MTU (1280 bytes).
      
      The failure occurred in ip6_fragment(); the reason is as
      follows (taking a SHUTDOWN chunk as an example):
      sctp_packet_transmit (SHUTDOWN chunk, len=16 bytes)
      |------sctp_v6_xmit (local_df=0)
         |------ip6_xmit
             |------ip6_output (dst_allfrag is true)
                 |------ip6_fragment
      
      In ip6_fragment(), for local_df=0, the packet is dropped
      and EMSGSIZE is returned.
      
      The patch fixes this by adding a check on skb->len.
      In this case, IPv6 does not fragment the upper-layer protocol data;
      it only adds a fragment header before it.
      Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • xfrm6: ensure to use the same dev when building a bundle · bc8e4b95
      Authored by Nicolas Dichtel
      When building a bundle, we set dst.dev and rt6.rt6i_idev.
      We must ensure that the same device is set in both fields.
      Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • tcp: Mark v6 response packets as CHECKSUM_PARTIAL · e5700aff
      Authored by David S. Miller
      Otherwise we only get the checksum right for data-less TCP responses.
      
      Noticed by Herbert Xu.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 21 April 2010, 3 commits
  8. 16 April 2010, 4 commits
    • ipv6: fix the comment of ip6_xmit() · b5d43998
      Authored by Shan Wei
      ip6_xmit() is used by upper-layer transport protocols.
      Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: replace ipfragok with skb->local_df · 4e15ed4d
      Authored by Shan Wei
      As Herbert Xu said, we should be able to simply replace ipfragok
      with skb->local_df. Commit f88037 ("sctp: Drop ipfargok in sctp_xmit
      function") has dropped ipfragok and sets the local_df value properly.
      
      The patch kills the ipfragok parameter of .queue_xmit().
      Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ipv6: cancel to setting local_df in ip6_xmit() · 0eecb784
      Authored by Shan Wei
      Commit f88037 ("sctp: Drop ipfargok in sctp_xmit function")
      dropped ipfragok and sets the local_df value properly.
      
      So the change from commit 77e2f1 ("ipv6: Fix ip6_xmit to
      send fragments if ipfragok is true") is no longer needed,
      and this patch removes it.
      Signed-off-by: Shan Wei <shanwei@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • ip: Fix ip_dev_loopback_xmit() · e30b38c2
      Authored by Eric Dumazet
      Eric Paris got the following trace with a linux-next kernel:
      
      [   14.203970] BUG: using smp_processor_id() in preemptible [00000000]
      code: avahi-daemon/2093
      [   14.204025] caller is netif_rx+0xfa/0x110
      [   14.204035] Call Trace:
      [   14.204064]  [<ffffffff81278fe5>] debug_smp_processor_id+0x105/0x110
      [   14.204070]  [<ffffffff8142163a>] netif_rx+0xfa/0x110
      [   14.204090]  [<ffffffff8145b631>] ip_dev_loopback_xmit+0x71/0xa0
      [   14.204095]  [<ffffffff8145b892>] ip_mc_output+0x192/0x2c0
      [   14.204099]  [<ffffffff8145d610>] ip_local_out+0x20/0x30
      [   14.204105]  [<ffffffff8145d8ad>] ip_push_pending_frames+0x28d/0x3d0
      [   14.204119]  [<ffffffff8147f1cc>] udp_push_pending_frames+0x14c/0x400
      [   14.204125]  [<ffffffff814803fc>] udp_sendmsg+0x39c/0x790
      [   14.204137]  [<ffffffff814891d5>] inet_sendmsg+0x45/0x80
      [   14.204149]  [<ffffffff8140af91>] sock_sendmsg+0xf1/0x110
      [   14.204189]  [<ffffffff8140dc6c>] sys_sendmsg+0x20c/0x380
      [   14.204233]  [<ffffffff8100ad82>] system_call_fastpath+0x16/0x1b
      
      While the current linux-2.6 kernel doesn't emit this warning, the bug
      is latent and might cause unexpected failures.
      
      ip_dev_loopback_xmit() runs in process context, with preemption
      enabled, so it must call netif_rx_ni() instead of netif_rx(), to make
      sure that we process pending software interrupts.
      
      The same change is made to ip6_dev_loopback_xmit().
      Reported-by: Eric Paris <eparis@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 14 April 2010, 2 commits
  10. 13 April 2010, 5 commits
  11. 12 April 2010, 2 commits
  12. 09 April 2010, 1 commit
  13. 07 April 2010, 1 commit
    • xfrm: cache bundles instead of policies for outgoing flows · 80c802f3
      Authored by Timo Teräs
      __xfrm_lookup() is called for each packet transmitted out of the
      system. xfrm_find_bundle() does a linear search, which can
      kill system performance depending on how many bundles are
      required per policy.
      
      This modifies __xfrm_lookup() to store bundles directly in
      the flow cache. If we do not get a hit, we just create a new
      bundle instead of doing the slow search. This means that we can now
      get multiple xfrm_dsts for the same flow (on a per-CPU basis).
      Signed-off-by: Timo Teras <timo.teras@iki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  14. 04 April 2010, 2 commits
    • icmp: Account for ICMP out errors · 1f8438a8
      Authored by Eric Dumazet
      When ip_append() fails because of a socket limit or memory shortage,
      increment the ICMP_MIB_OUTERRORS counter, so that "netstat -s" can
      report these errors.
      
      LANG=C netstat -s | grep "ICMP messages failed"
          0 ICMP messages failed
      
      For IPv6, implement the ICMP6_MIB_OUTERRORS counter as well.
      
      # grep Icmp6OutErrors /proc/net/dev_snmp6/*
      /proc/net/dev_snmp6/eth0:Icmp6OutErrors                   	0
      /proc/net/dev_snmp6/lo:Icmp6OutErrors                   	0
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: convert multicast list to list_head · 22bedad3
      Authored by Jiri Pirko
      Converts the list, and the core code manipulating it, to the same form
      as uc_list.
      
      + uses two functions for adding/removing mc addresses (normal and
        "global" variants) instead of a function parameter.
      + removes dev_mcast.c completely.
      + exposes the netdev_hw_addr_list_* macros along with the __hw_addr_*
        functions for manipulating lists standalone (used in the bonding and
        802.11 drivers).
      Signed-off-by: Jiri Pirko <jpirko@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 31 March 2010, 1 commit
    • ipv6 fib: Use "Sweezle" to optimize addr_bit_test(). · 02cdce53
      Authored by YOSHIFUJI Hideaki / 吉藤英明
      addr_bit_test() is used in various places in the IPv6 routing table
      subsystem. It checks whether the given fn_bit is set,
      where fn_bit counts bits from the MSB in words in network order.
      
       fn_bit        :   0 .... 31 32 .... 63 64 .... 95 96 ....127
      
      fn_bit >> 5 gives the offset of the word, and (~fn_bit & 0x1f) gives
      the count from the LSB in the network-endian word in question.
      
       fn_bit >> 5   :       0          1          2          3
       ~fn_bit & 0x1f:  31 ....  0 31 ....  0 31 ....  0 31 ....  0
      
      Thus, the mask was generated as htonl(1 << (~fn_bit & 0x1f)).
      This can be optimized by "sweezle" (see include/asm-generic/bitops/le.h).
      
      In little-endian,
        htonl(1 << bit) = 1 << (bit ^ BITOP_BE32_SWIZZLE)
      where
        BITOP_BE32_SWIZZLE is (0x1f & ~7)
      So,
        htonl(1 << (~fn_bit & 0x1f)) = 1 << ((~fn_bit & 0x1f) ^ (0x1f & ~7))
                                     = 1 << ((~fn_bit ^ ~7) & 0x1f)
                                     = 1 << ((~fn_bit ^ BITOP_BE32_SWIZZLE) & 0x1f)
      
      In big-endian, BITOP_BE32_SWIZZLE is equal to 0.
        1 << ((~fn_bit ^ BITOP_BE32_SWIZZLE) & 0x1f)
                                     = 1 << ((~fn_bit) & 0x1f)
                                     = htonl(1 << (~fn_bit & 0x1f))
      Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 30 March 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Authored by Tejun Heo
      
      percpu.h is included by sched.h and module.h, and thus ends up being
      included when building most .c files.  percpu.h includes slab.h, which
      in turn includes gfp.h, making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      The percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the following
      script was used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order conforms
        to its surroundings.  It's put in the include block which contains
        core kernel includes, in the same order that the rest are ordered -
        alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
        doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them, as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored, as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on the arch to make
         things build (like ipr on powerpc/64, which failed due to a missing
         writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests in
      step 6, I'm fairly confident about the coverage of this conversion
      patch.  If there is a breakage, it's likely to be something in one of
      the arch headers, which should be easily discoverable on most builds
      of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  17. 29 March 2010, 1 commit
  18. 27 March 2010, 1 commit