1. 11 Jul 2012, 1 commit
  2. 28 Jun 2012, 4 commits
  3. 24 Jun 2012, 1 commit
  4. 23 Jun 2012, 1 commit
  5. 22 Jun 2012, 1 commit
  6. 20 Jun 2012, 1 commit
    • ipv4: Early TCP socket demux. · 41063e9d
      David S. Miller committed
      Input packet processing for local sockets involves two major demuxes.
      One for the route and one for the socket.
      
      But we can optimize this down to one demux for certain kinds of local
      sockets.
      
      Currently we only do this for established TCP sockets, but it could
      at least in theory be expanded to other kinds of connections.
      
      If a TCP socket is established then its identity is fully specified.
      
      This means that whatever input route was used during the three-way
      handshake must work equally well for the rest of the connection since
      the keys will not change.
      
      Once we move to the established state, we cache the received packet's
      input route for later use.
      
      Like the existing cached route in sk->sk_dst_cache used for output
      packets, we have to check for route invalidations using dst->obsolete
      and dst->ops->check().
      
      Early demux occurs outside of a socket locked section, so when a route
      invalidation occurs we defer the fixup of sk->sk_rx_dst until we are
      actually inside of established state packet processing and thus have
      the socket locked.
      Signed-off-by: David S. Miller <davem@davemloft.net>
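      A minimal kernel-style sketch of the deferred route validation described above, assuming the cached entry lives in sk->sk_rx_dst as the commit states; the field and helper usage is illustrative rather than the exact upstream diff:

      /* Inside established-state processing, with the socket locked:
       * re-validate the input route cached by the early demux path. */
      struct dst_entry *dst = sk->sk_rx_dst;

      if (dst) {
              /* dst->obsolete flags a possible invalidation; the dst's
               * check() op decides whether the entry is still usable. */
              if (dst->obsolete && !dst->ops->check(dst, 0)) {
                      dst_release(dst);
                      sk->sk_rx_dst = NULL;   /* re-cached from a later packet */
              }
      }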
  7. 10 Jun 2012, 1 commit
  8. 09 Jun 2012, 3 commits
    • tcp: Get rid of inetpeer special cases. · 4670fd81
      David S. Miller committed
      The get_peer method TCP uses is full of special cases that make no
      sense to accommodate, and it also gets in the way of doing more
      reasonable things here.
      
      First of all, if the socket doesn't have a usable cached route, there
      is no sense in trying to optimize timewait recycling.
      
      Likewise for the case where IP options, such as SRR, are enabled and
      make the IP header's destination address (and thus the destination
      address of the route key) differ from the connection's destination
      address.
      
      Just return a NULL peer in these cases, and thus we're also able to
      get rid of the clumsy inetpeer release logic.
      Signed-off-by: David S. Miller <davem@davemloft.net>
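      A sketch of the resulting IPv4 get_peer logic, reconstructed from the commit text above; treat it as illustrative rather than the verbatim upstream code:

      static struct inet_peer *tcp_v4_get_peer(struct sock *sk)
      {
              struct rtable *rt = (struct rtable *)__sk_dst_get(sk);
              struct inet_sock *inet = inet_sk(sk);

              /* No usable cached route, or IP options (e.g. SRR) made the
               * route key's destination differ from the connection's
               * destination: skip timewait recycling entirely. */
              if (!rt || inet->inet_daddr != rt->rt_dst)
                      return NULL;

              return rt_get_peer_create(rt, inet->inet_daddr);
      }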
    • inet: Create and use rt{,6}_get_peer_create(). · fbfe95a4
      David S. Miller committed
      There are a lot of places that open-code rt{,6}_get_peer() only because
      they want to set 'create' to one.  So add an rt{,6}_get_peer_create()
      for their sake.
      
      There were also a few spots open-coding plain rt{,6}_get_peer() and
      those are transformed here as well.
      Signed-off-by: David S. Miller <davem@davemloft.net>
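      A sketch of the clean-up pattern, assuming the open-coded sequence was "bind the peer on demand, then read rt->peer"; rt_bind_peer() and fl4 are shown as they commonly appeared at call sites of that era:

      /* Before: each caller open-coded the create-on-demand dance. */
      if (!rt->peer)
              rt_bind_peer(rt, fl4->daddr, 1);        /* 1 == create */
      peer = rt->peer;

      /* After: the intent is captured by a single helper. */
      peer = rt_get_peer_create(rt, fl4->daddr);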
    • inetpeer: add parameter net for inet_getpeer_v4,v6 · 54db0cc2
      Gao feng committed
      Add struct net as a parameter of inet_getpeer_v[4,6] and use it to
      replace &init_net.
      
      Also modify some call sites to provide the net for inet_getpeer_v[4,6].
      Signed-off-by: Gao feng <gaofeng@cn.fujitsu.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
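      A sketch of the interface change described above; the exact parameter order is assumed:

      /* Before: the lookup implicitly used &init_net. */
      struct inet_peer *inet_getpeer_v4(__be32 v4daddr, int create);

      /* After: callers pass their own namespace. */
      struct inet_peer *inet_getpeer_v4(struct net *net, __be32 v4daddr, int create);

      /* A typical updated call site: */
      peer = inet_getpeer_v4(dev_net(skb->dev), ip_hdr(skb)->saddr, 1);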
  9. 04 Jun 2012, 1 commit
  10. 02 Jun 2012, 1 commit
    • tcp: reflect SYN queue_mapping into SYNACK packets · fff32699
      Eric Dumazet committed
      While testing how Linux behaves under a SYN flood attack on a multiqueue
      device (ixgbe), I found that SYNACK messages were dropped at the Qdisc
      level because we send them all on a single queue.
      
      The obvious choice is to reflect the incoming SYN packet's @queue_mapping
      into the SYNACK packet.
      
      Under stress, my machine could only send 25,000 SYNACKs per second (for
      200,000 incoming SYNs per second). NIC: ixgbe with 16 rx/tx queues.
      
      After patch, not a single SYNACK is dropped.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Hans Schillstrom <hans.schillstrom@ericsson.com>
      Cc: Jesper Dangaard Brouer <brouer@redhat.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
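      A sketch of the reflection itself using the generic skb queue-mapping helpers; where exactly it is wired into the SYNACK path is omitted here, and syn_skb/synack_skb are placeholder names:

      /* Remember the queue the SYN arrived on ... */
      u16 queue_mapping = skb_get_queue_mapping(syn_skb);

      /* ... and transmit the SYNACK on that same queue, so a single
       * Qdisc/TX queue is not overwhelmed during a SYN flood. */
      skb_set_queue_mapping(synack_skb, queue_mapping);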
  11. 18 May 2012, 1 commit
  12. 16 May 2012, 1 commit
  13. 05 May 2012, 1 commit
    • tcp: be more strict before accepting ECN negotiation · bd14b1b2
      Eric Dumazet committed
      It appears some networks play bad games with the two bits reserved for
      ECN. This can trigger false congestion notifications and very slow
      transfers.
      
      Since RFC 3168 (6.1.1) forbids SYN packets to carry ECT/CE bits, we can
      disable TCP ECN negotiation if we happen to receive mangled ECT/CE bits
      in the SYN packet.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Perry Lorier <perryl@google.com>
      Cc: Matt Mathis <mattmathis@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Wilmer van der Gaast <wilmer@google.com>
      Cc: Ankur Jain <jankur@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Dave Täht <dave.taht@bufferbloat.net>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
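      A sketch of the stricter request-time check, assuming the ECN field of the SYN's IP header is inspected with the mainline INET_ECN_is_not_ect() helper; placement and surrounding code are illustrative:

      const struct tcphdr *th = tcp_hdr(skb);

      /* RFC 3168 (6.1.1): a SYN must not carry ECT or CE.  If the peer asked
       * for ECN but the SYN arrived with a non-zero ECN field, assume the
       * path mangles those bits and refuse the negotiation. */
      if (sysctl_tcp_ecn && th->ece && th->cwr &&
          INET_ECN_is_not_ect(TCP_SKB_CB(skb)->ip_dsfield))
              inet_rsk(req)->ecn_ok = 1;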
  14. 24 Apr 2012, 2 commits
    • tcp: sk_add_backlog() is too aggressive for TCP · da882c1f
      Eric Dumazet committed
      While investigating TCP performance problems on 10Gb+ links, we found that
      a TCP sender was dropping a lot of incoming ACKs because of the sk_rcvbuf
      limit in sk_add_backlog(), especially if the receiver doesn't use GRO/LRO
      and sends one ACK every two MSS segments.
      
      A sender usually tweaks sk_sndbuf, but sk_rcvbuf stays at its default
      value (87380), allowing a too-small backlog.
      
      A TCP ACK, even being small, can consume nearly the same truesize space as
      outgoing packets. Using sk_rcvbuf + sk_sndbuf as the limit makes sense and
      is fast to compute.
      
      Performance results on netperf, single flow, receiver with disabled
      GRO/LRO: 7500 Mbit/s instead of 6050 Mbit/s, and no more TCPBacklogDrop
      increments at the sender.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Rick Jones <rick.jones2@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
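      A sketch of the receive-path change, assuming the three-argument sk_add_backlog() introduced by the companion patch below; the goto label mirrors tcp_v4_rcv() but the snippet is illustrative:

      /* Socket owned by user: queue to the backlog, but let the backlog grow
       * up to sk_rcvbuf + sk_sndbuf so that a loaded sender does not shed
       * incoming ACKs. */
      if (sk_add_backlog(sk, skb, sk->sk_rcvbuf + sk->sk_sndbuf)) {
              bh_unlock_sock(sk);
              NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
              goto discard_and_relse;
      }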
    • net: add a limit parameter to sk_add_backlog() · f545a38f
      Eric Dumazet committed
      sk_add_backlog() and sk_rcvqueues_full() hard-coded sk_rcvbuf as the
      memory limit. We need to make this limit a parameter for TCP's use.
      
      No functional change is expected in this patch; all callers still use the
      old sk_rcvbuf limit.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Tom Herbert <therbert@google.com>
      Cc: Maciej Żenczykowski <maze@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Rick Jones <rick.jones2@hp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
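      A sketch consistent with the description: the ceiling becomes an explicit argument, and existing callers simply pass sk->sk_rcvbuf to keep today's behaviour (bodies are illustrative, not the verbatim diff):

      static inline bool sk_rcvqueues_full(const struct sock *sk,
                                           const struct sk_buff *skb,
                                           unsigned int limit)
      {
              unsigned int qsize = sk->sk_backlog.len +
                                   atomic_read(&sk->sk_rmem_alloc);

              return qsize > limit;           /* was: qsize > sk->sk_rcvbuf */
      }

      /* Unchanged behaviour at an existing call site: */
      if (sk_add_backlog(sk, skb, sk->sk_rcvbuf))
              goto drop;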
  15. 23 Apr 2012, 1 commit
  16. 22 Apr 2012, 2 commits
    • tcp: move duplicate code from tcp_v4_init_sock()/tcp_v6_init_sock() · 900f65d3
      Neal Cardwell committed
      This commit moves the (substantial) common code shared between
      tcp_v4_init_sock() and tcp_v6_init_sock() to a new address-family
      independent function, tcp_init_sock().
      
      Centralizing this functionality should help avoid drift issues,
      e.g. where the IPv4 side is updated without a corresponding update to
      IPv6. There was already some drift: IPv4 initialized snd_cwnd to
      TCP_INIT_CWND, while the IPv6 side was still initializing snd_cwnd to
      2 (in this case it should not matter, since snd_cwnd is also
      initialized in tcp_init_metrics(), but the general risks and
      maintenance overhead remain).
      
      When diffing the old and new code, note that the new tcp_init_sock()
      function uses the order of steps from the tcp_v4_init_sock()
      implementation (the order is slightly different in
      tcp_v6_init_sock()).
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
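      A sketch of the resulting shape on the IPv4 side, assuming only the address-family specific assignments remain after the shared helper is factored out (the IPv6 variant is analogous):

      static int tcp_v4_init_sock(struct sock *sk)
      {
              struct inet_connection_sock *icsk = inet_csk(sk);

              tcp_init_sock(sk);                      /* shared, AF-independent init */

              icsk->icsk_af_ops = &ipv4_specific;     /* AF-specific ops stay here */

      #ifdef CONFIG_TCP_MD5SIG
              tcp_sk(sk)->af_specific = &tcp_sock_ipv4_specific;
      #endif
              return 0;
      }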
    • tcp: Initial repair mode · ee995283
      Pavel Emelyanov committed
      This includes (according to the previous description):
      
      * TCP_REPAIR sockoption
      
      This one just puts the socket in/out of the repair mode.
      Allowed for CAP_NET_ADMIN and for closed/established sockets only.
      When repair mode is turned off and the socket happens to be in
      the established state, a window probe is sent to the peer to
      'unlock' the connection.
      
      * TCP_REPAIR_QUEUE sockoption
      
      This one sets the queue which we're about to repair. The
      'no-queue' is set by default.
      
      * TCP_QUEUE_SEQ sockoption
      
      Sets the write_seq/rcv_nxt of the selected repaired queue.
      Allowed for TCP_CLOSE-d sockets only. When the socket changes
      its state, the other sequence numbers are changed by the kernel
      according to the protocol rules (most of the existing code is
      actually reused).
      
      * Ability to forcibly bind a socket to a port
      
      The sk->sk_reuse is set to SK_FORCE_REUSE.
      
      * Immediate connect modification
      
      The connect syscall initializes the connection, then directly jumps
      to the code which finalizes it.
      
      * Silent close modification
      
      The close just aborts the connection (similar to SO_LINGER with a 0
      timeout) but without sending any FIN/RSTs to the peer.
      Signed-off-by: Pavel Emelyanov <xemul@parallels.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
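      A hedged userspace sketch of how the sockoptions above combine when restoring a checkpointed connection (error handling omitted; the numeric fallback defines mirror linux/tcp.h, and the saved addresses and sequence numbers are assumed to come from an earlier dump):

      #include <netinet/in.h>
      #include <netinet/tcp.h>
      #include <stdint.h>
      #include <sys/socket.h>

      #ifndef TCP_REPAIR                      /* fallbacks for older headers */
      #define TCP_REPAIR       19
      #define TCP_REPAIR_QUEUE 20
      #define TCP_QUEUE_SEQ    21
      #endif
      #ifndef TCP_RECV_QUEUE
      #define TCP_RECV_QUEUE   1
      #define TCP_SEND_QUEUE   2
      #endif

      static int restore_tcp(const struct sockaddr_in *self, const struct sockaddr_in *peer,
                             uint32_t snd_seq, uint32_t rcv_seq)
      {
              int on = 1, off = 0, q;
              int sd = socket(AF_INET, SOCK_STREAM, 0);

              /* Needs CAP_NET_ADMIN; allowed on closed/established sockets only. */
              setsockopt(sd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on));

              /* Forcibly rebind the original port, then seed both queues
               * (TCP_QUEUE_SEQ is allowed on TCP_CLOSE-d sockets only). */
              bind(sd, (const struct sockaddr *)self, sizeof(*self));

              q = TCP_SEND_QUEUE;
              setsockopt(sd, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q));
              setsockopt(sd, IPPROTO_TCP, TCP_QUEUE_SEQ, &snd_seq, sizeof(snd_seq));

              q = TCP_RECV_QUEUE;
              setsockopt(sd, IPPROTO_TCP, TCP_REPAIR_QUEUE, &q, sizeof(q));
              setsockopt(sd, IPPROTO_TCP, TCP_QUEUE_SEQ, &rcv_seq, sizeof(rcv_seq));

              /* In repair mode connect() skips the handshake and just
               * finalizes the connection state. */
              connect(sd, (const struct sockaddr *)peer, sizeof(*peer));

              /* Leaving repair mode on an established socket sends a window
               * probe to 'unlock' the peer. */
              setsockopt(sd, IPPROTO_TCP, TCP_REPAIR, &off, sizeof(off));
              return sd;
      }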
  17. 06 Apr 2012, 1 commit
  18. 13 Mar 2012, 1 commit
  19. 12 Mar 2012, 2 commits
    • net: Convert printks to pr_<level> · 058bd4d2
      Joe Perches committed
      Use a more current kernel messaging style.
      
      Convert a printk block to print_hex_dump.
      Coalesce formats, align arguments.
      Use %s, __func__ instead of embedding function names.
      
      Some messages that were prefixed with <foo>_close are
      now prefixed with <foo>_fini.  Some ah4 and esp messages
      are no longer prefixed with "ip ".
      
      The intent of this patch is to later add something like
        #define pr_fmt(fmt) "IPv4: " fmt
      to standardize the output messages.
      
      Text size is trivially reduced. (x86-32 allyesconfig)
      
      $ size net/ipv4/built-in.o*
         text	   data	    bss	    dec	    hex	filename
       887888	  31558	 249696	1169142	 11d6f6	net/ipv4/built-in.o.new
       887934	  31558	 249800	1169292	 11d78c	net/ipv4/built-in.o.old
      Signed-off-by: Joe Perches <joe@perches.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
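      An illustrative before/after pair in the style this commit applies (the message text here is made up):

      /* Planned follow-up from the commit message: a subsystem prefix. */
      #define pr_fmt(fmt) "IPv4: " fmt

      /* Before: explicit level and an embedded function name. */
      printk(KERN_ERR "tcp_v4_init_sock: something went wrong\n");

      /* After: current style, function name supplied by __func__. */
      pr_err("%s: something went wrong\n", __func__);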
    • tcp: fix syncookie regression · dfd25fff
      Eric Dumazet committed
      commit ea4fc0d6 (ipv4: Don't use rt->rt_{src,dst} in ip_queue_xmit())
      added a serious regression in synflood handling.
      
      Simon Kirby discovered that a successful connection was delayed by 20
      seconds before becoming responsive.
      
      In my tests, I discovered that xmit frames were lost and needed ~4
      retransmits and a socket dst rebuild before really being sent.
      
      In the case of a syncookie-initiated connection, we use a different path
      to initialize the socket dst, and inet->cork.fl.u.ip4 is left cleared.
      
      As ip_queue_xmit() now depends on the inet flow being set up, fix this by
      copying the temporary flowi4 we use in cookie_v4_check().
      Reported-by: Simon Kirby <sim@netnation.com>
      Bisected-by: Simon Kirby <sim@netnation.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Tested-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
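      A sketch of the fix as described, assuming fl4 is the temporary flowi4 used for the route lookup in cookie_v4_check() and ret is the child socket it returns:

      /* ip_queue_xmit() relies on inet->cork.fl.u.ip4 being populated.
       * Normal sockets get this from the child-route setup path; the
       * syncookie path has to copy its temporary flow explicitly. */
      if (ret)
              inet_sk(ret)->cork.fl.u.ip4 = fl4;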
  20. 08 Mar 2012, 1 commit
  21. 13 Feb 2012, 1 commit
    • net: implement IP_RECVTOS for IP_PKTOPTIONS · 4c507d28
      Jiri Benc committed
      Currently, it is not easily possible to get the TOS/DSCP value of packets
      from an incoming TCP stream. The mechanism is there: IP_PKTOPTIONS
      getsockopt with IP_RECVTOS set, the same way the incoming TTL can be
      queried. This is not actually implemented for TOS, though.
      
      This patch adds this functionality, both for IPv4 (IP_PKTOPTIONS) and IPv6
      (IPV6_2292PKTOPTIONS). For IPv4, like in the IP_RECVTTL case, the value of
      the TOS field is stored from the other party's ACK.
      
      This is needed for proxies which require DSCP transparency. One such example
      is at http://zph.bratcheda.org/.
      Signed-off-by: Jiri Benc <jbenc@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
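      A hedged userspace sketch of reading the stored TOS/DSCP on an accepted IPv4 TCP socket: IP_PKTOPTIONS returns a block of ancillary data that is then walked with the usual CMSG macros (error handling omitted):

      #include <netinet/in.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/socket.h>

      static void print_peer_tos(int connfd)
      {
              unsigned char buf[256];
              socklen_t len = sizeof(buf);
              int on = 1;
              struct msghdr msg;
              struct cmsghdr *cmsg;

              /* Ask for the TOS of incoming packets to be recorded. */
              setsockopt(connfd, IPPROTO_IP, IP_RECVTOS, &on, sizeof(on));

              /* Fetch the saved control messages (TTL, TOS, ...). */
              if (getsockopt(connfd, IPPROTO_IP, IP_PKTOPTIONS, buf, &len) < 0)
                      return;

              memset(&msg, 0, sizeof(msg));
              msg.msg_control = buf;
              msg.msg_controllen = len;

              for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg))
                      if (cmsg->cmsg_level == IPPROTO_IP && cmsg->cmsg_type == IP_TOS)
                              printf("peer TOS/DSCP: 0x%02x\n",
                                     *(unsigned char *)CMSG_DATA(cmsg));
      }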
  22. 05 Feb 2012, 1 commit
  23. 02 Feb 2012, 1 commit
    • tcp: md5: RST: getting md5 key from listener · 658ddaaf
      Shawn Lu committed
      The TCP RST mechanism is broken with TCP MD5 (RFC 2385). When the
      connection is gone, the MD5 key is lost, and an RST sent without an
      MD5 hash is bound to be ignored by the peer. This can be a problem,
      since RST helps protocols like BGP recover quickly from a peer crash.
      
      In most cases, users of TCP MD5, such as BGP and LDP, have a listener
      on both sides to accept connections from the peer.
      MD5 keys for peers are saved in the listening socket.
      
      There are two cases for finding the MD5 key when the connection is
      lost:
      1. Passively received RST: the message is sent to the well-known port,
      so TCP associates it with the listener and the MD5 key is taken from
      the listener.
      
      2. Actively received RST (no sock): the message is sent to the active
      side, and there is no socket associated with it. In this case, find
      the listener from the source port, then find the MD5 key from the
      listener.
      
      We are not losing security here: the packet is checked with the MD5
      hash. No RST is generated if the MD5 hash doesn't match or no MD5 key
      can be found.
      Signed-off-by: Shawn Lu <shawn.lu@ericsson.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
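      A kernel-style sketch of case 2 above ("no sock"): look up a local listener bound to the packet's source port and take the MD5 key from it; the helper signatures follow kernels of that era and should be treated as illustrative:

      /* tcp_v4_send_reset(), no-socket case: the RST is only sent if an MD5
       * key for the sender can be found on a local listener and the incoming
       * segment's MD5 hash verifies. */
      sk1 = __inet_lookup_listener(dev_net(skb_dst(skb)->dev), &tcp_hashinfo,
                                   ip_hdr(skb)->daddr, ntohs(th->source),
                                   inet_iif(skb));
      if (sk1)
              key = tcp_md5_do_lookup(sk1,
                                      (union tcp_md5_addr *)&ip_hdr(skb)->saddr,
                                      AF_INET);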
  24. 01 Feb 2012, 4 commits
  25. 23 Jan 2012, 1 commit
  26. 13 Dec 2011, 3 commits
  27. 01 Dec 2011, 1 commit