1. 03 Jul 2008, 1 commit
  2. 28 Jun 2008, 1 commit
  3. 13 Jun 2008, 1 commit
    •
      tcp: Revert 'process defer accept as established' changes. · ec0a1966
      David S. Miller committed
      This reverts two changesets, ec3c0982
      ("[TCP]: TCP_DEFER_ACCEPT updates - process as established") and
      the follow-on bug fix 9ae27e0a
      ("tcp: Fix slab corruption with ipv6 and tcp6fuzz").
      
      This change causes several problems, first reported by Ingo Molnar
      as a distcc-over-loopback regression where connections were getting
      stuck.
      
      Ilpo Järvinen first spotted the locking problems.  The new function
      added by this code, tcp_defer_accept_check(), only has the
      child socket locked, yet it modifies the state of the parent
      listening socket.
      
      Fixing that is non-trivial at best, because we can't simply just grab
      the parent listening socket lock at this point, because it would
      create an ABBA deadlock.  The normal ordering is parent listening
      socket --> child socket, but this code path would require the
      reverse lock ordering.
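      
      To illustrate, a minimal user-space sketch of the ABBA pattern
      (hypothetical lock names and a deliberately racy main(); an
      illustration of the ordering problem, not the kernel code):
      
      #include <pthread.h>
      
      static pthread_mutex_t parent_lock = PTHREAD_MUTEX_INITIALIZER; /* listener */
      static pthread_mutex_t child_lock  = PTHREAD_MUTEX_INITIALIZER; /* child sk */
      
      /* Normal ordering: parent listening socket first, then child. */
      static void *accept_path(void *arg)
      {
              pthread_mutex_lock(&parent_lock);
              pthread_mutex_lock(&child_lock);
              /* ... move a request from the parent's accept queue ... */
              pthread_mutex_unlock(&child_lock);
              pthread_mutex_unlock(&parent_lock);
              return arg;
      }
      
      /* The reverted code held only the child lock; taking the parent
       * lock here inverts the order above -- the ABBA deadlock. */
      static void *defer_check_path(void *arg)
      {
              pthread_mutex_lock(&child_lock);
              pthread_mutex_lock(&parent_lock);       /* may deadlock */
              /* ... modify parent listening socket state ... */
              pthread_mutex_unlock(&parent_lock);
              pthread_mutex_unlock(&child_lock);
              return arg;
      }
      
      int main(void)
      {
              pthread_t a, b;
              pthread_create(&a, NULL, accept_path, NULL);
              pthread_create(&b, NULL, defer_check_path, NULL);
              pthread_join(a, NULL);
              pthread_join(b, NULL);
              return 0;
      }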
      
      Next is a problem noticed by Vitaliy Gusev, who noted:
      
      ----------------------------------------
      >--- a/net/ipv4/tcp_timer.c
      >+++ b/net/ipv4/tcp_timer.c
      >@@ -481,6 +481,11 @@ static void tcp_keepalive_timer (unsigned long data)
      > 		goto death;
      > 	}
      >
      >+	if (tp->defer_tcp_accept.request && sk->sk_state == TCP_ESTABLISHED) {
      >+		tcp_send_active_reset(sk, GFP_ATOMIC);
      >+		goto death;
      
      Here socket sk is not attached to the listening socket's request queue.
      tcp_done() will not call inet_csk_destroy_sock() (and tcp_v4_destroy_sock(),
      which should release this sk) as the socket is not DEAD. Therefore socket
      sk will be leaked, never freed.
      ----------------------------------------
      
      Finally, Alexey Kuznetsov argues that there might not even be any
      real value or advantage to these new semantics even if we fix all
      of the bugs:
      
      ----------------------------------------
      Hiding from accept() sockets with only out-of-order data
      is the only thing which is impossible with the old approach. Is this
      really so valuable? My opinion: no, this is nothing but a new loophole
      to consume memory without control.
      ----------------------------------------
      
      So revert this thing for now.
      Signed-off-by: David S. Miller <davem@davemloft.net>
  4. 05 Jun 2008, 1 commit
  5. 21 Apr 2008, 1 commit
  6. 23 Mar 2008, 1 commit
    •
      [TCP]: Let skbs grow over a page on fast peers · 69d15067
      Herbert Xu committed
      While testing the virtio-net driver on KVM with TSO I noticed
      that TSO performance with a 1500 MTU is significantly worse
      compared to the performance of non-TSO with a 16436 MTU.  The
      packet dump shows that most of the packets sent are smaller
      than a page.
      
      Looking at the code this actually is quite obvious, as we always
      stop extending the packet if it is the first packet yet to be
      sent and it is already larger than the MSS.  Since each extension is
      bounded by the page size, this means that (given a 1500 MTU) we are
      very unlikely to construct packets greater than a page, provided
      that the receiver and the path are fast enough that packets can
      always be sent immediately.
      
      The fix is also quite obvious.  The push calls inside the loop
      are just an optimisation so that we don't end up doing all the
      sending at the end of the loop.  Therefore there is no specific
      reason why they have to happen at MSS boundaries.  For TSO, the
      most natural extension of this optimisation is to do the pushing
      once the skb exceeds the TSO size goal.
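      
      Roughly, the check in the tcp_sendmsg() copy loop becomes the
      following (a simplified sketch of the idea, not the exact diff):
      
      /* Keep filling the current skb until it reaches the TSO size
       * goal, instead of pushing as soon as it passes one MSS. */
      if (skb->len < size_goal || (flags & MSG_OOB))
              continue;               /* keep appending to this skb */
      
      if (forced_push(tp)) {
              tcp_mark_push(tp, skb);
              __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_PUSH);
      } else if (skb == tcp_send_head(sk))
              tcp_push_one(sk, mss_now);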
      
      This is what the patch does and testing with KVM shows that the
      TSO performance with a 1500 MTU easily surpasses that of a 16436
      MTU and indeed the packet sizes sent are generally larger than
      16436.
      
      I don't see any obvious downsides for slower peers or connections,
      but it would be prudent to test this extensively to ensure that
      those cases don't regress.
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  7. 22 Mar 2008, 1 commit
    •
      [TCP]: TCP_DEFER_ACCEPT updates - process as established · ec3c0982
      Patrick McManus committed
      Change the TCP_DEFER_ACCEPT implementation so that it transitions a
      connection to ESTABLISHED after the handshake is complete instead of
      leaving it in SYN-RECV until some data arrives. Place the connection
      in the accept queue when the first data packet arrives via the slow path.
      
      Benefits:
       - an established connection is now reset if it never makes it
         to the accept queue
      
       - the diagnostic state of ESTABLISHED matches the packet traces
         showing a completed handshake
      
       - TCP_DEFER_ACCEPT timeouts are expressed in seconds and can now be
         enforced with reasonable accuracy instead of rounding up to the
         next exponential back-off of the SYN-ACK retry.
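      
      For reference, userspace opts into this behaviour per listening
      socket; a minimal usage sketch (listen_fd is assumed to be a TCP
      socket already in the listen state):
      
      #include <netinet/in.h>
      #include <netinet/tcp.h>
      #include <sys/socket.h>
      
      /* Wake accept() only once data has arrived on the new
       * connection, or after roughly this many seconds. */
      static int enable_defer_accept(int listen_fd)
      {
              int secs = 5;
              return setsockopt(listen_fd, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                                &secs, sizeof(secs));
      }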
      Signed-off-by: Patrick McManus <mcmanus@ducksong.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  8. 03 Feb 2008, 1 commit
    •
      [SOCK] proto: Add hashinfo member to struct proto · ab1e0a13
      Arnaldo Carvalho de Melo committed
      This way we can remove the TCP- and DCCP-specific versions of
      
      sk->sk_prot->get_port: both v4 and v6 use inet_csk_get_port
      sk->sk_prot->hash:     inet_hash is used directly; only v6 needs
                             a specific version to deal with mapped sockets
      sk->sk_prot->unhash:   both v4 and v6 use inet_unhash directly
      
      struct inet_connection_sock_af_ops also gets a new member, bind_conflict, so
      that inet_csk_get_port can find the per family routine.
      
      Now only the lookup routines receive a struct inet_hashinfo as a parameter.
      
      With this we further reuse code, reducing the difference among INET transport
      protocols.
      
      Eventually work has to be done on UDP and SCTP to make them share this
      infrastructure and, as a bonus, get inet_diag interfaces so that iproute
      can be used with these protocols.
      
      net-2.6/net/ipv4/inet_hashtables.c:
        struct proto			     |   +8
        struct inet_connection_sock_af_ops |   +8
       2 structs changed
        __inet_hash_nolisten               |  +18
        __inet_hash                        | -210
        inet_put_port                      |   +8
        inet_bind_bucket_create            |   +1
        __inet_hash_connect                |   -8
       5 functions changed, 27 bytes added, 218 bytes removed, diff: -191
      
      net-2.6/net/core/sock.c:
        proto_seq_show                     |   +3
       1 function changed, 3 bytes added, diff: +3
      
      net-2.6/net/ipv4/inet_connection_sock.c:
        inet_csk_get_port                  |  +15
       1 function changed, 15 bytes added, diff: +15
      
      net-2.6/net/ipv4/tcp.c:
        tcp_set_state                      |   -7
       1 function changed, 7 bytes removed, diff: -7
      
      net-2.6/net/ipv4/tcp_ipv4.c:
        tcp_v4_get_port                    |  -31
        tcp_v4_hash                        |  -48
        tcp_v4_destroy_sock                |   -7
        tcp_v4_syn_recv_sock               |   -2
        tcp_unhash                         | -179
       5 functions changed, 267 bytes removed, diff: -267
      
      net-2.6/net/ipv6/inet6_hashtables.c:
        __inet6_hash |   +8
       1 function changed, 8 bytes added, diff: +8
      
      net-2.6/net/ipv4/inet_hashtables.c:
        inet_unhash                        | +190
        inet_hash                          | +242
       2 functions changed, 432 bytes added, diff: +432
      
      vmlinux:
       16 functions changed, 485 bytes added, 492 bytes removed, diff: -7
      
      /home/acme/git/net-2.6/net/ipv6/tcp_ipv6.c:
        tcp_v6_get_port                    |  -31
        tcp_v6_hash                        |   -7
        tcp_v6_syn_recv_sock               |   -9
       3 functions changed, 47 bytes removed, diff: -47
      
      /home/acme/git/net-2.6/net/dccp/proto.c:
        dccp_destroy_sock                  |   -7
        dccp_unhash                        | -179
        dccp_hash                          |  -49
        dccp_set_state                     |   -7
        dccp_done                          |   +1
       5 functions changed, 1 bytes added, 242 bytes removed, diff: -241
      
      /home/acme/git/net-2.6/net/dccp/ipv4.c:
        dccp_v4_get_port                   |  -31
        dccp_v4_request_recv_sock          |   -2
       2 functions changed, 33 bytes removed, diff: -33
      
      /home/acme/git/net-2.6/net/dccp/ipv6.c:
        dccp_v6_get_port                   |  -31
        dccp_v6_hash                       |   -7
        dccp_v6_request_recv_sock          |   +5
       3 functions changed, 5 bytes added, 38 bytes removed, diff: -33
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 29 Jan 2008, 8 commits
    •
      [TCP]: Uninline tcp_set_state · 490d5046
      Ilpo Järvinen committed
      net/ipv4/tcp.c:
        tcp_close_state | -226
        tcp_done        | -145
        tcp_close       | -564
        tcp_disconnect  | -141
       4 functions changed, 1076 bytes removed, diff: -1076
      
      net/ipv4/tcp_input.c:
        tcp_fin               |  -86
        tcp_rcv_state_process | -164
       2 functions changed, 250 bytes removed, diff: -250
      
      net/ipv4/tcp_ipv4.c:
        tcp_v4_connect | -209
       1 function changed, 209 bytes removed, diff: -209
      
      net/ipv4/arp.c:
        arp_ignore |   +5
       1 function changed, 5 bytes added, diff: +5
      
      net/ipv6/tcp_ipv6.c:
        tcp_v6_connect | -158
       1 function changed, 158 bytes removed, diff: -158
      
      net/sunrpc/xprtsock.c:
        xs_sendpages |   -2
       1 function changed, 2 bytes removed, diff: -2
      
      net/dccp/ccids/ccid3.c:
        ccid3_update_send_interval |   +7
       1 function changed, 7 bytes added, diff: +7
      
      net/ipv4/tcp.c:
        tcp_set_state | +238
       1 function changed, 238 bytes added, diff: +238
      
      built-in.o:
       12 functions changed, 250 bytes added, 1695 bytes removed, diff: -1445
      
      I've no explanation why some unrelated changes seem to occur
      consistently as well (arp_ignore, ccid3_update_send_interval;
      I checked the arp_ignore asm and it seems to be due to some
      reordering of operations causing some extra opcodes to be
      generated). Still, the benefits are pretty obvious from
      codiff's results.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [TCP]: Remove TCPCB_URG & TCPCB_AT_TAIL as unnecessary · 4828e7f4
      Ilpo Järvinen committed
      The snd_up check should be enough. I suspect this has been
      there to provide a minor optimization in clean_rtx_queue, which
      used to have a small if (!->sacked) block that could skip the
      snd_up check among the other work.
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [NET] CORE: Introducing new memory accounting interface. · 3ab224be
      Hideo Aoki committed
      This patch introduces new memory accounting functions for each network
      protocol. Most of them are renamed from memory accounting functions
      for stream protocols. At the same time, some stream memory accounting
      functions are removed since other functions do the same thing.
      
      Renaming:
      	sk_stream_free_skb()		->	sk_wmem_free_skb()
      	__sk_stream_mem_reclaim()	->	__sk_mem_reclaim()
      	sk_stream_mem_reclaim()		->	sk_mem_reclaim()
      	sk_stream_mem_schedule()	->	__sk_mem_schedule()
      	sk_stream_pages()      		->	sk_mem_pages()
      	sk_stream_rmem_schedule()	->	sk_rmem_schedule()
      	sk_stream_wmem_schedule()	->	sk_wmem_schedule()
      	sk_charge_skb()			->	sk_mem_charge()
      
      Removing:
      	sk_stream_rfree():	 consolidated into sock_rfree()
      	sk_stream_set_owner_r(): consolidated into skb_set_owner_r()
      	sk_stream_mem_schedule()
      
      The following functions are added:
      	sk_has_account():  check if the protocol supports accounting
      	sk_mem_uncharge(): do the opposite of sk_mem_charge()
      
      In addition, to achieve consolidation, updating sk_wmem_queued is
      removed from sk_mem_charge().
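      
      A rough sketch of the resulting charge/uncharge pair (simplified
      from the helpers this adds to include/net/sock.h; note that
      sk_mem_charge() no longer updates sk_wmem_queued):
      
      static inline int sk_has_account(struct sock *sk)
      {
              /* Protocols supporting accounting provide memory_allocated. */
              return !!sk->sk_prot->memory_allocated;
      }
      
      static inline void sk_mem_charge(struct sock *sk, int size)
      {
              if (!sk_has_account(sk))
                      return;
              sk->sk_forward_alloc -= size;
      }
      
      static inline void sk_mem_uncharge(struct sock *sk, int size)
      {
              if (!sk_has_account(sk))
                      return;
              sk->sk_forward_alloc += size;
      }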
      
      Next, to consolidate memory accounting functions, this patch adds
      memory accounting calls to network core functions, and the existing
      memory accounting calls are renamed to the new ones.
      
      Finally, we replace the existing memory accounting calls with the new
      interface in TCP and SCTP.
      Signed-off-by: Takahiro Yasui <tyasui@redhat.com>
      Signed-off-by: Hideo Aoki <haoki@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [TCP]: Use BUILD_BUG_ON for tcp_skb_cb size checking · 1f9e636e
      Pavel Emelyanov committed
      The sizeof(struct tcp_skb_cb) must not exceed the
      sizeof(skb->cb). This is checked in net/ipv4/tcp.c, but
      the check can be expressed more gracefully.
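      
      The resulting compile-time check looks roughly like this (a
      sketch of the pattern in tcp_init()):
      
      void __init tcp_init(void)
      {
              struct sk_buff *skb = NULL;
      
              /* Fail the build, not the boot, if tcp_skb_cb ever
               * outgrows the generic skb control buffer. */
              BUILD_BUG_ON(sizeof(struct tcp_skb_cb) > sizeof(skb->cb));
              /* ... */
      }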
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [NET]: Eliminate unused argument from sk_stream_alloc_pskb · df97c708
      Pavel Emelyanov committed
      The 3rd argument is always zero (according to grep :). Eliminate
      it and merge the function with sk_stream_alloc_skb.
      
      This saves 44 more bytes, and together with the previous patch
      we have:
      
      add/remove: 1/0 grow/shrink: 0/8 up/down: 183/-751 (-568)
      function                                     old     new   delta
      sk_stream_alloc_skb                            -     183    +183
      ip_rt_init                                   529     525      -4
      arp_ignore                                   112     107      -5
      __inet_lookup_listener                       284     274     -10
      tcp_sendmsg                                 2583    2481    -102
      tcp_sendpage                                1449    1300    -149
      tso_fragment                                 417     258    -159
      tcp_fragment                                1149     988    -161
      __tcp_push_pending_frames                   1998    1837    -161
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [NET]: Uninline the sk_stream_alloc_pskb · f561d0f2
      Pavel Emelyanov committed
      This function seems too big for inlining. Indeed, uninlining it
      saves half a kilobyte:
      
      add/remove: 1/0 grow/shrink: 0/7 up/down: 195/-719 (-524)
      function                                     old     new   delta
      sk_stream_alloc_pskb                           -     195    +195
      ip_rt_init                                   529     525      -4
      __inet_lookup_listener                       284     274     -10
      tcp_sendmsg                                 2583    2486     -97
      tcp_sendpage                                1449    1305    -144
      tso_fragment                                 417     267    -150
      tcp_fragment                                1149     992    -157
      __tcp_push_pending_frames                   1998    1841    -157
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [TCP]: Make tcp_splice_data_recv() static. · 6ff7751d
      Adrian Bunk committed
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    •
      [TCP]: Splice receive support. · 9c55e01c
      Jens Axboe committed
      Support for network splice receive.
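      
      In userspace terms this allows received TCP data to be moved into
      a pipe without a copy through user memory; a minimal usage sketch
      (sock_fd is assumed to be a connected TCP socket, error handling
      omitted):
      
      #define _GNU_SOURCE
      #include <fcntl.h>
      #include <unistd.h>
      
      /* Move up to 64 KB from a connected TCP socket into a pipe,
       * avoiding a copy through user memory. */
      static ssize_t splice_from_socket(int sock_fd, int pipe_wr)
      {
              return splice(sock_fd, NULL, pipe_wr, NULL,
                            65536, SPLICE_F_MOVE | SPLICE_F_MORE);
      }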
      Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  10. 07 Nov 2007, 1 commit
    •
      [INET]: Remove per bucket rwlock in tcp/dccp ehash table. · 230140cf
      Eric Dumazet committed
      As done two years ago on the IP route cache table (commit
      22c047cc), we can avoid using one
      lock per hash bucket for the huge TCP/DCCP hash tables.
      
      On a typical x86_64 platform, this saves about 2MB or 4MB of RAM, with
      little performance difference. (We hit a different cache line for the
      rwlock, but the bucket cache line then has a better sharing factor
      among CPUs, since we dirty it less often.) For netstat or ss commands
      that want a full scan of the hash table, we perform fewer memory accesses.
      
      Using a 'small' table of hashed rwlocks should be more than enough to
      provide correct SMP concurrency between different buckets, without
      using too much memory. Sizing of this table depends on
      num_possible_cpus() and various CONFIG settings.
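      
      The pattern is classic lock striping: many buckets share one
      rwlock from a small hashed table. A simplified sketch (the size
      and names here are illustrative, not the actual kernel ones):
      
      #define EHASH_LOCKS 256         /* illustrative; really derived from
                                       * num_possible_cpus() and CONFIG */
      static rwlock_t ehash_locks[EHASH_LOCKS];
      
      static inline rwlock_t *ehash_lockp(unsigned int hash)
      {
              /* Several buckets map to the same rwlock; that is fine
               * as long as contention stays low. */
              return &ehash_locks[hash & (EHASH_LOCKS - 1)];
      }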
      
      This patch provides some locking abstraction that may ease future
      work using a different model for the TCP/DCCP table.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 30 Oct 2007, 1 commit
    •
      [TCP]: Saner thash_entries default with much memory. · 0ccfe618
      Jean Delvare committed
      On systems with a very large amount of memory, the heuristics in
      alloc_large_system_hash() result in a very large TCP established hash
      table: 16 million entries for a 128 GB ia64 system. This makes
      reading from /proc/net/tcp pretty slow (well over a second) and, as a
      result, netstat is slow on these machines. I know that /proc/net/tcp is
      deprecated in favor of tcp_diag; however, at the moment netstat only
      knows of the former.
      
      I am skeptical that such a large TCP established hash is often needed.
      Just because a system has a lot of memory doesn't imply that it will
      have several million concurrent TCP connections. Thus I believe
      that we should put an arbitrary upper limit on the size of the TCP
      established hash by default. Users who really need a bigger hash can
      always use the thash_entries boot parameter to get more.
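      
      For reference, the override is a kernel boot parameter, e.g.
      (illustrative value; the allocator rounds hash sizes to a power
      of two):
      
      thash_entries=2097152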
      
      I propose 2 million entries as the arbitrary upper limit. This
      makes /proc/net/tcp reasonably fast on the system in question (0.2 s)
      while still being large enough for me to be confident that network
      performance won't suffer.
      
      This is just one way to limit the hash size; there are others. I am not
      familiar enough with the TCP code to decide which is best. Thus, I
      would welcome proposals for alternatives.
      
      [ 2 million is still too large, thus I've modified the limit in the
        change to be '512 * 1024'. -DaveM ]
      Signed-off-by: Jean Delvare <jdelvare@suse.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 20 Oct 2007, 1 commit
  13. 11 Oct 2007, 3 commits
  14. 03 Aug 2007, 1 commit
    •
      [TCP]: Invoke tcp_sendmsg() directly, do not use inet_sendmsg(). · 3516ffb0
      David S. Miller committed
      As discovered by Evgeniy Polyakov, if we try to sendmsg after
      a connection reset, we can do incredibly stupid things.
      
      The core issue is that inet_sendmsg() tries to autobind the
      socket, but we should never do that for TCP.  Instead we should
      just go straight into TCP's sendmsg() code, which will do all
      of the necessary state and pending socket error checks.
      
      TCP's sendpage already directly vectors to tcp_sendpage(), so this
      merely brings sendmsg() in line with that.
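      
      Concretely, the stream ops table ends up pointing straight at the
      TCP entry points; an abridged sketch of the idea (not the full
      table from net/ipv4/af_inet.c):
      
      const struct proto_ops inet_stream_ops = {
              .family   = PF_INET,
              /* ... */
              .sendmsg  = tcp_sendmsg,        /* was inet_sendmsg */
              .sendpage = tcp_sendpage,       /* already direct */
              /* ... */
      };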
      Signed-off-by: David S. Miller <davem@davemloft.net>
  15. 20 Jul 2007, 1 commit
    •
      mm: Remove slab destructors from kmem_cache_create(). · 20c2df83
      Paul Mundt committed
      Slab destructors were no longer supported after Christoph's
      c59def9f change. They've been
      BUGs for both slab and slub, and slob never supported them
      either.
      
      This rips out support for the dtor pointer from kmem_cache_create()
      completely and fixes up every single callsite in the kernel (there were
      about 224, not including the slab allocator definitions themselves,
      or the documentation references).
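      
      The visible API change is the dropped final argument; a hedged
      before/after sketch of a typical callsite (the cache name and
      flags are illustrative):
      
      /* Before: a constructor and a (long dead) destructor. */
      cachep = kmem_cache_create("example_cache", size, 0,
                                 SLAB_HWCACHE_ALIGN, NULL, NULL);
      
      /* After: the dtor parameter is gone. */
      cachep = kmem_cache_create("example_cache", size, 0,
                                 SLAB_HWCACHE_ALIGN, NULL);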
      Signed-off-by: Paul Mundt <lethal@linux-sh.org>
  16. 12 Jul 2007, 2 commits
  17. 24 Jun 2007, 1 commit
  18. 04 Jun 2007, 1 commit
  19. 31 May 2007, 1 commit
  20. 09 May 2007, 1 commit
  21. 04 May 2007, 1 commit
    •
      [TCP]: zero out rx_opt in tcp_disconnect() · b40b4f79
      Srinivas Aji committed
      When the server drops its connection, the NFS client reconnects using
      the same socket after disconnecting. If the new connection's SYN,ACK
      doesn't contain the TCP timestamp option and the old connection's did,
      tp->tcp_header_len is recomputed assuming no timestamp header, but
      tp->rx_opt.tstamp_ok remains set. tcp_build_and_update_options() then
      adds a timestamp option past the end of the allocated TCP header,
      overwriting TCP data, or, when the data is in skb_shinfo(skb)->frags[],
      overwriting skb_shinfo(skb) and causing a crash soon after. (The issue
      was debugged from such a crash.)
      
      Similarly, wscale_ok and sack_ok also get set based on the SYN,ACK
      packet but are not reset on disconnect, since they are only zeroed out
      at initialization. The patch zeroes out the entire tp->rx_opt struct in
      tcp_disconnect() to avoid this sort of problem.
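      
      The fix amounts to one line in tcp_disconnect(); roughly (a sketch
      consistent with the description above):
      
      /* Forget every option negotiated on the previous connection,
       * not just the timestamp flag. */
      memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));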
      Signed-off-by: Srinivas Aji <Aji_Srinivas@emc.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 29 Apr 2007, 1 commit
  23. 26 Apr 2007, 8 commits