1. 03 December 2010, 1 commit
    • tcp: Add timewait recycling bits to ipv6 connect code. · 493f377d
      Committed by David S. Miller
      This will also improve handling of the ipv6 tcp socket request
      backlog when syncookies are not enabled.  When the backlog
      becomes very deep, the last quarter of the backlog is limited to
      validated destinations.  Previously only ipv4 implemented
      this logic, but now ipv6 does too.
      
      Now we are only one step away from enabling timewait
      recycling for ipv6, and that step is simply filling in
      the implementation of tcp_v6_get_peer() and
      tcp_v6_tw_get_peer().
      Signed-off-by: David S. Miller <davem@davemloft.net>
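      A minimal user-space sketch in C of the "last quarter limited to
      validated destinations" rule described above, assuming illustrative
      helper and parameter names (accept_request, dst_validated); the numbers
      in main() are examples, not the kernel's own identifiers or code.

          #include <stdbool.h>
          #include <stdio.h>

          /* Accept a request unless the queue is in its last quarter and the
           * destination has not yet been validated (e.g. no cached metric
           * for the peer). */
          static bool accept_request(unsigned int queue_len,
                                     unsigned int max_backlog,
                                     bool dst_validated)
          {
                  unsigned int slots_left = max_backlog - queue_len;

                  if (slots_left < (max_backlog >> 2) && !dst_validated)
                          return false;   /* deep backlog, unvalidated peer: drop */
                  return true;
          }

          int main(void)
          {
                  printf("%d\n", accept_request(100, 256, false)); /* 1: plenty of room      */
                  printf("%d\n", accept_request(250, 256, false)); /* 0: last quarter, drop  */
                  printf("%d\n", accept_request(250, 256, true));  /* 1: validated, accepted */
                  return 0;
          }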
  2. 22 November 2010, 1 commit
  3. 26 July 2008, 1 commit
  4. 29 January 2008, 1 commit
  5. 15 November 2007, 1 commit
    • [INET]: Fix potential kfree on vmalloc-ed area of request_sock_queue · dab6ba36
      Committed by Pavel Emelyanov
      The request_sock_queue's listen_opt is either vmalloc-ed or
      kmalloc-ed, depending on the number of table entries. Thus it
      is expected to be handled properly on free, which is done in
      reqsk_queue_destroy().
      
      However, the error path in inet_csk_listen_start() calls
      the lite version of reqsk_queue_destroy(), called
      __reqsk_queue_destroy(), which calls kfree() unconditionally.
      
      Fix this, and move __reqsk_queue_destroy() into a .c file, as
      it looks too big to be inlined.
      
      As David also noticed, this is an error-recovery path only,
      so no locking is required and the lopt is known to be non-NULL.
      
      reqsk_queue_yank_listen_sk() is also now only used in
      net/core/request_sock.c, so we should move it there too.
      Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
      Acked-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
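      A kernel-style sketch, not the actual net/core/request_sock.c code, of
      the pattern the fix restores: the teardown path must pick vfree() or
      kfree() by the same size test used at allocation time, instead of
      calling kfree() unconditionally. The helper names are assumptions for
      illustration only.

          #include <linux/mm.h>
          #include <linux/slab.h>
          #include <linux/vmalloc.h>

          /* Small tables come from the slab allocator, large ones from the
           * vmalloc area. */
          static void *alloc_listen_table(size_t size)
          {
                  if (size > PAGE_SIZE)
                          return vmalloc(size);
                  return kmalloc(size, GFP_KERNEL);
          }

          /* Freeing must use the matching primitive: handing a vmalloc-ed
           * pointer to kfree() (the bug fixed here) corrupts memory. */
          static void free_listen_table(void *table, size_t size)
          {
                  if (size > PAGE_SIZE)
                          vfree(table);
                  else
                          kfree(table);
          }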
  6. 03 December 2006, 1 commit
    • [NET]: Size listen hash tables using backlog hint · 72a3effa
      Committed by Eric Dumazet
      We currently allocate a fixed-size hash table (TCP_SYNQ_HSIZE = 512 slots)
      for each LISTEN socket, regardless of various parameters (the listen
      backlog, for example).
      
      On x86_64, this means order-1 allocations (which might fail), even for
      'small' sockets expecting few connections. Conversely, a huge server
      wanting a backlog of 50000 is slowed down a bit because of this fixed
      limit.
      
      This patch makes the sizing of the listen hash table a dynamic parameter,
      depending on:
      - the net.core.somaxconn tunable (default: 128)
      - the net.ipv4.tcp_max_syn_backlog tunable (default: 256, 1024 or 128)
      - the backlog value given by the user application (2nd parameter of listen())
      
      For large allocations (bigger than PAGE_SIZE), we use vmalloc() instead of
      kmalloc().
      
      We still limit memory allocation with the two existing tunables (somaxconn
      & tcp_max_syn_backlog), so for standard setups this patch actually reduces
      RAM usage.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
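      A user-space sketch in C of the sizing rule described above: clamp the
      listen() backlog by the two tunables, enforce a small minimum, and round
      the result up to a power of two. The minimum of 8 and the exact rounding
      step are assumptions about the behaviour of that era's reqsk_queue_alloc(),
      so treat the constants and helper names as illustrative.

          #include <stdio.h>

          /* Round n up to the next power of two (minimum 1). */
          static unsigned int roundup_pow_of_two(unsigned int n)
          {
                  unsigned int r = 1;

                  while (r < n)
                          r <<= 1;
                  return r;
          }

          /* Pick a hash table size from the three inputs named in the commit:
           * the listen() backlog, net.core.somaxconn and
           * net.ipv4.tcp_max_syn_backlog. */
          static unsigned int listen_hash_size(unsigned int backlog,
                                               unsigned int somaxconn,
                                               unsigned int max_syn_backlog)
          {
                  unsigned int n = backlog;

                  if (n > somaxconn)
                          n = somaxconn;
                  if (n > max_syn_backlog)
                          n = max_syn_backlog;
                  if (n < 8)
                          n = 8;          /* keep a small minimum table */
                  return roundup_pow_of_two(n + 1);
          }

          int main(void)
          {
                  /* A small server gets a much smaller table than the old fixed
                   * 512 slots; a big server is capped by the tunables. */
                  printf("%u\n", listen_hash_size(16, 128, 1024));       /* 32    */
                  printf("%u\n", listen_hash_size(50000, 65536, 65536)); /* 65536 */
                  return 0;
          }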
  7. 10 April 2006, 1 commit
  8. 27 March 2006, 1 commit
  9. 28 February 2006, 1 commit
  10. 30 August 2005, 3 commits
  11. 19 June 2005, 3 commits