1. 10 Jul 2009 (1 commit)
    • net: adding memory barrier to the poll and receive callbacks · a57de0b4
      Authored by Jiri Olsa
      Add a memory barrier after the poll_wait function, paired with the
      receive callbacks. Add the functions sock_poll_wait() and
      sk_has_sleeper() to wrap the memory barrier (a sketch of both follows
      this entry).
      
      Without the memory barrier, the following race can happen.  It fires
      when the two code paths below meet and the tp->rcv_nxt and
      __add_wait_queue updates stay in the CPU caches.
      
      CPU1                         CPU2
      
      sys_select                   receive packet
        ...                        ...
        __add_wait_queue           update tp->rcv_nxt
        ...                        ...
        tp->rcv_nxt check          sock_def_readable
        ...                        {
        schedule                      ...
                                      if (sk->sk_sleep && waitqueue_active(sk->sk_sleep))
                                              wake_up_interruptible(sk->sk_sleep)
                                      ...
                                   }
      
      If there were no caching, the code would work fine, since the
      wait_queue and rcv_nxt updates are ordered opposite to each other.
      
      Meaning that once tp->rcv_nxt is updated by CPU2, CPU1 has either
      already passed the tp->rcv_nxt check and sleeps, or it will see the
      new value of tp->rcv_nxt and return with a new data mask.
      In both cases the process (CPU1) has been added to the wait queue, so
      the waitqueue_active() call on CPU2 cannot miss it and will wake up CPU1.
      
      The bad case is when the __add_wait_queue changes done by CPU1 stay in
      its cache, and so does the tp->rcv_nxt update on the CPU2 side.  CPU1
      will then end up calling schedule() and sleeping forever if no more
      data arrives on the socket.
      
      Calls to poll_wait() in the following modules were omitted:
      	net/bluetooth/af_bluetooth.c
      	net/irda/af_irda.c
      	net/irda/irnet/irnet_ppp.c
      	net/mac80211/rc80211_pid_debugfs.c
      	net/phonet/socket.c
      	net/rds/af_rds.c
      	net/rfkill/core.c
      	net/sunrpc/cache.c
      	net/sunrpc/rpc_pipe.c
      	net/tipc/socket.c
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
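      The sketch below illustrates the two helpers described above, based on
      this changelog (the exact in-tree code may differ slightly): the
      receive callback calls sk_has_sleeper() before testing the wait queue,
      and the poll side calls sock_poll_wait(), so the smp_mb() in each
      pairs with the other and the race above can no longer hide in the CPU
      caches.

        /* Receive-callback side: barrier before checking for sleepers. */
        static inline int sk_has_sleeper(struct sock *sk)
        {
                /*
                 * Pairs with the barrier in sock_poll_wait(): make sure the
                 * __add_wait_queue() done by the poller is visible before we
                 * test the wait queue.
                 */
                smp_mb();
                return sk->sk_sleep && waitqueue_active(sk->sk_sleep);
        }

        /* Poll side: barrier after registering on the wait queue. */
        static inline void sock_poll_wait(struct file *filp,
                                          wait_queue_head_t *wait_address,
                                          poll_table *p)
        {
                if (p && wait_address) {
                        poll_wait(filp, wait_address, p);
                        /*
                         * Pairs with the barrier in sk_has_sleeper(): make
                         * sure our wait-queue registration is visible before
                         * we go on to check sk_receive_queue / rcv_nxt.
                         */
                        smp_mb();
                }
        }
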
  2. 30 Jun 2009 (1 commit)
  3. 29 May 2009 (1 commit)
  4. 27 May 2009 (5 commits)
  5. 19 May 2009 (1 commit)
  6. 17 Apr 2009 (1 commit)
  7. 01 Apr 2009 (1 commit)
  8. 19 Mar 2009 (1 commit)
  9. 16 Mar 2009 (3 commits)
    • tcp: make sure xmit goal size never becomes zero · afece1c6
      Authored by Ilpo Järvinen
      This is not too likely to happen; it would basically require crafted
      packets (they must hit the max guard in tcp_bound_to_half_wnd()).
      Nothing that bad would seem to happen anyway, as tcp_mem and the
      congestion window prevent a runaway from hurting too much at some
      point (though it is not clear what all the zero-sized segments we
      would generate do in the write queue).  Preventing it regardless is
      certainly the best way to go (an illustrative guard follows this
      entry).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Evgeniy Polyakov <zbr@ioremap.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
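      A minimal, illustrative guard in the spirit of this fix, not
      necessarily the exact patch; the helper name below is made up for the
      sketch.  The point is simply to never let the computed goal drop below
      one MSS.

        /* Hypothetical guard: clamp the goal so it never reaches zero. */
        static u32 tcp_bound_xmit_size_goal(u32 xmit_size_goal, u32 mss_now)
        {
                return max(xmit_size_goal, mss_now);    /* at least one MSS */
        }
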
    • tcp: cache result of earlier divides when mss-aligning things · 2a3a041c
      Authored by Ilpo Järvinen
      The result is very unlikely to change often, so we hardly need to
      divide again after doing it once for a connection.  Yet, if a divide
      does become necessary, we detect that, do the right thing, and settle
      back into the non-divide state.  The cached value takes the u16 space
      which was previously taken by the plain xmit_size_goal.

      This should take care of part of the TSO vs non-TSO difference we
      found earlier (a sketch of the caching follows this entry).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
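      A hedged sketch of the caching idea inside the goal computation (field
      and variable names are assumptions based on the changelogs, not copied
      from the tree): keep the segment count from the previous divide and
      only divide again when the cached, mss-aligned goal no longer brackets
      the freshly computed one.

        /* Avoid the divide while the cached segment count is still valid. */
        old_size_goal = tp->xmit_size_goal_segs * mss_now;

        if (likely(old_size_goal <= xmit_size_goal &&
                   old_size_goal + mss_now > xmit_size_goal)) {
                /* Cache hit: reuse the mss-aligned goal from last time. */
                xmit_size_goal = old_size_goal;
        } else {
                /* Cache miss: divide once and remember the result. */
                tp->xmit_size_goal_segs = xmit_size_goal / mss_now;
                xmit_size_goal = tp->xmit_size_goal_segs * mss_now;
        }
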
    • tcp: simplify tcp_current_mss · 0c54b85f
      Authored by Ilpo Järvinen
      There's very little need for most of the call sites to get
      tp->xmit_goal_size updated.  As it stands that costs us a divide, so
      slice the function in two.  Also, the only users of tp->xmit_goal_size
      sit directly behind tcp_current_mss(), so there's no need to store
      that variable in tcp_sock at all!  Dropping xmit_goal_size currently
      leaves a 16-bit hole, and some reorganization would again be necessary
      to change that (but I'm aiming to fill that hole with a u16
      xmit_goal_size_segs to cache the result of the remaining divide and
      deal with that TSO regression).

      Bring the xmit_goal_size parts into tcp.c (a sketch of the split
      follows this entry).
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Cc: Evgeniy Polyakov <zbr@ioremap.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Signed-off-by: David S. Miller <davem@davemloft.net>
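      A sketch of what such a split could look like; the helper name
      tcp_send_mss and the tcp_xmit_size_goal() signature are assumptions
      for illustration.  Plain callers keep calling tcp_current_mss() and
      never pay for the goal computation, while the send paths that also
      need the goal use one combined helper.

        /* Send paths that need both values call this combined helper... */
        static int tcp_send_mss(struct sock *sk, int *size_goal, int flags)
        {
                int mss_now = tcp_current_mss(sk);

                *size_goal = tcp_xmit_size_goal(sk, mss_now,
                                                !(flags & MSG_OOB));
                return mss_now;
        }

        /* ...while everyone else simply calls tcp_current_mss(sk). */
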
  10. 02 Mar 2009 (1 commit)
  11. 09 Feb 2009 (1 commit)
  12. 30 Jan 2009 (1 commit)
    • gro: Avoid copying headers of unmerged packets · 86911732
      Authored by Herbert Xu
      Unfortunately simplicity isn't always the best.  The fraginfo
      interface turned out to be suboptimal.  The problem was quite
      obvious.  For every packet, we have to copy the headers from
      the frags structure into skb->head, even though for 99% of the
      packets this part is immediately thrown away after the merge.
      
      LRO didn't have this problem because it directly read the headers
      from the frags structure.
      
      This patch attempts to address this by creating an interface that
      allows GRO to access the headers in the first frag without having to
      copy them.  Because all drivers that use frags place the headers in
      the first frag, this optimisation should be enough (a sketch of the
      idea follows this entry).
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
      Signed-off-by: David S. Miller <davem@davemloft.net>
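      A hedged sketch of the idea only; the helper name and details below
      are illustrative and not the interface this commit actually added.
      The point is that when the headers live in the first page fragment,
      GRO can read them in place instead of copying them into skb->head.

        /* Illustrative helper: peek at the first hlen bytes of headers. */
        static void *gro_header_peek(struct sk_buff *skb, unsigned int hlen)
        {
                const skb_frag_t *frag = &skb_shinfo(skb)->frags[0];

                /* Headers sit in the first fragment: return them in place. */
                if (skb_shinfo(skb)->nr_frags && hlen <= skb_frag_size(frag))
                        return skb_frag_address(frag);

                /* Fallback: pull the headers into the linear area. */
                return pskb_may_pull(skb, hlen) ? skb->data : NULL;
        }
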
  13. 27 Jan 2009 (1 commit)
    • tcp: Fix length tcp_splice_data_recv passes to skb_splice_bits. · 9fa5fdf2
      Authored by Dimitris Michailidis
      tcp_splice_data_recv has two lengths to consider: the len parameter it
      gets from tcp_read_sock, which specifies the amount of data in the skb,
      and rd_desc->count, which is the amount of data the splice caller still
      wants.  Currently it passes just the latter to skb_splice_bits, which then
      splices min(rd_desc->count, skb->len - offset) bytes.
      
      Most of the time this is fine, except when the skb contains urgent data.
      In that case len goes only up to the urgent byte and is less than
      skb->len - offset.  By ignoring len, tcp_splice_data_recv may (a)
      splice data tcp_read_sock told it not to, and (b) return to
      tcp_read_sock a value > len.
      
      Now, tcp_read_sock doesn't handle used > len and leaves the socket in a
      bad state (both sk_receive_queue and copied_seq are bad at that point)
      resulting in duplicated data and corruption.
      
      Fix this by passing min(rd_desc->count, len) to skb_splice_bits() (a
      sketch follows this entry).
      Signed-off-by: Dimitris Michailidis <dm@chelsio.com>
      Acked-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
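      A sketch of the fixed callback, following the description above; the
      splice-state fields (tss->pipe, tss->flags) are assumptions used for
      illustration.

        static int tcp_splice_data_recv(read_descriptor_t *rd_desc,
                                        struct sk_buff *skb,
                                        unsigned int offset, size_t len)
        {
                struct tcp_splice_state *tss = rd_desc->arg.data;
                int ret;

                /* Never splice past what tcp_read_sock allows (len) nor past
                 * what the splice caller still wants (rd_desc->count). */
                ret = skb_splice_bits(skb, offset, tss->pipe,
                                      min(rd_desc->count, len), tss->flags);
                if (ret > 0)
                        rd_desc->count -= ret;
                return ret;
        }
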
  14. 15 Jan 2009 (1 commit)
  15. 14 Jan 2009 (1 commit)
    • tcp: splice as many packets as possible at once · 33966dd0
      Authored by Willy Tarreau
      As spotted by Willy Tarreau, the current splice() from a TCP socket to
      a pipe is not optimal: it processes at most one segment per call.
      This results in low performance and very high overhead due to the
      syscall rate when splicing from interfaces which do not support LRO.
      
      Willy provided a patch inside tcp_splice_read(), but a better fix
      is to let tcp_read_sock() process as many segments as possible, so
      that tcp_rcv_space_adjust() and tcp_cleanup_rbuf() are called less
      often.
      
      With this change, splice() behaves like tcp_recvmsg(), being able
      to consume many skbs in one system call.  With a typical 1460 bytes
      of payload per frame, that means splice(SPLICE_F_NONBLOCK) can return
      16*1460 = 23360 bytes (a sketch of the mechanism follows this entry).
      Signed-off-by: Willy Tarreau <w@1wt.eu>
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
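      A hedged sketch of the mechanism (field names such as tss->len are
      assumptions): the read descriptor carries the remaining byte budget,
      so tcp_read_sock() keeps walking the receive queue and invoking the
      actor until the budget is spent or the queue is empty, instead of
      returning after a single segment.

        /* Give tcp_read_sock() the whole splice budget up front... */
        read_descriptor_t rd_desc = {
                .arg.data = tss,
                .count    = tss->len,   /* bytes splice() may still consume */
        };

        /* ...so it can loop over several skbs in one call; the actor
         * (tcp_splice_data_recv, sketched above) decrements rd_desc.count
         * as it splices each segment into the pipe. */
        copied = tcp_read_sock(sk, &rd_desc, tcp_splice_data_recv);
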
  16. 09 Jan 2009 (1 commit)
  17. 07 Jan 2009 (2 commits)
  18. 05 Jan 2009 (3 commits)
  19. 30 Dec 2008 (1 commit)
  20. 16 Dec 2008 (1 commit)
  21. 26 Nov 2008 (2 commits)
  22. 17 Nov 2008 (1 commit)
    • net: Convert TCP & DCCP hash tables to use RCU / hlist_nulls · 3ab5aee7
      Authored by Eric Dumazet
      RCU was added to UDP lookups, using a fast infrastructure:
      - the socket kmem_cache uses SLAB_DESTROY_BY_RCU and doesn't pay the
        price of call_rcu() at freeing time.
      - hlist_nulls permits the use of very few memory barriers.

      This patch uses the same infrastructure for TCP/DCCP established
      and timewait sockets.

      Thanks to SLAB_DESTROY_BY_RCU, there is no slowdown for applications
      using short-lived TCP connections.  A follow-up patch converting
      rwlocks to spinlocks will even speed up this case.

      __inet_lookup_established() is pretty fast now that we don't have to
      dirty a contended cache line (read_lock/read_unlock).

      Only the established and timewait hash tables are converted to RCU
      (the bind and listen tables still use traditional locking); a lookup
      sketch follows this entry.
      Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
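      A hedged sketch of what an RCU/hlist_nulls lookup in this style looks
      like; the match() predicate, head->chain and slot are placeholders.
      Because SLAB_DESTROY_BY_RCU lets a freed socket be reused immediately,
      a socket can move to another chain while we walk it, so the lookup
      checks the terminating nulls value and restarts if it ended up on the
      wrong chain.

        rcu_read_lock();
        begin:
        sk_nulls_for_each_rcu(sk, node, &head->chain) {
                if (match(sk /* , saddr, daddr, ports, dif ... */))
                        goto found;
        }
        /*
         * The nulls value at the end of the chain encodes the hash slot it
         * belongs to.  If it is not ours, the socket we were following was
         * recycled onto another chain: restart the lookup.
         */
        if (get_nulls_value(node) != slot)
                goto begin;
        found:
        rcu_read_unlock();
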
  23. 05 Nov 2008 (1 commit)
  24. 03 Nov 2008 (1 commit)
  25. 08 Oct 2008 (2 commits)
    • tcp: kill pointless urg_mode · 33f5f57e
      Authored by Ilpo Järvinen
      It all started with me noticing that this urgent check in
      tcp_clean_rtx_queue is unnecessarily inside the loop.  Then I took a
      longer look at it and found out that the users of urg_mode can
      trivially do without it, well, almost: there was one gotcha.

      Bonus: those funny people who use urg with a >= 2^31 write_seq -
      snd_una could now rejoice too (that's the only purpose for the
      between() being there, otherwise a simple compare would have done
      the thing; see the note after this entry).  Not that I assume the
      rest of the tcp code happily lives with such mind-boggling numbers
      :-).  Alas, it turned out to be impossible to set wmem to such
      numbers anyway; yes, I really tried a big sendfile after setting
      some wmem, but nothing happened :-).  ...tcp_wmem is an int and so
      is sk_sndbuf...  So I hacked the variable to a long and found out
      that it seems to work... :-)
      Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
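      Background note, not part of this patch: TCP sequence comparisons are
      done modulo 2^32 with signed subtraction, so a plain compare is only
      reliable while the two values are less than 2^31 apart, which is
      exactly the corner case the between() mentioned above was covering.
      The wrap-safe helpers look like this in the kernel's TCP headers:

        /* Wrap-safe sequence compare: valid while the distance < 2^31. */
        static inline bool before(__u32 seq1, __u32 seq2)
        {
                return (__s32)(seq1 - seq2) < 0;
        }
        #define after(seq2, seq1)       before(seq1, seq2)
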
    • net: wrap sk->sk_backlog_rcv() · c57943a1
      Authored by Peter Zijlstra
      Wrap the call to sk->sk_backlog_rcv() in a function.  This will allow
      the generic sk_backlog_rcv behaviour to be extended later (a minimal
      sketch follows this entry).
      Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Signed-off-by: David S. Miller <davem@davemloft.net>
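      A minimal sketch of such a wrapper, assuming it sits next to the other
      sock helpers: every backlog delivery now funnels through one function
      that later patches can extend (for example with accounting) instead of
      touching every sk->sk_backlog_rcv() call site.

        /* Single funnel for backlog processing; extend here, not at the
         * many sk->sk_backlog_rcv() call sites. */
        static inline int sk_backlog_rcv(struct sock *sk, struct sk_buff *skb)
        {
                return sk->sk_backlog_rcv(sk, skb);
        }
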
  26. 07 Oct 2008 (1 commit)
  27. 26 Jul 2008 (1 commit)
  28. 19 Jul 2008 (1 commit)
  29. 18 Jul 2008 (1 commit)