1. 13 Jul 2018, 1 commit
    • tcp: use monotonic timestamps for PAWS · cca9bab1
      Authored by Arnd Bergmann
      Using get_seconds() for timestamps is deprecated since it can lead
      to overflows on 32-bit systems. While the interface generally doesn't
      overflow until year 2106, the specific implementation of the TCP PAWS
      algorithm breaks in 2038 when the intermediate signed 32-bit timestamps
      overflow.
      
      A related problem is that the local timestamps in CLOCK_REALTIME form
      lead to unexpected behavior when settimeofday is called to set the system
      clock backwards or forwards by more than 24 days.
      
      While the first problem could be solved by using an overflow-safe method
      of comparing the timestamps, a nicer solution is to use a monotonic
      clocksource with ktime_get_seconds() that simply doesn't overflow (at
      least not until 136 years after boot) and that doesn't change during
      settimeofday().
      
      To make 32-bit and 64-bit architectures behave the same way here, and
      also save a few bytes in the tcp_options_received structure, I'm changing
      the type to a 32-bit integer, which is now safe on all architectures.
      
      Finally, the ts_recent_stamp field also (confusingly) gets used to store
      a jiffies value in tcp_synq_overflow()/tcp_synq_no_recent_overflow().
      This is currently safe, but changing the type to 32-bit requires
      some small changes there to keep it working.
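      
      A minimal user-space sketch of the freshness check this change enables,
      assuming a free-running monotonic seconds counter in place of
      ktime_get_seconds(); the helper names below are illustrative, not the
      kernel's own:
      
        #include <stdint.h>
        #include <stdbool.h>
        #include <time.h>
        
        #define TCP_PAWS_24DAYS (60 * 60 * 24 * 24)  /* PAWS freshness window, seconds */
        
        /* User-space stand-in for ktime_get_seconds(): a monotonic counter
         * that does not jump when the wall clock is set via settimeofday(). */
        static uint32_t monotonic_seconds(void)
        {
                struct timespec ts;
        
                clock_gettime(CLOCK_MONOTONIC, &ts);
                return (uint32_t)ts.tv_sec;
        }
        
        /* ts_recent_stamp is "fresh" if recorded less than 24 days ago.
         * Unsigned 32-bit subtraction wraps correctly, so the check stays
         * valid even after the counter itself wraps (~136 years after boot). */
        static bool ts_recent_fresh(uint32_t ts_recent_stamp)
        {
                return monotonic_seconds() - ts_recent_stamp < TCP_PAWS_24DAYS;
        }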
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      cca9bab1
  2. 28 Jun 2018, 1 commit
  3. 05 Jun 2018, 1 commit
  4. 11 May 2018, 1 commit
    • tcp: Add mark for TIMEWAIT sockets · 00483690
      Authored by Jon Maxwell
      This version has some suggestions by Eric Dumazet:
      
      - Use a local variable for the mark in IPv6 instead of ctl_sk to avoid SMP
      races.
      - Use the more elegant "IP4_REPLY_MARK(net, skb->mark) ?: sk->sk_mark"
      statement.
      - Factorize code as sk_fullsock() check is not necessary.
      
      Aidan McGurn from Openwave Mobility systems reported the following bug:
      
      "Marked routing is broken on customer deployment. Its effects are large
      increase in Uplink retransmissions caused by the client never receiving
      the final ACK to their FINACK - this ACK misses the mark and routes out
      of the incorrect route."
      
      Currently marks are added to sk_buffs for replies when the "fwmark_reflect"
      sysctl is enabled, but not for TW sockets that had sk->sk_mark set via
      setsockopt(SO_MARK..).
      
      Fix this in IPv4/v6 by adding tw->tw_mark for TIME_WAIT sockets. Copy the
      original sk->sk_mark in __inet_twsk_hashdance() to the new tw->tw_mark location.
      Then propagate this so that the skb gets sent with the correct mark. Do the same
      for resets. Give the "fwmark_reflect" sysctl precedence over sk->sk_mark so that
      netfilter rules are still honored.
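      
      The precedence rule can be summarized in a small sketch (illustrative
      only; the function and parameter names below are not the kernel's):
      
        #include <stdint.h>
        
        /* Pick the mark for a reply sent on behalf of a TIME_WAIT socket:
         * a reflected mark from the incoming skb (fwmark_reflect) wins over
         * the mark copied from sk->sk_mark into tw->tw_mark at
         * __inet_twsk_hashdance() time, so netfilter rules still apply. */
        static uint32_t reply_mark(int fwmark_reflect_enabled,
                                   uint32_t incoming_skb_mark,
                                   uint32_t tw_mark)
        {
                if (fwmark_reflect_enabled && incoming_skb_mark)
                        return incoming_skb_mark;
                return tw_mark;
        }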
      Signed-off-by: Jon Maxwell <jmaxwell37@gmail.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      00483690
  5. 01 Apr 2018, 1 commit
  6. 15 Feb 2018, 1 commit
    • tcp: try to keep packet if SYN_RCV race is lost · e0f9759f
      Authored by Eric Dumazet
      배석진 reported that in some situations, packets for a given 5-tuple
      end up being processed by different CPUs.
      
      This involves RPS and fragmentation.
      
      배석진 is seeing packet drops when a SYN_RECV request socket is
      moved into ESTABLISHED state. Other states are protected by the socket lock.
      
      This is caused by a CPU losing the race, and simply not caring enough.
      
      Since this seems to occur frequently, we can do better and perform
      a second lookup.
      
      Note that all the needed memory barriers are already in the existing code,
      thanks to the spin_lock()/spin_unlock() pair in inet_ehash_insert()
      and reqsk_put(). The second lookup must find the new socket,
      unless it has already been accepted and closed by another CPU.
      
      Note that the fragmentation could be avoided in the first place by
      use of a correct TCP MSS option in the SYN{ACK} packet, but this
      does not mean we cannot be more robust.
      
      Many thanks to 배석진 for a very detailed analysis.
      Reported-by: 배석진 <soukjin.bae@samsung.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e0f9759f
  7. 14 Dec 2017, 1 commit
  8. 02 Dec 2017, 1 commit
  9. 11 Nov 2017, 2 commits
  10. 05 Nov 2017, 1 commit
    • tcp: higher throughput under reordering with adaptive RACK reordering wnd · 1f255691
      Authored by Priyaranjan Jha
      Currently TCP RACK loss detection does not work well if packets are
      being reordered beyond its static reordering window (min_rtt/4). Under
      such reordering it may falsely trigger loss recoveries and reduce TCP
      throughput significantly.
      
      This patch improves that by increasing and reducing the reordering
      window based on DSACK, which is now supported in major TCP implementations.
      It makes RACK's reo_wnd adaptive based on DSACK and no. of recoveries.
      
      - If a DSACK is received, increment reo_wnd by min_rtt/4 (upper bounded
        by srtt), since there is a possibility that the spurious retransmission
        was due to a reordering delay longer than reo_wnd.
      
      - Persist the current reo_wnd value for TCP_RACK_RECOVERY_THRESH (16)
        successful recoveries (this accounts for a full DSACK-based loss
        recovery undo). After that, reset it to the default (min_rtt/4).
      
      - reo_wnd is incremented at most once per RTT, so that the new DSACK
        we are reacting to is (approximately) due to a spurious retransmission
        sent after the last reo_wnd update.
      
      - reo_wnd is tracked in terms of steps (of min_rtt/4), rather than as an
        absolute value, to account for changes in RTT.
      
      In our internal testing, we observed significant increase in throughput,
      in scenarios where reordering exceeds min_rtt/4 (previous static value).
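      
      A self-contained sketch of the adaptation logic described above (field
      and function names are illustrative; only TCP_RACK_RECOVERY_THRESH comes
      from the patch itself):
      
        #include <stdint.h>
        
        #define TCP_RACK_RECOVERY_THRESH 16   /* recoveries before reo_wnd resets */
        
        struct rack_state {
                uint32_t reo_wnd_steps;   /* reordering window, in min_rtt/4 steps */
                uint32_t reo_wnd_persist; /* recoveries left before reset to 1 step */
                uint32_t last_bump_rtt;   /* RTT round of the last increase */
        };
        
        /* On a DSACK: widen the window by one step (at most once per RTT round)
         * and keep the widened value for the next 16 successful recoveries. */
        static void rack_on_dsack(struct rack_state *r, uint32_t rtt_round)
        {
                if (r->last_bump_rtt == rtt_round)
                        return;                 /* already bumped this RTT */
                r->reo_wnd_steps += 1;
                r->reo_wnd_persist = TCP_RACK_RECOVERY_THRESH;
                r->last_bump_rtt = rtt_round;
        }
        
        /* After a successful recovery: decay persistence; once it runs out,
         * fall back to the default window of one min_rtt/4 step. */
        static void rack_on_recovery_end(struct rack_state *r)
        {
                if (r->reo_wnd_persist && --r->reo_wnd_persist == 0)
                        r->reo_wnd_steps = 1;
        }
        
        /* Effective window in microseconds, upper bounded by srtt. */
        static uint32_t rack_reo_wnd_us(const struct rack_state *r,
                                        uint32_t min_rtt_us, uint32_t srtt_us)
        {
                uint64_t wnd = (uint64_t)r->reo_wnd_steps * (min_rtt_us / 4);
        
                return wnd < srtt_us ? (uint32_t)wnd : srtt_us;
        }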
      Signed-off-by: Priyaranjan Jha <priyarjha@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      1f255691
  11. 28 Oct 2017, 1 commit
  12. 27 Oct 2017, 3 commits
  13. 26 Oct 2017, 1 commit
  14. 24 Oct 2017, 1 commit
  15. 06 Oct 2017, 1 commit
    • tcp: new list for sent but unacked skbs for RACK recovery · e2080072
      Authored by Eric Dumazet
      This patch adds a new queue (list) that tracks the sent but not yet
      acked or SACKed skbs for a TCP connection. The list is chronologically
      ordered by skb->skb_mstamp (the head is the oldest sent skb).
      
      This list will be used to optimize TCP Rack recovery, which checks
      an skb's timestamp to judge if it has been lost and needs to be
      retransmitted. Since TCP write queue is ordered by sequence instead
      of sent time, RACK has to scan over the write queue to catch all
      eligible packets to detect lost retransmission, and iterates through
      SACKed skbs repeatedly.
      
      Special care for rare events:
      1. TCP repair fakes skb transmission, so the send queue needs to be adjusted
      2. SACK reneging would require re-inserting SACKed skbs into the
         send queue. For now I believe it's not worth the complexity to
         make RACK work perfectly on SACK reneging, so we do nothing here.
      3. Fast Open: currently for non-TFO, send-queue correctly queues
         the pure SYN packet. For TFO which queues a pure SYN and
         then a data packet, send-queue only queues the data packet but
         not the pure SYN due to the structure of TFO code. This is okay
         because the SYN receiver would never respond with a SACK on a
         missing SYN (i.e. SYN is never fast-retransmitted by SACK/RACK).
      
      In order to not grow sk_buff, we use a union for the new list and the
      _skb_refdst/destructor fields. This is a bit complicated because
      we need to make sure _skb_refdst and destructor are properly zeroed
      before the skb is cloned/copied at transmit, and before it is freed.
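      
      The idea of the list, sketched in user-space C (the struct and helpers
      are illustrative; in the kernel the list node lives in the skb itself,
      in the union with _skb_refdst/destructor):
      
        #include <stdint.h>
        #include <sys/queue.h>
        
        struct sent_pkt {
                uint32_t seq;
                uint64_t sent_time_us;          /* analogue of skb->skb_mstamp */
                TAILQ_ENTRY(sent_pkt) tsorted;  /* time-ordered list linkage */
        };
        
        TAILQ_HEAD(tsorted_queue, sent_pkt);
        
        /* On first transmit: append to the tail, so the head is always the
         * oldest outstanding packet and RACK can stop scanning as soon as it
         * reaches a packet sent too recently to be declared lost. */
        static void on_transmit(struct tsorted_queue *q, struct sent_pkt *p)
        {
                TAILQ_INSERT_TAIL(q, p, tsorted);
        }
        
        /* On cumulative ACK or SACK covering this packet: unlink it, so the
         * list only ever holds sent-but-unacked data. */
        static void on_acked(struct tsorted_queue *q, struct sent_pkt *p)
        {
                TAILQ_REMOVE(q, p, tsorted);
        }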
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e2080072
  16. 31 Aug 2017, 1 commit
  17. 01 Aug 2017, 2 commits
    • tcp: remove header prediction · 45f119bf
      Authored by Florian Westphal
      Like prequeue, I am not sure this is overly useful nowadays.
      
      If we receive a train of packets, GRO will aggregate them if the
      headers are the same (HP predates GRO by several years) so we don't
      get a per-packet benefit, only a per-aggregated-packet one.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      45f119bf
    • tcp: remove prequeue support · e7942d06
      Authored by Florian Westphal
      prequeue is a tcp receive optimization that moves part of rx processing
      from bh to process context.
      
      This only works if the socket being processed belongs to a process that
      is blocked in recv on that socket.
      
      In practice, this doesn't happen that often anymore because nowadays
      servers tend to use an event-driven (epoll) model.
      
      Even normal client applications (web browsers) commonly use many tcp
      connections in parallel.
      
      This has a measurable impact only in netperf (which uses plain recv and
      thus allows prequeue use) from host to a locally running VM (~4%); however,
      there were no changes when using netperf between two physical hosts with
      ixgbe interfaces.
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e7942d06
  18. 02 Jul 2017, 1 commit
  19. 08 Jun 2017, 1 commit
  20. 18 May 2017, 3 commits
  21. 04 May 2017, 1 commit
    • tcp: do not inherit fastopen_req from parent · 8b485ce6
      Authored by Eric Dumazet
      Under fuzzer stress, it is possible that a child gets a non-NULL
      fastopen_req pointer from its parent at accept() time, when/if the parent
      morphs from a listener into an active session.
      
      We need to make sure this cannot happen, by clearing the field after
      socket cloning.
      
      BUG: Double free or freeing an invalid pointer
      Unexpected shadow byte: 0xFB
      CPU: 3 PID: 20933 Comm: syz-executor3 Not tainted 4.11.0+ #306
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs
      01/01/2011
      Call Trace:
       <IRQ>
       __dump_stack lib/dump_stack.c:16 [inline]
       dump_stack+0x292/0x395 lib/dump_stack.c:52
       kasan_object_err+0x1c/0x70 mm/kasan/report.c:164
       kasan_report_double_free+0x5c/0x70 mm/kasan/report.c:185
       kasan_slab_free+0x9d/0xc0 mm/kasan/kasan.c:580
       slab_free_hook mm/slub.c:1357 [inline]
       slab_free_freelist_hook mm/slub.c:1379 [inline]
       slab_free mm/slub.c:2961 [inline]
       kfree+0xe8/0x2b0 mm/slub.c:3882
       tcp_free_fastopen_req net/ipv4/tcp.c:1077 [inline]
       tcp_disconnect+0xc15/0x13e0 net/ipv4/tcp.c:2328
       inet_child_forget+0xb8/0x600 net/ipv4/inet_connection_sock.c:898
       inet_csk_reqsk_queue_add+0x1e7/0x250
      net/ipv4/inet_connection_sock.c:928
       tcp_get_cookie_sock+0x21a/0x510 net/ipv4/syncookies.c:217
       cookie_v4_check+0x1a19/0x28b0 net/ipv4/syncookies.c:384
       tcp_v4_cookie_check net/ipv4/tcp_ipv4.c:1384 [inline]
       tcp_v4_do_rcv+0x731/0x940 net/ipv4/tcp_ipv4.c:1421
       tcp_v4_rcv+0x2dc0/0x31c0 net/ipv4/tcp_ipv4.c:1715
       ip_local_deliver_finish+0x4cc/0xc20 net/ipv4/ip_input.c:216
       NF_HOOK include/linux/netfilter.h:257 [inline]
       ip_local_deliver+0x1ce/0x700 net/ipv4/ip_input.c:257
       dst_input include/net/dst.h:492 [inline]
       ip_rcv_finish+0xb1d/0x20b0 net/ipv4/ip_input.c:396
       NF_HOOK include/linux/netfilter.h:257 [inline]
       ip_rcv+0xd8c/0x19c0 net/ipv4/ip_input.c:487
       __netif_receive_skb_core+0x1ad1/0x3400 net/core/dev.c:4210
       __netif_receive_skb+0x2a/0x1a0 net/core/dev.c:4248
       process_backlog+0xe5/0x6c0 net/core/dev.c:4868
       napi_poll net/core/dev.c:5270 [inline]
       net_rx_action+0xe70/0x18e0 net/core/dev.c:5335
       __do_softirq+0x2fb/0xb99 kernel/softirq.c:284
       do_softirq_own_stack+0x1c/0x30 arch/x86/entry/entry_64.S:899
       </IRQ>
       do_softirq.part.17+0x1e8/0x230 kernel/softirq.c:328
       do_softirq kernel/softirq.c:176 [inline]
       __local_bh_enable_ip+0x1cf/0x1e0 kernel/softirq.c:181
       local_bh_enable include/linux/bottom_half.h:31 [inline]
       rcu_read_unlock_bh include/linux/rcupdate.h:931 [inline]
       ip_finish_output2+0x9ab/0x15e0 net/ipv4/ip_output.c:230
       ip_finish_output+0xa35/0xdf0 net/ipv4/ip_output.c:316
       NF_HOOK_COND include/linux/netfilter.h:246 [inline]
       ip_output+0x1f6/0x7b0 net/ipv4/ip_output.c:404
       dst_output include/net/dst.h:486 [inline]
       ip_local_out+0x95/0x160 net/ipv4/ip_output.c:124
       ip_queue_xmit+0x9a8/0x1a10 net/ipv4/ip_output.c:503
       tcp_transmit_skb+0x1ade/0x3470 net/ipv4/tcp_output.c:1057
       tcp_write_xmit+0x79e/0x55b0 net/ipv4/tcp_output.c:2265
       __tcp_push_pending_frames+0xfa/0x3a0 net/ipv4/tcp_output.c:2450
       tcp_push+0x4ee/0x780 net/ipv4/tcp.c:683
       tcp_sendmsg+0x128d/0x39b0 net/ipv4/tcp.c:1342
       inet_sendmsg+0x164/0x5b0 net/ipv4/af_inet.c:762
       sock_sendmsg_nosec net/socket.c:633 [inline]
       sock_sendmsg+0xca/0x110 net/socket.c:643
       SYSC_sendto+0x660/0x810 net/socket.c:1696
       SyS_sendto+0x40/0x50 net/socket.c:1664
       entry_SYSCALL_64_fastpath+0x1f/0xbe
      RIP: 0033:0x446059
      RSP: 002b:00007faa6761fb58 EFLAGS: 00000282 ORIG_RAX: 000000000000002c
      RAX: ffffffffffffffda RBX: 0000000000000017 RCX: 0000000000446059
      RDX: 0000000000000001 RSI: 0000000020ba3fcd RDI: 0000000000000017
      RBP: 00000000006e40a0 R08: 0000000020ba4ff0 R09: 0000000000000010
      R10: 0000000020000000 R11: 0000000000000282 R12: 0000000000708150
      R13: 0000000000000000 R14: 00007faa676209c0 R15: 00007faa67620700
      Object at ffff88003b5bbcb8, in cache kmalloc-64 size: 64
      Allocated:
      PID = 20909
       save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
       save_stack+0x43/0xd0 mm/kasan/kasan.c:513
       set_track mm/kasan/kasan.c:525 [inline]
       kasan_kmalloc+0xad/0xe0 mm/kasan/kasan.c:616
       kmem_cache_alloc_trace+0x82/0x270 mm/slub.c:2745
       kmalloc include/linux/slab.h:490 [inline]
       kzalloc include/linux/slab.h:663 [inline]
       tcp_sendmsg_fastopen net/ipv4/tcp.c:1094 [inline]
       tcp_sendmsg+0x221a/0x39b0 net/ipv4/tcp.c:1139
       inet_sendmsg+0x164/0x5b0 net/ipv4/af_inet.c:762
       sock_sendmsg_nosec net/socket.c:633 [inline]
       sock_sendmsg+0xca/0x110 net/socket.c:643
       SYSC_sendto+0x660/0x810 net/socket.c:1696
       SyS_sendto+0x40/0x50 net/socket.c:1664
       entry_SYSCALL_64_fastpath+0x1f/0xbe
      Freed:
      PID = 20909
       save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:59
       save_stack+0x43/0xd0 mm/kasan/kasan.c:513
       set_track mm/kasan/kasan.c:525 [inline]
       kasan_slab_free+0x73/0xc0 mm/kasan/kasan.c:589
       slab_free_hook mm/slub.c:1357 [inline]
       slab_free_freelist_hook mm/slub.c:1379 [inline]
       slab_free mm/slub.c:2961 [inline]
       kfree+0xe8/0x2b0 mm/slub.c:3882
       tcp_free_fastopen_req net/ipv4/tcp.c:1077 [inline]
       tcp_disconnect+0xc15/0x13e0 net/ipv4/tcp.c:2328
       __inet_stream_connect+0x20c/0xf90 net/ipv4/af_inet.c:593
       tcp_sendmsg_fastopen net/ipv4/tcp.c:1111 [inline]
       tcp_sendmsg+0x23a8/0x39b0 net/ipv4/tcp.c:1139
       inet_sendmsg+0x164/0x5b0 net/ipv4/af_inet.c:762
       sock_sendmsg_nosec net/socket.c:633 [inline]
       sock_sendmsg+0xca/0x110 net/socket.c:643
       SYSC_sendto+0x660/0x810 net/socket.c:1696
       SyS_sendto+0x40/0x50 net/socket.c:1664
       entry_SYSCALL_64_fastpath+0x1f/0xbe
      
      Fixes: e994b2f0 ("tcp: do not lock listener to process SYN packets")
      Fixes: 7db92362 ("tcp: fix potential double free issue for fastopen_req")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Andrey Konovalov <andreyknvl@google.com>
      Acked-by: Wei Wang <weiwan@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      8b485ce6
  22. 25 Mar 2017, 1 commit
  23. 23 Mar 2017, 1 commit
  24. 17 Mar 2017, 1 commit
  25. 23 Feb 2017, 1 commit
  26. 04 Feb 2017, 1 commit
  27. 14 Jan 2017, 1 commit
  28. 30 Dec 2016, 1 commit
  29. 03 Dec 2016, 1 commit
    • tcp: randomize tcp timestamp offsets for each connection · 95a22cae
      Authored by Florian Westphal
      jiffies-based timestamps allow easy inference of the number of devices
      behind NAT translators and also make tracking of hosts simpler.
      
      commit ceaa1fef ("tcp: adding a per-socket timestamp offset")
      added the main infrastructure needed for per-connection ts
      randomization; in particular, writing/reading the on-wire tcp header
      format takes the offset into account, so the rest of the stack can use the
      normal tcp_time_stamp (jiffies).
      
      So only two items are left:
       - add a tsoffset for request sockets
       - extend the tcp isn generator to also return another 32-bit number
         in addition to the ISN.
      
      Reuse of the ISN generator also means timestamps are still monotonically
      increasing for the same connection quadruple, i.e. PAWS will still work.
      
      Includes fixes from Eric Dumazet.
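      
      A user-space sketch of the per-connection offset idea (the mixing
      function below is a toy placeholder, not the keyed hash the kernel
      actually uses):
      
        #include <stdint.h>
        
        /* Toy mixer for illustration only; a real implementation must use a
         * proper keyed hash (e.g. siphash) with a boot-time secret. */
        static uint32_t keyed_hash32(const uint32_t *data, int n, uint64_t key)
        {
                uint64_t h = key ^ 0x9e3779b97f4a7c15ULL;
                int i;
        
                for (i = 0; i < n; i++) {
                        h ^= data[i];
                        h *= 0xff51afd7ed558ccdULL;
                        h ^= h >> 33;
                }
                return (uint32_t)h;
        }
        
        /* The same connection quadruple always yields the same offset, so
         * on-wire timestamps stay monotonic per connection (PAWS keeps
         * working), while different connections get unrelated offsets that
         * reveal nothing about a shared jiffies counter. */
        static uint32_t ts_offset(uint32_t saddr, uint32_t daddr,
                                  uint16_t sport, uint16_t dport,
                                  uint64_t secret)
        {
                uint32_t tuple[3] = { saddr, daddr,
                                      ((uint32_t)sport << 16) | dport };
        
                return keyed_hash32(tuple, 3, secret);
        }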
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Acked-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      95a22cae
  30. 21 Sep 2016, 2 commits
    • tcp: track application-limited rate samples · d7722e85
      Authored by Soheil Hassas Yeganeh
      This commit adds code to track whether the delivery rate represented
      by each rate_sample was limited by the application.
      
      Upon each transmit, we store in the is_app_limited field in the skb a
      boolean bit indicating whether there is a known "bubble in the pipe":
      a point in the rate sample interval where the sender was
      application-limited, and did not transmit even though the cwnd and
      pacing rate allowed it.
      
      This logic marks the flow app-limited on a write if *all* of the
      following are true:
      
        1) There is less than 1 MSS of unsent data in the write queue
           available to transmit.
      
        2) There is no packet in the sender's queues (e.g. in fq or the NIC
           tx queue).
      
        3) The connection is not limited by cwnd.
      
        4) There are no lost packets to retransmit.
      
      The tcp_rate_check_app_limited() code in tcp_rate.c determines whether
      the connection is application-limited at the moment. If the flow is
      application-limited, it sets the tp->app_limited field. If the flow is
      application-limited then that means there is effectively a "bubble" of
      silence in the pipe now, and this silence will be reflected in a lower
      bandwidth sample for any rate samples from now until we get an ACK
      indicating this bubble has exited the pipe: specifically, until we get
      an ACK for the next packet we transmit.
      
      Every time we send an skb, we record in scb->tx.is_app_limited whether the
      resulting rate sample will be application-limited.
      
      The code in tcp_rate_gen() checks to see when it is safe to mark all
      known application-limited bubbles of silence as having exited the
      pipe. It does this by checking to see when the delivered count moves
      past the tp->app_limited marker. At this point it zeroes the
      tp->app_limited marker, as all known bubbles are out of the pipe.
      
      We make room for the tx.is_app_limited bit in the skb by borrowing a
      bit from the in_flight field used by NV to record the number of bytes
      in flight. The receive window in the TCP header is 16 bits, and the
      max receive window scaling shift factor is 14 (RFC 1323). So the max
      receive window offered by the TCP protocol is 2^(16+14) = 2^30. So we
      only need 30 bits for the tx.in_flight used by NV.
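      
      The four conditions above map to a check like the following sketch
      (field names only loosely follow the kernel's; this is not the
      tcp_rate.c code):
      
        #include <stdint.h>
        
        struct sender_state {
                uint32_t unsent_bytes;        /* data still in the write queue */
                uint32_t mss;
                uint32_t pkts_in_host_queues; /* e.g. fq qdisc or NIC tx queue */
                uint32_t packets_in_flight;
                uint32_t cwnd;
                uint32_t lost_to_retransmit;
                uint32_t delivered;           /* cumulative delivered counter */
                uint32_t app_limited;         /* marker; 0 means not app-limited */
        };
        
        /* Mark the flow app-limited on a write iff all four conditions hold;
         * the marker is the delivered count the connection must reach before
         * the "bubble" of silence has fully left the pipe. */
        static void rate_check_app_limited(struct sender_state *s)
        {
                if (s->unsent_bytes < s->mss &&
                    s->pkts_in_host_queues == 0 &&
                    s->packets_in_flight < s->cwnd &&
                    s->lost_to_retransmit == 0) {
                        uint32_t mark = s->delivered + s->packets_in_flight;
        
                        s->app_limited = mark ? mark : 1;  /* keep marker non-zero */
                }
        }
        
        /* When generating rate samples: once delivered moves past the marker,
         * all known bubbles have drained, so clear it. */
        static void rate_gen(struct sender_state *s)
        {
                if (s->app_limited && s->delivered > s->app_limited)
                        s->app_limited = 0;
        }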
      Signed-off-by: Van Jacobson <vanj@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      d7722e85
    • tcp: use windowed min filter library for TCP min_rtt estimation · 64033892
      Authored by Neal Cardwell
      Refactor the TCP min_rtt code to reuse the new win_minmax library in
      lib/win_minmax.c to simplify the TCP code.
      
      This is a pure refactor: the functionality is exactly the same. We
      just moved the windowed min code to make TCP easier to read and
      maintain, and to allow other parts of the kernel to use the windowed
      min/max filter code.
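      
      Usage looks roughly like this kernel-style sketch (not the actual
      tcp_input.c code; it simply shows the windowed-min API being reused):
      
        #include <linux/win_minmax.h>
        
        /* Feed each new RTT sample into the windowed min filter; the filter
         * keeps the minimum observed over the trailing `win` time units. */
        static void update_min_rtt(struct minmax *rtt_min, u32 now,
                                   u32 rtt_us, u32 win)
        {
                minmax_running_min(rtt_min, win, now, rtt_us);
        }
        
        /* Read back the current windowed minimum RTT. */
        static u32 current_min_rtt(struct minmax *rtt_min)
        {
                return minmax_get(rtt_min);
        }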
      Signed-off-by: Van Jacobson <vanj@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Nandita Dukkipati <nanditad@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      64033892
  31. 09 Sep 2016, 1 commit
    • tcp: use an RB tree for ooo receive queue · 9f5afeae
      Authored by Yaogong Wang
      Over the years, TCP BDP has increased by several orders of magnitude,
      and some people are considering reaching the 2 Gbytes limit.
      
      Even with the current window scale limit of 14, ~1 Gbytes maps to ~740,000
      MSS.
      
      In the presence of packet losses (or reordering), TCP stores incoming packets
      in an out-of-order queue, and the number of skbs sitting there waiting for
      the missing packets to be received can be in the 10^5 range.
      
      Most packets are appended to the tail of this queue, and when
      packets can finally be transferred to the receive queue, we scan the queue
      from its head.
      
      However, in the presence of heavy losses, we might have to find an arbitrary
      point in this queue, involving a linear scan for every incoming packet,
      throwing away CPU caches.
      
      This patch converts it to an RB tree, to get bounded latencies.
      
      Yaogong wrote a preliminary patch about 2 years ago.
      Eric did the rebase, added ofo_last_skb cache, polishing and tests.
      
      Tested with the network dropping between 1 and 10% of packets, with good
      success (about a 30% increase of throughput in stress tests).
      
      Next step would be to also use an RB tree for the write queue at sender
      side ;)
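      
      The insertion pattern, sketched with the kernel rbtree API (the struct
      and helper names here are illustrative, not the tcp_data_queue() code):
      
        #include <linux/rbtree.h>
        #include <linux/types.h>
        
        struct ooo_seg {
                struct rb_node node;
                u32 seq;        /* start sequence of the queued segment */
        };
        
        /* Wrap-safe "a comes before b" comparison of TCP sequence numbers. */
        static bool seq_before(u32 a, u32 b)
        {
                return (s32)(a - b) < 0;
        }
        
        /* O(log n) insertion at an arbitrary point, instead of a linear walk
         * of a list, which is what matters when losses leave ~10^5 skbs queued. */
        static void ooo_insert(struct rb_root *root, struct ooo_seg *seg)
        {
                struct rb_node **p = &root->rb_node, *parent = NULL;
        
                while (*p) {
                        struct ooo_seg *cur = rb_entry(*p, struct ooo_seg, node);
        
                        parent = *p;
                        if (seq_before(seg->seq, cur->seq))
                                p = &parent->rb_left;
                        else
                                p = &parent->rb_right;
                }
                rb_link_node(&seg->node, parent, p);
                rb_insert_color(&seg->node, root);
        }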
      Signed-off-by: Yaogong Wang <wygivan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Acked-By: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      9f5afeae
  32. 03 May 2016, 1 commit
  33. 28 Apr 2016, 1 commit