1. 12 Mar 2021, 1 commit
    • tcp: consider using standard rtx logic in tcp_rcv_fastopen_synack() · a7abf3cd
      Eric Dumazet authored
      Jakub reported that data included in a Fastopen SYN that had to be
      retransmitted would have to wait for an RTO if TX completions are slow,
      even with the prior fix.
      
      This is because tcp_rcv_fastopen_synack() does not use standard
      rtx logic, meaning the TSQ handler exits early in tcp_tsq_write()
      because tp->lost_out == tp->retrans_out.
      
      Let's make tcp_rcv_fastopen_synack() use standard rtx logic,
      by using tcp_mark_skb_lost() on the skb that needs to be
      sent again.
      
      Note this raised a warning in tcp_fastretrans_alert() during my tests,
      since we consider that data not being acknowledged
      by the receiver does not mean the packet was lost on the network.
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Reported-by: Jakub Kicinski <kuba@kernel.org>
      Cc: Neal Cardwell <ncardwell@google.com>
      Cc: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      a7abf3cd
  2. 16 Feb 2021, 1 commit
  3. 13 Feb 2021, 1 commit
  4. 24 Jan 2021, 2 commits
  5. 23 Jan 2021, 1 commit
  6. 20 Jan 2021, 1 commit
  7. 19 Jan 2021, 1 commit
  8. 15 Dec 2020, 2 commits
  9. 09 Dec 2020, 1 commit
    • tcp: select sane initial rcvq_space.space for big MSS · 72d05c00
      Eric Dumazet authored
      Before commit a337531b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
      small tcp_rmem[1] values were overridden by tcp_fixup_rcvbuf() to accommodate various MSS.
      
      This is no longer the case, and Hazem Mohamed Abuelfotoh reported
      that DRS would not work for MTU 9000 endpoints receiving regular (1500 bytes) frames.
      
      Root cause is that tcp_init_buffer_space() uses tp->rcv_wnd for the upper limit
      of the rcvq_space.space computation, while it can later select a smaller
      value for tp->rcv_ssthresh and tp->window_clamp.
      
      ss -temoi on the receiver would show:
      
      skmem:(r0,rb131072,t0,tb46080,f0,w0,o0,bl0,d0) rcv_space:62496 rcv_ssthresh:56596
      
      This means that TCP cannot increase its window in tcp_grow_window(),
      and that DRS can never kick in.
      
      Fix this by making sure that rcvq_space.space is not bigger than the number
      of bytes that can be held in the TCP receive queue.
      
      People unable/unwilling to change their kernel can work around this issue by
      selecting a bigger tcp_rmem[1] value, as in:
      
      echo "4096 196608 6291456" >/proc/sys/net/ipv4/tcp_rmem
      
      Based on an initial report and patch from Hazem Mohamed Abuelfotoh
       https://lore.kernel.org/netdev/20201204180622.14285-1-abuehaze@amazon.com/
      
      Fixes: a337531b ("tcp: up initial rmem to 128KB and SYN rwin to around 64KB")
      Fixes: 041a14d2 ("tcp: start receiver buffer autotuning sooner")
      Reported-by: Hazem Mohamed Abuelfotoh <abuehaze@amazon.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      72d05c00
  10. 04 Dec 2020, 1 commit
    • tcp: merge 'init_req' and 'route_req' functions · 7ea851d1
      Florian Westphal authored
      The Multipath-TCP standard (RFC 8684) says that an MPTCP host should send
      a TCP reset if the token in a MP_JOIN request is unknown.
      
      At this time we don't do this; the 3WHS completes and the 'new subflow'
      is reset afterwards.  There are two ways to allow MPTCP to send the
      reset.
      
      1. Override the 'send_synack' callback and emit the rst from there.
         The drawback is that the request socket gets inserted into the
         listener's queue just to get removed again right away.
      
      2. Send the reset from the 'route_req' function instead.
         This avoids the 'add&remove request socket', but route_req lacks the
         skb that is required to send the TCP reset.
      
      Instead of just adding the skb to that function for MPTCP's sake alone,
      Paolo suggested merging the init_req and route_req functions.
      
      This saves one indirection in the syn processing path and provides the skb
      to the merged function at the same time.
      
      'send reset on unknown mptcp join token' is added in the next patch.
      Suggested-by: Paolo Abeni <pabeni@redhat.com>
      Cc: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Signed-off-by: Jakub Kicinski <kuba@kernel.org>
      7ea851d1
  11. 03 Nov 2020, 1 commit
  12. 24 Oct 2020, 1 commit
  13. 23 Oct 2020, 1 commit
  14. 14 Oct 2020, 1 commit
  15. 04 Oct 2020, 1 commit
  16. 26 Sep 2020, 4 commits
  17. 25 Sep 2020, 2 commits
  18. 15 Sep 2020, 1 commit
    • tcp: remove SOCK_QUEUE_SHRUNK · 0cbe6a8f
      Eric Dumazet authored
      SOCK_QUEUE_SHRUNK is currently used by TCP as a temporary state
      that remembers if some room has been made in the rtx queue
      by an incoming ACK packet.
      
      This is later used from tcp_check_space() before
      considering to send EPOLLOUT.
      
      Problem is: if we receive SACK packets and no packet
      is removed from the RTX queue, we can send fresh packets, thus
      moving them from the write queue to the RTX queue and eventually
      emptying the write queue.
      
      This stall can happen if TCP_NOTSENT_LOWAT is used.
      
      With this fix, we no longer risk stalling sends while holes
      are repaired, and we can fully use socket sndbuf.
      
      This also removes a cache line dirtying for typical RPC
      workloads.
      
      Fixes: c9bee3b7 ("tcp: TCP_NOTSENT_LOWAT socket option")
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Cc: Soheil Hassas Yeganeh <soheil@google.com>
      Acked-by: Neal Cardwell <ncardwell@google.com>
      Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0cbe6a8f
  19. 11 Sep 2020, 2 commits
    • tcp: Only init congestion control if not initialized already · 8919a9b3
      Neal Cardwell authored
      Change tcp_init_transfer() to only initialize congestion control if it
      has not been initialized already.
      
      With this new approach, we can arrange things so that if the EBPF code
      sets the congestion control by calling setsockopt(TCP_CONGESTION) then
      tcp_init_transfer() will not re-initialize the CC module.
      
      This is an approach that has the following beneficial properties:
      
      (1) This allows CC module customizations made by the EBPF code called in
          tcp_init_transfer() to persist, and not be wiped out by a later
          call to tcp_init_congestion_control() in tcp_init_transfer().
      
      (2) Does not flip the order of EBPF and CC init, to avoid causing bugs
          for existing code upstream that depends on the current order.
      
      (3) Does not cause 2 initializations for CC in the case where the
          EBPF called in tcp_init_transfer() wants to set the CC to a new CC
          algorithm.
      
      (4) Allows follow-on simplifications to the code in net/core/filter.c
          and net/ipv4/tcp_cong.c, which currently both have some complexity
          to special-case CC initialization to avoid double CC
          initialization if EBPF sets the CC.
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Yuchung Cheng <ycheng@google.com>
      Acked-by: Kevin Yang <yyd@google.com>
      Cc: Lawrence Brakmo <brakmo@fb.com>
      8919a9b3
    • tcp: record received TOS value in the request socket · e9b12edc
      Wei Wang authored
      A new field is added to the request sock to record the TOS value
      received on the listening socket during the 3WHS:
      When not under syn flood, it records the TOS value sent in the SYN.
      When under syn flood, it records the TOS value sent in the ACK.
      This is a preparation patch in order to do TOS reflection in a later
      commit.
      Signed-off-by: Wei Wang <weiwan@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      e9b12edc
  20. 25 Aug 2020, 7 commits
    • tcp: bpf: Optionally store mac header in TCP_SAVE_SYN · 267cf9fa
      Martin KaFai Lau authored
      This patch is adapted from Eric's patch in an earlier discussion [1].
      
      TCP_SAVE_SYN currently only stores the network header and
      tcp header.  This patch allows it to optionally store
      the mac header as well if the setsockopt's optval is 2.
      
      It requires one more bit for the "save_syn" bit field in tcp_sock.
      This patch achieves this by moving the syn_smc bit next to is_mptcp.
      syn_smc is currently used with the TCP experimental option.  Since
      syn_smc is only used when CONFIG_SMC is enabled, this patch also puts
      "IS_ENABLED(CONFIG_SMC)" around it, like is_mptcp does
      with "IS_ENABLED(CONFIG_MPTCP)".
      
      The mac_hdrlen is also stored in "struct saved_syn"
      to allow a quick offset for the bpf prog if it chooses to start
      reading from the network header or the tcp header.
      
      [1]: https://lore.kernel.org/netdev/CANn89iLJNWh6bkH7DNhy_kmcAexuUCccqERqe7z2QsvPhGrYPQ@mail.gmail.com/
      Suggested-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/bpf/20200820190123.2886935-1-kafai@fb.com
      267cf9fa
    • bpf: tcp: Allow bpf prog to write and parse TCP header option · 0813a841
      Martin KaFai Lau authored
      [ Note: The TCP changes here are mainly to implement the bpf
        pieces into the bpf_skops_*() functions introduced
        in the earlier patches. ]
      
      The earlier effort in BPF-TCP-CC allows the TCP Congestion Control
      algorithm to be written in BPF.  It opens up opportunities for
      a faster turnaround time in testing/releasing new congestion control
      ideas to production environments.
      
      The same flexibility can be extended to writing TCP header options.
      It is not uncommon that people want to test a new TCP header option
      to improve TCP performance.  Another use case is for data-centers
      that have a more controlled environment and more flexibility in
      putting header options for internal-only use.
      
      For example, we want to test the idea of putting maximum delay
      ACK in a TCP header option, which is similar to a draft RFC proposal [1].
      
      This patch introduces the necessary BPF API and uses it in the
      TCP stack to allow a BPF_PROG_TYPE_SOCK_OPS program to parse
      and write TCP header options.  It currently supports most
      TCP packets except RST.
      
      Supported TCP header option:
      ───────────────────────────
      This patch allows the bpf-prog to write any option kind.
      Different bpf-progs can write their own options by calling the new helper
      bpf_store_hdr_opt().  The helper will ensure there is no duplicated
      option in the header.
      
      By allowing the bpf-prog to write any option kind, this gives a lot of
      flexibility to the bpf-prog.  Different bpf-progs can write their
      own option kinds.  It could also allow a bpf-prog to support a
      recently standardized option on an older kernel.
      
      Sockops Callback Flags:
      ──────────────────────
      The bpf program will only be called to parse/write tcp header option
      if the following newly added callback flags are enabled
      in tp->bpf_sock_ops_cb_flags:
      BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG
      BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG
      BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG
      
      A few words on the PARSE CB flags.  When the above PARSE CB flags are
      turned on, the bpf-prog will be called on packets received
      at a sk that has at least reached the ESTABLISHED state.
      The parsing of the SYN-SYNACK-ACK will be discussed in the
      "3 Way HandShake" section.
      
      The default is off for all of the above new CB flags, i.e. the bpf prog
      will not be called to parse or write bpf hdr options.  There are
      detailed comments on these new cb flags in the UAPI bpf.h.
      
      sock_ops->skb_data and bpf_load_hdr_opt()
      ─────────────────────────────────────────
      sock_ops->skb_data and sock_ops->skb_data_end cover the whole
      TCP header and its options.  They are read only.
      
      The new bpf_load_hdr_opt() helps read a particular option "kind"
      from the skb_data.
      
      Please refer to the comment in UAPI bpf.h.  It has details
      on what skb_data contains under different sock_ops->op.
      
      3 Way HandShake
      ───────────────
      The bpf-prog can learn if it is sending SYN or SYNACK by reading the
      sock_ops->skb_tcp_flags.
      
      * Passive side
      
      When writing SYNACK (i.e. sock_ops->op == BPF_SOCK_OPS_WRITE_HDR_OPT_CB),
      the received SYN skb will be available to the bpf prog.  The bpf prog can
      use the SYN skb (which may carry the header option sent from the remote bpf
      prog) to decide what bpf header option should be written to the outgoing
      SYNACK skb.  The SYN packet can be obtained by getsockopt(TCP_BPF_SYN*).
      More on this later.  Also, the bpf prog can learn if it is in syncookie
      mode (by checking sock_ops->args[0] == BPF_WRITE_HDR_TCP_SYNACK_COOKIE).
      
      The bpf prog can store the received SYN pkt by using the existing
      bpf_setsockopt(TCP_SAVE_SYN).  The example in a later patch does it.
      [ Note that the fullsock here is a listen sk, bpf_sk_storage
        is not very useful here since the listen sk will be shared
        by many concurrent connection requests.
      
        Extending bpf_sk_storage support to request_sock would add weight
        to the minisock, and it is not necessarily better than storing the
        whole ~100-byte SYN pkt. ]
      
      When the connection is established, the bpf prog will be called
      in the existing PASSIVE_ESTABLISHED_CB callback.  At that time,
      the bpf prog can get the header option from the saved syn and
      then apply the needed operation to the newly established socket.
      A later patch will use the max delay ack specified in the SYN
      header and set the RTO of this newly established connection
      as an example.
      
      The received ACK (that concludes the 3WHS) will also be available to
      the bpf prog during PASSIVE_ESTABLISHED_CB through the sock_ops->skb_data.
      It could be useful in syncookie scenario.  More on this later.
      
      There is an existing getsockopt "TCP_SAVED_SYN" to return the whole
      saved syn pkt, which includes the IP[46] header and the TCP header.
      A few "TCP_BPF_SYN*" getsockopts have been added to allow specifying where to
      start reading from, e.g. starting from the TCP header, or from the IP[46] header.
      
      The new getsockopt(TCP_BPF_SYN*) will also know where it can get
      the SYN packet from:
        - (a) the just received syn (available when the bpf prog is writing SYNACK)
              and it is the only way to get SYN during syncookie mode.
        or
        - (b) the saved syn (available in PASSIVE_ESTABLISHED_CB and also other
              existing CB).
      
      The bpf prog does not need to know where the SYN pkt is coming from.
      The getsockopt(TCP_BPF_SYN*) will hide these details.
      
      Similarly, a flag "BPF_LOAD_HDR_OPT_TCP_SYN" is also added to
      bpf_load_hdr_opt() to read a particular header option from the SYN packet.
      
      * Fastopen
      
      Fastopen should work the same as the regular non-fastopen case.
      This is tested in a later patch.
      
      * Syncookie
      
      For syncookie, the later example patch asks the active
      side's bpf prog to resend the header options in ACK.  The server
      can use bpf_load_hdr_opt() to look at the options in this
      received ACK during PASSIVE_ESTABLISHED_CB.
      
      * Active side
      
      The bpf prog will get a chance to write the bpf header option
      in the SYN packet during WRITE_HDR_OPT_CB.  The received SYNACK
      pkt will also be available to the bpf prog during the existing
      ACTIVE_ESTABLISHED_CB callback through the sock_ops->skb_data
      and bpf_load_hdr_opt().
      
      * Turn off header CB flags after 3WHS
      
      If the bpf prog does not need to write/parse header options
      beyond the 3WHS, the bpf prog can clear the bpf_sock_ops_cb_flags
      to avoid being called for header options.
      Or the bpf-prog can choose to leave the UNKNOWN_HDR_OPT_CB_FLAG on
      so that the kernel will only call it when there is an option that
      the kernel cannot handle.
      
      [1]: draft-wang-tcpm-low-latency-opt-00
           https://tools.ietf.org/html/draft-wang-tcpm-low-latency-opt-00
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200820190104.2885895-1-kafai@fb.com
      0813a841
    • bpf: tcp: Add bpf_skops_hdr_opt_len() and bpf_skops_write_hdr_opt() · 331fca43
      Martin KaFai Lau authored
      The bpf prog needs to parse the SYN header to learn what options have
      been sent by the peer's bpf-prog before writing its options into the SYNACK.
      This patch adds a "syn_skb" arg to tcp_make_synack() and send_synack().
      This syn_skb will eventually be made available (as read-only) to the
      bpf prog.  This will be the only SYN packet available to the bpf
      prog during syncookie.  For other regular cases, the bpf prog can
      also use the saved_syn.
      
      When writing options, the bpf prog will first be called to tell the
      kernel its required number of bytes.  It is done by the new
      bpf_skops_hdr_opt_len().  The bpf prog will only be called when the new
      BPF_SOCK_OPS_WRITE_HDR_OPT_CB_FLAG is set in tp->bpf_sock_ops_cb_flags.
      When the bpf prog returns, the kernel will know how many bytes are needed
      and then update the "*remaining" arg accordingly.  4-byte alignment will
      be included in "*remaining" before this function returns.  The 4-byte
      aligned number of bytes will also be stored in opts->bpf_opt_len.
      "bpf_opt_len" is a newly added member of struct tcp_out_options.
      
      Then the new bpf_skops_write_hdr_opt() will call the bpf prog to write the
      header options.  The bpf prog is only called if it has reserved space
      before (opts->bpf_opt_len > 0).
      
      The bpf prog is the last one to get a chance to reserve header space
      and write the header option.
      
      These two functions are half-implemented to highlight the changes in the
      TCP stack.  The actual code preparing the bpf running context and
      invoking the bpf prog will be added in a later patch with other
      necessary bpf pieces.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/bpf/20200820190052.2885316-1-kafai@fb.com
      331fca43
    • bpf: tcp: Add bpf_skops_parse_hdr() · 00d211a4
      Martin KaFai Lau authored
      The patch adds a function bpf_skops_parse_hdr().
      It will call the bpf prog to parse the TCP header received at
      a tcp_sock that has at least reached the ESTABLISHED state.
      
      For the packets received during the 3WHS (SYN, SYNACK and ACK),
      the received skb will be available to the bpf prog during the callback
      in bpf_skops_established() introduced in the previous patch and
      in the bpf_skops_write_hdr_opt() that will be added in the
      next patch.
      
      Calling bpf prog to parse header is controlled by two new flags in
      tp->bpf_sock_ops_cb_flags:
      BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG and
      BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG.
      
      When BPF_SOCK_OPS_PARSE_UNKNOWN_HDR_OPT_CB_FLAG is set,
      the bpf prog will only be called when there is an unknown
      option in the TCP header.
      
      When BPF_SOCK_OPS_PARSE_ALL_HDR_OPT_CB_FLAG is set,
      the bpf prog will be called on all received TCP headers.
      
      This function is half-implemented to highlight the changes in the
      TCP stack.  The actual code preparing the bpf running context and
      invoking the bpf prog will be added in a later patch with other
      necessary bpf pieces.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Link: https://lore.kernel.org/bpf/20200820190046.2885054-1-kafai@fb.com
      00d211a4
    • bpf: tcp: Add bpf_skops_established() · 72be0fe6
      Martin KaFai Lau authored
      In tcp_init_transfer(), it currently calls the bpf prog to give it a
      chance to handle the just "ESTABLISHED" event (e.g. do setsockopt
      on the newly established sk).  Right now, it is done by calling the
      general purpose tcp_call_bpf().
      
      In a later patch, it also needs to pass the just-received skb which
      concludes the 3-way handshake, e.g. the SYNACK received at the active side.
      The bpf prog can then learn some specific header options written by the
      peer's bpf-prog and potentially do setsockopt on the newly established sk.
      Thus, instead of reusing the general purpose tcp_call_bpf(), a new function
      bpf_skops_established() is added to allow passing the "skb" to the bpf
      prog.  The actual skb passing from bpf_skops_established() to the bpf prog
      will happen together in a later patch which has the necessary bpf pieces.
      
      A "skb" arg is also added to tcp_init_transfer() such that
      it can then be passed to bpf_skops_established().
      
      Calling the new bpf_skops_established() instead of tcp_call_bpf()
      should be a no-op in this patch.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200820190039.2884750-1-kafai@fb.com
      72be0fe6
    • tcp: Add saw_unknown to struct tcp_options_received · 7656d684
      Martin KaFai Lau authored
      In a later patch, the bpf prog only wants to be called to handle
      a header option if that particular header option cannot be handled by
      the kernel.  This unknown option could be written by the peer's bpf-prog.
      It could also be a new standard option that the running kernel does not
      support while a bpf-prog can handle it.
      
      This patch adds a "saw_unknown" bit to "struct tcp_options_received",
      using an existing one-byte hole to do so.  "saw_unknown" will
      be set in tcp_parse_options() if it sees an option that the kernel
      cannot handle.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200820190033.2884430-1-kafai@fb.com
      7656d684
    • tcp: Use a struct to represent a saved_syn · 70a217f1
      Martin KaFai Lau authored
      TCP_SAVE_SYN saves both the network header and tcp header.
      The total length of the saved syn packet is currently stored in
      the first 4 bytes (u32) of an array, and the actual packet data is
      stored after that.
      
      A later patch will add a bpf helper that allows getting the tcp header
      alone from the saved syn, without the network header.  It will be more
      convenient to have a direct offset to a specific header instead of
      re-parsing it.  This requires separately storing the network hdrlen.
      The total header length (i.e. network + tcp) is still needed for the
      current usage in getsockopt.  Although this total length can be obtained
      by looking into the tcphdr and then getting (th->doff << 2), this patch
      chooses to directly store the tcp hdrlen in the second four bytes of
      this newly created "struct saved_syn".  By using a new struct, it can
      give a readable name to each individual header length.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200820190014.2883694-1-kafai@fb.com
      70a217f1
  21. 04 Aug 2020, 1 commit
    • tcp: apply a floor of 1 for RTT samples from TCP timestamps · 730e700e
      Jianfeng Wang authored
      For retransmitted packets, TCP needs to resort to using TCP timestamps
      for computing RTT samples. In the common case where the data and ACK
      fall in the same 1-millisecond interval, TCP senders with millisecond-
      granularity TCP timestamps compute a ca_rtt_us of 0. This ca_rtt_us
      of 0 propagates to rs->rtt_us.
      
      This value of 0 can cause performance problems for congestion control
      modules. For example, in BBR, the zero min_rtt sample can bring the
      min_rtt and BDP estimate down to 0, reduce snd_cwnd and result in a
      low throughput. It would be hard to mitigate this with filtering in
      the congestion control module, because the proper floor to apply would
      depend on the method of RTT sampling (using timestamp options or
      internally-saved transmission timestamps).
      
      This fix applies a floor of 1 for the RTT sample delta from TCP
      timestamps, so that seq_rtt_us, ca_rtt_us, and rs->rtt_us will be at
      least 1 * (USEC_PER_SEC / TCP_TS_HZ).
      
      Note that the receiver RTT computation in tcp_rcv_rtt_measure() and
      min_rtt computation in tcp_update_rtt_min() both already apply a floor
      of 1 timestamp tick, so this commit makes the code more consistent in
      avoiding this edge case of a value of 0.
      Signed-off-by: Jianfeng Wang <jfwang@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Acked-by: Kevin Yang <yyd@google.com>
      Acked-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      730e700e
  22. 01 Aug 2020, 2 commits
  23. 24 Jul 2020, 1 commit
    • tcp: allow at most one TLP probe per flight · 76be93fc
      Yuchung Cheng authored
      Previously TLP may send multiple probes of new data in one
      flight.  This happens when the sender is cwnd limited.  After the
      initial TLP containing new data is sent, the sender receives another
      ACK that acks partial inflight.  It may re-arm another TLP timer
      to send more if no further ACK returns before the next TLP timeout
      (PTO) expires.  The sender may in theory send a large amount of TLP
      probes until the send queue is depleted.  This only happens if the
      sender sees such an irregular, uncommon ACK pattern.  But it is
      generally undesirable behavior, especially during congestion.
      
      The original TLP design restricts to only one TLP probe per inflight, as
      published in "Reducing Web Latency: the Virtue of Gentle Aggression",
      SIGCOMM 2013.  This patch changes TLP to send at most one probe
      per inflight.
      
      Note that if the sender is app-limited, TLP retransmits old data
      and does not have this issue.
      Signed-off-by: Yuchung Cheng <ycheng@google.com>
      Signed-off-by: Neal Cardwell <ncardwell@google.com>
      Signed-off-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      76be93fc
  24. 18 Jul 2020, 2 commits
  25. 14 Jul 2020, 1 commit