1. 19 June 2021, 4 commits
    • mptcp: add csum_reqd in mptcp_out_options · 06fe1719
      Committed by Geliang Tang
      This patch adds a new member, csum_reqd, to struct mptcp_out_options and
      to struct mptcp_subflow_request_sock, and initializes it with the helper
      function mptcp_is_checksum_enabled().
      
      In mptcp_write_options, if this field is set, send out the MP_CAPABLE
      suboption with the MPTCP_CAP_CHECKSUM_REQD flag (see the sketch after this
      entry).
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
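
      The following is an illustration-only sketch of that flow in plain C, not
      the kernel's own code: the struct name, the helper mp_capable_flags(), and
      the way the state is carried are assumptions for this example. RFC 8684
      defines the checksum-required "A" flag as the most significant bit of the
      MP_CAPABLE flags byte and the HMAC-SHA256 "H" flag as the least
      significant one.

      #include <stdbool.h>
      #include <stdint.h>

      #define MPTCP_CAP_CHECKSUM_REQD  (1 << 7)  /* "A" flag: checksums required */
      #define MPTCP_CAP_HMAC_SHA256    (1 << 0)  /* "H" flag: HMAC-SHA256 */

      /* Simplified stand-in for the per-packet out-options state. */
      struct mptcp_out_options_sketch {
              bool csum_reqd;        /* set from mptcp_is_checksum_enabled() */
      };

      /* Build the MP_CAPABLE flags byte from the pending out-options. */
      static uint8_t mp_capable_flags(const struct mptcp_out_options_sketch *opts)
      {
              uint8_t flags = MPTCP_CAP_HMAC_SHA256;

              if (opts->csum_reqd)
                      flags |= MPTCP_CAP_CHECKSUM_REQD;
              return flags;
      }
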
    • mptcp: generate the data checksum · d0cc2987
      Committed by Geliang Tang
      This patch adds a new member named csum to struct mptcp_ext and implements
      a new function named mptcp_generate_data_checksum().
      
      Generate the data checksum in mptcp_sendmsg_frag and save it in
      mpext->csum.
      
      Note that we must generate the csum for zero-window probes, too.
      
      Do the csum update incrementally, to avoid multiple csum computations when
      data is appended to an existing skb (see the checksum sketch after this
      entry).
      
      Note that a later patch will skip unneeded csum-related operations; those
      changes are not included here to keep the delta small.
      Co-developed-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
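
      Below is a small, self-contained sketch of the checksum arithmetic in
      plain C, not the kernel's csum helpers; csum_add_chunk() and
      csum_finalize() are names invented for this example. It shows an
      Internet-style ones'-complement sum that can be accumulated incrementally
      as data is appended, then folded and complemented once the mapping is
      complete.

      #include <stddef.h>
      #include <stdint.h>

      /*
       * Accumulate one chunk of data into a running 32-bit partial sum.
       * For the incremental result to match a one-shot computation, every
       * chunk except the last must have an even length.
       */
      static uint32_t csum_add_chunk(uint32_t sum, const uint8_t *data, size_t len)
      {
              size_t i;

              for (i = 0; i + 1 < len; i += 2)
                      sum += ((uint32_t)data[i] << 8) | data[i + 1];
              if (len & 1)                    /* odd trailing byte, zero-padded */
                      sum += (uint32_t)data[len - 1] << 8;
              while (sum >> 16)               /* fold carries back into 16 bits */
                      sum = (sum & 0xffff) + (sum >> 16);
              return sum;
      }

      /* Final 16-bit checksum: ones' complement of the folded partial sum. */
      static uint16_t csum_finalize(uint32_t sum)
      {
              return (uint16_t)~sum;
      }
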
    • mptcp: add csum_enabled in mptcp_sock · 752e9067
      Committed by Geliang Tang
      This patch adds a new member named csum_enabled to struct mptcp_sock and
      uses a dummy mptcp_is_checksum_enabled() helper to initialize it.
      
      It also adds a new member named mptcpi_csum_enabled to struct mptcp_info
      to expose the csum_enabled flag (see the sketch after this entry).
      Acked-by: Paolo Abeni <pabeni@redhat.com>
      Signed-off-by: Geliang Tang <geliangtang@gmail.com>
      Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
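
      A minimal sketch of this plumbing in plain C, using simplified stand-in
      types (the *_sketch names below are assumptions for this example, not the
      kernel structures): the flag is filled from the dummy helper when the
      socket is set up and mirrored into the info structure so user space can
      read it.

      #include <stdbool.h>

      struct mptcp_sock_sketch {
              bool csum_enabled;
      };

      struct mptcp_info_sketch {
              unsigned char mptcpi_csum_enabled;
      };

      /* Dummy helper, as described in the commit message above. */
      static bool mptcp_is_checksum_enabled_sketch(void)
      {
              return false;
      }

      static void mptcp_sock_init_sketch(struct mptcp_sock_sketch *msk)
      {
              msk->csum_enabled = mptcp_is_checksum_enabled_sketch();
      }

      /* Mirror the flag into the exported info structure. */
      static void mptcp_fill_info_sketch(const struct mptcp_sock_sketch *msk,
                                         struct mptcp_info_sketch *info)
      {
              info->mptcpi_csum_enabled = msk->csum_enabled;
      }
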
    • seg6: add support for SRv6 End.DT46 Behavior · 8b532109
      Committed by Andrea Mayer
      IETF RFC 8986 [1] includes the definition of SRv6 End.DT4, End.DT6, and
      End.DT46 Behaviors.
      
      The current SRv6 code in the Linux kernel only implements End.DT4 and
      End.DT6, which can be used to support IPv4-in-IPv6 and IPv6-in-IPv6 VPNs,
      respectively. With End.DT4 and End.DT6 it is not possible to create a
      single SRv6 VPN tunnel to carry both IPv4 and IPv6 traffic.
      
      The proposed End.DT46 implementation is meant to support the decapsulation
      of both IPv4 and IPv6 traffic coming from a single SRv6 tunnel (a sketch
      of this demultiplexing follows at the end of this entry).
      Implementing the SRv6 End.DT46 Behavior in the Linux kernel greatly
      simplifies the setup and operation of SRv6 VPNs.
      
      The SRv6 End.DT46 Behavior leverages the infrastructure of SRv6 End.DT{4,6}
      Behaviors implemented so far, because it makes use of a VRF device in
      order to force the routing lookup into the associated routing table.
      
      To make End.DT46 work properly, it must be guaranteed that the routing
      table used for routing lookup operations is bound to one and only one VRF
      during tunnel creation. This constraint is enforced by enabling the VRF
      strict_mode sysctl parameter, i.e.:
      
       $ sysctl -wq net.vrf.strict_mode=1
      
      Note that the same approach is used for the SRv6 End.DT4 Behavior and for
      the End.DT6 Behavior in VRF mode.
      
      The command used to instantiate an SRv6 End.DT46 Behavior is
      straightforward, i.e.:
      
       $ ip -6 route add 2001:db8::1 encap seg6local action End.DT46 vrftable 100 dev vrf100
      
      [1] https://www.rfc-editor.org/rfc/rfc8986.html#name-enddt46-decapsulation-and-s
      
      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
      
      Performance and impact of the SRv6 End.DT46 Behavior on SRv6 networking
      =======================================================================
      
      This patch aims to add the SRv6 End.DT46 Behavior with minimal impact on
      the performance of SRv6 End.DT4 and End.DT6 Behaviors.
      In order to verify this, we tested the performance of the newly introduced
      SRv6 End.DT46 Behavior and compared it with the performance of SRv6
      End.DT{4,6} Behaviors, considering both the patched kernel and the kernel
      before applying the End.DT46 patch (referred to as vanilla kernel).
      
      In detail, the following decapsulation scenarios were considered:
      
       1.a) IPv6 traffic in SRv6 End.DT46 Behavior on patched kernel;
       1.b) IPv4 traffic in SRv6 End.DT46 Behavior on patched kernel;
       2.a) SRv6 End.DT6 Behavior (VRF mode) on patched kernel;
       2.b) SRv6 End.DT4 Behavior on patched kernel;
       3.a) SRv6 End.DT6 Behavior (VRF mode) on vanilla kernel (without the
            End.DT46 patch);
       3.b) SRv6 End.DT4 Behavior on vanilla kernel (without the End.DT46 patch).
      
      All tests were performed on a testbed deployed on the CloudLab [2]
      facilities. We considered IPv{4,6} traffic handled by a single core (at 2.4
      GHz on a Xeon(R) CPU E5-2630 v3) on kernel 5.13-rc1 using packets of size
      ~ 100 bytes.
      
      Scenario (1.a): average 684.70 kpps; std. dev. 0.7 kpps;
      Scenario (1.b): average 711.69 kpps; std. dev. 1.2 kpps;
      Scenario (2.a): average 690.70 kpps; std. dev. 1.2 kpps;
      Scenario (2.b): average 722.22 kpps; std. dev. 1.7 kpps;
      Scenario (3.a): average 690.02 kpps; std. dev. 2.6 kpps;
      Scenario (3.b): average 721.91 kpps; std. dev. 1.2 kpps;
      
      Considering the results for the patched kernel (1.a, 1.b, 2.a, 2.b), we
      observe that the performance degradation incurred by using End.DT46 rather
      than End.DT6 and End.DT4 for IPv6 and IPv4 traffic, respectively, is
      minimal: around 0.9% and 1.5%. This minimal degradation is the price to be
      paid for using a single tunnel capable of handling both types of traffic
      (IPv4 and IPv6).
      
      Comparing the results for End.DT4 and End.DT6 under the patched and the
      vanilla kernel (2.a, 2.b, 3.a, 3.b), we observe that the introduction of
      the End.DT46 patch has no impact on the performance of End.DT4 and
      End.DT6.
      
      [2] https://www.cloudlab.us
      Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
      Reviewed-by: David Ahern <dsahern@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
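
      Below is an illustration-only sketch of the demultiplexing idea in plain
      C; classify_inner() and the enum are names invented for this example, not
      the seg6_local implementation. Once the outer IPv6/SRH encapsulation has
      been removed, the version nibble of the inner packet selects IPv4-style or
      IPv6-style handling, and the subsequent route lookup is forced into the
      table bound to the VRF, as End.DT4 and End.DT6 already do.

      #include <stddef.h>
      #include <stdint.h>

      enum inner_proto { INNER_IPV4, INNER_IPV6, INNER_UNSUPPORTED };

      /*
       * Classify the decapsulated inner packet by its IP version nibble so the
       * proper address-family handling (and VRF-bound table lookup) can follow.
       */
      static enum inner_proto classify_inner(const uint8_t *inner, size_t len)
      {
              if (len < 1)
                      return INNER_UNSUPPORTED;

              switch (inner[0] >> 4) {
              case 4:
                      return INNER_IPV4;        /* End.DT4-style path */
              case 6:
                      return INNER_IPV6;        /* End.DT6-style path */
              default:
                      return INNER_UNSUPPORTED; /* drop */
              }
      }
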
  2. 18 June 2021, 1 commit
  3. 17 June 2021, 5 commits
  4. 16 June 2021, 13 commits
  5. 15 June 2021, 5 commits
  6. 13 June 2021, 4 commits
  7. 12 June 2021, 8 commits