1. 09 May 2020, 2 commits
  2. 05 May 2020, 3 commits
  3. 03 May 2020, 2 commits
  4. 02 May 2020, 6 commits
    • net: schedule: add action gate offloading · d29bdd69
      Authored by Po Liu
      Add the gate action to the flow action entry. In
      tc_setup_flow_action(), add the gate parameters when queueing to the
      entries of the flow_action_entry array provided to the driver.
      Signed-off-by: Po Liu <Po.Liu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: qos: introduce a gate control flow action · a51c328d
      Authored by Po Liu
      Introduce an ingress frame gate control flow action.
      The tc gate action works like this:
      assume there is a gate that allows specified ingress frames to pass
      at certain time slots and drops them at other time slots. A tc
      filter selects the ingress frames, and the tc gate action specifies
      during which time slots the frames may pass to the device and during
      which time slots they are dropped.
      The tc gate action provides an entry list that tells how long the
      gate stays open and how long it stays closed. The gate action also
      assigns a start time that tells when the entry list begins; the
      driver then repeats the gate entry list cyclically.
      For the software simulation, the gate action requires the user to
      assign a time clock type.
      
      Below is an example configuration from user space. The tc filter
      matches a stream whose source IP address is 192.168.0.20, and the
      gate action owns two time slots: one lasting 200ms with the gate
      open, letting frames pass, and one lasting 100ms with the gate
      closed, dropping frames. When the ingress frames reach a total of
      more than 8000000 bytes, the excess frames are dropped within that
      200000000ns time slot.
      
      > tc qdisc add dev eth0 ingress
      
      > tc filter add dev eth0 parent ffff: protocol ip \
      	   flower src_ip 192.168.0.20 \
      	   action gate index 2 clockid CLOCK_TAI \
      	   sched-entry open 200000000 -1 8000000 \
      	   sched-entry close 100000000 -1 -1
      
      > tc chain del dev eth0 ingress chain 0
      
      "sched-entry" follow the name taprio style. Gate state is
      "open"/"close". Follow with period nanosecond. Then next item is internal
      priority value means which ingress queue should put. "-1" means
      wildcard. The last value optional specifies the maximum number of
      MSDU octets that are permitted to pass the gate during the specified
      time interval.
      Base-time is not set will be 0 as default, as result start time would
      be ((N + 1) * cycletime) which is the minimal of future time.
      
      The example below filters a stream whose destination MAC address is
      10:00:80:00:00:00 and whose IP protocol is ICMP, then applies the
      gate action. The gate action runs with one close time slot, which
      means it always stays closed. The total cycle time is 200000000ns.
      The base-time is calculated by:
      
       1357000000000 + (N + 1) * cycletime
      
      When the total value is a future time, it becomes the start time.
      The cycletime here is 200000000ns for this case.
      
      > tc filter add dev eth0 parent ffff:  protocol ip \
      	   flower skip_hw ip_proto icmp dst_mac 10:00:80:00:00:00 \
      	   action gate index 12 base-time 1357000000000 \
      	   sched-entry close 200000000 -1 -1 \
      	   clockid CLOCK_TAI
      Signed-off-by: Po Liu <Po.Liu@nxp.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net: Replace the limit of TCP_LINGER2 with TCP_FIN_TIMEOUT_MAX · f0628c52
      Authored by Cambda Zhu
      This patch changes how the limit of TCP_LINGER2 is determined. The
      sysctl_tcp_fin_timeout used to be the limit of TCP_LINGER2, but now
      it is only the default value. A new macro named TCP_FIN_TIMEOUT_MAX
      is added as the limit of TCP_LINGER2, which is 2 minutes.
      
      Since TCP_LINGER2 used sysctl_tcp_fin_timeout as both the default
      value and the limit in the past, the system administrator could not
      set a default value for most sockets while letting some sockets have
      a greater timeout. It was arguably a mistake to let the sysctl be
      the limit of TCP_LINGER2. We could add a new sysctl to set the
      maximum of TCP_LINGER2, but the FIN-WAIT-2 timeout usually need not
      be very long, and 2 minutes is legal considering the TCP specs.
      
      Changes in v3:
      - Remove the new socket option and change the TCP_LINGER2 behavior so
        that the timeout can be set to value between sysctl_tcp_fin_timeout
        and 2 minutes.
      
      Changes in v2:
      - Add int overflow check for the new socket option.
      
      Changes in v1:
      - Add a new socket option to set timeout greater than
        sysctl_tcp_fin_timeout.
      Signed-off-by: Cambda Zhu <cambda@linux.alibaba.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: Bpf_{g,s}etsockopt for struct bpf_sock_addr · beecf11b
      Authored by Stanislav Fomichev
      Currently, the bpf_getsockopt and bpf_setsockopt helpers operate on
      the 'struct bpf_sock_ops' context in BPF_PROG_TYPE_SOCK_OPS
      programs. Let's generalize them and make them available for 'struct
      bpf_sock_addr'.
      That way, in the future, we can allow those helpers in more places.
      
      As an example, let's expose those 'struct bpf_sock_addr' based helpers to
      BPF_CGROUP_INET{4,6}_CONNECT hooks. That way we can override the
      congestion control algorithm (CC) before the connection is made.
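      A hedged sketch of what such a hook could look like (hypothetical
      BPF-side C, compiled with clang's BPF target; section and helper
      conventions follow libbpf, and "reno" is just an illustrative CC
      choice):

```c
/* Sketch of a BPF_CGROUP_INET4_CONNECT program that uses the newly
 * exposed bpf_setsockopt() on struct bpf_sock_addr to override the
 * congestion control algorithm before the connection is made. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#define SOL_TCP        6   /* not defined by linux/bpf.h */
#define TCP_CONGESTION 13

SEC("cgroup/connect4")
int set_cc_on_connect(struct bpf_sock_addr *ctx)
{
    char cc[] = "reno";    /* illustrative: any built-in CC would do */

    /* best effort; returning 1 lets the connect() proceed either way */
    bpf_setsockopt(ctx, SOL_TCP, TCP_CONGESTION, cc, sizeof(cc));
    return 1;
}

char _license[] SEC("license") = "GPL";
```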
      
      v3:
      * Expose custom helpers for the bpf_sock_addr context instead of
        taking a generic bpf_sock argument (as suggested by Daniel). Even
        with a try_socket_lock that doesn't sleep, we have a problem where
        the context sk is already locked and the socket lock is
        non-nestable.
      
      v2:
      * s/BPF_PROG_TYPE_CGROUP_SOCKOPT/BPF_PROG_TYPE_SOCK_OPS/
      Signed-off-by: Stanislav Fomichev <sdf@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200430233152.199403-1-sdf@google.com
    • docs: networking: convert x25-iface.txt to ReST · 883780af
      Authored by Mauro Carvalho Chehab
      Not much to be done here:
      
      - add SPDX header;
      - adjust title markup;
      - remove trailing whitespace;
      - add to networking/index.rst.
      Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: Sharing bpf runtime stats with BPF_ENABLE_STATS · d46edd67
      Authored by Song Liu
      Currently, sysctl kernel.bpf_stats_enabled controls BPF runtime stats.
      Typical userspace tools use kernel.bpf_stats_enabled as follows:
      
        1. Enable kernel.bpf_stats_enabled;
        2. Check program run_time_ns;
        3. Sleep for the monitoring period;
        4. Check program run_time_ns again, calculate the difference;
        5. Disable kernel.bpf_stats_enabled.
      
      The problem with this approach is that only one userspace tool can toggle
      this sysctl. If multiple tools toggle the sysctl at the same time, the
      measurement may be inaccurate.
      
      To fix this problem while keeping backward compatibility, introduce a new
      bpf command BPF_ENABLE_STATS. On success, this command enables stats and
      returns a valid fd. BPF_ENABLE_STATS takes argument "type". Currently,
      only one type, BPF_STATS_RUN_TIME, is supported. We can extend the
      command to support other types of stats in the future.
      
      With BPF_ENABLE_STATS, a user space tool would have the following flow:
      
        1. Get a fd with BPF_ENABLE_STATS, and make sure it is valid;
        2. Check program run_time_ns;
        3. Sleep for the monitoring period;
        4. Check program run_time_ns again, calculate the difference;
        5. Close the fd.
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200430071506.1408910-2-songliubraving@fb.com
  5. 01 May 2020, 15 commits
  6. 29 Apr 2020, 12 commits