1. 05 Dec 2017 (1 commit)
    • bpf: Add access to snd_cwnd and others in sock_ops · f19397a5
      Authored by Lawrence Brakmo
      Adds read access to snd_cwnd and srtt_us fields of tcp_sock. Since these
      fields are only valid if the socket associated with the sock_ops program
      call is a full socket, the field is_fullsock is also added to the
      bpf_sock_ops struct. If the socket is not a full socket, reading these
      fields returns 0.
      
      Note that in most cases it will not be necessary to check is_fullsock to
      know if there is a full socket. The context of the call, as specified by
      the 'op' field, can sometimes determine whether there is a full socket.
      
      The struct bpf_sock_ops has the following fields added:
      
        __u32 is_fullsock;      /* Some TCP fields are only valid if
                                 * there is a full socket. If not, the
                                 * fields read as zero.
                                 */
        __u32 snd_cwnd;
        __u32 srtt_us;          /* Averaged RTT << 3 in usecs */
      
      There is a new macro, SOCK_OPS_GET_TCP32(NAME), to make it easier to add
      read access to more 32 bit tcp_sock fields.
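
      A minimal sketch of a sock_ops program reading the new fields (the
      SEC() macro is assumed to come from samples/bpf bpf_helpers.h; storing
      the values in a map is left out):

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"        /* samples/bpf: SEC() etc. */

        SEC("sockops")
        int log_cwnd(struct bpf_sock_ops *skops)
        {
                __u32 cwnd = 0, rtt = 0;

                /* snd_cwnd/srtt_us read as zero unless this is a full socket */
                if (skops->is_fullsock) {
                        cwnd = skops->snd_cwnd;
                        rtt  = skops->srtt_us >> 3;     /* usecs */
                }

                /* ... e.g. store cwnd/rtt in a map keyed by the 4-tuple ... */
                return 1;
        }
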
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  2. 21 Nov 2017 (2 commits)
  3. 11 Nov 2017 (2 commits)
  4. 05 Nov 2017 (3 commits)
  5. 02 Nov 2017 (1 commit)
    • License cleanup: add SPDX license identifier to uapi header files with a license · e2be04c7
      Authored by Greg Kroah-Hartman
      Many user space API headers have licensing information, which is either
      incomplete, badly formatted or just a shorthand for referring to the
      license under which the file is supposed to be.  This makes it hard for
      compliance tools to determine the correct license.
      
      Update these files with an SPDX license identifier.  The identifier was
      chosen based on the license information in the file.
      
      GPL/LGPL licensed headers get the matching GPL/LGPL SPDX license
      identifier with the added 'WITH Linux-syscall-note' exception, which is
      the officially assigned exception identifier for the kernel syscall
      exception:
      
         NOTE! This copyright does *not* cover user programs that use kernel
         services by normal system calls - this is merely considered normal use
         of the kernel, and does *not* fall under the heading of "derived work".
      
      This exception makes it possible to include GPL headers into non GPL
      code, without confusing license compliance tools.
      
      Headers which have either explicit dual licensing or are just licensed
      under a non GPL license are updated with the corresponding SPDX
      identifier and the GPLv2 with syscall exception identifier.  The format
      is:
              ((GPL-2.0 WITH Linux-syscall-note) OR SPDX-ID-OF-OTHER-LICENSE)
      
      SPDX license identifiers are a legally binding shorthand, which can be
      used instead of the full boiler plate text.  The update does not remove
      existing license information as this has to be done on a case by case
      basis and the copyright holders might have to be consulted. This will
      happen in a separate step.
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.  See the previous patch in this series for the
      methodology of how this patch was researched.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  6. 01 Nov 2017 (1 commit)
  7. 29 Oct 2017 (1 commit)
  8. 22 Oct 2017 (2 commits)
    • bpf: Adding helper function bpf_getsockops · cd86d1fd
      Authored by Lawrence Brakmo
      Adding support for helper function bpf_getsockops to socket_ops BPF
      programs. This patch only supports TCP_CONGESTION.
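
      A sketch of the intended call from a sock_ops program; the helper is
      exposed in the uapi as bpf_getsockopt(), and TCP_CONGESTION is the only
      option handled by this patch. The SOL_TCP/TCP_CONGESTION values are
      copied from the uapi headers, and the helper declaration is assumed to
      be available from samples/bpf bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        #define SOL_TCP          6      /* IPPROTO_TCP */
        #define TCP_CONGESTION  13      /* from include/uapi/linux/tcp.h */

        SEC("sockops")
        int check_cc(struct bpf_sock_ops *skops)
        {
                char cong[16] = {};

                if (!bpf_getsockopt(skops, SOL_TCP, TCP_CONGESTION,
                                    cong, sizeof(cong))) {
                        /* cong now holds the name, e.g. "cubic" or "nv" */
                }
                return 1;
        }
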
      Signed-off-by: Vlad Vysotsky <vlad@cs.ucla.edu>
      Acked-by: Lawrence Brakmo <brakmo@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: add support for BPF_SOCK_OPS_BASE_RTT · e6546ef6
      Authored by Lawrence Brakmo
      A congestion control algorithm can make a call to the BPF socket_ops
      program to request the base RTT. The base RTT can be congestion control
      dependent and is meant to represent a congestion threshold such that
      RTTs above it indicate congestion. This is especially useful for flows
      within a DC where the base RTT is easy to obtain.
      
      Being provided a base RTT solves a basic problem in RTT based congestion
      avoidance algorithms (such as Vegas, NV and BBR). Although it is easy
      to get the base RTT when the network is not congested, it is very
      difficult to do when it is very congested. Newer connections get an
      inflated value of the base RTT, leading to unfairness (newer flows with a
      larger base RTT get more bandwidth). As a result, RTT based congestion
      avoidance algorithms tend to update their base RTTs to improve fairness.
      In very congested networks this can lead to base RTT inflation, reducing
      the ability of these RTT based congestion control algorithms to prevent
      congestion.
      
      Note that in my experiments with TCP-NV, the base RTT provided can be
      much larger than the actual hardware RTT. For example, experimenting
      with hosts within a rack where the hardware RTT is 16-20us, I've used
      base RTTs up to 150us. The effect of using a larger base RTT is that the
      congestion avoidance algorithm will allow more queueing. When there are
      only a few flows the main effect is larger measured RTTs and RPC
      latencies due to the increased queueing. When there are a lot of flows,
      a larger base RTT can lead to more congestion and more packet drops.
      For this case, where the hardware RTT is 20us, a base RTT of 80us
      produces good results.
      
      This patch only introduces BPF_SOCK_OPS_BASE_RTT, a later patch in this
      set adds support for using it in TCP-NV. Further study and testing is
      needed before support can be added to other delay based congestion
      avoidance algorithms.
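
      A sketch of how a sock_ops program might answer the new op; the pattern
      mirrors the sockops samples (reply carries the value back to the caller),
      and the 80us figure is just the example number from above:

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        SEC("sockops")
        int base_rtt(struct bpf_sock_ops *skops)
        {
                int rv = -1;

                switch (skops->op) {
                case BPF_SOCK_OPS_BASE_RTT:
                        rv = 80;        /* base RTT in usecs for in-DC flows */
                        break;
                }

                skops->reply = rv;      /* handed back to the CC module */
                return 1;
        }
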
      Signed-off-by: Lawrence Brakmo <brakmo@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  9. 20 Oct 2017 (1 commit)
  10. 18 Oct 2017 (1 commit)
    • bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP · 6710e112
      Authored by Jesper Dangaard Brouer
      The 'cpumap' is primarily used as a backend map for XDP BPF helper
      call bpf_redirect_map() and XDP_REDIRECT action, like 'devmap'.
      
      This patch implements the main part of the map.  It is not connected to
      the XDP redirect system yet, and no SKB allocations are done yet.
      
      The main concern in this patch is to ensure the datapath can run
      without any locking.  This adds complexity to the setup and tear-down
      procedure, whose assumptions are carefully documented in the
      code comments.
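
      A sketch of the intended usage once later patches in the series wire the
      map into XDP_REDIRECT; the map value is the queue size for the per-CPU
      kthread, and bpf_map_def/SEC()/bpf_redirect_map() are assumed to come
      from samples/bpf bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") cpu_map = {
                .type        = BPF_MAP_TYPE_CPUMAP,
                .key_size    = sizeof(__u32),   /* CPU number */
                .value_size  = sizeof(__u32),   /* qsize of that CPU's ring */
                .max_entries = 64,              /* checked against NR_CPUS */
        };

        SEC("xdp")
        int xdp_redirect_cpu(struct xdp_md *ctx)
        {
                __u32 dest_cpu = 2;     /* toy policy: steer everything to CPU 2 */

                return bpf_redirect_map(&cpu_map, dest_cpu, 0);
        }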
      
      V2:
       - make sure array isn't larger than NR_CPUS
       - make sure CPUs added is a valid possible CPU
      
      V3: fix nitpicks from Jakub Kicinski <kubakici@wp.pl>
      
      V5:
       - Restrict map allocation to root / CAP_SYS_ADMIN
       - WARN_ON_ONCE if queue is not empty on tear-down
       - Return -EPERM on memlock limit instead of -ENOMEM
       - Error code in __cpu_map_entry_alloc() also handle ptr_ring_cleanup()
       - Moved cpu_map_enqueue() to next patch
      
      V6: all notice by Daniel Borkmann
       - Fix err return code in cpu_map_alloc() introduced in V5
       - Move cpu_possible() check after max_entries boundary check
       - Forbid usage initially in check_map_func_compatibility()
      
      V7:
       - Fix alloc error path spotted by Daniel Borkmann
       - Did stress test adding+removing CPUs from the map concurrently
       - Fixed refcnt issue on cpu_map_entry, kthread started too soon
       - Make sure packets are flushed during tear-down, involved use of
         rcu_barrier() and kthread_run only exit after queue is empty
       - Fix alloc error path in __cpu_map_entry_alloc() for ptr_ring
      
      V8:
       - Nitpicking comments and grammar fixes by Edward Cree
       - Fix missing semi-colon introduced in V7 due to rebasing
       - Move struct bpf_cpu_map_entry members cpu+map_id to tracepoint patch
      Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  11. 08 Oct 2017 (3 commits)
    • bpf: Use char in prog and map name · 067cae47
      Authored by Martin KaFai Lau
      Instead of u8, use char for prog and map names.  This avoids the
      userspace tools getting the compiler's signedness warning.  The
      bpf_prog_aux, bpf_map, bpf_attr, bpf_prog_info and
      bpf_map_info structs are changed.
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Cc: Jakub Kicinski <jakub.kicinski@netronome.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: add helper bpf_perf_prog_read_value · 4bebdc7a
      Authored by Yonghong Song
      This patch adds the helper bpf_perf_prog_read_value for perf event based bpf
      programs, to read the event counter and enabled/running time.
      The enabled/running time is accumulated since the perf event open.
      
      The typical use case for a perf event based bpf program is to attach itself
      to a single event. In such cases, if it is desirable to get the scaling factor
      between two bpf invocations, users can save the time values in a map,
      and use the value from the map and the current value to calculate
      the scaling factor.
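
      A sketch of the call from a perf-event attached program; struct
      bpf_perf_event_value carries the counter plus enabled/running time, and
      the helper declaration is assumed to be available in bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include <uapi/linux/bpf_perf_event.h>
        #include "bpf_helpers.h"

        SEC("perf_event")
        int on_sample(struct bpf_perf_event_data *ctx)
        {
                struct bpf_perf_event_value v = {};

                if (!bpf_perf_prog_read_value(ctx, &v, sizeof(v))) {
                        /* v.counter, v.enabled, v.running; stash them in a
                         * map to compute a scaling factor on the next call.
                         */
                }
                return 0;
        }
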
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: add helper bpf_perf_event_read_value for perf event array map · 908432ca
      Authored by Yonghong Song
      Hardware pmu counters are limited resources. When there are more
      pmu based perf events opened than available counters, kernel will
      multiplex these events so each event gets certain percentage
      (but not 100%) of the pmu time. When multiplexing happens,
      the number of samples or the counter value will not reflect what it
      would have been without multiplexing. This makes comparisons between
      different runs difficult.
      
      Typically, the number of samples or counter value should be
      normalized before comparing to other experiments. The typical
      normalization is done like:
        normalized_num_samples = num_samples * time_enabled / time_running
        normalized_counter_value = counter_value * time_enabled / time_running
      where time_enabled is the time enabled for event and time_running is
      the time running for event since last normalization.
      
      This patch adds the helper bpf_perf_event_read_value for kprobe based perf
      event array maps, to read the perf counter and enabled/running time.
      The enabled/running time is accumulated since the perf event open.
      To compute a scaling factor between two bpf invocations, users
      can use cpu_id as the key (which is typical for the perf array usage model)
      to remember the previous value and do the calculation inside the
      bpf program.
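
      A sketch of the kprobe-side usage with a perf event array; the probe
      point and map size are placeholders, and bpf_map_def/SEC()/the helper
      declaration are assumed to come from samples/bpf bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include <linux/ptrace.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") counters = {
                .type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
                .key_size    = sizeof(int),
                .value_size  = sizeof(__u32),
                .max_entries = 64,              /* one slot per CPU */
        };

        SEC("kprobe/sys_write")                 /* placeholder probe point */
        int read_counter(struct pt_regs *ctx)
        {
                struct bpf_perf_event_value v = {};

                /* BPF_F_CURRENT_CPU selects the event opened for this CPU */
                if (bpf_perf_event_read_value(&counters, BPF_F_CURRENT_CPU,
                                              &v, sizeof(v)) == 0) {
                        /* normalized = v.counter * v.enabled / v.running */
                }
                return 0;
        }
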
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  12. 05 Oct 2017 (2 commits)
    • bpf: introduce BPF_PROG_QUERY command · 468e2f64
      Authored by Alexei Starovoitov
      Introduce the BPF_PROG_QUERY command to retrieve either the set of
      programs attached to a given cgroup or the set of effective programs
      that will execute for events within that cgroup.
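
      A userspace sketch of the new command; the cgroup path is a placeholder
      and the field names follow the query section added to union bpf_attr:

        #include <linux/bpf.h>
        #include <sys/syscall.h>
        #include <fcntl.h>
        #include <string.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                __u32 ids[64];
                union bpf_attr attr;
                int cg_fd = open("/sys/fs/cgroup/unified/foo", O_RDONLY);

                memset(&attr, 0, sizeof(attr));
                attr.query.target_fd   = cg_fd;
                attr.query.attach_type = BPF_CGROUP_INET_EGRESS;
                attr.query.query_flags = 0;     /* or BPF_F_QUERY_EFFECTIVE */
                attr.query.prog_ids    = (__u64)(unsigned long)ids;
                attr.query.prog_cnt    = 64;

                if (syscall(__NR_bpf, BPF_PROG_QUERY, &attr, sizeof(attr)) == 0)
                        printf("%u program(s) attached\n", attr.query.prog_cnt);
                return 0;
        }
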
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      for cgroup bits
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: multi program support for cgroup+bpf · 324bda9e
      Authored by Alexei Starovoitov
      Introduce the BPF_F_ALLOW_MULTI flag that can be used to attach multiple
      bpf programs to a cgroup.
      
      The difference between three possible flags for BPF_PROG_ATTACH command:
      - NONE(default): No further bpf programs allowed in the subtree.
      - BPF_F_ALLOW_OVERRIDE: If a sub-cgroup installs some bpf program,
        the program in this cgroup yields to sub-cgroup program.
      - BPF_F_ALLOW_MULTI: If a sub-cgroup installs some bpf program,
        that cgroup program gets run in addition to the program in this cgroup.
      
      NONE and BPF_F_ALLOW_OVERRIDE existed before. This patch doesn't
      change their behavior. It only clarifies the semantics in relation
      to new flag.
      
      Only one program is allowed to be attached to a cgroup with
      NONE or BPF_F_ALLOW_OVERRIDE flag.
      Multiple programs are allowed to be attached to a cgroup with
      BPF_F_ALLOW_MULTI flag. They are executed in FIFO order
      (those that were attached first run first).
      The programs of sub-cgroup are executed first, then programs of
      this cgroup and then programs of parent cgroup.
      All eligible programs are executed regardless of return code from
      earlier programs.
      
      To allow efficient execution of multiple programs attached to a cgroup,
      and to avoid penalizing cgroups without any programs attached, introduce
      'struct bpf_prog_array', which is an RCU protected array
      of pointers to bpf programs.
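
      A userspace sketch of attaching with the new flag; cg_fd and prog_fd are
      assumed to come from open(2) on the cgroup directory and BPF_PROG_LOAD:

        #include <linux/bpf.h>
        #include <sys/syscall.h>
        #include <string.h>
        #include <unistd.h>

        static int attach_multi(int cg_fd, int prog_fd)
        {
                union bpf_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.target_fd     = cg_fd;
                attr.attach_bpf_fd = prog_fd;
                attr.attach_type   = BPF_CGROUP_INET_INGRESS;
                attr.attach_flags  = BPF_F_ALLOW_MULTI; /* more programs may follow */

                return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
        }
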
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      for cgroup bits
      Acked-by: Tejun Heo <tj@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 04 Oct 2017 (1 commit)
  14. 29 Sep 2017 (2 commits)
  15. 27 Sep 2017 (1 commit)
    • bpf: add meta pointer for direct access · de8f3a83
      Authored by Daniel Borkmann
      This work enables generic transfer of metadata from XDP into skb. The
      basic idea is that we can make use of the fact that the resulting skb
      must be linear and already comes with a larger headroom for supporting
      bpf_xdp_adjust_head(), which mangles xdp->data. Here, we base our work
      on a similar principle and introduce a small helper bpf_xdp_adjust_meta()
      for adjusting a new pointer called xdp->data_meta. Thus, the packet has
      a flexible and programmable room for meta data, followed by the actual
      packet data. struct xdp_buff is therefore laid out such that we first point
      to data_hard_start, then data_meta directly prepended to data followed
      by data_end marking the end of packet. bpf_xdp_adjust_head() takes into
      account whether we have meta data already prepended and if so, memmove()s
      this along with the given offset provided there's enough room.
      
      xdp->data_meta is optional and programs are not required to use it. The
      rationale is that when we process the packet in XDP (e.g. as DoS filter),
      we can push further meta data along with it for the XDP_PASS case, and
      give the guarantee that a clsact ingress BPF program on the same device
      can pick this up for further post-processing. Since we work with skb
      there, we can also set skb->mark, skb->priority or other skb meta data
      out of BPF, thus having this scratch space generic and programmable
      allows for more flexibility than defining a direct 1:1 transfer of
      potentially new XDP members into skb (it's also more efficient as we
      don't need to initialize/handle each of such new members). The facility
      also works together with GRO aggregation. The scratch space at the head
      of the packet can be a multiple of 4 bytes, up to 32 bytes in size. Drivers not
      yet supporting xdp->data_meta can simply be set up with xdp->data_meta
      as xdp->data + 1 as bpf_xdp_adjust_meta() will detect this and bail out,
      such that the subsequent match against xdp->data for later access is
      guaranteed to fail.
      
      The verifier treats xdp->data_meta/xdp->data the same way as we treat
      xdp->data/xdp->data_end pointer comparisons. The requirement for doing
      the compare against xdp->data is that it hasn't been modified from its
      original address we got from ctx access. It may have a range marking
      already from prior successful xdp->data/xdp->data_end pointer comparisons
      though.
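
      A sketch of the XDP side: reserve 4 bytes of metadata in front of the
      packet so a clsact program can pick it up later; helper declarations are
      assumed to be available in samples/bpf bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        SEC("xdp")
        int xdp_set_meta(struct xdp_md *ctx)
        {
                void *data, *data_meta;
                __u32 *mark;

                /* grow the meta area by 4 bytes; bails out on drivers that
                 * do not support xdp->data_meta yet
                 */
                if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*mark)))
                        return XDP_PASS;

                data      = (void *)(long)ctx->data;
                data_meta = (void *)(long)ctx->data_meta;
                mark = data_meta;
                if ((void *)(mark + 1) > data)  /* bounds check for the verifier */
                        return XDP_PASS;

                *mark = 42;     /* e.g. copied to skb->mark by a tc program */
                return XDP_PASS;
        }
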
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  16. 09 Sep 2017 (1 commit)
    • bpf: make error reporting in bpf_warn_invalid_xdp_action more clear · 9beb8bed
      Authored by Daniel Borkmann
      Differentiate between an illegal XDP action code and one the driver
      simply does not support yet, to provide better feedback when we throw
      a one-time warning here. Reason is that with 814abfab
      ("xdp: add bpf_redirect helper function") not all drivers
      support the new XDP return code yet and thus they will
      fall into their 'default' case when checking for return
      codes after program return, which then triggers a
      bpf_warn_invalid_xdp_action() stating that the return
      code is illegal, but from XDP perspective it's not.
      
      I decided not to place something like an XDP_ACT_MAX define
      into uapi i) given we don't have this either for all other
      program types, ii) future action codes could have further
      encoding there, which would render such define unsuitable
      and we wouldn't be able to rip it out again, and iii) we
      rarely add new action codes.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  17. 01 Sep 2017 (1 commit)
  18. 29 Aug 2017 (2 commits)
    • bpf: sockmap, remove STRPARSER map_flags and add multi-map support · 2f857d04
      Authored by John Fastabend
      The addition of the BPF_SOCKMAP_STRPARSER map_flags flag was to handle a
      specific use case where we want to have BPF parse program disabled on
      an entry in a sockmap.
      
      However, Alexei found the API a bit cumbersome and I agreed. Let's
      remove the STRPARSER flag and support the use case by allowing socks
      to be in multiple maps. This allows users to create two maps one with
      programs attached and one without. When socks are added to maps they
      now inherit any programs attached to the map. This is a nice
      generalization and IMO improves the API.
      
      The API rules are less ambiguous and do not need a flag:
      
        - When a sock is added to a sockmap we have two cases,
      
           i. The sock map does not have any attached programs so
              we can add sock to map without inheriting bpf programs.
              The sock may exist in 0 or more other maps.
      
          ii. The sock map has an attached BPF program. To avoid duplicate
              bpf programs we only add the sock entry if it does not have
              an existing strparser/verdict attached, returning -EBUSY if
              a program is already attached. Otherwise attach the program
              and inherit strparser/verdict programs from the sock map.
      
      This allows for socks to be in a multiple maps for redirects and
      inherit a BPF program from a single map.
      
      Also this patch simplifies the logic around BPF_{EXIST|NOEXIST|ANY}
      flags. In the original patch I tried to be extra clever and only
      update map entries when necessary. Now I've decided the complexity
      is not worth it. If users constantly update an entry with the same
      sock for no reason (i.e. update an entry without actually changing
      any parameters on map or sock) we still do an alloc/release. Using
      this and allowing multiple entries of a sock to exist in a map the
      logic becomes much simpler.
      
      Note: Now that multiple maps are supported the "maps" pointer called
      when a socket is closed becomes a list of maps to remove the sock from.
      To keep the map up to date when a sock is added to the sockmap we must
      add the map/elem in the list. Likewise when it is removed we must
      remove it from the list. This results in searching the per psock list
      on delete operation. On TCP_CLOSE events we walk the list and remove
      the psock from all map/entry locations. I don't see any perf
      implications in this because at most I have a psock in two maps. If
      a psock were to be in many maps it's possible this might be noticeable
      on delete but I can't think of a reason to dup a psock in many maps.
      The sk_callback_lock is used to protect read/writes to the list. This
      was convenient because in all locations we were taking the lock
      anyways just after working on the list. Also the lock is per sock so
      in normal cases we shouldn't see any contention.
      Suggested-by: Alexei Starovoitov <ast@kernel.org>
      Fixes: 174a79ff ("bpf: sockmap with sk redirect support")
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: convert sockmap field attach_bpf_fd2 to type · 464bc0fd
      Authored by John Fastabend
      In the initial sockmap API we provided strparser and verdict programs
      using a single attach command by extending the attach API with the
      attach_bpf_fd2 field.
      
      However, if we add other programs in the future we will be adding a
      field for every new possible type, attach_bpf_fd(3,4,..). This
      seems a bit clumsy for an API. So let's push the programs using two
      new type fields.
      
         BPF_SK_SKB_STREAM_PARSER
         BPF_SK_SKB_STREAM_VERDICT
      
      This has the advantage of having a readable name and can easily be
      extended in the future.
      
      Updates to samples and sockmap included here also generalize the tests
      slightly to support the upcoming patch for multiple map support.
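
      A userspace sketch of the new attach types; sockmap_fd, parser_fd and
      verdict_fd are assumed to come from BPF_MAP_CREATE and BPF_PROG_LOAD:

        #include <linux/bpf.h>
        #include <sys/syscall.h>
        #include <string.h>
        #include <unistd.h>

        static int attach_sk_skb(int sockmap_fd, int prog_fd, int type)
        {
                union bpf_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.target_fd     = sockmap_fd;        /* the map, not a cgroup */
                attr.attach_bpf_fd = prog_fd;
                attr.attach_type   = type;

                return syscall(__NR_bpf, BPF_PROG_ATTACH, &attr, sizeof(attr));
        }

        /* usage:
         *   attach_sk_skb(sockmap_fd, parser_fd,  BPF_SK_SKB_STREAM_PARSER);
         *   attach_sk_skb(sockmap_fd, verdict_fd, BPF_SK_SKB_STREAM_VERDICT);
         */
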
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Fixes: 174a79ff ("bpf: sockmap with sk redirect support")
      Suggested-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 20 Aug 2017 (1 commit)
    • bpf: Allow selecting numa node during map creation · 96eabe7a
      Authored by Martin KaFai Lau
      The current map creation API does not allow providing the numa-node
      preference.  The memory usually comes from where the map-creation-process
      is running.  The performance is not ideal if the bpf_prog is known to
      always run in a numa node different from the map-creation-process.
      
      One of the use cases is sharding on CPU to different LRU maps (i.e.
      an array of LRU maps).  Here is the test result of map_perf_test on
      the INNER_LRU_HASH_PREALLOC test if we force the lru map used by
      CPU0 to be allocated from a remote numa node:
      
      [ The machine has 20 cores. CPU0-9 at node 0. CPU10-19 at node 1 ]
      
      ># taskset -c 10 ./map_perf_test 512 8 1260000 8000000
      5:inner_lru_hash_map_perf pre-alloc 1628380 events per sec
      4:inner_lru_hash_map_perf pre-alloc 1626396 events per sec
      3:inner_lru_hash_map_perf pre-alloc 1626144 events per sec
      6:inner_lru_hash_map_perf pre-alloc 1621657 events per sec
      2:inner_lru_hash_map_perf pre-alloc 1621534 events per sec
      1:inner_lru_hash_map_perf pre-alloc 1620292 events per sec
      7:inner_lru_hash_map_perf pre-alloc 1613305 events per sec
      0:inner_lru_hash_map_perf pre-alloc 1239150 events per sec  #<<<
      
      After specifying numa node:
      ># taskset -c 10 ./map_perf_test 512 8 1260000 8000000
      5:inner_lru_hash_map_perf pre-alloc 1629627 events per sec
      3:inner_lru_hash_map_perf pre-alloc 1628057 events per sec
      1:inner_lru_hash_map_perf pre-alloc 1623054 events per sec
      6:inner_lru_hash_map_perf pre-alloc 1616033 events per sec
      2:inner_lru_hash_map_perf pre-alloc 1614630 events per sec
      4:inner_lru_hash_map_perf pre-alloc 1612651 events per sec
      7:inner_lru_hash_map_perf pre-alloc 1609337 events per sec
      0:inner_lru_hash_map_perf pre-alloc 1619340 events per sec #<<<
      
      This patch adds one field, numa_node, to the bpf_attr.  Since numa node 0
      is a valid node, a new flag BPF_F_NUMA_NODE is also added.  The numa_node
      field is honored if and only if the BPF_F_NUMA_NODE flag is set.
      
      Numa node selection is not supported for percpu map.
      
      This patch does not change all the kmalloc calls, e.g.
      'htab = kzalloc()' is not changed since the object
      is small enough to stay in the cache.
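
      A userspace sketch of the new attribute; the map type and sizes are
      arbitrary, and numa_node is only honored together with BPF_F_NUMA_NODE:

        #include <linux/bpf.h>
        #include <sys/syscall.h>
        #include <string.h>
        #include <unistd.h>

        static int create_lru_on_node(int node)
        {
                union bpf_attr attr;

                memset(&attr, 0, sizeof(attr));
                attr.map_type    = BPF_MAP_TYPE_LRU_HASH;
                attr.key_size    = sizeof(__u32);
                attr.value_size  = sizeof(__u64);
                attr.max_entries = 10000;
                attr.map_flags   = BPF_F_NUMA_NODE;
                attr.numa_node   = node;        /* e.g. 1 for the remote node */

                return syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
        }
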
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 17 Aug 2017 (3 commits)
    • bpf: sockmap with sk redirect support · 174a79ff
      Authored by John Fastabend
      Recently we added a new map type called dev map used to forward XDP
      packets between ports (6093ec2d). This patch introduces a
      similar notion for sockets.
      
      A sockmap allows users to add participating sockets to a map. When
      sockets are added to the map enough context is stored with the
      map entry to use the entry with a new helper
      
        bpf_sk_redirect_map(map, key, flags)
      
      This helper (analogous to bpf_redirect_map in XDP) is given the map
      and an entry in the map. When called from a sockmap program, discussed
      below, the skb will be sent on the socket using skb_send_sock().
      
      With the above we need a bpf program to call the helper from, which will
      then implement the send logic. The initial site implemented in this
      series is the recv_sock hook. For this to work we implemented a map
      attach command to add attributes to a map. In sockmap we add two
      programs a parse program and a verdict program. The parse program
      uses strparser to build messages and pass them to the verdict program.
      The parse programs use the normal strparser semantics. The verdict
      program is of type SK_SKB.
      
      The verdict program returns a verdict of SK_DROP or SK_REDIRECT for
      now. Additional actions may be added later. When SK_REDIRECT is
      returned, expected when bpf program uses bpf_sk_redirect_map(), the
      sockmap logic will consult per cpu variables set by the helper routine
      and pull the sock entry out of the sock map. This pattern follows the
      existing redirect logic in cls and xdp programs.
      
      This gives the flow,
      
       recv_sock -> str_parser (parse_prog) -> verdict_prog -> skb_send_sock
                                                           \
                                                            -> kfree_skb
      
      As an example use case a message based load balancer may use specific
      logic in the verdict program to select the sock to send on.
      
      Sample programs are provided in future patches that hopefully illustrate
      the user interfaces. Also selftests are in follow-on patches.
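
      A sketch of a verdict program using the helper; the SEC() name and map
      sizes follow the samples convention, slot 0 is a toy policy, and
      bpf_map_def/bpf_sk_redirect_map() are assumed to be declared in
      samples/bpf bpf_helpers.h:

        #include <uapi/linux/bpf.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") sock_map = {
                .type        = BPF_MAP_TYPE_SOCKMAP,
                .key_size    = sizeof(int),
                .value_size  = sizeof(int),
                .max_entries = 20,
        };

        SEC("sk_skb2")          /* verdict program, prog type SK_SKB */
        int bpf_verdict(struct __sk_buff *skb)
        {
                /* always redirect to the socket stored at slot 0 */
                return bpf_sk_redirect_map(&sock_map, 0, 0);
        }
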
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: introduce new program type for skbs on sockets · b005fd18
      Authored by John Fastabend
      A class of programs, run from strparser and soon from a new map type
      called sock map, are used with skb as the context but on established
      sockets. By creating a specific program type for these we can use
      bpf helpers that expect full sockets and get the verifier to ensure
      these helpers are not used out of context.
      
      The new type is BPF_PROG_TYPE_SK_SKB. This patch introduces the
      infrastructure and type.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  21. 10 Aug 2017 (1 commit)
    • bpf: add BPF_J{LT,LE,SLT,SLE} instructions · 92b31a9a
      Authored by Daniel Borkmann
      Currently, eBPF only understands BPF_JGT (>), BPF_JGE (>=),
      BPF_JSGT (s>), BPF_JSGE (s>=) instructions, this means that
      particularly *JLT/*JLE counterparts involving immediates need
      to be rewritten from e.g. X < [IMM] by swapping arguments into
      [IMM] > X, meaning the immediate first needs to be loaded
      into a register Y := [IMM], so that we can then compare with
      Y > X. Note that the destination operand is always required to
      be a register.
      
      This has the downside of having unnecessarily increased register
      pressure, meaning complex programs would need to spill other
      registers temporarily to stack in order to obtain an unused
      register for the [IMM]. Loading to registers will thus also
      affect state pruning since we need to account for that register
      use and potentially those registers that had to be spilled/filled
      again. As a consequence slightly more stack space might have
      been used due to spilling, and BPF programs are a bit longer
      due to extra code involving the register load and potentially
      required spill/fills.
      
      Thus, add BPF_JLT (<), BPF_JLE (<=), BPF_JSLT (s<), BPF_JSLE (s<=)
      counterparts to the eBPF instruction set. Modifying LLVM to
      remove the NegateCC() workaround in a PoC patch at [1] and
      allowing it to also emit the new instructions resulted in
      cilium's BPF programs that are injected into the fast-path to
      have a reduced program length in the range of 2-3% (e.g.
      accumulated main and tail call sections from one of the object
      file reduced from 4864 to 4729 insns), reduced complexity in
      the range of 10-30% (e.g. accumulated sections reduced in one
      of the cases from 116432 to 88428 insns), and reduced stack
      usage in the range of 1-5% (e.g. accumulated sections from one
      of the object files reduced from 824 to 784b).
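
      A minimal illustration of the new encoding: with BPF_JLT, an unsigned
      "R1 < 42" test can be emitted as a single instruction. This is the raw
      struct bpf_insn form; the same thing can be written with the
      BPF_JMP_IMM() macro from include/linux/filter.h:

        #include <linux/bpf.h>

        /* if (R1 < 42) goto pc + 3;   -- no scratch register needed */
        struct bpf_insn cmp_lt = {
                .code    = BPF_JMP | BPF_JLT | BPF_K,   /* unsigned, immediate */
                .dst_reg = BPF_REG_1,
                .src_reg = 0,
                .off     = 3,
                .imm     = 42,
        };
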
      
      The modification for LLVM will be incorporated in a backwards
      compatible way. Plan is for LLVM to have i) a target specific
      option to offer a possibility to explicitly enable the extension
      by the user (as we have with -m target specific extensions today
      for various CPU insns), and ii) have the kernel checked for
      presence of the extensions and enable them transparently when
      the user is selecting more aggressive options such as -march=native
      in a bpf target context. (Other frontends generating BPF byte
      code, e.g. ply can probe the kernel directly for its code
      generation.)
      
        [1] https://github.com/borkmann/llvm/tree/bpf-insns
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  22. 05 Aug 2017 (1 commit)
  23. 18 Jul 2017 (3 commits)
  24. 03 Jul 2017 (1 commit)
  25. 02 Jul 2017 (2 commits)