1. 29 Aug 2020, 1 commit
    • bpf: Introduce sleepable BPF programs · 1e6c62a8
      Committed by Alexei Starovoitov
      Introduce sleepable BPF programs that can request this property for themselves
      via the BPF_F_SLEEPABLE flag at program load time. Such programs are then able
      to use helpers like bpf_copy_from_user() that might sleep. At present only
      fentry/fexit/fmod_ret and lsm programs can request to be sleepable, and only
      when they are attached to kernel functions that are known to allow sleeping.
      
      Non-sleepable programs rely on implicit rcu_read_lock() and migrate_disable()
      to protect the lifetime of programs, the maps they use, and the per-cpu kernel
      structures used to pass info between bpf programs and the kernel. Sleepable
      programs cannot be enclosed in rcu_read_lock(). migrate_disable() maps to
      preempt_disable() in non-RT kernels, so sleepable progs cannot be enclosed in
      migrate_disable() either. Therefore rcu_read_lock_trace is used to protect the
      lifetime of sleepable progs.
      
      There are many networking and tracing program types. In many cases the
      'struct bpf_prog *' pointer itself is rcu protected within some other kernel
      data structure, and the kernel code uses rcu_dereference() to load that
      program pointer and call BPF_PROG_RUN() on it. None of these cases are touched.
      Instead, sleepable bpf programs are allowed only with the bpf trampoline. The
      program pointers are hard-coded into the generated assembly of the bpf
      trampoline, and synchronize_rcu_tasks_trace() is used to protect the lifetime
      of the program. The same trampoline can hold both sleepable and non-sleepable
      progs.
      
      When rcu_read_lock_trace is held it means that some sleepable bpf program is
      running from a bpf trampoline. Such programs can use bpf arrays and
      preallocated hash/lru maps. These map types wait for programs to complete via
      synchronize_rcu_tasks_trace().

      Updates to a trampoline now have to do synchronize_rcu_tasks_trace() and
      synchronize_rcu_tasks() to wait both for sleepable progs to finish and for the
      trampoline assembly to finish.
      
      This is the first step of introducing sleepable progs. Eventually dynamically
      allocated hash maps can be allowed and networking program types can become
      sleepable too.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Josef Bacik <josef@toxicpanda.com>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: KP Singh <kpsingh@google.com>
      Link: https://lore.kernel.org/bpf/20200827220114.69225-3-alexei.starovoitov@gmail.com
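      A minimal sketch of how a sleepable program using this API might look
      (assuming libbpf's ".s" SEC() suffix for sleepable programs; the attach
      target strncpy_from_user and the buffer size are illustrative only):

      	/* BPF side: a sleepable fentry program copying a user buffer. */
      	#include "vmlinux.h"
      	#include <bpf/bpf_helpers.h>
      	#include <bpf/bpf_tracing.h>

      	char data[64];

      	SEC("fentry.s/strncpy_from_user")	/* ".s" = load with BPF_F_SLEEPABLE */
      	int BPF_PROG(probe, char *dst, const char *src, long count)
      	{
      		/* bpf_copy_from_user() may sleep (fault in user pages), so it
      		 * is only available to programs loaded as sleepable. */
      		bpf_copy_from_user(data, sizeof(data), src);
      		return 0;
      	}

      	char LICENSE[] SEC("license") = "GPL";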
  2. 28 Aug 2020, 1 commit
    • bpf: Add map_meta_equal map ops · f4d05259
      Committed by Martin KaFai Lau
      Some properties of the inner map are used at verification time. When an
      inner map is inserted into an outer map at runtime, bpf_map_meta_equal()
      is currently used to ensure those properties of the inserted inner map
      stay the same as at verification time.
      
      In particular, the current bpf_map_meta_equal() checks max_entries, which
      turns out to be too restrictive for most of the maps, since they do not use
      max_entries at verification time.  It limits the use case of replacing
      a smaller inner map with a larger inner map.  Some maps do use max_entries
      during verification, though.  For example, the map_gen_lookup in
      array_map_ops uses max_entries to generate the inline lookup code.
      
      To accommodate differences between maps, a map_meta_equal op is added to
      bpf_map_ops.  Each map type can decide what to check when its map is used
      as an inner map at runtime.

      Also, some map types cannot be used as an inner map and are currently
      blacklisted in bpf_map_meta_alloc() in map_in_map.c.  It is not unusual
      that new map types may not be aware that such a blacklist exists.  This
      patch enforces an explicit opt-in and only allows a map to be used as an
      inner map if it has implemented the map_meta_equal op.  It is based on the
      discussion in [1].
      
      In this patch, all maps that support being used as an inner map have their
      map_meta_equal pointing to bpf_map_meta_equal.  A later patch will relax
      the max_entries check for most maps.  bpf_types.h counts 28 map types.
      This patch adds 23 ".map_meta_equal" entries by using coccinelle.  The
      remaining 5 are:
      	BPF_MAP_TYPE_PROG_ARRAY
      	BPF_MAP_TYPE_(PERCPU)_CGROUP_STORAGE
      	BPF_MAP_TYPE_STRUCT_OPS
      	BPF_MAP_TYPE_ARRAY_OF_MAPS
      	BPF_MAP_TYPE_HASH_OF_MAPS
      
      The "if (inner_map->inner_map_meta)" check in bpf_map_meta_alloc()
      is moved such that the same error is returned.
      
      [1]: https://lore.kernel.org/bpf/20200522022342.899756-1-kafai@fb.com/

      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200828011806.1970400-1-kafai@fb.com
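      A sketch of the opt-in pattern this introduces, for a hypothetical map type
      (the callback name and the checks chosen here are illustrative; a map type
      could simply point .map_meta_equal at bpf_map_meta_equal instead):

      	#include <linux/bpf.h>

      	/* Accept any inner map whose type, key/value sizes and flags match,
      	 * deliberately ignoring max_entries. */
      	static bool demo_map_meta_equal(const struct bpf_map *meta0,
      					const struct bpf_map *meta1)
      	{
      		return meta0->map_type == meta1->map_type &&
      		       meta0->key_size == meta1->key_size &&
      		       meta0->value_size == meta1->value_size &&
      		       meta0->map_flags == meta1->map_flags;
      	}

      	const struct bpf_map_ops demo_map_ops = {
      		.map_meta_equal	= demo_map_meta_equal,	/* opt in to map-in-map */
      		/* ... map_alloc, map_free and the other ops ... */
      	};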
  3. 26 Aug 2020, 3 commits
  4. 22 Aug 2020, 3 commits
  5. 20 Aug 2020, 1 commit
  6. 07 Aug 2020, 1 commit
    • bpf: Change uapi for bpf iterator map elements · 5e7b3020
      Committed by Yonghong Song
      Commit a5cbe05a ("bpf: Implement bpf iterator for
      map elements") added bpf iterator support for
      map elements. The map element bpf iterator requires
      info to identify a particular map. In the above
      commit, attr->link_create.target_fd is used
      to carry the map_fd, and an enum bpf_iter_link_info
      is added to the uapi to specify that the target_fd
      actually represents a map_fd:
          enum bpf_iter_link_info {
      	BPF_ITER_LINK_UNSPEC = 0,
      	BPF_ITER_LINK_MAP_FD = 1,
      
      	MAX_BPF_ITER_LINK_INFO,
          };
      
      This is an extensible approach as we can grow the
      enumerator for pid, cgroup_id, etc. and we can
      unionize target_fd for pid, cgroup_id, etc.
      But in the future there is a chance that
      more complex customization may be needed, e.g.,
      for tasks, filtering based on
      both cgroup_id and user_id.
      
      This patch changes the uapi to add the fields
      	__aligned_u64	iter_info;
      	__u32		iter_info_len;
      to link_create for additional iter_info.
      The iter_info is defined as
      	union bpf_iter_link_info {
      		struct {
      			__u32   map_fd;
      		} map;
      	};
      
      This makes future extension for additional
      customization easier. The bpf_iter_link_info will be
      passed to the target callback to validate, and the
      generic bpf_iter framework does not need to deal
      with it any more.
      
      Note that map_fd = 0 will be considered invalid
      and -EBADF will be returned to user space.
      
      Fixes: a5cbe05a ("bpf: Implement bpf iterator for map elements")
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200805055056.1457463-1-yhs@fb.com
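      A user-space sketch of attaching a map-element iterator with the new uapi
      (assuming libbpf's bpf_link_create() opts from around this kernel version;
      error handling omitted):

      	#include <bpf/bpf.h>
      	#include <bpf/libbpf.h>

      	static int attach_map_iter(int iter_prog_fd, int map_fd)
      	{
      		union bpf_iter_link_info linfo = {};
      		DECLARE_LIBBPF_OPTS(bpf_link_create_opts, opts);

      		linfo.map.map_fd = map_fd;	/* 0 is rejected with -EBADF */
      		opts.iter_info = &linfo;
      		opts.iter_info_len = sizeof(linfo);

      		return bpf_link_create(iter_prog_fd, 0, BPF_TRACE_ITER, &opts);
      	}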
  7. 02 Aug 2020, 1 commit
  8. 26 Jul 2020, 7 commits
  9. 22 Jul 2020, 2 commits
  10. 18 Jul 2020, 2 commits
    • bpf: Introduce SK_LOOKUP program type with a dedicated attach point · e9ddbb77
      Committed by Jakub Sitnicki
      Add a new program type BPF_PROG_TYPE_SK_LOOKUP with a dedicated attach type
      BPF_SK_LOOKUP. The new program kind is to be invoked by the transport layer
      when looking up a listening socket for a new connection request for
      connection oriented protocols, or when looking up an unconnected socket for
      a packet for connection-less protocols.
      
      When called, the SK_LOOKUP BPF program can select a socket that will receive
      the packet. This serves as a mechanism to overcome the limits of what the
      bind() API allows expressing. Two use-cases driving this work are:
      
       (1) steer packets destined to an IP range, on fixed port to a socket
      
           192.0.2.0/24, port 80 -> NGINX socket
      
       (2) steer packets destined to an IP address, on any port to a socket
      
           198.51.100.1, any port -> L7 proxy socket
      
      In its run-time context the program receives information about the packet
      that triggered the socket lookup, namely the IP version, L4 protocol
      identifier, and address 4-tuple. The context can be further extended to
      include the ingress interface identifier.
      
      To select a socket, the BPF program fetches it from a map holding socket
      references, such as SOCKMAP or SOCKHASH, and calls the bpf_sk_assign(ctx, sk, ...)
      helper to record the selection. The transport layer then uses the selected
      socket as the result of the socket lookup.
      
      In its basic form, SK_LOOKUP acts as a filter and hence must return either
      SK_PASS or SK_DROP. If the program returns with SK_PASS, transport should
      look for a socket to receive the packet, or use the one selected by the
      program if available, while SK_DROP informs the transport layer that the
      lookup should fail.
      
      This patch only enables the user to attach an SK_LOOKUP program to a
      network namespace. Subsequent patches hook it up to run on local delivery
      path in ipv4 and ipv6 stacks.
      Suggested-by: Marek Majkowski <marek@cloudflare.com>
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200717103536.397595-3-jakub@cloudflare.com
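      A BPF-side sketch of the steering flow described above (the map layout, key,
      and the plain "sk_lookup" section name follow libbpf conventions and are
      illustrative only):

      	#include "vmlinux.h"
      	#include <bpf/bpf_helpers.h>

      	struct {
      		__uint(type, BPF_MAP_TYPE_SOCKMAP);
      		__uint(max_entries, 1);
      		__type(key, __u32);
      		__type(value, __u64);
      	} redir_map SEC(".maps");

      	SEC("sk_lookup")
      	int select_sock(struct bpf_sk_lookup *ctx)
      	{
      		__u32 key = 0;
      		struct bpf_sock *sk;

      		sk = bpf_map_lookup_elem(&redir_map, &key);
      		if (!sk)
      			return SK_PASS;		/* fall back to regular lookup */

      		bpf_sk_assign(ctx, sk, 0);	/* steer to the selected socket */
      		bpf_sk_release(sk);		/* drop the map-held reference */
      		return SK_PASS;
      	}

      	char LICENSE[] SEC("license") = "GPL";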
    • bpf, netns: Handle multiple link attachments · ce3aa9cc
      Committed by Jakub Sitnicki
      Extend the BPF netns link callbacks to rebuild (grow/shrink) or update the
      prog_array at a given position when a link gets attached/updated/released.

      This lets us lift the limit of having just one link attached for the new
      attach type introduced by a subsequent patch.
      
      No functional changes intended.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/20200717103536.397595-2-jakub@cloudflare.com
  11. 16 Jul 2020, 1 commit
  12. 01 Jul 2020, 2 commits
  13. 25 Jun 2020, 3 commits
  14. 23 Jun 2020, 1 commit
    • bpf: Support access to bpf map fields · 41c48f3a
      Committed by Andrey Ignatov
      There are multiple use-cases where it's convenient to have access to bpf
      map fields, both `struct bpf_map` and map-type-specific structs such as
      `struct bpf_array`, `struct bpf_htab`, etc.

      For example, while working with sock arrays it can be necessary to
      calculate the key based on map->max_entries (some_hash % max_entries).
      Currently this is solved by communicating max_entries via an "out-of-band"
      channel, e.g. via an additional map with a known key used to get info about
      the target map. That works, but is not very convenient and is error-prone
      when working with many maps.
      
      In other cases the necessary data is dynamic (i.e. unknown at load time)
      and it's impossible to get it at all. For example, while working with a
      hash table it can be convenient to know how much capacity is already
      used (bpf_htab.count.counter for the BPF_F_NO_PREALLOC case).

      At the same time the kernel knows this info and can provide it to a bpf
      program.
      
      Fill this gap by adding support to access bpf map fields from bpf
      program for both `struct bpf_map` and map type specific fields.
      
      Support is implemented via btf_struct_access() so that a user can define
      their own `struct bpf_map` or map type specific struct in their program
      with only necessary fields and preserve_access_index attribute, cast a
      map to this struct and use a field.
      
      For example:
      
      	struct bpf_map {
      		__u32 max_entries;
      	} __attribute__((preserve_access_index));
      
      	struct bpf_array {
      		struct bpf_map map;
      		__u32 elem_size;
      	} __attribute__((preserve_access_index));
      
      	struct {
      		__uint(type, BPF_MAP_TYPE_ARRAY);
      		__uint(max_entries, 4);
      		__type(key, __u32);
      		__type(value, __u32);
      	} m_array SEC(".maps");
      
      	SEC("cgroup_skb/egress")
      	int cg_skb(void *ctx)
      	{
      		struct bpf_array *array = (struct bpf_array *)&m_array;
      		struct bpf_map *map = (struct bpf_map *)&m_array;
      
      		/* .. use map->max_entries or array->map.max_entries .. */
      	}
      
      Similarly to other btf_struct_access() use-cases (e.g. struct tcp_sock
      in net/ipv4/bpf_tcp_ca.c) the patch allows access to any fields of
      corresponding struct. Only reading from map fields is supported.
      
      For btf_struct_access() to work there should be a way to know btf id of
      a struct that corresponds to a map type. To get btf id there should be a
      way to get a stringified name of map-specific struct, such as
      "bpf_array", "bpf_htab", etc for a map type. Two new fields are added to
      `struct bpf_map_ops` to handle it:
      * .map_btf_name keeps a btf name of a struct returned by map_alloc();
      * .map_btf_id is used to cache btf id of that struct.
      
      To make btf id calculation cheaper, the ids are calculated once while
      preparing btf_vmlinux and cached the same way as is done for the btf_id
      field of `struct bpf_func_proto`.
      
      While calculating btf ids, struct names are NOT checked for collisions.
      Collisions will be checked as part of the upcoming work to prepare btf ids
      used in the verifier at compile time. The only known
      collision for `struct bpf_htab` (kernel/bpf/hashtab.c vs
      net/core/sock_map.c) was fixed earlier.
      
      Both new fields .map_btf_name and .map_btf_id must be set for a map type
      for the feature to work. If neither is set for a map type, the verifier
      will return ENOTSUPP on an attempt to access the map_ptr of the
      corresponding type. If just one of them is set, it's a verifier
      misconfiguration.
      
      Only `struct bpf_array` for BPF_MAP_TYPE_ARRAY and `struct bpf_htab` for
      BPF_MAP_TYPE_HASH are supported by this patch. Other map types will be
      supported separately.
      
      The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by
      perfmon_capable() so that unpriv programs won't have access to bpf map
      fields.
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Link: https://lore.kernel.org/bpf/6479686a0cd1e9067993df57b4c3eef0e276fec9.1592600985.git.rdna@fb.com
  15. 02 Jun 2020, 3 commits
    • bpf: Use tracing helpers for lsm programs · 958a3f2d
      Committed by Jiri Olsa
      Currently lsm programs use the bpf_tracing_func_proto helpers, which do
      not include stack trace or perf event output. It's useful
      to have those for bpftrace lsm support [1].

      Use the tracing_prog_func_proto helpers for lsm programs instead.
      
      [1] https://github.com/iovisor/bpftrace/pull/1347

      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Cc: KP Singh <kpsingh@google.com>
      Link: https://lore.kernel.org/bpf/20200531154255.896551-1-jolsa@kernel.org
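      A sketch of an lsm program using one of the tracing helpers this makes
      available (the hook and the stack buffer size are illustrative):

      	#include "vmlinux.h"
      	#include <bpf/bpf_helpers.h>
      	#include <bpf/bpf_tracing.h>

      	__u64 stack[16];

      	SEC("lsm/file_open")
      	int BPF_PROG(log_file_open, struct file *file)
      	{
      		/* bpf_get_stack() comes from the tracing helper set that lsm
      		 * programs now share. */
      		bpf_get_stack(ctx, stack, sizeof(stack), 0);
      		return 0;	/* 0 = allow */
      	}

      	char LICENSE[] SEC("license") = "GPL";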
    • bpf: Add support to attach bpf program to a devmap entry · fbee97fe
      Committed by David Ahern
      Add BPF_XDP_DEVMAP attach type for use with programs associated with a
      DEVMAP entry.
      
      Allow DEVMAPs to associate a program with a device entry by adding
      a bpf_prog.fd to 'struct bpf_devmap_val'. Values read show the program
      id, so the fd and id are a union. bpf programs can get access to the
      struct via vmlinux.h.
      
      The program associated with the fd must have type XDP with expected
      attach type BPF_XDP_DEVMAP. When a program is associated with a device
      index, the program is run on an XDP_REDIRECT, before the buffer is
      added to the per-cpu queue. At this point rxq data is still valid; the
      next patch adds tx device information allowing the program to see both
      ingress and egress device indices.
      
      XDP generic is skb based and XDP programs do not work with skbs. Block
      this use case by walking the maps used by a program that is to be
      attached via xdpgeneric and failing if any of them are DEVMAP /
      DEVMAP_HASH with a program attached.

      Also block attaching BPF_XDP_DEVMAP programs directly to devices.
      Signed-off-by: David Ahern <dsahern@kernel.org>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20200529220716.75383-3-dsahern@kernel.org
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
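      A user-space sketch of wiring a program into a devmap slot (libbpf's
      low-level map API; the slot index and variable names are illustrative, and
      the referenced program must be XDP with expected attach type
      BPF_XDP_DEVMAP):

      	#include <linux/bpf.h>
      	#include <bpf/bpf.h>

      	static int set_devmap_prog(int devmap_fd, __u32 slot, __u32 ifindex,
      				   int devmap_prog_fd)
      	{
      		struct bpf_devmap_val val = {
      			.ifindex = ifindex,
      			.bpf_prog.fd = devmap_prog_fd,	/* read back as an id */
      		};

      		return bpf_map_update_elem(devmap_fd, &slot, &val, BPF_ANY);
      	}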
    • bpf: Implement BPF ring buffer and verifier support for it · 457f4436
      Committed by Andrii Nakryiko
      This commit adds a new MPSC ring buffer implementation to the BPF ecosystem,
      which allows multiple CPUs to submit data to a single shared ring buffer. On
      the consumption side, only a single consumer is assumed.
      
      Motivation
      ----------
      There are two distinct motivators for this work, neither satisfied by the
      existing perf buffer, which prompted the creation of a new ring buffer
      implementation:
        - more efficient memory utilization by sharing ring buffer across CPUs;
        - preserving ordering of events that happen sequentially in time, even
        across multiple CPUs (e.g., fork/exec/exit events for a task).
      
      These two problems are independent, but the perf buffer fails to satisfy
      both. Both are a result of the choice to have a per-CPU perf ring buffer.
      Both can also be solved by having an MPSC implementation of a ring buffer.
      The ordering problem could technically be solved for the perf buffer with
      some in-kernel counting, but given that the first problem requires an MPSC
      buffer, the same solution would solve the second problem automatically.
      
      Semantics and APIs
      ------------------
      A single ring buffer is presented to BPF programs as an instance of a BPF
      map of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered,
      but ultimately rejected.
      
      One way would have been to make BPF_MAP_TYPE_RINGBUF, similar to
      BPF_MAP_TYPE_PERF_EVENT_ARRAY, represent an array of ring buffers, but
      without enforcing the "same CPU only" rule. This would be a more familiar
      interface, compatible with existing perf buffer use in BPF, but it would
      fail if the application needed more advanced logic to look up a ring buffer
      by an arbitrary key. HASH_OF_MAPS addresses this with the current approach.
      Additionally, given the performance of BPF ringbuf, many use cases would
      just opt into a simple single ring buffer shared among all CPUs, for which
      the current approach would be overkill.
      
      Another approach could have introduced a new concept, alongside the BPF map,
      to represent a generic "container" object, which doesn't necessarily have a
      key/value interface with lookup/update/delete operations. This approach
      would add a lot of extra infrastructure that has to be built for
      observability and verifier support. It would also add another concept that
      BPF developers would have to familiarize themselves with, new syntax in
      libbpf, etc. Yet it would provide no real additional benefits over the
      approach of using a map. BPF_MAP_TYPE_RINGBUF doesn't support
      lookup/update/delete operations, but neither do a few other map types
      (e.g., queue and stack; array doesn't support delete, etc.).
      
      The chosen approach has the advantage of re-using existing BPF map
      infrastructure (introspection APIs in the kernel, libbpf support, etc),
      being a familiar concept (no need to teach users a new type of object in a
      BPF program), and utilizing existing tooling (bpftool). For the common
      scenario of using a single ring buffer for all CPUs, it's as simple and
      straightforward as it would be with a dedicated "container" object. On the
      other hand, by being a map, it can be combined with ARRAY_OF_MAPS and
      HASH_OF_MAPS map-in-maps to implement a wide variety of topologies, from
      one ring buffer for each CPU (e.g., as a replacement for perf buffer use
      cases), to complicated application hashing/sharding of ring buffers (e.g.,
      a small pool of ring buffers with a hash of the task's tgid as the lookup
      key, to preserve order while reducing contention).
      
      Key and value sizes are enforced to be zero. max_entries is used to specify
      the size of ring buffer and has to be a power of 2 value.
      
      There are a bunch of similarities between perf buffer
      (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and new BPF ring buffer semantics:
        - variable-length records;
        - if there is no more space left in ring buffer, reservation fails, no
          blocking;
        - memory-mappable data area for user-space applications for ease of
          consumption and high performance;
        - epoll notifications for new incoming data;
        - but still the ability to do busy polling for new data to achieve the
          lowest latency, if necessary.
      
      BPF ringbuf provides two sets of APIs to BPF programs:
        - bpf_ringbuf_output() allows to *copy* data from one place to a ring
          buffer, similarly to bpf_perf_event_output();
        - the bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard() APIs
          split the whole process into two steps. First, a fixed amount of space is
          reserved. If successful, a pointer to data inside the ring buffer data
          area is returned, which BPF programs can use similarly to data inside
          array/hash maps. Once ready, this piece of memory is either committed or
          discarded. Discard is similar to commit, but makes the consumer ignore
          the record.
      
      bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy,
      because the record has to be prepared in some other place first. But it
      allows submitting records of a length that's not known to the verifier
      beforehand. It also closely matches bpf_perf_event_output(), so it will
      simplify migration significantly.
      bpf_ringbuf_reserve() avoids the extra copy by providing a memory pointer
      directly into ring buffer memory. In a lot of cases records are larger than
      the BPF stack space allows, so many programs have to use an extra per-CPU
      array as a temporary heap for preparing a sample. bpf_ringbuf_reserve()
      avoids this need completely. But in exchange, it only allows a known,
      constant size of memory to be reserved, so that the verifier can verify that
      the BPF program can't access memory outside its reserved record space.
      bpf_ringbuf_output(), while slightly slower due to the extra memory copy,
      covers some use cases that are not suitable for bpf_ringbuf_reserve().
      
      The difference between commit and discard is very small. Discard just marks
      a record as discarded, and such records are supposed to be ignored by consumer
      code. Discard is useful for some advanced use-cases, such as ensuring
      all-or-nothing multi-record submission, or emulating temporary malloc()/free()
      within single BPF program invocation.
      
      Each reserved record is tracked by verifier through existing
      reference-tracking logic, similar to socket ref-tracking. It is thus
      impossible to reserve a record, but forget to submit (or discard) it.
      
      The bpf_ringbuf_query() helper allows querying various properties of the
      ring buffer. Currently 4 are supported:
        - BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring buffer;
        - BPF_RB_RING_SIZE returns the size of the ring buffer;
        - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of
          the consumer/producer, respectively.
      The returned values are momentary snapshots of the ring buffer state and
      could be off by the time the helper returns, so this should be used only for
      debugging/reporting reasons or for implementing various heuristics that take
      into account the highly changeable nature of some of those characteristics.
      
      One such heuristic might involve more fine-grained control over poll/epoll
      notifications about new data availability in the ring buffer. Together with
      the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
      helpers, it allows the BPF program a high degree of control and, e.g., more
      efficient batched notifications. The default self-balancing strategy,
      though, should be adequate for most applications and will already work
      reliably and efficiently.
      
      Design and implementation
      -------------------------
      This reserve/commit schema allows a natural way for multiple producers,
      either on different CPUs or even on the same CPU/in the same BPF program, to
      reserve independent records and work with them without blocking other
      producers. This means that if a BPF program is interrupted by another BPF
      program sharing the same ring buffer, they will both get a record reserved
      (provided there is enough space left) and can work with it and submit it
      independently. This applies to NMI context as well, except that, due to the
      use of a spinlock during reservation, bpf_ringbuf_reserve() might fail to
      get the lock in NMI context, in which case the reservation will fail even if
      the ring buffer is not full.
      
      The ring buffer itself internally is implemented as a power-of-2 sized
      circular buffer, with two logical and ever-increasing counters (which might
      wrap around on 32-bit architectures, that's not a problem):
        - consumer counter shows up to which logical position consumer consumed the
          data;
        - producer counter denotes amount of data reserved by all producers.
      
      Each time a record is reserved, the producer that "owns" the record
      successfully advances the producer counter. At that point, the data is still
      not yet ready to be consumed, though. Each record has an 8-byte header,
      which contains the length of the reserved record, as well as two extra bits:
      a busy bit to denote that the record is still being worked on, and a discard
      bit, which might be set at commit time if the record is discarded. In the
      latter case, the consumer is supposed to skip the record and move on to the
      next one. The record header also encodes the record's relative offset from
      the beginning of the ring buffer data area (in pages). This allows
      bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only the pointer to the
      record itself, without also requiring a pointer to the ring buffer. The ring
      buffer memory location is restored from the record metadata header. This
      significantly simplifies the verifier, as well as improving API usability.
      
      Producer counter increments are serialized under a spinlock, so there is
      a strict ordering between reservations. Commits, on the other hand, are
      completely lockless and independent. All records become available to the
      consumer in the order of reservations, but only after all previous records
      have been committed. It is thus possible for slow producers to temporarily
      hold off submitted records that were reserved later.
      
      Reservation/commit/consumer protocol is verified by litmus tests in
      Documentation/litmus-test/bpf-rb.
      
      One interesting implementation bit that significantly simplifies (and thus
      speeds up) the implementation of both producers and consumers is how the
      data area is mapped twice, contiguously back-to-back, in virtual memory.
      This allows not taking any special measures for samples that have to wrap
      around at the end of the circular buffer data area, because the next page
      after the last data page would be the first data page again, and thus the
      sample will still appear completely contiguous in virtual memory. See the
      comment and a simple ASCII diagram showing this visually in
      bpf_ringbuf_area_alloc().
      
      Another feature that distinguishes BPF ringbuf from the perf ring buffer is
      self-pacing notification of new data being available.
      The bpf_ringbuf_commit() implementation will send a notification of a new
      record being available after commit only if the consumer has already caught
      up right up to the record being committed. If not, the consumer still has to
      catch up and thus will see the new data anyway, without needing an extra
      poll notification.
      Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show
      that this allows achieving very high throughput without having to resort to
      tricks like "notify only every Nth sample", which are necessary with the
      perf buffer. For extreme cases, when a BPF program wants more manual control
      of notifications, the commit/discard/output helpers accept BPF_RB_NO_WAKEUP
      and BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of
      data availability, but require extra caution and diligence in using this API.
      
      Comparison to alternatives
      --------------------------
      Before considering implementing the BPF ring buffer from scratch, existing
      alternatives in the kernel were evaluated, but they didn't seem to meet the
      needs. They largely fell into a few categories:
        - per-CPU buffers (perf, ftrace, etc), which don't satisfy the two
          motivations outlined above (ordering and memory consumption);
        - linked list-based implementations; while some were multi-producer
          designs, consuming these from user-space would be very complicated and
          most probably not performant; memory-mapping a contiguous piece of
          memory is simpler and more performant for user-space consumers;
        - io_uring is SPSC, but also requires fixed-sized elements. Naively
          turning an SPSC queue into MPSC w/ a lock would have subpar performance
          compared to locked reserve + lockless commit, as with the BPF ring
          buffer. Fixed-sized elements would be too limiting for BPF programs,
          given that existing BPF programs already rely heavily on the
          variable-sized perf buffer;
        - specialized implementations (like a new printk ring buffer, [0]) with
          lots of printk-specific limitations and implications that didn't seem to
          fit well with the intended use with BPF programs.
      
        [0] https://lwn.net/Articles/779550/

      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
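      A BPF-side sketch of the reserve/submit flow described above (the event
      layout, ring size and tracepoint are illustrative; bpf_ringbuf_submit() is
      the "commit" step):

      	#include "vmlinux.h"
      	#include <bpf/bpf_helpers.h>

      	struct event {
      		__u32 pid;
      		char comm[16];
      	};

      	struct {
      		__uint(type, BPF_MAP_TYPE_RINGBUF);
      		__uint(max_entries, 256 * 1024);	/* must be a power of 2 */
      	} rb SEC(".maps");

      	SEC("tp/sched/sched_process_exec")
      	int handle_exec(void *ctx)
      	{
      		struct event *e;

      		e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
      		if (!e)
      			return 0;	/* no space left; reservation never blocks */

      		e->pid = bpf_get_current_pid_tgid() >> 32;
      		bpf_get_current_comm(e->comm, sizeof(e->comm));
      		bpf_ringbuf_submit(e, 0);	/* or bpf_ringbuf_discard(e, 0) */
      		return 0;
      	}

      	char LICENSE[] SEC("license") = "GPL";

      On the user-space side such a buffer is typically consumed with libbpf's
      ring_buffer__new()/ring_buffer__poll() callbacks, which handle the consumer
      position and skip discarded records.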
  16. 15 May 2020, 1 commit
    • bpf: Implement CAP_BPF · 2c78ee89
      Committed by Alexei Starovoitov
      Implement permissions as stated in uapi/linux/capability.h.
      In order to do that, the verifier allow_ptr_leaks flag is split
      into four flags, which are set as:
        env->allow_ptr_leaks = bpf_allow_ptr_leaks();
        env->bypass_spec_v1 = bpf_bypass_spec_v1();
        env->bypass_spec_v4 = bpf_bypass_spec_v4();
        env->bpf_capable = bpf_capable();
      
      The first three are currently equivalent to perfmon_capable(), since leaking
      kernel pointers and reading kernel memory via side channel attacks is roughly
      equivalent to reading kernel memory with cap_perfmon.

      'bpf_capable' enables bounded loops, precision tracking, bpf-to-bpf calls and
      other verifier features. 'allow_ptr_leaks' enables ptr leaks, ptr conversions,
      and subtraction of pointers. 'bypass_spec_v1' disables speculative analysis in
      the verifier, run-time mitigations in the bpf array, and enables indirect
      variable access in bpf programs. 'bypass_spec_v4' disables emission of
      sanitation code by the verifier.
      
      That means a networking BPF program loaded with CAP_BPF + CAP_NET_ADMIN will
      have speculative checks done by the verifier and other spectre mitigations
      applied. Such a networking BPF program will not be able to leak kernel
      pointers and will not be able to access arbitrary kernel memory.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200513230355.7858-3-alexei.starovoitov@gmail.com
  17. 14 May 2020, 4 commits
  18. 10 May 2020, 3 commits