1. 02 June 2020, 12 commits
    • bpf, selftests: Add test for ktls with skb bpf ingress policy · 463bac5f
      Committed by John Fastabend
      This adds a test for bpf ingress policy. To ensure data writes happen
      as expected with extra TLS headers we run these tests with data
      verification enabled by default. This checks that received packets have
      "PASS" stamped into the front of the payload.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/159079363965.5745.3390806911628980210.stgit@john-Precision-5820-Tower
    • selftest: Add tests for XDP programs in devmap entries · d39aec79
      Committed by David Ahern
      Add tests to verify the ability to add an XDP program to an
      entry in a DEVMAP.
      
      Add negative tests to show that DEVMAP programs cannot be
      attached to devices as a normal XDP program, and that accesses
      to egress_ifindex require the BPF_XDP_DEVMAP attach type.
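      
      For reference, installing an XDP program into a devmap entry from user
      space looks roughly like the sketch below. The helper name and file
      descriptors are illustrative (not part of this patch), and the program is
      assumed to have been loaded with expected_attach_type = BPF_XDP_DEVMAP.
      
        #include <linux/bpf.h>
        #include <bpf/bpf.h>
        
        /* Hypothetical sketch: assumes the devmap was created with
         * value_size == sizeof(struct bpf_devmap_val).
         */
        static int add_devmap_entry(int devmap_fd, __u32 key, __u32 ifindex, int prog_fd)
        {
                struct bpf_devmap_val val = {
                        .ifindex = ifindex,             /* redirect target device */
                        .bpf_prog.fd = prog_fd,         /* run on egress to that device */
                };
        
                return bpf_map_update_elem(devmap_fd, &key, &val, BPF_ANY);
        }
      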
      Signed-off-by: David Ahern <dsahern@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20200529220716.75383-6-dsahern@kernel.org
    • selftests/bpf: Add tests for write-only stacks/queues · 43dd115b
      Committed by Anton Protopopov
      For write-only stacks and queues, bpf_map_update_elem() should be allowed, but
      bpf_map_lookup_elem() and bpf_map_lookup_and_delete_elem() should fail with EPERM.
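      
      A minimal sketch of the expected behavior, using current libbpf
      map-creation APIs rather than the exact calls in test_maps (names and
      sizes are illustrative):
      
        #include <errno.h>
        #include <assert.h>
        #include <bpf/bpf.h>
        
        static void check_wronly_queue(void)
        {
                LIBBPF_OPTS(bpf_map_create_opts, opts, .map_flags = BPF_F_WRONLY);
                __u32 value = 42;
                int fd;
        
                fd = bpf_map_create(BPF_MAP_TYPE_QUEUE, NULL, 0, sizeof(value), 16, &opts);
                assert(fd >= 0);
        
                /* Writes to a write-only map are allowed... */
                assert(bpf_map_update_elem(fd, NULL, &value, 0) == 0);
                /* ...but any read must fail with EPERM. */
                assert(bpf_map_lookup_elem(fd, NULL, &value) < 0 && errno == EPERM);
                assert(bpf_map_lookup_and_delete_elem(fd, NULL, &value) < 0 && errno == EPERM);
        }
      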
      Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200527185700.14658-6-a.s.protopopov@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Add BPF ringbuf and perf buffer benchmarks · c97099b0
      Committed by Andrii Nakryiko
      Extend the bench framework with the ability to have a benchmark-provided
      child argument parser for custom benchmark-specific parameters. This keeps
      the generic bench code modular and independent of any specific benchmark.
      
      Also implement a set of benchmarks for the new BPF ring buffer and the
      existing perf buffer. Four benchmarks were implemented, two variations for
      each of BPF ringbuf and perfbuf:
        - rb-libbpf utilizes stock libbpf ring_buffer manager for reading data;
        - rb-custom implements custom ring buffer setup and reading code, to
          eliminate overheads inherent in generic libbpf code due to callback
          functions and the need to update consumer position after each consumed
          record, instead of batching updates (a consequence of the pessimistic
          assumption that the user callback might take a long time and thus could
          unnecessarily hold ring buffer space for too long);
        - pb-libbpf uses stock libbpf perf_buffer code with all the default
          settings, though it uses the higher-performance raw event callback to minimize
          unnecessary overhead;
        - pb-custom implements its own custom consumer code to minimize any possible
          overhead of generic libbpf implementation and indirect function calls.
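      
      For orientation, the rb-libbpf consumer path with stock libbpf looks
      roughly like the sketch below (names are illustrative, not the benchmark's
      actual code):
      
        #include <bpf/libbpf.h>
        
        static int handle_sample(void *ctx, void *data, size_t len)
        {
                long *hits = ctx;
        
                (*hits)++;              /* each record is a small 8-byte integer */
                return 0;
        }
        
        static void consume(int ringbuf_map_fd)
        {
                long hits = 0;
                struct ring_buffer *rb;
        
                rb = ring_buffer__new(ringbuf_map_fd, handle_sample, &hits, NULL);
                if (!rb)
                        return;
        
                /* Block in epoll until the kernel decides a wakeup is warranted. */
                while (ring_buffer__poll(rb, -1) >= 0)
                        ;
        
                ring_buffer__free(rb);
        }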
      
      All of the benchmarks support the default mode, in which no data
      notifications are skipped, as well as a sampled mode (with the --rb-sampled
      flag), which triggers epoll notifications less frequently and reduces
      overhead. As will be shown, this mode is especially critical for the perf
      buffer, which suffers from the high overhead of in-kernel wakeups.
      
      Otherwise, all benchmarks generate a batch of records in a similar way,
      using an fentry/sys_getpgid BPF program that pushes a bunch of records in
      a tight loop and counts the number of successful and dropped samples. Each
      record is a small 8-byte integer, to minimize the effect of memory copying
      with bpf_perf_event_output() and bpf_ringbuf_output().
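      
      A BPF-side producer in that spirit might look like the following sketch
      (the map, counters, and arch-specific section name are illustrative, not
      the actual benchmark sources):
      
        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>
        
        struct {
                __uint(type, BPF_MAP_TYPE_RINGBUF);
                __uint(max_entries, 1 << 20);
        } ringbuf SEC(".maps");
        
        long hits = 0;
        long drops = 0;
        
        SEC("fentry/__x64_sys_getpgid")
        int bench_producer(void *ctx)
        {
                __u64 sample = 42;      /* a small 8-byte record */
        
                if (bpf_ringbuf_output(&ringbuf, &sample, sizeof(sample), 0))
                        __sync_fetch_and_add(&drops, 1);
                else
                        __sync_fetch_and_add(&hits, 1);
                return 0;
        }
        
        char LICENSE[] SEC("license") = "GPL";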
      
      Benchmarks that have only one producer implement optional back-to-back mode,
      in which record production and consumption alternate on the same CPU.
      This is the highest-throughput happy case, showing ultimate performance
      achievable with either BPF ringbuf or perfbuf.
      
      All the below scenarios are implemented in a script in
      benchs/run_bench_ringbufs.sh. Tests were performed on a 28-core/56-thread
      Intel Xeon E5-2680 v4 @ 2.40GHz.
      
      Single-producer, parallel producer
      ==================================
      rb-libbpf            12.054 ± 0.320M/s (drops 0.000 ± 0.000M/s)
      rb-custom            8.158 ± 0.118M/s (drops 0.001 ± 0.003M/s)
      pb-libbpf            0.931 ± 0.007M/s (drops 0.000 ± 0.000M/s)
      pb-custom            0.965 ± 0.003M/s (drops 0.000 ± 0.000M/s)
      
      Single-producer, parallel producer, sampled notification
      ========================================================
      rb-libbpf            11.563 ± 0.067M/s (drops 0.000 ± 0.000M/s)
      rb-custom            15.895 ± 0.076M/s (drops 0.000 ± 0.000M/s)
      pb-libbpf            9.889 ± 0.032M/s (drops 0.000 ± 0.000M/s)
      pb-custom            9.866 ± 0.028M/s (drops 0.000 ± 0.000M/s)
      
      Single producer on one CPU, consumer on another one, both running at full
      speed. Curiously, rb-libbpf has higher throughput than the objectively
      faster (due to a more lightweight consumer code path) rb-custom. It appears
      that the faster consumer causes the kernel to send notifications more
      frequently, because the consumer is caught up more often. Performance of
      perfbuf suffers from the default "no sampling" policy and the huge overhead
      that causes.
      
      In sampled mode, rb-custom wins very significantly by eliminating overly
      frequent in-kernel wakeups; the gain appears to be roughly 2x.
      
      Perf buffer achieves even more impressive wins, compared to stock perfbuf
      settings, with a 10x improvement in throughput at a 1:500 sampling rate.
      The trade-off is that with sampling, the application might not get the next
      X events until the X+1st arrives, which is not always acceptable. With a
      steady influx of events, though, this shouldn't be a problem.
      
      Overall, single-producer performance of the ring buffer seems to be better
      in both the sampled and non-sampled modes, but it especially beats perfbuf
      in the non-sampled case, thanks to ringbuf's adaptive notification approach.
      
      Single-producer, back-to-back mode
      ==================================
      rb-libbpf            15.507 ± 0.247M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf-sampled    14.692 ± 0.195M/s (drops 0.000 ± 0.000M/s)
      rb-custom            21.449 ± 0.157M/s (drops 0.000 ± 0.000M/s)
      rb-custom-sampled    20.024 ± 0.386M/s (drops 0.000 ± 0.000M/s)
      pb-libbpf            1.601 ± 0.015M/s (drops 0.000 ± 0.000M/s)
      pb-libbpf-sampled    8.545 ± 0.064M/s (drops 0.000 ± 0.000M/s)
      pb-custom            1.607 ± 0.022M/s (drops 0.000 ± 0.000M/s)
      pb-custom-sampled    8.988 ± 0.144M/s (drops 0.000 ± 0.000M/s)
      
      Here we test a back-to-back mode, which is arguably the best-case scenario
      both for BPF ringbuf and perfbuf, because there is no contention and, for
      ringbuf, also no excessive notifications, since the consumer appears to be
      behind after the first record. For ringbuf, the custom consumer code
      clearly wins with 21.5 vs 15.5 million records per second exchanged between
      producer and consumer. Sampled mode actually hurts a bit due to slightly
      slower producer logic (it needs to fetch the amount of data available to
      decide whether to skip or force a notification).
      
      Perfbuf with wakeup sampling gets a 5.5x throughput increase compared to
      the no-sampling version. There also doesn't seem to be any noticeable
      overhead from the generic libbpf handling code.
      
      Perfbuf back-to-back, effect of sample rate
      ===========================================
      pb-sampled-1         1.035 ± 0.012M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-5         3.476 ± 0.087M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-10        5.094 ± 0.136M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-25        7.118 ± 0.153M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-50        8.169 ± 0.156M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-100       8.887 ± 0.136M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-250       9.180 ± 0.209M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-500       9.353 ± 0.281M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-1000      9.411 ± 0.217M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-2000      9.464 ± 0.167M/s (drops 0.000 ± 0.000M/s)
      pb-sampled-3000      9.575 ± 0.273M/s (drops 0.000 ± 0.000M/s)
      
      This benchmark shows the effect of event sampling for perfbuf, using
      back-to-back mode for the highest throughput. Just notifying on every 5th
      record gives a 3.5x speed-up. 250-500 appears to be the point of
      diminishing returns, with an almost 9x speed-up. Most benchmarks use 500 as
      the default sampling rate for pb-raw and pb-custom.
      
      Ringbuf back-to-back, effect of sample rate
      ===========================================
      rb-sampled-1         1.106 ± 0.010M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-5         4.746 ± 0.149M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-10        7.706 ± 0.164M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-25        12.893 ± 0.273M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-50        15.961 ± 0.361M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-100       18.203 ± 0.445M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-250       19.962 ± 0.786M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-500       20.881 ± 0.551M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-1000      21.317 ± 0.532M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-2000      21.331 ± 0.535M/s (drops 0.000 ± 0.000M/s)
      rb-sampled-3000      21.688 ± 0.392M/s (drops 0.000 ± 0.000M/s)
      
      A similar benchmark for the ring buffer also shows a great advantage (in
      terms of throughput) of skipping notifications. Notifying only on every 5th
      record gives a 4x boost. Also, similar to the perfbuf case, 250-500 seems
      to be the point of diminishing returns, giving roughly 20x better results.
      
      Keep in mind that for this test notifications are controlled manually with
      BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP. As can be seen from the previous
      benchmarks, adaptive notifications based on the consumer's position provide
      the same (or even slightly better, due to a simpler load generator on the
      BPF side) benefits in the favorable back-to-back scenario. An overzealous
      and fast consumer, which is almost always caught up, will make throughput
      numbers smaller. That's the case where manual notification control might
      prove to be extremely beneficial.
      
      Ringbuf back-to-back, reserve+commit vs output
      ==============================================
      reserve              22.819 ± 0.503M/s (drops 0.000 ± 0.000M/s)
      output               18.906 ± 0.433M/s (drops 0.000 ± 0.000M/s)
      
      Ringbuf sampled, reserve+commit vs output
      =========================================
      reserve-sampled      15.350 ± 0.132M/s (drops 0.000 ± 0.000M/s)
      output-sampled       14.195 ± 0.144M/s (drops 0.000 ± 0.000M/s)
      
      BPF ringbuf supports two sets of APIs with various usability and performance
      tradeoffs: bpf_ringbuf_reserve()+bpf_ringbuf_commit() vs bpf_ringbuf_output().
      This benchmark clearly shows the superiority of the reserve+commit
      approach, despite using a small 8-byte record size.
      
      Single-producer, consumer/producer competing on the same CPU, low batch count
      =============================================================================
      rb-libbpf            3.045 ± 0.020M/s (drops 3.536 ± 0.148M/s)
      rb-custom            3.055 ± 0.022M/s (drops 3.893 ± 0.066M/s)
      pb-libbpf            1.393 ± 0.024M/s (drops 0.000 ± 0.000M/s)
      pb-custom            1.407 ± 0.016M/s (drops 0.000 ± 0.000M/s)
      
      This benchmark shows one of the worst-case scenarios, in which producer and
      consumer do not coordinate *and* fight for the same CPU. No batch count or
      sampling settings were able to eliminate drops for the ring buffer; the
      producer is just too fast for the consumer to keep up. But both ringbuf and
      perfbuf are still able to pass through quite a lot of messages, which is
      more than enough for a lot of applications.
      
      Ringbuf, multi-producer contention
      ==================================
      rb-libbpf nr_prod 1  10.916 ± 0.399M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 2  4.931 ± 0.030M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 3  4.880 ± 0.006M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 4  3.926 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 8  4.011 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 12 3.967 ± 0.016M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 16 2.604 ± 0.030M/s (drops 0.001 ± 0.002M/s)
      rb-libbpf nr_prod 20 2.233 ± 0.003M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 24 2.085 ± 0.015M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 28 2.055 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 32 1.962 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 36 2.089 ± 0.005M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 40 2.118 ± 0.006M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 44 2.105 ± 0.004M/s (drops 0.000 ± 0.000M/s)
      rb-libbpf nr_prod 48 2.120 ± 0.058M/s (drops 0.000 ± 0.001M/s)
      rb-libbpf nr_prod 52 2.074 ± 0.024M/s (drops 0.007 ± 0.014M/s)
      
      Ringbuf uses a very short-duration spinlock during the reservation phase to
      check a few invariants, increment the producer count, and set the record
      header. This is the biggest point of contention for the ringbuf
      implementation. This benchmark evaluates the effect of multiple competing
      writers on the overall throughput of a single shared ring buffer.
      
      Overall throughput drops almost 2x when going from a single producer to two
      highly contended producers, gradually declining with additional competing
      producers. The performance drop stabilizes at around 20 producers and
      hovers around 2 million records/s even with 50+ fighting producers, which
      is a 5x drop compared to the non-contended case. A good implementation in
      the kernel helps maintain decent performance here.
      
      Note that in the intended real-world scenarios it's not expected to get
      even close to such high levels of contention. But if contention ever
      becomes a problem, there is always the option of sharding a few ring
      buffers across a set of CPUs.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-5-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Add BPF ringbuf selftests · cb1c9ddd
      Committed by Andrii Nakryiko
      Both the singleton BPF ringbuf and the BPF ringbuf with map-in-map use
      cases are tested. The reserve+submit/discard and output variants of the API
      are also validated.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-4-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Implement BPF ring buffer and verifier support for it · 457f4436
      Committed by Andrii Nakryiko
      This commit adds a new MPSC ring buffer implementation to the BPF
      ecosystem, which allows multiple CPUs to submit data to a single shared
      ring buffer. On the consumption side, only a single consumer is assumed.
      
      Motivation
      ----------
      There are two distinct motivations for this work that are not satisfied by
      the existing perf buffer, and which prompted the creation of a new ring
      buffer implementation:
        - more efficient memory utilization by sharing the ring buffer across
          CPUs;
        - preserving the ordering of events that happen sequentially in time,
          even across multiple CPUs (e.g., fork/exec/exit events for a task).
      
      These two problems are independent, but the perf buffer satisfies neither.
      Both are a result of the choice to have a per-CPU perf ring buffer, and
      both can be solved by an MPSC ring buffer implementation. The ordering
      problem could technically be solved for the perf buffer with some in-kernel
      counting, but given that the first problem requires an MPSC buffer anyway,
      the same solution solves the second problem automatically.
      
      Semantics and APIs
      ------------------
      A single ring buffer is presented to BPF programs as an instance of a BPF
      map of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered,
      but ultimately rejected.
      
      One way would be, similar to BPF_MAP_TYPE_PERF_EVENT_ARRAY, to make
      BPF_MAP_TYPE_RINGBUF represent an array of ring buffers without enforcing
      the "same CPU only" rule. This would be a more familiar interface,
      compatible with existing perf buffer use in BPF, but it would fail if an
      application needed more advanced logic to look up a ring buffer by an
      arbitrary key. With the current approach, HASH_OF_MAPS addresses this.
      Additionally, given the performance of BPF ringbuf, many use cases would
      just opt for a simple single ring buffer shared among all CPUs, for which
      such an array-based approach would be overkill.
      
      Another approach could introduce a new concept, alongside the BPF map, to
      represent a generic "container" object that doesn't necessarily have
      a key/value interface with lookup/update/delete operations. This approach
      would require a lot of extra infrastructure to be built for observability
      and verifier support. It would also add another concept that BPF developers
      would have to familiarize themselves with, new syntax in libbpf, etc. And
      yet it would provide no real additional benefit over the approach of using
      a map. BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete
      operations, but neither do a few other map types (e.g., queue and stack;
      array doesn't support delete, etc.).
      
      The chosen approach has the advantage of re-using existing BPF map
      infrastructure (in-kernel introspection APIs, libbpf support, etc.), being
      a familiar concept (no need to teach users a new type of object in a BPF
      program), and utilizing existing tooling (bpftool). For the common scenario
      of using a single ring buffer for all CPUs, it's as simple and
      straightforward as it would be with a dedicated "container" object. On the
      other hand, by being a map, it can be combined with ARRAY_OF_MAPS and
      HASH_OF_MAPS map-in-maps to implement a wide variety of topologies, from
      one ring buffer per CPU (e.g., as a replacement for perf buffer use cases)
      to complicated application-level hashing/sharding of ring buffers (e.g.,
      a small pool of ring buffers with the hash of a task's tgid as the lookup
      key, to preserve order but reduce contention).
      
      Key and value sizes are enforced to be zero. max_entries is used to specify
      the size of the ring buffer and has to be a power of 2.
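      
      In a BPF program, such a map is expected to be declared roughly like the
      following BTF-defined map (the name and size here are just an example):
      
        struct {
                __uint(type, BPF_MAP_TYPE_RINGBUF);
                __uint(max_entries, 256 * 1024); /* buffer size in bytes, power of 2 */
        } events SEC(".maps");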
      
      There are a bunch of similarities between the perf buffer
      (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the new BPF ring buffer semantics:
        - variable-length records;
        - if there is no more space left in the ring buffer, reservation fails;
          there is no blocking;
        - memory-mappable data area for user-space applications for ease of
          consumption and high performance;
        - epoll notifications for new incoming data;
        - but still the ability to do busy polling for new data to achieve the
          lowest latency, if necessary.
      
      BPF ringbuf provides two sets of APIs to BPF programs:
        - bpf_ringbuf_output() *copies* data from one place into the ring buffer,
          similarly to bpf_perf_event_output();
        - the bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard()
          APIs split the whole process into two steps. First, a fixed amount of
          space is reserved. If successful, a pointer to data inside the ring
          buffer data area is returned, which BPF programs can use similarly to
          data inside array/hash maps. Once ready, this piece of memory is either
          committed or discarded. Discard is similar to commit, but makes the
          consumer ignore the record.
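      
      A sketch of both styles from the BPF side, reusing the example 'events' map
      declared above (the event layout and section name are made up; note that
      the commit step is exposed to programs as the bpf_ringbuf_submit() helper):
      
        struct event {
                __u64 ts;
                __u32 pid;
        };
        
        SEC("tracepoint/syscalls/sys_enter_getpgid")
        int produce(void *ctx)
        {
                struct event *e, copy = {};
        
                /* reserve/submit: zero-copy, but the size must be known to the verifier */
                e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
                if (!e)
                        return 0;       /* buffer full: reservation fails, no blocking */
                e->ts = bpf_ktime_get_ns();
                e->pid = bpf_get_current_pid_tgid() >> 32;
                if (!e->pid)
                        bpf_ringbuf_discard(e, 0);      /* consumer will skip this record */
                else
                        bpf_ringbuf_submit(e, 0);
        
                /* output: extra copy, but closely mirrors bpf_perf_event_output() */
                copy.ts = bpf_ktime_get_ns();
                copy.pid = bpf_get_current_pid_tgid() >> 32;
                bpf_ringbuf_output(&events, &copy, sizeof(copy), 0);
                return 0;
        }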
      
      bpf_ringbuf_output() has the disadvantage of incurring an extra memory
      copy, because the record has to be prepared somewhere else first. But it
      allows submitting records whose length is not known to the verifier
      beforehand. It also closely matches bpf_perf_event_output(), so it will
      simplify migration significantly.
      
      bpf_ringbuf_reserve() avoids the extra copy by returning a pointer directly
      into ring buffer memory. In a lot of cases records are larger than the BPF
      stack space allows, so many programs have to use an extra per-CPU array as
      a temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids this
      need completely. But in exchange, it only allows a known, constant size of
      memory to be reserved, so that the verifier can prove the BPF program can't
      access memory outside its reserved record space. bpf_ringbuf_output(),
      while slightly slower due to the extra memory copy, covers some use cases
      that are not suitable for bpf_ringbuf_reserve().
      
      The difference between commit and discard is very small. Discard just marks
      a record as discarded, and such records are supposed to be ignored by
      consumer code. Discard is useful for some advanced use cases, such as
      ensuring all-or-nothing multi-record submission, or emulating temporary
      malloc()/free() within a single BPF program invocation.
      
      Each reserved record is tracked by the verifier through the existing
      reference-tracking logic, similar to socket ref-tracking. It is thus
      impossible to reserve a record but forget to submit (or discard) it.
      
      The bpf_ringbuf_query() helper allows querying various properties of the
      ring buffer. Currently 4 are supported:
        - BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring
          buffer;
        - BPF_RB_RING_SIZE returns the size of the ring buffer;
        - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of
          the consumer/producer, respectively.
      Returned values are momentary snapshots of the ring buffer state and could
      be stale by the time the helper returns, so this should be used only for
      debugging/reporting or for implementing heuristics that take into account
      the highly changeable nature of some of those characteristics.
      
      One such heuristic might involve more fine-grained control over poll/epoll
      notifications about new data availability in the ring buffer. Together with
      the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
      helpers, this gives a BPF program a high degree of control and, e.g., more
      efficient batched notifications. The default self-balancing strategy,
      though, should be adequate for most applications and already works reliably
      and efficiently.
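      
      A hypothetical example of such a heuristic, again using the example
      'events' map from the sketches above (the threshold is arbitrary):
      
        SEC("tracepoint/syscalls/sys_enter_getpgid")
        int produce_batched(void *ctx)
        {
                __u64 sample = 0;
                __u64 flags = BPF_RB_NO_WAKEUP;
        
                /* Suppress wakeups until enough unconsumed data has accumulated. */
                if (bpf_ringbuf_query(&events, BPF_RB_AVAIL_DATA) > 64 * 1024)
                        flags = BPF_RB_FORCE_WAKEUP;
        
                bpf_ringbuf_output(&events, &sample, sizeof(sample), flags);
                return 0;
        }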
      
      Design and implementation
      -------------------------
      The reserve/commit schema provides a natural way for multiple producers,
      either on different CPUs or even on the same CPU/in the same BPF program,
      to reserve independent records and work with them without blocking other
      producers. This means that if a BPF program is interrupted by another BPF
      program sharing the same ring buffer, both will get a record reserved
      (provided there is enough space left) and can work with it and submit it
      independently. This applies to NMI context as well, except that, due to the
      spinlock used during reservation, bpf_ringbuf_reserve() in NMI context
      might fail to take the lock, in which case reservation fails even if the
      ring buffer is not full.
      
      Internally, the ring buffer itself is implemented as a power-of-2 sized
      circular buffer, with two logical, ever-increasing counters (which might
      wrap around on 32-bit architectures; that's not a problem):
        - the consumer counter shows up to which logical position the consumer
          has consumed the data;
        - the producer counter denotes the amount of data reserved by all
          producers.
      
      Each time a record is reserved, the producer that "owns" the record
      advances the producer counter. At that point, the data is still not yet
      ready to be consumed, though. Each record has an 8-byte header, which
      contains the length of the reserved record, as well as two extra bits:
      a busy bit to denote that the record is still being worked on, and
      a discard bit, which might be set at commit time if the record is
      discarded. In the latter case, the consumer is supposed to skip the record
      and move on to the next one. The record header also encodes the record's
      relative offset from the beginning of the ring buffer data area (in pages).
      This allows bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only the
      pointer to the record itself, without also requiring a pointer to the ring
      buffer: the ring buffer memory location is restored from the record
      metadata header. This significantly simplifies the verifier, as well as
      improving API usability.
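      
      The header described above corresponds to a small structure along these
      lines (a sketch; the authoritative definition and the exact flag bits live
      in kernel/bpf/ringbuf.c):
      
        struct bpf_ringbuf_hdr {
                __u32 len;      /* record length; high bits double as busy/discard flags */
                __u32 pg_off;   /* page offset from the start of the data area, letting
                                 * commit/discard find the owning ring buffer */
        };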
      
      Producer counter increments are serialized under the spinlock, so there is
      a strict ordering between reservations. Commits, on the other hand, are
      completely lockless and independent. All records become available to the
      consumer in the order of reservation, but only after all previous records
      have been committed. It is thus possible for slow producers to temporarily
      hold off records that were reserved later but already submitted.
      
      The reservation/commit/consumer protocol is verified by litmus tests in
      Documentation/litmus-test/bpf-rb.
      
      One interesting implementation bit that significantly simplifies (and thus
      also speeds up) both the producer and consumer implementations is that the
      data area is mapped twice, contiguously back-to-back, in virtual memory.
      This makes it unnecessary to take any special measures for samples that
      wrap around at the end of the circular buffer data area, because the page
      after the last data page is the first data page again, so the sample still
      appears completely contiguous in virtual memory. See the comment and
      a simple ASCII diagram showing this visually in bpf_ringbuf_area_alloc().
      
      Another feature that distinguishes BPF ringbuf from the perf ring buffer is
      its self-pacing notification of new data availability. The
      bpf_ringbuf_commit() implementation sends a notification that a new record
      is available after commit only if the consumer has already caught up right
      to the record being committed. If not, the consumer still has to catch up
      and thus will see the new data anyway, without needing an extra poll
      notification. Benchmarks (see
      tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show that this allows
      achieving very high throughput without having to resort to tricks like
      "notify only every Nth sample", which are necessary with the perf buffer.
      For extreme cases, when a BPF program wants more manual control of
      notifications, the commit/discard/output helpers accept BPF_RB_NO_WAKEUP
      and BPF_RB_FORCE_WAKEUP flags, which give full control over notifications
      of data availability, but require extra caution and diligence in using this
      API.
      
      Comparison to alternatives
      --------------------------
      Before considering implementing the BPF ring buffer from scratch, existing
      in-kernel alternatives were evaluated, but they didn't seem to meet the
      needs. They largely fell into a few categories:
        - per-CPU buffers (perf, ftrace, etc.), which don't satisfy the two
          motivations outlined above (ordering and memory consumption);
        - linked list-based implementations; while some were multi-producer
          designs, consuming these from user space would be very complicated and
          most probably not performant; memory-mapping a contiguous piece of
          memory is simpler and more performant for user-space consumers;
        - io_uring is SPSC, but also requires fixed-sized elements. Naively
          turning an SPSC queue into MPSC with a lock would have subpar
          performance compared to locked reserve + lockless commit, as done in
          the BPF ring buffer. Fixed-sized elements would be too limiting for BPF
          programs, given that existing BPF programs already rely heavily on the
          variable-sized perf buffer;
        - specialized implementations (like the new printk ring buffer, [0]) with
          lots of printk-specific limitations and implications that didn't seem
          to fit well with the intended use by BPF programs.
      
        [0] https://lwn.net/Articles/779550/
      
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Cleanup comments in test_maps · efbc3b8f
      Committed by Anton Protopopov
      Make comments inside the test_map_rdonly and test_map_wronly tests
      consistent with logic.
      Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200527185700.14658-4-a.s.protopopov@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Cleanup some file descriptors in test_maps · 36ef9a2d
      Committed by Anton Protopopov
      The test_map_rdonly and test_map_wronly tests should close file descriptors
      which they open.
      Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200527185700.14658-3-a.s.protopopov@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests/bpf: Fix a typo in test_maps · 204fb041
      Committed by Anton Protopopov
      Trivial fix to a typo in the test_map_wronly test: "read" -> "write"
      Signed-off-by: Anton Protopopov <a.s.protopopov@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200527185700.14658-2-a.s.protopopov@gmail.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf, selftests: Test probe_* helpers from SCHED_CLS · ee103e9f
      Committed by John Fastabend
      Let's test using probe* in SCHED_CLS network programs as well, just to be
      sure these keep working. It's cheap to add the extra test, and it provides
      a second context to test, outside of sk_msg, after we generalized the
      probe* helpers to all networking types.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yonghong Song <yhs@fb.com>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/159033911685.12355.15951980509828906214.stgit@john-Precision-5820-Tower
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf, selftests: Add sk_msg helpers load and attach test · 1d9c037a
      Committed by John Fastabend
      The test itself is not particularly useful, but it encodes a common
      pattern we have.
      
      Namely, do a sk storage lookup and then, depending on the data there,
      decide if we need to do more work or alternatively allow the packet to
      PASS. If we need to do more work, consult task_struct for more information
      about the running task. Finally, based on this additional information, drop
      or pass the data. In this case the suspicious check is not very realistic,
      but it encodes the general pattern and uses the helpers, so we test the
      workflow.
      
      This is a load test to ensure the verifier correctly handles this case.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/159033909665.12355.6166415847337547879.stgit@john-Precision-5820-Tower
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • selftests: mlxsw: Add test for control packets · 9959b389
      Committed by Ido Schimmel
      Generate packets matching the various control traps and check that the
      traps' stats increase accordingly.
      Signed-off-by: Ido Schimmel <idosch@mellanox.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  2. 31 May 2020, 2 commits
  3. 30 May 2020, 2 commits
    • bpf, selftests: Add a verifier test for assigning 32bit reg states to 64bit ones · cf66c29b
      Committed by John Fastabend
      Add a verifier test for assigning 32-bit register states to 64-bit ones,
      where the 32-bit register holds a constant value of 0.
      
      Without the previous kernel verifier.c fix, the test in
      this patch will fail.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/159077335867.6014.2075350327073125374.stgit@john-Precision-5820-Tower
    • bpf, selftests: Verifier bounds tests need to be updated · e3effcdf
      Committed by John Fastabend
      After the previous fix for zero extension, test_verifier tests #65 and #66
      now fail. Before the fix we can see the alu32 mov op at insn 10:
      
      10: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=invP(id=0,
                    smin_value=4294967168,smax_value=4294967423,
                    umin_value=4294967168,umax_value=4294967423,
                    var_off=(0x0; 0x1ffffffff),
                    s32_min_value=-2147483648,s32_max_value=2147483647,
                    u32_min_value=0,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      10: (bc) w1 = w1
      11: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=invP(id=0,
                    smin_value=0,smax_value=2147483647,
                    umin_value=0,umax_value=4294967295,
                    var_off=(0x0; 0xffffffff),
                    s32_min_value=-2147483648,s32_max_value=2147483647,
                    u32_min_value=0,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      
      After the fix, at insn 10, because we have 's32_min_value < 0', the
      following step 11 now has 'smax_value=U32_MAX', whereas before we pulled
      the s32_max_value bound into smax_value, as seen above at 11 with
      smax_value=2147483647.
      
      10: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=inv(id=0,
                   smin_value=4294967168,smax_value=4294967423,
                   umin_value=4294967168,umax_value=4294967423,
                   var_off=(0x0; 0x1ffffffff),
                   s32_min_value=-2147483648, s32_max_value=2147483647,
                   u32_min_value=0,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      10: (bc) w1 = w1
      11: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=inv(id=0,
                   smin_value=0,smax_value=4294967295,
                   umin_value=0,umax_value=4294967295,
                   var_off=(0x0; 0xffffffff),
                   s32_min_value=-2147483648, s32_max_value=2147483647,
                   u32_min_value=0, u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      
      The fallout of this shows up by the time we get to the failing instruction
      at step 14. Previously we had the following:
      
      14: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=inv(id=0,
                   smin_value=72057594021150720,smax_value=72057594029539328,
                   umin_value=72057594021150720,umax_value=72057594029539328,
                   var_off=(0xffffffff000000; 0xffffff),
                   s32_min_value=-16777216,s32_max_value=-1,
                   u32_min_value=-16777216,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      14: (0f) r0 += r1
      
      We now have:
      
      14: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=inv(id=0,
                   smin_value=0,smax_value=72057594037927935,
                   umin_value=0,umax_value=72057594037927935,
                   var_off=(0x0; 0xffffffffffffff),
                   s32_min_value=-2147483648,s32_max_value=2147483647,
                   u32_min_value=0,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      14: (0f) r0 += r1
      
      In the original step 14, 'smin_value=72057594021150720' trips the logic in
      the verifier function check_reg_sane_offset():
      
       if (smin >= BPF_MAX_VAR_OFF || smin <= -BPF_MAX_VAR_OFF) {
      	verbose(env, "value %lld makes %s pointer be out of bounds\n",
      		smin, reg_type_str[type]);
      	return false;
       }
      
      Specifically, it trips the 'smin <= -BPF_MAX_VAR_OFF' check. But with the
      fix, at step 14 we have the bound 'smin_value=0', so the above check is not
      tripped, because BPF_MAX_VAR_OFF = 1<<29.
      
      We end up with smin_value=0 here because, starting from smin_value=0 at
      step 10, the subtractions at steps 11 and 12 bring smin_value negative.
      
      11: (17) r1 -= 2147483584
      12: (17) r1 -= 2147483584
      13: (77) r1 >>= 8
      
      Then the shift clears the top bit and smin_value is set to 0. Note we still
      have the smax_value in the fixed code, so any reads will fail. An
      alternative would be to have reg_sane_check() do both the smin and smax
      value tests.
      
      To fix the test we can omit the 'r1 >>= 8' at line 13. This will change the
      error string, but keeps the intention of the test, as suggested by its
      title, "check after truncation of boundary-crossing range". If the verifier
      logic changes, a different value is likely to show up in the error, or the
      error will no longer be thrown, forcing this test to be examined. With this
      change we see the new state at step 13:
      
      13: R0_w=map_value(id=0,off=0,ks=8,vs=8,imm=0)
          R1_w=invP(id=0,
                    smin_value=-4294967168,smax_value=127,
                    umin_value=0,umax_value=18446744073709551615,
                    s32_min_value=-2147483648,s32_max_value=2147483647,
                    u32_min_value=0,u32_max_value=-1)
          R10=fp0 fp-8_w=mmmmmmmm
      
      This gives the expected out-of-bounds error, "value -4294967168 makes
      map_value pointer be out of bounds". However, for the unpriv case we now
      see a different error because of the mixed signed-bounds pointer
      arithmetic. This seems OK, so I've only added the unpriv_errstr for this.
      Another option may have been to do addition on r1 instead of subtraction,
      but I favor the approach above slightly.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/159077333942.6014.14004320043595756079.stgit@john-Precision-5820-Tower
  4. 29 May 2020, 1 commit
  5. 28 May 2020, 2 commits
    • net: add large ecmp group nexthop tests · 5a1b72ce
      Committed by Stephen Worley
      Add a couple of large ecmp group nexthop selftests to cover
      the remnant fixed by d69100b8.
      
      The tests create 100 x32 ecmp groups of ipv4 and ipv6 and then
      dump them. On kernels without the fix, they will fail due
      to the data remnant during the dump.
      Signed-off-by: Stephen Worley <sworley@cumulusnetworks.com>
      Reviewed-by: David Ahern <dsahern@gmail.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • net/sched: fix infinite loop in sch_fq_pie · bb2f930d
      Committed by Davide Caratti
      this command hangs forever:
      
       # tc qdisc add dev eth0 root fq_pie flows 65536
      
       watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [tc:1028]
       [...]
       CPU: 1 PID: 1028 Comm: tc Not tainted 5.7.0-rc6+ #167
       RIP: 0010:fq_pie_init+0x60e/0x8b7 [sch_fq_pie]
       Code: 4c 89 65 50 48 89 f8 48 c1 e8 03 42 80 3c 30 00 0f 85 2a 02 00 00 48 8d 7d 10 4c 89 65 58 48 89 f8 48 c1 e8 03 42 80 3c 30 00 <0f> 85 a7 01 00 00 48 8d 7d 18 48 c7 45 10 46 c3 23 00 48 89 f8 48
       RSP: 0018:ffff888138d67468 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
       RAX: 1ffff9200018d2b2 RBX: ffff888139c1c400 RCX: ffffffffffffffff
       RDX: 000000000000c5e8 RSI: ffffc900000e5000 RDI: ffffc90000c69590
       RBP: ffffc90000c69580 R08: fffffbfff79a9699 R09: fffffbfff79a9699
       R10: 0000000000000700 R11: fffffbfff79a9698 R12: ffffc90000c695d0
       R13: 0000000000000000 R14: dffffc0000000000 R15: 000000002347c5e8
       FS:  00007f01e1850e40(0000) GS:ffff88814c880000(0000) knlGS:0000000000000000
       CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
       CR2: 000000000067c340 CR3: 000000013864c000 CR4: 0000000000340ee0
       Call Trace:
        qdisc_create+0x3fd/0xeb0
        tc_modify_qdisc+0x3be/0x14a0
        rtnetlink_rcv_msg+0x5f3/0x920
        netlink_rcv_skb+0x121/0x350
        netlink_unicast+0x439/0x630
        netlink_sendmsg+0x714/0xbf0
        sock_sendmsg+0xe2/0x110
        ____sys_sendmsg+0x5b4/0x890
        ___sys_sendmsg+0xe9/0x160
        __sys_sendmsg+0xd3/0x170
        do_syscall_64+0x9a/0x370
        entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      we can't accept 65536 as a valid number for 'nflows', because the loop on
      'idx' in fq_pie_init() will never end. The extack message is correct, but
      it doesn't say that 0 is not a valid number for 'flows': while at it, fix
      this also. Add a tdc selftest to check correct validation of 'flows'.
      
      CC: Ivan Vecera <ivecera@redhat.com>
      Fixes: ec97ecf1 ("net: sched: add Flow Queue PIE packet scheduler")
      Signed-off-by: Davide Caratti <dcaratti@redhat.com>
      Reviewed-by: Ivan Vecera <ivecera@redhat.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  6. 27 May 2020, 1 commit
  7. 24 May 2020, 2 commits
  8. 23 May 2020, 6 commits
  9. 22 May 2020, 3 commits
  10. 21 May 2020, 2 commits
  11. 20 May 2020, 3 commits
  12. 18 May 2020, 1 commit
  13. 17 May 2020, 2 commits
  14. 16 May 2020, 1 commit