1. 07 Oct 2021, 2 commits
  2. 22 Sep 2021, 1 commit
    • libbpf: Refactor and simplify legacy kprobe code · 46ed5fc3
      Andrii Nakryiko authored
      Refactor the legacy kprobe handling code to follow the same logic as the
      legacy uprobe logic added in the next patches:
        - add append_to_file() helper that makes it simpler to work with
          tracefs file-based interface for creating and deleting probes;
        - move out probe/event name generation outside of the code that
          adds/removes it, which simplifies bookkeeping significantly;
        - change the probe name format to start with "libbpf_" prefix and
          include offset within kernel function;
        - switch 'unsigned long' to 'size_t' for specifying kprobe offsets,
          which is consistent with how uprobes define that, simplifies
          printf()-ing internally, and also avoids unnecessary complications on
          architectures where sizeof(long) != sizeof(void *).
      
      This patch also implicitly fixes invalid open() error handling that was
      present in poke_kprobe_events(), a function this patch removes.
      
      Fixes: ca304b40 ("libbpf: Introduce legacy kprobe events support")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210921210036.1545557-4-andrii@kernel.org
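      For context, the tracefs interface that the new append_to_file() helper wraps
      is the kprobe_events control file. Below is a minimal, illustrative sketch of
      creating and deleting a legacy kprobe at that level; the helper body, probe
      name and tracefs path here are assumptions for illustration, not the exact
      libbpf internals:

      	#include <errno.h>
      	#include <fcntl.h>
      	#include <string.h>
      	#include <unistd.h>

      	/* Append one line to a tracefs control file (e.g. kprobe_events). */
      	static int append_to_file(const char *path, const char *line)
      	{
      		int fd, err = 0;

      		fd = open(path, O_WRONLY | O_APPEND, 0);
      		if (fd < 0)
      			return -errno;
      		if (write(fd, line, strlen(line)) < 0)
      			err = -errno;
      		close(fd);
      		return err;
      	}

      	int main(void)
      	{
      		const char *events = "/sys/kernel/debug/tracing/kprobe_events";

      		/* create probe "libbpf_example" at do_sys_open+0x10 ... */
      		if (append_to_file(events, "p:kprobes/libbpf_example do_sys_open+0x10"))
      			return 1;
      		/* ... attach via perf_event_open() on the created tracepoint ... */
      		/* ... and delete the probe again when done */
      		return append_to_file(events, "-:kprobes/libbpf_example") ? 1 : 0;
      	}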
  3. 21 Sep 2021, 1 commit
  4. 18 Sep 2021, 3 commits
  5. 08 Sep 2021, 1 commit
  6. 17 Aug 2021, 2 commits
  7. 24 Jul 2021, 1 commit
  8. 23 Jul 2021, 1 commit
  9. 17 Jul 2021, 1 commit
  10. 26 May 2021, 1 commit
  11. 19 May 2021, 2 commits
    • libbpf: Introduce bpf_map__initial_value(). · 7723256b
      Alexei Starovoitov authored
      Introduce bpf_map__initial_value() to read initial contents
      of mmaped data/rodata/bss maps.
      Note that bpf_map__set_initial_value() doesn't allow modifying the
      kconfig map, while bpf_map__initial_value() allows reading its values.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210514003623.28033-17-alexei.starovoitov@gmail.com
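      A small read-only usage sketch, assuming an already opened bpf_object (the
      dump_rodata() wrapper and the ".rodata" name matching are illustrative):

      	#include <errno.h>
      	#include <stdio.h>
      	#include <string.h>
      	#include <bpf/libbpf.h>

      	/* Print the size of the initial .rodata contents of an object. */
      	static int dump_rodata(struct bpf_object *obj)
      	{
      		struct bpf_map *map;
      		size_t sz = 0;
      		void *data;

      		bpf_object__for_each_map(map, obj) {
      			if (!strstr(bpf_map__name(map), ".rodata"))
      				continue;
      			/* Read-only view; works even for maps that
      			 * bpf_map__set_initial_value() refuses to modify (kconfig).
      			 */
      			data = bpf_map__initial_value(map, &sz);
      			if (!data)
      				return -EINVAL;
      			printf("%s: %zu bytes of initial data\n",
      			       bpf_map__name(map), sz);
      			return 0;
      		}
      		return -ENOENT;
      	}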
    • libbpf: Generate loader program out of BPF ELF file. · 67234743
      Alexei Starovoitov authored
      The BPF program loading process performed by libbpf is quite complex
      and consists of the following steps:
      "open" phase:
      - parse elf file and remember relocations, sections
      - collect externs and ksyms including their btf_ids in prog's BTF
      - patch BTF datasec (since llvm couldn't do it)
      - init maps (old style map_def, BTF based, global data map, kconfig map)
      - collect relocations against progs and maps
      "load" phase:
      - probe kernel features
      - load vmlinux BTF
      - resolve externs (kconfig and ksym)
      - load program BTF
      - init struct_ops
      - create maps
      - apply CO-RE relocations
      - patch ld_imm64 insns with src_reg=PSEUDO_MAP, PSEUDO_MAP_VALUE, PSEUDO_BTF_ID
      - reposition subprograms and adjust call insns
      - sanitize and load progs
      
      During this process libbpf does sys_bpf() calls to load BTF, create maps,
      populate maps and finally load programs.
      Instead of actually doing the syscalls, generate a trace of what libbpf
      would have done and represent it as the "loader program".
      The "loader program" consists of a single map with:
      - union bpf_attr(s)
      - BTF bytes
      - map value bytes
      - insns bytes
      and a single bpf program that passes bpf_attr(s) and data into the bpf_sys_bpf() helper.
      Executing such a "loader program" via the bpf_prog_test_run() command will
      replay the sequence of syscalls that libbpf would have done, which results
      in the same maps being created and programs loaded as specified in the ELF
      file. The "loader program" removes libelf and the majority of the libbpf
      dependency from the program loading process.
      
      kconfig, typeless ksym, struct_ops and CO-RE are not supported yet.
      
      The order of relocate_data and relocate_calls had to change, so that
      bpf_gen__prog_load() can see all relocations for a given program with
      correct insn_idx-es.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210514003623.28033-15-alexei.starovoitov@gmail.com
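      The public entry point for generating such a loader program is, to the best
      of my understanding, bpf_object__gen_loader(); treat the exact option fields
      in this rough sketch as assumptions rather than a reference:

      	#include <errno.h>
      	#include <bpf/libbpf.h>

      	/* Record the would-be syscalls instead of executing them. */
      	static int gen_loader_blob(const char *elf_path)
      	{
      		DECLARE_LIBBPF_OPTS(gen_loader_opts, gen);
      		struct bpf_object *obj;
      		int err;

      		obj = bpf_object__open(elf_path);
      		if (libbpf_get_error(obj))
      			return -EINVAL;

      		err = bpf_object__gen_loader(obj, &gen);	/* switch to "record" mode */
      		if (!err)
      			err = bpf_object__load(obj);	/* records, doesn't load */
      		if (!err) {
      			/* gen.data/gen.data_sz: the single loader map contents
      			 * (bpf_attrs, BTF bytes, map values, insns);
      			 * gen.insns/gen.insns_sz: the loader program instructions,
      			 * to be embedded in a light skeleton and replayed via
      			 * bpf_prog_test_run().
      			 */
      		}
      		bpf_object__close(obj);
      		return err;
      	}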
  12. 17 May 2021, 1 commit
    • libbpf: Add low level TC-BPF management API · 715c5ce4
      Kumar Kartikeya Dwivedi authored
      This adds functions that wrap the netlink API used for adding, manipulating,
      and removing traffic control filters.
      
      The API summary:
      
      A bpf_tc_hook represents a location where a TC-BPF filter can be attached.
      This means that creating a hook leads to creation of the backing qdisc,
      while destruction either removes all filters attached to a hook, or destroys
      qdisc if requested explicitly (as discussed below).
      
      The TC-BPF API functions operate on this bpf_tc_hook to attach, replace,
      query, and detach tc filters. All functions return 0 on success, and a
      negative error code on failure.
      
      bpf_tc_hook_create - Create a hook
      Parameters:
      	@hook - Cannot be NULL, ifindex > 0, attach_point must be set to
      		proper enum constant. Note that parent must be unset when
      		attach_point is one of BPF_TC_INGRESS or BPF_TC_EGRESS. Note
      		that as an exception BPF_TC_INGRESS|BPF_TC_EGRESS is also a
      		valid value for attach_point.
      
      		Returns -EOPNOTSUPP when hook has attach_point as BPF_TC_CUSTOM.
      
      bpf_tc_hook_destroy - Destroy a hook
      Parameters:
      	@hook - Cannot be NULL. The behaviour depends on value of
      		attach_point. If BPF_TC_INGRESS, all filters attached to
      		the ingress hook will be detached. If BPF_TC_EGRESS, all
      		filters attached to the egress hook will be detached. If
      		BPF_TC_INGRESS|BPF_TC_EGRESS, the clsact qdisc will be
      		deleted, also detaching all filters. As before, parent must
      		be unset for these attach_points, and set for BPF_TC_CUSTOM.
      
      		It is advised that if the qdisc is operated on by many programs,
      		the program should at least check that there are no other existing
      		filters before deleting the clsact qdisc. An example is shown
      		below:
      
      		DECLARE_LIBBPF_OPTS(bpf_tc_hook, .ifindex = if_nametoindex("lo"),
      				    .attach_point = BPF_TC_INGRESS);
      		/* set opts as NULL, as we're not really interested in
      		 * getting any info for a particular filter, but just
      		 * detecting its presence.
      		 */
      		r = bpf_tc_query(&hook, NULL);
      		if (r == -ENOENT) {
      			/* no filters */
      			hook.attach_point = BPF_TC_INGRESS|BPF_TC_EGRESS;
      			return bpf_tc_hook_destroy(&hook);
      		} else {
      			/* failed or r == 0, the latter means filters do exist */
      			return r;
      		}
      
      		Note that there is a small race between checking for no
      		filters and deleting the qdisc. This is currently unavoidable.
      
      		Returns -EOPNOTSUPP when hook has attach_point as BPF_TC_CUSTOM.
      
      bpf_tc_attach - Attach a filter to a hook
      Parameters:
      	@hook - Cannot be NULL. Represents the hook the filter will be
      		attached to. Requirements for ifindex and attach_point are
      		same as described in bpf_tc_hook_create, but BPF_TC_CUSTOM
      		is also supported.  In that case, parent must be set to the
      		handle where the filter will be attached (using BPF_TC_PARENT).
      		E.g. to set parent to 1:16 like in tc command line, the
      		equivalent would be BPF_TC_PARENT(1, 16).
      
      	@opts - Cannot be NULL. The following opts are optional:
      		* handle   - The handle of the filter
      		* priority - The priority of the filter
      			     Must be >= 0 and <= UINT16_MAX
      		Note that when left unset, they will be auto-allocated by
      		the kernel. The following opts must be set:
      		* prog_fd - The fd of the loaded SCHED_CLS prog
      		The following opts must be unset:
      		* prog_id - The ID of the BPF prog
      		The following opts are optional:
      		* flags - Currently only BPF_TC_F_REPLACE is allowed. It
      			  allows replacing an existing filter instead of
      			  failing with -EEXIST.
      		The following opts will be filled by bpf_tc_attach on a
      		successful attach operation if they are unset:
      		* handle   - The handle of the attached filter
      		* priority - The priority of the attached filter
      		* prog_id  - The ID of the attached SCHED_CLS prog
      		This way, the user can know what the auto allocated values
      		for optional opts like handle and priority are for the newly
      		attached filter, if they were unset.
      
      		Note that some other attributes are set to fixed default
      		values listed below (this holds for all bpf_tc_* APIs):
      		protocol as ETH_P_ALL, direct action mode, chain index of 0,
      		and class ID of 0 (this can be set by writing to the
      		skb->tc_classid field from the BPF program).
      
      bpf_tc_detach
      Parameters:
      	@hook - Cannot be NULL. Represents the hook the filter will be
      		detached from. Requirements are same as described above
      		in bpf_tc_attach.
      
      	@opts - Cannot be NULL. The following opts must be set:
      		* handle, priority
      		The following opts must be unset:
      		* prog_fd, prog_id, flags
      
      bpf_tc_query
      Parameters:
      	@hook - Cannot be NULL. Represents the hook where the filter lookup will
      		be performed. Requirements are same as described above in
      		bpf_tc_attach().
      
      	@opts - Cannot be NULL. The following opts must be set:
      		* handle, priority
      		The following opts must be unset:
      		* prog_fd, prog_id, flags
      		The following fields will be filled by bpf_tc_query upon a
      		successful lookup:
      		* prog_id
      
      Some usage examples (using BPF skeleton infrastructure):
      
      BPF program (test_tc_bpf.c):
      
      	#include <linux/bpf.h>
      	#include <bpf/bpf_helpers.h>
      
      	SEC("classifier")
      	int cls(struct __sk_buff *skb)
      	{
      		return 0;
      	}
      
      Userspace loader:
      
      	struct test_tc_bpf *skel = NULL;
      	int fd, r;
      
      	skel = test_tc_bpf__open_and_load();
      	if (!skel)
      		return -ENOMEM;
      
      	fd = bpf_program__fd(skel->progs.cls);
      
      	DECLARE_LIBBPF_OPTS(bpf_tc_hook, hook, .ifindex =
      			    if_nametoindex("lo"), .attach_point =
      			    BPF_TC_INGRESS);
      	/* Create clsact qdisc */
      	r = bpf_tc_hook_create(&hook);
      	if (r < 0)
      		goto end;
      
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, opts, .prog_fd = fd);
      	r = bpf_tc_attach(&hook, &opts);
      	if (r < 0)
      		goto end;
      	/* Print the auto allocated handle and priority */
      	printf("Handle=%u", opts.handle);
      	printf("Priority=%u", opts.priority);
      
      	opts.prog_fd = opts.prog_id = 0;
      	bpf_tc_detach(&hook, &opts);
      end:
      	test_tc_bpf__destroy(skel);
      
      This is equivalent to doing the following using tc command line:
        # tc qdisc add dev lo clsact
        # tc filter add dev lo ingress bpf obj foo.o sec classifier da
        # tc filter del dev lo ingress handle <h> prio <p> bpf
      ... where the handle and priority can be found using:
        # tc filter show dev lo ingress
      
      Another example replacing a filter (extending prior example):
      
      	/* We can also choose both (or one), let's try replacing an
      	 * existing filter.
      	 */
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, replace_opts, .handle =
      			    opts.handle, .priority = opts.priority,
      			    .prog_fd = fd);
      	r = bpf_tc_attach(&hook, &replace_opts);
      	if (r == -EEXIST) {
      		/* Expected, now use BPF_TC_F_REPLACE to replace it */
      		replace_opts.flags = BPF_TC_F_REPLACE;
      		return bpf_tc_attach(&hook, &replace_opts);
      	} else if (r < 0) {
      		return r;
      	}
      	/* There must be no existing filter with these
      	 * attributes, so cleanup and return an error.
      	 */
      	replace_opts.prog_fd = replace_opts.prog_id = 0;
      	bpf_tc_detach(&hook, &replace_opts);
      	return -1;
      
      To obtain info of a particular filter:
      
      	/* Find info for filter with handle 1 and priority 50 */
      	DECLARE_LIBBPF_OPTS(bpf_tc_opts, info_opts, .handle = 1,
      			    .priority = 50);
      	r = bpf_tc_query(&hook, &info_opts);
      	if (r == -ENOENT)
      		printf("Filter not found");
      	else if (r < 0)
      		return r;
      	printf("Prog ID: %u", info_opts.prog_id);
      	return 0;
      Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
      Co-developed-by: Daniel Borkmann <daniel@iogearbox.net> # libbpf API design
      [ Daniel: also did major patch cleanup ]
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20210512103451.989420-3-memxor@gmail.com
  13. 12 May 2021, 1 commit
  14. 09 Apr 2021, 1 commit
  15. 26 Mar 2021, 2 commits
  16. 19 Mar 2021, 1 commit
    • libbpf: Add BPF static linker APIs · faf6ed32
      Andrii Nakryiko authored
      Introduce BPF static linker APIs to libbpf. The BPF static linker performs
      static linking of multiple BPF object files into a single combined
      resulting object file, preserving all the BPF programs, maps, global
      variables, etc.
      
      Data sections (.bss, .data, .rodata, .maps, maps, etc) with the same name are
      concatenated together. Similarly, code sections are also concatenated. All the
      symbols and ELF relocations are also concatenated in their respective ELF
      sections and are adjusted accordingly to the new object file layout.
      
      Static variables and functions are handled correctly as well, adjusting
      BPF instruction offsets to reflect the new variable/function offset within
      the combined ELF section. Such relocations reference STT_SECTION symbols,
      and those stay intact.
      
      Data sections in different files can have different alignment requirements, so
      that is taken care of as well, adjusting sizes and offsets as necessary to
      satisfy both old and new alignment requirements.
      
      DWARF data sections are currently stripped out, as is the LLVM_ADDRSIG
      section, which is ignored by libbpf in bpf_object__open() anyway. So, in
      a way, the BPF static linker is an analogue to `llvm-strip -g`, which is a
      pretty nice property, especially if the resulting .o file is then used to
      generate a BPF skeleton.
      
      Original string sections are ignored and instead we construct our own set of
      unique strings using libbpf-internal `struct strset` API.
      
      To reduce the size of the patch, all the .BTF and .BTF.ext processing was
      moved into a separate patch.
      
      The high-level API consists of just 4 functions:
        - bpf_linker__new() creates an instance of BPF static linker. It accepts
          output filename and (currently empty) options struct;
        - bpf_linker__add_file() takes input filename and appends it to the already
          processed ELF data; it can be called multiple times, one for each BPF
          ELF object file that needs to be linked in;
        - bpf_linker__finalize() needs to be called to dump final ELF contents into
          the output file, specified when bpf_linker was created; after
          bpf_linker__finalize() is called, no more bpf_linker__add_file() or
          bpf_linker__finalize() calls are allowed; they will return an error;
        - regardless of whether bpf_linker__finalize() was called or not,
          bpf_linker__free() will free up all the used resources.
      
      Currently, the BPF static linker doesn't resolve cross-object file references
      (extern variables and/or functions). This will be added in a follow-up patch
      set.
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20210318194036.3521577-7-andrii@kernel.org
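      A minimal sketch of the four-function flow described above (the file names are
      placeholders; note that newer libbpf versions add a per-file options argument
      to bpf_linker__add_file()):

      	#include <stdio.h>
      	#include <bpf/libbpf.h>

      	/* Statically link two BPF object files into one combined object. */
      	int main(void)
      	{
      		struct bpf_linker *linker;
      		int err;

      		linker = bpf_linker__new("combined.bpf.o", NULL);
      		if (!linker)
      			return 1;

      		err = bpf_linker__add_file(linker, "prog1.bpf.o");
      		if (!err)
      			err = bpf_linker__add_file(linker, "prog2.bpf.o");
      		if (!err)
      			err = bpf_linker__finalize(linker); /* writes combined.bpf.o */

      		bpf_linker__free(linker); /* safe whether or not finalize succeeded */
      		if (err)
      			fprintf(stderr, "linking failed: %d\n", err);
      		return err ? 1 : 0;
      	}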
  17. 17 Mar 2021, 1 commit
  18. 15 Dec 2020, 1 commit
  19. 30 Sep 2020, 1 commit
  20. 04 Sep 2020, 1 commit
  21. 22 Aug 2020, 1 commit
    • libbpf: Add perf_buffer APIs for better integration with outside epoll loop · dca5612f
      Andrii Nakryiko authored
      Add a set of APIs to the perf_buffer manager to allow applications to
      integrate perf buffer polling into existing epoll-based infrastructure.
      One example is applications that already use libevent and want to plug in
      perf_buffer polling instead of relying on perf_buffer__poll() and wasting
      an extra thread to do it. But perf_buffer is still extremely useful for
      setting up and consuming perf buffer rings even for such use cases.
      
      So to accommodate such new use cases, add three new APIs:
        - perf_buffer__buffer_cnt() returns number of per-CPU buffers maintained by
          given instance of perf_buffer manager;
        - perf_buffer__buffer_fd() returns FD of perf_event corresponding to
          a specified per-CPU buffer; this FD is then polled independently;
        - perf_buffer__consume_buffer() consumes data from single per-CPU buffer,
          identified by its slot index.
      
      To support a simpler, but less efficient, way to integrate perf_buffer into
      external polling logic, also expose the underlying epoll FD through the
      perf_buffer__epoll_fd() API. It will need to be followed by
      perf_buffer__poll(), wasting an extra syscall, or perf_buffer__consume(),
      wasting CPU to iterate over buffers with no data. But it could be simpler
      and more convenient for some cases.
      
      These APIs allow for great flexibility, but do not sacrifice the general
      usability of perf_buffer.
      
      Also exercise and check new APIs in perf_buffer selftest.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
      Link: https://lore.kernel.org/bpf/20200821165927.849538-1-andriin@fb.com
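      A sketch of wiring the per-CPU buffer FDs into an application-owned epoll
      instance (the perf_buffer is assumed to have been created elsewhere with
      perf_buffer__new()):

      	#include <errno.h>
      	#include <sys/epoll.h>
      	#include <bpf/libbpf.h>

      	/* Register every per-CPU perf buffer FD with the caller's epoll instance. */
      	static int register_perf_bufs(struct perf_buffer *pb, int epoll_fd)
      	{
      		size_t i, n = perf_buffer__buffer_cnt(pb);

      		for (i = 0; i < n; i++) {
      			struct epoll_event ev = {
      				.events = EPOLLIN,
      				.data.u64 = i, /* remember the buffer slot index */
      			};
      			int fd = perf_buffer__buffer_fd(pb, i);

      			if (fd < 0)
      				return fd;
      			if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &ev) < 0)
      				return -errno;
      		}
      		return 0;
      	}

      	/* Later, when epoll_wait() reports slot `idx` as readable:
      	 *	perf_buffer__consume_buffer(pb, idx);
      	 */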
  22. 07 Aug 2020, 1 commit
  23. 02 Aug 2020, 1 commit
  24. 26 Jul 2020, 2 commits
  25. 18 Jul 2020, 1 commit
  26. 29 Jun 2020, 1 commit
    • libbpf: Support disabling auto-loading BPF programs · d9297581
      Andrii Nakryiko authored
      Currently, bpf_object__load() (and by induction the skeleton's load) will
      always attempt to prepare, relocate, and load into the kernel every single
      BPF program found inside the BPF object file. This is often convenient, the
      right thing to do, and what users expect.
      
      But there are plenty of cases (especially with BPF development constantly
      picking up the pace) where a BPF application is intended to work with old
      kernels, with a potentially reduced set of features. But on kernels
      supporting extra features, it would like to take full advantage of them by
      employing extra BPF programs. This could be a choice of using fentry/fexit
      over kprobe/kretprobe, if the kernel is recent enough and is built with BTF.
      Or a BPF program might provide an optimized bpf_iter-based solution that
      user-space might want to use whenever available. And so on.
      
      With libbpf and BPF CO-RE in particular, it's advantageous to not have to
      maintain two separate BPF object files to achieve this. So to enable such use
      cases, this patch adds the ability to request that chosen BPF programs not be
      auto-loaded. In such a case, libbpf won't attempt to perform relocations
      (which might fail due to an old kernel), won't try to resolve BTF types for
      BTF-aware (tp_btf/fentry/fexit/etc) program types, because BTF might not be
      present, and so on. The skeleton will also automatically skip the auto-attach
      step for such non-loaded BPF programs.
      
      Overall, this feature simplifies development and deployment of real-world
      BPF applications with complicated compatibility requirements.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200625232629.3444003-2-andriin@fb.com
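      The per-program knob added here is bpf_program__set_autoload() (with a
      matching bpf_program__autoload() getter). A sketch using a hypothetical
      skeleton named "myprog" with two alternative programs:

      	#include <stdbool.h>
      	#include "myprog.skel.h"	/* hypothetical generated skeleton */

      	/* Load only the program variant the running kernel can support. */
      	static struct myprog_bpf *load_best_variant(bool have_fentry)
      	{
      		struct myprog_bpf *skel = myprog_bpf__open();

      		if (!skel)
      			return NULL;

      		/* The disabled program won't be relocated, BTF-resolved, loaded,
      		 * or auto-attached by the skeleton.
      		 */
      		bpf_program__set_autoload(skel->progs.handle_fentry, have_fentry);
      		bpf_program__set_autoload(skel->progs.handle_kprobe, !have_fentry);

      		if (myprog_bpf__load(skel)) {
      			myprog_bpf__destroy(skel);
      			return NULL;
      		}
      		return skel;	/* caller attaches and eventually destroys it */
      	}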
  27. 23 Jun 2020, 1 commit
    • libbpf: Add a bunch of attribute getters/setters for map definitions · 1bdb6c9a
      Andrii Nakryiko authored
      Add a bunch of getters for various aspects of a BPF map. Some of these
      attributes (e.g., key_size, value_size, type, etc.) are available right now
      in struct bpf_map_def, but this patch adds getters allowing to fetch them
      individually. The bpf_map_def approach isn't very scalable when ABI stability
      requirements are taken into account. It's much easier to extend libbpf and
      add support for new features when each aspect of a BPF map has a separate
      getter/setter.
      
      Getters follow the common naming convention of not explicitly having "get" in
      their name: bpf_map__type() returns the map type, bpf_map__key_size() returns
      the key_size. Setters, though, explicitly have "set" in their name:
      bpf_map__set_type(), bpf_map__set_key_size().
      
      This patch ensures we now have a getter and a setter for the following
      map attributes:
        - type;
        - max_entries;
        - map_flags;
        - numa_node;
        - key_size;
        - value_size;
        - ifindex.
      
      bpf_map__resize() enforces an unnecessary restriction of max_entries > 0. It
      is unnecessary, because libbpf actually supports zero max_entries for some
      cases (e.g., for a PERF_EVENT_ARRAY map) and treats it specially at map
      creation time. To allow setting max_entries=0, a new bpf_map__set_max_entries()
      setter is added. bpf_map__resize()'s behavior is preserved for backwards
      compatibility reasons.
      
      A map ifindex getter is added as well. There is a setter already, but no
      corresponding getter. Fix this asymmetry as well. bpf_map__set_ifindex()
      itself is converted from a void function into an error-returning one, similar
      to other setters. The only error returned right now is -EBUSY, if the BPF map
      is already loaded and has a corresponding FD.
      
      One remaining attribute with no ability to get/set or even specify it
      declaratively is numa_node. This patch fixes this gap and adds both a
      programmatic getter/setter and support for a numa_node field in
      BTF-defined maps.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
      Link: https://lore.kernel.org/bpf/20200621062112.3006313-1-andriin@fb.com
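      A sketch of tweaking map attributes between open and load (the object is
      assumed to contain a map named "events"; that name is a placeholder):

      	#include <errno.h>
      	#include <stdio.h>
      	#include <bpf/libbpf.h>

      	/* Inspect and adjust a map's attributes before bpf_object__load(). */
      	static int tune_map(struct bpf_object *obj)
      	{
      		struct bpf_map *map;
      		int err;

      		map = bpf_object__find_map_by_name(obj, "events");
      		if (!map)
      			return -ENOENT;

      		printf("type=%d key=%u value=%u\n",
      		       bpf_map__type(map), bpf_map__key_size(map),
      		       bpf_map__value_size(map));

      		/* Unlike bpf_map__resize(), this accepts 0 (e.g. PERF_EVENT_ARRAY). */
      		err = bpf_map__set_max_entries(map, 0);
      		if (!err)
      			err = bpf_map__set_numa_node(map, 0);
      		return err;	/* -EBUSY once the map is loaded and has an FD */
      	}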
  28. 02 Jun 2020, 3 commits
    • libbpf: Add support for bpf_link-based netns attachment · d60d81ac
      Jakub Sitnicki authored
      Add bpf_program__attach_netns(), which uses the LINK_CREATE subcommand to
      create an FD-based kernel bpf_link for attach types tied to a network
      namespace, which for the moment means BPF_FLOW_DISSECTOR.
      Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20200531082846.2117903-7-jakub@cloudflare.com
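      A sketch attaching a flow dissector program to the current network namespace
      (error handling uses libbpf_get_error() to cover both old and new libbpf
      error conventions):

      	#include <fcntl.h>
      	#include <unistd.h>
      	#include <bpf/libbpf.h>

      	/* Attach a BPF_PROG_TYPE_FLOW_DISSECTOR program to the current netns. */
      	static struct bpf_link *attach_flow_dissector(struct bpf_program *prog)
      	{
      		struct bpf_link *link;
      		int netns_fd;

      		netns_fd = open("/proc/self/ns/net", O_RDONLY);
      		if (netns_fd < 0)
      			return NULL;

      		link = bpf_program__attach_netns(prog, netns_fd);
      		close(netns_fd);	/* the kernel link holds its own reference */
      		return libbpf_get_error(link) ? NULL : link;
      	}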
    • libbpf: Add BPF ring buffer support · bf99c936
      Andrii Nakryiko authored
      Declaring and instantiating a BPF ring buffer doesn't require any changes to
      libbpf, as it's just another type of map. So using the existing BTF-defined
      maps syntax with __uint(type, BPF_MAP_TYPE_RINGBUF) and __uint(max_entries,
      <size-of-ring-buf>) is all that's necessary to create and use a BPF ring buffer.
      
      This patch adds BPF ring buffer consumer to libbpf. It is very similar to
      perf_buffer implementation in terms of API, but also attempts to fix some
      minor problems and inconveniences with existing perf_buffer API.
      
      ring_buffer supports both the single ring buffer use case (just using
      ring_buffer__new()), and allows adding more ring buffers, each with its
      own callback and context. This allows efficiently polling and consuming
      multiple, potentially completely independent, ring buffers using a single
      epoll instance.
      
      The latter is actually a problem in practice for applications
      that are using multiple sets of perf buffers. They have to create multiple
      instances for struct perf_buffer and poll them independently or in a loop,
      each approach having its own problems (e.g., inability to use a common poll
      timeout). struct ring_buffer eliminates this problem by aggregating many
      independent ring buffer instances under the single "ring buffer manager".
      
      Second, perf_buffer's callback can't return an error, so applications that
      need to stop polling due to an error in the data, or data signalling the end,
      have to use extra mechanisms to signal that polling has to stop. ring_buffer's
      callback can return an error, which will be passed back to user code and can
      be acted upon appropriately.
      
      Two APIs allow consuming ring buffer data:
        - ring_buffer__poll(), which will wait for a data availability notification
          and will consume data only from the reported ring buffer(s); this API
          allows using resources efficiently by reading data only when it becomes
          available;
        - ring_buffer__consume(), which will attempt to read new records regardless
          of the data availability notification sub-system. This API is useful for
          cases when the lowest latency is required, at the expense of burning CPU
          resources.
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-3-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
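      A consumer-side sketch pairing with a BPF_MAP_TYPE_RINGBUF map; the record
      layout (struct event) and the 100 ms poll timeout are placeholders:

      	#include <errno.h>
      	#include <stdio.h>
      	#include <bpf/libbpf.h>

      	struct event { int pid; };	/* placeholder record layout */

      	static int handle_event(void *ctx, void *data, size_t len)
      	{
      		const struct event *e = data;

      		printf("pid=%d\n", e->pid);
      		return 0;	/* a negative return stops polling with that error */
      	}

      	static int consume_ringbuf(int ringbuf_map_fd)
      	{
      		struct ring_buffer *rb;
      		int err;

      		rb = ring_buffer__new(ringbuf_map_fd, handle_event, NULL, NULL);
      		if (!rb)
      			return -errno;

      		/* ring_buffer__add(rb, other_map_fd, other_cb, other_ctx) would
      		 * hook additional, independent ring buffers into the same epoll set.
      		 */
      		while ((err = ring_buffer__poll(rb, 100 /* ms */)) >= 0)
      			;	/* err < 0: callback error or polling failure */

      		ring_buffer__free(rb);
      		return err;
      	}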
    • libbpf: Add API to consume the perf ring buffer content · 272d51af
      Eelco Chaudron authored
      This new API, perf_buffer__consume, can be used as follows:
      
      - When you have a perf ring where wakeup_events is higher than 1,
        and you have remaining data in the rings you would like to pull
        out on exit (or maybe based on a timeout).
      
      - For low latency cases where you burn a CPU that constantly polls
        the queues.
      Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Link: https://lore.kernel.org/bpf/159048487929.89441.7465713173442594608.stgit@ebuild
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
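      A short sketch of the shutdown case described above (the perf_buffer is
      assumed to already exist):

      	#include <bpf/libbpf.h>

      	/* Drain whatever is left in all per-CPU rings before tearing down. */
      	static void drain_and_free(struct perf_buffer *pb)
      	{
      		/* Processes pending records on every ring without waiting for a
      		 * wakeup notification; useful when wakeup_events > 1.
      		 */
      		perf_buffer__consume(pb);
      		perf_buffer__free(pb);
      	}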
  29. 10 May 2020, 1 commit
  30. 15 Apr 2020, 1 commit
  31. 31 Mar 2020, 1 commit