- 24 October 2019, 2 commits
-
Submitted by KP Singh

On compiling the samples with this change, one gets an error:

    error: ‘strncat’ specified bound 118 equals destination size [-Werror=stringop-truncation]
      strncat(dst, name + section_names[i].len,
      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              sizeof(raw_tp_btf_name) - (dst - raw_tp_btf_name));
              ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

strncat() requires the destination to have enough space for the terminating null byte.

Fixes: f75a697e ("libbpf: Auto-detect btf_id of BTF-based raw_tracepoint")
Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191023154038.24075-1-kpsingh@chromium.org
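As a hedged aside (not the patch itself): a minimal standalone illustration of the bound strncat() expects, using a made-up buffer and name; the real fix adjusts the equivalent computation in libbpf's raw_tp section handling.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
	char dst[16] = "tp_btf/";
	const char *name = "sched_switch";

	/* strncat() appends at most n bytes and then writes a NUL, so the
	 * bound must leave one byte of headroom in the destination. */
	strncat(dst, name, sizeof(dst) - strlen(dst) - 1);

	printf("%s\n", dst);	/* prints the (possibly truncated) result */
	return 0;
}
```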
-
Submitted by Björn Töpel

In commit 43e74c02 ("bpf_xdp_redirect_map: Perform map lookup in eBPF helper") the bpf_redirect_map() helper learned to do the map lookup itself, which means that the explicit lookup in the XDP program for AF_XDP is not needed on post-5.3 kernels. This commit adds the implicit map lookup with a default action, which improves performance in the "rx_drop" [1] scenario by ~4%.

For pre-5.3 kernels, bpf_redirect_map() returns XDP_ABORTED, and a fallback path for backward compatibility is entered, where the explicit lookup is still performed. This means a slight regression for older kernels (an additional bpf_redirect_map() call), but I consider that a fair punishment for users not upgrading their kernels. ;-)

v1->v2: Backward compatibility (Toke) [2]
v2->v3: Avoid masking/zero-extension by using JMP32 [3]

[1] # xdpsock -i eth0 -z -r
[2] https://lore.kernel.org/bpf/87pnirb3dc.fsf@toke.dk/
[3] https://lore.kernel.org/bpf/87v9sip0i8.fsf@toke.dk/

Suggested-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20191022072206.6318-1-bjorn.topel@gmail.com
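A hedged sketch of the described logic, not the exact samples/bpf code; the map name, sizes, and section name are illustrative.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct bpf_map_def SEC("maps") xsks_map = {
	.type = BPF_MAP_TYPE_XSKMAP,
	.key_size = sizeof(int),
	.value_size = sizeof(int),
	.max_entries = 64,
};

SEC("xdp_sock")
int xdp_sock_prog(struct xdp_md *ctx)
{
	int index = ctx->rx_queue_index;
	int ret;

	/* Post-5.3: the helper does the lookup itself and falls back to the
	 * default action (XDP_PASS) if no socket is bound to this queue. */
	ret = bpf_redirect_map(&xsks_map, index, XDP_PASS);
	if (ret > 0)
		return ret;

	/* Pre-5.3 fallback: the helper returned XDP_ABORTED because it
	 * rejects non-zero flags, so do the explicit lookup instead. */
	if (bpf_map_lookup_elem(&xsks_map, &index))
		return bpf_redirect_map(&xsks_map, index, 0);

	return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```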
-
- 23 October 2019, 1 commit
-
Submitted by Andrii Nakryiko

LIBBPF_OPTS is implemented as a mix of a field declaration and memset + assignment. This makes it neither a pure variable declaration nor a pure statement, which is a problem because it can be mixed with neither other variable declarations nor other function statements: C90 compiler mode emits a warning on mixing all of that together.

This patch changes LIBBPF_OPTS into strictly a variable declaration, which solves the problem, as can be seen in the case of bpftool, which previously would emit a compiler warning if written this way (LIBBPF_OPTS as part of a function's variable declaration block). This patch also renames LIBBPF_OPTS into DECLARE_LIBBPF_OPTS to follow the kernel convention for similar macros more closely.

v1->v2:
- rename LIBBPF_OPTS into DECLARE_LIBBPF_OPTS (Jakub Sitnicki)

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/20191022172100.3281465-1-andriin@fb.com
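A hedged sketch of what the renamed macro now permits: a DECLARE_LIBBPF_OPTS() line sitting inside an ordinary C90-style declaration block. The .object_name override and file path are illustrative.

```c
#include <bpf/libbpf.h>

int open_with_opts(const char *path)
{
	/* Now a pure variable declaration, so it mixes freely with the
	 * other declarations below even in C90 mode. */
	DECLARE_LIBBPF_OPTS(bpf_object_open_opts, opts,
		.object_name = "my_object",	/* illustrative override */
	);
	struct bpf_object *obj;

	obj = bpf_object__open_file(path, &opts);
	if (libbpf_get_error(obj))
		return -1;

	bpf_object__close(obj);
	return 0;
}
```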
-
- 21 October 2019, 5 commits
-
Submitted by Andrii Nakryiko

Teach bpf_object__open how to guess program type and expected attach type from section names, similar to what bpf_prog_load() does. This seems like a really useful feature, and not doing it during bpf_object__open() was an oversight. To preserve the backwards-compatible behavior of bpf_prog_load(), its attr->prog_type is treated as an override of bpf_object__open()'s decisions, if attr->prog_type is not UNSPECIFIED.

There is a slight difference in behavior for bpf_prog_load(). Previously, if bpf_prog_load() was loading a BPF object with more than one program, the first program's guessed program type and expected attach type would determine the corresponding attributes of all the subsequent programs, even if their section names suggested otherwise. That was rather dubious behavior, and with this change it behaves more sanely: each program's type is determined individually, unless they are forced to uniformity through attr->prog_type.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-5-andriin@fb.com
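A hedged illustration of the per-program behavior: one object with two differently-typed programs whose types and attach types are now guessed individually from their SEC() names at open time. The section and function names are made up.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* Guessed as BPF_PROG_TYPE_XDP from the "xdp" section prefix. */
SEC("xdp")
int xdp_prog(struct xdp_md *ctx)
{
	return XDP_PASS;
}

/* Guessed as BPF_PROG_TYPE_TRACEPOINT from the "tracepoint/" prefix,
 * independently of the first program's type. */
SEC("tracepoint/syscalls/sys_enter_execve")
int tp_prog(void *ctx)
{
	return 0;
}
```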
-
Submitted by Andrii Nakryiko

Map uprobe/uretprobe into the KPROBE program type. tp/raw_tp are just aliases for the more verbose tracepoint/raw_tracepoint, respectively.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-4-andriin@fb.com
-
Submitted by Andrii Nakryiko

There are bpf_program__set_type() and bpf_program__set_expected_attach_type(), but no corresponding getters, which seems rather incomplete. Fix this.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191021033902.3856966-3-andriin@fb.com
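A hedged usage sketch, assuming the new getters mirror the setter names (bpf_program__get_type() and bpf_program__get_expected_attach_type()):

```c
#include <bpf/libbpf.h>

/* Force a program to XDP only if the type guessed at open time differs. */
void ensure_xdp(struct bpf_program *prog)
{
	if (bpf_program__get_type(prog) != BPF_PROG_TYPE_XDP)
		bpf_program__set_type(prog, BPF_PROG_TYPE_XDP);
}
```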
-
Submitted by Kefeng Wang

For the kernel logging macros, pr_warning() is completely removed and replaced by pr_warn(). By using pr_warn() in tools/lib/bpf/ for symmetry with the kernel logging macros, we could eventually drop the use of pr_warning() in the whole kernel tree.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191021055532.185245-1-wangkefeng.wang@huawei.com
-
Submitted by Jakub Sitnicki

Don't generate a broken bpf_helper_defs.h header if the helper script needs updating because it doesn't recognize a newly added type. Instead, print an error that explains why the build is failing, clean up the partially generated header, and stop.

v1->v2:
- Switched from a temporary file to .DELETE_ON_ERROR.

Fixes: 456a513b ("scripts/bpf: Emit an #error directive known types list needs updating")
Suggested-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191020112344.19395-1-jakub@cloudflare.com
-
- 19 October 2019, 1 commit
-
Submitted by John Fastabend

With commit "libbpf: stop enforcing kern_version,..." we removed the kernel version section parsing in favor of querying the kernel with uname() and populating the version from the result of the query. After this, any version sections were simply ignored.

Unfortunately, the world of kernels is not so friendly. I've found some customized kernels where uname() does not match the in-kernel version. To fix this so programs can load in such an environment, this patch adds back parsing of the section and, if it exists, uses the user-specified kernel version to override the uname() result. However, it keeps most of the kernel uname() discovery bits, so users are not required to insert the version except in these odd cases.

Fixes: 5e61f270 ("libbpf: stop enforcing kern_version, populate it for users")
Signed-off-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/157140968634.9073.6407090804163937103.stgit@john-XPS-13-9370
-
- 17 October 2019, 1 commit
-
Submitted by Alexei Starovoitov

It's the responsibility of the bpf program author to annotate the program with SEC("tp_btf/name") where "name" is a valid raw tracepoint. libbpf will try to find "name" in vmlinux BTF and error out in case vmlinux BTF is not available or "name" is not found. If "name" is indeed a valid raw tracepoint, then the in-kernel BTF will have a "btf_trace_##name" typedef that points to the function prototype of that raw tracepoint. The BTF description captures the exact arguments the kernel C code is passing into the raw tracepoint. The kernel verifier will check the types while loading the bpf program.

libbpf keeps the BTF type id in expected_attach_type, but since the kernel ignores this attribute for tracing programs, copy it into the attach_btf_id attribute before loading. Later the kernel will use prog->attach_btf_id to select the raw tracepoint during the bpf_raw_tracepoint_open syscall command.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20191016032505.2089704-6-ast@kernel.org
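A hedged sketch of the annotation the commit describes; the tracepoint name and the argument handling are illustrative (a BTF-typed raw tracepoint receives its arguments as an array of 64-bit values).

```c
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* libbpf resolves "sched_switch" against the btf_trace_sched_switch
 * typedef in vmlinux BTF and records the type id before the load. */
SEC("tp_btf/sched_switch")
int handle_sched_switch(__u64 *ctx)
{
	/* ctx[0], ctx[1], ... are the raw tracepoint arguments; with BTF
	 * the verifier knows their real types. */
	return 0;
}
```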
-
- 16 October 2019, 4 commits
-
Submitted by Andrii Nakryiko

Add an enum definition for Clang's __builtin_preserve_field_info() second argument (info_kind). Currently only byte offset and existence are supported. The corresponding Clang changes introducing this built-in can be found at [0].

[0] https://reviews.llvm.org/D67980

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-5-andriin@fb.com
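A hedged sketch of how the built-in is used from a CO-RE-enabled BPF program (requires a Clang new enough to provide __builtin_preserve_field_info and BTF debug info). The enum values are assumed from the description above, and the struct stub is illustrative.

```c
#include <linux/types.h>

/* Assumed info_kind values for __builtin_preserve_field_info(). */
enum bpf_field_info_kind {
	BPF_FIELD_BYTE_OFFSET = 0,
	BPF_FIELD_EXISTS = 2,
};

/* Trimmed-down local mirror of a kernel struct; preserve_access_index
 * makes Clang emit CO-RE relocations for accesses through it. */
struct task_struct___stub {
	int recently_added_field;
} __attribute__((preserve_access_index));

/* Returns 1 if the field exists in the target kernel's BTF, 0 otherwise;
 * the actual value is filled in by libbpf at load time via the relocation. */
static inline __u32 has_new_field(struct task_struct___stub *t)
{
	return __builtin_preserve_field_info(t->recently_added_field,
					     BPF_FIELD_EXISTS);
}
```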
-
Submitted by Andrii Nakryiko

Add support for the BPF_FRK_EXISTS relocation kind to detect the existence of a captured field in the destination BTF, allowing conditional logic to handle incompatible differences between kernels. Also introduce an opt-in relaxed CO-RE relocation handling option, which makes libbpf emit a warning for failed relocations but proceed with the other relocations. An instruction whose relocation failed is patched with a (u32)-1 value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-4-andriin@fb.com
-
Submitted by Andrii Nakryiko

Refactor all the various bpf_object__open variations to ultimately specify a common bpf_object_open_opts struct. This makes it easy to keep extending this common struct with extra parameters without having to update all the legacy APIs.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-3-andriin@fb.com
-
Submitted by Andrii Nakryiko

The BTF offset reloc was generalized in recent Clang into a field relocation, capturing an extra u32 field that specifies which aspect of the captured field needs to be relocated. This changes .BTF.ext's record size for this relocation from 12 bytes to 16 bytes. Given these format changes happened in Clang before an officially released version, it's ok to not support the outdated 12-byte record size without breaking ABI.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20191015182849.3922287-2-andriin@fb.com
-
- 13 October 2019, 2 commits
-
Submitted by Ivan Khoronzhuk

In the case of C/LDFLAGS there is no way to pass them correctly to the build command, for instance when --sysroot is used or external libraries are used, like -lelf, which can be absent in the toolchain. This can be used for samples/bpf cross-compiling, allowing the elf lib to be taken from the sysroot.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-13-ivan.khoronzhuk@linaro.org
-
Submitted by Ivan Khoronzhuk

There is no need to use C++ for the test_libbpf target when libbpf is written in C and can be tested with C. After this change, CXXFLAGS can be avoided in makefiles, at least in the bpf samples when a sysroot is used, passing the same C/LDFLAGS as for the lib. Add "return 0" in test_libbpf to avoid a warning, but also remove spaces at the start of the lines to keep the same style and avoid warnings while applying.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191011002808.28206-12-ivan.khoronzhuk@linaro.org
-
- 12 October 2019, 2 commits
-
Submitted by Andrii Nakryiko

Old GCC versions produce an invalid typedef for __gnuc_va_list pointing to void. Special-case this and emit a valid:

    typedef __builtin_va_list __gnuc_va_list;

Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191011032901.452042-1-andriin@fb.com
-
Submitted by Andrii Nakryiko

The existing BPF_CORE_READ() macro generates slightly suboptimal code. If there are intermediate pointers to be read, the initial source pointer is assigned into a temporary variable, and that temporary is then uniformly used as the "source" pointer for all intermediate pointer reads. Schematically (ignoring all the type casts), BPF_CORE_READ(s, a, b, c) is expanded into:

    ({
        const void *__t = src;
        bpf_probe_read(&__t, sizeof(*__t), &__t->a);
        bpf_probe_read(&__t, sizeof(*__t), &__t->b);
        typeof(s->a->b->c) __r;
        bpf_probe_read(&__r, sizeof(*__r), &__t->c);
    })

This initial `__t = src` makes the calls more uniform, but sometimes causes slightly less optimal register usage when compiled with Clang. This can cascade into, e.g., more register spills. This patch fixes the issue by generating the more optimal sequence:

    ({
        const void *__t;
        bpf_probe_read(&__t, sizeof(*__t), &src->a); /* <-- src here */
        bpf_probe_read(&__t, sizeof(*__t), &__t->b);
        typeof(s->a->b->c) __r;
        bpf_probe_read(&__r, sizeof(*__r), &__t->c);
    })

Fixes: 7db3822a ("libbpf: Add BPF_CORE_READ/BPF_CORE_READ_INTO helpers")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20191011023847.275936-1-andriin@fb.com
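For context, a hedged example of the user-facing macro whose expansion is being optimized. The program, struct mirror, and tracepoint are illustrative, and the header providing BPF_CORE_READ (bpf_helpers.h here; bpf_core_read.h in later libbpf versions) is an assumption.

```c
#include <linux/bpf.h>
#include <linux/types.h>
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

/* Minimal local mirror of the kernel struct, relocated via CO-RE. */
struct task_struct {
	struct task_struct *real_parent;
	int tgid;
} __attribute__((preserve_access_index));

SEC("raw_tracepoint/sched_process_exit")
int handle_exit(void *ctx)
{
	struct task_struct *task = (void *)bpf_get_current_task();
	char fmt[] = "parent tgid: %d\n";
	int ppid;

	/* One bpf_probe_read() per pointer hop; after this patch the first
	 * read uses `task` directly instead of a temporary copy. */
	ppid = BPF_CORE_READ(task, real_parent, tgid);

	bpf_trace_printk(fmt, sizeof(fmt), ppid);
	return 0;
}
```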
-
- 10 October 2019, 2 commits
-
Submitted by Ilya Maximets

'struct xdp_umem_reg' has 4 bytes of padding at the end, which makes valgrind complain about passing uninitialized stack memory to the syscall:

    Syscall param socketcall.setsockopt() points to uninitialised byte(s)
      at 0x4E7AB7E: setsockopt (in /usr/lib64/libc-2.29.so)
      by 0x4BDE035: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:172)
    Uninitialised value was created by a stack allocation
      at 0x4BDDEBA: xsk_umem__create@@LIBBPF_0.0.4 (xsk.c:140)

The padding bytes appeared after the introduction of the new 'flags' field. A memset() is required to clear them.

Fixes: 10d30e30 ("libbpf: add flags to umem config")
Signed-off-by: Ilya Maximets <i.maximets@ovn.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191009164929.17242-1-i.maximets@ovn.org
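A hedged sketch of the kind of fix involved (not the actual xsk.c hunk): zero the whole struct, padding included, before handing it to the kernel. The SOL_XDP fallback define, function name, and frame size are assumptions.

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/if_xdp.h>

#ifndef SOL_XDP
#define SOL_XDP 283	/* assumed fallback for older libc headers */
#endif

int register_umem(int xsk_fd, void *umem_area, __u64 size)
{
	struct xdp_umem_reg mr;

	/* Zeroing the full struct also clears the trailing padding bytes
	 * that valgrind flagged as uninitialized. */
	memset(&mr, 0, sizeof(mr));
	mr.addr = (__u64)(uintptr_t)umem_area;
	mr.len = size;
	mr.chunk_size = 4096;	/* illustrative frame size */
	mr.headroom = 0;

	return setsockopt(xsk_fd, SOL_XDP, XDP_UMEM_REG, &mr, sizeof(mr));
}
```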
-
Submitted by Andrii Nakryiko

Fix a case where explicit padding at the end of a struct is necessary due to non-standard alignment requirements of fields (which BTF doesn't capture explicitly).

Fixes: 351131b5 ("libbpf: add btf_dump API for BTF-to-C conversion")
Reported-by: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20191008231009.2991130-2-andriin@fb.com
-
- 09 October 2019, 2 commits
-
Submitted by Andrii Nakryiko

Add a few macros simplifying BCC-like multi-level probe reads, while also emitting CO-RE relocations for each read.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-7-andriin@fb.com
-
Submitted by Andrii Nakryiko

Move bpf_helpers.h, bpf_tracing.h, and bpf_endian.h into libbpf. Move bpf_helper_defs.h generation into libbpf's Makefile. Ensure all those headers are installed along with the other libbpf headers. Also, adjust the selftests and samples include paths to use libbpf now.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20191008175942.1769476-6-andriin@fb.com
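A hedged sketch of what a BPF program's includes look like once these headers are installed by libbpf; the bpf/ install prefix and the program body are assumptions.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>	/* SEC(), helper declarations via bpf_helper_defs.h */
#include <bpf/bpf_endian.h>	/* bpf_htons()/bpf_ntohs() and friends */

char _license[] SEC("license") = "GPL";

SEC("xdp")
int xdp_pass_prog(struct xdp_md *ctx)
{
	__u16 http_port = bpf_htons(80);	/* from bpf_endian.h */

	return http_port ? XDP_PASS : XDP_DROP;
}
```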
-
- 06 October 2019, 4 commits
-
Submitted by Toke Høiland-Jørgensen

Using cscope and/or TAGS files for navigating the source code is useful. Add simple targets to the Makefile to generate the index files for both tools.

Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Tested-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/20191004153444.1711278-1-toke@redhat.com
-
Submitted by Andrii Nakryiko

bpf_object__name() was returning the file path, not the name. Fix this.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Andrii Nakryiko

Add a new set of bpf_object__open APIs using a new approach to optional-parameter extensibility that allows a simpler ABI compatibility story. This patch demonstrates an approach to implementing libbpf APIs that makes it easy to extend existing APIs with extra optional parameters in such a way that ABI compatibility is preserved, without having to do symbol versioning and generate lots of boilerplate code to handle it. To facilitate succinct code for working with options, add OPTS_VALID, OPTS_HAS, and OPTS_GET macros that hide all the NULL, size, and zero checks.

Additionally, newly added libbpf APIs are encouraged to follow a similar pattern: all mandatory parameters are formal function parameters, and there is always an optional (NULL-able) xxx_opts struct, which should always have the real struct size as its first field; the rest are optional parameters added over time, which tune the behavior of the existing API if specified by the user.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Andrii Nakryiko

Kernel version enforcement for kprobes/kretprobes was removed from the 5.0 kernel in 6c4fc209 ("bpf: remove useless version check for prog load"). Since then, BPF programs have been specifying SEC("version") just to please libbpf. We should stop enforcing this in libbpf if even the kernel doesn't care. Furthermore, libbpf will now pre-populate the current kernel version of the host system, in case we are still running on an old kernel.

This patch also removes __bpf_object__open_xattr from libbpf.h, as nothing in libbpf relies on having it in that header. That function was never exported as LIBBPF_API, and even its name suggests it is an internal version. So this should be safe to remove, as it doesn't break ABI.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
- 02 October 2019, 1 commit
-
Submitted by Andrii Nakryiko

A new release cycle has started, let's bump to v0.0.6 proactively.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20190930222503.519782-1-andriin@fb.com
-
- 26 September 2019, 3 commits
-
Submitted by Andrii Nakryiko

The BTF-to-C converter previously skipped anonymous enums on the assumption that those are embedded in structs' field definitions. This is not always the case, and a lot of kernel constants are defined as part of anonymous enums. This change fixes the logic by eagerly marking all types as either referenced by some other type or not. That is enough to distinguish the two classes of anonymous enums and emit previously omitted enum definitions.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20190925203745.3173184-1-andriin@fb.com
-
Submitted by Michel Lespinasse

As was already noted in rbtree.h, the logic to cache rb_first (or rb_last) can easily be implemented externally to the core rbtree api. This commit takes the changes applied to the include/linux/ and lib/ rbtree files in 9f973cb3 ("lib/rbtree: avoid generating code twice for the cached versions") and applies them to the tools/include/linux/ and tools/lib/ files as well, to keep them synchronized.

Link: http://lkml.kernel.org/r/20190703034812.53002-1-walken@google.com
Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Davidlohr Bueso <dbueso@suse.de>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Andrii Nakryiko

Some compilers emit a warning about potentially uninitialized next_id usage. The code is correct, but the control flow is too complicated for some compilers to figure that out. Re-initialize next_id to satisfy the compiler.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 25 September 2019, 7 commits
-
Submitted by Tzvetomir Stoyanov

Create man pages for the libtraceevent APIs tep_load_plugins() and tep_unload_plugin().

Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/linux-trace-devel/20190903133434.30417-1-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190919212542.216189588@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tzvetomir Stoyanov (VMware)

All traceevent plugin code is moved to the tools/lib/traceevent/plugins subdirectory. This makes the traceevent implementation in trace-cmd and in the kernel tree consistent. There are no changes in the way libtraceevent and the plugins are compiled and installed.

Committer notes:

Applied a fixup provided by Steven, fixing the tools/perf/Makefile.perf target for the plugin dynamic list file. The problem was noticed when cross building to aarch64 from a Ubuntu 19.04 container.

Suggested-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/lkml/20190923115929.453b68f1@oasis.local.home
Link: http://lore.kernel.org/lkml/20190919212542.377333393@goodmis.org
Link: http://lore.kernel.org/linux-trace-devel/20190917105055.18983-1-tz.stoyanov@gmail.com
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tzvetomir Stoyanov (VMware)

The tep_get_event() function is an official libtraceevent API, described in the library man pages. However, it cannot be used by library users because it is not declared in the event-parse.h file, where all libtraceevent APIs are. The function declaration is added to the event-parse.h file.

Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/linux-trace-devel/20190808113721.13539-1-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190919212542.058025937@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tzvetomir Stoyanov (VMware)

The APIs for printing various trace event information were redesigned to be simpler. However, the main libtraceevent man page was not updated with those changes. The documentation is updated to describe the new event print API.

Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/linux-trace-devel/20190808113636.13299-3-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190919212541.869643036@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tzvetomir Stoyanov (VMware)

The tep_ref_get() API was renamed to tep_get_ref() to be more consistent with the other tep_ref_* APIs. However, in the man pages the API still appears under the old name. The documentation is fixed to reflect the actual name of the API.

Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Tzvetomir Stoyanov (VMware) <tz.stoyanov@gmail.com>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/linux-trace-devel/20190808113636.13299-2-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190919212541.697034573@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Tzvetomir Stoyanov

Added a new man page describing the tep_print_event() libtraceevent API.

Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-trace-devel@vger.kernel.org
Link: http://lore.kernel.org/linux-trace-devel/20190801075012.22098-1-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190919212541.553160178@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
Submitted by Steven Rostedt (VMware)

When comparing the output of the old trace-cmd with the one that uses the updated tep_print_event() logic, they differed: the timestamp precision in the old format would round up to the nearest precision, whereas the new logic truncates. Bring back the old method of rounding up.

Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Cc: linux trace devel <linux-trace-devel@vger.kernel.org>
Link: http://lore.kernel.org/lkml/20190919165119.5efa5de6@gandalf.local.home
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 20 September 2019, 1 commit
-
Submitted by Sakari Ailus

There are no in-kernel %p[fF] users left. Convert the traceevent tool, too, to align with the kernel.

Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: devicetree@vger.kernel.org
Cc: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Joe Perches <joe@perches.com>
Cc: linux-acpi@vger.kernel.org
Cc: linux-trace-devel@vger.kernel.org
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Rafael J. Wysocki <rafael@kernel.org>
Cc: Rob Herring <robh@kernel.org>
Cc: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Link: http://lore.kernel.org/lkml/20190918133419.7969-2-sakari.ailus@linux.intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-
- 19 September 2019, 1 commit
-
Submitted by Toke Høiland-Jørgensen

The xsk_socket__create() function fails and returns an error if it cannot get XDP_OPTIONS through getsockopt(). However, support for XDP_OPTIONS was not added until kernel 5.3, which means that creating XSK sockets always fails on older kernels. Since the option is only used to set the zero-copy flag in the xsk struct, and that flag is not really used for anything yet, just remove the getsockopt() call until a proper use for it is introduced.

Suggested-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
- 01 September 2019, 1 commit
-
Submitted by Tzvetomir Stoyanov

To be compliant with the XDG user directory layout, the user's plugin directory is changed from ~/.traceevent/plugins to ~/.local/lib/traceevent/plugins/.

Suggested-by: Patrick McLean <chutzpah@gentoo.org>
Signed-off-by: Tzvetomir Stoyanov <tstoyanov@vmware.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Patrick McLean <chutzpah@gentoo.org>
Cc: linux-trace-devel@vger.kernel.org
Link: https://lore.kernel.org/linux-trace-devel/20190313144206.41e75cf8@patrickm/
Link: http://lore.kernel.org/linux-trace-devel/20190801074959.22023-4-tz.stoyanov@gmail.com
Link: http://lore.kernel.org/lkml/20190805204355.344622683@goodmis.org
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
-