Commit 3d431677 authored by Daniel Borkmann

Merge branch 'bpf-loader-progs'

Alexei Starovoitov says:

====================
v5->v6:
- fixed an issue found by bpf CI. The light skeleton generation was
  doing a dry run of loading the program where all actual sys_bpf syscalls
  were replaced by calls into gen_loader. It turned out that the search for a
  valid vmlinux_btf was not stubbed out, which caused light skeleton generation
  to fail on older kernels.
- significantly reduced verbosity of gen_loader.c.
- an example trace_printk.lskel.h generated out of progs/trace_printk.c
  https://gist.github.com/4ast/774ea58f8286abac6aa8e3bf3bf3b903

v4->v5:
- addressed a bunch of minor comments from Andrii.
- the main difference is that lskel is now more robust in case of errors
  and a bit cleaner looking.

v3->v4:
- cleaned up closing of temporary FDs in case an intermediate sys_bpf call fails
  during execution of the loader program.
- added support for rodata in the skeleton.
- enforce bpf_prog_type_syscall to be sleepable, since it needs bpf_copy_from_user
  to populate the rodata map.
- converted test trace_printk to use lskel to test rodata access.
- various small bug fixes.

v2->v3: Addressed comments from Andrii and John.
- added support for setting max_entries after signature verification
  and used it in ringbuf test, since ringbuf's max_entries has to be updated
  after skeleton open() and before load(). See patch 20.
- bpf_btf_find_by_name_kind doesn't take btf_fd anymore.
  Because of that removed attach_prog_fd from bpf_prog_desc in lskel.
  Both features to be added later.
- cleaned up closing of fd==0 during loader gen by resetting fds back to -1.
- converted loader gen to use memset(&attr, cmd_specific_attr_size).
  would love to see this optimization in the rest of libbpf.
- fixed memory leak during loader_gen in case of enomem.
- support for fd_array kernel feature is added in patch 9 to have
  exhaustive testing across all selftests and then partially reverted
  in patch 15 to keep old style map_fd patching tested as well.
- since fentry_test/fexit_tests were extended with re-attach, support for a
  per-program attach method had to be added to lskel and used in the tests.
- cleaned up closing of FDs in lskel in case of partial failures.
- fixed numerous small nits.

v1->v2: Addressed comments from Al, Yonghong and Andrii.
- documented sys_close fdget/fdput requirement and non-recursion check.
- reduced internal api leaks between libbpf and bpftool.
  Now bpf_object__gen_loader() is the only new libbpf API with minimal fields.
- fixed light skeleton __destroy() method to munmap and close maps and progs.
- refactored bpf_btf_find_by_name_kind to return btf_id | (btf_obj_fd << 32).
- refactored use of bpf_btf_find_by_name_kind from loader prog.
- moved auto-gen like code into skel_internal.h that is used by *.lskel.h
  It has minimal static inline bpf_load_and_run() method used by lskel.
- added lskel.h example in patch 15.
- replaced union bpf_map_prog_desc with struct bpf_map_desc and struct bpf_prog_desc.
- removed mark_feat_supported and added a patch to pass 'obj' into kernel_supports.
- added proper tracking of temporary FDs in loader prog and their cleanup via bpf_sys_close.
- rename gen_trace.c into gen_loader.c to better align the naming throughout.
- expanded number of available helpers in new prog type.
- added support for raw_tp attaching in lskel.
  lskel supports tracing and raw_tp progs now.
  It correctly loads all networking prog types too, but __attach() method is tbd.
- converted progs/test_ksyms_module.c to lskel.
- minor feedback fixes all over.

The description of V1 set is still valid:

This is a first step towards signed bpf programs and the third approach of that kind.
The first approach was to bring libbpf into the kernel as a user-mode-driver.
The second approach was to invent a new file format and let the kernel execute
that format as a sequence of syscalls that create maps and load programs.
This third approach uses a new type of bpf program instead of inventing a file format.
The 1st and 2nd approaches had too many downsides compared to this 3rd one and
were discarded after months of work.

To make it work the following new concepts are introduced:
1. syscall bpf program type
A kind of bpf program that can do sys_bpf and sys_close syscalls.
It can only execute in user context.

2. FD array or FD index.
Traditionally BPF instructions are patched with FDs, which means that maps have
to be created first and the instructions modified afterwards; that breaks
signature verification if the program is signed.
Instead of patching each instruction with an FD, patch it with an index into an
array of FDs. That makes the program signature stable if it uses maps.

3. loader program that is generated as "strace of libbpf".
When libbpf is loading bpf_file.o it does a bunch of sys_bpf() syscalls to
load BTF, create maps, populate maps and finally load programs.
Instead of actually doing the syscalls, generate a trace of what libbpf
would have done and represent it as the "loader program".
The "loader program" consists of a single map and a single bpf program that
does those syscalls.
Executing such a "loader program" via the bpf_prog_test_run() command will
replay the sequence of syscalls that libbpf would have done, resulting in
the same maps created and programs loaded as specified in the ELF file.
The "loader program" removes libelf and the majority of the libbpf dependency
from the program loading process.

4. light skeleton
Instead of embedding the whole ELF file into the skeleton and using libbpf
to parse it later, generate a loader program and embed it into the "light skeleton".
Such a skeleton can load the same set of ELF files, but it doesn't need
libbpf and libelf to do that. It only needs a few sys_bpf wrappers.

Future steps:
- support CO-RE in the kernel. This patch set is already too big,
  so that critical feature is left for the next step.
- generate light skeleton in golang to allow such users to use BTF and
  all other features provided by libbpf
- generate light skeleton for the kernel, so that bpf programs can be embedded
  in a kernel module. The UMD usage in bpf_preload will be replaced with
  such a skeleton, so bpf_preload would become a standard kernel module
  without a user space dependency.
- finally do the signing of the loader program.

The patches are work in progress with a few rough edges.
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
@@ -22,6 +22,7 @@
 #include <linux/sched/mm.h>
 #include <linux/slab.h>
 #include <linux/percpu-refcount.h>
+#include <linux/bpfptr.h>
 
 struct bpf_verifier_env;
 struct bpf_verifier_log;
@@ -1428,7 +1429,7 @@ struct bpf_iter__bpf_map_elem {
 int bpf_iter_reg_target(const struct bpf_iter_reg *reg_info);
 void bpf_iter_unreg_target(const struct bpf_iter_reg *reg_info);
 bool bpf_iter_prog_supported(struct bpf_prog *prog);
-int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr, struct bpf_prog *prog);
 int bpf_iter_new_fd(struct bpf_link *link);
 bool bpf_link_is_iter(struct bpf_link *link);
 struct bpf_prog *bpf_iter_get_info(struct bpf_iter_meta *meta, bool in_stop);
@@ -1459,7 +1460,7 @@ int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file,
 int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
 int bpf_get_file_flag(int flags);
-int bpf_check_uarg_tail_zero(void __user *uaddr, size_t expected_size,
+int bpf_check_uarg_tail_zero(bpfptr_t uaddr, size_t expected_size,
                             size_t actual_size);
 
 /* memcpy that is used with 8-byte aligned pointers, power-of-8 size and
@@ -1479,8 +1480,7 @@ static inline void bpf_long_memcpy(void *dst, const void *src, u32 size)
 }
 
 /* verify correctness of eBPF program */
-int bpf_check(struct bpf_prog **fp, union bpf_attr *attr,
-             union bpf_attr __user *uattr);
+int bpf_check(struct bpf_prog **fp, union bpf_attr *attr, bpfptr_t uattr);
 #ifndef CONFIG_BPF_JIT_ALWAYS_ON
 void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth);
@@ -1826,6 +1826,9 @@ static inline bool bpf_map_is_dev_bound(struct bpf_map *map)
 struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr);
 void bpf_map_offload_map_free(struct bpf_map *map);
+int bpf_prog_test_run_syscall(struct bpf_prog *prog,
+                             const union bpf_attr *kattr,
+                             union bpf_attr __user *uattr);
 #else
 static inline int bpf_prog_offload_init(struct bpf_prog *prog,
                                        union bpf_attr *attr)
@@ -1851,6 +1854,13 @@ static inline struct bpf_map *bpf_map_offload_map_alloc(union bpf_attr *attr)
 static inline void bpf_map_offload_map_free(struct bpf_map *map)
 {
 }
+
+static inline int bpf_prog_test_run_syscall(struct bpf_prog *prog,
+                                           const union bpf_attr *kattr,
+                                           union bpf_attr __user *uattr)
+{
+       return -ENOTSUPP;
+}
 #endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */
 
 #if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL)
@@ -1964,6 +1974,7 @@ extern const struct bpf_func_proto bpf_get_socket_ptr_cookie_proto;
 extern const struct bpf_func_proto bpf_task_storage_get_proto;
 extern const struct bpf_func_proto bpf_task_storage_delete_proto;
 extern const struct bpf_func_proto bpf_for_each_map_elem_proto;
+extern const struct bpf_func_proto bpf_btf_find_by_name_kind_proto;
 
 const struct bpf_func_proto *bpf_tracing_func_proto(
        enum bpf_func_id func_id, const struct bpf_prog *prog);
...
@@ -77,6 +77,8 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LSM, lsm,
              void *, void *)
 #endif /* CONFIG_BPF_LSM */
 #endif
+BPF_PROG_TYPE(BPF_PROG_TYPE_SYSCALL, bpf_syscall,
+             void *, void *)
 
 BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
 BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops)
...
@@ -450,6 +450,7 @@ struct bpf_verifier_env {
        u32 peak_states;
        /* longest register parentage chain walked for liveness marking */
        u32 longest_mark_read_walk;
+       bpfptr_t fd_array;
 };
 
 __printf(2, 0) void bpf_verifier_vlog(struct bpf_verifier_log *log,
...
/* SPDX-License-Identifier: GPL-2.0-only */
/* A pointer that can point to either kernel or userspace memory. */
#ifndef _LINUX_BPFPTR_H
#define _LINUX_BPFPTR_H

#include <linux/sockptr.h>

typedef sockptr_t bpfptr_t;

static inline bool bpfptr_is_kernel(bpfptr_t bpfptr)
{
	return bpfptr.is_kernel;
}

static inline bpfptr_t KERNEL_BPFPTR(void *p)
{
	return (bpfptr_t) { .kernel = p, .is_kernel = true };
}

static inline bpfptr_t USER_BPFPTR(void __user *p)
{
	return (bpfptr_t) { .user = p };
}

static inline bpfptr_t make_bpfptr(u64 addr, bool is_kernel)
{
	if (is_kernel)
		return KERNEL_BPFPTR((void*) (uintptr_t) addr);
	else
		return USER_BPFPTR(u64_to_user_ptr(addr));
}

static inline bool bpfptr_is_null(bpfptr_t bpfptr)
{
	if (bpfptr_is_kernel(bpfptr))
		return !bpfptr.kernel;
	return !bpfptr.user;
}

static inline void bpfptr_add(bpfptr_t *bpfptr, size_t val)
{
	if (bpfptr_is_kernel(*bpfptr))
		bpfptr->kernel += val;
	else
		bpfptr->user += val;
}

static inline int copy_from_bpfptr_offset(void *dst, bpfptr_t src,
					  size_t offset, size_t size)
{
	return copy_from_sockptr_offset(dst, (sockptr_t) src, offset, size);
}

static inline int copy_from_bpfptr(void *dst, bpfptr_t src, size_t size)
{
	return copy_from_bpfptr_offset(dst, src, 0, size);
}

static inline int copy_to_bpfptr_offset(bpfptr_t dst, size_t offset,
					const void *src, size_t size)
{
	return copy_to_sockptr_offset((sockptr_t) dst, offset, src, size);
}

static inline void *memdup_bpfptr(bpfptr_t src, size_t len)
{
	return memdup_sockptr((sockptr_t) src, len);
}

static inline long strncpy_from_bpfptr(char *dst, bpfptr_t src, size_t count)
{
	return strncpy_from_sockptr(dst, (sockptr_t) src, count);
}

#endif /* _LINUX_BPFPTR_H */
@@ -21,7 +21,7 @@ extern const struct file_operations btf_fops;
 
 void btf_get(struct btf *btf);
 void btf_put(struct btf *btf);
-int btf_new_fd(const union bpf_attr *attr);
+int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr);
 struct btf *btf_get_by_fd(int fd);
 int btf_get_info_by_fd(const struct btf *btf,
                       const union bpf_attr *attr,
...
@@ -937,6 +937,7 @@ enum bpf_prog_type {
        BPF_PROG_TYPE_EXT,
        BPF_PROG_TYPE_LSM,
        BPF_PROG_TYPE_SK_LOOKUP,
+       BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */
 };
 
 enum bpf_attach_type {
@@ -1097,8 +1098,8 @@ enum bpf_link_type {
 /* When BPF ldimm64's insn[0].src_reg != 0 then this can have
  * the following extensions:
  *
- * insn[0].src_reg:  BPF_PSEUDO_MAP_FD
- * insn[0].imm:      map fd
+ * insn[0].src_reg:  BPF_PSEUDO_MAP_[FD|IDX]
+ * insn[0].imm:      map fd or fd_idx
  * insn[1].imm:      0
  * insn[0].off:      0
  * insn[1].off:      0
@@ -1106,15 +1107,19 @@ enum bpf_link_type {
  * verifier type:    CONST_PTR_TO_MAP
  */
 #define BPF_PSEUDO_MAP_FD      1
+#define BPF_PSEUDO_MAP_IDX     5
 
-/* insn[0].src_reg:  BPF_PSEUDO_MAP_VALUE
- * insn[0].imm:      map fd
+/* insn[0].src_reg:  BPF_PSEUDO_MAP_[IDX_]VALUE
+ * insn[0].imm:      map fd or fd_idx
  * insn[1].imm:      offset into value
  * insn[0].off:      0
  * insn[1].off:      0
  * ldimm64 rewrite:  address of map[0]+offset
  * verifier type:    PTR_TO_MAP_VALUE
  */
 #define BPF_PSEUDO_MAP_VALUE           2
+#define BPF_PSEUDO_MAP_IDX_VALUE       6
 
 /* insn[0].src_reg:  BPF_PSEUDO_BTF_ID
  * insn[0].imm:      kernel btd id of VAR
  * insn[1].imm:      0
@@ -1314,6 +1319,8 @@ union bpf_attr {
                        /* or valid module BTF object fd or 0 to attach to vmlinux */
                        __u32           attach_btf_obj_fd;
                };
+               __u32           :32;            /* pad */
+               __aligned_u64   fd_array;       /* array of FDs */
        };
 
        struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -4735,6 +4742,24 @@ union bpf_attr {
  *             be zero-terminated except when **str_size** is 0.
  *
  *             Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * long bpf_sys_bpf(u32 cmd, void *attr, u32 attr_size)
+ *     Description
+ *             Execute bpf syscall with given arguments.
+ *     Return
+ *             A syscall result.
+ *
+ * long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags)
+ *     Description
+ *             Find BTF type with given name and kind in vmlinux BTF or in module's BTFs.
+ *     Return
+ *             Returns btf_id and btf_obj_fd in lower and upper 32 bits.
+ *
+ * long bpf_sys_close(u32 fd)
+ *     Description
+ *             Execute close syscall for given FD.
+ *     Return
+ *             A syscall result.
  */
 #define __BPF_FUNC_MAPPER(FN) \
        FN(unspec),             \
@@ -4903,6 +4928,9 @@ union bpf_attr {
        FN(check_mtu),          \
        FN(for_each_map_elem),  \
        FN(snprintf),           \
+       FN(sys_bpf),            \
+       FN(btf_find_by_name_kind), \
+       FN(sys_close),          \
        /* */
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
...
@@ -473,15 +473,16 @@ bool bpf_link_is_iter(struct bpf_link *link)
        return link->ops == &bpf_iter_link_lops;
 }
 
-int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+int bpf_iter_link_attach(const union bpf_attr *attr, bpfptr_t uattr,
+                        struct bpf_prog *prog)
 {
-       union bpf_iter_link_info __user *ulinfo;
        struct bpf_link_primer link_primer;
        struct bpf_iter_target_info *tinfo;
        union bpf_iter_link_info linfo;
        struct bpf_iter_link *link;
        u32 prog_btf_id, linfo_len;
        bool existed = false;
+       bpfptr_t ulinfo;
        int err;
 
        if (attr->link_create.target_fd || attr->link_create.flags)
@@ -489,18 +490,18 @@ int bpf_iter_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
 
        memset(&linfo, 0, sizeof(union bpf_iter_link_info));
 
-       ulinfo = u64_to_user_ptr(attr->link_create.iter_info);
+       ulinfo = make_bpfptr(attr->link_create.iter_info, uattr.is_kernel);
        linfo_len = attr->link_create.iter_info_len;
-       if (!ulinfo ^ !linfo_len)
+       if (bpfptr_is_null(ulinfo) ^ !linfo_len)
                return -EINVAL;
 
-       if (ulinfo) {
+       if (!bpfptr_is_null(ulinfo)) {
                err = bpf_check_uarg_tail_zero(ulinfo, sizeof(linfo),
                                               linfo_len);
                if (err)
                        return err;
                linfo_len = min_t(u32, linfo_len, sizeof(linfo));
-               if (copy_from_user(&linfo, ulinfo, linfo_len))
+               if (copy_from_bpfptr(&linfo, ulinfo, linfo_len))
                        return -EFAULT;
        }
...
@@ -4257,7 +4257,7 @@ static int btf_parse_hdr(struct btf_verifier_env *env)
        return 0;
 }
 
-static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size,
+static struct btf *btf_parse(bpfptr_t btf_data, u32 btf_data_size,
                             u32 log_level, char __user *log_ubuf, u32 log_size)
 {
        struct btf_verifier_env *env = NULL;
@@ -4306,7 +4306,7 @@ static struct btf *btf_parse(void __user *btf_data, u32 btf_data_size,
        btf->data = data;
        btf->data_size = btf_data_size;
 
-       if (copy_from_user(data, btf_data, btf_data_size)) {
+       if (copy_from_bpfptr(data, btf_data, btf_data_size)) {
                err = -EFAULT;
                goto errout;
        }
@@ -5780,12 +5780,12 @@ static int __btf_new_fd(struct btf *btf)
        return anon_inode_getfd("btf", &btf_fops, btf, O_RDONLY | O_CLOEXEC);
 }
 
-int btf_new_fd(const union bpf_attr *attr)
+int btf_new_fd(const union bpf_attr *attr, bpfptr_t uattr)
 {
        struct btf *btf;
        int ret;
 
-       btf = btf_parse(u64_to_user_ptr(attr->btf),
+       btf = btf_parse(make_bpfptr(attr->btf, uattr.is_kernel),
                        attr->btf_size, attr->btf_log_level,
                        u64_to_user_ptr(attr->btf_log_buf),
                        attr->btf_log_size);
@@ -6085,3 +6085,65 @@ struct module *btf_try_get_module(const struct btf *btf)
 
        return res;
 }
+
+BPF_CALL_4(bpf_btf_find_by_name_kind, char *, name, int, name_sz, u32, kind, int, flags)
+{
+       struct btf *btf;
+       long ret;
+
+       if (flags)
+               return -EINVAL;
+
+       if (name_sz <= 1 || name[name_sz - 1])
+               return -EINVAL;
+
+       btf = bpf_get_btf_vmlinux();
+       if (IS_ERR(btf))
+               return PTR_ERR(btf);
+
+       ret = btf_find_by_name_kind(btf, name, kind);
+       /* ret is never zero, since btf_find_by_name_kind returns
+        * positive btf_id or negative error.
+        */
+       if (ret < 0) {
+               struct btf *mod_btf;
+               int id;
+
+               /* If name is not found in vmlinux's BTF then search in module's BTFs */
+               spin_lock_bh(&btf_idr_lock);
+               idr_for_each_entry(&btf_idr, mod_btf, id) {
+                       if (!btf_is_module(mod_btf))
+                               continue;
+                       /* linear search could be slow hence unlock/lock
+                        * the IDR to avoiding holding it for too long
+                        */
+                       btf_get(mod_btf);
+                       spin_unlock_bh(&btf_idr_lock);
+                       ret = btf_find_by_name_kind(mod_btf, name, kind);
+                       if (ret > 0) {
+                               int btf_obj_fd;
+
+                               btf_obj_fd = __btf_new_fd(mod_btf);
+                               if (btf_obj_fd < 0) {
+                                       btf_put(mod_btf);
+                                       return btf_obj_fd;
+                               }
+                               return ret | (((u64)btf_obj_fd) << 32);
+                       }
+                       spin_lock_bh(&btf_idr_lock);
+                       btf_put(mod_btf);
+               }
+               spin_unlock_bh(&btf_idr_lock);
+       }
+       return ret;
+}
+
+const struct bpf_func_proto bpf_btf_find_by_name_kind_proto = {
+       .func           = bpf_btf_find_by_name_kind,
+       .gpl_only       = false,
+       .ret_type       = RET_INTEGER,
+       .arg1_type      = ARG_PTR_TO_MEM,
+       .arg2_type      = ARG_CONST_SIZE,
+       .arg3_type      = ARG_ANYTHING,
+       .arg4_type      = ARG_ANYTHING,
+};
...@@ -72,11 +72,10 @@ static const struct bpf_map_ops * const bpf_map_types[] = { ...@@ -72,11 +72,10 @@ static const struct bpf_map_ops * const bpf_map_types[] = {
* copy_from_user() call. However, this is not a concern since this function is * copy_from_user() call. However, this is not a concern since this function is
* meant to be a future-proofing of bits. * meant to be a future-proofing of bits.
*/ */
int bpf_check_uarg_tail_zero(void __user *uaddr, int bpf_check_uarg_tail_zero(bpfptr_t uaddr,
size_t expected_size, size_t expected_size,
size_t actual_size) size_t actual_size)
{ {
unsigned char __user *addr = uaddr + expected_size;
int res; int res;
if (unlikely(actual_size > PAGE_SIZE)) /* silly large */ if (unlikely(actual_size > PAGE_SIZE)) /* silly large */
...@@ -85,7 +84,12 @@ int bpf_check_uarg_tail_zero(void __user *uaddr, ...@@ -85,7 +84,12 @@ int bpf_check_uarg_tail_zero(void __user *uaddr,
if (actual_size <= expected_size) if (actual_size <= expected_size)
return 0; return 0;
res = check_zeroed_user(addr, actual_size - expected_size); if (uaddr.is_kernel)
res = memchr_inv(uaddr.kernel + expected_size, 0,
actual_size - expected_size) == NULL;
else
res = check_zeroed_user(uaddr.user + expected_size,
actual_size - expected_size);
if (res < 0) if (res < 0)
return res; return res;
return res ? 0 : -E2BIG; return res ? 0 : -E2BIG;
...@@ -1004,6 +1008,17 @@ static void *__bpf_copy_key(void __user *ukey, u64 key_size) ...@@ -1004,6 +1008,17 @@ static void *__bpf_copy_key(void __user *ukey, u64 key_size)
return NULL; return NULL;
} }
static void *___bpf_copy_key(bpfptr_t ukey, u64 key_size)
{
if (key_size)
return memdup_bpfptr(ukey, key_size);
if (!bpfptr_is_null(ukey))
return ERR_PTR(-EINVAL);
return NULL;
}
/* last field in 'union bpf_attr' used by this command */ /* last field in 'union bpf_attr' used by this command */
#define BPF_MAP_LOOKUP_ELEM_LAST_FIELD flags #define BPF_MAP_LOOKUP_ELEM_LAST_FIELD flags
...@@ -1074,10 +1089,10 @@ static int map_lookup_elem(union bpf_attr *attr) ...@@ -1074,10 +1089,10 @@ static int map_lookup_elem(union bpf_attr *attr)
#define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags #define BPF_MAP_UPDATE_ELEM_LAST_FIELD flags
static int map_update_elem(union bpf_attr *attr) static int map_update_elem(union bpf_attr *attr, bpfptr_t uattr)
{ {
void __user *ukey = u64_to_user_ptr(attr->key); bpfptr_t ukey = make_bpfptr(attr->key, uattr.is_kernel);
void __user *uvalue = u64_to_user_ptr(attr->value); bpfptr_t uvalue = make_bpfptr(attr->value, uattr.is_kernel);
int ufd = attr->map_fd; int ufd = attr->map_fd;
struct bpf_map *map; struct bpf_map *map;
void *key, *value; void *key, *value;
...@@ -1103,7 +1118,7 @@ static int map_update_elem(union bpf_attr *attr) ...@@ -1103,7 +1118,7 @@ static int map_update_elem(union bpf_attr *attr)
goto err_put; goto err_put;
} }
key = __bpf_copy_key(ukey, map->key_size); key = ___bpf_copy_key(ukey, map->key_size);
if (IS_ERR(key)) { if (IS_ERR(key)) {
err = PTR_ERR(key); err = PTR_ERR(key);
goto err_put; goto err_put;
...@@ -1123,7 +1138,7 @@ static int map_update_elem(union bpf_attr *attr) ...@@ -1123,7 +1138,7 @@ static int map_update_elem(union bpf_attr *attr)
goto free_key; goto free_key;
err = -EFAULT; err = -EFAULT;
if (copy_from_user(value, uvalue, value_size) != 0) if (copy_from_bpfptr(value, uvalue, value_size) != 0)
goto free_value; goto free_value;
err = bpf_map_update_value(map, f, key, value, attr->flags); err = bpf_map_update_value(map, f, key, value, attr->flags);
...@@ -2014,6 +2029,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type, ...@@ -2014,6 +2029,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
if (expected_attach_type == BPF_SK_LOOKUP) if (expected_attach_type == BPF_SK_LOOKUP)
return 0; return 0;
return -EINVAL; return -EINVAL;
case BPF_PROG_TYPE_SYSCALL:
case BPF_PROG_TYPE_EXT: case BPF_PROG_TYPE_EXT:
if (expected_attach_type) if (expected_attach_type)
return -EINVAL; return -EINVAL;
...@@ -2073,9 +2089,9 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type) ...@@ -2073,9 +2089,9 @@ static bool is_perfmon_prog_type(enum bpf_prog_type prog_type)
} }
/* last field in 'union bpf_attr' used by this command */ /* last field in 'union bpf_attr' used by this command */
#define BPF_PROG_LOAD_LAST_FIELD attach_prog_fd #define BPF_PROG_LOAD_LAST_FIELD fd_array
static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) static int bpf_prog_load(union bpf_attr *attr, bpfptr_t uattr)
{ {
enum bpf_prog_type type = attr->prog_type; enum bpf_prog_type type = attr->prog_type;
struct bpf_prog *prog, *dst_prog = NULL; struct bpf_prog *prog, *dst_prog = NULL;
...@@ -2100,8 +2116,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) ...@@ -2100,8 +2116,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
return -EPERM; return -EPERM;
/* copy eBPF program license from user space */ /* copy eBPF program license from user space */
if (strncpy_from_user(license, u64_to_user_ptr(attr->license), if (strncpy_from_bpfptr(license,
sizeof(license) - 1) < 0) make_bpfptr(attr->license, uattr.is_kernel),
sizeof(license) - 1) < 0)
return -EFAULT; return -EFAULT;
license[sizeof(license) - 1] = 0; license[sizeof(license) - 1] = 0;
...@@ -2185,8 +2202,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr) ...@@ -2185,8 +2202,9 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
prog->len = attr->insn_cnt; prog->len = attr->insn_cnt;
err = -EFAULT; err = -EFAULT;
if (copy_from_user(prog->insns, u64_to_user_ptr(attr->insns), if (copy_from_bpfptr(prog->insns,
bpf_prog_insn_size(prog)) != 0) make_bpfptr(attr->insns, uattr.is_kernel),
bpf_prog_insn_size(prog)) != 0)
goto free_prog_sec; goto free_prog_sec;
prog->orig_prog = NULL; prog->orig_prog = NULL;
...@@ -3422,7 +3440,7 @@ static int bpf_prog_get_info_by_fd(struct file *file, ...@@ -3422,7 +3440,7 @@ static int bpf_prog_get_info_by_fd(struct file *file,
u32 ulen; u32 ulen;
int err; int err;
err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len); err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len);
if (err) if (err)
return err; return err;
info_len = min_t(u32, sizeof(info), info_len); info_len = min_t(u32, sizeof(info), info_len);
@@ -3701,7 +3719,7 @@ static int bpf_map_get_info_by_fd(struct file *file,
u32 info_len = attr->info.info_len;
int err;
-err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len);
+err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len);
if (err)
return err;
info_len = min_t(u32, sizeof(info), info_len);
@@ -3744,7 +3762,7 @@ static int bpf_btf_get_info_by_fd(struct file *file,
u32 info_len = attr->info.info_len;
int err;
-err = bpf_check_uarg_tail_zero(uinfo, sizeof(*uinfo), info_len);
+err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(*uinfo), info_len);
if (err)
return err;
@@ -3761,7 +3779,7 @@ static int bpf_link_get_info_by_fd(struct file *file,
u32 info_len = attr->info.info_len;
int err;
-err = bpf_check_uarg_tail_zero(uinfo, sizeof(info), info_len);
+err = bpf_check_uarg_tail_zero(USER_BPFPTR(uinfo), sizeof(info), info_len);
if (err)
return err;
info_len = min_t(u32, sizeof(info), info_len);
@@ -3824,7 +3842,7 @@ static int bpf_obj_get_info_by_fd(const union bpf_attr *attr,
#define BPF_BTF_LOAD_LAST_FIELD btf_log_level
-static int bpf_btf_load(const union bpf_attr *attr)
+static int bpf_btf_load(const union bpf_attr *attr, bpfptr_t uattr)
{
if (CHECK_ATTR(BPF_BTF_LOAD))
return -EINVAL;
@@ -3832,7 +3850,7 @@ static int bpf_btf_load(const union bpf_attr *attr)
if (!bpf_capable())
return -EPERM;
-return btf_new_fd(attr);
+return btf_new_fd(attr, uattr);
}
#define BPF_BTF_GET_FD_BY_ID_LAST_FIELD btf_id
@@ -4022,13 +4040,14 @@ static int bpf_map_do_batch(const union bpf_attr *attr,
return err;
}
-static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+static int tracing_bpf_link_attach(const union bpf_attr *attr, bpfptr_t uattr,
+struct bpf_prog *prog)
{
if (attr->link_create.attach_type != prog->expected_attach_type)
return -EINVAL;
if (prog->expected_attach_type == BPF_TRACE_ITER)
-return bpf_iter_link_attach(attr, prog);
+return bpf_iter_link_attach(attr, uattr, prog);
else if (prog->type == BPF_PROG_TYPE_EXT)
return bpf_tracing_prog_attach(prog,
attr->link_create.target_fd,
@@ -4037,7 +4056,7 @@ static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
}
#define BPF_LINK_CREATE_LAST_FIELD link_create.iter_info_len
-static int link_create(union bpf_attr *attr)
+static int link_create(union bpf_attr *attr, bpfptr_t uattr)
{
enum bpf_prog_type ptype;
struct bpf_prog *prog;
@@ -4056,7 +4075,7 @@ static int link_create(union bpf_attr *attr)
goto out;
if (prog->type == BPF_PROG_TYPE_EXT) {
-ret = tracing_bpf_link_attach(attr, prog);
+ret = tracing_bpf_link_attach(attr, uattr, prog);
goto out;
}
@@ -4077,7 +4096,7 @@ static int link_create(union bpf_attr *attr)
ret = cgroup_bpf_link_attach(attr, prog);
break;
case BPF_PROG_TYPE_TRACING:
-ret = tracing_bpf_link_attach(attr, prog);
+ret = tracing_bpf_link_attach(attr, uattr, prog);
break;
case BPF_PROG_TYPE_FLOW_DISSECTOR:
case BPF_PROG_TYPE_SK_LOOKUP:
@@ -4365,7 +4384,7 @@ static int bpf_prog_bind_map(union bpf_attr *attr)
return ret;
}
-SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
+static int __sys_bpf(int cmd, bpfptr_t uattr, unsigned int size)
{
union bpf_attr attr;
int err;
@@ -4380,7 +4399,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
/* copy attributes from user space, may be less than sizeof(bpf_attr) */
memset(&attr, 0, sizeof(attr));
-if (copy_from_user(&attr, uattr, size) != 0)
+if (copy_from_bpfptr(&attr, uattr, size) != 0)
return -EFAULT;
err = security_bpf(cmd, &attr, size);
@@ -4395,7 +4414,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
err = map_lookup_elem(&attr);
break;
case BPF_MAP_UPDATE_ELEM:
-err = map_update_elem(&attr);
+err = map_update_elem(&attr, uattr);
break;
case BPF_MAP_DELETE_ELEM:
err = map_delete_elem(&attr);
@@ -4422,21 +4441,21 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
err = bpf_prog_detach(&attr);
break;
case BPF_PROG_QUERY:
-err = bpf_prog_query(&attr, uattr);
+err = bpf_prog_query(&attr, uattr.user);
break;
case BPF_PROG_TEST_RUN:
-err = bpf_prog_test_run(&attr, uattr);
+err = bpf_prog_test_run(&attr, uattr.user);
break;
case BPF_PROG_GET_NEXT_ID:
-err = bpf_obj_get_next_id(&attr, uattr,
+err = bpf_obj_get_next_id(&attr, uattr.user,
&prog_idr, &prog_idr_lock);
break;
case BPF_MAP_GET_NEXT_ID:
-err = bpf_obj_get_next_id(&attr, uattr,
+err = bpf_obj_get_next_id(&attr, uattr.user,
&map_idr, &map_idr_lock);
break;
case BPF_BTF_GET_NEXT_ID:
-err = bpf_obj_get_next_id(&attr, uattr,
+err = bpf_obj_get_next_id(&attr, uattr.user,
&btf_idr, &btf_idr_lock);
break;
case BPF_PROG_GET_FD_BY_ID:
@@ -4446,38 +4465,38 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
err = bpf_map_get_fd_by_id(&attr);
break;
case BPF_OBJ_GET_INFO_BY_FD:
-err = bpf_obj_get_info_by_fd(&attr, uattr);
+err = bpf_obj_get_info_by_fd(&attr, uattr.user);
break;
case BPF_RAW_TRACEPOINT_OPEN:
err = bpf_raw_tracepoint_open(&attr);
break;
case BPF_BTF_LOAD:
-err = bpf_btf_load(&attr);
+err = bpf_btf_load(&attr, uattr);
break;
case BPF_BTF_GET_FD_BY_ID:
err = bpf_btf_get_fd_by_id(&attr);
break;
case BPF_TASK_FD_QUERY:
-err = bpf_task_fd_query(&attr, uattr);
+err = bpf_task_fd_query(&attr, uattr.user);
break;
case BPF_MAP_LOOKUP_AND_DELETE_ELEM:
err = map_lookup_and_delete_elem(&attr);
break;
case BPF_MAP_LOOKUP_BATCH:
-err = bpf_map_do_batch(&attr, uattr, BPF_MAP_LOOKUP_BATCH);
+err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_LOOKUP_BATCH);
break;
case BPF_MAP_LOOKUP_AND_DELETE_BATCH:
-err = bpf_map_do_batch(&attr, uattr,
+err = bpf_map_do_batch(&attr, uattr.user,
BPF_MAP_LOOKUP_AND_DELETE_BATCH);
break;
case BPF_MAP_UPDATE_BATCH:
-err = bpf_map_do_batch(&attr, uattr, BPF_MAP_UPDATE_BATCH);
+err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_UPDATE_BATCH);
break;
case BPF_MAP_DELETE_BATCH:
-err = bpf_map_do_batch(&attr, uattr, BPF_MAP_DELETE_BATCH);
+err = bpf_map_do_batch(&attr, uattr.user, BPF_MAP_DELETE_BATCH);
break;
case BPF_LINK_CREATE:
-err = link_create(&attr);
+err = link_create(&attr, uattr);
break;
case BPF_LINK_UPDATE:
err = link_update(&attr);
@@ -4486,7 +4505,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
err = bpf_link_get_fd_by_id(&attr);
break;
case BPF_LINK_GET_NEXT_ID:
-err = bpf_obj_get_next_id(&attr, uattr,
+err = bpf_obj_get_next_id(&attr, uattr.user,
&link_idr, &link_idr_lock);
break;
case BPF_ENABLE_STATS:
@@ -4508,3 +4527,94 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
return err;
}
SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
{
return __sys_bpf(cmd, USER_BPFPTR(uattr), size);
}
static bool syscall_prog_is_valid_access(int off, int size,
enum bpf_access_type type,
const struct bpf_prog *prog,
struct bpf_insn_access_aux *info)
{
if (off < 0 || off >= U16_MAX)
return false;
if (off % size != 0)
return false;
return true;
}
BPF_CALL_3(bpf_sys_bpf, int, cmd, void *, attr, u32, attr_size)
{
switch (cmd) {
case BPF_MAP_CREATE:
case BPF_MAP_UPDATE_ELEM:
case BPF_MAP_FREEZE:
case BPF_PROG_LOAD:
case BPF_BTF_LOAD:
break;
/* case BPF_PROG_TEST_RUN:
* is not part of this list to prevent recursive test_run
*/
default:
return -EINVAL;
}
return __sys_bpf(cmd, KERNEL_BPFPTR(attr), attr_size);
}
const struct bpf_func_proto bpf_sys_bpf_proto = {
.func = bpf_sys_bpf,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_ANYTHING,
.arg2_type = ARG_PTR_TO_MEM,
.arg3_type = ARG_CONST_SIZE,
};
const struct bpf_func_proto * __weak
tracing_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
return bpf_base_func_proto(func_id);
}
BPF_CALL_1(bpf_sys_close, u32, fd)
{
/* When bpf program calls this helper there should not be
* an fdget() without matching completed fdput().
* This helper is allowed in the following callchain only:
* sys_bpf->prog_test_run->bpf_prog->bpf_sys_close
*/
return close_fd(fd);
}
const struct bpf_func_proto bpf_sys_close_proto = {
.func = bpf_sys_close,
.gpl_only = false,
.ret_type = RET_INTEGER,
.arg1_type = ARG_ANYTHING,
};
static const struct bpf_func_proto *
syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
switch (func_id) {
case BPF_FUNC_sys_bpf:
return &bpf_sys_bpf_proto;
case BPF_FUNC_btf_find_by_name_kind:
return &bpf_btf_find_by_name_kind_proto;
case BPF_FUNC_sys_close:
return &bpf_sys_close_proto;
default:
return tracing_prog_func_proto(func_id, prog);
}
}
const struct bpf_verifier_ops bpf_syscall_verifier_ops = {
.get_func_proto = syscall_prog_func_proto,
.is_valid_access = syscall_prog_is_valid_access,
};
const struct bpf_prog_ops bpf_syscall_prog_ops = {
.test_run = bpf_prog_test_run_syscall,
};
@@ -8915,12 +8915,14 @@ static int check_ld_imm(struct bpf_verifier_env *env, struct bpf_insn *insn)
mark_reg_known_zero(env, regs, insn->dst_reg);
dst_reg->map_ptr = map;
-if (insn->src_reg == BPF_PSEUDO_MAP_VALUE) {
+if (insn->src_reg == BPF_PSEUDO_MAP_VALUE ||
+insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE) {
dst_reg->type = PTR_TO_MAP_VALUE;
dst_reg->off = aux->map_off;
if (map_value_has_spin_lock(map))
dst_reg->id = ++env->id_gen;
-} else if (insn->src_reg == BPF_PSEUDO_MAP_FD) {
+} else if (insn->src_reg == BPF_PSEUDO_MAP_FD ||
+insn->src_reg == BPF_PSEUDO_MAP_IDX) {
dst_reg->type = CONST_PTR_TO_MAP;
} else {
verbose(env, "bpf verifier is misconfigured\n");
@@ -9436,7 +9438,7 @@ static int check_abnormal_return(struct bpf_verifier_env *env)
static int check_btf_func(struct bpf_verifier_env *env,
const union bpf_attr *attr,
-union bpf_attr __user *uattr)
+bpfptr_t uattr)
{
const struct btf_type *type, *func_proto, *ret_type;
u32 i, nfuncs, urec_size, min_size;
@@ -9445,7 +9447,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
struct bpf_func_info_aux *info_aux = NULL;
struct bpf_prog *prog;
const struct btf *btf;
-void __user *urecord;
+bpfptr_t urecord;
u32 prev_offset = 0;
bool scalar_return;
int ret = -ENOMEM;
@@ -9473,7 +9475,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
prog = env->prog;
btf = prog->aux->btf;
-urecord = u64_to_user_ptr(attr->func_info);
+urecord = make_bpfptr(attr->func_info, uattr.is_kernel);
min_size = min_t(u32, krec_size, urec_size);
krecord = kvcalloc(nfuncs, krec_size, GFP_KERNEL | __GFP_NOWARN);
@@ -9491,13 +9493,15 @@ static int check_btf_func(struct bpf_verifier_env *env,
/* set the size kernel expects so loader can zero
 * out the rest of the record.
 */
-if (put_user(min_size, &uattr->func_info_rec_size))
+if (copy_to_bpfptr_offset(uattr,
+offsetof(union bpf_attr, func_info_rec_size),
+&min_size, sizeof(min_size)))
ret = -EFAULT;
}
goto err_free;
}
-if (copy_from_user(&krecord[i], urecord, min_size)) {
+if (copy_from_bpfptr(&krecord[i], urecord, min_size)) {
ret = -EFAULT;
goto err_free;
}
@@ -9549,7 +9553,7 @@ static int check_btf_func(struct bpf_verifier_env *env,
}
prev_offset = krecord[i].insn_off;
-urecord += urec_size;
+bpfptr_add(&urecord, urec_size);
}
prog->aux->func_info = krecord;
@@ -9581,14 +9585,14 @@ static void adjust_btf_func(struct bpf_verifier_env *env)
static int check_btf_line(struct bpf_verifier_env *env,
const union bpf_attr *attr,
-union bpf_attr __user *uattr)
+bpfptr_t uattr)
{
u32 i, s, nr_linfo, ncopy, expected_size, rec_size, prev_offset = 0;
struct bpf_subprog_info *sub;
struct bpf_line_info *linfo;
struct bpf_prog *prog;
const struct btf *btf;
-void __user *ulinfo;
+bpfptr_t ulinfo;
int err;
nr_linfo = attr->line_info_cnt;
@@ -9614,7 +9618,7 @@ static int check_btf_line(struct bpf_verifier_env *env,
s = 0;
sub = env->subprog_info;
-ulinfo = u64_to_user_ptr(attr->line_info);
+ulinfo = make_bpfptr(attr->line_info, uattr.is_kernel);
expected_size = sizeof(struct bpf_line_info);
ncopy = min_t(u32, expected_size, rec_size);
for (i = 0; i < nr_linfo; i++) {
@@ -9622,14 +9626,15 @@ static int check_btf_line(struct bpf_verifier_env *env,
if (err) {
if (err == -E2BIG) {
verbose(env, "nonzero tailing record in line_info");
-if (put_user(expected_size,
-&uattr->line_info_rec_size))
+if (copy_to_bpfptr_offset(uattr,
+offsetof(union bpf_attr, line_info_rec_size),
+&expected_size, sizeof(expected_size)))
err = -EFAULT;
}
goto err_free;
}
-if (copy_from_user(&linfo[i], ulinfo, ncopy)) {
+if (copy_from_bpfptr(&linfo[i], ulinfo, ncopy)) {
err = -EFAULT;
goto err_free;
}
@@ -9681,7 +9686,7 @@ static int check_btf_line(struct bpf_verifier_env *env,
}
prev_offset = linfo[i].insn_off;
-ulinfo += rec_size;
+bpfptr_add(&ulinfo, rec_size);
}
if (s != env->subprog_cnt) {
@@ -9703,7 +9708,7 @@ static int check_btf_line(struct bpf_verifier_env *env,
static int check_btf_info(struct bpf_verifier_env *env,
const union bpf_attr *attr,
-union bpf_attr __user *uattr)
+bpfptr_t uattr)
{
struct btf *btf;
int err;
@@ -11170,6 +11175,7 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
struct bpf_map *map;
struct fd f;
u64 addr;
+u32 fd;
if (i == insn_cnt - 1 || insn[1].code != 0 ||
insn[1].dst_reg != 0 || insn[1].src_reg != 0 ||
@@ -11199,16 +11205,38 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
/* In final convert_pseudo_ld_imm64() step, this is
 * converted into regular 64-bit imm load insn.
 */
-if ((insn[0].src_reg != BPF_PSEUDO_MAP_FD &&
-insn[0].src_reg != BPF_PSEUDO_MAP_VALUE) ||
-(insn[0].src_reg == BPF_PSEUDO_MAP_FD &&
-insn[1].imm != 0)) {
-verbose(env,
-"unrecognized bpf_ld_imm64 insn\n");
+switch (insn[0].src_reg) {
+case BPF_PSEUDO_MAP_VALUE:
+case BPF_PSEUDO_MAP_IDX_VALUE:
+break;
+case BPF_PSEUDO_MAP_FD:
+case BPF_PSEUDO_MAP_IDX:
+if (insn[1].imm == 0)
+break;
+fallthrough;
+default:
+verbose(env, "unrecognized bpf_ld_imm64 insn\n");
return -EINVAL;
}
-f = fdget(insn[0].imm);
+switch (insn[0].src_reg) {
+case BPF_PSEUDO_MAP_IDX_VALUE:
+case BPF_PSEUDO_MAP_IDX:
+if (bpfptr_is_null(env->fd_array)) {
+verbose(env, "fd_idx without fd_array is invalid\n");
+return -EPROTO;
+}
+if (copy_from_bpfptr_offset(&fd, env->fd_array,
+insn[0].imm * sizeof(fd),
+sizeof(fd)))
+return -EFAULT;
+break;
+default:
+fd = insn[0].imm;
+break;
+}
+
+f = fdget(fd);
map = __bpf_map_get(f);
if (IS_ERR(map)) {
verbose(env, "fd %d is not pointing to valid bpf_map\n",
@@ -11223,7 +11251,8 @@ static int resolve_pseudo_ldimm64(struct bpf_verifier_env *env)
}
aux = &env->insn_aux_data[i];
-if (insn->src_reg == BPF_PSEUDO_MAP_FD) {
+if (insn[0].src_reg == BPF_PSEUDO_MAP_FD ||
+insn[0].src_reg == BPF_PSEUDO_MAP_IDX) {
addr = (unsigned long)map;
} else {
u32 off = insn[1].imm;
@@ -13196,6 +13225,14 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
int ret;
u64 key;
+if (prog->type == BPF_PROG_TYPE_SYSCALL) {
+if (prog->aux->sleepable)
+/* attach_btf_id checked to be zero already */
+return 0;
+verbose(env, "Syscall programs can only be sleepable\n");
+return -EINVAL;
+}
if (prog->aux->sleepable && prog->type != BPF_PROG_TYPE_TRACING &&
prog->type != BPF_PROG_TYPE_LSM) {
verbose(env, "Only fentry/fexit/fmod_ret and lsm programs can be sleepable\n");
@@ -13267,8 +13304,7 @@ struct btf *bpf_get_btf_vmlinux(void)
return btf_vmlinux;
}
-int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
-union bpf_attr __user *uattr)
+int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr)
{
u64 start_time = ktime_get_ns();
struct bpf_verifier_env *env;
@@ -13298,6 +13334,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr,
env->insn_aux_data[i].orig_idx = i;
env->prog = *prog;
env->ops = bpf_verifier_ops[env->prog->type];
+env->fd_array = make_bpfptr(attr->fd_array, uattr.is_kernel);
is_priv = bpf_capable();
bpf_get_btf_vmlinux();
...
@@ -409,7 +409,7 @@ static void *bpf_ctx_init(const union bpf_attr *kattr, u32 max_size)
return ERR_PTR(-ENOMEM);
if (data_in) {
-err = bpf_check_uarg_tail_zero(data_in, max_size, size);
+err = bpf_check_uarg_tail_zero(USER_BPFPTR(data_in), max_size, size);
if (err) {
kfree(data);
return ERR_PTR(err);
@@ -918,3 +918,46 @@ int bpf_prog_test_run_sk_lookup(struct bpf_prog *prog, const union bpf_attr *kattr,
kfree(user_ctx);
return ret;
}
int bpf_prog_test_run_syscall(struct bpf_prog *prog,
const union bpf_attr *kattr,
union bpf_attr __user *uattr)
{
void __user *ctx_in = u64_to_user_ptr(kattr->test.ctx_in);
__u32 ctx_size_in = kattr->test.ctx_size_in;
void *ctx = NULL;
u32 retval;
int err = 0;
/* doesn't support data_in/out, ctx_out, duration, or repeat or flags */
if (kattr->test.data_in || kattr->test.data_out ||
kattr->test.ctx_out || kattr->test.duration ||
kattr->test.repeat || kattr->test.flags)
return -EINVAL;
if (ctx_size_in < prog->aux->max_ctx_offset ||
ctx_size_in > U16_MAX)
return -EINVAL;
if (ctx_size_in) {
ctx = kzalloc(ctx_size_in, GFP_USER);
if (!ctx)
return -ENOMEM;
if (copy_from_user(ctx, ctx_in, ctx_size_in)) {
err = -EFAULT;
goto out;
}
}
retval = bpf_prog_run_pin_on_cpu(prog, ctx);
if (copy_to_user(&uattr->test.retval, &retval, sizeof(u32))) {
err = -EFAULT;
goto out;
}
if (ctx_size_in)
if (copy_to_user(ctx_in, ctx, ctx_size_in))
err = -EFAULT;
out:
kfree(ctx);
return err;
}
@@ -136,7 +136,7 @@ endif
BPFTOOL_BOOTSTRAP := $(BOOTSTRAP_OUTPUT)bpftool
-BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o)
+BOOTSTRAP_OBJS = $(addprefix $(BOOTSTRAP_OUTPUT),main.o common.o json_writer.o gen.o btf.o xlated_dumper.o btf_dumper.o) $(OUTPUT)disasm.o
OBJS = $(patsubst %.c,$(OUTPUT)%.o,$(SRCS)) $(OUTPUT)disasm.o
VMLINUX_BTF_PATHS ?= $(if $(O),$(O)/vmlinux) \
...
@@ -18,6 +18,7 @@
#include <sys/stat.h>
#include <sys/mman.h>
#include <bpf/btf.h>
+#include <bpf/bpf_gen_internal.h>
#include "json_writer.h"
#include "main.h"
@@ -274,6 +275,327 @@ static void codegen(const char *template, ...)
free(s);
}
static void print_hex(const char *data, int data_sz)
{
int i, len;
for (i = 0, len = 0; i < data_sz; i++) {
int w = data[i] ? 4 : 2;
len += w;
if (len > 78) {
printf("\\\n");
len = w;
}
if (!data[i])
printf("\\0");
else
printf("\\x%02x", (unsigned char)data[i]);
}
}
static size_t bpf_map_mmap_sz(const struct bpf_map *map)
{
long page_sz = sysconf(_SC_PAGE_SIZE);
size_t map_sz;
map_sz = (size_t)roundup(bpf_map__value_size(map), 8) * bpf_map__max_entries(map);
map_sz = roundup(map_sz, page_sz);
return map_sz;
}
static void codegen_attach_detach(struct bpf_object *obj, const char *obj_name)
{
struct bpf_program *prog;
bpf_object__for_each_program(prog, obj) {
const char *tp_name;
codegen("\
\n\
\n\
static inline int \n\
%1$s__%2$s__attach(struct %1$s *skel) \n\
{ \n\
int prog_fd = skel->progs.%2$s.prog_fd; \n\
", obj_name, bpf_program__name(prog));
switch (bpf_program__get_type(prog)) {
case BPF_PROG_TYPE_RAW_TRACEPOINT:
tp_name = strchr(bpf_program__section_name(prog), '/') + 1;
printf("\tint fd = bpf_raw_tracepoint_open(\"%s\", prog_fd);\n", tp_name);
break;
case BPF_PROG_TYPE_TRACING:
printf("\tint fd = bpf_raw_tracepoint_open(NULL, prog_fd);\n");
break;
default:
printf("\tint fd = ((void)prog_fd, 0); /* auto-attach not supported */\n");
break;
}
codegen("\
\n\
\n\
if (fd > 0) \n\
skel->links.%1$s_fd = fd; \n\
return fd; \n\
} \n\
", bpf_program__name(prog));
}
codegen("\
\n\
\n\
static inline int \n\
%1$s__attach(struct %1$s *skel) \n\
{ \n\
int ret = 0; \n\
\n\
", obj_name);
bpf_object__for_each_program(prog, obj) {
codegen("\
\n\
ret = ret < 0 ? ret : %1$s__%2$s__attach(skel); \n\
", obj_name, bpf_program__name(prog));
}
codegen("\
\n\
return ret < 0 ? ret : 0; \n\
} \n\
\n\
static inline void \n\
%1$s__detach(struct %1$s *skel) \n\
{ \n\
", obj_name);
bpf_object__for_each_program(prog, obj) {
codegen("\
\n\
skel_closenz(skel->links.%1$s_fd); \n\
", bpf_program__name(prog));
}
codegen("\
\n\
} \n\
");
}
static void codegen_destroy(struct bpf_object *obj, const char *obj_name)
{
struct bpf_program *prog;
struct bpf_map *map;
codegen("\
\n\
static void \n\
%1$s__destroy(struct %1$s *skel) \n\
{ \n\
if (!skel) \n\
return; \n\
%1$s__detach(skel); \n\
",
obj_name);
bpf_object__for_each_program(prog, obj) {
codegen("\
\n\
skel_closenz(skel->progs.%1$s.prog_fd); \n\
", bpf_program__name(prog));
}
bpf_object__for_each_map(map, obj) {
const char * ident;
ident = get_map_ident(map);
if (!ident)
continue;
if (bpf_map__is_internal(map) &&
(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
printf("\tmunmap(skel->%1$s, %2$zd);\n",
ident, bpf_map_mmap_sz(map));
codegen("\
\n\
skel_closenz(skel->maps.%1$s.map_fd); \n\
", ident);
}
codegen("\
\n\
free(skel); \n\
} \n\
",
obj_name);
}
static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *header_guard)
{
struct bpf_object_load_attr load_attr = {};
DECLARE_LIBBPF_OPTS(gen_loader_opts, opts);
struct bpf_map *map;
int err = 0;
err = bpf_object__gen_loader(obj, &opts);
if (err)
return err;
load_attr.obj = obj;
if (verifier_logs)
/* log_level1 + log_level2 + stats, but not stable UAPI */
load_attr.log_level = 1 + 2 + 4;
err = bpf_object__load_xattr(&load_attr);
if (err) {
p_err("failed to load object file");
goto out;
}
/* If there was no error during load then gen_loader_opts
* are populated with the loader program.
*/
/* finish generating 'struct skel' */
codegen("\
\n\
}; \n\
", obj_name);
codegen_attach_detach(obj, obj_name);
codegen_destroy(obj, obj_name);
codegen("\
\n\
static inline struct %1$s * \n\
%1$s__open(void) \n\
{ \n\
struct %1$s *skel; \n\
\n\
skel = calloc(sizeof(*skel), 1); \n\
if (!skel) \n\
goto cleanup; \n\
skel->ctx.sz = (void *)&skel->links - (void *)skel; \n\
",
obj_name, opts.data_sz);
bpf_object__for_each_map(map, obj) {
const char *ident;
const void *mmap_data = NULL;
size_t mmap_size = 0;
ident = get_map_ident(map);
if (!ident)
continue;
if (!bpf_map__is_internal(map) ||
!(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
continue;
codegen("\
\n\
skel->%1$s = \n\
mmap(NULL, %2$zd, PROT_READ | PROT_WRITE,\n\
MAP_SHARED | MAP_ANONYMOUS, -1, 0); \n\
if (skel->%1$s == (void *) -1) \n\
goto cleanup; \n\
memcpy(skel->%1$s, (void *)\"\\ \n\
", ident, bpf_map_mmap_sz(map));
mmap_data = bpf_map__initial_value(map, &mmap_size);
print_hex(mmap_data, mmap_size);
printf("\", %2$zd);\n"
"\tskel->maps.%1$s.initial_value = (__u64)(long)skel->%1$s;\n",
ident, mmap_size);
}
codegen("\
\n\
return skel; \n\
cleanup: \n\
%1$s__destroy(skel); \n\
return NULL; \n\
} \n\
\n\
static inline int \n\
%1$s__load(struct %1$s *skel) \n\
{ \n\
struct bpf_load_and_run_opts opts = {}; \n\
int err; \n\
\n\
opts.ctx = (struct bpf_loader_ctx *)skel; \n\
opts.data_sz = %2$d; \n\
opts.data = (void *)\"\\ \n\
",
obj_name, opts.data_sz);
print_hex(opts.data, opts.data_sz);
codegen("\
\n\
\"; \n\
");
codegen("\
\n\
opts.insns_sz = %d; \n\
opts.insns = (void *)\"\\ \n\
",
opts.insns_sz);
print_hex(opts.insns, opts.insns_sz);
codegen("\
\n\
\"; \n\
err = bpf_load_and_run(&opts); \n\
if (err < 0) \n\
return err; \n\
", obj_name);
bpf_object__for_each_map(map, obj) {
const char *ident, *mmap_flags;
ident = get_map_ident(map);
if (!ident)
continue;
if (!bpf_map__is_internal(map) ||
!(bpf_map__def(map)->map_flags & BPF_F_MMAPABLE))
continue;
if (bpf_map__def(map)->map_flags & BPF_F_RDONLY_PROG)
mmap_flags = "PROT_READ";
else
mmap_flags = "PROT_READ | PROT_WRITE";
printf("\tskel->%1$s =\n"
"\t\tmmap(skel->%1$s, %2$zd, %3$s, MAP_SHARED | MAP_FIXED,\n"
"\t\t\tskel->maps.%1$s.map_fd, 0);\n",
ident, bpf_map_mmap_sz(map), mmap_flags);
}
codegen("\
\n\
return 0; \n\
} \n\
\n\
static inline struct %1$s * \n\
%1$s__open_and_load(void) \n\
{ \n\
struct %1$s *skel; \n\
\n\
skel = %1$s__open(); \n\
if (!skel) \n\
return NULL; \n\
if (%1$s__load(skel)) { \n\
%1$s__destroy(skel); \n\
return NULL; \n\
} \n\
return skel; \n\
} \n\
", obj_name);
codegen("\
\n\
\n\
#endif /* %s */ \n\
",
header_guard);
err = 0;
out:
return err;
}
static int do_skeleton(int argc, char **argv)
{
char header_guard[MAX_OBJ_NAME_LEN + sizeof("__SKEL_H__")];
@@ -283,7 +605,7 @@ static int do_skeleton(int argc, char **argv)
struct bpf_object *obj = NULL;
const char *file, *ident;
struct bpf_program *prog;
-int fd, len, err = -1;
+int fd, err = -1;
struct bpf_map *map;
struct btf *btf;
struct stat st;
@@ -365,7 +687,25 @@ static int do_skeleton(int argc, char **argv)
}
get_header_guard(header_guard, obj_name);
-codegen("\
+if (use_loader) {
+codegen("\
+\n\
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\
+/* THIS FILE IS AUTOGENERATED! */ \n\
+#ifndef %2$s \n\
+#define %2$s \n\
+\n\
+#include <stdlib.h> \n\
+#include <bpf/bpf.h> \n\
+#include <bpf/skel_internal.h> \n\
+\n\
+struct %1$s { \n\
+struct bpf_loader_ctx ctx; \n\
+",
+obj_name, header_guard
+);
+} else {
+codegen("\
\n\
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */ \n\
\n\
@@ -381,7 +721,8 @@ static int do_skeleton(int argc, char **argv)
struct bpf_object *obj; \n\
",
obj_name, header_guard
);
+}
if (map_cnt) {
printf("\tstruct {\n");
@@ -389,7 +730,10 @@ static int do_skeleton(int argc, char **argv)
ident = get_map_ident(map);
if (!ident)
continue;
-printf("\t\tstruct bpf_map *%s;\n", ident);
+if (use_loader)
+printf("\t\tstruct bpf_map_desc %s;\n", ident);
+else
+printf("\t\tstruct bpf_map *%s;\n", ident);
}
printf("\t} maps;\n");
}
@@ -397,14 +741,22 @@ static int do_skeleton(int argc, char **argv)
	if (prog_cnt) {
		printf("\tstruct {\n");
		bpf_object__for_each_program(prog, obj) {
-			printf("\t\tstruct bpf_program *%s;\n",
-			       bpf_program__name(prog));
+			if (use_loader)
+				printf("\t\tstruct bpf_prog_desc %s;\n",
+				       bpf_program__name(prog));
+			else
+				printf("\t\tstruct bpf_program *%s;\n",
+				       bpf_program__name(prog));
		}
		printf("\t} progs;\n");
		printf("\tstruct {\n");
		bpf_object__for_each_program(prog, obj) {
-			printf("\t\tstruct bpf_link *%s;\n",
-			       bpf_program__name(prog));
+			if (use_loader)
+				printf("\t\tint %s_fd;\n",
+				       bpf_program__name(prog));
+			else
+				printf("\t\tstruct bpf_link *%s;\n",
+				       bpf_program__name(prog));
		}
		printf("\t} links;\n");
	}
@@ -415,6 +767,10 @@ static int do_skeleton(int argc, char **argv)
		if (err)
			goto out;
	}
+	if (use_loader) {
+		err = gen_trace(obj, obj_name, header_guard);
+		goto out;
+	}
	codegen("\
		\n\
@@ -584,19 +940,7 @@ static int do_skeleton(int argc, char **argv)
		       file_sz);

	/* embed contents of BPF object file */
-	for (i = 0, len = 0; i < file_sz; i++) {
-		int w = obj_data[i] ? 4 : 2;
-
-		len += w;
-		if (len > 78) {
-			printf("\\\n");
-			len = w;
-		}
-		if (!obj_data[i])
-			printf("\\0");
-		else
-			printf("\\x%02x", (unsigned char)obj_data[i]);
-	}
+	print_hex(obj_data, file_sz);

	codegen("\
		\n\
......
@@ -29,6 +29,7 @@ bool show_pinned;
bool block_mount;
bool verifier_logs;
bool relaxed_maps;
+bool use_loader;
struct btf *base_btf;
struct pinned_obj_table prog_table;
struct pinned_obj_table map_table;
...@@ -392,6 +393,7 @@ int main(int argc, char **argv) ...@@ -392,6 +393,7 @@ int main(int argc, char **argv)
{ "mapcompat", no_argument, NULL, 'm' }, { "mapcompat", no_argument, NULL, 'm' },
{ "nomount", no_argument, NULL, 'n' }, { "nomount", no_argument, NULL, 'n' },
{ "debug", no_argument, NULL, 'd' }, { "debug", no_argument, NULL, 'd' },
{ "use-loader", no_argument, NULL, 'L' },
{ "base-btf", required_argument, NULL, 'B' }, { "base-btf", required_argument, NULL, 'B' },
{ 0 } { 0 }
}; };
@@ -409,7 +411,7 @@ int main(int argc, char **argv)
	hash_init(link_table.table);

	opterr = 0;
-	while ((opt = getopt_long(argc, argv, "VhpjfmndB:",
+	while ((opt = getopt_long(argc, argv, "VhpjfLmndB:",
				  options, NULL)) >= 0) {
		switch (opt) {
		case 'V':
@@ -452,6 +454,9 @@ int main(int argc, char **argv)
				return -1;
			}
			break;
+		case 'L':
+			use_loader = true;
+			break;
		default:
			p_err("unrecognized option '%s'", argv[optind - 1]);
			if (json_output)
......
@@ -90,6 +90,7 @@ extern bool show_pids;
extern bool block_mount;
extern bool verifier_logs;
extern bool relaxed_maps;
+extern bool use_loader;
extern struct btf *base_btf;
extern struct pinned_obj_table prog_table;
extern struct pinned_obj_table map_table;
......
@@ -16,6 +16,7 @@
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/syscall.h>
+#include <dirent.h>
#include <linux/err.h>
#include <linux/perf_event.h>
@@ -24,6 +25,8 @@
#include <bpf/bpf.h>
#include <bpf/btf.h>
#include <bpf/libbpf.h>
+#include <bpf/bpf_gen_internal.h>
+#include <bpf/skel_internal.h>
#include "cfg.h"
#include "main.h"
@@ -1499,7 +1502,7 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
	set_max_rlimit();

	obj = bpf_object__open_file(file, &open_opts);
-	if (IS_ERR_OR_NULL(obj)) {
+	if (libbpf_get_error(obj)) {
		p_err("failed to open object file");
		goto err_free_reuse_maps;
	}
@@ -1645,8 +1648,110 @@ static int load_with_options(int argc, char **argv, bool first_prog_only)
	return -1;
}
static int count_open_fds(void)
{
DIR *dp = opendir("/proc/self/fd");
struct dirent *de;
int cnt = -3;
if (!dp)
return -1;
while ((de = readdir(dp)))
cnt++;
closedir(dp);
return cnt;
}
static int try_loader(struct gen_loader_opts *gen)
{
struct bpf_load_and_run_opts opts = {};
struct bpf_loader_ctx *ctx;
int ctx_sz = sizeof(*ctx) + 64 * max(sizeof(struct bpf_map_desc),
sizeof(struct bpf_prog_desc));
int log_buf_sz = (1u << 24) - 1;
int err, fds_before, fd_delta;
char *log_buf;
ctx = alloca(ctx_sz);
memset(ctx, 0, ctx_sz);
ctx->sz = ctx_sz;
ctx->log_level = 1;
ctx->log_size = log_buf_sz;
log_buf = malloc(log_buf_sz);
if (!log_buf)
return -ENOMEM;
ctx->log_buf = (long) log_buf;
opts.ctx = ctx;
opts.data = gen->data;
opts.data_sz = gen->data_sz;
opts.insns = gen->insns;
opts.insns_sz = gen->insns_sz;
fds_before = count_open_fds();
err = bpf_load_and_run(&opts);
fd_delta = count_open_fds() - fds_before;
if (err < 0) {
fprintf(stderr, "err %d\n%s\n%s", err, opts.errstr, log_buf);
if (fd_delta)
fprintf(stderr, "loader prog leaked %d FDs\n",
fd_delta);
}
free(log_buf);
return err;
}
static int do_loader(int argc, char **argv)
{
DECLARE_LIBBPF_OPTS(bpf_object_open_opts, open_opts);
DECLARE_LIBBPF_OPTS(gen_loader_opts, gen);
struct bpf_object_load_attr load_attr = {};
struct bpf_object *obj;
const char *file;
int err = 0;
if (!REQ_ARGS(1))
return -1;
file = GET_ARG();
obj = bpf_object__open_file(file, &open_opts);
if (libbpf_get_error(obj)) {
p_err("failed to open object file");
goto err_close_obj;
}
err = bpf_object__gen_loader(obj, &gen);
if (err)
goto err_close_obj;
load_attr.obj = obj;
if (verifier_logs)
/* log_level1 + log_level2 + stats, but not stable UAPI */
load_attr.log_level = 1 + 2 + 4;
err = bpf_object__load_xattr(&load_attr);
if (err) {
p_err("failed to load object file");
goto err_close_obj;
}
if (verifier_logs) {
struct dump_data dd = {};
kernel_syms_load(&dd);
dump_xlated_plain(&dd, (void *)gen.insns, gen.insns_sz, false, false);
kernel_syms_destroy(&dd);
}
err = try_loader(&gen);
err_close_obj:
bpf_object__close(obj);
return err;
}
static int do_load(int argc, char **argv)
{
+	if (use_loader)
+		return do_loader(argc, argv);
	return load_with_options(argc, argv, true);
}
......
...@@ -196,6 +196,9 @@ static const char *print_imm(void *private_data, ...@@ -196,6 +196,9 @@ static const char *print_imm(void *private_data,
else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE) else if (insn->src_reg == BPF_PSEUDO_MAP_VALUE)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"map[id:%u][0]+%u", insn->imm, (insn + 1)->imm); "map[id:%u][0]+%u", insn->imm, (insn + 1)->imm);
else if (insn->src_reg == BPF_PSEUDO_MAP_IDX_VALUE)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"map[idx:%u]+%u", insn->imm, (insn + 1)->imm);
else if (insn->src_reg == BPF_PSEUDO_FUNC) else if (insn->src_reg == BPF_PSEUDO_FUNC)
snprintf(dd->scratch_buff, sizeof(dd->scratch_buff), snprintf(dd->scratch_buff, sizeof(dd->scratch_buff),
"subprog[%+d]", insn->imm); "subprog[%+d]", insn->imm);
......
@@ -937,6 +937,7 @@ enum bpf_prog_type {
	BPF_PROG_TYPE_EXT,
	BPF_PROG_TYPE_LSM,
	BPF_PROG_TYPE_SK_LOOKUP,
+	BPF_PROG_TYPE_SYSCALL, /* a program that can execute syscalls */
};

enum bpf_attach_type {
@@ -1097,8 +1098,8 @@ enum bpf_link_type {
/* When BPF ldimm64's insn[0].src_reg != 0 then this can have
 * the following extensions:
 *
- * insn[0].src_reg:  BPF_PSEUDO_MAP_FD
- * insn[0].imm:      map fd
+ * insn[0].src_reg:  BPF_PSEUDO_MAP_[FD|IDX]
+ * insn[0].imm:      map fd or fd_idx
 * insn[1].imm:      0
 * insn[0].off:      0
 * insn[1].off:      0
@@ -1106,15 +1107,19 @@ enum bpf_link_type {
 * verifier type:    CONST_PTR_TO_MAP
 */
#define BPF_PSEUDO_MAP_FD	1
-/* insn[0].src_reg:  BPF_PSEUDO_MAP_VALUE
- * insn[0].imm:      map fd
+#define BPF_PSEUDO_MAP_IDX	5
+
+/* insn[0].src_reg:  BPF_PSEUDO_MAP_[IDX_]VALUE
+ * insn[0].imm:      map fd or fd_idx
 * insn[1].imm:      offset into value
 * insn[0].off:      0
 * insn[1].off:      0
 * ldimm64 rewrite:  address of map[0]+offset
 * verifier type:    PTR_TO_MAP_VALUE
 */
#define BPF_PSEUDO_MAP_VALUE		2
+#define BPF_PSEUDO_MAP_IDX_VALUE	6
+
/* insn[0].src_reg:  BPF_PSEUDO_BTF_ID
 * insn[0].imm:      kernel btf id of VAR
 * insn[1].imm:      0
@@ -1314,6 +1319,8 @@ union bpf_attr {
		/* or valid module BTF object fd or 0 to attach to vmlinux */
		__u32		attach_btf_obj_fd;
	};
+	__u32		:32;		/* pad */
+	__aligned_u64	fd_array;	/* array of FDs */
};

struct { /* anonymous struct used by BPF_OBJ_* commands */
@@ -4735,6 +4742,24 @@ union bpf_attr {
 *		be zero-terminated except when **str_size** is 0.
 *
 *		Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * long bpf_sys_bpf(u32 cmd, void *attr, u32 attr_size)
+ *	Description
+ *		Execute the bpf() syscall with the given arguments.
+ *	Return
+ *		The syscall result.
+ *
+ * long bpf_btf_find_by_name_kind(char *name, int name_sz, u32 kind, int flags)
+ *	Description
+ *		Find a BTF type with the given name and kind in vmlinux BTF
+ *		or in a module's BTF.
+ *	Return
+ *		Returns btf_id and btf_obj_fd in the lower and upper 32 bits.
+ *
+ * long bpf_sys_close(u32 fd)
+ *	Description
+ *		Execute the close() syscall for the given FD.
+ *	Return
+ *		The syscall result.
 */
#define __BPF_FUNC_MAPPER(FN)		\
	FN(unspec),			\
@@ -4903,6 +4928,9 @@ union bpf_attr {
	FN(check_mtu),			\
	FN(for_each_map_elem),		\
	FN(snprintf),			\
+	FN(sys_bpf),			\
+	FN(btf_find_by_name_kind),	\
+	FN(sys_close),			\
/* */
libbpf-y := libbpf.o bpf.o nlattr.o btf.o libbpf_errno.o str_error.o \
	    netlink.o bpf_prog_linfo.o libbpf_probes.o xsk.o hashmap.o \
-	    btf_dump.o ringbuf.o strset.o linker.o
+	    btf_dump.o ringbuf.o strset.o linker.o gen_loader.o
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
/* Copyright (c) 2021 Facebook */
#ifndef __BPF_GEN_INTERNAL_H
#define __BPF_GEN_INTERNAL_H
struct ksym_relo_desc {
const char *name;
int kind;
int insn_idx;
};
struct bpf_gen {
struct gen_loader_opts *opts;
void *data_start;
void *data_cur;
void *insn_start;
void *insn_cur;
ssize_t cleanup_label;
__u32 nr_progs;
__u32 nr_maps;
int log_level;
int error;
struct ksym_relo_desc *relos;
int relo_cnt;
char attach_target[128];
int attach_kind;
};
void bpf_gen__init(struct bpf_gen *gen, int log_level);
int bpf_gen__finish(struct bpf_gen *gen);
void bpf_gen__free(struct bpf_gen *gen);
void bpf_gen__load_btf(struct bpf_gen *gen, const void *raw_data, __u32 raw_size);
void bpf_gen__map_create(struct bpf_gen *gen, struct bpf_create_map_attr *map_attr, int map_idx);
struct bpf_prog_load_params;
void bpf_gen__prog_load(struct bpf_gen *gen, struct bpf_prog_load_params *load_attr, int prog_idx);
void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *value, __u32 value_size);
void bpf_gen__map_freeze(struct bpf_gen *gen, int map_idx);
void bpf_gen__record_attach_target(struct bpf_gen *gen, const char *name, enum bpf_attach_type type);
void bpf_gen__record_extern(struct bpf_gen *gen, const char *name, int kind, int insn_idx);
#endif
(Two large file diffs were collapsed by the diff viewer and are not shown here.)
@@ -471,6 +471,7 @@ LIBBPF_API int bpf_map__set_priv(struct bpf_map *map, void *priv,
LIBBPF_API void *bpf_map__priv(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_initial_value(struct bpf_map *map,
					  const void *data, size_t size);
+LIBBPF_API const void *bpf_map__initial_value(struct bpf_map *map, size_t *psize);
LIBBPF_API bool bpf_map__is_offload_neutral(const struct bpf_map *map);
LIBBPF_API bool bpf_map__is_internal(const struct bpf_map *map);
LIBBPF_API int bpf_map__set_pin_path(struct bpf_map *map, const char *path);
@@ -800,6 +801,18 @@ LIBBPF_API int bpf_object__attach_skeleton(struct bpf_object_skeleton *s);
LIBBPF_API void bpf_object__detach_skeleton(struct bpf_object_skeleton *s);
LIBBPF_API void bpf_object__destroy_skeleton(struct bpf_object_skeleton *s);

+struct gen_loader_opts {
+	size_t sz; /* size of this struct, for forward/backward compatibility */
+	const char *data;
+	const char *insns;
+	__u32 data_sz;
+	__u32 insns_sz;
+};
+
+#define gen_loader_opts__last_field insns_sz
+LIBBPF_API int bpf_object__gen_loader(struct bpf_object *obj,
+				      struct gen_loader_opts *opts);
+
enum libbpf_tristate {
	TRI_NO = 0,
	TRI_YES = 1,
......
@@ -359,7 +359,9 @@ LIBBPF_0.4.0 {
		bpf_linker__finalize;
		bpf_linker__free;
		bpf_linker__new;
+		bpf_map__initial_value;
		bpf_map__inner_map;
+		bpf_object__gen_loader;
		bpf_object__set_kversion;
		bpf_tc_attach;
		bpf_tc_detach;
......
@@ -258,6 +258,8 @@ int bpf_object__section_size(const struct bpf_object *obj, const char *name,
int bpf_object__variable_offset(const struct bpf_object *obj, const char *name,
				__u32 *off);
struct btf *btf_get_from_fd(int btf_fd, struct btf *base_btf);
+void btf_get_kernel_prefix_kind(enum bpf_attach_type attach_type,
+				const char **prefix, int *kind);

struct btf_ext_info {
	/*
......
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
/* Copyright (c) 2021 Facebook */
#ifndef __SKEL_INTERNAL_H
#define __SKEL_INTERNAL_H
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/mman.h>
/* This file is a base header for auto-generated *.lskel.h files.
* Its contents will change and may become part of auto-generation in the future.
*
 * The layout of bpf_[map|prog]_desc and bpf_loader_ctx is feature dependent
 * and will change from one version of libbpf to another, depending on the
 * features requested during loader program generation.
 */
struct bpf_map_desc {
union {
/* input for the loader prog */
struct {
__aligned_u64 initial_value;
__u32 max_entries;
};
/* output of the loader prog */
struct {
int map_fd;
};
};
};
struct bpf_prog_desc {
int prog_fd;
};
struct bpf_loader_ctx {
size_t sz;
__u32 log_level;
__u32 log_size;
__u64 log_buf;
};
struct bpf_load_and_run_opts {
struct bpf_loader_ctx *ctx;
const void *data;
const void *insns;
__u32 data_sz;
__u32 insns_sz;
const char *errstr;
};
static inline int skel_sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
unsigned int size)
{
return syscall(__NR_bpf, cmd, attr, size);
}
static inline int skel_closenz(int fd)
{
if (fd > 0)
return close(fd);
return -EINVAL;
}
static inline int bpf_load_and_run(struct bpf_load_and_run_opts *opts)
{
int map_fd = -1, prog_fd = -1, key = 0, err;
union bpf_attr attr;
map_fd = bpf_create_map_name(BPF_MAP_TYPE_ARRAY, "__loader.map", 4,
opts->data_sz, 1, 0);
if (map_fd < 0) {
opts->errstr = "failed to create loader map";
err = -errno;
goto out;
}
err = bpf_map_update_elem(map_fd, &key, opts->data, 0);
if (err < 0) {
opts->errstr = "failed to update loader map";
err = -errno;
goto out;
}
memset(&attr, 0, sizeof(attr));
attr.prog_type = BPF_PROG_TYPE_SYSCALL;
attr.insns = (long) opts->insns;
attr.insn_cnt = opts->insns_sz / sizeof(struct bpf_insn);
attr.license = (long) "Dual BSD/GPL";
memcpy(attr.prog_name, "__loader.prog", sizeof("__loader.prog"));
attr.fd_array = (long) &map_fd;
attr.log_level = opts->ctx->log_level;
attr.log_size = opts->ctx->log_size;
attr.log_buf = opts->ctx->log_buf;
attr.prog_flags = BPF_F_SLEEPABLE;
prog_fd = skel_sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
if (prog_fd < 0) {
opts->errstr = "failed to load loader prog";
err = -errno;
goto out;
}
memset(&attr, 0, sizeof(attr));
attr.test.prog_fd = prog_fd;
attr.test.ctx_in = (long) opts->ctx;
attr.test.ctx_size_in = opts->ctx->sz;
err = skel_sys_bpf(BPF_PROG_TEST_RUN, &attr, sizeof(attr));
if (err < 0 || (int)attr.test.retval < 0) {
opts->errstr = "failed to execute loader prog";
if (err < 0)
err = -errno;
else
err = (int)attr.test.retval;
goto out;
}
err = 0;
out:
if (map_fd >= 0)
close(map_fd);
if (prog_fd >= 0)
close(prog_fd);
return err;
}
#endif
@@ -30,6 +30,7 @@ test_sysctl
xdping
test_cpp
*.skel.h
+*.lskel.h
/no_alu32
/bpf_gcc
/tools
......
@@ -312,6 +312,10 @@ SKEL_BLACKLIST := btf__% test_pinning_invalid.c test_sk_assign.c

LINKED_SKELS := test_static_linked.skel.h linked_funcs.skel.h \
		linked_vars.skel.h linked_maps.skel.h

+LSKELS := kfunc_call_test.c fentry_test.c fexit_test.c fexit_sleep.c \
+	test_ksyms_module.c test_ringbuf.c atomics.c trace_printk.c
+SKEL_BLACKLIST += $$(LSKELS)
+
test_static_linked.skel.h-deps := test_static_linked1.o test_static_linked2.o
linked_funcs.skel.h-deps := linked_funcs1.o linked_funcs2.o
linked_vars.skel.h-deps := linked_vars1.o linked_vars2.o
@@ -339,6 +343,7 @@ TRUNNER_BPF_OBJS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.o, $$(TRUNNER_BPF_SRCS)
TRUNNER_BPF_SKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.skel.h,	\
		     $$(filter-out $(SKEL_BLACKLIST) $(LINKED_BPF_SRCS),\
				   $$(TRUNNER_BPF_SRCS)))
+TRUNNER_BPF_LSKELS := $$(patsubst %.c,$$(TRUNNER_OUTPUT)/%.lskel.h, $$(LSKELS))
TRUNNER_BPF_SKELS_LINKED := $$(addprefix $$(TRUNNER_OUTPUT)/,$(LINKED_SKELS))
TEST_GEN_FILES += $$(TRUNNER_BPF_OBJS)
@@ -380,6 +385,14 @@ $(TRUNNER_BPF_SKELS): %.skel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
	$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
	$(Q)$$(BPFTOOL) gen skeleton $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$@

+$(TRUNNER_BPF_LSKELS): %.lskel.h: %.o $(BPFTOOL) | $(TRUNNER_OUTPUT)
+	$$(call msg,GEN-SKEL,$(TRUNNER_BINARY),$$@)
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked1.o) $$<
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked2.o) $$(<:.o=.linked1.o)
+	$(Q)$$(BPFTOOL) gen object $$(<:.o=.linked3.o) $$(<:.o=.linked2.o)
+	$(Q)diff $$(<:.o=.linked2.o) $$(<:.o=.linked3.o)
+	$(Q)$$(BPFTOOL) gen skeleton -L $$(<:.o=.linked3.o) name $$(notdir $$(<:.o=)) > $$@
+
$(TRUNNER_BPF_SKELS_LINKED): $(TRUNNER_BPF_OBJS) $(BPFTOOL) | $(TRUNNER_OUTPUT)
	$$(call msg,LINK-BPF,$(TRUNNER_BINARY),$$(@:.skel.h=.o))
	$(Q)$$(BPFTOOL) gen object $$(@:.skel.h=.linked1.o) $$(addprefix $(TRUNNER_OUTPUT)/,$$($$(@F)-deps))
@@ -409,6 +422,7 @@ $(TRUNNER_TEST_OBJS): $(TRUNNER_OUTPUT)/%.test.o:	\
		      $(TRUNNER_EXTRA_HDRS)		\
		      $(TRUNNER_BPF_OBJS)		\
		      $(TRUNNER_BPF_SKELS)		\
+		      $(TRUNNER_BPF_LSKELS)		\
		      $(TRUNNER_BPF_SKELS_LINKED)	\
		      $$(BPFOBJ) | $(TRUNNER_OUTPUT)
	$$(call msg,TEST-OBJ,$(TRUNNER_BINARY),$$@)
@@ -516,6 +530,6 @@ $(OUTPUT)/bench: $(OUTPUT)/bench.o $(OUTPUT)/testing_helpers.o \
EXTRA_CLEAN := $(TEST_CUSTOM_PROGS) $(SCRATCH_DIR) $(HOST_SCRATCH_DIR)	\
	prog_tests/tests.h map_tests/tests.h verifier/tests.h		\
	feature								\
-	$(addprefix $(OUTPUT)/,*.o *.skel.h no_alu32 bpf_gcc bpf_testmod.ko)
+	$(addprefix $(OUTPUT)/,*.o *.skel.h *.lskel.h no_alu32 bpf_gcc bpf_testmod.ko)

.PHONY: docs docs-clean
@@ -2,19 +2,19 @@

#include <test_progs.h>

-#include "atomics.skel.h"
+#include "atomics.lskel.h"

static void test_add(struct atomics *skel)
{
	int err, prog_fd;
	__u32 duration = 0, retval;
-	struct bpf_link *link;
+	int link_fd;

-	link = bpf_program__attach(skel->progs.add);
-	if (CHECK(IS_ERR(link), "attach(add)", "err: %ld\n", PTR_ERR(link)))
+	link_fd = atomics__add__attach(skel);
+	if (!ASSERT_GT(link_fd, 0, "attach(add)"))
		return;

-	prog_fd = bpf_program__fd(skel->progs.add);
+	prog_fd = skel->progs.add.prog_fd;
	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
				NULL, NULL, &retval, &duration);
	if (CHECK(err || retval, "test_run add",
@@ -33,20 +33,20 @@ static void test_add(struct atomics *skel)
	ASSERT_EQ(skel->data->add_noreturn_value, 3, "add_noreturn_value");

cleanup:
-	bpf_link__destroy(link);
+	close(link_fd);
}

static void test_sub(struct atomics *skel)
{
	int err, prog_fd;
	__u32 duration = 0, retval;
-	struct bpf_link *link;
+	int link_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.sub); link_fd = atomics__sub__attach(skel);
if (CHECK(IS_ERR(link), "attach(sub)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(sub)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.sub); prog_fd = skel->progs.sub.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run sub", if (CHECK(err || retval, "test_run sub",
...@@ -66,20 +66,20 @@ static void test_sub(struct atomics *skel) ...@@ -66,20 +66,20 @@ static void test_sub(struct atomics *skel)
ASSERT_EQ(skel->data->sub_noreturn_value, -1, "sub_noreturn_value"); ASSERT_EQ(skel->data->sub_noreturn_value, -1, "sub_noreturn_value");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
static void test_and(struct atomics *skel) static void test_and(struct atomics *skel)
{ {
int err, prog_fd; int err, prog_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.and); link_fd = atomics__and__attach(skel);
if (CHECK(IS_ERR(link), "attach(and)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(and)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.and); prog_fd = skel->progs.and.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run and", if (CHECK(err || retval, "test_run and",
...@@ -94,20 +94,20 @@ static void test_and(struct atomics *skel) ...@@ -94,20 +94,20 @@ static void test_and(struct atomics *skel)
ASSERT_EQ(skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value"); ASSERT_EQ(skel->data->and_noreturn_value, 0x010ull << 32, "and_noreturn_value");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
static void test_or(struct atomics *skel) static void test_or(struct atomics *skel)
{ {
int err, prog_fd; int err, prog_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.or); link_fd = atomics__or__attach(skel);
if (CHECK(IS_ERR(link), "attach(or)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(or)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.or); prog_fd = skel->progs.or.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run or", if (CHECK(err || retval, "test_run or",
...@@ -123,20 +123,20 @@ static void test_or(struct atomics *skel) ...@@ -123,20 +123,20 @@ static void test_or(struct atomics *skel)
ASSERT_EQ(skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value"); ASSERT_EQ(skel->data->or_noreturn_value, 0x111ull << 32, "or_noreturn_value");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
static void test_xor(struct atomics *skel) static void test_xor(struct atomics *skel)
{ {
int err, prog_fd; int err, prog_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.xor); link_fd = atomics__xor__attach(skel);
if (CHECK(IS_ERR(link), "attach(xor)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(xor)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.xor); prog_fd = skel->progs.xor.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run xor", if (CHECK(err || retval, "test_run xor",
...@@ -151,20 +151,20 @@ static void test_xor(struct atomics *skel) ...@@ -151,20 +151,20 @@ static void test_xor(struct atomics *skel)
ASSERT_EQ(skel->data->xor_noreturn_value, 0x101ull << 32, "xor_nxoreturn_value"); ASSERT_EQ(skel->data->xor_noreturn_value, 0x101ull << 32, "xor_nxoreturn_value");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
static void test_cmpxchg(struct atomics *skel) static void test_cmpxchg(struct atomics *skel)
{ {
int err, prog_fd; int err, prog_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.cmpxchg); link_fd = atomics__cmpxchg__attach(skel);
if (CHECK(IS_ERR(link), "attach(cmpxchg)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(cmpxchg)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.cmpxchg); prog_fd = skel->progs.cmpxchg.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run add", if (CHECK(err || retval, "test_run add",
...@@ -180,20 +180,20 @@ static void test_cmpxchg(struct atomics *skel) ...@@ -180,20 +180,20 @@ static void test_cmpxchg(struct atomics *skel)
ASSERT_EQ(skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed"); ASSERT_EQ(skel->bss->cmpxchg32_result_succeed, 1, "cmpxchg_result_succeed");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
static void test_xchg(struct atomics *skel) static void test_xchg(struct atomics *skel)
{ {
int err, prog_fd; int err, prog_fd;
__u32 duration = 0, retval; __u32 duration = 0, retval;
struct bpf_link *link; int link_fd;
link = bpf_program__attach(skel->progs.xchg); link_fd = atomics__xchg__attach(skel);
if (CHECK(IS_ERR(link), "attach(xchg)", "err: %ld\n", PTR_ERR(link))) if (!ASSERT_GT(link_fd, 0, "attach(xchg)"))
return; return;
prog_fd = bpf_program__fd(skel->progs.xchg); prog_fd = skel->progs.xchg.prog_fd;
err = bpf_prog_test_run(prog_fd, 1, NULL, 0, err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
NULL, NULL, &retval, &duration); NULL, NULL, &retval, &duration);
if (CHECK(err || retval, "test_run add", if (CHECK(err || retval, "test_run add",
...@@ -207,7 +207,7 @@ static void test_xchg(struct atomics *skel) ...@@ -207,7 +207,7 @@ static void test_xchg(struct atomics *skel)
ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result"); ASSERT_EQ(skel->bss->xchg32_result, 1, "xchg32_result");
cleanup: cleanup:
bpf_link__destroy(link); close(link_fd);
} }
void test_atomics(void) void test_atomics(void)
......
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 Facebook */
 #include <test_progs.h>
-#include "fentry_test.skel.h"
-#include "fexit_test.skel.h"
+#include "fentry_test.lskel.h"
+#include "fexit_test.lskel.h"

 void test_fentry_fexit(void)
 {
@@ -26,7 +26,7 @@ void test_fentry_fexit(void)
 	if (CHECK(err, "fexit_attach", "fexit attach failed: %d\n", err))
 		goto close_prog;

-	prog_fd = bpf_program__fd(fexit_skel->progs.test1);
+	prog_fd = fexit_skel->progs.test1.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
 				NULL, NULL, &retval, &duration);
 	CHECK(err || retval, "ipv6",
......
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 Facebook */
 #include <test_progs.h>
-#include "fentry_test.skel.h"
+#include "fentry_test.lskel.h"

 static int fentry_test(struct fentry_test *fentry_skel)
 {
 	int err, prog_fd, i;
 	__u32 duration = 0, retval;
-	struct bpf_link *link;
+	int link_fd;
 	__u64 *result;

 	err = fentry_test__attach(fentry_skel);
@@ -15,11 +15,11 @@ static int fentry_test(struct fentry_test *fentry_skel)
 		return err;

 	/* Check that already linked program can't be attached again. */
-	link = bpf_program__attach(fentry_skel->progs.test1);
-	if (!ASSERT_ERR_PTR(link, "fentry_attach_link"))
+	link_fd = fentry_test__test1__attach(fentry_skel);
+	if (!ASSERT_LT(link_fd, 0, "fentry_attach_link"))
 		return -1;

-	prog_fd = bpf_program__fd(fentry_skel->progs.test1);
+	prog_fd = fentry_skel->progs.test1.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
 				NULL, NULL, &retval, &duration);
 	ASSERT_OK(err, "test_run");
......
@@ -6,7 +6,7 @@
 #include <time.h>
 #include <sys/mman.h>
 #include <sys/syscall.h>
-#include "fexit_sleep.skel.h"
+#include "fexit_sleep.lskel.h"

 static int do_sleep(void *skel)
 {
@@ -58,8 +58,8 @@ void test_fexit_sleep(void)
 	 * waiting for percpu_ref_kill to confirm). The other one
 	 * will be freed quickly.
 	 */
-	close(bpf_program__fd(fexit_skel->progs.nanosleep_fentry));
-	close(bpf_program__fd(fexit_skel->progs.nanosleep_fexit));
+	close(fexit_skel->progs.nanosleep_fentry.prog_fd);
+	close(fexit_skel->progs.nanosleep_fexit.prog_fd);
 	fexit_sleep__detach(fexit_skel);

 	/* kill the thread to unwind sys_nanosleep stack through the trampoline */
......
 // SPDX-License-Identifier: GPL-2.0
 /* Copyright (c) 2019 Facebook */
 #include <test_progs.h>
-#include "fexit_test.skel.h"
+#include "fexit_test.lskel.h"

 static int fexit_test(struct fexit_test *fexit_skel)
 {
 	int err, prog_fd, i;
 	__u32 duration = 0, retval;
-	struct bpf_link *link;
+	int link_fd;
 	__u64 *result;

 	err = fexit_test__attach(fexit_skel);
@@ -15,11 +15,11 @@ static int fexit_test(struct fexit_test *fexit_skel)
 		return err;

 	/* Check that already linked program can't be attached again. */
-	link = bpf_program__attach(fexit_skel->progs.test1);
-	if (!ASSERT_ERR_PTR(link, "fexit_attach_link"))
+	link_fd = fexit_test__test1__attach(fexit_skel);
+	if (!ASSERT_LT(link_fd, 0, "fexit_attach_link"))
 		return -1;

-	prog_fd = bpf_program__fd(fexit_skel->progs.test1);
+	prog_fd = fexit_skel->progs.test1.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, NULL, 0,
 				NULL, NULL, &retval, &duration);
 	ASSERT_OK(err, "test_run");
......
@@ -2,7 +2,7 @@
 /* Copyright (c) 2021 Facebook */
 #include <test_progs.h>
 #include <network_helpers.h>
-#include "kfunc_call_test.skel.h"
+#include "kfunc_call_test.lskel.h"
 #include "kfunc_call_test_subprog.skel.h"

 static void test_main(void)
@@ -14,13 +14,13 @@ static void test_main(void)
 	if (!ASSERT_OK_PTR(skel, "skel"))
 		return;

-	prog_fd = bpf_program__fd(skel->progs.kfunc_call_test1);
+	prog_fd = skel->progs.kfunc_call_test1.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
 				NULL, NULL, (__u32 *)&retval, NULL);
 	ASSERT_OK(err, "bpf_prog_test_run(test1)");
 	ASSERT_EQ(retval, 12, "test1-retval");

-	prog_fd = bpf_program__fd(skel->progs.kfunc_call_test2);
+	prog_fd = skel->progs.kfunc_call_test2.prog_fd;
 	err = bpf_prog_test_run(prog_fd, 1, &pkt_v4, sizeof(pkt_v4),
 				NULL, NULL, (__u32 *)&retval, NULL);
 	ASSERT_OK(err, "bpf_prog_test_run(test2)");
......
@@ -4,7 +4,7 @@
 #include <test_progs.h>
 #include <bpf/libbpf.h>
 #include <bpf/btf.h>
-#include "test_ksyms_module.skel.h"
+#include "test_ksyms_module.lskel.h"

 static int duration;
......
@@ -12,7 +12,7 @@
 #include <sys/sysinfo.h>
 #include <linux/perf_event.h>
 #include <linux/ring_buffer.h>
-#include "test_ringbuf.skel.h"
+#include "test_ringbuf.lskel.h"

 #define EDONE 7777
@@ -93,9 +93,7 @@ void test_ringbuf(void)
 	if (CHECK(!skel, "skel_open", "skeleton open failed\n"))
 		return;

-	err = bpf_map__set_max_entries(skel->maps.ringbuf, page_size);
-	if (CHECK(err != 0, "bpf_map__set_max_entries", "bpf_map__set_max_entries failed\n"))
-		goto cleanup;
+	skel->maps.ringbuf.max_entries = page_size;

 	err = test_ringbuf__load(skel);
 	if (CHECK(err != 0, "skel_load", "skeleton load failed\n"))
@@ -104,7 +102,7 @@ void test_ringbuf(void)
 	/* only trigger BPF program for current process */
 	skel->bss->pid = getpid();

-	ringbuf = ring_buffer__new(bpf_map__fd(skel->maps.ringbuf),
+	ringbuf = ring_buffer__new(skel->maps.ringbuf.map_fd,
 				   process_sample, NULL, NULL);
 	if (CHECK(!ringbuf, "ringbuf_create", "failed to create ringbuf\n"))
 		goto cleanup;
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <test_progs.h>
#include "syscall.skel.h"

struct args {
	__u64 log_buf;
	__u32 log_size;
	int max_entries;
	int map_fd;
	int prog_fd;
	int btf_fd;
};

void test_syscall(void)
{
	static char verifier_log[8192];
	struct args ctx = {
		.max_entries = 1024,
		.log_buf = (uintptr_t) verifier_log,
		.log_size = sizeof(verifier_log),
	};
	struct bpf_prog_test_run_attr tattr = {
		.ctx_in = &ctx,
		.ctx_size_in = sizeof(ctx),
	};
	struct syscall *skel = NULL;
	__u64 key = 12, value = 0;
	int err;

	skel = syscall__open_and_load();
	if (!ASSERT_OK_PTR(skel, "skel_load"))
		goto cleanup;

	tattr.prog_fd = bpf_program__fd(skel->progs.bpf_prog);
	err = bpf_prog_test_run_xattr(&tattr);
	ASSERT_EQ(err, 0, "err");
	ASSERT_EQ(tattr.retval, 1, "retval");
	ASSERT_GT(ctx.map_fd, 0, "ctx.map_fd");
	ASSERT_GT(ctx.prog_fd, 0, "ctx.prog_fd");
	ASSERT_OK(memcmp(verifier_log, "processed", sizeof("processed") - 1),
		  "verifier_log");

	err = bpf_map_lookup_elem(ctx.map_fd, &key, &value);
	ASSERT_EQ(err, 0, "map_lookup");
	ASSERT_EQ(value, 34, "map lookup value");
cleanup:
	syscall__destroy(skel);
	if (ctx.prog_fd > 0)
		close(ctx.prog_fd);
	if (ctx.map_fd > 0)
		close(ctx.map_fd);
	if (ctx.btf_fd > 0)
		close(ctx.btf_fd);
}
@@ -3,7 +3,7 @@
 #include <test_progs.h>

-#include "trace_printk.skel.h"
+#include "trace_printk.lskel.h"

 #define TRACEBUF	"/sys/kernel/debug/tracing/trace_pipe"
 #define SEARCHMSG	"testing,testing"
@@ -21,6 +21,9 @@ void test_trace_printk(void)
 	if (CHECK(!skel, "skel_open", "failed to open skeleton\n"))
 		return;

+	ASSERT_EQ(skel->rodata->fmt[0], 'T', "invalid printk fmt string");
+	skel->rodata->fmt[0] = 't';
+
 	err = trace_printk__load(skel);
 	if (CHECK(err, "skel_load", "failed to load skeleton: %d\n", err))
 		goto cleanup;
......
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2021 Facebook */
#include <linux/stddef.h>
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <../../../tools/include/linux/filter.h>
#include <linux/btf.h>

char _license[] SEC("license") = "GPL";

struct args {
	__u64 log_buf;
	__u32 log_size;
	int max_entries;
	int map_fd;
	int prog_fd;
	int btf_fd;
};

#define BTF_INFO_ENC(kind, kind_flag, vlen) \
	((!!(kind_flag) << 31) | ((kind) << 24) | ((vlen) & BTF_MAX_VLEN))
#define BTF_TYPE_ENC(name, info, size_or_type) (name), (info), (size_or_type)
#define BTF_INT_ENC(encoding, bits_offset, nr_bits) \
	((encoding) << 24 | (bits_offset) << 16 | (nr_bits))
#define BTF_TYPE_INT_ENC(name, encoding, bits_offset, bits, sz) \
	BTF_TYPE_ENC(name, BTF_INFO_ENC(BTF_KIND_INT, 0, 0), sz), \
	BTF_INT_ENC(encoding, bits_offset, bits)

static int btf_load(void)
{
	struct btf_blob {
		struct btf_header btf_hdr;
		__u32 types[8];
		__u32 str;
	} raw_btf = {
		.btf_hdr = {
			.magic = BTF_MAGIC,
			.version = BTF_VERSION,
			.hdr_len = sizeof(struct btf_header),
			.type_len = sizeof(__u32) * 8,
			.str_off = sizeof(__u32) * 8,
			.str_len = sizeof(__u32),
		},
		.types = {
			/* long */
			BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 64, 8),  /* [1] */
			/* unsigned long */
			BTF_TYPE_INT_ENC(0, 0, 0, 64, 8),  /* [2] */
		},
	};
	static union bpf_attr btf_load_attr = {
		.btf_size = sizeof(raw_btf),
	};

	btf_load_attr.btf = (long)&raw_btf;
	return bpf_sys_bpf(BPF_BTF_LOAD, &btf_load_attr, sizeof(btf_load_attr));
}
SEC("syscall")
int bpf_prog(struct args *ctx)
{
	static char license[] = "GPL";
	static struct bpf_insn insns[] = {
		BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
		BPF_LD_MAP_FD(BPF_REG_1, 0),
		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
		BPF_MOV64_IMM(BPF_REG_0, 0),
		BPF_EXIT_INSN(),
	};
	static union bpf_attr map_create_attr = {
		.map_type = BPF_MAP_TYPE_HASH,
		.key_size = 8,
		.value_size = 8,
		.btf_key_type_id = 1,
		.btf_value_type_id = 2,
	};
	static union bpf_attr map_update_attr = { .map_fd = 1, };
	static __u64 key = 12;
	static __u64 value = 34;
	static union bpf_attr prog_load_attr = {
		.prog_type = BPF_PROG_TYPE_XDP,
		.insn_cnt = sizeof(insns) / sizeof(insns[0]),
	};
	int ret;

	ret = btf_load();
	if (ret <= 0)
		return ret;
	ctx->btf_fd = ret;

	map_create_attr.max_entries = ctx->max_entries;
	map_create_attr.btf_fd = ret;

	prog_load_attr.license = (long) license;
	prog_load_attr.insns = (long) insns;
	prog_load_attr.log_buf = ctx->log_buf;
	prog_load_attr.log_size = ctx->log_size;
	prog_load_attr.log_level = 1;

	ret = bpf_sys_bpf(BPF_MAP_CREATE, &map_create_attr, sizeof(map_create_attr));
	if (ret <= 0)
		return ret;
	ctx->map_fd = ret;
	insns[3].imm = ret;

	map_update_attr.map_fd = ret;
	map_update_attr.key = (long) &key;
	map_update_attr.value = (long) &value;
	ret = bpf_sys_bpf(BPF_MAP_UPDATE_ELEM, &map_update_attr, sizeof(map_update_attr));
	if (ret < 0)
		return ret;

	ret = bpf_sys_bpf(BPF_PROG_LOAD, &prog_load_attr, sizeof(prog_load_attr));
	if (ret <= 0)
		return ret;
	ctx->prog_fd = ret;
	return 1;
}
@@ -35,7 +35,7 @@ long prod_pos = 0;
 /* inner state */
 long seq = 0;

-SEC("tp/syscalls/sys_enter_getpgid")
+SEC("fentry/__x64_sys_getpgid")
 int test_ringbuf(void *ctx)
 {
 	int cur_pid = bpf_get_current_pid_tgid() >> 32;
@@ -48,7 +48,7 @@ int test_ringbuf(void *ctx)
 	sample = bpf_ringbuf_reserve(&ringbuf, sizeof(*sample), 0);
 	if (!sample) {
 		__sync_fetch_and_add(&dropped, 1);
-		return 1;
+		return 0;
 	}

 	sample->pid = pid;
@@ -4,8 +4,18 @@

 const char LICENSE[] SEC("license") = "GPL";

+struct {
+	__uint(type, BPF_MAP_TYPE_ARRAY);
+	__uint(max_entries, 1);
+	__type(key, __u32);
+	__type(value, __u64);
+} array SEC(".maps");
+
 __noinline int sub1(int x)
 {
+	int key = 0;
+
+	bpf_map_lookup_elem(&array, &key);
 	return x + 1;
 }
@@ -23,6 +33,9 @@ static __noinline int sub3(int z)
 static __noinline int sub4(int w)
 {
+	int key = 0;
+
+	bpf_map_lookup_elem(&array, &key);
 	return w + sub3(5) + sub1(6);
 }
......
@@ -10,11 +10,11 @@ char _license[] SEC("license") = "GPL";
 int trace_printk_ret = 0;
 int trace_printk_ran = 0;

-SEC("tp/raw_syscalls/sys_enter")
+const char fmt[] = "Testing,testing %d\n";
+
+SEC("fentry/__x64_sys_nanosleep")
 int sys_enter(void *ctx)
 {
-	static const char fmt[] = "testing,testing %d\n";
-
 	trace_printk_ret = bpf_trace_printk(fmt, sizeof(fmt),
 					    ++trace_printk_ran);
 	return 0;