1. 08 Mar 2022, 2 commits
  2. 15 Nov 2021, 1 commit
    • bpf: Add ambient BPF runtime context stored in current · 460f91d2
      Andrii Nakryiko committed
      mainline inclusion
      from mainline-v5.15-rc1
      commit c7603cfa
      category: bugfix
      bugzilla: 182996 https://gitee.com/openeuler/kernel/issues/I4DDEL
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=c7603cfa04e7c3a435b31d065f7cbdc829428f6e
      
      --------------------------------
      
      b910eaaa ("bpf: Fix NULL pointer dereference in bpf_get_local_storage()
      helper") fixed the problem with cgroup-local storage use in BPF by
      pre-allocating per-CPU array of 8 cgroup storage pointers to accommodate
      possible BPF program preemptions and nested executions.
      
      While this seems to work well in practice, it introduces a new and unnecessary
      failure mode in which not all BPF programs might be executed if we fail to
      find an unused slot for cgroup storage, however unlikely that is. It might also
      not be so unlikely when/if we allow sleepable cgroup BPF programs in the
      future.
      
      Further, the way that cgroup storage is implemented as an ambiently-available
      property during the entire BPF program execution is a convenient way to pass
      extra information to a BPF program and its helpers without requiring user code
      to pass around extra arguments explicitly. So it would be good to have a
      generic solution that allows implementing this without arbitrary restrictions.
      Ideally, such a solution would work for both preemptible and sleepable BPF
      programs in exactly the same way.
      
      This patch introduces such solution, bpf_run_ctx. It adds one pointer field
      (bpf_ctx) to task_struct. This field is maintained by BPF_PROG_RUN family of
      macros in such a way that it always stays valid throughout BPF program
      execution. BPF program preemption is handled by remembering previous
      current->bpf_ctx value locally while executing nested BPF program and
      restoring old value after nested BPF program finishes. This is handled by two
      helper functions, bpf_set_run_ctx() and bpf_reset_run_ctx(), which are
      supposed to be used before and after BPF program runs, respectively.
      
      Restoring the old value of the pointer handles preemption, while the
      bpf_run_ctx pointer being a property of the current task_struct naturally
      solves the problem for sleepable BPF programs by "following" BPF program
      execution as it is scheduled in and out of a CPU. It would even allow CPU
      migration of BPF programs, even though that is not currently allowed by the
      BPF infrastructure.
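      As a rough illustration of the protocol described above (a sketch only; the
      runner below is simplified and not the exact BPF_PROG_RUN_ARRAY code, but the
      bpf_set_run_ctx()/bpf_reset_run_ctx() shape follows this patch):
      
      struct bpf_run_ctx {};
      
      struct bpf_cg_run_ctx {
              struct bpf_run_ctx run_ctx;
              const struct bpf_prog_array_item *prog_item;
      };
      
      static inline struct bpf_run_ctx *bpf_set_run_ctx(struct bpf_run_ctx *new_ctx)
      {
              struct bpf_run_ctx *old_ctx = current->bpf_ctx;
      
              current->bpf_ctx = new_ctx;     /* nested run installs its own ctx */
              return old_ctx;
      }
      
      static inline void bpf_reset_run_ctx(struct bpf_run_ctx *old_ctx)
      {
              current->bpf_ctx = old_ctx;     /* restore the interrupted run's ctx */
      }
      
      /* simplified runner: the ambient ctx stays valid for the whole program run */
      static u32 run_one_prog(const struct bpf_prog_array_item *item, void *ctx)
      {
              struct bpf_cg_run_ctx run_ctx = { .prog_item = item };
              struct bpf_run_ctx *old_run_ctx;
              u32 ret;
      
              old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
              ret = BPF_PROG_RUN(item->prog, ctx);
              bpf_reset_run_ctx(old_run_ctx);
              return ret;
      }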
      
      This patch cleans up cgroup local storage handling as a first application. The
      design itself is generic, though, with bpf_run_ctx being an empty struct that
      is supposed to be embedded into a specific struct for a given BPF program type
      (bpf_cg_run_ctx in this case). Follow up patches are planned that will expand
      this mechanism for other uses within tracing BPF programs.
      
      To verify that this change doesn't revert the fix to the original cgroup
      storage issue, I ran the same repro as in the original report ([0]) and didn't
      get any problems. Replacing bpf_reset_run_ctx(old_run_ctx) with
      bpf_reset_run_ctx(NULL) triggers the issue pretty quickly (so repro does work).
      
        [0] https://lore.kernel.org/bpf/YEEvBUiJl2pJkxTd@krava/
      
      Fixes: b910eaaa ("bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper")
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/20210712230615.3525979-1-andrii@kernel.org
      Conflicts:
      	include/linux/bpf.h
      	include/linux/sched.h
      	kernel/fork.c
      	net/bpf/test_run.c
      Signed-off-by: Pu Lehui <pulehui@huawei.com>
      Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  3. 19 Oct 2021, 1 commit
    • bpf: Fix potentially incorrect results with bpf_get_local_storage() · 60ad02ec
      Yonghong Song committed
      mainline inclusion
      from mainline-5.10.62
      commit 0c9a876f2897c64d21a5a414a9007a10cd015a9e
      bugzilla: 182217 https://gitee.com/openeuler/kernel/issues/I4EFOS
      
      Reference: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0c9a876f2897c64d21a5a414a9007a10cd015a9e
      
      --------------------------------
      
      commit a2baf4e8 upstream.
      
      Commit b910eaaa ("bpf: Fix NULL pointer dereference in bpf_get_local_storage()
      helper") fixed a bug for bpf_get_local_storage() helper so different tasks
      won't mess up with each other's percpu local storage.
      
      The percpu data contains 8 slots so it can hold up to 8 contexts (same or
      different tasks), for 8 different program runs, at the same time. This in
      general is sufficient. But our internal testing showed the following warning
      multiple times:
      
        [...]
        warning: WARNING: CPU: 13 PID: 41661 at include/linux/bpf-cgroup.h:193
           __cgroup_bpf_run_filter_sock_ops+0x13e/0x180
        RIP: 0010:__cgroup_bpf_run_filter_sock_ops+0x13e/0x180
        <IRQ>
         tcp_call_bpf.constprop.99+0x93/0xc0
         tcp_conn_request+0x41e/0xa50
         ? tcp_rcv_state_process+0x203/0xe00
         tcp_rcv_state_process+0x203/0xe00
         ? sk_filter_trim_cap+0xbc/0x210
         ? tcp_v6_inbound_md5_hash.constprop.41+0x44/0x160
         tcp_v6_do_rcv+0x181/0x3e0
         tcp_v6_rcv+0xc65/0xcb0
         ip6_protocol_deliver_rcu+0xbd/0x450
         ip6_input_finish+0x11/0x20
         ip6_input+0xb5/0xc0
         ip6_sublist_rcv_finish+0x37/0x50
         ip6_sublist_rcv+0x1dc/0x270
         ipv6_list_rcv+0x113/0x140
         __netif_receive_skb_list_core+0x1a0/0x210
         netif_receive_skb_list_internal+0x186/0x2a0
         gro_normal_list.part.170+0x19/0x40
         napi_complete_done+0x65/0x150
         mlx5e_napi_poll+0x1ae/0x680
         __napi_poll+0x25/0x120
         net_rx_action+0x11e/0x280
         __do_softirq+0xbb/0x271
         irq_exit_rcu+0x97/0xa0
         common_interrupt+0x7f/0xa0
         </IRQ>
         asm_common_interrupt+0x1e/0x40
        RIP: 0010:bpf_prog_1835a9241238291a_tw_egress+0x5/0xbac
         ? __cgroup_bpf_run_filter_skb+0x378/0x4e0
         ? do_softirq+0x34/0x70
         ? ip6_finish_output2+0x266/0x590
         ? ip6_finish_output+0x66/0xa0
         ? ip6_output+0x6c/0x130
         ? ip6_xmit+0x279/0x550
         ? ip6_dst_check+0x61/0xd0
        [...]
      
      Using drgn [0] to dump the percpu buffer contents showed that on this CPU
      slot 0 is still available, but slots 1-7 are occupied and those tasks in
      slots 1-7 mostly don't exist any more. So we might have issues in
      bpf_cgroup_storage_unset().
      
      Further debugging confirmed that there is a bug in bpf_cgroup_storage_unset().
      Currently, it tries to unset the "current" slot by searching from the start.
      So the following sequence is possible:
      
        1. A task is running and claims slot 0.
        2. The running BPF program is done; it has checked that slot 0 holds its
           "task" and is about to reset it to NULL (but has not done so yet).
        3. An interrupt happens, another BPF program runs and claims slot 1
           with the *same* task.
        4. The unset() in interrupt context releases slot 0, since it matches "task".
        5. The interrupt is done; the task in process context resets slot 0.
      
      At the end, slot 1 is never reset, so the same process can go on to occupy
      slots 2-7, and finally, when steps 1-5 above repeat, the BPF program in step 3
      won't be able to claim an empty slot and a warning will be issued.
      
      To fix the issue, the unset() function should traverse from the last slot to
      the first. This way, the above issue can be avoided.
      
      The same reverse traversal should also be done in the bpf_get_local_storage()
      helper itself. Otherwise, incorrect local storage may be returned to the BPF
      program.
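      A sketch of the reverse traversal (paraphrasing the pre-bpf_run_ctx slot
      bookkeeping this patch touches; not a verbatim copy of the diff):
      
      /* Search from the last slot back to the first, so that a nested
       * (e.g. interrupt-context) claim of a higher slot is released before
       * the outer program's lower slot, avoiding the sequence above. */
      static inline void bpf_cgroup_storage_unset(void)
      {
              int i;
      
              for (i = BPF_CGROUP_STORAGE_NEST_MAX - 1; i >= 0; i--) {
                      if (this_cpu_read(bpf_cgroup_storage_info[i].task) != current)
                              continue;
      
                      this_cpu_write(bpf_cgroup_storage_info[i].task, NULL);
                      return;
              }
      }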
      
        [0] https://github.com/osandov/drgn
      
      Fixes: b910eaaa ("bpf: Fix NULL pointer dereference in bpf_get_local_storage() helper")
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20210810010413.1976277-1-yhs@fb.com
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Acked-by: Weilong Chen <chenweilong@huawei.com>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  4. 03 Jul 2021, 1 commit
  5. 15 Jun 2021, 2 commits
    • bpf, lockdown, audit: Fix buggy SELinux lockdown permission checks · 2e0ed072
      Daniel Borkmann committed
      stable inclusion
      from stable-5.10.43
      commit ff5039ec75c83d2ed5b781dc7733420ee8c985fc
      bugzilla: 109284
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit ff40e510 ]
      
      Commit 59438b46 ("security,lockdown,selinux: implement SELinux lockdown")
      added an implementation of the locked_down LSM hook to SELinux, with the aim
      of restricting which domains are allowed to perform operations that would
      breach lockdown. This indirectly also gets the audit subsystem involved in
      reporting events. The latter is problematic, as reported by Ondrej and Serhei,
      since it can bring down the whole system via audit:
      
        1) The audit events that are triggered due to calls to security_locked_down()
           can OOM kill a machine, see below details [0].
      
        2) It also seems to be causing a deadlock via avc_has_perm()/slow_avc_audit()
           when trying to wake up kauditd, for example, when using trace_sched_switch()
           tracepoint, see details in [1]. Triggering this was not via some hypothetical
           corner case, but with existing tools like runqlat & runqslower from bcc, for
           example, which make use of this tracepoint. Rough call sequence goes like:
      
           rq_lock(rq) -> -------------------------+
             trace_sched_switch() ->               |
               bpf_prog_xyz() ->                   +-> deadlock
                 selinux_lockdown() ->             |
                   audit_log_end() ->              |
                     wake_up_interruptible() ->    |
                       try_to_wake_up() ->         |
                         rq_lock(rq) --------------+
      
      What's worse is that the intention of 59438b46 to further restrict lockdown
      settings for specific applications in respect to the global lockdown policy is
      completely broken for BPF. The SELinux policy rule for the current lockdown check
      looks something like this:
      
        allow <who> <who> : lockdown { <reason> };
      
      However, this doesn't match the 'current' task against which security_locked_down()
      is executed. For example: httpd does a syscall. There is a tracing program attached
      to the syscall which triggers a BPF program to run, which ends up doing a
      bpf_probe_read_kernel{,_str}() helper call. The selinux_lockdown() hook does
      the permission check against 'current', that is, httpd in this example. httpd
      has literally zero relation to this tracing program, and it would be nonsensical
      having to write an SELinux policy rule against httpd to let the tracing helper
      pass. The policy in this case needs to be against the entity that is installing
      the BPF program. For example, if bpftrace would generate a histogram of syscall
      counts by user space application:
      
        bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @[comm] = count(); }'
      
      bpftrace would then go and generate a BPF program from this internally. One way
      of doing it [for the sake of the example] could be to call bpf_get_current_task()
      helper and then access current->comm via one of bpf_probe_read_kernel{,_str}()
      helpers. So the program itself has nothing to do with httpd or any other random
      app doing a syscall here. The BPF program _explicitly initiated_ the lockdown
      check. The allow/deny policy belongs in the context of bpftrace: meaning, you
      want to grant bpftrace access to use these helpers, but other tracers on the
      system like my_random_tracer _not_.
      
      Therefore fix all three issues at the same time by taking a completely different
      approach for the security_locked_down() hook, that is, move the check into the
      program verification phase where we actually retrieve the BPF func proto. This
      also reliably gets the task (current) that is trying to install the BPF tracing
      program, e.g. bpftrace/bcc/perf/systemtap/etc, and it also fixes the OOM since
      we're moving this out of the BPF helper's fast-path which can be called several
      millions of times per second.
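      A hedged sketch of that approach as applied to the tracing func proto lookup
      (representative of kernel/trace/bpf_trace.c at that time, not a verbatim copy
      of the patch):
      
      static const struct bpf_func_proto *
      bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
      {
              switch (func_id) {
              /* lockdown is consulted once, at load time, against the task that
               * is installing the program (bpftrace/bcc/...), not in the helper */
              case BPF_FUNC_probe_read_kernel:
                      return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
                             NULL : &bpf_probe_read_kernel_proto;
              case BPF_FUNC_probe_read_kernel_str:
                      return security_locked_down(LOCKDOWN_BPF_READ) < 0 ?
                             NULL : &bpf_probe_read_kernel_str_proto;
              /* ... other helpers unchanged ... */
              default:
                      return bpf_base_func_proto(func_id);
              }
      }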
      
      The check is then also in line with other security_locked_down() hooks in the
      system where the enforcement is performed at open/load time, for example,
      open_kcore() for /proc/kcore access or module_sig_check() for module signatures
      just to pick few random ones. What's out of scope in the fix as well as in
      other security_locked_down() hook locations /outside/ of BPF subsystem is that
      if the lockdown policy changes on the fly there is no retrospective action.
      This requires a different discussion, potentially complex infrastructure, and
      it's also not clear whether this can be solved generically. Either way, it is
      out of scope for a suitable stable fix which this one is targeting. Note that
      the breakage is specifically on 59438b46 where it started to rely on 'current'
      as UAPI behavior, and _not_ earlier infrastructure such as 9d1f8be5 ("bpf:
      Restrict bpf when kernel lockdown is in confidentiality mode").
      
      [0] https://bugzilla.redhat.com/show_bug.cgi?id=1955585, Jakub Hrozek says:
      
        I started seeing this with F-34. When I run a container that is traced with
        BPF to record the syscalls it is doing, auditd is flooded with messages like:
      
        type=AVC msg=audit(1619784520.593:282387): avc:  denied  { confidentiality }
          for pid=476 comm="auditd" lockdown_reason="use of bpf to read kernel RAM"
            scontext=system_u:system_r:auditd_t:s0 tcontext=system_u:system_r:auditd_t:s0
              tclass=lockdown permissive=0
      
        This seems to be leading to auditd running out of space in the backlog buffer
        and eventually OOMs the machine.
      
        [...]
        auditd running at 99% CPU presumably processing all the messages, eventually I get:
        Apr 30 12:20:42 fedora kernel: audit: backlog limit exceeded
        Apr 30 12:20:42 fedora kernel: audit: backlog limit exceeded
        Apr 30 12:20:42 fedora kernel: audit: audit_backlog=2152579 > audit_backlog_limit=64
        Apr 30 12:20:42 fedora kernel: audit: audit_backlog=2152626 > audit_backlog_limit=64
        Apr 30 12:20:42 fedora kernel: audit: audit_backlog=2152694 > audit_backlog_limit=64
        Apr 30 12:20:42 fedora kernel: audit: audit_lost=6878426 audit_rate_limit=0 audit_backlog_limit=64
        Apr 30 12:20:45 fedora kernel: oci-seccomp-bpf invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=-1000
        Apr 30 12:20:45 fedora kernel: CPU: 0 PID: 13284 Comm: oci-seccomp-bpf Not tainted 5.11.12-300.fc34.x86_64 #1
        Apr 30 12:20:45 fedora kernel: Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-2.fc32 04/01/2014
        [...]
      
      [1] https://lore.kernel.org/linux-audit/CANYvDQN7H5tVp47fbYcRasv4XF07eUbsDwT_eDCHXJUj43J7jQ@mail.gmail.com/,
          Serhei Makarov says:
      
        Upstream kernel 5.11.0-rc7 and later was found to deadlock during a
        bpf_probe_read_compat() call within a sched_switch tracepoint. The problem
        is reproducible with the reg_alloc3 testcase from SystemTap's BPF backend
        testsuite on x86_64 as well as the runqlat, runqslower tools from bcc on
        ppc64le. Example stack trace:
      
        [...]
        [  730.868702] stack backtrace:
        [  730.869590] CPU: 1 PID: 701 Comm: in:imjournal Not tainted, 5.12.0-0.rc2.20210309git144c79ef.166.fc35.x86_64 #1
        [  730.871605] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
        [  730.873278] Call Trace:
        [  730.873770]  dump_stack+0x7f/0xa1
        [  730.874433]  check_noncircular+0xdf/0x100
        [  730.875232]  __lock_acquire+0x1202/0x1e10
        [  730.876031]  ? __lock_acquire+0xfc0/0x1e10
        [  730.876844]  lock_acquire+0xc2/0x3a0
        [  730.877551]  ? __wake_up_common_lock+0x52/0x90
        [  730.878434]  ? lock_acquire+0xc2/0x3a0
        [  730.879186]  ? lock_is_held_type+0xa7/0x120
        [  730.880044]  ? skb_queue_tail+0x1b/0x50
        [  730.880800]  _raw_spin_lock_irqsave+0x4d/0x90
        [  730.881656]  ? __wake_up_common_lock+0x52/0x90
        [  730.882532]  __wake_up_common_lock+0x52/0x90
        [  730.883375]  audit_log_end+0x5b/0x100
        [  730.884104]  slow_avc_audit+0x69/0x90
        [  730.884836]  avc_has_perm+0x8b/0xb0
        [  730.885532]  selinux_lockdown+0xa5/0xd0
        [  730.886297]  security_locked_down+0x20/0x40
        [  730.887133]  bpf_probe_read_compat+0x66/0xd0
        [  730.887983]  bpf_prog_250599c5469ac7b5+0x10f/0x820
        [  730.888917]  trace_call_bpf+0xe9/0x240
        [  730.889672]  perf_trace_run_bpf_submit+0x4d/0xc0
        [  730.890579]  perf_trace_sched_switch+0x142/0x180
        [  730.891485]  ? __schedule+0x6d8/0xb20
        [  730.892209]  __schedule+0x6d8/0xb20
        [  730.892899]  schedule+0x5b/0xc0
        [  730.893522]  exit_to_user_mode_prepare+0x11d/0x240
        [  730.894457]  syscall_exit_to_user_mode+0x27/0x70
        [  730.895361]  entry_SYSCALL_64_after_hwframe+0x44/0xae
        [...]
      
      Fixes: 59438b46 ("security,lockdown,selinux: implement SELinux lockdown")
      Reported-by: Ondrej Mosnacek <omosnace@redhat.com>
      Reported-by: Jakub Hrozek <jhrozek@redhat.com>
      Reported-by: Serhei Makarov <smakarov@redhat.com>
      Reported-by: Jiri Olsa <jolsa@redhat.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Tested-by: Jiri Olsa <jolsa@redhat.com>
      Cc: Paul Moore <paul@paul-moore.com>
      Cc: James Morris <jamorris@linux.microsoft.com>
      Cc: Jerome Marchand <jmarchan@redhat.com>
      Cc: Frank Eigler <fche@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Link: https://lore.kernel.org/bpf/01135120-8bf7-df2e-cff0-1d73f1f841c3@iogearbox.net
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
    • bpf: Simplify cases in bpf_base_func_proto · 52dff11d
      Tobias Klauser committed
      stable inclusion
      from stable-5.10.43
      commit cdf3f6db1a86fc1e3d70423f4ee4fa81e4831157
      bugzilla: 109284
      CVE: NA
      
      --------------------------------
      
      [ Upstream commit 61ca36c8 ]
      
      !perfmon_capable() is checked before the last switch(func_id) in
      bpf_base_func_proto. Thus, the cases BPF_FUNC_trace_printk and
      BPF_FUNC_snprintf_btf can be moved to that last switch(func_id) to omit
      the inline !perfmon_capable() checks.
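      Roughly, the resulting shape of bpf_base_func_proto() looks like this
      (an abbreviated sketch, not the full helper list):
      
      const struct bpf_func_proto *
      bpf_base_func_proto(enum bpf_func_id func_id)
      {
              switch (func_id) {
              case BPF_FUNC_map_lookup_elem:
                      return &bpf_map_lookup_elem_proto;
              /* ... helpers available to all program types ... */
              default:
                      break;
              }
      
              if (!bpf_capable())
                      return NULL;
      
              /* ... helpers that additionally need CAP_BPF ... */
      
              if (!perfmon_capable())
                      return NULL;
      
              switch (func_id) {
              case BPF_FUNC_trace_printk:     /* moved here, inline check dropped */
                      return bpf_get_trace_printk_proto();
              case BPF_FUNC_snprintf_btf:     /* moved here as well */
                      return &bpf_snprintf_btf_proto;
              default:
                      return NULL;
              }
      }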
      Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20210127174615.3038-1-tklauser@distanz.ch
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      Signed-off-by: Chen Jun <chenjun102@huawei.com>
      Signed-off-by: Zheng Zengkai <zhengzengkai@huawei.com>
  6. 29 Jan 2021, 1 commit
  7. 12 Dec 2020, 1 commit
  8. 03 Oct 2020, 2 commits
  9. 29 Sep 2020, 1 commit
    • bpf: Add bpf_snprintf_btf helper · c4d0bfb4
      Alan Maguire committed
      A helper is added to support tracing kernel type information in BPF
      using the BPF Type Format (BTF).  Its signature is
      
      long bpf_snprintf_btf(char *str, u32 str_size, struct btf_ptr *ptr,
      		      u32 btf_ptr_size, u64 flags);
      
      struct btf_ptr * specifies
      
      - a pointer to the data to be traced
      - the BTF id of the type of data pointed to
      - a flags field provided for future use; these flags
        are not to be confused with the BTF_F_* flags
        below that control how the btf_ptr is displayed; the
        flags member of struct btf_ptr may be used to
        disambiguate types in kernel versus module BTF, etc.
        The main distinction is that these flags relate to the
        type and the information needed to identify it, not to
        how it is displayed.
      
      For example a BPF program with a struct sk_buff *skb
      could do the following:
      
      	static struct btf_ptr b = { };
      
      	b.ptr = skb;
      	b.type_id = __builtin_btf_type_id(struct sk_buff, 1);
              bpf_snprintf_btf(str, sizeof(str), &b, sizeof(b), 0);
      
      Default output looks like this:
      
      (struct sk_buff){
       .transport_header = (__u16)65535,
       .mac_header = (__u16)65535,
       .end = (sk_buff_data_t)192,
       .head = (unsigned char *)0x000000007524fd8b,
       .data = (unsigned char *)0x000000007524fd8b,
       .truesize = (unsigned int)768,
       .users = (refcount_t){
        .refs = (atomic_t){
         .counter = (int)1,
        },
       },
      }
      
      Flags modifying display are as follows:
      
      - BTF_F_COMPACT:	no formatting around type information
      - BTF_F_NONAME:		no struct/union member names/types
      - BTF_F_PTR_RAW:	show raw (unobfuscated) pointer values;
      			equivalent to %px.
      - BTF_F_ZERO:		show zero-valued struct/union members;
      			they are not displayed by default
      Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/1601292670-1616-4-git-send-email-alan.maguire@oracle.com
  10. 29 Aug 2020, 1 commit
  11. 02 Jun 2020, 2 commits
    • bpf: Implement BPF ring buffer and verifier support for it · 457f4436
      Andrii Nakryiko committed
      This commit adds a new MPSC ring buffer implementation into BPF ecosystem,
      which allows multiple CPUs to submit data to a single shared ring buffer. On
      the consumption side, only single consumer is assumed.
      
      Motivation
      ----------
      There are two distinct motivations for this work that are not satisfied by the
      existing perf buffer, and which prompted the creation of a new ring buffer
      implementation:
        - more efficient memory utilization by sharing ring buffer across CPUs;
        - preserving ordering of events that happen sequentially in time, even
        across multiple CPUs (e.g., fork/exec/exit events for a task).
      
      These two problems are independent, but the perf buffer fails to satisfy both.
      Both are a result of the choice to have a per-CPU perf ring buffer. Both can
      also be solved by having an MPSC implementation of a ring buffer. The ordering
      problem could technically be solved for the perf buffer with some in-kernel
      counting, but given that the first one requires an MPSC buffer, the same
      solution would solve the second problem automatically.
      
      Semantics and APIs
      ------------------
      A single ring buffer is presented to BPF programs as an instance of a BPF map
      of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered, but
      ultimately rejected.
      
      One way would be to, similar to BPF_MAP_TYPE_PERF_EVENT_ARRAY, have
      BPF_MAP_TYPE_RINGBUF represent an array of ring buffers, but not enforce the
      "same CPU only" rule. This would be a more familiar interface compatible with
      existing perf buffer use in BPF, but it would fail if the application needed
      more advanced logic to look up a ring buffer by an arbitrary key.
      HASH_OF_MAPS addresses this with the current approach. Additionally, given the
      performance of BPF ringbuf, many use cases would just opt into a simple single
      ring buffer shared among all CPUs, for which the current approach would be
      overkill.
      
      Another approach could introduce a new concept, alongside BPF map, to
      represent generic "container" object, which doesn't necessarily have key/value
      interface with lookup/update/delete operations. This approach would add a lot
      of extra infrastructure that has to be built for observability and verifier
      support. It would also add another concept that BPF developers would have to
      familiarize themselves with, new syntax in libbpf, etc. But then it would
      really provide no additional benefit over the approach of using a map.
      BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but
      neither do a few other map types (e.g., queue and stack; array doesn't support
      delete, etc.).
      
      The approach chosen has an advantage of re-using existing BPF map
      infrastructure (introspection APIs in kernel, libbpf support, etc), being
      familiar concept (no need to teach users a new type of object in BPF program),
      and utilizing existing tooling (bpftool). For common scenario of using
      a single ring buffer for all CPUs, it's as simple and straightforward, as
      would be with a dedicated "container" object. On the other hand, by being
      a map, it can be combined with ARRAY_OF_MAPS and HASH_OF_MAPS map-in-maps to
      implement a wide variety of topologies, from one ring buffer for each CPU
      (e.g., as a replacement for perf buffer use cases), to a complicated
      application hashing/sharding of ring buffers (e.g., having a small pool of
      ring buffers with hashed task's tgid being a look up key to preserve order,
      but reduce contention).
      
      Key and value sizes are enforced to be zero. max_entries is used to specify
      the size of the ring buffer and has to be a power of 2.
      
      There are a bunch of similarities between perf buffer
      (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and new BPF ring buffer semantics:
        - variable-length records;
        - if there is no more space left in ring buffer, reservation fails, no
          blocking;
        - memory-mappable data area for user-space applications for ease of
          consumption and high performance;
        - epoll notifications for new incoming data;
        - but still the ability to do busy polling for new data to achieve the
          lowest latency, if necessary.
      
      BPF ringbuf provides two sets of APIs to BPF programs:
        - bpf_ringbuf_output() allows to *copy* data from one place to a ring
          buffer, similarly to bpf_perf_event_output();
        - bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard() APIs
          split the whole process into two steps. First, a fixed amount of space is
          reserved. If successful, a pointer to a data inside ring buffer data area
          is returned, which BPF programs can use similarly to a data inside
          array/hash maps. Once ready, this piece of memory is either committed or
          discarded. Discard is similar to commit, but makes consumer ignore the
          record.
      
      bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy,
      because the record has to be prepared in some other place first. But it allows
      submitting records of a length that is not known to the verifier beforehand.
      It also closely matches bpf_perf_event_output(), so it will simplify migration
      significantly.
      
      bpf_ringbuf_reserve() avoids the extra copy by providing a pointer directly
      into ring buffer memory. In a lot of cases records are larger than BPF stack
      space allows, so many programs have to use an extra per-CPU array as a
      temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids this need
      completely. But in exchange, it only allows a known, constant size of memory to
      be reserved, such that verifier can verify that BPF program can't access
      memory outside its reserved record space. bpf_ringbuf_output(), while slightly
      slower due to extra memory copy, covers some use cases that are not suitable
      for bpf_ringbuf_reserve().
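      A minimal BPF-side sketch of the reserve/submit flow (the map name "rb" and
      struct "event" are made up for this example; the commented-out line shows the
      bpf_ringbuf_output() alternative):
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct event {
              __u32 pid;
              char comm[16];
      };
      
      struct {
              __uint(type, BPF_MAP_TYPE_RINGBUF);
              __uint(max_entries, 256 * 1024);        /* must be a power of 2 */
      } rb SEC(".maps");
      
      SEC("tracepoint/sched/sched_process_exec")
      int handle_exec(void *ctx)
      {
              struct event *e;
      
              /* zero-copy path: reserve in the ring buffer, fill in place */
              e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
              if (!e)
                      return 0;       /* buffer full (or NMI lock contention) */
      
              e->pid = bpf_get_current_pid_tgid() >> 32;
              bpf_get_current_comm(e->comm, sizeof(e->comm));
              bpf_ringbuf_submit(e, 0);       /* or bpf_ringbuf_discard(e, 0) */
      
              /* copy path: bpf_ringbuf_output(&rb, &tmp, sizeof(tmp), 0); */
              return 0;
      }
      
      char LICENSE[] SEC("license") = "GPL";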
      
      The difference between commit and discard is very small. Discard just marks
      a record as discarded, and such records are supposed to be ignored by consumer
      code. Discard is useful for some advanced use-cases, such as ensuring
      all-or-nothing multi-record submission, or emulating temporary malloc()/free()
      within single BPF program invocation.
      
      Each reserved record is tracked by verifier through existing
      reference-tracking logic, similar to socket ref-tracking. It is thus
      impossible to reserve a record, but forget to submit (or discard) it.
      
      The bpf_ringbuf_query() helper allows querying various properties of a ring
      buffer. Currently 4 are supported:
        - BPF_RB_AVAIL_DATA returns amount of unconsumed data in ring buffer;
        - BPF_RB_RING_SIZE returns the size of ring buffer;
        - BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of
          the consumer/producer, respectively.
      The returned values are momentary snapshots of the ring buffer state and could
      be off by the time the helper returns, so this should be used only for
      debugging/reporting reasons or for implementing various heuristics that take
      into account the highly changeable nature of some of those characteristics.
      
      One such heuristic might involve more fine-grained control over poll/epoll
      notifications about new data availability in ring buffer. Together with
      BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard
      helpers, it gives a BPF program a high degree of control and, e.g., more
      efficient batched notifications. The default self-balancing strategy, though,
      should be adequate for most applications and will work reliably and efficiently
      already.
      
      Design and implementation
      -------------------------
      This reserve/commit schema allows a natural way for multiple producers, either
      on different CPUs or even on the same CPU/in the same BPF program, to reserve
      independent records and work with them without blocking other producers. This
      means that if a BPF program was interrupted by another BPF program sharing the
      same ring buffer, they will both get a record reserved (provided there is
      enough space left) and can work with it and submit it independently. This
      applies to NMI context as well, except that due to using a spinlock during
      reservation, in NMI context, bpf_ringbuf_reserve() might fail to get a lock,
      in which case reservation will fail even if ring buffer is not full.
      
      The ring buffer itself internally is implemented as a power-of-2 sized
      circular buffer, with two logical and ever-increasing counters (which might
      wrap around on 32-bit architectures, that's not a problem):
        - consumer counter shows up to which logical position consumer consumed the
          data;
        - producer counter denotes amount of data reserved by all producers.
      
      Each time a record is reserved, the producer that "owns" the record will
      successfully advance the producer counter. At that point, the data is still
      not yet ready to be consumed, though. Each record has an 8-byte header, which
      contains the length of the reserved record, as well as two extra bits: a busy
      bit to denote that the record is still being worked on, and a discard bit,
      which might be set at commit time if the record is discarded. In the latter
      case, the consumer is supposed to skip the record and move on to the next one.
      The record header also encodes the record's relative offset from the beginning
      of the ring buffer data area (in pages). This allows
      bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only the pointer to the
      record itself, without also requiring the pointer to the ring buffer itself.
      The ring buffer memory location is restored from the record metadata header.
      This significantly simplifies the verifier, as well as improving API usability.
      
      Producer counter increments are serialized under the spinlock, so there is
      a strict ordering between reservations. Commits, on the other hand, are
      completely lockless and independent. All records become available to the
      consumer in the order of reservations, but only after all previous records
      were already committed. It is thus possible for slow producers to temporarily
      hold off submitted records that were reserved later.
      
      Reservation/commit/consumer protocol is verified by litmus tests in
      Documentation/litmus-test/bpf-rb.
      
      One interesting implementation bit that significantly simplifies (and thus
      also speeds up) the implementation of both producers and consumers is how the
      data area is mapped twice, contiguously back-to-back, in virtual memory. This
      makes it possible to avoid any special measures for samples that have to wrap
      around at the end of the circular buffer data area, because the next page after
      the last data page is the first data page again, and thus the sample will still
      appear completely contiguous in virtual memory. See the comment and a simple
      ASCII diagram showing this visually in bpf_ringbuf_area_alloc().
      
      Another feature that distinguishes BPF ringbuf from the perf ring buffer is
      self-pacing notification of new data availability.
      bpf_ringbuf_commit() implementation will send a notification of new record
      being available after commit only if consumer has already caught up right up
      to the record being committed. If not, consumer still has to catch up and thus
      will see new data anyways without needing an extra poll notification.
      Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show that
      this allows to achieve a very high throughput without having to resort to
      tricks like "notify only every Nth sample", which are necessary with perf
      buffer. For extreme cases, when BPF program wants more manual control of
      notifications, commit/discard/output helpers accept BPF_RB_NO_WAKEUP and
      BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of data
      availability, but require extra caution and diligence in using this API.
      
      Comparison to alternatives
      --------------------------
      Before considering implementing the BPF ring buffer from scratch, existing
      alternatives in the kernel were evaluated, but they didn't seem to meet the
      needs. They largely fell into a few categories:
        - per-CPU buffers (perf, ftrace, etc), which don't satisfy two motivations
          outlined above (ordering and memory consumption);
        - linked list-based implementations; while some were multi-producer designs,
          consuming these from user-space would be very complicated and most
          probably not performant; memory-mapping contiguous piece of memory is
          simpler and more performant for user-space consumers;
        - io_uring is SPSC, but also requires fixed-sized elements. Naively turning
          SPSC queue into MPSC w/ lock would have subpar performance compared to
          locked reserve + lockless commit, as with BPF ring buffer. Fixed sized
          elements would be too limiting for BPF programs, given existing BPF
          programs heavily rely on variable-sized perf buffer already;
        - specialized implementations (like a new printk ring buffer, [0]) with lots
          of printk-specific limitations and implications, that didn't seem to fit
          well for intended use with BPF programs.
      
        [0] https://lwn.net/Articles/779550/
      Signed-off-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf: Extend bpf_base_func_proto helpers with probe_* and *current_task* · f470378c
      John Fastabend committed
      Often it is useful when applying policy to know something about the
      task. If the administrator has CAP_SYS_ADMIN rights then they can
      use kprobe + networking hook and link the two programs together to
      accomplish this. However, this is a bit clunky and also means we have
      to call both the network program and kprobe program when we could just
      use a single program and avoid passing metadata through sk_msg/skb->cb,
      socket, maps, etc.
      
      To accomplish this, add probe_* helpers to bpf_base_func_proto, guarded by a
      perfmon_capable() check, so that they become available to these program types.
      The newly supported helpers are the following (see the sketch after this
      list):
      
       BPF_FUNC_get_current_task
       BPF_FUNC_probe_read_user
       BPF_FUNC_probe_read_kernel
       BPF_FUNC_probe_read_user_str
       BPF_FUNC_probe_read_kernel_str
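      For illustration, a hedged sketch of what this enables: a cgroup/skb program
      inspecting the current task's comm via the probe helpers (vmlinux.h/CO-RE and
      the section name are assumptions of the example, not part of this patch):
      
      #include "vmlinux.h"
      #include <bpf/bpf_helpers.h>
      
      SEC("cgroup_skb/egress")
      int tag_by_task(struct __sk_buff *skb)
      {
              struct task_struct *task = (void *)bpf_get_current_task();
              char comm[16] = {};
      
              /* previously these helpers were limited to tracing program types */
              bpf_probe_read_kernel_str(comm, sizeof(comm), &task->comm);
      
              /* ... apply policy based on comm without a companion kprobe ... */
              return 1;       /* allow */
      }
      
      char LICENSE[] SEC("license") = "GPL";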
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yonghong Song <yhs@fb.com>
      Link: https://lore.kernel.org/bpf/159033905529.12355.4368381069655254932.stgit@john-Precision-5820-Tower
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  12. 15 May 2020, 1 commit
    • bpf: Implement CAP_BPF · 2c78ee89
      Alexei Starovoitov committed
      Implement permissions as stated in uapi/linux/capability.h.
      In order to do that, the verifier allow_ptr_leaks flag is split
      into four flags, which are set as:
        env->allow_ptr_leaks = bpf_allow_ptr_leaks();
        env->bypass_spec_v1 = bpf_bypass_spec_v1();
        env->bypass_spec_v4 = bpf_bypass_spec_v4();
        env->bpf_capable = bpf_capable();
      
      The first three are currently equivalent to perfmon_capable(), since leaking
      kernel pointers and reading kernel memory via side-channel attacks is roughly
      equivalent to reading kernel memory with cap_perfmon.
      
      'bpf_capable' enables bounded loops, precision tracking, bpf-to-bpf calls and
      other verifier features. 'allow_ptr_leaks' enables ptr leaks, ptr conversions,
      and subtraction of pointers. 'bypass_spec_v1' disables speculative analysis in
      the verifier, run-time mitigations in bpf arrays, and enables indirect variable
      access in bpf programs. 'bypass_spec_v4' disables emission of sanitation code
      by the verifier.
      
      That means that a networking BPF program loaded with CAP_BPF + CAP_NET_ADMIN
      will have speculative checks done by the verifier and other spectre mitigations
      applied. Such a networking BPF program will not be able to leak kernel pointers
      and will not be able to access arbitrary kernel memory.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/20200513230355.7858-3-alexei.starovoitov@gmail.com
  13. 27 Apr 2020, 2 commits
    • bpf: add bpf_ktime_get_boot_ns() · 71d19214
      Maciej Żenczykowski committed
      On a device like a cellphone, which is constantly suspending
      and resuming, CLOCK_MONOTONIC is not particularly useful for
      keeping track of or reacting to external network events.
      Instead you want to use CLOCK_BOOTTIME.
      
      Hence add bpf_ktime_get_boot_ns() as a mirror of bpf_ktime_get_ns()
      based around CLOCK_BOOTTIME instead of CLOCK_MONOTONIC.
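      The mirror is essentially a one-liner next to the existing helper; a sketch
      following the bpf_ktime_get_ns() shape quoted in the next entry below:
      
      BPF_CALL_0(bpf_ktime_get_boot_ns)
      {
              /* NMI safe access to clock boottime */
              return ktime_get_boot_fast_ns();
      }
      
      const struct bpf_func_proto bpf_ktime_get_boot_ns_proto = {
              .func           = bpf_ktime_get_boot_ns,
              .gpl_only       = false,
              .ret_type       = RET_INTEGER,
      };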
      Signed-off-by: Maciej Żenczykowski <maze@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • net: bpf: Make bpf_ktime_get_ns() available to non GPL programs · 082b57e3
      Maciej Żenczykowski committed
      The entire implementation is in kernel/bpf/helpers.c:
      
      BPF_CALL_0(bpf_ktime_get_ns) {
             /* NMI safe access to clock monotonic */
             return ktime_get_mono_fast_ns();
      }
      
      const struct bpf_func_proto bpf_ktime_get_ns_proto = {
             .func           = bpf_ktime_get_ns,
             .gpl_only       = false,
             .ret_type       = RET_INTEGER,
      };
      
      and this was presumably marked GPL due to kernel/time/timekeeping.c:
        EXPORT_SYMBOL_GPL(ktime_get_mono_fast_ns);
      
      and while that may make sense for kernel modules (although even that
      is doubtful), there is currently AFAICT no other source of time
      available to ebpf.
      
      Furthermore this is really just equivalent to clock_gettime(CLOCK_MONOTONIC)
      which is exposed to userspace (via vdso even to make it performant)...
      
      As such, I see no reason to keep the GPL restriction.
      (In the future I'd like to have access to time from Apache licensed ebpf code)
      Signed-off-by: Maciej Żenczykowski <maze@google.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  14. 26 Apr 2020, 1 commit
  15. 28 Mar 2020, 1 commit
  16. 13 Mar 2020, 1 commit
  17. 23 Jan 2020, 1 commit
  18. 13 Nov 2019, 2 commits
    • cgroup: use cgrp->kn->id as the cgroup ID · 74321038
      Tejun Heo committed
      cgroup ID is currently allocated using a dedicated per-hierarchy idr
      and used internally and exposed through tracepoints and bpf.  This is
      confusing because there are tracepoints and other interfaces which use
      the cgroupfs ino as IDs.
      
      The preceding changes exposed kn->id as the ino: a 64-bit ino on
      supported archs, or ino+gen (low 32 bits as ino, high bits as gen)
      otherwise. There's no reason for cgroup to use different IDs. The kernfs
      IDs are unique and userland can easily discover them and map them back
      to paths using standard file operations.
      
      This patch replaces cgroup IDs with kernfs IDs.
      
      * cgroup_id() is added and all cgroup ID users are converted to use it.
      
      * kernfs_node creation is moved to earlier during cgroup init so that
        cgroup_id() is available during init.
      
      * While at it, s/cgroup/cgrp/ in psi helpers for consistency.
      
      * Fallback ID value is changed to 1 to be consistent with root cgroup
        ID.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
    • kernfs: convert kernfs_node->id from union kernfs_node_id to u64 · 67c0496e
      Tejun Heo committed
      kernfs_node->id is currently a union kernfs_node_id which represents
      either a 32bit (ino, gen) pair or u64 value.  I can't see much value
      in the usage of the union - all that's needed is a 64bit ID which the
      current code is already limited to.  Using a union makes the code
      unnecessarily complicated and prevents using 64bit ino without adding
      practical benefits.
      
      This patch drops union kernfs_node_id and makes kernfs_node->id a u64.
      ino is stored in the lower 32 bits and gen in the upper. Accessors -
      kernfs[_id]_ino() and kernfs[_id]_gen() - are added to retrieve the
      ino and gen. This makes ID handling less cumbersome and will allow
      using 64-bit inos on supported archs.
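      The accessors are then trivial; a sketch of the packing/unpacking described
      above (placement and exact types may differ from the final kernfs.h):
      
      static inline ino_t kernfs_id_ino(u64 id)
      {
              /* id is the ino directly if ino_t is 64bit; otherwise low 32 bits */
              if (sizeof(ino_t) >= sizeof(u64))
                      return id;
              else
                      return (u32)id;
      }
      
      static inline u32 kernfs_id_gen(u64 id)
      {
              /* gen is unused (always 1) with 64bit inos; otherwise high 32 bits */
              if (sizeof(ino_t) >= sizeof(u64))
                      return 1;
              else
                      return id >> 32;
      }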
      
      This patch doesn't make any functional changes.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Alexei Starovoitov <ast@kernel.org>
  19. 05 Jun 2019, 1 commit
  20. 13 Apr 2019, 1 commit
    • bpf: Introduce bpf_strtol and bpf_strtoul helpers · d7a4cb9b
      Andrey Ignatov committed
      Add bpf_strtol and bpf_strtoul to convert a string to a long and an unsigned
      long, respectively. They are similar to the user space strtol(3) and
      strtoul(3), with a few changes to the API:
      
      * instead of NUL-terminated C string the helpers expect buffer and
        buffer length;
      
      * resulting long or unsigned long is returned in a separate
        result-argument;
      
      * the return value is used to indicate success or failure; on success, the
        number of consumed bytes is returned, which can be used to identify the
        position to read next if the buffer is expected to contain multiple integers;
      
      * instead of *base* argument, *flags* is used that provides base in 5
        LSB, other bits are reserved for future use;
      
      * number of supported bases is limited.
      
      Documentation for the new helpers is provided in bpf.h UAPI.
      
      The helpers are made available to BPF_PROG_TYPE_CGROUP_SYSCTL programs to
      be able to convert string input to e.g. "ulongvec" output.
      
      E.g. "net/ipv4/tcp_mem" consists of three ulong integers. They can be
      parsed by calling bpf_strtoul three times.
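      A hedged sketch of one such call from a BPF_PROG_TYPE_CGROUP_SYSCTL program
      (the program name and value-range policy are made up; parsing all three
      tcp_mem integers would repeat the call, advancing by the returned number of
      consumed bytes):
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      SEC("cgroup/sysctl")
      int sanity_check(struct bpf_sysctl *ctx)
      {
              unsigned long val = 0;
              char buf[32] = {};
              int len;
      
              if (bpf_sysctl_get_new_value(ctx, buf, sizeof(buf)) <= 0)
                      return 1;               /* not a write; allow */
      
              /* flags: base in the low 5 bits, 0 means auto-detect the base */
              len = bpf_strtoul(buf, sizeof(buf), 0, &val);
              if (len <= 0)
                      return 0;               /* reject unparsable writes */
      
              /* len is the number of consumed bytes; buf + len is the next field */
              return val <= 65535;            /* allow only sane values */
      }
      
      char _license[] SEC("license") = "GPL";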
      
      Implementation notes:
      
      The implementation includes "../../lib/kstrtox.h" to reuse the integer parsing
      functions. It's done exactly the same way as fs/proc/base.c already does.
      
      Unfortunately, the existing kstrtoX functions can't be used directly since
      they fail if any invalid character is present right after the integer in the
      string. The existing simple_strtoX functions can't be used either since
      they're obsolete and don't handle overflow properly.
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  21. 02 Feb 2019, 2 commits
    • bpf: introduce BPF_F_LOCK flag · 96049f3a
      Alexei Starovoitov committed
      Introduce the BPF_F_LOCK flag for the map_lookup and map_update syscall
      commands and for the map_update() helper function.
      In all these cases, take the lock of the existing element (which was declared
      in the BTF description) before copying (in or out) the rest of the map value.
      
      Implementation details that are part of uapi:
      
      Array:
      The array map takes the element lock for lookup/update.
      
      Hash:
      The hash map also takes the lock for lookup/update and tries to avoid the
      bucket lock. If the old element exists, it takes the element lock and updates
      the element in place. If the element doesn't exist, it allocates a new one and
      inserts it into the hash table while holding the bucket lock. In the rare case
      the hashmap has to take both the bucket lock and the element lock to update the
      old value in place.
      
      Cgroup local storage:
      It is similar to array: update-in-place and lookup are done with the lock
      taken.
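      A hedged userspace sketch of this uapi (libbpf calls; the map fd, key, and
      value layout are assumptions of the example). Note that each syscall copy is
      consistent under the element's lock, but the two calls together are not one
      atomic read-modify-write:
      
      #include <bpf/bpf.h>
      #include <linux/bpf.h>
      
      struct map_value {
              struct bpf_spin_lock lock;      /* declared via BTF on the BPF side */
              long cnt;
      };
      
      static int bump(int map_fd, int key)
      {
              struct map_value val;
      
              /* the element lock is held while the value is copied out */
              if (bpf_map_lookup_elem_flags(map_fd, &key, &val, BPF_F_LOCK))
                      return -1;
      
              val.cnt++;
      
              /* the lock is held again while the value is copied back in;
               * the bpf_spin_lock field itself is never copied */
              return bpf_map_update_elem(map_fd, &key, &val, BPF_F_LOCK);
      }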
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: introduce bpf_spin_lock · d83525ca
      Alexei Starovoitov committed
      Introduce 'struct bpf_spin_lock' and bpf_spin_lock/unlock() helpers to let
      bpf program serialize access to other variables.
      
      Example:
      struct hash_elem {
          int cnt;
          struct bpf_spin_lock lock;
      };
      struct hash_elem * val = bpf_map_lookup_elem(&hash_map, &key);
      if (val) {
          bpf_spin_lock(&val->lock);
          val->cnt++;
          bpf_spin_unlock(&val->lock);
      }
      
      Restrictions and safety checks:
      - bpf_spin_lock is only allowed inside HASH and ARRAY maps.
      - BTF description of the map is mandatory for safety analysis.
      - a bpf program can take one bpf_spin_lock at a time, since two or more can
        cause deadlocks.
      - only one 'struct bpf_spin_lock' is allowed per map element.
        It drastically simplifies implementation yet allows bpf program to use
        any number of bpf_spin_locks.
      - when bpf_spin_lock is taken the calls (either bpf2bpf or helpers) are not allowed.
      - bpf program must bpf_spin_unlock() before return.
      - bpf program can access 'struct bpf_spin_lock' only via
        bpf_spin_lock()/bpf_spin_unlock() helpers.
      - load/store into 'struct bpf_spin_lock lock;' field is not allowed.
      - to use bpf_spin_lock() helper the BTF description of map value must be
        a struct and have 'struct bpf_spin_lock anyname;' field at the top level.
        Nested lock inside another struct is not allowed.
      - syscall map_lookup doesn't copy bpf_spin_lock field to user space.
      - syscall map_update and program map_update do not update bpf_spin_lock field.
      - bpf_spin_lock cannot be on the stack or inside networking packet.
        bpf_spin_lock can only be inside HASH or ARRAY map value.
      - bpf_spin_lock is available to root only and to all program types.
      - bpf_spin_lock is not allowed in inner maps of map-in-map.
      - ld_abs is not allowed inside spin_lock-ed region.
      - tracing progs and socket filter progs cannot use bpf_spin_lock due to
        insufficient preemption checks
      
      Implementation details:
      - cgroup-bpf class of programs can nest with xdp/tc programs.
        Hence bpf_spin_lock is equivalent to spin_lock_irqsave.
        Other solutions to avoid nested bpf_spin_lock are possible.
        Like making sure that all networking progs run with softirq disabled.
        spin_lock_irqsave is the simplest and doesn't add overhead to the
        programs that don't use it.
      - arch_spinlock_t is used when its implemented as queued_spin_lock
      - archs can force their own arch_spinlock_t
      - on architectures where queued_spin_lock is not available and
        sizeof(arch_spinlock_t) != sizeof(__u32) trivial lock is used.
      - presence of bpf_spin_lock inside map value could have been indicated via
        extra flag during map_create, but specifying it via BTF is cleaner.
        It provides introspection for map key/value and reduces user mistakes.
      
      Next steps:
      - allow bpf_spin_lock in other map types (like cgroup local storage)
      - introduce BPF_F_LOCK flag for bpf_map_update() syscall and helper
        to request kernel to grab bpf_spin_lock before rewriting the value.
        That will serialize access to map elements.
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  22. 26 Oct 2018, 1 commit
  23. 20 Oct 2018, 1 commit
    • bpf: add queue and stack maps · f1a2e44a
      Mauricio Vasquez B committed
      Queue/stack maps implement a FIFO/LIFO data storage for ebpf programs.
      These maps support peek, pop and push operations that are exposed to eBPF
      programs through the new bpf_map[peek/pop/push] helpers.  Those operations
      are exposed to userspace applications through the already existing
      syscalls in the following way:
      
      BPF_MAP_LOOKUP_ELEM            -> peek
      BPF_MAP_LOOKUP_AND_DELETE_ELEM -> pop
      BPF_MAP_UPDATE_ELEM            -> push
      
      Queue/stack maps are implemented using a buffer, tail and head indexes,
      hence BPF_F_NO_PREALLOC is not supported.
      
      As opposed to other maps, queue and stack maps do not use RCU for protecting
      map values; the bpf_map[peek/pop] helpers have an ARG_PTR_TO_UNINIT_MAP_VALUE
      argument that is a pointer to a memory zone where the value of the map is
      saved. It is basically the same as ARG_PTR_TO_UNINIT_MEM, but the size does
      not have to be passed as an extra argument.
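      A hedged BPF-side sketch of the helpers (map name, sizes and the attach point
      are made up for this example):
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct {
              __uint(type, BPF_MAP_TYPE_QUEUE);
              __uint(max_entries, 1024);
              __uint(value_size, sizeof(__u32));      /* queue/stack maps have no key */
      } free_ports SEC(".maps");
      
      SEC("xdp")
      int use_port(struct xdp_md *ctx)
      {
              __u32 port;
      
              /* pop fills an uninitialized value pointer (FIFO order for a queue) */
              if (bpf_map_pop_elem(&free_ports, &port) == 0) {
                      /* ... use port, then return it to the pool ... */
                      bpf_map_push_elem(&free_ports, &port, BPF_ANY);
              }
              return XDP_PASS;
      }
      
      char _license[] SEC("license") = "GPL";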
      
      Our main motivation for implementing queue/stack maps was to keep track
      of a pool of elements, like network ports in a SNAT; however, we foresee
      other use cases, like for example saving the last N kernel events in a map
      and then analysing them from userspace.
      Signed-off-by: Mauricio Vasquez B <mauricio.vasquez@polito.it>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  24. 01 Oct 2018, 3 commits
    • bpf: introduce per-cpu cgroup local storage · b741f163
      Roman Gushchin committed
      This commit introduces per-cpu cgroup local storage.
      
      Per-cpu cgroup local storage is very similar to simple cgroup storage
      (let's call it shared), except all the data is per-cpu.
      
      The main goal of the per-cpu variant is to implement super-fast
      counters (e.g. packet counters), which require neither
      lookups nor atomic operations.
      
      From userspace's point of view, accessing a per-cpu cgroup storage
      is similar to other per-cpu map types (e.g. per-cpu hashmaps and
      arrays).
      
      Writing to a per-cpu cgroup storage is not atomic, but is performed
      by copying longs, so some minimal atomicity is there, exactly
      as with other per-cpu maps.
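      A hedged BPF-side sketch of the packet-counter use case (names are
      illustrative): each CPU bumps its own copy, so a plain increment is enough:
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      struct {
              __uint(type, BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE);
              __type(key, struct bpf_cgroup_storage_key);
              __type(value, __u64);
      } pkt_cnt SEC(".maps");
      
      SEC("cgroup_skb/egress")
      int count_pkts(struct __sk_buff *skb)
      {
              /* storage bound to the attached cgroup + program, current CPU */
              __u64 *cnt = bpf_get_local_storage(&pkt_cnt, 0);
      
              (*cnt)++;               /* no atomics needed on a per-cpu copy */
              return 1;               /* allow */
      }
      
      char _license[] SEC("license") = "GPL";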
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: rework cgroup storage pointer passing · f294b37e
      Roman Gushchin committed
      To simplify the following introduction of per-cpu cgroup storage,
      let's rework a bit the mechanism of passing a pointer to a cgroup
      storage into bpf_get_local_storage(): save a pointer
      to the corresponding bpf_cgroup_storage structure, instead of
      a pointer to the actual buffer.
      
      It will help us handle per-cpu storage later, which has
      a different way of accessing the actual data.
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: extend cgroup bpf core to allow multiple cgroup storage types · 8bad74f9
      Roman Gushchin committed
      In order to introduce per-cpu cgroup storage, let's generalize
      bpf cgroup core to support multiple cgroup storage types.
      Potentially, per-node cgroup storage can be added later.
      
      This commit is mostly a formal change that replaces the
      cgroup_storage pointer with an array of cgroup_storage pointers.
      It doesn't actually introduce a new storage type;
      that will be done later.
      
      Each bpf program is now able to have one cgroup storage of each type.
      Signed-off-by: Roman Gushchin <guro@fb.com>
      Acked-by: Song Liu <songliubraving@fb.com>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  25. 03 Aug 2018, 1 commit
  26. 04 Jun 2018, 1 commit
    • bpf: implement bpf_get_current_cgroup_id() helper · bf6fa2c8
      Yonghong Song committed
      bpf has been used extensively for tracing. For example, bcc
      contains an almost full set of bpf-based tools to trace kernel
      and user functions/events. Most tracing tools are currently
      either filtered based on pid or system-wide.
      
      Containers have been used quite extensively in industry and
      cgroup is often used together to provide resource isolation
      and protection. Several processes may run inside the same
      container. It is often desirable to get container-level tracing
      results as well, e.g. syscall count, function count, I/O
      activity, etc.
      
      This patch implements a new helper, bpf_get_current_cgroup_id(),
      which will return cgroup id based on the cgroup within which
      the current task is running.
      
      A later patch will provide an example to show that
      userspace can get the same cgroup id, so it can
      configure a filter or policy in the bpf program based on
      the task's cgroup id.
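      A hedged sketch of such a filter (the rodata constant and section name are
      assumptions of the example; the loader would fill in the target cgroup's id,
      e.g. taken from the cgroup directory's inode, before attaching):
      
      #include <linux/bpf.h>
      #include <bpf/bpf_helpers.h>
      
      const volatile __u64 target_cgid;       /* set by the loader */
      
      SEC("tracepoint/syscalls/sys_enter_openat")
      int count_openat(void *ctx)
      {
              if (bpf_get_current_cgroup_id() != target_cgid)
                      return 0;       /* task is outside the container; ignore */
      
              /* ... count/record the event for this cgroup ... */
              return 0;
      }
      
      char _license[] SEC("license") = "GPL";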
      
      The helper is currently implemented for tracing. It can
      be added to other program types as well when needed.
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  27. 10 Jan 2017, 1 commit
  28. 23 Oct 2016, 1 commit
  29. 21 Sep 2016, 1 commit
    • bpf: direct packet write and access for helpers for clsact progs · 36bbef52
      Daniel Borkmann committed
      This work implements direct packet access for helpers and direct packet
      write in a similar fashion as already available for XDP types via commits
      4acf6c0b ("bpf: enable direct packet data write for xdp progs") and
      6841de8b ("bpf: allow helpers access the packet directly"), and as a
      complementary feature to the already available direct packet read for tc
      (cls/act) programs.
      
      For enabling this, we need to introduce two helpers, bpf_skb_pull_data()
      and bpf_csum_update(). The first is generally needed for both read and
      write, because they would otherwise only be limited to the current linear
      skb head. Usually, when the data_end test fails, programs just bail out,
      or, in the direct read case, use bpf_skb_load_bytes() as an alternative
      to overcome this limitation. If such data sits in non-linear parts, we
      can just pull them in once with the new helper, retest and eventually
      access them.
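      A hedged tc-side sketch of that pull-retest-access pattern (section name,
      offsets and lengths are illustrative):
      
      #include <linux/bpf.h>
      #include <linux/pkt_cls.h>
      #include <bpf/bpf_helpers.h>
      
      SEC("tc")
      int touch_payload(struct __sk_buff *skb)
      {
              const __u32 need = 64;                  /* bytes we want linear */
              void *data = (void *)(long)skb->data;
              void *data_end = (void *)(long)skb->data_end;
      
              if (data + need > data_end) {
                      /* data may sit in non-linear frags: pull it in, retest */
                      if (bpf_skb_pull_data(skb, need))
                              return TC_ACT_OK;
                      /* helpers changing skb->data invalidate old pointers */
                      data = (void *)(long)skb->data;
                      data_end = (void *)(long)skb->data_end;
                      if (data + need > data_end)
                              return TC_ACT_OK;
              }
      
              /* direct read, and direct write now that the skb is uncloned */
              ((__u8 *)data)[0] ^= 1;
              return TC_ACT_OK;
      }
      
      char _license[] SEC("license") = "GPL";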
      
      At the same time, this also makes sure the skb is uncloned, which is, of
      course, a necessary condition for direct write. As this needs to be an
      invariant for the write part only, the verifier detects writes and adds
      a prologue that is calling bpf_skb_pull_data() to effectively unclone the
      skb from the very beginning in case it is indeed cloned. The heuristic
      makes use of a similar trick that was done in 233577a2 ("net: filter:
      constify detection of pkt_type_offset"). This comes at zero cost for other
      programs that do not use the direct write feature. Should a program use
      this feature only sparsely and has read access for the most parts with,
      for example, drop return codes, then such write action can be delegated
      to a tail called program for mitigating this cost of potential uncloning
      to a late point in time where it would have been paid similarly with the
      bpf_skb_store_bytes() as well. Advantage of direct write is that the
      writes are inlined whereas the helper cannot make any length assumptions
      and thus needs to generate a call to memcpy() also for small sizes, as well
      as cost of helper call itself with sanity checks are avoided. Plus, when
      direct read is already used, we don't need to cache or perform rechecks
      on the data boundaries (due to verifier invalidating previous checks for
      helpers that change skb->data), so more complex programs using rewrites
      can benefit from switching to direct read plus write.
      
      For direct packet access to helpers, we save the otherwise needed copy into
      a temp struct sitting on stack memory when use-case allows. Both facilities
      are enabled via may_access_direct_pkt_data() in verifier. For now, we limit
      this to map helpers and csum_diff, and can successively enable other helpers
      where we find it makes sense. Helpers that definitely cannot be allowed for
      this are those part of bpf_helper_changes_skb_data() since they can change
      underlying data, and those that write into memory as this could happen for
      packet typed args when still cloned. bpf_csum_update() helper accommodates
      for the fact that we need to fixup checksum_complete when using direct write
      instead of bpf_skb_store_bytes(), meaning the programs can use available
      helpers like bpf_csum_diff(), and implement csum_add(), csum_sub(),
      csum_block_add(), csum_block_sub() equivalents in eBPF together with the
      new helper. A usage example will be provided for iproute2's examples/bpf/
      directory.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  30. 10 Sep 2016, 2 commits
    • bpf: add BPF_CALL_x macros for declaring helpers · f3694e00
      Daniel Borkmann committed
      This work adds BPF_CALL_<n>() macros and converts all the eBPF helper functions
      to use them, in a similar fashion like we do with SYSCALL_DEFINE<n>() macros
      that are used today. Motivation for this is to hide all the register handling
      and all necessary casts from the user, so that it is done automatically in the
      background when adding a BPF_CALL_<n>() call.
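      A representative conversion, shaped like the helpers in kernel/bpf/helpers.c
      (a sketch; the macro expands to the u64-register prototype and the casts that
      previously had to be written by hand):
      
      BPF_CALL_2(bpf_map_lookup_elem, struct bpf_map *, map, void *, key)
      {
              WARN_ON_ONCE(!rcu_read_lock_held());
              return (unsigned long) map->ops->map_lookup_elem(map, key);
      }
      
      const struct bpf_func_proto bpf_map_lookup_elem_proto = {
              .func           = bpf_map_lookup_elem,
              .gpl_only       = false,
              .pkt_access     = true,
              .ret_type       = RET_PTR_TO_MAP_VALUE_OR_NULL,
              .arg1_type      = ARG_CONST_MAP_PTR,
              .arg2_type      = ARG_PTR_TO_MAP_KEY,
      };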
      
      This makes current helpers easier to review, eases writing future helpers,
      avoids getting the casting mess wrong, and allows for extending all helpers at
      once (f.e. build-time checks, etc). It also makes it easier to detect in code
      reviews when unused registers are accidentally instrumented in the code, which
      would break compatibility with existing programs.
      
      BPF_CALL_<n>() internals are quite similar to SYSCALL_DEFINE<n>() ones with some
      fundamental differences, for example, for generating the actual helper function
      that carries all u64 regs, we need to fill unused regs, so that we always end up
      with 5 u64 regs as an argument.
      
      I reviewed several 0-5 generated BPF_CALL_<n>() variants of the .i results and
      they look all as expected. No sparse issue spotted. We let this also sit for a
      few days with Fengguang's kbuild test robot, and there were no issues seen. On
      s390, it barked on the "uses dynamic stack allocation" notice, which is an old
      one from bpf_perf_event_output{,_tp}() reappearing here due to the conversion
      to the call wrapper, just telling that the perf raw record/frag sits on stack
      (gcc with s390's -mwarn-dynamicstack), but that's all. Did various runtime tests
      and they were fine as well. All eBPF helpers are now converted to use these
      macros, getting rid of a good chunk of all the raw castings.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: minor cleanups in helpers · 6088b582
      Daniel Borkmann committed
      Some minor misc cleanups, f.e. use sizeof(__u32) instead of hardcoding it;
      also, in __bpf_skb_max_len(), I missed that we always have skb->dev valid
      anyway, so we can drop the unneeded test for dev; a few more other
      misc bits are addressed here as well.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>