1. 18 Mar 2021 (1 commit)
    • bpf: Fix fexit trampoline. · e21aa341
      Committed by Alexei Starovoitov
      The fexit/fmod_ret programs can be attached to kernel functions that can sleep.
      The synchronize_rcu_tasks() will not wait for such tasks to complete. In such
      a case the trampoline image will be freed, and when the task wakes up the
      return IP will point to freed memory, causing a crash. Solve this by taking
      percpu_ref_get/put for the duration of the trampoline and by separating the
      lifetime of the trampoline from that of its image.
      The "half page" optimization has to be removed, since the
      first_half->second_half->first_half transition cannot be guaranteed to
      complete in deterministic time. Every trampoline update becomes a new image.
      An image with fmod_ret or fexit progs will be freed via percpu_ref_kill and
      call_rcu_tasks; together they wait for the original function and the
      trampoline asm to complete. The trampoline is patched from nop to jmp to skip
      fexit progs, and images are freed independently from the trampoline. An image
      with only fentry progs will be freed via call_rcu_tasks_trace+call_rcu_tasks,
      which waits for both sleepable and non-sleepable progs to complete.
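      A minimal sketch of the enter/exit pairing described above, assuming a
      struct bpf_tramp_image that embeds a percpu_ref named pcref (names here
      are illustrative, not quoted from the patch):

        #include <linux/percpu-refcount.h>

        struct bpf_tramp_image {
                struct percpu_ref pcref;
                /* ... image memory, ksym, release work ... */
        };

        /* Called by the trampoline before the attached function runs:
         * pins the image so it cannot be freed while the (possibly
         * sleeping) function is still on this image's return path.
         */
        void notrace __bpf_tramp_enter(struct bpf_tramp_image *tr)
        {
                percpu_ref_get(&tr->pcref);
        }

        /* Called on the return path; the last put lets percpu_ref_kill
         * proceed with freeing the image.
         */
        void notrace __bpf_tramp_exit(struct bpf_tramp_image *tr)
        {
                percpu_ref_put(&tr->pcref);
        }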
      
      Fixes: fec56f58 ("bpf: Introduce BPF trampoline")
      Reported-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Paul E. McKenney <paulmck@kernel.org>  # for RCU
      Link: https://lore.kernel.org/bpf/20210316210007.38949-1-alexei.starovoitov@gmail.com
  2. 10 Mar 2021 (2 commits)
  3. 05 Mar 2021 (1 commit)
  4. 27 Feb 2021 (6 commits)
  5. 12 Feb 2021 (1 commit)
  6. 11 Feb 2021 (4 commits)
  7. 28 Jan 2021 (1 commit)
  8. 13 Jan 2021 (3 commits)
  9. 09 Dec 2020 (1 commit)
  10. 04 Dec 2020 (1 commit)
    • bpf: Remove hard-coded btf_vmlinux assumption from BPF verifier · 22dc4a0f
      Committed by Andrii Nakryiko
      Remove the assumption, permeating throughout the BPF verifier, that all BTF
      type IDs refer to vmlinux BTF. Instead, wherever BTF type IDs are involved,
      also track the instance of struct btf that goes along with the type ID. This
      allows gradually adding support for kernel module BTFs and for using/tracking
      module types across BPF helper calls and registers.
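      For instance, the register state could carry the pair rather than a bare
      ID; a hedged sketch modeled on the text above (field placement is
      illustrative):

        struct bpf_reg_state {
                /* ... */
                /* For PTR_TO_BTF_ID: which BTF object the type ID below
                 * belongs to (vmlinux or a module), plus the type ID itself.
                 */
                struct btf *btf;
                u32 btf_id;
                /* ... */
        };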
      
      This patch also renames the btf_id() function to btf_obj_id() to minimize the
      naming clash between btf_id denoting a BTF *type* ID and a BTF *object*'s ID.
      
      Also, although btf_vmlinux can't be destroyed and thus doesn't need
      refcounting, module BTFs do, so apply BTF refcounting universally whenever a
      BPF program uses BTF-powered attachment (tp_btf, fentry/fexit, etc.). This
      makes for simpler clean-up code.
      
      Now that a BTF type ID is not enough to uniquely identify a BTF type, extend
      the BPF trampoline key to include the BTF object ID. To differentiate that
      from a target program's BPF ID, set the 31st bit of the type ID. BTF type IDs
      (at least currently) are not allowed to use the full 32 bits, so there is no
      danger of confusing that bit with a valid BTF type ID.
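      A sketch of the key layout this describes, written from the text above
      (the helper name is illustrative):

        /* High 32 bits: BTF object ID; bit 31: marks "BTF object" rather
         * than "target prog"; low bits: BTF type ID (which never uses the
         * full 32 bits).
         */
        static u64 trampoline_btf_key(u32 btf_obj_id, u32 btf_type_id)
        {
                return ((u64)btf_obj_id << 32) | (1U << 31) | btf_type_id;
        }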
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Link: https://lore.kernel.org/bpf/20201203204634.1325171-10-andrii@kernel.org
  11. 03 Dec 2020 (3 commits)
  12. 19 Nov 2020 (1 commit)
  13. 11 Nov 2020 (1 commit)
    • bpf: Load and verify kernel module BTFs · 36e68442
      Committed by Andrii Nakryiko
      Add a kernel module listener that will load/validate and unload module BTF.
      Module BTFs get IDs generated for them, which makes it possible to iterate
      them with the existing BTF iteration API. They are given their respective
      module's name, which will be reported through the GET_OBJ_INFO API. They are
      also marked as in-kernel BTFs so that tooling can distinguish them from
      user-provided BTFs.
      
      Also, similarly to vmlinux BTF, kernel module BTFs are exposed through
      sysfs as /sys/kernel/btf/<module-name>. This is convenient for user-space
      tools to inspect module BTF contents and dump their types with existing tools:
      
      [vmuser@archvm bpf]$ ls -la /sys/kernel/btf
      total 0
      drwxr-xr-x  2 root root       0 Nov  4 19:46 .
      drwxr-xr-x 13 root root       0 Nov  4 19:46 ..
      
      ...
      
      -r--r--r--  1 root root     888 Nov  4 19:46 irqbypass
      -r--r--r--  1 root root  100225 Nov  4 19:46 kvm
      -r--r--r--  1 root root   35401 Nov  4 19:46 kvm_intel
      -r--r--r--  1 root root     120 Nov  4 19:46 pcspkr
      -r--r--r--  1 root root     399 Nov  4 19:46 serio_raw
      -r--r--r--  1 root root 4094095 Nov  4 19:46 vmlinux
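      A module's types can then be dumped straight from sysfs, for example
      (a usage sketch; assumes a bpftool built with BTF support):

        $ bpftool btf dump file /sys/kernel/btf/kvm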
      Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Link: https://lore.kernel.org/bpf/20201110011932.3201430-5-andrii@kernel.org
  14. 07 Nov 2020 (1 commit)
  15. 29 Oct 2020 (1 commit)
    • bpf: Permit cond_resched for some iterators · cf83b2d2
      Committed by Yonghong Song
      Commit e679654a ("bpf: Fix a rcu_sched stall issue with
      bpf task/task_file iterator") tried to fix an rcu stall warning
      caused by the bpf task_file iterator when running
      "bpftool prog".
      
            rcu: INFO: rcu_sched self-detected stall on CPU
            rcu:     7-....: (20999 ticks this GP) idle=302/1/0x4000000000000000 softirq=1508852/1508852 fqs=4913
            (t=21031 jiffies g=2534773 q=179750)
            NMI backtrace for cpu 7
            CPU: 7 PID: 184195 Comm: bpftool Kdump: loaded Tainted: G        W         5.8.0-00004-g68bfc7f8c1b4 #6
            Hardware name: Quanta Twin Lakes MP/Twin Lakes Passive MP, BIOS F09_3A17 05/03/2019
            Call Trace:
            <IRQ>
            dump_stack+0x57/0x70
            nmi_cpu_backtrace.cold+0x14/0x53
            ? lapic_can_unplug_cpu.cold+0x39/0x39
            nmi_trigger_cpumask_backtrace+0xb7/0xc7
            rcu_dump_cpu_stacks+0xa2/0xd0
            rcu_sched_clock_irq.cold+0x1ff/0x3d9
            ? tick_nohz_handler+0x100/0x100
            update_process_times+0x5b/0x90
            tick_sched_timer+0x5e/0xf0
            __hrtimer_run_queues+0x12a/0x2a0
            hrtimer_interrupt+0x10e/0x280
            __sysvec_apic_timer_interrupt+0x51/0xe0
            asm_call_on_stack+0xf/0x20
            </IRQ>
            sysvec_apic_timer_interrupt+0x6f/0x80
            ...
            task_file_seq_next+0x52/0xa0
            bpf_seq_read+0xb9/0x320
            vfs_read+0x9d/0x180
            ksys_read+0x5f/0xe0
            do_syscall_64+0x38/0x60
            entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      The fix was to limit the number of bpf program runs to
      one million. This fixed the problem in most cases. But
      we also found that under heavy load, which can increase the wallclock
      time of bpf_seq_read(), the warning may still show up.
      
      For example, with bpf_delay() called in the "while" loop of
      bpf_seq_read() to introduce an artificial delay,
      the warning shows up in my qemu run:
      
        static unsigned q;
        volatile unsigned *p = &q;
        volatile unsigned long long ll;

        /* Busy-loop over 10^8 volatile reads; the volatile accesses keep
         * the compiler from optimizing the delay away.
         */
        static void bpf_delay(void)
        {
               int i, j;

               for (i = 0; i < 10000; i++)
                       for (j = 0; j < 10000; j++)
                               ll += *p;
        }
      
      There are two ways to fix this issue. One is to reduce the above
      one-million threshold to, say, 100,000, and hope the rcu warning will
      no longer show up. The other is to introduce a target feature
      that enables bpf_seq_read() to call cond_resched().
      
      This patch takes the second approach, as the first approach may cause
      more -EAGAIN failures for read() syscalls. Note that not all bpf_iter
      targets can permit cond_resched() in bpf_seq_read(): in some, e.g., the
      netlink seq iterator, an rcu read lock critical section spans
      seq_ops->next() -> seq_ops->show() -> seq_ops->next().
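      A hedged sketch of what such an opt-in feature could look like, written
      from the description above with illustrative local types standing in for
      the kernel's iterator registration (names are not quoted from the patch):

        #include <stdbool.h>

        /* Feature bit a target sets if rescheduling between seq_ops
         * calls is safe (i.e. no rcu read lock held across them).
         */
        #define BPF_ITER_RESCHED        (1U << 0)

        struct iter_reg {
                const char *target;
                unsigned int feature;
        };

        /* task_file holds no rcu read lock across next()/show(), so it
         * can opt in; a netlink-style iterator would leave feature as 0.
         */
        static const struct iter_reg task_file_reg = {
                .target  = "task_file",
                .feature = BPF_ITER_RESCHED,
        };

        /* Checked from the bpf_seq_read() loop before calling
         * cond_resched().
         */
        static bool iter_supports_resched(const struct iter_reg *reg)
        {
                return reg->feature & BPF_ITER_RESCHED;
        }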
      
      For the kernel code with the above hack, "bpftool p" takes roughly
      38 seconds to finish on my VM, with 184 bpf program runs.
      Using the following command, I am able to collect the number of
      context switches:
         perf stat -e context-switches -- ./bpftool p >& log
      Without this patch:
         69      context-switches
      With this patch:
         75      context-switches
      This patch adds 6 additional context switches, roughly one every 6
      seconds, to reschedule and avoid the lengthy no-reschedule stretches
      that cause the above RCU warnings.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20201028061054.1411116-1-yhs@fb.com
  16. 12 Oct 2020 (1 commit)
    • bpf: Allow for map-in-map with dynamic inner array map entries · 4a8f87e6
      Committed by Daniel Borkmann
      Recent work in f4d05259 ("bpf: Add map_meta_equal map ops") and 134fede4
      ("bpf: Relax max_entries check for most of the inner map types") added support
      for dynamic inner max entries for most map-in-map types. The exceptions were
      maps like array or prog array, where the map_gen_lookup() callback uses the
      map's max_entries field as a constant when emitting instructions.
      
      We recently implemented Maglev consistent hashing in Cilium's load balancer,
      which uses map-in-map with an outer hash map and inner array maps holding the
      Maglev backend table for each service. It was designed this way in order to
      reduce overall memory consumption, given that the outer hash map avoids
      preallocating a large, flat memory area for all services. Also, the number of
      service mappings is not always known a priori.
      
      The use case for dynamic inner array map entries is to further reduce memory
      overhead: for example, some services might have just a small number of
      backends while others have a large number. Right now the Maglev backend
      tables for small and large numbers of backends would need the same inner
      array map entries, which adds a lot of unneeded overhead.
      
      Dynamic inner array map entries can be realized by avoiding the inlined code
      generation for their lookup. The lookup will still be efficient since it
      calls into array_map_lookup_elem() directly, thus avoiding a retpoline. The
      patch adds a BPF_F_INNER_MAP flag to map creation, which skips inline code
      generation and relaxes the array_map_meta_equal() check to ignore both maps'
      max_entries. This still allows faster lookups for map-in-map when
      BPF_F_INNER_MAP is not specified and dynamic max_entries is thus not needed.
      
      Example code generation where the inner map is a dynamically sized array:
      
        # bpftool p d x i 125
        int handle__sys_enter(void * ctx):
        ; int handle__sys_enter(void *ctx)
           0: (b4) w1 = 0
        ; int key = 0;
           1: (63) *(u32 *)(r10 -4) = r1
           2: (bf) r2 = r10
        ;
           3: (07) r2 += -4
        ; inner_map = bpf_map_lookup_elem(&outer_arr_dyn, &key);
           4: (18) r1 = map[id:468]
           6: (07) r1 += 272
           7: (61) r0 = *(u32 *)(r2 +0)
           8: (35) if r0 >= 0x3 goto pc+5
           9: (67) r0 <<= 3
          10: (0f) r0 += r1
          11: (79) r0 = *(u64 *)(r0 +0)
          12: (15) if r0 == 0x0 goto pc+1
          13: (05) goto pc+1
          14: (b7) r0 = 0
          15: (b4) w6 = -1
        ; if (!inner_map)
          16: (15) if r0 == 0x0 goto pc+6
          17: (bf) r2 = r10
        ;
          18: (07) r2 += -4
        ; val = bpf_map_lookup_elem(inner_map, &key);
          19: (bf) r1 = r0                               | No inlining but instead
          20: (85) call array_map_lookup_elem#149280     | call to array_map_lookup_elem()
        ; return val ? *val : -1;                        | for inner array lookup.
          21: (15) if r0 == 0x0 goto pc+1
        ; return val ? *val : -1;
          22: (61) r6 = *(u32 *)(r0 +0)
        ; }
          23: (bc) w0 = w6
          24: (95) exit
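
      A hedged sketch of how such maps might be declared with libbpf's
      BTF-defined map syntax, modeled on the outer_arr_dyn name in the dump
      above (the exact declarations in the selftests may differ):

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* Inner array map created with BPF_F_INNER_MAP: its lookups are
         * not inlined, so inner maps inserted later may have a different
         * max_entries (e.g. resized via bpf_map__set_max_entries()
         * before load).
         */
        struct inner_map {
                __uint(type, BPF_MAP_TYPE_ARRAY);
                __uint(map_flags, BPF_F_INNER_MAP);
                __uint(max_entries, 1);
                __type(key, int);
                __type(value, int);
        } inner_map1 SEC(".maps");

        struct {
                __uint(type, BPF_MAP_TYPE_ARRAY_OF_MAPS);
                __uint(max_entries, 3);
                __uint(key_size, sizeof(int));
                __uint(value_size, sizeof(int));
                __array(values, struct inner_map);
        } outer_arr_dyn SEC(".maps") = {
                .values = { [0] = &inner_map1 },
        };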
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andrii@kernel.org>
      Link: https://lore.kernel.org/bpf/20201010234006.7075-4-daniel@iogearbox.net
  17. 03 Oct 2020 (2 commits)
  18. 30 Sep 2020 (2 commits)
  19. 29 Sep 2020 (5 commits)
  20. 26 Sep 2020 (2 commits)
    • bpf: Add comment to document BTF type PTR_TO_BTF_ID_OR_NULL · ba5f4cfe
      Committed by John Fastabend
      The meaning of PTR_TO_BTF_ID_OR_NULL differs slightly from that of the
      other *_OR_NULL types. For example, the types PTR_TO_SOCKET
      and PTR_TO_SOCKET_OR_NULL can be used for branch analysis because the
      type PTR_TO_SOCKET is guaranteed to _not_ have a null value.
      
      In contrast, PTR_TO_BTF_ID and PTR_TO_BTF_ID_OR_NULL have slightly
      different meanings. A PTR_TO_BTF_ID may be a NULL pointer, but it is
      safe to read through it in the program context, because the program
      context will handle any faults. The fallout is that for PTR_TO_BTF_ID
      the verifier can assume reads are safe but cannot use the type in
      branch analysis. Additionally, authors need to be extra careful when
      passing PTR_TO_BTF_ID into helpers: in general, helpers consuming
      type PTR_TO_BTF_ID will need to assume it may be null.
      
      Since the above is not obvious to readers without this background
      knowledge, let's add a comment to the type definition, sketched below.
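      A hedged sketch of such a comment next to the register type definition
      (paraphrased from this message, not quoted from the patch):

        enum bpf_reg_type {
                /* ... */
                /* PTR_TO_BTF_ID points to a kernel struct that the program
                 * need not NULL-check before reading: faults on such reads
                 * are handled by the program context. The verifier therefore
                 * treats reads as safe but cannot use this type in branch
                 * analysis, and helpers taking it must assume it may be NULL.
                 */
                PTR_TO_BTF_ID,
                /* ... */
        };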
      
      Editorial comment: as networking and tracing programs get closer and
      more tightly merged, we may need to consider a new type that we can
      ensure is non-null for branch analysis and can also be passed into
      helpers.
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Lorenz Bauer <lmb@cloudflare.com>
    • bpf: Enable bpf_skc_to_* sock casting helper to networking prog type · 1df8f55a
      Committed by Martin KaFai Lau
      There is a constant need to add more fields to bpf_tcp_sock for the bpf
      programs running at tc, sock_ops, etc.
      
      A current workaround is to use bpf_probe_read_kernel().  However, besides
      requiring another helper call to read each field and missing CO-RE, it is
      also not as intuitive as directly reading "tp->lsndtime", for example.
      Since the program already has the perfmon cap needed to do
      bpf_probe_read_kernel(), it will be much easier if the bpf prog can
      directly read from the tcp_sock.
      
      This patch tries to do that by using the existing casting-helpers
      bpf_skc_to_*() whose func_proto returns a btf_id.  For example, the
      func_proto of bpf_skc_to_tcp_sock returns the btf_id of the
      kernel "struct tcp_sock".
      
      These helpers are also added to is_ptr_cast_function(), which ensures the
      returned reg (BPF_REG_0) also carries the ref_obj_id. That keeps
      ref-tracking working properly.
      
      The bpf_skc_to_* helpers are made available to most of the bpf prog
      types in filter.c. They remain gated by the perfmon cap.
      
      This patch adds ARG_PTR_TO_BTF_ID_SOCK_COMMON.  A helper accepting this
      arg can take either a btf-id ptr (PTR_TO_BTF_ID + &btf_sock_ids[BTF_SOCK_TYPE_SOCK_COMMON])
      or a legacy ctx-converted skc ptr (PTR_TO_SOCK_COMMON).  The bpf_skc_to_*()
      helpers are changed to take ARG_PTR_TO_BTF_ID_SOCK_COMMON so that they
      accept a pointer obtained from skb->sk.
      
      Instead of specifying both arg_type and arg_btf_id in the same func_proto,
      as the current ARG_PTR_TO_BTF_ID does, the arg_btf_id of the new
      ARG_PTR_TO_BTF_ID_SOCK_COMMON is specified in compatible_reg_types[] in
      verifier.c, because the arg_btf_id is always the same; see the sketch
      below.  Discussion in this thread:
      https://lore.kernel.org/bpf/20200922070422.1917351-1-kafai@fb.com/
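
      A hedged sketch of a func_proto using the new arg type, modeled on the
      description above (the exact member values in the patch may differ):

        const struct bpf_func_proto bpf_skc_to_tcp_sock_proto = {
                .func       = bpf_skc_to_tcp_sock,
                .gpl_only   = false,
                .ret_type   = RET_PTR_TO_BTF_ID_OR_NULL,
                /* No .arg1_btf_id here: for ARG_PTR_TO_BTF_ID_SOCK_COMMON
                 * the expected btf_id (sock_common) lives in
                 * compatible_reg_types[] in verifier.c, since it is always
                 * the same.
                 */
                .arg1_type  = ARG_PTR_TO_BTF_ID_SOCK_COMMON,
                .ret_btf_id = &btf_sock_ids[BTF_SOCK_TYPE_TCP],
        };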
      
      The ARG_PTR_TO_BTF_ID_ part sets the clear expectation that the helper
      expects a PTR_TO_BTF_ID, which could be NULL.  This is the same behavior
      as the existing helpers taking ARG_PTR_TO_BTF_ID.
      
      The _SOCK_COMMON part means the helper is also expecting the legacy
      SOCK_COMMON pointer.
      
      By excluding an _OR_NULL part, the bpf prog cannot call the helper with
      a literal NULL, which doesn't make sense in most cases;
      e.g. bpf_skc_to_tcp_sock(NULL) will be rejected.  Every PTR_TO_*_OR_NULL
      reg has to be NULL-checked before being passed into the helper, or else
      the bpf prog will be rejected.  This behavior is nothing new and is
      consistent with the current expectation during bpf-prog-load; see the
      usage sketch below.
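
      From the program side, usage might look like the following hedged sketch
      (a tc program assuming vmlinux.h and bpf_helpers.h; names illustrative):

        SEC("tc")
        int read_lsndtime(struct __sk_buff *skb)
        {
                struct bpf_sock *sk = skb->sk;  /* legacy PTR_TO_SOCK_COMMON */
                struct tcp_sock *tp;

                if (!sk)
                        return 0;

                /* cast helper: returns PTR_TO_BTF_ID_OR_NULL */
                tp = bpf_skc_to_tcp_sock(sk);
                if (!tp)        /* must NULL-check before use */
                        return 0;

                /* direct, CO-RE-relocatable field read, no probe_read */
                bpf_printk("lsndtime %u", tp->lsndtime);
                return 0;
        }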
      
      [ ARG_PTR_TO_BTF_ID_SOCK_COMMON will be used to replace
        ARG_PTR_TO_SOCK* of other existing helpers later such that
        those existing helpers can take the PTR_TO_BTF_ID returned by
        the bpf_skc_to_*() helpers.
      
        The only special case is bpf_sk_lookup_assign(), which can accept a
        literal NULL ptr.  It has to be handled specially in another follow-up
        patch if there is a need (e.g. by renaming ARG_PTR_TO_SOCKET_OR_NULL
        to ARG_PTR_TO_BTF_ID_SOCK_COMMON_OR_NULL). ]
      
      [ When converting the older helpers that take ARG_PTR_TO_SOCK* in
        the later patch, if the kernel does not support BTF,
        ARG_PTR_TO_BTF_ID_SOCK_COMMON will behave like ARG_PTR_TO_SOCK_COMMON
        because no reg->type could have PTR_TO_BTF_ID in this case.
      
        It is not a concern for newer btf-only helpers like the bpf_skc_to_*()
        here, though, because these helpers require BTF vmlinux to begin
        with. ]
      Signed-off-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Link: https://lore.kernel.org/bpf/20200925000350.3855720-1-kafai@fb.com