- 26 Jul 2019: 2 commits
-
-
Submitted by Stanislav Fomichev
Export bpf_flow_keys flags to tools/libbpf/selftests. Acked-by: Petar Penkov <ppenkov@google.com> Acked-by: Willem de Bruijn <willemb@google.com> Acked-by: Song Liu <songliubraving@fb.com> Cc: Song Liu <songliubraving@fb.com> Cc: Willem de Bruijn <willemb@google.com> Cc: Petar Penkov <ppenkov@google.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Allan Zhang
Software event output is only enabled for a few prog types. This test ensures that all supported types can use bpf_perf_event_output successfully. Signed-off-by: Allan Zhang <allanzhang@google.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
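For context, a minimal sketch of what such a program exercises: bpf_perf_event_output() pushing a record through a BPF_MAP_TYPE_PERF_EVENT_ARRAY map. The map name, section name, and payload are illustrative assumptions, not the actual test source:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"    /* selftests' helper header of that era; bpf/bpf_helpers.h in newer trees */

    /* illustrative map definition */
    struct bpf_map_def SEC("maps") perf_events = {
        .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
        .key_size = sizeof(int),
        .value_size = sizeof(int),
        .max_entries = 64,
    };

    SEC("tracepoint/syscalls/sys_enter_nanosleep")
    int emit_event(void *ctx)
    {
        __u64 cookie = 0xdeadbeef;

        /* write one record to the current CPU's perf ring buffer */
        bpf_perf_event_output(ctx, &perf_events, BPF_F_CURRENT_CPU,
                              &cookie, sizeof(cookie));
        return 0;
    }

    char _license[] SEC("license") = "GPL";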
-
- 24 Jul 2019: 5 commits
-
-
Submitted by Andrii Nakryiko
libbpf's perf_buffer API supersedes trace_helpers.h's perf helpers. Now that all existing users have been moved to the perf_buffer API, remove those helpers. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
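As a rough illustration of what the replacement looks like on the user-space side (the perf_buffer__new() signature has changed across libbpf versions; this sketch follows the opts-struct form of that era, and the map fd and page count are assumptions):

    #include <bpf/libbpf.h>

    static void on_sample(void *ctx, int cpu, void *data, __u32 size)
    {
        /* consume one record emitted via bpf_perf_event_output() */
    }

    /* map_fd is assumed to be the fd of a BPF_MAP_TYPE_PERF_EVENT_ARRAY map */
    static int poll_events(int map_fd)
    {
        struct perf_buffer_opts pb_opts = { .sample_cb = on_sample };
        struct perf_buffer *pb;
        int err;

        pb = perf_buffer__new(map_fd, 8 /* pages per CPU */, &pb_opts);
        err = libbpf_get_error(pb);
        if (err)
            return err;

        while ((err = perf_buffer__poll(pb, 100 /* ms */)) >= 0)
            ; /* callbacks fire from inside poll */

        perf_buffer__free(pb);
        return err;
    }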
-
Submitted by Andrii Nakryiko
Switch the test_tcpnotify test to libbpf's perf_buffer API instead of re-implementing a portion of it. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Andrii Nakryiko
Convert the test_get_stack_raw_tp test to the new perf_buffer API. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Andrii Nakryiko
When a BPF program defines an uninitialized global variable, it is put into the special COMMON section. Libbpf rejects such programs, but with a very unhelpful message containing a garbage-looking section index. This patch detects these special-section cases and gives a more explicit error message. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Roman Mashak
Signed-off-by: Roman Mashak <mrv@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 20 Jul 2019: 1 commit
-
-
Submitted by Thomas Huth
The code in vmx.c does not use "program_invocation_name", so there is no need to "#define _GNU_SOURCE" here. Signed-off-by: Thomas Huth <thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 19 Jul 2019: 14 commits
-
-
Submitted by Ilya Leoshkevich
test_xdp_noinline fails on s390 due to a handful of endianness issues. Use ntohs for parsing eth_proto, and replace the bswaps with ntohs/htons. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Vasily Gorbik <gor@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
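For illustration, the kind of change involved (function name and constant are illustrative; BPF programs typically use bpf_ntohs()/bpf_htons() from bpf_endian.h rather than the libc macros):

    #include <arpa/inet.h>    /* ntohs()/htons() */
    #include <stdint.h>

    #define ETH_P_IPV6 0x86DD    /* host-order constant, as in <linux/if_ether.h> */

    /* eth_proto arrives in network byte order */
    static int is_ipv6(uint16_t eth_proto_be)
    {
        /* ntohs() is a no-op on big-endian and a swap on little-endian,
         * so the comparison is correct on both; an unconditional
         * __builtin_bswap16() is only right on little-endian hosts. */
        return ntohs(eth_proto_be) == ETH_P_IPV6;
    }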
-
Submitted by Ilya Leoshkevich
This test looks up a 32-bit map element and then loads it using a 64-bit load, which does not work on s390, a big-endian machine. Since the point of the test does not seem to be loading a smaller value with a larger load, simply use a 32-bit load. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Josh Poimboeuf
A Clang-built kernel is showing the following warning:

    arch/x86/kernel/platform-quirks.o: warning: objtool: x86_early_init_platform_quirks()+0x84: unreachable instruction

That corresponds to this code:

    7e: 0f 85 00 00 00 00    jne    84 <x86_early_init_platform_quirks+0x84>
        80: R_X86_64_PC32    __x86_indirect_thunk_r11-0x4
    84: c3                   retq

This is a conditional retpoline sibling call, which is now possible thanks to retpolines. Objtool hasn't seen that before, and it incorrectly interprets the conditional jump as an unconditional dynamic jump. Reported-by: Nick Desaulniers <ndesaulniers@google.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/30d4c758b267ef487fb97e6ecb2f148ad007b554.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
This makes it easier to add new instruction types. It is also hopefully more robust, since the compiler should warn about out-of-range enums. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/0740e96af0d40e54cfd6a07bf09db0fbd10793cd.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
In one rare case, Clang generated the following code:

    5ca: 83 e0 21                and    $0x21,%eax
    5cd: b9 04 00 00 00          mov    $0x4,%ecx
    5d2: ff 24 c5 00 00 00 00    jmpq   *0x0(,%rax,8)
         5d5: R_X86_64_32S       .rodata+0x38

which uses the corresponding jump table relocations:

    000000000038 000200000001 R_X86_64_64 0000000000000000 .text + 834
    000000000040 000200000001 R_X86_64_64 0000000000000000 .text + 5d9
    000000000048 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000050 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000058 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000060 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000068 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000070 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000078 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000080 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000088 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000090 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000098 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000a0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000a8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000b0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000b8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000c0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000c8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000d0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000d8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000e0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000e8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000f0 000200000001 R_X86_64_64 0000000000000000 .text + b96
    0000000000f8 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000100 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000108 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000110 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000118 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000120 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000128 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000130 000200000001 R_X86_64_64 0000000000000000 .text + b96
    000000000138 000200000001 R_X86_64_64 0000000000000000 .text + 82f
    000000000140 000200000001 R_X86_64_64 0000000000000000 .text + 828

Since %eax was masked with 0x21, only the first two and the last two entries are possible. Objtool doesn't actually emulate all the code, so it isn't smart enough to know that all the middle entries aren't reachable. They point to the NOP padding area after the end of the function, so objtool segfaulted when it tried to dereference a NULL insn->func. After this fix, objtool still gives an "unreachable" error because it stops reading the jump table when it encounters the bad addresses:

    /home/jpoimboe/objtool-tests/adm1275.o: warning: objtool: adm1275_probe()+0x828: unreachable instruction

While the above code is technically correct, it is very wasteful of memory -- it uses 34 jump table entries when only 4 are needed. It is also not possible for objtool to validate this type of switch table, because the unused entries point outside the function and objtool has no way of determining whether that is intentional. Hopefully the Clang folks can fix it.
Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/a9db88eec4f1ca089e040989846961748238b6d8.1563413318.git.jpoimboe@redhat.com
-
Submitted by Jann Horn
This fixes objtool for both a GCC issue and a Clang issue:

1) GCC issue:

    kernel/bpf/core.o: warning: objtool: ___bpf_prog_run()+0x8d5: sibling call from callable instruction with modified stack frame

With CONFIG_RETPOLINE=n, GCC is doing the following optimization in ___bpf_prog_run().

Before:

    select_insn:
        jmp *jumptable(,%rax,8)
        ...
    ALU64_ADD_X:
        ...
        jmp select_insn
    ALU_ADD_X:
        ...
        jmp select_insn

After:

    select_insn:
        jmp *jumptable(, %rax, 8)
        ...
    ALU64_ADD_X:
        ...
        jmp *jumptable(, %rax, 8)
    ALU_ADD_X:
        ...
        jmp *jumptable(, %rax, 8)

This confuses objtool: it has never seen multiple indirect jump sites which use the same jump table. For GCC switch tables, the only way of detecting the size of a table is to continue scanning for more tables; the size of the previous table can only be determined after another switch table is found, or when the scan reaches the end of the function. That logic was reused for C jump tables and was based on the assumption that each jump table has only a single jump site. The above optimization breaks that assumption.

2) Clang issue:

    drivers/usb/misc/sisusbvga/sisusb.o: warning: objtool: sisusb_write_mem_bulk()+0x588: can't find switch jump table

With Clang 9, code can be generated where a function contains two indirect jump instructions which use the same switch table.

The fix is the same for both issues: split the jump table parsing into two passes. In the first pass, locate the heads of all switch tables for the function and mark their locations. In the second pass, parse the switch tables and add them. Fixes: e55a7325 ("bpf: Fix ORC unwinding in non-JIT BPF code") Reported-by: Randy Dunlap <rdunlap@infradead.org> Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/e995befaada9d4d8b2cf788ff3f566ba900d2b4d.1563413318.git.jpoimboe@redhat.com Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
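For readers unfamiliar with the pattern, this is roughly what a C jump table (computed-goto) dispatcher like ___bpf_prog_run() looks like; the optimization above duplicates the `goto *jumptable[...]` into each handler, so one table ends up with several jump sites. Opcode names and logic here are made up for illustration:

    /* minimal computed-goto interpreter sketch (GNU C labels-as-values);
     * assumes ops[] holds only valid opcodes and ends with opcode 0 */
    static int run(const unsigned char *ops)
    {
        static const void *jumptable[] = {
            [0] = &&op_halt,
            [1] = &&op_inc,
            [2] = &&op_dec,
        };
        int acc = 0, pc = 0;

        goto *jumptable[ops[pc]];
    op_inc:
        acc++;
        goto *jumptable[ops[++pc]];    /* each handler gets its own indirect jump */
    op_dec:
        acc--;
        goto *jumptable[ops[++pc]];
    op_halt:
        return acc;
    }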
-
Submitted by Josh Poimboeuf
Now that C jump tables are supported, call them "jump tables" instead of "switch tables". Also rename some other variables, add comments, and simplify the code flow a bit. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/cf951b0c0641628e0b9b81f7ceccd9bcabcb4bd8.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
Simplify the sibling call detection logic a bit. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/8357dbef9e7f5512e76bf83a76c81722fc09eb5e.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
Even calls to __noreturn functions need the frame pointer set up first, since such functions often dump the stack. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/aed62fbd60e239280218be623f751a433658e896.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
dead_end_function() can no longer return an error. Simplify its interface by making it return a boolean. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/9e6679610768fb6e6c51dca23f7d4d0c03b0c910.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
All callable functions should have an ELF size. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/03d429c4fa87829c61c5dc0e89652f4d9efb62f1.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
- Add an alias check in validate_functions(). With this change, aliases no longer need uaccess_safe set.
- Add an alias check in decode_instructions(). With this change, the "if (!insn->func)" check is no longer needed.
- Don't create aliases for zero-length functions, as that can have unexpected results. The next patch will spit out a warning for zero-length functions anyway.
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/26a99c31426540f19c9a58b9e10727c385a147bc.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
If 'insn->func' is NULL, objtool skips some important checks, including sibling call validation. So if some .fixup code does an invalid sibling call, objtool ignores it. Treat all code branches (including alts) as part of the original function by keeping track of the original func value from validate_functions(). This improves the usefulness of some Clang function fallthrough warnings and exposes some additional kernel bugs in the process. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/505df630f33c9717e1ccde6e4b64c5303135c25f.1563413318.git.jpoimboe@redhat.com
-
Submitted by Josh Poimboeuf
After an objtool improvement, it reports that __memcpy_mcsafe() is calling mcsafe_handle_tail() with AC=1:

    arch/x86/lib/memcpy_64.o: warning: objtool: .fixup+0x13: call to mcsafe_handle_tail() with UACCESS enabled
    arch/x86/lib/memcpy_64.o: warning: objtool: __memcpy_mcsafe()+0x34: (alt)
    arch/x86/lib/memcpy_64.o: warning: objtool: __memcpy_mcsafe()+0xb: (branch)
    arch/x86/lib/memcpy_64.o: warning: objtool: __memcpy_mcsafe()+0x0: <=== (func)

mcsafe_handle_tail() is basically an extension of __memcpy_mcsafe(), so AC=1 is supposed to be set. Add mcsafe_handle_tail() to the uaccess-safe list. Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/035c38f7eac845281d3c3d36749144982e06e58c.1563413318.git.jpoimboe@redhat.com
-
- 18 Jul 2019: 4 commits
-
-
Submitted by Michael Forney
The elftoolchain version of libelf has a function named elf_open(), so objtool's function of the same name clashes with it. The name isn't quite accurate anyway, since the function also reads all the ELF data. Rename it to elf_read(), which is more accurate. [ jpoimboe: rename to elf_read(); write commit description ] Signed-off-by: Michael Forney <mforney@mforney.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/7ce2d1b35665edf19fd0eb6fbc0b17b81a48e62f.1562793604.git.jpoimboe@redhat.com
-
Submitted by Michael Forney
The libelf implementation might use a different struct name, and the Elf_Scn typedef is already used throughout the rest of objtool. Signed-off-by: Michael Forney <mforney@mforney.org> Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/d270e1be2835fc2a10acf67535ff2ebd2145bf43.1562793448.git.jpoimboe@redhat.com
-
Submitted by Cong Wang
Add a test case to simulate the loopback packet case fixed in the previous patch. The test passes after the fix:

    IPv4 rp_filter tests
        TEST: rp_filter passes local packets    [ OK ]
        TEST: rp_filter passes loopback packets [ OK ]

Cc: David Ahern <dsahern@gmail.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Reviewed-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Submitted by Masahiro Yamada
While descending directories, Kbuild produces objects for modules but does not link the final *.ko files; that is done in the modpost. To keep track of modules, Kbuild creates a *.mod file in $(MODVERDIR) for every module it is building. Some post-processing steps read the necessary information from *.mod files, which avoids descending into directories again. This mechanism was introduced in 2003 or so. Later, commit 551559e1 ("kbuild: implement modules.order") added modules.order, so we can simply read it to learn all the modules with their directory paths. This is easier than parsing the first line of *.mod files.

$(MODVERDIR) has a flat directory structure, that is, *.mod files are named only with base names. This is based on the assumption that module names are unique across the tree, and that assumption is really fragile. Stephen Rothwell reported a race condition caused by a module name conflict: https://lkml.org/lkml/2019/5/13/991 In parallel builds, two different threads could write to the same $(MODVERDIR)/*.mod simultaneously. Non-unique module names are the source of all kinds of trouble, hence commit 3a48a919 ("kbuild: check uniqueness of module names") introduced a new checker script. However, this is still fragile from the build-system point of view, because the race happens before scripts/modules-check.sh is invoked. If it happens again, the modpost will emit unclear error messages. To fix this issue completely, create *.mod files with full directory paths so that two threads never attempt to write to the same file. $(MODVERDIR) is no longer needed. Since modules with directory paths are listed in modules.order, Kbuild is still able to find *.mod files without additional descending.

I also killed cmd_secanalysis; scripts/mod/sumversion.c computes an MD4 hash for modules with MODULE_VERSION(). When CONFIG_DEBUG_SECTION_MISMATCH=y, it runs not only in the modpost stage but also during directory descending, where sumversion.c may parse stale *.mod files. It would emit a 'No such file or directory' warning when an object that makes up a module is renamed, or when a single-object module is turned into a multi-object module or vice versa. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Acked-by: Nicolas Pitre <nico@fluxnic.net>
-
- 17 Jul 2019: 12 commits
-
-
Submitted by Dmitry V. Levin
Check whether the PTRACE_GET_SYSCALL_INFO semantics implemented in the kernel match userspace expectations. [akpm@linux-foundation.org: coding-style fixes] Link: http://lkml.kernel.org/r/20190510152852.GG28558@altlinux.org Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Acked-by: Shuah Khan <shuah@kernel.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Elvira Khabirova <lineprinter@altlinux.org> Cc: Eugene Syromyatnikov <esyr@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greentime Hu <greentime@andestech.com> Cc: Helge Deller <deller@gmx.de> [parisc] Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: James Hogan <jhogan@kernel.org> Cc: kbuild test robot <lkp@intel.com> Cc: Kees Cook <keescook@chromium.org> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Burton <paul.burton@mips.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Vincent Chen <deanbo422@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Elvira Khabirova
PTRACE_GET_SYSCALL_INFO is a generic ptrace API that lets a ptracer obtain details of the syscall the tracee is blocked in.

There are two reasons for a special syscall-related ptrace request. Firstly, with the current ptrace API there are cases when a ptracer cannot retrieve the necessary information about syscalls. Some examples include:

* The notorious int-0x80-from-64-bit-task issue. See [1] for details. In short, if a 64-bit task performs a syscall through int 0x80, its tracer has no reliable means to find out that the syscall was, in fact, a compat syscall, and misidentifies it.

* Syscall-enter-stop and syscall-exit-stop look the same to the tracer. Common practice is to keep track of the sequence of ptrace-stops in order not to mix the two syscall-stops up. But it is not as simple as it looks; for example, strace had a (just recently fixed) long-standing bug where attaching strace to a tracee that is performing the execve system call led to the tracer identifying the following syscall-exit-stop as a syscall-enter-stop, which messed up all the state tracking.

* Since the introduction of commit 84d77d3f ("ptrace: Don't allow accessing an undumpable mm"), both PTRACE_PEEKDATA and process_vm_readv become unavailable when the process dumpable flag is cleared. On such architectures as ia64 this results in all syscall arguments being unavailable for the tracer.

Secondly, ptracers also have to support a lot of arch-specific code for obtaining information about the tracee. For some architectures, this requires a ptrace(PTRACE_PEEKUSER, ...) invocation for every syscall argument and return value.

ptrace(2) man page:

    long ptrace(enum __ptrace_request request, pid_t pid, void *addr, void *data);
    ...
    PTRACE_GET_SYSCALL_INFO
        Retrieve information about the syscall that caused the stop. The information is placed into the buffer pointed to by the "data" argument, which should be a pointer to a buffer of type "struct ptrace_syscall_info". The "addr" argument contains the size of the buffer pointed to by the "data" argument (i.e., sizeof(struct ptrace_syscall_info)). The return value contains the number of bytes available to be written by the kernel. If the size of the data to be written by the kernel exceeds the size specified by the "addr" argument, the output is truncated.

[ldv@altlinux.org: selftests/seccomp/seccomp_bpf: update for PTRACE_GET_SYSCALL_INFO] Link: http://lkml.kernel.org/r/20190708182904.GA12332@altlinux.org Link: http://lkml.kernel.org/r/20190510152842.GF28558@altlinux.org Signed-off-by: Elvira Khabirova <lineprinter@altlinux.org> Co-developed-by: Dmitry V. Levin <ldv@altlinux.org> Signed-off-by: Dmitry V. Levin <ldv@altlinux.org> Reviewed-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Kees Cook <keescook@chromium.org> Reviewed-by: Andy Lutomirski <luto@kernel.org> Cc: Eugene Syromyatnikov <esyr@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Greentime Hu <greentime@andestech.com> Cc: Helge Deller <deller@gmx.de> [parisc] Cc: James E.J. Bottomley <jejb@parisc-linux.org> Cc: James Hogan <jhogan@kernel.org> Cc: kbuild test robot <lkp@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Burton <paul.burton@mips.com> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Shuah Khan <shuah@kernel.org> Cc: Vincent Chen <deanbo422@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
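A sketch of how a tracer might use the new request, based on the description above (error handling is trimmed, and depending on glibc/kernel header versions a local copy of struct ptrace_syscall_info may be needed):

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <linux/ptrace.h>    /* struct ptrace_syscall_info, PTRACE_GET_SYSCALL_INFO */

    /* pid is assumed to be a tracee currently in a ptrace-stop */
    static void show_syscall_info(pid_t pid)
    {
        struct ptrace_syscall_info info;
        long avail;

        /* addr = size of the buffer, data = the buffer itself */
        avail = ptrace(PTRACE_GET_SYSCALL_INFO, pid,
                       (void *)sizeof(info), &info);
        if (avail <= 0)
            return;

        switch (info.op) {
        case PTRACE_SYSCALL_INFO_ENTRY:
            printf("entry: nr=%llu arg0=%#llx\n",
                   (unsigned long long)info.entry.nr,
                   (unsigned long long)info.entry.args[0]);
            break;
        case PTRACE_SYSCALL_INFO_EXIT:
            printf("exit: rval=%lld is_error=%u\n",
                   (long long)info.exit.rval, info.exit.is_error);
            break;
        }
    }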
-
Submitted by Alexey Dobriyan
I thought that /proc/sysvipc has the same bug as /proc/net (commit 1fde6f21, "proc: fix /proc/net/* after setns(2)"). However, it doesn't! /proc/sysvipc files do get_ipc_ns(current->nsproxy->ipc_ns) in their open() hook and avoid the problem. Keep the test anyway; maybe /proc/sysvipc will become broken someday :-\ Link: http://lkml.kernel.org/r/20190706180146.GA21015@avx2 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Alexey Dobriyan
The test tries to access the vsyscall page and, if it doesn't exist, gets a SIGSEGV which can spam into dmesg. However, the segfault happens by design. Handle it and carry the information to the parent via the exit code. Link: http://lkml.kernel.org/r/20190524181256.GA2260@avx2 Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
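A sketch of the handle-and-report pattern being described, assuming the legacy x86-64 vsyscall address; the function name and exit-code choices are illustrative:

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void on_sigsegv(int sig)
    {
        _exit(2);    /* vsyscall page not mapped -- expected on vsyscall=none */
    }

    static int vsyscall_page_readable(void)
    {
        pid_t pid = fork();
        int status;

        if (pid == 0) {
            struct sigaction sa = { .sa_handler = on_sigsegv };

            sigaction(SIGSEGV, &sa, NULL);
            /* a fault here is caught above instead of killing the child
             * with an unhandled SIGSEGV (which is what spams dmesg) */
            (void)*(volatile char *)0xffffffffff600000UL;
            _exit(0);
        }
        waitpid(pid, &status, 0);
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }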
-
Submitted by Ilya Leoshkevich
The perf_buffer test fails for exactly the same reason test_attach_probe used to fail: a different nanosleep syscall kprobe name. Reuse the test_attach_probe fix. Fixes: ee5cf82c ("selftests/bpf: test perf buffer API") Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
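The fix being reused boils down to picking the right symbol name per architecture; a sketch of the idea (the exact macro spellings here are an assumption, not a quote of the selftest):

    /* the syscall wrapper symbol a kprobe must attach to is arch-specific */
    #ifdef __x86_64__
    #define SYS_NANOSLEEP_KPROBE_NAME "__x64_sys_nanosleep"
    #elif defined(__s390x__)
    #define SYS_NANOSLEEP_KPROBE_NAME "__s390x_sys_nanosleep"
    #else
    #define SYS_NANOSLEEP_KPROBE_NAME "sys_nanosleep"
    #endif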
-
Submitted by Andrii Nakryiko
It's easier to follow the logic if it's structured the same way across test runners. There is just a slight difference between test_progs/test_maps and test_verifier: test_verifier's verifier/*.c files are not really compilable C files (they are more like include headers), so they can't be specified as explicit dependencies of test_verifier. Cc: Alexei Starovoitov <ast@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Andrii Nakryiko
Commit e46fc22e ("selftests/bpf: make directory prerequisites order-only") exposed an existing problem in the Makefile for the test_verifier and test_maps tests: their dependency on the auto-generated header file with the list of all tests wasn't recorded explicitly. This patch fixes these issues. Fixes: 51a0e301 ("bpf: Add BPF_MAP_TYPE_SK_STORAGE test to test_maps") Fixes: 6b7b6995 ("selftests: bpf: tests.h should depend on .c files, not the output") Cc: Ilya Leoshkevich <iii@linux.ibm.com> Cc: Stanislav Fomichev <sdf@google.com> Cc: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
-
Submitted by Steven Rostedt (VMware)
While testing on a very old kernel (3.5), the tests failed because set_event_pid, which the setup code writes to, did not exist. The tests themselves could pass, but the failing setup caused an error. Other files are checked for existence before being written to; do the same for set_event_pid and set_ftrace_pid. Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Submitted by Steven Rostedt (VMware)
If the kernel is not configured with ftrace enabled, the ftracetest selftests should return the error code "4", which is the kselftest "skip" code, and not "1", which means an error. To determine whether ftrace is enabled, the newer "tracefs" is searched for first in /proc/mounts. If it is not found, "debugfs" is searched for (as old kernels do not have tracefs). If that is not found either, an attempt to mount tracefs or debugfs is performed. This is done by first checking whether the /sys/kernel/tracing directory exists. If it does, then tracefs is configured in the kernel and an attempt to mount it is performed. If /sys/kernel/tracing does not exist, then /sys/kernel/debug is tested to see if that directory exists. If it does, an attempt is made to mount debugfs on that directory. If it does not exist, then debugfs is not configured in the running kernel and the test exits with the skip code. If either mount fails, a normal error is returned, as the file systems do exist in the kernel but something went wrong while mounting them. This changes the test to always try the tracefs file system first, as it has been in the kernel for some time now and it is better to test it if it is available instead of always testing debugfs. Link: http://lkml.kernel.org/r/20190702062358.7330-1-po-hsu.lin@canonical.com Reported-by: Po-Hsu Lin <po-hsu.lin@canonical.com> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
-
Submitted by Andrii Nakryiko
A similar issue was already fixed in cdfc7f88 ("libbpf: fix GCC8 warning for strncpy"); this one was missed. Fix it now. Cc: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
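The warning in question is GCC 8's -Wstringop-truncation; the usual remedy looks like the following generic sketch (not the exact hunk applied here):

    #include <string.h>

    /* copy at most dst_size-1 bytes and always NUL-terminate, so the
     * compiler can see the terminator is never lost */
    static void copy_ifname(char *dst, size_t dst_size, const char *src)
    {
        strncpy(dst, src, dst_size - 1);
        dst[dst_size - 1] = '\0';
    }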
-
Submitted by Ilya Leoshkevich
Some setups (e.g. virtual machines) might run with hardware perf events disabled. If this is the case, skip the test_send_signal_nmi test. Add a separate test involving a software perf event, which allows testing the perf event path regardless of hardware perf event support. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
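A sketch of what opening such a software event looks like (attribute choices are illustrative, not the test's exact configuration):

    #include <linux/perf_event.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Software events like PERF_COUNT_SW_CPU_CLOCK do not need a hardware
     * PMU, so they work in VMs where PERF_TYPE_HARDWARE events are
     * unavailable. */
    static int open_sw_event(void)
    {
        struct perf_event_attr attr;

        memset(&attr, 0, sizeof(attr));
        attr.size = sizeof(attr);
        attr.type = PERF_TYPE_SOFTWARE;
        attr.config = PERF_COUNT_SW_CPU_CLOCK;
        attr.sample_period = 1;

        /* pid = 0 (self), cpu = -1 (any), group_fd = -1, flags = 0 */
        return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }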
-
Submitted by Ilya Leoshkevich
BPF_LDX_MEM is used to load the least significant byte of the retrieved test_val.index; on big-endian machines, however, it ends up retrieving the most significant byte. Change the test to load the whole int in order to make it endianness-independent. Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org>
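In instruction-macro terms, the change is about the load width; a sketch (register and offset choices are illustrative, and BPF_LDX_MEM() comes from the tools/include copy of linux/filter.h used by the selftests):

    #include <linux/bpf.h>
    #include <linux/filter.h>    /* BPF_LDX_MEM(); selftests use the tools/include copy */

    struct bpf_insn index_loads[] = {
        /* endianness-dependent: a 1-byte load at offset 0 of a 4-byte int
         * reads the least significant byte on little-endian but the most
         * significant byte on big-endian machines */
        BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),

        /* endianness-independent: load the whole int */
        BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
    };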
-
- 16 Jul 2019: 2 commits
-
-
Submitted by Andrii Nakryiko
test_verifier tests can specify single- and multi-run tests. Internally, the logic for handling them is duplicated. Get rid of the duplication by making the single-run retval/data specification the spec of the first run. Signed-off-by: Andrii Nakryiko <andriin@fb.com> Cc: Krzesimir Nowak <krzesimir@kinvolk.io> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
-
Submitted by Stanislav Fomichev
Update the bpf_sock_addr comments to indicate support for 8-byte reads from user_ip6 and msg_src_ip6. Cc: Yonghong Song <yhs@fb.com> Signed-off-by: Stanislav Fomichev <sdf@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
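A sketch of the access pattern the updated comments document: user_ip6 is declared as __u32[4], but aligned 8-byte loads are also accepted, so an IPv6 address can be read in two loads. The section name, program logic, and helper header path are assumptions:

    #include <linux/bpf.h>
    #include "bpf_helpers.h"    /* SEC(); bpf/bpf_helpers.h in newer trees */

    SEC("cgroup/connect6")
    int check_dst(struct bpf_sock_addr *ctx)
    {
        __u64 hi = *(__u64 *)&ctx->user_ip6[0];    /* bytes 0-7 of the address */
        __u64 lo = *(__u64 *)&ctx->user_ip6[2];    /* bytes 8-15 */

        /* allow anything except the unspecified address :: */
        return (hi | lo) != 0;
    }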
-