1. 13 December 2017 (2 commits)
    • bpf: add a bpf_override_function helper · 9802d865
      Authored by Josef Bacik
      Error injection is sloppy and very ad-hoc.  BPF could fill this niche
      perfectly with its kprobe functionality.  We could make sure errors are
      only triggered in the specific call chains and situations we care
      about.  Accomplish this with the bpf_override_function helper.  It
      modifies the probed function's return value to the specified value and
      sets the PC to an override function that simply returns, bypassing the
      originally probed function.  This gives us a nice clean way to
      implement systematic error injection for all of our code paths.
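
      As an illustration only, a minimal sketch of a kprobe program using the
      helper, modeled on the sample that accompanied this patch; the helper
      name bpf_override_return(ctx, rc), the probed function, and the return
      code are assumptions for the example:

        #include <uapi/linux/bpf.h>
        #include <uapi/linux/ptrace.h>
        #include "bpf_helpers.h"

        SEC("kprobe/open_ctree")
        int override_open_ctree(struct pt_regs *ctx)
        {
                unsigned long rc = -12; /* -ENOMEM */

                /* Make the probed function return rc immediately,
                 * bypassing its body entirely. */
                bpf_override_return(ctx, rc);
                return 0;
        }

        char _license[] SEC("license") = "GPL";
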
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
    • bpf/tracing: allow user space to query prog array on the same tp · f371b304
      Authored by Yonghong Song
      Commit e87c6bc3 ("bpf: permit multiple bpf attachments
      for a single perf event") added support for attaching multiple
      bpf programs to a single perf event.
      Although this provides flexibility, users may want to know
      which other bpf programs are attached to the same tp interface.
      Besides providing visibility into the underlying bpf system,
      such information may also help consolidate multiple bpf programs,
      understand potential performance issues due to a large array,
      and debug (e.g., one bpf program that overwrites the return
      code may impact subsequent program results).
      
      Commit 2541517c ("tracing, perf: Implement BPF programs
      attached to kprobes") utilized the existing perf ioctl
      interface and added the command PERF_EVENT_IOC_SET_BPF
      to attach a bpf program to a tracepoint. This patch adds a new
      ioctl command, given a perf event fd, to query the bpf program
      array attached to the same perf tracepoint event.
      
      The new uapi ioctl command:
        PERF_EVENT_IOC_QUERY_BPF
      
      The new uapi/linux/perf_event.h structure:
        struct perf_event_query_bpf {
             __u32	ids_len;
             __u32	prog_cnt;
             __u32	ids[0];
        };
      
      User space provides the buffer "ids" for the kernel to copy into.
      On return from the kernel, the number of available
      programs in the array is set in "prog_cnt".
      
      The usage:
        struct perf_event_query_bpf *query =
          malloc(sizeof(*query) + sizeof(__u32) * ids_len);
        query->ids_len = ids_len;
        err = ioctl(pmu_efd, PERF_EVENT_IOC_QUERY_BPF, query);
        if (err == 0) {
          /* query->prog_cnt is the number of available progs,
           * number of progs in ids: (ids_len == 0) ? 0 : query->prog_cnt
           */
        } else if (errno == ENOSPC) {
          /* query->ids_len number of progs copied,
           * query->prog_cnt is the number of available progs
           */
        } else {
            /* other errors */
        }
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
  2. 01 December 2017 (1 commit)
  3. 28 November 2017 (1 commit)
    • blktrace: fix trace mutex deadlock · 2967acbb
      Authored by Jens Axboe
      A previous commit changed the locking around registration/cleanup,
      but direct callers of blk_trace_remove() were missed. This means
      that if we hit the error path in setup, we will deadlock on
      attempting to re-acquire the queue trace mutex.
      
      Fixes: 1f2cac10 ("blktrace: fix unlocked access to init/start-stop/teardown")
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  4. 23 November 2017 (3 commits)
    • bpf: change bpf_perf_event_output arg5 type to ARG_CONST_SIZE_OR_ZERO · a60dd35d
      Authored by Gianluca Borello
      Commit 9fd29c08 ("bpf: improve verifier ARG_CONST_SIZE_OR_ZERO
      semantics") relaxed the treatment of ARG_CONST_SIZE_OR_ZERO due to the way
      the compiler generates optimized BPF code when checking boundaries of an
      argument from C code. A typical example of this optimized code can be
      generated using the bpf_perf_event_output helper when operating on variable
      memory:
      
      /* len is a generic scalar */
      if (len > 0 && len <= 0x7fff)
              bpf_perf_event_output(ctx, &perf_map, 0, buf, len);
      
      110: (79) r5 = *(u64 *)(r10 -40)
      111: (bf) r1 = r5
      112: (07) r1 += -1
      113: (25) if r1 > 0x7ffe goto pc+6
      114: (bf) r1 = r6
      115: (18) r2 = 0xffff94e5f166c200
      117: (b7) r3 = 0
      118: (bf) r4 = r7
      119: (85) call bpf_perf_event_output#25
      R5 min value is negative, either use unsigned or 'var &= const'
      
      With this code, the verifier loses track of the variable.
      
      Replacing arg5 with ARG_CONST_SIZE_OR_ZERO is thus desirable since it
      avoids this quite common case which leads to usability issues, and the
      compiler generates code that the verifier can more easily test:
      
      if (len <= 0x7fff)
              bpf_perf_event_output(ctx, &perf_map, 0, buf, len);
      
      or
      
      bpf_perf_event_output(ctx, &perf_map, 0, buf, len & 0x7fff);
      
      No changes to the bpf_perf_event_output helper are necessary since it
      can handle the case where size is 0, in which an empty frame is pushed.
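
      For illustration, the verifier-friendly pattern above in a
      self-contained sketch; the map definition, probe point, and buffer size
      are assumptions for the example:

        #include <uapi/linux/bpf.h>
        #include <uapi/linux/ptrace.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") perf_map = {
                .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
                .key_size = sizeof(int),
                .value_size = sizeof(__u32),
                .max_entries = 64,
        };

        SEC("kprobe/sys_write")
        int emit_event(struct pt_regs *ctx)
        {
                char buf[256] = {};
                __u64 len = PT_REGS_PARM3(ctx); /* a generic scalar */

                /* One unsigned upper-bound check suffices: len == 0 is now
                 * accepted, so no len > 0 test is needed and the verifier
                 * keeps tracking the range of len. */
                if (len <= sizeof(buf))
                        bpf_perf_event_output(ctx, &perf_map, 0, buf, len);
                return 0;
        }

        char _license[] SEC("license") = "GPL";
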
      Reported-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: change bpf_probe_read_str arg2 type to ARG_CONST_SIZE_OR_ZERO · 5c4e1201
      Authored by Gianluca Borello
      Commit 9fd29c08 ("bpf: improve verifier ARG_CONST_SIZE_OR_ZERO
      semantics") relaxed the treatment of ARG_CONST_SIZE_OR_ZERO due to the way
      the compiler generates optimized BPF code when checking boundaries of an
      argument from C code. A typical example of this optimized code can be
      generated using the bpf_probe_read_str helper when operating on variable
      memory:
      
      /* len is a generic scalar */
      if (len > 0 && len <= 0x7fff)
              bpf_probe_read_str(p, len, s);
      
      251: (79) r1 = *(u64 *)(r10 -88)
      252: (07) r1 += -1
      253: (25) if r1 > 0x7ffe goto pc-42
      254: (bf) r1 = r7
      255: (79) r2 = *(u64 *)(r10 -88)
      256: (bf) r8 = r4
      257: (85) call bpf_probe_read_str#45
      R2 min value is negative, either use unsigned or 'var &= const'
      
      With this code, the verifier loses track of the variable.
      
      Replacing arg2 with ARG_CONST_SIZE_OR_ZERO is thus desirable since it
      avoids this quite common case which leads to usability issues, and the
      compiler generates code that the verifier can more easily test:
      
      if (len <= 0x7fff)
              bpf_probe_read_str(p, len, s);
      
      or
      
      bpf_probe_read_str(p, len & 0x7fff, s);
      
      No changes to the bpf_probe_read_str helper are necessary since
      strncpy_from_unsafe itself immediately returns if the size passed is 0.
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
    • bpf: remove explicit handling of 0 for arg2 in bpf_probe_read · eb33f2cc
      Authored by Gianluca Borello
      Commit 9c019e2b ("bpf: change helper bpf_probe_read arg2 type to
      ARG_CONST_SIZE_OR_ZERO") changed arg2 type to ARG_CONST_SIZE_OR_ZERO to
      simplify writing bpf programs by taking advantage of the new semantics
      introduced for ARG_CONST_SIZE_OR_ZERO which allows <!NULL, 0> arguments.
      
      In order to prevent the helper from actually passing a NULL pointer to
      probe_kernel_read, which can happen when <NULL, 0> is passed to the helper,
      the commit also introduced an explicit check against size == 0.
      
      After the recent introduction of the ARG_PTR_TO_MEM_OR_NULL type,
      bpf_probe_read can no longer receive a pair of <NULL, 0> arguments, so
      the check is no longer needed and can be removed, since probe_kernel_read
      can correctly handle a <!NULL, 0> call. This also fixes the semantics of
      the helper before it is officially released and bpf programs start
      relying on this check.
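
      For reference, a sketch of the helper's shape after the change, based
      on the description above rather than the exact source:

        BPF_CALL_3(bpf_probe_read, void *, dst, u32, size, const void *, unsafe_ptr)
        {
                int ret;

                /* The explicit size == 0 early return is gone: <NULL, 0> can
                 * no longer reach the helper, and probe_kernel_read handles
                 * a <!NULL, 0> call correctly. */
                ret = probe_kernel_read(dst, unsafe_ptr, size);
                if (unlikely(ret < 0))
                        memset(dst, 0, size);

                return ret;
        }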
      
      Fixes: 9c019e2b ("bpf: change helper bpf_probe_read arg2 type to ARG_CONST_SIZE_OR_ZERO")
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
  5. 20 November 2017 (1 commit)
  6. 16 November 2017 (1 commit)
  7. 14 November 2017 (1 commit)
  8. 11 November 2017 (4 commits)
    • bpf: Revert bpf_override_function() helper changes. · f3edacbd
      Authored by David S. Miller
      NACK'd by x86 maintainer.
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: add a bpf_override_function helper · dd0bb688
      Authored by Josef Bacik
      Error injection is sloppy and very ad-hoc.  BPF could fill this niche
      perfectly with its kprobe functionality.  We could make sure errors are
      only triggered in the specific call chains and situations we care
      about.  Accomplish this with the bpf_override_function helper.  It
      modifies the probed function's return value to the specified value and
      sets the PC to an override function that simply returns, bypassing the
      originally probed function.  This gives us a nice clean way to
      implement systematic error injection for all of our code paths.
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Josef Bacik <jbacik@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • blktrace: fix unlocked registration of tracepoints · a6da0024
      Authored by Jens Axboe
      We need to ensure that tracepoint registration and unregistration
      stay in sync with their users. The existing atomic count isn't enough
      for that. Add a lock around the tracepoints, so we serialize access
      to them.
      
      This fixes cases where we have multiple users setting up and
      tearing down tracepoints, like this:
      
      CPU: 0 PID: 2995 Comm: syzkaller857118 Not tainted
      4.14.0-rc5-next-20171018+ #36
      Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS
      Google 01/01/2011
      Call Trace:
        __dump_stack lib/dump_stack.c:16 [inline]
        dump_stack+0x194/0x257 lib/dump_stack.c:52
        panic+0x1e4/0x41c kernel/panic.c:183
        __warn+0x1c4/0x1e0 kernel/panic.c:546
        report_bug+0x211/0x2d0 lib/bug.c:183
        fixup_bug+0x40/0x90 arch/x86/kernel/traps.c:177
        do_trap_no_signal arch/x86/kernel/traps.c:211 [inline]
        do_trap+0x260/0x390 arch/x86/kernel/traps.c:260
        do_error_trap+0x120/0x390 arch/x86/kernel/traps.c:297
        do_invalid_op+0x1b/0x20 arch/x86/kernel/traps.c:310
        invalid_op+0x18/0x20 arch/x86/entry/entry_64.S:905
      RIP: 0010:tracepoint_add_func kernel/tracepoint.c:210 [inline]
      RIP: 0010:tracepoint_probe_register_prio+0x397/0x9a0 kernel/tracepoint.c:283
      RSP: 0018:ffff8801d1d1f6c0 EFLAGS: 00010293
      RAX: ffff8801d22e8540 RBX: 00000000ffffffef RCX: ffffffff81710f07
      RDX: 0000000000000000 RSI: ffffffff85b679c0 RDI: ffff8801d5f19818
      RBP: ffff8801d1d1f7c8 R08: ffffffff81710c10 R09: 0000000000000004
      R10: ffff8801d1d1f6b0 R11: 0000000000000003 R12: ffffffff817597f0
      R13: 0000000000000000 R14: 00000000ffffffff R15: ffff8801d1d1f7a0
        tracepoint_probe_register+0x2a/0x40 kernel/tracepoint.c:304
        register_trace_block_rq_insert include/trace/events/block.h:191 [inline]
        blk_register_tracepoints+0x1e/0x2f0 kernel/trace/blktrace.c:1043
        do_blk_trace_setup+0xa10/0xcf0 kernel/trace/blktrace.c:542
        blk_trace_setup+0xbd/0x180 kernel/trace/blktrace.c:564
        sg_ioctl+0xc71/0x2d90 drivers/scsi/sg.c:1089
        vfs_ioctl fs/ioctl.c:45 [inline]
        do_vfs_ioctl+0x1b1/0x1520 fs/ioctl.c:685
        SYSC_ioctl fs/ioctl.c:700 [inline]
        SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
        entry_SYSCALL_64_fastpath+0x1f/0xbe
      RIP: 0033:0x444339
      RSP: 002b:00007ffe05bb5b18 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
      RAX: ffffffffffffffda RBX: 00000000006d66c0 RCX: 0000000000444339
      RDX: 000000002084cf90 RSI: 00000000c0481273 RDI: 0000000000000009
      RBP: 0000000000000082 R08: 0000000000000000 R09: 0000000000000000
      R10: 0000000000000000 R11: 0000000000000206 R12: ffffffffffffffff
      R13: 00000000c0481273 R14: 0000000000000000 R15: 0000000000000000
      
      since we can now run these in parallel. Ensure that the exported helpers
      for doing this grab the queue trace mutex.
      Reported-by: Steven Rostedt <rostedt@goodmis.org>
      Tested-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blktrace: fix unlocked access to init/start-stop/teardown · 1f2cac10
      Authored by Jens Axboe
      sg.c calls into the blktrace functions without holding the proper queue
      mutex for doing setup, start/stop, or teardown.
      
      Add internal unlocked variants, and export the ones that do the proper
      locking.
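
      A sketch of the resulting pattern; the teardown body is elided and the
      exact names are assumptions based on the description:

        /* unlocked variant: caller must hold q->blk_trace_mutex */
        static int __blk_trace_remove(struct request_queue *q)
        {
                /* ... actual teardown elided ... */
                return 0;
        }

        /* exported variant: takes the mutex around the unlocked helper */
        int blk_trace_remove(struct request_queue *q)
        {
                int ret;

                mutex_lock(&q->blk_trace_mutex);
                ret = __blk_trace_remove(q);
                mutex_unlock(&q->blk_trace_mutex);

                return ret;
        }
        EXPORT_SYMBOL_GPL(blk_trace_remove);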
      
      Fixes: 6da127ad ("blktrace: Add blktrace ioctls to SCSI generic devices")
      Tested-by: Dmitry Vyukov <dvyukov@google.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  9. 02 November 2017 (1 commit)
    • License cleanup: add SPDX GPL-2.0 license identifier to files with no license · b2441318
      Authored by Greg Kroah-Hartman
      Many source files in the tree are missing licensing information, which
      makes it harder for compliance tools to determine the correct license.
      
      By default all files without license information are under the default
      license of the kernel, which is GPL version 2.
      
      Update the files which contain no license information with the 'GPL-2.0'
      SPDX license identifier.  The SPDX identifier is a legally binding
      shorthand, which can be used instead of the full boilerplate text.
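
      For illustration, the identifier is a single comment at the top of each
      file; kernel convention uses different comment styles for .c sources
      and headers (see the script notes at the end of this message):

        // SPDX-License-Identifier: GPL-2.0
        /* first line of a .c source file */

        /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
        /* first line of a uapi header */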
      
      This patch is based on work done by Thomas Gleixner and Kate Stewart and
      Philippe Ombredanne.
      
      How this work was done:
      
      Patches were generated and checked against linux-4.14-rc6 for a subset of
      the use cases:
       - file had no licensing information in it,
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information.
      
      Further patches will be generated in subsequent months to fix up cases
      where non-standard license headers were used, and where the license
      had to be inferred by heuristics based on keywords.
      
      The analysis to determine which SPDX License Identifier should be
      applied to a file was done in a spreadsheet of side-by-side results
      from the output of two independent scanners (ScanCode & Windriver)
      producing SPDX tag:value files created by Philippe Ombredanne.
      Philippe prepared the base worksheet, and did an initial spot review
      of a few thousand files.
      
      The 4.13 kernel was the starting point of the analysis with 60,537 files
      assessed.  Kate Stewart did a file by file comparison of the scanner
      results in the spreadsheet to determine which SPDX license identifier(s)
      should be applied to the file. She confirmed any determination that was not
      immediately clear with lawyers working with the Linux Foundation.
      
      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5
         lines of source
       - File already had some variant of a license header in it (even if <5
         lines).
      
      All documentation files were explicitly excluded.
      
      The following heuristics were used to determine which SPDX license
      identifiers to apply.
      
       - when both scanners couldn't find any license traces, file was
         considered to have no license information in it, and the top level
         COPYING file license applied.
      
         For non */uapi/* files that summary was:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0                                              11139
      
         and resulted in the first patch in this series.
      
         If that file was a */uapi/* path one, it was "GPL-2.0 WITH
         Linux-syscall-note", otherwise it was "GPL-2.0".  The results were:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|-------
         GPL-2.0 WITH Linux-syscall-note                        930
      
         and resulted in the second patch in this series.
      
       - if a file had some form of licensing information in it, and was one
         of the */uapi/* ones, it was denoted with the Linux-syscall-note if
         any GPL family license was found in the file or had no licensing in
         it (per prior point).  Results summary:
      
         SPDX license identifier                            # files
         ---------------------------------------------------|------
         GPL-2.0 WITH Linux-syscall-note                       270
         GPL-2.0+ WITH Linux-syscall-note                      169
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)    21
         ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)    17
         LGPL-2.1+ WITH Linux-syscall-note                      15
         GPL-1.0+ WITH Linux-syscall-note                       14
         ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)    5
         LGPL-2.0+ WITH Linux-syscall-note                       4
         LGPL-2.1 WITH Linux-syscall-note                        3
         ((GPL-2.0 WITH Linux-syscall-note) OR MIT)              3
         ((GPL-2.0 WITH Linux-syscall-note) AND MIT)             1
      
         and that resulted in the third patch in this series.
      
       - when the two scanners agreed on the detected license(s), that became
         the concluded license(s).
      
       - when there was disagreement between the two scanners (one detected a
         license but the other didn't, or they both detected different
         licenses) a manual inspection of the file occurred.
      
       - In most cases a manual inspection of the information in the file
         resulted in a clear resolution of the license that should apply (and
         which scanner probably needed to revisit its heuristics).
      
       - When it was not immediately clear, the license identifier was
         confirmed with lawyers working with the Linux Foundation.
      
       - If there was any question as to the appropriate license identifier,
         the file was flagged for further research and to be revisited later
         in time.
      
      In total, over 70 hours of logged manual review was done on the
      spreadsheet to determine the SPDX license identifiers to apply to the
      source files by Kate, Philippe, Thomas and, in some cases, confirmation
      by lawyers working with the Linux Foundation.
      
      Kate also obtained a third independent scan of the 4.13 code base from
      FOSSology, and compared selected files where the other two scanners
      disagreed against that SPDX file, to see if there were new insights.  The
      Windriver scanner is based on an older version of FOSSology in part, so
      they are related.
      
      Thomas did random spot checks in about 500 files from the spreadsheets
      for the uapi headers and agreed with the SPDX license identifier in the
      files he inspected. For the non-uapi files Thomas did random spot checks
      in about 15000 files.
      
      In the initial set of patches against 4.14-rc6, 3 files were found to have
      copy/paste license identifier errors, and have been fixed to reflect the
      correct identifier.
      
      Additionally Philippe spent 10 hours this week doing a detailed manual
      inspection and review of the 12,461 patched files from the initial patch
      version early this week with:
       - a full scancode scan run, collecting the matched texts, detected
         license ids and scores
       - reviewing anything where there was a license detected (about 500+
         files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license
         was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
         SPDX license was correct
      
      This produced a worksheet with 20 files needing minor correction.  This
      worksheet was then exported into 3 different .csv files for the
      different types of files to be modified.
      
      These .csv files were then reviewed by Greg.  Thomas wrote a script to
      parse the csv files and add the proper SPDX tag to the file, in the
      format that the file expected.  This script was further refined by Greg
      based on the output to detect more types of files automatically and to
      distinguish between header and source .c files (which need different
      comment types).  Finally Greg ran the script using the .csv files to
      generate the patches.
      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  10. 01 November 2017 (1 commit)
  11. 27 October 2017 (2 commits)
    • bpf: remove tail_call and get_stackid helper declarations from bpf.h · 035226b9
      Authored by Gianluca Borello
      commit afdb09c7 ("security: bpf: Add LSM hooks for bpf object related
      syscall") included linux/bpf.h in linux/security.h. As a result, bpf
      programs including bpf_helpers.h and some other header that also ends
      up pulling in security.h, such as several examples under samples/bpf,
      fail to compile because bpf_tail_call and bpf_get_stackid are now
      "redefined as different kind of symbol".
      
      From bpf.h:
      
      u64 bpf_tail_call(u64 ctx, u64 r2, u64 index, u64 r4, u64 r5);
      u64 bpf_get_stackid(u64 r1, u64 r2, u64 r3, u64 r4, u64 r5);
      
      Whereas in bpf_helpers.h they are:
      
      static void (*bpf_tail_call)(void *ctx, void *map, int index);
      static int (*bpf_get_stackid)(void *ctx, void *map, int flags);
      
      Fix this by removing the unused declaration of bpf_tail_call and moving
      the declaration of bpf_get_stackid into bpf_trace.c, which is the only
      place where it's needed.
      Signed-off-by: Gianluca Borello <g.borello@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • perf/bpf: Extend the perf_event_read_local() interface, a.k.a. "bpf: perf event change needed for subsequent bpf helpers" · 7d9285e8
      Authored by Yonghong Song
      
      eBPF programs would like access to the (perf) event enabled and
      running times along with the event value, such that they can deal with
      event multiplexing (among other things).
      
      This patch extends the interface; a future eBPF patch will utilize
      the new functionality.
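
      The extended prototype, as it reads after this change:

        /* Returns 0 on success; *enabled and *running receive the event's
         * enabled and running times, and either pointer may be NULL when
         * the caller does not need that value. */
        int perf_event_read_local(struct perf_event *event, u64 *value,
                                  u64 *enabled, u64 *running);
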
      
      [ Note, there's a same-content commit with a poor changelog and a meaningless
        title in the networking tree as well - but we need this change for subsequent
        perf work, so apply it here as well, with a proper changelog. Hopefully Git
        will be able to sort out this somewhat messy workflow, if there are no other,
        conflicting changes to these files. ]
      Signed-off-by: Yonghong Song <yhs@fb.com>
      [ Rewrote the changelog. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <ast@fb.com>
      Cc: <daniel@iogearbox.net>
      Cc: <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: David S. Miller <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20171005161923.332790-2-yhs@fb.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  12. 25 October 2017 (2 commits)
    • locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() · 6aa7de05
      Authored by Mark Rutland
      
      Please do not apply this to mainline directly, instead please re-run the
      coccinelle script shown below and apply its output.
      
      For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
      preference to ACCESS_ONCE(), and new code is expected to use one of the
      former. So far, there's been no reason to change most existing uses of
      ACCESS_ONCE(), as these aren't harmful, and changing them results in
      churn.
      
      However, for some features, the read/write distinction is critical to
      correct operation. To distinguish these cases, separate read/write
      accessors must be used. This patch migrates (most) remaining
      ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
      coccinelle script:
      
      ----
      // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
      // WRITE_ONCE()
      
      // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch
      
      virtual patch
      
      @ depends on patch @
      expression E1, E2;
      @@
      
      - ACCESS_ONCE(E1) = E2
      + WRITE_ONCE(E1, E2)
      
      @ depends on patch @
      expression E;
      @@
      
      - ACCESS_ONCE(E)
      + READ_ONCE(E)
      ----
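
      The effect on a typical use, with a hypothetical field for illustration:

        /* before */
        ACCESS_ONCE(counter->seq) = new_seq;
        cur = ACCESS_ONCE(counter->seq);

        /* after */
        WRITE_ONCE(counter->seq, new_seq);
        cur = READ_ONCE(counter->seq);
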
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: davem@davemloft.net
      Cc: linux-arch@vger.kernel.org
      Cc: mpe@ellerman.id.au
      Cc: shuah@kernel.org
      Cc: snitzer@redhat.com
      Cc: thor.thayer@linux.intel.com
      Cc: tj@kernel.org
      Cc: viro@zeniv.linux.org.uk
      Cc: will.deacon@arm.com
      Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • bpf: permit multiple bpf attachments for a single perf event · e87c6bc3
      Authored by Yonghong Song
      This patch enables multiple bpf attachments for a single
      kprobe/uprobe/tracepoint trace event.
      Each trace_event keeps a list of attached perf events.
      When an event happens, all attached bpf programs will
      be executed based on the order of attachment.
      
      A global bpf_event_mutex lock is introduced to protect
      prog_array attaching and detaching. An alternative would be
      to introduce a mutex lock in every trace_event_call
      structure, but that would take a lot of extra memory.
      So a global bpf_event_mutex lock is a good compromise.
      
      The bpf prog detachment involves a memory allocation.
      If the allocation fails, a dummy do-nothing program
      will replace the to-be-detached program in place.
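
      A sketch of the resulting dispatch; the array layout and names are
      illustrative, not the exact kernel code. Programs run in attachment
      order and their return codes are ANDed, so a 0 from any program
      suppresses the event:

        static unsigned int run_attached_progs(struct bpf_prog **progs,
                                               void *ctx)
        {
                unsigned int ret = 1;

                /* the array is NULL-terminated, in attachment order */
                while (*progs) {
                        ret &= BPF_PROG_RUN(*progs, ctx);
                        progs++;
                }
                return ret;
        }
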
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Martin KaFai Lau <kafai@fb.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  13. 18 October 2017 (1 commit)
  14. 17 October 2017 (4 commits)
  15. 13 October 2017 (1 commit)
  16. 12 October 2017 (1 commit)
  17. 11 October 2017 (2 commits)
  18. 10 October 2017 (1 commit)
  19. 08 October 2017 (3 commits)
    • bpf: add helper bpf_perf_prog_read_value · 4bebdc7a
      Authored by Yonghong Song
      This patch adds the helper bpf_perf_prog_read_value for perf event
      based bpf programs, to read the event counter and enabled/running time.
      The enabled/running time is accumulated since the perf event open.
      
      The typical use case for a perf event based bpf program is to attach
      itself to a single event. In such cases, if it is desirable to get the
      scaling factor between two bpf invocations, users can save the time
      values in a map, and use the value from the map together with the
      current value to calculate the scaling factor.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: add helper bpf_perf_event_read_value for perf event array map · 908432ca
      Authored by Yonghong Song
      Hardware pmu counters are limited resources. When there are more
      pmu based perf events opened than available counters, the kernel will
      multiplex these events so that each event gets a certain percentage
      (but not 100%) of the pmu time. When multiplexing happens, the
      number of samples or the counter value will not reflect what it
      would be without multiplexing. This makes comparison between
      different runs difficult.
      
      Typically, the number of samples or counter value should be
      normalized before comparing to other experiments. The typical
      normalization is done like:
        normalized_num_samples = num_samples * time_enabled / time_running
        normalized_counter_value = counter_value * time_enabled / time_running
      where time_enabled is the time the event has been enabled and
      time_running is the time the event has been running since the last
      normalization.
      
      This patch adds the helper bpf_perf_event_read_value for kprobe-based
      perf event array maps, to read the perf counter and enabled/running time.
      The enabled/running time is accumulated since the perf event open.
      To achieve a scaling factor between two bpf invocations, users
      can use cpu_id as the key (which is typical for the perf array usage
      model) to remember the previous value and do the calculation inside
      the bpf program, as sketched below.
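
      A hedged sketch of that usage model; the map, probe point, and format
      string are made up, and struct bpf_perf_event_value carries the
      counter/enabled/running fields described above:

        #include <uapi/linux/bpf.h>
        #include <uapi/linux/ptrace.h>
        #include "bpf_helpers.h"

        struct bpf_map_def SEC("maps") counters = {
                .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
                .key_size = sizeof(int),
                .value_size = sizeof(__u32),
                .max_entries = 64,
        };

        SEC("kprobe/sys_getpid")
        int read_and_scale(struct pt_regs *ctx)
        {
                __u32 cpu = bpf_get_smp_processor_id();
                struct bpf_perf_event_value v = {};
                char fmt[] = "normalized counter %llu\n";

                if (bpf_perf_event_read_value(&counters, cpu, &v, sizeof(v)))
                        return 0;

                /* normalization from the formula above */
                if (v.running)
                        bpf_trace_printk(fmt, sizeof(fmt),
                                         v.counter * v.enabled / v.running);
                return 0;
        }

        char _license[] SEC("license") = "GPL";
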
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@fb.com>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
    • bpf: perf event change needed for subsequent bpf helpers · 97562633
      Authored by Yonghong Song
      This patch does not impact existing functionality.
      It contains the changes in the perf event area needed for
      the subsequent bpf_perf_event_read_value and
      bpf_perf_prog_read_value helpers.
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  20. 07 October 2017 (1 commit)
  21. 06 October 2017 (4 commits)
    • ftrace/kallsyms: Have /proc/kallsyms show saved mod init functions · 6171a031
      Authored by Steven Rostedt (VMware)
      If a module is loaded while tracing is enabled, then there's a possibility
      that the module init functions were traced. These functions have their name
      and address stored by ftrace such that it can translate the function address
      that is written into the buffer into a human readable function name.
      
      As userspace tools may be doing the same, they need a way to map function
      addresses to names as well. This is done through reading /proc/kallsyms.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Add freeing algorithm to free ftrace_mod_maps · 6aa69784
      Authored by Steven Rostedt (VMware)
      The ftrace_mod_map is a descriptor that saves module init function names
      in case they were traced, so that the trace output can resolve a function
      address to its name. But after the module is unloaded, the map should be
      freed, as the rest of the function names are freed as well.
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Save module init functions kallsyms symbols for tracing · aba4b5c2
      Authored by Steven Rostedt (VMware)
      If function tracing is active when the module init functions are freed,
      then store them to be referenced by kallsyms. As module init functions
      can now be traced on module load, the trace output was otherwise useless:
      
       ># echo ':mod:snd_seq' > set_ftrace_filter
       ># echo function > current_tracer
       ># modprobe snd_seq
       ># cat trace
       # tracer: function
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
               modprobe-2786  [000] ....  3189.037874: 0xffffffffa0860000 <-do_one_initcall
               modprobe-2786  [000] ....  3189.037876: 0xffffffffa086004d <-0xffffffffa086000f
               modprobe-2786  [000] ....  3189.037876: 0xffffffffa086010d <-0xffffffffa0860018
               modprobe-2786  [000] ....  3189.037877: 0xffffffffa086011a <-0xffffffffa0860021
               modprobe-2786  [000] ....  3189.037877: 0xffffffffa0860080 <-0xffffffffa086002a
               modprobe-2786  [000] ....  3189.039523: 0xffffffffa0860400 <-0xffffffffa0860033
               modprobe-2786  [000] ....  3189.039523: 0xffffffffa086038a <-0xffffffffa086041c
               modprobe-2786  [000] ....  3189.039591: 0xffffffffa086038a <-0xffffffffa0860436
               modprobe-2786  [000] ....  3189.039657: 0xffffffffa086038a <-0xffffffffa0860450
               modprobe-2786  [000] ....  3189.039719: 0xffffffffa0860127 <-0xffffffffa086003c
               modprobe-2786  [000] ....  3189.039742: snd_seq_create_kernel_client <-0xffffffffa08601f6
      
      When the output is shown, the kallsyms for the module init functions have
      already been freed, and the trace output cannot convert the addresses
      back to their function names.
      
      Now this looks like this:
      
       # tracer: function
       #
       #                              _-----=> irqs-off
       #                             / _----=> need-resched
       #                            | / _---=> hardirq/softirq
       #                            || / _--=> preempt-depth
       #                            ||| /     delay
       #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
       #              | |       |   ||||       |         |
               modprobe-2463  [002] ....   174.243237: alsa_seq_init <-do_one_initcall
               modprobe-2463  [002] ....   174.243239: client_init_data <-alsa_seq_init
               modprobe-2463  [002] ....   174.243240: snd_sequencer_memory_init <-alsa_seq_init
               modprobe-2463  [002] ....   174.243240: snd_seq_queues_init <-alsa_seq_init
               modprobe-2463  [002] ....   174.243240: snd_sequencer_device_init <-alsa_seq_init
               modprobe-2463  [002] ....   174.244860: snd_seq_info_init <-alsa_seq_init
               modprobe-2463  [002] ....   174.244861: create_info_entry <-snd_seq_info_init
               modprobe-2463  [002] ....   174.244936: create_info_entry <-snd_seq_info_init
               modprobe-2463  [002] ....   174.245003: create_info_entry <-snd_seq_info_init
               modprobe-2463  [002] ....   174.245072: snd_seq_system_client_init <-alsa_seq_init
               modprobe-2463  [002] ....   174.245094: snd_seq_create_kernel_client <-snd_seq_system_client_init
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Allow module init functions to be traced · 3e234289
      Authored by Steven Rostedt (VMware)
      Allow module init sections to be traced as well as core kernel init
      sections. Now that filtered module functions can be stored for when
      the modules are loaded, it makes sense to be able to trace them.
      
      Cc: Jessica Yu <jeyu@kernel.org>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  22. 05 October 2017 (2 commits)