1. 24 Nov 2016, 5 commits
  2. 23 Nov 2016, 2 commits
  3. 16 Nov 2016, 1 commit
  4. 15 Nov 2016, 6 commits
  5. 13 Oct 2016, 1 commit
    •
      Disable the __builtin_return_address() warning globally after all · ef6000b4
      Linus Torvalds committed
      This effectively reverts commit 377ccbb4 ("Makefile: Mute warning
      for __builtin_return_address(>0) for tracing only") because it turns out
      that it really isn't tracing-only - it's all over the tree.
      
      We had also already disabled the warning separately for mm/usercopy.c
      (which this commit also removes), and it turns out that we will also
      want to disable it for get_lock_parent_ip(), which is used for at least
      TRACE_IRQFLAGS.  Which (when enabled) ends up being all over the tree.
      
      Steven Rostedt had a patch that tried to limit it to just the config
      options that actually triggered this, but quite frankly, the extra
      complexity and abstraction just isn't worth it.  We have never actually
      had a case where the warning is actually useful, so let's just disable
      it globally and not worry about it.
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Anvin <hpa@zytor.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      ef6000b4
  6. 29 Sep 2016, 1 commit
  7. 26 Sep 2016, 1 commit
  8. 25 Sep 2016, 1 commit
  9. 23 Sep 2016, 1 commit
  10. 12 Sep 2016, 1 commit
  11. 10 Sep 2016, 2 commits
    •
      bpf: add BPF_CALL_x macros for declaring helpers · f3694e00
      Daniel Borkmann committed
      This work adds BPF_CALL_<n>() macros and converts all the eBPF helper functions
      to use them, similar to the SYSCALL_DEFINE<n>() macros in use today. The
      motivation is to hide all the register handling and all necessary casts from
      the user, so that it is done automatically in the background when adding a
      BPF_CALL_<n>() definition.
      
      This makes current helpers easier to review, makes future helpers easier to
      write, avoids getting the casting mess wrong, and allows for extending all
      helpers at once (e.g. build time checks). It also makes it easier to spot in
      code review when unused registers are accidentally instrumented, which would
      break compatibility with existing programs.
      
      BPF_CALL_<n>() internals are quite similar to SYSCALL_DEFINE<n>() ones, with
      some fundamental differences; for example, when generating the actual helper
      function that carries all u64 regs, we need to fill in the unused regs so that
      we always end up with 5 u64 regs as arguments.
      
      I reviewed several 0-5 argument BPF_CALL_<n>() variants of the .i results and
      they all look as expected. No sparse issue spotted. We also let this sit for a
      few days with Fengguang's kbuild test robot, and no issues were seen. On
      s390, it barked on the "uses dynamic stack allocation" notice, which is an old
      one from bpf_perf_event_output{,_tp}() reappearing here due to the conversion
      to the call wrapper, just indicating that the perf raw record/frag sits on the
      stack (gcc with s390's -mwarn-dynamicstack), but that's all. Various runtime
      tests were fine as well. All eBPF helpers are now converted to use these
      macros, getting rid of a good chunk of all the raw castings.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f3694e00
    •
      bpf: add BPF_SIZEOF and BPF_FIELD_SIZEOF macros · f035a515
      Daniel Borkmann committed
      Add BPF_SIZEOF() and BPF_FIELD_SIZEOF() macros to improve the code a bit,
      which otherwise often results in overly long bytes_to_bpf_size(sizeof())
      and bytes_to_bpf_size(FIELD_SIZEOF()) lines. So place them into a macro
      helper instead. Moreover, we currently have a BUILD_BUG_ON(BPF_FIELD_SIZEOF())
      check in convert_bpf_extensions(), but we should rather make that generic
      as well and add a BUILD_BUG_ON() test in all BPF_SIZEOF()/BPF_FIELD_SIZEOF()
      users to detect any rewriter size issues at compile time. Note, there are
      currently none, but we want to assert that it stays this way.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      f035a515
  12. 03 Sep 2016, 4 commits
    •
      bpf: introduce BPF_PROG_TYPE_PERF_EVENT program type · 0515e599
      Alexei Starovoitov committed
      Introduce BPF_PROG_TYPE_PERF_EVENT programs that can be attached to
      HW and SW perf events (PERF_TYPE_HARDWARE and PERF_TYPE_SOFTWARE
      respectively, in uapi/linux/perf_event.h).
      
      The program visible context meta structure is
      struct bpf_perf_event_data {
              struct pt_regs regs;
              __u64 sample_period;
      };
      which is accessible directly from the program:
      int bpf_prog(struct bpf_perf_event_data *ctx)
      {
        ... ctx->sample_period ...
        ... ctx->regs.ip ...
      }
      
      The bpf verifier rewrites the accesses into kernel internal
      struct bpf_perf_event_data_kern which allows changing
      struct perf_sample_data without affecting bpf programs.
      New fields can be added to the end of struct bpf_perf_event_data
      in the future.
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      0515e599
    •
      tracing: Add NMI tracing in hwlat detector · 7b2c8625
      Steven Rostedt (Red Hat) committed
      As NMIs can also cause latency when interrupts are disabled, the hwlat
      detector has no way to know whether the latency it detects comes from an
      NMI, an SMI, or some other hardware glitch.
      
      As the ftrace_nmi_enter/exit() functions are no longer used (except for sh,
      which isn't supported anymore), I converted those to "arch_ftrace_nmi_enter/exit"
      and use ftrace_nmi_enter/exit() to check whether the hwlat detector is tracing;
      if so, it calls into the hwlat utility.
      
      Since the hwlat detector only has a single kthread that is spinning with
      interrupts disabled, it marks what CPU it is on, and if the NMI callback
      happens on that CPU, it records the time spent in that NMI. This is added to
      the output that is generated by the hwlat detector as:
      
       #3     inner/outer(us):    9/9     ts:1470836488.206734548
       #4     inner/outer(us):    0/8     ts:1470836497.140808588
       #5     inner/outer(us):    0/6     ts:1470836499.140825168 nmi-total:5 nmi-count:1
       #6     inner/outer(us):    9/9     ts:1470836501.140841748
      
      All time is still tracked in microseconds.
      
      The NMI information is only shown when an NMI occurred during the sample.
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      7b2c8625
    •
      tracing: Have hwlat trace migrate across tracing_cpumask CPUs · 0330f7aa
      Steven Rostedt (Red Hat) committed
      Instead of having the hwlat detector thread stay on one CPU, have it migrate
      across all the CPUs specified by tracing_cpumask. If the user modifies the
      thread's CPU affinity, the migration stops until the next time the tracer is
      instantiated. The migration happens at the end of each window (period).
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      0330f7aa
    •
      tracing: Added hardware latency tracer · e7c15cd8
      Steven Rostedt (Red Hat) committed
      The hardware latency tracer has been in the PREEMPT_RT patch for some time.
      It is used to detect possible SMIs or any other hardware interruptions that
      the kernel is unaware of. Note that NMIs may also be detected, which is
      itself useful information.
      
      The logic is pretty simple. It simply creates a thread that spins on a
      single CPU for a specified amount of time (width) within a periodic window
      (window). These numbers may be adjusted via their corresponding files in
      
         /sys/kernel/tracing/hwlat_detector/
      
      The defaults are window = 1000000 us (1 second)
                       width  =  500000 us (1/2 second)
      
      The loop consists of:
      
      	t1 = trace_clock_local();
      	t2 = trace_clock_local();
      
      Where trace_clock_local() is a variant of sched_clock().
      
      The difference t2 - t1 is recorded as the "inner" latency, and the
      difference t1 - prev_t2 as the "outer" latency. If either of these
      differences is greater than the time denoted in
      /sys/kernel/tracing/tracing_thresh then it records the event.
      
      When this tracer is started, and tracing_thresh is zero, it changes to the
      default threshold of 10 us.
      
      The hwlat tracer in the PREEMPT_RT patch was originally written by
      Jon Masters. I have modified it quite a bit and turned it into a
      tracer.
      Based-on-code-by: Jon Masters <jcm@redhat.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      e7c15cd8
  13. 02 Sep 2016, 1 commit
  14. 01 Sep 2016, 2 commits
    •
      function_graph: Handle TRACE_BPUTS in print_graph_comment · 613dccdf
      Namhyung Kim committed
      It missed handling TRACE_BPUTS, so messages recorded by trace_bputs()
      were shown with unnecessary symbol info.
      
      You can see it with the trace_printk sample code:
      
        # cd /sys/kernel/tracing/
        # echo sys_sync > set_graph_function
        # echo 1 > options/sym-offset
        # echo function_graph > current_tracer
      
      Note that the sys_sync filter was there to prevent recording other
      functions, and the sym-offset option was needed since the first message
      was emitted from a module init function, so kallsyms doesn't have the
      symbol and it is omitted from the output.
      
        # cd ~/build/kernel
        # insmod samples/trace_printk/trace-printk.ko
      
        # cd -
        # head trace
      
      Before:
      
        # tracer: function_graph
        #
        # CPU  DURATION                  FUNCTION CALLS
        # |     |   |                     |   |   |   |
         1)               |  /* 0xffffffffa0002000: This is a static string that will use trace_bputs */
         1)               |  /* This is a dynamic string that will use trace_puts */
         1)               |  /* trace_printk_irq_work+0x5/0x7b [trace_printk]: (irq) This is a static string that will use trace_bputs */
         1)               |  /* (irq) This is a dynamic string that will use trace_puts */
         1)               |  /* (irq) This is a static string that will use trace_bprintk() */
         1)               |  /* (irq) This is a dynamic string that will use trace_printk */
      
      After:
      
        # tracer: function_graph
        #
        # CPU  DURATION                  FUNCTION CALLS
        # |     |   |                     |   |   |   |
         1)               |  /* This is a static string that will use trace_bputs */
         1)               |  /* This is a dynamic string that will use trace_puts */
         1)               |  /* (irq) This is a static string that will use trace_bputs */
         1)               |  /* (irq) This is a dynamic string that will use trace_puts */
         1)               |  /* (irq) This is a static string that will use trace_bprintk() */
         1)               |  /* (irq) This is a dynamic string that will use trace_printk */
      
      Link: http://lkml.kernel.org/r/20160901024354.13720-1-namhyung@kernel.org
      Signed-off-by: Namhyung Kim <namhyung@kernel.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      613dccdf
    •
      tracing/uprobe: Drop isdigit() check in create_trace_uprobe · 5ba8a4a9
      Dmitry Safonov committed
      It's useless. Before:
        [tracing]# echo 'p:test /a:0x0' >> uprobe_events
        [tracing]# echo 'p:test a:0x0' >> uprobe_events
        -bash: echo: write error: No such file or directory
        [tracing]# echo 'p:test 1:0x0' >> uprobe_events
        -bash: echo: write error: Invalid argument
      
      After:
        [tracing]# echo 'p:test 1:0x0' >> uprobe_events
        -bash: echo: write error: No such file or directory
      
      Link: http://lkml.kernel.org/r/20160825152110.25663-3-dsafonov@virtuozzo.com
      Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Dmitry Safonov <dsafonov@virtuozzo.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      5ba8a4a9
  15. 24 Aug 2016, 7 commits
  16. 16 Aug 2016, 1 commit
  17. 13 Aug 2016, 2 commits
  18. 08 Aug 2016, 1 commit
    •
      block: rename bio bi_rw to bi_opf · 1eff9d32
      Jens Axboe committed
      Since commit 63a4cc24, bio->bi_rw contains flags in the lower
      portion and the op code in the higher portion. This means that
      old code that relies on manually setting bi_rw is most likely
      going to be broken. Instead of letting that brokenness linger,
      rename the member to force old and out-of-tree code to break
      at compile time instead of at runtime.
      
      No intended functional changes in this commit.
      Signed-off-by: Jens Axboe <axboe@fb.com>
      1eff9d32