1. 16 Aug, 2016 1 commit
  2. 08 Aug, 2016 1 commit
    • block: rename bio bi_rw to bi_opf · 1eff9d32
      Committed by Jens Axboe
      Since commit 63a4cc24, bio->bi_rw contains flags in the lower
      portion and the op code in the higher portion. This means that
      old code that relies on manually setting bi_rw is most likely
      going to be broken. Instead of letting that brokenness linger,
      rename the member to force old and out-of-tree code to break
      at compile time instead of at runtime.
      
      No intended functional changes in this commit.
      Signed-off-by: Jens Axboe <axboe@fb.com>
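      A minimal sketch of what the rename means for callers, assuming the
      4.8-era bio_op()/bio_set_op_attrs() accessors (illustrative snippet,
      not taken from this commit):

       /* Old out-of-tree pattern: assigning bi_rw directly now fails to build. */
       /* bio->bi_rw = WRITE | REQ_SYNC; */

       /* New pattern: bi_opf carries the REQ_OP_* code in the high bits and
        * the request flags below it. */
       bio_set_op_attrs(bio, REQ_OP_WRITE, REQ_SYNC);

       if (bio_op(bio) == REQ_OP_WRITE && (bio->bi_opf & REQ_SYNC))
               pr_debug("sync write queued\n");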
  3. 03 Aug, 2016 3 commits
  4. 26 Jul, 2016 1 commit
    • bpf: Add bpf_probe_write_user BPF helper to be called in tracers · 96ae5227
      Committed by Sargun Dhillon
      This allows user memory to be written to during the course of a kprobe.
      It shouldn't be used to implement any kind of security mechanism
      because of TOC-TOU attacks, but rather to debug, divert, and
      manipulate execution of semi-cooperative processes.
      
      Although it uses probe_kernel_write, we limit the address space
      the probe can write into by checking the space with access_ok.
      We do this as opposed to calling copy_to_user directly, in order
      to avoid sleeping. In addition, we ensure the thread's current fs
      / segment is USER_DS and that the thread is neither exiting nor a kernel thread.
      
      Given this feature is meant for experiments and carries the risk of
      crashing the system and running programs, we print a warning, along
      with the pid and process name, when a program that attempts to use
      this helper is installed.
      Signed-off-by: Sargun Dhillon <sargun@sargun.me>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
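      A hedged sketch of how a tracer might call the new helper from a
      kprobe program; the attach point, the use of PT_REGS_PARM2() and the
      assumption that the probed function's second argument is a user
      pointer are illustrative only:

       SEC("kprobe/sys_connect")
       int divert_connect(struct pt_regs *ctx)
       {
               char note[] = "patched";
               /* assume the probed function's second argument is a user pointer */
               void *uaddr = (void *)PT_REGS_PARM2(ctx);

               /* bpf_probe_write_user() writes into the user address space of the
                * current task; it returns a negative error for kernel threads,
                * exiting tasks, or destinations that fail the access_ok() check. */
               bpf_probe_write_user(uaddr, note, sizeof(note));
               return 0;
       }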
  5. 20 Jul, 2016 1 commit
  6. 16 Jul, 2016 3 commits
    • bpf: avoid stack copy and use skb ctx for event output · 555c8a86
      Committed by Daniel Borkmann
      This work addresses a couple of issues bpf_skb_event_output()
      helper currently has: i) We need two copies instead of just a
      single one for the skb data when it should be part of a sample.
      The data can be non-linear and thus needs to be extracted via
      bpf_skb_load_bytes() helper first, and then copied once again
      into the ring buffer slot. ii) Since bpf_skb_load_bytes()
      currently needs to be used first, the helper needs to see a
      constant size on the passed stack buffer to make sure BPF
      verifier can do sanity checks on it during verification time.
      Thus, just passing skb->len (or any other non-constant value)
      wouldn't work, but changing bpf_skb_load_bytes() is also not
      the proper solution, since the two copies are generally still
      needed. iii) bpf_skb_load_bytes() is just for rather small
      buffers like headers, since they need to sit on the limited
      BPF stack anyway. Instead of adding workarounds to bpf_skb_load_bytes(),
      this work improves the bpf_skb_event_output() helper to address
      all three issues at once.
      
      We can make use of the passed in skb context that we have in
      the helper anyway, and use some of the reserved flag bits as
      a length argument. The helper will use the new __output_custom()
      facility from perf side with bpf_skb_copy() as callback helper
      to walk and extract the data. It will pass the data for setup
      to bpf_event_output(), which generates and pushes the raw record
      with an additional frag part. The linear data used in the first
      frag of the record serves as programmatically defined metadata
      passed along with the appended sample.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
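      A hedged sketch of the resulting usage from a tc/cls_bpf-style
      program: the skb ctx itself is handed to bpf_perf_event_output(),
      and the number of skb bytes to append is encoded in the upper
      32 bits of the flags word. The `events` map (assumed to be a
      BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the metadata struct are
      illustrative, not from this commit:

       struct pkt_meta {
               __u32 ifindex;
               __u32 pkt_len;
       };

       SEC("classifier")
       int sample_skb(struct __sk_buff *skb)
       {
               struct pkt_meta md = {
                       .ifindex = skb->ifindex,
                       .pkt_len = skb->len,
               };
               /* upper 32 bits of flags: how many skb bytes to append as a frag */
               __u64 flags = BPF_F_CURRENT_CPU | ((__u64)skb->len << 32);

               /* single copy: perf walks the possibly non-linear skb via the
                * bpf_skb_copy() callback instead of bouncing the data through
                * the BPF stack with bpf_skb_load_bytes() */
               bpf_perf_event_output(skb, &events, flags, &md, sizeof(md));
               return TC_ACT_OK;
       }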
    • bpf, perf: split bpf_perf_event_output · 8e7a3920
      Committed by Daniel Borkmann
      As a preparation, split the bpf_perf_event_output() helper into
      two parts. The new bpf_perf_event_output() will prepare the raw
      record itself and test for unknown flags from the BPF trace context,
      while __bpf_perf_event_output() does the core work. The
      latter will be reused later on from bpf_event_output() directly.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
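      Roughly, the split looks like the following hedged sketch (argument
      handling simplified): the outer helper only builds the raw record
      and rejects unknown flag bits, while the inner function does the
      core work of picking the per-cpu perf event out of the map and
      pushing the sample:

       /* core work; later reusable from bpf_event_output() */
       static u64 __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
                                          u64 flags, struct perf_raw_record *raw);

       static u64 bpf_perf_event_output(u64 r1, u64 r2, u64 flags, u64 r4, u64 size)
       {
               struct pt_regs *regs = (void *)(long)r1;
               struct bpf_map *map  = (void *)(long)r2;
               struct perf_raw_record raw = {
                       .frag = {
                               .size = size,
                               .data = (void *)(long)r4,
                       },
               };

               /* only the index selection bits are valid from trace context */
               if (unlikely(flags & ~(BPF_F_INDEX_MASK)))
                       return -EINVAL;

               return __bpf_perf_event_output(regs, map, flags, &raw);
       }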
    • perf, events: add non-linear data support for raw records · 7e3f977e
      Committed by Daniel Borkmann
      This patch adds support for non-linear data on raw records. It
      extends raw records to have one or multiple fragments that will
      be written linearly into the ring slot, where each fragment can
      optionally have a custom callback handler to walk and extract
      complex, possibly non-linear data.
      
      If a callback handler is provided for a fragment, then the new
      __output_custom() will be used instead of __output_copy() for
      the perf_output_sample() part. perf_prepare_sample() does all
      the size calculation only once, so perf_output_sample() doesn't
      need to redo the same work anymore, meaning real_size and padding
      will be cached in the raw record. The raw record becomes 32 bytes
      in size without holes; to avoid increasing it further and to avoid
      doing unnecessary recalculations in the fast path, we reuse the
      next pointer of the last fragment. The idea is borrowed from
      ZERO_OR_NULL_PTR(), and it keeps the perf_output_sample()
      path for PERF_SAMPLE_RAW minimal.
      
      This facility is needed for BPF's event output helper as a first
      user: in a follow-up, it will add an additional perf_raw_frag to
      its perf_raw_record in order to dump the skb context more
      efficiently after a linear header of metadata related to it.
      skbs can be non-linear and thus need a custom output function to
      dump their buffers. Currently, the skb data needs to be copied twice;
      with the help of __output_custom() this work only needs to be
      done once. Future users could be things like XDP/BPF programs
      that work on a different context, though, and would thus also have
      a different callback function.
      
      The few users of raw records are adapted to initialize their frag
      data from the raw record itself; there is no change in behavior for them.
      The code is based upon a PoC diff provided by Peter Zijlstra [1].
      
        [1] http://thread.gmane.org/gmane.linux.network/421294
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
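      Put together, the record layout described above looks roughly like
      this hedged sketch (field order and the exact copy-callback typedef
      may differ from the real header; my_data stands for an existing
      caller's buffer):

       typedef unsigned long (*perf_copy_f)(void *dst, const void *src,
                                            unsigned long off, unsigned long len);

       struct perf_raw_frag {
               union {
                       struct perf_raw_frag    *next;  /* last frag reuses this to hold
                                                        * the trailing pad, the
                                                        * ZERO_OR_NULL_PTR()-style trick */
                       unsigned long           pad;
               };
               perf_copy_f                     copy;   /* optional custom walker */
               void                            *data;
               u32                             size;
       } __packed;

       struct perf_raw_record {
               struct perf_raw_frag            frag;   /* first (possibly only) frag */
               u32                             size;   /* cached total, incl. u32 header */
       };

       /* existing single-buffer users now initialize the first frag directly */
       struct perf_raw_record raw = {
               .frag = {
                       .size = sizeof(my_data),
                       .data = &my_data,
               },
       };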
  7. 09 Jul, 2016 1 commit
  8. 06 Jul, 2016 2 commits
  9. 05 Jul, 2016 2 commits
  10. 30 Jun, 2016 3 commits
  11. 28 Jun, 2016 1 commit
  12. 24 Jun, 2016 1 commit
    • tracing: Skip more functions when doing stack tracing of events · be54f69c
      Committed by Steven Rostedt (Red Hat)
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [002] d..2  1982.525169: <stack trace>
       => save_stack_trace
       => __ftrace_trace_stack
       => trace_buffer_unlock_commit_regs
       => event_trigger_unlock_commit
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The above shows that we are seeing 6 functions before ever making it to the
      caller of the sched_switch event.
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  2146.335208: <stack trace>
       => trace_event_buffer_commit
       => trace_event_raw_event_sched_switch
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      The stacktrace trigger isn't as bad, because it adds its own skip to the
      stacktracing, but it still records two extra functions.
      
      One issue is that if the stacktrace passes its own "regs" then there should
      be no addition to the skip, as the regs will not include the functions being
      called. This was an issue fixed by commit 7717c6be ("tracing:
      Fix stacktrace skip depth in trace_buffer_unlock_commit_regs()"), as adding
      the skip number for kprobes left the probes without any stack at all.
      
      But since this is only an issue when regs is being used, a skip should be
      added if regs is NULL. Now we have:
      
       # echo 1 > options/stacktrace
       # echo 1 > events/sched/sched_switch/enable
       # cat trace
                <idle>-0     [000] d..2  1297.676333: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => rest_init
       => start_kernel
       => x86_64_start_reservations
       => x86_64_start_kernel
      
       # echo stacktrace > events/sched/sched_switch/trigger
       # cat trace
                <idle>-0     [002] d..3  1370.759745: <stack trace>
       => __schedule
       => schedule
       => schedule_preempt_disabled
       => cpu_startup_entry
       => start_secondary
      
      And kprobes are not touched.
      Reported-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
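      The fix boils down to something like this hedged sketch in
      trace_buffer_unlock_commit_regs(): the extra skip is only applied
      when no regs are passed in, because supplied regs (as with kprobes)
      already exclude the committing functions and adding a skip there
      would throw the stack away entirely:

       /*
        * Skip the three committing functions that sit between the event
        * and its caller:
        *   trace_buffer_unlock_commit_regs()
        *   trace_event_buffer_commit()
        *   trace_event_raw_event_<event>()
        */
       #define STACK_SKIP 3

       ftrace_trace_stack(tr, buffer, flags, regs ? 0 : STACK_SKIP, pc, regs);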
  13. 20 Jun, 2016 10 commits
  14. 18 Jun, 2016 1 commit
    • blktrace: avoid using timespec · 59a37f8b
      Committed by Arnd Bergmann
      The blktrace code stores the current time in a 32-bit word in its
      user interface. This is a bad idea because 32-bit seconds overflow
      at some point.
      
      We probably have until 2106 before this one overflows, as it seems
      to use an 'unsigned' variable, but we should confirm that user
      space treats it the same way.
      
      Aside from this, we want to stop using 'struct timespec' here,
      so I'm adding a comment about the overflow and changing the code
      to use timespec64 instead, to make the loss of range more obvious.
      Signed-off-by: Arnd Bergmann <arnd@arndb.de>
      Signed-off-by: Jens Axboe <axboe@fb.com>
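      Reconstructed from the description above, the change amounts to
      roughly the following in blktrace's timestamp note path; the seconds
      value is still packed into a 32-bit word for the user interface, but
      the truncation is now explicit (a sketch, not the exact diff):

       static void trace_note_time(struct blk_trace *bt)
       {
               struct timespec64 now;
               unsigned long flags;
               u32 words[2];

               /* need to check user space to see if this breaks in y2038 or y2106 */
               ktime_get_real_ts64(&now);
               words[0] = (u32)now.tv_sec;     /* 32-bit seconds: overflows in 2106 */
               words[1] = now.tv_nsec;

               local_irq_save(flags);
               trace_note(bt, 0, BLK_TN_TIMESTAMP, words, sizeof(words));
               local_irq_restore(flags);
       }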
  15. 16 Jun, 2016 3 commits
    • bpf, maps: flush own entries on perf map release · 3b1efb19
      Committed by Daniel Borkmann
      The behavior of perf event arrays is quite different from all
      others, as they are tightly coupled to perf event fds, as shown
      recently e.g. by commit e03e7ee3 ("perf/bpf: Convert perf_event_array
      to use struct file") to make refcounting on perf events more robust.
      A remaining issue in the current code is that additions to the perf
      event array take a reference on the struct file via perf_event_get(),
      and that reference is only released via fput() (which eventually
      cleans up the perf event via perf_event_release_kernel()) when the
      element is either manually removed from the map from user space or
      automatically when the last reference on the perf event map is
      dropped. This leads to dangling struct files when the map gets
      pinned after the application owning the perf event descriptor exits:
      since the struct file reference will in that case only be dropped
      manually or via pinned file removal, the perf event lives longer
      than necessary, needlessly consuming resources for that time.
      
      Relations between perf event fds and bpf perf event map fds can be
      rather complex. For example, maps can act as demuxers among different
      perf event fds that can possibly be owned by different threads; based
      on the index selection from the program, events get dispatched to
      one of the per-cpu fd endpoints. One perf event fd (or rather, a
      per-cpu set of them) can also live in multiple perf event maps at
      the same time, listening for events. Another requirement is
      that perf event fds can get closed from the application side after they
      have been attached to the perf event map, so that on exit the perf event
      map will eventually take care of dropping their references. Likewise,
      when such maps are pinned, the intended behavior is that a user
      application does bpf_obj_get(), puts its fds in there, and on exit,
      when the fd is released, they are dropped from the map again, so the map
      rather acts as a connector endpoint. This also makes perf event maps
      inherently different from program arrays, as described in more detail
      in commit c9da161c ("bpf: fix clearing on persistent program
      array maps").
      
      To tackle this, map entries are marked with the map struct file that
      added the element to the map, and when the last reference to that map
      struct file is released from user space, the tracked entries
      are purged from the map. This is okay, because new map struct file
      instances (i.e., frontends to the anon inode) are provided via
      bpf_map_new_fd(), which is called when we invoke bpf_obj_get_user()
      to retrieve a pinned map, but also when an initial instance is
      created via map_create(). The rest is resolved for us automatically
      by the vfs layer, which keeps a reference count on the map's struct
      file. Any concurrent updates on the map slot are fine as well; it
      just means that perf_event_fd_array_release() needs to delete fewer
      of its own entries.
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
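      A hedged sketch of the resulting release-time flush: each entry
      remembers the map struct file that installed it, and when the last
      reference to that struct file goes away, only the entries it owns
      are deleted (helper and field names approximate, not guaranteed to
      match, the actual patch):

       static void perf_event_fd_array_release(struct bpf_map *map,
                                               struct file *map_file)
       {
               struct bpf_array *array = container_of(map, struct bpf_array, map);
               struct bpf_event_entry *ee;
               int i;

               rcu_read_lock();
               for (i = 0; i < array->map.max_entries; i++) {
                       ee = READ_ONCE(array->ptrs[i]);
                       /* purge only entries added through this map struct file */
                       if (ee && ee->map_file == map_file)
                               fd_array_map_delete_elem(map, &i);
               }
               rcu_read_unlock();
       }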
    • bpf, trace: check event type in bpf_perf_event_read · ad572d17
      Committed by Alexei Starovoitov
      Similar to bpf_perf_event_output(), the bpf_perf_event_read() helper
      needs to check the type of the perf_event before reading the counter.
      
      Fixes: a43eec30 ("bpf: introduce bpf_perf_event_output() helper")
      Reported-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
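      The added guard looks roughly like the following hedged sketch; the
      exact set of accepted event types is defined by the patch itself,
      this only illustrates rejecting unsupported perf_event types before
      the counter is read:

       /* bpf_perf_event_read(): bail out early for event types whose counters
        * cannot be read this way (sketch; see the patch for the exact check) */
       if (unlikely(event->attr.type != PERF_TYPE_RAW &&
                    event->attr.type != PERF_TYPE_HARDWARE))
               return -EINVAL;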
    • bpf: fix matching of data/data_end in verifier · 19de99f7
      Committed by Alexei Starovoitov
      The ctx structure passed into bpf programs differs depending on the bpf
      program type. The verifier incorrectly marked ctx->data and ctx->data_end
      access based on the ctx offset only. That caused loads in tracing programs,
      e.g. int bpf_prog(struct pt_regs *ctx) { .. ctx->ax .. },
      to be incorrectly marked as PTR_TO_PACKET, which later caused the verifier
      to reject a program that was actually valid in a tracing context.
      Fix this by doing program-type-specific matching of ctx offsets.
      
      Fixes: 969bf05e ("bpf: direct packet access")
      Reported-by: Sasha Goldshtein <goldshtn@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
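      A hedged sketch of the program-type-specific matching: whether a ctx
      field may be marked PTR_TO_PACKET now depends on the program type
      rather than on the ctx offset alone (function and type names only
      approximate the verifier code of that era):

       static bool may_access_direct_pkt_data(struct verifier_env *env)
       {
               switch (env->prog->type) {
               case BPF_PROG_TYPE_SCHED_CLS:
               case BPF_PROG_TYPE_SCHED_ACT:
                       return true;    /* ctx is struct __sk_buff with data/data_end */
               default:
                       return false;   /* e.g. kprobe programs: ctx is struct pt_regs */
               }
       }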
  16. 09 Jun, 2016 1 commit
  17. 08 Jun, 2016 4 commits
  18. 21 May, 2016 1 commit
    • ftrace: Don't disable irqs when taking the tasklist_lock read_lock · 6112a300
      Committed by Soumya PN
      In ftrace.c, inside the function alloc_retstack_tasklist() (which will be
      invoked when function_graph tracing is on), the tasklist_lock is being
      held as reader while iterating through a list of threads. Here the lock
      is held as reader with irqs disabled. The tasklist_lock is never
      write-locked in interrupt context, so it is safe not to disable interrupts
      for the duration of the read_lock in this block, which can be significant
      given that the block of code iterates through all threads. Hence, change the
      code to call read_lock() and read_unlock() instead of read_lock_irqsave()
      and read_unlock_irqrestore().
      
      A similar change was made in commits 8063e41d ("tracing: Change
      syscall_*regfunc() to check PF_KTHREAD and use for_each_process_thread()")
      and 3472eaa1 ("sched: normalize_rt_tasks(): Don't use _irqsave for
      tasklist_lock, use task_rq_lock()").
      
      Link: http://lkml.kernel.org/r/1463500874-77480-1-git-send-email-soumya.p.n@hpe.com
      Signed-off-by: Soumya PN <soumya.p.n@hpe.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
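      A hedged sketch of the locking change in alloc_retstack_tasklist();
      the loop body is abbreviated from the existing code, and only the
      lock and unlock calls actually change:

       read_lock(&tasklist_lock);            /* was read_lock_irqsave(&tasklist_lock, flags) */
       do_each_thread(g, t) {
               if (t->ret_stack == NULL) {
                       atomic_set(&t->tracing_graph_pause, 0);
                       atomic_set(&t->trace_overrun, 0);
                       t->curr_ret_stack = -1;
                       /* make curr_ret_stack visible before ret_stack */
                       smp_wmb();
                       t->ret_stack = ret_stack_list[start++];
               }
       } while_each_thread(g, t);
       read_unlock(&tasklist_lock);          /* was read_unlock_irqrestore(&tasklist_lock, flags) */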