1. 05 Oct 2017, 3 commits
  2. 04 Oct 2017, 4 commits
  3. 29 Sep 2017, 1 commit
  4. 25 Sep 2017, 1 commit
    • blktrace: Fix potential deadlock between delete & sysfs ops · 5acb3cc2
      Authored by Waiman Long
      The lockdep code had reported the following unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(s_active#228);
                                     lock(&bdev->bd_mutex/1);
                                     lock(s_active#228);
        lock(&bdev->bd_mutex);
      
       *** DEADLOCK ***
      
      The deadlock may happen when one task (CPU1) is trying to delete a
      partition in a block device and another task (CPU0) is accessing a
      tracing sysfs file (e.g. /sys/block/dm-1/trace/act_mask) in that
      partition.
      
      The s_active isn't an actual lock. It is a reference count (kn->count)
      on the sysfs (kernfs) file. Removal of a sysfs file, however, requires
      waiting until all the references are gone. The reference count is
      treated like an rwsem by the lockdep instrumentation code.
      
      The fact that a thread is in the sysfs callback method or in the
      ioctl call means there is a reference to the opened sysfs or device
      file. That should prevent the underlying block structure from being
      removed.
      
      Instead of using bd_mutex in the block_device structure, a new
      blk_trace_mutex is now added to the request_queue structure to protect
      access to the blk_trace structure.
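
      A hedged sketch of the resulting locking (the new mutex and its
      placement follow the patch description; the ioctl body is reduced
      to a hypothetical helper):

        int blk_trace_ioctl(struct block_device *bdev, unsigned cmd,
                            char __user *arg)
        {
            struct request_queue *q = bdev_get_queue(bdev);
            int ret;

            if (!q)
                return -ENXIO;

            /* was: mutex_lock(&bdev->bd_mutex) -- a lock private to the
             * request_queue no longer nests under the partition-delete
             * path, breaking the cycle lockdep reported */
            mutex_lock(&q->blk_trace_mutex);
            ret = do_blk_trace_ioctl(bdev, cmd, arg);  /* hypothetical helper */
            mutex_unlock(&q->blk_trace_mutex);
            return ret;
        }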
      Suggested-by: Christoph Hellwig <hch@infradead.org>
      Signed-off-by: Waiman Long <longman@redhat.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      
      Fix typo in patch subject line, and prune a comment detailing how
      the code used to work.
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  5. 24 Sep 2017, 1 commit
  6. 20 Sep 2017, 3 commits
  7. 14 Sep 2017, 1 commit
    • mm: treewide: remove GFP_TEMPORARY allocation flag · 0ee931c4
      Authored by Michal Hocko
      GFP_TEMPORARY was introduced by commit e12ba74d ("Group short-lived
      and reclaimable kernel allocations") along with __GFP_RECLAIMABLE.  Its
      primary motivation was to allow users to tell that an allocation is
      short lived and so the allocator can try to place such allocations close
      together and prevent long term fragmentation.  As much as this sounds
      like a reasonable semantic, it becomes much less clear when to use the
      high-level GFP_TEMPORARY allocation flag.  How long is temporary? Can the
      context holding that memory sleep? Can it take locks? It seems there is
      no good answer to those questions.
      
      The current implementation of GFP_TEMPORARY is basically GFP_KERNEL |
      __GFP_RECLAIMABLE, which in itself is tricky because basically none of
      the existing callers provides a way to reclaim the allocated memory.  So
      this is rather misleading and hard to evaluate for any benefits.
      
      I have checked some random users and none of them has added the flag
      with a specific justification.  I suspect most of them just copied it
      from other existing users, and others just thought it might be a good
      idea to use it without any measurement.  This suggests that
      GFP_TEMPORARY just motivates cargo-cult usage without any reasoning.
      
      I believe that our gfp flags are quite complex already, and especially
      those with high-level semantics should be clearly defined to prevent
      confusion and abuse.  Therefore I propose dropping GFP_TEMPORARY and
      replacing all existing users with plain GFP_KERNEL.  Please note that
      SLAB users with shrinkers will still get the __GFP_RECLAIMABLE heuristic
      and so they will be placed properly for memory fragmentation prevention.
      
      I can see reasons we might want some gfp flag to reflect short-term
      allocations, but I propose starting from a clear semantic definition and
      only then adding users with proper justification.
      
      This was brought up before LSF this year by Matthew [1], and it
      turned out that GFP_TEMPORARY really doesn't have a clear semantic.  It
      seems to be a heuristic without any measured advantage for most (if not
      all) of its current users.  The follow-up discussion revealed that
      opinions on what counts as a temporary allocation differ a lot between
      developers.  So rather than trying to tweak existing users into a
      semantic they never expected, I propose to simply remove the flag and
      start from scratch if we really need a semantic for short-term
      allocations.
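
      The conversion itself is mechanical; a representative call site would
      change like this (illustrative, not a hunk from the actual patch):

        /* before: implied GFP_KERNEL | __GFP_RECLAIMABLE */
        buf = kmalloc(len, GFP_TEMPORARY);

        /* after */
        buf = kmalloc(len, GFP_KERNEL);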
      
      [1] http://lkml.kernel.org/r/20170118054945.GD18349@bombadil.infradead.org
      
      [akpm@linux-foundation.org: fix typo]
      [akpm@linux-foundation.org: coding-style fixes]
      [sfr@canb.auug.org.au: drm/i915: fix up]
        Link: http://lkml.kernel.org/r/20170816144703.378d4f4d@canb.auug.org.au
      Link: http://lkml.kernel.org/r/20170728091904.14627-1-mhocko@kernel.org
      Signed-off-by: Michal Hocko <mhocko@suse.com>
      Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
      Acked-by: Mel Gorman <mgorman@suse.de>
      Acked-by: Vlastimil Babka <vbabka@suse.cz>
      Cc: Matthew Wilcox <willy@infradead.org>
      Cc: Neil Brown <neilb@suse.de>
      Cc: "Theodore Ts'o" <tytso@mit.edu>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  8. 12 Sep 2017, 1 commit
  9. 09 Sep 2017, 1 commit
  10. 07 Sep 2017, 1 commit
  11. 06 Sep 2017, 1 commit
  12. 05 Sep 2017, 1 commit
    • tracing: Add barrier to trace_printk() buffer nesting modification · 3d9622c1
      Authored by Steven Rostedt (VMware)
      trace_printk() uses 4 buffers, one for each context (normal, softirq, irq
      and NMI), so that it does not need to worry about one context preempting
      another. A nesting counter gets incremented to figure out which buffer to
      use. If the context gets preempted by another context that calls
      trace_printk(), the counter is incremented and the next buffer used,
      and the counter is restored when it is finished.
      
      The problem is that gcc may optimize the modification of the buffer nesting
      counter and it may not be incremented in memory before the buffer is used.
      If this happens, and the context gets interrupted by another context, it
      could pick the same buffer and corrupt the one that is being used.
      
      Compiler barriers need to be added after the nesting variable is incremented
      and before it is decremented to prevent usage of the context buffers by more
      than one context at the same time.
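
      A minimal sketch of the pattern, assuming a per-cpu nesting counter as
      in the tracing code (the names and flat array are illustrative):

        static char buffers[4][256];
        static int nesting;     /* per-cpu in the real code */

        static char *get_buf(void)
        {
            if (nesting >= 4)
                return NULL;
            nesting++;
            /* interrupts must see the increment before the buffer is used */
            barrier();
            return buffers[nesting - 1];
        }

        static void put_buf(void)
        {
            /* don't let the decrement leak above the last buffer write */
            barrier();
            nesting--;
        }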
      
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: stable@vger.kernel.org
      Fixes: e2ace001 ("tracing: Choose static tp_printk buffer by explicit nesting count")
      Hat-tip-to: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  13. 02 Sep 2017, 2 commits
    • ftrace: Fix memleak when unregistering dynamic ops when tracing disabled · edb096e0
      Authored by Steven Rostedt (VMware)
      If function tracing is disabled by the user via the function-trace option
      or the proc sysctl file, and an ftrace_ops that was allocated on the heap
      is unregistered, then the shutdown code exits without doing the proper
      cleanup. This was found via kmemleak while running the ftrace selftests,
      as one of the tests unregisters with function tracing disabled.
      
       # cat kmemleak
      unreferenced object 0xffffffffa0020000 (size 4096):
        comm "swapper/0", pid 1, jiffies 4294668889 (age 569.209s)
        hex dump (first 32 bytes):
          55 ff 74 24 10 55 48 89 e5 ff 74 24 18 55 48 89  U.t$.UH...t$.UH.
          e5 48 81 ec a8 00 00 00 48 89 44 24 50 48 89 4c  .H......H.D$PH.L
        backtrace:
          [<ffffffff81d64665>] kmemleak_vmalloc+0x85/0xf0
          [<ffffffff81355631>] __vmalloc_node_range+0x281/0x3e0
          [<ffffffff8109697f>] module_alloc+0x4f/0x90
          [<ffffffff81091170>] arch_ftrace_update_trampoline+0x160/0x420
          [<ffffffff81249947>] ftrace_startup+0xe7/0x300
          [<ffffffff81249bd2>] register_ftrace_function+0x72/0x90
          [<ffffffff81263786>] trace_selftest_ops+0x204/0x397
          [<ffffffff82bb8971>] trace_selftest_startup_function+0x394/0x624
          [<ffffffff81263a75>] run_tracer_selftest+0x15c/0x1d7
          [<ffffffff82bb83f1>] init_trace_selftests+0x75/0x192
          [<ffffffff81002230>] do_one_initcall+0x90/0x1e2
          [<ffffffff82b7d620>] kernel_init_freeable+0x350/0x3fe
          [<ffffffff81d61ec3>] kernel_init+0x13/0x122
          [<ffffffff81d72c6a>] ret_from_fork+0x2a/0x40
          [<ffffffffffffffff>] 0xffffffffffffffff
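
      A hedged sketch of the fix's shape in ftrace_shutdown(): even when
      nothing needs to be patched because tracing is off, a dynamically
      allocated ops still needs its trampoline freed (the flag and label
      names follow upstream as best I recall; surrounding logic elided):

        if (!command || !ftrace_enabled) {
            /*
             * Dynamic ops still need their trampoline freed.
             * Tracing is inactive, so it can be freed without
             * synchronizing all CPUs.
             */
            if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
                goto free_ops;
            return 0;
        }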
      
      Cc: stable@vger.kernel.org
      Fixes: 12cce594 ("ftrace/x86: Allow !CONFIG_PREEMPT dynamic ops to use allocated trampolines")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Fix selftest goto location on error · 46320a6a
      Authored by Steven Rostedt (VMware)
      In the second iteration of trace_selftest_ops(), the error goto label is
      wrong in the case where trace_selftest_test_global_cnt is off. In the
      error case, it leaks the dynamic ops that was allocated.
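
      The shape of the fix, sketched with illustrative label names: on
      failure, jump to the label that also unregisters and frees the
      dynamic ops rather than the one that skips it.

        if (ret)
            goto out_free_dyn_ops;  /* was: goto out, leaking dyn_ops */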
      
      Cc: stable@vger.kernel.org
      Fixes: 95950c2e ("ftrace: Add self-tests for multiple function trace users")
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  14. 01 Sep 2017, 2 commits
    • ftrace: Zero out ftrace hashes when a module is removed · 2a5bfe47
      Authored by Steven Rostedt (VMware)
      When an ftrace filter contains a module's functions and that module is
      removed, the filter still has those addresses marked as enabled. This
      can cause interesting side effects. Nothing dangerous, but unwanted
      functions can be traced because of it.
      
       # cd /sys/kernel/tracing
       # echo ':mod:snd_seq' > set_ftrace_filter
       # cat set_ftrace_filter
      snd_use_lock_sync_helper [snd_seq]
      check_event_type_and_length [snd_seq]
      snd_seq_ioctl_pversion [snd_seq]
      snd_seq_ioctl_client_id [snd_seq]
      snd_seq_ioctl_get_queue_tempo [snd_seq]
      update_timestamp_of_queue [snd_seq]
      snd_seq_ioctl_get_queue_status [snd_seq]
      snd_seq_set_queue_tempo [snd_seq]
      snd_seq_ioctl_set_queue_tempo [snd_seq]
      snd_seq_ioctl_get_queue_timer [snd_seq]
      seq_free_client1 [snd_seq]
      [..]
       # rmmod snd_seq
       # cat set_ftrace_filter
      
       # modprobe kvm
       # cat set_ftrace_filter
      kvm_set_cr4 [kvm]
      kvm_emulate_hypercall [kvm]
      kvm_set_dr [kvm]
      
      This is because removing the snd_seq module while it was being filtered
      left the addresses of the snd_seq functions in the hash. When the kvm
      module was loaded, some of its functions were loaded at the same
      addresses the snd_seq module's had been, which enabled them to be
      filtered and traced.
      
      Now we don't want to clear the hash completely. If a module's functions
      were the only ones in the filter, removing it would leave an empty
      filter, and an empty filter means trace all functions. Instead, just set
      the hash entries' ip addresses to zero; then they will never match any
      function.
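
      A hedged sketch of the zeroing pass run at module removal (the record
      and hash types follow upstream; the lookup helper name is my best
      recollection):

        static void clear_mod_from_hash(struct ftrace_page *pg,
                                        struct ftrace_hash *hash)
        {
            struct ftrace_func_entry *entry;
            struct dyn_ftrace *rec;
            int i;

            for (i = 0; i < pg->index; i++) {
                rec = &pg->records[i];
                entry = __ftrace_lookup_ip(hash, rec->ip);
                /* a zeroed ip never matches again; the stale entry is
                 * dropped the next time the hash is modified */
                if (entry)
                    entry->ip = 0;
            }
        }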
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • tracing: Only have rmmod clear buffers that its events were active in · 065e63f9
      Authored by Steven Rostedt (VMware)
      Currently, when a module event has been enabled and that module is
      removed, all ring buffers are cleared. This is to prevent another module
      from being loaded and having one of its trace event IDs reuse a trace
      event ID of the removed module. That could cause undesirable effects, as
      the trace event of the new module would be using its own processing
      algorithms to process raw data of another event. To prevent this, when a
      module is removed, if any of its events have ever been used (signified
      by the WAS_ENABLED event call flag, which is never cleared), all ring
      buffers are cleared, just in case any one of them contains event data of
      the removed events.
      
      The problem is, there's no reason to clear all ring buffers if only one
      (or fewer than all) of them recorded the module's events. Instead, only
      clear the ring buffers that recorded the events of the module being
      removed.
      
      To do this, instead of keeping the WAS_ENABLED flag with the trace event
      call, move it to the per instance (per ring buffer) event file descriptor.
      The event file descriptor maps each event to a separate ring buffer
      instance. Then when the module is removed, only the ring buffers that
      activated one of the module's events get cleared. The rest are not touched.
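
      A hedged sketch of the removal-time walk (the instance list and flag
      names follow upstream where I recall them; the per-instance reset
      helper should be treated as illustrative):

        struct trace_array *tr;
        struct trace_event_file *file;

        list_for_each_entry(tr, &ftrace_trace_arrays, list) {
            list_for_each_entry(file, &tr->events, list) {
                if (file->event_call->mod != mod)
                    continue;
                /* only clear instances in which one of this
                 * module's events was ever enabled */
                if (file->flags & EVENT_FILE_FL_WAS_ENABLED) {
                    tracing_reset_online_cpus(&tr->trace_buffer);
                    break;
                }
            }
        }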
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  15. 29 Aug 2017, 1 commit
    • perf/ftrace: Fix double traces of perf on ftrace:function · 75e83876
      Authored by Zhou Chengming
      When running perf on the ftrace:function tracepoint, there is a bug
      which can be reproduced by:
      
        perf record -e ftrace:function -a sleep 20 &
        perf record -e ftrace:function ls
        perf script
      
                    ls 10304 [005]   171.853235: ftrace:function: perf_output_begin
                    ls 10304 [005]   171.853237: ftrace:function: perf_output_begin
                    ls 10304 [005]   171.853239: ftrace:function: task_tgid_nr_ns
                    ls 10304 [005]   171.853240: ftrace:function: task_tgid_nr_ns
                    ls 10304 [005]   171.853242: ftrace:function: __task_pid_nr_ns
                    ls 10304 [005]   171.853244: ftrace:function: __task_pid_nr_ns
      
      We can see that all the function traces are doubled.
      
      The problem is caused by an inconsistency between the register function
      perf_ftrace_event_register() and the probe function
      perf_ftrace_function_call(). The former registers one probe per
      perf_event, while the latter handles all perf_events on the current cpu.
      So when there are two perf_events on the current cpu, their traces are
      doubled.
      
      So this patch adds an extra parameter "event" to perf_tp_event(), which,
      when it is not NULL, sends sample data only to that event.
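
      A hedged sketch of the resulting dispatch in perf_tp_event() (the match
      helper follows upstream; the sample setup is elided):

        if (event) {
            /* a specific event was passed in: deliver the sample to
             * it alone, not to every event on this cpu */
            if (perf_tp_event_match(event, &data, regs))
                perf_swevent_event(event, count, &data, regs);
        } else {
            hlist_for_each_entry_rcu(event, head, hlist_entry) {
                if (perf_tp_event_match(event, &data, regs))
                    perf_swevent_event(event, count, &data, regs);
            }
        }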
      Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
      Reviewed-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: acme@kernel.org
      Cc: alexander.shishkin@linux.intel.com
      Cc: huawei.libin@huawei.com
      Link: http://lkml.kernel.org/r/1503668977-12526-1-git-send-email-zhouchengming1@huawei.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  16. 24 Aug 2017, 4 commits
    • tracing: Fix freeing of filter in create_filter() when set_str is false · 8b0db1a5
      Authored by Steven Rostedt (VMware)
      Performing the following task with kmemleak enabled:
      
       # cd /sys/kernel/tracing/events/irq/irq_handler_entry/
       # echo 'enable_event:kmem:kmalloc:3 if irq >' > trigger
       # echo 'enable_event:kmem:kmalloc:3 if irq > 31' > trigger
       # echo scan > /sys/kernel/debug/kmemleak
       # cat /sys/kernel/debug/kmemleak
      unreferenced object 0xffff8800b9290308 (size 32):
        comm "bash", pid 1114, jiffies 4294848451 (age 141.139s)
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace:
          [<ffffffff81cef5aa>] kmemleak_alloc+0x4a/0xa0
          [<ffffffff81357938>] kmem_cache_alloc_trace+0x158/0x290
          [<ffffffff81261c09>] create_filter_start.constprop.28+0x99/0x940
          [<ffffffff812639c9>] create_filter+0xa9/0x160
          [<ffffffff81263bdc>] create_event_filter+0xc/0x10
          [<ffffffff812655e5>] set_trigger_filter+0xe5/0x210
          [<ffffffff812660c4>] event_enable_trigger_func+0x324/0x490
          [<ffffffff812652e2>] event_trigger_write+0x1a2/0x260
          [<ffffffff8138cf87>] __vfs_write+0xd7/0x380
          [<ffffffff8138f421>] vfs_write+0x101/0x260
          [<ffffffff8139187b>] SyS_write+0xab/0x130
          [<ffffffff81cfd501>] entry_SYSCALL_64_fastpath+0x1f/0xbe
          [<ffffffffffffffff>] 0xffffffffffffffff
      
      The function create_filter() is passed a 'filterp' pointer that gets
      allocated, and if "set_str" is true, it is up to the caller to free it,
      even on error. The problem is that the pointer is not freed by
      create_filter() when set_str is false. This is a bug, as it is not up to
      the caller to free the filter on error if it doesn't care about the
      string.
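
      A hedged sketch of the intended error path (not the literal diff; the
      free helper name follows upstream):

        /* on error, only hand the filter back when the caller asked
         * for the error string; otherwise free it here */
        if (err && !set_str) {
            free_event_filter(filter);
            filter = NULL;
        }
        *filterp = filter;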
      
      Link: http://lkml.kernel.org/r/1502705898-27571-2-git-send-email-chuhu@redhat.com
      
      Cc: stable@vger.kernel.org
      Fixes: 38b78eb8 ("tracing: Factorize filter creation")
      Reported-by: Chunyu Hu <chuhu@redhat.com>
      Tested-by: Chunyu Hu <chuhu@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • tracing: Fix kmemleak in tracing_map_array_free() · 475bb3c6
      Authored by Chunyu Hu
      kmemleak reported the leaks below when I was clearing the hist trigger.
      With this patch, the kmemleak reports are gone.
      
      unreferenced object 0xffff94322b63d760 (size 32):
        comm "bash", pid 1522, jiffies 4403687962 (age 2442.311s)
        hex dump (first 32 bytes):
          00 01 00 00 04 00 00 00 08 00 00 00 ff 00 00 00  ................
          10 00 00 00 00 00 00 00 80 a8 7a f2 31 94 ff ff  ..........z.1...
        backtrace:
          [<ffffffff9e96c27a>] kmemleak_alloc+0x4a/0xa0
          [<ffffffff9e424cba>] kmem_cache_alloc_trace+0xca/0x1d0
          [<ffffffff9e377736>] tracing_map_array_alloc+0x26/0x140
          [<ffffffff9e261be0>] kretprobe_trampoline+0x0/0x50
          [<ffffffff9e38b935>] create_hist_data+0x535/0x750
          [<ffffffff9e38bd47>] event_hist_trigger_func+0x1f7/0x420
          [<ffffffff9e38893d>] event_trigger_write+0xfd/0x1a0
          [<ffffffff9e44dfc7>] __vfs_write+0x37/0x170
          [<ffffffff9e44f552>] vfs_write+0xb2/0x1b0
          [<ffffffff9e450b85>] SyS_write+0x55/0xc0
          [<ffffffff9e203857>] do_syscall_64+0x67/0x150
          [<ffffffff9e977ce7>] return_from_SYSCALL_64+0x0/0x6a
          [<ffffffffffffffff>] 0xffffffffffffffff
      unreferenced object 0xffff9431f27aa880 (size 128):
        comm "bash", pid 1522, jiffies 4403687962 (age 2442.311s)
        hex dump (first 32 bytes):
          00 00 8c 2a 32 94 ff ff 00 f0 8b 2a 32 94 ff ff  ...*2......*2...
          00 e0 8b 2a 32 94 ff ff 00 d0 8b 2a 32 94 ff ff  ...*2......*2...
        backtrace:
          [<ffffffff9e96c27a>] kmemleak_alloc+0x4a/0xa0
          [<ffffffff9e425348>] __kmalloc+0xe8/0x220
          [<ffffffff9e3777c1>] tracing_map_array_alloc+0xb1/0x140
          [<ffffffff9e261be0>] kretprobe_trampoline+0x0/0x50
          [<ffffffff9e38b935>] create_hist_data+0x535/0x750
          [<ffffffff9e38bd47>] event_hist_trigger_func+0x1f7/0x420
          [<ffffffff9e38893d>] event_trigger_write+0xfd/0x1a0
          [<ffffffff9e44dfc7>] __vfs_write+0x37/0x170
          [<ffffffff9e44f552>] vfs_write+0xb2/0x1b0
          [<ffffffff9e450b85>] SyS_write+0x55/0xc0
          [<ffffffff9e203857>] do_syscall_64+0x67/0x150
          [<ffffffff9e977ce7>] return_from_SYSCALL_64+0x0/0x6a
          [<ffffffffffffffff>] 0xffffffffffffffff
      
      Link: http://lkml.kernel.org/r/1502705898-27571-1-git-send-email-chuhu@redhat.com
      
      Cc: stable@vger.kernel.org
      Fixes: 08d43a5f ("tracing: Add lock-free tracing_map")
      Signed-off-by: Chunyu Hu <chuhu@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • ftrace: Check for null ret_stack on profile function graph entry function · a8f0f9e4
      Authored by Steven Rostedt (VMware)
      There's a small race between function graph shutdown and the calling of
      the registered function graph entry callback. The callback must not
      reference the task's ret_stack without first checking that it is not
      NULL. Note, when a ret_stack is allocated for a task, it stays allocated
      until the task exits. The problem here is that function_graph is shut
      down and a new task is created that doesn't have its ret_stack
      allocated. But since some of the functions are still being traced, the
      callbacks can still be called.
      
      The normal function_graph code handles this, but starting with commit
      8861dd30 ("ftrace: Access ret_stack->subtime only in the function
      profiler") the profiler code references the ret_stack on function entry, but
      doesn't check if it is NULL first.
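
      A hedged sketch of the guarded callback (this is close to upstream's
      profile_graph_entry() as I recall it; treat the details as
      illustrative):

        static int profile_graph_entry(struct ftrace_graph_ent *trace)
        {
            int index = trace->depth;

            function_profile_call(trace->func, 0, NULL, NULL);

            /* If function graph is shutting down, ret_stack can be NULL */
            if (!current->ret_stack)
                return 0;

            if (index >= 0 && index < FTRACE_RETFUNC_DEPTH)
                current->ret_stack[index].subtime = 0;

            return 1;
        }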
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=196611
      
      Cc: stable@vger.kernel.org
      Fixes: 8861dd30 ("ftrace: Access ret_stack->subtime only in the function profiler")
      Reported-by: lilydjwg@gmail.com
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • block: replace bi_bdev with a gendisk pointer and partitions index · 74d46992
      Authored by Christoph Hellwig
      This way we don't need a block_device structure to submit I/O.  The
      block_device has different lifetime rules from the gendisk and
      request_queue, and is usually only available when the block device node
      is open.  Other callers need to explicitly create one (e.g. the lightnvm
      passthrough code, or the new nvme multipathing code).
      
      For the actual I/O path all that we need is the gendisk, which exists
      once per block device.  But given that the block layer also does
      partition remapping we additionally need a partition index, which is
      used for said remapping in generic_make_request.
      
      Note that all the block drivers generally want request_queue or
      sometimes the gendisk, so this removes a layer of indirection all
      over the stack.
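
      In bio terms, the change replaces bio->bi_bdev with a gendisk pointer
      plus a partition number; a hedged sketch of the new fields and the
      setter (modeled on upstream's bio_set_dev()):

        struct bio {
            struct gendisk  *bi_disk;
            u8               bi_partno;
            /* remaining fields unchanged */
        };

        #define bio_set_dev(bio, bdev)                  \
        do {                                            \
            (bio)->bi_disk   = (bdev)->bd_disk;         \
            (bio)->bi_partno = (bdev)->bd_partno;       \
        } while (0)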
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  17. 16 Aug 2017, 1 commit
    • bpf: fix bpf_trace_printk on 32 bit archs · 88a5c690
      Authored by Daniel Borkmann
      James reported that on MIPS32 bpf_trace_printk() is currently
      broken while MIPS64 works fine:
      
        bpf_trace_printk() uses conditional operators to attempt to
        pass different types to __trace_printk() depending on the
        format specifiers. This doesn't work as intended on 32-bit
        architectures, where u32 and long are passed differently from
        u64: the result of a C conditional operator follows the
        "usual arithmetic conversions" rules, so the values passed
        to __trace_printk() will always be u64 [causing issues
        later in the va_list handling for vscnprintf()].
      
        For example the samples/bpf/tracex5 test printed lines like
        below on MIPS32, where the fd and buf have come from the u64
        fd argument, and the size from the buf argument:
      
          [...] 1180.941542: 0x00000001: write(fd=1, buf=  (null), size=6258688)
      
        Instead of this:
      
          [...] 1625.616026: 0x00000001: write(fd=1, buf=009e4000, size=512)
      
      One way to get it working is to expand the various combinations of
      argument types into 8 different combinations for 32-bit and 64-bit
      kernels. James tested the fix on MIPS32 and MIPS64 and confirmed that
      it resolves the issue.
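
      A self-contained illustration of the underlying C rule (nothing here
      comes from the patch itself): both arms of ?: are converted to a common
      type, so a u32 arm is silently widened to u64, and on a 32-bit target
      the callee's va_arg() then pulls two words for what the format string
      says is one argument:

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint32_t small = 0x1234;
            uint64_t big = 0x5678;
            int pick_small = 1;

            /* the conditional expression has type uint64_t no matter
             * which arm is chosen */
            printf("%zu\n", sizeof(pick_small ? small : big)); /* prints 8 */
            return 0;
        }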
      
      Fixes: 9c959c86 ("tracing: Allow BPF programs to call bpf_trace_printk()")
      Reported-by: James Hogan <james.hogan@imgtec.com>
      Tested-by: James Hogan <james.hogan@imgtec.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  18. 08 Aug 2017, 1 commit
    • bpf: add support for sys_enter_* and sys_exit_* tracepoints · cf5f5cea
      Authored by Yonghong Song
      Currently, bpf programs cannot be attached to sys_enter_* and sys_exit_*
      style tracepoints. The iovisor/bcc issue #748
      (https://github.com/iovisor/bcc/issues/748) documents this issue.
      For example, if you try to attach a bpf program to tracepoints
      syscalls/sys_enter_newfstat, you will get the following error:
         # ./tools/trace.py t:syscalls:sys_enter_newfstat
         Ioctl(PERF_EVENT_IOC_SET_BPF): Invalid argument
         Failed to attach BPF to tracepoint
      
      The main reason is that the syscalls/sys_enter_* and syscalls/sys_exit_*
      tracepoints are treated differently from other tracepoints, and there
      is no bpf hook into them.
      
      This patch adds bpf support for these syscalls tracepoints by
        . permitting bpf attachment in ioctl PERF_EVENT_IOC_SET_BPF
        . calling bpf programs in perf_syscall_enter and perf_syscall_exit
      
      The legality of bpf program ctx access is also checked.
      The function trace_event_get_offsets() returns the correct max offset
      for each specific syscall tracepoint, which is compared against the
      maximum offset accessed by the bpf program.
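
      A hedged sketch of where the hook lands in perf_syscall_enter()
      (record construction is elided; trace_call_bpf() is the existing bpf
      dispatch helper, the rest is illustrative):

        prog = rcu_dereference(sys_data->enter_event->prog);
        if (prog) {
            /* let the attached program see, and possibly drop, the
             * syscall-entry record before it is emitted */
            if (!trace_call_bpf(prog, rec))
                return;
        }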
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: David S. Miller <davem@davemloft.net>
  19. 03 Aug 2017, 3 commits
    • ring-buffer: Have ring_buffer_alloc_read_page() return error on offline CPU · a7e52ad7
      Authored by Steven Rostedt (VMware)
      Chunyu Hu reported:
        "per_cpu trace directories and files are created for all possible cpus,
         but only the cpus which have ever been on-lined have their own per cpu
         ring buffer (allocated by cpuhp threads). While trace_buffers_open, the
         open handler for trace file 'trace_pipe_raw' is always trying to access
         field of ring_buffer_per_cpu, and would panic with the NULL pointer.
      
         Align the behavior of trace_pipe_raw with trace_pipe, which returns
         -ENODEV when opened if that cpu does not have a trace ring buffer.
      
         Reproduce:
         cat /sys/kernel/debug/tracing/per_cpu/cpu31/trace_pipe_raw
         (cpu31 was never onlined; this is a 16-core x86_64 box)
      
         Tested with:
         1) boot with maxcpus=14, read trace_pipe_raw of cpu15.
            Got -ENODEV.
         2) online cpu15, read trace_pipe_raw of cpu15.
            Got the raw trace data.
      
         Call trace:
         [ 5760.950995] RIP: 0010:ring_buffer_alloc_read_page+0x32/0xe0
         [ 5760.961678]  tracing_buffers_read+0x1f6/0x230
         [ 5760.962695]  __vfs_read+0x37/0x160
         [ 5760.963498]  ? __vfs_read+0x5/0x160
         [ 5760.964339]  ? security_file_permission+0x9d/0xc0
         [ 5760.965451]  ? __vfs_read+0x5/0x160
         [ 5760.966280]  vfs_read+0x8c/0x130
         [ 5760.967070]  SyS_read+0x55/0xc0
         [ 5760.967779]  do_syscall_64+0x67/0x150
         [ 5760.968687]  entry_SYSCALL64_slow_path+0x25/0x25"
      
      This was introduced by the addition of the feature to reuse reader pages
      instead of re-allocating them. The problem is that the allocation of a
      reader page (which is per cpu) does not check if the cpu is online and set
      up for the ring buffer.
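
      A hedged sketch of the added guard (upstream switches the return to an
      ERR_PTR so callers can tell the offline case apart with IS_ERR()):

        /* in ring_buffer_alloc_read_page(): */
        if (!cpumask_test_cpu(cpu, buffer->cpumask))
            return ERR_PTR(-ENODEV);    /* cpu never set up */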
      
      Link: http://lkml.kernel.org/r/1500880866-1177-1-git-send-email-chuhu@redhat.com
      
      Cc: stable@vger.kernel.org
      Fixes: 73a757e6 ("ring-buffer: Return reader page back into existing ring buffer")
      Reported-by: Chunyu Hu <chuhu@redhat.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • tracing: Missing error code in tracer_alloc_buffers() · 147d88e0
      Authored by Dan Carpenter
      If ring_buffer_alloc() or one of the next couple of function calls
      fails, then we should return -ENOMEM, but the current code returns
      success.
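
      The shape of the fix, sketched (the variable and label names are
      illustrative):

        buffer = ring_buffer_alloc(size, rb_flags);
        if (!buffer) {
            ret = -ENOMEM;  /* was: ret left at 0, reporting success */
            goto out_free;
        }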
      
      Link: http://lkml.kernel.org/r/20170801110201.ajdkct7vwzixahvx@mwanda
      
      Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: stable@vger.kernel.org
      Fixes: b32614c0 ('tracing/rb: Convert to hotplug state machine')
      Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
    • tracing: Call clear_boot_tracer() at lateinit_sync · 4bb0f0e7
      Authored by Steven Rostedt (VMware)
      The clear_boot_tracer() function is used to reset the
      default_bootup_tracer string to prevent it from being accessed after
      boot, as it originally points to init data. But since
      clear_boot_tracer() is called via late_initcall(), it races with the
      initcall that registers the hwlat tracer. If someone adds
      "ftrace=hwlat" to the kernel command line, then depending on how the
      linker lays out the text, the saved command line may be cleared and the
      hwlat tracer never initialized.
      
      Simply have clear_boot_tracer() called via late_initcall_sync(), as
      that level is for tasks to be run after lateinit.
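
      The one-line change, sketched:

        /* was: late_initcall(clear_boot_tracer); */
        late_initcall_sync(clear_boot_tracer);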
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=196551
      
      Cc: stable@vger.kernel.org
      Fixes: e7c15cd8 ("tracing: Added hardware latency tracer")
      Reported-by: Zamir SUN <sztsian@gmail.com>
      Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
  20. 29 Jul 2017, 3 commits
    • block: use standard blktrace API to output cgroup info for debug notes · 35fe6d76
      Authored by Shaohua Li
      Currently cfq/bfq/blk-throttle output cgroup info in traces in their own
      way. Now that we have a standard blktrace API for this, convert them to
      use it.
      
      Note, this changes the behavior a little bit. cgroup info isn't output
      by default; we only do this with the 'blk_cgroup' option enabled. cgroup
      info isn't output as a string by default either; we only do that with
      the 'blk_cgname' option enabled. Also, cgroup info is output at a
      different position in the note string. I think these behavior changes
      aren't a big issue (actually we make the trace data shorter, which is
      good), since the blktrace note is solely for debugging.
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blktrace: add an option to allow displaying cgroup path · 69fd5c39
      Authored by Shaohua Li
      By default we output the cgroup id in blktrace. This adds an option to
      display the cgroup path instead. Since getting the cgroup path is a
      relatively heavy operation, we don't enable it by default.
      
      With the option enabled, blktrace will output something like this:
      dd-1353  [007] d..2   293.015252:   8,0   /test/level  D   R 24 + 8 [dd]
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
    • blktrace: export cgroup info in trace · ca1136c9
      Authored by Shaohua Li
      Currently blktrace isn't cgroup aware. blktrace prints out the task name
      of the current context, but the task of the current context isn't always
      in the cgroup where the BIO came from, so we can't use the task name to
      find the IO cgroup. For example, writeback BIOs always come from the
      flusher thread, but the BIOs are for different blk cgroups. Requests can
      be requeued and dispatched from completely different tasks. MD/DM are
      other examples.
      
      This patch tries to fix the gap. We print out cgroup fhandle info in
      blktrace. Userspace can use open_by_handle_at() syscall to find the
      cgroup by fhandle. Or userspace can use name_to_handle_at() syscall to
      find fhandle for a cgroup and use a BPF program to filter out blktrace
      for a specific cgroup.
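
      For the userspace side, a hedged sketch of resolving an fhandle from
      the trace back to a cgroup path (open_by_handle_at(2) is the real
      syscall; assume the handle bytes were already copied out of the
      blktrace record into a struct file_handle):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        /* mount_fd: an open fd on the cgroup2 mount */
        static void print_cgroup_path(int mount_fd, struct file_handle *fh)
        {
            char link[64], buf[4096];
            ssize_t n;
            int fd = open_by_handle_at(mount_fd, fh, O_RDONLY);

            if (fd < 0)
                return;
            /* /proc/self/fd/N resolves to the cgroup directory */
            snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
            n = readlink(link, buf, sizeof(buf) - 1);
            if (n > 0)
                printf("%.*s\n", (int)n, buf);
            close(fd);
        }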
      
      We add a new 'blk_cgroup' trace option for the blk tracer. It defaults
      to off, so applications that don't know about the new option aren't
      affected. When it's on, we output the fhandle info right after
      blk_io_trace, with an extra bit set in the event action. So from an
      application's point of view, blktrace with the option enabled will
      output new actions.
      
      I didn't change the blk trace event yet, since I'm not sure whether
      changing the trace event output is an ABI issue. If not, I'll do it
      later.
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Signed-off-by: Shaohua Li <shli@fb.com>
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
  21. 20 Jul 2017, 2 commits
  22. 19 Jul 2017, 1 commit
  23. 12 Jul 2017, 1 commit