1. 16 Aug 2019, 1 commit
  2. 28 Jul 2019, 2 commits
    • perf/core: Fix race between close() and fork() · 4a5cc64d
      Committed by Peter Zijlstra
      commit 1cf8dfe8a661f0462925df943140e9f6d1ea5233 upstream.
      
      Syzcaller reported the following Use-after-Free bug:
      
      	close()						clone()
      
      							  copy_process()
      							    perf_event_init_task()
      							      perf_event_init_context()
      							        mutex_lock(parent_ctx->mutex)
      								inherit_task_group()
      								  inherit_group()
      								    inherit_event()
      								      mutex_lock(event->child_mutex)
      								      // expose event on child list
      								      list_add_tail()
      								      mutex_unlock(event->child_mutex)
      							        mutex_unlock(parent_ctx->mutex)
      
      							    ...
      							    goto bad_fork_*
      
      							  bad_fork_cleanup_perf:
      							    perf_event_free_task()
      
      	  perf_release()
      	    perf_event_release_kernel()
      	      list_for_each_entry()
      		mutex_lock(ctx->mutex)
      		mutex_lock(event->child_mutex)
      		// event is from the failing inherit
      		// on the other CPU
      		perf_remove_from_context()
      		list_move()
      		mutex_unlock(event->child_mutex)
      		mutex_unlock(ctx->mutex)
      
      							      mutex_lock(ctx->mutex)
      							      list_for_each_entry_safe()
      							        // event already stolen
      							      mutex_unlock(ctx->mutex)
      
      							    delayed_free_task()
      							      free_task()
      
      	     list_for_each_entry_safe()
      	       list_del()
      	       free_event()
      	         _free_event()
      		   // and so event->hw.target
      		   // is the already freed failed clone()
      		   if (event->hw.target)
      		     put_task_struct(event->hw.target)
      		       // WHOOPSIE, already quite dead
      
      Which puts the lie to the comment on perf_event_free_task():
      'unexposed, unused context' -- not so much.
      
      Which is a 'fun' confluence of fail; copy_process() doing an
      unconditional free_task() and not respecting refcounts, and perf having
      creative locking. In particular:
      
        82d94856 ("perf/core: Fix lock inversion between perf,trace,cpuhp")
      
      seems to have overlooked this 'fun' parade.
      
      Solve it by using the fact that detached events still have a reference
      count on their (previous) context. With this perf_event_free_task()
      can detect when events have escaped and wait for their destruction.
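      
      A minimal sketch of the idea (close to the upstream fix, but simplified):
      perf_event_free_task() waits for every escaped event to drop its context
      reference before putting the context, while the perf_release() side wakes
      the waiter after freeing a stolen child event:
      
      	/* perf_event_free_task(), tail end: */
      	mutex_unlock(&ctx->mutex);
      
      	/* Wait for all events to drop their context reference. */
      	wait_var_event(&ctx->refcount, refcount_read(&ctx->refcount) == 1);
      	put_ctx(ctx); /* must be last */
      
      	/* perf_event_release_kernel(), after freeing a stolen child: */
      	smp_mb(); /* pairs with wait_var_event() */
      	wake_up_var(&ctx->refcount);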
      Debugged-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Reported-by: syzbot+a24c397a29ad22d86c98@syzkaller.appspotmail.com
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 82d94856 ("perf/core: Fix lock inversion between perf,trace,cpuhp")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      4a5cc64d
    • perf/core: Fix exclusive events' grouping · 75100ec5
      Committed by Alexander Shishkin
      commit 8a58ddae23796c733c5dfbd717538d89d036c5bd upstream.
      
      So far, we tried to disallow grouping exclusive events for the fear of
      complications they would cause with moving between contexts. Specifically,
      moving a software group to a hardware context would violate the exclusivity
      rules if both groups contain matching exclusive events.
      
      This attempt was, however, unsuccessful: the check that we have in the
      perf_event_open() syscall is both wrong (it looks at the wrong PMU) and
      insufficient (the group leader may still be exclusive), as can be illustrated
      by running:
      
        $ perf record -e '{intel_pt//,cycles}' uname
        $ perf record -e '{cycles,intel_pt//}' uname
      
      ultimately successfully.
      
      Furthermore, we are completely free to trigger the exclusivity violation
      by:
      
         perf -e '{cycles,intel_pt//}' -e '{intel_pt//,instructions}'
      
      even though the helpful perf record will not allow that, the ABI will.
      
      The warning later in the perf_event_open() path will also not trigger, because
      it's also wrong.
      
      Fix all this by validating the original group before moving, getting rid
      of broken safeguards and placing a useful one to perf_install_in_context().
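      
      Sketched (simplified from the upstream change; exclusive_event_installable()
      walks the target context looking for conflicting exclusive events):
      
      	/* perf_event_open(): validate the group before any context move */
      	if (!exclusive_event_installable(event, ctx)) {
      		err = -EBUSY;
      		goto err_locked;
      	}
      
      	/* perf_install_in_context(): the new safeguard */
      	lockdep_assert_held(&ctx->mutex);
      	WARN_ON_ONCE(!exclusive_event_installable(event, ctx));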
      Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: mathieu.poirier@linaro.org
      Cc: will.deacon@arm.com
      Fixes: bed5b25a ("perf: Add a pmu capability for "exclusive" events")
      Link: https://lkml.kernel.org/r/20190701110755.24646-1-alexander.shishkin@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      75100ec5
  3. 21 Jul 2019, 1 commit
  4. 10 May 2019, 1 commit
    • perf/core: Fix perf_event_disable_inatomic() race · 42638d6a
      Committed by Peter Zijlstra
      [ Upstream commit 1d54ad944074010609562da5c89e4f5df2f4e5db ]
      
      Thomas-Mich Richter reported he triggered a WARN()ing from event_function_local()
      on his s390. The problem boils down to:
      
      	CPU-A				CPU-B
      
      	perf_event_overflow()
      	  perf_event_disable_inatomic()
      	    @pending_disable = 1
      	    irq_work_queue();
      
      	sched-out
      	  event_sched_out()
      	    @pending_disable = 0
      
      					sched-in
      					perf_event_overflow()
      					  perf_event_disable_inatomic()
      					    @pending_disable = 1;
      					    irq_work_queue(); // FAILS
      
      	irq_work_run()
      	  perf_pending_event()
      	    if (@pending_disable)
      	      perf_event_disable_local(); // WHOOPS
      
      The problem is generic, but s390 is particularly sensitive
      because it doesn't implement arch_irq_work_raise(), nor does it call
      irq_work_run() from its PMU interrupt handler (nor would that be
      sufficient in this case, because s390 also generates
      perf_event_overflow() from pmu::stop). Add to that the fact that s390
      is a virtual architecture and (virtual) CPU-A can stall long enough
      for the above race to happen, even if it would self-IPI.
      
      Adding an irq_work_sync() to event_sched_in() would work for all hardware
      PMUs that properly use irq_work_run(), but fails for software PMUs.
      
      Instead encode the CPU number in @pending_disable, such that we can
      tell which CPU requested the disable. This then allows us to detect
      the above scenario and even redirect the IPI to make up for the failed
      queue.
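      
      Roughly, per the upstream change: @pending_disable goes from a 0/1 flag
      to -1/CPU, and the irq_work handler either disables locally or redirects
      the work to the CPU that asked for the disable:
      
      	static void perf_event_disable_inatomic(struct perf_event *event)
      	{
      		WRITE_ONCE(event->pending_disable, smp_processor_id());
      		/* can fail, see perf_pending_event_disable() */
      		irq_work_queue(&event->pending);
      	}
      
      	static void perf_pending_event_disable(struct perf_event *event)
      	{
      		int cpu = READ_ONCE(event->pending_disable);
      
      		if (cpu < 0)
      			return;
      
      		if (cpu == smp_processor_id()) {
      			WRITE_ONCE(event->pending_disable, -1);
      			perf_event_disable_local(event);
      			return;
      		}
      
      		/* the self-queue failed; make up for it on the right CPU */
      		irq_work_queue_on(&event->pending, cpu);
      	}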
      Reported-by: Thomas-Mich Richter <tmricht@linux.ibm.com>
      Tested-by: Thomas Richter <tmricht@linux.ibm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Hendrik Brueckner <brueckner@linux.ibm.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      42638d6a
  5. 20 Apr 2019, 1 commit
    • perf/core: Restore mmap record type correctly · 673e23ce
      Committed by Stephane Eranian
      [ Upstream commit d9c1bb2f6a2157b38e8eb63af437cb22701d31ee ]
      
      On mmap(), perf_events generates a RECORD_MMAP record and then checks
      which events are interested in this record. There are currently 2
      versions of mmap records: RECORD_MMAP and RECORD_MMAP2. MMAP2 is larger.
      The event configuration controls which version the user level tool
      accepts.
      
      If the event->attr.mmap2 field is set to 1, an MMAP2 record is returned.
      perf_event_mmap_output() takes care of this. It checks attr->mmap2 and
      corrects the record fields before putting it in the sampling buffer of
      the event.  At the end the function restores the modified MMAP record
      fields.
      
      The problem is that the function restores the size but not the type.
      Thus, if a subsequent event only accepts the MMAP type, it would
      instead receive an MMAP2 record with the size of an MMAP record.
      
      This patch fixes the problem by restoring the record type on exit.
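      
      The save/restore pattern in perf_event_mmap_output(), sketched (the
      type half is the new part):
      
      	int size = mmap_event->event_id.header.size;
      	u32 type = mmap_event->event_id.header.type;	/* new: save type */
      
      	/* ... rewrite the record as MMAP or MMAP2 and emit it ... */
      out:
      	mmap_event->event_id.header.size = size;
      	mmap_event->event_id.header.type = type;	/* new: restore type */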
      Signed-off-by: Stephane Eranian <eranian@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@linux.intel.com>
      Fixes: 13d7a241 ("perf: Add attr->mmap2 attribute to an event")
      Link: http://lkml.kernel.org/r/20190307185233.225521-1-eranian@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Sasha Levin <sashal@kernel.org>
      673e23ce
  6. 14 Mar 2019, 1 commit
  7. 20 Feb 2019, 1 commit
    • perf/x86: Add check_period PMU callback · 74cbb754
      Committed by Jiri Olsa
      commit 81ec3f3c4c4d78f2d3b6689c9816bfbdf7417dbb upstream.
      
      Vince (and later on Ravi) reported crashes in the BTS code during
      fuzzing with the following backtrace:
      
        general protection fault: 0000 [#1] SMP PTI
        ...
        RIP: 0010:perf_prepare_sample+0x8f/0x510
        ...
        Call Trace:
         <IRQ>
         ? intel_pmu_drain_bts_buffer+0x194/0x230
         intel_pmu_drain_bts_buffer+0x160/0x230
         ? tick_nohz_irq_exit+0x31/0x40
         ? smp_call_function_single_interrupt+0x48/0xe0
         ? call_function_single_interrupt+0xf/0x20
         ? call_function_single_interrupt+0xa/0x20
         ? x86_schedule_events+0x1a0/0x2f0
         ? x86_pmu_commit_txn+0xb4/0x100
         ? find_busiest_group+0x47/0x5d0
         ? perf_event_set_state.part.42+0x12/0x50
         ? perf_mux_hrtimer_restart+0x40/0xb0
         intel_pmu_disable_event+0xae/0x100
         ? intel_pmu_disable_event+0xae/0x100
         x86_pmu_stop+0x7a/0xb0
         x86_pmu_del+0x57/0x120
         event_sched_out.isra.101+0x83/0x180
         group_sched_out.part.103+0x57/0xe0
         ctx_sched_out+0x188/0x240
         ctx_resched+0xa8/0xd0
         __perf_event_enable+0x193/0x1e0
         event_function+0x8e/0xc0
         remote_function+0x41/0x50
         flush_smp_call_function_queue+0x68/0x100
         generic_smp_call_function_single_interrupt+0x13/0x30
         smp_call_function_single_interrupt+0x3e/0xe0
         call_function_single_interrupt+0xf/0x20
         </IRQ>
      
      The reason is that while the event init code does several checks
      for BTS events and prevents several unwanted config bits for
      BTS events (like precise_ip), the PERF_EVENT_IOC_PERIOD ioctl allows
      one to create a BTS event without those checks being done.
      
      The following sequence will cause the crash:
      
      If we create an 'almost' BTS event with precise_ip and callchains,
      and then turn it into a BTS event via PERF_EVENT_IOC_PERIOD, it will
      crash the perf_prepare_sample() function, because precise_ip events
      are expected to come in with callchain data initialized, but that's
      not the case for the intel_pmu_drain_bts_buffer() caller.
      
      Add a check_period callback to be called before the period is changed
      via PERF_EVENT_IOC_PERIOD. It denies the change if the event would
      become a BTS event. Also add the limit_period check.
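      
      Sketched, the new optional pmu callback and its call site (simplified):
      
      	struct pmu {
      		/* ... */
      		/* Check period value for PERF_EVENT_IOC_PERIOD */
      		int (*check_period)(struct perf_event *event, u64 value);
      	};
      
      	/* in the PERF_EVENT_IOC_PERIOD path, before changing the period: */
      	if (event->pmu->check_period && event->pmu->check_period(event, value))
      		return -EINVAL;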
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Acked-by: Peter Zijlstra <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
      Cc: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20190204123532.GA4794@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      74cbb754
  8. 02 Oct 2018, 2 commits
    • perf/ring_buffer: Prevent concurrent ring buffer access · cd6fb677
      Committed by Jiri Olsa
      Some of the scheduling tracepoints allow the perf_tp_event()
      code to write to a ring buffer on a different CPU than the one
      the code is running on.
      
      This results in corrupted ring buffer data, demonstrated by the
      following perf commands:
      
        # perf record -e 'sched:sched_switch,sched:sched_wakeup' perf bench sched messaging
        # Running 'sched/messaging' benchmark:
        # 20 sender and receiver processes per group
        # 10 groups == 400 processes run
      
             Total time: 0.383 [sec]
        [ perf record: Woken up 8 times to write data ]
        0x42b890 [0]: failed to process type: -1765585640
        [ perf record: Captured and wrote 4.825 MB perf.data (29669 samples) ]
      
        # perf report --stdio
        0x42b890 [0]: failed to process type: -1765585640
      
      The reason for the corruption is that some of the scheduling tracepoints
      have __perf_task defined and thus allow storing data to another
      CPU's ring buffer:
      
        sched_waking
        sched_wakeup
        sched_wakeup_new
        sched_stat_wait
        sched_stat_sleep
        sched_stat_iowait
        sched_stat_blocked
      
      The perf_tp_event() function first stores samples for the current-CPU
      events defined for the tracepoint:
      
          hlist_for_each_entry_rcu(event, head, hlist_entry)
            perf_swevent_event(event, count, &data, regs);
      
      It then iterates over the events of the 'task' and stores the sample
      for any task event that passes the tracepoint checks:
      
        ctx = rcu_dereference(task->perf_event_ctxp[perf_sw_context]);
      
        list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
          if (event->attr.type != PERF_TYPE_TRACEPOINT)
            continue;
          if (event->attr.config != entry->type)
            continue;
      
          perf_swevent_event(event, count, &data, regs);
        }
      
      The above code can race with the same code running on another CPU,
      ending up with two CPUs trying to store into the same ring
      buffer, which is specifically not allowed.
      
      This patch prevents the problem by allowing only events on the same
      CPU as the current one to receive the event.
      
      NOTE: this requires the use of (per-task-)per-cpu buffers for this
      feature to work; perf-record does this.
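      
      The guard added to the task-context loop in perf_tp_event(), roughly:
      
      	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
      		if (event->cpu != smp_processor_id())
      			continue;	/* never write another CPU's buffer */
      		if (event->attr.type != PERF_TYPE_TRACEPOINT)
      			continue;
      		if (event->attr.config != entry->type)
      			continue;
      
      		perf_swevent_event(event, count, &data, regs);
      	}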
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      [peterz: small edits to Changelog]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andrew Vagin <avagin@openvz.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: e6dab5ff ("perf/trace: Add ability to set a target task for events")
      Link: http://lkml.kernel.org/r/20180923161343.GB15054@krava
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      cd6fb677
    • perf/core: Fix perf_pmu_unregister() locking · a9f97721
      Committed by Peter Zijlstra
      When we unregister a PMU, we fail to serialize the @pmu_idr properly.
      Fix that by doing the entire thing under pmu_lock.
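      
      A sketch of the resulting shape (simplified; the mutex is pmus_lock in
      the source): the list removal, the grace periods and the idr_remove()
      all run with the lock held:
      
      	void perf_pmu_unregister(struct pmu *pmu)
      	{
      		mutex_lock(&pmus_lock);
      		list_del_rcu(&pmu->entry);
      
      		/* synchronize against SRCU/RCU readers of the pmu list */
      		synchronize_srcu(&pmus_srcu);
      		synchronize_rcu();
      
      		if (pmu->type >= PERF_TYPE_MAX)
      			idr_remove(&pmu_idr, pmu->type);	/* now serialized */
      		/* ... remaining teardown ... */
      		mutex_unlock(&pmus_lock);
      	}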
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 2e80a82a ("perf: Dynamic pmu types")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a9f97721
  9. 29 Sep 2018, 1 commit
  10. 10 Sep 2018, 1 commit
    • perf/core: Force USER_DS when recording user stack data · 02e18447
      Committed by Yabin Cui
      Perf can record user stack data in response to a synchronous request, such
      as a tracepoint firing. If this happens under set_fs(KERNEL_DS), then we
      end up reading user stack data using __copy_from_user_inatomic() under
      set_fs(KERNEL_DS). I think this conflicts with the intention of using
      set_fs(KERNEL_DS). And it is explicitly forbidden by hardware on ARM64
      when both CONFIG_ARM64_UAO and CONFIG_ARM64_PAN are used.
      
      So fix this by forcing USER_DS when recording user stack data.
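      
      The fix, sketched: bracket the user stack copy with an explicit
      USER_DS window:
      
      	mm_segment_t fs;
      
      	/* force USER_DS for the duration of the user stack copy */
      	fs = get_fs();
      	set_fs(USER_DS);
      	rem = __output_copy_user(handle, (void *)sp, dump_size);
      	set_fs(fs);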
      Signed-off-by: Yabin Cui <yabinc@google.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@vger.kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 88b0193d ("perf/callchain: Force USER_DS when invoking perf_callchain_user()")
      Link: http://lkml.kernel.org/r/20180823225935.27035-1-yabinc@google.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      02e18447
  11. 31 Aug 2018, 1 commit
  12. 31 Jul 2018, 1 commit
    • arm64: perf: Add cap_user_time aarch64 · 9d2dcc8f
      Committed by Michael O'Farrell
      It is useful to get the running time of a thread.  Doing so in an
      efficient manner can be important for performance of user applications.
      Avoiding system calls in `clock_gettime` when handling
      CLOCK_THREAD_CPUTIME_ID is important.  Other clocks are handled in the
      VDSO, but CLOCK_THREAD_CPUTIME_ID falls back on the system call.
      
      CLOCK_THREAD_CPUTIME_ID is not handled in the VDSO since it would have
      costs associated with maintaining updated user space accessible time
      offsets.  These offsets have to be updated every time a thread is
      scheduled/descheduled.  However, for programs regularly checking the
      running time of a thread, this is a performance improvement.
      
      This patch takes a middle ground, and adds support for cap_user_time, an
      optional feature of the perf_event API.  This way costs are only
      incurred when the perf_event API is used.  This is done the same way
      as on x86.
      
      Ultimately this allows calculating the thread running time in userspace
      on aarch64 as follows (adapted from perf_event_open manpage):
      
      u32 seq, time_mult, time_shift;
      u64 running, count, time_offset, quot, rem, delta;
      struct perf_event_mmap_page *pc;
      pc = buf;  // buf is the perf event mmaped page as documented in the API.
      
      if (pc->cap_usr_time) {
          do {
              seq = pc->lock;
              barrier();
              running = pc->time_running;
      
              count = readCNTVCT_EL0();  // Read ARM hardware clock.
              time_offset = pc->time_offset;
              time_mult   = pc->time_mult;
              time_shift  = pc->time_shift;
      
              barrier();
          } while (pc->lock != seq);
      
          quot = (count >> time_shift);
          rem = count & (((u64)1 << time_shift) - 1);
          delta = time_offset + quot * time_mult +
                  ((rem * time_mult) >> time_shift);
      
          running += delta;
          // running now has the current nanosecond level thread time.
      }
      
      Summary of changes in the patch:
      
      For aarch64 systems, make arch_perf_update_userpage update the timing
      information stored in the perf_event page.  This requires the following
      calculations (sketched in code below):
        - Calculate the appropriate time_mult and time_shift factors to convert
          ticks to nanoseconds for the current clock frequency.
        - Adjust the mult and shift factors to avoid shift factors of 32 bits.
          (possibly unnecessary)
        - The time_offset userspace should apply when doing calculations:
          the negative of the current sched time (now), because the time_running
          and time_enabled fields of the perf_event page have just been updated.
      Toggle bits to appropriate values:
        - Enable cap_user_time
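      
      The resulting arch_perf_update_userpage() looks roughly like this
      (simplified; arch_timer_get_rate() returns the counter frequency):
      
      	void arch_perf_update_userpage(struct perf_event *event,
      				       struct perf_event_mmap_page *userpg, u64 now)
      	{
      		u32 freq = arch_timer_get_rate();
      		u32 shift;
      
      		userpg->cap_user_time = 1;
      
      		/* mult/shift converting counter ticks to nanoseconds */
      		clocks_calc_mult_shift(&userpg->time_mult, &shift, freq,
      				       NSEC_PER_SEC, 0);
      		if (shift == 32) {	/* keep the shift below 32 bits */
      			shift = 31;
      			userpg->time_mult >>= 1;
      		}
      		userpg->time_shift = (u16)shift;
      		/* negative 'now': time_running/enabled were just updated */
      		userpg->time_offset = -now;
      	}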
      Signed-off-by: Michael O'Farrell <micpof@gmail.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9d2dcc8f
  13. 25 Jul 2018, 2 commits
    • perf/core: Fix crash when using HW tracing kernel filters · 7f635ff1
      Committed by Mathieu Poirier
      In function perf_event_parse_addr_filter(), the path::dentry of each struct
      perf_addr_filter is left unassigned (as it should be) when the pattern
      being parsed is related to kernel space.  But in function
      perf_addr_filter_match() the same dentries are given to d_inode() where
      the value is not expected to be NULL, resulting in the following splat:
      
        Unable to handle kernel NULL pointer dereference at virtual address 0000000000000058
        pc : perf_event_mmap+0x2fc/0x5a0
        lr : perf_event_mmap+0x2c8/0x5a0
        Process uname (pid: 2860, stack limit = 0x000000001cbcca37)
        Call trace:
         perf_event_mmap+0x2fc/0x5a0
         mmap_region+0x124/0x570
         do_mmap+0x344/0x4f8
         vm_mmap_pgoff+0xe4/0x110
         vm_mmap+0x2c/0x40
         elf_map+0x60/0x108
         load_elf_binary+0x450/0x12c4
         search_binary_handler+0x90/0x290
         __do_execve_file.isra.13+0x6e4/0x858
         sys_execve+0x3c/0x50
         el0_svc_naked+0x30/0x34
      
      This patch fixes the problem by introducing a new check in function
      perf_addr_filter_match() to see if the filter's dentry is NULL.
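      
      The added check, roughly:
      
      	static bool perf_addr_filter_match(struct perf_addr_filter *filter,
      					   struct file *file, unsigned long offset,
      					   unsigned long size)
      	{
      		/* kernel filters carry no dentry; they can't match a file */
      		if (!filter->path.dentry)
      			return false;
      
      		if (d_inode(filter->path.dentry) != file_inode(file))
      			return false;
      		/* ... offset/size range checks follow ... */
      	}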
      Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: miklos@szeredi.hu
      Cc: namhyung@kernel.org
      Cc: songliubraving@fb.com
      Fixes: 9511bce9 ("perf/core: Fix bad use of igrab()")
      Link: http://lkml.kernel.org/r/1531782831-1186-1-git-send-email-mathieu.poirier@linaro.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      7f635ff1
    • perf/x86/intel: Fix unwind errors from PEBS entries (mk-II) · 6cbc304f
      Committed by Peter Zijlstra
      Vince reported the perf_fuzzer giving various unwinder warnings and
      Josh reported:
      
      > Deja vu.  Most of these are related to perf PEBS, similar to the
      > following issue:
      >
      >   b8000586 ("perf/x86/intel: Cure bogus unwind from PEBS entries")
      >
      > This is basically the ORC version of that.  setup_pebs_sample_data() is
      > assembling a franken-pt_regs which ORC isn't happy about.  RIP is
      > inconsistent with some of the other registers (like RSP and RBP).
      
      Where the previous unwinder only needed BP and SP, ORC also requires
      IP. But we cannot spoof IP, because then the sample will get displaced,
      entirely negating the point of PEBS.
      
      So cure the whole thing differently by doing the unwind early; this
      does however require a means to communicate we did the unwind early.
      We (ab)use an unused sample_type bit for this, which we set on events
      that fill out the data->callchain before the normal
      perf_prepare_sample().
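      
      Sketched: the kernel-internal bit and the resulting guard in
      perf_prepare_sample():
      
      	/* non-ABI, internal-only sample_type bit */
      	__PERF_SAMPLE_CALLCHAIN_EARLY	= 1ULL << 63,
      
      	/* perf_prepare_sample(): skip the late unwind if done early */
      	if (sample_type & PERF_SAMPLE_CALLCHAIN) {
      		if (!(sample_type & __PERF_SAMPLE_CALLCHAIN_EARLY))
      			data->callchain = perf_callchain(event, regs);
      
      		size += data->callchain->nr;
      	}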
      Debugged-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Reported-by: Vince Weaver <vincent.weaver@maine.edu>
      Tested-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Tested-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6cbc304f
  14. 21 Jul 2018, 1 commit
    • pid: Implement PIDTYPE_TGID · 6883f81a
      Committed by Eric W. Biederman
      Everywhere except in the pid array we distinguish between a task's pid and
      a task's tgid (thread group id).  Even in the enumeration we sometimes want
      that distinction, so we have added __PIDTYPE_TGID.  With leader_pid
      we almost have an implementation of PIDTYPE_TGID in struct signal_struct.
      
      Add PIDTYPE_TGID as a first class member of the pid_type enumeration and
      into the pids array.  Then remove the __PIDTYPE_TGID special case and the
      leader_pid in signal_struct.
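      
      The resulting enumeration, for reference:
      
      	enum pid_type {
      		PIDTYPE_PID,
      		PIDTYPE_TGID,	/* now a first-class member */
      		PIDTYPE_PGID,
      		PIDTYPE_SID,
      		PIDTYPE_MAX,
      	};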
      
      The net size increase is just an extra pointer added to struct pid and
      an extra pair of pointers (an hlist_node) added to task_struct.
      
      The effect on code maintenance is the removal of a number of special
      cases today and the potential to remove many more special cases as
      PIDTYPE_TGID gets used to its fullest.  The long term potential
      is allowing zombie thread group leaders to exit, which will remove
      a lot more special cases in the code.
      Signed-off-by: N"Eric W. Biederman" <ebiederm@xmission.com>
      6883f81a
  15. 16 Jul 2018, 1 commit
  16. 27 Jun 2018, 1 commit
  17. 21 Jun 2018, 1 commit
  18. 25 May 2018, 4 commits
    • perf/core: Wire up compat PERF_EVENT_IOC_QUERY_BPF, PERF_EVENT_IOC_MODIFY_ATTRIBUTES · 82489c5f
      Committed by Eugene Syromiatnikov
      Since the pointer size is different in compat, and switching in _perf_ioctl
      is done using exact ioctl numbers, all new ioctl numbers that take a pointer
      should be added to perf_compat_ioctl for an _IOC_SIZE fixup before being
      passed to the perf_ioctl routine (this wouldn't be needed if the semantics
      of the size argument of the _IO* macros were honored).
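      
      The fixup in perf_compat_ioctl(), roughly (the two new cases are the
      subject of this patch):
      
      	static long perf_compat_ioctl(struct file *file, unsigned int cmd,
      				      unsigned long arg)
      	{
      		switch (_IOC_NR(cmd)) {
      		case _IOC_NR(PERF_EVENT_IOC_SET_FILTER):
      		case _IOC_NR(PERF_EVENT_IOC_ID):
      		case _IOC_NR(PERF_EVENT_IOC_QUERY_BPF):
      		case _IOC_NR(PERF_EVENT_IOC_MODIFY_ATTRIBUTES):
      			/* fix up pointer size (4 -> 8 in 32-on-64-bit case) */
      			if (_IOC_SIZE(cmd) == sizeof(compat_uptr_t)) {
      				cmd &= ~IOCSIZE_MASK;
      				cmd |= sizeof(void *) << IOCSIZE_SHIFT;
      			}
      			break;
      		}
      		return perf_ioctl(file, cmd, arg);
      	}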
      Signed-off-by: Eugene Syromiatnikov <esyr@redhat.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20180521123420.GA24291@asgard.redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      82489c5f
    • perf/core: Fix bad use of igrab() · 9511bce9
      Committed by Song Liu
      As Miklos reported and suggested:
      
       "This pattern repeats two times in trace_uprobe.c and in
        kernel/events/core.c as well:
      
            ret = kern_path(filename, LOOKUP_FOLLOW, &path);
            if (ret)
                goto fail_address_parse;
      
            inode = igrab(d_inode(path.dentry));
            path_put(&path);
      
        And it's wrong.  You can only hold a reference to the inode if you
        have an active ref to the superblock as well (which is normally
        through path.mnt) or holding s_umount.
      
        This way unmounting the containing filesystem while the tracepoint is
        active will give you the "VFS: Busy inodes after unmount..." message
        and a crash when the inode is finally put.
      
        Solution: store path instead of inode."
      
      This patch fixes the issue in kernel/events/core.c.
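      
      Sketch of the resulting pattern (field names per the fix):
      
      	/* hold the full path (and thus the mount), not a bare inode */
      	ret = kern_path(filename, LOOKUP_FOLLOW, &filter->path);
      	if (ret)
      		goto fail;
      
      	/* resolve the inode only at match time ... */
      	inode = d_inode(filter->path.dentry);
      
      	/* ... and drop the reference when the filter is freed */
      	path_put(&filter->path);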
      Reviewed-and-tested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Reported-by: Miklos Szeredi <miklos@szeredi.hu>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <kernel-team@fb.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 375637bc ("perf/core: Introduce address range filtering")
      Link: http://lkml.kernel.org/r/20180418062907.3210386-2-songliubraving@fb.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      9511bce9
    • perf/core: Fix group scheduling with mixed hw and sw events · a1150c20
      Committed by Song Liu
      When hw and sw events are mixed in the same group, they are all attached
      to the hw perf_event_context. This sometimes requires moving a group of
      perf_events to a different context.
      
      We found a bug in how the kernel handles this, for example if we do:
      
         perf stat -e '{faults,ref-cycles,faults}'  -I 1000
      
           1.005591180              1,297      faults
           1.005591180        457,476,576      ref-cycles
           1.005591180    <not supported>      faults
      
      First, the sw event "faults" is attached to the sw context and becomes the
      group leader. Then, the hw event "ref-cycles" is attached, so both events
      are moved to the hw context. Last, another sw "faults" tries to attach,
      but it fails because of the mismatch between the new target ctx (from the
      sw pmu) and the group_leader's ctx (the hw context, same as ref-cycles).
      
      The broken condition is:
         group_leader is sw event;
         group_leader is on hw context;
         add a sw event to the group.
      
      Fix this scenario by checking the group_leader's context (instead of just
      the event type). If the group_leader is on a hw context, use the ->pmu of
      this context to look up the context for the new event.
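      
      The fixed lookup, roughly (in_software_context() is a helper
      introduced by this change):
      
      	if (group_leader &&
      	    is_software_event(event) &&
      	    !in_software_context(group_leader)) {
      		/*
      		 * The leader already lives on a hardware context; resolve
      		 * the new sw event's context through that context's pmu,
      		 * not through the event's own software pmu.
      		 */
      		pmu = group_leader->ctx->pmu;
      	}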
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <kernel-team@fb.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: b04243ef ("perf: Complete software pmu grouping")
      Link: http://lkml.kernel.org/r/20180503194716.162815-1-songliubraving@fb.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a1150c20
    • perf/core: add perf_get_event() to return perf_event given a struct file · f8d959a5
      Committed by Yonghong Song
      A new extern function, perf_get_event(), is added to return a perf event
      given a struct file. This function will be used in later patches.
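      
      The helper is small; roughly:
      
      	struct perf_event *perf_get_event(struct file *file)
      	{
      		if (file->f_op != &perf_fops)
      			return ERR_PTR(-EINVAL);
      
      		return file->private_data;
      	}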
      Signed-off-by: Yonghong Song <yhs@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      f8d959a5
  19. 17 Apr 2018, 2 commits
  20. 12 Apr 2018, 1 commit
  21. 10 Apr 2018, 1 commit
    • perf/core: Fix use-after-free in uprobe_perf_close() · 621b6d2e
      Committed by Prashant Bhole
      A use-after-free bug was caught by KASAN while running usdt related
      code (BCC project, bcc/tests/python/test_usdt2.py):
      
      	==================================================================
      	BUG: KASAN: use-after-free in uprobe_perf_close+0x222/0x3b0
      	Read of size 4 at addr ffff880384f9b4a4 by task test_usdt2.py/870
      
      	CPU: 4 PID: 870 Comm: test_usdt2.py Tainted: G        W         4.16.0-next-20180409 #215
      	Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
      	Call Trace:
      	 dump_stack+0xc7/0x15b
      	 ? show_regs_print_info+0x5/0x5
      	 ? printk+0x9c/0xc3
      	 ? kmsg_dump_rewind_nolock+0x6e/0x6e
      	 ? uprobe_perf_close+0x222/0x3b0
      	 print_address_description+0x83/0x3a0
      	 ? uprobe_perf_close+0x222/0x3b0
      	 kasan_report+0x1dd/0x460
      	 ? uprobe_perf_close+0x222/0x3b0
      	 uprobe_perf_close+0x222/0x3b0
      	 ? probes_open+0x180/0x180
      	 ? free_filters_list+0x290/0x290
      	 trace_uprobe_register+0x1bb/0x500
      	 ? perf_event_attach_bpf_prog+0x310/0x310
      	 ? probe_event_disable+0x4e0/0x4e0
      	 perf_uprobe_destroy+0x63/0xd0
      	 _free_event+0x2bc/0xbd0
      	 ? lockdep_rcu_suspicious+0x100/0x100
      	 ? ring_buffer_attach+0x550/0x550
      	 ? kvm_sched_clock_read+0x1a/0x30
      	 ? perf_event_release_kernel+0x3e4/0xc00
      	 ? __mutex_unlock_slowpath+0x12e/0x540
      	 ? wait_for_completion+0x430/0x430
      	 ? lock_downgrade+0x3c0/0x3c0
      	 ? lock_release+0x980/0x980
      	 ? do_raw_spin_trylock+0x118/0x150
      	 ? do_raw_spin_unlock+0x121/0x210
      	 ? do_raw_spin_trylock+0x150/0x150
      	 perf_event_release_kernel+0x5d4/0xc00
      	 ? put_event+0x30/0x30
      	 ? fsnotify+0xd2d/0xea0
      	 ? sched_clock_cpu+0x18/0x1a0
      	 ? __fsnotify_update_child_dentry_flags.part.0+0x1b0/0x1b0
      	 ? pvclock_clocksource_read+0x152/0x2b0
      	 ? pvclock_read_flags+0x80/0x80
      	 ? kvm_sched_clock_read+0x1a/0x30
      	 ? sched_clock_cpu+0x18/0x1a0
      	 ? pvclock_clocksource_read+0x152/0x2b0
      	 ? locks_remove_file+0xec/0x470
      	 ? pvclock_read_flags+0x80/0x80
      	 ? fcntl_setlk+0x880/0x880
      	 ? ima_file_free+0x8d/0x390
      	 ? lockdep_rcu_suspicious+0x100/0x100
      	 ? ima_file_check+0x110/0x110
      	 ? fsnotify+0xea0/0xea0
      	 ? kvm_sched_clock_read+0x1a/0x30
      	 ? rcu_note_context_switch+0x600/0x600
      	 perf_release+0x21/0x40
      	 __fput+0x264/0x620
      	 ? fput+0xf0/0xf0
      	 ? do_raw_spin_unlock+0x121/0x210
      	 ? do_raw_spin_trylock+0x150/0x150
      	 ? SyS_fchdir+0x100/0x100
      	 ? fsnotify+0xea0/0xea0
      	 task_work_run+0x14b/0x1e0
      	 ? task_work_cancel+0x1c0/0x1c0
      	 ? copy_fd_bitmaps+0x150/0x150
      	 ? vfs_read+0xe5/0x260
      	 exit_to_usermode_loop+0x17b/0x1b0
      	 ? trace_event_raw_event_sys_exit+0x1a0/0x1a0
      	 do_syscall_64+0x3f6/0x490
      	 ? syscall_return_slowpath+0x2c0/0x2c0
      	 ? lockdep_sys_exit+0x1f/0xaa
      	 ? syscall_return_slowpath+0x1a3/0x2c0
      	 ? lockdep_sys_exit+0x1f/0xaa
      	 ? prepare_exit_to_usermode+0x11c/0x1e0
      	 ? enter_from_user_mode+0x30/0x30
      	random: crng init done
      	 ? __put_user_4+0x1c/0x30
      	 entry_SYSCALL_64_after_hwframe+0x3d/0xa2
      	RIP: 0033:0x7f41d95f9340
      	RSP: 002b:00007fffe71e4268 EFLAGS: 00000246 ORIG_RAX: 0000000000000003
      	RAX: 0000000000000000 RBX: 000000000000000d RCX: 00007f41d95f9340
      	RDX: 0000000000000000 RSI: 0000000000002401 RDI: 000000000000000d
      	RBP: 0000000000000000 R08: 00007f41ca8ff700 R09: 00007f41d996dd1f
      	R10: 00007fffe71e41e0 R11: 0000000000000246 R12: 00007fffe71e4330
      	R13: 0000000000000000 R14: fffffffffffffffc R15: 00007fffe71e4290
      
      	Allocated by task 870:
      	 kasan_kmalloc+0xa0/0xd0
      	 kmem_cache_alloc_node+0x11a/0x430
      	 copy_process.part.19+0x11a0/0x41c0
      	 _do_fork+0x1be/0xa20
      	 do_syscall_64+0x198/0x490
      	 entry_SYSCALL_64_after_hwframe+0x3d/0xa2
      
      	Freed by task 0:
      	 __kasan_slab_free+0x12e/0x180
      	 kmem_cache_free+0x102/0x4d0
      	 free_task+0xfe/0x160
      	 __put_task_struct+0x189/0x290
      	 delayed_put_task_struct+0x119/0x250
      	 rcu_process_callbacks+0xa6c/0x1b60
      	 __do_softirq+0x238/0x7ae
      
      	The buggy address belongs to the object at ffff880384f9b480
      	 which belongs to the cache task_struct of size 12928
      
      It occurs because the task_struct is freed before the perf_event which
      refers to the task, and the task flags are checked during teardown of the
      event. perf_event_alloc() assigns the task_struct to hw.target of the
      perf_event, but there is no reference counting for it.
      
      As a fix, we get_task_struct() in perf_event_alloc() at the above-mentioned
      assignment and put_task_struct() in _free_event().
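      
      The two hunks, sketched:
      
      	/* perf_event_alloc(): take a reference on the target task */
      	if (task) {
      		event->attach_state = PERF_ATTACH_TASK;
      		event->hw.target = get_task_struct(task);
      	}
      
      	/* _free_event(): drop it again */
      	if (event->hw.target)
      		put_task_struct(event->hw.target);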
      Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
      Reviewed-by: Oleg Nesterov <oleg@redhat.com>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <stable@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 63b6da39 ("perf: Fix perf_event_exit_task() race")
      Link: http://lkml.kernel.org/r/20180409100346.6416-1-bhole_prashant_q7@lab.ntt.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      621b6d2e
  22. 29 Mar 2018, 1 commit
  23. 20 Mar 2018, 1 commit
    • perf/cgroup: Fix child event counting bug · c917e0f2
      Committed by Song Liu
      When a perf_event is attached to a parent cgroup, it should count events
      for all child cgroups:
      
         parent_group   <---- perf_event
           \
            - child_group  <---- process(es)
      
      However, in our tests, we found this perf_event cannot report reliable
      results. Here is an example case:
      
        # create cgroups
        mkdir -p /sys/fs/cgroup/p/c
        # start perf for parent group
        perf stat -e instructions -G "p"
      
        # on another console, run test process in child cgroup:
        stressapptest -s 2 -M 1000 & echo $! > /sys/fs/cgroup/p/c/cgroup.procs
      
        # after the test process is done, stopping perf in the first console shows
      
             <not counted>      instructions              p
      
      The instructions should not be "<not counted>", as the process runs in the
      child cgroup.
      
      We found this is because perf_event->cgrp and cpuctx->cgrp are not
      identical, and thus perf_event->cgrp is not updated properly.
      
      This patch fixes this by updating perf_cgroup properly for ancestor
      cgroup(s).
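      
      One piece of the fix, heavily simplified: when stamping cgroup time,
      walk up through the ancestors so that an event attached to a parent
      cgroup sees consistent timestamps (a sketch, not the full diff):
      
      	struct cgroup_subsys_state *css;
      
      	for (css = &cgrp->css; css; css = css->parent) {
      		cgrp = container_of(css, struct perf_cgroup, css);
      		info = this_cpu_ptr(cgrp->info);
      		info->timestamp = ctx->timestamp;
      	}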
      Reported-by: Ephraim Park <ephiepark@fb.com>
      Signed-off-by: Song Liu <songliubraving@fb.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: <jolsa@redhat.com>
      Cc: <kernel-team@fb.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Link: http://lkml.kernel.org/r/20180312165943.1057894-1-songliubraving@fb.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c917e0f2
  24. 17 Mar 2018, 2 commits
    • perf/core: Clear sibling list of detached events · 24868367
      Committed by Mark Rutland
      When perf_group_detach() is called on a group leader, it updates each
      sibling's group_leader field to point to that sibling, effectively
      upgrading each sibling to a group leader. After perf_group_detach() has
      completed, the caller may free the leader event.
      
      We only remove siblings from the group leader's sibling_list when the
      leader has a non-empty group_node. This was fine prior to commit:
      
        8343aae6 ("perf/core: Remove perf_event::group_entry")
      
      ... as the sibling's sibling_list would be empty. However, now that we
      use the sibling_list field as both the list head and the list entry,
      this leaves each sibling with a non-empty sibling list, including the
      stale leader event.
      
      If perf_group_detach() is subsequently called on a sibling, it will
      appear to be a group leader, and we'll walk the sibling_list,
      potentially dereferencing these stale events. In 0day testing, this has
      been observed to result in kernel panics.
      
      Let's avoid this by always removing siblings from the sibling list when
      we promote them to leaders.
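      
      The one-liner, in context (sketched):
      
      	list_for_each_entry_safe(sibling, tmp, &event->sibling_list, sibling_list) {
      		sibling->group_leader = sibling;
      		list_del_init(&sibling->sibling_list);	/* the fix */
      		/* ... inherit group flags, re-insert into groups ... */
      	}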
      
      Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: vincent.weaver@maine.edu
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: torvalds@linux-foundation.org
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: valery.cherepennikov@intel.com
      Cc: linux-tip-commits@vger.kernel.org
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: alexander.shishkin@linux.intel.com
      Cc: davidcc@google.com
      Cc: kan.liang@intel.com
      Cc: Dmitry.Prohorov@intel.com
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: https://lkml.kernel.org/r/20180316131741.3svgr64yibc6vsid@lakrids.cambridge.arm.com
      24868367
    • perf: Fix sibling iteration · edb39592
      Committed by Peter Zijlstra
      Mark noticed that the change to sibling_list changed some iteration
      semantics; because previously we used group_list as list entry,
      sibling events would always have an empty sibling_list.
      
      But because we now use sibling_list for both list head and list entry,
      siblings will report as having siblings.
      
      Fix this with a custom for_each_sibling_event() iterator.
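      
      The iterator, per this change:
      
      	#define for_each_sibling_event(sibling, event)			\
      		if ((event)->group_leader == (event))			\
      			list_for_each_entry((sibling), &(event)->sibling_list, sibling_list)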
      
      Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: vincent.weaver@maine.edu
      Cc: alexander.shishkin@linux.intel.com
      Cc: torvalds@linux-foundation.org
      Cc: alexey.budankov@linux.intel.com
      Cc: valery.cherepennikov@intel.com
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: linux-tip-commits@vger.kernel.org
      Cc: davidcc@google.com
      Cc: kan.liang@intel.com
      Cc: Dmitry.Prohorov@intel.com
      Cc: jolsa@redhat.com
      Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
      edb39592
  25. 16 Mar 2018, 2 commits
    • perf/core: Clear sibling list of detached events · bbb68468
      Committed by Mark Rutland
      When perf_group_detach() is called on a group leader, it updates each
      sibling's group_leader field to point to that sibling, effectively
      upgrading each sibling to a group leader. After perf_group_detach() has
      completed, the caller may free the leader event.
      
      We only remove siblings from the group leader's sibling_list when the
      leader has a non-empty group_node. This was fine prior to commit:
      
        8343aae6 ("perf/core: Remove perf_event::group_entry")
      
      ... as the sibling's sibling_list would be empty. However, now that we
      use the sibling_list field as both the list head and the list entry,
      this leaves each sibling with a non-empty sibling list, including the
      stale leader event.
      
      If perf_group_detach() is subsequently called on a sibling, it will
      appear to be a group leader, and we'll walk the sibling_list,
      potentially dereferencing these stale events. In 0day testing, this has
      been observed to result in kernel panics.
      
      Let's avoid this by always removing siblings from the sibling list when
      we promote them to leaders.
      
      Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: vincent.weaver@maine.edu
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: torvalds@linux-foundation.org
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: valery.cherepennikov@intel.com
      Cc: linux-tip-commits@vger.kernel.org
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: alexander.shishkin@linux.intel.com
      Cc: davidcc@google.com
      Cc: kan.liang@intel.com
      Cc: Dmitry.Prohorov@intel.com
      Cc: Jiri Olsa <jolsa@redhat.com>
      Link: https://lkml.kernel.org/r/20180316131741.3svgr64yibc6vsid@lakrids.cambridge.arm.com
      bbb68468
    • perf: Fix sibling iteration · 7eb709f2
      Committed by Peter Zijlstra
      Mark noticed that the change to sibling_list changed some iteration
      semantics; because previously we used group_list as list entry,
      sibling events would always have an empty sibling_list.
      
      But because we now use sibling_list for both list head and list entry,
      siblings will report as having siblings.
      
      Fix this with a custom for_each_sibling_event() iterator.
      
      Fixes: 8343aae6 ("perf/core: Remove perf_event::group_entry")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: vincent.weaver@maine.edu
      Cc: alexander.shishkin@linux.intel.com
      Cc: torvalds@linux-foundation.org
      Cc: alexey.budankov@linux.intel.com
      Cc: valery.cherepennikov@intel.com
      Cc: eranian@google.com
      Cc: acme@redhat.com
      Cc: linux-tip-commits@vger.kernel.org
      Cc: davidcc@google.com
      Cc: kan.liang@intel.com
      Cc: Dmitry.Prohorov@intel.com
      Cc: jolsa@redhat.com
      Link: https://lkml.kernel.org/r/20180315170129.GX4043@hirez.programming.kicks-ass.net
      7eb709f2
  26. 13 Mar 2018, 2 commits
    • perf/core: Implement fast breakpoint modification via _IOC_MODIFY_ATTRIBUTES · 32ff77e8
      Committed by Milind Chabbi
      Problem and motivation: Once a breakpoint perf event (PERF_TYPE_BREAKPOINT)
      is created, there is no flexibility to change the breakpoint type
      (bp_type), breakpoint address (bp_addr), or breakpoint length (bp_len). The
      only option is to close the perf event and configure a new breakpoint
      event. This inflexibility has a significant performance overhead. For
      example, sampling-based, lightweight performance profilers (and also
      concurrency bug detection tools),  monitor different addresses for a short
      duration using PERF_TYPE_BREAKPOINT and change the address (bp_addr) to
      another address or change the kind of breakpoint (bp_type) from  "write" to
      a "read" or vice-versa or change the length (bp_len) of the address being
      monitored. The cost of these modifications is prohibitive since it involves
      unmapping the circular buffer associated with the perf event, closing the
      perf event, opening another perf event and mmaping another circular buffer.
      
      Solution: The new ioctl flag for perf events,
      PERF_EVENT_IOC_MODIFY_ATTRIBUTES, introduced in this patch takes a pointer
      to a struct perf_event_attr as an argument to update an old breakpoint
      event with a new address, type, and size. This facility allows retaining a
      previously mmaped perf event ring buffer and avoids having to close and
      reopen another perf event.
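      
      Hypothetical user-space usage (perf_fd is an already-opened
      PERF_TYPE_BREAKPOINT event, new_addr the address to monitor next):
      
      	struct perf_event_attr attr = {0};
      
      	attr.type    = PERF_TYPE_BREAKPOINT;
      	attr.bp_type = HW_BREAKPOINT_W;		/* now watch writes */
      	attr.bp_addr = (__u64)(unsigned long)new_addr;
      	attr.bp_len  = HW_BREAKPOINT_LEN_4;
      
      	/* retarget the breakpoint; no close/reopen, no re-mmap */
      	if (ioctl(perf_fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &attr) < 0)
      		perror("PERF_EVENT_IOC_MODIFY_ATTRIBUTES");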
      
      This patch supports only changing the PERF_TYPE_BREAKPOINT event type; future
      implementations can extend this feature. The patch replicates some of the
      functionality of modify_user_hw_breakpoint() in
      kernel/events/hw_breakpoint.c. modify_user_hw_breakpoint() cannot be called
      directly since perf_event_ctx_lock() is already held in _perf_ioctl().
      
      Evidence: Experiments show that the baseline (not able to modify an already
      created breakpoint) costs an order of magnitude (~10x) more than the
      suggested optimization (having the ability to dynamically modifying a
      configured breakpoint via ioctl). When the breakpoints typically do not
      trap, the speedup due to the suggested optimization is ~10x; even when the
      breakpoints always trap, the speedup is ~4x due to the suggested
      optimization.
      
      Testing: tests posted at
      https://github.com/linux-contrib/perf_event_modify_bp demonstrate the
      performance significance of this patch. Tests also check the functional
      correctness of the patch.
      Signed-off-by: Milind Chabbi <chabbi.milind@gmail.com>
      [ Using modify_user_hw_breakpoint_check function. ]
      [ Reformated PERF_EVENT_IOC_*, so the values are all in one column. ]
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-8-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      32ff77e8
    • perf/core: Move perf_event_attr::sample_max_stack into perf_copy_attr() · 5f970521
      Committed by Jiri Olsa
      Move the sample_max_stack check and setup into perf_copy_attr(),
      so we have all of the initial perf_event_attr setup in one place
      and can easily compare attrs in the new ioctl introduced
      in the following change.
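      
      Roughly, the check that moves into perf_copy_attr():
      
      	if (attr->sample_max_stack > sysctl_perf_event_max_stack)
      		return -EINVAL;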
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-7-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      5f970521
  27. 12 Mar 2018, 4 commits
    • perf/core: Fix installing cgroup events on CPU · 33801b94
      Committed by leilei.lin
      There are two problems when installing cgroup events on CPUs: firstly,
      list_update_cgroup_event() only tries to set cpuctx->cgrp for the
      first event; if that mismatches on @cgrp, we'll not try again for later
      additions.
      
      Secondly, when we install a cgroup event into an active context, we now
      only issue an event reprogram when the event matches the current cgroup
      context. This avoids pointless event reprogramming.
      Signed-off-by: leilei.lin <leilei.lin@alibaba-inc.com>
      [ Improved the changelog and comments. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: brendan.d.gregg@gmail.com
      Cc: eranian@gmail.com
      Cc: linux-kernel@vger.kernel.org
      Cc: yang_oliver@hotmail.com
      Link: http://lkml.kernel.org/r/20180306093637.28247-1-linxiulei@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      33801b94
    • perf/core: Optimize perf_rotate_context() event scheduling · 8d5bce0c
      Committed by Peter Zijlstra
      The event schedule order (as per perf_event_sched_in()) is:
      
       - cpu  pinned
       - task pinned
       - cpu  flexible
       - task flexible
      
      But perf_rotate_context() will unschedule cpu-flexible even if it
      doesn't need a rotation.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8d5bce0c
    • perf/core: Fix tree based event rotation · 8703a7cf
      Committed by Peter Zijlstra
      Similar to how first programming cpu=-1 and then cpu=# is wrong, so is
      rotating both. It was especially wrong when we were still programming
      the PMU in this same order, because in that scenario we might never
      actually end up running cpu=# events at all.
      
      Cure this by using the active_list to pick the rotation event, since
      at programming time we already select the left-most event.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      8703a7cf
    • perf/core: Simplify perf_event_groups_for_each() · 6e6804d2
      Committed by Peter Zijlstra
      The last argument is, and always must be, the same.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6e6804d2