1. 13 March 2018, 7 commits
    • perf tests: Add breakpoint accounting/modify test · 032db28e
      Committed by Jiri Olsa
      Adding a test that:
      
        - detects the number of watch/break-points,
          skipping the test if either is missing
        - detects the PERF_EVENT_IOC_MODIFY_ATTRIBUTES ioctl,
          skipping the test if it's missing
        - detects whether watchpoints and breakpoints share
          the same slots
        - creates all possible watchpoints on cpu 0
        - changes one of them to a breakpoint
        - in case watchpoints and breakpoints do not share slots,
          creates another watchpoint to ensure the slot accounting
          is correct (a sketch of the core sequence follows)
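      A minimal user-space sketch of the sequence the test exercises is shown
      below; dummy_var and dummy_func are hypothetical stand-ins for the
      addresses the real test uses, and PERF_EVENT_IOC_MODIFY_ATTRIBUTES is
      the ioctl this series introduces:
      
        #include <linux/hw_breakpoint.h>
        #include <linux/perf_event.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        
        static long dummy_var;                  /* data address to watch */
        static void dummy_func(void) { }        /* text address to break on */
        
        int main(void)
        {
                struct perf_event_attr attr;
                int fd;
        
                /* Create a read/write watchpoint pinned to cpu 0. */
                memset(&attr, 0, sizeof(attr));
                attr.type    = PERF_TYPE_BREAKPOINT;
                attr.size    = sizeof(attr);
                attr.bp_type = HW_BREAKPOINT_RW;
                attr.bp_addr = (unsigned long)&dummy_var;
                attr.bp_len  = sizeof(long);
        
                fd = syscall(__NR_perf_event_open, &attr,
                             -1 /* pid */, 0 /* cpu */, -1 /* group */, 0);
                if (fd < 0)
                        return 2;       /* no watchpoint support: skip */
        
                /* Turn the same event into an execution breakpoint. */
                attr.bp_type = HW_BREAKPOINT_X;
                attr.bp_addr = (unsigned long)dummy_func;
        
                if (ioctl(fd, PERF_EVENT_IOC_MODIFY_ATTRIBUTES, &attr))
                        return 2;       /* ioctl not supported: skip */
        
                close(fd);
                return 0;
        }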
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-9-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Move perf_event_attr::sample_max_stack into perf_copy_attr() · 5f970521
      Committed by Jiri Olsa
      Move the sample_max_stack check and setup into perf_copy_attr(),
      so we have all perf_event_attr initial setup in one place
      and can easily compare attrs in the new ioctl introduced
      in the following change.
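      After the move, the consolidated setup amounts to roughly this fragment
      inside perf_copy_attr() - a sketch based on the changelog, with
      sysctl_perf_event_max_stack being the existing sysctl limit:
      
        /* Default and bounds-check sample_max_stack in one place: */
        if (!attr->sample_max_stack)
                attr->sample_max_stack = sysctl_perf_event_max_stack;
        
        if (attr->sample_max_stack > sysctl_perf_event_max_stack)
                return -EINVAL;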
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-7-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • hw_breakpoint: Add perf_event_attr fields check in __modify_user_hw_breakpoint() · 705feaf3
      Committed by Jiri Olsa
      And rename it to modify_user_hw_breakpoint_check().
      
      We are about to use modify_user_hw_breakpoint_check() for user-space
      breakpoint modification, so we must be strict and verify that only
      the fields we allow to change have changed. As Peter explained:
      
       "Suppose someone does:
      
              attr = malloc(sizeof(*attr)); // uninitialized memory
              attr->type = BP;
              attr->bp_addr = new_addr;
              attr->bp_type = bp_type;
              attr->bp_len = bp_len;
              ioctl(fd, PERF_IOC_MOD_ATTR, attr);
      
        And feeds absolute shite for the rest of the fields.
        Then we later want to extend IOC_MOD_ATTR to allow changing
        attr::sample_type but we can't, because that would break the
        above application."
      
      I'm making this check optional because we already export
      modify_user_hw_breakpoint() and with this check we could
      break existing users.
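      Roughly, the check can work by copying only the modifiable fields into
      the event's attr and then comparing the whole structures - a sketch of
      the idea, not the exact upstream diff:
      
        /* Inside modify_user_hw_breakpoint_check(bp, attr, check): */
        bp->attr.bp_addr = attr->bp_addr;
        bp->attr.bp_type = attr->bp_type;
        bp->attr.bp_len  = attr->bp_len;
        
        /*
         * After copying the allowed fields, any remaining difference
         * means user space modified a field we don't support changing.
         */
        if (check && memcmp(&bp->attr, attr, sizeof(*attr)))
                return -EINVAL;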
      Suggested-by: Peter Zijlstra <peterz@infradead.org>
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-6-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • hw_breakpoint: Factor out __modify_user_hw_breakpoint() function · 18ff57b2
      Committed by Jiri Olsa
      Move out all the functionality except the event disabling/enabling
      calls, because we want to call different disabling/enabling
      functions in a following change.
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-5-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • hw_breakpoint: Add modify_bp_slot() function · ea6a9d53
      Committed by Jiri Olsa
      Add the modify_bp_slot() function to keep slot numbers
      correct when changing the breakpoint type.
      
      Use the existing __release_bp_slot()/__reserve_bp_slot()
      call sequence to update the slot counts, as sketched below.
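      A sketch of the resulting helper, following the changelog's description
      (the locking wrapper is omitted):
      
        static int modify_bp_slot(struct perf_event *bp, u64 old_type)
        {
                int err;
        
                __release_bp_slot(bp, old_type);
        
                err = __reserve_bp_slot(bp, bp->attr.bp_type);
                if (err) {
                        /*
                         * Re-reserve the old_type slot in case there is
                         * no space for the new type; this must succeed,
                         * because we just released that very slot.
                         */
                        WARN_ON(__reserve_bp_slot(bp, old_type));
                }
                return err;
        }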
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-4-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • hw_breakpoint: Pass bp_type argument to __reserve_bp_slot|__release_bp_slot() · 1ad9ff7d
      Committed by Jiri Olsa
      Pass a bp_type argument to the __reserve_bp_slot() and __release_bp_slot()
      functions, so we can pass a bp_type other than the one defined in
      bp->attr.bp_type. This will be handy in a following change that fixes
      breakpoint slot counts during modification.
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-3-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • hw_breakpoint: Pass bp_type directly as find_slot_idx() argument · cbd9d9f1
      Committed by Jiri Olsa
      Pass bp_type directly as a find_slot_idx() argument,
      so we don't need the whole event to get the
      breakpoint slot type. It will be used in the following
      changes, as sketched below.
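      With the change, the helper needs only the type bits - a sketch:
      
        static int find_slot_idx(u64 bp_type)
        {
                if (bp_type & HW_BREAKPOINT_RW)
                        return TYPE_DATA;
        
                return TYPE_INST;
        }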
      Signed-off-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Hari Bathini <hbathini@linux.vnet.ibm.com>
      Cc: Jin Yao <yao.jin@linux.intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Milind Chabbi <chabbi.milind@gmail.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Oleg Nesterov <onestero@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Sukadev Bhattiprolu <sukadev@linux.vnet.ibm.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Will Deacon <will.deacon@arm.com>
      Link: http://lkml.kernel.org/r/20180312134548.31532-2-jolsa@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 12 March 2018, 10 commits
    • perf/core: Fix installing cgroup events on CPU · 33801b94
      Committed by leilei.lin
      There are two problems when installing cgroup events on CPUs: first,
      list_update_cgroup_event() only tries to set cpuctx->cgrp for the
      first event; if that mismatches on @cgrp, we'll not try again for
      later additions.
      
      Second, when we install a cgroup event into an active context, only
      issue an event reprogram when the event matches the current cgroup
      context. This avoids pointless event reprogramming.
      Signed-off-by: leilei.lin <leilei.lin@alibaba-inc.com>
      [ Improved the changelog and comments. ]
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: brendan.d.gregg@gmail.com
      Cc: eranian@gmail.com
      Cc: linux-kernel@vger.kernel.org
      Cc: yang_oliver@hotmail.com
      Link: http://lkml.kernel.org/r/20180306093637.28247-1-linxiulei@gmail.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Optimize perf_rotate_context() event scheduling · 8d5bce0c
      Committed by Peter Zijlstra
      The event schedule order (as per perf_event_sched_in()) is:
      
       - cpu  pinned
       - task pinned
       - cpu  flexible
       - task flexible
      
      But perf_rotate_context() will unschedule cpu-flexible events even if
      no rotation is needed.
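      For reference, that order in perf_event_sched_in() looks roughly like
      this - a sketch of the kernel/events/core.c logic of the time:
      
        static void perf_event_sched_in(struct perf_cpu_context *cpuctx,
                                        struct perf_event_context *ctx,
                                        struct task_struct *task)
        {
                cpu_ctx_sched_in(cpuctx, EVENT_PINNED, task);            /* cpu  pinned   */
                if (ctx)
                        ctx_sched_in(ctx, cpuctx, EVENT_PINNED, task);   /* task pinned   */
                cpu_ctx_sched_in(cpuctx, EVENT_FLEXIBLE, task);          /* cpu  flexible */
                if (ctx)
                        ctx_sched_in(ctx, cpuctx, EVENT_FLEXIBLE, task); /* task flexible */
        }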
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Fix tree based event rotation · 8703a7cf
      Committed by Peter Zijlstra
      Similar to how first programming cpu=-1 and then cpu=# is wrong, so is
      rotating both. It was especially wrong when we were still programming
      the PMU in this same order, because in that scenario we might never
      actually end up running cpu=# events at all.
      
      Cure this by using the active_list to pick the rotation event, since
      at programming time we already select the left-most event.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Simplify perf_event_groups_for_each() · 6e6804d2
      Committed by Peter Zijlstra
      The last argument is, and always must be, the same.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Optimize ctx_sched_out() · 6668128a
      Committed by Peter Zijlstra
      When an event group contains more events than can be scheduled on the
      hardware, iterating the full event group for ctx_sched_out is a waste
      of time.
      
      Keep track of the events that got programmed on the hardware, such
      that we can iterate this smaller list in order to schedule them out.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Remove perf_event::group_entry · 8343aae6
      Committed by Peter Zijlstra
      Now that all the grouping is done with RB trees, we no longer need
      group_entry and can replace the whole thing with sibling_list.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Fix event schedule order · 1cac7b1a
      Committed by Peter Zijlstra
      Scheduling in events with cpu=-1 before events with cpu=# changes
      semantics and is undesirable in that it would prioritize these events.
      
      Given that groups->index is across all groups we actually have an
      inter-group ordering, meaning we can merge-sort two groups, which is
      just what we need to preserve semantics.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Cleanup the rb-tree code · 161c85fa
      Committed by Peter Zijlstra
      Trivial comment and code fixups.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexey Budankov <alexey.budankov@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Use RB trees for pinned/flexible groups · 8e1a2031
      Committed by Alexey Budankov
      Change event groups into RB trees sorted by CPU and then by a 64-bit
      index, so that the multiplexing hrtimer interrupt handler can skip
      to the current CPU's list and ignore groups allocated for the
      other CPUs.
      
      A new API for manipulating event groups in the trees is implemented,
      along with adoption of the API in the current implementation.
      
      pinned_group_sched_in() and flexible_group_sched_in() APIs are
      introduced to consolidate the code that enables whole groups from the
      pinned and flexible lists appropriately.
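      The resulting sort order can be sketched as a comparison function like
      this (an illustration following the changelog, not the exact diff):
      
        static bool
        perf_event_groups_less(struct perf_event *left, struct perf_event *right)
        {
                /* Sort by CPU first ... */
                if (left->cpu < right->cpu)
                        return true;
                if (left->cpu > right->cpu)
                        return false;
        
                /* ... then by a monotonically increasing 64-bit group index. */
                return left->group_index < right->group_index;
        }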
      Signed-off-by: Alexey Budankov <alexey.budankov@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: David Carrillo-Cisneros <davidcc@google.com>
      Cc: Dmitri Prokhorov <Dmitry.Prohorov@intel.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Kan Liang <kan.liang@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Valery Cherepennikov <valery.cherepennikov@intel.com>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Link: http://lkml.kernel.org/r/372f9c8b-0cfe-4240-e44d-83d863d40813@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/core: Fix perf_output_read_group() · 9e5b127d
      Committed by Peter Zijlstra
      Mark reported his arm64 perf fuzzer runs sometimes splat like:
      
        armv8pmu_read_counter+0x1e8/0x2d8
        armpmu_event_update+0x8c/0x188
        armpmu_read+0xc/0x18
        perf_output_read+0x550/0x11e8
        perf_event_read_event+0x1d0/0x248
        perf_event_exit_task+0x468/0xbb8
        do_exit+0x690/0x1310
        do_group_exit+0xd0/0x2b0
        get_signal+0x2e8/0x17a8
        do_signal+0x144/0x4f8
        do_notify_resume+0x148/0x1e8
        work_pending+0x8/0x14
      
      which asserts that we only call pmu::read() on ACTIVE events.
      
      The above callchain does:
      
        perf_event_exit_task()
          perf_event_exit_task_context()
            task_ctx_sched_out() // INACTIVE
            perf_event_exit_event()
              perf_event_set_state(EXIT) // EXIT
              sync_child_event()
                perf_event_read_event()
                  perf_output_read()
                    perf_output_read_group()
                      leader->pmu->read()
      
      Which results in doing a pmu::read() on an !ACTIVE event.
      
      I _think_ this is 'new' since we added attr.inherit_stat, which added
      the perf_event_read_event() to the exit path; without that,
      perf_event_read_output() would only trigger from samples, and for
      @event to trigger a sample, its leader _must_ be ACTIVE too.
      
      Still, adding this check makes it consistent with the @sub case for
      the siblings.
      Reported-and-Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  3. 09 March 2018, 8 commits
    • Merge tag 'perf-core-for-mingo-4.17-20180308' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core · fbf8a1e1
      Committed by Ingo Molnar
      
      Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:
      
      - Support to display the IPC/Cycle in 'annotate' TUI, for systems
        where this info can be obtained, like Intel's >= Skylake (Jin Yao)
      
      - Support wildcards on PMU name in dynamic PMU events (Agustin Vega-Frias)
      
      - Display pmu name when printing unmerged events in stat (Agustin Vega-Frias)
      
      - Auto-merge PMU events created by prefix or glob match (Agustin Vega-Frias)
      
      - Fix s390 'call' operations target function annotation (Thomas Richter)
      
      - Handle s390 PC-relative load and store instructions in the augmented
        'annotate' code, used so far in the TUI modes of 'perf report' and
        'perf annotate' (Thomas Richter)
      
      - Provide libtraceevent with a kernel symbol resolver, so that
        symbols in tracepoint fields can be resolved when showing them in
        tools such as 'perf report' (Wang YanQing)
      
      - Refactor the cgroups code to look more like other code in tools/perf,
        using cgroup__{put,get} for refcount operations instead of its
        open-coded equivalent, breaking larger functions, etc (Arnaldo Carvalho de Melo)
      
      - Implement support for the -G/--cgroup target in 'perf trace', allowing
        strace like tracing (plus other events, backtraces, etc) for cgroups
        (Arnaldo Carvalho de Melo)
      
      - Update thread shortname in 'perf sched map' when the thread's COMM
        changes (Changbin Du)
      
      - Refcount 'struct mem_info', for better sharing over several
        users, avoiding duplicated structs and fixing crashes related to
        use-after-free (Jiri Olsa)
      
      - Display perf.data version, offsets in 'perf report --header' (Jiri Olsa)
      
      - Record the machine's memory topology information in a perf.data
        feature section, to be used by tools such as 'perf c2c' (Jiri Olsa)
      
      - Fix output of forced groups in the header for 'perf report' --stdio
        and --tui (Jiri Olsa)
      
      - Better support llvm, clang, cxx make tests in the build process (Jiri Olsa)
      
      - Streamline the 'struct perf_mmap' methods, storing some info in the
        struct instead of passing it via various methods, shortening its
        signatures (Kan Liang)
      
      - Update the quipper perf.data parser library site information (Stephane Eranian)
      
      - Correct perf's man pages title markers for asciidoctor (Takashi Iwai)
      
      - Intel PT fixes and refactorings paving the way for implementing
        support for AUX area sampling (Adrian Hunter)
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Disable userspace RDPMC usage for large PEBS · 1af22eba
      Committed by Kan Liang
      Userspace RDPMC cannot possibly work for large PEBS, which was introduced in:
      
        b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      
      When the PEBS interrupt threshold is larger than one, there is no way
      to get exact auto-reload times and value for userspace RDPMC.  Disable
      the userspace RDPMC usage when large PEBS is enabled.
      
      The only exception is when the PEBS interrupt threshold is 1, in which
      case user-space RDPMC works well even with auto-reload events.
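      The gate ends up in the x86 event-init path; roughly (a sketch, with
      the flag names assumed from this series):
      
        /* Only advertise user-space RDPMC when large PEBS is not in use: */
        if (READ_ONCE(x86_pmu.attr_rdpmc) &&
            !(event->hw.flags & PERF_X86_EVENT_LARGE_PEBS))
                event->hw.flags |= PERF_X86_EVENT_RDPMC_ALLOWED;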
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      Link: http://lkml.kernel.org/r/1518474035-21006-6-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix PMU read for auto-reload · ceb90d9e
      Committed by Kan Liang
      Auto-reload events need to be specially handled when reading the
      event count.
      
      Auto-reload is only available for intel_pmu.
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: b8241d20699e ("perf/x86/intel: Implement batched PEBS interrupt handling (large PEBS interrupt threshold)")
      Link: http://lkml.kernel.org/r/1518474035-21006-5-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel/ds: Introduce ->read() function for auto-reload events and flush the PEBS buffer there · 5bee2cc6
      Committed by Kan Liang
      
      There is no way to get the exact auto-reload times and values, which
      are needed for event updates, unless we flush the PEBS buffer.
      
      Introduce intel_pmu_auto_reload_read() to drain the PEBS buffer for
      auto-reload events. To prevent races with the hardware, we can only
      call drain_pebs() when the PMU is disabled, as sketched below.
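      A sketch of the new helper, per the description above:
      
        static void intel_pmu_auto_reload_read(struct perf_event *event)
        {
                WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
        
                /*
                 * drain_pebs() races with the hardware while the PMU is
                 * running, so disable the PMU around the flush.
                 */
                perf_pmu_disable(event->pmu);
                intel_pmu_drain_pebs_buffer();
                perf_pmu_enable(event->pmu);
        }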
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1518474035-21006-4-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86: Introduce a ->read() callback in 'struct x86_pmu' · bcfbe5c4
      Committed by Kan Liang
      Auto-reload needs to be specially handled when reading event counts.
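      The shape of the hook and its dispatch, sketched:
      
        /* New optional member in 'struct x86_pmu': */
                void    (*read)(struct perf_event *event);
        
        /* x86_pmu_read() prefers the vendor hook when one is set: */
        static void x86_pmu_read(struct perf_event *event)
        {
                if (x86_pmu.read)
                        x86_pmu.read(event);
                else
                        x86_perf_event_update(event);
        }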
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Link: http://lkml.kernel.org/r/1518474035-21006-3-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix event update for auto-reload · d31fc13f
      Committed by Kan Liang
      There is a bug when reading event->count with large PEBS enabled.
      
      Here is an example:
      
        # ./read_count
        0x71f0
        0x122c0
        0x1000000001c54
        0x100000001257d
        0x200000000bdc5
      
      In fixed period mode, the auto-reload mechanism could be enabled for
      PEBS events, but the calculation of event->count does not take the
      auto-reload values into account.
      
      Anyone who reads event->count will get the wrong result, e.g. via x86_pmu_read().
      
      This bug has existed since the auto-reload mechanism was enabled by
      commit:
      
        851559e3 ("perf/x86/intel: Use the PEBS auto reload mechanism when possible")
      
      Introduce intel_pmu_save_and_restart_reload() to calculate the
      event->count only for auto-reload.
      
      Since the counter increments a negative counter value and overflows on
      the sign switch, giving the interval:
      
              [-period, 0]
      
      the difference between two consecutive reads is:
      
       A) value2 - value1;
          when no overflows have happened in between,
       B) (0 - value1) + (value2 - (-period));
          when one overflow happened in between,
       C) (0 - value1) + (n - 1) * (period) + (value2 - (-period));
          when @n overflows happened in between.
      
      Here A) is the obvious difference, B) is the extension to the discrete
      interval, where the first term runs to the top of the interval and the
      second term starts from the bottom of the next interval, and C) is the
      extension to multiple intervals, where the middle term accounts for the
      whole intervals covered in between.
      
      The equation for all cases is:
      
          value2 - value1 + n * period
      
      Previously, event->count was updated right before the sample output,
      but for case A there is no PEBS record ready, so it needs to be
      specially handled.
      
      Remove the auto-reload code from x86_perf_event_set_period(), since
      we'll no longer call that function in this case.
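      Condensed, the new helper applies exactly that equation, with @count
      auto-reloads inferred from the number of drained PEBS records (a sketch
      of the described logic):
      
        static int intel_pmu_save_and_restart_reload(struct perf_event *event,
                                                     int count)
        {
                struct hw_perf_event *hwc = &event->hw;
                int shift = 64 - x86_pmu.cntval_bits;
                u64 period = hwc->sample_period;
                u64 prev_raw_count, new_raw_count;
                s64 new, old;
        
                prev_raw_count = local64_read(&hwc->prev_count);
                rdpmcl(hwc->event_base_rdpmc, new_raw_count);
                local64_set(&hwc->prev_count, new_raw_count);
        
                /* Sign-extend the raw counter values to full width ... */
                new = ((s64)(new_raw_count << shift) >> shift);
                old = ((s64)(prev_raw_count << shift) >> shift);
        
                /* ... and apply: value2 - value1 + n * period. */
                local64_add(new - old + count * period, &event->count);
        
                perf_event_update_userpage(event);
                return 0;
        }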
      
      Based-on-code-from: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Fixes: 851559e3 ("perf/x86/intel: Use the PEBS auto reload mechanism when possible")
      Link: http://lkml.kernel.org/r/1518474035-21006-2-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Properly save/restore the PMU state in the NMI handler · 82d71ed0
      Committed by Kan Liang
      The PMU is disabled in intel_pmu_handle_irq(), but cpuc->enabled is not
      updated accordingly.
      
      This is fine in current usage because no one checks it - but fix it
      for future code: for example, drain_pebs() will be modified to fix
      an auto-reload bug.
      
      Properly save/restore the old PMU state.
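      Sketched, the save/restore around the handler body:
      
        /* In intel_pmu_handle_irq(), roughly: */
                int pmu_enabled = cpuc->enabled;        /* save the PMU state */
        
                /* Keep cpuc->enabled in sync with the real hardware state: */
                cpuc->enabled = 0;
                __intel_pmu_disable_all();
        
                /* ... PMI handling, including any drain_pebs() calls ... */
        
                /* Restore, and only re-enable when it was enabled before: */
                cpuc->enabled = pmu_enabled;
                if (pmu_enabled)
                        __intel_pmu_enable_all(0, true);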
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: acme@kernel.org
      Cc: kernel test robot <fengguang.wu@intel.com>
      Link: http://lkml.kernel.org/r/6f44ee84-56f8-79f1-559b-08e371eaeb78@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/intel: Fix large period handling on Broadwell CPUs · f605cfca
      Committed by Kan Liang
      Large fixed period values could be truncated on Broadwell, for example:
      
        perf record -e cycles -c 10000000000
      
      Here the fixed period is 0x2540BE400, but the period that finally gets
      applied is 0x540BE400 - which is wrong.
      
      The reason is that x86_pmu::limit_period() uses a u32 parameter, so the
      high 32 bits of 'period' get truncated.
      
      This bug was introduced in:
      
        commit 294fe0f5 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
      
      It's safe to use u64 instead of u32:
      
       - Although 'left' is s64, the value of 'left' must be positive when
         calling limit_period().
      
       - bdw_limit_period() only modifies the lowest 6 bits; it doesn't touch
         the higher 32 bits.
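      With the widened type, the callback keeps the full 64-bit period - a
      sketch of the fixed signature and the Broadwell implementation it feeds:
      
        /* struct x86_pmu member, now u64 in and out: */
                u64     (*limit_period)(struct perf_event *event, u64 period);
        
        static u64 bdw_limit_period(struct perf_event *event, u64 left)
        {
                if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
                                X86_CONFIG(.event=0xc0, .umask=0x01)) {
                        if (left < 128)
                                left = 128;
                        left &= ~0x3fULL;       /* round down to a multiple of 64 */
                }
                return left;
        }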
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 294fe0f5 ("perf/x86/intel: Add INST_RETIRED.ALL workarounds")
      Link: http://lkml.kernel.org/r/1519926894-3520-1-git-send-email-kan.liang@linux.intel.com
      [ Rewrote unacceptably bad changelog. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 08 March 2018, 15 commits