1. 20 April 2017 — 22 commits
  2. 17 April 2017 — 1 commit
  3. 14 April 2017 — 3 commits
    • perf/x86: Fix spurious NMI with PEBS Load Latency event · fd583ad1
      Committed by Kan Liang
      Spurious NMIs will be observed with the following command:
      
        while :; do
          perf record -bae "cpu/umask=0x01,event=0xcd,ldlat=0x80/pp" \
                        -e "cpu/umask=0x03,event=0x0/" \
                        -e "cpu/umask=0x02,event=0x0/" \
                        -e cycles,branches,cache-misses \
                        -e cache-references -- sleep 10
        done
      
      The bug was introduced by commit:
      
        8077eca0 ("perf/x86/pebs: Add workaround for broken OVFL status on HSW+")
      
      That commit clears the status bits for the counters used for PEBS
      events by masking with the whole 64-bit pebs_enabled value. However,
      only the low 32 bits of both status and pebs_enabled are reserved
      for PEBS-capable counters.
      
      In the status register, bits 32-34 are the fixed-counter overflow
      bits; in pebs_enabled, bits 32-34 configure PEBS Load Latency.
      
      In the test case, the PEBS Load Latency event and fixed counter event
      could overflow at the same time. The fixed counter overflow bit will
      be cleared by mistake. Once it is cleared, the fixed-counter overflow
      is never processed, which finally triggers a spurious NMI.
      
      Correct the PEBS enabled mask by ignoring the non-PEBS bits.
      Signed-off-by: Kan Liang <kan.liang@intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: 8077eca0 ("perf/x86/pebs: Add workaround for broken OVFL status on HSW+")
      Link: http://lkml.kernel.org/r/1491333246-3965-1-git-send-email-kan.liang@intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • 18c5c7c6
    • perf/x86: Avoid exposing wrong/stale data in intel_pmu_lbr_read_32() · f2200ac3
      Committed by Peter Zijlstra
      When the perf_branch_entry::{in_tx,abort,cycles} fields were added,
      intel_pmu_lbr_read_32() wasn't updated to initialize them.
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-kernel@vger.kernel.org
      Cc: <stable@vger.kernel.org>
      Fixes: 135c5612 ("perf/x86/intel: Support Haswell/v4 LBR format")
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 13 April 2017 — 6 commits
  5. 12 April 2017 — 8 commits
    • perf tools: Pass PYTHON config to feature detection · 7be6b316
      Committed by David Carrillo-Cisneros
      ( This is a rebased version of https://lkml.org/lkml/2017/2/7/662 )
      
      Python's CC and link Makefile variables were not passed to feature
      detection, causing feature detection to use system's Python rather than
      PYTHON_CONFIG's one. This created a mismatch between the detected Python
      support and the one actually used by perf when PYTHON_CONFIG is
      specified.
      
      Fix it by moving Python's variable initialization to before feature
      detection and passing FLAGS_PYTHON_EMBED to Python's feature
      detection build target.
      Signed-off-by: David Carrillo-Cisneros <davidcc@google.com>
      Acked-by: Jiri Olsa <jolsa@kernel.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andi Kleen <ak@linux.intel.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Paul Turner <pjt@google.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Simon Que <sque@chromium.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Wang Nan <wangnan0@huawei.com>
      Link: http://lkml.kernel.org/r/20170412064919.92449-2-davidcc@google.com
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • kprobes/x86: Consolidate insn decoder users for copying code · a8d11cd0
      Committed by Masami Hiramatsu
      Consolidate x86 instruction decoder users on the path of
      copying original code for kprobes.
      
      Kprobes decodes the same instruction a maximum of 3 times when
      preparing the instruction buffer:
      
       - The first time for getting the length of the instruction,
       - the 2nd for adjusting displacement,
       - and the 3rd for checking whether the instruction is boostable or not.
      
      Each time, the actual decoding target address is slightly different
      (the 1st is the original address or the recovered instruction buffer;
      the 2nd and 3rd point to the copied buffer), but all decode the
      same instruction.
      
      Thus, this patch changes the decoding target address to the copied
      buffer from the start and reuses the decoded "insn" for displacement
      adjustment and for checking boostability.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076389643.22469.13151892839998777373.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Use probe_kernel_read() instead of memcpy() · ea1e34fc
      Committed by Masami Hiramatsu
      Use probe_kernel_read() to avoid unexpected faults while copying
      kernel text in __recover_probed_insn(), __recover_optprobed_insn()
      and __copy_instruction().
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076382624.22469.10091613887942958518.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Set kprobes pages read-only · d0381c81
      Committed by Masami Hiramatsu
      Set the pages used for the kprobes single-step buffer and the
      optprobe trampoline instruction buffer to read-only. This prevents
      unexpected (or unintended) instruction modification.
      
      This also passes rodata_test as below.
      
      Without this patch, rodata_test shows a warning:
      
        WARNING: CPU: 0 PID: 1 at arch/x86/mm/dump_pagetables.c:235 note_page+0x7a9/0xa20
        x86/mm: Found insecure W+X mapping at address ffffffffa0000000/0xffffffffa0000000
      
      With this fix, no W+X pages are found:
      
        x86/mm: Checked W+X mappings: passed, no W+X pages found.
        rodata_test: all tests were successful
      Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076375592.22469.14174394514338612247.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Make boostable flag boolean · 490154bc
      Committed by Masami Hiramatsu
      Make arch_specific_insn.boostable a boolean, since it has only two
      states: boostable or not. A boolean is better from the viewpoint of
      code readability.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076368566.22469.6322906866458231844.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Do not modify singlestep buffer while resuming · 804dec5b
      Committed by Masami Hiramatsu
      Do not modify the single-step execution buffer (kprobe.ainsn.insn)
      while resuming from single-stepping; instead, add the jump-back
      instruction to the buffer when the buffer is prepared.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076361560.22469.1610155860343077495.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Use instruction decoder for booster · 17880e4d
      Committed by Masami Hiramatsu
      Use the x86 instruction decoder to check whether the probed
      instruction can be boosted or not, instead of hand-written code.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076354563.22469.13379472209338986858.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Fix the description of __copy_instruction() · 129d17e8
      Committed by Masami Hiramatsu
      Fix the description comment of the __copy_instruction() function,
      since the function has already been changed to return the length of
      the copied instruction.
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: David S . Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ye Xiaolong <xiaolong.ye@intel.com>
      Link: http://lkml.kernel.org/r/149076347582.22469.3775133607244923462.stgit@devbox
      Signed-off-by: Ingo Molnar <mingo@kernel.org>