1. 14 May 2014 (3 commits)
    • x86/traps: Make math_error() static · 5e1b05be
      Oleg Nesterov committed
      Trivial: make math_error() static.
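      The change itself is a one-line visibility fix. A sketch of its shape,
      assuming the contemporary signature in arch/x86/kernel/traps.c (the
      extern declaration in asm/traps.h is dropped along with it):

      	-void math_error(struct pt_regs *regs, int error_code, int trapnr)
      	+static void math_error(struct pt_regs *regs, int error_code, int trapnr)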
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/x86: Fix scratch register selection for rip-relative fixups · 1ea30fb6
      Denys Vlasenko committed
      Before this patch, instructions such as div, mul, shifts with a count
      in CL, and cmpxchg are mishandled.
      
      This patch adds vex prefix handling. In particular, it avoids colliding
      with the register operand encoded in the vex.vvvv field.
      
      Since we need to avoid two possible register operands, the scratch
      register must be selected from a pool of at least three registers.
      
      After looking through a lot of CPU docs, the safest choice appears to
      be SI, DI, BX. Selecting BX needs care to avoid colliding with the
      implicit use of BX by cmpxchg8b.
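      The selection logic could look something like this illustrative C
      sketch (not the exact kernel code; pick_scratch_reg() and its
      parameters are named here only for illustration):

      	/*
      	 * Pick a scratch register from SI, DI, BX that collides with
      	 * neither the ModRM.reg operand nor the vex.vvvv operand.  BX
      	 * must additionally be skipped for cmpxchg8b, which uses it
      	 * implicitly.  Register numbers follow the x86 encoding:
      	 * BX=3, SI=6, DI=7.
      	 */
      	static int pick_scratch_reg(int modrm_reg, int vex_vvvv, int is_cmpxchg8b)
      	{
      		static const int candidates[] = { 6 /* SI */, 7 /* DI */, 3 /* BX */ };
      		int i;

      		for (i = 0; i < 3; i++) {
      			int reg = candidates[i];

      			if (reg == modrm_reg || reg == vex_vvvv)
      				continue;
      			if (reg == 3 /* BX */ && is_cmpxchg8b)
      				continue;
      			return reg;
      		}
      		return -1;	/* in practice at most two candidates conflict */
      	}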
      
      Test-case:
      
      	#include <stdio.h>
      
      	static const char *const pass[] = { "FAIL", "pass" };
      
      	long two = 2;
      	void test1(void)
      	{
      		long ax = 0, dx = 0;
      		asm volatile("\n"
      	"			xor	%%edx,%%edx\n"
      	"			lea	2(%%edx),%%eax\n"
      	// We divide 2 by 2. Result (in eax) should be 1:
      	"	probe1:		.globl	probe1\n"
      	"			divl	two(%%rip)\n"
      	// If we have a bug (eax mangled on entry) the result will be 2,
      	// because eax gets restored by probe machinery.
      		: "=a" (ax), "=d" (dx) /*out*/
      		: "0" (ax), "1" (dx) /*in*/
      		: "memory" /*clobber*/
      		);
      		dprintf(2, "%s: %s\n", __func__,
      			pass[ax == 1]
      		);
      	}
      
      	long val2 = 0;
      	void test2(void)
      	{
      		long old_val = val2;
      		long ax = 0, dx = 0;
      		asm volatile("\n"
      	"			mov	val2,%%eax\n"     // eax := val2
      	"			lea	1(%%eax),%%edx\n" // edx := eax+1
      	// eax is equal to val2. cmpxchg should store edx to val2:
      	"	probe2:		.globl  probe2\n"
      	"			cmpxchg %%edx,val2(%%rip)\n"
      	// If we have a bug (eax mangled on entry), val2 will stay unchanged
      		: "=a" (ax), "=d" (dx) /*out*/
      		: "0" (ax), "1" (dx) /*in*/
      		: "memory" /*clobber*/
      		);
      		dprintf(2, "%s: %s\n", __func__,
      			pass[val2 == old_val + 1]
      		);
      	}
      
      	long val3[2] = {0,0};
      	void test3(void)
      	{
      		long old_val = val3[0];
      		long ax = 0, dx = 0;
      		asm volatile("\n"
      	"			mov	val3,%%eax\n"  // edx:eax := val3
      	"			mov	val3+4,%%edx\n"
      	"			mov	%%eax,%%ebx\n" // ecx:ebx := edx:eax + 1
      	"			mov	%%edx,%%ecx\n"
      	"			add	$1,%%ebx\n"
      	"			adc	$0,%%ecx\n"
      	// edx:eax is equal to val3. cmpxchg8b should store ecx:ebx to val3:
      	"	probe3:		.globl  probe3\n"
      	"			cmpxchg8b val3(%%rip)\n"
      	// If we have a bug (edx:eax mangled on entry), val3 will stay unchanged.
	// If ecx:ebx is mangled, val3 will get a wrong value.
      		: "=a" (ax), "=d" (dx) /*out*/
      		: "0" (ax), "1" (dx) /*in*/
      		: "cx", "bx", "memory" /*clobber*/
      		);
      		dprintf(2, "%s: %s\n", __func__,
      			pass[val3[0] == old_val + 1 && val3[1] == 0]
      		);
      	}
      
      	int main(int argc, char **argv)
      	{
      		test1();
      		test2();
      		test3();
      		return 0;
      	}
      
      Before this change, all tests fail if probe{1,2,3} are probed.
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
    • uprobes/x86: Simplify rip-relative handling · 50204c6f
      Denys Vlasenko committed
      It is possible to replace the rip-relative addressing mode with an
      addressing mode of the same length: (reg+disp32). This eliminates the
      need to fix up the immediate and to correct for a changed instruction
      length.
      
      And we can kill arch_uprobe->def.riprel_target.
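      Concretely, a rip-relative memory operand is encoded with ModRM
      mod=00, rm=101 plus a 32-bit displacement; switching to
      (scratch_reg + disp32) means mod=10 with rm naming the scratch
      register, which also carries a disp32, so the length is unchanged.
      A minimal sketch of that byte rewrite (illustrative only, not the
      kernel's actual helper):

      	/*
      	 * Rewrite a ModRM byte from rip-relative (mod=00, rm=101) to
      	 * (scratch + disp32) (mod=10, rm=scratch), preserving the reg
      	 * field in bits 5:3.  Both forms take a disp32, so the overall
      	 * instruction length stays the same.
      	 */
      	static unsigned char riprel_to_reg_disp32(unsigned char modrm, int scratch)
      	{
      		return 0x80 | (modrm & 0x38) | (scratch & 7);
      	}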
      Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
      Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
  2. 01 May 2014 (18 commits)
  3. 24 Apr 2014 (1 commit)
  4. 18 Apr 2014 (16 commits)
  5. 17 Apr 2014 (2 commits)
    • kprobes/x86: Fix page-fault handling logic · 6381c24c
      Masami Hiramatsu committed
      The current kprobes in-kernel page fault handler doesn't
      expect that its single-stepping can be interrupted by
      an NMI handler which may itself cause a page fault (e.g. perf
      with callback tracing).

      In that case, the page fault is handled by kprobes, which
      misunderstands it as having been caused by the
      single-stepping code and tries to recover the IP address
      to the probed address.

      But in truth the page fault was caused by the NMI handler,
      and do_page_fault() fails to handle the real page fault
      because the IP address has been modified, causing kernel
      BUGs like the one below.
      
       ----
       [ 2264.726905] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
       [ 2264.727190] IP: [<ffffffff813c46e0>] copy_user_generic_string+0x0/0x40
      
      To handle this correctly, I fixed the kprobes fault
      handler to ensure that the faulting IP address lies within
      its own single-step buffer, instead of relying on the
      current kprobe state.
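      A minimal sketch of the check's idea (a hypothetical helper, not the
      actual code in the x86 kprobes fault handler):

      	/*
      	 * Treat the fault as caused by single-stepping only if the
      	 * faulting IP actually lies inside this probe's out-of-line
      	 * instruction buffer, rather than trusting the current
      	 * kprobe state.
      	 */
      	static int fault_in_ss_buffer(struct kprobe *p, struct pt_regs *regs)
      	{
      		unsigned long buf = (unsigned long)p->ainsn.insn;

      		return regs->ip >= buf && regs->ip < buf + MAX_INSN_SIZE;
      	}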
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: fche@redhat.com
      Cc: systemtap@sourceware.org
      Link: http://lkml.kernel.org/r/20140417081644.26341.52351.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/mce: Fix CMCI preemption bugs · ea431643
      Ingo Molnar committed
      The following commit:
      
        27f6c573 ("x86, CMCI: Add proper detection of end of CMCI storms")
      
      added two preemption bugs:
      
       - machine_check_poll() does a get_cpu_var() without a matching
         put_cpu_var(), which causes a preemption imbalance and crashes upon
         bootup.
      
       - it does percpu ops without disabling preemption. Preemption is not
         disabled due to the mistaken use of a raw spinlock.
      
      To fix these bugs, fix the get/put imbalance and change
      cmci_discover_lock to a regular spinlock.
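      For reference, the pairing rule being restored looks like this sketch
      (with a hypothetical per-CPU variable; get_cpu_var() disables
      preemption, so each use must be matched by a put_cpu_var()):

      	static DEFINE_PER_CPU(unsigned long, poll_count);	/* hypothetical */

      	static void record_poll(void)
      	{
      		unsigned long *cnt = &get_cpu_var(poll_count);	/* preempt_disable() */

      		(*cnt)++;
      		put_cpu_var(poll_count);			/* preempt_enable() */
      	}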
      Reported-by: Owen Kibel <qmewlo@gmail.com>
      Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      Cc: Chen, Gong <gong.chen@linux.intel.com>
      Cc: Josh Boyer <jwboyer@fedoraproject.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Alexander Todorov <atodorov@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Link: http://lkml.kernel.org/n/tip-jtjptvgigpfkpvtQxpEk1at2@git.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      --
       arch/x86/kernel/cpu/mcheck/mce.c       |    4 +---
       arch/x86/kernel/cpu/mcheck/mce_intel.c |   18 +++++++++---------
       2 files changed, 10 insertions(+), 12 deletions(-)