1. 14 Jun, 2016 · 1 commit
    • kprobes/x86: Clear TF bit in fault on single-stepping · dcfc4724
      Committed by Masami Hiramatsu
      Fix kprobe_fault_handler() to clear the TF (trap flag) bit of
      the flags register in the case of a fault fixup on single-stepping.
      
      If we put a kprobe on an instruction which can cause a
      page fault (e.g. the actual mov instructions in copy_user_*),
      that fault happens in the single-stepping buffer. In this
      case, kprobes resets the running instance so that the CPU can
      retry execution at the original ip address.
      
      However, the current code forgets to reset the TF bit. Since this
      fault happens with the TF bit set to enable single-stepping,
      the retry raises a debug exception which kprobes cannot
      handle, because it has already reset itself.
      
      On most x86-64 platforms, this can easily be reproduced
      with the kprobe tracer, e.g.:
      
        # cd /sys/kernel/debug/tracing
        # echo p copy_user_enhanced_fast_string+5 > kprobe_events
        # echo 1 > events/kprobes/enable
      
      And you'll see a kernel panic on do_debug(), since the debug
      trap is not handled by kprobes.
      
      To fix this problem, we just need to clear the TF bit when
      resetting the running kprobe.
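      
      The fix itself is a single flags update in the fault-fixup path. A
      minimal sketch, assuming the KPROBE_HIT_SS/KPROBE_REENTER cases of
      kprobe_fault_handler() in arch/x86/kernel/kprobes/core.c:
      
        case KPROBE_HIT_SS:
        case KPROBE_REENTER:
        	/*
        	 * Point ip back at the probed address so the CPU retries
        	 * the faulting access as a normal page fault.
        	 */
        	regs->ip = (unsigned long)cur->addr;
        	/*
        	 * TF was set to single-step the copied instruction; clear
        	 * it, or the retry traps into do_debug() with no kprobe
        	 * left to consume the exception.
        	 */
        	regs->flags &= ~X86_EFLAGS_TF;
        	/* If TF was already set before the kprobe hit, keep it. */
        	regs->flags |= kcb->kprobe_old_flags;
        	if (kcb->kprobe_status == KPROBE_REENTER)
        		restore_previous_kprobe(kcb);
        	else
        		reset_current_kprobe();
        	break;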
      Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
      Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
      Acked-by: Steven Rostedt <rostedt@goodmis.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: systemtap@sourceware.org
      Cc: stable@vger.kernel.org # All the way back to ancient kernels
      Link: http://lkml.kernel.org/r/20160611140648.25885.37482.stgit@devbox
      [ Updated the comments. ]
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      dcfc4724
  2. 29 Apr, 2016 · 1 commit
  3. 29 Feb, 2016 · 1 commit
    • x86/kprobes: Mark kretprobe_trampoline() stack frame as non-standard · 87aaff2a
      Committed by Josh Poimboeuf
      objtool reports the following warning for kretprobe_trampoline():
      
        arch/x86/kernel/kprobes/core.o: warning: objtool: kretprobe_trampoline()+0x20: call without frame pointer save/setup
      
      kretprobes are a special case where the stack is intentionally wrong.
      The return address isn't known at the beginning of the trampoline, so
      the stack frame can't be set up properly before it calls
      trampoline_handler().
      
      Because kretprobe handlers don't sleep, the frame pointer doesn't *have*
      to be accurate in the trampoline.  So it's ok to tell objtool to ignore
      it.  This results in no actual changes to the generated code.
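      
      The annotation itself is a one-liner. A sketch, assuming the
      STACK_FRAME_NON_STANDARD() macro that <linux/frame.h> provides for
      exactly this kind of exemption:
      
        #include <linux/frame.h>
      
        /* Tell objtool to skip frame-pointer validation for the trampoline. */
        STACK_FRAME_NON_STANDARD(kretprobe_trampoline);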
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/7eaf37de52456ff822ffc86b928edb5d48a40ef1.1456719558.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      87aaff2a
  4. 24 Feb, 2016 · 1 commit
    • x86/kprobes: Get rid of kretprobe_trampoline_holder() · c1c355ce
      Committed by Josh Poimboeuf
      The kretprobe_trampoline_holder() wrapper around kretprobe_trampoline()
      isn't used anywhere and adds some unnecessary frame pointer instructions
      which never execute.  Instead, just make kretprobe_trampoline() a proper
      ELF function.
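      
      A sketch of what "a proper ELF function" means here: the trampoline
      is emitted from file-scope asm with explicit .type/.size directives,
      instead of being nested inside a dummy C holder function:
      
        asm(
        	".global kretprobe_trampoline\n"
        	".type kretprobe_trampoline, @function\n"
        	"kretprobe_trampoline:\n"
        	/* the existing trampoline body stays unchanged */
        	".size kretprobe_trampoline, .-kretprobe_trampoline\n"
        );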
      Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Bernd Petrovitsch <bernd@petrovitsch.priv.at>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Brian Gerst <brgerst@gmail.com>
      Cc: Chris J Arges <chris.j.arges@canonical.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Denys Vlasenko <dvlasenk@redhat.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Michal Marek <mmarek@suse.cz>
      Cc: Namhyung Kim <namhyung@gmail.com>
      Cc: Pedro Alves <palves@redhat.com>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: live-patching@vger.kernel.org
      Link: http://lkml.kernel.org/r/92d921b102fb865a7c254cfde9e4a0a72b9a781e.1453405861.git.jpoimboe@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      c1c355ce
  5. 18 Feb, 2016 · 1 commit
  6. 23 Mar, 2015 · 1 commit
  7. 17 Mar, 2015 · 1 commit
  8. 21 Feb, 2015 · 2 commits
    • kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() · 2a6730c8
      Committed by Petr Mladek
      __recover_probed_insn() should always be called from an address
      where an instruction starts. A check against ftrace_location()
      can help discover a potential inconsistency.
      
      This patch adds a WARN_ON() that fires when the inconsistency is
      detected, and also adds handling of the situation where the original
      code cannot be recovered.
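      
      A sketch of the added check, assuming the era's
      __recover_probed_insn() in arch/x86/kernel/kprobes/core.c:
      
        kp = get_kprobe((void *)addr);
        faddr = ftrace_location(addr);
        /*
         * addr is either not an ftrace location at all, or exactly the
         * patched instruction boundary; anything in between means the
         * caller passed an address inside an instruction.
         */
        if (WARN_ON(faddr && faddr != addr))
        	return 0UL;	/* callers treat 0 as "cannot recover" */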
      Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-3-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      2a6730c8
    • kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace · 650b7b23
      Committed by Petr Mladek
      can_probe() checks whether the given address points to the beginning
      of an instruction. It analyzes all the instructions from the
      beginning of the function up to the given address. The code
      might already be modified by another kprobe. In that case, the
      current code is read into a buffer, the int3 breakpoint is replaced
      by the saved opcode in the buffer, and can_probe() analyzes the
      buffer instead.
      
      The bug is that __recover_probed_insn() tries to restore
      the original code even for kprobes using the ftrace framework,
      but in that case the opcode is not stored. See the difference
      between arch_prepare_kprobe() and arch_prepare_kprobe_ftrace():
      the opcode is stored by arch_copy_kprobe() only when called from
      arch_prepare_kprobe().
      
      This patch makes kprobes use the ideal 5-byte NOP when the
      code can be modified by ftrace, since that is the original
      instruction there; see ftrace_make_nop() and ftrace_nop_replace().
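      
      A sketch of the core of the fix in __recover_probed_insn(), assuming
      ideal_nops[NOP_ATOMIC5] from the x86 alternatives code:
      
        if (faddr)	/* ftrace location: the original code is the ideal NOP */
        	memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
        else		/* regular kprobe: the first byte was saved at arm time */
        	buf[0] = kp->opcode;
        return (unsigned long)buf;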
      
      Note that we always need to use the NOP for ftrace locations:
      kprobes does not block ftrace, so the instruction might get
      modified at any time. It might even be in an inconsistent state,
      because it is modified step by step using the int3 breakpoint.
      
      The patch also fixes indentation of the touched comment.
      
      Note that I found this problem when playing with kprobes. I did
      it on x86_64 with gcc-4.8.3, which supports -mfentry. I modified
      samples/kprobes/kprobe_example.c and added offset 5 to put
      the probe right after the fentry area:
      
       static struct kprobe kp = {
       	.symbol_name	= "do_fork",
      +	.offset = 5,
       };
      
      Then I was able to load kprobe_example before jprobe_example
      but not the other way around:
      
        $> modprobe jprobe_example
        $> modprobe kprobe_example
        modprobe: ERROR: could not insert 'kprobe_example': Invalid or incomplete multibyte or wide character
      
      It did not make much sense and debugging pointed to the bug
      described above.
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-2-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      650b7b23
  9. 19 Feb, 2015 · 1 commit
  10. 15 Jan, 2015 · 1 commit
    • ftrace/jprobes/x86: Fix conflict between jprobes and function graph tracing · 237d28db
      Committed by Steven Rostedt (Red Hat)
      If the function graph tracer traces a jprobe callback, the system will
      crash. This can easily be demonstrated by compiling the jprobe
      sample module that is in the kernel tree, loading it and running the
      function graph tracer.
      
       # modprobe jprobe_example.ko
       # echo function_graph > /sys/kernel/debug/tracing/current_tracer
       # ls
      
      The first two commands end up in a nice crash after the first fork.
      (do_fork has a jprobe attached to it, so "ls" just triggers that fork)
      
      The problem is caused by the jprobe_return() that all jprobe callbacks
      must end with. The way jprobes works is that the function a jprobe
      is attached to has a breakpoint placed at the start of it (or it uses
      ftrace if fentry is supported). The breakpoint handler (or ftrace callback)
      will copy the stack frame and change the ip address to return to the
      jprobe handler instead of the function. The jprobe handler must end
      with jprobe_return() which swaps the stack and does an int3 (breakpoint).
      This breakpoint handler will then put back the saved stack frame,
      simulate the instruction at the beginning of the function it added
      a breakpoint to, and then continue on.
      
      For function tracing to work, it hijacks the return address from
      the stack frame and replaces it with a hook function that will trace
      the end of the call. This hook function then restores the return
      address of the function call.
      
      If the function tracer traces the jprobe handler, the hook function
      for that handler will not be called, and its saved return address
      will be used for the next function. This will result in a kernel crash.
      
      To solve this, pause function tracing before the jprobe handler is called
      and unpause it before it returns to the function it probed.
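      
      A sketch of the fix, assuming the x86 setjmp_pre_handler()/
      longjmp_break_handler() pair and the existing pause_graph_tracing()/
      unpause_graph_tracing() helpers:
      
        int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs)
        {
        	/* ... save the stack frame and registers as before ... */
      
        	/*
        	 * Keep the graph tracer away from the jprobe handler: its
        	 * return-address games would corrupt the tracer's shadow stack.
        	 */
        	pause_graph_tracing();
        	regs->ip = (unsigned long)jp->entry;
        	return 1;
        }
      
        int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
        {
        	/* ... restore the saved stack frame and registers ... */
      
        	unpause_graph_tracing();	/* handler is done via jprobe_return() */
        	return 1;
        }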
      
      Some other updates:
      
      Used a variable "saved_sp" to hold kcb->jprobe_saved_sp. This makes the
      code look a bit cleaner and easier to understand (various attempts to fix
      this bug required this change).
      
      Note, if fentry is being used, jprobes will change the ip address before
      the function graph tracer runs and it will not be able to trace the
      function that the jprobe is probing.
      
      Link: http://lkml.kernel.org/r/20150114154329.552437962@goodmis.org
      
      Cc: stable@vger.kernel.org # 2.6.30+
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
      237d28db
  11. 18 Nov, 2014 · 1 commit
    • x86: Remove arbitrary instruction size limit in instruction decoder · 6ba48ff4
      Committed by Dave Hansen
      The current x86 instruction decoder steps along through the
      instruction stream but always ensures that it never steps farther
      than the largest possible instruction size (MAX_INSN_SIZE).
      
      The MPX code is now going to be doing some decoding of userspace
      instructions.  We copy those from userspace into the kernel and
      they're obviously completely untrusted coming from userspace.  In
      addition to the constraint that instructions can only be so long,
      we also have to be aware of how long the buffer is that came in
      from userspace.  This _looks_ to be similar to what perf and
      kprobes are doing, but it's unclear to me whether they are
      affected.
      
      The whole reason we need this is that it is perfectly valid to be
      executing an instruction within MAX_INSN_SIZE bytes of an
      unreadable page. We should be able to gracefully handle short
      reads in those cases.
      
      This adds support to the decoder to record how long the buffer
      being decoded is and to refuse to "validate" the instruction if
      we would have gone over the end of the buffer to decode it.
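      
      A usage sketch of the extended decoder API (variable names are
      illustrative): insn_init() gains a buf_len parameter, and validation
      then fails for instructions that would read past the copied bytes
      (insn_complete() is the existing completeness check):
      
        struct insn insn;
      
        /* Tell the decoder how many bytes were actually copied from userspace. */
        insn_init(&insn, buf, buf_len, 1 /* 64-bit mode */);
        insn_get_length(&insn);
      
        /* The decoder now refuses to validate an insn that overruns buf. */
        if (!insn_complete(&insn))
        	return -EINVAL;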
      
      The kprobes code probably needs to be looked at here a bit more
      carefully.  This patch still respects the MAX_INSN_SIZE limit
      there but the kprobes code does look like it might be able to
      be a bit more strict than it currently is.
      Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
      Acked-by: Jim Keniston <jkenisto@us.ibm.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: x86@kernel.org
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: "David S. Miller" <davem@davemloft.net>
      Link: http://lkml.kernel.org/r/20141114153957.E6B01535@viggo.jf.intel.com
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      6ba48ff4
  12. 16 Jul, 2014 · 1 commit
  13. 24 Apr, 2014 · 5 commits
  14. 17 Apr, 2014 · 1 commit
    • kprobes/x86: Fix page-fault handling logic · 6381c24c
      Committed by Masami Hiramatsu
      The current kprobes in-kernel page fault handler doesn't
      expect that its single-stepping can be interrupted by
      an NMI handler which may itself cause a page fault (e.g. perf
      with callback tracing).
      
      In that case, the page fault is handled by kprobes, which
      misinterprets it as having been caused by the single-stepping
      code and tries to recover the IP address to the probed address.
      
      But in truth the page fault was caused by the NMI handler,
      and do_page_fault() fails to handle the real page fault
      because the IP address has been modified, causing kernel
      BUGs like below.
      
       ----
       [ 2264.726905] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
       [ 2264.727190] IP: [<ffffffff813c46e0>] copy_user_generic_string+0x0/0x40
      
      To handle this correctly, I fixed the kprobes fault
      handler to check that the faulting ip address is within
      its own single-step buffer, instead of relying on the
      current kprobe state alone.
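      
      The idea, as a minimal sketch (the helper name is hypothetical; the
      real check lives in kprobe_fault_handler() in
      arch/x86/kernel/kprobes/core.c):
      
        /*
         * Hypothetical helper: is the faulting ip inside this kprobe's
         * out-of-line single-step buffer?
         */
        static bool fault_in_ss_buffer(struct kprobe *cur, struct pt_regs *regs)
        {
        	unsigned long buf = (unsigned long)cur->ainsn.insn;
      
        	return regs->ip >= buf && regs->ip < buf + MAX_INSN_SIZE;
        }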
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Andi Kleen <andi@firstfloor.org>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Sandeepa Prabhu <sandeepa.prabhu@linaro.org>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: fche@redhat.com
      Cc: systemtap@sourceware.org
      Link: http://lkml.kernel.org/r/20140417081644.26341.52351.stgit@ltc230.yrl.intra.hitachi.co.jp
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      6381c24c
  15. 07 Aug, 2013 · 1 commit
  16. 19 Jul, 2013 · 1 commit
  17. 20 Jun, 2013 · 1 commit
  18. 08 Apr, 2013 · 1 commit
  19. 18 Mar, 2013 · 1 commit
  20. 28 Feb, 2013 · 1 commit
    • hlist: drop the node parameter from iterators · b67bfe0d
      Committed by Sasha Levin
      I'm not sure why, but the hlist for-each-entry iterators were conceived
      differently from the list ones:
      
              list_for_each_entry(pos, head, member)
      
      The hlist ones were greedy and wanted an extra parameter:
      
              hlist_for_each_entry(tpos, pos, head, member)
      
      Why did they need an extra pos parameter? I'm not quite sure. Not only
      do they not really need it, it also prevents the iterator from looking
      exactly like the list iterator, which is unfortunate.
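      
      An illustrative before/after (the struct and variables are
      hypothetical):
      
        struct foo {
        	int val;
        	struct hlist_node list;
        };
        struct foo *pos;
        struct hlist_node *n;		/* the now-redundant extra parameter */
      
        hlist_for_each_entry(pos, n, &head, list)	/* before */
        	pr_info("%d\n", pos->val);
      
        hlist_for_each_entry(pos, &head, list)	/* after */
        	pr_info("%d\n", pos->val);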
      
      Besides the semantic patch, there was some manual work required:
      
       - Fix up the actual hlist iterators in linux/list.h
       - Fix up the declaration of other iterators based on the hlist ones.
       - A very small number of places were using the 'node' parameter; these
       were modified to use 'obj->member' instead.
       - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
       properly, so those had to be fixed up manually.
      
      The semantic patch which is mostly the work of Peter Senna Tschudin is here:
      
      @@
      iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;
      
      type T;
      expression a,c,d,e;
      identifier b;
      statement S;
      @@
      
      -T b;
          <+... when != b
      (
      hlist_for_each_entry(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue(a,
      - b,
      c) S
      |
      hlist_for_each_entry_from(a,
      - b,
      c) S
      |
      hlist_for_each_entry_rcu(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_rcu_bh(a,
      - b,
      c, d) S
      |
      hlist_for_each_entry_continue_rcu_bh(a,
      - b,
      c) S
      |
      for_each_busy_worker(a, c,
      - b,
      d) S
      |
      ax25_uid_for_each(a,
      - b,
      c) S
      |
      ax25_for_each(a,
      - b,
      c) S
      |
      inet_bind_bucket_for_each(a,
      - b,
      c) S
      |
      sctp_for_each_hentry(a,
      - b,
      c) S
      |
      sk_for_each(a,
      - b,
      c) S
      |
      sk_for_each_rcu(a,
      - b,
      c) S
      |
      sk_for_each_from
      -(a, b)
      +(a)
      S
      + sk_for_each_from(a) S
      |
      sk_for_each_safe(a,
      - b,
      c, d) S
      |
      sk_for_each_bound(a,
      - b,
      c) S
      |
      hlist_for_each_entry_safe(a,
      - b,
      c, d, e) S
      |
      hlist_for_each_entry_continue_rcu(a,
      - b,
      c) S
      |
      nr_neigh_for_each(a,
      - b,
      c) S
      |
      nr_neigh_for_each_safe(a,
      - b,
      c, d) S
      |
      nr_node_for_each(a,
      - b,
      c) S
      |
      nr_node_for_each_safe(a,
      - b,
      c, d) S
      |
      - for_each_gfn_sp(a, c, d, b) S
      + for_each_gfn_sp(a, c, d) S
      |
      - for_each_gfn_indirect_valid_sp(a, c, d, b) S
      + for_each_gfn_indirect_valid_sp(a, c, d) S
      |
      for_each_host(a,
      - b,
      c) S
      |
      for_each_host_safe(a,
      - b,
      c, d) S
      |
      for_each_mesh_entry(a,
      - b,
      c, d) S
      )
          ...+>
      
      [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
      [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
      [akpm@linux-foundation.org: checkpatch fixes]
      [akpm@linux-foundation.org: fix warnings]
      [akpm@linux-foundation.org: redo intrusive kvm changes]
      Tested-by: Peter Senna Tschudin <peter.senna@gmail.com>
      Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
      Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
      Cc: Wu Fengguang <fengguang.wu@intel.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      b67bfe0d
  21. 22 Jan, 2013 · 2 commits
  22. 20 Sep, 2012 · 1 commit
  23. 14 Sep, 2012 · 2 commits
  24. 31 Jul, 2012 · 1 commit
  25. 09 May, 2012 · 1 commit
  26. 06 Mar, 2012 · 3 commits
    • x86/kprobes: Split out optprobe related code to kprobes-opt.c · 3f33ab1c
      Committed by Masami Hiramatsu
      Split out the optprobe-related code to arch/x86/kernel/kprobes-opt.c
      for maintainability.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Suggested-by: Ingo Molnar <mingo@elte.hu>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: systemtap@sourceware.org
      Cc: anderson@redhat.com
      Link: http://lkml.kernel.org/r/20120305133222.5982.54794.stgit@localhost.localdomain
      [ Tidied up the code a tiny bit ]
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      3f33ab1c
    • x86/kprobes: Fix a bug which can modify kernel code permanently · 46484688
      Committed by Masami Hiramatsu
      Fix a bug in kprobes which can modify kernel code
      permanently at run-time. As a result, the kernel can
      crash when it executes the modified code.
      
      This bug can happen when we put two probes close enough
      together and the first probe is optimized. When the second
      probe is set up, it copies a byte which has already been
      modified by the first probe, and executes it when the probe
      is hit. Even worse, when the first probe and then the second
      probe are removed, the second probe writes back the copied
      (modified) instruction.
      
      To fix this bug, kprobes now always recovers the original
      code and copies the first byte from the recovered instruction.
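      
      A sketch of the resulting logic in arch_copy_kprobe(), assuming the
      era's __copy_instruction() helper, which now recovers the original
      code first:
      
        /* Copy the recovered (de-int3'd) instruction into the buffer... */
        if (!__copy_instruction(p->ainsn.insn, p->addr))
        	return -EINVAL;
      
        /*
         * ...and save the first byte from that recovered copy, never from
         * the live kernel text, which may carry another probe's bytes.
         */
        p->opcode = p->ainsn.insn[0];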
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: systemtap@sourceware.org
      Cc: anderson@redhat.com
      Link: http://lkml.kernel.org/r/20120305133215.5982.31991.stgit@localhost.localdomain
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      46484688
    • x86/kprobes: Fix instruction recovery on optimized path · 86b4ce31
      Committed by Masami Hiramatsu
      Current probed-instruction recovery expects that only the
      breakpoint instruction modifies kernel text. However, since
      kprobes jump optimization can replace original instructions
      with a jump, that expectation is not sufficient, and it may
      cause instruction-decoding failures on a function where an
      optimized probe already exists.
      
      This bug can be reproduced easily as below:
      
      1) find a target function address (any kprobe-able function is OK)
      
       $ grep __secure_computing /proc/kallsyms
         ffffffff810c19d0 T __secure_computing
      
      2) decode the function
         $ objdump -d vmlinux --start-address=0xffffffff810c19d0 --stop-address=0xffffffff810c19eb
      
        vmlinux:     file format elf64-x86-64
      
      Disassembly of section .text:
      
      ffffffff810c19d0 <__secure_computing>:
      ffffffff810c19d0:       55                      push   %rbp
      ffffffff810c19d1:       48 89 e5                mov    %rsp,%rbp
      ffffffff810c19d4:       e8 67 8f 72 00          callq  ffffffff817ea940 <mcount>
      ffffffff810c19d9:       65 48 8b 04 25 40 b8    mov    %gs:0xb840,%rax
      ffffffff810c19e0:       00 00
      ffffffff810c19e2:       83 b8 88 05 00 00 01    cmpl $0x1,0x588(%rax)
      ffffffff810c19e9:       74 05                   je     ffffffff810c19f0 <__secure_computing+0x20>
      
      3) put a kprobe-event at an optimizable place, where no
       call/jump instruction falls within the 5 bytes.
       $ su -
       # cd /sys/kernel/debug/tracing
       # echo p __secure_computing+0x9 > kprobe_events
      
      4) enable it and check it is optimized.
       # echo 1 > events/kprobes/p___secure_computing_9/enable
       # cat ../kprobes/list
       ffffffff810c19d9  k  __secure_computing+0x9    [OPTIMIZED]
      
      5) put another kprobe on an instruction after the previous probe in
        the same function.
       # echo p __secure_computing+0x12 >> kprobe_events
       bash: echo: write error: Invalid argument
       # dmesg | tail -n 1
       [ 1666.500016] Probing address(0xffffffff810c19e2) is not an instruction boundary.
      
      6) however, if the kprobes optimization is disabled, it works.
       # echo 0 > /proc/sys/debug/kprobes-optimization
       # cat ../kprobes/list
       ffffffff810c19d9  k  __secure_computing+0x9
       # echo p __secure_computing+0x12 >> kprobe_events
       (no error)
      
      This is because kprobes doesn't recover an instruction
      which has been overwritten with a relative jump by another
      kprobe when finding instruction boundaries;
      it only recovers breakpoint instructions.
      
      This patch fixes kprobes to recover such instructions.
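      
      A sketch of the recovery step, with field names as in that era's
      struct optimized_kprobe: rebuild the bytes that the optimized probe's
      5-byte relative jump overwrote:
      
        /* op's jump covers [op->kp.addr, op->kp.addr + RELATIVEJUMP_SIZE) */
        memcpy(buf, op->kp.addr, RELATIVEJUMP_SIZE);
        buf[0] = op->kp.opcode;		/* byte 0, saved at arm time */
        memcpy(buf + 1, op->optinsn.copied_insn,
               RELATIVE_ADDR_SIZE);		/* bytes 1..4, saved at optimize time */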
      
      With this fix:
      
       # echo p __secure_computing+0x9 > kprobe_events
       # echo 1 > events/kprobes/p___secure_computing_9/enable
       # cat ../kprobes/list
       ffffffff810c1aa9  k  __secure_computing+0x9    [OPTIMIZED]
       # echo p __secure_computing+0x12 >> kprobe_events
       # cat ../kprobes/list
       ffffffff810c1aa9  k  __secure_computing+0x9    [OPTIMIZED]
       ffffffff810c1ab2  k  __secure_computing+0x12    [DISABLED]
      
      Changes in v4:
       - Fix a bug to ensure optimized probe is really optimized
         by jump.
       - Remove kprobe_optready() dependency.
       - Cleanup code for preparing optprobe separation.
      
      Changes in v3:
       - Fix a build error when CONFIG_OPTPROBE=n. (Thanks, Ingo!)
         To fix the error, split optprobe instruction recovering
         path from kprobes path.
       - Cleanup comments/styles.
      
      Changes in v2:
       - Fix a bug to recover original instruction address in
         RIP-relative instruction fixup.
       - Moved on tip/master.
      Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: yrl.pp-manager.tt@hitachi.com
      Cc: systemtap@sourceware.org
      Cc: anderson@redhat.com
      Link: http://lkml.kernel.org/r/20120305133209.5982.36568.stgit@localhost.localdomain
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      86b4ce31
  27. 25 Oct, 2011 · 1 commit
    • x86: Fix compilation bug in kprobes' twobyte_is_boostable · 315eb8a2
      Committed by Josh Stone
      When compiling an i386_defconfig kernel with gcc-4.6.1-9.fc15.i686, I
      noticed a warning about the asm operand for test_bit in kprobes'
      can_boost.  I discovered that this caused only the first long of
      twobyte_is_boostable[] to be output.
      
      Jakub filed and fixed gcc PR50571 to correct the warning and this output
      issue.  But to solve it for older gcc versions, we can make kprobes'
      twobyte_is_boostable[] non-const, and it won't be optimized out.
      
      Before:
      
          CC      arch/x86/kernel/kprobes.o
        In file included from include/linux/bitops.h:22:0,
                         from include/linux/kernel.h:17,
                         from [...]/arch/x86/include/asm/percpu.h:44,
                         from [...]/arch/x86/include/asm/current.h:5,
                         from [...]/arch/x86/include/asm/processor.h:15,
                         from [...]/arch/x86/include/asm/atomic.h:6,
                         from include/linux/atomic.h:4,
                         from include/linux/mutex.h:18,
                         from include/linux/notifier.h:13,
                         from include/linux/kprobes.h:34,
                         from arch/x86/kernel/kprobes.c:43:
        [...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
        [...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input
              without lvalue in asm operand 1 is deprecated [enabled by default]
      
        $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
             551:	0f a3 05 00 00 00 00 	bt     %eax,0x0
                                554: R_386_32	.rodata.cst4
      
        $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
      
        arch/x86/kernel/kprobes.o:     file format elf32-i386
      
        Contents of section .data:
         0000 48000000 00000000 00000000 00000000  H...............
        Contents of section .rodata.cst4:
         0000 4c030000                             L...
      
      Only a single long of twobyte_is_boostable[] is in the object file.
      
      After, without the const on twobyte_is_boostable:
      
        $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
             551:	0f a3 05 20 00 00 00 	bt     %eax,0x20
                                554: R_386_32	.data
      
        $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
      
        arch/x86/kernel/kprobes.o:     file format elf32-i386
      
        Contents of section .data:
         0000 48000000 00000000 00000000 00000000  H...............
         0010 00000000 00000000 00000000 00000000  ................
         0020 4c030000 0f000200 ffff0000 ffcff0c0  L...............
         0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77  ....;.......&..w
      
      Now all 32 bytes are output into .data instead.
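      
      The fix itself is a one-word change; a diff-style sketch (the
      initializer is the existing 256-bit opcode bitmap):
      
        -static const u32 twobyte_is_boostable[256 / 32] = {
        +static u32 twobyte_is_boostable[256 / 32] = {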
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Cc: stable@kernel.org
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      315eb8a2
  28. 18 Oct, 2011 · 1 commit
    • x86, perf, kprobes: Make kprobes's twobyte_is_boostable volatile · db45bd90
      Committed by Josh Stone
      When compiling an i386_defconfig kernel with
      gcc-4.6.1-9.fc15.i686, I noticed a warning about the asm operand
      for test_bit in kprobes' can_boost. I discovered that this
      caused only the first long of twobyte_is_boostable[] to be
      output.
      
      Jakub filed and fixed gcc PR50571 to correct the warning and
      this output issue.  But to solve it for older gcc versions, we
      can make kprobes' twobyte_is_boostable[] volatile, and it won't
      be optimized out.
      
      Before:
      
          CC      arch/x86/kernel/kprobes.o
        In file included from include/linux/bitops.h:22:0,
                         from include/linux/kernel.h:17,
                         from [...]/arch/x86/include/asm/percpu.h:44,
                         from [...]/arch/x86/include/asm/current.h:5,
                         from [...]/arch/x86/include/asm/processor.h:15,
                         from [...]/arch/x86/include/asm/atomic.h:6,
                         from include/linux/atomic.h:4,
                         from include/linux/mutex.h:18,
                         from include/linux/notifier.h:13,
                         from include/linux/kprobes.h:34,
                         from arch/x86/kernel/kprobes.c:43:
        [...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
        [...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input without lvalue in asm operand 1 is deprecated [enabled by default]
      
        $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
             551:	0f a3 05 00 00 00 00 	bt     %eax,0x0
                                554: R_386_32	.rodata.cst4
      
        $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
      
        arch/x86/kernel/kprobes.o:     file format elf32-i386
      
        Contents of section .data:
         0000 48000000 00000000 00000000 00000000  H...............
        Contents of section .rodata.cst4:
         0000 4c030000                             L...
      
      Only a single long of twobyte_is_boostable[] is in the object
      file.
      
      After, with volatile:
      
        $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
             551:	0f a3 05 20 00 00 00 	bt     %eax,0x20
                                554: R_386_32	.data
      
        $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o
      
        arch/x86/kernel/kprobes.o:     file format elf32-i386
      
        Contents of section .data:
         0000 48000000 00000000 00000000 00000000  H...............
         0010 00000000 00000000 00000000 00000000  ................
         0020 4c030000 0f000200 ffff0000 ffcff0c0  L...............
         0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77  ....;.......&..w
      
      Now all 32 bytes are output into .data instead.
      Signed-off-by: Josh Stone <jistone@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
      Cc: Jakub Jelinek <jakub@redhat.com>
      Link: http://lkml.kernel.org/r/1318899645-4068-1-git-send-email-jistone@redhat.com
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      db45bd90
  29. 11 May, 2011 · 1 commit
  30. 09 Mar, 2011 · 1 commit
    • kprobes: Disabling optimized kprobes for entry text section · 2a8247a2
      Committed by Jiri Olsa
      You can crash the kernel (with root/admin privileges) using the kprobe tracer by running:
      
       echo "p system_call_after_swapgs" > ./kprobe_events
       echo 1 > ./events/kprobes/enable
      
      The reason is that at the system_call_after_swapgs label, the
      kernel stack is not yet set up. If optimized kprobes are enabled,
      the user-space stack is used in this case (see the optimized
      kprobe template), and this might result in a crash.
      
      There are several places like this across the entry code
      (entry_$BIT). As there seems to be no reasonable/maintainable
      way to disable only those places where the stack is not ready, I
      switched off the whole entry code from kprobe optimizing.
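      
      A sketch of the check added to can_optimize(), assuming the
      __entry_text_start/__entry_text_end section markers introduced
      alongside it:
      
        /*
         * Do not optimize probes in the entry code: the stack may not be
         * set up yet, and the optprobe template relies on a sane stack.
         */
        if (paddr >= (unsigned long)__entry_text_start &&
            paddr < (unsigned long)__entry_text_end)
        	return 0;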
      Signed-off-by: Jiri Olsa <jolsa@redhat.com>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: acme@redhat.com
      Cc: fweisbec@gmail.com
      Cc: ananth@in.ibm.com
      Cc: davem@davemloft.net
      Cc: a.p.zijlstra@chello.nl
      Cc: eric.dumazet@gmail.com
      Cc: 2nddept-manager@sdl.hitachi.co.jp
      LKML-Reference: <1298298313-5980-3-git-send-email-jolsa@redhat.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
      2a8247a2
  31. 17 Dec, 2010 · 1 commit