1. 07 Mar 2015, 2 commits
  2. 06 Mar 2015, 6 commits
  3. 05 Mar 2015, 14 commits
  4. 04 Mar 2015, 1 commit
  5. 28 Feb 2015, 1 commit
  6. 26 Feb 2015, 1 commit
  7. 25 Feb 2015, 2 commits
  8. 24 Feb 2015, 1 commit
    • x86/xen: allow privcmd hypercalls to be preempted · fdfd811d
      Authored by David Vrabel
      Hypercalls submitted by user space tools via the privcmd driver can
      take a long time (potentially many 10s of seconds) if the hypercall
      has many sub-operations.
      
      A fully preemptible kernel may deschedule such a task in any upcall
      called from a hypercall continuation.
      
      However, in a kernel with voluntary or no preemption, hypercall
      continuations in Xen allow event handlers to be run but the task
      issuing the hypercall will not be descheduled until the hypercall is
      complete and the ioctl returns to user space.  These long-running
      tasks may also trigger the kernel's soft lockup detection.
      
      Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
      bracket hypercalls that may be preempted.  Use these in the privcmd
      driver.
      
      When returning from an upcall, call xen_maybe_preempt_hcall() which
      adds a schedule point if the current task was within a preemptible
      hypercall.
      
      Since _cond_resched() can move the task to a different CPU, clear and
      set xen_in_preemptible_hcall around the call.
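
      A rough sketch of the resulting flow (simplified; the privcmd call
      shape and per-CPU accessors here are illustrative assumptions, not
      the exact kernel code):

        /* privcmd ioctl path: bracket the long-running hypercall. */
        xen_preemptible_hcall_begin();
        ret = privcmd_call(hypercall.op,
                           hypercall.arg[0], hypercall.arg[1],
                           hypercall.arg[2], hypercall.arg[3],
                           hypercall.arg[4]);
        xen_preemptible_hcall_end();

        /* On return from an upcall: add the schedule point. */
        asmlinkage void xen_maybe_preempt_hcall(void)
        {
                if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
                             && need_resched())) {
                        /*
                         * _cond_resched() may move us to a different CPU,
                         * so clear the per-CPU flag around the call.
                         */
                        __this_cpu_write(xen_in_preemptible_hcall, false);
                        _cond_resched();
                        __this_cpu_write(xen_in_preemptible_hcall, true);
                }
        }
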
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
  9. 23 Feb 2015, 6 commits
    • x86/asm: Cleanup prefetch primitives · a930dc45
      Authored by Borislav Petkov
      This is based on a patch originally by hpa.
      
      With the current improvements to the alternatives, we can simply use %P1
      as a mem8 operand constraint and rely on the toolchain to generate the
      proper instruction sizes. For example, on 32-bit, where we use an empty
      old instruction we get:
      
        apply_alternatives: feat: 6*32+8, old: (c104648b, len: 4), repl: (c195566c, len: 4)
        c104648b: alt_insn: 90 90 90 90
        c195566c: rpl_insn: 0f 0d 4b 5c
      
        ...
      
        apply_alternatives: feat: 6*32+8, old: (c18e09b4, len: 3), repl: (c1955948, len: 3)
        c18e09b4: alt_insn: 90 90 90
        c1955948: rpl_insn: 0f 0d 08
      
        ...
      
        apply_alternatives: feat: 6*32+8, old: (c1190cf9, len: 7), repl: (c1955a79, len: 7)
        c1190cf9: alt_insn: 90 90 90 90 90 90 90
        c1955a79: rpl_insn: 0f 0d 0d a0 d4 85 c1
      
      all with the proper padding done depending on the size of the
      replacement instruction the compiler generates.
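
      The resulting pattern looks roughly like this (a sketch; on 32-bit,
      BASE_PREFETCH is the empty string, which yields the all-NOP old
      instruction seen above):

        static inline void prefetchw(const void *x)
        {
                alternative_input(BASE_PREFETCH, "prefetchw %P1",
                                  X86_FEATURE_3DNOWPREFETCH,
                                  "m" (*(const char *)x));
        }
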
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
    • x86/entry_32: Convert X86_INVD_BUG to ALTERNATIVE macro · 8e65f6e0
      Authored by Borislav Petkov
      Booting a 486 kernel on an AMD guest with this patch applied says:
      
        apply_alternatives: feat: 0*32+25, old: (c160a475, len: 5), repl: (c19557d4, len: 5)
        c160a475: alt_insn: 68 10 35 00 c1
        c19557d4: rpl_insn: 68 80 39 00 c1
      
      which is:
      
        old insn VA: 0xc160a475, CPU feat: X86_FEATURE_XMM, size: 5
        simd_coprocessor_error:
                 c160a475:      68 10 35 00 c1          push $0xc1003510 <do_general_protection>
        repl insn: 0xc19557d4, size: 5
                 c160a475:      68 80 39 00 c1          push $0xc1003980 <do_simd_coprocessor_error>
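
      The conversion itself boils down to a single invocation of the asm
      ALTERNATIVE macro of roughly this shape (a sketch; the surrounding
      entry_32.S context is elided):

        # AMD 486 bug: invd from userspace raises exception 19 rather than #GP
        ALTERNATIVE "pushl $do_general_protection", \
                    "pushl $do_simd_coprocessor_error", \
                    X86_FEATURE_XMM
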
      Signed-off-by: Borislav Petkov <bp@suse.de>
    • x86/alternatives: Use optimized NOPs for padding · 4fd4b6e5
      Authored by Borislav Petkov
      Alternatives now allow for an empty old instruction. In this case we
      pad the space with NOPs at assembly time. However, the optimal, longer
      NOPs should be used instead. Do that at patching time by adding
      alt_instr.padlen-sized NOPs at the old instruction address.
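
      A minimal sketch of that patching-time step (close to, but not
      exactly, the kernel code; debug output is elided):

        static void optimize_nops(struct alt_instr *a, u8 *instr)
        {
                /* Only touch sites that were padded with single-byte NOPs. */
                if (instr[0] != 0x90)
                        return;

                /* Rewrite the padlen-sized tail with optimal, longer NOPs. */
                add_nops(instr + (a->instrlen - a->padlen), a->padlen);

                text_poke_early(instr, instr, a->instrlen);
        }
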
      
      Cc: Andy Lutomirski <luto@amacapital.net>
      Signed-off-by: Borislav Petkov <bp@suse.de>
    • x86/alternatives: Make JMPs more robust · 48c7a250
      Authored by Borislav Petkov
      Up until now we had to pay attention to how the relative offset of
      relative JMPs in alternatives gets computed, so that the jump target
      remains correct. And, as is the case for near CALLs (opcode e8), we
      still have to go and readjust the offset at patching time.
      
      What is more, the static_cpu_has_safe() facility had to forcefully
      generate 5-byte JMPs since we couldn't rely on the compiler to generate
      properly sized ones, so we had to force the longest ones. Worse than
      that, sometimes it would generate a replacement JMP which is longer than
      the original one, thus overwriting the beginning of the next instruction
      at patching time.
      
      So, in order to alleviate all that and make using JMPs more
      straightforward, we pad the original instruction in an alternative
      block with NOPs at build time, should the replacement(s) be longer.
      This way, alternatives users don't have to pay special attention to
      matching original and replacement instruction sizes; the assembler
      simply adds padding where needed and does nothing otherwise.
      
      As a second aspect, we go and recompute JMPs at patching time so that we
      can try to make 5-byte JMPs into two-byte ones if possible. If not, we
      still have to recompute the offsets as the replacement JMP gets put far
      away in the .altinstr_replacement section, leading to a wrong offset if
      copied verbatim.
      
      For example, on a locally generated kernel image
      
        old insn VA: 0xffffffff810014bd, CPU feat: X86_FEATURE_ALWAYS, size: 2
        __switch_to:
         ffffffff810014bd:      eb 21                   jmp ffffffff810014e0
        repl insn: size: 5
        ffffffff81d0b23c:       e9 b1 62 2f ff          jmpq ffffffff810014f2
      
      gets corrected to a 2-byte JMP:
      
        apply_alternatives: feat: 3*32+21, old: (ffffffff810014bd, len: 2), repl: (ffffffff81d0b23c, len: 5)
        alt_insn: e9 b1 62 2f ff
        recompute_jumps: next_rip: ffffffff81d0b241, tgt_rip: ffffffff810014f2, new_displ: 0x00000033, ret len: 2
        converted to: eb 33 90 90 90
      
      and a 5-byte JMP:
      
        old insn VA: 0xffffffff81001516, CPU feat: X86_FEATURE_ALWAYS, size: 2
        __switch_to:
         ffffffff81001516:      eb 30                   jmp ffffffff81001548
        repl insn: size: 5
         ffffffff81d0b241:      e9 10 63 2f ff          jmpq ffffffff81001556
      
      gets shortened into a two-byte one:
      
        apply_alternatives: feat: 3*32+21, old: (ffffffff81001516, len: 2), repl: (ffffffff81d0b241, len: 5)
        alt_insn: e9 10 63 2f ff
        recompute_jumps: next_rip: ffffffff81d0b246, tgt_rip: ffffffff81001556, new_displ: 0x0000003e, ret len: 2
        converted to: eb 3e 90 90 90
      
      ... and so on.
      
      This leads to a net win of around
      
      40ish replacements * 3 bytes savings =~ 120 bytes of I$
      
      on an AMD guest, which means some savings of precious instruction
      cache bandwidth. The padding of the shorter 2-byte JMPs consists of
      single-byte NOPs, which smart microarchitectures discard at decode
      time, thus freeing up execution bandwidth.
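
      The displacement recomputation can be sketched like this (simplified;
      the variable names are illustrative and error handling is elided):

        /*
         * insnbuf holds a copied 5-byte "e9 rel32" JMP from the
         * .altinstr_replacement section; orig_insn is the patch site.
         */
        s32 o_dspl = *(s32 *)(insnbuf + 1);
        u8 *next_rip = repl_insn + 5;          /* rip after the repl JMP */
        u8 *tgt_rip  = next_rip + o_dspl;      /* absolute jump target   */
        s32 n_dspl   = tgt_rip - (orig_insn + 2);

        if (n_dspl >= -128 && n_dspl <= 127) {
                insnbuf[0] = 0xeb;             /* short JMP rel8 */
                insnbuf[1] = (s8)n_dspl;
                add_nops(insnbuf + 2, 3);      /* NOP-pad the 3 leftover bytes */
        } else {
                /* Keep the 5-byte JMP but fix up its rel32 displacement. */
                *(s32 *)(insnbuf + 1) = tgt_rip - (orig_insn + 5);
        }
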
      Signed-off-by: Borislav Petkov <bp@suse.de>
    • x86/alternatives: Add instruction padding · 4332195c
      Authored by Borislav Petkov
      Up until now we have always paid attention to make sure the length of
      the new instruction replacing the old one is less than or equal to the
      length of the old instruction. If the new instruction is longer, at
      the time it replaces the old instruction it will overwrite the beginning
      of the next instruction in the kernel image and cause your pants to
      catch fire.
      
      So instead of having to pay attention, teach the alternatives framework
      to pad shorter old instructions with NOPs at build time - but only in the
      case when
      
        len(old instruction(s)) < len(new instruction(s))
      
      and add nothing in the >= case. (In that case we do add_nops() when
      patching).
      
      This way the alternatives user doesn't have to care about instruction
      sizes and can simply use the macros.
      
      Add asm ALTERNATIVE* flavor macros too, while at it.
      
      Also, we need to save the pad length in a separate struct alt_instr
      member for NOP optimization and the way to do that reliably is to carry
      the pad length instead of trying to detect whether we're looking at
      single-byte NOPs or at pathological instruction offsets like e9 90 90 90
      90, for example, which is a valid instruction.
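
      The build-time padding boils down to a .skip directive of roughly
      this shape (a simplified sketch of the macro machinery; alt_rlen and
      alt_slen are assembler-label length expressions for the replacement
      and the old instruction, and the multi-replacement variants are
      elided):

        /*
         * Pad the old instruction with (rlen - slen) 0x90 bytes, but only
         * when the replacement is longer; the negated-boolean .skip count
         * evaluates to zero otherwise.
         */
        #define __OLDINSTR(oldinstr, num)                               \
                "661:\n\t" oldinstr "\n662:\n"                          \
                ".skip -(((" alt_rlen(num) ")-(" alt_slen ")) > 0) * "  \
                        "((" alt_rlen(num) ")-(" alt_slen ")),0x90\n"
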
      
      Thanks to Michael Matz for the great help with toolchain questions.
      Signed-off-by: Borislav Petkov <bp@suse.de>
    • x86/alternatives: Cleanup DPRINTK macro · db477a33
      Authored by Borislav Petkov
      Make it pass __func__ implicitly. Also, dump info about each
      replacement we're doing. Fix up comments and style while at it.
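
      The resulting macro is roughly (a sketch; the debug_alternative
      toggle already exists in that file):

        #define DPRINTK(fmt, args...)                                   \
        do {                                                            \
                if (debug_alternative)                                  \
                        printk(KERN_DEBUG "%s: " fmt "\n",              \
                               __func__, ##args);                       \
        } while (0)
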
      Signed-off-by: Borislav Petkov <bp@suse.de>
  10. 22 Feb 2015, 1 commit
  11. 21 Feb 2015, 2 commits
    • kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() · 2a6730c8
      Authored by Petr Mladek
      __recover_probed_insn() should always be called from an address
      where an instruction starts. The check for ftrace_location()
      might help to discover a potential inconsistency.
      
      This patch adds a WARN_ON() when the inconsistency is detected.
      It also handles the situation when the original code cannot be
      recovered.
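
      A sketch of the added check (simplified from the description above;
      the surrounding function body is elided):

        kp = get_kprobe((void *)addr);
        faddr = ftrace_location(addr);
        /*
         * Addresses inside an ftrace location should have been refused
         * earlier; something went badly wrong if one shows up here.
         */
        if (WARN_ON(faddr && faddr != addr))
                return 0UL;     /* original code cannot be recovered */
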
      Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-3-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace · 650b7b23
      Authored by Petr Mladek
      can_probe() checks if the given address points to the beginning
      of an instruction. It analyzes all the instructions from the
      beginning of the function until the given address. The code
      might be modified by another Kprobe. In this case, the current
      code is read into a buffer, the int3 breakpoint is replaced by the
      saved opcode in the buffer, and can_probe() analyzes the buffer
      instead.
      
      There is a bug that __recover_probed_insn() tries to restore
      the original code even for Kprobes using the ftrace framework.
      But in this case, the opcode is not stored. See the difference
      between arch_prepare_kprobe() and arch_prepare_kprobe_ftrace().
      The opcode is stored by arch_copy_kprobe() only from
      arch_prepare_kprobe().
      
      This patch makes Kprobes use the ideal 5-byte NOP when the
      code can be modified by ftrace. It is the original instruction;
      see ftrace_make_nop() and ftrace_nop_replace().
      
      Note that we always need to use the NOP for ftrace locations.
      Kprobes do not block ftrace and the instruction might get
      modified at any time. It might even be in an inconsistent state
      because it is modified step by step using the int3 breakpoint.
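
      The fix can be sketched as follows (simplified; buffer setup is
      elided, and ideal_nops/NOP_ATOMIC5 follow the kernel's NOP tables):

        if (faddr)
                /* ftrace location: the original insn is the ideal 5-byte NOP */
                memcpy(buf, ideal_nops[NOP_ATOMIC5], 5);
        else
                /* regular Kprobe: restore the saved opcode byte */
                buf[0] = kp->opcode;
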
      
      The patch also fixes indentation of the touched comment.
      
      Note that I found this problem when playing with Kprobes. I did
      it on x86_64 with gcc-4.8.3 that supported -mfentry. I modified
      samples/kprobes/kprobe_example.c and added offset 5 to put
      the probe right after the fentry area:
      
       static struct kprobe kp = {
       	.symbol_name	= "do_fork",
      +	.offset = 5,
       };
      
      Then I was able to load kprobe_example before jprobe_example
      but not the other way around:
      
        $> modprobe jprobe_example
        $> modprobe kprobe_example
        modprobe: ERROR: could not insert 'kprobe_example': Invalid or incomplete multibyte or wide character
      
      It did not make much sense and debugging pointed to the bug
      described above.
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-2-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  12. 20 Feb 2015, 1 commit
  13. 19 Feb 2015, 2 commits