1. 23 Mar 2015: 5 commits
    • x86/fpu: Fold __drop_fpu() into its sole user · d2d0ac9a
      Committed by Borislav Petkov
      Fold it into drop_fpu(). Phew, one less FPU function to pay attention
      to.
      
      No functionality change.
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Oleg Nesterov <oleg@redhat.com>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Don't abuse drop_init_fpu() in flush_thread() · f893959b
      Committed by Oleg Nesterov
      flush_thread() -> drop_init_fpu() is suboptimal and confusing: it does
      drop_fpu() or restore_init_xstate() depending on !use_eager_fpu(). But
      flush_thread() also checks eagerfpu right after that, and if it is true,
      the restore_init_xstate() call just burns CPU for no reason: we are going
      to load init_xstate_buf again after we set used_math()/user_has_fpu(),
      and until then the FPU state can't survive switch_to().
      
      Remove it, and change the "if (!use_eager_fpu())" to call drop_fpu().
      While at it, clean up the tsk/current usage.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150313173030.GA31217@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
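
      The redundancy described above can be sketched as a toy model; every
      name below is an illustrative stand-in (only the function names
      mentioned in the commit message are taken from it), not the kernel
      code:

      /* Illustrative stand-ins, not the kernel primitives. */
      #include <stdbool.h>
      #include <stdio.h>

      static bool use_eager_fpu(void)       { return true; }   /* eagerfpu=on */
      static void drop_fpu(void)            { puts("drop_fpu()"); }
      static void restore_init_xstate(void) { puts("restore_init_xstate()"); }

      /* drop_init_fpu() picks one of two actions based on the FPU mode; in
       * the eager case its restore is wasted work, because the caller loads
       * the init state again once used_math() has been set. */
      static void drop_init_fpu(void)
      {
              if (!use_eager_fpu())
                      drop_fpu();
              else
                      restore_init_xstate();
      }

      static void flush_thread_fpu_old(void)
      {
              drop_init_fpu();
              if (use_eager_fpu())
                      restore_init_xstate();  /* the real load, later */
      }

      /* After the change: only the lazy case needs anything here. */
      static void flush_thread_fpu_new(void)
      {
              if (!use_eager_fpu())
                      drop_fpu();
      }

      int main(void)
      {
              flush_thread_fpu_old();  /* restores the init state twice */
              flush_thread_fpu_new();  /* does nothing in eager mode */
              return 0;
      }
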
    • x86/fpu: Use restore_init_xstate() instead of math_state_restore() on kthread exec · 9cb6ce82
      Committed by Oleg Nesterov
      Change flush_thread() to do user_fpu_begin() and restore_init_xstate()
      instead of math_state_restore().
      
      Note: "TODO: cleanup this horror" is still valid. We do not need
      init_fpu() at all; we only need fpu_alloc() and memset(0). But this
      needs other changes; in particular, user_fpu_begin() should set
      used_math().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150311173449.GE5032@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/fpu: Introduce restore_init_xstate() · 8f4d8186
      Committed by Oleg Nesterov
      Extract the "use_eager_fpu()" code from drop_init_fpu() into a new,
      simple helper restore_init_xstate(). The next patch adds another user.
      
      - It is not clear why we do not check use_fxsr() like fpu_restore_checking()
        does. eager_fpu_init_bp() calls setup_init_fpu_buf() too, and we have the
        "eagerfpu=on" kernel option.
      
      - Ignoring the fact that init_xstate_buf is "struct xsave_struct *", not
        "union thread_xstate *", it is not clear why we can not simply use
        fpu_restore_checking() and avoid the code duplication.
      
      - It is not clear why we can't call setup_init_fpu_buf() unconditionally
        to always create init_xstate_buf. Then the do_device_not_available()
        path (at least) could use restore_init_xstate() too. It doesn't need
        to init fpu->state; its content doesn't matter until
        unlazy_fpu()/__switch_to()/etc., which overwrite this memory anyway.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150311173429.GD5032@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
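
      A minimal sketch of the extraction may help; the restore primitives
      below are hypothetical stand-ins (the commit only says that
      init_xstate_buf is a "struct xsave_struct *", so the details here are
      assumptions):

      #include <stdbool.h>
      #include <stdio.h>

      /* Stand-ins; not the kernel's definitions or signatures. */
      static bool use_xsave(void)           { return true; }
      static void xrstor_sketch(void *buf)  { printf("xrstor %p\n", buf); }
      static void fxrstor_sketch(void *buf) { printf("fxrstor %p\n", buf); }

      static unsigned char init_xstate_buf[512];  /* stand-in buffer */

      /* The extracted helper: load the init state with whichever restore
       * mechanism the CPU supports, so more than one call site can share
       * it (the next patch adds the second caller). */
      static void restore_init_xstate_sketch(void)
      {
              if (use_xsave())
                      xrstor_sketch(init_xstate_buf);
              else
                      fxrstor_sketch(init_xstate_buf);
      }

      int main(void)
      {
              restore_init_xstate_sketch();
              return 0;
      }
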
    • x86/fpu: Document user_fpu_begin() · fb14b4ea
      Committed by Oleg Nesterov
      Currently, user_fpu_begin() has a single caller, and it is not clear
      why we actually need it or why we should not worry about preemption
      right after preempt_enable().
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Link: http://lkml.kernel.org/r/20150311173409.GC5032@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  2. 20 Mar 2015: 1 commit
    • Revert "x86/PCI: Refine the way to release PCI IRQ resources" · 9e8ce4b9
      Committed by Rafael J. Wysocki
      Commit b4b55cda (Refine the way to release PCI IRQ resources)
      introduced a regression in PCI IRQ resource management by causing
      the IRQ resource of a device, established when pci_enable_device()
      is called on a fully disabled device, to be released when the driver
      is unbound from the device, regardless of the enable_cnt.
      
      This leads to a situation in which an ill-behaved driver can make a
      device unusable to subsequent drivers through an imbalance in its use
      of pci_enable/disable_device().  That is a serious problem for
      secondary drivers like vfio-pci, which are innocent of the
      transgressions of the previous driver.
      
      Since the solution of this problem is not immediate and requires
      further discussion, revert commit b4b55cda; the issue it was
      supposed to address (a bug related to xen-pciback) will be taken
      care of in a different way going forward.
      Reported-by: Alex Williamson <alex.williamson@redhat.com>
      Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
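
      The enable_cnt imbalance described above can be modeled with a toy
      refcount; the names below are illustrative, not the kernel's PCI API:

      #include <stdio.h>

      struct toy_pci_dev {
              int enable_cnt;    /* how many callers have enabled the device */
              int irq_claimed;   /* 1 while the IRQ resource is held */
      };

      static void toy_enable(struct toy_pci_dev *dev)
      {
              if (dev->enable_cnt++ == 0)
                      dev->irq_claimed = 1;   /* first enable claims the IRQ */
      }

      static void toy_disable(struct toy_pci_dev *dev)
      {
              if (--dev->enable_cnt == 0)
                      dev->irq_claimed = 0;   /* last disable releases it */
      }

      int main(void)
      {
              struct toy_pci_dev dev = { 0, 0 };

              /* An ill-behaved first driver: one enable, two disables. */
              toy_enable(&dev);
              toy_disable(&dev);
              toy_disable(&dev);              /* imbalance: cnt drops to -1 */

              /* A second driver (think vfio-pci) enables the device, but
               * the counter never passes through the "first enable" state
               * again, so the IRQ is never re-claimed. */
              toy_enable(&dev);
              printf("enable_cnt=%d irq_claimed=%d\n",
                     dev.enable_cnt, dev.irq_claimed);
              return 0;
      }
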
  3. 16 Mar 2015: 1 commit
    • Revert "x86/mm/ASLR: Propagate base load address calculation" · 69797daf
      Committed by Borislav Petkov
      This reverts commit:
      
        f47233c2 ("x86/mm/ASLR: Propagate base load address calculation")
      
      The main reason for the revert is that the new boot flag does not work
      at all currently, and in order to make it work, we need non-trivial
      changes to the x86 boot code which we didn't manage to get done in
      time for merging.

      And even if we had, they would've been too risky, so instead of
      rushing things and breaking booting of 4.1 on boxes left and right,
      we will be very strict and conservative and will take our time with
      this to fix and test it properly.
      Reported-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Baoquan He <bhe@redhat.com>
      Cc: H. Peter Anvin <hpa@linux.intel.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Josh Triplett <josh@joshtriplett.org>
      Cc: Junjie Mao <eternal.n08@gmail.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Matt Fleming <matt.fleming@intel.com>
      Link: http://lkml.kernel.org/r/20150316100628.GD22995@pd.tnic
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  4. 13 Mar 2015: 5 commits
    • KVM: VMX: Set msr bitmap correctly if vcpu is in guest mode · 670125bd
      Committed by Wincy Van
      In commit 3af18d9c ("KVM: nVMX: Prepare for using hardware MSR bitmap"),
      we set MSR_BITMAP in prepare_vmcs02 if hardware should be used. This is
      not enough, since the field will be modified by the subsequent
      vmx_set_efer call.

      Fix this by setting vmx_msr_bitmap_nested in vmx_set_msr_bitmap if the
      vcpu is in guest mode.
      Signed-off-by: Wincy Van <fanwenyi0529@gmail.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • x86/fpu: Drop_fpu() should not assume that tsk equals current · f4c36863
      Committed by Oleg Nesterov
      drop_fpu() does clear_used_math() and usually this is correct
      because tsk == current.
      
      However, switch_fpu_finish()->restore_fpu_checking() is called before
      __switch_to() updates the "current_task" variable. If it fails, we
      will wrongly clear the PF_USED_MATH flag of the previous task.
      
      So use clear_stopped_child_used_math() instead.
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Rik van Riel <riel@redhat.com>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150309171041.GB11388@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
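
      The tsk-versus-current confusion can be shown with a toy: the buggy
      helper acts on whatever "current" points at, which mid-switch is still
      the previous task, while the fix (clear_stopped_child_used_math() in
      the real patch) acts on the task that was passed in. Everything below
      is an illustrative stand-in, not the kernel's task_struct API:

      #include <stdio.h>

      struct task { int used_math; };

      static struct task prev = { 1 };
      static struct task next = { 1 };

      /* Mid context switch: "current" still points at the previous task. */
      static struct task *current_task = &prev;

      static void clear_used_math(void)        /* implicitly acts on current */
      {
              current_task->used_math = 0;
      }

      static void clear_task_used_math(struct task *tsk)  /* acts on tsk */
      {
              tsk->used_math = 0;
      }

      int main(void)
      {
              /* The FPU restore for "next" failed, so next's flag should go. */
              clear_used_math();                   /* buggy: clears prev's */
              printf("buggy: prev=%d next=%d\n", prev.used_math, next.used_math);

              prev.used_math = 1;                  /* reset the toy state */
              clear_task_used_math(&next);         /* fixed: explicit target */
              printf("fixed: prev=%d next=%d\n", prev.used_math, next.used_math);
              return 0;
      }
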
    • x86/fpu: Avoid math_state_restore() without used_math() in __restore_xstate_sig() · a7c80ebc
      Committed by Oleg Nesterov
      math_state_restore() assumes it is called with irqs disabled,
      but this is not true if the caller is __restore_xstate_sig().
      
      This means that if ia32_fxstate == T and __copy_from_user()
      fails, __restore_xstate_sig() returns with irqs disabled too.
      
      This triggers:
      
        BUG: sleeping function called from invalid context at kernel/locking/rwsem.c:41
         dump_stack
         ___might_sleep
         ? _raw_spin_unlock_irqrestore
         __might_sleep
         down_read
         ? _raw_spin_unlock_irqrestore
         print_vma_addr
         signal_fault
         sys32_rt_sigreturn
      
      Change __restore_xstate_sig() to call set_used_math()
      unconditionally. This avoids enabling and disabling interrupts
      in math_state_restore(). If copy_from_user() fails, we can
      simply do fpu_finit() by hand.
      
      [ Note: this is only the first step. math_state_restore() should
              not check used_math(), it should set this flag. While
              init_fpu() should simply die. ]
      Signed-off-by: Oleg Nesterov <oleg@redhat.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: <stable@vger.kernel.org>
      Cc: Andy Lutomirski <luto@amacapital.net>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@intel.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Pekka Riikonen <priikone@iki.fi>
      Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Suresh Siddha <sbsiddha@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Link: http://lkml.kernel.org/r/20150307153844.GB25954@redhat.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
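
      The shape of the fix, as a toy: set the flag before the copy, and if
      the copy fails, reinitialize the state by hand instead of going
      through a helper that toggles interrupts. All names below are
      illustrative stand-ins:

      #include <stdio.h>
      #include <string.h>

      struct toy_fpu { unsigned char regs[32]; };
      static int used_math_flag;

      static int copy_state_from_user(struct toy_fpu *dst, const void *src)
      {
              if (!src)
                      return -1;      /* models a failing __copy_from_user() */
              memcpy(dst, src, sizeof(*dst));
              return 0;
      }

      static void fpu_finit_sketch(struct toy_fpu *fpu)
      {
              memset(fpu, 0, sizeof(*fpu));   /* reset to a sane init state */
      }

      static int restore_sig_state(struct toy_fpu *fpu, const void *user_buf)
      {
              /* Set the flag unconditionally up front... */
              used_math_flag = 1;

              /* ...and on copy failure fall back to a hand-rolled init. */
              if (copy_state_from_user(fpu, user_buf)) {
                      fpu_finit_sketch(fpu);
                      return -1;
              }
              return 0;
      }

      int main(void)
      {
              struct toy_fpu fpu;
              int ret = restore_sig_state(&fpu, NULL);

              printf("restore -> %d, used_math=%d\n", ret, used_math_flag);
              return 0;
      }
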
    • crypto: aesni - fix memory usage in GCM decryption · ccfe8c3f
      Committed by Stephan Mueller
      The kernel crypto API logic requires the caller to provide the
      length of (ciphertext || authentication tag) as cryptlen for the
      AEAD decryption operation. Thus, the cipher implementation must
      calculate the size of the plaintext output itself and cannot simply use
      cryptlen.
      
      The RFC4106 GCM decryption operation tries to overwrite cryptlen bytes
      of memory in req->dst. As the destination buffer for decryption only
      needs to hold the plaintext, but cryptlen references the input buffer
      holding (ciphertext || authentication tag), this assumption about the
      destination buffer length in the RFC4106 GCM operation leads to an
      overly large size. This patch simply uses the already calculated
      plaintext size.
      
      In addition, this patch fixes the offset calculation of the AAD buffer
      pointer: as mentioned before, cryptlen already includes the size of the
      tag, so the tag must not be added again. If it were, the AAD would be
      written beyond the already allocated buffer.
      
      Note, this fixes a kernel crash that can be triggered from user space
      via AF_ALG(aead) -- simply use the libkcapi test application
      from [1] and update it to use rfc4106-gcm-aes.
      
      Using [1], the changes were tested using CAVS vectors to demonstrate
      that the crypto operation still delivers the right results.
      
      [1] http://www.chronox.de/libkcapi.html
      
      CC: Tadeusz Struk <tadeusz.struk@intel.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Stephan Mueller <smueller@chronox.de>
      Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
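
      The length bookkeeping boils down to simple arithmetic; the snippet
      below restates it with illustrative names (it is not the aesni code):

      #include <stdio.h>

      int main(void)
      {
              unsigned long auth_tag_len = 16;              /* RFC4106 GCM tag */
              unsigned long cryptlen = 4096 + auth_tag_len; /* ciphertext||tag */

              /* The destination only has to hold the plaintext... */
              unsigned long plaintext_len = cryptlen - auth_tag_len;

              /* ...so writing cryptlen bytes into req->dst would overrun it
               * by the tag length.  The same reasoning applies to the AAD
               * offset: the tag is already inside cryptlen and must not be
               * added a second time. */
              printf("plaintext bytes: %lu (not %lu)\n", plaintext_len, cryptlen);
              return 0;
      }
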
    • kvm: x86: i8259: return initialized data on invalid-size read · c1a6bff2
      Committed by Petr Matousek
      If data is read from the PIC with an invalid access size, the returned
      data stays uninitialized even though success is reported.

      Fix this by always initializing the data.
      Signed-off-by: Petr Matousek <pmatouse@redhat.com>
      Reported-by: Nadav Amit <nadav.amit@gmail.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
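
      The fix follows the classic "define the output before validating the
      size" pattern; a toy sketch (not KVM's i8259 code):

      #include <stdio.h>
      #include <string.h>

      /* Returns "success" for any size, but never leaves *val undefined. */
      static int pic_read_sketch(unsigned int len, unsigned char *val)
      {
              if (len != 1) {
                      memset(val, 0, len);   /* defined bytes even on bad size */
                      return 0;
              }
              *val = 0x42;                   /* stand-in for the register read */
              return 0;
      }

      int main(void)
      {
              unsigned char buf[4] = { 0xde, 0xad, 0xbe, 0xef };  /* stale data */

              pic_read_sketch(sizeof(buf), buf);   /* invalid size: 4 bytes */
              printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
              return 0;
      }
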
  5. 12 Mar 2015: 2 commits
  6. 11 Mar 2015: 1 commit
  7. 10 Mar 2015: 3 commits
  8. 06 Mar 2015: 3 commits
  9. 05 Mar 2015: 1 commit
  10. 04 Mar 2015: 1 commit
  11. 03 Mar 2015: 1 commit
    • KVM: SVM: fix interrupt injection (apic->isr_count always 0) · f563db4b
      Committed by Radim Krčmář
      In commit b4eef9b3, we started to use hwapic_isr_update() != NULL
      instead of kvm_apic_vid_enabled(vcpu->kvm).  This didn't work, because
      SVM had it defined and the "apicv" path in apic_{set,clear}_isr() does
      not change apic->isr_count, since it should always be 1.  The initial
      value of apic->isr_count was based on kvm_apic_vid_enabled(vcpu->kvm),
      which is always 0 for SVM, so KVM could have injected interrupts when
      it shouldn't have.
      
      Fix it by implicitly setting SVM's hwapic_isr_update to NULL and making
      the initial isr_count depend on hwapic_isr_update() for good measure.
      
      Fixes: b4eef9b3 ("kvm: x86: vmx: NULL out hwapic_isr_update() in case of !enable_apicv")
      Reported-and-tested-by: Borislav Petkov <bp@suse.de>
      Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
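
      A toy of the isr_count fast path the commit refers to (the shape of
      the bug, not KVM's code): with the cached count stuck at 0, the lookup
      claims nothing is in service, so another interrupt can be injected too
      early.

      #include <stdio.h>

      #define NR_VECTORS 256

      struct toy_apic {
              int isr_count;                 /* cached count of ISR bits set */
              unsigned char isr[NR_VECTORS];
      };

      /* Mirrors the fast path: when the cached count is 0, skip the scan. */
      static int find_highest_isr(struct toy_apic *apic)
      {
              int v;

              if (!apic->isr_count)
                      return -1;             /* "nothing in service" */
              for (v = NR_VECTORS - 1; v >= 0; v--)
                      if (apic->isr[v])
                              return v;
              return -1;
      }

      int main(void)
      {
              struct toy_apic apic = { 0 };

              apic.isr[0x31] = 1;  /* an interrupt is in service... */
              /* ...but the count was initialized as if hardware tracked the
               * ISR, so it stays 0 and the scan is skipped. */
              printf("wrong: highest ISR = %d\n", find_highest_isr(&apic));

              apic.isr_count = 1;  /* with the count maintained, the scan runs */
              printf("fixed: highest ISR = 0x%x\n", find_highest_isr(&apic));
              return 0;
      }
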
  12. 28 Feb 2015: 1 commit
  13. 27 Feb 2015: 1 commit
  14. 24 Feb 2015: 6 commits
  15. 23 Feb 2015: 3 commits
  16. 22 Feb 2015: 1 commit
  17. 21 Feb 2015: 2 commits
    • kprobes/x86: Check for invalid ftrace location in __recover_probed_insn() · 2a6730c8
      Committed by Petr Mladek
      __recover_probed_insn() should always be called from an address
      where an instruction starts. The check for ftrace_location()
      might help to discover a potential inconsistency.

      This patch adds a WARN_ON() for when the inconsistency is detected.
      It also adds handling of the situation in which the original code
      cannot be recovered.
      Suggested-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Cc: Ananth NMavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-3-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
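
      A sketch of the defensive check being added, with hypothetical names
      (the real code consults ftrace_location()): warn when the probed
      address falls inside, rather than at the start of, an ftrace-patched
      instruction, and let the caller treat it as unrecoverable.

      #include <stdio.h>

      /* Stand-in: pretend the patched instruction starts at a 16-byte
       * boundary; the real kernel asks ftrace for the exact location. */
      static unsigned long ftrace_location_sketch(unsigned long addr)
      {
              return addr & ~15UL;
      }

      static const void *recover_probed_insn_sketch(unsigned long addr)
      {
              unsigned long site = ftrace_location_sketch(addr);

              if (site && site != addr) {
                      fprintf(stderr, "WARN: probe at %#lx is inside an "
                              "ftrace location starting at %#lx\n", addr, site);
                      return NULL;   /* caller must treat this as a failure */
              }
              return (const void *)addr;  /* fine: address starts an insn */
      }

      int main(void)
      {
              recover_probed_insn_sketch(0x1003);  /* triggers the warning */
              return 0;
      }
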
    • kprobes/x86: Use 5-byte NOP when the code might be modified by ftrace · 650b7b23
      Committed by Petr Mladek
      can_probe() checks if the given address points to the beginning
      of an instruction. It analyzes all the instructions from the
      beginning of the function until the given address. The code
      might be modified by another Kprobe. In this case, the current
      code is read into a buffer, the int3 breakpoint is replaced by the
      saved opcode in the buffer, and can_probe() analyzes the buffer
      instead.
      
      The bug is that __recover_probed_insn() tries to restore the
      original code even for Kprobes using the ftrace framework, but in
      this case the opcode is not stored. See the difference between
      arch_prepare_kprobe() and arch_prepare_kprobe_ftrace(): the opcode
      is stored by arch_copy_kprobe() only from arch_prepare_kprobe().
      
      This patch makes Kprobes use the ideal 5-byte NOP when the code can
      be modified by ftrace. That is the original instruction; see
      ftrace_make_nop() and ftrace_nop_replace().
      
      Note that we always need to use the NOP for ftrace locations.
      Kprobes do not block ftrace and the instruction might get
      modified at any time. It might even be in an inconsistent state
      because it is modified step by step using the int3 breakpoint.
      
      The patch also fixes indentation of the touched comment.
      
      Note that I found this problem when playing with Kprobes. I did
      it on x86_64 with gcc-4.8.3 that supported -mfentry. I modified
      samples/kprobes/kprobe_example.c and added offset 5 to put
      the probe right after the fentry area:
      
       static struct kprobe kp = {
       	.symbol_name	= "do_fork",
      +	.offset = 5,
       };
      
      Then I was able to load kprobe_example before jprobe_example
      but not the other way around:
      
        $> modprobe jprobe_example
        $> modprobe kprobe_example
        modprobe: ERROR: could not insert 'kprobe_example': Invalid or incomplete multibyte or wide character
      
      It did not make much sense and debugging pointed to the bug
      described above.
      Signed-off-by: Petr Mladek <pmladek@suse.cz>
      Acked-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
      Cc: Ananth NMavinakayanahalli <ananth@in.ibm.com>
      Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Kosina <jkosina@suse.cz>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Link: http://lkml.kernel.org/r/1424441250-27146-2-git-send-email-pmladek@suse.cz
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
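
      The key fact the patch relies on can be shown in a few lines: at an
      ftrace location the original bytes are the ideal 5-byte NOP, so that
      is what belongs in the recovery buffer (no opcode was saved for an
      ftrace-based kprobe). The helper below is an illustrative sketch, not
      the kernel function:

      #include <stdio.h>
      #include <string.h>

      #define MCOUNT_INSN_SIZE 5

      /* The x86 "ideal" 5-byte NOP: nopl 0x0(%rax,%rax,1). */
      static const unsigned char ideal_nop5[MCOUNT_INSN_SIZE] =
              { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

      static void recover_ftrace_location(unsigned char *buf)
      {
              /* ftrace may rewrite these 5 bytes at any time (call <-> nop),
               * so the only stable thing to analyze is the NOP itself. */
              memcpy(buf, ideal_nop5, MCOUNT_INSN_SIZE);
      }

      int main(void)
      {
              unsigned char buf[MCOUNT_INSN_SIZE];
              int i;

              recover_ftrace_location(buf);
              for (i = 0; i < MCOUNT_INSN_SIZE; i++)
                      printf("%02x ", buf[i]);
              printf("\n");
              return 0;
      }
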
  18. 20 Feb 2015: 2 commits