1. 27 June 2017, 3 commits
  2. 09 June 2017, 17 commits
  3. 07 June 2017, 5 commits
  4. 01 June 2017, 5 commits
  5. 30 May 2017, 2 commits
  6. 27 May 2017, 1 commit
    • KVM: x86: Fix virtual wire mode · 52b54190
      Jan H. Schönherr authored
      The Intel SDM says that at most one LAPIC should be configured for
      ExtINT delivery. KVM configures all LAPICs this way. This causes
      pic_unlock() to kick the first available vCPU from the internal KVM
      data structures. If this vCPU is not the BSP, but some not-yet-booted
      AP, the BSP may never realize that there is an interrupt.
      
      Fix that by enabling ExtINT delivery only for the BSP.
      
      This allows booting a Linux guest without a TSC in the above situation.
      Otherwise the BSP gets stuck in calibrate_delay_converge().
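
      A minimal sketch of the described change (assuming the LVT0 setup in
      kvm_lapic_reset(); paraphrased, not the verbatim diff):

        if (kvm_vcpu_is_bsp(apic->vcpu))
                /* Virtual wire mode: only the BSP may use ExtINT delivery */
                kvm_lapic_set_reg(apic, APIC_LVT0,
                                  SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));
        else
                /* APs keep LVT0 masked, per the SDM's one-ExtINT-LAPIC rule */
                kvm_lapic_set_reg(apic, APIC_LVT0, APIC_LVT_MASKED);
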
      Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 26 May 2017, 2 commits
    • KVM: nVMX: Fix handling of lmsw instruction · e1d39b17
      Jan H. Schönherr authored
      The decision whether or not to exit from L2 to L1 on an lmsw instruction is
      based on bogus values: instead of using the information encoded within the
      exit qualification, it uses the data also used for the mov-to-cr
      instruction, which boils down to using whatever is in %eax at that point.
      
      Use the correct values instead.
      
      Without this fix, an L1 may not get notified when a 32-bit Linux L2
      switches its secondary CPUs to protected mode; the L1 is only notified on
      the next modification of CR0. This short time window poses a problem, when
      there is some other reason to exit to L1 in between. Then, L2 will be
      resumed in real mode and chaos ensues.
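
      A minimal sketch of the corrected check in the nested CR-access exit
      handler (assuming the SDM's exit-qualification layout, where the LMSW
      source data sits at bits 16..31, here LMSW_SOURCE_DATA_SHIFT):

        case 3: /* lmsw */
                /*
                 * lmsw can change CR0 bits 1..3 and set bit 0; the attempted
                 * new value is in the exit qualification, not in %eax.
                 */
                val = (exit_qualification >> LMSW_SOURCE_DATA_SHIFT) & 0x0f;
                if (vmcs12->cr0_guest_host_mask & 0xe &
                    (val ^ vmcs12->cr0_read_shadow))
                        return true;
                if ((vmcs12->cr0_guest_host_mask & 0x1) &&
                    !(vmcs12->cr0_read_shadow & 0x1) && (val & 0x1))
                        return true;
                break;
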
      Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
      Reviewed-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Fix preempt the preemption timer cancel · 5acc1ca4
      Wanpeng Li authored
      Preemption can occur while the preemption timer is being cancelled,
      leaving inconsistent state across the lapic, vmx, and vmcs fields.
      
                CPU0                    CPU1
      
        preemption timer vmexit
        handle_preemption_timer(vCPU0)
          kvm_lapic_expired_hv_timer
            vmx_cancel_hv_timer
              vmx->hv_deadline_tsc = -1
              vmcs_clear_bits
              /* hv_timer_in_use still true */
        sched_out
                                 sched_in
                                 kvm_arch_vcpu_load
                                   vmx_set_hv_timer
                                     write vmx->hv_deadline_tsc
                                     vmcs_set_bits
                                 /* back in kvm_lapic_expired_hv_timer */
                                 hv_timer_in_use = false
                                 ...
                                 vmx_vcpu_run
                                    vmx_arm_hv_timer
                                     write preemption timer deadline
                                   spurious preemption timer vmexit
                                     handle_preemption_timer(vCPU0)
                                       kvm_lapic_expired_hv_timer
                                         WARN_ON(!apic->lapic_timer.hv_timer_in_use);
      
      This can be reproduced sporadically during boot of L2 on a
      preemptible L1, causing a splat on L1.
      
       WARNING: CPU: 3 PID: 1952 at arch/x86/kvm/lapic.c:1529 kvm_lapic_expired_hv_timer+0xb5/0xd0 [kvm]
       CPU: 3 PID: 1952 Comm: qemu-system-x86 Not tainted 4.12.0-rc1+ #24
       RIP: 0010:kvm_lapic_expired_hv_timer+0xb5/0xd0 [kvm]
        Call Trace:
        handle_preemption_timer+0xe/0x20 [kvm_intel]
        vmx_handle_exit+0xc9/0x15f0 [kvm_intel]
        ? lock_acquire+0xdb/0x250
        ? lock_acquire+0xdb/0x250
        ? kvm_arch_vcpu_ioctl_run+0xdf3/0x1ce0 [kvm]
        kvm_arch_vcpu_ioctl_run+0xe55/0x1ce0 [kvm]
        kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
        ? kvm_vcpu_ioctl+0x384/0x7b0 [kvm]
        ? __fget+0xf3/0x210
        do_vfs_ioctl+0xa4/0x700
        ? __fget+0x114/0x210
        SyS_ioctl+0x79/0x90
        do_syscall_64+0x8f/0x750
        ? trace_hardirqs_on_thunk+0x1a/0x1c
        entry_SYSCALL64_slow_path+0x25/0x25
      
      This patch fixes it by disabling preemption while cancelling the
      preemption timer.  This way cancel_hv_timer is atomic with respect
      to kvm_arch_vcpu_load.
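
      A minimal sketch of that fix (a preemption-disabled cancel helper in
      lapic.c; paraphrased, not the verbatim diff):

        static void cancel_hv_timer(struct kvm_lapic *apic)
        {
                /* No preemption between clearing the hardware timer and
                 * clearing hv_timer_in_use, so kvm_arch_vcpu_load() on
                 * another CPU cannot observe a half-cancelled state. */
                preempt_disable();
                kvm_x86_ops->cancel_hv_timer(apic->vcpu);
                apic->lapic_timer.hv_timer_in_use = false;
                preempt_enable();
        }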
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 22 May 2017, 5 commits
    • Linux 4.12-rc2 · 08332893
      Linus Torvalds authored
    • x86: fix 32-bit case of __get_user_asm_u64() · 33c9e972
      Linus Torvalds authored
      The code to fetch a 64-bit value from user space was entirely buggered,
      and has been since the code was merged in early 2016 in commit
      b2f68038 ("x86/mm/32: Add support for 64-bit __get_user() on 32-bit
      kernels").
      
      Happily the buggered routine is almost certainly entirely unused, since
      the normal way to access user space memory is just with the non-inlined
      "get_user()", and the inlined version didn't even historically exist.
      
      The normal "get_user()" case is handled by external hand-written asm in
      arch/x86/lib/getuser.S that doesn't have either of these issues.
      
      There were two independent bugs in __get_user_asm_u64():
      
       - it still did the STAC/CLAC user space access marking, even though
         that is now done by the wrapper macros, see commit 11f1a4b9
         ("x86: reorganize SMAP handling in user space accesses").
      
         This didn't result in a semantic error, it just means that the
         inlined optimized version was hugely less efficient than the
         allegedly slower standard version, since the CLAC/STAC overhead is
         quite high on modern Intel CPUs.
      
       - the double register %eax/%edx was marked as an output, but the %eax
         part of it was touched early in the asm, and could thus clobber other
         inputs to the asm that gcc didn't expect it to touch.
      
         In particular, that meant that the generated code could look like
         this:
      
              mov    (%eax),%eax
              mov    0x4(%eax),%edx
      
         where the load of %edx obviously was _supposed_ to be from the 32-bit
         word that followed the source of %eax, but because %eax was
         overwritten by the first instruction, the source of %edx was
         basically random garbage.
      
      The fixes are trivial: remove the extraneous STAC/CLAC entries, and mark
      the 64-bit output as early-clobber to let gcc know that no inputs should
      alias with the output register.
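
      The early-clobber point generalizes beyond this macro; here is a
      standalone sketch for 32-bit x86 gcc (load_u64() is a hypothetical
      helper for illustration, not the kernel macro):

        /* "A" ties the output to the %edx:%eax pair; the "&" early-clobber
         * tells gcc the asm writes the output before all inputs are dead,
         * so the address in %1 is never allocated to %eax or %edx. */
        static inline unsigned long long load_u64(const void *p)
        {
                unsigned long long val;
                asm("movl (%1), %%eax\n\t"     /* clobbers %eax early */
                    "movl 4(%1), %%edx"
                    : "=&A" (val)              /* "=A" alone allows aliasing */
                    : "r" (p)
                    : "memory");
                return val;
        }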
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: stable@kernel.org   # v4.8+
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Clean up x86 unsafe_get/put_user() type handling · 334a023e
      Linus Torvalds authored
      Al noticed that unsafe_put_user() had type problems, and fixed them in
      commit a7cc722f ("fix unsafe_put_user()"), which made me look more
      at those functions.
      
      It turns out that unsafe_get_user() had a type issue too: it limited the
      largest size of the type it could handle to "unsigned long".  Which is
      fine with the current users, but doesn't match our existing normal
      get_user() semantics, which can also handle "u64" even when that does
      not fit in a long.
      
      While at it, also clean up the type cast in unsafe_put_user().  We
      actually want to just make it an assignment to the expected type of the
      pointer, because we actually do want warnings from types that don't
      convert silently.  And it makes the code more readable by not having
      that one very long and complex line.
      
      [ This patch might become stable material if we ever end up back-porting
        any new users of the unsafe uaccess code, but as things stand now this
        doesn't matter for any current existing uses. ]
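
      A minimal sketch of the assignment-based pattern described above
      (the names are illustrative stand-ins, not the kernel macros):

        /* __sketch_put_size() stands in for the real size-dispatching
         * helper.  Assigning (x) to a local of the pointee's type keeps
         * gcc's conversion warnings, which a (__typeof__(*(ptr))) cast
         * folded into one long line would silence. */
        #define sketch_put_user(x, ptr, label)                             \
        do {                                                               \
                __typeof__(*(ptr)) __pu_val = (x);  /* assignment, not cast */ \
                __sketch_put_size(__pu_val, (ptr), sizeof(*(ptr)), label); \
        } while (0)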
      
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs · f3926e4c
      Linus Torvalds authored
      Pull misc uaccess fixes from Al Viro:
       "Fix for unsafe_put_user() (no callers currently in mainline, but
        anyone starting to use it will step into that) + alpha osf_wait4()
        infoleak fix"
      
      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
        osf_wait4(): fix infoleak
        fix unsafe_put_user()
    • Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip · 970c305a
      Linus Torvalds authored
      Pull scheduler fix from Thomas Gleixner:
       "A single scheduler fix:
      
        Prevent the idle task from ever being preempted. That makes sure
        that synchronize_rcu_tasks(), which ignores the idle task, does
        not pretend that no task is stuck in a preempted state. If that
        happens and idle was preempted on an ftrace trampoline, the
        machine crashes due to inconsistent state"
      
      * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
        sched/core: Call __schedule() from do_idle() without enabling preemption
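
      A minimal sketch of the referenced fix (paraphrased: do_idle()
      reschedules via a schedule_idle() helper instead of briefly enabling
      preemption):

        void __sched schedule_idle(void)
        {
                /* Entered with preemption disabled: the idle task drops
                 * straight into __schedule() and is never recorded as
                 * "preempted".  Loop on need_resched(), since there is no
                 * preemption point to catch a late wakeup otherwise. */
                do {
                        __schedule(false);
                } while (need_resched());
        }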