1. 24 Jul 2020, 11 commits
  2. 19 Jul 2020, 2 commits
  3. 18 Jul 2020, 2 commits
  4. 16 Jul 2020, 4 commits
  5. 14 Jul 2020, 1 commit
  6. 10 Jul 2020, 1 commit
  7. 09 Jul 2020, 3 commits
  8. 07 Jul 2020, 1 commit
  9. 06 Jul 2020, 2 commits
    • x86/ldt: use "pr_info_once()" instead of open-coding it badly · bb5a93aa
      Authored by Linus Torvalds
      Using a mutex for "print this warning only once" is so overdesigned as
      to be actively offensive to my sensitive stomach.
      
      Just use "pr_info_once()" that already does this, although in a
      (harmlessly) racy manner that can in theory cause the message to be
      printed twice if more than one CPU races on that "is this the first
      time" test.
      
      [ If somebody really cares about that harmless data race (which sounds
        very unlikely indeed), that person can trivially fix printk_once() by
        using a simple atomic access, preferably with an optimistic non-atomic
        test first before even bothering to treat the pointless "make sure it
        is _really_ just once" case.
      
        A mutex is most definitely never the right primitive to use for
        something like this. ]
      
      Yes, this is a small and meaningless detail in a code path that hardly
      matters.  But let's keep some code quality standards here, and not
      accept outrageously bad code.
      
      Link: https://lore.kernel.org/lkml/CAHk-=wgV9toS7GU3KmNpj8hCS9SeF+A0voHS8F275_mgLhL4Lw@mail.gmail.com/
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      bb5a93aa
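      [ A minimal userspace sketch of the strictly-once pattern suggested
        above -- an optimistic relaxed load first, then an atomic
        test-and-set -- not the kernel's actual printk_once()
        implementation, which tolerates the harmless race with a plain
        flag. The DO_ONCE_STRICT name is invented for illustration. ]

        #include <stdatomic.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Run 'body' exactly once, even if several threads race here. */
        #define DO_ONCE_STRICT(body) do {                                   \
                static atomic_bool __done;                                  \
                /* Cheap optimistic fast path for the common case. */      \
                if (!atomic_load_explicit(&__done, memory_order_relaxed) && \
                    !atomic_exchange(&__done, true))                        \
                        body;                                               \
        } while (0)

        int main(void)
        {
                for (int i = 0; i < 3; i++)
                        DO_ONCE_STRICT(puts("printed exactly once"));
                return 0;
        }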
    • x86/entry/32: Fix XEN_PV build dependency · a4c0e91d
      Authored by Ingo Molnar
      xenpv_exc_nmi() and xenpv_exc_debug() are only defined on 64-bit kernels,
      but they snuck into the 32-bit build via <asm/idtentry.h>, causing the link
      to fail:
      
        ld: arch/x86/entry/entry_32.o: in function `asm_xenpv_exc_nmi':
        (.entry.text+0x817): undefined reference to `xenpv_exc_nmi'
      
        ld: arch/x86/entry/entry_32.o: in function `asm_xenpv_exc_debug':
        (.entry.text+0x827): undefined reference to `xenpv_exc_debug'
      
      Only use them on 64-bit kernels.
      
      Fixes: f41f0824 ("x86/entry/xen: Route #DB correctly on Xen PV")
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: linux-kernel@vger.kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
      a4c0e91d
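      [ A sketch of the shape of the fix, not the exact upstream diff:
        guard the declarations in <asm/idtentry.h> so 32-bit builds never
        see the 64-bit-only Xen PV entry points. The macro names below
        (DECLARE_IDTENTRY_RAW et al.) are assumed from context. ]

        /*
         * In <asm/idtentry.h>: the Xen PV #NMI/#DB stubs exist only on
         * 64-bit kernels, so their declarations (which also emit the
         * asm_xenpv_* references from .entry.text) must be 64-bit only.
         */
        #if defined(CONFIG_XEN_PV) && defined(CONFIG_X86_64)
        DECLARE_IDTENTRY_RAW(X86_TRAP_NMI, xenpv_exc_nmi);
        DECLARE_IDTENTRY_RAW(X86_TRAP_DB,  xenpv_exc_debug);
        #endif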
  10. 05 Jul 2020, 5 commits
  11. 04 Jul 2020, 4 commits
  12. 01 Jul 2020, 3 commits
  13. 30 Jun 2020, 1 commit
    • x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted · 009bce1d
      Authored by Sean Christopherson
      Choo! Choo!  All aboard the Split Lock Express, with direct service to
      Wreckage!
      
      Skip split_lock_verify_msr() if the CPU isn't whitelisted as a possible
      SLD-enabled CPU model to avoid writing MSR_TEST_CTRL.  MSR_TEST_CTRL
      exists, and is writable, on many generations of CPUs.  Writing the MSR,
      even with '0', can result in bizarre, undocumented behavior.
      
      This fixes a crash on Haswell when resuming from suspend with a live KVM
      guest.  Because APs use the standard SMP boot flow for resume, they will
      go through split_lock_init() and the subsequent RDMSR/WRMSR sequence,
      which runs even when sld_state==sld_off to ensure SLD is disabled.  On
      Haswell (at least, my Haswell), writing MSR_TEST_CTRL with '0' will
      succeed and _may_ take the SMT _sibling_ out of VMX root mode.
      
      When KVM has an active guest, KVM performs VMXON as part of CPU onlining
      (see kvm_starting_cpu()).  Because SMP boot is serialized, the resulting
      flow is effectively:
      
        on_each_ap_cpu() {
           WRMSR(MSR_TEST_CTRL, 0)
           VMXON
        }
      
      As a result, the WRMSR can disable VMX on a different CPU that has
      already done VMXON.  This ultimately results in a #UD on VMPTRLD when
      KVM regains control and attempts to run its vCPUs.
      
      The above voodoo was confirmed by reworking KVM's VMXON flow to write
      MSR_TEST_CTRL prior to VMXON, and to serialize the sequence as above.
      Further verification of the insanity was done by redoing VMXON on all
      APs after the initial WRMSR->VMXON sequence.  The additional VMXON,
      which should VM-Fail, occasionally succeeded, and also eliminated the
      unexpected #UD on VMPTRLD.
      
      The damage done by writing MSR_TEST_CTRL doesn't appear to be limited
      to VMX, e.g. after suspend with an active KVM guest, subsequent reboots
      almost always hang (even when fudging VMXON), a #UD on a random Jcc was
      observed, suspend/resume stability is qualitatively poor, and so on and
      so forth.
      
        kernel BUG at arch/x86/kvm/x86.c:386!
        CPU: 1 PID: 2592 Comm: CPU 6/KVM Tainted: G      D
        Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
        RIP: 0010:kvm_spurious_fault+0xf/0x20
        Call Trace:
         vmx_vcpu_load_vmcs+0x1fb/0x2b0
         vmx_vcpu_load+0x3e/0x160
         kvm_arch_vcpu_load+0x48/0x260
         finish_task_switch+0x140/0x260
         __schedule+0x460/0x720
         _cond_resched+0x2d/0x40
         kvm_arch_vcpu_ioctl_run+0x82e/0x1ca0
         kvm_vcpu_ioctl+0x363/0x5c0
         ksys_ioctl+0x88/0xa0
         __x64_sys_ioctl+0x16/0x20
         do_syscall_64+0x4c/0x170
         entry_SYSCALL_64_after_hwframe+0x44/0xa9
      
      Fixes: dbaba470 ("x86/split_lock: Rework the initialization flow of split lock detection")
      Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/20200605192605.7439-1-sean.j.christopherson@intel.com
      009bce1d
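      [ A sketch of the guard described above; the cpu_model_supports_sld
        flag name is assumed for illustration rather than copied from the
        upstream diff. ]

        /*
         * Set during feature detection only for CPU models known to
         * implement split lock detect in MSR_TEST_CTRL.
         */
        static bool cpu_model_supports_sld __ro_after_init;

        static void split_lock_init(void)
        {
                /*
                 * On unknown models MSR_TEST_CTRL may exist and be
                 * writable, but writing it -- even with 0 -- can have
                 * undocumented side effects (e.g. kicking the SMT
                 * sibling out of VMX root mode). Don't touch it.
                 */
                if (cpu_model_supports_sld)
                        split_lock_verify_msr(sld_state != sld_off);
        }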