1. 12 Nov 2019, 4 commits
    • KVM: VMX: Do not change PID.NDST when loading a blocked vCPU · 132194ff
      Committed by Joao Martins
      When a vCPU enters the block phase, pi_pre_block() inserts the vCPU
      into a per-pCPU linked list of all vCPUs blocked on this pCPU. It then
      changes PID.NV to POSTED_INTR_WAKEUP_VECTOR, whose handler
      (wakeup_handler()) is responsible for kicking (unblocking) any vCPU on
      that list which now has pending posted interrupts.
      
      While the vCPU is blocked (in kvm_vcpu_block()), it may be preempted,
      which causes vmx_vcpu_pi_put() to set PID.SN. If the vCPU is later
      scheduled to run on a different pCPU, vmx_vcpu_pi_load() clears PID.SN
      but also *overwrites PID.NDST with this different pCPU*, instead of
      keeping it pointed at the original pCPU on which the vCPU entered the
      block phase.
      
      This is a problem because when a posted interrupt is delivered,
      wakeup_handler() executes on the pCPU now named in PID.NDST and fails
      to find the blocked vCPU on that pCPU's per-pCPU linked list: the vCPU
      sits on a *different* per-pCPU list, namely that of the original pCPU
      on which it entered the block phase.
      
      The regression was introduced by commit c112b5f5 ("KVM: x86:
      Recompute PID.ON when clearing PID.SN"). Therefore, partially revert
      it and reintroduce the condition in vmx_vcpu_pi_load() that avoids
      changing PID.NDST when loading a blocked vCPU.
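
      As a stand-alone sketch of the reintroduced condition (the struct
      layout, field encoding and vector value below are simplified stand-ins
      for the kernel's struct pi_desc; the real code also updates the
      descriptor inside a cmpxchg64 loop, since hardware may modify it
      concurrently):

        #include <stdbool.h>
        #include <stdint.h>

        /* Hypothetical, simplified posted-interrupt descriptor. */
        struct pi_desc {
            uint32_t ndst;  /* notification destination (target pCPU) */
            uint8_t  nv;    /* notification vector */
            bool     sn;    /* suppress notification */
        };

        #define POSTED_INTR_WAKEUP_VECTOR 0xf2  /* illustrative value */

        /* While the vCPU is blocked, PID.NV is POSTED_INTR_WAKEUP_VECTOR
         * and PID.NDST must keep pointing at the pCPU whose
         * wakeup_handler() owns the vCPU on its blocked list. */
        void pi_load_sketch(struct pi_desc *pid, uint32_t this_pcpu)
        {
            if (pid->nv == POSTED_INTR_WAKEUP_VECTOR) {
                pid->sn = false;      /* clear SN only, keep NDST */
                return;
            }
            pid->ndst = this_pcpu;    /* runnable vCPU: retarget */
            pid->sn = false;
        }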
      
      Fixes: c112b5f5 ("KVM: x86: Recompute PID.ON when clearing PID.SN")
      Tested-by: Nathan Ni <nathan.ni@oracle.com>
      Co-developed-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Consider PID.PIR to determine if vCPU has pending interrupts · 9482ae45
      Committed by Joao Martins
      Commit 17e433b5 ("KVM: Fix leak vCPU's VMCS value into other pCPU")
      introduced vmx_dy_apicv_has_pending_interrupt() to determine whether a
      vCPU has a pending posted interrupt. The routine is used by
      kvm_vcpu_on_spin() when searching for a new runnable vCPU to schedule
      on the pCPU instead of letting the current vCPU keep busy-looping.
      
      vmx_dy_apicv_has_pending_interrupt() determines whether a vCPU has a
      pending posted interrupt solely based on PID.ON. However, when a vCPU
      is preempted, vmx_vcpu_pi_put() sets PID.SN, which causes posted
      interrupts raised from then on to only set a bit in PID.PIR, without
      setting PID.ON (and without sending the notification vector), as
      described in VT-d specification section 5.2.3 "Interrupt-Posting
      Hardware Operation".
      
      Therefore, checking PID.ON is insufficient to determine whether a vCPU
      has pending posted interrupts; when PID.SN=1, we should also check
      whether any bit is set in PID.PIR.
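
      The fixed predicate, as a minimal sketch (the descriptor layout is
      simplified; in the kernel, ON and SN are bits of the descriptor's
      control word and PIR is a 256-bit bitmap):

        #include <stdbool.h>
        #include <stdint.h>

        struct pi_desc {
            uint64_t pir[4];  /* pending-vector bitmap (256 bits) */
            bool     on;      /* PID.ON: outstanding notification */
            bool     sn;      /* PID.SN: suppress notification */
        };

        static bool pir_is_empty(const struct pi_desc *pid)
        {
            return !(pid->pir[0] | pid->pir[1] | pid->pir[2] | pid->pir[3]);
        }

        /* Pending if PID.ON is set, or if notifications were suppressed
         * (PID.SN=1) and a vector was recorded in PID.PIR meanwhile. */
        bool vcpu_has_pending_posted_interrupt(const struct pi_desc *pid)
        {
            return pid->on || (pid->sn && !pir_is_empty(pid));
        }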
      
      Fixes: 17e433b5 ("KVM: Fix leak vCPU's VMCS value into other pCPU")
      Reviewed-by: Jagannathan Raman <jag.raman@oracle.com>
      Co-developed-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: VMX: Fix comment to specify PID.ON instead of PIR.ON · d9ff2744
      Committed by Liran Alon
      The Outstanding Notification (ON) bit is part of the Posted Interrupt
      Descriptor (PID) as opposed to the Posted Interrupts Register (PIR).
      The latter is a bitmap for pending vectors.
      Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
      Signed-off-by: Liran Alon <liran.alon@oracle.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: X86: Fix initialization of MSR lists · 7a5ee6ed
      Committed by Chenyi Qiang
      The three MSR lists (msrs_to_save[], emulated_msrs[] and
      msr_based_features[]) are global arrays in kvm.ko. They are adjusted
      (supported MSRs are copied forward over the unsupported ones) when
      kvm-{intel,amd}.ko is loaded, but they are not reset to their initial
      values when kvm-{intel,amd}.ko is removed. Thus, on the next load,
      kvm-{intel,amd}.ko operates on the already-modified arrays, with some
      MSRs lost and some duplicated.
      
      So define three constant arrays to hold the initial MSR lists and
      initialize msrs_to_save[], emulated_msrs[] and msr_based_features[]
      based on the constant arrays.
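
      A stand-alone sketch of the pattern (the MSR indices and the support
      check below are placeholders, not the real lists):

        #include <stdint.h>

        #define NUM_MSRS 3

        /* Immutable template: every module load starts from this. */
        static const uint32_t msrs_to_save_template[NUM_MSRS] = {
            0x10, 0x1b, 0xc1
        };

        /* Working copy, compacted to supported MSRs at init time. */
        static uint32_t msrs_to_save[NUM_MSRS];
        static unsigned int num_msrs_to_save;

        static int msr_supported(uint32_t msr) { return msr != 0x1b; }

        /* Rebuild the working list from the constant template on every
         * load, instead of compacting the same global array in place. */
        static void kvm_init_msr_list_sketch(void)
        {
            unsigned int i;

            num_msrs_to_save = 0;
            for (i = 0; i < NUM_MSRS; i++) {
                if (msr_supported(msrs_to_save_template[i]))
                    msrs_to_save[num_msrs_to_save++] =
                        msrs_to_save_template[i];
            }
        }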
      
      Cc: stable@vger.kernel.org
      Reviewed-by: Xiaoyao Li <xiaoyao.li@intel.com>
      Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
      [Remove now useless conditionals. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  2. 05 Nov 2019, 3 commits
    • x86/tsc: Respect tsc command line parameter for clocksource_tsc_early · 63ec58b4
      Committed by Michael Zhivich
      The introduction of clocksource_tsc_early broke the "tsc=reliable"
      and "tsc=nowatchdog" command line parameters, since
      clocksource_tsc_early is unconditionally registered with
      CLOCK_SOURCE_MUST_VERIFY and thus put on the watchdog list.
      
      This can cause the TSC to be declared unstable during boot:
      
        clocksource: timekeeping watchdog on CPU0: Marking clocksource
                     'tsc-early' as unstable because the skew is too large:
        clocksource: 'refined-jiffies' wd_now: fffb7018 wd_last: fffb6e9d
                     mask: ffffffff
        clocksource: 'tsc-early' cs_now: 68a6a7070f6a0 cs_last: 68a69ab6f74d6
                     mask: ffffffffffffffff
        tsc: Marking TSC unstable due to clocksource watchdog
      
      The corresponding elapsed times are cs_nsec=1224152026 and wd_nsec=378942392, so
      the watchdog differs from TSC by 0.84 seconds.
      
      This happens when HPET is not available and jiffies are used as the
      TSC watchdog instead, and the jiffies update does not happen due to
      lost timer interrupts in periodic mode. That can occur, for example,
      with expensive debug mechanisms enabled or under massive overload
      conditions in virtualized environments.
      
      Before the introduction of the early TSC clocksource the command line
      parameters "tsc=reliable" and "tsc=nowatchdog" could be used to work around
      this issue.
      
      Restore the behaviour by disabling the watchdog if requested on the kernel
      command line.
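
      A sketch of the restored behaviour (flag value and variable names are
      illustrative; the decision keys off the parsed tsc= options before the
      early clocksource is registered):

        #include <stdbool.h>

        #define CLOCK_SOURCE_MUST_VERIFY 0x01   /* illustrative value */

        struct clocksource { unsigned long flags; };

        /* Set while parsing "tsc=reliable" / "tsc=nowatchdog". */
        static bool tsc_clocksource_reliable;
        static bool no_tsc_watchdog;

        /* Drop MUST_VERIFY from the early TSC clocksource when the user
         * asked for no watchdog, so it never lands on the watchdog list. */
        static void tsc_early_register_sketch(struct clocksource *tsc_early)
        {
            if (tsc_clocksource_reliable || no_tsc_watchdog)
                tsc_early->flags &= ~CLOCK_SOURCE_MUST_VERIFY;
            /* ...then register the clocksource as before. */
        }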
      
      [ tglx: Clarify changelog ]
      
      Fixes: aa83c457 ("x86/tsc: Introduce early tsc clocksource")
      Signed-off-by: Michael Zhivich <mzhivich@akamai.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Link: https://lkml.kernel.org/r/20191024175945.14338-1-mzhivich@akamai.com
    • x86/dumpstack/64: Don't evaluate exception stacks before setup · e361362b
      Committed by Thomas Gleixner
      Cyrill reported the following crash:
      
        BUG: unable to handle page fault for address: 0000000000001ff0
        #PF: supervisor read access in kernel mode
        RIP: 0010:get_stack_info+0xb3/0x148
      
      It turns out that if the stack tracer is invoked before the exception
      stack mappings are initialized, in_exception_stack() can erroneously
      classify an invalid address as an address inside an exception stack:
      
          begin = this_cpu_read(cea_exception_stacks);  <- 0
          end = begin + sizeof(exception stacks);
      
      i.e. any address between 0 and end is considered an exception stack
      address, and the subsequent code then tries to dereference the
      resulting stack frame at a non-mapped address.
      
       end = begin + (unsigned long)ep->size;
           ==> end = 0x2000
      
       regs = (struct pt_regs *)end - 1;
           ==> regs = 0x2000 - sizeof(struct pt_regs) = 0x1f58
      
       info->next_sp   = (unsigned long *)regs->sp;
           ==> Crashes reading regs->sp at 0x1f58 +
               offsetof(struct pt_regs, sp) = 0x1ff0
      
      Prevent this by checking the validity of the cea_exception_stack base
      address and bailing out if it is zero.
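
      A minimal sketch of the added guard (sizes and layout are illustrative):

        #include <stdbool.h>

        /* Per-CPU base of the exception stacks; zero until the early
         * cpu_entry_area mapping is set up. */
        static unsigned long cea_exception_stacks;

        #define EXCEPTION_STACKS_SIZE 0x2000   /* illustrative total size */

        /* Refuse to classify any address while the base is still zero,
         * instead of treating [0, size) as valid exception stacks. */
        static bool in_exception_stack_sketch(unsigned long addr)
        {
            unsigned long begin = cea_exception_stacks;

            if (!begin)     /* not initialized yet: bail out */
                return false;

            return addr >= begin && addr < begin + EXCEPTION_STACKS_SIZE;
        }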
      
      Fixes: afcd21da ("x86/dumpstack/64: Use cpu_entry_area instead of orig_ist")
      Reported-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Tested-by: Cyrill Gorcunov <gorcunov@gmail.com>
      Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1910231950590.1852@nanos.tec.linutronix.de
    • x86/apic/32: Avoid bogus LDR warnings · fe6f85ca
      Committed by Jan Beulich
      The removal of the LDR initialization in the bigsmp_32 APIC code unearthed
      a problem in setup_local_APIC().
      
      The code checks unconditionally for a mismatch of the logical APIC id by
      comparing the early APIC id which was initialized in get_smp_config() with
      the actual LDR value in the APIC.
      
      Due to the removal of the bogus LDR initialization, the check can now
      trigger on bigsmp_32 APIC systems, emitting a warning for every
      booting CPU. This is of course a false positive because the APIC is
      not using logical destination mode.
      
      Restrict the check and the possibly resulting fixup to systems which are
      actually using the APIC in logical destination mode.
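
      The shape of the fix, as a stand-alone sketch (all names below are
      illustrative stand-ins for the APIC driver state, not the kernel's):

        #include <stdbool.h>
        #include <stdio.h>

        static bool apic_uses_logical_dest_mode;       /* e.g. flat mode */
        static unsigned int early_logical_apicid = 1;  /* get_smp_config() */

        static unsigned int apic_read_ldr(void) { return 1; /* stub */ }

        /* Only compare (and possibly fix up) the LDR when the APIC driver
         * actually runs in logical destination mode; bigsmp uses physical
         * mode and leaves the LDR alone, so the old unconditional check
         * warned spuriously. */
        static void check_ldr_sketch(void)
        {
            if (!apic_uses_logical_dest_mode)
                return;

            if (apic_read_ldr() != early_logical_apicid)
                printf("APIC: LDR mismatch, fixing up\n");
        }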
      
      [ tglx: Massaged changelog and added Cc stable ]
      
      Fixes: bae3a8d3 ("x86/apic: Do not initialize LDR and DFR for bigsmp")
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: stable@vger.kernel.org
      Link: https://lkml.kernel.org/r/666d8f91-b5a8-1afd-7add-821e72a35f03@suse.com
  3. 04 Nov 2019, 1 commit
  4. 31 Oct 2019, 2 commits
    • KVM: vmx, svm: always run with EFER.NXE=1 when shadow paging is active · 9167ab79
      Committed by Paolo Bonzini
      VMX already does so if the host has SMEP, in order to support the
      combination of CR0.WP=1 and CR4.SMEP=1. However, it is perfectly safe
      to always do so, and in fact VMX already ends up running with
      EFER.NXE=1 on old processors that lack the "load EFER" controls,
      because it may help avoid a slow MSR write. Removing all the
      conditionals simplifies the code.
      
      SVM does not have similar code, but it should since recent AMD processors do
      support SMEP.  So this patch also makes the code for the two vendors more similar
      while fixing NPT=0, CR0.WP=1 and CR4.SMEP=1 on AMD processors.
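
      The resulting rule, as a small sketch (names are illustrative; in the
      kernel the logic lives in the vendor-specific EFER setup paths):

        #include <stdbool.h>
        #include <stdint.h>

        #define EFER_NX (1ULL << 11)   /* EFER.NXE bit */

        /* With shadow paging (EPT/NPT disabled), run the guest with
         * EFER.NXE forced to 1 regardless of the guest's own value, so the
         * shadow MMU can use NX bits (needed for CR0.WP=1 + CR4.SMEP=1). */
        static uint64_t hw_efer_sketch(uint64_t guest_efer, bool tdp_enabled)
        {
            uint64_t efer = guest_efer;

            if (!tdp_enabled)
                efer |= EFER_NX;

            return efer;
        }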
      
      Cc: stable@vger.kernel.org
      Cc: Joerg Roedel <jroedel@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • x86, efi: Never relocate kernel below lowest acceptable address · 220dd769
      Committed by Kairui Song
      Currently, the kernel fails to boot on some Hyper-V VMs when using
      EFI, and this is a potential issue on all x86 platforms.
      
      It is caused by broken kernel relocation on EFI systems when the
      following three conditions are met:
      
      1. The kernel image is not loaded at the default address
         (LOAD_PHYSICAL_ADDR) by the loader.
      2. There isn't enough room to contain the kernel starting from the
         default load address (e.g. something else occupies part of the
         region).
      3. In the memmap provided by the EFI firmware, there is a memory
         region that starts below LOAD_PHYSICAL_ADDR and is suitable for
         containing the kernel.
      
      The EFI stub performs a kernel relocation when condition 1 is met, but
      due to condition 2 it can't relocate the kernel to the preferred
      address. So it falls back to asking the EFI firmware to allocate the
      lowest usable memory region, gets the low region mentioned in
      condition 3, and relocates the kernel there.
      
      This is incorrect, because LOAD_PHYSICAL_ADDR is the lowest acceptable
      kernel relocation address and the kernel must never be placed below it.
      
      The first thing that goes wrong is in arch/x86/boot/compressed/head_64.S:
      kernel decompression forces LOAD_PHYSICAL_ADDR as the output address
      if the kernel is located below it. The relocation before decompression,
      which moves the kernel to the end of the decompression buffer, then
      overwrites other memory regions, as there is not enough room there.
      
      To fix this, simply don't let the EFI stub relocate the kernel to any
      address lower than the lowest acceptable one.
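
      A sketch of the clamped allocation (alignment handling and EFI types
      omitted; efi_low_alloc_above() applies the same floor while walking
      the real EFI memory map):

        #include <stdint.h>

        #define LOAD_PHYSICAL_ADDR 0x1000000UL  /* typical 16 MiB default */

        struct mem_region { uint64_t start, size; };

        /* Scan for the lowest region that can hold len bytes, but never
         * return an address below the floor (min). */
        static uint64_t low_alloc_above_sketch(const struct mem_region *map,
                                               int nr, uint64_t len,
                                               uint64_t min)
        {
            int i;

            for (i = 0; i < nr; i++) {
                uint64_t start = map[i].start;
                uint64_t end   = map[i].start + map[i].size;

                if (start < min)
                    start = min;        /* clamp to the floor */
                if (start + len <= end)
                    return start;       /* lowest acceptable fit */
            }
            return 0;                   /* no fit found */
        }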
      
      [ ardb: introduce efi_low_alloc_above() to reduce the scope of the change ]
      Signed-off-by: Kairui Song <kasong@redhat.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-efi@vger.kernel.org
      Link: https://lkml.kernel.org/r/20191029173755.27149-6-ardb@kernel.org
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  5. 28 Oct 2019, 3 commits
    • perf/x86/uncore: Fix event group support · 75be6f70
      Committed by Kan Liang
      The events in the same group don't start or stop simultaneously.
      Here is the ftrace output when enabling an event group for uncore_iio_0:
      
        # perf stat -e "{uncore_iio_0/event=0x1/,uncore_iio_0/event=0xe/}"
      
                  <idle>-0     [000] d.h.  8959.064832: read_msr: a41, value b2b0b030
          // Read counter reg of IIO unit0 counter0
                  <idle>-0     [000] d.h.  8959.064835: write_msr: a48, value 400001
          // Write Ctrl reg of IIO unit0 counter0 to enable counter0.
          // <------ Although counter0 is enabled, the Unit Ctrl is still
          // frozen, so nothing counts yet. We are still good here.
                  <idle>-0     [000] d.h.  8959.064836: read_msr: a40, value 30100
          // Read Unit Ctrl reg of IIO unit0
                  <idle>-0     [000] d.h.  8959.064838: write_msr: a40, value 30000
          // Write Unit Ctrl reg of IIO unit0 to enable all counters in the
          // unit by clearing the Freeze bit. <------ Unit0 is un-frozen and
          // counter0 starts counting, but counter1 has not been enabled
          // yet. The issue starts here.
                  <idle>-0     [000] d.h.  8959.064846: read_msr: a42, value 0
          // Read counter reg of IIO unit0 counter1
                  <idle>-0     [000] d.h.  8959.064847: write_msr: a49, value 40000e
          // Write Ctrl reg of IIO unit0 counter1 to enable counter1.
          // <------ Only now does counter1 start counting. Counter0 has
          // been running for a while.
      
      The current code un-freezes the Unit Ctrl right after the first
      counter is enabled, so subsequent group events always lose some
      counter values.
      
      Implement pmu_enable and pmu_disable support for uncore, which can help
      to batch hardware accesses.
      
      No one uses uncore_enable_box and uncore_disable_box. Remove them.
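
      A minimal sketch of the resulting ordering (the struct and the write
      helper stand in for the Unit Ctrl MSR):

        #include <stdio.h>

        struct uncore_box { int frozen; };

        static void box_write_unit_ctl(struct uncore_box *box, int freeze)
        {
            box->frozen = freeze;       /* models the Unit Ctrl MSR write */
        }

        /* pmu_disable(): freeze the whole box before touching counters. */
        static void uncore_pmu_disable_sketch(struct uncore_box *box)
        {
            box_write_unit_ctl(box, 1); /* set the Freeze bit */
        }

        /* pmu_enable(): un-freeze once after all counters are programmed,
         * so the group members start counting simultaneously. */
        static void uncore_pmu_enable_sketch(struct uncore_box *box)
        {
            box_write_unit_ctl(box, 0); /* clear the Freeze bit */
        }

        int main(void)
        {
            struct uncore_box box = { .frozen = 1 };

            uncore_pmu_disable_sketch(&box);
            /* ...program counter0, counter1, ... while frozen... */
            uncore_pmu_enable_sketch(&box);
            printf("frozen=%d\n", box.frozen);
            return 0;
        }
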
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: linux-drivers-review@eclists.intel.com
      Cc: linux-perf@eclists.intel.com
      Fixes: 087bfbb0 ("perf/x86: Add generic Intel uncore PMU support")
      Link: https://lkml.kernel.org/r/1572014593-31591-1-git-send-email-kan.liang@linux.intel.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/amd/ibs: Handle erratum #420 only on the affected CPU family (10h) · e431e79b
      Committed by Kim Phillips
      This saves us writing the IBS control MSR twice when disabling the
      event.
      
      I searched the revision guides for all families since 10h and did not
      find any occurrence of erratum #420, nor anything remotely similar:
      so we isolate the secondary MSR write to family 10h only.
      
      Also unconditionally update the count mask for IBS Op implementations
      that have readable and writable current count (CurCnt) fields in
      addition to the MaxCnt field. These bits were reserved on prior
      implementations, and therefore shouldn't have a negative impact.
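
      A sketch of the gated workaround (the disable sequence is simplified;
      take the MSR index and enable-bit position as assumptions rather than
      authoritative values):

        #include <stdint.h>

        #define MSR_AMD64_IBSOPCTL 0xc0011033    /* assumed op control MSR */
        #define IBS_OP_ENABLE      (1ULL << 17)  /* assumed enable bit */

        static struct { uint8_t x86; } boot_cpu_data = { .x86 = 0x17 };

        static void wrmsr_stub(uint32_t msr, uint64_t val)
        {
            (void)msr; (void)val;  /* a real wrmsrl() in-kernel */
        }

        /* The extra MSR write for erratum #420 is now gated on family 10h,
         * the only family whose revision guides list the erratum. */
        static void perf_ibs_op_disable_sketch(uint64_t config)
        {
            if (boot_cpu_data.x86 == 0x10)
                wrmsr_stub(MSR_AMD64_IBSOPCTL, config & ~IBS_OP_ENABLE);

            wrmsr_stub(MSR_AMD64_IBSOPCTL, 0);
        }
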
      Signed-off-by: Kim Phillips <kim.phillips@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: c9574fe0 ("perf/x86-ibs: Implement workaround for IBS erratum #420")
      Link: https://lkml.kernel.org/r/20191023150955.30292-2-kim.phillips@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • perf/x86/amd/ibs: Fix reading of the IBS OpData register and thus precise RIP validity · 317b96bb
      Committed by Kim Phillips
      The loop that reads all the IBS MSRs into *buf stopped one MSR short of
      reading the IbsOpData register, which contains the RipInvalid status bit.
      
      Fix the offset_max assignment so the MSR gets read, making the
      RIP-invalid evaluation based on what the IBS hardware actually
      output, instead of whatever was left in memory.
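
      A simplified sketch of the read window (the real loop walks an offset
      bitmask; the base MSR and helper below are illustrative):

        #include <stdint.h>

        #define IBS_OP_BASE_MSR 0xc0011033  /* assumed base of the window */

        static uint64_t rdmsr_stub(uint32_t msr)
        {
            (void)msr;      /* a real rdmsrl() in-kernel */
            return 0;
        }

        /* offset_max bounds how far past the base MSR the sample reads.
         * The bug: offset_max stopped one short of the IbsOpData offset,
         * so RipInvalid was judged from stale buffer contents instead of
         * fresh hardware output. */
        static void read_ibs_msrs_sketch(uint64_t *buf,
                                         unsigned int offset_max)
        {
            unsigned int offset = 0;

            do {
                buf[offset] = rdmsr_stub(IBS_OP_BASE_MSR + offset);
                offset++;
            } while (offset <= offset_max);  /* must include IbsOpData */
        }
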
      Signed-off-by: Kim Phillips <kim.phillips@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: H. Peter Anvin <hpa@zytor.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Fixes: d47e8238 ("perf/x86-ibs: Take instruction pointer from ibs sample")
      Link: https://lkml.kernel.org/r/20191023150955.30292-1-kim.phillips@amd.com
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
  6. 25 Oct 2019, 1 commit
  7. 23 Oct 2019, 2 commits
    • KVM: nVMX: Don't leak L1 MMIO regions to L2 · 671ddc70
      Committed by Jim Mattson
      If the "virtualize APIC accesses" VM-execution control is set in the
      VMCS, the APIC virtualization hardware is triggered when a page walk
      in VMX non-root mode terminates at a PTE wherein the address of the 4k
      page frame matches the APIC-access address specified in the VMCS. On
      hardware, the APIC-access address may be any valid 4k-aligned physical
      address.
      
      KVM's nVMX implementation enforces the additional constraint that the
      APIC-access address specified in the vmcs12 must be backed by
      a "struct page" in L1. If not, L0 will simply clear the "virtualize
      APIC accesses" VM-execution control in the vmcs02.
      
      The problem with this approach is that the L1 guest has arranged the
      vmcs12 EPT tables--or shadow page tables, if the "enable EPT"
      VM-execution control is clear in the vmcs12--so that the L2 guest
      physical address(es)--or L2 guest linear address(es)--that reference
      the L2 APIC map to the APIC-access address specified in the
      vmcs12. Without the "virtualize APIC accesses" VM-execution control in
      the vmcs02, the APIC accesses in the L2 guest will directly access the
      APIC-access page in L1.
      
      When there is no mapping whatsoever for the APIC-access address in L1,
      the L2 VM just loses the intended APIC virtualization. However, when
      the APIC-access address is mapped to an MMIO region in L1, the L2
      guest gets direct access to the L1 MMIO device. For example, if the
      APIC-access address specified in the vmcs12 is 0xfee00000, then L2
      gets direct access to L1's APIC.
      
      Since this vmcs12 configuration is something that KVM cannot
      faithfully emulate, the appropriate response is to exit to userspace
      with KVM_INTERNAL_ERROR_EMULATION.
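
      A sketch of the new failure path (the plumbing is reduced to a plain
      struct; the constants mirror the uapi values but are restated here
      purely for illustration):

        #include <stdbool.h>
        #include <stdint.h>

        #define KVM_EXIT_INTERNAL_ERROR      17
        #define KVM_INTERNAL_ERROR_EMULATION 1

        struct kvm_run_sketch {
            uint32_t exit_reason;
            uint32_t suberror;
        };

        /* If the vmcs12 APIC-access address cannot be backed by a host
         * page (e.g. it points at L1 MMIO), refuse to run L2 with the
         * control silently cleared; report an emulation error instead. */
        static bool map_apic_access_page_sketch(struct kvm_run_sketch *run,
                                                bool backed_by_struct_page)
        {
            if (!backed_by_struct_page) {
                run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
                run->suberror    = KVM_INTERNAL_ERROR_EMULATION;
                return false;   /* abort the entry, exit to userspace */
            }
            return true;
        }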
      
      Fixes: fe3ef05c ("KVM: nVMX: Prepare vmcs02 from vmcs01 and vmcs12")
      Reported-by: Dan Cross <dcross@google.com>
      Signed-off-by: Jim Mattson <jmattson@google.com>
      Reviewed-by: Peter Shier <pshier@google.com>
      Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: SVM: Fix potential wrong physical id in avic_handle_ldr_update · 5c94ac5d
      Committed by Miaohe Lin
      The guest's physical APIC ID may not equal vcpu->vcpu_id in some
      cases. We may set the wrong physical ID in avic_handle_ldr_update
      because we always use vcpu->vcpu_id. Get the physical APIC ID from
      the vAPIC page instead.
      Export and use kvm_xapic_id here and in avic_handle_apic_id_update,
      as suggested by Vitaly.
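
      A stand-alone sketch of the helper (the APIC_ID offset follows the
      xAPIC register layout; the accessor is a simplified stand-in for the
      kernel's register read):

        #include <stdint.h>

        #define APIC_ID 0x20  /* xAPIC ID register offset in the APIC page */

        static uint32_t vapic_get_reg(const uint8_t *vapic_page,
                                      unsigned int off)
        {
            return *(const uint32_t *)(vapic_page + off);
        }

        /* In xAPIC mode the 8-bit APIC ID lives in bits 31:24 of the
         * APIC_ID register, so the guest-visible physical APIC ID comes
         * from the vAPIC page rather than from vcpu->vcpu_id. */
        static uint8_t kvm_xapic_id_sketch(const uint8_t *vapic_page)
        {
            return (uint8_t)(vapic_get_reg(vapic_page, APIC_ID) >> 24);
        }
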
      Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 22 Oct 2019, 6 commits
  9. 20 Oct 2019, 1 commit
  10. 18 Oct 2019, 2 commits
  11. 15 Oct 2019, 2 commits
  12. 12 Oct 2019, 10 commits
  13. 09 Oct 2019, 2 commits
    • perf/x86/amd: Change/fix NMI latency mitigation to use a timestamp · df4d2973
      Committed by Tom Lendacky
      It turns out that the NMI latency workaround from commit:
      
        6d3edaae ("x86/perf/amd: Resolve NMI latency issues for active PMCs")
      
      ends up being too conservative and results in the perf NMI handler claiming
      NMIs too easily on AMD hardware when the NMI watchdog is active.
      
      This has an impact, for example, on the hpwdt (HPE watchdog timer) module.
      This module can produce an NMI that is used to reset the system. It
      registers an NMI handler for the NMI_UNKNOWN type and relies on the fact
      that nothing has claimed an NMI so that its handler will be invoked when
      the watchdog device produces an NMI. After the referenced commit, the
      hpwdt module is unable to process its generated NMI if the NMI watchdog is
      active, because the current NMI latency mitigation results in the NMI
      being claimed by the perf NMI handler.
      
      Update the AMD perf NMI latency mitigation workaround to, instead, use a
      window of time. Whenever a PMC is handled in the perf NMI handler, set a
      timestamp which will act as a perf NMI window. Any NMIs arriving within
      that window will be claimed by perf. Anything outside that window will
      not be claimed by perf. The value for the NMI window is set to 100 msecs.
      This is a conservative value that easily covers any NMI latency in the
      hardware. While this still results in a window in which the hpwdt module
      will not receive its NMI, the window is now much, much smaller.
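
      A sketch of the window logic (per-CPU handling and jiffies plumbing
      reduced to plain globals):

        #define NMI_DONE    0
        #define NMI_HANDLED 1

        static unsigned long jiffies;          /* advancing tick count */
        static unsigned long perf_nmi_tstamp;  /* end of the claim window */
        static unsigned long perf_nmi_window;  /* e.g. 100 ms in jiffies */

        /* After handling PMC NMIs, open the window; unknown NMIs inside
         * the window are claimed as late PMC NMIs, ones outside it are
         * passed on, e.g. to the hpwdt handler. */
        static int amd_pmu_nmi_sketch(int handled)
        {
            if (handled) {
                perf_nmi_tstamp = jiffies + perf_nmi_window;
                return handled;
            }

            if ((long)(jiffies - perf_nmi_tstamp) > 0)
                return NMI_DONE;        /* window expired: don't claim */

            return NMI_HANDLED;         /* still in window: claim it */
        }
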
      Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
      Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Jerry Hoemann <jerry.hoemann@hpe.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Fixes: 6d3edaae ("x86/perf/amd: Resolve NMI latency issues for active PMCs")
      Link: https://lkml.kernel.org/r/Message-ID:
      Signed-off-by: Ingo Molnar <mingo@kernel.org>
    • x86/cpu: Add Comet Lake to the Intel CPU models header · 8d7c6ac3
      Committed by Kan Liang
      Comet Lake is the new 10th Gen Intel processor. Add two new CPU model
      numbers to the Intel family list.
      
      The CPU model numbers are not published in the SDM yet but they come
      from an authoritative internal source.
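
      For reference, the additions amount to two model defines; the values
      below are those in the upstream intel-family.h header, quoted here
      for convenience:

        /* 10th-generation Core "Comet Lake" model numbers (per upstream
         * arch/x86/include/asm/intel-family.h) */
        #define INTEL_FAM6_COMETLAKE    0xA5
        #define INTEL_FAM6_COMETLAKE_L  0xA6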
      
       [ bp: Touch up commit message. ]
      Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Reviewed-by: Tony Luck <tony.luck@intel.com>
      Cc: ak@linux.intel.com
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: x86-ml <x86@kernel.org>
      Link: https://lkml.kernel.org/r/1570549810-25049-2-git-send-email-kan.liang@linux.intel.com
  14. 08 Oct 2019, 1 commit