1. 23 November 2020 (1 commit)
  2. 18 November 2020 (2 commits)
  3. 17 November 2020 (1 commit)
    • x86/microcode/intel: Check patch signature before saving microcode for early loading · 1a371e67
      Committed by Chen Yu
      Currently, scan_microcode() leverages microcode_matches() to check
      if the microcode matches the CPU by comparing the family and model.
      However, the processor stepping and flags of the microcode signature
      should also be considered when saving a microcode patch for early
      update.
      
      Use find_matching_signature() in scan_microcode() and get rid of the
      now-unused microcode_matches(), which is a good cleanup in itself.
      
      Complete the verification of the patch being saved for early loading in
      save_microcode_patch() directly. This needs to happen there as well
      because save_mc_for_early() also calls save_microcode_patch().
      
      The second reason this is needed is that the loader still tries to
      support, at least hypothetically, mixed-stepping systems and thus adds
      to the cache all patches that belong to the same CPU model, albeit with
      different steppings.
      
      For example:
      
        microcode: CPU: sig=0x906ec, pf=0x2, rev=0xd6
        microcode: mc_saved[0]: sig=0x906e9, pf=0x2a, rev=0xd6, total size=0x19400, date = 2020-04-23
        microcode: mc_saved[1]: sig=0x906ea, pf=0x22, rev=0xd6, total size=0x19000, date = 2020-04-27
        microcode: mc_saved[2]: sig=0x906eb, pf=0x2, rev=0xd6, total size=0x19400, date = 2020-04-23
        microcode: mc_saved[3]: sig=0x906ec, pf=0x22, rev=0xd6, total size=0x19000, date = 2020-04-27
        microcode: mc_saved[4]: sig=0x906ed, pf=0x22, rev=0xd6, total size=0x19400, date = 2020-04-23
      
      The patch which is being saved for early loading, however, can only be
      the one which fits the CPU this runs on, so do the signature
      verification before saving.
      
       [ bp: Do signature verification in save_microcode_patch()
             and rewrite commit message. ]
      
      Fixes: ec400dde ("x86/microcode_intel_early.c: Early update ucode on Intel's CPU")
      Signed-off-by: Chen Yu <yu.c.chen@intel.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Cc: stable@vger.kernel.org
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=208535
      Link: https://lkml.kernel.org/r/20201113015923.13960-1-yu.c.chen@intel.com
  4. 15 November 2020 (1 commit)
    • kvm: mmu: fix is_tdp_mmu_check when the TDP MMU is not in use · c887c9b9
      Committed by Paolo Bonzini
      In some cases where shadow paging is in use, the root page will
      be either mmu->pae_root or vcpu->arch.mmu->lm_root.  Then it will
      not have an associated struct kvm_mmu_page, because it is allocated
      with alloc_page instead of kvm_mmu_alloc_page.
      
      Just return false quickly from is_tdp_mmu_root if the TDP MMU is
      not in use, which also includes the case where shadow paging is
      enabled.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 13 November 2020 (4 commits)
  6. 11 November 2020 (2 commits)
  7. 10 November 2020 (5 commits)
  8. 09 November 2020 (1 commit)
    • x86/xen: don't unbind uninitialized lock_kicker_irq · 65cae188
      Committed by Brian Masney
      When booting a hyperthreaded system with the kernel parameter
      'mitigations=auto,nosmt', the following warning occurs:
      
          WARNING: CPU: 0 PID: 1 at drivers/xen/events/events_base.c:1112 unbind_from_irqhandler+0x4e/0x60
          ...
          Hardware name: Xen HVM domU, BIOS 4.2.amazon 08/24/2006
          ...
          Call Trace:
           xen_uninit_lock_cpu+0x28/0x62
           xen_hvm_cpu_die+0x21/0x30
           takedown_cpu+0x9c/0xe0
           ? trace_suspend_resume+0x60/0x60
           cpuhp_invoke_callback+0x9a/0x530
           _cpu_up+0x11a/0x130
           cpu_up+0x7e/0xc0
           bringup_nonboot_cpus+0x48/0x50
           smp_init+0x26/0x79
           kernel_init_freeable+0xea/0x229
           ? rest_init+0xaa/0xaa
           kernel_init+0xa/0x106
           ret_from_fork+0x35/0x40
      
      With the nosmt mitigation, the secondary CPUs are not activated and
      only the primary thread on each CPU core is used. In this situation,
      xen_hvm_smp_prepare_cpus(), and more importantly xen_init_lock_cpu(),
      is not called, so lock_kicker_irq is not initialized for the secondary
      CPUs. Fix this by exiting early in xen_uninit_lock_cpu() if the irq is
      not set, which avoids the above warning for each secondary CPU.
      Signed-off-by: Brian Masney <bmasney@redhat.com>
      Link: https://lore.kernel.org/r/20201107011119.631442-1-bmasney@redhat.com
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
  9. 08 November 2020 (6 commits)
  10. 07 November 2020 (3 commits)
  11. 06 November 2020 (1 commit)
    • x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP · 1978b3a5
      Committed by Anand K Mistry
      On AMD CPUs which have the feature X86_FEATURE_AMD_STIBP_ALWAYS_ON,
      STIBP is set to on and
      
        spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED
      
      At the same time, IBPB can be set to conditional.
      
      However, this leads to the case where it's impossible to turn on IBPB
      for a process because in the PR_SPEC_DISABLE case in ib_prctl_set() the
      
        spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED
      
      condition leads to a return before the task flag is set. Similarly,
      ib_prctl_get() will return PR_SPEC_DISABLE even though IBPB is set to
      conditional.
      
      More generally, the following cases are possible:
      
      1. STIBP = conditional && IBPB = on for spectre_v2_user=seccomp,ibpb
      2. STIBP = on && IBPB = conditional for AMD CPUs with
         X86_FEATURE_AMD_STIBP_ALWAYS_ON
      
      The first case functions correctly today, but only because
      spectre_v2_user_ibpb isn't updated to reflect the IBPB mode.
      
      At a high level, this change does one thing. If either STIBP or IBPB
      is set to conditional, allow the prctl to change the task flag.
      Also, reflect that capability when querying the state. This isn't
      perfect since it doesn't take into account if only STIBP or IBPB is
      unconditionally on. But it allows the conditional feature to work as
      expected, without affecting the unconditional one.
      
       [ bp: Massage commit message and comment; space out statements for
         better readability. ]
      
      Fixes: 21998a35 ("x86/speculation: Avoid force-disabling IBPB based on STIBP and enhanced IBRS.")
      Signed-off-by: Anand K Mistry <amistry@google.com>
      Signed-off-by: Borislav Petkov <bp@suse.de>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
      Link: https://lkml.kernel.org/r/20201105163246.v2.1.Ifd7243cd3e2c2206a893ad0a5b9a4f19549e22c6@changeid
  12. 04 November 2020 (1 commit)
  13. 31 October 2020 (4 commits)
  14. 30 October 2020 (3 commits)
  15. 29 October 2020 (3 commits)
  16. 28 October 2020 (2 commits)