1. 09 Feb 2022 (1 commit)
  2. 05 Feb 2022 (1 commit)
    • lib/crypto: blake2s: avoid indirect calls to compression function for Clang CFI · d2a02e3c
      Committed by Jason A. Donenfeld
      blake2s_compress_generic is weakly aliased by blake2s_compress. The
      current harness for function selection uses a function pointer, which is
      ordinarily inlined and resolved at compile time. But when Clang's CFI is
      enabled, CFI still triggers when making an indirect call via a weak
      symbol. This seems like a bug in Clang's CFI, as though it's bucketing
      weak symbols and strong symbols differently. It also only seems to
      trigger when "full LTO" mode is used, rather than "thin LTO".
      
      [    0.000000][    T0] Kernel panic - not syncing: CFI failure (target: blake2s_compress_generic+0x0/0x1444)
      [    0.000000][    T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.16.0-mainline-06981-g076c855b846e #1
      [    0.000000][    T0] Hardware name: MT6873 (DT)
      [    0.000000][    T0] Call trace:
      [    0.000000][    T0]  dump_backtrace+0xfc/0x1dc
      [    0.000000][    T0]  dump_stack_lvl+0xa8/0x11c
      [    0.000000][    T0]  panic+0x194/0x464
      [    0.000000][    T0]  __cfi_check_fail+0x54/0x58
      [    0.000000][    T0]  __cfi_slowpath_diag+0x354/0x4b0
      [    0.000000][    T0]  blake2s_update+0x14c/0x178
      [    0.000000][    T0]  _extract_entropy+0xf4/0x29c
      [    0.000000][    T0]  crng_initialize_primary+0x24/0x94
      [    0.000000][    T0]  rand_initialize+0x2c/0x6c
      [    0.000000][    T0]  start_kernel+0x2f8/0x65c
      [    0.000000][    T0]  __primary_switched+0xc4/0x7be4
      [    0.000000][    T0] Rebooting in 5 seconds..
      
      Nonetheless, the function pointer method isn't so terrific anyway, so
      this patch replaces it with a simple boolean, which also gets inlined
      away. This successfully works around the Clang bug.
      
      In general, I'm not too keen on all of the indirection involved here; it
      clearly does more harm than good. Hopefully the whole thing can get
      cleaned up down the road when lib/crypto is overhauled more
      comprehensively. But for now, we go with a simple bandaid.
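The workaround can be illustrated with a small userspace sketch (a model of the idea only, not the kernel code; every name below is a stand-in): selecting the routine with a boolean that the compiler can fold away means both call sites are direct calls, so Clang's CFI has no indirect call through a weak symbol to instrument.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for blake2s_compress_generic() and an arch-optimized variant. */
static int compress_generic(int x) { return x + 1; }
static int compress_arch(int x)    { return x + 1; }

/* Instead of resolving a function pointer (which may survive as an
 * indirect call and trip CFI when the target is a weak alias), branch on
 * a boolean.  Both paths are direct calls and trivially inlinable. */
static int blake2s_update_model(int x, bool force_generic)
{
    if (force_generic)
        return compress_generic(x);
    return compress_arch(x);
}
```

The function-pointer harness only worked because the pointer was ordinarily constant-folded; the boolean makes that property explicit instead of relying on the optimizer.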
      
      Fixes: 6048fdcc ("lib/crypto: blake2s: include as built-in")
       Link: https://github.com/ClangBuiltLinux/linux/issues/1567
       Reported-by: Miles Chen <miles.chen@mediatek.com>
       Tested-by: Miles Chen <miles.chen@mediatek.com>
       Tested-by: Nathan Chancellor <nathan@kernel.org>
       Tested-by: John Stultz <john.stultz@linaro.org>
       Acked-by: Nick Desaulniers <ndesaulniers@google.com>
       Reviewed-by: Eric Biggers <ebiggers@google.com>
       Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
  3. 04 Feb 2022 (2 commits)
    • KVM: x86: Use ERR_PTR_USR() to return -EFAULT as a __user pointer · 6e37ec88
      Committed by Sean Christopherson
       Use ERR_PTR_USR() when returning -EFAULT from kvm_get_attr_addr();
       otherwise sparse complains about implicitly casting the kernel pointer
       from ERR_PTR() into a __user pointer.
      
      >> arch/x86/kvm/x86.c:4342:31: sparse: sparse: incorrect type in return expression
         (different address spaces) @@     expected void [noderef] __user * @@     got void * @@
         arch/x86/kvm/x86.c:4342:31: sparse:     expected void [noderef] __user *
         arch/x86/kvm/x86.c:4342:31: sparse:     got void *
      
      No functional change intended.
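The kernel's ERR_PTR() idiom and the one-place __user cast the fix adds can be modelled in userspace as follows (a sketch only; `__user` is a sparse annotation that is a no-op at runtime, modelled here as an empty macro, and kvm_get_attr_addr_model() is a hypothetical stand-in):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* The kernel encodes small negative errnos in the top of the pointer range. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

/* The fix: perform the __user cast explicitly in one helper macro, so
 * sparse sees an explicit address-space conversion rather than an
 * implicit one at every return statement. */
#define __user
#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))

static void __user *kvm_get_attr_addr_model(int bad)
{
    if (bad)
        return ERR_PTR_USR(-EFAULT);   /* no sparse warning here */
    return (void __user *)0x1000;      /* some valid userspace address */
}
```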
      
      Fixes: 56f289a8 ("KVM: x86: Add a helper to retrieve userspace address from kvm_device_attr")
       Reported-by: kernel test robot <lkp@intel.com>
       Signed-off-by: Sean Christopherson <seanjc@google.com>
       Message-Id: <20220202005157.2545816-1-seanjc@google.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Report deprecated x87 features in supported CPUID · e3bcfda0
      Committed by Jim Mattson
      CPUID.(EAX=7,ECX=0):EBX.FDP_EXCPTN_ONLY[bit 6] and
      CPUID.(EAX=7,ECX=0):EBX.ZERO_FCS_FDS[bit 13] are "defeature"
      bits. Unlike most of the other CPUID feature bits, these bits are
      clear if the features are present and set if the features are not
      present. These bits should be reported in KVM_GET_SUPPORTED_CPUID,
      because if these bits are set on hardware, they cannot be cleared in
      the guest CPUID. Doing so would claim guest support for a feature that
      the hardware doesn't support and that can't be efficiently emulated.
      
       Of course, any software (e.g. WIN87EM.DLL) expecting these features to
      be present likely predates these CPUID feature bits and therefore
      doesn't know to check for them anyway.
      
      Aaron Lewis added the corresponding X86_FEATURE macros in
      commit cbb99c0f ("x86/cpufeatures: Add FDP_EXCPTN_ONLY and
      ZERO_FCS_FDS"), with the intention of reporting these bits in
      KVM_GET_SUPPORTED_CPUID, but I was unable to find a proposed patch on
      the kvm list.
      
      Opportunistically reordered the CPUID_7_0_EBX capability bits from
      least to most significant.
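The inverted sense of these bits is easy to get wrong, so here is a minimal sketch (bit positions are from the commit text; the helper names are illustrative, not KVM's): a set bit means the legacy x87 behaviour is absent.

```c
#include <assert.h>
#include <stdint.h>

/* CPUID.(EAX=7,ECX=0):EBX "defeature" bits. */
#define FDP_EXCPTN_ONLY_BIT 6    /* FDP updated only on x87 exceptions */
#define ZERO_FCS_FDS_BIT    13   /* FCS/FDS read as zero */

/* Inverted sense: the legacy behaviour is present only when the bit is clear. */
static int x87_fdp_always_updated(uint32_t ebx)
{
    return !(ebx & (1u << FDP_EXCPTN_ONLY_BIT));
}

static int x87_fcs_fds_valid(uint32_t ebx)
{
    return !(ebx & (1u << ZERO_FCS_FDS_BIT));
}
```

This inversion is exactly why the bits must be reported in KVM_GET_SUPPORTED_CPUID: clearing them in the guest CPUID would advertise legacy behaviour the hardware no longer implements.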
      
      Cc: Aaron Lewis <aaronlewis@google.com>
       Signed-off-by: Jim Mattson <jmattson@google.com>
       Message-Id: <20220204001348.2844660-1-jmattson@google.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. 03 Feb 2022 (4 commits)
    • KVM: arm64: Workaround Cortex-A510's single-step and PAC trap errata · 1dd498e5
      Committed by James Morse
      Cortex-A510's erratum #2077057 causes SPSR_EL2 to be corrupted when
      single-stepping authenticated ERET instructions. A single step is
      expected, but a pointer authentication trap is taken instead. The
      erratum causes SPSR_EL1 to be copied to SPSR_EL2, which could allow
      EL1 to cause a return to EL2 with a guest controlled ELR_EL2.
      
      Because the conditions require an ERET into active-not-pending state,
      this is only a problem for the EL2 when EL2 is stepping EL1. In this case
      the previous SPSR_EL2 value is preserved in struct kvm_vcpu, and can be
      restored.
      
      Cc: stable@vger.kernel.org # 53960faf: arm64: Add Cortex-A510 CPU part definition
      Cc: stable@vger.kernel.org
       Signed-off-by: James Morse <james.morse@arm.com>
       [maz: fixup cpucaps ordering]
       Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220127122052.1584324-5-james.morse@arm.com
    • KVM: arm64: Stop handle_exit() from handling HVC twice when an SError occurs · 1229630a
      Committed by James Morse
       Prior to commit defe21f4 ("KVM: arm64: Move PC rollback on SError to
       HYP"), when an SError is synchronised due to another exception, KVM
       handles the SError first. If the guest survives, the instruction that
       triggered the original exception is re-executed to handle the first
       exception. HVC is treated as a special case as the instruction wouldn't
       normally be re-executed, as it's not a trap.
      
      Commit defe21f4 didn't preserve the behaviour of the 'return 1'
      that skips the rest of handle_exit().
      
      Since commit defe21f4, KVM will try to handle the SError and the
      original exception at the same time. When the exception was an HVC,
      fixup_guest_exit() has already rolled back ELR_EL2, meaning if the
      guest has virtual SError masked, it will execute and handle the HVC
      twice.
      
      Restore the original behaviour.
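The restored control flow can be sketched as a toy model (names and structure are illustrative, not the kernel's): when an SError was synchronised and the exit was an HVC, ELR_EL2 has already been rolled back, so handle_exit() must return early instead of also running the HVC handler.

```c
#include <assert.h>

enum exit_kind { EXIT_HVC, EXIT_TRAP };

static int hvc_runs;   /* counts how many times the HVC handler ran */

static int handle_exit_model(enum exit_kind kind, int serror_pending)
{
    if (serror_pending) {
        /* Deliver the SError first; since ELR_EL2 was rolled back,
         * the HVC will re-trap and be handled on the next exit.
         * This is the 'return 1' that commit defe21f4 dropped. */
        if (kind == EXIT_HVC)
            return 1;
    }
    if (kind == EXIT_HVC)
        hvc_runs++;            /* normal HVC handling */
    return 1;
}
```

Without the early return, the model would bump hvc_runs on the SError path too, which is the "HVC handled twice" bug.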
      
      Fixes: defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP")
      Cc: stable@vger.kernel.org
       Signed-off-by: James Morse <james.morse@arm.com>
       Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220127122052.1584324-4-james.morse@arm.com
    • KVM: arm64: Avoid consuming a stale esr value when SError occur · 1c71dbc8
      Committed by James Morse
      When any exception other than an IRQ occurs, the CPU updates the ESR_EL2
      register with the exception syndrome. An SError may also become pending,
      and will be synchronised by KVM. KVM notes the exception type, and whether
      an SError was synchronised in exit_code.
      
      When an exception other than an IRQ occurs, fixup_guest_exit() updates
      vcpu->arch.fault.esr_el2 from the hardware register. When an SError was
      synchronised, the vcpu esr value is used to determine if the exception
      was due to an HVC. If so, ELR_EL2 is moved back one instruction. This
      is so that KVM can process the SError first, and re-execute the HVC if
      the guest survives the SError.
      
      But if an IRQ synchronises an SError, the vcpu's esr value is stale.
      If the previous non-IRQ exception was an HVC, KVM will corrupt ELR_EL2,
      causing an unrelated guest instruction to be executed twice.
      
       Check ARM_EXCEPTION_CODE() before messing with ELR_EL2; IRQs don't
       update this register, so no check of the esr value is needed for them.
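The guard can be sketched as a toy model (names and constants below are illustrative stand-ins, not the kernel's): only a trap exit may consult the saved ESR and roll ELR_EL2 back, because an IRQ exit leaves the saved ESR stale.

```c
#include <assert.h>

#define EXC_TRAP   0
#define EXC_IRQ    1
#define ESR_EC_HVC 0x16

static unsigned long maybe_rollback(int exc_code, unsigned saved_esr_ec,
                                    unsigned long elr)
{
    if (exc_code != EXC_TRAP)
        return elr;     /* IRQ exit: saved ESR is stale, leave ELR alone */
    if (saved_esr_ec == ESR_EC_HVC)
        return elr - 4; /* re-execute the HVC after the SError is handled */
    return elr;
}
```

Without the exc_code check, an IRQ that synchronises an SError while the stale ESR still says "HVC" would rewind ELR_EL2 and replay an unrelated guest instruction.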
      
      Fixes: defe21f4 ("KVM: arm64: Move PC rollback on SError to HYP")
      Cc: stable@vger.kernel.org
       Reported-by: Steven Price <steven.price@arm.com>
       Signed-off-by: James Morse <james.morse@arm.com>
       Signed-off-by: Marc Zyngier <maz@kernel.org>
      Link: https://lore.kernel.org/r/20220127122052.1584324-3-james.morse@arm.com
    • x86/Xen: streamline (and fix) PV CPU enumeration · e25a8d95
      Committed by Jan Beulich
      This started out with me noticing that "dom0_max_vcpus=<N>" with <N>
      larger than the number of physical CPUs reported through ACPI tables
      would not bring up the "excess" vCPU-s. Addressing this is the primary
      purpose of the change; CPU maps handling is being tidied only as far as
      is necessary for the change here (with the effect of also avoiding the
      setting up of too much per-CPU infrastructure, i.e. for CPUs which can
      never come online).
      
      Noticing that xen_fill_possible_map() is called way too early, whereas
      xen_filter_cpu_maps() is called too late (after per-CPU areas were
      already set up), and further observing that each of the functions serves
      only one of Dom0 or DomU, it looked like it was better to simplify this.
      Use the .get_smp_config hook instead, uniformly for Dom0 and DomU.
      xen_fill_possible_map() can be dropped altogether, while
      xen_filter_cpu_maps() is re-purposed but not otherwise changed.
       Signed-off-by: Jan Beulich <jbeulich@suse.com>
       Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
       Link: https://lore.kernel.org/r/2dbd5f0a-9859-ca2d-085e-a02f7166c610@suse.com
       Signed-off-by: Juergen Gross <jgross@suse.com>
  5. 02 Feb 2022 (6 commits)
    • RISC-V: KVM: Fix SBI implementation version · 40327154
      Committed by Anup Patel
      The SBI implementation version returned by KVM RISC-V should be the
      Host Linux version code.
      
      Fixes: c62a7685 ("RISC-V: KVM: Add SBI v0.2 base extension")
       Signed-off-by: Anup Patel <apatel@ventanamicro.com>
       Reviewed-by: Atish Patra <atishp@rivosinc.com>
       Signed-off-by: Anup Patel <anup@brainfault.org>
    • RISC-V: KVM: make CY, TM, and IR counters accessible in VU mode · de1d7b6a
      Committed by Mayuresh Chitale
       Applications that run in VU mode and access the time CSR cause a
       virtual instruction trap, as the guest kernel currently does not
       initialize the scounteren CSR.
       
       To fix this, we should make the CY, TM, and IR counters accessible
       by default in VU mode (similar to OpenSBI).
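The default value the commit describes amounts to setting three low bits of scounteren at vCPU init (bit positions follow the RISC-V privileged spec; the helper name is an illustrative stand-in, not KVM's):

```c
#include <assert.h>

/* scounteren gates user-mode (VU) access to the hardware counters. */
#define SCOUNTEREN_CY (1u << 0)   /* cycle   CSR */
#define SCOUNTEREN_TM (1u << 1)   /* time    CSR */
#define SCOUNTEREN_IR (1u << 2)   /* instret CSR */

/* Make CY, TM and IR readable from VU mode by default, so a guest user
 * program reading the time CSR no longer takes a virtual instruction trap. */
static unsigned vcpu_scounteren_init(void)
{
    return SCOUNTEREN_CY | SCOUNTEREN_TM | SCOUNTEREN_IR;
}
```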
      
       Fixes: a33c72fa ("RISC-V: KVM: Implement VCPU create, init and destroy functions")
      Cc: stable@vger.kernel.org
       Signed-off-by: Mayuresh Chitale <mchitale@ventanamicro.com>
       Signed-off-by: Anup Patel <anup@brainfault.org>
    • kvm/riscv: rework guest entry logic · 6455317e
      Committed by Mark Rutland
      In kvm_arch_vcpu_ioctl_run() we enter an RCU extended quiescent state
      (EQS) by calling guest_enter_irqoff(), and unmask IRQs prior to exiting
      the EQS by calling guest_exit(). As the IRQ entry code will not wake RCU
      in this case, we may run the core IRQ code and IRQ handler without RCU
      watching, leading to various potential problems.
      
      Additionally, we do not inform lockdep or tracing that interrupts will
       be enabled during guest execution, which can lead to misleading traces
      and warnings that interrupts have been enabled for overly-long periods.
      
      This patch fixes these issues by using the new timing and context
      entry/exit helpers to ensure that interrupts are handled during guest
      vtime but with RCU watching, with a sequence:
      
      	guest_timing_enter_irqoff();
      
      	guest_state_enter_irqoff();
      	< run the vcpu >
      	guest_state_exit_irqoff();
      
      	< take any pending IRQs >
      
      	guest_timing_exit_irqoff();
      
      Since instrumentation may make use of RCU, we must also ensure that no
      instrumented code is run during the EQS. I've split out the critical
       section into a new kvm_riscv_vcpu_enter_exit() helper which is marked
      noinstr.
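The nesting described above can be modelled in userspace by recording the order of the steps (a sketch only: the real helpers manipulate RCU, lockdep and vtime state, while here they just append to a log):

```c
#include <assert.h>
#include <string.h>

static char order[64];
static void log_step(const char *s) { strcat(order, s); }

/* Timing section brackets the whole run; the EQS ("state") section
 * brackets only the noinstr vcpu run; pending IRQs are taken between
 * state-exit and timing-exit, so they are accounted as guest vtime
 * but run with RCU watching. */
static void guest_timing_enter_irqoff(void) { log_step("T+"); }
static void guest_state_enter_irqoff(void)  { log_step("S+"); }
static void guest_state_exit_irqoff(void)   { log_step("S-"); }
static void take_pending_irqs(void)         { log_step("I"); }
static void guest_timing_exit_irqoff(void)  { log_step("T-"); }

static void run_vcpu_model(void)
{
    guest_timing_enter_irqoff();
    guest_state_enter_irqoff();
    log_step("R");              /* < run the vcpu > */
    guest_state_exit_irqoff();
    take_pending_irqs();
    guest_timing_exit_irqoff();
}
```

The same shape is used by the arm64, x86 and mips reworks later in this series.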
      
      Fixes: 99cdc6c1 ("RISC-V: Add initial skeletal KVM support")
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Albert Ou <aou@eecs.berkeley.edu>
      Cc: Anup Patel <anup@brainfault.org>
      Cc: Atish Patra <atishp@atishpatra.org>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Palmer Dabbelt <palmer@dabbelt.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Paul Walmsley <paul.walmsley@sifive.com>
       Tested-by: Anup Patel <anup@brainfault.org>
       Signed-off-by: Anup Patel <anup@brainfault.org>
    • perf/x86/intel/pt: Fix crash with stop filters in single-range mode · 1d909345
      Committed by Tristan Hume
      Add a check for !buf->single before calling pt_buffer_region_size in a
      place where a missing check can cause a kernel crash.
      
      Fixes a bug introduced by commit 67063847 ("perf/x86/intel/pt:
      Opportunistically use single range output mode"), which added a
      support for PT single-range output mode. Since that commit if a PT
      stop filter range is hit while tracing, the kernel will crash because
      of a null pointer dereference in pt_handle_status due to calling
      pt_buffer_region_size without a ToPA configured.
      
      The commit which introduced single-range mode guarded almost all uses of
      the ToPA buffer variables with checks of the buf->single variable, but
      missed the case where tracing was stopped by the PT hardware, which
      happens when execution hits a configured stop filter.
      
      Tested that hitting a stop filter while PT recording successfully
      records a trace with this patch but crashes without this patch.
      
      Fixes: 67063847 ("perf/x86/intel/pt: Opportunistically use single range output mode")
       Signed-off-by: Tristan Hume <tristan@thume.ca>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
       Reviewed-by: Adrian Hunter <adrian.hunter@intel.com>
      Cc: stable@kernel.org
      Link: https://lkml.kernel.org/r/20220127220806.73664-1-tristan@thume.ca
    • x86/perf: Default set FREEZE_ON_SMI for all · a01994f5
      Committed by Peter Zijlstra
      Kyle reported that rr[0] has started to malfunction on Comet Lake and
      later CPUs due to EFI starting to make use of CPL3 [1] and the PMU
      event filtering not distinguishing between regular CPL3 and SMM CPL3.
      
      Since this is a privilege violation, default disable SMM visibility
      where possible.
      
      Administrators wanting to observe SMM cycles can easily change this
      using the sysfs attribute while regular users don't have access to
      this file.
      
      [0] https://rr-project.org/
      
      [1] See the Intel white paper "Trustworthy SMM on the Intel vPro Platform"
      at https://bugzilla.kernel.org/attachment.cgi?id=300300, particularly the
      end of page 5.
       Reported-by: Kyle Huey <me@kylehuey.com>
       Suggested-by: Andrew Cooper <Andrew.Cooper3@citrix.com>
       Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: stable@kernel.org
      Link: https://lkml.kernel.org/r/YfKChjX61OW4CkYm@hirez.programming.kicks-ass.net
    • kvm/arm64: rework guest entry logic · 8cfe148a
      Committed by Mark Rutland
      In kvm_arch_vcpu_ioctl_run() we enter an RCU extended quiescent state
       (EQS) by calling guest_enter_irqoff(), and unmask IRQs prior to
      exiting the EQS by calling guest_exit(). As the IRQ entry code will not
      wake RCU in this case, we may run the core IRQ code and IRQ handler
      without RCU watching, leading to various potential problems.
      
      Additionally, we do not inform lockdep or tracing that interrupts will
       be enabled during guest execution, which can lead to misleading traces
      and warnings that interrupts have been enabled for overly-long periods.
      
      This patch fixes these issues by using the new timing and context
      entry/exit helpers to ensure that interrupts are handled during guest
      vtime but with RCU watching, with a sequence:
      
      	guest_timing_enter_irqoff();
      
      	guest_state_enter_irqoff();
      	< run the vcpu >
      	guest_state_exit_irqoff();
      
      	< take any pending IRQs >
      
      	guest_timing_exit_irqoff();
      
      Since instrumentation may make use of RCU, we must also ensure that no
      instrumented code is run during the EQS. I've split out the critical
       section into a new kvm_arm_vcpu_enter_exit() helper which is marked
      noinstr.
      
      Fixes: 1b3d546d ("arm/arm64: KVM: Properly account for guest CPU time")
       Reported-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Reviewed-by: Marc Zyngier <maz@kernel.org>
       Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
      Cc: Alexandru Elisei <alexandru.elisei@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will@kernel.org>
      Message-Id: <20220201132926.3301912-3-mark.rutland@arm.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. 01 Feb 2022 (5 commits)
    • kvm/x86: rework guest entry logic · b2d2af7e
      Committed by Mark Rutland
      For consistency and clarity, migrate x86 over to the generic helpers for
      guest timing and lockdep/RCU/tracing management, and remove the
      x86-specific helpers.
      
      Prior to this patch, the guest timing was entered in
       kvm_guest_enter_irqoff() (called by vmx_vcpu_enter_exit() and
      svm_vcpu_enter_exit()), and was exited by the call to
      vtime_account_guest_exit() within vcpu_enter_guest().
      
      To minimize duplication and to more clearly balance entry and exit, both
      entry and exit of guest timing are placed in vcpu_enter_guest(), using
      the new guest_timing_{enter,exit}_irqoff() helpers. When context
      tracking is used a small amount of additional time will be accounted
       towards guests; tick-based accounting is unaffected as IRQs are
      disabled at this point and not enabled until after the return from the
      guest.
      
      This also corrects (benign) mis-balanced context tracking accounting
      introduced in commits:
      
        ae95f566 ("KVM: X86: TSCDEADLINE MSR emulation fastpath")
        26efe2fd ("KVM: VMX: Handle preemption timer fastpath")
      
      Where KVM can enter a guest multiple times, calling vtime_guest_enter()
      without a corresponding call to vtime_account_guest_exit(), and with
      vtime_account_system() called when vtime_account_guest() should be used.
      As account_system_time() checks PF_VCPU and calls account_guest_time(),
      this doesn't result in any functional problem, but is unnecessarily
      confusing.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
       Acked-by: Paolo Bonzini <pbonzini@redhat.com>
       Reviewed-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: Dave Hansen <dave.hansen@linux.intel.com>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Joerg Roedel <joro@8bytes.org>
      Cc: Sean Christopherson <seanjc@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Message-Id: <20220201132926.3301912-4-mark.rutland@arm.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • kvm/mips: rework guest entry logic · 72e32445
      Committed by Mark Rutland
      In kvm_arch_vcpu_ioctl_run() we use guest_enter_irqoff() and
      guest_exit_irqoff() directly, with interrupts masked between these. As
      we don't handle any timer ticks during this window, we will not account
      time spent within the guest as guest time, which is unfortunate.
      
      Additionally, we do not inform lockdep or tracing that interrupts will
       be enabled during guest execution, which can lead to misleading traces
      and warnings that interrupts have been enabled for overly-long periods.
      
      This patch fixes these issues by using the new timing and context
      entry/exit helpers to ensure that interrupts are handled during guest
      vtime but with RCU watching, with a sequence:
      
      	guest_timing_enter_irqoff();
      
      	guest_state_enter_irqoff();
      	< run the vcpu >
      	guest_state_exit_irqoff();
      
      	< take any pending IRQs >
      
      	guest_timing_exit_irqoff();
      
      In addition, as guest exits during the "run the vcpu" step are handled
      by kvm_mips_handle_exit(), a wrapper function is added which ensures
       that such exits are handled with a sequence:
      
      	guest_state_exit_irqoff();
      	< handle the exit >
      	guest_state_enter_irqoff();
      
      This means that exits which stop the vCPU running will have a redundant
      guest_state_enter_irqoff() .. guest_state_exit_irqoff() sequence, which
      can be addressed with future rework.
      
      Since instrumentation may make use of RCU, we must also ensure that no
      instrumented code is run during the EQS. I've split out the critical
       section into a new kvm_mips_vcpu_enter_exit() helper which is marked
      noinstr.
       Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
      Cc: Frederic Weisbecker <frederic@kernel.org>
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Paul E. McKenney <paulmck@kernel.org>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Message-Id: <20220201132926.3301912-6-mark.rutland@arm.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • KVM: x86: Move delivery of non-APICv interrupt into vendor code · 57dfd7b5
      Committed by Sean Christopherson
      Handle non-APICv interrupt delivery in vendor code, even though it means
      VMX and SVM will temporarily have duplicate code.  SVM's AVIC has a race
      condition that requires KVM to fall back to legacy interrupt injection
      _after_ the interrupt has been logged in the vIRR, i.e. to fix the race,
      SVM will need to open code the full flow anyways[*].  Refactor the code
       so that the SVM bug can be fixed without introducing other issues, e.g.
       SVM would return "success" and thus invoke trace_kvm_apicv_accept_irq()
       even when
      return "success" and thus invoke trace_kvm_apicv_accept_irq() even when
      delivery through the AVIC failed, and to opportunistically prepare for
      using KVM_X86_OP to fill each vendor's kvm_x86_ops struct, which will
      rely on the vendor function matching the kvm_x86_op pointer name.
      
      No functional change intended.
      
       [*] https://lore.kernel.org/all/20211213104634.199141-4-mlevitsk@redhat.com
       Signed-off-by: Sean Christopherson <seanjc@google.com>
       Message-Id: <20220128005208.4008533-3-seanjc@google.com>
       Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • MIPS: KVM: fix vz.c kernel-doc notation · 2161ba07
      Committed by Randy Dunlap
      Fix all kernel-doc warnings in mips/kvm/vz.c as reported by the
      kernel test robot:
      
        arch/mips/kvm/vz.c:471: warning: Function parameter or member 'out_compare' not described in '_kvm_vz_save_htimer'
        arch/mips/kvm/vz.c:471: warning: Function parameter or member 'out_cause' not described in '_kvm_vz_save_htimer'
        arch/mips/kvm/vz.c:471: warning: Excess function parameter 'compare' description in '_kvm_vz_save_htimer'
        arch/mips/kvm/vz.c:471: warning: Excess function parameter 'cause' description in '_kvm_vz_save_htimer'
        arch/mips/kvm/vz.c:1551: warning: No description found for return value of 'kvm_trap_vz_handle_cop_unusable'
        arch/mips/kvm/vz.c:1552: warning: expecting prototype for kvm_trap_vz_handle_cop_unusuable(). Prototype was for kvm_trap_vz_handle_cop_unusable() instead
        arch/mips/kvm/vz.c:1597: warning: No description found for return value of 'kvm_trap_vz_handle_msa_disabled'
      
      Fixes: c992a4f6 ("KVM: MIPS: Implement VZ support")
      Fixes: f4474d50 ("KVM: MIPS/VZ: Support hardware guest timer")
       Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
       Reported-by: kernel test robot <lkp@intel.com>
      Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
      Cc: linux-mips@vger.kernel.org
      Cc: Huacai Chen <chenhuacai@kernel.org>
      Cc: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
      Cc: James Hogan <jhogan@kernel.org>
      Cc: kvm@vger.kernel.org
       Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
    • MIPS: octeon: Fix missed PTR->PTR_WD conversion · 50317b63
      Committed by Thomas Bogendoerfer
      Fixes: fa62f39d ("MIPS: Fix build error due to PTR used in more places")
       Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  7. 30 Jan 2022 (1 commit)
  8. 29 Jan 2022 (1 commit)
  9. 28 Jan 2022 (13 commits)
  10. 27 Jan 2022 (6 commits)