1. November 11, 2019 · 1 commit
    • KVM: Fix NULL-ptr deref after kvm_create_vm fails · 8a44119a
      By Paolo Bonzini
      Reported by syzkaller:
      
          kasan: CONFIG_KASAN_INLINE enabled
          kasan: GPF could be caused by NULL-ptr deref or user memory access
          general protection fault: 0000 [#1] PREEMPT SMP KASAN
          CPU: 0 PID: 14727 Comm: syz-executor.3 Not tainted 5.4.0-rc4+ #0
          RIP: 0010:kvm_coalesced_mmio_init+0x5d/0x110 arch/x86/kvm/../../../virt/kvm/coalesced_mmio.c:121
          Call Trace:
           kvm_dev_ioctl_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:3446 [inline]
           kvm_dev_ioctl+0x781/0x1490 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3494
           vfs_ioctl fs/ioctl.c:46 [inline]
           file_ioctl fs/ioctl.c:509 [inline]
           do_vfs_ioctl+0x196/0x1150 fs/ioctl.c:696
           ksys_ioctl+0x62/0x90 fs/ioctl.c:713
           __do_sys_ioctl fs/ioctl.c:720 [inline]
           __se_sys_ioctl fs/ioctl.c:718 [inline]
           __x64_sys_ioctl+0x6e/0xb0 fs/ioctl.c:718
           do_syscall_64+0xca/0x5d0 arch/x86/entry/common.c:290
           entry_SYSCALL_64_after_hwframe+0x49/0xbe
      
      Commit 9121923c ("kvm: Allocate memslots and buses before calling kvm_arch_init_vm")
      moved the memslot and bus allocations around; however, if initialization of
      kvm->srcu or kvm->irq_srcu fails, NULL is returned instead of an error code.
      The NULL is not intercepted in kvm_dev_ioctl_create_vm() and gets dereferenced
      by kvm_coalesced_mmio_init(). This patch fixes it.
      
      Moving the initialization is required anyway to avoid an incorrect synchronize_srcu that
      was also reported by syzkaller:
      
       wait_for_completion+0x29c/0x440 kernel/sched/completion.c:136
       __synchronize_srcu+0x197/0x250 kernel/rcu/srcutree.c:921
       synchronize_srcu_expedited kernel/rcu/srcutree.c:946 [inline]
       synchronize_srcu+0x239/0x3e8 kernel/rcu/srcutree.c:997
       kvm_page_track_unregister_notifier+0xe7/0x130 arch/x86/kvm/page_track.c:212
       kvm_mmu_uninit_vm+0x1e/0x30 arch/x86/kvm/mmu.c:5828
       kvm_arch_destroy_vm+0x4a2/0x5f0 arch/x86/kvm/x86.c:9579
       kvm_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:702 [inline]
      
      so do it.
      
      Reported-by: syzbot+89a8060879fa0bd2db4f@syzkaller.appspotmail.com
      Reported-by: syzbot+e27e7027eb2b80e44225@syzkaller.appspotmail.com
      Fixes: 9121923c ("kvm: Allocate memslots and buses before calling kvm_arch_init_vm")
      Cc: Jim Mattson <jmattson@google.com>
      Cc: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
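
      A minimal sketch of the fixed error path, assuming the usual kernel
      helpers (init_srcu_struct(), ERR_PTR(), IS_ERR()); labels and ordering
      are illustrative, not the verbatim diff:

        static struct kvm *kvm_create_vm(unsigned long type)
        {
                struct kvm *kvm = kvm_arch_alloc_vm();
                int r = -ENOMEM;

                if (!kvm)
                        return ERR_PTR(-ENOMEM);

                /* Init SRCU early, before anything can dereference kvm */
                if (init_srcu_struct(&kvm->srcu))
                        goto out_err_no_srcu;       /* was: return NULL */
                if (init_srcu_struct(&kvm->irq_srcu))
                        goto out_err_no_irq_srcu;

                /* ... memslot/bus allocation, kvm_arch_init_vm(), ... */
                return kvm;

        out_err_no_irq_srcu:
                cleanup_srcu_struct(&kvm->srcu);
        out_err_no_srcu:
                kvm_arch_free_vm(kvm);
                return ERR_PTR(r);                  /* caller uses IS_ERR() */
        }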
  2. October 31, 2019 · 1 commit
  3. October 25, 2019 · 1 commit
  4. October 22, 2019 · 1 commit
  5. October 20, 2019 · 3 commits
    • KVM: arm64: pmu: Reset sample period on overflow handling · 8c3252c0
      By Marc Zyngier
      The PMU emulation code uses the perf event sample period to trigger
      the overflow detection. This works fine for the *first* overflow
      handling, but results in a huge number of interrupts on the host,
      unrelated to the number of interrupts handled in the guest (a x20
      factor is pretty common for the cycle counter). On a slow system
      (such as a SW model), this can result in the guest only making
      forward progress at a glacial pace.
      
      It turns out that the clue is in the name. The sample period is
      exactly that: a period. Once an overflow has occurred, the
      following period should be the full width of the associated
      counter, instead of whatever the guest had initially programmed.
      
      Reset the sample period to the architected value in the overflow
      handler, which now results in a number of host interrupts that is
      much closer to the number of interrupts in the guest.
      
      Fixes: b02386eb ("arm64: KVM: Add PMU overflow interrupt routing")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
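
      A sketch of the overflow handler change, assuming the perf_event
      layout used by the arm64 PMU code; names follow the commit text,
      not the verbatim diff:

        static void kvm_pmu_perf_overflow(struct perf_event *perf_event,
                                          struct perf_sample_data *data,
                                          struct pt_regs *regs)
        {
                struct kvm_pmc *pmc = perf_event->overflow_handler_context;
                struct kvm_vcpu *vcpu = kvm_pmc_to_vcpu(pmc);
                u64 period;

                /*
                 * Reset the period to the full width of the counter: the
                 * next overflow must come a whole counter-width later,
                 * not after the guest's initial value again.  (The real
                 * code stops/restarts the event around this.)
                 */
                period = -(local64_read(&perf_event->count));
                if (!kvm_pmu_idx_is_64bit(vcpu, pmc->idx))
                        period &= GENMASK(31, 0);   /* 32-bit event counter */

                local64_set(&perf_event->hw.period_left, 0);
                perf_event->attr.sample_period = period;
                perf_event->hw.sample_period = period;

                /* ... then mark the overflow and kick the vcpu ... */
        }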
    • KVM: arm64: pmu: Set the CHAINED attribute before creating the in-kernel event · 725ce669
      By Marc Zyngier
      The current convention for KVM to request a chained event from the
      host PMU is to set bit[0] in attr.config1 (PERF_ATTR_CFG1_KVM_PMU_CHAINED).
      
      But as it turns out, this bit gets set *after* we create the kernel
      event that backs our virtual counter, meaning that we never get
      a 64bit counter.
      
      Moving the setting to an earlier point solves the problem.
      
      Fixes: 80f393a2 ("KVM: arm/arm64: Support chained PMU counters")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
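
      A sketch of the reordering, assuming the helpers named in the commit
      (kvm_pmu_pmc_is_chained(), PERF_ATTR_CFG1_KVM_PMU_CHAINED); only the
      ordering is the point:

        struct perf_event_attr attr = { /* type, config, sample_period ... */ };
        struct perf_event *event;

        /* Set the chained bit *before* the event is created ... */
        if (kvm_pmu_pmc_is_chained(pmc))
                attr.config1 |= PERF_ATTR_CFG1_KVM_PMU_CHAINED;  /* bit[0] */

        /* ... so the host PMU actually backs us with a 64-bit counter. */
        event = perf_event_create_kernel_counter(&attr, -1, current,
                                                 kvm_pmu_perf_overflow, pmc);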
    • KVM: arm64: pmu: Fix cycle counter truncation · f4e23cf9
      By Marc Zyngier
      When a counter is disabled, its value is sampled before the event is
      disabled, and the sampled value is written back to the shadow register.
      
      In that process, the value gets truncated to 32bit, which is adequate
      for any counter but the cycle counter (defined as a 64bit counter).
      
      This obviously results in a corrupted counter, and things like
      "perf record -e cycles" not working at all when run in a guest...
      A similar, but less critical bug exists in kvm_pmu_get_counter_value.
      
      Make the truncation conditional on the counter not being the cycle
      counter, which results in a minor code reorganisation.
      
      Fixes: 80f393a2 ("KVM: arm/arm64: Support chained PMU counters")
      Reviewed-by: Andrew Murray <andrew.murray@arm.com>
      Reported-by: Julien Thierry <julien.thierry.kdev@gmail.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
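
      A sketch of the conditional truncation, with helper names taken from
      the surrounding arm64 PMU code as an assumption:

        u64 counter = kvm_pmu_get_pair_counter_value(vcpu, pmc);

        /* The cycle counter is architecturally 64 bits wide; only
         * the event counters get truncated to 32 bits. */
        if (pmc->idx != ARMV8_PMU_CYCLE_IDX)
                counter = lower_32_bits(counter);

        __vcpu_sys_reg(vcpu, reg) = counter;   /* write the shadow register */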
  6. October 1, 2019 · 1 commit
  7. September 18, 2019 · 1 commit
  8. September 11, 2019 · 1 commit
  9. September 9, 2019 · 1 commit
    • KVM: arm/arm64: vgic: Allow more than 256 vcpus for KVM_IRQ_LINE · 92f35b75
      By Marc Zyngier
      While parts of the VGIC support a large number of vcpus (we
      bravely allow up to 512), other parts are more limited.
      
      One of these limits is visible in the KVM_IRQ_LINE ioctl, which
      only allows 256 vcpus to be signalled when using the CPU or PPI
      types. Unfortunately, we've cornered ourselves badly by allocating
      all the bits in the irq field.
      
      Since the irq_type subfield (8 bit wide) is currently only taking
      the values 0, 1 and 2 (and we have been careful not to allow anything
      else), let's reduce this field to only 4 bits, and allocate the
      remaining 4 bits to a vcpu2_index, which acts as a multiplier:
      
        vcpu_id = 256 * vcpu2_index + vcpu_index
      
      With that, and a new capability (KVM_CAP_ARM_IRQ_LINE_LAYOUT_2)
      allowing this to be discovered, it becomes possible to inject
      PPIs to up to 4096 vcpus. But please just don't.
      
      Whilst we're there, add a clarification about the use of KVM_IRQ_LINE
      on arm, which is not completely conditioned by KVM_CAP_IRQCHIP.

      Reported-by: Zenghui Yu <yuzenghui@huawei.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
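
      A stand-alone model of the reworked irq-field layout; the bit
      positions are an assumption based on the description above, not
      the uapi header:

        #include <stdint.h>
        #include <stdio.h>

        static uint32_t decode_vcpu_id(uint32_t irq_field)
        {
                uint32_t vcpu2_index = (irq_field >> 24) & 0xf; /* new 4 bits */
                uint32_t vcpu_index  = (irq_field >> 16) & 0xff;

                return 256 * vcpu2_index + vcpu_index;  /* as in the commit */
        }

        int main(void)
        {
                /* vcpu2_index = 2, vcpu_index = 5 -> vcpu_id = 517 */
                uint32_t field = (2u << 24) | (5u << 16);

                printf("vcpu_id = %u\n", decode_vcpu_id(field));
                return 0;
        }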
  10. August 28, 2019 · 1 commit
  11. August 27, 2019 · 1 commit
  12. August 25, 2019 · 2 commits
  13. August 24, 2019 · 1 commit
    • KVM: arm/arm64: VGIC: Properly initialise private IRQ affinity · 2e16f3e9
      By Andre Przywara
      At the moment we initialise the target *mask* of a virtual IRQ to the
      VCPU it belongs to, even though this mask is only defined for GICv2 and
      quickly runs out of bits for many GICv3 guests.
      This behaviour triggers an UBSAN complaint for more than 32 VCPUs:
      ------
      [ 5659.462377] UBSAN: Undefined behaviour in virt/kvm/arm/vgic/vgic-init.c:223:21
      [ 5659.471689] shift exponent 32 is too large for 32-bit type 'unsigned int'
      ------
      Also for GICv3 guests the reporting of TARGET in the "vgic-state" debugfs
      dump is wrong, due to this very same problem.
      
      Because there is no requirement to create the VGIC device before the
      VCPUs (and QEMU actually does it the other way round), we can't safely
      initialise mpidr or targets in kvm_vgic_vcpu_init(). But since we touch
      every private IRQ for each VCPU anyway later (in vgic_init()), we can
      just move the initialisation of those fields into there, where we
      definitely know the VGIC type.
      
      On the way make sure we really have either a VGICv2 or a VGICv3 device,
      since the existing code is just checking for "VGICv3 or not", silently
      ignoring the uninitialised case.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Reported-by: Dave Martin <dave.martin@arm.com>
      Tested-by: Julien Grall <julien.grall@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
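
      A sketch of the moved initialisation inside vgic_init(), where
      dist->vgic_model is finally known; structure follows the commit text:

        for (i = 0; i < VGIC_NR_PRIVATE_IRQS; i++) {
                struct vgic_irq *irq = &vgic_cpu->private_irqs[i];

                switch (dist->vgic_model) {
                case KVM_DEV_TYPE_ARM_VGIC_V3:
                        irq->group = 1;
                        irq->mpidr = kvm_vcpu_get_mpidr_aff(vcpu);
                        break;
                case KVM_DEV_TYPE_ARM_VGIC_V2:
                        irq->group = 0;
                        irq->targets = 1U << vcpu->vcpu_id; /* a mask: <= 32 */
                        break;
                default:
                        return -EINVAL;   /* neither VGICv2 nor VGICv3 */
                }
        }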
  14. August 22, 2019 · 1 commit
    • KVM: arm/arm64: Only skip MMIO insn once · 2113c5f6
      By Andrew Jones
      If after an MMIO exit to userspace a VCPU is immediately run with an
      immediate_exit request, such as when a signal is delivered or an MMIO
      emulation completion is needed, then the VCPU completes the MMIO
      emulation and immediately returns to userspace. As the exit_reason
      does not get changed from KVM_EXIT_MMIO in these cases we have to
      be careful not to complete the MMIO emulation again, when the VCPU is
      eventually run again, because the emulation does an instruction skip
      (and doing too many skips would be a waste of guest code :-) We need
      to use additional VCPU state to track if the emulation is complete.
      As luck would have it, we already have 'mmio_needed', which other
      architectures already appear to use in this way.
      
      Fixes: 0d640732 ("arm64: KVM: Skip MMIO insn after emulation")
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Andrew Jones <drjones@redhat.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
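
      A sketch of the guard, simplified from the description above
      (vcpu->mmio_needed gating the completion path):

        if (run->exit_reason == KVM_EXIT_MMIO && vcpu->mmio_needed) {
                ret = kvm_handle_mmio_return(vcpu, run);
                vcpu->mmio_needed = 0;   /* never skip the insn twice */
                if (ret)
                        return ret;
        }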
  15. August 19, 2019 · 12 commits
  16. August 9, 2019 · 2 commits
  17. August 5, 2019 · 5 commits
    • KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block · 5eeaf10e
      By Marc Zyngier
      Since commit 328e5664 ("KVM: arm/arm64: vgic: Defer
      touching GICH_VMCR to vcpu_load/put"), we leave ICH_VMCR_EL2 (or
      its GICv2 equivalent) loaded as long as we can, only syncing it
      back when we're scheduled out.
      
      There is a small snag with that though: kvm_vgic_vcpu_pending_irq(),
      which is indirectly called from kvm_vcpu_check_block(), needs to
      evaluate the guest's view of ICC_PMR_EL1. At the point where we
      call kvm_vcpu_check_block(), the vcpu is still loaded, and any
      change to PMR is not visible in memory until we do a vcpu_put().
      
      Things go really south if the guest does the following:
      
      	mov x0, #0	// or any small value masking interrupts
      	msr ICC_PMR_EL1, x0
      
      	[vcpu preempted, then rescheduled, VMCR sampled]
      
      	mov x0, #0xff	// allow all interrupts
      	msr ICC_PMR_EL1, x0
      	wfi		// traps to EL2, so VMCR is sampled
      
      	[interrupt arrives just after WFI]
      
      Here, the hypervisor's view of PMR is zero, while the guest has enabled
      its interrupts. kvm_vgic_vcpu_pending_irq() will then say that no
      interrupts are pending (despite an interrupt being received) and we'll
      block for no reason. If the guest doesn't have a periodic interrupt
      firing once it has blocked, it will stay there forever.
      
      To avoid this unfortunate situation, let's resync VMCR from
      kvm_arch_vcpu_blocking(), ensuring that a following kvm_vcpu_check_block()
      will observe the latest value of PMR.
      
      This has been found by booting an arm64 Linux guest with the pseudo NMI
      feature, and thus using interrupt priorities to mask interrupts instead
      of the usual PSTATE masking.
      
      Cc: stable@vger.kernel.org # 4.12
      Fixes: 328e5664 ("KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put")
      Signed-off-by: Marc Zyngier <maz@kernel.org>
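
      A sketch of the resync hook, with kvm_vgic_vmcr_sync() standing in
      for whatever helper performs the VMCR write-back:

        void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
        {
                /*
                 * Sync ICH_VMCR_EL2 (or GICH_VMCR) back to memory so that
                 * kvm_vgic_vcpu_pending_irq() sees the guest's latest PMR.
                 */
                kvm_vgic_vmcr_sync(vcpu);
        }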
    • KVM: no need to check return value of debugfs_create functions · 3e7093d0
      By Greg Kroah-Hartman
      When calling debugfs functions, there is no need to ever check the
      return value.  The function can work or not, but the code logic should
      never do something different based on this.
      
      Also, when doing this, change kvm_arch_create_vcpu_debugfs() to return
      void instead of an integer, as we should not care at all about whether
      this function actually does anything or not.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Radim Krčmář" <rkrcmar@redhat.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Borislav Petkov <bp@alien8.de>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: <x86@kernel.org>
      Cc: <kvm@vger.kernel.org>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
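
      A sketch of the resulting pattern (the file name and fops here are
      illustrative): create the file, ignore the result, return void:

        void kvm_arch_create_vcpu_debugfs(struct kvm_vcpu *vcpu)
        {
                /* No error check: debugfs working or not must never
                 * change the kernel's behaviour. */
                debugfs_create_file("tsc-offset", 0444, vcpu->debugfs_dentry,
                                    vcpu, &vcpu_tsc_offset_fops);
        }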
    • KVM: remove kvm_arch_has_vcpu_debugfs() · 741cbbae
      By Paolo Bonzini
      There is no need for this function, as all arches have to implement
      kvm_arch_create_vcpu_debugfs() no matter what.  A #define symbol
      lets us actually simplify the code.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
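
      A sketch of the #define approach, with the shape assumed from the
      commit text: the arch that implements the hook advertises it, and
      generic code compiles the call out everywhere else:

        /* arch header (e.g. asm/kvm_host.h) defines the marker ... */
        #define __KVM_HAVE_ARCH_VCPU_DEBUGFS

        /* ... and the generic vcpu-creation path tests it: */
        #ifdef __KVM_HAVE_ARCH_VCPU_DEBUGFS
                kvm_arch_create_vcpu_debugfs(vcpu);
        #endif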
    • KVM: Fix leak vCPU's VMCS value into other pCPU · 17e433b5
      By Wanpeng Li
      After commit d73eb57b (KVM: Boost vCPUs that are delivering interrupts),
      a five-year-old bug was exposed. Running the ebizzy benchmark in three
      80-vCPU VMs on one 80-pCPU Skylake server produced a lot of rcu_sched
      stall warnings in the VMs after stress testing:
      
       INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
       Call Trace:
         flush_tlb_mm_range+0x68/0x140
         tlb_flush_mmu.part.75+0x37/0xe0
         tlb_finish_mmu+0x55/0x60
         zap_page_range+0x142/0x190
         SyS_madvise+0x3cd/0x9c0
         system_call_fastpath+0x1c/0x21
      
      swait_active() remains true before finish_swait() is called in
      kvm_vcpu_block(), and taking voluntarily preempted vCPUs into account
      in the kvm_vcpu_on_spin() loop greatly increases the probability that
      the condition kvm_arch_vcpu_runnable(vcpu) is checked and found true.
      When APICv is enabled, the yield-candidate vCPU's VMCS RVI field then
      leaks (via vmx_sync_pir_to_irr()) into the current VMCS of the vCPU
      that is spinning on a taken lock.
      
      This patch fixes it by conservatively checking a subset of events.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Marc Zyngier <Marc.Zyngier@arm.com>
      Cc: stable@vger.kernel.org
      Fixes: 98f4a146 (KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop)
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
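
      A sketch of the conservative check in the direction the commit
      describes: consult only state that is safe to read without loading
      the candidate's VMCS (helper and field names are illustrative):

        bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
        {
                if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
                        return true;

                if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
                    kvm_test_request(KVM_REQ_EVENT, vcpu))
                        return true;

                /* Never call vmx_sync_pir_to_irr() from here: that is
                 * what leaked RVI into the spinning vCPU's VMCS. */
                return false;
        }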
    • KVM: Check preempted_in_kernel for involuntary preemption · 046ddeed
      By Wanpeng Li
      preempted_in_kernel is updated in the preempt notifier when involuntary
      preemption occurs, so it can be stale by the time voluntarily preempted
      vCPUs are taken into account by the kvm_vcpu_on_spin() loop. This patch
      makes the check consult preempted_in_kernel only for involuntary preemption.
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
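
      A sketch of the adjusted filter in the kvm_vcpu_on_spin() loop; the
      condition shape is illustrative, not the verbatim diff:

        /* Only trust preempted_in_kernel for involuntary preemption;
         * it is only ever updated from the preempt notifier. */
        if (READ_ONCE(vcpu->preempted) && yield_to_kernel_mode &&
            !vcpu->preempted_in_kernel)
                continue;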
  18. July 26, 2019 · 1 commit
    • KVM: arm: vgic-v3: Mark expected switch fall-through · 1a8248c7
      By Anders Roxell
      When fall-through warnings were enabled by default, the following
      warnings started to show up:
      
      ../virt/kvm/arm/hyp/vgic-v3-sr.c: In function ‘__vgic_v3_save_aprs’:
      ../virt/kvm/arm/hyp/vgic-v3-sr.c:351:24: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
         cpu_if->vgic_ap0r[2] = __vgic_v3_read_ap0rn(2);
         ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
      ../virt/kvm/arm/hyp/vgic-v3-sr.c:352:2: note: here
        case 6:
        ^~~~
      ../virt/kvm/arm/hyp/vgic-v3-sr.c:353:24: warning: this statement may fall
       through [-Wimplicit-fallthrough=]
         cpu_if->vgic_ap0r[1] = __vgic_v3_read_ap0rn(1);
         ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~
      ../virt/kvm/arm/hyp/vgic-v3-sr.c:354:2: note: here
        default:
        ^~~~~~~
      
      Rework so that the compiler doesn't warn about fall-through.
      
      Fixes: d93512ef0f0e ("Makefile: Globally enable fall-through warning")
      Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
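
      A sketch of the warning sites with the cascade marked as intentional,
      following the shape of __vgic_v3_save_aprs() quoted above:

        switch (nr_pre_bits) {
        case 7:
                cpu_if->vgic_ap0r[3] = __vgic_v3_read_ap0rn(3);
                cpu_if->vgic_ap0r[2] = __vgic_v3_read_ap0rn(2);
                /* Fall through */
        case 6:
                cpu_if->vgic_ap0r[1] = __vgic_v3_read_ap0rn(1);
                /* Fall through */
        default:
                cpu_if->vgic_ap0r[0] = __vgic_v3_read_ap0rn(0);
        }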
  19. July 24, 2019 · 1 commit
  20. July 23, 2019 · 1 commit
  21. July 20, 2019 · 1 commit
    • KVM: Boost vCPUs that are delivering interrupts · d73eb57b
      By Wanpeng Li
      Inspired by commit 9cac38dd (KVM/s390: Set preempted flag during
      vcpu wakeup and interrupt delivery), we want to boost not just
      lock holders but also vCPUs that are delivering interrupts. Most
      smp_call_function_many calls are synchronous, so the IPI target vCPUs
      are also good yield candidates.  This patch introduces vcpu->ready to
      boost vCPUs during wakeup and interrupt delivery time; unlike s390 we do
      not reuse vcpu->preempted so that voluntarily preempted vCPUs are taken
      into account by kvm_vcpu_on_spin, but vmx_vcpu_pi_put is not affected
      (VT-d PI handles voluntary preemption separately, in pi_pre_block).
      
      Testing on an 80-HT, 2-socket Xeon Skylake server, with 80-vCPU,
      80GB-RAM VMs running ebizzy -M:
      
                  vanilla     boosting    improved
      1VM          21443       23520         9%
      2VM           2800        8000       180%
      3VM           1800        3100        72%
      
      Testing on my 8-HT Haswell desktop, with two 8-vCPU, 8GB-RAM VMs:
      one running ebizzy -M, the other running 'stress --cpu 2':
      
      w/ boosting + w/o pv sched yield(vanilla)
      
                  vanilla     boosting   improved
                    1570         4000      155%
      
      w/ boosting + w/ pv sched yield(vanilla)
      
                  vanilla     boosting   improved
                    1844         5157      179%
      
      w/o boosting, perf top in VM:
      
       72.33%  [kernel]       [k] smp_call_function_many
        4.22%  [kernel]       [k] call_function_interrupt
        3.71%  [kernel]       [k] async_page_fault
      
      w/ boosting, perf top in VM:
      
       38.43%  [kernel]       [k] smp_call_function_many
        6.31%  [kernel]       [k] async_page_fault
        6.13%  libc-2.23.so   [.] __memcpy_avx_unaligned
        4.88%  [kernel]       [k] call_function_interrupt
      
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Radim Krčmář <rkrcmar@redhat.com>
      Cc: Christian Borntraeger <borntraeger@de.ibm.com>
      Cc: Paul Mackerras <paulus@ozlabs.org>
      Cc: Marc Zyngier <maz@kernel.org>
      Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
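
      A sketch of the mechanism, simplified from the description above
      (the exact call sites differ in the real patch):

        bool kvm_vcpu_wake_up(struct kvm_vcpu *vcpu)
        {
                struct swait_queue_head *wqp = kvm_arch_vcpu_wq(vcpu);

                if (swait_active(wqp)) {
                        swake_up_one(wqp);
                        WRITE_ONCE(vcpu->ready, true); /* boost the target */
                        return true;
                }
                return false;
        }

        /* ... and in kvm_vcpu_on_spin(), a sleeping vCPU remains a
         * yield candidate if it was marked ready: */
        if (swait_active(&vcpu->wq) && !READ_ONCE(vcpu->ready))
                continue;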