1. 28 May 2022, 1 commit
  2. 16 May 2020, 1 commit
  3. 23 Apr 2020, 3 commits
  4. 15 Apr 2020, 1 commit
  5. 24 Mar 2020, 1 commit
  6. 12 Feb 2020, 1 commit
  7. 28 Jan 2020, 1 commit
    • KVM: Move running VCPU from ARM to common code · 7495e22b
      Committed by Paolo Bonzini
      For ring-based dirty log tracking, it will be more efficient to account
      writes during schedule-out or schedule-in to the currently running VCPU.
      We would like to do it even if the write doesn't use the current VCPU's
      address space, as is the case for cached writes (see commit 4e335d9e,
      "Revert "KVM: Support vCPU-based gfn->hva cache"", 2017-05-02).
      
      Therefore, add a mechanism to track the currently-loaded kvm_vcpu struct.
      There is already something similar in KVM/ARM; one important difference
      is that kvm_arch_vcpu_{load,put} have two callers in virt/kvm/kvm_main.c:
      we have to update both the architecture-independent vcpu_{load,put} and
      the preempt notifiers.
      
      Another change made in the process is to allow using kvm_get_running_vcpu()
      in preemptible code.  This is allowed because preempt notifiers ensure
      that the value does not change even after the VCPU thread is migrated.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
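      The tracking described above boils down to a per-CPU pointer that is
      written from both callers of kvm_arch_vcpu_{load,put}: the
      vcpu_load()/vcpu_put() pair and the preempt notifiers.  A simplified
      sketch of the idea (close to, but not necessarily identical to, the
      resulting virt/kvm/kvm_main.c code) looks like this:

          /* Per-CPU record of the vCPU currently loaded on this physical CPU. */
          static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_running_vcpu);

          void vcpu_load(struct kvm_vcpu *vcpu)
          {
                  int cpu = get_cpu();

                  __this_cpu_write(kvm_running_vcpu, vcpu);
                  preempt_notifier_register(&vcpu->preempt_notifier);
                  kvm_arch_vcpu_load(vcpu, cpu);
                  put_cpu();
          }

          void vcpu_put(struct kvm_vcpu *vcpu)
          {
                  preempt_disable();
                  kvm_arch_vcpu_put(vcpu);
                  preempt_notifier_unregister(&vcpu->preempt_notifier);
                  __this_cpu_write(kvm_running_vcpu, NULL);
                  preempt_enable();
          }

          /* The preempt notifiers keep the pointer in sync across migration. */
          static void kvm_sched_in(struct preempt_notifier *pn, int cpu)
          {
                  struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

                  __this_cpu_write(kvm_running_vcpu, vcpu);
                  kvm_arch_vcpu_load(vcpu, cpu);
          }

          static void kvm_sched_out(struct preempt_notifier *pn,
                                    struct task_struct *next)
          {
                  struct kvm_vcpu *vcpu = preempt_notifier_to_vcpu(pn);

                  kvm_arch_vcpu_put(vcpu);
                  __this_cpu_write(kvm_running_vcpu, NULL);
          }

          /*
           * Safe even from preemptible code: the notifiers above update the
           * per-CPU slot on every sched-out/sched-in, so the value read here
           * stays correct for the calling vCPU thread across migration.
           */
          struct kvm_vcpu *kvm_get_running_vcpu(void)
          {
                  struct kvm_vcpu *vcpu;

                  preempt_disable();
                  vcpu = __this_cpu_read(kvm_running_vcpu);
                  preempt_enable();

                  return vcpu;
          }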
  8. 28 Aug 2019, 1 commit
  9. 09 Aug 2019, 1 commit
    • KVM: arm/arm64: vgic: Reevaluate level sensitive interrupts on enable · 16e604a4
      Committed by Alexandru Elisei
      A HW mapped level sensitive interrupt asserted by a device will not be put
      into the ap_list if it is disabled at the VGIC level. When it is enabled
      again, it will be inserted into the ap_list and written to a list register
      on guest entry regardless of the state of the device.
      
      We could argue that this can also happen on real hardware, when the
      command to enable the interrupt reaches the GIC before the device has had
      the chance to de-assert the interrupt signal; however, we emulate the
      distributor and redistributors in software and we can do better than that.
      Signed-off-by: Alexandru Elisei <alexandru.elisei@arm.com>
      Signed-off-by: Marc Zyngier <maz@kernel.org>
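      In code, the change amounts to resampling the physical line when the
      guest sets the enable bit, instead of trusting whatever pending state was
      latched while the interrupt was disabled.  A hedged sketch of an
      ISENABLER handler doing this (simplified; the exact vgic-mmio.c code,
      lock flavour and helper split differ between kernel versions):

          void vgic_mmio_write_senable(struct kvm_vcpu *vcpu,
                                       gpa_t addr, unsigned int len,
                                       unsigned long val)
          {
                  u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
                  int i;

                  for_each_set_bit(i, &val, len * 8) {
                          struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
                          unsigned long flags;

                          raw_spin_lock_irqsave(&irq->irq_lock, flags);
                          if (irq->hw && irq->config == VGIC_CONFIG_LEVEL) {
                                  bool pending;

                                  /*
                                   * Ask the physical GIC whether the device
                                   * still asserts the line before injecting.
                                   */
                                  WARN_ON(irq_get_irqchip_state(irq->host_irq,
                                                                IRQCHIP_STATE_PENDING,
                                                                &pending));
                                  irq->line_level = pending;
                          }
                          irq->enabled = true;
                          vgic_queue_irq_unlock(vcpu->kvm, irq, flags);

                          vgic_put_irq(vcpu->kvm, irq);
                  }
          }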
  10. 31 May 2019, 1 commit
  11. 24 Jan 2019, 1 commit
  12. 20 Dec 2018, 1 commit
  13. 18 Dec 2018, 2 commits
  14. 21 Jul 2018, 2 commits
  15. 27 Apr 2018, 1 commit
    • KVM: arm/arm64: vgic: Fix source vcpu issues for GICv2 SGI · 53692908
      Committed by Marc Zyngier
      Now that we make sure we don't inject multiple instances of the
      same GICv2 SGI at the same time, we've made another bug more
      obvious:
      
      If we exit with an active SGI, we completely lose track of which
      vcpu it came from. On the next entry, we restore it with 0 as a
      source, and if that wasn't the right one, too bad. While this
      doesn't seem to trouble GIC-400, the architectural model gets
      offended and doesn't deactivate the interrupt on EOI.
      
      Another connected issue is that we will happily mark as pending
      an interrupt from another vcpu, overriding the above zero with
      something that is just as inconsistent. Don't do that.
      
      The final issue is that we signal a maintenance interrupt when
      no pending interrupts are present in the LR. Assuming we've fixed
      the two issues above, we end up in a situation where we keep
      exiting as soon as we've reached the active state, and are not
      able to inject the following pending interrupt.
      
      The fix comes in 3 parts:
      - GICv2 SGIs have their source vcpu saved if they are active on
        exit, and restored on entry
      - Multi-SGIs cannot go via the Pending+Active state, as this would
        corrupt the source field
      - Multi-SGIs are converted to using MI on EOI instead of NPIE
      
      Fixes: 16ca6a60 ("KVM: arm/arm64: vgic: Don't populate multiple LRs with the same vintid")
      Reported-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Christoffer Dall <christoffer.dall@arm.com>
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
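      The shape of the first part of the fix, preserving the SGI source across
      exit and entry, can be sketched as follows (illustrative only: the helper
      names are made up here, and the active_source field stands for whatever
      per-IRQ storage the real patch uses):

          /*
           * On guest exit, while folding a GICv2 list register back into the
           * struct vgic_irq: remember who sourced a still-active SGI.
           */
          static void vgic_v2_save_sgi_source(struct vgic_irq *irq, u32 lr_val, u32 intid)
          {
                  if (intid < VGIC_NR_SGIS && (lr_val & GICH_LR_ACTIVE_BIT))
                          irq->active_source = (lr_val >> GICH_LR_PHYSID_CPUID_SHIFT) & 0x7;
          }

          /*
           * On guest entry, while populating the list register for an SGI that
           * is still active: put the saved source back into the CPUID field.
           */
          static u32 vgic_v2_restore_sgi_source(struct vgic_irq *irq, u32 lr_val, u32 intid)
          {
                  if (intid < VGIC_NR_SGIS && irq->active)
                          lr_val |= (u32)irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT;

                  return lr_val;
          }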
  16. 15 Mar 2018, 1 commit
  17. 02 Jan 2018, 2 commits
    • KVM: arm/arm64: Support VGIC dist pend/active changes for mapped IRQs · df635c5b
      Committed by Christoffer Dall
      For mapped IRQs (with the HW bit set in the LR) we have to follow some
      rules of the architecture.  One of these rules is that the VM must not be
      allowed to deactivate a virtual interrupt with the HW bit set unless the
      physical interrupt is also active.
      
      This works fine when injecting mapped interrupts, because we leave it up
      to the injector to either set EOImode==1 or manually set the active
      state of the physical interrupt.
      
      However, the guest can set a virtual interrupt to be pending or active by
      writing to the virtual distributor, which could lead to deactivating a
      virtual interrupt with the HW bit set without the physical interrupt
      being active.
      
      We could set the physical interrupt to active whenever we are about to
      enter the VM with a HW interrupt either pending or active, but that
      would be really slow, especially on GICv2.  So we take the long way
      around and do the hard work when needed, which is expected to be
      extremely rare.
      
      When the VM sets the pending state for a HW interrupt on the virtual
      distributor we set the active state on the physical distributor, because
      the virtual interrupt can become active and then the guest can
      deactivate it.
      
      When the VM clears the pending state we also clear it on the physical
      side, because the injector might otherwise raise the interrupt.  We also
      clear the physical active state when the virtual interrupt is not
      active, since otherwise a SPEND/CPEND sequence from the guest would
      prevent signaling of future interrupts.
      
      Changing the state of mapped interrupts from userspace is not supported,
      and it's expected that userspace unmaps devices from VFIO before
      attempting to set the interrupt state, because the interrupt state is
      driven by hardware.
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Eric Auger <eric.auger@redhat.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
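      The propagation described above maps naturally onto the generic
      irq_set_irqchip_state() API.  A simplified sketch (the real vgic-mmio.c
      helpers take more context, e.g. the requesting vcpu and whether the
      access comes from userspace):

          /*
           * Guest sets ISPENDR for a HW-mapped interrupt: make the physical
           * interrupt active, so a later guest deactivation is architecturally
           * legal even if the virtual interrupt becomes active first.
           */
          static void vgic_hw_irq_spending(struct vgic_irq *irq)
          {
                  if (!irq->hw)
                          return;

                  irq_set_irqchip_state(irq->host_irq, IRQCHIP_STATE_ACTIVE, true);
          }

          /*
           * Guest clears ICPENDR: drop the physical pending state, and clear
           * the physical active state too when the virtual interrupt is not
           * active, so a SPEND/CPEND sequence cannot block future interrupts.
           */
          static void vgic_hw_irq_cpending(struct vgic_irq *irq)
          {
                  if (!irq->hw)
                          return;

                  irq_set_irqchip_state(irq->host_irq, IRQCHIP_STATE_PENDING, false);
                  if (!irq->active)
                          irq_set_irqchip_state(irq->host_irq, IRQCHIP_STATE_ACTIVE, false);
          }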
    • KVM: arm/arm64: Factor out functionality to get vgic mmio requester_vcpu · 6c1b7521
      Committed by Christoffer Dall
      We are about to distinguish between userspace accesses and mmio traps
      for a number of the mmio handlers.  When the requester vcpu is NULL, it
      means we are handling a userspace access.
      
      Factor out the functionality to get the requester vcpu into its own
      function, mostly so we have a common place to document the semantics of
      the return value.
      
      Also take the chance to move the functionality outside of holding a
      spinlock and instead explicitly disable and enable preemption.  This
      supports PREEMPT_RT kernels as well.
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Andre Przywara <andre.przywara@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
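      The factored-out helper is essentially a thin, preemption-safe wrapper
      around the "running vcpu" accessor (kvm_arm_get_running_vcpu() at the
      time; the 2020 commit listed above later moved this to common code as
      kvm_get_running_vcpu()).  A sketch of its shape; details may differ:

          /*
           * Return the vcpu performing the MMIO access, or NULL if the access
           * comes from userspace (e.g. the VGIC state save/restore ioctls).
           */
          static struct kvm_vcpu *vgic_get_mmio_requester_vcpu(void)
          {
                  struct kvm_vcpu *vcpu;

                  preempt_disable();
                  vcpu = kvm_arm_get_running_vcpu();
                  preempt_enable();

                  return vcpu;
          }

      Because preemption is only disabled around the read itself, callers no
      longer need to hold a spinlock across it, which also keeps PREEMPT_RT
      kernels happy.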
  18. 06 Nov 2017, 1 commit
  19. 23 May 2017, 2 commits
  20. 08 May 2017, 1 commit
  21. 07 Mar 2017, 1 commit
  22. 30 Jan 2017, 4 commits
  23. 25 Jan 2017, 1 commit
  24. 05 Nov 2016, 1 commit
  25. 22 Sep 2016, 1 commit
  26. 19 Jul 2016, 3 commits
  27. 02 Jun 2016, 1 commit
    • KVM: arm/arm64: vgic-new: Remove harmful BUG_ON · 05fb05a6
      Committed by Marc Zyngier
      When changing the active bit from an MMIO trap, we decide to
      explode if the intid is that of a private interrupt.
      
      This flawed logic comes from the assumption that kvm_vcpu_kick(), as
      called by kvm_arm_halt_vcpu(), would not return before the kicked vcpu
      had responded; this is not the case, so we need to perform this wait
      even for private interrupts.
      
      Dropping the BUG_ON seems like the right thing to do.
      
       [ Commit message tweaked by Christoffer ]
      Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
  28. 20 May 2016, 2 commits
    • KVM: arm/arm64: vgic-new: Synchronize changes to active state · 35a2d585
      Committed by Christoffer Dall
      When modifying the active state of an interrupt via the MMIO interface,
      we should ensure that the write has the intended effect.
      
      If a guest sets an interrupt to active, but that interrupt is already
      flushed into a list register on a running VCPU, then that VCPU will
      write the active state back into the struct vgic_irq upon returning from
      the guest and syncing its state.  This is a non-benign race, because the
      guest can observe that an interrupt is not active, and it can have a
      reasonable expectation that other VCPUs will not ack any IRQs; it can
      then set the state to active and expect it to stay that way.  Currently
      we are not honoring this case.
      
      Therefore, change both the SACTIVE and CACTIVE mmio handlers to stop the
      world, change the irq state, potentially queue the irq if we're setting
      it to active, and then continue.
      
      We take this chance to slightly optimize these functions by not stopping
      the world when touching private interrupts where there is inherently no
      possible race.
      Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
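      The stop-the-world pattern described above can be sketched like this
      (kvm_arm_halt_guest()/kvm_arm_resume_guest() are the real arm/arm64
      helpers of that era; the wrapper name and the inner change_active call
      are simplified for illustration):

          static void vgic_change_active_stop_world(struct kvm_vcpu *vcpu,
                                                    u32 intid, bool active)
          {
                  /*
                   * Private interrupts can only be flushed into a list register
                   * of the requesting vcpu itself, so no cross-vcpu race exists
                   * and we can skip the expensive halt/resume.
                   */
                  bool stop_world = intid >= VGIC_NR_PRIVATE_IRQS;

                  if (stop_world)
                          kvm_arm_halt_guest(vcpu->kvm);

                  /*
                   * All vcpus are out of the guest and synced: it is now safe
                   * to change the active state and, when setting it, queue the
                   * interrupt before letting the guest run again.
                   */
                  vgic_mmio_change_active(vcpu, intid, active);

                  if (stop_world)
                          kvm_arm_resume_guest(vcpu->kvm);
          }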
    • KVM: arm/arm64: vgic-new: Add GICv3 MMIO handling framework · ed9b8cef
      Committed by Andre Przywara
      Create a new file called vgic-mmio-v3.c and describe the GICv3
      distributor and redistributor registers there.
      This adds a special macro to deal with the split of SGI/PPI in the
      redistributor and SPIs in the distributor, which allows us to reuse
      the existing GICv2 handlers for those registers which are compatible.
      We also provide a function to deal with the registration of the two
      separate redistributor frames per VCPU.
      Signed-off-by: Andre Przywara <andre.przywara@arm.com>
      Reviewed-by: Eric Auger <eric.auger@linaro.org>
      Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
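      The "special macro" idea can be illustrated as below.  This is a
      hypothetical sketch, not the actual vgic-mmio-v3.c macro: the macro name,
      the initializer fields and the SPI count are stand-ins.  The point is
      that the distributor table describes only the shared (SPI) part of a
      per-IRQ register, while the per-VCPU redistributor frame describes the
      SGI/PPI part, and both reuse the same GICv2-style handlers:

          /*
           * Hypothetical sketch: a register slice covering SPIs only.  The
           * 32 private interrupts (SGIs/PPIs) are described separately in the
           * per-VCPU redistributor frame, using the same read/write handlers.
           */
          #define DESC_SHARED_BITS_PER_IRQ(offset, read_fn, write_fn, bpi)     \
          {                                                                     \
                  .reg_offset   = (offset),                                     \
                  .bits_per_irq = (bpi),                                        \
                  .len          = DIV_ROUND_UP((bpi) * (1020 - 32), 8),         \
                  .read         = (read_fn),                                    \
                  .write        = (write_fn),                                   \
          }

          /* Usage sketch: GICD_ISENABLER reuses the GICv2-style handlers. */
          static const struct vgic_register_region vgic_v3_dist_registers[] = {
                  DESC_SHARED_BITS_PER_IRQ(GICD_ISENABLER,
                          vgic_mmio_read_enable, vgic_mmio_write_senable, 1),
                  /* ... */
          };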