- 31 March 2020, 7 commits
-
Submitted by Sean Christopherson

Set kvm_x86_ops with the vendor's ops only after ->hardware_setup() completes to "prevent" using kvm_x86_ops before they are ready, i.e. to generate a null pointer fault instead of silently consuming unconfigured state. An alternative implementation would be to have ->hardware_setup() return the vendor's ops, but that would require non-trivial refactoring, and would arguably result in less readable code, e.g. ->hardware_setup() would need to use ERR_PTR() in multiple locations, and each vendor's declaration of the runtime ops would be less obvious.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-6-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
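A minimal sketch of the ordering this change aims for, under the assumption that the init-only ops (see the kvm_x86_init_ops commit below) expose the vendor's runtime ops through a pointer; the runtime_ops field name and the body are illustrative, not the exact upstream diff:

```c
/*
 * Illustrative sketch only, not the upstream diff. Assumes the
 * init-only ops carry a pointer to the vendor's runtime ops
 * (the runtime_ops field name is an assumption).
 */
int kvm_arch_hardware_setup(void *opaque)
{
	struct kvm_x86_init_ops *ops = opaque;
	int r;

	r = ops->hardware_setup();
	if (r)
		return r;

	/* Only now is kvm_x86_ops safe to dereference. */
	kvm_x86_ops = ops->runtime_ops;

	return 0;
}
```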
-
Submitted by Sean Christopherson

Configure VMX's runtime hooks by modifying vmx_x86_ops directly instead of using the global kvm_x86_ops. This sets the stage for waiting until after ->hardware_setup() to set kvm_x86_ops with the vendor's implementation.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-5-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Sean Christopherson

Move VMX's hardware_setup() below its vmx_x86_ops definition so that a future patch can refactor hardware_setup() to modify vmx_x86_ops directly instead of indirectly modifying the ops via the global kvm_x86_ops.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-4-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Sean Christopherson

Move the kvm_x86_ops functions that are used only within the scope of kvm_init() into a separate struct, kvm_x86_init_ops. In addition to identifying the init-only functions without resorting to code comments, this also sets the stage for waiting until after ->hardware_setup() to set kvm_x86_ops. Setting kvm_x86_ops after ->hardware_setup() is desirable as many of the hooks are not usable until ->hardware_setup() completes.

No functional change intended.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-3-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
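A sketch of the split described here; only the hardware_setup() hook and a pointer back to the runtime ops are implied by this series, so the remaining fields and their names are assumptions:

```c
/* Sketch of the init-only ops; field names other than hardware_setup
 * are assumptions rather than the exact upstream layout. */
struct kvm_x86_init_ops {
	int (*cpu_has_kvm_support)(void);
	int (*disabled_by_bios)(void);
	int (*check_processor_compatibility)(void);
	int (*hardware_setup)(void);

	/* ops that stay live for the lifetime of the module */
	struct kvm_x86_ops *runtime_ops;
};
```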
-
Submitted by Sean Christopherson

Pass @opaque to kvm_arch_hardware_setup() and kvm_arch_check_processor_compat() to allow architecture specific code to reference @opaque without having to stash it away in a temporary global variable. This will enable x86 to separate its vendor specific callback ops, which are passed via @opaque, into "init" and "runtime" ops without having to stash away the "init" ops.

No functional change intended.

Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Tested-by: Cornelia Huck <cohuck@redhat.com> #s390
Acked-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200321202603.19355-2-sean.j.christopherson@intel.com>
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
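The shape of the change, sketched as before/after prototypes (a simplification of the actual diff):

```c
/* Before (sketch): arch code had to stash the module-provided ops in a
 * temporary global to reach them from these hooks. */
int kvm_arch_hardware_setup(void);
int kvm_arch_check_processor_compat(void);

/* After (sketch): the opaque pointer handed to kvm_init() is passed
 * straight through, so x86 can receive its init-only ops directly. */
int kvm_arch_hardware_setup(void *opaque);
int kvm_arch_check_processor_compat(void *opaque);
```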
-
Submitted by Paolo Bonzini

Merge tag 'kvm-ppc-next-5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD

KVM PPC update for 5.7

* Add a capability for enabling secure guests under the Protected Execution Framework ultravisor
* Various bug fixes and cleanups.
-
Submitted by Paolo Bonzini

Merge of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm updates for Linux 5.7

- GICv4.1 support
- 32bit host removal
-
- 30 March 2020, 1 commit
-
Submitted by Paolo Bonzini

Merge tag 'kvm-s390-next-5.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: Fix for error codes

- return the proper error to userspace when a signal interrupts the KSM unsharing operation
-
- 27 March 2020, 1 commit
-
Submitted by Christian Borntraeger

If a signal is pending we might return -ENOMEM instead of -EINTR. We should propagate the proper error during KSM unsharing. unmerge_ksm_pages() returns -ERESTARTSYS when a signal is pending; this gets translated by entry.S to -EINTR. It is important to return this error code so that userspace can retry.

To make this clearer we also add -EINTR to the documentation of the PV_ENABLE call, which calls unmerge_ksm_pages.

Fixes: 3ac8e380 ("s390/mm: disable KSM for storage key enabled pages")
Reviewed-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reported-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Tested-by: Marc Hartmayer <mhartmay@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
- 26 March 2020, 6 commits
-
Submitted by Paolo Bonzini

Merge tag 'kvm-s390-next-5.7-2' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD

KVM: s390: cleanups for 5.7

- mark sie control block as 512 byte aligned
- use fallthrough;
-
Submitted by Sean Christopherson

Fix a copy-paste typo in a comment and error message.

Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200320205546.2396-3-sean.j.christopherson@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Submitted by Sean Christopherson

Reset the LRU slot if it becomes invalid when deleting a memslot to fix an out-of-bounds/use-after-free access when searching through memslots. Explicitly check for there being no used slots in search_memslots(), and in the caller of s390's approximation variant.

Fixes: 36947254 ("KVM: Dynamically size memslot array based on number of used slots")
Reported-by: Qian Cai <cai@lca.pw>
Cc: Peter Xu <peterx@redhat.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Message-Id: <20200320205546.2396-2-sean.j.christopherson@intel.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
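A hedged sketch of the search_memslots() guard described above, modeled on the generic binary search over the gfn-sorted memslot array (abridged, and not guaranteed to match the upstream code line for line):

```c
/* Sketch: bail out before touching the LRU slot or the array when no
 * slots are in use, then fall back to the usual binary search. */
static inline struct kvm_memory_slot *
search_memslots(struct kvm_memslots *slots, gfn_t gfn)
{
	int start = 0, end = slots->used_slots;
	int slot = atomic_read(&slots->lru_slot);
	struct kvm_memory_slot *memslots = slots->memslots;

	if (unlikely(!slots->used_slots))
		return NULL;

	if (gfn >= memslots[slot].base_gfn &&
	    gfn < memslots[slot].base_gfn + memslots[slot].npages)
		return &memslots[slot];

	while (start < end) {
		slot = start + (end - start) / 2;

		if (gfn >= memslots[slot].base_gfn)
			end = slot;
		else
			start = slot + 1;
	}

	if (start < slots->used_slots && gfn >= memslots[start].base_gfn &&
	    gfn < memslots[start].base_gfn + memslots[start].npages) {
		atomic_set(&slots->lru_slot, start);
		return &memslots[start];
	}

	return NULL;
}
```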
-
Submitted by Wanpeng Li

This patch optimizes the virtual IPI fastpath emulation sequence:

    write ICR2
    send virtual IPI
    read ICR2
    write ICR2
    send virtual IPI
        ==>
    write ICR
    write ICR

We can observe ~0.67% performance improvement for the IPI microbenchmark (https://lore.kernel.org/kvm/20171219085010.4081-1-ynorov@caviumnetworks.com/) on a Skylake server.

Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1585189202-1708-4-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
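As a rough sketch of the idea: in x2APIC mode the ICR is a single 64-bit register, so one guest MSR write can both program the destination and trigger the IPI. Helper names below follow KVM's lapic code as I understand it and should be treated as assumptions:

```c
/* Sketch only: emulate a guest x2APIC ICR write as one operation by
 * sending the IPI and then updating the cached ICR/ICR2 values.
 * Helper names are assumptions, not a verbatim copy of upstream code. */
static void fastpath_x2apic_icr_write(struct kvm_lapic *apic, u64 data)
{
	kvm_apic_send_ipi(apic, (u32)data, (u32)(data >> 32));
	kvm_lapic_set_reg(apic, APIC_ICR2, (u32)(data >> 32));
	kvm_lapic_set_reg(apic, APIC_ICR, (u32)data);
}
```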
-
Submitted by Wanpeng Li

Delay reading the MSR data until we have identified that the guest is accessing the ICR MSR, so that all other MSR writes are not penalized.

Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
Message-Id: <1585189202-1708-2-git-send-email-wanpengli@tencent.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
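A sketch of what this looks like in the WRMSR fastpath; the exact control flow here is an assumption, but the point stands that the RDX:RAX payload is only assembled for the ICR case:

```c
/* Sketch: read the MSR index first; only assemble the 64-bit payload
 * from RDX:RAX once the access is known to target the x2APIC ICR, so
 * every other MSR write skips the extra register reads. */
static int fastpath_msr_write(struct kvm_vcpu *vcpu)
{
	u32 msr = kvm_rcx_read(vcpu);
	u64 data;

	switch (msr) {
	case APIC_BASE_MSR + (APIC_ICR >> 4):	/* x2APIC ICR */
		data = kvm_read_edx_eax(vcpu);
		return handle_fastpath_set_x2apic_icr_irqoff(vcpu, data);
	default:
		return 1;	/* let the normal WRMSR path handle it */
	}
}
```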
-
Submitted by Paul Mackerras

At present, on Power systems with Protected Execution Facility hardware and an ultravisor, a KVM guest can transition to being a secure guest at will. Userspace (QEMU) has no way of knowing whether a host system is capable of running secure guests. This will present a problem in future when the ultravisor is capable of migrating secure guests from one host to another, because virtualization management software will have no way to ensure that secure guests only run in domains where all of the hosts can support secure guests.

This adds a VM capability which has two functions: (a) userspace can query it to find out whether the host can support secure guests, and (b) userspace can enable it for a guest, which allows that guest to become a secure guest. If userspace does not enable it, KVM will return an error when the ultravisor does the hypercall that indicates that the guest is starting to transition to a secure guest. The ultravisor will then abort the transition and the guest will terminate.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Ram Pai <linuxram@us.ibm.com>
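A minimal userspace sketch of the (a) query / (b) enable flow described above, assuming the capability is exposed as KVM_CAP_PPC_SECURE_GUEST (the name is not stated in the message, so treat it as an assumption):

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch: vm_fd is an open VM file descriptor; the capability name
 * KVM_CAP_PPC_SECURE_GUEST is assumed. */
static int allow_secure_guest(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_PPC_SECURE_GUEST,
	};

	/* (a) can this host run secure guests at all? */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PPC_SECURE_GUEST) <= 0)
		return -1;

	/* (b) opt this VM in so the guest may transition to secure mode */
	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```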
-
- 25 March 2020, 1 commit
-
Submitted by Marc Zyngier

Goodbye KVM/arm

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
- 24 March 2020, 24 commits
-
Submitted by Marc Zyngier

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

The vgic-state debugfs file could do with showing the pending state of the HW-backed SGIs. Plug it into the low-level code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-24-maz@kernel.org
-
Submitted by Marc Zyngier

Just like for VLPIs, it is beneficial to avoid trapping on WFI when the vcpu is using the GICv4.1 SGIs. Add such a check to vcpu_clear_wfx_traps().

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-23-maz@kernel.org
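A hedged sketch of what such a check could look like in vcpu_clear_wfx_traps(); the vlpi_count and nassgireq field paths are assumptions used only to illustrate the condition:

```c
/* Sketch only: keep WFI traps disabled (clear HCR_EL2.TWI) when the vcpu
 * has VLPIs or HW-backed GICv4.1 SGIs targeting it, so a blocked guest
 * relies on doorbells instead of trapping. Field names are assumptions. */
static inline void vcpu_clear_wfx_traps(struct kvm_vcpu *vcpu)
{
	vcpu->arch.hcr_el2 &= ~HCR_TWE;
	if (atomic_read(&vcpu->arch.vgic_cpu.vgic_v3.its_vpe.vlpi_count) ||
	    vcpu->kvm->arch.vgic.nassgireq)
		vcpu->arch.hcr_el2 &= ~HCR_TWI;
	else
		vcpu->arch.hcr_el2 |= HCR_TWI;
}
```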
-
Submitted by Marc Zyngier

Each time a Group-enable bit gets flipped, the state of these bits needs to be forwarded to the hardware. This is a pretty heavy-handed operation, requiring all vcpus to reload their GICv4 configuration. It is thus implemented as a new request type.

These enable bits are programmed into the HW by setting the VGrp{0,1}En fields of GICR_VPENDBASER when the vPEs are made resident again.

Of course, we only support Group-1 for now...

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-22-maz@kernel.org
-
Submitted by Marc Zyngier

The GICv4.1 architecture gives the hypervisor the option to let the guest choose whether it wants the good old SGIs with an active state, or the new, HW-based ones that do not have one.

For this, plumb the configuration of SGIs into the GICv3 MMIO handling, present the GICD_TYPER2.nASSGIcap bit to the guest, and handle the GICD_CTLR.nASSGIreq setting.

In order to be able to deal with the restore of a guest, also apply the GICD_CTLR.nASSGIreq setting at first run so that we can move the restored SGIs to the HW if that's what the guest had selected in a previous life.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-21-maz@kernel.org
-
Submitted by Marc Zyngier

In order to let a guest opt in to the new, active-less SGIs, we need to be able to switch between the two modes. Handle this by stopping all guest activity, transferring the state from one mode to the other, and resuming the guest.

Nothing calls this code so far, but a later patch will plug it into the MMIO emulation.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-20-maz@kernel.org
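A sketch of the halt/transfer/resume shape described above; the per-vcpu enable/disable helpers and the nassgireq flag are illustrative assumptions:

```c
/* Sketch: quiesce the guest, move each vcpu's 16 SGIs between the
 * emulated and HW-backed models, then let the guest run again.
 * Helper and field names are assumptions. */
static void vgic_v4_configure_vsgis(struct kvm *kvm)
{
	struct vgic_dist *dist = &kvm->arch.vgic;
	struct kvm_vcpu *vcpu;
	int i;

	kvm_arm_halt_guest(kvm);

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (dist->nassgireq)
			vgic_v4_enable_vsgis(vcpu);
		else
			vgic_v4_disable_vsgis(vcpu);
	}

	kvm_arm_resume_guest(kvm);
}
```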
-
Submitted by Marc Zyngier

Most of the GICv3 emulation code that deals with SGIs now has to be aware of the v4.1 capabilities in order to benefit from it. Add such support, keyed on the interrupt having the hw flag set and being an SGI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-19-maz@kernel.org
-
Submitted by Marc Zyngier

As GICv4.1 understands the life cycle of doorbells (instead of just randomly firing them at the most inconvenient time), just enable them at irq_request time, and be done with it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-18-maz@kernel.org
-
Submitted by Marc Zyngier

Now that we have HW-accelerated SGIs being delivered to VPEs, it becomes necessary to map the VPEs on all ITSs instead of relying on the lazy approach that we would use when using the ITS-list mechanism.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-17-maz@kernel.org
-
Submitted by Marc Zyngier

Add the SGI configuration entry point for KVM to use.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-16-maz@kernel.org
-
Submitted by Marc Zyngier

Allocate per-VPE SGIs when initializing the GIC-specific part of the VPE data structure.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-15-maz@kernel.org
-
Submitted by Marc Zyngier

In order to hide some of the differences between v4.0 and v4.1, move the doorbell management out of the KVM code, and into the GICv4-specific layer. This allows the calling code to ask for the doorbell when blocking, and otherwise to leave the doorbell permanently disabled.

This matches the v4.1 code perfectly, and only results in a minor refactoring of the v4.0 code.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-14-maz@kernel.org
-
Submitted by Marc Zyngier

Just like for vLPIs, there is some configuration information that cannot be directly communicated through the normal irqchip API, and we have to use our good old friend set_vcpu_affinity as a side-band communication mechanism. This is used to configure the group and priority of a given vSGI.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-13-maz@kernel.org
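A hedged sketch of how KVM could issue such a side-band request through irq_set_vcpu_affinity(); the its_cmd_info layout and the PROP_UPDATE_VSGI command name are assumptions meant only to illustrate the mechanism:

```c
/* Sketch: convey vSGI group/priority via the set_vcpu_affinity side-band.
 * The its_cmd_info fields and PROP_UPDATE_VSGI name are assumptions. */
static int update_vsgi_config(unsigned int host_irq, bool group1, u8 priority)
{
	struct its_cmd_info info = {
		.cmd_type = PROP_UPDATE_VSGI,
		.sgi_config = {
			.group    = group1,
			.priority = priority,
			.enabled  = true,
		},
	};

	return irq_set_vcpu_affinity(host_irq, &info);
}
```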
-
Submitted by Marc Zyngier

To implement the get/set_irqchip_state callbacks (limited to the PENDING state), we have to use a particular set of hacks:

- Reading the pending state is done by using a pair of new redistributor registers (GICR_VSGIR, GICR_VSGIPENDR), which allow the state of the 16 interrupts to be retrieved.
- Setting the pending state is done by generating it as we'd otherwise do for a guest (writing to GITS_SGIR).
- Clearing the pending state is done by emitting a VSGI command with the "clear" bit set.

This requires some interesting locking though:

- When talking to the redistributor, we must make sure that the VPE affinity doesn't change, hence taking the VPE lock.
- At the same time, we must ensure that nobody accesses the same redistributor's GICR_VSGIR registers for a different VPE, which would corrupt the reading of the pending bits. We thus take the per-RD spinlock.

Much fun. See the sketch after this message.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-12-maz@kernel.org
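A hedged sketch of the pending-state read with the two locks described above; the lock helpers, register accessors and busy bit are assumptions, and the polling is simplified:

```c
/* Sketch: pin the vPE's affinity, serialize access to this redistributor's
 * GICR_VSGIR/GICR_VSGIPENDR pair, then sample the pending bit.
 * Lock/accessor names are assumptions and the busy-wait is simplified. */
static int vsgi_get_pending(struct its_vpe *vpe, int sgi, bool *val)
{
	void __iomem *base;
	unsigned long flags;
	u32 status;
	int cpu;

	cpu = vpe_to_cpuid_lock(vpe, &flags);		  /* vPE affinity is now fixed */
	raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock); /* one GICR_VSGIR user per RD */

	base = gic_data_rdist_cpu(cpu)->rd_base + SZ_128K;
	writel_relaxed(vpe->vpe_id, base + GICR_VSGIR);
	do {
		status = readl_relaxed(base + GICR_VSGIPENDR);
	} while (status & GICR_VSGIPENDR_BUSY);

	*val = !!(status & (1 << sgi));

	raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
	vpe_to_cpuid_unlock(vpe, flags);

	return 0;
}
```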
-
Submitted by Marc Zyngier

Implement mask/unmask for virtual SGIs by calling into the configuration helper.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-11-maz@kernel.org
-
Submitted by Marc Zyngier

The GICv4.1 ITS has yet another new command (VSGI) which allows a VPE-targeted SGI to be configured (or have its pending state cleared). Add support for this command and plumb it into the activate irqdomain callback so that it is ready to be used.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Link: https://lore.kernel.org/r/20200304203330.4967-10-maz@kernel.org
-
Submitted by Marc Zyngier

Since GICv4.1 has the capability to inject 16 SGIs into each VPE, and I'm keen not to invent too many specific interfaces to manipulate these interrupts, let's pretend that each of these SGIs is an actual Linux interrupt. For that matter, let's introduce a minimal irqchip and irqdomain setup that will get fleshed out in the following patches.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Reviewed-by: Zenghui Yu <yuzenghui@huawei.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Link: https://lore.kernel.org/r/20200304203330.4967-9-maz@kernel.org
-
Submitted by Marc Zyngier

Drop the KVM/arm entries from the MAINTAINERS file.

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

Although we have to bounce between HYP and SVC to decompress and relocate the kernel, we don't need to be able to use it in the kernel itself. So let's drop the functionality. Since the vectors are never changed, there is no need to reset them either, and nobody calls that stub anyway. The last function (SOFT_RESTART) is still present in order to support kexec.

Signed-off-by: Marc Zyngier <maz@kernel.org>
-
Submitted by Marc Zyngier

We used to use a set of macros to provide support of vgic-v3 to 32bit without duplicating everything. We don't need it anymore, so drop it.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
-
Submitted by Marc Zyngier

Remove all traces of Stage-2 and HYP page table support.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
-
Submitted by Marc Zyngier

That's it. Remove all references to KVM itself, and document that although it is no more, the ABI between SVC and HYP still exists.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
-
Submitted by Marc Zyngier

Only one platform is building KVM by default. How crazy! Remove it whilst nobody is watching.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
-
Submitted by Marc Zyngier

As we're about to drop KVM/arm on the floor, carefully unplug it from the build system.

Signed-off-by: Marc Zyngier <maz@kernel.org>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Will Deacon <will@kernel.org>
Acked-by: Vladimir Murzin <vladimir.murzin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Christoffer Dall <christoffer.dall@arm.com>
-