- 06 August 2018, 18 commits
-
-
By Liran Alon
Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Liran Alon
Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Liran Alon
Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Liran Alon
Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Liran Alon
No functionality change. This is done in preparation for VMCS shadowing emulation. Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Liran Alon
No functionality change. Signed-off-by: Liran Alon <liran.alon@oracle.com> Signed-off-by: Jim Mattson <jmattson@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Jim Mattson
For nested virtualization, L0 KVM manages a bit of state for L2 guests that cannot be captured through the currently available IOCTLs. In fact, the state captured through all of these IOCTLs is usually a mix of L1 and L2 state. It also depends on whether the L2 guest was running at the moment the process was interrupted to save its state. With this capability, there are two new vcpu ioctls: KVM_GET_NESTED_STATE and KVM_SET_NESTED_STATE. These can be used for saving and restoring a VM that is in VMX operation. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: x86@kernel.org Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Jim Mattson <jmattson@google.com> [karahmed@ - rename structs and functions and make them ready for AMD and address previous comments. - handle nested.smm state. - rebase & a bit of refactoring. - Merge 7/8 and 8/8 into one patch. ] Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
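As a rough illustration of how userspace could drive the two new ioctls (a minimal sketch following the documented flow; the helper names and error handling are made up here, and the struct kvm_nested_state layout comes from the kernel headers):

```c
#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/* Save nested state for a later restore; returns NULL if unsupported. */
static struct kvm_nested_state *save_nested_state(int kvm_fd, int vcpu_fd)
{
	/* KVM_CHECK_EXTENSION reports the maximum state size in bytes. */
	int size = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NESTED_STATE);
	struct kvm_nested_state *state;

	if (size <= 0)
		return NULL;

	state = calloc(1, size);
	if (!state)
		return NULL;
	state->size = size;
	if (ioctl(vcpu_fd, KVM_GET_NESTED_STATE, state) < 0) {
		free(state);
		return NULL;
	}
	return state;
}

/* The restore must happen before the restored vCPU runs again. */
static int restore_nested_state(int vcpu_fd, struct kvm_nested_state *state)
{
	return ioctl(vcpu_fd, KVM_SET_NESTED_STATE, state);
}
```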
-
By Paolo Bonzini
If the vCPU enters system management mode while running a nested guest, RSM starts processing the vmentry while still in SMM. In that case, however, the pages pointed to by the vmcs12 might be incorrectly loaded from SMRAM. To avoid this, delay the handling of the pages until just before the next vmentry. This is done with a new request and a new entry in kvm_x86_ops, which we will be able to reuse for nested VMX state migration. Extracted from a patch by Jim Mattson and KarimAllah Ahmed. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini
The test calls KVM_RUN repeatedly, and creates an entirely new VM with the old memory and vCPU state on every exit to userspace. The kvm_util API is expanded with two functions that manage the lifetime of a kvm_vm struct: the first closes the file descriptors and leaves the memory allocated, and the second opens the file descriptors and reuses the memory from the previous incarnation of the kvm_vm struct. For now the test is very basic, as it does not test, for example, XSAVE or vCPU events. However, it will test nested virtualization state starting with the next patch. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
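An outline of the loop described above, in C-like pseudocode (the helper names save_state, restore_state, vm_close_fds, and vm_reopen_fds are placeholders for illustration, not the actual kvm_util additions):

```c
/* Placeholder helpers; only the overall flow is taken from the text above. */
for (;;) {
	vcpu_run(vm, VCPU_ID);
	if (guest_done(vm))
		break;

	state = save_state(vm, VCPU_ID);  /* regs, sregs, msrs, ... */

	vm_close_fds(vm);                 /* close fds, keep guest memory */
	vm_reopen_fds(vm);                /* new VM reusing the same memory */

	restore_state(vm, VCPU_ID, state);
}
```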
-
By Paolo Bonzini
The selftests were not munmap-ing the kvm_run area from the vcpu file descriptor. The result was that kvm_vcpu_release was not called and a reference was left in the parent "struct kvm". Ultimately this was visible in the upcoming state save/restore test as an error when KVM attempted to create a duplicate debugfs entry. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini
The allocation of the VMXON and VMCS is currently done twice, in lib/vmx.c and in vmx_tsc_adjust_test.c. Reorganize the code to provide a cleaner and easier-to-use API to the tests. lib/vmx.c now does the complete setup of the VMX data structures, but does not create the VM or set CPUID. This has to be done by the caller. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini
The GDT and the TSS base were left at zero, and this has interesting effects when the TSS descriptor is later read to set up a VMCS's TR_BASE. Basically it worked by chance, and this patch fixes it by setting up all the protected mode data structures properly. Because the GDT and TSS addresses are virtual, the page tables now always exist at the time of vcpu setup. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
By Paolo Bonzini
Some of the MSRs returned by GET_MSR_INDEX_LIST currently cannot be sent back to KVM_GET_MSR and/or KVM_SET_MSR; either they can never be sent back, or they are only accepted under special conditions. This makes the API a pain to use. To avoid this pain, this patch makes it so that the result of the get-list ioctl can always be used for host-initiated get and set. Since we don't have a separate way to check for read-only MSRs, this means some Hyper-V MSRs are ignored when written. Arguably they should not even be in the result of GET_MSR_INDEX_LIST, but I am leaving them there in case userspace is using the outcome of GET_MSR_INDEX_LIST to derive the support for the corresponding Hyper-V feature. Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
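A minimal sketch of the usage pattern this enables (assumed flow, error handling omitted): fetch the index list once from /dev/kvm, then feed every index straight into a host-initiated KVM_GET_MSRS on the vCPU.

```c
#include <linux/kvm.h>
#include <stdlib.h>
#include <sys/ioctl.h>

static void read_all_msrs(int kvm_fd, int vcpu_fd)
{
	struct kvm_msr_list probe = { .nmsrs = 0 };
	struct kvm_msr_list *list;
	struct kvm_msrs *msrs;
	int i, nmsrs;

	/* A call with nmsrs = 0 fails with E2BIG but reports the count. */
	ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, &probe);
	nmsrs = probe.nmsrs;

	list = calloc(1, sizeof(*list) + nmsrs * sizeof(list->indices[0]));
	list->nmsrs = nmsrs;
	ioctl(kvm_fd, KVM_GET_MSR_INDEX_LIST, list);

	msrs = calloc(1, sizeof(*msrs) + nmsrs * sizeof(msrs->entries[0]));
	msrs->nmsrs = nmsrs;
	for (i = 0; i < nmsrs; i++)
		msrs->entries[i].index = list->indices[i];

	/* Host-initiated read of every advertised MSR. */
	ioctl(vcpu_fd, KVM_GET_MSRS, msrs);

	free(msrs);
	free(list);
}
```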
-
By Sean Christopherson
Linux does not support Memory Protection Extensions (MPX) in the kernel itself, thus the BNDCFGS (Bound Config Supervisor) MSR will always be zero in the KVM host, i.e. RDMSR in vmx_save_host_state() is superfluous. KVM unconditionally sets VM_EXIT_CLEAR_BNDCFGS, i.e. BNDCFGS will always be zero after VMEXIT, thus manually loading BNDCFGS is also superfluous. And in the event MPX kernel support is added (unlikely given that MPX for userspace is in its death throes[1]), BNDCFGS will likely be common across all CPUs[2], and at the least shouldn't change on a regular basis, i.e. saving the MSR on every VMENTRY is completely unnecessary. WARN_ONCE in hardware_setup() if the host's BNDCFGS is non-zero to document that KVM does not preserve BNDCFGS and to serve as a hint as to how BNDCFGS likely should be handled if MPX is used in the kernel, e.g. BNDCFGS should be saved once during KVM setup. [1] https://lkml.org/lkml/2018/4/27/1046 [2] http://www.openwall.com/lists/kernel-hardening/2017/07/24/28 Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
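The check described amounts to something like the following kernel-style snippet (a sketch of the intent, not necessarily the exact hunk):

```c
/* In hardware_setup(): warn once if the host has a non-zero BNDCFGS,
 * since KVM will not preserve it across guest entry/exit. */
if (boot_cpu_has(X86_FEATURE_MPX)) {
	u64 host_bndcfgs;

	rdmsrl(MSR_IA32_BNDCFGS, host_bndcfgs);
	WARN_ONCE(host_bndcfgs, "KVM: BNDCFGS in host will be lost");
}
```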
-
By KarimAllah Ahmed
Switch 'requests' to be explicitly 64-bit and update the BUILD_BUG_ON check to use the size of "requests" instead of the hard-coded '32'. That gives us a bit more room again for arch-specific requests, as we had already run out of space for x86 due to the hard-coded check. The only exception here is ARM32, as it is still 32-bit. Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: kvm@vger.kernel.org Cc: linux-kernel@vger.kernel.org Reviewed-by: Jim Mattson <jmattson@google.com> Signed-off-by: KarimAllah Ahmed <karahmed@amazon.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
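The shape of the change, modeled in user space (illustrative only, not the kernel hunk; the request constants below are made up):

```c
#include <assert.h>
#include <stdint.h>

#define KVM_REQUEST_ARCH_BASE	8	/* assumed value, for illustration */
#define LAST_ARCH_REQ		30	/* hypothetical highest arch request */
#define sizeof_field(t, m)	sizeof(((t *)0)->m)

struct vcpu_model {
	uint64_t requests;		/* widened from an effective 32 bits */
};

/* The overflow check is derived from the field size, not a literal 32,
 * so widening the field automatically buys more request numbers. */
static_assert(KVM_REQUEST_ARCH_BASE + LAST_ARCH_REQ <
	      sizeof_field(struct vcpu_model, requests) * 8,
	      "vcpu requests bitmap too small");

int main(void) { return 0; }
```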
-
By Wei Huang
KVM is supposed to update some of the guest VM's CPUID bits (e.g. OSXSAVE) when CR4 is changed. A bug was found in KVM recently and fixed by commit c4d21882 ("KVM: x86: Update cpuid properly when CR4.OSXAVE or CR4.PKE is changed"). This patch adds a test to verify the synchronization between the guest VM's CR4 and CPUID bits. Signed-off-by: Wei Huang <wei@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
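The property being verified can be sketched as guest code like this (an illustration of the architectural behavior, not the selftest itself; it assumes the vCPU's CPUID advertises XSAVE so that setting CR4.OSXSAVE, bit 18, is legal, and checks CPUID.01H:ECX bit 27):

```c
#include <stdint.h>

#define X86_CR4_OSXSAVE		(1ull << 18)
#define CPUID_1_ECX_OSXSAVE	(1u << 27)

static inline uint64_t read_cr4(void)
{
	uint64_t cr4;
	asm volatile("mov %%cr4, %0" : "=r"(cr4));
	return cr4;
}

static inline void write_cr4(uint64_t cr4)
{
	asm volatile("mov %0, %%cr4" : : "r"(cr4));
}

static inline uint32_t cpuid_1_ecx(void)
{
	uint32_t eax = 1, ebx, ecx, edx;
	asm volatile("cpuid"
		     : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
	(void)ebx;
	(void)edx;
	return ecx;
}

/* Flip CR4.OSXSAVE both ways and check that CPUID follows. */
static int cr4_cpuid_in_sync(void)
{
	int ok = 1;

	write_cr4(read_cr4() | X86_CR4_OSXSAVE);
	ok &= !!(cpuid_1_ecx() & CPUID_1_ECX_OSXSAVE);

	write_cr4(read_cr4() & ~X86_CR4_OSXSAVE);
	ok &= !(cpuid_1_ecx() & CPUID_1_ECX_OSXSAVE);

	return ok;	/* the real test reports the result back to the host */
}
```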
-
By Paolo Bonzini
Pull bug fixes into the KVM development tree to avoid nasty conflicts.
-
- 02 August 2018, 2 commits
-
-
By Paolo Bonzini
Merge tag 'kvm-s390-next-4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into HEAD KVM: s390: Features for 4.19 - initial version for host large page support. Must be enabled with module parameter hpage=1 and will conflict with the nested=1 parameter. - enable etoken facility for guests - Fixes
-
By Paolo Bonzini
Merge tag 'kvm-ppc-next-4.19-1' of git://git.kernel.org/pub/scm/linux/kernel/git/paulus/powerpc into HEAD PPC KVM update for 4.19. This update adds no new features; it just has some minor code cleanups and bug fixes, including a fix to allow us to create KVM_MAX_VCPUS vCPUs on POWER9 in all CPU threading modes.
-
- 31 July 2018, 4 commits
-
-
By Janosch Frank
Merge tag 'hlp_stage1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvms390/linux into kvms390/next KVM: s390: initial host large page support - must be enabled via module parameter hpage=1 - cannot be used together with nested - does support migration - does support hugetlbfs - no THP yet
-
By Janosch Frank
General KVM huge page support on s390 has to be enabled via the kvm.hpage module parameter. Either nested or hpage can be enabled, as we currently do not support vSIE for huge-backed guests. Once vSIE support is added we will either drop the parameter or enable it as default. For a guest the feature has to be enabled through the new KVM_CAP_S390_HPAGE_1M capability and the hpage module parameter. Enabling it means that cmm can't be enabled for the vm and disables pfmf and storage key interpretation. This is due to the fact that in some cases, in upcoming patches, we have to split huge pages in the guest mapping to be able to set more granular memory protection on 4k pages. These split pages have fake page tables that are not visible to the Linux memory management, which subsequently will not manage their PGSTEs, while the SIE will. Disabling these features lets us manage PGSTE data in a consistent manner and solves that problem. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
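From the userspace side, opting in presumably follows the usual KVM_ENABLE_CAP pattern on the VM file descriptor (a sketch under that assumption; the host additionally needs kvm.hpage=1 and 1M hugetlbfs backing for guest memory):

```c
#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Enable 1M huge page backing for this guest; per the text above this
 * must happen early, before conflicting features such as cmm are used. */
static int enable_s390_hpage_1m(int vm_fd)
{
	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_S390_HPAGE_1M,
	};

	return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}
```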
-
By Janosch Frank
Let's allow huge pmd linking when enabled through the KVM_CAP_S390_HPAGE_1M capability. Also, we can now restrict gmap invalidation and notification to the cases where the capability has been activated and save some cycles when that's not the case. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
-
By Dominik Dingel
Guests backed by huge pages could theoretically free unused pages via the diagnose 10 instruction. We currently don't allow that, so we don't have to refault them once they are needed again. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
- 30 July 2018, 11 commits
-
-
By Janosch Frank
Let's introduce an explicit check whether skeys have already been enabled for the vcpu, so we don't have to check the mm context if we don't have the storage key facility. This lets us check for enablement without having to take the mm semaphore and thus speeds up skey emulation. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com> Acked-by: Farhan Ali <alifm@linux.ibm.com> Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
-
By Janosch Frank
When doing skey emulation for huge guests, we now need to fault in pmds, as we don't have PGSTEs anymore to store them when we do not have valid table entries. Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
By Janosch Frank
Storage keys for guests with huge page mappings have to be managed in hardware. There are no PGSTEs for PMDs that we could use to retain the guest's logical view of the key. Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
Similarly to the pte skey handling, where we set the storage key to the default key for each newly mapped pte, we have to also do that for huge pmds. With the PG_arch_1 flag we keep track of whether the area has already been cleared of its skeys. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
By Dominik Dingel
When a guest starts using storage keys, we trap and set a default one for its whole valid address space. With this patch we are now able to do that for large pages. To speed up the storage key insertion, we use __storage_key_init_range, which in turn will use sske_frame to set multiple storage keys with one instruction. As it has previously been used for debugging, we have to get rid of the default key check and make it quiescing. Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com> Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com> [replaced page_set_storage_key loop with __storage_key_init_range] Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
To do dirty logging with huge pages, we protect huge pmds in the gmap. When they are written to, we unprotect them and mark them dirty. We introduce the function gmap_test_and_clear_dirty_pmd which handles dirty sync for huge pages. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Acked-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
If the host invalidates a pmd, we also have to invalidate the corresponding gmap pmds, as well as flush them from the TLB. This is necessary, as we don't share the pmd tables between host and guest as we do with ptes. The clearing part of these three new functions sets a guest pmd entry to _SEGMENT_ENTRY_EMPTY, so the guest will fault on it and we will re-link it. Flushing the gmap is not necessary in the host's lazy local and csp cases. Both purge the TLB completely. Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Acked-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
Like for ptes, we also need invalidation notification for pmds, to make sure the guest lowcore pages are always accessible and to allow the later addition of shadowed pmds. With PMDs we do not have PGSTEs or some other bits we could use in the host PMD. Instead we pick one of the free bits in the gmap PMD. Every time a host pmd is invalidated, we check whether the respective gmap PMD has the bit set and in that case fire up the notifier. Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
-
By Janosch Frank
Let's allow pmds to be linked into the gmap for the upcoming s390 KVM huge page support. Before this patch we copied the full userspace pmd entry. This is not correct, as it contains SW-defined bits that might be interpreted differently in the GMAP context. Now we only copy over the hardware-relevant information, leaving out the software bits. Signed-off-by: Janosch Frank <frankja@linux.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
Currently we use the software PGSTE bits PGSTE_IN_BIT and PGSTE_VSIE_BIT to notify before an invalidation occurs on a prefix page or a VSIE page respectively. Both bits are pgste specific, but are used when protecting a memory range. Let's introduce abstract GMAP_NOTIFY_* bits that will be realized into the respective bits when gmap DAT table entries are protected. Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: David Hildenbrand <david@redhat.com>
-
By Janosch Frank
This patch reworks the gmap_protect_range logic and extracts the pte handling into its own function. Also we now walk to the pmd and make it accessible in the function for later use. This way we can add huge page handling logic more easily. Signed-off-by: Janosch Frank <frankja@linux.vnet.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Reviewed-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
-
- 26 July 2018, 3 commits
-
-
By Paul Mackerras
Commit 1e175d2e ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full VCPU ID space", 2018-07-25) added code that uses kvm->arch.emul_smt_mode before any VCPUs are created. However, userspace can change kvm->arch.emul_smt_mode at any time up until the first VCPU is created. Hence it is (theoretically) possible for the check in kvmppc_core_vcpu_create_hv() to race with another userspace thread changing kvm->arch.emul_smt_mode. This fixes it by moving the test that uses kvm->arch.emul_smt_mode into the block where kvm->lock is held. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
By Paul Mackerras
Commit 1e175d2e ("KVM: PPC: Book3S HV: Pack VCORE IDs to access full VCPU ID space", 2018-07-25) allowed use of VCPU IDs up to KVM_MAX_VCPU_ID on POWER9 in all guest SMT modes and guest emulated hardware SMT modes. However, with the current definition of KVM_MAX_VCPU_ID, a guest SMT mode of 1 and an emulated SMT mode of 8, it is only possible to create KVM_MAX_VCPUS / 2 VCPUs, because threads_per_subcore is 4 on POWER9 CPUs. (Using an emulated SMT mode of 8 is useful when migrating VMs to or from POWER8 hosts.) This increases KVM_MAX_VCPU_ID to 8 * KVM_MAX_VCPUS when HV KVM is configured in, so that a full complement of KVM_MAX_VCPUS VCPUs can be created on POWER9 in all guest SMT modes and emulated hardware SMT modes. Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
-
By Sam Bobroff
It is not currently possible to create the full number of possible VCPUs (KVM_MAX_VCPUS) on Power9 with KVM-HV when the guest uses fewer threads per core than its core stride (or "VSMT mode"). This is because the VCORE ID and XIVE offsets grow beyond KVM_MAX_VCPUS even though the VCPU ID is less than KVM_MAX_VCPU_ID. To address this, "pack" the VCORE ID and XIVE offsets by using knowledge of the way the VCPU IDs will be used when there are fewer guest threads per core than the core stride. The primary thread of each core will always be used first. Then, if the guest uses more than one thread per core, these secondary threads will sequentially follow the primary in each core. So, the only way an ID above KVM_MAX_VCPUS can be seen is if the VCPUs are being spaced apart, so at least half of each core is empty, and IDs between KVM_MAX_VCPUS and (KVM_MAX_VCPUS * 2) can be mapped into the second half of each core (4..7, in an 8-thread core). Similarly, if IDs above KVM_MAX_VCPUS * 2 are seen, at least 3/4 of each core is being left empty, and we can map down into the second and third quarters of each core (2, 3 and 5, 6 in an 8-thread core). Lastly, if IDs above KVM_MAX_VCPUS * 4 are seen, only the primary threads are being used and 7/8 of the core is empty, allowing use of the 1, 5, 3 and 7 thread slots. (Strides less than 8 are handled similarly.) This allows the VCORE ID or offset to be calculated quickly from the VCPU ID or XIVE server numbers, without access to the VCPU structure. [paulus@ozlabs.org - tidied up comment a little, changed some WARN_ONCE to pr_devel, wrapped line, fixed id check.] Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
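A rough user-space model of just the first "fold" described above (illustrative only; the kernel's actual helper also factors in the emulated SMT stride, handles the further folds, and adds sanity checks):

```c
#define KVM_MAX_VCPUS	1024	/* illustrative value only */

/* Simplified model: with a guest stride of 8 and at most half of each
 * 8-thread core populated, an ID in [KVM_MAX_VCPUS, 2 * KVM_MAX_VCPUS)
 * can reuse the otherwise-empty second half (thread slots 4..7) of its
 * core, so the packed value stays below KVM_MAX_VCPUS. */
static unsigned int pack_vcore_id(unsigned int id)
{
	unsigned int fold   = id / KVM_MAX_VCPUS;	/* which "fold" of the ID space */
	unsigned int offset = fold ? 4 : 0;		/* only the first fold modeled */

	return (id % KVM_MAX_VCPUS) + offset;
}
```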
-
- 23 July 2018, 2 commits
-
-
By Linus Torvalds
-
By Linus Torvalds
Pull NVMe fixes from Christoph Hellwig: - fix a regression in 4.18 that causes a memory leak on probe failure (Keith Busch) - fix a deadlock in the passthrough ioctl code (Scott Bauer) - don't enable AENs if not supported (Weiping Zhang) - fix an old regression in metadata handling in the passthrough ioctl code (Roland Dreier) * tag 'nvme-for-4.18' of git://git.infradead.org/nvme: nvme: fix handling of metadata_len for NVME_IOCTL_IO_CMD nvme: don't enable AEN if not supported nvme: ensure forward progress during Admin passthru nvme-pci: fix memory leak on probe failure
-