- 31 October 2013, 1 commit

Committed by Borislav Petkov
Add a kvm ioctl which states which system functionality kvm emulates. The format used is that of CPUID, and we return the corresponding CPUID bits set for the functionality we do emulate. Make sure ->padding is being passed in clean from userspace so that we can use it for something in the future, after the ioctl gets cast in stone. s/kvm_dev_ioctl_get_supported_cpuid/kvm_dev_ioctl_get_cpuid/ while at it.

Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
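
The padding check described above can be sketched as follows; the helper name and exact layout are assumptions, not the literal patch:

```c
/* Reject entries whose padding is not zeroed, so the field can be
 * given a meaning later without breaking existing userspace. */
static bool sanity_check_entries(struct kvm_cpuid_entry2 __user *entries,
				 __u32 num_entries)
{
	__u32 pad[3];
	__u32 i;

	for (i = 0; i < num_entries; i++) {
		if (copy_from_user(pad, entries[i].padding, sizeof(pad)))
			return true;
		if (pad[0] || pad[1] || pad[2])
			return true;
	}
	return false;
}
```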

- 28 October 2013, 3 commits

Committed by Jan Kiszka
If the host supports it, we can and should expose it to the guest as well, just like we already do with PIN_BASED_VIRTUAL_NMIS.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Jan Kiszka
__vmx_complete_interrupts stored uninjected NMIs in arch.nmi_injected, not arch.nmi_pending. So we actually need to check the former field in vmcs12_save_pending_event. This fixes the eventinj unit test when run in nested KVM.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
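
A condensed sketch of the corrected check (surrounding code omitted):

```c
static void vmcs12_save_pending_event(struct kvm_vcpu *vcpu,
				      struct vmcs12 *vmcs12)
{
	/* __vmx_complete_interrupts stores an uninjected NMI in
	 * arch.nmi_injected; the bug was testing arch.nmi_pending. */
	if (vcpu->arch.nmi_injected)
		vmcs12->idt_vectoring_info_field =
			INTR_TYPE_NMI_INTR | INTR_INFO_VALID_MASK | NMI_VECTOR;
}
```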

Committed by Jan Kiszka
As long as the hardware provides us with 2MB EPT pages, we can also expose them to the guest, because our shadow EPT code already supports this feature.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 15 October 2013, 2 commits

Committed by Raghavendra K T
We use a jump label to enable pv-spinlocks. With the changes in (442e0973 Merge branch 'x86/jumplabel'), the jump label behaviour changed in a way that would result in an eventual hang of the VM: we would end up in a situation where slow-path locks halt the vcpus, but the lock releaser cannot wake a vcpu up with an unlock kick. A similar problem existed in Xen; a more detailed description is available in a945928e (xen: Do not enable spinlocks before jump_label_init() has executed). This patch splits kvm_spinlock_init to separate the jump label changes from the pvops patching, and makes the jump label enabling happen after jump_label_init().

Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
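
The deferred part of the split looks roughly like this: an early_initcall that runs after jump_label_init() but before the other cpus boot:

```c
/* Only the static-key flip is deferred; the pvops patching stays in
 * kvm_spinlock_init(), which runs earlier. */
static __init int kvm_spinlock_init_jump(void)
{
	if (!kvm_para_available())
		return 0;
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return 0;

	static_key_slow_inc(&paravirt_ticketlocks_enabled);
	return 0;
}
early_initcall(kvm_spinlock_init_jump);
```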

Committed by chai wen
Page pinning is not mandatory in kvm async page fault processing, since after the async page fault event is delivered to a guest it accesses the page once again and does its own GUP. Drop the FOLL_GET flag in the GUP in the async_pf code, and simplify the check/clear processing.

Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Gu zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: chai wen <chaiw.fnst@cn.fujitsu.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
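
A sketch of the resulting GUP call in the async_pf worker; with a NULL pages array, get_user_pages() does not take a page reference (no FOLL_GET):

```c
/* The guest re-faults and does its own GUP after the event is
 * delivered, so no page reference needs to be held here. */
down_read(&mm->mmap_sem);
get_user_pages(NULL, mm, addr, 1, 1, 0, NULL, NULL);
up_read(&mm->mmap_sem);
```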

- 14 October 2013, 1 commit

Committed by Christoffer Dall
The gfn_to_index function relies on huge page defines which either may not make sense on systems that don't support huge pages or are defined in an inconvenient way for other architectures. Since this is x86-specific, move the function to arch/x86/include/asm/kvm_host.h.

Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
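
The moved helper reads roughly as follows:

```c
static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
{
	/* KVM_HPAGE_GFN_SHIFT(PT_PAGE_TABLE_LEVEL) must be 0. */
	return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
		(base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
}
```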

- 11 October 2013, 2 commits

Committed by Ingo Molnar
Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down a kernel crash to a GCC bug: GCC miscompiles certain 'asm goto' constructs, as outlined here: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670 Implement a workaround suggested by Jakub Jelinek.

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
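
The workaround, as added to compiler-gcc4.h, forces a compiler barrier after every asm goto:

```c
/* GCC bug 58670: an empty asm after the asm goto keeps GCC from
 * miscompiling the construct. */
#define asm_volatile_goto(x...)	do { asm goto(x); asm (""); } while (0)
```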

Committed by Arthur Chunqi Li
This patch contains the following two changes:

1. Fix the bug in nested preemption timer support. If we vmexit L2->L0 for a reason not emulated by L1, the preemption timer value should be saved on such exits.
2. Add support for the "Save VMX-preemption timer value" VM-Exit control to nVMX.

With this patch, nested VMX preemption timer features are fully supported.

Signed-off-by: Arthur Chunqi Li <yzt356@gmail.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
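
A sketch of the save path on a reflected exit; the vmcs12 field is the one this patch introduces:

```c
/* If L1 asked for the timer value to be saved on VM exit, capture the
 * remaining value before handing the exit to L1. */
if (vmcs12->vm_exit_controls & VM_EXIT_SAVE_VMX_PREEMPTION_TIMER)
	vmcs12->vmx_preemption_timer_value =
		vmcs_read32(VMX_PREEMPTION_TIMER_VALUE);
```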

- 10 October 2013, 2 commits

Committed by Frediano Ziglio
Due to the way the kernel is initialized under Xen, it is possible that the ring1 selector used by the kernel for the boot cpu ends up being copied to userspace, leading to a segmentation fault in userspace. The Xen code in the kernel initializes non-boot cpus with correct selectors (ds and es set to __USER_DS), but the boot one keeps the ring1 selectors (passed by Xen). On task context switch (switch_to) we assume that ds, es and cs already point to __USER_DS and __KERNEL_CS, so these selectors are not changed. If the processor is an Intel one that supports the sysenter instruction, sysenter/sysexit is used, so ds and es are not restored when switching back from kernel to userspace. In case the selectors point to ring1 instead of __USER_DS, the userspace code will crash on its first memory access attempt (to be precise, Xen, on the emulated iret used to do sysexit, will detect this and set ds and es to zero, which leads to a GPF anyway). Now if a userspace process calls into the kernel using sysenter and gets rescheduled (for me it happens with a specific init calling wait4), it can happen that the ring1 selector is set in ds and es. This is quite hard to detect, because after a while these selectors get fixed up (__USER_DS seems sticky). Bisecting the code, commit 7076aada appears to be the first one that has this issue.

Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
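
The fix boils down to giving the boot CPU the same selectors that the non-boot cpus already get during bringup; a sketch:

```c
/* Replace the ring1 selectors handed over by Xen with __USER_DS. */
loadsegment(es, __USER_DS);
loadsegment(ds, __USER_DS);
```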

Committed by Gleb Natapov
72f85795 broke shadow on EPT. This patch reverts it and fixes PAE on nEPT (which the reverted commit fixed) in another way. Shadow on EPT was broken because while L1 builds a shadow page table for L2 (which is PAE while L2 is in real mode), it never loads L2's GUEST_PDPTR[0-3]. They do not need to be loaded because, without nested virtualization, the HW does this during guest entry if EPT is disabled; but in our case L0 emulates L2's vmentry while EPT is enabled, so we cannot rely on vmcs12->guest_pdptr[0-3] to contain up-to-date values and need to re-read the PDPTEs from L2 memory. This is what kvm_set_cr3() is doing, but by clearing cache bits during L2 vmentry we drop the values that kvm_set_cr3() read from memory. So why does the same code not work for PAE on nEPT? kvm_set_cr3() reads the pdptes into vcpu->arch.walk_mmu->pdptrs[]. walk_mmu points to vcpu->arch.nested_mmu while a nested guest is running, but ept_load_pdptrs() uses vcpu->arch.mmu, which contains incorrect values. Fix that by using walk_mmu in ept_(load|save)_pdptrs.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
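
A condensed sketch of ept_load_pdptrs() after the fix:

```c
static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
{
	/* walk_mmu points at arch.nested_mmu while a nested guest runs,
	 * and holds the PDPTEs kvm_set_cr3() read from L2 memory;
	 * arch.mmu would hold stale values here. */
	struct kvm_mmu *mmu = vcpu->arch.walk_mmu;

	if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
		vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
		vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
		vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
		vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
	}
}
```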

- 06 October 2013, 1 commit

Committed by Ville Syrjälä
Dell Latitude E5410 needs reboot=pci to actually reboot.

Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://lkml.kernel.org/r/1380888964-14517-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
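
The quirk is a DMI table entry in arch/x86/kernel/reboot.c; the exact match strings below are an assumption modeled on the neighboring Dell entries:

```c
{	/* Handle problems with rebooting on the Dell Latitude E5410. */
	.callback = set_pci_reboot,
	.ident = "Dell Latitude E5410",
	.matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
		DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E5410"),
	},
},
```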

- 05 October 2013, 2 commits

Committed by Bjorn Helgaas
This reverts commit 07f9b61c. 07f9b61c was intended to be a cleanup that didn't change anything, but in fact, for systems without _CBA (which is almost everything), it broke extended config space for domain 0 and all config space for other domains.

Reference: http://lkml.kernel.org/r/20131004011806.GE20450@dangermouse.emea.sgi.com
Reported-by: Hedi Berriche <hedi@sgi.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>

Committed by Thomas Petazzoni
Commit ebd97be6 ('PCI: remove ARCH_SUPPORTS_MSI kconfig option') removed the ARCH_SUPPORTS_MSI option which architectures could select to indicate that they support MSI. Now, all architectures are supposed to build fine when MSI support is enabled: instead of having the architecture tell *when* MSI support can be used, it's up to the architecture code to ensure that MSI support can be enabled. On x86, commit ebd97be6 removed the following line:

select ARCH_SUPPORTS_MSI if (X86_LOCAL_APIC && X86_IO_APIC)

which meant that MSI support was only available when the local APIC and I/O APIC were enabled. While this is always true on SMP or x86-64, it is not necessarily the case on i386 !SMP. The below patch makes sure that the local APIC and I/O APIC support is always enabled when MSI support is enabled. To do so, it:

* Ensures the X86_UP_APIC option is not visible when PCI_MSI is enabled. This is the option that allows, on UP machines, enabling or disabling the APIC support. It is already not visible on SMP systems or x86-64 systems, for example. We're simply also making it invisible on i386 MSI systems.

* Ensures that the X86_LOCAL_APIC and X86_IO_APIC options are 'y' when PCI_MSI is enabled.

Notice that this change requires a change in drivers/iommu/Kconfig to avoid a recursive Kconfig dependency. The AMD_IOMMU option selects PCI_MSI, but was depending on X86_IO_APIC. This dependency is no longer needed: as soon as PCI_MSI is selected, the presence of X86_IO_APIC is guaranteed. Moreover, AMD_IOMMU already depended on X86_64, which already guaranteed that X86_IO_APIC was enabled, so this dependency was redundant anyway.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Link: http://lkml.kernel.org/r/1380794354-9079-1-git-send-email-thomas.petazzoni@free-electrons.com
Reported-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

- 04 October 2013, 1 commit

Committed by Peter Zijlstra
Currently the cap_user_time_zero capability has different tests than cap_user_time, even though they expose the exact same data. Switch from CONSTANT && NONSTOP to sched_clock_stable to also deal with multi-cabinet machines, and drop the tsc_disabled() check: none of this will work sanely without the TSC anyway.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-nmgn0j0muo1r4c94vlfh23xy@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
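
A sketch of the unified test in arch_perf_update_userpage():

```c
userpg->cap_user_time = 0;
userpg->cap_user_time_zero = 0;

/* One condition for both capabilities: a stable sched_clock. */
if (!sched_clock_stable)
	return;

userpg->cap_user_time = 1;
userpg->cap_user_time_zero = 1;
```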

- 03 October 2013, 8 commits

Committed by Paolo Bonzini
kvm_mmu initialization is mostly filling in function pointers, there is no way for it to fail. Clean up unused return values.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by Paolo Bonzini
They do the same thing, and destroy_kvm_mmu can be confused with kvm_mmu_destroy.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by Paolo Bonzini
The new_cr3 MMU callback has been a wrapper for mmu_free_roots since commit e676505a (KVM: MMU: Force cr3 reload with two dimensional paging on mov cr3 emulation, 2012-07-08). The commit message mentioned that "mmu_free_roots() is somewhat of an overkill, but fixing that is more complicated and will be done after this minimal fix". One year has passed, and no one really felt the need to do a different fix. Wrap the call with a kvm_mmu_new_cr3 function for clarity, but remove the callback.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by Paolo Bonzini
The free MMU callback has been a wrapper for mmu_free_roots since mmu_free_roots itself was introduced (commit 17ac10ad, [PATCH] KVM: MU: Special treatment for shadow pae root pages, 2007-01-05), and has always been the same for all MMU cases. Remove the indirection as it is useless.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by Paolo Bonzini
This makes the interface more deterministic for userspace, which can expect (after configuring only the features it supports) to get exactly the same state from the kernel, independent of the host CPU and kernel version.

Suggested-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by Paolo Bonzini
A guest can still attempt to save and restore XSAVE states even if they have been masked in CPUID leaf 0Dh. This usually is not visible to the guest, but is still wrong: "Any attempt to set a reserved bit (as determined by the contents of EAX and EDX after executing CPUID with EAX=0DH, ECX=0H) in XCR0 for a given processor will result in a #GP exception". The patch also performs the same checks as __kvm_set_xcr in KVM_SET_XSAVE. This catches migration from a newer to an older kernel/processor before the guest starts running.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
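
A sketch of the check; guest_supported_xcr0 is assumed to cache the XCR0 bits exposed by the guest's CPUID leaf 0Dh:

```c
u64 valid_bits = vcpu->arch.guest_supported_xcr0 | XSTATE_FP;

/* Setting any bit outside the guest-visible set must fail; the
 * caller injects #GP, and KVM_SET_XSAVE applies the same mask. */
if (xcr0 & ~valid_bits)
	return 1;
```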

Committed by Paolo Bonzini
XSAVE entries that KVM does not support are reported by KVM_GET_SUPPORTED_CPUID for leaf 0Dh index 0 if the host supports them; they should be left out unless there is also hypervisor support for them. Sub-leafs are correctly handled in supported_xcr0_bit; fix index 0 to match.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>

Committed by David Herrmann
IORESOURCE_BUSY is used to mark temporary driver mem-resources instead of global regions. This suppresses warnings if regions overlap with a region marked as BUSY. This was always the case for VESA/VGA/EFI framebuffer regions, so do the same for simplefb regions. The reason we do this is to allow device handover to real GPU drivers like i915/radeon/nouveau which get the same regions via PCI BARs. Maybe at some point we will be able to unregister platform devices properly during the handover. In this case the simplefb region would get removed before the new region is created. However, this is currently not the case and would require rather huge changes in remove_conflicting_framebuffers(). Add the BUSY marker now and try to eventually rewrite the handover for a next release. Also see kernel/resource.c for more information:

/*
 * if a resource is "BUSY", it's not a hardware resource
 * but a driver mapping of such a resource; we don't want
 * to warn for those; some drivers legitimately map only
 * partial hardware resources. (example: vesafb)
 */

This suppresses warnings like:

------------[ cut here ]------------
WARNING: CPU: 2 PID: 199 at arch/x86/mm/ioremap.c:171 __ioremap_caller+0x2e3/0x390()
Info: mapping multiple BARs. Your kernel is fine.
Call Trace:
 dump_stack+0x54/0x8d
 warn_slowpath_common+0x7d/0xa0
 warn_slowpath_fmt+0x4c/0x50
 iomem_map_sanity_check+0xac/0xe0
 __ioremap_caller+0x2e3/0x390
 ioremap_wc+0x32/0x40
 i915_driver_load+0x670/0xf50 [i915]
 ...

Reported-by: Tom Gundersen <teg@jklm.no>
Tested-by: Tom Gundersen <teg@jklm.no>
Tested-by: Pavel Roskin <proski@gnu.org>
Signed-off-by: David Herrmann <dh.herrmann@gmail.com>
Link: http://lkml.kernel.org/r/1380724864-1757-1-git-send-email-dh.herrmann@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
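
A sketch of the resource declaration; the resource name and the base/size variables are placeholders:

```c
struct resource res = {
	.name  = "BOOTFB",		/* assumed name */
	.flags = IORESOURCE_MEM | IORESOURCE_BUSY,
	.start = base,			/* from screen_info */
	.end   = base + size - 1,
};
```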

- 02 October 2013, 1 commit

Committed by Tom Gundersen
On my MacBook Air lfb_size is 4M, which makes the bitshift overflow (to 256GB, larger than 32 bits), meaning we fall back to efifb unnecessarily. Cast to u64 to avoid the overflow.

Signed-off-by: Tom Gundersen <teg@jklm.no>
Reviewed-by: David Herrmann <dh.herrmann@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Stephen Warren <swarren@nvidia.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Link: http://lkml.kernel.org/r/1380644320-1026-1-git-send-email-teg@jklm.no
Signed-off-by: Ingo Molnar <mingo@kernel.org>
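
The fix is essentially a widening cast before the shift; a sketch:

```c
/* lfb_size is 32-bit and counts 64K units here; 4M << 16 is 256 GB,
 * which wraps in 32 bits and makes the check always fail. */
if (len > (u64)si->lfb_size << 16)
	return -EINVAL;		/* VRAM smaller than advertised */
```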

- 30 September 2013, 5 commits

Committed by Paolo Bonzini
In commit e935b837 ("KVM: Convert kvm_lock to raw_spinlock"), the kvm_lock was made a raw lock. However, the kvm mmu_shrink() function tries to grab the (non-raw) mmu_lock within the scope of the raw locked kvm_lock being held. This leads to the following:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 55, name: kswapd0
Preemption disabled at: [<ffffffffa0376eac>] mmu_shrink+0x5c/0x1b0 [kvm]
Pid: 55, comm: kswapd0 Not tainted 3.4.34_preempt-rt
Call Trace:
 [<ffffffff8106f2ad>] __might_sleep+0xfd/0x160
 [<ffffffff817d8d64>] rt_spin_lock+0x24/0x50
 [<ffffffffa0376f3c>] mmu_shrink+0xec/0x1b0 [kvm]
 [<ffffffff8111455d>] shrink_slab+0x17d/0x3a0
 [<ffffffff81151f00>] ? mem_cgroup_iter+0x130/0x260
 [<ffffffff8111824a>] balance_pgdat+0x54a/0x730
 [<ffffffff8111fe47>] ? set_pgdat_percpu_threshold+0xa7/0xd0
 [<ffffffff811185bf>] kswapd+0x18f/0x490
 [<ffffffff81070961>] ? get_parent_ip+0x11/0x50
 [<ffffffff81061970>] ? __init_waitqueue_head+0x50/0x50
 [<ffffffff81118430>] ? balance_pgdat+0x730/0x730
 [<ffffffff81060d2b>] kthread+0xdb/0xe0
 [<ffffffff8106e122>] ? finish_task_switch+0x52/0x100
 [<ffffffff817e1e94>] kernel_thread_helper+0x4/0x10
 [<ffffffff81060c50>] ? __init_kthread_worker+0x

After the previous patch, kvm_lock need not be a raw spinlock anymore, so change it back.

Reported-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: kvm@vger.kernel.org
Cc: gleb@redhat.com
Cc: jan.kiszka@siemens.com
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
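
The change itself is a one-liner in virt/kvm/kvm_main.c, roughly:

```c
/* was: static DEFINE_RAW_SPINLOCK(kvm_lock); */
static DEFINE_SPINLOCK(kvm_lock);
```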

Committed by Gleb Natapov
If a #PF happens during delivery of an exception into L2 and L1 also does not have the page mapped in its shadow page table, then L0 needs to generate a vmexit to L2 with the original event in IDT_VECTORING_INFO, but the current code combines both exceptions and generates #DF instead. Fix that by providing an nVMX-specific function to handle page faults during the page table walk that handles this case correctly.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Gleb Natapov
All exceptions should be checked for intercept during delivery to L2, but we currently check only #PF. Drop nested_run_pending while we are at it, since an exception cannot be injected during vmentry anyway.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
[Renamed the nested_vmx_check_exception function. - Paolo]
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Gleb Natapov
If an exception causes a vmexit directly, it should not be reported in IDT_VECTORING_INFO during the exit. For that we need to be able to distinguish between an exception that is injected into the nested VM and one that is reinjected because its delivery failed. Fortunately we already have a mechanism to do so for nested SVM, so here we just use the correct function to requeue exceptions and make sure that a reinjected exception is not moved to IDT_VECTORING_INFO during vmexit emulation and not re-checked for interception during delivery.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Gleb Natapov
An EXIT_REASON_VMLAUNCH/EXIT_REASON_VMRESUME exit does not mean that the nested VM will actually run during the next entry. Move setting nested_run_pending closer to the vmentry emulation code, and move its clearing closer to the vmexit, to minimize the amount of code that will erroneously run with nested_run_pending set.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 28 September 2013, 1 commit

Committed by Ingo Molnar
Ran into this cryptic PMU bootup log recently:

[ 0.124047] Performance Events:
[ 0.125000] smpboot: ...

Turns out we print this if no PMU is detected. Fall back to the right condition so that the following is printed:

[ 0.122381] Performance Events: no PMU driver, software events only.

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Link: http://lkml.kernel.org/n/tip-u2fwaUffakjp0qkpRfqljgsn@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
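
A sketch of the intended flow; the exact condition in the detection path is an assumption:

```c
pr_info("Performance Events: ");
if (err) {
	/* No PMU detected: say so instead of printing nothing. */
	pr_cont("no PMU driver, software events only.\n");
	return 0;
}
pr_cont("%s PMU driver.\n", x86_pmu.name);
```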

- 27 September 2013, 1 commit

Committed by Suravee Suthikulpanit
On AMD family 14h, applying a microcode patch on one core (core0) would also affect the other core (core1) in the same compute unit. The driver would skip applying the patch on core1, but it still needs to update the kernel structures to reflect the proper patch level. The current logic is not updating struct ucode_cpu_info.cpu_sig.rev of the skipped core. This causes /sys/devices/system/cpu/cpu1/microcode/version to report an incorrect patch level, as shown below:

$ grep . cpu?/microcode/version
cpu0/microcode/version:0x600063d
cpu1/microcode/version:0x6000626
cpu2/microcode/version:0x600063d
cpu3/microcode/version:0x6000626
cpu4/microcode/version:0x600063d

Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Acked-by: Borislav Petkov <bp@suse.de>
Cc: <bp@alien8.de>
Cc: <jacob.w.shin@gmail.com>
Cc: <herrmann.der.user@googlemail.com>
Link: http://lkml.kernel.org/r/1285806432-1995-1-git-send-email-suravee.suthikulpanit@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
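
A sketch of the bookkeeping for the skipped core; the field names follow the microcode_amd driver, but the surrounding logic is assumed:

```c
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;

/* The sibling already runs the new patch via its compute-unit peer;
 * record the level even though the application step was skipped. */
uci->cpu_sig.rev = mc_amd->hdr.patch_id;
```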

- 25 September 2013, 8 commits

Committed by David Vrabel
On hosts with more than 168 GB of memory, a 32-bit guest may attempt to grant map an MFN that it cannot look up in its mapping of the m2p table. There is an m2p lookup as part of m2p_add_override() and m2p_remove_override(). The lookup falls off the end of the mapped portion of the m2p and (because the mapping is at the highest virtual address) wraps around, so the lookup causes a fault on what appears to be a user space address. do_page_fault() (thinking it's a fault to a userspace address) tries to lock mm->mmap_sem. If the gntdev device is used for the grant map, m2p_add_override() is called from gnttab_mmap() with mm->mmap_sem already locked. do_page_fault() then deadlocks. The deadlock would most commonly occur when a 64-bit guest is started and xenconsoled attempts to grant map its console ring. Introduce mfn_to_pfn_no_overrides() which checks that the MFN is within the mapped portion of the m2p table before accessing the table, and use this in m2p_add_override(), m2p_remove_override(), and mfn_to_pfn() (which already had the correct range check). All faults caused by accessing the non-existent parts of the m2p are thus within the kernel address space, and exception_fixup() is called without trying to lock mm->mmap_sem. This means that for MFNs that are outside the mapped range of the m2p, mfn_to_pfn() will always look in the m2p overrides. This is correct because it must be a foreign MFN (and the PFN in the m2p in this case is only relevant for the other domain).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
--
v3: check for auto_translated_physmap in mfn_to_pfn_no_overrides()
v2: in mfn_to_pfn() look in m2p_overrides if the MFN is out of range as it's probably foreign
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
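
A sketch of the new helper, close to the shape described above:

```c
static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn)
{
	unsigned long pfn = ~0UL;

	if (xen_feature(XENFEAT_auto_translated_physmap))
		return mfn;

	/* Bail out before touching the m2p if the MFN lies beyond the
	 * mapped portion of the table. */
	if (unlikely(mfn >= machine_to_phys_nr))
		return ~0UL;

	/* The access can still fault (e.g. ballooned-out frames), so
	 * use the exception-table-protected accessor. */
	__get_user(pfn, &machine_to_phys_mapping[mfn]);
	return pfn;
}
```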

Committed by Gleb Natapov
Bit 12 is undefined in any of the following cases:

- If the "NMI exiting" VM-execution control is 1 and the "virtual NMIs" VM-execution control is 0.
- If the VM exit sets the valid bit in the IDT-vectoring information field.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
[Add parentheses around & within && - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
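
Assuming this refers to bit 12 of the EPT-violation exit qualification ("NMI unblocking due to IRET"), the guarded check looks roughly like:

```c
/* Only honor bit 12 when it is actually defined. */
if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
    cpu_has_virtual_nmis() &&
    (exit_qualification & INTR_INFO_UNBLOCK_NMI))
	vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
```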

Committed by Dave Jones
This seems to have been copied from the Optiplex 990 entry above, but someone forgot to change the ident text.

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Link: http://lkml.kernel.org/r/20130925001344.GA13554@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Committed by Konrad Rzeszutek Wilk
xen_init_spinlocks() currently calls static_key_slow_inc() before jump_label_init() is invoked. When CONFIG_JUMP_LABEL is set (which usually is the case), the effect of this static_key_slow_inc() is deferred until after jump_label_init(). This is different from when CONFIG_JUMP_LABEL is not set, in which case the key is set immediately. Thus, depending on the value of the config option, we may observe different behavior. In addition, when we come to __jump_label_transform() from jump_label_init(), the key (paravirt_ticketlocks_enabled) is already enabled. On processors where ideal_nop is not the same as default_nop this will cause a BUG(), since it is expected that before a key is enabled the latter is replaced by the former during initialization. To address this problem we need to move static_key_slow_inc(&paravirt_ticketlocks_enabled) so that it is called after jump_label_init(). We also need to make sure that this is done before other cpus start to boot. early_initcall appears to be a good place to do so. (Note that we cannot move the whole of xen_init_spinlocks() there, since pv_lock_ops need to be set before alternative_instructions() runs.)

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
[v2: Added extra comments in the code]
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
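
The deferred flip, as an early_initcall:

```c
static __init int xen_init_spinlocks_jump(void)
{
	if (!xen_pvspin)
		return 0;

	/* Runs after jump_label_init() but before other cpus boot. */
	static_key_slow_inc(&paravirt_ticketlocks_enabled);
	return 0;
}
early_initcall(xen_init_spinlocks_jump);
```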

Committed by Jan Kiszka
Now that we provide EPT support, there is no reason to torture our guests by hiding the relieving unrestricted guest mode feature. We just need to relax CR0 checks for always-on bits as PE and PG can now be switched off.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Jan Kiszka
Implement and advertise VM_EXIT_SAVE_IA32_EFER. L0 traps EFER writes unconditionally, so we always find the current L2 value in the architectural state.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Jan Kiszka
Fiddling with CR3 for L2 is L1's job. It may set its own, different identity map, or simply leave it alone if unrestricted guest mode is enabled. This also fixes reading back the current CR3 on L2 exits for reporting it to L1.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Committed by Jan Kiszka
kvm_set_cr0 performs checks on the state transition that may prevent loading L1's cr0. For now we rely on the hardware to catch invalid states loaded by L1 into its VMCS. Still, consistency checks on the host state part of the VMCS on guest entry will have to be improved later on.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Reviewed-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

- 23 September 2013, 1 commit

Committed by Masoud Sharbiani
Dell PowerEdge C6100 machines fail to completely reboot about 20% of the time.

Signed-off-by: Masoud Sharbiani <msharbiani@twitter.com>
Signed-off-by: Vinson Lee <vlee@twitter.com>
Cc: Robin Holt <holt@sgi.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/1379717947-18042-1-git-send-email-vlee@freedesktop.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>