- 31 December 2008, 40 commits
-
Committed by Eduardo Habkost
svm.h will be used by core code that is independent of KVM, so I am moving it outside the arch/x86/kvm directory. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Eduardo Habkost
vmx.h will be used by core code that is independent of KVM, so I am moving it outside the arch/x86/kvm directory. Signed-off-by: Eduardo Habkost <ehabkost@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
We used to defer invalidating userspace TLB entries until jumping out of the kernel. This was causing MMU weirdness most easily triggered by using a pipe in the guest, e.g. "dmesg | tail". I believe the problem was that after the guest kernel changed the PID (part of a context switch), the old process's mappings were still present, and so copy_to_user() on the "return to new process" path ended up using stale mappings. Testing with large pages (64K) exposed the problem, probably because with 4K pages, pressure on the TLB faulted all of process A's mappings out before the guest kernel could insert any for process B. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Bare-metal Linux on 440 can "overmap" RAM in the kernel linear map, so that it can use large (256MB) mappings even if memory isn't a multiple of 256MB. To prevent the hardware prefetcher from loading from an invalid physical address through that mapping, it's marked Guarded. However, KVM must ensure that all guest mappings are backed by real physical RAM (since a deliberate access through a guarded mapping could still cause a machine check). Accordingly, we don't need to make our mappings guarded, so let's allow prefetching as the designers intended. Curiously this patch didn't affect performance at all on the quick test I tried, but it's clearly the right thing to do anyway and may improve other workloads. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
We have an accessor; might as well use it. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Sheng Yang
Commit 7fd49de9773fdcb7b75e823b21c1c5dc1e218c14 "KVM: ensure that memslot userspace addresses are page-aligned" broke kernel-space-allocated memory slots, because for those the userspace_addr is invalid. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
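A minimal sketch of the repaired validation, assuming it lives in __kvm_set_memory_region() with a user_alloc flag distinguishing the two slot types (the helper name is hypothetical):

    /* kernel-allocated slots carry no meaningful userspace_addr,
     * so only user-allocated slots get the alignment check */
    static int check_memslot_alignment(struct kvm_userspace_memory_region *mem,
                                       int user_alloc)
    {
            if (mem->memory_size & (PAGE_SIZE - 1))
                    return -EINVAL;
            if (mem->guest_phys_addr & (PAGE_SIZE - 1))
                    return -EINVAL;
            if (user_alloc && (mem->userspace_addr & (PAGE_SIZE - 1)))
                    return -EINVAL;
            return 0;
    }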
-
Committed by Xiantao Zhang
Use the kernel's corresponding macro instead. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Make sure that CONFIG_KVM cannot be selected without processor support (currently, 440 is the only processor implementation available). Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Bad page translation and silent guest failure ensue if the userspace address is not page-aligned. I hit this problem using large (host) pages with qemu, because qemu currently has a hardcoded 4096-byte alignment for guest memory allocations. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Nitin A Kamble
The code that traverses the cpuid data array to count leaves of a given type is currently broken. This patch fixes two things in it: 1. Set the first counting entry's KVM_CPUID_FLAG_STATE_READ_NEXT flag; without it the code will never find a valid entry. 2. The stop condition in the for loop that looks for the next unflagged entry is also broken: it needs to stop when it finds a matching entry, and in the case of a count of 1, that will be the same entry found in this iteration. Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
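A hedged sketch of the repaired traversal (modeled on the stateful-CPUID helper of that era; treat details as assumptions). Starting the search at i + 1 and wrapping modulo nent means that with a count of 1 the search terminates on the very entry it started from:

    static int move_to_next_stateful_cpuid_entry(struct kvm_vcpu *vcpu, int i)
    {
            struct kvm_cpuid_entry2 *e = &vcpu->arch.cpuid_entries[i];
            int j, nent = vcpu->arch.cpuid_nent;

            e->flags &= ~KVM_CPUID_FLAG_STATE_READ_NEXT;
            for (j = (i + 1) % nent; ; j = (j + 1) % nent) {
                    struct kvm_cpuid_entry2 *ej = &vcpu->arch.cpuid_entries[j];

                    if (ej->function == e->function) {
                            /* mark the entry the next CPUID read should use */
                            ej->flags |= KVM_CPUID_FLAG_STATE_READ_NEXT;
                            return j;
                    }
            }
    }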
-
Committed by Nitin A Kamble
For cpuid leaf 0xb, bits 8-15 of the ECX register define the end of the counting leaf. The previous code used bits 0-7 for this purpose, which is a bug. Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
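In code terms the one-line fix looks roughly like this (the loop shape is an assumption based on the cpuid-entry builder of that era):

    /* enumerate leaf 0xb sub-leaves until the level type reads as 0 */
    for (i = 1; *nent < maxnent; ++i) {
            level_type = entry[i - 1].ecx & 0xff00;   /* was: & 0xff */
            if (!level_type)
                    break;
            do_cpuid_1_ent(&entry[i], function, i);
            ++*nent;
    }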
-
Committed by Hollis Blanchard
Set ESR[PTR] when emulating a guest trap. This allows Linux guests to properly handle WARN_ON() (i.e. detect that it's a non-fatal trap). Also remove the debugging printk in trap emulation. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
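A sketch of the emulation path, with the wrapper name and the queueing helper as assumptions (ESR[PTR] is the Book E "program interrupt caused by trap" bit):

    static void kvmppc_emulate_trap(struct kvm_vcpu *vcpu)
    {
            vcpu->arch.esr |= ESR_PTR;         /* tell the guest: trap, not illegal op */
            kvmppc_core_queue_program(vcpu);   /* raise the program interrupt */
    }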
-
Committed by Hollis Blanchard
In kvmppc_deliver_interrupt there is just one case left in the switch, and it is a rare one (less than 8% when looking at the exit numbers). Therefore we can at least drop the switch/case in favor of an if. I inserted an unlikely too, but that's open for discussion. In kvmppc_can_deliver_interrupt all frequent cases are in the default case. I know compilers are smart, but we can make it easier for them: writing down all options and removing the default case, combined with the fact that the values are constants 0..15, should allow the compiler to emit a simple jump table. Modifying kvmppc_can_deliver_interrupt pointed me to the fact that gcc seems unable to reduce priority_exception[x] to a build-time constant. Therefore I changed the usage of the translation arrays in the interrupt delivery path completely: it now uses the priority directly, without translating it to an irq, along the full irq delivery path. To make that possible, the ivpr regs are now stored by their priority. Additionally, the decision made in kvmppc_can_deliver_interrupt is already sufficient to get the value of interrupt_msr_mask[x]. Therefore we can replace the 16x4-byte array used here with a single 4-byte variable (there might still be one cache miss, but the chance of finding this single variable in cache is better than finding the right entry of the whole array). Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Since we use an unsigned long here anyway, we can use the optimized __ffs. Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
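A sketch of the delivery loop this enables; the function name, BOOKE_MAX_INTERRUPT, and the exact loop shape are assumptions:

    static void deliver_pending_interrupts(struct kvm_vcpu *vcpu)
    {
            unsigned long *pending = &vcpu->arch.pending_exceptions;
            unsigned int priority;

            if (!*pending)
                    return;                 /* __ffs() is undefined on 0 */
            priority = __ffs(*pending);     /* lowest set bit = highest priority */
            while (priority <= BOOKE_MAX_INTERRUPT) {
                    kvmppc_deliver_interrupt(vcpu, priority);
                    priority = find_next_bit(pending,
                                             BITS_PER_BYTE * sizeof(*pending),
                                             priority + 1);
            }
    }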
-
Committed by Hollis Blanchard
Currently we use an unnecessary if & switch to detect some cases. To be honest, we don't need the light_exits counter anyway, because we can calculate it from the others. sum_exits can also be calculated, so we can remove that too. MMIO, DCR and INTR can be counted in other places without these additional control structures (the INTR case was never hit anyway). The handling of BOOKE_INTERRUPT_EXTERNAL/BOOKE_INTERRUPT_DECREMENTER is similar, but we can avoid the additional if by copying 3 lines of code. I thought about a goto there to prevent duplicate lines, but rewriting three lines is better style than a goto across switch/case statements (it's also not enough code to justify a new inline function). Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
When changing some msr bits, e.g. problem state, we need to take special care. We call the function in our mtmsr emulation (not needed for wrtee[i]), but we don't call kvmppc_set_msr if we change the msr via the set_regs ioctl. It's a corner case we have never hit so far, but I assume it should be kvmppc_set_msr in our arch set_regs function (I found it because it is also a corner case when using pv support, which would miss the update otherwise). Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com> Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
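A sketch of the corrected set_regs path (surrounding register copies elided):

    int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
    {
            vcpu->arch.pc = regs->pc;
            kvmppc_set_msr(vcpu, regs->msr);  /* was: vcpu->arch.msr = regs->msr; */
            /* ... remaining registers copied unchanged ... */
            return 0;
    }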
-
Committed by Hollis Blanchard
However, some of these fields could be split into separate per-core structures in the future. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
This patch doesn't yet move all 44x-specific data into the new structure, but is the first step down that path. In the future we may also want to create a struct kvm_vcpu_booke. Based on a patch from Liu Yu <yu.liu@freescale.com>. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Needed to port to other Book E processors. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
Cores provide 3 emulation hooks, implemented for example in the new 4xx_emulate.c: kvmppc_core_emulate_op kvmppc_core_emulate_mtspr kvmppc_core_emulate_mfspr Strictly speaking the last two aren't necessary, but they provide for more informative error reporting ("unknown SPR"). Long term I'd like to have instruction decoding autogenerated from tables of opcodes, so that we could aggregate universal, Book E, and core-specific instructions more easily and without redundant switch statements. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
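The three hooks, sketched as prototypes (the exact signatures are an assumption based on this description):

    int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
                               unsigned int inst, int *advance);
    int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs);
    int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt);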
-
Committed by Hollis Blanchard
This is used in a couple of places in KVM, but isn't KVM-specific. However, this patch doesn't modify other in-kernel emulation code: - xmon uses a direct copy of ppc-opc.c from binutils - emulate_instruction() doesn't need it, because it can use a series of mask tests. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
This introduces a set of core-provided hooks. For 440, some of these are implemented by booke.c, with the rest in (the new) 44x.c. Note that these hooks are bound at link time, not run time. Since it is not possible to build a single kernel for both e500 and 440 (for example), using function pointers would only add overhead. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
The division was somewhat artificial and cumbersome, and had no functional benefit anyway: we can only run guests built for the real host processor. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
This will ease ports to other cores. Also remove the unused "struct kvm_tlb" while we're at it. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Hollis Blanchard
This will make it easier to provide implementations for other cores. Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Izik Eidus
Some areas of the kvm x86 mmu use a gfn offset inside a slot without unaliasing the gfn first. This patch makes sure that the gfn is unaliased, and adds gfn_to_memslot_unaliased() to save recalculating the unaliased gfn in case we already have it. Signed-off-by: Izik Eidus <ieidus@redhat.com> Acked-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
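A minimal sketch of the resulting split, assuming the linear slot walk of that era: the _unaliased variant does the raw lookup, and the public helper unaliases first, so callers that already hold an unaliased gfn can skip the extra translation.

    struct kvm_memory_slot *gfn_to_memslot_unaliased(struct kvm *kvm, gfn_t gfn)
    {
            int i;

            for (i = 0; i < kvm->nmemslots; ++i) {
                    struct kvm_memory_slot *memslot = &kvm->memslots[i];

                    if (gfn >= memslot->base_gfn &&
                        gfn < memslot->base_gfn + memslot->npages)
                            return memslot;
            }
            return NULL;
    }

    struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
    {
            gfn = unalias_gfn(kvm, gfn);
            return gfn_to_memslot_unaliased(kvm, gfn);
    }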
-
Committed by Sheng Yang
Ideally, every assigned device should be in a clean state before and after assignment, so that the device's former state won't affect later use. Some devices provide a mechanism named Function Level Reset (FLR), which is defined in the PCI/PCIe specification. We should execute it before and after device assignment. (But sadly, the feature is new, and most devices on the market today don't support it. We are considering using a D0/D3hot transition to emulate it later, but that is not as elegant and reliable as FLR itself.) [Update: Reminded by Xiantao, execute FLR after we ensure that the device can be assigned to the guest.] Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
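A sketch of where the reset slots into the assignment path; pci_reset_function() performs an FLR when the device supports one and falls back to other reset methods otherwise (the placement, error handling, and the assigned_dev structure are assumptions):

    /* reset only after the device is confirmed assignable (see update above) */
    r = pci_reset_function(assigned_dev->dev);
    if (r)
            goto out_disable;
    /* ... and reset again on deassignment, so no state leaks either way */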
-
Committed by Xiantao Zhang
Remove the lock protection around the kvm halt logic; otherwise, once other vcpus want to acquire the lock, they have to wait until all vcpus are woken up from halt. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Xiantao Zhang
1. Increase the size of the data area to 64M. 2. Support more vcpus and memory: 128 vcpus and 256G of memory are supported for guests. 3. Add boundary checks for memory and vcpu allocation. With this patch, a kvm guest's data area looks as follows:

    +----------------------+ ------- KVM_VM_DATA_SIZE
    |    vcpu[n]'s data    |   |    ___________________ KVM_STK_OFFSET
    |                      |   |   /                    |
    |      ..........      |   |  / vcpu's struct&stack |
    |      ..........      |   | /----------------------|---- 0
    |    vcpu[5]'s data    |   |/          vpd          |
    |    vcpu[4]'s data    |   /------------------------|
    |    vcpu[3]'s data    |  /|          vtlb          |
    |    vcpu[2]'s data    | / |------------------------|
    |    vcpu[1]'s data    |/  |          vhpt          |
    |    vcpu[0]'s data    |___|________________________|
    +----------------------+   |
    |   memory dirty log   |   |
    +----------------------+   |
    |   vm's data struct   |   |
    +----------------------+   |
    |                      |   |
    |                      |   |
    |    vm's p2m table    |   |
    |                      |   |
    |                      |   |
    vm's data ->|          |   |
    +----------------------+ ------- 0

To support large memory, the size of the p2m table needs to be increased. To support more vcpus, we need to ensure there is enough space to hold the vcpus' data. Signed-off-by: Xiantao Zhang <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
If emulate_invalid_guest_state is enabled, the emulator is called when guest state is invalid. Until now, we reported an MMIO failure when emulate_instruction() returned EMULATE_DO_MMIO. This patch adds the case where emulate_instruction() failed and MMIO emulation is needed. Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
If we call the emulator we shouldn't call skip_emulated_instruction() in the first place, since the emulator already computes the next rip for us. Thus we move ->skip_emulated_instruction() out of kvm_emulate_pio() and into handle_io() (and the svm equivalent). We also replace "return 0" with "break" in the "do_io:" case, because now the shadow register state needs to be committed; otherwise eip will never be updated. Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net> Signed-off-by: Avi Kivity <avi@redhat.com>
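A sketch of the reordered VMX handler (string I/O handling elided; the field decoding follows the VMX exit-qualification layout for I/O exits):

    static int handle_io(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
    {
            unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
            int size = (exit_qualification & 7) + 1;   /* access width in bytes */
            int in = (exit_qualification & 8) != 0;    /* direction: 1 = in */
            unsigned port = exit_qualification >> 16;

            skip_emulated_instruction(vcpu);   /* moved out of kvm_emulate_pio() */
            return kvm_emulate_pio(vcpu, kvm_run, in, size, port);
    }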
-
Committed by Amit Shah
The busy flag of the TR selector is not set by the hardware. This breaks migration from AMD hosts to Intel hosts. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Amit Shah
The hardware does not set the 'g' bit of the cs selector, and this breaks migration from AMD hosts to Intel hosts. Set this bit if the segment limit is beyond 1 MB. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
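Both this and the preceding fix amount to synthesizing, on the AMD side, segment bits that Intel's VM-entry checks insist on. A sketch, with the helper name hypothetical and the body modeled on svm_get_segment():

    static void svm_fixup_segment(int seg, struct vmcb_seg *s,
                                  struct kvm_segment *var)
    {
            switch (seg) {
            case VCPU_SREG_CS:
                    var->g = s->limit > 0xfffff;  /* G=1 for limits beyond 1 MB */
                    break;
            case VCPU_SREG_TR:
                    var->type |= 0x2;             /* force the TSS busy bit */
                    break;
            }
    }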
-
Committed by Amit Shah
get_segment_descritptor_dtable() contains an obvious typo. Signed-off-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Sheng Yang
Also remove an unnecessary parameter from the unregister irq ack notifier. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Jan Kiszka
As suggested by Avi, this patch introduces a counter of VCPUs that have LVT0 set to NMI mode. Only if the counter is > 0 do we push the PIT ticks via the LAPIC LVT0 lines, to enable NMI watchdog support. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Acked-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
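A sketch of the bookkeeping on every LVT0 write (names follow the description; treat details as assumptions). PIT ticks are then fanned out to the LAPIC LVT0 lines only while the counter is non-zero:

    static void apic_manage_nmi_watchdog(struct kvm_lapic *apic, u32 lvt0_val)
    {
            int was_nmi = apic_lvt_nmi_mode(apic_get_reg(apic, APIC_LVT0));

            if (apic_lvt_nmi_mode(lvt0_val)) {
                    if (!was_nmi)
                            apic->vcpu->kvm->arch.vapics_in_nmi_mode++;
            } else if (was_nmi) {
                    apic->vcpu->kvm->arch.vapics_in_nmi_mode--;
            }
    }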
-
Committed by Jan Kiszka
This patch refactors the NMI watchdog delivery patch, consolidating tests and providing a proper API for delivering watchdog events. An included micro-optimization is to check only apic_hw_enabled in kvm_apic_local_deliver (the LVT mask test already covers the soft-disabled case). Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Acked-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Guillaume Thouvenin
Add decode entries for the 0x04 and 0x05 (ADD) opcodes; execution is already implemented. Signed-off-by: Guillaume Thouvenin <guillaume.thouvenin@ext.bull.net> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Sheng Yang
PCI device assignment maps guest MMIO spaces as separate slots, so it is possible that a device has more than 2 MMIO spaces and overwrites the current private memslots. The patch moves the private memory slots to the top of the userspace-visible memory slots. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
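The resulting slot numbering, sketched with illustrative values (the exact constants are an assumption):

    #define KVM_MEMORY_SLOTS        32  /* ids 0..31: userspace-visible */
    #define KVM_PRIVATE_MEM_SLOTS   4   /* ids 32..35: TSS, APIC access page, ... */

    /* internal slots now live above the userspace range, e.g.: */
    int tss_slot = KVM_MEMORY_SLOTS + 0;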
-
Committed by Sheng Yang
Otherwise set_bit() for a private memory slot (above KVM_MEMORY_SLOTS) would corrupt memory on a 32-bit host. Signed-off-by: Sheng Yang <sheng@linux.intel.com> Signed-off-by: Avi Kivity <avi@redhat.com>
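A sketch of the fix: size the per-shadow-page slot bitmap for every id set_bit() may receive, instead of relying on a bare unsigned long (which holds only 32 bits on a 32-bit host, so private slot ids overran it):

    struct kvm_mmu_page {
            /* ... */
            /* one bit per slot, private ids included */
            DECLARE_BITMAP(slot_bitmap,
                           KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS);
    };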
-