- 12 Jan 2011, 30 commits
-
-
Committed by Heiko Carstens

Fixes this:

      CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_dev_ioctl_create_vm':
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1828:10: warning: unused variable 'r'

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Heiko Carstens

Fixes this:

      CC      arch/s390/kvm/../../../virt/kvm/kvm_main.o
    arch/s390/kvm/../../../virt/kvm/kvm_main.c: In function 'kvm_clear_guest_page':
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1224:2: warning: passing argument 3 of 'kvm_write_guest_page' makes pointer from integer without a cast
    arch/s390/kvm/../../../virt/kvm/kvm_main.c:1185:5: note: expected 'const void *' but argument is of type 'long unsigned int'

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Takuya Yoshikawa

Currently we use vmalloc() for all dirty bitmaps, even ones that are small enough, say, less than a few kilobytes. Use kmalloc() when the dirty bitmap size is less than or equal to PAGE_SIZE, so that we avoid vmalloc area usage for VGA. This also makes logging start/stop faster.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
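
As a rough sketch (helper names here are illustrative, not the patch's), the allocation choice looks like this: slab for one page or less, vmalloc space only beyond that.

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    /* Small bitmaps come from the slab allocator; only bitmaps
     * larger than one page go to vmalloc space. */
    static void *dirty_bitmap_alloc(unsigned long bytes)
    {
            if (bytes <= PAGE_SIZE)
                    return kzalloc(bytes, GFP_KERNEL);
            return vzalloc(bytes);
    }

    static void dirty_bitmap_free(void *bitmap, unsigned long bytes)
    {
            if (bytes <= PAGE_SIZE)
                    kfree(bitmap);
            else
                    vfree(bitmap);
    }

The free path must mirror the size check, since kfree() and vfree() are not interchangeable.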
-
Committed by Takuya Yoshikawa

Currently x86's kvm_vm_ioctl_get_dirty_log() needs to allocate, via vmalloc(), a bitmap to be used in the next round of logging, and this has been hurting VGA and live migration: vmalloc() costs extra system time, triggers TLB flushes, etc. This patch resolves the issue by pre-allocating one more bitmap and switching between the two bitmaps during dirty logging.

Performance improvement: I measured the VGA update case with trace-cmd; the result was 1.5 times faster than the original. For live migration, the improvement ratio depends on the workload and the guest memory size; in general, the larger the memory, the greater the benefit.

Note: this does not change other architectures' logic, but the allocation size doubles. Actual memory consumption increases only when the new size changes the number of pages allocated by vmalloc().

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
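
A hedged sketch of the switch, with a made-up memslot type standing in for KVM's (the real patch keeps both halves in one double-size buffer): flip to the freshly zeroed half and hand the old half to the dirty-log copy-out.

    #include <linux/string.h>

    /* Hypothetical memslot holding a double-size buffer. */
    struct demo_memslot {
            unsigned long *dirty_bitmap;      /* half currently logged to */
            unsigned long *dirty_bitmap_head; /* base of the double-size buffer */
            unsigned long nbytes;             /* size of one bitmap */
    };

    /* Flip to the other half; return the half holding the finished log. */
    static unsigned long *dirty_log_switch(struct demo_memslot *slot)
    {
            unsigned long *old = slot->dirty_bitmap;

            if (old == slot->dirty_bitmap_head)
                    slot->dirty_bitmap = (void *)old + slot->nbytes;
            else
                    slot->dirty_bitmap = slot->dirty_bitmap_head;

            memset(slot->dirty_bitmap, 0, slot->nbytes);
            return old;
    }

The point of the design is that no allocation happens on the get-dirty-log path at all; only a pointer swap and a memset.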
-
Committed by Takuya Yoshikawa

This makes it easy to change the way dirty bitmaps are allocated and freed.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@oss.ntt.co.jp>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Add a tracepoint for userspace exit.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
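
Such a tracepoint would be declared along these lines — a sketch following the commit summary; the real definition lives in a trace events header and may differ in detail:

    /* Sketch of a TRACE_EVENT declaration for the userspace exit. */
    TRACE_EVENT(kvm_userspace_exit,
            TP_PROTO(__u32 reason, int errno),
            TP_ARGS(reason, errno),

            TP_STRUCT__entry(
                    __field(__u32, reason)
                    __field(int,   errno)
            ),

            TP_fast_assign(
                    __entry->reason = reason;
                    __entry->errno  = errno;
            ),

            TP_printk("reason %u errno %d", __entry->reason, __entry->errno)
    );

The run loop would then call trace_kvm_userspace_exit() with the exit reason and return value just before returning to userspace.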
-
Committed by Marcelo Tosatti

As suggested by Andrea, pass the read/write error code to gup(), upgrading a read fault to writable if the host pte allows it.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
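
The idea, sketched with an illustrative helper (heavily simplified from the hva_to_pfn() flow; error handling elided): fault with the real access type, then opportunistically retry a read fault as a write to learn whether the spte may be made writable immediately.

    #include <linux/mm.h>

    /* Illustrative: returns the faulted page; *writable reports whether
     * a writable mapping was obtained even for a read fault. */
    static struct page *fault_in_page(unsigned long addr, bool write_fault,
                                      bool *writable)
    {
            struct page *page;

            if (get_user_pages_fast(addr, 1, write_fault, &page) != 1)
                    return NULL;

            *writable = write_fault;
            if (!write_fault) {
                    struct page *wpage;

                    /* upgrade: if the host pte allows it, map writable now */
                    if (get_user_pages_fast(addr, 1, 1, &wpage) == 1) {
                            put_page(page);
                            page = wpage;
                            *writable = true;
                    }
            }
            return page;
    }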
-
Committed by Marcelo Tosatti

This can happen in the following scenario:

    vcpu0                    vcpu1

    read fault
    gup(.write=0)
                             gup(.write=1)
                             reuse swap cache, no COW
                             set writable spte
                             use writable spte
    set read-only spte

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti

Unused.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Marcelo Tosatti

The EPT present/writable bits use the same positions as normal pagetable bits. Since direct_map passes ACC_ALL to mmu_set_spte, always setting the writable bit on sptes, use the generic PT_PRESENT shadow_base_pte. Also pass present/writable error code information from the EPT violation to the generic pagefault handler.

Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity

After an interrupt injection the PPR changes, and we have to reflect that into the vapic. This sets KVM_REQ_EVENT, which causes the whole interrupt injection routine to run again (harmlessly). Optimize by setting KVM_REQ_EVENT only if the PPR was lowered; otherwise there is no chance that a new injection is needed.

Signed-off-by: Avi Kivity <avi@redhat.com>
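
A minimal sketch of the check, assuming the local-apic register helpers; compute_ppr() is a stand-in for the real TPR/ISR priority arithmetic and is not an actual kernel function:

    /* Only re-run injection when the new PPR could unmask something. */
    static void apic_update_ppr(struct kvm_lapic *apic)
    {
            u32 old_ppr = apic_get_reg(apic, APIC_PROCPRI);
            u32 ppr = compute_ppr(apic);    /* hypothetical helper */

            apic_set_reg(apic, APIC_PROCPRI, ppr);

            /* Only a lowered PPR can unmask a pending interrupt. */
            if (ppr < old_ppr)
                    kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
    }

A raised PPR can only mask more interrupts, so skipping the request in that case is safe.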
-
Committed by Avi Kivity

This abstraction only serves to obfuscate. Remove it.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Avi Kivity

ldt is never used in the kernel context; the same goes for fs (x86_64) and gs (i386). So save/restore them in the heavyweight exit path instead of the lightweight path. By itself this doesn't buy us much, but it paves the way for moving vmload and vmsave to the heavyweight exit path, since they modify the same registers.

[jan: fix copy/paste mistake on i386]

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Avi Kivity

More members will join it soon.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Avi Kivity

Saving guest registers is just a memory copy and does not need to be in the critical section. Move it outside the critical section to improve latency a bit.

Signed-off-by: Avi Kivity <avi@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Jan Kiszka

May otherwise generate build warnings about an unused kvm_read_and_reset_pf_reason if included without CONFIG_KVM_GUEST enabled.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Andi Kleen

gcc 4.5 with some special options is able to duplicate the VMX context switch asm in vmx_vcpu_run(). This results in a compile error because the inline asm sequence uses a non-local label, and the non-local label is needed because other code wants to set up the return address. This patch moves the asm code into its own function and marks it explicitly noinline to avoid the problem. It would probably be better to just move it into a .S file. The diff looks worse than the change really is; it's all just code movement, with no logic change.

Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
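
A toy illustration of the technique (not the VMX code itself): an asm block that emits a non-local label must appear exactly once in the object file, so it is wrapped in a function the compiler is forbidden to inline.

    /* If this body were inlined into both callers, the global label
     * would be emitted twice and the assembler would reject it. */
    static __attribute__((noinline)) void asm_with_global_label(void)
    {
            asm volatile(
                    ".globl my_return_label\n"
                    "my_return_label:\n");
    }

    void caller_a(void) { asm_with_global_label(); }
    void caller_b(void) { asm_with_global_label(); }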
-
Committed by Jan Kiszka

It has no user outside mmu.c and also no prototype.

Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Improve the readability of the vma handling code in hva_to_pfn(), and fix the async pf handling code to properly check the vma returned by find_vma().

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
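
The pattern being enforced, as a small sketch: find_vma() returns the first vma with vm_end above the address, which may lie entirely above it, so the caller must also validate vm_start.

    #include <linux/mm.h>

    /* Caller must hold mm->mmap_sem. */
    static bool addr_mapped_by_vma(struct mm_struct *mm, unsigned long addr)
    {
            struct vm_area_struct *vma = find_vma(mm, addr);

            /* vma may start above addr; check both ends of the range */
            return vma && vma->vm_start <= addr;
    }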
-
Committed by Gleb Natapov

If the guest indicates that it can handle async PFs in kernel mode too, send them there, but only when interrupts are enabled.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

If the guest can detect that it runs in a non-preemptable context, it can handle async PFs at any time, so let the host know that it may send async PFs even when the guest cpu is not in userspace.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

If an async page fault is received by the idle task, or when preempt_count is not zero, the guest cannot reschedule, so do sti; hlt and wait for the page to become ready. The vcpu can still process interrupts while it waits.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
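
Sketch of the wait loop, entered with interrupts disabled; page_ready() is a hypothetical stand-in for the real "token arrived" test that the page-ready notification satisfies:

    #include <linux/irqflags.h>

    static void apf_halt_wait(u32 token)
    {
            while (!page_ready(token)) {        /* hypothetical check */
                    native_safe_halt();  /* sti; hlt: irqs still serviced */
                    local_irq_disable(); /* re-check the token atomically */
            }
            local_irq_enable();
    }

native_safe_halt() enables interrupts and halts in one step, so the wakeup interrupt cannot slip in between the check and the hlt.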
-
Committed by Gleb Natapov

Send an async page fault to a PV guest if it accesses swapped-out memory; the guest will choose another task to run upon receiving the fault. Allow async page fault injection only while the guest is in user mode, since otherwise the guest may be in a non-sleepable context and unable to reschedule. The vcpu will be halted if the guest faults on the same page again or if the vcpu executes kernel code.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

When the async PF capability is detected, hook up a special page fault handler that handles async page fault events and passes all other page faults through to the regular page fault handler. Also add async PF handling to nested SVM emulation: an async PF always generates an exit to L1, where the vcpu thread will be scheduled out until the page is available.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
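
A condensed sketch of the dispatch the commit describes (reason constants are the ABI names; kvm_read_and_reset_pf_reason() is the helper mentioned in the build-warning fix above, and error paths are elided):

    #include <asm/kvm_para.h>
    #include <asm/traps.h>

    dotraplinkage void do_async_page_fault(struct pt_regs *regs,
                                           unsigned long error_code)
    {
            switch (kvm_read_and_reset_pf_reason()) {
            default:
                    /* not an async PF: hand off to the regular handler */
                    do_page_fault(regs, error_code);
                    break;
            case KVM_PV_REASON_PAGE_NOT_PRESENT:
                    /* page is missing: park this task until it is ready */
                    kvm_async_pf_task_wait((u32)read_cr2());
                    break;
            case KVM_PV_REASON_PAGE_READY:
                    /* page arrived: wake whoever waits on this token */
                    kvm_async_pf_task_wake((u32)read_cr2());
                    break;
            }
    }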
-
Committed by Gleb Natapov

Enable async PF in a guest if the async PF capability is discovered.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
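
Guest-side enablement would look roughly like this (function name is illustrative; cpu-hotplug wiring elided): advertise the per-cpu reason word to the host and set the enable bit in the MSR introduced below.

    #include <linux/percpu.h>
    #include <asm/kvm_para.h>
    #include <asm/msr.h>

    static DEFINE_PER_CPU(struct kvm_vcpu_pv_apf_data, apf_reason) __aligned(64);

    static void kvm_guest_apf_init(void)
    {
            if (!kvm_para_has_feature(KVM_FEATURE_ASYNC_PF))
                    return;

            /* point the host at this cpu's reason word and enable */
            wrmsrl(MSR_KVM_ASYNC_PF_EN,
                   __pa(&__get_cpu_var(apf_reason)) | KVM_ASYNC_PF_ENABLED);
    }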
-
Committed by Gleb Natapov

The guest enables async PF vcpu functionality using this MSR.

Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Async PF also needs to hook into smp_prepare_boot_cpu, so move the hook into generic code.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

Keep track of memslot changes with a generation number in the memslots structure. Provide a kvm_write_guest_cached() function that skips the gfn_to_hva() translation if the memslots have not changed since the previous invocation.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
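
A simplified sketch of the fast path (RCU locking around memslots and the init error path are elided): the cached hva stays valid as long as the generation counter matches.

    #include <linux/kvm_host.h>
    #include <linux/uaccess.h>

    static int write_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
                                  const void *data, unsigned long len)
    {
            struct kvm_memslots *slots = kvm->memslots;

            /* memslots changed since we cached? redo the translation */
            if (slots->generation != ghc->generation)
                    kvm_gfn_to_hva_cache_init(kvm, ghc, ghc->gpa);

            if (kvm_is_error_hva(ghc->hva))
                    return -EFAULT;

            return copy_to_user((void __user *)ghc->hva, data, len) ?
                    -EFAULT : 0;
    }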
-
Committed by Gleb Natapov

When a page is swapped in, it is mapped into guest memory only after the guest tries to access it again and generates another fault. To save that fault we can map it immediately, since we know the guest is going to access the page. Do it only when tdp is enabled for now; the shadow paging case is more complicated, as the CR[034] and EFER registers would have to be switched before doing the mapping and then switched back.

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
-
Committed by Gleb Natapov

If a guest accesses swapped-out memory, do not swap it in from the vcpu thread context. Instead, schedule work to do the swapping and put the vcpu into a halted state. Interrupts will still be delivered to the guest, and if an interrupt causes a reschedule, the guest will continue running another task.

[avi: remove call to get_user_pages_noio(), nacked by Linus; this makes everything synchronous again]

Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
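
A heavily simplified sketch of the worker side; struct demo_apf and notify_page_ready() are hypothetical, and the real code additionally pins the mm and runs the gup under use_mm():

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/workqueue.h>

    /* Hypothetical per-fault record; the real one also carries the
     * gva/gfn and a completion token. */
    struct demo_apf {
            struct work_struct work;
            struct mm_struct *mm;
            unsigned long addr;
    };

    /* Runs on a workqueue, not the vcpu thread, so it may sleep. */
    static void async_pf_execute(struct work_struct *work)
    {
            struct demo_apf *apf = container_of(work, struct demo_apf, work);

            down_read(&apf->mm->mmap_sem);
            get_user_pages(current, apf->mm, apf->addr, 1, 1, 0, NULL, NULL);
            up_read(&apf->mm->mmap_sem);

            notify_page_ready(apf);   /* hypothetical: wake the halted vcpu */
    }

    static void queue_async_pf(struct demo_apf *apf)
    {
            INIT_WORK(&apf->work, async_pf_execute);
            schedule_work(&apf->work);
            /* the vcpu is left halted until notify_page_ready() fires */
    }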
-
- 02 Jan 2011, 2 commits
-
-
Committed by Avi Kivity

The only bit of EFER that affects the mmu is NX, and this is already accounted for (LME only takes effect when changing cr0).

Based on a patch by Hillf Danton.

Signed-off-by: Avi Kivity <avi@redhat.com>
-
Committed by Avi Kivity

isr_ack is never initialized, so until the first PIC reset, interrupts may fail to be injected. This can cause Windows XP to fail to boot, as reported in the fallout from the fix to https://bugzilla.kernel.org/show_bug.cgi?id=21962.

Reported-and-tested-by: Nicolas Prochazka <prochazka.nicolas@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 29 Dec 2010, 1 commit
-
-
Committed by Avi Kivity

We use the physical address instead of the base gfn for the four PAE page directories we use in unpaged mode. When the guest accesses an address above 1GB that is backed by a large host page, a BUG_ON() in kvm_mmu_set_gfn() triggers.

Resolves: https://bugzilla.kernel.org/show_bug.cgi?id=21962

KVM-Stable-Tag.
Reported-and-tested-by: Nicolas Prochazka <prochazka.nicolas@gmail.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 19 Dec 2010, 3 commits
-
-
Committed by Linus Torvalds

Merge git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile

* git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  arch/tile: handle rt_sigreturn() more cleanly
  arch/tile: handle CLONE_SETTLS in copy_thread(), not user space
-
Committed by Linus Torvalds

Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus

* 'upstream' of git://git.linux-mips.org/pub/scm/upstream-linus:
  MIPS: Fix build errors in sc-mips.c
-
Committed by Linus Torvalds

Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6:
  x86: avoid high BIOS area when allocating address space
  x86: avoid E820 regions when allocating address space
  x86: avoid low BIOS area when allocating address space
  resources: add arch hook for preventing allocation in reserved areas
  Revert "resources: support allocating space within a region from the top down"
  Revert "PCI: allocate bus resources from the top down"
  Revert "x86/PCI: allocate space from the end of a region, not the beginning"
  Revert "x86: allocate space within a region top-down"
  Revert "PCI: fix pci_bus_alloc_resource() hang, prefer positive decode"
  PCI: Update MCP55 quirk to not affect non HyperTransport variants
-
- 18 Dec 2010, 4 commits
-
-
Committed by Chris Metcalf

The current tile rt_sigreturn() syscall pattern uses the common idiom of loading up pt_regs with all the saved registers from the time of the signal, then anticipating the fact that we will clobber the ABI "return value" register (r0) as we return from the syscall by setting the rt_sigreturn return value to whatever random value was in the pt_regs for r0.

However, this breaks in our 64-bit kernel when running "compat" tasks, since we always sign-extend the "return value" register to properly handle returned pointers that are in the upper 2GB of the 32-bit compat address space. Doing this to the sigreturn path then causes occasional random corruption of the 64-bit r0 register.

Instead, we stop doing the crazy "load the return-value register" hack in sigreturn. We already have some sigreturn-specific assembly code that we use to pass the pt_regs pointer to C code. We extend that code to also set the link register to point to a spot a few instructions after the usual syscall return address, so we don't clobber the saved r0. Now it no longer matters what the rt_sigreturn syscall returns, and the pt_regs structure can be cleanly and completely reloaded.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
-
Committed by Chris Metcalf

Previously we were just setting up the "tp" register in the new task as started by clone() in libc. However, this is not quite right, since in principle a signal might be delivered to the new task before it has its TLS set up. (Of course, this race window still exists for resetting the libc getpid() cached value in the new task, in principle. But in any case, we are now doing this exactly the way all other architectures do it.)

This change is important for 2.6.37, since the tile glibc we will be submitting upstream will no longer set TLS in user space, so it will only work on a kernel that has this fix. It should also be taken for 2.6.36.x in the stable tree if possible.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: stable <stable@kernel.org>
-
Committed by Kevin Cernekee

Seen with malta_defconfig on Linus' tree:

      CC      arch/mips/mm/sc-mips.o
    arch/mips/mm/sc-mips.c: In function 'mips_sc_is_activated':
    arch/mips/mm/sc-mips.c:77: error: 'config2' undeclared (first use in this function)
    arch/mips/mm/sc-mips.c:77: error: (Each undeclared identifier is reported only once
    arch/mips/mm/sc-mips.c:77: error: for each function it appears in.)
    arch/mips/mm/sc-mips.c:81: error: 'tmp' undeclared (first use in this function)
    make[2]: *** [arch/mips/mm/sc-mips.o] Error 1
    make[1]: *** [arch/mips/mm] Error 2
    make: *** [arch/mips] Error 2

[Ralf: Cosmetic changes to minimize the number of arguments passed to mips_sc_is_activated]

Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Patchwork: https://patchwork.linux-mips.org/patch/1752/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Committed by Bjorn Helgaas

This prevents allocation of the last 2MB before 4GB. The experiment described here shows Windows 7 ignoring the last 1MB: https://bugzilla.kernel.org/show_bug.cgi?id=23542#c27

This patch ignores the top 2MB instead of just 1MB because H. Peter Anvin says "There will be ROM at the top of the 32-bit address space; it's a fact of the architecture, and on at least older systems it was common to have a shadow 1 MiB below."

Acked-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
-