- 30 January 2008, 40 commits
-
Committed by Zhang Xiantao
Since these functions need to know the details of the kvm and kvm_vcpu structures, they can't be put in x86.h. Create mmu.h to hold them. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
Move all the architecture-specific fields in kvm_vcpu into a new struct kvm_vcpu_arch. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Acked-by: Carsten Otte <cotte@de.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
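A minimal sketch of the shape of this split, with illustrative field names (the real structures are far larger; nothing here is the kernel's exact definition):

/* Illustrative only: not the real kernel definitions. */
struct kvm;                            /* the arch-independent vm structure */

struct kvm_vcpu_arch {                 /* everything x86-specific */
	unsigned long regs[16];        /* guest general-purpose registers */
	unsigned long cr0, cr2, cr3, cr4;
	unsigned long long shadow_efer;
	/* ... dozens more x86-only fields ... */
};

struct kvm_vcpu {
	struct kvm *kvm;               /* arch-independent fields stay here */
	int vcpu_id;
	struct kvm_vcpu_arch arch;     /* arch code now reaches vcpu->arch.cr3 etc. */
};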
-
Committed by Marcelo Tosatti
There is a race where VCPU0 is shadowing a pagetable entry while VCPU1 is updating it, which results in a stale shadow copy. Fix that by comparing the contents of the cached guest pte with the current guest pte after write-protecting the guest pagetable. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
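A sketch of the ordering that closes the race (helper names are hypothetical stand-ins, not the actual kvm functions): once the guest page table is write-protected, its ptes can no longer change underneath us, so a re-read-and-compare detects any update that raced with the walk.

/* Sketch only; rmap_write_protect() existed in this era's mmu, but
 * read_guest_pte() is a hypothetical helper for the re-read. */
pt_element_t cached = walker->pte;            /* snapshot taken during the walk */
pt_element_t cur;

rmap_write_protect(vcpu, walker->table_gfn);  /* freeze the guest page table */
read_guest_pte(vcpu, walker->pte_gpa, &cur);  /* re-read the live guest pte */
if (cur != cached)
	return 1;  /* another vcpu changed it: discard the stale snapshot and retry */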
-
Committed by Avi Kivity
In addition to removing some duplicated code, this also handles the unlikely case of real-mode code updating a guest page table. This can happen when one vcpu (in real mode) touches a second vcpu's (in protected mode) page tables, or if a vcpu switches to real mode, touches page tables, and switches back. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
As set_pte() no longer references either a gpte or the guest walker, we can move it out of the paging-mode-dependent code (which compiles twice and is generally nasty). Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
When we emulate a guest pte write, we fail to apply the correct inherited permissions from the parent ptes. Now that we store inherited permissions in the shadow page, we can use that to update the pte permissions correctly. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
The nx bit is awkwardly placed in bit position 63; furthermore, it has a reversed meaning compared to the other bits, which means we can't use a bitwise AND to calculate compounded access masks. So, we simplify things by creating a new 3-bit exec/write/user access word and doing all calculations in that. Signed-off-by: Avi Kivity <avi@qumranet.com>
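A standalone illustration of the idea (the bit layout of the access word here is ours, not the kernel's exact encoding): once exec/write/user are all positive-logic bits, combining a parent's permissions with a child's is a single bitwise AND.

#include <stdint.h>
#include <stdio.h>

#define ACC_EXEC_MASK  1   /* executable */
#define ACC_WRITE_MASK 2   /* writable */
#define ACC_USER_MASK  4   /* user-accessible */

#define PT_WRITABLE_MASK (1ULL << 1)   /* x86 pte bits */
#define PT_USER_MASK     (1ULL << 2)
#define PT64_NX_MASK     (1ULL << 63)

static unsigned pte_access(uint64_t pte)
{
	unsigned access = ACC_EXEC_MASK;   /* executable unless nx says otherwise */

	if (pte & PT64_NX_MASK)
		access &= ~ACC_EXEC_MASK;  /* nx has reversed polarity */
	if (pte & PT_WRITABLE_MASK)
		access |= ACC_WRITE_MASK;
	if (pte & PT_USER_MASK)
		access |= ACC_USER_MASK;
	return access;
}

int main(void)
{
	uint64_t parent = PT_WRITABLE_MASK | PT_USER_MASK;  /* rwx, user */
	uint64_t child  = PT_WRITABLE_MASK | PT64_NX_MASK;  /* rw, no-exec */

	/* With positive logic, compounding access is just an AND. */
	printf("effective access word: %u\n", pte_access(parent) & pte_access(child));
	return 0;   /* prints 2: only write survives the combination */
}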
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
Mark guest pages as accessed when removed from the shadow page tables, for better LRU processing. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Rename the awkwardly named variable. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
These are traditionally named 'page', but even more traditionally, that name is reserved for variables that point to a 'struct page'. Rename them to 'sp' (for "shadow page"). Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Converting the last users along the way. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
No longer used. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Instead of passing an hpa, pass a regular struct page. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Similar information is available in the gfn parameter, so use that instead. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
If the guest requests just a tlb flush, don't pointlessly take the vm lock and drop the mmu context. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
If all we're doing is increasing permissions on a pte (typical for demand paging), then there's no need to flush remote tlbs. Worst case, they'll take a spurious page fault. Signed-off-by: Avi Kivity <avi@qumranet.com>
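A sketch of the test this enables (constants and shape are illustrative): only a change that removes permissions or redirects the pte to a different frame forces a remote flush.

#include <stdbool.h>
#include <stdint.h>

#define PTE_PRESENT  (1ULL << 0)
#define PTE_WRITABLE (1ULL << 1)
#define PTE_PFN_MASK 0x000ffffffffff000ULL   /* x86-64 frame bits */

static bool need_remote_flush(uint64_t old_pte, uint64_t new_pte)
{
	if (!(old_pte & PTE_PRESENT))
		return false;                      /* nothing could be cached remotely */
	if ((old_pte ^ new_pte) & PTE_PFN_MASK)
		return true;                       /* now points at a different frame */
	if ((old_pte & PTE_WRITABLE) && !(new_pte & PTE_WRITABLE))
		return true;                       /* permissions were reduced */
	return false;   /* pure upgrade: a stale remote entry just faults once */
}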
-
Committed by Zhang Xiantao
Instead of incrementally changing the mmu cache size for every memory slot operation, recalculate it from scratch. This is simpler and safer. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
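A sketch of the from-scratch recalculation (the ratio, the minimum, and the types are illustrative): walk every memslot, total the guest's pages, and derive the shadow-page budget from that.

struct kvm_memory_slot { unsigned long npages; };   /* reduced for the sketch */

#define KVM_PERMILLE_MMU_PAGES  20   /* budget ~2% of guest memory (illustrative) */
#define KVM_MIN_ALLOC_MMU_PAGES 64

static unsigned int calculate_mmu_pages(struct kvm_memory_slot *slots, int nslots)
{
	unsigned long nr_pages = 0;
	unsigned int nr_mmu;
	int i;

	for (i = 0; i < nslots; i++)
		nr_pages += slots[i].npages;          /* total guest memory */

	nr_mmu = nr_pages * KVM_PERMILLE_MMU_PAGES / 1000;
	return nr_mmu < KVM_MIN_ALLOC_MMU_PAGES ? KVM_MIN_ALLOC_MMU_PAGES : nr_mmu;
}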
-
Committed by Izik Eidus
Improve dirty-bit setting for pages that kvm releases: until now, every page we released was marked dirty; from now on, only pages that could potentially have been dirtied are marked dirty. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
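A sketch of the resulting split (modeled on the clean/dirty release pair this change introduces; bodies simplified from the real ones):

#include <linux/mm.h>

void kvm_release_page_clean(struct page *page)
{
	put_page(page);                 /* page was never writable: just unpin */
}

void kvm_release_page_dirty(struct page *page)
{
	if (!PageReserved(page))
		SetPageDirty(page);     /* page may have been written by the guest */
	put_page(page);
}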
-
Committed by Izik Eidus
When we map a page, we check whether some other vcpu mapped it for us and, if so, bail out. But we should decrease the refcount on the page as we do so. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
Emulation may cause a shadow pte to be instantiated, which requires memory resources. Make sure the caches are filled to avoid an oops. Signed-off-by: Avi Kivity <avi@qumranet.com>
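A sketch of the fix's shape (mmu_topup_memory_caches() is this era's real cache-filling helper; the surrounding function is illustrative): top up the object caches, which may sleep, before entering the emulator, so any shadow pte instantiated mid-emulation draws from pre-filled caches.

static int emulate_guest_access(struct kvm_vcpu *vcpu /* , ... */)
{
	int r;

	r = mmu_topup_memory_caches(vcpu);   /* may sleep; fills pte/rmap caches */
	if (r)
		return r;
	/* ... run the emulator; shadow pte instantiation is now safe ... */
	return 0;
}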
-
Committed by Avi Kivity
The code that dispatches the page fault and emulates if we failed to map is duplicated across vmx and svm. Merge it to simplify further bugfixing. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Zhang Xiantao
First step to splitting kvm_vcpu. Currently, we just use a macro to define the common fields in kvm_vcpu for all archs, and each arch defines its own kvm_vcpu struct. Signed-off-by: Zhang Xiantao <xiantao.zhang@intel.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
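A sketch of the technique (the macro and field names are illustrative, not the exact ones merged): the common fields are written once and expanded inside each arch's own kvm_vcpu definition.

#define KVM_VCPU_COMM               \
	struct kvm *kvm;            \
	int vcpu_id;                \
	struct mutex mutex;         \
	int cpu;                    \
	struct kvm_run *run

struct kvm_vcpu {                   /* x86's definition */
	KVM_VCPU_COMM;
	/* ... x86-specific fields follow ... */
};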
-
Committed by Izik Eidus
This allows guest memory to be swapped. Pages which are currently mapped via shadow page tables are pinned into memory, but all other pages can be freely swapped. The patch makes gfn_to_page() elevate the page's reference count, and introduces kvm_release_page() to pair with it. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
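A sketch of the new pairing (kvm_release_page() as named in this commit; later work splits it into clean/dirty variants):

struct page *page = gfn_to_page(kvm, gfn);  /* now elevates the refcount */

/* While a shadow pte maps the page, the held reference pins it in
 * memory; every guest page without such a mapping stays swappable. */

kvm_release_page(page);                     /* drop the reference once unmapped */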
-
Committed by Izik Eidus
In case the page is not present in the guest memory map, return a dummy page the guest can scribble on. This simplifies error checking in its users. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
The current kvm mmu only reverse-maps writable translations. This is used to write-protect a page in case it becomes a pagetable. But with swapping support, we need a reverse mapping of read-only pages as well: when we evict a page, we need to remove any mapping to it, whether writable or not. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
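A sketch of what the fuller reverse map buys on eviction (the iterator and zap helpers here are hypothetical): every shadow pte pointing at the page can be found and torn down, writable or not.

static void rmap_remove_all(struct kvm *kvm, gfn_t gfn)
{
	u64 *sptep;

	while ((sptep = rmap_first(kvm, gfn)) != NULL)  /* hypothetical iterator */
		zap_shadow_pte(kvm, sptep);  /* clear the spte, unlink from rmap */
}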
-
Committed by Izik Eidus
Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This is consistent with real-mode permissions. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Avi Kivity
This is more consistent with the accessed-bit management, and makes the dirty bit available earlier for other purposes. Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Anthony Liguori
This time, the biggest change is to gpa_to_hpa. The translation of a GPA to an HPA does not depend on the VCPU state, unlike GVA to GPA, so there's no need to pass in the kvm_vcpu. Signed-off-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Anthony Liguori
Some of the MMU functions take a struct kvm_vcpu even though they affect all VCPUs. This patch cleans up some of them to instead take a struct kvm. This makes things a bit clearer; the main thing that was confusing me was whether certain functions need to be called on all VCPUs. Signed-off-by: Anthony Liguori <aliguori@us.ibm.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Mike Day
Signed-off-by: Mike D. Day <ncmike@ncultra.org> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
The user is now able to set how many mmu pages will be allocated to the guest. Signed-off-by: Izik Eidus <izike@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
Committed by Izik Eidus
When kvm uses user-allocated pages for the guest in the future, we won't be able to use page->private for rmap, since page->private is reserved for the filesystem. So we move the rmap base pointers to the memory slot. A side effect of this is that we need to store the gfn of each gpte in the shadow pages, since the memory slot is addressed by gfn, instead of by hfn like struct page. Signed-off-by: Izik Eidus <izik@qumranet.com> Signed-off-by: Avi Kivity <avi@qumranet.com>
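A sketch of the resulting layout (field names illustrative, types reduced): the rmap heads hang off the memory slot, one per guest frame, and each shadow page records the gfns behind its ptes so the slot can be found again from a shadow pte.

typedef unsigned long long u64;
typedef unsigned long gfn_t;

struct kvm_memory_slot {
	gfn_t base_gfn;
	unsigned long npages;
	unsigned long *rmap;     /* one rmap head per guest page frame */
};

struct kvm_mmu_page {
	u64 *spt;                /* the shadow page table itself */
	gfn_t *gfns;             /* guest frame backing each shadow pte */
	/* ... */
};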
-
Committed by Avi Kivity
When we allow guest page faults to reach the guest directly, we lose the fault tracking which allows us to detect demand paging. So we provide an alternate mechanism: clear the accessed bit when we set a pte, and check it later to see if the guest actually used it. Signed-off-by: Avi Kivity <avi@qumranet.com>
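A standalone illustration of the trick (bit 5 is x86's real accessed bit; the helpers are ours, and the real code must do the update atomically):

#include <stdint.h>

#define PT_ACCESSED_MASK (1ULL << 5)

/* Install a shadow pte with the accessed bit clear ... */
static void set_spte(uint64_t *sptep, uint64_t spte)
{
	*sptep = spte & ~PT_ACCESSED_MASK;
}

/* ... then later test-and-clear it: the cpu sets the bit on first use,
 * so a set bit proves the guest really touched the page (the demand-
 * paging signal), and clearing it re-arms the check. */
static int test_and_clear_young(uint64_t *sptep)
{
	int young = (*sptep & PT_ACCESSED_MASK) != 0;

	*sptep &= ~PT_ACCESSED_MASK;
	return young;
}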
-
Committed by Avi Kivity
There are two classes of page faults trapped by kvm:
- host page faults, where the fault is needed to allow kvm to install the shadow pte or update the guest accessed and dirty bits
- guest page faults, where the guest has faulted and kvm simply injects the fault back into the guest to handle

The second class, guest page faults, is pure overhead. We can eliminate some of it on vmx using the following evil trick (sketched below):
- when we set up a shadow page table entry, if the corresponding guest pte is not present, set up the shadow pte as not present
- if the guest pte _is_ present, mark the shadow pte as present but also set one of the reserved bits in the shadow pte
- tell the vmx hardware not to trap faults which have the present bit clear

With this, normal page-not-present faults go directly to the guest, bypassing kvm entirely. Unfortunately, the trick only works on Intel hardware, as AMD lacks a way to discriminate among page faults based on error code. It is also a little risky since it uses reserved bits, which might become unreserved in the future, so a module parameter is provided to disable it. Signed-off-by: Avi Kivity <avi@qumranet.com>
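A sketch of the shadow-pte convention behind the trick (the reserved-bit choice is illustrative; the risk the text mentions is exactly that such a bit may stop being reserved). Faults whose error code has the present bit set are the only ones vmx is asked to trap:

#include <stdint.h>

#define PT_PRESENT_MASK     (1ULL << 0)
#define SHADOW_NOTRAP_MASK  (1ULL << 62)   /* an illustrative reserved bit */

static uint64_t shadow_pte_for(uint64_t gpte)
{
	if (!(gpte & PT_PRESENT_MASK))
		return 0;   /* not present: the fault goes straight to the guest */

	/* Present guest pte that kvm still needs to see: mark the shadow
	 * pte present plus a reserved bit, so the resulting fault arrives
	 * with the present error-code bit set and gets trapped. */
	return PT_PRESENT_MASK | SHADOW_NOTRAP_MASK;
}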
-