Commit 98edb6ca, authored by Linus Torvalds

Merge branch 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm

* 'kvm-updates/2.6.35' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (269 commits)
  KVM: x86: Add missing locking to arch specific vcpu ioctls
  KVM: PPC: Add missing vcpu_load()/vcpu_put() in vcpu ioctls
  KVM: MMU: Segregate shadow pages with different cr0.wp
  KVM: x86: Check LMA bit before set_efer
  KVM: Don't allow lmsw to clear cr0.pe
  KVM: Add cpuid.txt file
  KVM: x86: Tell the guest we'll warn it about tsc stability
  x86, paravirt: don't compute pvclock adjustments if we trust the tsc
  x86: KVM guest: Try using new kvm clock msrs
  KVM: x86: export paravirtual cpuid flags in KVM_GET_SUPPORTED_CPUID
  KVM: x86: add new KVMCLOCK cpuid feature
  KVM: x86: change msr numbers for kvmclock
  x86, paravirt: Add a global synchronization point for pvclock
  x86, paravirt: Enable pvclock flags in vcpu_time_info structure
  KVM: x86: Inject #GP with the right rip on efer writes
  KVM: SVM: Don't allow nested guest to VMMCALL into host
  KVM: x86: Fix exception reinjection forced to true
  KVM: Fix wallclock version writing race
  KVM: MMU: Don't read pdptrs with mmu spinlock held in mmu_alloc_roots
  KVM: VMX: enable VMXON check with SMX enabled (Intel TXT)
  ...
@@ -656,6 +656,7 @@ struct kvm_clock_data {
4.29 KVM_GET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_vcpu_events (out)
@@ -676,7 +677,7 @@ struct kvm_vcpu_events {
__u8 injected;
__u8 nr;
__u8 soft;
__u8 shadow;
} interrupt;
struct {
__u8 injected;
@@ -688,9 +689,13 @@ struct kvm_vcpu_events {
__u32 flags;
};
KVM_VCPUEVENT_VALID_SHADOW may be set in the flags field to signal that
interrupt.shadow contains a valid state. Otherwise, this field is undefined.
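For illustration, a minimal userspace sketch of reading that state (vcpu_fd is
assumed to be an open vcpu file descriptor):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: fetch the interrupt shadow state, if the kernel reports it. */
int get_interrupt_shadow(int vcpu_fd, __u8 *shadow)
{
    struct kvm_vcpu_events events;

    memset(&events, 0, sizeof(events));
    if (ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, &events) < 0)
        return -1;
    if (!(events.flags & KVM_VCPUEVENT_VALID_SHADOW))
        return -1;              /* field is undefined without the flag */
    *shadow = events.interrupt.shadow;
    return 0;
}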
4.30 KVM_SET_VCPU_EVENTS
Capability: KVM_CAP_VCPU_EVENTS
Extended by: KVM_CAP_INTR_SHADOW
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_vcpu_events (in)
@@ -709,6 +714,183 @@ current in-kernel state. The bits are:
KVM_VCPUEVENT_VALID_NMI_PENDING - transfer nmi.pending to the kernel
KVM_VCPUEVENT_VALID_SIPI_VECTOR - transfer sipi_vector
If KVM_CAP_INTR_SHADOW is available, KVM_VCPUEVENT_VALID_SHADOW can be set in
the flags field to signal that interrupt.shadow contains a valid state and
shall be written into the VCPU.
4.32 KVM_GET_DEBUGREGS
Capability: KVM_CAP_DEBUGREGS
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_debugregs (out)
Returns: 0 on success, -1 on error
Reads debug registers from the vcpu.
struct kvm_debugregs {
__u64 db[4];
__u64 dr6;
__u64 dr7;
__u64 flags;
__u64 reserved[9];
};
4.33 KVM_SET_DEBUGREGS
Capability: KVM_CAP_DEBUGREGS
Architectures: x86
Type: vcpu ioctl
Parameters: struct kvm_debugregs (in)
Returns: 0 on success, -1 on error
Writes debug registers into the vcpu.
See KVM_GET_DEBUGREGS for the data structure. The flags field is not yet
used and must be cleared on entry.
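For illustration, a round-trip sketch that honors this rule (vcpu_fd is
assumed to be an open vcpu file descriptor):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: read the debug registers and write them straight back,
 * clearing the as-yet-unused flags field as required above. */
int roundtrip_debugregs(int vcpu_fd)
{
    struct kvm_debugregs dbg;

    if (ioctl(vcpu_fd, KVM_GET_DEBUGREGS, &dbg) < 0)
        return -1;
    dbg.flags = 0;              /* must be cleared on entry */
    return ioctl(vcpu_fd, KVM_SET_DEBUGREGS, &dbg);
}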
4.34 KVM_SET_USER_MEMORY_REGION
Capability: KVM_CAP_USER_MEMORY
Architectures: all
Type: vm ioctl
Parameters: struct kvm_userspace_memory_region (in)
Returns: 0 on success, -1 on error
struct kvm_userspace_memory_region {
__u32 slot;
__u32 flags;
__u64 guest_phys_addr;
__u64 memory_size; /* bytes */
__u64 userspace_addr; /* start of the userspace allocated memory */
};
/* for kvm_memory_region::flags */
#define KVM_MEM_LOG_DIRTY_PAGES 1UL
This ioctl allows the user to create or modify a guest physical memory
slot. When changing an existing slot, it may be moved in the guest
physical memory space, or its flags may be modified. It may not be
resized. Slots may not overlap in guest physical address space.
Memory for the region is taken starting at the address denoted by the
field userspace_addr, which must point at user addressable memory for
the entire memory slot size. Any object may back this memory, including
anonymous memory, ordinary files, and hugetlbfs.
It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr
be identical. This allows large pages in the guest to be backed by large
pages in the host.
The flags field supports just one flag, KVM_MEM_LOG_DIRTY_PAGES, which
instructs kvm to keep track of writes to memory within the slot. See
the KVM_GET_DIRTY_LOG ioctl.
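For illustration, a minimal sketch that backs 16MB of guest physical memory at
address 0 with anonymous memory and enables dirty logging (vm_fd is assumed to
be a file descriptor returned by KVM_CREATE_VM):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

int add_memory_slot(int vm_fd)
{
    struct kvm_userspace_memory_region region;
    size_t size = 16 << 20;
    void *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (mem == MAP_FAILED)
        return -1;

    memset(&region, 0, sizeof(region));
    region.slot = 0;
    region.flags = KVM_MEM_LOG_DIRTY_PAGES;   /* track guest writes */
    region.guest_phys_addr = 0;
    region.memory_size = size;
    region.userspace_addr = (uint64_t)(uintptr_t)mem;
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}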
When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory
region are automatically reflected into the guest. For example, an mmap()
that affects the region will be made visible immediately. Another example
is madvise(MADV_DONTNEED).
It is recommended to use this API instead of the KVM_SET_MEMORY_REGION ioctl.
KVM_SET_MEMORY_REGION does not allow fine-grained control over memory
allocation and is deprecated.
4.35 KVM_SET_TSS_ADDR
Capability: KVM_CAP_SET_TSS_ADDR
Architectures: x86
Type: vm ioctl
Parameters: unsigned long tss_address (in)
Returns: 0 on success, -1 on error
This ioctl defines the physical address of a three-page region in the guest
physical address space. The region must be within the first 4GB of the
guest physical address space and must not conflict with any memory slot
or any mmio address. The guest may malfunction if it accesses this memory
region.
This ioctl is required on Intel-based hosts. This is needed on Intel hardware
because of a quirk in the virtualization implementation (see the internals
documentation when it pops into existence).
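A hedged usage sketch (0xfffbd000 is only an example address: below 4GB and
assumed to be outside every memory slot and mmio range; vm_fd comes from
KVM_CREATE_VM):

#include <sys/ioctl.h>
#include <linux/kvm.h>

int set_tss_quirk_region(int vm_fd)
{
    /* Three pages at an otherwise unused guest physical address. */
    return ioctl(vm_fd, KVM_SET_TSS_ADDR, 0xfffbd000UL);
}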
4.36 KVM_ENABLE_CAP
Capability: KVM_CAP_ENABLE_CAP
Architectures: ppc
Type: vcpu ioctl
Parameters: struct kvm_enable_cap (in)
Returns: 0 on success; -1 on error
Not all extensions are enabled by default. Using this ioctl the application
can enable an extension, making it available to the guest.
On systems that do not support this ioctl, it always fails. On systems that
do support it, it only works for extensions that are supported for enablement.
To check if a capability can be enabled, the KVM_CHECK_EXTENSION ioctl should
be used.
struct kvm_enable_cap {
/* in */
__u32 cap;
The capability that is supposed to get enabled.
__u32 flags;
A bitfield indicating future enhancements. Has to be 0 for now.
__u64 args[4];
Arguments for enabling a feature. If a feature needs initial values to
function properly, this is the place to put them.
__u8 pad[64];
};
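For illustration, enabling one extension that supports enablement on ppc, the
OSI hypercall interface described under KVM_EXIT_OSI below (vcpu_fd is assumed
to be an open vcpu file descriptor; probe with KVM_CHECK_EXTENSION first):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int enable_osi(int vcpu_fd)
{
    struct kvm_enable_cap cap;

    memset(&cap, 0, sizeof(cap));   /* flags and args must be zero here */
    cap.cap = KVM_CAP_PPC_OSI;
    return ioctl(vcpu_fd, KVM_ENABLE_CAP, &cap);
}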
4.37 KVM_GET_MP_STATE
Capability: KVM_CAP_MP_STATE
Architectures: x86, ia64
Type: vcpu ioctl
Parameters: struct kvm_mp_state (out)
Returns: 0 on success; -1 on error
struct kvm_mp_state {
__u32 mp_state;
};
Returns the vcpu's current "multiprocessing state" (though also valid on
uniprocessor guests).
Possible values are:
- KVM_MP_STATE_RUNNABLE: the vcpu is currently running
- KVM_MP_STATE_UNINITIALIZED: the vcpu is an application processor (AP)
which has not yet received an INIT signal
- KVM_MP_STATE_INIT_RECEIVED: the vcpu has received an INIT signal, and is
now ready for a SIPI
- KVM_MP_STATE_HALTED: the vcpu has executed a HLT instruction and
is waiting for an interrupt
- KVM_MP_STATE_SIPI_RECEIVED: the vcpu has just received a SIPI (vector
accessible via KVM_GET_VCPU_EVENTS)
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace.
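A minimal query sketch (vcpu_fd is assumed to be an open vcpu file descriptor):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns 1 if the vcpu sits in HLT, 0 if not, -1 on error. */
int vcpu_is_halted(int vcpu_fd)
{
    struct kvm_mp_state mp;

    if (ioctl(vcpu_fd, KVM_GET_MP_STATE, &mp) < 0)
        return -1;
    return mp.mp_state == KVM_MP_STATE_HALTED;
}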
4.38 KVM_SET_MP_STATE
Capability: KVM_CAP_MP_STATE
Architectures: x86, ia64
Type: vcpu ioctl
Parameters: struct kvm_mp_state (in)
Returns: 0 on success; -1 on error
Sets the vcpu's current "multiprocessing state"; see KVM_GET_MP_STATE for
arguments.
This ioctl is only useful after KVM_CREATE_IRQCHIP. Without an in-kernel
irqchip, the multiprocessing state must be maintained by userspace.
5. The kvm_run structure
@@ -820,6 +1002,13 @@ executed a memory-mapped I/O instruction which could not be satisfied
by kvm. The 'data' member contains the written data if 'is_write' is
true, and should be filled by application code otherwise.
NOTE: For KVM_EXIT_IO, KVM_EXIT_MMIO and KVM_EXIT_OSI, the corresponding
operations are complete (and guest state is consistent) only after userspace
has re-entered the kernel with KVM_RUN. The kernel side will first finish
incomplete operations and then check for pending signals. Userspace
can re-enter the guest with an unmasked signal pending to complete
pending operations.
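For illustration, a minimal run loop in this spirit; handle_mmio_read() and
handle_mmio_write() are hypothetical device-model hooks, and run is assumed to
be the vcpu's mmap'ed kvm_run structure:

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical device-model hooks, declared only for this sketch. */
void handle_mmio_read(__u64 addr, __u8 *data, __u32 len);
void handle_mmio_write(__u64 addr, __u8 *data, __u32 len);

void run_vcpu(int vcpu_fd, struct kvm_run *run)
{
    for (;;) {
        if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
            return;
        switch (run->exit_reason) {
        case KVM_EXIT_MMIO:
            /* The exit completes only on the next KVM_RUN; for reads,
             * run->mmio.data must be filled in before re-entering. */
            if (run->mmio.is_write)
                handle_mmio_write(run->mmio.phys_addr,
                                  run->mmio.data, run->mmio.len);
            else
                handle_mmio_read(run->mmio.phys_addr,
                                 run->mmio.data, run->mmio.len);
            break;
        default:
            return;             /* other exit reasons elided */
        }
    }
}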
/* KVM_EXIT_HYPERCALL */
struct {
__u64 nr;
@@ -829,7 +1018,9 @@ true, and should be filled by application code otherwise.
__u32 pad;
} hypercall;
Unused. This was once used for 'hypercall to userspace'. To implement
such functionality, use KVM_EXIT_IO (x86) or KVM_EXIT_MMIO (all except s390).
Note KVM_EXIT_IO is significantly faster than KVM_EXIT_MMIO.
/* KVM_EXIT_TPR_ACCESS */
struct {
@@ -870,6 +1061,19 @@ s390 specific.
powerpc specific.
/* KVM_EXIT_OSI */
struct {
__u64 gprs[32];
} osi;
MOL uses a special hypercall interface it calls 'OSI'. To enable it, we catch
hypercalls and exit with this exit struct that contains all the guest gprs.
If exit_reason is KVM_EXIT_OSI, then the vcpu has triggered such a hypercall.
Userspace can now handle the hypercall and when it's done modify the gprs as
necessary. Upon guest entry all guest GPRs will then be replaced by the values
in this struct.
/* Fix the size of the union. */
char padding[256];
};
KVM CPUID bits
Glauber Costa <glommer@redhat.com>, Red Hat Inc, 2010
=====================================================
A guest running on a kvm host can check some of its features using
cpuid. This is not always guaranteed to work, since userspace can
mask out some, or even all, KVM-related cpuid features before launching
a guest.
KVM cpuid functions are:
function: KVM_CPUID_SIGNATURE (0x40000000)
returns : eax = 0,
ebx = 0x4b4d564b,
ecx = 0x564b4d56,
edx = 0x4d.
Note that this value in ebx, ecx and edx corresponds to the string "KVMKVMKVM".
This function queries the presence of KVM cpuid leaves.
function: KVM_CPUID_FEATURES (0x40000001)
returns : ebx, ecx, edx = 0
eax = an OR'ed group of (1 << flag), where each flag is:
flag || value || meaning
=============================================================================
KVM_FEATURE_CLOCKSOURCE || 0 || kvmclock available at msrs
|| || 0x11 and 0x12.
------------------------------------------------------------------------------
KVM_FEATURE_NOP_IO_DELAY || 1 || not necessary to perform delays
|| || on PIO operations.
------------------------------------------------------------------------------
KVM_FEATURE_MMU_OP || 2 || deprecated.
------------------------------------------------------------------------------
KVM_FEATURE_CLOCKSOURCE2 || 3 || kvmclock available at msrs
|| || 0x4b564d00 and 0x4b564d01
------------------------------------------------------------------------------
KVM_FEATURE_CLOCKSOURCE_STABLE_BIT || 24 || host will warn if no guest-side
|| || per-cpu warps are expected in
|| || kvmclock.
------------------------------------------------------------------------------
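For illustration, a guest-side probe of these leaves (a sketch; the cpuid()
helper is local to the example and KVM_FEATURE_CLOCKSOURCE2 is the flag with
value 3 from the table above):

#include <string.h>

#define KVM_CPUID_SIGNATURE 0x40000000
#define KVM_CPUID_FEATURES  0x40000001
#define KVM_FEATURE_CLOCKSOURCE2 3

static void cpuid(unsigned int fn, unsigned int *eax, unsigned int *ebx,
                  unsigned int *ecx, unsigned int *edx)
{
    asm volatile("cpuid"
                 : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
                 : "a" (fn));
}

/* Returns nonzero if KVM is present and exposes the new kvmclock msrs. */
int kvm_has_new_clocksource(void)
{
    unsigned int eax, ebx, ecx, edx;
    char sig[12];

    cpuid(KVM_CPUID_SIGNATURE, &eax, &ebx, &ecx, &edx);
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    if (memcmp(sig, "KVMKVMKVM", 9))
        return 0;
    cpuid(KVM_CPUID_FEATURES, &eax, &ebx, &ecx, &edx);
    return !!(eax & (1 << KVM_FEATURE_CLOCKSOURCE2));
}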
The x86 kvm shadow mmu
======================
The mmu (in arch/x86/kvm, files mmu.[ch] and paging_tmpl.h) is responsible
for presenting a standard x86 mmu to the guest, while translating guest
physical addresses to host physical addresses.
The mmu code attempts to satisfy the following requirements:
- correctness: the guest should not be able to determine that it is running
on an emulated mmu except for timing (we attempt to comply
with the specification, not emulate the characteristics of
a particular implementation such as tlb size)
- security: the guest must not be able to touch host memory not assigned
to it
- performance: minimize the performance penalty imposed by the mmu
- scaling: need to scale to large memory and large vcpu guests
- hardware: support the full range of x86 virtualization hardware
- integration: Linux memory management code must be in control of guest memory
so that swapping, page migration, page merging, transparent
hugepages, and similar features work without change
- dirty tracking: report writes to guest memory to enable live migration
and framebuffer-based displays
- footprint: keep the amount of pinned kernel memory low (most memory
should be shrinkable)
- reliability: avoid multipage or GFP_ATOMIC allocations
Acronyms
========
pfn host page frame number
hpa host physical address
hva host virtual address
gfn guest frame number
gpa guest physical address
gva guest virtual address
ngpa nested guest physical address
ngva nested guest virtual address
pte page table entry (used also to refer generically to paging structure
entries)
gpte guest pte (referring to gfns)
spte shadow pte (referring to pfns)
tdp two dimensional paging (vendor neutral term for NPT and EPT)
Virtual and real hardware supported
===================================
The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT). The emulated hardware
it exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages. Work is in progress to support
exposing NPT capable hardware on NPT capable hosts.
Translation
===========
The primary job of the mmu is to program the processor's mmu to translate
addresses for the guest. Different translations are required at different
times:
- when guest paging is disabled, we translate guest physical addresses to
host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
virtual addresses, to nested guest physical addresses, to guest physical
addresses, to host physical addresses (ngva->ngpa->gpa->hpa)
The primary challenge is to encode between 1 and 3 translations into hardware
that supports only 1 (traditional) or 2 (tdp) translations. When the
number of required translations matches the hardware, the mmu operates in
direct mode; otherwise it operates in shadow mode (see below).
Memory
======
Guest memory (gpa) is part of the user address space of the process that is
using kvm. Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa.
These hvas may be backed using any method available to the host: anonymous
memory, file backed memory, and device memory. Memory might be paged by the
host at any time.
Events
======
The mmu is driven by events, some from the guest, some from the host.
Guest generated events:
- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations
Host generated events:
- changes in the gpa->hpa translation (either through gpa->hva changes or
through hva->hpa changes)
- memory pressure (the shrinker)
Shadow pages
============
The principal data structure is the shadow page, 'struct kvm_mmu_page'. A
shadow page contains 512 sptes, which can be either leaf or nonleaf sptes. A
shadow page may contain a mix of leaf and nonleaf sptes.
A nonleaf spte allows the hardware mmu to reach the leaf pages and
is not related to a translation directly. It points to other shadow pages.
A leaf spte corresponds to either one or two translations encoded into
one paging structure entry. These are always the lowest level of the
translation stack, with optional higher level translations left to NPT/EPT.
Leaf ptes point at guest pages.
The following table shows translations encoded by leaf ptes, with higher-level
translations in parentheses:
Non-nested guests:
nonpaging: gpa->hpa
paging: gva->gpa->hpa
paging, tdp: (gva->)gpa->hpa
Nested guests:
non-tdp: ngva->gpa->hpa (*)
tdp: (ngva->)ngpa->gpa->hpa
(*) the guest hypervisor will encode the ngva->gpa translation into its page
tables if npt is not present
Shadow pages contain the following information:
role.level:
The level in the shadow paging hierarchy that this shadow page belongs to.
1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
role.direct:
If set, leaf sptes reachable from this page are for a linear range.
Examples include real mode translation, large guest pages backed by small
host pages, and gpa->hpa translations when NPT or EPT is active.
The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
by role.level (2MB for first level, 1GB for second level, 0.5TB for third
level, 256TB for fourth level)
If clear, this page corresponds to a guest page table denoted by the gfn
field.
role.quadrant:
When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses 64-bit
sptes. That means a guest page table contains more ptes than the host,
so multiple shadow pages are needed to shadow one guest page.
For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
first or second 512-gpte block in the guest page table. For second-level
page tables, each 32-bit gpte is converted to two 64-bit sptes
(since each first-level guest page is shadowed by two first-level
shadow pages) so role.quadrant takes values in the range 0..3. Each
quadrant maps 1GB virtual address space; a worked example follows this list.
role.access:
Inherited guest access permissions in the form uwx. Note execute
permission is positive, not negative.
role.invalid:
The page is invalid and should not be used. It is a root page that is
currently pinned (by a cpu hardware register pointing to it); once it is
unpinned it will be destroyed.
role.cr4_pae:
Contains the value of cr4.pae for which the page is valid (e.g. whether
32-bit or 64-bit gptes are in use).
role.cr4_nxe:
Contains the value of efer.nxe for which the page is valid.
role.cr0_wp:
Contains the value of cr0.wp for which the page is valid.
gfn:
Either the guest page table containing the translations shadowed by this
page, or the base page frame for linear translations. See role.direct.
spt:
A pageful of 64-bit sptes containing the translations for this page.
Accessed by both kvm and hardware.
The page pointed to by spt will have its page->private pointing back
at the shadow page structure.
sptes in spt point either at guest pages, or at lower-level shadow pages.
Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
at __pa(sp2->spt). sp2 will point back at sp1 through parent_pte.
The spt array forms a DAG structure with the shadow page as a node, and
guest pages as leaves.
gfns:
An array of 512 guest frame numbers, one for each present pte. Used to
perform a reverse map from a pte to a gfn.
slot_bitmap:
A bitmap containing one bit per memory slot. If the page contains a pte
mapping a page from memory slot n, then bit n of slot_bitmap will be set
(if a page is aliased among several slots, then it is not guaranteed that
all slots will be marked).
Used during dirty logging to avoid scanning a shadow page if none of its
pages need tracking.
root_count:
A counter keeping track of how many hardware registers (guest cr3 or
pdptrs) are now pointing at the page. While this counter is nonzero, the
page cannot be destroyed. See role.invalid.
multimapped:
Whether there exist multiple sptes pointing at this page.
parent_pte/parent_ptes:
If multimapped is zero, parent_pte points at the single spte that points at
this page's spt. Otherwise, parent_ptes points at a data structure
with a list of parent_ptes.
unsync:
If true, then the translations in this page may not match the guest's
translation. This is equivalent to the state of the tlb when a pte is
changed but before the tlb entry is flushed. Accordingly, unsync ptes
are synchronized when the guest executes invlpg or flushes its tlb by
other means. Valid for leaf pages.
unsync_children:
How many sptes in the page point at pages that are unsync (or have
unsynchronized children).
unsync_child_bitmap:
A bitmap indicating which sptes in spt point (directly or indirectly) at
pages that may be unsynchronized. Used to quickly locate all unsynchronized
pages reachable from a given page.
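The role.quadrant rule above admits a compact worked example (an illustrative
helper, not kernel code):

/* Under the 1GB-per-quadrant layout of second-level shadow pages for
 * 32-bit gptes, the quadrant serving a guest virtual address is just
 * its top two bits. */
static unsigned int pgd_shadow_quadrant(unsigned int gva)
{
    return gva >> 30;           /* e.g. gva 0x80000000 -> quadrant 2 */
}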
Reverse map
===========
The mmu maintains a reverse mapping whereby all ptes mapping a page can be
reached given its gfn. This is used, for example, when swapping out a page.
Synchronized and unsynchronized pages
=====================================
The guest uses two events to synchronize its tlb and page tables: tlb flushes
and page invalidations (invlpg).
A tlb flush means that we need to synchronize all sptes reachable from the
guest's cr3. This is expensive, so we keep all guest page tables write
protected, and synchronize sptes to gptes when a gpte is written.
A special case is when a guest page table is reachable from the current
guest cr3. In this case, the guest is obliged to issue an invlpg instruction
before using the translation. We take advantage of that by removing write
protection from the guest page, and allowing the guest to modify it freely.
We synchronize modified gptes when the guest invokes invlpg. This reduces
the amount of emulation we have to do when the guest modifies multiple gptes,
or when a guest page is no longer used as a page table and is used for
random guest data.
As a side effect we have to resynchronize all reachable unsynchronized shadow
pages on a tlb flush.
Reaction to events
==================
- guest page fault (or npt page fault, or ept violation)
This is the most complicated event. The cause of a page fault can be:
- a true guest fault (the guest translation won't allow the access) (*)
- access to a missing translation
- access to a protected translation
- when logging dirty pages, memory is write protected
- synchronized shadow pages are write protected (*)
- access to untranslatable memory (mmio)
(*) not applicable in direct mode
Handling a page fault is performed as follows:
- if needed, walk the guest page tables to determine the guest translation
(gva->gpa or ngpa->gpa)
- if permissions are insufficient, reflect the fault back to the guest
- determine the host page
- if this is an mmio request, there is no host page; call the emulator
to emulate the instruction instead
- walk the shadow page table to find the spte for the translation,
instantiating missing intermediate page tables as necessary
- try to unsynchronize the page
- if successful, we can let the guest continue and modify the gpte
- emulate the instruction
- if failed, unshadow the page and let the guest continue
- update any translations that were modified by the instruction
invlpg handling:
- walk the shadow page hierarchy and drop affected translations
- try to reinstantiate the indicated translation in the hope that the
guest will use it in the near future
Guest control register updates:
- mov to cr3
- look up new shadow roots
- synchronize newly reachable shadow pages
- mov to cr0/cr4/efer
- set up mmu context for new paging mode
- look up new shadow roots
- synchronize newly reachable shadow pages
Host translation updates:
- mmu notifier called with updated hva
- look up affected sptes through reverse map
- drop (or update) translations
Further reading
===============
- NPT presentation from KVM Forum 2008
http://www.linux-kvm.org/wiki/images/c/c8/KvmForum2008%24kdf2008_21.pdf
@@ -979,11 +979,13 @@ long kvm_arch_vm_ioctl(struct file *filp,
r = -EFAULT;
if (copy_from_user(&irq_event, argp, sizeof irq_event))
goto out;
r = -ENXIO;
if (irqchip_in_kernel(kvm)) {
__s32 status;
status = kvm_set_irq(kvm, KVM_USERSPACE_IRQ_SOURCE_ID,
irq_event.irq, irq_event.level);
if (ioctl == KVM_IRQ_LINE_STATUS) {
r = -EFAULT;
irq_event.status = status;
if (copy_to_user(argp, &irq_event,
sizeof irq_event))
@@ -1379,7 +1381,7 @@ static void kvm_release_vm_pages(struct kvm *kvm)
int i, j;
unsigned long base_gfn;
slots = kvm_memslots(kvm);
for (i = 0; i < slots->nmemslots; i++) {
memslot = &slots->memslots[i];
base_gfn = memslot->base_gfn;
@@ -1535,8 +1537,10 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
goto out;
if (copy_to_user(user_stack, stack,
sizeof(struct kvm_ia64_vcpu_stack))) {
r = -EFAULT;
goto out;
}
break;
}
@@ -51,7 +51,7 @@ static int __init kvm_vmm_init(void)
vmm_fpswa_interface = fpswa_interface;
/*Register vmm data to kvm side*/
return kvm_init(&vmm_info, 1024, 0, THIS_MODULE);
}
static void __exit kvm_vmm_exit(void)
@@ -21,6 +21,7 @@
/* operations for longs and pointers */
#define PPC_LL stringify_in_c(ld)
#define PPC_STL stringify_in_c(std)
#define PPC_STLU stringify_in_c(stdu)
#define PPC_LCMPI stringify_in_c(cmpdi)
#define PPC_LONG stringify_in_c(.llong)
#define PPC_LONG_ALIGN stringify_in_c(.balign 8)
@@ -44,6 +45,7 @@
/* operations for longs and pointers */
#define PPC_LL stringify_in_c(lwz)
#define PPC_STL stringify_in_c(stw)
#define PPC_STLU stringify_in_c(stwu)
#define PPC_LCMPI stringify_in_c(cmpwi)
#define PPC_LONG stringify_in_c(.long)
#define PPC_LONG_ALIGN stringify_in_c(.balign 4)
@@ -77,4 +77,14 @@ struct kvm_debug_exit_arch {
struct kvm_guest_debug_arch {
};
#define KVM_REG_MASK 0x001f
#define KVM_REG_EXT_MASK 0xffe0
#define KVM_REG_GPR 0x0000
#define KVM_REG_FPR 0x0020
#define KVM_REG_QPR 0x0040
#define KVM_REG_FQPR 0x0060
#define KVM_INTERRUPT_SET -1U
#define KVM_INTERRUPT_UNSET -2U
#endif /* __LINUX_KVM_POWERPC_H */
@@ -88,6 +88,8 @@
#define BOOK3S_HFLAG_DCBZ32 0x1
#define BOOK3S_HFLAG_SLB 0x2
#define BOOK3S_HFLAG_PAIRED_SINGLE 0x4
#define BOOK3S_HFLAG_NATIVE_PS 0x8
#define RESUME_FLAG_NV (1<<0) /* Reload guest nonvolatile state? */
#define RESUME_FLAG_HOST (1<<1) /* Resume host? */
@@ -22,46 +22,47 @@
#include <linux/types.h>
#include <linux/kvm_host.h>
#include <asm/kvm_book3s_asm.h>
struct kvmppc_slb {
u64 esid;
u64 vsid;
u64 orige;
u64 origv;
bool valid : 1;
bool Ks : 1;
bool Kp : 1;
bool nx : 1;
bool large : 1; /* PTEs are 16MB */
bool tb : 1; /* 1TB segment */
bool class : 1;
};
struct kvmppc_sr {
u32 raw;
u32 vsid;
bool Ks : 1;
bool Kp : 1;
bool nx : 1;
bool valid : 1;
};
struct kvmppc_bat {
u64 raw;
u32 bepi;
u32 bepi_mask;
u32 brpn;
u8 wimg;
u8 pp;
bool vs : 1;
bool vp : 1;
};
struct kvmppc_sid_map {
u64 guest_vsid;
u64 guest_esid;
u64 host_vsid;
bool valid : 1;
};
#define SID_MAP_BITS 9
@@ -70,7 +71,7 @@ struct kvmppc_sid_map {
struct kvmppc_vcpu_book3s {
struct kvm_vcpu vcpu;
struct kvmppc_book3s_shadow_vcpu *shadow_vcpu;
struct kvmppc_sid_map sid_map[SID_MAP_NUM];
struct kvmppc_slb slb[64];
struct {
@@ -82,9 +83,10 @@ struct kvmppc_vcpu_book3s {
struct kvmppc_bat ibat[8];
struct kvmppc_bat dbat[8];
u64 hid[6];
u64 gqr[8];
int slb_nr;
u64 sdr1;
u64 dsisr;
u64 hior;
u64 msr_mask;
u64 vsid_first;
@@ -98,15 +100,15 @@ struct kvmppc_vcpu_book3s {
#define CONTEXT_GUEST 1
#define CONTEXT_GUEST_END 2
#define VSID_REAL 0x1fffffffffc00000ULL
#define VSID_BAT 0x1fffffffffb00000ULL
#define VSID_REAL_DR 0x2000000000000000ULL
#define VSID_REAL_IR 0x4000000000000000ULL
#define VSID_PR 0x8000000000000000ULL
extern void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong ea, ulong ea_mask);
extern void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 vp, u64 vp_mask);
extern void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end);
extern void kvmppc_set_msr(struct kvm_vcpu *vcpu, u64 new_msr);
extern void kvmppc_mmu_book3s_64_init(struct kvm_vcpu *vcpu);
extern void kvmppc_mmu_book3s_32_init(struct kvm_vcpu *vcpu);
@@ -114,11 +116,13 @@ extern int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *pte);
extern int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr);
extern void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu);
extern struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data);
extern int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr, bool data);
extern void kvmppc_book3s_queue_irqprio(struct kvm_vcpu *vcpu, unsigned int vec);
extern void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat,
bool upper, u32 val);
extern void kvmppc_giveup_ext(struct kvm_vcpu *vcpu, ulong msr);
extern int kvmppc_emulate_paired_single(struct kvm_run *run, struct kvm_vcpu *vcpu);
extern u32 kvmppc_trampoline_lowmem;
extern u32 kvmppc_trampoline_enter;
@@ -126,6 +130,8 @@ extern void kvmppc_rmcall(ulong srr0, ulong srr1);
extern void kvmppc_load_up_fpu(void);
extern void kvmppc_load_up_altivec(void);
extern void kvmppc_load_up_vsx(void);
extern u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst);
extern ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst);
static inline struct kvmppc_vcpu_book3s *to_book3s(struct kvm_vcpu *vcpu)
{
@@ -140,7 +146,108 @@ static inline ulong dsisr(void)
}
extern void kvm_return_point(void);
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu);
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
if ( num < 14 ) {
to_svcpu(vcpu)->gpr[num] = val;
to_book3s(vcpu)->shadow_vcpu->gpr[num] = val;
} else
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
if ( num < 14 )
return to_svcpu(vcpu)->gpr[num];
else
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
to_svcpu(vcpu)->cr = val;
to_book3s(vcpu)->shadow_vcpu->cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
to_svcpu(vcpu)->xer = val;
to_book3s(vcpu)->shadow_vcpu->xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->xer;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
to_svcpu(vcpu)->pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->pc;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
ulong pc = kvmppc_get_pc(vcpu);
struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
/* Load the instruction manually if it failed to do so in the
* exit path */
if (svcpu->last_inst == KVM_INST_FETCH_FAILED)
kvmppc_ld(vcpu, &pc, sizeof(u32), &svcpu->last_inst, false);
return svcpu->last_inst;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
return to_svcpu(vcpu)->fault_dar;
}
/* Magic register values loaded into r3 and r4 before the 'sc' assembly
* instruction for the OSI hypercalls */
#define OSI_SC_MAGIC_R3 0x113724FA
#define OSI_SC_MAGIC_R4 0x77810F9B
#define INS_DCBZ 0x7c0007ec
/* Also add subarch specific defines */
#ifdef CONFIG_PPC_BOOK3S_32
#include <asm/kvm_book3s_32.h>
#else
#include <asm/kvm_book3s_64.h>
#endif
#endif /* __ASM_KVM_BOOK3S_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOK3S_32_H__
#define __ASM_KVM_BOOK3S_32_H__
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
{
return to_book3s(vcpu)->shadow_vcpu;
}
#define PTE_SIZE 12
#define VSID_ALL 0
#define SR_INVALID 0x00000001 /* VSID 1 should always be unused */
#define SR_KP 0x20000000
#define PTE_V 0x80000000
#define PTE_SEC 0x00000040
#define PTE_M 0x00000010
#define PTE_R 0x00000100
#define PTE_C 0x00000080
#define SID_SHIFT 28
#define ESID_MASK 0xf0000000
#define VSID_MASK 0x00fffffff0000000ULL
#endif /* __ASM_KVM_BOOK3S_32_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOK3S_64_H__
#define __ASM_KVM_BOOK3S_64_H__
static inline struct kvmppc_book3s_shadow_vcpu *to_svcpu(struct kvm_vcpu *vcpu)
{
return &get_paca()->shadow_vcpu;
}
#endif /* __ASM_KVM_BOOK3S_64_H__ */
@@ -22,7 +22,7 @@
#ifdef __ASSEMBLY__
#ifdef CONFIG_KVM_BOOK3S_HANDLER
#include <asm/kvm_asm.h>
@@ -55,7 +55,7 @@ kvmppc_resume_\intno:
.macro DO_KVM intno
.endm
#endif /* CONFIG_KVM_BOOK3S_HANDLER */
#else /*__ASSEMBLY__ */
@@ -63,12 +63,33 @@ struct kvmppc_book3s_shadow_vcpu {
ulong gpr[14];
u32 cr;
u32 xer;
u32 fault_dsisr;
u32 last_inst;
ulong ctr;
ulong lr;
ulong pc;
ulong shadow_srr1;
ulong fault_dar;
ulong host_r1;
ulong host_r2;
ulong handler;
ulong scratch0;
ulong scratch1;
ulong vmhandler;
u8 in_guest;
#ifdef CONFIG_PPC_BOOK3S_32
u32 sr[16]; /* Guest SRs */
#endif
#ifdef CONFIG_PPC_BOOK3S_64
u8 slb_max; /* highest used guest slb entry */
struct {
u64 esid;
u64 vsid;
} slb[64]; /* guest SLB */
#endif
};
#endif /*__ASSEMBLY__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_BOOKE_H__
#define __ASM_KVM_BOOKE_H__
#include <linux/types.h>
#include <linux/kvm_host.h>
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, ulong val)
{
vcpu->arch.gpr[num] = val;
}
static inline ulong kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
return vcpu->arch.gpr[num];
}
static inline void kvmppc_set_cr(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.cr = val;
}
static inline u32 kvmppc_get_cr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.cr;
}
static inline void kvmppc_set_xer(struct kvm_vcpu *vcpu, u32 val)
{
vcpu->arch.xer = val;
}
static inline u32 kvmppc_get_xer(struct kvm_vcpu *vcpu)
{
return vcpu->arch.xer;
}
static inline u32 kvmppc_get_last_inst(struct kvm_vcpu *vcpu)
{
return vcpu->arch.last_inst;
}
static inline void kvmppc_set_ctr(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.ctr = val;
}
static inline ulong kvmppc_get_ctr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.ctr;
}
static inline void kvmppc_set_lr(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.lr = val;
}
static inline ulong kvmppc_get_lr(struct kvm_vcpu *vcpu)
{
return vcpu->arch.lr;
}
static inline void kvmppc_set_pc(struct kvm_vcpu *vcpu, ulong val)
{
vcpu->arch.pc = val;
}
static inline ulong kvmppc_get_pc(struct kvm_vcpu *vcpu)
{
return vcpu->arch.pc;
}
static inline ulong kvmppc_get_fault_dar(struct kvm_vcpu *vcpu)
{
return vcpu->arch.fault_dear;
}
#endif /* __ASM_KVM_BOOKE_H__ */
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright Novell Inc. 2010
*
* Authors: Alexander Graf <agraf@suse.de>
*/
#ifndef __ASM_KVM_FPU_H__
#define __ASM_KVM_FPU_H__
#include <linux/types.h>
extern void fps_fres(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_frsqrte(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_fsqrts(struct thread_struct *t, u32 *dst, u32 *src1);
extern void fps_fadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fdivs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmuls(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2);
extern void fps_fmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fnmadds(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fnmsubs(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
extern void fps_fsel(struct thread_struct *t, u32 *dst, u32 *src1, u32 *src2,
u32 *src3);
#define FPD_ONE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1);
#define FPD_TWO_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1, u64 *src2);
#define FPD_THREE_IN(name) extern void fpd_ ## name(u64 *fpscr, u32 *cr, \
u64 *dst, u64 *src1, u64 *src2, u64 *src3);
extern void fpd_fcmpu(u64 *fpscr, u32 *cr, u64 *src1, u64 *src2);
extern void fpd_fcmpo(u64 *fpscr, u32 *cr, u64 *src1, u64 *src2);
FPD_ONE_IN(fsqrts)
FPD_ONE_IN(frsqrtes)
FPD_ONE_IN(fres)
FPD_ONE_IN(frsp)
FPD_ONE_IN(fctiw)
FPD_ONE_IN(fctiwz)
FPD_ONE_IN(fsqrt)
FPD_ONE_IN(fre)
FPD_ONE_IN(frsqrte)
FPD_ONE_IN(fneg)
FPD_ONE_IN(fabs)
FPD_TWO_IN(fadds)
FPD_TWO_IN(fsubs)
FPD_TWO_IN(fdivs)
FPD_TWO_IN(fmuls)
FPD_TWO_IN(fcpsgn)
FPD_TWO_IN(fdiv)
FPD_TWO_IN(fadd)
FPD_TWO_IN(fmul)
FPD_TWO_IN(fsub)
FPD_THREE_IN(fmsubs)
FPD_THREE_IN(fmadds)
FPD_THREE_IN(fnmsubs)
FPD_THREE_IN(fnmadds)
FPD_THREE_IN(fsel)
FPD_THREE_IN(fmsub)
FPD_THREE_IN(fmadd)
FPD_THREE_IN(fnmsub)
FPD_THREE_IN(fnmadd)
#endif
@@ -66,7 +66,7 @@ struct kvm_vcpu_stat {
u32 dec_exits;
u32 ext_intr_exits;
u32 halt_wakeup;
#ifdef CONFIG_PPC_BOOK3S
u32 pf_storage;
u32 pf_instruc;
u32 sp_storage;
@@ -124,12 +124,12 @@ struct kvm_arch {
};
struct kvmppc_pte {
ulong eaddr;
u64 vpage;
ulong raddr;
bool may_read : 1;
bool may_write : 1;
bool may_execute : 1;
};
struct kvmppc_mmu {
@@ -145,7 +145,7 @@ struct kvmppc_mmu {
int (*xlate)(struct kvm_vcpu *vcpu, gva_t eaddr, struct kvmppc_pte *pte, bool data);
void (*reset_msr)(struct kvm_vcpu *vcpu);
void (*tlbie)(struct kvm_vcpu *vcpu, ulong addr, bool large);
int (*esid_to_vsid)(struct kvm_vcpu *vcpu, ulong esid, u64 *vsid);
u64 (*ea_to_vp)(struct kvm_vcpu *vcpu, gva_t eaddr, bool data);
bool (*is_dcbz32)(struct kvm_vcpu *vcpu);
};
@@ -160,7 +160,7 @@ struct hpte_cache {
struct kvm_vcpu_arch {
ulong host_stack;
u32 host_pid;
#ifdef CONFIG_PPC_BOOK3S
ulong host_msr;
ulong host_r2;
void *host_retip;
@@ -175,7 +175,7 @@ struct kvm_vcpu_arch {
ulong gpr[32];
u64 fpr[32];
u64 fpscr;
#ifdef CONFIG_ALTIVEC
vector128 vr[32];
@@ -186,19 +186,23 @@ struct kvm_vcpu_arch {
u64 vsr[32];
#endif
#ifdef CONFIG_PPC_BOOK3S
/* For Gekko paired singles */
u32 qpr[32];
#endif
#ifdef CONFIG_BOOKE
ulong pc;
ulong ctr;
ulong lr;
ulong xer;
u32 cr;
#endif
ulong msr;
#ifdef CONFIG_PPC_BOOK3S
ulong shadow_msr;
ulong shadow_srr1;
ulong hflags;
ulong guest_owned_ext;
#endif
@@ -253,20 +257,22 @@ struct kvm_vcpu_arch {
struct dentry *debugfs_exit_timing;
#endif
#ifdef CONFIG_BOOKE
u32 last_inst;
ulong fault_dear;
ulong fault_esr;
ulong queued_dear;
ulong queued_esr;
#endif
gpa_t paddr_accessed;
u8 io_gpr; /* GPR used as IO source/target */
u8 mmio_is_bigendian;
u8 mmio_sign_extend;
u8 dcr_needed;
u8 dcr_is_write;
u8 osi_needed;
u8 osi_enabled;
u32 cpr0_cfgaddr; /* holds the last set cpr0_cfgaddr */
@@ -275,7 +281,7 @@ struct kvm_vcpu_arch {
u64 dec_jiffies;
unsigned long pending_exceptions;
#ifdef CONFIG_PPC_BOOK3S
struct hpte_cache hpte_cache[HPTEG_CACHE_NUM];
int hpte_cache_offset;
#endif
@@ -30,6 +30,8 @@
#include <linux/kvm_host.h>
#ifdef CONFIG_PPC_BOOK3S
#include <asm/kvm_book3s.h>
#else
#include <asm/kvm_booke.h>
#endif
enum emulation_result {
@@ -37,6 +39,7 @@ enum emulation_result {
EMULATE_DO_MMIO, /* kvm_run filled with MMIO request */
EMULATE_DO_DCR, /* kvm_run filled with DCR request */
EMULATE_FAIL, /* can't emulate this instruction */
EMULATE_AGAIN, /* something went wrong. go again */
};
extern int __kvmppc_vcpu_run(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu);
@@ -48,8 +51,11 @@ extern void kvmppc_dump_vcpu(struct kvm_vcpu *vcpu);
extern int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_bigendian);
extern int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_bigendian);
extern int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
u64 val, unsigned int bytes, int is_bigendian);
extern int kvmppc_emulate_instruction(struct kvm_run *run,
struct kvm_vcpu *vcpu);
@@ -63,6 +69,7 @@ extern void kvmppc_mmu_map(struct kvm_vcpu *vcpu, u64 gvaddr, gpa_t gpaddr,
extern void kvmppc_mmu_priv_switch(struct kvm_vcpu *vcpu, int usermode);
extern void kvmppc_mmu_switch_pid(struct kvm_vcpu *vcpu, u32 pid);
extern void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_init(struct kvm_vcpu *vcpu);
extern int kvmppc_mmu_dtlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
extern int kvmppc_mmu_itlb_index(struct kvm_vcpu *vcpu, gva_t eaddr);
extern gpa_t kvmppc_mmu_xlate(struct kvm_vcpu *vcpu, unsigned int gtlb_index,
@@ -88,6 +95,8 @@ extern void kvmppc_core_queue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_dequeue_dec(struct kvm_vcpu *vcpu);
extern void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq);
extern void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq);
extern int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int op, int *advance);
@@ -99,81 +108,37 @@ extern void kvmppc_booke_exit(void);
extern void kvmppc_core_destroy_mmu(struct kvm_vcpu *vcpu);
/*
 * Cuts out inst bits with ordering according to spec.
 * That means the leftmost bit is zero. All given bits are included.
 */
static inline u32 kvmppc_get_field(u64 inst, int msb, int lsb)
{
u32 r;
u32 mask;
BUG_ON(msb > lsb);
mask = (1 << (lsb - msb + 1)) - 1;
r = (inst >> (63 - lsb)) & mask;
return r;
}
/*
 * Replaces inst bits with ordering according to spec.
 */
static inline u32 kvmppc_set_field(u64 inst, int msb, int lsb, int value)
{
u32 r;
u32 mask;
BUG_ON(msb > lsb);
mask = ((1 << (lsb - msb + 1)) - 1) << (63 - lsb);
r = (inst & ~mask) | ((value << (63 - lsb)) & mask);
return r;
}
#endif /* __POWERPC_KVM_PPC_H__ */
@@ -27,6 +27,8 @@ extern int __init_new_context(void);
extern void __destroy_context(int context_id);
static inline void mmu_context_init(void) { }
#else
extern unsigned long __init_new_context(void);
extern void __destroy_context(unsigned long context_id);
extern void mmu_context_init(void);
#endif
@@ -23,7 +23,7 @@
#include <asm/page.h>
#include <asm/exception-64e.h>
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include <asm/kvm_book3s_asm.h>
#endif
register struct paca_struct *local_paca asm("r13");
@@ -137,15 +137,9 @@ struct paca_struct {
u64 startpurr; /* PURR/TB value snapshot */
u64 startspurr; /* SPURR value snapshot */
#ifdef CONFIG_KVM_BOOK3S_HANDLER
/* We use this to store guest state in */
struct kvmppc_book3s_shadow_vcpu shadow_vcpu;
#endif
};
@@ -229,6 +229,9 @@ struct thread_struct {
unsigned long spefscr; /* SPE & eFP status */
int used_spe; /* set if process has used spe */
#endif /* CONFIG_SPE */
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
void* kvm_shadow_vcpu; /* KVM internal data */
#endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
};
#define ARCH_MIN_TASKALIGN 16
@@ -293,10 +293,12 @@
#define HID1_ABE (1<<10) /* 7450 Address Broadcast Enable */
#define HID1_PS (1<<16) /* 750FX PLL selection */
#define SPRN_HID2 0x3F8 /* Hardware Implementation Register 2 */
#define SPRN_HID2_GEKKO 0x398 /* Gekko HID2 Register */
#define SPRN_IABR 0x3F2 /* Instruction Address Breakpoint Register */
#define SPRN_IABR2 0x3FA /* 83xx */
#define SPRN_IBCR 0x135 /* 83xx Insn Breakpoint Control Reg */
#define SPRN_HID4 0x3F4 /* 970 HID4 */
#define SPRN_HID4_GEKKO 0x3F3 /* Gekko HID4 */
#define SPRN_HID5 0x3F6 /* 970 HID5 */
#define SPRN_HID6 0x3F9 /* BE HID 6 */
#define HID6_LB (0x0F<<12) /* Concurrent Large Page Modes */
@@ -465,6 +467,14 @@
#define SPRN_VRSAVE 0x100 /* Vector Register Save Register */
#define SPRN_XER 0x001 /* Fixed Point Exception Register */
#define SPRN_MMCR0_GEKKO 0x3B8 /* Gekko Monitor Mode Control Register 0 */
#define SPRN_MMCR1_GEKKO 0x3BC /* Gekko Monitor Mode Control Register 1 */
#define SPRN_PMC1_GEKKO 0x3B9 /* Gekko Performance Monitor Control 1 */
#define SPRN_PMC2_GEKKO 0x3BA /* Gekko Performance Monitor Control 2 */
#define SPRN_PMC3_GEKKO 0x3BD /* Gekko Performance Monitor Control 3 */
#define SPRN_PMC4_GEKKO 0x3BE /* Gekko Performance Monitor Control 4 */
#define SPRN_WPAR_GEKKO 0x399 /* Gekko Write Pipe Address Register */
#define SPRN_SCOMC 0x114 /* SCOM Access Control */
#define SPRN_SCOMD 0x115 /* SCOM Access DATA */
@@ -50,6 +50,9 @@
#endif
#ifdef CONFIG_KVM
#include <linux/kvm_host.h>
#ifndef CONFIG_BOOKE
#include <asm/kvm_book3s.h>
#endif
#endif
#ifdef CONFIG_PPC32
@@ -105,6 +108,9 @@ int main(void)
DEFINE(THREAD_USED_SPE, offsetof(struct thread_struct, used_spe));
#endif /* CONFIG_SPE */
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_KVM_BOOK3S_32_HANDLER
DEFINE(THREAD_KVM_SVCPU, offsetof(struct thread_struct, kvm_shadow_vcpu));
#endif
DEFINE(TI_FLAGS, offsetof(struct thread_info, flags));
DEFINE(TI_LOCAL_FLAGS, offsetof(struct thread_info, local_flags));
@@ -191,33 +197,9 @@ int main(void)
DEFINE(PACA_DATA_OFFSET, offsetof(struct paca_struct, data_offset));
DEFINE(PACA_TRAP_SAVE, offsetof(struct paca_struct, trap_save));
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
DEFINE(PACA_KVM_SVCPU, offsetof(struct paca_struct, shadow_vcpu));
DEFINE(SVCPU_SLB, offsetof(struct kvmppc_book3s_shadow_vcpu, slb));
DEFINE(SVCPU_SLB_MAX, offsetof(struct kvmppc_book3s_shadow_vcpu, slb_max));
#endif
#endif /* CONFIG_PPC64 */
@@ -228,8 +210,8 @@ int main(void)
/* Interrupt register frame */
DEFINE(STACK_FRAME_OVERHEAD, STACK_FRAME_OVERHEAD);
DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
#ifdef CONFIG_PPC64
/* Create extra stack space for SRR0 and SRR1 when calling prom/rtas. */
DEFINE(PROM_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
DEFINE(RTAS_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs) + 16);
@@ -412,9 +394,6 @@ int main(void)
DEFINE(VCPU_HOST_STACK, offsetof(struct kvm_vcpu, arch.host_stack));
DEFINE(VCPU_HOST_PID, offsetof(struct kvm_vcpu, arch.host_pid));
DEFINE(VCPU_GPRS, offsetof(struct kvm_vcpu, arch.gpr));
DEFINE(VCPU_MSR, offsetof(struct kvm_vcpu, arch.msr));
DEFINE(VCPU_SPRG4, offsetof(struct kvm_vcpu, arch.sprg4));
DEFINE(VCPU_SPRG5, offsetof(struct kvm_vcpu, arch.sprg5));
@@ -422,27 +401,68 @@
DEFINE(VCPU_SPRG7, offsetof(struct kvm_vcpu, arch.sprg7));
DEFINE(VCPU_SHADOW_PID, offsetof(struct kvm_vcpu, arch.shadow_pid));
/* book3s */
#ifdef CONFIG_PPC_BOOK3S
DEFINE(VCPU_HOST_RETIP, offsetof(struct kvm_vcpu, arch.host_retip));
DEFINE(VCPU_HOST_R2, offsetof(struct kvm_vcpu, arch.host_r2));
DEFINE(VCPU_HOST_MSR, offsetof(struct kvm_vcpu, arch.host_msr));
DEFINE(VCPU_SHADOW_MSR, offsetof(struct kvm_vcpu, arch.shadow_msr));
DEFINE(VCPU_SHADOW_SRR1, offsetof(struct kvm_vcpu, arch.shadow_srr1));
DEFINE(VCPU_TRAMPOLINE_LOWMEM, offsetof(struct kvm_vcpu, arch.trampoline_lowmem));
DEFINE(VCPU_TRAMPOLINE_ENTER, offsetof(struct kvm_vcpu, arch.trampoline_enter));
DEFINE(VCPU_HIGHMEM_HANDLER, offsetof(struct kvm_vcpu, arch.highmem_handler));
DEFINE(VCPU_RMCALL, offsetof(struct kvm_vcpu, arch.rmcall));
DEFINE(VCPU_HFLAGS, offsetof(struct kvm_vcpu, arch.hflags));
DEFINE(VCPU_SVCPU, offsetof(struct kvmppc_vcpu_book3s, shadow_vcpu) -
offsetof(struct kvmppc_vcpu_book3s, vcpu));
DEFINE(SVCPU_CR, offsetof(struct kvmppc_book3s_shadow_vcpu, cr));
DEFINE(SVCPU_XER, offsetof(struct kvmppc_book3s_shadow_vcpu, xer));
DEFINE(SVCPU_CTR, offsetof(struct kvmppc_book3s_shadow_vcpu, ctr));
DEFINE(SVCPU_LR, offsetof(struct kvmppc_book3s_shadow_vcpu, lr));
DEFINE(SVCPU_PC, offsetof(struct kvmppc_book3s_shadow_vcpu, pc));
DEFINE(SVCPU_R0, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[0]));
DEFINE(SVCPU_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[1]));
DEFINE(SVCPU_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[2]));
DEFINE(SVCPU_R3, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[3]));
DEFINE(SVCPU_R4, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[4]));
DEFINE(SVCPU_R5, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[5]));
DEFINE(SVCPU_R6, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[6]));
DEFINE(SVCPU_R7, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[7]));
DEFINE(SVCPU_R8, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[8]));
DEFINE(SVCPU_R9, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[9]));
DEFINE(SVCPU_R10, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[10]));
DEFINE(SVCPU_R11, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[11]));
DEFINE(SVCPU_R12, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[12]));
DEFINE(SVCPU_R13, offsetof(struct kvmppc_book3s_shadow_vcpu, gpr[13]));
DEFINE(SVCPU_HOST_R1, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r1));
DEFINE(SVCPU_HOST_R2, offsetof(struct kvmppc_book3s_shadow_vcpu, host_r2));
DEFINE(SVCPU_VMHANDLER, offsetof(struct kvmppc_book3s_shadow_vcpu,
vmhandler));
DEFINE(SVCPU_SCRATCH0, offsetof(struct kvmppc_book3s_shadow_vcpu,
scratch0));
DEFINE(SVCPU_SCRATCH1, offsetof(struct kvmppc_book3s_shadow_vcpu,
scratch1));
DEFINE(SVCPU_IN_GUEST, offsetof(struct kvmppc_book3s_shadow_vcpu,
in_guest));
DEFINE(SVCPU_FAULT_DSISR, offsetof(struct kvmppc_book3s_shadow_vcpu,
fault_dsisr));
DEFINE(SVCPU_FAULT_DAR, offsetof(struct kvmppc_book3s_shadow_vcpu,
fault_dar));
DEFINE(SVCPU_LAST_INST, offsetof(struct kvmppc_book3s_shadow_vcpu,
last_inst));
DEFINE(SVCPU_SHADOW_SRR1, offsetof(struct kvmppc_book3s_shadow_vcpu,
shadow_srr1));
#ifdef CONFIG_PPC_BOOK3S_32
DEFINE(SVCPU_SR, offsetof(struct kvmppc_book3s_shadow_vcpu, sr));
#endif
#else
DEFINE(VCPU_CR, offsetof(struct kvm_vcpu, arch.cr));
DEFINE(VCPU_XER, offsetof(struct kvm_vcpu, arch.xer));
DEFINE(VCPU_LR, offsetof(struct kvm_vcpu, arch.lr));
DEFINE(VCPU_CTR, offsetof(struct kvm_vcpu, arch.ctr));
DEFINE(VCPU_PC, offsetof(struct kvm_vcpu, arch.pc));
DEFINE(VCPU_LAST_INST, offsetof(struct kvm_vcpu, arch.last_inst));
DEFINE(VCPU_FAULT_DEAR, offsetof(struct kvm_vcpu, arch.fault_dear));
DEFINE(VCPU_FAULT_ESR, offsetof(struct kvm_vcpu, arch.fault_esr));
#endif /* CONFIG_PPC_BOOK3S */
#endif
#ifdef CONFIG_44x
DEFINE(PGD_T_LOG2, PGD_T_LOG2);
......
......@@ -33,6 +33,7 @@
#include <asm/asm-offsets.h>
#include <asm/ptrace.h>
#include <asm/bug.h>
#include <asm/kvm_book3s_asm.h>
/* The 601 only has IBATs; cr0.eq is set on 601 when using this macro */
#define LOAD_BAT(n, reg, RA, RB) \
......@@ -303,6 +304,7 @@ __secondary_hold_acknowledge:
*/
#define EXCEPTION(n, label, hdlr, xfer) \
. = n; \
DO_KVM n; \
label: \
EXCEPTION_PROLOG; \
addi r3,r1,STACK_FRAME_OVERHEAD; \
......@@ -358,6 +360,7 @@ i##n: \
* -- paulus.
*/
. = 0x200
DO_KVM 0x200
mtspr SPRN_SPRG_SCRATCH0,r10
mtspr SPRN_SPRG_SCRATCH1,r11
mfcr r10
......@@ -381,6 +384,7 @@ i##n: \
/* Data access exception. */
. = 0x300
DO_KVM 0x300
DataAccess:
EXCEPTION_PROLOG
mfspr r10,SPRN_DSISR
......@@ -397,6 +401,7 @@ DataAccess:
/* Instruction access exception. */
. = 0x400
DO_KVM 0x400
InstructionAccess:
EXCEPTION_PROLOG
andis. r0,r9,0x4000 /* no pte found? */
......@@ -413,6 +418,7 @@ InstructionAccess:
/* Alignment exception */
. = 0x600
DO_KVM 0x600
Alignment:
EXCEPTION_PROLOG
mfspr r4,SPRN_DAR
......@@ -427,6 +433,7 @@ Alignment:
/* Floating-point unavailable */
. = 0x800
DO_KVM 0x800
FPUnavailable:
BEGIN_FTR_SECTION
/*
......@@ -450,6 +457,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_FPU_UNAVAILABLE)
/* System call */
. = 0xc00
DO_KVM 0xc00
SystemCall:
EXCEPTION_PROLOG
EXC_XFER_EE_LITE(0xc00, DoSyscall)
......@@ -467,9 +475,11 @@ SystemCall:
* by executing an altivec instruction.
*/
. = 0xf00
DO_KVM 0xf00
b PerformanceMonitor
. = 0xf20
DO_KVM 0xf20
b AltiVecUnavailable
/*
......@@ -882,6 +892,10 @@ __secondary_start:
RFI
#endif /* CONFIG_SMP */
#ifdef CONFIG_KVM_BOOK3S_HANDLER
#include "../kvm/book3s_rmhandlers.S"
#endif
/*
* Those generic dummy functions are kept for CPUs not
* included in CONFIG_6xx
......
......@@ -37,7 +37,7 @@
#include <asm/firmware.h>
#include <asm/page_64.h>
#include <asm/irqflags.h>
#include <asm/kvm_book3s_64_asm.h>
#include <asm/kvm_book3s_asm.h>
/* The physical memory is laid out such that the secondary processor
* spin code sits at 0x0000...0x00ff. On server, the vectors follow
......@@ -169,7 +169,7 @@ exception_marker:
/* KVM trampoline code needs to be close to the interrupt handlers */
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
#include "../kvm/book3s_64_rmhandlers.S"
#include "../kvm/book3s_rmhandlers.S"
#endif
_GLOBAL(generic_secondary_thread_init)
......
......@@ -101,6 +101,10 @@ EXPORT_SYMBOL(pci_dram_offset);
EXPORT_SYMBOL(start_thread);
EXPORT_SYMBOL(kernel_thread);
#ifndef CONFIG_BOOKE
EXPORT_SYMBOL_GPL(cvt_df);
EXPORT_SYMBOL_GPL(cvt_fd);
#endif
EXPORT_SYMBOL(giveup_fpu);
#ifdef CONFIG_ALTIVEC
EXPORT_SYMBOL(giveup_altivec);
......
......@@ -147,7 +147,7 @@ static int __init kvmppc_44x_init(void)
if (r)
return r;
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), THIS_MODULE);
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_44x), 0, THIS_MODULE);
}
static void __exit kvmppc_44x_exit(void)
......
......@@ -22,12 +22,34 @@ config KVM
select ANON_INODES
select KVM_MMIO
config KVM_BOOK3S_HANDLER
bool
config KVM_BOOK3S_32_HANDLER
bool
select KVM_BOOK3S_HANDLER
config KVM_BOOK3S_64_HANDLER
bool
select KVM_BOOK3S_HANDLER
config KVM_BOOK3S_32
tristate "KVM support for PowerPC book3s_32 processors"
depends on EXPERIMENTAL && PPC_BOOK3S_32 && !SMP && !PTE_64BIT
select KVM
select KVM_BOOK3S_32_HANDLER
---help---
Support running unmodified book3s_32 guest kernels
in virtual machines on book3s_32 host processors.
This module provides access to the hardware capabilities through
a character device node named /dev/kvm.
If unsure, say N.
config KVM_BOOK3S_64
tristate "KVM support for PowerPC book3s_64 processors"
depends on EXPERIMENTAL && PPC64
depends on EXPERIMENTAL && PPC_BOOK3S_64
select KVM
select KVM_BOOK3S_64_HANDLER
---help---
......
......@@ -14,7 +14,7 @@ CFLAGS_emulate.o := -I.
common-objs-y += powerpc.o emulate.o
obj-$(CONFIG_KVM_EXIT_TIMING) += timing.o
obj-$(CONFIG_KVM_BOOK3S_64_HANDLER) += book3s_64_exports.o
obj-$(CONFIG_KVM_BOOK3S_HANDLER) += book3s_exports.o
AFLAGS_booke_interrupts.o := -I$(obj)
......@@ -40,17 +40,31 @@ kvm-objs-$(CONFIG_KVM_E500) := $(kvm-e500-objs)
kvm-book3s_64-objs := \
$(common-objs-y) \
fpu.o \
book3s_paired_singles.o \
book3s.o \
book3s_64_emulate.o \
book3s_64_interrupts.o \
book3s_emulate.o \
book3s_interrupts.o \
book3s_64_mmu_host.o \
book3s_64_mmu.o \
book3s_32_mmu.o
kvm-objs-$(CONFIG_KVM_BOOK3S_64) := $(kvm-book3s_64-objs)
kvm-book3s_32-objs := \
$(common-objs-y) \
fpu.o \
book3s_paired_singles.o \
book3s.o \
book3s_emulate.o \
book3s_interrupts.o \
book3s_32_mmu_host.o \
book3s_32_mmu.o
kvm-objs-$(CONFIG_KVM_BOOK3S_32) := $(kvm-book3s_32-objs)
kvm-objs := $(kvm-objs-m) $(kvm-objs-y)
obj-$(CONFIG_KVM_440) += kvm.o
obj-$(CONFIG_KVM_E500) += kvm.o
obj-$(CONFIG_KVM_BOOK3S_64) += kvm.o
obj-$(CONFIG_KVM_BOOK3S_32) += kvm.o
(This diff has been collapsed.)
......@@ -37,7 +37,7 @@
#define dprintk(X...) do { } while(0)
#endif
#ifdef DEBUG_PTE
#ifdef DEBUG_MMU_PTE
#define dprintk_pte(X...) printk(KERN_INFO X)
#else
#define dprintk_pte(X...) do { } while(0)
......@@ -45,6 +45,9 @@
#define PTEG_FLAG_ACCESSED 0x00000100
#define PTEG_FLAG_DIRTY 0x00000080
#ifndef SID_SHIFT
#define SID_SHIFT 28
#endif
static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
{
......@@ -57,6 +60,8 @@ static inline bool check_debug_ip(struct kvm_vcpu *vcpu)
static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_pte *pte, bool data);
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid);
static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t eaddr)
{
......@@ -66,13 +71,14 @@ static struct kvmppc_sr *find_sr(struct kvmppc_vcpu_book3s *vcpu_book3s, gva_t e
static u64 kvmppc_mmu_book3s_32_ea_to_vp(struct kvm_vcpu *vcpu, gva_t eaddr,
bool data)
{
struct kvmppc_sr *sre = find_sr(to_book3s(vcpu), eaddr);
u64 vsid;
struct kvmppc_pte pte;
if (!kvmppc_mmu_book3s_32_xlate_bat(vcpu, eaddr, &pte, data))
return pte.vpage;
return (((u64)eaddr >> 12) & 0xffff) | (((u64)sre->vsid) << 16);
kvmppc_mmu_book3s_32_esid_to_vsid(vcpu, eaddr >> SID_SHIFT, &vsid);
return (((u64)eaddr >> 12) & 0xffff) | (vsid << 16);
}
static void kvmppc_mmu_book3s_32_reset_msr(struct kvm_vcpu *vcpu)
......@@ -142,8 +148,13 @@ static int kvmppc_mmu_book3s_32_xlate_bat(struct kvm_vcpu *vcpu, gva_t eaddr,
bat->bepi_mask);
}
if ((eaddr & bat->bepi_mask) == bat->bepi) {
u64 vsid;
kvmppc_mmu_book3s_32_esid_to_vsid(vcpu,
eaddr >> SID_SHIFT, &vsid);
vsid <<= 16;
pte->vpage = (((u64)eaddr >> 12) & 0xffff) | vsid;
pte->raddr = bat->brpn | (eaddr & ~bat->bepi_mask);
pte->vpage = (eaddr >> 12) | VSID_BAT;
pte->may_read = bat->pp;
pte->may_write = bat->pp > 1;
pte->may_execute = true;
......@@ -172,7 +183,7 @@ static int kvmppc_mmu_book3s_32_xlate_pte(struct kvm_vcpu *vcpu, gva_t eaddr,
struct kvmppc_sr *sre;
hva_t ptegp;
u32 pteg[16];
u64 ptem = 0;
u32 ptem = 0;
int i;
int found = 0;
......@@ -302,6 +313,7 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
/* And then put in the new SR */
sre->raw = value;
sre->vsid = (value & 0x0fffffff);
sre->valid = (value & 0x80000000) ? false : true;
sre->Ks = (value & 0x40000000) ? true : false;
sre->Kp = (value & 0x20000000) ? true : false;
sre->nx = (value & 0x10000000) ? true : false;
......@@ -312,36 +324,48 @@ static void kvmppc_mmu_book3s_32_mtsrin(struct kvm_vcpu *vcpu, u32 srnum,
static void kvmppc_mmu_book3s_32_tlbie(struct kvm_vcpu *vcpu, ulong ea, bool large)
{
kvmppc_mmu_pte_flush(vcpu, ea, ~0xFFFULL);
kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);
}
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
static int kvmppc_mmu_book3s_32_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid)
{
ulong ea = esid << SID_SHIFT;
struct kvmppc_sr *sr;
u64 gvsid = esid;
if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
sr = find_sr(to_book3s(vcpu), ea);
if (sr->valid)
gvsid = sr->vsid;
}
/* In case we only have one of MSR_IR or MSR_DR set, let's put
that in the real-mode context (and hope RM doesn't access
high memory) */
switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
case 0:
*vsid = (VSID_REAL >> 16) | esid;
*vsid = VSID_REAL | esid;
break;
case MSR_IR:
*vsid = (VSID_REAL_IR >> 16) | esid;
*vsid = VSID_REAL_IR | gvsid;
break;
case MSR_DR:
*vsid = (VSID_REAL_DR >> 16) | esid;
*vsid = VSID_REAL_DR | gvsid;
break;
case MSR_DR|MSR_IR:
{
ulong ea;
ea = esid << SID_SHIFT;
*vsid = find_sr(to_book3s(vcpu), ea)->vsid;
if (!sr->valid)
return -1;
*vsid = sr->vsid;
break;
}
default:
BUG();
}
if (vcpu->arch.msr & MSR_PR)
*vsid |= VSID_PR;
return 0;
}
......
/*
* Copyright (C) 2010 SUSE Linux Products GmbH. All rights reserved.
*
* Authors:
* Alexander Graf <agraf@suse.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
#include <linux/kvm_host.h>
#include <asm/kvm_ppc.h>
#include <asm/kvm_book3s.h>
#include <asm/mmu-hash32.h>
#include <asm/machdep.h>
#include <asm/mmu_context.h>
#include <asm/hw_irq.h>
/* #define DEBUG_MMU */
/* #define DEBUG_SR */
#ifdef DEBUG_MMU
#define dprintk_mmu(a, ...) printk(KERN_INFO a, __VA_ARGS__)
#else
#define dprintk_mmu(a, ...) do { } while(0)
#endif
#ifdef DEBUG_SR
#define dprintk_sr(a, ...) printk(KERN_INFO a, __VA_ARGS__)
#else
#define dprintk_sr(a, ...) do { } while(0)
#endif
#if PAGE_SHIFT != 12
#error Unknown page size
#endif
#ifdef CONFIG_SMP
#error XXX need to grab mmu_hash_lock
#endif
#ifdef CONFIG_PTE_64BIT
#error Only 32 bit pages are supported for now
#endif
static ulong htab;
static u32 htabmask;
static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
{
volatile u32 *pteg;
dprintk_mmu("KVM: Flushing SPTE: 0x%llx (0x%llx) -> 0x%llx\n",
pte->pte.eaddr, pte->pte.vpage, pte->host_va);
pteg = (u32*)pte->slot;
pteg[0] = 0;
asm volatile ("sync");
asm volatile ("tlbie %0" : : "r" (pte->pte.eaddr) : "memory");
asm volatile ("sync");
asm volatile ("tlbsync");
pte->host_va = 0;
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
}
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%x & 0x%x\n",
vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_ea &= ea_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.eaddr & ea_mask) == guest_ea) {
invalidate_pte(vcpu, pte);
}
}
/* Doing a complete flush -> start from scratch */
if (!ea_mask)
vcpu->arch.hpte_cache_offset = 0;
}
void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow vPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, guest_vp, vp_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
guest_vp &= vp_mask;
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.vpage & vp_mask) == guest_vp) {
invalidate_pte(vcpu, pte);
}
}
}
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, pa_start, pa_end);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if ((pte->pte.raddr >= pa_start) &&
(pte->pte.raddr < pa_end)) {
invalidate_pte(vcpu, pte);
}
}
}
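The three flush helpers above differ only in victim selection: by effective address under a mask, by virtual page number, or by guest-physical range. A hedged usage sketch follows (a fragment, not part of the patch; the first mask mirrors the 32-bit tlbie earlier in this diff, the rest are illustrative):

	kvmppc_mmu_pte_flush(vcpu, ea, 0x0FFFF000);	/* one 4k EA, any segment */
	kvmppc_mmu_pte_vflush(vcpu, eaddr >> 12, ~0ULL);	/* one exact virtual page */
	kvmppc_mmu_pte_pflush(vcpu, gpa_start, gpa_end);	/* a guest-physical window */
	kvmppc_mmu_pte_flush(vcpu, 0, 0);	/* ea_mask == 0: drop the whole cache */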
struct kvmppc_pte *kvmppc_mmu_find_pte(struct kvm_vcpu *vcpu, u64 ea, bool data)
{
int i;
u64 guest_vp;
guest_vp = vcpu->arch.mmu.ea_to_vp(vcpu, ea, false);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
struct hpte_cache *pte;
pte = &vcpu->arch.hpte_cache[i];
if (!pte->host_va)
continue;
if (pte->pte.vpage == guest_vp)
return &pte->pte;
}
return NULL;
}
static int kvmppc_mmu_hpte_cache_next(struct kvm_vcpu *vcpu)
{
if (vcpu->arch.hpte_cache_offset == HPTEG_CACHE_NUM)
kvmppc_mmu_pte_flush(vcpu, 0, 0);
return vcpu->arch.hpte_cache_offset++;
}
/* We keep 512 gvsid->hvsid entries, mapping the guest ones to the array using
* a hash, so we don't waste cycles on looping */
static u16 kvmppc_sid_hash(struct kvm_vcpu *vcpu, u64 gvsid)
{
return (u16)(((gvsid >> (SID_MAP_BITS * 7)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 6)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 5)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 4)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 3)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 2)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 1)) & SID_MAP_MASK) ^
((gvsid >> (SID_MAP_BITS * 0)) & SID_MAP_MASK));
}
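A minimal standalone sketch of the fold above, assuming SID_MAP_BITS is 9 and SID_MAP_MASK is 0x1ff (matching the 512-entry map the comment mentions); find_sid_vsid() below probes the resulting slot first and SID_MAP_MASK minus it second:

static u16 sid_hash_sketch(u64 gvsid)
{
	u16 hash = 0;
	int i;

	/* XOR eight 9-bit slices of the guest VSID into one index */
	for (i = 0; i < 8; i++)
		hash ^= (gvsid >> (9 * i)) & 0x1ff;

	return hash;
}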
static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
{
struct kvmppc_sid_map *map;
u16 sid_map_mask;
if (vcpu->arch.msr & MSR_PR)
gvsid |= VSID_PR;
sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
map = &to_book3s(vcpu)->sid_map[sid_map_mask];
if (map->guest_vsid == gvsid) {
dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
gvsid, map->host_vsid);
return map;
}
map = &to_book3s(vcpu)->sid_map[SID_MAP_MASK - sid_map_mask];
if (map->guest_vsid == gvsid) {
dprintk_sr("SR: Searching 0x%llx -> 0x%llx\n",
gvsid, map->host_vsid);
return map;
}
dprintk_sr("SR: Searching 0x%llx -> not found\n", gvsid);
return NULL;
}
static u32 *kvmppc_mmu_get_pteg(struct kvm_vcpu *vcpu, u32 vsid, u32 eaddr,
bool primary)
{
u32 page, hash;
ulong pteg = htab;
page = (eaddr & ~ESID_MASK) >> 12;
hash = ((vsid ^ page) << 6);
if (!primary)
hash = ~hash;
hash &= htabmask;
pteg |= hash;
dprintk_mmu("htab: %lx | hash: %x | htabmask: %x | pteg: %lx\n",
htab, hash, htabmask, pteg);
return (u32*)pteg;
}
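For reference, a sketch of the primary/secondary pairing implied above, reusing the file-scope htab and htabmask declared earlier; same arithmetic as the function, written as a pair, and illustrative rather than part of the patch:

static u32 *pteg_pair_sketch(u32 vsid, u32 eaddr, u32 **secondary)
{
	u32 page = (eaddr & ~ESID_MASK) >> 12;
	u32 hash = (vsid ^ page) << 6;

	/* the secondary PTEG simply uses the complemented hash */
	*secondary = (u32 *)(htab | (~hash & htabmask));
	return (u32 *)(htab | (hash & htabmask));
}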
extern char etext[];
int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
{
pfn_t hpaddr;
u64 va;
u64 vsid;
struct kvmppc_sid_map *map;
volatile u32 *pteg;
u32 eaddr = orig_pte->eaddr;
u32 pteg0, pteg1;
register int rr = 0;
bool primary = false;
bool evict = false;
int hpte_id;
struct hpte_cache *pte;
/* Get host physical address for gpa */
hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
if (kvm_is_error_hva(hpaddr)) {
printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n",
orig_pte->eaddr);
return -EINVAL;
}
hpaddr <<= PAGE_SHIFT;
/* and write the mapping ea -> hpa into the pt */
vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
map = find_sid_vsid(vcpu, vsid);
if (!map) {
kvmppc_mmu_map_segment(vcpu, eaddr);
map = find_sid_vsid(vcpu, vsid);
}
BUG_ON(!map);
vsid = map->host_vsid;
va = (vsid << SID_SHIFT) | (eaddr & ~ESID_MASK);
next_pteg:
if (rr == 16) {
primary = !primary;
evict = true;
rr = 0;
}
pteg = kvmppc_mmu_get_pteg(vcpu, vsid, eaddr, primary);
/* not evicting yet */
if (!evict && (pteg[rr] & PTE_V)) {
rr += 2;
goto next_pteg;
}
dprintk_mmu("KVM: old PTEG: %p (%d)\n", pteg, rr);
dprintk_mmu("KVM: %08x - %08x\n", pteg[0], pteg[1]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[2], pteg[3]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[4], pteg[5]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[6], pteg[7]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[8], pteg[9]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[10], pteg[11]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[12], pteg[13]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[14], pteg[15]);
pteg0 = ((eaddr & 0x0fffffff) >> 22) | (vsid << 7) | PTE_V |
(primary ? 0 : PTE_SEC);
pteg1 = hpaddr | PTE_M | PTE_R | PTE_C;
if (orig_pte->may_write) {
pteg1 |= PP_RWRW;
mark_page_dirty(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
} else {
pteg1 |= PP_RWRX;
}
local_irq_disable();
if (pteg[rr]) {
pteg[rr] = 0;
asm volatile ("sync");
}
pteg[rr + 1] = pteg1;
pteg[rr] = pteg0;
asm volatile ("sync");
local_irq_enable();
dprintk_mmu("KVM: new PTEG: %p\n", pteg);
dprintk_mmu("KVM: %08x - %08x\n", pteg[0], pteg[1]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[2], pteg[3]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[4], pteg[5]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[6], pteg[7]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[8], pteg[9]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[10], pteg[11]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[12], pteg[13]);
dprintk_mmu("KVM: %08x - %08x\n", pteg[14], pteg[15]);
/* Now tell our Shadow PTE code about the new page */
hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
pte = &vcpu->arch.hpte_cache[hpte_id];
dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%llx (0x%llx) -> %lx\n",
orig_pte->may_write ? 'w' : '-',
orig_pte->may_execute ? 'x' : '-',
orig_pte->eaddr, (ulong)pteg, va,
orig_pte->vpage, hpaddr);
pte->slot = (ulong)&pteg[rr];
pte->host_va = va;
pte->pte = *orig_pte;
pte->pfn = hpaddr >> PAGE_SHIFT;
return 0;
}
static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
{
struct kvmppc_sid_map *map;
struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
u16 sid_map_mask;
static int backwards_map = 0;
if (vcpu->arch.msr & MSR_PR)
gvsid |= VSID_PR;
/* Colliding hashes would trample each other in insertion order, so
   let's alternate between the two candidate slots to map them
   differently */
sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
if (backwards_map)
sid_map_mask = SID_MAP_MASK - sid_map_mask;
map = &to_book3s(vcpu)->sid_map[sid_map_mask];
/* Make sure we're taking the other map next time */
backwards_map = !backwards_map;
/* Uh-oh ... out of mappings. Let's flush! */
if (vcpu_book3s->vsid_next >= vcpu_book3s->vsid_max) {
vcpu_book3s->vsid_next = vcpu_book3s->vsid_first;
memset(vcpu_book3s->sid_map, 0,
sizeof(struct kvmppc_sid_map) * SID_MAP_NUM);
kvmppc_mmu_pte_flush(vcpu, 0, 0);
kvmppc_mmu_flush_segments(vcpu);
}
map->host_vsid = vcpu_book3s->vsid_next;
/* The stride would have to be 0x111 to be completely aligned with
   the rest of Linux, but that is just way too little space! */
vcpu_book3s->vsid_next += 1;
map->guest_vsid = gvsid;
map->valid = true;
return map;
}
int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
{
u32 esid = eaddr >> SID_SHIFT;
u64 gvsid;
u32 sr;
struct kvmppc_sid_map *map;
struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
/* Invalidate an entry */
svcpu->sr[esid] = SR_INVALID;
return -ENOENT;
}
map = find_sid_vsid(vcpu, gvsid);
if (!map)
map = create_sid_map(vcpu, gvsid);
map->guest_esid = esid;
sr = map->host_vsid | SR_KP;
svcpu->sr[esid] = sr;
dprintk_sr("MMU: mtsr %d, 0x%x\n", esid, sr);
return 0;
}
void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
{
int i;
struct kvmppc_book3s_shadow_vcpu *svcpu = to_svcpu(vcpu);
dprintk_sr("MMU: flushing all segments (%d)\n", ARRAY_SIZE(svcpu->sr));
for (i = 0; i < ARRAY_SIZE(svcpu->sr); i++)
svcpu->sr[i] = SR_INVALID;
}
void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
{
kvmppc_mmu_pte_flush(vcpu, 0, 0);
preempt_disable();
__destroy_context(to_book3s(vcpu)->context_id);
preempt_enable();
}
/* From mm/mmu_context_hash32.c */
#define CTX_TO_VSID(ctx) (((ctx) * (897 * 16)) & 0xffffff)
int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
int err;
ulong sdr1;
err = __init_new_context();
if (err < 0)
return -1;
vcpu3s->context_id = err;
vcpu3s->vsid_max = CTX_TO_VSID(vcpu3s->context_id + 1) - 1;
vcpu3s->vsid_first = CTX_TO_VSID(vcpu3s->context_id);
#if 0 /* XXX still doesn't guarantee uniqueness */
/* We could collide with the Linux vsid space because the vsid
* wraps around at 24 bits. We're safe if we do our own space
* though, so let's always set the highest bit. */
vcpu3s->vsid_max |= 0x00800000;
vcpu3s->vsid_first |= 0x00800000;
#endif
BUG_ON(vcpu3s->vsid_max < vcpu3s->vsid_first);
vcpu3s->vsid_next = vcpu3s->vsid_first;
/* Remember where the HTAB is */
asm ( "mfsdr1 %0" : "=r"(sdr1) );
htabmask = ((sdr1 & 0x1FF) << 16) | 0xFFC0;
htab = (ulong)__va(sdr1 & 0xffff0000);
return 0;
}
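A worked example of the SDR1 decode above, with the register value assumed purely for illustration:

/* With sdr1 = 0x00fe0007, the code above yields
 *   htab     = __va(0x00fe0000)                  (HTABORG, upper 16 bits)
 *   htabmask = (0x007 << 16) | 0xFFC0 = 0x7ffc0  (low 9 bits, shifted)
 * so kvmppc_mmu_get_pteg() returns htab | (hash & 0x7ffc0),
 * which is always 64-byte (one PTEG) aligned. */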
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, version 2, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* Copyright SUSE Linux Products GmbH 2009
*
* Authors: Alexander Graf <agraf@suse.de>
*/
/******************************************************************************
* *
* Entry code *
* *
*****************************************************************************/
.macro LOAD_GUEST_SEGMENTS
/* Required state:
*
* MSR = ~IR|DR
* R1 = host R1
* R2 = host R2
* R3 = shadow vcpu
* all other volatile GPRS = free
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*/
#define XCHG_SR(n) lwz r9, (SVCPU_SR+(n*4))(r3); \
mtsr n, r9
XCHG_SR(0)
XCHG_SR(1)
XCHG_SR(2)
XCHG_SR(3)
XCHG_SR(4)
XCHG_SR(5)
XCHG_SR(6)
XCHG_SR(7)
XCHG_SR(8)
XCHG_SR(9)
XCHG_SR(10)
XCHG_SR(11)
XCHG_SR(12)
XCHG_SR(13)
XCHG_SR(14)
XCHG_SR(15)
/* Clear BATs. */
#define KVM_KILL_BAT(n, reg) \
mtspr SPRN_IBAT##n##U,reg; \
mtspr SPRN_IBAT##n##L,reg; \
mtspr SPRN_DBAT##n##U,reg; \
mtspr SPRN_DBAT##n##L,reg; \
li r9, 0
KVM_KILL_BAT(0, r9)
KVM_KILL_BAT(1, r9)
KVM_KILL_BAT(2, r9)
KVM_KILL_BAT(3, r9)
.endm
/******************************************************************************
* *
* Exit code *
* *
*****************************************************************************/
.macro LOAD_HOST_SEGMENTS
/* Register usage at this point:
*
* R1 = host R1
* R2 = host R2
* R12 = exit handler id
* R13 = shadow vcpu - SHADOW_VCPU_OFF
* SVCPU.* = guest *
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*
*/
/* Restore BATs */
/* We only overwrite the upper part, so we only restore
the upper part. */
#define KVM_LOAD_BAT(n, reg, RA, RB) \
lwz RA,(n*16)+0(reg); \
lwz RB,(n*16)+4(reg); \
mtspr SPRN_IBAT##n##U,RA; \
mtspr SPRN_IBAT##n##L,RB; \
lwz RA,(n*16)+8(reg); \
lwz RB,(n*16)+12(reg); \
mtspr SPRN_DBAT##n##U,RA; \
mtspr SPRN_DBAT##n##L,RB; \
lis r9, BATS@ha
addi r9, r9, BATS@l
tophys(r9, r9)
KVM_LOAD_BAT(0, r9, r10, r11)
KVM_LOAD_BAT(1, r9, r10, r11)
KVM_LOAD_BAT(2, r9, r10, r11)
KVM_LOAD_BAT(3, r9, r10, r11)
/* Restore Segment Registers */
/* 0xc - 0xf */
li r0, 4
mtctr r0
LOAD_REG_IMMEDIATE(r3, 0x20000000 | (0x111 * 0xc))
lis r4, 0xc000
3: mtsrin r3, r4
addi r3, r3, 0x111 /* increment VSID */
addis r4, r4, 0x1000 /* address of next segment */
bdnz 3b
/* 0x0 - 0xb */
/* 'current->mm' needs to be in r4 */
tophys(r4, r2)
lwz r4, MM(r4)
tophys(r4, r4)
/* This only clobbers r0, r3, r4 and r5 */
bl switch_mmu_context
.endm
......@@ -232,7 +232,7 @@ static int kvmppc_mmu_book3s_64_xlate(struct kvm_vcpu *vcpu, gva_t eaddr,
}
dprintk("KVM MMU: Translated 0x%lx [0x%llx] -> 0x%llx "
"-> 0x%llx\n",
"-> 0x%lx\n",
eaddr, avpn, gpte->vpage, gpte->raddr);
found = true;
break;
......@@ -383,7 +383,7 @@ static void kvmppc_mmu_book3s_64_slbia(struct kvm_vcpu *vcpu)
if (vcpu->arch.msr & MSR_IR) {
kvmppc_mmu_flush_segments(vcpu);
kvmppc_mmu_map_segment(vcpu, vcpu->arch.pc);
kvmppc_mmu_map_segment(vcpu, kvmppc_get_pc(vcpu));
}
}
......@@ -439,37 +439,43 @@ static void kvmppc_mmu_book3s_64_tlbie(struct kvm_vcpu *vcpu, ulong va,
kvmppc_mmu_pte_vflush(vcpu, va >> 12, mask);
}
static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, u64 esid,
static int kvmppc_mmu_book3s_64_esid_to_vsid(struct kvm_vcpu *vcpu, ulong esid,
u64 *vsid)
{
ulong ea = esid << SID_SHIFT;
struct kvmppc_slb *slb;
u64 gvsid = esid;
if (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
if (slb)
gvsid = slb->vsid;
}
switch (vcpu->arch.msr & (MSR_DR|MSR_IR)) {
case 0:
*vsid = (VSID_REAL >> 16) | esid;
*vsid = VSID_REAL | esid;
break;
case MSR_IR:
*vsid = (VSID_REAL_IR >> 16) | esid;
*vsid = VSID_REAL_IR | gvsid;
break;
case MSR_DR:
*vsid = (VSID_REAL_DR >> 16) | esid;
*vsid = VSID_REAL_DR | gvsid;
break;
case MSR_DR|MSR_IR:
{
ulong ea;
struct kvmppc_slb *slb;
ea = esid << SID_SHIFT;
slb = kvmppc_mmu_book3s_64_find_slbe(to_book3s(vcpu), ea);
if (slb)
*vsid = slb->vsid;
else
if (!slb)
return -ENOENT;
*vsid = gvsid;
break;
}
default:
BUG();
break;
}
if (vcpu->arch.msr & MSR_PR)
*vsid |= VSID_PR;
return 0;
}
......
......@@ -48,21 +48,25 @@
static void invalidate_pte(struct hpte_cache *pte)
{
dprintk_mmu("KVM: Flushing SPT %d: 0x%llx (0x%llx) -> 0x%llx\n",
i, pte->pte.eaddr, pte->pte.vpage, pte->host_va);
dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
pte->pte.eaddr, pte->pte.vpage, pte->host_va);
ppc_md.hpte_invalidate(pte->slot, pte->host_va,
MMU_PAGE_4K, MMU_SEGSIZE_256M,
false);
pte->host_va = 0;
kvm_release_pfn_dirty(pte->pfn);
if (pte->pte.may_write)
kvm_release_pfn_dirty(pte->pfn);
else
kvm_release_pfn_clean(pte->pfn);
}
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, u64 guest_ea, u64 ea_mask)
void kvmppc_mmu_pte_flush(struct kvm_vcpu *vcpu, ulong guest_ea, ulong ea_mask)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%llx & 0x%llx\n",
dprintk_mmu("KVM: Flushing %d Shadow PTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, guest_ea, ea_mask);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
......@@ -106,12 +110,12 @@ void kvmppc_mmu_pte_vflush(struct kvm_vcpu *vcpu, u64 guest_vp, u64 vp_mask)
}
}
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, u64 pa_start, u64 pa_end)
void kvmppc_mmu_pte_pflush(struct kvm_vcpu *vcpu, ulong pa_start, ulong pa_end)
{
int i;
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%llx & 0x%llx\n",
vcpu->arch.hpte_cache_offset, guest_pa, pa_mask);
dprintk_mmu("KVM: Flushing %d Shadow pPTEs: 0x%lx & 0x%lx\n",
vcpu->arch.hpte_cache_offset, pa_start, pa_end);
BUG_ON(vcpu->arch.hpte_cache_offset > HPTEG_CACHE_NUM);
for (i = 0; i < vcpu->arch.hpte_cache_offset; i++) {
......@@ -182,7 +186,7 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
sid_map_mask = kvmppc_sid_hash(vcpu, gvsid);
map = &to_book3s(vcpu)->sid_map[sid_map_mask];
if (map->guest_vsid == gvsid) {
dprintk_slb("SLB: Searching 0x%llx -> 0x%llx\n",
dprintk_slb("SLB: Searching: 0x%llx -> 0x%llx\n",
gvsid, map->host_vsid);
return map;
}
......@@ -194,7 +198,8 @@ static struct kvmppc_sid_map *find_sid_vsid(struct kvm_vcpu *vcpu, u64 gvsid)
return map;
}
dprintk_slb("SLB: Searching 0x%llx -> not found\n", gvsid);
dprintk_slb("SLB: Searching %d/%d: 0x%llx -> not found\n",
sid_map_mask, SID_MAP_MASK - sid_map_mask, gvsid);
return NULL;
}
......@@ -212,7 +217,7 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
/* Get host physical address for gpa */
hpaddr = gfn_to_pfn(vcpu->kvm, orig_pte->raddr >> PAGE_SHIFT);
if (kvm_is_error_hva(hpaddr)) {
printk(KERN_INFO "Couldn't get guest page for gfn %llx!\n", orig_pte->eaddr);
printk(KERN_INFO "Couldn't get guest page for gfn %lx!\n", orig_pte->eaddr);
return -EINVAL;
}
hpaddr <<= PAGE_SHIFT;
......@@ -227,10 +232,16 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
vcpu->arch.mmu.esid_to_vsid(vcpu, orig_pte->eaddr >> SID_SHIFT, &vsid);
map = find_sid_vsid(vcpu, vsid);
if (!map) {
kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr);
ret = kvmppc_mmu_map_segment(vcpu, orig_pte->eaddr);
WARN_ON(ret < 0);
map = find_sid_vsid(vcpu, vsid);
}
BUG_ON(!map);
if (!map) {
printk(KERN_ERR "KVM: Segment map for 0x%llx (0x%lx) failed\n",
vsid, orig_pte->eaddr);
WARN_ON(true);
return -EINVAL;
}
vsid = map->host_vsid;
va = hpt_va(orig_pte->eaddr, vsid, MMU_SEGSIZE_256M);
......@@ -257,26 +268,26 @@ int kvmppc_mmu_map_page(struct kvm_vcpu *vcpu, struct kvmppc_pte *orig_pte)
if (ret < 0) {
/* If we couldn't map a primary PTE, try a secondary */
#ifdef USE_SECONDARY
hash = ~hash;
vflags ^= HPTE_V_SECONDARY;
attempt++;
if (attempt % 2)
vflags = HPTE_V_SECONDARY;
else
vflags = 0;
#else
attempt = 2;
#endif
goto map_again;
} else {
int hpte_id = kvmppc_mmu_hpte_cache_next(vcpu);
struct hpte_cache *pte = &vcpu->arch.hpte_cache[hpte_id];
dprintk_mmu("KVM: %c%c Map 0x%llx: [%lx] 0x%lx (0x%llx) -> %lx\n",
dprintk_mmu("KVM: %c%c Map 0x%lx: [%lx] 0x%lx (0x%llx) -> %lx\n",
((rflags & HPTE_R_PP) == 3) ? '-' : 'w',
(rflags & HPTE_R_N) ? '-' : 'x',
orig_pte->eaddr, hpteg, va, orig_pte->vpage, hpaddr);
/* The ppc_md code may give us a secondary entry even though we
asked for a primary. Fix up. */
if ((ret & _PTEIDX_SECONDARY) && !(vflags & HPTE_V_SECONDARY)) {
hash = ~hash;
hpteg = ((hash & htab_hash_mask) * HPTES_PER_GROUP);
}
pte->slot = hpteg + (ret & 7);
pte->host_va = va;
pte->pte = *orig_pte;
......@@ -321,6 +332,9 @@ static struct kvmppc_sid_map *create_sid_map(struct kvm_vcpu *vcpu, u64 gvsid)
map->guest_vsid = gvsid;
map->valid = true;
dprintk_slb("SLB: New mapping at %d: 0x%llx -> 0x%llx\n",
sid_map_mask, gvsid, map->host_vsid);
return map;
}
......@@ -331,14 +345,14 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
int found_inval = -1;
int r;
if (!get_paca()->kvm_slb_max)
get_paca()->kvm_slb_max = 1;
if (!to_svcpu(vcpu)->slb_max)
to_svcpu(vcpu)->slb_max = 1;
/* Are we overwriting? */
for (i = 1; i < get_paca()->kvm_slb_max; i++) {
if (!(get_paca()->kvm_slb[i].esid & SLB_ESID_V))
for (i = 1; i < to_svcpu(vcpu)->slb_max; i++) {
if (!(to_svcpu(vcpu)->slb[i].esid & SLB_ESID_V))
found_inval = i;
else if ((get_paca()->kvm_slb[i].esid & ESID_MASK) == esid)
else if ((to_svcpu(vcpu)->slb[i].esid & ESID_MASK) == esid)
return i;
}
......@@ -352,11 +366,11 @@ static int kvmppc_mmu_next_segment(struct kvm_vcpu *vcpu, ulong esid)
max_slb_size = mmu_slb_size;
/* Overflowing -> purge */
if ((get_paca()->kvm_slb_max) == max_slb_size)
if ((to_svcpu(vcpu)->slb_max) == max_slb_size)
kvmppc_mmu_flush_segments(vcpu);
r = get_paca()->kvm_slb_max;
get_paca()->kvm_slb_max++;
r = to_svcpu(vcpu)->slb_max;
to_svcpu(vcpu)->slb_max++;
return r;
}
......@@ -374,7 +388,7 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
if (vcpu->arch.mmu.esid_to_vsid(vcpu, esid, &gvsid)) {
/* Invalidate an entry */
get_paca()->kvm_slb[slb_index].esid = 0;
to_svcpu(vcpu)->slb[slb_index].esid = 0;
return -ENOENT;
}
......@@ -388,8 +402,8 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
slb_vsid &= ~SLB_VSID_KP;
slb_esid |= slb_index;
get_paca()->kvm_slb[slb_index].esid = slb_esid;
get_paca()->kvm_slb[slb_index].vsid = slb_vsid;
to_svcpu(vcpu)->slb[slb_index].esid = slb_esid;
to_svcpu(vcpu)->slb[slb_index].vsid = slb_vsid;
dprintk_slb("slbmte %#llx, %#llx\n", slb_vsid, slb_esid);
......@@ -398,11 +412,29 @@ int kvmppc_mmu_map_segment(struct kvm_vcpu *vcpu, ulong eaddr)
void kvmppc_mmu_flush_segments(struct kvm_vcpu *vcpu)
{
get_paca()->kvm_slb_max = 1;
get_paca()->kvm_slb[0].esid = 0;
to_svcpu(vcpu)->slb_max = 1;
to_svcpu(vcpu)->slb[0].esid = 0;
}
void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
{
kvmppc_mmu_pte_flush(vcpu, 0, 0);
__destroy_context(to_book3s(vcpu)->context_id);
}
int kvmppc_mmu_init(struct kvm_vcpu *vcpu)
{
struct kvmppc_vcpu_book3s *vcpu3s = to_book3s(vcpu);
int err;
err = __init_new_context();
if (err < 0)
return -1;
vcpu3s->context_id = err;
vcpu3s->vsid_max = ((vcpu3s->context_id + 1) << USER_ESID_BITS) - 1;
vcpu3s->vsid_first = vcpu3s->context_id << USER_ESID_BITS;
vcpu3s->vsid_next = vcpu3s->vsid_first;
return 0;
}
......@@ -44,8 +44,7 @@ slb_exit_skip_ ## num:
* *
*****************************************************************************/
.global kvmppc_handler_trampoline_enter
kvmppc_handler_trampoline_enter:
.macro LOAD_GUEST_SEGMENTS
/* Required state:
*
......@@ -53,20 +52,14 @@ kvmppc_handler_trampoline_enter:
* R13 = PACA
* R1 = host R1
* R2 = host R2
* R9 = guest IP
* R10 = guest MSR
* all other GPRS = free
* PACA[KVM_CR] = guest CR
* PACA[KVM_XER] = guest XER
* R3 = shadow vcpu
* all other volatile GPRS = free
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*/
mtsrr0 r9
mtsrr1 r10
/* Activate guest mode, so faults get handled by KVM */
li r11, KVM_GUEST_MODE_GUEST
stb r11, PACA_KVM_IN_GUEST(r13)
/* Remove LPAR shadow entries */
#if SLB_NUM_BOLTED == 3
......@@ -101,14 +94,14 @@ kvmppc_handler_trampoline_enter:
/* Fill SLB with our shadow */
lbz r12, PACA_KVM_SLB_MAX(r13)
lbz r12, SVCPU_SLB_MAX(r3)
mulli r12, r12, 16
addi r12, r12, PACA_KVM_SLB
add r12, r12, r13
addi r12, r12, SVCPU_SLB
add r12, r12, r3
/* for (r11 = kvm_slb; r11 < kvm_slb + kvm_slb_size; r11+=slb_entry) */
li r11, PACA_KVM_SLB
add r11, r11, r13
li r11, SVCPU_SLB
add r11, r11, r3
slb_loop_enter:
......@@ -127,34 +120,7 @@ slb_loop_enter_skip:
slb_do_enter:
/* Enter guest */
ld r0, (PACA_KVM_R0)(r13)
ld r1, (PACA_KVM_R1)(r13)
ld r2, (PACA_KVM_R2)(r13)
ld r3, (PACA_KVM_R3)(r13)
ld r4, (PACA_KVM_R4)(r13)
ld r5, (PACA_KVM_R5)(r13)
ld r6, (PACA_KVM_R6)(r13)
ld r7, (PACA_KVM_R7)(r13)
ld r8, (PACA_KVM_R8)(r13)
ld r9, (PACA_KVM_R9)(r13)
ld r10, (PACA_KVM_R10)(r13)
ld r12, (PACA_KVM_R12)(r13)
lwz r11, (PACA_KVM_CR)(r13)
mtcr r11
ld r11, (PACA_KVM_XER)(r13)
mtxer r11
ld r11, (PACA_KVM_R11)(r13)
ld r13, (PACA_KVM_R13)(r13)
RFI
kvmppc_handler_trampoline_enter_end:
.endm
/******************************************************************************
* *
......@@ -162,99 +128,22 @@ kvmppc_handler_trampoline_enter_end:
* *
*****************************************************************************/
.global kvmppc_handler_trampoline_exit
kvmppc_handler_trampoline_exit:
.macro LOAD_HOST_SEGMENTS
/* Register usage at this point:
*
* SPRG_SCRATCH0 = guest R13
* R12 = exit handler id
* R13 = PACA
* PACA.KVM.SCRATCH0 = guest R12
* PACA.KVM.SCRATCH1 = guest CR
* R1 = host R1
* R2 = host R2
* R12 = exit handler id
* R13 = shadow vcpu - SHADOW_VCPU_OFF [=PACA on PPC64]
* SVCPU.* = guest *
* SVCPU[CR] = guest CR
* SVCPU[XER] = guest XER
* SVCPU[CTR] = guest CTR
* SVCPU[LR] = guest LR
*
*/
/* Save registers */
std r0, PACA_KVM_R0(r13)
std r1, PACA_KVM_R1(r13)
std r2, PACA_KVM_R2(r13)
std r3, PACA_KVM_R3(r13)
std r4, PACA_KVM_R4(r13)
std r5, PACA_KVM_R5(r13)
std r6, PACA_KVM_R6(r13)
std r7, PACA_KVM_R7(r13)
std r8, PACA_KVM_R8(r13)
std r9, PACA_KVM_R9(r13)
std r10, PACA_KVM_R10(r13)
std r11, PACA_KVM_R11(r13)
/* Restore R1/R2 so we can handle faults */
ld r1, PACA_KVM_HOST_R1(r13)
ld r2, PACA_KVM_HOST_R2(r13)
/* Save guest PC and MSR in GPRs */
mfsrr0 r3
mfsrr1 r4
/* Get scratch'ed off registers */
mfspr r9, SPRN_SPRG_SCRATCH0
std r9, PACA_KVM_R13(r13)
ld r8, PACA_KVM_SCRATCH0(r13)
std r8, PACA_KVM_R12(r13)
lwz r7, PACA_KVM_SCRATCH1(r13)
stw r7, PACA_KVM_CR(r13)
/* Save more register state */
mfxer r6
stw r6, PACA_KVM_XER(r13)
mfdar r5
mfdsisr r6
/*
 * To easily fetch the instruction we took the #vmexit at,
 * we exploit the fact that the virtual layout is still the
 * same here, so we can just ld from the guest's PC address.
 */
/* We only load the last instruction when it's safe */
cmpwi r12, BOOK3S_INTERRUPT_DATA_STORAGE
beq ld_last_inst
cmpwi r12, BOOK3S_INTERRUPT_PROGRAM
beq ld_last_inst
b no_ld_last_inst
ld_last_inst:
/* Save off the guest instruction we're at */
/* Set guest mode to 'jump over instruction' so if lwz faults
* we'll just continue at the next IP. */
li r9, KVM_GUEST_MODE_SKIP
stb r9, PACA_KVM_IN_GUEST(r13)
/* 1) enable paging for data */
mfmsr r9
ori r11, r9, MSR_DR /* Enable paging for data */
mtmsr r11
/* 2) fetch the instruction */
li r0, KVM_INST_FETCH_FAILED /* In case lwz faults */
lwz r0, 0(r3)
/* 3) disable paging again */
mtmsr r9
no_ld_last_inst:
/* Unset guest mode */
li r9, KVM_GUEST_MODE_NONE
stb r9, PACA_KVM_IN_GUEST(r13)
/* Restore bolted entries from the shadow and fix it along the way */
/* We don't store anything in entry 0, so we don't need to take care of it */
......@@ -275,28 +164,4 @@ no_ld_last_inst:
slb_do_exit:
/* Register usage at this point:
*
* R0 = guest last inst
* R1 = host R1
* R2 = host R2
* R3 = guest PC
* R4 = guest MSR
* R5 = guest DAR
* R6 = guest DSISR
* R12 = exit handler id
* R13 = PACA
* PACA.KVM.* = guest *
*
*/
/* RFI into the highmem handler */
mfmsr r7
ori r7, r7, MSR_IR|MSR_DR|MSR_RI /* Enable paging */
mtsrr1 r7
ld r8, PACA_KVM_VMHANDLER(r13) /* Highmem handler address */
mtsrr0 r8
RFI
kvmppc_handler_trampoline_exit_end:
.endm
......@@ -28,13 +28,16 @@
#define OP_31_XOP_MFMSR 83
#define OP_31_XOP_MTMSR 146
#define OP_31_XOP_MTMSRD 178
#define OP_31_XOP_MTSR 210
#define OP_31_XOP_MTSRIN 242
#define OP_31_XOP_TLBIEL 274
#define OP_31_XOP_TLBIE 306
#define OP_31_XOP_SLBMTE 402
#define OP_31_XOP_SLBIE 434
#define OP_31_XOP_SLBIA 498
#define OP_31_XOP_MFSR 595
#define OP_31_XOP_MFSRIN 659
#define OP_31_XOP_DCBA 758
#define OP_31_XOP_SLBMFEV 851
#define OP_31_XOP_EIOIO 854
#define OP_31_XOP_SLBMFEE 915
......@@ -42,6 +45,24 @@
/* DCBZ is actually 1014, but we patch it to 1010 so we get a trap */
#define OP_31_XOP_DCBZ 1010
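The trap trick in the comment above works because the xop field lives in instruction bits 1..10, so xop 1014 (dcbz) and 1010 differ in a single bit. A self-contained sketch of such a patch pass (the actual patching is done elsewhere in this series; the masks are my reading of the encoding):

static void patch_dcbz_sketch(u32 *page, int words)
{
	int i;

	for (i = 0; i < words; i++)
		if ((page[i] & 0xff0007ff) == ((31 << 26) | (1014 << 1)))
			page[i] &= ~0x8;	/* xop 1014 -> 1010: now traps */
}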
#define OP_LFS 48
#define OP_LFD 50
#define OP_STFS 52
#define OP_STFD 54
#define SPRN_GQR0 912
#define SPRN_GQR1 913
#define SPRN_GQR2 914
#define SPRN_GQR3 915
#define SPRN_GQR4 916
#define SPRN_GQR5 917
#define SPRN_GQR6 918
#define SPRN_GQR7 919
/* Book3S_32 defines mfsrin(v) - but that messes up our abstract
* function pointers, so let's just disable the define. */
#undef mfsrin
int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int inst, int *advance)
{
......@@ -52,7 +73,7 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
switch (get_xop(inst)) {
case OP_19_XOP_RFID:
case OP_19_XOP_RFI:
vcpu->arch.pc = vcpu->arch.srr0;
kvmppc_set_pc(vcpu, vcpu->arch.srr0);
kvmppc_set_msr(vcpu, vcpu->arch.srr1);
*advance = 0;
break;
......@@ -80,6 +101,18 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
case OP_31_XOP_MTMSR:
kvmppc_set_msr(vcpu, kvmppc_get_gpr(vcpu, get_rs(inst)));
break;
case OP_31_XOP_MFSR:
{
int srnum;
srnum = kvmppc_get_field(inst, 12 + 32, 15 + 32);
if (vcpu->arch.mmu.mfsrin) {
u32 sr;
sr = vcpu->arch.mmu.mfsrin(vcpu, srnum);
kvmppc_set_gpr(vcpu, get_rt(inst), sr);
}
break;
}
case OP_31_XOP_MFSRIN:
{
int srnum;
......@@ -92,6 +125,11 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
}
break;
}
case OP_31_XOP_MTSR:
vcpu->arch.mmu.mtsrin(vcpu,
(inst >> 16) & 0xf,
kvmppc_get_gpr(vcpu, get_rs(inst)));
break;
case OP_31_XOP_MTSRIN:
vcpu->arch.mmu.mtsrin(vcpu,
(kvmppc_get_gpr(vcpu, get_rb(inst)) >> 28) & 0xf,
......@@ -150,12 +188,17 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
kvmppc_set_gpr(vcpu, get_rt(inst), t);
}
break;
case OP_31_XOP_DCBA:
/* Gets treated as NOP */
break;
case OP_31_XOP_DCBZ:
{
ulong rb = kvmppc_get_gpr(vcpu, get_rb(inst));
ulong ra = 0;
ulong addr;
ulong addr, vaddr;
u32 zeros[8] = { 0, 0, 0, 0, 0, 0, 0, 0 };
u32 dsisr;
int r;
if (get_ra(inst))
ra = kvmppc_get_gpr(vcpu, get_ra(inst));
......@@ -163,15 +206,25 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
addr = (ra + rb) & ~31ULL;
if (!(vcpu->arch.msr & MSR_SF))
addr &= 0xffffffff;
vaddr = addr;
r = kvmppc_st(vcpu, &addr, 32, zeros, true);
if ((r == -ENOENT) || (r == -EPERM)) {
*advance = 0;
vcpu->arch.dear = vaddr;
to_svcpu(vcpu)->fault_dar = vaddr;
dsisr = DSISR_ISSTORE;
if (r == -ENOENT)
dsisr |= DSISR_NOHPTE;
else if (r == -EPERM)
dsisr |= DSISR_PROTFAULT;
to_book3s(vcpu)->dsisr = dsisr;
to_svcpu(vcpu)->fault_dsisr = dsisr;
if (kvmppc_st(vcpu, addr, 32, zeros)) {
vcpu->arch.dear = addr;
vcpu->arch.fault_dear = addr;
to_book3s(vcpu)->dsisr = DSISR_PROTFAULT |
DSISR_ISSTORE;
kvmppc_book3s_queue_irqprio(vcpu,
BOOK3S_INTERRUPT_DATA_STORAGE);
kvmppc_mmu_pte_flush(vcpu, addr, ~0xFFFULL);
}
break;
......@@ -184,6 +237,9 @@ int kvmppc_core_emulate_op(struct kvm_run *run, struct kvm_vcpu *vcpu,
emulated = EMULATE_FAIL;
}
if (emulated == EMULATE_FAIL)
emulated = kvmppc_emulate_paired_single(run, vcpu);
return emulated;
}
......@@ -207,6 +263,34 @@ void kvmppc_set_bat(struct kvm_vcpu *vcpu, struct kvmppc_bat *bat, bool upper,
}
}
static u32 kvmppc_read_bat(struct kvm_vcpu *vcpu, int sprn)
{
struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
struct kvmppc_bat *bat;
switch (sprn) {
case SPRN_IBAT0U ... SPRN_IBAT3L:
bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
break;
case SPRN_IBAT4U ... SPRN_IBAT7L:
bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
break;
case SPRN_DBAT0U ... SPRN_DBAT3L:
bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
break;
case SPRN_DBAT4U ... SPRN_DBAT7L:
bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
break;
default:
BUG();
}
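	/* The BAT SPRs come in (upper, lower) pairs with consecutive
	 * numbers, so "/ 2" above picks the pair and the low bit of
	 * sprn picks which cached 32-bit half of ->raw to return. */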
if (sprn % 2)
return bat->raw >> 32;
else
return bat->raw;
}
static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
{
struct kvmppc_vcpu_book3s *vcpu_book3s = to_book3s(vcpu);
......@@ -217,13 +301,13 @@ static void kvmppc_write_bat(struct kvm_vcpu *vcpu, int sprn, u32 val)
bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT0U) / 2];
break;
case SPRN_IBAT4U ... SPRN_IBAT7L:
bat = &vcpu_book3s->ibat[(sprn - SPRN_IBAT4U) / 2];
bat = &vcpu_book3s->ibat[4 + ((sprn - SPRN_IBAT4U) / 2)];
break;
case SPRN_DBAT0U ... SPRN_DBAT3L:
bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT0U) / 2];
break;
case SPRN_DBAT4U ... SPRN_DBAT7L:
bat = &vcpu_book3s->dbat[(sprn - SPRN_DBAT4U) / 2];
bat = &vcpu_book3s->dbat[4 + ((sprn - SPRN_DBAT4U) / 2)];
break;
default:
BUG();
......@@ -258,6 +342,7 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
/* BAT writes happen so rarely that we're ok to flush
* everything here */
kvmppc_mmu_pte_flush(vcpu, 0, 0);
kvmppc_mmu_flush_segments(vcpu);
break;
case SPRN_HID0:
to_book3s(vcpu)->hid[0] = spr_val;
......@@ -268,7 +353,32 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
case SPRN_HID2:
to_book3s(vcpu)->hid[2] = spr_val;
break;
case SPRN_HID2_GEKKO:
to_book3s(vcpu)->hid[2] = spr_val;
/* HID2.PSE controls paired single on gekko */
switch (vcpu->arch.pvr) {
case 0x00080200: /* lonestar 2.0 */
case 0x00088202: /* lonestar 2.2 */
case 0x70000100: /* gekko 1.0 */
case 0x00080100: /* gekko 2.0 */
case 0x00083203: /* gekko 2.3a */
case 0x00083213: /* gekko 2.3b */
case 0x00083204: /* gekko 2.4 */
case 0x00083214: /* gekko 2.4e (8SE) - retail HW2 */
case 0x00087200: /* broadway */
if (vcpu->arch.hflags & BOOK3S_HFLAG_NATIVE_PS) {
/* Native paired singles */
} else if (spr_val & (1 << 29)) { /* HID2.PSE */
vcpu->arch.hflags |= BOOK3S_HFLAG_PAIRED_SINGLE;
kvmppc_giveup_ext(vcpu, MSR_FP);
} else {
vcpu->arch.hflags &= ~BOOK3S_HFLAG_PAIRED_SINGLE;
}
break;
}
break;
case SPRN_HID4:
case SPRN_HID4_GEKKO:
to_book3s(vcpu)->hid[4] = spr_val;
break;
case SPRN_HID5:
......@@ -278,12 +388,30 @@ int kvmppc_core_emulate_mtspr(struct kvm_vcpu *vcpu, int sprn, int rs)
(mfmsr() & MSR_HV))
vcpu->arch.hflags |= BOOK3S_HFLAG_DCBZ32;
break;
case SPRN_GQR0:
case SPRN_GQR1:
case SPRN_GQR2:
case SPRN_GQR3:
case SPRN_GQR4:
case SPRN_GQR5:
case SPRN_GQR6:
case SPRN_GQR7:
to_book3s(vcpu)->gqr[sprn - SPRN_GQR0] = spr_val;
break;
case SPRN_ICTC:
case SPRN_THRM1:
case SPRN_THRM2:
case SPRN_THRM3:
case SPRN_CTRLF:
case SPRN_CTRLT:
case SPRN_L2CR:
case SPRN_MMCR0_GEKKO:
case SPRN_MMCR1_GEKKO:
case SPRN_PMC1_GEKKO:
case SPRN_PMC2_GEKKO:
case SPRN_PMC3_GEKKO:
case SPRN_PMC4_GEKKO:
case SPRN_WPAR_GEKKO:
break;
default:
printk(KERN_INFO "KVM: invalid SPR write: %d\n", sprn);
......@@ -301,6 +429,12 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
int emulated = EMULATE_DONE;
switch (sprn) {
case SPRN_IBAT0U ... SPRN_IBAT3L:
case SPRN_IBAT4U ... SPRN_IBAT7L:
case SPRN_DBAT0U ... SPRN_DBAT3L:
case SPRN_DBAT4U ... SPRN_DBAT7L:
kvmppc_set_gpr(vcpu, rt, kvmppc_read_bat(vcpu, sprn));
break;
case SPRN_SDR1:
kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->sdr1);
break;
......@@ -320,19 +454,40 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[1]);
break;
case SPRN_HID2:
case SPRN_HID2_GEKKO:
kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[2]);
break;
case SPRN_HID4:
case SPRN_HID4_GEKKO:
kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[4]);
break;
case SPRN_HID5:
kvmppc_set_gpr(vcpu, rt, to_book3s(vcpu)->hid[5]);
break;
case SPRN_GQR0:
case SPRN_GQR1:
case SPRN_GQR2:
case SPRN_GQR3:
case SPRN_GQR4:
case SPRN_GQR5:
case SPRN_GQR6:
case SPRN_GQR7:
kvmppc_set_gpr(vcpu, rt,
to_book3s(vcpu)->gqr[sprn - SPRN_GQR0]);
break;
case SPRN_THRM1:
case SPRN_THRM2:
case SPRN_THRM3:
case SPRN_CTRLF:
case SPRN_CTRLT:
case SPRN_L2CR:
case SPRN_MMCR0_GEKKO:
case SPRN_MMCR1_GEKKO:
case SPRN_PMC1_GEKKO:
case SPRN_PMC2_GEKKO:
case SPRN_PMC3_GEKKO:
case SPRN_PMC4_GEKKO:
case SPRN_WPAR_GEKKO:
kvmppc_set_gpr(vcpu, rt, 0);
break;
default:
......@@ -346,3 +501,73 @@ int kvmppc_core_emulate_mfspr(struct kvm_vcpu *vcpu, int sprn, int rt)
return emulated;
}
u32 kvmppc_alignment_dsisr(struct kvm_vcpu *vcpu, unsigned int inst)
{
u32 dsisr = 0;
/*
* This is what the spec says about DSISR bits (not mentioned = 0):
*
* 12:13 [DS] Set to bits 30:31
* 15:16 [X] Set to bits 29:30
* 17 [X] Set to bit 25
* [D/DS] Set to bit 5
* 18:21 [X] Set to bits 21:24
* [D/DS] Set to bits 1:4
* 22:26 Set to bits 6:10 (RT/RS/FRT/FRS)
* 27:31 Set to bits 11:15 (RA)
*/
switch (get_op(inst)) {
/* D-form */
case OP_LFS:
case OP_LFD:
case OP_STFD:
case OP_STFS:
dsisr |= (inst >> 12) & 0x4000; /* bit 17 */
dsisr |= (inst >> 17) & 0x3c00; /* bits 18:21 */
break;
/* X-form */
case 31:
dsisr |= (inst << 14) & 0x18000; /* bits 15:16 */
dsisr |= (inst << 8) & 0x04000; /* bit 17 */
dsisr |= (inst << 3) & 0x03c00; /* bits 18:21 */
break;
default:
printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
break;
}
dsisr |= (inst >> 16) & 0x03ff; /* bits 22:31 */
return dsisr;
}
ulong kvmppc_alignment_dar(struct kvm_vcpu *vcpu, unsigned int inst)
{
ulong dar = 0;
ulong ra;
switch (get_op(inst)) {
case OP_LFS:
case OP_LFD:
case OP_STFD:
case OP_STFS:
ra = get_ra(inst);
if (ra)
dar = kvmppc_get_gpr(vcpu, ra);
dar += (s32)((s16)inst);
break;
case 31:
ra = get_ra(inst);
if (ra)
dar = kvmppc_get_gpr(vcpu, ra);
dar += kvmppc_get_gpr(vcpu, get_rb(inst));
break;
default:
printk(KERN_INFO "KVM: Unaligned instruction 0x%x\n", inst);
break;
}
return dar;
}
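A hedged sketch of how an exit handler would combine the two helpers above when reflecting an alignment fault into the guest; the queueing call and field names outside this hunk are assumed from the surrounding series:

	vcpu->arch.dear = kvmppc_alignment_dar(vcpu, inst);
	to_book3s(vcpu)->dsisr = kvmppc_alignment_dsisr(vcpu, inst);
	kvmppc_book3s_queue_irqprio(vcpu, BOOK3S_INTERRUPT_ALIGNMENT);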
......@@ -24,36 +24,56 @@
#include <asm/asm-offsets.h>
#include <asm/exception-64s.h>
#define KVMPPC_HANDLE_EXIT .kvmppc_handle_exit
#define ULONG_SIZE 8
#define VCPU_GPR(n) (VCPU_GPRS + (n * ULONG_SIZE))
#if defined(CONFIG_PPC_BOOK3S_64)
.macro DISABLE_INTERRUPTS
mfmsr r0
rldicl r0,r0,48,1
rotldi r0,r0,16
mtmsrd r0,1
.endm
#define ULONG_SIZE 8
#define FUNC(name) GLUE(.,name)
#define GET_SHADOW_VCPU(reg) \
addi reg, r13, PACA_KVM_SVCPU
#define DISABLE_INTERRUPTS \
mfmsr r0; \
rldicl r0,r0,48,1; \
rotldi r0,r0,16; \
mtmsrd r0,1; \
#elif defined(CONFIG_PPC_BOOK3S_32)
#define ULONG_SIZE 4
#define FUNC(name) name
#define GET_SHADOW_VCPU(reg) \
lwz reg, (THREAD + THREAD_KVM_SVCPU)(r2)
#define DISABLE_INTERRUPTS \
mfmsr r0; \
rlwinm r0,r0,0,17,15; \
mtmsr r0; \
#endif /* CONFIG_PPC_BOOK3S_XX */
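The PPC_LL/PPC_STL/PPC_STLU spellings used below pick the natural word size per platform; this is paraphrased from asm/asm-compat.h, so treat the exact definitions as assumed:

#ifdef __powerpc64__
#define PPC_LL		ld	/* 8-byte load */
#define PPC_STL		std
#define PPC_STLU	stdu
#else
#define PPC_LL		lwz	/* 4-byte load */
#define PPC_STL		stw
#define PPC_STLU	stwu
#endif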
#define VCPU_GPR(n) (VCPU_GPRS + (n * ULONG_SIZE))
#define VCPU_LOAD_NVGPRS(vcpu) \
ld r14, VCPU_GPR(r14)(vcpu); \
ld r15, VCPU_GPR(r15)(vcpu); \
ld r16, VCPU_GPR(r16)(vcpu); \
ld r17, VCPU_GPR(r17)(vcpu); \
ld r18, VCPU_GPR(r18)(vcpu); \
ld r19, VCPU_GPR(r19)(vcpu); \
ld r20, VCPU_GPR(r20)(vcpu); \
ld r21, VCPU_GPR(r21)(vcpu); \
ld r22, VCPU_GPR(r22)(vcpu); \
ld r23, VCPU_GPR(r23)(vcpu); \
ld r24, VCPU_GPR(r24)(vcpu); \
ld r25, VCPU_GPR(r25)(vcpu); \
ld r26, VCPU_GPR(r26)(vcpu); \
ld r27, VCPU_GPR(r27)(vcpu); \
ld r28, VCPU_GPR(r28)(vcpu); \
ld r29, VCPU_GPR(r29)(vcpu); \
ld r30, VCPU_GPR(r30)(vcpu); \
ld r31, VCPU_GPR(r31)(vcpu); \
PPC_LL r14, VCPU_GPR(r14)(vcpu); \
PPC_LL r15, VCPU_GPR(r15)(vcpu); \
PPC_LL r16, VCPU_GPR(r16)(vcpu); \
PPC_LL r17, VCPU_GPR(r17)(vcpu); \
PPC_LL r18, VCPU_GPR(r18)(vcpu); \
PPC_LL r19, VCPU_GPR(r19)(vcpu); \
PPC_LL r20, VCPU_GPR(r20)(vcpu); \
PPC_LL r21, VCPU_GPR(r21)(vcpu); \
PPC_LL r22, VCPU_GPR(r22)(vcpu); \
PPC_LL r23, VCPU_GPR(r23)(vcpu); \
PPC_LL r24, VCPU_GPR(r24)(vcpu); \
PPC_LL r25, VCPU_GPR(r25)(vcpu); \
PPC_LL r26, VCPU_GPR(r26)(vcpu); \
PPC_LL r27, VCPU_GPR(r27)(vcpu); \
PPC_LL r28, VCPU_GPR(r28)(vcpu); \
PPC_LL r29, VCPU_GPR(r29)(vcpu); \
PPC_LL r30, VCPU_GPR(r30)(vcpu); \
PPC_LL r31, VCPU_GPR(r31)(vcpu); \
/*****************************************************************************
* *
......@@ -69,11 +89,11 @@ _GLOBAL(__kvmppc_vcpu_entry)
kvm_start_entry:
/* Write correct stack frame */
mflr r0
std r0,16(r1)
mflr r0
PPC_STL r0,PPC_LR_STKOFF(r1)
/* Save host state to the stack */
stdu r1, -SWITCH_FRAME_SIZE(r1)
PPC_STLU r1, -SWITCH_FRAME_SIZE(r1)
/* Save r3 (kvm_run) and r4 (vcpu) */
SAVE_2GPRS(3, r1)
......@@ -82,33 +102,28 @@ kvm_start_entry:
SAVE_NVGPRS(r1)
/* Save LR */
std r0, _LINK(r1)
PPC_STL r0, _LINK(r1)
/* Load non-volatile guest state from the vcpu */
VCPU_LOAD_NVGPRS(r4)
GET_SHADOW_VCPU(r5)
/* Save R1/R2 in the PACA */
std r1, PACA_KVM_HOST_R1(r13)
std r2, PACA_KVM_HOST_R2(r13)
PPC_STL r1, SVCPU_HOST_R1(r5)
PPC_STL r2, SVCPU_HOST_R2(r5)
/* XXX swap in/out on load? */
ld r3, VCPU_HIGHMEM_HANDLER(r4)
std r3, PACA_KVM_VMHANDLER(r13)
PPC_LL r3, VCPU_HIGHMEM_HANDLER(r4)
PPC_STL r3, SVCPU_VMHANDLER(r5)
kvm_start_lightweight:
ld r9, VCPU_PC(r4) /* r9 = vcpu->arch.pc */
ld r10, VCPU_SHADOW_MSR(r4) /* r10 = vcpu->arch.shadow_msr */
/* Load some guest state in the respective registers */
ld r5, VCPU_CTR(r4) /* r5 = vcpu->arch.ctr */
/* will be swapped in by rmcall */
ld r3, VCPU_LR(r4) /* r3 = vcpu->arch.lr */
mtlr r3 /* LR = r3 */
PPC_LL r10, VCPU_SHADOW_MSR(r4) /* r10 = vcpu->arch.shadow_msr */
DISABLE_INTERRUPTS
#ifdef CONFIG_PPC_BOOK3S_64
/* Some guests may need to have dcbz set to 32 byte length.
*
* Usually we ensure that by patching the guest's instructions
......@@ -118,7 +133,7 @@ kvm_start_lightweight:
* because that's a lot faster.
*/
ld r3, VCPU_HFLAGS(r4)
PPC_LL r3, VCPU_HFLAGS(r4)
rldicl. r3, r3, 0, 63 /* CR = ((r3 & 1) == 0) */
beq no_dcbz32_on
......@@ -128,13 +143,15 @@ kvm_start_lightweight:
no_dcbz32_on:
ld r6, VCPU_RMCALL(r4)
#endif /* CONFIG_PPC_BOOK3S_64 */
PPC_LL r6, VCPU_RMCALL(r4)
mtctr r6
ld r3, VCPU_TRAMPOLINE_ENTER(r4)
PPC_LL r3, VCPU_TRAMPOLINE_ENTER(r4)
LOAD_REG_IMMEDIATE(r4, MSR_KERNEL & ~(MSR_IR | MSR_DR))
/* Jump to SLB patching handlder and into our guest */
/* Jump to segment patching handler and into our guest */
bctr
/*
......@@ -149,31 +166,20 @@ kvmppc_handler_highmem:
/*
* Register usage at this point:
*
* R0 = guest last inst
* R1 = host R1
* R2 = host R2
* R3 = guest PC
* R4 = guest MSR
* R5 = guest DAR
* R6 = guest DSISR
* R13 = PACA
* PACA.KVM.* = guest *
* R1 = host R1
* R2 = host R2
* R12 = exit handler id
* R13 = PACA
* SVCPU.* = guest *
*
*/
/* R7 = vcpu */
ld r7, GPR4(r1)
PPC_LL r7, GPR4(r1)
/* Now save the guest state */
#ifdef CONFIG_PPC_BOOK3S_64
stw r0, VCPU_LAST_INST(r7)
std r3, VCPU_PC(r7)
std r4, VCPU_SHADOW_SRR1(r7)
std r5, VCPU_FAULT_DEAR(r7)
std r6, VCPU_FAULT_DSISR(r7)
ld r5, VCPU_HFLAGS(r7)
PPC_LL r5, VCPU_HFLAGS(r7)
rldicl. r5, r5, 0, 63 /* CR = ((r5 & 1) == 0) */
beq no_dcbz32_off
......@@ -184,35 +190,29 @@ kvmppc_handler_highmem:
no_dcbz32_off:
std r14, VCPU_GPR(r14)(r7)
std r15, VCPU_GPR(r15)(r7)
std r16, VCPU_GPR(r16)(r7)
std r17, VCPU_GPR(r17)(r7)
std r18, VCPU_GPR(r18)(r7)
std r19, VCPU_GPR(r19)(r7)
std r20, VCPU_GPR(r20)(r7)
std r21, VCPU_GPR(r21)(r7)
std r22, VCPU_GPR(r22)(r7)
std r23, VCPU_GPR(r23)(r7)
std r24, VCPU_GPR(r24)(r7)
std r25, VCPU_GPR(r25)(r7)
std r26, VCPU_GPR(r26)(r7)
std r27, VCPU_GPR(r27)(r7)
std r28, VCPU_GPR(r28)(r7)
std r29, VCPU_GPR(r29)(r7)
std r30, VCPU_GPR(r30)(r7)
std r31, VCPU_GPR(r31)(r7)
/* Save guest CTR */
mfctr r5
std r5, VCPU_CTR(r7)
/* Save guest LR */
mflr r5
std r5, VCPU_LR(r7)
#endif /* CONFIG_PPC_BOOK3S_64 */
PPC_STL r14, VCPU_GPR(r14)(r7)
PPC_STL r15, VCPU_GPR(r15)(r7)
PPC_STL r16, VCPU_GPR(r16)(r7)
PPC_STL r17, VCPU_GPR(r17)(r7)
PPC_STL r18, VCPU_GPR(r18)(r7)
PPC_STL r19, VCPU_GPR(r19)(r7)
PPC_STL r20, VCPU_GPR(r20)(r7)
PPC_STL r21, VCPU_GPR(r21)(r7)
PPC_STL r22, VCPU_GPR(r22)(r7)
PPC_STL r23, VCPU_GPR(r23)(r7)
PPC_STL r24, VCPU_GPR(r24)(r7)
PPC_STL r25, VCPU_GPR(r25)(r7)
PPC_STL r26, VCPU_GPR(r26)(r7)
PPC_STL r27, VCPU_GPR(r27)(r7)
PPC_STL r28, VCPU_GPR(r28)(r7)
PPC_STL r29, VCPU_GPR(r29)(r7)
PPC_STL r30, VCPU_GPR(r30)(r7)
PPC_STL r31, VCPU_GPR(r31)(r7)
/* Restore host msr -> SRR1 */
ld r6, VCPU_HOST_MSR(r7)
PPC_LL r6, VCPU_HOST_MSR(r7)
/*
* For some interrupts, we need to call the real Linux
......@@ -228,9 +228,12 @@ no_dcbz32_off:
beq call_linux_handler
cmpwi r12, BOOK3S_INTERRUPT_DECREMENTER
beq call_linux_handler
cmpwi r12, BOOK3S_INTERRUPT_PERFMON
beq call_linux_handler
/* Back to EE=1 */
mtmsr r6
sync
b kvm_return_point
call_linux_handler:
......@@ -249,14 +252,14 @@ call_linux_handler:
*/
/* Restore host IP -> SRR0 */
ld r5, VCPU_HOST_RETIP(r7)
PPC_LL r5, VCPU_HOST_RETIP(r7)
/* XXX Better move to a safe function?
* What if we get an HTAB flush in between mtsrr0 and mtsrr1? */
mtlr r12
ld r4, VCPU_TRAMPOLINE_LOWMEM(r7)
PPC_LL r4, VCPU_TRAMPOLINE_LOWMEM(r7)
mtsrr0 r4
LOAD_REG_IMMEDIATE(r3, MSR_KERNEL & ~(MSR_IR | MSR_DR))
mtsrr1 r3
......@@ -274,7 +277,7 @@ kvm_return_point:
/* Restore r3 (kvm_run) and r4 (vcpu) */
REST_2GPRS(3, r1)
bl KVMPPC_HANDLE_EXIT
bl FUNC(kvmppc_handle_exit)
/* If RESUME_GUEST, get back in the loop */
cmpwi r3, RESUME_GUEST
......@@ -285,7 +288,7 @@ kvm_return_point:
kvm_exit_loop:
ld r4, _LINK(r1)
PPC_LL r4, _LINK(r1)
mtlr r4
/* Restore non-volatile host registers (r14 - r31) */
......@@ -296,8 +299,8 @@ kvm_exit_loop:
kvm_loop_heavyweight:
ld r4, _LINK(r1)
std r4, (16 + SWITCH_FRAME_SIZE)(r1)
PPC_LL r4, _LINK(r1)
PPC_STL r4, (PPC_LR_STKOFF + SWITCH_FRAME_SIZE)(r1)
/* Load vcpu and cpu_run */
REST_2GPRS(3, r1)
......@@ -315,4 +318,3 @@ kvm_loop_lightweight:
/* Jump back into the beginning of this function */
b kvm_start_lightweight
(This diff has been collapsed.)
......@@ -22,7 +22,10 @@
#include <asm/reg.h>
#include <asm/page.h>
#include <asm/asm-offsets.h>
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/exception-64s.h>
#endif
/*****************************************************************************
* *
......@@ -30,6 +33,39 @@
* *
****************************************************************************/
#if defined(CONFIG_PPC_BOOK3S_64)
#define LOAD_SHADOW_VCPU(reg) \
mfspr reg, SPRN_SPRG_PACA
#define SHADOW_VCPU_OFF PACA_KVM_SVCPU
#define MSR_NOIRQ MSR_KERNEL & ~(MSR_IR | MSR_DR)
#define FUNC(name) GLUE(.,name)
#elif defined(CONFIG_PPC_BOOK3S_32)
#define LOAD_SHADOW_VCPU(reg) \
mfspr reg, SPRN_SPRG_THREAD; \
lwz reg, THREAD_KVM_SVCPU(reg); \
/* PPC32 can have a NULL pointer - let's check for that */ \
mtspr SPRN_SPRG_SCRATCH1, r12; /* Save r12 */ \
mfcr r12; \
cmpwi reg, 0; \
bne 1f; \
mfspr reg, SPRN_SPRG_SCRATCH0; \
mtcr r12; \
mfspr r12, SPRN_SPRG_SCRATCH1; \
b kvmppc_resume_\intno; \
1:; \
mtcr r12; \
mfspr r12, SPRN_SPRG_SCRATCH1; \
tophys(reg, reg)
#define SHADOW_VCPU_OFF 0
#define MSR_NOIRQ MSR_KERNEL
#define FUNC(name) name
#endif
.macro INTERRUPT_TRAMPOLINE intno
......@@ -42,19 +78,19 @@ kvmppc_trampoline_\intno:
* First thing to do is to find out if we're coming
* from a KVM guest or a Linux process.
*
* To distinguish, we check a magic byte in the PACA
* To distinguish, we check a magic byte in the PACA/current
*/
mfspr r13, SPRN_SPRG_PACA /* r13 = PACA */
std r12, PACA_KVM_SCRATCH0(r13)
LOAD_SHADOW_VCPU(r13)
PPC_STL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfcr r12
stw r12, PACA_KVM_SCRATCH1(r13)
lbz r12, PACA_KVM_IN_GUEST(r13)
stw r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
lbz r12, (SHADOW_VCPU_OFF + SVCPU_IN_GUEST)(r13)
cmpwi r12, KVM_GUEST_MODE_NONE
bne ..kvmppc_handler_hasmagic_\intno
/* No KVM guest? Then jump back to the Linux handler! */
lwz r12, PACA_KVM_SCRATCH1(r13)
lwz r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
mtcr r12
ld r12, PACA_KVM_SCRATCH0(r13)
PPC_LL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfspr r13, SPRN_SPRG_SCRATCH0 /* r13 = original r13 */
b kvmppc_resume_\intno /* Get back original handler */
......@@ -76,9 +112,7 @@ kvmppc_trampoline_\intno:
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSTEM_RESET
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_MACHINE_CHECK
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_STORAGE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_STORAGE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_EXTERNAL
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALIGNMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PROGRAM
......@@ -88,7 +122,14 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_SYSCALL
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_TRACE
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_PERFMON
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_ALTIVEC
/* These are only available on 64-bit machines */
#ifdef CONFIG_PPC_BOOK3S_64
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_DATA_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_INST_SEGMENT
INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX
#endif
/*
* Bring us back to the faulting code, but skip the
......@@ -99,11 +140,11 @@ INTERRUPT_TRAMPOLINE BOOK3S_INTERRUPT_VSX
*
* Input Registers:
*
* R12 = free
* R13 = PACA
* PACA.KVM.SCRATCH0 = guest R12
* PACA.KVM.SCRATCH1 = guest CR
* SPRG_SCRATCH0 = guest R13
* R12 = free
* R13 = Shadow VCPU (PACA)
* SVCPU.SCRATCH0 = guest R12
* SVCPU.SCRATCH1 = guest CR
* SPRG_SCRATCH0 = guest R13
*
*/
kvmppc_handler_skip_ins:
......@@ -114,9 +155,9 @@ kvmppc_handler_skip_ins:
mtsrr0 r12
/* Clean up all state */
lwz r12, PACA_KVM_SCRATCH1(r13)
lwz r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH1)(r13)
mtcr r12
ld r12, PACA_KVM_SCRATCH0(r13)
PPC_LL r12, (SHADOW_VCPU_OFF + SVCPU_SCRATCH0)(r13)
mfspr r13, SPRN_SPRG_SCRATCH0
/* And get back into the code */
......@@ -147,41 +188,48 @@ kvmppc_handler_lowmem_trampoline_end:
*
* R3 = function
* R4 = MSR
* R5 = CTR
* R5 = scratch register
*
*/
_GLOBAL(kvmppc_rmcall)
mtmsr r4 /* Disable relocation, so mtsrr
LOAD_REG_IMMEDIATE(r5, MSR_NOIRQ)
mtmsr r5 /* Disable relocation and interrupts, so mtsrr
doesn't get interrupted */
mtctr r5
sync
mtsrr0 r3
mtsrr1 r4
RFI
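kvmppc_rmcall is invoked from C, so the register contract above lines up with
the PPC calling convention (first argument in r3, second in r4). A hedged
caller sketch, assuming a prototype of the following shape (the exact
declaration lives in the KVM Book3S headers; trampoline_real_addr is a
hypothetical value):

extern void kvmppc_rmcall(ulong srr0, ulong srr1);

	/* enter a real-mode trampoline with translation disabled */
	kvmppc_rmcall(trampoline_real_addr, MSR_KERNEL & ~(MSR_IR | MSR_DR));

Note that the rewritten body no longer expects CTR in r5; r5 is now scratch
and the routine loads MSR_NOIRQ itself before switching context.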
#if defined(CONFIG_PPC_BOOK3S_32)
#define STACK_LR INT_FRAME_SIZE+4
#elif defined(CONFIG_PPC_BOOK3S_64)
#define STACK_LR _LINK
#endif
/*
* Activate current's external feature (FPU/Altivec/VSX)
*/
#define define_load_up(what) \
\
_GLOBAL(kvmppc_load_up_ ## what); \
subi r1, r1, INT_FRAME_SIZE; \
mflr r3; \
std r3, _LINK(r1); \
mfmsr r4; \
std r31, GPR3(r1); \
mr r31, r4; \
li r5, MSR_DR; \
oris r5, r5, MSR_EE@h; \
andc r4, r4, r5; \
mtmsr r4; \
\
bl .load_up_ ## what; \
\
mtmsr r31; \
ld r3, _LINK(r1); \
ld r31, GPR3(r1); \
addi r1, r1, INT_FRAME_SIZE; \
mtlr r3; \
#define define_load_up(what) \
\
_GLOBAL(kvmppc_load_up_ ## what); \
PPC_STLU r1, -INT_FRAME_SIZE(r1); \
mflr r3; \
PPC_STL r3, STACK_LR(r1); \
PPC_STL r20, _NIP(r1); \
mfmsr r20; \
LOAD_REG_IMMEDIATE(r3, MSR_DR|MSR_EE); \
andc r3,r20,r3; /* Disable DR,EE */ \
mtmsr r3; \
sync; \
\
bl FUNC(load_up_ ## what); \
\
mtmsr r20; /* Enable DR,EE */ \
sync; \
PPC_LL r3, STACK_LR(r1); \
PPC_LL r20, _NIP(r1); \
mtlr r3; \
addi r1, r1, INT_FRAME_SIZE; \
blr
define_load_up(fpu)
......@@ -194,11 +242,10 @@ define_load_up(vsx)
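define_load_up() generates one small wrapper per lazily-loaded facility; the
entry points are ordinary global functions, so the C side only needs
declarations along these lines (sketch; the real declarations sit in the KVM
Book3S headers):

extern void kvmppc_load_up_fpu(void);
extern void kvmppc_load_up_altivec(void);
extern void kvmppc_load_up_vsx(void);

Each wrapper parks the old MSR in the non-volatile r20, disables DR and EE
around the call so the facility state cannot be stolen by an interrupt, and
restores everything before returning.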
.global kvmppc_trampoline_lowmem
kvmppc_trampoline_lowmem:
.long kvmppc_handler_lowmem_trampoline - _stext
.long kvmppc_handler_lowmem_trampoline - CONFIG_KERNEL_START
.global kvmppc_trampoline_enter
kvmppc_trampoline_enter:
.long kvmppc_handler_trampoline_enter - _stext
#include "book3s_64_slb.S"
.long kvmppc_handler_trampoline_enter - CONFIG_KERNEL_START
#include "book3s_segment.S"
(diff collapsed)
......@@ -133,6 +133,12 @@ void kvmppc_core_queue_external(struct kvm_vcpu *vcpu,
kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_EXTERNAL);
}
void kvmppc_core_dequeue_external(struct kvm_vcpu *vcpu,
struct kvm_interrupt *irq)
{
clear_bit(BOOKE_IRQPRIO_EXTERNAL, &vcpu->arch.pending_exceptions);
}
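The new dequeue hook mirrors the existing queue path: both just toggle one
priority bit in pending_exceptions. For reference, the queue side (defined
earlier in this file; shape assumed from the clear_bit() above) is its
set_bit() twin:

static void kvmppc_booke_queue_irqprio(struct kvm_vcpu *vcpu,
                                       unsigned int priority)
{
	set_bit(priority, &vcpu->arch.pending_exceptions);
}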
/* Deliver the interrupt of the corresponding priority, if possible. */
static int kvmppc_booke_irqprio_deliver(struct kvm_vcpu *vcpu,
unsigned int priority)
......@@ -479,6 +485,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
int i;
vcpu_load(vcpu);
regs->pc = vcpu->arch.pc;
regs->cr = kvmppc_get_cr(vcpu);
regs->ctr = vcpu->arch.ctr;
......@@ -499,6 +507,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
regs->gpr[i] = kvmppc_get_gpr(vcpu, i);
vcpu_put(vcpu);
return 0;
}
......@@ -506,6 +516,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
int i;
vcpu_load(vcpu);
vcpu->arch.pc = regs->pc;
kvmppc_set_cr(vcpu, regs->cr);
vcpu->arch.ctr = regs->ctr;
......@@ -525,6 +537,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
for (i = 0; i < ARRAY_SIZE(regs->gpr); i++)
kvmppc_set_gpr(vcpu, i, regs->gpr[i]);
vcpu_put(vcpu);
return 0;
}
......@@ -553,7 +567,12 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
int kvm_arch_vcpu_ioctl_translate(struct kvm_vcpu *vcpu,
struct kvm_translation *tr)
{
return kvmppc_core_vcpu_translate(vcpu, tr);
int r;
vcpu_load(vcpu);
r = kvmppc_core_vcpu_translate(vcpu, tr);
vcpu_put(vcpu);
return r;
}
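All three vcpu ioctls touched here gain the same bracketing: bind the vcpu
(taking its mutex) before touching vcpu state and release it afterwards. As a
standalone sketch of the pattern (do_vcpu_work is a placeholder for the real
body):

int kvm_arch_vcpu_ioctl_example(struct kvm_vcpu *vcpu, void *arg)
{
	int r;

	vcpu_load(vcpu);	/* serializes against other vcpu ioctls */
	r = do_vcpu_work(vcpu, arg);	/* placeholder */
	vcpu_put(vcpu);
	return r;
}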
int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
......
......@@ -161,7 +161,7 @@ static int __init kvmppc_e500_init(void)
flush_icache_range(kvmppc_booke_handlers,
kvmppc_booke_handlers + max_ivor + kvmppc_handler_len);
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), THIS_MODULE);
return kvm_init(NULL, sizeof(struct kvmppc_vcpu_e500), 0, THIS_MODULE);
}
static void __init kvmppc_e500_exit(void)
......
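The extra 0 argument tracks a kvm_init() signature change in this merge: the
vcpu structure alignment is now passed in at registration time, with 0
selecting the default. The prototype, as reconstructed from the call site
(sketch):

int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
	     struct module *module);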
(4 file diffs collapsed)
......@@ -72,7 +72,7 @@ static inline void kvm_s390_vcpu_set_mem(struct kvm_vcpu *vcpu)
struct kvm_memslots *memslots;
idx = srcu_read_lock(&vcpu->kvm->srcu);
memslots = rcu_dereference(vcpu->kvm->memslots);
memslots = kvm_memslots(vcpu->kvm);
mem = &memslots->memslots[0];
......
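The open-coded rcu_dereference() becomes the kvm_memslots() accessor, which
centralizes the SRCU/lockdep annotation instead of leaving each caller to get
it right. Its assumed shape (sketch of the 2.6.35-era helper):

static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
{
	return rcu_dereference_check(kvm->memslots,
			srcu_read_lock_held(&kvm->srcu)
			|| lockdep_is_held(&kvm->slots_lock));
}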
(33 file diffs collapsed)