- 03 March 2014, 2 commits
-
-
Submitted by Marc Zyngier
HCR.TVM traps (among other things) accesses to AMAIR0 and AMAIR1. In order to minimise the amount of surprise a guest could generate by trying to access these registers with caches off, add them to the list of registers we switch/handle. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Marc Zyngier
Commit 240e99cb (ARM: KVM: Fix 64-bit coprocessor handling) changed the way we match the 64bit coprocessor access from user space, but didn't update the trap handler for the same set of registers. The effect is that a trapped 64bit access is never matched, leading to a fault being injected into the guest. This went unnoticed as we didn't really trap any 64bit register so far. Placing the CRm field of the access into the CRn field of the matching structure fixes the problem. Also update the debug feature to emit the expected string in case of failing match. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com>
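A minimal standalone sketch of the matching logic this describes, assuming a lookup table whose primary key is the CRn field: a 64-bit (two-register) access encodes only Op1 and CRm, so the entry's CRn slot has to carry that CRm value for the lookup to hit. The struct and field names are illustrative, not the kernel's actual coproc_reg definitions.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct reg_desc {
        int CRn, CRm, Op1, Op2;
        bool is64;
        const char *name;
    };

    /* 64-bit entry: the access is identified by (Op1, CRm) only, so the CRm
     * value (2 here) is stored in the CRn field, which is the lookup key. */
    static const struct reg_desc table[] = {
        { .CRn = 2, .Op1 = 0, .is64 = true,  .name = "example 64-bit register" },
        { .CRn = 1, .CRm = 0, .Op1 = 0, .Op2 = 0, .is64 = false,
          .name = "example 32-bit register" },
    };

    static const struct reg_desc *find_reg(bool is64, int crn, int crm,
                                           int op1, int op2)
    {
        for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
            const struct reg_desc *r = &table[i];
            if (r->is64 != is64)
                continue;
            if (is64) {
                /* match the access's CRm against the entry's CRn */
                if (r->Op1 == op1 && r->CRn == crm)
                    return r;
            } else if (r->CRn == crn && r->CRm == crm &&
                       r->Op1 == op1 && r->Op2 == op2) {
                return r;
            }
        }
        return NULL; /* unmatched: this is the path that injected a fault */
    }

    int main(void)
    {
        const struct reg_desc *r = find_reg(true, 0, 2, 0, 0);
        printf("%s\n", r ? r->name : "unmatched");
        return 0;
    }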
-
- 22 October 2013, 2 commits
-
-
Submitted by Marc Zyngier
The L2CTLR register contains the number of CPUs in this cluster. Make sure the register content is actually relevant to the vcpu that is being configured by computing the number of cores that are part of its cluster. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
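A rough, standalone illustration of the computation described here: how many vcpus share the cluster of (at most) four that the configured vcpu belongs to. The assumption that the count goes into L2CTLR bits [25:24] as "number of CPUs minus one" is mine, not something stated in this log.

    #include <stdio.h>

    static unsigned int l2ctlr_for_vcpu(unsigned int total_vcpus, unsigned int vcpu_id)
    {
        /* cores from the start of this vcpu's cluster to the last vcpu */
        unsigned int ncores = total_vcpus - (vcpu_id & ~3u);

        if (ncores > 4)
            ncores = 4;                 /* a cluster holds at most 4 cores */

        return (ncores - 1) << 24;      /* assumed field position */
    }

    int main(void)
    {
        /* 6 vcpus: vcpu 1 sits in a full cluster of 4, vcpu 5 in a cluster of 2 */
        printf("vcpu1: %u cores, vcpu5: %u cores\n",
               (l2ctlr_for_vcpu(6, 1) >> 24) + 1,
               (l2ctlr_for_vcpu(6, 5) >> 24) + 1);
        return 0;
    }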
-
Submitted by Marc Zyngier
In order to be able to support more than 4 A7 or A15 CPUs, we need to fix the MPIDR computing to reflect the fact that both A15 and A7 can only exist in clusters of at most 4 CPUs. Fix the MPIDR computing to allow virtual clusters to be exposed to the guest. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
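A hedged sketch of the virtual-cluster idea: with at most four CPUs per cluster, the low two bits of the vcpu id become the core number and the remaining bits the cluster number. The affinity field positions below follow the architectural MPIDR layout (Aff0 in bits [7:0], Aff1 in bits [15:8], bit 31 set for MP systems); the exact value KVM writes is an assumption.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t virt_mpidr(unsigned int vcpu_id)
    {
        uint32_t aff0 = vcpu_id & 3;    /* core within the virtual cluster */
        uint32_t aff1 = vcpu_id >> 2;   /* virtual cluster number */

        return (1u << 31) | (aff1 << 8) | aff0;
    }

    int main(void)
    {
        /* vcpu 5 -> cluster 1, core 1 */
        printf("MPIDR for vcpu 5: 0x%08x\n", virt_mpidr(5));
        return 0;
    }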
-
- 13 October 2013, 1 commit
-
-
Submitted by Jonathan Austin
This patch adds support for running Cortex-A7 guests on Cortex-A7 hosts. As Cortex-A7 is architecturally compatible with A15, this patch is largely just generalising existing code. Areas where 'implementation defined' behaviour is identical for A7 and A15 are moved to allow them to be used by both cores. The check to ensure that coprocessor register tables are sorted correctly is also moved in to 'common' code to avoid each new cpu doing its own check (and possibly forgetting to do so!) Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
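The "check the table is sorted once, in common code" idea could look roughly like this standalone sketch; the struct, the key order and the table contents are all illustrative rather than the kernel's.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct reg_desc { int CRn, CRm, Op1, Op2; };

    static int cmp_reg(const struct reg_desc *a, const struct reg_desc *b)
    {
        if (a->CRn != b->CRn) return a->CRn - b->CRn;
        if (a->CRm != b->CRm) return a->CRm - b->CRm;
        if (a->Op1 != b->Op1) return a->Op1 - b->Op1;
        return a->Op2 - b->Op2;
    }

    /* Shared by every CPU-specific table instead of being duplicated per core. */
    static bool check_reg_table(const struct reg_desc *tbl, size_t n, const char *who)
    {
        for (size_t i = 1; i < n; i++) {
            if (cmp_reg(&tbl[i - 1], &tbl[i]) >= 0) {
                fprintf(stderr, "%s: table not sorted at entry %zu\n", who, i);
                return false;
            }
        }
        return true;
    }

    int main(void)
    {
        const struct reg_desc a15_regs[] = { {1,0,0,0}, {9,0,1,2}, {9,1,1,0} };
        const struct reg_desc a7_regs[]  = { {1,0,0,0}, {9,1,1,0}, {9,0,1,2} }; /* deliberately wrong */

        check_reg_table(a15_regs, 3, "a15");
        check_reg_table(a7_regs, 3, "a7");
        return 0;
    }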
-
- 07 August 2013, 1 commit
-
-
Submitted by Christoffer Dall
The PAR was exported as CRn == 7 and CRm == 0, but in fact the primary coprocessor register number was determined by CRm for 64-bit coprocessor registers as the user space API was modeled after the coprocessor access instructions (see the ARM ARM rev. C - B3-1445). However, just changing the CRn to CRm breaks the sorting check when booting the kernel, because the internal kernel logic always treats CRn as the primary register number, and it makes the table sorting impossible to understand for humans. Alternatively we could change the logic to always have CRn == CRm, but that becomes unclear in the number of ways we do look up of a coprocessor register. We could also have a separate 64-bit table but that feels somewhat over-engineered. Instead, keep CRn the primary representation of the primary coproc. register number in-kernel and always export the primary number as CRm as per the existing user space ABI. Note: The TTBR registers just magically worked because they happened to follow the CRn(0) regs and were considered CRn(0) in the in-kernel representation. Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 27 June 2013, 1 commit
-
-
Submitted by Marc Zyngier
Not saving PAR is an unfortunate oversight. If the guest performs an AT* operation and gets scheduled out before reading the result of the translation from PAR, it could become corrupted by another guest or the host. Saving this register is made slightly more complicated as KVM also uses it on the permission fault handling path, leading to an ugly "stash and restore" sequence. Fortunately, this is already a slow path so we don't really care. Also, Linux doesn't do any AT* operation, so Linux guests are not impacted by this bug. [ Slightly tweaked to use an even register as first operand to ldrd and strd operations in interrupts_head.S - Christoffer ] Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
-
- 18 April 2013, 1 commit
-
-
Submitted by Marc Zyngier
In the very unlikely event where a guest would be foolish enough to *read* from a write-only cache maintenance register, we end up with preemption disabled, due to a misplaced get_cpu(). Just move the "is_write" test outside of the critical section. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
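The shape of the fix, modelled with stand-in critical-section helpers instead of the real get_cpu()/put_cpu(): the write-only check runs before preemption would be disabled, so the early-return path can no longer leave the critical section held.

    #include <stdbool.h>
    #include <stdio.h>

    static int preempt_depth;
    static void enter_critical(void) { preempt_depth++; }   /* stand-in for get_cpu() */
    static void leave_critical(void) { preempt_depth--; }   /* stand-in for put_cpu() */

    static bool handle_cache_op(bool is_write)
    {
        if (!is_write)
            return false;        /* read of a write-only register: reject up front */

        enter_critical();
        /* ... per-CPU cache maintenance bookkeeping would go here ... */
        leave_critical();
        return true;
    }

    int main(void)
    {
        handle_cache_op(false);  /* the foolish guest read */
        printf("preempt depth after bogus read: %d (must be 0)\n", preempt_depth);
        return 0;
    }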
-
- 07 March 2013, 3 commits
-
-
Submitted by Marc Zyngier
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <cdall@cs.columbia.edu>
-
Submitted by Marc Zyngier
Instead of directly accessing the fault registers, use proper accessors so the core code can be shared. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
Submitted by Marc Zyngier
On 32bit ARM, unsigned long is guaranteed to be a 32bit quantity. On 64bit ARM, it is a 64bit quantity. In order to be able to share code between the two architectures, convert the registers to be unsigned long, so the core code can be oblivious of the change. Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 12 February 2013, 1 commit
-
-
Submitted by Marc Zyngier
Do the necessary save/restore dance for the timers in the world switch code. In the process, allow the guest to read the physical counter, which is useful for its own clock_event_device. Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
-
- 24 January 2013, 5 commits
-
-
Submitted by Rusty Russell
We use space #18 for floating point regs. Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
-
Submitted by Christoffer Dall
The Cache Size Selection Register (CSSELR) selects the current Cache Size ID Register (CCSIDR). You write which cache you are interested in to CSSELR, and read the information out of CCSIDR. Which cache numbers are valid is known by reading the Cache Level ID Register (CLIDR). To export this state to userspace, we add a KVM_REG_ARM_DEMUX numberspace (17), which uses 8 bits to represent which register is being demultiplexed (0 for CCSIDR), and the lower 8 bits to represent this demultiplexing (in our case, the CSSELR value, which is 4 bits). Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
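A small sketch of the demux id layout as described: bits [15:8] say which register is being demultiplexed (0 for CCSIDR) and bits [7:0] carry the demux value, here the CSSELR selector. DEMUX_BASE stands in for the real KVM_REG_ARM | size | KVM_REG_ARM_DEMUX prefix, whose value is not given in this log.

    #include <stdint.h>
    #include <stdio.h>

    #define DEMUX_BASE      0x0ULL   /* placeholder, not the real register-id prefix */
    #define DEMUX_ID_CCSIDR 0x0ULL   /* "0 for CCSIDR", per the commit message */

    static uint64_t ccsidr_reg_id(unsigned int csselr)
    {
        return DEMUX_BASE | (DEMUX_ID_CCSIDR << 8) | (csselr & 0xff);
    }

    int main(void)
    {
        for (unsigned int csselr = 0; csselr < 3; csselr++)
            printf("CCSIDR[CSSELR=%u] -> id 0x%llx\n",
                   csselr, (unsigned long long)ccsidr_reg_id(csselr));
        return 0;
    }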
-
Submitted by Christoffer Dall
The following three ioctls are implemented:
- KVM_GET_REG_LIST
- KVM_GET_ONE_REG
- KVM_SET_ONE_REG
Now we have a table for all the cp15 registers, we can drive a generic API. The register IDs carry the following encoding: ARM registers are mapped using the lower 32 bits. The upper 16 of that is the register group type, or coprocessor number:
ARM 32-bit CP15 registers have the following id bit patterns:
0x4002 0000 000F <zero:1> <crn:4> <crm:4> <opc1:4> <opc2:3>
ARM 64-bit CP15 registers have the following id bit patterns:
0x4003 0000 000F <zero:1> <zero:4> <crm:4> <opc1:4> <zero:3>
For futureproofing, we need to tell QEMU about the CP15 registers the host lets the guest access. It will need this information to restore a current guest on a future CPU or perhaps a future KVM which allow some of these to be changed. We use a separate table for these, as they're only for the userspace API. Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
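Packing the low 16 bits of a 32-bit CP15 register id exactly as the bit pattern above lists them — <zero:1> <crn:4> <crm:4> <opc1:4> <opc2:3> — looks like the sketch below; the upper prefix (group type and size) is left out, and the example register choice is mine.

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t cp15_32_index(unsigned crn, unsigned crm,
                                  unsigned opc1, unsigned opc2)
    {
        return (uint16_t)(((crn & 0xf) << 11) |
                          ((crm & 0xf) << 7)  |
                          ((opc1 & 0xf) << 3) |
                          (opc2 & 0x7));
    }

    int main(void)
    {
        /* SCTLR is c1, c0, opc1 = 0, opc2 = 0 */
        printf("SCTLR index bits: 0x%04x\n", cp15_32_index(1, 0, 0, 0));
        return 0;
    }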
-
Submitted by Christoffer Dall
Adds a new important function in the main KVM/ARM code called handle_exit() which is called from kvm_arch_vcpu_ioctl_run() on returns from guest execution. This function examines the Hyp-Syndrome-Register (HSR), which contains information telling KVM what caused the exit from the guest. Some of the reasons for an exit are CP15 accesses, which are not allowed from the guest and this commit handles these exits by emulating the intended operation in software and skipping the guest instruction.
Minor notes about the coproc register reset:
1) We reserve a value of 0 as an invalid cp15 offset, to catch bugs in our table, at cost of 4 bytes per vcpu.
2) Added comments on the table indicating how we handle each register, for simplicity of understanding.
Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
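A toy model of the dispatch handle_exit() performs: take the exception class from the top bits of the HSR and jump through a handler table. The shift and the exception-class value below are illustrative assumptions, not values taken from this log.

    #include <stdint.h>
    #include <stdio.h>

    #define HSR_EC_SHIFT 26   /* exception class lives in the top bits of the HSR */

    typedef int (*exit_handler_fn)(uint32_t hsr);

    static int handle_cp15_access(uint32_t hsr)
    {
        /* emulate the access in software, then skip the trapped instruction */
        printf("cp15 trap, hsr=0x%08x\n", hsr);
        return 1;  /* resume the guest */
    }

    static int handle_unknown(uint32_t hsr)
    {
        printf("unhandled exit, hsr=0x%08x\n", hsr);
        return 0;  /* drop back to user space */
    }

    static exit_handler_fn exit_handlers[64];

    static int handle_exit(uint32_t hsr)
    {
        exit_handler_fn fn = exit_handlers[hsr >> HSR_EC_SHIFT];
        return fn ? fn(hsr) : handle_unknown(hsr);
    }

    int main(void)
    {
        exit_handlers[0x03] = handle_cp15_access;   /* assumed class number */
        return handle_exit(0x03u << HSR_EC_SHIFT) ? 0 : 1;
    }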
-
Submitted by Christoffer Dall
Targets KVM support for Cortex-A15 processors. Contains all the framework components, make files, header files, some tracing functionality, and basic user space API. Only supported core is Cortex-A15 for now. Most functionality is in arch/arm/kvm/* or arch/arm/include/asm/kvm_*.h. Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
-
- 06 December 2012, 1 commit
-
-
Submitted by Paul Mackerras
This fixes various issues in how we were handling the VSX registers that exist on POWER7 machines.

First, we were running off the end of the current->thread.fpr[] array. Ultimately this was because the vcpu->arch.vsr[] array is sized to be able to store both the FP registers and the extra VSX registers (i.e. 64 entries), but PR KVM only uses it for the extra VSX registers (i.e. 32 entries).

Secondly, calling load_up_vsx() from C code is a really bad idea, because it jumps to fast_exception_return at the end, rather than returning with a blr instruction. This was causing it to jump off to a random location with random register contents, since it was using the largely uninitialized stack frame created by kvmppc_load_up_vsx.

In fact, it isn't necessary to call either __giveup_vsx or load_up_vsx, since giveup_fpu and load_up_fpu handle the extra VSX registers as well as the standard FP registers on machines with VSX. Also, since VSX instructions can access the VMX registers and the FP registers as well as the extra VSX registers, we have to load up the FP and VMX registers before we can turn on the MSR_VSX bit for the guest. Conversely, if we save away any of the VSX or FP registers, we have to turn off MSR_VSX for the guest.

To handle all this, it is more convenient for a single call to kvmppc_giveup_ext() to handle all the state saving that needs to be done, so we make it take a set of MSR bits rather than just one, and the switch statement becomes a series of if statements. Similarly kvmppc_handle_ext needs to be able to load up more than one set of registers.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
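One way to picture the "switch statement becomes a series of if statements" part: a single call takes a set of MSR facility bits and flushes each requested facility in turn. The DEMO_MSR_* values are placeholders, not the real powerpc MSR bit definitions.

    #include <stdio.h>

    #define DEMO_MSR_FP  (1u << 0)   /* placeholder bit values */
    #define DEMO_MSR_VEC (1u << 1)
    #define DEMO_MSR_VSX (1u << 2)

    /* one call handles all the state saving that needs to be done */
    static void giveup_ext(unsigned int msr_bits)
    {
        if (msr_bits & DEMO_MSR_FP)
            printf("save FP (plus the extra VSX) registers\n");
        if (msr_bits & DEMO_MSR_VEC)
            printf("save VMX registers\n");
        if (msr_bits & DEMO_MSR_VSX)
            printf("drop MSR_VSX from the guest MSR\n");
    }

    int main(void)
    {
        giveup_ext(DEMO_MSR_FP | DEMO_MSR_VSX);
        return 0;
    }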
-
- 01 November 2011, 1 commit
-
-
Submitted by Paul Gortmaker
All these files were including module.h just for the basic EXPORT_SYMBOL infrastructure. We can shift them off to the export.h header which is a way smaller footprint and thus realize some compile time gains. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
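The per-file change is essentially the include swap below, shown as a kernel-code fragment with a hypothetical exported symbol:

    /* before: pulled in all of the module machinery just for EXPORT_SYMBOL */
    /* #include <linux/module.h> */

    /* after: the lightweight header is enough for EXPORT_SYMBOL */
    #include <linux/export.h>

    int shared_helper(void)      /* hypothetical exported symbol */
    {
        return 0;
    }
    EXPORT_SYMBOL(shared_helper);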
-
- 26 September 2011, 1 commit
-
-
Submitted by Paul Mackerras
This simplifies the way that the book3s_pr makes the transition to real mode when entering the guest. We now call kvmppc_entry_trampoline (renamed from kvmppc_rmcall) in the base kernel using a normal function call instead of doing an indirect call through a pointer in the vcpu. If kvm is a module, the module loader takes care of generating a trampoline as it does for other calls to functions outside the module. kvmppc_entry_trampoline then disables interrupts and jumps to kvmppc_handler_trampoline_enter in real mode using an rfi[d]. That then uses the link register as the address to return to (potentially in module space) when the guest exits.

This also simplifies the way that we call the Linux interrupt handler when we exit the guest due to an external, decrementer or performance monitor interrupt. Instead of turning on the MMU, then deciding that we need to call the Linux handler and turning the MMU back off again, we now go straight to the handler at the point where we would turn the MMU on. The handler will then return to the virtual-mode code (potentially in the module).

Along the way, this moves the setting and clearing of the HID5 DCBZ32 bit into real-mode interrupts-off code, and also makes sure that we clear the MSR[RI] bit before loading values into SRR0/1.

The net result is that we no longer need any code addresses to be stored in vcpu->arch.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
- 12 July 2011, 2 commits
-
-
Submitted by Paul Mackerras
This adds support for KVM running on 64-bit Book 3S processors, specifically POWER7, in hypervisor mode. Using hypervisor mode means that the guest can use the processor's supervisor mode. That means that the guest can execute privileged instructions and access privileged registers itself without trapping to the host. This gives excellent performance, but does mean that KVM cannot emulate a processor architecture other than the one that the hardware implements.

This code assumes that the guest is running paravirtualized using the PAPR (Power Architecture Platform Requirements) interface, which is the interface that IBM's PowerVM hypervisor uses. That means that existing Linux distributions that run on IBM pSeries machines will also run under KVM without modification. In order to communicate the PAPR hypercalls to qemu, this adds a new KVM_EXIT_PAPR_HCALL exit code to include/linux/kvm.h.

Currently the choice between book3s_hv support and book3s_pr support (i.e. the existing code, which runs the guest in user mode) has to be made at kernel configuration time, so a given kernel binary can only do one or the other. This new book3s_hv code doesn't support MMIO emulation at present. Since we are running paravirtualized guests, this isn't a serious restriction.

With the guest running in supervisor mode, most exceptions go straight to the guest. We will never get data or instruction storage or segment interrupts, alignment interrupts, decrementer interrupts, program interrupts, single-step interrupts, etc., coming to the hypervisor from the guest. Therefore this introduces a new KVMTEST_NONHV macro for the exception entry path so that we don't have to do the KVM test on entry to those exception handlers. We do however get hypervisor decrementer, hypervisor data storage, hypervisor instruction storage, and hypervisor emulation assist interrupts, so we have to handle those.

In hypervisor mode, real-mode accesses can access all of RAM, not just a limited amount. Therefore we put all the guest state in the vcpu.arch and use the shadow_vcpu in the PACA only for temporary scratch space. We allocate the vcpu with kzalloc rather than vzalloc, and we don't use anything in the kvmppc_vcpu_book3s struct, so we don't allocate it. We don't have a shared page with the guest, but we still need a kvm_vcpu_arch_shared struct to store the values of various registers, so we include one in the vcpu_arch struct.

The POWER7 processor has a restriction that all threads in a core have to be in the same partition. MMU-on kernel code counts as a partition (partition 0), so we have to do a partition switch on every entry to and exit from the guest. At present we require the host and guest to run in single-thread mode because of this hardware restriction.

This code allocates a hashed page table for the guest and initializes it with HPTEs for the guest's Virtual Real Memory Area (VRMA). We require that the guest memory is allocated using 16MB huge pages, in order to simplify the low-level memory management. This also means that we can get away without tracking paging activity in the host for now, since huge pages can't be paged or swapped.

This also adds a few new exports needed by the book3s_hv code.

Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Submitted by Alexander Graf
Up until now, Book3S KVM had variables stored in the kernel that a kernel module or the kvm code in the kernel could read from to figure out where some real mode helper functions are located. This is all unnecessary. The high bits of the EA get ignored in real mode, so we can just use the pointer as is. Also, it's a lot easier on relocations when we use the normal way of resolving the address to a function, instead of jumping through hoops. This patch fixes compilation with CONFIG_RELOCATABLE=y. Signed-off-by: Alexander Graf <agraf@suse.de>
-
- 17 May 2010, 1 commit
-
-
Submitted by Alexander Graf
We have quite some code that can be used by Book3S_32 and Book3S_64 alike, so let's call it "Book3S" instead of "Book3S_64", so we can later on use it from the 32 bit port too. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 01 March 2010, 2 commits
-
-
Submitted by Alexander Graf
Linux contains quite some bits of code to load FPU, Altivec and VSX lazily for a task. It calls those bits in real mode, coming from an interrupt handler. For KVM we better reuse those, so let's wrap a bit of trampoline magic around them and then we can call them from normal module code. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
Submitted by Alexander Graf
Currently we're racy when doing the transition from IR=1 to IR=0, from the module memory entry code to the real mode SLB switching code. To work around that I took a look at the RTAS entry code which is faced with a similar problem and did the same thing: A small helper in linear mapped memory that does mtmsr with IR=0 and then RFIs into the actual handler. Thanks to that trick we can safely take page faults in the entry code and only need to be really wary of what to do as of the SLB switching part. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Avi Kivity <avi@redhat.com>
-
- 05 November 2009, 1 commit
-
-
Submitted by Alexander Graf
To be able to keep KVM as module, we need to export the SLB trampoline addresses to the module, so it knows where to jump to. Signed-off-by: Alexander Graf <agraf@suse.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 04 August 2008, 1 commit
-
-
Submitted by Stephen Rothwell
from include/asm-powerpc. This is the result of a

mkdir arch/powerpc/include/asm
git mv include/asm-powerpc/* arch/powerpc/include/asm

Followed by a few documentation/comment fixups and a couple of places where <asm-powerpc/...> was being used explicitly. Of the latter only one was outside the arch code and it is a driver only built for powerpc. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 27 April 2008, 1 commit
-
-
Submitted by Hollis Blanchard
This functionality is definitely experimental, but is capable of running unmodified PowerPC 440 Linux kernels as guests on a PowerPC 440 host. (Only tested with 440EP "Bamboo" guests so far, but with appropriate userspace support other SoC/board combinations should work.) See Documentation/powerpc/kvm_440.txt for technical details. [stephen: build fix] Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Avi Kivity <avi@qumranet.com>
-
- 04 October 2006, 1 commit
-
-
Submitted by Dave Jones
kbuild explicitly includes this at build time. Signed-off-by: Dave Jones <davej@redhat.com>
-
- 13 September 2006, 1 commit
-
-
Submitted by Olof Johansson
Base patch for PA6T and PA6T-1682M. This introduces the arch/powerpc/platform/pasemi directory, together with basic implementations for various setup. Much of this was based on other platform code, i.e. Maple, etc. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 23 June 2006, 1 commit
-
-
Submitted by Sascha Hauer
This is a patch for the Hilscher netx builtin ethernet ports. The netx board support was merged into 2.6.17-git2. The netx is an arm926-based SoC. Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
--
drivers/net/Kconfig | 11
drivers/net/Makefile | 1
drivers/net/netx-eth.c | 516 ++++++++++++++++++++++++++++++++++++++++
include/asm-arm/arch-netx/eth.h | 27 ++
4 files changed, 555 insertions(+)
Signed-off-by: Jeff Garzik <jeff@garzik.org>
-
- 19 June 2006, 2 commits
-
-
Submitted by Sascha Hauer
Patch from Sascha Hauer This patch adds the base support for Hilscher's netX network processors. Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Sascha Hauer
Patch from Sascha Hauer This patch adds framebuffer support for Hilscher's netX network processors. Signed-off-by: Robert Schwebel <r.schwebel@pengutronix.de> Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 January 2006, 1 commit
-
-
Submitted by SAN People
Patch from SAN People
Following changes were made to clock.c:
1) Replaced <asm/hardware/clock.h> with <linux/clk.h>
2) Removed old unused clk_enable & clk_disable.
3) Replaced clk_use/clk_unuse with clk_enable/clk_disable.
Otherwise it's the same as the previous patch. Signed-off-by: Andrew Victor <andrew@sanpeople.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
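In driver terms the rename amounts to something like the fragment below — a hypothetical clock consumer; the clock name and surrounding code are made up, only the clk_get/clk_enable/clk_disable/clk_put calls reflect the API described here.

    #include <linux/clk.h>
    #include <linux/device.h>
    #include <linux/err.h>

    static int demo_clock_on(struct device *dev)
    {
        struct clk *clk = clk_get(dev, "uart_clk");   /* clock id is made up */

        if (IS_ERR(clk))
            return PTR_ERR(clk);

        clk_enable(clk);    /* was clk_use(clk) before this change */
        /* ... peripheral is clocked here ... */
        clk_disable(clk);   /* was clk_unuse(clk) */
        clk_put(clk);
        return 0;
    }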
-
- 28 October 2005, 1 commit
-
-
Submitted by Russell King
Including asm/hardware.h into asm/io.h can cause #define clashes between platform specific definitions and driver local definitions. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 April 2005, 1 commit
-
-
Submitted by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it. Let it rip!
-