- 09 January 2014, 3 commits
-
-
Committed by Paul Mackerras
This modifies kvmppc_load_fp and kvmppc_save_fp to use the generic FP/VSX and VMX load/store functions instead of open-coding the FP/VSX/VMX load/store instructions. Since kvmppc_load/save_fp don't follow C calling conventions, we make them private symbols within book3s_hv_rmhandlers.S. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Paul Mackerras
This uses struct thread_fp_state and struct thread_vr_state to store the floating-point, VMX/Altivec and VSX state, rather than flat arrays. This makes transferring the state to/from the thread_struct simpler and allows us to unify the get/set_one_reg implementations for the VSX registers. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
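For readers unfamiliar with these containers, a rough sketch of the struct-based layout follows; the real definitions live in arch/powerpc/include/asm/processor.h, and u64, vector128 and TS_FPRWIDTH come from kernel headers, so treat this as an approximation rather than a verbatim copy.

    /* Approximate sketch of the state containers described above. */
    struct thread_fp_state {
        u64 fpr[32][TS_FPRWIDTH] __attribute__((aligned(16)));  /* FP/VSX registers */
        u64 fpscr;                                               /* FP status and control */
    };

    struct thread_vr_state {
        vector128 vr[32] __attribute__((aligned(16)));           /* VMX/Altivec registers */
        vector128 vscr __attribute__((aligned(16)));             /* VMX status and control */
    };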
-
Committed by Bharat Bhushan
kvm_hypercall() has nothing KVM specific, so it is renamed to epapr_hypercall(). It is also moved to arch/powerpc/include/asm/epapr_hcalls.h. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
- 09 November 2013, 1 commit
-
-
Committed by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 06 November 2013, 1 commit
-
-
Committed by Benjamin Herrenschmidt
When restoring the PPR value, we incorrectly access the thread structure at a time when MSR:RI is clear, which means we cannot recover from nested faults. However, the thread structure isn't covered by the "bolted" SLB entries, and thus accessing it can fault. This fixes it by splitting the code so that the PPR value is loaded into a GPR before MSR:RI is cleared. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 05 November 2013, 1 commit
-
-
Committed by Yijing Wang
Fix f0308261 ("powerpc/pci: Use pci_is_pcie() to simplify code"). I accidentally merged v2 instead of v3, so this adds the difference. Without this, "cap" is the left-over PCI-X capability offset, and we're using it as the PCIe capability offset. [bhelgaas: extracted v2->v3 diff] Signed-off-by: Yijing Wang <wangyijing@huawei.com> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
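As a hedged illustration of the idiom the fix restores, the sketch below obtains the PCIe capability offset from the PCI core rather than reusing a leftover PCI-X lookup result; pci_is_pcie(), pci_pcie_cap() and PCI_EXP_FLAGS are standard kernel APIs, while the function name and surrounding context are made up.

    #include <linux/errno.h>
    #include <linux/pci.h>

    /* Illustrative only: read the PCIe capability flags word using the offset
     * maintained by the PCI core, never a leftover PCI-X capability offset. */
    static int example_read_pcie_flags(struct pci_dev *dev, u16 *flags)
    {
        int cap;

        if (!pci_is_pcie(dev))       /* device has no PCIe capability */
            return -ENODEV;

        cap = pci_pcie_cap(dev);     /* cached PCIe capability offset */
        return pci_read_config_word(dev, cap + PCI_EXP_FLAGS, flags);
    }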
-
- 31 October 2013, 5 commits
-
-
Committed by Sudeep KarkadaNagesha
Since the definition of of_find_next_cache_node is architecture independent, the existing definition in powerpc can be moved to drivers/of/base.c. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Sudeep KarkadaNagesha
Currently, big endianness of the device tree data is assumed in of_find_next_cache_node for 'handle' when calling of_find_node_by_phandle. In preparation for moving this function to common code, this patch fixes the endianness using 'be32_to_cpup'. Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@linaro.org> Cc: Rob Herring <rob.herring@calxeda.com> Signed-off-by: Sudeep KarkadaNagesha <sudeep.karkadanagesha@arm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
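A hedged sketch of the conversion being described: the property names and function name below are illustrative, but be32_to_cpup() and of_find_node_by_phandle() are the real interfaces involved.

    #include <linux/of.h>

    /* Illustrative: follow an "l2-cache" (or "next-level-cache") phandle to the
     * next cache node, converting from device-tree (big endian) byte order. */
    static struct device_node *example_next_cache_node(const struct device_node *np)
    {
        const __be32 *handle;

        handle = of_get_property(np, "l2-cache", NULL);
        if (!handle)
            handle = of_get_property(np, "next-level-cache", NULL);
        if (!handle)
            return NULL;

        /* Device tree data is big endian; convert before using it as a phandle. */
        return of_find_node_by_phandle(be32_to_cpup(handle));
    }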
-
Committed by Michael Neuling
We currently turn IRQs off in __switch_to(), but this is unnecessary as they are already disabled in the caller. This removes the IRQ disable but adds a check to make sure they really are off, in case this changes in the future. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
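A minimal sketch of the kind of guard the commit adds, assuming the standard irqs_disabled()/WARN_ON helpers; the exact placement inside __switch_to() is not shown.

    #include <linux/bug.h>
    #include <linux/irqflags.h>

    /* Illustrative: the added check boils down to asserting that the caller of
     * __switch_to() has already disabled interrupts. */
    static inline void assert_irqs_off_for_switch(void)
    {
        WARN_ON(!irqs_disabled());
    }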
-
Committed by Dan Streetman
Currently, when not in hypervisor mode, the kernel Oopses during suspend or hibernation when accessing the SDR1 register, because it is only available in hypervisor mode. Access to it needs to be protected by BEGIN/END_FW_FTR_SECTION. Signed-off-by: Dan Streetman <ddstreet@ieee.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Reported-by: Jimmy Pan <jipan@redhat.com> Tested-by: Jimmy Pan <jipan@redhat.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
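The actual fix is in assembly using BEGIN/END_FW_FTR_SECTION; a rough C-level analogue of the same idea, written here purely as an illustration around firmware_has_feature(), might look like this:

    #include <asm/firmware.h>
    #include <asm/reg.h>

    /* Illustrative C analogue: SDR1 is hypervisor-privileged, so only touch it
     * when the kernel is not running as an LPAR guest under a hypervisor. */
    static void example_save_sdr1(unsigned long *save_area)
    {
        if (!firmware_has_feature(FW_FEATURE_LPAR))
            *save_area = mfspr(SPRN_SDR1);
    }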
-
Committed by Cedric Le Goater
When reading partitions, the length has to be translated from big endian to the host's endian order. Similarly, when writing partitions, the length needs to be in big endian order. The userspace tool 'nvram' needs a similar fix, as it reads and writes partitions through /dev/nvram: http://sourceforge.net/p/powerpc-utils/mailman/message/31571277/ Signed-off-by: Cedric Le Goater <clg@fr.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
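A hedged sketch of the endian handling being described; the header layout and struct name below are made up, the point being that the on-media length field is big endian and must go through be16_to_cpu()/cpu_to_be16().

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* Made-up, approximate partition header: 'length' is big endian on media. */
    struct example_nvram_header {
        unsigned char signature;
        unsigned char checksum;
        __be16 length;                 /* partition length, big endian on media */
        char name[12];
    };

    static unsigned int example_partition_length(const struct example_nvram_header *h)
    {
        return be16_to_cpu(h->length); /* convert to the host's byte order */
    }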
-
- 30 October 2013, 7 commits
-
-
Committed by Geert Uytterhoeven
Correct the reference to the location of the kexec_sequence() assembly helper. There never was a kexec_stub.S in mainline. Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The condition register (CR) is a 32 bit quantity, so we should use 32 bit loads and stores. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tom Musta
This patch enables alignment handling for the load/store floating point pair instructions (lfdp, lfdpx, stfdp, stfdpx). The handler routine is properly coded and only needs to be enabled. Signed-off-by: Tom Musta <tmusta@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tom Musta
The alignment handler is incorrect for unaligned string instructions in little endian mode. These instructions access data as arrays of bytes and thus are endian neutral. However, the routine also handles the load/store multiple instructions, which are NOT endian neutral. This patch toggles the byte swapping flag for the string instructions in little endian builds. This effectively disables the byte swapping logic. Signed-off-by: Tom Musta <tmusta@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
This issue was causing the QEMU emulated USB device to fail during PCI probe. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Prarit Bhargava
Commit e82b89a6 used strcat instead of strcpy, which can result in an overflow of newlines on the buffer. Signed-off-by: Prarit Bhargava Cc: benh@kernel.crashing.org Cc: ben@decadent.org.uk Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
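For context, the difference matters because strcat() appends on every call while strcpy() overwrites in place; a small userspace demonstration (buffer name and sizes are made up):

    #include <stdio.h>
    #include <string.h>

    /* Userspace demonstration: strcat() keeps appending, strcpy() overwrites. */
    int main(void)
    {
        char buf[64] = "";
        int i;

        for (i = 0; i < 5; i++)
            strcat(buf, "\n");      /* buffer grows by one newline per call */
        printf("after strcat: %zu newlines\n", strlen(buf));  /* prints 5 */

        for (i = 0; i < 5; i++)
            strcpy(buf, "\n");      /* buffer always holds exactly one newline */
        printf("after strcpy: %zu newlines\n", strlen(buf));  /* prints 1 */
        return 0;
    }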
-
Committed by Robert Jennings
Move the few declarations from arch/powerpc/kernel/setup.h into arch/powerpc/include/asm/setup.h. This resolves a sparse warning for arch/powerpc/mm/numa.c, which defines do_init_bootmem() but can't include the setup.h header in the prior path. Resolves: arch/powerpc/mm/numa.c:998:13: warning: symbol 'do_init_bootmem' was not declared. Should it be static? Signed-off-by: Robert C Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 October 2013, 4 commits
-
-
Committed by Scott Wood
Commit 9863c28a ("powerpc: Emulate sync instruction variants") introduced a build breakage with CONFIG_PPC_EMULATED_STATS enabled. Signed-off-by: Scott Wood <scottwood@freescale.com> Cc: Kumar Gala <galak@kernel.org> Cc: James Yang <James.Yang@freescale.com>
-
Committed by LEROY Christophe
Activating CONFIG_PIN_TLB is supposed to pin the IMMR and the first three 8 Mbyte pages. But the setting of MD_CTR to a pinnable entry was missing before the pinning of the third 8 Mbyte page. As the index is decremented modulo 28 (MD_RSV4D is set) after every DTLB update, the third 8 Mbyte page was not pinned. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by LEROY Christophe
This patch modifies the Oops message in case of a Software Emulation Exception. The existing message is quite confusing because it refers to FPU emulation, while most often the issue is due to either an unsupported instruction (not necessarily FPU related) or a stale instruction due to HW issues. The new message tries to be more generic in order to make the user understand that the Oops is due to something wrong with an instruction, not necessarily an FPU instruction. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Suzuki Poulose
The regset definition for SPE doesn't have the core_note_type set, which prevents it from being dumped. Add the note type NT_PPC_SPE for the SPE regset. Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com> Cc: Roland McGrath <roland@hack.frob.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
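A hedged sketch of the fix being described: the regset table entry needs its .core_note_type filled in so the core dumper emits the note. The entry below is illustrative; the real table lives in arch/powerpc/kernel/ptrace.c with the arch's own get/set callbacks.

    #include <linux/elf.h>
    #include <linux/regset.h>
    #include <linux/types.h>

    /* Illustrative regset table entry; the note type is what makes the core
     * dumper emit the register set. */
    static const struct user_regset example_spe_regset = {
        .core_note_type = NT_PPC_SPE,  /* without this the regset is never dumped */
        .n              = 35,
        .size           = sizeof(u32),
        .align          = sizeof(u32),
    };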
-
- 24 October 2013, 3 commits
-
-
Committed by Grant Likely
All the callers of irq_create_of_mapping() pass the contents of a struct of_phandle_args structure to the function. Since all the callers already have an of_phandle_args pointer, why not pass it directly to irq_create_of_mapping()? Signed-off-by: Grant Likely <grant.likely@linaro.org> Acked-by: Michal Simek <monstr@monstr.eu> Acked-by: Tony Lindgren <tony@atomide.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
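A hedged sketch of the resulting calling convention, assuming the post-series of_irq_parse_one() name; the wrapper function itself is made up.

    #include <linux/irqdomain.h>
    #include <linux/of.h>
    #include <linux/of_irq.h>

    /* Illustrative: parse interrupt 'index' of a device node and hand the whole
     * of_phandle_args descriptor straight to irq_create_of_mapping(). */
    static unsigned int example_map_irq(struct device_node *np, int index)
    {
        struct of_phandle_args oirq;

        if (of_irq_parse_one(np, index, &oirq))
            return 0;               /* parsing failed: no such interrupt */

        return irq_create_of_mapping(&oirq);
    }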
-
Committed by Grant Likely
struct of_irq and struct of_phandle_args are exactly the same structure. This patch makes the kernel use of_phandle_args everywhere. This in itself isn't a big deal, but it makes some follow-on patches simpler. Signed-off-by: Grant Likely <grant.likely@linaro.org> Acked-by: Michal Simek <monstr@monstr.eu> Acked-by: Tony Lindgren <tony@atomide.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Grant Likely
The OF irq handling code has been overloading the term 'map' to refer to both parsing the data in the device tree and mapping it to the internal Linux irq system. This is probably because the device tree does have the concept of an 'interrupt-map' function for translating interrupt references from one node to another, but 'map' is still confusing when the primary purpose of some of the functions is to parse the DT data. This patch renames all the of_irq_map_* functions to of_irq_parse_*, which makes it clear that there is a difference between the parsing phase and the mapping phase. Kernel code can make use of just the parsing or just the mapping support as needed by the subsystem. The patch was generated mechanically with a handful of sed commands. Signed-off-by: Grant Likely <grant.likely@linaro.org> Acked-by: Michal Simek <monstr@monstr.eu> Acked-by: Tony Lindgren <tony@atomide.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 23 October 2013, 1 commit
-
-
Committed by Paul Mackerras
Commit de79f7b9 ("powerpc: Put FP/VSX and VR state into structures") modified load_up_fpu() and load_up_altivec() in such a way that they now use r7 and r8. Unfortunately, the callers of these functions on 32-bit machines then return to userspace via fast_exception_return, which doesn't restore all of the volatile GPRs, but only r1, r3-r6 and r9-r12. This was causing userspace segfaults and other userspace misbehaviour on 32-bit machines. This fixes the problem by changing the register usage of load_up_fpu() and load_up_altivec() to avoid using r7 and r8 and instead use r6 and r10. This also adds comments to those functions saying which registers may be used. Signed-off-by: Paul Mackerras <paulus@samba.org> Tested-by: Scott Wood <scottwood@freescale.com> (on e500mc, so no altivec) Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 19 October 2013, 5 commits
-
-
Committed by James Yang
The BookE version of user_disable_single_step() clears DBCR0_IC for the instruction completion debug event, but did not also clear DBCR0_BT for the branch taken exception. This behavior was lost by the 2/2010 patch. Signed-off-by: James Yang <James.Yang@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
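A rough sketch of what the fix amounts to; the helper name and the way the register value is passed around are assumptions, only the DBCR0_IC/DBCR0_BT bits are the real ones.

    #include <asm/reg_booke.h>

    /* Illustrative: disabling single-step must clear both debug event bits,
     * not only the instruction-complete one. */
    static void example_clear_single_step(unsigned long *dbcr0)
    {
        *dbcr0 &= ~(DBCR0_IC | DBCR0_BT);
    }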
-
Committed by Bharat Bhushan
KVM needs this function when switching from the vcpu to the user-space thread. My subsequent patch will use this function. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Bharat Bhushan
This way we can use the same data type struct with KVM, and it also helps in using other debug related functions. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Acked-by: Michael Neuling <mikey@neuling.org> [scottwood@freescale.com: removed obvious debug_reg comment] Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Bharat Bhushan
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Tiejun Chen
Use DEFINE_PER_CPU to allocate thread_info statically instead of kmalloc(). This avoids introducing more memory-checking code. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> [scottwood@freescale.com: wrapped long line] Signed-off-by: Scott Wood <scottwood@freescale.com>
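A minimal sketch of the allocation pattern being described, with a made-up variable name:

    #include <linux/percpu.h>
    #include <linux/thread_info.h>

    /* Illustrative: one statically allocated thread_info per CPU, so there is
     * no kmalloc() whose failure path would need extra checking. */
    static DEFINE_PER_CPU(struct thread_info, example_backup_thread_info);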
-
- 17 October 2013, 9 commits
-
-
Committed by Aneesh Kumar K.V
This patch adds a new callback structure, kvmppc_ops. This will help us in enabling both HV and PR KVM together in the same kernel. The actual change to enable them together is done in a later patch in the series. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [agraf: squash in booke changes] Signed-off-by: Alexander Graf <agraf@suse.de>
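A heavily trimmed, illustrative sketch of what such a callback table can look like; the real struct kvmppc_ops has many more members and possibly different signatures, so treat this purely as a shape.

    #include <linux/kvm_host.h>

    /* Illustrative, heavily trimmed callback table; HV and PR KVM would each
     * register their own instance so both can live in one kernel. */
    struct example_kvmppc_ops {
        struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned int id);
        void (*vcpu_free)(struct kvm_vcpu *vcpu);
        int  (*vcpu_run)(struct kvm_run *run, struct kvm_vcpu *vcpu);
        void (*vcpu_load)(struct kvm_vcpu *vcpu, int cpu);
        void (*vcpu_put)(struct kvm_vcpu *vcpu);
    };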
-
Committed by Aneesh Kumar K.V
This helps us to select the relevant code in the kernel when we later move the HV and PR bits into separate modules. The patch also makes the config options for PR KVM selectable. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Aneesh Kumar K.V
With later patches supporting PR KVM as a kernel module, the changes that have to be built into the main kernel binary to enable the PR KVM module are now selected via KVM_BOOK3S_PR_POSSIBLE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
KVM needs this function when switching from the vcpu to the user-space thread. My subsequent patch will use this function. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
This way we can use the same data type struct with KVM, and it also helps in using other debug related functions. Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Bharat Bhushan
Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Paul Mackerras
Both PR and HV KVM have separate, identical copies of the kvmppc_skip_interrupt and kvmppc_skip_Hinterrupt handlers that are used for the situation where an interrupt happens when loading the instruction that caused an exit from the guest. To eliminate this duplication and make it easier to compile in both PR and HV KVM, this moves this code to arch/powerpc/kernel/exceptions-64s.S along with other kernel interrupt handler code. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Paul Mackerras
Currently PR-style KVM keeps the volatile guest register values (R0 - R13, CR, LR, CTR, XER, PC) in a shadow_vcpu struct rather than the main kvm_vcpu struct. For 64-bit, the shadow_vcpu exists in two places, a kmalloc'd struct and in the PACA, and it gets copied back and forth in kvmppc_core_vcpu_load/put(), because the real-mode code can't rely on being able to access the kmalloc'd struct. This changes the code to copy the volatile values into the shadow_vcpu as one of the last things done before entering the guest. Similarly, the values are copied back out of the shadow_vcpu to the kvm_vcpu immediately after exiting the guest. We arrange for interrupts to be still disabled at this point so that we can't get preempted on 64-bit and end up copying values from the wrong PACA. This means that the accessor functions in kvm_book3s.h for these registers are greatly simplified, and are now the same between PR and HV KVM. In places where accesses to shadow_vcpu fields are now replaced by accesses to the kvm_vcpu, we can also remove the svcpu_get/put pairs. Finally, on 64-bit, we don't need the kmalloc'd struct at all any more. With this, the time to read the PVR one million times in a loop went from 567.7ms to 575.5ms (averages of 6 values), an increase of about 1.4% for this worst-case test for guest entries and exits. The standard deviation of the measurements is about 11ms, so the difference is only marginally significant statistically. Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Alexander Graf <agraf@suse.de>
-
Committed by Paul Mackerras
This enables us to use the Processor Compatibility Register (PCR) on POWER7 to put the processor into architecture 2.05 compatibility mode when running a guest. In this mode the new instructions and registers that were introduced on POWER7 are disabled in user mode. This includes all the VSX facilities plus several other instructions such as ldbrx, stdbrx, popcntw, popcntd, etc. To select this mode, we have a new register accessible through the set/get_one_reg interface, called KVM_REG_PPC_ARCH_COMPAT. Setting this to zero gives the full set of capabilities of the processor. Setting it to one of the "logical" PVR values defined in PAPR puts the vcpu into the compatibility mode for the corresponding architecture level. The supported values are: 0x0f000002 (Architecture 2.05, POWER6), 0x0f000003 (Architecture 2.06, POWER7), and 0x0f100003 (Architecture 2.06+, POWER7+). Since the PCR is per-core, the architecture compatibility level and the corresponding PCR value are stored in the struct kvmppc_vcore, and are therefore shared between all vcpus in a virtual core. Signed-off-by: Paul Mackerras <paulus@samba.org> [agraf: squash in fix to add missing break statements and documentation] Signed-off-by: Alexander Graf <agraf@suse.de>
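A hedged userspace-side sketch of selecting a compatibility mode through the one_reg interface; the vcpu fd is assumed to have been created elsewhere and error handling is omitted.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Illustrative: put a vcpu into architecture 2.05 (POWER6) compat mode. */
    static int example_set_arch_compat(int vcpu_fd)
    {
        uint32_t compat_pvr = 0x0f000002;   /* logical PVR for architecture 2.05 */
        struct kvm_one_reg reg = {
            .id   = KVM_REG_PPC_ARCH_COMPAT,
            .addr = (uintptr_t)&compat_pvr,
        };

        return ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);
    }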
-