- 30 October 2013, 7 commits
-
-
Committed by Vasant Hegde

Code update interface for the powernv platform. This provides a sysfs interface to pass a new image, then validate, update and commit it. This patch includes:
- The following OPAL APIs for code update:
  - opal_validate_flash()
  - opal_manage_flash()
  - opal_update_flash()
- The following sysfs files created under /sys/firmware/opal:
  - image : interface to pass the new FW image
  - validate_flash : validate the candidate image
  - manage_flash : commit/reject operations
  - update_flash : flash the new candidate image

Updating the image: "update_flash" is the interface used to request flashing of the new FW. It just passes the image SG list to FW; the actual flashing is done at system reboot time.

Note:
- SG entry format: I have kept the version number to keep this list similar to what PAPR defines.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
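A minimal userspace sketch (not part of the patch) of how a flash tool might drive these files; the "write 1 to trigger" tokens and the step of copying the image into /sys/firmware/opal/image are assumptions, since the log does not spell out the exact protocol:

```c
#include <fcntl.h>
#include <unistd.h>

/* Write a one-byte token to a sysfs file; returns 0 on success. */
static int write_token(const char *path, const char *tok)
{
	int fd = open(path, O_WRONLY);
	ssize_t rc;

	if (fd < 0)
		return -1;
	rc = write(fd, tok, 1);
	close(fd);
	return rc == 1 ? 0 : -1;
}

int main(void)
{
	/* Step 1 (omitted): copy the candidate image into /sys/firmware/opal/image. */

	/* Step 2: ask firmware to validate the candidate image. */
	if (write_token("/sys/firmware/opal/validate_flash", "1"))
		return 1;

	/* Step 3: queue the flash; the actual update happens at reboot. */
	if (write_token("/sys/firmware/opal/update_flash", "1"))
		return 1;

	return 0;
}
```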
-
Committed by Vasant Hegde

Create the /sys/firmware/opal directory. We will use this interface to fetch OPAL error logs, perform firmware updates, etc.

Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
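A hedged sketch of how such a directory is typically created with the standard kobject API (the init function name is illustrative, not necessarily the one used in the patch):

```c
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/kobject.h>
#include <linux/printk.h>

static struct kobject *opal_kobj;

static int __init opal_sysfs_init(void)
{
	/* Creates /sys/firmware/opal; firmware_kobj backs /sys/firmware. */
	opal_kobj = kobject_create_and_add("opal", firmware_kobj);
	if (!opal_kobj) {
		pr_warn("kobject_create_and_add(opal) failed\n");
		return -ENOMEM;
	}
	return 0;
}
```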
-
Committed by Anton Blanchard

Add a VMX-optimised xor, used primarily for RAID5. On a POWER7 blade this is a decent win:

32regs  : 17932.800 MB/sec
altivec : 19724.800 MB/sec

The gain is bigger when the same test is run in SMT4 mode, as it would be if there were a lot of work going on:

8regs   : 8377.600 MB/sec
altivec : 15801.600 MB/sec

I tested this against an array created without the patch, and also verified it worked as expected on a little endian kernel.

[ Fix !CONFIG_ALTIVEC build -- BenH ]
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

Commit f13c13a0 ("powerpc: Stop using non-architected shared_proc field in lppaca") fixed a potential issue with shared/dedicated partition detection. The old method of detection relied on an unarchitected field (shared_proc), and that patch switched to using something architected (a non-zero yield_count). Unfortunately the assertion in the Linux header that yield_count is only non-zero on shared processor partitions is not true: it turns out dedicated processor partitions can also increment yield_count, and as such we falsely detect dedicated partitions as shared. Fix the comment, and switch back to using the old method.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
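A sketch of the reverted check, assuming the lppaca field names of kernels of that era: trust the (unarchitected but reliable in practice) shared_proc bit rather than yield_count, since dedicated partitions can also bump yield_count.

```c
/* Hedged sketch, not a verbatim copy of the patch. */
static inline bool lppaca_shared_proc(struct lppaca *l)
{
	return !!l->shared_proc;
}
```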
-
Committed by Alistair Popple

PPC44x supports page sizes other than 4K, but when a 64K page size is selected compilation fails. This is due to a change in the definition of pgtable_t introduced by commit 5c1f6ee9 ("powerpc: Reduce PTE table memory wastage") by Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>. That patch only implements the new layout for PPC64, so it doesn't compile for PPC32 with a 64K page size. Ideally we should implement the same layout for PPC32; for the meantime this patch reverts the definition of pgtable_t for PPC32.

Signed-off-by: Alistair Popple <alistair@popple.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
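A hedged sketch of the net effect described above (the exact PPC64 typedef from the fragment-based layout is an assumption): PPC64 keeps the new definition, while PPC32 goes back to a struct page pointer so a 44x/64K configuration builds again.

```c
#ifdef CONFIG_PPC64
typedef pte_t *pgtable_t;	/* new fragment-based layout, PPC64 only */
#else
typedef struct page *pgtable_t;	/* reverted definition for PPC32 */
#endif
```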
-
Committed by Vaishnavi Bhat

This patch fixes a typo in the comments on virtual-to-physical address conversion.

Signed-off-by: Vaishnavi Bhat <vaishnavi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Robert Jennings

Move the few declarations from arch/powerpc/kernel/setup.h into arch/powerpc/include/asm/setup.h. This resolves a sparse warning for arch/powerpc/mm/numa.c, which defines do_init_bootmem() but can't include the header at its old path.

Resolves:
arch/powerpc/mm/numa.c:998:13: warning: symbol 'do_init_bootmem' was not declared. Should it be static?

Signed-off-by: Robert C Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
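A hedged sketch of the moved prototype in arch/powerpc/include/asm/setup.h, so numa.c picks up a declaration for the symbol it defines and the sparse warning goes away:

```c
/* One of the declarations relocated into asm/setup.h (sketch). */
void __init do_init_bootmem(void);
```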
-
- 29 October 2013, 1 commit
-
-
Committed by Scott Wood

Commit 9863c28a ("powerpc: Emulate sync instruction variants") introduced a build breakage with CONFIG_PPC_EMULATED_STATS enabled.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Cc: Kumar Gala <galak@kernel.org>
Cc: James Yang <James.Yang@freescale.com>
-
- 19 October 2013, 2 commits
-
-
Committed by Bharat Bhushan

KVM needs this function when switching from a vcpu to a user-space thread. My subsequent patch will use this function.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Bharat Bhushan

This way we can use the same struct type with KVM, which also helps in using other debug-related functions.

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Michael Neuling <mikey@neuling.org>
[scottwood@freescale.com: removed obvious debug_reg comment]
Signed-off-by: Scott Wood <scottwood@freescale.com>
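A hedged sketch of the grouping this describes, with field names assumed from the BookE debug register set (the real struct carries more registers and config guards): thread_struct then embeds the new type so KVM can reuse it directly.

```c
/* Sketch only; not the exact kernel definition. */
struct debug_reg {
	unsigned long dbcr0, dbcr1;	/* debug control registers        */
	unsigned long iac1, iac2;	/* instruction address compares   */
	unsigned long dac1, dac2;	/* data address compares          */
};
```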
-
- 17 October 2013, 1 commit
-
-
Committed by James Yang

Reserved fields of the sync instruction have been used for other instructions (e.g. lwsync). On processors that do not support variants of the sync instruction, emulate them by executing a plain sync to subsume the effect of the intended instruction.

Signed-off-by: James Yang <James.Yang@freescale.com>
[scottwood@freescale.com: whitespace and subject line fix]
Signed-off-by: Scott Wood <scottwood@freescale.com>
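A hedged sketch of the idea (the helper name is illustrative, not the kernel's): any word that decodes as a sync form (primary opcode 31, extended opcode 598, with the variant selected by otherwise-reserved fields) can be emulated by a full heavyweight sync, which is at least as strong as every variant.

```c
/* Sketch of the emulation step, kernel context assumed. */
static int emulate_sync_sketch(struct pt_regs *regs, unsigned int instword)
{
	/* Match opcode 31 / xo 598 while ignoring the L and reserved fields. */
	if ((instword & 0xfc0007fe) != 0x7c0004ac)
		return -1;			/* not a sync form */

	asm volatile("sync" : : : "memory");	/* subsumes lwsync, ptesync, ... */
	regs->nip += 4;				/* skip the emulated instruction */
	return 0;
}
```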
-
- 11 October 2013, 22 commits
-
-
Committed by Paul Mackerras

This provides a facility, intended for use by KVM, where the contents of the FP/VSX and VMX (Altivec) registers can be saved away somewhere other than the thread_struct when kernel code wants to use floating point or VMX instructions. This is done by providing a pointer in the thread_struct to indicate where the state should be saved to. The giveup_fpu() and giveup_altivec() functions test these pointers and save state to the indicated location if they are non-NULL. Note that the MSR_FP/VEC bits in task->thread.regs->msr are still used to indicate whether the CPU register state is live, even when an alternate save location is being used.

This also provides load_fp_state() and load_vr_state() functions, which load FP/VSX and VMX state from memory into the CPU registers, and corresponding store_fp_state() and store_vr_state() functions, which store FP/VSX and VMX state into memory from the CPU registers.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
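A hedged sketch of the mechanism; the field names fp_save_area/vr_save_area follow the commit description, but the exact thread_struct layout and giveup_fpu() internals are assumptions:

```c
/* Sketch only: relevant thread_struct fields and the save-side logic. */
struct thread_struct_sketch {
	struct thread_fp_state	fp_state;
	struct thread_fp_state	*fp_save_area;	/* NULL: save into fp_state */
	struct thread_vr_state	vr_state;
	struct thread_vr_state	*vr_save_area;	/* NULL: save into vr_state */
};

void giveup_fpu_sketch(struct thread_struct_sketch *t)
{
	struct thread_fp_state *dst =
		t->fp_save_area ? t->fp_save_area : &t->fp_state;

	store_fp_state(dst);	/* dump CPU FP/VSX registers to memory      */
	/* MSR_FP in thread.regs->msr still tracks register liveness here. */
}
```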
-
Committed by Paul Mackerras

This creates new 'thread_fp_state' and 'thread_vr_state' structures to store FP/VSX state (including FPSCR) and Altivec/VSX state (including VSCR), and uses them in the thread_struct. In thread_fp_state, the FPRs and VSRs are represented as u64 rather than double, since we rarely perform floating-point computations on the values, and this will enable the structures to be used in KVM code as well. Similarly, FPSCR is now a u64 rather than a structure of two 32-bit values.

This takes the offsets out of macros such as SAVE_32FPRS, REST_32FPRS, etc. This enables the same macros to be used for normal and transactional state, allowing us to delete the transactional versions of the macros.

This also removes the unused do_load_up_fpu and do_load_up_altivec, which were in fact buggy since they didn't create large enough stack frames to account for the fact that load_up_fpu and load_up_altivec are not designed to be called from C and assume that their caller's stack frame is an interrupt frame.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
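A hedged sketch of the new types as they appear in powerpc kernels of that era (alignment attributes and the vector128 typedef are approximations):

```c
struct thread_fp_state {
	u64	fpr[32][TS_FPRWIDTH] __attribute__((aligned(16)));
	u64	fpscr;		/* was previously a struct of two 32-bit words */
};

struct thread_vr_state {
	vector128	vr[32] __attribute__((aligned(16)));
	vector128	vscr   __attribute__((aligned(16)));
};
```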
-
Committed by Alexey Kardashevskiy

The existing TCE machine calls (tce_build and tce_free) only support virtual mode, as they call __raw_writeq for TCE invalidation, which fails in real mode. This introduces tce_build_rm and tce_free_rm real mode versions which do mostly the same but use the "Store Doubleword Caching Inhibited Indexed" instruction for TCE invalidation. This new feature is going to be utilized by the real mode support of VFIO.

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
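A hedged sketch of the real-mode MMIO store this refers to: stdcix ("Store Doubleword Caching Inhibited Indexed") works with the MMU off, where __raw_writeq would fault. Helper name and exact constraints are an approximation.

```c
static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
{
	/* Caching-inhibited store to a real (physical) address. */
	__asm__ __volatile__("stdcix %0,0,%1"
			     : : "r" (val), "r" (paddr) : "memory");
}
```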
-
Committed by Alexey Kardashevskiy

The current VFIO-on-POWER implementation supports only user-mode-driven mapping, i.e. QEMU sends requests to map/unmap pages. However this approach is really slow, so we want to move it into KVM. Since H_PUT_TCE can be extremely performance sensitive (especially with network adapters, where each packet needs to be mapped/unmapped), we chose to implement it as a "fast" hypercall directly in "real mode" (processor still in the guest context but MMU off).

To be able to do that, we need to provide some facilities to access the struct page count within that real mode environment, as things like the sparsemem vmemmap mappings aren't accessible.

This adds an API function realmode_pfn_to_page() to get the page struct when the MMU is off.

This adds to MM a new function put_page_unless_one() which drops a page if its counter is bigger than 1. It is going to be used when the MMU is off (for example, in real mode on PPC64) and we want to make sure that the page release will not happen in real mode, as it may crash the kernel in a horrible way.

CONFIG_SPARSEMEM_VMEMMAP and CONFIG_FLATMEM are supported.

Cc: linux-mm@kvack.org
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
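A hedged sketch of the new MM helper: drop a reference only if the count is currently greater than one, so the final put (which may touch data structures unavailable with the MMU off) never happens in real mode. The _count field name follows kernels of that era.

```c
static inline int put_page_unless_one(struct page *page)
{
	/* Returns non-zero if a reference was dropped, 0 if the count was 1. */
	return atomic_add_unless(&page->_count, -1, 1);
}
```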
-
Committed by Gavin Shan

The patch adds the function ioda_eeh_phb3_phb_diag() to dump PHB3 PHB diag-data. It is called while detecting informative errors or a frozen PE on the specific PHB.

Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

scom_read() now returns the read value via a pointer argument, and both functions return an int error code.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
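A hedged sketch of the resulting interface (parameter types are an approximation of the powerpc scom API of that era):

```c
/* Before: u64 scom_read(scom_map_t map, u64 reg); */
int scom_read(scom_map_t map, u64 reg, u64 *value);	/* value via pointer, rc on error */
int scom_write(scom_map_t map, u64 reg, u64 value);	/* now also returns an error code */
```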
-
Committed by Benjamin Herrenschmidt

isa_io_special is set when the platform provides a "special" implementation of inX/outX, for example via some FW interface. Such a platform doesn't need an ISA bridge on PCI, and so /dev/port should be made available even if one isn't present. This makes the LPC bus IOs accessible via /dev/port on PowerNV Power8.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Ellerman

Add the plumbing to implement arch_get_random_long/int(). It didn't seem worth adding an extra ppc_md hook for int, so we reuse the one for long. Add an implementation for powernv based on the hwrng found in power7+ systems. We whiten the output of the hwrng, and the result passes all the dieharder tests.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
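A hedged sketch of the plumbing (return convention and hook name follow the commit description; treat the details as an approximation of asm/archrandom.h of that era):

```c
static inline int arch_get_random_long(unsigned long *v)
{
	if (ppc_md.get_random_long)
		return ppc_md.get_random_long(v);
	return 0;			/* no hardware RNG available */
}

static inline int arch_get_random_int(unsigned int *v)
{
	unsigned long val;
	int rc = arch_get_random_long(&val);	/* reuse the "long" hook */

	if (rc)
		*v = val;
	return rc;
}
```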
-
Committed by Benjamin Herrenschmidt

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

pnv_pci_setup_bml_iommu() was missing a byteswap of a device tree property.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

Sparse caught an issue where opal_set_rtc_time() was byteswapping incorrectly. Also fix a number of sparse warnings.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

We need to fix some endian issues in our memcpy code. For now, just enable the generic memcpy routine for little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

We need to fix some endian issues in our checksum code. For now, just enable the generic checksum routines for little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

Create a trampoline that works in either endian and flips to the expected endian. Use it for primary and secondary thread entry as well as RTAS and OF call return.

Credit for finding the magic instruction goes to Paul Mackerras.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
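A standalone illustration (not kernel code) of the kind of dual-decoding word such a trampoline can rely on: the same 32-bit value reads as "b .+8" (primary opcode 18) in one byte order and as "tdi 0,0,0x48" (a trap-never, i.e. a no-op) in the other, so the entry code can behave sanely whichever endian fetched it. The specific pairing here is an assumption about the trick the commit alludes to.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t branch  = 0x48000008;			/* "b .+8" when read big-endian      */
	uint32_t swapped = __builtin_bswap32(branch);	/* 0x08000048: "tdi 0,0,0x48", no-op */

	printf("one endian : 0x%08x (b .+8)\n", branch);
	printf("other endian: 0x%08x (tdi 0,0,0x48 - never traps)\n", swapped);
	return 0;
}
```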
-
Committed by Ian Munsie

This patch has powerpc include the appropriate generic endianness header, depending on what the compiler reports.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

We need to set MSR_LE in kernel and userspace for little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

The powerpc word-at-a-time functions are big-endian specific. Bring in the x86 version in order to support little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie

This patch maps the MMIO functions for 32-bit PowerPC to their appropriate instructions depending on CPU endianness.

The macros used to create the corresponding inline functions are also renamed by this patch. Previously they had BE or LE in their names, which was misleading: they had nothing to do with endianness, but actually created different instruction forms, so their new names reflect the instruction form they create (D-form and X-form).

Little endian 64-bit PowerPC is not supported, so the lack of mappings (and corresponding breakage) for that case is intentional, to bring it to the attention of anyone doing a 64-bit little endian port. 64-bit big endian is unaffected.

[ Added 64 bit versions - Anton ]
Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

The elements within VSX loads and stores are big-endian ordered regardless of endianness. Our VSX context save/restore code uses lxvd2x and stxvd2x, which is a 2x doubleword operation. This means the two doublewords will be swapped and we have to perform another swap to undo it. We need to do this on both save and restore.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard

The FPRs overlap the high doublewords of the first 32 VSX registers. Fix TS_FPROFFSET and TS_VSRLOWOFFSET so we access the correct fields in little endian mode. If VSX is disabled, the FPRs are only one doubleword in length, so TS_FPROFFSET needs adjusting for little endian.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
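A hedged sketch of the endian-aware offsets (values approximate the powerpc processor.h of that era): the FPR aliases the "high" doubleword of the corresponding VSR, which is word 0 in big endian but word 1 in little endian.

```c
#ifdef CONFIG_VSX
#define TS_FPRWIDTH	2
#ifdef __BIG_ENDIAN__
#define TS_FPROFFSET	0
#define TS_VSRLOWOFFSET	1
#else
#define TS_FPROFFSET	1
#define TS_VSRLOWOFFSET	0
#endif
#else
#define TS_FPRWIDTH	1
#define TS_FPROFFSET	0
#endif
```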
-
Committed by Anton Blanchard

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 25 September 2013, 2 commits
-
-
Committed by Benjamin Herrenschmidt

We've been keeping that field in thread_struct for a while; it contains the "limit" of the current stack pointer and is meant to be used for detecting stack overflows. It has a few problems however:

- First, it was never actually *used* on 64-bit. It was set and updated but never actually exploited.
- When switching stack to/from the irq and softirq stacks, its update is racy unless we hard-disable interrupts, which is costly. This is fine on 32-bit as we don't soft-disable there, but not on 64-bit.

Thus rather than fixing 2 in order to implement 1 in some hypothetical future, let's remove the code completely from 64-bit. In order to avoid a clutter of ifdefs, we remove the updates from C code completely during interrupt stack switching, and instead maintain the field from the asm helper that is used to do the stack switching in the first place.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

Nowadays, irq_exit() calls __do_softirq() pretty much directly instead of calling do_softirq(), which switches to the dedicated softirq stack. This has led to observed stack overflows on powerpc, since we call irq_enter() and irq_exit() outside of the scope that switches to the irq stack. This fixes it by moving the stack switching up a level, making irq_enter() and irq_exit() run off the irq stack.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
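A hedged sketch of the reworked flow (helper names follow the powerpc code of that era; details are approximate): the switch to the irq stack now happens before irq_enter(), so both it and irq_exit(), including any __do_softirq() work kicked off from irq_exit(), run on the dedicated stack.

```c
void do_IRQ(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);
	struct thread_info *irqtp = hardirq_ctx[raw_smp_processor_id()];

	call_do_irq(regs, irqtp);	/* asm helper: swap sp, then call __do_irq() */
	set_irq_regs(old_regs);
}

void __do_irq(struct pt_regs *regs)
{
	irq_enter();			/* already on the irq stack here */
	/* ... fetch the irq number from the PIC and run its handler ... */
	irq_exit();			/* __do_softirq() also runs on this stack */
}
```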
-
- 05 September 2013, 1 commit
-
-
Committed by Paul Mackerras

Commit 74e400ce ("powerpc: Rework setting up H/FSCR bit definitions") ended up with incorrect bit numbers for FSCR_PM_LG and FSCR_BHRB_LG. This fixes them.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 28 August 2013, 1 commit
-
-
Committed by Paul Mackerras

It turns out that if we exit the guest due to an hcall instruction (sc 1), and the loading of the instruction in the guest exit path fails for any reason, the call to kvmppc_ld() in kvmppc_get_last_inst() fetches the instruction after the hcall instruction rather than the hcall itself. This in turn means that the instruction doesn't get recognized as an hcall in kvmppc_handle_exit_pr() but gets passed to the guest kernel as an sc instruction. That usually results in the guest kernel getting a return code of 38 (ENOSYS) from an hcall, which often triggers a BUG_ON() or other failure.

This fixes the problem by adding a new variant of kvmppc_get_last_inst() called kvmppc_get_last_sc(), which fetches the instruction if necessary from pc - 4 rather than pc.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
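A hedged sketch of the new helper (the kvmppc_ld() argument conventions and the cached-instruction field are approximations): identical in spirit to kvmppc_get_last_inst(), except the fetch address is pc - 4, because the saved pc already points past the "sc 1" that caused the exit.

```c
static inline u32 kvmppc_get_last_sc(struct kvm_vcpu *vcpu)
{
	ulong pc = kvmppc_get_pc(vcpu) - 4;	/* back up to the hcall itself */

	/* Load the instruction manually if the exit path failed to do so. */
	if (vcpu->arch.last_inst == KVM_INST_FETCH_FAILED)
		kvmppc_ld(vcpu, &pc, sizeof(u32), &vcpu->arch.last_inst, false);

	return vcpu->arch.last_inst;
}
```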
-
- 27 August 2013, 3 commits
-
-
Committed by Benjamin Herrenschmidt

With OPAL v3 we can return secondary CPUs to firmware on kexec. This allows firmware to do various cleanups, making things generally more reliable, and will enable the "new" kernel to call OPAL to perform some reconfiguration tasks early on that can only be done while all the CPUs are in firmware.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Paul Mackerras

On 64-bit, __pa(&static_var) gets miscompiled by recent versions of gcc as something like:

addis 3,2,.LANCHOR1+4611686018427387904@toc@ha
addi  3,3,.LANCHOR1+4611686018427387904@toc@l

This ends up effectively ignoring the offset, since its bottom 32 bits are zero, and means that the result of __pa() still has 0xC in the top nibble. This happens with gcc 4.8.1, at least.

To work around this, for 64-bit we make __pa() use an AND operator, and for symmetry, we make __va() use an OR operator. Using an AND operator rather than a subtraction ends up with slightly shorter code, since it can be done with a single clrldi instruction, whereas it takes three instructions to form the constant (-PAGE_OFFSET) and add it on. (Note that MEMORY_START is always 0 on 64-bit.)

CC: <stable@vger.kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
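A hedged sketch of the resulting 64-bit definitions (the mask constant and casts are an approximation consistent with the description above, where MEMORY_START is 0): the AND clears the 0xC nibble with a single clrldi, while the OR forms the virtual address.

```c
#define __va(x) ((void *)(unsigned long)((phys_addr_t)(x) | PAGE_OFFSET))
#define __pa(x) ((unsigned long)(x) & 0x0fffffffffffffffUL)
```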
-
Committed by Deepthi Dharwar

As part of the pseries_idle backend driver cleanup to make the code common to both the pseries and powernv platforms, it is necessary to move the backend-driver code to drivers/cpuidle. As a pre-requisite for that, it is essential to move plpar_wrapper.h to include/asm.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-