- 14 Aug 2013, 25 commits
-
Committed by Anton Blanchard
Our ppc64 spinlocks and rwlocks use a trick where a lock token and the paca index are placed in the lock with a single store. Since we are using two u16s, they need adjusting for little endian. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
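A standalone illustration of the underlying issue, not the kernel's lock code: two u16 fields packed into one 32-bit word land in opposite halves on big- and little-endian hosts, so their placement has to be adjusted per endianness.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative only: a 32-bit "lock word" holding two 16-bit fields,
 * loosely modelled on a token + owner index written by a single store. */
union lock_word {
	uint32_t whole;
	uint16_t half[2];
};

int main(void)
{
	union lock_word w = { .whole = 0 };

	w.half[0] = 0xAAAA;	/* e.g. a lock token        */
	w.half[1] = 0x0001;	/* e.g. an owner/paca index */

	/* Prints 0xAAAA0001 on big endian but 0x0001AAAA on little endian,
	 * which is why the field offsets need swapping for LE builds. */
	printf("lock word as u32: 0x%08X\n", w.whole);
	return 0;
}
```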
-
Committed by Anton Blanchard
The lppaca, slb_shadow and dtl_entry hypervisor structures are big endian, so we have to byte swap them in little endian builds. LE KVM hosts will also need to be fixed, but for now add an #error to remind us. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
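A minimal kernel-style sketch of the pattern, with an illustrative structure rather than the real lppaca layout: firmware-shared fields are declared with the __be types and converted with be*_to_cpu()/cpu_to_be*() at every access, which is a no-op on big-endian builds and a byte swap on little-endian ones.

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative hypervisor-shared structure, not the real lppaca. */
struct hv_shared_example {
	__be32 int_word;	/* stored big endian by the hypervisor */
	__be64 big_counter;	/* stored big endian by the hypervisor */
};

static inline u64 hv_read_counter(const struct hv_shared_example *s)
{
	/* No-op on BE kernels, a byte swap on LE kernels. */
	return be64_to_cpu(s->big_counter);
}

static inline void hv_write_word(struct hv_shared_example *s, u32 val)
{
	s->int_word = cpu_to_be32(val);
}
```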
-
Committed by Anton Blanchard
Add endian annotations to various hypervisor structures which are defined as big endian. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
We pass dma_window to of_parse_dma_window() as a void * and then jump through hoops to cast it back to a u32 array. In the process we lose the endian annotation. Simplify it by just passing a __be32 * down. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
RTAS expects arguments in the call buffer to be big endian, so we need to byteswap them on little endian builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
When we have MMU-on exceptions (POWER8) and a relocatable kernel, we need to branch from the initial exception vectors at 0x0 up to where the kernel might be located. Currently we do this using the link register. Unfortunately this corrupts the link stack; we should use the count register instead. We did this for the syscall entry path in commit 6a404806 ("powerpc: Avoid link stack corruption in MMU on syscall entry path"), but I stupidly forgot to do the same for other exceptions. This patch changes the initial exception vectors to use the count register instead of the link register when we need to branch up to the relocated kernel. I have a dodgy userspace test which loops calling a function that reads the PVR (mfpvr in userspace will be emulated by the kernel via the program check exception). On POWER8 with CONFIG_RELOCATABLE=y, I get a ~10% performance improvement in my userspace test with this patch. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Vasant Hegde
So far "/sys/devices/system/cpu/cpuX/topology/physical_package_id" was always the default (-1) on the ppc64 architecture. Now, some systems have an ibm,chip-id property in the cpu nodes of the device tree. On these systems, we use this information to display physical_package_id. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Shivaprasad G Bhat <sbhat@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kevin Hao
Instead of implementing an empty giveup_fpu() function for each 32-bit processor type, replace them with a single empty inline function. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kevin Hao
In the current kernel, the function flush_fp_to_thread() is not dependent on CONFIG_PPC_FPU, so most invocations of it are not wrapped in CONFIG_PPC_FPU. Even though we don't really save the FPRs to the thread struct if CONFIG_PPC_FPU is not enabled, there is still some runtime overhead, such as the check for tsk->thread.regs and the preempt disable and enable. It really makes no sense to do that, so make it a nop when CONFIG_PPC_FPU is disabled. Also remove the #ifdef CONFIG_PPC_FPU wrappers around invocations of this function. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
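A hedged sketch of the resulting header pattern (the prototype is simplified): when CONFIG_PPC_FPU is off, the call compiles away entirely, so callers no longer need their own #ifdef guards.

```c
#include <linux/sched.h>

#ifdef CONFIG_PPC_FPU
extern void flush_fp_to_thread(struct task_struct *tsk);
#else
/* Compiles to nothing: no regs check, no preempt_disable()/enable(). */
static inline void flush_fp_to_thread(struct task_struct *tsk) { }
#endif
```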
-
Committed by Kevin Hao
The only use of the function disable_kernel_fp() was already dropped in commit 5daf9071 ("powerpc: merge align.c"). Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Paul Bolle
The Kconfig symbol 8XX_MINIMAL_FPEMU was removed in commit 968219fa ("powerpc/8xx: Remove 8xx specific "minimal FPU emulation""). But that commit didn't remove all the code depending on that symbol. Do so now. Signed-off-by: Paul Bolle <pebolle@tiscali.nl> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
The udbg_16550 code, which we use for our early consoles and debug backends, was fairly messy. Especially for the debug consoles, it would re-implement the "high level" getc/putc/poll functions for each access method. It also had code to configure the UART, but only for the straight MMIO method. This changes it to instead abstract at the register accessor level, and have the various functions and configuration routines use these. The result is simpler and slightly smaller code, plus free support for non-MMIO-mapped PIO UARTs, such as the ones that can be present on a POWER8 LPC bus. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
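A hedged sketch of the "register accessor" abstraction described above; names and signatures are illustrative, not the actual udbg_16550 code. The shared getc/putc/configuration logic calls through a pair of function pointers, and each access method only has to supply those two routines.

```c
#include <linux/types.h>
#include <linux/serial_reg.h>

/* Illustrative accessor hooks: each access method (MMIO, PIO, ...) supplies
 * these two routines, and the shared UART code never touches the bus directly. */
static u8   (*uart_reg_in)(unsigned int reg);
static void (*uart_reg_out)(unsigned int reg, u8 val);

static void uart_putc(char c)
{
	/* Wait for the transmit holding register to empty, then send. */
	while (!(uart_reg_in(UART_LSR) & UART_LSR_THRE))
		;
	uart_reg_out(UART_TX, c);
}
```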
-
Committed by Benjamin Herrenschmidt
This uses the hooks provided by CONFIG_PPC_INDIRECT_PIO to implement a set of hooks that route IO port accesses to the LPC bus via OPAL calls for the first 64K of IO space. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
This includes walking the parent nodes if necessary. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
Remove the generic PPC_INDIRECT_IO and ensure we only add overhead to the right accessors, i.e. if only CONFIG_PPC_INDIRECT_PIO is set, we don't add overhead to all MMIO accessors. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tiejun Chen
SOFT_DISABLE_INTS seems an odd name for something that updates the software state to be consistent with interrupts being hard disabled, so rename it to RECONCILE_IRQ_STATE to avoid this confusion. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
We have a bunch of CONFIG_PPC_EARLY_DEBUG_* options that are intended for bringup/debug only. They hard-wire a machine-specific udbg backend very early on (before we even probe the platform), and use whatever tricks are available on each machine/cpu to get some kind of output out there early on. So far, on powermacs with no serial ports, we have CONFIG_PPC_EARLY_DEBUG_BOOTX to use the low-level btext engine on the screen, but it doesn't do much, at least on 64-bit: it only really gets enabled after the platform has been probed and the MMU enabled. This adds a way to enable it much earlier. From prom_init.c (while still running with Open Firmware), we grab the screen details and set things up using the physical address of the frame buffer. Then btext itself uses the "rm_ci" feature of the 970 processor (Real Mode Cache Inhibited) to access it while in real mode. We need to do a little bit of reorg of the btext code to inline things better, in order to limit how much we touch memory while in this mode, as the consequences might be ... interesting. This successfully allowed me to debug problems early on with the G5 (related to gold being broken vs. ppc64 kernels). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Andy Fleming
Cell and PSeries both implemented their own versions of a cpu_bootable smp_op which do the same thing (well, the PSeries one has support for more than 2 threads). Copy the PSeries one to generic code and rename it smp_generic_cpu_bootable. Signed-off-by: Andy Fleming <afleming@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kevin Hao
The function flush_icache_range() is now just a wrapper which only invokes __flush_icache_range() directly, so there is no reason to keep it anymore. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Kevin Hao
In the function flush_icache_range(), we use cpu_has_feature() to test the CPU_FTR_COHERENT_ICACHE feature bit. This seems suboptimal for two reasons: a) for ppc32, the function __flush_icache_range() already does this check with the END_FTR_SECTION_IFSET macro; b) compared with cpu_has_feature(), the END_FTR_SECTION_IFSET approach does not introduce any runtime overhead. [And while at it, add the missing required isync] -- BenH Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Although the shared_proc field in the lppaca works today, it is not architected. A shared processor partition will always have a non-zero yield_count, so use that instead. Create a wrapper so users don't have to know about the details. In order for older kernels to continue to work on KVM, we need to set the shared_proc bit. While here, remove the ugly bitfield. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
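A hedged sketch of what such a wrapper could look like, based only on the description above; the structure shown, the field type, and the kernel's real helper may all differ. The idea is simply to treat a non-zero yield_count as "running on a shared processor".

```c
#include <linux/types.h>
#include <asm/byteorder.h>

/* Illustrative subset of the lppaca; the real structure has many more fields. */
struct lppaca_example {
	__be32 yield_count;	/* bumped by the hypervisor on preemption */
};

/* Shared-processor partitions always see a non-zero yield_count. */
static inline bool lppaca_is_shared_proc(const struct lppaca_example *l)
{
	return be32_to_cpu(l->yield_count) != 0;
}
```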
-
Committed by Anton Blanchard
Fix a sparse warning about force_32bit_msi being a one-bit bitfield. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
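For context, a generic example of the class of warning involved; the structure and field names here are made up, not the actual powerpc code. Sparse flags one-bit signed bitfields because their only representable values are 0 and -1.

```c
/* sparse reports "dubious one-bit signed bitfield" for the first member,
 * since a 1-bit signed field can only hold 0 or -1. */
struct msi_flags_example {
	int		signed_flag:1;		/* triggers the warning */
	unsigned int	unsigned_flag:1;	/* fine: holds 0 or 1   */
};
```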
-
Committed by Anton Blanchard
We always use VMX loads and stores to manage the high 32 VSRs. Remove these unused macros. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Not having parentheses around a macro is asking for trouble. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
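A generic illustration of why; these macros are made up for the example and are not the ones the patch touches. Without parentheses, operator precedence at the expansion site silently changes the result.

```c
#include <stdio.h>

#define DOUBLE_BAD(x)	x + x		/* unparenthesised: precedence leaks in */
#define DOUBLE_GOOD(x)	((x) + (x))

int main(void)
{
	printf("%d\n", 3 * DOUBLE_BAD(2));	/* expands to 3 * 2 + 2  -> 8  */
	printf("%d\n", 3 * DOUBLE_GOOD(2));	/* expands to 3 * ((2)+(2)) -> 12 */
	return 0;
}
```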
-
- 08 Aug 2013, 6 commits
-
Committed by Laurentiu TUDOR
At console init, when the kernel tries to flush the log buffer, the ePAPR byte-channel based console write fails silently, losing the buffered messages. This happens because the ePAPR para-virtualization init isn't done early enough for the hcall instruction to be set up, causing the byte-channel write hcall to be a nop. To fix this, change the ePAPR para-virt init to use early device tree functions and move it into early init. Signed-off-by: Laurentiu Tudor <Laurentiu.Tudor@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Hongtao Jia
mpic_get_primary_version() is not defined when not using MPIC. The compile error log looks like: arch/powerpc/sysdev/built-in.o: In function `fsl_of_msi_probe': fsl_msi.c:(.text+0x150c): undefined reference to `fsl_mpic_primary_get_version' Signed-off-by: Jia Hongtao <hongtao.jia@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Priyanka Jain
The e6500 core performance monitor has the following features: - 6 performance monitor counters - 512 events supported - no threshold events. The e6500 PMU has more specific events (Data L1 cache misses, Instruction L1 cache misses, etc.) than the e500 PMU (which only had Data L1 cache reloads, etc.). Where available, the more specific events have been used, which will produce slightly different results than the e500 PMU equivalents. Signed-off-by: Priyanka Jain <Priyanka.Jain@freescale.com> Signed-off-by: Lijun Pan <Lijun.Pan@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Lijun Pan
There are 6 counters in the e6500 core instead of 4 in the e500 core. Signed-off-by: Lijun Pan <Lijun.Pan@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by Catalin Udma
This change is required after the e6500 perf support has been added. There are 6 counters in the e6500 core instead of 4 in the e500 core, so the MAX_HWEVENTS counter should be changed accordingly from 4 to 6. Also added a runtime check for counter overflow. Signed-off-by: Catalin Udma <catalin.udma@freescale.com> Signed-off-by: Lijun Pan <Lijun.Pan@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
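A hedged sketch of the kind of runtime check the description mentions; the structure, function, and message are illustrative, and the exact check in the patch may differ. The point is to refuse a PMU that reports more counters than the arrays sized by MAX_HWEVENTS can hold.

```c
#include <linux/kernel.h>
#include <linux/errno.h>

#define MAX_HWEVENTS	6	/* raised from 4 for the 6-counter e6500 */

/* Illustrative PMU descriptor; the real one lives in the fsl_emb perf code. */
struct pmu_desc_example {
	const char *name;
	int n_counter;
};

static int check_pmu_counters(const struct pmu_desc_example *pmu)
{
	if (pmu->n_counter > MAX_HWEVENTS) {
		pr_err("%s: %d counters exceed MAX_HWEVENTS (%d)\n",
		       pmu->name, pmu->n_counter, MAX_HWEVENTS);
		return -EINVAL;
	}
	return 0;
}
```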
-
Committed by Lijun Pan
Signed-off-by: Lijun Pan <Lijun.Pan@freescale.com> Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 01 Aug 2013, 2 commits
-
Committed by Robert Jennings
When an associativity level change is found for one thread, the sibling threads need to be updated as well. This is done today for PRRN in stage_topology_update() but is missing for VPHN in update_cpu_associativity_changes_mask(). This patch correctly updates all thread siblings during a topology change. Without this patch, a topology update can result in a CPU in init_sched_groups_power() getting stuck indefinitely in a loop. This loop is built in build_sched_groups(). As a result of the thread moving to a node separate from its siblings, the struct sched_group will have its next pointer set to point to itself rather than to the sched_group struct of the next thread. This happens because we have a domain without the SD_OVERLAP flag, which is correct, and a topology that doesn't conform with reality (threads on the same core assigned to different NUMA nodes). When this list is traversed by init_sched_groups_power() it will reach the thread's sched_group structure and loop indefinitely; the cpu will be stuck at this point. The bug was exposed when VPHN was enabled in commit b7abef04 (v3.9). Cc: <stable@vger.kernel.org> [v3.9+] Reported-by: Jan Stancek <jstancek@redhat.com> Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
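A hedged sketch of the shape of the fix; the helper and mask names are illustrative, not the patch's actual code. When one CPU's associativity is flagged as changed, every thread sibling of that CPU is flagged as well.

```c
#include <linux/cpumask.h>
#include <asm/smp.h>	/* cpu_sibling_mask() on powerpc */

/* Illustrative: propagate a pending topology update to all sibling threads. */
static void mark_siblings_for_update(int cpu, struct cpumask *changes)
{
	int sibling;

	for_each_cpu(sibling, cpu_sibling_mask(cpu))
		cpumask_set_cpu(sibling, changes);
}
```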
-
Committed by Michael Ellerman
We use bit 63 of the event code for userspace to request that the event be counted using EBB (Event Based Branches). Export this value, making it part of the API - though only on processors that support EBB. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
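A hedged sketch of how userspace might use such an exported bit when setting up a raw perf event; the constant name below is illustrative, so consult the exported powerpc header for the real one.

```c
#include <stdint.h>
#include <string.h>
#include <linux/perf_event.h>

/* Illustrative name for the exported bit position (bit 63 of the event code). */
#define EXAMPLE_EVENT_CONFIG_EBB_SHIFT	63

static void request_ebb_event(struct perf_event_attr *attr, uint64_t event_code)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = PERF_TYPE_RAW;
	/* Set the EBB request bit on top of the raw event code. */
	attr->config = event_code | (1ULL << EXAMPLE_EVENT_CONFIG_EBB_SHIFT);
}
```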
-
- 31 Jul 2013, 1 commit
-
Committed by Hongtao Jia
Opcode and xopcode are useful definitions not just for KVM. Move these definitions to asm/ppc-opcode.h for public use. Also add the opcodes for LHAUX and LWZUX. Signed-off-by: Jia Hongtao <hongtao.jia@freescale.com> Signed-off-by: Li Yang <leoli@freescale.com> [scottwood@freescale.com: update commit message and rebase] Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 24 Jul 2013, 6 commits
-
Committed by Gavin Shan
The patch introduces the flag EEH_DEV_SYSFS to keep track of whether the sysfs entries for the corresponding EEH device (and thus PCI device) have been added or removed, in order to avoid a race condition. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
While restoring BARs for one specific PCI device, the pci_dev instance might already have been released, so it's not reliable to use the pci_dev instance when restoring BARs. However, we still need some information (e.g. PCIe capability position, header type) from the pci_dev instance, so we have to store that information in the EEH device in advance. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
When an EEH error happens to one specific PE, some devices with drivers supporting EEH won't expect hotplug on the device. However, there might be other devices without a driver, or with a driver without EEH support. For that case, we need to do a partial hotplug in order to make sure that the PE becomes absolutely quiet during reset. Otherwise, the PE reset might fail and lead to failure of error recovery. The current code doesn't handle that 'mixed' case properly: it either uses the error callbacks to the drivers, or tries hotplug, but doesn't handle a PE (EEH domain) composed of a combination of the two. The patch intends to support so-called "partial" hotplug for EEH: before we do the reset, we stop and remove those PCI devices without an EEH-sensitive driver. The corresponding EEH devices are not detached from their PE, but are marked with a special flag. After the reset is done, those EEH devices with the special flag are scanned one by one. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
Currently, we're traversing the EEH devices list using list_for_each_entry(). That's not safe enough, because EEH devices might be removed from their parent PE while iterating. The patch replaces that with list_for_each_entry_safe(). Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
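A generic illustration of the difference; the struct, list head, and removal condition are illustrative, not the actual EEH code. The _safe variant caches the next node up front, so the current entry can be unlinked and freed during the walk.

```c
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

/* Illustrative list element standing in for an EEH device. */
struct edev_example {
	struct list_head entry;
	bool defunct;
};

static void prune_defunct(struct list_head *head)
{
	struct edev_example *edev, *tmp;

	/* 'tmp' always holds the next node, so deleting 'edev' is safe. */
	list_for_each_entry_safe(edev, tmp, head, entry) {
		if (edev->defunct) {
			list_del(&edev->entry);
			kfree(edev);
		}
	}
}
```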
-
Committed by Gavin Shan
When we do a normal hotplug, the PE (shadow EEH structure) shouldn't be kept around. However, we need to keep it if the hotplug is an artificial one caused by EEH error recovery. Since we remove the EEH device through the PCI hook pcibios_release_device(), the flag "purge_pe" passed to various functions is meaningless. So the patch removes the meaningless flag and introduces a new flag, "EEH_PE_KEEP", to save the PE while doing hotplug during EEH error recovery. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
Make some functions public in order to support hotplug on either a specific PCI bus or a specific PCI device in the future. Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-