- 14 May 2012, 5 commits
-
-
Committed by Kent Yoder
Add support for the Platform Facilities Option (PFO) to the VIO bus. These devices have a separate root node in OpenFirmware which requires additional parsing to map into the existing VIO device structure fields. This adds the interface for PFO device drivers to make synchronous hypervisor calls.
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
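As an illustration of what such a driver-facing interface looks like, here is a hypothetical sketch; the helper name and structure fields are assumptions for illustration, not necessarily the exact API added by this patch:

```c
#include <linux/errno.h>
#include <asm/page.h>
#include <asm/vio.h>

/* Hypothetical PFO driver helper issuing one synchronous hypervisor call
 * through the VIO bus; field names are assumed for illustration. */
static int pfo_submit(struct vio_dev *vdev, void *inbuf, unsigned int inlen,
		      void *outbuf, unsigned int outlen)
{
	struct vio_pfo_op op = {
		.in     = __pa(inbuf),		/* physical address of input buffer */
		.inlen  = inlen,
		.out    = __pa(outbuf),		/* physical address of output buffer */
		.outlen = outlen,
	};

	/* blocks until the hypervisor completes or rejects the operation */
	return vio_h_cop_sync(vdev, &op);
}
```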
-
Committed by Kent Yoder
This adds an update notifier mechanism for changes to properties in the device tree. One use of this would be a device driver that needs to act on changes to its properties in the device tree after a live migration or a dynamic activation that is triggered by updates to ofdt properties.
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
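A minimal sketch of how a driver could hook into such a notifier chain; the registration call and action value (pSeries_reconfig_notifier_register, PSERIES_UPDATE_PROPERTY) are assumptions for illustration:

```c
#include <linux/notifier.h>
#include <linux/init.h>

/* React to device-tree property updates after migration/activation. */
static int my_prop_update(struct notifier_block *nb, unsigned long action,
			  void *data)
{
	if (action == PSERIES_UPDATE_PROPERTY) {
		/* re-read the changed property and reconfigure the device */
	}
	return NOTIFY_OK;
}

static struct notifier_block my_prop_nb = {
	.notifier_call = my_prop_update,
};

static int __init my_driver_init(void)
{
	/* registration function name assumed for illustration */
	return pSeries_reconfig_notifier_register(&my_prop_nb);
}
```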
-
Committed by Kent Yoder
The Platform Facilities Option (PFO) adds several new h_calls and more return codes.
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by K.Prasad
PPC_PTRACE_GETHWDBGINFO, PPC_PTRACE_SETHWDEBUG and PPC_PTRACE_DELHWDEBUG are PowerPC specific ptrace flags that use the watchpoint register. While they are targeted primarily towards BookE users, user-space applications such as GDB have started using them for BookS too. This patch enables the use of generic hardware breakpoint interfaces for these new flags. Apart from the usual benefits of using generic hw-breakpoint interfaces, these changes allow debuggers (such as GDB) to use a common set of ptrace flags for their watchpoint needs and allow more precise breakpoint specification (length of the variable can be specified).
Signed-off-by: K.Prasad <prasad@linux.vnet.ibm.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
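A hedged userspace example of how a debugger can drive these flags; constant and field names follow the documented powerpc ptrace ABI, but treat header locations and details as assumptions:

```c
#include <sys/ptrace.h>
#include <sys/types.h>
#include <asm/ptrace.h>		/* struct ppc_hw_breakpoint, PPC_PTRACE_* */

/* Set a hardware write watchpoint on 'addr' in the tracee 'pid'.
 * On success the call returns a slot number that can later be passed
 * to PPC_PTRACE_DELHWDEBUG to remove the watchpoint. */
static long set_write_watchpoint(pid_t pid, unsigned long addr)
{
	struct ppc_hw_breakpoint bp = {
		.version        = PPC_DEBUG_CURRENT_VERSION,
		.trigger_type   = PPC_BREAKPOINT_TRIGGER_WRITE,
		.addr_mode      = PPC_BREAKPOINT_MODE_EXACT,
		.condition_mode = PPC_BREAKPOINT_CONDITION_NONE,
		.addr           = addr,
	};

	return ptrace(PPC_PTRACE_SETHWDEBUG, pid, 0, &bp);
}
```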
-
Committed by Robert Jennings
This patch changes the architecture vector to advertise support for a lower minimum virtual processor entitled capacity. The default minimum without this patch is 10%; this patch specifies 1%.
Signed-off-by: Robert Jennings <rcj@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 12 May 2012, 1 commit
-
-
Committed by Benjamin Herrenschmidt
So we have another case of paca->irq_happened getting out of sync with the HW irq state. This can happen when a perfmon interrupt occurs while soft disabled, as it will return to a soft disabled but hard enabled context while leaving a stale PACA_IRQ_HARD_DIS flag set. This patch fixes it, and also adds a test for those flags being out of sync in arch_local_irq_restore() when CONFIG_TRACE_IRQFLAGS is enabled. This helps catch those gremlins faster (and so far I can't seem to see any anymore, so that's good news).
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
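A minimal sketch of the kind of consistency check described, as it might sit in arch_local_irq_restore() under CONFIG_TRACE_IRQFLAGS; the exact placement and recovery in the real patch may differ:

```c
#ifdef CONFIG_TRACE_IRQFLAGS
	/* If the PACA says hard interrupts are disabled, MSR[EE] must be
	 * clear.  Warn when the two disagree and resync by hard-disabling. */
	if ((get_paca()->irq_happened & PACA_IRQ_HARD_DIS) &&
	    WARN_ON(mfmsr() & MSR_EE))
		__hard_irq_disable();
#endif
```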
-
- 09 May 2012, 3 commits
-
-
Committed by Tiejun Chen
'TIF_RUNLATCH' was already dropped by commit fe1952fc ("powerpc: Rework runlatch code"), so '_TIF_RUNLATCH' should be removed as well.
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
Alignment was the last user of the ENABLE_INTS macro, which we can now remove. All non-syscall exceptions now disable interrupts on entry; they get re-enabled conditionally from C code. Don't unconditionally re-enable them in program check either; check the original context.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
We had a case where we could turn on hard interrupts while leaving the PACA_IRQ_HARD_DIS bit set in the PACA. This can in turn cause a BUG_ON() to hit in __check_irq_replay() due to the interrupt state getting out of sync. The assembly code was also way too convoluted. Instead, we now leave it to the C code to do the right thing, which ends up being smaller and more readable.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 08 May 2012, 1 commit
-
-
Committed by David Gibson
The H_REGISTER_VPA hcall implementation in HV Power KVM needs to pin some guest memory pages into host memory so that they can be safely accessed from usermode. It does this using get_user_pages_fast(). When the VPA is unregistered, or the VCPUs are cleaned up, these pages are released using put_page(). However, get_user_pages() is invoked on the specific memory area of the VPA, which could lie within hugepages. In case the pinned page is huge, we explicitly find the head page of the compound page before calling put_page() on it. At least with the latest kernel, this is not correct: put_page() already handles finding the correct head page of a compound, and also deals with various counts on the individual tail page which are important for transparent huge pages. We don't support transparent hugepages on Power, but even so, bypassing this count maintenance can lead (when the VM ends) to a hugepage being released back to the pool with a non-zero mapcount on one of the tail pages. This can then lead to a bad_page() when the page is released from the hugepage pool. This removes the explicit compound_head() call to correct this bug.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
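The fix boils down to dropping the explicit head-page lookup; a sketch (the surrounding helper is hypothetical):

```c
#include <linux/mm.h>

/* Hypothetical release helper for a pinned VPA page.  put_page() already
 * resolves the head page of a compound page and maintains the tail-page
 * counts itself, so calling compound_head() first is unnecessary and, as
 * described above, harmful. */
static void release_pinned_vpa_page(struct page *page)
{
	/* old, buggy with hugepages: put_page(compound_head(page)); */
	put_page(page);
}
```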
-
- 04 May 2012, 1 commit
-
-
Committed by Josh Boyer
Fix a build error when -Werror is set:
arch/powerpc/sysdev/ppc4xx_msi.c: In function ‘ppc4xx_setup_pcieh_hw’:
arch/powerpc/sysdev/ppc4xx_msi.c:178:2: error: right shift count >= width of type [-Werror]
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
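An illustration of this class of bug and one safe way to write it; upper_32_bits()/lower_32_bits() are standard kernel helpers, but the function and variable names below are illustrative, not the actual patch:

```c
#include <linux/kernel.h>
#include <linux/ioport.h>

/* On 32-bit builds resource_size_t may be only 32 bits wide, so writing
 * 'res->start >> 32' triggers "right shift count >= width of type".
 * Widening first, or using the upper_32_bits()/lower_32_bits() helpers,
 * is safe on every build. */
static void msi_set_address(struct resource *res, u32 *hi, u32 *lo)
{
	*hi = upper_32_bits((u64)res->start);
	*lo = lower_32_bits(res->start);
}
```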
-
- 03 May 2012, 5 commits
-
-
Committed by Mai La
Signed-off-by: Mai La <mla@apm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
Committed by Mai La
This patch consists of:
- Enable PCI MSI by default for the Bluestone board
- Change the definition of the number of MSI interrupts, as it depends on the SoC
- Fix returning ENODEV when the MSI node cannot be found
- Fix the MSI physical high and low address
- Keep MSI data logically
Signed-off-by: Mai La <mla@apm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
Committed by Suzuki Poulose
Now that we have KEXEC and relocatable kernels working on 47x (!SMP), enable CRASH_DUMP.
Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
Committed by Suzuki Poulose
This patch adds support for creating a 1:1 mapping for the PPC_47x during a KEXEC. The implementation is similar to that of the PPC440x, which is described here: http://patchwork.ozlabs.org/patch/104323/

PPC_47x MMU: The 47x uses a unified TLB with 1024 entries and 4-way associative mapping (4 x 256 entries). The index to be used is calculated by the MMU by hashing the PID, EPN and TS. The software can choose to specify the way by setting bit 0 (enable way select) and the way in bits 1-2 in TLB Word 0.

Implementation: The patch erases all the UTLB entries, which includes the TLB covering the mapping for our code. The shadow TLB caches the mapping for the running code, which helps us to continue execution until we do an isync/rfi. We then create a temporary mapping for the current code in the other address space (TS) and switch to it. Then we create a 1:1 mapping (EPN=RPN) for 0-2GiB in the original address space and switch to the new mapping.

TODO: Add SMP support.

Signed-off-by: Suzuki K. Poulose <suzuki@in.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
-
Committed by Suzuki Poulose
Initialize the PID register with the kernel PID (0) before we start setting up the TLB mapping for KEXEC, and also set MMUCR[TID] to the kernel PID. This was spotted while testing kexec on ISS for 47x: ISS doesn't return a successful tlbsx for a kernel address with the PID set to a user PID, though the hardware/qemu/simics work fine. This patch is harmless and initializes the PID to 0 (the kernel PID), which is usually the case during a normal kernel boot. This would fix kexec on ISS for 440. I have tested this patch on a sequoia board.
Signed-off-by: Suzuki K Poulose <suzuki@in.ibm.com>
Cc: Josh Boyer <jwboyer@gmail.com>
Signed-off-by: Josh Boyer <jwboyer@gmail.com>
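A C-level illustration of the step (the real change lives in the 44x/47x kexec assembly path); SPRN_PID is the real SPR constant, while the helper itself is hypothetical:

```c
#include <asm/reg.h>
#include <asm/synch.h>

/* Run with the kernel PID (0) before rewriting TLB entries for kexec,
 * so that tlbsx lookups resolve kernel translations on the simulator
 * described above as well as on real hardware. */
static inline void kexec_use_kernel_pid(void)
{
	mtspr(SPRN_PID, 0);
	isync();
}
```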
-
- 01 May 2012, 1 commit
-
-
Committed by Jan Seiffert
Now that the helper function from filter.c for negative offsets is exported, it can be used in the JIT to handle negative offsets. First modify the asm load helper functions to handle:
- known positive offsets
- known negative offsets
- any offset
Then the compiler can be modified to explicitly use these helpers when appropriate. This fixes the case of a negative X register and allows lifting the restriction that BPF programs with negative offsets can't be JITed.
Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Jan Seiffert <kaffeemonster@googlemail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
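For context, a classic BPF program that needs a negative offset: SKF_NET_OFF addresses packet data relative to the network header, which is exactly what previously forced such filters back to the interpreter. An illustrative userspace snippet:

```c
#include <linux/filter.h>	/* struct sock_filter, BPF_STMT, SKF_NET_OFF */
#include <netinet/in.h>		/* IPPROTO_TCP */

/* Accept TCP, drop everything else, by reading the IPv4 protocol byte at
 * offset 9 from the network header - a negative offset from the JIT's
 * point of view.  Before this change such programs fell back to the
 * interpreter; now they can be JIT-compiled. */
static struct sock_filter tcp_only[] = {
	BPF_STMT(BPF_LD  | BPF_B   | BPF_ABS, SKF_NET_OFF + 9),
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K,   IPPROTO_TCP, 0, 1),
	BPF_STMT(BPF_RET | BPF_K, 0xffff),	/* accept */
	BPF_STMT(BPF_RET | BPF_K, 0),		/* drop   */
};
```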
-
- 30 April 2012, 19 commits
-
-
Committed by Anton Blanchard
PowerPC has non-standard getregs calls that only dump the GPRs or FPRs and have their arguments reversed. Commit e17666ba ("ptrace updates & new, better requests") in 2.6.3 deprecated them and introduced more standard versions. It's been about 5 years and I know of no users of the old calls, so let's remove them.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
When we get an EEH error we just print a backtrace with dump_stack, which is rather cryptic. We really should print something before spewing out the backtrace. Also switch from dump_stack to WARN so we get more information about the failure - what modules were loaded, what process was running, etc. This was useful information when debugging a recent EEH subsystem bug. The standard WARN output should also get picked up by monitoring tools like kerneloops. The register dump is of questionable value here, but I figured it was better to use something standard and not roll my own.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
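In essence the change has this shape (the helper and message text are illustrative, not the ones the patch adds):

```c
#include <linux/kernel.h>

/* Hypothetical reporting helper showing the shape of the change. */
static void eeh_report(void)
{
	/* old: a bare, cryptic backtrace
	 *     dump_stack();
	 * new: an explanatory message plus the richer WARN report (loaded
	 * modules, current task, registers), which monitoring tools such
	 * as kerneloops also pick up */
	WARN(1, "EEH: PCI device failure detected\n");
}
```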
-
Committed by Anton Blanchard
Add a menu to select various 64-bit CPU targets for gcc. We default to -mtune=power7 and, if gcc doesn't understand that, we fall back to -mtune=power4.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Now that we require gcc 4.0 on 64-bit, we can remove the pre-gcc-4.0 -maltivec workaround.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Older versions of gcc had issues with using -maltivec together with the -mcpu of a non-Altivec-capable CPU. We work around it by specifying -mcpu=970, but the logic is complicated. In preparation for adding more -mcpu targets, remove the workaround and just require gcc 4.0 for 64-bit builds.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Remove CONFIG_POWER4_ONLY. The option is badly named and only does two things:
- It wraps the MMU segment table code. With feature fixups there is little downside to compiling this in.
- It uses the newer mtocrf instruction in various assembly functions. Instead of making this a compile option, just do it at runtime via a feature fixup.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt
This causes i2c-powermac to register i2c devices exposed in the device-tree, enabling new-style probing of devices. Note that we prefix the IDs with "MAC," in order to prevent the generic drivers from matching. This is done on purpose, as we only want drivers specifically tested/designed to operate on powermacs to match. This removes the special case we had for the AMS driver and updates the driver's match table instead.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Add two optimisations to enable_kernel_altivec:
- enable_kernel_altivec has already determined whether we need to save the previous task's state, but we call giveup_altivec in both cases, requiring an extra branch in giveup_altivec. Create giveup_altivec_notask, which only turns on the VMX bit in the MSR.
- We write the VMX MSR bit each time we call enable_kernel_altivec, even if it was already set. Check the bit and branch out if we have already set it. The classic case for this is vectored IO, where we have to copy multiple buffers to or from userspace.
The following testcase was used to confirm this patch improves performance: http://ozlabs.org/~anton/junkcode/copy_to_user.c
Since the current breakpoint for using VMX in copy_tofrom_user is 4096 bytes, I'm using buffers of 4096 + 1 cacheline (4224) bytes. A benchmark of 16-entry readvs (-s 16): 'time copy_to_user -l 4224 -s 16 -i 1000000' completes 5.2% faster on a POWER7 PS700.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
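A simplified sketch of the first optimisation in C; the real function has additional cases, so treat this as an outline rather than the actual implementation:

```c
#include <linux/sched.h>
#include <asm/reg.h>
#include <asm/switch_to.h>

/* Only take the full giveup path (which saves the current context's VMX
 * state) when that state is actually live; otherwise just flip MSR_VEC
 * on via the new giveup_altivec_notask() helper. */
void enable_kernel_altivec(void)
{
	WARN_ON(preemptible());

	if (current->thread.regs && (current->thread.regs->msr & MSR_VEC))
		giveup_altivec(current);	/* live user state: save it first */
	else
		giveup_altivec_notask();	/* nothing to save: just set MSR_VEC */
}
```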
-
Committed by Anton Blanchard
Use an empty inline instead of an empty function to implement giveup_altivec on book3e CPUs, similar to flush_altivec_to_thread.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
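The pattern is simply the following; the #ifdef condition shown is illustrative:

```c
#include <linux/sched.h>

#ifdef CONFIG_ALTIVEC
extern void giveup_altivec(struct task_struct *tsk);
#else
/* Builds without an Altivec save path: an empty inline lets the compiler
 * drop the call sites entirely instead of emitting calls to an empty
 * out-of-line function. */
static inline void giveup_altivec(struct task_struct *tsk) { }
#endif
```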
-
Committed by Anton Blanchard
Reformat lppaca.h to match Linux coding standards.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Remove all the iseries specific fields in the lppaca.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
We have a union containing fields from the old iseries hypervisor that has been reused for the cede latency hint. Since we no longer support iseries, remove the union completely.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
At the moment system call entry looks like:

	crclr	so
	...
	mfcr	r9
	...
	std	r9,_CCR(r1)

Commit bd19c899 ("[POWERPC] system call micro optimisation") put some space between the crclr and mfcr in order to avoid a stall. There is still a stall seen between the mfcr and std. We can avoid the crclr by doing it in a GPR with rlwinm, which gives us more room to better schedule the sequence.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The count register is volatile so we don't need to preserve it. Store zero to the entry in the exception frame.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The XER is a volatile register so there is no need to save and restore it over a system call - zero it out in the exception stack frame instead. This should fix a 5-cycle stall of the mfxer/std seen on POWER7.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
syscall_dotrace_cont and syscall_error_cont tend to complicate perf output so make them local.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gavin Shan
Recently, Ryan Wang tried to compile the PPC pSeries platform without CONFIG_EEH and eventually ran into errors. Nishanth Aravamudan helped to narrow down the root cause: the pSeries platform depends heavily on CONFIG_EEH and won't work properly without EEH support. Following Ben's suggestion, the patch makes CONFIG_EEH invisible and keeps it always selected on the pSeries platform.
Signed-off-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Grant Likely
The switch from using irq_map to irq_alloc_desc*() for managing irq number allocations introduced new bugs in some of the powerpc interrupt code. Several functions rely on the value of NR_IRQS to determine the maximum irq number that could get allocated. However, with sparse_irq and irq_alloc_desc*(), the maximum possible irq number is now specified with 'nr_irqs', which may be larger than NR_IRQS. This has caused breakage on powermac when CONFIG_NR_IRQS is set to 32. This patch removes most of the direct references to NR_IRQS in the powerpc code and replaces them with either an nr_irqs reference or the common for_each_irq_desc() macro. The powerpc-specific for_each_irq() macro is removed at the same time. Also, the Cell axon_msi driver is refactored to remove the global build assumption on the size of NR_IRQS and instead add a limit to the maximum irq number when calling irq_domain_add_nomap().
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
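The typical replacement pattern looks like this; the function and loop body are placeholders:

```c
#include <linux/irqnr.h>
#include <linux/irqdesc.h>

/* Old pattern:  for (irq = 0; irq < NR_IRQS; irq++) { ... }
 * which silently misses descriptors once nr_irqs > NR_IRQS.
 * New pattern: walk the sparse irq descriptors directly. */
static void walk_all_irqs(void)
{
	struct irq_desc *desc;
	unsigned int irq;

	for_each_irq_desc(irq, desc) {
		if (desc->action) {
			/* per-interrupt work goes here */
		}
	}
}
```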
-
Committed by Grant Likely
The mpc8xx driver uses a reference to NR_IRQS that is buggy. It uses NR_IRQS for the array size of the ppc_cached_irq_mask bitmap, but NR_IRQS could be smaller than the number of hardware irqs that ppc_cached_irq_mask tracks. Also, while fixing that problem, it became apparent that the interrupt controller only supports 32 interrupt numbers, but it is written as if it supports multiple register banks, which is more complicated. This patch pulls out the buggy reference to NR_IRQS and fixes the size of ppc_cached_irq_mask to match the number of HW irqs. It also drops the now-unnecessary code, since ppc_cached_irq_mask is no longer an array.
Signed-off-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
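In other words, the mask shrinks from an NR_IRQS-sized array to a single 32-bit word matching the hardware; a sketch (the original array-size expression may differ):

```c
#include <linux/types.h>

/* Old, buggy sizing - undersized whenever NR_IRQS < 32:
 *
 *     static unsigned long ppc_cached_irq_mask[(NR_IRQS + 31) / 32];
 *
 * New: the 8xx interrupt controller has exactly 32 sources, so a single
 * 32-bit word is all that is needed. */
static u32 ppc_cached_irq_mask;
```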
-
- 25 April 2012, 4 commits
-
-
Committed by Geoff Levand
Signed-off-by: Geoff Levand <geoff@infradead.org>
-
Committed by Geoff Levand
Signed-off-by: Geoff Levand <geoff@infradead.org>
-
Committed by Andre Heider
The dependency on hotplug memory was removed, so remove the dependency in the Kconfig.
Signed-off-by: Andre Heider <a.heider@gmail.com>
Signed-off-by: Geoff Levand <geoff@infradead.org>
-
Committed by Hector Martin
Real mode memory can be limited and runs out quickly as memory is allocated during kernel startup. Having the highmem available sooner fixes this. This change simplifies the memory management code by converting from hotplug memory to logical memory blocks.
Signed-off-by: Hector Martin <hector@marcansoft.com>
Signed-off-by: Andre Heider <a.heider@gmail.com>
Signed-off-by: Geoff Levand <geoff@infradead.org>
-