- 30 December 2013, 5 commits
-
-
Submitted by Alistair Popple
This patch updates the generic iommu backend code to use the it_page_shift field to determine the iommu page size instead of using hardcoded values. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
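A minimal sketch of what such a conversion looks like (the macro names below are assumptions drawn from this series, not quoted from the patch):

    /* Derive IOMMU page geometry from the table instead of hardcoding 4K. */
    #define IOMMU_PAGE_SIZE(tbl)    (1UL << (tbl)->it_page_shift)
    #define IOMMU_PAGE_MASK(tbl)    (~((1UL << (tbl)->it_page_shift) - 1))
    #define IOMMU_PAGE_ALIGN(addr, tbl)  ALIGN((addr), IOMMU_PAGE_SIZE(tbl))

Callers that previously used a fixed page-size constant would pass the iommu_table pointer, so tables with different page sizes can coexist.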
-
Submitted by Alistair Popple
This patch adds an it_page_shift field to struct iommu_table and initialises it to 4K for all platforms. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Alistair Popple
The powerpc iommu uses a hardcoded page size of 4K. This patch changes the name of the IOMMU_PAGE_* macros to reflect the hardcoded values. A future patch will use the existing names to support dynamic page sizes. Signed-off-by: Alistair Popple <alistair@popple.id.au> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Rajesh B Prathipati
The generic put_unaligned/get_unaligned macros were made endian-safe by calling the appropriate endian-dependent macros based on the endian type of the powerpc processor. Signed-off-by: Rajesh B Prathipati <rprathip@linux.vnet.ibm.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
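A sketch of the kind of dispatch this implies (header layout and macro names are assumptions, not a verbatim copy of the patch):

    /* Pick the accessor variant matching the CPU's endianness. */
    #ifdef __LITTLE_ENDIAN__
    #define get_unaligned   __get_unaligned_le
    #define put_unaligned   __put_unaligned_le
    #else
    #define get_unaligned   __get_unaligned_be
    #define put_unaligned   __put_unaligned_be
    #endif

    #include <linux/unaligned/access_ok.h>
    #include <linux/unaligned/generic.h>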
-
Submitted by Michael Neuling
In EXCEPTION_PROLOG_COMMON() we check whether the stack pointer (r1) is valid when coming from the kernel. If it's not valid, we die, but with a nice oops message. Currently we allocate a stack frame (subtract INT_FRAME_SIZE) before we check whether the stack pointer is negative. Unfortunately, this won't detect a bad stack where r1 is less than INT_FRAME_SIZE. This patch fixes the check to compare the modified r1 with -INT_FRAME_SIZE. With this, bad kernel stack pointers (including NULL pointers) are correctly detected again. Kudos to Paulus for finding this. Signed-off-by: Michael Neuling <mikey@neuling.org> Cc: stable@vger.kernel.org Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 13 December 2013, 2 commits
-
-
Submitted by Benjamin Herrenschmidt
We are passing pointers to the firmware for reads, so we need to properly convert the result, as OPAL is always BE. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
opal_xscom_read uses a pointer to return the data, so we need to byteswap it on LE builds. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
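A hedged sketch of the resulting pattern (the wrapper below and its error handling are illustrative, not the actual driver code):

    static int xscom_read_cooked(uint32_t gcid, uint64_t pcb_addr, uint64_t *val)
    {
            __be64 v;
            int64_t rc = opal_xscom_read(gcid, pcb_addr, &v);

            *val = be64_to_cpu(v);          /* OPAL fills the buffer big-endian */
            return rc == OPAL_SUCCESS ? 0 : -EIO;
    }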
-
- 10 December 2013, 1 commit
-
-
Submitted by Hong H. Pham
In pte_alloc_one(), pgtable_page_ctor() is passed an address that has not been converted by page_address() to the newly allocated PTE page. When the PTE is freed, __pte_free_tlb() calls pgtable_page_dtor() with an address to the PTE page that has been converted by page_address(). The mismatch in the PTE's page address causes pgtable_page_dtor() to access invalid memory, so resources for that PTE (such as the page lock) are not properly cleaned up. On PPC32, only SMP kernels are affected. On PPC64, only SMP kernels with 4K page size are affected. This bug was introduced by commit d614bb04 "powerpc: Move the pte free routines from common header". On a preempt-rt kernel, a spinlock is dynamically allocated for each PTE in pgtable_page_ctor(). When the PTE is freed, calling pgtable_page_dtor() with a mismatched page address causes a memory leak, as the pointer to the PTE's spinlock is bogus. On mainline there are no immediately obvious symptoms, but the problem still exists. Fixes: d614bb04 "powerpc: Move the pte free routines from common header" Cc: Paul Mackerras <paulus@samba.org> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: linux-stable <stable@vger.kernel.org> # v3.10+ Signed-off-by: Hong H. Pham <hong.pham@windriver.com> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
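The invariant the fix restores can be pictured like this (a simplified sketch, not the actual powerpc code; error handling is elided):

    pte_t *pte_alloc_one_sketch(struct mm_struct *mm, unsigned long addr)
    {
            struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

            if (!page)
                    return NULL;
            pgtable_page_ctor(page);        /* ctor sees this struct page ...   */
            return page_address(page);      /* kernel VA handed out to callers  */
    }

    void pte_free_sketch(struct mm_struct *mm, pte_t *pte)
    {
            struct page *page = virt_to_page(pte);  /* back to the SAME page    */

            pgtable_page_dtor(page);        /* ... and dtor must see it too     */
            __free_page(page);
    }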
-
- 09 December 2013, 4 commits
-
-
Submitted by Mahesh Salgaonkar
Get the memory errors reported by OPAL and plumb them into the memory poison infrastructure. This patch uses the new messaging channel infrastructure to pull the FSP memory errors into Linux. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Aneesh Kumar K.V
We steal the _PAGE_COHERENCE bit and use it for indicating NUMA ptes. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Aneesh Kumar K.V
Always set memory coherence on hash64 configs. If a platform cannot have memory coherence always set, it can infer that from _PAGE_NO_CACHE and _PAGE_WRITETHRU, as in lpar. So we don't really need a separate bit for tracking _PAGE_COHERENCE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Jeremy Kerr
The only external user of slb_shadow is the pseries lpar code, and it can access it through the paca array instead. Signed-off-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 05 December 2013, 13 commits
-
-
Submitted by Michael Ellerman
These accessors allow us to do cache-inhibited accesses when in real mode. They should only be used in real mode. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
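One such accessor, roughly (the form is assumed; the key point is the cache-inhibited indexed store, which operates on a real address and is only safe with translation off):

    static inline void __raw_rm_writeq(u64 val, volatile void __iomem *paddr)
    {
            /* stdcix: cache-inhibited store to the real address in paddr */
            __asm__ __volatile__("stdcix %0,0,%1"
                    : : "r" (val), "r" (paddr) : "memory");
    }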
-
Submitted by Alexey Kardashevskiy
The current implementation of IOMMU on sPAPR does not use iommu_ops and therefore does not call the IOMMU API's bus_set_iommu(), which 1) sets iommu_ops for a bus and 2) registers a bus notifier. Instead, PCI devices are added to IOMMU groups from subsys_initcall_sync(tce_iommu_init), which does basically the same thing without using iommu_ops callbacks. However, the Freescale PAMU driver (https://lkml.org/lkml/2013/7/1/158) implements iommu_ops, and when tce_iommu_init is called, every PCI device has already been added to some group, so there is a conflict. This patch does two things: 1. removes the loop in which PCI devices were added to groups and adds explicit iommu_add_device() calls to add devices as soon as they get the iommu_table pointer assigned to them; 2. moves the bus notifier to powernv code in order to avoid a conflict with the notifier from the Freescale driver. iommu_add_device() and iommu_del_device() are public now. Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
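The per-device flow this enables looks roughly like the following (a sketch of the call order, with pdev standing in for whatever PCI device is being configured, not the exact powernv code):

    /* As soon as the device gets its iommu_table ... */
    set_iommu_table_base(&pdev->dev, tbl);
    /* ... add it to an IOMMU group explicitly, instead of in a late initcall. */
    iommu_add_device(&pdev->dev);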
-
Submitted by Vasant Hegde
Move the SG list and entry structures to a header file so that they can be used in other places as well. Signed-off-by: Vasant Hegde <hegdevasant@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
OPAL now has a new messaging infrastructure to push messages to Linux in a generic format for different types of messages, using only one event bit. The format of the OPAL message is as below:

    struct opal_msg {
            uint32_t msg_type;
            uint32_t reserved;
            uint64_t params[8];
    };

This patch allows clients to subscribe for notification of a specific message type. It is up to the subscribers who registered interest in a specific message type to decipher the messages. The interface to subscribe for notification is:

    int opal_message_notifier_register(enum OpalMessageType msg_type, struct notifier_block *nb)

The notifier will fetch the OPAL message when available and notify the subscriber with the message type and the OPAL message. It is the subscriber's responsibility to copy the message data before returning from the notifier callback. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
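A usage sketch for a subscriber (the handler body and the chosen message-type constant are illustrative; only the registration interface itself is taken from the commit text):

    static int mem_err_notify(struct notifier_block *nb,
                              unsigned long msg_type, void *data)
    {
            struct opal_msg msg = *(struct opal_msg *)data;  /* copy before returning */

            pr_info("OPAL msg type %lu, param[0]=0x%llx\n",
                    msg_type, (unsigned long long)msg.params[0]);
            return NOTIFY_OK;
    }

    static struct notifier_block mem_err_nb = { .notifier_call = mem_err_notify };

    /* from an initcall: */
    opal_message_notifier_register(OPAL_MSG_MEM_ERR, &mem_err_nb);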
-
Submitted by Mahesh Salgaonkar
Add basic error handling in the machine check exception handler:
- If MSR_RI isn't set, we cannot recover.
- Check if the disposition is set to OpalMCE_DISPOSITION_RECOVERED.
- Check if the address at fault is inside the kernel address space; if not, send SIGBUS to the process if we hit the exception while in userspace.
- If the address at fault is not provided and we get a synchronous machine check while in userspace, kill the task.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
When the machine check real-mode handler cannot continue into the host kernel in virtual mode, it returns from the interrupt and we lose the MCE event, which never gets logged. In such a situation, queue up the MCE event so that we can log it later, when we get back into the host kernel with r1 pointing to the kernel stack, e.g. during syscall exit. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
Now that we handle machine checks in Linux, the MCE decoding should also take place in the Linux host. This info is crucial to log before we go down, in case we cannot handle the machine check errors. This patch decodes and populates a machine check event which contains high-level, meaningful MCE information. We do this in real-mode C code with the ME bit on. The MCE information is still available on the emergency stack (in pt_regs structure format). Even if we take another exception at this point, the MCE early handler will allocate a new stack frame on top of the current one, so when we return back here we still have our MCE information safe on the current stack.

We use a per-cpu buffer to save high-level MCE information. Each per-cpu buffer is an array of machine check event structures indexed by the per-cpu counter mce_nest_count. The mce_nest_count is incremented every time we enter the machine check early handler in real mode, to get the current free slot (index = mce_nest_count - 1). The mce_nest_count is decremented once the MCE info is consumed by the virtual-mode machine check exception handler.

This patch provides the generic routines save_mce_event(), get_mce_event() and release_mce_event() that can be used by machine check handlers to populate and retrieve the event. The routine release_mce_event() will free the event slot so that it can be reused. A caller can invoke get_mce_event() with a release flag either to release the event slot immediately OR keep it so that it can be fetched again. The event slot can also be released at any time by invoking release_mce_event().

This patch also updates KVM code to invoke get_mce_event() to retrieve the generic MCE event rather than paca->opal_mce_evt. The KVM code always calls get_mce_event() with the release flag set to false so that the event is available for the Linux host. If a machine check occurs while we are in a guest, KVM tries to handle the error. If KVM is able to handle the MC error successfully, it enters the guest and delivers the machine check to the guest. If KVM is not able to handle the MC error, it exits the guest and passes control to the Linux host machine check handler, which then logs the MC event and decides how to handle it in the Linux host. In the failure case, KVM needs to make sure that the MC event is available for the Linux host to consume. Hence KVM always calls get_mce_event() with the release flag set to false, and later invokes release_mce_event() only if it succeeds in handling the error. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
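A consumer-side sketch of the flow (the logging helper is hypothetical; the get/release calls follow the description above, with the release flag passed as false):

    struct machine_check_event evt;

    /* Peek at the queued event without freeing its per-cpu slot yet. */
    if (get_mce_event(&evt, false)) {
            log_mce_event(&evt);            /* hypothetical logging helper */
            release_mce_event();            /* slot can now be reused      */
    }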
-
Submitted by Mahesh Salgaonkar
This patch handles the memory errors on power8. If we get a machine check exception due to SLB or TLB errors, then flush SLBs/TLBs and reload SLBs to recover. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
If we get a machine check exception due to SLB or TLB errors, then flush SLBs/TLBs and reload SLBs to recover. We do this in real mode, before turning on the MMU; otherwise we would run into nested machine checks. If we get a machine check when we are in a guest, then just flush the SLBs and continue. This patch handles errors for power7; the next patch will handle errors for power8. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
This patch introduces a flush_tlb operation in the cpu_spec structure. This will help us invoke the appropriate CPU-specific TLB flush routine. This patch adds the foundation for invoking a CPU-specific flush routine for the respective architectures. Currently this patch introduces flush_tlb for p7 and p8. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
This patch adds an early machine check function pointer in cputable for CPU-specific early machine check handling. The early machine check handler routine will be called in real mode to handle SLB and TLB errors. We cannot reuse the existing machine_check hook because it is always invoked in kernel virtual mode, and we would already be in trouble if we got SLB or TLB errors. This patch just sets up a mechanism to invoke a CPU-specific handler; the subsequent patches will populate the function pointer. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
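The shape of the hook is roughly this (member name from the commit; the surrounding struct is trimmed and the exact signature is an assumption):

    struct cpu_spec {
            /* ... existing fields ... */
            /* Called early, in real mode, to handle SLB/TLB machine checks. */
            long (*machine_check_early)(struct pt_regs *regs);
    };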
-
Submitted by Mahesh Salgaonkar
This patch introduces an exclusive emergency stack for the machine check exception. We use the emergency stack to handle the machine check exception so that we can save MCE information (srr1, srr0, dar and dsisr) before turning on the ME bit and being ready for re-entrancy. This helps us prevent clobbering of MCE information in the case of nested machine checks. The reason for using the emergency stack over the normal kernel stack is that the machine check might occur in the middle of setting up a stack frame, which may result in improper use of the kernel stack. Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mahesh Salgaonkar
This patch splits the common exception prolog logic into three parts to facilitate reuse of existing code in the next patch. It also re-arranges a few instructions so that the second part now deals with saving register values from the paca save area to the stack frame, and the third part deals with saving current register values to the stack frame. The second and third parts will be reused in the machine check exception routine in a subsequent patch. Please note that this patch does not introduce or change existing code logic; it is just code movement and instruction re-ordering. The patch was acked by Paul, but some minor modifications (explained above) were made to address Paul's comment on the later patch (3). Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 02 December 2013, 3 commits
-
-
Submitted by fan.du
The current irq_stat.timers_irqs counting doesn't discriminate between the timer event handler and other timer interrupts (like arch_irq_work_raise). Sometimes we need to know exactly how many interrupts the timer event handler fired, so let's be more specific about this. Signed-off-by: Fan Du <fan.du@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Kevin Hao
As Benjamin Herrenschmidt has indicated, we still need a dummy icbi to purge all the prefetched instructions from the ifetch buffers for the snooping icache. We also need a sync before the icbi to order, with respect to the icbi, the actual stores to memory that might have modified instructions. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Kevin Hao
So that it can be used by other code. No functional change. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 25 November 2013, 1 commit
-
-
Submitted by Hari Bathini
When the CONFIG_SPARSEMEM_VMEMMAP option is used in the kernel, makedumpfile fails to filter the vmcore dump as it fails to do vmemmap translations. So far, dump filtering on ppc64 never had to deal with vmemmap addresses separately, as vmemmap regions were mapped in zone normal. But with the inclusion of the CONFIG_SPARSEMEM_VMEMMAP config option in the kernel, this vmemmap address translation support becomes necessary for dump filtering. For vmemmap address translation, a few kernel symbols are needed by the dump filtering tool. This patch adds those symbols to vmcoreinfo, which a dump filtering tool can use for filtering the kernel dump. Tested these changes successfully with the makedumpfile tool that supports vmemmap to physical address translation outside zone normal. [ Removed unneeded #ifdef as suggested by Michael Ellerman --BenH ] Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
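The additions are of this general form (the specific symbol names are given from memory and should be treated as illustrative):

    /* in arch_crash_save_vmcoreinfo() */
    VMCOREINFO_SYMBOL(vmemmap_list);
    VMCOREINFO_SYMBOL(mmu_vmemmap_psize);
    VMCOREINFO_SYMBOL(mmu_psize_defs);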
-
- 23 November 2013, 1 commit
-
-
Submitted by LEROY Christophe
Commit beb2dc0a breaks the MPC8xx, which seems not to support using mfspr SPRN_TBRx instead of mftb/mftbu, despite what is written in the reference manual. This patch reverts to the use of mftb/mftbu when CONFIG_8xx is selected. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Scott Wood <scottwood@freescale.com>
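In macro form, the selection looks roughly like this (a simplified sketch of the timebase accessors, not the verbatim header):

    #ifdef CONFIG_8xx
    #define mftb()   ({ unsigned long rval; \
                        asm volatile("mftb %0" : "=r" (rval)); rval; })
    #define mftbu()  ({ unsigned long rval; \
                        asm volatile("mftbu %0" : "=r" (rval)); rval; })
    #else
    #define mftb()   ({ unsigned long rval; \
                        asm volatile("mfspr %0, %1" : "=r" (rval) : "i" (SPRN_TBRL)); rval; })
    #define mftbu()  ({ unsigned long rval; \
                        asm volatile("mfspr %0, %1" : "=r" (rval) : "i" (SPRN_TBRU)); rval; })
    #endif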
-
- 21 November 2013, 4 commits
-
-
Submitted by Michael Ellerman
Up until now we have only used cpu_to_chip_id() in the topology code, which is only used on SMP builds. However, my recent commit a4da0d50 "Implement arch_get_random_long/int() for powernv" added a usage when SMP=n, breaking the build. Move cpu_to_chip_id() into prom.c so it is available for SMP=n builds. We would move the extern to prom.h, but that breaks the include in topology.h. Instead we leave it in smp.h, but move it out of the CONFIG_SMP #ifdef. We also need to include asm/smp.h in rng.c, because the linux version skips asm/smp.h on UP. What a mess. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Rusty Russell
We leave it at zero (though it could be 1) for old tasks. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Rusty Russell
Little-endian ppc64 is getting an exciting new ABI. This is reflected by the bottom two bits of e_flags in the ELF header:
    0 == legacy binaries (v1 ABI)
    1 == binaries using the old ABI (compiled with a new toolchain)
    2 == binaries using the new ABI
We store this in a thread flag, because we need to set it in core dumps and for signal delivery. Our chief concern is that it doesn't use function descriptors. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
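A sketch of how loader or dump code can read the ABI version (the helper name is made up; the mask simply follows from "bottom two bits of e_flags"):

    #include <linux/elf.h>

    static inline unsigned int ppc64_elf_abi_sketch(const struct elf64_hdr *hdr)
    {
            /* 0 = legacy (v1), 1 = old ABI built with a new toolchain, 2 = new ABI */
            return hdr->e_flags & 0x3;
    }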
-
Submitted by Anton Blanchard
On little-endian builds, call H_SET_MODE so exceptions have the correct endianness. We need to reset the endianness during kexec, so do that in the MMU hashtable clear callback. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 15 November 2013, 1 commit
-
-
Submitted by Kirill A. Shutemov
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 November 2013, 1 commit
-
-
Submitted by Thomas Gleixner
No point in having this bit defined by the architecture. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/20130917183629.090698799@linutronix.de
-
- 09 November 2013, 1 commit
-
-
Submitted by Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 07 November 2013, 3 commits
-
-
Submitted by Prabhakar Kushwaha
The current IFC driver supports NAND flash with page sizes up to 4K. Add support for 8K page size NAND flash:
- Add nand_ecclayout for 4-bit and 8-bit ECC
- Define constants
- Also fix ecc.strength for 8-bit ECC of 8K page size NAND
Signed-off-by: Prabhakar Kushwaha <prabhakar@freescale.com> Signed-off-by: Brian Norris <computersforpeace@gmail.com>
-
Submitted by Oleg Nesterov
Currently xol_get_insn_slot() assumes that we should simply copy arch_uprobe->insn[], which is (ignoring arch_uprobe_analyze_insn) just a copy of the original insn. This is not true for arm, which needs to create another insn to execute it out of line. So this patch simply adds the new member, ->ixol, into the union. This doesn't make any difference for x86 and powerpc, but arm can divorce insn/ixol and initialize the correct xol insn in arch_uprobe_analyze_insn(). Signed-off-by: Oleg Nesterov <oleg@redhat.com>
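The union described above can be pictured like this (field sizes and surrounding members are simplified assumptions, not the exact per-arch definitions):

    struct arch_uprobe {
            union {
                    u8 insn[MAX_UINSN_BYTES];   /* copy of the original instruction */
                    u8 ixol[MAX_UINSN_BYTES];   /* instruction executed out of line */
            };
            /* arch-specific analysis state follows in the real structs */
    };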
-
Submitted by David A. Long
Move the function declarations from the arch headers to the common header, since only the function bodies are architecture-specific. These changes are from Vincent Rabin's uprobes patch. [ oleg: update arch/powerpc/include/asm/uprobes.h ] Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: David A. Long <dave.long@linaro.org> Signed-off-by: Oleg Nesterov <oleg@redhat.com>
-