- 29 November 2010, 1 commit
-
-
Submitted by Stephen Rothwell
Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h, and that header is assembler safe, we can just include it instead of going via asm-offsets.h.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 14 October 2010, 1 commit
-
-
Submitted by Matthew McClintock
First we check whether we are the first core booting up. This is accomplished by comparing boot_cpuid with -1; if it matches, we assume this is the first core coming up. Secondly, we need to update the initial thread_info structure to reflect the actual CPU we are running on, otherwise smp_processor_id() and related functions will return the struct's default initialization value of 0.
Signed-off-by: Matthew McClintock <msm@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
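As a rough standalone illustration of that check (a minimal sketch; the real code does this in early assembly, and the names below merely mirror the kernel's):

    /* Simplified model of the early-boot check described above. */
    #include <stdio.h>

    static int boot_cpuid = -1;              /* -1 until the first core claims boot */

    struct thread_info { int cpu; };         /* reduced stand-in for the kernel struct */
    static struct thread_info init_thread_info;

    static void early_cpu_setup(int hw_cpu)
    {
        if (boot_cpuid == -1)
            boot_cpuid = hw_cpu;             /* first core coming up */
        init_thread_info.cpu = hw_cpu;       /* smp_processor_id()-style reads now match */
    }

    int main(void)
    {
        early_cpu_setup(1);                  /* e.g. coming up on core 1 */
        printf("boot_cpuid=%d cpu=%d\n", boot_cpuid, init_thread_info.cpu);
        return 0;
    }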
-
- 25 May 2010, 3 commits
-
-
This adds support for kexec on FSL-BookE, where the MMU cannot simply be switched off. The code borrows the initial MMU-setup code to create the identity mapping. The only difference from the original boot code is the size of the mapping(s) and the executable address. The kexec code maps the first 2 GiB of memory in 256 MiB steps. This should also work on e500v1 boxes. SMP support is still not available. (Kumar: Added a minor build change to ifdef some PPC64-specific code under CONFIG_PPC_STD_MMU_64)
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
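As a rough sketch of the mapping plan described above (only the arithmetic, not the actual TLB-setup assembly):

    /* Sketch: cover the first 2 GiB with 1:1 mappings in 256 MiB steps. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long step = 256UL << 20;      /* 256 MiB per TLB1 entry */
        const unsigned long total = 2UL << 30;       /* first 2 GiB of memory  */

        for (unsigned long addr = 0; addr < total; addr += step)
            printf("entry %lu: VA 0x%08lx -> PA 0x%08lx (256 MiB)\n",
                   addr / step, addr, addr);
        return 0;
    }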
-
This patch only moves the initial entry code, which sets up the mapping from wherever we start to KERNELBASE, into a separate file. No code change has been made here.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
During boot we change the mapping a few times until we have a "defined" mapping. During this procedure a small 4 KiB mapping is created, followed by a 64 MiB one. Currently the offset of the 4 KiB page in which we run is zero, because the complete startup code is in the first page, which starts at RPN zero. If the code is recycled and moved to another location, its execution will fail because the start address within the 64 MiB mapping is computed wrongly: it does not account for the page's offset from the beginning of memory. This patch fixes that. Usually (at system boot) r25 is zero, so this changes nothing unless the code is recycled.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 17 May 2010, 1 commit
-
-
Submitted by Li Yang
With CONFIG_PTE_64BIT the PTE format has unique permission bits for user and supervisor execute. However, on !CONFIG_PTE_64BIT we overload the supervisor bit to imply user execute when _PAGE_USER is set. This allows us to use the same permission check mask for user or supervisor code on !CONFIG_PTE_64BIT. On CONFIG_PTE_64BIT, though, we map _PAGE_EXEC to _PAGE_BAP_UX, so we need a different permission mask depending on whether the fault comes from a kernel address or from user space. Without separate permission masks we see issues like the following with modules:
Unable to handle kernel paging request for instruction fetch
Faulting instruction address: 0xf938d040
Oops: Kernel access of bad area, sig: 11 [#1]
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Jin Qing <b24347@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
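A minimal sketch of the distinction, using illustrative bit names (an assumption for readability; the real check is done in the TLB-miss handler with the kernel's _PAGE_* definitions):

    /* Illustrative only: choose an execute-permission mask by fault origin. */
    #define PTE_PRESENT   0x001UL
    #define PTE_USER      0x002UL
    #define PTE_BAP_SX    0x010UL            /* assumed name: supervisor execute */
    #define PTE_BAP_UX    0x020UL            /* assumed name: user execute */

    static unsigned long exec_perm_mask(unsigned long fault_addr,
                                        unsigned long page_offset)
    {
        if (fault_addr >= page_offset)       /* kernel text, e.g. modules */
            return PTE_PRESENT | PTE_BAP_SX;
        return PTE_PRESENT | PTE_USER | PTE_BAP_UX;
    }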
-
- 14 May 2010, 1 commit
-
-
Submitted by Li Yang
With CONFIG_PTE_64BIT the PTE format has unique permission bits for user and supervisor execute. However, on !CONFIG_PTE_64BIT we overload the supervisor bit to imply user execute when _PAGE_USER is set. This allows us to use the same permission check mask for user or supervisor code on !CONFIG_PTE_64BIT. On CONFIG_PTE_64BIT, though, we map _PAGE_EXEC to _PAGE_BAP_UX, so we need a different permission mask depending on whether the fault comes from a kernel address or from user space. Without separate permission masks we see issues like the following with modules:
Unable to handle kernel paging request for instruction fetch
Faulting instruction address: 0xf938d040
Oops: Kernel access of bad area, sig: 11 [#1]
Signed-off-by: Li Yang <leoli@freescale.com>
Signed-off-by: Jin Qing <b24347@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 19 March 2010, 1 commit
-
-
Submitted by Márton Németh
When printk() is disabled (CONFIG_PRINTK unset via the menu item General setup -> Configure standard kernel features (for small systems) -> Enable support for printk), there should be no printk() calls at all.
Signed-off-by: Márton Németh <nm127@freemail.hu>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
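One common way to honor that configuration from C code is to compile the call out entirely, roughly as below (a generic sketch, not the specific change made by this commit):

    /* Sketch: emit the diagnostic only when printk support is built in. */
    #include <linux/printk.h>

    static void report_bad_mapping(unsigned long addr)
    {
    #ifdef CONFIG_PRINTK
        printk(KERN_ERR "bad early mapping at 0x%lx\n", addr);
    #else
        (void)addr;                          /* no printk() calls at all when disabled */
    #endif
    }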
-
- 17 March 2010, 1 commit
-
-
Submitted by Kumar Gala
We shouldn't always be setting 'M' in the TLB entry, since it's reasonable for some things to be mapped non-coherent. The PTE should have 'M' set properly.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 18 February 2010, 1 commit
-
-
24 is the offset between the opcode past the bl and the one past the rfi. This makes it more obvious.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 21 November 2009, 1 commit
-
-
Submitted by Kumar Gala
Rewrite the code so it's more standalone, and fix some issues:
* Bump the number of CAM entries to 64 to support e500mc
* Make the code handle MAS7 properly
* Use pr_cont instead of building up a string as we go (see the sketch below)
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
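For reference, the pr_cont pattern mentioned in the last bullet looks roughly like this (the fields printed are illustrative, not the exact dump routine):

    /* Sketch: continue one log line instead of assembling it in a buffer. */
    #include <linux/printk.h>

    static void print_tlbcam_entry(int i, unsigned long va, unsigned long pa,
                                   unsigned long size)
    {
        pr_info("TLBCAM[%d]: VA=0x%08lx", i, va);
        pr_cont(" PA=0x%08lx", pa);          /* appended to the same log line */
        pr_cont(" size=%lu KiB\n", size >> 10);
    }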
-
- 02 September 2009, 1 commit
-
-
Submitted by Kumar Gala
Switch to using the Power ISA defined PTE format when we have a 64-bit PTE. This makes the TLB-fault handling code similar between fsl-booke and book3e-64. Additionally this lets us take advantage of the page size encodings and full permissions that the HW PTE defines. Also define _PMD_PRESENT, _PMD_PRESENT_MASK, and _PMD_BAD, since the 32-bit ppc arch code expects them.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 August 2009, 1 commit
-
-
Submitted by Benjamin Herrenschmidt
This is an attempt at cleaning up a bit the way we handle execute permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only defined by CPUs that can do something with it, and the myriad of #ifdefs in the I$/D$ coherency code is reduced to 2 cases that hopefully should cover everything. The logic on BookE is a little bit different from what it was, though not by much. Since _PAGE_EXEC will now be set by the generic code for executable pages, we need to filter it out when the page is unclean and recover it later. However, I don't expect the code in that area to be more bloated than it already was due to this change. I could boast that this brings proper enforcement of per-page execute permissions to all BookE and 40x, but in fact we've had that for some time as a side effect of my previous rework in that area (and I didn't even know it :-). We would only enable execute permission if the page was cache clean, and we would only cache-clean it if we took an exec fault. Since we now enforce that the latter only works if VM_EXEC is part of the VMA flags, we de facto already enforce per-page execute permissions... unless I missed something.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 August 2009, 1 commit
-
-
Submitted by Benjamin Herrenschmidt
The kernel uses SPRG registers for various purposes, typically in low-level assembly code as scratch registers or to hold per-CPU global information such as the PACA or the current thread_info pointer. We want to be able to easily shuffle the usage of those registers, as some implementations have specific constraints related to some of them, for example userspace-readable aliases, and the current choice isn't always the best. This patch should not change any code generation; it replaces the usage of SPRN_SPRGn everywhere in the kernel with a named replacement and adds documentation next to the definition of the names as to what they are used for on each processor family. The only parts that still use the original numbers are bits of KVM or suspend/resume code that just blindly need to save/restore all the SPRGs.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
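The renaming amounts to giving each SPRG a purpose-named alias per processor family, along the lines of the sketch below (the assignments shown are illustrative; the real choices differ per platform):

    /* Sketch: purpose-named aliases instead of raw SPRN_SPRGn uses. */
    #define SPRN_SPRG0              0x110    /* SPR 272 */
    #define SPRN_SPRG1              0x111    /* SPR 273 */

    /* On a hypothetical family: SPRG0 is an exception scratch register,
     * SPRG1 holds the current thread_info pointer. */
    #define SPRN_SPRG_SCRATCH0      SPRN_SPRG0
    #define SPRN_SPRG_THREAD        SPRN_SPRG1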
-
- 27 April 2009, 1 commit
-
-
Submitted by Tim Abbott
This has the consequence of changing the section name used for head code from ".text.head" to ".head.text". Since this commit changes all users in the architecture, the change should be harmless.
Signed-off-by: Tim Abbott <tabbott@mit.edu>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
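For context, head code normally selects that section through a helper macro rather than a literal section name; roughly (a sketch of the common pattern, with the exact definition treated as an assumption):

    /* Sketch: assembly sources place early-boot code in ".head.text". */
    #define __HEAD  .section ".head.text", "ax"

    /* In a head_*.S file:
     *     __HEAD
     *     _GLOBAL(_start)
     *         ... early boot code ...
     */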
-
- 23 February 2009, 1 commit
-
-
Submitted by Kumar Gala
The e500mc supports the new msgsnd/doorbell mechanisms that were added in the Power ISA 2.05 architecture. We use the normal-level doorbell for SMP IPIs at this point.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 13 February 2009, 1 commit
-
-
Submitted by Kumar Gala
Power ISA 2.06 added power-of-two page sizes to the embedded MMU architecture, done in such a way as to be code compatible with the existing HW. Made the minor code changes to support both power-of-two and power-of-four page sizes. Also added some new MAS bits and macros that are defined as part of the 2.06 ISA. Renamed some things to use the 'Book-3e' concept to convey the new MMU that is based on the Freescale Book-E MMU programming model. Note that it is still invalid to try to use a page size that isn't supported by the CPU.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 29 January 2009, 1 commit
-
-
Submitted by Kumar Gala
We currently have a few variants of fsl-booke processors (e500v1, e500v2, e500mc, and e200). They all have minor differences that we had previously been handling via ifdefs. To move towards supporting them all, the following changes have been made:
* PID1 and PID2 only exist on e500v1 & e500v2 and should not be accessed on e500mc or e200. We use MMUCFG[NPIDS] to determine which case we are in, since we only touch PID1/2 in extremely early init code (see the sketch below).
* Not all IVORs exist on all the processors, so introduce cpu_setup functions for each variant to set up the proper IVORs that are either unique or exist but vary between the processors.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
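A minimal sketch of the NPIDS test, with the field position stated as an assumption (the real check reads SPRN_MMUCFG in early assembly and uses the kernel's own field macros):

    /* Illustrative only: decide whether PID1/PID2 exist from MMUCFG[NPIDS]. */
    #define MMUCFG_NPIDS_SHIFT  11                        /* assumed field position */
    #define MMUCFG_NPIDS_MASK   (0xfUL << MMUCFG_NPIDS_SHIFT)

    static int have_pid1_pid2(unsigned long mmucfg)
    {
        unsigned long npids = (mmucfg & MMUCFG_NPIDS_MASK) >> MMUCFG_NPIDS_SHIFT;

        return npids > 1;    /* e500v1/v2 report 3 PID registers, e500mc just 1 */
    }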
-
- 14 January 2009, 1 commit
-
-
Submitted by Kumar Gala
We use doorbell interrupts for IPIs, and thus we need to make sure we aren't interrupted while processing an IPI.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Dave Liu <daveliu@freescale.com>
-
- 08 January 2009, 2 commits
-
-
Submitted by Trent Piepho
This is a global variable defined in fsl_booke_mmu.c with a value that gets initialized in assembly code in head_fsl_booke.S. It's never used. If some code ever does want to know the number of entries in TLB1, then "numcams = mfspr(SPRN_TLB1CFG) & 0xfff" is a whole lot simpler than a global initialized from assembly during kernel boot.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
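In context, that one-liner would read roughly as follows (a sketch; the entry count sits in the low 12 bits of TLB1CFG on these parts):

    /* Sketch: query the number of TLB1 entries on demand instead of caching it. */
    #include <asm/reg.h>                     /* mfspr(), SPRN_TLB1CFG */

    static inline unsigned int tlb1_entries(void)
    {
        return mfspr(SPRN_TLB1CFG) & 0xfff;  /* entry-count field */
    }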
-
Submitted by Trent Piepho
Some assembly code in head_fsl_booke.S hard-coded the size of struct tlbcam as 20 when it indexed the TLBCAM table. Anyone changing the size of struct tlbcam would not know to expect that. The kernel already has a mechanism for getting the sizes of C structures into assembly language files, asm-offsets, so let's use it. The definition of the struct gets moved to a header so that asm-offsets.c can include it.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
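The asm-offsets mechanism referred to above works roughly as sketched here (the constant name is illustrative):

    /* Sketch: asm-offsets.c emits struct sizes/offsets as assembler-visible
     * constants, so head_*.S never hard-codes "20". */
    #include <linux/kbuild.h>                /* DEFINE() */
    /* ...plus the header that now declares struct tlbcam */

    int main(void)
    {
        DEFINE(TLBCAM_SIZE, sizeof(struct tlbcam));
        return 0;
    }

Assembly can then include asm-offsets.h and index the TLBCAM table with TLBCAM_SIZE instead of a magic number.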
-
- 21 December 2008, 1 commit
-
-
Submitted by Benjamin Herrenschmidt
We're soon running out of CPU features, and I need to add some new ones for various MMU-related bits, so this patch separates the MMU features from the CPU features. I moved over the 32-bit MMU-related ones and added base features for MMU type families, but didn't move over any 64-bit-only features yet.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 03 December 2008, 4 commits
-
-
Submitted by Kumar Gala
Added an 85xx-specific smp_ops structure. We use ePAPR-style boot release and the MPIC for IPIs at this point. Additionally added routines for secondary CPU entry and initialization.
Signed-off-by: Andy Fleming <afleming@freescale.com>
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala
Removed unused branch labels.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Trent Piepho
The initial TLB mapping for the kernel boot didn't set the memory-coherent attribute, MAS2[M], in SMP mode. If this code supported booting a secondary processor (which it doesn't yet), then when a secondary processor booted, it would probably signal the primary processor by setting a variable called something like __secondary_hold_acknowledge. However, due to the lack of the M bit, the primary processor would not snoop the transaction (even if a transaction were broadcast). If the primary CPU's L1 D-cache had a copy, it would not be flushed and the CPU would never see the ack. The result would be the primary CPU spinning for a long time, perhaps a full second before it gives up, waiting for an ack from the secondary CPU that it cannot see because of the stale cache. The value of MAS2 for the boot page TLB1 entry is a compile-time constant, so there is no need to calculate it in PowerPC assembly language. Also, per the MPC8572 manual section 6.12.5.3, "Bits that represent offsets within a page are ignored and should be cleared." Existing code didn't clear them; this code does. The same applies when the page of KERNELBASE is found; we don't need to use asm to mask off the lower 12 bits. In the code that computes the address to rfi from, don't hard-code the offset to 24 bytes; have the assembler figure it out for us.
Signed-off-by: Trent Piepho <tpiepho@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
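The compile-time MAS2 value described above amounts to something like the following (a sketch using the architected MAS2 bit layout; the macro names are illustrative rather than the kernel's):

    /* Sketch: MAS2 for the boot mapping as a compile-time constant.
     * Keep the in-page offset bits cleared; set M (coherent) only for SMP. */
    #define MAS2_M_BIT       0x00000004UL    /* architected memory-coherence bit */
    #define MAS2_EPN_MASK    (~0xfffUL)      /* drop in-page offset bits */

    #ifdef CONFIG_SMP
    #define BOOT_MAS2(epn)   (((epn) & MAS2_EPN_MASK) | MAS2_M_BIT)
    #else
    #define BOOT_MAS2(epn)   ((epn) & MAS2_EPN_MASK)
    #endif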
-
Submitted by Liu Yu
This patch adds handlers for SPE/EFP exceptions. The code is used to emulate floating-point arithmetic when MSR[SPE] is enabled and an EFP data interrupt or EFP round interrupt is received. This patch has no conflict with, or dependence on, FP math-emu. The code has been tested with TestFloat. The code does not yet support SPE/EFP instruction emulation (it won't be called on a program interrupt), but that could easily be added.
Signed-off-by: Liu Yu <yu.liu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 14 October 2008, 1 commit
-
-
Submitted by Milton Miller
Commit b38fd42f added false dependencies to order the loads of the upper and lower halves of the PTE, but only adjusted whitespace instead of deleting the old load in the I-side handler, letting the hardware see the non-dependent load. This patch removes the extra load.
Signed-off-by: Milton Miller <miltonm@bga.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 25 September 2008, 1 commit
-
-
Submitted by Becky Bruce
This rearranges a bit of code and adds support for 36-bit physical addressing for configs that use a hashed page table. The 36-bit physical support is not enabled by default on any config; it must be explicitly enabled via the config system. This patch *only* expands the page table code to accommodate large physical addresses on 32-bit systems and enables the PHYS_64BIT config option for 86xx. It does *not* allow you to boot a board with more than about 3.5 GB of RAM; for that, SWIOTLB support is also required (and coming soon).
Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 20 September 2008, 1 commit
-
-
Submitted by Kumar Gala
We need to create a false data dependency to ensure the loads of the PTE are done in the right order.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 17 July 2008, 2 commits
-
-
Submitted by Kumar Gala
Use the TLBSYNC macro defined in ppc_asm.h rather than our own ifdefs.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala
This converts the FSL Book-E PTE access and TLB miss handling to match the recent changes to 44x that introduced support for non-atomic PTE operations in pgtable-ppc32.h and removed write-back to the PTE from the TLB miss handlers. In addition, the DSI interrupt code no longer tries to fix up write permission; this is left to generic code, and _PAGE_HWWRITE is gone.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 26 June 2008, 1 commit
-
-
Submitted by Kumar Gala
The e500 core enters DOZE/NAP power-saving modes when it goes into the cpu_idle routine. The default power-management mode is DOZE; if the user runs echo 1 > /proc/sys/kernel/powersave-nap, the system switches to NAP mode.
Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 19 June 2008, 1 commit
-
-
Submitted by Kumar Gala
The new e500mc core from Freescale is based on the e500v2 but with the following changes:
* Supports only the Enhanced Debug Architecture (DSRR0/1, etc.)
* Floating point
* No SPE
* Supports lwsync
* Doorbell exceptions
* Hypervisor
* Cache line size is now 64 bytes (e500v1/v2 have a 32-byte cache line)
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 03 June 2008, 1 commit
-
-
Submitted by Kumar Gala
For the additional exception levels (critical, debug, machine check) on 40x/Book-E we were using "static" allocations of the stack in the associated head.S. Move to a runtime allocation to make the code a bit easier to read, as we mimic how we handle IRQ stacks. It's also a bit easier to set up the stack with a "dummy" thread_info in C code.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Paul Mackerras <paulus@samba.org>
-
- 24 April 2008, 1 commit
-
-
Submitted by Kumar Gala
Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired. Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning; however, it will always be used to set the LOAD program header physical address field in the resulting ELF image. Currently we are limited to running at a physical address that is a multiple of 256M, due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try to run a kernel at a non-aligned physical address. All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory, not IO mappings; ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 17 April 2008, 2 commits
-
-
Submitted by Kumar Gala
The architecture allows "Book-E" style debug interrupts either to go to critical interrupts or to their own debug interrupt level. To allow a single dynamic kernel to support machines of either type, we want to be able to compile in the interrupt handling code for both exception levels. Towards this goal we renamed the debug handling macros to specify the interrupt level in their name (DEBUG_CRIT_EXCEPTION/DebugCrit and DEBUG_DEBUG_EXCEPTION/DebugDebug). Additionally, on the Freescale Book-E parts we expanded the exception stacks to cover the maximum case of needing three exception stacks (normal, machine check, and debug). There is some kernel text space optimization to be gained if a kernel is configured for a specific Freescale implementation, but we aren't handling that now, to allow for single kernel image support.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala
* Determine the RPN the kernel is running at during runtime rather than using a compile-time constant for the initial TLB
* Clean up adjust_total_lowmem() to respect memstart_addr and be a bit clearer about which variables are sizes vs. addresses
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 24 January 2008, 1 commit
-
-
Submitted by Dale Farnsworth
The e500 MMU init code previously assumed KERNELBASE always equaled PAGE_OFFSET and PHYSICAL_START was 0. Removing that assumption is useful for kdump support as well as asymmetric multicore. For the initial kdump support the secondary kernel will run at 32M but needs access to all of memory, so we bump the initial TLB up to 64M. This also matches the forthcoming ePAPR spec.
Signed-off-by: Dale Farnsworth <dale@farnsworth.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 07 December 2007, 1 commit
-
-
Submitted by Kumar Gala
The size of swapper_pg_dir is 8 KiB instead of 4 KiB when using 64-bit PTEs (CONFIG_PTE_64BIT). This was reported by Cedric Hombourger <chombourger@gmail.com>.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
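The doubling follows directly from the PTE width; a quick back-of-the-envelope check (assuming 4 KiB pages and 4-byte PGD entries on a 32-bit kernel):

    /* Sketch: why swapper_pg_dir doubles when PTEs grow from 4 to 8 bytes. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long va_space = 1ULL << 32;   /* full 32-bit VA space */
        const unsigned long page = 4096;

        for (unsigned pte_size = 4; pte_size <= 8; pte_size += 4) {
            unsigned long long span = (page / pte_size) * (unsigned long long)page;
            unsigned long long pgd_entries = va_space / span;
            printf("%u-byte PTEs: %llu PGD entries x 4 bytes = %llu KiB pgdir\n",
                   pte_size, pgd_entries, pgd_entries * 4 / 1024);
        }
        return 0;
    }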
-
- 12 October 2007, 1 commit
-
-
Submitted by Kumar Gala
Move to using PAGE_OFFSET instead of the TASK_SIZE or KERNELBASE value on 6xx/40x/44x/fsl-booke to determine whether the faulting address is a kernel or user-space address. This mimics how the is_kernel_addr() macro works.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
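For reference, the comparison this mimics boils down to the following (a simplified sketch of the 32-bit case):

    /* Sketch: classify a faulting address the way is_kernel_addr() does. */
    static inline int fault_is_kernel_addr(unsigned long addr,
                                           unsigned long page_offset)
    {
        return addr >= page_offset;          /* >= PAGE_OFFSET means kernel space */
    }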
-