- 30 Jul 2018, 1 commit
By Christophe Leroy
This patch moves ASM_CONST() and stringify_in_c() into a dedicated asm-const.h, then cleans up all related inclusions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: asm-compat.h should include asm-const.h]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
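For context, the two helpers are tiny; this is a reconstruction of asm-const.h from memory (exact guards and comments approximate), not a verbatim copy:

```c
/* Approximate reconstruction of arch/powerpc/include/asm/asm-const.h. */
#ifdef __ASSEMBLY__
#  define stringify_in_c(...)	__VA_ARGS__
#  define ASM_CONST(x)		x
#else
/* In C, turn an asm-level expression into a string for inline asm;
 * the variadic form copes with commas in the expression. */
#  define __stringify_in_c(...)	#__VA_ARGS__
#  define stringify_in_c(...)	__stringify_in_c(__VA_ARGS__) " "
#  define ASM_CONST(x)		stringify_in_c(x)
#endif
```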
- 10 Aug 2017, 1 commit
By Christophe Leroy
Two config options exist to define powerpc MPC8xx:

* CONFIG_PPC_8xx
* CONFIG_8xx

arch/powerpc/platforms/Kconfig.cputype has contained the following comment about the CONFIG_8xx item for some years: "# this is temp to handle compat with arch=ppc". arch/powerpc is now the only place with remaining uses of CONFIG_8xx: get rid of them.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
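The conversion itself is mechanical. A hypothetical before/after of one guard (the callee name is made up purely for illustration):

```c
/* Before: guarded by the temporary compat symbol. */
#ifdef CONFIG_8xx
	mpc8xx_setup_mmu();	/* hypothetical function, for illustration */
#endif

/* After: guarded by the canonical symbol only. */
#ifdef CONFIG_PPC_8xx
	mpc8xx_setup_mmu();
#endif
```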
- 01 Mar 2016, 1 commit
By Adam Buchbinder
Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
- 23 Oct 2015, 1 commit
By Scott Wood
Use an AS=1 trampoline TLB entry to allow all normal TLB1 entries to be loaded at once. This avoids the need to keep the translation that code is executing from in the same TLB entry in the final TLB configuration as during early boot, which in turn is helpful for relocatable kernels (e.g. kdump) where the kernel is not running from what would be the first TLB entry.

On e6500, we limit map_mem_in_cams() to the primary hwthread of a core (the boot cpu is always considered primary, as a kdump kernel can be entered on any cpu). Each TLB only needs to be set up once, and when we do, we don't want another thread to be running when we create a temporary trampoline TLB1 entry.

Signed-off-by: Scott Wood <scottwood@freescale.com>
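A hedged sketch of the per-core restriction; cpu_thread_in_core() is the usual helper from asm/cputhreads.h, but the surrounding shape and names are assumptions, not the commit's actual code:

```c
#include <asm/cputhreads.h>	/* cpu_thread_in_core() */

/* Hypothetical shape of the e6500 rule: TLB1 (and the temporary AS=1
 * trampoline entry) is shared by all hwthreads of a core, so only one
 * thread per core may rewrite it. */
static bool may_map_mem_in_cams(int cpu, int boot_cpu)
{
	if (cpu == boot_cpu)			/* kdump may enter on any cpu */
		return true;
	return cpu_thread_in_core(cpu) == 0;	/* primary hwthread only */
}
```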
- 10 Jan 2014, 1 commit
By Kevin Hao
When booting above 64M on a secondary cpu, we face the same issue as on the boot cpu: PAGE_OFFSET maps to two different physical addresses for the init TLB and the final map. So we have to use switch_to_as1/restore_to_as0 while converting between these two maps. When restoring to AS0 on a secondary cpu, we only need to return to the caller, so add a new parameter to restore_to_as0 for this purpose.

Use LOAD_REG_ADDR_PIC to get the address of variables which may be used before we set the final map in the CAMs for the secondary cpu. Move the setting of the CAMs a bit earlier in order to avoid unnecessary use of LOAD_REG_ADDR_PIC.

Signed-off-by: Kevin Hao <haokexin@gmail.com>
Signed-off-by: Scott Wood <scottwood@freescale.com>
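LOAD_REG_ADDR_PIC works because it computes the symbol's address relative to the current instruction rather than trusting the link-time (final-map) address. My recollection of the ppc_asm.h macro, details approximate:

```c
/* From asm/ppc_asm.h, as I remember it: get the runtime address of
 * `name` even when running somewhere other than the linked address. */
#define LOAD_REG_ADDR_PIC(reg, name)		\
	bl	0f;				\
0:	mflr	reg;				\
	addis	reg, reg, (name - 0b)@ha;	\
	addi	reg, reg, (name - 0b)@l
```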
- 15 Nov 2012, 1 commit
By Tony Breeds
Don't use 47x-only #defines for TLBIVAX or ICBT; supply and use helpers in ppc-opcode.h instead. This fixes a compile breakage.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
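A sketch of the ppc-opcode.h approach: when the assembler may not accept a mnemonic, emit the instruction word directly. The field shifts follow the architecture; the ICBT opcode constant is quoted from memory and should be checked against ppc-opcode.h:

```c
/* Field helpers: place a register number or the CT field in the
 * instruction word. */
#define ___PPC_RA(a)	(((a) & 0x1f) << 16)
#define ___PPC_RB(b)	(((b) & 0x1f) << 11)
#define __PPC_CT(t)	(((t) & 0x0f) << 21)

#define PPC_INST_ICBT	0x7c00002c	/* icbt: from memory, verify */

/* Emit icbt as a raw word so old assemblers don't need the mnemonic;
 * stringify_in_c() makes the same macro usable from C inline asm. */
#define PPC_ICBT(c, a, b)	stringify_in_c(.long PPC_INST_ICBT | \
				__PPC_CT(c) | ___PPC_RA(a) | ___PPC_RB(b))
```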
- 10 Jul 2012, 3 commits
By Michael Neuling
Some macros use RA in a position where, if RA=R0, the value used is 0 (not the contents of r0), so make R0 the enforced mnemonic in those macros. Idea suggested by Andreas Schwab.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
By Michael Neuling
These macros take registers but are using raw integers where they could be using the logical register names. We are going to enforce this soon, so fix these up now.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
By Michael Neuling
Anything that uses a constructed instruction (i.e. from ppc-opcode.h) needs to use the new R0 macro, as %r0 is not going to work there. Also convert usages of macros where we are just determining an offset (usually for a load/store), like:

	std	r14,STK_REG(r14)(r1)

Here STK_REG(r14) can't be used, as %r14 doesn't work inside the STK_REG macro since it's just calculating an offset.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
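The distinction behind these three commits, sketched from memory (the save-area base in STK_REG is approximate):

```c
/* Two families of register macros, as described above: */
#define r14	%r14	/* assembler mnemonic: valid as an instruction operand */
#define R14	14	/* bare number: valid inside constructed opcodes and
			 * arithmetic such as stack-offset calculations */

/* STK_REG() only computes an offset, so it needs the bare number;
 * the 112-byte frame header is approximate. */
#define STK_REG(i)	(112 + ((i) - 14) * 8)

/* Usage after the conversion described above:
 *	std	r14, STK_REG(R14)(r1)
 */
```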
- 02 Feb 2011, 1 commit
By Dave Kleikamp
The 476FP core may hang if an instruction fetch happens during an msync following a tlbsync. This workaround makes sure that enough instruction cache lines are pre-fetched before executing the msync. (sync and msync are the same to the compiler.)

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
- 14 Oct 2010, 1 commit
By Kumar Gala
Freescale parts typically have a TLB array for large mappings into which we can bolt the linear mapping. We reuse the code that already exists on PPC32 to set up the linear mapping on the 64-bit side so that it is covered by bolted TLB entries, and we utilize a quarter of the variable-size TLB array for this purpose.

Additionally, we limit the amount of memory to what we can cover via bolted entries so we don't get secondary faults in the TLB miss handlers. We should fix this limitation in the future.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
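A hypothetical sketch of the bolting loop; settlbcam() exists in the 32-bit code, but calc_cam_sz() and the exact shape here are assumptions:

```c
/* Cover as much of RAM as the reserved TLB1 entries allow, and return
 * how much was covered so the caller can cap usable memory there. */
static unsigned long map_mem_in_cams(unsigned long ram, int max_cam_idx)
{
	unsigned long virt = PAGE_OFFSET, amount_mapped = 0;
	phys_addr_t phys = 0;
	int i;

	for (i = 0; i < max_cam_idx && amount_mapped < ram; i++) {
		/* assumed helper: largest entry size usable here */
		unsigned long cam_sz = calc_cam_sz(ram - amount_mapped,
						   virt, phys);
		settlbcam(i, virt, phys, cam_sz);	/* bolt one entry */
		virt += cam_sz;
		phys += cam_sz;
		amount_mapped += cam_sz;
	}
	return amount_mapped;	/* memory beyond this stays unused for now */
}
```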
- 23 Aug 2010, 1 commit
By Dave Kleikamp
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
- 17 May 2010, 1 commit
By Kumar Gala
When we build with ftrace enabled, it's possible that loadcam_entry would use the stack pointer (even though the code doesn't need it). We call loadcam_entry in __secondary_start before the stack is set up. To ensure that loadcam_entry doesn't use the stack pointer, the easiest solution is to just have it in asm code.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
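A paraphrase from memory of the C routine being replaced: the body itself never needs the stack, but with ftrace enabled the compiler inserts an _mcount profiling call, and that call does. Struct/field names follow my recollection of fsl_booke_mmu.c:

```c
/* Load one bolted entry from the cached TLBCAM[] values into TLB1. */
void loadcam_entry(int idx)
{
	mtspr(SPRN_MAS0, TLBCAM[idx].MAS0);	/* select TLB1, entry idx */
	mtspr(SPRN_MAS1, TLBCAM[idx].MAS1);	/* valid bit, TSIZE, TID */
	mtspr(SPRN_MAS2, TLBCAM[idx].MAS2);	/* EPN and WIMGE */
	mtspr(SPRN_MAS3, TLBCAM[idx].MAS3);	/* RPN and permissions */

	asm volatile("isync; tlbwe; isync" : : : "memory");
}
```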
- 05 May 2010, 1 commit
By Dave Kleikamp
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
- 25 Aug 2009, 1 commit
By Kumar Gala
The MMUCSR is now defined as part of the Book-3E architecture, so we can move it into mmu-book3e.h and add some of the additional bits defined by the architecture specs.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
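Representative definitions of the kind that land in mmu-book3e.h; the SPR number is architectural (SPR 1012), while the bit values are quoted from memory and worth double-checking:

```c
#define SPRN_MMUCSR0	0x3F4		/* MMU Control and Status Register 0 */

#define MMUCSR0_TLB1FI	0x00000002	/* TLB1 flash invalidate */
#define MMUCSR0_TLB0FI	0x00000004	/* TLB0 flash invalidate */
```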
- 20 Aug 2009, 2 commits
By Benjamin Herrenschmidt
This adds the TLB miss handler assembly and the low-level TLB flush routines, along with the necessary hook for dealing with our virtual page tables or indirect TLB entries that need to be flushed when PTE pages are freed. There is currently no support for hugetlbfs.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
By Benjamin Herrenschmidt
We need to pass down whether the page is direct or indirect, and we'll need to pass the page size to _tlbil_va and _tlbivax_bcast. We also add a new low-level _tlbil_pid_noind() which does a TLB flush by PID but avoids flushing indirect entries if possible.

This implements the new prototypes but defines them with inlines or macros so that no additional arguments are actually passed on current processors.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
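A hedged sketch of that last point: the richer prototype exists for the new Book3E code, while current processors compile the extra arguments away (the exact guard and helper names may differ from the commit):

```c
/* The asm routine keeps its old two-argument ABI... */
extern void __tlbil_va(unsigned long address, unsigned int pid);

/* ...while callers use the new four-argument form; tsize/ind are
 * simply dropped at compile time on these CPUs. */
static inline void _tlbil_va(unsigned long address, unsigned int pid,
			     unsigned int tsize, unsigned int ind)
{
	__tlbil_va(address, pid);
}
```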
- 23 Apr 2009, 2 commits
By Stephen Rothwell
Previous gcc versions didn't notice this because one of the preceding #ifs always evaluated to true. gcc 4.4.0 produced this error:

	arch/powerpc/mm/tlb_nohash_low.S:206:6: error: #elif with no expression

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
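A minimal reproduction of this class of bug (CONFIG_FOO and CONFIG_BAR are placeholders):

```c
/* Broken form, shown as a comment so this file still compiles:
 *
 *	#if defined(CONFIG_FOO)
 *	int x;
 *	#elif			<-- gcc 4.4: "error: #elif with no expression"
 *	int y;
 *	#endif
 *
 * Older gcc let the dangling #elif slide when an earlier branch always
 * evaluated true. The fix gives it a real condition (or plain #else):
 */
#if defined(CONFIG_FOO)
int x;
#elif defined(CONFIG_BAR)
int y;
#endif
```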
By Kumar Gala
This reverts commit e9965577. Our HW guys were able to fix this so it never sees the light of day.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
- 07 Apr 2009, 1 commit
By Kumar Gala
During ISA 2.06 development the opcode for tlbilx changed, and some early implementations used the old opcode. Add support for an MMU_FTR fixup to deal with this.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
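A hedged sketch of how such a fixup reads at a call site in a .S file. The feature-section macros are the kernel's usual patching mechanism, but the flag and opcode-helper names here are assumptions:

```asm
BEGIN_MMU_FTR_SECTION
	PPC_TLBILX_PID(0, R4)		/* final ISA 2.06 encoding */
MMU_FTR_SECTION_ELSE
	PPC_TLBILX_PID_EARLY(0, R4)	/* early parts: old opcode (assumed name) */
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TLBILX_EARLY_OPCODE)
```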
- 09 Mar 2009, 1 commit
By Kumar Gala
The e500mc core supports the new tlbilx instructions, which do core-local invalidates and also give us the ability to take down all TLB entries matching a given PID.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
- 21 Dec 2008, 2 commits
By Benjamin Herrenschmidt
The handlers for Critical, Machine Check or Debug interrupts will save and restore MMUCR nowadays, thus we only need to disable normal interrupts when invalidating TLB entries.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
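A hedged C paraphrase of what this permits on 44x (the real code is assembly in misc_32.S; the MMUCR field placement and the tlbsx/tlbwe trick are recalled from memory):

```c
/* Flush one 44x TLB entry by effective address and PID. With MMUCR now
 * preserved by the critical/debug/machine-check handlers, masking only
 * normal (EE) interrupts around the MMUCR+tlbsx sequence is enough. */
static inline void tlbil_va_440(unsigned long va, unsigned int pid)
{
	unsigned long flags;
	int idx;

	local_irq_save(flags);				/* normal irqs only */
	mtspr(SPRN_MMUCR,
	      (mfspr(SPRN_MMUCR) & ~0xffUL) | pid);	/* search TID = pid */
	asm volatile(
		"tlbsx.	%0,0,%1\n"	/* find the entry, if present */
		"bne	1f\n"		/* no match: nothing to flush */
		/* idx < 64, so the V bit of the written word is clear:
		 * writing idx as the PAGEID word invalidates the entry. */
		"tlbwe	%0,%0,0\n"
		"1:	sync"
		: "=&r" (idx) : "r" (va) : "cc", "memory");
	local_irq_restore(flags);
}
```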
By Benjamin Herrenschmidt
Currently, the various forms of low-level TLB invalidations are all implemented in misc_32.S for 32-bit processors, in a fairly scary mess of #ifdef's, and with interesting duplication such as a whole bunch of code for FSL _tlbie and _tlbia which are no longer used.

This moves things around such that _tlbie is now defined in hash_low_32.S and is only used by the 32-bit hash code, while all nohash CPUs use the various _tlbil_* forms that are now moved to a new file, tlb_nohash_low.S.

I moved all the definitions for that stuff out of include/asm/tlbflush.h, as they are really internal mm stuff, and into mm/mmu_decl.h.

The code should have no functional changes. I kept some variants inline for trivial forms on things like 40x and 8xx.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
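My recollection of those inline trivial forms in mmu_decl.h (guards and exact names approximate): on 40x and 8xx a full tlbia is cheap, so the targeted forms can simply do that.

```c
#if defined(CONFIG_40x) || defined(CONFIG_8xx)
static inline void _tlbil_all(void)
{
	asm volatile ("sync; tlbia; isync" : : : "memory");
}
static inline void _tlbil_pid(unsigned int pid)
{
	asm volatile ("sync; tlbia; isync" : : : "memory");
}
#else
/* Everything else gets the real out-of-line routines. */
extern void _tlbil_all(void);
extern void _tlbil_pid(unsigned int pid);
#endif
```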