- 01 Aug 2016, 40 commits
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the Performance Monitor registers' ELF core note NT_PPC_PMU. This is achieved by adding one new register set, REGSET_PMU, on powerpc corresponding to the new ELF core note section. It also implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
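A rough sketch of what the plumbing for such a register set looks like: a get/set/active trio plus an entry in the powerpc regset table keyed by the new note type. The register layout, count and thread_struct fields shown here are illustrative assumptions, not the exact kernel code.

```c
#include <linux/elf.h>
#include <linux/regset.h>
#include <linux/sched.h>

/* Illustrative sketch only; the real arch/powerpc/kernel/ptrace.c differs in detail. */
static int pmu_active(struct task_struct *target, const struct user_regset *regset)
{
	return regset->n;	/* report how many registers are available */
}

static int pmu_get(struct task_struct *target, const struct user_regset *regset,
		   unsigned int pos, unsigned int count,
		   void *kbuf, void __user *ubuf)
{
	/* Layout (5 x u64) and field choice are assumptions for this sketch. */
	u64 regs[5] = {
		target->thread.siar, target->thread.sdar, target->thread.sier,
		target->thread.mmcr2, target->thread.mmcr0,
	};

	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, regs, 0, -1);
}

static int pmu_set(struct task_struct *target, const struct user_regset *regset,
		   unsigned int pos, unsigned int count,
		   const void *kbuf, const void __user *ubuf)
{
	u64 regs[5];
	int ret;

	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, regs, 0, -1);
	if (ret)
		return ret;

	target->thread.siar  = regs[0];
	target->thread.sdar  = regs[1];
	target->thread.sier  = regs[2];
	target->thread.mmcr2 = regs[3];
	target->thread.mmcr0 = regs[4];
	return 0;
}

/* REGSET_PMU is assumed to be a new index in the local enum of regsets. */
static const struct user_regset native_regsets[] = {
	/* ... existing register sets ... */
	[REGSET_PMU] = {
		.core_note_type = NT_PPC_PMU, .n = 5,
		.size = sizeof(u64), .align = sizeof(u64),
		.active = pmu_active, .get = pmu_get, .set = pmu_set,
	},
};
```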
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the EBB state registers' ELF core note NT_PPC_EBB. This is achieved by adding one new register set, REGSET_EBB, on powerpc corresponding to the new ELF core note section. It also implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the running TAR, PPR and DSCR registers' ELF core notes NT_PPC_TAR, NT_PPC_PPR and NT_PPC_DSCR. This is achieved by adding three new register sets, REGSET_TAR, REGSET_PPR and REGSET_DSCR, on powerpc corresponding to the new ELF core note sections. It implements the get, set and active functions for all of these new register sets. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for all three TM checkpointed SPR states' ELF core notes NT_PPC_TM_CTAR, NT_PPC_TM_CPPR and NT_PPC_TM_CDSCR. This is achieved by adding three new register sets, REGSET_TM_CTAR, REGSET_TM_CPPR and REGSET_TM_CDSCR, on powerpc corresponding to the new ELF core note sections. It implements the get, set and active functions for all of these new register sets. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the TM SPR state ELF core note NT_PPC_TM_SPR. This is achieved by adding a register set REGSET_TM_SPR on powerpc corresponding to the new ELF core note section. It implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the TM checkpointed VSX register set ELF core note NT_PPC_CVSX. This is achieved by adding a register set REGSET_CVSX on powerpc corresponding to the new ELF core note section. It implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the TM checkpointed VMX register set ELF core note NT_PPC_CVMX. This is achieved by adding a register set REGSET_CVMX on powerpc corresponding to the new ELF core note section. It implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the TM checkpointed FPR register set ELF core note NT_PPC_CFPR. This is achieved by adding a register set REGSET_CFPR on powerpc corresponding to the new ELF core note section. It implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables support for ptrace requests, via the PTRACE_GETREGSET and PTRACE_SETREGSET calls, for the TM checkpointed GPR register set ELF core note NT_PPC_CGPR. This is achieved by adding a register set REGSET_CGPR on powerpc corresponding to the new ELF core note section. It implements the get, set and active functions for this new register set. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch splits the gpr32_get and gpr32_set functions in order to accommodate the in-transaction ptrace requests implemented in later patches in this series. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables in-transaction NT_PPC_VSX ptrace requests. The function vsr_get, which gets the running value of all VSX registers, and the function vsr_set, which sets the running value of all VSX registers, work on the running set of VSX registers, whose location will be different if a transaction is active. This patch makes these functions adapt to the situation where a transaction is active. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables in-transaction NT_PPC_VMX ptrace requests. The function vr_get, which gets the running value of all VMX registers, and the function vr_set, which sets the running value of all VMX registers, work on the running set of VMX registers, whose location will be different if a transaction is active. This patch makes these functions adapt to the situation where a transaction is active. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Anshuman Khandual
This patch enables in-transaction NT_PRFPREG ptrace requests, as illustrated in the sketch below. The function fpr_get, which gets the running value of all FPR registers, and the function fpr_set, which sets the running value of all FPR registers, work on the running set of FPR registers, whose location will be different if a transaction is active. This patch makes these functions adapt to the situation where a transaction is active. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
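A rough illustration of the adaptation being described: pick the buffer that currently holds the running FP values depending on whether a transaction is active. Field names such as fp_state and transact_fp are assumptions about the 2016-era thread_struct and are not quoted from the patch.

```c
#include <linux/sched.h>
#include <asm/reg.h>

/* Sketch only: select the buffer that holds the *running* FP state.
 * The real fpr_get()/fpr_set() differ in detail. */
static struct thread_fp_state *live_fp_state(struct task_struct *target)
{
#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
	if (target->thread.regs && MSR_TM_ACTIVE(target->thread.regs->msr))
		return &target->thread.transact_fp;	/* transactional (running) values */
#endif
	return &target->thread.fp_state;		/* normal running values */
}
```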
-
Committed by Anshuman Khandual
This patch creates a function, flush_tmregs_to_thread, which will be used by subsequent patches in this series. The function checks for attempts to use the ptrace interface for self-tracing while in a TM context and logs an appropriate warning message. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
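A hedged sketch of the shape such a helper can take; the warning text and the placeholder for the actual flushing step are approximations of the described behaviour, not a verbatim copy of the patch.

```c
#include <linux/sched.h>
#include <asm/bug.h>

/* Sketch, not verbatim kernel code. */
static void flush_tmregs_to_thread(struct task_struct *tsk)
{
	/*
	 * Self tracing through the ptrace interface is not supported while
	 * in a TM context; generic ptrace code should already prevent it,
	 * so just log a warning if it is ever attempted.
	 */
	WARN_ONCE(tsk == current,
		  "Not expecting ptrace on self: TM regs may be incorrect\n");

	/* ... flush the TM register state out to tsk->thread here ... */
}
```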
-
Committed by Anshuman Khandual
This patch adds twelve ELF core note sections for the powerpc architecture, covering the various registers and register sets which need to be accessed from the ptrace interface and then from gdb. These additions include special purpose registers like TAR, PPR and DSCR, TM running and checkpointed state for various register sets, the EBB-related register set, the performance monitor register set, etc. Adding these new ELF core note sections extends the existing ELF ABI on powerpc without affecting it in any manner. Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com> Signed-off-by: Simon Guo <wei.guo.simon@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
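For orientation, the additions amount to a block of new note-type constants in the UAPI ELF header, something along these lines. The numeric values and exact comments shown here are illustrative, not quoted from the patch.

```c
/* Sketch of include/uapi/linux/elf.h additions; values are illustrative. */
#define NT_PPC_TAR	0x103		/* Target Address Register */
#define NT_PPC_PPR	0x104		/* Program Priority Register */
#define NT_PPC_DSCR	0x105		/* Data Stream Control Register */
#define NT_PPC_EBB	0x106		/* Event Based Branch registers */
#define NT_PPC_PMU	0x107		/* Performance Monitor registers */
/* ... plus note types for the TM checkpointed GPR/FPR/VMX/VSX state,
 * the TM SPRs, and the TM checkpointed TAR/PPR/DSCR ... */
```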
-
Committed by Aneesh Kumar K.V
This should be the same as flush_tlb_page(), except on hash32. On hash32 I suspect the existing code is wrong, because we don't seem to flush the TLB for the Hash != 0 case at all. Fix this by switching to flush_tlb_page(), which does the right thing by flushing the TLB for both the hash and nohash cases on hash32. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Some architectures, like ppc64, need to do special things when flushing the TLB for a hugepage. Add a new helper to flush a hugetlb TLB range. This helps us avoid flushing the entire TLB mapping for the PID. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
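One way this kind of helper is typically wired up is a generic fallback that individual architectures can override; a minimal sketch under that assumption, not the verbatim patch:

```c
/* Sketch of a generic fallback: architectures such as ppc64 can provide
 * their own flush_hugetlb_tlb_range(), while the default simply reuses
 * the normal range flush. */
#ifndef flush_hugetlb_tlb_range
#define flush_hugetlb_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
#endif
```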
-
Committed by Aneesh Kumar K.V
Use the helper instead of open-coding the same thing in multiple places. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Instead of flushing the entire mm, implement flush_pmd_tlb_range(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Use flush_hugetlb_page() instead of flush_tlb_page() when we clear and flush the PTE. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Replace the open-coded versions of the same logic in multiple places with the helper. No functional change with this patch. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
Now that we track the page size in mmu_gather, we can use the address-based tlbie format when doing a tlb_flush(). We don't do this if we are invalidating the full address space. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
Add a comment to the generated assembler for jump labels. This makes it easier to identify them in asm listings (generated with $ make foo.s). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
This allows us to catch incorrect usage of cpu_has_feature() and mmu_has_feature() prior to jump labels being initialised. mpe: Use printk() and dump_stack() rather than WARN_ON(), because WARN_ON() may not work this early in boot. Rename the Kconfig. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
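The debug check being described amounts to an early-boot guard in the feature-test path, roughly like the fragment below. The Kconfig symbol and message text are approximations; this is a sketch, not the exact patch.

```c
/* Sketch of the debug guard inside cpu_has_feature(); mmu_has_feature()
 * gets equivalent treatment. Symbol names are approximations. */
#ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
	if (!static_key_initialized) {
		printk("Warning! cpu_has_feature() used prior to jump label init!\n");
		dump_stack();
		return early_cpu_has_feature(feature);
	}
#endif
```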
-
Committed by Kevin Hao
As we just did for CPU features. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Kevin Hao
We do binary patching of asm code using CPU features, which is a one-time operation done during early boot. However, checks of CPU features in C code are currently done at run time, even though the set of CPU features can never change after boot. We can optimise this by using jump labels to implement cpu_has_feature(), meaning checks in C code are binary patched into a single nop or branch. For a C sequence along the lines of:

    if (cpu_has_feature(FOO))
        return 2;

The generated code before is roughly:

    ld      r9,-27640(r2)
    ld      r9,0(r9)
    lwz     r9,32(r9)
    cmpwi   cr7,r9,0
    bge     cr7, 1f
    li      r3,2
    blr
    1: ...

After (true):

    nop
    li      r3,2
    blr

After (false):

    b       1f
    li      r3,2
    blr
    1: ...

mpe: Rename MAX_CPU_FEATURES as we already have a #define with that name, and define it simply as a constant, rather than doing tricks with sizeof and NULL pointers. Rename the array to cpu_feature_keys. Use the kconfig we added to guard it. Add BUILD_BUG_ON() if the feature is not a compile time constant. Rewrite the change log. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
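Put together, the static-key version of cpu_has_feature() ends up roughly like the following. This is a sketch consistent with the description above, with the debug guard from the earlier patch omitted; it is not a verbatim copy of the kernel header.

```c
/* Sketch of a jump-label based cpu_has_feature(); simplified, not verbatim. */
static __always_inline bool cpu_has_feature(unsigned long feature)
{
	int i;

	/* The feature mask must be a compile-time constant so the checks fold away. */
	BUILD_BUG_ON(!__builtin_constant_p(feature));

	if (CPU_FTRS_ALWAYS & feature)		/* always present: folds to "true" */
		return true;

	if (!(CPU_FTRS_POSSIBLE & feature))	/* never possible: folds to "false" */
		return false;

	i = __builtin_ctzl(feature);		/* map the feature bit to its static key */
	return static_branch_likely(&cpu_feature_keys[i]);
}
```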
-
Committed by Michael Ellerman
Add a kconfig option to control whether we use jump labels for the cpu/mmu_has_feature() checks. Currently this does nothing, but we will enable it in subsequent patches. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Kevin Hao
We plan to use jump labels for cpu_has_feature(). In order to implement this we need to include linux/jump_label.h in asm/cputable.h. Unfortunately, doing that leads to an include loop. The root of the problem seems to be that reg.h needs cputable.h (for CPU_FTRs), and then cputable.h, via jump_label.h, eventually pulls in hw_irq.h, which needs reg.h (for MSR_EE). So move cpu_has_feature() into a separate file of its own. Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> [mpe: Rename to cpu_has_feature.h and flesh out change log] Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Kevin Hao
mfvtb() is only used by get_vtb(). They are almost the same, except for the read from the real register. Move the mfspr() into get_vtb() and kill the mfvtb() function. With this, we can eliminate the use of cpu_has_feature() in a very core header file like reg.h. This is a preparation for the use of jump labels for cpu_has_feature(). Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
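After this change, get_vtb() reads the SPR directly, along these lines. This is a sketch; the surrounding config guard is an assumption.

```c
/* Sketch: mfspr() folded into get_vtb(), removing the mfvtb() wrapper. */
static inline u64 get_vtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
	if (cpu_has_feature(CPU_FTR_ARCH_207S))
		return mfspr(SPRN_VTB);
#endif
	return 0;
}
```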
-
Committed by Aneesh Kumar K.V
Call jump_label_init() early so that we can use static keys for CPU and MMU feature checks. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Kevin Hao
Some arches (powerpc at least) would like to invoke jump_label_init() much earlier in boot. So check static_key_initialized in order to make sure this function runs only once. LGTM-by: Ingo (http://marc.info/?l=linux-kernel&m=144049104329961&w=2) Signed-off-by: Kevin Hao <haokexin@gmail.com> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
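The change is essentially an early return at the top of jump_label_init(), as sketched below; the body of the function is unchanged and elided here.

```c
/* Sketch of the guard in kernel/jump_label.c; the rest of the function is elided. */
void __init jump_label_init(void)
{
	/* Arch code (e.g. powerpc) may call this very early; only initialise once. */
	if (static_key_initialized)
		return;

	/* ... sort the jump table and initialise the static keys as before ... */

	static_key_initialized = true;
}
```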
-
Committed by Aneesh Kumar K.V
This switches early feature checks to use the non-static-key variant of the function. In later patches we will switch cpu_has_feature() and mmu_has_feature() to use static keys, and we can use those only after the static key / jump label machinery has been initialised. Any feature check before jump label init should be done using this new helper. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
In later patches, we will switch CPU and MMU feature checks to use static keys. For checks in early boot, before jump labels are initialised, we need a variant of [cpu|mmu]_has_feature() that doesn't use jump labels. So create those, called, unimaginatively, early_[cpu|mmu]_has_feature(). Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
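Conceptually the early variant is just the direct bitmask test against cur_cpu_spec, with no static key involved; a sketch (the MMU side is analogous):

```c
/* Sketch of the non-jump-label variant; early_mmu_has_feature() is analogous. */
static inline bool early_cpu_has_feature(unsigned long feature)
{
	return !!((CPU_FTRS_ALWAYS & feature) ||
		  (CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature));
}
```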
-
Committed by Michael Ellerman
Currently we have radix_enabled() defined three times: twice in asm/book3s/64/mmu.h and then a fallback in asm/mmu.h. Consolidate them in asm/mmu.h. While we're at it, convert them to static inlines, and change the fallback case to return a bool, like mmu_has_feature(). Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
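The consolidated helper can be pictured as below; this is a sketch, and the exact config guard used is an assumption.

```c
/* Sketch of the consolidated radix_enabled() in asm/mmu.h. */
#ifdef CONFIG_PPC_RADIX_MMU
static inline bool radix_enabled(void)
{
	return mmu_has_feature(MMU_FTR_TYPE_RADIX);
}
#else
static inline bool radix_enabled(void)
{
	return false;
}
#endif
```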
-
Committed by Michael Ellerman
The intention is that the result is only used as a boolean, so enforce that by changing the return type to bool. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
The intention is that the result is only used as a boolean, so enforce that by changing the return type to bool. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Aneesh Kumar K.V
MMU feature bits are defined such that we use the lower half to represent MMU family features. Remove the strict halfway split and also make Radix an MMU family feature. Radix introduces a new MMU model and, strictly speaking, it is a new MMU family. This also frees up bits which can be used for individual features later. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman
Early in boot we binary patch some sections of code based on the CPU and MMU feature bits. But it is a one-time patching; there is no facility for re-patching the code later if the set of features changes. It would be a major bug if the set of features changed after we've done the code patching, so add a check for it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
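One plausible shape for such a check: snapshot the feature words at patch time and compare them again from a late initcall. This is a sketch under that assumption; the function and variable names here are illustrative, not quoted from the patch.

```c
/* Sketch only: record the feature bits when the one-time patching runs,
 * then warn from a late initcall if they have changed since. */
static unsigned long saved_cpu_features;
static unsigned int  saved_mmu_features;

void __init apply_feature_fixups(void)
{
	/* ... existing one-time binary patching based on the feature bits ... */

	/* Remember what we patched against, so later changes can be detected. */
	saved_cpu_features = cur_cpu_spec->cpu_features;
	saved_mmu_features = cur_cpu_spec->mmu_features;
}

static int __init check_features(void)
{
	WARN_ON(saved_cpu_features != cur_cpu_spec->cpu_features);
	WARN_ON(saved_mmu_features != cur_cpu_spec->mmu_features);

	return 0;
}
late_initcall(check_features);
```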
-