- 04 Oct 2016, 25 commits
-
-
Committed by Nicholas Piggin

Use assembler sections of fixed size and location to arrange the 64-bit Book3S exception vector code (64-bit Book3E also uses it in head_64.S for 0x0..0x100). This allows better flexibility in arranging exception code and hiding unimportant details behind macros.

Gas sections can be a bit painful to use this way, mainly because the assembler does not know where they will finally be linked. Taking absolute addresses requires a bit of trickery, for example, but it can be hidden behind macros for the most part.

Generated code is mostly the same except for locations, offsets, and alignments. The "+ 0x2" is only required for the trap number / kvm exit number, which gets loaded as a constant into a register. Previously, code also used + 0x2 for label names, but we changed to using "H" to distinguish the HV case for that. Remove the last vestiges of that.

__after_prom_start takes the absolute address of a label in another fixed section. Newer toolchains seem to compile this okay, but older ones do not. FIXED_SYMBOL_ABS_ADDR is more foolproof; it just takes an additional line to define.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
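A minimal sketch of the absolute-address trick: only FIXED_SYMBOL_ABS_ADDR is named above, so DEFINE_FIXED_SYMBOL and the fs_start/fs_label anchor symbols below are illustrative assumptions. Because the assembler does not know where a fixed section will finally be linked, the absolute address is expressed as an offset from an anchor whose link-time location is known:

    /* Define label_absolute as the link-time address of 'label',
     * computed relative to a fixed anchor (fs_label placed at fs_start).
     * Anchor symbols are assumptions, not the exact kernel names. */
    #define DEFINE_FIXED_SYMBOL(label) \
            label##_absolute = (label - fs_label + fs_start)

    /* Use the precomputed absolute address as an ordinary constant */
    #define FIXED_SYMBOL_ABS_ADDR(label) (label##_absolute)

head_64.S can then reference FIXED_SYMBOL_ABS_ADDR(__after_prom_start) as a plain assemble-time constant, which both old and new toolchains resolve.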
-
Committed by Nicholas Piggin

Move the exception handler alignment directives into the head-64.h macros, because they will no longer work in-place after the next patch. This slightly changes which functions have alignments applied, and therefore code generation, which is why it was not done initially (see the earlier patch).

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
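An illustrative before/after of this change; the macro name and body are assumptions (the exact macros belong to head-64.h), and some_handler_common is a hypothetical label:

    /* Before: alignment written at each use site in the .S file */
            .align  7
    some_handler_common:
            ...

    /* After (shape illustrative): the directive lives inside the
     * macro, so a handler keeps its alignment when the macro places
     * it into a section */
    #define EXC_COMMON_BEGIN(name) \
            .align  7;             \
            .global name;          \
    name: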
-
Committed by Michael Ellerman

Create arch/powerpc/include/asm/head-64.h with macros that specify an exception vector (name, type, location), which will be used to label and lay out exceptions into the object file. Naming is moved out of exception-64s.h, which is used to specify the implementation of exception handlers.

objdump of the generated code in the exception vectors is unchanged except for names. Alignment directives scattered around are annoying, but are done this way so that disassembly can verify identical instruction generation before and after the patch. These get cleaned up in a future patch.

We change the way KVMTEST works, explicitly passing EXC_HV or EXC_STD rather than overloading the trap number. This removes the need to have SOFTEN values for the overloaded trap numbers, e.g. 0x502.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
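A hedged sketch of what such vector macros might look like; the names, arguments, and bodies below are illustrative, not the exact kernel API. The point is that one macro invocation carries the name, type, and fixed location, and expands to both the label and the section placement:

    /* Declare a real-mode exception vector 'name' occupying
     * [start, end) in the fixed real-vectors section (illustrative) */
    #define EXC_REAL_BEGIN(name, start, end) \
            FIXED_SECTION_ENTRY_BEGIN_LOCATION(real_vectors, exc_##name, start)

    #define EXC_REAL_END(name, start, end) \
            FIXED_SECTION_ENTRY_END_LOCATION(real_vectors, exc_##name, end)

    /* KVMTEST takes the exception type explicitly instead of an
     * overloaded trap number such as 0x502 (argument order assumed): */
            KVMTEST(EXC_HV, 0x500)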
-
- 23 Sep 2016, 2 commits
-
-
Committed by Nicholas Piggin

When we originally added the ability to split the exception vectors from the kernel (commit 1f6a93e4 ("powerpc: Make it possible to move the interrupt handlers away from the kernel", 2008-09-15)), the LOAD_HANDLER() macro used an addi instruction to compute the offset of the common handler from the kernel base address. Using addi meant the handler had to be within 32K of the kernel base address, because the addi instruction takes a signed immediate value. That necessitated creating a trampoline for the system call handler, because system_call_common (in entry_64.S) is not linked within 32K of the kernel base address.

Later, in commit 61e2390e ("powerpc: Make load_hander handle upto 64k offset", 2012-11-15), we changed LOAD_HANDLER to take a 64K offset, by changing it to use ori.

Although system_call_common is not in head_64.S or exceptions-64s.S, it is included in head-y, which causes it to be linked early in the kernel text, so in practice it ends up below 64K. Additionally, if it can't be placed below 64K, the linker will fail with a "relocation truncated to fit" error. So remove the trampoline.

Newer toolchains are able to work out that the ori in LOAD_HANDLER only takes a 16-bit offset, and so they generate a 16-bit relocation. Older toolchains (binutils 2.22 at least) are not so smart, so we have to add the @l annotation to tell the assembler to generate a 16-bit relocation.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
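A sketch of the two macro generations described above; the bodies are illustrative, but the instruction semantics are the point: addi sign-extends its 16-bit immediate, limiting the offset to +/-32K, while ori treats it as unsigned, allowing the full 64K:

    /* Old form: signed 16-bit immediate, handler within 32K of base */
    #define LOAD_HANDLER(reg, label) \
            addi    reg,reg,(label)-_stext

    /* New form: unsigned 16-bit immediate, handler up to 64K away.
     * @l makes older assemblers (e.g. binutils 2.22) emit a 16-bit
     * relocation instead of one that is "truncated to fit". */
    #define LOAD_HANDLER(reg, label) \
            ori     reg,reg,((label)-_stext)@l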
-
Committed by Nicholas Piggin

The 0xf80 hv_facility_unavailable trampoline branches to the 0xf60 handler. This works because they both do the same thing, but it should be fixed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 20 Sep 2016, 1 commit
-
-
Committed by Nicholas Piggin

The mflr r10 instruction was left over from when the code used LR to branch to system_call_entry from the exception handler. That was changed by commit 6a404806 ("powerpc: Avoid link stack corruption in MMU on syscall entry path") to use the count register. The value is never used now, so mflr can be removed, and r10 can be used for storage rather than spilling to the SPR scratch register. The scratch register spill causes a long pipeline stall due to the SPR read after write.

This change brings the getppid syscall cost from 406 down to 376 cycles on POWER8. getppid for the non-relocatable case is 371 cycles.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 13 Sep 2016, 2 commits
-
-
Committed by Michael Ellerman

The LOAD_HANDLER macro requires that you have previously loaded "reg" with PACAKBASE. Although that gives callers flexibility to get PACAKBASE in some interesting way, none of the callers actually do that. So fold the load of PACAKBASE into the macro, making it simpler for callers to use correctly.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nick Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
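A sketch of the folded macro, with the body illustrative; the only change the commit describes is that the PACAKBASE load moves inside, so callers cannot forget it:

    /* reg is clobbered; r13 must hold the paca pointer as usual */
    #define LOAD_HANDLER(reg, label) \
            ld      reg,PACAKBASE(r13);     /* was the caller's job */ \
            ori     reg,reg,(label)-_stext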
-
Committed by Paul Mackerras

Currently, if userspace or the kernel accesses a completely bogus address, for example with any of bits 46-59 set, we first take an SLB miss interrupt, install a corresponding SLB entry with VSID 0, retry the instruction, then take a DSI/ISI interrupt because there is no HPT entry mapping the address. However, by the time of the second interrupt, the Come-From Address Register (CFAR) has been overwritten by the rfid instruction at the end of the SLB miss interrupt handler. Since bogus accesses can often be caused by a function return after the stack has been overwritten, the CFAR value would be very useful, as it could indicate which function's return led to the bogus address.

This patch adds code to create a full exception frame in the SLB miss handler in the case of a bogus address, rather than inserting an SLB entry with a zero VSID field. Then we call a new slb_miss_bad_addr() function in C code, which delivers a signal for a user access or creates an oops for a kernel access. In the latter case the oops message will show the CFAR value at the time of the access.

In the case of the radix MMU, a segment miss interrupt indicates an access outside the ranges mapped by the page tables. Previously this was handled by the code for an unrecoverable SLB miss (one with MSR[RI] = 0), which is not really correct. With this patch, we now handle these interrupts with slb_miss_bad_addr(), which is much more consistent.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
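A minimal sketch of the new C handler, assuming the usual powerpc trap plumbing (the commit text names the function but not its body; the calls below are existing kernel helpers):

    void slb_miss_bad_addr(struct pt_regs *regs)
    {
            if (user_mode(regs))
                    /* user access: deliver a signal with the faulting address */
                    _exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
            else
                    /* kernel access: oops; the report shows the saved CFAR */
                    bad_page_fault(regs, regs->dar, SIGSEGV);
    }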
-
- 22 Aug 2016, 2 commits
-
-
Committed by Nicholas Piggin

MCE must not enable MSR_RI until PACA_EXMC is no longer being used.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Nicholas Piggin

MCE must not use PACA_EXGEN. When a general exception enables MSR_RI, that means SPRN_SRR[01] and SPRN_SPRG are no longer used; however, the PACA save area is still in use.

Acked-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 09 Aug 2016, 1 commit
-
-
Committed by Mahesh Salgaonkar

The current implementation of MCE early handling modifies the CR0/1 registers without saving their old values. Fix this by moving the early check for power-saving mode to machine_check_handle_early().

Power Architecture 2.06 or later allows the possibility of getting a machine check while in nap/sleep/winkle. The last bit of HSPRG0 is set to 1 if the thread was woken up from winkle. Hence, clear the last bit of HSPRG0 (r13) before the MCE handler starts using it as the paca pointer.

Also, the current code always puts the thread into nap state, irrespective of which idle state it woke up from. Fix that by looking at paca->thread_idle_state and putting the thread back into the same state it came from.

Fixes: 1c51089f ("powerpc/book3s: Return from interrupt if coming from evil context.")
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
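An illustrative fragment of the winkle fixup; only the requirement to clear the last bit of HSPRG0 (r13) comes from the commit text, and the instruction choice is an assumption:

    GET_PACA(r13)           /* HSPRG0 -> r13 */
    clrrdi  r13,r13,1       /* clear bit 0, set when woken from winkle */
    SET_PACA(r13)           /* r13 is now a valid paca pointer */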
-
- 01 Aug 2016, 1 commit
-
-
Committed by Aneesh Kumar K.V

MMU feature bits are defined such that we use the lower half to represent MMU family features. Remove the strict half-and-half split, and also move Radix to an MMU family feature: Radix introduces a new MMU model and, strictly speaking, is a new MMU family. This also frees up bits which can be used for individual features later.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
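A sketch of the resulting layout, with illustrative bit values: family bits are no longer confined to the lower half, and Radix sits alongside the hash-family bit rather than among the individual features:

    /* MMU families (values illustrative) */
    #define MMU_FTR_HPTE_TABLE      ASM_CONST(0x00000001)
    #define MMU_FTR_TYPE_RADIX      ASM_CONST(0x00000040)  /* now a family */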
-
- 17 Jul 2016, 2 commits
-
-
Committed by Benjamin Herrenschmidt

This will deliver external interrupts from the XIVE to the hypervisor. We treat it as a normal external interrupt for the lazy irq-disable code (so it will be replayed as a 0x500) and route it to do_IRQ.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Benjamin Herrenschmidt

This moves the CBE RAS and facility unavailable "common" handlers down to after the FWNMI page. This frees up some room in the heavily contended space before the relocation-on vectors and before the FWNMI page. The handlers are still within 64K of __start, so CONFIG_RELOCATABLE should still work.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 15 Jul 2016, 2 commits
-
-
Committed by Shreyas B. Prabhu

Functions like power7_wakeup_loss, power7_wakeup_noloss and power7_wakeup_tb_loss are used by POWER7 and POWER8 hardware. They can also be used by POWER9. Hence rename these functions to hardware-agnostic names.

Suggested-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Shreyas B. Prabhu

In the current code, when the thread wakes up in the reset vector, some of the state-restore code and the check for whether a thread needs to branch to kvm are duplicated. Reorder the code so that this duplication is avoided. At a higher level, this is what the change looks like.

Before this patch:

    power7_wakeup_tb_loss:
        restore hypervisor state
        if (thread needed by kvm)
            goto kvm_start_guest
        restore nvgprs, cr, pc
        rfid to process context

    power7_wakeup_loss:
        restore nvgprs, cr, pc
        rfid to process context

    reset vector:
        if (waking from deep idle states)
            goto power7_wakeup_tb_loss
        else if (thread needed by kvm)
            goto kvm_start_guest
        goto power7_wakeup_loss

After this patch:

    power7_wakeup_tb_loss:
        restore hypervisor state
        return

    power7_restore_hyp_resource():
        if (waking from deep idle states)
            goto power7_wakeup_tb_loss
        return

    power7_wakeup_loss:
        restore nvgprs, cr, pc
        rfid to process context

    reset vector:
        power7_restore_hyp_resource()
        if (thread needed by kvm)
            goto kvm_start_guest
        goto power7_wakeup_loss

Reviewed-by: Paul Mackerras <paulus@samba.org>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 23 Jun 2016, 1 commit
-
-
Committed by Michael Ellerman

As part of the Radix MMU support we added some feature sections in the SLB miss handler. These are intended to catch the case that we incorrectly take an SLB miss when Radix is enabled, so that instead of crashing weirdly we bail out to a well-defined exit path and trigger an oops.

However, the way they were written meant the bailout case was enabled by default until we did CPU feature patching. On powermacs, the early debug prints in setup_system() can cause an SLB miss, which happens before code patching, and so the SLB miss handler would incorrectly bail out and crash during boot.

Fix it by inverting the sense of the feature section, so that the code which is in place at boot is correct for the hash case. Once we determine we are using Radix - which will never happen on a powermac - only then do we patch in the bailout case, which unconditionally jumps.

Fixes: caca285e ("powerpc/mm/radix: Use STD_MMU_64 to properly isolate hash related code")
Reported-by: Denis Kirjanov <kda@linux-powerpc.org>
Tested-by: Denis Kirjanov <kda@linux-powerpc.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
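An illustrative shape of the inverted feature section, using the kernel's feature-section idiom (the radix_bail label is hypothetical; the exact code is not quoted above). The first, unpatched alternative is what executes before feature patching, so it must be the hash path:

    BEGIN_MMU_FTR_SECTION
            /* default at boot: correct for hash, even before patching */
            b       slb_miss_realmode
    MMU_FTR_SECTION_ELSE
            /* patched in only once we know we are running Radix */
            b       radix_bail              /* hypothetical label */
    ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)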
-
- 20 Jun 2016, 1 commit
-
-
Committed by Mahesh Salgaonkar

When a guest is assigned to a core, it converts the host timebase (TB) into guest TB by adding the guest timebase offset before entering the guest. During guest exit it restores the guest TB to host TB. This means that under certain conditions (guest migration), host TB and guest TB can differ.

When we get an HMI for TB-related issues, the opal HMI handler tries to fix the errors and restore the correct host TB value. With no guest running, we don't have any issues. But with a guest running on the core we run into TB corruption issues.

If we get an HMI while in the guest, the current HMI handler invokes the opal hmi handler before forcing the guest to exit. The guest exit path subtracts the guest TB offset from the current TB value, which may already have been restored to the host value by the opal hmi handler. This leads to incorrect host and guest TB values.

With split-core, things become more complex: TB also gets split, and each subcore gets its own TB register. When an hmi handler fixes a TB error and restores the TB value, it affects all the TB values of sibling subcores on the same core. On TB errors, all the threads in the core get an HMI. With the existing code, the individual threads call the opal hmi handler independently, which can easily throw TB out of sync if we have a guest running on subcores. Hence we need to coordinate with all the threads before making the opal hmi handler call, followed by a TB resync.

This patch introduces a sibling subcore state structure (shared by all threads in the core) in the paca, which holds information about whether sibling subcores are in guest mode or host mode. An array in_guest[] of size MAX_SUBCORE_PER_CORE=4 is used to maintain the state of each subcore. The subcore id is used as the index into the in_guest[] array. Only the primary thread entering/exiting the guest is responsible for setting/unsetting its designated array element.

On a TB error, we get an HMI interrupt on every thread on the core. Upon HMI, this patch now forces the guest to vacate the core/subcore. The primary thread from each subcore then turns off its respective bit in the above bitmap during the guest exit path, just after the guest->host partition switch is complete. All other threads that have just exited the guest, or were already in the host, wait until all the subcores clear their respective bits. Once all the subcores have turned off their respective bits, all threads make the call to the opal hmi handler.

The opal hmi handler does not necessarily resync the TB value for every HMI interrupt; it does so only for HMIs caused by TB errors, and does not touch the TB value otherwise. Hence, to make things simpler, the primary thread calls TB resync explicitly, once per core, immediately after the opal hmi handler, instead of subtracting the guest offset from TB. The TB resync call restores the TB to the host value, so we can be sure about the TB state.

One of the primary threads exiting the guest takes up the responsibility of calling TB resync. It uses one of the top bits (bit 63) of the subcore state flags bitmap to make the decision. The first primary thread (among the subcores) that is able to set the bit makes the TB resync call. All other threads wait until the TB resync is complete; once it is, all threads proceed.

Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
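A sketch of the shared per-core structure as the commit text lays it out: one slot per subcore indexed by subcore id, plus a flags word whose bit 63 elects the thread that performs the TB resync (field types are assumptions):

    #define MAX_SUBCORE_PER_CORE    4

    /* One instance per core, shared by all threads via the paca */
    struct sibling_subcore_state {
            unsigned long   flags;          /* bit 63: TB-resync owner */
            u8              in_guest[MAX_SUBCORE_PER_CORE];
    };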
-