- 03 June 2018, 40 commits
-
Committed by Christophe Leroy

When compiled with GCC 8.1, vmlinux is significantly bigger than with GCC 4.8. When looking at the generated code with objdump, we notice that all functions and loops get a 16-byte alignment. This significantly increases the size of the kernel. It is pointless and even counterproductive, as on the 8xx a 'nop' also consumes one clock cycle.

Size of vmlinux with GCC 4.8:
   text    data     bss     dec     hex filename
5801948 1626076  457796 7885820  7853fc vmlinux

Size of vmlinux with GCC 8.1:
   text    data     bss     dec     hex filename
6764592 1630652  456476 8851720  871108 vmlinux

Size of vmlinux with GCC 8.1 and this patch:
   text    data     bss     dec     hex filename
6331544 1631756  456476 8419776  8079c0 vmlinux

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

The generic csum_ipv6_magic() generates a pretty bad result:

00000000 <csum_ipv6_magic>: (PPC32)
   0:	81 23 00 00 	lwz     r9,0(r3)
   4:	81 03 00 04 	lwz     r8,4(r3)
   8:	7c e7 4a 14 	add     r7,r7,r9
   c:	7d 29 38 10 	subfc   r9,r9,r7
  10:	7d 4a 51 10 	subfe   r10,r10,r10
  14:	7d 27 42 14 	add     r9,r7,r8
  18:	7d 2a 48 50 	subf    r9,r10,r9
  1c:	80 e3 00 08 	lwz     r7,8(r3)
  20:	7d 08 48 10 	subfc   r8,r8,r9
  24:	7d 4a 51 10 	subfe   r10,r10,r10
  28:	7d 29 3a 14 	add     r9,r9,r7
  2c:	81 03 00 0c 	lwz     r8,12(r3)
  30:	7d 2a 48 50 	subf    r9,r10,r9
  34:	7c e7 48 10 	subfc   r7,r7,r9
  38:	7d 4a 51 10 	subfe   r10,r10,r10
  3c:	7d 29 42 14 	add     r9,r9,r8
  40:	7d 2a 48 50 	subf    r9,r10,r9
  44:	80 e4 00 00 	lwz     r7,0(r4)
  48:	7d 08 48 10 	subfc   r8,r8,r9
  4c:	7d 4a 51 10 	subfe   r10,r10,r10
  50:	7d 29 3a 14 	add     r9,r9,r7
  54:	7d 2a 48 50 	subf    r9,r10,r9
  58:	81 04 00 04 	lwz     r8,4(r4)
  5c:	7c e7 48 10 	subfc   r7,r7,r9
  60:	7d 4a 51 10 	subfe   r10,r10,r10
  64:	7d 29 42 14 	add     r9,r9,r8
  68:	7d 2a 48 50 	subf    r9,r10,r9
  6c:	80 e4 00 08 	lwz     r7,8(r4)
  70:	7d 08 48 10 	subfc   r8,r8,r9
  74:	7d 4a 51 10 	subfe   r10,r10,r10
  78:	7d 29 3a 14 	add     r9,r9,r7
  7c:	7d 2a 48 50 	subf    r9,r10,r9
  80:	81 04 00 0c 	lwz     r8,12(r4)
  84:	7c e7 48 10 	subfc   r7,r7,r9
  88:	7d 4a 51 10 	subfe   r10,r10,r10
  8c:	7d 29 42 14 	add     r9,r9,r8
  90:	7d 2a 48 50 	subf    r9,r10,r9
  94:	7d 08 48 10 	subfc   r8,r8,r9
  98:	7d 4a 51 10 	subfe   r10,r10,r10
  9c:	7d 29 2a 14 	add     r9,r9,r5
  a0:	7d 2a 48 50 	subf    r9,r10,r9
  a4:	7c a5 48 10 	subfc   r5,r5,r9
  a8:	7c 63 19 10 	subfe   r3,r3,r3
  ac:	7d 29 32 14 	add     r9,r9,r6
  b0:	7d 23 48 50 	subf    r9,r3,r9
  b4:	7c c6 48 10 	subfc   r6,r6,r9
  b8:	7c 63 19 10 	subfe   r3,r3,r3
  bc:	7c 63 48 50 	subf    r3,r3,r9
  c0:	54 6a 80 3e 	rotlwi  r10,r3,16
  c4:	7c 63 52 14 	add     r3,r3,r10
  c8:	7c 63 18 f8 	not     r3,r3
  cc:	54 63 84 3e 	rlwinm  r3,r3,16,16,31
  d0:	4e 80 00 20 	blr

0000000000000000 <.csum_ipv6_magic>: (PPC64)
   0:	81 23 00 00 	lwz     r9,0(r3)
   4:	80 03 00 04 	lwz     r0,4(r3)
   8:	81 63 00 08 	lwz     r11,8(r3)
   c:	7c e7 4a 14 	add     r7,r7,r9
  10:	7f 89 38 40 	cmplw   cr7,r9,r7
  14:	7d 47 02 14 	add     r10,r7,r0
  18:	7d 30 10 26 	mfocrf  r9,1
  1c:	55 29 f7 fe 	rlwinm  r9,r9,30,31,31
  20:	7d 4a 4a 14 	add     r10,r10,r9
  24:	7f 80 50 40 	cmplw   cr7,r0,r10
  28:	7d 2a 5a 14 	add     r9,r10,r11
  2c:	80 03 00 0c 	lwz     r0,12(r3)
  30:	81 44 00 00 	lwz     r10,0(r4)
  34:	7d 10 10 26 	mfocrf  r8,1
  38:	55 08 f7 fe 	rlwinm  r8,r8,30,31,31
  3c:	7d 29 42 14 	add     r9,r9,r8
  40:	81 04 00 04 	lwz     r8,4(r4)
  44:	7f 8b 48 40 	cmplw   cr7,r11,r9
  48:	7d 29 02 14 	add     r9,r9,r0
  4c:	7d 70 10 26 	mfocrf  r11,1
  50:	55 6b f7 fe 	rlwinm  r11,r11,30,31,31
  54:	7d 29 5a 14 	add     r9,r9,r11
  58:	7f 80 48 40 	cmplw   cr7,r0,r9
  5c:	7d 29 52 14 	add     r9,r9,r10
  60:	7c 10 10 26 	mfocrf  r0,1
  64:	54 00 f7 fe 	rlwinm  r0,r0,30,31,31
  68:	7d 69 02 14 	add     r11,r9,r0
  6c:	7f 8a 58 40 	cmplw   cr7,r10,r11
  70:	7c 0b 42 14 	add     r0,r11,r8
  74:	81 44 00 08 	lwz     r10,8(r4)
  78:	7c f0 10 26 	mfocrf  r7,1
  7c:	54 e7 f7 fe 	rlwinm  r7,r7,30,31,31
  80:	7c 00 3a 14 	add     r0,r0,r7
  84:	7f 88 00 40 	cmplw   cr7,r8,r0
  88:	7d 20 52 14 	add     r9,r0,r10
  8c:	80 04 00 0c 	lwz     r0,12(r4)
  90:	7d 70 10 26 	mfocrf  r11,1
  94:	55 6b f7 fe 	rlwinm  r11,r11,30,31,31
  98:	7d 29 5a 14 	add     r9,r9,r11
  9c:	7f 8a 48 40 	cmplw   cr7,r10,r9
  a0:	7d 29 02 14 	add     r9,r9,r0
  a4:	7d 70 10 26 	mfocrf  r11,1
  a8:	55 6b f7 fe 	rlwinm  r11,r11,30,31,31
  ac:	7d 29 5a 14 	add     r9,r9,r11
  b0:	7f 80 48 40 	cmplw   cr7,r0,r9
  b4:	7d 29 2a 14 	add     r9,r9,r5
  b8:	7c 10 10 26 	mfocrf  r0,1
  bc:	54 00 f7 fe 	rlwinm  r0,r0,30,31,31
  c0:	7d 29 02 14 	add     r9,r9,r0
  c4:	7f 85 48 40 	cmplw   cr7,r5,r9
  c8:	7c 09 32 14 	add     r0,r9,r6
  cc:	7d 50 10 26 	mfocrf  r10,1
  d0:	55 4a f7 fe 	rlwinm  r10,r10,30,31,31
  d4:	7c 00 52 14 	add     r0,r0,r10
  d8:	7f 80 30 40 	cmplw   cr7,r0,r6
  dc:	7d 30 10 26 	mfocrf  r9,1
  e0:	55 29 ef fe 	rlwinm  r9,r9,29,31,31
  e4:	7c 09 02 14 	add     r0,r9,r0
  e8:	54 03 80 3e 	rotlwi  r3,r0,16
  ec:	7c 03 02 14 	add     r0,r3,r0
  f0:	7c 03 00 f8 	not     r3,r0
  f4:	78 63 84 22 	rldicl  r3,r3,48,48
  f8:	4e 80 00 20 	blr

This patch implements csum_ipv6_magic() in assembly for both PPC32 and PPC64.

Link: https://github.com/linuxppc/linux/issues/9
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
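For reference, the function being rewritten folds the source and destination IPv6 addresses, the length, the protocol and an initial checksum into a 16-bit one's complement sum. A minimal C sketch of that computation, ignoring kernel types and byte-order details (an illustration of the algorithm, not the actual csum_ipv6_magic() implementation):

  #include <stdint.h>

  /* One's-complement accumulation over the IPv6 pseudo-header. */
  static uint16_t ipv6_pseudo_csum(const uint32_t saddr[4], const uint32_t daddr[4],
                                   uint32_t len, uint32_t proto, uint32_t csum)
  {
          uint64_t sum = csum;
          int i;

          for (i = 0; i < 4; i++)
                  sum += (uint64_t)saddr[i] + daddr[i];   /* 64-bit accumulator keeps the carries */
          sum += len;
          sum += proto;

          sum = (sum & 0xffffffff) + (sum >> 32);         /* fold 64 -> 32 bits */
          sum = (sum & 0xffffffff) + (sum >> 32);
          sum = (sum & 0xffff) + (sum >> 16);             /* fold 32 -> 16 bits */
          sum = (sum & 0xffff) + (sum >> 16);
          return (uint16_t)~sum;                          /* one's complement of the result */
  }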
-
Committed by Christophe Leroy

Improve __csum_partial() by interleaving loads and adds. On an 8xx it brings neither improvement nor degradation. On an 83xx it brings a 25% improvement.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

commit 87a156fb ("Align hot loops of some string functions") degraded the performance of string functions by adding useless nops.

A simple benchmark on an 8xx calling memchr() 100000 times with a match on the first byte runs in 41668 TB ticks before this patch and in 35986 TB ticks after it, an improvement of approximately 10%.

Another benchmark doing the same with a memchr() matching the 128th byte runs in 1011365 TB ticks before this patch and 1005682 TB ticks after it. So regardless of the number of loops, removing those useless nops improves the test by 5683 TB ticks.

Fixes: 87a156fb ("Align hot loops of some string functions")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

Use fault_in_pages_readable() to prefault the user context instead of open coding it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
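A hedged sketch of the kind of replacement described, assuming a hypothetical user-context pointer ctx (the surrounding code and names are illustrative, not the actual signal-handling patch):

  /* Before: open-coded prefault touching the first and last byte. */
  unsigned char tmp;
  __get_user(tmp, (const unsigned char __user *)ctx);
  __get_user(tmp, (const unsigned char __user *)ctx + sizeof(*ctx) - 1);

  /* After: let the generic pagemap helper fault the whole range in. */
  if (fault_in_pages_readable((const char __user *)ctx, sizeof(*ctx)))
          return -EFAULT;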
-
Committed by Christophe Leroy

The 885 family of processors doesn't have the Real Time Clock.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

The variable div is set but never used. Remove it.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

reloc_offset() is the same as add_reloc_offset(0).

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

The current implementation of from64to32() gives a poor result:

0000000000000270 <.from64to32>:
 270:	38 00 ff ff 	li      r0,-1
 274:	78 69 00 22 	rldicl  r9,r3,32,32
 278:	78 00 00 20 	clrldi  r0,r0,32
 27c:	7c 60 00 38 	and     r0,r3,r0
 280:	7c 09 02 14 	add     r0,r9,r0
 284:	78 09 00 22 	rldicl  r9,r0,32,32
 288:	7c 00 4a 14 	add     r0,r0,r9
 28c:	78 03 00 20 	clrldi  r3,r0,32
 290:	4e 80 00 20 	blr

This patch modifies from64to32() to operate in the same spirit as csum_fold(): it swaps the two 32-bit halves of sum and then adds the result to the unswapped sum. If there is a carry from adding the two 32-bit halves, it will carry from the lower half into the upper half, giving us the correct sum in the upper half. The resulting code is:

0000000000000260 <.from64to32>:
 260:	78 60 00 02 	rotldi  r0,r3,32
 264:	7c 60 1a 14 	add     r3,r0,r3
 268:	78 63 00 22 	rldicl  r3,r3,32,32
 26c:	4e 80 00 20 	blr

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
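A hedged C sketch of the rotate-and-add approach described above; it mirrors the generated code shown, but treat it as an illustration rather than a quote of the patch:

  #include <linux/bitops.h>	/* ror64() */

  /* Rotate the 64-bit sum by 32 bits and add it to itself, so any carry
   * between the two halves lands in the upper half, which is then the
   * folded 32-bit result. */
  static inline u32 from64to32_sketch(u64 x)
  {
          return (u32)((x + ror64(x, 32)) >> 32);
  }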
-
Committed by Christophe Leroy

stale_map[] bits are only set in steal_context_smp(), so on UP processors this map is useless. Only manage it for SMP processors.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

last_context is 16 on the 8xx, 65535 on the 47x and 255 on other ones. The kernel is exclusively built for the 8xx, for the 47x or for another processor, so the last context can be defined as a constant depending on the processor.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Reformat old comment]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
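A minimal sketch of what such a compile-time constant could look like, assuming the values quoted above; the macro name and placement are illustrative, not necessarily the patch's:

  #if defined(CONFIG_PPC_8xx)
  #define LAST_CONTEXT		16
  #elif defined(CONFIG_PPC_47x)
  #define LAST_CONTEXT		65535
  #else
  #define LAST_CONTEXT		255
  #endif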
-
Committed by Christophe Leroy

no_selective_tlbil, and hence the choice between steal_all_contexts() and steal_context_up(), depends on the subarch; it won't change at runtime. Only the 8xx uses steal_all_contexts(), and CONFIG_PPC_8xx is exclusive of the other processors. This patch replaces the test of the no_selective_tlbil global variable with a test of the CONFIG_PPC_8xx selection. It avoids the runtime test and removes unnecessary code.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
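A hedged sketch of the substitution described, using the context-stealing function names from the commit text; the surrounding logic is illustrative only:

  /* Compile-time choice instead of testing the no_selective_tlbil variable. */
  if (IS_ENABLED(CONFIG_PPC_8xx))
          id = steal_all_contexts();
  else
          id = steal_context_up(id);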
-
Committed by Christophe Leroy

The first context is now 1 for all supported platforms, so it can be made a constant.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Christophe Leroy

The direction is already checked in all calling functions in include/linux/dma-mapping.h and also in the called function __dma_sync(), so there is really no need to check it once more here.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ravi Bangoria

emulate_step() tests fail if VSX is not supported or is disabled:

  emulate_step_test: lxvd2x           : FAIL
  emulate_step_test: stxvd2x          : FAIL

If !CPU_FTR_VSX, emulate_step() failure is expected and the testcase should PASS with a valid justification. After the patch:

  emulate_step_test: lxvd2x           : PASS (!CPU_FTR_VSX)
  emulate_step_test: stxvd2x          : PASS (!CPU_FTR_VSX)

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Ravi Bangoria

emulate_step() does not check the runtime VSX feature flag before emulating an instruction. This causes a kernel crash when the kernel is compiled with CONFIG_VSX=y but runs on a machine where VSX is not supported or is disabled. For example, while running the emulate_step tests on a P6 machine:

  Oops: Exception in kernel mode, sig: 4 [#1]
  NIP [c000000000095c24] .load_vsrn+0x28/0x54
  LR [c000000000094bdc] .emulate_loadstore+0x167c/0x17b0
  Call Trace:
    0x40fe240c7ae147ae (unreliable)
    .emulate_loadstore+0x167c/0x17b0
    .emulate_step+0x25c/0x5bc
    .test_lxvd2x_stxvd2x+0x64/0x154
    .test_emulate_step+0x38/0x4c
    .do_one_initcall+0x5c/0x2c0
    .kernel_init_freeable+0x314/0x4cc
    .kernel_init+0x24/0x160
    .ret_from_kernel_thread+0x58/0xb4

With the fix:

  emulate_step_test: lxvd2x           : FAIL
  emulate_step_test: stxvd2x          : FAIL

Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
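A hedged sketch of the kind of guard the fix adds: refuse to emulate VSX accesses when the CPU does not have the feature, so the caller falls back to handling the real instruction. The exact placement and return convention are assumptions, not the patch itself:

  /* In the emulation path, before touching any VSX register state. */
  if (!cpu_has_feature(CPU_FTR_VSX))
          return -1;	/* cannot emulate; let the caller deal with the instruction */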
-
Committed by Ravi Bangoria

Replace the 'op->type & INSTR_TYPE_MASK' expression with the GETTYPE(op->type) macro.

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
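The macro is a thin wrapper over the existing expression; based on the description it amounts to:

  /* Extract the instruction class from an instruction_op type word. */
  #define GETTYPE(t)	((t) & INSTR_TYPE_MASK)

so call sites read GETTYPE(op->type) instead of open-coding the mask.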
-
Committed by Michael Neuling

This tests perf hardware breakpoints (i.e. PERF_TYPE_BREAKPOINT) on powerpc.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
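A hedged user-space sketch of the kind of event such a selftest exercises: a hardware write breakpoint on a data address, opened through perf_event_open(2). This is an illustration of the API, not the selftest itself:

  #include <string.h>
  #include <unistd.h>
  #include <sys/syscall.h>
  #include <linux/perf_event.h>
  #include <linux/hw_breakpoint.h>

  /* Returns a perf fd whose counter increments on writes to *addr. */
  static int open_write_breakpoint(void *addr)
  {
          struct perf_event_attr attr;

          memset(&attr, 0, sizeof(attr));
          attr.type = PERF_TYPE_BREAKPOINT;
          attr.size = sizeof(attr);
          attr.bp_type = HW_BREAKPOINT_W;		/* trigger on writes */
          attr.bp_addr = (unsigned long)addr;
          attr.bp_len = HW_BREAKPOINT_LEN_8;	/* watch an 8-byte word */
          attr.exclude_kernel = 1;
          attr.exclude_hv = 1;

          /* pid = 0 (this task), cpu = -1 (any), group_fd = -1, flags = 0 */
          return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
  }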
-
Committed by Michal Suchanek

We now have barrier_nospec as a mitigation, so print it in cpu_show_spectre_v1() when it is enabled.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Our syscall entry is done in assembly, so patch in an explicit barrier_nospec. Based on a patch by Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michael Ellerman

Based on the x86 commit doing the same. See commit 304ec1b0 ("x86/uaccess: Use __uaccess_begin_nospec() and uaccess_try_nospec") and b3bbfb3f ("x86: Introduce __uaccess_begin_nospec() and uaccess_try_nospec") for more detail. In all cases we are ordering the load from the potentially user-controlled pointer vs a previous branch based on an access_ok() check or similar. Based on a patch from Michal Suchanek.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
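A hedged sketch of the ordering pattern described (illustrative, not the actual uaccess.h change; access_ok() is shown in its three-argument form from that era):

  long get_user_u64_nospec(u64 *dst, const u64 __user *uptr)
  {
          if (!access_ok(VERIFY_READ, uptr, sizeof(*uptr)))
                  return -EFAULT;

          barrier_nospec();	/* order the access_ok() check vs the load below */
          return __get_user(*dst, uptr);
  }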
-
Committed by Michal Suchanek

Check what firmware told us and enable or disable the barrier_nospec as appropriate. We err on the side of enabling the barrier, as it is a no-op on older systems; see the comment for more detail.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michal Suchanek

Note that, unlike RFI which is patched only in the kernel, the nospec state reflects the settings at the time the module was loaded. Iterating all modules and re-patching every time the settings change is not implemented. Based on the lwsync patching.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michal Suchanek

Based on the RFI patching. This is required to be able to disable the speculation barrier. Only one barrier type is supported, and it does nothing when the firmware does not enable it. Also, re-patching modules is not supported. So the only meaningful thing that can be done is patching out the speculation barrier at boot when the user says it is not wanted.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Michal Suchanek

A no-op form of ori (an or immediate of 0 into r31, with the result stored in r31) has been re-tasked as a speculation barrier. The instruction only acts as a barrier on newer machines with appropriate firmware support. On older CPUs it remains a harmless no-op. Implement barrier_nospec using this instruction.

mpe: The semantics of the instruction are believed to be that it prevents execution of subsequent instructions until preceding branches have been fully resolved and are no longer executing speculatively. There is no further documentation available at this time.

Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
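A hedged sketch of how such a barrier is typically exposed to C code: an inline-asm wrapper around the repurposed no-op (the real version additionally records each site in a section so it can be patched out at boot, as described in the following commits):

  #define barrier_nospec_sketch()	asm volatile("ori 31,31,0" : : : "memory")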
-
Committed by Michael Ellerman

This file now has new code in it written by Nick and me, so switch to an SPDX tag.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
-
Committed by Michael Ellerman

This allows e.g. the RCU stall detector, or the soft/hard lockup detectors, to trigger a backtrace on all CPUs. We implement this by sending a "safe" NMI, which will actually only send an IPI. Unfortunately the generic code prints "NMI", so that's a little confusing, but we can probably live with it.

If one of the CPUs doesn't respond to the IPI, we then print some info from its paca and do a backtrace based on its saved_r1. Example output:

  INFO: rcu_sched detected stalls on CPUs/tasks:
    2-...0: (0 ticks this GP) idle=1be/1/4611686018427387904 softirq=1055/1055 fqs=25735
    (detected by 4, t=58847 jiffies, g=58, c=57, q=1258)
  Sending NMI from CPU 4 to CPUs 2:
  CPU 2 didn't respond to backtrace IPI, inspecting paca.
  irq_soft_mask: 0x01 in_mce: 0 in_nmi: 0 current: 3623 (bash)
  Back trace of paca->saved_r1 (0xc0000000e1c83ba0) (possibly stale):
  Call Trace:
  [c0000000e1c83ba0] [0000000000000014] 0x14 (unreliable)
  [c0000000e1c83bc0] [c000000000765798] lkdtm_do_action+0x48/0x80
  [c0000000e1c83bf0] [c000000000765a40] direct_entry+0x110/0x1b0
  [c0000000e1c83c90] [c00000000058e650] full_proxy_write+0x90/0xe0
  [c0000000e1c83ce0] [c0000000003aae3c] __vfs_write+0x6c/0x1f0
  [c0000000e1c83d80] [c0000000003ab214] vfs_write+0xd4/0x240
  [c0000000e1c83dd0] [c0000000003ab5cc] ksys_write+0x6c/0x110
  [c0000000e1c83e30] [c00000000000b860] system_call+0x58/0x6c

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
-
Committed by Michael Ellerman

Currently the options we have for sending NMIs are not necessarily safe; that is, they can potentially interrupt a CPU in a non-recoverable region of code, meaning the kernel must then panic(). But we'd like to use smp_send_nmi_ipi() to do cross-CPU calls in situations where we don't want to risk a panic(), because it doesn't have the requirement that interrupts must be enabled like smp_call_function() does.

So add an API for the caller to indicate that it wants to use the NMI infrastructure but doesn't want to do anything "unsafe". Currently that is implemented by not actually calling cause_nmi_ipi(), instead falling back to an IPI. In future we can pass the safe parameter down to cause_nmi_ipi() and the individual backends can potentially take it into account before deciding what to do.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
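A hedged sketch of what the "safe" variant might look like; the names and signature below are assumptions for illustration, not the actual powerpc API:

  /* Never invokes the real NMI mechanism; falls back to an ordinary IPI. */
  static int smp_send_nmi_ipi_safe(int cpu, void (*fn)(struct pt_regs *), u64 delay_us)
  {
          return __smp_send_nmi_ipi(cpu, fn, delay_us, true /* safe */);
  }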
-
Committed by Michael Ellerman

A CPU that gets stuck with interrupts hard disabled can be difficult to debug, as on some platforms we have no way to interrupt the CPU to find out what it's doing. A stop-gap is to have the CPU save its stack pointer (r1) in its paca when it hard disables interrupts. That way, if we can't interrupt it, we can at least trace the stack based on where it last disabled interrupts.

In some cases that will be total junk, but the stack trace code should handle that. In the simple case of a CPU that disables interrupts and then gets stuck in a loop, the stack trace should be informative.

We could clear the saved stack pointer when we enable interrupts, but that loses information which could be useful if we have nothing else to go on.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
-
Committed by Michael Ellerman

set_fs() sets the addr_limit, which is used in access_ok() to determine if an address is a user or kernel address.

Some code paths use set_fs() to temporarily elevate the addr_limit so that kernel code can read/write kernel memory as if it were user memory. That is fine as long as the code can't ever return to userspace with the addr_limit still elevated. If that did happen, then userspace could read/write kernel memory as if it were user memory, e.g. just with write(2). In case it's not clear, that is very bad. It has also happened in the past due to bugs.

Commit 5ea0727b ("x86/syscalls: Check address limit on user-mode return") added a mechanism to check the addr_limit value before returning to userspace. Any call to set_fs() sets a thread flag, TIF_FSCHECK, and if we see that on the return to userspace we go out of line to check that the addr_limit value is not elevated.

For further info see the above commit, as well as:
  https://lwn.net/Articles/722267/
  https://bugs.chromium.org/p/project-zero/issues/detail?id=990

Verified to work on 64-bit Book3S using a POC that objdumps the system call handler, and a modified lkdtm_CORRUPT_USER_DS() that doesn't kill the caller.

Before:

  $ sudo ./test-tif-fscheck
  ...
  0000000000000000 <.data>:
       0:	e1 f7 8a 79 	rldicl. r10,r12,30,63
       4:	80 03 82 40 	bne     0x384
       8:	00 40 8a 71 	andi.   r10,r12,16384
       c:	78 0b 2a 7c 	mr      r10,r1
      10:	10 fd 21 38 	addi    r1,r1,-752
      14:	08 00 c2 41 	beq-    0x1c
      18:	58 09 2d e8 	ld      r1,2392(r13)
      1c:	00 00 41 f9 	std     r10,0(r1)
      20:	70 01 61 f9 	std     r11,368(r1)
      24:	78 01 81 f9 	std     r12,376(r1)
      28:	70 00 01 f8 	std     r0,112(r1)
      2c:	78 00 41 f9 	std     r10,120(r1)
      30:	20 00 82 41 	beq     0x50
      34:	a6 42 4c 7d 	mftb    r10

After:

  $ sudo ./test-tif-fscheck
  Killed

And in dmesg:

  Invalid address limit on user-mode return
  WARNING: CPU: 1 PID: 3689 at ../include/linux/syscalls.h:260 do_notify_resume+0x140/0x170
  ...
  NIP [c00000000001ee50] do_notify_resume+0x140/0x170
  LR [c00000000001ee4c] do_notify_resume+0x13c/0x170
  Call Trace:
    do_notify_resume+0x13c/0x170 (unreliable)
    ret_from_except_lite+0x70/0x74

Performance overhead is essentially zero in the usual case, because the bit is checked as part of the existing _TIF_USER_WORK_MASK check.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
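A hedged sketch of the check described above, modelled on the generic addr_limit_user_check() helper referenced in the dmesg output (include/linux/syscalls.h); the exact powerpc call site and details are assumptions:

  static inline void addr_limit_check_sketch(void)
  {
          if (!test_thread_flag(TIF_FSCHECK))
                  return;

          /* Never return to user with an elevated address limit. */
          if (WARN(!segment_eq(get_fs(), USER_DS),
                   "Invalid address limit on user-mode return"))
                  force_sig(SIGKILL, current);

          clear_thread_flag(TIF_FSCHECK);
  }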
-
Committed by Michael Ellerman

It's called 'fs' for historical reasons; it's named after the x86 'FS' register. But we don't have to use that name for the member of thread_struct, and in fact arch/x86 doesn't even call it 'fs' anymore. So rename it to 'addr_limit', which better reflects what it's used for and is also the name used on other arches.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Al Viro

In PPC_PTRACE_GETHWDBGINFO and PPC_PTRACE_SETHWDEBUG we do an access_ok() check and then __copy_{from,to}_user(). Instead we should just use copy_{from,to}_user(), which does all that for us and is less error prone.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Samuel Mendoza-Jonas <sam@mendozajonas.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
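A hedged before/after sketch of the change described; the struct and variable names follow the GETHWDBGINFO path but are meant as illustration, not a quote of the patch:

  /* Before: explicit check plus the "raw" copy. */
  if (!access_ok(VERIFY_WRITE, datavp, sizeof(dbginfo)))
          return -EFAULT;
  if (__copy_to_user(datavp, &dbginfo, sizeof(dbginfo)))
          return -EFAULT;

  /* After: copy_to_user() performs the access_ok() check itself. */
  if (copy_to_user(datavp, &dbginfo, sizeof(dbginfo)))
          return -EFAULT;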
-
Committed by Sam Bobroff

The EEH report functions now share a fair bit of code around the start and end of each function. So factor out as much as possible, and move the traversal into a custom function. This also allows accurate debug output to be generated more easily.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Format with clang-format]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

If a device without a driver is recovered via EEH, the flag EEH_DEV_NO_HANDLER is incorrectly left set on the device after recovery, because the test in eeh_report_resume() for the existence of a bound driver is done before the flag is cleared. If a driver is later bound and an EEH error is experienced again, some of the driver's EEH handlers are not called. To correct this, clear the flag unconditionally after EEH processing is complete.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

To ease future refactoring, extract the calls to eeh_enable_irq() and eeh_disable_irq() from the various report functions. This makes the report functions' initial sequences more similar, as well as making the IRQ changes visible when reading eeh_handle_normal_event().

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

To ease future refactoring, extract the setting of the channel state from the report functions out into their own functions. This increases the amount of code that is identical across all of the report functions.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

The same test is done in every EEH report function, so factor it out. Since eeh_dev_removed() needs to be moved higher up in the file, simplify it a little while we're at it.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
Committed by Sam Bobroff

Add a for_each-style macro for iterating through PEs without the boilerplate required by a traversal function. eeh_pe_next() is now exported, as it is now used directly in place.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
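A hedged sketch of such an iterator built on the now-exported eeh_pe_next(); the real macro may differ in detail:

  #define eeh_for_each_pe(root, pe) \
          for ((pe) = (root); (pe); (pe) = eeh_pe_next((pe), (root)))

  /* Usage: walk every PE below root without a traversal callback. */
  eeh_for_each_pe(root, pe)
          handle_one_pe(pe);	/* handle_one_pe() is a hypothetical helper */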
-
Committed by Sam Bobroff

As EEH event handling progresses, a cumulative result of type pci_ers_result is built up by (some of) the eeh_report_*() functions, using either:

  if (rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;
  if (*res == PCI_ERS_RESULT_NONE) *res = rc;

or:

  if ((*res == PCI_ERS_RESULT_NONE) ||
      (*res == PCI_ERS_RESULT_RECOVERED)) *res = rc;
  if (*res == PCI_ERS_RESULT_DISCONNECT &&
      rc == PCI_ERS_RESULT_NEED_RESET) *res = rc;

(where *res is the accumulator).

However, the intent is not immediately clear and the result in some situations is order dependent. Address this by assigning a priority to each result value and always merging to the highest priority. This makes the intent clear and provides a stable value for all orderings.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
[mpe: Minor formatting (clang-format)]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
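A hedged sketch of a priority-based merge along the lines described; the ordering below is an assumption for illustration, not the exact table from the patch:

  static int pci_ers_result_prio(enum pci_ers_result r)
  {
          switch (r) {
          case PCI_ERS_RESULT_NONE:		return 1;
          case PCI_ERS_RESULT_NO_AER_DRIVER:	return 2;
          case PCI_ERS_RESULT_RECOVERED:	return 3;
          case PCI_ERS_RESULT_CAN_RECOVER:	return 4;
          case PCI_ERS_RESULT_NEED_RESET:	return 5;
          case PCI_ERS_RESULT_DISCONNECT:	return 6;
          default:				return 0;
          }
  }

  /* Always keep the highest-priority result seen so far. */
  static enum pci_ers_result pci_ers_merge(enum pci_ers_result old,
                                           enum pci_ers_result new)
  {
          return pci_ers_result_prio(new) > pci_ers_result_prio(old) ? new : old;
  }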
-
Committed by Sam Bobroff

To aid debugging, add a message to show when EEH processing for a PE will be done at the device's parent, rather than directly at the device.

Signed-off-by: Sam Bobroff <sbobroff@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-