- 07 March 2018, 1 commit
-
-
Authored by Will Deacon
Using arm64_force_sig_info means that printing messages about unhandled signals is dealt with for us, so use that in preference to force_sig_info and remove any homebrew printing code. Signed-off-by: Will Deacon <will.deacon@arm.com>
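As a rough sketch of the pattern (the message string and call site here are illustrative assumptions, not the actual diff), the change collapses open-coded printing plus force_sig_info() into the single helper:

```c
/* Before: each caller printed its own unhandled-signal message ... */
pr_info("%s[%d]: unhandled exception\n", current->comm, task_pid_nr(current));
force_sig_info(info.si_signo, &info, current);

/* After: arm64_force_sig_info() prints the unhandled-signal message and
 * delivers the signal in one call. */
arm64_force_sig_info(&info, "unhandled exception", current);
```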
-
- 22 February 2018, 1 commit
-
-
Authored by Ingo Molnar
On lkml suggestions were made to split up such trivial typo fixes into per subsystem patches:

    --- a/arch/x86/boot/compressed/eboot.c
    +++ b/arch/x86/boot/compressed/eboot.c
    @@ -439,7 +439,7 @@ setup_uga32(void **uga_handle, unsigned long size, u32 *width, u32 *height)
            struct efi_uga_draw_protocol *uga = NULL, *first_uga;
            efi_guid_t uga_proto = EFI_UGA_PROTOCOL_GUID;
            unsigned long nr_ugas;
    -       u32 *handles = (u32 *)uga_handle;;
    +       u32 *handles = (u32 *)uga_handle;
            efi_status_t status = EFI_INVALID_PARAMETER;
            int i;

This patch is the result of the following script:

    $ sed -i 's/;;$/;/g' $(git grep -E ';;$' | grep "\.[ch]:" | grep -vwE 'for|ia64' | cut -d: -f1 | sort | uniq)

... followed by manual review to make sure it's all good. Splitting this up is just crazy talk, let's get over with this and just do it. Reported-by: Pavel Machek <pavel@ucw.cz> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 23 January 2018, 3 commits
-
-
Authored by Eric W. Biederman
There are so many places that build struct siginfo by hand that at least one of them is bound to get it wrong. A handful of cases in the kernel arguably did just that when using the errno field of siginfo to pass non-errno values to userspace. The usage is limited to a single si_code, so at least it does not mess up anything else. Encapsulate this questionable pattern in a helper function so that the userspace ABI is preserved. Update all of the places that use this pattern to use the new helper function. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
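A minimal sketch of what such a helper can look like, assuming the force_sig_ptrace_errno_trap(errno, addr) form this series introduces (the body shown is an approximation, not the exact upstream code):

```c
void force_sig_ptrace_errno_trap(int errno, void __user *addr)
{
	struct siginfo info;

	clear_siginfo(&info);
	info.si_signo = SIGTRAP;
	info.si_errno = errno;		/* carries a non-errno value by convention */
	info.si_code  = TRAP_HWBKPT;	/* the single si_code this pattern uses */
	info.si_addr  = addr;
	force_sig_info(info.si_signo, &info, current);
}
```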
-
Authored by Eric W. Biederman
The siginfo structure has all manner of holes, with the result that a structure initializer is not guaranteed to initialize all of the bits. As we have to copy the structure to userspace, don't even try to use a structure initializer. Instead use clear_siginfo followed by initializing selected fields. This gives a guarantee that uninitialized kernel memory is not copied to userspace. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
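For illustration, the before/after shape of the pattern (signal and field values are made up):

```c
struct siginfo info;

/* Before: a designated initializer does not guarantee that padding holes
 * inside struct siginfo are zeroed, so stack garbage could reach userspace:
 *
 *	struct siginfo info = { .si_signo = SIGTRAP, .si_code = TRAP_HWBKPT };
 */

/* After: zero everything first, then initialize only the fields needed. */
clear_siginfo(&info);
info.si_signo = SIGTRAP;
info.si_code  = TRAP_HWBKPT;
info.si_addr  = (void __user *)addr;
force_sig_info(info.si_signo, &info, current);
```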
-
Authored by Eric W. Biederman
Instead of jumping past the code when !is_compat_task, place all of the code inside an if (is_compat_task) block. This allows the int i variable to be properly limited to the compat block no matter how the rest of ptrace_hbptriggered changes. In a following change a new variable declaration will need to precede this code; keeping that change separate ensures the code is easy to review. Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
-
- 03 November 2017, 1 commit
-
-
Authored by Dave Martin
This patch defines and implements a new regset NT_ARM_SVE, which describes a thread's SVE register state. This allows a debugger to manipulate the SVE state, as well as being included in ELF coredumps for post-mortem debugging. Because the regset size and layout are dependent on the thread's current vector length, it is not possible to define a C struct to describe the regset contents as is done for existing regsets. Instead, and for the same reasons, NT_ARM_SVE is based on the freeform variable-layout approach used for the SVE signal frame. Additionally, to reduce debug overhead when debugging threads that might or might not have live SVE register state, NT_ARM_SVE may be presented in one of two different formats: the old struct user_fpsimd_state format is embedded for describing the state of a thread with no live SVE state, whereas a new variable-layout structure is embedded for describing live SVE state. This avoids a debugger needing to poll NT_PRFPREG in addition to NT_ARM_SVE, and allows existing userspace code to handle the non-SVE case without too much modification. For this to work, NT_ARM_SVE is defined with a fixed-format header of type struct user_sve_header, which the recipient can use to figure out the content, size and layout of the rest of the regset. Accessor macros are defined to allow the vector-length-dependent parts of the regset to be manipulated. Signed-off-by: Alan Hayward <alan.hayward@arm.com> Signed-off-by: Dave Martin <Dave.Martin@arm.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Alex Bennée <alex.bennee@linaro.org> Cc: Okamoto Takayuki <tokamoto@jp.fujitsu.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
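From the debugger side, reading the fixed-format header first is enough to discover the payload layout; a hedged userspace sketch (struct and note names per the SVE ptrace UAPI described above; error handling trimmed):

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <linux/elf.h>		/* NT_ARM_SVE */
#include <asm/ptrace.h>		/* struct user_sve_header, SVE_PT_* flags */

/* Read just the header: it reports the current vector length and the size
 * of the (variable-layout) regset payload for this thread. */
static int read_sve_header(pid_t pid, struct user_sve_header *hdr)
{
	struct iovec iov = { .iov_base = hdr, .iov_len = sizeof(*hdr) };

	if (ptrace(PTRACE_GETREGSET, pid, NT_ARM_SVE, &iov) != 0)
		return -1;

	printf("vl=%u max_vl=%u regset bytes=%u\n", hdr->vl, hdr->max_vl, hdr->size);
	return 0;
}
```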
-
- 16 August 2017, 1 commit
-
-
Authored by Mark Rutland
Before we add yet another stack to the kernel, it would be nice to ensure that we consistently organise stack definitions and related helper functions. This patch moves the basic IRQ stack definitions to <asm/memory.h> to live with their task stack counterparts. Helpers used for unwinding are moved into <asm/stacktrace.h>, where subsequent patches will add helpers for other stacks. Includes are fixed up accordingly. This patch is a pure refactoring -- there should be no functional changes as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reviewed-by: Will Deacon <will.deacon@arm.com> Tested-by: Laura Abbott <labbott@redhat.com> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com>
-
- 08 August 2017, 1 commit
-
-
Authored by Mark Rutland
Our IRQ_STACK_PTR() and on_irq_stack() helpers both take a cpu argument, used to generate a percpu address. In all cases, they are passed {raw_,}smp_processor_id(), so this parameter is redundant. Since {raw_,}smp_processor_id() use a percpu variable internally, this approach means we generate a percpu offset to find the current cpu, then use this to index an array of percpu offsets, which we then use to find the current CPU's IRQ stack pointer. Thus, most of the work is redundant. Instead, we can consistently use raw_cpu_ptr() to generate the CPU's irq_stack pointer by simply adding the percpu offset to the irq_stack address, which is simpler in both respects. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Will Deacon <will.deacon@arm.com>
-
- 07 August 2017, 2 commits
-
-
Authored by Dave Martin
The -1 "no syscall" value is written in various ways, shared with the user ABI in some places, and generally obscure. This patch attempts to make things a little more consistent and readable by replacing all these uses with a single #define. A couple of symbolic helpers are provided to clarify the intent further. Because the in-syscall check in do_signal() is changed from >= 0 to != NO_SYSCALL by this patch, different behaviour may be observable if syscallno is set to values less than -1 by a tracer. However, this is not different from the behaviour that is already observable if a tracer sets syscallno to a value >= __NR_(compat_)syscalls. It appears that this can cause spurious syscall restarting, but that is not a new behaviour either, and does not appear harmful. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
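A hedged sketch of the resulting definitions (names as described above; treat the exact header placement as an assumption):

```c
#define NO_SYSCALL	(-1)

/* Mark the current exception as "not a syscall". */
static inline void forget_syscall(struct pt_regs *regs)
{
	regs->syscallno = NO_SYSCALL;
}

static inline bool in_syscall(struct pt_regs const *regs)
{
	/* do_signal() now tests this instead of the bare "syscallno >= 0". */
	return regs->syscallno != NO_SYSCALL;
}
```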
-
Authored by Dave Martin
The upper 32 bits of the syscallno field in thread_struct are handled inconsistently, being sometimes zero extended and sometimes sign-extended. In fact, only the lower 32 bits seem to have any real significance for the behaviour of the code: it's been OK to handle the upper bits inconsistently because they don't matter. Currently, the only place I can find where those bits are significant is in calling trace_sys_enter(), which may be unintentional: for example, if a compat tracer attempts to cancel a syscall by passing -1 to (COMPAT_)PTRACE_SET_SYSCALL at the syscall-enter-stop, it will be traced as syscall 4294967295 rather than -1 as might be expected (and as occurs for a native tracer doing the same thing). Elsewhere, reads of syscallno cast it to an int or truncate it. There's also a conspicuous amount of code and casting to bodge around the fact that although semantically an int, syscallno is stored as a u64. Let's not pretend any more. In order to preserve the stp x instruction that stores the syscall number in entry.S, this patch special-cases the layout of struct pt_regs for big-endian so that the newly 32-bit syscallno field maps onto the low bits of the stored value. This is not beautiful, but benchmarking of the getpid syscall on Juno suggests a minor slowdown if the stp is split into an stp x and stp w. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
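A hedged sketch of the special-cased field layout described above (surrounding members of struct pt_regs elided):

```c
struct pt_regs {
	/* ... user regs, orig_x0, ... */
#ifdef __AARCH64EB__
	u32 unused2;
	s32 syscallno;	/* low half of the 64-bit slot written by the stp */
#else
	s32 syscallno;
	u32 unused2;
#endif
	/* ... */
};
```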
-
- 30 June 2017, 3 commits
-
-
Authored by Dave Martin
Now that compat_vfp_get() uses the regset API to copy the FPSCR value out to userspace, compat_vfp_set() looks inconsistent. In particular, compat_vfp_set() will fail if called with kbuf != NULL && ubuf == NULL (which is valid usage according to the regset API). This patch fixes compat_vfp_set() to use user_regset_copyin(), similarly to compat_vfp_get(). This also squashes a sparse warning triggered by the cast that drops __user when calling get_user(). Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Authored by Dave Martin
compat_vfp_set() checks for userspace trying to write an excessive amount of data to the regset. However this check is conspicuous for its absence from every other _set() in the arm64 ptrace implementation. In fact, the core ptrace_regset() already clamps userspace's iov_len to the regset size before the individual regset .{get,set}() methods get called. This patch removes the redundant check. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Authored by Dave Martin
If get_user() fails when reading the new FPSCR value from userspace in compat_vfp_set(), then garbage* will be written to the task's FPSR and FPCR registers. This patch prevents this by checking the return from get_user() first. [*] Actually, zero, due to the behaviour of get_user() on error, but that's still not what userspace expects. Fixes: 478fcb2c ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
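The shape of the fix is simply to test the get_user() return value before touching task state; a minimal sketch (the mask names and the fpscr_offset variable are illustrative assumptions):

```c
compat_ulong_t fpscr;
int ret;

ret = get_user(fpscr, (compat_ulong_t __user *)(ubuf + fpscr_offset));
if (!ret) {
	/* Only update the task's FPSR/FPCR if the read really succeeded;
	 * on failure, fpscr would merely hold get_user()'s zero. */
	uregs->fpsr = fpscr & VFP_FPSCR_STAT_MASK;
	uregs->fpcr = fpscr & VFP_FPSCR_CTRL_MASK;
}
return ret;
```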
-
- 22 June 2017, 3 commits
-
-
Authored by Dave Martin
When reading current's user-writable TLS register (which occurs when dumping core for native tasks), it is possible that userspace has modified it since the time the task was last scheduled out. The new TLS register value is not guaranteed to have been written immediately back to thread_struct in this case. As a result, a coredump can capture stale data for this register. Reading the register for a stopped task via ptrace is unaffected. For native tasks, this patch explicitly flushes the TPIDR_EL0 register back to thread_struct before dumping when operating on current, thus ensuring that coredump contents are up to date. For compat tasks, the TLS register is not user-writable and so cannot be out of sync, so no flush is required in compat_tls_get(). Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Authored by Dave Martin
When reading the FPSIMD state of current (which occurs when dumping core), it is possible that userspace has modified the FPSIMD registers since the time the task was last scheduled out. Such changes are not guaranteed to be reflected immediately in thread_struct. As a result, a coredump can contain stale values for these registers. Reading the registers of a stopped task via ptrace is unaffected. This patch explicitly flushes the CPU state back to thread_struct before dumping when operating on current, thus ensuring that coredump contents are up to date. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
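A hedged sketch of where the flush lands in the FPSIMD regset read path (the helper name follows the arm64 FPSIMD code, but treat the exact body as an approximation):

```c
static int fpr_get(struct task_struct *target, const struct user_regset *regset,
		   unsigned int pos, unsigned int count,
		   void *kbuf, void __user *ubuf)
{
	struct user_fpsimd_state *uregs = &target->thread.fpsimd_state.user_fpsimd;

	/* When dumping ourselves (e.g. coredump of current), the live CPU
	 * registers may be newer than thread_struct: write them back first. */
	if (target == current)
		fpsimd_preserve_current_state();

	return user_regset_copyout(&pos, &count, &kbuf, &ubuf, uregs, 0, -1);
}
```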
-
Authored by Dave Martin
Currently, VFP registers are omitted from coredumps for compat processes, due to a bug in the REGSET_COMPAT_VFP regset implementation. compat_vfp_get() needs to transfer non-contiguous data from thread_struct.fpsimd_state, and uses put_user() to handle the offending trailing word (FPSCR). This fails when copying to a kernel address (i.e., kbuf && !ubuf), which is what happens when dumping core. As a result, the ELF coredump core code silently omits the NT_ARM_VFP note from the dump. It would be possible to work around this with additional special case code for the put_user(), but since user_regset_copyout() is explicitly designed to handle this scenario it is cleaner to port the put_user() to a user_regset_copyout() call, which this patch does. Signed-off-by: Dave Martin <Dave.Martin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
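A hedged sketch of the ported copy of the trailing FPSCR word (VFP_STATE_SIZE here stands for the regset's total size and is an assumption about the local naming):

```c
/* put_user() can only target a user buffer, so kbuf-only callers such as
 * the coredump collector used to fail silently. Let the regset helper
 * handle both destinations instead. */
ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf, &fpscr,
			  VFP_STATE_SIZE - sizeof(compat_ulong_t),
			  VFP_STATE_SIZE);
```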
-
- 02 March 2017, 2 commits
-
-
Authored by Ingo Molnar
We are going to split <linux/sched/task_stack.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/task_stack.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Authored by Ingo Molnar
We are going to split <linux/sched/signal.h> out of <linux/sched.h>, which will have to be picked up from other headers and a couple of .c files. Create a trivial placeholder <linux/sched/signal.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it. Acked-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- 19 January 2017, 4 commits
-
-
Authored by Dave Martin
We cannot preserve partial fields for hardware breakpoints, because the values written by userspace to the hardware breakpoint registers can't subsequently be recovered intact from the hardware. So, just reject attempts to write incomplete fields with -EINVAL. Cc: <stable@vger.kernel.org> # 3.7.x- Fixes: 478fcb2c ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Authored by Dave Martin
Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Cc: <stable@vger.kernel.org> # 4.3.x- Fixes: 5d220ff9 ("arm64: Better native ptrace support for compat tasks") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Authored by Dave Martin
Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Cc: <stable@vger.kernel.org> # 3.19.x- Fixes: 766a85d7 ("arm64: ptrace: add NT_ARM_SYSTEM_CALL regset") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Authored by Dave Martin
Ensure that if userspace supplies insufficient data to PTRACE_SETREGSET to fill all the registers, the thread's old registers are preserved. Cc: <stable@vger.kernel.org> # 3.7.x- Fixes: 478fcb2c ("arm64: Debugging support") Signed-off-by: Dave Martin <Dave.Martin@arm.com> Acked-by: Will Deacon <Will.Deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
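These three fixes all follow the same shape: start from the thread's existing values, let user_regset_copyin() overwrite only the bytes userspace actually supplied, and commit the result afterwards. A hedged sketch using the syscall-number regset as the example:

```c
static int system_call_set(struct task_struct *target,
			   const struct user_regset *regset,
			   unsigned int pos, unsigned int count,
			   const void *kbuf, const void __user *ubuf)
{
	/* Seed the local copy with the old value, so a short (or empty)
	 * write leaves the untouched part unchanged. */
	int syscallno = task_pt_regs(target)->syscallno;
	int ret;

	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &syscallno, 0, -1);
	if (ret)
		return ret;

	task_pt_regs(target)->syscallno = syscallno;
	return 0;
}
```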
-
- 19 November 2016, 1 commit
-
-
Authored by Pratyush Anand
ARM64 hardware supports a watchpoint at any double-word-aligned address. However, it can select any consecutive bytes from offset 0 to 7 from that base address. For example, if the base address is programmed as 0x420030 and the byte select is 0x1C, then accesses of 0x420032, 0x420033 and 0x420034 will generate a watchpoint exception. Currently, we do not have such modularity. We can only program byte, halfword, word and double word access exceptions from any base address. This patch adds support to overcome the above limitations. Signed-off-by: Pratyush Anand <panand@redhat.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
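The hardware expresses this as a byte-address-select (BAS) mask relative to the double-word-aligned base address; a small worked example matching the numbers above (mask arithmetic only, not the driver code):

```c
/* Watch 3 bytes starting at 0x420032. */
unsigned long addr = 0x420032;
unsigned int  len  = 3;

unsigned long base   = addr & ~0x7UL;			/* 0x420030: aligned base */
unsigned int  offset = addr & 0x7;			/* 2: first selected byte */
unsigned int  bas    = ((1U << len) - 1) << offset;	/* 0x1C = bytes 2..4      */
```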
-
- 19 July 2016, 1 commit
-
-
Authored by David A. Long
Add the HAVE_REGS_AND_STACK_ACCESS_API feature for arm64, including supporting functions and defines. Signed-off-by: David A. Long <dave.long@linaro.org> Acked-by: Masami Hiramatsu <mhiramat@kernel.org> [catalin.marinas@arm.com: Remove unused functions] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 15 June 2016, 2 commits
-
-
Authored by Kees Cook
Close the hole where ptrace can change a syscall out from under seccomp. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: linux-arm-kernel@lists.infradead.org
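A hedged sketch of the resulting ordering on syscall entry (simplified; the surrounding arm64 code is paraphrased, not quoted):

```c
asmlinkage int syscall_trace_enter(struct pt_regs *regs)
{
	/* Let the tracer inspect/rewrite the syscall number first ... */
	if (test_thread_flag(TIF_SYSCALL_TRACE))
		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);

	/* ... so seccomp filters the syscall that will actually be run. */
	if (secure_computing(NULL) == -1)
		return -1;

	/* tracepoints, audit, etc. follow */
	return regs->syscallno;
}
```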
-
Authored by Andy Lutomirski
Currently, if arch code wants to supply seccomp_data directly to seccomp (which is generally much faster than having seccomp do it using the syscall_get_xyz() API), it has to use the two-phase seccomp hooks. Add it to the easy hooks, too. Cc: linux-arch@vger.kernel.org Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Kees Cook <keescook@chromium.org>
-
- 02 March 2016, 1 commit
-
-
Authored by Mark Rutland
We validate pstate using PSR_MODE32_BIT, which is part of the user-provided pstate (and cannot be trusted). Also, we conflate validation of AArch32 and AArch64 pstate values, making the code difficult to reason about. Instead, validate the pstate value based on the associated task. The task may or may not be current (e.g. when using ptrace), so this must be passed explicitly by callers. To avoid circular header dependencies via sched.h, is_compat_task is pulled out of asm/ptrace.h. To make the code possible to reason about, the AArch64 and AArch32 validation is split into separate functions. Software must respect the RES0 policy for SPSR bits, and thus the kernel mirrors the hardware policy (RAZ/WI) for bits as-yet unallocated. When these acquire an architected meaning, writes may be permitted (potentially with additional validation). Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: Dave Martin <dave.martin@arm.com> Cc: James Morse <james.morse@arm.com> Cc: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 08 December 2015, 1 commit
-
-
Authored by John Blackwood
Make sure to clear out any ptrace singlestep state when a ptrace(2) PTRACE_DETACH call is made on arm64 systems. Otherwise, the previously ptraced task will die off with a SIGTRAP signal if the debugger just previously singlestepped the ptraced task. Cc: <stable@vger.kernel.org> Signed-off-by: John Blackwood <john.blackwood@ccur.com> [will: added comment to justify why this is in the arch code] Signed-off-by: Will Deacon <will.deacon@arm.com>
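The fix amounts to clearing single-step state in the arch ptrace_disable() hook, which the core PTRACE_DETACH path invokes; a hedged sketch:

```c
/*
 * Called by the core ptrace code on PTRACE_DETACH (and when a tracer dies),
 * so a task that was being single-stepped does not take a spurious SIGTRAP
 * once it resumes running on its own.
 */
void ptrace_disable(struct task_struct *child)
{
	user_disable_single_step(child);
}
```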
-
- 27 July 2015, 1 commit
-
-
Authored by Catalin Marinas
The compat ptrace interface allows access to the TLS register, hardware breakpoints and watchpoints, and the syscall number. However, a native task using the native ptrace interface to debug compat tasks (e.g. multi-arch gdb) only has access to the general and VFP register sets. The compat ptrace interface cannot be accessed from a native task. This patch adds a new user_aarch32_ptrace_view which contains the TLS, hardware breakpoint/watchpoint and syscall number regsets in addition to the existing GPR and VFP regsets. This view is backwards compatible with previous kernels. Core dumping of 32-bit tasks and compat ptrace are not affected since the original user_aarch32_view is preserved. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Reported-by: Yao Qi <yao.qi@arm.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 28 November 2014, 2 commits
-
-
Authored by AKASHI Takahiro
secure_computing() is called first in syscall_trace_enter() so that a system call will be aborted quickly, without doing the subsequent syscall tracing, if seccomp rules want to deny that system call. For compat tasks, the syscall numbers for system calls allowed in seccomp mode 1 differ from those of native tasks, so the __NR_seccomp_xxx_32 values need to be redefined. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
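The redefinitions live in the arch seccomp header and simply map the mode-1 whitelist onto compat syscall numbers; a hedged sketch of what that header carries:

```c
#ifdef CONFIG_COMPAT
#define __NR_seccomp_read_32		__NR_compat_read
#define __NR_seccomp_write_32		__NR_compat_write
#define __NR_seccomp_exit_32		__NR_compat_exit
#define __NR_seccomp_sigreturn_32	__NR_compat_rt_sigreturn
#endif

#include <asm-generic/seccomp.h>
```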
-
Authored by AKASHI Takahiro
This regset is intended to be used to get and set a system call number while tracing. There was some discussion about possible approaches to do so: (1) modify the x8 register with ptrace(PTRACE_SETREGSET) indirectly, and update regs->syscallno later on in syscall_trace_enter(), or (2) define a dedicated regset for this purpose as on s390, or (3) support ptrace(PTRACE_SET_SYSCALL) as on arch/arm. Given that user_pt_regs doesn't expose 'syscallno' to the tracer, and that secure_computing() expects a changed syscall number, especially the case of -1, to be visible before it returns in syscall_trace_enter(), (1) doesn't work well. We will take (2) since it looks much cleaner. Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
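From the tracer's point of view the new regset is just a single int; a hedged userspace sketch of rewriting the syscall number at a syscall-enter stop:

```c
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <linux/elf.h>		/* NT_ARM_SYSTEM_CALL */

static int set_syscall_nr(pid_t pid, int new_nr)
{
	struct iovec iov = { .iov_base = &new_nr, .iov_len = sizeof(new_nr) };

	/* -1 is the conventional value for cancelling the syscall. */
	return ptrace(PTRACE_SETREGSET, pid, NT_ARM_SYSTEM_CALL, &iov);
}
```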
-
- 24 September 2014, 2 commits
-
-
Authored by Eric Paris
The arm64 tree added calls to audit_syscall_entry() and rightly included the syscall number. The interface has since been changed to not need the syscall number. As such, arm64 should no longer pass that value. Signed-off-by: Eric Paris <eparis@redhat.com>
-
Authored by AKASHI Takahiro
This patch adds auditing functions on entry to and exit from every system call invocation. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 29 August 2014, 2 commits
-
-
Authored by Will Deacon
copy_{to,from}_user return the number of bytes remaining on failure, not an error code. This patch returns -EFAULT when the copy operation didn't complete, rather than expose the number of bytes not copied directly to userspace. Signed-off-by: Will Deacon <will.deacon@arm.com>
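This is the usual idiom for copy_{to,from}_user(), whose non-zero return value is a residual byte count rather than an errno; a minimal sketch:

```c
/* copy_to_user()/copy_from_user() return the number of bytes NOT copied. */
if (copy_to_user(ubuf, &state, sizeof(state)))
	return -EFAULT;

if (copy_from_user(&state, ubuf, sizeof(state)))
	return -EFAULT;
```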
-
Authored by Will Deacon
I'm not sure what I was on when I wrote this, but when iterating over the hardware watchpoint array (hbp_watch_array), our index is off by ARM_MAX_BRP, so we walk off the end of our thread_struct... ...except, a dodgy condition in the loop means that it never executes at all (bp cannot be NULL). This patch fixes the code so that we remove the bp check and use the correct index for accessing the watchpoint structures. Cc: <stable@vger.kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 August 2014, 1 commit
-
-
Authored by Will Deacon
For some reason, the audit patches didn't make it out of -next this merge window, so revert our temporary hack and let the audit guys deal with fixing up -next. This reverts commit 2a8f45b0. Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 24 July 2014, 1 commit
-
-
Authored by Catalin Marinas
This is a temporary patch to be able to compile the kernel in linux-next, where the audit_syscall_* API has been changed. To be reverted once the proper arm64 fix can be applied. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 10 July 2014, 1 commit
-
-
Authored by AKASHI Takahiro
This patch adds auditing functions on entry to and exit from every system call invocation. Acked-by: Richard Guy Briggs <rgb@redhat.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 18 June 2014, 2 commits
-
-
Authored by Victor Kamensky
Currently the prstatus note in a core file of an aarch32 process has an empty register set. As a result, aarch32 core files created by an ARMv8 kernel are not very useful. This happens because the compat_gpr_get and compat_gpr_set functions can copy register values to/from either kbuf or ubuf. The ELF core file collection function fill_thread_core_info calls compat_gpr_get with kbuf set and ubuf set to 0, but the current compat_gpr_get and compat_gpr_set functions only handle the copy to/from ubuf case. The fix is to handle kbuf and ubuf as two separate cases, in a similar way to other functions like user_regset_copyout and user_regset_copyin. Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
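The fix mirrors what user_regset_copyout() does internally: treat the kernel-buffer and user-buffer destinations as two explicit cases. A hedged sketch of the copy-out side for one register:

```c
/* Coredump collection calls us with kbuf != NULL and ubuf == NULL, so a
 * user-only copy path silently produced an empty register set. */
if (kbuf) {
	memcpy(kbuf, &reg, sizeof(reg));
	kbuf += sizeof(reg);
} else {
	if (copy_to_user(ubuf, &reg, sizeof(reg)))
		return -EFAULT;
	ubuf += sizeof(reg);
}
```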
-
Authored by Will Deacon
Our compat PTRACE_POKEUSR implementation simply passes the user data to regset_copy_from_user after some simple range checking. Unfortunately, the data in question has already been copied to the kernel stack by this point, so the subsequent access_ok check fails and the ptrace request returns -EFAULT. This causes problems tracing fork() with older versions of strace. This patch briefly changes the fs to KERNEL_DS, so that the access_ok check passes even with a kernel address. Signed-off-by: Will Deacon <will.deacon@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
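A hedged sketch of the workaround (the regset view and constant names are taken from the surrounding arm64 code as described elsewhere in this log, and the exact call is an approximation):

```c
mm_segment_t old_fs = get_fs();
int ret;

/* 'newvalue' already lives on the kernel stack at this point, so widen the
 * address limit briefly so the regset code's access_ok() check passes. */
set_fs(KERNEL_DS);
ret = copy_regset_from_user(tsk, &user_aarch32_view, REGSET_COMPAT_GPR,
			    off, sizeof(compat_ulong_t), &newvalue);
set_fs(old_fs);

return ret;
```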
-