1. 28 Jun 2016, 1 commit
  2. 27 Jun 2016, 4 commits
  3. 22 Jun 2016, 2 commits
    • arm64: kill ESR_LNX_EXEC · 541ec870
      Committed by Mark Rutland
      Currently we treat ESR_EL1 bit 24 as software-defined for distinguishing
      instruction aborts from data aborts, but this bit is architecturally
      RES0 for instruction aborts, and could be allocated for an arbitrary
      purpose in future. Additionally, we hard-code the value in entry.S
      without the mnemonic, making the code difficult to understand.
      
      Instead, remove ESR_LNX_EXEC and distinguish aborts based on the ESR
      value, which we already pass to the sole user of ESR_LNX_EXEC. A new
      helper, is_el0_instruction_abort(), is added to make the logic clear. Any
      instruction aborts taken from EL1 will already have been handled by
      bad_mode, so we need not handle that case in the helper.
      
      For consistency, the existing permission_fault helper is renamed to
      is_permission_fault, and the return type is changed to bool. There
      should be no functional changes as the return value was a boolean
      expression, and the result is only used in another boolean expression.
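
      The helper amounts to an EC-field check along these lines (a sketch based
      on the description above; ESR_ELx_EC_IABT_LOW is the exception class for
      instruction aborts taken from a lower exception level):

      	static inline bool is_el0_instruction_abort(unsigned int esr)
      	{
      		/* EL0 instruction aborts carry their own exception class. */
      		return ESR_ELx_EC(esr) == ESR_ELx_EC_IABT_LOW;
      	}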
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave P Martin <dave.martin@arm.com>
      Cc: Huang Shijie <shijie.huang@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: add macro to extract ESR_ELx.EC · 275f344b
      Committed by Mark Rutland
      Several places open-code extraction of the EC field from an ESR_ELx
      value, in subtly different ways. This is unfortunate duplication and
      variation, and the precise logic used to extract the field is a
      distraction.
      
      This patch adds a new macro, ESR_ELx_EC(), to extract the EC field from
      an ESR_ELx value in a consistent fashion.
      
      Existing open-coded extractions in core arm64 code are moved over to the
      new helper. KVM code is left as-is for the moment.
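
      The EC field lives in ESR_ELx[31:26], so the macro is a plain
      mask-and-shift (shown here following the asm/esr.h naming):

      	#define ESR_ELx_EC_SHIFT	(26)
      	#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
      	#define ESR_ELx_EC(esr)		(((esr) & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT)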
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Huang Shijie <shijie.huang@arm.com>
      Cc: Dave P Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  4. 21 Jun 2016, 2 commits
    • arm64: simplify dump_mem · 7ceb3a10
      Committed by Mark Rutland
      Currently dump_mem attempts to dump memory in 64-bit chunks when
      reporting a failure in 64-bit code, or 32-bit chunks when reporting a
      failure in 32-bit code. We added code to handle these two cases
      separately in commit e147ae6d ("arm64: modify the dump mem for
      64 bit addresses").
      
      However, in all cases where dump_mem is called, the failing context is
      a kernel rather than a user context. Additionally, dump_mem is assumed to
      only be used for kernel contexts, as internally it switches to
      KERNEL_DS, and its callers pass kernel stack bounds.
      
      This patch removes the redundant 32-bit chunk logic and associated
      compat parameter, largely reverting the aforementioned commit. For the
      call in __die(), the check of in_interrupt() is removed also, as __die()
      is only called in response to faults from the kernel's exception level,
      and thus the !user_mode(regs) check is sufficient. Were this not the
      case, the use of task_stack_page(tsk) to generate the stack bounds
      would be erroneous.
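
      A minimal sketch of the simplified shape (not the exact kernel source;
      output formatting details are elided):

      	static void dump_mem(const char *lvl, const char *str,
      			     unsigned long bottom, unsigned long top)
      	{
      		unsigned long p;
      		mm_segment_t fs = get_fs();

      		/* Callers only ever pass kernel contexts. */
      		set_fs(KERNEL_DS);
      		printk("%s%s(0x%016lx to 0x%016lx)\n", lvl, str, bottom, top);

      		/* Always dump in 64-bit chunks; the compat path is gone. */
      		for (p = bottom & ~7UL; p < top; p += 8) {
      			unsigned long val;

      			if (__get_user(val, (unsigned long *)p) == 0)
      				printk("%s %016lx\n", lvl, val);
      			else
      				printk("%s ????????????????\n", lvl);
      		}

      		set_fs(fs);
      	}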
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64: kasan: instrument user memory access API · bffe1baf
      Committed by Yang Shi
      The upstream commit 1771c6e1 ("x86/kasan: instrument user memory access
      API") added KASAN instrumentation to the x86 user memory access API, so
      add the same instrumentation to arm64 too.
      
      Define __copy_to/from_user in C in order to add kasan_check_read/write
      calls, and rename the assembly implementations to __arch_copy_to/from_user.
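
      The resulting C wrappers look roughly like this (a sketch of the shape
      described above; the KASAN checks validate the kernel-side buffer before
      the renamed assembly routines run):

      	static inline unsigned long __must_check
      	__copy_from_user(void *to, const void __user *from, unsigned long n)
      	{
      		kasan_check_write(to, n);	/* 'to' is the kernel buffer */
      		return __arch_copy_from_user(to, from, n);
      	}

      	static inline unsigned long __must_check
      	__copy_to_user(void __user *to, const void *from, unsigned long n)
      	{
      		kasan_check_read(from, n);	/* 'from' is the kernel buffer */
      		return __arch_copy_to_user(to, from, n);
      	}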
      
      Tested by test_kasan module.
      Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Yang Shi <yang.shi@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  5. 17 Jun 2016, 1 commit
    • arm64: kgdb: Match pstate size with gdbserver protocol · 0d15ef67
      Committed by Daniel Thompson
      Current versions of gdb do not interoperate cleanly with kgdb on arm64
      systems because gdb and kgdb do not use the same register description.
      This patch modifies kgdb to work with recent releases of gdb (>= 7.8.1).
      
      Compatibility with gdb (after the patch is applied) is as follows:
      
        gdb-7.6 and earlier  Ok
        gdb-7.7 series       Works if user provides custom target description
        gdb-7.8(.0)          Works if user provides custom target description
        gdb-7.8.1 and later  Ok
      
      When commit 44679a4f ("arm64: KGDB: Add step debugging support") was
      introduced, it was paired with a gdb patch that made an incompatible
      change to the gdbserver protocol. This patch was eventually merged into
      the gdb sources:
      https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=a4d9ba85ec5597a6a556afe26b712e878374b9dd
      
      The change to the protocol was mostly made to simplify big-endian support
      inside the kernel gdb stub. Unfortunately the gdb project released
      gdb-7.7.x and gdb-7.8.0 before the protocol incompatibility was identified
      and reversed:
      https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commit;h=bdc144174bcb11e808b4e73089b850cf9620a7ee
      
      This leaves us in a position where kgdb still uses the no-longer-used
      protocol; gdb-7.8.1, which restored the original behaviour, was
      released on 2014-10-29.
      
      I don't believe it is possible to detect/correct the protocol
      incompatibility, which means the kernel must take a view about which
      version of the gdb remote protocol is "correct". This patch takes the
      view that the original/current version of the protocol is correct
      and that the version found in gdb-7.7.x and gdb-7.8.0 is anomalous.
      Signed-off-by: Daniel Thompson <daniel.thompson@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  6. 14 Jun 2016, 1 commit
    • arm64: fix dump_instr when PAN and UAO are in use · c5cea06b
      Committed by Mark Rutland
      If the kernel is set to show unhandled signals, and a user task does not
      handle a SIGILL as a result of an instruction abort, we will attempt to
      log the offending instruction with dump_instr before killing the task.
      
      We use dump_instr to log the encoding of the offending userspace
      instruction. However, dump_instr is also used to dump instructions from
      kernel space, and internally always switches to KERNEL_DS before dumping
      the instruction with get_user. When both PAN and UAO are in use, reading
      a user instruction via get_user while in KERNEL_DS will result in a
      permission fault, which leads to an Oops.
      
      As we have regs corresponding to the context of the original instruction
      abort, we can inspect this and only flip to KERNEL_DS if the original
      abort was taken from the kernel, avoiding this issue. At the same time,
      remove the redundant (and incorrect) comments regarding the order
      dump_mem and dump_instr are called in.
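
      The shape of the fix is roughly as follows (a sketch; __dump_instr is
      assumed to be the split-out body that reads the instruction via get_user):

      	static void dump_instr(const char *lvl, struct pt_regs *regs)
      	{
      		if (!user_mode(regs)) {
      			/* Kernel context: flip to KERNEL_DS as before. */
      			mm_segment_t fs = get_fs();

      			set_fs(KERNEL_DS);
      			__dump_instr(lvl, regs);
      			set_fs(fs);
      		} else {
      			/* User context: stay in USER_DS so PAN/UAO permit the read. */
      			__dump_instr(lvl, regs);
      		}
      	}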
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: <stable@vger.kernel.org> #4.6+
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Tested-by: Vladimir Murzin <vladimir.murzin@arm.com>
      Fixes: 57f4959b ("arm64: kernel: Add support for User Access Override")
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 03 Jun 2016, 1 commit
    • arm64: report CPU number in bad_mode · 8051f4d1
      Committed by Mark Rutland
      If we take an exception we don't expect (e.g. SError), we report this in
      the bad_mode handler with pr_crit. Depending on the configured log
      level, we may or may not log additional information in functions called
      subsequently. Notably, the messages in dump_stack (including the CPU
      number) are printed with KERN_DEFAULT and may not appear.
      
      Some exceptions have an IMPLEMENTATION DEFINED ESR_ELx.ISS encoding, and
      knowing the CPU number is crucial to correctly decode them. To ensure
      that this is always possible, we should log the CPU number along with
      the ESR_ELx value, so we are not reliant on subsequent logs or
      additional printk configuration options.
      
      This patch logs the CPU number in bad_mode such that it is possible for
      a developer to decode these exceptions, provided they have access to
      sufficient documentation.
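
      The resulting message takes roughly this shape (a sketch; handler[] and
      esr_get_class_string() are assumed from the surrounding bad_mode code):

      	pr_crit("Bad mode in %s handler detected on CPU%d, code 0x%08x -- %s\n",
      		handler[reason], smp_processor_id(), esr,
      		esr_get_class_string(esr));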
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Al Grant <Al.Grant@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  8. 01 Jun 2016, 1 commit
  9. 24 May 2016, 1 commit
  10. 21 May 2016, 1 commit
    • exit_thread: remove empty bodies · 5f56a5df
      Committed by Jiri Slaby
      Define HAVE_EXIT_THREAD for archs which want to do something in
      exit_thread. For others, let's define exit_thread as an empty inline.
      
      This is a cleanup before we change the prototype of exit_thread to
      accept a task parameter.
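
      The resulting arrangement looks roughly like this (a sketch of the
      pattern described above, as it would appear in <linux/sched.h>):

      	#ifdef CONFIG_HAVE_EXIT_THREAD
      	extern void exit_thread(void);
      	#else
      	static inline void exit_thread(void)
      	{
      		/* Nothing to do on architectures without HAVE_EXIT_THREAD. */
      	}
      	#endif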
      
      [akpm@linux-foundation.org: fix mips]
      Signed-off-by: Jiri Slaby <jslaby@suse.cz>
      Cc: "David S. Miller" <davem@davemloft.net>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
      Cc: Aurelien Jacquiot <a-jacquiot@ti.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Chen Liqin <liqin.linux@gmail.com>
      Cc: Chris Metcalf <cmetcalf@mellanox.com>
      Cc: Chris Zankel <chris@zankel.net>
      Cc: David Howells <dhowells@redhat.com>
      Cc: Fenghua Yu <fenghua.yu@intel.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
      Cc: Haavard Skinnemoen <hskinnemoen@gmail.com>
      Cc: Hans-Christian Egtvedt <egtvedt@samfundet.no>
      Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
      Cc: Helge Deller <deller@gmx.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: James Hogan <james.hogan@imgtec.com>
      Cc: Jeff Dike <jdike@addtoit.com>
      Cc: Jesper Nilsson <jesper.nilsson@axis.com>
      Cc: Jiri Slaby <jslaby@suse.cz>
      Cc: Jonas Bonn <jonas@southpole.se>
      Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
      Cc: Lennox Wu <lennox.wu@gmail.com>
      Cc: Ley Foon Tan <lftan@altera.com>
      Cc: Mark Salter <msalter@redhat.com>
      Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Max Filippov <jcmvbkbc@gmail.com>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Michal Simek <monstr@monstr.eu>
      Cc: Mikael Starvik <starvik@axis.com>
      Cc: Paul Mackerras <paulus@samba.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Rich Felker <dalias@libc.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Richard Kuo <rkuo@codeaurora.org>
      Cc: Richard Weinberger <richard@nod.at>
      Cc: Russell King <linux@arm.linux.org.uk>
      Cc: Steven Miao <realmz6@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Vineet Gupta <vgupta@synopsys.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  11. 17 May 2016, 2 commits
    • perf core: Add a 'nr' field to perf_event_callchain_context · 3b1fff08
      Committed by Arnaldo Carvalho de Melo
      We will use it to count how many addresses are in the entry->ip[] array,
      excluding PERF_CONTEXT_{KERNEL,USER,etc} entries, so that we can really
      return the number of entries specified by the user via the relevant
      sysctl, kernel.perf_event_max_stack, or via the per-event
      perf_event_attr.sample_max_stack knob.
      
      This way we keep the meaning of perf_sample->ip_callchain->nr, that is,
      the number of entries, be they real addresses or PERF_CONTEXT_ entries,
      while honouring the max_stack knobs; i.e. the end result will be max_stack
      entries if we have at least that many entries in a given stack trace.
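
      The context struct gains the counter alongside the existing fields (a
      sketch consistent with the description; the exact layout may differ):

      	struct perf_callchain_entry_ctx {
      		struct perf_callchain_entry	*entry;
      		u32				max_stack;
      		u32				nr;	/* all entries, incl. PERF_CONTEXT_* */
      	};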
      
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Link: http://lkml.kernel.org/n/tip-s8teto51tdqvlfhefndtat9r@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
    • perf core: Pass max stack as a perf_callchain_entry context · cfbcf468
      Committed by Arnaldo Carvalho de Melo
      This makes perf_callchain_{user,kernel}() receive the max stack
      as context for the perf_callchain_entry, instead of accessing
      the global sysctl_perf_event_max_stack.
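
      In terms of prototypes, the change looks roughly like this (a sketch;
      exact parameter names may differ):

      	/* Before: arch code read the global sysctl_perf_event_max_stack. */
      	void perf_callchain_kernel(struct perf_callchain_entry *entry,
      				   struct pt_regs *regs);

      	/* After: the max stack travels with the entry in a context struct. */
      	void perf_callchain_kernel(struct perf_callchain_entry_ctx *ctx,
      				   struct pt_regs *regs);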
      
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: Alexei Starovoitov <ast@kernel.org>
      Cc: Brendan Gregg <brendan.d.gregg@gmail.com>
      Cc: David Ahern <dsahern@gmail.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/n/tip-kolmn1yo40p7jhswxwrc7rrd@git.kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  12. 12 May 2016, 1 commit
  13. 11 May 2016, 3 commits
    • arm64: kernel: Fix incorrect brk randomization · 61462c8a
      Committed by Kees Cook
      This fixes two issues with the arm64 brk randomization. First, the
      STACK_RND_MASK was being used incorrectly. The original code was:
      
      	unsigned long range_end = base + (STACK_RND_MASK << PAGE_SHIFT) + 1;
      
      STACK_RND_MASK is 0x7ff (32-bit) or 0x3ffff (64-bit), with 4K pages where
      PAGE_SHIFT is 12:
      
      	#define STACK_RND_MASK	(test_thread_flag(TIF_32BIT) ? \
      						0x7ff >> (PAGE_SHIFT - 12) : \
      						0x3ffff >> (PAGE_SHIFT - 12))
      
      This means the resulting offset from base would be 0x7ff001 or 0x3ffff001,
      which is wrong since it creates an unaligned end address. It was likely
      intended to be:
      
      	unsigned long range_end = base + ((STACK_RND_MASK + 1) << PAGE_SHIFT)
      
      Which would result in offsets of 0x800000 (32-bit) and 0x40000000 (64-bit).
      
      However, even this corrected 32-bit compat offset (0x00800000) is much
      smaller than native ARM's brk randomization value (0x02000000):
      
      	unsigned long arch_randomize_brk(struct mm_struct *mm)
      	{
      	        unsigned long range_end = mm->brk + 0x02000000;
      	        return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
      	}
      
      So, instead of basing arm64's brk randomization on mistaken STACK_RND_MASK
      calculations, just use specific corrected values for compat (0x2000000)
      and native arm64 (0x40000000).
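
      The fix then ends up along these lines (a sketch reflecting the
      description and the is_compat_task() suggestion noted below):

      	unsigned long arch_randomize_brk(struct mm_struct *mm)
      	{
      		unsigned long range_end = mm->brk;

      		if (is_compat_task())
      			range_end += 0x02000000;	/* match native ARM */
      		else
      			range_end += 0x40000000;

      		return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
      	}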
      Reviewed-by: Jon Medhurst <tixy@linaro.org>
      Signed-off-by: Kees Cook <keescook@chromium.org>
      [will: use is_compat_task() as suggested by tixy]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cpuinfo: Missing NULL terminator in compat_hwcap_str · f228b494
      Committed by Julien Grall
      The loop that browses the array compat_hwcap_str will stop when a NULL
      is encountered; however, the NULL is missing at the end of the array.
      This will lead to an overrun until a NULL is found somewhere in the
      following memory. In reality, this works out because the
      compat_hwcap2_str array tends to follow immediately in memory, and that
      *is* terminated correctly. Furthermore, the unsigned int
      compat_elf_hwcap is checked before printing each capability, so we end
      up doing the right thing because the size of the two arrays is less
      than 32. Still, this is an obvious mistake and should be fixed.
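
      A sketch of the pattern involved (entries abbreviated; the fix itself is
      just the trailing NULL):

      	static const char *const compat_hwcap_str[] = {
      		"swp", "half", "thumb",
      		/* ... remaining entries ... */
      		NULL		/* terminator the printing loop relies on */
      	};

      	/* The loop stops only when it reads a NULL pointer: */
      	for (i = 0; compat_hwcap_str[i]; i++)
      		if (compat_elf_hwcap & (1 << i))
      			seq_printf(m, " %s", compat_hwcap_str[i]);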
      
      Note for backporting: commit 12d11817 ("arm64: Move
      /proc/cpuinfo handling code") moved this code in v4.4. Prior to that
      commit, the same change should be made in arch/arm64/kernel/setup.c.
      
      Fixes: 44b82b77 ("arm64: Fix up /proc/cpuinfo")
      Cc: <stable@vger.kernel.org> # v3.19+ (but see note above prior to v4.4)
      Signed-off-by: Julien Grall <julien.grall@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: secondary_start_kernel: Remove unnecessary barrier · 99aa0362
      Committed by Suzuki K Poulose
      Remove the unnecessary smp_wmb(), which was added to make sure that
      update_cpu_boot_status() completes before we mark the CPU online.
      update_cpu_boot_status() already has a dsb() (required for the failing
      CPUs) to ensure the correct behaviour.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Reported-by: Dennis Chen <dennis.chen@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  14. 28 Apr 2016, 10 commits
  15. 27 Apr 2016, 1 commit
    • perf core: Allow setting up max frame stack depth via sysctl · c5dfd78e
      Committed by Arnaldo Carvalho de Melo
      The default remains 127, which is good for most cases, and not even hit
      most of the time, but then for some cases, as reported by Brendan, 1024+
      deep frames are appearing on the radar for things like groovy, ruby.
      
      And in some workloads putting a _lower_ cap on this may make sense. A
      per-event limit still needs to be put in place, though.
      
      The new file is:
      
        # cat /proc/sys/kernel/perf_event_max_stack
        127
      
      Changing it:
      
        # echo 256 > /proc/sys/kernel/perf_event_max_stack
        # cat /proc/sys/kernel/perf_event_max_stack
        256
      
      But as soon as there is some event using callchains we get:
      
        # echo 512 > /proc/sys/kernel/perf_event_max_stack
        -bash: echo: write error: Device or resource busy
        #
      
      Because we only allocate the callchain percpu data structures when there
      is a user, which allows for changing the max easily, it's just a matter
      of having no callchain users at that point.
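
      The "Device or resource busy" error comes from a handler of roughly this
      shape (a sketch; nr_callchain_events and callchain_mutex are the assumed
      bookkeeping for active callchain users):

      	int perf_event_max_stack_handler(struct ctl_table *table, int write,
      					 void __user *buffer, size_t *lenp,
      					 loff_t *ppos)
      	{
      		int new_value = sysctl_perf_event_max_stack, ret;
      		struct ctl_table new_table = *table;

      		new_table.data = &new_value;
      		ret = proc_dointvec_minmax(&new_table, write, buffer, lenp, ppos);
      		if (ret || !write)
      			return ret;

      		mutex_lock(&callchain_mutex);
      		if (atomic_read(&nr_callchain_events))
      			ret = -EBUSY;	/* callchains in use: refuse to resize */
      		else
      			sysctl_perf_event_max_stack = new_value;
      		mutex_unlock(&callchain_mutex);

      		return ret;
      	}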
      Reported-and-Tested-by: Brendan Gregg <brendan.d.gregg@gmail.com>
      Reviewed-by: Frederic Weisbecker <fweisbec@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: David Ahern <dsahern@gmail.com>
      Cc: Adrian Hunter <adrian.hunter@intel.com>
      Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
      Cc: He Kuang <hekuang@huawei.com>
      Cc: Jiri Olsa <jolsa@redhat.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Masami Hiramatsu <mhiramat@kernel.org>
      Cc: Milian Wolff <milian.wolff@kdab.com>
      Cc: Namhyung Kim <namhyung@kernel.org>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Stephane Eranian <eranian@google.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Vince Weaver <vincent.weaver@maine.edu>
      Cc: Wang Nan <wangnan0@huawei.com>
      Cc: Zefan Li <lizefan@huawei.com>
      Link: http://lkml.kernel.org/r/20160426002928.GB16708@kernel.org
      Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
  16. 26 Apr 2016, 7 commits
  17. 25 Apr 2016, 1 commit
    • arm64: Fix behavior of maxcpus=N · 44dbcc93
      Committed by Suzuki K Poulose
      maxcpus=n sets the number of CPUs activated at boot time to a maximum of
      n, while allowing the remaining CPUs to be brought up later if the user
      decides to do so. However, on arm64, for various reasons, we disallowed
      hotplugging CPUs beyond n by marking them not present. Now that we have
      checks in place to make sure the hotplugged CPUs have features compatible
      with the system and require no new errata, relax the restriction.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>