1. 20 Apr 2016 (4 commits)
  2. 19 Apr 2016 (1 commit)
    • arm64: Reduce verbosity on SMP CPU stop · 82611c14
      Authored by Jan Glauber
      When CPUs are stopped during an abnormal operation such as panic,
      a line is printed for each CPU and its stack trace is dumped.
      
      This information is only interesting for the aborting CPU; on
      systems with many CPUs, flooding the log with data about all the
      other CPUs only makes debugging harder.
      
      Therefore, remove the stack dump and printk for the other CPUs,
      print only a single line stating that the other CPUs are going to
      be stopped and, in case any CPUs remain online, list them.
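      
      With the change, the stop path prints roughly along these lines (a
      sketch based on the description above; the exact message text and
      the surrounding code are approximations):
      
          /* Announce the stop once, instead of printing a line and
           * dumping a stack trace for every CPU. */
          pr_crit("SMP: stopping secondary CPUs\n");
      
          /* ... wait for the secondary CPUs to go offline ... */
      
          if (num_online_cpus() > 1)
              pr_warning("SMP: failed to stop secondary CPUs %*pbl\n",
                         cpumask_pr_args(cpu_online_mask));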
      Signed-off-by: Jan Glauber <jglauber@cavium.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  3. 16 Apr 2016 (3 commits)
  4. 15 Apr 2016 (5 commits)
    • arm64: hw-breakpoint: Remove superfluous SMP function call · 4bc49274
      Authored by Anna-Maria Gleixner
      Since commit 1cf4f629 ("cpu/hotplug: Move online calls to
      hotplugged cpu"), callbacks for CPU_ONLINE and CPU_DOWN_PREPARE are
      guaranteed to be processed on the hotplugged CPU itself. Because of
      this, the SMP function calls are no longer required.
      
      Replace smp_call_function_single() with a direct call to
      hw_breakpoint_reset(). To keep the calling convention, interrupts
      are explicitly disabled around the call.
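      
      A minimal sketch of the pattern (the callback keeps the usual
      void * argument of an SMP-call function):
      
          /* Before: run the reset on the target CPU via an IPI. */
          smp_call_function_single(cpu, hw_breakpoint_reset, NULL, 1);
      
          /* After: this code already runs on the hotplugged CPU, so
           * call directly; disabling interrupts preserves the
           * IPI-context calling convention. */
          local_irq_disable();
          hw_breakpoint_reset(NULL);
          local_irq_enable();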
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64/debug: Remove superfluous SMP function call · 499c8150
      Authored by Anna-Maria Gleixner
      Since commit 1cf4f629 ("cpu/hotplug: Move online calls to
      hotplugged cpu"), callbacks for CPU_ONLINE and CPU_DOWN_PREPARE are
      guaranteed to be processed on the hotplugged CPU itself. Because of
      this, the SMP function calls are no longer required.
      
      Replace smp_call_function_single() with a direct call to
      clear_os_lock(). The function writes the OSLAR register to clear OS
      locking. It does not need to be called with interrupts disabled, so
      the smp_call_function_single() calling convention is not preserved.
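      
      For reference, a handler of this shape looks roughly as follows
      (the OSLAR_EL1 write is architectural; the wrapper is a sketch):
      
          static void clear_os_lock(void *unused)
          {
              /* Writing 0 to OSLAR_EL1 clears the OS lock, unmasking
               * debug exceptions on this CPU. */
              asm volatile("msr oslar_el1, %0" : : "r" (0UL));
          }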
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: linux-arm-kernel@lists.infradead.org
      Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: simplify kernel segment mapping granularity · 97740051
      Authored by Ard Biesheuvel
      The mapping of the kernel consists of four segments, each of which
      is mapped with different permission attributes and/or lifetimes. To
      optimize the TLB and translation table footprint, we define various
      opaque constants in the linker script that resolve to different
      alignment values depending on the page size and on whether
      CONFIG_DEBUG_ALIGN_RODATA is set.
      
      Considering that
      - a 4 KB granule kernel benefits from a 64 KB segment alignment
        (since it allows the use of the contiguous bit),
      - the minimum alignment of the .data segment is THREAD_SIZE already, not
        PAGE_SIZE (i.e., we already have padding between _data and the start of
        the .data payload in many cases),
      - 2 MB is a suitable alignment value on all granule sizes, either for
        mapping directly (level 2 on 4 KB), or via the contiguous bit (level 3 on
        16 KB and 64 KB),
      - anything beyond 2 MB exceeds the minimum alignment mandated by the boot
        protocol, and can only be mapped efficiently if the physical alignment
        happens to be the same,
      
      we can simplify this by standardizing on 64 KB (or 2 MB) explicitly, i.e.,
      regardless of granule size, all segments are aligned either to 64 KB, or to
      2 MB if CONFIG_DEBUG_ALIGN_RODATA=y. This also means we can drop the Kconfig
      dependency of CONFIG_DEBUG_ALIGN_RODATA on CONFIG_ARM64_4K_PAGES.
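      
      The result can be captured in a single alignment constant, along
      these lines (a sketch; the exact macro name and its location in the
      tree are illustrative):
      
          #if defined(CONFIG_DEBUG_ALIGN_RODATA)
          #define SEGMENT_ALIGN  SZ_2M   /* level 2 block or contiguous level 3 */
          #else
          #define SEGMENT_ALIGN  SZ_64K  /* contiguous bit on 4 KB granules */
          #endif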
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cover the .head.text section in the .text segment mapping · 7eb90f2f
      Authored by Ard Biesheuvel
      Keeping .head.text out of the .text mapping buys us very little: its actual
      payload is only 4 KB, most of which is padding, but the page alignment may
      add up to 2 MB (in case of CONFIG_DEBUG_ALIGN_RODATA=y) of additional
      padding to the uncompressed kernel Image.
      
      Also, on 4 KB granule kernels, the 4 KB misalignment of .text
      forces us to map the adjacent 56 KB of code without the PTE_CONT
      attribute. Since this region contains things like the vector table
      and the GIC interrupt handling entry point, it is likely to benefit
      from the reduced TLB pressure that PTE_CONT mappings provide.
      
      So remove the alignment between the .head.text and .text sections, and use
      the [_text, _etext) rather than the [_stext, _etext) interval for mapping
      the .text segment.
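      
      In terms of the kernel mapping code, the change amounts to
      something like the following (function name and signature are
      approximations, not the exact diff):
      
          /* Map the .text segment from _text rather than _stext, so
           * .head.text is covered by the same executable mapping. */
          map_kernel_segment(pgd, _text, _etext, PAGE_KERNEL_EXEC,
                             &vmlinux_text);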
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: move early boot code to the .init segment · 546c8c44
      Authored by Ard Biesheuvel
      Apart from the arm64/linux and EFI header data structures, there is nothing
      in the .head.text section that must reside at the beginning of the Image.
      So let's move it to the .init section where it belongs.
      
      Note that this involves some minor tweaking of the EFI header, primarily
      because the address of 'stext' no longer coincides with the start of the
      .text section. It also requires a couple of relocated symbol references
      to be slightly rewritten or their definition moved to the linker script.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  5. 14 Apr 2016 (3 commits)
  6. 13 Apr 2016 (2 commits)
    • arm64: cpuidle: make arm_cpuidle_suspend() a bit more efficient · b5fda7ed
      Authored by Jisheng Zhang
      Currently, we check two pointers on every idle state entry: cpu_ops
      and cpu_suspend. These pointer checks can be avoided:
      
      If cpu_ops has not been registered, arm_cpuidle_init() will return
      -EOPNOTSUPP, so arm_cpuidle_suspend() will never get the chance to
      run. In other words, the cpu_ops check can be avoided.
      
      Similarly, the cpu_suspend check can be avoided in this hot path by
      moving it into arm_cpuidle_init().
      
      I measured the time for 4096 iterations from the
      arm_cpuidle_suspend entry point to the cpu_psci_cpu_suspend entry
      point. The HW platform is a Marvell BG4CT STB board.
      
      1. Only one shell running, no other processes, secondary CPUs
      hot-unplugged; execute the following command:
      
      while true
      do
      	sleep 0.2
      done
      
      before the patch: 1581220ns
      
      after the patch: 1579630ns
      
      reduced by 0.1%
      
      2. Only one shell running, no other processes, secondary CPUs
      hot-unplugged; execute the following command:
      
      while true
      do
      	md5sum /tmp/testfile
      	sleep 0.2
      done
      
      NOTE: the test file size should be larger than the L1+L2 cache size
      
      before the patch: 1961960ns
      after the patch: 1912500ns
      
      reduced by 2.5%
      
      So the more complex the system load, the bigger the improvement.
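      
      A simplified sketch of the resulting shape (the surrounding code
      and the cpu_ops layout are approximated):
      
          int arm_cpuidle_init(unsigned int cpu)
          {
              /* Fail early unless both hooks exist, so the idle hot
               * path can rely on the pointers being valid. */
              if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_init_idle ||
                  !cpu_ops[cpu]->cpu_suspend)
                  return -EOPNOTSUPP;
      
              return cpu_ops[cpu]->cpu_init_idle(cpu);
          }
      
          int arm_cpuidle_suspend(int index)
          {
              /* The pointer checks have moved to init time. */
              return cpu_ops[smp_processor_id()]->cpu_suspend(index);
          }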
      Signed-off-by: Jisheng Zhang <jszhang@marvell.com>
      Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: cpufeature: append additional id_aa64mmfr2 fields to cpufeature · 7d7b4ae4
      Authored by Kefeng Wang
      There are some new CPU features that can be identified via
      id_aa64mmfr2; this patch appends all of its fields to the
      cpufeature framework.
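      
      Illustratively, such additions are extra entries in the register's
      feature-bits table (the entry macros and field names here are
      assumptions, following the ARMv8 ID_AA64MMFR2_EL1 layout):
      
          static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
              /* One entry per 4-bit ID register field. */
              ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
              ARM64_FTR_BITS(FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
              ARM64_FTR_END,
          };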
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  7. 29 Mar 2016 (1 commit)
  8. 26 Mar 2016 (1 commit)
  9. 25 Mar 2016 (1 commit)
  10. 21 Mar 2016 (2 commits)
    • arm64: fix KASLR boot-time I-cache maintenance · b90b4a60
      Authored by Mark Rutland
      Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a
      DSB necessary to complete I-cache maintenance in the primary boot path,
      and hence stale instructions may still be present in the I-cache and may
      be executed until the I-cache maintenance naturally completes.
      
      Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is
      fetched from PoU"), all CPUs invalidate their I-caches after their MMU
      is enabled. Before a CPU's MMU has been enabled, arbitrary lines may
      have been fetched from the PoC into I-caches. We never patch text
      expected to be executed with the MMU off. Thus, it is unnecessary to
      perform broadcast I-cache maintenance in the primary boot path.
      
      This patch reduces the scope of the I-cache maintenance to the local
      CPU, and adds the missing DSB with similar scope, matching prior
      maintenance in the primary boot path.
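      
      The local-scope sequence in question looks as follows (shown here
      as inline assembly for illustration; in the kernel it lives in the
      boot-time assembly code):
      
          /* Local (non-broadcast) I-cache invalidation, with a DSB of
           * matching non-shareable scope and an ISB to resynchronize
           * the instruction stream. */
          asm volatile(
              "ic     iallu\n"    /* invalidate I-cache, this CPU only */
              "dsb    nsh\n"      /* complete the maintenance */
              "isb"               /* flush the pipeline */
              ::: "memory");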
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arm64/kernel: fix incorrect EL0 check in inv_entry macro · b660950c
      Authored by Ard Biesheuvel
      The implementation of the inv_entry macro refers to its 'el'
      argument without the required leading backslash, which results in
      the undefined symbol 'el' being passed into the kernel_entry macro
      rather than the index of the exception level as intended.
      
      This undefined symbol strangely enough does not result in build failures,
      although it is visible in vmlinux:
      
           $ nm -n vmlinux |head
                            U el
           0000000000000000 A _kernel_flags_le_hi32
           0000000000000000 A _kernel_offset_le_hi32
           0000000000000000 A _kernel_size_le_hi32
           000000000000000a A _kernel_flags_le_lo32
           .....
      
      However, it does result in incorrect code being generated for invalid
      exceptions taken from EL0, since the argument check in kernel_entry
      assumes EL1 if its argument does not equal '0'.
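      
      Inside the macro body, the fix is a one-character change (shown
      here in isolation):
      
          kernel_entry el     // buggy: passes the undefined symbol 'el'
          kernel_entry \el    // fixed: expands the macro's 'el' argument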
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  11. 10 Mar 2016 (1 commit)
    • arm64: kasan: clear stale stack poison · 0d97e6d8
      Authored by Mark Rutland
      Functions which the compiler has instrumented for KASAN place poison on
      the stack shadow upon entry and remove this poison prior to returning.
      
      In the case of cpuidle, CPUs exit the kernel a number of levels deep in
      C code.  Any instrumented functions on this critical path will leave
      portions of the stack shadow poisoned.
      
      If CPUs lose context and return to the kernel via a cold path, we
      restore a prior context saved in __cpu_suspend_enter. The function
      calls made between that point and the actual exit of the kernel are
      forgotten, and the poison they placed in the stack shadow area is
      never removed.
      
      Thus, (depending on stackframe layout) subsequent calls to instrumented
      functions may hit this stale poison, resulting in (spurious) KASAN
      splats to the console.
      
      To avoid this, clear any stale poison from the idle thread for a CPU
      prior to bringing a CPU online.
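      
      A sketch of the remedy (the helper name and call site are
      assumptions; the unpoisoning primitive is KASAN's shadow API):
      
          #include <linux/kasan.h>
          #include <linux/sched.h>
      
          /* Unpoison the idle task's entire stack shadow before the
           * CPU comes online, wiping any poison left by a prior cold
           * exit from the kernel. */
          static void clear_stale_stack_poison(struct task_struct *idle)
          {
              kasan_unpoison_shadow(task_stack_page(idle), THREAD_SIZE);
          }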
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
      Reviewed-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
      Cc: Alexander Potapenko <glider@google.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Ingo Molnar <mingo@kernel.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
  12. 05 Mar 2016 (2 commits)
  13. 04 Mar 2016 (1 commit)
    • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
      Authored by Mark Rutland
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown to
      the toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree and uses read_cpuid without a
      SYS_ prefix. Due to this, a build of next-20160304 fails if EFI and
      64K pages are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
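      
      The essence of the change, as a sketch (macro body approximate):
      
          /* Paste the SYS_ prefix inside the accessor, so callers keep
           * using plain register names, e.g. read_cpuid(MIDR_EL1). */
          #define read_cpuid(reg) ({                                  \
              u64 __val;                                              \
              asm("mrs_s %0, " __stringify(SYS_ ## reg) : "=r" (__val)); \
              __val;                                                  \
          })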
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  14. 02 Mar 2016 (2 commits)
    • arm64: Rework valid_user_regs · dbd4d7ca
      Authored by Mark Rutland
      We validate pstate using PSR_MODE32_BIT, which is part of the
      user-provided pstate (and cannot be trusted). Also, we conflate
      validation of AArch32 and AArch64 pstate values, making the code
      difficult to reason about.
      
      Instead, validate the pstate value based on the associated task. The
      task may or may not be current (e.g. when using ptrace), so this must be
      passed explicitly by callers. To avoid circular header dependencies via
      sched.h, is_compat_task is pulled out of asm/ptrace.h.
      
      To make the code easier to reason about, the AArch64 and AArch32
      validation is split into separate functions. Software must respect
      the RES0 policy for SPSR bits, and thus the kernel mirrors the
      hardware policy (RAZ/WI) for bits that are as yet unallocated. When
      these acquire an architected meaning, writes may be permitted
      (potentially with additional validation).
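      
      A simplified sketch of the split (the two per-state validators are
      only declared here; names follow the commit's intent):
      
          bool valid_compat_regs(struct user_pt_regs *regs);  /* AArch32 */
          bool valid_native_regs(struct user_pt_regs *regs);  /* AArch64 */
      
          /* Dispatch on the task's known execution state, never on the
           * untrusted PSR_MODE32_BIT in the user-supplied pstate. */
          static bool valid_user_regs(struct user_pt_regs *regs,
                                      struct task_struct *task)
          {
              if (is_compat_thread(task_thread_info(task)))
                  return valid_compat_regs(regs);
      
              return valid_native_regs(regs);
          }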
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    • arch/hotplug: Call into idle with a proper state · fc6d73d6
      Authored by Thomas Gleixner
      Let the non-boot CPUs call into idle with the corresponding hotplug
      state, so the hotplug core can handle the further bringup. This is
      a first step towards converting the boot side of the hotplugged
      CPUs to do all the synchronization with the other side through the
      state machine. For now it only starts the hotplug thread and kicks
      the full bringup of the CPU.
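      
      Per architecture, the change boils down to one line in the
      secondary-CPU startup path (a sketch; the state name comes from
      the hotplug core):
      
          /* Enter the idle loop in a dedicated hotplug state; the
           * hotplug core completes the bringup from there. */
          cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);    /* was: CPUHP_ONLINE */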
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Cc: linux-arch@vger.kernel.org
      Cc: Rik van Riel <riel@redhat.com>
      Cc: Rafael Wysocki <rafael.j.wysocki@intel.com>
      Cc: "Srivatsa S. Bhat" <srivatsa@mit.edu>
      Cc: Peter Zijlstra <peterz@infradead.org>
      Cc: Arjan van de Ven <arjan@linux.intel.com>
      Cc: Sebastian Siewior <bigeasy@linutronix.de>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Steven Rostedt <rostedt@goodmis.org>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: Tejun Heo <tj@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
      Cc: Linus Torvalds <torvalds@linux-foundation.org>
      Cc: Paul Turner <pjt@google.com>
      Link: http://lkml.kernel.org/r/20160226182341.614102639@linutronix.de
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  15. 01 Mar 2016 (5 commits)
  16. 26 Feb 2016 (4 commits)
  17. 25 Feb 2016 (2 commits)