1. 03 Nov 2017 (9 commits)
    • arm64/sve: Core task context handling · bc0ee476
      Committed by Dave Martin
      This patch adds the core support for switching and managing the SVE
      architectural state of user tasks.
      
      Calls to the existing FPSIMD low-level save/restore functions are
      factored out as new functions task_fpsimd_{save,load}(), since
      whether SVE needs to be handled at these points is now decided
      dynamically, depending on the kernel configuration, hardware
      features discovered at boot, and the runtime state of the task.
      To make these decisions as fast as possible, const cpucaps are
      used where feasible, via the system_supports_sve() helper.
      
      The SVE registers are only tracked for threads that have explicitly
      used SVE, indicated by the new thread flag TIF_SVE.  Otherwise, the
      FPSIMD view of the architectural state is stored in
      thread.fpsimd_state as usual.
      
      When in use, the SVE registers are not stored directly in
      thread_struct due to their potentially large and variable size.
      Because the task_struct slab allocator must be configured very
      early during kernel boot, it is also tricky to configure it
      correctly to match the maximum vector length provided by the
      hardware, since this depends on examining secondary CPUs as well as
      the primary.  Instead, a pointer sve_state in thread_struct points
      to a dynamically allocated buffer containing the SVE register data,
      and code is added to allocate and free this buffer at appropriate
      times.
      
      TIF_SVE is set when taking an SVE access trap from userspace, if
      suitable hardware support has been detected.  This enables SVE for
      the thread: a subsequent return to userspace will disable the trap
      accordingly.  If such a trap is taken without sufficient system-
      wide hardware support, SIGILL is sent to the thread instead, as if
      an undefined instruction had been executed: this may happen, for
      example, if userspace tries to use SVE on a system where not all
      CPUs support it.
      
      The kernel will clear TIF_SVE and disable SVE for the thread
      whenever an explicit syscall is made by userspace.  For backwards
      compatibility reasons and conformance with the spirit of the base
      AArch64 procedure call standard, the subset of the SVE register
      state that aliases the FPSIMD registers is still preserved across a
      syscall even if this happens.  The remainder of the SVE register
      state logically becomes zero at syscall entry, though the actual
      zeroing work is currently deferred until the thread next tries to
      use SVE, causing another trap to the kernel.  This implementation
      is suboptimal: in the future, the fastpath case may be optimised
      to zero the registers in-place and leave SVE enabled for the task,
      where beneficial.
      
      TIF_SVE is also cleared in the following slowpath cases, which are
      taken as reasonable hints that the task may no longer use SVE:
       * exec
       * fork and clone
      
      Code is added to sync data between thread.fpsimd_state and
      thread.sve_state whenever enabling/disabling SVE, in a manner
      consistent with the SVE architectural programmer's model.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      [will: added #include to fix allnoconfig build]
      [will: use enable_daif in do_sve_acc]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
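      A minimal sketch of the save-path dispatch described above; the
      control flow follows the message, but the exact arguments to
      sve_save_state()/fpsimd_save_state() are assumptions, not quoted
      from the patch:

          static void task_fpsimd_save(void)
          {
                  if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
                          /* Task has used SVE: save the full SVE state
                           * into the dynamically allocated sve_state
                           * buffer. */
                          sve_save_state(current->thread.sve_state,
                                         &current->thread.fpsimd_state.fpsr);
                  } else {
                          /* Task never used SVE: the FPSIMD view in
                           * thread.fpsimd_state is the authoritative copy. */
                          fpsimd_save_state(&current->thread.fpsimd_state);
                  }
          }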
    • arm64/sve: Low-level CPU setup · 22043a3c
      Committed by Dave Martin
      To enable the kernel to use SVE, SVE traps from EL1 to EL2 must be
      disabled.  To take maximum advantage of the hardware, the full
      available vector length also needs to be enabled for EL1 by
      programming ZCR_EL2.LEN.  (The kernel will program ZCR_EL1.LEN as
      required, but this cannot override the limit set by ZCR_EL2.)
      
      This patch makes the appropriate changes to the EL2 early setup
      code.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
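      The change amounts to roughly the following, rendered as C for
      illustration (the real code is assembler in the EL2 early-setup
      path; the CPTR_EL2_TZ and ZCR_ELx_LEN_MASK names follow kernel
      conventions but are assumptions here):

          static void el2_setup_sve_sketch(void)
          {
                  u64 cptr = read_sysreg(cptr_el2);

                  /* Stop trapping SVE instructions to EL2. */
                  cptr &= ~CPTR_EL2_TZ;
                  write_sysreg(cptr, cptr_el2);
                  isb();

                  /* Let EL1 program any vector length the hardware has:
                   * an all-ones ZCR_EL2.LEN imposes no extra limit. */
                  write_sysreg_s(ZCR_ELx_LEN_MASK, SYS_ZCR_EL2);
          }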
    • arm64/sve: Low-level SVE architectural state manipulation functions · 1fc5dce7
      Committed by Dave Martin
      Manipulating the SVE architectural state, including the vector and
      predicate registers, first-fault register and the vector length,
      requires the use of dedicated instructions added by SVE.
      
      This patch adds suitable assembly functions for saving and
      restoring the SVE registers and querying the vector length.
      Setting of the vector length is done as part of register restore.
      
      Since people building kernels may not all get an SVE-enabled
      toolchain for a while, this patch uses macros that generate
      explicit opcodes in place of assembler mnemonics.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
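      A sketch of the explicit-opcode technique: emit the SVE RDVL
      instruction with .inst so the file assembles on a pre-SVE
      toolchain. The 0x04bf5020 encoding (RDVL x0, #1, giving the vector
      length in bytes) is taken from the public SVE encoding tables and
      should be treated as illustrative:

          static inline unsigned long sve_get_vl_sketch(void)
          {
                  /* The hand-encoded opcode hard-wires x0 as the
                   * destination, so pin the output variable to x0. */
                  register unsigned long vl asm("x0");

                  asm volatile(".inst 0x04bf5020" : "=r" (vl));
                  return vl;
          }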
    • arm64/sve: System register and exception syndrome definitions · 67236564
      Committed by Dave Martin
      The SVE architecture adds some system registers, ID register fields
      and a dedicated ESR exception class.
      
      This patch adds the appropriate definitions that will be needed by
      the kernel.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
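      The flavour of the added definitions; the two shown here follow the
      published architectural encodings, while the full set also covers
      ZCR_EL2 and the new ID register fields:

          /* ZCR_EL1 is encoded as S3_0_C1_C2_0. */
          #define SYS_ZCR_EL1     sys_reg(3, 0, 1, 2, 0)

          /* Dedicated ESR exception class for SVE access traps. */
          #define ESR_ELx_EC_SVE  (0x19)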
    • arm64: fpsimd: Simplify uses of {set,clear}_ti_thread_flag() · 9cf5b54f
      Committed by Dave Martin
      The existing FPSIMD context switch code contains a couple of
      instances of {set,clear}_ti_thread_flag(task_thread_info(task),
      ...).  Since there are thread flag manipulators that operate
      directly on task_struct, this verbosity isn't strictly needed.
      
      For consistency, this patch simplifies the affected calls.  This
      should have no impact on behaviour.
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
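      The shape of the simplification, on a hypothetical call site
      (TIF_FOREIGN_FPSTATE is just an example flag):

          /* Before: route through the task's thread_info explicitly. */
          set_ti_thread_flag(task_thread_info(task), TIF_FOREIGN_FPSTATE);

          /* After: the task_struct-based helper says the same thing. */
          set_tsk_thread_flag(task, TIF_FOREIGN_FPSTATE);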
    • arm64: Port deprecated instruction emulation to new sysctl interface · 38b9aeb3
      Committed by Dave Martin
      Currently, armv8_deprecated.c takes charge of the "abi" sysctl
      directory, which makes life difficult for other code that wants to
      register sysctls in the same directory.
      
      There is a "new" [1] sysctl registration interface that removes the
      need to define ctl_tables for parent directories explicitly, which
      is ideal here.
      
      This patch ports register_insn_emulation_sysctl() over to the
      register_sysctl() interface and removes the redundant ctl_table for
      "abi".
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      
      [1] fea478d4 (sysctl: Add register_sysctl for normal sysctl
      users)
      The commit message notes an intent to port users of the
      pre-existing interfaces over to register_sysctl(), though the
      number of users of the new interface currently appears negligible.
      Signed-off-by: Will Deacon <will.deacon@arm.com>
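      A sketch of the ported registration, assuming a table of
      per-instruction entries (names illustrative); register_sysctl()
      creates the "abi" directory from the path string, so no parent
      ctl_table is needed:

          static struct ctl_table insn_emulation_sysctls[] = {
                  /* one entry per emulated instruction, filled at runtime */
                  { }
          };

          static void register_insn_emulation_sysctl_sketch(void)
          {
                  register_sysctl("abi", insn_emulation_sysctls);
          }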
    • arm64: signal: Verify extra data is user-readable in sys_rt_sigreturn · abf73988
      Committed by Dave Martin
      Currently sys_rt_sigreturn() verifies that the base sigframe is
      readable, but no similar check is performed on the extra data to
      which an extra_context record points.
      
      This matters because the extra data will be read with the
      unprotected user accessors.  However, this is not a problem at
      present because the extra data base address is required to be
      exactly at the end of the base sigframe.  So, there would need to
      be a non-user-readable kernel address within about 59K
      (SIGFRAME_MAXSZ - sizeof(struct rt_sigframe)) of some address for
      which access_ok(VERIFY_READ) returns true, in order for sigreturn
      to be able to read kernel memory that should be inaccessible to the
      user task.  This is currently impossible due to the untranslatable
      address hole between the TTBR0 and TTBR1 address ranges.
      
      Disappearance of the hole between the TTBR0 and TTBR1 mapping
      ranges would require the VA size for TTBR0 and TTBR1 to grow to at
      least 55 bits, and either the disabling of tagged pointers for
      userspace or enabling of tagged pointers for kernel space; none of
      which is currently envisaged.
      
      Even so, it is wrong to use the unprotected user accessors without
      an accompanying access_ok() check.
      
      To avoid the potential for future surprises, this patch does an
      explicit access_ok() check on the extra data space when parsing an
      extra_context record.
      
      Fixes: 33f08261 ("arm64: signal: Allow expansion of the signal frame")
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
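      The added check is essentially the following (variable names are
      illustrative; access_ok() took a VERIFY_* argument in kernels of
      this vintage):

          static int check_extra_data(const void __user *extra,
                                      size_t extra_size)
          {
                  /* Refuse to touch the extra data with the unprotected
                   * user accessors unless the whole range is readable. */
                  if (!access_ok(VERIFY_READ, extra, extra_size))
                          return -EFAULT;
                  return 0;
          }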
    • arm64: fpsimd: Correctly annotate exception helpers called from asm · 94ef7ecb
      Committed by Dave Martin
      A couple of FPSIMD exception handling functions that are called
      from entry.S are currently not annotated as such.
      
      This is not a big deal since asmlinkage does nothing on arm/arm64,
      but fixing the annotations is more consistent and may help avoid
      future surprises.
      
      This patch adds appropriate asmlinkage annotations for
      do_fpsimd_acc() and do_fpsimd_exc().
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
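      The annotations in question (signatures as in
      arch/arm64/kernel/fpsimd.c of this era; the asmlinkage marker
      documents that entry.S calls these directly):

          asmlinkage void do_fpsimd_acc(unsigned int esr,
                                        struct pt_regs *regs);
          asmlinkage void do_fpsimd_exc(unsigned int esr,
                                        struct pt_regs *regs);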
    • arm64: Fix static use of function graph · d125bffc
      Committed by Julien Thierry
      Function graph does not work currently when CONFIG_DYNAMIC_FTRACE
      is not set. This is because ftrace_trace_function is not always set
      to ftrace_stub when function_graph is in use.

      Do not skip checking of graph tracer functions when
      ftrace_trace_function is set.
      Signed-off-by: Julien Thierry <julien.thierry@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
      Reviewed-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
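      In C terms, the static (non-dynamic) mcount shim previously skipped
      the graph-tracer check whenever a function-trace callback was
      installed; the fix performs the graph check in both cases. This is
      a conceptual sketch of assembler code in entry-ftrace.S, and
      mcount_sketch()/ftrace_graph_caller_sketch() are assumed names:

          static void mcount_sketch(unsigned long pc, unsigned long lr)
          {
                  if (ftrace_trace_function != ftrace_stub)
                          ftrace_trace_function(pc, lr, NULL, NULL);

                  /* The bug: when the branch above was taken, this graph
                   * check used to be skipped entirely. */
                  if ((void *)ftrace_graph_return != (void *)ftrace_stub ||
                      ftrace_graph_entry != ftrace_graph_entry_stub)
                          ftrace_graph_caller_sketch(pc, lr);
          }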
  2. 02 Nov 2017 (9 commits)
  3. 31 Oct 2017 (1 commit)
  4. 27 Oct 2017 (2 commits)
  5. 25 Oct 2017 (1 commit)
  6. 24 Oct 2017 (1 commit)
    • arm64: Avoid aligning normal memory pointers in __memcpy_{to,from}io · 9ca255bf
      Committed by Mark Salyzyn
      __memcpy_{to,from}io fall back to byte-at-a-time copying if both the
      source and destination pointers are not 8-byte aligned. Since one of the
      pointers always points at normal memory, this is unnecessary and
      detrimental to performance, so only do byte copying until we hit an 8-byte
      boundary for the device pointer.
      
      This change was motivated by performance issues in the pstore driver.
      On a test platform, the measured probe time for pstore, with a 1/4MB
      console buffer and a 1/2MB pmsg buffer, was in the 90-107ms region;
      this change reduced it to 10-25ms, a noticeable boot-time improvement.
      
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
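      A sketch of the copy strategy for the fromio direction (close to,
      but not quoted from, the patch): byte accesses only until the
      device pointer is 8-byte aligned, then 64-bit accesses, since the
      normal-memory side tolerates unaligned stores:

          static void memcpy_fromio_sketch(void *to,
                                           const volatile void __iomem *from,
                                           size_t count)
          {
                  u8 *dst = to;

                  /* Byte accesses until the device pointer is aligned. */
                  while (count && !IS_ALIGNED((unsigned long)from, 8)) {
                          *dst++ = __raw_readb(from);
                          from++;
                          count--;
                  }

                  /* Bulk copy: aligned 64-bit device reads; the store to
                   * normal memory may be unaligned, which is fine. */
                  while (count >= 8) {
                          *(u64 *)dst = __raw_readq(from);
                          dst += 8;
                          from += 8;
                          count -= 8;
                  }

                  /* Tail bytes. */
                  while (count) {
                          *dst++ = __raw_readb(from);
                          from++;
                          count--;
                  }
          }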
  7. 20 Oct 2017 (1 commit)
    • arm64: Fix the feature type for ID register fields · 5bdecb79
      Committed by Suzuki K Poulose
      Now that the ARM ARM clearly specifies the rules for inferring
      the values of the ID register fields, fix the types of the
      feature bits we have in the kernel.
      
      The ARM ARM DDI 0487B.b, section D10.1.4 "Principles of the
      ID scheme for fields in ID registers", lists the registers to
      which the scheme applies, along with the exceptions.
      
      This patch changes the relevant feature bits from FTR_EXACT
      to FTR_LOWER_SAFE to select the safer value.  This will enable
      an older kernel running on a new CPU to detect the safer option
      rather than completely disabling the feature.
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
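      The shape of the change, on one illustrative table entry; the
      ARM64_FTR_BITS() argument list has varied between kernel versions,
      so treat the exact signature as an assumption. Before the change
      the third slot held FTR_EXACT, so CPUs disagreeing on the field
      would disable the feature:

          static const struct arm64_ftr_bits ftr_id_aa64isar0_sketch[] = {
                  /* After: the lowest value seen across CPUs is safe. */
                  ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE,
                                 ID_AA64ISAR0_SHA2_SHIFT, 4, 0),
                  ARM64_FTR_END,
          };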
  8. 18 Oct 2017 (1 commit)
  9. 11 Oct 2017 (1 commit)
    • arm64: Expose support for optional ARMv8-A features · f5e035f8
      Committed by Suzuki K Poulose
      ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3.
      Expose them to userspace via HWCAPs and mrs emulation.
      
      SHA2-512 - Instruction support for the SHA512 hash algorithm
                 (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
      SHA3     - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX)
      SM3      - Instruction support for the Chinese cryptography
                 algorithm SM3
      SM4      - Instruction support for the Chinese cryptography
                 algorithm SM4
      DP       - Dot Product instructions (UDOT, SDOT)
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
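      From userspace, the new capabilities show up in the HWCAP auxiliary
      vector word. A small probe program, using the HWCAP_* names this
      patch introduces:

          #include <stdio.h>
          #include <sys/auxv.h>
          #include <asm/hwcap.h>  /* AArch64 uapi HWCAP_* definitions */

          int main(void)
          {
                  unsigned long hwcaps = getauxval(AT_HWCAP);

                  printf("sha512 : %s\n", (hwcaps & HWCAP_SHA512)  ? "yes" : "no");
                  printf("sha3   : %s\n", (hwcaps & HWCAP_SHA3)    ? "yes" : "no");
                  printf("sm3    : %s\n", (hwcaps & HWCAP_SM3)     ? "yes" : "no");
                  printf("sm4    : %s\n", (hwcaps & HWCAP_SM4)     ? "yes" : "no");
                  printf("asimddp: %s\n", (hwcaps & HWCAP_ASIMDDP) ? "yes" : "no");
                  return 0;
          }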
  10. 04 Oct 2017 (1 commit)
    • arm64: consistently log boot/secondary CPU IDs · ccaac162
      Committed by Mark Rutland
      Currently we inconsistently log identifying information for the boot CPU
      and secondary CPUs. For the boot CPU, we log the MIDR and MPIDR across
      separate messages, whereas for the secondary CPUs we only log the MIDR.
      
      In some cases, it would be useful to know the MPIDR of secondary CPUs,
      and it would be nice for these messages to be consistent.
      
      This patch ensures that in the primary and secondary boot paths, we log
      both the MPIDR and MIDR in a single message, with a consistent
      format.  The MPIDR is consistently padded to 10 hex characters to
      cover Aff3 in bits 39:32, so that IDs can be compared easily.
      
      The newly redundant message in setup_arch() is removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Al Stone <ahs3@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      [will: added '0x' prefixes consistently]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
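      The message ends up along these lines (illustrative sketch;
      "0x%010lx" gives the 10-hex-digit zero-padded MPIDR, "0x%08x" the
      32-bit MIDR):

          static void announce_cpu_sketch(unsigned int cpu, u64 mpidr)
          {
                  pr_info("CPU%u: Booted secondary processor 0x%010lx [0x%08x]\n",
                          cpu, (unsigned long)mpidr, read_cpuid_id());
          }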
  11. 02 Oct 2017 (2 commits)
    • arm64: remove unneeded copy to init_utsname()->machine · c2f0b54f
      Committed by Masahiro Yamada
      As you can see in init/version.c, init_uts_ns.name.machine is
      initially set to UTS_MACHINE, so there is no point in copying the
      same string.

      I dug through the git history to figure out why this line is here.
      My best guess is as follows:
      
       - This line has been around here since the initial support of arm64
         by commit 9703d9d7 ("arm64: Kernel booting and initialisation").
         If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match,
         arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but the
         initial version of arch/arm64/Makefile failed to do so.  Instead,
         the boot code copied "aarch64" to init_utsname()->machine.
      
       - Commit 94ed1f2c ("arm64: setup: report ELF_PLATFORM as the
         machine for utsname") replaced "aarch64" with ELF_PLATFORM to
         make "uname" to reflect the endianness.
      
       - ELF_PLATFORM does not help to provide the UTS machine name to the
         rpm target, so commit cfa88c79 ("arm64: Set UTS_MACHINE in the
         Makefile") fixed it.  The commit simply replaced ELF_PLATFORM with
         UTS_MACHINE, but missed the fact that the string copy itself was
         no longer needed.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: move TASK_* definitions to <asm/processor.h> · eef94a3d
      Committed by Yury Norov
      The ILP32 series [1] introduces a dependency on <asm/is_compat.h>
      for the TASK_SIZE macro, which in turn requires <asm/thread_info.h>;
      <asm/thread_info.h> includes <asm/memory.h>, giving a circular
      dependency, because TASK_SIZE is currently located in <asm/memory.h>.

      On other architectures, TASK_SIZE is defined in <asm/processor.h>,
      and moving TASK_SIZE there fixes the problem.
      
      Discussion: https://patchwork.kernel.org/patch/9929107/
      
      [1] https://github.com/norov/linux/tree/ilp32-next
      
      CC: Will Deacon <will.deacon@arm.com>
      CC: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
  12. 27 Sep 2017 (1 commit)
  13. 18 Sep 2017 (2 commits)
  14. 14 Sep 2017 (1 commit)
  15. 09 Sep 2017 (1 commit)
  16. 23 Aug 2017 (5 commits)
    • arm64: cleanup {COMPAT_,}SET_PERSONALITY() macro · d1be5c99
      Committed by Yury Norov
      Some work needs to be done after setting the personality, and
      currently it is done in the macro itself, which is not the best
      idea.

      This patch introduces a new arch_setup_new_exec() routine and moves
      all of that setup code there, as suggested by Catalin:
      https://lkml.org/lkml/2017/8/4/494
      
      Cc: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      [catalin.marinas@arm.com: comments changed or removed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
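      On arm64 the new hook boils down to something like the following
      (a sketch; MMCF_AARCH32 comes from the mm_context_t flags rework
      elsewhere in this series):

          void arch_setup_new_exec(void)
          {
                  /* Record whether the new image is AArch32 once, at
                   * exec time, instead of inside SET_PERSONALITY(). */
                  current->mm->context.flags =
                          is_compat_task() ? MMCF_AARCH32 : 0;
          }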
    • arm64: kaslr: Adjust the offset to avoid Image across alignment boundary · a067d94d
      Committed by Catalin Marinas
      With 16KB pages and a kernel Image larger than 16MB, the current
      kaslr_early_init() logic for avoiding mappings across swapper table
      boundaries fails since increasing the offset by kimg_sz just moves the
      problem to the next boundary.
      
      This patch rounds the offset down to (1 << SWAPPER_TABLE_SHIFT) if the
      Image crosses a PMD_SIZE boundary.
      
      Fixes: afd0e5a8 ("arm64: kaslr: Fix up the kernel image alignment")
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
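      The adjustment, sketched below; this reflects the logic after both
      this fix and the modulo-offset fix in the next entry, and the
      helper name is illustrative:

          extern char _text[], _end[];

          static u64 clamp_kaslr_offset_sketch(u64 offset)
          {
                  /* If the proposed offset would make the image straddle
                   * a swapper-table boundary, round the offset down to
                   * the boundary instead of pushing it upwards. */
                  if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
                      (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
                          offset = round_down(offset,
                                              1UL << SWAPPER_TABLE_SHIFT);
                  return offset;
          }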
    • arm64: kaslr: ignore modulo offset when validating virtual displacement · 4a23e56a
      Committed by Ard Biesheuvel
      In the KASLR setup routine, we ensure that the early virtual mapping
      of the kernel image does not cover more than a single table entry at
      the level above the swapper block level, so that the assembler routines
      involved in setting up this mapping can remain simple.
      
      In this calculation we add the proposed KASLR offset to the values of
      the _text and _end markers, and reject it if they would end up falling
      in different swapper table sized windows.
      
      However, when taking the addresses of _text and _end, the modulo offset
      (the physical displacement modulo 2 MB) is already accounted for, and
      so adding it again gives incorrect results.  So disregard the
      modulo offset in this calculation.
      
      Fixes: 08cdac61 ("arm64: relocatable: deal with physically misaligned ...")
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
    • arm64: fpsimd: Prevent registers leaking across exec · 09662210
      Committed by Dave Martin
      There are some tricky dependencies between the different stages of
      flushing the FPSIMD register state during exec, and these can race
      with context switch in ways that can cause the old task's regs to
      leak across.  In particular, a context switch during the memset() can
      cause some of the task's old FPSIMD registers to reappear.
      
      Disabling preemption for this small window would be no big deal for
      performance: preemption is already disabled for similar scenarios
      like updating the FPSIMD registers in sigreturn.
      
      So, instead of rearranging things in ways that might swap existing
      subtle bugs for new ones, this patch just disables preemption
      around the FPSIMD state flushing so that races of this type can't
      occur here.  This brings fpsimd_flush_thread() into line with other
      code paths.
      
      Cc: stable@vger.kernel.org
      Fixes: 674c242c ("arm64: flush FP/SIMD state correctly after execve()")
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
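      The fix, in outline (a sketch; the memset and flush are as
      described in the message, and the preemption bracket is the new
      part):

          void fpsimd_flush_thread(void)
          {
                  preempt_disable();  /* no context switch mid-flush */
                  memset(&current->thread.fpsimd_state, 0,
                         sizeof(struct fpsimd_state));
                  fpsimd_flush_task_state(current);
                  preempt_enable();
          }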
    • arm64: introduce separated bits for mm_context_t flags · 5ce93ab6
      Committed by Yury Norov
      Currently mm->context.flags field uses thread_info flags which is not
      the best idea for many reasons. For example, mm_context_t doesn't need
      most of thread_info flags. And it would be difficult to add new mm-related
      flag if needed because it may easily interfere with TIF ones.
      
      To deal with it, the new MMCF_AARCH32 flag is introduced for
      mm_context_t->flags, where MMCF prefix stands for mm_context_t flags.
      Also, mm_context_t flag doesn't require atomicity and ordering of the
      access, so using set/clear_bit() is replaced with simple masks.
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
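      A sketch of the new flag handling (the MMCF_AARCH32 value follows
      the message; the helper is shown for illustration only). Plain
      masks are enough because nothing here needs atomic or ordered
      access:

          #define MMCF_AARCH32  0x1  /* mm context flag for AArch32 executables */

          static void set_mm_compat_sketch(mm_context_t *ctx, bool aarch32)
          {
                  if (aarch32)
                          ctx->flags |= MMCF_AARCH32;   /* no set_bit() */
                  else
                          ctx->flags &= ~MMCF_AARCH32;  /* no clear_bit() */
          }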
  17. 22 Aug 2017 (1 commit)
    • arm64: kexec: have own crash_smp_send_stop() for crash dump for nonpanic cores · a88ce63b
      Committed by Hoeun Ryu
      Commit 0ee59413 ("x86/panic: replace smp_send_stop() with kdump
      friendly version in panic path") introduced crash_smp_send_stop(),
      a weak function that architecture code can override to fix the
      side effect caused by commit f06e5153 ("kernel/panic.c: add
      "crash_kexec_post_notifiers" option").

      The ARM64 architecture uses the weak version, and the problem is
      that it simply calls smp_send_stop(), which takes the nonpanic CPUs
      offline and removes any chance of saving their crash information in
      machine_crash_shutdown() when the crash_kexec_post_notifiers kernel
      option is enabled.

      Calling smp_send_crash_stop() in machine_crash_shutdown() is
      useless because in this case all nonpanic CPUs have already been
      taken offline by smp_send_stop(), and smp_send_crash_stop() only
      works against online CPUs.

      The result is that the secondary CPUs' registers are not saved by
      crash_save_cpu() and the vmcore file misreports these CPUs as
      being offline.

      crash_smp_send_stop() is implemented to fix this problem by
      replacing the existing smp_send_crash_stop() and adding a guard
      against multiple invocations.  The function (the strong symbol
      version) saves crash information for the nonpanic CPUs, and
      machine_crash_shutdown() tries to save crash information for the
      nonpanic CPUs only when the crash_kexec_post_notifiers kernel
      option is disabled.
      
      * crash_kexec_post_notifiers : false
      
        panic()
          __crash_kexec()
            machine_crash_shutdown()
              crash_smp_send_stop()    <= save crash dump for nonpanic cores
      
      * crash_kexec_post_notifiers : true
      
        panic()
          crash_smp_send_stop()        <= save crash dump for nonpanic cores
          __crash_kexec()
            machine_crash_shutdown()
              crash_smp_send_stop()    <= just return.
      Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
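      In outline, the strong-symbol version looks like this (a simplified
      sketch; the real function also waits, with a timeout, for the IPIs
      to be acknowledged):

          void crash_smp_send_stop(void)
          {
                  static int cpus_stopped;
                  cpumask_t mask;

                  /* Reached twice when crash_kexec_post_notifiers is set
                   * (once from panic(), once from
                   * machine_crash_shutdown()): only the first call does
                   * the work. */
                  if (cpus_stopped)
                          return;
                  cpus_stopped = 1;

                  if (num_online_cpus() == 1)
                          return;

                  cpumask_copy(&mask, cpu_online_mask);
                  cpumask_clear_cpu(smp_processor_id(), &mask);

                  /* Make the other cores save their state via
                   * crash_save_cpu() before parking themselves. */
                  smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
          }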