1. 03 Nov 2017, 8 commits
  2. 02 Nov 2017, 9 commits
  3. 31 Oct 2017, 1 commit
  4. 27 Oct 2017, 2 commits
  5. 25 Oct 2017, 1 commit
  6. 24 Oct 2017, 1 commit
    • arm64: Avoid aligning normal memory pointers in __memcpy_{to,from}io · 9ca255bf
      Committed by Mark Salyzyn
      __memcpy_{to,from}io fall back to byte-at-a-time copying if both the
      source and destination pointers are not 8-byte aligned. Since one of the
      pointers always points at normal memory, this is unnecessary and
      detrimental to performance, so only do byte copying until we hit an 8-byte
      boundary for the device pointer.
      
      This change was motivated by performance issues in the pstore driver.
      On a test platform, the measured pstore probe time, with a 1/4MB
      console buffer and a 1/2MB pmsg buffer, was in the 90-107ms region.
      The change reduced it to 10-25ms, a direct improvement in boot time.
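      The copy loop described above can be sketched in plain C. This is a
      simplified model, not the kernel's actual implementation or accessors:
      readb_sim/readq_sim are hypothetical stand-ins for the MMIO read
      helpers, and the key point is that alignment is tracked only on the
      device-side pointer.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint8_t readb_sim(const uint8_t *p) { return *p; }

static uint64_t readq_sim(const uint8_t *p)
{
    uint64_t v;
    memcpy(&v, p, sizeof(v));           /* p is 8-byte aligned here */
    return v;
}

void memcpy_fromio_sim(uint8_t *to, const uint8_t *from, size_t count)
{
    /* Byte accesses only until the device (source) pointer is aligned. */
    while (count && ((uintptr_t)from & 7)) {
        *to++ = readb_sim(from++);
        count--;
    }
    /* 8-byte device accesses; the normal-memory destination may stay
     * unaligned, since normal memory tolerates unaligned stores. */
    while (count >= 8) {
        uint64_t v = readq_sim(from);
        memcpy(to, &v, sizeof(v));      /* possibly unaligned store */
        from += 8;
        to += 8;
        count -= 8;
    }
    /* Trailing bytes. */
    while (count) {
        *to++ = readb_sim(from++);
        count--;
    }
}
```

      With a misaligned source, only the first few bytes go through the
      byte path before the loop switches to 8-byte reads.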
      
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Anton Vorontsov <anton@enomsg.org>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Robin Murphy <robin.murphy@arm.com>
      Signed-off-by: Mark Salyzyn <salyzyn@android.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9ca255bf
  7. 20 Oct 2017, 1 commit
    • arm64: Fix the feature type for ID register fields · 5bdecb79
      Committed by Suzuki K Poulose
      Now that the ARM ARM clearly specifies the rules for inferring
      the values of the ID register fields, fix the types of the
      feature bits we have in the kernel.
      
      ARM ARM DDI0487B.b, section D10.1.4 "Principles of the ID scheme
      for fields in ID registers", lists the registers to which the
      scheme applies, along with the exceptions.
      
      This patch changes the relevant feature bits from FTR_EXACT
      to FTR_LOWER_SAFE to select the safer value. This will enable
      an older kernel running on a new CPU to detect the safer option
      rather than completely disabling the feature.
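      The difference between the two feature types can be sketched as a
      small selection function. This is an illustrative model of the idea,
      not the kernel's exact cpufeature.c code: with FTR_LOWER_SAFE the
      smaller field value is the safe one, while FTR_EXACT falls back to a
      fixed safe default whenever CPUs disagree.

```c
#include <assert.h>
#include <stdint.h>

enum ftr_type { FTR_EXACT, FTR_LOWER_SAFE };

/* Pick the system-wide safe value for one ID register field, given the
 * value seen on a new CPU and the current system-wide value. */
int64_t ftr_safe_value(enum ftr_type type, int64_t newval, int64_t cur,
                       int64_t safe_default)
{
    switch (type) {
    case FTR_LOWER_SAFE:
        return newval < cur ? newval : cur;  /* lowest value is safe */
    case FTR_EXACT:
    default:
        return newval == cur ? newval : safe_default;
    }
}
```

      With FTR_LOWER_SAFE, a kernel that sees a higher (newer) field value
      simply keeps the lower one it already understands, instead of
      discarding the feature entirely.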
      
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5bdecb79
  8. 18 Oct 2017, 1 commit
  9. 11 Oct 2017, 1 commit
    • arm64: Expose support for optional ARMv8-A features · f5e035f8
      Committed by Suzuki K Poulose
      ARMv8-A adds a few optional features for ARMv8.2 and ARMv8.3.
      Expose them to userspace via HWCAPs and mrs emulation.
      
      SHA2-512 - Instruction support for the SHA512 hash algorithm
                 (e.g. SHA512H, SHA512H2, SHA512SU0, SHA512SU1)
      SHA3     - SHA3 crypto instructions (EOR3, RAX1, XAR, BCAX)
      SM3      - Instruction support for the Chinese cryptography algorithm SM3
      SM4      - Instruction support for the Chinese cryptography algorithm SM4
      DP       - Dot Product instructions (UDOT, SDOT)
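      Userspace would normally test these capabilities by reading the
      AT_HWCAP auxiliary vector (getauxval(AT_HWCAP) on arm64) and masking
      against the bits from <asm/hwcap.h>. The bit positions below mirror
      the arm64 uapi header as of this series, but are redefined locally
      so this sketch builds on any host; treat the exact values as
      belonging to that header rather than to this example.

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions matching arm64's uapi <asm/hwcap.h> after this series;
 * on a real arm64 system, include the header instead of redefining. */
#define HWCAP_SHA3     (1UL << 17)
#define HWCAP_SM3      (1UL << 18)
#define HWCAP_SM4      (1UL << 19)
#define HWCAP_ASIMDDP  (1UL << 20)
#define HWCAP_SHA512   (1UL << 21)

/* Test one capability bit in an AT_HWCAP-style mask. */
int has_feature(unsigned long hwcaps, unsigned long bit)
{
    return (hwcaps & bit) != 0;
}
```

      On arm64 the mask would come from getauxval(AT_HWCAP); here a
      synthetic mask stands in for it.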
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      f5e035f8
  10. 04 Oct 2017, 1 commit
    • arm64: consistently log boot/secondary CPU IDs · ccaac162
      Committed by Mark Rutland
      Currently we inconsistently log identifying information for the boot CPU
      and secondary CPUs. For the boot CPU, we log the MIDR and MPIDR across
      separate messages, whereas for the secondary CPUs we only log the MIDR.
      
      In some cases, it would be useful to know the MPIDR of secondary CPUs,
      and it would be nice for these messages to be consistent.
      
      This patch ensures that the primary and secondary boot paths log
      both the MPIDR and MIDR in a single message with a consistent
      format. The MPIDR is consistently padded to 10 hex characters to
      cover Aff3 in bits 39:32, so that IDs can be compared easily.
      
      The newly redundant message in setup_arch() is removed.
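      The padding rule can be sketched with a format string: "%010llx"
      zero-pads the MPIDR to 10 hex digits, so Aff3 (bits 39:32) always
      occupies the top two digits. The banner text and the hardware-ID
      mask below are illustrative stand-ins, not byte-for-byte copies of
      the kernel's message or of MPIDR_HWID_BITMASK.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Illustrative affinity-field mask (Aff3 in 39:32, Aff2..Aff0 in 23:0). */
#define MPIDR_AFF_MASK 0xff00ffffffULL

/* Compose a consistent CPU banner: MIDR at 8 digits, MPIDR at 10. */
void format_cpu_banner(char *buf, size_t len, unsigned cpu,
                       uint64_t midr, uint64_t mpidr)
{
    snprintf(buf, len,
             "CPU%u: Booted secondary processor 0x%010llx [0x%08llx]",
             cpu,
             (unsigned long long)(mpidr & MPIDR_AFF_MASK),
             (unsigned long long)midr);
}
```

      An MPIDR with a nonzero Aff3, such as 0x0100000003, keeps its
      leading digits visible instead of collapsing to "0x3".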
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Al Stone <ahs3@redhat.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      [will: added '0x' prefixes consistently]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      ccaac162
  11. 02 Oct 2017, 2 commits
    • arm64: remove unneeded copy to init_utsname()->machine · c2f0b54f
      Committed by Masahiro Yamada
      As you can see in init/version.c, init_uts_ns.name.machine is
      initially set to UTS_MACHINE, so there is no point in copying the
      same string again.
      
      I dug through the git history to figure out why this line is here.
      My best guess is as follows:
      
       - This line has been around since the initial arm64 support in
         commit 9703d9d7 ("arm64: Kernel booting and initialisation").
         If ARCH (=arm64) and UTS_MACHINE (=aarch64) do not match,
         arch/$(ARCH)/Makefile is supposed to override UTS_MACHINE, but
         the initial version of arch/arm64/Makefile failed to do so.
         Instead, the boot code copied "aarch64" to init_utsname()->machine.
      
       - Commit 94ed1f2c ("arm64: setup: report ELF_PLATFORM as the
         machine for utsname") replaced "aarch64" with ELF_PLATFORM to
         make "uname" reflect the endianness.
      
       - ELF_PLATFORM does not help provide the UTS machine name to the
         rpm target, so commit cfa88c79 ("arm64: Set UTS_MACHINE in the
         Makefile") fixed it. That commit simply replaced ELF_PLATFORM
         with UTS_MACHINE, but missed the fact that the string copy
         itself was no longer needed.
      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      c2f0b54f
    • arm64: move TASK_* definitions to <asm/processor.h> · eef94a3d
      Committed by Yury Norov
      The ILP32 series [1] introduces a dependency on <asm/is_compat.h>
      for the TASK_SIZE macro. That header in turn requires
      <asm/thread_info.h>, and <asm/thread_info.h> includes
      <asm/memory.h>, creating a circular dependency, because TASK_SIZE
      is currently located in <asm/memory.h>.
      
      Other architectures define TASK_SIZE in <asm/processor.h>, and
      moving TASK_SIZE there fixes the problem.
      
      Discussion: https://patchwork.kernel.org/patch/9929107/
      
      [1] https://github.com/norov/linux/tree/ilp32-next
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Laura Abbott <labbott@redhat.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Suggested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      eef94a3d
  12. 27 Sep 2017, 1 commit
  13. 18 Sep 2017, 2 commits
  14. 14 Sep 2017, 1 commit
  15. 09 Sep 2017, 1 commit
  16. 23 Aug 2017, 5 commits
    • arm64: cleanup {COMPAT_,}SET_PERSONALITY() macro · d1be5c99
      Committed by Yury Norov
      There is some work that should be done after setting the
      personality. Currently it is done in the macro, which is not the
      best place for it.
      
      This patch introduces a new arch_setup_new_exec() routine and
      moves all of the setup code there, as suggested by Catalin:
      https://lkml.org/lkml/2017/8/4/494
      
      Cc: Pratyush Anand <panand@redhat.com>
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      [catalin.marinas@arm.com: comments changed or removed]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      d1be5c99
    • arm64: kaslr: Adjust the offset to avoid Image across alignment boundary · a067d94d
      Committed by Catalin Marinas
      With 16KB pages and a kernel Image larger than 16MB, the current
      kaslr_early_init() logic for avoiding mappings across swapper table
      boundaries fails since increasing the offset by kimg_sz just moves the
      problem to the next boundary.
      
      This patch rounds the offset down to a multiple of
      (1 << SWAPPER_TABLE_SHIFT) if the Image crosses a PMD_SIZE
      boundary.
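      The adjustment can be sketched as follows. This is a simplified
      model of the fix, not the kernel's kaslr_early_init() code: the
      SWAPPER_TABLE_SHIFT value of 25 is an assumed, illustrative
      configuration (the real value depends on the page size), and the
      point is that the offset is rounded down rather than bumped by the
      image size, which only moved the problem to the next boundary.

```c
#include <assert.h>
#include <stdint.h>

#define SWAPPER_TABLE_SHIFT 25                       /* illustrative */
#define SWAPPER_TABLE_SIZE  (1ULL << SWAPPER_TABLE_SHIFT)

/* If the randomized image would straddle a swapper table boundary,
 * round the KASLR offset down to a table-size multiple. */
uint64_t adjust_kaslr_offset(uint64_t offset, uint64_t kimg_start,
                             uint64_t kimg_sz)
{
    uint64_t start = kimg_start + offset;
    uint64_t end   = start + kimg_sz - 1;

    if ((start >> SWAPPER_TABLE_SHIFT) != (end >> SWAPPER_TABLE_SHIFT))
        offset &= ~(SWAPPER_TABLE_SIZE - 1);         /* round_down() */
    return offset;
}
```

      Rounding down (rather than adding kimg_sz) terminates: the rounded
      offset keeps a table-aligned image start, so it cannot straddle the
      next boundary either, as long as the image fits in one table window.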
      
      Fixes: afd0e5a8 ("arm64: kaslr: Fix up the kernel image alignment")
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: Neeraj Upadhyay <neeraju@codeaurora.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      a067d94d
    • arm64: kaslr: ignore modulo offset when validating virtual displacement · 4a23e56a
      Committed by Ard Biesheuvel
      In the KASLR setup routine, we ensure that the early virtual mapping
      of the kernel image does not cover more than a single table entry at
      the level above the swapper block level, so that the assembler routines
      involved in setting up this mapping can remain simple.
      
      In this calculation we add the proposed KASLR offset to the values of
      the _text and _end markers, and reject it if they would end up falling
      in different swapper table sized windows.
      
      However, the addresses of _text and _end already account for the
      modulo offset (the physical displacement modulo 2 MB), so adding
      it again gives an incorrect result. Disregard the modulo offset
      in the calculation.
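      The corrected validation reduces to a single-window check on the
      link addresses themselves. This sketch assumes an illustrative
      SWAPPER_TABLE_SHIFT of 25; the essential point is that no modulo
      term is added to _text/_end, because the link addresses already
      include it.

```c
#include <assert.h>
#include <stdint.h>

#define SWAPPER_TABLE_SHIFT 25                       /* illustrative */

/* Accept a proposed KASLR offset only if [_text, _end) lands inside a
 * single swapper-table-sized window. text/end already include the
 * physical modulo offset, so it must not be added again. */
int offset_ok(uint64_t text, uint64_t end, uint64_t offset)
{
    return ((text + offset) >> SWAPPER_TABLE_SHIFT) ==
           ((end + offset - 1) >> SWAPPER_TABLE_SHIFT);
}
```

      Before the fix, the modulo offset was effectively counted twice,
      which could reject valid offsets and accept crossing ones.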
      
      Fixes: 08cdac61 ("arm64: relocatable: deal with physically misaligned ...")
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      4a23e56a
    • arm64: fpsimd: Prevent registers leaking across exec · 09662210
      Committed by Dave Martin
      There are some tricky dependencies between the different stages of
      flushing the FPSIMD register state during exec, and these can race
      with context switch in ways that can cause the old task's regs to
      leak across.  In particular, a context switch during the memset() can
      cause some of the task's old FPSIMD registers to reappear.
      
      Disabling preemption for this small window would be no big deal for
      performance: preemption is already disabled for similar scenarios
      like updating the FPSIMD registers in sigreturn.
      
      So, instead of rearranging things in ways that might swap existing
      subtle bugs for new ones, this patch just disables preemption
      around the FPSIMD state flushing so that races of this type can't
      occur here.  This brings fpsimd_flush_thread() into line with other
      code paths.
      
      Cc: stable@vger.kernel.org
      Fixes: 674c242c ("arm64: flush FP/SIMD state correctly after execve()")
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Dave Martin <Dave.Martin@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      09662210
    • arm64: introduce separated bits for mm_context_t flags · 5ce93ab6
      Committed by Yury Norov
      Currently the mm->context.flags field uses thread_info flags,
      which is not a good fit for several reasons. For example,
      mm_context_t does not need most of the thread_info flags, and
      adding a new mm-related flag would be difficult because it could
      easily interfere with the TIF ones.
      
      To deal with this, a new MMCF_AARCH32 flag is introduced for
      mm_context_t->flags, where the MMCF prefix stands for mm_context_t
      flags. Since mm_context_t flags require neither atomicity nor
      ordering of accesses, set/clear_bit() is replaced with simple
      masks.
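      The plain-mask style can be sketched as below. The struct and
      helper names here are hypothetical stand-ins for mm_context_t and
      its accessors; MMCF_AARCH32 is the flag the patch introduces, and
      the point is that ordinary |=/&= operations replace the atomic
      set_bit()/clear_bit() calls.

```c
#include <assert.h>

#define MMCF_AARCH32  0x1   /* mm context flag: AArch32 executable */

/* Minimal stand-in for the mm_context_t flags field. */
struct mm_context_sim {
    unsigned long flags;
};

/* No atomicity or ordering needed, so plain masks suffice. */
static void set_aarch32(struct mm_context_sim *mm)
{
    mm->flags |= MMCF_AARCH32;
}

static void clear_aarch32(struct mm_context_sim *mm)
{
    mm->flags &= ~MMCF_AARCH32;
}

static int is_aarch32(const struct mm_context_sim *mm)
{
    return mm->flags & MMCF_AARCH32;
}
```

      A dedicated flag namespace also means new MMCF_* bits can be added
      without worrying about colliding with TIF_* values.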
      Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5ce93ab6
  17. 22 Aug 2017, 1 commit
    • arm64: kexec: have own crash_smp_send_stop() for crash dump for nonpanic cores · a88ce63b
      Committed by Hoeun Ryu
      Commit 0ee59413 ("x86/panic: replace smp_send_stop() with kdump
      friendly version in panic path") introduced crash_smp_send_stop(),
      a weak function that architecture code can override to fix the
      side effect caused by commit f06e5153 ("kernel/panic.c: add
      "crash_kexec_post_notifiers" option").
      
      The arm64 architecture currently uses the weak version, and the
      problem is that the weak function simply calls smp_send_stop(),
      which takes the other CPUs offline and removes any chance of
      saving crash information for the nonpanic CPUs in
      machine_crash_shutdown() when the crash_kexec_post_notifiers
      kernel option is enabled.
      
      Calling smp_send_crash_stop() in machine_crash_shutdown() is
      useless in that case, because all nonpanic CPUs are already
      offline due to smp_send_stop(), and smp_send_crash_stop() only
      works against online CPUs.
      
      The result is that the secondary CPUs' registers are not saved by
      crash_save_cpu() and the vmcore file misreports these CPUs as
      being offline.
      
      crash_smp_send_stop() is implemented to fix this problem by
      replacing the existing smp_send_crash_stop() and adding a guard
      against multiple calls. The strong-symbol version saves crash
      information for the nonpanic CPUs, and machine_crash_shutdown()
      tries to save crash information for the nonpanic CPUs only when
      the crash_kexec_post_notifiers kernel option is disabled.
      
      * crash_kexec_post_notifiers : false
      
        panic()
          __crash_kexec()
            machine_crash_shutdown()
              crash_smp_send_stop()    <= save crash dump for nonpanic cores
      
      * crash_kexec_post_notifiers : true
      
        panic()
          crash_smp_send_stop()        <= save crash dump for nonpanic cores
          __crash_kexec()
            machine_crash_shutdown()
              crash_smp_send_stop()    <= just return.
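      The "just return" behavior on the second call comes from a guard in
      the strong-symbol version. This sketch models only that guard:
      stop_and_save_noncrash_cpus() is a hypothetical stand-in for the
      IPI-and-crash_save_cpu() work, and stop_count exists purely as test
      instrumentation.

```c
#include <assert.h>
#include <stdbool.h>

static int stop_count;          /* test instrumentation only */

/* Hypothetical stand-in: would IPI the secondaries and save their
 * registers via crash_save_cpu(). */
static void stop_and_save_noncrash_cpus(void)
{
    stop_count++;
}

/* Strong-symbol version: panic() may call this before __crash_kexec()
 * reaches machine_crash_shutdown(), so remember that the secondaries
 * were already stopped and return early on any later call. */
void crash_smp_send_stop_sim(void)
{
    static bool cpus_stopped;

    if (cpus_stopped)
        return;                 /* already done from the panic() path */

    stop_and_save_noncrash_cpus();
    cpus_stopped = true;
}
```

      This makes both call flows above safe: whichever path calls first
      does the real work, and the second call is a no-op.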
      Signed-off-by: Hoeun Ryu <hoeun.ryu@gmail.com>
      Reviewed-by: James Morse <james.morse@arm.com>
      Tested-by: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a88ce63b
  18. 21 Aug 2017, 1 commit
    • arm64: Move PTE_RDONLY bit handling out of set_pte_at() · 73e86cb0
      Committed by Catalin Marinas
      Currently PTE_RDONLY is treated as a hardware-only bit and is not
      handled by pte_mkwrite(), pte_wrprotect() or the user PAGE_*
      definitions. The set_pte_at() function is responsible for setting
      this bit based on the write permission or dirty state. This patch
      moves the PTE_RDONLY handling out of set_pte_at() into the
      pte_mkwrite()/pte_wrprotect() functions. The PAGE_* definitions
      need to be updated to explicitly include PTE_RDONLY when
      !PTE_WRITE.
      
      The patch also removes the redundant PAGE_COPY(_EXEC) definitions as
      they are identical to the corresponding PAGE_READONLY(_EXEC).
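      The new division of labor can be sketched as two pure bit
      manipulations. This is a simplified model, not the kernel's pte_t
      API: the bit positions follow arm64's <asm/pgtable-hwdef.h>
      (PTE_RDONLY is a hardware bit, PTE_WRITE a software bit) but should
      be treated as illustrative, and pte_sim_t stands in for pte_t.

```c
#include <assert.h>
#include <stdint.h>

#define PTE_RDONLY  (1ULL << 7)    /* hardware read-only bit */
#define PTE_WRITE   (1ULL << 57)   /* software writable bit */

typedef uint64_t pte_sim_t;

/* Make a pte writable: set PTE_WRITE and clear PTE_RDONLY directly,
 * rather than deferring the PTE_RDONLY update to set_pte_at(). */
static pte_sim_t pte_mkwrite_sim(pte_sim_t pte)
{
    pte |= PTE_WRITE;
    pte &= ~PTE_RDONLY;
    return pte;
}

/* Write-protect a pte: clear PTE_WRITE and set PTE_RDONLY. */
static pte_sim_t pte_wrprotect_sim(pte_sim_t pte)
{
    pte &= ~PTE_WRITE;
    pte |= PTE_RDONLY;
    return pte;
}
```

      With both bits kept consistent by these helpers, the PAGE_* protection
      values can state PTE_RDONLY explicitly whenever PTE_WRITE is absent.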
      Reviewed-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      73e86cb0