1. 05 Apr, 2017 (1 commit)
  2. 23 Mar, 2017 (2 commits)
  3. 10 Feb, 2017 (1 commit)
  4. 03 Feb, 2017 (1 commit)
    • efi: arm64: Add vmlinux debug link to the Image binary · 757b435a
      Committed by Ard Biesheuvel
      When building with debugging symbols, take the absolute path to the
      vmlinux binary and add it to the special PE/COFF debug table entry.
      This allows a debug EFI build to find the vmlinux binary, which is
      very helpful in debugging, given that the offset where the Image is
      first loaded by EFI is highly unpredictable.
      
      On implementations of UEFI that choose to implement it, this
      information is exposed via the EFI debug support table, which is a UEFI
      configuration table that is accessible both by the firmware at boot time
      and by the OS at runtime, and lists all PE/COFF images loaded by the
      system.
      
      The format of the NB10 Codeview entry is based on the definition used
      by EDK2, which is our primary reference when it comes to the use of
      PE/COFF in the context of UEFI firmware.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: use realpath instead of shell invocation, as discussed on list]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      757b435a
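      For reference, a minimal C sketch of an NB10 CodeView debug entry,
      modelled on EDK2's EFI_IMAGE_DEBUG_CODEVIEW_NB10_ENTRY definition
      (the struct and field names here are illustrative, not the kernel's):

      ```c
      #include <stdint.h>

      /* Sketch of an NB10 CodeView entry; the firmware and debuggers
       * locate it via the PE/COFF debug directory. */
      struct codeview_nb10_entry {
              uint32_t signature;   /* the ASCII characters "NB10" */
              uint32_t unknown[3];  /* unused words in the NB10 variant */
              /* NUL-terminated absolute path to vmlinux follows here */
      };
      ```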
  5. 18 Jan, 2017 (1 commit)
    • arm64: head.S: avoid open-coded adr_l · 9bb00360
      Committed by Mark Rutland
      Some places in the kernel open-code sequences using ADRP for a symbol
      followed by another instruction using a :lo12: relocation for that same
      symbol. These sequences are easy to get wrong, and more painful to read
      than is necessary. For these reasons, it is preferable to use the
      {adr,ldr,str}_l macros for these cases.
      
      This patch makes use of adr_l in head.S, removing an open-coded
      sequence using adrp.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Marc Zyngier <marc.zyngier@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9bb00360
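      For illustration, a sketch of what adr_l expands to, written as a C
      translation unit carrying GNU assembler text (mirroring the shape of
      the macro in arm64's asm/assembler.h; a sketch, not the kernel source):

      ```c
      /* adr_l composes an address from ADRP (page address) plus an ADD
       * with a :lo12: relocation (offset within the page), replacing the
       * error-prone open-coded pair. */
      __asm__(
              "       .macro adr_l, dst, sym           \n"
              "       adrp    \\dst, \\sym             \n"
              "       add     \\dst, \\dst, :lo12:\\sym\n"
              "       .endm                            \n"
      );
      ```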
  6. 10 Jan, 2017 (1 commit)
  7. 29 Nov, 2016 (1 commit)
    • arm64: head.S: Fix CNTHCTL_EL2 access on VHE system · 1650ac49
      Committed by Jintack
      The bit positions of CNTHCTL_EL2 change depending on the HCR_EL2.E2H
      bit. EL1PCEN and EL1PCTEN are the 1st and 0th bits when E2H is not set,
      but the 11th and 10th bits respectively when E2H is set. The current
      code unintentionally sets the wrong bits in CNTHCTL_EL2 when E2H is set.
      
      In fact, we don't need to set those two bits, which allow EL1 and EL0
      to access the physical timer and counter respectively, if E2H and TGE
      are set for the host kernel. They will be configured later as necessary.
      First, we don't need to configure those bits for EL1, since the host
      kernel runs in EL2. It is the hypervisor's responsibility to configure
      them before entering a VM, which runs in EL0 and EL1. Second, EL0
      accesses are configured in a later stage of the boot process.
      Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
      Acked-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1650ac49
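      A small self-contained C sketch of the bit layout described above
      (masks only; this is not kernel code):

      ```c
      #include <stdio.h>

      /* EL1PCTEN/EL1PCEN positions in CNTHCTL_EL2 as described above:
       * bits 0/1 without E2H, bits 10/11 with E2H set. */
      #define CNTHCTL_EL1PCTEN(e2h) (1UL << ((e2h) ? 10 : 0))
      #define CNTHCTL_EL1PCEN(e2h)  (1UL << ((e2h) ? 11 : 1))

      int main(void)
      {
              printf("E2H=0: EL1PCTEN=%#lx EL1PCEN=%#lx\n",
                     CNTHCTL_EL1PCTEN(0), CNTHCTL_EL1PCEN(0));
              printf("E2H=1: EL1PCTEN=%#lx EL1PCEN=%#lx\n",
                     CNTHCTL_EL1PCTEN(1), CNTHCTL_EL1PCEN(1));
              return 0;
      }
      ```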
  8. 22 Nov, 2016 (1 commit)
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Committed by Catalin Marinas
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the value
      from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4b65a5db
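      A conceptual C model of the two operations described above (the
      helpers and globals are hypothetical stand-ins for system-register
      accessors; the real implementation is assembly macros):

      ```c
      /* Conceptual model of uaccess_ttbr0_{disable,enable}. */
      struct thread_info { unsigned long ttbr0; /* ... */ };

      extern unsigned long reserved_ttbr0_phys; /* zeroed page near swapper_pg_dir */
      extern struct thread_info *current_ti;

      extern void write_ttbr0_el1(unsigned long val);
      extern unsigned long irq_save(void);
      extern void irq_restore(unsigned long flags);

      static void uaccess_disable(void)
      {
              /* point TTBR0_EL1 at the reserved zeroed page: user accesses fault */
              write_ttbr0_el1(reserved_ttbr0_phys);
      }

      static void uaccess_enable(void)
      {
              /* IRQs masked so the thread_info.ttbr0 read and the
               * TTBR0_EL1 write happen atomically */
              unsigned long flags = irq_save();

              write_ttbr0_el1(current_ti->ttbr0);
              irq_restore(flags);
      }
      ```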
  9. 12 Nov, 2016 (1 commit)
    • arm64: split thread_info from task stack · c02433dd
      Committed by Mark Rutland
      This patch moves arm64's struct thread_info from the task stack into
      task_struct. This protects thread_info from corruption in the case of
      stack overflows, and makes its address harder to determine if stack
      addresses are leaked, making a number of attacks more difficult. Precise
      detection and handling of overflow is left for subsequent patches.
      
      Largely, this involves changing code to store the task_struct in sp_el0,
      and acquire the thread_info from the task struct. Core code now
      implements current_thread_info(), and as noted in <linux/sched.h> this
      relies on offsetof(task_struct, thread_info) == 0, enforced by core
      code.
      
      This change means that the 'tsk' register used in entry.S now points to
      a task_struct, rather than a thread_info as it used to. To make this
      clear, the TI_* field offsets are renamed to TSK_TI_*, with asm-offsets
      appropriately updated to account for the structural change.
      
      Userspace clobbers sp_el0, and we can no longer restore this from the
      stack. Instead, the current task is cached in a per-cpu variable that we
      can safely access from early assembly as interrupts are disabled (and we
      are thus not preemptible).
      
      Both secondary entry and idle are updated to stash the sp and task
      pointer separately.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <labbott@redhat.com>
      Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
      Cc: Andy Lutomirski <luto@kernel.org>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      c02433dd
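      A compilable sketch of the layout invariant the commit relies on
      (simplified structures, not the kernel's):

      ```c
      #include <stddef.h>

      struct thread_info { unsigned long flags; };

      struct task_struct {
              struct thread_info thread_info;  /* must stay the first member */
              /* ... */
      };

      _Static_assert(offsetof(struct task_struct, thread_info) == 0,
                     "core code relies on thread_info sitting at offset 0");

      /* current_thread_info() then reduces to a cast of the task pointer
       * (current_task stands in for the sp_el0/per-cpu lookup). */
      static struct task_struct *current_task;

      static inline struct thread_info *current_thread_info(void)
      {
              return (struct thread_info *)current_task;
      }
      ```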
  10. 17 Oct, 2016 (1 commit)
  11. 02 Sep, 2016 (6 commits)
  12. 26 Aug, 2016 (1 commit)
    • arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap · b6113038
      Committed by James Morse
      Resume from hibernate needs to clean any text executed by the kernel with
      the MMU off to the PoC. Collect these functions together into the
      .idmap.text section as all this code is tightly coupled and also needs
      the same cleaning after resume.
      
      Data is more complicated: secondary_holding_pen_release is written with
      the MMU on, cleaned and invalidated, then read with the MMU off. In
      contrast, __boot_cpu_mode is written with the MMU off, and the
      corresponding cache line is invalidated, so when we read it with the
      MMU on we don't get stale data. These cache maintenance operations
      conflict with each other if the values are within a Cache Writeback
      Granule (CWG) of each other.
      Collect the data into two sections, .mmuoff.data.read and
      .mmuoff.data.write; the linker script ensures the .mmuoff.data.write
      section is aligned to the architectural maximum CWG of 2KB.
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b6113038
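      A sketch of how variables might be tagged for the two sections (the
      macro names simply mirror the section names; treat them as
      illustrative rather than the kernel's definitions):

      ```c
      #define __mmuoff_data_read \
              __attribute__((section(".mmuoff.data.read")))
      #define __mmuoff_data_write \
              __attribute__((section(".mmuoff.data.write")))

      /* written with the MMU on, read by CPUs running with the MMU off */
      __mmuoff_data_read volatile long secondary_holding_pen_release = -1;

      /* written with the MMU off, read later with the MMU on */
      __mmuoff_data_write long boot_cpu_mode;
      ```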
  13. 25 Aug, 2016 (1 commit)
  14. 22 Aug, 2016 (1 commit)
  15. 29 Jul, 2016 (1 commit)
    • arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux · 08cc55b2
      Committed by Ard Biesheuvel
      The linker routines that we rely on to produce a relocatable PIE binary
      treat it as a shared ELF object in some ways, i.e., it emits symbol based
      R_AARCH64_ABS64 relocations into the final binary since doing so would be
      appropriate when linking a shared library that is subject to symbol
      preemption. (This means that an executable can override certain symbols
      that are exported by a shared library it is linked with, and that the
      shared library *must* update all its internal references as well, and point
      them to the version provided by the executable.)
      
      Symbol preemption does not occur for OS hosted PIE executables, let alone
      for vmlinux, and so we would prefer to get rid of these symbol based
      relocations. This would allow us to simplify the relocation routines, and
      to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
      that these are tiny, and are placed in the .init segment, but they clutter
      up the vmlinux binary.)
      
      Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
      references to symbols defined in the linker script, all other relocatable
      quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
      simply list the offsets to all 64-bit values in the binary that need to be
      fixed up based on the offset between the link time and run time addresses.
      
      Fortunately, GNU ld has a -Bsymbolic option, which is intended for
      shared libraries, to allow them to ignore symbol preemption and
      unconditionally bind all internal symbol references to their own
      definitions. So set it for our PIE binary as well, and get rid of the
      associated sections and the relocation code that processes them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: fixed conflict with __dynsym_offset linker script entry]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      08cc55b2
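      The two relocation kinds can be seen in a short C sketch (built as a
      PIE; _end comes from the linker script, object/object_ptr are
      illustrative names):

      ```c
      /* A reference to a linker-script symbol is emitted as a symbol-based
       * R_AARCH64_ABS64 entry unless the linker binds it at link time. */
      extern char _end[];
      unsigned long end_addr = (unsigned long)_end;

      /* A pointer to a local definition only needs an anonymous
       * R_AARCH64_RELATIVE fixup: "add the load offset to this slot". */
      static int object;
      int *object_ptr = &object;
      ```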
  16. 28 Apr, 2016 (2 commits)
  17. 26 Apr, 2016 (6 commits)
  18. 22 Apr, 2016 (1 commit)
  19. 18 Apr, 2016 (1 commit)
  20. 15 Apr, 2016 (1 commit)
    • arm64: move early boot code to the .init segment · 546c8c44
      Committed by Ard Biesheuvel
      Apart from the arm64/linux and EFI header data structures, there is nothing
      in the .head.text section that must reside at the beginning of the Image.
      So let's move it to the .init section where it belongs.
      
      Note that this involves some minor tweaking of the EFI header, primarily
      because the address of 'stext' no longer coincides with the start of the
      .text section. It also requires a couple of relocated symbol references
      to be slightly rewritten or their definition moved to the linker script.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      546c8c44
  21. 25 Mar, 2016 (1 commit)
  22. 21 Mar, 2016 (1 commit)
    • arm64: fix KASLR boot-time I-cache maintenance · b90b4a60
      Committed by Mark Rutland
      Commit f80fb3a3 ("arm64: add support for kernel ASLR") missed a
      DSB necessary to complete I-cache maintenance in the primary boot path,
      and hence stale instructions may still be present in the I-cache and may
      be executed until the I-cache maintenance naturally completes.
      
      Since commit 8ec41987 ("arm64: mm: ensure patched kernel text is
      fetched from PoU"), all CPUs invalidate their I-caches after their MMU
      is enabled. Prior to a CPU's MMU having been enabled, arbitrary lines may
      have been fetched from the PoC into I-caches. We never patch text
      expected to be executed with the MMU off. Thus, it is unnecessary to
      perform broadcast I-cache maintenance in the primary boot path.
      
      This patch reduces the scope of the I-cache maintenance to the local
      CPU, and adds the missing DSB with similar scope, matching prior
      maintenance in the primary boot path.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      b90b4a60
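      A sketch of the local-scope sequence described above, as a GNU C
      helper (a sketch of the instruction pattern, not the patch itself):

      ```c
      /* ic iallu: invalidate all I-cache to PoU, local CPU only
       * dsb nsh : complete the maintenance with non-shareable (local) scope
       * isb     : resynchronise the instruction stream */
      static inline void local_icache_inval_all(void)
      {
              __asm__ volatile("ic iallu\n\tdsb nsh\n\tisb" ::: "memory");
      }
      ```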
  23. 01 Mar, 2016 (1 commit)
  24. 25 Feb, 2016 (1 commit)
    • arm64: Handle early CPU boot failures · bb905274
      Committed by Suzuki K Poulose
      A secondary CPU could fail to come online due to insufficient
      capabilities and could simply die or loop in the kernel. For example,
      a CPU with no support for the selected kernel PAGE_SIZE loops in the
      kernel with the MMU turned off, and a hotplugged CPU which doesn't
      have one of the advertised system capabilities will die during
      activation.
      
      There is no way to synchronise the status of the failing CPU back to
      the master. This patch solves the issue by adding a field to the
      secondary_data which can be updated by the failing CPU. If the
      secondary CPU fails even before turning the MMU on, it updates the
      status in a special variable reserved in the .head.text section, to
      make sure that the update can be cache-invalidated safely without
      possible sharing of the cache writeback granule.
      
      Here are the possible states:
      
       -1. CPU_MMU_OFF - Initial value set by the master CPU; it indicates
      that the CPU could not turn the MMU on, hence the status could not be
      reliably updated in the secondary_data. Instead, the CPU has updated
      the status @ __early_cpu_boot_status.
      
       0. CPU_BOOT_SUCCESS - CPU has booted successfully.
      
       1. CPU_KILL_ME - CPU has invoked cpu_ops->die, signalling the
      master CPU to synchronise by issuing cpu_ops->cpu_kill.
      
       2. CPU_STUCK_IN_KERNEL - CPU couldn't invoke die(); instead it is
      looping in the kernel. This information could be used by, say,
      kexec to check whether it is really safe to do a kexec reboot.
      
       3. CPU_PANIC_KERNEL - CPU detected some serious issues which
      require the kernel to crash immediately. The secondary CPU cannot
      call panic() until it has initialised the GIC. This flag can be
      used to instruct the master to do so.
      
      Cc: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
      [catalin.marinas@arm.com: conflict resolution]
      [catalin.marinas@arm.com: converted "status" from int to long]
      [catalin.marinas@arm.com: updated update_early_cpu_boot_status to use str_l]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bb905274
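      The states translate directly into constants; a sketch using the
      values listed above:

      ```c
      /* Boot status values a failing secondary CPU reports back to the
       * master, as enumerated in the commit message above. */
      enum cpu_boot_status {
              CPU_MMU_OFF         = -1, /* set by master; the CPU never
                                           turned the MMU on, see
                                           __early_cpu_boot_status */
              CPU_BOOT_SUCCESS    = 0,  /* booted successfully */
              CPU_KILL_ME         = 1,  /* invoked cpu_ops->die; master
                                           should issue cpu_ops->cpu_kill */
              CPU_STUCK_IN_KERNEL = 2,  /* looping in the kernel; a kexec
                                           reboot would be unsafe */
              CPU_PANIC_KERNEL    = 3,  /* master should panic() on the
                                           CPU's behalf */
      };
      ```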
  25. 24 Feb, 2016 (4 commits)
    • arm64: add support for kernel ASLR · f80fb3a3
      Committed by Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      f80fb3a3
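      As a rough model of the fallback module-region choice, a toy
      calculation (hypothetical helper, 4 KiB pages assumed; not the
      kernel's code):

      ```c
      #include <stdint.h>

      #define SZ_128M   (128ULL << 20)
      #define PAGE_SIZE 4096ULL   /* assuming 4 KiB pages */

      /* Toy model: choose a page-aligned 128 MB module region inside
       * [_etext - 128 MB, _stext + 128 MB), as described above. */
      static uint64_t pick_module_base(uint64_t seed, uint64_t stext,
                                       uint64_t etext)
      {
              uint64_t lowest = etext - SZ_128M;              /* earliest base */
              uint64_t pages  = (stext - lowest) / PAGE_SIZE; /* candidate bases */

              return lowest + (seed % pages) * PAGE_SIZE;
      }
      ```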
    • arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
      Committed by Ard Biesheuvel
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      1e48ef7f
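      A sketch of what such a runtime pass does, assuming standard ELF64
      RELA entries (simplified; the kernel's actual loop lives in early
      assembly):

      ```c
      #include <stddef.h>
      #include <stdint.h>

      #define R_AARCH64_RELATIVE 1027   /* ELF relocation type number */

      typedef struct {
              uint64_t r_offset;  /* link-time address of the 64-bit slot */
              uint64_t r_info;    /* relocation type in the low 32 bits */
              int64_t  r_addend;  /* link-time value stored in the slot */
      } elf64_rela;

      /* Patch every RELATIVE slot by the displacement between the
       * link-time and run-time addresses. */
      static void apply_relocations(elf64_rela *rela, size_t count,
                                    uint64_t offset)
      {
              for (size_t i = 0; i < count; i++) {
                      if ((uint32_t)rela[i].r_info != R_AARCH64_RELATIVE)
                              continue;
                      *(uint64_t *)(rela[i].r_offset + offset) =
                              (uint64_t)rela[i].r_addend + offset;
              }
      }
      ```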
    • arm64: avoid dynamic relocations in early boot code · 2bf31a4a
      Committed by Ard Biesheuvel
      Before implementing KASLR for arm64 by building a self-relocating PIE
      executable, we have to ensure that values we use before the relocation
      routine is executed are not subject to dynamic relocation themselves.
      This applies not only to virtual addresses, but also to values that are
      supplied by the linker at build time and relocated using R_AARCH64_ABS64
      relocations.
      
      So instead, use assemble-time constants, or force the use of static
      relocations by folding the constants into the instructions.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2bf31a4a
    • arm64: avoid R_AARCH64_ABS64 relocations for Image header fields · 6ad1fe5d
      Committed by Ard Biesheuvel
      Unfortunately, the current way of using the linker to emit build-time
      constants into the Image header will no longer work once we switch to
      the use of PIE executables. The reason is that such constants are emitted
      into the binary using R_AARCH64_ABS64 relocations, which are resolved at
      runtime, not at build time, and the places targeted by those relocations
      will contain zeroes before that.
      
      So refactor the endian swapping linker script constant generation code so
      that it emits the upper and lower 32-bit words separately.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      6ad1fe5d
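      A sketch of the idea: express a 64-bit header field as two 32-bit
      halves so only assemble-time constants are emitted (the constant name
      and value here are hypothetical):

      ```c
      #include <stdint.h>

      /* hypothetical 64-bit image-header constant */
      #define IMAGE_FLAGS 0x000000000000000aULL

      /* Each half is a plain 32-bit assemble-time constant, so no
       * R_AARCH64_ABS64 relocation is needed to emit it; a big-endian
       * build would additionally byte-swap each word. */
      static const uint32_t image_flags_lo32 = (uint32_t)(IMAGE_FLAGS & 0xffffffff);
      static const uint32_t image_flags_hi32 = (uint32_t)(IMAGE_FLAGS >> 32);
      ```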