1. 04 March 2016, 1 commit
  • arm64: make mrs_s prefixing implicit in read_cpuid · 1cc6ed90
    By Mark Rutland
      Commit 0f54b14e ("arm64: cpufeature: Change read_cpuid() to use
      sysreg's mrs_s macro") changed read_cpuid to require a SYS_ prefix on
      register names, to allow manual assembly of registers unknown by the
      toolchain, using tables in sysreg.h.
      
      This interacts poorly with commit 42b55734 ("efi/arm64: Check
      for h/w support before booting a >4 KB granular kernel"), which is
      currently queued via the tip tree, and uses read_cpuid without a SYS_
      prefix. Due to this, a build of next-20160304 fails if EFI and 64K pages
      are selected.
      
      To avoid this issue when trees are merged, move the required SYS_
      prefixing into read_cpuid, and revert all of the updated callsites to
      pass plain register names. This effectively reverts the bulk of commit
      0f54b14e.
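      For illustration, the resulting macro looks roughly like this (a sketch,
      assuming the __stringify() helper from linux/stringify.h and the mrs_s
      assembler macro from sysreg.h; the exact kernel code may differ):

        /*
         * read_cpuid() pastes the SYS_ prefix onto the register name
         * itself, so callers pass a plain name such as ID_AA64MMFR0_EL1
         * while mrs_s still sees the manually encoded SYS_* definition.
         */
        #define read_cpuid(reg) ({                                      \
                u64 __val;                                              \
                asm("mrs_s %0, " __stringify(SYS_ ## reg)               \
                    : "=r" (__val));                                    \
                __val;                                                  \
        })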
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2. 02 March 2016, 1 commit
  • arm64: Rework valid_user_regs · dbd4d7ca
    By Mark Rutland
      We validate pstate using PSR_MODE32_BIT, which is part of the
      user-provided pstate (and cannot be trusted). Also, we conflate
      validation of AArch32 and AArch64 pstate values, making the code
      difficult to reason about.
      
      Instead, validate the pstate value based on the associated task. The
      task may or may not be current (e.g. when using ptrace), so this must be
      passed explicitly by callers. To avoid circular header dependencies via
      sched.h, is_compat_task is pulled out of asm/ptrace.h.
      
      To make the code possible to reason about, the AArch64 and AArch32
      validation is split into separate functions. Software must respect the
      RES0 policy for SPSR bits, and thus the kernel mirrors the hardware
      policy (RAZ/WI) for bits that are as yet unallocated. When these acquire
      an architected meaning, writes may be permitted (potentially with
      additional validation).
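      A rough sketch of the resulting shape (the helper names beyond
      valid_user_regs are illustrative):

        /* Dispatch on the task's ABI rather than on the untrusted,
         * user-supplied PSR_MODE32_BIT; each ABI gets its own checker. */
        static int valid_compat_regs(struct user_pt_regs *regs);  /* AArch32 */
        static int valid_native_regs(struct user_pt_regs *regs);  /* AArch64 */

        int valid_user_regs(struct user_pt_regs *regs,
                            struct task_struct *task)
        {
                /* task need not be current, e.g. under ptrace */
                if (is_compat_thread(task_thread_info(task)))
                        return valid_compat_regs(regs);

                return valid_native_regs(regs);
        }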
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Cc: Dave Martin <dave.martin@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3. 01 March 2016, 2 commits
4. 27 February 2016, 1 commit
  • arm64: lse: deal with clobbered IP registers after branch via PLT · 5be8b70a
    By Ard Biesheuvel
      The LSE atomics implementation uses runtime patching to patch in calls
      to out of line non-LSE atomics implementations on cores that lack hardware
      support for LSE. To avoid paying the overhead cost of a function call even
      if no call ends up being made, the bl instruction is kept invisible to the
      compiler, and the out of line implementations preserve all registers, not
      just the ones that they are required to preserve as per the AAPCS64.
      
      However, commit fd045f6c ("arm64: add support for module PLTs") added
      support for routing branch instructions via veneers if the branch target
      offset exceeds the range of the ordinary relative branch instructions.
      Since this deals with jump and call instructions that are exposed to ELF
      relocations, the PLT code uses x16 to hold the address of the branch target
      when it performs an indirect branch-to-register, something which is
      explicitly allowed by the AAPCS64 (and ordinary compiler generated code
      does not expect register x16 or x17 to retain their values across a bl
      instruction).
      
      Since the LSE runtime-patched bl instructions don't adhere to the AAPCS64,
      they don't deal with this clobbering of registers x16 and x17. So add them
      to the clobber list of the asm() statements that perform the call
      instructions, and drop x16 and x17 from the list of registers that are
      callee saved in the out of line non-LSE implementations.
      
      In addition, since we have given these functions two scratch registers,
      they no longer need to stack/unstack temp registers.
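      A simplified sketch of the change (the real code drives the patching
      through the ARM64_LSE_ATOMIC_INSN alternatives macro, omitted here; the
      symbol names mirror the kernel's but are shown schematically):

        #define __LL_SC_CLOBBERS   "x16", "x17", "x30"

        static inline void atomic_add(int i, atomic_t *v)
        {
                register int w0 asm ("w0") = i;
                register atomic_t *x1 asm ("x1") = v;

                asm volatile(
                /* patched at runtime to a single LSE 'stadd' if available */
                "       bl      __ll_sc_atomic_add\n"
                : "+r" (w0), "+Q" (v->counter)
                : "r" (x1)
                /* x16/x17 for a possible PLT veneer, x30 for bl itself */
                : __LL_SC_CLOBBERS);
        }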
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: factored clobber list into #define, updated Makefile comment]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
5. 26 February 2016, 4 commits
6. 25 February 2016, 7 commits
7. 24 February 2016, 9 commits
  • arm64: add support for kernel ASLR · f80fb3a3
    By Ard Biesheuvel
      This adds support for KASLR, based on entropy provided by
      the bootloader in the /chosen/kaslr-seed DT property. Depending on the size
      of the address space (VA_BITS) and the page size, the entropy in the
      virtual displacement is up to 13 bits (16k/2 levels) and up to 25 bits (all
      4 levels), with the sidenote that displacements that result in the kernel
      image straddling a 1GB/32MB/512MB alignment boundary (for 4KB/16KB/64KB
      granule kernels, respectively) are not allowed, and will be rounded up to
      an acceptable value.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is enabled, the module region is
      randomized independently from the core kernel. This makes it less likely
      that the location of core kernel data structures can be determined by an
      adversary, but causes all function calls from modules into the core kernel
      to be resolved via entries in the module PLTs.
      
      If CONFIG_RANDOMIZE_MODULE_REGION_FULL is not enabled, the module region is
      randomized by choosing a page aligned 128 MB region inside the interval
      [_etext - 128 MB, _stext + 128 MB). This gives between 10 and 14 bits of
      entropy (depending on page size), independently of the kernel randomization,
      but still guarantees that modules are within the range of relative branch
      and jump instructions (with the caveat that, since the module region is
      shared with other uses of the vmalloc area, modules may need to be loaded
      further away if the module region is exhausted).
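      A simplified sketch of how such a seed can be fetched from the device
      tree, using standard libfdt calls (the kernel's actual routine differs
      in detail, e.g. it also wipes the property after reading it):

        #include <libfdt.h>     /* linux/libfdt.h in-tree */

        static u64 get_kaslr_seed(void *fdt)
        {
                const fdt64_t *prop;
                int node, len;

                node = fdt_path_offset(fdt, "/chosen");
                if (node < 0)
                        return 0;

                prop = fdt_getprop(fdt, node, "kaslr-seed", &len);
                if (!prop || len != sizeof(*prop))
                        return 0;

                return fdt64_to_cpu(*prop);
        }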
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: add support for building vmlinux as a relocatable PIE binary · 1e48ef7f
    By Ard Biesheuvel
      This implements CONFIG_RELOCATABLE, which links the final vmlinux
      image with a dynamic relocation section, allowing the early boot code
      to perform a relocation to a different virtual address at runtime.
      
      This is a prerequisite for KASLR (CONFIG_RANDOMIZE_BASE).
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: switch to relative exception tables · 6c94f27a
    By Ard Biesheuvel
      Instead of using absolute addresses for both the exception location
      and the fixup, use offsets relative to the exception table entry values.
      Not only does this cut the size of the exception table in half, it is
      also a prerequisite for KASLR, since absolute exception table entries
      are subject to dynamic relocation, which is incompatible with the sorting
      of the exception table that occurs at build time.
      
      This patch also introduces the _ASM_EXTABLE preprocessor macro (which
      exists on x86 as well) and its _asm_extable assembly counterpart, as
      shorthands to emit exception table entries.
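      The C-side macro amounts to the following sketch; the ".long (sym - .)"
      expressions are what make the entries self-relative and hence free of
      absolute relocations:

        /* Two 32-bit offsets from the entry itself, instead of two
         * 64-bit absolute addresses: half the size, no ABS64 relocs. */
        #define _ASM_EXTABLE(from, to)                                  \
                "       .pushsection    __ex_table, \"a\"\n"            \
                "       .align          3\n"                            \
                "       .long           (" #from " - .), (" #to " - .)\n" \
                "       .popsection\n"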
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: make asm/elf.h available to asm files · 4a2e034e
    By Ard Biesheuvel
      This reshuffles some code in asm/elf.h and puts a #ifndef __ASSEMBLY__
      around its C definitions so that the CPP defines can be used in asm
      source files as well.
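      The pattern, in sketch form (identifiers are illustrative, not the
      header's actual contents):

        /* CPP constants stay visible to .S files... */
        #define EXAMPLE_ELF_FLAG        0x1     /* illustrative */

        #ifndef __ASSEMBLY__
        /* ...while C-only material (typedefs, structs, prototypes)
         * is fenced off from the assembler. */
        typedef unsigned long elf_greg_t;
        #endif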
      Acked-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: avoid R_AARCH64_ABS64 relocations for Image header fields · 6ad1fe5d
    By Ard Biesheuvel
      Unfortunately, the current way of using the linker to emit build time
      constants into the Image header will no longer work once we switch to
      the use of PIE executables. The reason is that such constants are emitted
      into the binary using R_AARCH64_ABS64 relocations, which are resolved at
      runtime, not at build time, and the places targeted by those relocations
      will contain zeroes before that.
      
      So refactor the endian swapping linker script constant generation code so
      that it emits the upper and lower 32-bit words separately.
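      A sketch of the technique, assuming a DATA_LE32() helper that performs
      the byte swap on big-endian builds: each 64-bit header field becomes two
      32-bit linker symbols, which the linker resolves at build time without
      emitting an R_AARCH64_ABS64 relocation.

        /* Emit a 64-bit little-endian constant as two 32-bit words. */
        #define DEFINE_IMAGE_LE64(sym, data)                    \
                sym##_lo32 = DATA_LE32((data) & 0xffffffff);    \
                sym##_hi32 = DATA_LE32((data) >> 32)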
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: add support for module PLTs · fd045f6c
    By Ard Biesheuvel
      This adds support for emitting PLTs at module load time for relative
      branches that are out of range. This is a prerequisite for KASLR, which
      may place the kernel and the modules anywhere in the vmalloc area,
      making it more likely that branch target offsets exceed the maximum
      range of +/- 128 MB.
      
      In this version, I removed the distinction between relocations against
      .init executable sections and ordinary executable sections. The reason
      is that it is hardly worth the trouble, given that .init.text usually
      does not contain that many far branches, and this version now only
      reserves PLT entry space for jump and call relocations against undefined
      symbols (since symbols defined in the same module can be assumed to be
      within +/- 128 MB).
      
      For example, the mac80211.ko module (which is fairly sizable at ~400 KB)
      built with -mcmodel=large gives the following relocation counts:
      
                          relocs    branches   unique     !local
        .text              3925       3347       518        219
        .init.text           11          8         7          1
        .exit.text            4          4         4          1
        .text.unlikely       81         67        36         17
      
      ('unique' means branches to unique type/symbol/addend combos, of which
      !local is the subset referring to undefined symbols)
      
      IOW, we are only emitting a single PLT entry for the .init sections, and
      we are better off just adding it to the core PLT section instead.
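      A sketch of what each PLT entry amounts to (field names illustrative):
      a movn/movk/movk sequence materializes the full 64-bit branch target in
      x16 (IP0, which the AAPCS64 reserves for exactly this), followed by an
      indirect branch.

        struct plt_entry {
                __le32  mov0;   /* movn x16, #0x....            */
                __le32  mov1;   /* movk x16, #0x...., lsl #16   */
                __le32  mov2;   /* movk x16, #0x...., lsl #32   */
                __le32  br;     /* br   x16                     */
        };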
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: move brk immediate argument definitions to separate header · f98deee9
    By Ard Biesheuvel
      Instead of reversing the header dependency between asm/bug.h and
      asm/debug-monitors.h, split off the brk instruction immediate value
      defines into a new header asm/brk-imm.h, and include it from both.
      
      This solves the circular dependency issue that prevents BUG() from
      being used in some header files, and keeps the definitions together.
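      The resulting header is tiny; roughly (the immediate values shown are
      illustrative, not authoritative):

        /* asm/brk-imm.h: included by both asm/bug.h and
         * asm/debug-monitors.h, breaking the cycle between them. */
        #ifndef __ASM_BRK_IMM_H
        #define __ASM_BRK_IMM_H

        #define FAULT_BRK_IMM   0x100   /* illustrative */
        #define BUG_BRK_IMM     0x800   /* illustrative */

        #endif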
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: mm: use bit ops rather than arithmetic in pa/va translations · 8439e62a
    By Ard Biesheuvel
      Since PAGE_OFFSET is chosen such that it cuts the kernel VA space right
      in half, and since the size of the kernel VA space itself is always a
      power of 2, we can treat PAGE_OFFSET as a bitmask and replace the
      additions/subtractions with 'or' and 'and-not' operations.
      
      For the comparison against PAGE_OFFSET, a mov/cmp/branch sequence ends
      up getting replaced with a single tbz instruction. For the additions and
      subtractions, we save a mov instruction since the mask is folded into the
      instruction's immediate field.
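      In sketch form, the translations become (simplified; the actual macros
      carry additional casts):

        /* PAGE_OFFSET selects the upper half of the kernel VA space, so
         * the linear-map round trip reduces to single bit operations. */
        #define __virt_to_phys(x) \
                (((phys_addr_t)(x) & ~PAGE_OFFSET) + PHYS_OFFSET)
        #define __phys_to_virt(x) \
                ((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)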
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
  • arm64: mm: only perform memstart_addr sanity check if DEBUG_VM · a92405f0
    By Ard Biesheuvel
      Checking whether memstart_addr has been assigned every time it is
      referenced adds a branch instruction that may hurt performance if
      the reference in question occurs on a hot path. So only perform the
      check if CONFIG_DEBUG_VM=y.
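      Per the bracketed note below, the check landed as a VM_BUG_ON, which
      compiles away entirely unless CONFIG_DEBUG_VM=y; roughly:

        /* Sketch: memstart_addr starts life as an odd sentinel (-1), so
         * an odd value means "referenced before being assigned". */
        #define PHYS_OFFSET \
                ({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })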
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [catalin.marinas@arm.com: replaced #ifdef with VM_BUG_ON]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
8. 19 February 2016, 11 commits
9. 18 February 2016, 2 commits
10. 17 February 2016, 1 commit
11. 16 February 2016, 1 commit
  • arm64: prefetch: add missing #include for spin_lock_prefetch · afb83cc3
    By Will Deacon
      As of 52e662326e1e ("arm64: prefetch: don't provide spin_lock_prefetch
      with LSE"), spin_lock_prefetch is patched at runtime when the LSE atomics
      are in use. This relies on the ARM64_LSE_ATOMIC_INSN macro to drive
      the alternatives framework, but that macro is only available via
      asm/lse.h, which isn't explicitly included in processor.h. Consequently,
      drivers can run into build failures such as:
      
         In file included from include/linux/prefetch.h:14:0,
                          from drivers/net/ethernet/intel/i40e/i40e_txrx.c:27:
         arch/arm64/include/asm/processor.h: In function 'spin_lock_prefetch':
         arch/arm64/include/asm/processor.h:183:15: error: expected string literal before 'ARM64_LSE_ATOMIC_INSN'
           asm volatile(ARM64_LSE_ATOMIC_INSN(
      
      This patch adds the missing include and gets things building again.
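      The fix is essentially a one-line include in asm/processor.h:

        #include <asm/lse.h>    /* provides ARM64_LSE_ATOMIC_INSN */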
      Reported-by: kbuild test robot <fengguang.wu@intel.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>