1. 22 Nov 2016 (1 commit)
    • arm64: Introduce uaccess_{disable,enable} functionality based on TTBR0_EL1 · 4b65a5db
      Committed by Catalin Marinas
      This patch adds the uaccess macros/functions to disable access to user
      space by setting TTBR0_EL1 to a reserved zeroed page. Since the value
      written to TTBR0_EL1 must be a physical address, for simplicity this
      patch introduces a reserved_ttbr0 page at a constant offset from
      swapper_pg_dir. The uaccess_disable code uses the ttbr1_el1 value
      adjusted by the reserved_ttbr0 offset.
      
      Enabling access to user space is done by restoring TTBR0_EL1 with the value
      from the struct thread_info ttbr0 variable. Interrupts must be disabled
      during the uaccess_ttbr0_enable code to ensure the atomicity of the
      thread_info.ttbr0 read and TTBR0_EL1 write. This patch also moves the
      get_thread_info asm macro from entry.S to assembler.h for reuse in the
      uaccess_ttbr0_* macros.
      
      Cc: Will Deacon <will.deacon@arm.com>
      Cc: James Morse <james.morse@arm.com>
      Cc: Kees Cook <keescook@chromium.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      4b65a5db
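
      A minimal C sketch of the switching described above, assuming the usual
      arm64 kernel helpers (read_sysreg, write_sysreg, isb, local_irq_save);
      the real code is implemented as assembly macros, and RESERVED_TTBR0_OFFSET
      is an illustrative name for the fixed offset of reserved_ttbr0 from
      swapper_pg_dir:

      /* assumes <asm/sysreg.h>, <asm/thread_info.h>, <linux/irqflags.h> */

      static inline void uaccess_ttbr0_disable_sketch(void)
      {
              unsigned long ttbr;

              /* reserved_ttbr0 sits at a fixed offset from swapper_pg_dir,
               * which TTBR1_EL1 points at, so its physical address can be
               * derived from the current TTBR1_EL1 value. */
              ttbr = read_sysreg(ttbr1_el1) + RESERVED_TTBR0_OFFSET;
              write_sysreg(ttbr, ttbr0_el1);
              isb();
      }

      static inline void uaccess_ttbr0_enable_sketch(void)
      {
              unsigned long flags;

              /* IRQs must stay off so the thread_info.ttbr0 read and the
               * TTBR0_EL1 write appear as one atomic step. */
              local_irq_save(flags);
              write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
              isb();
              local_irq_restore(flags);
      }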
  2. 08 Oct 2016 (1 commit)
  3. 26 Aug 2016 (1 commit)
    • arm64: vmlinux.ld: Add mmuoff data sections and move mmuoff text into idmap · b6113038
      Committed by James Morse
      Resume from hibernate needs to clean any text executed by the kernel with
      the MMU off to the PoC. Collect these functions together into the
      .idmap.text section as all this code is tightly coupled and also needs
      the same cleaning after resume.
      
      Data is more complicated: secondary_holding_pen_release is written with
      the MMU on, cleaned and invalidated, then read with the MMU off. In
      contrast, __boot_cpu_mode is written with the MMU off and the corresponding
      cache line is invalidated, so when we read it with the MMU on we don't get
      stale data. These cache maintenance operations conflict with each other if
      the values are within a Cache Writeback Granule (CWG) of each other.
      Collect the data into two sections, .mmuoff.data.read and .mmuoff.data.write;
      the linker script ensures the .mmuoff.data.write section is aligned to the
      architectural maximum CWG of 2 KB.
      Signed-off-by: James Morse <james.morse@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      b6113038
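
      A hedged illustration of the section split described above; the macro and
      variable names are made up, and the 2 KB CWG alignment itself comes from
      the linker script rather than from anything visible in C:

      /* Data written while the MMU is off, read later with the MMU on. */
      #define __mmuoff_data_write __attribute__((__section__(".mmuoff.data.write")))
      /* Data written with the MMU on, then read while the MMU is off. */
      #define __mmuoff_data_read  __attribute__((__section__(".mmuoff.data.read")))

      __mmuoff_data_write unsigned long example_boot_flag;
      __mmuoff_data_read  unsigned long example_release_addr;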
  4. 29 Jul 2016 (2 commits)
    • arm64: relocatable: suppress R_AARCH64_ABS64 relocations in vmlinux · 08cc55b2
      Committed by Ard Biesheuvel
      The linker routines that we rely on to produce a relocatable PIE binary
      treat it as a shared ELF object in some ways, i.e., they emit symbol-based
      R_AARCH64_ABS64 relocations into the final binary, since doing so would be
      appropriate when linking a shared library that is subject to symbol
      preemption. (This means that an executable can override certain symbols
      that are exported by a shared library it is linked with, and that the
      shared library *must* update all its internal references as well, and point
      them to the version provided by the executable.)
      
      Symbol preemption does not occur for OS hosted PIE executables, let alone
      for vmlinux, and so we would prefer to get rid of these symbol based
      relocations. This would allow us to simplify the relocation routines, and
      to strip the .dynsym, .dynstr and .hash sections from the binary. (Note
      that these are tiny, and are placed in the .init segment, but they clutter
      up the vmlinux binary.)
      
      Note that these R_AARCH64_ABS64 relocations are only emitted for absolute
      references to symbols defined in the linker script, all other relocatable
      quantities are covered by anonymous R_AARCH64_RELATIVE relocations that
      simply list the offsets to all 64-bit values in the binary that need to be
      fixed up based on the offset between the link time and run time addresses.
      
      Fortunately, GNU ld has a -Bsymbolic option, which is intended for shared
      libraries to allow them to ignore symbol preemption and unconditionally
      bind all internal symbol references to their own definitions. So set it for
      our PIE binary as well, and get rid of the associated sections and the
      relocation code that processes them.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      [will: fixed conflict with __dynsym_offset linker script entry]
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      08cc55b2
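
      What remains after -Bsymbolic are the anonymous R_AARCH64_RELATIVE
      entries. A user-space style C sketch of what applying them amounts to
      (the kernel does this in assembly during early boot; va_offset stands
      for the difference between run-time and link-time addresses):

      #include <elf.h>
      #include <stddef.h>
      #include <stdint.h>

      static void apply_relative_relocs(Elf64_Rela *rela, size_t nrela,
                                        uint64_t va_offset)
      {
              for (size_t i = 0; i < nrela; i++) {
                      if (ELF64_R_TYPE(rela[i].r_info) != R_AARCH64_RELATIVE)
                              continue;
                      /* r_offset names a 64-bit slot at its link-time address;
                       * store the link-time value (the addend) shifted by the
                       * load offset. */
                      *(uint64_t *)(rela[i].r_offset + va_offset) =
                              rela[i].r_addend + va_offset;
              }
      }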
    • arm64: vmlinux.lds: make __rela_offset and __dynsym_offset ABSOLUTE · d6732fc4
      Committed by Ard Biesheuvel
      Due to the untyped KIMAGE_VADDR constant, the linker may not notice
      that the __rela_offset and __dynsym_offset expressions are absolute
      values (i.e., are not subject to relocation). This does not matter for
      KASLR, but it does confuse kallsyms in relative mode, since it uses
      the lowest non-absolute symbol address as the anchor point, and expects
      all other symbol addresses to be within 4 GB of it.
      
      Fix this by qualifying these expressions as ABSOLUTE() explicitly.
      
      Fixes: 0cd3defe ("arm64: kernel: perform relocation processing from ID map")
      Cc: <stable@vger.kernel.org>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      d6732fc4
  5. 19 Jul 2016 (2 commits)
    • arm64: Treat all entry code as non-kprobe-able · 888b3c87
      Committed by Pratyush Anand
      Entry symbols are not kprobe-safe, so blacklist them for kprobing.
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      [catalin.marinas@arm.com: Do not include syscall wrappers in .entry.text]
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      888b3c87
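
      A sketch of the kind of check the new .entry.text section enables,
      assuming the __entry_text_start/__entry_text_end linker symbols it
      provides; the function name is illustrative and bool is taken from the
      kernel's types:

      extern char __entry_text_start[], __entry_text_end[];

      /* Kprobes asks the architecture whether an address is off limits;
       * anything inside the entry text range is. */
      static bool within_entry_text(unsigned long addr)
      {
              return addr >= (unsigned long)__entry_text_start &&
                     addr <  (unsigned long)__entry_text_end;
      }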
    • arm64: Kprobes with single stepping support · 2dd0e8d2
      Committed by Sandeepa Prabhu
      Add support for basic kernel probes (kprobes) and jump probes
      (jprobes) for ARM64.
      
      Kprobes utilizes the software breakpoint and single-step debug
      exceptions supported on ARM v8.
      
      A software breakpoint is placed at the probe address to trap the
      kernel execution into the kprobe handler.
      
      ARM v8 supports enabling single stepping before the break exception
      return (ERET), with the next PC in the exception return address (ELR_EL1). The
      kprobe handler prepares an executable memory slot for out-of-line
      execution with a copy of the original instruction being probed, and
      enables single stepping. The PC is set to the out-of-line slot address
      before the ERET. With this scheme, the instruction is executed with the
      exact same register context except for the PC (and DAIF) registers.
      
      The debug mask (PSTATE.D) is enabled only when single stepping a recursive
      kprobe, e.g. during kprobe re-entry, so that the probed instruction can be
      single stepped within the kprobe handler's exception context.
      The recursion depth of kprobes is always 2: upon probe re-entry, any
      further re-entry is prevented by not calling the handlers, and the case is
      counted as a missed kprobe.
      
      Single stepping from the out-of-line (XOL) slot has a drawback for
      PC-relative accesses such as branches and literal loads, since the offset
      from the new PC (the slot address) is not guaranteed to fit in the
      immediate field of the opcode. Such instructions need simulation, so
      probing them is rejected.
      
      Instructions that generate exceptions or change the CPU mode are
      rejected for probing.
      
      Exclusive load/store instructions are rejected too.  Additionally, the
      code is checked to see if it is inside an exclusive load/store sequence
      (code from Pratyush).
      
      System instructions are mostly enabled for stepping, except MSR/MRS
      accesses to "DAIF" flags in PSTATE, which are not safe for
      probing.
      
      This also changes arch/arm64/include/asm/ptrace.h to use
      include/asm-generic/ptrace.h.
      
      Thanks to Steve Capper and Pratyush Anand for several suggested
      changes.
      Signed-off-by: Sandeepa Prabhu <sandeepa.s.prabhu@gmail.com>
      Signed-off-by: David A. Long <dave.long@linaro.org>
      Signed-off-by: Pratyush Anand <panand@redhat.com>
      Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      2dd0e8d2
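
      A compact pseudocode sketch of that out-of-line single-step flow,
      compilable as plain C; every name here is hypothetical and the
      single-step enable/disable steps are only indicated in comments:

      #include <stdint.h>

      struct probe_ctx {
              uint64_t pc;            /* ELR_EL1 captured at the BRK trap */
              uint32_t orig_insn;     /* instruction the breakpoint replaced */
              uint32_t *xol_slot;     /* executable out-of-line slot */
      };

      static void on_breakpoint(struct probe_ctx *ctx)
      {
              /* Copy the probed instruction into the slot... */
              ctx->xol_slot[0] = ctx->orig_insn;
              /* ...arrange for exactly one instruction to run after ERET
               * (single-step enable, elided here)... */
              /* ...and point the exception return address at the slot, so the
               * copy runs with the original register context; only the PC
               * (and DAIF) differ. */
              ctx->pc = (uint64_t)(uintptr_t)ctx->xol_slot;
      }

      static void on_single_step(struct probe_ctx *ctx, uint64_t probed_pc)
      {
              /* After the step, resume after the original instruction, not
               * after the slot; disable single-step and run the post-handler
               * (elided). */
              ctx->pc = probed_pc + 4;
      }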
  6. 28 Jun 2016 (1 commit)
  7. 28 Apr 2016 (1 commit)
    • arm64: kernel: Add support for hibernate/suspend-to-disk · 82869ac5
      Committed by James Morse
      Add support for hibernate/suspend-to-disk.
      
      Suspend borrows code from cpu_suspend() to write cpu state onto the stack,
      before calling swsusp_save() to save the memory image.
      
      Restore creates a set of temporary page tables, covering only the
      linear map, copies the restore code to a 'safe' page, then uses the copy to
      restore the memory image. The copied code executes in the lower half of the
      address space, and once complete, restores the original kernel's page
      tables. It then calls into cpu_resume(), and follows the normal
      cpu_suspend() path back into the suspend code.
      
      To restore a kernel using KASLR, the addresses of the page tables and
      cpu_resume() are stored in the hibernate arch header, and the EL2
      vectors are pivoted via the 'safe' page in low memory.
      Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
      Tested-by: Kevin Hilman <khilman@baylibre.com> # Tested on Juno R2
      Signed-off-by: James Morse <james.morse@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      82869ac5
  8. 26 Apr 2016 (1 commit)
    • arm64: kernel: perform relocation processing from ID map · 0cd3defe
      Committed by Ard Biesheuvel
      Refactor the relocation processing so that the code executes from the
      ID map while accessing the relocation tables via the virtual mapping.
      This way, we can use literals containing virtual addresses as before,
      instead of having to use convoluted absolute expressions.
      
      For symmetry with the secondary code path, the relocation code and the
      subsequent jump to the virtual entry point are implemented in a function
      called __primary_switch(), and __mmap_switched() is renamed to
      __primary_switched(). Also, the call sequence in stext() is aligned with
      the one in secondary_startup(), by replacing the awkward 'adr_l lr' and
      'b cpu_setup' sequence with a simple branch and link.
      Acked-by: Catalin Marinas <catalin.marinas@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      0cd3defe
  9. 15 Apr 2016 (2 commits)
    • arm64: simplify kernel segment mapping granularity · 97740051
      Committed by Ard Biesheuvel
      The mapping of the kernel consists of four segments, each of which is mapped
      with different permission attributes and/or lifetimes. To optimize the TLB
      and translation table footprint, we define various opaque constants in the
      linker script that resolve to different alignment values depending on the
      page size and whether CONFIG_DEBUG_ALIGN_RODATA is set.
      
      Considering that
      - a 4 KB granule kernel benefits from a 64 KB segment alignment (due to
        the fact that it allows the use of the contiguous bit),
      - the minimum alignment of the .data segment is THREAD_SIZE already, not
        PAGE_SIZE (i.e., we already have padding between _data and the start of
        the .data payload in many cases),
      - 2 MB is a suitable alignment value on all granule sizes, either for
        mapping directly (level 2 on 4 KB), or via the contiguous bit (level 3 on
        16 KB and 64 KB),
      - anything beyond 2 MB exceeds the minimum alignment mandated by the boot
        protocol, and can only be mapped efficiently if the physical alignment
        happens to be the same,
      
      we can simplify this by standardizing on 64 KB (or 2 MB) explicitly, i.e.,
      regardless of granule size, all segments are aligned either to 64 KB, or to
      2 MB if CONFIG_DEBUG_ALIGN_RODATA=y. This also means we can drop the Kconfig
      dependency of CONFIG_DEBUG_ALIGN_RODATA on CONFIG_ARM64_4K_PAGES.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      97740051
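
      Restated as a C-style conditional (the commit expresses this in the
      linker script; SEGMENT_ALIGN is a name chosen here for illustration, and
      SZ_64K/SZ_2M are the usual kernel size macros):

      #include <linux/sizes.h>

      #ifdef CONFIG_DEBUG_ALIGN_RODATA
      #define SEGMENT_ALIGN  SZ_2M   /* level-2 blocks on 4 KB, contiguous PTEs on 16/64 KB */
      #else
      #define SEGMENT_ALIGN  SZ_64K  /* enough for the contiguous bit on the 4 KB granule */
      #endif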
    • arm64: cover the .head.text section in the .text segment mapping · 7eb90f2f
      Committed by Ard Biesheuvel
      Keeping .head.text out of the .text mapping buys us very little: its actual
      payload is only 4 KB, most of which is padding, but the page alignment may
      add up to 2 MB (in case of CONFIG_DEBUG_ALIGN_RODATA=y) of additional
      padding to the uncompressed kernel Image.
      
      Also, on 4 KB granule kernels, the 4 KB misalignment of .text forces us to
      map the adjacent 56 KB of code without the PTE_CONT attribute, and since
      this region contains things like the vector table and the GIC interrupt
      handling entry point, this region is likely to benefit from the reduced TLB
      pressure that results from PTE_CONT mappings.
      
      So remove the alignment between the .head.text and .text sections, and use
      the [_text, _etext) rather than the [_stext, _etext) interval for mapping
      the .text segment.
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      7eb90f2f
  10. 26 Mar 2016 (1 commit)
  11. 26 Feb 2016 (1 commit)
  12. 24 Feb 2016 (1 commit)
  13. 22 Feb 2016 (1 commit)
  14. 19 Feb 2016 (1 commit)
  15. 16 Feb 2016 (1 commit)
  16. 11 Dec 2015 (2 commits)
    • arm64: mm: fold alternatives into .init · 9aa4ec15
      Committed by Mark Rutland
      Currently we treat the alternatives separately from other data that's
      only used during initialisation, using separate .altinstructions and
      .altinstr_replacement linker sections. These are freed for general
      allocation separately from .init*. This is problematic as:
      
      * We do not remove execute permissions, as we do for .init, leaving the
        memory executable.
      
      * We pad between them, making the kernel Image binary up to PAGE_SIZE
        bytes larger than necessary.
      
      This patch moves the two sections into the contiguous region used for
      .init*. This saves some memory, ensures that we remove execute
      permissions, and allows us to remove some code made redundant by this
      reorganisation.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Andre Przywara <andre.przywara@arm.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      9aa4ec15
    • arm64: Remove redundant padding from linker script · 5b28cd9d
      Committed by Mark Rutland
      Currently we place an ALIGN_DEBUG_RO between text and data for the .text
      and .init sections, and depending on configuration each of these may
      result in up to SECTION_SIZE bytes worth of padding (for
      DEBUG_RODATA_ALIGN).
      
      We make no distinction between the text and data in each of these
      sections at any point when creating the initial page tables in head.S.
      We also make no distinction when modifying the tables; __map_memblock,
      fixup_executable, mark_rodata_ro, and fixup_init only work at section
      granularity. Thus this padding is unnecessary.
      
      For the split between init text and data we impose a minimum alignment of
      16 bytes, but this is also unnecessary. The init data is output
      immediately after the padding, before any symbols are defined, so the
      padding is not required to keep a symbol for a linker section array
      correctly associated with the data. Any objects within the section will be
      given at least their usual alignment regardless.
      
      This patch removes the redundant padding.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Jeremy Linton <jeremy.linton@arm.com>
      Cc: Laura Abbott <labbott@fedoraproject.org>
      Cc: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      5b28cd9d
  17. 08 Dec 2015 (1 commit)
  18. 30 Oct 2015 (1 commit)
  19. 20 Oct 2015 (1 commit)
  20. 03 Jun 2015 (1 commit)
    • arm64: reduce ID map to a single page · 5dfe9d7d
      Committed by Ard Biesheuvel
      Commit ea8c2e11 ("arm64: Extend the idmap to the whole kernel
      image") changed the early page table code so that the entire kernel
      Image is covered by the identity map. This allows functions that
      need to enable or disable the MMU to reside anywhere in the kernel
      Image.
      
      However, this change has the unfortunate side effect that the Image
      cannot cross a physical 512 MB alignment boundary anymore, since the
      early page table code cannot deal with the Image crossing a /virtual/
      512 MB alignment boundary.
      
      So instead, reduce the ID map to a single page, that is populated by
      the contents of the .idmap.text section. Only three functions reside
      there at the moment: __enable_mmu(), cpu_resume_mmu() and cpu_reset().
      If new code is introduced that needs to manipulate the MMU state, it
      should be added to this section as well.
      Reviewed-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      5dfe9d7d
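
      The current occupants of .idmap.text are assembly routines, but as an
      illustration of the closing advice above, a C routine could be placed
      in the single ID map page with a section attribute along these lines
      (names here are made up):

      #define __idmap_text __attribute__((__section__(".idmap.text")))

      /* Must be callable with the MMU off, hence must live inside the ID map. */
      __idmap_text void example_mmu_off_helper(void)
      {
              /* ... */
      }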
  21. 20 Mar 2015 (1 commit)
    • ARM, arm64: kvm: get rid of the bounce page · 06f75a1f
      Committed by Ard Biesheuvel
      The HYP init bounce page is a runtime construct that ensures that the
      HYP init code does not cross a page boundary. However, this is something
      we can do perfectly well at build time, by aligning the code appropriately.
      
      For arm64, we just align to 4 KB, and enforce that the code size is less
      than 4 KB, regardless of the chosen page size.
      
      For ARM, the whole code is less than 256 bytes, so we tweak the linker
      script to align at a power-of-2 upper bound of the code size.
      
      Note that this also fixes a benign off-by-one error in the original bounce
      page code, where a bounce page would be allocated unnecessarily if the code
      was exactly 1 page in size.
      
      On ARM, it also fixes an issue with very large kernels reported by Arnd
      Bergmann, where stub sections with linker emitted veneers could erroneously
      trigger the size/alignment ASSERT() in the linker script.
      Tested-by: Marc Zyngier <marc.zyngier@arm.com>
      Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
      Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
      Signed-off-by: Will Deacon <will.deacon@arm.com>
      06f75a1f
  22. 22 Jan 2015 (1 commit)
  23. 25 Nov 2014 (2 commits)
  24. 05 Nov 2014 (1 commit)
  25. 03 Oct 2014 (1 commit)
  26. 10 Jul 2014 (3 commits)
    • arm64: Enable TEXT_OFFSET fuzzing · da57a369
      Committed by Mark Rutland
      The arm64 Image header contains a text_offset field which bootloaders
      are supposed to read to determine the offset (from a 2MB aligned "start
      of memory" per booting.txt) at which to load the kernel. The offset is
      not well respected by bootloaders at present, and due to the lack of
      variation there is little incentive to support it. This is unfortunate
      for the sake of future kernels where we may wish to vary the text offset
      (even zeroing it).
      
      This patch adds options to arm64 to enable fuzz-testing of text_offset.
      CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random
      16-byte aligned value in the range [0..2MB) upon a build of the
      kernel. It is recommended that distribution kernels enable randomization
      to test bootloaders such that any compliance issues can be fixed early.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      da57a369
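
      As a plain C restatement of the constraint being randomized (the actual
      randomization is done in the arm64 Makefile at build time, not in kernel
      code; the function name is illustrative):

      #include <stdint.h>
      #include <stdlib.h>

      /* A random multiple of 16 in the range [0, 2 MB). */
      static uint64_t random_text_offset(void)
      {
              return ((uint64_t)rand() % ((2 * 1024 * 1024) / 16)) * 16;
      }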
    • arm64: Update the Image header · a2c1d73b
      Committed by Mark Rutland
      Currently the kernel Image is stripped of everything past the initial
      stack, and at runtime the memory is initialised and used by the kernel.
      This makes the effective minimum memory footprint of the kernel larger
      than the size of the loaded binary, though bootloaders have no mechanism
      to identify how large this minimum memory footprint is. This makes it
      difficult to choose safe locations to place both the kernel and other
      binaries required at boot (DTB, initrd, etc), such that the kernel won't
      clobber said binaries or other reserved memory during initialisation.
      
      Additionally when big endian support was added the image load offset was
      overlooked, and is currently of an arbitrary endianness, which makes it
      difficult for bootloaders to make use of it. It seems that bootloaders
      aren't respecting the image load offset at present anyway, and are
      assuming that offset 0x80000 will always be correct.
      
      This patch adds an effective image size to the kernel header which
      describes the amount of memory from the start of the kernel Image binary
      which the kernel expects to use before detecting memory and handling any
      memory reservations. This can be used by bootloaders to choose suitable
      locations to load the kernel and/or other binaries such that the kernel
      will not clobber any memory unexpectedly. As before, memory reservations
      are required to prevent the kernel from clobbering these locations
      later.
      
      Both the image load offset and the effective image size are forced to be
      little-endian regardless of the native endianness of the kernel to
      enable bootloaders to load a kernel of arbitrary endianness. Bootloaders
      which wish to make use of the load offset can inspect the effective
      image size field for a non-zero value to determine if the offset is of a
      known endianness. To enable software to determine the endianness of the
      kernel as may be required for certain use-cases, a new flags field (also
      little-endian) is added to the kernel header to export this information.
      
      The documentation is updated to clarify these details. To discourage
      future assumptions regarding the value of text_offset, the value at this
      point in time is removed from the main flow of the documentation (though
      kept as a compatibility note). Some minor formatting issues in the
      documentation are also corrected.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Acked-by: Tom Rini <trini@ti.com>
      Cc: Geoff Levand <geoff@infradead.org>
      Cc: Kevin Hilman <kevin.hilman@linaro.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      a2c1d73b
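
      For reference, a C sketch of the 64-byte arm64 Image header these fields
      live in, following the booting.txt description of the time; a big-endian
      bootloader would still need to byte-swap the little-endian fields before
      use:

      #include <stdint.h>

      struct arm64_image_header {
              uint32_t code0;         /* executable code */
              uint32_t code1;         /* executable code */
              uint64_t text_offset;   /* image load offset, little-endian */
              uint64_t image_size;    /* effective Image size, little-endian */
              uint64_t flags;         /* kernel flags (bit 0: big-endian kernel), little-endian */
              uint64_t res2;          /* reserved */
              uint64_t res3;          /* reserved */
              uint64_t res4;          /* reserved */
              uint32_t magic;         /* 0x644d5241, "ARM\x64" */
              uint32_t res5;          /* reserved */
      };

      /* A bootloader can only trust text_offset when image_size is non-zero;
       * otherwise it must fall back to the historical 0x80000 assumption. */
      static uint64_t effective_text_offset(const struct arm64_image_header *h)
      {
              return h->image_size ? h->text_offset : 0x80000;
      }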
    • arm64: place initial page tables above the kernel · bd00cd5f
      Committed by Mark Rutland
      Currently we place swapper_pg_dir and idmap_pg_dir below the kernel
      image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However,
      bootloaders may use portions of this memory below the kernel and we do
      not parse the memory reservation list until after the MMU has been
      enabled. As such we may clobber some memory a bootloader wishes to have
      preserved.
      
      To enable the use of all of this memory by bootloaders (when the
      required memory reservations are communicated to the kernel) it is
      necessary to move our initial page tables elsewhere. As we currently
      have an effectively unbound requirement for memory at the end of the
      kernel image for .bss, we can place the page tables here.
      
      This patch moves the initial page table to the end of the kernel image,
      after the BSS. As they do not consist of any initialised data they will
      be stripped from the kernel Image as with the BSS. The BSS clearing
      routine is updated to stop at __bss_stop rather than _end so as to not
      clobber the page tables, and memory reservations made redundant by the
      new organisation are removed.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Tested-by: Laura Abbott <lauraa@codeaurora.org>
      Acked-by: Will Deacon <will.deacon@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      bd00cd5f
  27. 23 May 2014 (1 commit)
  28. 20 Dec 2013 (2 commits)
  29. 05 Nov 2013 (1 commit)
  30. 25 Oct 2013 (1 commit)
    • arm64: factor out spin-table boot method · 652af899
      Committed by Mark Rutland
      The arm64 kernel has an internal holding pen, which is necessary for
      some systems where we can't bring CPUs online individually and must hold
      multiple CPUs in a safe area until the kernel is able to handle them.
      The current SMP infrastructure for arm64 is closely coupled to this
      holding pen, and alternative boot methods must launch CPUs into the pen,
      where they sit before they are launched into the kernel proper.
      
      With PSCI (and possibly other future boot methods), we can bring CPUs
      online individually, and need not perform the secondary_holding_pen
      dance. Instead, this patch factors the holding pen management code out
      to the spin-table boot method code, as it is the only boot method
      requiring the pen.
      
      A new entry point for secondaries, secondary_entry, is added for other
      boot methods to use; it bypasses the holding pen and its associated
      overhead when bringing CPUs online. The smp.pen.text section is also
      removed, as the pen can live in head.text without problem.
      
      The cpu_operations structure is extended with two new functions,
      cpu_boot and cpu_postboot, for bringing a CPU into the kernel and
      performing any post-boot cleanup required by a boot method (e.g.
      resetting secondary_holding_pen_release to INVALID_HWID).
      Documentation is added for cpu_operations.
      Signed-off-by: Mark Rutland <mark.rutland@arm.com>
      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
      652af899
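
      A trimmed C sketch of the cpu_operations hooks described above (the
      kernel's full structure carries additional members, e.g. for CPU
      hotplug, and the exact prototypes may differ):

      struct device_node;

      struct cpu_operations_sketch {
              const char      *name;
              int             (*cpu_init)(struct device_node *dn, unsigned int cpu);
              int             (*cpu_prepare)(unsigned int cpu);
              /* Bring the CPU into the kernel, e.g. write the spin-table
               * release address or issue a PSCI CPU_ON call. */
              int             (*cpu_boot)(unsigned int cpu);
              /* Runs on the freshly booted CPU for method-specific cleanup,
               * e.g. resetting secondary_holding_pen_release to INVALID_HWID. */
              void            (*cpu_postboot)(void);
      };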
  31. 28 Aug 2013 (1 commit)
  32. 12 Jun 2013 (1 commit)