- 08 Sep 2014, 1 commit
-
-
Committed by Ard Biesheuvel

The static memory footprint of a kernel Image at boot is larger than the Image file itself. Things like .bss data and initial page tables are allocated statically but populated dynamically, so their content is not contained in the Image file. However, if EFI (or GRUB) has loaded the Image at precisely the desired offset of base of DRAM + TEXT_OFFSET, the Image will be booted in place, and we have to make sure that the allocation done by the PE/COFF loader is large enough. Fix this by growing the PE/COFF .text section to cover the entire static memory footprint. The part of the section that is not covered by the payload will be zero-initialised by the PE/COFF loader.

Acked-by: Mark Salter <msalter@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Leif Lindholm <leif.lindholm@linaro.org>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
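As a rough illustration of what the fix accounts for (a sketch with invented link-time addresses, not the kernel's actual linker script or symbols):

```c
#include <stdio.h>

int main(void)
{
	/* Hypothetical link-time symbols, for illustration only. */
	unsigned long _text  = 0xffffffc000080000UL; /* start of Image */
	unsigned long _edata = 0xffffffc000a00000UL; /* end of file-backed payload */
	unsigned long _end   = 0xffffffc000c00000UL; /* end of .bss + initial page tables */

	printf("Image file payload:            %lu KB\n", (_edata - _text) >> 10);
	printf(".text section size to declare: %lu KB\n", (_end - _text) >> 10);
	printf("zero-filled by PE/COFF loader: %lu KB\n", (_end - _edata) >> 10);
	return 0;
}
```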
-
- 27 Aug 2014, 1 commit
-
-
Committed by Geoff Levand

Remove an unused local variable from head.S. It seems this was never used, even from the initial commit 9703d9d7 (arm64: Kernel booting and initialisation), and is a leftover from a previous implementation of __calc_phys_offset.

Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 20 Aug 2014, 1 commit
-
-
Committed by Ard Biesheuvel

When booting via UEFI, the kernel Image is loaded at a 4 kB boundary and the embedded EFI stub is executed in place. The EFI stub relocates the Image to reside TEXT_OFFSET bytes above a 2 MB boundary, and jumps into the kernel proper.

In AArch64, PC-relative symbol references are emitted using adrp/add or adrp/ldr pairs, where the offset into a 4 kB page is resolved using a separate :lo12: relocation. This implicitly assumes that the code will always be executed at the same relative offset with respect to a 4 kB boundary, or the references will point to the wrong address. This means we should link the kernel at a 4 kB aligned base address in order to remain compatible with the base address the UEFI loader uses when doing the initial load of Image. So update the code that generates TEXT_OFFSET to choose a multiple of 4 kB. At the same time, update the code so it chooses from the interval [0..2MB) as the author originally intended.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
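The adrp/:lo12: constraint can be modelled in a few lines of C. This is a sketch with invented addresses: adrp reconstructs the target's 4 kB page from a link-time page delta, and the :lo12: add supplies the link-time low 12 bits, so only load offsets that are multiples of 4 kB preserve the result.

```c
#include <stdio.h>
#include <stdint.h>

/* adrp forms the target page from a link-time page delta; the :lo12: add
 * supplies the link-time low 12 bits of the symbol. */
static uint64_t adrp_add(uint64_t pc, int64_t page_delta, uint64_t lo12)
{
	return ((pc & ~0xfffULL) + ((uint64_t)page_delta << 12)) + lo12;
}

int main(void)
{
	uint64_t link_pc = 0x10000000, link_sym = 0x10200034; /* invented */
	int64_t delta = (int64_t)(link_sym >> 12) - (int64_t)(link_pc >> 12);
	uint64_t lo12 = link_sym & 0xfff;

	/* Loaded 2 MB higher: 4 kB phase preserved, address correct. */
	printf("shift 0x200000: got 0x%llx, want 0x%llx\n",
	       (unsigned long long)adrp_add(link_pc + 0x200000, delta, lo12),
	       (unsigned long long)(link_sym + 0x200000));

	/* Loaded 0x1010 higher: 4 kB phase changed, address wrong. */
	printf("shift 0x001010: got 0x%llx, want 0x%llx\n",
	       (unsigned long long)adrp_add(link_pc + 0x1010, delta, lo12),
	       (unsigned long long)(link_sym + 0x1010));
	return 0;
}
```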
-
- 25 Jul 2014, 1 commit
-
-
Committed by Catalin Marinas
GICv3 introduces new system registers accessible with the full msr/mrs syntax (e.g. mrs x0, Sop0_op1_CRm_CRn_op2). However, only recent binutils understand the new syntax. This patch introduces msr_s/mrs_s assembly macros which generate the equivalent instructions above and converts the existing GICv3 code (both drivers/irqchip/ and arch/arm64/kernel/).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Olof Johansson <olof@lixom.net>
Tested-by: Olof Johansson <olof@lixom.net>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
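The macros work by hand-assembling the instruction word and emitting it with .inst. The sketch below reproduces the encoding from memory, so treat the exact field packing (in particular the op0 bias) as an assumption rather than the kernel's definitive sys_reg() definition:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack a system register spec into the MRS/MSR immediate fields
 * (shift values as I recall them; an assumption, not the kernel header). */
#define SYS_REG(op0, op1, crn, crm, op2) \
	(((uint32_t)((op0) - 2) << 19) | ((op1) << 16) | \
	 ((crn) << 12) | ((crm) << 8) | ((op2) << 5))

int main(void)
{
	/* e.g. ICC_SRE_EL1 is S3_0_C12_C12_5; MRS base opcode is 0xd5300000,
	 * with the destination register in bits [4:0] (0 = x0). */
	uint32_t mrs_x0 = 0xd5300000u | SYS_REG(3, 0, 12, 12, 5) | 0;

	printf("mrs x0, S3_0_C12_C12_5 -> .inst 0x%08x\n", mrs_x0);
	return 0;
}
```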
-
- 23 Jul 2014, 5 commits
-
-
Committed by Catalin Marinas
This patch allows support for 3 levels of page tables with 64KB page configuration allowing 48-bit VA space. The pgd is no longer a full PAGE_SIZE (PTRS_PER_PGD is 64) and (swapper|idmap)_pg_dir are not fully populated (pgd_alloc falls back to kzalloc).

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
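The arithmetic behind the smaller pgd, as a worked check (not kernel code): with a 64 KB granule each table level resolves 13 bits (8192 eight-byte entries), so a 48-bit VA leaves only 6 bits for the pgd.

```c
#include <stdio.h>

int main(void)
{
	int va_bits = 48, page_shift = 16;   /* 64 KB granule */
	int bits_per_level = page_shift - 3; /* 8-byte descriptors: 13 bits */

	/* 3 levels: page offset + 2 full levels, remainder goes to the pgd */
	int pgd_bits = va_bits - page_shift - 2 * bits_per_level;

	printf("PTRS_PER_PGD = %d, pgd size = %d bytes\n",
	       1 << pgd_bits, (1 << pgd_bits) * 8);
	return 0;
}
```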
-
Committed by Catalin Marinas
This patch adds a create_table_entry macro which is used to populate pgd and pud entries, also reducing the number of arguments for create_pgd_entry.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
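In C terms, the macro's effect is roughly the following (a sketch of the behaviour, not the assembly itself; the PMD_TYPE_TABLE value and the 4 KB page size are assumptions for illustration):

```c
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE      4096
#define PMD_TYPE_TABLE 0x3ULL /* "table" descriptor type bits (assumed) */

/* Write a next-level table descriptor at the index derived from virt,
 * then advance to the next table page - mirroring the macro's effects. */
static uint64_t *create_table_entry(uint64_t *tbl, uint64_t virt,
				    int shift, int ptrs)
{
	uint64_t idx = (virt >> shift) & (uint64_t)(ptrs - 1);
	uint64_t next = (uint64_t)(uintptr_t)tbl + PAGE_SIZE;

	tbl[idx] = next | PMD_TYPE_TABLE;
	return (uint64_t *)(uintptr_t)next;
}

int main(void)
{
	static uint64_t pgd[2 * PAGE_SIZE / 8]
		__attribute__((aligned(PAGE_SIZE)));
	/* shift 30 / 512 entries correspond to a 4 KB-granule pgd level */
	uint64_t *pud = create_table_entry(pgd, 0xffffffc000080000ULL, 30, 512);

	printf("next-level table at %p\n", (void *)pud);
	return 0;
}
```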
-
Committed by Catalin Marinas

Rather than having several Kconfig options, define an int option, ARM64_PGTABLE_LEVELS, which will also be useful in converting some of the pgtable macros.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
Committed by Jungseok Lee

This patch implements 4 levels of translation tables, since 3 levels of page tables with 4KB pages cannot support the 40-bit physical address space described in [1], due to the following issue: the kernel logical memory map with 4KB pages and 3 levels (0xffffffc000000000-0xffffffffffffffff) cannot cover the RAM region from 544GB to 1024GB in [1]. Specifically, the ARM64 kernel fails to create a mapping for this region in the map_mem function, since __phys_to_virt for this region overflows the address space.

If an SoC design follows the document [1], RAM beyond 32GB would be placed from 544GB. Even a 64GB system is supposed to use the region from 544GB to 576GB for just 32GB of RAM. The natural fix is to enable 4 levels of page tables rather than hacking __virt_to_phys and __phys_to_virt. However, it is recommended that 4 levels of page tables only be enabled if the memory map is too sparse, or if there is about 512GB of RAM.

References
----------
[1]: Principles of ARM Memory Maps, White Paper, Issue C

Signed-off-by: Jungseok Lee <jays.lee@samsung.com>
Reviewed-by: Sungjinn Chung <sungjinn.chung@samsung.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Steve Capper <steve.capper@linaro.org>
[catalin.marinas@arm.com: MEMBLOCK_INITIAL_LIMIT removed, same as PUD_SIZE]
[catalin.marinas@arm.com: early_ioremap_init() updated for 4 levels]
[catalin.marinas@arm.com: 48-bit VA depends on BROKEN until KVM is fixed]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
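The overflow is easy to reproduce as a worked example (a PHYS_OFFSET of zero is an assumption for illustration): with 4KB pages and 3 levels, the linear map starts 256GB below the top of the address space, so translating a 544GB physical address wraps around 2^64 and lands outside the kernel VA range.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t PAGE_OFFSET = 0xffffffc000000000ULL; /* 2^64 - 256 GB */
	uint64_t PHYS_OFFSET = 0;                     /* assumed DRAM base */
	uint64_t phys = 544ULL << 30;                 /* RAM placed at 544 GB */

	uint64_t virt = phys - PHYS_OFFSET + PAGE_OFFSET; /* wraps past 2^64 */

	printf("__phys_to_virt(544 GB) = 0x%016llx%s\n",
	       (unsigned long long)virt,
	       virt < PAGE_OFFSET ? " (overflowed out of the linear map)" : "");
	return 0;
}
```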
-
Committed by Catalin Marinas
The early_ioremap_init() function already handles fixmap pte initialisation, so upgrade this to cover all of pud/pmd/pte and remove one page from swapper_pg_dir.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Jungseok Lee <jungseoklee85@gmail.com>
-
- 10 Jul 2014, 4 commits
-
-
Committed by Mark Rutland

The arm64 Image header contains a text_offset field which bootloaders are supposed to read to determine the offset (from a 2MB aligned "start of memory" per booting.txt) at which to load the kernel. The offset is not well respected by bootloaders at present, and due to the lack of variation there is little incentive to support it. This is unfortunate for the sake of future kernels where we may wish to vary the text offset (even zeroing it).

This patch adds options to arm64 to enable fuzz-testing of text_offset. CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET forces the text offset to a random 16-byte aligned value in the range [0..2MB) upon a build of the kernel. It is recommended that distribution kernels enable randomization to test bootloaders, such that any compliance issues can be fixed early.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
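The build implements this as a one-line awk expression in the arm64 Makefile; an equivalent computation in C, as a sketch (the 16-byte alignment and the [0..2MB) interval come from the commit text; a large RAND_MAX is assumed):

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	srand((unsigned)time(NULL));

	/* 2 MB / 16 = 2^17 candidate 16-byte-aligned offsets
	 * (assumes RAND_MAX >= 2^17 - 1, true on common libcs) */
	unsigned long text_offset = (unsigned long)(rand() % (1 << 17)) * 16;

	printf("TEXT_OFFSET := %#07lx\n", text_offset);
	return 0;
}
```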
-
Committed by Mark Rutland

Currently the kernel Image is stripped of everything past the initial stack, and at runtime the memory is initialised and used by the kernel. This makes the effective minimum memory footprint of the kernel larger than the size of the loaded binary, though bootloaders have no mechanism to identify how large this minimum memory footprint is. This makes it difficult to choose safe locations to place both the kernel and other binaries required at boot (DTB, initrd, etc), such that the kernel won't clobber said binaries or other reserved memory during initialisation.

Additionally, when big-endian support was added, the image load offset was overlooked, and is currently of an arbitrary endianness, which makes it difficult for bootloaders to make use of it. It seems that bootloaders aren't respecting the image load offset at present anyway, and are assuming that offset 0x80000 will always be correct.

This patch adds an effective image size to the kernel header which describes the amount of memory from the start of the kernel Image binary which the kernel expects to use before detecting memory and handling any memory reservations. This can be used by bootloaders to choose suitable locations to load the kernel and/or other binaries such that the kernel will not clobber any memory unexpectedly. As before, memory reservations are required to prevent the kernel from clobbering these locations later.

Both the image load offset and the effective image size are forced to be little-endian regardless of the native endianness of the kernel, to enable bootloaders to load a kernel of arbitrary endianness. Bootloaders which wish to make use of the load offset can inspect the effective image size field for a non-zero value to determine if the offset is of a known endianness. To enable software to determine the endianness of the kernel, as may be required for certain use-cases, a new flags field (also little-endian) is added to the kernel header to export this information.

The documentation is updated to clarify these details. To discourage future assumptions regarding the value of text_offset, the value at this point in time is removed from the main flow of the documentation (though kept as a compatibility note). Some minor formatting issues in the documentation are also corrected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Tom Rini <trini@ti.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Kevin Hilman <kevin.hilman@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
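The resulting header layout, per Documentation/arm64/booting.txt (the struct and helper below are illustrative, not kernel code; the three middle fields are little-endian on disk):

```c
#include <stdint.h>

/* 64-byte arm64 Image header as documented in booting.txt. */
struct arm64_image_header {
	uint32_t code0;            /* executable code */
	uint32_t code1;            /* executable code */
	uint64_t text_offset;      /* image load offset, little-endian */
	uint64_t image_size;       /* effective image size, LE (0 if unknown) */
	uint64_t flags;            /* LE; bit 0: 0 = LE kernel, 1 = BE kernel */
	uint64_t res2, res3, res4; /* reserved */
	uint32_t magic;            /* 0x644d5241, "ARM\x64" */
	uint32_t res5;             /* reserved */
};

/* A bootloader can only trust text_offset when image_size is non-zero;
 * otherwise it must fall back to the legacy 0x80000 assumption. */
static inline uint64_t effective_text_offset(const struct arm64_image_header *h)
{
	return h->image_size ? h->text_offset : 0x80000;
}
```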
-
Committed by Mark Rutland
Currently we place swapper_pg_dir and idmap_pg_dir below the kernel image, between PHYS_OFFSET and (PHYS_OFFSET + TEXT_OFFSET). However, bootloaders may use portions of this memory below the kernel and we do not parse the memory reservation list until after the MMU has been enabled. As such we may clobber some memory a bootloader wishes to have preserved.

To enable the use of all of this memory by bootloaders (when the required memory reservations are communicated to the kernel) it is necessary to move our initial page tables elsewhere. As we currently have an effectively unbound requirement for memory at the end of the kernel image for .bss, we can place the page tables here.

This patch moves the initial page table to the end of the kernel image, after the BSS. As they do not consist of any initialised data they will be stripped from the kernel Image as with the BSS. The BSS clearing routine is updated to stop at __bss_stop rather than _end so as to not clobber the page tables, and memory reservations made redundant by the new organisation are removed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland
Currently __turn_mmu_on is aligned to 64 bytes to ensure that it doesn't span any page boundary, which simplifies the idmap and spares us requiring an additional page table to map half of the function. In keeping with other important requirements in architecture code, this fact is undocumented.

Additionally, as the function consists of three instructions totalling 12 bytes with no literal pool data, a smaller alignment of 16 bytes would be sufficient.

This patch reduces the alignment to 16 bytes and documents the underlying reason for the alignment. This reduces the required alignment of the entire .head.text section from 64 bytes to 16 bytes, though it may still be aligned to a larger value depending on TEXT_OFFSET.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
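A trivial brute-force check of the invariant (one 4 kB period suffices, since alignment repeats every page):

```c
#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long start;

	/* every 16-byte-aligned start in one page period */
	for (start = 0; start < 4096; start += 16)
		assert(start / 4096 == (start + 12 - 1) / 4096);

	puts("a 16-byte-aligned, 12-byte function never spans a page boundary");
	return 0;
}
```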
-
- 09 Jul 2014, 1 commit
-
-
Committed by Marc Zyngier

The Generic Interrupt Controller (version 3) offers services that are similar to GICv2, with a number of additional features:
- Affinity routing based on the CPU MPIDR (ARE)
- System registers for the CPU interfaces (SRE)
- Support for more than 8 CPUs
- Locality-specific Peripheral Interrupts (LPIs)
- Interrupt Translation Services (ITS)

This patch adds preliminary support for GICv3 with ARE and SRE, non-secure mode only. It relies on higher exception levels to grant ARE and SRE access. Support for LPI and ITS will be added at a later time.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Reviewed-by: Zi Shen Lim <zlim@broadcom.com>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Reviewed-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Reviewed-by: Yun Wu <wuyun.wu@huawei.com>
Reviewed-by: Zhen Lei <thunder.leizhen@huawei.com>
Tested-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Tested-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Radha Mohan Chintakuntla <rchintakuntla@cavium.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lkml.kernel.org/r/1404140510-5382-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 04 Jul 2014, 1 commit
-
-
Committed by Marc Zyngier
The CurrentEL system register reports the Current Exception Level of the CPU. It doesn't say anything about the stack handling, and yet we compare it to PSR_MODE_EL2t and PSR_MODE_EL2h. It works by chance because PSR_MODE_EL2t happens to match the right bits, but that's otherwise a very bad idea. Just check for the EL value instead.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[catalin.marinas@arm.com: fixed arch/arm64/kernel/efi-entry.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
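The coincidence is visible from the constants involved (values as defined in the arm64 headers, to the best of my recollection):

```c
#include <stdio.h>

#define PSR_MODE_EL2t	0x8      /* spsr mode: EL2, using SP_EL0 */
#define PSR_MODE_EL2h	0x9      /* spsr mode: EL2, using SP_EL2 */
#define CurrentEL_EL2	(2 << 2) /* CurrentEL: the EL lives in bits [3:2] */

int main(void)
{
	printf("CurrentEL at EL2: 0x%x\n", CurrentEL_EL2);
	printf("PSR_MODE_EL2t:    0x%x (same bits only by coincidence)\n",
	       PSR_MODE_EL2t);
	printf("PSR_MODE_EL2h:    0x%x (would not match)\n", PSR_MODE_EL2h);
	return 0;
}
```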
-
- 10 May 2014, 1 commit
-
-
Committed by Will Deacon
set_cpu_boot_mode_flag is used to identify which exception levels are encountered across the system by CPUs trying to enter the kernel. The basic algorithm is: if a CPU is booting at EL2, it will set a flag at an offset of #4 from __boot_cpu_mode, a cacheline-aligned variable. Otherwise, a flag is set at an offset of zero into the same cacheline. This enables us to check that all CPUs booted at the same exception level.

This cacheline is written with the stage-1 MMU off (that is, via a strongly-ordered mapping) and will bypass any clean lines in the cache, leading to potential coherence problems when the variable is later checked via the normal, cacheable mapping of the kernel image.

This patch reworks the broken flushing code so that we:
(1) Use a DMB to order the strongly-ordered write of the cacheline against the subsequent cache-maintenance operation (by-VA operations only hazard against normal, cacheable accesses).
(2) Use a single dc ivac instruction to invalidate any clean lines containing a stale copy of the line after it has been updated.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 01 May 2014, 1 commit
-
-
Committed by Mark Salter
This patch adds PE/COFF header fields to the start of the kernel Image so that it appears as an EFI application to UEFI firmware. An EFI stub is included to allow direct booting of the kernel Image.

Signed-off-by: Mark Salter <msalter@redhat.com>
[Add support in PE/COFF header for signed images]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
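A sketch of what a PE/COFF-aware loader checks first, usable as a quick sanity test on a built Image (an illustrative tool, not part of the kernel):

```c
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
	unsigned char buf[4096];
	FILE *f = fopen(argc > 1 ? argv[1] : "Image", "rb");

	if (!f || fread(buf, 1, sizeof(buf), f) < 0x40)
		return 1;

	/* the 32-bit offset to the PE/COFF header lives at 0x3c */
	uint32_t pe_off = (uint32_t)buf[0x3c] | ((uint32_t)buf[0x3d] << 8) |
			  ((uint32_t)buf[0x3e] << 16) | ((uint32_t)buf[0x3f] << 24);

	printf("\"MZ\" at offset 0: %s\n",
	       (buf[0] == 'M' && buf[1] == 'Z') ? "yes" : "no");
	printf("PE/COFF header offset: 0x%x\n", pe_off);
	if (pe_off + 4 <= sizeof(buf))
		printf("\"PE\\0\\0\" signature: %s\n",
		       (buf[pe_off] == 'P' && buf[pe_off + 1] == 'E' &&
			buf[pe_off + 2] == 0 && buf[pe_off + 3] == 0)
		       ? "yes" : "no");
	return 0;
}
```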
-
- 08 Apr 2014, 1 commit
-
-
Committed by Mark Salter
Add support for early IO or memory mappings which are needed before the normal ioremap() is usable. This also adds fixmap support for permanent fixed mappings such as that used by the earlyprintk device register region.

Signed-off-by: Mark Salter <msalter@redhat.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Borislav Petkov <borislav.petkov@amd.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
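A hedged usage sketch, in kernel context rather than a standalone program (the physical address is invented):

```c
/* Kernel-context sketch: map a device region before paging_init() /
 * ioremap() are available, then tear the mapping down again. */
static void __init early_debug_uart_map(void)
{
	void __iomem *regs = early_ioremap(0x09000000, 0x1000); /* invented addr */

	if (!regs)
		return;
	/* ... access device registers here ... */
	early_iounmap(regs, 0x1000);
}
```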
-
- 05 Apr 2014, 1 commit
-
-
Committed by Catalin Marinas

With system caches for the host OS or architected caches for a guest OS, we cannot easily guarantee that there are no dirty or stale cache lines for the areas of memory written by the kernel during boot with the MMU off (and therefore via non-cacheable accesses). This patch adds the necessary cache maintenance during boot and relaxes the booting requirements.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 26 Feb 2014, 1 commit
-
-
Committed by Catalin Marinas
This patch changes the idmap page table creation during boot to cover the whole kernel image, allowing functions like cpu_reset() to be safely called with the physical address. This patch also simplifies the create_block_map asm macro to no longer take an idmap argument and always use the phys/virt/end parameters. For the idmap case, phys == virt.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 20 Dec 2013, 1 commit
-
-
Committed by Geoff Levand

The __data_loc variable is an unused leftover from the 32-bit arm implementation. Remove that variable and adjust the __mmap_switched startup routine accordingly.

Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 07 Dec 2013, 1 commit
-
-
Committed by Lorenzo Pieralisi

The refactoring of el2_setup split the code setting up EL2 and detecting the CPU boot mode into separate chunks. This allows the code that sets up EL2 to run in an endian-independent way, i.e. before the endianness is set up in the respective sctlr registers. This patch brings secondary_entry up to date, so that CPUs entering the kernel through this code path set up EL2 and the CPU boot mode properly.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 25 Oct 2013, 3 commits
-
-
Committed by Matthew Leach
The endianness of memory accesses at EL2 and EL1 is configured by SCTLR_EL2.EE and SCTLR_EL1.EE respectively. When the kernel is booted, the state of SCTLR_EL{2,1}.EE is unknown, and thus the kernel must ensure that these bits are set before performing any memory accesses. This patch ensures that SCTLR_EL{2,1} are configured appropriately at boot for kernels of either endianness.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
[catalin.marinas@arm.com: fix SCTLR_EL1.E0E bit setting in head.S]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
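The bits involved, with macro names approximating the kernel headers (architectural positions: EE is bit 25 of SCTLR_EL1/EL2, E0E is bit 24 of SCTLR_EL1):

```c
#include <stdio.h>

#define SCTLR_ELx_EE	(1u << 25) /* big-endian data accesses at EL2/EL1 */
#define SCTLR_EL1_E0E	(1u << 24) /* big-endian data accesses at EL0 */

int main(void)
{
	unsigned int sctlr = 0;

	/* A BE kernel must set these early; an LE kernel must clear them,
	 * since their state at boot is unknown. */
#ifdef __AARCH64EB__
	sctlr |= SCTLR_ELx_EE | SCTLR_EL1_E0E;
#else
	sctlr &= ~(SCTLR_ELx_EE | SCTLR_EL1_E0E);
#endif
	printf("SCTLR endianness bits: %#x\n", sctlr);
	return 0;
}
```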
-
Committed by Matthew Leach

Currently, the code for setting the __cpu_boot_mode flag is munged in with el2_setup. This makes things difficult on a BE bringup, as a memory access has to occur before el2_setup, which is the place where we'd like to set the endianness for the current EL.

Create a new function for setting __cpu_boot_mode and have el2_setup return the boot mode of the CPU. Also define a new constant in virt.h, BOOT_CPU_MODE_EL1, for readability.

Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Matthew Leach <matthew.leach@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Mark Rutland

The arm64 kernel has an internal holding pen, which is necessary for some systems where we can't bring CPUs online individually and must hold multiple CPUs in a safe area until the kernel is able to handle them. The current SMP infrastructure for arm64 is closely coupled to this holding pen, and alternative boot methods must launch CPUs into the pen, where they sit before they are launched into the kernel proper.

With PSCI (and possibly other future boot methods), we can bring CPUs online individually, and need not perform the secondary_holding_pen dance. Instead, this patch factors the holding pen management code out to the spin-table boot method code, as it is the only boot method requiring the pen.

A new entry point for secondaries, secondary_entry, is added for other boot methods to use, which bypasses the holding pen and its associated overhead when bringing CPUs online. The smp.pen.text section is also removed, as the pen can live in head.text without problem.

The cpu_operations structure is extended with two new functions, cpu_boot and cpu_postboot, for bringing a CPU into the kernel and performing any post-boot cleanup required by a boot method (e.g. resetting the secondary_holding_pen_release to INVALID_HWID). Documentation is added for cpu_operations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
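A header-style sketch of the extended structure (the field list is reconstructed from the commit text; names of the pre-existing hooks are approximate):

```c
struct device_node;

/* Per boot-method operations: cpu_boot brings a CPU into the kernel
 * (replacing the pen-based entry for methods that don't need it), and
 * cpu_postboot performs method-specific cleanup, e.g. the spin-table
 * code resetting secondary_holding_pen_release to INVALID_HWID. */
struct cpu_operations {
	const char	*name;
	int		(*cpu_init)(struct device_node *dn, unsigned int cpu);
	int		(*cpu_prepare)(unsigned int cpu);
	int		(*cpu_boot)(unsigned int cpu);
	void		(*cpu_postboot)(void);
};
```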
-
- 22 Aug 2013, 1 commit
-
-
Committed by Roy Franz

Expand the arm64 image header to allow for co-existence with the PE/COFF header required by the EFI stub. The PE/COFF format requires the "MZ" header to be at offset 0, and the offset to the PE/COFF header to be at offset 0x3c. The image header is expanded to allow 2 instructions at the beginning, to accommodate a benign instruction at offset 0 that includes the "MZ" header, a magic number, and the offset to the PE/COFF header.

Signed-off-by: Roy Franz <roy.franz@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
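The pay-off of the two reserved instruction slots came with the later EFI stub: the first slot holds a benign AArch64 instruction whose little-endian encoding begins with the bytes "MZ". A small demonstration (a little-endian host is assumed):

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
	/* "add x13, x18, #0x16": harmless if executed, and its little-endian
	 * encoding starts with 'M' (0x4d) and 'Z' (0x5a). */
	uint32_t insn = 0x91005a4d;
	unsigned char b[4];

	memcpy(b, &insn, sizeof(b)); /* assumes a little-endian host */
	printf("first two bytes of the Image: %c%c\n", b[0], b[1]);
	return 0;
}
```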
-
- 21 Mar 2013, 1 commit
-
-
Committed by Javi Merino

The reg property of the cpu nodes in the DT now contains all the affinity levels (MPIDR[39:32] and MPIDR[23:0]), and that's what boot_secondary() writes into the pen, so increase the mask in secondary_holding_pen accordingly.

Signed-off-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
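The widened mask in C terms (the mask value matches the arm64 MPIDR_HWID_BITMASK definition as I recall it; the MPIDR value is invented):

```c
#include <stdio.h>
#include <stdint.h>

/* Aff3 in bits [39:32] plus Aff2..Aff0 in bits [23:0] */
#define MPIDR_HWID_BITMASK 0xff00ffffffULL

int main(void)
{
	uint64_t mpidr = 0x0000010203ULL; /* invented example value */

	printf("hwid = 0x%010llx\n",
	       (unsigned long long)(mpidr & MPIDR_HWID_BITMASK));
	return 0;
}
```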
-
- 23 Jan 2013, 1 commit
-
-
Committed by Catalin Marinas
This patch adds support for the "earlyprintk=" parameter on the kernel command line. The format is:

earlyprintk=<name>[,<addr>][,<options>]

where <name> is the name of the (UART) device, e.g. "pl011", and <addr> is the I/O address. The <options> aren't currently used.

The mapping of the earlyprintk device is done very early during kernel boot and there are restrictions on which functions it can call. A special early_io_map() function is added which creates the mapping from the pre-defined EARLY_IOBASE to the device I/O address passed via the kernel parameter. The pgd entry corresponding to EARLY_IOBASE is pre-populated in head.S during kernel boot. Only PL011 is currently supported and it is assumed that the interface is already initialised by the boot loader before the kernel is started.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
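As a usage example (the address is invented, for a hypothetical PL011 mapped at physical 0x09000000): booting with earlyprintk=pl011,0x09000000 on the command line selects the PL011 driver and maps its register region via early_io_map().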
-
- 05 Dec 2012, 4 commits
-
-
Committed by Marc Zyngier
The architecture doesn't mandate any reset value for vttbr_el2. Better set it to a known value before some HYP code gets confused.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

If booted in EL2, install a dummy hypervisor whose only purpose is to be replaced by a full-fledged one. A minimal API allows us to:
- obtain the current HYP vectors (__hyp_get_vectors)
- set new HYP vectors (__hyp_set_vectors)

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Committed by Marc Zyngier

To be able to signal the availability of EL2 to other parts of the kernel, record the boot mode. Once booted, two predicates indicate whether HYP mode is available, and if not, whether this is due to a boot mode mismatch.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
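A header-style sketch of the resulting predicates (close to, but not necessarily identical to, the virt.h helpers; the BOOT_CPU_MODE_EL2 value is an assumption):

```c
#include <stdbool.h>
#include <stdint.h>

#define BOOT_CPU_MODE_EL2 0x0e12 /* assumed marker value */

/* [0] records the primary CPU's boot mode; [1] is overwritten if any
 * secondary boots in a different mode. */
extern uint32_t __boot_cpu_mode[2];

static inline bool is_hyp_mode_available(void)
{
	return __boot_cpu_mode[0] == BOOT_CPU_MODE_EL2 &&
	       __boot_cpu_mode[1] == BOOT_CPU_MODE_EL2;
}

static inline bool is_hyp_mode_mismatched(void)
{
	return __boot_cpu_mode[0] != __boot_cpu_mode[1];
}
```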
-
Committed by Will Deacon
We want to use the virtual counter at EL0, as the physical counter may not track the current clocksource for guests running under a hypervisor. This patch updates the vdso and generic timer driver to use the virtual counter. The kernel EL2 entry code is also updated to ensure that the virtual offset is initialised to zero.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
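With the virtual offset zeroed, EL0 code can read the virtual counter directly (an AArch64-only sketch; it compiles and runs only on arm64):

```c
#include <stdio.h>
#include <stdint.h>

static inline uint64_t read_cntvct(void)
{
	uint64_t val;

	/* CNTVCT_EL0 = CNTPCT_EL0 - CNTVOFF_EL2; the boot code zeroes
	 * CNTVOFF_EL2 when entered at EL2, so host and guest views agree. */
	asm volatile("mrs %0, cntvct_el0" : "=r" (val));
	return val;
}

int main(void)
{
	printf("virtual counter: %llu\n", (unsigned long long)read_cntvct());
	return 0;
}
```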
-
- 17 Sep 2012, 1 commit
-
-
Committed by Catalin Marinas

This patch adds the kernel booting and initial setup code. Documentation/arm64/booting.txt describes the booting protocol for the AArch64 Linux kernel. This is subject to change following the work on boot standardisation and ACPI.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Olof Johansson <olof@lixom.net>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
-