- 10 August 2016, 2 commits
-
-
Committed by Ard Biesheuvel
The late_alloc() PTE allocation function used by create_mapping_late() does not call pgtable_page_ctor() on PTE pages it allocates, leaving the per-page spinlock uninitialized.

Since generic page table manipulation code may assume that translation table pages that are not owned by init_mm are covered by fully constructed struct pages, the following crash may occur with the new UEFI memory attributes table code.

  efi: memattr: Processing EFI Memory Attributes table:
  efi: memattr: 0x0000ffa16000-0x0000ffa82fff [Runtime Code |RUN| | |XP| | | | | | | | ]
  Unable to handle kernel NULL pointer dereference at virtual address 00000010
  pgd = c0204000
  [00000010] *pgd=00000000
  Internal error: Oops: 5 [#1] SMP ARM
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.7.0-rc4-00063-g3882aa7b340b #361
  Hardware name: Generic DT based system
  task: ed858000 ti: ed842000 task.ti: ed842000
  PC is at __lock_acquire+0xa0/0x19a8
  ...
  [<c038c830>] (__lock_acquire) from [<c038e4f8>] (lock_acquire+0x6c/0x88)
  [<c038e4f8>] (lock_acquire) from [<c0c06134>] (_raw_spin_lock+0x2c/0x3c)
  [<c0c06134>] (_raw_spin_lock) from [<c0410384>] (apply_to_page_range+0xe8/0x238)
  [<c0410384>] (apply_to_page_range) from [<c1205f34>] (efi_set_mapping_permissions+0x54/0x5c)
  [<c1205f34>] (efi_set_mapping_permissions) from [<c1247474>] (efi_memattr_apply_permissions+0x2b8/0x378)
  [<c1247474>] (efi_memattr_apply_permissions) from [<c1248258>] (arm_enable_runtime_services+0x1f0/0x22c)
  [<c1248258>] (arm_enable_runtime_services) from [<c0301f0c>] (do_one_initcall+0x44/0x174)
  [<c0301f0c>] (do_one_initcall) from [<c1200d10>] (kernel_init_freeable+0x90/0x1e8)
  [<c1200d10>] (kernel_init_freeable) from [<c0bff690>] (kernel_init+0x8/0x114)
  [<c0bff690>] (kernel_init) from [<c0307ed0>] (ret_from_fork+0x14/0x24)

The crash is due to the fact that the UEFI page tables are not owned by init_mm, but are not covered by fully constructed struct pages.

Given that the UEFI subsystem is currently the only user of create_mapping_late(), add an unconditional call to pgtable_page_ctor() to late_alloc().

Fixes: 9fc68b71 ("ARM/efi: Apply strict permissions for UEFI Runtime Services regions")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
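A minimal sketch of the kind of change described here, assuming late_alloc() in arch/arm/mm/mmu.c hands out PTE pages via __get_free_pages() (illustrative, not the verbatim patch):

	static void *__init late_alloc(unsigned long sz)
	{
		void *ptr = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
						     get_order(sz));

		/*
		 * Construct the struct page covering this table so its
		 * per-page ptl (split page table lock) is initialized.
		 */
		if (!ptr || !pgtable_page_ctor(virt_to_page(ptr)))
			BUG();
		return ptr;
	}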
-
Committed by Nicolas Pitre
To limit the amount of mapped low memory, we determine a physical address boundary based on the start of the vmalloc area, using __pa(). Strictly speaking, the vmalloc area location is arbitrary and does not necessarily correspond to a valid physical address. For example, if

  PAGE_OFFSET = 0x80000000
  PHYS_OFFSET = 0x90000000
  vmalloc_min = 0xf0000000

then __pa(vmalloc_min) overflows and returns a wrapped 0 when phys_addr_t is a 32-bit type. The code that follows then determines that the entire physical memory is above that boundary and no low memory gets mapped at all:

  |[...]
  |Machine model: Freescale i.MX51 NA04 Board
  |Ignoring RAM at 0x90000000-0xb0000000 (!CONFIG_HIGHMEM)
  |Consider using a HIGHMEM enabled kernel.

To avoid this problem, let's make vmalloc_limit a 64-bit value at all times and determine that boundary explicitly without using __pa().

Reported-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Emil Renner Berthing <kernel@esmil.dk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
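As a hedged illustration of the wrap-around (a plain userspace program, not kernel code), doing the __pa()-style arithmetic in 32 bits loses the boundary, while 64-bit arithmetic keeps it:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint32_t page_offset = 0x80000000u;
		uint32_t phys_offset = 0x90000000u;
		uint32_t vmalloc_min = 0xf0000000u;

		/* 32-bit __pa() equivalent: 0xf0000000 - 0x80000000 + 0x90000000 wraps to 0 */
		uint32_t pa32 = vmalloc_min - page_offset + phys_offset;

		/* the same arithmetic in 64 bits keeps the real boundary, 0x100000000 */
		uint64_t pa64 = (uint64_t)vmalloc_min - page_offset + phys_offset;

		printf("32-bit: 0x%08x  64-bit: 0x%llx\n",
		       pa32, (unsigned long long)pa64);
		return 0;
	}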
-
- 18 March 2016, 1 commit
-
-
Committed by Kirill A. Shutemov
There are a few things about the *pte_alloc*() helpers worth cleaning up:

- The 'vma' argument is unused, let's drop it.

- Most __pte_alloc() callers do a speculative check for pmd_none() before taking the ptl: let's introduce a pte_alloc() macro which does the check. The only direct user of __pte_alloc() left is userfaultfd, which has a different expectation about atomicity wrt the pmd.

- pte_alloc_map() and pte_alloc_map_lock() are redefined using pte_alloc().

[sudeep.holla@arm.com: fix build for arm64 hugetlbpage]
[sfr@canb.auug.org.au: fix arch/arm/mm/mmu.c some more]
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
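A hedged sketch of what such a macro can look like (modelled on the generic mm headers of that era; treat it as illustrative rather than the exact hunk):

	#define pte_alloc(mm, pmd, address)				\
		(unlikely(pmd_none(*(pmd))) && __pte_alloc(mm, pmd, address))

	#define pte_alloc_map(mm, pmd, address)				\
		(pte_alloc(mm, pmd, address) ?				\
			NULL : pte_offset_map(pmd, address))

Callers that previously open-coded the pmd_none() check can then call pte_alloc() and rely on the macro to do the speculative test before taking the lock.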
-
- 11 February 2016, 1 commit
-
-
Committed by Chris Brandt
For an XIP build, _etext does not represent the end of the binary image that needs to stay mapped into the MODULES_VADDR area. Years ago, data came before text in the memory map. However, now that the order is text/init/data, an XIP_KERNEL needs to map up to the data location in order to keep from cutting off parts of the kernel that are needed. We only map up to the beginning of data because data has already been copied, so there's no reason to keep it around anymore.

A new symbol is created to make it clear what it is we are referring to.

This fixes the bug where you might lose the end of your kernel area after page table setup is complete.

Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 04 January 2016, 1 commit
-
-
Committed by Jungseung Lee
The VMSA field of MMFR0 (bottom 4 bits) is incremented for each added feature. PXN is supported if the value is >= 4, and LPAE is supported if it is >= 5.

In case a kernel with CONFIG_ARM_LPAE disabled is used on a processor that supports LPAE, we can still use PXN in short descriptors. So check for >= 4, not == 4.

Signed-off-by: Jungseung Lee <js07.lee@samsung.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
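A hedged sketch of what such a check can look like in build_mem_type_table() (identifiers follow the kernel of that period; placement and details are illustrative):

	#ifndef CONFIG_ARM_LPAE
		/*
		 * The VMSA field of ID_MMFR0 reports PXN support for any
		 * value >= 4, so enable the PXN table bit for user pmds.
		 */
		if (cpu_arch == CPU_ARCH_ARMv7 &&
		    (read_cpuid_ext(CPUID_EXT_MMFR0) & 0xf) >= 4)
			user_pmd_table |= PMD_PXNTABLE;
	#endif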
-
- 14 December 2015, 6 commits
-
-
Committed by Ard Biesheuvel
Take the new memblock attribute MEMBLOCK_NOMAP into account when deciding whether a certain region is or should be covered by the kernel direct mapping.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
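A minimal sketch of how NOMAP regions can be skipped when the lowmem mapping is built (using the for_each_memblock() iterator that map_lowmem() relied on at the time; illustrative):

	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		/* regions marked MEMBLOCK_NOMAP stay out of the direct mapping */
		if (memblock_is_nomap(reg))
			continue;

		/* ... create mappings for [reg->base, reg->base + reg->size) ... */
	}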
-
Committed by Ard Biesheuvel
This implements create_mapping_late(), which we will use to populate the UEFI Runtime Services page tables.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel
Add support to the kernel translation table population routines for creating non-global mappings. This will be used by the UEFI runtime services, which will use temporary mappings in the userland range.

Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel
To allow __create_mapping() to be used for populating UEFI Runtime Services page tables, factor out the allocation routine 'early_alloc' and pass it down as a function pointer into alloc_init_[pud|pmd|pte]. This way, new users of __create_mapping() can supply another allocation function.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
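A hedged sketch of the shape this refactoring takes (signatures and call sites are indicative only, not the exact patch):

	static void __init alloc_init_pte(pmd_t *pmd, unsigned long addr,
					  unsigned long end, unsigned long pfn,
					  const struct mem_type *type,
					  void *(*alloc)(unsigned long sz))
	{
		/* allocate the PTE page through the supplied allocator, then fill it */
	}

	/* boot-time callers keep passing the memblock-backed early_alloc ... */
	__create_mapping(&init_mm, md, early_alloc);

	/* ... while later users (e.g. UEFI runtime mappings) can pass their own */
	__create_mapping(mm, md, late_alloc);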
-
Committed by Ard Biesheuvel
In order to be able to reuse the core mapping logic of create_mapping() for mapping the UEFI Runtime Services into a private set of page tables, split it off from create_mapping() into a separate function __create_mapping(), which we will wire up in a subsequent patch.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
Committed by Ard Biesheuvel
This enables the generic early_ioremap implementation for ARM. It uses the fixmap region reserved for kmap. Since early_ioremap is only supported before paging_init(), and kmap is only supported afterwards, this is guaranteed not to cause any clashes.

Tested-by: Ryan Harkin <ryan.harkin@linaro.org>
Reviewed-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
-
- 02 December 2015, 1 commit
-
-
Committed by Arnd Bergmann
In a multiplatform configuration, we may end up building a kernel for both Marvell PJ1 and an ARMv4 CPU implementation. In that case, the xscale-cp0 code is built with gcc -march=armv4{,t}, which results in a build error from the coprocessor instructions.

Since we know this code will only have to run on an actual xscale processor, we can simply build the entire file for ARMv5TE.

Related to this, we need to handle the iWMMXT initialization sequence differently during boot, to ensure we don't try to touch xscale specific registers on other CPUs from the xscale_cp0_init initcall.

cpu_is_xscale() used to be hardcoded to '1' in any configuration that enables any XScale-compatible core, but this breaks once we can have a combined kernel with MMP1 and something else.

In this patch, I replace the existing cpu_is_xscale() macro with a new cpu_is_xscale_family() macro that evaluates true for xscale, xsc3 and mohawk, which makes the behavior more deterministic.

The two existing users of cpu_is_xscale() are modified accordingly, but slightly change behavior for kernels that enable CPU_MOHAWK without also enabling CPU_XSCALE or CPU_XSC3. Previously, these would leave PMD_BIT4 in the page tables untouched; now they clear it, as we've always done for kernels that enable both MOHAWK and the support for the older CPU types. Since the previous behavior was inconsistent, I assume it was unintentional.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
-
- 20 October 2015, 1 commit
-
-
Committed by Lucas Stach
Install a non-faulting handler just before unmasking imprecise aborts and switch back to the regular one after unmasking is done.

This catches any pending imprecise abort that the firmware/bootloader may have left behind that would normally crash the kernel at that point. As there are apparently a lot of bootloaders out there that do such a thing, it makes sense to handle it in the common startup code.

Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Tyler Baker <tyler.baker@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 September 2015, 1 commit
-
-
Committed by Lucas Stach
This patch adds imprecise abort enable/disable macros and uses them to enable imprecise aborts early when starting the kernel. This helps in tracking down the real cause of such imprecise aborts, as they are handled as soon as they occur. Until now, those aborts would only be enabled when entering userspace, and as a consequence would crash the first userspace process if any abort had been raised during kernel startup.

Signed-off-by: Fabrice Gasnier <fabrice.gasnier@st.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
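A hedged sketch of what such enable/disable macros can look like on ARMv6+ (the CPS instruction's 'a' flag controls the asynchronous/imprecise abort mask; the names mirror the kernel's irqflags-style helpers and are illustrative):

	#if __LINUX_ARM_ARCH__ >= 6
	#define local_abt_enable()	__asm__("cpsie a" : : : "memory", "cc")
	#define local_abt_disable()	__asm__("cpsid a" : : : "memory", "cc")
	#else
	#define local_abt_enable()	do { } while (0)
	#define local_abt_disable()	do { } while (0)
	#endif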
-
- 21 August 2015, 1 commit
-
-
Committed by Russell King
Keep the machine vectors in their own domain to avoid software-based user access control from making the vector code inaccessible, and thereby deadlocking the machine.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 August 2015, 1 commit
-
-
Committed by Stefan Agner
Add early fixmap support, initially to support permanent, fixed mapping support for early console. A temporary, early pte is created which is migrated to a permanent mapping in paging_init. This is also needed since the attributes may change as the memory types are initialized.

The 3MiB range of fixmap spans two pte tables, but currently only one pte is created for early fixmap support.

Re-add FIX_KMAP_BEGIN to the index calculation in highmem.c since the index for kmap does not start at zero anymore. This reverts 4221e2e6 ("ARM: 8031/1: fixmap: remove FIX_KMAP_BEGIN and FIX_KMAP_END") to some extent.

Cc: Mark Salter <msalter@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Rob Herring <robh@kernel.org>
Signed-off-by: Stefan Agner <stefan@agner.ch>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 June 2015, 2 commits
-
-
Committed by Russell King
Add a message to suggest enabling HIGHMEM when physical memory is truncated due to lack of virtual address space to map it in the low memory mapping.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Laura Abbott
The memblock limit is currently used in find_limits to find the bounds for ZONE_NORMAL. The memblock limit may need to be rounded down by a PMD size to ensure allocations are fully mapped, though. This has the side effect of reducing the amount of memory in ZONE_NORMAL. Once all lowmem is mapped, it's safe to change the memblock limit back to include the unaligned section. Adjust the memblock limit after the lowmem mapping is complete.

Before:
  # cat /proc/zoneinfo | grep managed
  managed  62907
  managed  424

After:
  # cat /proc/zoneinfo | grep managed
  managed  63331

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
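A minimal sketch of the idea, assuming the arm_lowmem_limit boundary that arch/arm/mm/mmu.c tracks (the placement is illustrative):

	static void __init map_lowmem(void)
	{
		/* ... create all of the lowmem mappings ... */

		/*
		 * Everything below arm_lowmem_limit is now mapped, so the
		 * memblock allocation limit can be restored to the full,
		 * unrounded lowmem boundary.
		 */
		memblock_set_current_limit(arm_lowmem_limit);
	}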
-
- 02 June 2015, 5 commits
-
-
Committed by Russell King
Eliminate the needless nommu version of this function, and get rid of the proc_info_list structure argument - we no longer need this in order to fix up the page table entries.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Re-implement the physical address space switching to be architecturally compliant. This involves flushing the caches, disabling the MMU, and only then updating the page tables. Once that is complete, the system can be brought back up again.

Since we disable the MMU, we need to do the update in assembly code. Luckily, the entries which need updating are fairly trivial, and are all set up by the early assembly code. We can merely adjust each entry by the delta required.

Not only does this fix the code to be architecturally compliant, but it fixes a couple of bugs too:

1. The original code would only ever update the first L2 entry covering a fraction of the kernel; the remainder were left untouched.
2. The L2 entries covering the DTB blob were likewise untouched.

This solution fixes up all entries.

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
The init_meminfo() method is not about initialising meminfo - it's about fixing up the physical to virtual translation so that we use a different physical address space, possibly above the 4GB physical address space. Therefore, the name "init_meminfo()" is confusing.

Rename it to pv_fixup() instead.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
There is no point in platform code doing this; let's move it into the generic code so it doesn't get duplicated.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Make the init_meminfo function return the offset to be applied to the phys-to-virt translation constants. This allows us to move the update into generic code, along with the requirements for this update.

This avoids platforms having to know the details of the phys-to-virt translation support.

Acked-by: Santosh Shilimkar <ssantosh@kernel.org>
Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 May 2015, 1 commit
-
-
Committed by Mark Rutland
At boot time we round the memblock limit down to section size in an attempt to ensure that we will have mapped this RAM with section mappings prior to allocating from it. When mapping RAM we iterate over PMD-sized chunks, creating these section mappings.

Section mappings are only created when the end of a chunk is aligned to section size. Unfortunately, with classic page tables (where PMD_SIZE is 2 * SECTION_SIZE) this means that if a chunk is between 1M and 2M in size the first 1M will not be mapped despite having been accounted for in the memblock limit. This has been observed to result in page tables being allocated from unmapped memory, causing boot-time hangs.

This patch modifies the memblock limit rounding to always round down to PMD_SIZE instead of SECTION_SIZE. For classic MMU this means that we will round the memblock limit down to a 2M boundary, matching the limits on section mappings, and preventing allocations from unmapped memory. For LPAE there should be no change as PMD_SIZE == SECTION_SIZE.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Stefan Agner <stefan@agner.ch>
Tested-by: Stefan Agner <stefan@agner.ch>
Acked-by: Laura Abbott <labbott@redhat.com>
Tested-by: Hans de Goede <hdegoede@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
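A hedged sketch of the adjusted rounding, as it might sit where the memblock limit is finally applied (illustrative):

	/*
	 * Round down to PMD_SIZE (2 MiB with classic page tables) rather than
	 * SECTION_SIZE (1 MiB), so the limit never includes a chunk that is
	 * only partially covered by section mappings.
	 */
	memblock_limit = round_down(memblock_limit, PMD_SIZE);
	memblock_set_current_limit(memblock_limit);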
-
- 08 January 2015, 1 commit
-
-
Committed by Grygorii Strashko
The local variables kernel_x_start and kernel_x_end are currently defined using the 'unsigned long' type, which is wrong because they represent a physical memory range and will be calculated incorrectly if LPAE is enabled. As a result, all of the following code in map_lowmem() will not work correctly.

For example, Keystone 2 boot is broken because

  kernel_x_start == 0x0000 0000
  kernel_x_end   == 0x0080 0000

instead of

  kernel_x_start == 0x0000 0008 0000 0000
  kernel_x_end   == 0x0000 0008 0080 0000

and as a result the whole of low memory will be mapped with MT_MEMORY_RW permissions by this code (start > kernel_x_end):

	} else if (start >= kernel_x_end) {
		map.pfn = __phys_to_pfn(start);
		map.virtual = __phys_to_virt(start);
		map.length = end - start;
		map.type = MT_MEMORY_RW;

		create_mapping(&map);
	}

Hence, fix it by using the phys_addr_t type for the variables kernel_x_start and kernel_x_end.

Tested-by: Murali Karicheri <m-karicheri2@ti.com>
Signed-off-by: Grygorii Strashko <grygorii.strashko@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
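A hedged sketch of the corrected declarations in map_lowmem() (the rounding helpers follow the surrounding code; treat the exact expressions as illustrative):

	phys_addr_t kernel_x_start = round_down(__pa(_stext), SECTION_SIZE);
	phys_addr_t kernel_x_end = round_up(__pa(__init_end), SECTION_SIZE);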
-
- 04 December 2014, 1 commit
-
-
Committed by Jungseung Lee
The set_memory_*() functions have the same implementation except for the memory attribute they change. This patch makes them use a common helper and pulls the functions out into arch/arm/mm/pageattr.c, as arm64 did. This reduces code size and improves readability.

Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
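A minimal sketch of that pattern (modelled on the arm64 pageattr.c approach the commit points to; struct page_change_data and change_page_range are assumed helpers, and the masks are indicative):

	static int change_memory_common(unsigned long addr, int numpages,
					pgprot_t set_mask, pgprot_t clear_mask)
	{
		unsigned long start = addr & PAGE_MASK;
		unsigned long size = PAGE_SIZE * numpages;
		struct page_change_data data = {
			.set_mask   = set_mask,
			.clear_mask = clear_mask,
		};
		int ret;

		/* walk the range once; the callback applies set/clear masks per pte */
		ret = apply_to_page_range(&init_mm, start, size,
					  change_page_range, &data);
		flush_tlb_kernel_range(start, start + size);
		return ret;
	}

	int set_memory_ro(unsigned long addr, int numpages)
	{
		return change_memory_common(addr, numpages,
					    __pgprot(L_PTE_RDONLY), __pgprot(0));
	}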
-
- 03 December 2014, 1 commit
-
-
Committed by Jungseung Lee
Modern ARMv7-A/R cores optionally implement the following new hardware feature:

- PXN: Privileged execute-never (PXN) is a security feature. The PXN bit determines whether the processor can execute software from the region. This is an effective mitigation against ret2usr attacks.

On an implementation that does not include the LPAE, PXN is optionally supported.

This patch sets the PXN bit in user page tables to prevent user code from being executed in privileged mode.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Jungseung Lee <js07.lee@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 21 November 2014, 1 commit
-
-
Committed by Russell King
Convert many (but not all) printk(KERN_* to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 October 2014, 3 commits
-
-
Committed by Kees Cook
Adds CONFIG_ARM_KERNMEM_PERMS to separate the kernel memory regions into section-sized areas that can have different permissions. Performs the NX permission changes during free_initmem, so that init memory can be reclaimed.

This uses section size instead of PMD size to reduce memory lost to padding on non-LPAE systems.

Based on work by Brad Spengler, Larry Bassel, and Laura Abbott.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
-
Committed by Kees Cook
This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. Also makes sure that the fixmap allocation fits into the expected range.

Based on patch by Rabin Vincent.

Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Rabin Vincent <rabin@rab.in>
Acked-by: Nicolas Pitre <nico@linaro.org>
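A hedged sketch of such a helper for ARM (closely following the mmu.c style of that era; details such as the exact range checks are illustrative):

	void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
	{
		unsigned long vaddr = __fix_to_virt(idx);
		pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

		/* keep the index inside the reserved fixmap window */
		BUG_ON(idx >= __end_of_fixed_addresses);

		if (pgprot_val(prot))
			set_pte_at(NULL, vaddr, pte,
				   pfn_pte(phys >> PAGE_SHIFT, prot));
		else
			pte_clear(NULL, vaddr, pte);
		local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
	}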
-
Committed by Rob Herring
With commit a05e54c1 ("ARM: 8031/2: change fixmap mapping region to support 32 CPUs"), the fixmap region was expanded to 2MB, but it precluded any other uses of the fixmap region. In order to support other uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the adjacent 1MB range 0xffe00000-0xfff00000 is available.

Remove the fixmap_page_table pointer and look up the page table via the virtual address so that the fixmap region can span more than one pmd. The 2nd pmd is already created since it is shared with the vector page.

Signed-off-by: Rob Herring <robh@kernel.org>
[kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls]
[kees: moved pte allocation outside of CONFIG_HIGHMEM]
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
-
- 26 September 2014, 1 commit
-
-
Committed by Joe Perches
Use the more common pr_warn.

Other miscellanea:

o Coalesce formats
o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 August 2014, 1 commit
-
-
Committed by Russell King
Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, but more importantly to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory.

Add a warning and tainting of the kernel too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 July 2014, 1 commit
-
-
Committed by Russell King
If init_mm.brk is not section aligned, the LPAE fixup code will miss updating the final PMD. Fix this by aligning map_end.

Fixes: a77e0c7b ("ARM: mm: Recreate kernel mappings in early_paging_init()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 June 2014, 5 commits
-
-
Committed by Russell King
This does the same as the previous commit, but for the S bit, which also needs to match the initial value which the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Fix a long standing bug where, for ARMv6+, we don't fully ensure that the C code sets the same cache policy as the assembly code. This was introduced partially by commit 11179d8c ([ARM] 4497/1: Only allow safe cache configurations on ARMv6 and later) and also by adding SMP support.

This patch sets the default cache policy based on the flags used by the assembly code, and then ensures that when a cache policy command line argument is used, we verify that on ARMv6, it matches the initial setup.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
adjust_cr() is not used anymore, so let's get rid of it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Keep all bits of alignment handling together.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Several places open-code this manipulation; let's consolidate this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 June 2014, 1 commit
-
-
Committed by Laura Abbott
memblock is now fully integrated into the kernel and is the preferred method for tracking memory. Rather than reinvent the wheel with meminfo, migrate to using memblock directly instead of meminfo as an intermediate.

Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
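A minimal sketch of the iteration style this migration moves to (the for_each_memblock() form used across arch/arm/mm at the time; placement is illustrative):

	struct memblock_region *reg;

	for_each_memblock(memory, reg) {
		phys_addr_t start = reg->base;
		phys_addr_t end = start + reg->size;

		/* previously: walk meminfo.bank[]; now derive ranges from memblock */
		/* ... set up lowmem mappings / zone limits for [start, end) ... */
	}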
-