- 21 Nov 2014, 3 commits
-
-
Committed by Russell King
Convert many (but not all) printk(KERN_* to pr_* to simplify the code. We take the opportunity to join some printk lines together so we don't split the message across several lines, and we also add a few levels to some messages which were previously missing them.

Tested-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
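As an illustration of the pattern (a hypothetical before/after, not a hunk from this patch):

    /* before: message split across two printk() calls */
    printk(KERN_ERR "CPU%u: unable to boot secondary core\n", cpu);
    printk(KERN_ERR "check power and clock configuration\n");

    /* after: one pr_err() per complete message, format kept on a single line */
    pr_err("CPU%u: unable to boot secondary core\n", cpu);
    pr_err("check power and clock configuration\n");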
-
Committed by Will Deacon
Rather than unconditionally allocating a fresh ASID to an mm from an older generation, attempt to re-use the old assignment where possible. This can bring performance benefits on systems where the ASID is used to tag things other than the TLB (e.g. branch prediction resources).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Stephen Boyd

Certain versions of the Krait processor don't report that they support the fused multiply accumulate instruction via the MVFR1 register despite the fact that they actually do. Unfortunately we use this register to identify support for VFPv4. Override the hwcap on all Krait processors to indicate support for VFPv4 to work around this.

Tested-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
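The gist, as a minimal sketch (cpu_is_krait() is a hypothetical stand-in for however the real code identifies the core):

    #include <asm/hwcap.h>

    static void __init krait_vfp_fixup(void)
    {
            /* MVFR1 under-reports on Krait: the core does implement the
             * VFPv4 fused multiply-accumulate instructions, so set the
             * hwcap by hand instead of trusting the register. */
            if (cpu_is_krait())     /* hypothetical identification helper */
                    elf_hwcap |= HWCAP_VFPv4;
    }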
-
- 10 Oct 2014, 4 commits
-
-
Committed by Steve Capper

Activate the RCU fast_gup for ARM. We also need to force THP splits to broadcast an IPI such that we block in the fast_gup page walker. As THP splits are comparatively rare, this should not lead to a noticeable performance degradation. Some pre-requisite functions pud_write and pud_page are also added.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dann Frazier <dann.frazier@canonical.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Marek Szyprowski

DMA-mapping supports CMA regions placed either in low or high memory, so there is no longer any need to limit default CMA regions to low memory only. The real limit is still defined by the architecture-specific DMA limit.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reported-by: Russell King - ARM Linux <linux@arm.linux.org.uk>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Cc: Daniel Drake <drake@endlessm.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Laura Abbott

ARM currently uses a bitmap for tracking atomic allocations. genalloc already handles this type of memory pool allocation, so switch to using that instead.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Riley <davidriley@chromium.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
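A rough sketch of the genalloc pattern that replaces the hand-rolled bitmap (function and variable names are illustrative; the genalloc calls are the real API):

    #include <linux/genalloc.h>

    static struct gen_pool *atomic_pool;

    /* vaddr/phys/pool_size stand for the pre-allocated coherent region */
    static int __init atomic_pool_setup(void *vaddr, phys_addr_t phys,
                                        size_t pool_size)
    {
            atomic_pool = gen_pool_create(PAGE_SHIFT, -1);  /* page-sized chunks */
            if (!atomic_pool)
                    return -ENOMEM;
            /* hand the region over to genalloc for bookkeeping */
            return gen_pool_add_virt(atomic_pool, (unsigned long)vaddr,
                                     phys, pool_size, -1);
    }

    /* an atomic-context allocation then reduces to a single call */
    static void *atomic_pool_alloc(size_t size)
    {
            return (void *)gen_pool_alloc(atomic_pool, size);
    }

genalloc keeps the allocation bitmap internally, so the driver no longer needs its own bitmap bookkeeping for the pool.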
-
Committed by Laura Abbott

For architectures without coherent DMA, memory for DMA may need to be remapped with coherent attributes. Factor out the remapping code from arm and put it in a common location to reduce code duplication. As part of this, the arm APIs are now migrated away from ioremap_page_range to the common APIs which use map_vm_area for remapping. This should be an equivalent change, and using map_vm_area is more correct, as ioremap_page_range is intended to bring io addresses into the cpu space and not regular kernel managed memory.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: David Riley <davidriley@chromium.org>
Cc: Olof Johansson <olof@lixom.net>
Cc: Ritesh Harjain <ritesh.harjani@gmail.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Laura Abbott <lauraa@codeaurora.org>
Cc: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 Oct 2014, 2 commits
-
-
Committed by Yalin Wang

This patch extends the start and end addresses of the initrd to be page aligned, so that we can free all of its memory, including the un-page-aligned head or tail pages. If the start or end address of the initrd is not page aligned, those pages can't be freed by the free_initrd_mem() function.

Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
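In effect (a sketch of the alignment step, not the literal hunk):

    /* round outward to page boundaries so free_initrd_mem() can also
     * release the partially-used head and tail pages of the initrd */
    initrd_start = round_down(initrd_start, PAGE_SIZE);
    initrd_end = round_up(initrd_end, PAGE_SIZE);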
-
Committed by Linus Walleij

When both 'cache-size' and 'cache-sets' are specified for a L2 cache controller node, parse those properties and set up the set size based on which type of L2 cache controller we are using. Update the L2 cache controller Device Tree binding with the optional 'cache-size', 'cache-sets', 'cache-block-size' and 'cache-line-size' properties. These come from the ePAPR specification. Using the cache size, number of sets and cache line size we can calculate the desired associativity of the L2 cache. This is done by the calculation:

  set size = cache size / sets
  ways = set size / line size
  way size = cache size / ways = sets * line size
  associativity = cache size / way size

For example, a PB1176 DT that looks like this:

  L2: l2-cache {
          compatible = "arm,l220-cache";
          (...)
          arm,override-auxreg;
          cache-size = <131072>; // 128kB
          cache-sets = <512>;
          cache-line-size = <32>;
  };

ends up producing this output:

  L2C OF: override cache size: 131072 bytes (128KB)
  L2C OF: override line size: 32 bytes
  L2C OF: override way size: 16384 bytes (16KB)
  L2C OF: override associativity: 8
  L2C: DT/platform modifies aux control register: 0x02020fff -> 0x02030fff
  L2C-220 cache controller enabled, 8 ways, 128 kB
  L2C-220: CACHE_ID 0x41000486, AUX_CTRL 0x06030fff

which is consistent with the value earlier hardcoded for the PB1176 platform. This patch is an extended version based on the initial patch by Florian Fainelli.

Reviewed-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 26 Sep 2014, 1 commit
-
-
Committed by Joe Perches
Use the more common pr_warn. Other miscellanea:

  o Coalesce formats
  o Realign arguments

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
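"Coalesce formats" means joining format strings that had been split across source lines, e.g. (illustrative, not a hunk from this patch):

    /* before: split format string, old-style warning */
    printk(KERN_WARNING "CPU%u: smp_ops.cpu_die() returned, "
           "trying to resuscitate\n", cpu);

    /* after: one pr_warn() with the format coalesced onto one line */
    pr_warn("CPU%u: smp_ops.cpu_die() returned, trying to resuscitate\n", cpu);

Keeping each format string whole also makes the messages greppable.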
-
- 25 Sep 2014, 2 commits
-
-
Committed by Robin Murphy
The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn instructions (where the optional alignment hint is given but incorrect) as LDR/STR, leading to register corruption. Detect these and correctly treat them as unhandled, so that userspace gets the fault it expects.

Reported-by: Simon Hosie <simon.hosie@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Will Deacon
SCTLR.HA (hardware access flag) is deprecated and not actually implemented by any CPUs. Furthermore, it can confuse cr_alignment checks where the whole value of SCTLR is compared against the value sitting in the hardware, since the bit is actually RAZ/WI and will not match the saved cr_alignment value.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 Sep 2014, 1 commit
-
-
The ifdef around the cpu_\name\()_do_suspend and cpu_\name\()_do_resume ops in proc-macros.S should check for CONFIG_ARM_CPU_SUSPEND, not CONFIG_PM_SLEEP. Fix it.

[ Please note that the cpu_v7_do_[suspend,resume] code in proc-v7.S already correctly checks for CONFIG_ARM_CPU_SUSPEND; the same is true for the corresponding functions for other architectures. ]

This fix is needed for decoupling suspend/resume and advanced cpuidle support on the Exynos platform (the next patch fixes the build for configs with CONFIG_PM_SLEEP=n and CONFIG_ARM_EXYNOS_CPUIDLE=y). If this fix is not present, then the following OOPS happens on the first attempt to go into the advanced cpuidle mode (AFTR):

[ 22.244143] Unable to handle kernel NULL pointer dereference at virtual address 00000000
[ 22.250759] pgd = c0004000
[ 22.253445] [00000000] *pgd=00000000
[ 22.257012] Internal error: Oops: 80000007 [#1] PREEMPT SMP ARM
[ 22.262906] Modules linked in:
[ 22.265949] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.16.0-next-20140811-dirty #730
[ 22.273757] task: c05dce68 ti: c05d2000 task.ti: c05d2000
[ 22.279139] PC is at 0x0
[ 22.281661] LR is at __cpu_suspend_save+0x4c/0xa8
[ 22.286344] pc : [<00000000>] lr : [<c00125e0>] psr: a0000093
[ 22.286344] sp : c05d3ef4 ip : c05da414 fp : 00000001
[ 22.297799] r10: c05da414 r9 : c0609cb0 r8 : 0000000f
[ 22.303008] r7 : c05da444 r6 : 00000038 r5 : ea802c00 r4 : c05d3f14
[ 22.309517] r3 : 00000000 r2 : c05d3f4c r1 : 00000038 r0 : c05d3f20
[ 22.316029] Flags: NzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment kernel
[ 22.323406] Control: 10c5387d Table: 69d5404a DAC: 00000015
[ 22.329135] Process swapper/0 (pid: 0, stack limit = 0xc05d2240)
[ 22.335124] Stack: (0xc05d3ef4 to 0xc05d4000)
[ 22.339466] 3ee0: ea802c00 00000038 c05d3f4c
[ 22.347626] 3f00: 00000000 00000007 c00123bc 00000000 c001d468 6a888000 c05d3f4c 80000000
[ 22.355785] 3f20: 00000007 c003d3a0 0000193d eaf9dde4 eaf9dde4 c02ef0c8 c000969c fffffffe
[ 22.363944] 3f40: 00000000 c0037b54 eaf9dbb8 e9d1a380 00000000 c001d468 c0609cb0 00000000
[ 22.372103] 3f60: c0609cb0 c061649e 00000001 c001250c eaf9dbb8 00000001 c0609cb0 c001d618
[ 22.380262] 3f80: c001d5d0 c02ef56c 2d9d2e1e 00000005 eaf9dbb8 c02edcc4 2d9d2e1e 00000005
[ 22.388421] 3fa0: c040446c c05da4ec c040446c eaf9dbb8 c05cfbb0 c004c580 c05dce68 c05b3ae8
[ 22.396580] 3fc0: 00000000 c058bb24 ffffffff ffffffff c058b5e4 00000000 00000000 c05b3ae8
[ 22.404740] 3fe0: c0616994 c05da47c c05b3ae4 c05ddeec 4000406a 40008074 00000000 00000000
[ 22.412909] [<c00125e0>] (__cpu_suspend_save) from [<c00123bc>] (__cpu_suspend+0x5c/0x70)
[ 22.421074] [<c00123bc>] (__cpu_suspend) from [<c05d3f4c>] (init_thread_union+0x1f4c/0x2000)
[ 22.429479] Code: bad PC value
[ 22.432518] ---[ end trace fb90ebf4217d0ad9 ]---
[ 22.437116] Kernel panic - not syncing: Attempted to kill the idle task!
[ 22.443800] Rebooting in 5 seconds..

This patch has been tested on an Exynos4210-based Origen board.

Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
-
- 13 Sep 2014, 1 commit
-
-
Committed by Brian Norris

The Brahma-B15's ISAR0 correctly advertises UDIV/SDIV support in both ARM and Thumb2 modes (CPUID_EXT_ISAR0=02101110), so we don't need to manually apply this hwcap. The code in question actually predates the following commit, which made our hwcaps unnecessary:

  commit 8164f7af
  Author: Stephen Boyd <sboyd@codeaurora.org>
  Date:   Mon Mar 18 19:44:15 2013 +0100

      ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 register

Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 Sep 2014, 1 commit
-
-
Committed by Konstantin Khlebnikov

ARM: LPAE: drop wrong carry flag correction after adding TTBR1_OFFSET

In commit 7fb00c2f ("ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1"), the part which fixes the carry when adding TTBR1_OFFSET to TTBR1 was wrong:

        addls   ttbr1, ttbr1, #TTBR1_OFFSET
        adcls   tmp, tmp, #0

addls doesn't update the flags, so adcls adds the carry left over from the cmp above:

        cmp     ttbr1, tmp                      @ PHYS_OFFSET > PAGE_OFFSET?

The condition 'ls' means the carry flag is clear or the zero flag is set, thus only one case is affected: when PHYS_OFFSET == PAGE_OFFSET. It seems safer to remove this fixup. The bug has been here for ages and nobody complained. Let's fix it separately.

Reported-and-Tested-by: Jassi Brar <jassisinghbrar@gmail.com>
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 Aug 2014, 1 commit
-
-
Committed by Mark Rutland
The ARMv6 and ARMv7 early abort handlers clear the exclusive monitors upon entry to the kernel, but this is redundant:

- We clear the monitors on every exception return since commit 200b812d ("Clear the exclusive monitor when returning from an exception"), so this is not necessary to ensure the monitors are cleared before returning from a fault handler.

- Any dummy STREX will target a temporary scratch area in memory, and may succeed or fail without corrupting useful data. Its status value will not be used.

- Any other STREX in the kernel must be preceded by an LDREX, which will initialise the monitors consistently and will not depend on the earlier state of the monitors.

Therefore we have no reason to care about the initial state of the exclusive monitors when a data abort is taken, and clearing the monitors prior to exception return (as we already do) is sufficient. This patch removes the redundant clearing of the exclusive monitors from the early abort handlers.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Aug 2014, 1 commit
-
-
Committed by Konstantin Khlebnikov

This patch fixes booting when the idmap pgd lies above 4gb. Commit 4756dcbf had mostly fixed this, but it failed to load the upper bits. This also fixes adding TTBR1_OFFSET to TTBR1: if the lower part overflows, the carry flag must be added to the upper part.

Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Aug 2014, 1 commit
-
-
Committed by Joonsoo Kim

Currently, there are two users of the CMA functionality: the DMA subsystem and KVM on powerpc. They each have their own code to manage the CMA reserved area, even though the two look really similar. My guess is that this is caused by differing needs in bitmap management: the KVM side wants to maintain its bitmap not per page, but at a coarser granularity; eventually it uses a bitmap where one bit represents 64 pages.

When I implemented CMA-related patches, I had to change both of those places to apply my changes, and that proved painful. I want to change this situation and reduce the future code management overhead through this patch. This change could also help developers who want to use CMA in their new feature development, since they can use CMA easily without copying and pasting this reserved area management code.

In previous patches, we have prepared some features to generalize CMA reserved area management, and now it's time to do it. This patch moves the core functions to mm/cma.c and changes the DMA APIs to use these functions. There is no functional change in the DMA APIs.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Gleb Natapov <gleb@kernel.org>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
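After the move, both users go through the same small interface in mm/cma.c; roughly (a usage sketch; the base/size/limit parameters stand for the arch-specific reservation):

    #include <linux/cma.h>

    static struct cma *cma_area;

    static int __init reserve_my_cma(phys_addr_t base, phys_addr_t size,
                                     phys_addr_t limit)
    {
            /* alignment 0 = default, order_per_bit 0, base not fixed */
            return cma_declare_contiguous(base, size, limit, 0, 0, false,
                                          &cma_area);
    }

    /* later: struct page *p = cma_alloc(cma_area, count, align);
     *        cma_release(cma_area, p, count);                     */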
-
- 02 Aug 2014, 2 commits
-
-
Committed by Russell King
Add a note about the usage of the identity mapping; we do not support accesses outside of the identity map region and kernel image while a CPU is using the identity map. This is because the identity mapping may overwrite vmalloc space, IO mappings, the vectors pages, etc.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, but more importantly to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and tainting of the kernel too.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jul 2014, 2 commits
-
-
Committed by Konstantin Khlebnikov
On LPAE, each level 1 (pgd) page table entry maps 1GiB, and the level 2 (pmd) entries map 2MiB. When the identity mapping is created on LPAE, the pgd pointers are copied from the swapper_pg_dir. If we find that we need to modify the contents of a pmd, we allocate a new empty pmd table and insert it into the appropriate 1GB slot, before then filling it with the identity mapping. However, if the 1GB slot covers the kernel lowmem mappings, we obliterate those mappings. When replacing a PMD, first copy the old PMD contents to the new PMD, so that we preserve the existing mappings, particularly the mappings of the kernel itself.

[rewrote commit message and added code comment -- rmk]

Fixes: ae2de101 ("ARM: LPAE: Add identity mapping support for the 3-level page table format")
Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
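The fix amounts to copying the live entries before repointing the slot, roughly (a sketch; the wrapper name is made up, but copying PTRS_PER_PMD entries is the essence of the change):

    static void idmap_replace_pmd(pud_t *pud, unsigned long addr)
    {
            pmd_t *new_pmd = pmd_alloc_one(&init_mm, addr); /* fresh, empty table */

            if (!new_pmd)
                    return;
            /* preserve whatever the old 1GiB slot already mapped (e.g. the
             * kernel lowmem mapping) before installing the new table */
            memcpy(new_pmd, pmd_offset(pud, 0), PTRS_PER_PMD * sizeof(pmd_t));
            pud_populate(&init_mm, pud, new_pmd);
    }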
-
Committed by Russell King
If init_mm.brk is not section aligned, the LPAE fixup code will miss updating the final PMD. Fix this by aligning map_end.

Fixes: a77e0c7b ("ARM: mm: Recreate kernel mappings in early_paging_init()")
Cc: <stable@vger.kernel.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
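In other words, something like (an illustrative one-liner; on LPAE a "section" corresponds to a 2MiB PMD entry):

    /* round up so the fixup loop also rewrites the PMD covering a
     * non-section-aligned init_mm.brk */
    unsigned long map_end = ALIGN(init_mm.brk, SECTION_SIZE);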
-
- 24 Jul 2014, 2 commits
-
-
Committed by Marc Carino
Perform any CPU-specific initialization required on the Broadcom Brahma-15 core.

Signed-off-by: Marc Carino <marc.ceeeee@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Steven Capper
For LPAE, we have the following means for encoding writable or dirty ptes:

                              L_PTE_DIRTY  L_PTE_RDONLY
  !pte_dirty && !pte_write         0            1
  !pte_dirty &&  pte_write         0            1
   pte_dirty && !pte_write         1            1
   pte_dirty &&  pte_write         1            0

So we can't distinguish between writeable clean ptes and read only ptes. This can cause problems with ptes being incorrectly flagged as read only when they are writeable but not dirty. This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and adds additional logic to set AP[2] whenever the pte is read only or not dirty. That way we can distinguish between clean writeable ptes and read only ptes. HugeTLB pages will use this new logic automatically. We need to add some logic to Transparent HugePages to ensure that they correctly interpret the revised pgprot permissions (L_PTE_RDONLY has moved and no longer matches PMD_SECT_AP2). In the process of revising THP, the names of the PMD software bits have been prefixed with L_ to make them easier to distinguish from their hardware bit counterparts.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 18 Jul 2014, 8 commits
-
-
Committed by Russell King
SWP is deprecated in ARMv6 and ARMv7 CPUs, but more importantly, when running on a SMP system, SWP doesn't guarantee atomicity. This means it can't really be used (by userspace) for locking purposes in a SMP environment. Currently, many configurations leave the SWP emulation disabled, which means we never know if userspace executes this instruction on ARMv7 hardware. Rectify this by enabling SWP emulation for ARMv7 with SMP (where we can trap the instruction.)

Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Shawn Guo

The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it needs to be saved/restored on suspend/resume. Otherwise, the effectiveness of the errata workarounds is lost, together with the diagnostic register bits, across the suspend/resume cycle. The CP15 power control register of Cortex-A9 shares the same problem. The patch adds a couple of Cortex-A9 specific suspend/resume functions to save/restore these two Cortex-A9 CP15 registers across the suspend/resume cycle.

Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Shawn Guo
Add revision info for PL310_ERRATA_588369 and PL310_ERRATA_727915 to help people understand if they need to enable the errata for their hardware.

Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Shawn Guo

Since the pj4b suspend/resume routines are implemented on top of the generic ARMv7 ones, instead of hard-coding cpu_pj4b_suspend_size we should define it as cpu_v7_suspend_size plus the pj4b-specific bytes. Otherwise, if cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will likely be broken. While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring CP15 registers there rather than saving them.

Signed-off-by: Shawn Guo <shawn.guo@freescale.com>
Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Save and report (via the procfs file) the last kernel unaligned fault location. This allows us to trivially inspect where the last fault happened for cases which we don't expect to occur. Since we expect the kernel to generate misalignment faults (due to the networking layer), even when warnings are enabled, we don't log them for the kernel.

Tested-by: Tony Lindgren <tony@atomide.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
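Conceptually (a sketch; the variable and helper names are hypothetical):

    static unsigned long ai_last_kernel_pc;

    /* alignment fault handler path: remember kernel-mode faults
     * instead of logging them */
    static void note_kernel_fault(struct pt_regs *regs)
    {
            if (!user_mode(regs))
                    ai_last_kernel_pc = instruction_pointer(regs);
    }

    /* /proc/cpu/alignment show routine: report the saved location */
    static void show_last_fault(struct seq_file *m)
    {
            seq_printf(m, "Last:\t0x%08lx\n", ai_last_kernel_pc);
    }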
-
Committed by Russell King
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended to be used by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction. Rather than doing this piecemeal, and miss some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection.

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Committed by Russell King
Ensure that platform maintainers check the CPU part number in the right manner: the CPU part number is meaningless without also checking the CPU implement(e|o)r (choose your preferred spelling!) Provide an interface which returns both the implementer and part number together, and update the definitions to include the implementer. Mark the old function as being deprecated... indeed, using the old function with the definitions will now always evaluate as false, so people must update their un-merged code to the new function. While this could be avoided by adding new definitions, we'd also have to create new names for them which would be awkward.

Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
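Callers then compare against combined implementer+part constants in a single test, e.g. (usage sketch; the quirk hook is hypothetical):

    static void __init cpu_quirks(void)
    {
            /* read_cpuid_part() returns the implementer and part number
             * together, so an ARM Cortex-A9 cannot be confused with another
             * vendor's part that reuses the same part-number field */
            if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
                    enable_a9_errata_workarounds();     /* hypothetical hook */
    }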
-
Committed by Russell King
When setting up the CMA region, we must ensure that the old section mappings are flushed from the TLB before replacing them with page tables, otherwise we can suffer from mismatched aliases if the CPU speculatively prefetches from these mappings at an inopportune time. A mismatched alias can occur when the TLB contains a section mapping, but a subsequent prefetch causes it to load a page table mapping, resulting in the possibility of the TLB containing two matching mappings for the same virtual address region.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
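The required ordering, sketched (the wrapper name is made up; pmd_clear() and flush_tlb_kernel_range() are the real primitives):

    static void __init remap_cma_section(pmd_t *pmd, unsigned long start,
                                         unsigned long end)
    {
            /* tear down the old section mapping... */
            pmd_clear(pmd);
            /* ...and evict any cached section entry before installing the
             * page-table mapping, so the TLB never holds both at once */
            flush_tlb_kernel_range(start, end);
            /* only now is it safe to populate the range with page tables */
    }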
-
- 14 Jul 2014, 1 commit
-
-
Committed by Andrew Lunn
Now that all boards have been converted to DT and all the support code lives in mach-mvebu, we can remove mach-kirkwood.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Link: https://lkml.kernel.org/r/1405028192-9623-2-git-send-email-andrew@lunn.ch
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 08 Jul 2014, 1 commit
-
-
Committed by Russell King
The revision checking in l2c310_enable() was not correct; we were masking the part number rather than the revision number. Fix this to use the correct macro.

Fixes: 4374d649 ("ARM: l2c: add automatic enable of early BRESP")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Jun 2014, 2 commits
-
-
Committed by Laura Abbott
Commit 1c2f87c2 (ARM: 8025/1: Get rid of meminfo) changed find_limits to use memblock_get_current_limit for calculating the max_low pfn. nommu targets never actually set a limit on memblock though, which means memblock_get_current_limit will just return the default value. Set the memblock_limit to be the end of DDR to make sure bounds are calculated correctly.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
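The fix boils down to something like (sketch):

    /* nommu never narrows the memblock limit, so pin it to the end of
     * DDR; find_limits() then derives max_low from a real bound rather
     * than memblock's default */
    memblock_set_current_limit(memblock_end_of_DRAM());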
-
Committed by Thomas Petazzoni

When a PL310 cache is used on a system that provides hardware coherency, the outer cache sync operation is useless, and can be skipped. Moreover, on some systems, it is harmful as it causes deadlocks between the Marvell coherency mechanism, the Marvell PCIe controller and the Cortex-A9.

To avoid this, this commit introduces a new Device Tree property 'arm,io-coherent' for the L2 cache controller node, valid only for the PL310 cache. It identifies the usage of the PL310 cache in an I/O coherent configuration. Internally, it makes the driver disable the outer cache sync operation.

Note that, technically speaking, a fully coherent system wouldn't require any of the other .outer_cache operations. However, in practice, when booting secondary CPUs, these are not yet coherent, and therefore a set of cache maintenance operations are necessary at this point. This explains why we keep the other .outer_cache operations and only ->sync is disabled. While in theory any write to a PL310 register could cause the deadlock, in practice disabling ->sync is sufficient to work around the deadlock, since the other cache maintenance operations are only used in very specific situations.

Contrary to previous versions of this patch, this new version does not simply NULL-ify the ->sync member, because the l2c_init_data structures are now 'const' and therefore cannot be modified, which is a good thing. Therefore, this patch introduces a separate l2c_init_data instance, called of_l2c310_coherent_data.

Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 Jun 2014, 1 commit
-
-
Committed by Russell King
Commit ca8f0b0a ("ARM: ensure C page table setup code follows assembly code") did what it said on the tin, but some of the older CPU code omitted the default cache policy from their files. This results in the kernel running with the caches disabled. Fix this for ARM925.

Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 Jun 2014, 1 commit
-
-
Committed by Russell King
A number of configurations spit out warnings similar to:

  warning: (SOC_IMX6 && SOC_VF610 && ARCH_OMAP4) selects PL310_ERRATA_588369 which has unmet direct dependencies (CACHE_L2X0)
  warning: (SOC_IMX6 && SOC_VF610 && ARCH_OMAP4) selects PL310_ERRATA_727915 which has unmet direct dependencies (CACHE_L2X0)

Clean up the dependencies here:

* PL310 symbols should only be selected when CACHE_L2X0 is enabled.
* Since the cache-l2x0 code detects PL310 presence at runtime, and we will eventually get rid of CACHE_PL310, surround these errata options with an if CACHE_L2X0 conditional rather than repeating the dependency against each.

Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 Jun 2014, 1 commit
-
-
Committed by Naoya Horiguchi
Currently hugepage migration is available for all archs which support pmd-level hugepage, but testing is done only for x86_64 and there're bugs for other archs. So to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking if hugepage migration is supported in vma_migratable().

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Reported-by: Michael Ellerman <mpe@ellerman.id.au>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
Acked-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: James Hogan <james.hogan@imgtec.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: <stable@vger.kernel.org> [3.12+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 Jun 2014, 1 commit
-
-
Committed by Russell King

This does the same as the previous commit, but for the S bit, which also needs to match the initial value which the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-