- 17 October 2014, 3 commits
-
-
By Kees Cook
This is used from set_fixmap() and clear_fixmap() via asm-generic/fixmap.h. It also makes sure that the fixmap allocation fits into the expected range. Based on a patch by Rabin Vincent. Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Rabin Vincent <rabin@rab.in> Acked-by: Nicolas Pitre <nico@linaro.org>
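The shape such an __set_fixmap() implementation can take is sketched below: the pte is looked up from the virtual address, and the index is bounds-checked against the end of the fixmap range. This is a hedged illustration using common kernel page-table helpers, not the literal arch/arm/mm code.

```c
/* Illustrative sketch only; runs in kernel context, not standalone. */
void __set_fixmap(enum fixed_addresses idx, phys_addr_t phys, pgprot_t prot)
{
	unsigned long vaddr = __fix_to_virt(idx);
	pte_t *pte = pte_offset_kernel(pmd_off_k(vaddr), vaddr);

	/* Make sure the fixmap index stays within the expected range. */
	BUG_ON(idx >= __end_of_fixed_addresses);

	if (pgprot_val(prot))
		set_pte_at(NULL, vaddr, pte, pfn_pte(phys >> PAGE_SHIFT, prot));
	else
		pte_clear(NULL, vaddr, pte);

	/* The old translation may still be cached; drop it. */
	local_flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
}
```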
-
By Rob Herring
With commit a05e54c1 ("ARM: 8031/2: change fixmap mapping region to support 32 CPUs"), the fixmap region was expanded to 2MB, but it precluded any other uses of the fixmap region. In order to support other uses, the fixmap region needs to be expanded beyond 2MB. Fortunately, the adjacent 1MB range 0xffe00000-0xfff00000 is available. Remove the fixmap_page_table pointer and look up the page table via the virtual address so that the fixmap region can span more than one pmd. The 2nd pmd is already created since it is shared with the vector page. Signed-off-by: Rob Herring <robh@kernel.org> [kees: fixed CONFIG_DEBUG_HIGHMEM get_fixmap() calls] [kees: moved pte allocation outside of CONFIG_HIGHMEM] Signed-off-by: Kees Cook <keescook@chromium.org> Acked-by: Nicolas Pitre <nico@linaro.org>
-
By Mark Salter
ARM is different from other architectures in that fixmap pages are indexed with a positive offset from FIXADDR_START. Other architectures index with a negative offset from FIXADDR_TOP. In order to use the generic fixmap.h definitions, this patch redefines FIXADDR_TOP to be inclusive of the usable range. That is, FIXADDR_TOP is the virtual address of the topmost fixed page. The newly defined FIXADDR_END is the first virtual address past the fixed mappings. Signed-off-by: Mark Salter <msalter@redhat.com> Reviewed-by: Doug Anderson <dianders@chromium.org> [kees: update for a05e54c1 ("ARM: 8031/2: change fixmap ...")] Signed-off-by: Kees Cook <keescook@chromium.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Rob Herring <robh@kernel.org> Acked-by: Nicolas Pitre <nico@linaro.org>
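To make the top-down indexing concrete, here is a small, self-contained illustration of the mapping asm-generic/fixmap.h performs; the PAGE_SHIFT and FIXADDR_TOP values below are placeholders rather than ARM's real ones.

```c
#include <stdio.h>

#define PAGE_SHIFT	12
#define FIXADDR_TOP	0xffeff000UL	/* placeholder: virtual address of the topmost fixed page */

/* Generic fixmap rule: index 0 maps the topmost page, higher indices grow downwards. */
#define __fix_to_virt(idx)	(FIXADDR_TOP - ((unsigned long)(idx) << PAGE_SHIFT))

int main(void)
{
	printf("fixmap[0] = %#lx\n", __fix_to_virt(0));
	printf("fixmap[1] = %#lx\n", __fix_to_virt(1));
	return 0;
}
```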
-
- 25 September 2014, 2 commits
-
-
By Robin Murphy
The alignment fixup incorrectly decodes faulting ARM VLDn/VSTn instructions (where the optional alignment hint is given but incorrect) as LDR/STR, leading to register corruption. Detect these and correctly treat them as unhandled, so that userspace gets the fault it expects. Reported-by: Simon Hosie <simon.hosie@arm.com> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon
SCTLR.HA (hardware access flag) is deprecated and not actually implemented by any CPUs. Furthermore, it can confuse cr_alignment checks where the whole value of SCTLR is compared against the value sitting in the hardware, since the bit is actually RAZ/WI and will not match the saved cr_alignment value. Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 September 2014, 1 commit
-
-
By Konstantin Khlebnikov
ARM: LPAE: drop wrong carry flag correction after adding TTBR1_OFFSET

In commit 7fb00c2f ("ARM: 8114/1: LPAE: load upper bits of early TTBR0/TTBR1"), the part which fixes the carry when adding TTBR1_OFFSET to TTBR1 was wrong:

    addls   ttbr1, ttbr1, #TTBR1_OFFSET
    adcls   tmp, tmp, #0

addls doesn't update the flags, so adcls adds whatever carry was left by the cmp above:

    cmp     ttbr1, tmp          @ PHYS_OFFSET > PAGE_OFFSET?

Condition 'ls' means the carry flag is clear or the zero flag is set, so only one case is affected: when PHYS_OFFSET == PAGE_OFFSET. It seems safer to remove this fixup; the bug has been there for ages and nobody complained. Let's fix it separately. Reported-and-Tested-by: Jassi Brar <jassisinghbrar@gmail.com> Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
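As background for readers less used to multi-word arithmetic, the missing flag update matters because of carry propagation when a 32-bit offset is added to a 64-bit value split across two registers. A minimal C illustration (not kernel code) of the required behaviour:

```c
#include <stdint.h>

/* Add a 32-bit offset to a 64-bit value held as a {hi, lo} register pair.
 * The carry out of the low word must propagate into the high word; this is
 * what an adds/adc pair does in assembly, and what a plain add followed by
 * adc gets wrong when the add does not update the flags. */
static void ttbr_add_offset(uint32_t *hi, uint32_t *lo, uint32_t offset)
{
	uint32_t old_lo = *lo;

	*lo = old_lo + offset;
	if (*lo < old_lo)	/* unsigned wrap-around == carry out of the low word */
		*hi += 1;
}
```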
-
- 27 August 2014, 1 commit
-
-
By Mark Rutland
The ARMv6 and ARMv7 early abort handlers clear the exclusive monitors upon entry to the kernel, but this is redundant:
- We clear the monitors on every exception return since commit 200b812d ("Clear the exclusive monitor when returning from an exception"), so this is not necessary to ensure the monitors are cleared before returning from a fault handler.
- Any dummy STREX will target a temporary scratch area in memory, and may succeed or fail without corrupting useful data. Its status value will not be used.
- Any other STREX in the kernel must be preceded by an LDREX, which will initialise the monitors consistently and will not depend on the earlier state of the monitors.
Therefore we have no reason to care about the initial state of the exclusive monitors when a data abort is taken, and clearing the monitors prior to exception return (as we already do) is sufficient. This patch removes the redundant clearing of the exclusive monitors from the early abort handlers. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
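For context, the LDREX/STREX pairing this reasoning relies on looks roughly like the retry loop below, which shows why the LDREX always (re)initialises the monitor before any STREX is attempted. This is a hedged sketch assuming an ARMv6+ target and GCC-style inline assembly, not the kernel's actual atomics.

```c
/* Illustrative only; the kernel's real atomics live in arch/arm/include/asm/. */
static inline unsigned int atomic_add_return_sketch(unsigned int *ptr, unsigned int val)
{
	unsigned int result, tmp;

	__asm__ __volatile__(
	"1:	ldrex	%0, [%2]\n"		/* load and mark the address exclusive */
	"	add	%0, %0, %3\n"
	"	strex	%1, %0, [%2]\n"		/* store succeeds only if still exclusive */
	"	teq	%1, #0\n"
	"	bne	1b\n"
	: "=&r" (result), "=&r" (tmp)
	: "r" (ptr), "r" (val)
	: "cc", "memory");

	return result;
}
```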
-
- 09 August 2014, 1 commit
-
-
By Konstantin Khlebnikov
This patch fixes booting when the idmap pgd lies above 4GB. Commit 4756dcbf had mostly fixed this, but it failed to load the upper bits. This also fixes adding TTBR1_OFFSET to TTBR1: if the lower part overflows, the carry must be added to the upper part. Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 August 2014, 1 commit
-
-
By Joonsoo Kim
Currently, there are two users of the CMA functionality: one is the DMA subsystem and the other is KVM on powerpc. They each have their own code to manage the CMA reserved area, even though the code looks very similar. My guess is that this is caused by differing needs in bitmap management: the KVM side wants to maintain its bitmap not per page but at a larger granularity; eventually it uses a bitmap where one bit represents 64 pages. When I implement CMA-related patches, I have to change both places, which is painful. I want to change this situation and reduce future code-maintenance overhead through this patch. This change can also help developers who want to use CMA in new features, since they can use CMA easily without copying and pasting this reserved-area management code. Previous patches prepared some features to generalize CMA reserved area management, and now it's time to do it. This patch moves the core functions to mm/cma.c and changes the DMA APIs to use these functions. There is no functional change in the DMA APIs. Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Acked-by: Michal Nazarewicz <mina86@mina86.com> Acked-by: Zhang Yanfei <zhangyanfei@cn.fujitsu.com> Acked-by: Minchan Kim <minchan@kernel.org> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Alexander Graf <agraf@suse.de> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Cc: Gleb Natapov <gleb@kernel.org> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
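To illustrate the bitmap-granularity point (one bit covering a power-of-two number of pages), here is a small, self-contained sketch; the function name and the order_per_bit parameter are illustrative rather than the actual mm/cma.c interface.

```c
#include <stdio.h>

/* With order_per_bit = 6, one bitmap bit covers 2^6 = 64 pages. */
static unsigned long cma_bitmap_bits(unsigned long page_count, unsigned int order_per_bit)
{
	/* round the page count up to whole bitmap bits */
	return (page_count + (1UL << order_per_bit) - 1) >> order_per_bit;
}

int main(void)
{
	printf("%lu pages -> %lu bits (order_per_bit = 6)\n", 1000UL, cma_bitmap_bits(1000, 6));
	printf("%lu pages -> %lu bits (order_per_bit = 0)\n", 1000UL, cma_bitmap_bits(1000, 0));
	return 0;
}
```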
-
- 02 August 2014, 2 commits
-
-
By Russell King
Add a note about the usage of the identity mapping; we do not support accesses outside of the identity map region and kernel image while a CPU is using the identity map. This is because the identity mapping may overwrite vmalloc space, IO mappings, the vectors pages, etc. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Add further comments to the early page table remap code to explain what the code is doing, why it is doing it, but more importantly to explain that the code is not architecturally compliant and is squarely in "UNPREDICTABLE" behaviour territory. Add a warning and tainting of the kernel too. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 July 2014, 2 commits
-
-
By Konstantin Khlebnikov
On LPAE, each level 1 (pgd) page table entry maps 1GiB, and the level 2 (pmd) entries map 2MiB. When the identity mapping is created on LPAE, the pgd pointers are copied from the swapper_pg_dir. If we find that we need to modify the contents of a pmd, we allocate a new empty pmd table and insert it into the appropriate 1GiB slot, before then filling it with the identity mapping. However, if the 1GiB slot covers the kernel lowmem mappings, we obliterate those mappings. When replacing a PMD, first copy the old PMD contents to the new PMD, so that we preserve the existing mappings, particularly the mappings of the kernel itself. [rewrote commit message and added code comment -- rmk] Fixes: ae2de101 ("ARM: LPAE: Add identity mapping support for the 3-level page table format") Signed-off-by: Konstantin Khlebnikov <k.khlebnikov@samsung.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
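A hedged sketch of the "copy before replace" rule described above, written with generic kernel page-table helpers; the allocation and error handling of the real arch/arm/mm/idmap.c code are omitted and the function name is ours.

```c
/* Illustrative only: replace the pmd table in a 1GiB pud slot while
 * preserving whatever mappings (e.g. kernel lowmem) it already held. */
static void idmap_replace_pmd(pud_t *pud, unsigned long addr)
{
	pmd_t *new_pmd = pmd_alloc_one(&init_mm, addr);

	if (!new_pmd)
		return;

	/* Carry over the existing entries before pointing the pud at the new table. */
	memcpy(new_pmd, pmd_offset(pud, 0), PTRS_PER_PMD * sizeof(pmd_t));
	pud_populate(&init_mm, pud, new_pmd);

	/* ... then fill in the identity-mapping entries for this range ... */
}
```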
-
By Russell King
If init_mm.brk is not section aligned, the LPAE fixup code will miss updating the final PMD. Fix this by aligning map_end. Fixes: a77e0c7b ("ARM: mm: Recreate kernel mappings in early_paging_init()") Cc: <stable@vger.kernel.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
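In other words, the fix rounds the end of the range up to a section boundary so the loop that rewrites the page tables also visits the final, partially covered PMD. A small, self-contained illustration of the rounding follows; the 2 MiB section size matches LPAE and the helper name is illustrative.

```c
#include <stdio.h>
#include <stdint.h>

#define SECTION_SIZE	(1UL << 21)			/* 2 MiB sections under LPAE */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

static uintptr_t section_align_end(uintptr_t brk)
{
	return ALIGN_UP(brk, SECTION_SIZE);
}

int main(void)
{
	uintptr_t brk = 0xc0a12345;	/* a brk value that is not section aligned */

	printf("map_end = %#lx\n", (unsigned long)section_align_end(brk));
	return 0;
}
```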
-
- 24 July 2014, 2 commits
-
-
By Marc Carino
Perform any CPU-specific initialization required on the Broadcom Brahma-15 core. Signed-off-by: Marc Carino <marc.ceeeee@gmail.com> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Brian Norris <computersforpeace@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Steven Capper
For LPAE, we have the following means for encoding writable or dirty ptes:

                                 L_PTE_DIRTY   L_PTE_RDONLY
    !pte_dirty && !pte_write          0              1
    !pte_dirty &&  pte_write          0              1
     pte_dirty && !pte_write          1              1
     pte_dirty &&  pte_write          1              0

So we can't distinguish between writeable clean ptes and read only ptes. This can cause problems with ptes being incorrectly flagged as read only when they are writeable but not dirty. This patch renumbers L_PTE_RDONLY from AP[2] to a software bit #58, and adds additional logic to set AP[2] whenever the pte is read only or not dirty. That way we can distinguish between clean writeable ptes and read only ptes. HugeTLB pages will use this new logic automatically. We need to add some logic to Transparent HugePages to ensure that they correctly interpret the revised pgprot permissions (L_PTE_RDONLY has moved and no longer matches PMD_SECT_AP2). In the process of revising THP, the names of the PMD software bits have been prefixed with L_ to make them easier to distinguish from their hardware bit counterparts. Signed-off-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
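Restating the new rule compactly: the hardware read-only bit AP[2] is derived from the software state and set whenever the pte is read-only or clean, so a clean-but-writable pte takes a permission fault on its first write and can then be marked dirty. A tiny, self-contained illustration (the names are ours, not the kernel's):

```c
#include <stdbool.h>
#include <stdio.h>

/* AP[2] = 1 means "not writable" to the hardware. */
static bool hw_ap2(bool sw_rdonly, bool sw_dirty)
{
	return sw_rdonly || !sw_dirty;
}

int main(void)
{
	printf("clean + writable -> AP[2]=%d (write faults, then marked dirty)\n", hw_ap2(false, false));
	printf("dirty + writable -> AP[2]=%d (writes proceed)\n", hw_ap2(false, true));
	printf("read-only        -> AP[2]=%d\n", hw_ap2(true, false));
	return 0;
}
```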
-
- 18 July 2014, 8 commits
-
-
By Russell King
SWP is deprecated in ARMv6 and ARMv7 CPUs, but more importantly, when running on a SMP system, SWP doesn't guarantee atomicity. This means it can't really be used (by userspace) for locking purposes in a SMP environment. Currently, many configurations leave the SWP emulation disabled, which means we never know if userspace executes this instruction on ARMv7 hardware. Rectify this by enabling SWP emulation for ARMv7 with SMP (where we can trap the instruction.) Tested-by: Tony Lindgren <tony@atomide.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Shawn Guo
The CP15 diagnostic register holds ARM errata bits on Cortex-A9, so it needs to be saved/restored on suspend/resume. Otherwise, the effectiveness of the errata workarounds is lost along with the diagnostic register bits across a suspend/resume cycle. The CP15 power control register of the Cortex-A9 shares the same problem. The patch adds a couple of Cortex-A9 specific suspend/resume functions to save/restore these two Cortex-A9 CP15 registers across the suspend/resume cycle. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
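For reference, the Cortex-A9 diagnostic register lives in CP15 c15 and is reached with an MRC/MCR pair. The sketch below shows only the access pattern (the kernel does this in the proc-v7 suspend/resume assembly, not in C), assuming an ARMv7/Cortex-A9 target, a privileged context and GCC-style inline assembly.

```c
/* Illustrative accessors for the Cortex-A9 diagnostic register (CP15 c15, op2 1). */
static inline unsigned int a9_read_diag(void)
{
	unsigned int val;

	asm volatile("mrc p15, 0, %0, c15, c0, 1" : "=r" (val));
	return val;
}

static inline void a9_write_diag(unsigned int val)
{
	asm volatile("mcr p15, 0, %0, c15, c0, 1" : : "r" (val));
}
```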
-
By Shawn Guo
Add revision info for PL310_ERRATA_588369 and PL310_ERRATA_727915 to help people understand if they need to enable the errata for their hardware. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Shawn Guo
Since pj4b suspend/resume routines are implemented based on generic ARMv7 ones, instead of hard-coding cpu_pj4b_suspend_size, we should have it be cpu_v7_suspend_size plus the pj4b-specific bytes. Otherwise, if cpu_v7_suspend_size gets updated alone, the pj4b suspend/resume will likely be broken. While at it, fix the comments in cpu_pj4b_do_resume, as we're restoring CP15 registers there rather than saving them. Signed-off-by: Shawn Guo <shawn.guo@freescale.com> Acked-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Save and report (via the procfs file) the last kernel unaligned fault location. This allows us to trivially inspect where the last fault happened for cases which we don't expect to occur. Since we expect the kernel to generate misalignment faults (due to the networking layer), even when warnings are enabled, we don't log them for the kernel. Tested-by: Tony Lindgren <tony@atomide.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
ARMv6 and greater introduced a new instruction ("bx") which can be used to return from function calls. Recent CPUs perform better when the "bx lr" instruction is used rather than the "mov pc, lr" instruction, and this sequence is strongly recommended by the ARM architecture manual (section A.4.1.1). We provide a new macro "ret" with all its variants for the condition code which will resolve to the appropriate instruction. Rather than doing this piecemeal, and missing some instances, change all the "mov pc" instances to use the new macro, with the exception of the "movs" instruction and the kprobes code. This allows us to detect the "mov pc, lr" case and fix it up - and also gives us the possibility of deploying this for other registers depending on the CPU selection. Reported-by: Will Deacon <will.deacon@arm.com> Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1 Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood Tested-by: Shawn Guo <shawn.guo@freescale.com> Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385 Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Ensure that platform maintainers check the CPU part number in the right manner: the CPU part number is meaningless without also checking the CPU implement(e|o)r (choose your preferred spelling!) Provide an interface which returns both the implementer and part number together, and update the definitions to include the implementer. Mark the old function as being deprecated... indeed, using the old function with the definitions will now always evaluate as false, so people must update their un-merged code to the new function. While this could be avoided by adding new definitions, we'd also have to create new names for them which would be awkward. Acked-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
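To make the intent concrete: the MIDR keeps the implementer in bits [31:24] and the part number in bits [15:4], so a meaningful comparison has to mask and match both fields together rather than the part number alone. A small illustration follows; the mask value mirrors the kernel's ARM_CPU_PART_MASK, but the names here are ours.

```c
#include <stdint.h>
#include <stdio.h>

#define CPU_IMPL_PART_MASK	0xff00fff0u	/* implementer [31:24] + part number [15:4] */

static int cpu_is(uint32_t midr, uint32_t impl_part)
{
	return (midr & CPU_IMPL_PART_MASK) == impl_part;
}

int main(void)
{
	uint32_t midr = 0x410fc090;	/* e.g. an ARM (0x41) Cortex-A9 (0xc09) MIDR value */

	printf("matches implementer+part 0x4100c090: %d\n", cpu_is(midr, 0x4100c090));
	return 0;
}
```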
-
By Russell King
When setting up the CMA region, we must ensure that the old section mappings are flushed from the TLB before replacing them with page tables, otherwise we can suffer from mismatched aliases if the CPU speculatively prefetches from these mappings at an inopportune time. A mismatched alias can occur when the TLB contains a section mapping, but a subsequent prefetch causes it to load a page table mapping, resulting in the possibility of the TLB containing two matching mappings for the same virtual address region. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 14 July 2014, 1 commit
-
-
By Andrew Lunn
Now that all boards have been converted to DT and all the support code lives in mach-mvebu, we can remove mach-kirkwood. Signed-off-by: Andrew Lunn <andrew@lunn.ch> Link: https://lkml.kernel.org/r/1405028192-9623-2-git-send-email-andrew@lunn.ch Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 08 July 2014, 1 commit
-
-
By Russell King
The revision checking in l2c310_enable() was not correct; we were masking the part number rather than the revision number. Fix this to use the correct macro. Fixes: 4374d649 ("ARM: l2c: add automatic enable of early BRESP") Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 June 2014, 2 commits
-
-
By Laura Abbott
Commit 1c2f87c2 (ARM: 8025/1: Get rid of meminfo) changed find_limits to use memblock_get_current_limit for calculating the max_low pfn. nommu targets never actually set a limit on memblock though, which means memblock_get_current_limit will just return the default value. Set the memblock_limit to be the end of DDR to make sure bounds are calculated correctly. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Thomas Petazzoni
When a PL310 cache is used on a system that provides hardware coherency, the outer cache sync operation is useless, and can be skipped. Moreover, on some systems, it is harmful as it causes deadlocks between the Marvell coherency mechanism, the Marvell PCIe controller and the Cortex-A9. To avoid this, this commit introduces a new Device Tree property 'arm,io-coherent' for the L2 cache controller node, valid only for the PL310 cache. It identifies the usage of the PL310 cache in an I/O coherent configuration. Internally, it makes the driver disable the outer cache sync operation. Note that technically speaking, a fully coherent system wouldn't require any of the other .outer_cache operations. However, in practice, when booting secondary CPUs, these are not yet coherent, and therefore a set of cache maintenance operations are necessary at this point. This explains why we keep the other .outer_cache operations and only ->sync is disabled. While in theory any write to a PL310 register could cause the deadlock, in practice, disabling ->sync is sufficient to work around the deadlock, since the other cache maintenance operations are only used in very specific situations. Contrary to previous versions of this patch, this new version does not simply NULL-ify the ->sync member, because the l2c_init_data structures are now 'const' and therefore cannot be modified, which is a good thing. Therefore, this patch introduces a separate l2c_init_data instance, called of_l2c310_coherent_data. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 20 June 2014, 1 commit
-
-
By Russell King
Commit ca8f0b0a ("ARM: ensure C page table setup code follows assembly code") did what it said on the tin, but some of the older CPU code omitted the default cache policy from their files. This results in the kernel running with the caches disabled. Fix this for ARM925. Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 19 June 2014, 1 commit
-
-
By Russell King
A number of configurations spit out warnings similar to:

    warning: (SOC_IMX6 && SOC_VF610 && ARCH_OMAP4) selects PL310_ERRATA_588369 which has unmet direct dependencies (CACHE_L2X0)
    warning: (SOC_IMX6 && SOC_VF610 && ARCH_OMAP4) selects PL310_ERRATA_727915 which has unmet direct dependencies (CACHE_L2X0)

Clean up the dependencies here:
- PL310 symbols should only be selected when CACHE_L2X0 is enabled.
- Since the cache-l2x0 code detects PL310 presence at runtime, and we will eventually get rid of CACHE_PL310, surround these errata options with an if CACHE_L2X0 conditional rather than repeating the dependency against each.
Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 June 2014, 1 commit
-
-
By Naoya Horiguchi
Currently hugepage migration is available for all archs which support pmd-level hugepages, but testing is done only for x86_64 and there are bugs for other archs. So to avoid breaking such archs, this patch limits the availability strictly to x86_64 until developers of other archs get interested in enabling this feature. Simply disabling hugepage migration on non-x86_64 archs is not enough to fix the reported problem where sys_move_pages() hits the BUG_ON() in follow_page(FOLL_GET), so let's fix this by checking if hugepage migration is supported in vma_migratable(). Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Reported-by: Michael Ellerman <mpe@ellerman.id.au> Tested-by: Michael Ellerman <mpe@ellerman.id.au> Acked-by: Hugh Dickins <hughd@google.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: David Miller <davem@davemloft.net> Cc: <stable@vger.kernel.org> [3.12+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 02 June 2014, 7 commits
-
-
By Russell King
This does the same as the previous commit, but for the S bit, which also needs to match the initial value which the assembly code used, for the same reasons. Again, we add a check for SMP to ensure that the page tables are correctly set up for SMP. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Fix a long standing bug where, for ARMv6+, we don't fully ensure that the C code sets the same cache policy as the assembly code. This was introduced partially by commit 11179d8c ([ARM] 4497/1: Only allow safe cache configurations on ARMv6 and later) and also by adding SMP support. This patch sets the default cache policy based on the flags used by the assembly code, and then ensures that when a cache policy command line argument is used, we verify that on ARMv6, it matches the initial setup. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
cr_no_alignment is really only used by the alignment code. Since we no longer change the setting of cr_alignment after boot, we can localise this to alignment.c. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
alignment.c will not be built unless CPU_CP15 is set:

    config CPU_CP15
        bool

    config CPU_CP15_MMU
        bool
        select CPU_CP15

    config ALIGNMENT_TRAP
        bool
        depends on CPU_CP15_MMU

So there's no point having conditionals on CPU_CP15 within this code. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
adjust_cr() is not used anymore, so let's get rid of it. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Keep all bits of alignment handling together. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Several places open-code this manipulation, let's consolidate this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 June 2014, 2 commits
-
-
By Laura Abbott
memblock is now fully integrated into the kernel and is the preferred method for tracking memory. Rather than reinvent the wheel with meminfo, migrate to using memblock directly instead of meminfo as an intermediate. Acked-by: Jason Cooper <jason@lakedaemon.net> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Kukjin Kim <kgene.kim@samsung.com> Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Thomas Petazzoni
Due to a design incompatibility between the PCIe Marvell controller and the Cortex-A9, stressing PCIe devices with a lot of traffic quickly causes a deadlock. One part of the workaround for this is to have all PCIe regions mapped as strongly-ordered (MT_UNCACHED) instead of the default MT_DEVICE. While the arch_ioremap_caller() mechanism allows sub-architecture code to override ioremap(), used to map PCIe memory regions, there isn't such a mechanism to override the behavior of pci_ioremap_io(). This commit adds the arch_pci_ioremap_mem_type variable, initialized to MT_DEVICE by default, which sub-architecture code can override. We have chosen to expose a single variable rather than offering the possibility of overriding the entire pci_ioremap_io(), because implementing pci_ioremap_io() requires calling functions (get_mem_type()) that are private to the arch/arm/mm/ code. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
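A hedged sketch of how the override hook described above is meant to be used: the arch code exports a memory-type variable defaulting to MT_DEVICE, and a platform that needs strongly-ordered PCI windows assigns it before any pci_ioremap_io() call. The function name below is illustrative, not the actual mach-mvebu code.

```c
#include <linux/init.h>
#include <asm/mach/map.h>

/* Default, defined once in arch/arm/mm code (per the commit text). */
int arch_pci_ioremap_mem_type = MT_DEVICE;

/* Illustrative platform hook: switch PCI windows to strongly-ordered mappings. */
static void __init example_platform_pcie_init(void)
{
	arch_pci_ioremap_mem_type = MT_UNCACHED;
}
```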
-
- 30 May 2014, 1 commit
-
-
By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-