- 26 May 2014, 1 commit
-
-
By Will Deacon
Cortex-A17 has identical initialisation requirements to Cortex-A12, so hook it up in proc-v7.S in the same way.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 May 2014, 1 commit
-
-
By Russell King
Avoid calling dma_cache_maint_page() when unmapping a DMA_TO_DEVICE buffer. The L1 cache ops never do anything in this circumstance, nor do they ever need to - all that matters for this case is that the data written is visible to the device before DMA starts. What happens during the transfer (provided the buffer is not written to) is of no real consequence. We already do this optimisation for the L2 cache.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
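A minimal sketch of the resulting check on the unmap path (function names per arch/arm/mm/dma-mapping.c; the exact surrounding code is an assumption):

    /* Skip L1 maintenance when tearing down a DMA_TO_DEVICE mapping;
     * the device only ever read the buffer. */
    if (dir != DMA_TO_DEVICE)
        dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);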
-
- 25 Apr 2014, 1 commit
-
-
By Jianguo Wu
With LPAE and big-endian enabled on a HiSilicon board, specifying mem=384M mem=512M@7680M produces a bad page state:

    Freeing unused kernel memory: 180K (c0466000 - c0493000)
    BUG: Bad page state in process init  pfn:fa442
    page:c7749840 count:0 mapcount:-1 mapping:  (null) index:0x0
    page flags: 0x40000400(reserved)
    Modules linked in:
    CPU: 0 PID: 1 Comm: init Not tainted 3.10.27+ #66
    [<c000f5f0>] (unwind_backtrace+0x0/0x11c) from [<c000cbc4>] (show_stack+0x10/0x14)
    [<c000cbc4>] (show_stack+0x10/0x14) from [<c009e448>] (bad_page+0xd4/0x104)
    [<c009e448>] (bad_page+0xd4/0x104) from [<c009e520>] (free_pages_prepare+0xa8/0x14c)
    [<c009e520>] (free_pages_prepare+0xa8/0x14c) from [<c009f8ec>] (free_hot_cold_page+0x18/0xf0)
    [<c009f8ec>] (free_hot_cold_page+0x18/0xf0) from [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8)
    [<c00b5444>] (handle_pte_fault+0xcf4/0xdc8) from [<c00b6458>] (handle_mm_fault+0xf4/0x120)
    [<c00b6458>] (handle_mm_fault+0xf4/0x120) from [<c0013754>] (do_page_fault+0xfc/0x354)
    [<c0013754>] (do_page_fault+0xfc/0x354) from [<c0008400>] (do_DataAbort+0x2c/0x90)
    [<c0008400>] (do_DataAbort+0x2c/0x90) from [<c0008fb4>] (__dabt_usr+0x34/0x40)

The bad pfn:fa442 is not system memory (mem=384M mem=512M@7680M). After debugging, I found that in the page fault handler we read back the wrong pfn from a pte just after setting it, as follows:

    do_anonymous_page()
    {
        ...
        set_pte_at(mm, address, page_table, entry);

        //debug code
        pfn = pte_pfn(entry);
        pr_info("pfn:0x%lx, pte:0x%llx\n", pfn, pte_val(entry));

        //read back the pte just set
        new_pte = pte_offset_map(pmd, address);
        new_pfn = pte_pfn(*new_pte);
        pr_info("new pfn:0x%lx, new pte:0x%llx\n", new_pfn, pte_val(*new_pte));
        ...
    }

    pfn:     0x1fa4f5, pte:0xc00001fa4f575f
    new_pfn: 0xfa4f5,  new_pte:0xc00000fa4f5f5f    //new pfn/pte is wrong

The bug happens in cpu_v7_set_pte_ext(ptep, pte): an LPAE PTE is a 64-bit quantity, passed to cpu_v7_set_pte_ext in the r2 and r3 registers. On an LE kernel, r2 contains the LSB of the PTE and r3 the MSB. On a BE kernel, the assignment is reversed. Unfortunately, the current code always assumes the LE case, leading to corruption of the PTE when clearing/setting bits. This patch fixes the issue much as has already been done for the cpu_v7_switch_mm case.

Cc: <stable@vger.kernel.org>
Signed-off-by: Jianguo Wu <wujianguo@huawei.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
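The crux of the fix is to make the low/high register assignment endian-aware; a sketch mirroring the approach in proc-v7-3level.S (the exact macro placement is an assumption):

    /* cpu_v7_set_pte_ext receives the 64-bit PTE in r2/r3; which
     * register holds the low word depends on kernel endianness. */
    #ifdef __ARMEB__
    #define rl r3    /* BE: r3 holds the LSBs */
    #define rh r2
    #else
    #define rl r2    /* LE: r2 holds the LSBs */
    #define rh r3
    #endif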
-
- 23 Apr 2014, 3 commits
-
-
By Liu Hua
On 32-bit ARM systems the fixmap mapping region can support no more than 14 CPUs (896K in total; 64K per CPU), yet NR_CPUS can be configured up to 32, so there is a mismatch. This patch moves the fixmap mapping region down to 0xffc00000-0xffe00000, after which it can support up to 32 CPUs.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
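The sizing arithmetic, as a sketch (macro names follow arch/arm/include/asm/fixmap.h, but treat the exact definitions as an assumption):

    /* Old region: 0xfff00000-0xfffe0000 = 896K -> 896K / 64K = 14 CPUs.
     * New region: 0xffc00000-0xffe00000 = 2M   ->   2M / 64K = 32 CPUs. */
    #define FIXADDR_START    0xffc00000UL
    #define FIXADDR_END      0xffe00000UL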
-
By Liu Hua
These two macros do not appear to be used by non-architecture-specific code, and on ARM FIX_KMAP_BEGIN equals zero. This patch removes them; instead, FIX_KMAP_NR_PTES gives the number of PTEs belonging to the fixmap mapping region. The code will become clearer when a subsequent bugfix to the fixmap mapping region is introduced.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Liu Hua <sdu.liu@huawei.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Gregory CLEMENT
PJ4B needs extra instructions for suspend and resume, so instead of using the ARMv7 versions, this commit introduces PJ4B-specific ones.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 Apr 2014, 1 commit
-
-
By Kees Cook
On non-LPAE ARMv6+, read-only PMD bits are defined by the combination "PMD_SECT_APX | PMD_SECT_AP_WRITE". Adjust the bit masks so that this is reported correctly.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 27 Mar 2014, 1 commit
-
-
By Arnd Bergmann
When building a kernel with support for both ARMv6 and ARMv7 but no MMU, the call from tauros2_internal_init to adjust_cr causes a link error. While that could probably be resolved, we don't actually support cache-tauros2 on ARMv6 any more. All PJ4 CPU implementations support both ARMv6 and ARMv7, and we already assume that we are using them only in ARMv7 mode. Removing the ARMv6 code path reduces the code size and avoids the linker error.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com>
-
- 22 Mar 2014, 1 commit
-
-
By Arnd Bergmann
ARCH_RPC no longer supports CPUs other than StrongARM110, so we can make the option implicitly selected by the platform and no longer offer the option of building a kernel without CPU support.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
-
- 12 Mar 2014, 1 commit
-
-
By Marek Szyprowski
Enable reserved memory initialization from the device tree.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Grant Likely <grant.likely@linaro.org>
-
- 28 Feb 2014, 2 commits
-
-
By Marek Szyprowski
The 'order' parameter of the IOMMU-aware dma-mapping implementation was introduced mainly as a hack to reduce the size of the bitmap used for tracking IO virtual address space. Since it is now possible to resize the bitmap dynamically, this hack is no longer needed and can be removed without any impact on client devices. This makes the parameters of arm_iommu_create_mapping() much easier to understand: the 'size' parameter now means the maximum supported IO address space size. The code allocates (and resizes) the bitmap in chunks, ensuring that a single chunk is no larger than a single memory page, to avoid unreliable allocations larger than PAGE_SIZE in atomic context.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
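A usage sketch of the simplified interface (the bus, base address, and size are illustrative):

    struct dma_iommu_mapping *mapping;

    /* 128M of IO virtual address space at bus address 0x80000000; the
     * tracking bitmap now grows in page-sized chunks on demand, with no
     * 'order' argument to size it up front. */
    mapping = arm_iommu_create_mapping(&platform_bus_type, 0x80000000, SZ_128M);
    if (IS_ERR(mapping))
        return PTR_ERR(mapping);
    return arm_iommu_attach_device(dev, mapping);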
-
By Andreas Herrmann
Instead of using just one bitmap to keep track of IO virtual addresses (handed out for IOMMU use), introduce an array of bitmaps. This allows us to extend existing mappings when running out of iova space in the initial mapping, etc. If there is not enough space in the mapping to service an IO virtual address allocation request, __alloc_iova() tries to extend the mapping -- by allocating another bitmap -- and makes another allocation attempt using the freshly allocated bitmap. This allows arm iommu drivers to start with a decent initial size when a dma_iommu_mapping is created and still avoid running out of IO virtual addresses for the mapping.

Signed-off-by: Andreas Herrmann <andreas.herrmann@calxeda.com>
[mszyprow: removed the extensions parameter to the arm_iommu_create_mapping() function, which will be modified in the next patch anyway; also removed some debug messages about extending the bitmap]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 23 Feb 2014, 3 commits
-
-
By Andrew Lunn
Kirkwood, which uses the Feroceon L2 cache controller, will soon be moving into mach-mvebu. Allow the cache controller to be built in this situation.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
By Andrew Lunn
Instantiate the L2 cache from DT. Indicate in DT where the cache control register is, so that it is possible to enable/disable write-through on the CPU.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
By Andrew Lunn
With the gradual move to DT, kirkwood has become a lot less dependent on plat-orion. cache-feroceon-l2.h is the last dependency. Move it out so we can drop plat-orion when building DT-only kirkwood boards.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Jason Cooper <jason@lakedaemon.net>
-
- 19 Feb 2014, 2 commits
-
-
By Steven Capper
The coherent DMA allocator allocates pages of high order and then splits them up into smaller pages. This splitting logic would run into problems if the allocator were given compound pages; thus the coherent DMA allocator was originally incompatible with compound pages existing and, by extension, with huge pages. A compile-time #error was put in place whenever huge pages were enabled. Compatibility with compound pages has since been introduced by the following commit (which merely excludes GFP_COMP pages from being requested by the coherent DMA allocator):

    ea2e7057 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations

When huge page support was introduced to ARM, the compile #error in dma-mapping.c was replaced by a #warning when it should have been removed instead. This patch removes the compile #warning in dma-mapping.c when huge pages are enabled.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Kees Cook
On 2-level page table systems, the PMD has 2 section entries. Report these, otherwise ARM_PTDUMP will miss reporting permission changes on odd section boundaries.

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 11 Feb 2014, 1 commit
-
-
By Marek Szyprowski
GFP_ATOMIC is not a single gfp flag, but a macro which expands to other flags plus the LACK of __GFP_WAIT. To check whether the caller wanted to perform an atomic allocation, the code must test for the absence of the __GFP_WAIT flag. This patch fixes the issue introduced in v3.6-rc5.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: stable@vger.kernel.org
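A minimal sketch of the broken versus correct test (helper name per arch/arm/mm/dma-mapping.c of this era; treat the surrounding code as an assumption):

    /* Broken: GFP_ATOMIC is essentially just __GFP_HIGH, i.e. it is
     * defined by the ABSENCE of __GFP_WAIT, so this test does not
     * detect atomic callers. */
    if (gfp & GFP_ATOMIC)
        buf = __alloc_from_pool(size, &page);

    /* Correct: an atomic allocation is one that may not sleep. */
    if (!(gfp & __GFP_WAIT))
        buf = __alloc_from_pool(size, &page);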
-
- 10 Feb 2014, 5 commits
-
-
By Will Deacon
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do not have hardware thread registers. The lack of these registers requires the kernel to update the vectors page at each context switch in order to write a new TLS pointer. This write must be done via the userspace mapping, since aliasing caches can lead to expensive flushing when using kmap. Finally, this requires the vectors page to be mapped r/w for kernel and r/o for user, which has implications for things like put_user, which must trigger CoW appropriately when targeting user pages. The upshot of all this is that a v6/v7 kernel makes use of domains to segregate kernel and user memory accesses. This has the nasty side-effect of making device mappings executable, which has been observed to cause subtle bugs on recent cores (e.g. Cortex-A15 performing a speculative instruction fetch from the GIC and acking an interrupt in the process). This patch solves the problem by removing the remaining domain support from ARMv6. A new memory type is added specifically for the vectors page, which allows that page (and only that page) to be mapped as user r/o, kernel r/w. All other user r/o pages are also mapped kernel r/o. Patch co-developed with Russell King.

Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Jason Gunthorpe
Booting on feroceon CPUs requires the L2 cache to be turned off. With some kernel configurations (notably CONFIG_ARM_PATCH_PHYS_VIRT disabled) the kernel will boot even if the L2 is turned on. However there may be subtle breakage, and when PATCH_PHYS_VIRT is enabled it is very likely that booting with the L2 on will crash at early boot, before any kernel diagnostic output. The diagnostic message is intended to discourage people from shipping bootloaders that leave the L2 turned on. The issue on feroceon is that the L2 is bypassed when the L1 caches are disabled: the decompressor will place parts of the kernel image into the L2, and the early cache-off boot code in head.S will write to parts of the kernel image bypassing the L2, creating inconsistency. Tested on ARM Kirkwood.

Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Jonathan Austin
The A12 behaves as the A7/A15 do with respect to setting the SMP bit, and doesn't require TLB ops broadcasting to be explicitly enabled as the A9 does. Note that since the ACTLR cannot (usually) be written from non-secure, it is the responsibility of the bootloader/firmware to set this bit per core; it is done here in Linux only as a last resort, in case of bad firmware.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon
During __v{6,7}_setup, we invalidate the TLBs since we are about to enable the MMU on return to head.S. Unfortunately, without a subsequent dsb instruction, the invalidation is not guaranteed to have completed by the time we write to the sctlr, potentially exposing us to junk/stale translations cached in the TLB. This patch reworks the init functions so that the dsb used to ensure completion of cache/predictor maintenance is also used to ensure completion of the TLB invalidation.

Cc: <stable@vger.kernel.org>
Reported-by: Albin Tonnerre <Albin.Tonnerre@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Christoffer Dall
The stage-2 memory attributes are distinct from the Hyp memory attributes and the stage-1 memory attributes. We were using the stage-1 memory attributes for stage-2 mappings, causing device mappings to be mapped as normal memory. Add the S2 equivalent defines for memory attributes, and fix the comments explaining the defines while at it. Add a prot_pte_s2 field to the mem_type struct and fill out the field for device mappings accordingly.

Cc: <stable@vger.kernel.org> [3.9+]
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
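A sketch of the kind of S2 defines added (values follow the LPAE stage-2 MemAttr field encoding; treat the exact names and values as an assumption):

    /* Stage-2 memory type, MemAttr field of the descriptor */
    #define L_PTE_S2_MT_UNCACHED        (_AT(pteval_t, 0x0) << 2) /* device */
    #define L_PTE_S2_MT_WRITETHROUGH    (_AT(pteval_t, 0xa) << 2)
    #define L_PTE_S2_MT_WRITEBACK       (_AT(pteval_t, 0xf) << 2)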
-
- 28 Jan 2014, 1 commit
-
-
By Ben Peddell
Commit 65939301 (arm: set initrd_start/initrd_end for fdt scan) caused the FDT initrd_start and initrd_end to override the phys_initrd_start and phys_initrd_size set by the initrd= kernel parameter. With this patch, the FDT-provided initrd_start and initrd_end are overridden when phys_initrd_start and phys_initrd_size have been set by the initrd= kernel parameter.

Fixes: 65939301 (arm: set initrd_start/initrd_end for fdt scan)
Signed-off-by: Ben Peddell <klightspeed@killerwolves.net>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
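A sketch of the precedence logic (the arm_memblock_init() call site is an assumption): only fall back to the FDT-populated values when initrd= has not already provided physical ones.

    if (initrd_start && !phys_initrd_size) {
        /* The FDT scan populated initrd_start/initrd_end (virtual
         * addresses) and no initrd= parameter was given, so derive
         * the physical values from them. */
        phys_initrd_start = __virt_to_phys(initrd_start);
        phys_initrd_size = initrd_end - initrd_start;
    }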
-
- 22 Jan 2014, 2 commits
-
-
By Santosh Shilimkar
Switch to memblock interfaces for the early memory allocator instead of the bootmem allocator. No functional change in behaviour from the current code, from the bootmem users' point of view. Archs already converted to NO_BOOTMEM now use memblock interfaces directly instead of bootmem wrappers built on top of memblock, and on the archs which still use bootmem, these new APIs just fall back to the existing bootmem APIs.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Grygorii Strashko <grygorii.strashko@ti.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Paul Walmsley <paul@pwsan.com>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Tejun Heo <tj@kernel.org>
Cc: Tony Lindgren <tony@atomide.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
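An illustrative before/after of the conversion (memblock_virt_alloc() is the memblock API of this era, later renamed memblock_alloc(); the call site itself is an assumption):

    /* old: bootmem wrapper */
    ptr = alloc_bootmem(size);

    /* new: memblock interface directly; an alignment of 0 selects the
     * default SMP_CACHE_BYTES alignment */
    ptr = memblock_virt_alloc(size, 0);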
-
By Mel Gorman
Commit 4b59e6c4 ("mm, show_mem: suppress page counts in non-blockable contexts") introduced SHOW_MEM_FILTER_PAGE_COUNT to suppress PFN walks on large memory machines. Commit c78e9363 ("mm: do not walk all of system memory during show_mem") avoided a PFN walk in the generic show_mem helper, which removes the requirement for SHOW_MEM_FILTER_PAGE_COUNT in that case. This patch removes PFN walkers from the arch-specific implementations that report on a per-node or per-zone granularity. ARM and unicore32 still do a PFN walk, as they report memory usage for each bank, which is a much finer granularity where the debugging information may still be of use. As the remaining arches doing PFN walks have relatively small amounts of memory, this patch simply removes SHOW_MEM_FILTER_PAGE_COUNT.

[akpm@linux-foundation.org: fix parisc]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: James Bottomley <jejb@parisc-linux.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 08 Jan 2014, 1 commit
-
-
By Russell King
This reverts commit 787b0d5c since it is no longer required after 7909/1 was applied, and it causes build regressions when ARM_PATCH_PHYS_VIRT is disabled and DMA_ZONE is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 29 Dec 2013, 7 commits
-
-
By Will Deacon
The ASID allocator has to deal with some pretty horrible behaviours by the CPU, so expand on some of the comments in there so I remember why we can never allocate ASID zero to a userspace task.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Will Deacon
Since we only clear entries in the ASID bitmap on a rollover event, the bitmap tends to consist of a block of consecutive set bits followed by a block of consecutive clear bits. The exception to this rule is for ASIDs which have been carried over from a previous generation, but these are bound by the number of CPUs. This patch optimises our bitmap searching strategy so that we search from the last successful allocation, rather than from index 1, each time we allocate a new ASID.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
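A sketch of the optimised search, per new_context() in arch/arm/mm/context.c of this era (treat the details as an assumption):

    static u32 cur_idx = 1;
    ...
    /* Resume the scan from the last successful allocation rather than
     * from index 1; only wrap (skipping the reserved ASID 0) when the
     * tail of the bitmap is exhausted. */
    asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
    if (asid == NUM_USER_ASIDS)
        asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, 1);
    __set_bit(asid, asid_map);
    cur_idx = asid;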
-
By Will Deacon
With the new ASID allocation algorithm, active ASIDs at the time of a rollover event will be marked as reserved, so active mm_structs can continue to operate with the same ASID as before. This in turn means that we don't need to worry about allocating a new ASID to an mm that is currently active (installed in TTBR0). Since updating the pgd and ASID is atomic on LPAE systems (by virtue of the two being fields in the same hardware register), we can dispose of the reserved TTBR0 and rely on whatever tables we currently have live.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Steven Capper
When given a compound high page, __flush_dcache_page will only flush the first page of the compound page repeatedly rather than the entire set of constituent pages. This error was introduced by:

    0b19f933 ARM: mm: Add support for flushing HugeTLB pages.

This patch corrects the logic such that all constituent pages are now flushed.

Cc: stable@vger.kernel.org # 3.10+
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
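The corrected loop, roughly, per __flush_dcache_page() in arch/arm/mm/flush.c (a sketch, not the verbatim patch):

    int i;

    /* Flush every constituent page of the compound page, mapping each
     * one in turn, instead of flushing page 0 repeatedly. */
    for (i = 0; i < (1 << compound_order(page)); i++) {
        void *addr = kmap_atomic(page + i);
        __cpuc_flush_dcache_area(addr, PAGE_SIZE);
        kunmap_atomic(addr);
    }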
-
By Russell King
Make pgd allocation retry on failure; we really need this to succeed, otherwise fork() can trigger OOMs.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Sebastian Hesselbarth
This adds support for the Marvell Tauros3 cache controller, which is compatible with the pl310 cache controller but broadcasts L1 cache operations to the L2 cache. While updating the binding documentation, clean up the list of possible compatibles. Also reorder the driver compatibles to allow non-ARM derivatives to be compatible with ARM cache controller compatibles.

Signed-off-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Lorenzo Pieralisi
Set-associative caches on all v7 implementations map the index bits to physical address LSBs and tag bits to MSBs. As the last level of cache on current and upcoming ARM systems grows in size, this means that under normal DRAM controller configurations, the current v7 cache flush routine using set/way operations triggers a DRAM memory controller precharge/activate for every cache line writeback, since the routine cleans lines by first fixing the index and then looping through ways (index bits are mapped to lower physical addresses on all v7 cache implementations; this means that, with last level cache sizes in the order of MBytes, lines belonging to the same set but different ways map to different DRAM pages). Given the random content of cache tags, swapping the order of the index and way loops does not prevent DRAM page precharge and activate cycles, but on average it improves the chances that either multiple lines hit the same page or multiple lines belong to different DRAM banks, improving throughput significantly. This patch swaps the inner loops in the v7 cache flushing routine to carry out the clean operations first on all sets belonging to a given way (looping through sets) and then decrementing the way. Benchmarks showed that by swapping the ordering in which sets and ways are decremented in the v7 cache flushing routine that uses set/way operations, the time required to flush caches is reduced significantly, owing to improved writeback throughput to the DRAM controller. Benchmark results vary and depend heavily on the last-level cache tag RAM content when the cache is cleaned and invalidated, ranging from 2x throughput when all tag RAM entries contain dirty lines mapping to sequential pages of RAM, down to 1x (i.e. no improvement) when all tag RAM accesses trigger a DRAM precharge/activate cycle, as the current code implies on most DRAM controller configurations.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
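In C-style pseudocode (the real routine, v7_flush_dcache_all in arch/arm/mm/cache-v7.S, is assembly; dccsw() here is a hypothetical helper standing in for the clean/invalidate-by-set/way operation):

    /* Before: set (index) fixed in the outer loop, ways inner -- each
     * inner iteration hits a line whose tag maps to a different DRAM
     * page, forcing a precharge/activate per writeback. */
    for (set = num_sets - 1; set >= 0; set--)
        for (way = num_ways - 1; way >= 0; way--)
            dccsw(level, way, set);

    /* After: way fixed in the outer loop, sets inner -- consecutive
     * sets map to consecutive physical LSBs, so writebacks batch up
     * within the same DRAM page. */
    for (way = num_ways - 1; way >= 0; way--)
        for (set = num_sets - 1; set >= 0; set--)
            dccsw(level, way, set);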
-
- 11 Dec 2013, 5 commits
-
-
By Russell King
The CMA region was being marked executable:

    0xdc04e000-0xdc050000           8K RW x  MEM/CACHED/WBRA
    0xdc060000-0xdc100000         640K RW x  MEM/CACHED/WBRA
    0xdc4f5000-0xdc500000          44K RW x  MEM/CACHED/WBRA
    0xdcce9000-0xe0000000       52316K RW x  MEM/CACHED/WBRA

This is mainly due to the badly worded MT_MEMORY_DMA_READY symbol, but there are also a few other places in dma-mapping which should be corrected to use the right constant. Fix all these places:

    0xdc04e000-0xdc050000           8K RW NX MEM/CACHED/WBRA
    0xdc060000-0xdc100000         640K RW NX MEM/CACHED/WBRA
    0xdc280000-0xdc300000         512K RW NX MEM/CACHED/WBRA
    0xdc6fc000-0xe0000000       58384K RW NX MEM/CACHED/WBRA

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Laura Abbott
Other architectures define various set_memory functions to allow attributes to be changed (e.g. set_memory_x, set_memory_rw, etc.). Currently, these functions are missing on ARM. Define them in an appropriate manner for ARM.

Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
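The cross-architecture prototypes being adopted, plus a usage sketch (the code-page example is illustrative):

    int set_memory_ro(unsigned long addr, int numpages);
    int set_memory_rw(unsigned long addr, int numpages);
    int set_memory_x(unsigned long addr, int numpages);
    int set_memory_nx(unsigned long addr, int numpages);

    /* e.g. lock down one page of generated code: read-only + executable */
    set_memory_ro((unsigned long)page_addr, 1);
    set_memory_x((unsigned long)page_addr, 1);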
-
By Russell King
Add basic NX support for kernel lowmem mappings. We mark any section which does not overlap kernel text as non-executable, preventing it from being used to write code and then execute directly from there. This does not change the alignment of the sections, so the kernel image doesn't grow significantly via this change, and we can do this without needing a config option.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
Document the permissions which the various MT_MEMORY* mapping types will provide.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King
This patch allows the kernel page tables to be dumped via a debugfs file, allowing kernel developers to check the layout of the kernel page tables and verify the various permission and type settings.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 Dec 2013, 1 commit
-
-
By Santosh Shilimkar
The current code uses PHYS_OFFSET to calculate arm_dma_limit, which leads to wrong results in cases where PHYS_OFFSET is updated at runtime. Fix the code by using __pv_phys_offset instead of PHYS_OFFSET.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
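The essence of the change, as a sketch (the setup_dma_zone() context is an assumption):

    void __init setup_dma_zone(const struct machine_desc *mdesc)
    {
    #ifdef CONFIG_ZONE_DMA
        if (mdesc->dma_zone_size) {
            arm_dma_zone_size = mdesc->dma_zone_size;
            /* was: PHYS_OFFSET + arm_dma_zone_size - 1, which bakes in
             * the compile-time offset instead of the patched one */
            arm_dma_limit = __pv_phys_offset + arm_dma_zone_size - 1;
        } else
            arm_dma_limit = 0xffffffff;
    #endif
    }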
-