- 07 Jan 2013, 3 commits
-
-
Submitted by Gregory CLEMENT

The use of writel instead of writel_relaxed can lead to a deadlock in some situations (SMP on Armada 370, for instance). Using writel_relaxed, as is done in the rest of this driver, fixes the bug.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
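A minimal sketch of the pattern, with a hypothetical register offset: writel() is writel_relaxed() plus a DSB barrier, and it is that barrier which can stall against a blocked interconnect.

    #include <linux/io.h>

    #define IRQ_MASK_REG    0x54    /* hypothetical offset, for illustration */

    static void __iomem *per_cpu_int_base;

    static void irq_mask_sketch(u32 hwirq)
    {
            /*
             * writel() issues a DSB before the store; on Armada 370 SMP
             * that barrier can deadlock against a blocked interconnect.
             * writel_relaxed() performs the store with no barrier, which
             * is sufficient for device registers accessed under a lock.
             */
            writel_relaxed(hwirq, per_cpu_int_base + IRQ_MASK_REG);
    }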
-
Submitted by Gregory CLEMENT

This patch fixes a bug in the Aurora L2 cache controller when write-through mode is enabled. For the clean operation, even though we don't have to flush the lines, we still need to invalidate them.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Haojian Zhuang

If CONFIG_ARCH_MULTIPLATFORM and CONFIG_ARCH_MVEBU are both enabled, __v7_pj4b_setup is placed between __v7_ca9mp_setup and __v7_setup, but no branch instruction is added between them. On a Cortex-A5/A9 chip, execution therefore falls through into __v7_pj4b_setup as well, and the system hangs.

Signed-off-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 Jan 2013, 2 commits
-
-
Submitted by Rob Herring

In order to support secure and non-secure platforms in multi-platform kernels, errata work-arounds that access secure-only registers need to be disabled. Make all the errata options in this category depend on !CONFIG_ARCH_MULTIPLATFORM. This will effectively remove these errata options as platforms are converted to multi-platform.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Rob Herring

PL310 errata work-arounds using the .set_debug function are only needed on r3p0 and earlier, so check the RTL revision and only set .set_debug on older revisions. Avoiding debug register accesses fixes aborts on non-secure platforms like highbank. It is assumed that non-secure platforms needing these work-arounds have already implemented .set_debug with secure monitor calls.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
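A sketch of the revision gate, assuming this era's cache-l2x0 constant names (L2X0_CACHE_ID_RTL_*) and the static pl310_set_debug helper:

    #include <linux/io.h>
    #include <asm/hardware/cache-l2x0.h>
    #include <asm/outercache.h>

    static void pl310_set_debug(unsigned long val);     /* assumed helper */

    static void __init l2x0_set_debug_sketch(void __iomem *l2x0_base)
    {
            u32 cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);

            /* Only PL310 r3p0 and earlier need the debug-register hack */
            if ((cache_id & L2X0_CACHE_ID_PART_MASK) == L2X0_CACHE_ID_PART_L310 &&
                (cache_id & L2X0_CACHE_ID_RTL_MASK) <= L2X0_CACHE_ID_RTL_R3P0)
                    outer_cache.set_debug = pl310_set_debug;
    }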
-
- 20 Dec 2012, 1 commit
-
-
Submitted by Will Deacon

flush_cache_louis flushes the D-side caches to the Level of Unification Inner Shareable (LoUIS). On uniprocessor CPUs this level is defined as zero, so no flushing takes place. Rather than invent a new interface for UP systems, use our SMP_ON_UP patching code to read the LoUU field from the CLIDR instead.

Cc: <stable@vger.kernel.org>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Dec 2012, 1 commit
-
-
Submitted by Michel Lespinasse

Update the arm arch_get_unmapped_area[_topdown] functions to make use of vm_unmapped_area() instead of implementing a brute-force search.

[akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
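A minimal sketch of the new style, assuming ARM's aliasing-cache alignment rules: the caller fills a vm_unmapped_area_info and lets the core walk the augmented VMA tree.

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <asm/shmparam.h>

    /* Bottom-up search sketch; the _topdown variant sets
     * VM_UNMAPPED_AREA_TOPDOWN and swaps the limits. */
    static unsigned long get_unmapped_area_sketch(unsigned long len,
                                                  unsigned long pgoff,
                                                  bool need_colour)
    {
            struct vm_unmapped_area_info info;

            info.flags = 0;
            info.length = len;
            info.low_limit = TASK_UNMAPPED_BASE;
            info.high_limit = TASK_SIZE;
            /* VIPT aliasing caches: shared mappings aligned to SHMLBA */
            info.align_mask = need_colour ? (SHMLBA - 1) : 0;
            info.align_offset = pgoff << PAGE_SHIFT;
            return vm_unmapped_area(&info);
    }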
-
- 29 Nov 2012, 1 commit
-
-
Submitted by Marek Szyprowski

This patch adds support for the DMA_ATTR_FORCE_CONTIGUOUS attribute to dma_alloc_attrs() in the IOMMU-aware implementation. Physically contiguous buffers are allocated with the Contiguous Memory Allocator (CMA).

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
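A caller-side sketch using the struct dma_attrs API of this era; 'dev' is assumed to be an IOMMU-attached device whose buffer must nevertheless be physically contiguous:

    #include <linux/dma-mapping.h>

    static void *alloc_contig_sketch(struct device *dev, size_t size,
                                     dma_addr_t *dma_handle)
    {
            DEFINE_DMA_ATTRS(attrs);

            /* Force CMA-backed, physically contiguous pages even though
             * the device maps them through an IOMMU. */
            dma_set_attr(DMA_ATTR_FORCE_CONTIGUOUS, &attrs);
            return dma_alloc_attrs(dev, size, dma_handle, GFP_KERNEL, &attrs);
    }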
-
- 26 Nov 2012, 1 commit
-
-
Submitted by Nicolas Pitre

The kvm_seq value has nothing whatsoever to do with the other KVM. Given that KVM support on ARM is imminent, it's best to rename kvm_seq into something that clearly identifies what it is about, i.e. a sequence number for vmalloc section mappings.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 Nov 2012, 1 commit
-
-
Submitted by Gregory CLEMENT

Expose another DMA operations function: arm_dma_set_mask. This function will be added to a custom DMA ops for Armada 370/XP. Depending on its configuration, Armada 370/XP can be set up as a "nearly" coherent architecture. In that case the DMA ops are made of:
- specific functions for this architecture
- already-exposed arm DMA related functions
- arm_dma_set_mask, which was not exposed yet

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
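A sketch of the kind of ops table this export enables; the architecture-specific hooks are elided and the table name is hypothetical:

    #include <linux/dma-mapping.h>
    #include <asm/dma-mapping.h>

    static struct dma_map_ops armada_nearly_coherent_dma_ops = {
            /* ... architecture-specific .map_page/.sync_* hooks here ... */
            .alloc          = arm_dma_alloc,        /* already exported */
            .free           = arm_dma_free,         /* already exported */
            .mmap           = arm_dma_mmap,         /* already exported */
            .set_dma_mask   = arm_dma_set_mask,     /* newly exported */
    };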
-
- 21 Nov 2012, 1 commit
-
-
Submitted by Gregory CLEMENT

PJ4B is an implementation of ARMv7 (as the Cortex-A9 is, for example) released by Marvell. This CPU is currently found in the Armada 370 and Armada XP SoCs. This patch provides support for the specific initialization of this CPU.

Signed-off-by: Yehuda Yitschak <yehuday@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 Nov 2012, 1 commit
-
-
Submitted by Nicolas Pitre

Flushing the cache is needed for the hardware to see the idmap table, and can therefore be done at init time. On ARMv7 it is not necessary to flush L2, so flush_cache_louis() is used here instead. There is no point flushing the cache in setup_mm_for_reboot(), as the caller should be, and already is, taking care of this. If switching the memory map requires a cache flush, then cpu_switch_mm() already includes that operation. What is not done by cpu_switch_mm() on ASID-capable CPUs is TLB flushing, as the whole point of the ASID is to tag the TLB entries and avoid flushing them on a context switch. Since we don't have a clean ASID for the identity mapping, we need to flush the TLB explicitly in that case. Otherwise this is already performed by cpu_switch_mm().

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
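A sketch of the resulting reboot path under the reasoning above (symbol names modeled on arch/arm/mm/idmap.c):

    #include <linux/sched.h>
    #include <asm/pgtable.h>
    #include <asm/proc-fns.h>
    #include <asm/tlbflush.h>

    extern pgd_t *idmap_pgd;

    void setup_mm_for_reboot_sketch(void)
    {
            /* No cache flush here: cpu_switch_mm() handles it if needed */
            cpu_switch_mm(idmap_pgd, &init_mm);
    #ifdef CONFIG_CPU_HAS_ASID
            /* No clean ASID exists for the idmap, so flush explicitly */
            local_flush_tlb_all();
    #endif
    }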
-
- 12 Nov 2012, 1 commit
-
-
Submitted by Nicolas Pitre
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Nov 2012, 4 commits
-
-
Submitted by Will Deacon

PROT_NONE mappings apply the page protection attributes defined by _P000, which translate to PAGE_NONE for ARM. These attributes specify an XN, RDONLY pte that is inaccessible to userspace. However, on kernels configured without support for domains, such a pte *is* accessible to the kernel and can be read via get_user, allowing tasks to read PROT_NONE pages via syscalls such as read/write over a pipe. This patch introduces a new software pte flag, L_PTE_NONE, that is set to identify faulting, present entries.

Signed-off-by: Will Deacon <will.deacon@arm.com>
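A sketch of the resulting definitions for the 2-level page-table format; the bit position and helper constants are assumptions drawn from this era's headers:

    /* Software bit in the pte, invisible to the hardware walker */
    #define L_PTE_NONE      (_AT(pteval_t, 1) << 11)

    /* PROT_NONE: present to Linux, faulting for userspace, and marked
     * with L_PTE_NONE so the fault path can tell it apart. */
    #define __PAGE_NONE     __pgprot(_L_PTE_DEFAULT | L_PTE_RDONLY | \
                                     L_PTE_XN | L_PTE_NONE)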
-
Submitted by Will Deacon

For long-descriptor translation table formats, the ARMv7 architecture defines the last two bits of the second- and third-level descriptors to be:

  x0b - Invalid
  01b - Block (second-level), Reserved (third-level)
  11b - Table (second-level), Page (third-level)

This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to create ptes directly. However, when determining whether a given pte value is present in the low-level page table accessors, we only need to check the least significant bit of the descriptor, allowing us to write faulting, present entries, which are required for PROT_NONE mappings. This patch introduces L_PTE_VALID, which can be used to test whether a pte should fault, and updates the low-level page table accessors accordingly.

Signed-off-by: Will Deacon <will.deacon@arm.com>
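The LPAE encoding above reduces to a pair of masks; a sketch, with names modeled on this era's pgtable-3level.h:

    #define L_PTE_VALID     (_AT(pteval_t, 1) << 0)  /* descriptor bit 0 */
    #define L_PTE_PRESENT   (_AT(pteval_t, 3) << 0)  /* bits 1:0 */

    /* Present if either low bit is set; only bit 0 makes it valid in
     * hardware. A PROT_NONE pte keeps bit 1 and clears bit 0, so it is
     * present to Linux but faults when accessed. */
    #define pte_present(pte)        (pte_val(pte) & L_PTE_PRESENT)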
-
Submitted by Will Deacon

The simplified access permissions model is not used for the classic MMU translation regime, so ensure that it is turned off in the SCTLR prior to turning on address translation for ARMv7.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
Submitted by Will Deacon

When updating the page protection map after calculating the user_pgprot value, the base protection map is temporarily stored in an unsigned long type, causing truncation of the protection bits when LPAE is enabled. This effectively means that calls to mprotect() will corrupt the upper page attributes, clearing the XN bit unconditionally. This patch uses pteval_t to store the intermediate protection values, preserving the upper bits for 64-bit descriptors.

Cc: stable@vger.kernel.org
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
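A sketch of the bug class, for illustration only: with LPAE, pteval_t is 64 bits wide and attributes such as XN live above bit 32, so a detour through unsigned long silently drops them.

    #include <asm/page.h>   /* pteval_t */

    static pteval_t combine_prot_sketch(pteval_t base, pteval_t extra)
    {
            unsigned long broken = base | extra; /* 32-bit: drops bits 32+ (XN is bit 54) */
            pteval_t fixed = base | extra;       /* full 64-bit value preserved */

            (void)broken;
            return fixed;
    }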
-
- 07 Nov 2012, 1 commit
-
-
Submitted by Gregory CLEMENT

The Aurora cache controller was designed to be compatible with the ARM L2 cache controller. It comes with some differences and improvements, such as:
- no cache ID part number is available through hardware (it needs to be provided via the DT);
- an always-write-through mode is available;
- two flavors of the controller, outer cache and system cache (in system cache mode, maintenance operations on L1 are broadcast to the L2, which performs the same operation);
- in outer cache mode, the cache maintenance operations are improved and can be done on a range inside a page, not just on a cache line.

Tested-and-Reviewed-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Yehuda Yitschak <yehuday@marvell.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 06 Nov 2012, 4 commits
-
-
Submitted by Rob Herring

When using DEBUG_LL, the UART's (or other HW's) registers are mapped into early page tables based on the results of the assembly macro addruart. Later, when the page tables are replaced, the same virtual address must remain valid. Historically, this has been ensured by using defines from <mach/iomap.h> in both the implementation of addruart and the machine's .map_io() function. However, with the move to single zImage, we wish to remove <mach/iomap.h>. To enable this, the macro addruart may be used when constructing the late page tables too; addruart is exposed as a C function debug_ll_addr(), and used to set up the required mapping in debug_ll_io_init(), which may be called on an opt-in basis from a machine's .map_io() function.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
[swarren: Masked map.virtual with PAGE_MASK. Checked for NULL results from debug_ll_addr (e.g. when the selected UART isn't valid). Fixed compile when either !CONFIG_DEBUG_LL or CONFIG_DEBUG_SEMIHOSTING.]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
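A sketch of the opt-in call site; the machine name is hypothetical:

    #include <asm/mach/map.h>

    static void __init myboard_map_io(void)     /* hypothetical machine */
    {
            debug_ll_io_init();     /* keep the DEBUG_LL UART mapped */
            /* ... remaining static iotable mappings ... */
    }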
-
Submitted by Will Deacon

When allocating a new ASID, we must take care not to re-assign a reserved ASID value to a new mm. This requires us to check each candidate ASID against those currently reserved by other cores before assigning a new ASID to the current mm. This patch improves the ASID allocation algorithm by using a bitmap-based approach. Rather than iterating over the reserved ASID array for each candidate ASID, we simply find the first zero bit, ensuring that those indices corresponding to reserved ASIDs are set when flushing during a rollover event.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
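A sketch of the allocation core; NUM_USER_ASIDS and the bitmap shape are assumptions modeled on this era's arch/arm/mm/context.c:

    #include <linux/bitops.h>

    #define NUM_USER_ASIDS  256     /* assumed: 8-bit ASIDs */
    static DECLARE_BITMAP(asid_map, NUM_USER_ASIDS);

    static unsigned int new_context_sketch(void)
    {
            /* Reserved ASIDs were set in asid_map at rollover, so the
             * first clear bit is by construction a safe candidate. */
            unsigned int asid = find_first_zero_bit(asid_map, NUM_USER_ASIDS);

            if (asid < NUM_USER_ASIDS)
                    __set_bit(asid, asid_map);
            return asid;
    }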
-
Submitted by Will Deacon

When scheduling a new mm, we take a spinlock so that we can:
1. Safely allocate a new ASID, if required
2. Update our active_asids field without worrying about parallel updates to reserved_asids
3. Ensure that we flush our local TLB, if required

However, this has the nasty effect of serialising context switches across all CPUs in the system. The usual (fast) case is where the next mm has a valid ASID for the current generation. In such a scenario, we can avoid taking the lock and instead use atomic64_xchg to update the active_asids variable for the current CPU. If a rollover occurs on another CPU (which would take the lock), when copying the active_asids into the reserved_asids another atomic64_xchg is used to replace each active_asids entry with 0. The fast path can then detect this case and fall back to spinning on the lock.

Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
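A simplified sketch of the lock-free fast path (generation checks elided; names modeled on mm/context.c):

    #include <linux/atomic.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    static DEFINE_PER_CPU(atomic64_t, active_asids);
    static DEFINE_RAW_SPINLOCK(cpu_asid_lock);

    static void check_and_switch_context_sketch(u64 asid, int cpu)
    {
            /* If a concurrent rollover zeroed our slot, xchg returns 0
             * and we must fall back to the locked slow path. */
            if (asid && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
                    return;                 /* fast path, no lock taken */

            raw_spin_lock(&cpu_asid_lock);
            /* ... allocate a new ASID and flush the local TLB ... */
            raw_spin_unlock(&cpu_asid_lock);
    }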
-
Submitted by Will Deacon

ASIDs are allocated to MMU contexts based on a rolling counter. This means that after 255 allocations we must invalidate all existing ASIDs via an expensive IPI mechanism to synchronise all of the online CPUs and ensure that all tasks execute with an ASID from the new generation. This patch changes the rollover behaviour so that we rely instead on the hardware broadcasting of the TLB invalidation to avoid the IPI calls. This works by keeping track of the active ASID on each core, which is then reserved in the case of a rollover so that currently scheduled tasks can continue to run. For cores without hardware TLB broadcasting, we keep track of pending flushes in a cpumask, so cores can flush their local TLB before scheduling a new mm.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 04 Nov 2012, 1 commit
-
-
Submitted by Viresh Kumar
The variables here are really not used uninitialized.

arch/arm/mm/alignment.c: In function 'do_alignment':
arch/arm/mm/alignment.c:327:15: warning: 'offset.un' may be used uninitialized in this function [-Wmaybe-uninitialized]
arch/arm/mm/alignment.c:748:21: note: 'offset.un' was declared here

Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 Oct 2012, 1 commit
-
-
Submitted by Cyril Chemparathy

This patch fixes the /dev/mem driver to use phys_addr_t for physical addresses. This is required on PAE systems, especially those that run entirely out of >4G physical memory space.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
- 24 Oct 2012, 2 commits
-
-
Submitted by Laurent Pinchart

Commit e9da6e99 ("ARM: dma-mapping: remove custom consistent dma region") removed the last users of the field. Remove it.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Jingoo Han
Fix build warning in __dma_alloc() as below:

arch/arm/mm/dma-mapping.c: In function '__dma_alloc':
arch/arm/mm/dma-mapping.c:653:29: warning: 'page' may be used uninitialized in this function [-Wuninitialized]

Signed-off-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 18 Oct 2012, 2 commits
-
-
Submitted by Dave Martin

Because mov pc,<Rn> never switches instruction set when executed in Thumb code, Thumb-2 kernels will silently execute the target code after cpu_reset as Thumb code, even if the passed code pointer denotes ARM (bit 0 clear). This patch uses bx instead, ensuring the correct instruction set for the target code. Thumb code in the kernel is not supported prior to ARMv7, so other CPUs are not affected.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Gregory CLEMENT

Instead of having multiple functions belonging to outer_cache and filling this structure on the fly, use an outer_cache_fns field inside l2x0_of_data and just memcpy() it into outer_cache depending on the type of l2x0 cache. For the non-DT case, the former code is kept.

[rmk: fixed a style issue]
Tested-and-Reviewed-by: Yehuda Yitschak <yehuday@marvell.com>
Tested-and-Reviewed-by: Lior Amsalem <alior@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
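A sketch of the data-driven selection; the struct layout is an assumption modeled on this era's cache-l2x0.c:

    #include <linux/of.h>
    #include <linux/string.h>
    #include <asm/outercache.h>

    struct l2x0_of_data_sketch {
            void (*setup)(const struct device_node *, u32 *, u32 *);
            void (*save)(void);
            struct outer_cache_fns outer_cache;     /* ready-made ops */
    };

    static void __init pick_outer_cache_sketch(const struct l2x0_of_data_sketch *d)
    {
            /* One memcpy replaces the per-field assignments */
            memcpy(&outer_cache, &d->outer_cache, sizeof(outer_cache));
    }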
-
- 14 Oct 2012, 1 commit
-
-
Submitted by Russell King

As suggested by Andrew Morton:

  This is a pet peeve of mine. Any time there's a long list of items (header file inclusions, kconfig entries, array initalisers, etc) and someone wants to add a new item, they *always* go and stick it at the end of the list. Guys, don't do this. Either put the new item into a randomly-chosen position or, probably better, alphanumerically sort the list.

let's sort all our select statements alphanumerically. This commit was created by the following perl:

    while (<>) {
            while (/\\\s*$/) {
                    $_ .= <>;
            }
            undef %selects if /^\s*config\s+/;
            if (/^\s+select\s+(\w+).*/) {
                    if (defined($selects{$1})) {
                            if ($selects{$1} eq $_) {
                                    print STDERR "Warning: removing duplicated $1 entry\n";
                            } else {
                                    print STDERR "Error: $1 differently selected\n" .
                                            "\tOld: $selects{$1}\n" .
                                            "\tNew: $_\n";
                                    exit 1;
                            }
                    }
                    $selects{$1} = $_;
                    next;
            }
            if (%selects and (/^\s*$/ or /^\s+help/ or /^\s+---help---/ or
                              /^endif/ or /^endchoice/)) {
                    foreach $k (sort (keys %selects)) {
                            print "$selects{$k}";
                    }
                    undef %selects;
            }
            print;
    }
    if (%selects) {
            foreach $k (sort (keys %selects)) {
                    print "$selects{$k}";
            }
    }

It found two duplicates:

    Warning: removing duplicated S5P_SETUP_MIPIPHY entry
    Warning: removing duplicated HARDIRQS_SW_RESEND entry

and they are identical duplicates, hence the shrinkage in the diffstat of two lines. We have four testers reporting success of this change (Tony, Stephen, Linus and Sekhar).

Acked-by: Jason Cooper <jason@lakedaemon.net>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Sekhar Nori <nsekhar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 Oct 2012, 1 commit
-
-
Submitted by Arnd Bergmann

One such warning was recently fixed in a761cebf "ARM: Fix build warning in arch/arm/mm/alignment.c", but only for the Thumb2 case; this fixes the other half.

arch/arm/mm/alignment.c: In function 'do_alignment':
arch/arm/mm/alignment.c:327:15: error: 'offset.un' may be used uninitialized in this function
arch/arm/mm/alignment.c:748:21: note: 'offset.un' was declared here

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Oct 2012, 2 commits
-
-
Submitted by Shaohua Li

.fault can now retry, and the retry can break the state machine of .fault. In filemap_fault, if the page is missing, ra->mmap_miss is increased. On the second try, since the page is now in the page cache, ra->mmap_miss is decreased, and because both happen within one fault we can no longer detect random mmap file access. Add a new flag to indicate that .fault has already been tried once, and skip the ra->mmap_miss decrease on the second try. The filemap_fault state machine is fine with this. I only tested x86, not other archs, but the change for other archs looks obvious (but who knows :)).

Signed-off-by: Shaohua Li <shaohua.li@fusionio.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
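A sketch of the guard in filemap_fault()'s readahead accounting, assuming the new FAULT_FLAG_TRIED bit:

    #include <linux/fs.h>
    #include <linux/mm.h>

    static void async_readahead_accounting_sketch(struct file_ra_state *ra,
                                                  unsigned int fault_flags)
    {
            /* The page was found this time. On a first-time fault that
             * means readahead worked, so credit mmap_miss back. On a
             * retry, the first pass already counted the miss, so skip
             * the decrease to keep random-access detection working. */
            if (!(fault_flags & FAULT_FLAG_TRIED) && ra->mmap_miss > 0)
                    ra->mmap_miss--;
    }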
-
Submitted by Michel Lespinasse
Implement an interval tree as a replacement for the VMA prio_tree. The algorithms are similar to lib/interval_tree.c; however that code can't be directly reused as the interval endpoints are not explicitly stored in the VMA. So instead, the common algorithm is moved into a template and the details (node type, how to get interval endpoints from the node, etc) are filled in using the C preprocessor. Once the interval tree functions are available, using them as a replacement to the VMA prio tree is a relatively simple, mechanical job.

Signed-off-by: Michel Lespinasse <walken@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
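A toy instantiation of the template mechanism described above; this node stores its endpoints directly, whereas the VMA version computes them from vm_pgoff and vm_end:

    #include <linux/interval_tree_generic.h>
    #include <linux/rbtree.h>

    struct itnode {
            struct rb_node rb;
            unsigned long start, last;
            unsigned long __subtree_last;   /* maintained by the template */
    };

    #define ITNODE_START(n) ((n)->start)
    #define ITNODE_LAST(n)  ((n)->last)

    /* Expands to itnode_insert(), itnode_remove(), itnode_iter_first()
     * and itnode_iter_next(), all specialised for struct itnode. */
    INTERVAL_TREE_DEFINE(struct itnode, rb, unsigned long, __subtree_last,
                         ITNODE_START, ITNODE_LAST, static, itnode)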
-
- 02 Oct 2012, 6 commits
-
-
Submitted by Hiroshi Doyu
Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Rob Herring

With ixp2xxx removed, there are no platforms that define arch_is_coherent, so the last occurrences of arch_is_coherent can be removed. Any new platform with coherent I/O should use the coherent DMA mapping functions.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Rob Herring

Remove arch_is_coherent() from the iommu dma ops and implement separate coherent ops functions.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Rob Herring

arch_is_coherent is problematic as it is a global symbol. This doesn't work for multi-platform kernels or platforms which can support per-device coherent DMA. This adds arm_coherent_dma_ops to be used for devices which are connected coherently (i.e. to the ACP port on Cortex-A9 or A15). The arm_dma_ops are modified at boot when arch_is_coherent is true.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
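A per-device usage sketch of the new ops, replacing the global flag with an explicit device-scoped choice:

    #include <linux/device.h>
    #include <asm/dma-mapping.h>

    /* For a device wired to a coherent port (e.g. the Cortex-A9 ACP),
     * select the coherent ops for just that device. */
    static void setup_coherent_device_sketch(struct device *dev)
    {
            set_dma_ops(dev, &arm_coherent_dma_ops);
    }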
-
Submitted by Hiroshi Doyu

With many IOMMU-capable devices, the console gets noisy. Tegra30 has a few dozen IOMMU-capable devices.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Hiroshi Doyu

Skip unnecessary operations if order == 0. This is also a little easier to read.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 29 Sep 2012, 2 commits
-
-
Submitted by Simon Horman

arm: Add ARM ERRATA 775420 workaround

Workaround for the 775420 Cortex-A9 (r2p2, r2p6, r2p8, r2p10, r3p0) erratum. In case a data cache maintenance operation aborts with an MMU exception, it might cause the processor to deadlock. This workaround puts a DSB before executing an ISB if an abort may occur on cache maintenance.

Based on work by Kouei Abe and feedback from Catalin Marinas.

Signed-off-by: Kouei Abe <kouei.abe.cp@rms.renesas.com>
[horms@verge.net.au: Changed to implementation suggested by catalin.marinas@arm.com]
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
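The barrier pattern, as a sketch using the dsb()/isb() macros from this era's asm/barrier.h:

    #include <asm/barrier.h>

    static inline void erratum_775420_barrier_sketch(void)
    {
            /* Drain any outstanding (and possibly aborting) cache
             * maintenance before the ISB so the core cannot deadlock. */
            dsb();
            isb();
    }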
-
Submitted by Lorenzo Pieralisi

Some architectures, like xscale and feroceon, have cache API variants that map cache flushing functions as aliases to the base architecture. This patch adds the required aliases to complete the implementation of the cache flushing LoUIS API.

Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-