- 26 Feb 2013, 1 commit
-
-
By Russell King

Paolo Pisati reports that IPv6 triggers this warning:

BUG: scheduling while atomic: swapper/0/0/0x40000100
Modules linked in:
[<c001b1c4>] (unwind_backtrace+0x0/0xf0) from [<c0503c5c>] (__schedule_bug+0x48/0x5c)
[<c0503c5c>] (__schedule_bug+0x48/0x5c) from [<c0508608>] (__schedule+0x700/0x740)
[<c0508608>] (__schedule+0x700/0x740) from [<c007007c>] (__cond_resched+0x24/0x34)
[<c007007c>] (__cond_resched+0x24/0x34) from [<c05086dc>] (_cond_resched+0x3c/0x44)
[<c05086dc>] (_cond_resched+0x3c/0x44) from [<c0021f6c>] (do_alignment+0x178/0x78c)
[<c0021f6c>] (do_alignment+0x178/0x78c) from [<c00083e0>] (do_DataAbort+0x34/0x98)
[<c00083e0>] (do_DataAbort+0x34/0x98) from [<c0509a60>] (__dabt_svc+0x40/0x60)
Exception stack(0xc0763d70 to 0xc0763db8)
3d60: e97e805e e97e806e 2c000000 11000000
3d80: ea86bb00 0000002c 00000011 e97e807e c076d2a8 e97e805e e97e806e 0000002c
3da0: 3d000000 c0763dbc c04b98fc c02a8490 00000113 ffffffff
[<c0509a60>] (__dabt_svc+0x40/0x60) from [<c02a8490>] (__csum_ipv6_magic+0x8/0xc8)

Fix this by using probe_kernel_address() instead of __get_user().

Cc: <stable@vger.kernel.org>
Reported-by: Paolo Pisati <p.pisati@gmail.com>
Tested-by: Paolo Pisati <p.pisati@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
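A minimal sketch of the idea; the helper shown here is illustrative and much simpler than the real alignment handler:

```c
#include <linux/uaccess.h>

/* Sketch only: read an instruction word from 'addr' inside the alignment
 * fault handler. __get_user() can end up in code that may schedule, which
 * is illegal here because the handler runs atomically; probe_kernel_address()
 * performs the access with page faults disabled and returns -EFAULT instead.
 */
static int read_instr_atomic(unsigned long addr, unsigned long *instr)
{
	/* before: fault = __get_user(*instr, (unsigned long __user *)addr); */
	return probe_kernel_address((unsigned long *)addr, *instr);
}
```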
-
- 25 Feb 2013, 7 commits
-
-
By Marek Szyprowski

This patch removes the page_address() usage in the IOMMU-aware dma-mapping implementation and replaces it with direct use of the CPU virtual address provided by the caller. page_address() returned an incorrect address for pages remapped in the atomic pool, which caused a memory leak.

Reported-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
-
By Seung-Woo Kim

The alignment order for a DMA IOMMU buffer is derived from the buffer size. For large buffers this wastes IOMMU address space, so a configurable parameter that limits the maximum alignment order can reduce the waste.

Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
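A hedged sketch of such a cap; the Kconfig symbol name is an assumption used only for illustration:

```c
#include <linux/gfp.h>

/* Sketch: derive the alignment order from the buffer size, but clamp it to
 * a configurable maximum so a huge buffer does not force an equally huge
 * alignment in the IOMMU address space.
 * CONFIG_ARM_DMA_IOMMU_ALIGNMENT is assumed here for illustration.
 */
static unsigned int iova_align_order(size_t size)
{
	unsigned int order = get_order(size);	/* order of 'size' in pages */

	if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
		order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;

	return order;
}
```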
-
By Marek Szyprowski

An IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages to lowmem once the other parts of the dma-mapping subsystem correctly support highmem pages.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Marek Szyprowski

This patch adds the missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by the optional architecture-specific DMA zone. One can, however, put device-specific CMA regions in a high memory zone to reduce lowmem usage.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
-
By Prathyush K

This patch adds EXPORT_SYMBOL_GPL calls for the three ARM IOMMU functions: arm_iommu_create_mapping, arm_iommu_free_mapping and arm_iommu_attach_device. These are ARM-specific wrapper functions for creating, freeing and using an IOMMU mapping, and they are called by various drivers. If any of those drivers need to be built as dynamic modules, these functions need to be exported.

Changelog v2: use EXPORT_SYMBOL_GPL as suggested by Marek.

Signed-off-by: Prathyush K <prathyush.k@samsung.com>
[m.szyprowski: extended with recently introduced EXPORT_SYMBOL_GPL(arm_iommu_detach_device)]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
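Taking the function names as they appear in the log, the change itself amounts to one-line exports placed next to the function definitions; a sketch:

```c
#include <linux/export.h>

/* Placed after each function body in the ARM dma-mapping code so that GPL
 * modules can create, free, attach to and detach from IOMMU mappings.
 * Names follow the commit log above.
 */
EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);
EXPORT_SYMBOL_GPL(arm_iommu_free_mapping);
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);
```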
-
By Hiroshi Doyu

A counterpart of arm_iommu_attach_device().

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
By Hiroshi Doyu

struct dma_map_ops iommu_ops doesn't have a ->set_dma_mask member, which causes a crash when dma_set_mask() is called from a driver.

Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
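A hedged sketch of filling in the missing callback; the other member values are placeholders for the existing IOMMU entries, not a claim about the exact table contents:

```c
#include <linux/dma-mapping.h>

/* Sketch: without a .set_dma_mask hook, dma_set_mask() has nothing to call
 * for devices using the IOMMU-backed ops. Reusing the generic ARM mask
 * setter fills the gap.
 */
struct dma_map_ops iommu_ops = {
	.alloc		= arm_iommu_alloc_attrs,	/* illustrative */
	.free		= arm_iommu_free_attrs,		/* illustrative */
	/* ... other map/unmap/sync callbacks ... */
	.set_dma_mask	= arm_dma_set_mask,		/* the missing hook */
};
```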
-
- 17 Feb 2013, 6 commits
-
-
By Ben Dooks

The mmid macro is meant to be used to get the mm->context.id data from the mm structure, but it seems to have been missed in a couple of files.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Ben Dooks

Since the new ASID code in b5466f87 ("ARM: mm: remove IPI broadcasting on ASID rollover") was changed to use 64-bit operations, big-endian (BE) operation has been broken because the MM code accesses sub-fields of mm->context.id. When running in BE mode, the values in mm->context.id are stored with the most significant word first, so the LDR in arch/arm/mm/proc-macros.S reads the wrong half of this field. To resolve this, change the LDR in the mmid macro to load from an offset of +4.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

869486d5f51 ("ARM: 7646/1: mm: use static_vm for managing static mapped areas") introduced new warnings:

arch/arm/mm/mmu.c: In function 'pci_reserve_io':
arch/arm/mm/mmu.c:888:16: warning: unused variable 'addr'
arch/arm/mm/mmu.c:887:20: warning: unused variable 'vm'

because it failed to delete the two local variables it no longer uses. Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Joonsoo Kim

A static mapped area is ARM-specific, so it is better not to use the generic vmalloc data structures, that is, vmlist and vmlist_lock, for managing it; doing so also causes needless overhead, and reducing that overhead is worthwhile. We now have the newly introduced static_vm infrastructure. With it, we no longer need to iterate over all mapped areas; instead, we iterate only over the static mapped areas, which reduces the overhead of finding a matching area. The architecture dependency on the vmalloc layer is also removed, which helps the maintainability of that layer.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Joonsoo Kim

The current implementation uses an ARM-specific flag, VM_ARM_STATIC_MAPPING, to distinguish ARM static mapped areas. The purpose of a static mapped area is to be reused when the entire physical address range of an ioremap request can be covered by it. This implementation causes needless overhead in some cases. For example, assume there is only one static mapped area and vmlist has 300 areas: every ioremap call checks all 300 areas to decide whether one matches. Moreover, even if there is no static mapped area at all, every ioremap call still checks all 300 areas. If we maintain an extra list just for static mapped areas, we can eliminate this overhead: with such a list, if there is one static mapped area, we check only that one area and proceed quickly. In fact this is not a critical problem, because ioremap is not used frequently, but reducing the overhead is still worthwhile. Another reason for this work is to remove an architecture dependency on the vmalloc layer. vmlist and vmlist_lock are internal data structures of the vmalloc layer; some debugging and statistics code inevitably uses them, but it is preferable that they be used as little as possible outside of vmalloc.c. This patch therefore introduces an ARM-specific infrastructure for static mapped areas. The following patch will use it to resolve the problem described above.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
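A hedged sketch of the idea: keep static mappings on their own short list so an ioremap lookup walks only those entries, not the whole vmlist. The structure and helper names here are illustrative, not necessarily those of the patch:

```c
#include <linux/list.h>
#include <linux/vmalloc.h>

/* Illustrative only: an ARM-private record for one static mapping. */
struct static_vm {
	struct vm_struct vm;		/* the underlying mapping description */
	struct list_head list;		/* linkage on the static-mapping list */
};

static LIST_HEAD(static_vmlist);

/* Find a static mapping covering [paddr, paddr + size). Only the short
 * static list is walked, never every vmalloc area in the system.
 */
static struct static_vm *find_static_vm_paddr(phys_addr_t paddr, size_t size)
{
	struct static_vm *svm;

	list_for_each_entry(svm, &static_vmlist, list) {
		struct vm_struct *vm = &svm->vm;

		if (vm->phys_addr <= paddr &&
		    paddr + size <= vm->phys_addr + vm->size)
			return svm;
	}
	return NULL;
}
```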
-
By Joonsoo Kim

Now that there are no users of vmregion, remove it.

Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Feb 2013, 1 commit
-
-
By Dinh Nguyen

mach-socfpga is another platform that needs to use v7_invalidate_l1 to bring up additional cores. There was a comment that the ideal place for v7_invalidate_l1 would be arm/mm/cache-v7.S.

Signed-off-by: Dinh Nguyen <dinguyen@altera.com>
Acked-by: Simon Horman <horms+renesas@verge.net.au>
Acked-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Pavel Machek <pavel@denx.de>
Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Pavel Machek <pavel@denx.de>
Tested-by: Stephen Warren <swarren@nvidia.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Olof Johansson <olof@lixom.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Rob Herring <rob.herring@calxeda.com>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Magnus Damm <magnus.damm@gmail.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
-
- 08 Feb 2013, 1 commit
-
-
By Russell King

Realview fails to boot with this warning:

BUG: spinlock lockup suspected on CPU#0, init/1
lock: 0xcf8bde10, .magic: dead4ead, .owner: init/1, .owner_cpu: 0
Backtrace:
[<c00185d8>] (dump_backtrace+0x0/0x10c) from [<c03294e8>] (dump_stack+0x18/0x1c)
r6:cf8bde10 r5:cf83d1c0 r4:cf8bde10 r3:cf83d1c0
[<c03294d0>] (dump_stack+0x0/0x1c) from [<c018926c>] (spin_dump+0x84/0x98)
[<c01891e8>] (spin_dump+0x0/0x98) from [<c0189460>] (do_raw_spin_lock+0x100/0x198)
[<c0189360>] (do_raw_spin_lock+0x0/0x198) from [<c032cbac>] (_raw_spin_lock+0x3c/0x44)
[<c032cb70>] (_raw_spin_lock+0x0/0x44) from [<c01c9224>] (pl011_console_write+0xe8/0x11c)
[<c01c913c>] (pl011_console_write+0x0/0x11c) from [<c002aea8>] (call_console_drivers.clone.7+0xdc/0x104)
[<c002adcc>] (call_console_drivers.clone.7+0x0/0x104) from [<c002b320>] (console_unlock+0x2e8/0x454)
[<c002b038>] (console_unlock+0x0/0x454) from [<c002b8b4>] (vprintk_emit+0x2d8/0x594)
[<c002b5dc>] (vprintk_emit+0x0/0x594) from [<c0329718>] (printk+0x3c/0x44)
[<c03296dc>] (printk+0x0/0x44) from [<c002929c>] (warn_slowpath_common+0x28/0x6c)
[<c0029274>] (warn_slowpath_common+0x0/0x6c) from [<c0029304>] (warn_slowpath_null+0x24/0x2c)
[<c00292e0>] (warn_slowpath_null+0x0/0x2c) from [<c0070ab0>] (lockdep_trace_alloc+0xd8/0xf0)
[<c00709d8>] (lockdep_trace_alloc+0x0/0xf0) from [<c00c0850>] (kmem_cache_alloc+0x24/0x11c)
[<c00c082c>] (kmem_cache_alloc+0x0/0x11c) from [<c00bb044>] (__get_vm_area_node.clone.24+0x7c/0x16c)
[<c00bafc8>] (__get_vm_area_node.clone.24+0x0/0x16c) from [<c00bb7b8>] (get_vm_area_caller+0x48/0x54)
[<c00bb770>] (get_vm_area_caller+0x0/0x54) from [<c0020064>] (__alloc_remap_buffer.clone.15+0x38/0xb8)
[<c002002c>] (__alloc_remap_buffer.clone.15+0x0/0xb8) from [<c0020244>] (__dma_alloc+0x160/0x2c8)
[<c00200e4>] (__dma_alloc+0x0/0x2c8) from [<c00204d8>] (arm_dma_alloc+0x88/0xa0)
[<c0020450>] (arm_dma_alloc+0x0/0xa0) from [<c00beb00>] (dma_pool_alloc+0xcc/0x1a8)
[<c00bea34>] (dma_pool_alloc+0x0/0x1a8) from [<c01a9d14>] (pl08x_fill_llis_for_desc+0x28/0x568)
[<c01a9cec>] (pl08x_fill_llis_for_desc+0x0/0x568) from [<c01aab8c>] (pl08x_prep_slave_sg+0x258/0x3b0)
[<c01aa934>] (pl08x_prep_slave_sg+0x0/0x3b0) from [<c01c9f74>] (pl011_dma_tx_refill+0x140/0x288)
[<c01c9e34>] (pl011_dma_tx_refill+0x0/0x288) from [<c01ca748>] (pl011_start_tx+0xe4/0x120)
[<c01ca664>] (pl011_start_tx+0x0/0x120) from [<c01c54a4>] (__uart_start+0x48/0x4c)
[<c01c545c>] (__uart_start+0x0/0x4c) from [<c01c632c>] (uart_start+0x2c/0x3c)
[<c01c6300>] (uart_start+0x0/0x3c) from [<c01c795c>] (uart_write+0xcc/0xf4)
[<c01c7890>] (uart_write+0x0/0xf4) from [<c01b0384>] (n_tty_write+0x1c0/0x3e4)
[<c01b01c4>] (n_tty_write+0x0/0x3e4) from [<c01acfe8>] (tty_write+0x144/0x240)
[<c01acea4>] (tty_write+0x0/0x240) from [<c01ad17c>] (redirected_tty_write+0x98/0xac)
[<c01ad0e4>] (redirected_tty_write+0x0/0xac) from [<c00c371c>] (vfs_write+0xbc/0x150)
[<c00c3660>] (vfs_write+0x0/0x150) from [<c00c39c0>] (sys_write+0x4c/0x78)
[<c00c3974>] (sys_write+0x0/0x78) from [<c0014460>] (ret_fast_syscall+0x0/0x3c)

This happens because the DMA allocation code is not respecting atomic allocations correctly. GFP flags should not be tested against GFP_ATOMIC to determine whether an atomic allocation is being requested: GFP_ATOMIC is not a flag but a value. The GFP bitmask flags are all prefixed with __GFP_. The rest of the kernel tests for __GFP_WAIT not being set to indicate an atomic allocation; we need to do the same.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
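The heart of the fix is how an atomic request is detected; a minimal sketch:

```c
#include <linux/gfp.h>

/* Sketch: GFP_ATOMIC is a composite value, not a single bit, so comparing
 * the mask against it misclassifies other non-blocking requests. The
 * reliable test is whether the caller allowed the allocator to sleep.
 */
static bool is_atomic_alloc(gfp_t gfp)
{
	/* wrong: return gfp == GFP_ATOMIC;   (misses e.g. GFP_NOWAIT)      */
	return !(gfp & __GFP_WAIT);	/* right: not allowed to sleep => atomic */
}
```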
-
- 01 Feb 2013, 2 commits
-
-
By Uwe Kleine-König

Some ARM cores are not capable of running in ARM mode (e.g. Cortex-M3), so they obviously cannot enter the kernel in ARM mode. Make an exception for them and let them enter in THUMB mode.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Message-Id: 1358162123-30113-1-git-send-email-u.kleine-koenig@pengutronix.de
Acked-by: Nicolas Pitre <nico@linaro.org>
-
By Uwe Kleine-König

This makes cr_alignment a constant 0 in order to break code that tries to modify the value, since such code is likely built on wrong assumptions when CONFIG_CPU_CP15 isn't defined. For code that only reads the value, 0 is a more or less fine value to report.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Message-Id: 1358413196-5609-2-git-send-email-u.kleine-koenig@pengutronix.de (v8)
-
- 24 Jan 2013, 2 commits
-
-
By Christoffer Dall

Add a method (hyp_idmap_setup) to populate a hyp pgd with an identity mapping of the code contained in the .hyp.idmap.text section. Offer a method to drop this identity mapping through hyp_idmap_teardown. Make all of the above depend on CONFIG_ARM_VIRT_EXT and CONFIG_ARM_LPAE.

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
-
By Christoffer Dall

KVM uses the stage-2 page tables and the Hyp page table format, so we define the fields and page protection flags needed by KVM. The nomenclature is:

- page_hyp: PL2 code/data mappings
- page_hyp_device: PL2 device mappings (vgic access)
- page_s2: Stage-2 code/data page mappings
- page_s2_device: Stage-2 device mappings (vgic access)

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Christoffer Dall <c.dall@virtualopensystems.com>
-
- 19 Jan 2013, 2 commits
-
-
By Santosh Shilimkar

Commit 8fb54284 ("ARM: mm: Add strongly ordered descriptor support") added the XN flag at section level but missed it at PTE level. Fix it by adding L_PTE_XN to the MT_MEMORY_SO PTE descriptor.

Reported-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
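A hedged sketch of what the corrected mem_types[] fragment looks like; the exact set of companion flags is an assumption based on the description above, and only the L_PTE_XN addition is the point:

```c
/* Fragment of the mem_types[] table (sketch only): the section-level
 * descriptor already carried XN, but the PTE level did not, so strongly
 * ordered mappings built from small pages were unintentionally executable.
 */
[MT_MEMORY_SO] = {
	.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
		     L_PTE_MT_UNCACHED | L_PTE_XN,	/* XN was missing here */
	.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_S |
		     PMD_SECT_UNCACHED | PMD_SECT_XN,
	.domain    = DOMAIN_KERNEL,
},
```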
-
By Russell King

Subhash Jadavani reported this partial backtrace:

Now consider this call stack from the MMC block driver (this is on an ARMv7 based board):

[<c001b50c>] (v7_dma_inv_range+0x30/0x48) from [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c)
[<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) from [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c)
[<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) from [<c0017ff8>] (dma_map_sg+0x3c/0x114)

This is caused by incrementing the struct page pointer and running off the end of the sparsemem page array. Fix this by incrementing by pfn instead, and converting the pfn to a struct page.

Cc: <stable@vger.kernel.org>
Suggested-by: James Bottomley <JBottomley@Parallels.com>
Tested-by: Subhash Jadavani <subhashj@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
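A minimal sketch of the looping pattern described above, assuming a simplified per-page maintenance callback:

```c
#include <linux/mm.h>

/* Sketch: with sparsemem there is no guarantee that "page + 1" is a valid
 * struct page, so stepping through a multi-page buffer must advance the pfn
 * and convert back, rather than incrementing the struct page pointer.
 */
static void maint_pages(struct page *page, size_t size,
			void (*op)(struct page *))
{
	unsigned long pfn = page_to_pfn(page);
	size_t left = size;

	while (left) {
		/* before: op(page); page++; */
		op(pfn_to_page(pfn));
		pfn++;
		left -= min_t(size_t, left, PAGE_SIZE);
	}
}
```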
-
- 11 Jan 2013, 1 commit
-
-
By Will Deacon

ARM_VIRT_EXT is a property of CPU_V7, but it does not adversely affect other CPUs that can be built into the same kernel image (i.e. ARMv6+). This patch defaults ARM_VIRT_EXT to y if CPU_V7, allowing hypervisors such as KVM to make better use of the option and to rely on hyp-mode boot support.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
- 07 Jan 2013, 3 commits
-
-
By Gregory CLEMENT

The use of writel instead of writel_relaxed led to a deadlock in some situations (SMP on Armada 370, for instance). Using writel_relaxed, as is done in the rest of this driver, fixes the bug.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
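A minimal sketch of the accessor swap; the wrapper name is illustrative:

```c
#include <linux/io.h>

/* Sketch: writel() on ARM implies an outer-cache sync barrier, and calling
 * it from inside the L2 cache maintenance path can re-take the same
 * spinlock and deadlock on SMP. The _relaxed accessor performs only the
 * store, which is all this path needs.
 */
static inline void l2_write_reg(void __iomem *base, u32 val, u32 offset)
{
	/* before: writel(val, base + offset);  -- may re-enter the L2 lock */
	writel_relaxed(val, base + offset);
}
```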
-
By Gregory CLEMENT

This patch fixes a bug in the Aurora L2 cache controller when write-through mode is enabled: for the clean operation, even though we don't have to flush the lines, we still need to invalidate them.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Haojian Zhuang

If CONFIG_ARCH_MULTIPLATFORM and CONFIG_ARCH_MVEBU are both enabled, __v7_pj4b_setup is placed between __v7_ca9mp_setup and __v7_setup, but no jump instruction is added. If the chip is a Cortex-A5/A9 it therefore falls through __v7_pj4b_setup as well, resulting in a system hang.

Signed-off-by: Haojian Zhuang <haojian.zhuang@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 Jan 2013, 2 commits
-
-
By Rob Herring

In order to support secure and non-secure platforms in multi-platform kernels, errata work-arounds that access secure-only registers need to be disabled. Make all the errata options that fall into this category depend on !CONFIG_ARCH_MULTIPLATFORM. This will effectively remove these errata options as platforms are converted to multi-platform.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Rob Herring

The PL310 errata work-arounds using the .set_debug function are only needed on r3p0 and earlier, so check the revision and only set .set_debug on older revisions. Avoiding the debug register accesses fixes aborts on non-secure platforms such as highbank. It is assumed that non-secure platforms needing these work-arounds have already implemented .set_debug with secure monitor calls.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
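A hedged sketch of the revision gate; the register and revision macro names are assumptions modeled on the l2x0 driver's conventions:

```c
#include <linux/io.h>

/* Sketch: read the cache ID register once at init time and install the
 * debug-register hook only for PL310 revisions that actually need it
 * (r3p0 and earlier). Macro names here are illustrative.
 */
static void __init l2x0_setup_debug_hook(void __iomem *l2x0_base)
{
	u32 cache_id = readl_relaxed(l2x0_base + L2X0_CACHE_ID);

	if ((cache_id & L2X0_CACHE_ID_PART_MASK) == L2X0_CACHE_ID_PART_L310 &&
	    (cache_id & L2X0_CACHE_ID_RTL_MASK) <= L2X0_CACHE_ID_RTL_R3P0)
		outer_cache.set_debug = pl310_set_debug;
}
```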
-
- 20 Dec 2012, 1 commit
-
-
By Will Deacon

flush_cache_louis flushes the D-side caches to the point of unification inner-shareable. On uniprocessor CPUs this is defined as zero, so no flushing takes place. Rather than invent a new interface for UP systems, use our SMP_ON_UP patching code to read the LoUU from the CLIDR instead.

Cc: <stable@vger.kernel.org>
Cc: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Tested-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Dec 2012, 1 commit
-
-
By Michel Lespinasse

Update the ARM arch_get_unmapped_area[_topdown] functions to make use of vm_unmapped_area() instead of implementing a brute-force search.

[akpm@linux-foundation.org: remove now-unused COLOUR_ALIGN_DOWN()]
Signed-off-by: Michel Lespinasse <walken@google.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
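A hedged sketch of what the bottom-up case looks like with vm_unmapped_area(); the cache-colouring mask shown is illustrative rather than the exact one used on ARM:

```c
#include <linux/mm.h>
#include <linux/sched.h>
#include <asm/shmparam.h>

/* Sketch: instead of walking the VMA list by hand, describe the request
 * and let the common helper find a suitable gap.
 */
static unsigned long arm_get_unmapped_area_sketch(struct mm_struct *mm,
						  unsigned long len,
						  bool do_align)
{
	struct vm_unmapped_area_info info;

	info.flags = 0;				/* 0 = bottom-up search */
	info.length = len;
	info.low_limit = mm->mmap_base;
	info.high_limit = TASK_SIZE;
	/* align shared mappings to the cache colour (illustrative mask) */
	info.align_mask = do_align ? ((SHMLBA - 1) & PAGE_MASK) : 0;
	info.align_offset = 0;

	return vm_unmapped_area(&info);	/* address, or a -errno value */
}
```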
-
- 29 Nov 2012, 1 commit
-
-
By Marek Szyprowski

This patch adds support for the DMA_ATTR_FORCE_CONTIGUOUS attribute in the IOMMU-aware implementation of dma_alloc_attrs(). The Contiguous Memory Allocator is used to allocate physically contiguous buffers.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 26 Nov 2012, 1 commit
-
-
By Nicolas Pitre

The kvm_seq value has nothing whatsoever to do with the other KVM. Given that KVM support on ARM is imminent, it's best to rename kvm_seq into something that clearly identifies what it is about, i.e. a sequence number for vmalloc section mappings.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 Nov 2012, 1 commit
-
-
By Gregory CLEMENT

Expose another DMA operations function: arm_dma_set_mask. This function will be added to a custom DMA ops table for Armada 370/XP. Depending on its configuration, Armada 370/XP can be set up as a "nearly" coherent architecture, in which case the DMA ops are made of:

- functions specific to this architecture
- ARM DMA-related functions that are already exposed
- arm_dma_set_mask, which was not exposed yet

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
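A hedged sketch of how the newly exposed function slots into a platform-private ops table; the generic helper names are assumptions used for illustration:

```c
#include <linux/dma-mapping.h>

/* Sketch: a platform-private ops table (the "nearly coherent" Armada
 * 370/XP case from the log) mixing its own callbacks with generic ARM
 * ones. It only works if arm_dma_set_mask() is declared and exported.
 */
extern int arm_dma_set_mask(struct device *dev, u64 dma_mask);

static struct dma_map_ops armada_coherent_dma_ops = {
	.alloc		= arm_dma_alloc,	/* existing ARM helpers ...   */
	.free		= arm_dma_free,		/* (names illustrative)       */
	.set_dma_mask	= arm_dma_set_mask,	/* ... now visible to platforms */
	/* platform-specific map/unmap/sync callbacks go here */
};
```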
-
- 21 Nov 2012, 1 commit
-
-
By Gregory CLEMENT

PJ4B is an implementation of the ARMv7 architecture (like the Cortex-A9, for example) released by Marvell. This CPU is currently found in the Armada 370 and Armada XP SoCs. This patch provides support for the specific initialization of this CPU.

Signed-off-by: Yehuda Yitschak <yehuday@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 13 Nov 2012, 1 commit
-
-
By Nicolas Pitre

Flushing the cache is needed for the hardware to see the idmap table, and can therefore be done at init time. On ARMv7 it is not necessary to flush L2, so flush_cache_louis() is used here instead. There is no point flushing the cache in setup_mm_for_reboot() as the caller should be, and already is, taking care of this. If switching the memory map requires a cache flush, then cpu_switch_mm() already includes that operation. What cpu_switch_mm() does not do on ASID-capable CPUs is TLB flushing, as the whole point of the ASID is to tag the TLBs and avoid flushing them on a context switch. Since we don't have a clean ASID for the identity mapping, we need to flush the TLB explicitly in that case; otherwise this is already performed by cpu_switch_mm().

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 12 Nov 2012, 1 commit
-
-
By Nicolas Pitre

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 09 Nov 2012, 4 commits
-
-
By Will Deacon

PROT_NONE mappings apply the page protection attributes defined by _P000, which translate to PAGE_NONE for ARM. These attributes specify an XN, RDONLY pte that is inaccessible to userspace. However, on kernels configured without support for domains, such a pte *is* accessible to the kernel and can be read via get_user, allowing tasks to read PROT_NONE pages via syscalls such as read/write over a pipe. This patch introduces a new software pte flag, L_PTE_NONE, that is set to identify faulting, present entries.

Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

For long-descriptor translation table formats, the ARMv7 architecture defines the last two bits of the second- and third-level descriptors to be:

x0b - Invalid
01b - Block (second-level), Reserved (third-level)
11b - Table (second-level), Page (third-level)

This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to create ptes directly. However, when determining whether a given pte value is present in the low-level page table accessors, we only need to check the least significant bit of the descriptor, allowing us to write faulting, present entries, which are required for PROT_NONE mappings. This patch introduces L_PTE_VALID, which can be used to test whether a pte should fault, and updates the low-level page table accessors accordingly.

Signed-off-by: Will Deacon <will.deacon@arm.com>
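A small sketch built from the values stated above; the pte_valid() helper name is illustrative:

```c
/* Sketch: bit 0 alone decides whether the hardware treats a long-descriptor
 * entry as valid, so a pte can be written as "present to software, faulting
 * to hardware" by keeping the VALID bit clear.
 */
#define L_PTE_VALID	(_AT(pteval_t, 1) << 0)		/* descriptor bit 0 */
#define L_PTE_PRESENT	(_AT(pteval_t, 3) << 0)		/* valid page/table */

/* Low-level check: will an access through this entry fault? */
#define pte_valid(pte)	(pte_val(pte) & L_PTE_VALID)
```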
-
By Will Deacon

The simplified access permissions model is not used for the classic MMU translation regime, so ensure that it is turned off in the SCTLR prior to turning on address translation for ARMv7.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
-
By Will Deacon

When updating the page protection map after calculating the user_pgprot value, the base protection map is temporarily stored in an unsigned long, causing truncation of the protection bits when LPAE is enabled. This effectively means that calls to mprotect() will corrupt the upper page attributes, clearing the XN bit unconditionally. This patch uses pteval_t to store the intermediate protection values, preserving the upper bits for 64-bit descriptors.

Cc: stable@vger.kernel.org
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
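A minimal sketch of the fix inside the protection-map update loop (a fragment, not a standalone function):

```c
/* Sketch: with LPAE, page protection values are 64-bit. Staging them in a
 * 32-bit unsigned long silently drops the upper attribute bits (e.g. XN
 * high up in the descriptor); pteval_t preserves them.
 */
pteval_t v;		/* before: unsigned long v; */
int i;

for (i = 0; i < 16; i++) {
	v = pgprot_val(protection_map[i]);
	protection_map[i] = __pgprot(v | user_pgprot);
}
```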
-