- 30 July 2012 (4 commits)
-
-
Submitted by Marek Szyprowski
This patch adds support for the DMA_ATTR_NO_KERNEL_MAPPING attribute for IOMMU allocations, which lets drivers save precious kernel virtual address space for large buffers that are intended to be accessed only from userspace. This patch is heavily based on initial work kindly provided by Abhinav Kochhar <abhinav.k@samsung.com>. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
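A minimal usage sketch of the attribute described above, in the struct dma_attrs style of this kernel generation (the device pointer and buffer size are placeholders; newer kernels pass attributes as an unsigned long bitmask instead):

```c
#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

static void *alloc_userspace_only_buffer(struct device *dev, size_t size,
					 dma_addr_t *handle)
{
	DEFINE_DMA_ATTRS(attrs);

	/* Ask for no kernel virtual mapping: the returned value is an
	 * opaque cookie, only usable with dma_mmap_attrs()/dma_free_attrs(). */
	dma_set_attr(DMA_ATTR_NO_KERNEL_MAPPING, &attrs);

	return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
}
```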
-
Submitted by Marek Szyprowski
This patch fixes an incorrect check in the error path. When the allocation of the first page fails, a kernel oops occurs due to accessing element -1 of the pages array. Reported-by: Sylwester Nawrocki <s.nawrocki@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Marek Szyprowski
Add some sanity checks and forbid mmapping of buffers into VMA areas larger than the allocated DMA buffer. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Marek Szyprowski
This patch changes the dma-mapping subsystem to use generic vmalloc areas for all consistent DMA allocations. This increases the total size limit of consistent allocations and removes platform hacks and a lot of duplicated code. Atomic allocations are served from a special pool preallocated at boot, because vmalloc areas cannot be reliably created in atomic context. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Minchan Kim <minchan@kernel.org>
-
- 16 July 2012 (1 commit)
-
-
Submitted by Prathyush K
Fix the following warning:

  WARNING: at mm/vmalloc.c:1471 __iommu_free_buffer+0xcc/0xd0()
  Trying to vfree() nonexistent vm area (ef095000)
  Modules linked in:
  [<c0015a18>] (unwind_backtrace+0x0/0xfc) from [<c0025a94>] (warn_slowpath_common+0x54/0x64)
  [<c0025a94>] (warn_slowpath_common+0x54/0x64) from [<c0025b38>] (warn_slowpath_fmt+0x30/0x40)
  [<c0025b38>] (warn_slowpath_fmt+0x30/0x40) from [<c0016de0>] (__iommu_free_buffer+0xcc/0xd0)
  [<c0016de0>] (__iommu_free_buffer+0xcc/0xd0) from [<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138)
  [<c0229a5c>] (exynos_drm_free_buf+0xe4/0x138) from [<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc)
  [<c022b358>] (exynos_drm_gem_destroy+0x80/0xfc) from [<c0211230>] (drm_gem_object_free+0x28/0x34)
  [<c0211230>] (drm_gem_object_free+0x28/0x34) from [<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8)
  [<c0211bd0>] (drm_gem_object_release_handle+0xcc/0xd8) from [<c01abe10>] (idr_for_each+0x74/0xb8)
  [<c01abe10>] (idr_for_each+0x74/0xb8) from [<c02114e4>] (drm_gem_release+0x1c/0x30)
  [<c02114e4>] (drm_gem_release+0x1c/0x30) from [<c0210ae8>] (drm_release+0x608/0x694)
  [<c0210ae8>] (drm_release+0x608/0x694) from [<c00b75a0>] (fput+0xb8/0x228)
  [<c00b75a0>] (fput+0xb8/0x228) from [<c00b40c4>] (filp_close+0x64/0x84)
  [<c00b40c4>] (filp_close+0x64/0x84) from [<c0029d54>] (put_files_struct+0xe8/0x104)
  [<c0029d54>] (put_files_struct+0xe8/0x104) from [<c002b930>] (do_exit+0x608/0x774)
  [<c002b930>] (do_exit+0x608/0x774) from [<c002bae4>] (do_group_exit+0x48/0xb4)
  [<c002bae4>] (do_group_exit+0x48/0xb4) from [<c002bb60>] (sys_exit_group+0x10/0x18)
  [<c002bb60>] (sys_exit_group+0x10/0x18) from [<c000ee80>] (ret_fast_syscall+0x0/0x30)

This patch modifies the condition used while freeing to match the condition used while allocating. This fixes the above warning, which arises when the array size is equal to PAGE_SIZE: allocation is done using kzalloc but the free was done using vfree. Signed-off-by: Prathyush K <prathyush.k@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
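A hedged sketch of the pattern behind this fix (function and variable names are illustrative, not the exact arch/arm/mm/dma-mapping.c code): the free path must use the same size test as the allocation path, otherwise a buffer obtained with kzalloc() can end up in vfree().

```c
#include <linux/slab.h>
#include <linux/vmalloc.h>

static struct page **alloc_page_array(size_t array_size, gfp_t gfp)
{
	/* small arrays come from the slab, large ones from vmalloc */
	if (array_size <= PAGE_SIZE)
		return kzalloc(array_size, gfp);
	return vzalloc(array_size);
}

static void free_page_array(struct page **pages, size_t array_size)
{
	/* must mirror the allocation condition exactly,
	 * including the array_size == PAGE_SIZE case */
	if (array_size <= PAGE_SIZE)
		kfree(pages);
	else
		vfree(pages);
}
```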
-
- 05 July 2012 (1 commit)
-
-
Submitted by Russell King
Fix the warning:

  arch/arm/mm/init.c: In function 'arm_memblock_init':
  arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast

by fixing the typecast in the definition of arm_dma_limit when DMA_ZONE is disabled. This was missed in 4986e5c7 (ARM: mm: fix type of the arm_dma_limit global variable). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 01 July 2012 (1 commit)
-
-
Submitted by Nicolas Pitre
On ARM with the 2-level page table format, a PMD entry is represented by two consecutive section entries covering 2MB of virtual space. However, static mappings have always been allowed to use separate 1MB section entries. In practice this means a static mapping may create half-populated PMDs via create_mapping(). Since commit 0536bdf3 (ARM: move iotable mappings within the vmalloc region) those static mappings are located in the vmalloc area. We must ensure no such half-populated PMDs are accessible once vmalloc() or ioremap() start looking at the vmalloc area for nearby free virtual address ranges, or various things leading to a kernel crash will happen. Signed-off-by: Nicolas Pitre <nico@linaro.org> Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: "R, Sricharan" <r.sricharan@ti.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Cc: stable@vger.kernel.org Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 25 June 2012 (1 commit)
-
-
Submitted by Marek Szyprowski
The IOMMU-aware dma_alloc_attrs() implementation allocates buffers in power-of-two chunks to improve performance and take advantage of large page mappings provided by some IOMMU hardware. However, due to a subtle bug, the current code allocated those chunks in smallest-to-largest order, which completely killed all the advantages of using chunks larger than a page. If a 4KiB chunk was mapped as the first chunk, the consecutive chunks were not aligned to the power-of-two matching their size, and IOMMU drivers were not able to use internal mappings of any size other than 4KiB (the largest common denominator of alignment and chunk size). This patch fixes the issue by switching to the correct largest-to-smallest chunk allocation sequence. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
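A simplified sketch of the largest-to-smallest strategy described above (illustrative only, not the exact __iommu_alloc_buffer() code): each iteration grabs the biggest power-of-two chunk that still fits, so every chunk ends up naturally aligned to its own size.

```c
#include <linux/bitops.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static int alloc_buffer_chunks(struct page **pages, size_t count, gfp_t gfp)
{
	size_t i = 0;

	while (count) {
		/* largest power-of-two number of pages not exceeding count */
		unsigned int order = __fls(count);
		struct page *page;
		size_t j;

		if (order >= MAX_ORDER)
			order = MAX_ORDER - 1;

		page = alloc_pages(gfp, order);
		/* fall back to smaller chunks when memory is fragmented */
		while (!page && order) {
			order--;
			page = alloc_pages(gfp, order);
		}
		if (!page)
			return -ENOMEM;

		/* split so each 4 KiB page can be handled individually */
		split_page(page, order);
		for (j = 0; j < (1UL << order); j++)
			pages[i + j] = page + j;

		i += 1UL << order;
		count -= 1UL << order;
	}
	return 0;
}
```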
-
- 11 June 2012 (2 commits)
-
-
Submitted by Marek Szyprowski
arm_dma_limit stores the physical address of the maximal address accessible by DMA, so the phys_addr_t type makes much more sense for it than u32. This patch fixes the following build warning:

  arch/arm/mm/init.c:380: warning: comparison of distinct pointer types lacks a cast

Reported-by: Russell King <linux@arm.linux.org.uk> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Sachin Kamat
Fixes the following sparse warnings:

  arch/arm/mm/dma-mapping.c:231:15: warning: symbol 'consistent_base' was not declared. Should it be static?
  arch/arm/mm/dma-mapping.c:326:8: warning: symbol 'coherent_pool_size' was not declared. Should it be static?

Signed-off-by: Sachin Kamat <sachin.kamat@linaro.org> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 04 June 2012 (1 commit)
-
-
Submitted by Marek Szyprowski
CMA has been enabled unconditionally on all ARMv6+ systems to solve the long-standing issue of double kernel mappings for all DMA coherent buffers. This, however, created a dependency on CONFIG_EXPERIMENTAL for the whole ARM architecture, which should really be avoided. This patch removes that dependency and lets one use the old, well-tested dma-mapping implementation on ARMv6+ systems without having to enable EXPERIMENTAL features. Reported-by: Russell King <linux@arm.linux.org.uk> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 21 May 2012 (12 commits)
-
-
Submitted by Vitaly Andrianov
The dma_contiguous_remap() function clears existing section maps using the wrong size (PGDIR_SIZE instead of PMD_SIZE). This bug does not affect non-LPAE systems, where PGDIR_SIZE and PMD_SIZE are the same. On LPAE systems, however, it causes the kernel to hang at this point. The fix has been tested on both LPAE and non-LPAE kernel builds. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
Submitted by Marek Szyprowski
This patch adds support for CMA to the dma-mapping subsystem for the ARM architecture. By default a global CMA area is used, but specific devices are allowed to have their own private memory areas if required (these can be created with the dma_declare_contiguous() function during board initialisation). Contiguous memory areas reserved for DMA are remapped with 2-level page tables on boot. Once a buffer is requested, the low memory kernel mapping is updated to match the requested memory access type. GFP_ATOMIC allocations are performed from a special pool which is created early during boot, so that remapping page attributes is not needed at allocation time. CMA has been enabled unconditionally for ARMv6+ systems. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> CC: Michal Nazarewicz <mina86@mina86.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-by: Rob Clark <rob.clark@linaro.org> Tested-by: Ohad Ben-Cohen <ohad@wizery.com> Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org> Tested-by: Robert Nelson <robertcnelson@gmail.com> Tested-by: Barry Song <Baohua.Song@csr.com>
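A hedged board-file sketch of reserving a device-private CMA area as described above; the device, the size and the exact dma_declare_contiguous() signature are assumptions based on this patch generation.

```c
#include <linux/dma-contiguous.h>
#include <linux/init.h>
#include <linux/platform_device.h>

/* hypothetical platform device that needs its own contiguous pool */
extern struct platform_device my_video_device;

static void __init board_reserve_cma(void)
{
	/* 16 MiB private area, no base/limit constraints */
	if (dma_declare_contiguous(&my_video_device.dev, 16 << 20, 0, 0))
		pr_warn("my_video_device: CMA area reservation failed\n");
}
```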
-
Submitted by Marek Szyprowski
This patch adds a complete implementation of the DMA-mapping API for devices which have IOMMU support. The implementation tries to optimize DMA address space usage by remapping all possible physical memory chunks into a single DMA address space chunk. The DMA address space is managed on top of a bitmap stored in the dma_iommu_mapping structure kept in device->archdata. Platform setup code has to initialize the parameters of the DMA address space (base address, size, allocation precision order) with the arm_iommu_create_mapping() function. To reduce the size of the bitmap, all allocations are aligned to the specified order of base 4 KiB pages. The dma_alloc_* functions allocate physical memory in chunks, each with the alloc_pages() function, to avoid failing if physical memory gets fragmented. In the worst case the allocated buffer is composed of 4 KiB page chunks. The dma_map_sg() function minimizes the total number of DMA address space chunks by merging physical memory chunks into one larger DMA address space chunk. If the requested chunk (scatterlist entry) boundaries match physical page boundaries, most dma_map_sg() requests will result in creating only one chunk in the DMA address space. dma_map_page() simply creates a mapping for the given page(s) in the DMA address space. All DMA functions also perform the required cache operations like their counterparts from the ARM linear physical memory mapping version. This patch contains code and fixes kindly provided by: - Krishna Reddy <vdumpa@nvidia.com>, - Andrzej Pietrasiewicz <andrzej.p@samsung.com>, - Hiroshi DOYU <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
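A hedged sketch of the platform setup step mentioned above; the base address, size and order values are arbitrary examples, and the four-argument form of arm_iommu_create_mapping() is assumed from this patch generation only (later kernels dropped the order argument).

```c
#include <linux/err.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <asm/dma-iommu.h>

static int __init board_attach_iommu(struct device *dev)
{
	struct dma_iommu_mapping *mapping;

	/* 128 MiB of IO virtual address space at 0x80000000,
	 * bitmap granularity of order 0 (plain 4 KiB pages) */
	mapping = arm_iommu_create_mapping(&platform_bus_type,
					   0x80000000, 128 << 20, 0);
	if (IS_ERR(mapping))
		return PTR_ERR(mapping);

	/* subsequent dma_alloc_attrs() and dma_map_page() calls on this
	 * device are routed through the IOMMU-aware dma_map_ops */
	return arm_iommu_attach_device(dev, mapping);
}
```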
-
Submitted by Marek Szyprowski
This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
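Under this scheme, the old writecombine helper is expected to reduce to roughly the following (a sketch under the dma_attrs API of this era; the function name, device and size are placeholders):

```c
#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

static void *sketch_alloc_writecombine(struct device *dev, size_t size,
				       dma_addr_t *handle, gfp_t gfp)
{
	DEFINE_DMA_ATTRS(attrs);

	/* request a write-combined mapping instead of a cached/coherent one */
	dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);

	return dma_alloc_attrs(dev, size, handle, gfp, &attrs);
}
```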
-
Submitted by Marek Szyprowski
This patch performs a global cleanup of the DMA mapping implementation for the ARM architecture. Some of the tiny helper functions have been moved to the caller code, some have been merged together. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
This patch removes the dma bounce hooks from the common DMA mapping implementation on the ARM architecture and creates a separate set of dma_map_ops for dma bounce devices. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
This patch converts all dma_sg methods to be generic (independent of the current DMA mapping implementation for the ARM architecture). All DMA sg operations are now implemented on top of the respective dma_map_page/dma_sync_single_for_* operations from the dma_map_ops structure. Before this patch there were custom methods for all scatter/gather related operations. They iterated over the whole scatterlist and called the cache-related operations directly (which in turn checked whether the dma bounce code was in use and called the respective version). This patch changes them not to use such a shortcut. Instead, they perform a similar loop over the scatterlist and call methods from the device's dma_map_ops structure. This enables us to use device-dependent implementations of the cache-related operations (direct linear or dma bounce) depending on the provided dma_map_ops structure. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
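An approximate sketch of the idea (not the exact arm_dma_map_sg() code, which also unwinds partial mappings on error): the scatter/gather operation becomes a loop over the per-page callback taken from the device's dma_map_ops.

```c
#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>
#include <linux/scatterlist.h>

static int sketch_map_sg(struct device *dev, struct scatterlist *sg,
			 int nents, enum dma_data_direction dir,
			 struct dma_attrs *attrs)
{
	struct dma_map_ops *ops = get_dma_ops(dev);
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i) {
		/* delegate to whatever backend (linear or dmabounce)
		 * this particular device has been given */
		s->dma_address = ops->map_page(dev, sg_page(s), s->offset,
					       s->length, dir, attrs);
		if (dma_mapping_error(dev, s->dma_address))
			return 0;	/* 0 mapped entries signals failure */
	}
	return nents;
}
```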
-
Submitted by Marek Szyprowski
This patch modifies the dma-mapping implementation on the ARM architecture to use the common dma_map_ops structure and the asm-generic/dma-mapping-common.h helpers. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
This patch removes the need for the offset parameter in the dma bounce functions. This is required to let the dma-mapping framework on the ARM architecture use the common, generic dma_map_ops based dma-mapping helpers.

Background and more detailed explanation: the dma_*_range_* functions have been available since the early days of the DMA mapping API. They are the correct way of doing partial syncs on a buffer (usually used by network device drivers). This patch changes only the internal implementation of the dma bounce functions, letting them tunnel through the dma_map_ops structure. The driver API stays unchanged, so drivers are still obliged to call the dma_*_range_* functions to keep the code clean and easy to understand.

The only drawback of this patch is reduced detection of DMA API abuse. Consider the following code:

  dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE);
  dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE);

Without the patch such code fails, because the dma bounce code is unable to find the bounce buffer for the given dma_address. After the patch the above sync call is equivalent to:

  dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE);

which succeeds. I don't consider this a real problem, because DMA API abuse should be caught by the debug_dma_* function family. This patch lets us simplify the internal low-level implementation without changing the driver-visible API.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code easier to read. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
Replace all calls to printk with the pr_* function family. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
-
Submitted by Marek Szyprowski
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
-
- 17 May 2012 (1 commit)
-
-
Submitted by Vitaly Andrianov
A zero value for prot_sect in the memory types table implies that section mappings should never be created for the memory type in question. This is checked for in alloc_init_section(). With LPAE, we set a bit to mask access flag faults for kernel mappings. This breaks the aforementioned (!prot_sect) check in alloc_init_section(). This patch fixes the bug by first checking for a non-zero prot_sect before setting the PMD_SECT_AF flag. Signed-off-by: Vitaly Andrianov <vitalya@ti.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 May 2012 (1 commit)
-
-
Submitted by Russell King
Cc: <stable@vger.kernel.org> Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 07 May 2012 (2 commits)
-
-
Submitted by Chao Xie
SoC chips using the Tauros2 cache need to disable and resume the Tauros2 cache across SoC suspend/resume. Signed-off-by: Chao Xie <chao.xie@marvell.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
-
Submitted by Chao Xie
When ARCH_SUSPEND_POSSIBLE is enabled, definitions of cpu_mohawk_do_suspend and cpu_mohawk_do_resume are needed. Signed-off-by: Chao Xie <chao.xie@marvell.com> Signed-off-by: Haojian Zhuang <haojian.zhuang@gmail.com>
-
- 05 May 2012 (1 commit)
-
-
Submitted by Russell King
This patch removes support for ARMv3 CPUs, which haven't worked properly for quite some time (see the FIXME comment in arch/arm/mm/fault.c). The only V3 parts left are the cache model for ARMv3, which is needed for some odd reason by ARM740T CPUs, and the ability to build with -march=armv3, which is required for the RiscPC platform due to its bus structure. Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 02 May 2012 (1 commit)
-
-
Submitted by Will Deacon
The cacheflush syscall can fail for two reasons: (1) the arguments are invalid (nonsensical address range or no VMA), or (2) the region generates a translation fault on a VIPT or PIPT cache. This patch allows do_cache_op to return an error code to userspace in either of these cases. The various coherent_user_range implementations are modified to return 0 in the case of VIVT caches or -EFAULT in the case of an abort on v6/v7 cores. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
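From userspace, the change is visible as an error return from the ARM-private cacheflush syscall; a hedged example of checking it (the wrapper name and addresses are placeholders):

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/unistd.h>		/* __ARM_NR_cacheflush */

static int arm_cacheflush(void *start, void *end)
{
	/* after this patch, a bogus or faulting range yields -1/EFAULT
	 * instead of being silently ignored */
	if (syscall(__ARM_NR_cacheflush, start, end, 0) != 0) {
		perror("cacheflush");
		return -1;
	}
	return 0;
}
```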
-
- 28 April 2012 (1 commit)
-
-
Submitted by Stephen Boyd
Fix the following section mismatch warnings:

  WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference from the function arm_memory_present() to the function .init.text:memory_present()
  The function arm_memory_present() references the function __init memory_present(). This is often because arm_memory_present lacks a __init annotation or the annotation of memory_present is wrong.

  WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch in reference from the function alloc_init_pud() to the function .init.text:alloc_init_section()
  The function alloc_init_pud() references the function __init alloc_init_section(). This is often because alloc_init_pud lacks a __init annotation or the annotation of alloc_init_section is wrong.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
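The usual fix pattern for such warnings, shown as a hedged fragment with the bodies elided: a caller that only runs at boot and calls __init code should itself be marked __init so both land in the discarded .init.text section.

```c
#include <linux/init.h>

/* callee lives in .init.text */
void __init memory_present(int nid, unsigned long start, unsigned long end);

/* caller now annotated as well, so the cross-section reference goes away */
static void __init arm_memory_present(void)
{
	/* ... register each memblock region via memory_present() ... */
}
```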
-
- 23 April 2012 (3 commits)
-
-
Submitted by Will Deacon
PL310 errata #588369 and #727915 require writes to the debug registers of the cache controller to work around known problems. Writing these registers on an L220 may cause deadlock, so ensure that we only perform this operation when we identify a PL310 at probe time. Cc: stable@vger.kernel.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon
The workaround for PL310 erratum #753970 can lead to deadlock on systems with an L220 cache controller. This patch makes the workaround effective only when the cache controller is identified as a PL310 at probe time. Cc: stable@vger.kernel.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon
Erratum #326103 ("FSR write bit incorrect on a SWP to read-only memory") only affects the ARM 1136 core prior to r1p0. The workaround disassembles the faulting instruction to determine whether it was a read or write access, and does so on all v6 cores. An issue has been reported on the ARM 11MPCore whereby loading the faulting instruction may happen in parallel with that page being unmapped, resulting in a deadlock due to the lack of TLB broadcasting in hardware: http://lists.infradead.org/pipermail/linux-arm-kernel/2012-March/091561.html This patch limits the workaround so that it is only used on affected cores, which are known to be UP only. Other v6 cores can rely on the FSR to indicate the access type correctly. Cc: stable@vger.kernel.org Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 17 April 2012 (3 commits)
-
-
Submitted by Catalin Marinas
The current_mm variable was used to store the new mm between the switch_mm() and switch_to() calls, where an IPI to reset the context could have set the wrong mm. Since interrupts are disabled during context switch, there is no need for this variable: current->active_mm already points to the current mm when interrupts are re-enabled. Reviewed-by: Will Deacon <will.deacon@arm.com> Tested-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Tested-by: Marc Zyngier <Marc.Zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Catalin Marinas
Since ASIDs must be unique to an mm across all the CPUs in a system, the __new_context() function needs to broadcast a context reset event to all CPUs during ASID allocation if a roll-over occurred. Such IPIs cannot be issued with interrupts disabled, and ARM had to define __ARCH_WANT_INTERRUPTS_ON_CTXSW. This patch changes the check_context() function to check_and_switch_context(), called from switch_mm(). In the case of ASID-capable CPUs (ARMv6 onwards), if a new ASID is needed and interrupts are disabled, it defers the __new_context() and cpu_switch_mm() calls to the post-lock switch hook where interrupts are enabled. Setting the reserved TTBR0 was also moved to check_and_switch_context() from cpu_v7_switch_mm(). Reviewed-by: Will Deacon <will.deacon@arm.com> Tested-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Tested-by: Marc Zyngier <Marc.Zyngier@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
Submitted by Will Deacon
On ARMv7 CPUs that cache first-level page table entries (like the Cortex-A15), using a reserved ASID while changing the TTBR or flushing the TLB is unsafe. This is because the CPU may cache the first-level entry as the result of a speculative memory access while the reserved ASID is assigned. After the process owning the page tables dies, the memory will be reallocated and may be written with junk values which can be interpreted as global, valid PTEs by the processor. This will result in the TLB being populated with bogus global entries. This patch avoids the use of a reserved context ID in the v7 switch_mm and ASID rollover code by temporarily using the swapper_pg_dir pointed at by TTBR1, which contains only global entries that are not tagged with ASIDs. Reviewed-by: Frank Rowand <frank.rowand@am.sony.com> Tested-by: Marc Zyngier <Marc.Zyngier@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: add LPAE support] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 16 April 2012 (1 commit)
-
-
Submitted by Jonathan Austin
Currently, when ThumbEE is not enabled (!CONFIG_ARM_THUMBEE), the ThumbEE register states are not saved/restored at context switch. The default state of the ThumbEE Ctrl register (TEECR) allows userspace accesses to the ThumbEE Base Handler register (TEEHBR). This can cause unexpected behaviour when people use ThumbEE on !CONFIG_ARM_THUMBEE kernels, as well as allowing covert communication, e.g. between userspace tasks running inside chroot jails. This patch sets up TEECR in order to prevent userspace access to TEEHBR when !CONFIG_ARM_THUMBEE. In this case, tasks are sent SIGILL if they try to access TEEHBR. Cc: stable@vger.kernel.org Reviewed-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 13 April 2012 (2 commits)
-
-
Submitted by Will Deacon
Commit 94e5a85b ("ARM: earlier initialization of vectors page") made it the responsibility of paging_init to initialise the vectors page. This patch adds a call to early_trap_init for the !CONFIG_MMU case, placing the vectors at CONFIG_VECTORS_BASE. Cc: Jonathan Austin <jonathan.austin@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
Submitted by Will Deacon
The description for the CPU_HIGH_VECTOR Kconfig option for nommu builds doesn't make any sense. This patch fixes up the trivial grammatical error. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 10 April 2012 (1 commit)
-
-
Submitted by Kautuk Consul
Commit 8878a539 was done by me to make the page fault handler retryable as well as interruptible. Due to that commit, there is a mistake in the way the tsk->[maj|min]_flt counters get incremented for VM_FAULT_ERROR: if VM_FAULT_ERROR is returned in the fault flags by handle_mm_fault, then either maj_flt or min_flt will get incremented. This is wrong, as in the case of VM_FAULT_ERROR we need to skip ahead to the error handling code in do_page_fault. Added a check after the call to __do_page_fault() to check for (fault & VM_FAULT_ERROR). Signed-off-by: Kautuk Consul <consul.kautuk@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
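A hedged approximation of the resulting control flow in do_page_fault() (simplified from arch/arm/mm/fault.c; the surrounding retry and error-handling paths are elided):

```c
	fault = __do_page_fault(mm, addr, fsr, flags, tsk);

	/* Only account major/minor faults and consider a retry when the
	 * handler did not report an error; VM_FAULT_ERROR falls straight
	 * through to the error handling further down. */
	if (!(fault & VM_FAULT_ERROR) && (flags & FAULT_FLAG_ALLOW_RETRY)) {
		if (fault & VM_FAULT_MAJOR)
			tsk->maj_flt++;
		else
			tsk->min_flt++;

		if (fault & VM_FAULT_RETRY) {
			flags &= ~FAULT_FLAG_ALLOW_RETRY;
			goto retry;
		}
	}
```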
-