- 30 May 2013, 1 commit

By Vitaly Andrianov

This patch fixes the alloc_init_pud() function to use phys_addr_t instead of unsigned long when passing in the phys argument. This is an extension to commit 97092e0c (ARM: pgtable: use phys_addr_t for physical addresses), which applied similar changes elsewhere in the ARM memory management code.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
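
A minimal sketch of the widened signature, loosely following the mmu.c code of that era (the body is paraphrased from memory, not the verbatim patch, and the callee name varies across versions):

```c
/*
 * The phys parameter is widened to phys_addr_t so physical addresses
 * above 4GB (LPAE) are not truncated on the way down to the PMD/PTE
 * setup code.
 */
static void __init alloc_init_pud(pgd_t *pgd, unsigned long addr,
				  unsigned long end,
				  phys_addr_t phys, /* was: unsigned long */
				  const struct mem_type *type)
{
	pud_t *pud = pud_offset(pgd, addr);
	unsigned long next;

	do {
		next = pud_addr_end(addr, end);
		alloc_init_pmd(pud, addr, next, phys, type);
		phys += next - addr;
	} while (pud++, addr = next, addr != end);
}
```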

- 17 April 2013, 1 commit

By Joonsoo Kim

tcm_init() calls iotable_init(), which uses the early_alloc variants that allocate from memblock. Using memblock allocation directly after bootmem has been initialized should not be permitted, because bootmem cannot know what has been additionally reserved. So move tcm_init() to a safe place, before bootmem is initialized.

Tested-by: Linus Walleij <linus.walleij@linaro.org> (on the U300)
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 23 March 2013, 1 commit

By Sricharan R

With LPAE enabled, alloc_init_section() does not map the entire address space for unaligned addresses. The issue was also reproduced with CMA + LPAE: CMA tries to map 16MB with page-granularity mappings during boot, alloc_init_pte() is called, and out of the 16MB only 2MB gets mapped while the rest remains inaccessible. Because of this, OMAP5 boot is broken with CMA + LPAE enabled. Fix the issue by ensuring that the entire address range is mapped.

Signed-off-by: R Sricharan <r.sricharan@ti.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <chris@cloudcar.com>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Laura Abbott <lauraa@codeaurora.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Christoffer Dall <chris@cloudcar.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
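
A hedged sketch of the shape of the fix: walk the range PMD by PMD, take the section path only when start, end, and phys are all section-aligned, and otherwise fall back to PTE mappings so no part of the range is skipped. Helper names follow mainline as remembered; treat details as assumptions.

```c
static void __init alloc_init_pmd(pud_t *pud, unsigned long addr,
				  unsigned long end, phys_addr_t phys,
				  const struct mem_type *type)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	unsigned long next;

	do {
		next = pmd_addr_end(addr, end);

		/* take the section path only for fully aligned pieces */
		if (type->prot_sect &&
		    ((addr | next | phys) & ~SECTION_MASK) == 0)
			__map_init_section(pmd, addr, next, phys, type);
		else
			alloc_init_pte(pmd, addr, next,
				       __phys_to_pfn(phys), type);

		phys += next - addr;
	} while (pmd++, addr = next, addr != end);
}
```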

- 17 February 2013, 2 commits

By Russell King

Commit 869486d5f51 (ARM: 7646/1: mm: use static_vm for managing static mapped areas) introduced new warnings:

arch/arm/mm/mmu.c: In function 'pci_reserve_io':
arch/arm/mm/mmu.c:888:16: warning: unused variable 'addr'
arch/arm/mm/mmu.c:887:20: warning: unused variable 'vm'

because it failed to delete the two local variables it no longer used. Fix this.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Joonsoo Kim

A statically mapped area is ARM-specific, so it is better not to use the generic vmalloc data structures, that is, vmlist and vmlist_lock, for managing it. Doing so also causes needless overhead, which is worth reducing. We now have the newly introduced static_vm infrastructure: with it, we no longer need to iterate over all mapped areas, only over the static ones, which reduces the cost of finding a matching area. The architecture dependency on the vmalloc layer is removed as well, which helps the maintainability of that layer.

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Rob Herring <rob.herring@calxeda.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
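
An illustrative sketch of the static_vm idea; the struct layout and flag names are assumptions based on the description above:

```c
struct static_vm {
	struct vm_struct vm;
	struct list_head list;
};

static LIST_HEAD(static_vmlist);

/* find a static mapping of the right type covering [paddr, paddr+size) */
static struct static_vm *find_static_vm_paddr(phys_addr_t paddr,
					      size_t size, unsigned int mtype)
{
	struct static_vm *svm;

	list_for_each_entry(svm, &static_vmlist, list) {
		struct vm_struct *vm = &svm->vm;

		if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
			continue;
		if (vm->phys_addr > paddr ||
		    paddr + size - 1 > vm->phys_addr + vm->size - 1)
			continue;

		return svm;
	}
	return NULL;
}
```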

- 01 February 2013, 1 commit

By Uwe Kleine-König

This makes cr_alignment a constant 0 when CONFIG_CPU_CP15 isn't defined, to break code that tries to modify the value, since such code is likely built on wrong assumptions. For code that only reads the value, 0 is a more or less fine value to report.

Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Message-Id: 1358413196-5609-2-git-send-email-u.kleine-koenig@pengutronix.de (v8)
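
A sketch of the idea, assuming the usual cp15 header layout (the exact mainline form may differ):

```c
/*
 * Without CP15 there is no control register, so readers see a
 * constant and any attempt to assign to cr_alignment fails to build.
 */
#ifdef CONFIG_CPU_CP15
extern unsigned long cr_alignment;	/* defined in kernel entry code */
#else
#define cr_alignment	0UL		/* read-only: writes now break the build */
#endif
```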

- 24 January 2013, 1 commit

By Christoffer Dall

KVM uses the stage-2 page tables and the Hyp page table format, so we define the fields and page protection flags needed by KVM. The nomenclature is this:

- page_hyp: PL2 code/data mappings
- page_hyp_device: PL2 device mappings (vgic access)
- page_s2: Stage-2 code/data page mappings
- page_s2_device: Stage-2 device mappings (vgic access)

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>

- 19 January 2013, 1 commit

By Santosh Shilimkar

Commit 8fb54284 (ARM: mm: Add strongly ordered descriptor support) added the XN flag at the section level but missed it at the PTE level. Fix it by adding L_PTE_XN to the MT_MEMORY_SO PTE descriptor.

Reported-by: Richard Woodruff <r-woodruff2@ti.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
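
The descriptor in question looks approximately like this in the mem_types[] table of mmu.c (reproduced from memory, so treat the exact bit list as an assumption):

```c
[MT_MEMORY_SO] = {
	.prot_pte  = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
		     L_PTE_MT_UNCACHED | L_PTE_XN,	/* L_PTE_XN added */
	.prot_l1   = PMD_TYPE_TABLE,
	.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_S |
		     PMD_SECT_UNCACHED | PMD_SECT_XN,	/* XN already here */
	.domain    = DOMAIN_KERNEL,
},
```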

- 09 November 2012, 1 commit

By Will Deacon

When updating the page protection map after calculating the user_pgprot value, the base protection map is temporarily stored in an unsigned long type, causing truncation of the protection bits when LPAE is enabled. This effectively means that calls to mprotect() will corrupt the upper page attributes, clearing the XN bit unconditionally. This patch uses pteval_t to store the intermediate protection values, preserving the upper bits for 64-bit descriptors.

Cc: stable@vger.kernel.org
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
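
A sketch of the fixed loop in build_mem_type_table(); the loop bounds and surrounding code are assumptions:

```c
int i;

for (i = 0; i < 16; i++) {
	/*
	 * was: unsigned long v = ...; with LPAE that truncates bits
	 * 32..63 of the 64-bit descriptor, including XN at bit 54.
	 */
	pteval_t v = pgprot_val(protection_map[i]);
	protection_map[i] = __pgprot(v | user_pgprot);
}
```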

- 06 November 2012, 1 commit

By Rob Herring

When using DEBUG_LL, the UART's (or other HW's) registers are mapped into early page tables based on the results of the assembly macro addruart. Later, when the page tables are replaced, the same virtual address must remain valid. Historically, this has been ensured by using defines from <mach/iomap.h> in both the implementation of addruart and the machine's .map_io() function. However, with the move to single zImage, we wish to remove <mach/iomap.h>. To enable this, the macro addruart may be used when constructing the late page tables too; addruart is exposed as a C function debug_ll_addr(), and used to set up the required mapping in debug_ll_io_init(), which may be called on an opt-in basis from a machine's .map_io() function.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
[swarren: Masked map.virtual with PAGE_MASK. Checked for NULL results from debug_ll_addr (e.g. when the selected UART isn't valid). Fixed compile when either !CONFIG_DEBUG_LL or CONFIG_DEBUG_SEMIHOSTING.]
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Olof Johansson <olof@lixom.net>
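
A sketch of debug_ll_io_init() as described; this is close to the merged code as remembered, but treat it as an approximation:

```c
#ifdef CONFIG_DEBUG_LL
void __init debug_ll_io_init(void)
{
	struct map_desc map;

	/* ask the addruart-derived helper for the phys/virt pair */
	debug_ll_addr(&map.pfn, &map.virtual);
	if (!map.pfn || !map.virtual)
		return;		/* selected UART isn't valid */

	map.pfn = __phys_to_pfn(map.pfn);
	map.virtual &= PAGE_MASK;
	map.length = PAGE_SIZE;
	map.type = MT_DEVICE;

	/* recreate the early mapping in the final page tables */
	iotable_init(&map, 1);
}
#endif
```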

- 02 October 2012, 1 commit

By Rob Herring

With ixp2xxx removed, there are no platforms left that define arch_is_coherent, so the last occurrences of arch_is_coherent can be removed. Any new platform with coherent i/o should use the coherent dma mapping functions.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

- 25 August 2012, 2 commits

By Jonathan Austin

With !HIGHMEM, sanity_check_meminfo checks for banks that completely or partially overlap the vmalloc region. The test for partial overlap checks __va(bank->start + bank->size) > vmalloc_min. This is not appropriate if there is a non-linear translation between virtual and physical addresses, as bank->start + bank->size is actually in the bank following the one being interrogated. In most cases, even when using SPARSEMEM, this is not problematic, as the subsequent bank will start at a higher va than the one in question. However, if the physical-to-virtual address conversion is not monotonically increasing, the incorrect test could result in a bank not being truncated when it should be. This patch ensures we perform the va-pa conversion on memory from the bank we are interested in, not the following one.

Reported-by: ??? (Steve) <zhanzhenbo@gmail.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
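
A hedged sketch of the corrected partial-overlap test; the variable names mirror the description, and the truncation arithmetic is an assumption:

```c
/* translate the last byte of this bank, not the first byte of the next */
void *bank_end = __va(bank->start + bank->size - 1);

if (bank_end >= vmalloc_min || bank_end < __va(bank->start)) {
	/* partial overlap (or va wrap): truncate below vmalloc_min */
	bank->size = (char *)vmalloc_min - (char *)__va(bank->start);
}
```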

By Russell King

Murali Nalajala reports a regression whereby ioremapping address zero results in an oops dump:

Unable to handle kernel paging request at virtual address fa200000
pgd = d4f80000
[fa200000] *pgd=00000000
Internal error: Oops: 5 [#1] PREEMPT SMP ARM
Modules linked in:
CPU: 0    Tainted: G        W    (3.4.0-g3b5f728-00009-g638207a #13)
PC is at msm_pm_config_rst_vector_before_pc+0x8/0x30
LR is at msm_pm_boot_config_before_pc+0x18/0x20
pc : [<c0078f84>]    lr : [<c007903c>]    psr: a0000093
sp : c0837ef0  ip : cfe00000  fp : 0000000d
r10: da7efc17  r9 : 225c4278  r8 : 00000006
r7 : 0003c000  r6 : c085c824  r5 : 00000001
r4 : fa101000  r3 : fa200000
r2 : c095080c  r1 : 002250fc  r0 : 00000000
Flags: NzCv  IRQs off  FIQs on  Mode SVC_32  ISA ARM  Segment kernel
Control: 10c5387d  Table: 25180059  DAC: 00000015
[<c0078f84>] (msm_pm_config_rst_vector_before_pc+0x8/0x30) from [<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20)
[<c007903c>] (msm_pm_boot_config_before_pc+0x18/0x20) from [<c007a55c>] (msm_pm_power_collapse+0x410/0xb04)
[<c007a55c>] (msm_pm_power_collapse+0x410/0xb04) from [<c007b17c>] (arch_idle+0x294/0x3e0)
[<c007b17c>] (arch_idle+0x294/0x3e0) from [<c000eed8>] (default_idle+0x18/0x2c)
[<c000eed8>] (default_idle+0x18/0x2c) from [<c000f254>] (cpu_idle+0x90/0xe4)
[<c000f254>] (cpu_idle+0x90/0xe4) from [<c057231c>] (rest_init+0x88/0xa0)
[<c057231c>] (rest_init+0x88/0xa0) from [<c07ff890>] (start_kernel+0x3a8/0x40c)
Code: c0704256 e12fff1e e59f2020 e5923000 (e5930000)

This is caused by the 'reserved' entries which we insert (see 19b52abe - ARM: 7438/1: fill possible PMD empty section gaps), which get matched for physical address zero. Resolve this by marking these reserved entries with a different flag.

Cc: <stable@vger.kernel.org>
Tested-by: Murali Nalajala <mnalajal@codeaurora.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 25 July 2012, 1 commit

By Rob Herring

This adds a fixed virtual mapping for PCI i/o addresses. The mapping is located in the last 2MB of the vmalloc region (0xfee00000-0xff000000). 2MB is used to align with PMD size, but IO_SPACE_LIMIT is 1MB. The space is reserved after .map_io and can be mapped at any time later with pci_ioremap_io. Platforms which need an early i/o mapping (e.g. for a vga console) can call pci_map_io_early in their .map_io function. This has changed completely from the 1st implementation, which only supported creating the static mapping at .map_io time.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Nicolas Pitre <nico@linaro.org>
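
A hedged usage sketch: pci_ioremap_io() takes an offset into the fixed region plus the physical address to map; the address below is a made-up example, not from the patch.

```c
/* in a host controller's setup code */
static int __init example_pci_io_setup(void)
{
	/* maps 64K of i/o space at PCI_IO_VIRT_BASE + 0 */
	return pci_ioremap_io(0, 0x90000000 /* example cpu phys addr */);
}
```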

- 10 July 2012, 1 commit

By Catalin Marinas

The vectors page has traditionally been mapped as WT on UP systems, but this creates a mismatched alias with the directly mapped RAM, which uses WB attributes. On newer processors like Cortex-A15 this has implications for data/instruction coherency at the point of unification (usually L2). This patch removes that restriction.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 01 July 2012, 1 commit

By Nicolas Pitre

On ARM with the 2-level page table format, a PMD entry is represented by two consecutive section entries covering 2MB of virtual space. However, static mappings have always been allowed to use separate 1MB section entries. This means in practice that a static mapping may create half-populated PMDs via create_mapping(). Since commit 0536bdf3 (ARM: move iotable mappings within the vmalloc region), those static mappings are located in the vmalloc area. We must ensure no such half-populated PMDs are accessible once vmalloc() or ioremap() start looking at the vmalloc area for nearby free virtual address ranges, or various things leading to a kernel crash will happen.

Signed-off-by: Nicolas Pitre <nico@linaro.org>
Reported-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: "R, Sricharan" <r.sricharan@ti.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
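
A sketch of the mechanism, modelled on the description (helper names and flags are assumptions): reserve the unused 1MB half of the PMD with a placeholder vm_struct so the vmalloc allocator never places anything there.

```c
static void __init pmd_empty_section_gap(unsigned long addr)
{
	struct vm_struct *vm;

	vm = early_alloc_aligned(sizeof(*vm), __alignof__(*vm));
	vm->addr = (void *)addr;
	vm->size = SECTION_SIZE;	/* 1MB placeholder */
	vm->flags = VM_IOREMAP;		/* later given its own marker flag */
	vm->caller = pmd_empty_section_gap;
	vm_area_add_early(vm);
}
```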

- 29 June 2012, 1 commit

By Alessandro Rubini

Signed-off-by: Alessandro Rubini <rubini@gnudd.com>
Acked-by: Giancarlo Asnaghi <giancarlo.asnaghi@st.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Alan Cox <alan@linux.intel.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 21 May 2012, 1 commit

By Marek Szyprowski

This patch adds support for CMA to the dma-mapping subsystem for the ARM architecture. By default a global CMA area is used, but specific devices are allowed to have their own private memory areas if required (they can be created with the dma_declare_contiguous() function during board initialisation). Contiguous memory areas reserved for DMA are remapped with 2-level page tables on boot. Once a buffer is requested, a low-memory kernel mapping is updated to match the requested memory access type. GFP_ATOMIC allocations are performed from a special pool which is created early during boot; this way, remapping page attributes is not needed at allocation time. CMA has been enabled unconditionally for ARMv6+ systems.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>

- 17 May 2012, 1 commit

By Vitaly Andrianov

A zero value for prot_sect in the memory types table implies that section mappings should never be created for the memory type in question. This is checked for in alloc_init_section(). With LPAE, we set a bit to mask access flag faults for kernel mappings; this breaks the aforementioned (!prot_sect) check in alloc_init_section(). This patch fixes the bug by first checking for a non-zero prot_sect before setting the PMD_SECT_AF flag.

Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
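
The guard looks roughly like this (an LPAE-only hunk in build_mem_type_table() is assumed):

```c
#ifdef CONFIG_ARM_LPAE
	/*
	 * Mask access-flag faults for kernel mappings, but only where
	 * section mappings are actually permitted.
	 */
	for (i = 0; i < ARRAY_SIZE(mem_types); i++) {
		if (mem_types[i].prot_sect)	/* the added check */
			mem_types[i].prot_sect |= PMD_SECT_AF;
	}
#endif
```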

- 28 April 2012, 1 commit

By Stephen Boyd

WARNING: vmlinux.o(.text+0x111b8): Section mismatch in reference from the function arm_memory_present() to the function .init.text:memory_present()
The function arm_memory_present() references the function __init memory_present(). This is often because arm_memory_present lacks a __init annotation or the annotation of memory_present is wrong.

WARNING: arch/arm/mm/built-in.o(.text+0x1edc): Section mismatch in reference from the function alloc_init_pud() to the function .init.text:alloc_init_section()
The function alloc_init_pud() references the function __init alloc_init_section(). This is often because alloc_init_pud lacks a __init annotation or the annotation of alloc_init_section is wrong.

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 29 March 2012, 2 commits

By David Howells

Disintegrate asm/system.h for ARM.

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Russell King <linux@arm.linux.org.uk>
cc: linux-arm-kernel@lists.infradead.org

By Russell King

Avoid namespace conflicts with drivers over the CP15 definitions by moving CP15-related prototypes and definitions to a private header file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David Howells <dhowells@redhat.com>

- 24 March 2012, 1 commit

By Russell King

Avoid namespace conflicts with drivers over the CP15 definitions by moving CP15-related prototypes and definitions to a private header file.

Acked-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Stephen Warren <swarren@nvidia.com> [Tegra]
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com> [EP93xx]
Acked-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 23 January 2012, 1 commit

By Russell King

Initialize the contents of the vectors page immediately after we allocate the page, but before we map it. This avoids any possible aliases with other mappings which may need to be flushed after the page has been mapped, irrespective of the cache type. We follow this later with a flush_cache_all() after all static memory mappings have been initialized, which ensures that this is safe from any cache effects.

Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 08 December 2011, 2 commits

By Will Deacon

Memory banks living outside of the 32-bit physical address space do not have a 1:1 pa <-> va mapping, and therefore the __va macro may wrap. This patch ensures that such banks are marked as highmem so that the kernel doesn't try to split them up when it sees that the wrapped virtual address overlaps the vmalloc space.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>

By Catalin Marinas

This patch adds the MMU initialisation for the LPAE page table format. The swapper_pg_dir size with LPAE is 5 pages rather than 4. A new proc-v7-3level.S file contains the TTB initialisation, context switch and PTE setting code for LPAE. The TTBRx split is based on PAGE_OFFSET, with TTBR1 used for the kernel mappings. The 36-bit mappings (supersections) and a few other memory types in mmu.c are conditionally compiled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 27 November 2011, 2 commits

By Nicolas Pitre

Now that we have all the static mappings from iotable_init() located in the vmalloc area, it is trivial to optimize ioremap by reusing those static mappings when the requested physical area fits in one of them, and to do so in a generic way for all platforms.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>

By Nicolas Pitre

In order to remove the build-time variation between different SoCs with regard to VMALLOC_END, the iotable mappings are now allocated inside the vmalloc region. This allows VMALLOC_END to be identical across all machines. The value for VMALLOC_END is now set to 0xff000000, which is right where the consistent DMA area starts. To accommodate all static mappings on machines with possible highmem usage, the default vmalloc area size is changed to 240 MB, so that VMALLOC_START is no higher than 0xf0000000 by default.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>

- 19 November 2011, 1 commit

By Nicolas Pitre

Some upcoming changes must know the VMALLOC_START value, which is based on high_memory, before bootmem_init() is called. The best location to set it is in sanity_check_meminfo(), where the needed computation is already done; in the non-MMU case it is trivial to do now that the meminfo array is already sorted at that point.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>

- 06 October 2011, 1 commit

By Catalin Marinas

This patch defines pteval_t and pmdval_t as u32 and changes the page table types to be based on these. The PMD bits are converted to the corresponding type using the _AT macro. The flush_pmd_entry/clean_pmd_entry argument was changed to (void *) to allow them to be used with both PGD and PMD pointers and avoid code duplication.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
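
The typed definitions described above, approximately as they appear in the 2-level page table headers:

```c
typedef u32 pteval_t;
typedef u32 pmdval_t;

typedef struct { pteval_t pte; } pte_t;
typedef struct { pmdval_t pmd; } pmd_t;

/* hardware PMD bits gain an explicit type via _AT() */
#define PMD_TYPE_TABLE	(_AT(pmdval_t, 1) << 0)
#define PMD_TYPE_SECT	(_AT(pmdval_t, 2) << 0)
```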

- 23 September 2011, 1 commit

By Santosh Shilimkar

On certain architectures, there might be a need to mark certain addresses with strongly ordered memory attributes to avoid ordering issues at the interconnect level. On OMAP4, the asynchronous bridge buffers can only be drained with strongly ordered accesses, hence the need to mark the memory strongly ordered.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>

- 23 August 2011, 1 commit

By Catalin Marinas

PGDIR_SHIFT and PMD_SHIFT for the classic 2-level page table format have the same value (21). This patch converts the PGDIR_* uses in the kernel to the PMD_* equivalents so that LPAE builds can reuse the same code.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 06 July 2011, 1 commit

By Russell King

Ensure that the meminfo array is sanity-checked before we pass the memory to memblock. This helps to ensure that memblock and meminfo agree on the dimensions of memory, especially when more memory is passed than the kernel can deal with.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 26 May 2011, 1 commit

By Will Deacon

sanity_check_meminfo walks over the registered memory banks and attempts to split banks across lowmem and highmem when they would otherwise overlap with the vmalloc space. When SPARSEMEM is used, there are two potential problems that occur when the virtual address of the start of a bank is equal to vmalloc_min:

1. The end of lowmem is calculated as __pa(vmalloc_min - 1) + 1. In the above scenario, this gives the end address of the previous bank, rather than the actual bank we are interested in. This value is later used as the memblock limit and artificially restricts the total amount of available memory.

2. The checks to determine whether or not a bank belongs to highmem only test whether __va(bank->start) is greater or less than vmalloc_min. In the case that it is equal, the bank is incorrectly treated as lowmem, which hoses the vmalloc area.

This patch fixes these two problems by checking whether the virtual start address of a bank is >= vmalloc_min, and then calculating lowmem_end by finding the virtual end address of the highest lowmem bank (see the sketch below).

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
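
A hedged sketch of the lowmem-limit recalculation, in the bank-iteration style of that era; the helper name and surrounding details are assumptions:

```c
static phys_addr_t lowmem_limit __initdata;

static void __init update_lowmem_limit(const struct meminfo *mi)
{
	int i;

	/* the limit is the end of the highest lowmem bank */
	for (i = 0; i < mi->nr_banks; i++) {
		const struct membank *bank = &mi->bank[i];

		if (!bank->highmem &&
		    bank->start + bank->size > lowmem_limit)
			lowmem_limit = bank->start + bank->size;
	}
	memblock_set_current_limit(lowmem_limit);
}
```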

- 25 May 2011, 1 commit

By Peter Zijlstra

Fold all the mmu_gather rework patches into one for submission.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Reported-by: Hugh Dickins <hughd@google.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Miller <davem@davemloft.net>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Tony Luck <tony.luck@intel.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Namhyung Kim <namhyung@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 24 February 2011, 1 commit

By Nicolas Pitre

In commit e616c591, highmem support was deactivated for SMP platforms without hardware TLB ops broadcast, because usage of kmap_high_get() requires that IRQs be disabled when kmap_lock is locked, which is incompatible with the IPI mechanism used by the software TLB ops broadcast invoked through flush_all_zero_pkmaps(). The reason for kmap_high_get() is to ensure that the currently kmap'd page usage count does not decrease to zero while we're using its existing virtual mapping in an atomic context. With a VIVT cache this is essential for cache coherency, but with a VIPT cache it is only an optimization, to avoid paying the price of establishing a second mapping if an existing one can be used. However, on VIPT platforms without hardware TLB maintenance we can give up on that optimization in order to be able to use highmem. From ARMv7 onwards the TLB ops are broadcast in hardware, so let's disable ARCH_NEEDS_KMAP_HIGH_GET only when CONFIG_SMP and CONFIG_CPU_TLB_V6 are defined.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Saeed Bishara <saeed.bishara@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 22 February 2011, 2 commits

By Russell King

Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h. This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 15 February 2011, 2 commits

By Will Deacon

The unsigned long datatype is not sufficient for mapping physical addresses >= 4GB. This patch ensures that the phys_addr_t datatype is used to represent physical addresses when converting from a PFN.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
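
The conversion macros involved, approximately as in asm/memory.h; the cast before the shift is the point:

```c
/* cast first, so PFNs whose byte address exceeds 32 bits survive */
#define __pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)
#define __phys_to_pfn(paddr)	((unsigned long)((paddr) >> PAGE_SHIFT))
```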

By Will Deacon

For the kernel to support 2-level and 3-level page tables, physical addresses (and also page table entries) need to be 32 or 64 bits, depending upon the configuration. This patch uses the %08llx conversion specifier for physical addresses and page table entries, ensuring that they are cast to (long long) so that common code can be used regardless of the datatype widths.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
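
An illustrative printk following the convention; the message text is invented, but the cast-and-format pattern is what the commit mandates:

```c
/* md is a struct map_desc; md->pfn may exceed 32 bits of byte address */
printk(KERN_WARNING "BUG: map for 0x%08llx at 0x%08lx conflicts\n",
       (long long)__pfn_to_phys(md->pfn), md->virtual);
```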