- 30 November 2013 (1 commit)

By Russell King
Commit f6f91b0d (ARM: allow kuser helpers to be removed from the vector page) required two pages for the vectors code. Although the code setting up the initial page tables was updated, the code which allocates page tables for new processes wasn't, and neither was the code which tears down the mappings. Fix this.

Fixes: f6f91b0d ("ARM: allow kuser helpers to be removed from the vector page")
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: <stable@vger.kernel.org>

- 14 August 2013 (1 commit)

By Christoffer Dall
The L_PTE_USER flag actually has nothing to do with stage 2 mappings, and the L_PTE_S2_RDWR value sets the readable bit, which is what L_PTE_USER was used for before proper handling of stage 2 memory defines.

Changelog:
[v3]: Drop call to kvm_set_s2pte_writable in mmu.c
[v2]: Change default mappings to be r/w instead of r/o, as per Marc Zyngier's suggestion.

Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 29 June 2013 (1 commit)

By Al Viro
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

- 04 June 2013 (1 commit)

By Catalin Marinas
The patch adds support for THP (transparent huge pages) to LPAE systems. When this feature is enabled, the kernel tries to map anonymous pages as 2MB sections where possible.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
[steve.capper@linaro.org: symbolic constants used, value of PMD_SECT_SPLITTING adjusted, tlbflush.h included in pgtable.h, added PROT_NONE support.]
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
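
The 2MB granule follows from the LPAE table geometry: one PMD entry spans 512 PTEs of 4KB each. A minimal sketch of the huge-page constants this implies; the names follow the usual kernel pattern and are illustrative, not quoted from the patch:

```c
/* Illustrative: with 4KB pages and 512 entries per table, one
 * PMD-level (section) mapping covers 512 * 4KB = 2MB. */
#define HPAGE_SHIFT		PMD_SHIFT		/* 21 on LPAE */
#define HPAGE_SIZE		(1UL << HPAGE_SHIFT)	/* 2MB */
#define HPAGE_MASK		(~(HPAGE_SIZE - 1))
#define HUGETLB_PAGE_ORDER	(HPAGE_SHIFT - PAGE_SHIFT) /* order 9 */
```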

- 30 April 2013 (1 commit)

By Catalin Marinas
ARM processors with LPAE enabled use 3 levels of page tables, with an entry in the top level (pgd) covering 1GB of virtual space. Because of the branch relocation limitations on ARM, the loadable modules are mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared between kernel modules and user space.

If free_pgtables() is called with the default ceiling 0, free_pgd_range() (and subsequently called functions) also frees the page table shared between user space and kernel modules (which is normally handled by the ARM-specific pgd_free() function). This patch defines the ARM USER_PGTABLES_CEILING to TASK_SIZE when CONFIG_ARM_LPAE is enabled.

Note that the pgd_free() function already checks the presence of the shared pmd page allocated by pgd_alloc() and frees it, though with ceiling 0 this wasn't necessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org> [3.3+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
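
The change itself is tiny; a sketch of its likely shape in arch/arm/include/asm/pgtable.h, guarded by the LPAE config option:

```c
/* Sketch: with LPAE, stop free_pgtables() at the end of user space so
 * the pgd entry shared with the kernel-modules area is left alone
 * (the ARM-specific pgd_free() takes care of it instead). */
#ifdef CONFIG_ARM_LPAE
#define USER_PGTABLES_CEILING	TASK_SIZE
#endif
```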

- 25 April 2013 (1 commit)

By Catalin Marinas
ARM processors with LPAE enabled use 3 levels of page tables, with an entry in the top level (pgd) covering 1GB of virtual space. Because of the branch relocation limitations on ARM, the loadable modules are mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared between kernel modules and user space.

If free_pgtables() is called with the default ceiling 0, free_pgd_range() (and subsequently called functions) also frees the page table shared between user space and kernel modules (which is normally handled by the ARM-specific pgd_free() function). This patch defines the ARM USER_PGTABLES_CEILING to TASK_SIZE when CONFIG_ARM_LPAE is enabled.

Note that the pgd_free() function already checks the presence of the shared pmd page allocated by pgd_alloc() and frees it, though with ceiling 0 this wasn't necessary.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@vger.kernel.org> # 3.3+
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 21 February 2013 (1 commit)

By Catalin Marinas
Following commit 26ffd0d4 (ARM: mm: introduce present, faulting entries for PAGE_NONE), if a page has been mapped as PROT_NONE, the L_PTE_VALID bit is cleared by the set_pte_ext() code. With LPAE the software and hardware pte share the same location, and subsequent modifications of the pte range (change_protection()) will leave the L_PTE_VALID bit cleared.

This patch adds the L_PTE_VALID bit to the newprot mask in pte_modify().

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Subash Patel <subash.rp@samsung.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: <stable@vger.kernel.org> # 3.8.x
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
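
On ARM, pte_modify() takes a fixed mask of bits from newprot and preserves the rest of the old pte; the fix amounts to adding L_PTE_VALID to that mask. A hedged sketch of the resulting function (the exact bit list varies by kernel version):

```c
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
	/* Bits taken from newprot rather than preserved from the old pte.
	 * Including L_PTE_VALID lets change_protection() restore the valid
	 * bit when a PROT_NONE range is made accessible again. */
	const pteval_t mask = L_PTE_XN | L_PTE_RDONLY | L_PTE_USER |
			      L_PTE_NONE | L_PTE_VALID;

	pte_val(pte) = (pte_val(pte) & ~mask) | (pgprot_val(newprot) & mask);
	return pte;
}
```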

- 24 January 2013 (1 commit)

By Christoffer Dall
KVM uses the stage-2 page tables and the Hyp page table format, so we define the fields and page protection flags needed by KVM.

The nomenclature is this:
- page_hyp: PL2 code/data mappings
- page_hyp_device: PL2 device mappings (vgic access)
- page_s2: Stage-2 code/data page mappings
- page_s2_device: Stage-2 device mappings (vgic access)

Reviewed-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>

- 09 November 2012 (2 commits)

By Will Deacon
PROT_NONE mappings apply the page protection attributes defined by _P000, which translate to PAGE_NONE for ARM. These attributes specify an XN, RDONLY pte that is inaccessible to userspace. However, on kernels configured without support for domains, such a pte *is* accessible to the kernel and can be read via get_user, allowing tasks to read PROT_NONE pages via syscalls such as read/write over a pipe.

This patch introduces a new software pte flag, L_PTE_NONE, that is set to identify faulting, present entries.

Signed-off-by: Will Deacon <will.deacon@arm.com>

By Will Deacon
For long-descriptor translation table formats, the ARMv7 architecture defines the last two bits of the second- and third-level descriptors to be:

  x0b - Invalid
  01b - Block (second-level), Reserved (third-level)
  11b - Table (second-level), Page (third-level)

This allows us to define L_PTE_PRESENT as (3 << 0) and use this value to create ptes directly. However, when determining whether a given pte value is present in the low-level page table accessors, we only need to check the least significant bit of the descriptor, allowing us to write faulting, present entries which are required for PROT_NONE mappings.

This patch introduces L_PTE_VALID, which can be used to test whether a pte should fault, and updates the low-level page table accessors accordingly.

Signed-off-by: Will Deacon <will.deacon@arm.com>
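
A sketch of the two definitions this implies in the LPAE (pgtable-3level) headers; _AT() is the kernel's wrapper for constants shared with assembly, and the exact spelling is from memory:

```c
/* Bit 0 alone decides whether the hardware walker faults; bits 1:0 ==
 * 11b is a valid page/table descriptor. A "present but faulting"
 * PROT_NONE pte keeps L_PTE_PRESENT in software with bit 0 clear. */
#define L_PTE_VALID	(_AT(pteval_t, 1) << 0)
#define L_PTE_PRESENT	(_AT(pteval_t, 3) << 0)
```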

- 03 October 2012 (1 commit)

By David Howells
Convert #include "..." to #include <path/...> in kernel system headers.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>

- 11 August 2012 (2 commits)

By Will Deacon
Page migration encodes the pfn in the offset field of a swp_entry_t. For LPAE, we support physical addresses of up to 36 bits (due to sparsemem limitations with the size of page flags), requiring 24 bits to represent a pfn. A further 3 bits are used to encode a swp_entry into a pte, leaving 5 bits for the type field. Furthermore, the core code defines MAX_SWAPFILES_SHIFT as 5, so the additional type bit does not get used.

This patch reduces the width of the type field to 5 bits, allowing us to create up to 31 swapfiles of 64GB each.

Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
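
The arithmetic: 2^24 pages of 4KB gives the 64GB per-swapfile limit, and 5 type bits cover MAX_SWAPFILES_SHIFT. A sketch of the resulting encoding macros in the style of arch/arm/include/asm/pgtable.h; the shifts are from memory, so verify against your tree:

```c
/* Swap pte layout (low bits first): 3 bits reserved for the pte
 * encoding itself, 5 bits of swapfile type, then the page offset. */
#define __SWP_TYPE_SHIFT	3
#define __SWP_TYPE_BITS		5
#define __SWP_TYPE_MASK		((1 << __SWP_TYPE_BITS) - 1)
#define __SWP_OFFSET_SHIFT	(__SWP_TYPE_SHIFT + __SWP_TYPE_BITS)

#define __swp_type(x)		(((x).val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK)
#define __swp_offset(x)		((x).val >> __SWP_OFFSET_SHIFT)
#define __swp_entry(type, offset) \
	((swp_entry_t) { ((type) << __SWP_TYPE_SHIFT) | \
			 ((offset) << __SWP_OFFSET_SHIFT) })
```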

By Will Deacon
Swap entries are encoded in ptes such that !pte_present(pte) and pte_file(pte). The remaining bits of the descriptor are used to identify the swapfile and the offset within it of the swap entry.

When writing such a pte for a user virtual address, set_pte_at unconditionally sets the nG bit, which (in the case of LPAE) will corrupt the swapfile offset and lead to a BUG:

  [  140.494067] swap_free: Unused swap offset entry 000763b4
  [  140.509989] BUG: Bad page map in process rs:main Q:Reg pte:0ec76800 pmd:8f92e003

This patch fixes the problem by only setting the nG bit for user mappings that are actually present.

Cc: <stable@vger.kernel.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
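
A sketch of the shape of the fix: set_pte_at() only requests the nG (not-global) hardware bit for present user mappings, so a swap pte's offset bits pass through untouched. Helper names (pte_present_user, set_pte_ext) follow the era's pgtable.h but are quoted from memory:

```c
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
			      pte_t *ptep, pte_t pteval)
{
	unsigned long ext = 0;

	/* Only a present user mapping gets nG; in a swap pte the same
	 * bit position belongs to the encoded swapfile offset. */
	if (addr < TASK_SIZE && pte_present_user(pteval))
		ext |= PTE_EXT_NG;

	set_pte_ext(ptep, pteval, ext);
}
```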

- 03 January 2012 (1 commit)

By Nicolas Pitre
This reverts commit 0af362f8, as shmobile no longer uses a non-standard memory layout.

Signed-off-by: Nicolas Pitre <nico@linaro.org>

- 08 December 2011 (3 commits)

By Catalin Marinas
This patch introduces the pgtable-3level*.h files with definitions specific to the LPAE page table format (3 levels of page tables).

Each table is 4KB and has 512 64-bit entries. An entry can point to a 40-bit physical address. The young, write and exec software bits share the corresponding hardware bits (negated). Other software bits use spare bits in the PTE.

The patch also changes some variable types from unsigned long or int to pteval_t or pgprot_t.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
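
The geometry in numbers: each level is one 4KB page of 512 8-byte descriptors, and the top level needs only 4 entries to cover a 32-bit virtual space (4 x 1GB). A hedged sketch of the corresponding constants:

```c
/* LPAE 3-level layout: 4KB page / 8-byte descriptors = 512 entries.
 * pgd (4 x 1GB) -> pmd (512 x 2MB) -> pte (512 x 4KB). */
#define PTRS_PER_PGD	4
#define PTRS_PER_PMD	512
#define PTRS_PER_PTE	512

#define PGDIR_SHIFT	30	/* one pgd entry maps 1GB */
#define PMD_SHIFT	21	/* one pmd entry maps 2MB */
```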

By Catalin Marinas
The page table maintenance macros need to be duplicated between the classic and the LPAE MMU, so this patch moves those that are not common to the pgtable-2level.h file.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

By Russell King
Nick Piggin noted upon introducing 4level-fixup.h:

| Add a temporary "fallback" header so architectures can run with
| the 4level pagetables patch without modification. All architectures
| should be converted to use the folding headers (include/asm-generic/
| pgtable-nop?d.h) as soon as possible, and the fallback header removed.

This makes ARM compliant with this statement.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

- 06 December 2011 (2 commits)

By Will Deacon
When disabling and re-enabling the MMU, it is necessary to take out an identity mapping for the code that manipulates the SCTLR in order to avoid it disappearing from under our feet. This is useful when soft rebooting and returning from CPU suspend.

This patch allocates a set of page tables during boot and populates them with an identity mapping for the .idmap.text section. This means that users of the identity map do not need to manage their own pgd and can instead annotate their functions with __idmap or, in the case of assembly code, place them in the correct section.

Acked-by: Dave Martin <dave.martin@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Lorenzo Pieralisi <Lorenzo.Pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
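
Usage as the message describes it: C code gets a section annotation, assembly switches sections explicitly. A hedged sketch; the exact __idmap definition is reconstructed from the description, and cpu_reset_code is a made-up example function:

```c
/* Plausible definition: keep the function in the identity-mapped section. */
#define __idmap __section(.idmap.text) noinline notrace

/* C: safe to run while the MMU is being disabled/re-enabled. */
static void __idmap cpu_reset_code(void)
{
	/* ... manipulate SCTLR here ... */
}

/* Assembly equivalent:
 *	.pushsection .idmap.text, "ax"
 *	ENTRY(cpu_resume_mmu)
 *		...
 *	.popsection
 */
```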

By Rob Herring
Similar to other architectures, this adds top-down mmap support to the user process address space allocation policy. This allows mmap sizes greater than 2GB. This support is largely copied from MIPS and the generic implementations. The address space randomization is moved into arch_pick_mmap_layout.

Tested on V-Express with Ubuntu and a mmap test from here:
https://bugs.launchpad.net/bugs/861296

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
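
A heavily hedged sketch of what such an arch_pick_mmap_layout() looks like, following the MIPS/generic pattern the message cites; mmap_base() is assumed to be a local helper computing the start point below the stack, and the randomization width shown is illustrative:

```c
void arch_pick_mmap_layout(struct mm_struct *mm)
{
	unsigned long random_factor = 0UL;

	/* Randomization moves here from the ELF loader. */
	if (current->flags & PF_RANDOMIZE)
		random_factor = (get_random_int() % (1 << 8)) << PAGE_SHIFT;

	if (mmap_is_legacy()) {
		/* Classic bottom-up allocation from TASK_UNMAPPED_BASE. */
		mm->mmap_base = TASK_UNMAPPED_BASE + random_factor;
		mm->get_unmapped_area = arch_get_unmapped_area;
	} else {
		/* Top-down: start just below the stack and grow downwards,
		 * leaving the whole area above TASK_UNMAPPED_BASE free for
		 * mappings larger than 2GB. */
		mm->mmap_base = mmap_base(random_factor);
		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
	}
}
```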

- 27 November 2011 (2 commits)

By Nicolas Pitre
THIS IS A TEMPORARY HACK. The purpose of this is _only_ to avoid a regression on an existing machine while a better fix is implemented.

On shmobile the consistent DMA memory area was set to 158MB in commit 28f0721a with no explanation. The documented size for this area should vary between 2MB and 14MB, and none of the other ARM targets exceed that. The included #warning is therefore meant to be noisy on purpose, to get the shmobile maintainers' attention; this commit should be reverted once the consistent DMA size conflict is resolved.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Magnus Damm <damm@opensource.se>
Cc: Paul Mundt <lethal@linux-sh.org>

By Nicolas Pitre
In order to remove the build-time variation between different SoCs with regards to VMALLOC_END, the iotable mappings are now allocated inside the vmalloc region. This allows VMALLOC_END to be identical across all machines.

The value for VMALLOC_END is now set to 0xff000000, which is right where the consistent DMA area starts. To accommodate all static mappings on machines with possible highmem usage, the default vmalloc area size is changed to 240 MB, so that VMALLOC_START is no higher than 0xf0000000 by default.

Signed-off-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Tested-by: Jamie Iles <jamie@jamieiles.com>
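
The constants line up as follows (a quick sanity check rather than a quote of the patch):

```c
/* Fixed end of the vmalloc region, right below the consistent DMA area: */
#define VMALLOC_END	0xff000000UL
/* Default size 240MB = 0x0f000000, so in the worst case:
 *   VMALLOC_START = 0xff000000 - 0x0f000000 = 0xf0000000 */
```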

- 06 October 2011 (2 commits)

By Catalin Marinas
With LPAE, the physical address mask is 40-bit while the page table entry is 64-bit. This patch introduces PHYS_MASK for the 2-level page table format, defined as ~0UL.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
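
Side by side, the idea is one PHYS_MASK per page table format; the 2-level value is stated in the message, the LPAE spelling is hedged:

```c
/* pgtable-2level-hwdef.h: 32-bit ptes, masking is a no-op. */
#define PHYS_MASK		(~0UL)

/* pgtable-3level-hwdef.h (LPAE): 64-bit ptes, 40-bit physical space. */
#define PHYS_MASK_SHIFT		40
#define PHYS_MASK		((1ULL << PHYS_MASK_SHIFT) - 1)
```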

By Catalin Marinas
This patch moves page table definitions from asm/page.h, asm/pgtable.h and asm/pgtable-hwdef.h into the corresponding *-2level* files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 23 September 2011 (1 commit)

By Santosh Shilimkar
On certain architectures, there might be a need to mark certain addresses with strongly ordered memory attributes to avoid ordering issues at the interconnect level.

On OMAP4, the asynchronous bridge buffers can only be drained with strongly ordered accesses, hence the need to mark the memory strongly ordered.

Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Woodruff Richard <r-woodruff2@ti.com>
Tested-by: Vishwanath BS <vishwanath.bs@ti.com>
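
The helper presumably follows the existing pgprot_noncached()/pgprot_writecombine() pattern in asm/pgtable.h; a hedged sketch:

```c
/* Rewrite the memory-type field of a pgprot to the uncached type,
 * which maps to Strongly Ordered on ARMv6+. */
#define pgprot_stronglyordered(prot) \
	__pgprot_modify(prot, L_PTE_MT_MASK, L_PTE_MT_UNCACHED)
```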

- 22 February 2011 (1 commit)

By Russell King
Add pud_offset() et al. between the pgd and pmd code in preparation for using pgtable-nopud.h rather than 4level-fixup.h.

This incorporates a fix from Jamie Iles <jamie@jamieiles.com> for uaccess_with_memcpy.c.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
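
With the pud folded away, the new accessors are trivial; this is the shape the generic folding header (include/asm-generic/pgtable-nopud.h) provides:

```c
/* A "pud" is just the pgd entry viewed through a different type;
 * there is exactly one pud per pgd entry. */
static inline pud_t *pud_offset(pgd_t *pgd, unsigned long address)
{
	return (pud_t *)pgd;
}
```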

- 15 February 2011 (1 commit)

By Will Deacon
The unsigned long datatype is not sufficient for mapping physical addresses >= 4GB. This patch ensures that the phys_addr_t datatype is used to represent physical addresses when converting from a PFN.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
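
The failure mode is a classic C promotion bug: pfn << PAGE_SHIFT is evaluated as unsigned long and truncates at 4GB. A sketch of the fixed conversion macros, in the usual asm/memory.h and asm/pgtable.h forms (hedged as such):

```c
/* Widen before shifting so pfns at or above 0x100000 (phys >= 4GB
 * with 4KB pages) survive the conversion. */
#define __pfn_to_phys(pfn)	((phys_addr_t)(pfn) << PAGE_SHIFT)
#define __phys_to_pfn(paddr)	((unsigned long)((paddr) >> PAGE_SHIFT))

#define pfn_pte(pfn, prot)	__pte(__pfn_to_phys(pfn) | pgprot_val(prot))
```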

- 22 December 2010 (5 commits)

By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
The hardware page tables use an XN ("execute never") bit. Historically, we've had a Linux "execute allow" bit, in the positive sense. Get rid of this artifact, as future hardware will continue to have the XN sense.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
FIRST_USER_PGD_NR is now unnecessary, as this has been replaced by FIRST_USER_ADDRESS except in the architecture code. Fix up the last usage of FIRST_USER_PGD_NR, and remove the definition.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
We have two places where we create identity mappings: one when we bring secondary CPUs online, and one where we set up some mappings for soft-reboot. Combine these two into a single implementation. Also collect the identity mapping deletion function.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
This switches the ordering of the Linux vs hardware page tables in each page, thereby eliminating some of the arithmetic in the page table walks. As we now place the Linux page table at the beginning of the page, we can deal with the offset in the pgt by simply masking it away, along with the other control bits.

This also makes the arithmetic all be positive, rather than a mixture.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
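
For orientation, the resulting in-page layout on the classic 2-level MMU: the two Linux shadow tables sit at offset 0 and the two 1KB hardware tables follow. A hedged sketch of the constants describing it, as they appear in later pgtable-2level.h:

```c
/* One Linux pte table covers 512 entries (two hardware tables of 256).
 * Linux tables first, hardware tables at PTE_HWTABLE_OFF. */
#define PTRS_PER_PTE		512
#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
#define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))
```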

- 27 November 2010 (6 commits)

By Russell King
This makes everywhere dealing with pte values use the same type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
Ensure that physical addresses are typed as phys_addr_t.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
Rather than passing the pte value to __pte_error, pass the raw pte_t cookie instead. Do the same for the pmd and pgd functions.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

By Russell King
Rather than scattering them throughout the file, group them together.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 21 November 2010 (1 commit)

By Russell King
Allow the compiler to better optimize the page table walking code by avoiding over-complex pmd_addr_end() calculations. These calculations prevent the compiler from spotting that we'll never iterate over the PMD table, causing it to create doubly nested loops where a single loop will do.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>

- 27 October 2010 (1 commit)

By Peter Zijlstra
Since we no longer need to provide KM_type, the whole pte_*map_nested() API is now redundant; remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 19 September 2010 (1 commit)

By Catalin Marinas
ARMv7 onwards requires that there are no aliases to the same physical location using different memory types (i.e. Normal vs Strongly Ordered). Access to SO mappings when the unaligned accesses are handled in hardware is also Unpredictable (pgprot_noncached() mappings in user space).

The /dev/mem driver requires uncached mappings with O_SYNC. The patch implements the phys_mem_access_prot() function, which generates Strongly Ordered memory attributes if !pfn_valid() (independent of O_SYNC) and Normal Noncacheable (writecombine) if O_SYNC.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
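
A hedged sketch of such a phys_mem_access_prot(), reconstructed from the description above rather than quoted from the patch:

```c
pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
			      unsigned long size, pgprot_t vma_prot)
{
	if (!pfn_valid(pfn))
		/* No struct page: not RAM, so force Strongly Ordered. */
		return pgprot_noncached(vma_prot);
	else if (file->f_flags & O_SYNC)
		/* Valid RAM opened with O_SYNC: Normal Noncacheable. */
		return pgprot_writecombine(vma_prot);
	return vma_prot;
}
```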