- 11 January 2011, 2 commits

By Paul Mundt

When p3_ioremap() was converted to ioremap_prot(), some breakage was introduced: the 29-bit segmentation logic would trap the area range and return an identity mapping without allowing the area specification to force a mapping through page tables. This wires up a PCC mask for pgprot verification, to work out whether to short-circuit the identity mapping on legacy parts, restoring the previous behaviour.

Reported-by: Nobuhiro Iwamatsu <iwamatsu@nigauri.org>
Cc: stable@kernel.org
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
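A minimal sketch of the idea behind the check, with PCC_ATTR_MASK standing in for whatever mask covers the PCC attribute bits (the real arch/sh definitions differ):

    /*
     * If the caller's pgprot carries PCC attribute bits, the legacy
     * 29-bit identity-mapping shortcut must be skipped so the request
     * is mapped through page tables instead.
     */
    static inline int ioremap_needs_page_table(pgprot_t prot)
    {
            return (pgprot_val(prot) & PCC_ATTR_MASK) != 0;
    }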
-
By Paul Mundt

Presently it's still possible to reference the PAGE_KERNEL_PCC pgprot encoding on X2 TLBs, which simply don't support it at all. Convert this over to follow the nommu behaviour and simply hand back an invalid pgprot value so we error out properly.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 27 October 2010, 2 commits

By Paul Mundt

Now that sh64 has grown extended page flag support, we finally have a free bit for _PAGE_SPECIAL. Wire it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Peter Zijlstra

Since we no longer need to provide KM_type, the whole pte_*map_nested() API is now redundant; remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 19 January 2010, 1 commit

By Paul Mundt

This provides a dummy value for legacy parts, which permits the entry wiring to be open-coded. The compiler takes care of optimizing the entry wiring away in these cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
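A rough illustration of the trick, assuming the dummy value in question is the _PAGE_WIRED flag introduced by the commit below; the config symbol, helper name, and bit position here are hypothetical:

    /*
     * Parts without wired-entry support get a flag value of 0, so
     * open-coded wiring along the lines of
     *
     *      if (pte_val(pte) & _PAGE_WIRED)
     *              wire_tlb_entry(addr, pte);   (hypothetical helper)
     *
     * is recognised as dead code and optimized away by the compiler.
     */
    #ifdef CONFIG_HAVE_WIRED_TLB                /* hypothetical option */
    #define _PAGE_WIRED     _PAGE_EXT(0x001)    /* illustrative bit */
    #else
    #define _PAGE_WIRED     (0)
    #endif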
-
- 16 January 2010, 1 commit

By Matt Fleming

Provide a new extended page flag, _PAGE_WIRED, and an SH4 implementation for wiring TLB entries, and use it in the fixmap code path so that we can wire the fixmap TLB entry.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
-
- 14 December 2009, 1 commit

By Matt Fleming

pte_write() should check whether the permissions include either the user or kernel write permission bits. Likewise, pte_wrprotect() needs to remove both the kernel and user write bits. Without this patch, handle_tlbmiss() doesn't handle faulting in pages from the P3 area (our vmalloc space) on a write, because mappings of the P3 space have the _PAGE_EXT_KERN_WRITE bit but not _PAGE_EXT_USER_WRITE.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
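A simplified sketch of the corrected semantics; the real header expresses this through its own PTE accessor macros and keeps the extended bits in the high word of the PTE:

    static inline int pte_write(pte_t pte)
    {
            /* Writable if either the user or the kernel write bit is set. */
            return pte_val(pte) & (_PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE);
    }

    static inline pte_t pte_wrprotect(pte_t pte)
    {
            /* Write-protecting must clear both write bits, not just the user one. */
            return __pte(pte_val(pte) &
                         ~(_PAGE_EXT_USER_WRITE | _PAGE_EXT_KERN_WRITE));
    }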
-
- 10 October 2009, 1 commit

By Matt Fleming

To allow the MMU to be switched between 29-bit and 32-bit mode at runtime, some constants need to be swapped for functions that return a runtime value.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 03 September 2009, 1 commit

By Paul Mundt

This fixes up the kmap_coherent()/kunmap_coherent() interface for recent changes both in the page fault path and the shared cache flushers, and adds in some optimizations. One of the key things to note here is that the TLB flush itself is deferred until the unmap, and the call to update_mmu_cache() goes away, relying on the regular page fault path to handle the lazy dcache writeback if necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 20 August 2009, 1 commit

By Michael Trimarchi

Signed-off-by: Michael Trimarchi <trimarchimichael@yahoo.it>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 22 July 2009, 1 commit

By Paul Mundt

Allocate one of the unused PTE bits for _PAGE_SPECIAL directly. This is prep work for fast gup and the zero page revival.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
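The hooks this enables follow the usual arch pattern, roughly as sketched below (simplified; the real header builds them from its PTE bit macros):

    #define __HAVE_ARCH_PTE_SPECIAL

    static inline int pte_special(pte_t pte)
    {
            return pte_val(pte) & _PAGE_SPECIAL;
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
            return __pte(pte_val(pte) | _PAGE_SPECIAL);
    }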
-
- 29 July 2008, 1 commit

By Paul Mundt

This follows the sparc changes a439fe51. Most of the moving about was done with Sam's directions at http://marc.info/?l=linux-sh&m=121724823706062&w=2, with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 28 July 2008, 1 commit

By Paul Mundt

16kB is a useful size on nommu, while 64kB still tends to be too big to be useful. Newer MMUs are likely to support this as well, so plug it in in anticipation of those, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 28 April 2008, 1 commit

By Nick Piggin

s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
            ...
    }

This is fine, however if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

And finally, we may rather make this concept usable by any architecture rather than making it s390 only, so implement a new type of pte state for this. Unfortunately the old vma based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit more ugly than we would like, but the two cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o.

BTW: I haven't put vm_normal_page() into arch code as per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch-specific function -- the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them being able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we would have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 28 January 2008, 5 commits

By Stuart Menefy

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Stuart Menefy

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 07 November 2007, 1 commit

By Paul Mundt

PAGE_KERNEL_PCC() takes two arguments, which weren't reflected in the nommu case. Fix it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
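The shape of the fix is simply a nommu stub that accepts (and ignores) the same two arguments as the MMU version, so callers compile unchanged; a sketch, not the literal header:

    #define PAGE_KERNEL_PCC(slot, type)     __pgprot(0)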
-
- 30 October 2007, 1 commit

By Paul Mundt

As noted by David: pte_page() is a macro defined as follows:

    include/asm-sh/pgtable.h
    #define pte_page(x)    phys_to_page(pte_val(x)&PTE_PHYS_MASK)

    include/asm-sh/page.h
    #define phys_to_page(phys)    (pfn_to_page(phys >> PAGE_SHIFT))

So as you can see, the phys_to_page() macro doesn't wrap the 'phys' parameter in parentheses, so we end up with:

    pte_val(x)&PTE_PHYS_MASK >> PAGE_SHIFT

which is not what we wanted, as '>>' has a higher precedence than bitwise AND. I dug into the git repository and I believe this bug was added with this commit (104b8dea):

    2006-03-27 KAMEZAWA Hiroyuki [PATCH] unify pfn_to_page: sh pfn_to_page

    -#define phys_to_page(phys)    (mem_map + (((phys)-__MEMORY_START) >> PAGE_SHIFT))
    -#define page_to_phys(page)    (((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
    +#define phys_to_page(phys)    (pfn_to_page(phys >> PAGE_SHIFT))
    +#define page_to_phys(page)    (page_to_pfn(page) << PAGE_SHIFT)

Reported-by: David ADDISON <david.addison@st.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
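For reference, a parenthesised form that resolves the precedence problem would look like this (a sketch of the fix rather than a quote of the final header):

    /* Wrapping 'phys' forces the intended order: mask first, then shift. */
    #define phys_to_page(phys)    (pfn_to_page((phys) >> PAGE_SHIFT))
    #define pte_page(x)           phys_to_page(pte_val(x) & PTE_PHYS_MASK)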
-
- 21 September 2007, 2 commits

By Paul Mundt

The extended mode TLB requires both 64-bit PTEs and a 64-bit pgprot; correspondingly, the PGD also has to be 64 bits, so fix that up. The kernel and user permission bits really are decoupled in early cuts of the silicon, which means that we also have to set the corresponding kernel permissions on user pages, or we end up with user pages that the kernel simply can't touch (!). Finally, with those things corrected, really enable MMUCR.ME and correct the PTEA value (this simply needs to be the upper 32 bits of the PTE, with the size and protection bit encoding).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

This reworks the cache mode configuration in Kconfig and allows for explicit selection of write-back/write-through/off configurations. All of the cache flushing routines are optimized away for the off case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 25 July 2007, 1 commit

By Paul Mundt

The first 1MB of P3 space was reserved and used for page colouring; as we've reworked that to use fixmaps, we can reclaim the space and hand it back to VMALLOC_START.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 17 July 2007, 1 commit

By Jan Beulich

Kill pte_rdprotect(), pte_exprotect(), pte_mkread(), pte_mkexec(), pte_read(), pte_exec(), and pte_user(), except where arch-specific code is making use of them.

Signed-off-by: Jan Beulich <jbeulich@novell.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 09 May 2007, 1 commit

By David Gibson

Most architectures defined three macros, MK_IOSPACE_PFN(), GET_IOSPACE() and GET_PFN(), in pgtable.h. However, the only callers of any of these macros are in Sparc-specific code, either in arch/sparc, arch/sparc64 or drivers/sbus. This patch removes the redundant macros from all architectures except sparc and sparc64.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 05 March 2007, 1 commit

By Paul Mundt

These ended up causing too many problems on older parts; revert for now.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 14 February 2007, 1 commit

By Paul Mundt

This ended up causing problems for older parts (particularly ones using PTEA). Revert this for now; it can be added back in once it's had some more testing.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 13 February 2007, 2 commits

By Paul Mundt

This converts the lazy dcache handling to the model described in Documentation/cachetlb.txt and drops the ptep_get_and_clear() hacks used for the aliasing dcaches on SH-4 and SH7705 in 32kB mode. As a bonus, this slightly cuts down on the cache flushing frequency. With that and the PTEA handling out of the way, the update_mmu_cache() implementations can be consolidated, and we no longer have to worry about which configuration the cache is in for the SH7705 case. And finally, explicitly disable the lazy writeback on SMP (SH-4A).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

There were a few more things that needed fixing up, namely THREAD_SIZE and the TLB miss handler, where certain PTRS_PER_PGD == PTRS_PER_PTE assumptions were being made.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 12 December 2006, 2 commits

By Paul Mundt

Get the landisk board building again.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

In the 64-bit PTE case there's no point in restricting the encoding to the low bits of the PTE; we can instead bump all of this up to the high 32 bits and extend PTE_FILE_MAX_BITS to 32, adopting the same convention used by x86 PAE. There's a minor discrepancy in the number of bits used for the swap type encoding between 32- and 64-bit PTEs, but this is unlikely to cause any problem given the extended offset.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
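The high-word encoding described here looks roughly like the following in the 64-bit PTE case (a sketch following the x86 PAE convention mentioned above, not a verbatim quote of the header):

    /* The swap entry lives entirely in the high 32 bits of the PTE, so
     * none of the hardware bits in the low word are consumed by it. */
    #define __swp_type(x)              ((x).val & 0xff)
    #define __swp_offset(x)            ((x).val >> 8)
    #define __swp_entry(type, offset)  ((swp_entry_t){ (type) | (offset) << 8 })
    #define __pte_to_swp_entry(pte)    ((swp_entry_t){ (pte).pte_high })
    #define __swp_entry_to_pte(x)      ((pte_t){ 0, (x).val })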
-
- 06 December 2006, 5 commits

By Paul Mundt

When hugetlbpage support isn't enabled, this can be bogus. Wrap it back in _PAGE_FLAGS_HARD to avoid changes to the base PTE when not aiming for larger sizes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

There were a number of places that made evil PAGE_SIZE == 4k assumptions that ended up breaking when trying to play with 8k and 64k page sizes; this fixes those up.

The most significant change is the way we load THREAD_SIZE. Previously this was done via:

    mov     #(THREAD_SIZE >> 8), reg
    shll8   reg

to avoid a memory access and allow the immediate load. With a 64k PAGE_SIZE, we're out of range for the immediate load size without resorting to special instructions available in later ISAs (movi20s and so on). The "workaround" for this is to bump up the shift to 10 and insert a shll2, which gives a bit more flexibility while still being much cheaper than a memory access.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Stuart Menefy

Handle simple TLB miss faults which can be resolved completely from the page table, in assembler.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Stuart Menefy

Remove extra bits from the pmd structure and store a kernel logical address rather than a physical address. This allows it to be directly dereferenced. Another piece of weirdness inherited from x86.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

This adds some preliminary support for the SH-X2 MMU, used by newer SH-4A parts (particularly SH7785). This MMU implements a 'compat' mode with SH-X MMUs and an 'extended' mode for SH-X2 extended features. Extended features include additional page sizes (8kB, 4MB, 64MB), as well as the addition of page execute permissions. The extended mode attributes are placed in a second data array, which requires us to switch to 64-bit PTEs when in X2 mode. With the addition of the exec perms, we also overhaul the mmap prots somewhat, now that it's possible to handle them more intelligently.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
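The 64-bit PTE switch shows up in the basic type definitions, roughly as sketched below (simplified from the arch headers):

    #ifdef CONFIG_X2TLB
    /* Extended mode: hardware PTEL in the low word, the extended
     * attributes (exec permissions, extra page sizes) in the high word. */
    typedef struct { unsigned long pte_low, pte_high; } pte_t;
    typedef struct { unsigned long long pgprot; } pgprot_t;
    #else
    typedef struct { unsigned long pte_low; } pte_t;
    typedef struct { unsigned long pgprot; } pgprot_t;
    #endif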
-
- 27 September 2006, 3 commits

By Paul Mundt

Set the SHM alignment at runtime, based on the probed cache desc. Optimize get_unmapped_area() to only colour-align shared mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

Drop _PAGE_SHARED/_PAGE_U0_SHARED and document the Linux PTE encodings in the PTEL value. Preserve the swap cache entry encoding semantics for now, though it will need rework to free up _PAGE_WT from _PAGE_FILE.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt

Cleanup of the page table allocators, using the generic folded PMD and PUD helpers. TLB flushing operations are moved to a more sensible spot. The page fault handler is also optimized slightly; we no longer waste cycles on IRQ disabling for flushing of the page from the ITLB, since we're already under CLI protection by the initial exception handler.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
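The "generic folded PMD and PUD helpers" referred to here are the asm-generic folding headers; a two-level architecture pulls them in with a single include:

    /*
     * With the pmd level folded away, asm-generic supplies the pmd_*
     * accessors in terms of the level above, so the architecture only
     * implements the two levels it actually has.
     */
    #include <asm-generic/pgtable-nopmd.h>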
-