- 06 Dec 2006, 8 commits
-
-
By Stuart Menefy
Handle, in assembler, simple TLB miss faults which can be resolved completely from the page table.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
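A hedged C-level sketch of what the assembler fast path resolves (the real handler is hand-written SH assembly; the walkers below are the generic kernel helpers, and the wrapping function is hypothetical):

    /* Hypothetical helper: resolve a TLB miss purely from the page
     * table, loading the TLB via update_mmu_cache() on success. */
    static int fast_tlb_miss(struct mm_struct *mm,
                             struct vm_area_struct *vma,
                             unsigned long address)
    {
        pgd_t *pgd = pgd_offset(mm, address);
        pud_t *pud;
        pmd_t *pmd;
        pte_t *pte;

        if (pgd_none(*pgd))
            return -1;              /* fall back to the C slow path */
        pud = pud_offset(pgd, address);
        if (pud_none(*pud))
            return -1;
        pmd = pmd_offset(pud, address);
        if (pmd_none(*pmd))
            return -1;

        pte = pte_offset_kernel(pmd, address);
        if (!pte_present(*pte))
            return -1;

        /* On SH this loads PTEH/PTEL and issues ldtlb. */
        update_mmu_cache(vma, address, *pte);
        return 0;                   /* resolved entirely from the table */
    }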
-
By Stuart Menefy
Remove extra bits from the pmd structure and store a kernel logical address rather than a physical address, which allows it to be dereferenced directly. Another piece of weirdness inherited from x86.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
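A before/after sketch of the dereference this enables (the helper is hypothetical and the masking detail is an assumption, not the literal patch):

    static inline pte_t *pte_table(pmd_t *pmd)
    {
        /* Before: the pmd held a physical address plus extra bits, so
         * each walk paid a conversion first:
         *   return (pte_t *)phys_to_virt(pmd_val(*pmd) & PAGE_MASK);
         * After: the pmd holds the kernel logical (P1) address of the
         * pte table and can be dereferenced directly: */
        return (pte_t *)pmd_val(*pmd);
    }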
-
By Stuart Menefy
Add TTB accessor functions and give the register a sensible default value. We will use this later for optimizing the fault path.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
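A plausible shape for the accessors (hedged; MMU_TTB is the SH-4 Translation Table Base register at 0xff000008 per the datasheet, but the exact macro bodies may differ):

    #define MMU_TTB 0xff000008  /* SH-4 Translation Table Base register */

    static inline void set_TTB(pgd_t *pgd)
    {
        ctrl_outl((unsigned long)pgd, MMU_TTB);
    }

    static inline unsigned long get_TTB(void)
    {
        return ctrl_inl(MMU_TTB);
    }

    /* A sensible default: point the TTB at swapper_pg_dir during early
     * boot, e.g. set_TTB(swapper_pg_dir), so the fault fast path can
     * always trust the register. */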
-
By Stuart Menefy
Remove the previous saving of fault codes into the thread_struct, as they were never used and appear to have been inherited from x86.

Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
Simple sem2mutex conversion for the p3map semaphores.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
This adds some preliminary support for the SH-X2 MMU, used by newer SH-4A parts (particularly SH7785). This MMU implements a 'compat' mode with SH-X MMUs and an 'extended' mode for SH-X2 extended features. Extended features include additional page sizes (8kB, 4MB, 64MB), as well as the addition of page execute permissions.

The extended mode attributes are placed in a second data array, which requires us to switch to 64-bit PTEs when in X2 mode. With the addition of the exec perms, we also overhaul the mmap prots somewhat, now that it's possible to handle them more intelligently.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
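A hedged sketch of the PTE-width switch that extended mode forces (the field layout and config name are assumptions):

    #ifdef CONFIG_X2TLB
    /* Extended mode: the second data array means a second word of
     * attributes (extra page sizes, execute permission), so PTEs grow
     * to 64 bits. */
    typedef struct { unsigned long pte_low, pte_high; } pte_t;
    #define pte_val(x) \
        ((x).pte_low | ((unsigned long long)(x).pte_high << 32))
    #else
    /* Compat mode keeps the classic 32-bit SH-X PTE. */
    typedef struct { unsigned long pte_low; } pte_t;
    #define pte_val(x) ((x).pte_low)
    #endif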
-
By Paul Mundt
Simple SH7785 placeholders to start hooking up other bits of code.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Yoshinori Sato
This implements initial support for the SH7206 (SH-2A) and SH7619 (SH-2) MMU-less CPUs.

Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 10 Oct 2006, 1 commit
-
-
By Paul Mundt
Be sure to zero out the buffer; this was causing occasional problems under heavier PCI tests.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 03 Oct 2006, 1 commit
-
-
By Paul Mundt
Get all of the defconfigs building again.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 30 Sep 2006, 2 commits
-
-
By Sukadev Bhattiprolu
This is an updated version of Eric Biederman's is_init() patch (http://lkml.org/lkml/2006/2/6/280). It applies cleanly to 2.6.18-rc3 and replaces a few more instances of ->pid == 1 with is_init(). Further, is_init() checks the pid and thus removes the dependency on Eric's other patches for now.

Eric's original description:

There are a lot of places in the kernel where we test for init because we give it special properties. Most significantly, init must not die. This results in code all over the kernel testing ->pid == 1. Introduce is_init() to capture this case. With multiple pid spaces, for all of the cases affected we are looking for only the first process on the system, not some other process that has pid == 1.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Sukadev Bhattiprolu <sukadev@us.ibm.com>
Cc: Dave Hansen <haveblue@us.ibm.com>
Cc: Serge Hallyn <serue@us.ibm.com>
Cc: Cedric Le Goater <clg@fr.ibm.com>
Cc: <lxc-devel@lists.sourceforge.net>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
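The helper is tiny; per the description above it checks the pid directly for now (sketch):

    /* Capture the "is this init?" test in one place rather than
     * open-coding ->pid == 1 across the tree. */
    static inline int is_init(struct task_struct *tsk)
    {
        return tsk->pid == 1;
    }

    /* Call sites then read is_init(current) instead of
     * current->pid == 1. */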
-
By Jason Baron
Make PROT_WRITE imply PROT_READ for a number of architectures which don't support write-only in hardware.

While looking at this, I noticed that some architectures which do not support write-only mappings already take the exact same approach. For example, in arch/alpha/mm/fault.c:

    if (cause < 0) {
        if (!(vma->vm_flags & VM_EXEC))
            goto bad_area;
    } else if (!cause) {
        /* Allow reads even for write-only mappings */
        if (!(vma->vm_flags & (VM_READ | VM_WRITE)))
            goto bad_area;
    } else {
        if (!(vma->vm_flags & VM_WRITE))
            goto bad_area;
    }

Thus, this patch brings the other architectures which do not support write-only mappings in line and consistent with the rest. I've verified the patch on ia64, x86_64 and x86.

Additional discussion: Several architectures, including x86, cannot support write-only mappings. The pte for x86 reserves a single bit for protection and its two states are read-only or read/write. Thus, write-only is not supported in hardware. Currently, if I mmap a page write-only, the first read attempt on that page creates a page fault and will SEGV. That check is enforced in arch/blah/mm/fault.c. However, if I first write to that page, it will fault in and the pte will be set to read/write, so any subsequent reads of the page will succeed. It is this inconsistency in behavior that this patch is attempting to address. Furthermore, if the page is swapped out and then brought back, the first read will also cause a SEGV. Thus, any arbitrary read of a page can potentially result in a SEGV.

According to the SUSv3 spec, "if the application requests only PROT_WRITE, the implementation may also allow read access." Also, as mentioned, some architectures, such as alpha (shown above), already take the approach that I am suggesting.

The counter-argument to this, raised by Arjan, is that the kernel is enforcing the write-only mapping the best it can given the hardware limitations. This is true, however Alan Cox and myself would argue that the inconsistency in behavior (applications can sometimes work and sometimes fail) is highly undesirable. If you read through the thread, I think people came to an agreement on the last patch I posted, as nobody has objected to it...

Signed-off-by: Jason Baron <jbaron@redhat.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Roman Zippel <zippel@linux-m68k.org>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andi Kleen <ak@muc.de>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Paul Mundt <lethal@linux-sh.org>
Cc: Kazumoto Kojima <kkojima@rr.iij4u.or.jp>
Cc: Ian Molton <spyro@f2s.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
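A user-space illustration of the pre-patch inconsistency described above, on hardware without write-only PTEs (hedged sketch):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* Request a write-only anonymous mapping. */
        char *p = mmap(NULL, 4096, PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return 1;

        /* Pre-patch: reading p[0] here would SEGV, but... */
        p[0] = 1;             /* ...a write faults the page in r/w, */
        printf("%d\n", p[0]); /* ...so this read now silently works. */

        /* Post-patch: PROT_WRITE implies PROT_READ on such hardware,
         * so reads behave identically whether or not a write (or a
         * swap-out/in cycle) happened first. */
        return 0;
    }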
-
- 27 Sep 2006, 24 commits
-
-
By Paul Mundt
Disable IRQs in flush_cache_4096 during the cache purge. Under certain workloads we would get an IRQ in the middle of a purge operation, and the cachelines would remain in an inconsistent state, leading to occasional stack corruption.

Signed-off-by: Takeo Takahashi <takahashi.takeo@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
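The shape of the fix, sketched (the function signature is assumed and the purge loop itself is elided):

    static void __flush_cache_4096(unsigned long addr, unsigned long phys,
                                   unsigned long exec_offset)
    {
        unsigned long flags;

        /* The purge is a multi-access sequence on the cache address
         * array; an IRQ landing mid-sequence can refill lines and
         * leave them inconsistent, so keep the whole loop under CLI. */
        local_irq_save(flags);
        /* ... purge the 4k worth of cachelines line by line ... */
        local_irq_restore(flags);
    }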
-
By Paul Mundt
This implements initial support for the vsyscall page on SH. At the moment we leave it configurable, as we have to support nommu from the same code base. We hook it up for the signal trampoline return at present, with more to be added later, once uClibc catches up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
flush_cache_mm() wraps into flush_cache_all(), which is rather excessive given that the number of PTEs within the specified context is generally quite low. Optimize this to walk the mm's VMA list and selectively flush the VMA ranges from the dcache. Invalidate the icache only if a VMA sets VM_EXEC.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
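A hedged sketch of the selective walk (__flush_dcache_range() is a hypothetical stand-in for the port's actual dcache writeback helper):

    void flush_cache_mm(struct mm_struct *mm)
    {
        struct vm_area_struct *vma;

        /* Previously this punted to flush_cache_all(); instead, only
         * touch the ranges this mm actually maps. */
        for (vma = mm->mmap; vma; vma = vma->vm_next) {
            __flush_dcache_range(vma->vm_start, vma->vm_end);
            if (vma->vm_flags & VM_EXEC)
                flush_icache_range(vma->vm_start, vma->vm_end);
        }
    }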
-
By Paul Mundt
This was previously unimplemented.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
A simple debugging aid for easier visibility of the respective cachelines.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
This adds support for the aforementioned CPU subtypes, and cleans up some build issues encountered as a result.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Yoshinori Sato
A few more outstanding nommu fixups.

Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Yoshinori Sato
This fixes up some of the various outstanding nommu bugs on SH.

Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
nommu needs to be able to shift PAGE_OFFSET, so we switch it to a non-user-visible CONFIG_PAGE_OFFSET and use that in the few places where it matters.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
Nothing exciting here, just trivial fixes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
Inhibit mapping through page tables in __ioremap() for PCI memory apertures on SH7751 and SH7780-style PCI controllers; translation is not possible for these areas. For other users that map a small window in P1/P2 space, ioremap() traps that already and should never make it to __ioremap().

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
SE73180 can use the generic support; we just need to wire up the IRQ demuxing.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
A bug got introduced when the split ptlock changes went in, whereby mm could be uninitialized for user mappings; this fixes it up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
ioremap() overhaul. Add support for transparent PMB mapping, get rid of p3_ioremap(), etc. Also drop the ioremap() and iounmap() routines from the machvec, as everyone can use the generic ioremap() API instead. For PCI memory apertures and other special cases, use the pci_iomap() API, as boards are already required to get the mapping right there.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
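Typical driver-side usage of the pci_iomap() API the commit points boards at (the probe function and register offset are illustrative):

    #include <linux/pci.h>

    static int example_probe(struct pci_dev *pdev)
    {
        void __iomem *regs;
        u32 val;

        /* Map BAR 0 in full (maxlen == 0 means the whole BAR); this
         * handles both MMIO and port-IO BARs, so boards no longer
         * need their own ioremap() in the machvec. */
        regs = pci_iomap(pdev, 0, 0);
        if (!regs)
            return -ENOMEM;

        val = ioread32(regs + 0x10);  /* 0x10 is an illustrative offset */
        (void)val;
        pci_iounmap(pdev, regs);
        return 0;
    }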
-
By Paul Mundt
Cleanup of the page table allocators, using generic folded PMD and PUD helpers. TLB flushing operations are moved to a more sensible spot. The page fault handler is also optimized slightly; we no longer waste cycles on IRQ disabling for flushing of the page from the ITLB, since we're already under CLI protection by the initial exception handler.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
Add support for 32-bit physical addressing through the SH-4A Privileged Space Mapping Buffer (PMB).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
Currently, when making changes to control registers, we typically need some time for the changes to take effect (8 nops, generally). However, for sh4a we simply need to do an icbi.

This is a simple patch for implementing a general-purpose ctrl_barrier() which functions as a control register write barrier. There's some additional documentation in the patch itself, but it's pretty self-explanatory.

There were also some places where we were not doing the barrier, which didn't seem to have any adverse effects on legacy parts, but certainly did on sh4a. It's safer to have the barrier in place for legacy parts as well in these cases, though this does make flush_tlb_all() more expensive (by an order of 8 nops). We can ifdef around the flush_tlb_all() case for now if it's clear that all legacy parts won't have a problem with this.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
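A hedged sketch of the barrier (the sh4a arm is a single icbi as described above; the operand and exact asm constraints are assumptions):

    #ifdef CONFIG_CPU_SH4A
    /* sh4a: one icbi is sufficient to serialize a control register
     * write. */
    #define ctrl_barrier() \
        __asm__ __volatile__ ("icbi @%0" : : "r" (PAGE_OFFSET))
    #else
    /* Legacy parts: let the write settle, the conventional 8 nops. */
    #define ctrl_barrier() \
        __asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
    #endif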
-
By Paul Mundt
Add CPU_HAS_PTEA and refactor some of the CPU flag settings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
We had a pretty interesting oops happening, where copy_user_page() was down()'ing p3map_sem[] with a bogus offset (specifically, an offset that hadn't been initialized with sema_init(), due to the mismatch between cpu_data->dcache.n_aliases and what was assumed based on the old CACHE_ALIAS value).

Luckily, spinlock debugging caught this for us, and so we drop the old hardcoded CACHE_ALIAS for sh4 completely and rely on the run-time probed cpu_data->dcache.alias_mask. This in turn gets the p3map_sem[] index right, and everything works again.

While we're at it, also convert to 4-level page tables.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
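The index computation before and after, sketched as a hypothetical helper (the member names come from the message above; the wrapper itself is illustrative):

    static inline struct semaphore *p3map_sem_for(unsigned long address)
    {
        /* Before: the index used the compile-time CACHE_ALIAS, which
         * no longer matched the probed cpu_data->dcache.n_aliases, so
         * down() could hit a semaphore sema_init() never touched.
         * After: derive it from the run-time probed alias mask. */
        return &p3map_sem[(address & cpu_data->dcache.alias_mask)
                          >> PAGE_SHIFT];
    }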
-
By Paul Mundt
Merge support for the SH7770 and SH7780 SH-4A subtypes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Richard Curnow
This reworks some of the SH-4 cache handling code to more easily accommodate newer-style caches (particularly the more-than-direct-mapped case), as well as optimizing some of the old code.

Signed-off-by: Richard Curnow <richard.curnow@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
SH-4A supports 'synco' as a barrier; sprinkle it around the cache ops as necessary.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
For some of the larger sizes we permitted spanning pages across several PTEs, but this turned out not to be generally useful. This reverts the SH hugetlbpage interface to something more sensible, using huge pages at single-PTE granularity.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
By Paul Mundt
flush_cache_range() wasn't page-aligning the end of the range; we can't assume that it will always be page aligned, and we ended up getting unaligned faults in some rare call paths. Additionally, we add a small optimization to just purge the dcache entirely if the range is large enough that the page table walking will take longer. We use an arbitrary value of 64 pages for the "large range" size, as per sh64.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
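A hedged sketch of both changes (flush_dcache_all() stands in for whatever whole-cache purge the port provides; the per-page walk is elided):

    #define MAX_DCACHE_PAGES 64  /* threshold borrowed from sh64 */

    void flush_cache_range(struct vm_area_struct *vma,
                           unsigned long start, unsigned long end)
    {
        /* The end was previously assumed to be aligned; align both. */
        start &= PAGE_MASK;
        end = PAGE_ALIGN(end);

        /* For large ranges the page table walk costs more than just
         * purging the whole dcache, so do that instead. */
        if (((end - start) >> PAGE_SHIFT) >= MAX_DCACHE_PAGES) {
            flush_dcache_all();
            return;
        }

        /* ... otherwise walk the page tables and flush per page ... */
    }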
-
- 26 Sep 2006, 1 commit
-
-
By Dave McCracken
One of the changes necessary for shared page tables is to standardize the pxx_page macros. pte_page and pmd_page have always returned the struct page associated with their entry, while pte_page_kernel and pmd_page_kernel have returned the kernel virtual address. pud_page and pgd_page, on the other hand, return the kernel virtual address.

Shared page tables need pud_page and pgd_page to return the actual page structures. There are very few actual users of these functions, so it is simple to standardize their usage. Since this is basic cleanup, I am submitting these changes as a standalone patch. Per Hugh Dickins' comments about it, I am also changing the pxx_page_kernel macros to pxx_page_vaddr to clarify their meaning.

Signed-off-by: Dave McCracken <dmccr@us.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
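After the rename, the two families read unambiguously; a sketch of the pmd pair (exact macro bodies vary per arch):

    /* Returns the kernel virtual address of the page table the pmd
     * points at (was pmd_page_kernel). */
    #define pmd_page_vaddr(pmd) \
        ((unsigned long)__va(pmd_val(pmd) & PAGE_MASK))

    /* Returns the struct page, as pte_page/pmd_page always have. */
    #define pmd_page(pmd) \
        virt_to_page((void *)pmd_page_vaddr(pmd))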
-
- 01 Jul 2006, 1 commit
-
-
By Jörn Engel
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
Signed-off-by: Adrian Bunk <bunk@stusta.de>
-
- 22 Mar 2006, 2 commits
-
-
By David Gibson
Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify if an address range is suitable for a hugepage mapping. is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own.

Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used. In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed).

This patch cleans up by abolishing is_aligned_hugepage_range(). Instead, prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks that the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions. The ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned. The powerpc version (just as previously) checks for suitable addresses, and if necessary performs low-level MMU frobbing to set up new areas for use by hugepages.

No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: William Lee Irwin III <wli@holomorphy.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
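The default version the message describes is just an alignment check; a sketch:

    /* Succeed iff the region is aligned to the hugepage size; archs
     * with extra constraints (ia64, powerpc) override this. */
    static inline int prepare_hugepage_range(unsigned long addr,
                                             unsigned long len)
    {
        if (len & ~HPAGE_MASK)
            return -EINVAL;
        if (addr & ~HPAGE_MASK)
            return -EINVAL;
        return 0;
    }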
-
By Nick Piggin
set_page_count() usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count() from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking and tighter control over how code is allowed to play around with page->_count.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
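The replacement helper for the common case is a one-liner (sketch; set_page_refcounted() layers debug assertions on top of the same store):

    /* Set up a freshly allocated page's refcount, replacing raw
     * set_page_count(page, 1) at call sites outside mm/. */
    static inline void init_page_count(struct page *page)
    {
        atomic_set(&page->_count, 1);
    }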
-