- 16 Jan 2010, 1 commit

By Matt Fleming
We need some more page flags to hook up _PAGE_WIRED (and eventually other things), so use the unused PTE bits above the PPN field, as no implementations currently use these for anything. Now that we have _PAGE_WIRED, let's provide the SH-5 functions for wiring up TLB entries.

Signed-off-by: Matt Fleming <matt@console-pimps.org>
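As a rough illustration of the idea only (the bit position and helper names below are hypothetical stand-ins for whatever the SH pgtable headers actually define):

```c
/*
 * Sketch only: _PAGE_WIRED parked in an otherwise-unused PTE bit above
 * the PPN field. The bit position and helpers here are illustrative,
 * not the definitions from this patch.
 */
#define _PAGE_WIRED		(1ULL << 34)	/* hypothetical spare bit */

#define pte_wired(pte)		(pte_val(pte) & _PAGE_WIRED)
#define pte_mkwired(pte)	(__pte(pte_val(pte) | _PAGE_WIRED))
```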

- 13 Jan 2010, 1 commit

By Paul Mundt
This introduces some much-overdue chainsawing of the fixed PMB support. Fixed PMB was introduced initially to work around the fact that dynamic PMB mode was relatively broken, though the two were never intended to converge. The main areas of difference are whether the system is booted in 29-bit or 32-bit mode, and whether legacy mappings are to be preserved. Any system booting in true 32-bit mode will not care about legacy mappings, so these are roughly decoupled.

Regardless of the entry point, PMB and 32BIT are directly related as far as the kernel is concerned, so we also switch back to having one select the other.

With legacy mappings iterated through and applied in the initialization path, it is now possible to finally merge the two implementations and permit dynamic remapping on top of the remaining entries, regardless of whether the boot mappings are crafted by hand or inherited from the boot loader.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 15 Aug 2009, 2 commits

By Paul Mundt
The caches-enabled case needs more work, but it is presently broken regardless, so this can be done incrementally.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
This does a bit of reorganizing to allow nommu to use the new, generic cache.c; no functional changes.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 27 Jul 2009, 2 commits

By Paul Mundt
Now that the SH-4 page clear/copy ops are generic, they can be used for all platforms with CONFIG_MMU=y. SH-5 remains the odd one out, but it too will gradually be converted over to using this interface. SH-3 platforms which do not contain aliases will see no impact from this change, while aliasing SH-3 platforms will get the same interface as SH-4.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
This wires up clear_user_highpage() on SH-4 and subsequently converts the SH7705 32kB cache mode over to using it. Now that the SH-4 implementation handles all of the dcache purging directly in the aliasing case, there is no need to do this in the default clear_page() implementation.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 22 Jul 2009, 1 commit

By Paul Mundt
This inverts the delayed dcache flush a bit to be more in line with other platforms. At the same time this also gives us the ability to do some more optimization and cleanup. Now that the update_mmu_cache() call site only tests for the bit, the implementation can gradually be split out and made generic, rather than relying on special implementations for each of the peculiar CPU types.

SH7705 in 32kB mode and SH-4 still need slightly different handling, but this is something that can remain isolated in the varying page copy/clear routines. On top of that, SH-X3 is dcache coherent, so there is no need to bother with any of these tests in the PTEAEX version of update_mmu_cache(), so we kill that off too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 12 Jun 2009, 1 commit

By Arnd Bergmann
The current asm-generic/page.h only contains the get_order() function, and asm-generic/uaccess.h only implements unaligned accesses. This renames the files to getorder.h and uaccess-unaligned.h to make room for new page.h and uaccess.h files that will be usable by all simple (e.g. nommu) architectures.

Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
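For context, the helper that ends up in getorder.h is essentially the following loop, returning the smallest page order whose span covers size (a sketch of the long-standing asm-generic implementation, reproduced from memory rather than quoted from this patch):

```c
/* Smallest order such that (1 << order) pages cover @size. */
static inline __attribute__((const)) int get_order(unsigned long size)
{
	int order = -1;

	size = (size - 1) >> (PAGE_SHIFT - 1);
	do {
		size >>= 1;
		order++;
	} while (size);

	return order;
}
```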

- 10 Mar 2009, 1 commit

By Yoshihiro Shimoda
This provides a method for supporting fixed PMB mappings inherited from the bootloader, as an alternative to the dynamic PMB mapping currently used by the kernel. In the future these methods will be combined. The P1/P2 area is handled like a regular 29-bit physical address, and local bus devices are assigned P3 area addresses.

Signed-off-by: Yoshihiro Shimoda <shimoda.yoshihiro@renesas.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 12 Sep 2008, 1 commit

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 29 Jul 2008, 1 commit

By Paul Mundt
This follows the sparc changes a439fe51. Most of the moving about was done with Sam's directions at:

http://marc.info/?l=linux-sh&m=121724823706062&w=2

with subsequent hacking and fixups entirely my fault.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 28 Jul 2008, 1 commit

By Paul Mundt
16kB is a useful size on nommu, while 64kB still tends to be too big to be useful. Newer MMUs are likely to support this as well, so plug it in in anticipation of those, too.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
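For illustration, the page-size choice typically boils down to a Kconfig-driven PAGE_SHIFT in the arch page.h, along these lines (a sketch; the CONFIG_PAGE_SIZE_* names follow the usual arch/sh convention rather than quoting this exact patch):

```c
/* PAGE_SHIFT selected from the Kconfig page-size choice. */
#if defined(CONFIG_PAGE_SIZE_4KB)
# define PAGE_SHIFT	12
#elif defined(CONFIG_PAGE_SIZE_8KB)
# define PAGE_SHIFT	13
#elif defined(CONFIG_PAGE_SIZE_16KB)
# define PAGE_SHIFT	14	/* the size being added here */
#elif defined(CONFIG_PAGE_SIZE_64KB)
# define PAGE_SHIFT	16
#else
# error "Bogus kernel page size?"
#endif
```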

- 25 Jul 2008, 1 commit

By Andrea Righi
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit boundary. For example:

	u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB. The problem resides in the PAGE_MASK definition (from include/asm-x86/page.h, for example):

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
	#define PAGE_MASK	(~(PAGE_SIZE-1))
	...
	#define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so anything greater than 4GB ANDed with PAGE_MASK is truncated to the 32-bit boundary. Using the ALIGN() macro seems to be the right way, because it uses typeof(addr) for the mask. Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h and into include/linux/mm.h. See also the lkml discussion: http://lkml.org/lkml/2008/6/11/237

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
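The fix described here amounts to building PAGE_ALIGN() on the generic ALIGN() helper so the mask takes the type of the address; roughly (a sketch of the relevant kernel.h/mm.h macros, not a verbatim copy of the patch):

```c
/* include/linux/kernel.h: the mask is cast to typeof(x), so a u64
 * address gets a 64-bit mask instead of a 32-bit PAGE_MASK. */
#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
#define ALIGN(x, a)		__ALIGN_MASK(x, (typeof(x))(a) - 1)

/* include/linux/mm.h: one arch-independent definition for everyone. */
#define PAGE_ALIGN(addr)	ALIGN(addr, PAGE_SIZE)
```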

- 14 Feb 2008, 1 commit

By Paul Mundt
A number of cleanups to get the SH-5 cache management code in line with the rest of the SH backend.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 09 Feb 2008, 1 commit

By Martin Schwidefsky
Background: I've implemented 1K/2K page tables for s390. These sub-page page tables are required to properly support the s390 virtualization instruction with KVM. The SIE instruction requires that the page tables have 256 page table entries (pte) followed by 256 page status table entries (pgste). The pgstes are only required if the process is using the SIE instruction. The pgstes are updated by the hardware and by the hypervisor for a number of reasons, one of them being dirty and reference bit tracking. To avoid wasting memory, the standard pte table allocation should return 1K/2K (31/64 bit) and 2K/4K if the process is using SIE.

Problem: Page size on s390 is 4K, page table size is 1K or 2K. That means the s390 version of pte_alloc_one cannot return a pointer to a struct page. The trouble is that with the CONFIG_HIGHPTE feature on x86, pte_alloc_one cannot return a pointer to a pte either, since that would require more than 32 bits for the return value of pte_alloc_one (and the pte * would not be accessible since it's not kmapped).

Solution: The only solution I found to this dilemma is a new typedef: a pgtable_t. For s390, pgtable_t will be a (pte *) - to be introduced with a later patch. For everybody else it will be a (struct page *). The additional problem of the initialization of the ptl lock and the NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and a destructor pgtable_page_dtor. The page table allocation and free functions need to call these two whenever a page table page is allocated or freed. pmd_populate will get a pgtable_t instead of a struct page pointer. To get the pgtable_t back from a pmd entry that has been installed with pmd_populate, a new function pmd_pgtable is added. It replaces the pmd_page call in free_pte_range and apply_to_pte_range.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
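To make the new contract concrete, a struct-page-backed architecture ends up with allocation/free hooks along these lines (a simplified sketch using the generic helpers named above; the GFP flags and exact signatures are illustrative, not lifted from any one architecture):

```c
typedef struct page *pgtable_t;	/* s390 will instead use: typedef pte_t *pgtable_t; */

static pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long address)
{
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);

	if (!page)
		return NULL;
	pgtable_page_ctor(page);	/* init the ptl lock, account NR_PAGETABLE */
	return page;
}

static void pte_free(struct mm_struct *mm, pgtable_t page)
{
	pgtable_page_dtor(page);	/* undo the constructor's work */
	__free_page(page);
}
```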

- 08 Feb 2008, 1 commit

By Kirill A. Shutemov
asm/elf.h, asm/page.h and asm/user.h are not exported to userspace now, so we can drop the #ifdef __KERNEL__ guards from them.

[k.shutemov@gmail.com: remove #ifdef __KERNEL__]
Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com>
Reviewed-by: David Woodhouse <dwmw2@infradead.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

- 28 Jan 2008, 7 commits

By Stuart Menefy
Signed-off-by: Stuart Menefy <stuart.menefy@st.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 07 Nov 2007, 2 commits

By Paul Mundt
Now that copy_to_user_page()/copy_from_user_page() are wired up, we can drop the old __copy_xxx() implementations, and since the page colouring scheme has changed via kmap_coherent(), we can avoid the flush in these specific helpers.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
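A rough sketch of the pattern being described, reconstructed from memory rather than quoted from the arch/sh routine (the kmap_coherent()/kunmap_coherent() calling convention shown here is an assumption and has changed over time):

```c
void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
		       unsigned long vaddr, void *dst, const void *src,
		       unsigned long len)
{
	/* Map the page at a kernel address with the same cache colour as
	 * the user mapping, so the write needs no post-copy dcache flush. */
	void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);

	memcpy(vto, src, len);
	kunmap_coherent(vto);

	/* Executable mappings still need the icache brought up to date. */
	if (vma->vm_flags & VM_EXEC)
		flush_cache_page(vma, vaddr, page_to_pfn(page));
}
```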

By Paul Mundt
With the kmap_coherent() API in place, this is trivial to implement, and lets us avoid the cache flush in certain cases.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 30 Oct 2007, 1 commit

By Paul Mundt
As noted by David: pte_page() is a macro defined as follows:

	include/asm-sh/pgtable.h
	#define pte_page(x)	phys_to_page(pte_val(x)&PTE_PHYS_MASK)

	include/asm-sh/page.h
	#define phys_to_page(phys)	(pfn_to_page(phys >> PAGE_SHIFT))

As you can see, the phys_to_page() macro doesn't wrap the 'phys' parameter in parentheses, so we end up with:

	pte_val(x)&PTE_PHYS_MASK >> PAGE_SHIFT

which is not what we wanted, as '>>' has a higher precedence than bitwise AND. I dug into the git repository and I believe this bug was added with commit 104b8dea ("[PATCH] unify pfn_to_page: sh pfn_to_page", 2006-03-27, KAMEZAWA Hiroyuki):

	-#define phys_to_page(phys)	(mem_map + (((phys)-__MEMORY_START) >> PAGE_SHIFT))
	-#define page_to_phys(page)	(((page - mem_map) << PAGE_SHIFT) + __MEMORY_START)
	+#define phys_to_page(phys)	(pfn_to_page(phys >> PAGE_SHIFT))
	+#define page_to_phys(page)	(page_to_pfn(page) << PAGE_SHIFT)

Reported-by: David ADDISON <david.addison@st.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
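The fix implied here is simply to parenthesise the macro argument so the shift can no longer bind tighter than the AND (a sketch of the corrected definition, not a quote of the patch):

```c
/* Parenthesising 'phys' restores the intended order of evaluation
 * inside pte_page(): mask first, then shift. */
#define phys_to_page(phys)	(pfn_to_page((phys) >> PAGE_SHIFT))
```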

- 21 Sep 2007, 2 commits

By Paul Mundt
The extended mode TLB requires both 64-bit PTEs and a 64-bit pgprot; correspondingly, the PGD also has to be 64 bits, so fix that up. The kernel and user permission bits really are decoupled in early cuts of the silicon, which means that we also have to set the corresponding kernel permissions on user pages, or we end up with user pages that the kernel simply can't touch (!). Finally, with those things corrected, really enable MMUCR.ME and correct the PTEA value (this simply needs to be the upper 32 bits of the PTE, with the size and protection bit encoding).

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
This reworks the cache mode configuration in Kconfig, and allows for explicit selection of write-back/write-through/off configurations. All of the cache flushing routines are optimized away for the off case.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 08 Jun 2007, 3 commits

By Paul Mundt
Slub currently defaults to 8-byte alignment for the kmalloc and slab minalign values, where 4 will suffice. In the slab case BYTES_PER_WORD == 4 already, so defining the minalign values outright doesn't cause any regressions there either.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
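In practice this kind of override is a pair of defines in the arch headers, roughly as below (a sketch; treat the exact values and header placement as illustrative rather than a quote of this patch):

```c
/* Word-sized alignment is enough here; without these, slub falls back
 * to its 8-byte default for both kmalloc and slab objects. */
#define ARCH_KMALLOC_MINALIGN	4
#define ARCH_SLAB_MINALIGN	4
```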

By Paul Mundt
pfn_valid() is already defined in the sparsemem case, so we only need to define this for CONFIG_FLATMEM.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
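The flatmem-only definition is typically just a bounds check against the memory the node actually covers, along these lines (a sketch of the usual pattern, with min_low_pfn/max_low_pfn standing in for whatever limits the architecture uses):

```c
#ifdef CONFIG_FLATMEM
/* A pfn is valid iff it falls inside the directly-managed memory range. */
#define pfn_valid(pfn)	((pfn) >= min_low_pfn && (pfn) < max_low_pfn)
#endif
```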

By Paul Mundt
This adds in some more __user annotations. These weren't being handled properly in some of the __get_user and __put_user paths, so tidy those up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 07 May 2007, 1 commit

By Paul Mundt
This reworks some of the node 0 bootmem initialization in preparation for discontigmem and sparsemem support. We switch to ARCH_POPULATES_NODE_MAP as a result.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 13 Feb 2007, 1 commit

By Paul Mundt
This was breaking the uClibc build, which triggered the bogus page size error.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

- 06 Dec 2006, 1 commit

By Paul Mundt
This adds some preliminary support for the SH-X2 MMU, used by newer SH-4A parts (particularly SH7785). This MMU implements a 'compat' mode with SH-X MMUs and an 'extended' mode for SH-X2 extended features. Extended features include additional page sizes (8kB, 4MB, 64MB), as well as the addition of page execute permissions. The extended mode attributes are placed in a second data array, which requires us to switch to 64-bit PTEs when in X2 mode. With the addition of the exec perms, we also overhaul the mmap prots somewhat, now that it's possible to handle them more intelligently.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
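The 64-bit PTE switch shows up in the page.h typedefs roughly as follows (a sketch based on how arch/sh handles the X2 case; the exact field names may differ from this particular patch):

```c
#ifdef CONFIG_X2TLB
/* Extended mode: the attributes live in a second 32-bit data array, so
 * the software PTE is the concatenation of both halves. */
typedef struct { unsigned long pte_low, pte_high; } pte_t;
typedef struct { unsigned long long pgprot; } pgprot_t;
#define pte_val(x) \
	((x).pte_low | ((unsigned long long)(x).pte_high << 32))
#else
typedef struct { unsigned long pte_low; } pte_t;
typedef struct { unsigned long pgprot; } pgprot_t;
#define pte_val(x)	((x).pte_low)
#endif
```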

- 27 Sep 2006, 6 commits

By Paul Mundt
Set the SHM alignment at runtime, based off of the probed cache desc. Optimize get_unmapped_area() to only colour-align shared mappings.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
This implements initial support for the vsyscall page on SH. At the moment we leave it configurable, due to having nommu to support from the same code base. We hook it up for the signal trampoline return at present, with more to be added later, once uClibc catches up.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
We want to be able to use PAGE_SIZE all over the place; this is the same approach adopted by other architectures.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
This adds DEBUG_STACK_USAGE and DEBUG_STACKOVERFLOW support for SH.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Yoshinori Sato
This fixes up some of the various outstanding nommu bugs on SH.

Signed-off-by: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>

By Paul Mundt
nommu needs to be able to shift PAGE_OFFSET, so we switch it to a non-user-visible CONFIG_PAGE_OFFSET and use that in the few places where it matters.

Signed-off-by: Paul Mundt <lethal@linux-sh.org>
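The end result in the header is a one-liner that defers to Kconfig, roughly (a sketch of the obvious definition; surrounding guards omitted):

```c
/* MMU kernels keep the usual P1 base (0x80000000) as the Kconfig
 * default; nommu configurations can move it without touching this header. */
#define PAGE_OFFSET	CONFIG_PAGE_OFFSET
```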