- 20 May 2008 (2 commits)
-
-
Committed by Jeremy Fitzhardinge
Use ~PTE_MASK to extract the non-pfn parts of the pte (i.e., the pte flags), rather than constructing an ad-hoc mask.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
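A minimal sketch of the idea, with assumed widths and helper names (only ~PTE_MASK itself comes from the commit):

    /* Illustrative only: PTE_MASK selects the pfn portion of a pte, so
     * ~PTE_MASK selects everything else, i.e. the flag bits. */
    typedef unsigned long long pteval_t;

    #define PAGE_SHIFT      12
    #define PHYSICAL_MASK   ((1ULL << 46) - 1)   /* assumed phys-addr width */
    #define PTE_MASK        (PHYSICAL_MASK & ~((1ULL << PAGE_SHIFT) - 1))

    static inline pteval_t pte_flags(pteval_t pte)
    {
            return pte & ~PTE_MASK;     /* flags: everything but the pfn */
    }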
-
Committed by Jeremy Fitzhardinge
_PAGE_CHG_MASK is defined as the set of bits not updated by pte_modify(): specifically, the pfn itself, and the Accessed and Dirty bits (which are updated by hardware).
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
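How such a mask is typically consumed by pte_modify(), as a sketch (bit positions assumed; PTE_MASK as in the sketch above):

    #define _PAGE_ACCESSED  (1ULL << 5)
    #define _PAGE_DIRTY     (1ULL << 6)
    /* Bits pte_modify() must preserve: the pfn plus Accessed/Dirty,
     * which the hardware can set behind our back. */
    #define _PAGE_CHG_MASK  (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)

    static inline pteval_t pte_modify(pteval_t pte, pteval_t newprot)
    {
            /* Keep the preserved bits, take all other flags from newprot. */
            return (pte & _PAGE_CHG_MASK) | (newprot & ~_PAGE_CHG_MASK);
    }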
-
- 15 May 2008 (1 commit)
-
-
Committed by Venki Pallipadi
There is a defect in mprotect which lets the user change the page cache type bits, bypassing the kernel's reserve_memtype and free_memtype wrappers. Fix the problem by not letting mprotect change the PAT bits.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
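A plausible shape for the fix, sketched: fold the cache-attribute bits (PAT/PCD/PWT; exact positions and membership assumed) into _PAGE_CHG_MASK, so that pte_modify(), and therefore mprotect(), leaves them alone:

    #define _PAGE_PWT       (1ULL << 3)
    #define _PAGE_PCD       (1ULL << 4)
    #define _PAGE_PAT       (1ULL << 7)

    /* The cache type now survives pte_modify(), so mprotect() cannot
     * bypass the reserve_memtype()/free_memtype() accounting. */
    #define _PAGE_CHG_MASK  (PTE_MASK | _PAGE_PCD | _PAGE_PWT | \
                             _PAGE_PAT | _PAGE_ACCESSED | _PAGE_DIRTY)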
-
- 28 April 2008 (1 commit)
-
-
Committed by Nick Piggin
s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
            ...
    }

This is fine; however, if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma-based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

And finally, we may rather make this concept usable by any architecture than make it s390-only, so implement a new type of pte state for this. Unfortunately the old vma-based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit uglier than we would like, but the two cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o.

BTW: I haven't put vm_normal_page() into arch code as per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch-specific function -- the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them to be able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we would have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
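Since the change is a noop everywhere at this point, the default accessors are presumably stubs along these lines (a sketch, not the literal diff):

    typedef struct { unsigned long pte; } pte_t;    /* illustrative */

    /* No spare pte bit: nothing is ever special, and the setter is a
     * no-op, so vm_normal_page() keeps using the vma-based scheme. */
    static inline int pte_special(pte_t pte)
    {
            return 0;
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
            return pte;
    }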
-
- 25 April 2008 (6 commits)
-
-
Committed by Jeremy Fitzhardinge
All pagetables need fundamentally the same setup and destruction, so just use the same code for everything.
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Make KERNEL_PGD_PTRS common; previously it was only being defined for 32-bit. There are a couple of follow-on changes from this (see the sketch below):

- KERNEL_PGD_PTRS was being defined in terms of USER_PGD_PTRS. The definition of USER_PGD_PTRS doesn't really make much sense on x86-64, since it can have two different user address-space configurations. I renamed USER_PGD_PTRS to KERNEL_PGD_BOUNDARY, which is meaningful for all of the 32/32, 32/64 and 64/64 process configurations.

- USER_PTRS_PER_PGD was also defined and was being used for similar purposes. Converting its users to KERNEL_PGD_BOUNDARY left it completely unused, so I removed it.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Zach Amsden <zach@vmware.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
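A sketch of the resulting definitions (assuming the usual pgd_index() helper and PAGE_OFFSET as the user/kernel split):

    /* First pgd slot that maps kernel space; slots below it belong to
     * user space in the 32/32, 32/64 and 64/64 configurations alike. */
    #define KERNEL_PGD_BOUNDARY     pgd_index(PAGE_OFFSET)
    #define KERNEL_PGD_PTRS         (PTRS_PER_PGD - KERNEL_PGD_BOUNDARY)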
-
Committed by Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Introduce phys_mem_access_prot_allowed(), which checks whether the mapping is possible without any conflicts, and returns success or failure accordingly. phys_mem_access_prot() by itself does not allow for a failure case. This ability to return an error is needed for PAT, where we may have aliasing conflicts. Set up __HAVE_PHYS_MEM_ACCESS_PROT on x86 and move the x86-specific code out of /dev/mem into the arch-specific area.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
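The shape of the two hooks, as the description implies (signatures sketched, not quoted from the patch):

    /* Returns nonzero if the mapping may proceed; may also rewrite
     * *prot, e.g. to a non-conflicting cache attribute. */
    int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
                                     unsigned long size, pgprot_t *prot);

    /* By contrast, this one cannot fail; it only computes a pgprot. */
    pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
                                  unsigned long size, pgprot_t prot);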
-
- 17 April 2008 (4 commits)
-
-
Committed by Jiri Slaby
Some of the pde bits weren't documented; add short descriptions for them.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Joe Perches
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Introduce ioremap_wc() for WC remapping. (The generic wrapper comes in a later patch.)
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Sets up the pat_init() infrastructure. The PAT MSR has the following setting:

    PAT
    |PCD
    ||PWT
    |||
    000 WB      _PAGE_CACHE_WB
    001 WC      _PAGE_CACHE_WC
    010 UC-     _PAGE_CACHE_UC_MINUS
    011 UC      _PAGE_CACHE_UC

We are effectively changing WT from its boot-time setting to WC. UC_MINUS is used to provide backward compatibility to existing /dev/mem users (X).

reserve_memtype and free_memtype are new interfaces for maintaining alias-free mappings. They are currently implemented in a simple way with a linked list, and are not optimized. reserve and free track the effective memory type, as a result of the PAT and MTRR settings, rather than what is actually requested in PAT. pat_init piggybacks on mtrr_init, as the rules for setting both PAT and MTRR are the same.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
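From the description, the new interfaces would look roughly like this (sketched signatures; the ret_type out-parameter reflects that the effective type, after MTRR, is tracked rather than the requested one):

    /* Track the effective memory type of a physical range so that
     * conflicting (aliased) cache attributes can be refused. */
    int reserve_memtype(u64 start, u64 end, unsigned long req_type,
                        unsigned long *ret_type);
    int free_memtype(u64 start, u64 end);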
-
- 27 March 2008 (1 commit)
-
-
Committed by Suresh Siddha
Fix the 3D performance drop reported at: http://bugzilla.kernel.org/show_bug.cgi?id=10328

fb drivers are using ioremap()/ioremap_nocache(), followed by mtrr_add with the WC attribute. Recent changes in the page attribute code made both ioremap() and ioremap_nocache() mappings UC (instead of the previous UC-). This breaks graphics performance, as the effective memory type is UC instead of the expected WC.

The correct way to fix this is to add ioremap_wc() (which uses UC- in the absence of PAT kernel support and WC with PAT) and change all the fb drivers to use this new ioremap_wc() API. We can take this correct and longer route after 2.6.25. For now, revert to the UC- behavior for ioremap/ioremap_nocache.
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
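The ioremap_wc() described here plausibly reduces to a sketch like the following (pat_enabled, ioremap_prot() and _PAGE_CACHE_WC are assumed names):

    void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
    {
            if (pat_enabled)        /* real write-combining via PAT */
                    return ioremap_prot(phys_addr, size, _PAGE_CACHE_WC);
            return ioremap_nocache(phys_addr, size);   /* falls back to UC- */
    }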
-
- 10 February 2008 (1 commit)
-
-
Committed by Harvey Harrison
Adjust the definition of lookup_address to take an unsigned long level argument. Adjust the callers in xen/mmu.c that pass in a dummy variable.
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
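A hypothetical caller after the change (sketch; assumes the level comes back through a pointer, per the dummy-variable note, and that PG_LEVEL_4K names the 4k level):

    unsigned long level;    /* out-parameter for the mapping level */
    pte_t *pte = lookup_address(address, &level);

    if (pte && level == PG_LEVEL_4K)
            inspect_pte(pte);   /* hypothetical helper: a 4k pte we can touch */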
-
- 04 February 2008 (1 commit)
-
-
Committed by Andi Kleen
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 30 January 2008 (21 commits)
-
-
Committed by Jeremy Fitzhardinge
Use a standard list threaded through page->lru for maintaining the pgd list on PAE. This is the same as 64-bit, and seems saner than using a non-standard list via page->index.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ingo Molnar
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ingo Molnar
Add PAGE_KERNEL_EXEC_NOCACHE.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner
This way, PG_LEVEL_1GB will be an easy change.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Thomas Gleixner
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
Introduce canon_pgprot().
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
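The helper presumably canonicalizes a pgprot down to the bits this CPU supports, along these lines (a sketch):

    /* Drop flag bits (e.g. NX) that this CPU cannot represent. */
    #define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)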
-
Committed by Andi Kleen
The pte_* modifier functions that cleared bits dropped the NX bit on 32-bit PAE, because they only worked in int but NX is in bit 63. Fix that by adding appropriate casts so that the arithmetic happens as long long on PAE kernels. I decided to just use 64-bit arithmetic instead of open coding like pte_modify(), because gcc should generate good enough code for that now.
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
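A self-contained userspace demonstration of the failure mode (illustrative flag values, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>

    #define MY_PAGE_NX (1ULL << 63)
    #define MY_PAGE_RW (1ULL << 1)

    int main(void)
    {
            uint64_t pte = MY_PAGE_NX | MY_PAGE_RW | 0x1000;

            /* Buggy: squeezing the pte through a 32-bit type silently
             * truncates bit 63, i.e. NX is dropped. */
            uint64_t buggy = (uint32_t)pte & ~(uint32_t)MY_PAGE_RW;

            /* Fixed: keep the arithmetic in 64 bits throughout. */
            uint64_t fixed = pte & ~(uint64_t)MY_PAGE_RW;

            printf("buggy keeps NX? %d\n", !!(buggy & MY_PAGE_NX));  /* 0 */
            printf("fixed keeps NX? %d\n", !!(fixed & MY_PAGE_NX));  /* 1 */
            return 0;
    }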
-
Committed by Andi Kleen
64-bit already had it. Needed for later patches.
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
No need to make it 64-bit there.
Signed-off-by: Andi Kleen <ak@suse.de>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
Needed for some test code.
Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Andi Kleen
- Rename it from pte_exec_kernel() to pte_exec(). There is nothing kernel-specific in there.
- Move it into the common file, because _PAGE_NX is 0 on !PAE, so pte_exec() will then always evaluate to true (sketched below).

Signed-off-by: Andi Kleen <ak@suse.de>
Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
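After the move, the common definition is presumably just (a sketch; _PAGE_NX being 0 on !PAE makes the expression constant-true there):

    static inline int pte_exec(pte_t pte)
    {
            /* Executable unless the no-execute bit is set. */
            return !(pte_val(pte) & _PAGE_NX);
    }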
-
Committed by Jeremy Fitzhardinge
When the _PAGE_FOO constants are defined as (1ul << _PAGE_BIT_FOO), they become unsigned longs. In 32-bit PAE mode, these end up being implicitly cast to 64-bit types when used to manipulate a pte, and because they're unsigned the top 32 bits are 0, destroying the upper bits of the pte. When the _PAGE_FOO constants are given a signed integer type, the cast to 64 bits will sign-extend so that the upper bits are all ones, preserving the upper pte bits in manipulations. Explain this in a prominent place.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
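The effect is easy to reproduce in plain C; here fixed-width 32-bit types stand in for a 32-bit PAE kernel's unsigned long (values illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t pte = (1ULL << 63) | 0x67;     /* NX plus low flags */

            uint32_t umask = ~(UINT32_C(1) << 1);   /* unsigned constant */
            int32_t  smask = ~(INT32_C(1) << 1);    /* signed constant   */

            /* Zero-extends: the mask's upper 32 bits become 0, so the
             * pte's upper bits (including NX) are destroyed. */
            printf("unsigned: %#llx\n", (unsigned long long)(pte & umask));

            /* Sign-extends: the mask's upper 32 bits become all-ones, so
             * only the intended bit is cleared and NX survives. */
            printf("signed:   %#llx\n", (unsigned long long)(pte & smask));
            return 0;
    }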
-
Committed by Jeremy Fitzhardinge
Put all the defines for mapping pagetable operations to their native versions (for the non-paravirt case) into one place. Make the corresponding changes to paravirt.h. The tricky part here is that when a pagetable entry can't be updated atomically (i.e., 32-bit PAE), we need special handlers for pte_clear, set_pte_atomic and set_pte_present. However, the other two modes don't need special handling for these, and can use a common set_pte(_at) path.
[ mingo@elte.hu: fixes ]
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
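A sketch of the non-paravirt mapping this describes (macro and helper names assumed to follow the native_* convention):

    /* Non-paravirt case: pagetable ops map 1:1 onto native helpers. */
    #define set_pte(ptep, pte)              native_set_pte(ptep, pte)
    #define set_pte_at(mm, addr, ptep, pte) \
            native_set_pte_at(mm, addr, ptep, pte)

    #ifdef CONFIG_X86_PAE
    /* 64-bit ptes on a 32-bit CPU can't be written as one word, so
     * these three need dedicated, carefully ordered handlers. */
    #define set_pte_atomic(ptep, pte)       native_set_pte_atomic(ptep, pte)
    #define set_pte_present(mm, addr, ptep, pte) \
            native_set_pte_present(mm, addr, ptep, pte)
    #define pte_clear(mm, addr, ptep)       native_pte_clear(mm, addr, ptep)
    #endif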
-
Committed by Jeremy Fitzhardinge
Move ZERO_PAGE/empty_zero_page to a common place.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
&ptep->pte isn't always an unsigned long *, so cast it to avoid a warning.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Make various pte accessors common.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ingo Molnar
Based on:

    Subject: x86: unify pgtable accessors which use supported_pte_mask
    From: Jeremy Fitzhardinge <jeremy@goop.org>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Make users of supported_pte_mask common. This has the side effect of introducing the variable for 32-bit non-PAE, but I think it's a pretty small cost to simplify the code.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
Unify the functions that test and set bits in pagetable entries. NOP: only moves existing code around, without any change to it.
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Ingo Molnar
Based on:

    Subject: x86/pgtable: fix constant sign extension problem
    From: Jeremy Fitzhardinge <jeremy@goop.org>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Jeremy Fitzhardinge
32-bit and 64-bit use the same flags for pagetable entries, so make them all common.
[ mingo@elte.hu: fixes ]
Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 11 October 2007 (1 commit)
-
-
Committed by Thomas Gleixner
Move the headers to include/asm-x86 and fix up the header-install make rules.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-