- 30 Jan 2015, 3 commits
-
-
Committed by LEROY Christophe

When pages are not 4K, the PGDIR table is allocated with kmalloc(). In order to optimise the TLB handlers, aligned memory is needed, but kmalloc() doesn't provide aligned memory blocks, so let's use a kmem_cache pool instead.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
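A minimal sketch of that approach, assuming a dedicated cache set up at boot; the cache name and the PGD_TABLE_SIZE macro are illustrative stand-ins, not the exact kernel code:

    /* kmalloc() only guarantees a small minimum alignment, so carve the
     * PGDIR out of a slab cache created with an explicit alignment equal
     * to the table size. Sketch only. */
    static struct kmem_cache *pgd_cache;

    void pgd_cache_init(void)
    {
            pgd_cache = kmem_cache_create("pgdir", PGD_TABLE_SIZE,
                                          PGD_TABLE_SIZE /* alignment */,
                                          0, NULL);
    }

    pgd_t *pgd_alloc(struct mm_struct *mm)
    {
            return kmem_cache_zalloc(pgd_cache, GFP_KERNEL);
    }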
-
Committed by LEROY Christophe

On powerpc 8xx, the 0x400 bit in TLB entries is set to 1 for read-only pages and to 0 for RW pages, so we should use _PAGE_RO instead of _PAGE_RW.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
Committed by LEROY Christophe

Some powerpc variants like the 8xx don't have an RW bit in the PTE bits but an RO (Read Only) bit instead. This patch implements the handling of a _PAGE_RO flag to be used in place of _PAGE_RW.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[scottwood@freescale.com: fix whitespace]
Signed-off-by: Scott Wood <scottwood@freescale.com>
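One way such an inverted bit folds into the generic accessors, sketched under the assumption that exactly one of _PAGE_RW/_PAGE_RO is non-zero on any given platform:

    /* On platforms with a real RW bit, _PAGE_RO is 0 and these helpers
     * reduce to the usual _PAGE_RW manipulation; on 8xx-style platforms
     * _PAGE_RW is 0 and only the RO bit is flipped. */
    #ifndef _PAGE_RO
    #define _PAGE_RO 0
    #endif

    static inline pte_t pte_wrprotect(pte_t pte)
    {
            pte_val(pte) &= ~_PAGE_RW;
            pte_val(pte) |= _PAGE_RO;
            return pte;
    }

    static inline pte_t pte_mkwrite(pte_t pte)
    {
            pte_val(pte) |= _PAGE_RW;
            pte_val(pte) &= ~_PAGE_RO;
            return pte;
    }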
-
- 08 Nov 2014, 1 commit
-
-
Committed by LEROY Christophe

No need to re-set this bit at each TLB miss. Let's set it in the PTE instead.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Scott Wood <scottwood@freescale.com>
-
- 02 Oct 2014, 1 commit
-
-
Committed by Anton Blanchard

Add printk levels to some places in the powerpc port.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 25 Sep 2014, 1 commit
-
-
Committed by Anton Blanchard

There were a number of prototypes for functions that no longer exist. Remove them.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
-
- 27 Oct 2010, 1 commit
-
-
Committed by Peter Zijlstra

Since we no longer need to provide KM_type, the whole pte_*map_nested() API is now redundant; remove it.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Miller <davem@davemloft.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 07 Apr 2010, 1 commit
-
-
Committed by Jason Gunthorpe

Instead of referencing mem_map directly, use pfn_to_page. Otherwise the kernel crashes when trying to start userspace if ARCH_PFN_OFFSET is non-zero and CONFIG_BOOKE is not defined.

Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
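The off-by-ARCH_PFN_OFFSET bug, sketched in flatmem terms (macro shapes illustrative):

    /* With FLATMEM, mem_map[] starts at the page for ARCH_PFN_OFFSET,
     * so indexing it with a raw pfn over-shoots by that offset: */
    #define pte_page_buggy(x)  (mem_map + pte_pfn(x))    /* wrong   */
    #define pte_page_fixed(x)  pfn_to_page(pte_pfn(x))   /* correct */
    /* pfn_to_page() expands to (mem_map + ((pfn) - ARCH_PFN_OFFSET))
     * under FLATMEM, which is why the direct mem_map arithmetic only
     * happened to work when ARCH_PFN_OFFSET was 0. */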
-
- 02 Sep 2009, 1 commit
-
-
Committed by Kumar Gala

Switch to using the Power ISA defined PTE format when we have a 64-bit PTE. This makes the code handling between fsl-booke and book3e-64 similar for TLB faults. Additionally this lets us take advantage of the page size encodings and full permissions that the HW PTE defines. Also defined _PMD_PRESENT, _PMD_PRESENT_MASK, and _PMD_BAD, since the 32-bit ppc arch code expects them.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 Aug 2009, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This is an attempt at cleaning up a bit the way we handle execute permission on powerpc. _PAGE_HWEXEC is gone, _PAGE_EXEC is now only defined by CPUs that can do something with it, and the myriad of #ifdef's in the I$/D$ coherency code is reduced to 2 cases that hopefully should cover everything.

The logic on BookE is a little bit different than what it was, though not by much. Since now _PAGE_EXEC will be set by the generic code for executable pages, we need to filter out if they are unclean and recover it. However, I don't expect the code to be more bloated than it already was in that area due to that change.

I could boast that this brings proper enforcing of per-page execute permissions to all BookE and 40x, but in fact we've had that for some time already as a side effect of my previous rework in that area (and I didn't even know it :-). We would only enable execute permission if the page was cache clean, and we would only cache clean it if we took an exec fault. Since we now enforce that the latter only works if VM_EXEC is part of the VMA flags, we de facto already enforce per-page execute permissions... unless I missed something.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 May 2009, 2 commits
-
-
Committed by Benjamin Herrenschmidt

The implementation we just revived has issues, such as using a Kconfig-defined virtual address area in kernel space that nothing actually carves out (and thus will overlap whatever is there), or having some dependencies on being self-contained in a single PTE page, which adds unnecessary constraints on the kernel virtual address space.

This fixes it by using more classic PTE accessors and automatically locating the area for consistent memory, carving an appropriate hole in the kernel virtual address space and leaving only the size of that area as a Kconfig option. It also brings some dma-mask related fixes from the ARM implementation, which was almost identical initially but grew its own fixes.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

Make FIXADDR_TOP a compile-time constant and clean up a couple of definitions relative to the layout of the kernel address space on ppc32. We also print out that layout at boot time for debugging purposes. This is a prerequisite for properly fixing non-coherent DMA allocations.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 24 Mar 2009, 2 commits
-
-
Committed by Benjamin Herrenschmidt

Now that they are almost identical, we can merge some of the definitions related to the PTE format into common files. This creates a new pte-common.h, included by both 32 and 64-bit right after the CPU-specific pte-*.h file, which defines some bits to "default" values if they haven't been defined already, and then provides a generic definition of most of the bit combinations based on these, exposed to the rest of the kernel.

I also moved to the common pgtable.h most of the "small" accessors to the PTE bits and the modification helpers (pte_mk*). The actual accessors remain in their separate files.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

This patch tweaks the way some PTE bit combinations are defined, in such a way that the 32 and 64-bit variants become almost identical; that will make it easier to bring in a new common pte-* file for the new variant of the Book3-E support. The combination of bits defining access to kernel pages is now clearly separated from the combination used by userspace and the core VM. The resulting generated code should remain identical unless I made a mistake.

Note: While at it, I removed a nonsensical statement related to CONFIG_KGDB in ppc_mmu_32.c which could cause kernel mappings to be user accessible when that option is enabled. Probably something that bitrotted.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 20 Mar 2009, 2 commits
-
-
Committed by Benjamin Herrenschmidt

This updates the 32-bit headers to use the same definitions for the RPN shift inside the PTE as 64-bit, and thus updates _PAGE_CHG_MASK to become identical. This does introduce a runtime-visible difference: _PAGE_HASHPTE is now part of _PAGE_CHG_MASK and thus preserved. However this should have no practical effect, as it should have been preserved in the first place; we only got away with not having it there because our PTE access functions preserve it anyway.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Benjamin Herrenschmidt

This patch moves the definition of the PTE format for each MMU type to separate files instead of all in one file. This improves overall maintainability and will make it easier to add new types.

On 64-bit, additionally, I've separated the headers relative to the format of the page table tree (3 vs. 4 levels for 64K vs. 4K pages) from the headers specific to the PTE format for hash-based processors; this will make it easier to add support for Book3 "E" 64-bit implementations. There are still some type-related #ifdef's in the generic headers, which we might remove in the long run, but this patch shouldn't result in any code change, hopefully just definitions being moved around.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 13 Feb 2009, 1 commit
-
-
Committed by Philippe Gerum

Fix _PAGE_CHG_MASK so that pte_modify() does not affect the _PAGE_SPECIAL bit.

Signed-off-by: Philippe Gerum <rpm@xenomai.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
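Why the mask matters, in a minimal sketch (the exact mask contents vary per sub-architecture and are illustrative here):

    /* pte_modify() keeps only the bits listed in _PAGE_CHG_MASK and
     * takes everything else from the new protection, so any flag left
     * out of the mask (here: _PAGE_SPECIAL before the fix) is silently
     * dropped by mprotect() and friends. */
    #define _PAGE_CHG_MASK  (PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
                             _PAGE_HASHPTE | _PAGE_SPECIAL)

    static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
    {
            return __pte((pte_val(pte) & _PAGE_CHG_MASK) |
                         pgprot_val(newprot));
    }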
-
- 11 Feb 2009, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This patch reworks the way we do I and D cache coherency on PowerPC. The "old" way was split in 3 different parts depending on the processor type:

 - Hash with per-page exec support (64-bit and >= POWER4 only) does it at hashing time, by preventing exec on unclean pages and cleaning pages on exec faults.

 - Everything without per-page exec support (32-bit hash, 8xx, and 64-bit < POWER4) does it for all pages going to user space in update_mmu_cache().

 - Embedded with per-page exec support does it from do_page_fault() on exec faults, in a way similar to what the hash code does.

That leads to confusion, and bugs. For example, the method using update_mmu_cache() is racy on SMP, where another processor can see the new PTE and hash it in before we have cleaned the cache, and then blow up trying to execute. This is hard to hit, but I think it has bitten us in the past. Also, it's inefficient for embedded, where we always end up having to do at least one more page fault.

This reworks the whole thing by moving the cache sync into two main call sites, though we keep different behaviours depending on the HW capability. The call sites are set_pte_at(), which is now made out of line, and ptep_set_access_flags(), which joins the former in pgtable.c.

The base idea for Embedded with per-page exec support is that we now do the flush at set_pte_at() time when coming from an exec fault, which allows us to avoid the double-fault problem completely (we can even improve the situation more by implementing TLB preload in update_mmu_cache(), but that's for later). If for some reason we didn't do it there and we try to execute, we'll hit the page fault, which will do a minor fault, which will hit ptep_set_access_flags() to do things like update _PAGE_ACCESSED or _PAGE_DIRTY if needed; we just make it also perform the I/D cache sync for exec faults now. This second path is the catch-all for things that weren't cleaned at set_pte_at() time.

For CPUs without per-page exec support, we always do the sync at set_pte_at(), thus guaranteeing that when the PTE is visible to other processors, the cache is clean.

For the 64-bit hash with per-page exec support case, we keep the old mechanism for now. I'll look into changing it later, once I've reworked a bit how we use _PAGE_EXEC. This is also a first step towards adding _PAGE_EXEC support for embedded platforms.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
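A rough sketch of the resulting shape; the helper names below are hypothetical stand-ins, not the kernel's actual ones:

    /* Hypothetical helpers: the point is that set_pte_at() becomes the
     * single choke point where an executable page that hasn't been
     * cleaned yet gets its D-cache flushed and I-cache invalidated
     * before the PTE is made visible to other CPUs. */
    void set_pte_at(struct mm_struct *mm, unsigned long addr,
                    pte_t *ptep, pte_t pte)
    {
            if (pte_present(pte) && pte_exec(pte) && !page_icache_clean(pte))
                    pte = flush_and_mark_clean(pte);   /* hypothetical */
            __set_pte_at(mm, addr, ptep, pte);
    }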
-
- 21 Dec 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt

Currently, we never set _PAGE_COHERENT in the PTEs; we just OR it in in the hash code based on some CPU feature bit. We also manipulate _PAGE_NO_CACHE and _PAGE_GUARDED by hand in all sorts of places.

This changes the logic so that instead, the PTE now contains _PAGE_COHERENT for all normal RAM pages that have I = 0, on platforms that need it. The hash code clears it if the feature bit is not set. It also adds some clean accessors to set up various valid combinations of access flags, and changes various bits of code to use them instead. This should help having the PTE actually contain the bit combinations that we really want.

I also removed _PAGE_GUARDED from _PAGE_BASE on 44x and instead set it explicitly from the TLB miss. I will ultimately remove it completely, as it appears that it might not be needed after all, but in the meantime having it in the TLB miss makes things a lot easier.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 14 Oct 2008, 1 commit
-
-
Committed by David Gibson

The typesafe version of the powerpc pagetable handling (with USE_STRICT_MM_TYPECHECKS defined) has bitrotted again. This patch makes a bunch of small fixes to get it back to building status. It's still not enabled by default, as gcc still generates worse code with it for some reason.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 25 Sep 2008, 3 commits
-
-
Committed by Becky Bruce

This rearranges a bit of code and adds support for 36-bit physical addressing for configs that use a hashed page table. The 36-bit physical support is not enabled by default on any config; it must be explicitly enabled via the config system.

This patch *only* expands the page table code to accommodate large physical addresses on 32-bit systems and enables the PHYS_64BIT config option for 86xx. It does *not* allow you to boot a board with more than about 3.5GB of RAM; for that, SWIOTLB support is also required (and coming soon).

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Committed by Kumar Gala

Implement _PAGE_SPECIAL and pte_special() for 32-bit powerpc. This bit will be used by the fast get_user_pages() to differentiate PTEs that correspond to a valid struct page from special mappings that don't, such as IO mappings obtained via io_remap_pfn_range(). We currently only implement this on sub-arches that support SMP or will do so in the future (6xx, 44x, FSL-BookE), and not on 8xx and 40x.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
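The accessor pair this adds is tiny; a sketch (the bit value itself is per-platform):

    /* _PAGE_SPECIAL marks a PTE that does not map a normal, refcounted
     * struct page, so fast get_user_pages() can bail out on it without
     * consulting the VMA. */
    static inline int pte_special(pte_t pte)
    {
            return pte_val(pte) & _PAGE_SPECIAL;
    }

    static inline pte_t pte_mkspecial(pte_t pte)
    {
            pte_val(pte) |= _PAGE_SPECIAL;
            return pte;
    }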
-
Committed by Kumar Gala

There are some minor issues with supporting 64-bit PTEs on a 32-bit processor when dealing with SMP:

 * We need to order the stores in set_pte_at to make sure the flag word is set second.
 * Change pte_clear to use pte_update so only the flag word is cleared.
 * Added a WARN_ON to set_pte_at to ensure the pte isn't present for the 64-bit pte/SMP case (to ensure our assumption of this fact).

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Becky Bruce <becky.bruce@freescale.com>
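A sketch of the ordering requirement, along the lines of the 32-bit powerpc code (simplified; treat it as illustrative rather than the exact commit):

    /* The two 32-bit halves of a 64-bit PTE cannot be stored atomically,
     * so the half holding the present/valid flags (the low word at
     * offset 4 in the big-endian layout) is written last, after an
     * eieio barrier, ensuring no other CPU sees a half-written PTE
     * as present. */
    static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
                                    pte_t *ptep, pte_t pte)
    {
    #if defined(CONFIG_PTE_64BIT) && defined(CONFIG_SMP)
            __asm__ __volatile__("\
                    stw%U0%X0 %2,%0\n\
                    eieio\n\
                    stw%U0%X0 %L2,%1"
            : "=m" (*ptep), "=m" (*((unsigned char *)ptep+4))
            : "r" (pte) : "memory");
    #else
            *ptep = pte;
    #endif
    }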
-
- 04 Aug 2008, 1 commit
-
-
Committed by Stephen Rothwell

Moved the include files from include/asm-powerpc to arch/powerpc/include/asm. This is the result of a:

    mkdir arch/powerpc/include/asm
    git mv include/asm-powerpc/* arch/powerpc/include/asm

followed by a few documentation/comment fixups and a couple of places where <asm-powerpc/...> was being used explicitly. Of the latter, only one was outside the arch code, and it is a driver only built for powerpc.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 28 Jul 2008, 1 commit
-
-
Committed by Kumar Gala

The 'powerpc ioremap_prot' change broke 8xx builds:

    include2/asm/pgtable-ppc32.h:555: error: '_PAGE_WRITETHRU' undeclared (first use in this function)
    include2/asm/pgtable-ppc32.h:555: error: (Each undeclared identifier is reported only once
    include2/asm/pgtable-ppc32.h:555: error: for each function it appears in.)

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 27 Jul 2008, 1 commit
-
-
Committed by Kumar Gala

The 'powerpc ioremap_prot' change broke 8xx builds:

    include2/asm/pgtable-ppc32.h:555: error: '_PAGE_WRITETHRU' undeclared (first use in this function)
    include2/asm/pgtable-ppc32.h:555: error: (Each undeclared identifier is reported only once
    include2/asm/pgtable-ppc32.h:555: error: for each function it appears in.)

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 25 Jul 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This adds ioremap_prot() and pte_pgprot() so that one can extract the protection bits from a PTE and use them with ioremap_prot() (in order to support ptrace of VM_IO | VM_PFNMAP, as per Rik's patch). This moves a couple of flag checks around in the ioremap implementations of arch/powerpc.

There's a side effect of allowing non-cacheable and non-guarded mappings on ppc32, which before would always have _PAGE_GUARDED set whenever _PAGE_NO_CACHE is. (Standard ioremap will still set _PAGE_GUARDED, but ioremap_prot will be capable of setting such a non-guarded mapping.)

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Dave Airlie <airlied@linux.ie>
Cc: Hugh Dickins <hugh@veritas.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Jul 2008, 1 commit
-
-
Committed by Kumar Gala

This converts the FSL Book-E PTE access and TLB miss handling to match the recent changes to 44x that introduced support for non-atomic PTE operations in pgtable-ppc32.h and removed the write-back to the PTE from the TLB miss handlers. In addition, the DSI interrupt code no longer tries to fix up write permission; this is left to generic code, and _PAGE_HWWRITE is gone.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 15 Jul 2008, 1 commit
-
-
Committed by Kumar Gala

Because the pte is now 64 bits, the compiler was optimizing the update to always clear the upper 32 bits of the pte. We need to ensure the clr mask is treated as an unsigned long long to get the proper behavior.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
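The failure mode in miniature, as a C-level sketch of pte_update() (the real routine is an atomic asm loop, so take this as illustrative only):

    static inline unsigned long long pte_update(pte_t *p,
                                                unsigned long clr,
                                                unsigned long set)
    {
            unsigned long long old = pte_val(*p);

            /* With a 32-bit unsigned long, ~clr is a 32-bit value that
             * zero-extends to 64 bits, so "old & ~clr" would always wipe
             * PTE bits 32-63; widening clr first keeps them intact. */
            *p = __pte((old & ~(unsigned long long)clr) | set);
            return old;
    }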
-
- 10 Jul 2008, 1 commit
-
-
Committed by Benjamin Herrenschmidt

This is some preliminary work to improve TLB management on SW-loaded TLB powerpc platforms. It introduces support for non-atomic PTE operations in pgtable-ppc32.h and removes the write-back to the PTE from the TLB miss handlers. In addition, the DSI interrupt code no longer tries to fix up write permission; this is left to generic code, and _PAGE_HWWRITE is gone.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 01 Jul 2008, 1 commit
-
-
Committed by Andy Whitcroft

The implementation of huge_ptep_set_wrprotect() directly calls ptep_set_wrprotect() to mark a hugepte write-protected. However this call is not appropriate on ppc64 kernels, as that is a small-page-only implementation. This can lead to the hash not being flushed correctly when a mapping is being converted to COW, allowing processes to continue using the original copy.

Currently huge_ptep_set_wrprotect() unconditionally calls ptep_set_wrprotect(). This is fine on ppc32 kernels, as this call is generic. On 64-bit this is implemented as:

    pte_update(mm, addr, ptep, _PAGE_RW, 0);

On ppc64 this last parameter is the page size and is passed directly on to hpte_need_flush():

    hpte_need_flush(mm, addr, ptep, old, huge);

And this directly affects the page size we pass to flush_hash_page():

    flush_hash_page(vaddr, rpte, psize, ssize, 0);

As this changes the way the hash is calculated, we will flush the wrong pages, potentially leaving live hashes to the original page. Move the definition of huge_ptep_set_wrprotect() to the 32/64-bit specific headers.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
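Read together with the calls above, a 64-bit-specific variant would pass the huge flag through, roughly like this (a simplified sketch derived from the description, not the verbatim patch):

    /* ppc64 header: pass huge=1 so hpte_need_flush() computes the hash
     * slot using the hugepage size rather than the base page size. */
    static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
                                               unsigned long addr,
                                               pte_t *ptep)
    {
            if ((pte_val(*ptep) & _PAGE_RW) == 0)
                    return;         /* already read-only, nothing to flush */
            pte_update(mm, addr, ptep, _PAGE_RW, 1);
    }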
-
- 30 Jun 2008, 1 commit
-
-
Committed by Becky Bruce

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 06 May 2008, 1 commit
-
-
Committed by Stefan Roese

The new 440x6 core used on the AMCC 460EX/GT introduces new storage attribute fields to the TLB2 word. Those are:

    Bit   11   12    13    14    15
          WL1  IL1I  IL1D  IL2I  IL2D

With these bits the cache (L1 and L2) can be configured in a more flexible way, instruction- and data-cache independently. The "old" I and W bits are still available, and setting these old bits will automatically set the new bits too (for backward compatibility).

The current code does not clear these fields, resulting in disabling the cache by chance. This patch now makes sure that these new bits are cleared when the TLB2 word is written.

Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 28 Apr 2008, 1 commit
-
-
Committed by Nick Piggin

s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
            ...
    }

This is fine; however, if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma-based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

And finally, we may rather make this concept usable by any architecture rather than making it s390-only, so implement a new type of pte state for this. Unfortunately the old vma-based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit more ugly than we would like, but the two cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o.

BTW: I haven't put vm_normal_page() into arch code as per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch-specific function: the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them being able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we would have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 17 Apr 2008, 1 commit
-
-
Committed by Kumar Gala

* Removed the defines KERNEL_PGD_PTRS & USER_PGD_PTRS, since they aren't used anywhere.
* Changed the pmd_page macro to use pfn_to_page, so we get proper behavior if ARCH_PFN_OFFSET is set, as well as if we use a different memory model on ppc32.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 07 Apr 2008, 1 commit
-
-
Committed by Ionut Nicu

The code in arch_arm_kprobe was trying to set a breakpoint, which resulted in a page fault because the kernel text pages were write-protected. Disable the write protection when CONFIG_KPROBES is defined.

Signed-off-by: Ionut Nicu <ionut.nicu@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 07 Mar 2008, 1 commit
-
-
Committed by Vitaly Bordug

This makes the swap routines operate correctly on ppc_8xx-based machines. The code has been revalidated on mpc885ads (8M SDRAM) with a recent kernel. Based on a patch from Yuri Tikhonov <yur@emcraft.com> that did the same on the arch/ppc instance. Recent kernel sizes make the swap feature very important on low-memory platforms; those are effectively inoperable without it.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
Signed-off-by: Vitaly Bordug <vitb@kernel.crashing.org>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 07 Dec 2007, 1 commit
-
-
Committed by Kumar Gala

The size of swapper_pg_dir is 8k instead of 4k when using 64-bit PTEs (CONFIG_PTE_64BIT). This was reported by Cedric Hombourger <chombourger@gmail.com>.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 01 Nov 2007, 1 commit
-
-
Committed by Benjamin Herrenschmidt

The 44x family has an interesting "feature", which is a virtually tagged instruction cache (yuck!). So far we haven't dealt with it properly, which means we've been mostly lucky or people didn't report the problems, unless people have been running custom patches in their distro...

This is an attempt at fixing it properly. I chose to do it by setting a global flag whenever we change a PTE that was previously marked executable, and flushing the entire instruction cache upon return to user space when that happens. This is a bit heavy-handed, but it's hard to do more fine-grained flushes, as the icbi instruction on those processors, for some very strange reason (since the cache is virtually mapped), still requires a valid TLB entry for reading in the target address space, which isn't something I want to deal with.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
- 18 Jul 2007, 1 commit
-
-
Committed by Martin Schwidefsky

Nobody is using ptep_test_and_clear_dirty and ptep_clear_flush_dirty. Remove the functions from all architectures.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-