- 30 May 2013, 1 commit

Committed by Cyril Chemparathy
This patch applies to PAGE_MASK, PMD_MASK, and PGDIR_MASK, where forcing unsigned long math truncates the mask at 32 bits. This clearly does bad things on PAE systems. This patch fixes the problem by defining these masks as signed quantities; we then rely on sign extension to do the right thing.

Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Subash Patel <subash.rp@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
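A minimal user-space sketch of the truncation described here (illustrative only, not the kernel's actual mask definitions): a mask computed in 32-bit unsigned math zero-extends and strips the upper bits of an LPAE physical address, while a signed mask sign-extends and preserves them.

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT 12

	int main(void)
	{
		uint64_t phys = 0x100042000ULL;		/* physical address above 4GB */

		/* old behaviour: mask computed in 32-bit unsigned math, zero-extends
		 * to 0x00000000fffff000 when combined with a 64-bit value */
		uint32_t mask_u = ~((uint32_t)((1u << PAGE_SHIFT) - 1));

		/* new behaviour: a signed mask (-4096) sign-extends to
		 * 0xfffffffffffff000, preserving the high address bits */
		int32_t mask_s = ~((1 << PAGE_SHIFT) - 1);

		printf("unsigned mask: 0x%llx\n",
		       (unsigned long long)(phys & mask_u));			/* 0x42000: truncated */
		printf("signed mask:   0x%llx\n",
		       (unsigned long long)(phys & (uint64_t)(int64_t)mask_s));	/* 0x100042000 */
		return 0;
	}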
- 03 October 2012, 1 commit

Committed by David Howells
Convert #include "..." to #include <path/...> in kernel system headers.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Dave Jones <davej@redhat.com>
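The shape of the conversion, on a placeholder header name (illustrative only; the specific include lines changed in this file are not reproduced here):

	/* before: quoted include, resolved relative to the including file */
	#include "example.h"

	/* after: angle-bracket include giving the full path under the include root */
	#include <asm/example.h>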
- 05 May 2012, 1 commit

Committed by Russell King
This patch removes support for ARMv3 CPUs, which haven't worked properly for quite some time (see the FIXME comment in arch/arm/mm/fault.c). The only V3 parts left are the cache model for ARMv3, which is needed for some odd reason by ARM740T CPUs, and being able to build with -march=armv3, which is required for the RiscPC platform due to its bus structure.

Acked-by: Will Deacon <will.deacon@arm.com>
Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 24 March 2012, 1 commit

Committed by Will Deacon
The current user mapping for the vectors page is inserted as a `horrible hack vma' into each task via arch_setup_additional_pages. This causes problems with the MM subsystem and vm_normal_page, as described here:

https://lkml.org/lkml/2012/1/14/55

Following the suggestion from Hugh in the above thread, this patch uses the gate_vma for the vectors user mapping, therefore consolidating the horrible hack VMAs into one.

Acked-and-Tested-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 08 December 2011, 1 commit

Committed by Catalin Marinas
This patch introduces the pgtable-3level*.h files with definitions specific to the LPAE page table format (3 levels of page tables). Each table is 4KB and has 512 64-bit entries. An entry can point to a 40-bit physical address. The young, write and exec software bits share the corresponding hardware bits (negated). Other software bits use spare bits in the PTE.

The patch also changes some variable types from unsigned long or int to pteval_t or pgprot_t.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
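A quick back-of-envelope check of those numbers (a sketch, assuming 4KB pages and the 2 + 9 + 9 + 12 split of a 32-bit virtual address used by the LPAE long-descriptor format):

	#include <stdio.h>

	int main(void)
	{
		int entry_size_log2 = 3;		/* 64-bit descriptors = 8 bytes */
		int table_size_log2 = 12;		/* each table is 4KB */
		int bits_per_level  = table_size_log2 - entry_size_log2;	/* 9 */

		printf("entries per table: %d\n", 1 << bits_per_level);		/* 512 */

		/* 32-bit VA = 2 bits (level 1, only 4 entries used) + 9 + 9 + 12-bit page offset */
		printf("VA bits mapped:    %d\n", 2 + bits_per_level + bits_per_level + 12);
		return 0;
	}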
- 06 October 2011, 1 commit

Committed by Catalin Marinas
This patch moves page table definitions from asm/page.h, asm/pgtable.h and asm/pgtable-hwdef.h into the corresponding *-2level* files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 26 May 2011, 1 commit

Committed by Will Deacon
In commit eb33575c ("[ARM] Double check memmap is actually valid with a memmap has unexpected holes V2"), a new function, memmap_valid_within, was introduced to mmzone.h so that holes in the memmap which pass pfn_valid in SPARSEMEM configurations can be detected and avoided.

The fix to this problem checks that the pfn <-> page linkages are correct by calculating the page for the pfn and then checking that page_to_pfn on that page returns the original pfn. Unfortunately, in SPARSEMEM configurations, this results in reading from the page flags to determine the correct section. Since the memmap here has been freed, junk is read from memory and the check is no longer robust.

In the best case, reading from /proc/pagetypeinfo will give you the wrong answer. In the worst case, you get SEGVs, Kernel OOPses and hung CPUs. Furthermore, ioremap implementations that use pfn_valid to disallow the remapping of normal memory will break.

This patch allows architectures to provide their own pfn_valid function instead of using the default implementation used by sparsemem. The architecture-specific version is aware of the memmap state and will return false when passed a pfn for a freed page within a valid section.

Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
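The fragile round-trip check being replaced looks roughly like this (a simplified kernel-style sketch, not the literal memmap_valid_within() code):

	#include <linux/mm.h>	/* pfn_to_page() / page_to_pfn() */

	/* Sketch only: mimics the pfn <-> page linkage check described above. */
	static inline int pfn_links_back(unsigned long pfn)
	{
		struct page *page = pfn_to_page(pfn);

		/*
		 * In SPARSEMEM, page_to_pfn() reads page->flags to locate the
		 * section. If the memmap backing this page has been freed, the
		 * flags are junk and the comparison below is meaningless.
		 */
		return page_to_pfn(page) == pfn;
	}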
- 27 November 2010, 1 commit

Committed by Russell King
This makes all code dealing with pte values use the same type.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 05 October 2009, 1 commit

Committed by Russell King
Our copy_user_highpage() implementations may require cache maintenance. Ensure that implementations have all necessary details to perform this maintenance.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 12 September 2009, 1 commit

Committed by Russell King
On OMAP platforms, some people want to segment the memory between the kernel and a separate application such that there is a hole in the middle of the memory as far as Linux is concerned. However, they want to be able to mmap() the hole.

This currently causes problems, because update_mmu_cache() thinks that there are valid struct pages for the "hole". Fix this by making pfn_valid() slightly more expensive, by checking whether the PFN is contained within the meminfo array.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
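A rough sketch of the kind of containment check described, walking the registered memory banks (the struct and helper names below are simplified stand-ins, not the actual arch/arm definitions):

	/* Simplified stand-in types; the real ones live in the ARM meminfo code. */
	struct bank { unsigned long start_pfn, nr_pages; };
	struct bank_list { int nr_banks; struct bank bank[8]; };

	static int pfn_in_registered_memory(unsigned long pfn, const struct bank_list *mi)
	{
		int i;

		for (i = 0; i < mi->nr_banks; i++) {
			if (pfn >= mi->bank[i].start_pfn &&
			    pfn <  mi->bank[i].start_pfn + mi->bank[i].nr_pages)
				return 1;	/* pfn falls inside a real bank */
		}
		return 0;			/* pfn is in a hole: not valid */
	}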
- 25 June 2009, 1 commit

Committed by Linus Walleij
Update the link script for ARM to use PAGE_SIZE instead of hard-coded 4096. Also, the old RODATA macro is deprecated in favour of the RO_DATA(PAGE_SIZE) macro. As a consequence, PAGE_SIZE was changed from (1UL << PAGE_SHIFT) to (_AC(1,UL) << PAGE_SHIFT) because the linker does not understand the "UL" suffix on numeric constants.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
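The _AC() helper that makes this possible expands its argument differently for C and for assembler/linker contexts; it looks roughly like this (see include/linux/const.h for the real definition):

	#ifdef __ASSEMBLY__
	#define _AC(X, Y)	X		/* assembler/linker: drop the C suffix */
	#else
	#define __AC(X, Y)	(X##Y)
	#define _AC(X, Y)	__AC(X, Y)	/* C: paste the suffix back on, e.g. 1UL */
	#endif

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(_AC(1, UL) << PAGE_SHIFT)	/* usable from both C and vmlinux.lds */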
- 12 June 2009, 1 commit

Committed by Arnd Bergmann
The current asm-generic/page.h only contains the get_order function, and asm-generic/uaccess.h only implements unaligned accesses. This renames the files to getorder.h and uaccess-unaligned.h to make room for new page.h and uaccess.h files that will be usable by all simple (e.g. nommu) architectures.

Signed-off-by: Remis Lima Baima <remis.developer@googlemail.com>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
- 03 June 2009, 1 commit

Committed by Martin Fuzzey
Define ARCH_KMALLOC_MINALIGN in asm/cache.h. At the request of Russell, also move ARCH_SLAB_MINALIGN to this file.

Signed-off-by: Martin Fuzzey <mfuzzey@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
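The definition in question ties kmalloc() alignment to the L1 cache line size so DMA'd buffers never share a cache line with unrelated data; a sketch of the shape of the result (comments paraphrased, not the verbatim header):

	/* asm/cache.h (sketch; ARCH_SLAB_MINALIGN moves here as well) */

	/*
	 * Memory returned by kmalloc() may be used for DMA, so all such
	 * allocations must be cache-line aligned; otherwise unrelated data
	 * could be pulled into (and dirtied in) the same line during a transfer.
	 */
	#define ARCH_KMALLOC_MINALIGN	L1_CACHE_BYTES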
- 25 March 2009, 1 commit

Committed by Paulius Zaleckas
Adds support for Faraday FA526 core. This core is used at least by:
- Cortina Systems Gemini and Centroid family
- Cavium Networks ECONA family
- Grain Media GM8120
- Pixelplus ImageARM
- Prolific PL-1029
- Faraday IP evaluation boards

v2:
- move TLB_BTB to separate patch
- update copyrights

Signed-off-by: Paulius Zaleckas <paulius.zaleckas@teltonika.lt>
- 28 November 2008, 2 commits

Committed by Russell King
For similar reasons as copy_user_page(), we want to avoid the additional kmap_atomic if it's unnecessary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Committed by Russell King
We used to override the copy_user_page() function. However, this is not only inefficient, it also causes additional complexity for highmem support, since we convert from a struct page to a kernel direct-mapped address and back to a struct page again.

Moreover, with highmem support, we end up pointlessly setting up kmap entries for pages which we're going to remap. So, push the kmapping down into the copypage implementation files where it's required.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 27 November 2008, 1 commit

Committed by Russell King
As suggested by Andrew Morton, remove memzero() - it's not supported on other architectures, so use of it is a potential build-breaking bug. Since the compiler optimizes memset(x,0,n) to __memzero() perfectly well, we don't miss out on the underlying benefits of memzero().

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
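For callers the replacement is mechanical; a minimal sketch (hypothetical buffer-clearing helper, not code from this patch):

	#include <string.h>

	static void clear_buffer(void *buf, size_t len)
	{
		/* portable replacement for the removed ARM-only memzero(buf, len);
		 * the constant-zero memset() is turned into __memzero() on ARM anyway */
		memset(buf, 0, len);
	}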
- 01 October 2008, 1 commit

Committed by Russell King
Add support for detecting non-executable stack binaries, and adjust permissions to prevent execution from data and stack areas. Also, ensure that READ_IMPLIES_EXEC is enabled for older CPUs where that is true, and for any executable-stack binary.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 03 August 2008, 1 commit

Committed by Russell King
Move platform independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 25 July 2008, 1 commit

Committed by Andrea Righi
On 32-bit architectures PAGE_ALIGN() truncates 64-bit values to the 32-bit boundary. For example:

	u64 val = PAGE_ALIGN(size);

always returns a value < 4GB even if size is greater than 4GB.

The problem resides in the PAGE_MASK definition (from include/asm-x86/page.h for example):

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
	#define PAGE_MASK	(~(PAGE_SIZE-1))
	...
	#define PAGE_ALIGN(addr)	(((addr)+PAGE_SIZE-1)&PAGE_MASK)

The "~" is performed on a 32-bit value, so everything in "and" with PAGE_MASK greater than 4GB will be truncated to the 32-bit boundary. Using the ALIGN() macro seems to be the right way, because it uses typeof(addr) for the mask.

Also move the PAGE_ALIGN() definitions out of include/asm-*/page.h into include/linux/mm.h.

See also lkml discussion: http://lkml.org/lkml/2008/6/11/237

[akpm@linux-foundation.org: fix drivers/media/video/uvc/uvc_queue.c]
[akpm@linux-foundation.org: fix v850]
[akpm@linux-foundation.org: fix powerpc]
[akpm@linux-foundation.org: fix arm]
[akpm@linux-foundation.org: fix mips]
[akpm@linux-foundation.org: fix drivers/media/video/pvrusb2/pvrusb2-dvb.c]
[akpm@linux-foundation.org: fix drivers/mtd/maps/uclinux.c]
[akpm@linux-foundation.org: fix powerpc]
Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
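A small user-space demonstration of the difference (a sketch: the 32-bit type below merely simulates a 32-bit kernel's PAGE_MASK, and typeof is the GCC extension the ALIGN() macro relies on):

	#include <stdint.h>
	#include <stdio.h>

	/* Simulate a 32-bit kernel's PAGE_SIZE/PAGE_MASK with a 32-bit type. */
	#define PAGE_SIZE_32		((uint32_t)4096)
	#define OLD_PAGE_ALIGN(addr)	(((addr) + PAGE_SIZE_32 - 1) & ~(PAGE_SIZE_32 - 1))

	/* ALIGN()-style macro: the mask is computed in typeof(x), so a u64
	 * argument gets a full-width mask. */
	#define ALIGN_T(x, a)		(((x) + ((typeof(x))(a) - 1)) & ~((typeof(x))(a) - 1))

	int main(void)
	{
		uint64_t size = 0x140000000ULL;		/* 5GB */

		printf("old PAGE_ALIGN: 0x%llx\n",
		       (unsigned long long)OLD_PAGE_ALIGN(size));	/* 0x40000000: truncated */
		printf("ALIGN():        0x%llx\n",
		       (unsigned long long)ALIGN_T(size, 4096));	/* 0x140000000 */
		return 0;
	}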
- 23 May 2008, 1 commit

Committed by Greg Ungerer
The non-MMU case also needs the type definition of pgtable_t. So move it out of a CONFIG_MMU conditional section.

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 29 April 2008, 1 commit

Committed by Lennert Buytenhek
This patch implements a set of Feroceon-specific {copy,clear}_user_page() routines that perform more optimally than the generic implementations. This also deals with write-allocate caches (Feroceon can run the L1 D-cache in WA mode), which would otherwise prevent Linux from booting.

[nico: optimized the code even further]

Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>
Tested-by: Sylver Bruneau <sylver.bruneau@googlemail.com>
Tested-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Nicolas Pitre <nico@marvell.com>
- 09 February 2008, 1 commit

Committed by Martin Schwidefsky
Background: I've implemented 1K/2K page tables for s390. These sub-page page tables are required to properly support the s390 virtualization instruction with KVM. The SIE instruction requires that the page tables have 256 page table entries (pte) followed by 256 page status table entries (pgste). The pgstes are only required if the process is using the SIE instruction. The pgstes are updated by the hardware and by the hypervisor for a number of reasons, one of them is dirty and reference bit tracking. To avoid wasting memory, the standard pte table allocation should return 1K/2K (31/64 bit) and 2K/4K if the process is using SIE.

Problem: Page size on s390 is 4K, page table size is 1K or 2K. That means the s390 version of pte_alloc_one cannot return a pointer to a struct page. Trouble is that with the CONFIG_HIGHPTE feature on x86 pte_alloc_one cannot return a pointer to a pte either, since that would require more than 32 bits for the return value of pte_alloc_one (and the pte * would not be accessible since it's not kmapped).

Solution: The only solution I found to this dilemma is a new typedef: a pgtable_t. For s390 pgtable_t will be a (pte *) - to be introduced with a later patch. For everybody else it will be a (struct page *). The additional problem with the initialization of the ptl lock and the NR_PAGETABLE accounting is solved with a constructor pgtable_page_ctor and a destructor pgtable_page_dtor. The page table allocation and free functions need to call these two whenever a page table page is allocated or freed. pmd_populate will get a pgtable_t instead of a struct page pointer. To get the pgtable_t back from a pmd entry that has been installed with pmd_populate, a new function pmd_pgtable is added. It replaces the pmd_page call in free_pte_range and apply_to_pte_range.

Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
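The shape of the new type and its helpers, as described above (a much-simplified sketch; the real definitions are per-architecture, and the s390 pte-pointer variant arrived in a later patch):

	/* Most architectures: a page table page is referred to by its struct page. */
	typedef struct page *pgtable_t;

	/* s390 (later patch) will instead use a pointer into the table:
	 *     typedef pte_t *pgtable_t;
	 *
	 * The per-arch allocation paths then look roughly like
	 *     pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
	 *     void      pte_free(struct mm_struct *mm, pgtable_t pte);
	 * and must call pgtable_page_ctor()/pgtable_page_dtor() around each page
	 * table page so the ptl lock and NR_PAGETABLE accounting stay correct.
	 *
	 * pmd_populate() now takes a pgtable_t, and pmd_pgtable(pmd) recovers it
	 * from an installed pmd entry (replacing pmd_page() in the generic code).
	 */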
- 08 February 2008, 1 commit

Committed by Kirill A. Shutemov
asm/elf.h, asm/page.h and asm/user.h don't export to userspace now, so we can drop #ifdef __KERNEL__ for them.

[k.shutemov@gmail.com: remove #ifdef __KERNEL_]
Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com>
Reviewed-by: David Woodhouse <dwmw2@infradead.org>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Kirill A. Shutemov <k.shutemov@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
- 21 September 2006, 1 commit

Committed by David Woodhouse
Sanitise the ARM headers exported to userspace.

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
- 20 September 2006, 1 commit

Committed by Russell King
Move top_pmd into arch/arm/mm/mm.h - nothing outside arch/arm/mm references it. Move the repeated definition of TOP_PTE into mm/mm.h, as well as a few function prototypes.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 18 September 2006, 1 commit

Committed by Ralph Siemsen
Move kernel-only #includes into #ifdef __KERNEL__, so that the headers_install target can be used on ARM.

Signed-off-by: Ralph Siemsen <ralphs@netwinder.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
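The general pattern applied (a hypothetical header for illustration; the specific includes guarded in this file are not reproduced here):

	#ifndef _EXAMPLE_HEADER_H		/* hypothetical header */
	#define _EXAMPLE_HEADER_H

	#define SOMETHING_USERSPACE_NEEDS	42	/* exported part stays visible */

	#ifdef __KERNEL__
	/* kernel-only material is stripped out by 'make headers_install' */
	#include <linux/kernel.h>
	#endif /* __KERNEL__ */

	#endif /* _EXAMPLE_HEADER_H */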
- 29 June 2006, 1 commit

Committed by Russell King
Majorly based on Hyok Choi's patches, this fixes up the asm-arm header files for MMU-less systems. Over and above Hyok's patches:

- nommu.h merged into mmu.h (it's only a structure)
- nommu_context.h is essentially the same as mmu_context.h, but without the MM switching code, so there's no point having separate files.

Also, in memory.h, there's no point #ifndef'ing PHYS_OFFSET and END_MEM - both CONFIG_DRAM_BASE and CONFIG_DRAM_SIZE will always be set by the configuration scripts.

Other files have minor formatting changes, but are essentially the same. Hyok's original patches were signed off thusly:

Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 26 April 2006, 1 commit

Committed by David Woodhouse
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
- 29 March 2006, 1 commit

Committed by Lennert Buytenhek
Patch from Lennert Buytenhek

This patch adds support for the new XScale v3 core. This is an ARMv5 ISA core with the following additions:

- L2 cache
- I/O coherency support (on select chipsets)
- Low-Locality Reference cache attributes (replaces mini-cache)
- Supersections (v6 compatible)
- 36-bit addressing (v6 compatible)
- Single instruction cache line clean/invalidate
- LRU cache replacement (vs round-robin)

I attempted to merge the XSC3 support into proc-xscale.S, but XSC3 cores have separate errata and have to handle things like L2, so it is simpler to keep it separate.

L2 cache support is currently a build option because the L2 enable bit must be set before we enable the MMU and there is no easy way to capture command line parameters at this point.

There are still optimizations that can be done, such as using LLR for copypage (in theory using the existing mini-cache code), but those can be addressed down the road.

Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
- 15 January 2006, 1 commit

Committed by Nicolas Pitre
Patch from Nicolas Pitre

Although ARM is still using 32-bit pointers, version 5 and later versions of the ARM architecture introduced the ldrd and strd instructions to move 64-bit data, which must be 64-bit aligned in memory, and the EABI includes new constraints on structure data alignment to allow the compiler to use those instructions. This means that any slab allocation must start on a 64-bit boundary, which is not equivalent to BYTES_PER_WORD, especially on those architecture versions that implement the ldrd/strd instructions.

Overriding the default alignment disables some slab debug features. If those debug features are really needed then the kernel will have to be compiled for version 4 of the ARM architecture.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
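The override described boils down to a conditional definition roughly of this shape (a sketch; the exact conditions live in the header itself):

	/*
	 * EABI on ARMv5+ can use ldrd/strd, which require 64-bit alignment,
	 * so slab objects must be at least 8-byte aligned in that configuration.
	 */
	#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5)
	#define ARCH_SLAB_MINALIGN 8
	#endif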
- 05 September 2005, 1 commit

Committed by Stephen Rothwell
Someone mentioned that almost all the architectures used basically the same implementation of get_order. This patch consolidates them into asm-generic/page.h and includes that in the appropriate places. The exceptions are ia64 and ppc, which have their own (presumably optimised) versions.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
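The consolidated helper computes the smallest order such that PAGE_SIZE << order covers a given size; the classic asm-generic version looks roughly like this:

	/* Returns order such that (PAGE_SIZE << order) >= size, for size >= 1.
	 * PAGE_SHIFT comes from asm/page.h. */
	static inline int get_order(unsigned long size)
	{
		int order;

		size = (size - 1) >> (PAGE_SHIFT - 1);
		order = -1;
		do {
			size >>= 1;
			order++;
		} while (size);

		return order;
	}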
- 10 May 2005, 2 commits

Committed by Russell King
Move the locking for copy_user_page() and clear_user_page() into the implementations which require locking. For simple memcpy/memset based implementations, the locking is extra overhead which is not necessary, and prevents preemption from occurring.

Signed-off-by: Russell King <rmk@arm.linux.org.uk>
Committed by Russell King
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
- 17 April 2005, 1 commit

Committed by Linus Torvalds
Initial git repository build. I'm not bothering with the full history, even though we have it. We can create a separate "historical" git archive of that later if we want to, and in the meantime it's about 3.2GB when imported into git - space that would just make the early git days unnecessarily complicated, when we don't have a lot of good infrastructure for it.

Let it rip!