- 03 October 2009, 1 commit
-
-
By Greg Ungerer

Commit 1522ac3e ("Fix virtual to physical translation macro corner cases") breaks the end-of-memory check in valid_phys_addr_range(). The modified expression results in the apparent /dev/mem size being 2 bytes smaller than it actually is. This patch reworks the expression to correctly check the address, while maintaining use of a valid address to __pa().

Signed-off-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
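For context, a minimal sketch of the kind of boundary check being fixed, assuming the usual ARM symbols (PHYS_OFFSET, high_memory); the in-tree expression may differ in detail:

```c
/* Sketch of valid_phys_addr_range(): __pa() is only well defined for
 * addresses inside the linear map, so translate the last valid byte
 * (high_memory - 1) and add the 1 back, rather than translating
 * high_memory itself or losing bytes at the end of memory. */
int valid_phys_addr_range(unsigned long addr, size_t size)
{
	if (addr < PHYS_OFFSET)
		return 0;
	if (addr + size > __pa(high_memory - 1) + 1)
		return 0;
	return 1;
}
```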
-
- 29 September 2009, 2 commits
-
-
By Russell King

We suffer an unfortunate combination of "features" which makes highmem support on platforms without hardware TLB maintenance broadcast difficult:

- we need kmap_high_get() support for DMA cache coherence
- this requires kmap_high() to take a spinlock with IRQs disabled
- kmap_high() occasionally calls flush_all_zero_pkmaps() to clear out old mappings
- flush_all_zero_pkmaps() calls flush_tlb_kernel_range(), which on s/w IPI'd systems eventually calls smp_call_function_many()
- smp_call_function_many() must not be called with IRQs disabled:

    WARNING: at kernel/smp.c:380 smp_call_function_many+0xc4/0x240()
    Modules linked in:
    Backtrace:
    [<c00306f0>] (dump_backtrace+0x0/0x108) from [<c0286e6c>] (dump_stack+0x18/0x1c)
     r6:c007cd18 r5:c02ff228 r4:0000017c
    [<c0286e54>] (dump_stack+0x0/0x1c) from [<c0053e08>] (warn_slowpath_common+0x50/0x80)
    [<c0053db8>] (warn_slowpath_common+0x0/0x80) from [<c0053e50>] (warn_slowpath_null+0x18/0x1c)
     r7:00000003 r6:00000001 r5:c1ff4000 r4:c035fa34
    [<c0053e38>] (warn_slowpath_null+0x0/0x1c) from [<c007cd18>] (smp_call_function_many+0xc4/0x240)
    [<c007cc54>] (smp_call_function_many+0x0/0x240) from [<c007cec0>] (smp_call_function+0x2c/0x38)
    [<c007ce94>] (smp_call_function+0x0/0x38) from [<c005980c>] (on_each_cpu+0x1c/0x38)
    [<c00597f0>] (on_each_cpu+0x0/0x38) from [<c0031788>] (flush_tlb_kernel_range+0x50/0x58)
     r6:00000001 r5:00000800 r4:c05f3590
    [<c0031738>] (flush_tlb_kernel_range+0x0/0x58) from [<c009c600>] (flush_all_zero_pkmaps+0xc0/0xe8)
    [<c009c540>] (flush_all_zero_pkmaps+0x0/0xe8) from [<c009c6b4>] (kmap_high+0x8c/0x1e0)
    [<c009c628>] (kmap_high+0x0/0x1e0) from [<c00364a8>] (kmap+0x44/0x5c)
    [<c0036464>] (kmap+0x0/0x5c) from [<c0109dfc>] (cramfs_readpage+0x3c/0x194)
    [<c0109dc0>] (cramfs_readpage+0x0/0x194) from [<c0090c14>] (__do_page_cache_readahead+0x1f0/0x290)
    [<c0090a24>] (__do_page_cache_readahead+0x0/0x290) from [<c0090ce4>] (ra_submit+0x30/0x38)
    [<c0090cb4>] (ra_submit+0x0/0x38) from [<c0089384>] (filemap_fault+0x3dc/0x438)
     r4:c1819988
    [<c0088fa8>] (filemap_fault+0x0/0x438) from [<c009d21c>] (__do_fault+0x58/0x43c)
    [<c009d1c4>] (__do_fault+0x0/0x43c) from [<c009e8cc>] (handle_mm_fault+0x104/0x318)
    [<c009e7c8>] (handle_mm_fault+0x0/0x318) from [<c0033c98>] (do_page_fault+0x188/0x1e4)
    [<c0033b10>] (do_page_fault+0x0/0x1e4) from [<c0033ddc>] (do_translation_fault+0x7c/0x84)
    [<c0033d60>] (do_translation_fault+0x0/0x84) from [<c002b474>] (do_DataAbort+0x40/0xa4)
     r8:c1ff5e20 r7:c0340120 r6:00000805 r5:c1ff5e54 r4:c03400d0
    [<c002b434>] (do_DataAbort+0x0/0xa4) from [<c002bcac>] (__dabt_svc+0x4c/0x60)
    ...

So we disable highmem support on these systems.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 24 September 2009, 1 commit
-
-
By Rusty Russell

Makes code future-proof against the impending change to mm->cpu_vm_mask. It's also a chance to use the new cpumask_* ops which take a pointer (the older ones are deprecated, but there's no hurry for arch code).

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
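For illustration, the shape of the conversion (not the literal diff): direct accesses to mm->cpu_vm_mask become pointer-based cpumask_* calls through the mm_cpumask() accessor, which keeps working if the field's representation changes.

```c
#include <linux/mm_types.h>
#include <linux/cpumask.h>

/* Old, deprecated style operated on the cpumask_t field directly:
 *
 *	cpu_clear(cpu, mm->cpu_vm_mask);
 *
 * New style goes through the mm_cpumask() accessor and the
 * pointer-taking cpumask_* operations:
 */
static void example_leave_mm(struct mm_struct *mm, unsigned int cpu)
{
	if (cpumask_test_cpu(cpu, mm_cpumask(mm)))
		cpumask_clear_cpu(cpu, mm_cpumask(mm));
}
```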
-
- 22 September 2009, 1 commit
-
-
By Geert Uytterhoeven

Commit 96177299 ("Drop free_pages()") modified nr_free_pages() to return 'unsigned long' instead of 'unsigned int'. This made the casts to 'unsigned long' in most callers superfluous, so remove them.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Acked-by: David S. Miller <davem@davemloft.net>
Acked-by: Kyle McMartin <kyle@mcmartin.ca>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Haavard Skinnemoen <hskinnemoen@atmel.com>
Cc: Mikael Starvik <starvik@axis.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Hirokazu Takata <takata@linux-m32r.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: David Howells <dhowells@redhat.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Chris Zankel <zankel@tensilica.com>
Cc: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
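A representative (hypothetical) call site of the kind the commit cleans up:

```c
/* Before: nr_free_pages() returned unsigned int, so callers cast up
 * to match the %lu format specifier. */
printk(KERN_INFO "free pages: %lu\n", (unsigned long)nr_free_pages());

/* After: the return type is already unsigned long; the cast goes. */
printk(KERN_INFO "free pages: %lu\n", nr_free_pages());
```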
-
- 20 September 2009, 5 commits
-
-
By Russell King

ARMv6 introduces non-executable mappings, which can cause prefetch aborts when an attempt is made to execute from such a mapping. Currently, this causes us to loop in the page fault handler since we don't correctly check for proper permissions. Fix this by checking that VMAs have VM_EXEC set for prefetch aborts.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
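A minimal sketch of the check being added; the real logic lives in arch/arm/mm/fault.c and is folded into the fault handler's permission test rather than being a standalone helper like this:

```c
#include <linux/mm.h>

/* For a prefetch abort, executing from a mapping without VM_EXEC must
 * be reported as an access error instead of being retried forever. */
static int prefetch_abort_access_error(struct vm_area_struct *vma)
{
	return !(vma->vm_flags & VM_EXEC);
}
```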
-
By Russell King

Since we get notified separately about prefetch aborts, which may be permission faults, we need to check for appropriate access permissions when handling a fault. This patch prepares us for doing this by separating out the access error checking.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
By Russell King

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 September 2009, 3 commits
-
-
By Linus Walleij

This adds the TCM interface to Linux. When active, it will detect and report TCM memories and sizes early in boot if present, introduce generic TCM memory handling, provide a generic TCM memory pool, and select TCM memory for the U300 platform. See Documentation/arm/tcm.txt for documentation.

Signed-off-by: Linus Walleij <linus.walleij@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
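A hedged usage sketch based on the API that Documentation/arm/tcm.txt describes (tcm_alloc()/tcm_free() plus the __tcmdata/__tcmfunc section annotations; consult that document for the authoritative names):

```c
#include <linux/init.h>
#include <linux/errno.h>
#include <asm/tcm.h>

/* Place a variable and a hot function into TCM at link time. */
static int tcm_hits __tcmdata;

int __tcmfunc hot_path(int x)
{
	tcm_hits++;
	return x * 2;
}

/* Or carve a buffer out of the generic TCM pool at runtime. */
static void *tcm_buf;

static int __init tcm_example_init(void)
{
	tcm_buf = tcm_alloc(0x200);	/* 512 bytes of TCM */
	return tcm_buf ? 0 : -ENOMEM;
}
```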
-
By Kirill A. Shutemov

Currently the kernel believes that all ARM CPUs have L1_CACHE_SHIFT == 5. That's not true, at least for CPUs based on Cortex-A8. The list of CPUs with a cache line size != 32 bytes should be expanded later.

Signed-off-by: Kirill A. Shutemov <kirill@shutemov.name>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
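Conceptually, the cache line size becomes a build-time parameter instead of a hard-coded shift of 5; a simplified sketch (the Kconfig symbol name here is an assumption, not necessarily the one the commit introduces):

```c
/* Cortex-A8 class CPUs have 64-byte L1 lines; older parts use 32. */
#ifndef CONFIG_ARM_L1_CACHE_SHIFT	/* assumed Kconfig symbol */
#define CONFIG_ARM_L1_CACHE_SHIFT 5
#endif

#define L1_CACHE_SHIFT	CONFIG_ARM_L1_CACHE_SHIFT
#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
```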
-
By Nicolas Pitre

Due to problems at cam.org, my nico@cam.org email address is no longer valid. From now on, nico@fluxnic.net should be used instead.

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 12 September 2009, 1 commit
-
-
By Russell King

On OMAP platforms, some people want to segment up the memory between the kernel and a separate application, such that there is a hole in the middle of the memory as far as Linux is concerned. However, they want to be able to mmap() the hole. This currently causes problems, because update_mmu_cache() thinks that there are valid struct pages for the "hole". Fix this by making pfn_valid() slightly more expensive, by checking whether the PFN is contained within the meminfo array.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Tested-by: Khasim Syed Mohammed <khasim@ti.com>
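A sketch of the described check, assuming the meminfo/bank helpers of that era; shown as a linear scan for clarity, though the in-tree version may search the banks differently:

```c
#include <asm/setup.h>

/* A pfn is valid only if it falls inside one of the memory banks
 * registered in the meminfo array; holes between banks therefore no
 * longer appear to have valid struct pages. */
int pfn_valid(unsigned long pfn)
{
	struct meminfo *mi = &meminfo;
	unsigned int i;

	for (i = 0; i < mi->nr_banks; i++)
		if (pfn >= bank_pfn_start(&mi->bank[i]) &&
		    pfn < bank_pfn_end(&mi->bank[i]))
			return 1;
	return 0;
}
```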
-
- 05 September 2009, 1 commit
-
-
By Nicolas Pitre

Let's suppose a highmem page is kmap'd with kmap(). A pkmap entry is used, the page mapped to it, and the virtual cache is dirtied. Then kunmap() is used, which does virtually nothing except for decrementing a usage count.

Then, let's suppose the _same_ page gets mapped using kmap_atomic(). It is therefore mapped onto a fixmap entry instead, which has a different virtual address unaware of the dirty cache data for that page sitting in the pkmap mapping.

Fortunately it is easy to know if a pkmap mapping still exists for that page and use it directly with kmap_atomic(), thanks to kmap_high_get(). And actual testing with a printk in the added code path shows that this condition is met *extremely* frequently. Seems that we've been quite lucky that things have worked so well with highmem so far.

Cc: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
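A sketch of the fix's shape in the ARM kmap_atomic() path (simplified; the fixmap fallback is elided into a hypothetical helper):

```c
void *kmap_atomic(struct page *page, enum km_type type)
{
	void *kmap;

	pagefault_disable();
	if (!PageHighMem(page))
		return page_address(page);

	/* If a kmap() pkmap mapping still exists for this page, reuse
	 * it so we keep hitting the same (possibly dirty) cache lines
	 * instead of creating an aliasing fixmap mapping. */
	kmap = kmap_high_get(page);
	if (kmap)
		return kmap;

	return kmap_atomic_fixmap(page, type);	/* hypothetical helper */
}
```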
-
- 02 September 2009, 1 commit
-
-
By Nicolas Pitre

In xdr_partial_copy_from_skb() there is this sequence:

    kaddr = kmap_atomic(*ppage, KM_SKB_SUNRPC_DATA);
    [...]
    flush_dcache_page(*ppage);
    kunmap_atomic(kaddr, KM_SKB_SUNRPC_DATA);

Mixing flush_dcache_page() and kmap_atomic() is a bit odd, especially since kunmap_atomic() must deal with cache issues already. OTOH the non-highmem case must use flush_dcache_page(), as kunmap_atomic() becomes a no-op with no cache maintenance.

The problem is that with highmem the implementation of kmap_atomic() doesn't set page->virtual, and page_address(page) returns 0 in that case. Here flush_dcache_page() calls __flush_dcache_page(), which calls __cpuc_flush_dcache_page(page_address(page)), resulting in a kernel oops.

None of the kmap_atomic() implementations uses set_page_address(), hence we can assume page_address() is always expected to return 0 in that case. Let's conditionally call __cpuc_flush_dcache_page() only when the page address is non-zero, and perform that test only when highmem is configured.

Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
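A minimal sketch of the guard being described, simplified from the shape of arch/arm/mm/flush.c (aliasing-cache handling omitted):

```c
static void __flush_dcache_page(struct address_space *mapping,
				struct page *page)
{
	/* With highmem, a page mapped only via kmap_atomic() has no
	 * page_address(); skip the flush instead of passing NULL to
	 * __cpuc_flush_dcache_page(). kunmap_atomic() performs the
	 * cache maintenance for such pages. */
#ifdef CONFIG_HIGHMEM
	if (page_address(page))
#endif
		__cpuc_flush_dcache_page(page_address(page));
}
```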
-
- 18 August 2009, 1 commit
-
-
By Russell King

Add the ARM implementation of highpte, which allows PTE tables to be placed in highmem. Unfortunately, we do not offer highpte support when support for L2 cache is enabled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 15 August 2009, 1 commit
-
-
By Russell King

Currently, highmem is selectable, and you can request an increased vmalloc area. However, none of this has any effect on the memory layout since a patch in the highmem series was accidentally dropped. Moreover, even if you did want highmem, all memory would still be registered as lowmem, possibly resulting in overflow of the available virtual mapping space.

The highmem boundary is determined by the highest allowed beginning of the vmalloc area, which depends on its configurable minimum size (see commit 60296c71 for details on this). We should create mappings and initialize bootmem only for low memory, while the zone allocator must still be told about highmem.

Currently, memory nodes which are completely located in high memory are not supported. This is not a huge limitation since systems relying on highmem support are unlikely to have discontiguous memory with large holes.

[ A similar patch was meant to be merged before commit 5f0fbf9e and be available in Linux v2.6.30, however some git rebase screw-up of mine dropped the first commit of the series, and that goofage escaped testing somehow as well. -- Nico ]

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Reviewed-by: Nicolas Pitre <nico@marvell.com>
-
- 24 July 2009, 7 commits
-
-
By Catalin Marinas

When building with !MMU, task_struct is not defined. Just include the relevant file.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

ARMv7-R profile CPUs do not have these registers.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

This is needed for the struct meminfo definition.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

The patch adds the necessary ifdefs around functions that only make sense when the MMU is enabled.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

This patch adds the ARM/Thumb-2 unified support to the arch/arm/mm/* files.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 11 July 2009, 1 commit
-
-
By Russell King

These old symbols are meaningless now that we have memory type support implemented. The entire memory type field needs to be modified rather than just a few bits twiddled.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 05 July 2009, 1 commit
-
-
By Russell King

Now required for libsas:

    Kernel: arch/arm/boot/Image is ready
    Kernel: arch/arm/boot/zImage is ready
      Building modules, stage 2.
      MODPOST 1096 modules
    ERROR: "xscale_flush_kern_dcache_page" [drivers/scsi/libsas/libsas.ko] undefined!

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 03 July 2009, 1 commit
-
-
By Alessandro Rubini

Signed-off-by: Alessandro Rubini <rubini@unipv.it>
Acked-by: Andrea Gallo <andrea.gallo@stericsson.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 22 June 2009, 1 commit
-
-
By Linus Torvalds

This allows the callers to pass down the full set of FAULT_FLAG_xyz flags to handle_mm_fault(). All callers have been (mechanically) converted to the new calling convention; there's almost certainly room for architectures to clean up their code and then add FAULT_FLAG_RETRY when that support is added.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
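An illustrative conversion of one caller, of the kind done mechanically across architectures (simplified from an arch fault handler; FSR_WRITE stands in for whatever write-fault test the architecture uses):

```c
/* Before: the last argument was a bare 0/1 write_access flag. */
fault = handle_mm_fault(mm, vma, addr, fsr & FSR_WRITE ? 1 : 0);

/* After: the same information travels as a FAULT_FLAG_xyz bitmask,
 * leaving room for flags such as FAULT_FLAG_RETRY later. */
fault = handle_mm_fault(mm, vma, addr,
			fsr & FSR_WRITE ? FAULT_FLAG_WRITE : 0);
```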
-
- 19 June 2009, 1 commit
-
-
By George G. Davis

From: Min Zhang <mzhang@mvista.com>

Add alignment fault fixup support for 32-bit Thumb-2 LDM, LDRD, POP, PUSH, STM and STRD instructions. Alignment fault fixup support for the remaining 32-bit Thumb-2 load/store instruction cases is not included, since ARMv6 and later processors include hardware support for loads and stores of unaligned words and halfwords.

Signed-off-by: Min Zhang <mzhang@mvista.com>
Signed-off-by: George G. Davis <gdavis@mvista.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 16 June 2009, 1 commit
-
-
Signed-off-by: Tomáš Čech <sleep_walker@suse.cz>
Acked-by: Marek Vasut <marek.vasut@gmail.com>
Signed-off-by: Eric Miao <eric.miao@marvell.com>
-
- 03 June 2009, 1 commit
-
-
By Russell King

Currently, whenever an erratum workaround is enabled, it will be applied whether or not the erratum is relevant for the CPU. This patch changes this: we check the variant and revision fields in the main ID register to determine which errata to apply.

We also avoid re-applying erratum 460075 if it has already been applied. Applying this fix in non-secure mode results in the kernel failing to boot (or even do anything).

This fixes booting on some ARMv7-based platforms which otherwise silently fail.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
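A C-level sketch of the gating described (the real checks are done in assembly in proc-v7.S; read_cpuid_id() is the kernel accessor for the main ID register, and the variant/revision window below is illustrative, not the erratum's actual applicability range):

```c
#include <asm/cputype.h>

static void apply_v7_errata(void)
{
	unsigned int midr = read_cpuid_id();
	unsigned int variant  = (midr >> 20) & 0xf;	/* MIDR[23:20] */
	unsigned int revision = midr & 0xf;		/* MIDR[3:0]  */

	/* Apply only to affected rNpM combinations, e.g. r1p0 and
	 * earlier (illustrative), and only if not already applied. */
	if (variant < 1 || (variant == 1 && revision == 0))
		enable_erratum_460075_workaround();	/* hypothetical */
}
```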
-
- 31 May 2009, 1 commit
-
-
By Russell King

Kconfig entries default to n, so there's no need for this to be explicitly specified.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
-
- 30 May 2009, 6 commits
-
-
By Catalin Marinas

Starting with ARMv6, the CPUs support the BE-8 variant of big-endian (byte-invariant). This patch adds the core support:

- setting of the BE-8 mode via the CPSR.E register for both kernel and user threads
- big-endian page table walking
- REV used to rotate instructions read from memory during fault processing, as they are still in little-endian format
- Kconfig and Makefile support for BE-8. The --be8 option must be passed to the final linking stage to convert the instructions to little-endian

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

This patch adds a comment to the proc-v7.S file for the setting of the PRRR and NMRR registers. It also sets the PRRR[13:12] bits to 0 (corresponding to the reserved TEX[0]CB encoding 110) to be consistent with the documentation.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

The SWP instruction has been deprecated starting with the ARMv6 architecture. On ARMv7 processors with the multiprocessor extensions (like Cortex-A9), this instruction is disabled by default, but it can be enabled by setting bit 10 in the System Control Register. Note that setting this bit is safe even if the ARMv7 processor has the SWP instruction enabled by default.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
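What "setting bit 10 in the System Control Register" amounts to, sketched as C inline assembly for illustration (the kernel does this from assembly in proc-v7.S):

```c
/* Enable the deprecated SWP/SWPB instructions on ARMv7 cores with the
 * MP extensions, where they are off by default. SCTLR.SW is bit 10,
 * and setting it is harmless if SWP is already enabled. */
static void enable_swp(void)
{
	unsigned int sctlr;

	asm volatile("mrc p15, 0, %0, c1, c0, 0" : "=r" (sctlr));
	sctlr |= 1 << 10;				/* SCTLR.SW */
	asm volatile("mcr p15, 0, %0, c1, c0, 0" : : "r" (sctlr));
	asm volatile("isb");
}
```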
-
By Tony Thompson

There are additional bits to set for the ARMv7 SMP extensions in the TTBR registers. The ordering of the IRGN bits is counter-intuitive, but it allows software built for the ARMv7 base architecture to run on an implementation with the MP extensions.

Signed-off-by: Tony Thompson <Anthony.Thompson@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
By Catalin Marinas

ARMv7 SMP hardware can broadcast TLB maintenance operations in hardware, so that software can avoid the costly IPIs. This patch adds the necessary checks (via the MMFR3 CPUID register) to avoid the software broadcasting when it is already supported by the hardware. (This patch is based on work done by Tony Thompson @ ARM.)

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
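The decision can be sketched as below, modelled on the tlb_ops_need_broadcast() helper the kernel later carries; ID_MMFR3 bits [15:12] encode the maintenance-broadcast capability, where a value of 2 means cache, branch predictor and TLB operations are all broadcast in hardware:

```c
#include <asm/cputype.h>

/* True if software must IPI the other CPUs for TLB maintenance,
 * i.e. the hardware does not broadcast TLB operations itself. */
static inline int tlb_ops_need_broadcast(void)
{
	return ((read_cpuid_ext(CPUID_EXT_MMFR3) >> 12) & 0xf) < 2;
}
```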
-
By Colin Tuckley

This is a RealView platform supporting core tiles with ARM11MPCore, Cortex-A8 or Cortex-A9 (multicore) processors. It has support for MMC, CompactFlash and PCI-E.

Signed-off-by: Colin Tuckley <colin.tuckley@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
-
- 19 May 2009, 1 commit
-
-
By Hiroshi DOYU

This patch provides device drivers that have an OMAP IOMMU with address-mapping APIs between the device virtual address (iommu), the physical address and the MPU virtual address.

There are 4 possible patterns for iommu virtual address (iova/da) mapping:

        |iova/              mapping         iommu_                  page
        | da    pa    va    (d)-(p)-(v)     function                type
      -------------------------------------------------------------------
      1 | c     c     c      1 - 1 - 1      _kmap() / _kunmap()     s
      2 | c     c,a   c      1 - 1 - 1      _kmalloc()/ _kfree()    s
      3 | c     d     c      1 - n - 1      _vmap() / _vunmap()     s
      4 | c     d,a   c      1 - n - 1      _vmalloc()/ _vfree()    n*

    'iova': device iommu virtual address
    'da':   alias of 'iova'
    'pa':   physical address
    'va':   mpu virtual address

    'c': contiguous memory area
    'd': discontiguous memory area
    'a': anonymous memory allocation
    '()': optional feature

    'n': a normal page (4KB) size is used
    's': multiple iommu superpage sizes (16MB, 1MB, 64KB, 4KB) are used
    '*': not yet, but feasible

Signed-off-by: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
-