- 21 Jun 2006, 1 commit
-
-
Authored by Jon Loeliger
Clear the high BATs during load_up_mmu if FTR_HAS_HIGH_BATS. Allow just a bit more time for secondary CPUs to phone home. Signed-off-by: Wei Zhang <Wei.Zhang@freescale.com> Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com> Signed-off-by: Jon Loeliger <jdl@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 15 Jun 2006, 2 commits
-
-
Authored by Anton Blanchard
Remove some stale POWER3/POWER4/970 support from the 32-bit kernel. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Paul Mackerras
Some POWER5+ machines can do 64k hardware pages for normal memory but not for cache-inhibited pages. This patch lets us use 64k hardware pages for most user processes on such machines (assuming the kernel has been configured with CONFIG_PPC_64K_PAGES=y). User processes start out using 64k pages and get switched to 4k pages if they use any non-cacheable mappings. With this, we use 64k pages for the vmalloc region and 4k pages for the imalloc region. If anything creates a non-cacheable mapping in the vmalloc region, the vmalloc region will get switched to 4k pages. I don't know of any driver other than the DRM that would do this, though, and these machines don't have AGP. When a region gets switched from 64k pages to 4k pages, we do not have to clear out all the 64k HPTEs from the hash table immediately. We use the _PAGE_COMBO bit in the Linux PTE to indicate whether the page was hashed in as a 64k page or a set of 4k pages. If hash_page is trying to insert a 4k page for a Linux PTE and it sees that it has already been inserted as a 64k page, it first invalidates the 64k HPTE before inserting the 4k HPTE. The hash invalidation routines also use the _PAGE_COMBO bit, to determine whether to look for a 64k HPTE or a set of 4k HPTEs to remove. With those two changes, we can tolerate a mix of 4k and 64k HPTEs in the hash table, and they will all get removed when the address space is torn down. Signed-off-by: Paul Mackerras <paulus@samba.org>
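A minimal userspace model of the demotion decision described above; the flag values and helper behaviour here are assumptions for illustration, not the kernel's actual definitions.

    #include <stdio.h>

    /* Assumed flag values, for illustration only */
    #define _PAGE_HASHPTE 0x0400UL   /* PTE has been hashed in */
    #define _PAGE_COMBO   0x0800UL   /* hashed in as a set of 4k HPTEs */

    /* Model of the hash_page decision: before inserting a 4k HPTE for a
     * PTE that was previously hashed in as a 64k page, invalidate the
     * stale 64k HPTE and mark the Linux PTE with _PAGE_COMBO so the
     * invalidation routines later look for 4k HPTEs. */
    static unsigned long hash_in_4k(unsigned long pte)
    {
        if ((pte & _PAGE_HASHPTE) && !(pte & _PAGE_COMBO)) {
            printf("invalidating stale 64k HPTE first\n");
            pte |= _PAGE_COMBO;
        }
        return pte;
    }

    int main(void)
    {
        unsigned long pte = _PAGE_HASHPTE;   /* previously hashed as 64k */
        pte = hash_in_4k(pte);
        printf("pte flags: %#lx\n", pte);
        return 0;
    }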
-
- 12 Jun 2006, 1 commit
-
-
Authored by Paul Mackerras
The pgdir field in the paca was a leftover from the dynamic VSIDs patch, and is not used in the current kernel code. This removes it. Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 11 Jun 2006, 1 commit
-
-
Authored by Paul Mackerras
This adds a vdso_base element to the mm_context_t for 32-bit compiles (both for ARCH=powerpc and ARCH=ppc). This fixes the compile errors that have been reported in arch/powerpc/kernel/signal_32.c. Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 09 Jun 2006, 1 commit
-
-
Authored by Benjamin Herrenschmidt
Our MMU hash management code would not set the "C" bit (changed bit) in the hardware PTE when updating a RO PTE into a RW PTE. That would cause the hardware to possibly do a write back to the hash table to set it on the first store access, which in addition to being a performance issue, might also hit a bug when running with native hash management (non-HV), as our code is specifically optimized for the case where no write back happens. Thus there is a very small theoretical window where a hash PTE can become corrupted if that HPTE has just been upgraded to read-write, a store access happens on it, and that races with another processor evicting that same slot. Since eviction (caused by an almost full hash) is extremely rare, the bug is fortunately very unlikely to happen. This is fixed by allowing the updating of the protection bits in the native hash handling to also set (but not clear) the "C" bit, and, to also improve performance in the general case, by always setting that bit on newly inserted hash PTEs so that writeback really never happens. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
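A sketch of the two bit-twiddling changes; the HPTE bit values below are assumptions for illustration, not necessarily the real asm/mmu.h definitions.

    /* Assumed layout of the low word of a hash PTE (illustration only) */
    #define HPTE_R_PP 0x0000000000000003UL   /* protection bits */
    #define HPTE_R_C  0x0000000000000080UL   /* changed bit */

    /* Updating protection may set, but never clears, "C" ... */
    unsigned long hpte_updatepp_bits(unsigned long r, unsigned long newpp)
    {
        return (r & ~HPTE_R_PP) | (newpp & HPTE_R_PP) | HPTE_R_C;
    }

    /* ... and a freshly inserted HPTE gets "C" up front, so the hardware
     * never needs to write the entry back on the first store. */
    unsigned long hpte_insert_bits(unsigned long r)
    {
        return r | HPTE_R_C;
    }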
-
- 19 May 2006, 1 commit
-
-
Authored by Michael Ellerman
We currently do mem= handling in three separate places, and as benh pointed out, I wrote two of them. Now that we parse command line parameters earlier we can clean this mess up. Moving the parsing out of prom_init means the device tree might be allocated above the memory limit; if that happens we'd have to move it. As it happens we already have logic to do that for kdump, so just genericise it. This also means we might have reserved regions above the memory limit; if we do, the bootmem allocator will blow up, so we have to modify lmb_enforce_memory_limit() to truncate the reserves as well. Tested on P5 LPAR, iSeries, F50, 44p. Tested moving the device tree on P5, 44p and F50. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
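A hedged sketch of the reserve truncation; the stand-in structures below only approximate the era's include/asm-powerpc/lmb.h layout and should be treated as assumptions.

    /* Minimal stand-in types (assumptions) for the lmb structures */
    struct lmb_property { unsigned long base, size; };
    struct lmb_region   { unsigned long cnt; struct lmb_property region[128]; };
    struct lmb          { struct lmb_region memory, reserved; };

    static struct lmb lmb;

    /* Truncate reserved regions that cross or lie beyond the mem= limit,
     * so the bootmem allocator never sees a reserve above usable RAM. */
    static void truncate_reserves(unsigned long memory_limit)
    {
        unsigned long i;

        for (i = 0; i < lmb.reserved.cnt; i++) {
            struct lmb_property *p = &lmb.reserved.region[i];

            if (p->base >= memory_limit)
                p->size = 0;                        /* entirely above the limit */
            else if (p->base + p->size > memory_limit)
                p->size = memory_limit - p->base;   /* straddles the limit */
        }
    }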
-
- 02 May 2006, 1 commit
-
-
Authored by Jeremy Kerr
Change of_node_to_nid() to traverse the device tree, looking for a numa id. Cell uses this to assign ids to SPUs, which are children of the CPU node. Existing users of of_node_to_nid() are altered to use of_node_to_nid_single(), which doesn't do the traversal. Export an attach_sysdev_to_node() function, allowing system devices (eg. SPUs) to link themselves into the numa topology in sysfs. Signed-off-by: Arnd Bergmann <arnd.bergmann@de.ibm.com> Cc: Paul Mackerras <paulus@samba.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
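The traversal reads roughly like this sketch, reconstructed from the commit text (refcounting via the kernel's of_node_get/of_get_parent/of_node_put device-tree API; -1 means "no numa id here"):

    /* Walk up the device tree until some ancestor yields a node id.
     * of_node_to_nid_single() does the single-node lookup. */
    int of_node_to_nid(struct device_node *device)
    {
        struct device_node *tmp;
        int nid = -1;

        of_node_get(device);
        while (device) {
            nid = of_node_to_nid_single(device);
            if (nid != -1)
                break;
            tmp = device;
            device = of_get_parent(tmp);   /* takes a new reference */
            of_node_put(tmp);
        }
        of_node_put(device);
        return nid;
    }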
-
- 28 Apr 2006, 1 commit
-
-
Authored by David Gibson
At present, ARCH=powerpc kernels can waste considerable space in pagetables when making large hugepage mappings. Hugepage PTEs go in PMD pages, but each PMD page maps 256M and so contains only 16 hugepage PTEs (128 bytes of data), but takes up a 1024 byte allocation. With CONFIG_PPC_64K_PAGES enabled (64k base page size), the situation is worse. Now hugepage PTEs are at the PTE page level (also mapping 256M), so we store 16 hugepage PTEs in a 64k allocation. The PowerPC MMU already means that any 256M region is either all hugepage, or all normal pages. Thus, with some care, we can use a different allocation for the hugepage PTE tables and only allocate the 128 bytes necessary. Signed-off-by: Paul Mackerras <paulus@samba.org>
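The arithmetic behind the 128-byte figure, as a small illustrative calculation; the macro names are mine, the numbers come from the commit text.

    #define PMD_RANGE       (256UL << 20)  /* each PMD page maps 256M */
    #define HUGEPAGE_SIZE   (16UL << 20)   /* 16M hugepages */
    #define PTE_ENTRY_BYTES 8UL            /* bytes per PTE */

    /* 256M / 16M = 16 hugepage PTEs; 16 * 8 = 128 bytes actually needed,
     * versus a 1024-byte PMD-page allocation, or a 64k PTE page with
     * 64k base pages. */
    #define HUGEPTE_TABLE_BYTES ((PMD_RANGE / HUGEPAGE_SIZE) * PTE_ENTRY_BYTES)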
-
- 22 Apr 2006, 2 commits
-
-
Authored by Olof Johansson
Quieten some of the debug RAM config output; we already print out available memory at KERN_INFO level. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Olof Johansson
No need to always print page orders. Signed-off-by: Olof Johansson <olof@lixom.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 01 Apr 2006, 1 commit
-
-
Authored by Anton Blanchard
This comment exceeded my bad-spelling threshold :) Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 29 Mar 2006, 2 commits
-
-
Authored by KAMEZAWA Hiroyuki
for_each_cpu() actually iterates across all possible CPUs. We've had mistakes in the past where people were using for_each_cpu() where they should have been iterating across only online or present CPUs. This is inefficient and possibly buggy. We're renaming for_each_cpu() to for_each_possible_cpu() to avoid this in the future. This patch replaces for_each_cpu with for_each_possible_cpu. Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
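The mechanical form of the rename; the loop body and per-cpu variable are hypothetical.

    int cpu;

    /* was: for_each_cpu(cpu) -- ambiguous about which set it walks;
     * it actually walks every possible CPU, so now say so explicitly */
    for_each_possible_cpu(cpu)
        per_cpu(counter, cpu) = 0;   /* hypothetical per-cpu variable */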
-
Authored by Eugene Surovegin
Fix the 44x and BookE page fault handlers to correctly lock the PTE before trying to pte_update() it; otherwise this PTE might be swapped out after the pte_present() check but before the pte_update() call, resulting in a corrupted PTE. This can happen with preemption enabled under low-memory conditions. Signed-off-by: Eugene Surovegin <ebs@ebshome.net> Signed-off-by: Paul Mackerras <paulus@samba.org>
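In C terms the fix amounts to the pattern below; the actual handlers are assembly, and the flag argument here is a placeholder, not the handler's real bits.

    /* Hold the PTE lock across both the check and the update, so the
     * page cannot be swapped out in between. */
    static void update_pte_locked(struct mm_struct *mm, pmd_t *pmd,
                                  unsigned long address)
    {
        spinlock_t *ptl;
        pte_t *ptep = pte_offset_map_lock(mm, pmd, address, &ptl);

        if (pte_present(*ptep))
            pte_update(ptep, 0, _PAGE_ACCESSED);   /* placeholder flags */

        pte_unmap_unlock(ptep, ptl);
    }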
-
- 28 Mar 2006, 2 commits
-
-
Authored by Benjamin Herrenschmidt
This removes statically assigned platform numbers and reworks the powerpc platform probe code to use a better mechanism. With this, board support files can simply declare a new machine type with a macro, and implement a probe() function that uses the flattened device-tree to detect if they apply for a given machine. We now have a machine_is() macro that replaces the comparisons of _machine with the various PLATFORM_* constants. This commit also changes various drivers to use the new macro instead of looking at _machine. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
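A sketch of the new idiom using a hypothetical board; the board name, compatible string, and quirk call are invented for illustration.

    /* Board support file: declare the machine and probe it from the
     * flattened device tree. */
    static int __init myboard_probe(void)
    {
        unsigned long root = of_get_flat_dt_root();

        return of_flat_dt_is_compatible(root, "myvendor,myboard");
    }

    define_machine(myboard) {
        .name  = "My Board",
        .probe = myboard_probe,
    };

    /* Call sites replace "_machine == PLATFORM_*" comparisons: */
    if (machine_is(myboard))
        setup_myboard_quirks();   /* hypothetical */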
-
Authored by KAMEZAWA Hiroyuki
Replace for_each_pgdat() with for_each_online_pgdat(). Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
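Illustrative form of the replacement; the accumulation body is hypothetical.

    struct pglist_data *pgdat;
    unsigned long present = 0;

    /* was: for_each_pgdat(pgdat) */
    for_each_online_pgdat(pgdat)           /* skips offline nodes */
        present += pgdat->node_present_pages;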
-
- 27 Mar 2006, 3 commits
-
-
Authored by Anton Blanchard
We were printing node ids in hex in one spot. Let's be consistent and always print them in decimal. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Andrew Morton
The return statement is to prevent `warning: 'nid' might be used uninitialized in this function'. Cc: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Ingo Molnar
Semaphore to mutex conversion. The conversion was generated via scripts, and the result was validated automatically via a script as well. Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Dave Jones <davej@codemonkey.org.uk> Cc: Paul Mackerras <paulus@samba.org> Cc: Ralf Baechle <ralf@linux-mips.org> Cc: Jens Axboe <axboe@suse.de> Cc: Neil Brown <neilb@cse.unsw.edu.au> Acked-by: Alasdair G Kergon <agk@redhat.com> Cc: Greg KH <greg@kroah.com> Cc: Dominik Brodowski <linux@dominikbrodowski.net> Cc: Adam Belay <ambx1@neo.rr.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
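The generated conversion follows this pattern; the names are hypothetical.

    /* before (a semaphore used as a sleeping mutex):
     *     static DECLARE_MUTEX(foo_sem);
     *     down(&foo_sem);   ...   up(&foo_sem);
     * after: */
    static DEFINE_MUTEX(foo_mutex);

    static void foo_critical(void)
    {
        mutex_lock(&foo_mutex);
        /* critical section */
        mutex_unlock(&foo_mutex);
    }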
-
- 23 Mar 2006, 1 commit
-
-
Authored by Andrew Morton
arch/powerpc/mm/mem.c: In function `add_memory': arch/powerpc/mm/mem.c:128: warning: assignment makes integer from pointer without a cast Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 22 Mar 2006, 13 commits
-
-
Authored by David Gibson
Quite a long time back, prepare_hugepage_range() replaced is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code to verify if an address range is suitable for a hugepage mapping. is_aligned_hugepage_range() stuck around, but only to implement prepare_hugepage_range() on archs which didn't implement their own. Most archs (everything except ia64 and powerpc) used the same implementation of is_aligned_hugepage_range(). On powerpc, which implements its own prepare_hugepage_range(), the custom version was never used. In addition, "is_aligned_hugepage_range()" was a bad name, because it suggests it returns true iff the given range is a good hugepage range, whereas in fact it returns 0-or-error (so the sense is reversed). This patch cleans up by abolishing is_aligned_hugepage_range(). Instead prepare_hugepage_range() is defined directly. Most archs use the default version, which simply checks the given region is aligned to the size of a hugepage. ia64 and powerpc define custom versions. The ia64 one simply checks that the range is in the correct address space region in addition to being suitably aligned. The powerpc version (just as previously) checks for suitable addresses, and if necessary performs low-level MMU frobbing to set up new areas for use by hugepages. No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR). Signed-off-by: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: William Lee Irwin III <wli@holomorphy.com> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
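The default version reduces to a pure alignment check, roughly as below (a reconstruction from the commit text, not a verbatim quote):

    /* Default prepare_hugepage_range(): verify hugepage alignment only.
     * ia64 and powerpc supply their own versions instead. */
    static inline int prepare_hugepage_range(unsigned long addr,
                                             unsigned long len)
    {
        if (len & ~HPAGE_MASK)
            return -EINVAL;
        if (addr & ~HPAGE_MASK)
            return -EINVAL;
        return 0;
    }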
-
Authored by Nick Piggin
set_page_count usage outside mm/ is limited to setting the refcount to 1. Remove set_page_count from outside mm/, and replace those users with init_page_count() and set_page_refcounted(). This allows more debug checking, and tighter control on how code is allowed to play around with page->_count. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
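The mechanical substitution:

    /* before (outside mm/): open-coded refcount initialization */
    set_page_count(page, 1);

    /* after: intent-revealing helper, so mm/ keeps control of _count */
    init_page_count(page);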
-
Authored by Nick Piggin
A couple of places set_page_count(page, 1) that don't need to. Signed-off-by: Nick Piggin <npiggin@suse.de> Cc: Paul Mackerras <paulus@samba.org> Cc: Jeff Dike <jdike@addtoit.com> Cc: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it> Signed-off-by: Andrew Morton <akpm@osdl.org> Signed-off-by: Linus Torvalds <torvalds@osdl.org>
-
Authored by Michael Ellerman
In mm_init_ppc64() we calculate the location of the "IO hole", but then no one ever looks at the value. So don't bother. That's actually all mm_init_ppc64() does, so get rid of it too. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Michael Ellerman
It has been decreed that platform numbers are evil, so as a step in that direction, replace platform_is_lpar() with a FW_FEATURE_LPAR bit. Currently FW_FEATURE_LPAR really means i/pSeries LPAR; in the future we might have to clean that up if we need to be more specific about what LPAR actually means. But that's another patch ... Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
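Call sites move from the platform number to a firmware feature test; the call in the body is hypothetical.

    /* was: if (platform_is_lpar()) ... */
    if (firmware_has_feature(FW_FEATURE_LPAR))
        setup_lpar_console();   /* hypothetical call site */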
-
Authored by Michael Ellerman
htab_bolt_mapping() takes a vstart and pstart parameter, but all but one of its callers actually pass it vstart and vstart. Luckily before it passes paddr (calculated from pstart) to the hpte_insert routines it calls virt_to_abs() (aka. __pa()) on the address, so there isn't actually a bug. map_io_page() however does pass pstart properly, so currently it's broken AFAICT because we're calling __pa(paddr) which will get us something very large. Presumably no one's calling map_io_page() in the right context. Anyway, change htab_bolt_mapping() callers to properly pass pstart, and then use it properly in htab_bolt_mapping(), ie. don't call __pa() on it again. Booted on p5 LPAR, iSeries and Power3. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
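After the fix, callers pass the physical start explicitly and the routine stops re-applying __pa() internally. Roughly (the signature is reconstructed from the commit text and should be treated as an assumption):

    /* caller: bolt the linear mapping, passing both the virtual and the
     * real physical start; htab_bolt_mapping() no longer __pa()'s it */
    htab_bolt_mapping(base, base + size, __pa(base), mode, mmu_linear_psize);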
-
Authored by Nathan Lynch
We can plug the boot cpu into its node independently of whether numa topology is detected. And numa_setup_cpu does the right thing for all cases now, so remove special-casing for non-numa from the cpu hotplug callback. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
The powerpc numa code unconditionally onlines all nodes from 0 to the highest node id found, regardless of whether cpus or memory are present in the nodes. This wastes 8K per node and complicates some cpu and memory hotplug situations, such as adding a resource that doesn't map to one of the nodes discovered at boot. Set nodes online as resources are scanned. Fall back to node 0 only when we're sure this isn't a NUMA machine. Instead of defaulting to node 0 for cases of hot-adding a resource which doesn't belong to any initialized node, assign it to the first online node. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
Code to handle Power4's invalid node id (0xffff) is duplicated for cpu and memory. Better to handle this case in one place -- of_node_to_nid. Overall behavior should be unchanged. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
Since we effectively treat the domain ids given to us by firmware as logical node ids, make this explicit (basically s/numa_domain/nid/). No functional changes, only variable and function names are modified. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
map_cpu_to_node does not need to be inline, it is never called in a hot path. map_cpu_to_node, numa_setup_cpu, and find_cpu_node can be marked __cpuinit, as they are never used after boot if CONFIG_HOTPLUG_CPU=n. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
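The annotation pattern, sketched; the table names follow arch/powerpc/mm/numa.c as I recall them and the body is a simplification.

    /* __cpuinit text is discarded after boot when CONFIG_HOTPLUG_CPU=n */
    static void __cpuinit map_cpu_to_node(int cpu, int node)
    {
        numa_cpu_lookup_table[cpu] = node;
        cpu_set(cpu, numa_cpumask_lookup_table[node]);
    }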
-
Authored by Nathan Lynch
Add a debug statement for map_cpu_to_node; it's useful for cpu hotplug. Clarify the debug statement about not finding the numa reference-points property. Don't print a meaningless associativity depth (-1) on non-numa systems. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Authored by Nathan Lynch
At boot, the numa code is assigning boot_cpuid to node 0 unconditionally. Basically, numa_setup_cpu is being stupid about it, but this is the minimal fix -- just call numa_setup_cpu(boot_cpuid) later, after all nodes have been set online. Signed-off-by: Nathan Lynch <nathanl@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 17 Mar 2006, 1 commit
-
-
Authored by Michael Ellerman
My patch (d7a5b2ff) to always panic if lmb_alloc() fails is broken because it checks alloc < 0, but it should be checking alloc == 0. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
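The corrected check; the panic message is illustrative, not the patch's exact text.

    unsigned long alloc;

    alloc = lmb_alloc(size, align);
    /* lmb_alloc() returns an unsigned address: failure is 0, never
     * negative, so the old "alloc < 0" test could never fire. */
    if (alloc == 0)
        panic("lmb_alloc(%lu, %lu) failed\n", size, align);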
-
- 16 Mar 2006, 1 commit
-
-
Authored by Olaf Hering
Remove warnings when building a 64-bit kernel; the smp_call_function warning also triggers with a 32-bit kernel. WARNING: vmlinux: duplicate symbol 'smp_call_function' previous definition was in vmlinux arch/powerpc/kernel/ppc_ksyms.c:164:EXPORT_SYMBOL(smp_call_function); arch/powerpc/kernel/smp.c:300:EXPORT_SYMBOL(smp_call_function); WARNING: vmlinux: duplicate symbol 'ioremap' previous definition was in vmlinux arch/powerpc/kernel/ppc_ksyms.c:113:EXPORT_SYMBOL(ioremap); arch/powerpc/mm/pgtable_64.c:321:EXPORT_SYMBOL(ioremap); WARNING: vmlinux: duplicate symbol '__ioremap' previous definition was in vmlinux arch/powerpc/kernel/ppc_ksyms.c:117:EXPORT_SYMBOL(__ioremap); arch/powerpc/mm/pgtable_64.c:322:EXPORT_SYMBOL(__ioremap); WARNING: vmlinux: duplicate symbol 'iounmap' previous definition was in vmlinux arch/powerpc/kernel/ppc_ksyms.c:118:EXPORT_SYMBOL(iounmap); arch/powerpc/mm/pgtable_64.c:323:EXPORT_SYMBOL(iounmap); Signed-off-by: Olaf Hering <olh@suse.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
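The fix pattern is to keep one EXPORT_SYMBOL next to the definition and delete the duplicate in ppc_ksyms.c; the function body below is an assumption, only the export placement matters.

    /* arch/powerpc/mm/pgtable_64.c: export stays with the definition */
    void __iomem *ioremap(unsigned long addr, unsigned long size)
    {
        return __ioremap(addr, size, _PAGE_NO_CACHE);   /* assumed body */
    }
    EXPORT_SYMBOL(ioremap);

    /* arch/powerpc/kernel/ppc_ksyms.c: the duplicate
     * EXPORT_SYMBOL(ioremap) line is simply removed. */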
-
- 28 Feb 2006, 1 commit
-
-
Authored by Michael Ellerman
Before the merge I updated create_pte_mapping() to work for iSeries, by calling iSeries_hpte_bolt_or_insert. (4c55130b) Later we changed iSeries_hpte_insert to cope with the bolting case, and called that instead from create_pte_mapping() (which was renamed to htab_bolt_mapping) (3c726f8d). Unfortunately that change introduced a subtle bug, where we pass an absolute address to iSeries_hpte_insert() where it expects a physical address. This leads to us calling phys_to_abs() twice on the physical address, which is seriously bogus. This only causes a problem if the absolute address from the first translation can be looked up again in the chunk_map, which depends on the size and layout of memory. I've seen it fail on one box, but not others. The minimal fix is to pass the physical address to iSeries_hpte_insert(). For 2.6.17 we should make phys_to_abs() BUG if we try to double-translate an address. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 24 Feb 2006, 2 commits
-
-
Authored by R Sharada
native_hpte_clear() has a spinlock recursion problem: native_tlbie_lock is taken twice, once in native_hpte_clear() and once within tlbie(). Fix the problem by changing the call to tlbie() in native_hpte_clear() to __tlbie(). It still supports only 4k pages for now. Signed-off-by: R Sharada <sharada@in.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
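A sketch of the recursion and its fix; the helpers marked hypothetical are stand-ins, and only the tlbie() vs __tlbie() distinction comes from the commit.

    unsigned long slot;

    spin_lock(&native_tlbie_lock);
    for (slot = 0; slot < hpte_count; slot++) {
        unsigned long va = hpte_slot_to_va(slot);   /* hypothetical helper */

        clear_hpte_slot(slot);                      /* hypothetical helper */
        /* tlbie() would take native_tlbie_lock again and self-deadlock;
         * the raw __tlbie() assumes the caller already holds it. */
        __tlbie(va, MMU_PAGE_4K);
    }
    spin_unlock(&native_tlbie_lock);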
-
Authored by Michael Ellerman
For kexec we need to know the size of the MMU hash table. Currently we calculate the size once in the htab code, and then twice more in the kexec code, once using htab_hash_mask and once using ppc64_pft_size. On some machines the ppc64_pft_size calculation is broken because ppc64_pft_size is not set. So we need to fix the second calculation, but better still we should just calculate the size once and use it everywhere else. Tested on Power5 LPAR, Power4 non-LPAR and Power3. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 10 Feb 2006, 1 commit
-
-
Authored by Jon Mason
This patch removes all self-references and fixes references to files in the now defunct arch/ppc64 tree. I think this accomplishes everything wanted, though there might be a few references I missed. Signed-off-by: Jon Mason <jdmason@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 07 Feb 2006, 1 commit
-
-
Authored by Michael Ellerman
LMB_ALLOC_ANYWHERE doesn't need to be part of the API, it's only used in lmb.c - so move it out of the header file. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-