- 16 May 2008, 1 commit
-
-
By Paul Mackerras
This provides a way to defer processing of an interrupt that wakes the processor out of sleep mode. On 32-bit platforms that use an interrupt to wake the processor, we have to have interrupts enabled in hardware at the point where we go to sleep, otherwise the processor will never wake up. However, because interrupts are logically disabled at this point, we don't want to process the interrupt straight away.

This is handled by setting the _TLF_SLEEPING flag. When we get an interrupt and _TLF_SLEEPING is set, we first clear the MSR_EE (external interrupt enable) bit in the saved MSR value, and then return to the address in the link register, like we do for _TLF_NAPPING, but without actually handling the interrupt.

Note that this is handled somewhat differently on PowerBooks, so this new code will only be used on non-Apple machines.

Signed-off-by: Paul Mackerras <paulus@samba.org>
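In rough C terms, the deferral described above looks like this (an illustrative paraphrase only; the real logic lives in the 32-bit assembly exception-return path):

    /* sketch: on interrupt entry, if the thread was sleeping */
    if (ti->local_flags & _TLF_SLEEPING) {
            ti->local_flags &= ~_TLF_SLEEPING;
            regs->msr &= ~MSR_EE;    /* keep the interrupt logically masked */
            regs->nip = regs->link;  /* return to the link register address */
            return;                  /* skip the actual interrupt handler */
    }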
-
- 15 May 2008, 2 commits
-
-
By Nate Case
Calls to copy_to_user() or copy_from_user() with a constant size N <= 8 that is not 1, 2, 4, or 8 can fail, because 'ret' is only set when the size is 1, 2, 4 or 8, yet it is tested after the switch statement for any constant size <= 8, so it is read uninitialized. This fixes it by initializing 'ret' to 1, causing the code to fall through to the __copy_tofrom_user call for sizes other than 1, 2, 4 or 8.

Signed-off-by: Dave Scidmore <dscidmore@xes-inc.com>
Signed-off-by: Nate Case <ncase@xes-inc.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
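Abridged, the fixed helper has this shape (a sketch of the include/asm-powerpc/uaccess.h pattern; only case 1 is shown, the others are analogous):

    if (__builtin_constant_p(n) && (n <= 8)) {
            unsigned long ret = 1;  /* the fix: previously uninitialized */

            switch (n) {
            case 1:
                    __get_user_size(*(u8 *)to, from, 1, ret);
                    break;
            /* cases 2, 4 and 8 are analogous */
            }
            if (ret == 0)
                    return 0;
            /* n == 3, 5, 6 or 7: ret stays 1 and we fall through */
    }
    return __copy_tofrom_user((__force void __user *)to, from, n);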
-
By Benjamin Herrenschmidt
This changes vmemmap to use a different region (region 0xf) of the address space, and to configure the page size of that region dynamically at boot.

The problem with the current approach of always using 16M pages is that it's not well suited to machines that have small amounts of memory, such as small partitions on pseries, or PS3s. In fact, on the PS3, failure to allocate the 16M page backing vmemmap tends to prevent hotplugging the HV's "additional" memory, thus limiting the available memory even more; from my experience, down to something like 80M total, which makes it really not very usable.

The logic used by my patch to choose the vmemmap page size is:

- If 16M pages are available and there's 1G or more RAM at boot, use that size.
- Else if 64K pages are available, use that.
- Else use 4K pages.

I've tested on a POWER6 (16M pages) and on an iSeries POWER3 (4K pages) and it seems to work fine.

Note that I intend to change the way we organize the kernel regions & SLBs, so the actual region will change from 0xf back to something else at one point, as I simplify the SLB miss handler, but that will be for a later patch.

Signed-off-by: Paul Mackerras <paulus@samba.org>
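In code, that selection order amounts to something like the following (a sketch; the function name is illustrative, while mmu_psize_defs[] and the MMU_PAGE_* indices are the existing powerpc page-size table):

    static int __init vmemmap_select_psize(void)
    {
            /* 16M pages only when at least 1G of RAM is present at boot */
            if (mmu_psize_defs[MMU_PAGE_16M].shift &&
                lmb_phys_mem_size() >= 0x40000000UL)
                    return MMU_PAGE_16M;
            if (mmu_psize_defs[MMU_PAGE_64K].shift)
                    return MMU_PAGE_64K;
            return MMU_PAGE_4K;
    }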
-
- 14 May 2008, 8 commits
-
-
By Michael Ellerman
Make a few things static in lparcfg.c.
Make init and exit routines static in rtas_flash.c.
Make things static in rtas_pci.c.
Make some functions static in rtas.c.
Make fops static in rtas-proc.c.
Remove unneeded extern for do_gtod in smp.c.
Make clocksource_init() static in time.c.
Make last_tick_len and ticklen_to_xs static in time.c.
Move the declaration of the pvr per-cpu into smp.h.
Make kexec_smp_down() and kexec_stack static in machine_kexec_64.c.
Don't return void in arch_teardown_msi_irqs() in msi.c.
Move declaration of GregorianDay() into asm/time.h.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
This is a little messier than I'd like, because xmon.h only exists on powerpc and we can't have a static inline and an extern declaration visible at the same time.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
The typedef for irqreturn_t was moved into its own header a while back, so there's no reason we can't move xmon_irq() into xmon.h now.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
Usually we call xmon() via debugger(), so this could be static. Sometimes when debugging it's nice to be able to call xmon() directly though, so add a declaration.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
... instead of having extern declarations in a .c file.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Michael Ellerman
... instead of having an extern declaration in a .c file.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Kumar Gala
We use the low bits of regs->trap as flag bits. We already indicate critical and machine check level exceptions via this mechanism. Extend it to indicate debug level exceptions.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Roland McGrath
Replace TIF_RESTORE_SIGMASK with TLF_RESTORE_SIGMASK and define our own set_restore_sigmask() function. This saves the costly SMP-safe set_bit operation, which we do not need for the sigmask flag since TIF_SIGPENDING always has to be set too.

Signed-off-by: Roland McGrath <roland@redhat.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
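In outline, the per-arch function becomes (a sketch based on the description above):

    static inline void set_restore_sigmask(void)
    {
            struct thread_info *ti = current_thread_info();

            /* local_flags is only touched by the owning thread, so a plain
             * non-atomic OR replaces the SMP-safe set_bit */
            ti->local_flags |= _TLF_RESTORE_SIGMASK;
            set_bit(TIF_SIGPENDING, &ti->flags);  /* has to be set anyway */
    }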
-
- 06 May 2008, 1 commit
-
-
By Stefan Roese
The new 440x6 core used on the AMCC 460EX/GT introduces new storage attribute fields to the TLB2 word. Those are:

    Bit   11   12   13   14   15
          WL1  IL1I IL1D IL2I IL2D

With these bits the cache (L1 and L2) can be configured in a more flexible way, instruction- and data-cache independently. The "old" I and W bits are still available, and setting these old bits will automatically set these new bits too (for backward compatibility).

The current code does not clear these fields, which can result in the cache being disabled by chance. This patch now makes sure that these new bits are cleared when the TLB2 word is written.

Signed-off-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
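In IBM bit numbering (bit 0 is the most-significant bit of the 32-bit word), the new fields correspond to masks like these; the constant names are illustrative, and the actual clearing happens in the 44x TLB-write assembly:

    #define TLB2_BIT(n)     (1u << (31 - (n)))      /* IBM bit n of a u32 */

    #define TLB2_WL1        TLB2_BIT(11)
    #define TLB2_IL1I       TLB2_BIT(12)
    #define TLB2_IL1D       TLB2_BIT(13)
    #define TLB2_IL2I       TLB2_BIT(14)
    #define TLB2_IL2D       TLB2_BIT(15)

    /* clear the 440x6 cache-control fields before writing TLB word 2,
     * so stale bits cannot disable the caches by accident */
    tlb_word2 &= ~(TLB2_WL1 | TLB2_IL1I | TLB2_IL1D | TLB2_IL2I | TLB2_IL2D);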
-
- 05 May 2008, 3 commits
-
-
By Emil Medve
We provide an ioremap_flags(), so this provides a corresponding devm_ioremap_prot(). The slight name difference is at Ben Herrenschmidt's request, as he plans on changing ioremap_flags to ioremap_prot in the future.

Signed-off-by: Emil Medve <Emilian.Medve@Freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Tejun Heo <htejun@gmail.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
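A devres wrapper for such a mapping follows the usual devm_* pattern, roughly (a sketch, not necessarily the exact code merged):

    static void devm_ioremap_prot_release(struct device *dev, void *res)
    {
            iounmap(*(void __iomem **)res);
    }

    void __iomem *devm_ioremap_prot(struct device *dev, resource_size_t offset,
                                    size_t size, unsigned long flags)
    {
            void __iomem **ptr, *addr;

            ptr = devres_alloc(devm_ioremap_prot_release, sizeof(*ptr),
                               GFP_KERNEL);
            if (!ptr)
                    return NULL;

            addr = ioremap_flags(offset, size, flags);
            if (addr) {
                    *ptr = addr;
                    devres_add(dev, ptr);   /* unmapped on driver detach */
            } else {
                    devres_free(ptr);
            }
            return addr;
    }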
-
By Luke Browning
Currently, page fault handlers don't issue an MFC restart if the context switch pending flag is set, which can leave us with a hanging DMA after a context restore. This patch introduces a fault pending flag that is set by the fault handler and read by the context switch code, so that the latter can add the restart bit at the right spot, after it has successfully saved the state of the MFC control register.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
By Luke Browning
SPU class 0 & 1 exceptions may occur in parallel, so we may end up overwriting csa.dsisr. This change adds dedicated fields for each class to the spu and the spu context, so that fault data is not overwritten.

Signed-off-by: Luke Browning <lukebr@linux.vnet.ibm.com>
Signed-off-by: Jeremy Kerr <jk@ozlabs.org>
-
- 04 May 2008, 2 commits
-
-
By Hollis Blanchard
This reduces host CPU usage when the guest is idle. However, the guest must set MSR[WE] in its idle loop, which Linux did not do until 2.6.26.

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Signed-off-by: Jerone Young <jyoung5@us.ibm.com>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Ulrich Drepper
This replaces the duplicated arch-specific versions of sys_pipe() with one unified implementation, removing almost 250 lines of duplicated code. It's marked __weak, so that *if* an architecture wants to override the default implementation it can do so by simply having its own replacement version, since many architectures use alternate calling conventions for the pipe() system call for legacy reasons (i.e. traditional UNIX implementations often return the two file descriptors in registers).

I still haven't changed the cris version even though Linus says the BKL isn't needed. The arch maintainer can easily do it if there are really no obstacles.

Signed-off-by: Ulrich Drepper <drepper@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 03 May 2008, 1 commit
-
-
By H. Peter Anvin
This modifies <asm-powerpc/types.h> to use the <asm-generic/int-*.h> generic include files.

Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
-
- 02 May 2008, 1 commit
-
-
By Geert Uytterhoeven
The routines ps3_virq_setup() and ps3_virq_destroy() are used in only one file, so make them static.

Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 30 April 2008, 1 commit
-
-
By Jeff Dike
Lots of asm-*/futex.h files call pagefault_enable and pagefault_disable, which are declared in linux/uaccess.h, without including linux/uaccess.h. They all include asm/uaccess.h, so this patch replaces asm/uaccess.h with linux/uaccess.h.

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Kyle McMartin <kyle@mcmartin.ca>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 29 April 2008, 5 commits
-
-
By Harvey Harrison
Unaligned access is OK for the following arches: cris, m68k, mn10300, powerpc, s390, x86.

Arches that use the memmove implementation for native endian, and the byteshifting for the opposite endianness: h8300, m32r, xtensa.

Packed struct for native endian, byteshifting for other endian: alpha, blackfin, ia64, parisc, sparc, sparc64, mips, sh.

m68knommu is generic_be for Coldfire; otherwise unaligned access is OK.

frv and arm choose endianness based on compiler settings and use the byteshifting versions. Remove the unaligned trap handler from frv as it is now unused.

v850 is LE, and uses the byteshifting versions for both BE and LE.

Remove the now unused asm-generic implementation.

Signed-off-by: Harvey Harrison <harvey.harrison@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
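For reference, a byteshifting accessor reads one byte at a time, so it is safe at any alignment on any architecture (a standalone sketch of a little-endian 32-bit load):

    #include <stdint.h>

    static inline uint32_t get_unaligned_le32_sketch(const uint8_t *p)
    {
            /* byte loads never fault on alignment */
            return (uint32_t)p[0]         | ((uint32_t)p[1] << 8) |
                   ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }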
-
By Adrian Bunk
Add a proper prototype for __do_softirq() in include/linux/interrupt.h.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Zhang Wei
Signed-off-by: Zhang Wei <wei.zhang@freescale.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Kumar Gala
This makes it possible to use separate stacks for hard and soft IRQs on 32-bit powerpc as well as on 64-bit. The code for 32-bit is just the 32-bit analog of the 64-bit code.

* Added allocation and initialization of the irq stacks. We limit the stacks to be in lowmem for ppc32.
* Implemented ppc32 versions of call_do_softirq() and call_handle_irq() to switch the stack pointers.
* Reworked how we do stack overflow detection. We now keep around the limit of the stack in the thread_struct and compare against the limit to see if we've overflowed. We can now use this on ppc64 if desired.

[ paulus@samba.org: Fixed bug on 6xx where we need to reload r9 with the thread_info pointer. ]

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
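The limit-based overflow check reduces to a compare against the stored bound, roughly (a sketch; ksp_limit stands for the thread_struct field the text describes):

    static inline void check_stack_overflow(void)
    {
            unsigned long sp;

            asm volatile("mr %0,1" : "=r" (sp));  /* r1 is the stack pointer */

            /* warn while some headroom (here 2KB) is still left */
            if (unlikely(sp < current->thread.ksp_limit + 2048)) {
                    printk(KERN_WARNING "do_IRQ: stack overflow: %ld\n",
                           sp - current->thread.ksp_limit);
                    dump_stack();
            }
    }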
-
By Paul Mackerras
This changes the definitions of the xchg and cmpxchg families of functions in include/asm-powerpc/system.h to be marked __always_inline rather than __inline__. The reason for doing this is that we rely on the compiler inlining them in order to eliminate the references to __xchg_called_with_bad_pointer and __cmpxchg_called_with_bad_pointer, which are deliberately left undefined. Thus this change will enable us to make the inline keyword be just a hint rather than a directive.

Signed-off-by: Paul Mackerras <paulus@samba.org>
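The mechanism relies on calling an extern that is deliberately never defined, so any call that survives inlining and constant folding fails at link time; abridged shape:

    extern void __xchg_called_with_bad_pointer(void);  /* never defined */

    static __always_inline unsigned long
    __xchg(volatile void *ptr, unsigned long x, unsigned int size)
    {
            switch (size) {
            case 4:
                    return __xchg_u32(ptr, x);
    #ifdef CONFIG_PPC64
            case 8:
                    return __xchg_u64(ptr, x);
    #endif
            }
            /* reachable only for a bad size; if the compiler cannot prove
             * it dead, the unresolved symbol breaks the link */
            __xchg_called_with_bad_pointer();
            return x;
    }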
-
- 28 April 2008, 4 commits
-
-
By Gerald Schaefer
Huge ptes have a special type on s390 and cannot be handled with the standard pte functions in certain cases, e.g. because of a different location of the invalid bit. This patch adds some new architecture-specific functions to hugetlb common code, as a prerequisite for the s390 large page support.

This won't affect other architectures in functionality, but I need to add some new dummy inline functions to the headers.

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Gerald Schaefer
A cow break on a hugetlbfs page with page_count > 1 will set a new pte with set_huge_pte_at(), without any tlb flush operation. The old pte will remain in the tlb, and subsequent write access to the page will result in a page fault loop, for as long as it may take until the tlb is flushed from somewhere else. This patch introduces an architecture-specific huge_ptep_clear_flush() function, which is called before the set_huge_pte_at() in hugetlb_cow().

ATTENTION: This is just a nop on all architectures for now; the s390 implementation will come with our large page patch later. Other architectures should define their own huge_ptep_clear_flush() if needed.

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Gerald Schaefer
This patch moves all architecture functions for hugetlb to architecture header files (include/asm-foo/hugetlb.h) and converts all macros to inline functions. It also removes (!) ARCH_HAS_HUGEPAGE_ONLY_RANGE, ARCH_HAS_HUGETLB_FREE_PGD_RANGE, ARCH_HAS_PREPARE_HUGEPAGE_RANGE, ARCH_HAS_SETCLEAR_HUGE_PTE and ARCH_HAS_HUGETLB_PREFAULT_HOOK.

Getting rid of the ARCH_HAS_xxx #ifdef and macro fugliness should increase readability and maintainability, at the price of some code duplication. An asm-generic common part would have reduced the loc, but we would end up with new ARCH_HAS_xxx defines eventually.

Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
By Nick Piggin
s390, for one, cannot implement VM_MIXEDMAP with pfn_valid, due to their memory model (which is more dynamic than most). Instead, they had proposed to implement it with an additional path through vm_normal_page(), using a bit in the pte to determine whether or not the page should be refcounted:

    vm_normal_page()
    {
            ...
            if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
                    if (vma->vm_flags & VM_MIXEDMAP) {
    #ifdef s390
                            if (!mixedmap_refcount_pte(pte))
                                    return NULL;
    #else
                            if (!pfn_valid(pfn))
                                    return NULL;
    #endif
                            goto out;
                    }
            ...
            }

This is fine; however, if we are allowed to use a bit in the pte to determine refcountedness, we can use that to _completely_ replace all the vma based schemes. So instead of adding more cases to the already complex vma-based scheme, we can have a clearly separate and simple pte-based scheme (and get slightly better code generation in the process):

    vm_normal_page()
    {
    #ifdef s390
            if (!mixedmap_refcount_pte(pte))
                    return NULL;
            return pte_page(pte);
    #else
            ...
    #endif
    }

And finally, we may rather make this concept usable by any architecture rather than making it s390 only, so implement a new type of pte state for this. Unfortunately the old vma based code must stay, because some architectures may not be able to spare pte bits. This makes vm_normal_page a little bit more ugly than we would like, but the 2 cases are clearly separate.

So introduce a pte_special pte state, and use it in mm/memory.c. It is currently a noop for all architectures, so this doesn't actually result in any compiled code changes to mm/memory.o.

BTW: I haven't put vm_normal_page() into arch code as per an earlier suggestion. The reason is that, regardless of where vm_normal_page is actually implemented, the *abstraction* is still exactly the same. Also, while it depends on whether the architecture has pte_special or not, those are the only two possible cases, and it really isn't an arch-specific function -- the role of the arch code should be to provide primitive functions and accessors with which to build the core code; pte_special does that. We do not want architectures to know or care about vm_normal_page itself, and we definitely don't want them being able to invent something new there out of sight of mm/ code. If we made vm_normal_page an arch function, then we would have to make vm_insert_mixed (next patch) an arch function too. So I don't think moving it to arch code fundamentally improves any abstractions, while it does practically make the code more difficult to follow, for both mm and arch developers, and easier to misuse.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Carsten Otte <cotte@de.ibm.com>
Cc: Jared Hulbert <jaredeh@gmail.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 27 April 2008, 3 commits
-
-
By Hollis Blanchard
This functionality is definitely experimental, but is capable of running unmodified PowerPC 440 Linux kernels as guests on a PowerPC 440 host. (Only tested with 440EP "Bamboo" guests so far, but with appropriate userspace support other SoC/board combinations should work.)

See Documentation/powerpc/kvm_440.txt for technical details.

[stephen: build fix]
Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Hollis Blanchard
PowerPC 440 KVM needs to know how many TLB entries are used for the host kernel linear mapping (it does not modify these mappings when switching between guest and host execution).

Signed-off-by: Hollis Blanchard <hollisb@us.ibm.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Avi Kivity <avi@qumranet.com>
-
By Alexander van Heukelum
Implement __fls on all 64-bit archs:

- alpha has an implementation of fls64. Added __fls(x) = fls64(x) - 1.
- ia64 has fls, but not __fls. Added __fls based on the code of fls.
- mips and powerpc have __ilog2, which is the same as __fls. Added __fls = __ilog2.
- parisc, s390, sh and sparc64: include the generic __fls.
- x86_64 already has __fls.

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
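On alpha, for instance, the addition is a one-liner on top of the existing fls64 (sketch):

    /* __fls: bit index of the most-significant set bit; undefined for 0 */
    static inline unsigned long __fls(unsigned long x)
    {
            return fls64(x) - 1;
    }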
-
- 24 April 2008, 5 commits
-
-
By Ishizaki Kou
This splits the cell io-workaround code into spider-pci dependent code and a generic part, and also moves io-workarounds initialization into cell_setup_phb.

Signed-off-by: Kou Ishizaki <kou.ishizaki@toshiba.co.jp>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Tony Breeds
This adds the required functionality to fill in all pacas at runtime. With NR_CPUS=1024:

       text     data    bss      dec     hex  filename
        137  1704032      0  1704169  1a00e9  arch/powerpc/kernel/paca.o  (before)
        121  1179744 524288  1704153  1a00d9  arch/powerpc/kernel/paca.o  (after)

Also remove unneeded #includes from arch/powerpc/kernel/paca.c.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Kumar Gala
The fixmap code from x86 allows us to have compile-time virtual addresses whose physical addresses we change at run time. This is useful for applications like kmap_atomic, PCI config that is done via direct memory map, and kexec/kdump.

We got rid of CONFIG_HIGHMEM_START, as we can now determine a more optimal location for PKMAP_BASE based on where the fixmap addresses start and working back from there.

Additionally, the kmap code in asm-powerpc/highmem.h always had debug enabled. Moved to using CONFIG_DEBUG_HIGHMEM to determine if we should have the extra debug checking.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Kumar Gala
Added support to allow an 85xx kernel to be run from a non-zero physical address (useful for cooperative asymmetric multiprocessing situations and kdump). The support can be configured at compile time by setting CONFIG_PAGE_OFFSET, CONFIG_KERNEL_START, and CONFIG_PHYSICAL_START as desired.

Alternatively, the kernel build can set CONFIG_RELOCATABLE. Setting this config option causes the kernel to determine at runtime the physical addresses of CONFIG_PAGE_OFFSET and CONFIG_KERNEL_START. If CONFIG_RELOCATABLE is set, then CONFIG_PHYSICAL_START has no meaning. However, CONFIG_PHYSICAL_START will always be used to set the LOAD program header physical address field in the resulting ELF image.

Currently we are limited to running at a physical address that is a multiple of 256M. This is due to how we map TLBs to cover lowmem. This should be fixed to allow 64M or maybe even 16M alignment in the future. It is considered an error to try and run a kernel at a non-aligned physical address.

All the magic for this support is accomplished by proper initialization of the kernel memory subsystem and use of ARCH_PFN_OFFSET. The use of ARCH_PFN_OFFSET only affects normal memory and not IO mappings; ioremap uses map_page and isn't affected by ARCH_PFN_OFFSET. /dev/mem continues to allow access to any physical address in the system regardless of how CONFIG_PHYSICAL_START is set.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Benjamin Herrenschmidt
The powerpc kernel stacks need to be naturally aligned, as they contain the thread info at the bottom, which is obtained by clearing the low bits of the stack pointer. However, when using 64K pages, the stack is smaller than a page, so we use kmalloc to allocate it, which doesn't provide the alignment guarantee we need.

It appeared to work so far... until one enables SLUB debugging, which then returns unaligned pointers. Ooops...

This fixes it by using a slab cache with enforced alignment. It relies on my previous patch that adds a thread_info_cache_init() callback.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
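A cache whose alignment equals the object size restores the invariant that masking the low bits of the stack pointer finds the thread_info, e.g. (a sketch following the description above):

    static struct kmem_cache *thread_info_cache;

    void thread_info_cache_init(void)
    {
            /* align == size, so sp & ~(THREAD_SIZE - 1) hits the base */
            thread_info_cache = kmem_cache_create("thread_info", THREAD_SIZE,
                                                  THREAD_SIZE, 0, NULL);
            BUG_ON(thread_info_cache == NULL);
    }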
-
- 20 April 2008, 2 commits
-
-
By Paul Mackerras
The rearrangements in commit 945feb17 ("[POWERPC] irqtrace support for 64-bit powerpc") caused 64-bit non-SMP configs to fail to compile, with a message about local_irq_save being undefined in include/linux/proportions.h. This follows the lead of x86 in including <linux/irqflags.h> in asm/system.h, which fixes the problem.

Signed-off-by: Paul Mackerras <paulus@samba.org>
-
By Mike Travis
Create a simple macro to always return a pointer to the node_to_cpumask(node) value. This relies on compiler optimization to remove the extra indirection:

    #define node_to_cpumask_ptr(v, node)                            \
            cpumask_t _##v = node_to_cpumask(node), *v = &_##v

For those systems with a large cpumask size, a true pointer to the array element can be used instead:

    #define node_to_cpumask_ptr(v, node)                            \
            cpumask_t *v = &(node_to_cpumask_map[node])

A node_to_cpumask_ptr_next() macro is provided to access another node_to_cpumask value.

The other change is to always include asm-generic/topology.h, moving the ifdef CONFIG_NUMA to this same file.

Note: there are no references to either of these new macros in this patch, only the definition.

Based on 2.6.25-rc5-mm1

# alpha
Cc: Richard Henderson <rth@twiddle.net>
# fujitsu
Cc: David Howells <dhowells@redhat.com>
# ia64
Cc: Tony Luck <tony.luck@intel.com>
# powerpc
Cc: Paul Mackerras <paulus@samba.org>
Cc: Anton Blanchard <anton@samba.org>
# sparc
Cc: David S. Miller <davem@davemloft.net>
Cc: William L. Irwin <wli@holomorphy.com>
# x86
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 18 April 2008, 1 commit
-
-
By Paul Mackerras
64-bit powerpc processors can find the leftmost 1 bit in a 64-bit doubleword in one instruction, so use that rather than the generic fls64(), which does two 32-bit fls() calls.

Signed-off-by: Paul Mackerras <paulus@samba.org>
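On these processors the whole operation is a single count-leading-zeros instruction, e.g. (sketch):

    static inline int fls64(__u64 x)
    {
            int lz;

            asm("cntlzd %0,%1" : "=r" (lz) : "r" (x));
            return 64 - lz;     /* one instruction vs. two 32-bit fls() calls */
    }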
-