- 03 July 2008, 2 commits
-
-
Submitted by Michael Neuling

Currently we get this warning:

  arch/powerpc/kernel/init_task.c:33: warning: missing braces around initializer
  arch/powerpc/kernel/init_task.c:33: warning: (near initialization for 'init_task.thread.fpr[0]')

This fixes it. Noticed by Stephen Rothwell.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
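A minimal sketch of the kind of change this implies, assuming fpr became a two-dimensional array in the VSX rework (the exact initializer in the tree may differ): the nested array needs an inner set of braces.

  /* before: a flat initializer for what is now a 2-D array triggers
   * "missing braces around initializer" */
  .fpr = {0},

  /* after: brace the first inner array explicitly */
  .fpr = {{0}},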
-
Submitted by Tony Breeds

Currently the kernel fails to build with the above config options with:

  CC      arch/powerpc/mm/mem.o
  arch/powerpc/mm/mem.c: In function 'arch_add_memory':
  arch/powerpc/mm/mem.c:130: error: implicit declaration of function 'create_section_mapping'

This explicitly includes asm/sparsemem.h in arch/powerpc/mm/mem.c and moves the guards in include/asm-powerpc/sparsemem.h to protect only the SPARSEMEM-specific portions.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
- 01 July 2008, 20 commits
-
-
Submitted by Michael Neuling

This correctly hooks the VSX dump into Roland McGrath's core file infrastructure. It adds the VSX dump information as an additional ELF note in the core file (after talking more to the toolchain/gdb guys). This also ensures the formats are consistent between signals, ptrace and core files.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Eric B Munson

Currently, when a 32-bit process is exec'd on a 64-bit powerpc host, the value in the top three bytes of the personality is clobbered. This patch adds a check in the SET_PERSONALITY macro that carries all the values in the top three bytes across the exec. These three bytes currently carry flags to disable address randomisation, limit the address space, force zeroing of an mmapped page, etc. Should an application set any of these bits, they will be maintained and honoured in a homogeneous environment but discarded and ignored in a heterogeneous environment. So if an application requires all mmapped pages to be initialised to zero and a wrapper is used to set up the personality and exec the target, these flags will remain set in an all-32-bit or all-64-bit environment, but they will be lost in the exec on a mixed 32/64-bit environment. Losing these bits means that the same application would behave differently in different environments.

Tested on a POWER5+ machine with a 64-bit kernel and a mixed 64/32-bit user space.

Signed-off-by: Eric B Munson <ebmunson@us.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
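A minimal sketch of the idea, assuming the standard personality layout where the low byte selects the personality and the remaining bits are flags (the actual macro body in the kernel may differ):

  /* sketch: keep ADDR_NO_RANDOMIZE, MMAP_PAGE_ZERO, ADDR_LIMIT_* and
   * friends across exec of a 32-bit binary on a 64-bit kernel */
  #define SET_PERSONALITY(ex) \
  	set_personality(PER_LINUX32 | (current->personality & ~PER_MASK))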
-
Submitted by Bart Van Assche

When compiling kernel modules for ppc that include <linux/spinlock.h>, gcc prints a warning message every time it encounters a function declaration where the inline keyword appears after the return type. This makes sure that the order of the inline keyword and the return type is as gcc expects it. Additionally, the __inline__ keyword is replaced by inline, as checkpatch expects.

Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
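An illustrative before/after with a hypothetical function name, showing the declaration order gcc warns about and the corrected form:

  /* warns: inline keyword appears after the return type */
  static void __inline__ example_spin_hint(void);

  /* clean: inline (not __inline__) before the return type */
  static inline void example_spin_hint(void);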
-
Submitted by Andy Whitcroft

The implementation of huge_ptep_set_wrprotect() directly calls ptep_set_wrprotect() to mark a hugepte write protected. However this call is not appropriate on ppc64 kernels, as there it is a small-page-only implementation. This can lead to the hash not being flushed correctly when a mapping is being converted to COW, allowing processes to continue using the original copy.

Currently huge_ptep_set_wrprotect() unconditionally calls ptep_set_wrprotect(). This is fine on ppc32 kernels, as this call is generic. On 64-bit this is implemented as:

  pte_update(mm, addr, ptep, _PAGE_RW, 0);

On ppc64 this last parameter is the page size and is passed directly on to hpte_need_flush():

  hpte_need_flush(mm, addr, ptep, old, huge);

And this directly affects the page size we pass to flush_hash_page():

  flush_hash_page(vaddr, rpte, psize, ssize, 0);

As this changes the way the hash is calculated, we will flush the wrong pages, potentially leaving live hashes to the original page. Move the definition of huge_ptep_set_wrprotect() to the 32/64-bit specific headers.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling

This patch extends the floating point save and restore code to use the VSX load/stores when VSX is available. This will make FP context save/restore marginally slower on FP-only code, when VSX is available, as it has to load/store 128 bits rather than just 64 bits. Mixing FP, VMX and VSX code will get consistent architected state.

The signals interface is extended to enable access to VSR 0-31 doubleword 1 after discussions with toolchain maintainers. Backward compatibility is maintained.

The ptrace interface is also extended to allow access to the full VSR 0-31 registers.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling

This adds macros for the VSX load/store instructions, as most binutils versions are not going to support them for a while. It also adds VSX register save/restore macros and vsr[0-63] register definitions.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling

Add a VSX CPU feature. Also add code to detect whether VSX is available from the device tree.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling

The layout of the new VSR registers and how they overlap on top of the legacy FPR and VR registers is:

                    VSR doubleword 0               VSR doubleword 1
           ----------------------------------------------------------------
   VSR[0]  |             FPR[0]            |                               |
           ----------------------------------------------------------------
   VSR[1]  |             FPR[1]            |                               |
           ----------------------------------------------------------------
   ...     |              ...              |                               |
           ----------------------------------------------------------------
   VSR[30] |             FPR[30]           |                               |
           ----------------------------------------------------------------
   VSR[31] |             FPR[31]           |                               |
           ----------------------------------------------------------------
   VSR[32] |                             VR[0]                             |
           ----------------------------------------------------------------
   VSR[33] |                             VR[1]                             |
           ----------------------------------------------------------------
   ...     |                              ...                              |
           ----------------------------------------------------------------
   VSR[62] |                             VR[30]                            |
           ----------------------------------------------------------------
   VSR[63] |                             VR[31]                            |
           ----------------------------------------------------------------

VSX has 64 128-bit registers. The first 32 registers overlap with the FP registers and hence extend them with an additional 64 bits. The second 32 registers overlap with the VMX registers.

This commit introduces the thread_struct changes required to reflect this register layout. Ptrace and signals code is updated so that the floating point registers are correctly accessed from the thread_struct when CONFIG_VSX is enabled.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling

We are going to change where the floating point registers are stored in the thread_struct, so in preparation add some macros to access the floating point registers. Update all code to use these new macros.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
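A minimal sketch of what such an accessor can look like (the macro name and expansion are illustrative; the real ones in the tree may differ), so that callers stop hard-coding the fpr[] layout:

  /* sketch: hide the FP save-area layout behind a macro so it can later
   * grow an extra dimension for VSX without touching every caller */
  #define TS_FPR(i)  fpr[(i)]

  /* callers then write tsk->thread.TS_FPR(n) instead of tsk->thread.fpr[n] */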
-
Submitted by Michael Ellerman

The current feature section logic only supports nop'ing out code. This means that if you want to choose at runtime between instruction sequences, one or both cases will have to execute the nop'ed out contents of the other section, eg:

  BEGIN_FTR_SECTION
  	or	1,1,1
  END_FTR_SECTION_IFSET(FOO)
  BEGIN_FTR_SECTION
  	or	2,2,2
  END_FTR_SECTION_IFCLR(FOO)

and the resulting code will be either

  	or	1,1,1
  	nop

or

  	nop
  	or	2,2,2

For small code segments this is fine, but for larger code blocks and in performance critical code segments, it would be nice to avoid the nops. This commit starts to implement logic to allow the following:

  BEGIN_FTR_SECTION
  	or	1,1,1
  FTR_SECTION_ELSE
  	or	2,2,2
  ALT_FTR_SECTION_END_IFSET(FOO)

and the resulting code will be either

  	or	1,1,1

or

  	or	2,2,2

We achieve this by extending the existing FTR macros. The current feature section semantic just becomes a special case, i.e. if the else case is empty we nop out the default case.

The key limitation is that the size of the else case must be less than or equal to the size of the default case. If the else case is smaller, the remainder of the section is nop'ed. We let the linker put the else case code in with the rest of the text, so that relative branches from the else case are more likely to link; this has the disadvantage that we can't free the unused else cases.

This commit introduces the required macro and linker script changes, but does not enable the patching of the alternative sections. We also need to update two hand-made section entries in reg.h and timex.h.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman

Currently we have three versions of MAKE_FTR_SECTION_ENTRY(), the macro that generates a feature section entry: a 64-bit version, a 32-bit version, and a version for 32-bit code built with a 64-bit kernel. Rather than triplicating the MAKE_FTR_SECTION_ENTRY() logic, we can move the 64-bit/32-bit differences into separate macros and then have only one version of MAKE_FTR_SECTION_ENTRY().

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman

The CPU and firmware feature fixup macros are currently spread across three files: firmware.h, cputable.h and asm-compat.h. Consolidate them into their own file, feature-fixups.h.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman

A bunch of code has hard-coded the value for a "nop" instruction; it would be nice to have a #define for it.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
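For reference, a sketch of such a define (the exact name chosen in the tree may differ); on powerpc, nop is the encoding of ori r0,r0,0:

  /* sketch: "ori 0,0,0" is the architected powerpc no-op, 0x60000000 */
  #define PPC_NOP_INSTR  0x60000000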
-
Submitted by Michael Ellerman

This commit adds some new routines for patching code, which will be used in a following commit.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman

Because function pointers point to different things on 32-bit vs 64-bit, add a macro that deals with dereferencing the OPD on 64-bit. The soon-to-be-merged ftrace wants this, as well as other code I am working on.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
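A minimal sketch of the idea (helper name is illustrative, not necessarily what the commit adds): on 64-bit, a C function pointer refers to a function descriptor (OPD) whose first word is the real entry point, so obtaining the text address takes one extra dereference.

  #ifdef CONFIG_PPC64
  static inline unsigned long example_function_entry(void *func)
  {
  	/* the first slot of the OPD entry holds the function's text address */
  	return ((unsigned long *)func)[0];
  }
  #else
  static inline unsigned long example_function_entry(void *func)
  {
  	/* on 32-bit a function pointer already points at the code */
  	return (unsigned long)func;
  }
  #endif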
-
Submitted by Michael Ellerman

Currently create_branch() creates a branch instruction for you and patches it into the call site. In some circumstances it would be nice to be able to create the instruction and patch it later, and also some code might want to check for errors in the branch creation before doing the patching. A future commit will change create_branch() to check for errors.

For callers that don't care, replace create_branch() with patch_branch(), which just creates the branch and patches it directly.

While we're touching all the callers, change to using unsigned int *, as this seems to match usage better. That allows (and requires) us to remove the volatile in the definition of vector in powermac/smp.c and mpc86xx_smp.c; that's correct because now that we're passing vector as an unsigned int *, the compiler knows that its value might change across the patch_branch() call.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Acked-by: Jon Loeliger <jdl@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
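A rough sketch of how a caller might look after the split (the exact signature and flags argument are assumptions based on the description above):

  /* sketch: install a relative branch to "target" at the fixup site */
  static void example_install_branch(unsigned int *site, unsigned long target)
  {
  	/* create_branch() only builds the instruction; patch_branch()
  	 * builds it and writes it into *site in one step */
  	patch_branch(site, target, 0);
  }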
-
Submitted by Michael Ellerman

We currently have a few routines for patching code in asm/system.h, because they didn't fit anywhere else. I'd like to clean them up a little and add some more, so first move them into a dedicated C file; they don't need to be inlined.

While we're moving the code, drop create_function_call(); its intended caller never got merged, and it will be replaced in future with something different.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Adrian Bunk

This makes asm/elf.h export less non-userspace stuff to userspace.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Adrian Bunk

asm/asm-compat.h doesn't seem to be intended for userspace usage.

Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Paul Mackerras

At present, if we have a kernel with a 64kB page size and some process maps something that has to be mapped with 4kB pages (such as a cache-inhibited mapping on POWER5+, or the eHCA InfiniBand queue-pair pages), we change the process to use 4kB pages everywhere. This hurts the performance of HPC programs that access eHCA from userspace.

With this patch, the kernel will only demote the slice(s) containing the eHCA or cache-inhibited mappings, leaving the remaining slices able to use 64kB hardware pages.

This also changes the slice_get_unmapped_area code so that it is willing to place a 64k-page mapping into (or across) a 4k-page slice if there is no better alternative, i.e. if the program specified MAP_FIXED or if there is not sufficient space available in slices that are either empty or already have 64k-page mappings in them.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 June 2008, 9 commits
-
-
Submitted by Michael Neuling

Add a cputable entry for the POWER7 processor. Also tell firmware that we know about POWER7.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Joel Schopp <jschopp@austin.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Scott Wood

This was pointed out by Detlev Zundel when this code was being added to U-Boot.

Signed-off-by: Scott Wood <scottwood@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Becky Bruce

While working on the 36-bit physical support, I noticed that there was exactly one line of code that actually referenced the bitfields. So I got rid of them and redefined ppc_bat as a struct of two u32s: batu and batl. I also got rid of the previous union that held the bitfield structs and a word representation of the batu/batl values.

This seems like a nicer solution than adding a bunch of new bitfields to support extended BAT addressing that would never get used, and just leaving the struct as-is would have been incomplete in the face of large physical addressing.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
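A sketch of the resulting shape of the type, with field names taken from the description above (the comments summarise the classic 32-bit BAT register split and are informational only):

  /* sketch: a BAT pair kept as raw register words instead of bitfields */
  struct ppc_bat {
  	u32 batu;	/* upper BAT: effective address, block length, Vs/Vp */
  	u32 batl;	/* lower BAT: physical block number, WIMG, PP */
  };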
-
Submitted by Becky Bruce

Currently the physical address is an unsigned long, but it should be phys_addr_t in set_bat and [v/p]_mapped_by_bat. Also, create a macro that can convert a large physical address into the correct format for programming the BAT registers.

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Becky Bruce

Signed-off-by: Becky Bruce <becky.bruce@freescale.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Arnd Bergmann

When kexec is disabled, the crash_shutdown_{un,}register functions are not available in the kernel. This provides dummy inline functions for those, so that the callers don't have to worry about it.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
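A minimal sketch of the pattern, assuming the usual crash_shutdown_t handler type from asm/kexec.h (the exact prototypes in the tree may differ): with CONFIG_KEXEC off, the registration calls collapse into no-op inlines, so callers need no #ifdefs.

  #ifdef CONFIG_KEXEC
  extern int crash_shutdown_register(crash_shutdown_t handler);
  extern int crash_shutdown_unregister(crash_shutdown_t handler);
  #else
  static inline int crash_shutdown_register(crash_shutdown_t handler)
  {
  	return 0;
  }
  static inline int crash_shutdown_unregister(crash_shutdown_t handler)
  {
  	return 0;
  }
  #endif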
-
Submitted by Benjamin Herrenschmidt

This frees a PTE bit when using 64K pages on ppc64. This is done by getting rid of the separate _PAGE_HASHPTE bit. Instead, we just test whether any of the 16 sub-page bits is set. For non-combo pages (i.e. real 64K pages), we set SUB0 and the location encoding in that field.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Anton Vorontsov

To avoid "#ifdef CONFIG_PCI" in the drivers, we should provide stubs in place of the OF PCI address accessors. Without these stubs the build breaks for drivers that don't strictly require PCI, for example CONFIG_FB_OF=y without CONFIG_PCI:

  LD      .tmp_vmlinux1
  drivers/built-in.o: In function `offb_map_reg':
  offb.c:(.text+0x6e7c): undefined reference to `of_get_pci_address'

The OF PCI IRQ accessors require a pci_dev argument, so drivers using PCI IRQs should depend on CONFIG_PCI anyway.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
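A sketch of the stub pattern described; the of_get_pci_address() signature below is approximated from its typical use, so treat it as an assumption rather than the exact prototype:

  #ifdef CONFIG_PCI
  extern const u32 *of_get_pci_address(struct device_node *dev, int bar_no,
  				      u64 *size, unsigned int *flags);
  #else
  static inline const u32 *of_get_pci_address(struct device_node *dev,
  					    int bar_no, u64 *size,
  					    unsigned int *flags)
  {
  	/* no PCI support configured: report "no address found" */
  	return NULL;
  }
  #endif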
-
Submitted by Nick Piggin

For 64-bit processors, lwsync is the recommended method of store/store ordering on caching-enabled memory. For those subarchs which have lwsync, use it rather than eieio for smp_wmb.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
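A sketch of the idea; which config symbol gates "subarch has lwsync" is an assumption here (CONFIG_PPC64 is used purely for illustration):

  /* sketch: pick the store/store barrier instruction for smp_wmb() */
  #ifdef CONFIG_PPC64		/* assumed stand-in for "subarch has lwsync" */
  #define EXAMPLE_SMP_WMB_INSN	"lwsync"
  #else
  #define EXAMPLE_SMP_WMB_INSN	"eieio"
  #endif
  #define smp_wmb()	__asm__ __volatile__ (EXAMPLE_SMP_WMB_INSN : : : "memory")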
-
- 28 June 2008, 1 commit
-
-
Submitted by David Woodhouse

We need to check for the existence of the a.out.h header in the source tree, not the object tree, if we want to get the right answer with O=.

Signed-off-by: David Woodhouse <david.woodhouse@intel.com>
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
- 26 June 2008, 4 commits
-
-
Submitted by Anton Vorontsov

It was discussed that a global arch_initcall() is the preferred way to probe QE GPIOs, so let's use it.

Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala

Now that arch/ppc is gone, we always define CONFIG_PPC_CPM_NEW_BINDING, so we can remove all the code associated with !CONFIG_PPC_CPM_NEW_BINDING.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala

If we have an L2CSR register (e500mc), we need to flush the L2 before going to nap. We use the hardware flush mechanism provided by that register. The code reuses the CPU_FTR_604_PERF_MON bit, as it is no longer used by any code in the kernel. Additionally, we didn't reuse the existing L2CR feature bit, as that is intended for the 7xxx L2CR register, while L2CSR is part of the new Freescale "Book-E" registers.

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Kumar Gala

The e500 core enters the DOZE/NAP power-saving modes when it goes into the cpu_idle routine. The default power-management running mode is DOZE; if the user runs

  echo 1 > /proc/sys/kernel/powersave-nap

the system will change to the NAP running mode.

Signed-off-by: Dave Liu <daveliu@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 19 June 2008, 1 commit
-
-
Submitted by Kumar Gala

The new e500mc core from Freescale is based on the e500v2, but with the following changes:

  * Supports only the Enhanced Debug Architecture (DSRR0/1, etc.)
  * Floating point
  * No SPE
  * Supports lwsync
  * Doorbell exceptions
  * Hypervisor
  * Cache line size is now 64 bytes (e500v1/v2 have a 32-byte cache line)

Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 18 June 2008, 1 commit
-
-
Submitted by Josh Boyer

The 440EPx/GRx chips don't support PCI MRM (Memory Read Multiple) commands. Drivers determine this by looking for a zero value in the PCI cache line size register. However, some drivers write to this register upon initialization. This can cause MRMs to be used on these chips, which may cause deadlocks on PLB4.

The workaround implemented here introduces a new indirect_type flag, called PPC_INDIRECT_TYPE_BROKEN_MRM. This is set in the pci_controller structure in the PCI fixup function for 4xx PCI bridges by determining whether the bridge is compatible with 440EPx/GRx. The flag is checked in the indirect_write_config function and forces any writes to the PCI_CACHE_LINE_SIZE register to be zero, which disables MRMs for these chips.

A similar workaround has been tested by AMCC on various PCI cards, such as the Silicon Image ATA card and the Intel E1000 GigE card. Hangs were seen with the Silicon Image card, and MRMs were seen on the bus with a PCI analyzer. With the workaround in place, the card functioned properly and only memory reads were seen on the bus with the analyzer.

Acked-by: Stefan Roese <sr@denx.de>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
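A sketch of the check described, as it might sit inside indirect_write_config(); the surrounding variable names (hose, offset, val) are assumptions:

  /* sketch: on bridges flagged as MRM-broken, force the cache line size
   * to stay zero, since a non-zero value is what enables Memory Read
   * Multiple on the bus */
  if ((hose->indirect_type & PPC_INDIRECT_TYPE_BROKEN_MRM) &&
      offset == PCI_CACHE_LINE_SIZE)
  	val = 0;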
-
- 16 June 2008, 2 commits
-
-
Submitted by Jerone Young

This takes values from the PowerPC ISA Book III-E specification that are for DBCR0. Many of these values are different from those currently specified, which are for the ppc405. Also added are some Book-E definitions for DBCR1 & DBCR2.

[galak@kernel.crashing.org: Added aliases to 40x DBCR0 to match Book-E, added enhanced debug DBCR0/DBSR _CIRPT and _CRET defines and DBSR IRPT and RET.]

Signed-off-by: Jerone Young <jyoung5@us.ibm.com>
Acked-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Adrian Bunk

This fixes the following build error with CONFIG_BLK_DEV_IDE_PMAC=n:

  <-- snip -->
  ...
    CC      drivers/macintosh/mediabay.o
  /home/bunk/linux/kernel-2.6/git/linux-2.6/drivers/macintosh/mediabay.c: In function 'check_media_bay':
  /home/bunk/linux/kernel-2.6/git/linux-2.6/drivers/macintosh/mediabay.c:428: error: 'struct media_bay_info' has no member named 'cd_index'
  make[3]: *** [drivers/macintosh/mediabay.o] Error 1
  <-- snip -->

Reported-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Adrian Bunk <bunk@kernel.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
-