- 09 July 2008, 6 commits
-
-
Submitted by Dave Kleikamp
This patch defines:
- PROT_SAO, which is passed into mmap() and mprotect() in the prot field,
- VM_SAO in vma->vm_flags, and
- _PAGE_SAO, the combination of WIMG bits in the pte that enables strong access ordering for the page.
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
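A minimal userspace sketch of how such a flag is requested (not part of the patch itself); PROT_SAO comes from the powerpc asm/mman.h, and the fallback value below is only an assumption for illustration:

/*
 * Request strong access ordering for an anonymous mapping on powerpc.
 * Older kernels or non-SAO CPUs will reject the PROT_SAO bit.
 */
#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_SAO
#define PROT_SAO 0x10	/* assumed powerpc value, illustration only */
#endif

int main(void)
{
	size_t len = 4096;
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_SAO,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* ... use the strongly ordered mapping ... */
	munmap(p, len);
	return 0;
}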
-
Submitted by Srinivasa Ds
The task_pt_regs() macro allows access to the pt_regs of a given task. This macro is not currently defined for the powerpc architecture, but we need it for some upcoming utrace additions. Signed-off-by: Srinivasa DS <srinivasa@in.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
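A sketch of a typical in-kernel use of task_pt_regs(), reading another task's user-mode register state; this is illustrative only (nip is the powerpc program counter field in pt_regs):

#include <linux/sched.h>
#include <asm/ptrace.h>

/* Return the user-mode PC of a task, or 0 for kernel threads. */
static unsigned long task_user_pc(struct task_struct *tsk)
{
	struct pt_regs *regs = task_pt_regs(tsk);

	return regs ? regs->nip : 0;
}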
-
Submitted by Mark Nelson
Move device_to_mask() to dma-mapping.h because we need to use it from outside dma_64.c in a later patch. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Mark Nelson
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so update struct dma_mapping_ops to accept a struct dma_attrs and propagate these changes through to all users of the code (generic IOMMU and the 64bit DMA code, and the iseries and ps3 platform code). The old dma_*map_*() interfaces are reimplemented as calls to the corresponding new interfaces. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
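A hedged sketch of how a driver would use the *_attrs DMA API of that era (struct dma_attrs was later replaced by a plain unsigned long); the attribute chosen here is just an example:

#include <linux/dma-mapping.h>
#include <linux/dma-attrs.h>

/* Map a buffer for device reads, passing an extra mapping attribute. */
static dma_addr_t map_buffer_with_attrs(struct device *dev, void *buf,
					 size_t len)
{
	struct dma_attrs attrs;

	init_dma_attrs(&attrs);
	dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);	/* example attribute */

	return dma_map_single_attrs(dev, buf, len, DMA_TO_DEVICE, &attrs);
}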
-
Submitted by Mark Nelson
Make iommu_map_sg take a struct iommu_table. It did so before commit 740c3ce6 (iommu sg merging: ppc: make iommu respect the segment size limits). This stops the function from looking in archdata.dma_data for the iommu table, because in the future it will be called with a device that has no table there. This also has the nice side effect of making iommu_map_sg() match the other map functions. Signed-off-by: Mark Nelson <markn@au1.ibm.com> Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Maxim Shchetynin
As the nr_active counter also includes spus waiting for syscalls to return, we need a separate counter that only counts spus that are currently running on the spu side. This counter shall be used by a cpufreq governor that targets a frequency dependent on the number of running spus. Signed-off-by: Christian Krafft <krafft@de.ibm.com> Acked-by: Jeremy Kerr <jk@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 03 July 2008, 5 commits
-
-
Submitted by Nathan Fontenot
This updates the device tree manipulation routines so that memory add/remove of lmbs represented under the ibm,dynamic-reconfiguration-memory node of the device tree invokes the hotplug notifier chain. This change is needed because of the change in the way memory is represented under the ibm,dynamic-reconfiguration-memory node. All lmbs are described in the ibm,dynamic-memory property instead of having a separate node for each lmb as in previous device tree layouts. This requires the update_node() routine to check for updates to the ibm,dynamic-memory property and invoke the hotplug notifier chain. This also updates the pseries hotplug notifier to be able to gather information for lmbs represented under the ibm,dynamic-reconfiguration-memory node and have the lmbs added/removed. Signed-off-by: Nathan Fontenot <nfont@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
Since Roland's ptrace cleanup starting with commit f65255e8 ("[POWERPC] Use user_regset accessors for FP regs"), the dump_task_* functions are no longer being used. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Kumar Gala
To allow for a single kernel image on e500 v1/v2/mc we need to fixup lwsync at runtime. On e500v1/v2 lwsync causes an illop so we need to patch up the code. We default to 'sync' since that is always safe and if the cpu is capable we will replace 'sync' with 'lwsync'. We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is needed. This flag could be moved elsewhere since we don't really use it for the normal CPU_FTR purpose. Finally we only store the relative offset in the fixup section to keep it as small as possible rather than using a full fixup_entry. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
Currently we get this warning:
  arch/powerpc/kernel/init_task.c:33: warning: missing braces around initializer
  arch/powerpc/kernel/init_task.c:33: warning: (near initialization for 'init_task.thread.fpr[0]')
This fixes it. Noticed by Stephen Rothwell. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
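A standalone illustration of this class of warning, using hypothetical types rather than the kernel's: when a member is itself an array of aggregates, the initializer needs an extra brace level.

struct fp_state {
	double fpr[32][1];	/* each fpr[i] is itself an aggregate */
};

/* warns with -Wmissing-braces / -Wextra: */
/* static struct fp_state bad = { {0} }; */

/* fixed, fully braced: */
static struct fp_state good = { .fpr = { { 0.0 } } };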
-
Submitted by Tony Breeds
Currently the kernel fails to build with the above config options with:
  CC      arch/powerpc/mm/mem.o
  arch/powerpc/mm/mem.c: In function 'arch_add_memory':
  arch/powerpc/mm/mem.c:130: error: implicit declaration of function 'create_section_mapping'
This explicitly includes asm/sparsemem.h in arch/powerpc/mm/mem.c and moves the guards in include/asm-powerpc/sparsemem.h to protect the SPARSEMEM-specific portions only. Signed-off-by: Tony Breeds <tony@bakeyournoodle.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
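A rough sketch of the header layout the fix describes, with placeholder values: only the SPARSEMEM constants stay under CONFIG_SPARSEMEM, while the hotplug prototype (create_section_mapping, named in the error above) remains visible to mem.c.

#ifndef _ASM_POWERPC_SPARSEMEM_H
#define _ASM_POWERPC_SPARSEMEM_H

#ifdef CONFIG_SPARSEMEM
#define SECTION_SIZE_BITS	24	/* placeholder value */
#define MAX_PHYSADDR_BITS	44	/* placeholder value */
#define MAX_PHYSMEM_BITS	44	/* placeholder value */
#endif /* CONFIG_SPARSEMEM */

#ifdef CONFIG_MEMORY_HOTPLUG
extern int create_section_mapping(unsigned long start, unsigned long end);
#endif

#endif /* _ASM_POWERPC_SPARSEMEM_H */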
-
- 01 July 2008, 20 commits
-
-
Submitted by Michael Neuling
This correctly hooks the VSX dump into Roland McGrath's core file infrastructure. It adds the VSX dump information as an additional elf note in the core file (after talking more to the tool chain/gdb guys). This also ensures the formats are consistent between signals, ptrace and core files. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Eric B Munson
Currently when a 32 bit process is exec'd on a powerpc 64 bit host the value in the top three bytes of the personality is clobbered. This patch adds a check in the SET_PERSONALITY macro that will carry all the values in the top three bytes across the exec. These three bytes currently carry flags to disable address randomisation, limit the address space, force zeroing of an mmapped page, etc. Should an application set any of these bits they will be maintained and honoured in a homogeneous environment but discarded and ignored in a heterogeneous environment. So if an application requires all mmapped pages to be initialised to zero and a wrapper is used to setup the personality and exec the target, these flags will remain set on an all 32 or all 64 bit environment, but they will be lost in the exec on a mixed 32/64 bit environment. Losing these bits means that the same application would behave differently in different environments. Tested on a POWER5+ machine with 64bit kernel and a mixed 64/32 bit user space. Signed-off-by: Eric B Munson <ebmunson@us.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
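An illustrative sketch of the idea (not the exact macro from the patch): keep the upper personality flag bytes across exec and force only the low PER_MASK byte to the new base personality.

#include <linux/sched.h>
#include <linux/personality.h>

static inline void set_personality_keep_flags(unsigned int base)
{
	/* preserve ADDR_NO_RANDOMIZE, ADDR_LIMIT_*, MMAP_PAGE_ZERO, ... */
	set_personality(base | (current->personality & ~PER_MASK));
}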
-
Submitted by Bart Van Assche
When compiling kernel modules for ppc that include <linux/spinlock.h>, gcc prints a warning message every time it encounters a function declaration where the inline keyword appears after the return type. This makes sure that the order of the inline keyword and the return type is as gcc expects it. Additionally, the __inline__ keyword is replaced by inline, as checkpatch expects. Signed-off-by: Bart Van Assche <bart.vanassche@gmail.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
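A hypothetical example of the reordering described above:

/* gcc (with -Wextra) warns that 'inline' is not at the beginning of the declaration */
static void inline old_style_helper(void) { }

/* preferred: storage class, then inline, then the return type */
static inline void new_style_helper(void) { }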
-
Submitted by Andy Whitcroft
The implementation of huge_ptep_set_wrprotect() directly calls ptep_set_wrprotect() to mark a hugepte write protected. However this call is not appropriate on ppc64 kernels as this is a small page only implementation. This can lead to the hash not being flushed correctly when a mapping is being converted to COW, allowing processes to continue using the original copy. Currently huge_ptep_set_wrprotect() unconditionally calls ptep_set_wrprotect(). This is fine on ppc32 kernels as this call is generic. On 64 bit this is implemented as:
  pte_update(mm, addr, ptep, _PAGE_RW, 0);
On ppc64 this last parameter is the page size and is passed directly on to hpte_need_flush():
  hpte_need_flush(mm, addr, ptep, old, huge);
And this directly affects the page size we pass to flush_hash_page():
  flush_hash_page(vaddr, rpte, psize, ssize, 0);
As this changes the way the hash is calculated we will flush the wrong pages, potentially leaving live hashes to the original page. Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific headers. Signed-off-by: Andy Whitcroft <apw@shadowen.org> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
This patch extends the floating point save and restore code to use the VSX load/stores when VSX is available. This will make FP context save/restore marginally slower on FP only code, when VSX is available, as it has to load/store 128bits rather than just 64bits. Mixing FP, VMX and VSX code will get constant architected state. The signals interface is extended to enable access to VSR 0-31 doubleword 1 after discussions with tool chain maintainers. Backward compatibility is maintained. The ptrace interface is also extended to allow access to VSR 0-31 full registers. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
This adds the macros for the VSX load/store instructions as most binutils are not going to support this for a while. Also add VSX register save/restore macros and vsr[0-63] register definitions. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
Add a VSX CPU feature. Also add code to detect if VSX is available from the device tree. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Joel Schopp <jschopp@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
The layout of the new VSR registers and how they overlap on top of the legacy FPR and VR registers is:

             VSR doubleword 0         VSR doubleword 1
          ----------------------------------------------------
 VSR[0]   |        FPR[0]           |                         |
          ----------------------------------------------------
 VSR[1]   |        FPR[1]           |                         |
          ----------------------------------------------------
          |         ...             |                         |
          ----------------------------------------------------
 VSR[30]  |        FPR[30]          |                         |
          ----------------------------------------------------
 VSR[31]  |        FPR[31]          |                         |
          ----------------------------------------------------
 VSR[32]  |                      VR[0]                        |
          ----------------------------------------------------
 VSR[33]  |                      VR[1]                        |
          ----------------------------------------------------
          |                       ...                         |
          ----------------------------------------------------
 VSR[62]  |                      VR[30]                       |
          ----------------------------------------------------
 VSR[63]  |                      VR[31]                       |
          ----------------------------------------------------

VSX has 64 128bit registers. The first 32 regs overlap with the FP registers and hence extend them with an additional 64 bits. The second 32 regs overlap with the VMX registers. This commit introduces the thread_struct changes required to reflect this register layout. Ptrace and signals code is updated so that the floating point registers are correctly accessed from the thread_struct when CONFIG_VSX is enabled. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Neuling
We are going to change where the floating point registers are stored in the thread_struct, so in preparation add some macros to access the floating point registers. Update all code to use these new macros. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
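A hedged sketch of the idea: call sites stop poking thread.fpr[i] directly and go through an accessor, so the underlying layout can change (as it does when VSX widens the save area). The names here are illustrative, not the exact macros from the patch.

#ifdef CONFIG_VSX
/* FPRs live in doubleword 0 of the wider VSX save area */
#define THREAD_FPR(t, i)	((t)->fpr[i][0])
#else
#define THREAD_FPR(t, i)	((t)->fpr[i])
#endif

/* old: val = tsk->thread.fpr[i];           */
/* new: val = THREAD_FPR(&tsk->thread, i);  */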
-
Submitted by Michael Ellerman
The current feature section logic only supports nop'ing out code, this means if you want to choose at runtime between instruction sequences, one or both cases will have to execute the nop'ed out contents of the other section, eg:
  BEGIN_FTR_SECTION
          or      1,1,1
  END_FTR_SECTION_IFSET(FOO)
  BEGIN_FTR_SECTION
          or      2,2,2
  END_FTR_SECTION_IFCLR(FOO)
and the resulting code will be either,
          or      1,1,1
          nop
or,
          nop
          or      2,2,2
For small code segments this is fine, but for larger code blocks and in performance-critical code segments, it would be nice to avoid the nops. This commit starts to implement logic to allow the following:
  BEGIN_FTR_SECTION
          or      1,1,1
  FTR_SECTION_ELSE
          or      2,2,2
  ALT_FTR_SECTION_END_IFSET(FOO)
and the resulting code will be:
          or      1,1,1
or,
          or      2,2,2
We achieve this by extending the existing FTR macros. The current feature section semantic just becomes a special case, ie. if the else case is empty we nop out the default case. The key limitation is that the size of the else case must be less than or equal to the size of the default case. If the else case is smaller the remainder of the section is nop'ed. We let the linker put the else case code in with the rest of the text, so that relative branches from the else case are more likely to link; this has the disadvantage that we can't free the unused else cases. This commit introduces the required macro and linker script changes, but does not enable the patching of the alternative sections. We also need to update two hand-made section entries in reg.h and timex.h Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman
Currently we have three versions of MAKE_FTR_SECTION_ENTRY(), the macro that generates a feature section entry. There is a 64bit version, a 32bit version and a version for 32bit code built with a 64bit kernel. Rather than triplicating (?) the MAKE_FTR_SECTION_ENTRY() logic, we can move the 64bit/32bit differences into separate macros, and then only have one version of MAKE_FTR_SECTION_ENTRY(). Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman
The CPU and firmware feature fixup macros are currently spread across three files, firmware.h, cputable.h and asm-compat.h. Consolidate them into their own file, feature-fixups.h Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman
A bunch of code has hard-coded the value for a "nop" instruction, it would be nice to have a #define for it. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
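For context, the powerpc nop is "ori r0,r0,0", which encodes to 0x60000000; the define (the series calls it PPC_NOP_INSTR) simply replaces that magic number at the patching sites. The helper below is only an illustrative sketch.

#define PPC_NOP_INSTR	0x60000000

static inline void nop_out(unsigned int *insn)
{
	*insn = PPC_NOP_INSTR;	/* caller handles icache flushing */
}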
-
Submitted by Michael Ellerman
This commit adds some new routines for patching code, which will be used in a following commit. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman
Because function pointers point to different things on 32-bit vs 64-bit, add a macro that deals with dereferencing the OPD on 64-bit. The soon-to-be-merged ftrace wants this, as well as other code I am working on. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Michael Ellerman
Currently create_branch() creates a branch instruction for you, and patches it into the call site. In some circumstances it would be nice to be able to create the instruction and patch it later, and also some code might want to check for errors in the branch creation before doing the patching. A future commit will change create_branch() to check for errors. For callers that don't care, replace create_branch() with patch_branch(), which just creates the branch and patches it directly. While we're touching all the callers, change to using unsigned int *, as this seems to match usage better. That allows (and requires) us to remove the volatile in the definition of vector in powermac/smp.c and mpc86xx_smp.c; that's correct because now that we're passing vector as an unsigned int * the compiler knows that its value might change across the patch_branch() call. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Acked-by: Jon Loeliger <jdl@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
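A hedged usage sketch of the two styles described above; the signatures approximate what the series merges (BRANCH_SET_LINK requests a branch-and-link) and should be treated as assumptions.

#include <asm/code-patching.h>

/* One-shot: build the branch and write it straight into the call site. */
void wire_up_trampoline(unsigned int *site, unsigned long target)
{
	patch_branch(site, target, BRANCH_SET_LINK);
}

/* Two-step: build first, so the caller can reject out-of-range targets. */
void wire_up_checked(unsigned int *site, unsigned long target)
{
	unsigned int insn = create_branch(site, target, 0);

	if (insn)
		patch_instruction(site, insn);
}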
-
Submitted by Michael Ellerman
We currently have a few routines for patching code in asm/system.h, because they didn't fit anywhere else. I'd like to clean them up a little and add some more, so first move them into a dedicated C file - they don't need to be inlined. While we're moving the code, drop create_function_call(); its intended caller never got merged and will be replaced in future with something different. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Acked-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Adrian Bunk
This makes asm/elf.h export less non-userspace stuff to userspace. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Adrian Bunk
asm/asm-compat.h doesn't seem to be intended for userspace usage. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Paul Mackerras
At present, if we have a kernel with a 64kB page size, and some process maps something that has to be mapped with 4kB pages (such as a cache-inhibited mapping on POWER5+, or the eHCA infiniband queue-pair pages), we change the process to use 4kB pages everywhere. This hurts the performance of HPC programs that access eHCA from userspace. With this patch, the kernel will only demote the slice(s) containing the eHCA or cache-inhibited mappings, leaving the remaining slices able to use 64kB hardware pages. This also changes the slice_get_unmapped_area code so that it is willing to place a 64k-page mapping into (or across) a 4k-page slice if there is no better alternative, i.e. if the program specified MAP_FIXED or if there is not sufficient space available in slices that are either empty or already have 64k-page mappings in them. Signed-off-by: Paul Mackerras <paulus@samba.org> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 30 June 2008, 9 commits
-
-
Submitted by Michael Neuling
Add a cputable entry for the POWER7 processor. Also tell firmware that we know about POWER7. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Joel Schopp <jschopp@austin.ibm.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Scott Wood
This was pointed out by Detlev Zundel when this code was being added to U-boot. Signed-off-by: Scott Wood <scottwood@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Becky Bruce
While working on the 36-bit physical support, I noticed that there was exactly one line of code that actually referenced the bitfields. So I got rid of them and redefined ppc_bat as a struct of 2 u32's: batu and batl. I also got rid of the previous union that held the bitfield structs and a word representation of the batu/l values. This seems like a nicer solution than adding in a bunch of new bitfields to support extended bat addressing that would never get used, and just leaving the struct as-is would have been incomplete in the face of large physical addressing. Signed-off-by: Becky Bruce <becky.bruce@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
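The resulting shape of the structure as described above, sketched with assumed field comments:

#include <linux/types.h>

struct ppc_bat {
	u32 batu;	/* upper BAT: effective address, block length, Vs/Vp */
	u32 batl;	/* lower BAT: physical address, WIMG, protection */
};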
-
Submitted by Becky Bruce
Currently, the physical address is an unsigned long, but it should be phys_addr_t in set_bat, [v/p]_mapped_by_bat. Also, create a macro that can convert a large physical address into the correct format for programming the BAT registers. Signed-off-by: Becky Bruce <becky.bruce@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Becky Bruce
Signed-off-by: Becky Bruce <becky.bruce@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Arnd Bergmann
When kexec is disabled, the crash_shutdown_{un,}register functions are not available in the kernel. This provides dummy inline functions for those so that the callers don't have to worry about it. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
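The pattern the commit describes, sketched with assumed prototypes: callers can register a crash-shutdown hook unconditionally, and the stubs quietly succeed when kexec is compiled out.

typedef void (*crash_shutdown_t)(void);

#ifdef CONFIG_KEXEC
extern int crash_shutdown_register(crash_shutdown_t handler);
extern int crash_shutdown_unregister(crash_shutdown_t handler);
#else
static inline int crash_shutdown_register(crash_shutdown_t handler)
{
	return 0;
}
static inline int crash_shutdown_unregister(crash_shutdown_t handler)
{
	return 0;
}
#endif /* CONFIG_KEXEC */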
-
Submitted by Benjamin Herrenschmidt
This frees a PTE bit when using 64K pages on ppc64. This is done by getting rid of the separate _PAGE_HASHPTE bit. Instead, we just test if any of the 16 sub-page bits is set. For non-combo pages (ie. real 64K pages), we set SUB0 and the location encoding in that field. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Anton Vorontsov
To avoid "#ifdef CONFIG_PCI" in the drivers we should provide stubs in place of the OF PCI address accessors. Without these stubs the build breaks for drivers not strictly requiring PCI, for example CONFIG_FB_OF=y without CONFIG_PCI:
  LD      .tmp_vmlinux1
  drivers/built-in.o: In function `offb_map_reg':
  offb.c:(.text+0x6e7c): undefined reference to `of_get_pci_address'
OF PCI IRQ accessors require a pci_dev argument, so drivers using PCI IRQs should depend on CONFIG_PCI anyway. Signed-off-by: Anton Vorontsov <avorontsov@ru.mvista.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
-
Submitted by Nick Piggin
For 64-bit processors, lwsync is the recommended method of store/store ordering on caching enabled memory. For those subarchs which have lwsync, use it rather than eieio for smp_wmb. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Paul Mackerras <paulus@samba.org>
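A rough, simplified sketch of the shape of the change (the real header selects lwsync via a subarch/CPU-feature test rather than this stand-in define):

#ifdef CONFIG_PPC64	/* stand-in for "subarch has lwsync" */
#define SMP_WMB_INSN	"lwsync"
#else
#define SMP_WMB_INSN	"eieio"
#endif

#define smp_wmb()	__asm__ __volatile__ (SMP_WMB_INSN : : : "memory")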
-