10 January 2013, 40 commits
-
Committed by Michael Neuling
This is a rewrite so that we don't assume we are using the DABR throughout the code. We now use arch_hw_breakpoint to store the breakpoint in a generic manner in the thread_struct, rather than storing the raw DABR value. The ptrace GET/SET_DEBUGREG interface currently passes the raw DABR in from userspace. We keep this functionality, so that future changes (like the POWER8 DAWR) will still fake the DABR to userspace.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
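A rough sketch of the data-structure change being described; the member names below follow the powerpc arch_hw_breakpoint of that era but are illustrative rather than a quote of the patch:

```c
/* Illustrative sketch only, not the literal diff: thread_struct stores a
 * generic breakpoint description instead of the raw DABR register value. */
struct arch_hw_breakpoint {
	unsigned long	address;	/* effective address being watched */
	u16		type;		/* read/write/translation flags */
	u16		len;		/* length of the watched range */
};

struct thread_struct {
	/* ... */
	struct arch_hw_breakpoint hw_brk;	/* was: unsigned long dabr; */
	/* ... */
};
```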
-
Committed by Michael Neuling
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
... and add it to the POWER8 CPU features.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
These are just wrappers around the new set_mode HCALL.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
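A minimal sketch of what such a wrapper typically looks like on pseries; the function name and argument layout are assumptions for illustration, not a quote of the patch:

```c
#include <asm/hvcall.h>

/* Hedged sketch: a thin wrapper that forwards its arguments to the
 * H_SET_MODE hypervisor call via the standard hcall helper. */
static inline long plpar_set_mode(unsigned long mflags, unsigned long resource,
				  unsigned long value1, unsigned long value2)
{
	return plpar_hcall_norets(H_SET_MODE, mflags, resource, value1, value2);
}
```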
-
Committed by Michael Neuling
This frees up 7 bits for crazy new CPU features.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Michael Neuling
These are 32-bit, so there's no need for a bunch of wasted 0s. The 0s saved here can be put to better use elsewhere, like at the end of my paycheck.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
powerpc: Avoid load of static chain register when calling nested functions through a pointer on 64bit

The ppc64 ABI has a static chain register (r11) which is only used when calling nested functions through a pointer. Considering that we take a dim view of nested functions in the kernel, we have a lot of unnecessary overhead here. gcc 4.7 has an option to disable loading of r11, so let's use it. If hell freezes over and hipsters manage to litter the kernel with nested functions, gcc will give us an error message instead of simply compiling bad code:

  "You cannot take the address of a nested function if you use the -mno-pointers-to-nested-functions option."

Furthermore, our kernel module trampolines don't set up the static chain register, so adding this option and forcing gcc to error out makes even more sense.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
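For context, a tiny example of the construct being outlawed; nested functions are a GNU C extension, and taking the address of one is exactly what needs the static chain (r11) and what the new gcc option turns into a hard error:

```c
/* Illustrative only: a GNU C nested function whose address is taken.
 * The inner function captures 'base' from the enclosing frame, so a call
 * through 'fp' must pass that frame via the static chain register (r11 on
 * ppc64).  Building with -mno-pointers-to-nested-functions rejects this. */
int outer(int base)
{
	int inner(int x) { return x + base; }

	int (*fp)(int) = inner;		/* requires the static chain */
	return fp(1);
}
```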
-
Committed by Anton Blanchard
powerpc: Enable devtmpfs, EFI partition support and tmpfs ACLs in the pseries, ppc64 and ppc64e defconfigs

We need devtmpfs enabled to boot on recent versions of Fedora. EFI partitions will be useful for large block devices. tmpfs ACL support is used by some distros for managing access to devices.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
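The standard config symbols involved, shown as a hedged illustration of what the defconfig fragments would gain (the exact option set per defconfig may differ):

```
CONFIG_DEVTMPFS=y
CONFIG_EFI_PARTITION=y
CONFIG_TMPFS_POSIX_ACL=y
```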
-
Committed by Anton Blanchard
Set CONFIG_NLS_DEFAULT to utf8. The distros do this (e.g. ppc64 FC17 and RHEL6), as do the x86 defconfigs. Userspace these days is most likely to expect utf8 anyway.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
No changes, just update the configs with savedefconfig.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Joe Perches
Use the new vsprintf extension to avoid any possible message interleaving. Convert the #ifdef DEBUG block to a single pr_debug().

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
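A generic before/after sketch of the second half of that change, converting an #ifdef DEBUG printk block into a single pr_debug(); the function and message text are made up for illustration:

```c
#include <linux/printk.h>

static void example_report(const char *name, int irq)
{
	/* Before: an open-coded block guarded by a compile-time #ifdef:
	 *
	 *   #ifdef DEBUG
	 *           printk(KERN_DEBUG "probed node %s, irq %d\n", name, irq);
	 *   #endif
	 *
	 * After: one pr_debug() call, which compiles away unless DEBUG (or
	 * dynamic debug) is enabled and is emitted as a single message. */
	pr_debug("probed node %s, irq %d\n", name, irq);
}
```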
-
Committed by Haren Myneni
[PATCH 6/6] powerpc: Implement PPR save/restore

When a task enters kernel space, the user defined priority (PPR) is saved into the PACA at the beginning of the first level exception vector and then copied from the PACA to thread_info in the second level vector. PPR is restored from thread_info before the task exits kernel space.

P7/P8 temporarily raise the thread priority to a higher level during an exception until the program executes HMT_* calls, but they do not modify the PPR register. So we save the PPR value as soon as a register is available to use and then call HMT_MEDIUM to raise the priority. This feature is supported on P7 and later processors.

We save/restore PPR for all exception vectors except system call entry. GLIBC saves/restores it around system calls, so the default PPR value (3) is set on system call exit when the task returns to user space.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Haren Myneni
[PATCH 5/6] powerpc: Macros for saving/restoring PPR

Several macros are defined for saving and restoring the user defined PPR value.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Haren Myneni
[PATCH 4/6] powerpc: Define ppr in thread_struct

ppr in thread_struct is used to save the PPR value and restore it before the process exits the kernel. This patch sets the default priority to 3 when tasks are created, so that users can use 4 for higher priority tasks.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Haren Myneni
[PATCH 3/6] powerpc: Increase exception arrays in the paca struct to save PPR

The paca is used to save the user defined PPR value in the first level exception vector.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Haren Myneni
[PATCH 2/6] powerpc: Enable PPR save/restore

The SMT thread status register (PPR) is used to set thread priority. This patch enables the PPR save/restore feature (CPU_FTR_HAS_PPR) on POWER7 and POWER8 systems.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Haren Myneni
[PATCH 1/6] powerpc: Move branch instruction from ACCOUNT_CPU_USER_ENTRY to caller

The first instruction in ACCOUNT_CPU_USER_ENTRY is 'beq', which checks for exceptions coming from kernel mode. The PPR value will be saved immediately after ACCOUNT_CPU_USER_ENTRY and is likewise only needed for user level exceptions, so move this branch instruction into the caller.

Signed-off-by: Haren Myneni <haren@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Chris Freehill
Support for stalled-cycles-frontend and stalled-cycles-backend is added for e500-based processors. The following mappings are used:

- stalled-cycles-frontend or idle-cycles-frontend: Com:18, cycles decode stalled
- stalled-cycles-backend or idle-cycles-backend: Com:19, cycles issue stalled

Signed-off-by: Chris Freehill <chrisf@freescale.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
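A hedged sketch of what such a mapping looks like in a powerpc PMU driver; the array name and its placement are assumptions, while the generic perf event indices are the standard kernel ones:

```c
#include <linux/perf_event.h>

/* Illustrative mapping from generic perf events to e500 PMU event numbers,
 * as described above (Com:18 and Com:19). */
static int e500_generic_events[] = {
	[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = 18,	/* cycles decode stalled */
	[PERF_COUNT_HW_STALLED_CYCLES_BACKEND]  = 19,	/* cycles issue stalled  */
};
```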
-
Committed by David Woodhouse
By using the compiler intrinsics instead of hand-crafted opaque inline assembler for byte-swapping, we let the compiler see what's actually happening, and it gets to use lwbrx/stwbrx instructions instead of a normal load/store coupled with a sequence of rlwimi instructions to move bits around. Compile-tested only. It gave a code size reduction of almost 4% for ext2, and more like 2.5% for ext3/ext4.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
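A minimal generic-C illustration of the idea (not the literal powerpc header change): once the swap is expressed with the GCC builtin, the compiler understands the operation and can fold it into a byte-reversed load such as lwbrx when the operand comes straight from memory.

```c
#include <stdint.h>

/* The builtin is transparent to the compiler, unlike an opaque asm block. */
static inline uint32_t my_swab32(uint32_t x)
{
	return __builtin_bswap32(x);
}

/* On big-endian powerpc this load-plus-swap is a candidate for a single
 * lwbrx instruction instead of a load followed by rlwimi bit shuffling. */
static inline uint32_t load_le32(const uint32_t *p)
{
	return my_swab32(*p);
}
```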
-
Committed by Wei Yongjun
Use the for_each_compatible_node() macro instead of open coding it.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
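A generic before/after sketch of this kind of cleanup; the compatible string and the do_something() helper are placeholders, not anything from the patch:

```c
#include <linux/of.h>

static void do_something(struct device_node *np) { /* placeholder */ }

static void scan_widgets(void)
{
	struct device_node *np;

	/* Open-coded form being replaced:
	 *
	 *   for (np = of_find_compatible_node(NULL, NULL, "example,widget");
	 *        np != NULL;
	 *        np = of_find_compatible_node(np, NULL, "example,widget"))
	 *           do_something(np);
	 */

	/* Equivalent using the helper macro: */
	for_each_compatible_node(np, NULL, "example,widget")
		do_something(np);
}
```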
-
Committed by Wei Yongjun
Use the for_each_compatible_node() macro instead of open coding it.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Wei Yongjun
Use the for_each_compatible_node() macro instead of open coding it.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
For PR KVM we allow userspace to map 0xc000000000000000. Because transitioning from userspace to the guest kernel may use the relocated exception vectors, we have to disable relocation on exceptions whenever PR KVM is active, as we cannot trust that address. This issue does not apply to HV KVM, since changing from a guest to the hypervisor will never use the relocated exception vectors.

Currently the hypervisor interface only allows us to toggle relocation on exceptions on a partition wide scope, so we need to globally disable it when the first PR KVM instance is started and only re-enable it when all PR KVM instances have been destroyed. It's a bit heavy-handed, but until the hypervisor gives us a lightweight way to toggle relocation on exceptions for a single thread, it's the only real option.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Jimi Xenidis
Motivation: IBM Blue Gene/Q comes with some very strange firmware that I'm trying to avoid using in the kernel. So instead I spin all the threads in the boot wrapper (using the firmware) and have them enter the kexec stub, pre-translated at the virtual "linear" address, never touching firmware again. This strategy works wonderfully, but I need the following patch in the kexec stub. I believe it should not affect Book3S, and Book3E does not appear to be here yet, so I'd love to get any criticisms up front.

This patch adds two items:
1) Book3e requires that GPR4 survive the "hold" process, so we make sure that happens.
2) Book3e has no real mode, and the hold code exploits this. Since these processors are always translated, we arrange for the kexec'ed threads to enter the hold code using the normal kernel linear mapping.

Signed-off-by: Jimi Xenidis <jimix@pobox.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Wei Yongjun
Use the for_each_node_by_type() macro instead of open coding it.

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Li Zhong
This patch fixes the "MAX_STACK_TRACE_ENTRIES too low" warning for ppc32, similar to commit 12660b17.

Reported-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
Finally remove the two level TOC and build with -mcmodel=medium. Unfortunately we can't build modules with -mcmodel=medium due to the tricks the kernel module loader plays with percpu data:

  # -mcmodel=medium breaks modules because it uses 32bit offsets from
  # the TOC pointer to create pointers where possible. Pointers into the
  # percpu data area are created by this method.
  #
  # The kernel module loader relocates the percpu data section from the
  # original location (starting with 0xd...) to somewhere in the base
  # kernel percpu data space (starting with 0xc...). We need a full
  # 64bit relocation for this to work, hence -mcmodel=large.

With older compilers we fall back to the two level TOC (-mminimal-toc).

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
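Roughly how such a compiler-flag switch is wired up in the powerpc Makefile; this is a hedged sketch of the usual cc-option-with-fallback pattern, not the exact diff:

```make
# Prefer -mcmodel=medium when the compiler supports it; otherwise fall
# back to the old two-level TOC.  Modules keep the large model because of
# the percpu relocation issue described above.
CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mcmodel=medium,-mminimal-toc)
```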
-
Committed by Anton Blanchard
Now that we relocate prom_init.c on 64bit, we can finally remove the nasty RELOC() macro. Finally a patch that I can claim has a net positive effect on the kernel; it doesn't happen very often.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Anton Blanchard
The ppc64 kernel can get loaded at any address, which means our very early init code in prom_init.c must be relocatable. We do this with a pretty nasty RELOC() macro that we wrap accesses of variables with. It is very fragile: sometimes we forget to add a RELOC() to an uncommon path, or a compiler change breaks it.

32bit has a much more elegant solution where we build prom_init.c with -mrelocatable and then process the relocations manually. Unfortunately we can't do the equivalent on 64bit; we would have to build the entire kernel relocatable (-pie), resulting in a large increase in kernel footprint (megabytes of relocation data). The relocation data would be marked __initdata, but it still creates more pressure on our already tight memory layout at boot.

Alan Modra pointed out that the 64bit ABI is relocatable even if we don't build with -pie, we just need to relocate the TOC. This patch implements that idea and relocates the TOC entries of prom_init.c. An added bonus is that there are very few relocations to process, which helps keep boot times on simulators down.

gcc does not put 64bit integer constants into the TOC, but to be safe we may want a build time script that passes over the prom_init.c TOC entries to make sure everything looks reasonable.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Olof Johansson
Enable PRINTK_TIME in pasemi_defconfig. Also regenerate it; it seems that a lot of options have moved around since the last time savedefconfig was run on it.

Signed-off-by: Olof Johansson <olof@lixom.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Tushar Behera
The third argument of of_get_property() is a pointer, hence pass NULL instead of 0.

Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
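For context, that third argument is a length out-parameter ('int *lenp'), so NULL is the idiomatic way to say the length isn't wanted; the property name in this sketch is made up:

```c
#include <linux/of.h>

static bool example_node_enabled(struct device_node *np)
{
	/* Third argument is 'int *lenp'; pass NULL, not 0, when the
	 * property length is not needed. */
	const void *prop = of_get_property(np, "example,enabled", NULL);

	return prop != NULL;
}
```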
-
Committed by Anshuman Khandual
Change the representation of the PMU flags from decimal to hex, since they are bitfields and are easier to read in hex.

Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
This patch actually hooks up doorbell interrupts on POWER8:

- Select the PPC_DOORBELL Kconfig option from PPC_PSERIES.
- Add the doorbell CPU feature bit to POWER8.
- Define a new pSeries_cause_ipi_mux() function that issues a doorbell interrupt if the recipient is another thread within the same core as the sender. If the recipient is in a different core, it falls back to using XICS to deliver the IPI as before (a rough sketch follows below).
- During pSeries_smp_probe() at boot, check whether doorbell interrupts are supported. If they are, set the cause_ipi function pointer to the above mentioned function; otherwise leave it as whichever XICS cause_ipi function was determined by xics_smp_probe().

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
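A hedged sketch of the mux logic described above; the helper names (doorbell_cause_ipi, xics_cause_ipi) and the exact signature are assumptions for illustration, not quotes of the patch:

```c
/* Sketch: send a doorbell when the target shares the sender's core,
 * otherwise fall back to the XICS interrupt controller for the IPI. */
static void pSeries_cause_ipi_mux(int cpu, unsigned long data)
{
	if (cpumask_test_cpu(cpu, cpu_sibling_mask(smp_processor_id())))
		doorbell_cause_ipi(cpu, data);	/* msgsnd/msgsndp path */
	else
		xics_cause_ipi(cpu, data);	/* assumed XICS fallback */
}
```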
-
Committed by Ian Munsie
Move the rule to build doorbell support out of the Makefile and into a new Kconfig boolean that platforms can select. We will add doorbell support to pseries as well in the next patch.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
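Roughly what such a Kconfig arrangement looks like, as a hedged sketch; the option placement and surrounding entries are illustrative only:

```
# A hidden boolean that interested platforms select.
config PPC_DOORBELL
	bool
	default n

# ...and in the platform entry (the next patch adds this for pseries):
config PPC_PSERIES
	select PPC_DOORBELL
```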
-
Committed by Ian Munsie
This patch adds the logic to properly handle doorbells that come in when interrupts have been soft disabled, and to replay them when interrupts are re-enabled:

- masked_##_H##interrupt is modified to leave interrupts enabled when a doorbell has come in, since doorbells are edge sensitive and as such won't be automatically re-raised.
- __check_irq_replay now tests whether a doorbell happened on book3s, and returns either 0xe80 or 0xa00 depending on whether we are the hypervisor or not (see the sketch below).
- restore_check_irq_replay now tests for the two possible server doorbell vector numbers to replay.
- __replay_interrupt also adds tests for the two server doorbell vector numbers, and is modified to use a compare instruction rather than an andi. on the single bit difference between 0x500 and 0x900.

The last two use a CPU feature section to avoid needlessly testing against the hypervisor vector if it is not the hypervisor, and vice versa.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
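A hedged sketch of the __check_irq_replay() part; the wrapper function name is invented so the fragment stands alone, and the body is an illustration of the logic rather than the literal diff:

```c
/* Illustration: if a doorbell was latched while soft-disabled, clear the
 * pending flag and return the vector to replay (0xe80 in hypervisor mode,
 * 0xa00 otherwise).  Returns 0 when there is nothing to replay. */
static unsigned int check_dbell_replay(unsigned char happened)
{
	if (happened & PACA_IRQ_DBELL) {
		local_paca->irq_happened &= ~PACA_IRQ_DBELL;
		if (cpu_has_feature(CPU_FTR_HVMODE))
			return 0xe80;	/* hypervisor doorbell vector */
		return 0xa00;		/* privileged doorbell vector */
	}
	return 0;
}
```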
-
Committed by Ian Munsie
On book3s we have two msgsnd instructions with differing privilege levels. This patch selects the appropriate instruction to use whenever we send a doorbell interrupt.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
Directed Privileged Doorbell Interrupts come in at 0xa00 (or 0xc000000000004a00 if relocation on exceptions is enabled), so add exception vectors at these locations. If doorbell support is not compiled in, we handle it as an unknown_exception.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
Directed Hypervisor Doorbell Interrupts come in at 0xe80 (or 0xc000000000004e80 if relocation on exceptions is enabled), so add exception vectors at these locations. If doorbell support is not compiled in, we handle it as an unknown_exception.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Ian Munsie
There are a few key differences between doorbells on server compared with embedded that we care about on Linux, namely:

- We have a new msgsndp instruction for directed privileged doorbells; msgsnd is used for directed hypervisor doorbells.
- The tag we use in the instruction is the Thread Identification Register of the recipient thread (since server doorbells can only occur between threads within a single core), and is only 7 bits wide.
- A new message type is introduced for server doorbells (none of the existing book3e message types are currently supported on book3s).

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Tested-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Committed by Gernot Vormayr
This adds the Marvell PHY, which is present on the ml507 board. Without it, ethtool causes kernel oopses. Tested on the ml507 board.

Signed-off-by: Gernot Vormayr <gvormayr@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-