- 03 Jul 2012, 14 commits
-
-
By Anton Blanchard

Implement a POWER7 optimised copy_page using VMX and enhanced prefetch instructions. We use enhanced prefetch hints to prefetch both the load and store side. We copy a cacheline at a time and fall back to regular loads and stores if we are unable to use VMX (eg we are in an interrupt).

The following microbenchmark was used to assess the impact of the patch:

  http://ozlabs.org/~anton/junkcode/page_fault_file.c

We test MAP_PRIVATE page faults across a 1GB file, 100 times:

  # time ./page_fault_file -p -l 1G -i 100

Before: 22.25s
After:  18.89s

17% faster.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
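In rough C terms, the fallback decision looks like the sketch below (the real patch is PowerPC assembly; copy_page_regular() and copy_page_vmx() are hypothetical names for the two paths):

  /* Sketch only: VMX state can't be touched from interrupt context,
   * so fall back to ordinary loads and stores there. */
  void copy_page(void *to, const void *from)
  {
          if (in_interrupt()) {
                  copy_page_regular(to, from);    /* hypothetical scalar path */
                  return;
          }
          preempt_disable();
          enable_kernel_altivec();                /* allow VMX use in the kernel */
          copy_page_vmx(to, from);                /* hypothetical VMX path, one
                                                   * cacheline per iteration */
          preempt_enable();
  }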
-
By Anton Blanchard

Subsequent patches will add more VMX library functions and it makes sense to keep all the C-code helper functions in the one file.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

mtmsrd is an expensive instruction; we save a few cycles by doing it once instead of twice.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Version 2.06 of the POWER ISA introduced enhanced touch instructions, allowing us to specify a number of attributes including the length of a stream.

This patch adds a software stream for both loads and stores in the POWER7 copy_tofrom_user loop. Since the setup is quite complicated and we have to use an eieio to ensure correct ordering of the "GO" command, we only do this for copies above 4kB.

To quantify any performance improvements we need a working set bigger than the caches, so we operate on a 1GB file:

  # dd if=/dev/zero of=/tmp/foo bs=1M count=1024

And we compare how fast we can read the file:

  # dd if=/tmp/foo of=/dev/null bs=1M

before: 7.7 GB/s
after:  9.6 GB/s

A 25% improvement.

The worst case for this patch will be a completely L1-cache-contained copy of just over 4kB. We can test this with the copy_to_user testcase we used to tune copy_tofrom_user originally:

  http://ozlabs.org/~anton/junkcode/copy_to_user.c

  # time ./copy_to_user2 -l 4224 -i 10000000

before: 6.807 s
after:  6.946 s

A 2% slowdown, which seems reasonable considering our data is unlikely to be completely L1 contained.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Yong Zhang

1) call_function.lock used in smp_call_function_many() is just to protect call_function.queue and &data->refs; cpu_online_mask is outside the lock. And it's not necessary to protect cpu_online_mask, because data->cpumask is pre-calculated, and even if a cpu is brought up while calling arch_send_call_function_ipi_mask(), it's harmless because the validation test in generic_smp_call_function_interrupt() will take care of it.

2) For the cpu-down issue, stop_machine() will guarantee that no concurrent smp_call_function() is processing.

Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

I noticed __clear_user high up in a profile of one of my RAID stress tests. The testcase was doing a dd from /dev/zero which ends up calling __clear_user. __clear_user is basically a loop with a single 4 byte store, which is horribly slow. We can do much better by aligning the destination and doing 32 bytes of 8 byte stores in a loop.

The following testcase was used to verify the patch:

  http://ozlabs.org/~anton/junkcode/stress_clear_user.c

To show the improvement in performance I ran a dd from /dev/zero to /dev/null on a POWER7 box:

Before:

  # dd if=/dev/zero of=/dev/null bs=1M count=10000
  10485760000 bytes (10 GB) copied, 3.72379 s, 2.8 GB/s

After:

  # time dd if=/dev/zero of=/dev/null bs=1M count=10000
  10485760000 bytes (10 GB) copied, 0.728318 s, 14.4 GB/s

Over 5x faster.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
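A hedged C rendering of the new store strategy (the actual implementation is assembly, and proper user-access error handling is elided here):

  unsigned long __clear_user(void __user *addr, unsigned long size)
  {
          char __user *p = addr;

          /* byte stores until the destination is 8-byte aligned */
          while (size && ((unsigned long)p & 7)) {
                  put_user(0, p);
                  p++;
                  size--;
          }
          /* main loop: 32 bytes per iteration, as four 8-byte stores */
          while (size >= 32) {
                  put_user(0UL, (unsigned long __user *)p);
                  put_user(0UL, (unsigned long __user *)(p + 8));
                  put_user(0UL, (unsigned long __user *)(p + 16));
                  put_user(0UL, (unsigned long __user *)(p + 24));
                  p += 32;
                  size -= 32;
          }
          /* trailing bytes */
          while (size--)
                  put_user(0, p++);
          return 0;
  }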
-
By Anton Blanchard

irq_entry, irq_exit, timer_interrupt_entry and timer_interrupt_exit all do the same thing, so use DECLARE_EVENT_CLASS to avoid duplicating everything 4 times.

This saves quite a lot of space in both instruction text and data:

     text    data     bss     dec     hex filename
     9265   19622      16   28903    70e7 arch/powerpc/kernel/irq.o
     6817   19019      16   25852    64fc arch/powerpc/kernel/irq.o

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
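The deduplication follows the standard tracepoint pattern, sketched below (the field layout is illustrative, not the exact arch/powerpc definition):

  DECLARE_EVENT_CLASS(ppc64_interrupt_class,
          TP_PROTO(struct pt_regs *regs),
          TP_ARGS(regs),
          TP_STRUCT__entry(__field(struct pt_regs *, regs)),
          TP_fast_assign(__entry->regs = regs;),
          TP_printk("pt_regs=%p", __entry->regs)
  );

  /* each event is now just a small stub sharing the class above */
  DEFINE_EVENT(ppc64_interrupt_class, irq_entry,
          TP_PROTO(struct pt_regs *regs), TP_ARGS(regs));
  DEFINE_EVENT(ppc64_interrupt_class, irq_exit,
          TP_PROTO(struct pt_regs *regs), TP_ARGS(regs));
  DEFINE_EVENT(ppc64_interrupt_class, timer_interrupt_entry,
          TP_PROTO(struct pt_regs *regs), TP_ARGS(regs));
  DEFINE_EVENT(ppc64_interrupt_class, timer_interrupt_exit,
          TP_PROTO(struct pt_regs *regs), TP_ARGS(regs));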
-
By Anton Blanchard

When looking through some instruction traces I noticed our tracepoint checks were inline. It turns out we don't have CONFIG_JUMP_LABEL enabled. By enabling CONFIG_JUMP_LABEL we replace a load/compare/branch with a nop at every tracepoint call.

For example in do_IRQ:

CONFIG_JUMP_LABEL disabled:

  stdx 3,11,9
  lwz 0,8(29)
  cmpwi 7,0,0
  bne- 7,.L124
  bl .irq_enter

CONFIG_JUMP_LABEL enabled:

  stdx 3,11,9
  nop
  bl .irq_enter

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Deepthi Dharwar

This patch removes the pseries_notify_add_cpu() call and replaces it with a hotplug notifier. This prevents cpuidle resources from being released and allocated each time a cpu comes online on pseries. The earlier design was causing a lockdep problem in start_secondary, as reported on this thread:

  https://lkml.org/lkml/2012/5/17/2

This applies on 3.4-rc7.

Signed-off-by: Deepthi Dharwar <deepthi@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
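A minimal sketch of the notifier shape (3.4-era hotplug API; the handler body is illustrative, not the pseries driver's actual logic):

  static int cpuidle_cpu_callback(struct notifier_block *nfb,
                                  unsigned long action, void *hcpu)
  {
          unsigned long cpu = (unsigned long)hcpu;

          switch (action & ~CPU_TASKS_FROZEN) {
          case CPU_ONLINE:
                  /* re-enable cpuidle on this cpu rather than tearing
                   * down and rebuilding the global cpuidle state */
                  cpuidle_enable_device(per_cpu(cpuidle_devices, cpu));
                  break;
          }
          return NOTIFY_OK;
  }

  static struct notifier_block cpuidle_cpu_notifier = {
          .notifier_call = cpuidle_cpu_callback,
  };

  /* at driver init: register_cpu_notifier(&cpuidle_cpu_notifier); */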
-
By Nishanth Aravamudan

An upcoming release of firmware will add DDW extensions, in particular an API to "reset" the DMA window to the original configuration (32-bit, 2GB in size). With that API available, we can safely remove the default window, increasing the resources available to firmware for creation of larger windows for the slot in question -- if we encounter an error, we can use the new API to reset the state of the slot.

Further, this same release of firmware will make it a hard requirement for OSes to release the existing window before any other windows will be shown as available, to avoid conflicts in addressing between the two windows.

In anticipation of these changes, always remove the default window before we do any DDW manipulations.

Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Steven Rostedt

The patch_instruction() interface is made to modify kernel text. It is safer to use it than probe_kernel_write() when modifying kernel code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Steven Rostedt

For ftrace to use the patch_instruction code, it needs to check for faults on write. Ftrace updates code all over the kernel, and we need to know whether code is updated or not, due to protections that are placed on some portions of the kernel. If ftrace does not detect a fault, it will error later on, and it will be much more difficult to find the problem.

By changing patch_instruction() to detect faults, ftrace will be able to make use of it too.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
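The fault-aware shape, sketched (the real function also handles icache maintenance; __put_user_size() reports a fault as a nonzero error instead of oopsing):

  int patch_instruction(unsigned int *addr, unsigned int instr)
  {
          int err;

          __put_user_size(instr, addr, 4, err);
          if (err)
                  return err;     /* caller (e.g. ftrace) sees the fault */
          asm ("dcbst 0, %0; sync; icbi 0, %0; sync; isync" : : "r" (addr));
          return 0;
  }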
-
By Steven Rostedt

PowerPC does not have the synchronization issues that x86 has with modifying code on one CPU while another CPU is executing it. The other CPU will see either the old or the new code without any issues, unlike x86 which may raise a GPF. Instead of calling the heavyweight stop_machine(), just update the code.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Tony Breeds

Currently we build all board files regardless of the final zImage target. This is sub-optimal (in terms of compilation) and means a problem in one platform needlessly causes failures for other platforms. Use the Kconfig variables to selectively construct the list of board files to build.

Signed-off-by: Tony Breeds <tony@bakeyournoodle.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
- 29 Jun 2012, 7 commits
-
-
By Michael Neuling

The following commit added support for powernv but broke pseries/BML:

  1f1616e8 powerpc/powernv: Add TCE SW invalidation support

TCE_PCI_SW_INVAL was split into FREE and CREATE flags, but the tests in the pseries code were not updated to reflect this.

Signed-off-by: Michael Neuling <mikey@neuling.org>
cc: stable@kernel.org [v3.3+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Anton Blanchard

Commit f948501b ("Make hard_irq_disable() actually hard-disable interrupts") caused check_and_cede_processor() to stop working. ->irq_happened will never be zero right after a hard_irq_disable(), so the compiler removes the call to cede_processor() completely.

The bug was introduced back in the lazy interrupt handling rework of 3.4 but was hidden until recently because hard_irq_disable() did nothing.

This issue will eventually appear in 3.4 stable since the hard_irq_disable() fix is marked stable, so mark this one for stable too.

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Steven Rostedt

As I was adding code that affects all archs, I started testing the function tracer against PPC64 and found that it currently locks up with the 3.4 kernel. I figured it was due to tracing a function that shouldn't be, so I went through the following process to bisect to find the culprit:

  cat /debug/tracing/available_filter_functions > t
  num=`wc -l t`
  sed -ne "1,${num}p" t > t1
  let num=num+1
  sed -ne "${num},$p" t > t2
  cat t1 > /debug/tracing/set_ftrace_filter
  echo function > /debug/tracing/current_tracer
  <failed? bisect t1, if not bisect t2>

It finally came down to this function: restore_interrupts()

I'm not sure why this locks up the system. It just seems to prevent scheduling from occurring. Interrupts seem to still work, as I can ping the box. But all user processes freeze.

When restore_interrupts() is not traced, function tracing works fine.

Cc: stable@kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Li Zhong

This patch fixes a couple of section mismatch warnings like the following one:

  WARNING: arch/powerpc/kernel/built-in.o(.text+0x2923c): Section mismatch
  in reference from the function .prom_query_opal() to the function
  .init.text:.call_prom()
  The function .prom_query_opal() references the function __init .call_prom().
  This is often because .prom_query_opal lacks a __init annotation or the
  annotation of .call_prom is wrong.

Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
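The usual fix, sketched with the function names from the warning (a before/after illustration; the bodies are made up for the example):

  /* call_prom() lives in .init.text and is discarded after boot */
  static int __init call_prom(const char *service, int nargs, int nret, ...);

  /* before: prom_query_opal() sits in .text but calls into .init.text */
  static void prom_query_opal(void)
  {
          call_prom("query-opal", 0, 1);
  }

  /* after: the caller is annotated __init too, since it only runs at boot */
  static void __init prom_query_opal(void)
  {
          call_prom("query-opal", 0, 1);
  }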
-
By Tiejun Chen

In the entry_64.S version of ret_from_except_lite, you'll notice that in the !preempt case, after we've checked MSR_PR, we test for any TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work or not. However, in the preempt case, we do a convoluted trick to test SIGPENDING only if PR was set, and we always test NEED_RESCHED... but we forget to test any other bit of _TIF_USER_WORK_MASK! So that means that with preempt, we completely fail to test for things like single step, syscall tracing, etc...

The fixed code path is:

- Test PR. If not set, go to resume_kernel; otherwise continue.
- In resume_kernel, do the original do_work.
- Otherwise, always test _TIF_USER_WORK_MASK to decide between doing the original user_work and restoring directly.

Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
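In C-like pseudocode (the real code is entry_64.S assembly, and these helper names are only stand-ins for its labels), the fixed flow reads roughly as:

  if (!(regs->msr & MSR_PR)) {
          resume_kernel();        /* kernel return: the original do_work */
  } else {
          /* user return: check the whole mask, not just
           * SIGPENDING / NEED_RESCHED */
          if (current_thread_info()->flags & _TIF_USER_WORK_MASK)
                  do_user_work(); /* single step, syscall tracing, signals... */
          else
                  restore();      /* nothing pending: restore and return */
  }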
-
By Michael Neuling

chroma_defconfig currently gives me this with gcc 4.6:

  arch/powerpc/mm/numa.c:638:13: error: 'dm' may be used uninitialized in this function [-Werror=uninitialized]

It's a bogus warning/error since of_get_drconf_memory() only writes it anyway.

Signed-off-by: Michael Neuling <mikey@neuling.org>
cc: <stable@kernel.org> [v3.3+]
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
By Michael Ellerman

If the kernel is big enough (eg. allyesconfig), the linker may need to switch TOCs when calling from the BPF JIT code out to the external helpers (skb_copy_bits() & bpf_internal_load_pointer_neg_helper()). In order to do that we need to leave space after the bl for the linker to insert a reload of our TOC pointer.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Acked-by: Matt Evans <matt@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
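Sketched in the style of the JIT's emitter macros (hypothetical names; under the 64-bit ELFv1 ABI the linker rewrites the slot to a TOC reload for a cross-TOC call):

  /* emit the call, then leave a nop for the linker's TOC fixup */
  PPC_BL(func);   /* bl skb_copy_bits etc. */
  PPC_NOP();      /* may become "ld r2, 40(r1)" if the callee is in
                   * a different TOC */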
-
- 19 Jun 2012, 1 commit
-
-
By Paul Mackerras

At the moment we call kvmppc_pin_guest_page() in kvmppc_update_vpa() with two spinlocks held: the vcore lock and the vcpu->vpa_update_lock. This is not good, since kvmppc_pin_guest_page() calls down_read() and get_user_pages_fast(), both of which can sleep. This bug was introduced in 2e25aa5f ("KVM: PPC: Book3S HV: Make virtual processor area registration more robust").

This arranges to drop those spinlocks before calling kvmppc_pin_guest_page() and re-take them afterwards. Dropping the vcore lock in kvmppc_run_core() means we have to set the vcore_state field to VCORE_RUNNING before we drop the lock, so that other vcpus won't try to run this vcore.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Avi Kivity <avi@redhat.com>
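In outline, the fix looks like this sketch (lock and variable names taken from the description; error handling and the revalidation details are elided):

  /* can't sleep under a spinlock, so drop it around the pin */
  spin_unlock(&vcpu->arch.vpa_update_lock);

  va = kvmppc_pin_guest_page(kvm, gpa, &nb);  /* may sleep: down_read()
                                               * + get_user_pages_fast() */

  spin_lock(&vcpu->arch.vpa_update_lock);
  /* ... recheck any state that may have changed while unlocked ... */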
-
- 16 Jun 2012, 1 commit
-
-
By Kay Sievers

Provide an iterator to receive the log buffer content, and convert all kmsg_dump() users to it.

The structured data in the kmsg buffer now contains binary data, which should no longer be copied verbatim to the kmsg_dump() users.

The iterator should provide reliable access to the buffer data, and also supports proper log-line-aware chunking of data while iterating.

Signed-off-by: Kay Sievers <kay@vrfy.org>
Tested-by: Tony Luck <tony.luck@intel.com>
Reported-by: Anton Vorontsov <anton.vorontsov@linaro.org>
Tested-by: Anton Vorontsov <anton.vorontsov@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
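A minimal consumer of the iterator, as a sketch (kmsg_dump_get_line() is the line-oriented entry point of the new interface; write_to_persistent_store() is a hypothetical sink):

  static void my_dumper_dump(struct kmsg_dumper *dumper,
                             enum kmsg_dump_reason reason)
  {
          static char line[1024];
          size_t len;

          /* pull out one log-line-aware chunk at a time */
          while (kmsg_dump_get_line(dumper, false, line, sizeof(line), &len))
                  write_to_persistent_store(line, len);
  }

  static struct kmsg_dumper my_dumper = {
          .dump = my_dumper_dump,
  };

  /* at init: kmsg_dump_register(&my_dumper); */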
-
- 15 Jun 2012, 1 commit
-
-
By Paul Mackerras

At present, hard_irq_disable() does nothing on powerpc because of this code in include/linux/interrupt.h:

  #ifndef hard_irq_disable
  #define hard_irq_disable() do { } while(0)
  #endif

So we need to make our hard_irq_disable be a macro. It was previously a macro until commit 7230c564 ("powerpc: Rework lazy-interrupt handling") changed it to a static inline function.

Cc: stable@vger.kernel.org
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
--
 arch/powerpc/include/asm/hw_irq.h | 3 +++
 1 file changed, 3 insertions(+)
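The usual idiom, sketched: keep the inline function but also define the name as a macro, so the generic #ifndef sees it (the function body below approximates the 3.4-era powerpc version):

  /* arch/powerpc/include/asm/hw_irq.h (sketch) */
  static inline void hard_irq_disable(void)
  {
          __hard_irq_disable();
          get_paca()->soft_enabled = 0;
          get_paca()->irq_happened |= PACA_IRQ_HARD_DIS;
  }

  /* make "#ifndef hard_irq_disable" in linux/interrupt.h see us */
  #define hard_irq_disable hard_irq_disable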
-
- 08 Jun 2012, 2 commits
-
-
By Steffen Rumler

This fixes a problem which can cause kernel oopses while loading a kernel module.

According to the PowerPC EABI specification, GPR r11 is assigned the dedicated function of pointing to the previous stack frame. In the powerpc-specific kernel module loader, do_plt_call() (in arch/powerpc/kernel/module_32.c), GPR r11 is also used to generate trampoline code.

This combination crashes the kernel in the case where the compiler chooses to use a helper function for saving GPRs on entry, and the module loader has placed the .init.text section far away from the .text section, meaning that it has to generate a trampoline for functions in the .init.text section to call the GPR save helper. Because the trampoline trashes r11, references to the stack frame using r11 can cause an oops.

The fix just uses GPR r12 instead of GPR r11 for generating the trampoline code. According to the statements from Freescale, this is safe from an EABI perspective. I've tested the fix for kernel 2.6.33 on MPC8541.

Cc: stable@vger.kernel.org
Signed-off-by: Steffen Rumler <steffen.rumler.ext@nsn.com>
[paulus@samba.org: reworded the description]
Signed-off-by: Paul Mackerras <paulus@samba.org>
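The trampoline built by do_plt_call(), sketched with r12 (the encodings below are standard PowerPC instruction forms, but treat the snippet as an illustration of the fix rather than the verbatim patch):

  /* r11 is the EABI back-chain pointer, so build the trampoline with r12 */
  entry->jump[0] = 0x3d800000 | ((val + 0x8000) >> 16); /* lis   r12,sym@ha */
  entry->jump[1] = 0x398c0000 | (val & 0xffff);         /* addi  r12,r12,sym@l */
  entry->jump[2] = 0x7d8903a6;                          /* mtctr r12 */
  entry->jump[3] = 0x4e800420;                          /* bctr */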
-
By Paul Mackerras

This reverts 68568add ("powerpc/time: Remove unnecessary sanity check of decrementer expiration"). We do need to check whether we have reached the expiration time of the next event, because we sometimes get an early decrementer interrupt, most notably when we set the decrementer to 1 in arch_irq_work_raise().

The effect of not having the sanity check is that if timer_interrupt() gets called early, we leave the decrementer set to its maximum value, which means we then don't get any more decrementer interrupts for about 4 seconds (or longer, depending on timebase frequency). I saw these pauses as a consequence of getting a stray hypervisor decrementer interrupt left over from exiting a KVM guest.

This isn't quite a straight revert because of changes to the surrounding code, but it restores the same algorithm as was previously used.

Cc: stable@vger.kernel.org
Acked-by: Anton Blanchard <anton@samba.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
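The restored check, sketched against timer_interrupt() (names approximate the 3.5-era arch/powerpc/kernel/time.c; DECREMENTER_MAX is the largest programmable decrementer value):

  struct clock_event_device *evt = &__get_cpu_var(decrementers);
  u64 *next_tb = &__get_cpu_var(decrementers_next_tb);
  u64 now = get_tb_or_rtc();

  if (now >= *next_tb) {
          /* the event really is due: fire the clockevent handler */
          *next_tb = ~(u64)0;
          if (evt->event_handler)
                  evt->event_handler(evt);
  } else {
          /* early interrupt (e.g. arch_irq_work_raise() set the
           * decrementer to 1): re-arm for the time actually left,
           * instead of leaving the decrementer at its maximum */
          now = *next_tb - now;
          if (now <= DECREMENTER_MAX)
                  set_dec((int)now);
  }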
-
- 02 Jun 2012, 9 commits
-
-
By Anton Blanchard

Commit e57f93cc ("powerpc: get rid of nlink_t uses, switch to explicitly-sized type") changed the size of st_nlink on ppc64 from a long to a short, resulting in boot failures.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Does block_sigmask() + tracehook_signal_handler(); called when the sigframe has been successfully built. All architectures converted to it; block_sigmask() itself is gone now (merged into this one).

I'm still not too happy with the signature, but that's a separate story (IMO we need a structure that would contain signal number + siginfo + k_sigaction, so that get_signal_to_deliver() would fill one, and signal_delivered(), handle_signal() and probably setup...frame() would take one).

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

... it's just a call of set_current_blocked() now.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Only 3 out of 63 do not. Renamed the current variant to __set_current_blocked(), added a set_current_blocked() that will exclude unblockable signals, and switched open-coded instances to it.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
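The split, sketched (close to what kernel/signal.c ends up with; __set_task_blocked() is the existing locked worker):

  void __set_current_blocked(const sigset_t *newset)
  {
          struct task_struct *tsk = current;

          spin_lock_irq(&tsk->sighand->siglock);
          __set_task_blocked(tsk, newset);
          spin_unlock_irq(&tsk->sighand->siglock);
  }

  void set_current_blocked(sigset_t *newset)
  {
          /* SIGKILL and SIGSTOP may never be blocked */
          sigdelsetmask(newset, sigmask(SIGKILL) | sigmask(SIGSTOP));
          __set_current_blocked(newset);
  }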
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Replace the boilerplate "should we use ->saved_sigmask or ->blocked?" with calls of an obvious inlined helper...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
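The helper is tiny; a sketch matching the description:

  static inline sigset_t *sigmask_to_save(void)
  {
          sigset_t *res = &current->blocked;

          /* a sigsuspend()-style syscall stashed the caller's real mask */
          if (unlikely(test_restore_sigmask()))
                  res = &current->saved_sigmask;
          return res;
  }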
-
By Al Viro

First fruits of the ..._restore_sigmask() helpers: now we can take the boilerplate "signal didn't have a handler, clear RESTORE_SIGMASK and restore the blocked mask from ->saved_mask" into a common helper. Open-coded instances switched...

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
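A sketch of that common helper:

  static inline void restore_saved_sigmask(void)
  {
          if (test_and_clear_restore_sigmask())
                  __set_current_blocked(&current->saved_sigmask);
  }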
-
By Al Viro

Helpers parallel to set_restore_sigmask(), used in the next commits.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 01 Jun 2012, 1 commit
-
-
By Anton Vorontsov

Current CPU hotplug code has some task->mm handling issues:

1. Working with task->mm without getting the mm or grabbing the task lock is dangerous, as ->mm might disappear (exit_mm() assigns NULL under task_lock(), so the tasklist lock is not enough). We can't use the get_task_mm()/mmput() pair as mmput() might sleep, so we must take the task lock while handling its mm.

2. Checking for process->mm is not enough because the process' main thread may exit or detach its mm via use_mm(), but other threads may still have a valid mm. To fix this we would need to use find_lock_task_mm(), which would walk all threads and return an appropriate task (with the task lock held).

clear_tasks_mm_cpumask() has all the issues fixed, so let's use it.

Suggested-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
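Sketch of the racy pattern being replaced versus the helper call (as seen from a cpu_disable-style path):

  /* before (racy): ->mm can vanish under us, and a NULL main-thread
   * ->mm doesn't mean no thread still uses the mm */
  for_each_process(p) {
          if (p->mm)
                  cpumask_clear_cpu(cpu, mm_cpumask(p->mm));
  }

  /* after: the helper walks threads via find_lock_task_mm(),
   * holding the task lock around each mm access */
  clear_tasks_mm_cpumask(cpu);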
-
- 31 May 2012, 2 commits
-
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-
- 28 May 2012, 1 commit
-
-
By Paul Mackerras

This is much the same as for SPARC, except that we can do the find_zero() function more efficiently using the count-leading-zeroes instructions. Tested on 32-bit and 64-bit PowerPC.

Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
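The count-leading-zeroes trick, sketched for the big-endian 64-bit case (the in-kernel version uses the cntlzd instruction via inline asm; __builtin_clzl stands in here):

  /* mask has 0x80 set in each byte of the word that was zero; on
   * big-endian, the first such byte is the most significant one */
  static inline long find_zero(unsigned long mask)
  {
          /* leading zero bits / 8 = index of the first zero byte */
          return __builtin_clzl(mask) >> 3;
  }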
-
- 24 May 2012, 1 commit
-
-
By Al Viro

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
-