- 18 February 2010, 6 commits
-
-
Submitted by Martyn Welch
Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Martyn Welch
Support for the SBC610 VPX Single Board Computer from GE (PowerPC MPC8641D). This patch adds basic support for the on-board flash.
Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Malcolm Crossley
Add the MSI section to the DTS file for the GE SBC610.
Signed-off-by: Malcolm Crossley <malcolm.crossley2@gefanuc.com>
Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Malcolm Crossley
Correct the interrupt map mask for the GE SBC310 XMC site and add an alias.
Signed-off-by: Malcolm Crossley <malcolm.crossley2@gefanuc.com>
Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
Submitted by Martyn Welch
Add the MSI section to the DTS file for the GE SBC310.
Signed-off-by: Martyn Welch <martyn.welch@gefanuc.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
24 is the offset between the opcode past the bl and the opcode past the rfi; spelling this out makes it more obvious.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
-
- 17 February 2010, 22 commits
-
-
Submitted by Dave Kleikamp
powerpc/booke: Add support for advanced debug registers

From: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

Based on patches originally written by Torez Smith.

This patch defines context switch and trap related functionality for BookE specific Debug Registers. It adds support to ptrace() for setting and getting BookE related Debug Registers.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: David Gibson <dwg@au1.ibm.com>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Sergio Durigan Junior <sergiodj@br.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@br.ibm.com>
Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Dave Kleikamp
powerpc/booke: Add definitions for advanced debug registers

From: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

Based on patches originally written by Torez Smith.

This patch adds additional definitions for BookE Debug Registers to the reg_booke.h header file.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: David Gibson <dwg@au1.ibm.com>
Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Sergio Durigan Junior <sergiodj@br.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@br.ibm.com>
Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Dave Kleikamp
powerpc: Extended ptrace interface

From: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

Based on patches originally written by Torez Smith.

Add a new extended ptrace interface so that user-space has a single interface for powerpc, without having to know the specific layout of the debug registers. Implement:

PPC_PTRACE_GETHWDEBUGINFO
PPC_PTRACE_SETHWDEBUG
PPC_PTRACE_DELHWDEBUG

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: David Gibson <dwg@au1.ibm.com>
Cc: Torez Smith <lnxtorez@linux.vnet.ibm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Cc: Kumar Gala <galak@kernel.crashing.org>
Cc: Sergio Durigan Junior <sergiodj@br.ibm.com>
Cc: Thiago Jung Bauermann <bauerman@br.ibm.com>
Cc: linuxppc-dev list <Linuxppc-dev@ozlabs.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
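A minimal user-space sketch of how a tracer might drive these three requests. The struct and constant names (ppc_debug_info, ppc_hw_breakpoint, PPC_BREAKPOINT_*, PPC_DEBUG_CURRENT_VERSION) are assumptions about the uapi header this series introduces, not details stated above; check asm/ptrace.h before relying on them.

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <asm/ptrace.h>         /* assumed: ppc_debug_info, ppc_hw_breakpoint */

/* Arm one write watchpoint on the tracee; returns the slot handle that
 * PPC_PTRACE_DELHWDEBUG would later remove, or -1 on failure. */
static long arm_write_watchpoint(pid_t child, unsigned long addr)
{
        struct ppc_debug_info info;
        struct ppc_hw_breakpoint bp = { 0 };

        /* Ask the kernel what the debug hardware can do. */
        if (ptrace(PPC_PTRACE_GETHWDEBUGINFO, child, 0, &info) < 0)
                return -1;
        if (info.num_data_bps == 0)
                return -1;

        bp.version = PPC_DEBUG_CURRENT_VERSION;
        bp.trigger_type = PPC_BREAKPOINT_TRIGGER_WRITE;
        bp.addr_mode = PPC_BREAKPOINT_MODE_EXACT;
        bp.condition_mode = PPC_BREAKPOINT_CONDITION_NONE;
        bp.addr = addr;

        return ptrace(PPC_PTRACE_SETHWDEBUG, child, 0, &bp);
}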
-
Submitted by Dave Kleikamp
powerpc/booke: Introduce new CONFIG options for advanced debug registers

From: Dave Kleikamp <shaggy@linux.vnet.ibm.com>

Introduce new config options to simplify the ifdefs pertaining to the advanced debug registers for booke and 40x processors:

CONFIG_PPC_ADV_DEBUG_REGS - boolean: true for dac-based processors
CONFIG_PPC_ADV_DEBUG_IACS - number of IAC registers
CONFIG_PPC_ADV_DEBUG_DACS - number of DAC registers
CONFIG_PPC_ADV_DEBUG_DVCS - number of DVC registers
CONFIG_PPC_ADV_DEBUG_DAC_RANGE - DAC ranges supported

Beginning conservatively, since I only have the facilities to test 440 hardware. I believe all 40x and booke platforms support at least 2 IAC and 2 DAC registers. For 440, 4 IAC and 2 DVC registers are enabled, as well as the DAC ranges.

Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: David Gibson <dwg@au1.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Here is a patch from Paul Mackerras that improves the ppc64 copy_tofrom_user. The loop now does 32 bytes at a time as well as pairing loads and stores.

A quick test case that reads 8kB over and over shows the improvement:

POWER6: 53% faster
POWER7: 51% faster

#define _XOPEN_SOURCE 500
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>

#define BUFSIZE (8 * 1024)
#define ITERATIONS 10000000

int main()
{
        char tmpfile[] = "/tmp/copy_to_user_testXXXXXX";
        int fd;
        char buf[BUFSIZE];
        unsigned long i;

        fd = mkstemp(tmpfile);
        if (fd < 0) {
                perror("mkstemp");
                exit(1);
        }

        if (write(fd, buf, BUFSIZE) != BUFSIZE) {
                perror("write");
                exit(1);
        }

        for (i = 0; i < ITERATIONS; i++) {
                if (pread(fd, buf, BUFSIZE, 0) != BUFSIZE) {
                        perror("pread");
                        exit(1);
                }
        }

        unlink(tmpfile);

        return 0;
}

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
A number of our chips like loads and stores to be paired. A small kernel module testcase shows the improvement of pairing loads and stores in copy_4k_page:

POWER6: +9%
POWER7: +1.5%

#include <linux/module.h>
#include <linux/mm.h>

#define ITERATIONS 10000000

static int __init copypage_init(void)
{
        struct timespec before, after;
        unsigned long i;
        struct page *destpage, *srcpage;
        char *dest, *src;

        destpage = alloc_page(GFP_KERNEL);
        srcpage = alloc_page(GFP_KERNEL);

        dest = page_address(destpage);
        src = page_address(srcpage);

        getnstimeofday(&before);

        for (i = 0; i < ITERATIONS; i++)
                copy_4K_page(dest, src);

        getnstimeofday(&after);

        free_page((unsigned long)dest);
        free_page((unsigned long)src);

        printk(KERN_DEBUG "copy_4K_page loop took %lu ns\n",
               (after.tv_sec - before.tv_sec) * NSEC_PER_SEC +
               (after.tv_nsec - before.tv_nsec));

        return 0;
}

static void __exit copypage_exit(void)
{
}

module_init(copypage_init)
module_exit(copypage_exit)

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anton Blanchard");

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Nick Piggin discovered that lwsync barriers around locks were faster than isync on 970. That was a long time ago and I completely dropped the ball in testing his patches across other ppc64 processors. Turns out the idea helps on other chips.

Using a microbenchmark that uses a lot of threads to contend on a global pthread mutex (and therefore a global futex), POWER6 improves 8% and POWER7 improves 2%. I checked POWER5 and while I couldn't measure an improvement, there was no regression.

This patch uses the lwsync patching code to replace the isyncs with lwsyncs on CPUs that support the instruction. We were marking POWER3 and RS64 as lwsync capable, but in reality they treat it as a full sync (ie slow). Remove the CPU_FTR_LWSYNC bit from these CPUs so they continue to use the faster isync method.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
do_lwsync_fixups doesn't work on 64bit; we end up writing lwsyncs to the wrong addresses:

0:mon> di c0000001000bfacc
c0000001000bfacc  7c2004ac      lwsync

Since the lwsync section has negative offsets, we need to use a signed int pointer so we sign extend the value.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
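A hedged illustration of the bug class rather than the kernel's exact fixup code (function and variable names are made up): the fixup table stores offsets relative to each entry, those offsets can be negative, and only a signed type gives the sign extension needed to reach targets below the table.

/* Illustrative only: walk a table of self-relative 32-bit offsets and write
 * an lwsync (0x7c2004ac, as in the disassembly above) at each target.
 * A real implementation would go through patch_instruction() and flush the
 * icache rather than storing directly. */
static void patch_lwsync_sites(int *start, int *end)    /* signed entries */
{
        int *entry;

        for (entry = start; entry < end; entry++) {
                /* *entry may be negative; the signed add goes backwards too */
                unsigned int *insn = (void *)entry + *entry;

                *insn = 0x7c2004ac;     /* lwsync */
        }
}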
-
Submitted by Anton Blanchard
For performance reasons we are about to change ISYNC_ON_SMP to sometimes be lwsync. Now that the macro name no longer makes sense, change it and LWSYNC_ON_SMP to better explain what the barriers are doing.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Now that we have real bit locks, use them instead of open coding it.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
This patch implements the lwarx/ldarx hint bit for bit locks.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Recent versions of the PowerPC architecture added a hint bit to the larx instructions to differentiate between an atomic operation and a lock operation:

> 0 Other programs might attempt to modify the word in storage addressed by EA
> even if the subsequent Store Conditional succeeds.
>
> 1 Other programs will not attempt to modify the word in storage addressed by
> EA until the program that has acquired the lock performs a subsequent store
> releasing the lock.

To avoid a binutils dependency this patch creates macros for the extended lwarx format and uses them in the spinlock code.

To test this change I used a simple test case that acquires and releases a global pthread mutex:

        pthread_mutex_lock(&mutex);
        pthread_mutex_unlock(&mutex);

On a 32 core POWER6, running 32 test threads we spend almost all our time in the futex spinlock code:

    94.37%  perf  [kernel]  [k] ._raw_spin_lock
            |
            |--99.95%-- ._raw_spin_lock
            |           |
            |           |--63.29%-- .futex_wake
            |           |
            |           |--36.64%-- .futex_wait_setup

Which is a good test for this patch. The results (in lock/unlock operations per second) are:

before: 1538203 ops/sec
after:  2189219 ops/sec

An improvement of 42%.

A 32 core POWER7 improves even more:

before: 1279529 ops/sec
after:  2282076 ops/sec

An improvement of 78%.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
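A hedged sketch of one way to avoid the binutils dependency: encode the hinted form as a raw instruction word. The macro names are invented, and the field positions are assumptions taken from the Power ISA encoding rather than from the kernel's actual macros; check ppc-opcode.h for the real thing.

/* Illustrative encoding of "lwarx rT,rA,rB,EH" so that old assemblers which
 * reject the 4-operand form still build the file.  Assumed field positions:
 * RT at bit 21, RA at 16, RB at 11, EH in the low bit. */
#define SKETCH_INST_LWARX       0x7c000028u
#define SKETCH_LWARX(t, a, b, eh) \
        (SKETCH_INST_LWARX | ((t) << 21) | ((a) << 16) | ((b) << 11) | ((eh) & 1))

With something like this, lock code can emit ".long SKETCH_LWARX(...)" where it previously used a plain lwarx mnemonic.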
-
Submitted by Anton Blanchard
I often get asked if BAD interrupts are really bad. On some boxes (eg IBM machines running a hypervisor) there are valid cases where we are presented with an interrupt that is not for us. These cases are common enough to show up as thousands of BAD interrupts a day.

Tone them down by calling them spurious. Since they can be a significant cause of OS jitter, we may as well log them per cpu so we know where they are occurring.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
With NO_HZ it is useful to know how often the decrementer is going off. The patch below adds an entry for it and also adds it into the /proc/stat summaries.

While here, I added performance monitoring and machine check exceptions. I found it useful to keep an eye on the PMU exception rate when using the perf tool. Since it's possible to take a completely handled machine check on a System p box, it also sounds like a good idea to keep a machine check summary.

The event naming matches x86 to keep gratuitous differences to a minimum.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Now that we use printf style alignment there is no need to manually space these fields.
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
On a large machine I noticed the columns of /proc/interrupts failed to line up with the header after CPU9. At sufficiently large numbers of CPUs it becomes impossible to line up the CPU number with the counts.

While fixing this I noticed x86 has a number of updates that we may as well pull in. On PowerPC we currently omit an interrupt completely if there is no active handler, whereas on x86 it is printed if there is a non zero count. The x86 code also spaces the first column correctly based on nr_irqs.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
Right now we allocate a cacheline sized NR_CPUS array for xics IPI communication. Use DECLARE_PER_CPU_SHARED_ALIGNED to put it in percpu data in its own cacheline since it is written to by other cpus.

On a kernel with NR_CPUS=1024, this saves quite a lot of memory:

   text    data     bss      dec    hex filename
8767779 2944260 1505724 13217763 c9afe3 vmlinux.irq_cpustat
8767555 2813444 1505724 13086723 c7b003 vmlinux.xics

A saving of around 128kB.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
-
Submitted by Anton Blanchard
PowerPC is currently using asm-generic/hardirq.h which statically allocates an NR_CPUS irq_stat array. Switch to an arch specific implementation which uses per cpu data.

On a kernel with NR_CPUS=1024, this saves quite a lot of memory:

   text    data     bss      dec    hex filename
8767938 2944132 1636796 13348866 cbb002 vmlinux.baseline
8767779 2944260 1505724 13217763 c9afe3 vmlinux.irq_cpustat

A saving of around 128kB.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
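A minimal sketch of the per-CPU pattern this commit and the xics one above move to; the structure and field names are invented for illustration, and __get_cpu_var() is the 2010-era accessor.

#include <linux/percpu.h>
#include <linux/cache.h>

/* Hypothetical per-CPU statistics: each CPU gets its own cacheline-aligned
 * copy instead of one static NR_CPUS-sized array. */
struct demo_cpustat {
        unsigned int timer_irqs;
        unsigned int spurious_irqs;
} ____cacheline_aligned;

static DEFINE_PER_CPU_SHARED_ALIGNED(struct demo_cpustat, demo_cpustat);

static inline void demo_count_timer_irq(void)
{
        __get_cpu_var(demo_cpustat).timer_irqs++;
}

static inline unsigned int demo_timer_irqs_on(int cpu)
{
        return per_cpu(demo_cpustat, cpu).timer_irqs;
}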
-
Submitted by Breno Leitao
During an EEH recovery, the pci_dev structure can be null, mainly if an EEH event is detected during a PCI config operation. In this case the pci_dev will not be known (and will be null) and the kernel will crash with the following message:

Unable to handle kernel paging request for data at address 0x000000a0
Faulting instruction address: 0xc00000000006b8b4
Oops: Kernel access of bad area, sig: 11 [#1]
NIP [c00000000006b8b4] .eeh_event_handler+0x10c/0x1a0
LR [c00000000006b8a8] .eeh_event_handler+0x100/0x1a0
Call Trace:
[c0000003a80dff00] [c00000000006b8a8] .eeh_event_handler+0x100/0x1a0
[c0000003a80dff90] [c000000000031f1c] .kernel_thread+0x54/0x70

The bug occurs because pci_name() tries to access a null pointer. This patch just guarantees that pci_name() is not called on null pointers.

Signed-off-by: Breno Leitao <leitao@linux.vnet.ibm.com>
Signed-off-by: Linas Vepstas <linasvepstas@gmail.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
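A minimal sketch of the defensive pattern the fix describes; the function name is hypothetical, not the EEH code's. The point is simply that the name is only resolved when the pci_dev pointer is known.

#include <linux/kernel.h>
#include <linux/pci.h>

/* Report an EEH-style event without ever handing a NULL pci_dev to
 * pci_name(), which dereferences its argument. */
static void demo_report_eeh_event(struct pci_dev *dev)
{
        const char *name = dev ? pci_name(dev) : "<unknown device>";

        printk(KERN_ERR "EEH: error detected on %s\n", name);
}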
-
Submitted by Corey Minyard
The DMA ops require that coherent_dma_mask be set properly for a device, but this was not being done for devices on the MV64x60 that use DMA. Both the serial and ethernet devices need this or they won't be able to allocate memory.
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
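A hedged sketch of what setting the mask looks like when the platform devices are created; the helper name is invented and the 32-bit mask is an assumption for illustration.

#include <linux/dma-mapping.h>
#include <linux/platform_device.h>

/* Give a freshly created platform device a coherent DMA mask so its driver's
 * dma_alloc_coherent() calls can succeed. */
static void demo_set_dma_masks(struct platform_device *pdev)
{
        pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
        if (!pdev->dev.dma_mask)
                pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
}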
-
Submitted by Oleg Nesterov
The 64-bit version of ELF_PLAT_INIT() clears TIF_IA32, but at this point it has already been cleared by SET_PERSONALITY == set_personality_64bit.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Submitted by Oleg Nesterov
Commit 05d43ed8 ("x86: get rid of the insane TIF_ABI_PENDING bit") forgot about force_personality32. Fix it.
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 15 February 2010, 1 commit
-
-
Submitted by Paul Mundt
This follows the parisc change to ensure that tracehook_signal_handler() is aware of when we are single-stepping in order to ptrace_notify() appropriately. While this was implemented for 32-bit SH, sh64 neglected to make use of TIF_SINGLESTEP when it was folded in with the 32-bit code, resulting in ptrace_notify() never being called.

As sh64 uses all of the other abstractions already, this simply plugs in the thread flag in the appropriate enable/disable paths and fixes up the tracehook notification accordingly. With this in place, sh64 is brought in line with what 32-bit is already doing.

Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
-
- 13 February 2010, 2 commits
-
-
Submitted by Kyle McMartin
Mike Frysinger pointed out that calling tracehook_signal_handler with stepping=0 missed testing the thread flags, resulting in ptrace_notify never being called. Fix this by testing whether we're single stepping or branch stepping and setting the flag accordingly. Tested, seems to work.

Reported-by: Mike Frysinger <vapier@gentoo.org>
Signed-off-by: Kyle McMartin <kyle@mcmartin.ca>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
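A hedged sketch of the idea shared by this fix and the sh64 change above; the wrapper is invented, and the flag names (TIF_SINGLESTEP, TIF_BLOCKSTEP) follow the commit text rather than any one architecture's exact call site.

#include <linux/tracehook.h>
#include <linux/thread_info.h>

/* Tell the tracehook layer whether the tracee is being stepped, so
 * ptrace_notify() fires once the signal handler has been set up. */
static void demo_signal_delivered(int sig, siginfo_t *info,
                                  struct k_sigaction *ka,
                                  struct pt_regs *regs)
{
        int stepping = test_thread_flag(TIF_SINGLESTEP) ||
                       test_thread_flag(TIF_BLOCKSTEP);

        tracehook_signal_handler(sig, info, ka, regs, stepping);
}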
-
Submitted by Tony Luck
In its <asm/elf.h>, ia64 defines SET_PERSONALITY in a way that unconditionally sets the personality of the current process to PER_LINUX, losing any flag bits from the upper 3 bytes of current->personality. This is wrong. Those bits are intended to be inherited across exec (other code takes care of ensuring that security sensitive bits like ADDR_NO_RANDOMIZE are not passed to unsuspecting setuid/setgid applications).
Signed-off-by: Tony Luck <tony.luck@intel.com>
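A minimal sketch of the intent, hedged because it is not necessarily the committed ia64 definition: force only the base personality and keep the inherited flag bits above PER_MASK.

#include <linux/personality.h>
#include <linux/sched.h>

/* Reset the base personality to PER_LINUX while preserving the flag bits
 * that were inherited across exec. */
#define DEMO_SET_PERSONALITY(ex) \
        set_personality((current->personality & ~PER_MASK) | PER_LINUX)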
-
- 12 February 2010, 2 commits
-
-
Submitted by Stefan Roese
This patch adds support for boards with more than 512 MByte of RAM. Currently only 512 MB of memory is enabled in the DCCR/ICCR real-mode cache control registers. This patch now enables caching in real-mode for 2 GByte.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-
Submitted by David S. Miller
Should mask the stack with 0xf, not "0x15". Noticed by Blue Swirl <blauwirbel@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 11 February 2010, 6 commits
-
-
Submitted by David Daney
The patch that adds cpu_probe_vmbits is erroneously writing to reserved bit 12. Since we are really only probing high bits, don't write this bit with a one.
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Cc: Guenter Roeck <guenter.roeck@ericsson.com>
Patchwork: http://patchwork.linux-mips.org/patch/949/
Acked-by: Guenter Roeck <guenter.roeck@ericsson.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Julia Lawall
Test the value that was just allocated rather than the previously tested one.

A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)

// <smpl>
@r@
expression *x;
expression e;
identifier l;
@@

if (x == NULL || ...) {
  ... when forall
  return ...;
}
... when != goto l;
    when != x = e
    when != &x
*x == NULL
// </smpl>

Signed-off-by: Julia Lawall <julia@diku.dk>
To: linux-mips@linux-mips.org
To: linux-kernel@vger.kernel.org
To: kernel-janitors@vger.kernel.org
Patchwork: http://patchwork.linux-mips.org/patch/945/
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
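A generic illustration of the bug class this semantic patch catches (the names and error handling are made up): after the second allocation, the freshly assigned pointer must be the one that is checked.

#include <linux/errno.h>
#include <linux/slab.h>

struct demo_pair {
        void *a;
        void *b;
};

static int demo_alloc_pair(struct demo_pair *p)
{
        p->a = kzalloc(16, GFP_KERNEL);
        if (p->a == NULL)
                return -ENOMEM;

        p->b = kzalloc(16, GFP_KERNEL);
        if (p->b == NULL) {     /* the bug pattern re-tests p->a here */
                kfree(p->a);
                return -ENOMEM;
        }

        return 0;
}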
-
Submitted by David Daney
cpu_cache_init and the things it calls should all be __cpuinit instead of __devinit.
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
To: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/938/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Ralf Baechle
RTC support was rewritten but the defconfig files were not updated. Enable IPv6 support, which for some folks is already a must-have. Assign useful values to other new options.
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Wu Zhangjin
As reported by Maxime Bizon, the commit "MIPS: PowerTV: Fix support for timer interrupts with > 64 external IRQs" broke the r4k timer, since it didn't initialize the cp0_compare_irq_shift variable used in c0_compare_int_pending() on architectures where cpu_has_mips_r2 is false. This patch fixes it by initializing cp0_compare_irq_shift from cp0_compare_irq, as used in the old c0_compare_int_pending().
Reported-by: Maxime Bizon <mbizon@freebox.fr>
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
Cc: David VomLehn <dvomlehn@cisco.com>
Cc: mbizon@freebox.fr
Cc: linux-mips@linux-mips.org
Patchwork: http://patchwork.linux-mips.org/patch/922/
Tested-by: Shane McDonald <mcdonald.shane@gmail.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
-
Submitted by Aaro Koskinen
The platform data allocated with kmalloc() will become unreachable once init is complete, so it should be freed. The problem was discovered by kmemleak.
Signed-off-by: Aaro Koskinen <aaro.koskinen@nokia.com>
Acked-by: Adrian Hunter <adrian.hunter@nokia.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
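A hedged sketch of the general pattern rather than the board file's actual code (device and struct names are invented): platform_device_add_data() copies the buffer, so the temporary kmalloc()ed platform data can be freed once the device is registered, which is what keeps kmemleak quiet.

#include <linux/errno.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct demo_pdata {
        int bus_id;
};

static int __init demo_register_device(void)
{
        struct demo_pdata *pdata;
        struct platform_device *pdev;
        int ret;

        pdata = kzalloc(sizeof(*pdata), GFP_KERNEL);
        if (!pdata)
                return -ENOMEM;
        pdata->bus_id = 1;

        pdev = platform_device_alloc("demo-device", -1);
        if (!pdev) {
                kfree(pdata);
                return -ENOMEM;
        }

        ret = platform_device_add_data(pdev, pdata, sizeof(*pdata));
        kfree(pdata);                   /* the core keeps its own copy */
        if (ret) {
                platform_device_put(pdev);
                return ret;
        }

        ret = platform_device_add(pdev);
        if (ret)
                platform_device_put(pdev);
        return ret;
}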
-
- 10 February 2010, 1 commit
-
-
Submitted by Stefan Roese
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Josh Boyer <jwboyer@linux.vnet.ibm.com>
Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
-