- 16 June 2009, 14 commits
-
-
Committed by Hong H. Pham

irq_choose_cpu() should compare the affinity mask against cpu_online_map rather than CPU_MASK_ALL, since irq_select_affinity() sets the interrupt's affinity mask to cpu_online_map "and" CPU_MASK_ALL (which ends up being just cpu_online_map). The mask comparison in irq_choose_cpu() will always fail since the two masks are not the same. So the CPU chosen is the first CPU in the intersection of cpu_online_map and CPU_MASK_ALL, which is always CPU0. That means all interrupts are reassigned to CPU0...

Distributing interrupts to CPUs in a linearly increasing round robin fashion is not optimal for the UltraSPARC T1/T2. Also, the irq_rover in irq_choose_cpu() causes an interrupt to be assigned to a different processor each time the interrupt is allocated and released. This may lead to an unbalanced distribution over time.

A static mapping of interrupts to processors is done to optimize and balance interrupt distribution. For the T1/T2, interrupts are spread to different cores first, and then to strands within a core.

The following are some benchmarks showing the effects of interrupt distribution on a T2. The test was done with iperf using a pair of T5220 boxes, each with a 10GBe NIU (XAUI) connected back to back.

  TCP     | Stock       Linear RR IRQ  Optimized IRQ
  Streams | 2.6.30-rc5  Distribution   Distribution
          | GBits/sec   GBits/sec      GBits/sec
  --------+-----------------------------------------
      1     0.839       0.862          0.868
      8     1.16        4.96           5.88
     16     1.15        6.40           8.04
    100     1.09        7.28           8.68

Signed-off-by: Hong H. Pham <hong.pham@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
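For orientation, a minimal, hedged C sketch of the "cores first, then strands" spreading idea follows; it is not the actual sparc64 code, and NUM_IRQS, num_cores and threads_per_core are illustrative placeholders for values the real code takes from the machine description.

	/*
	 * Hedged sketch: walk the core number in the inner position so
	 * every core receives one interrupt before any core is given a
	 * second hardware strand's worth.
	 */
	#define NUM_IRQS 64			/* illustrative size */

	static unsigned int irq_cpu_map[NUM_IRQS];

	static void build_irq_cpu_map(unsigned int num_cores,
				      unsigned int threads_per_core)
	{
		unsigned int i, core = 0, strand = 0;

		for (i = 0; i < NUM_IRQS; i++) {
			/* CPU id = core * strands-per-core + strand */
			irq_cpu_map[i] = core * threads_per_core + strand;

			if (++core == num_cores) {	/* visited every core once */
				core = 0;
				if (++strand == threads_per_core)
					strand = 0;
			}
		}
	}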
-
Committed by David S. Miller

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

This gets us real close to the generic implementation of setup_per_cpu_areas() except:

1) We store the per-cpu offset into the trap_block[], whereas the generic code has its own static array.

2) We have to initialize the %g5 register to hold the boot cpu's per-cpu area offset.

3) The OBP/MDESC cpu info scan is performed at the end.

Signed-off-by: David S. Miller <davem@davemloft.net>
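A rough, hedged illustration of point 1 follows; it is a simplified sketch, not the actual sparc64 routine, and the trap_block[] field name used for the stored offset is an assumption made for the example.

	/*
	 * Allocate one per-cpu chunk per possible CPU, copy the static
	 * per-cpu template into it, and record each CPU's offset in its
	 * trap_block[] slot instead of in a separate offset array.
	 */
	void __init setup_per_cpu_areas(void)
	{
		unsigned long size = PERCPU_ENOUGH_ROOM;
		char *ptr = alloc_bootmem_pages(size * NR_CPUS);
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++, ptr += size) {
			memcpy(ptr, __per_cpu_start,
			       __per_cpu_end - __per_cpu_start);
			/* hypothetical field name for the stored offset */
			trap_block[cpu].__per_cpu_base = ptr - __per_cpu_start;
		}
	}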
-
Committed by David S. Miller

Now that we defer the cpu_data() initializations to the end of per-cpu setup, we can get rid of this local hack we had to set up the per-cpu areas early. This is a necessary step in order to support HAVE_DYNAMIC_PER_CPU_AREA, since the per-cpu setup must run when page structs are available.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

We need to split up the cpu present mask setup from the cpu_data initialization, and this is a first step towards that.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

With feedback from Sam Ravnborg.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Surprisingly, this actually makes LOAD_PER_CPU_BASE() a little more efficient.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

Later we're going to want to get at these definitions from asm/percpu.h, and that's not possible via cpudata.h because of the set of dependencies the non-trap_block[] stuff has.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by David S. Miller

This really isn't necessary at all; a local variable suits the job just fine. This frees up 8 bytes in the trap_block[] that we can use later to store the per-cpu base addresses.

Signed-off-by: David S. Miller <davem@davemloft.net>
-
Committed by Rusty Russell

Commit ad6561df ("module: trim exception table on init free.") put a bogus trim_init_extable() function into ia64 which didn't compile.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 14 June 2009, 4 commits
-
-
Committed by Nicolas Pitre

Without this, the default implementation is a no-op, which is completely wrong with a VIVT cache, and usage of sg_copy_buffer() produces unpredictable results.

Tested-by: Sebastian Andrzej Siewior <bigeasy@breakpoint.cc>
Cc: stable@kernel.org
Signed-off-by: Nicolas Pitre <nico@marvell.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
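As a hedged sketch of what a VIVT-aware implementation of this hook can look like on ARM (simplified; the helper names are the usual ARM ones, shown for illustration rather than as the exact patch):

	/*
	 * Flush the kernel-side mapping of a page the kernel has written
	 * to; on a VIVT cache a no-op here leaves stale data behind.
	 */
	static inline void flush_kernel_dcache_page(struct page *page)
	{
		/* Highmem pages are flushed when they are kunmap()ed. */
		if (!PageHighMem(page))
			__cpuc_flush_dcache_page(page_address(page));
	}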
-
Committed by Jaswinder Singh Rajput

Fix the following 'make headers_check' warnings:

  usr/include/asm-mn10300/setup.h:14: extern's make no sense in userspace
  usr/include/asm-mn10300/setup.h:15: extern's make no sense in userspace

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
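The usual fix for this class of warning looks roughly like the hedged sketch below; the identifiers are illustrative, not the actual mn10300 header contents. Prototypes user space can never use are kept behind __KERNEL__ so the exported header no longer contains them.

	#ifndef _ASM_SETUP_H
	#define _ASM_SETUP_H

	#define COMMAND_LINE_SIZE 256		/* still visible to user space */

	#ifdef __KERNEL__
	/* kernel-only prototypes: hidden from the exported header */
	extern void setup_arch(char **cmdline_p);
	extern char boot_command_line[];
	#endif /* __KERNEL__ */

	#endif /* _ASM_SETUP_H */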
-
Committed by Jaswinder Singh Rajput

Fix the following 'make headers_check' warning:

  usr/include/asm-mn10300/ptrace.h:80: extern's make no sense in userspace

Signed-off-by: Jaswinder Singh Rajput <jaswinderrajput@gmail.com>
-
Committed by David Howells

Fix interaction with the new generic header stuff as added by:

  commit 6103ec56
  Author: Arnd Bergmann <arnd@arndb.de>
  Date:   Wed May 13 22:56:27 2009 +0000

      asm-generic: add generic ABI headers

The problem is that asm/signal.h has been made to include asm-generic/signal.h, but the redundant stuff from asm/signal.h has not been discarded, leading to multiple redefinitions.

Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 June 2009, 15 commits
-
-
Committed by Haavard Skinnemoen

The unaligned address exception handler (and others) does not scan the fixup tables before oopsing. This is bad because it means passing a badly aligned pointer from user space might crash the kernel. Fix this by scanning the fixup tables in _exception(). This should resolve the issue for unaligned addresses as well as other, less common exceptions that might happen during a userspace access. The page fault handler already does fixup processing.

Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
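A minimal, hedged sketch of the fixup-scan idea follows, using the generic kernel helpers rather than the exact avr32 code; the die() call and the user-mode branch are simplified.

	void _exception(long signr, struct pt_regs *regs, int code,
			unsigned long addr)
	{
		if (!user_mode(regs)) {
			const struct exception_table_entry *fixup;

			/* Fault raised in kernel mode: try the fixup table
			 * before declaring it fatal. */
			fixup = search_exception_tables(regs->pc);
			if (fixup) {
				regs->pc = fixup->fixup;  /* resume at fixup stub */
				return;
			}
			die("Unhandled exception in kernel mode", regs, signr);
		}

		/* ...otherwise deliver the signal to the faulting task... */
	}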
-
Committed by Alan Stern

This patch (as1241) renames a bunch of functions in the PM core. Rather than go through a boring list of name changes, suffice it to say that in the end we have a bunch of pairs of functions:

  device_resume_noirq    dpm_resume_noirq
  device_resume          dpm_resume
  device_complete        dpm_complete
  device_suspend_noirq   dpm_suspend_noirq
  device_suspend         dpm_suspend
  device_prepare         dpm_prepare

in which device_X does the X operation on a single device and dpm_X invokes device_X for all devices in the dpm_list.

In addition, the old dpm_power_up and device_resume_noirq have been combined into a single function (dpm_resume_noirq).

Lastly, dpm_suspend_start and dpm_resume_end are the renamed versions of the former top-level device_suspend and device_resume routines.

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
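A hedged, heavily simplified sketch of the device_X / dpm_X pattern described above (not the real drivers/base/power/main.c; error handling, locking and callback dispatch are omitted):

	/* device_X: operate on a single device. */
	static int device_suspend(struct device *dev)
	{
		/* ...invoke the device's suspend callback here... */
		return 0;
	}

	/* dpm_X: apply device_X to every device on dpm_list. */
	int dpm_suspend(pm_message_t state)
	{
		struct device *dev;
		int error = 0;

		list_for_each_entry_reverse(dev, &dpm_list, power.entry) {
			error = device_suspend(dev);
			if (error)
				break;
		}
		return error;
	}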
-
Committed by Magnus Damm

Rename the functions performing "_noirq" dev_pm_ops operations from device_power_down() and device_power_up() to device_suspend_noirq() and device_resume_noirq().

The new function names are chosen to show that the functions are responsible for calling the _noirq() versions to finalize the suspend/resume operation. The current function names do not perform power down/up anymore, so the names may be misleading.

Global function renames:
- device_power_down() -> device_suspend_noirq()
- device_power_up() -> device_resume_noirq()

Static function renames:
- suspend_device_noirq() -> __device_suspend_noirq()
- resume_device_noirq() -> __device_resume_noirq()

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
Acked-by: Len Brown <lenb@kernel.org>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Magnus Damm

This patch removes unused asm/suspend.h files for the following architectures: alpha, arm, ia64, m68k, mips, s390, um.

Signed-off-by: Magnus Damm <damm@igel.co.jp>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

This is the last unification step. Here we remove one of the files and rename the remaining one to cpu.c, as both are now the same. Also update power/Makefile, telling it to build cpu.o instead of cpu_(32|64).o.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

In this step, we unify the copyright notes of cpu_32.c and cpu_64.c, making the two files exactly the same. It's the last step before the actual unification, which will rename one of them to cpu.c and remove the other.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

In this step we unify the cpu_32.c and cpu_64.c functions that restore the saved processor state. We also eliminate the forward declaration of fix_processor_context() for X86_64, as it's no longer needed.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

In this step we unify the cpu_32.c and cpu_64.c functions that save the processor state.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

Aiming at total unification of cpu_32.c and cpu_64.c, in this step we unify the global variables and existing forward declarations of those files.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Sergio Luis

First step towards the unification of cpu_32.c and cpu_64.c. This commit unifies the headers of those files, making both of them use the same header files. It also removes the unneeded <module.h>.

Signed-off-by: Sergio Luis <sergio@larces.uece.br>
Signed-off-by: Lauro Salmito <laurosalmito@gmail.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jaswinder Singh Rajput

One of the numbers in arch/x86/kernel/acpi/sleep.c is long, but it is not annotated appropriately, so sparse warns about it. Fix that.

[rjw: Added the changelog.]

Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
-
Committed by Jean Delvare

Fix ETIMEOUT -> ETIMEDOUT typos.

Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Committed by Pavel Machek

".ko" is normally not included in Kconfig help; make it consistent.

Signed-off-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Committed by Sankar P

Fixes a trivial spelling error in powerpc code comments.

Signed-off-by: Sankar P <sankar.curiosity@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
Committed by Martin Olsson

trivial: fix typos s/paramter/parameter/ and s/excute/execute/ in documentation and source comments.

Signed-off-by: Martin Olsson <martin@minimum.se>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
-
- 12 June 2009, 7 commits
-
-
Committed by Matias Zabaljauregui

This version requires that host and guest have the same PAE status. NX cap is not offered to the guest yet.

Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Matias Zabaljauregui

Add support for kvm_hypercall4(); PAE wants it.

Signed-off-by: Matias Zabaljauregui <zabaljauregui at gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
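For reference, a four-argument hypercall wrapper on x86 looks roughly like the hedged sketch below, mirroring the existing kvm_hypercall3() pattern; the register chosen for the fourth argument follows the usual convention, so treat this as an illustration rather than the exact patch.

	static inline unsigned long kvm_hypercall4(unsigned int nr,
						   unsigned long p1,
						   unsigned long p2,
						   unsigned long p3,
						   unsigned long p4)
	{
		unsigned long ret;

		/* nr in %eax, arguments in %ebx, %ecx, %edx, %esi */
		asm volatile(KVM_HYPERCALL
			     : "=a"(ret)
			     : "a"(nr), "b"(p1), "c"(p2), "d"(p3), "S"(p4)
			     : "memory");
		return ret;
	}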
-
Committed by Matias Zabaljauregui

Replace the LHCALL_SET_PMD hypercall name with LHCALL_SET_PGD. (That's really what it is, and the confusion gets worse with PAE support.)

Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Reported-by: Jeremy Fitzhardinge <jeremy@goop.org>
-
Committed by Matias Zabaljauregui

Some cleanups, and replace direct assignment with the native_set_* macros, which properly handle 64-bit entries when PAE is activated.

Signed-off-by: Matias Zabaljauregui <zabaljauregui@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
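A hedged note on why the macros matter under PAE, as a simplified illustration: a PAE page-table entry is 64 bits wide, so a plain assignment compiles to two 32-bit stores with no ordering guarantee, while native_set_pte() (the real x86 helper) writes the high word first. The wrapper name below is made up purely to contrast the two stores.

	static inline void guest_set_pte_entry(pte_t *ptep, pte_t pteval)
	{
		/* Direct assignment (what the old code did):
		 *	*ptep = pteval;
		 * Under PAE a walker could observe a torn, half-updated
		 * entry between the two 32-bit stores.
		 */
		native_set_pte(ptep, pteval);	/* high word first, then low */
	}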
-
Committed by Rusty Russell

The downside of the last patch, which made restore_flags and irq_enable check interrupts, is that they are now too big to be patched directly into the callsites, so the C versions are always used. But the C versions go via PV_CALLEE_SAVE_REGS_THUNK, which saves all the registers.

In fact, we don't need any registers in the fast path, so we can do better than this if we actually code them in assembler.

The results are in the noise, but since it's about the same amount of code, it's worth applying.

1GB Guest->Host: input(suppressed), output(suppressed)

Before:
  Seconds: 0:16.53
  Packets: 377268,753673
  Interrupts: 22461,24297
  Notifications: 1(5245),21303(732370)
  Net IRQs triggered: 377023(245),42578(711095)

After:
  Seconds: 0:16.48
  Packets: 377289,753673
  Interrupts: 22281,24465
  Notifications: 1(5245),21296(732377)
  Net IRQs triggered: 377060(229),42564(711109)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Rusty Russell

lguest never checked for pending interrupts when enabling interrupts, and things still worked. However, it makes a significant difference to TCP performance, so it's time we fixed it by introducing a pending_irq flag and checking it on irq_restore and irq_enable.

These two routines are now too big to patch into the 8/10 bytes patch space, so we drop that code.

Note: The high latency on interrupt delivery had a very curious effect: once everything else was optimized, networking without GSO was faster than networking with GSO, since more interrupts were sent and hence there was a greater chance of one getting through to the Guest!

Note2: (Almost) closing the same loophole for iret doesn't have any measurable effect, so I'm leaving that patch for the moment.

Before:
  1GB tcpblast Guest->Host: 30.7 seconds
  1GB tcpblast Guest->Host (no GSO): 76.0 seconds

After:
  1GB tcpblast Guest->Host: 6.8 seconds
  1GB tcpblast Guest->Host (no GSO): 27.8 seconds

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
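A minimal, hedged C sketch of the guest-side idea follows; it is not the exact lguest code, and the shared-structure field names and the LHCALL_SEND_INTERRUPTS hypercall are assumptions used for illustration.

	static void lguest_irq_enable(void)
	{
		/* Tell the Host we will accept interrupts again. */
		lguest_data.irq_enabled = X86_EFLAGS_IF;

		/* Make the flag visible before we look at pending work. */
		mb();

		/* If the Host queued an interrupt while we were blocked,
		 * ask for it to be delivered now rather than waiting for
		 * the next natural exit. */
		if (lguest_data.irq_pending)
			kvm_hypercall0(LHCALL_SEND_INTERRUPTS);
	}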
-
Committed by Rusty Russell

Copy from arch/x86/kernel/irqinit_32.c: we don't use the vectors beyond LGUEST_IRQS (if any), but we might as well set them all.

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-