- 21 Jul 2007, 1 commit
-
-
Committed by Tony Luck
This is a merge of Peter Keilty's initial patch for this (which was revived by Bob Picco) with Hidetoshi Seto's fixes and scaling improvements.

Acked-by: Bob Picco <bob.picco@hp.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 20 Jul 2007, 5 commits
-
-
Committed by Yasuaki Ishimatsu
> arch/ia64/kernel/iosapic.c:597: warning: 'iosapic_free_rte' defined but not used
>
> This isn't spurious, the only call to iosapic_free_rte() has been removed, but there
> is still a call to iosapic_alloc_rte() ... which means we must have a memory leak.

I did it on purpose (and let the warning slip through...) because I consider iosapic_free_rte() to be no longer needed. I decided to retain iosapic_rte_info to keep the gsi-to-irq binding after a device is disabled. This does need some extra memory, but only sizeof(iosapic_rte_info) * <the number of removed devices> bytes, and there is no memory leak because re-enabled devices reuse the iosapic_rte_info they used before being disabled.

Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by David Chinner
Add sys_fallocate for ia64. This uses empty slot #1303, erroneously marked as reserved for move_pages (which had already been allocated as syscall #1276).

Signed-off-by: Dave Chinner <dgc@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Fenghua Yu
Currently most of the per-cpu data that is accessed by different cpus has a ____cacheline_aligned_in_smp attribute. Move all this data into the new per-cpu shared data section, .data.percpu.shared_aligned. This separates the percpu data that is referenced frequently by other cpus from the local-only percpu data.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Committed by Fenghua Yu
The per-cpu data section contains two types of data: one set that is exclusively accessed by the local cpu, and another that is per cpu but also shared with remote cpus. In the current kernel these two sets are not clearly separated, so the same cacheline can end up shared between the two sets of data, which results in unnecessary bouncing of the cacheline between cpus.

One way to fix the problem is to cacheline-align the remotely accessed per-cpu data at both the beginning and the end. Because of the padding at both ends, this would likely waste some memory, and the interface to achieve it is not clean.

This patch moves the remotely accessed per-cpu data (currently marked ____cacheline_aligned_in_smp) into a different section, where all the data elements are cacheline aligned. This cleanly differentiates the local-only data from the remotely accessed data.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Christoph Lameter <clameter@sgi.com>
Cc: <linux-arch@vger.kernel.org>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Andi Kleen <ak@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
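A minimal sketch of the kind of definition macro this separation enables, assuming the section name given above (.data.percpu.shared_aligned); the macro name and exact shape are illustrative:

    /* Illustrative sketch: put remotely accessed percpu data into its own
     * section where every element is cacheline aligned, instead of padding
     * each object by hand at both ends. */
    #define DEFINE_PER_CPU_SHARED_ALIGNED(type, name)                    \
            __attribute__((__section__(".data.percpu.shared_aligned"))) \
            __typeof__(type) per_cpu__##name                             \
            ____cacheline_aligned_in_smp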
-
Committed by Michael Ellerman
I realise jprobes are a razor-blades-included type of interface, but that doesn't mean we can't try and make them safer to use. This guy I know once wrote code like this:

    struct jprobe jp = {
            .kp.symbol_name = "foo",
            .entry = "jprobe_foo"
    };

And then his kernel exploded. Oops.

This patch adds an arch hook, arch_deref_entry_point() (I don't like it either), which takes the void * in a struct jprobe and gives back the text address that it represents. We can then use that in register_jprobe() to check that the entry point we're passed is actually in the kernel text, rather than just some random value.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Cc: Prasanna S Panchamukhi <prasanna@in.ibm.com>
Acked-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
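A hedged sketch of the hook and its use (the weak generic fallback shown here is an assumption; the check mirrors what the text describes):

    /* Generic fallback: on most architectures a function pointer is
     * already a text address. Architectures with function descriptors
     * (e.g. ia64) override this to dereference the descriptor. */
    unsigned long __weak arch_deref_entry_point(void *entry)
    {
            return (unsigned long)entry;
    }

    int register_jprobe(struct jprobe *jp)
    {
            unsigned long addr = arch_deref_entry_point(jp->entry);

            /* Reject entry points outside kernel text; this catches
             * the string "jprobe_foo" from the example above. */
            if (!kernel_text_address(addr))
                    return -EINVAL;
            /* ... proceed with normal registration ... */
            return 0;
    }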
-
- 18 Jul 2007, 14 commits
-
-
Committed by Pavel Emelianov
If the kernel OOPSed or BUGed then it probably should be considered as tainted. Thus, all subsequent OOPSes and SysRq dumps will report the tainted kernel. This saves a lot of time explaining oddities in the calltraces.

Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Added parisc patch from Matthew Wilson -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
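A minimal sketch of the idea, assuming a dedicated taint flag for this purpose (the flag name is an assumption):

    /* On the oops path, record that the kernel has already died once, so
     * later oopses and SysRq dumps report a tainted kernel. */
    void die(const char *str, struct pt_regs *regs, long err)
    {
            /* ... print the oops report ... */
            add_taint(TAINT_DIE);   /* flag name assumed; shows as 'D' */
            /* ... */
    }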
-
Committed by Mel Gorman
This patch adds the kernelcore= parameter for x86. Once all patches are applied, a new command-line parameter and a new sysctl exist. This patch adds the necessary documentation.

From: Yasunori Goto <y-goto@jp.fujitsu.com>

When the "kernelcore" boot option is specified, the kernel can't boot up on ia64 because of an infinite loop. In addition, the parsing code can be handled in an architecture-independent manner. This patch uses common code to handle the kernelcore= parameter. It is only available to architectures that support arch-independent zone-sizing (i.e. define CONFIG_ARCH_POPULATES_NODE_MAP). Other architectures ignore the boot parameter.

[bunk@stusta.de: make cmdline_parse_kernelcore() static]
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Acked-by: Andy Whitcroft <apw@shadowen.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
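A hedged sketch of the arch-independent parsing this describes (helper and variable names are assumptions):

    /* Parse "kernelcore=<size>" once, in common code. Architectures
     * without CONFIG_ARCH_POPULATES_NODE_MAP never consult the result. */
    static unsigned long __initdata required_kernelcore;

    static int __init cmdline_parse_kernelcore(char *p)
    {
            required_kernelcore = memparse(p, &p); /* accepts K/M/G suffixes */
            return 0;
    }
    early_param("kernelcore", cmdline_parse_kernelcore);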
-
Committed by Yasuaki Ishimatsu
Add per-CPU vector domain support for IA64_DIG. It is enabled by adding the "vector=percpu" boot option.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Add per-CPU vector domain support for IA64_GENERIC. It is enabled by adding the "vector=percpu" boot option.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
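A minimal sketch of how the "vector=percpu" option could be wired up (the parsing helper and flag names are assumptions; only the option string comes from the text above):

    static int vector_domain_percpu;        /* assumed flag name */

    static int __init parse_vector_domain(char *arg)
    {
            if (arg && !strcmp(arg, "percpu"))
                    vector_domain_percpu = 1; /* one vector domain per cpu */
            return 0;
    }
    early_param("vector", parse_vector_domain);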
-
Committed by Yasuaki Ishimatsu
Add support for IRQ migration across vector domains.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Add fundamental support for multiple vector domains. Even with this patch there still exists only one vector domain; IRQ migration across domains is not yet supported by this patch.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Add mapping tables between irqs and vectors, and the code to manage them. This is necessary for supporting multiple vector domains, because the 1:1 mapping between irq and vector will change to n:1. The irq == vector relationship is explicitly retained for percpu interrupts, platform interrupts, ISA IRQs, and vectors assigned using assign_irq_vector(), because some programs might depend on it.

One further problem had to be considered: when pci drivers enable/disable devices dynamically, the device's irq number can change to a different one, which could break suspend/resume. To fix this, the gsi is bound to the irq.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
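A hedged sketch of the n:1 mapping tables described above (structure layout and names are assumptions):

    /* Forward map: each irq knows its vector and the cpu domain on which
     * that vector is valid, so different domains can reuse a vector. */
    struct irq_cfg {
            int             vector;
            cpumask_t       domain;
    };
    static struct irq_cfg irq_cfg[NR_IRQS];

    /* Reverse map, consulted at interrupt time: (cpu, vector) -> irq.
     * With a single vector domain this is identical on every cpu. */
    static int vector_irq[NR_CPUS][IA64_NUM_VECTORS];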
-
Committed by Yasuaki Ishimatsu
Need to check whether an irq is sharable among handlers when searching for a sharable IOSAPIC irq.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Much of the IOSAPIC code depends on the following assumptions, which become invalid once multiple vector domains are supported:

- 1:1 mapping between IRQ and vector
- IRQ == vector

To remove these assumptions, this patch changes iosapic_intr_info[] to be indexed by irq number instead of by vector.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Use create_irq()/destroy_irq() for iosapic interrupts.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Use a per-iosapic lock for indirect iosapic register access. This reduces lock contention.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
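A minimal sketch of indirect register access under a per-controller lock (the select/window register pattern follows the usual IOSAPIC layout; treat the details as illustrative):

    struct iosapic {
            char __iomem    *addr;  /* this controller's register window */
            spinlock_t      lock;   /* replaces one global iosapic_lock  */
    };

    static u32 iosapic_read(struct iosapic *io, u32 reg)
    {
            unsigned long flags;
            u32 val;

            /* Indirect access: select the register, then read the window.
             * The lock only serializes accesses to *this* controller. */
            spin_lock_irqsave(&io->lock, flags);
            writel(reg, io->addr + IOSAPIC_REG_SELECT);
            val = readl(io->addr + IOSAPIC_WINDOW);
            spin_unlock_irqrestore(&io->lock, flags);
            return val;
    }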
-
Committed by Yasuaki Ishimatsu
Clean up the ordering of irq_desc.lock and iosapic_lock in iosapic_register_intr() and iosapic_unregister_intr().

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Remove duplicated members in iosapic_rte_info in iosapic.c. This patch has no functional changes.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Yasuaki Ishimatsu
Remove an unnecessary level of indentation between spin_lock() and spin_unlock() in iosapic.c. This has no functional change.

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 17 Jul 2007, 1 commit
-
-
Committed by Yinghai Lu
Because SERIAL_PORT_DFNS was removed from include/asm-i386/serial.h and include/asm-x86_64/serial.h, the serial8250_ports need to be probed late in the serial initialization stage. The console_init => serial8250_console_init => register_console => serial8250_console_setup path will return -ENODEV, and console ttyS0 cannot be enabled at that time. We would need to wait until uart_add_one_port() in drivers/serial/serial_core.c calls register_console() to get console ttyS0; that is too late.

Make early_uart use early_param, so the uart console can be used earlier. Make it a bootconsole with the CON_BOOT flag, so it can use the console handover feature and will switch to the corresponding normal serial console automatically.

The new command line will be:

    console=uart8250,io,0x3f8,9600n8
    console=uart8250,mmio,0xff5e0000,115200n8

or

    earlycon=uart8250,io,0x3f8,9600n8
    earlycon=uart8250,mmio,0xff5e0000,115200n8

At a very early stage it will print:

    Early serial console at I/O port 0x3f8 (options '9600n8')
    console [uart0] enabled

Later, for the console handover, it will print:

    console handover: boot [uart0] -> real [ttyS0]

Signed-off-by: <yinghai.lu@sun.com>
Cc: Andi Kleen <ak@suse.de>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Gerd Hoffmann <kraxel@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 14 Jul 2007, 1 commit
-
-
Committed by Hidetoshi Seto
The ".acq" semantics of the load only apply w.r.t. other data access; reading the clock (ar.itc) isn't a data access, so strange things can happen here. Specifically, the read of ar.itc can be launched as soon as the read of xtime_lock.sequence is ISSUED. Since that read may cache miss, which might cause a thread switch, and there may be cache contention for the line containing xtime_lock, it may be a long time before the actual value is returned, so the ar.itc value may be very stale.

Move the consumption of r28 up before the read of ar.itc to make sure that we really have the current value of xtime_lock.sequence before we look at ar.itc.

Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 12 Jul 2007, 2 commits
-
-
Committed by Russ Anderson
Linux does not gracefully deal with multiple processors going through OS_MCA as part of the same MCA event. The first cpu into OS_MCA grabs the ia64_mca_serialize lock. Subsequent cpus wait for that lock, preventing them from reporting in as rendezvoused. The first cpu waits 5 seconds and then complains that not all the cpus have rendezvoused. The first cpu then handles its MCA, frees up all the rendezvoused cpus, and releases the ia64_mca_serialize lock. One of the subsequent cpus going through OS_MCA then gets the ia64_mca_serialize lock, waits another 5 seconds, and then complains that none of the other cpus have rendezvoused.

This patch allows multiple CPUs to gracefully go through OS_MCA. The first CPU into ia64_mca_handler() grabs an mca_count lock. Subsequent CPUs into ia64_mca_handler() are added to a list of cpus that need to go through OS_MCA (a bit set in mca_cpu) and report in as rendezvoused, but spin waiting their turn. The first CPU sees everyone rendezvous, handles its MCA, wakes up one of the other CPUs waiting to process its MCA (by clearing one mca_cpu bit), and then waits for the other cpus to complete their MCA handling. The next CPU handles its MCA and the process repeats until all the CPUs have handled their MCAs. When the last CPU has handled its MCA, it sets monarch_cpu to -1, releasing all the CPUs.

In testing this works more reliably and faster.

Thanks to Keith Owens for suggesting numerous improvements to this code.

Signed-off-by: Russ Anderson <rja@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
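A hedged sketch of the handoff described above (variable names follow the text; control flow is heavily simplified):

    static cpumask_t mca_cpu;       /* bit i set: cpu i still has an MCA */
    static atomic_t  mca_count;
    static int       monarch_cpu;

    void ia64_mca_handler_sketch(int cpu)       /* illustrative only */
    {
            if (atomic_add_return(1, &mca_count) == 1) {
                    monarch_cpu = cpu;  /* first in: await rendezvous */
            } else {
                    cpu_set(cpu, mca_cpu);          /* report in ...      */
                    while (cpu_isset(cpu, mca_cpu))
                            cpu_relax();            /* ... await our turn */
            }

            /* handle this cpu's MCA ... then wake the next waiter */
            if (!cpus_empty(mca_cpu))
                    cpu_clear(first_cpu(mca_cpu), mca_cpu);
            else
                    monarch_cpu = -1;   /* last one out releases everyone */
    }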
-
Committed by Jes Sorensen
Tell GCC to stop spewing out unnecessary warnings for unused variables passed to functions as pointers for ia64 files.

Signed-off-by: Jes Sorensen <jes@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 10 Jul 2007, 3 commits
-
-
Committed by Christian Kandeler
The SDM says that a brl instruction must be followed by a stop bit. Fix the instance in BRL_COND_FSYS_BUBBLE_DOWN where it isn't.

Signed-off-by: Christian Kandeler <christian.kandeler@hob.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Tony Luck
It's not a good idea to use "ssm psr.ic | psr.i" to simultaneously enable interrupts and interrupt state collection: the two bits can take effect asynchronously, so it is possible for an interrupt to be serviced while psr.ic is still zero.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Ingo Molnar
the SMP load-balancer uses the boot-time migration-cost estimation code to attempt to improve the quality of balancing. The reason for this code is that the discrete priority queues do not preserve the order of scheduling accurately, so the load-balancer skips tasks that were running on a CPU 'recently'.

this code is fundamentally fragile: the boot-time migration-cost detector doesn't really work on systems with large L3 caches, it caused boot delays on large systems, and the whole cache-hot concept made the balancing code pretty nondeterministic as well. (and hey, i wrote most of it, so i can say it out loud that it sucks ;-)

under CFS the same purpose of cache affinity can be achieved without any special cache-hot special-case: tasks are sorted in the 'timeline' tree and the SMP balancer picks tasks from the left side of the tree, thus the most cache-cold task is balanced automatically.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 27 Jun 2007, 2 commits
-
-
Committed by MUNEDA Takahiro
Remove duplicate header include line from arch/ia64/kernel/time.c.

Signed-off-by: MUNEDA Takahiro <muneda.takahiro@jp.fujitsu.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Keith Owens
Both rp_loc and pfs_loc can be in the register stack area _or_ in the memory stack area; the latter occurs when a struct pt_regs is pushed. Correct the validation check on these fields to check for both stack areas. Not allowing for memory stack locations means no backtrace past ia64_leave_kernel, or any other code that uses PT_REGS_UNWIND_INFO.

Signed-off-by: Keith Owens <kaos@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 25 May 2007, 2 commits
-
-
Committed by Tony Luck
Section mismatch: reference to .init.text:acpi_find_rsdp (between 'acpi_get_sysname' and 'acpi_request_vector')

acpi_get_sysname() needs to call the __init function acpi_find_rsdp(), but it doesn't have the __init attribute itself, hence the warning. Luckily it is only called from machvec_init(), which has the __init attribute, so the fix is to define acpi_get_sysname() as __init too.

Signed-off-by: Tony Luck <tony.luck@intel.com>
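A minimal sketch of the shape of the fix (the return type shown is an assumption):

    /* Before: called only from __init code, but not __init itself. */
    const char *acpi_get_sysname(void);

    /* After: marked __init to match its only caller, machvec_init(),
     * which silences the section-mismatch warning. */
    const char * __init acpi_get_sysname(void);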
-
Committed by Venki Pallipadi
Silly bug in the _PDC data setup. Haven't seen any real side-effects of this one yet, but it needs fixing regardless.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 24 May 2007, 1 commit
-
-
Committed by Tony Luck
Continuing the seemingly neverending quest to stomp out "Section mismatch" warnings.

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 23 May 2007, 2 commits
-
-
Committed by Robin Holt
Unwinding a running task has proven problematic. In one instance, the running task was attempting to unwind itself and received an interrupt between when get_wchan() allocated local variables on the stack and when unw_init_from_blocked_task() was called, which resulted in unw_init_frame_info() placing this task's task_struct pointer over the switch stack's ar_bspstore entry.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Robin Holt
This patch adds some sanity checks to keep the register and memory stack pointers in the unw_frame_info structure within the task's stack address range.

Signed-off-by: Robin Holt <holt@sgi.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 19 May 2007, 2 commits
-
-
Committed by Sam Ravnborg
With this consolidation we can now modify the .data section definition in one spot for all archs.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
-
Committed by Sam Ravnborg
Move definition of the .text section to asm-generic.

Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
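A hedged sketch of what an arch linker script looks like after this pair of consolidations (the macro names are assumptions based on the asm-generic header of that era):

    /* arch/<arch>/kernel/vmlinux.lds.S -- illustrative fragment */
    #include <asm-generic/vmlinux.lds.h>

    SECTIONS {
            .text : {
                    TEXT_TEXT   /* common .text contents, defined once
                                   in asm-generic for all archs */
            }
            .data : {
                    DATA_DATA   /* common .data contents */
            }
    }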
-
- 17 May 2007, 1 commit
-
-
Committed by Christoph Hellwig
Get rid of the notifier list and call the kprobes code directly if it is compiled in. This mirrors the changes that recently went into powerpc, s390, and sparc64.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Tony Luck <tony.luck@intel.com>
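A hedged sketch of the pattern, mirroring how the other architectures did it (the helper name and exact checks are assumptions):

    #ifdef CONFIG_KPROBES
    /* Call kprobes directly from the fault path instead of going through
     * an atomic notifier chain that every fault had to traverse. */
    static inline int notify_page_fault(struct pt_regs *regs, int trap)
    {
            if (!user_mode(regs) && kprobe_running())
                    return kprobe_fault_handler(regs, trap);
            return 0;
    }
    #else
    static inline int notify_page_fault(struct pt_regs *regs, int trap)
    {
            return 0;
    }
    #endif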
-
- 16 May 2007, 1 commit
-
-
Committed by Martin Michlmayr
Building with GCC 4.2, I get the following error:

    CC arch/ia64/kernel/mca.o
    arch/ia64/kernel/mca.c:275: error: __ksymtab_ia64_mlogbuf_finish causes a section type conflict

This is because ia64_mlogbuf_finish is both declared static and exported. Fix by removing the export (which is unneeded now).

Signed-off-by: Martin Michlmayr <tbm@cyrius.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
-
- 15 May 2007, 2 commits
-
-
Committed by Tony Luck
The previous spelling patch from Simon Arlott broke one spot that didn't need fixing (reported by Simon within 35 minutes of the patch ... but not until after I'd applied it to GIT and pushed :-(

Signed-off-by: Tony Luck <tony.luck@intel.com>
-
Committed by Jay Lan
The current implementation of kdump on INIT events would enter kdump processing on the DIE_INIT_MONARCH_ENTER and DIE_INIT_SLAVE_ENTER events. Thus, the monarch cpu would go ahead and boot up the kdump kernel while the slave cpus were still in INIT processing. On SN shub2 systems, this out-of-sync situation causes some slave cpus on different nodes to enter POD.

This patch moves the kdump entry points to DIE_INIT_MONARCH_LEAVE and DIE_INIT_SLAVE_LEAVE. It also sets the kdump_in_progress variable in the DIE_INIT_MONARCH_PROCESS event so that we do not dump all active stack traces to the console in the kdump case.

I have tested this patch on an SN machine and an HP RX2600.

Signed-off-by: Jay Lan <jlan@sgi.com>
Acked-by: Zou Nan hai <nanhai.zou@intel.com>
Signed-off-by: Tony Luck <tony.luck@intel.com>
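A hedged sketch of the notifier change described above (the helper name machine_kdump_on_init() is an assumption, and slave handling is simplified):

    static int kdump_init_notifier(struct notifier_block *nb,
                                   unsigned long val, void *data)
    {
            switch (val) {
            case DIE_INIT_MONARCH_PROCESS:
                    if (kdump_on_init)
                            kdump_in_progress = 1; /* skip console dumps */
                    break;
            case DIE_INIT_MONARCH_LEAVE:    /* was DIE_INIT_MONARCH_ENTER */
            case DIE_INIT_SLAVE_LEAVE:      /* was DIE_INIT_SLAVE_ENTER   */
                    if (kdump_in_progress)
                            machine_kdump_on_init();    /* assumed helper */
                    break;
            }
            return NOTIFY_DONE;
    }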
-