- 17 December 2008, 13 commits
-
-
Committed by Mike Travis
Impact: remove cpumask_t's from stack. Simple transition to work_on_cpu(), rather than cpumask games. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Robert Richter <robert.richter@amd.com> Cc: jacob.shin@amd.com
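A hedged illustration of the pattern described above: work_on_cpu() runs a callback on a given CPU, so the driver no longer needs to save and restore current's cpumask on the stack. The helper names in the sketch are hypothetical, not taken from the patch.

```c
#include <linux/workqueue.h>

/* Callback runs on the CPU handed to work_on_cpu(); no cpumask juggling here. */
static long demo_check_on_cpu(void *arg)
{
        return demo_do_per_cpu_check(arg);      /* hypothetical per-CPU work */
}

static long demo_query_cpu(int cpu, void *arg)
{
        /* Old style (removed): copy current->cpus_allowed to a cpumask_t on the
         * stack, set_cpus_allowed_ptr() to the target CPU, do the work, restore.
         * New style: let the workqueue code run the function on that CPU. */
        return work_on_cpu(cpu, demo_check_on_cpu, arg);
}
```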
-
Committed by Mike Travis
Impact: remove cpumask_t from stack. We should not try to save and restore cpus_allowed on current. We can't use work_on_cpu() here, since it's in the hotplug cpu path (if anyone else tries to get the hotplug lock from a workqueue we could deadlock against them). Fortunately, we can just use smp_call_function_single(), since the function can run from an interrupt. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Oleg Nesterov <oleg@tv-sign.ru>
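A hedged sketch of the alternative described here: smp_call_function_single() runs a non-sleeping callback on the target CPU, so nothing touches current->cpus_allowed. The structure and helpers below are hypothetical.

```c
#include <linux/smp.h>

/* Hypothetical result structure filled in on the target CPU. */
struct demo_result {
        int cpu;
        unsigned long value;
};

/* Runs on the target CPU in interrupt context, so it must not sleep. */
static void demo_read_on_cpu(void *info)
{
        struct demo_result *res = info;

        res->value = demo_read_local_hw_state();    /* hypothetical helper */
}

static int demo_query_cpu(int cpu, struct demo_result *res)
{
        /* Last argument 1 = wait for completion; no cpumask allocation needed. */
        return smp_call_function_single(cpu, demo_read_on_cpu, res, 1);
}
```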
-
Committed by Mike Travis
Impact: use new API. Use the accessors rather than frobbing bits directly. Most of this is in arch code I haven't even compiled, but it is straightforward. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
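For illustration, a hedged sketch of the difference between frobbing bits directly and using the accessors; which accessors the patch actually touches varies by architecture, and the helper below is just representative.

```c
#include <linux/cpumask.h>

static void demo_mark_cpu_online(int cpu, struct cpumask *scratch)
{
        /* Old style: poke the bitmap (or the global map) directly, e.g.
         * cpu_set(cpu, cpu_online_map); */

        /* New style: dedicated accessors that work on cpumask pointers. */
        set_cpu_online(cpu, true);
        cpumask_set_cpu(cpu, scratch);
        cpumask_clear_cpu(cpu, scratch);
}
```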
-
Committed by Mike Travis
Impact: cleanup, futureproof. In fact, all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids, so use that instead of NR_CPUS in various places. This is always safe: no cpu number can be >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu>
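A small illustrative loop showing the substitution; the per-CPU helper is hypothetical.

```c
#include <linux/cpumask.h>

static void demo_init_all_cpus(void)
{
        int cpu;

        /* Old style: walk all NR_CPUS bits, most of which can never be valid:
         * for (cpu = 0; cpu < NR_CPUS; cpu++) ... */

        /* New style: nr_cpu_ids is the number of valid CPU ids (<= NR_CPUS). */
        for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                if (cpu_possible(cpu))
                        demo_init_per_cpu_state(cpu);   /* hypothetical */
}
```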
-
Committed by Mike Travis
This patch simply changes cpumask_t to struct cpumask, plus similar trivial modernizations. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
-
Committed by Mike Travis
Impact: cleanup, remove on-stack cpumask. The "map" arg is always cpu_online_mask. Importantly, set_affinity always ands the argument with cpu_online_mask anyway, so we don't need to do it in fixup_irqs(), avoiding a temporary. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
-
Committed by Mike Travis
Impact: cleanup, consolidate patches, use new API. Consolidate the following into a single patch to adapt to the new sparseirq code in arch/x86/kernel/io_apic.c, add allocation of cpumask_var_t's in domain and old_domain, and reduce further merge conflicts. Only one file (arch/x86/kernel/io_apic.c) is changed in all of these patches.
  0006-x86-io_apic-change-irq_cfg-domain-old_domain-to.patch
  0007-x86-io_apic-set_desc_affinity.patch
  0008-x86-io_apic-send_cleanup_vector.patch
  0009-x86-io_apic-eliminate-remaining-cpumask_ts-from-st.patch
  0021-x86-final-cleanups-in-io_apic-to-use-new-cpumask-AP.patch
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
-
Committed by Mike Travis
Impact: use updated APIs. Various API updates for x86:add-cpu_mask_to_apicid_and. (Note: kept separate because the previous patch has been "backported" to 2.6.27.) Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com>
-
Committed by Mike Travis
Impact: new API. Add a helper function that takes two cpumasks, ANDs them, and then returns the apicid of the result. This removes a case in io_apic.c that used a temporary cpumask to hold (mask & cfg->domain). Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
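A hedged sketch of the helper's shape, modeled on a physical-APIC style implementation; the per-CPU apicid lookup is an assumption about surrounding x86 code, not part of this commit text.

```c
#include <linux/cpumask.h>

static unsigned int demo_cpu_mask_to_apicid_and(const struct cpumask *cpumask,
                                                const struct cpumask *andmask)
{
        int cpu;

        /* Walk the intersection of the two masks without building a temporary
         * cpumask, and pick the first CPU that is also online. */
        for_each_cpu_and(cpu, cpumask, andmask)
                if (cpumask_test_cpu(cpu, cpu_online_mask))
                        break;

        if (cpu < nr_cpu_ids)
                return per_cpu(x86_cpu_to_apicid, cpu);
        return BAD_APICID;
}
```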
-
Committed by Mike Travis
Impact: cleanup, better debugging. This has proven useful in debugging, *before* we try to use for_each_possible_cpu(). It also now shows nr_cpumask_bits. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
-
Committed by Mike Travis
Impact: cleanup, change parameter passing.
* Change genapic interfaces to accept cpumask_t pointers where possible.
* Modify external callers to use cpumask_t pointers in function calls.
* Create a new send_IPI_mask_allbutself, which is the same as the send_IPI_mask functions but removes smp_processor_id() from the list. This removes another common need for a temporary cpumask_t variable.
* Functions that used a temporary cpumask_t variable for:
    cpumask_t allbutme = cpu_online_map;
    cpu_clear(smp_processor_id(), allbutme);
    if (!cpus_empty(allbutme)) ...
  become:
    if (!cpus_equal(cpu_online_map, cpumask_of_cpu(cpu))) ...
* Other minor code optimizations (like using cpus_clear instead of CPU_MASK_NONE, etc.)
Applies to linux-2.6.tip/master. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Acked-by: Ingo Molnar <mingo@elte.hu>
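A hedged sketch contrasting the old on-stack pattern with the new allbutself variant; the surrounding functions are illustrative, only the helper names come from the commit.

```c
#include <linux/cpumask.h>
#include <linux/smp.h>

/* Old pattern: build "everyone but me" on the stack before sending the IPI. */
static void demo_send_ipi_allbutself_old(int vector)
{
        cpumask_t allbutme = cpu_online_map;

        cpu_clear(smp_processor_id(), allbutme);
        if (!cpus_empty(allbutme))
                send_IPI_mask(allbutme, vector);
}

/* New pattern: the allbutself op skips the sender itself, so no temporary. */
static void demo_send_ipi_allbutself_new(int vector)
{
        if (!cpus_equal(cpu_online_map, cpumask_of_cpu(smp_processor_id())))
                send_IPI_mask_allbutself(cpu_online_mask, vector);
}
```
-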
Committed by Yinghai Lu
Impact: improve NUMA handling by migrating irq_desc on smp_affinity changes, if CONFIG_NUMA_MIGRATE_IRQ_DESC is set:
- make irq_desc follow the affinity setting (i.e. irq_desc moving, etc.)
- call move_irq_desc() in irq_complete_move()
- legacy irq_descs are not moved, because they are allocated via a static array
For logical apic mode, move_desc_in_progress_in_same_domain needs to be added, otherwise the descriptor will not be moved; this can also require two phases to get the irq_desc moved. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mike Travis
Ingo Molnar wrote:
> allyes64 build failure:
>
> arch/x86/kernel/io_apic.c: In function ‘set_ir_ioapic_affinity_irq_desc’:
> arch/x86/kernel/io_apic.c:2295: error: incompatible type for argument 2 of ‘migrate_ioapic_irq_desc’
> arch/x86/kernel/io_apic.c: In function ‘ir_set_msi_irq_affinity’:
> arch/x86/kernel/io_apic.c:3205: error: incompatible type for argument 2 of ‘set_extra_move_desc’
> make[1]: *** wait: No child processes. Stop.
Here's a small patch to correct the build error with the post-merge tree. Built and boot-tested. I will reset the follow-on patches in my brand new git tree to accommodate this change. Fix two references in io_apic.c that were incorrect. Signed-off-by: Mike Travis <travis@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 15 December 2008, 1 commit
-
-
Committed by Zachary Amsden
VMI initialization can relocate the fixmap, causing early_ioremap to malfunction if it is initialized before the relocation. To fix this, VMI activation is split into two phases: the detection, which must happen before setting up ioremap, and the activation, which must happen after parsing early boot parameters. This fixes a crash on boot when VMI is enabled under VMware. Signed-off-by: Zachary Amsden <zach@vmware.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
- 13 December 2008, 4 commits
-
-
Committed by Rusty Russell
Impact: change calling convention of existing clock_event APIs. struct clock_event_timer's cpumask field gets changed to take a pointer, as does the ->broadcast function. Another single-patch change. For safety, we BUG_ON() in clockevents_register_device() if it's not set. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Cc: Ingo Molnar <mingo@elte.hu>
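A hedged sketch of the driver-side consequence: the cpumask field is now a pointer, and registration BUG_ON()s if it was left unset. The driver function is illustrative.

```c
#include <linux/clockchips.h>
#include <linux/cpumask.h>

/* clockevents_register_device() now contains BUG_ON(!dev->cpumask), so point
 * the field at a valid mask before registering; no on-stack cpumask_t copy. */
static void demo_register_local_timer(struct clock_event_device *evt, int cpu)
{
        evt->cpumask = cpumask_of(cpu);
        clockevents_register_device(evt);
}
```
-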
Committed by Rusty Russell
Impact: change existing irq_chip API. Not much point in a gentle transition here: the struct irq_chip's set_affinity method signature needs to change. Fortunately, the code is not widely used, but it hits a few architectures. Note: in irq_select_affinity() I save a temporary by mangling irq_desc[irq].affinity directly. Ingo, does this break anything? (Folded in fix from KOSAKI Motohiro.) Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Reviewed-by: Grant Grundler <grundler@parisc-linux.org> Acked-by: Ingo Molnar <mingo@redhat.com> Cc: ralf@linux-mips.org Cc: grundler@parisc-linux.org Cc: jeremy@xensource.com Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
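A hedged sketch of the signature change for a chip's set_affinity hook; the chip and the hardware helper are made up for illustration.

```c
#include <linux/cpumask.h>
#include <linux/irq.h>

/* Old style (removed): void (*set_affinity)(unsigned int irq, cpumask_t mask);
 * i.e. the whole mask was copied by value. */

/* New style: the mask arrives as a const pointer. */
static void demo_set_affinity(unsigned int irq, const struct cpumask *mask)
{
        unsigned int dest = cpumask_first(mask);    /* pick a target CPU */

        demo_program_irq_routing(irq, dest);        /* hypothetical hardware poke */
}

static struct irq_chip demo_chip = {
        .name         = "demo",
        .set_affinity = demo_set_affinity,
};
```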
-
Committed by Rusty Russell
cpumask: change cpumask_scnprintf, cpumask_parse_user, cpulist_parse, and cpulist_scnprintf to take pointers. Impact: change calling convention of existing cpumask APIs. Most cpumask functions started with cpus_: these have been replaced by cpumask_ ones which take struct cpumask pointers as expected. These four functions don't have good replacement names; fortunately they're rarely used, so we just change them over. Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Signed-off-by: Mike Travis <travis@sgi.com> Acked-by: Ingo Molnar <mingo@elte.hu> Cc: paulus@samba.org Cc: mingo@redhat.com Cc: tony.luck@intel.com Cc: ralf@linux-mips.org Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: cl@linux-foundation.org Cc: srostedt@redhat.com
-
Committed by Rusty Russell
Impact: cleanup. Each SMP arch defines these themselves; move them to a central location. Twists:
1) Some archs (m32r, parisc, s390) set possible_map to all 1s, so we add a CONFIG_INIT_ALL_POSSIBLE for this rather than break them.
2) mips and sparc32 '#define cpu_possible_map phys_cpu_present_map'. Those archs simply have phys_cpu_present_map replaced everywhere.
3) Alpha defined cpu_possible_map to cpu_present_map; this is tricky, so I just manipulate them both in sync.
4) IA64, cris and m32r have gratuitous 'extern cpumask_t cpu_possible_map' declarations.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> Reviewed-by: Grant Grundler <grundler@parisc-linux.org> Tested-by: Tony Luck <tony.luck@intel.com> Acked-by: Ingo Molnar <mingo@elte.hu> Cc: Mike Travis <travis@sgi.com> Cc: ink@jurassic.park.msu.ru Cc: rmk@arm.linux.org.uk Cc: starvik@axis.com Cc: tony.luck@intel.com Cc: takata@linux-m32r.org Cc: ralf@linux-mips.org Cc: grundler@parisc-linux.org Cc: paulus@samba.org Cc: schwidefsky@de.ibm.com Cc: lethal@linux-sh.org Cc: wli@holomorphy.com Cc: davem@davemloft.net Cc: jdike@addtoit.com Cc: mingo@redhat.com
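A hedged sketch, from memory rather than from the patch itself, of what centralized definitions with the new CONFIG_INIT_ALL_POSSIBLE switch look like:

```c
/* kernel/cpu.c (sketch): one shared set of maps instead of one copy per arch. */
#include <linux/cpumask.h>
#include <linux/module.h>

#ifdef CONFIG_INIT_ALL_POSSIBLE
/* archs that historically started with every CPU marked possible */
cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
#else
cpumask_t cpu_possible_map __read_mostly;
#endif
EXPORT_SYMBOL(cpu_possible_map);

cpumask_t cpu_online_map __read_mostly;
EXPORT_SYMBOL(cpu_online_map);

cpumask_t cpu_present_map __read_mostly;
EXPORT_SYMBOL(cpu_present_map);
```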
-
- 08 December 2008, 13 commits
-
-
Committed by Ingo Molnar
These warnings:
  arch/x86/kernel/paravirt-spinlocks.c: In function ‘default_spin_lock_flags’:
  arch/x86/kernel/paravirt-spinlocks.c:12: warning: passing argument 1 of ‘__raw_spin_lock’ from incompatible pointer type
  arch/x86/kernel/paravirt-spinlocks.c: At top level:
  arch/x86/kernel/paravirt-spinlocks.c:11: warning: ‘default_spin_lock_flags’ defined but not used
showed that the prototype of default_spin_lock_flags() was confused about what type spinlocks have. The proper type on UP is raw_spinlock_t. Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Frederic Weisbecker
Impact: provide a way to pause the function graph tracer. As suggested by Steven Rostedt, the previous patch that prevented spinlock functions from being traced shouldn't use the raw_spinlock to fix it. It's much better to follow lockdep with a normal spinlock, so this patch adds a new per-task flag that lets the function graph tracer be paused. We can also send an ftrace_printk without worrying about the irrelevant traced spinlock during insertion. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
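A hedged sketch of how a per-task pause flag of this kind is used; the field and helper names are my assumption about the change, not quoted from it.

```c
#include <linux/sched.h>

/* Assumed: an atomic pause counter on the task, checked by the graph tracer's
 * entry hook before it records anything. */
static inline void demo_pause_graph_tracing(void)
{
        atomic_inc(&current->tracing_graph_pause);
}

static inline void demo_unpause_graph_tracing(void)
{
        atomic_dec(&current->tracing_graph_pause);
}

static void demo_emit_debug_output(void)
{
        demo_pause_graph_tracing();
        /* safe to print or take locks without tracing them in the meantime */
        demo_do_untraceable_work();     /* hypothetical */
        demo_unpause_graph_tracing();
}
```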
-
Committed by Frederic Weisbecker
Impact: trace more functions. When the function graph tracer is configured, three more files are excluded from tracing just to keep only four functions from being traced, and this impacts the normal function tracer too.
arch/x86/kernel/process_64/32.c: I had crashes when I let this file be traced. After some debugging, I saw that the "current" task pointer was changed inside __switch_to(), i.e. "write_pda(pcurrent, next_p);" inside process_64.c. Since the tracer stores the original return address of the function inside current, we had crashes. Only __switch_to() has to be excluded from tracing.
kernel/module.c and kernel/extable.c: because of a function used internally by the function graph tracer, __kernel_text_address(). To let the other functions inside these files be traced, this patch introduces the __notrace_funcgraph function prefix, which is __notrace if the function graph tracer is configured and nothing if not. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
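A hedged sketch of how a conditional annotation like __notrace_funcgraph is typically defined and applied; the exact definition is recalled from memory, so treat it as an assumption.

```c
/* Annotation that only disables mcount instrumentation when the function
 * graph tracer is built in. */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
#define __notrace_funcgraph     notrace
#else
#define __notrace_funcgraph
#endif

/* Only __switch_to() needs to be excluded, instead of dropping -pg for the
 * whole file. (Simplified signature; the real body does the context switch.) */
__notrace_funcgraph struct task_struct *
demo_switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
        /* ... context switch, including write_pda(pcurrent, next_p) ... */
        return prev_p;
}
```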
-
Committed by Yinghai Lu
Impact: cleanup. Reorder the exit path in __get_smp_config(); also move two printouts to acpi_process_madt(). Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Joerg Roedel
Impact: minor fix. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
Impact: minor fix. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
Impact: cleanup. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
Impact: bugfix. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
Impact: bugfix in the iommu_map_page function. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Yinghai Lu
Impact: simplify code. Pass irq_desc and cfg around instead of raw IRQ numbers; this way we don't have to look them up again and again. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Impact: sanitize MSI irq number ordering from top-down to bottom-up. Allocate new MSI IRQs starting from nr_irqs_gsi (which is somewhere below 256) and counting upwards, instead of decreasing from NR_IRQS. (The latter method can result in confusingly high IRQ numbers if NR_CPUS is set to a high value and NR_IRQS scales up accordingly.) Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Impact: cleanup. Introduce NR_IRQS_LEGACY instead of a hard-coded number. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu
Impact: new feature. Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM with NR_CPUS set to large values. The goal is to be able to scale up to a much larger NR_IRQS value without impacting the (important) common case. To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of irq_desc pointers. When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get each irq_desc; this also makes the IRQ descriptors NUMA-local (to the site that calls request_irq()). This gets rid of the irq_cfg[] static array on x86 as well: irq_cfg now uses desc->chip_data on x86 to store the irq_cfg. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
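A hedged, much simplified sketch of the sparse idea: keep pointers and allocate descriptors on demand on the requesting node. The table layout and helper are illustrative, not the actual kernel code.

```c
#include <linux/irq.h>
#include <linux/slab.h>

#ifdef CONFIG_SPARSE_IRQ
/* Pointer table instead of a huge static struct irq_desc[NR_IRQS] array. */
static struct irq_desc *demo_irq_descs[NR_IRQS] __read_mostly;

static struct irq_desc *demo_irq_to_desc_alloc(unsigned int irq, int node)
{
        struct irq_desc *desc = demo_irq_descs[irq];

        if (!desc) {
                /* NUMA-local allocation, keyed to the node requesting the IRQ. */
                desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
                /* On x86, per-IRQ vector/domain state (irq_cfg) now hangs off
                 * desc->chip_data instead of a static irq_cfg[] array. */
                demo_irq_descs[irq] = desc;
        }
        return desc;
}
#endif
```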
-
- 04 December 2008, 1 commit
-
-
Committed by Andi Kleen
Impact: fix boot crash with numcpus=0 on certain systems. Fix an early exception in __get_smp_config() with nosmp: bail out early when there is no MP table. Reported-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested-by: Wu Fengguang <fengguang.wu@intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 03 December 2008, 8 commits
-
-
Committed by Joerg Roedel
The access to the iommu->need_sync member needs to be protected by the iommu->lock; otherwise this is a possible race condition. Fix it with this patch. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
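A hedged sketch of the locking rule stated above; the completion-wait helper is hypothetical, only the lock and flag names come from the commit text.

```c
#include <linux/spinlock.h>

static void demo_iommu_sync(struct amd_iommu *iommu)
{
        unsigned long flags;

        /* need_sync must only be read and cleared under iommu->lock, so a
         * concurrent mapping path cannot race with the sync decision. */
        spin_lock_irqsave(&iommu->lock, flags);
        if (iommu->need_sync) {
                demo_queue_completion_wait(iommu);      /* hypothetical */
                iommu->need_sync = 0;
        }
        spin_unlock_irqrestore(&iommu->lock, flags);
}
```
-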
Committed by Joerg Roedel
In some rare cases a request can arrive at an IOMMU with its original requestor id even though it is aliased. Handle this by setting the device table entry to the same protection domain for both the original and the aliased requestor id. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
-
Committed by Joerg Roedel
Impact: remove stale IOTLB entries. In the non-default nofullflush case the GART is only flushed when next_bit wraps around. But it can happen that an unmap operation unmaps memory which is behind the current next_bit location. If these addresses are reused, it may result in stale GART IO/TLB entries. Fix this by always setting the GART next_bit behind an unmapped location. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
Impact: robustness checks. Add more checks in the function graph code to detect errors and perhaps print out better information if a bug happens. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
Impact: feature, let the entry function decide to trace or not. This patch lets the graph tracer entry function decide if the tracing should be done at the end as well. This requires all function graph entry functions to return 1 if the call should be traced, or 0 if the return should not be traced. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
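A hedged sketch of an entry callback under this convention; the depth filter is made up purely for illustration.

```c
#include <linux/ftrace.h>

/* Return 1 to have the matching function return traced as well, 0 to skip it. */
static int demo_graph_entry(struct ftrace_graph_ent *trace)
{
        if (trace->depth > 3)
                return 0;       /* too deep: don't record the return */

        return 1;
}
```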
-
Committed by Steven Rostedt
Impact: better dumpstack output. I noticed in my crash dumps, and even in the stack tracer, that a lot of functions listed in the stack trace are simply return_to_handler, which is the ftrace graph's way to insert its own call into the return of a function. But we lose out on where the actual function was called from. This patch adds hooks into the dumpstack mechanism that detect this and find the real function to print. Both are printed to let the user know that a hook is still in place. This does give a funny side effect in the stack tracer output:
    Depth    Size   Location    (80 entries)
    -----    ----   --------
  0)  4144      48   save_stack_trace+0x2f/0x4d
  1)  4096     128   ftrace_call+0x5/0x2b
  2)  3968      16   mempool_alloc_slab+0x16/0x18
  3)  3952     384   return_to_handler+0x0/0x73
  4)  3568    -240   stack_trace_call+0x11d/0x209
  5)  3808     144   return_to_handler+0x0/0x73
  6)  3664    -128   mempool_alloc+0x4d/0xfe
  7)  3792     128   return_to_handler+0x0/0x73
  8)  3664     -32   scsi_sg_alloc+0x48/0x4a [scsi_mod]
As you can see, the real functions are now negative. This is due to them not being found inside the stack. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Steven Rostedt
Impact: new ftrace_graph_stop function. While developing more features of function graph, I hit a bug that caused the WARN_ON to trigger in the prepare_ftrace_return function. Well, it was hard for me to find out what was happening, because the bug would not print; it would just cause a hard lockup or reboot. The reason is that it is not safe to call printk from this function. Looking further, I also found that it calls unregister_ftrace_graph, which grabs a mutex and calls kstop machine. This would definitely lock the box up if it were to trigger. This patch adds a fast and safe ftrace_graph_stop() which will stop the function tracer. Then it is safe to call the WARN_ON. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
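A hedged sketch of the usage pattern described: stop the tracer with the lightweight helper first, after which the WARN_ON is safe.

```c
#include <linux/ftrace.h>
#include <linux/kernel.h>

static void demo_handle_ret_stack_bug(void)
{
        /* Fast stop: no printk, no mutex, no stop_machine from this context. */
        ftrace_graph_stop();
        WARN_ON(1);
}
```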
-
Committed by Steven Rostedt
Impact: consistency change for function graph. This patch makes function graph record the mcount caller address the same way the function tracer does. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
-