- 25 Aug 2010, 1 commit

Committed by Jeremy Fitzhardinge

IPIs and VIRQs are inherently per-CPU event types, so treat them as such:

- use a specific percpu irq_chip implementation, and
- handle them with handle_percpu_irq.

This makes the path for delivering these interrupts more efficient (no masking/unmasking, no locks), and it avoids problems with attempts to migrate them.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Stable Kernel <stable@kernel.org>
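A minimal sketch of the wiring this describes, assuming an irq_chip named xen_percpu_chip whose ack/mask callbacks touch only the local CPU's event channel (the chip definition itself is not shown in this log):

    #include <linux/irq.h>

    /* Assumed: an irq_chip whose operations are safe without masking or
     * descriptor locking, because the event is strictly per-CPU. */
    static struct irq_chip xen_percpu_chip;

    static void bind_percpu_evtchn(unsigned int irq)
    {
            /* handle_percpu_irq skips the mask/unmask and desc->lock
             * round-trips that handle_level_irq/handle_edge_irq do. */
            set_irq_chip_and_handler_name(irq, &xen_percpu_chip,
                                          handle_percpu_irq, "percpu");
    }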
-
- 30 Jul 2010, 1 commit

Committed by Stefano Stabellini

This patch introduces a CONFIG_XEN_PVHVM compile-time option to enable or disable Xen PV-on-HVM support.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
-
- 29 Jul 2010, 1 commit

Committed by Ian Campbell

In general, the semantics of IPIs are that they are expected to continue functioning after dpm_suspend_noirq().

Specifically, I have seen a deadlock between the callfunc IPI and the stop machine used by Xen's do_suspend() routine. If one CPU has already called dpm_suspend_noirq(), there is a window where it can be sent a callfunc IPI before all the other CPUs have entered stop_cpu(). If this happens, the first CPU ends up spinning in stop_cpu() waiting for the other to rendezvous in state STOPMACHINE_PREPARE, while the other is spinning in csd_lock_wait().

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: xen-devel@lists.xensource.com
LKML-Reference: <1280398595-29708-4-git-send-email-ian.campbell@citrix.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
- 23 Jul 2010, 3 commits

Committed by Stefano Stabellini

Don't break the assumption that the first 16 irqs are ISA irqs; make sure that the irq is actually free before using it. Use dynamic_irq_init_keep_chip_data() instead of dynamic_irq_init() so that chip_data is not NULL (a NULL chip_data breaks setup_vector_irq()).

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
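A sketch of the allocation rule this implies: skip the ISA range, test that the irq is really free, and keep chip_data intact. irq_is_free() is a hypothetical stand-in for whatever "actually free" check the patch uses:

    #include <linux/irq.h>

    static bool irq_is_free(unsigned int irq);  /* hypothetical check */

    static int find_unbound_irq(void)
    {
            int irq;

            /* irqs 0..15 stay reserved for ISA. */
            for (irq = 16; irq < nr_irqs; irq++)
                    if (irq_is_free(irq))
                            break;

            if (irq == nr_irqs)
                    panic("No available IRQ to bind to: increase nr_irqs!\n");

            /* Keeps chip_data non-NULL so setup_vector_irq() works. */
            dynamic_irq_init_keep_chip_data(irq);

            return irq;
    }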
-
Committed by Stefano Stabellini

Add the Xen PCI platform device driver, which is responsible for initializing the grant table and xenbus in PV-on-HVM mode. A few changes to xenbus and the grant table are necessary to allow the delayed initialization in HVM mode, and the grant table needs a few additional modifications to work in HVM mode.

The Xen PCI platform device raises an irq every time an event has been delivered to us. However, these interrupts are only delivered to vcpu 0. The Xen PCI platform interrupt handler calls xen_hvm_evtchn_do_upcall, a small wrapper around __xen_evtchn_do_upcall, the traditional Xen upcall handler (the very same used with traditional PV guests).

When running on HVM, the event channel upcall is never called while in progress, because it is a normal Linux irq handler (and we cannot switch the irq chip wholesale to the Xen PV ones, as we are running QEMU and might have passed-in PCI devices); therefore we cannot be sure that evtchn_upcall_pending is 0 when returning. For this reason, if evtchn_upcall_pending is set by Xen, we need to loop again over the event channels set pending; otherwise we might lose some event channel deliveries.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
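The re-check loop this describes, as a sketch; xen_this_vcpu_info() is a hypothetical accessor for the current vcpu's shared-info entry:

    void __xen_evtchn_do_upcall(void);           /* the traditional PV upcall scan */
    struct vcpu_info *xen_this_vcpu_info(void);  /* hypothetical accessor */

    void xen_hvm_evtchn_do_upcall(void)
    {
            struct vcpu_info *vcpu_info = xen_this_vcpu_info();

            do {
                    /* Clear the flag before scanning, so an event arriving
                     * mid-scan sets it again and forces another pass. */
                    vcpu_info->evtchn_upcall_pending = 0;
                    __xen_evtchn_do_upcall();
            } while (vcpu_info->evtchn_upcall_pending);
    }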
-
Committed by Sheng Yang

Set the callback to receive evtchns from Xen, using the callback-vector delivery mechanism.

The traditional way of receiving event channel notifications from Xen is via the interrupts from the platform PCI device. The callback vector is a newer alternative that allows us to receive notifications on any vcpu and doesn't need any PCI support: we allocate a vector exclusively to receive events, and in the vector handler we don't need to interact with the vlapic, therefore we avoid a VMEXIT.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: Sheng Yang <sheng@linux.intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
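Registering the callback presumably boils down to an HVM-param hypercall; a sketch assuming the standard HVMOP_set_param interface with HVM_PARAM_CALLBACK_IRQ, as used elsewhere in Xen platform code:

    #include <xen/interface/hvm/params.h>
    #include <xen/interface/hvm/hvm_op.h>

    static int set_callback_via(uint64_t via)
    {
            struct xen_hvm_param a;

            a.domid = DOMID_SELF;
            a.index = HVM_PARAM_CALLBACK_IRQ;
            a.value = via;  /* encodes "deliver events via this vector" */
            return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
    }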
-
- 30 Mar 2010, 1 commit

Committed by Tejun Heo

include cleanup: update gfp.h and slab.h includes to prepare for breaking the implicit slab.h inclusion from percpu.h.

percpu.h is included by sched.h and module.h, and thus ends up being included when building most .c files. percpu.h includes slab.h, which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion.

  http://userweb.kernel.org/~tj/misc/slabh-sweep.py

The script does the following:

* Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

* When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered: alphabetical, Christmas tree, reverse Christmas tree, or at the end if there doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

The conversion was done in the following steps.

1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while for others adding it to the implementation .h or embedding .c file was more appropriate. This step added inclusions to around 150 files.

3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually wildly available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

6. percpu.h was updated not to include slab.h.

7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

   * x86 and x86_64 UP and SMP allmodconfig and a custom test config
   * powerpc and powerpc64 SMP allmodconfig
   * sparc and sparc64 SMP allmodconfig
   * ia64 SMP allmodconfig
   * s390 SMP allmodconfig
   * alpha SMP allmodconfig
   * um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers, which should be easily discoverable on most builds of the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
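The practical effect on a file like drivers/xen/events.c is just an explicit include where an implicit one used to suffice; a minimal illustration:

    /* Before: kmalloc() compiled because percpu.h (pulled in via
     * sched.h/module.h) dragged in slab.h.  After the sweep, the
     * dependency is spelled out: */
    #include <linux/slab.h>         /* kmalloc(), kfree() */
    #include <linux/gfp.h>          /* GFP_KERNEL and friends */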
-
- 19 Feb 2010, 1 commit

Committed by Eric W. Biederman

Right now Xen's use of the x86 and ia64 handle_irq is just bizarre and very fragile, as it is very non-obvious that the function exists and is used by code out in drivers/.... Luckily, using handle_irq is completely unnecessary, and we can just use the generic irq APIs instead.

This still leaves drivers/xen/events.c as a problematic user of the generic irq APIs (it has "static struct irq_info irq_info[NR_IRQS]"), but that can be fixed some other time.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
LKML-Reference: <4B7CAAD2.10803@kernel.org>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
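A sketch of what "use the generic irq APIs" means at the dispatch point, assuming the driver's existing port-to-irq table:

    #include <linux/irq.h>

    extern int evtchn_to_irq[];     /* the driver's existing lookup table */

    static void dispatch_evtchn(unsigned int port)
    {
            int irq = evtchn_to_irq[port];

            /* generic_handle_irq() replaces the arch-private handle_irq()
             * hook that only x86 and ia64 happened to provide. */
            if (irq != -1)
                    generic_handle_irq(irq);
    }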
-
- 04 Dec 2009, 1 commit

Committed by Ian Campbell

On resume, irq_info[*].evtchn is reset to 0, since event channel mappings are not preserved over suspend/resume. The other contents of irq_info are preserved to allow rebind_evtchn_irq() to function.

However, when a device resumes it will try to unbind from the previous IRQ (e.g. blkfront goes blkfront_resume() -> blkif_free() -> unbind_from_irqhandler() -> unbind_from_irq()). This will fail due to the check for VALID_EVTCHN in unbind_from_irq(), and the IRQ is leaked. The device will then continue to resume and allocate a new IRQ, eventually leading to find_unbound_irq() panic()ing.

Fix this by changing unbind_from_irq() to handle teardown of interrupts which have type != IRQT_UNBOUND but are not currently bound to a specific event channel.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Stable Kernel <stable@kernel.org>
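The shape of the fix, sketched with heavy assumptions: close_evtchn() and mk_unbound_info() stand in for the driver's actual hypercall teardown and bookkeeping helpers:

    extern struct irq_info irq_info[];
    static void close_evtchn(int evtchn);          /* assumed helper */
    static struct irq_info mk_unbound_info(void);  /* assumed helper */

    static void unbind_from_irq(unsigned int irq)
    {
            int evtchn = evtchn_from_irq(irq);

            /* Only touch the hypervisor if the mapping is still live... */
            if (VALID_EVTCHN(evtchn))
                    close_evtchn(evtchn);

            /* ...but tear the irq down whenever it is bound at all, even
             * when resume has already zeroed its event channel. */
            if (type_from_irq(irq) != IRQT_UNBOUND) {
                    irq_info[irq] = mk_unbound_info();
                    dynamic_irq_cleanup(irq);
            }
    }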
-
- 01 Jul 2009, 1 commit

Committed by Pekka Enberg

The init_IRQ() function is now called with the slab allocator initialized. Therefore, we must not use the bootmem allocator in xen_init_IRQ(). Fixes the following boot-time warning:

  ------------[ cut here ]------------
  WARNING: at mm/bootmem.c:535 alloc_arch_preferred_bootmem+0x27/0x45()
  Modules linked in:
  Pid: 0, comm: swapper Not tainted 2.6.30 #1
  Call Trace:
   [<ffffffff8102d6e3>] ? warn_slowpath_common+0x73/0xb0
   [<ffffffff810210d9>] ? pvclock_clocksource_read+0x49/0x90
   [<ffffffff812e522f>] ? alloc_arch_preferred_bootmem+0x27/0x45
   [<ffffffff812e5761>] ? ___alloc_bootmem_nopanic+0x39/0xc9
   [<ffffffff812e57fa>] ? ___alloc_bootmem+0x9/0x2f
   [<ffffffff812e9e21>] ? xen_init_IRQ+0x25/0x61
   [<ffffffff812d69ee>] ? start_kernel+0x1b5/0x29e
  ---[ end trace 4eaa2a86a8e2da22 ]---

Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Tested-by: Christian Kujau <lists@nerdbynature.de>
Reported-by: Christian Kujau <lists@nerdbynature.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: lists@nerdbynature.de
Cc: jeremy.fitzhardinge@citrix.com
LKML-Reference: <1246438278.22417.28.camel@penberg-laptop>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
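A sketch of the style of change this implies, assuming cpu_evtchn_mask_p and struct cpu_evtchn_s are the driver's bootmem-allocated state in question:

    #include <linux/slab.h>

    void __init xen_init_IRQ(void)
    {
            /* Slab is available by the time init_IRQ() runs, so allocate
             * with kcalloc() rather than alloc_bootmem(). */
            cpu_evtchn_mask_p = kcalloc(nr_cpu_ids,
                                        sizeof(struct cpu_evtchn_s),
                                        GFP_KERNEL);
            /* ... rest of event-channel setup ... */
    }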
-
- 24 Jun 2009, 2 commits

Committed by Tejun Heo

Percpu variable definition is about to be updated such that all percpu symbols, including the static ones, must be unique. Update percpu variable definitions accordingly.

* as,cfq: rename ioc_count uniquely
* cpufreq: rename cpu_dbs_info uniquely
* xen: move nesting_count out of xen_evtchn_do_upcall() and rename it
* mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and rename it
* ipv4,6: rename cookie_scratch uniquely
* x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to pmc_irq_entry and nmi_entry to pmc_nmi_entry
* perf_counter: rename disable_count to perf_disable_count
* ftrace: rename test_event_disable to ftrace_test_event_disable
* kmemleak: rename test_pointer to kmemleak_test_pointer
* mce: rename next_interval to mce_next_interval

[ Impact: percpu usage cleanups, no duplicate static percpu var names ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Dave Jones <davej@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Steven Rostedt <srostedt@redhat.com>
Cc: Li Zefan <lizf@cn.fujitsu.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andi Kleen <andi@firstfloor.org>
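For the Xen item on this list, the change amounts to a uniquely prefixed file-scope definition; a sketch in which the new name xed_nesting_count is an assumption modeled on the driver's naming:

    #include <linux/percpu.h>

    /* Was a function-local static inside xen_evtchn_do_upcall(); static
     * percpu symbols must now be globally unique, so hoist and prefix it. */
    static DEFINE_PER_CPU(unsigned, xed_nesting_count);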
-
Committed by Tejun Heo

Currently, the following three different ways to define percpu arrays are in use.

1. DEFINE_PER_CPU(elem_type[array_len], array_name);
2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
3. DEFINE_PER_CPU(elem_type, array_name)[array_len];

Unify to #1, which correctly separates the roles of the two parameters and thus allows more flexibility in the way percpu variables are defined.

[ Impact: cleanup ]

Signed-off-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: linux-mm@kvack.org
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: David S. Miller <davem@davemloft.net>
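A concrete instance of the unified form, with a hypothetical array purely for illustration:

    #include <linux/percpu.h>

    /* Form #1: the first argument is the complete element type (here an
     * array of four ints), the second is only the symbol name. */
    DEFINE_PER_CPU(int[4], my_percpu_counters);     /* hypothetical variable */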
-
- 28 Apr 2009, 2 commits

Committed by Yinghai Lu

This simplifies the node awareness of the code. All our allocators only deal with a NUMA node ID locality, not with CPU ids anyway, so there's no need to maintain (and transform) a CPU id all across the IRQ layer.

v2: keep move_irq_desc related

[ Impact: cleanup, prepare IRQ code to be NUMA-aware ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
LKML-Reference: <49F65536.2020300@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu

According to Ingo, set_affinity() in irq_chip should return int, because that way we can handle failure cases in a much cleaner way, in the genirq layer.

v2: fix two typos

[ Impact: extend API ]

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: "Eric W. Biederman" <ebiederm@xmission.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: linux-arch@vger.kernel.org
LKML-Reference: <49F654E9.4070809@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
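What this could look like for the Xen chip, sketched under the assumption that rebind_irq_to_cpu() returns 0 on success or a negative errno:

    #include <linux/irq.h>

    static int set_affinity_irq(unsigned int irq, const struct cpumask *dest)
    {
            unsigned int tcpu = cpumask_first(dest);

            /* Propagate success/failure to the genirq layer instead of
             * swallowing it, as the old void signature had to. */
            return rebind_irq_to_cpu(irq, tcpu);
    }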
-
- 31 Mar 2009, 1 commit

Committed by Ian Campbell

Given an evtchn, return the corresponding irq.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
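The helper is presumably a thin lookup over the driver's existing table; a sketch:

    extern int evtchn_to_irq[];     /* the driver's existing port-to-irq map */

    int irq_from_evtchn(unsigned int evtchn)
    {
            /* Exports the reverse of bind_evtchn_to_irq(). */
            return evtchn_to_irq[evtchn];
    }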
-
- 09 Feb 2009, 6 commits

Committed by Ian Campbell

I was seeing a very odd crash on 64-bit in bind_evtchn_to_cpu because cpu_from_irq(irq) was coming out as -1. I found this was coming directly from the mk_ipi_info call.

It's not clear to me that this isn't a compiler bug (implicit initialisation to zero of unsigned shorts in a struct not handled correctly?). On the other hand, is it true that all event channels start off bound to CPU 0? If not, then -1 might be correct and the various other functions should cope with this.

Signed-off-by: Ian Campbell <Ian.Campbell@eu.citrix.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jeremy Fitzhardinge

Make sure that irq_enter()/irq_exit() wrap the entire event processing loop, rather than each individual event invocation. This makes sure that softirq processing is deferred until the end of event processing, rather than happening in the middle with interrupts disabled.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
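Structurally, the change moves the enter/exit pair outside the scan; a sketch with a hypothetical more_events_pending() standing in for the real re-check:

    #include <linux/hardirq.h>

    static bool more_events_pending(void);  /* hypothetical re-check helper */

    void xen_evtchn_do_upcall(struct pt_regs *regs)
    {
            irq_enter();    /* once, around the whole scan */

            do {
                    /* scan the pending event-channel words and run the
                     * handler for each set bit */
            } while (more_events_pending());

            irq_exit();     /* deferred softirqs run here, after all events */
    }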
-
Committed by Jeremy Fitzhardinge

There should be no need for us to maintain our own bind count for irqs, since the surrounding irq system should keep track of shared irqs for us.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jeremy Fitzhardinge

Put all irq info into one struct. Also, use a union to keep event-channel-type-specific information, rather than overloading the index field.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
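A sketch of the consolidated bookkeeping this describes; the field layout is assumed, not quoted from the patch:

    enum xen_irq_type { IRQT_UNBOUND, IRQT_PIRQ, IRQT_VIRQ, IRQT_IPI, IRQT_EVTCHN };

    struct irq_info {
            enum xen_irq_type type;
            unsigned short evtchn;          /* event channel, if bound */
            unsigned short cpu;             /* cpu the channel is bound to */
            union {                         /* replaces the overloaded index */
                    unsigned short virq;    /* IRQT_VIRQ */
                    unsigned short ipi;     /* IRQT_IPI */
                    unsigned short gsi;     /* IRQT_PIRQ */
            } u;
    };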
-
Committed by Jeremy Fitzhardinge

Rather than overloading vectors for event channels, take full responsibility for mapping an event channel to an irq directly. With this patch, Xen has its own irq allocator.

When the kernel gets an event channel upcall, it maps the event channel number to an irq and injects it into the normal interrupt path.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Jeremy Fitzhardinge

By default, the irq_chip.disable operation is a no-op. Explicitly set it to disable the Xen event channel.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
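In irq_chip terms that is one extra initializer; a sketch assuming the driver's existing disable_dynirq()/enable_dynirq() callbacks:

    static struct irq_chip xen_dynamic_chip = {
            .name           = "xen-dyn",
            .disable        = disable_dynirq,  /* no longer the default no-op */
            .mask           = disable_dynirq,
            .unmask         = enable_dynirq,
            /* ... remaining callbacks unchanged ... */
    };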
-
- 12 Jan 2009, 3 commits

Committed by Christophe Saout

Impact: fix bootup crash on Xen guests

SLAB is not yet up; with earlyprintk it is giving me an Oops in __kmalloc. Replace the call to kmalloc() with alloc_bootmem().

Reported-by: Christophe Saout <christophe@saout.de>
Signed-off-by: Mike Travis <travis@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Mike Travis

Impact: reduce memory usage

Reduce the significant growth in memory used when NR_CPUS is bumped from 128 to 4096, by allocating the array based on nr_cpu_ids:

  65536 +2031616 2097152 +3100% cpu_evtchn_mask(.bss)

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: virtualization@lists.osdl.org
Cc: xen-devel@lists.xensource.com
-
Committed by Mike Travis

Impact: reduce memory usage, use new cpumask API

Replace the affinity and pending_masks with cpumask_var_t's. This adds to the significant size reduction done with the SPARSE_IRQS changes.

The added functions (init_alloc_desc_masks & init_copy_desc_masks) are in the include file so they can be inlined (and optimized out for the !CONFIG_CPUMASKS_OFFSTACK case). [Naming chosen to be consistent with the other init*irq functions, as well as the backwards arg declaration of "from, to" instead of the more common "to, from" standard.]

Includes a slight change to the declaration of struct irq_desc to embed the pending_mask within ifdef(CONFIG_SMP), to be consistent with other references, and some small changes to Xen.

Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64

Signed-off-by: Mike Travis <travis@sgi.com>
Cc: Chris Wright <chrisw@sous-sol.org>
Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Cc: virtualization@lists.osdl.org
Cc: xen-devel@lists.xensource.com
Cc: Yinghai Lu <yhlu.kernel@gmail.com>
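The cpumask_var_t idiom this relies on, as a minimal usage sketch:

    #include <linux/cpumask.h>
    #include <linux/gfp.h>

    static int example_use_cpumask(void)
    {
            cpumask_var_t mask; /* pointer if CPUMASK_OFFSTACK=y, array otherwise */

            if (!alloc_cpumask_var(&mask, GFP_KERNEL))
                    return -ENOMEM;

            cpumask_copy(mask, cpu_online_mask);
            /* ... use mask ... */

            free_cpumask_var(mask);
            return 0;
    }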
-
- 26 Dec 2008, 1 commit

Committed by KOSAKI Motohiro

Impact: cleanup

All for_each_irq_desc() usage points have a !desc check, so that check can move into the for_each_irq_desc() macro itself.

Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
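The resulting iterator, roughly; the exact macro text is an assumption modeled on the era's genirq code:

    /* Iterate over allocated descriptors only; holes in the sparse irq
     * space are skipped inside the macro, not at every call site. */
    #define for_each_irq_desc(irq, desc)                                \
            for (irq = 0, desc = irq_to_desc(irq); irq < nr_irqs;       \
                 irq++, desc = irq_to_desc(irq))                        \
                    if (!desc)                                          \
                            ;                                           \
                    else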
-
- 18 Dec 2008, 1 commit

Committed by Jeremy Fitzhardinge

Impact: fix crash

Make sure all Xen irqs have an irq_desc.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 13 Dec 2008, 1 commit

Committed by Rusty Russell

Impact: change existing irq_chip API

Not much point with a gentle transition here: the struct irq_chip's setaffinity method signature needs to change. Fortunately, this is not widely used code, but it hits a few architectures.

Note: in irq_select_affinity() I save a temporary by mangling irq_desc[irq].affinity directly. Ingo, does this break anything?

(Folded in fix from KOSAKI Motohiro)

Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Mike Travis <travis@sgi.com>
Reviewed-by: Grant Grundler <grundler@parisc-linux.org>
Acked-by: Ingo Molnar <mingo@redhat.com>
Cc: ralf@linux-mips.org
Cc: grundler@parisc-linux.org
Cc: jeremy@xensource.com
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
-
- 08 Dec 2008, 1 commit

Committed by Yinghai Lu

Impact: new feature

Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM with NR_CPUS set to large values. The goal is to be able to scale up to a much larger NR_IRQS value without impacting the (important) common case.

To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of irq_desc pointers. When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get irq_desc; this also makes the IRQ descriptors NUMA-local (to the site that calls request_irq()).

This also gets rid of the irq_cfg[] static array on x86: irq_cfg now uses desc->chip_data on x86 to store irq_cfg.

Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
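The core of the scheme, sketched with locking and error handling elided; irq_desc_ptrs is an assumed name for the pointer table the message describes:

    #include <linux/irq.h>
    #include <linux/slab.h>

    static struct irq_desc *irq_desc_ptrs[NR_IRQS];     /* assumed table */

    struct irq_desc *irq_to_desc_alloc(unsigned int irq, int node)
    {
            struct irq_desc *desc = irq_desc_ptrs[irq];

            if (!desc) {
                    /* NUMA-local to the requesting site's node. */
                    desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
                    irq_desc_ptrs[irq] = desc;
            }
            return desc;
    }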
-
- 24 Oct 2008, 1 commit

Committed by Isaku Yamahata

Use set_xen_guest_handle() instead of direct assignment, which fixes the following build error:

> linux-2.6/drivers/xen/events.c: In function 'xen_poll_irq':
> linux-2.6/drivers/xen/events.c:757: error: incompatible types in assignment
> make[4]: *** [drivers/xen/events.o] Error 1

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
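The shape of the fix in xen_poll_irq(), sketched; the surrounding hypercall setup is assumed from the standard sched_poll interface:

    #include <xen/interface/sched.h>

    static void xen_poll_irq(int irq)
    {
            evtchn_port_t evtchn = evtchn_from_irq(irq);
            struct sched_poll poll;

            poll.nr_ports = 1;
            poll.timeout = 0;
            /* Not "poll.ports = &evtchn": guest handles are not plain
             * pointers on every architecture (ia64 here). */
            set_xen_guest_handle(poll.ports, &evtchn);

            HYPERVISOR_sched_op(SCHEDOP_poll, &poll);
    }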
-
- 22 Oct 2008, 1 commit

Committed by Isaku Yamahata

Use set_xen_guest_handle() instead of direct assignment, which fixes the following build error:

> linux-2.6/drivers/xen/events.c: In function 'xen_poll_irq':
> linux-2.6/drivers/xen/events.c:757: error: incompatible types in assignment
> make[4]: *** [drivers/xen/events.o] Error 1

Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Isaku Yamahata <yamahata@valinux.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 Oct 2008, 4 commits

Committed by Thomas Gleixner

Use for_each_irq_desc[_reverse] for all the iteration loops.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-
Committed by Alex Nixon

When sparse IRQs are enabled, it is not safe to assume an IRQ descriptor exists for every possible IRQ. This patch causes init_evtchn_cpu_bindings to skip initialisation of IRQ descriptors which don't exist.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu

Add CONFIG_HAVE_SPARSE_IRQ for use of a condensed array. Get rid of irq_desc[] array assumptions. Preallocate 32 irq_descs; irq_desc() will try to get more.

(No change in functionality is expected anywhere, except the odd build failure where we missed a code site or where a crossing commit introduces new irq_desc[] usage.)

v2: according to Eric, change get_irq_desc() to irq_desc()

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
Committed by Yinghai Lu

Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 25 Aug 2008, 1 commit

Committed by Alex Nixon

Note the changes from 2.6.18-xen CPU hotplugging:

A vcpu_down request from the remote admin via xenbus both hot-unplugs the CPU and disables it, by removing it from the cpu_present map and removing its entry in /sys.

A vcpu_up request from the remote admin only re-enables the CPU; it does not immediately bring the CPU up. A udev event is emitted, which can be caught by the user if he wishes to automatically re-up CPUs when available, or to implement a more complex policy.

Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 21 Aug 2008, 1 commit

Committed by Jeremy Fitzhardinge

A spinlock can be interrupted while spinning, so make sure we preserve the previous lock of interest if we're taking a lock from within an interrupt handler.

We also need to deal with the case where the blocking path gets interrupted between testing to see if the lock is free and actually blocking. If we get interrupted there and end up in the state where the lock is free but the irq isn't pending, then we'll block indefinitely in the hypervisor. The fix is to make sure that any nested lock-takers will always leave the irq pending if there's any chance the outer lock became free.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Acked-by: Jan Beulich <jbeulich@novell.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
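The save/restore of the per-CPU "lock of interest", sketched; lock_spinners is the per-CPU variable described in the 16 Jul 2008 entry below, and the blocking details are elided:

    #include <linux/percpu.h>

    struct xen_spinlock;

    static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);

    static void xen_spin_lock_slow(struct xen_spinlock *xl)
    {
            /* Save the outer lock of interest: an interrupt handler may
             * have nested inside another slow-path wait. */
            struct xen_spinlock *prev = __get_cpu_var(lock_spinners);

            __get_cpu_var(lock_spinners) = xl;

            /* ... poll/block on this cpu's event channel until woken,
             * leaving the irq pending if the outer lock may be free ... */

            __get_cpu_var(lock_spinners) = prev; /* restore for the outer waiter */
    }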
-
- 31 Jul 2008, 1 commit

Committed by Jeremy Fitzhardinge

For some reason I managed to miss a bunch of irq-related functions which also need to be compiled without -pg when using ftrace. This patch moves them into their own file, and starts a cleanup process I've been meaning to do anyway.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: "Alex Nixon (Intern)" <Alex.Nixon@eu.citrix.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-
- 16 Jul 2008, 1 commit

Committed by Jeremy Fitzhardinge

The standard ticket spinlocks are very expensive in a virtual environment, because their performance depends on Xen's scheduler giving vcpus time in the order that they're supposed to take the spinlock. This implements a Xen-specific spinlock, which should be much more efficient.

The fast path is essentially the old Linux-x86 locks, using a single lock byte. The locker decrements the byte; if the result is 0, then they have the lock. If the lock is negative, then the locker must spin until the lock is positive again.

When there's contention, the locker spins for 2^16[*] iterations waiting to get the lock. If it fails to get the lock in that time, it adds itself to the contention count in the lock and blocks on a per-cpu event channel.

When unlocking the spinlock, the locker looks to see if there's anyone blocked waiting for the lock, by checking for a non-zero waiter count. If there's a waiter, it traverses the per-cpu "lock_spinners" variable, which records which lock each CPU is waiting on. It picks one CPU waiting on the lock and sends it an event to wake it up.

This allows efficient fast-path spinlock operation, while allowing spinning vcpus to give up their processor time while waiting for a contended lock.

[*] 2^16 iterations is the threshold at which 98% of locks have been taken, according to Thomas Friebel's Xen Summit talk "Preventing Guests from Spinning Around". Therefore, we'd expect the lock and unlock slow paths to be entered only 2% of the time.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Christoph Lameter <clameter@linux-foundation.org>
Cc: Petr Tesarik <ptesarik@suse.cz>
Cc: Virtualization <virtualization@lists.linux-foundation.org>
Cc: Xen devel <xen-devel@lists.xensource.com>
Cc: Thomas Friebel <thomas.friebel@amd.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
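A sketch of the spin-then-block structure described above; the lock layout and helper names are assumptions modeled on the message, not the exact patch:

    #define SPIN_THRESHOLD  (1 << 16)       /* per the Xen Summit numbers above */

    struct xen_spinlock {
            unsigned char lock;             /* 0: free, otherwise held */
            unsigned short spinners;        /* contention count */
    };

    static int xen_spin_trylock(struct xen_spinlock *xl);   /* assumed fast path */
    static int xen_spin_lock_slow(struct xen_spinlock *xl); /* assumed blocking path */

    static void xen_spin_lock(struct xen_spinlock *xl)
    {
            unsigned int count;

            do {
                    count = SPIN_THRESHOLD;
                    do {
                            if (xen_spin_trylock(xl))       /* fast path */
                                    return;
                            cpu_relax();
                    } while (--count);
                    /* Slow path: bump xl->spinners, record the lock in this
                     * cpu's lock_spinners, block on the per-cpu event
                     * channel; returns false if we must retry. */
            } while (!xen_spin_lock_slow(xl));
    }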
-
- 20 Jun 2008, 2 commits

Committed by Isaku Yamahata

This patch is ported from changeset 534:77db69c38249 of linux-2.6.18-xen.hg. Use wmb instead of rmb to enforce ordering between the evtchn_upcall_pending and evtchn_pending_sel stores in xen_evtchn_do_upcall().

Cc: Samuel Thibault <samuel.thibault@eu.citrix.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: the arch/x86 maintainers <x86@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
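The barrier in context, sketched from the description; the surrounding upcall code is assumed:

    static void ack_upcall(struct vcpu_info *vcpu_info,
                           unsigned long *pending_words)
    {
            /* Two stores must be ordered: clearing evtchn_upcall_pending
             * must be visible before evtchn_pending_sel is sampled and
             * cleared, so a write barrier is needed, not a read barrier. */
            vcpu_info->evtchn_upcall_pending = 0;
            wmb();  /* was rmb() */
            *pending_words = xchg(&vcpu_info->evtchn_pending_sel, 0);
    }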
-
Committed by Isaku Yamahata

This patch is ported from changeset 534:77db69c38249 of linux-2.6.18-xen.hg. Use wmb instead of rmb to enforce ordering between the evtchn_upcall_pending and evtchn_pending_sel stores in xen_evtchn_do_upcall().

Cc: Samuel Thibault <samuel.thibault@eu.citrix.com>
Signed-off-by: Isaku Yamahata <yamahata@valinux.co.jp>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: the arch/x86 maintainers <x86@kernel.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
-