  1. 25 Aug 2010, 1 commit
  2. 30 Jul 2010, 1 commit
  3. 29 Jul 2010, 1 commit
    • xen: Do not suspend IPI IRQs. · 4877c737
      Ian Campbell authored
      In general, the semantics of IPIs are that they are expected to
      continue functioning after dpm_suspend_noirq().
      
      Specifically I have seen a deadlock between the callfunc IPI and the
      stop machine used by xen's do_suspend() routine. If one CPU has already
      called dpm_suspend_noirq() then there is a window where it can be sent
      a callfunc IPI before all the other CPUs have entered stop_cpu().
      
      If this happens, the first CPU ends up spinning in stop_cpu(),
      waiting for the other CPU to rendezvous in state STOPMACHINE_PREPARE,
      while that CPU is spinning in csd_lock_wait().
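      
      As a minimal sketch of the idea behind the fix (handler and binding
      names here are illustrative, not the actual patch): tagging the IPI's
      irqaction with IRQF_NO_SUSPEND tells the genirq suspend path to leave
      it enabled across dpm_suspend_noirq():
      
        #include <linux/interrupt.h>
        
        /* Illustrative IPI handler; the real callfunc handler dispatches
         * generic_smp_call_function_interrupt() from arch code. */
        static irqreturn_t callfunc_ipi(int irq, void *dev_id)
        {
                generic_smp_call_function_interrupt();
                return IRQ_HANDLED;
        }
        
        static int bind_callfunc_ipi(int irq)
        {
                /* IRQF_NO_SUSPEND: keep this irq live while device
                 * irqs are disabled for suspend */
                return request_irq(irq, callfunc_ipi,
                                   IRQF_PERCPU | IRQF_NOBALANCING |
                                   IRQF_NO_SUSPEND,
                                   "callfunc", NULL);
        }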
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: xen-devel@lists.xensource.com
      LKML-Reference: <1280398595-29708-4-git-send-email-ian.campbell@citrix.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  4. 23 Jul 2010, 3 commits
    • xen: Fix find_unbound_irq in presence of ioapic irqs. · 99ad198c
      Stefano Stabellini authored
      Don't break the assumption that the first 16 irqs are ISA irqs;
      make sure that the irq is actually free before using it.
      
      Use dynamic_irq_init_keep_chip_data instead of
      dynamic_irq_init so that chip_data is not NULL (a NULL chip_data breaks
      setup_vector_irq).
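      
      A simplified sketch of the allocation policy (the real routine's
      free-irq test is more involved):
      
        static int find_unbound_irq(void)
        {
                int irq;
        
                /* scan downward so dynamically bound event channels
                 * never claim the ISA range 0-15 */
                for (irq = nr_irqs - 1; irq >= 16; irq--) {
                        struct irq_desc *desc = irq_to_desc(irq);
        
                        /* simplified "actually free" test */
                        if (!desc || desc->action == NULL)
                                break;
                }
                if (irq < 16)
                        return -ENOSPC;
        
                /* keep chip_data so setup_vector_irq() keeps working */
                dynamic_irq_init_keep_chip_data(irq);
                return irq;
        }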
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    • xen: Xen PCI platform device driver. · 183d03cc
      Stefano Stabellini authored
      Add the xen pci platform device driver that is responsible
      for initializing the grant table and xenbus in PV on HVM mode.
      A few changes to xenbus and the grant table are necessary to allow
      the delayed initialization in HVM mode, and the grant table needs a
      few additional modifications to work in HVM mode.
      
      The Xen PCI platform device raises an irq every time an event has been
      delivered to us. However, these interrupts are only delivered to vcpu 0.
      The Xen PCI platform interrupt handler calls xen_hvm_evtchn_do_upcall
      that is a little wrapper around __xen_evtchn_do_upcall, the traditional
      Xen upcall handler, the very same used with traditional PV guests.
      
      When running on HVM, the event channel upcall is never called while
      it is already in progress, because it is a normal Linux irq handler
      (and we cannot switch the irq chip wholesale to the Xen PV ones, as
      we are running QEMU and might have passed-in PCI devices); therefore
      we cannot be sure that evtchn_upcall_pending is 0 when returning.
      For this reason, if evtchn_upcall_pending is set by Xen, we need to
      loop again over the event channels that are set pending, otherwise
      we might lose some event channel deliveries.
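      
      The loop's shape, sketched (the scan helper is hypothetical; the real
      loop lives in __xen_evtchn_do_upcall):
      
        struct vcpu_info *vcpu = per_cpu(xen_vcpu, smp_processor_id());
        
        do {
                /* clear the flag first, then re-scan: anything Xen
                 * raises during the scan re-sets the flag and forces
                 * another pass, so no delivery is lost */
                vcpu->evtchn_upcall_pending = 0;
                barrier();
                scan_pending_event_channels();  /* hypothetical helper */
        } while (vcpu->evtchn_upcall_pending);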
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    • x86/xen: event channels delivery on HVM. · 38e20b07
      Sheng Yang authored
      Set the callback to receive evtchns from Xen, using the
      callback vector delivery mechanism.
      
      The traditional way for receiving event channel notifications from Xen
      is via the interrupts from the platform PCI device.
      The callback vector is a newer alternative that allows us to receive
      notifications on any vcpu and doesn't need any PCI support: we
      allocate a vector exclusively for receiving events, and in the vector
      handler we don't need to interact with the vlapic, so we avoid a
      VMEXIT.
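      
      Programming the callback, sketched against the Xen HVM-param
      interface (the "2 == vector delivery" encoding is per the interface
      headers; treat the function name as illustrative):
      
        static int set_callback_vector(unsigned int vector)
        {
                struct xen_hvm_param a;
        
                a.domid = DOMID_SELF;
                a.index = HVM_PARAM_CALLBACK_IRQ;
                /* bits 63:56 select the delivery type; 2 means "inject
                 * this vector directly on the vcpu with pending events" */
                a.value = ((uint64_t)2 << 56) | vector;
        
                return HYPERVISOR_hvm_op(HVMOP_set_param, &a);
        }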
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
  5. 30 Mar 2010, 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of the conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py
      
      The script does the following.
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there.  I.e. if only gfp is used,
        add gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to put the new include such that its order
        conforms to its surroundings.  It's put in the include block which
        contains core kernel includes, in the same order that the rest are
        ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the
        end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints
        out an error message indicating which .h file needs to be added to
        the file.  (A representative before/after edit is sketched below.)
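      
      For illustration, the kind of edit being made looks like this
      (made-up file):
      
        /* Before: compiles only because percpu.h happened to drag in
         * slab.h (and slab.h in turn gfp.h) */
        #include <linux/percpu.h>
        
        static void *make_buf(void)
        {
                return kmalloc(64, GFP_KERNEL);
        }
        
        /* After: the slab facility used is included explicitly */
        #include <linux/percpu.h>
        #include <linux/slab.h>         /* kmalloc() */
        
        static void *make_buf(void)
        {
                return kmalloc(64, GFP_KERNEL);
        }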
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         wildly available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed.  CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
      
      Given the fact that I had only a couple of failures from the tests
      on step 6, I'm fairly confident about the coverage of this
      conversion patch.  If there is a breakage, it's likely to be
      something in one of the arch headers, which should be discoverable
      easily on most builds of the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
  6. 19 Feb 2010, 1 commit
  7. 04 Dec 2009, 1 commit
    • xen: don't leak IRQs over suspend/resume. · fed5ea87
      Ian Campbell authored
      On resume, irq_info[*].evtchn is reset to 0 since event channel
      mappings are not preserved over suspend/resume. The rest of the
      irq_info contents is preserved to allow rebind_evtchn_irq() to
      function.
      
      However when a device resumes it will try to unbind from the
      previous IRQ (e.g.  blkfront goes blkfront_resume() -> blkif_free() ->
      unbind_from_irqhandler() -> unbind_from_irq()). This will fail due to the
      check for VALID_EVTCHN in unbind_from_irq() and the IRQ is leaked. The
      device will then continue to resume and allocate a new IRQ, eventually
      leading to find_unbound_irq() panic()ing.
      
      Fix this by changing unbind_from_irq() to handle teardown of interrupts
      which have type!=IRQT_UNBOUND but are not currently bound to a specific
      event channel.
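      
      The shape of the fix, sketched with the file's helper names
      (simplified; the close helper is hypothetical and the real function
      does more bookkeeping):
      
        static void unbind_from_irq(unsigned int irq)
        {
                int evtchn = evtchn_from_irq(irq);
        
                /* close the event channel only if we still have one... */
                if (VALID_EVTCHN(evtchn))
                        close_evtchn(evtchn);   /* hypothetical helper */
        
                /* ...but always tear down an irq that was ever bound, so
                 * a resume-time unbind with evtchn == 0 no longer leaks */
                if (type_from_irq(irq) != IRQT_UNBOUND) {
                        irq_info[irq] = mk_unbound_info();
                        dynamic_irq_cleanup(irq);
                }
        }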
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Stable Kernel <stable@kernel.org>
  8. 01 Jul 2009, 1 commit
    • xen: Use kcalloc() in xen_init_IRQ() · a70c352a
      Pekka Enberg authored
      The init_IRQ() function is now called with slab allocator initialized.
      Therefore, we must not use the bootmem allocator in xen_init_IRQ().
      
      Fixes the following boot-time warning:
      
        ------------[ cut here ]------------
        WARNING: at mm/bootmem.c:535 alloc_arch_preferred_bootmem+0x27/0x45()
        Modules linked in:
        Pid: 0, comm: swapper Not tainted 2.6.30 #1
        Call Trace:
         [<ffffffff8102d6e3>] ? warn_slowpath_common+0x73/0xb0
         [<ffffffff810210d9>] ? pvclock_clocksource_read+0x49/0x90
         [<ffffffff812e522f>] ? alloc_arch_preferred_bootmem+0x27/0x45
         [<ffffffff812e5761>] ? ___alloc_bootmem_nopanic+0x39/0xc9
         [<ffffffff812e57fa>] ? ___alloc_bootmem+0x9/0x2f
         [<ffffffff812e9e21>] ? xen_init_IRQ+0x25/0x61
         [<ffffffff812d69ee>] ? start_kernel+0x1b5/0x29e
        ---[ end trace 4eaa2a86a8e2da22 ]---
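      
      Roughly, the shape of the fix (the array and type names follow the
      events.c of this era; treat details as illustrative):
      
        /* before: bootmem, which now warns since slab is already up */
        cpu_evtchn_mask_p = alloc_bootmem(size);
        
        /* after: slab is initialized by the time init_IRQ() runs */
        cpu_evtchn_mask_p = kcalloc(nr_cpu_ids,
                                    sizeof(struct cpu_evtchn_s),
                                    GFP_KERNEL);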
      Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Tested-by: Christian Kujau <lists@nerdbynature.de>
      Reported-by: Christian Kujau <lists@nerdbynature.de>
      Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
      Cc: lists@nerdbynature.de
      Cc: jeremy.fitzhardinge@citrix.com
      LKML-Reference: <1246438278.22417.28.camel@penberg-laptop>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. 24 Jun 2009, 2 commits
    • percpu: clean up percpu variable definitions · 245b2e70
      Tejun Heo authored
      Percpu variable definition is about to be updated such that all percpu
      symbols including the static ones must be unique.  Update percpu
      variable definitions accordingly.
      
      * as,cfq: rename ioc_count uniquely
      
      * cpufreq: rename cpu_dbs_info uniquely
      
      * xen: move nesting_count out of xen_evtchn_do_upcall() and rename
        it (sketched after this list)
      
      * mm: move ratelimits out of balance_dirty_pages_ratelimited_nr() and
        rename it
      
      * ipv4,6: rename cookie_scratch uniquely
      
      * x86 perf_counter: rename prev_left to pmc_prev_left, irq_entry to
        pmc_irq_entry and nmi_entry to pmc_nmi_entry
      
      * perf_counter: rename disable_count to perf_disable_count
      
      * ftrace: rename test_event_disable to ftrace_test_event_disable
      
      * kmemleak: rename test_pointer to kmemleak_test_pointer
      
      * mce: rename next_interval to mce_next_interval
      
      [ Impact: percpu usage cleanups, no duplicate static percpu var names ]
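      
      The xen rename above, sketched (the "xed_" prefix makes the symbol
      unique once static percpu symbols share a namespace):
      
        /* before: a function-local static percpu variable */
        void xen_evtchn_do_upcall(struct pt_regs *regs)
        {
                static DEFINE_PER_CPU(unsigned, nesting_count);
                /* ... */
        }
        
        /* after: file scope, uniquely named */
        static DEFINE_PER_CPU(unsigned, xed_nesting_count);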
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Jens Axboe <jens.axboe@oracle.com>
      Cc: Dave Jones <davej@redhat.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm <linux-mm@kvack.org>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Steven Rostedt <srostedt@redhat.com>
      Cc: Li Zefan <lizf@cn.fujitsu.com>
      Cc: Catalin Marinas <catalin.marinas@arm.com>
      Cc: Andi Kleen <andi@firstfloor.org>
    • percpu: cleanup percpu array definitions · 204fba4a
      Tejun Heo authored
      Currently, the following three different ways to define percpu arrays
      are in use.
      
      1. DEFINE_PER_CPU(elem_type[array_len], array_name);
      2. DEFINE_PER_CPU(elem_type, array_name[array_len]);
      3. DEFINE_PER_CPU(elem_type, array_name)[array_len];
      
      Unify to #1 which correctly separates the roles of the two parameters
      and thus allows more flexibility in the way percpu variables are
      defined.
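      
      For example, a 16-entry per-cpu int array in the unified style, and a
      typical access (names hypothetical):
      
        DEFINE_PER_CPU(int[16], my_counts);     /* style #1 */
        
        /* element i of a given cpu's copy: */
        per_cpu(my_counts, cpu)[i]++;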
      
      [ Impact: cleanup ]
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Reviewed-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Tony Luck <tony.luck@intel.com>
      Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: linux-mm@kvack.org
      Cc: Christoph Lameter <cl@linux-foundation.org>
      Cc: David S. Miller <davem@davemloft.net>
  10. 28 Apr 2009, 2 commits
    • x86/irq: change irq_desc_alloc() to take node instead of cpu · 85ac16d0
      Yinghai Lu authored
      This simplifies the node awareness of the code. All our allocators
      only deal with a NUMA node ID for locality, not with CPU ids anyway -
      so there's no need to maintain (and transform) a CPU id all across
      the IRQ layer.
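      
      The resulting API shape (signatures abbreviated):
      
        /* before: the allocator took a cpu and derived locality itself */
        struct irq_desc *irq_to_desc_alloc_cpu(unsigned int irq, int cpu);
        
        /* after: callers convert once and pass the NUMA node directly */
        struct irq_desc *irq_to_desc_alloc_node(unsigned int irq, int node);
        
        /* typical caller-side conversion: */
        int node = cpu_to_node(cpu);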
      
      v2: keep move_irq_desc related
      
      [ Impact: cleanup, prepare IRQ code to be NUMA-aware ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      LKML-Reference: <49F65536.2020300@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • irq: change ->set_affinity() to return status · d5dedd45
      Yinghai Lu authored
      According to Ingo, set_affinity() in irq_chip should return int,
      because that way we can handle failure cases in a much cleaner way,
      in the genirq layer.
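      
      The shape of the struct irq_chip change (abbreviated):
      
        struct irq_chip {
                /* ...
                 * was: void (*set_affinity)(unsigned int irq,
                 *                           const struct cpumask *dest); */
                int (*set_affinity)(unsigned int irq,
                                    const struct cpumask *dest);
                /* ... */
        };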
      
      v2: fix two typos
      
      [ Impact: extend API ]
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Suresh Siddha <suresh.b.siddha@intel.com>
      Cc: "Eric W. Biederman" <ebiederm@xmission.com>
      Cc: Rusty Russell <rusty@rustcorp.com.au>
      Cc: linux-arch@vger.kernel.org
      LKML-Reference: <49F654E9.4070809@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  11. 31 Mar 2009, 1 commit
  12. 09 Feb 2009, 6 commits
  13. 12 Jan 2009, 3 commits
    • xen: fix too early kmalloc call · 28e08861
      Christophe Saout authored
      Impact: fix bootup crash on xen guests
      
      SLAB is not yet up; with earlyprintk it is giving me an Oops in
      __kmalloc.
      
      Replace the call to kmalloc() with alloc_bootmem().
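      
      The underlying rule, as a sketch: which allocator is legal depends
      purely on boot progress, and this call runs before slab is up:
      
        /* sketch, not the actual patch */
        if (slab_is_available())
                p = kmalloc(size, GFP_KERNEL);
        else
                p = alloc_bootmem(size);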
      Reported-by: Christophe Saout <christophe@saout.de>
      Signed-off-by: Mike Travis <travis@sgi.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
    • Xen: reduce memory required for cpu_evtchn_mask · c7a3589e
      Mike Travis authored
      Impact: reduce memory usage.
      
      Reduce the significant growth in memory usage seen when NR_CPUS is
      bumped from 128 to 4096 by allocating the array based on nr_cpu_ids:
      
          65536  +2031616   2097152 +3100%  cpu_evtchn_mask(.bss)
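      
      A sketch of the sizing change (type name illustrative):
      
        /* before: sized at compile time for the largest possible system */
        static struct cpu_evtchn_s cpu_evtchn_mask[NR_CPUS];
        
        /* after: one pointer in .bss; the array itself is sized at boot */
        static struct cpu_evtchn_s *cpu_evtchn_mask_p;
        
        /* in init code: */
        cpu_evtchn_mask_p = alloc_bootmem(nr_cpu_ids *
                                          sizeof(struct cpu_evtchn_s));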
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
    • cpumask: update irq_desc to use cpumask_var_t · 7f7ace0c
      Mike Travis authored
      Impact: reduce memory usage, use new cpumask API.
      
      Replace the affinity and pending_masks with cpumask_var_t's.  This adds
      to the significant size reduction done with the SPARSE_IRQS changes.
      
      The added functions (init_alloc_desc_masks & init_copy_desc_masks)
      are in the include file so they can be inlined (and optimized out for
      the !CONFIG_CPUMASK_OFFSTACK case.)  [Naming chosen to be consistent
      with the other init*irq functions, as well as the backwards arg
      declaration of "from, to" instead of the more common "to, from"
      standard.]
      
      Includes a slight change to the declaration of struct irq_desc to embed
      the pending_mask within ifdef(CONFIG_SMP) to be consistent with other
      references, and some small changes to Xen.
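      
      The core of the change, sketched (the real struct has many more
      fields):
      
        struct irq_desc {
                /* ... */
                cpumask_var_t   affinity;       /* was: cpumask_t */
        #ifdef CONFIG_SMP
                cpumask_var_t   pending_mask;   /* was: cpumask_t */
        #endif
                /* ... */
        };
      
      With CONFIG_CPUMASK_OFFSTACK=y, cpumask_var_t is a pointer that must
      be allocated with alloc_cpumask_var(); otherwise it is a one-element
      array and the new init helpers compile down to nothing.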
      
      Tested: sparse/non-sparse/cpumask_offstack/non-cpumask_offstack/nonuma/nosmp on x86_64
      Signed-off-by: Mike Travis <travis@sgi.com>
      Cc: Chris Wright <chrisw@sous-sol.org>
      Cc: Jeremy Fitzhardinge <jeremy@xensource.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
      Cc: virtualization@lists.osdl.org
      Cc: xen-devel@lists.xensource.com
      Cc: Yinghai Lu <yhlu.kernel@gmail.com>
  14. 26 Dec 2008, 1 commit
  15. 18 Dec 2008, 1 commit
  16. 13 Dec 2008, 1 commit
  17. 08 Dec 2008, 1 commit
    • sparse irq_desc[] array: core kernel and x86 changes · 0b8f1efa
      Yinghai Lu authored
      Impact: new feature
      
      Problem on distro kernels: irq_desc[NR_IRQS] takes megabytes of RAM
      with NR_CPUS set to large values. The goal is to be able to scale up
      to much larger NR_IRQS values without impacting the (important)
      common case.
      
      To solve this, we generalize irq_desc[NR_IRQS] to an (optional) array of
      irq_desc pointers.
      
      When CONFIG_SPARSE_IRQ=y is used, we use kzalloc_node to get irq_desc,
      this also makes the IRQ descriptors NUMA-local (to the site that calls
      request_irq()).
      
      This gets rid of the irq_cfg[] static array on x86 as well: irq_cfg now
      uses desc->chip_data for x86 to store irq_cfg.
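      
      A simplified sketch of the lookup plus NUMA-local allocation (the
      real code handles locking and the legacy irq prefix):
      
        #ifdef CONFIG_SPARSE_IRQ
        static struct irq_desc *irq_desc_ptrs[NR_IRQS];
        
        struct irq_desc *irq_to_desc(unsigned int irq)
        {
                return irq < NR_IRQS ? irq_desc_ptrs[irq] : NULL;
        }
        
        /* first use allocates the descriptor local to the given node */
        struct irq_desc *irq_to_desc_alloc(unsigned int irq, int node)
        {
                struct irq_desc *desc = irq_to_desc(irq);
        
                if (!desc) {
                        desc = kzalloc_node(sizeof(*desc), GFP_ATOMIC, node);
                        irq_desc_ptrs[irq] = desc;
                }
                return desc;
        }
        #endif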
      Signed-off-by: Yinghai Lu <yinghai@kernel.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  18. 24 Oct 2008, 1 commit
  19. 22 Oct 2008, 1 commit
  20. 16 Oct 2008, 4 commits
  21. 25 Aug 2008, 1 commit
    • xen: implement CPU hotplugging · d68d82af
      Alex Nixon authored
      Note the changes from 2.6.18-xen CPU hotplugging:
      
      A vcpu_down request from the remote admin via Xenbus both hotunplugs
      the CPU and disables it, by removing it from the cpu_present map and
      removing its entry in /sys.
      
      A vcpu_up request from the remote admin only re-enables the CPU; it
      does not immediately bring the CPU up. A udev event is emitted, which
      can be caught by the user if they wish to automatically re-up CPUs
      when available, or to implement a more complex policy.
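      
      The two xenbus-driven paths, sketched (the /sys helpers here are
      hypothetical stand-ins):
      
        static void vcpu_hotplug_down(unsigned int cpu)
        {
                cpu_down(cpu);                    /* hot-unplug the vcpu */
                cpu_clear(cpu, cpu_present_map);  /* disable: not present */
                sysfs_remove_cpu(cpu);            /* hypothetical */
        }
        
        static void vcpu_hotplug_up(unsigned int cpu)
        {
                /* re-enable only; the udev "add" event lets user space
                 * decide when to write 1 to .../cpuN/online */
                cpu_set(cpu, cpu_present_map);
                sysfs_register_cpu(cpu);          /* hypothetical */
        }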
      Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
      Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  22. 21 Aug 2008, 1 commit
    • xen: save previous spinlock when blocking · 168d2f46
      Jeremy Fitzhardinge authored
      A spinlock can be interrupted while spinning, so make sure we preserve
      the previous lock of interest if we're taking a lock from within an
      interrupt handler.
      
      We also need to deal with the case where the blocking path gets
      interrupted between testing to see if the lock is free and actually
      blocking.  If we get interrupted there and end up in the state where
      the lock is free but the irq isn't pending, then we'll block
      indefinitely in the hypervisor.  This fix is to make sure that any
      nested lock-takers will always leave the irq pending if there's any
      chance the outer lock became free.
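      
      The save/restore of the per-cpu "lock of interest", sketched (close
      in spirit to the patch; struct layout elided):
      
        static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
        
        static struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
        {
                struct xen_spinlock *prev;
        
                prev = __get_cpu_var(lock_spinners); /* lock we nested in */
                __get_cpu_var(lock_spinners) = xl;   /* advertise ours */
                return prev;
        }
        
        static void unspinning_lock(struct xen_spinlock *xl,
                                    struct xen_spinlock *prev)
        {
                __get_cpu_var(lock_spinners) = prev; /* restore the outer */
        }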
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  23. 31 Jul 2008, 1 commit
  24. 16 Jul 2008, 1 commit
    • xen: implement Xen-specific spinlocks · 2d9e1e2f
      Jeremy Fitzhardinge authored
      The standard ticket spinlocks are very expensive in a virtual
      environment, because their performance depends on Xen's scheduler
      giving vcpus time in the order that they're supposed to take the
      spinlock.
      
      This implements a Xen-specific spinlock, which should be much more
      efficient.
      
      The fast-path is essentially the old Linux-x86 locks, using a single
      lock byte.  The locker decrements the byte; if the result is 0, then
      they have the lock.  If the lock is negative, then locker must spin
      until the lock is positive again.
      
      When there's contention, the locker spins for 2^16[*] iterations
      waiting to get the lock.  If it fails to get the lock in that time,
      it adds itself to the contention count in the lock and blocks on a
      per-cpu event channel.
      
      When unlocking the spinlock, the locker looks to see if there's anyone
      blocked waiting for the lock by checking for a non-zero waiter count.
      If there's a waiter, it traverses the per-cpu "lock_spinners"
      variable, which contains which lock each CPU is waiting on.  It picks
      one CPU waiting on the lock and sends it an event to wake it up.
      
      This allows efficient fast-path spinlock operation, while allowing
      spinning vcpus to give up their processor time while waiting for a
      contended lock.
      
      [*] 2^16 iterations is the threshold at which 98% of locks have been
      taken, according to Thomas Friebel's Xen Summit talk "Preventing
      Guests from Spinning Around".  Therefore, we'd expect the lock and
      unlock slow paths to be entered only about 2% of the time.
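      
      A sketch of the fast path as described (a GCC builtin stands in for
      the kernel's atomics; not the actual implementation):
      
        struct xen_spinlock {
                signed char lock;        /* 1: free, 0: held, <0: contended */
                unsigned short spinners; /* vcpus blocked on the evtchn */
        };
        
        static void byte_spin_lock(struct xen_spinlock *xl)
        {
                for (;;) {
                        /* atomic decrement; old value 1 means we took it */
                        if (__sync_fetch_and_add(&xl->lock, -1) == 1)
                                return;
                        /* contended: wait until positive, then retry
                         * (the real slow path gives up after 2^16 spins,
                         * bumps spinners and blocks on an event channel) */
                        while (ACCESS_ONCE(xl->lock) <= 0)
                                cpu_relax();
                }
        }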
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  25. 20 Jun 2008, 2 commits