1. 11 Mar 2011, 3 commits
  2. 02 Dec 2010, 1 commit
    • xen: fix MSI setup and teardown for PV on HVM guests · af42b8d1
      Committed by Stefano Stabellini
      When remapping MSIs into pirqs for PV on HVM guests, qemu is responsible
      for doing the actual mapping and unmapping.
      We only give qemu the desired pirq number when we ask for the mapping
      the first time; after that, we should read the pirq number back
      from qemu every time we want to re-enable the MSI.
      
      This fixes a bug in xen_hvm_setup_msi_irqs that manifests itself when
      trying to enable the same MSI a second time: the old MSI-to-pirq
      mapping is still valid at that point, but xen_hvm_setup_msi_irqs would
      try to assign a new pirq anyway.
      A simple way to reproduce the bug is to assign an MSI-capable network
      card to a PV on HVM guest: if the user brings the corresponding
      ethernet interface down and back up, Linux fails to enable MSIs on the
      device.
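      
      A minimal sketch of that flow, for illustration only; the msi_dev
      handle and all four helpers below are hypothetical stand-ins for the
      qemu-backed mapping described above, not functions from the actual
      patch:
      
          /* Sketch only: every function and the msi_dev handle here are
           * hypothetical stand-ins for the qemu-backed pirq mapping. */
          struct msi_dev;                              /* opaque device handle */
          
          int read_pirq_from_qemu(struct msi_dev *d);  /* <0 if no mapping yet */
          int map_msi_to_pirq(struct msi_dev *d, int hint);
          int bind_pirq_to_irq(struct msi_dev *d, int pirq);
          int pick_free_pirq(void);
          
          int enable_msi(struct msi_dev *dev)
          {
                  /* On re-enable, an old MSI-to-pirq mapping may still exist:
                   * read it back from qemu instead of assigning a new pirq. */
                  int pirq = read_pirq_from_qemu(dev);
          
                  if (pirq < 0)   /* first enable: propose a pirq, qemu maps it */
                          pirq = map_msi_to_pirq(dev, pick_free_pirq());
                  if (pirq < 0)
                          return pirq;
          
                  return bind_pirq_to_irq(dev, pirq);
          }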
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  3. 23 Oct 2010, 4 commits
  4. 18 Oct 2010, 5 commits
  5. 23 Jul 2010, 1 commit
  6. 31 Mar 2009, 1 commit
  7. 21 Aug 2008, 1 commit
    • xen: save previous spinlock when blocking · 168d2f46
      Committed by Jeremy Fitzhardinge
      A spinlock can be interrupted while spinning, so make sure we preserve
      the previous lock of interest if we're taking a lock from within an
      interrupt handler.
      
      We also need to deal with the case where the blocking path gets
      interrupted between testing to see if the lock is free and actually
      blocking.  If we get interrupted there and end up in the state where
      the lock is free but the irq isn't pending, then we'll block
      indefinitely in the hypervisor.  The fix is to make sure that any
      nested lock-taker always leaves the irq pending if there is any
      chance the outer lock became free.
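      
      A condensed sketch of that bookkeeping, modeled loosely on
      arch/x86/xen/spinlock.c; the per-cpu plumbing and the irq-pending
      handling are elided, and the names are illustrative:
      
          struct xen_spinlock {
                  unsigned char lock;             /* 0 -> free, nonzero -> held */
                  unsigned short spinners;        /* count of waiting lockers   */
          };
          
          /* Which lock this cpu is spinning on (per-cpu in the real code). */
          static struct xen_spinlock *lock_spinners;
          
          /* Record our lock of interest; return the previous one so a
           * nested (in-interrupt) locker can restore it when it is done. */
          static struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
          {
                  struct xen_spinlock *prev = lock_spinners;
          
                  lock_spinners = xl;
                  xl->spinners++;
                  return prev;
          }
          
          /* Finished spinning: restore the outer lock of interest. */
          static void unspinning_lock(struct xen_spinlock *xl,
                                      struct xen_spinlock *prev)
          {
                  xl->spinners--;
                  lock_spinners = prev;
          }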
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Acked-by: Jan Beulich <jbeulich@novell.com>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  8. 16 Jul 2008, 1 commit
    • xen: implement Xen-specific spinlocks · 2d9e1e2f
      Committed by Jeremy Fitzhardinge
      The standard ticket spinlocks are very expensive in a virtual
      environment, because their performance depends on Xen's scheduler
      giving vcpus time in the order that they're supposed to take the
      spinlock.
      
      This implements a Xen-specific spinlock, which should be much more
      efficient.
      
      The fast path is essentially the old Linux-x86 locks, using a single
      lock byte.  The locker decrements the byte; if the result is 0, it
      has the lock.  If the lock is negative, the locker must spin until
      the lock is positive again.
      
      When there's contention, the locker spins for 2^16[*] iterations waiting
      to get the lock.  If it fails to get the lock in that time, it adds
      itself to the contention count in the lock and blocks on a per-cpu
      event channel.
      
      When unlocking the spinlock, the locker looks to see if there's anyone
      blocked waiting for the lock by checking for a non-zero waiter count.
      If there's a waiter, it traverses the per-cpu "lock_spinners"
      variable, which records which lock each CPU is waiting on.  It picks
      one CPU waiting on the lock and sends it an event to wake it up.
      
      This allows efficient fast-path spinlock operation, while allowing
      spinning vcpus to give up their processor time while waiting for a
      contended lock.
      
      [*] 2^16 iterations is the threshold at which 98% of locks have been
      taken, according to Thomas Friebel's Xen Summit talk "Preventing
      Guests from Spinning Around".  We would therefore expect the lock and
      unlock slow paths to be entered only 2% of the time.
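      
      A condensed, userspace-flavored sketch of the algorithm above using
      C11 atomics; the per-cpu event-channel plumbing is reduced to stubs,
      and the real code also re-checks the lock after registering as a
      spinner so a wakeup cannot be missed:
      
          #include <stdatomic.h>
          
          #define SPIN_TIMEOUT (1 << 16)  /* ~98% of locks are taken by then */
          
          struct xen_spinlock {
                  atomic_uchar lock;              /* 0 -> free, 1 -> held      */
                  atomic_ushort spinners;         /* waiters blocked on evtchn */
          };
          
          void block_on_event_channel(struct xen_spinlock *xl);   /* stub */
          void kick_one_spinner(struct xen_spinlock *xl);         /* stub */
          
          static int xen_spin_trylock(struct xen_spinlock *xl)
          {
                  return atomic_exchange(&xl->lock, 1) == 0;
          }
          
          void xen_spin_lock(struct xen_spinlock *xl)
          {
                  for (;;) {
                          /* fast path: spin for a bounded number of tries */
                          for (int i = 0; i < SPIN_TIMEOUT; i++)
                                  if (xen_spin_trylock(xl))
                                          return;
                          /* slow path: advertise ourselves, then block */
                          atomic_fetch_add(&xl->spinners, 1);
                          block_on_event_channel(xl);
                          atomic_fetch_sub(&xl->spinners, 1);
                  }
          }
          
          void xen_spin_unlock(struct xen_spinlock *xl)
          {
                  atomic_store(&xl->lock, 0);
                  if (atomic_load(&xl->spinners))  /* anyone blocked? wake one */
                          kick_one_spinner(xl);
          }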
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
      Cc: Christoph Lameter <clameter@linux-foundation.org>
      Cc: Petr Tesarik <ptesarik@suse.cz>
      Cc: Virtualization <virtualization@lists.linux-foundation.org>
      Cc: Xen devel <xen-devel@lists.xensource.com>
      Cc: Thomas Friebel <thomas.friebel@amd.com>
      Cc: Nick Piggin <nickpiggin@yahoo.com.au>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
  9. 27 May 2008, 2 commits
    • xen: implement save/restore · 0e91398f
      Committed by Jeremy Fitzhardinge
      This patch implements Xen save/restore and migration.
      
      Saving is triggered via xenbus, which is polled in
      drivers/xen/manage.c.  When a suspend request comes in, the kernel
      prepares itself for saving by:
      
      1 - Freeze all processes.  This is primarily to prevent any
          partially-completed pagetable updates from confusing the suspend
          process.  If CONFIG_PREEMPT isn't defined, then this isn't necessary.
      
      2 - Suspend xenbus and other devices
      
      3 - Stop_machine, to make sure all the other vcpus are quiescent.  The
          Xen tools require the domain to run its save off vcpu0.
      
      4 - Within the stop_machine state, it pins any unpinned pgds (under
          construction or destruction), canonicalizes various other pieces
          of state (mostly converting mfns to pfns), and finally
      
      5 - Suspend the domain
      
      Restore reverses the steps used to save the domain, ending when all
      the frozen processes are thawed.
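      
      A condensed sketch of the suspend side (cf. drivers/xen/manage.c);
      error handling and the resume path are elided, and the two
      *_devices_and_xenbus helpers plus stop_machine_on_cpu0 are
      illustrative stubs, not the kernel's actual entry points:
      
          int freeze_processes(void);             /* kernel API: quiesce userspace */
          void thaw_processes(void);
          
          void suspend_devices_and_xenbus(void);  /* stub for step 2 */
          void resume_devices_and_xenbus(void);   /* stub */
          int xen_suspend(void *cancelled);       /* steps 4 and 5 */
          int stop_machine_on_cpu0(int (*fn)(void *), void *data);  /* stub */
          
          int do_suspend(void)
          {
                  int cancelled = 0;
          
                  freeze_processes();                  /* step 1 */
                  suspend_devices_and_xenbus();        /* step 2 */
          
                  /* steps 3-5: run xen_suspend() on vcpu0 with every other
                   * vcpu quiescent; it pins pgds, converts mfns to pfns,
                   * and finally suspends the domain */
                  stop_machine_on_cpu0(xen_suspend, &cancelled);
          
                  resume_devices_and_xenbus();         /* restore reverses */
                  thaw_processes();
                  return 0;
          }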
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    • xen: add rebind_evtchn_irq · eb1e305f
      Committed by Jeremy Fitzhardinge
      Add rebind_evtchn_irq(), which will rebind a device driver's existing
      irq to a new event channel on restore.  Since the new event channel
      will be masked and bound to vcpu0, we update the state accordingly and
      unmask the irq once everything is set up.
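      
      A simplified sketch of that flow; the table and helpers below
      approximate drivers/xen/events.c with the locking removed:
      
          #define NR_EVENT_CHANNELS 1024
          
          static int evtchn_to_irq[NR_EVENT_CHANNELS];    /* -1 when unbound */
          
          void disable_irq(unsigned int irq);             /* kernel API */
          void enable_irq(unsigned int irq);              /* kernel API */
          void bind_evtchn_to_vcpu(int evtchn, unsigned int vcpu);  /* stub */
          
          void rebind_evtchn_irq(int evtchn, int irq)
          {
                  disable_irq(irq);               /* new channel starts masked */
          
                  /* after restore the old mapping is gone; reattach the
                   * driver's existing irq to the new event channel */
                  evtchn_to_irq[evtchn] = irq;
                  bind_evtchn_to_vcpu(evtchn, 0); /* new channels bind to vcpu0 */
          
                  enable_irq(irq);                /* unmask once state is set */
          }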
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  10. 25 Apr 2008, 2 commits
  11. 18 Jul 2007, 3 commits
    • xen: use the hvc console infrastructure for Xen console · b536b4b9
      Committed by Jeremy Fitzhardinge
      Implement a Xen back-end for hvc console.
      
      * * *
      Add early printk support via the hvc console; enable it with
      "earlyprintk=xen" on the kernel command line.
      
      From: Gerd Hoffmann <kraxel@suse.de>
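      
      A sketch of how a back-end plugs into the hvc layer; read_console
      and write_console stand in for the xencons ring I/O, and the
      registration calls follow the hvc API of that era, so treat the
      exact signatures as approximate:
      
          /* Sketch only; kernel includes and error handling elided. */
          #define HVC_COOKIE 0x58656e     /* "Xen": magic vterm number */
          
          static int read_console(uint32_t vtermno, char *buf, int len);
          static int write_console(uint32_t vtermno, const char *buf, int len);
          
          static struct hv_ops xen_hvc_ops = {
                  .get_chars = read_console,      /* pull input from the ring */
                  .put_chars = write_console,     /* push output to the ring  */
          };
          
          static int __init xen_hvc_init(void)
          {
                  /* make vterm 0 usable as a console before the tty layer
                   * is up, then allocate the hvc device proper */
                  hvc_instantiate(HVC_COOKIE, 0, &xen_hvc_ops);
                  hvc_alloc(HVC_COOKIE, 0 /* irq */, &xen_hvc_ops, 256);
                  return 0;
          }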
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Acked-by: Ingo Molnar <mingo@elte.hu>
      Acked-by: Olof Johansson <olof@lixom.net>
    • xen: SMP guest support · f87e4cac
      Committed by Jeremy Fitzhardinge
      This is a fairly straightforward Xen implementation of smp_ops.
      
      Xen has its own IPI mechanisms, and has no dependency on any
      APIC-based IPI.  The smp_ops hooks and the flush_tlb_others pv_op
      allow a Xen guest to avoid all APIC code in arch/i386 (the only apic
      operation is a single apic_read for the apic version number).
      
      One subtle point which needs to be addressed is unpinning pagetables
      when another cpu may have a lazy tlb reference to the pagetable. Xen
      will not allow an in-use pagetable to be unpinned, so we must find any
      other cpus with a reference to the pagetable and get them to shoot
      down their references.
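      
      Roughly what the hook table looks like; field names follow the
      2007-era struct smp_ops, the xen_* implementations from the commit
      are reduced to declarations, and only a representative subset of
      fields is shown:
      
          static void xen_smp_prepare_cpus(unsigned int max_cpus);
          static int  xen_cpu_up(unsigned int cpu);
          static void xen_smp_send_reschedule(int cpu);
          static int  xen_smp_call_function_mask(cpumask_t mask,
                                                 void (*func)(void *),
                                                 void *info, int wait);
          
          struct smp_ops xen_smp_ops = {
                  .smp_prepare_cpus       = xen_smp_prepare_cpus,
                  .cpu_up                 = xen_cpu_up,
                  /* IPIs go over event channels, never the APIC: */
                  .smp_send_reschedule    = xen_smp_send_reschedule,
                  .smp_call_function_mask = xen_smp_call_function_mask,
          };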
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Cc: Benjamin LaHaise <bcrl@kvack.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Andi Kleen <ak@suse.de>
    • xen: event channels · e46cdb66
      Committed by Jeremy Fitzhardinge
      Xen implements interrupts in terms of event channels.  Each guest
      domain gets 1024 event channels which can be used for a variety of
      purposes, such as Xen timer events, inter-domain events,
      inter-processor events (IPIs), or real hardware IRQs.
      
      Within the kernel, we map the event channels to IRQs, and implement
      the whole interrupt handling using a Xen irq_chip.
      
      Rather than setting NR_IRQS to 1024 under PARAVIRT in order to
      accommodate Xen, we create a dynamic mapping between event channels
      and IRQs.  Ideally, Linux will eventually move towards dynamically
      allocating per-irq structures, and we can use a 1:1 mapping between
      event channels and irqs.
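      
      A sketch of the dynamic mapping this describes (cf.
      drivers/xen/events.c); locking and the irq_chip installation are
      elided:
      
          #define NR_EVENT_CHANNELS 1024
          
          static int evtchn_to_irq[NR_EVENT_CHANNELS];    /* -1 when unbound */
          
          static int find_unbound_irq(void);      /* stub: pick a free irq */
          
          int bind_evtchn_to_irq(unsigned int evtchn)
          {
                  int irq = evtchn_to_irq[evtchn];
          
                  if (irq == -1) {                /* first use: allocate an irq */
                          irq = find_unbound_irq();
                          evtchn_to_irq[evtchn] = irq;
                          /* the real code also wires up the Xen irq_chip and
                           * handler for this irq here */
                  }
                  return irq;
          }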
      Signed-off-by: Jeremy Fitzhardinge <jeremy@xensource.com>
      Signed-off-by: Chris Wright <chrisw@sous-sol.org>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Eric W. Biederman <ebiederm@xmission.com>