1. 17 Aug, 2012 (1 commit)
  2. 20 Jul, 2012 (2 commits)
    • xen PVonHVM: move shared_info to MMIO before kexec · 00e37bdb
      Committed by Olaf Hering
      Currently kexec in a PVonHVM guest fails with a triple fault because the
      new kernel overwrites the shared info page. The exact failure depends on
      the size of the kernel image. This patch moves the pfn from RAM into
      MMIO space before the kexec boot.
      
      The pfn containing the shared_info is located somewhere in RAM. This
      will cause trouble if the current kernel is doing a kexec boot into a
      new kernel. The new kernel (and its startup code) cannot know where
      the pfn is, so it cannot reserve the page. The hypervisor will
      continue to update the pfn, and as a result memory corruption occurs
      in the new kernel.
      
      One way to work around this issue is to allocate a page in the
      xen-platform pci device's BAR memory range. But pci init is done very
      late and the shared_info page is already in use very early to read the
      pvclock. So moving the pfn from RAM to MMIO is racy, because some
      code paths on other vcpus could access the pfn during the small
      window when the old pfn is moved to the new pfn. There is even a
      small window where the old pfn is not backed by an mfn, and during
      that time all reads return -1.
      
      Because it is not known upfront where the MMIO region is located, it
      cannot be used right from the start in xen_hvm_init_shared_info.
      
      To minimise trouble, the move of the pfn is done shortly before
      kexec. This does not eliminate the race, because all vcpus are still
      online when the syscore_ops callback is invoked, but hopefully there
      is no work pending at that point in time. Also, the syscore_op is
      run last, which reduces the risk further; a sketch of the approach
      follows after this entry.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
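      A minimal sketch of that late move, assuming a hypothetical
      xen_hvm_prepare_kexec() helper and an already-chosen pfn inside the
      device's MMIO range (HVM_SHARED_INFO_MMIO_PFN is an illustrative
      name); XENMEM_add_to_physmap with XENMAPSPACE_shared_info is the
      interface the Xen headers provide for remapping the shared info
      frame:

      /* Sketch only: re-home the shared_info frame to an MMIO pfn right
       * before kexec so the new kernel cannot overwrite it with RAM data.
       */
      #include <linux/bug.h>
      #include <linux/init.h>
      #include <linux/syscore_ops.h>
      #include <xen/interface/xen.h>
      #include <xen/interface/memory.h>
      #include <asm/xen/hypercall.h>

      static void xen_hvm_prepare_kexec(void)
      {
              struct xen_add_to_physmap xatp = {
                      .domid = DOMID_SELF,
                      .idx   = 0,
                      .space = XENMAPSPACE_shared_info,
                      .gpfn  = HVM_SHARED_INFO_MMIO_PFN, /* assumed MMIO pfn */
              };

              /* Ask the hypervisor to back the new pfn with shared_info. */
              if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
                      BUG();
      }

      static struct syscore_ops xen_hvm_syscore_ops = {
              .shutdown = xen_hvm_prepare_kexec, /* runs late on the kexec path */
      };

      static int __init xen_hvm_kexec_init(void)
      {
              register_syscore_ops(&xen_hvm_syscore_ops);
              return 0;
      }
      subsys_initcall(xen_hvm_kexec_init);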
    • xen: enable platform-pci only in a Xen guest · 38ad4f4b
      Committed by Olaf Hering
      While debugging kexec issues in a PVonHVM guest I modified
      xen_hvm_platform() to return false to disable all PV drivers. This
      caused a crash in platform_pci_init() because it expects certain data
      structures to be initialized properly.
      
      To avoid such a crash, make sure the driver is initialized only when
      running in a Xen guest; see the sketch after this entry.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
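      A minimal sketch of such a guard, assuming the check is placed in the
      driver's init routine and uses the xen_hvm_domain() predicate from
      <xen/xen.h>; the exact check in the actual commit may differ:

      /* Sketch only: register the platform-pci driver only inside a Xen
       * HVM guest, so its PV data structures are never touched elsewhere.
       */
      #include <linux/module.h>
      #include <linux/pci.h>
      #include <xen/xen.h>

      static struct pci_driver platform_driver = {
              .name = "xen-platform-pci",
              /* .id_table and .probe omitted in this sketch */
      };

      static int __init platform_pci_init(void)
      {
              if (!xen_hvm_domain())
                      return -ENODEV;

              return pci_register_driver(&platform_driver);
      }
      module_init(platform_pci_init);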
  3. 22 Mar, 2012 (1 commit)
  4. 26 Feb, 2011 (1 commit)
  5. 12 Jan, 2011 (1 commit)
  6. 27 Jul, 2010 (1 commit)
  7. 23 Jul, 2010 (2 commits)
    • xen: Add suspend/resume support for PV on HVM guests. · 016b6f5f
      Committed by Stefano Stabellini
      Suspend/resume requires a few different things on HVM: the suspend
      hypercall is different, and we don't need to save/restore
      memory-related settings, except for the shared info page and the
      callback mechanism; a sketch of the resume side follows after this
      entry.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
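      A minimal sketch of the HVM resume path, assuming a hypothetical
      xen_hvm_post_suspend() hook; xen_hvm_init_shared_info() and
      xen_callback_vector() are used here as placeholder names for the
      re-registration helpers the commit message alludes to:

      /* Sketch only: on HVM resume there are no PV memory fixups to redo;
       * only the shared info page and the event callback must be
       * re-registered with the hypervisor.
       */

      /* Assumed helpers: re-map shared_info and re-install the callback. */
      extern void xen_hvm_init_shared_info(void);
      extern void xen_callback_vector(void);

      void xen_hvm_post_suspend(int suspend_cancelled)
      {
              /* The pfn->mfn backing of shared_info is lost across suspend. */
              xen_hvm_init_shared_info();

              /* Re-install the upcall so event channels are delivered again. */
              xen_callback_vector();
      }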
    • xen: Xen PCI platform device driver. · 183d03cc
      Committed by Stefano Stabellini
      Add the xen pci platform device driver that is responsible
      for initializing the grant table and xenbus in PV on HVM mode.
      A few changes to xenbus and the grant table are necessary to allow
      the delayed initialization in HVM mode, and the grant table needs a
      few additional modifications to work in HVM mode.
      
      The Xen PCI platform device raises an irq every time an event has
      been delivered to us. However, these interrupts are only delivered to
      vcpu 0. The Xen PCI platform interrupt handler calls
      xen_hvm_evtchn_do_upcall, which is a small wrapper around
      __xen_evtchn_do_upcall, the traditional Xen upcall handler, the very
      same used with traditional PV guests.
      
      When running on HVM, the event channel upcall is never called while
      it is already in progress, because it is a normal Linux irq handler
      (and we cannot switch the irq chip wholesale to the Xen PV ones, as
      we are running QEMU and might have passed in PCI devices); therefore
      we cannot be sure that evtchn_upcall_pending is 0 when returning. For
      this reason, if evtchn_upcall_pending is set by Xen, we need to loop
      again over the pending event channels, otherwise we might lose some
      event channel deliveries; see the loop sketched after this entry.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
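      A minimal sketch of that loop, using struct vcpu_info from the Xen
      interface headers; process_pending_event_channels() is an
      illustrative stand-in for the real per-channel dispatch logic:

      /* Sketch only: keep scanning event channels until Xen stops flagging
       * more. Because this runs as an ordinary irq handler, Xen may set
       * evtchn_upcall_pending again while we are still scanning; without
       * the loop those late events would be missed until the next irq.
       */
      #include <xen/interface/xen.h>

      /* Illustrative stand-in for the real per-channel dispatch logic. */
      static void process_pending_event_channels(struct vcpu_info *v)
      {
              /* ...scan v->evtchn_pending_sel and run the irq handlers... */
      }

      static void xen_hvm_do_upcall(struct vcpu_info *vcpu_info)
      {
              do {
                      /* Clear the flag first: an event arriving during the
                       * scan re-sets it and forces one more pass. */
                      vcpu_info->evtchn_upcall_pending = 0;

                      process_pending_event_channels(vcpu_info);
              } while (vcpu_info->evtchn_upcall_pending);
      }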