1. 06 January 2014, 1 commit
    • xen/grant: Implement a grant frame array struct (v3). · efaf30a3
      Committed by Konrad Rzeszutek Wilk
      The 'xen_hvm_resume_frames' used to be an 'unsigned long'
      containing the virtual address of the grants. That was OK
      for most architectures (PVHVM, ARM) where the grants are contiguous
      in memory. That however is not the case for PVH - in which case
      we have to look up the PFN for each virtual address.

      Instead of doing that, let's make it a structure which will contain
      the array of PFNs, the virtual address and the count of said PFNs.

      Also provide generic functions - gnttab_setup_auto_xlat_frames and
      gnttab_free_auto_xlat_frames - to populate said structure with
      appropriate values for PVHVM and ARM.
      
      To round it off, change the name from 'xen_hvm_resume_frames' to
      a more descriptive one - 'xen_auto_xlat_grant_frames'.
      
      For PVH, in patch "xen/pvh: Piggyback on PVHVM for grant driver"
      we will populate the 'xen_auto_xlat_grant_frames' by ourselves.
      
      v2 moves the xen_remap into gnttab_setup_auto_xlat_frames
      and also introduces xen_unmap for gnttab_free_auto_xlat_frames.
      Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v3: Based on top of 'asm/xen/page.h: remove redundant semicolon']
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
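      For illustration, a minimal sketch of what such a grant frame
      structure and its setup helper could look like. The field names
      and the exact setup logic are assumptions based on the description
      above, not a verbatim copy of the kernel code:

          /* One structure instead of a bare 'unsigned long'. */
          struct grant_frames {
              xen_pfn_t *pfn;     /* array of PFNs backing the frames */
              unsigned int count; /* number of entries in pfn[] */
              void *vaddr;        /* virtual address of the mapping */
          };

          struct grant_frames xen_auto_xlat_grant_frames;

          /* Sketch: map the frames, record PFNs, vaddr and count. */
          int gnttab_setup_auto_xlat_frames(unsigned long addr)
          {
              unsigned int i, nr = max_nr_grant_frames();
              void *vaddr;
              xen_pfn_t *pfn;

              if (xen_auto_xlat_grant_frames.count)
                  return -EINVAL; /* already set up */

              vaddr = xen_remap(addr, PAGE_SIZE * nr);
              if (!vaddr)
                  return -ENOMEM;
              pfn = kcalloc(nr, sizeof(*pfn), GFP_KERNEL);
              if (!pfn) {
                  xen_unmap(vaddr);
                  return -ENOMEM;
              }
              /* PVHVM/ARM: grants are contiguous, so a linear fill works. */
              for (i = 0; i < nr; i++)
                  pfn[i] = PFN_DOWN(addr) + i;

              xen_auto_xlat_grant_frames.vaddr = vaddr;
              xen_auto_xlat_grant_frames.pfn = pfn;
              xen_auto_xlat_grant_frames.count = nr;
              return 0;
          }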
  2. 07 November 2013, 1 commit
  3. 04 January 2013, 1 commit
    • Drivers: xen: remove __dev* attributes. · 345a5255
      Committed by Greg Kroah-Hartman
      CONFIG_HOTPLUG is going away as an option.  As a result, the __dev*
      markings need to be removed.
      
      This change removes the use of __devinit and __devinitdata from these
      drivers.
      
      Based on patches originally written by Bill Pemberton, but redone by me
      in order to handle some of the coding style issues better, by hand.
      
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Jan Beulich <jbeulich@suse.com>
      Cc: Stephen Hemminger <shemminger@vyatta.com>
      Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
      Cc: Bill Pemberton <wfp5p@virginia.edu>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
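      As an illustration of the pattern (hypothetical driver code, not
      taken from this patch), the change is purely an annotation removal:

          /* Before: probe routine marked for the CONFIG_HOTPLUG era. */
          static int __devinit example_probe(struct platform_device *pdev)
          {
              return 0;
          }

          /* After: the same routine with the __devinit marking dropped. */
          static int example_probe(struct platform_device *pdev)
          {
              return 0;
          }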
  4. 17 August 2012, 1 commit
  5. 20 July 2012, 2 commits
    • xen PVonHVM: move shared_info to MMIO before kexec · 00e37bdb
      Committed by Olaf Hering
      Currently kexec in a PVonHVM guest fails with a triple fault because the
      new kernel overwrites the shared info page. The exact failure depends on
      the size of the kernel image. This patch moves the pfn from RAM into
      MMIO space before the kexec boot.
      
      The pfn containing the shared_info is located somewhere in RAM. This
      will cause trouble if the current kernel is doing a kexec boot into a
      new kernel. The new kernel (and its startup code) cannot know where the
      pfn is, so it cannot reserve the page. The hypervisor will continue to
      update the pfn, and as a result memory corruption occurs in the new
      kernel.
      
      One way to work around this issue is to allocate a page in the
      xen-platform pci device's BAR memory range. But pci init is done very
      late and the shared_info page is already in use very early to read the
      pvclock. So moving the pfn from RAM to MMIO is racy because some code
      paths on other vcpus could access the pfn during the small window when
      the old pfn is moved to the new pfn. There is even a small window where
      the old pfn is not backed by a mfn, and during that time all reads
      return -1.
      
      Because it is not known upfront where the MMIO region is located it can
      not be used right from the start in xen_hvm_init_shared_info.
      
      To minimise trouble, the move of the pfn is done shortly before kexec.
      This does not eliminate the race because all vcpus are still online when
      the syscore_ops are called. But hopefully there is no work pending at
      this point in time. Also the syscore_op is run last, which reduces the
      risk further.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
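      A condensed sketch of the mechanism described above; the MMIO pfn
      value is a made-up placeholder and the helper names are illustrative,
      but the hypercall is the standard XENMEM_add_to_physmap interface:

          /* Hypothetical pfn inside the platform device's MMIO range. */
          #define XEN_HVM_SHARED_INFO_PFN 0xfeffcUL

          static void xen_hvm_set_shared_info_pfn(unsigned long pfn)
          {
              struct xen_add_to_physmap xatp;

              /* Ask Xen to back 'pfn' with the shared_info frame. */
              xatp.domid = DOMID_SELF;
              xatp.idx = 0;
              xatp.space = XENMAPSPACE_shared_info;
              xatp.gpfn = pfn;
              if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
                  BUG();
          }

          /* Runs as late as possible, shortly before the kexec boot. */
          static void xen_hvm_syscore_shutdown(void)
          {
              /* Move shared_info out of RAM so the new kernel cannot
               * overwrite it; Xen keeps updating the new location. */
              xen_hvm_set_shared_info_pfn(XEN_HVM_SHARED_INFO_PFN);
          }

          static struct syscore_ops xen_hvm_syscore_ops = {
              .shutdown = xen_hvm_syscore_shutdown,
          };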
    • xen: enable platform-pci only in a Xen guest · 38ad4f4b
      Committed by Olaf Hering
      While debugging kexec issues in a PVonHVM guest I modified
      xen_hvm_platform() to return false to disable all PV drivers. This
      caused a crash in platform_pci_init() because it expects certain data
      structures to be initialized properly.
      
      To avoid such a crash, make sure the driver is initialized only if
      running in a Xen guest.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
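      The shape of the fix is a simple early guard in the driver's init
      path; a sketch, assuming the check is the generic xen_hvm_domain()
      predicate (the real patch may use a different condition):

          static int __init platform_pci_module_init(void)
          {
              /* Bail out unless we really run as a Xen HVM guest, so
               * the PV data structures are guaranteed to be set up
               * before the driver touches them. */
              if (!xen_hvm_domain())
                  return -ENODEV;

              return pci_register_driver(&platform_driver);
          }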
  6. 22 March 2012, 1 commit
  7. 26 February 2011, 1 commit
  8. 12 January 2011, 1 commit
  9. 27 July 2010, 1 commit
  10. 23 July 2010, 2 commits
    • xen: Add suspend/resume support for PV on HVM guests. · 016b6f5f
      Committed by Stefano Stabellini
      Suspend/resume requires a few different things on HVM: the suspend
      hypercall is different, and we don't need to save/restore memory-related
      settings, except the shared info page and the callback mechanism.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
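      For context, a minimal sketch of the HVM-flavoured suspend hypercall
      mentioned above, assuming it reduces to the generic shutdown
      scheduler operation (on HVM there is no start_info mfn to hand back
      to the hypervisor):

          static int xen_hvm_suspend_hypercall(void)
          {
              struct sched_shutdown r = { .reason = SHUTDOWN_suspend };

              /* Returns non-zero if the suspend was cancelled. */
              return HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);
          }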
    • xen: Xen PCI platform device driver. · 183d03cc
      Committed by Stefano Stabellini
      Add the xen pci platform device driver that is responsible
      for initializing the grant table and xenbus in PV on HVM mode.
      A few changes to xenbus and the grant table are necessary to allow
      the delayed initialization in HVM mode. The grant table needs a few
      additional modifications to work in HVM mode.
      
      The Xen PCI platform device raises an irq every time an event has been
      delivered to us. However, these interrupts are only delivered to vcpu 0.
      The Xen PCI platform interrupt handler calls xen_hvm_evtchn_do_upcall
      that is a little wrapper around __xen_evtchn_do_upcall, the traditional
      Xen upcall handler, the very same used with traditional PV guests.
      
      When running on HVM the event channel upcall is never called while in
      progress because it is a normal Linux irq handler (and we cannot switch
      the irq chip wholesale to the Xen PV ones as we are running QEMU and
      might have passed in PCI devices), therefore we cannot be sure that
      evtchn_upcall_pending is 0 when returning.
      For this reason, if evtchn_upcall_pending is set by Xen we need to loop
      again over the pending event channels, otherwise we might lose some
      event channel deliveries.
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
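      A condensed sketch of the interrupt path described above; the handler
      and its registration follow the description in the message, with
      illustrative names:

          /* Funnel the platform device's irq into the traditional Xen
           * upcall path. Because this is a normal Linux irq handler,
           * the upcall itself must re-check evtchn_upcall_pending in a
           * loop so events raised meanwhile are not lost. */
          static irqreturn_t do_hvm_evtchn_intr(int irq, void *dev_id)
          {
              xen_hvm_evtchn_do_upcall();
              return IRQ_HANDLED;
          }

          static int xen_allocate_irq(struct pci_dev *pdev)
          {
              return request_irq(pdev->irq, do_hvm_evtchn_intr,
                                 IRQF_NOBALANCING | IRQF_TRIGGER_RISING,
                                 "xen-platform-pci", pdev);
          }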