1. 27 Jul 2010, 1 commit
    • swiotlb-xen: SWIOTLB library for Xen PV guest with PCI passthrough. · b097186f
      Committed by Konrad Rzeszutek Wilk
      This patchset:
      
      PV guests under Xen run in a non-contiguous memory architecture.
      
      When PCI pass-through is utilized, this necessitates an IOMMU for
      translating bus (DMA) addresses to virtual addresses and vice versa, and
      also for providing a mechanism to obtain contiguous pages for device
      driver operations (say, DMA operations).
      
      Specifically, under Xen the Linux idea of pages is an illusion. It
      assumes that pages start at zero and go up to the available memory. To
      help with that, the Linux Xen MMU provides a lookup mechanism to
      translate page frame numbers (PFN) to machine frame numbers (MFN)
      and vice versa. The MFNs are the "real" frame numbers. Furthermore,
      memory is not contiguous: the Xen hypervisor stitches memory for guests
      together from different pools, which means there is no guarantee that
      PFN==MFN and PFN+1==MFN+1. Lastly, with Xen 4.0, pages (in debug mode)
      are allocated in descending order (high to low), meaning the guest might
      never get any MFNs under the 4GB mark. (A small illustration of the
      PFN/MFN mismatch follows this entry.)
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
      Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
      Cc: Albert Herranz <albert_herranz@yahoo.es>
      Cc: Ian Campbell <Ian.Campbell@citrix.com>
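      A minimal, self-contained sketch of the PFN/MFN mismatch described above. The
      p2m[] table and its values are invented purely for illustration; the real lookup
      lives in the kernel's Xen phys-to-machine machinery, not in this commit:

          #include <stdio.h>

          #define NR_DEMO_PAGES 8

          /* Stand-in for the guest's phys-to-machine table; made-up values show
           * that MFNs need not track PFNs in any arithmetic way. */
          static const unsigned long p2m[NR_DEMO_PAGES] = {
                  0x10432, 0x10433, 0x2a871, 0x2a872, 0x2a873, 0x10500, 0x104ff, 0x104fe
          };

          /* True only if a PFN range is also contiguous in machine memory --
           * the property a device doing DMA actually cares about. */
          static int machine_contiguous(unsigned long pfn, unsigned long nr)
          {
                  for (unsigned long i = 1; i < nr; i++)
                          if (p2m[pfn + i] != p2m[pfn] + i)
                                  return 0;
                  return 1;
          }

          int main(void)
          {
                  printf("pfn 0-1 machine-contiguous? %d\n", machine_contiguous(0, 2)); /* 1 */
                  printf("pfn 1-2 machine-contiguous? %d\n", machine_contiguous(1, 2)); /* 0 */
                  return 0;
          }

      When a buffer fails this kind of check (or its MFNs sit above the device's DMA
      limit), swiotlb-xen has to bounce the data through, or exchange pages for, memory
      that is machine-contiguous and low enough.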
  2. 23 Jul 2010, 1 commit
    • xen: Xen PCI platform device driver. · 183d03cc
      Committed by Stefano Stabellini
      Add the Xen PCI platform device driver, which is responsible for
      initializing the grant table and xenbus in PV-on-HVM mode.
      A few changes to xenbus and the grant table are necessary to allow this
      delayed initialization in HVM mode; the grant table needs a few
      additional modifications to work in HVM mode.
      
      The Xen PCI platform device raises an irq every time an event has been
      delivered to us. However, these interrupts are only delivered to vcpu 0.
      The Xen PCI platform interrupt handler calls xen_hvm_evtchn_do_upcall,
      which is a small wrapper around __xen_evtchn_do_upcall, the traditional
      Xen upcall handler; the very same one used with traditional PV guests.
      
      When running on HVM, the event channel upcall is never called while it is
      already in progress, because it is a normal Linux irq handler (and we
      cannot switch the irq chip wholesale to the Xen PV ones, as we are
      running QEMU and might have PCI devices passed in); therefore we cannot
      be sure that evtchn_upcall_pending is 0 when returning.
      For this reason, if evtchn_upcall_pending is set by Xen, we need to loop
      again over the pending event channels, otherwise we might lose some
      event channel deliveries (a sketch of this re-check loop follows the entry).
      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Sheng Yang <sheng@linux.intel.com>
      Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
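      A minimal sketch of the re-check loop described above, assuming a simplified
      vcpu_info layout and a hypothetical drain_pending_event_channels() helper; the
      real loop lives in __xen_evtchn_do_upcall:

          /* Simplified stand-in for the per-vcpu shared-info structure. */
          struct demo_vcpu_info {
                  unsigned char evtchn_upcall_pending;
                  /* ... pending-event bitmaps omitted ... */
          };

          /* Hypothetical helper: walks the pending bitmaps and handles each
           * event channel found set. */
          void drain_pending_event_channels(struct demo_vcpu_info *v);

          static void demo_evtchn_do_upcall(struct demo_vcpu_info *v)
          {
                  do {
                          /* Clear the flag before draining: Xen may set it again
                           * while we are still walking the pending bitmaps. */
                          v->evtchn_upcall_pending = 0;
                          drain_pending_event_channels(v);
                          /* If the flag was set again meanwhile, loop rather than
                           * return, so no event channel delivery is lost. */
                  } while (v->evtchn_upcall_pending);
          }

      In the HVM case the Xen PCI platform device's irq handler calls into this once
      per interrupt, which is exactly why the loop, rather than hypervisor re-entry,
      has to cover events that arrive mid-processing.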
  3. 10 Sep 2009, 1 commit
    • xen: make -fstack-protector work under Xen · 577eebea
      Committed by Jeremy Fitzhardinge
      -fstack-protector uses a special per-cpu "stack canary" value.
      gcc generates special code in each function to test the canary to make
      sure that the function's stack hasn't been overrun.
      
      On x86-64, this is simply an offset from %gs, which is the usual per-cpu
      base segment register, so setting it up just requires loading %gs's base
      as normal (a user-space illustration of the canary load follows this entry).
      
      On i386, the stack protector segment is %gs (rather than the usual kernel
      per-cpu %fs segment register).  This requires setting up the full kernel
      GDT and then loading %gs accordingly.  We also need to make sure %gs is
      initialized when bringing up secondary CPUs.
      
      To keep things consistent, we do the full GDT/segment register setup on
      both architectures.
      
      Because we need to avoid code built with -fstack-protector before setting
      up the GDT, and because there's no way to disable it on a per-function
      basis, several files need to have the stack protector inhibited.
      
      [ Impact: allow Xen booting with stack-protector enabled ]
      Signed-off-by: NJeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
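      For a concrete feel of the canary access the message describes, the user-space
      analogue below reads the canary the way a -fstack-protector prologue does on
      x86-64 Linux, where glibc keeps it at %fs:0x28; in the kernel the equivalent
      load goes through %gs, which is why the per-cpu segment base must be valid
      before any protected function runs under Xen. The offset and inline asm reflect
      mainstream toolchain behaviour, not code from this commit:

          #include <stdio.h>
          #include <stdint.h>

          /* Same addressing mode gcc emits for the canary check in user space on
           * x86-64 (%fs-relative); kernel code uses a %gs-relative slot instead. */
          static inline uintptr_t read_stack_canary(void)
          {
                  uintptr_t val;

                  __asm__ ("mov %%fs:0x28, %0" : "=r" (val));
                  return val;
          }

          int main(void)
          {
                  printf("stack canary: %#lx\n", (unsigned long)read_stack_canary());
                  return 0;
          }

      If the segment base were not set up yet, that single mov would read garbage or
      fault, which is the failure mode the GDT/segment setup in this patch avoids.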
  4. 31 Mar 2009, 2 commits
  5. 09 Jan 2009, 1 commit
  6. 05 Sep 2008, 1 commit
  7. 25 Aug 2008, 1 commit
    • xen: implement CPU hotplugging · d68d82af
      Committed by Alex Nixon
      Note the changes from 2.6.18-xen CPU hotplugging:
      
      A vcpu_down request from the remote admin via xenbus both hot-unplugs the
      CPU and disables it, by removing it from the cpu_present map and removing
      its entry in /sys.
      
      A vcpu_up request from the remote admin only re-enables the CPU; it does
      not immediately bring the CPU up. A udev event is emitted, which the user
      can catch to automatically re-up CPUs when they become available, or to
      implement a more complex policy (a sketch of the guest-side handling
      follows this entry).
      Signed-off-by: Alex Nixon <alex.nixon@citrix.com>
      Acked-by: Jeremy Fitzhardinge <jeremy@goop.org>
      Signed-off-by: Ingo Molnar <mingo@elte.hu>
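      A rough sketch, in the spirit of the description above, of the guest reacting to
      the toolstack writing cpu/<N>/availability in xenstore. xenbus_scanf() and
      cpu_down() mirror real kernel interfaces, but the overall structure, the
      set_cpu_present() usage and the omitted error handling are simplified
      assumptions, not the commit's code:

          #include <linux/cpu.h>
          #include <linux/cpumask.h>
          #include <linux/kernel.h>
          #include <linux/string.h>
          #include <xen/xenbus.h>

          static void demo_vcpu_hotplug(unsigned int cpu)
          {
                  char dir[16], state[16];

                  sprintf(dir, "cpu/%u", cpu);
                  if (xenbus_scanf(XBT_NIL, dir, "availability", "%15s", state) != 1)
                          return;

                  if (!strcmp(state, "online")) {
                          /* vcpu_up: only mark the CPU present again; actually
                           * bringing it online is left to user space (udev). */
                          set_cpu_present(cpu, true);
                  } else if (!strcmp(state, "offline")) {
                          /* vcpu_down: take the CPU down and drop it from the
                           * present map, which also removes its /sys entry. */
                          cpu_down(cpu);
                          set_cpu_present(cpu, false);
                  }
          }

      A matching udev rule (or a daemon watching for the uevent) can then write 1 to
      /sys/devices/system/cpu/cpuN/online when the CPU becomes available again.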
  8. 27 May 2008, 1 commit
  9. 25 Apr 2008, 4 commits
  10. 18 Jul 2007, 2 commits