1. 13 Jan 2015, 1 commit
    • x86/xen: properly retrieve NMI reason · f221b04f
      By Jan Beulich
      Using the native code here can't work properly, as the hypervisor would
      normally have cleared the two reason bits by the time Dom0 gets to see
      the NMI (if passed to it at all). There's a shared info field for this,
      and there's an existing hook to use - just fit the two together. This
      is particularly relevant so that NMIs intended to be handled by APEI /
      GHES actually make it to the respective handler.
      
      Note that the hook can (and should) be used irrespective of whether
      the kernel is running in Dom0, as accessing port 0x61 in a DomU would
      be even worse, while the shared info field would simply hold zero all
      the time. Note further that hardware NMI handling for PVH doesn't
      currently work anyway due to missing code in the hypervisor (but it
      is expected to work in the native rather than the PV way).
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
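      Below is a minimal user-space model of the hook's logic, not the
      kernel code itself: it assumes the reason-bit positions from the
      public Xen NMI ABI and the port-0x61 bit values from the x86 headers,
      and substitutes a plain variable for
      HYPERVISOR_shared_info->arch.nmi_reason.

          #include <stdint.h>
          #include <stdio.h>

          /* Bit positions in the shared-info nmi_reason field (Xen ABI). */
          #define _XEN_NMIREASON_io_error  0
          #define _XEN_NMIREASON_pci_serr  1

          /* Bit values as they would read from ISA port 0x61. */
          #define NMI_REASON_IOCHK  0x40
          #define NMI_REASON_SERR   0x80

          /* Stand-in for HYPERVISOR_shared_info->arch.nmi_reason. */
          static uint64_t shared_nmi_reason;

          /* Build a port-0x61-style reason byte from the shared-info bits. */
          static uint8_t get_nmi_reason(void)
          {
              uint8_t reason = 0;

              if (shared_nmi_reason & (1UL << _XEN_NMIREASON_io_error))
                  reason |= NMI_REASON_IOCHK;
              if (shared_nmi_reason & (1UL << _XEN_NMIREASON_pci_serr))
                  reason |= NMI_REASON_SERR;
              return reason;
          }

          int main(void)
          {
              shared_nmi_reason = 1UL << _XEN_NMIREASON_pci_serr;
              printf("reason = %#x\n", get_nmi_reason()); /* 0x80 (SERR) */
              return 0;
          }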
  2. 04 Dec 2014, 2 commits
  3. 03 Oct 2014, 1 commit
  4. 23 Sep 2014, 1 commit
    • xen: Add Xen pvSCSI protocol description · e124c9a2
      By Juergen Gross
      Add the definition of the pvSCSI protocol used between the pvSCSI
      frontend in a Xen domU and the pvSCSI backend in a Xen driver domain
      (usually Dom0).
      
      This header was originally provided by Fujitsu for Xen based on Linux
      2.6.18.  Changes are:
      - Added comments.
      - Adapt to Linux style guide.
      - Add support for larger SG-lists by putting them in their own
        granted page.
      - Remove stale definitions.
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
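      A hedged sketch of what such a shared-ring request might look like;
      the struct and field names below are illustrative stand-ins, not the
      actual vscsiif.h layout. The separate granted page of SG entries
      mirrors the "own granted page" idea from the description above.

          #include <stdint.h>

          typedef uint32_t grant_ref_t;  /* grant-table reference */

          #define EXAMPLE_CDB_MAX      16
          #define EXAMPLE_INLINE_SEGS   8

          /* Illustrative SG entry, as packed into a granted page. */
          struct example_sg_entry {
              grant_ref_t gref;    /* granted page holding the data */
              uint16_t offset;     /* byte offset into that page */
              uint16_t length;     /* bytes to transfer */
          };

          /* Illustrative ring request: a SCSI CDB plus either inline
           * segments or grant(s) of pages filled with example_sg_entry,
           * allowing far more segments than fit in one ring slot. */
          struct example_vscsi_request {
              uint16_t rqid;          /* echoed back in the response */
              uint8_t  cmd_len;       /* valid bytes in cdb[] */
              uint8_t  nr_segments;   /* inline segs, or SG pages */
              uint8_t  cdb[EXAMPLE_CDB_MAX];
              struct example_sg_entry seg[EXAMPLE_INLINE_SEGS];
          };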
  5. 12 Sep 2014, 1 commit
  6. 19 Jul 2014, 1 commit
  7. 05 Jun 2014, 1 commit
  8. 29 May 2014, 1 commit
  9. 24 Apr 2014, 1 commit
    • arm: xen: implement multicall hypercall support. · 5e40704e
      By Ian Campbell
      As part of this, make the usual change to xen_ulong_t in place of
      unsigned long. This change has no impact on x86.
      
      The Linux definition of struct multicall_entry.result differs from
      the Xen definition, I think for good reasons, and uses a long rather
      than an unsigned long. Therefore introduce a xen_long_t, which is a
      long on x86 architectures and a signed 64-bit integer on ARM.
      
      Use uint32_t nr_calls on x86 for consistency with the ARM definition.
      
      Build tested on amd64 and i386 builds. Runtime tested on ARM.
      Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
      Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
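      A sketch of the resulting type split under the assumptions stated
      above (fixed-width 64-bit ABI integers on ARM, native long on x86);
      compare the definitions in include/xen/interface/xen.h and the
      per-arch interface headers.

          #include <stdint.h>

          #if defined(__arm__) || defined(__aarch64__)
          /* On ARM the Xen ABI uses fixed-width 64-bit integers. */
          typedef uint64_t xen_ulong_t;
          typedef int64_t  xen_long_t;  /* signed: results can be -errno */
          #else
          /* On x86 the native word size already matches the ABI. */
          typedef unsigned long xen_ulong_t;
          typedef long          xen_long_t;
          #endif

          /* Linux's view of a multicall entry: result is signed,
           * unlike Xen's own definition of the same structure. */
          struct multicall_entry {
              xen_ulong_t op;       /* hypercall number */
              xen_long_t  result;   /* return value of this call */
              xen_ulong_t args[6];  /* hypercall arguments */
          };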
  10. 18 Mar 2014, 1 commit
    • xen: add support for MSI message groups · 4892c9b4
      By Roger Pau Monné
      Add support for MSI message groups for Xen Dom0 using the
      MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
      
      In order to keep track of which pirq is the first one in the group,
      all pirqs in the MSI group except the first have the newly introduced
      PIRQ_MSI_GROUP flag set. This prevents calling PHYSDEVOP_unmap_pirq
      on them, since the unmap must be done with the first pirq in the
      group.
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
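      A hedged sketch of the unmap rule; the pirq_info bookkeeping
      structure here is made up for illustration, and only the
      PIRQ_MSI_GROUP flag and the unmap constraint come from the
      description above.

          #include <stdbool.h>

          #define PIRQ_MSI_GROUP  (1 << 0)  /* set on all but the first */

          /* Hypothetical per-pirq bookkeeping entry. */
          struct pirq_info {
              int      pirq;
              unsigned flags;
          };

          /* Only the first pirq of an MSI group may be passed to
           * PHYSDEVOP_unmap_pirq; unmapping it tears down the group. */
          static bool pirq_needs_unmap(const struct pirq_info *info)
          {
              return !(info->flags & PIRQ_MSI_GROUP);
          }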
  11. 11 Feb 2014, 1 commit
  12. 08 Feb 2014, 1 commit
  13. 06 Jan 2014, 2 commits
    • xen/pvh: Support ParaVirtualized Hardware extensions (v3). · 4e903a20
      By Mukesh Rathor
      PVH allows a PV Linux guest to utilize hardware extended
      capabilities, such as running MMU updates in an HVM container.
      
      The Xen side defines PVH as (from docs/misc/pvh-readme.txt,
      with modifications):
      
      "* the guest uses auto translate:
       - p2m is managed by Xen
       - pagetables are owned by the guest
       - mmu_update hypercall not available
      * it uses event callback and not vlapic emulation,
      * IDT is native, so set_trap_table hcall is also N/A for a PVH guest.
      
      For a full list of hcalls supported for PVH, see pvh_hypercall64_table
      in arch/x86/hvm/hvm.c in xen.  From the ABI perspective, it's mostly a
      PV guest with auto translate, although it does use hvm_op for setting
      callback vector."
      
      Use .ascii and .asciz to define the Xen feature string. Note that the
      PVH string must be on a single line (not split across multiple lines
      with \) to keep the assembler from emitting a NUL character after
      each partial string. This patch allows PVH to be configured and
      enabled.
      
      We also introduce the 'XEN_ELFNOTE_SUPPORTED_FEATURES' ELF note to
      tell the hypervisor that 'hvm_callback_vector' is what the kernel
      needs. We can not put it in 'XEN_ELFNOTE_FEATURES' as older
      hypervisors parse fields they don't understand as errors and refuse
      to load the kernel. This work-around fixes the problem.
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
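      An illustrative stand-alone fragment (not the kernel's actual
      xen-head.S) showing the assembler behavior the note above guards
      against: every .asciz operand gets its own NUL terminator, so a
      feature string assembled from several pieces ends up with embedded
      NULs that truncate the list.

          /* Build with gcc; inspect: objdump -s -j .note_example a.out */
          __asm__(".pushsection .note_example, \"a\"\n"
                  ".asciz \"feature_a|feature_b\"\n"        /* a|b\0     */
                  ".asciz \"feature_a|\", \"feature_b\"\n"  /* a|\0 b\0  */
                  ".ascii \"no_terminator\"\n"              /* no NUL    */
                  ".popsection\n");

          int main(void) { return 0; }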
    • xen/events: Add the hypervisor interface for the FIFO-based event channels · bf2bbe07
      By David Vrabel
      Add the hypercall sub-ops and the structures for the shared data used
      in the FIFO-based event channel ABI.
      
      The design document for this new ABI is available here:
      
          http://xenbits.xen.org/people/dvrabel/event-channels-H.pdf
      
      In summary, events are reported using a per-domain shared event array
      of event words.  Each event word has PENDING, LINKED and MASKED bits
      and a LINK field for pointing to the next event in the event queue.
      
      There are 16 event queues (with different priorities) per VCPU.
      
      Key advantages of this new ABI include:
      
      - Support for over 100,000 events (2^17).
      - 16 different event priorities.
      - Improved fairness in event latency through the use of FIFOs.
      
      The ABI is available in Xen 4.4 and later.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
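      A sketch of the per-event word layout summarized above, using the
      bit positions from the public FIFO ABI header
      (xen/interface/event_channel.h); the helpers are illustrative, not
      the kernel's accessors.

          #include <stdint.h>

          typedef uint32_t event_word_t;

          /* Control-bit positions in each 32-bit event word. */
          #define EVTCHN_FIFO_PENDING  31
          #define EVTCHN_FIFO_MASKED   30
          #define EVTCHN_FIFO_LINKED   29

          /* LINK field: index of the next event in the queue. */
          #define EVTCHN_FIFO_LINK_BITS 17
          #define EVTCHN_FIFO_LINK_MASK ((1U << EVTCHN_FIFO_LINK_BITS) - 1)

          static inline int event_is_pending(event_word_t w)
          {
              return (w >> EVTCHN_FIFO_PENDING) & 1;
          }

          static inline uint32_t event_link(event_word_t w)
          {
              return w & EVTCHN_FIFO_LINK_MASK;
          }

      The 17-bit LINK field is what bounds the ABI at 2^17 = 131072
      events, matching the "over 100,000" figure above.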
  14. 13 Dec 2013, 1 commit
  15. 11 Dec 2013, 1 commit
  16. 09 Nov 2013, 1 commit
  17. 18 Oct 2013, 3 commits
  18. 09 Aug 2013, 2 commits
    • drivers/tpm: add xen tpmfront interface · e2683957
      By Daniel De Graaf
      This is a complete rewrite of the Xen TPM frontend driver, taking
      advantage of a simplified frontend/backend interface and adding support
      for cancellation and timeouts.  The backend for this driver is provided
      by a vTPM stub domain using the interface in Xen 4.3.
      Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
      Acked-by: Matthew Fioravante <matthew.fioravante@jhuapl.edu>
      Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Peter Huewe <peterhuewe@gmx.de>
      Reviewed-by: Peter Huewe <peterhuewe@gmx.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    • xen: Support 64-bit PV guest receiving NMIs · 6efa20e4
      By Konrad Rzeszutek Wilk
      This is based on a patch that Zhenzhong Duan had sent, which was
      missing some of the remaining pieces. The kernel has the logic to
      handle Xen-type exceptions using the paravirt interface in the
      assembler code (see PARAVIRT_ADJUST_EXCEPTION_FRAME -
      pv_irq_ops.adjust_exception_frame and INTERRUPT_RETURN -
      pv_cpu_ops.iret).

      That means the NMI handler (and other exception handlers) use the
      hypervisor iret.
      
      The other changes that would be necessary for this would be to
      translate the NMI_VECTOR to one of the entries on the ipi_vector and
      make xen_send_IPI_mask_allbutself use different events.
      
      Fortunately for us, commit 1db01b49
      (xen: Clean up apic ipi interface) implemented this, and we piggyback
      on that cleanup such that the apic IPI interface will pass the right
      vector value for NMI.
      
      With this patch we can trigger NMIs within a PV guest (tested only on
      x86_64).
      
      For this to work with normal PV guests (not the initial domain) we
      need the domain to be able to use the APIC ops - they are already
      implemented to use the Xen event channels. For that to be turned on
      in a PV domU we need to remove the masking of X86_FEATURE_APIC.

      Incidentally, that means kgdb will also now work within a PV guest
      without using the 'nokgdbroundup' workaround.
      
      Note that the 32-bit version is different and this patch
      does not enable that.
      
      CC: Lisa Nguyen <lisa@xenapiadmin.com>
      CC: Ben Guthro <benjamin.guthro@citrix.com>
      CC: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Fixed up per David Vrabel's comments]
      Reviewed-by: Ben Guthro <benjamin.guthro@citrix.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
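      A hedged model of the vector-translation step: a made-up lookup that
      turns the APIC vector numbers handed in by the generic IPI code into
      per-purpose Xen event identifiers. The enum names are illustrative,
      not the kernel's; only NMI_VECTOR is a real x86 constant.

          #include <stdio.h>

          #define NMI_VECTOR 0x02  /* real x86 constant: NMI is vector 2 */

          /* Hypothetical Xen-side IPI identifiers (illustrative only). */
          enum xen_ipi {
              XEN_IPI_RESCHEDULE,
              XEN_IPI_CALL_FUNCTION,
              XEN_IPI_NMI,        /* the case this patch series adds */
              XEN_IPI_INVALID,
          };

          /* Translate an x86 vector into the event channel to notify. */
          static enum xen_ipi vector_to_xen_ipi(int vector)
          {
              switch (vector) {
              case NMI_VECTOR:
                  return XEN_IPI_NMI;
              /* ... other vectors map to their own events ... */
              default:
                  return XEN_IPI_INVALID;
              }
          }

          int main(void)
          {
              printf("%d\n", vector_to_xen_ipi(NMI_VECTOR)); /* XEN_IPI_NMI */
              return 0;
          }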
  19. 31 Jul 2013, 1 commit
  20. 18 Jun 2013, 1 commit
  21. 07 Jun 2013, 1 commit
  22. 24 May 2013, 1 commit
  23. 23 Apr 2013, 2 commits
    • xen-netback: coalesce slots in TX path and fix regressions · 2810e5b9
      By Wei Liu
      This patch coalesces tx requests when constructing grant copy
      structures. It enables netback to deal with the situation where the
      frontend's MAX_SKB_FRAGS is larger than the backend's MAX_SKB_FRAGS.

      With the help of coalescing, this patch addresses two regressions
      while avoiding reopening the security hole fixed by XSA-39.
      
      Regression 1. The reduction of the number of supported ring entries
      (slots) per packet (from 18 to 17). This regression had been around
      for some time but remained unnoticed until the XSA-39 security fix.
      It is fixed by coalescing slots.
      
      Regression 2. The XSA-39 security fix turned "too many frags" errors
      from just dropping the packet into a fatal error that disables the
      VIF. This is fixed by coalescing slots (handling 18 slots when the
      backend's MAX_SKB_FRAGS is 17), which rules out false positives
      (using 18 slots is legitimate) while still dropping packets that use
      19 to max_skb_slots slots.
      
      To avoid reopening the security hole in XSA-39, a frontend sending a
      packet using more than max_skb_slots is considered malicious.
      
      The behavior of netback for a packet is thus:

          1 to 18 slots:               valid
          19 to max_skb_slots slots:   drop and respond with an error
          more than max_skb_slots:     fatal error

      max_skb_slots is configurable by the admin; the default value is 20.
      
      Also change variable name from "frags" to "slots" in netbk_count_requests.
      
      Please note that the RX path still has a dependency on MAX_SKB_FRAGS.
      This will be fixed in a separate patch.
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
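      A small sketch of the resulting policy, taken directly from the
      table above; max_skb_slots (default 20) is the only tunable.

          #include <stdio.h>

          enum slot_verdict { SLOT_VALID, SLOT_DROP, SLOT_FATAL };

          /* Classify a TX packet by how many ring slots it consumes. */
          static enum slot_verdict classify_slots(int slots,
                                                  int max_skb_slots)
          {
              if (slots <= 18)
                  return SLOT_VALID;  /* coalesced to fit 17 frags */
              if (slots <= max_skb_slots)
                  return SLOT_DROP;   /* drop, respond with an error */
              return SLOT_FATAL;      /* malicious: disable the VIF */
          }

          int main(void)
          {
              printf("%d %d %d\n",
                     classify_slots(18, 20),   /* SLOT_VALID */
                     classify_slots(19, 20),   /* SLOT_DROP  */
                     classify_slots(21, 20));  /* SLOT_FATAL */
              return 0;
          }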
    • xen-netfront: reduce gso_max_size to account for max TCP header · 9ecd1a75
      By Wei Liu
      The maximum packet size, including headers, that the netfront /
      netback wire format can handle is 65535 bytes. Reduce gso_max_size
      accordingly.

      Drop the skb and print a warning when skb->len > 65535. This can
      1) save the effort of sending a malformed packet to netback, and
      2) help spot misconfiguration of netfront in the future.
      Signed-off-by: Wei Liu <wei.liu2@citrix.com>
      Acked-by: Ian Campbell <ian.campbell@citrix.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
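      A hedged sketch of the size budget, assuming the 16-bit wire-format
      limit stated above and an illustrative MAX_TCP_HEADER-style constant
      for the worst-case header size (the exact kernel value depends on
      configuration).

          #include <stdbool.h>
          #include <stdint.h>

          #define XEN_NETIF_MAX_TX_SIZE  0xFFFF  /* 65535 wire limit */
          #define EXAMPLE_MAX_TCP_HEADER 224     /* illustrative value */

          /* gso_max_size must leave room for headers in the budget. */
          static uint32_t netfront_gso_max_size(void)
          {
              return XEN_NETIF_MAX_TX_SIZE - EXAMPLE_MAX_TCP_HEADER;
          }

          /* Oversized skbs are dropped, not handed to netback. */
          static bool must_drop(uint32_t skb_len)
          {
              return skb_len > XEN_NETIF_MAX_TX_SIZE;
          }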
  24. 19 Apr 2013, 1 commit
    • xen-block: implement indirect descriptors · 402b27f9
      By Roger Pau Monné
      Indirect descriptors introduce a new block operation
      (BLKIF_OP_INDIRECT) that passes grant references instead of segments
      in the request. These grant references are filled with arrays of
      blkif_request_segment_aligned; this way we can send more segments in
      a request.
      
      The proposed implementation sets the maximum number of indirect grefs
      (frames filled with blkif_request_segment_aligned) to 256 in the
      backend and 32 in the frontend. The value in the frontend has been
      chosen experimentally, and the backend value has been set to a sane
      value that allows expanding the maximum number of indirect descriptors
      in the frontend if needed.
      
      The migration code has changed from the previous implementation, in
      which we simply remapped the segments on the shared ring. Now the
      maximum number of segments allowed in a request can change depending
      on the backend, so we have to requeue all the requests in the ring and
      in the queue and split the bios in them if they are bigger than the
      new maximum number of segments.
      
      [v2: Fixed minor comments by Konrad.]
      [v1: Added padding to make the indirect request 64-bit aligned.
       Added some BUGs and comments; fixed the number of indirect pages in
       blkif_get_x86_{32/64}_req. Added a description of the indirect
       operation in blkif.h.]
      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
      [v3: Fixed mixed-up spaces and tabs]
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
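      A hedged sketch of the shape of an indirect request: the ring slot
      carries grant references to pages that are themselves packed with
      segment descriptors. The struct and constant names are illustrative
      stand-ins, not the exact blkif.h definitions.

          #include <stdint.h>

          typedef uint32_t grant_ref_t;

          /* Illustrative descriptor living inside an indirect page. */
          struct example_indirect_segment {
              grant_ref_t gref;   /* data page */
              uint8_t first_sect; /* first 512-byte sector used */
              uint8_t last_sect;  /* last 512-byte sector used */
          };

          #define EXAMPLE_PAGE_SIZE     4096
          #define EXAMPLE_SEGS_PER_PAGE (EXAMPLE_PAGE_SIZE / \
                  sizeof(struct example_indirect_segment))
          #define EXAMPLE_MAX_INDIRECT_PAGES 8  /* limited by slot size */

          /* Illustrative BLKIF_OP_INDIRECT-style ring request: instead
           * of inline segments it carries grants to pages that are full
           * of segment descriptors. */
          struct example_indirect_request {
              uint8_t  operation;      /* would be BLKIF_OP_INDIRECT */
              uint8_t  indirect_op;    /* the real op: read or write */
              uint16_t nr_segments;    /* total data segments */
              uint64_t id;             /* echoed in the response */
              uint64_t sector_number;
              grant_ref_t indirect_grefs[EXAMPLE_MAX_INDIRECT_PAGES];
          };

      With these illustrative sizes each indirect page holds
      EXAMPLE_SEGS_PER_PAGE (512) descriptors, which is how a handful of
      grants in one ring slot can describe hundreds of segments.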
  25. 22 Mar 2013, 1 commit
  26. 12 Mar 2013, 1 commit
    • xen/blkback: correctly respond to unknown, non-native requests · 0e367ae4
      By David Vrabel
      If the frontend is using a non-native protocol (e.g., a 64-bit
      frontend with a 32-bit backend) and it sent an unrecognized request,
      the request was not translated and the response would have an
      incorrect ID. This may cause the frontend driver to behave
      incorrectly or crash.
      
      Since the ID field in the request is always in the same place
      regardless of the request type, we can get the correct ID and make a
      valid response (which will report BLKIF_RSP_EOPNOTSUPP).
      
      This bug affected 64-bit SLES 11 guests when using a 32-bit backend.
      Such a guest does a BLKIF_OP_RESERVED_1 (BLKIF_OP_PACKET in the SLES
      source) and would crash in blkif_int() as the ID in the response
      would be invalid.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Cc: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
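      A hedged sketch of the fix's idea: because the id field sits at a
      fixed offset in every request variant, an unknown operation can
      still be answered with the right id and an "unsupported" status.
      The layouts below are illustrative, not the actual blkif wire
      format.

          #include <stdint.h>

          #define EXAMPLE_RSP_EOPNOTSUPP  (-2)  /* op not supported */

          /* Illustrative header common to every request variant: the
           * id field is at the same offset whatever the operation. */
          struct example_req_header {
              uint8_t  operation;
              uint8_t  _pad[7];
              uint64_t id;      /* must be echoed in the response */
          };

          struct example_response {
              uint64_t id;
              int16_t  status;
          };

          /* Respond to an unrecognized operation without translating
           * the rest of the request: read the fixed-offset id and
           * report "unsupported". */
          static struct example_response reply_unknown(const void *raw)
          {
              const struct example_req_header *h = raw;
              struct example_response rsp = {
                  .id = h->id,
                  .status = EXAMPLE_RSP_EOPNOTSUPP,
              };
              return rsp;
          }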
  27. 20 Feb 2013, 4 commits
  28. 18 Dec 2012, 1 commit
  29. 29 Nov 2012, 2 commits
  30. 27 Nov 2012, 1 commit