1. 10 Nov 2017, 2 commits
  2. 11 Aug 2017, 2 commits
  3. 25 Jun 2017, 1 commit
    • xen: allocate page for shared info page from low memory · a5d5f328
      Authored by Juergen Gross
      In an HVM guest the kernel today allocates the page for mapping
      the shared info structure via extend_brk(). This leads to a drop
      in performance: since the single shared info page is backed by
      hypervisor memory, the underlying EPT entry covering it has to be
      split up into 4kB entries.
      
      The issue was detected using the libmicro munmap test: unmapping
      8kB of memory was nearly twice as fast when no PV interfaces were
      active in the HVM guest.
      
      So instead of taking a page from memory which might be mapped via
      large EPT entries, use a page which is already mapped via a 4kB
      EPT entry: a page from the first 1MB of memory qualifies, as the
      video memory at 640kB disallows using larger EPT entries there
      (see the sketch below).
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
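      A minimal sketch of the idea, assuming the kernel's e820 and
      memblock helpers; the function name and loop are illustrative,
      not the literal code of a5d5f328:

        #include <linux/memblock.h>
        #include <linux/sizes.h>
        #include <asm/e820/api.h>

        /* Find a free, RAM-backed 4kB page below 1MB and reserve it,
         * so the shared info mapping reuses an existing 4kB EPT entry
         * instead of splitting a large one. */
        static phys_addr_t xen_find_low_shared_info_page(void)
        {
                phys_addr_t pa;

                /* Skip page 0; scan the first 1MB page by page. */
                for (pa = PAGE_SIZE; pa < SZ_1M; pa += PAGE_SIZE) {
                        if (e820__mapped_all(pa, pa + PAGE_SIZE,
                                             E820_TYPE_RAM) &&
                            !memblock_is_reserved(pa)) {
                                memblock_reserve(pa, PAGE_SIZE);
                                return pa;
                        }
                }
                return 0;       /* no suitable low page found */
        }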
  4. 13 Jun 2017, 3 commits
    • xen/vcpu: Handle xen_vcpu_setup() failure in hotplug · c9b5d98b
      Authored by Ankur Arora
      The hypercall VCPUOP_register_vcpu_info can fail. This failure is
      handled by making per_cpu(xen_vcpu, cpu) point to its shared_info
      slot, and by setting it to NULL for CPUs without one
      (cpu >= MAX_VIRT_CPUS).
      
      For PVH/PVHVM, this is not enough, because we also need to pull
      these VCPUs out of circulation.
      
      Fix for PVH/PVHVM: on registration failure in the cpuhp prepare
      callback (xen_cpu_up_prepare_hvm()), return an error to the cpuhp
      state-machine so it can fail the CPU init.
      
      Fix for PV: the registration happens before smp_init(), so in the
      failure case we clamp setup_max_cpus and limit the number of
      VCPUs that smp_init() will bring up to MAX_VIRT_CPUS. This is
      functionally correct, but the code becomes a bit simpler if we
      drop the explicit clamping: for VCPUs that don't have a valid
      xen_vcpu, fail the CPU init in the cpuhp prepare callback
      (xen_cpu_up_prepare_pv()), as sketched below.
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
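      A rough illustration of the prepare-callback fix; this is a
      sketch, assuming xen_vcpu_setup() returns a negative errno on
      registration failure (the real callbacks do additional setup):

        #include <linux/cpuhotplug.h>

        /* A non-zero return from a cpuhp "prepare" callback makes the
         * hotplug state machine abort onlining that CPU. */
        static int xen_cpu_up_prepare_sketch(unsigned int cpu)
        {
                int rc = xen_vcpu_setup(cpu);   /* <0 if registration failed */

                if (rc)
                        return rc;      /* fail the CPU init cleanly */
                return 0;
        }

        /* Registered once at boot, for example:
         * cpuhp_setup_state_nocalls(CPUHP_XEN_PREPARE, "x86/xen:prepare",
         *                           xen_cpu_up_prepare_sketch, NULL);
         */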
    • xen/pvh*: Support > 32 VCPUs at domain restore · 0b64ffb8
      Authored by Ankur Arora
      When Xen restores a PVHVM or PVH guest, its shared_info only
      holds up to 32 CPUs. The hypercall VCPUOP_register_vcpu_info lets
      us set up per-page areas for VCPUs, which means we can boot PVH*
      guests with more than 32 VCPUs. During restore the per-cpu
      structure is allocated freshly by the hypervisor (vcpu_info_mfn
      is set to INVALID_MFN) so that the newly restored guest can make
      a VCPUOP_register_vcpu_info hypercall.
      
      However, we end up triggering this condition in Xen:
      /* Run this command on yourself or on other offline VCPUS. */
       if ( (v != current) && !test_bit(_VPF_down, &v->pause_flags) )
      
      which means we are unable to set up the per-cpu VCPU structures
      for running VCPUs. The Linux PV code path makes this work by
      iterating over cpu_possible in xen_vcpu_restore() with:
      
       1) Is the target CPU up? (VCPUOP_is_up hypercall)
       2) If yes, VCPUOP_down to pause it.
       3) VCPUOP_register_vcpu_info.
       4) If it was down, VCPUOP_up to bring it back up.
      
      With Xen commit 192df6f9122d ("xen/x86: allow HVM guests to use
      hypercalls to bring up vCPUs") this is available for non-PV guests.
      As such, first check whether VCPUOP_is_up is actually available
      before attempting this dance.
      
      As most of this dance is already done in xen_vcpu_restore(),
      let's make it callable on PV, PVH and PVHVM (a sketch of the loop
      follows this entry).
      Based-on-patch-by: Konrad Wilk <konrad.wilk@oracle.com>
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
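      The four steps above translate into a loop roughly like the
      following; a minimal sketch using the standard VCPUOP hypercall
      wrappers, with xen_vcpu_setup() standing in for the registration
      step (the real xen_vcpu_restore() differs in detail):

        /* Xen only accepts VCPUOP_register_vcpu_info for the caller
         * itself or for offline VCPUs, hence the down/up bracket. */
        static void xen_vcpu_restore_sketch(void)
        {
                int cpu;

                for_each_possible_cpu(cpu) {
                        /* 1) Is the target VCPU up? */
                        bool was_up = HYPERVISOR_vcpu_op(VCPUOP_is_up,
                                                         cpu, NULL) > 0;

                        /* 2) If yes, pause it so Xen accepts the
                         * registration for this VCPU. */
                        if (was_up)
                                HYPERVISOR_vcpu_op(VCPUOP_down, cpu, NULL);

                        /* 3) (Re-)register the per-cpu vcpu_info area. */
                        xen_vcpu_setup(cpu);

                        /* 4) Bring it back up if we paused it. */
                        if (was_up)
                                HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);
                }
        }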
    • xen/vcpu: Simplify xen_vcpu related code · ad73fd59
      Authored by Ankur Arora
      Largely mechanical changes to aid unification of xen_vcpu_restore()
      logic for PV, PVH and PVHVM.
      
      xen_vcpu_setup(): the only change in logic is that clamp_max_cpus()
      is now handled inside the "if (!xen_have_vcpu_info_placement)" block.
      
      xen_vcpu_restore(): code movement from enlighten_pv.c to enlighten.c.
      
      xen_vcpu_info_reset(): pulls together all the code where xen_vcpu
      is reset to its default value (a sketch follows this entry).
      Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
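      For context, a sketch of what such a reset helper does, assuming
      the shared_info layout described in the commits above
      (illustrative, not the exact patch):

        /* Point xen_vcpu back at the default vcpu_info slot inside
         * shared_info, or NULL for CPUs beyond the 32 slots it holds. */
        static void xen_vcpu_info_reset_sketch(int cpu)
        {
                if (xen_vcpu_nr(cpu) < MAX_VIRT_CPUS)
                        per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->
                                vcpu_info[xen_vcpu_nr(cpu)];
                else
                        /* Callers detect the missing slot via NULL. */
                        per_cpu(xen_vcpu, cpu) = NULL;
        }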
  5. 03 May 2017, 1 commit
  6. 02 May 2017, 2 commits
    • xen: Revert commits da72ff5b and 72a9b186 · 84d582d2
      Authored by Boris Ostrovsky
      Recent discussion (http://marc.info/?l=xen-devel&m=149192184523741)
      established that commit 72a9b186 ("xen: Remove event channel
      notification through Xen PCI platform device") (and thus commit
      da72ff5b ("partially revert "xen: Remove event channel
      notification through Xen PCI platform device"")) are unnecessary and,
      in fact, prevent HVM guests from booting on Xen releases prior to 4.0.
      
      Therefore we revert both of those commits.
      
      The summary of that discussion is below:
      
        Here is the brief summary of the current situation:
      
        Before the offending commit (72a9b186):
      
        1) INTx does not work because of the reset_watches path.
        2) The reset_watches path is only taken if you have Xen > 4.0.
        3) The Linux kernel by default will use vector injection if the
           hypervisor supports it. So even though INTx does not work,
           nobody running the kernel with Xen > 4.0 would notice, unless
           they explicitly disabled this feature either in the kernel or
           in Xen (and it can only be disabled by modifying the code;
           there is no user-supported way to do it).
      
        After the offending commit (+ partial revert):
      
        1) INTx is no longer supported for HVM (only for PV guests).
        2) The kernel will not boot any HVM guest on Xen < 4.0, which
           lacks vector injection support, since the only other mode
           supported is INTx.
      
        So based on this summary, I think we were in a much better
        position, from a user's point of view, before commit 72a9b186.
      Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
    • x86/xen: split off enlighten_hvm.c · 98f2a47a
      Authored by Vitaly Kuznetsov
      Move PVHVM-related code to enlighten_hvm.c. Three functions,
      xen_cpuhp_setup(), xen_reboot() and xen_emergency_restart(), are
      shared, so drop the static qualifier from them. These functions
      will move to common code once it is split off from enlighten.c
      (see the schematic header below).
      Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
      Reviewed-by: Juergen Gross <jgross@suse.com>
      Signed-off-by: Juergen Gross <jgross@suse.com>
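      The sharing step amounts to replacing the static qualifiers with
      declarations in a header both files include; a schematic example
      with a hypothetical header name and illustrative signatures:

        /* xen-internal.h (hypothetical name): formerly-static
         * functions from enlighten.c, now also callable from
         * enlighten_hvm.c. Signatures are illustrative. */
        #ifndef XEN_INTERNAL_H
        #define XEN_INTERNAL_H

        int xen_cpuhp_setup(void);
        void xen_reboot(int reason);
        void xen_emergency_restart(void);

        #endif /* XEN_INTERNAL_H */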