1. 15 Jul, 2014: 1 commit
    • xen: Introduce 'xen_nopv' to disable PV extensions for HVM guests. · 8d693b91
      Committed by Konrad Rzeszutek Wilk
      By default when CONFIG_XEN and CONFIG_XEN_PVHVM kernels are
      run, they will enable the PV extensions (drivers, interrupts, timers,
      etc) - which is the best option for the majority of use cases.
      
      However, in some cases (kexec not fully working, benchmarking)
      we want to disable Xen PV extensions. As such introduce the
      'xen_nopv' parameter that will do it.
      
      This parameter is intended only for HVM guests, as Xen PV
      guests MUST boot with PV extensions. If 'xen_nopv' is passed
      to a Xen PV guest it is simply ignored.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      ---
      [v2: s/off/xen_nopv/ per Boris Ostrovsky recommendation.]
      [v3: Add Reviewed-by]
      [v4: Clarify that this is only for HVM guests]
      8d693b91
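      A minimal sketch of how a boot parameter like this is typically wired
      up with early_param(); the variable name and the final check are
      illustrative rather than the exact code of this commit:

          #include <linux/init.h>
          #include <linux/types.h>

          static bool xen_nopv;  /* set when "xen_nopv" is on the command line */

          static int __init xen_parse_nopv(char *arg)
          {
                  /* Presence of the parameter is enough; no value is parsed. */
                  xen_nopv = true;
                  return 0;
          }
          early_param("xen_nopv", xen_parse_nopv);

          /* The HVM platform-detection path can then bail out early when
           * xen_nopv is set, so no PV drivers, interrupts or timers are
           * registered:
           *
           *      if (xen_nopv)
           *              return 0;
           */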
  2. 05 Jun, 2014: 1 commit
  3. 15 May, 2014: 1 commit
  4. 06 May, 2014: 1 commit
  5. 04 Feb, 2014: 1 commit
  6. 22 Jan, 2014: 1 commit
    • xen/pvh: Set X86_CR0_WP and others in CR0 (v2) · c9f6e997
      Committed by Roger Pau Monne
      Otherwise some user-space applications that use 'clone' with
      CLONE_CHILD_SETTID | CLONE_CHILD_CLEARTID
      end up hitting an assert in glibc, manifested by:
      
      general protection ip:7f80720d364c sp:7fff98fd8a80 error:0 in
      libc-2.13.so[7f807209e000+180000]
      
      This is due to the nature of said operations, which set and clear
      the child TID.  "In the successful one I can see that the page table of
      the parent process has been updated successfully to use a
      different physical page, so the write of the tid on
      that page only affects the child...
      
      On the other hand, in the failed case, the write seems to happen before
      the copy of the original page is done, so both the parent and the child
      end up with the same value (because the parent copies the page after
      the write of the child tid has already happened)."
      (Roger's analysis). The underlying cause is Xen commit
      51e2cac257ec8b4080d89f0855c498cbbd76a5e5
      "x86/pvh: set only minimal cr0 and cr4 flags in order to use paging",
      which removed CR0_WP, so the COW machinery of the Linux kernel was
      not operating properly.
      
      While doing that, also update the rest of the CR0 flags to be in line
      with what a baremetal Linux kernel would set them to.

      'secondary_startup_64' (baremetal Linux) sets:
      
      X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | X86_CR0_NE | X86_CR0_WP |
      X86_CR0_AM | X86_CR0_PG
      
      The hypervisor for HVM-type guests (of which PVH is a variant) sets:
      X86_CR0_PE | X86_CR0_ET | X86_CR0_TS
      For PVH it specifically sets:
      X86_CR0_PG

      This means we need to set the rest: X86_CR0_MP | X86_CR0_NE |
      X86_CR0_WP | X86_CR0_AM to have full parity.
      Signed-off-by: Roger Pau Monne <roger.pau@citrix.com>
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Took out the cr4 writes to be a separate patch]
      [v2: 0-DAY kernel found xen_setup_gdt to be missing a static]
      c9f6e997
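      A hedged sketch of the fix-up described above, using the kernel's
      read_cr0()/write_cr0() helpers; the function name is illustrative:

          #include <asm/processor-flags.h>
          #include <asm/special_insns.h>

          /* Bring CR0 in line with what secondary_startup_64 sets on bare metal. */
          static void xen_pvh_set_cr_flags(void)
          {
                  unsigned long cr0 = read_cr0();

                  /* The hypervisor already set PE/ET/PG for us; add the rest. */
                  cr0 |= X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM;
                  write_cr0(cr0);
          }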
  7. 07 Jan, 2014: 1 commit
  8. 06 Jan, 2014: 5 commits
    • xen/pvh: Piggyback on PVHVM for event channels (v2) · 2771374d
      Committed by Mukesh Rathor
      PVH is a PV guest with a twist - there are certain things
      that work in it like HVM and some like PV. There is
      a similar mode - PVHVM where we run in HVM mode with
      PV code enabled - and this patch explores that.
      
      The most notable PV interfaces are the XenBus and event channels.
      
      We will piggyback on how the event channel mechanism is
      used in PVHVM - that is we want the normal native IRQ mechanism
      and we will install a vector (hvm callback) for which we
      will call the event channel mechanism.
      
      This means that, from a pvops perspective, we can use
      native_irq_ops instead of the Xen PV specific ones. In the
      future we could also support pirq_eoi_map, but that is
      a feature request that can be shared with PVHVM.
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      2771374d
    • xen/pvh: Secondary VCPU bringup (non-bootup CPUs) · 5840c84b
      Committed by Mukesh Rathor
      The VCPU bringup protocol follows the PV one with certain twists.
      From xen/include/public/arch-x86/xen.h:
      
      Also note that when calling DOMCTL_setvcpucontext and VCPU_initialise
      for HVM and PVH guests, not all information in this structure is updated:
      
       - For HVM guests, the structures read include: fpu_ctxt (if
       VGCT_I387_VALID is set), flags, user_regs, debugreg[*]
      
       - PVH guests are the same as HVM guests, but additionally use ctrlreg[3] to
       set cr3. All other fields not used should be set to 0.
      
      This is what we do. We piggyback on 'xen_setup_gdt', but modify it a
      bit: we need to call 'load_percpu_segment' so that 'switch_to_new_gdt'
      can load the per-cpu data structures. It has no effect on VCPU0.

      We also use the %rdi register to pass in the CPU number, so that when
      we boot up a new CPU, cpu_bringup_and_idle receives the CPU number as
      its first parameter (via %rdi for 64-bit).
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      5840c84b
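      A rough sketch, following the ABI text quoted above, of what the PVH
      side of VCPU initialisation looks like: only the fields the hypervisor
      reads are filled in, cr3 goes through ctrlreg[3], and the CPU number
      rides in %rdi. Field usage and helpers here are illustrative, not a
      copy of the patch:

          #include <linux/slab.h>
          #include <xen/interface/vcpu.h>
          #include <asm/xen/hypercall.h>
          #include <asm/xen/page.h>

          /* Illustrative only: bring up secondary VCPU 'cpu' on a PVH guest. */
          static int pvh_vcpu_initialise(int cpu)
          {
                  struct vcpu_guest_context *ctxt;
                  int rc;

                  ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);
                  if (!ctxt)
                          return -ENOMEM;

                  ctxt->flags = VGCF_IN_KERNEL;
                  ctxt->user_regs.rip = (unsigned long)cpu_bringup_and_idle;
                  ctxt->user_regs.rdi = cpu;  /* CPU number as first argument */
                  ctxt->ctrlreg[3] = xen_pfn_to_cr3(virt_to_mfn(swapper_pg_dir));
                  /* All other fields stay zero, as required for PVH. */

                  rc = HYPERVISOR_vcpu_op(VCPUOP_initialise, cpu, ctxt);
                  if (!rc)
                          rc = HYPERVISOR_vcpu_op(VCPUOP_up, cpu, NULL);

                  kfree(ctxt);
                  return rc;
          }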
    • xen/pvh: Load GDT/GS in early PV bootup code for BSP. · 8d656bbe
      Committed by Mukesh Rathor
      During early bootup we start life using the Xen provided
      GDT, which means that we are running with %cs segment set
      to FLAT_KERNEL_CS (FLAT_RING3_CS64 0xe033, GDT index 261).
      
      But for PVH we want to use the HVM-type mechanism for
      segment operations. As such we need to switch to the HVM
      one and also reload ourselves with __KERNEL_CS:eip
      to run in the proper GDT and segment.
      
      For HVM this is usually done in 'secondary_startup_64' in
      (head_64.S) but since we are not taking that bootup
      path (we start in PV - xen_start_kernel) we need to do
      that in the early PV bootup paths.
      
      For good measure we also zero out the %fs, %ds, and %es
      (not strictly needed as Xen has already cleared them
      for us). The %gs is loaded by 'switch_to_new_gdt'.
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      8d656bbe
    • xen/pvh: Don't setup P2M tree. · 696fd7c5
      Committed by Konrad Rzeszutek Wilk
      P2M is not available for PVH. Fortunately for us the P2M code
      already has most of the support for auto-xlat guests, thanks to
      commit 3d24bbd7
      "grant-table: call set_phys_to_machine after mapping grant refs",
      which "introduces set_phys_to_machine calls for auto_translated guests
      (even on x86) in gnttab_map_refs and gnttab_unmap_refs.
      translated by swiotlb-xen... ", so we don't need to change much.

      With the above-mentioned commit "you'll get set_phys_to_machine calls
      from gnttab_map_refs and gnttab_unmap_refs but PVH guests won't do
      anything with them" (Stefano Stabellini), which is OK - we want
      them to be NOPs.
      
      This is because we assume that an "IOMMU is always present on the
      platform and Xen is going to make the appropriate IOMMU pagetable
      changes in the hypercall implementation of GNTTABOP_map_grant_ref
      and GNTTABOP_unmap_grant_ref, then everything should be transparent
      from PVH privileged point of view and DMA transfers involving
      foreign pages keep working with no issues.

      Otherwise we would need a P2M (and an M2P) for PVH privileged to
      track these foreign pages .. (see arch/arm/xen/p2m.c)."
      (Stefano Stabellini).
      
      We still have to inhibit the building of the P2M tree.
      That had been done in the past by not calling
      xen_build_dynamic_phys_to_machine (which sets up the P2M tree
      and gives us virtual addresses to access it). But we were missing
      a check in xen_build_mfn_list_list - which would continue to set up
      the P2M tree and blow up when trying to get the virtual
      address of p2m_missing (which would have been set up by
      xen_build_dynamic_phys_to_machine).

      Hence a check is needed to not call xen_build_mfn_list_list when
      running in auto-xlat mode.
      
      Instead of replicating the check for auto-xlat in enlighten.c,
      do it in the p2m.c code. The reason is that xen_build_mfn_list_list
      is also called in xen_arch_post_suspend without any check for
      auto-xlat, so for PVH or PV with auto-xlat we would needlessly
      allocate space for a P2M tree.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      696fd7c5
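      The shape of the added check is simple; a sketch of the guard
      described above, placed in the p2m.c entry point rather than in
      enlighten.c:

          #include <xen/features.h>

          void xen_build_mfn_list_list(void)
          {
                  /* PVH / auto-translated guests have no P2M tree to build. */
                  if (xen_feature(XENFEAT_auto_translated_physmap))
                          return;

                  /* ... existing MFN list-list / P2M tree construction ... */
          }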
    • xen/pvh: Early bootup changes in PV code (v4). · d285d683
      Committed by Mukesh Rathor
      We don't use the filtering that 'xen_cpuid' is doing
      because the hypervisor treats 'XEN_EMULATE_PREFIX' as
      an invalid instruction. This means that all of the filtering
      will have to be done in the hypervisor/toolstack.
      
      Without the filtering we expose to the guest:
      
       - cpu topology (sockets, cores, etc);
       - the APERF (which the generic scheduler likes to
          use), see  5e626254
          "xen/setup: filter APERFMPERF cpuid feature out"
       - and the inability to figure out whether MWAIT_LEAF
         should be exposed or not. See
         df88b2d9
         "xen/enlighten: Disable MWAIT_LEAF so that acpi-pad won't be loaded."
       - x2apic, see  4ea9b9ac
         "xen: mask x2APIC feature in PV"
      
      We also check for vector callback early on, as it is a required
      feature. PVH also runs at default kernel IOPL.
      
      Finally, pure PV settings are moved to a separate function that is
      only called for pure PV, i.e. PV with pvmmu. They are also guarded
      by #ifdef CONFIG_XEN_PVMMU.
      Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      d285d683
  9. 10 Sep, 2013: 1 commit
    • xen/spinlock: Fix locking path engaging too soon under PVHVM. · 1fb3a8b2
      Committed by Konrad Rzeszutek Wilk
      xen_lock_spinning has a check for the kicker interrupt
      and if it is not initialized it will spin normally (not enter
      the slowpath).

      But in the PVHVM case we would initialize the kicker interrupt
      before the CPU came online. This meant that if the booting
      CPU used a spinlock it would enter the slowpath and block
      forever. Forever, because during bootup the spinlock would be
      taken _before_ the CPU sets itself to be online (more on this
      further down), and we would poll on the event channel forever.
      
      The bootup CPU (see commit fc78d343
      "xen/smp: initialize IPI vectors before marking CPU online"
      for details) and the CPU that started the bootup consult
      the cpu_online_mask to determine whether the booting CPU should
      get an IPI. The booting CPU has to set itself in this mask via:
      
        set_cpu_online(smp_processor_id(), true);
      
      However, if the spinlock is taken before this (and it is) and
      it polls on an event channel - it will never be woken up as
      the kernel will never send an IPI to an offline CPU.
      
      Note that the PVHVM logic for sending IPIs uses the HVM
      path, which has numerous checks using the cpu_online_mask
      and cpu_active_mask. See the above-mentioned git commit for details.
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      1fb3a8b2
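      The check referred to in the first paragraph looks roughly like this
      (a sketch; the per-cpu variable name follows arch/x86/xen/spinlock.c
      of that era and is hedged from memory):

          static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
          {
                  int irq = __this_cpu_read(lock_kicker_irq);

                  /*
                   * If the kicker interrupt is not initialized for this CPU,
                   * fall back to plain spinning instead of the slowpath, which
                   * would otherwise poll an event channel that can never be
                   * kicked while the CPU is not yet in cpu_online_mask.
                   */
                  if (irq == -1)
                          return;

                  /* ... slowpath: block on the per-cpu event channel ... */
          }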
  10. 21 Aug, 2013: 1 commit
    • xen/pvhvm: Initialize xen panic handler for PVHVM guests · 669b0ae9
      Committed by Vaughan Cao
      The kernel uses callbacks linked into panic_notifier_list to notify
      others when a panic happens:
      NORET_TYPE void panic(const char * fmt, ...){
          ...
          atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
      }
      The Xen panic callback then calls xen_reboot(SHUTDOWN_crash) to
      send out an event with reason code SHUTDOWN_crash.
      
      xen_panic_handler_init() is defined to register on panic_notifier_list,
      but we only call it from xen_arch_setup, which is only called for PV;
      this patch adds the call for PVHVM.

      Without this patch, setting 'on_crash=coredump-restart' in a PVHVM guest
      config file won't lead to a vmcore being generated when the guest panics.
      This can be reproduced with 'echo c > /proc/sysrq-trigger'.
      Signed-off-by: Vaughan Cao <vaughan.cao@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Acked-by: Joe Jin <joe.jin@oracle.com>
      669b0ae9
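      For reference, the registration described above follows the standard
      panic_notifier_list pattern; a sketch using the names the commit
      message mentions:

          #include <linux/notifier.h>
          #include <linux/kernel.h>
          #include <xen/interface/sched.h>

          static int xen_panic_event(struct notifier_block *this,
                                     unsigned long event, void *ptr)
          {
                  /* Tell the hypervisor the guest has crashed. */
                  xen_reboot(SHUTDOWN_crash);
                  return NOTIFY_DONE;
          }

          static struct notifier_block xen_panic_block = {
                  .notifier_call = xen_panic_event,
          };

          int xen_panic_handler_init(void)
          {
                  atomic_notifier_chain_register(&panic_notifier_list,
                                                 &xen_panic_block);
                  return 0;
          }

      The fix is then to call xen_panic_handler_init() from the PVHVM init
      path as well, not only from xen_arch_setup() on PV.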
  11. 09 Aug, 2013: 1 commit
    • xen: Support 64-bit PV guest receiving NMIs · 6efa20e4
      Committed by Konrad Rzeszutek Wilk
      This is based on a patch that Zhenzhong Duan had sent, which
      was missing some of the remaining pieces. The kernel has the
      logic to handle Xen-type exceptions using the paravirt interface
      in the assembler code (see PARAVIRT_ADJUST_EXCEPTION_FRAME -
      pv_irq_ops.adjust_exception_frame and INTERRUPT_RETURN -
      pv_cpu_ops.iret).
      
      That means the nmi handler (and other exception handlers) use
      the hypervisor iret.
      
      The other changes necessary for this are to translate the
      NMI_VECTOR to one of the entries in the ipi_vector and to make
      xen_send_IPI_mask_allbutself use different events.
      
      Fortunately for us commit 1db01b49
      (xen: Clean up apic ipi interface) implemented this and we piggyback
      on the cleanup such that the apic IPI interface will pass the right
      vector value for NMI.
      
      With this patch we can trigger NMIs within a PV guest (only tested
      x86_64).
      
      For this to work with normal PV guests (not initial domain)
      we need the domain to be able to use the APIC ops - they are
      already implemented to use the Xen event channels. For that
      to be turned on in a PV domU we need to remove the masking
      of X86_FEATURE_APIC.
      
      Incidentally that means kgdb will also now work within
      a PV guest without using the 'nokgdbroundup' workaround.
      
      Note that the 32-bit version is different and this patch
      does not enable that.
      
      CC: Lisa Nguyen <lisa@xenapiadmin.com>
      CC: Ben Guthro <benjamin.guthro@citrix.com>
      CC: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      [v1: Fixed up per David Vrabel comments]
      Reviewed-by: Ben Guthro <benjamin.guthro@citrix.com>
      Reviewed-by: David Vrabel <david.vrabel@citrix.com>
      6efa20e4
  12. 05 Aug, 2013: 1 commit
    • x86: Correctly detect hypervisor · 9df56f19
      Committed by Jason Wang
      We try to handle the hypervisor compatibility mode by detecting hypervisors
      in a specific order. This is not robust, since hypervisors may implement
      each other's features.

      This patch tries to handle this situation by always choosing the last one in the
      CPUID leaves. This is done by letting .detect() return a priority instead of
      true/false and just re-using the CPUID leaf where the signature was found as
      the priority (or 1 if it was found by DMI). Then we can just pick the hypervisor
      with the highest priority. Other, more sophisticated detection methods could
      also be implemented on top.
      
      Suggested by H. Peter Anvin and Paolo Bonzini.
      Acked-by: K. Y. Srinivasan <kys@microsoft.com>
      Cc: Haiyang Zhang <haiyangz@microsoft.com>
      Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Cc: Jeremy Fitzhardinge <jeremy@goop.org>
      Cc: Doug Covelli <dcovelli@vmware.com>
      Cc: Borislav Petkov <bp@suse.de>
      Cc: Dan Hecht <dhecht@vmware.com>
      Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
      Cc: Marcelo Tosatti <mtosatti@redhat.com>
      Cc: Gleb Natapov <gleb@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: Frederic Weisbecker <fweisbec@gmail.com>
      Signed-off-by: Jason Wang <jasowang@redhat.com>
      Link: http://lkml.kernel.org/r/1374742475-2485-4-git-send-email-jasowang@redhat.com
      Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
      9df56f19
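      A condensed sketch of the priority-based selection the patch
      describes; the structure names follow arch/x86/kernel/cpu/hypervisor.c,
      though the details here are reconstructed and hedged:

          static const struct hypervisor_x86 * __init detect_hypervisor_vendor(void)
          {
                  const struct hypervisor_x86 *h = NULL, * const *p;
                  uint32_t pri, max_pri = 0;

                  for (p = hypervisors; p < hypervisors + ARRAY_SIZE(hypervisors); p++) {
                          /*
                           * .detect() returns a priority instead of a bool: the
                           * CPUID leaf where the signature was found, or 1 if the
                           * hypervisor was detected via DMI.
                           */
                          pri = (*p)->detect();
                          if (pri != 0 && pri > max_pri) {
                                  max_pri = pri;
                                  h = *p;
                          }
                  }

                  return h;  /* the highest-priority hypervisor wins */
          }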
  13. 15 Jul, 2013: 1 commit
    • x86: delete __cpuinit usage from all x86 files · 148f9bb8
      Committed by Paul Gortmaker
      The __cpuinit type of throwaway sections might have made sense
      some time ago when RAM was more constrained, but now the savings
      do not offset the cost and complications.  For example, the fix in
      commit 5e427ec2 ("x86: Fix bit corruption at CPU resume time")
      is a good example of the nasty type of bugs that can be created
      with improper use of the various __init prefixes.
      
      After a discussion on LKML[1] it was decided that cpuinit should go
      the way of devinit and be phased out.  Once all the users are gone,
      we can then finally remove the macros themselves from linux/init.h.
      
      Note that some harmless section mismatch warnings may result, since
      notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
      and are flagged as __cpuinit -- so if we remove the __cpuinit from
      arch-specific callers, we will also get section mismatch warnings.
      As an intermediate step, we intend to turn the linux/init.h cpuinit
      content into no-ops as early as possible, since that will get rid
      of these warnings.  In any case, they are temporary and harmless.
      
      This removes all the arch/x86 uses of the __cpuinit macros from
      all C files.  x86 only had the one __CPUINIT used in assembly files,
      and it wasn't paired off with a .previous or a __FINIT, so we can
      delete it directly w/o any corresponding additional change there.
      
      [1] https://lkml.org/lkml/2013/5/20/589
      
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: x86@kernel.org
      Acked-by: Ingo Molnar <mingo@kernel.org>
      Acked-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: H. Peter Anvin <hpa@linux.intel.com>
      Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
      148f9bb8
  14. 07 Jun, 2013: 1 commit
  15. 08 May, 2013: 2 commits
    • xen: mask x2APIC feature in PV · 4ea9b9ac
      Committed by Zhenzhong Duan
      On an x2apic-enabled PVM, doing sysrq+l gives a NULL pointer dereference as below.
      
          SysRq : Show backtrace of all active CPUs
          BUG: unable to handle kernel NULL pointer dereference at           (null)
          IP: [<ffffffff8125e3cb>] memcpy+0xb/0x120
          Call Trace:
           [<ffffffff81039633>] ? __x2apic_send_IPI_mask+0x73/0x160
           [<ffffffff8103973e>] x2apic_send_IPI_all+0x1e/0x20
           [<ffffffff8103498c>] arch_trigger_all_cpu_backtrace+0x6c/0xb0
           [<ffffffff81501be4>] ? _raw_spin_lock_irqsave+0x34/0x50
           [<ffffffff8131654e>] sysrq_handle_showallcpus+0xe/0x10
           [<ffffffff8131616d>] __handle_sysrq+0x7d/0x140
           [<ffffffff81316230>] ? __handle_sysrq+0x140/0x140
           [<ffffffff81316287>] write_sysrq_trigger+0x57/0x60
           [<ffffffff811ca996>] proc_reg_write+0x86/0xc0
           [<ffffffff8116dd8e>] vfs_write+0xce/0x190
           [<ffffffff8116e3e5>] sys_write+0x55/0x90
           [<ffffffff8150a242>] system_call_fastpath+0x16/0x1b
      
      That's because apic points to apic_x2apic_cluster or apic_x2apic_phys,
      but basic elements like the cpumask aren't initialized.

      Mask the x2APIC feature in PVM to avoid the overwrite of the apic
      pointer; commit message updated per Konrad's suggestion.
      Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
      Tested-by: Tamon Shiose <tamon.shiose@oracle.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      4ea9b9ac
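      The masking itself is a small change to the CPUID leaf-1 ECX mask
      that Xen PV applies; a hedged sketch based on xen_init_cpuid_mask():

          static void __init xen_init_cpuid_mask(void)
          {
                  /* ... existing masking (MWAIT, etc.) ... */

                  /*
                   * Hide x2APIC from PV guests: otherwise the apic pointer gets
                   * switched to apic_x2apic_cluster/apic_x2apic_phys without the
                   * underlying cpumask setup, causing the NULL dereference above.
                   */
                  cpuid_leaf1_ecx_mask &= ~(1 << (X86_FEATURE_X2APIC % 32));
          }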
    • xen/smp/pvhvm: Don't point per_cpu(xen_vcpu, 33 and larger) to shared_info · d5b17dbf
      Committed by Konrad Rzeszutek Wilk
      As it will point to some data, but not event channel data (the
      shared_info structure has an array limited to 32 entries).

      This means that for PVHVM guests with more than 32 VCPUs, without
      the use of VCPUOP_register_vcpu_info any interrupts to VCPUs
      numbered higher than 32 would have gone unnoticed during early bootup.
      
      That is OK, as during early bootup, in smp_init we end up calling
      the hotplug mechanism (xen_hvm_cpu_notify) which makes the
      VCPUOP_register_vcpu_info call for all VCPUs and we can receive
      interrupts on VCPUs 33 and further.
      
      This is just a cleanup.
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      d5b17dbf
  16. 07 May, 2013: 1 commit
  17. 06 May, 2013: 1 commit
    • xen/vcpu/pvhvm: Fix vcpu hotplugging hanging. · 7f1fc268
      Committed by Konrad Rzeszutek Wilk
      If a user did:
      
      	echo 0 > /sys/devices/system/cpu/cpu1/online
      	echo 1 > /sys/devices/system/cpu/cpu1/online
      
      we would (this is a build with DEBUG enabled) get to:
      smpboot: ++++++++++++++++++++=_---CPU UP  1
      .. snip..
      smpboot: Stack at about ffff880074c0ff44
      smpboot: CPU1: has booted.
      
      and hang. The RCU mechanism would kick in and try to IPI CPU1,
      but the IPIs (and all other interrupts) would never arrive at
      CPU1. At first glance, at least. A bit of digging in the hypervisor
      trace shows (using xenanalyze):
      
      [vla] d4v1 vec 243 injecting
         0.043163027 --|x d4v1 intr_window vec 243 src 5(vector) intr f3
      ]  0.043163639 --|x d4v1 vmentry cycles 1468
      ]  0.043164913 --|x d4v1 vmexit exit_reason PENDING_INTERRUPT eip ffffffff81673254
         0.043164913 --|x d4v1 inj_virq vec 243  real
        [vla] d4v1 vec 243 injecting
         0.043164913 --|x d4v1 intr_window vec 243 src 5(vector) intr f3
      ]  0.043165526 --|x d4v1 vmentry cycles 1472
      ]  0.043166800 --|x d4v1 vmexit exit_reason PENDING_INTERRUPT eip ffffffff81673254
         0.043166800 --|x d4v1 inj_virq vec 243  real
        [vla] d4v1 vec 243 injecting
      
      there is a pending event (subsequent debugging shows it is the IPI
      from the VCPU0 when smpboot.c on VCPU1 has done
      "set_cpu_online(smp_processor_id(), true)") and the guest VCPU1 is
      interrupted with the callback IPI (0xf3 aka 243) which ends up calling
      __xen_evtchn_do_upcall.
      
      The __xen_evtchn_do_upcall seems to do *something* but not acknowledge
      the pending events. And the moment the guest does a 'cli' (that is the
      ffffffff81673254 in the log above) the hypervisor is invoked again to
      inject the IPI (0xf3) to tell the guest it has pending interrupts.
      This repeats itself forever.
      
      The culprit was the per_cpu(xen_vcpu, cpu) pointer. At bootup
      we set each per_cpu(xen_vcpu, cpu) to point to
      shared_info->vcpu_info[vcpu], but later on use VCPUOP_register_vcpu_info
      to register per-CPU structures (xen_vcpu_setup).
      This is done to allow events for more than 32 VCPUs and for
      performance optimization reasons.
      
      When the user performs the VCPU hotplug we end up calling
      xen_vcpu_setup once more. We make the hypercall, which returns
      -EINVAL as it does not allow multiple registration calls (and
      has already re-assigned where the events are being set). We pick
      the fallback case and set per_cpu(xen_vcpu, cpu) to point to
      shared_info->vcpu_info[vcpu] (which is a good fallback during bootup).
      However the hypervisor is still setting events in the registered
      per-cpu structure (per_cpu(xen_vcpu_info, cpu)).
      
      As such when the events are set by the hypervisor (such as timer one),
      and when we iterate in __xen_evtchn_do_upcall we end up reading stale
      events from the shared_info->vcpu_info[vcpu] instead of the
      per_cpu(xen_vcpu_info, cpu) structures. Hence we never acknowledge the
      events that the hypervisor has set and the hypervisor keeps on reminding
      us to ack the events which we never do.
      
      The fix is simple: the second time xen_vcpu_setup is called, don't
      overwrite per_cpu(xen_vcpu, cpu) if it already points to
      per_cpu(xen_vcpu_info).
      Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
      CC: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      7f1fc268
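      The fix, roughly, as a sketch of the early return added to
      xen_vcpu_setup(); the exact guard condition is hedged from the
      description above:

          void xen_vcpu_setup(int cpu)
          {
                  /*
                   * On the second (hotplug) call the registration hypercall
                   * would fail with -EINVAL and we would fall back to
                   * shared_info->vcpu_info[cpu], while the hypervisor keeps
                   * delivering events to the already registered
                   * per_cpu(xen_vcpu_info, cpu). Keep the existing
                   * registration instead of overwriting it.
                   */
                  if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
                          return;

                  /* ... VCPUOP_register_vcpu_info call and fallback path ... */
          }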
  18. 17 Apr, 2013: 2 commits
    • xen/time: Fix kasprintf splat when allocating timer%d IRQ line. · 7918c92a
      Committed by Konrad Rzeszutek Wilk
      When we online the CPU, we get this splat:
      
      smpboot: Booting Node 0 Processor 1 APIC 0x2
      installing Xen timer for CPU 1
      BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/mm/slab.c:3179
      in_atomic(): 1, irqs_disabled(): 0, pid: 0, name: swapper/1
      Pid: 0, comm: swapper/1 Not tainted 3.9.0-rc6upstream-00001-g3884fad #1
      Call Trace:
       [<ffffffff810c1fea>] __might_sleep+0xda/0x100
       [<ffffffff81194617>] __kmalloc_track_caller+0x1e7/0x2c0
       [<ffffffff81303758>] ? kasprintf+0x38/0x40
       [<ffffffff813036eb>] kvasprintf+0x5b/0x90
       [<ffffffff81303758>] kasprintf+0x38/0x40
       [<ffffffff81044510>] xen_setup_timer+0x30/0xb0
       [<ffffffff810445af>] xen_hvm_setup_cpu_clockevents+0x1f/0x30
       [<ffffffff81666d0a>] start_secondary+0x19c/0x1a8
      
      The solution is to use kasprintf in the CPU hotplug path
      that 'online's the CPU. That is, do it in xen_hvm_cpu_notify,
      and remove the call from xen_hvm_setup_cpu_clockevents.

      Unfortunately the latter is not a good idea, as the bootup path
      does not use xen_hvm_cpu_notify, so we would end up never allocating
      the timer%d interrupt lines when booting. As such, add a check for
      atomic context and continue in that case.
      
      CC: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      7918c92a
    • x86/xen: populate boot_params with EDD data · 96f28bc6
      Committed by David Vrabel
      During early setup of a dom0 kernel, populate boot_params with the
      Enhanced Disk Drive (EDD) and MBR signature data.  This makes
      information on the BIOS boot device available in /sys/firmware/edd/.
      Signed-off-by: David Vrabel <david.vrabel@citrix.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      96f28bc6
  19. 12 Apr, 2013: 1 commit
  20. 28 Feb, 2013: 1 commit
    • xen/pat: Disable PAT using pat_enabled value. · c79c4982
      Committed by Konrad Rzeszutek Wilk
      The git commit 8eaffa67
      (xen/pat: Disable PAT support for now) explains in detail why
      we want to disable PAT for right now. However that
      change was not enough: we should have also cleared
      the pat_enabled value. Otherwise we end up with:
      
      mmap-example:3481 map pfn expected mapping type write-back for
      [mem 0x00010000-0x00010fff], got uncached-minus
       ------------[ cut here ]------------
      WARNING: at /build/buildd/linux-3.8.0/arch/x86/mm/pat.c:774 untrack_pfn+0xb8/0xd0()
      mem 0x00010000-0x00010fff], got uncached-minus
      ------------[ cut here ]------------
      WARNING: at /build/buildd/linux-3.8.0/arch/x86/mm/pat.c:774
      untrack_pfn+0xb8/0xd0()
      ...
      Pid: 3481, comm: mmap-example Tainted: GF 3.8.0-6-generic #13-Ubuntu
      Call Trace:
       [<ffffffff8105879f>] warn_slowpath_common+0x7f/0xc0
       [<ffffffff810587fa>] warn_slowpath_null+0x1a/0x20
       [<ffffffff8104bcc8>] untrack_pfn+0xb8/0xd0
       [<ffffffff81156c1c>] unmap_single_vma+0xac/0x100
       [<ffffffff81157459>] unmap_vmas+0x49/0x90
       [<ffffffff8115f808>] exit_mmap+0x98/0x170
       [<ffffffff810559a4>] mmput+0x64/0x100
       [<ffffffff810560f5>] dup_mm+0x445/0x660
       [<ffffffff81056d9f>] copy_process.part.22+0xa5f/0x1510
       [<ffffffff81057931>] do_fork+0x91/0x350
       [<ffffffff81057c76>] sys_clone+0x16/0x20
       [<ffffffff816ccbf9>] stub_clone+0x69/0x90
       [<ffffffff816cc89d>] ? system_call_fastpath+0x1a/0x1f
      ---[ end trace 4918cdd0a4c9fea4 ]---
      
      (a similar message shows up if you end up launching 'mcelog')
      
      The call chain is (as analyzed by Liu, Jinsong):
      do_fork
        --> copy_process
          --> dup_mm
            --> dup_mmap
             	--> copy_page_range
                --> track_pfn_copy
                  --> reserve_pfn_range
                    --> line 624: flags != want_flags
      It comes from differing memory types between the page table
      (_PAGE_CACHE_WB) and the MTRR (_PAGE_CACHE_UC_MINUS).

      Stefan Bader dug deep into this and found out that:
      "That makes it clearer as this will do
      
      reserve_memtype(...)
      --> pat_x_mtrr_type
        --> mtrr_type_lookup
          --> __mtrr_type_lookup
      
      And that can return -1/0xff in case of MTRR not being enabled/initialized. Which
      is not the case (given there are no messages for it in dmesg). This is not equal
      to MTRR_TYPE_WRBACK and thus becomes _PAGE_CACHE_UC_MINUS.
      
      It looks like the problem starts early in reserve_memtype:
      
             	if (!pat_enabled) {
                      /* This is identical to page table setting without PAT */
                      if (new_type) {
                              if (req_type == _PAGE_CACHE_WC)
                                      *new_type = _PAGE_CACHE_UC_MINUS;
                              else
                                     	*new_type = req_type & _PAGE_CACHE_MASK;
                     	}
                      return 0;
              }
      
      This would be what we want, that is clearing the PWT and PCD flags from the
      supported flags - if pat_enabled is disabled."
      
      This patch does that - disabling PAT.
      
      CC: stable@vger.kernel.org # 3.3 and further
      Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
      Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      Reported-and-Tested-by: Stefan Bader <stefan.bader@canonical.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      c79c4982
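      A sketch of the change as described: in addition to the earlier PAT
      MSR handling, the generic pat_enabled flag is cleared early in Xen
      guest setup (variable name as it existed in that kernel generation;
      hedged):

          asmlinkage void __init xen_start_kernel(void)
          {
                  /* ... */
          #ifdef CONFIG_X86_PAT
                  /*
                   * Disable PAT so that reserve_memtype() takes the
                   * !pat_enabled path and the flags/want_flags mismatch
                   * warned about above cannot occur.
                   */
                  pat_enabled = 0;
          #endif
                  /* ... rest of early setup ... */
          }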
  21. 15 Feb, 2013: 2 commits
  22. 24 Jan, 2013: 1 commit
  23. 18 Dec, 2012: 1 commit
  24. 01 Dec, 2012: 1 commit
  25. 29 Nov, 2012: 1 commit
  26. 27 Nov, 2012: 1 commit
  27. 02 Nov, 2012: 1 commit
    • xen PVonHVM: use E820_Reserved area for shared_info · 9d02b43d
      Committed by Olaf Hering
      This is a respin of 00e37bdb
      ("xen PVonHVM: move shared_info to MMIO before kexec").
      
      Currently kexec in a PVonHVM guest fails with a triple fault because the
      new kernel overwrites the shared info page. The exact failure depends on
      the size of the kernel image. This patch moves the pfn from RAM into an
      E820 reserved memory area.
      
      The pfn containing the shared_info is located somewhere in RAM. This will
      cause trouble if the current kernel is doing a kexec boot into a new
      kernel. The new kernel (and its startup code) can not know where the pfn
      is, so it can not reserve the page. The hypervisor will continue to update
      the pfn, and as a result memory corruption occurs in the new kernel.
      
      The toolstack marks the memory area FC000000-FFFFFFFF as reserved in the
      E820 map. Within that range newer toolstacks (4.3+) will keep 1MB
      starting from FE700000 as reserved for guest use. Older Xen4 toolstacks
      will usually not allocate areas up to FE700000, so FE700000 is expected
      to work also with older toolstacks.
      
      In Xen3 there is no reserved area at a fixed location. If the guest is
      started on such old hosts the shared_info page will be placed in RAM. As
      a result kexec can not be used.
      Signed-off-by: Olaf Hering <olaf@aepfle.de>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      9d02b43d
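      For context, relocating shared_info is done with the
      XENMEM_add_to_physmap hypercall; a hedged sketch of the PVonHVM setup
      path, with the pfn now taken from the E820-reserved window rather
      than from a RAM page:

          #include <xen/interface/memory.h>
          #include <asm/xen/hypercall.h>

          /* Illustrative: map the shared_info frame at a fixed, E820-reserved pfn. */
          static void xen_hvm_init_shared_info(unsigned long shared_info_pfn)
          {
                  struct xen_add_to_physmap xatp;

                  xatp.domid = DOMID_SELF;
                  xatp.idx = 0;
                  xatp.space = XENMAPSPACE_shared_info;
                  xatp.gpfn = shared_info_pfn;  /* e.g. inside the FE700000 window */
                  if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp))
                          BUG();
          }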
  28. 20 Oct, 2012: 1 commit
  29. 12 Oct, 2012: 2 commits
    • xen/bootup: allow {read|write}_cr8 pvops call. · 1a7bbda5
      Committed by Konrad Rzeszutek Wilk
      We actually do not do anything about it. Just return a default
      value of zero and if the kernel tries to write anything but 0
      we BUG_ON.
      
      This fixes the case when a user tries to suspend the machine
      and it blows up in save_processor_state because 'read_cr8' is set
      to NULL and we get:
      
      kernel BUG at /home/konrad/ssd/linux/arch/x86/include/asm/paravirt.h:100!
      invalid opcode: 0000 [#1] SMP
      Pid: 2687, comm: init.late Tainted: G           O 3.6.0upstream-00002-gac264ac-dirty #4 Bochs Bochs
      RIP: e030:[<ffffffff814d5f42>]  [<ffffffff814d5f42>] save_processor_state+0x212/0x270
      
      .. snip..
      Call Trace:
       [<ffffffff810733bf>] do_suspend_lowlevel+0xf/0xac
       [<ffffffff8107330c>] ? x86_acpi_suspend_lowlevel+0x10c/0x150
       [<ffffffff81342ee2>] acpi_suspend_enter+0x57/0xd5
      
      CC: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      1a7bbda5
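      The pvops stubs described above are tiny; a sketch matching the
      behaviour described (reads return 0, non-zero writes are a bug):

          static unsigned long xen_read_cr8(void)
          {
                  /* Xen PV guests have no TPR to read; report 0. */
                  return 0;
          }

          static void xen_write_cr8(unsigned long val)
          {
                  /* Only a value of 0 is tolerated. */
                  BUG_ON(val);
          }

          /* Wired into pv_cpu_ops so save/restore_processor_state() can use them:
           *      .read_cr8  = xen_read_cr8,
           *      .write_cr8 = xen_write_cr8,
           */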
    • xen/bootup: allow read_tscp call for Xen PV guests. · cd0608e7
      Committed by Konrad Rzeszutek Wilk
      The hypervisor will trap it. However, without this patch,
      we would crash as .read_tscp is set to NULL. This patch
      fixes that by setting it to the native_read_tscp call.
      
      CC: stable@vger.kernel.org
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      cd0608e7
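      The change is essentially a one-liner in the pv_cpu_ops table,
      pointing .read_tscp at the native helper so the hypervisor traps and
      handles RDTSCP; sketched here with the surrounding fields elided:

          static const struct pv_cpu_ops xen_cpu_ops = {
                  /* ... */
                  .read_tscp = native_read_tscp,  /* RDTSCP traps to the hypervisor */
                  /* ... */
          };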
  30. 24 Sep, 2012: 1 commit
  31. 20 Sep, 2012: 1 commit
    • xen/boot: Disable BIOS SMP MP table search. · bd49940a
      Committed by Konrad Rzeszutek Wilk
      As the initial domain we are able to search/map certain regions
      of memory to harvest configuration data. For all low-level data we
      use ACPI tables - for interrupts we use exclusively the ACPI _PRT
      (so DSDT) and the MADT for INT_SRC_OVR.

      The SMP MP table is not used at all. As a matter of fact we do
      not even support machines that only have SMP MP but no ACPI tables.

      Let's follow how Moorestown does it and just disable searching
      for the BIOS SMP tables.
      
      This also fixes an issue on HP Proliant BL680c G5 and DL380 G6:
      
      9f->100 for 1:1 PTE
      Freeing 9f-100 pfn range: 97 pages freed
      1-1 mapping on 9f->100
      .. snip..
      e820: BIOS-provided physical RAM map:
      Xen: [mem 0x0000000000000000-0x000000000009efff] usable
      Xen: [mem 0x000000000009f400-0x00000000000fffff] reserved
      Xen: [mem 0x0000000000100000-0x00000000cfd1dfff] usable
      .. snip..
      Scan for SMP in [mem 0x00000000-0x000003ff]
      Scan for SMP in [mem 0x0009fc00-0x0009ffff]
      Scan for SMP in [mem 0x000f0000-0x000fffff]
      found SMP MP-table at [mem 0x000f4fa0-0x000f4faf] mapped at [ffff8800000f4fa0]
      (XEN) mm.c:908:d0 Error getting mfn 100 (pfn 5555555555555555) from L1 entry 0000000000100461 for l1e_owner=0, pg_owner=0
      (XEN) mm.c:4995:d0 ptwr_emulate: could not get_page_from_l1e()
      BUG: unable to handle kernel NULL pointer dereference at           (null)
      IP: [<ffffffff81ac07e2>] xen_set_pte_init+0x66/0x71
      . snip..
      Pid: 0, comm: swapper Not tainted 3.6.0-rc6upstream-00188-gb6fb969-dirty #2 HP ProLiant BL680c G5
      .. snip..
      Call Trace:
       [<ffffffff81ad31c6>] __early_ioremap+0x18a/0x248
       [<ffffffff81624731>] ? printk+0x48/0x4a
       [<ffffffff81ad32ac>] early_ioremap+0x13/0x15
       [<ffffffff81acc140>] get_mpc_size+0x2f/0x67
       [<ffffffff81acc284>] smp_scan_config+0x10c/0x136
       [<ffffffff81acc2e4>] default_find_smp_config+0x36/0x5a
       [<ffffffff81ac3085>] setup_arch+0x5b3/0xb5b
       [<ffffffff81624731>] ? printk+0x48/0x4a
       [<ffffffff81abca7f>] start_kernel+0x90/0x390
       [<ffffffff81abc356>] x86_64_start_reservations+0x131/0x136
       [<ffffffff81abfa83>] xen_start_kernel+0x65f/0x661
      (XEN) Domain 0 crashed: 'noreboot' set - not rebooting.
      
      which is that ioremap would end up mapping 0xff using _PAGE_IOMAP
      (which is what early_ioremap sticks as a flag) - which meant
      we would get MFN 0xFF (pte ff461, which is OK), and then it would
      also map 0x100 (b/c ioremap tries to get page aligned request, and
      it was trying to map 0xf4fa0 + PAGE_SIZE - so it mapped the next page)
      as _PAGE_IOMAP. Since 0x100 is actually a RAM page, and the _PAGE_IOMAP
      bypasses the P2M lookup we would happily set the PTE to 1000461.
      Xen would deny the request since we do not have access to the
      Machine Frame Number (MFN) of 0x100. The P2M[0x100] is for example
      0x80140.
      
      CC: stable@vger.kernel.org
      Fixes-Oracle-Bugzilla: https://bugzilla.oracle.com/bugzilla/show_bug.cgi?id=13665
      Acked-by: Jan Beulich <jbeulich@suse.com>
      Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
      bd49940a
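      The disable itself is done by overriding the x86_init hook, the same
      way Moorestown does; a sketch (the wrapper function is illustrative,
      in the actual patch the assignment sits directly in the Xen boot
      path):

          #include <asm/x86_init.h>

          static void __init xen_disable_mptable_scan(void)
          {
                  /*
                   * Interrupt routing comes exclusively from ACPI (_PRT, MADT),
                   * so never scan low memory for a BIOS SMP MP table.
                   */
                  x86_init.mpparse.find_smp_config = x86_init_noop;
          }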
  32. 23 Aug, 2012: 1 commit