1. 08 Jun 2017 (2 commits)
  2. 11 May 2017 (2 commits)
    • target/ppc: Avoid printing wrong aliases in CPU help text · e9edd931
      Thomas Huth authored
      When running with KVM, we update the "family" CPU alias to point
      to the right host CPU type, so that it is, for example, possible to
      use "-cpu POWER8" on a POWER8NVL host. However, the function that
      prints the list of available CPU models is called earlier than the
      KVM setup code, so the output of "-cpu help" is wrong in that
      case. Since it would be somewhat ugly anyway to have different
      help texts depending on whether "-enable-kvm" has been specified,
      it is better to always print the same text, so fix this issue by
      printing "alias for preferred XXX CPU" instead.
      Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      e9edd931
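The fix above boils down to formatting the help line from the alias's family name alone, never from the (KVM-dependent) resolved model. A minimal sketch of that idea, with illustrative function and format strings that are assumptions, not QEMU's actual code:

```c
/* Sketch only: format a "-cpu help" alias line that is independent of
 * whether KVM later retargets the alias. Names are hypothetical. */
#include <stdio.h>
#include <string.h>

static void format_alias_line(char *buf, size_t len,
                              const char *alias, const char *family)
{
    /* Family aliases like "POWER8" resolve differently under KVM, so
     * the help text deliberately stays model-agnostic. */
    snprintf(buf, len, "PowerPC %-16s (alias for preferred %s CPU)",
             alias, family);
}
```

With this, "-cpu help" output is identical with and without "-enable-kvm".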
    • target/ppc: Allow workarounds for POWER9 DD1 · 5f3066d8
      David Gibson authored
      POWER9 DD1 silicon has some bugs which mean that it a) isn't really
      compliant with ISA v3.00 and b) requires a number of special
      workarounds in the kernel.
      
      At the moment, qemu isn't aware of DD1.  For TCG we don't really want it to
      be (why bother emulating buggy silicon).  But with KVM, the guest does need
      to be aware of DD1 so it can apply the necessary workarounds.
      
      Meanwhile, the feature negotiation between qemu and the guest strongly
      favours architected compatibility modes over "raw" CPU modes.  In
      combination with the above, this means the guest sees architected
      POWER9 mode and doesn't apply the DD1 workarounds.  Well, unless it
      has yet another workaround to partially ignore what qemu tells it.
      
      This patch addresses this by disabling support for compatibility modes when
      using KVM on a POWER9 DD1 host.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      5f3066d8
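The gist of the change is a host-PVR check that gates compatibility-mode support. A toy sketch under stated assumptions (the DD1 PVR constant and helper name are guesses for illustration, not QEMU's actual definitions):

```c
/* Illustrative only: advertise no architected compat modes on a POWER9
 * DD1 host, so the guest runs in raw mode and applies its own DD1
 * workarounds. PVR values here are assumptions. */
#include <stdbool.h>

#define PVR_POWER9_DD1 0x004E0100u   /* assumed DD1 stepping PVR */

static bool compat_modes_allowed(unsigned host_pvr)
{
    /* Disable compat modes only on the buggy DD1 stepping. */
    return host_pvr != PVR_POWER9_DD1;
}
```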
  3. 26 Apr 2017 (6 commits)
  4. 21 Apr 2017 (1 commit)
  5. 03 Mar 2017 (4 commits)
  6. 01 Mar 2017 (2 commits)
    • target/ppc: Manage external HPT via virtual hypervisor · e57ca75c
      David Gibson authored
      The pseries machine type implements the behaviour of a PAPR compliant
      hypervisor, without actually executing such a hypervisor on the virtual
      CPU.  To do this we need some hooks in the CPU code to make hypervisor
      facilities get redirected to the machine instead of emulated internally.
      
      For hypercalls this is managed through the cpu->vhyp field, which points
      to a QOM interface with a method implementing the hypercall.
      
      For the hashed page table (HPT) - also a hypervisor resource - we use an
      older hack.  CPUPPCState has an 'external_htab' field which when non-NULL
      indicates that the HPT is stored in qemu memory, rather than within the
      guest's address space.
      
      For consistency - and to make some future extensions easier - this merges
      the external HPT mechanism into the vhyp mechanism.  Methods are added
      to vhyp for the basic operations the core hash MMU code needs: map_hptes()
      and unmap_hptes() for reading the HPT, store_hpte() for updating it and
      hpt_mask() to retrieve its size.
      
      To match this, the pseries machine now sets these vhyp fields in its
      existing vhyp class, rather than reaching into the cpu object to set the
      external_htab field.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      e57ca75c
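The four vhyp operations named above form a small method table the hash MMU code calls through. A simplified stand-in (not QEMU's real PPCVirtualHypervisorClass; the toy backend keeps the HPT in plain host memory, as the pseries machine does for an external HPT):

```c
/* Minimal sketch of a vhyp-style method table for external-HPT access. */
#include <stdint.h>
#include <stdlib.h>

typedef struct VhypOps {
    const uint64_t *(*map_hptes)(void *opaque, unsigned long ptex, int n);
    void (*unmap_hptes)(void *opaque, const uint64_t *hptes,
                        unsigned long ptex, int n);
    void (*store_hpte)(void *opaque, unsigned long ptex,
                       uint64_t pte0, uint64_t pte1);
    unsigned long (*hpt_mask)(void *opaque);
} VhypOps;

typedef struct ToyHpt {
    uint64_t *table;       /* 2 dwords per HPTE */
    unsigned long nptes;
} ToyHpt;

static const uint64_t *toy_map(void *opaque, unsigned long ptex, int n)
{
    ToyHpt *h = opaque;
    (void)n;               /* in-memory table: mapping is just pointer math */
    return &h->table[ptex * 2];
}

static void toy_unmap(void *opaque, const uint64_t *hptes,
                      unsigned long ptex, int n)
{
    (void)opaque; (void)hptes; (void)ptex; (void)n; /* nothing to release */
}

static void toy_store(void *opaque, unsigned long ptex,
                      uint64_t pte0, uint64_t pte1)
{
    ToyHpt *h = opaque;
    h->table[ptex * 2] = pte0;
    h->table[ptex * 2 + 1] = pte1;
}

static unsigned long toy_mask(void *opaque)
{
    ToyHpt *h = opaque;
    return h->nptes - 1;   /* HPT sizes are powers of two */
}

static const VhypOps toy_ops = { toy_map, toy_unmap, toy_store, toy_mask };
```

A KVM-backed machine would plug in methods that talk to the kernel instead, which is exactly the flexibility the merge buys.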
    • target/ppc: Fix KVM-HV HPTE accessors · 1ad9f0a4
      David Gibson authored
      When a 'pseries' guest is running with KVM-HV, the guest's hashed page
      table (HPT) is stored within the host kernel, so it is not directly
      accessible to qemu.  Most of the time, qemu doesn't need to access it:
      we're using the hardware MMU, and KVM itself implements the guest
      hypercalls for manipulating the HPT.
      
      However, qemu does need access to the in-KVM HPT to implement
      get_phys_page_debug() for the benefit of the gdbstub, and maybe for
      other debug operations.
      
      To allow this, 7c43bca0 "target-ppc: Fix page table lookup with kvm
      enabled" added kvmppc_hash64_read_pteg() to target/ppc/kvm.c to read
      in a batch of HPTEs from the KVM table.  Unfortunately, there are a
      couple of problems with this:
      
      First, the name of the function implies it always reads a whole PTEG
      from the HPT, but in fact in some cases it's used to grab individual
      HPTEs (which ends up pulling 8 HPTEs, not aligned to a PTEG, from
      the kernel).
      
      Second, and more importantly, the code to read the HPTEs from KVM is
      simply wrong, in general.  The data from the fd that KVM provides is
      designed mostly for compact migration rather than this sort of one-off
      access, and so needs some decoding for this purpose.  The current code
      will work in some cases, but if there are invalid HPTEs then it will
      not get sane results.
      
      This patch rewrites the HPTE reading function to have a simpler
      interface (just read n HPTEs into a caller-provided buffer), and to
      correctly decode the stream from the kernel.
      
      For consistency we also clean up the similar function for altering
      HPTEs within KVM (introduced in c1385933 "target-ppc: Update
      ppc_hash64_store_hpte to support updating in-kernel htab").
      
      Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      1ad9f0a4
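The decoding the commit refers to comes from the KVM HTAB fd format: each chunk carries a small header (index, n_valid, n_invalid) followed by n_valid 16-byte HPTEs, and the n_invalid entries carry no payload and must be zero-filled by the reader. A simplified sketch of decoding one chunk into a flat caller buffer (not QEMU's actual kvmppc code; the helper name is illustrative):

```c
/* Decode one chunk of a KVM HTAB stream into hptes[0..n), where the
 * caller wants n HPTEs (2 uint64_t each) starting at HPT index 'first'.
 * Returns the number of stream bytes consumed. Sketch only. */
#include <stdint.h>
#include <string.h>

struct htab_header {        /* mirrors struct kvm_get_htab_header */
    uint32_t index;
    uint16_t n_valid;
    uint16_t n_invalid;
};

static size_t decode_chunk(const uint8_t *stream, uint64_t *hptes,
                           uint32_t first, uint32_t n)
{
    struct htab_header hdr;
    const uint8_t *payload;
    uint32_t i;

    memcpy(&hdr, stream, sizeof(hdr));
    payload = stream + sizeof(hdr);

    /* Valid entries have data in the stream. */
    for (i = 0; i < hdr.n_valid; i++) {
        uint32_t idx = hdr.index + i;
        if (idx >= first && idx < first + n) {
            memcpy(&hptes[(idx - first) * 2], payload + (size_t)i * 16, 16);
        }
    }
    /* Invalid entries are implicit: zero the corresponding slots.
     * Missing this step is the kind of bug the commit describes. */
    for (i = 0; i < hdr.n_invalid; i++) {
        uint32_t idx = hdr.index + hdr.n_valid + i;
        if (idx >= first && idx < first + n) {
            memset(&hptes[(idx - first) * 2], 0, 16);
        }
    }
    return sizeof(hdr) + (size_t)hdr.n_valid * 16;
}
```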
  7. 22 Feb 2017 (1 commit)
    • hw/ppc/spapr: Check for valid page size when hot plugging memory · df587133
      Thomas Huth authored
      On POWER, the valid page sizes that the guest can use are bound
      to the CPU and not to the memory region. QEMU already has some
      fancy logic to find out the right maximum page size to tell
      the guest during boot (see getrampagesize() in the file
      target/ppc/kvm.c for more information).
      However, once we're booted and the guest is already using huge
      pages, it is currently still possible to hot-plug memory regions
      that do not support huge pages - which of course does not work
      on POWER, since the guest thinks that it is possible to use huge
      pages everywhere. The KVM_RUN ioctl will then abort with -EFAULT,
      QEMU prints a not very helpful error message together with
      a register dump, and the user is annoyed that the VM unexpectedly
      died.
      To avoid this situation, we should check the page size of hot-plugged
      DIMMs to see whether it is possible to use it in the current VM.
      If it does not fit, we can print out a better error message and
      refuse to add it, so that the VM does not die unexpectedly and the
      user has a second chance to plug a DIMM with a matching memory
      backend instead.
      
      Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1419466
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      [dwg: Fix a build error on 32-bit builds with KVM]
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      df587133
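The check itself reduces to one comparison at hot-plug time. A hedged sketch (function name and error handling are illustrative, not spapr's actual code):

```c
/* Refuse a DIMM whose backend page size is smaller than the page size
 * the guest was already told it may use everywhere. Sketch only. */
#include <stdbool.h>

static bool dimm_pagesize_ok(unsigned long rampagesize,
                             unsigned long dimm_pagesize)
{
    /* The guest assumes pages up to 'rampagesize' work on all memory,
     * so a backend with a smaller page size would fault in KVM_RUN. */
    return dimm_pagesize >= rampagesize;
}
```

On failure, the machine code can report a clear error and abort the hot-plug instead of letting the VM die later.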
  8. 02 Feb 2017 (1 commit)
    • ppc/kvm: Handle the "family" CPU via alias instead of registering new types · 715d4b96
      Thomas Huth authored
      When running with KVM on POWER, we are registering a "family" CPU
      type for the host CPU that we are running on. For example, on all
      POWER8-compatible hosts, we register a "POWER8" CPU type, so that
      you can always start QEMU with "-cpu POWER8" there, without the
      need to know whether you are running on a POWER8, POWER8E or POWER8NVL
      host machine.
      However, we also have a "POWER8" CPU alias in the ppc_cpu_aliases list
      (which is mainly useful for TCG). This leads to two cosmetic drawbacks:
      if the user runs QEMU with "-cpu ?", we always claim that POWER8 is an
      "alias for POWER8_v2.0" - which is simply not true when running with
      KVM on POWER. And when using the 'query-cpu-definitions' QMP call,
      there are currently two entries for "POWER8", one for the alias and
      one for the additionally registered type.
      To solve these two problems, we should rather update the "family"
      alias instead of registering a new type. We then only have one
      "POWER8" CPU definition around, an alias, which also points to the
      right destination.
      
      Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1396536
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      715d4b96
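Retargeting an alias in place, rather than registering a duplicate type, can be modelled as a simple table scan. The table layout and helper below are assumptions for illustration, not QEMU's actual ppc_cpu_aliases code:

```c
/* Toy model: retarget the family alias at the real host model instead
 * of registering a second "POWER8" type. */
#include <stddef.h>
#include <string.h>

typedef struct CpuAlias {
    const char *alias;
    const char *model;
} CpuAlias;

static CpuAlias aliases[] = {
    { "POWER7", "POWER7_v2.3" },
    { "POWER8", "POWER8_v2.0" },   /* TCG default target */
};

static void update_family_alias(const char *family, const char *host_model)
{
    /* On a KVM host, point the family alias at the host's model,
     * e.g. POWER8NVL_v1.0, leaving exactly one "POWER8" definition. */
    for (size_t i = 0; i < sizeof(aliases) / sizeof(aliases[0]); i++) {
        if (strcmp(aliases[i].alias, family) == 0) {
            aliases[i].model = host_model;
            return;
        }
    }
}
```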
  9. 31 Jan 2017 (1 commit)
    • ppc: Rename cpu_version to compat_pvr · d6e166c0
      David Gibson authored
      The 'cpu_version' field in PowerPCCPU is badly named.  It's named after the
      'cpu-version' device tree property where it is advertised, but that meaning
      may not be obvious in most places it appears.
      
      Worse, it doesn't even really correspond to that device tree property.  The
      property contains either the processor's PVR, or, if the CPU is running in
      a compatibility mode, a special "logical PVR" representing which mode.
      
      Rename the cpu_version field, and a number of related variables, to
      compat_pvr to make this clearer.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      d6e166c0
  10. 20 Jan 2017 (1 commit)
  11. 17 Jan 2017 (1 commit)
  12. 21 Dec 2016 (1 commit)
    • Move target-* CPU file into a target/ folder · fcf5ef2a
      Thomas Huth authored
      We've currently got 18 architectures in QEMU, and thus 18 target-xxx
      folders in the root folder of the QEMU source tree. More architectures
      (e.g. RISC-V, AVR) are likely to be included soon, too, so the main
      folder of the QEMU sources is slowly getting quite overcrowded with
      the target-xxx folders.
      To unclutter the main folder a little bit, let's move the target-xxx
      folders into a dedicated target/ folder, so that target-xxx/ simply
      becomes target/xxx/ instead.
      
      Acked-by: Laurent Vivier <laurent@vivier.eu> [m68k part]
      Acked-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de> [tricore part]
      Acked-by: Michael Walle <michael@walle.cc> [lm32 part]
      Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com> [s390x part]
      Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> [s390x part]
      Acked-by: Eduardo Habkost <ehabkost@redhat.com> [i386 part]
      Acked-by: Artyom Tarasenko <atar4qemu@gmail.com> [sparc part]
      Acked-by: Richard Henderson <rth@twiddle.net> [alpha part]
      Acked-by: Max Filippov <jcmvbkbc@gmail.com> [xtensa part]
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au> [ppc part]
      Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com> [cris&microblaze part]
      Acked-by: Guan Xuetao <gxt@mprc.pku.edu.cn> [unicore32 part]
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      fcf5ef2a
  13. 13 Oct 2016 (1 commit)
  14. 05 Oct 2016 (3 commits)
  15. 23 Sep 2016 (3 commits)
  16. 10 Aug 2016 (2 commits)
  17. 25 Jul 2016 (1 commit)
  18. 22 Jul 2016 (1 commit)
    • kvm-irqchip: i386: add hook for add/remove virq · 38d87493
      Peter Xu authored
      Add two hooks to be notified when MSI routes are added or removed.
      There are two kinds of MSI routes:
      
      - in kvm_irqchip_add_irq_route(): before assigning IRQFD. Used by
        vhost, vfio, etc.
      
      - in kvm_irqchip_send_msi(): when sending a direct MSI message, if
        direct MSI is not allowed, we will first create one MSI route entry
        in the kernel, then trigger it.
      
      This patch only hooks the first one (the irqfd case). We do not need
      to take care of the second one, since it is only used by QEMU
      userspace (kvm-apic) and those messages are always translated on the
      fly when triggered. For the first one, however, we need to note the
      routes down so that we can notify the kernel when cache invalidation
      happens.
      
      Also, we do not hook IOAPIC MSI routes (we have an explicit notifier
      for IOAPIC to keep its cache updated). We only need to care about
      irqfd users.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      38d87493
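The hook mechanism described above amounts to a small notifier list invoked from the irqfd route-add path. A self-contained sketch with hypothetical names (QEMU's real hooks use its Notifier infrastructure and different signatures):

```c
/* Sketch of add-route hooks: listeners register a callback, and the
 * irqfd path invokes all of them after a new MSI route is committed,
 * so they can note the virq down for later cache invalidation. */
#include <stddef.h>

typedef void (*msi_route_hook)(int virq, void *opaque);

#define MAX_HOOKS 4
static struct { msi_route_hook fn; void *opaque; } add_hooks[MAX_HOOKS];
static int n_add_hooks;

static void register_msi_route_add_hook(msi_route_hook fn, void *opaque)
{
    if (n_add_hooks < MAX_HOOKS) {
        add_hooks[n_add_hooks].fn = fn;
        add_hooks[n_add_hooks].opaque = opaque;
        n_add_hooks++;
    }
}

static void notify_msi_route_added(int virq)
{
    for (int i = 0; i < n_add_hooks; i++) {
        add_hooks[i].fn(virq, add_hooks[i].opaque);
    }
}

/* Example listener: remember the most recently added virq. */
static void record_virq(int virq, void *opaque)
{
    *(int *)opaque = virq;
}
```

A removal hook would mirror this from the route-release path.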
  19. 18 Jul 2016 (1 commit)
    • ppc: Yet another fix for the huge page support detection mechanism · 159d2e39
      Thomas Huth authored
      Commit 86b50f2e ("Disable huge page support if it is not available
      for main RAM") already made sure that huge page support is not
      announced to the guest if the normal RAM of non-NUMA configurations
      is not backed by a huge page filesystem. However, there is one more
      case that can go wrong: NUMA is enabled, but the RAM of the NUMA
      nodes is not configured with huge page support (and only the memory
      of a DIMM is configured with it). When QEMU is started with the
      following command line, for example, the Linux guest currently
      crashes because it is trying to use huge pages on a memory region
      that does not support huge pages:
      
       qemu-system-ppc64 -enable-kvm ... -m 1G,slots=4,maxmem=32G -object \
         memory-backend-file,policy=default,mem-path=/hugepages,size=1G,id=mem-mem1 \
         -device pc-dimm,id=dimm-mem1,memdev=mem-mem1 -smp 2 \
         -numa node,nodeid=0 -numa node,nodeid=1
      
      To fix this issue, we've got to make sure to disable huge page support,
      too, when there is a NUMA node that is not using a memory backend with
      huge page support.
      
      Fixes: 86b50f2e
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      159d2e39
  20. 23 Jun 2016 (1 commit)
    • ppc: Disable huge page support if it is not available for main RAM · 86b50f2e
      Thomas Huth authored
      On powerpc, we must only signal huge page support to the guest if
      all memory areas are capable of supporting huge pages. The commit
      2d103aae ("fix hugepage support when using memory-backend-file")
      already fixed the case when the user specified the mem-path property
      for NUMA memory nodes instead of using the global "-mem-path" option.
      However, there is one more case where it currently can go wrong.
      When specifying additional memory DIMMs without using NUMA, e.g.
      
       qemu-system-ppc64 -enable-kvm ... -m 1G,slots=2,maxmem=2G \
          -device pc-dimm,id=dimm-mem1,memdev=mem1 -object \
          memory-backend-file,policy=default,mem-path=/...,size=1G,id=mem1
      
      the code in getrampagesize() currently assumes that huge pages
      are possible since they are enabled for the mem1 object. But
      since the main RAM is not backed by a huge page filesystem,
      the guest Linux kernel then crashes very quickly after being
      started. So in case we've got "normal" memory without NUMA
      and without the global "-mem-path" option, we must not announce
      huge pages to the guest. Since this is likely a misconfiguration
      by the user, also print out a message in this case.
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      86b50f2e
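The detection logic in this commit and its follow-up (159d2e39) both reduce to one rule: the page size announced to the guest is the minimum across main RAM and every memory backend, so a single non-hugepage region disables huge pages everywhere. A simplified model of that decision (illustrative only, not getrampagesize() itself):

```c
/* The guest may only be told about page sizes that every memory
 * region supports, so take the minimum across all regions. */
#include <stddef.h>

static unsigned long guest_ram_pagesize(const unsigned long *region_pagesizes,
                                        size_t n)
{
    unsigned long min = (unsigned long)-1;
    for (size_t i = 0; i < n; i++) {
        if (region_pagesizes[i] < min) {
            min = region_pagesizes[i];
        }
    }
    return min;
}
```

With a 16 MiB hugepage DIMM but 4 KiB main RAM, this yields 4 KiB, i.e. no huge page support is announced.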
  21. 17 Jun 2016 (2 commits)
  22. 14 Jun 2016 (1 commit)
  23. 19 May 2016 (1 commit)