1. 26 Apr 2017, 10 commits
  2. 03 Apr 2017, 1 commit
    • D
      pseries: Enforce homogeneous threads-per-core · 8149e299
      David Gibson authored
      For reasons that may be useful in future, CPU core objects, as used on the
      pseries machine type have their own nr-threads property, potentially
      allowing cores with different numbers of threads in the same system.
      
      If the user/management uses the values specified in query-hotpluggable-cpus
      as they're expected to do, this will never matter in practice.  But that's
      not actually enforced - it's possible to manually specify a core with
      a different number of threads from that in -smp.  That will confuse the
      platform - most immediately, this can be used to create a CPU thread with
      index above max_cpus which leads to an assertion failure in
      spapr_cpu_core_realize().
      
      For now, enforce that all cores must have the same, standard, number of
      threads.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
      8149e299
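The enforcement described above amounts to comparing a core's nr-threads property against the machine-wide -smp threads value at realize time. A minimal sketch of that shape, with hypothetical struct and helper names (not QEMU's actual realize code):

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the machine-wide -smp value and a core object. */
struct machine { int smp_threads; };
struct cpu_core { int nr_threads; };

/* Reject any core whose thread count differs from the -smp value,
 * mirroring the homogeneity check this commit adds at core realize time. */
static bool core_check_threads(const struct machine *ms,
                               const struct cpu_core *core,
                               char *err, size_t errlen)
{
    if (core->nr_threads != ms->smp_threads) {
        snprintf(err, errlen, "invalid nr-threads %d, must be %d",
                 core->nr_threads, ms->smp_threads);
        return false;
    }
    return true;
}
```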
  3. 29 Mar 2017, 1 commit
    • M
      spapr: fix buffer-overflow · 24ec2863
      Marc-André Lureau authored
      Running postcopy-test with ASAN produces the following error:
      
      QTEST_QEMU_BINARY=ppc64-softmmu/qemu-system-ppc64  tests/postcopy-test
      ...
      =================================================================
      ==23641==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7f1556600000 at pc 0x55b8e9d28208 bp 0x7f1555f4d3c0 sp 0x7f1555f4d3b0
      READ of size 8 at 0x7f1556600000 thread T6
          #0 0x55b8e9d28207 in htab_save_first_pass /home/elmarco/src/qq/hw/ppc/spapr.c:1528
          #1 0x55b8e9d2939c in htab_save_iterate /home/elmarco/src/qq/hw/ppc/spapr.c:1665
          #2 0x55b8e9beae3a in qemu_savevm_state_iterate /home/elmarco/src/qq/migration/savevm.c:1044
          #3 0x55b8ea677733 in migration_thread /home/elmarco/src/qq/migration/migration.c:1976
          #4 0x7f15845f46c9 in start_thread (/lib64/libpthread.so.0+0x76c9)
          #5 0x7f157d9d0f7e in clone (/lib64/libc.so.6+0x107f7e)
      
      0x7f1556600000 is located 0 bytes to the right of 2097152-byte region [0x7f1556400000,0x7f1556600000)
      allocated by thread T0 here:
          #0 0x7f159bb76980 in posix_memalign (/lib64/libasan.so.3+0xc7980)
          #1 0x55b8eab185b2 in qemu_try_memalign /home/elmarco/src/qq/util/oslib-posix.c:106
          #2 0x55b8eab186c8 in qemu_memalign /home/elmarco/src/qq/util/oslib-posix.c:122
          #3 0x55b8e9d268a8 in spapr_reallocate_hpt /home/elmarco/src/qq/hw/ppc/spapr.c:1214
          #4 0x55b8e9d26e04 in ppc_spapr_reset /home/elmarco/src/qq/hw/ppc/spapr.c:1261
          #5 0x55b8ea12e913 in qemu_system_reset /home/elmarco/src/qq/vl.c:1697
          #6 0x55b8ea13fa40 in main /home/elmarco/src/qq/vl.c:4679
          #7 0x7f157d8e9400 in __libc_start_main (/lib64/libc.so.6+0x20400)
      
      Thread T6 created by T0 here:
          #0 0x7f159bae0488 in __interceptor_pthread_create (/lib64/libasan.so.3+0x31488)
          #1 0x55b8eab1d9cb in qemu_thread_create /home/elmarco/src/qq/util/qemu-thread-posix.c:465
          #2 0x55b8ea67874c in migrate_fd_connect /home/elmarco/src/qq/migration/migration.c:2096
          #3 0x55b8ea66cbb0 in migration_channel_connect /home/elmarco/src/qq/migration/migration.c:500
          #4 0x55b8ea678f38 in socket_outgoing_migration /home/elmarco/src/qq/migration/socket.c:87
          #5 0x55b8eaa5a03a in qio_task_complete /home/elmarco/src/qq/io/task.c:142
          #6 0x55b8eaa599cc in gio_task_thread_result /home/elmarco/src/qq/io/task.c:88
          #7 0x7f15823e38e6  (/lib64/libglib-2.0.so.0+0x468e6)
      SUMMARY: AddressSanitizer: heap-buffer-overflow /home/elmarco/src/qq/hw/ppc/spapr.c:1528 in htab_save_first_pass
      
      The index seems to be wrongly incremented, unless I'm missing something
      that would be worth a comment.
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      24ec2863
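The heap overflow in htab_save_first_pass came from advancing an index past the end of the hash table buffer. The fix reduces to bounding the scan; a simplified sketch of the loop pattern, with illustrative names and a made-up valid bit (not the actual spapr.c code):

```c
#include <stddef.h>
#include <stdint.h>

#define HPTE_VALID 1u  /* illustrative valid bit, not the real HPTE layout */

/* Count consecutive valid entries starting at 'start', never reading
 * past htab_entries; the ASAN report above shows what happens when the
 * index is advanced without this bound. */
static size_t count_valid_run(const uint64_t *htab, size_t htab_entries,
                              size_t start)
{
    size_t index = start;
    while (index < htab_entries && (htab[index] & HPTE_VALID)) {
        index++;
    }
    return index - start;
}
```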
  4. 22 Mar 2017, 1 commit
    • L
      numa,spapr: align default numa node memory size to 256MB · 55641213
      Laurent Vivier authored
      Since commit 224245bf ("spapr: Add LMB DR connectors"), NUMA node
      memory size must be aligned to 256MB (SPAPR_MEMORY_BLOCK_SIZE).
      
      But when the "-numa" option is provided without a "mem" parameter,
      the memory is divided equally between the nodes, but only 8 MB aligned.
      This may not be valid for pseries.
      
      In that case we can have:
      $ ./ppc64-softmmu/qemu-system-ppc64 -m 4G -numa node -numa node -numa node
      qemu-system-ppc64: Node 0 memory size 0x55000000 is not aligned to 256 MiB
      
      With this patch, we have:
      (qemu) info numa
      3 nodes
      node 0 cpus: 0
      node 0 size: 1280 MB
      node 1 cpus:
      node 1 size: 1280 MB
      node 2 cpus:
      node 2 size: 1536 MB
      Signed-off-by: Laurent Vivier <lvivier@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      55641213
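The new default split above can be reproduced as: give each node the total divided by the node count, rounded down to a 256 MiB multiple, and fold the remainder into the last node. A sketch (the constant matches SPAPR_MEMORY_BLOCK_SIZE; the helper itself is illustrative, not the patch's actual code):

```c
#include <stdint.h>

#define SPAPR_MEMORY_BLOCK_SIZE (256ULL * 1024 * 1024) /* 256 MiB */

/* Divide ram_size across nb_nodes, aligning each share down to 256 MiB. */
static void numa_default_split(uint64_t ram_size, int nb_nodes,
                               uint64_t *node_mem)
{
    uint64_t per_node = (ram_size / nb_nodes) & ~(SPAPR_MEMORY_BLOCK_SIZE - 1);
    for (int i = 0; i < nb_nodes; i++) {
        node_mem[i] = per_node;
    }
    /* The last node absorbs the remainder so the sizes still sum to ram_size. */
    node_mem[nb_nodes - 1] += ram_size - per_node * nb_nodes;
}
```

For 4 GiB across three nodes this yields 1280 MB, 1280 MB, 1536 MB, matching the "info numa" output in the commit message.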
  5. 14 Mar 2017, 1 commit
  6. 06 Mar 2017, 1 commit
  7. 03 Mar 2017, 3 commits
    • S
      spapr: Small cleanup of PPC MMU enums · ec975e83
      Sam Bobroff authored
      The PPC MMU types are sometimes treated as if they were a bit field
      and sometimes as if they were an enum, which causes maintenance
      problems: flipping bits in the MMU type (which is done on both the 1TB
      segment and 64K segment bits) currently produces new MMU type
      values that are not handled in every "switch" on it, sometimes causing
      an abort().
      
      This patch provides some macros that can be used to filter out the
      "bit field-like" bits so that the remainder of the value can be
      switched on, like an enum. This allows removal of all of the
      "degraded" types from the list and should ease maintenance.
      Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      ec975e83
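The macros in question mask off the bit-field-like modifier bits so that the remainder can be switched on as a plain enum. A hedged sketch of the pattern (the bit values here are illustrative, not the real target/ppc definitions):

```c
#include <stdint.h>

/* Illustrative modifier bits that change an MMU type without changing
 * its base kind (stand-ins for the 1TB-segment and 64K-segment bits). */
#define MMU_1TSEG 0x40000000u
#define MMU_64K   0x20000000u

/* Strip the modifier bits so a switch on the base type stays exhaustive
 * even after the segment bits have been flipped. */
#define MMU_BASE_TYPE(t) ((t) & ~(MMU_1TSEG | MMU_64K))
```

A "switch (MMU_BASE_TYPE(env->mmu_model))" then only needs cases for the base types, with no "degraded" combinations to enumerate.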
    • S
      target/ppc/POWER9: Add POWER9 pa-features definition · 4975c098
      Suraj Jitindar Singh authored
      Add a pa-features definition which includes all of the new fields which
      have been added; note that we don't claim support for any of these new
      features at this stage.
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Acked-by: Balbir Singh <bsingharora@gmail.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      4975c098
    • S
      target/ppc: Add patb_entry to sPAPRMachineState · 9861bb3e
      Suraj Jitindar Singh authored
      ISA v3.00 adds the idea of a partition table which is used to store the
      address translation details for all partitions on the system. The partition
      table consists of double word entries indexed by partition id where the second
      double word contains the location of the process table in guest memory. The
      process table is registered by the guest via an h-call.
      
      We need somewhere to store the address of the process table so we add an entry
      to the sPAPRMachineState struct called patb_entry to represent the second
      doubleword of a single partition table entry corresponding to the current
      guest. We need to store this value so we know if the guest is using radix or
      hash translation and the location of the corresponding process table in guest
      memory. Since we only have a single guest per qemu instance, we only need one
      entry.
      
      Since the partition table is technically a hypervisor resource we require that
      access to it is abstracted by the virtual hypervisor through the get_patbe()
      call. Currently the value of the entry is never set (and thus
      defaults to 0 indicating hash), but it will be required to both implement
      POWER9 kvm support and tcg radix support.
      
      We also add this field to be migrated as part of the sPAPRMachineState as we
      will need it on the receiving side as the guest will never tell us this
      information again and we need it to perform translation.
      Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
      Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      9861bb3e
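Deciding between hash and radix translation then reduces to testing one bit of the stored doubleword: a zero patb_entry defaults to hash, as the message notes. A sketch of that check (the macro name and bit position are assumptions based on the ISA v3.00 description above, not taken from the patch itself):

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed guest-radix bit in the second doubleword of a partition table
 * entry; a cleared bit (including the default value 0) means hash. */
#define PATBE1_GR 0x8000000000000000ULL

static bool guest_uses_radix(uint64_t patb_entry)
{
    return (patb_entry & PATBE1_GR) != 0;
}
```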
  8. 01 Mar 2017, 19 commits
  9. 24 Feb 2017, 1 commit
    • J
      tcg: drop global lock during TCG code execution · 8d04fb55
      Jan Kiszka authored
      This finally allows TCG to benefit from the iothread introduction: Drop
      the global mutex while running pure TCG CPU code. Reacquire the lock
      when entering MMIO or PIO emulation, or when leaving the TCG loop.
      
      We have to revert a few optimizations for the current TCG threading
      model, namely kicking the TCG thread in qemu_mutex_lock_iothread and not
      kicking it in qemu_cpu_kick. We also need to disable RAM block
      reordering until we have a more efficient locking mechanism at hand.
      
      Still, a Linux x86 UP guest and my Musicpal ARM model boot fine here.
      These numbers demonstrate where we gain something:
      
      20338 jan       20   0  331m  75m 6904 R   99  0.9   0:50.95 qemu-system-arm
      20337 jan       20   0  331m  75m 6904 S   20  0.9   0:26.50 qemu-system-arm
      
      The guest CPU was fully loaded, but the iothread could still run mostly
      independently on a second core. Without the patch we don't get beyond
      
      32206 jan       20   0  330m  73m 7036 R   82  0.9   1:06.00 qemu-system-arm
      32204 jan       20   0  330m  73m 7036 S   21  0.9   0:17.03 qemu-system-arm
      
      We don't benefit significantly, though, when the guest is not fully
      loading a host CPU.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Message-Id: <1439220437-23957-10-git-send-email-fred.konrad@greensocs.com>
      [FK: Rebase, fix qemu_devices_reset deadlock, rm address_space_* mutex]
      Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
      [EGC: fixed iothread lock for cpu-exec IRQ handling]
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      [AJB: -smp single-threaded fix, clean commit msg, BQL fixes]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
      [PM: target-arm changes]
      Acked-by: Peter Maydell <peter.maydell@linaro.org>
      8d04fb55
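The locking pattern described, run guest code without the global lock and reacquire it only around device access, can be sketched with a plain mutex. This is only the shape of the change (the real code uses qemu_mutex_lock_iothread() and far more machinery):

```c
#include <pthread.h>

static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pure TCG code runs without the lock; it is taken only while touching
 * device state (MMIO/PIO), then dropped before resuming translation. */
static void emulate_mmio_access(void (*device_op)(void))
{
    pthread_mutex_lock(&iothread_lock);
    device_op();
    pthread_mutex_unlock(&iothread_lock);
}

/* Illustrative device operation used below. */
static int mmio_count;
static void dummy_device_op(void) { mmio_count++; }
```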
  10. 22 Feb 2017, 2 commits
    • T
      hw/ppc/spapr: Check for valid page size when hot plugging memory · df587133
      Thomas Huth authored
      On POWER, the valid page sizes that the guest can use are bound
      to the CPU and not to the memory region. QEMU already has some
      fancy logic to find out the right maximum memory size to tell
      it to the guest during boot (see getrampagesize() in the file
      target/ppc/kvm.c for more information).
      However, once we're booted and the guest is already using huge pages,
      it is currently still possible to hot-plug memory regions
      that do not support huge pages - which of course does not work
      on POWER, since the guest thinks that it is possible to use huge
      pages everywhere. The KVM_RUN ioctl will then abort with -EFAULT,
      QEMU spills out a not very helpful error message together with
      a register dump, and the user is annoyed that the VM unexpectedly
      died.
      To avoid this situation, we should check the page size of hot-plugged
      DIMMs to see whether it can be used in the current VM.
      If it does not fit, we can print out a better error message and
      refuse to add it, so that the VM does not die unexpectedly and the
      user has a second chance to plug a DIMM with a matching memory
      backend instead.
      
      Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1419466
      Signed-off-by: Thomas Huth <thuth@redhat.com>
      [dwg: Fix a build error on 32-bit builds with KVM]
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      df587133
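The check boils down to comparing the page size backing the hot-plugged DIMM against the page size the guest negotiated at boot (as computed by getrampagesize()). A minimal sketch of that predicate, with a hypothetical helper name:

```c
#include <stdint.h>
#include <stdbool.h>

/* Accept a DIMM only if its backing page size is at least the page size
 * the guest was told at boot; refusing here replaces a later -EFAULT
 * from KVM_RUN with a clear error message. */
static bool dimm_pagesize_ok(uint64_t boot_pagesize, uint64_t dimm_pagesize)
{
    return dimm_pagesize >= boot_pagesize;
}
```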
    • I
      machine: replace query_hotpluggable_cpus() callback with has_hotpluggable_cpus flag · c5514d0e
      Igor Mammedov authored
      The generic helper machine_query_hotpluggable_cpus() has replaced the
      target-specific query_hotpluggable_cpus() callbacks, so there is no
      need for them anymore. However, a non-NULL callback value is used to
      detect/report hotpluggable CPU support, therefore the callback cannot
      simply be removed completely.
      Replace it with a MachineClass.has_hotpluggable_cpus boolean,
      which is sufficient for the task.
      Suggested-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Igor Mammedov <imammedo@redhat.com>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      c5514d0e