1. 16 Sep, 2019 (5 commits)
  2. 03 Sep, 2019 (9 commits)
  3. 21 Aug, 2019 (1 commit)
  4. 20 Aug, 2019 (1 commit)
    • memory: fix race between TCG and accesses to dirty bitmap · 9458a9a1
      Paolo Bonzini authored
      There is a race between TCG and accesses to the dirty log:
      
            vCPU thread                  reader thread
            -----------------------      -----------------------
            TLB check -> slow path
              notdirty_mem_write
                write to RAM
                set dirty flag
                                         clear dirty flag
            TLB check -> fast path
                                         read memory
              write to RAM
      
      Fortunately, no change to the vCPU thread is required to fix it.
      However, the reader thread must delay its read until the vCPU thread
      has finished the write.  This can be approximated conservatively by
      run_on_cpu, which waits for the end of the current translation
      block.
      
      A similar technique is used by KVM, which has to do a synchronous TLB
      flush after doing a test-and-clear of the dirty-page flags.
      Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 16 Aug, 2019 (2 commits)
  6. 15 Jul, 2019 (2 commits)
    • memory: Introduce memory listener hook log_clear() · 077874e0
      Peter Xu authored
      Introduce a new memory region listener hook log_clear() to allow the
      listeners to hook onto the points where the dirty bitmap is cleared by
      the bitmap users.
      
      Previously log_sync() contains two operations:
      
        - dirty bitmap collection, and,
        - dirty bitmap clear on remote site.
      
      Take KVM as an example: log_sync() for KVM first copies the kernel
      dirty bitmap to userspace, and at the same time clears the dirty
      bitmap there while re-protecting all the guest pages.
      
      We add this new log_clear() interface to split the old log_sync()
      into two separate procedures:
      
        - use log_sync() for the collection only, and
        - use log_clear() to clear the remote dirty bitmap.
      
      With the new interface, memory listener users can still decide how
      to implement the log synchronization procedure; e.g., they can
      provide only the log_sync() method and keep both procedures within
      it (that is how the old KVM code works before
      KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 is introduced).  However, the new
      interface gives memory listener users the option to postpone the
      log clear operation explicitly, if the module supports it.  That
      can really benefit users like KVM, at least on host kernels that
      support KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2.
      
      There are three places that can clear dirty bits in any one of the
      dirty bitmaps in the ram_list.dirty_memory[3] array:
      
              cpu_physical_memory_snapshot_and_clear_dirty
              cpu_physical_memory_test_and_clear_dirty
              cpu_physical_memory_sync_dirty_bitmap
      
      Currently we hook directly into each of these functions to notify
      about the log_clear().
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20190603065056.25211-7-peterx@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
    • memory: Pass mr into snapshot_and_clear_dirty · 5dea4079
      Peter Xu authored
      Also change its second parameter to be the offset relative to the
      memory region.  This will be used in follow-up patches.
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Message-Id: <20190603065056.25211-6-peterx@redhat.com>
      Signed-off-by: Juan Quintela <quintela@redhat.com>
  7. 06 Jul, 2019 (1 commit)
  8. 12 Jun, 2019 (2 commits)
    • Include qemu-common.h exactly where needed · a8d25326
      Markus Armbruster authored
      No header includes qemu-common.h after this commit, as prescribed by
      qemu-common.h's file comment.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190523143508.25387-5-armbru@redhat.com>
      [Rebased with conflicts resolved automatically, except for
      include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
      block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
      target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
      target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
      target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
      target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
      net/tap-bsd.c fixed up]
    • qemu-common: Move tcg_enabled() etc. to sysemu/tcg.h · 14a48c1d
      Markus Armbruster authored
      Other accelerators have their own headers: sysemu/hax.h, sysemu/hvf.h,
      sysemu/kvm.h, sysemu/whpx.h.  Only tcg_enabled() & friends sit in
      qemu-common.h.  This necessitates inclusion of qemu-common.h into
      headers, which is against the rules spelled out in qemu-common.h's
      file comment.
      
      Move tcg_enabled() & friends into their own header sysemu/tcg.h, and
      adjust #include directives.
      
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190523143508.25387-2-armbru@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      [Rebased with conflicts resolved automatically, except for
      accel/tcg/tcg-all.c]
  9. 26 Apr, 2019 (3 commits)
  10. 25 Apr, 2019 (1 commit)
  11. 19 Apr, 2019 (2 commits)
    • qom/cpu: Simplify how CPUClass:cpu_dump_state() prints · 90c84c56
      Markus Armbruster authored
      CPUClass method cpu_dump_state() takes an fprintf()-like callback
      and a FILE * to pass to it.  Most callers pass fprintf() and stderr.
      log_cpu_state() passes fprintf() and qemu_log_file.
      hmp_info_registers() passes monitor_fprintf() and the current monitor
      cast to FILE *.  monitor_fprintf() casts it right back, and is
      otherwise identical to monitor_printf().
      
      The callback gets passed around a lot, which is tiresome.  The
      type-punning around monitor_fprintf() is ugly.
      
      Drop the callback, and call qemu_fprintf() instead.  This also gets
      rid of the type-punning, since qemu_fprintf() takes NULL instead of
      the current monitor cast to FILE *.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Message-Id: <20190417191805.28198-15-armbru@redhat.com>
    • memory: Clean up how mtree_info() prints · b6b71cb5
      Markus Armbruster authored
      mtree_info() takes an fprintf()-like callback and a FILE * to pass to
      it, and so do its helper functions.  Passing around callback and
      argument is rather tiresome.
      
      Its only caller hmp_info_mtree() passes monitor_printf() cast to
      fprintf_function and the current monitor cast to FILE *.
      
      The type-punning is technically undefined behaviour, but works in
      practice.  Clean up: drop the callback, and call qemu_printf()
      instead.
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Message-Id: <20190417191805.28198-9-armbru@redhat.com>
  12. 29 Mar, 2019 (1 commit)
    • exec: Only count mapped memory backends for qemu_getrampagesize() · 7d5489e6
      David Gibson authored
      qemu_getrampagesize() works out the minimum host page size backing
      any of the guest's RAM.  This is required in a few places, such as
      for POWER8 PAPR KVM
      guests, because limitations of the hardware virtualization mean the guest
      can't use pagesizes larger than the host pages backing its memory.
      
      However, it currently checks against *every* memory backend, whether or not
      it is actually mapped into guest memory at the moment.  This is incorrect.
      
      This can cause a problem attempting to add memory to a POWER8 pseries KVM
      guest which is configured to allow hugepages in the guest (e.g.
      -machine cap-hpt-max-page-size=16m).  If you attempt to add
      non-hugepage memory, you can (correctly) create a memory backend,
      but it will (correctly) throw an error when you attempt to map that
      memory into the guest by 'device_add'ing a pc-dimm.
      
      What's not correct is that if you then reset the guest a startup check
      against qemu_getrampagesize() will cause a fatal error because of the new
      memory object, even though it's not mapped into the guest.
      
      This patch corrects the problem by adjusting find_max_supported_pagesize()
      (called from qemu_getrampagesize() via object_child_foreach) to exclude
      non-mapped memory backends.
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
      Reviewed-by: Igor Mammedov <imammedo@redhat.com>
      Acked-by: David Hildenbrand <david@redhat.com>
  13. 11 Mar, 2019 (1 commit)
    • exec.c: refactor function flatview_add_to_dispatch() · 494d1997
      Wei Yang authored
      flatview_add_to_dispatch() registers pages based on the layout of
      *section*, which may look like this:
      
          |s|PPPPPPP|s|
      
      where s stands for subpage and P for page.
      
      The procedure of this function could be described as:
      
          - register first subpage
          - register page
          - register last subpage
      
      This means the procedure can be simplified into these three steps
      instead of a loop iteration.
      
      This patch refactors the function into three corresponding steps and
      adds comments to clarify it.
      Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
      Message-Id: <20190311054252.6094-1-richardw.yang@linux.intel.com>
      [Paolo: move exit before adjustment of remain.offset_within_*,
       otherwise int128_get64 fails when a region is 2^64 bytes long]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  14. 06 Mar, 2019 (2 commits)
  15. 05 Feb, 2019 (2 commits)
    • mmap-alloc: fix hugetlbfs misaligned length in ppc64 · 7265c2b9
      Murilo Opsfelder Araujo authored
      The commit 7197fb40 ("util/mmap-alloc:
      fix hugetlb support on ppc64") fixed Huge TLB mappings on ppc64.
      
      However, we still need to consider the underlying huge page size
      during munmap() because it requires that both address and length be a
      multiple of the underlying huge page size for Huge TLB mappings.
      Quote from "Huge page (Huge TLB) mappings" paragraph under NOTES
      section of the munmap(2) manual:
      
        "For munmap(), addr and length must both be a multiple of the
        underlying huge page size."
      
      On ppc64, the munmap() in qemu_ram_munmap() does not work for Huge TLB
      mappings because the mapped segment can be aligned with the underlying
      huge page size, not aligned with the native system page size, as
      returned by getpagesize().
      
      This has the side effect of not releasing huge pages back to the pool
      after a hugetlbfs file-backed memory device is hot-unplugged.
      
      This patch fixes the situation in qemu_ram_mmap() and
      qemu_ram_munmap() by considering the underlying page size on ppc64.
      
      After this patch, memory hot-unplug releases huge pages back to the
      pool.
      
      Fixes: 7197fb40
      Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Greg Kurz <groug@kaod.org>
    • unify len and addr type for memory/address APIs · 0c249ff7
      Li Zhijian authored
      Some address/memory APIs mix types: 'hwaddr/target_ulong' for addr
      but 'int' for len.  This is unsafe, especially when a caller passes
      a non-int len to such an API, which can overflow quietly.  Below is
      a potential overflow case:
          dma_memory_read(uint32_t len)
            -> dma_memory_rw(uint32_t len)
              -> dma_memory_rw_relaxed(uint32_t len)
                -> address_space_rw(int len) # len overflow
      
      CC: Paolo Bonzini <pbonzini@redhat.com>
      CC: Peter Crosthwaite <crosthwaite.peter@gmail.com>
      CC: Richard Henderson <rth@twiddle.net>
      CC: Peter Maydell <peter.maydell@linaro.org>
      CC: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  16. 04 Feb, 2019 (1 commit)
    • mmap-alloc: fix hugetlbfs misaligned length in ppc64 · 53adb9d4
      Murilo Opsfelder Araujo authored
      The commit 7197fb40 ("util/mmap-alloc:
      fix hugetlb support on ppc64") fixed Huge TLB mappings on ppc64.
      
      However, we still need to consider the underlying huge page size
      during munmap() because it requires that both address and length be a
      multiple of the underlying huge page size for Huge TLB mappings.
      Quote from "Huge page (Huge TLB) mappings" paragraph under NOTES
      section of the munmap(2) manual:
      
        "For munmap(), addr and length must both be a multiple of the
        underlying huge page size."
      
      On ppc64, the munmap() in qemu_ram_munmap() does not work for Huge TLB
      mappings because the mapped segment can be aligned with the underlying
      huge page size, not aligned with the native system page size, as
      returned by getpagesize().
      
      This has the side effect of not releasing huge pages back to the pool
      after a hugetlbfs file-backed memory device is hot-unplugged.
      
      This patch fixes the situation in qemu_ram_mmap() and
      qemu_ram_munmap() by considering the underlying page size on ppc64.
      
      After this patch, memory hot-unplug releases huge pages back to the
      pool.
      
      Fixes: 7197fb40
      Signed-off-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
      Reviewed-by: Greg Kurz <groug@kaod.org>
      Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  17. 01 Feb, 2019 (1 commit)
    • exec.c: Don't reallocate IOMMUNotifiers that are in use · 5601be3b
      Peter Maydell authored
      The tcg_register_iommu_notifier() code has a GArray of
      TCGIOMMUNotifier structs which it has registered by passing
      memory_region_register_iommu_notifier() a pointer to the embedded
      IOMMUNotifier field. Unfortunately, if we need to enlarge the
      array via g_array_set_size() this can cause a realloc(), which
      invalidates the pointer that memory_region_register_iommu_notifier()
      put into the MemoryRegion's iommu_notify list. This can result
      in segfaults.
      
      Switch the GArray to holding pointers to the TCGIOMMUNotifier
      structs, so that we can individually allocate and free them.
      
      Cc: qemu-stable@nongnu.org
      Fixes: 1f871c5e ("exec.c: Handle IOMMUs in address_space_translate_for_iotlb()")
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20190128174241.5860-1-peter.maydell@linaro.org
  18. 29 Jan, 2019 (2 commits)
  19. 11 Jan, 2019 (1 commit)