1. 11 January 2019, 5 commits
  2. 07 November 2018, 1 commit
    • memory: learn about non-volatile memory region · c26763f8
      Authored by Marc-André Lureau
      Add a new flag to mark memory regions that are used as non-volatile,
      for example by NVDIMM. The bit is propagated down to the flat view and
      reflected in HMP "info mtree" with an "nv-" prefix on the memory type.
      
      This way, guest_phys_blocks_region_add() can skip the non-volatile
      memory regions for dumps and for TCG memory clearing in a following
      patch.
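The effect of the new flag can be modeled with a small self-contained sketch (toy code, not QEMU source; all type and function names here are illustrative): a per-region non-volatile bit that yields the "nv-" prefix in the type name, plus a skip predicate for the dump walker.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    bool ram;          /* region is RAM-backed */
    bool nonvolatile;  /* the new flag added by this commit */
} ToyMemoryRegion;

/* Mirror of the HMP "info mtree" naming: prepend "nv-" for NV regions. */
static void toy_region_type(const ToyMemoryRegion *mr, char *buf, size_t len)
{
    snprintf(buf, len, "%s%s",
             mr->nonvolatile ? "nv-" : "",
             mr->ram ? "ram" : "i/o");
}

/* Dump-walker predicate: skip non-volatile regions, as
 * guest_phys_blocks_region_add() is made to do in the follow-up patch. */
static bool toy_dump_should_skip(const ToyMemoryRegion *mr)
{
    return mr->nonvolatile;
}
```

The real flag lives on QEMU's MemoryRegion and is propagated when the flat view is rendered; the toy only captures the naming and the skip decision.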
      
      Cc: dgilbert@redhat.com
      Cc: imammedo@redhat.com
      Cc: pbonzini@redhat.com
      Cc: guangrong.xiao@linux.intel.com
      Cc: mst@redhat.com
      Cc: xiaoguangrong.eric@gmail.com
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Message-Id: <20181003114454.5662-2-marcandre.lureau@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      c26763f8
  3. 19 October 2018, 1 commit
  4. 03 October 2018, 6 commits
  5. 20 August 2018, 1 commit
  6. 15 August 2018, 1 commit
    • accel/tcg: Pass read access type through to io_readx() · dbea78a4
      Authored by Peter Maydell
      The io_readx() function needs to know whether the load it is
      doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it
      can pass the right value to the cpu_transaction_failed()
      function. Plumb this information through from the softmmu
      code.
      
      This is currently not often going to give the wrong answer,
      because usually instruction fetches go via get_page_addr_code().
      However once we switch over to handling execution from non-RAM by
      creating single-insn TBs, the path for an insn fetch to generate
      a bus error will be through cpu_ld*_code() and io_readx(),
      so without this change we will generate a d-side fault when we
      should generate an i-side fault.
      
      We also have to pass the access type via a CPU struct global
      down to unassigned_mem_read(), for the benefit of the targets
      which still use the cpu_unassigned_access() hook (m68k, mips,
      sparc, xtensa).
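The d-side/i-side distinction described above can be sketched with a toy fragment (not QEMU source; only the MMUAccessType enumerator names follow QEMU):

```c
#include <assert.h>
#include <string.h>

/* Names follow QEMU's MMUAccessType enum. */
typedef enum {
    MMU_DATA_LOAD,
    MMU_DATA_STORE,
    MMU_INST_FETCH,
} MMUAccessType;

/* Before this commit, io_readx() effectively assumed MMU_DATA_LOAD, so a
 * bus error on an instruction fetch was reported on the wrong side. With
 * the access type plumbed through, the fault side follows the access. */
static const char *toy_fault_side(MMUAccessType access_type)
{
    return access_type == MMU_INST_FETCH ? "i-side" : "d-side";
}
```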
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Tested-by: Cédric Le Goater <clg@kaod.org>
      Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
      dbea78a4
  7. 10 August 2018, 1 commit
  8. 29 June 2018, 1 commit
  9. 15 June 2018, 3 commits
  10. 01 June 2018, 3 commits
  11. 31 May 2018, 2 commits
  12. 09 May 2018, 1 commit
    • exec: reintroduce MemoryRegion caching · 48564041
      Authored by Paolo Bonzini
      MemoryRegionCache was reverted to "normal" address_space_* operations
      for 2.9, due to lack of support for IOMMUs.  Reinstate the
      optimizations, caching only the IOMMU translation done at
      address_space_cache_init time but not the per-access IOMMU lookup and
      target AddressSpace translation; now that MemoryRegionCache supports
      IOMMUs, it becomes more widely applicable too.
      
      The inlined fast path is defined in memory_ldst_cached.inc.h, while the
      slow path uses memory_ldst.inc.c as before.  The smaller fast path causes
      a little code size reduction in MemoryRegionCache users:
      
          hw/virtio/virtio.o text size before: 32373
          hw/virtio/virtio.o text size after: 31941
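The fast-path idea can be sketched in a self-contained toy (not the QEMU implementation; names are illustrative): resolve a host pointer once at init, so each cached load is only a bounds check plus a direct read.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t *ptr;  /* host pointer resolved once at init ("the translation") */
    size_t   len;  /* length of the cached range */
} ToyRegionCache;

static void toy_cache_init(ToyRegionCache *cache, uint8_t *backing, size_t len)
{
    cache->ptr = backing;  /* the expensive lookup happens only here */
    cache->len = len;
}

/* Fast path: no per-access translation, just a bounds check and a load. */
static uint32_t toy_ldl_cached(const ToyRegionCache *cache, size_t offset)
{
    uint32_t val = 0;
    if (offset + sizeof(val) <= cache->len) {
        memcpy(&val, cache->ptr + offset, sizeof(val));
    }
    return val;
}
```

In QEMU the equivalent inlined helpers live in memory_ldst_cached.inc.h; the slow path falls back to full address-space translation per access.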
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      48564041
  13. 06 March 2018, 1 commit
  14. 19 February 2018, 1 commit
    • mem: add share parameter to memory-backend-ram · 06329cce
      Authored by Marcel Apfelbaum
      Currently only the file-backed memory backend can be created with a
      "share" flag, which allows sharing guest RAM with other processes on
      the host.
      
      Add the "share" flag to the RAM memory backend as well, in order to
      allow remapping parts of the guest RAM to different host virtual
      addresses. This is needed by the RDMA devices in order to remap
      non-contiguous QEMU virtual addresses to a contiguous virtual address
      range.
      
      Move the "share" flag to the host memory base class, modify
      phys_mem_alloc to take the new parameter, and add a new interface,
      memory_region_init_ram_shared_nomigrate().
      
      There are no functional changes if the new flag is not used.
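A minimal invocation sketch using the new flag (the `-numa` pairing is illustrative; the `share=on` property on memory-backend-ram is what this commit adds):

```shell
# Guest RAM backed by a shareable RAM backend; other host processes
# (e.g. an RDMA helper) can then map the same memory.
qemu-system-x86_64 -m 4G \
    -object memory-backend-ram,id=mem0,size=4G,share=on \
    -numa node,memdev=mem0
```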
      Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
      Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
      06329cce
  15. 13 February 2018, 3 commits
  16. 07 February 2018, 2 commits
    • memory: do explicit cleanup when removing listeners · d25836ca
      Authored by Peter Xu
      When unregistering memory listeners, we should call, e.g.,
      region_del() (and possibly other undo operations) on every existing
      memory region section, otherwise we may leak resources that were
      taken during region_add(). This patch undoes that state for the
      listeners, emulating the case where the address space goes from its
      current state to an empty one.
      
      I found this problem when debugging a refcount leak that caused a
      device unplug event to be lost (please see the "Bug:" line below).
      In that case, the leaked resource is the PCI BAR memory region
      refcount. Since memory regions do not keep their own refcount but
      reference their owners, the refcount of the vfio-pci device (the
      owner of the PCI BAR memory regions) is leaked, and the event goes
      missing.
      
      We had encountered similar issues before and fixed them in another
      way (ee4c1128, "vhost: Release memory references on cleanup"). This
      patch can be seen as a more high-level fix for similar problems
      caused by resource leaks from memory listeners. Now we can remove
      the explicit unref of memory regions, since that is done altogether
      when unregistering the listeners.
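The leak and the fix can be modeled in a few lines of toy code (not QEMU source; names are illustrative): each region_add takes a reference on the region's owner, so unregistering without replaying region_del leaves references behind.

```c
#include <assert.h>

typedef struct {
    int owner_refcount;   /* references held on the region's owner */
    int active_sections;  /* sections currently known to the listener */
} ToyListenerState;

static void toy_region_add(ToyListenerState *s)
{
    s->owner_refcount++;
    s->active_sections++;
}

static void toy_region_del(ToyListenerState *s)
{
    s->owner_refcount--;
    s->active_sections--;
}

/* The fix: on unregister, replay region_del() on every remaining
 * section, as if the address space had been set to an empty state. */
static void toy_listener_unregister(ToyListenerState *s)
{
    while (s->active_sections > 0) {
        toy_region_del(s);
    }
}
```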
      
      Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1531393
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20180122060244.29368-5-peterx@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      d25836ca
    • memory/iommu: Add get_attr() · f1334de6
      Authored by Alexey Kardashevskiy
      This adds get_attr() to IOMMUMemoryRegionClass, like
      iommu_ops::domain_get_attr in the Linux kernel.
      
      This defines the first attribute - IOMMU_ATTR_SPAPR_TCE_FD - which
      will be used between the pSeries machine and VFIO-PCI.
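A hedged sketch of the hook shape (toy types, not the actual IOMMUMemoryRegionClass declaration; only the attribute name mirrors the commit): a per-class get_attr() callback plus a dispatcher that fails cleanly when a class does not implement it.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

typedef enum {
    TOY_IOMMU_ATTR_SPAPR_TCE_FD,
} ToyIOMMUAttr;

typedef struct {
    /* Optional per-class hook, like iommu_ops::domain_get_attr. */
    int (*get_attr)(void *iommu, ToyIOMMUAttr attr, void *data);
} ToyIOMMUClass;

static int toy_iommu_get_attr(const ToyIOMMUClass *cls, void *iommu,
                              ToyIOMMUAttr attr, void *data)
{
    if (!cls->get_attr) {
        return -EINVAL;  /* attribute queries are optional per class */
    }
    return cls->get_attr(iommu, attr, data);
}

/* Illustrative implementation: hand back a stand-in TCE fd value. */
static int toy_spapr_get_attr(void *iommu, ToyIOMMUAttr attr, void *data)
{
    (void)iommu;
    if (attr == TOY_IOMMU_ATTR_SPAPR_TCE_FD) {
        *(int *)data = 42;  /* stand-in value, not a real fd */
        return 0;
    }
    return -EINVAL;
}
```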
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Acked-by: David Gibson <david@gibson.dropbear.id.au>
      Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
      f1334de6
  17. 05 February 2018, 1 commit
  18. 19 January 2018, 1 commit
    • hostmem-file: add "align" option · 98376843
      Authored by Haozhong Zhang
      When mmap(2)-ing the backend files, QEMU uses the host page size
      (getpagesize(2)) by default as the alignment of the mapping address.
      However, some backends may require alignments different from the page
      size. For example, mmap-ing a device DAX (e.g., /dev/dax0.0) on Linux
      kernel 4.13 at an address which is 4K-aligned but not 2M-aligned
      fails with a kernel message like
      
      [617494.969768] dax dax0.0: qemu-system-x86: dax_mmap: fail, unaligned vma (0x7fa37c579000 - 0x7fa43c579000, 0x1fffff)
      
      Because there is no common approach to discover such alignment
      requirements, we add an "align" option to "memory-backend-file", so
      that users or management utilities which have enough knowledge about
      the backend can specify a proper alignment via this option.
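A sketch of the option in use for the device-DAX case described above (the paths and sizes are illustrative):

```shell
# Back an NVDIMM with /dev/dax0.0, forcing a 2M-aligned mapping instead
# of the default host-page-size alignment.
qemu-system-x86_64 -machine pc,nvdimm=on -m 4G,slots=2,maxmem=8G \
    -object memory-backend-file,id=mem1,share=on,mem-path=/dev/dax0.0,size=4G,align=2M \
    -device nvdimm,id=nvdimm1,memdev=mem1
```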
      Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
      Message-Id: <20171211072806.2812-2-haozhong.zhang@intel.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      [ehabkost: fixed typo, fixed error_setg() format string]
      Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
      98376843
  19. 18 December 2017, 1 commit
  20. 18 October 2017, 2 commits
  21. 12 October 2017, 1 commit
  22. 22 September 2017, 1 commit
    • memory: Share special empty FlatView · 092aa2fc
      Authored by Alexey Kardashevskiy
      This shares a cached empty FlatView among address spaces. The empty
      FV is used every time a root MR renders into a FV without memory
      sections, which happens when the MR or its children are not enabled
      or are zero-sized. The empty_view is not NULL, to keep the rest of
      the memory API intact; it also has a dispatch tree for the same
      reason.
      
      On a POWER8 guest with 255 CPUs, 255 virtio-net devices and 40 PCI
      bridges, this halves the number of FlatViews in use (557 -> 260) and
      of dispatch tables (~800000 -> ~370000). In an unrelated experiment
      with 112 non-virtio devices on x86 ("-M pc"), only 4 FlatViews are
      alive, and about ~2000 are created at startup.
      Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
      Message-Id: <20170921085110.25598-16-aik@ozlabs.ru>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      092aa2fc