1. 02 Jul 2018, 1 commit
  2. 27 Jun 2018, 1 commit
  3. 15 Jun 2018, 1 commit
  4. 09 May 2018, 2 commits
    • exec: reintroduce MemoryRegion caching · 48564041
      Committed by Paolo Bonzini
      MemoryRegionCache was reverted to "normal" address_space_* operations
      for 2.9, due to lack of support for IOMMUs.  Reinstate the
      optimizations, caching only the IOMMU translation at address_cache_init;
      the IOMMU lookup and target AddressSpace translation are not cached.
      Now that MemoryRegionCache supports IOMMUs, it becomes more widely
      applicable too.
      
      The inlined fast path is defined in memory_ldst_cached.inc.h, while the
      slow path uses memory_ldst.inc.c as before.  The smaller fast path
      yields a small code-size reduction in MemoryRegionCache users (a usage
      sketch follows this entry):
      
          hw/virtio/virtio.o text size before: 32373
          hw/virtio/virtio.o text size after: 31941
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
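      
      A minimal usage sketch (not from the commit itself); the function name
      and guest address are hypothetical, but the calls follow the cached
      API named above:
      
          /* Translate once; subsequent accesses take the inlined fast path
           * from memory_ldst_cached.inc.h instead of a full lookup. */
          static uint64_t read_desc_addr(AddressSpace *as, hwaddr desc_pa)
          {
              MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;
              uint64_t val = 0;
      
              if (address_space_cache_init(&cache, as, desc_pa, 8, false) >= 0) {
                  /* Offsets are relative to the cached region, hence 0. */
                  val = ldq_le_phys_cached(&cache, 0);
                  address_space_cache_destroy(&cache);
              }
              return val;
          }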
    • exec: move memory access declarations to a common header, inline *_phys functions · 4269c82b
      Committed by Paolo Bonzini
      For now, this reduces the text size very slightly due to the newly-added
      inlining:
      
         text size before: 9301965
         text size after: 9300645
      
      Later, however, the declarations in include/exec/memory_ldst.inc.h will be
      reused for the MemoryRegionCache slow path functions.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 14 Mar 2018, 1 commit
    • linux-user: fix mmap/munmap/mprotect/mremap/shmat · e530acd7
      Committed by Max Filippov
      In linux-user QEMU running a target with TARGET_ABI_BITS bigger than
      L1_MAP_ADDR_SPACE_BITS, an assertion in page_set_flags fires when
      mmap, munmap, mprotect, mremap or shmat is called for an address
      outside the guest address space.  mmap and mprotect should return
      ENOMEM in such a case.
      
      Change the definition of GUEST_ADDR_MAX to always be the last valid
      guest address, and account for this change in open_self_maps.
      Add a macro guest_addr_valid that verifies that a guest address is
      valid, and a function guest_range_valid that verifies that an address
      range is within the guest address space and does not wrap around.
      Use these checks in mmap/munmap/mprotect/mremap/shmat for error
      checking (a sketch follows this entry).
      
      Cc: qemu-stable@nongnu.org
      Cc: Riku Voipio <riku.voipio@iki.fi>
      Cc: Laurent Vivier <laurent@vivier.eu>
      Reviewed-by: Laurent Vivier <laurent@vivier.eu>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
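      
      A paraphrased sketch of the checks described above (the exact upstream
      definitions may differ; len is assumed to be non-zero):
      
          #define guest_addr_valid(x)  ((x) <= GUEST_ADDR_MAX)
      
          /* Reject ranges that leave the guest address space or wrap
           * around its end; GUEST_ADDR_MAX is the last valid address. */
          static inline bool guest_range_valid(abi_ulong start, abi_ulong len)
          {
              return len - 1 <= GUEST_ADDR_MAX &&
                     start <= GUEST_ADDR_MAX - len + 1;
          }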
  6. 10 Mar 2018, 1 commit
    • linux-user: fix mmap/munmap/mprotect/mremap/shmat · ebf9a363
      Committed by Max Filippov
      In linux-user QEMU running a target with TARGET_ABI_BITS bigger than
      L1_MAP_ADDR_SPACE_BITS, an assertion in page_set_flags fires when
      mmap, munmap, mprotect, mremap or shmat is called for an address
      outside the guest address space.  mmap and mprotect should return
      ENOMEM in such a case.
      
      Change the definition of GUEST_ADDR_MAX to always be the last valid
      guest address, and account for this change in open_self_maps.
      Add a macro guest_addr_valid that verifies that a guest address is
      valid, and a function guest_range_valid that verifies that an address
      range is within the guest address space and does not wrap around.
      Use these checks in mmap/munmap/mprotect/mremap/shmat for error
      checking.
      
      Cc: qemu-stable@nongnu.org
      Cc: Riku Voipio <riku.voipio@iki.fi>
      Cc: Laurent Vivier <laurent@vivier.eu>
      Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
      Reviewed-by: Laurent Vivier <laurent@vivier.eu>
      Message-Id: <20180307215010.30706-1-jcmvbkbc@gmail.com>
      Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  7. 20 Oct 2017, 1 commit
    • accel/tcg: allow to invalidate a write TLB entry immediately · f52bfb12
      Committed by David Hildenbrand
      Background: s390x implements Low-Address Protection (LAP). If LAP is
      enabled, writing to effective addresses (before any translation)
      0-511 and 4096-4607 triggers a protection exception.
      
      So we have subpage protection on the first two pages of every address
      space (where the lowcore, the CPU's private data, resides).
      
      By immediately invalidating the write entry but allowing the caller to
      continue, we force every write access to these first two pages into
      the slow path.  We will then get a TLB fault with the exact accessed
      address and can evaluate whether protection applies or not (a
      conceptual sketch follows this entry).
      
      We have to make sure to ignore the invalid bit if tlb_fill() succeeds.
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20171016202358.3633-2-david@redhat.com>
      Signed-off-by: Cornelia Huck <cohuck@redhat.com>
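      
      Conceptually (a standalone illustration, not the actual softmmu code;
      the bit position is hypothetical), an entry tagged with the invalid
      bit can never match the fast-path comparison, so every write to the
      page re-enters tlb_fill():
      
          #define TLB_INVALID_MASK  (1 << 3)   /* illustrative bit choice */
      
          /* addr & page_mask always has the invalid bit clear, so a tagged
           * entry never compares equal and the store takes the slow path. */
          static bool store_fast_path_hit(uintptr_t tlb_addr_write,
                                          uintptr_t addr, uintptr_t page_mask)
          {
              return (addr & page_mask) == tlb_addr_write;
          }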
  8. 11 Oct 2017, 1 commit
  9. 22 Dec 2016, 1 commit
    • exec: introduce MemoryRegionCache · 1f4e496e
      Committed by Paolo Bonzini
      Device models often have to perform multiple accesses to a single
      memory region that is known in advance, but would like to use
      "DMA-style" functions instead of address_space_map/unmap.  This can
      happen, for example, when the data has to undergo endianness
      conversion.  Introduce a new data structure to cache the result of
      address_space_translate without forcing usage of a host address the
      way address_space_map does (a usage sketch follows this entry).
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
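      
      A sketch of the intended pattern, assuming an 8-byte little-endian
      field at a known guest address (the function name and desc_pa are
      hypothetical):
      
          static uint64_t read_le64_field(AddressSpace *as, hwaddr desc_pa)
          {
              MemoryRegionCache cache = MEMORY_REGION_CACHE_INVALID;
              uint64_t raw = 0;
      
              /* Cache the translation once, then do "DMA-style" reads
               * without holding a host pointer (unlike address_space_map). */
              if (address_space_cache_init(&cache, as, desc_pa, sizeof(raw),
                                           false) >= 0) {
                  address_space_read_cached(&cache, 0, &raw, sizeof(raw));
                  address_space_cache_destroy(&cache);
              }
              return le64_to_cpu(raw);   /* endianness conversion */
          }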
  10. 26 Oct 2016, 1 commit
  11. 24 Oct 2016, 1 commit
    • cpu: Support a target CPU having a variable page size · 20bccb82
      Committed by Peter Maydell
      Support target CPUs having a page size which isn't known
      at compile time.  To use this, the CPU implementation should
      (a realize sketch follows this entry):
       * define TARGET_PAGE_BITS_VARY
       * not define TARGET_PAGE_BITS
       * define TARGET_PAGE_BITS_MIN to the smallest value it
         might possibly want for TARGET_PAGE_BITS
       * call set_preferred_target_page_bits() in its realize
         function to indicate the actual preferred target page
         size for the CPU (and report any error from it)
      
      In CONFIG_USER_ONLY, the CPU implementation should continue
      to define TARGET_PAGE_BITS appropriately for the guest
      OS page size.
      
      Machines which want to take advantage of a page size larger than
      TARGET_PAGE_BITS_MIN must set the MachineClass minimum_page_bits
      field to a value which they guarantee will be no greater than the
      preferred page size for any CPU they create.
      
      Note that changing the target page size by setting
      minimum_page_bits is a migration compatibility break
      for that machine.
      
      For debugging purposes, attempts to use TARGET_PAGE_SIZE
      before it has been finally confirmed will assert.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
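      
      A sketch of what a CPU realize function might do under this scheme
      (the function name, bit count and error message are illustrative):
      
          static void my_cpu_realizefn(DeviceState *dev, Error **errp)
          {
              /* Ask for 64KiB pages; this fails if the page size has
               * already been finalized to a smaller value. */
              if (!set_preferred_target_page_bits(16)) {
                  error_setg(errp, "64KiB pages unavailable on this machine");
                  return;
              }
              /* ... rest of realize ... */
          }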
  12. 06 Jul 2016, 1 commit
    • tcg: Improve the alignment check infrastructure · 1f00b27f
      Committed by Sergey Sorokin
      Some architectures (e.g. ARMv8) require an address to be aligned to
      a size larger than the size of the memory access itself.  The current
      zero-cost alignment check in QEMU can express such a check, but we
      need a way to specify the alignment size (a fragment follows this
      entry).
      Signed-off-by: Sergey Sorokin <afarallax@yandex.ru>
      Message-Id: <1466705806-679898-1-git-send-email-afarallax@yandex.ru>
      Signed-off-by: Richard Henderson <rth@twiddle.net>
      [rth: Assert in tcg_canonicalize_memop.  Leave get_alignment_bits
      available for, though unused by, user-mode.  Retain logging difference
      based on ALIGNED_ONLY.]
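      
      After this change a memory operation can carry an alignment larger
      than its access size; a fragment sketching the idea (the values are
      illustrative):
      
          /* A 4-byte little-endian load that must be 16-byte aligned,
           * as an ARMv8-style target might require. */
          TCGMemOp memop = MO_LEUL | MO_ALIGN_16;
          unsigned a_bits = get_alignment_bits(memop);   /* 4, i.e. 16 bytes */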
  13. 29 Jun 2016, 1 commit
  14. 07 Jun 2016, 1 commit
  15. 19 May 2016, 1 commit
  16. 23 Feb 2016, 1 commit
    • include: Clean up includes · 90ce6e26
      Committed by Peter Maydell
      Clean up includes so that osdep.h is included first and headers
      which it implies are not included manually.
      
      This commit was created with scripts/clean-includes.
      
      NB: If this commit breaks compilation for your out-of-tree
      patchseries or fork, then you need to make sure you add
      #include "qemu/osdep.h" to any new .c files that you have.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Eric Blake <eblake@redhat.com>
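      
      In practice this means every .c file begins with:
      
          #include "qemu/osdep.h"   /* must be the first #include */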
  17. 02 Dec 2015, 1 commit
    • translate-all: ensure host page mask is always extended with 1's · 0c2d70c4
      Committed by Paolo Bonzini
      Anthony reported that >4GB guests on Xen with 32bit QEMU broke after
      commit 4ed023ce ("Round up RAMBlock sizes to host page sizes", 2015-11-05).
      
      In that patch, sizes are masked against qemu_host_page_size/mask,
      which are uintptr_t and thus 32-bit on a 32-bit QEMU, even though the
      RAM space might be bigger than 4GB on Xen.
      
      Since ram_addr_t is not available on user-mode emulation targets, ensure
      that we get a sign extension when masking away the low bits of the address.
      Replace the ~10-year-old scary comment saying that the type of these
      variables is probably wrong with another equally scary comment.  The
      new comment, however, does not contain "???", which is arguably an
      improvement.
      
      For completeness use the alignment macros in linux-user and bsd-user
      instead of manually doing an &.  linux-user and bsd-user are not affected
      by the Xen issue, however.
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Reported-by: Anthony PERARD <anthony.perard@citrix.com>
      Fixes: 4ed023ce
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
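      
      The underlying pitfall in miniature (a standalone C illustration, not
      QEMU code):
      
          #include <stdint.h>
          #include <stdio.h>
      
          int main(void)
          {
              uint64_t addr = 0x123456789ULL;     /* a >4GB address        */
              uint32_t m32 = ~(uint32_t)0xfff;    /* 32-bit uintptr_t mask */
              int64_t  m64 = ~(int64_t)0xfff;     /* sign-extending mask   */
      
              /* Zero-extension clears the high bits: prints 0x23456000. */
              printf("broken: %#llx\n", (unsigned long long)(addr & m32));
              /* Sign-extension preserves them: prints 0x123456000. */
              printf("fixed:  %#llx\n", (unsigned long long)(addr & m64));
              return 0;
          }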
  18. 09 Sep 2015, 1 commit
  19. 25 Aug 2015, 2 commits
  20. 07 Jul 2015, 1 commit
  21. 26 Jun 2015, 1 commit
  22. 17 Feb 2015, 4 commits
  23. 20 Jan 2015, 1 commit
  24. 08 Jan 2015, 2 commits
    • exec: qemu_ram_alloc_resizeable, qemu_ram_resize · 62be4e3a
      Committed by Michael S. Tsirkin
      Add an API to allocate "resizeable" RAM.
      This generally looks just like regular RAM, but
      has the special property that only a portion of it
      (used_length) is actually used and migrated.
      
      This used_length can change across reboots.
      
      Follow-up patches will change used_length for such blocks at migration,
      making it easier to extend devices using such RAM (notably ACPI,
      but conceivably other ROMs in the future) without breaking migration
      compatibility or wasting ROM (guest) memory.
      
      The device is notified on resize, so it can adjust if necessary.
      
      qemu_ram_alloc_resizeable allocates this memory and qemu_ram_resize
      resizes it (a sketch follows this entry).
      
      Note: nothing prevents making all RAM resizeable in this way.
      However, reviewers felt that only enabling this selectively will
      make some class of errors easier to detect.
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
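      
      A hedged sketch of how such an allocation might look; the callback,
      names and sizes are illustrative, and the signature of the
      memory_region_init_resizeable_ram wrapper is paraphrased from this
      era of the tree:
      
          /* Hypothetical resize-notification callback: the device adapts
           * its state to the new used_length. */
          static void tables_resized(const char *id, uint64_t len, void *host)
          {
              /* e.g. update a length field exposed to the guest */
          }
      
          /* Allocate up to 256KiB, of which only 4KiB is initially used
           * (and migrated); a later qemu_ram_resize() can grow it. */
          memory_region_init_resizeable_ram(mr, owner, "dev-tables",
                                            4096, 256 * 1024,
                                            tables_resized, &error_abort);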
    • exec: split length -> used_length/max_length · 9b8424d5
      Committed by Michael S. Tsirkin
      This patch allows us to distinguish between two
      length values for each block:
          max_length - length of memory block that was allocated
          used_length - length of block used by QEMU/guest
      
      Currently, we set used_length = max_length, unconditionally.
      Follow-up patches allow used_length <= max_length (a field sketch
      follows this entry).
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
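      
      In RAMBlock terms, the split looks roughly like this (a paraphrased
      field subset, not the literal patch):
      
          struct RAMBlock {
              /* ... */
              ram_addr_t used_length;  /* used by QEMU/guest; migrated */
              ram_addr_t max_length;   /* actually allocated           */
              /* follow-ups allow used_length <= max_length            */
          };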
  25. 17 Dec 2014, 1 commit
  26. 16 Dec 2014, 3 commits
  27. 07 Oct 2014, 1 commit
  28. 22 Aug 2014, 1 commit
  29. 19 Jun 2014, 3 commits
  30. 05 Jun 2014, 1 commit