1. 24 Feb 2017, 1 commit
    • cputlb: atomically update tlb fields used by tlb_reset_dirty · b0706b71
      Authored by Alex Bennée
      The main use case for tlb_reset_dirty is to set the TLB_NOTDIRTY flags
      in TLB entries to force the slow-path on writes. This is used to mark
      page ranges containing code which has been translated so it can be
      invalidated if written to. To do this safely we need to ensure that the
      TLB entries in question are updated for all vCPUs before we attempt to
      run the code; otherwise a race could be introduced.
      
      To achieve this we atomically set the flag in tlb_reset_dirty_range and
      take the same care when the flag is set as the TLB entry is filled.
      
      On 32-bit hosts emulating 64-bit guests we do not even try, as the
      atomic primitives may not be available. MTTCG is disabled in this case
      and cannot be forced on. The copy_tlb_helper function keeps the atomic
      semantics in one place to avoid confusion.
      
      The dirty helper function is made static as it isn't used outside of
      cputlb.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
  2. 16 Sep 2015, 3 commits
  3. 05 Jun 2015, 1 commit
  4. 17 Feb 2015, 1 commit
    • exec: make iotlb RCU-friendly · 9d82b5a7
      Authored by Paolo Bonzini
      After the previous patch, TLBs will be flushed on every change to
      the memory mapping.  This patch augments that with synchronization
      of the MemoryRegionSections referred to in the iotlb array.
      
      With this change, it is guaranteed that iotlb_to_region will access the
      correct memory map, even once the TLB is accessed outside the BQL.
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  5. 14 Mar 2014, 3 commits
  6. 20 Jun 2013, 1 commit
    • exec: Resolve subpages in one step except for IOTLB fills · 90260c6c
      Authored by Jan Kiszka
      Except for the case of setting the IOTLB entry in TCG mode, we can avoid
      the subpage dispatch handlers and do the resolution directly in
      address_space_lookup_region. An IOTLB entry describes a full page, not
      just the region that the first access to a sub-divided page happens to
      hit.
      
      This patch therefore introduces a special translation function,
      address_space_translate_for_iotlb, that avoids the subpage resolutions.
      In contrast, callers of the existing address_space_translate service
      will now always receive the terminal memory region section. This will be
      important for breaking the BQL and for enabling unaligned memory regions.
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  7. 29 May 2013, 1 commit
    • memory: add address_space_translate · 149f54b5
      Authored by Paolo Bonzini
      Using phys_page_find to translate an AddressSpace to a MemoryRegionSection
      is unwieldy. It requires passing the page index rather than the address,
      and memory_region_section_addr then has to be called. Replace
      memory_region_section_addr with a function that does all of it: call
      phys_page_find, compute the offset within the region, and check how
      big the current mapping is.  This way, a large flat region can be written
      with a single lookup rather than a page at a time.
      
      address_space_translate will also provide a single point where IOMMU
      forwarding is implemented.
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  8. 19 Dec 2012, 1 commit
  9. 23 Oct 2012, 1 commit
    • Rename target_phys_addr_t to hwaddr · a8170e5e
      Authored by Avi Kivity
      target_phys_addr_t is unwieldy, violates the C standard (_t suffixes are
      reserved), and its purpose doesn't match the name (most target_phys_addr_t
      addresses are not target-specific). Replace it with a finger-friendly,
      standards-conformant hwaddr.
      
      Outstanding patchsets can be fixed up with the command
      
        git rebase -i --exec "find . -name '*.[ch]' \
                              | xargs sed -i 's/target_phys_addr_t/hwaddr/g'" origin
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
  10. 22 Oct 2012, 1 commit
    • memory: per-AddressSpace dispatch · ac1970fb
      Authored by Avi Kivity
      Currently we use a global radix tree to dispatch memory access.  This only
      works with a single address space; to support multiple address spaces we
      make the radix tree a member of AddressSpace (via an intermediate structure
      AddressSpaceDispatch to avoid exposing too many internals).
      
      A side effect is that address_space_io also gains a dispatch table.  When
      we remove all the pre-memory-API I/O registrations, we can use that for
      dispatching I/O and get rid of the original I/O dispatch.
      Signed-off-by: Avi Kivity <avi@redhat.com>
  11. 01 May 2012, 3 commits