  1. 04 June 2017, 1 commit
  2. 26 May 2017, 2 commits
  3. 19 May 2017, 2 commits
  4. 18 May 2017, 2 commits
    • ramblock: add new hmp command "info ramblock" · be9b23c4
      Committed by Peter Xu
      Add a command to dump information about ramblocks. The output looks like:
      
      (qemu) info ramblock
                    Block Name    PSize              Offset               Used              Total
                  /objects/mem    2 MiB  0x0000000000000000 0x0000000080000000 0x0000000080000000
                      vga.vram    4 KiB  0x0000000080060000 0x0000000001000000 0x0000000001000000
          /rom@etc/acpi/tables    4 KiB  0x00000000810b0000 0x0000000000020000 0x0000000000200000
                       pc.bios    4 KiB  0x0000000080000000 0x0000000000040000 0x0000000000040000
        0000:00:03.0/e1000.rom    4 KiB  0x0000000081070000 0x0000000000040000 0x0000000000040000
                        pc.rom    4 KiB  0x0000000080040000 0x0000000000020000 0x0000000000020000
          0000:00:02.0/vga.rom    4 KiB  0x0000000081060000 0x0000000000010000 0x0000000000010000
         /rom@etc/table-loader    4 KiB  0x00000000812b0000 0x0000000000001000 0x0000000000001000
            /rom@etc/acpi/rsdp    4 KiB  0x00000000812b1000 0x0000000000001000 0x0000000000001000
      
      Ramblocks are internal to the QEMU implementation, and this command is
      mostly useful to QEMU developers working on RAM handling. It is not a
      command suitable for the QMP interface, so only an HMP interface is
      provided for it.
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <1494562661-9063-4-git-send-email-peterx@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
    • ramblock: add RAMBLOCK_FOREACH() · 99e15582
      Committed by Peter Xu
      So that the ramblock iterators can be simplified; a sketch of such a
      macro follows this entry.
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Peter Xu <peterx@redhat.com>
      Message-Id: <1494562661-9063-2-git-send-email-peterx@redhat.com>
      Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
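      For context, a minimal sketch of what such an iteration macro and a
      caller might look like; the QLIST-based, RCU-protected ram_list layout
      shown here is an assumption about QEMU internals, not a quote of the
      patch:

          /* Sketch only: iterate the global ram_list under RCU. */
          #define RAMBLOCK_FOREACH(block) \
              QLIST_FOREACH_RCU(block, &ram_list.blocks, next)

          /* Hypothetical caller: total up the size of every ramblock. */
          static uint64_t total_ram_size(void)
          {
              RAMBlock *block;
              uint64_t total = 0;

              rcu_read_lock();
              RAMBLOCK_FOREACH(block) {
                  total += block->max_length;
              }
              rcu_read_unlock();
              return total;
          }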
  5. 04 May 2017, 1 commit
  6. 24 April 2017, 1 commit
    • memory: add support for getting and using a dirty bitmap copy · 8deaf12c
      Committed by Gerd Hoffmann
      This patch adds support for getting and using a local copy of the dirty
      bitmap.
      
      memory_region_snapshot_and_clear_dirty() will create a snapshot of the
      dirty bitmap for the specified range, clear the dirty bitmap, and return
      the copy. The returned bitmap can be a bit larger than requested: the
      range is expanded so the code can copy unsigned longs from the bitmap
      and avoid atomic bit-update operations.
      
      memory_region_snapshot_get_dirty() will return the dirty status of
      pages, pretty much like memory_region_get_dirty(), but using the copy
      returned by memory_region_snapshot_and_clear_dirty(). A usage sketch
      follows this entry.
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
      Message-id: 20170421091632.30900-3-kraxel@redhat.com
      Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
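      A hedged sketch of how a display device might consume this API. The
      prototypes match the commit's description, but the DIRTY_MEMORY_VGA
      client, fb_size, and the surrounding loop are illustrative assumptions:

          /* Snapshot and clear the VGA dirty log for a framebuffer range,
           * then test pages against the stable snapshot while the live
           * bitmap keeps accumulating new writes. */
          DirtyBitmapSnapshot *snap;
          hwaddr page;

          snap = memory_region_snapshot_and_clear_dirty(mr, 0, fb_size,
                                                        DIRTY_MEMORY_VGA);
          for (page = 0; page < fb_size; page += TARGET_PAGE_SIZE) {
              if (memory_region_snapshot_get_dirty(mr, snap, page,
                                                   TARGET_PAGE_SIZE)) {
                  /* redraw the scanlines backed by this page */
              }
          }
          g_free(snap);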
  7. 21 April 2017, 10 commits
  8. 03 April 2017, 1 commit
    • exec: revert MemoryRegionCache · 90c4fe5f
      Committed by Paolo Bonzini
      MemoryRegionCache did not know about virtio support for IOMMUs (because
      the two features were developed at the same time). Revert
      MemoryRegionCache to "normal" address_space_* operations for 2.9, as it
      is simpler than undoing the virtio patches. Both access styles are
      sketched after this entry.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
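      For contrast, a hedged sketch of the cached access pattern being
      reverted versus the plain address_space_* call it falls back to; the
      variables (as, desc_pa, desc_len, flags_off) are hypothetical, and the
      exact signatures are assumptions from QEMU of that era:

          MemoryRegionCache cache;
          uint16_t flags;

          /* cached: translate the range once, then do cheap repeated reads */
          address_space_cache_init(&cache, as, desc_pa, desc_len, false);
          address_space_read_cached(&cache, flags_off, &flags, sizeof(flags));
          address_space_cache_destroy(&cache);

          /* "normal": a full translation on every access */
          address_space_read(as, desc_pa + flags_off, MEMTXATTRS_UNSPECIFIED,
                             (uint8_t *)&flags, sizeof(flags));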
  9. 23 March 2017, 1 commit
  10. 16 March 2017, 2 commits
  11. 14 March 2017, 1 commit
  12. 03 March 2017, 2 commits
  13. 28 February 2017, 2 commits
  14. 24 February 2017, 6 commits
    • cputlb: introduce tlb_flush_*_all_cpus[_synced] · c3b9a07a
      Committed by Alex Bennée
      This introduces support in the cputlb API for flushing the TLBs of all
      CPUs with one call, which avoids the need for target helpers to iterate
      through the vCPUs themselves.
      
      An additional variant of the API (_synced) causes the source vCPU's work
      to be scheduled as "safe work". The result is that all flush operations
      are complete by the time the originating vCPU executes its safe work.
      The calling implementation can either end the TB straight away (which
      will then pick up cpu->exit_request on entering the next block) or defer
      the exit until the architectural sync point (usually a barrier
      instruction). The shape of the new calls is sketched after this entry.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
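      For orientation, a sketch of the API shape the commit describes; the
      prototypes below are assumptions inferred from the title and text, not
      quoted from the patch:

          /* Flush every vCPU's TLB, issued from src_cpu's context. The
           * _synced forms additionally schedule the source's continuation
           * as "safe work" so all flushes complete before it runs. */
          void tlb_flush_all_cpus(CPUState *src_cpu);
          void tlb_flush_all_cpus_synced(CPUState *src_cpu);
          void tlb_flush_page_all_cpus(CPUState *src_cpu, target_ulong addr);
          void tlb_flush_page_all_cpus_synced(CPUState *src_cpu,
                                              target_ulong addr);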
    • cputlb: atomically update tlb fields used by tlb_reset_dirty · b0706b71
      Committed by Alex Bennée
      The main use case for tlb_reset_dirty is to set the TLB_NOTDIRTY flag
      in TLB entries to force the slow path on writes. This is used to mark
      page ranges containing code which has been translated so it can be
      invalidated if written to. To do this safely we need to ensure the TLB
      entries in question for all vCPUs are updated before we attempt to run
      the code; otherwise a race could be introduced.
      
      To achieve this we atomically set the flag in tlb_reset_dirty_range and
      take care when setting it as the TLB entry is filled (a minimal sketch
      of the atomic update follows this entry).
      
      On 32-bit systems attempting to emulate 64-bit guests we don't even
      bother, as we might not have the atomic primitives available. MTTCG is
      disabled in this case and can't be forced on. The copy_tlb_helper
      function helps keep the atomic semantics in one place to avoid
      confusion.
      
      The dirty helper function is made static as it isn't used outside of
      cputlb.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
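      A minimal, self-contained sketch of the kind of atomic flag update
      described above; the flag value and the bare _Atomic field stand in for
      QEMU's actual definitions and are assumptions, not the patch's code:

          #include <stdatomic.h>
          #include <stdint.h>
          
          #define TLB_NOTDIRTY (1u << 4)  /* illustrative bit, not QEMU's */
          
          /* Atomically OR TLB_NOTDIRTY into an entry's addr_write word so a
           * concurrently reading vCPU never observes a torn update. */
          static void tlb_entry_set_notdirty(_Atomic uintptr_t *addr_write)
          {
              atomic_fetch_or(addr_write, TLB_NOTDIRTY);
          }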
    • cputlb and arm/sparc targets: convert mmuidx flushes from varg to bitmap · 0336cbf8
      Committed by Alex Bennée
      While the varargs approach was flexible, the original MTTCG work ended
      up having to munge the bits into a bitmap so the data could be used in
      deferred work helpers. Instead of hiding that in cputlb, we push the
      change into the API and make it take a bitmap of MMU indexes instead.
      
      For ARM, some of the resulting flushes end up being quite long, so to
      aid readability I've tended to move the index shifting to a new line so
      all the bits being OR-ed together line up nicely, for example:
      
          tlb_flush_page_by_mmuidx(other_cs, pageaddr,
                                   (1 << ARMMMUIdx_S1SE1) |
                                   (1 << ARMMMUIdx_S1SE0));
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      [AT: SPARC parts only]
      Reviewed-by: Artyom Tarasenko <atar4qemu@gmail.com>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      [PM: ARM parts only]
      Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
    • cputlb: introduce tlb_flush_* async work. · e3b9ca81
      Committed by KONRAD Frederic
      Some architectures allow one vCPU to flush the TLBs of other vCPUs.
      This is not a problem when there is only one thread for all vCPUs, but
      it definitely needs to be asynchronous work when we are truly
      multithreaded. A sketch of the deferral pattern follows this entry.
      
      We take the tb_lock() when doing this to avoid racing with other threads
      which may be invalidating TBs at the same time. The alternative would be
      to use proper atomic primitives to clear the TLB entries en masse.
      
      This patch doesn't do anything to protect other cputlb functions that
      are called in MTTCG mode and make cross-vCPU changes.
      Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
      [AJB: remove need for g_malloc on defer, make check fixes, tb_lock]
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
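      A hedged sketch of the deferral pattern described above, using QEMU's
      run_on_cpu machinery; the tlb_flush_other() helper is a hypothetical
      illustration, not code from the patch:

          /* Run the flush on the target vCPU's own thread rather than
           * touching its TLB from another thread. */
          static void tlb_flush_async_work(CPUState *cpu, run_on_cpu_data data)
          {
              tlb_flush(cpu);
          }
          
          /* Hypothetical helper: flush @target's TLB safely. */
          static void tlb_flush_other(CPUState *target)
          {
              if (qemu_cpu_is_self(target)) {
                  tlb_flush(target);  /* already on the right thread */
              } else {
                  async_run_on_cpu(target, tlb_flush_async_work,
                                   RUN_ON_CPU_NULL);
              }
          }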
    • tcg: remove global exit_request · e5143e30
      Committed by Alex Bennée
      There are now only two uses of the global exit_request left.
      
      The first ensures we exit the run loop when we first start to process
      pending work and in the kick handler. This is just as easily done by
      setting the first_cpu->exit_request flag.
      
      The second use is in the round-robin kick routine. The global
      exit_request ensured every vCPU would set its local exit_request and
      cause a full exit of the loop. Now that the iothread isn't being held
      while running, we can just rely on the kick handler to push us out as
      intended.
      
      We lightly re-factor the main vCPU thread to ensure cpu->exit_request
      causes us to exit the main loop and process any IO requests that might
      come along. As a cpu->exit_request may legitimately get squashed while
      processing the EXCP_INTERRUPT exception, we also check
      cpu->queued_work_first to ensure queued work is expedited as soon as
      possible.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
    • tcg: rename tcg_current_cpu to tcg_current_rr_cpu · 791158d9
      Committed by Alex Bennée
      ...and make the definition local to cpus.c. In preparation for MTTCG
      the concept of a global tcg_current_cpu will no longer make sense.
      However, we still need to keep track of it in the single-threaded case
      to be able to exit quickly when required.
      
      qemu_cpu_kick_no_halt() moves and becomes qemu_cpu_kick_rr_cpu() to
      emphasise its use case. qemu_cpu_kick() now kicks the relevant cpu as
      well as calling qemu_cpu_kick_rr_cpu(), which will become a no-op in
      MTTCG.
      
      For the time being the setting of the global exit_request remains.
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
  15. 22 February 2017, 1 commit
  16. 18 February 2017, 1 commit
  17. 16 February 2017, 1 commit
  18. 01 February 2017, 1 commit
  19. 28 January 2017, 1 commit
  20. 17 January 2017, 1 commit