1. 17 Aug 2018 (1 commit)
  2. 15 Aug 2018 (6 commits)
  3. 17 Jul 2018 (2 commits)
  4. 16 Jul 2018 (1 commit)
  5. 09 Jul 2018 (1 commit)
  6. 02 Jul 2018 (7 commits)
  7. 29 Jun 2018 (3 commits)
  8. 27 Jun 2018 (3 commits)
    • compiler: add a sizeof_field() macro · f18793b0
      Authored by Stefan Hajnoczi
      Determining the size of a field is useful when you don't have a struct
      variable handy.  Open-coding this is ugly.
      
      This patch adds the sizeof_field() macro, which is similar to
      typeof_field().  Existing instances are updated to use the macro.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Message-id: 20180614164431.29305-1-stefanha@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      f18793b0
    • trace: enable tracing of TCG atomics · d071f4cd
      Authored by Emilio G. Cota
      We do not trace guest atomic accesses. Fix it.
      
      Tested with a modified atomic_add-bench so that it executes
      a deterministic number of instructions, i.e. fixed seeding,
      no threading and fixed number of loop iterations instead
      of running for a certain time.
      
      Before:
      - With parallel_cpus = false (no clone syscall so it is never set to true):
        220070 memory accesses
      - With parallel_cpus = true (hard-coded):
        212105 memory accesses <-- we're not tracing the atomics!
      
      After:
        220070 memory accesses regardless of parallel_cpus.
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-id: 1527028012-21888-6-git-send-email-cota@braap.org
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      d071f4cd
    • tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE · 55df6fcf
      Authored by Peter Maydell
      Add support for MMU protection regions that are smaller than
      TARGET_PAGE_SIZE. We do this by marking the TLB entry for those
      pages with a flag TLB_RECHECK. This flag causes us to always
      take the slow-path for accesses. In the slow path we can then
      special case them to always call tlb_fill() again, so we have
      the correct information for the exact address being accessed.
      
      This change allows us to handle reading and writing from small
      regions; execution from a small region is still not supported.
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 20180620130619.11362-2-peter.maydell@linaro.org
      55df6fcf
  9. 23 Jun 2018 (1 commit)
  10. 16 Jun 2018 (15 commits)