1. 13 Mar 2018, 1 commit
  2. 12 Mar 2018, 1 commit
    • cpu-exec: fix exception_index handling · 5f3bdfd4
      Committed by Pavel Dovgalyuk
      Function cpu_handle_interrupt calls cc->cpu_exec_interrupt to process
      pending hardware interrupts. Under the hood cpu_exec_interrupt uses
      cpu->exception_index to pass information to the internal function which
      is usually common for exception and interrupt processing.
      But this value is not reset after the call returns, so it could be
      processed again by cpu_handle_exception. In practice that does not
      happen, because cpu_handle_interrupt overwrites the exception_index
      at the end; however, that overwrite can also clobber a valid
      exception_index in some cases. Therefore this patch (see the sketch
      below):
       1. resets exception_index just after the call to cpu_exec_interrupt
       2. prevents overwriting the meaningful value of exception_index
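      A minimal sketch of the two changes in cpu_handle_interrupt()
      (accel/tcg/cpu-exec.c); guard and variable details follow the commit
      text above, not necessarily the exact patch:

          if (cc->cpu_exec_interrupt(cpu, interrupt_request)) {
              replay_interrupt();
              /* (1) reset exception_index right after the hook returns, so
               * a stale value is not seen by cpu_handle_exception() on the
               * next iteration */
              cpu->exception_index = -1;
              *last_tb = NULL;
          }
          /* ... */
          if (unlikely(atomic_read(&cpu->exit_request))) {
              atomic_set(&cpu->exit_request, 0);
              /* (2) only overwrite exception_index when it holds no
               * meaningful value */
              if (cpu->exception_index == -1) {
                  cpu->exception_index = EXCP_INTERRUPT;
              }
              return true;
          }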
      Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20180227095140.1060.61357.stgit@pasha-VirtualBox>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  3. 09 Feb 2018, 1 commit
  4. 08 Feb 2018, 6 commits
  5. 07 Feb 2018, 1 commit
  6. 06 Feb 2018, 1 commit
  7. 05 Feb 2018, 1 commit
  8. 25 Jan 2018, 2 commits
  9. 23 Jan 2018, 2 commits
    • page_unprotect(): handle calls to pages that are PAGE_WRITE · 9c4bbee9
      Committed by Peter Maydell
      If multiple guest threads in user-mode emulation write to a
      page which QEMU has marked read-only because of cached TCG
      translations, the threads can race in page_unprotect:
      
       * threads A & B both try to do a write to a page with code in it at
         the same time (i.e. one which we've made non-writeable, so SEGV)
       * they race into the signal handler with this faulting address
       * thread A happens to get to page_unprotect() first and takes the
         mmap lock, so thread B sits waiting for it to be done
       * A then finds the page, marks it PAGE_WRITE and mprotect()s it writable
       * A can then continue OK (returns from signal handler to retry the
         memory access)
       * ...but when B gets the mmap lock it finds that the page is already
         PAGE_WRITE, and so it exits page_unprotect() via the "not due to
         protected translation" code path, and wrongly delivers the signal
         to the guest rather than just retrying the access
      
      In particular, this meant that trying to run 'javac' in user-mode
      emulation would fail with a spurious guest SIGSEGV.
      
      Handle this by making page_unprotect() assume that a call for a page
      which is already PAGE_WRITE is due to a race of this sort and return
      a "fault handled" indication.
      
      Since this would cause an infinite loop if we ever called
      page_unprotect() for some other kind of fault than "write failed due
      to bad access permissions", tighten the condition in
      handle_cpu_signal() to check the signal number and si_code, and add a
      comment so that if somebody does ever find themselves debugging an
      infinite loop of faults they have some clue about why.
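      
      A hedged sketch of the tightened condition in handle_cpu_signal()
      (accel/tcg/user-exec.c), with the surrounding logic abbreviated:

          /* Only call page_unprotect() for a write that failed due to bad
           * access permissions; anything else could loop forever, as
           * described above. */
          if (is_write && info->si_signo == SIGSEGV
              && info->si_code == SEGV_ACCERR && h2g_valid(address)) {
              switch (page_unprotect(h2g(address), pc)) {
              case 1:
                  return 1;   /* fault handled; retry the memory access */
              /* ... */
              }
          }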
      
      (The trick for identifying the correct setting for
      current_tb_invalidated for thread B (needed to handle the precise-SMC
      case) is due to Richard Henderson.  Paolo Bonzini suggested just
      relying on si_code rather than trying anything more complicated.)
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Message-Id: <1511879725-9576-3-git-send-email-peter.maydell@linaro.org>
      Signed-off-by: Laurent Vivier <laurent@vivier.eu>
    • linux-user: Propagate siginfo_t through to handle_cpu_signal() · a78b1299
      Committed by Peter Maydell
      Currently all the architecture/OS specific cpu_signal_handler()
      functions call handle_cpu_signal() without passing it the
      siginfo_t. We're going to want that so we can look at the si_code
      to determine whether this is a SEGV_ACCERR access violation or
      some other kind of fault, so change the functions to pass through
      the pointer to the siginfo_t rather than just the si_addr value.
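      
      A hedged sketch of the interface change; the per-host implementations
      differ in how they derive pc and is_write from the ucontext:

          static inline int handle_cpu_signal(uintptr_t pc, siginfo_t *info,
                                              int is_write, sigset_t *old_set);

          int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
          {
              siginfo_t *info = pinfo;
              ucontext_t *uc = puc;
              uintptr_t pc;
              int is_write;

              /* ... host-specific: extract pc and is_write from *uc ... */
              /* previously this passed (unsigned long)info->si_addr */
              return handle_cpu_signal(pc, info, is_write, &uc->uc_sigmask);
          }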
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-Id: <1511879725-9576-2-git-send-email-peter.maydell@linaro.org>
      Signed-off-by: Laurent Vivier <laurent@vivier.eu>
  10. 19 Jan 2018, 1 commit
  11. 30 Dec 2017, 1 commit
  12. 22 Dec 2017, 1 commit
    • i386: hvf: add code base from Google's QEMU repository · c97d6d2c
      Committed by Sergio Andres Gomez Del Real
      This file begins tracking the files that will be the code base for HVF
      support in QEMU. This code base is part of Google's QEMU version of
      their Android emulator, and can be found at
      https://android.googlesource.com/platform/external/qemu/+/emu-master-dev
      
      This code is based on Veertu Inc's vdhh (Veertu Desktop Hosted
      Hypervisor), found at https://github.com/veertuinc/vdhh. Everything is
      appropriately licensed under GPL v2-or-later, except for the code inside
      x86_task.c and x86_task.h, which, deriving from KVM (the Linux kernel),
      is licensed GPL v2-only.
      
      This code base already implements a great deal of functionality,
      although Google's version removed Veertu's support for the APIC page
      and Hyper-V-related features. According to the Android Emulator
      Release Notes, Revision 26.1.3 (August 2017), "Hypervisor.framework
      is now enabled by default on macOS for 32-bit x86 images to improve
      performance and macOS compatibility", although we had better use it
      with caution since, as the same revision warns, "If you experience
      issues with it specifically, please file a bug report...". The code
      hasn't seen much activity in the last 5 months, so I think we can
      develop it further, checking Google's repository occasionally for
      any updates.
      
      On top of Google's code, the following changes were made:
      
      - add code to the configure script to support the --enable-hvf argument.
      If the OS is Darwin, it checks for the presence of HVF in the system.
      The patch also adds strings related to HVF in the file qemu-options.hx.
      QEMU will only support the modern syntax style '-M accel=hvf' to enable
      hvf; the legacy '-enable-hvf' will not be supported.
      
      - fix styling issues
      
      - add glue code to cpus.c
      
      - move the HVFX86EmulatorState field to CPUX86State, changing the
      emulation functions to take a 'CPUX86State *' parameter instead of
      'CPUState *' so we don't have to fetch the 'env'.
      Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com>
      Message-Id: <20170913090522.4022-2-Sergio.G.DelReal@gmail.com>
      Message-Id: <20170913090522.4022-3-Sergio.G.DelReal@gmail.com>
      Message-Id: <20170913090522.4022-5-Sergio.G.DelReal@gmail.com>
      Message-Id: <20170913090522.4022-6-Sergio.G.DelReal@gmail.com>
      Message-Id: <20170905035457.3753-7-Sergio.G.DelReal@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  13. 21 Dec 2017, 1 commit
    • cpu-exec: fix missed CPU kick during interrupt injection · d84be02d
      Committed by David Hildenbrand
      The conditional memory barrier not only looks strange but actually is
      wrong.
      
      On s390x, I can occasionally reproduce a case where an interrupt
      raised via cpu_interrupt() does not properly kick the CPU out of
      emulation. cpu_interrupt() is used especially for inter-CPU
      communication via SIGP (esp. external calls and emergency interrupts).
      
      With this patch, I was no longer able to reproduce the problem (in
      particular, no stalls or hangs in the guest).
      
      My setup is s390x MTTCG with 16 VCPUs on an 8-CPU host, running make -j16.
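      
      A hedged reconstruction, from the commit text alone, of what replacing
      the conditional barrier looks like in cpu_handle_interrupt():

          /* Clear the kick flag unconditionally, with a full barrier,
           * *before* reading interrupt_request: either we see the new
           * interrupt_request, or the kicking CPU sees the flag we cleared
           * and kicks again. */
          atomic_mb_set(&cpu->icount_decr.u16.high, 0);
          if (unlikely(atomic_read(&cpu->interrupt_request))) {
              /* ... take the iothread lock and inject the interrupt ... */
          }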
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20171129191319.11483-1-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  14. 18 Dec 2017, 3 commits
  15. 23 Nov 2017, 1 commit
  16. 21 Nov 2017, 1 commit
    • accel/tcg: Handle atomic accesses to notdirty memory correctly · 34d49937
      Committed by Peter Maydell
      To do a write to memory that is marked as notdirty, we need
      to invalidate any TBs we have cached for that memory, and
      update the cpu physical memory dirty flags for VGA and migration.
      The slowpath code in notdirty_mem_write() does all this correctly,
      but the new atomic handling code in atomic_mmu_lookup() doesn't
      do any of it; it just clears the dirty bit in the TLB.
      
      The effect of this bug is that if the first write to a notdirty
      page for which we have cached TBs is by a guest atomic access,
      we fail to invalidate the TBs and subsequently will execute
      incorrect code. This can be seen by trying to run 'javac' on AArch64.
      
      Use the new notdirty_call_before() and notdirty_call_after()
      functions to correctly handle the update to notdirty memory
      in the atomic codepath.
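      
      A hedged sketch of the atomic-path fix, using the helper names from
      the commit text above (the merged names and signatures may differ):

          /* in atomic_mmu_lookup(): the TLB says the page is notdirty, so
           * do the full bookkeeping instead of only clearing the TLB
           * dirty bit */
          if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
              notdirty_call_before(cpu, ram_addr, size); /* invalidate TBs */
              /* ... the atomic read-modify-write is performed ... */
              notdirty_call_after(cpu, ram_addr, size);  /* dirty flags for
                                                            VGA/migration */
          }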
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-id: 1511201308-23580-3-git-send-email-peter.maydell@linaro.org
  17. 20 Nov 2017, 1 commit
  18. 15 Nov 2017, 1 commit
  19. 14 Nov 2017, 2 commits
  20. 13 Nov 2017, 1 commit
  21. 03 Nov 2017, 1 commit
  22. 25 Oct 2017, 9 commits
    • translate-all: exit from tb_phys_invalidate if qht_remove fails · cc689485
      Committed by Emilio G. Cota
      Two or more threads might race while invalidating the same TB. We currently
      do not check for this at all despite taking tb_lock, which means we would
      wrongly invalidate the same TB more than once. This bug has actually been
      hit by users: I recently saw a report on IRC, although I have yet to see
      the corresponding test case.
      
      Fix this by using qht_remove as the synchronization point; if it fails,
      that means the TB has already been invalidated, and therefore there
      is nothing left to do in tb_phys_invalidate.
      
      Note that this solution works now that we still have tb_lock, and will
      continue working once we remove tb_lock.
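      
      A hedged sketch of the synchronization point, with the hash
      computation abbreviated:

          void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
          {
              uint32_t h = tb_hash_func(/* phys_pc, tb->pc, tb->flags, ... */);

              if (!qht_remove(&tb_ctx.htable, tb, h)) {
                  /* another thread already invalidated this TB */
                  return;
              }
              /* ... unlink from page lists, reset jumps, continue ... */
          }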
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Message-Id: <1508445114-4717-1-git-send-email-cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • tcg: enable multiple TCG contexts in softmmu · 3468b59e
      Committed by Emilio G. Cota
      This enables parallel TCG code generation. However, we do not take
      advantage of it yet since tb_lock is still held during tb_gen_code.
      
      In user-mode we use a single TCG context; see the documentation
      added to tcg_region_init for the rationale.
      
      Note that targets do not need any conversion: targets initialize a
      TCGContext (e.g. defining TCG globals), and after this initialization
      has finished, the context is cloned by the vCPU threads, each of
      them keeping a separate copy.
      
      TCG threads claim one entry in tcg_ctxs[] by atomically increasing
      n_tcg_ctxs. Do not be too annoyed by the subsequent atomic_read's
      of that variable and tcg_ctxs; they are there just to play nice with
      analysis tools such as thread sanitizer.
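      
      A hedged sketch of how a softmmu vCPU thread claims its context; the
      real tcg_register_thread() in tcg/tcg.c also checks for
      over-allocation and attaches a code region:

          void tcg_register_thread(void)
          {
              TCGContext *s = g_malloc(sizeof(*s));
              unsigned int n;

              *s = tcg_init_ctx;                  /* clone the initialized
                                                     context */
              n = atomic_fetch_inc(&n_tcg_ctxs);  /* claim a slot ... */
              atomic_set(&tcg_ctxs[n], s);        /* ... and publish it */
              tcg_ctx = s;                        /* per-thread pointer */
          }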
      
      Note that we do not allocate an array of contexts (we allocate
      an array of pointers instead) because when tcg_context_init
      is called, we do not know yet how many contexts we'll use since
      the bool behind qemu_tcg_mttcg_enabled() isn't set yet.
      
      Previous patches folded some TCG globals into TCGContext. The non-const
      globals remaining are only set at init time, i.e. before the TCG
      threads are spawned. Here is a list of these set-at-init-time globals
      under tcg/:
      
      Only written by tcg_context_init:
      - indirect_reg_alloc_order
      - tcg_op_defs
      Only written by tcg_target_init (called from tcg_context_init):
      - tcg_target_available_regs
      - tcg_target_call_clobber_regs
      - arm: arm_arch, use_idiv_instructions
      - i386: have_cmov, have_bmi1, have_bmi2, have_lzcnt,
              have_movbe, have_popcnt
      - mips: use_movnz_instructions, use_mips32_instructions,
              use_mips32r2_instructions, got_sigill (tcg_target_detect_isa)
      - ppc: have_isa_2_06, have_isa_3_00, tb_ret_addr
      - s390: tb_ret_addr, s390_facilities
      - sparc: qemu_ld_trampoline, qemu_st_trampoline (build_trampolines),
               use_vis3_instructions
      
      Only written by tcg_prologue_init:
      - 'struct jit_code_entry one_entry'
      - aarch64: tb_ret_addr
      - arm: tb_ret_addr
      - i386: tb_ret_addr, guest_base_flags
      - ia64: tb_ret_addr
      - mips: tb_ret_addr, bswap32_addr, bswap32u_addr, bswap64_addr
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • tcg: introduce regions to split code_gen_buffer · e8feb96f
      Committed by Emilio G. Cota
      This is groundwork for supporting multiple TCG contexts.
      
      The naive solution here is to split code_gen_buffer statically
      among the TCG threads; this however results in poor utilization
      if translation needs are different across TCG threads.
      
      What we do here is to add an extra layer of indirection, assigning
      regions that act just like pages do in virtual memory allocation.
      (BTW if you are wondering about the chosen naming, I did not want
      to use blocks or pages because those are already heavily used in QEMU).
      
      We use a global lock to serialize allocations as well as statistics
      reporting (we now export the size of the used code_gen_buffer with
      tcg_code_size()). Note that for the allocator we could just use
      a counter and atomic_inc; however, that would complicate the gathering
      of tcg_code_size()-like stats. So given that the region operations are
      not a fast path, a lock seems the most reasonable choice.
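      
      A hedged sketch of allocation under the region lock; field names
      follow the description above, and the real code in tcg/tcg.c also
      tracks statistics:

          static bool tcg_region_alloc(TCGContext *s)
          {
              bool err = false;

              qemu_mutex_lock(&region.lock);
              if (region.current == region.n) {
                  err = true;      /* buffer exhausted: caller flushes TBs */
              } else {
                  tcg_region_assign(s, region.current++);
              }
              qemu_mutex_unlock(&region.lock);
              return err;
          }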
      
      The effectiveness of this approach is clear after seeing some numbers.
      I used the bootup+shutdown of debian-arm with '-tb-size 80' as a benchmark.
      Note that I'm evaluating this after enabling per-thread TCG (which
      is done by a subsequent commit).
      
      * -smp 1, 1 region (entire buffer):
          qemu: flush code_size=83885014 nb_tbs=154739 avg_tb_size=357
          qemu: flush code_size=83884902 nb_tbs=153136 avg_tb_size=363
          qemu: flush code_size=83885014 nb_tbs=152777 avg_tb_size=364
          qemu: flush code_size=83884950 nb_tbs=150057 avg_tb_size=373
          qemu: flush code_size=83884998 nb_tbs=150234 avg_tb_size=373
          qemu: flush code_size=83885014 nb_tbs=154009 avg_tb_size=360
          qemu: flush code_size=83885014 nb_tbs=151007 avg_tb_size=370
          qemu: flush code_size=83885014 nb_tbs=151816 avg_tb_size=367
      
      That is, 8 flushes.
      
      * -smp 8, 32 regions (80/32 MB per region) [i.e. this patch]:
      
          qemu: flush code_size=76328008 nb_tbs=141040 avg_tb_size=356
          qemu: flush code_size=75366534 nb_tbs=138000 avg_tb_size=361
          qemu: flush code_size=76864546 nb_tbs=140653 avg_tb_size=361
          qemu: flush code_size=76309084 nb_tbs=135945 avg_tb_size=375
          qemu: flush code_size=74581856 nb_tbs=132909 avg_tb_size=375
          qemu: flush code_size=73927256 nb_tbs=135616 avg_tb_size=360
          qemu: flush code_size=78629426 nb_tbs=142896 avg_tb_size=365
          qemu: flush code_size=76667052 nb_tbs=138508 avg_tb_size=368
      
      Again, 8 flushes. Note how buffer utilization is not 100%, but it
      is close. Smaller region sizes would yield higher utilization,
      but we want region allocation to be rare (it acquires a lock), so
      we do not want to go too small.
      
      * -smp 8, static partitioning of 8 regions (10 MB per region):
          qemu: flush code_size=21936504 nb_tbs=40570 avg_tb_size=354
          qemu: flush code_size=11472174 nb_tbs=20633 avg_tb_size=370
          qemu: flush code_size=11603976 nb_tbs=21059 avg_tb_size=365
          qemu: flush code_size=23254872 nb_tbs=41243 avg_tb_size=377
          qemu: flush code_size=28289496 nb_tbs=52057 avg_tb_size=358
          qemu: flush code_size=43605160 nb_tbs=78896 avg_tb_size=367
          qemu: flush code_size=45166552 nb_tbs=82158 avg_tb_size=364
          qemu: flush code_size=63289640 nb_tbs=116494 avg_tb_size=358
          qemu: flush code_size=51389960 nb_tbs=93937 avg_tb_size=362
          qemu: flush code_size=59665928 nb_tbs=107063 avg_tb_size=372
          qemu: flush code_size=38380824 nb_tbs=68597 avg_tb_size=374
          qemu: flush code_size=44884568 nb_tbs=79901 avg_tb_size=376
          qemu: flush code_size=50782632 nb_tbs=90681 avg_tb_size=374
          qemu: flush code_size=39848888 nb_tbs=71433 avg_tb_size=372
          qemu: flush code_size=64708840 nb_tbs=119052 avg_tb_size=359
          qemu: flush code_size=49830008 nb_tbs=90992 avg_tb_size=362
          qemu: flush code_size=68372408 nb_tbs=123442 avg_tb_size=368
          qemu: flush code_size=33555560 nb_tbs=59514 avg_tb_size=378
          qemu: flush code_size=44748344 nb_tbs=80974 avg_tb_size=367
          qemu: flush code_size=37104248 nb_tbs=67609 avg_tb_size=364
      
      That is, 20 flushes. Note how a static partitioning approach uses
      the code buffer poorly, leading to many unnecessary flushes.
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • translate-all: use qemu_protect_rwx/none helpers · f51f315a
      Committed by Emilio G. Cota
      The helpers require the address and size to be page-aligned, so
      do that before calling them.
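      
      A hedged sketch of the alignment; the wrapper name is illustrative
      and the helper names are as in the commit title:

          static void protect_code_gen(void *buf, size_t size, bool rwx)
          {
              uintptr_t start = QEMU_ALIGN_DOWN((uintptr_t)buf,
                                                qemu_real_host_page_size);
              uintptr_t end = QEMU_ALIGN_UP((uintptr_t)buf + size,
                                            qemu_real_host_page_size);

              if (rwx) {
                  qemu_protect_rwx((void *)start, end - start);
              } else {
                  qemu_protect_none((void *)start, end - start);
              }
          }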
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • tcg: distribute profiling counters across TCGContext's · c3fac113
      Committed by Emilio G. Cota
      This is groundwork for supporting multiple TCG contexts.
      
      To avoid scalability issues when profiling info is enabled, this patch
      makes the profiling info counters distributed via the following changes:
      
      1) Consolidate profile info into its own struct, TCGProfile, which
         TCGContext also includes. Note that tcg_table_op_count is brought
         into TCGProfile after dropping the tcg_ prefix.
      2) Iterate over the TCG contexts in the system to obtain the total counts.
      
      This change also requires updating the accessors to TCGProfile fields to
      use atomic_read/set whenever there may be conflicting accesses (as defined
      in C11) to them.
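      
      A hedged sketch of totalling one distributed counter (the function
      name is illustrative; the TCGProfile field access mirrors change 2):

          static int64_t tcg_profile_tb_count(void)
          {
              unsigned int n_ctxs = atomic_read(&n_tcg_ctxs);
              int64_t total = 0;
              unsigned int i;

              for (i = 0; i < n_ctxs; i++) {
                  const TCGContext *s = atomic_read(&tcg_ctxs[i]);
                  /* atomic_read: the owning thread may write concurrently */
                  total += atomic_read(&s->prof.tb_count);
              }
              return total;
          }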
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • tcg: define tcg_init_ctx and make tcg_ctx a pointer · b1311c4a
      Committed by Emilio G. Cota
      Groundwork for supporting multiple TCG contexts.
      
      The core of this patch is this change to tcg/tcg.h:
      
      > -extern TCGContext tcg_ctx;
      > +extern TCGContext tcg_init_ctx;
      > +extern TCGContext *tcg_ctx;
      
      Note that for now we set *tcg_ctx to whatever TCGContext is passed
      to tcg_context_init -- in this case &tcg_init_ctx.
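      
      Hedging on the details, the initialization then ends up as:

          void tcg_context_init(TCGContext *s)
          {
              /* ... existing one-time initialization of *s ... */
              tcg_ctx = s;   /* for now s == &tcg_init_ctx; vCPU threads
                                will later point tcg_ctx at their clone */
          }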
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • tcg: take tb_ctx out of TCGContext · 44ded3d0
      Committed by Emilio G. Cota
      Groundwork for supporting multiple TCG contexts.
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • translate-all: report correct avg host TB size · f19c6cc6
      Committed by Emilio G. Cota
      Since commit 6e3b2bfd ("tcg: allocate TB structs before the
      corresponding translated code") we are not fully utilizing
      code_gen_buffer for translated code, and therefore are
      incorrectly reporting the amount of translated code as well as
      the average host TB size. Address this by:
      
      - Making the conscious choice of misreporting the total translated code;
        doing otherwise would mislead users into thinking "-tb-size" is not
        honoured.
      
      - Expanding tb_tree_stats to accurately count the bytes of translated code on
        the host, and using this for reporting the average tb host size,
        as well as the expansion ratio.
      
      In the future we might want to consider reporting the accurate numbers for
      the total translated code, together with a "bookkeeping/overhead" field to
      account for the TB structs.
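      
      A hedged sketch of the per-TB accumulation behind the expanded
      tb_tree_stats (struct and iterator names illustrative):

          struct tb_tree_stats {
              size_t host_size;    /* bytes of translated host code */
              size_t target_size;  /* bytes of guest code covered */
              size_t nb_tbs;
          };

          static gboolean tb_tree_stats_iter(gpointer key, gpointer value,
                                             gpointer data)
          {
              const TranslationBlock *tb = value;
              struct tb_tree_stats *tst = data;

              tst->nb_tbs++;
              tst->host_size += tb->tc.size;  /* accurate host TB size */
              tst->target_size += tb->size;
              return false;                   /* keep iterating */
          }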
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • exec-all: rename tb_free to tb_remove · be1e0117
      Committed by Emilio G. Cota
      We don't really free anything in this function anymore; we just remove
      the TB from the binary search tree.
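      
      A hedged sketch of the renamed function's whole job now (modulo
      where tb_ctx lived at this point in the series):

          void tb_remove(TranslationBlock *tb)
          {
              assert_tb_locked();
              /* only unlink from the binary search tree; the translated
               * code is reclaimed wholesale at flush time */
              g_tree_remove(tb_ctx.tb_tree, &tb->tc);
          }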
      Suggested-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Emilio G. Cota <cota@braap.org>
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>