1. 27 Jul 2015, 6 commits
  2. 23 Jul 2015, 1 commit
  3. 22 Jul 2015, 12 commits
    • Merge remote-tracking branch 'remotes/elmarco/tags/for-upstream' into staging · 3edf6b3f
      Peter Maydell authored
      qxl: build fix for 2.4
      
      # gpg: Signature made Wed Jul 22 15:55:00 2015 BST using DSA key ID F43F0992
      # gpg: Good signature from "Marc-André Lureau <marcandre.lureau@redhat.com>"
      # gpg:                 aka "Marc-Andre Lureau <marcandre.lureau@gmail.com>"
      # gpg:                 aka "Marc-Andre Lureau <marc-andre.lureau@nokia.com>"
      # gpg:                 aka "Marc-André Lureau <marc-andre.lureau@nokia.com>"
      # gpg:                 aka "Marc-André Lureau (elmarco) <marcandre.lureau@gmail.com>"
      # gpg: WARNING: This key is not certified with a trusted signature!
      # gpg:          There is no indication that the signature belongs to the owner.
      # Primary key fingerprint: 7346 2483 9404 4E20 ABFF  7D48 D864 9487 F43F 0992
      
      * remotes/elmarco/tags/for-upstream:
        qxl: Fix new function name for spice-server library
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • qxl: Fix new function name for spice-server library · a52b2cbf
      Frediano Ziglio authored
      The new spice-server function to limit the number of monitors (0.12.6)
      changed during development from spice_qxl_set_monitors_config_limit to
      spice_qxl_max_monitors (the name accepted upstream).
      The earlier patch was posted with the former name by mistake;
      this patch fixes the function name.
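
      For reference, a hedged sketch of what the corrected call might look like
      in hw/display/qxl.c; the version guard and the surrounding variable names
      are assumptions, not the literal patch:

           /* Sketch: only spice-server >= 0.12.6 provides the renamed API;
            * the guard value and the call-site variables are illustrative. */
           #if SPICE_SERVER_VERSION >= 0x000c06
               spice_qxl_max_monitors(&qxl->ssd.qxl, qxl->max_outputs);
           #endif
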
      Signed-off-by: Frediano Ziglio <fziglio@redhat.com>
      Acked-by: Christophe Fergeau <cfergeau@redhat.com>
      Acked-by: Martin Kletzander <mkletzan@redhat.com>
      Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
    • Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging · dc94bd91
      Peter Maydell authored
      # gpg: Signature made Wed Jul 22 12:43:35 2015 BST using RSA key ID 81AB73C8
      # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
      # gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>"
      
      * remotes/stefanha/tags/block-pull-request:
        AioContext: optimize clearing the EventNotifier
        AioContext: fix broken placement of event_notifier_test_and_clear
        AioContext: fix broken ctx->dispatching optimization
        aio-win32: reorganize polling loop
        tests: remove irrelevant assertions from test-aio
        qemu-timer: initialize "timers_done_ev" to set
        mirror: Speed up bitmap initial scanning
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • AioContext: optimize clearing the EventNotifier · 05e514b1
      Paolo Bonzini authored
      It is pretty rare for aio_notify to actually set the EventNotifier.  It
      can happen with worker threads such as thread-pool.c's, but otherwise it
      should never be set thanks to the ctx->notify_me optimization.  The
      previous patch, unfortunately, added an unconditional call to
      event_notifier_test_and_clear; now add a userspace fast path that
      avoids the call.
      
      Note that it is not possible to do the same with event_notifier_set;
      it would break, as proved (again) by the included formal model.
      
      This patch survived over 3000 reboots on aarch64 KVM.
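
      As a hedged illustration of the fast path being described (the field
      names follow the upstream AioContext code, but treat this as a sketch
      rather than the literal patch):

           /* aio_notify(), sketched: only touch the EventNotifier when a
            * poller may be sleeping, and remember that we did so. */
           smp_mb();                      /* order bh->scheduled before the read */
           if (ctx->notify_me) {
               event_notifier_set(&ctx->notifier);
               atomic_mb_set(&ctx->notified, true);
           }

           /* aio_notify_accept(), sketched: a plain flag check avoids the
            * event_notifier_test_and_clear() syscall when nothing was set. */
           if (atomic_xchg(&ctx->notified, false)) {
               event_notifier_test_and_clear(&ctx->notifier);
           }
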
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-7-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • AioContext: fix broken placement of event_notifier_test_and_clear · 21a03d17
      Paolo Bonzini authored
      event_notifier_test_and_clear must be called before processing events.
      Otherwise, an aio_poll could "eat" the notification before the main
      I/O thread invokes ppoll().  The main I/O thread then never wakes up.
      This is an example of what could happen:
      
         i/o thread       vcpu thread                     worker thread
         ---------------------------------------------------------------------
         lock_iothread
         notify_me = 1
         ...
         unlock_iothread
                                                           bh->scheduled = 1
                                                           event_notifier_set
                          lock_iothread
                          notify_me = 3
                          ppoll
                          notify_me = 1
                          aio_dispatch
                           aio_bh_poll
                            thread_pool_completion_bh
                                                           bh->scheduled = 1
                                                           event_notifier_set
                           node->io_read(node->opaque)
                            event_notifier_test_and_clear
         ppoll
         *** hang ***
      
      "Tracing" with qemu_clock_get_ns shows pretty much the same behavior as
      in the previous bug, so there are no new tricks here---just stare more
      at the code until it is apparent.
      
      One could also use a formal model, of course.  The included one shows
      this with three processes: notifier corresponds to a QEMU thread pool
      worker, temporary_waiter to a VCPU thread that invokes aio_poll(),
      waiter to the main I/O thread.  I would be happy to say that the
      formal model found the bug for me, but actually I wrote it after the
      fact.
      
      This patch is a bit of a big hammer.  The next one optimizes it,
      with help (this time for real rather than a posteriori :)) from
      another, similar formal model.
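
      In concrete terms, the ordering the fix establishes looks roughly like
      this (a sketch of the blocking aio_poll() path; variable names are
      illustrative, not the literal diff):

           ret = qemu_poll_ns(pollfds, npfd, timeout);

           /* Clear the EventNotifier *before* dispatching, so that a
            * notification raised while we are dispatching (e.g. a worker
            * thread scheduling a bottom half) stays pending and still
            * wakes up the next ppoll()/poll() ... */
           event_notifier_test_and_clear(&ctx->notifier);

           /* ... and only then run bottom halves, fd handlers and timers. */
           progress |= aio_dispatch(ctx);
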
      Reported-by: Richard W. M. Jones <rjones@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-6-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • AioContext: fix broken ctx->dispatching optimization · eabc9779
      Paolo Bonzini authored
      This patch rewrites the ctx->dispatching optimization, which was the cause
      of some mysterious hangs that could be reproduced on aarch64 KVM only.
      The hangs were indirectly caused by aio_poll() and in particular by
      the flash memory updates' call to blk_write(), which invokes aio_poll().
      Fun stuff: they had an extremely short race window, so much so that
      adding all kinds of tracing to either the kernel or QEMU made it
      go away (a single printf made it half as reproducible).
      
      On the plus side, the failure mode (a hang until the next keypress)
      made it very easy to examine the state of the process with a debugger.
      And there was a very nice reproducer from Laszlo, which failed pretty
      often (more than half of the time) on any version of QEMU with a non-debug
      kernel; it also failed fast, while still in the firmware.  So, it could
      have been worse.
      
      For some unknown reason they happened only with virtio-scsi, but
      that's not important.  It's more interesting that they disappeared with
      io=native, making thread-pool.c a likely suspect for where the bug arose.
      thread-pool.c is also one of the few places which use bottom halves
      across threads, by the way.
      
      I hope that no other similar bugs exist, but just in case :) I am
      going to describe how the successful debugging went...  Since the
      likely culprit was the ctx->dispatching optimization, which mostly
      affects bottom halves, the first observation was that there are two
      qemu_bh_schedule() invocations in the thread pool: the one in the aio
      worker and the one in thread_pool_completion_bh.  The latter always
      causes the optimization to trigger, the former may or may not.  In
      order to restrict the possibilities, I introduced new functions
      qemu_bh_schedule_slow() and qemu_bh_schedule_fast():
      
           /* qemu_bh_schedule_slow: */
           ctx = bh->ctx;
           bh->idle = 0;
           if (atomic_xchg(&bh->scheduled, 1) == 0) {
               event_notifier_set(&ctx->notifier);
           }
      
           /* qemu_bh_schedule_fast: */
           ctx = bh->ctx;
           bh->idle = 0;
           assert(ctx->dispatching);
           atomic_xchg(&bh->scheduled, 1);
      
      Notice how the atomic_xchg is still in qemu_bh_schedule_slow().  This
      was already debated a few months ago, so I assumed it to be correct.
      In retrospect this was a very good idea, as you'll see later.
      
      Changing thread_pool_completion_bh() to qemu_bh_schedule_fast() didn't
      trigger the assertion (as expected).  Changing the worker's invocation
      to qemu_bh_schedule_slow() didn't hide the bug (another assumption
      which luckily held).  This already heavily limited the amount of
      interaction between the threads, hinting that the problematic events
      must have triggered around thread_pool_completion_bh().
      
      As mentioned earlier, invoking a debugger to examine the state of a
      hung process was pretty easy; the iothread was always waiting on a
      poll(..., -1) system call.  Infinite timeouts are much rarer on x86,
      and this could be the reason why the bug was never observed there.
      With the buggy sequence more or less resolved to an interaction between
      thread_pool_completion_bh() and poll(..., -1), my "tracing" strategy was
      to just add a few qemu_clock_get_ns(QEMU_CLOCK_REALTIME) calls, hoping
      that the ordering of aio_ctx_prepare(), aio_ctx_dispatch, poll() and
      qemu_bh_schedule_fast() would provide some hint.  The output was:
      
          (gdb) p last_prepare
          $3 = 103885451
          (gdb) p last_dispatch
          $4 = 103876492
          (gdb) p last_poll
          $5 = 115909333
          (gdb) p last_schedule
          $6 = 115925212
      
      Notice how the last call to qemu_poll_ns() came after aio_ctx_dispatch().
      This makes little sense unless there is an aio_poll() call involved,
      and indeed with a slightly different instrumentation you can see that
      there is one:
      
          (gdb) p last_prepare
          $3 = 107569679
          (gdb) p last_dispatch
          $4 = 107561600
          (gdb) p last_aio_poll
          $5 = 110671400
          (gdb) p last_schedule
          $6 = 110698917
      
      So the scenario becomes clearer:
      
         iothread                   VCPU thread
      --------------------------------------------------------------------------
         aio_ctx_prepare
         aio_ctx_check
         qemu_poll_ns(timeout=-1)
                                    aio_poll
                                      aio_dispatch
                                        thread_pool_completion_bh
                                          qemu_bh_schedule()
      
      At this point bh->scheduled = 1 and the iothread has not been woken up.
      The solution must be close, but this alone should not be a problem,
      because the bottom half is only rescheduled to account for rare situations
      (see commit 3c80ca15, thread-pool: avoid deadlock in nested aio_poll()
      calls, 2014-07-15).
      
      Introducing a third thread---a thread pool worker thread, which
      also does qemu_bh_schedule()---does bring out the problematic case.
      The third thread must be awakened *after* the callback is complete and
      thread_pool_completion_bh has redone the whole loop, explaining the
      short race window.  And then this is what happens:
      
                                                            thread pool worker
      --------------------------------------------------------------------------
                                                            <I/O completes>
                                                            qemu_bh_schedule()
      
      Tada, bh->scheduled is already 1, so qemu_bh_schedule() does nothing
      and the iothread is never woken up.  This is where the bh->scheduled
      optimization comes into play---it is correct, but removing it would
      have masked the bug.
      
      So, what is the bug?
      
      Well, the question asked by the ctx->dispatching optimization ("is any
      active aio_poll dispatching?") was wrong.  The right question to ask
      instead is "is any active aio_poll *not* dispatching", i.e. in the prepare
      or poll phases?  In that case, the aio_poll is sleeping or might go to
      sleep anytime soon, and the EventNotifier must be invoked to wake
      it up.
      
      In any other case (including if there is *no* active aio_poll at all!)
      we can just wait for the next prepare phase to pick up the event (e.g. a
      bottom half); the prepare phase will avoid the blocking and service the
      bottom half.
      
      Expressing the invariant with a logic formula, the broken one looked like:
      
         !(exists(thread): in_dispatching(thread)) => !optimize
      
      or equivalently:
      
         !(exists(thread):
                in_aio_poll(thread) && in_dispatching(thread)) => !optimize
      
      In the correct one, the negation is in a slightly different place:
      
         (exists(thread):
               in_aio_poll(thread) && !in_dispatching(thread)) => !optimize
      
      or equivalently:
      
         (exists(thread): in_prepare_or_poll(thread)) => !optimize
      
      Even if the difference boils down to moving an exclamation mark :)
      the implementation is quite different.  However, I think the new
      one is simpler to understand.
      
      In the old implementation, the "exists" was implemented with a boolean
      value.  This didn't really support the case of multiple concurrent
      event loops well, but I thought that was okay: aio_poll holds the
      AioContext lock, so there cannot be concurrent aio_poll invocations, and
      I was only considering nested event loops.  However, aio_poll _could_
      indeed run concurrently with the GSource.  This is why I came up with the
      wrong invariant.
      
      In the new implementation, "exists" is computed simply by counting how many
      threads are in the prepare or poll phases.  There are some interesting
      points to consider, but the gist of the idea remains:
      
      1) AioContext can be used through GSource as well; as mentioned in the
      patch, bit 0 of the counter is reserved for the GSource.
      
      2) the counter need not be updated for a non-blocking aio_poll, because
      it won't sleep forever anyway.  This is just a matter of checking
      the "blocking" variable (see the sketch after this list).  This requires
      some changes to the win32 implementation, but is otherwise not too complicated.
      
      3) as mentioned above, the new implementation will not call aio_notify
      when there is *no* active aio_poll at all.  The tests have to be
      adjusted for this change.  The calls to aio_notify in async.c are fine;
      they only want to kick aio_poll out of a blocking wait, but need not
      do anything if aio_poll is not running.
      
      4) nested aio_poll: these just work with the new implementation; when
      a nested event loop is invoked, the outer event loop is never in the
      prepare or poll phases.  The outer event loop thus has already decremented
      the counter.
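
      A hedged sketch of the counting scheme described in points 1) and 2)
      above: the blocking path of aio_poll() brackets the prepare/poll window
      with the counter.  Names follow the upstream code, but this is a sketch,
      not the patch itself:

           /* aio_poll(), blocking path, sketched; bit 0 of ctx->notify_me
            * is reserved for the GSource, so the counter moves by 2. */
           if (blocking) {
               atomic_add(&ctx->notify_me, 2);   /* entering prepare/poll */
           }
           ret = qemu_poll_ns(pollfds, npfd, timeout);
           if (blocking) {
               atomic_sub(&ctx->notify_me, 2);   /* no longer in prepare/poll */
           }
           /* aio_notify() then only calls event_notifier_set() when
            * ctx->notify_me is nonzero, i.e. someone may be about to sleep. */
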
      Reported-by: Richard W. M. Jones <rjones@redhat.com>
      Reported-by: Laszlo Ersek <lersek@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-5-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • aio-win32: reorganize polling loop · 6493c975
      Paolo Bonzini authored
      Preparatory bugfixes and tweaks to the loop before the next patch:
      
      - disable dispatch optimization during aio_prepare.  This fixes a bug.
      
      - do not modify "blocking" until after the first WaitForMultipleObjects
      call.  This is needed in the next patch.
      
      - change the loop to do...while.  This makes it obvious that the loop
      is always entered at least once.  In the next patch this is important
      because the first iteration undoes the ctx->notify_me increment that
      happened before entering the loop (see the sketch below).
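
      A heavily simplified sketch of the resulting loop shape; the event
      bookkeeping is elided and the variable names are illustrative:

           bool first = true;
           int ret;
           do {
               ret = WaitForMultipleObjects(count, events, FALSE,
                                            blocking ? timeout : 0);
               if (first) {
                   /* "blocking" is only cleared after the first wait; the
                    * next patch uses this first iteration to undo the
                    * ctx->notify_me increment taken before the loop. */
                   blocking = false;
                   first = false;
               }
               /* ... dispatch the signaled event and drop it from events[] */
           } while (count > 0 && ret != WAIT_TIMEOUT);
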
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-4-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • tests: remove irrelevant assertions from test-aio · 12d69ac0
      Paolo Bonzini authored
      In these tests, the purpose of the initial calls to aio_poll and
      g_main_context_iteration is simply to put the AioContext in a
      known state; the return values of those calls do not really
      matter.  The next patch will change those return values, so change
      the assertions to a while loop, which expresses the intention
      better.
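
      The change boils down to something of this shape (a sketch; the exact
      call sites differ from test to test):

           /* before (sketch): the return value itself was asserted */
           g_assert(!g_main_context_iteration(NULL, false));

           /* after: simply drain pending events until none are left,
            * without caring what each individual iteration returned */
           while (g_main_context_iteration(NULL, false)) {
               /* nothing */
           }
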
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-3-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • qemu-timer: initialize "timers_done_ev" to set · e4efd8a4
      Paolo Bonzini authored
      The normal state for the event is to be set.  If we do not do
      this, pause_all_vcpus (through qemu_clock_enable) hangs unless
      timerlist_run_timers has run at least once for the timer list.
      This can happen with the following patches, which make aio_notify do
      nothing most of the time.
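
      The fix itself is essentially a one-line initialization of this form
      (sketched from the description above):

           /* create the event already in the "set" state, so that a
            * qemu_event_wait(&timer_list->timers_done_ev) issued before
            * timerlist_run_timers() has ever run does not hang */
           qemu_event_init(&timer_list->timers_done_ev, true);
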
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Tested-by: Richard W.M. Jones <rjones@redhat.com>
      Message-id: 1437487673-23740-2-git-send-email-pbonzini@redhat.com
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • mirror: Speed up bitmap initial scanning · 99900697
      Fam Zheng authored
      Limiting each bdrv_is_allocated_above call to sectors_per_chunk is slow,
      because the underlying protocol driver would issue far more queries
      than necessary. We should coalesce the queries.
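
      The idea, sketched (the variable names and the exact capping are
      illustrative assumptions, not the literal patch):

           int64_t sector_num, end = s->bdev_length / BDRV_SECTOR_SIZE;
           int n;

           for (sector_num = 0; sector_num < end; sector_num += n) {
               /* ask about a large range at once; the driver reports in n
                * how many sectors share the same allocation state */
               int ret = bdrv_is_allocated_above(bs, base, sector_num,
                                                 MIN(end - sector_num, INT_MAX),
                                                 &n);
               if (ret < 0) {
                   break;                        /* error handling elided */
               }
               if (ret == 1) {
                   bdrv_set_dirty_bitmap(s->dirty_bitmap, sector_num, n);
               }
           }
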
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-id: <1436413678-7114-4-git-send-email-famz@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • Merge remote-tracking branch 'remotes/mdroth/tags/qga-pull-2015-07-21-tag' into staging · b9c46307
      Peter Maydell authored
      tag for qga-pull-2015-07-21
      
      Small fix to correct schema versioning annotations for recently-added
      GuestDiskBusType enum values. Not the end of the world, but ideally
      this inconsistency would be corrected prior to the 2.4 release.
      
      # gpg: Signature made Tue Jul 21 20:43:24 2015 BST using RSA key ID F108B584
      # gpg: Good signature from "Michael Roth <flukshun@gmail.com>"
      # gpg:                 aka "Michael Roth <mdroth@utexas.edu>"
      # gpg:                 aka "Michael Roth <mdroth@linux.vnet.ibm.com>"
      # gpg: WARNING: This key is not certified with sufficiently trusted signatures!
      # gpg:          It is not certain that the signature belongs to the owner.
      # Primary key fingerprint: CEAC C9E1 5534 EBAB B82D  3FA0 3353 C9CE F108 B584
      
      * remotes/mdroth/tags/qga-pull-2015-07-21-tag:
        qga: fixed versions for guest bus types in qapi-schema
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
    • qga: fixed versions for guest bus types in qapi-schema · 5f8343d0
      Olga Krishtal authored
      Signed-off-by: Olga Krishtal <okrishtal@virtuozzo.com>
      Signed-off-by: Denis V. Lunev <den@openvz.org>
      CC: Eric Blake <eblake@redhat.com>
      CC: Michael Roth <mdroth@linux.vnet.ibm.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      *added semi-colon to better delineate 2.2 vs. 2.4 versioning
      Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
  4. 21 Jul 2015, 16 commits
  5. 20 Jul 2015, 5 commits