1. 29 Aug 2014, 3 commits
    • AioContext: introduce aio_prepare · a3462c65
      Committed by Paolo Bonzini
      This will be used to implement socket polling on Windows.
      On Windows, select() and g_poll() are completely different;
      sockets are polled with select() before calling g_poll(),
      and the g_poll() call must be nonblocking if select() says a
      socket is ready.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
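      A minimal sketch of the idea, with a hypothetical helper name and no
      error handling (not the actual QEMU aio-win32.c code): check the
      sockets with a zero-timeout select() first, and if any is ready,
      clamp the timeout handed to g_poll() to zero so it cannot block.

        #ifdef _WIN32
        #include <winsock2.h>

        /* Return the timeout to pass to g_poll(): 0 if select() reports a
         * ready socket (so g_poll() must not block), else the caller's value. */
        static int prepare_timeout_sketch(SOCKET *socks, int nsocks, int timeout)
        {
            fd_set rfds;
            struct timeval tv = { 0, 0 };   /* do not block in select() */
            int i;

            FD_ZERO(&rfds);
            for (i = 0; i < nsocks; i++) {
                FD_SET(socks[i], &rfds);
            }
            if (select(0, &rfds, NULL, NULL, &tv) > 0) {
                return 0;                   /* a socket is ready: poll non-blocking */
            }
            return timeout;
        }
        #endif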
    • AioContext: export and use aio_dispatch · e4c7e2d1
      Committed by Paolo Bonzini
      So far, aio_poll's scheme was dispatch/poll/dispatch, where
      the first dispatch phase was used only in the GSource case in
      order to avoid a blocking poll.  Earlier patches changed it to
      dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.
      
      By making aio_dispatch public, we can remove the first dispatch
      phase altogether, so that both aio_poll and the GSource use the same
      prepare/poll/dispatch scheme.
      
      This patch breaks the invariant that aio_poll(..., true) will not block
      the first time it returns false.  This used to be fundamental for
      qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but
      no code in QEMU relies on this invariant anymore.  The return value
      of aio_poll() is now comparable with that of g_main_context_iteration.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
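      A simplified sketch of one iteration after this change, using the
      exported functions named above plus a placeholder for the OS-specific
      poll step (not the real aio-posix.c/aio-win32.c code):

        #include "block/aio.h"   /* AioContext, aio_compute_timeout(), aio_dispatch() */

        /* Placeholder for the platform poll (ppoll()/g_poll()/WaitForMultipleObjects);
         * a real implementation waits on ctx's descriptors for up to timeout_ns. */
        static void wait_for_events(AioContext *ctx, int64_t timeout_ns)
        {
            (void)ctx;
            (void)timeout_ns;
        }

        /* prepare: compute a timeout; poll: wait for events; dispatch: run handlers. */
        static bool aio_poll_sketch(AioContext *ctx, bool blocking)
        {
            int64_t timeout = blocking ? aio_compute_timeout(ctx) : 0;

            wait_for_events(ctx, timeout);   /* returns immediately if timeout == 0 */
            return aio_dispatch(ctx);        /* run bottom halves and ready handlers */
        }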
    • AioContext: take bottom halves into account when computing aio_poll timeout · 845ca10d
      Committed by Paolo Bonzini
      Right now, QEMU invokes aio_bh_poll before the "poll" phase
      of aio_poll.  It is simpler to do it afterwards and skip the
      "poll" phase altogether when the OS-dependent parts of AioContext
      are invoked from GSource.  This way, AioContext behaves more
      similarly when used as a GSource vs. when used as stand-alone.
      
      As a start, take bottom halves into account when computing the
      poll timeout.  If a bottom half is ready, do a non-blocking
      poll.  As a side effect, this makes idle bottom halves work
      with aio_poll; an improvement, but not really an important
      one since they are deprecated.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
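      The bottom-half part of the timeout computation boils down to the
      following (simplified sketch; field names approximate the private
      structures in async.c):

        /* A scheduled non-idle BH forces a non-blocking poll, an idle BH
         * caps the wait at 10 ms, otherwise block until an event arrives. */
        static int64_t compute_timeout_sketch(AioContext *ctx)
        {
            int64_t timeout = -1;           /* default: block indefinitely */
            QEMUBH *bh;

            for (bh = ctx->first_bh; bh; bh = bh->next) {
                if (!bh->deleted && bh->scheduled) {
                    if (bh->idle) {
                        timeout = 10000000; /* idle BH: poll within 10 ms (ns units) */
                    } else {
                        return 0;           /* ready BH: do not block at all */
                    }
                }
            }
            return timeout;
        }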
  2. 09 Jul 2014, 1 commit
    • AioContext: speed up aio_notify · 0ceb849b
      Committed by Paolo Bonzini
      In many cases, the call to event_notifier_set in aio_notify is unnecessary.
      In particular, if we are executing aio_dispatch, or if aio_poll is not
      blocking, we know that we will soon get to the next loop iteration (if
      necessary); the thread that hosts the AioContext's event loop does not
      need any nudging.
      
      The patch includes a Promela formal model that shows that this really
      works and does not need any further complication such as generation
      counts.  It needs a memory barrier though.
      
      The generation counts are not needed because any change to
      ctx->dispatching after the memory barrier is okay for aio_notify.
      If it changes from zero to one, it is the right thing to skip
      event_notifier_set.  If it changes from one to zero, the
      event_notifier_set is unnecessary but harmless.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
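      The fast path can be sketched as follows (simplified from the real
      aio_notify(); ctx->dispatching is the flag published by aio_poll):

        /* The barrier orders our earlier writes (e.g. bh->scheduled) before the
         * read of ctx->dispatching; it pairs with a barrier in aio_poll(). */
        static void aio_notify_sketch(AioContext *ctx)
        {
            smp_mb();
            if (!ctx->dispatching) {
                /* The event loop may be about to block: wake it up. */
                event_notifier_set(&ctx->notifier);
            }
            /* Otherwise it is dispatching and will re-check soon; no kick needed. */
        }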
  3. 04 Jun 2014, 1 commit
  4. 13 Mar 2014, 1 commit
    • aio: add aio_context_acquire() and aio_context_release() · 98563fc3
      Committed by Stefan Hajnoczi
      It can be useful to run an AioContext from a thread which normally does
      not "own" the AioContext.  For example, request draining can be
      implemented by acquiring the AioContext and looping aio_poll() until all
      requests have been completed.
      
      The following pattern should work:
      
        /* Event loop thread */
        while (running) {
            aio_context_acquire(ctx);
            aio_poll(ctx, true);
            aio_context_release(ctx);
        }
      
        /* Another thread */
        aio_context_acquire(ctx);
        bdrv_read(bs, 0x1000, buf, 1);
        aio_context_release(ctx);
      
      This patch implements aio_context_acquire() and aio_context_release().
      
      Note that existing aio_poll() callers do not need to worry about
      acquiring and releasing - it is only needed when multiple threads will
      call aio_poll() on the same AioContext.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
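      An illustrative sketch of acquire/release built on a mutex, a condition
      variable and an owner marker (field names are approximations, not
      necessarily the exact members the patch adds to AioContext):

        void aio_context_acquire(AioContext *ctx)
        {
            qemu_mutex_lock(&ctx->acquire_lock);
            while (ctx->owner) {
                aio_notify(ctx);   /* kick the owner out of a blocking aio_poll() */
                qemu_cond_wait(&ctx->acquire_cond, &ctx->acquire_lock);
            }
            qemu_thread_get_self(&ctx->owner_thread);
            ctx->owner = &ctx->owner_thread;
            qemu_mutex_unlock(&ctx->acquire_lock);
        }

        void aio_context_release(AioContext *ctx)
        {
            qemu_mutex_lock(&ctx->acquire_lock);
            ctx->owner = NULL;
            qemu_cond_signal(&ctx->acquire_cond);   /* wake one waiting acquirer */
            qemu_mutex_unlock(&ctx->acquire_lock);
        }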
  5. 23 Aug 2013, 3 commits
  6. 19 Aug 2013, 1 commit
  7. 19 Jul 2013, 1 commit
  8. 15 Mar 2013, 1 commit
    • aio: add a ThreadPool instance to AioContext · 9b34277d
      Committed by Stefan Hajnoczi
      This patch adds a ThreadPool to AioContext.  It's possible that some
      AioContext instances will never use the ThreadPool, so defer creation
      until aio_get_thread_pool().
      
      The reason why AioContext should have the ThreadPool is because the
      ThreadPool is bound to an AioContext instance where the work item's
      callback function is invoked.  It doesn't make sense to keep the
      ThreadPool pointer anywhere other than AioContext.  For example,
      block/raw-posix.c can get its AioContext's ThreadPool and submit work.
      
      Special note about headers: I used struct ThreadPool in aio.h because
      there is a circular dependency if aio.h includes thread-pool.h.
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
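      The deferred creation amounts to a lazy getter, roughly:

        /* Create the pool only on first use, so AioContexts that never
         * submit work to a thread pool pay nothing. */
        ThreadPool *aio_get_thread_pool(AioContext *ctx)
        {
            if (!ctx->thread_pool) {
                ctx->thread_pool = thread_pool_new(ctx);   /* completions run in ctx */
            }
            return ctx->thread_pool;
        }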
  9. 22 Feb 2013, 1 commit
  10. 19 Dec 2012, 2 commits
  11. 11 Dec 2012, 1 commit
  12. 13 Nov 2012, 1 commit
  13. 30 Oct 2012, 6 commits
  14. 01 May 2012, 1 commit
  15. 27 Apr 2012, 1 commit
  16. 22 Oct 2011, 1 commit
  17. 06 Sep 2011, 1 commit
    • async: Allow nested qemu_bh_poll calls · 648fb0ea
      Committed by Kevin Wolf
      qemu may segfault when a BH handler first deletes a BH and then (possibly
      indirectly) calls a nested qemu_bh_poll(). This is because the inner instance
      frees the BH and deletes it from the list that the outer one processes.
      
      This patch deletes BHs only in the outermost qemu_bh_poll instance.
      
      Commit 7887f620 already tried to achieve the same, but it assumed that the BH
      handler would only delete its own BH. With a nested qemu_bh_poll(), this isn't
      guaranteed, so that commit wasn't enough. Hope this one fixes it for real.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
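      The fix can be sketched with a nesting counter: callbacks may mark BHs
      as deleted at any depth, but only the outermost qemu_bh_poll() unlinks
      and frees them (simplified sketch; idle-BH handling omitted):

        static int qemu_bh_poll_sketch(void)
        {
            static int nesting;
            QEMUBH *bh, **bhp, *next;
            int ret = 0;

            nesting++;
            for (bh = first_bh; bh; bh = next) {
                next = bh->next;              /* bh->cb() may delete bh: read next first */
                if (!bh->deleted && bh->scheduled) {
                    bh->scheduled = 0;
                    ret = 1;
                    bh->cb(bh->opaque);       /* may re-enter qemu_bh_poll() */
                }
            }
            nesting--;

            if (!nesting) {                   /* only the outermost caller frees BHs */
                for (bhp = &first_bh; *bhp; ) {
                    bh = *bhp;
                    if (bh->deleted) {
                        *bhp = bh->next;      /* unlink, then free */
                        g_free(bh);
                    } else {
                        bhp = &bh->next;
                    }
                }
            }
            return ret;
        }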
  18. 21 Aug 2011, 1 commit
  19. 02 Aug 2011, 1 commit
    • async: Remove AsyncContext · 384acbf4
      Committed by Kevin Wolf
      The purpose of AsyncContexts was to protect qcow and qcow2 against reentrancy
      during an emulated bdrv_read/write (which includes a qemu_aio_wait() call and
      can run AIO callbacks of different requests if it weren't for AsyncContexts).
      
      Now both qcow and qcow2 are protected by CoMutexes and AsyncContexts can be
      removed.
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
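      The CoMutex protection that replaces AsyncContexts boils down to
      serializing each request's critical section inside its coroutine,
      roughly like this (sketch with a hypothetical helper, not the actual
      qcow2 code):

        static coroutine_fn int qcow2_request_sketch(BlockDriverState *bs)
        {
            BDRVQcowState *s = bs->opaque;
            int ret;

            /* Taking the CoMutex yields if another request holds it, so the
             * metadata-updating section is never entered reentrantly. */
            qemu_co_mutex_lock(&s->lock);
            ret = do_guarded_work(bs);        /* hypothetical helper for the guarded part */
            qemu_co_mutex_unlock(&s->lock);
            return ret;
        }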
  20. 15 Jun 2011, 1 commit
  21. 28 Oct 2009, 2 commits