1. 29 Aug 2014, 3 commits
    • AioContext: export and use aio_dispatch · e4c7e2d1
      Paolo Bonzini committed
      So far, aio_poll's scheme was dispatch/poll/dispatch, where
      the first dispatch phase was used only in the GSource case in
      order to avoid a blocking poll.  Earlier patches changed it to
      dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.
      
      By making aio_dispatch public, we can remove the first dispatch
      phase altogether, so that both aio_poll and the GSource use the same
      prepare/poll/dispatch scheme.
      
      This patch breaks the invariant that aio_poll(..., true) will not block
      the first time it returns false.  This used to be fundamental for
      qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}" but
      no code in QEMU relies on this invariant anymore.  The return value
      of aio_poll() is now comparable with that of g_main_context_iteration.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
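
      A minimal sketch of the resulting prepare/poll/dispatch scheme, assuming
      the function names mentioned above; poll_fds() is a hypothetical stand-in
      for the OS-specific poll step, not a real QEMU function:

        #include "block/aio.h"   /* AioContext, aio_compute_timeout, aio_dispatch */

        /* Hypothetical helper representing the platform-specific poll step. */
        static void poll_fds(AioContext *ctx, int64_t timeout_ns);

        static bool aio_poll_sketch(AioContext *ctx, bool blocking)
        {
            /* prepare: how long may we sleep?  0 if work is already pending */
            int64_t timeout_ns = blocking ? aio_compute_timeout(ctx) : 0;

            /* poll: wait for file descriptors, up to the computed timeout */
            poll_fds(ctx, timeout_ns);

            /* dispatch: run ready fd handlers and bottom halves exactly once */
            return aio_dispatch(ctx);
        }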
    • AioContext: run bottom halves after polling · 3672fa50
      Paolo Bonzini committed
      Make the dispatching phase the same before blocking and afterwards.
      The next patch will make aio_dispatch public and use it directly
      for the GSource case, instead of aio_poll.  aio_poll can then be
      simplified heavily.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • AioContext: take bottom halves into account when computing aio_poll timeout · 845ca10d
      Paolo Bonzini committed
      Right now, QEMU invokes aio_bh_poll before the "poll" phase
      of aio_poll.  It is simpler to do it afterwards and skip the
      "poll" phase altogether when the OS-dependent parts of AioContext
      are invoked from GSource.  This way, AioContext behaves more
      similarly when used as a GSource vs. when used as stand-alone.
      
      As a start, take bottom halves into account when computing the
      poll timeout.  If a bottom half is ready, do a non-blocking
      poll.  As a side effect, this makes idle bottom halves work
      with aio_poll; an improvement, but not really an important
      one since they are deprecated.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
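
      The bottom-half part of that timeout computation can be sketched as
      follows; the QEMUBH field names (first_bh, scheduled, idle) are
      assumptions for illustration, not a claim about the exact structure:

        /* Sketch: pick a poll timeout that takes bottom halves into account. */
        static int64_t bh_timeout_sketch(AioContext *ctx)
        {
            QEMUBH *bh;

            for (bh = ctx->first_bh; bh; bh = bh->next) {
                if (bh->scheduled) {
                    if (bh->idle) {
                        return 10000000;    /* idle BHs: poll again in ~10 ms */
                    }
                    return 0;               /* ready BH: do a non-blocking poll */
                }
            }
            return -1;    /* nothing scheduled: block until an fd event or aio_notify() */
        }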
  2. 09 Jul 2014, 2 commits
    • AioContext: speed up aio_notify · 0ceb849b
      Paolo Bonzini committed
      In many cases, the call to event_notifier_set in aio_notify is unnecessary.
      In particular, if we are executing aio_dispatch, or if aio_poll is not
      blocking, we know that we will soon get to the next loop iteration (if
      necessary); the thread that hosts the AioContext's event loop does not
      need any nudging.
      
      The patch includes a Promela formal model that shows that this really
      works and does not need any further complication such as generation
      counts.  It needs a memory barrier though.
      
      The generation counts are not needed because any change to
      ctx->dispatching after the memory barrier is okay for aio_notify.
      If it changes from zero to one, it is the right thing to skip
      event_notifier_set.  If it changes from one to zero, the
      event_notifier_set is unnecessary but harmless.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
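
      A condensed sketch of that fast path; the field names are assumptions
      taken from the description above, and the barrier pairs with one the
      polling thread executes after setting ctx->dispatching:

        /* Sketch: skip the eventfd write when the event loop thread is already
         * dispatching or about to re-check its handlers anyway. */
        void aio_notify_sketch(AioContext *ctx)
        {
            /* Order our earlier writes (e.g. marking a BH scheduled) before
             * reading ctx->dispatching; without the barrier a stale value
             * could make us skip a wakeup that is actually needed. */
            smp_mb();
            if (!ctx->dispatching) {
                event_notifier_set(&ctx->notifier);   /* nudge the event loop */
            }
            /* else: the loop is already dispatching and will re-evaluate anyway */
        }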
    • block: drop aio functions that operate on the main AioContext · 87f68d31
      Paolo Bonzini committed
      The main AioContext should be accessed explicitly via qemu_get_aio_context().
      Most of the time, using it is not the right thing to do.
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
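
      For the cases where the main context really is meant, the replacement
      pattern is simply to name it explicitly; a short sketch (my_notifier and
      my_read_cb are illustrative placeholders):

        AioContext *ctx = qemu_get_aio_context();      /* the main loop's context */
        aio_set_event_notifier(ctx, &my_notifier, my_read_cb);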
  3. 06 Dec 2013, 1 commit
    • aio: make aio_poll(ctx, true) block with no fds · d3fa9230
      Stefan Hajnoczi committed
      This patch drops a special case where aio_poll(ctx, true) returns false
      instead of blocking if no file descriptors are waiting on I/O.  Now it
      is possible to block in aio_poll() to wait for aio_notify().
      
      This change eliminates busy waiting.  bdrv_drain_all() used to rely on
      busy waiting to complete throttled I/O requests, but this is no longer
      required, so we can simplify aio_poll().
      
      Note that aio_poll() still returns false when aio_notify() was used.  In
      other words, stopping a blocking aio_poll() wait is not considered
      making progress.
      
      Adjust test-aio /aio/bh/callback-delete/one which assumed aio_poll(ctx,
      true) would immediately return false instead of blocking.
      Reviewed-by: Alex Bligh <alex@alex.org.uk>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
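
      The new behaviour allows a wait loop with no file descriptors registered
      at all; a rough sketch (done is a placeholder flag that real code would
      read and write with proper synchronization):

        /* Event loop thread: sleep in aio_poll() instead of busy waiting. */
        while (!done) {
            aio_poll(ctx, true);            /* blocks until there is work or a wakeup */
        }

        /* Another thread, once its work completes: */
        done = true;
        aio_notify(ctx);                    /* wakes the blocked aio_poll() */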
  4. 23 Aug 2013, 1 commit
  5. 19 Aug 2013, 2 commits
    • aio: drop io_flush argument · f2e5dca4
      Stefan Hajnoczi committed
      The .io_flush() handler no longer exists and has no users.  Drop the
      io_flush argument to aio_set_fd_handler() and related functions.
      
      The AioFlushEventNotifierHandler and AioFlushHandler typedefs are no
      longer used and are dropped too.
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
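
      For callers the change is just one argument fewer; roughly (the callback
      and opaque names are illustrative):

        /* Before: an AioFlushHandler decided whether the fd was monitored. */
        aio_set_fd_handler(ctx, fd, my_read_cb, my_write_cb, my_flush_cb, opaque);

        /* After this series: io_flush is gone, the fd is always monitored and
         * the caller checks its own termination condition. */
        aio_set_fd_handler(ctx, fd, my_read_cb, my_write_cb, opaque);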
    • aio: stop using .io_flush() · 164a101f
      Stefan Hajnoczi committed
      Now that aio_poll() users check their termination condition themselves,
      it is no longer necessary to call .io_flush() handlers.
      
      The behavior of aio_poll() changes as follows:
      
      1. .io_flush() is no longer invoked and file descriptors are *always*
      monitored.  Previously returning 0 from .io_flush() would skip this file
      descriptor.
      
      Due to this change it is essential to check that requests are pending
      before calling qemu_aio_wait().  Failure to do so means we block, for
      example, waiting for an idle iSCSI socket to become readable when there
      are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
      before calling.
      
      2. aio_poll() now returns true if progress was made (BH or fd handlers
      executed) and false otherwise.  Previously it would return true whenever
      'busy', which means that .io_flush() returned true.  The 'busy' concept
      no longer exists so just progress is returned.
      
      Due to this change we need to update tests/test-aio.c which asserts
      aio_poll() return values.  Note that QEMU doesn't actually rely on these
      return values so only tests/test-aio.c cares.
      
      Note that ctx->notifier, the EventNotifier fd used for aio_notify(), is
      now handled as a special case.  This is a little ugly but maintains
      aio_poll() semantics, i.e. aio_notify() does not count as 'progress' and
      aio_poll() avoids blocking when the user has not set any fd handlers yet.
      
      Patches after this remove .io_flush() handler code until we can finally
      drop the io_flush arguments to aio_set_fd_handler() and friends.
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
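
      The calling convention from point 1 boils down to a loop like this;
      requests_pending() is a placeholder for whatever bookkeeping the caller
      already keeps about its in-flight requests:

        /* Sketch: only block while there is outstanding work, otherwise
         * qemu_aio_wait() could sleep forever on an idle socket. */
        while (requests_pending()) {
            qemu_aio_wait();    /* dispatches ready fd handlers and bottom halves */
        }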
  6. 22 Feb 2013, 3 commits
  7. 17 Jan 2013, 1 commit
    • aio: Fix return value of aio_poll() · 2ea9b58f
      Kevin Wolf committed
      aio_poll() must return true if any work is still pending, even if it
      didn't make progress, so that bdrv_drain_all() doesn't stop waiting too
      early. The possibility of stopping early occasionally led to a failed
      assertion in bdrv_drain_all(), when some in-flight request was missed
      and the function didn't really drain all requests.
      
      In order to make that change, the return value as specified in the
      function comment must change for blocking = false; fortunately, only
      test cases use the return value of blocking = false calls, so this
      change shouldn't cause any trouble.
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
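
      The contract that bdrv_drain_all() relies on can be sketched as a loop
      shape rather than the actual code:

        /* Sketch: aio_poll(ctx, true) now keeps returning true while any work
         * is still pending, so a drain loop like this cannot stop too early. */
        while (aio_poll(ctx, true)) {
            /* keep iterating until nothing is pending */
        }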
  8. 19 Dec 2012, 2 commits
  9. 30 Oct 2012, 10 commits
  10. 28 Sep 2012, 2 commits
  11. 19 Apr 2012, 3 commits
  12. 14 Jan 2012, 1 commit
  13. 21 Aug 2011, 1 commit
  14. 21 May 2010, 1 commit
  15. 28 Oct 2009, 1 commit
  16. 12 Sep 2009, 1 commit
    • Fix sys-queue.h conflict for good · 72cf2d4f
      Blue Swirl committed
      Problem: Our file sys-queue.h is a copy of the BSD file, but there are
      some additions and it's not entirely compatible. Because of that, there have
      been conflicts with system headers on BSD systems. Some hacks have been
      introduced in the commits 15cc9235, f40d7537, 96555a96 and 3990d09a,
      but the fixes were fragile.
      
      Solution: Avoid the conflict entirely by renaming the functions and the
      file. Revert the previous hacks.
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
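
      The renamed macros behave exactly like the BSD originals; a small usage
      sketch with the prefixed queue macros this rename introduced (the Item
      type is illustrative):

        #include "qemu/queue.h"              /* historically qemu-queue.h */

        typedef struct Item {
            int value;
            QTAILQ_ENTRY(Item) link;         /* was TAILQ_ENTRY */
        } Item;

        static QTAILQ_HEAD(, Item) items = QTAILQ_HEAD_INITIALIZER(items);

        static void add_item(Item *it)
        {
            QTAILQ_INSERT_TAIL(&items, it, link);    /* was TAILQ_INSERT_TAIL */
        }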
  17. 22 Jul 2009, 1 commit
  18. 15 Jun 2009, 1 commit
  19. 09 May 2009, 1 commit
    • AIO deletion race fix · 79d5ca56
      Alexander Graf committed
      When deleting an fd event there is a chance the object doesn't get
      freed right away; instead, only its ->deleted flag is set and the
      object is removed somewhere later.
      
      Now, if we create a handler for the fd again before the actual
      deletion occurs, we end up writing data into an object that has
      ->deleted set, which is obviously wrong.
      
      I see two ways to fix this:
      
      1. Don't return ->deleted objects in the search
      2. Unset ->deleted in the search
      
      This patch implements option 1, which feels safer to do. It fixes AIO issues
      I've seen with curl, as libcurl unsets fd event listeners pretty
      frequently.
      Signed-off-by: Alexander Graf <alex@csgraf.de>
      Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
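
      A sketch of option 1 as applied to the handler lookup; the structure
      layout is simplified and only the ->deleted check matters here:

        /* Sketch: never return a handler already marked for deletion, so a new
         * registration for the same fd gets a fresh object instead of writing
         * into a half-dead one. */
        static AioHandler *find_aio_handler_sketch(int fd)
        {
            AioHandler *node;

            LIST_FOREACH(node, &aio_handlers, node) {
                if (node->fd == fd && !node->deleted) {
                    return node;        /* live handler for this fd */
                }
            }
            return NULL;                /* caller will allocate a new one */
        }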
  20. 06 Feb 2009, 1 commit
  21. 13 Oct 2008, 1 commit