1. 17 July 2012, 1 commit
  2. 17 February 2012, 1 commit
  3. 15 December 2011, 1 commit
    • coroutine: switch per-thread free pool to a global pool · 39a7a362
      Committed by Avi Kivity
      ucontext-based coroutines use a free pool to reduce allocations and
      deallocations of coroutine objects.  The pool is per-thread, presumably
      to improve locality.  However, as coroutines are usually allocated in
      a vcpu thread and freed in the I/O thread, the pool accounting gets
      screwed up and we end up allocating and freeing a coroutine for every
      I/O request.  This is expensive since large objects are allocated via
      the kernel, and are not cached by the C runtime.
      
      Fix by switching to a global pool.  This is safe since we're protected
      by the global mutex (see the sketch after this entry).
      Signed-off-by: Avi Kivity <avi@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
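
      A minimal sketch of the idea described in this commit, not QEMU's actual
      code: a single process-wide free list reused by all threads.  The Coro
      type, the release_pool/pool_size names and the POOL_MAX cap are
      hypothetical; the absence of any locking assumes, as the commit notes,
      that allocation and release both happen under one global mutex.

        #include <stdlib.h>

        typedef struct Coro {
            struct Coro *next;        /* link in the free pool */
            /* entry point, stack mapping, etc. would live here */
        } Coro;

        static Coro *release_pool;    /* global pool shared by all threads */
        static unsigned pool_size;
        #define POOL_MAX 64           /* arbitrary cap for this sketch */

        Coro *coro_get(void)
        {
            Coro *co = release_pool;

            if (co) {
                /* Reuse a pooled object instead of paying for a fresh
                 * allocation (for real coroutines that includes a large
                 * stack mapping obtained from the kernel). */
                release_pool = co->next;
                pool_size--;
                return co;
            }
            return calloc(1, sizeof(*co));
        }

        void coro_put(Coro *co)
        {
            if (pool_size < POOL_MAX) {
                co->next = release_pool;
                release_pool = co;
                pool_size++;
            } else {
                free(co);
            }
        }

      Because a vcpu thread may call coro_get() while the I/O thread calls
      coro_put(), the shared list only stays consistent if both calls are
      serialized by the same lock, which is exactly the global-mutex
      assumption the commit relies on.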
  4. 21 August 2011, 1 commit
  5. 08 August 2011, 1 commit
  6. 01 August 2011, 1 commit
    • coroutine: introduce coroutines · 00dccaf1
      Committed by Kevin Wolf
      Asynchronous code is becoming very complex.  At the same time,
      synchronous code keeps growing because it is convenient to write.
      Sometimes duplicate code paths are even added, one synchronous and the
      other asynchronous.  This patch introduces coroutines, which allow code
      that looks synchronous but is asynchronous under the covers.
      
      A coroutine has its own stack and is therefore able to preserve state
      across blocking operations, which traditionally require callback
      functions and manual marshalling of parameters.
      
      Creating and starting a coroutine is easy:
      
        coroutine = qemu_coroutine_create(my_coroutine);
        qemu_coroutine_enter(coroutine, my_data);
      
      The coroutine then executes until it returns or yields:
      
        void coroutine_fn my_coroutine(void *opaque) {
            MyData *my_data = opaque;
      
            /* do some work */
      
            qemu_coroutine_yield();
      
            /* do some more work */
        }
      
      Yielding switches control back to the caller of qemu_coroutine_enter().
      This is typically used to switch back to the main thread's event loop
      after issuing an asynchronous I/O request.  The request callback will
      then invoke qemu_coroutine_enter() once more to switch back to the
      coroutine (a sketch of this pattern follows after this entry).
      
      Note that if coroutines are used only from threads which hold the global
      mutex, they will never execute concurrently.  This makes programming with
      coroutines easier than with threads.  Race conditions cannot occur since
      only one coroutine may be active at any time; other coroutines only get
      a chance to run when the active one yields.
      
      This coroutine implementation is based on the gtk-vnc implementation
      written by Anthony Liguori <anthony@codemonkey.ws>, but it has been
      significantly rewritten by Kevin Wolf <kwolf@redhat.com> to use
      setjmp()/longjmp() instead of the more expensive swapcontext(), and by
      Paolo Bonzini <pbonzini@redhat.com> for Windows Fibers support (a rough
      sketch of the setjmp()/longjmp() switch also follows below).
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
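
      A hedged sketch of the yield-and-re-enter pattern described above, not
      code from this commit: aio_read_async() and its callback type are
      hypothetical stand-ins for an asynchronous I/O API, the ReadRequest
      struct and start_read() are invented for illustration, and the include
      assumes the qemu-coroutine.h header this commit adds (Coroutine,
      coroutine_fn, qemu_coroutine_create(), qemu_coroutine_enter(),
      qemu_coroutine_yield()).

        #include <stddef.h>
        #include <stdlib.h>
        #include "qemu-coroutine.h"   /* assumed: header added by this commit */

        typedef void AIOCallback(void *opaque, int ret);

        /* Hypothetical async I/O submit: cb runs later from the event loop. */
        void aio_read_async(int fd, void *buf, size_t len,
                            AIOCallback *cb, void *opaque);

        typedef struct {
            Coroutine *co;    /* the coroutine waiting for this request */
            int fd;
            void *buf;
            size_t len;
            int ret;          /* filled in by the completion callback */
        } ReadRequest;

        static void read_done_cb(void *opaque, int ret)
        {
            ReadRequest *req = opaque;

            req->ret = ret;
            /* Runs in the event loop: resume the coroutine where it yielded. */
            qemu_coroutine_enter(req->co, NULL);
        }

        /* Looks synchronous, but gives up control while the read is pending. */
        static void coroutine_fn read_in_coroutine(void *opaque)
        {
            ReadRequest *req = opaque;

            aio_read_async(req->fd, req->buf, req->len, read_done_cb, req);

            /* Switch back to the caller of qemu_coroutine_enter() (ultimately
             * the event loop) until read_done_cb() re-enters this coroutine. */
            qemu_coroutine_yield();

            /* req->ret now holds the completion status; continue here. */
        }

        /* Hypothetical caller, outside coroutine context. */
        void start_read(int fd, void *buf, size_t len)
        {
            /* The request must outlive this function: it is used again when
             * the callback re-enters the coroutine (freeing it afterwards is
             * omitted in this sketch). */
            ReadRequest *req = calloc(1, sizeof(*req));

            req->fd = fd;
            req->buf = buf;
            req->len = len;
            req->co = qemu_coroutine_create(read_in_coroutine);
            qemu_coroutine_enter(req->co, req);   /* runs until the first yield */
        }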
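
      And a rough sketch of the switching primitive the last paragraph of the
      commit message refers to, with hypothetical type and field names (this
      is not the commit's own source): each coroutine keeps a sigjmp_buf, and
      a switch is a sigsetjmp() on the side being suspended followed by a
      siglongjmp() into the side being resumed.  Creating the stack for a
      brand-new coroutine (done once, e.g. with makecontext()) is omitted.

        #include <setjmp.h>

        typedef struct {
            sigjmp_buf env;   /* saved register and stack state */
        } CoroContext;

        /* Suspend 'from' and resume 'to'; returns the value passed by
         * whichever later switch jumps back into 'from'. */
        static int coro_switch(CoroContext *from, CoroContext *to, int action)
        {
            int ret = sigsetjmp(from->env, 0);   /* 0: don't save signal mask */

            if (ret == 0) {
                siglongjmp(to->env, action);     /* enter 'to'; does not return */
            }
            return ret;
        }

      Compared with swapcontext(), which saves and restores the signal mask
      with a system call on every switch, sigsetjmp(env, 0)/siglongjmp() skip
      that work, which is why the commit message calls swapcontext() the more
      expensive option.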