- 03 Dec 2013, 1 commit
-
-
Committed by Marc-André Lureau
qemu_co_queue_wait_insert_head() is now unused in the QEMU code base.

Signed-off-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 30 Oct 2013, 1 commit
-
-
Committed by MORITA Kazutaka
This helper function behaves similarly to co_sleep_ns(), but the sleeping coroutine will be resumed when using qemu_aio_wait().

Signed-off-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Tested-by: Liu Yuan <namei.unix@gmail.com>
Reviewed-by: Liu Yuan <namei.unix@gmail.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
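Such a helper is typically built by arming a timer whose callback re-enters the sleeping coroutine and then yielding. A minimal sketch of that pattern, using post-conversion timer API names as an assumption (the actual implementation may differ, e.g. by taking an AioContext):

    typedef struct CoSleepCB {
        QEMUTimer *ts;
        Coroutine *co;
    } CoSleepCB;

    static void co_sleep_cb(void *opaque)
    {
        CoSleepCB *sleep_cb = opaque;

        /* Timer fired: hand control back to the sleeping coroutine. */
        qemu_coroutine_enter(sleep_cb->co, NULL);
    }

    void coroutine_fn co_aio_sleep_ns(int64_t ns)
    {
        CoSleepCB sleep_cb = {
            .co = qemu_coroutine_self(),
        };

        sleep_cb.ts = timer_new_ns(QEMU_CLOCK_REALTIME, co_sleep_cb, &sleep_cb);
        timer_mod(sleep_cb.ts, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + ns);

        /* Yield; the event loop (e.g. qemu_aio_wait()) dispatches the
         * timer callback, which resumes us past this point. */
        qemu_coroutine_yield();

        timer_del(sleep_cb.ts);
        timer_free(sleep_cb.ts);
    }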
-
- 23 Aug 2013, 2 commits
-
-
Committed by Alex Bligh
Convert block_job_sleep_ns() and co_sleep_ns() to use the new timer API.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
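For context, a conversion of this kind swaps the old clock-object helpers for the new timer_*() ones; the before/after below is an illustrative assumption rather than a quote from the patch:

    /* Before: clock objects plus the qemu_*_timer() helpers. */
    ts = qemu_new_timer_ns(rt_clock, cb, opaque);
    qemu_mod_timer(ts, qemu_get_clock_ns(rt_clock) + delay_ns);

    /* After: clock types plus the new timer_*() API. */
    ts = timer_new_ns(QEMU_CLOCK_REALTIME, cb, opaque);
    timer_mod(ts, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + delay_ns);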
-
Committed by Alex Bligh
include/qemu/timer.h has no need to include main-loop.h, and doing so causes an issue for the next patch. Unfortunately, various files assume that including timer.h will pull in main-loop.h. Untangle this mess.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
-
- 29 Jul 2013, 1 commit
-
-
Committed by Benoît Canet
The throttling code has been segfaulting since commit 02ffb504 because some qemu_co_queue_next() callers do not run in a coroutine, while qemu_co_queue_do_restart() assumes that the caller is a coroutine. As suggested by Stefan, fix this by entering the coroutine directly. Also make sure that qemu_co_queue_next() and qemu_co_queue_restart_all() can be called only from coroutines.

Signed-off-by: Benoit Canet <benoit@irqsave.net>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
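A sketch of the shape of that fix, reconstructed from the description above (the qemu_co_enter_next() helper and the internals shown are assumptions, not a quote of the patch): non-coroutine callers get an entry point that enters the next queued coroutine directly, while the coroutine-only entry points gain an assertion.

    /* Safe to call outside coroutine context: pop one waiter and run it. */
    bool qemu_co_enter_next(CoQueue *queue)
    {
        Coroutine *next = QTAILQ_FIRST(&queue->entries);

        if (!next) {
            return false;
        }

        QTAILQ_REMOVE(&queue->entries, next, co_queue_next);
        qemu_coroutine_enter(next, NULL);
        return true;
    }

    /* Coroutine-only entry point now asserts its calling context. */
    bool coroutine_fn qemu_co_queue_next(CoQueue *queue)
    {
        assert(qemu_in_coroutine());
        return qemu_co_queue_do_restart(queue, true); /* wake one waiter */
    }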
-
- 27 Jun 2013, 1 commit
-
-
Committed by Michael R. Hines
The RDMA event channel can be made non-blocking just like a TCP socket. Exporting this function allows us to yield so that the QEMU monitor remains available.

Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Chegu Vinod <chegu_vinod@hp.com>
Tested-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Michael R. Hines <mrhines@us.ibm.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
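The underlying mechanism is the standard POSIX one for any file descriptor; a minimal sketch (the wrapper name here is illustrative, since QEMU provides its own helper for this):

    #include <fcntl.h>

    /* Put a file descriptor (TCP socket, RDMA event channel fd, ...)
     * into non-blocking mode so reads fail with EAGAIN instead of
     * stalling the calling thread. */
    static int set_fd_nonblocking(int fd)
    {
        int flags = fcntl(fd, F_GETFL);

        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
    }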
-
- 15 Mar 2013, 1 commit
-
-
Committed by Stefan Hajnoczi
CoQueue uses a BH to wake coroutines that were made ready to run again using qemu_co_queue_next() or qemu_co_queue_restart_all(). The BH currently runs in the iothread AioContext and would break coroutines that run in a different AioContext.

This is a slightly tricky problem because the lifetime of the BH exceeds that of the CoQueue. This means coroutines can be awoken after the CoQueue itself has been freed.

Also, there is no qemu_co_queue_destroy() function which we could use to handle freeing resources. Introducing qemu_co_queue_destroy() has a ripple effect of requiring us to also add qemu_co_mutex_destroy() and qemu_co_rwlock_destroy(), as well as updating all callers. Avoid doing that.

We also cannot switch from a BH to a GIdle function because aio_poll() does not dispatch GIdle functions. (GIdle functions make memory management slightly easier because they free themselves.)

Finally, I don't want to move unlock_queue and unlock_bh into AioContext. That would break encapsulation: AioContext isn't supposed to know about CoQueue.

This patch implements a different solution: each qemu_co_queue_next() or qemu_co_queue_restart_all() call creates a new BH and list of coroutines to wake up. Callers tend to invoke qemu_co_queue_next() and qemu_co_queue_restart_all() occasionally after blocking I/O, so creating a new BH for each call shouldn't be massively inefficient.

Note that this patch does not add an interface for specifying the AioContext. That is left to future patches which will convert CoQueue, CoMutex, and CoRwlock to expose AioContext.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
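A sketch of the mechanism this describes, reconstructed from the message (type and helper names are assumptions, not necessarily the exact code): each wakeup allocates a fresh BH plus its own private list of coroutines, and the BH frees both once everyone has run, so the BH's lifetime can never exceed the wakeup it serves.

    typedef struct CoQueueNextData {
        QEMUBH *bh;
        QTAILQ_HEAD(, Coroutine) entries;
    } CoQueueNextData;

    static void qemu_co_queue_next_bh(void *opaque)
    {
        CoQueueNextData *data = opaque;
        Coroutine *next;

        /* Wake every coroutine captured when this BH was created. */
        while ((next = QTAILQ_FIRST(&data->entries)) != NULL) {
            QTAILQ_REMOVE(&data->entries, next, co_queue_next);
            qemu_coroutine_enter(next, NULL);
        }

        /* The BH and list die here, so they never outlive this wakeup. */
        qemu_bh_delete(data->bh);
        g_slice_free(CoQueueNextData, data);
    }

    static bool qemu_co_queue_do_restart(CoQueue *queue, bool single)
    {
        Coroutine *next;
        CoQueueNextData *data;

        if (QTAILQ_EMPTY(&queue->entries)) {
            return false;
        }

        data = g_slice_new(CoQueueNextData);
        data->bh = qemu_bh_new(qemu_co_queue_next_bh, data);
        QTAILQ_INIT(&data->entries);
        qemu_bh_schedule(data->bh);

        /* Move waiters from the CoQueue onto this call's private list. */
        while ((next = QTAILQ_FIRST(&queue->entries)) != NULL) {
            QTAILQ_REMOVE(&queue->entries, next, co_queue_next);
            QTAILQ_INSERT_TAIL(&data->entries, next, co_queue_next);
            if (single) {
                break;
            }
        }
        return true;
    }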
-
- 19 Dec 2012, 2 commits
-
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Paolo Bonzini
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 26 Jan 2012, 1 commit
-
-
Committed by Stefan Hajnoczi
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 05 Dec 2011, 2 commits
-
-
Committed by Stefan Hajnoczi
It's common to wake up all waiting coroutines. Introduce the qemu_co_queue_restart_all() function to do this instead of looping over qemu_co_queue_next() in every caller.

Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
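Given that description, the helper is presumably just the loop callers used to write by hand; a sketch:

    void coroutine_fn qemu_co_queue_restart_all(CoQueue *queue)
    {
        /* Each iteration marks one waiter runnable, until none remain. */
        while (qemu_co_queue_next(queue)) {
            /* nothing */
        }
    }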
-
Committed by Zhi Yong Wu
Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 23 Aug 2011, 1 commit
-
-
Committed by Aneesh Kumar K.V
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 02 Aug 2011, 1 commit
-
-
Committed by Kevin Wolf
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
- 01 Aug 2011, 1 commit
-
-
Committed by Kevin Wolf
Asynchronous code is becoming very complex. At the same time synchronous code is growing because it is convenient to write. Sometimes duplicate code paths are even added, one synchronous and the other asynchronous.

This patch introduces coroutines, which allow code that looks synchronous but is asynchronous under the covers. A coroutine has its own stack and is therefore able to preserve state across blocking operations, which traditionally require callback functions and manual marshalling of parameters.

Creating and starting a coroutine is easy:

    coroutine = qemu_coroutine_create(my_coroutine);
    qemu_coroutine_enter(coroutine, my_data);

The coroutine then executes until it returns or yields:

    void coroutine_fn my_coroutine(void *opaque)
    {
        MyData *my_data = opaque;

        /* do some work */

        qemu_coroutine_yield();

        /* do some more work */
    }

Yielding switches control back to the caller of qemu_coroutine_enter(). This is typically used to switch back to the main thread's event loop after issuing an asynchronous I/O request. The request callback will then invoke qemu_coroutine_enter() once more to switch back to the coroutine.

Note that if coroutines are used only from threads which hold the global mutex they will never execute concurrently. This makes programming with coroutines easier than with threads. Race conditions cannot occur since only one coroutine may be active at any time. Other coroutines can only run across yield.

This coroutines implementation is based on the gtk-vnc implementation written by Anthony Liguori <anthony@codemonkey.ws>, but it has been significantly rewritten by Kevin Wolf <kwolf@redhat.com> to use setjmp()/longjmp() instead of the more expensive swapcontext(), and by Paolo Bonzini <pbonzini@redhat.com> for Windows Fibers support.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@linux.vnet.ibm.com>
-