Commit ee82310f authored by Paolo Bonzini, committed by Stefan Hajnoczi

block: replace g_new0 with g_new for bottom half allocation.

This saves about 15% of the clock cycles spent on allocation.  Using the
slice allocator does not add a visible improvement; allocation is faster
than malloc, while freeing seems to be slower.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Parent e012b78c
@@ -44,10 +44,12 @@ struct QEMUBH {
 QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
 {
     QEMUBH *bh;
-    bh = g_new0(QEMUBH, 1);
-    bh->ctx = ctx;
-    bh->cb = cb;
-    bh->opaque = opaque;
+    bh = g_new(QEMUBH, 1);
+    *bh = (QEMUBH){
+        .ctx = ctx,
+        .cb = cb,
+        .opaque = opaque,
+    };
     qemu_mutex_lock(&ctx->bh_lock);
     bh->next = ctx->first_bh;
     /* Make sure that the members are ready before putting bh into list */
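Dropping g_new0 is safe here because a compound literal with designated initializers zero-initializes every member that is not named (members such as bh->next start out NULL/zero), so the extra memset done by g_new0 is redundant. For comparison, below is a minimal sketch of the slice-allocator variant that the commit message says was measured but not adopted. The QEMUBH layout shown is a simplified stand-in, and aio_bh_new_slice() / aio_bh_free_slice() are hypothetical names for illustration, not QEMU API.

/*
 * Hypothetical sketch only: what the slice-allocator variant mentioned in
 * the commit message could look like.  The QEMUBH layout is a simplified
 * stand-in; aio_bh_new_slice() and aio_bh_free_slice() are made-up names.
 */
#include <glib.h>
#include <stdbool.h>

typedef struct QEMUBH QEMUBH;
typedef void QEMUBHFunc(void *opaque);

struct QEMUBH {
    void *ctx;          /* stand-in for AioContext *ctx */
    QEMUBHFunc *cb;
    void *opaque;
    QEMUBH *next;
    bool scheduled;
    bool idle;
    bool deleted;
};

static QEMUBH *aio_bh_new_slice(void *ctx, QEMUBHFunc *cb, void *opaque)
{
    /* g_slice_new() allocates from per-size pools: allocation is faster
     * than malloc, but g_slice_free() was measured to be slower. */
    QEMUBH *bh = g_slice_new(QEMUBH);

    /* Members not listed in the compound literal (next, scheduled, ...)
     * are zero-initialized, so no explicit memset or g_slice_new0 is needed. */
    *bh = (QEMUBH){
        .ctx = ctx,
        .cb = cb,
        .opaque = opaque,
    };
    return bh;
}

static void aio_bh_free_slice(QEMUBH *bh)
{
    /* Slice memory must be released with g_slice_free() using the same type. */
    g_slice_free(QEMUBH, bh);
}

In the commit itself, plain g_new() plus the same compound-literal assignment was chosen instead, as shown in the diff above.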