- 06 May 2015, 2 commits
-
-
Committed by Liang Li
Add the code to create and destroy the multiple threads that will be used to do data decompression. Some functions are left empty just to keep things clear; the code will be added later.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Liang Li
Add the code to create and destroy the multiple threads that will be used to do data compression. Some functions are left empty to keep things clear; the code will be added later.

Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
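As a rough illustration of what creating and destroying such worker threads amounts to, here is a minimal sketch in plain C with pthreads. It is not QEMU's implementation; the thread count and names are placeholders, the worker body is intentionally empty (mirroring the commit), and a real version would block on a condition variable instead of polling a flag.

    #include <pthread.h>
    #include <stddef.h>

    #define NUM_COMPRESS_THREADS 4              /* placeholder thread count */

    static pthread_t compress_threads[NUM_COMPRESS_THREADS];
    static volatile int compress_quit;          /* ask workers to exit */

    /* Worker body intentionally left empty, as in the commit. */
    static void *do_data_compress(void *opaque)
    {
        while (!compress_quit) {
            /* wait for a page, compress it, signal completion ... */
        }
        return NULL;
    }

    static void compress_threads_create(void)
    {
        compress_quit = 0;
        for (int i = 0; i < NUM_COMPRESS_THREADS; i++) {
            pthread_create(&compress_threads[i], NULL, do_data_compress, NULL);
        }
    }

    static void compress_threads_destroy(void)
    {
        compress_quit = 1;                      /* let every worker fall out */
        for (int i = 0; i < NUM_COMPRESS_THREADS; i++) {
            pthread_join(compress_threads[i], NULL);
        }
    }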
-
- 26 March 2015, 1 commit
-
-
Committed by Juan Quintela
Compression code (still not in tree) wants to call this function from outside the migration thread, so we can't write to last_sent_block. Instead of reverting the full patch:

    [PULL 07/11] save_block_hdr: we can recalculate

just revert the parts that touch last_sent_block.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- 17 March 2015, 1 commit
-
-
Committed by zhanghailiang
There is already a helper function, ram_bytes_total(); we can use it to help count the total number of pages used by RAM blocks.

Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 16 March 2015, 6 commits
-
-
Committed by Juan Quintela
It has always been a page header, not a block header. Once there, the flag argument was only passed so that a bit could be OR-ed into it; just do the OR in the caller.

Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela
No need to pass it through all the callers. Once there, update last_sent_block here.

Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.

Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.

Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.

Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Juan Quintela
It used to be an int, but then we can't directly pass the bytes_transferred parameter; that will happen later in the series.

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
-
- 18 February 2015, 1 commit
-
-
Committed by Markus Armbruster
Coccinelle semantic patch:

    @@
    expression E;
    @@
    -    error_report("%s", error_get_pretty(E));
    -    error_free(E);
    +    error_report_err(E);
    @@
    expression E, S;
    @@
    -    error_report("%s", error_get_pretty(E));
    +    error_report_err(E);
    (
         exit(S);
    |
         abort();
    )

Trivial manual touch-ups in block/sheepdog.c.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- 17 February 2015, 3 commits
-
-
Committed by Mike Day
Allow "unlocked" reads of the ram_list by using an RCU-enabled QLIST. The ramlist mutex is kept. call_rcu callbacks are run with the iothread lock taken, but that may change in the future. Writers still take the ramlist mutex, but they no longer need to assume that the iothread lock is taken. Readers of the list, instead, no longer require either the iothread or ramlist mutex, but they need to use rcu_read_lock() and rcu_read_unlock(). One place in arch_init.c was downgrading from write side to read side like this: qemu_mutex_lock_iothread() qemu_mutex_lock_ramlist() ... qemu_mutex_unlock_iothread() ... qemu_mutex_unlock_ramlist() and the equivalent idiom is: qemu_mutex_lock_ramlist() rcu_read_lock() ... qemu_mutex_unlock_ramlist() ... rcu_read_unlock() Reviewed-by: NFam Zheng <famz@redhat.com> Signed-off-by: NMike Day <ncmike@ncultra.org> Signed-off-by: NPaolo Bonzini <pbonzini@redhat.com>
-
Committed by Mike Day
QLIST has RCU-friendly primitives, so switch to it.

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Mike Day
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 15 January 2015, 1 commit
-
-
Committed by ChenLiang
Avoid hot pages being replaced by others, which remarkably decreases cache misses.

Sample results with the test program (quoted from xbzrle.txt) run in the VM (migration bandwidth: 1 GbE, xbzrle cache size: 8 MB).

The test program:

    #include <stdlib.h>
    #include <stdio.h>
    int main()
    {
        char *buf = (char *) calloc(4096, 4096);
        while (1) {
            int i;
            for (i = 0; i < 4096 * 4; i++) {
                buf[i * 4096 / 4]++;
            }
            printf(".");
        }
    }

Before this patch:

    virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
    {"return":{"expected-downtime":1020,"xbzrle-cache":{"bytes":1108284,
    "cache-size":8388608,"cache-miss-rate":0.987013,"pages":18297,"overflow":8,
    "cache-miss":1228737},"status":"active","setup-time":10,"total-time":52398,
    "ram":{"total":12466991104,"remaining":1695744,"mbps":935.559472,
    "transferred":5780760580,"dirty-sync-counter":271,"duplicate":2878530,
    "dirty-pages-rate":29130,"skipped":0,"normal-bytes":5748592640,
    "normal":1403465}},"id":"libvirt-706"}

18k pages were sent compressed in 52 seconds; the cache-miss rate is 98.7%, i.e. almost every lookup misses.

After optimizing:

    virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
    {"return":{"expected-downtime":2054,"xbzrle-cache":{"bytes":5066763,
    "cache-size":8388608,"cache-miss-rate":0.485924,"pages":194823,"overflow":0,
    "cache-miss":210653},"status":"active","setup-time":11,"total-time":18729,
    "ram":{"total":12466991104,"remaining":3895296,"mbps":937.663549,
    "transferred":1615042219,"dirty-sync-counter":98,"duplicate":2869840,
    "dirty-pages-rate":58781,"skipped":0,"normal-bytes":1588404224,
    "normal":387794}},"id":"libvirt-266"}

194k pages were sent compressed in 18 seconds; the cache-miss rate decreased to 48.59%.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
-
- 08 January 2015, 2 commits
-
-
Committed by Michael S. Tsirkin
If block used_length does not match, try to resize it.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
-
Committed by Michael S. Tsirkin
This patch allows us to distinguish between two length values for each block:

    max_length  - length of the memory block that was allocated
    used_length - length of the block used by QEMU/the guest

Currently we set used_length = max_length, unconditionally. Follow-up patches allow used_length <= max_length.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
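An illustrative C sketch of the shape of that split; this is not QEMU's actual RAMBlock definition, just the two fields and the invariant the commit introduces:

    #include <stdint.h>

    struct ram_block_example {
        uint8_t  *host;          /* start of the host mapping              */
        uint64_t  max_length;    /* bytes allocated for the block          */
        uint64_t  used_length;   /* bytes currently exposed to QEMU/guest; */
                                 /* invariant: used_length <= max_length   */
    };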
-
- 20 November 2014, 1 commit
-
-
Committed by ChenLiang
The static variables in migration_bitmap_sync will not be reset in the case of a second attempted migration.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
-
- 18 November 2014, 1 commit
-
-
Committed by Michael S. Tsirkin
During migration, the values read from the migration stream during RAM load are not validated, especially the offset in host_from_stream_offset() and also the length of the writes in the callers of said function. To fix this, we need to make sure that the [offset, offset + length] range fits into one of the allocated memory regions. Validating addr < len should be sufficient, since data seems to always be managed in TARGET_PAGE_SIZE chunks.

Fixes: CVE-2014-7840

Note: follow-up patches add extra checks on each block->host access.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
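A minimal sketch of the kind of bounds check this fix is about: an (offset, length) pair read from the migration stream must land inside the target memory block before it is turned into a host pointer. The function and parameter names are illustrative, not QEMU's actual code.

    #include <stdint.h>
    #include <stddef.h>

    static void *host_from_offset_checked(uint8_t *block_host,
                                          uint64_t block_len,
                                          uint64_t offset, uint64_t length)
    {
        /* overflow-safe form of: offset + length <= block_len */
        if (offset > block_len || length > block_len - offset) {
            return NULL;            /* reject a corrupted/malicious stream */
        }
        return block_host + offset;
    }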
-
- 14 October 2014, 1 commit
-
-
Committed by Peter Lieven
This patch extends commit db80face by not only checking for unknown flags, but also filtering out unknown flag combinations.

Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
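A small sketch of the general idea, with placeholder flag names rather than the real RAM_SAVE_FLAG_* values: reject any unknown flag bit, and also any combination the loader is not prepared to handle.

    #include <stdint.h>
    #include <errno.h>

    #define FLAG_ZERO   0x02    /* placeholder bits, not QEMU's values */
    #define FLAG_PAGE   0x08
    #define FLAG_EOS    0x10
    #define KNOWN_FLAGS (FLAG_ZERO | FLAG_PAGE | FLAG_EOS)

    static int check_ram_flags(uint64_t flags)
    {
        if (flags & ~KNOWN_FLAGS) {
            return -EINVAL;     /* an unknown flag bit is set */
        }
        /* reject combinations that never occur in a valid stream, e.g. a
         * page cannot be a zero page and a full page at the same time */
        if ((flags & FLAG_ZERO) && (flags & FLAG_PAGE)) {
            return -EINVAL;
        }
        return 0;
    }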
-
- 04 October 2014, 1 commit
-
-
Committed by Eduardo Habkost
As the function always returns 1, it is not needed anymore.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- 01 September 2014, 1 commit
-
-
Committed by Bastian Koppelmann
Add TriCore target stubs, a QOM CPU, and a MAINTAINERS entry.

Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-id: 1409572800-4116-2-git-send-email-kbastian@mail.uni-paderborn.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- 09 August 2014, 1 commit
-
-
Committed by Alex Bligh
When live migration fails due to a section length mismatch, we currently see an error message like:

    Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 10000 in != 20000

The section lengths are in fact in hex, so this should read:

    Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 0x10000 in != 0x20000

Correct the error string to reflect this.

Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
-
- 16 June 2014, 1 commit
-
-
Committed by Peter Lieven
If a saved VM has unknown flags in the memory data, QEMU currently simply ignores the flag and continues, which yields an unpredictable result. This patch catches all unknown flags and aborts loading of the VM. Additionally, error reports are emitted if the migration aborts abnormally.

Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 10 June 2014, 1 commit
-
-
Committed by Chen Gang
We call g_free() after cache_fini() in migration_end(), but we don't call it after cache_fini() in xbzrle_cache_resize(), leaking the memory. cache_init() and cache_fini() are a pair. Since cache_init() allocates the cache, let cache_fini() free it. This plugs the leak.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
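A minimal sketch of the ownership rule this fix establishes, using GLib allocation helpers; the structure and function names below are simplified placeholders, not QEMU's real page cache. Whoever allocates in cache_init() also frees in cache_fini(), so callers never g_free() the cache themselves.

    #include <glib.h>

    typedef struct {
        void   *page_data;
        size_t  num_pages;
    } cache_sketch_t;

    static cache_sketch_t *cache_init_sketch(size_t num_pages, size_t page_size)
    {
        cache_sketch_t *c = g_malloc0(sizeof(*c));

        c->page_data = g_malloc0(num_pages * page_size);
        c->num_pages = num_pages;
        return c;
    }

    static void cache_fini_sketch(cache_sketch_t *c)
    {
        g_free(c->page_data);
        g_free(c);          /* freeing the cache itself here plugs the leak */
    }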
-
- 24 May 2014, 1 commit
-
-
Committed by Le Tan
Replace fprintf(stderr, ...) with error_report() in the file arch_init.c. The trailing "\n"s of the @fmt argument have been removed because @fmt of error_report() should not contain a newline.

Signed-off-by: Le Tan <tamlokveer@gmail.com>
Reviewed-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
-
- 14 May 2014, 2 commits
-
-
Committed by Dr. David Alan Gilbert
ram_save_block is getting a bit too complicated, and does two separate things:

    1) Finds a page to send
    2) Sends the page (dealing with compression etc.)

Split (2) out into 'ram_save_page', which sends the page and deals with compression, and rename the remaining function to 'ram_find_and_save_block'.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Chen Gang
For xbzrle_decode_buffer(), when the decoded contents would exceed the write buffer it returns -1, so there is no need to check whether the return value is larger than the write buffer. And when a failure occurs within load_xbzrle(), it always returns -1 without holding any resources that need to be released, so the related checks can be removed, along with the 'rc' and 'ret' local variables.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 08 May 2014, 1 commit
-
-
Committed by Chen Gang
When DPRINTF() is in effect, the original author wants to print the results of all ram_load() calls, so 'goto' needs to be used instead of 'return' within ram_load(), just as other places already do.

Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
-
- 06 May 2014, 8 commits
-
-
Committed by ChenLiang
Expose the xbzrle cache miss rate.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by ChenLiang
Expose to the end user the count of how many times the dirty bitmap has been updated.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by ChenLiang
Add counters to log how many times the dirty bitmap has been updated.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by ChenLiang
The page may not have been inserted into the cache after executing save_xbzrle_page. In case the insert fails, the original page should be sent rather than the page in the cache.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
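A pseudocode-style sketch of the corrected decision; every type and helper below is a placeholder (declared only so the fragment is self-contained), not QEMU's real API. The page is encoded against the cache only when it was actually inserted; otherwise the raw page is sent.

    #include <stdint.h>

    struct out_stream;                /* stand-in types, not QEMU's */
    struct page_cache;

    int  cache_insert_sketch(struct page_cache *c, uint64_t addr,
                             const uint8_t *page);
    void send_raw_page_sketch(struct out_stream *f, uint64_t addr,
                              const uint8_t *page);
    void send_xbzrle_page_sketch(struct out_stream *f, struct page_cache *c,
                                 uint64_t addr, const uint8_t *page);

    static void save_page_sketch(struct out_stream *f, struct page_cache *c,
                                 uint64_t addr, const uint8_t *page)
    {
        if (cache_insert_sketch(c, addr, page) < 0) {
            /* insert failed: never diff against stale cache contents */
            send_raw_page_sketch(f, addr, page);
        } else {
            send_xbzrle_page_sketch(f, c, addr, page);
        }
    }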
-
Committed by ChenLiang
version_id is checked twice in ram_load.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Dr. David Alan Gilbert
Initialising the XBZRLE.lock earlier simplifies the lock use.

Based on Markus's patch in:
http://lists.gnu.org/archive/html/qemu-devel/2014-03/msg03879.html

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Dr. David Alan Gilbert
Provide ram_mig_init (like blk_mig_init) for vl.c to initialise stuff to do with ram migration (currently in arch_init.c).

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Committed by Dr. David Alan Gilbert
This is a fix for a bug* triggered by a migration after hot-unplugging a few virtio-net NICs, which caused migration never to converge because 'migration_dirty_pages' is incorrectly initialised.

'migration_dirty_pages' is used as a tally of the number of outstanding dirty pages, to give the migration code an idea of how much more data will need to be transferred, and thus whether it can end the iterative phase. It was initialised to the total size of the RAMBlock address space; however, hot-unplug can leave this space sparse, and hence migration_dirty_pages ended up too large.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(* https://bugzilla.redhat.com/show_bug.cgi?id=1074913 )
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 09 March 2014, 1 commit
-
-
Committed by Gonglei
Resizing the xbzrle cache during migration causes QEMU to crash, because the main thread and the migration thread modify the xbzrle cache size concurrently without lock protection.

Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
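A minimal sketch of the locking idea, in plain C with pthreads rather than QEMU's QemuMutex; all names are illustrative. The resize path (driven from the monitor in the main thread) and the migration thread must serialize on the same lock before touching the cache size.

    #include <pthread.h>
    #include <stdint.h>

    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
    static int64_t cache_size;          /* shared between the two threads */

    static int64_t cache_resize_sketch(int64_t new_size)
    {
        pthread_mutex_lock(&cache_lock);
        /* ...reallocate the cache to new_size here... */
        cache_size = new_size;
        pthread_mutex_unlock(&cache_lock);
        return cache_size;
    }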
-
- 25 February 2014, 1 commit
-
-
Committed by Dr. David Alan Gilbert
Push zeroed pages into the XBZRLE cache: a page that was cached by XBZRLE, zeroed, and then XBZRLE'd again was being compared against a stale cache value. Don't use 'qemu_put_buffer_async' to put pages from the XBZRLE cache, since the cache might change before the data hits the wire.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- 04 February 2014, 1 commit
-
-
Committed by Orit Wasserman
It is better to fail the migration in case of a failure to allocate a new cache item.

Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-