Commit f7cd55a0 authored by Alexey Kardashevskiy, committed by Juan Quintela

migration: Increase default max_downtime from 30ms to 300ms

The existing limit is 30ms, which on a 100MB/s (1Gbit) link allows at most
about 3MB of outstanding dirty memory to be transferred within the downtime
window. Under even moderate guest load the page dirtying rate easily exceeds
what can be caught up within that budget, so live migration never completes.
With libvirt this means the guest is stopped anyway once the timeout given to
the "virsh migrate" command expires, which normally causes an even bigger delay.
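
For context, a minimal C sketch of the arithmetic behind this reasoning (this
is not QEMU's actual code; the function name and the sample numbers are made
up for illustration): the final stop-and-copy phase can only start once the
remaining dirty memory can be sent within max_downtime at the measured
bandwidth, so 30ms at 100MB/s leaves a budget of only ~3MB, while 300ms
allows ~30MB.

/*
 * Illustrative sketch only (hypothetical helper, not QEMU's implementation):
 * how a downtime cap interacts with link bandwidth and remaining dirty data.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Migration may enter its final stop-and-copy phase only once the data
 * still to be sent fits into the downtime budget at the current bandwidth. */
static bool can_complete(uint64_t remaining_dirty_bytes,
                         uint64_t bandwidth_bytes_per_s,
                         uint64_t max_downtime_ns)
{
    /* expected downtime in nanoseconds = remaining bytes / bandwidth */
    uint64_t expected_ns = remaining_dirty_bytes * 1000000000ULL
                           / bandwidth_bytes_per_s;
    return expected_ns <= max_downtime_ns;
}

int main(void)
{
    const uint64_t bandwidth = 100ULL * 1024 * 1024;   /* ~100 MB/s (1Gbit) */
    const uint64_t old_limit = 30000000ULL;            /* 30 ms in ns  */
    const uint64_t new_limit = 300000000ULL;           /* 300 ms in ns */

    /* Example: 10MB of guest memory is still dirty after an iteration.
     * Under the 30ms cap migration can never finish; under 300ms it can. */
    uint64_t remaining = 10ULL * 1024 * 1024;

    printf("30 ms cap:  %s\n", can_complete(remaining, bandwidth, old_limit)
                               ? "can complete" : "cannot complete");
    printf("300 ms cap: %s\n", can_complete(remaining, bandwidth, new_limit)
                               ? "can complete" : "cannot complete");
    return 0;
}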

This changes max_downtime to 300ms, which seems to be a more
reasonable value.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Parent c6f6646c
@@ -133,7 +133,7 @@ void process_incoming_migration(QEMUFile *f)
  * the choice of nanoseconds is because it is the maximum resolution that
  * get_clock() can achieve. It is an internal measure. All user-visible
  * units must be in seconds */
-static uint64_t max_downtime = 30000000;
+static uint64_t max_downtime = 300000000;
 uint64_t migrate_max_downtime(void)
 {