[Qemu-devel] [PATCH] migration: optimize the downtime


From: Jay Zhou
Subject: [Qemu-devel] [PATCH] migration: optimize the downtime
Date: Thu, 20 Jul 2017 11:49:29 +0800

qemu_savevm_state_cleanup() takes about 300ms in my RAM migration tests
with an 8U24G VM (20G actually occupied). The main cost comes from the
KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
kvm_set_user_memory_region(). On the kernel module (kmod) side, the main
cost is kvm_zap_obsolete_pages(), which traverses the active_mmu_pages
list to zap the unsync sptes.
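
For context, the expensive step is the memslot removal itself: deleting a
slot means issuing the same KVM_SET_USER_MEMORY_REGION ioctl that creates a
slot, but with memory_size set to 0, at which point the kernel drops the
slot and zaps its shadow pages. Below is a minimal standalone sketch of
that ioctl; vm_fd, the slot number and the guest physical address are
placeholders, not values taken from the QEMU code:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/*
 * Remove a memslot: the same ioctl that registers a slot, but with
 * memory_size = 0.  On the kernel side this is the path the numbers
 * above are attributed to: kvm_zap_obsolete_pages() walking the
 * active_mmu_pages list to zap the shadow pages.
 */
static int delete_memslot(int vm_fd, __u32 slot, __u64 guest_phys_addr)
{
    struct kvm_userspace_memory_region mem = {
        .slot            = slot,
        .flags           = 0,
        .guest_phys_addr = guest_phys_addr,
        .memory_size     = 0,   /* zero size tells KVM to drop the slot */
        .userspace_addr  = 0,
    };

    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &mem);
}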

I think this can be optimized:
(1) the source VM will be destroyed if the migration completes successfully,
    so its resources will be cleaned up automatically by the system
(2) delay the cleanup if the migration fails (see the timing sketch below)
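
To check how much of the downtime the deferred cleanup actually saves, the
cleanup call can be timed with a throwaway fragment like the one below
(illustration only, not part of the patch; it assumes placement inside
migration_thread() next to the existing qemu_clock_get_ms() calls, and uses
error_report() purely as a debug print):

    /* Temporary instrumentation around the cleanup call. */
    int64_t cleanup_start = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
    qemu_savevm_state_cleanup();
    int64_t cleanup_end = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
    error_report("qemu_savevm_state_cleanup() took %" PRId64 " ms",
                 cleanup_end - cleanup_start);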

Signed-off-by: Jay Zhou <address@hidden>
---
 migration/migration.c | 16 +++++++++-------
 qmp.c                 | 10 ++++++++++
 2 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/migration/migration.c b/migration/migration.c
index a0db40d..72832be 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1877,6 +1877,15 @@ static void *migration_thread(void *opaque)
         if (qemu_file_get_error(s->to_dst_file)) {
             migrate_set_state(&s->state, current_active_state,
                               MIGRATION_STATUS_FAILED);
+            /*
+             * The resources allocated by migration will be reused in the
+             * COLO process, so don't release them.
+             */
+            if (!enable_colo) {
+                qemu_mutex_lock_iothread();
+                qemu_savevm_state_cleanup();
+                qemu_mutex_unlock_iothread();
+            }
             trace_migration_thread_file_err();
             break;
         }
@@ -1916,13 +1925,6 @@ static void *migration_thread(void *opaque)
     end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
 
     qemu_mutex_lock_iothread();
-    /*
-     * The resource has been allocated by migration will be reused in COLO
-     * process, so don't release them.
-     */
-    if (!enable_colo) {
-        qemu_savevm_state_cleanup();
-    }
     if (s->state == MIGRATION_STATUS_COMPLETED) {
         uint64_t transferred_bytes = qemu_ftell(s->to_dst_file);
         s->total_time = end_time - s->total_time;
diff --git a/qmp.c b/qmp.c
index b86201e..0e68eaa 100644
--- a/qmp.c
+++ b/qmp.c
@@ -37,6 +37,8 @@
 #include "qom/object_interfaces.h"
 #include "hw/mem/pc-dimm.h"
 #include "hw/acpi/acpi_dev_interface.h"
+#include "migration/migration.h"
+#include "migration/savevm.h"
 
 NameInfo *qmp_query_name(Error **errp)
 {
@@ -200,6 +202,14 @@ void qmp_cont(Error **errp)
     if (runstate_check(RUN_STATE_INMIGRATE)) {
         autostart = 1;
     } else {
+        /*
+         * Delay the cleanup to reduce the downtime of migration.
+         * The resources allocated by migration will be reused in the
+         * COLO process, so don't release them.
+         */
+        if (runstate_check(RUN_STATE_POSTMIGRATE) && !migrate_colo_enabled()) {
+            qemu_savevm_state_cleanup();
+        }
         vm_start();
     }
 }
-- 
1.8.3.1




