From: Xiao Guangrong
Subject: [Qemu-devel] [PATCH] kvm: fix slot flags sync between Qemu and KVM
Date: Wed, 08 Apr 2015 14:34:54 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.5.0

We noticed that KVM keeps dirty tracking enabled for memslots
after a failed live migration, which causes bad performance
because huge page mappings are disallowed for such memslots.

It is caused by the slot flags not being properly synced between
Qemu and KVM. The current slot-update code relies on slot->flags
to omit unnecessary ioctls. However, slot->flags only reflects
the status of the corresponding memory region, while vmsave and
live migration do dirty tracking, which sets
KVM_MEM_LOG_DIRTY_PAGES for the slot without updating
slot->flags. As a result, the slot status recorded in the flags
does not exactly match the status in the kernel.
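
To make the failure concrete, here is a minimal, self-contained
sketch of the pre-patch skip test (the '-' line in the last hunk
below); the scenario in main() is reconstructed from this
description, not copied from kvm-all.c:

#include <stdbool.h>
#include <stdio.h>

#define KVM_MEM_LOG_DIRTY_PAGES 1   /* 1UL << 0 in <linux/kvm.h> */

/*
 * Pre-patch skip test from kvm_set_migration_log(): the ioctl is
 * considered redundant when slot->flags already agrees with @enable.
 */
static bool old_skip_ioctl(int slot_flags, bool enable)
{
    return !!(slot_flags & KVM_MEM_LOG_DIRTY_PAGES) == enable;
}

int main(void)
{
    /*
     * Migration start: only the flags handed to the
     * KVM_SET_USER_MEMORY_REGION ioctl get KVM_MEM_LOG_DIRTY_PAGES
     * ORed in; slot->flags stays 0, yet the kernel is now logging.
     *
     * Migration failure/teardown (enable == false): slot->flags is
     * still 0, so the old test reports the ioctl as redundant and
     * the kernel is never told to stop logging.
     */
    printf("teardown ioctl skipped: %d\n", old_skip_ioctl(0, false));
    return 0;                        /* prints 1: the bug */
}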

We fix it by introducing slot->is_dirty_logging, which records
the dirty-logging status in the kernel and thereby lets us keep
the status in sync between userspace and kernel.
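
And a matching sketch of the post-patch decision;
slot_update_needed() below is a hypothetical helper that just
replays the new comparison added to kvm_set_migration_log() in
the last hunk:

#include <stdbool.h>
#include <stdio.h>

#define KVM_MEM_LOG_DIRTY_PAGES 1   /* 1UL << 0 in <linux/kvm.h> */

/*
 * Post-patch test: compare the kernel-side state (is_dirty_logging)
 * against the desired state, keeping the dirty bit whenever the
 * memory region itself tracks it.
 */
static bool slot_update_needed(int slot_flags, bool is_dirty_logging,
                               bool enable)
{
    int dirty_enable = enable | (slot_flags & KVM_MEM_LOG_DIRTY_PAGES);

    return is_dirty_logging != dirty_enable;
}

int main(void)
{
    /*
     * Same failed-migration teardown as before: the kernel is
     * logging (is_dirty_logging == true) although the region never
     * asked for it (slot_flags == 0). Disabling migration
     * (enable == false) now correctly reaches the kernel.
     */
    printf("teardown ioctl issued: %d\n",
           slot_update_needed(0, true, false));
    return 0;                        /* prints 1: the fix */
}

Because the comparison now keys off what the kernel actually has,
the disable path can no longer be skipped by a stale slot->flags.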

Wanpeng Li <address@hidden>
Signed-off-by: Xiao Guangrong <address@hidden>
---
 kvm-all.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/kvm-all.c b/kvm-all.c
index dd44f8c..69fa233 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -60,6 +60,15 @@

 #define KVM_MSI_HASHTAB_SIZE    256

+/*
+ * @flags only reflects the status of the corresponding memory region;
+ * however, vmsave and live migration do dirty tracking which sets
+ * KVM_MEM_LOG_DIRTY_PAGES for the slot. That causes the slot status
+ * recorded in @flags to not exactly match the status in the kernel.
+ *
+ * @is_dirty_logging indicates the dirty-logging status in the kernel
+ * and helps us to sync the status between userspace and kernel.
+ */
 typedef struct KVMSlot
 {
     hwaddr start_addr;
@@ -67,6 +76,7 @@ typedef struct KVMSlot
     void *ram;
     int slot;
     int flags;
+    bool is_dirty_logging;
 } KVMSlot;

 typedef struct kvm_dirty_log KVMDirtyLog;
@@ -245,6 +255,7 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
         kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
     }
     mem.memory_size = slot->memory_size;
+    slot->is_dirty_logging = !!(mem.flags & KVM_MEM_LOG_DIRTY_PAGES);
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }

@@ -312,6 +323,7 @@ static int kvm_slot_dirty_pages_log_change(KVMSlot *mem, bool log_dirty)
     int old_flags;

     old_flags = mem->flags;
+    old_flags |= mem->is_dirty_logging ? KVM_MEM_LOG_DIRTY_PAGES : 0;

     flags = (mem->flags & ~mask) | kvm_mem_flags(s, log_dirty, false);
     mem->flags = flags;
@@ -376,12 +388,17 @@ static int kvm_set_migration_log(bool enable)
     s->migration_log = enable;

     for (i = 0; i < s->nr_slots; i++) {
+        int dirty_enable;
+
         mem = &s->slots[i];

         if (!mem->memory_size) {
             continue;
         }
-        if (!!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES) == enable) {
+
+        /* Keep the dirty bit if it is tracked by the memory region. */
+        dirty_enable = enable | (mem->flags & KVM_MEM_LOG_DIRTY_PAGES);
+        if (mem->is_dirty_logging == dirty_enable) {
             continue;
         }
         err = kvm_set_user_memory_region(s, mem);
--
2.1.0



