From: Alexey Kardashevskiy
Subject: [Qemu-devel] [PATCH] Revert "memory: syncronize kvm bitmap using bitmaps operations"
Date: Wed, 29 Jan 2014 16:50:39 +1100

This reverts commit ae2810c4bb3b383176e8e1b33931b16c01483aab.

This reverts the optimization introduced by the original patch, as it
breaks dirty page tracking on systems where
getpagesize() != TARGET_PAGE_SIZE, such as POWERPC64.

cpu_physical_memory_set_dirty_lebitmap() is called from
kvm_physical_sync_dirty_bitmap() with the bitmap returned by KVM's
KVM_GET_DIRTY_LOG ioctl, in which one bit corresponds to one system
(host) page. However, QEMU's ram_list.dirty_memory maps are allocated
to store one bit per TARGET_PAGE_SIZE, which is hardcoded to 4K.

Since a 64K system page size is a quite common configuration on PPC64,
the original patch breaks migration there.
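
To make the mismatch concrete, here is a minimal standalone sketch
(not part of the patch; the 64K/4K figures are assumptions chosen to
mirror a typical PPC64 host) showing how many TARGET_PAGE_SIZE dirty
bits a single KVM dirty-log bit has to cover:

#include <stdio.h>

/* Assumed page sizes for illustration: a 64K host page (PPC64) vs.
 * QEMU's hardcoded 4K TARGET_PAGE_SIZE. */
#define HOST_PAGE_SIZE   (64 * 1024)
#define TARGET_PAGE_SIZE (4 * 1024)

int main(void)
{
    /* hpratio: number of target pages covered by one host page */
    unsigned long hpratio = HOST_PAGE_SIZE / TARGET_PAGE_SIZE;

    /* One set bit in the KVM_GET_DIRTY_LOG bitmap marks one dirty host
     * page, so it must dirty hpratio consecutive bits in the per-4K
     * ram_list.dirty_memory bitmaps.  The reverted fast path ORs the
     * KVM bitmap word-for-word into those bitmaps, i.e. it sets only
     * one bit per dirty host page, which is only correct when
     * hpratio == 1. */
    printf("hpratio = %lu: one KVM bit must expand to %lu dirty bits\n",
           hpratio, hpratio);
    return 0;
}

With hpratio == 16 a word-for-word copy both under-reports dirty pages
and places the bits at the wrong target-page indices, which is why the
restored slow path below multiplies by hpratio before calling
cpu_physical_memory_set_dirty_range().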

Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
 include/exec/ram_addr.h | 54 +++++++++++++++++--------------------------------
 1 file changed, 18 insertions(+), 36 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 33c8acc..c6736ed 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -83,47 +83,29 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
                                                           ram_addr_t start,
                                                           ram_addr_t pages)
 {
-    unsigned long i, j;
+    unsigned int i, j;
     unsigned long page_number, c;
     hwaddr addr;
     ram_addr_t ram_addr;
-    unsigned long len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
+    unsigned int len = (pages + HOST_LONG_BITS - 1) / HOST_LONG_BITS;
     unsigned long hpratio = getpagesize() / TARGET_PAGE_SIZE;
-    unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
 
-    /* start address is aligned at the start of a word? */
-    if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
-        long k;
-        long nr = BITS_TO_LONGS(pages);
-
-        for (k = 0; k < nr; k++) {
-            if (bitmap[k]) {
-                unsigned long temp = leul_to_cpu(bitmap[k]);
-
-                ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION][page + k] |= temp;
-                ram_list.dirty_memory[DIRTY_MEMORY_VGA][page + k] |= temp;
-                ram_list.dirty_memory[DIRTY_MEMORY_CODE][page + k] |= temp;
-            }
-        }
-        xen_modified_memory(start, pages);
-    } else {
-        /*
-         * bitmap-traveling is faster than memory-traveling (for addr...)
-         * especially when most of the memory is not dirty.
-         */
-        for (i = 0; i < len; i++) {
-            if (bitmap[i] != 0) {
-                c = leul_to_cpu(bitmap[i]);
-                do {
-                    j = ffsl(c) - 1;
-                    c &= ~(1ul << j);
-                    page_number = (i * HOST_LONG_BITS + j) * hpratio;
-                    addr = page_number * TARGET_PAGE_SIZE;
-                    ram_addr = start + addr;
-                    cpu_physical_memory_set_dirty_range(ram_addr,
-                                       TARGET_PAGE_SIZE * hpratio);
-                } while (c != 0);
-            }
+    /*
+     * bitmap-traveling is faster than memory-traveling (for addr...)
+     * especially when most of the memory is not dirty.
+     */
+    for (i = 0; i < len; i++) {
+        if (bitmap[i] != 0) {
+            c = leul_to_cpu(bitmap[i]);
+            do {
+                j = ffsl(c) - 1;
+                c &= ~(1ul << j);
+                page_number = (i * HOST_LONG_BITS + j) * hpratio;
+                addr = page_number * TARGET_PAGE_SIZE;
+                ram_addr = start + addr;
+                cpu_physical_memory_set_dirty_range(ram_addr,
+                                                    TARGET_PAGE_SIZE * hpratio);
+            } while (c != 0);
         }
     }
 }
-- 
1.8.4.rc4



