[PATCH v6 08/10] Reduce the PVM stop time during Checkpoint
From: leirao
Subject: [PATCH v6 08/10] Reduce the PVM stop time during Checkpoint
Date: Thu, 8 Apr 2021 23:20:54 -0400
From: "Rao, Lei" <lei.rao@intel.com>
When flushing memory from the ram cache to ram during every checkpoint
on the secondary VM, we can copy contiguous chunks of memory instead of
4096 bytes at a time, reducing the VM stop time during the checkpoint.
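The batching idea above can be sketched outside QEMU with a simplified model. This is not the patch's code: a plain bool-per-page dirty map stands in for the RAMBlock bitmap, and the hypothetical helpers `find_dirty_run()` and `flush_cache()` play the roles of `colo_bitmap_find_dirty()` and `colo_flush_ram_cache()`. The point is that each run of consecutive dirty pages costs one memcpy instead of one per page.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for find_next_bit()/find_next_zero_bit():
 * locate the next run of dirty pages starting at @start.
 * Returns the first dirty page (or @npages if none) and stores the
 * run length in *run_len. */
static size_t find_dirty_run(const bool *dirty, size_t npages,
                             size_t start, size_t *run_len)
{
    size_t first = start;
    *run_len = 0;
    while (first < npages && !dirty[first]) {
        first++;
    }
    if (first >= npages) {
        return npages;                  /* no dirty page left */
    }
    size_t next = first + 1;
    while (next < npages && dirty[next]) {
        next++;
    }
    *run_len = next - first;
    return first;
}

/* Flush the cache with one memcpy per contiguous dirty run instead of
 * one memcpy per page. Returns how many memcpy calls were issued. */
static unsigned flush_cache(uint8_t *dst, const uint8_t *src,
                            bool *dirty, size_t npages)
{
    unsigned copies = 0;
    size_t offset = 0;
    while (offset < npages) {
        size_t num;
        offset = find_dirty_run(dirty, npages, offset, &num);
        if (offset >= npages) {
            break;
        }
        /* clear the dirty bits for the whole run, then copy it at once */
        memset(&dirty[offset], 0, num * sizeof(bool));
        memcpy(dst + offset * PAGE_SIZE, src + offset * PAGE_SIZE,
               num * PAGE_SIZE);
        offset += num;
        copies++;
    }
    return copies;
}
```

With pages 1-3 and page 6 dirty, the whole flush takes two memcpy calls rather than four, which is where the stop-time saving comes from.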
Signed-off-by: Lei Rao <lei.rao@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Tested-by: Lukas Straub <lukasstraub2@web.de>
---
migration/ram.c | 48 +++++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 45 insertions(+), 3 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index f9d60f0..8661d82 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -822,6 +822,41 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs,
RAMBlock *rb,
return next;
}
+/*
+ * colo_bitmap_find_dirty: find contiguous dirty pages from start
+ *
+ * Returns the page offset within the memory region of the start of the
+ * contiguous dirty pages
+ *
+ * @rs: current RAM state
+ * @rb: RAMBlock where to search for dirty pages
+ * @start: page where we start the search
+ * @num: the number of contiguous dirty pages
+ */
+static inline
+unsigned long colo_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
+ unsigned long start, unsigned long *num)
+{
+ unsigned long size = rb->used_length >> TARGET_PAGE_BITS;
+ unsigned long *bitmap = rb->bmap;
+ unsigned long first, next;
+
+ *num = 0;
+
+ if (ramblock_is_ignored(rb)) {
+ return size;
+ }
+
+ first = find_next_bit(bitmap, size, start);
+ if (first >= size) {
+ return first;
+ }
+ next = find_next_zero_bit(bitmap, size, first + 1);
+ assert(next >= first);
+ *num = next - first;
+ return first;
+}
+
static inline bool migration_bitmap_clear_dirty(RAMState *rs,
RAMBlock *rb,
unsigned long page)
@@ -3730,19 +3765,26 @@ void colo_flush_ram_cache(void)
block = QLIST_FIRST_RCU(&ram_list.blocks);
while (block) {
- offset = migration_bitmap_find_dirty(ram_state, block, offset);
+ unsigned long num = 0;
+ offset = colo_bitmap_find_dirty(ram_state, block, offset, &num);
if (((ram_addr_t)offset) << TARGET_PAGE_BITS
>= block->used_length) {
offset = 0;
+ num = 0;
block = QLIST_NEXT_RCU(block, next);
} else {
- migration_bitmap_clear_dirty(ram_state, block, offset);
+ unsigned long i = 0;
+
+ for (i = 0; i < num; i++) {
+ migration_bitmap_clear_dirty(ram_state, block, offset + i);
+ }
dst_host = block->host
+ (((ram_addr_t)offset) << TARGET_PAGE_BITS);
src_host = block->colo_cache
+ (((ram_addr_t)offset) << TARGET_PAGE_BITS);
- memcpy(dst_host, src_host, TARGET_PAGE_SIZE);
+ memcpy(dst_host, src_host, TARGET_PAGE_SIZE * num);
+ offset += num;
}
}
}
--
1.8.3.1