
Re: [Qemu-devel] About QEMU BQL and dirty log switch in Migration


From: Jay Zhou
Subject: Re: [Qemu-devel] About QEMU BQL and dirty log switch in Migration
Date: Fri, 19 May 2017 17:27:07 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Thunderbird/38.3.0

Hi Xiao,

On 2017/5/19 16:32, Xiao Guangrong wrote:

I do not know why I was removed from the list.

I did CC you...
Your comments are very valuable to us, and thanks for your quick response.


On 05/19/2017 04:09 PM, Jay Zhou wrote:
Hi Paolo and Wanpeng,

On 2017/5/17 16:38, Wanpeng Li wrote:
2017-05-17 15:43 GMT+08:00 Paolo Bonzini <address@hidden>:
Recently, I have tested the performance before migration and after migration
failure using spec cpu2006 (https://www.spec.org/cpu2006/), which is a standard
performance evaluation tool.

These are the steps:
======
  (1) the version of kmod is 4.4.11 (slightly modified) and the version of
      qemu is 2.6.0 (slightly modified); the following patch is applied to
      the kmod:

diff --git a/source/x86/x86.c b/source/x86/x86.c
index 054a7d3..75a4bb3 100644
--- a/source/x86/x86.c
+++ b/source/x86/x86.c
@@ -8550,8 +8550,10 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
          */
         if ((change != KVM_MR_DELETE) &&
                 (old->flags & KVM_MEM_LOG_DIRTY_PAGES) &&
-               !(new->flags & KVM_MEM_LOG_DIRTY_PAGES))
-               kvm_mmu_zap_collapsible_sptes(kvm, new);
+               !(new->flags & KVM_MEM_LOG_DIRTY_PAGES)) {
+               printk(KERN_ERR "zj make KVM_REQ_MMU_RELOAD request\n");
+               kvm_make_all_cpus_request(kvm, KVM_REQ_MMU_RELOAD);
+       }

         /*
          * Set up write protection and/or dirty logging for the new slot.
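
For context, the request queued above is consumed on the next entry of each
vcpu; in the 4.x-era arch/x86/kvm/x86.c the path in vcpu_enter_guest() looks
roughly like this (abridged, details vary by kernel version):

    /* vcpu_enter_guest(), abridged */
    if (kvm_check_request(KVM_REQ_MMU_RELOAD, vcpu))
            kvm_mmu_unload(vcpu);       /* drop this vcpu's MMU roots */
    ...
    r = kvm_mmu_reload(vcpu);           /* rebuild them before guest entry */

So instead of zapping only the collapsible sptes of the affected slot, the
modified kmod makes every vcpu tear down and rebuild its paging structures.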

Try these modifications to the setup:

1) set up 1G hugetlbfs hugepages and use those for the guest's memory

2) test both without and with the above patch.
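
A minimal sketch of what 1G-hugepage-backed guest memory amounts to; QEMU 2.6
does the equivalent internally when started with -mem-path pointing at a
hugetlbfs mount (plus -mem-prealloc). The mount point, file name and function
name below are only placeholders:

    /* assumes: mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void *alloc_guest_ram_1g(size_t size)   /* size: multiple of 1GB */
    {
            int fd = open("/dev/hugepages1G/guest-ram", O_CREAT | O_RDWR, 0600);
            void *p;

            if (fd < 0)
                    return NULL;
            if (ftruncate(fd, size) < 0) {
                    close(fd);
                    return NULL;
            }
            /* mmap of a hugetlbfs file is backed by that mount's page size */
            p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            close(fd);
            return p == MAP_FAILED ? NULL : p;
    }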


In order to avoid random memory allocation issues, I reran the test cases:
(1) setup: start a 4U10G VM (4 vCPUs, 10GB of memory) with the memory
preallocated; each vcpu is pinned to a pcpu, and all of the resources (memory
and pcpus) allocated to the VM come from NUMA node 0
(2) sequence: first, I run 429.mcf of spec cpu2006 before migration and get a
result. Then, a migration failure is constructed. Finally, I run the test case
again and get another result.
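
For reference, pinning each vcpu to a pcpu is just a cpu-affinity call on the
vcpu thread; in practice this is done with virsh vcpupin or taskset rather
than code like the sketch below, whose function name is only illustrative:

    #define _GNU_SOURCE
    #include <sched.h>

    /* Pin the calling (vcpu) thread to a single physical cpu. */
    static int pin_to_pcpu(int pcpu)
    {
            cpu_set_t set;

            CPU_ZERO(&set);
            CPU_SET(pcpu, &set);
            return sched_setaffinity(0, sizeof(set), &set); /* 0 = current thread */
    }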

I guess this case purely writes the memory, which means the read-only mappings
will always be dropped by #PF and huge mappings re-established.

Yes, I printed out the dirty page rate; it is about 1GB per second.

If the benchmark read memory instead, you should observe the difference.
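
The difference boils down to the access pattern; a minimal sketch of a write
pass versus a read pass (purely illustrative, not part of the benchmark):

    /* A write pass dirties every page, so each page takes the #PF that
     * re-establishes its mapping; a read-only pass over the same buffer
     * keeps using whatever mappings are already in place. */
    #include <stddef.h>
    #include <stdint.h>

    static void touch_write(uint8_t *buf, size_t len)
    {
            size_t i;

            for (i = 0; i < len; i += 4096)
                    buf[i]++;
    }

    static uint64_t touch_read(const uint8_t *buf, size_t len)
    {
            uint64_t sum = 0;
            size_t i;

            for (i = 0; i < len; i += 4096)
                    sum += buf[i];
            return sum;
    }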


OK, thanks for your suggestion!

Regards,
Jay Zhou