[Qemu-devel] Re: [RFC] KVM Fault Tolerance: Kemari for KVM


From: Yoshiaki Tamura
Subject: [Qemu-devel] Re: [RFC] KVM Fault Tolerance: Kemari for KVM
Date: Wed, 18 Nov 2009 22:28:46 +0900

2009/11/17 Yoshiaki Tamura <address@hidden>:
> Avi Kivity wrote:
>>
>> On 11/16/2009 04:18 PM, Fernando Luis Vázquez Cao wrote:
>>>
>>> Avi Kivity wrote:
>>>>
>>>> On 11/09/2009 05:53 AM, Fernando Luis Vázquez Cao wrote:
>>>>>
>>>>> Kemari runs paired virtual machines in an active-passive configuration
>>>>> and achieves whole-system replication by continuously copying the
>>>>> state of the system (dirty pages and the state of the virtual devices)
>>>>> from the active node to the passive node. An interesting implication
>>>>> of this is that during normal operation only the active node is
>>>>> actually executing code.
>>>>>
>>>>
>>>> Can you characterize the performance impact for various workloads?  I
>>>> assume you are running continuously in log-dirty mode.  Doesn't this make
>>>> memory intensive workloads suffer?
>>>
>>> Yes, we're running continuously in log-dirty mode.
>>>
>>> We still do not have numbers to show for KVM, but
>>> the snippets below from several runs of lmbench
>>> using Xen+Kemari will give you an idea of what you
>>> can expect in terms of overhead. All the tests were
>>> run using a fully virtualized Debian guest with
>>> hardware nested paging enabled.
>>>
>>>                     fork exec   sh    P/F  C/S   [us]
>>> ------------------------------------------------------
>>> Base                  114  349 1197 1.2845  8.2
>>> Kemari(10GbE) + FC    141  403 1280 1.2835 11.6
>>> Kemari(10GbE) + DRBD  161  415 1388 1.3145 11.6
>>> Kemari(1GbE) + FC     151  410 1335 1.3370 11.5
>>> Kemari(1GbE) + DRBD   162  413 1318 1.3239 11.6
>>> * P/F=page fault, C/S=context switch
>>>
>>> The benchmarks above are memory intensive and, as you
>>> can see, the overhead varies widely from 7% to 40%.
>>> We also measured CPU bound operations, but, as expected,
>>> Kemari incurred almost no overhead.
>>
>> Is lmbench fork that memory intensive?
>>
>> Do you have numbers for benchmarks that use significant anonymous RSS?
>>  Say, a parallel kernel build.
>>
>> Note that scaling vcpus will increase a guest's memory-dirtying power but
>> snapshot rate will not scale in the same way.
>
> I don't think lmbench is memory intensive, but it's sensitive to memory latency.
> We'll measure kernel build time with a minimal config and post it later.
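
For background on the log-dirty question above: every RAM slot stays
registered with dirty logging enabled, and the dirty bitmap is read back
on each synchronization. On the KVM side this amounts to roughly the
following (a minimal sketch with a single memory slot; error handling is
omitted, and the fd/variable names are only for illustration, not our
actual code):

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <string.h>

/* Register guest RAM with dirty logging enabled (done once at setup). */
int enable_dirty_logging(int vm_fd, void *host_ram, unsigned long ram_size)
{
    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .flags           = KVM_MEM_LOG_DIRTY_PAGES,
        .guest_phys_addr = 0,
        .memory_size     = ram_size,
        .userspace_addr  = (unsigned long)host_ram,
    };
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

/* Pull the dirty bitmap for slot 0 (one bit per guest page; the buffer
 * must cover ram_size / 4096 bits).  The caller then copies the pages
 * whose bits are set to the passive node. */
int fetch_dirty_bitmap(int vm_fd, void *bitmap)
{
    struct kvm_dirty_log log;

    memset(&log, 0, sizeof(log));
    log.slot = 0;
    log.dirty_bitmap = bitmap;
    return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
}

The run-time cost is roughly the write faults the guest takes on the first
touch of each page after the bitmap is cleared, plus the bitmap walk and
the copy of the dirty pages, which is why memory-intensive workloads see
the larger hit.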

Here are some quick numbers for parallel kernel compile time.
The number of vcpus is 1, just for convenience.

time make -j 2 all
-----------------------------------------------------------------------------
Base:    real 1m13.950s (user 1m2.742s, sys 0m10.446s)
Kemari: real 1m22.720s (user 1m5.882s, sys 0m10.882s)

time make -j 4 all
-----------------------------------------------------------------------------
Base:    real 1m11.234s (user 1m2.582s, sys 0m8.643s)
Kemari: real 1m26.964s (user 1m6.530s, sys 0m12.194s)

The Kemari results include everything, i.e. both dirty page tracking and
synchronization to the passive node upon I/O operations to the disk.
The compile time with -j 4 under Kemari was worse than with -j 2 (the
overhead over the base run grows from roughly 12% to roughly 22%), but
I'm not sure whether this is due to dirty page tracking or to the
synchronization interval.
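
To be explicit about what "synchronization upon I/O" means here, one epoch
has roughly the shape of the pseudo-code below; all the helper names are
placeholders for illustration, not our actual interfaces:

typedef struct VMState   VMState;     /* the active guest       */
typedef struct IORequest IORequest;   /* a pending disk write   */

void pause_vcpus(VMState *vm);
void resume_vcpus(VMState *vm);
void send_dirty_pages(VMState *vm);   /* pages flagged in the KVM dirty bitmap */
void send_device_state(VMState *vm);  /* serialized virtual device state       */
void wait_for_passive_ack(void);
void issue_io(IORequest *req);

void kemari_sync_on_disk_write(VMState *vm, IORequest *req)
{
    pause_vcpus(vm);            /* freeze the epoch's RAM image           */
    send_dirty_pages(vm);       /* RAM dirtied since the previous epoch   */
    send_device_state(vm);      /* so the passive node can take over      */
    wait_for_passive_ack();     /* the epoch is now recoverable           */
    issue_io(req);              /* only then let the write reach the disk */
    resume_vcpus(vm);
}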

Thanks,

Yoshi



