
Re: [Qemu-devel] [RFC] postcopy livemigration proposal


From: Isaku Yamahata
Subject: Re: [Qemu-devel] [RFC] postcopy livemigration proposal
Date: Tue, 9 Aug 2011 11:07:12 +0900
User-agent: Mutt/1.5.19 (2009-01-05)

On Mon, Aug 08, 2011 at 10:47:09PM +0300, Dor Laor wrote:
> On 08/08/2011 06:59 PM, Anthony Liguori wrote:
>> On 08/08/2011 10:36 AM, Avi Kivity wrote:
>>> On 08/08/2011 06:29 PM, Anthony Liguori wrote:
>>>>
>>>>>>> - Efficient, reduce needed traffic no need to re-send pages.
>>>>>>
>>>>>> It's not quite that simple. Post-copy needs to introduce a protocol
>>>>>> capable of requesting pages.
>>>>>
>>>>> Just another subsection.. (kidding), still it shouldn't be too
>>>>> complicated, just an offset+pagesize and return page_content/error
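To make the "just an offset+pagesize" idea concrete, a minimal sketch of such a request/response encoding could look like the following. This is illustrative only, not QEMU's actual wire format; the field layout and status codes are assumptions:

```python
import struct

# Hypothetical postcopy page-request wire format (NOT QEMU's real protocol):
# the destination asks for one page by guest-physical offset, and the source
# answers with a status plus the page contents on success.
REQ_FMT = "!QI"   # request: offset (u64) + page size (u32), network byte order

def encode_request(offset, page_size=4096):
    """Build a page request for the given guest-physical offset."""
    return struct.pack(REQ_FMT, offset, page_size)

def decode_request(buf):
    """Return (offset, page_size) from an encoded request."""
    return struct.unpack(REQ_FMT, buf)

req = encode_request(0x7f000, 4096)
offset, size = decode_request(req)
```

The point of the sketch is that the request itself is tiny (12 bytes here); the cost Anthony raises below is not message size but the round trip each request implies.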
>>>>
>>>> What I meant by this is that there is potentially a lot of round-trip
>>>> overhead. Pre-copy migration works well even over reasonably
>>>> high-latency network connections because the downtime is capped only
>>>> by the maximum latency of sending from one point to another.
>>>>
>>>> But with something like this, the total downtime is
>>>> 2*max_latency*nb_pagefaults. That's potentially pretty high.
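A back-of-envelope calculation shows why this bound matters. The numbers below are purely illustrative assumptions, not measurements:

```python
# Anthony's bound: total added stall time during the postcopy phase is
# roughly 2 * max_latency * nb_pagefaults, one round trip per fault
# served over the network. Example values are assumptions for illustration.
max_latency = 0.5e-3      # 0.5 ms one-way latency (assumed)
nb_pagefaults = 100_000   # faults served remotely (assumed)

total_stall = 2 * max_latency * nb_pagefaults
print(total_stall)        # 100.0 seconds, spread across guest execution
```

Even a sub-millisecond link turns a few hundred thousand remote faults into on the order of a minute of cumulative stall, which is why reducing nb_pagefaults (prefaulting) comes up next.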
>>>
>>> Let's be generous and assume that the latency is dominated by page copy
>>> time. So the total downtime is equal to the first live migration pass,
>>> ~20 sec for 2GB on 1GbE. It's distributed over potentially even more
>>> time, though. If the guest does a lot of I/O, it may not be noticeable
>>> (esp. if we don't copy over pages read from disk). If the guest is
>>> cpu/memory bound, it'll probably suck badly.
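Avi's ~20 s figure checks out as raw transfer time plus overhead:

```python
# Sanity check on the estimate above: raw time to push 2 GB over 1 GbE.
ram_bytes = 2 * 2**30   # 2 GiB of guest RAM
link_bps = 1e9          # 1 GbE line rate

transfer_s = ram_bytes * 8 / link_bps   # ~17.2 s of pure wire time
```

About 17 s of wire time, so ~20 s including protocol overhead and dirty-page re-sends is the right ballpark.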
>>>
>>>>
>>>> So it may be desirable to try to reduce nb_pagefaults by prefaulting
>>>> in pages, etc. Suffice to say, this ends up getting complicated and
>>>> may end up burning network traffic too.
>
> It is complicated but can help (e.g. pre-faulting working-set pages).
> Beyond that, async page faults will help a bit.
> Lastly, if a guest runs several apps, the memory-intensive ones might
> suffer, but lightweight apps will function nicely.
> It provides extra flexibility over the current protocol (which still
> has value for some workloads).

We can also combine postcopy with precopy: for example, start the
migration in precopy mode and then switch it into postcopy mode at some
point.
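The hybrid scheme can be sketched as follows. This is a toy model of the control flow only (function names and the pass limit are made up, nothing here is QEMU code):

```python
# Hybrid pre/postcopy sketch: run bounded precopy passes so the dirty set
# shrinks, then switch to postcopy and serve whatever remains on demand.
def migrate(pages, is_dirty, send, max_precopy_passes=3):
    # Precopy phase: iteratively re-send pages dirtied since the last pass.
    for _ in range(max_precopy_passes):
        dirty = [p for p in pages if is_dirty(p)]
        if not dirty:
            return "converged"          # precopy alone finished the job
        for p in dirty:
            send(p)
    # Switch point: destination resumes execution; the leftover dirty
    # pages are now fetched on demand (postcopy) instead of re-sent.
    leftover = [p for p in pages if is_dirty(p)]
    return ("postcopy", leftover)
```

The attraction is that precopy bounds how many pages ever need a remote fault, while postcopy bounds total migration time when the dirty rate never converges.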

>
>>>
>>> Yeah, and prefaulting in the background adds latency to synchronous
>>> requests.
>>>
>>> This really needs excellent networking resources to work well.
>>
>> Yup, it's very similar to other technologies using RDMA (single system
>> image, lock step execution, etc.).
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>>
>>
>>
>

-- 
yamahata


