From: Tan, Jianfeng
Subject: Re: [Qemu-devel] [RFC] exec: eliminate ram naming issue as migration
Date: Tue, 6 Feb 2018 00:44:03 +0800
User-agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0



On 2/6/2018 12:19 AM, Paolo Bonzini wrote:
> On 05/02/2018 17:12, Tan, Jianfeng wrote:
>> Hi Paolo,
>>
>> On 2/5/2018 11:53 PM, Paolo Bonzini wrote:
>>> On 05/02/2018 15:58, Jianfeng Tan wrote:
>>>> Here are some options to fix this:
>>>>
>>>> 1. When we do ram name comparison, we truncate the prefix as this
>>>> patch shows. It cannot cover the corner case: the source VM could
>>>> have two ram blocks named "pc.ram" and "/objects/pc.ram".
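
For reference, option 1 boils down to stripping the "/objects/" prefix
before comparing RAM block names during migration. A minimal sketch of
that idea, not the actual patch and with made-up helper names, could
look like this:

    /*
     * Sketch only: compare two RAM block names while ignoring an
     * optional "/objects/" prefix, so that "pc.ram" on the source
     * matches "/objects/pc.ram" on the destination.
     */
    #include <stdbool.h>
    #include <string.h>

    static const char *ramblock_basename(const char *name)
    {
        static const char prefix[] = "/objects/";

        if (strncmp(name, prefix, strlen(prefix)) == 0) {
            return name + strlen(prefix);
        }
        return name;
    }

    static bool ramblock_names_match(const char *src, const char *dst)
    {
        return strcmp(ramblock_basename(src), ramblock_basename(dst)) == 0;
    }

This is also why the hotplug scenario below is a problem for option 1:
with both "pc.ram" and "/objects/pc.ram" present on the source, the
truncated names collide and the match becomes ambiguous.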
>>> That shouldn't happen ("pc.ram" exists even in the "-numa
>>> node,memdev=..." case, but it has no RAM block).
>> Suppose we have a VM started with "-m xG", and then hot plugged with a
>> ram block:
>>
>>   (qemu) object_add memory-backend-file,id=pc.ram,size=1G,mem-path=/dev/hugepages
>>   (qemu) device_add pc-dimm,id=pc.ram,memdev=pc.ram
>>
>> Then we would have both ram blocks named pc.ram:
>>
>>   Block Name          PSize
>>   pc.ram              4 KiB
>>   /objects/pc.ram     2 MiB
>>
>> But I assume it's a corner case which does not really happen.
> Yeah, you're right. :/  I hadn't thought of hotplug.  It can happen indeed.
>
>>> However, note that
>>>
>>>     -m xG -numa node,memdev=pc.ram \
>>>     -object memory-backend-file,id=pc.ram,...
>>>
>>> works for both vhost-kernel and vhost-user, so I'd rather consider this
>>> a configuration problem and not do anything.
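
Spelled out, that configuration would be something along the lines of the
command below. The 4G size, the hugepage path, and share=on are
placeholders/assumptions here; share=on is included only because
vhost-user needs the guest memory shared with the backend process:

    qemu-system-x86_64 \
        -m 4G \
        -object memory-backend-file,id=pc.ram,size=4G,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=pc.ram \
        ...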
>> That configuration indeed works for both. But in the production env,
>> lots of VMs are already started with the previous mem config. If we do
>> nothing, it will take a long time (shutdown/start for each VM) to
>> migrate to the new setup. This patch is to make this process smoother,
>> without any bad effect if possible.
> I understand.  However, it's not as bad as "there's no possibility at all
> to migrate from vhost-kernel to vhost-user".  There are cases that are
> more problematic: for example, there's no possibility at all to add a
> memory NUMA policy during a live migration, unless -object
> memory-backend-* was used on the source.
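
For context on that last point: a host NUMA binding policy is expressed
as properties of a memory-backend object, so it can only be specified
when -object memory-backend-* is in use. A hypothetical example (the id,
size, path and node number are placeholders):

    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,host-nodes=0,policy=bind \
    -numa node,memdev=mem0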

Please help me understand: are you saying that, for the reason you
mentioned, it's always recommended to use an -object memory-backend-*
configuration even with the vhost-kernel backend? Or is this just
another, more serious problem that we should work on as a priority?

Thanks,
Jianfeng


