

From: Anthony Liguori
Subject: Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
Date: Wed, 06 Jun 2012 17:22:21 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120329 Thunderbird/11.0.1

On 06/06/2012 05:10 PM, Yonit Halperin wrote:
Hi,

I would like to add some more points to Gerd's explanation:
On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
Hi,

Absolutely not. This is hideously ugly and affects a bunch of code.

Spice is *not* getting a hook in migration where it gets to add
arbitrary amounts of downtime to the migration traffic. That's a
terrible idea.

I'd like to be more constructive in my response, but you aren't
explaining the problem well enough for me to offer an alternative
solution. You need to find another way to solve this problem.
Actually, this is not the first time we have raised these issues with you. For
example: http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (the
first part of that discussion is not directly related to the current one).
I'll try to explain in more detail:

As Gerd mentioned, migrating the spice connection smoothly requires the src
server to keep running and to send/receive data to/from the client after migration
has already completed, until the client has completely transferred to the target.
The suggested patch series only delays the migration state change from ACTIVE to
COMPLETED/ERROR/CANCELED until spice signals that it has completed its part of the
migration.
As I see it, if a spice connection does exist, its migration should be treated as
an inseparable part of the whole migration process, and thus the migration
state shouldn't change from ACTIVE until spice has completed its part. Hence, I
don't think we should have a QMP event for signaling libvirt about spice
migration.
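To make the mechanism a bit more concrete, a rough sketch in C of how such a
handler could look on the spice side, reusing QEMU's existing migration
state-change notifiers; the completion call named spice_migration_handler_done()
below is purely a placeholder for whatever interface the patch series actually
introduces, not an existing function:

  #include "notify.h"      /* Notifier */
  #include "migration.h"   /* add_migration_state_change_notifier() */

  static Notifier spice_migration_notifier;

  static void spice_migration_state_cb(Notifier *notifier, void *data)
  {
      MigrationState *s = data;

      (void)s;
      /* Called when migration wants to leave ACTIVE: start the
       * spice-server handover to the target here.  Under the proposed
       * scheme the transition to COMPLETED/ERROR/CANCELED would only be
       * committed once spice reports back, e.g. via a hypothetical
       * spice_migration_handler_done(s). */
  }

  static void spice_register_migration_hook(void)
  {
      spice_migration_notifier.notify = spice_migration_state_cb;
      add_migration_state_change_notifier(&spice_migration_notifier);
  }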

Spice client migration has nothing to do with guest migration. Trying to abuse QEMU to support it is not acceptable.

The second challenge we are facing, which I addressed in the "plans" part of the
cover letter, and to which I think you (Anthony) actually replied, is how to
tackle migrating spice data from the src server to the target server. Such data
can be usb/smartcard packets that were sent from a device connected on the client
to the server and haven't reached the device yet, or partial data that has been
read from a guest character device and hasn't been sent to the client yet. Other
data can be internal server-client state that we would like to keep on the
server in order to avoid establishing the connection to the target from scratch,
and possibly also to avoid slower responsiveness at the start.
In the cover letter I suggested transferring the spice migration data via the
vmstate infrastructure. The other alternative, which we also discussed in the
link above, is to transfer the data via the client. The latter also requires
keeping the src process alive after migration completion, in order to finish
transferring the data from the src to the client.
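To make the vmstate alternative a bit more concrete, a minimal sketch of
registering such a section with QEMU's vmstate infrastructure could look like
the following; the SpiceMigrationData structure, its fields, and the section
name are invented for illustration, only the VMState macros and
vmstate_register() are the real API:

  typedef struct SpiceMigrationData {
      uint32_t pending_len;          /* bytes queued client -> device */
      uint8_t  pending_buf[4096];    /* e.g. usb/smartcard or chardev data */
  } SpiceMigrationData;

  static SpiceMigrationData spice_mig_data;

  static const VMStateDescription vmstate_spice_migration = {
      .name = "spice-migration-data",        /* invented section name */
      .version_id = 1,
      .minimum_version_id = 1,
      .fields = (VMStateField[]) {
          VMSTATE_UINT32(pending_len, SpiceMigrationData),
          VMSTATE_BUFFER(pending_buf, SpiceMigrationData),
          VMSTATE_END_OF_LIST()
      }
  };

  static void spice_migration_register(void)
  {
      /* No device associated with this state, hence NULL dev / instance 0. */
      vmstate_register(NULL, 0, &vmstate_spice_migration, &spice_mig_data);
  }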


To summarize: since we can still use the client to transfer data from the src to
the target (instead of using vmstate), the major requirement of spice is to
keep the src running after migration has completed.

So send a QMP event and call it a day.

Regards,

Anthony Liguori


Yonit.


Very short version: The requirement is simply to not kill qemu on the
source side until the source spice-server has finished session handover
to the target spice-server.

Long version: spice-client connects automatically to the target
machine, so the user ideally doesn't notice that his virtual machine was
just migrated over to another host.

Today this happens via "switch-host", which is a simple message asking
the spice client to connect to the new host.

We want to move to a "seamless migration" model where we don't start over
from scratch, but hand over the session from the source to the target.
Advantage is that various state cached in spice-client will stay valid
and doesn't need to be retransmitted. It also requires a handshake
between spice-servers on source and target. libvirt killing qemu on the
source host before the handshake is done isn't exactly helpful.

[ Side note: In theory this issue exists even today: in case the data
pipe to the client is full, spice-server will queue up the switch-host
message and qemu might be killed before it is sent out. In practice
it doesn't happen though, because it goes through the low-traffic main
channel, so the socket buffers usually have enough space. ]

So, the big question is how to tackle the issue?

Option (1): Wait until spice-server is done before signaling completion
to libvirt. This is what this patch series implements.

Advantage is that it is completely transparent for libvirt; that's why I
like it.

Disadvantage is that it indeed adds a small delay for the spice-server
handshake. The target qemu doesn't process main loop events while the
incoming migration is running, and because of that the spice-server
handshake doesn't run in parallel with the final stage of vm migration,
which it could in theory.

BTW: There will be no "arbitrary amounts of downtime". Seamless spice
client migration is pretty pointless if it doesn't finish within a
fraction of a second, so we can go with a very short timeout there.

Option (2): Add a new QMP event which is emitted when spice-server is
done, then make libvirt wait for it before killing qemu.

Obvious disadvantage is that it requires libvirt changes.
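Just to illustrate the qemu side of option (2): with the current monitor code,
emitting such an event would be roughly the following; QEVENT_SPICE_MIGRATE_COMPLETED
is only a placeholder name here, the new MonitorEvent value and its name string
would have to be added to monitor.h / monitor.c first:

  static void spice_migration_completed_event(void)
  {
      /* No payload needed; libvirt would simply wait for this event on
       * the source before killing qemu. */
      monitor_protocol_event(QEVENT_SPICE_MIGRATE_COMPLETED, NULL);
  }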

Option (3): Your suggestion?

thanks,
Gerd





