On 06/06/2012 05:10 PM, Yonit Halperin wrote:
Hi,
I would like to add some more points to Gerd's explanation:
On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
Hi,
Absolutely not. This is hideously ugly and affects a bunch of code.
Spice is *not* getting a hook in migration where it gets to add
arbitrary amounts of downtime to the migration traffic. That's a
terrible idea.
I'd like to be more constructive in my response, but you aren't
explaining the problem well enough for me to offer an alternative
solution. You need to find another way to solve this problem.
Actually, this is not the first time we have raised these issues with you. For
example:
http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (the
first part of that discussion is not directly related to the
current one).
I'll try to explain in more detail:
As Gerd mentioned, migrating the spice connection smoothly requires
the src
server to keep running, and to send/receive data to/from the client, after
migration
has already completed, until the client has completely switched over to the
target. The
suggested patch series merely delays the migration state change from
ACTIVE to
COMPLETED/ERROR/CANCELED until spice signals that it has completed its part
of the migration.
As I see it, if a spice connection exists, its migration should be
treated as
an inseparable part of the whole migration process, and thus the
migration
state shouldn't change from ACTIVE until spice has completed its part.
Hence, I
don't think we should have a QMP event for signaling libvirt about
spice migration.
Spice client migration has nothing to do with guest migration. Trying to
abuse QEMU to support it is not acceptable.
The second challenge we are facing, which I addressed in the "plans"
part of the
cover-letter, and to which I think you (Anthony) actually replied, is
how to
migrate spice data from the src server to the target server.
Such data
can be usb/smartcard packets sent from a device connected on the
client side to the
server that haven't yet reached the device, or partial data that has
been read
from a guest character device but hasn't yet been sent to the
client. Other
data can be internal server-client state that we would wish to keep on
the
server in order to avoid establishing the connection to the target
from scratch,
and the slower responsiveness at startup that would come with it.
In the cover-letter I suggested transferring the spice migration data via
the vmstate
infrastructure. The other alternative, which we also discussed in the
link above,
is to transfer the data via the client. The latter also requires
keeping the src
process alive after migration completion, in order to finish
transferring the data from the src to the client.
To summarize, since we can still use the client to transfer data from
the src to
the target (instead of using vmstate), the major requirement of spice
is to
keep the src running after migration has completed.
So send a QMP event and call it a day.
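For reference, a QMP event signaling that spice has finished its part of the
migration would look something like the following on the wire (the event name
and lack of payload here are a sketch of what could be proposed, not a
settled interface):

```json
{ "event": "SPICE_MIGRATE_COMPLETED",
  "timestamp": { "seconds": 1338995400, "microseconds": 0 } }
```

libvirt could then wait for this event before tearing down the src process,
instead of QEMU delaying the migration state change internally.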