qemu-devel

From: Mark Mielke
Subject: [Qemu-devel] Live migration from Qemu 2.12 hosts to Qemu 3.2 hosts, with VMX flag enabled in the guest?
Date: Fri, 18 Jan 2019 00:32:31 -0500

Thank you for the work on nested virtualization. Having had live migrations
fail in the past when nested virtualization has been active, it is great to
see that clever people have been working on this problem!

My question: has a migration path been considered that would allow live
migration from Qemu 2.12 hosts to Qemu 3.2 hosts, with the VMX flag
enabled in the guest?

Qemu 2.12 doesn't know about the new nested state available from newer
Linux kernels, and it might be used on a machine with an older kernel that
doesn't make the nested state available. If Qemu 3.2 is on an up-to-date
host with an up-to-date kernel that does support the nested state, I'd like
to ensure we have the ability to try the migrations.

In the past, I've found that:

1) If the guest had used nested virtualization before, the migration often
fails. However, if we reboot the guest and do not use nested
virtualization, this simplifies to...
2) If the guest has never used nested virtualization before, the migration
succeeds.

I would like to leverage 2) as much as possible to migrate forward to Qemu
3.2 hosts (once it is available). I can normally enter a guest to check
whether 1) is likely, and handle those guests specially. If only 20% of the
guests have ever used nested virtualization, then I would like the option
to safely live-migrate the other 80%, and handle the remaining 20% as
exceptions.

This is the 3.1 change log that got my attention:


   - x86 machines cannot be live-migrated if nested Intel virtualization is
   enabled. The next version of QEMU will be able to do live migration when
   nested virtualization is enabled, if supported by the kernel.


I believe this is the change it refers to:

commit d98f26073bebddcd3da0ba1b86c3a34e840c0fb8
Author: Paolo Bonzini <address@hidden>
Date:   Wed Nov 14 10:38:13 2018 +0100

    target/i386: kvm: add VMX migration blocker

    Nested VMX does not support live migration yet.  Add a blocker
    until that is worked out.

    Nested SVM only does not support it, but unfortunately it is
    enabled by default for -cpu host so we cannot really disable it.

    Signed-off-by: Paolo Bonzini <address@hidden>


This particular check seems very simplistic:

+    if ((env->features[FEAT_1_ECX] & CPUID_EXT_VMX) && !vmx_mig_blocker) {
+        error_setg(&vmx_mig_blocker,
+                   "Nested VMX virtualization does not support live migration yet");
+        r = migrate_add_blocker(vmx_mig_blocker, &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+            error_free(vmx_mig_blocker);
+            return r;
+        }
+    }
+

It blocks migration whenever the VMX CPUID flag is exposed to the guest,
rather than only when nested virtualization has actually been used.

I'm concerned I will end up with a requirement for *all* guests to be
restarted in order to migrate them to the new hosts, rather than just the
ones that would have a problem.

Thoughts?

Thanks!

-- 
Mark Mielke <address@hidden>

