qemu-devel

Re: Best practices to handle shared objects through qemu upgrades?


From: Christian Ehrhardt
Subject: Re: Best practices to handle shared objects through qemu upgrades?
Date: Wed, 4 Mar 2020 10:37:44 +0100



On Fri, Nov 1, 2019 at 10:34 AM Daniel P. Berrangé <address@hidden> wrote:
On Fri, Nov 01, 2019 at 08:14:08AM +0100, Christian Ehrhardt wrote:
> Hi everyone,
> we've got a bug report recently - on handling qemu .so's through
> upgrades - that got me wondering how to best handle it.
> After checking with Paolo yesterday that there is no obvious solution
> that I missed we agreed this should be brought up on the list for
> wider discussion.
> Maybe there already is a good best practice out there, or if one
> doesn't exist we might want to agree upon one going forward.
> Let me outline the case and the ideas brought up so far.
>
> Case
> - You have qemu representing a Guest
> - Due to other constraints, e.g. device passthrough (PT), you can't
>   live migrate (which would be preferred)
> - You haven't used a specific shared object yet - let's say the RBD
>   storage driver as an example
> - Qemu gets an update, packaging replaces the .so files on disk
> - The Qemu process and the .so files on disk now have a mismatch in $buildid
> - If you hotplug an RBD device it will fail to load the (now new) .so

What happens when it fails to load ?  Does the user get a graceful
error message or does QEMU abort ? I'd hope the former.

>
> For almost any service other than "qemu representing a VM" the answer
> is "restart it"; some even re-exec in place to keep things up and
> running.
>
> Ideas so far:
> a) Modules are checked by build-id, so keep them in a per build-id dir on disk
>   - qemu could be made looking preferred in -$buildid dir first
>   - do not remove the packages with .so's on upgrades
> - needs a not-too-complex way to detect which build-ids running qemu
>   processes have, so packaging can "autoclean" later
>   - Needs some dependency juggling for Distro packaging but IMHO can be made
>     to work if above simple "probing buildid of running qemu" would exist

So this needs a bunch of special QEMU hacks in package mgmt tools
to prevent the package upgrade & cleanup later. This does not look
like a viable strategy to me.
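[Editorial note: the "probing buildid of running qemu" step that idea (a) depends on is mechanical ELF parsing. A minimal sketch, assuming Linux, a 64-bit little-endian ELF, and Python as the packaging-hook language (QEMU's own code is C; everything here is illustrative, not the actual tooling):

```python
import struct

PT_NOTE = 4          # program header type for note segments
NT_GNU_BUILD_ID = 3  # note type carrying the GNU build-id

def build_id(path):
    """Return the GNU build-id of an ELF binary as a hex string, or None.

    Handles 64-bit little-endian ELF only; a packaging hook would call
    this on /proc/<pid>/exe for each running qemu process.
    """
    with open(path, "rb") as f:
        ident = f.read(16)
        if ident[:4] != b"\x7fELF" or ident[4] != 2 or ident[5] != 1:
            return None
        # ELF64 header fields after e_ident, up to e_phnum
        (e_type, e_machine, e_version, e_entry, e_phoff, e_shoff,
         e_flags, e_ehsize, e_phentsize, e_phnum) = struct.unpack(
            "<HHIQQQIHHH", f.read(42))
        for i in range(e_phnum):
            f.seek(e_phoff + i * e_phentsize)
            p_type, p_flags, p_offset, _va, _pa, p_filesz = struct.unpack(
                "<IIQQQQ", f.read(40))
            if p_type != PT_NOTE:
                continue
            f.seek(p_offset)
            data = f.read(p_filesz)
            off = 0
            while off + 12 <= len(data):
                namesz, descsz, ntype = struct.unpack_from("<III", data, off)
                off += 12
                name = data[off:off + namesz]
                off += (namesz + 3) & ~3            # 4-byte aligned
                desc = data[off:off + descsz]
                off += (descsz + 3) & ~3
                if ntype == NT_GNU_BUILD_ID and name == b"GNU\x00":
                    return desc.hex()
    return None
```

e.g. `build_id("/proc/%d/exe" % pid)` would give packaging the string to compare against the build-id directories still installed on disk.]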

>
> b) Preload the modules before upgrade
>   - One could load the .so files before upgrade
>   - The open file reference will keep the content around even with the
> on disk file gone
> - lacking a 'load-module' command, this would require fake hotplugs,
>   which seems wrong
> - Requires additional upgrade pre-planning
>   - kills most benefits of modular code without an actual need for it
> being loaded
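[Editorial note: the "open file reference will keep the content around" behavior that idea (b) relies on is plain POSIX unlink semantics, not anything QEMU-specific. A quick Python illustration:

```python
import os
import tempfile

# A file standing in for a module .so: keep an fd open, then delete it
# from disk -- the data remains readable through the open descriptor,
# just as a dlopen()ed module survives its on-disk file being replaced.
fd, path = tempfile.mkstemp()
os.write(fd, b"fake module contents")
os.unlink(path)                      # the "package upgrade" removes it
assert not os.path.exists(path)      # gone from the filesystem...
os.lseek(fd, 0, os.SEEK_SET)
assert os.read(fd, 64) == b"fake module contents"  # ...but still readable
os.close(fd)
```
]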

Well, there are two benefits to the modular approach:

 - Allow a single build to be selectively installed on a host or container
   image, such that the install disk footprint is reduced
 - Allow a faster startup such that huge RBD libraries don't slow down
   startup of VMs not using RBD disks.

Preloading the modules before upgrade doesn't have to lose the second
benefit. We just have to make sure the preloading doesn't impact the VM
startup performance.

IOW, register a SIGUSR2 handler which preloads all modules it finds on
disk. Have a pre-uninstall option on the .so package that sends SIGUSR2
to all QEMU processes. The challenge of course is that signals are
async. You might suggest a QMP command, but only 1 process can have the
QMP monitor open at any time and that's libvirt. Adding a second QMP
monitor instance is possible but kind of gross for this purpose.

Another option would be to pre-load the modules during startup, but
do it asynchronously, so that it's not blocking overall VM startup.
e.g. just before starting the mainloop, spawn a background thread to
load all remaining modules.

This will potentially degrade performance of the guest CPUs a bit,
but avoids the latency spike from being synchronous in the startup
path.
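[Editorial note: the signal-driven variant sketched above can be shown in a few lines. A toy Python model, assuming SIGUSR2 as the trigger and a made-up module directory; real QEMU would do this in C with dlopen(), and the handler defers the heavy work to a background thread so the "guest" main loop is not blocked:

```python
import glob
import signal
import threading

PRELOADED = []

def preload_all():
    # Stand-in for dlopen()ing every module found on disk; here we just
    # record the paths. The directory is illustrative.
    PRELOADED.extend(sorted(glob.glob("/usr/lib/qemu/*.so")))

def on_sigusr2(signum, frame):
    # Keep the handler itself trivial: spawn a background thread to do
    # the loading, so the main loop is not blocked by it.
    threading.Thread(target=preload_all, daemon=True).start()

signal.signal(signal.SIGUSR2, on_sigusr2)
```

The package's pre-uninstall hook would then do something like `kill -USR2` on each running qemu pid, accepting the asynchrony caveat mentioned above.]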


> c) go back to non modular build
>   - No :-)
>
> d) anything else out there?

e) Don't do upgrades on a host with running VMs :-)

   Upgrades can break the running VM even ignoring this particular
   QEMU module scenario.

f) Simply document that if you upgrade with running VMs, some
   features like hotplug of RBD will become unavailable. Users can
   then avoid upgrades if that matters to them.

Hi,
I've come back to this after a while and now think all the preload or load-command ideas we had were in vain.
They would be overly complex and need a lot of integration in different places to trigger them.
None of that would be well integrated with the actual trigger of the issue, which usually is a package upgrade.

But qemu already tries to load modules from different places, and with a slight extension there I think we could
provide something that packaging (the actual place that knows about upgrades) can use to avoid this issue.
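[Editorial note: the packaging side of this scheme amounts to stashing the outgoing modules in a per-build-id fallback directory before the upgrade replaces them. A hedged sketch; the directory layout and function name are illustrative, not what the actual patch in [1]/[2] uses:

```python
import os
import shutil

def stash_modules(module_dir, fallback_root, build_id):
    """Copy currently installed modules into a per-build-id fallback
    directory before a package upgrade replaces them, so a running
    qemu with that build-id can still load matching .so files later.

    Paths are illustrative; a real pre-removal maintainer script would
    hardcode the distro's module and fallback locations.
    """
    dest = os.path.join(fallback_root, build_id)
    os.makedirs(dest, exist_ok=True)
    for name in os.listdir(module_dir):
        if name.endswith(".so"):
            shutil.copy2(os.path.join(module_dir, name), dest)
    return dest
```

qemu extended to also probe `<fallback_root>/<its own build-id>/` would then find build-id-matching modules even after the on-disk package changed.]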

I'll reply to this thread with a patch for your consideration in a few minutes.

There is already an Ubuntu 20.04 test build with the qemu and packaging changes in [1].
The related Debian/Ubuntu packaging changes themselves can be seen in [2].
I hope that helps to illustrate how it would work overall.

[1]: https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/3961
[2]: https://git.launchpad.net/~paelzer/ubuntu/+source/qemu/log/?h=bug-1847361-miss-old-so-on-upgrade-UBUNTU

 
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



--
Christian Ehrhardt
Staff Engineer, Ubuntu Server
Canonical Ltd
