Re: [Qemu-devel] [PATCH 3/4] net/virtio: add failover support


From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 3/4] net/virtio: add failover support
Date: Fri, 7 Jun 2019 18:51:10 +0100
User-agent: Mutt/1.11.4 (2019-03-13)

* Jens Freimann (address@hidden) wrote:
> On Tue, Jun 04, 2019 at 08:00:19PM +0100, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (address@hidden) wrote:
> > > On Tue, Jun 04, 2019 at 03:43:21PM +0200, Jens Freimann wrote:
> > > > On Mon, Jun 03, 2019 at 04:36:48PM -0300, Eduardo Habkost wrote:
> > > > > On Mon, Jun 03, 2019 at 10:24:56AM +0200, Jens Freimann wrote:
> > > > > > On Fri, May 31, 2019 at 06:47:48PM -0300, Eduardo Habkost wrote:
> > > > > > > On Thu, May 30, 2019 at 04:56:45PM +0200, Jens Freimann wrote:
> > > > > > > > On Tue, May 28, 2019 at 11:04:15AM -0400, Michael S. Tsirkin 
> > > > > > > > wrote:
> > > > > > > > > On Tue, May 21, 2019 at 10:45:05AM +0100, Dr. David Alan 
> > > > > > > > > Gilbert wrote:
> > > > > > > > > > * Jens Freimann (address@hidden) wrote:
> > > > > > Why is it bad to fully re-create the device in case of a failed 
> > > > > > migration?
> > > > >
> > > > > Bad or not, I thought the whole point of doing it inside QEMU was
> > > > > to do something libvirt wouldn't be able to do (namely,
> > > > > unplugging the device while not freeing resources).  If we are
> > > > > doing something that management software is already capable of
> > > > > doing, what's the point?
> > > >
> > > > Even though management software seems to be capable of it, a failover
> > > > implementation has never happened. As Michael says, network failover is
> > > > a mechanism (there's no good reason not to use a PT device if it is
> > > > available), not a policy. We are now trying to implement it in a
> > > > simple way, contained within QEMU.
> > > >
> > > > > Quoting a previous message from this thread:
> > > > >
> > > > > On Thu, May 30, 2019 at 02:09:42PM -0400, Michael S. Tsirkin wrote:
> > > > > | > On Thu, May 30, 2019 at 07:00:23PM +0100, Dr. David Alan Gilbert wrote:
> > > > > | > >  This patch series is very odd precisely because it's trying to
> > > > > | > > do the unplug itself in the migration phase rather than let the
> > > > > | > > management layer do it - so unless it's nailed down how to make
> > > > > | > > sure that's really really bullet proof then we've got to go back
> > > > > | > > and ask the question about whether we should really fix it so it
> > > > > | > > can be done by the management layer.
> > > > > | > >
> > > > > | > > Dave
> > > > > | >
> > > > > | > management already said they can't because files get closed and
> > > > > | > resources freed on unplug and so they might not be able to re-add
> > > > > | > device on migration failure. We do it in migration because that is
> > > > > | > where failures can happen and we can recover.
> > > >
> > > > This is something that I can work on as well, but it doesn't have to
> > > > be part of this patch set in my opinion. Let's say migration fails
> > > > and we can't re-plug the primary device. We can still use the standby
> > > > (virtio-net) device, which would only mean slower networking. How
> > > > likely is it that the primary device is grabbed by another VM between
> > > > unplugging and migration failure anyway?
> > > >
> > > > regards,
> > > > Jens
> > > 
> > > I think I agree with Eduardo that it's very important to handle this corner
> > > case correctly. Fast networking outside migration is why people use
> > > failover at all.  Someone who can live with a slower virtio would use
> > > just that.
> > > 
> > > And IIRC this corner case is exactly why libvirt could not
> > > implement it correctly itself and had to push it up the stack
> > > until it fell off the cliff :).
> > 
> > So I think we need to have the code that shows we can cope with the
> > corner cases - or provide a way for libvirt to handle it (which is
> > my strong preference).
> 
> Would this work: we add a new migration state MIGRATE_WAIT_UNPLUG (or
> a better, more generic name) which tells libvirt that migration has not
> started yet because we are waiting for the guest, and extend the QMP
> events for the migration state. When we know the device was
> successfully unplugged, we send a QMP event DEVICE_DELETED or a new one
> DEVICE_DELETED_PARTIALLY (not sure about that yet), let migration
> start, and set the migration state to active?

Potentially; let's see what the libvirt people have to say.
What happens if you have multiple devices and one of them unplugs OK and
then the other fails?
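
For concreteness, here is a rough sketch (not part of this series) of what a
management client would see over QMP under that proposal. The "wait-unplug"
status string, the socket path and the migration URI below are made up for
the example; the QMP greeting/capabilities handshake, the migrate command,
query-migrate and the DEVICE_DELETED event already exist today.

/*
 * Rough sketch only: a management client driving/observing the proposed
 * handshake over QMP.  "wait-unplug", /tmp/qmp.sock and tcp:dst:4444 are
 * hypothetical; the commands and the DEVICE_DELETED event are existing QMP.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static ssize_t read_line(int fd, char *buf, size_t len)
{
    size_t n = 0;
    char c;

    /* QMP messages are newline-terminated JSON, one per line */
    while (n + 1 < len && read(fd, &c, 1) == 1) {
        buf[n++] = c;
        if (c == '\n') {
            break;
        }
    }
    buf[n] = '\0';
    return n;
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char line[4096];
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strcpy(addr.sun_path, "/tmp/qmp.sock");          /* hypothetical path */
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        return 1;
    }

    read_line(fd, line, sizeof(line));               /* QMP greeting */
    dprintf(fd, "{\"execute\": \"qmp_capabilities\"}\n");
    read_line(fd, line, sizeof(line));               /* {"return": {}} */

    dprintf(fd, "{\"execute\": \"migrate\","
                " \"arguments\": {\"uri\": \"tcp:dst:4444\"}}\n");

    /*
     * Under the proposal, query-migrate would report the new
     * wait-unplug-like state until the guest has acked the hot-unplug;
     * DEVICE_DELETED is the signal that the unplug actually happened.
     */
    while (read_line(fd, line, sizeof(line)) > 0) {
        if (strstr(line, "DEVICE_DELETED")) {
            printf("primary unplugged, expect migration to go 'active'\n");
            break;
        }
    }
    close(fd);
    return 0;
}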

> To do a partial unplug, I imagine we have to separate the vfio(-pci) code
> to distinguish between release of resources (fds, mappings, etc.) and
> unplug (I haven't yet found out how this works in vfio). In the failover
> case we would only do the unplug part, not the release part.
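
Purely as an illustration of that split (every name in this sketch is
invented; it is not how the vfio code is factored today), the boundary
would sit roughly like this:

/*
 * Illustration only: none of these names exist in QEMU/vfio, they just
 * mark where the unplug/release boundary would have to go.
 */
#include <stdbool.h>

typedef struct PrimaryDevice {
    int container_fd;       /* host-side resources we want to keep open */
    int device_fd;
    void *dma_mappings;
    bool guest_visible;     /* does the guest currently see the device? */
} PrimaryDevice;

/* Failover path: take the device away from the guest only. */
void failover_unplug(PrimaryDevice *dev)
{
    /* request hot-unplug, wait for the guest to ack it */
    dev->guest_visible = false;
    /* deliberately keep container_fd/device_fd/dma_mappings around */
}

/* Normal unplug: also give the host resources back. */
void full_unplug(PrimaryDevice *dev)
{
    failover_unplug(dev);
    /* release part: close fds, tear down DMA mappings */
}

/* Migration failed: the device was never released, so re-plugging it is
 * just making it guest-visible again instead of re-opening everything. */
void failover_replug(PrimaryDevice *dev)
{
    dev->guest_visible = true;
}

int main(void)
{
    PrimaryDevice dev = { .guest_visible = true };

    failover_unplug(&dev);   /* migration starts */
    failover_replug(&dev);   /* migration failed: cheap to undo */
    (void)full_unplug;
    return 0;
}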

Dave

> regards,
> Jens
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


