Re: [Qemu-devel] [PATCHv2] virtio: verify that all outstanding buffers are flushed


From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCHv2] virtio: verify that all outstanding buffers are flushed
Date: Wed, 12 Dec 2012 21:23:36 +0200

On Wed, Dec 12, 2012 at 06:39:15PM +0100, Paolo Bonzini wrote:
> > On 12/12/2012 18:14, Michael S. Tsirkin wrote:
> > On Wed, Dec 12, 2012 at 05:51:51PM +0100, Paolo Bonzini wrote:
> >>> On 12/12/2012 17:37, Michael S. Tsirkin wrote:
> >>>> You wrote "the only way to know head 1 is outstanding is because backend
> >>>> has stored this info somewhere".  But the backend _is_ tracking it (by
> >>>> serializing and then restoring the VirtQueueElement) and no leak happens
> >>>> because virtqueue_fill/flush will put the head on the used ring sooner
> >>>> or later.
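
(A minimal sketch of the completion path described above, against QEMU's
virtio API of the time; the function name is illustrative and error handling
is omitted.  Pushing a VirtQueueElement -- whether it was just popped, or
restored from the device's migration data on the destination -- is what
eventually puts its head on the used ring.)

    /* Complete one request whose VirtQueueElement was popped on the source
     * and restored by the device's load callback on the destination. */
    static void complete_restored_request(VirtIODevice *vdev, VirtQueue *vq,
                                          VirtQueueElement *elem,
                                          unsigned int len)
    {
        virtqueue_push(vq, elem, len);   /* virtqueue_fill() + virtqueue_flush() */
        virtio_notify(vdev, vq);         /* tell the guest the head is now used */
    }
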
> >>>
> >>> If you did this before savevm, inuse would be 0.
> >>
> >> No, I won't.  I want a simple API that the device can call to keep inuse
> >> up-to-date.  Perhaps a bit ugly compared to just saving inuse, but it
> >> works.  Or are there other bits that need resyncing besides inuse?  Bits
> >> that cannot be recovered from the existing migration data?
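
(A sketch of the kind of helper being proposed here -- hypothetical, not an
existing QEMU function.  It would live in hw/virtio.c next to the VirtQueue
definition, and a device would call it once per in-flight element it restores
in its load callback.)

    /* Hypothetical, illustrative only: let a device that restores an
     * in-flight VirtQueueElement from its own migration data bump the core
     * counter, so vq->inuse is accurate again after load. */
    void virtqueue_mark_inuse(VirtQueue *vq)
    {
        vq->inuse++;
    }
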
> > 
> > Saving the inuse counter is useless. We need to know which requests
> > are outstanding if we want to retry them on the remote.
> 
> And that's what virtio-blk and virtio-scsi have been doing for years.

I don't see it - all I see in save is virtio_save.
Where's the extra code to save the elements in flight
and send them to the remote?

> They store the VirtQueueElement including the index and the sglists.
> Can you explain *why* the index is not enough to reconstruct the state
> on the destination?  There may be bugs and you may need help from
> virtio_blk_load, but that's okay.
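
(For reference, roughly the save side in hw/virtio-blk.c at the time -- a
simplified sketch, field and function names following that code.  After the
generic virtio state, each request the device still holds on its internal
list is written out as a full VirtQueueElement, head index and sglists
included, with a byte marker terminating the list.)

    static void virtio_blk_save(QEMUFile *f, void *opaque)
    {
        VirtIOBlock *s = opaque;
        VirtIOBlockReq *req;

        virtio_save(&s->vdev, f);          /* rings, last_avail_idx, ... */

        for (req = s->rq; req; req = req->next) {
            qemu_put_sbyte(f, 1);          /* one more element follows */
            qemu_put_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));
        }
        qemu_put_sbyte(f, 0);              /* end of in-flight list */
    }
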
> 
> >>> You said that at the point where we save state,
> >>> some entries are outstanding. It is too late to
> >>> put the head at that point.
> >>
> >> I don't want to put the head on the source.  I want to put it on the
> >> destination, when the request is completed.  Same as it is done now,
> >> with bugfixes of course.  Are there any problems doing so, except that
> >> inuse will not be up-to-date (easily fixed)?
> > 
> > You have an outstanding request that is behind last avail index.
> > You do not want to complete it. You migrate. There is no
> > way for the remote to understand that the request is outstanding.
> 
> The savevm callbacks know which request is outstanding and pass the
> information to the destination.  See virtio_blk_save and virtio_blk_load.
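
(And the matching load side, again a simplified sketch of the
virtio_blk_load-era code, version checks omitted: the serialized elements are
read back and re-queued, so the destination knows exactly which heads are
outstanding and completes them through virtqueue_fill/flush when the requests
finish there.)

    static int virtio_blk_load(QEMUFile *f, void *opaque, int version_id)
    {
        VirtIOBlock *s = opaque;
        int ret = virtio_load(&s->vdev, f);
        if (ret) {
            return ret;
        }

        while (qemu_get_sbyte(f)) {        /* one in-flight element follows */
            VirtIOBlockReq *req = g_malloc0(sizeof(*req));
            req->dev = s;
            qemu_get_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));
            req->next = s->rq;             /* re-queue so it is retried */
            s->rq = req;
        }
        return 0;
    }
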
> 
> What is not clear, and you haven't explained, is how you get to a bug in
> the handling of the avail ring.  What's wrong with this explanation:
> 
>    A 1
>    A 2
>    U 2
>    A 2
>    U 2
>    A 2
>    U 2
>    A 2     <---
>    U 2
> 
> where before the point marked with the arrow, the avail ring is
> 
>    1 2 2 2
> 
>    vring_avail_idx(vq) == 3
>    last_avail_idx == 3
> 
> and after the point marked with the arrow, the avail ring is
> 
>    2 2 2 2
>    vring_avail_idx(vq) == 4
>    last_avail_idx == 3
> 
> ?!?

You need to retry A1 on the remote. How do you do that? There's
no way to find out from the ring itself
that it has not been completed.
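
(To spell out why: the rings only record what the guest has offered and what
the device has completed; the standard virtio ring layout is shown below.  A
head such as A1 that has been consumed from the avail ring but not yet written
to the used ring appears in neither index, so the rings alone cannot tell the
destination that it is still outstanding -- the device has to carry that
information in its own saved state.)

    #include <stdint.h>

    /* Standard (legacy) virtio ring layout, as mirrored in hw/virtio.c. */
    typedef struct VRingAvail {
        uint16_t flags;
        uint16_t idx;       /* guest increments this when it offers a head */
        uint16_t ring[];    /* head indices, in the order they were offered */
    } VRingAvail;

    typedef struct VRingUsedElem {
        uint32_t id;        /* head index of a *completed* request */
        uint32_t len;       /* number of bytes the device wrote back */
    } VRingUsedElem;

    typedef struct VRingUsed {
        uint16_t flags;
        uint16_t idx;       /* device increments this when it completes a head */
        VRingUsedElem ring[];
    } VRingUsed;
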


> >>>> It's not common, but you cannot block migration because you have an I/O
> >>>> error.  Solving the error may involve migrating the guests away from
> >>>> that host.
> >>>
> >>> No, you should complete it with an error.
> >>
> >> Knowing that the request will fail, the admin will not be able to do
> >> the migration, even if migrating would solve the error transparently.
> > 
> > You are saying there's no way to complete all requests?
> 
> With an error, yes.  Transparently after fixing the error (which may
> involve migration), no.
> 
> Paolo


