qemu-devel

Re: [PATCH 4/5] migration: Teach QEMUFile to be QIOChannel-aware


From: Daniel P. Berrangé
Subject: Re: [PATCH 4/5] migration: Teach QEMUFile to be QIOChannel-aware
Date: Wed, 21 Jul 2021 11:57:39 +0100
User-agent: Mutt/2.0.7 (2021-05-04)

On Wed, Jul 21, 2021 at 11:27:44AM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu (peterx@redhat.com) wrote:
> > migration uses QIOChannel typed qemufiles.  In follow up patches, we'll
> > need the capability to identify this fact, so that we can get the
> > backing QIOChannel from a QEMUFile.
> > 
> > We can also define types for QEMUFile, but so far, since we only need
> > to be able to identify QIOChannel, introduce a boolean, which is simpler.
> > 
> > No functional change.
> > 
> > Signed-off-by: Peter Xu <peterx@redhat.com>
> 
> This is messy, but I can't see another quick way; the better way would be
> to add an OBJECT or QIOChannel wrapper for BlockDriverState.

I looked at making a QIOChannel for BlockDriverState, but it was not
as easy as it might seem.  The problem is that the QEMUFile
get_buffer / write_buffer methods take an offset at which the
I/O operation is required to be applied.

For the existing QIOChannel impl for migration, we simply ignore
the 'pos' argument entirely, since it is irrelevant for the main
migration channel doing streaming.

For a BlockDriverState-based impl, though, I think we need to
honour "pos" in some manner.

I think it ought to be possible to rewrite the savevm code
so that it uses 'seek' in the few places it needs to, and
then we can drop "pos" from get_buffer/write_buffer, but
that requires careful consideration.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
