From: Gerd Hoffmann
Subject: Re: [virtio-dev] Re: guest / host buffer sharing ...
Date: Fri, 8 Nov 2019 07:45:52 +0100
User-agent: NeoMutt/20180716

On Thu, Nov 07, 2019 at 11:16:18AM +0000, Dr. David Alan Gilbert wrote:
> * Gerd Hoffmann (address@hidden) wrote:
> >   Hi,
> > 
> > > > This is not about host memory; buffers are in guest ram.
> > > > Anything else would make sharing those buffers between drivers
> > > > inside the guest (as dma-buf) quite difficult.
> > > 
> > > Given it's just guest memory, can the guest just have a virtqueue
> > > on which it places pointers to the memory it wants to share as
> > > elements in the queue?
> > 
> > Well, good question.  I'm actually wondering what the best approach is
> > to handle long-lived, large buffers in virtio ...
> > 
> > virtio-blk (and others) use the approach you describe.  They put a
> > pointer to the io request header, followed by pointer(s) to the io
> > buffers, directly into the virtqueue.  That works great for storage,
> > for example.  The queue entries are tagged as "in" or "out" (driver
> > to device or vice versa), so the virtio transport can set up dma
> > mappings accordingly, or even transparently copy data if needed.
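> > 
> > A rough sketch of that pattern, using the Linux virtqueue API (the
> > request layout is simplified here, so treat it as illustrative
> > only):
> > 
> >     struct scatterlist hdr, data, status;
> >     struct scatterlist *sgs[3];
> > 
> >     /* "out" entries: driver -> device (header + write payload) */
> >     sg_init_one(&hdr, &req->out_hdr, sizeof(req->out_hdr));
> >     sg_init_one(&data, req->buf, req->len);
> >     /* "in" entry: device -> driver (status byte) */
> >     sg_init_one(&status, &req->status, sizeof(req->status));
> >     sgs[0] = &hdr; sgs[1] = &data; sgs[2] = &status;
> > 
> >     /* 2 out + 1 in; the transport maps (or copies) per direction */
> >     virtqueue_add_sgs(vq, sgs, 2, 1, req, GFP_ATOMIC);
> >     virtqueue_kick(vq);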
> > 
> > For long-lived buffers where data can potentially flow both ways
> > this model doesn't fit very well though.  So what virtio-gpu does
> > instead is transfer the scatter list as virtio payload.  That does
> > feel a bit unclean as it doesn't really fit the virtio architecture.
> > It assumes the host can directly access guest memory, for example
> > (which is usually the case but explicitly not required by virtio).
> > It also requires quirks in virtio-gpu to handle
> > VIRTIO_F_IOMMU_PLATFORM properly, which in theory should be handled
> > fully transparently by the virtio-pci transport.
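> > 
> > (For reference: that is the RESOURCE_ATTACH_BACKING command, where
> > the scatter list travels as plain command payload; structs as in
> > include/uapi/linux/virtio_gpu.h):
> > 
> >     struct virtio_gpu_mem_entry {
> >             __le64 addr;
> >             __le32 length;
> >             __le32 padding;
> >     };
> > 
> >     struct virtio_gpu_resource_attach_backing {
> >             struct virtio_gpu_ctrl_hdr hdr;
> >             __le32 resource_id;
> >             __le32 nr_entries;
> >             /* followed by nr_entries virtio_gpu_mem_entry structs */
> >     };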
> > 
> > We could instead have a "create-buffer" command which adds the
> > buffer pointers as elements to the virtqueue, as you describe, and
> > then simply keep using the buffer even after the "create-buffer"
> > command has completed.  That isn't exactly clean either.  It would
> > likewise assume direct access to guest memory, and it would likewise
> > need quirks for VIRTIO_F_IOMMU_PLATFORM, as the virtio-pci transport
> > tears down the dma mappings for the virtqueue entries after command
> > completion.
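> > 
> > A hypothetical sketch of that idea (no such command exists, all
> > names here are made up):
> > 
> >     /* "create-buffer": command header plus the buffer pages, all
> >      * placed on the queue as ordinary descriptors */
> >     sg_init_one(&cmd, &create_buf, sizeof(create_buf));
> >     sgs[0] = &cmd;
> >     for (i = 0; i < nr_pages; i++)
> >             sgs[1 + i] = &pages[i];
> >     virtqueue_add_sgs(vq, sgs, 1 + nr_pages, 0, cookie, GFP_KERNEL);
> > 
> >     /* problem: once the device marks the command used, the
> >      * transport unmaps the descriptors, but the buffer lives on */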
> > 
> > Comments, suggestions, ideas?
> 
> What about not completing the command while the device is using the
> memory?

I thought about that too, but I don't think it is a good idea for
buffers which exist for a long time.

Example #1:  A video decoder would set up a bunch of buffers and use
them round-robin, so they would exist until video playback is
finished.

Example #2:  virtio-gpu creates a framebuffer for fbcon which exists
forever.  And virtio-gpu potentially needs lots of buffers.  With 3d
active there can be tons of objects.  Although they typically don't
stay around that long, we would still need a pretty big virtqueue to
store them all, I guess.

And it also doesn't fully match the virtio spirit: it still assumes
direct guest memory access.  Without direct guest memory access,
updates to the fbcon object would never reach the host, for example.
If an iommu is present we might need additional dma map flushes for
updates happening after submitting the lingering "create-buffer"
command.
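
Roughly something like this on every guest-side update, assuming the
buffer pages stay dma-mapped from create time (just a sketch, the
names are made up):

    /* guest writes to the long-lived buffer ... */
    memcpy(fb_vaddr + off, src, len);
    /* ... then flushes so the device side sees the update */
    dma_sync_single_for_device(dev, fb_dma_addr + off, len,
                               DMA_TO_DEVICE);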

cheers,
  Gerd



