Re: RFC: New device for zero-copy VM memory access


From: Gerd Hoffmann
Subject: Re: RFC: New device for zero-copy VM memory access
Date: Tue, 5 Nov 2019 10:38:59 +0100
User-agent: NeoMutt/20180716

On Mon, Nov 04, 2019 at 09:31:47PM +1100, address@hidden wrote:
> 
> On 2019-11-04 21:26, Gerd Hoffmann wrote:
> > Hi,
> > 
> > > This new device, currently named `introspection` (which needs a more
> > > suitable name, porthole perhaps?), provides a means of translating
> > > guest physical addresses to host virtual addresses, and finally to the
> > > host offsets in RAM for file-backed memory guests. It does this by
> > > means of a simple protocol over a unix socket (chardev) which is
> > > supplied with the appropriate fd for the VM's system RAM. The guest (in
> > > this case, Windows), when presented with the address and size of a
> > > userspace buffer, will mlock the appropriate pages into RAM and pass
> > > guest physical addresses to the virtual device.
> > 
> > So, if I understand things correctly, the workflow looks like this:
> > 
> >   (1) guest allocates buffers, using guest ram.
> >   (2) guest uses these buffers as render target for the gpu
> >       (pci-assigned I guess?).
> >   (3) guest passes guest physical address to qemu (via porthole device).
> >   (4) qemu translates gpa into file offset and passes offsets to
> >       the client application.
> >   (5) client application maps all guest ram, then uses the offsets from
> >       qemu to find the buffers.  Then goes displaying these buffers I
> >       guess.
> > 
> > Correct?
> 
> Correct, however step 5 might be a proxy that copies the buffers into
> another porthole device in a second VM, allowing VM->VM transfers.
> 
> > Performance aside for now, is it an option for your use case to simply
> > use both an emulated display device and the assigned gpu, then configure
> > screen mirroring inside the guest to get the guest display scanned out
> > to the host?
> 
> Unfortunately no, NVidia and AMD devices do not support mirroring their
> outputs to a separate GPU unless it's a professional-grade GPU such as a
> Quadro or Firepro.

Ok.
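
Just to illustrate step (5): on the client side this boils down to
mmap of the guest ram file plus pointer arithmetic with the offsets
from step (4).  Rough sketch, all names invented, assuming ram_fd,
ram_size and buf_offset arrive via the porthole protocol:

    /* Map the whole guest ram file received from qemu and locate a
     * shared buffer by the offset qemu translated from the guest
     * physical address. */
    #include <stdint.h>
    #include <stddef.h>
    #include <sys/mman.h>

    static void *find_guest_buffer(int ram_fd, size_t ram_size,
                                   uint64_t buf_offset)
    {
        uint8_t *ram = mmap(NULL, ram_size, PROT_READ | PROT_WRITE,
                            MAP_SHARED, ram_fd, 0);

        if (ram == MAP_FAILED)
            return NULL;
        return ram + buf_offset;   /* the guest-allocated buffer */
    }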

We had discussions about buffer sharing between host and guest before.


One possible approach would be to use virtio-gpu for that, because it
already has the buffer management bits (and a lot of other stuff not
needed for this use case).  There is no support for shared buffers right
now (at the moment there are guest-side and host-side buffers and
commands for data transfers).  Shared buffer support is being worked on
though; this (and other changes) can be found here (look for the udmabuf
commits):
    https://git.kraxel.org/cgit/qemu/log/?h=sirius/virtio-gpu-memory-v2

Note: udmabuf is a Linux driver which allows creating dma-bufs from
guest memory pages.  These dma-bufs can be passed to other applications
using unix file descriptor passing; that way we could pass the buffers
from qemu to the client application.  The client can map them, or even
pass them on to the (host) gpu driver for display.
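
Creating such a dma-buf boils down to a single ioctl on /dev/udmabuf.
Rough sketch, error handling trimmed; the memfd backing guest ram must
be sealed against shrinking (F_SEAL_SHRINK) and offset/size must be
page-aligned:

    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/udmabuf.h>

    /* Turn a range of guest memory (backed by a memfd) into a dma-buf. */
    static int guest_range_to_dmabuf(int memfd, uint64_t offset,
                                     uint64_t size)
    {
        struct udmabuf_create create = {
            .memfd  = memfd,
            .flags  = UDMABUF_FLAGS_CLOEXEC,
            .offset = offset,              /* page-aligned */
            .size   = size,                /* page-aligned */
        };
        int devfd = open("/dev/udmabuf", O_RDWR);
        int buffd;

        if (devfd < 0)
            return -1;
        buffd = ioctl(devfd, UDMABUF_CREATE, &create);
        close(devfd);
        return buffd;                      /* dma-buf fd, or -1 on error */
    }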

Requiring a full-blown display device just for buffer sharing might be a
bit of overkill though.  Another obvious drawback for your specific use
case is that there are no virtio-gpu Windows drivers yet.


Another approach would be to design a new virtio device just for buffer
sharing.  It would probably be pretty simple, with one guest -> host
queue for sending buffer management commands.  Each buffer would be a
list of pages or (guest physical) address ranges.  Adding some
properties would probably be very useful too, so you can attach some
metadata to the buffers (e.g. id=42, application=porthole, width=1024,
height=768, stride=4096, format=XR24, you get the idea ...).
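
Something like this, purely as a strawman for the command layout on the
guest -> host queue (all names and fields are made up for illustration,
this is not an existing virtio spec):

    #include <stdint.h>

    /* One guest-physical address range of the buffer. */
    struct vbufshare_range {
        uint64_t gpa;          /* guest physical address */
        uint64_t len;          /* length in bytes */
    };

    /* "create buffer" command: a scatter list of ranges, followed by
     * free-form key=value property strings. */
    struct vbufshare_create {
        uint32_t buffer_id;    /* e.g. 42 */
        uint32_t nr_ranges;    /* entries in ranges[] */
        uint32_t nr_props;     /* nul-terminated strings after ranges[] */
        uint32_t padding;
        struct vbufshare_range ranges[];
        /* followed by nr_props strings, e.g. "application=porthole",
         * "width=1024", "height=768", "format=XR24" */
    };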

On the host side we could again have qemu use the udmabuf driver to
create dma-bufs and hand them out to other applications so they can use
the buffers.  Alternatively, use the vhost-user approach outlined
elsewhere in this thread.  Having qemu manage the buffers makes client
reconnects and multiple parallel applications a lot easier though.
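
Handing a dma-buf over to a client application is then just standard
unix file descriptor passing over the socket, roughly:

    #include <string.h>
    #include <sys/socket.h>

    /* Send one dma-buf fd over a connected unix socket via SCM_RIGHTS
     * (one byte of payload so the message is not empty). */
    static int send_dmabuf_fd(int sock, int buffd)
    {
        char payload = 0;
        struct iovec iov = { .iov_base = &payload, .iov_len = 1 };
        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;          /* for alignment */
        } ctrl;
        struct msghdr msg = {
            .msg_iov        = &iov,
            .msg_iovlen     = 1,
            .msg_control    = ctrl.buf,
            .msg_controllen = sizeof(ctrl.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &buffd, sizeof(int));
        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }

The client receives it with recvmsg and gets a new fd referring to the
same dma-buf, which it can then mmap or import into its gpu driver.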

cheers,
  Gerd



