From: Gerd Hoffmann
Subject: Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
Date: Wed, 08 Jun 2016 08:11:52 +0200

On Tue, 2016-06-07 at 11:01 -0400, Marc-André Lureau wrote:
> Hi
> 
> ----- Original Message -----
> > On Mon, 2016-06-06 at 15:54 +0200, Marc-André Lureau wrote:
> > > Hi Gerd
> > > 
> > > Thanks for your feedback on the series. Your remarks are all valid,
> > > but before doing more work I would like to know if there is enough
> > > interest. It duplicates work and adds some complexity. Also, some
> > > general feedback on design would be welcome.
> > > 
> > > What is proposed in this series:
> > > - the vhost-user-backend is a helper object spawning, setting up and
> > > holding a connection to a backend
> > > - the vhost-user socket is set to be fd 3 in the child process
> > 
> > Which implies a 1:1 relationship between object and backend, which
> > isn't that great if we want to allow multiple backends in one process
> > (your idea below, and I think it can be useful).
> > 
> 
> That socket could use a different protocol to instantiate vhost-user
> devices/backends (passing one vhost-user socket per device)?

I'd tend to simply hand the backend process one unix socket path per
device.  Maybe also allow libvirt to link things using monitor fd
passing.

It's a little less automatic, but more flexible.
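
Something along these lines (just an illustration, option handling and
names are made up, not an existing interface):

/* hypothetical backend startup: one unix socket path per device,
 * passed on the command line instead of a single pre-opened fd 3 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int connect_device_socket(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0) {
        return -1;
    }
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;                 /* one vhost-user connection per device */
}

int main(int argc, char **argv)
{
    /* e.g.: backend /run/vhost-gpu.sock /run/vhost-input.sock */
    for (int i = 1; i < argc; i++) {
        int fd = connect_device_socket(argv[i]);
        if (fd < 0) {
            fprintf(stderr, "cannot connect to %s\n", argv[i]);
            return 1;
        }
        /* hand fd over to a per-device vhost-user slave loop here */
    }
    return 0;
}

libvirt could still pass pre-opened fds instead of paths (SCM_RIGHTS
over the monitor socket), the backend side would look the same.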

> > > - there are device-specific vhost-user messages to be added, such as
> > > VHOST_USER_INPUT_GET_CONFIG, or we may use an extra fd for
> > > communication to pass to the child during fork
> > 
> > Is that needed?  I think it should be possible to create device-agnostic
> > messages for config access.
> 
> VHOST_USER_INPUT_GET_CONFIG is quite virtio-input specific, since it
> returns the array of virtio_input_config, which is later read via
> virtio config selection. Can this be generalized?

Well, not as a one-time init call.  You have to forward every write
access to the backend.  For read access the easiest would be to forward
every access too, or to keep a shadow copy for reads which is updated
after every write.
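
To illustrate the shadow-copy variant (the generic message names and
wire format below are made up, nothing like this exists in the protocol
today):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define CONFIG_SPACE_MAX 256

/* hypothetical wire format for a device-agnostic config access */
typedef struct {
    uint32_t request;                   /* 1 = read, 2 = write (made up) */
    uint32_t offset;
    uint32_t len;
    uint8_t  payload[CONFIG_SPACE_MAX];
} ConfigMsg;

typedef struct {
    uint8_t shadow[CONFIG_SPACE_MAX];   /* last known backend view */
    size_t  size;
} ConfigSpace;

static int config_rw(int sock, bool is_write, uint32_t offset,
                     uint8_t *buf, uint32_t len)
{
    ConfigMsg msg = {
        .request = is_write ? 2 : 1,
        .offset  = offset,
        .len     = len,
    };

    if (is_write) {
        memcpy(msg.payload, buf, len);
    }
    if (write(sock, &msg, sizeof(msg)) != sizeof(msg)) {
        return -1;
    }
    if (!is_write) {
        /* reply carries the config bytes back, error handling omitted */
        return read(sock, buf, len) == (ssize_t)len ? 0 : -1;
    }
    return 0;
}

/* guest reads are served from the shadow copy, no backend round trip */
static uint8_t config_read(ConfigSpace *cfg, uint32_t offset)
{
    return offset < cfg->size ? cfg->shadow[offset] : 0;
}

/* guest writes are forwarded, then the shadow is refreshed */
static void config_write(ConfigSpace *cfg, int sock,
                         uint32_t offset, uint8_t val)
{
    config_rw(sock, true, offset, &val, 1);
    config_rw(sock, false, 0, cfg->shadow, cfg->size);
}

The extra round trip per write shouldn't matter, config space writes
are rare.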

> > > - when there is a whole set of messages to add, like the VHOST_GPU*, I
> > > decided to use a different socket, given to backend with
> > > VHOST_USER_GPU_SET_SOCKET.
> > 
> > I would tend to send it all over the same socket.
> 
> It's possible, but currently the vhost-user protocol is unidirectional
> (a master/slave request/reply relationship). The backend cannot easily
> send messages on its own. So besides reinventing some display protocol,
> it is hard to fit into the vhost-user socket today.

Ok.  So maybe it isn't that useful to use vhost-user for the gpu?  The
fundamental issue here is that qemu needs to process some of the
messages.  So you send those back to qemu via VHOST_GPU*.

So maybe it works better if we continue to terminate the rings in
qemu, then forward the messages relevant for virglrenderer to the
external process.
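
Roughly like this (types and helpers are made up, not the existing
virtio-gpu code, just to show the split):

#include <stddef.h>
#include <stdint.h>
#include <unistd.h>

/* simplified stand-in for the virtio-gpu control command header */
struct gpu_cmd_hdr {
    uint32_t type;
    uint32_t flags;
};

enum {
    CMD_RESOURCE_FLUSH = 0x0104,    /* 2d/display path, stays in qemu */
    CMD_SUBMIT_3D      = 0x0207,    /* virgl path, gets forwarded */
};

/* scanout/cursor/2d handling stays in the qemu process as today (stub) */
static void handle_display_cmd(struct gpu_cmd_hdr *hdr,
                               void *payload, size_t len)
{
}

/* qemu keeps terminating the ring; only virgl work crosses the process
 * boundary to the external renderer */
static void dispatch_ctrl_cmd(int renderer_sock, struct gpu_cmd_hdr *hdr,
                              void *payload, size_t len)
{
    switch (hdr->type) {
    case CMD_SUBMIT_3D:
        if (write(renderer_sock, hdr, sizeof(*hdr)) < 0 ||
            write(renderer_sock, payload, len) < 0) {
            /* error handling omitted in this sketch */
        }
        break;
    default:
        handle_display_cmd(hdr, payload, len);
        break;
    }
}

That keeps vga compatibility and display handling where they are today
and sidesteps the bidirectional protocol problem, the external process
only ever receives commands.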

> > > I am not sold that we need to develop a new vhost protocol for the gpu
> > > though. I am considering having the Spice worker thread (handling
> > > cursor and display) actually run in the vhost backend.
> > 
> > Interesting idea, it would save quite a few context switches for
> > dma-buf passing.  But it also brings new challenges, vga compatibility
> > for example.  Also spice channel management, vdagent, ...
> 
> What I had in mind is to hand off only the cursor and display channel
> to the vhost-gpu backend once the channel is up and the gpu is active.
> Eventually hand it back to qemu when switching back to VGA (sounds
> like it should be doable to me, but perhaps not worth it like this?)

It's not clear to me how you want to hand over the display channel from
qemu (and spice-server, running as a thread in the qemu process context)
to the vhost backend (running in a separate process).

cheers,
  Gerd



