Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices


From: Marc-André Lureau
Subject: Re: [Qemu-devel] [RFC 00/14] vhost-user backends for gpu & input virtio devices
Date: Wed, 8 Jun 2016 08:53:44 -0400 (EDT)

Hi

----- Original Message -----
> > > > - the vhost-user-backend is a helper object spawning, setting up and
> > > > holding a connection to a backend
> > > > - the vhost-user socket is set to be fd 3 in child process
> > > 
> > > Which implies a 1:1 relationship between object and backend.  Which
> > > isn't that great if we want allow for multiple backends in one process
> > > (your idea below, and I think it can be useful).
> > > 
> > 
> > That socket could use a different protocol to instantiate vhost-user
> > device/backends (passing vhost-user sockets per device)?
> 
> I'd tend to simply hand the backend process one unix socket path per
> device.  Maybe also allow libvirt to link things using monitor fd
> passing.
> 
> It's a little less automatic, but more flexible.

Having an explicit socket path is closer to the current vhost-user-net approach:

-chardev socket,id=char0,path=/tmp/vubr.sock
-netdev type=vhost-user,id=mynet1,chardev=char0

so we could have:

-chardev socket,id=char0,path=/tmp/vgpu.sock
-object vhost-user-backend,id=vug,chardev=char0
-device virtio-vga,virgl=true,vhost-user=vug

This is not incompatible with what I proposed, and I think that would be enough
to allow libvirt to link things using monitor fd passing.

Another option is to hide the vhost-user-backend object behind a property and
use the chardev only:

-chardev socket,id=char0,path=/tmp/vgpu.sock
-device virtio-vga,virgl=true,vhost-user=char0

But I found it more convenient to allow qemu to manage the backend process, if 
only for development.
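
For illustration, here is a minimal sketch (my assumption, not the patch code) of
how the vhost-user-backend object could spawn the child and hand it the vhost-user
socket as fd 3, as described at the top of the thread; spawn_vhost_backend() is a
made-up helper and error handling is mostly omitted:

#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: spawn a backend and give it the vhost-user socket as fd 3. */
static int spawn_vhost_backend(const char *backend_path)
{
    int sv[2];

    /* one end stays in qemu, the other becomes fd 3 in the child */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        return -1;
    }

    if (fork() == 0) {
        /* child: move the backend end to fd 3, then exec the backend */
        if (sv[1] != 3) {
            dup2(sv[1], 3);
            close(sv[1]);
        }
        close(sv[0]);
        execl(backend_path, backend_path, (char *)NULL);
        _exit(1);
    }

    close(sv[1]);
    return sv[0]; /* qemu keeps this end for the vhost-user protocol */
}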

> 
> > > > - there are device specific vhost-user messages to be added, such as
> > > > VHOST_USER_INPUT_GET_CONFIG, or we may use extra fd for communication
> > > > to pass to child during fork
> > > 
> > > Is that needed?  I think it should be possible to create device-agnostic
> > > messages for config access.
> > 
> > VHOST_USER_INPUT_GET_CONFIG is quite virtio-input specific, since it
> > returns the array of virtio_input_config, that is later read via
> > virtio config selection. Can this be generalized?
> 
> Well, not as one-time init call.  You have to forward every write access
> to the backend.  For read access the easiest would be to forward every
> access too.  Or have a shadow copy for read access which is updated
> after every write.

I see. But it would have to be explicit which devices require read/write config 
and which do not, and many config details would have to be specified on the 
backend side. So far, only input requires config data; gpu and net have "static" 
qemu-side config.
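
To make the read/write case concrete, a device-agnostic config interface could
look roughly like the sketch below; the message names and layout are my
assumptions for discussion, not existing vhost-user messages. Each guest config
write would be forwarded as a SET message, and reads either forwarded too or
served from a shadow copy refreshed after every write:

#include <stdint.h>

/* Hypothetical, device-agnostic config access (a possible generalization of
 * VHOST_USER_INPUT_GET_CONFIG); not part of the protocol today. */
typedef enum {
    VHOST_USER_GET_CONFIG_SKETCH = 100, /* master reads a config window from the backend */
    VHOST_USER_SET_CONFIG_SKETCH = 101, /* master forwards a guest config write */
} VhostUserConfigRequestSketch;

typedef struct {
    uint32_t offset;     /* offset into the device config space */
    uint32_t size;       /* number of bytes to read or write */
    uint8_t  data[256];  /* payload for SET, reply buffer for GET */
} VhostUserConfigSketch;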

> > > > - when there is a whole set of messages to add, like the VHOST_GPU*, I
> > > > decided to use a different socket, given to backend with
> > > > VHOST_USER_GPU_SET_SOCKET.
> > > 
> > > I would tend to send it all over the same socket.
> > 
> > It's possible, but currently the vhost-user protocol is unidirectional
> > (master/slave request/reply relationship). The backend cannot easily
> > send messages on its own. So besides reinventing some display protocol,
> > it is hard to fit into the vhost-user socket today.
> 
> Ok.  So maybe it isn't that useful to use vhost-user for the gpu?  The
> fundamental issue here is that qemu needs to process some of the
> messages.  So you send those back to qemu via VHOST_GPU*.
> 
> So maybe it works better when we continue to terminate the rings in
> qemu, then forward messages relevant for virglrenderer to the external
> process.

I would have to think about it; I am not sure how this would impact performance. 
I would rather teach the vhost-user protocol to be bidirectional (and async); 
there would be benefits to doing that for the protocol in general (the graceful 
shutdown request, for example, would benefit from such backend-side request 
support).
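
As a sketch of what a backend-initiated (slave-to-master) request could look like
if the protocol became bidirectional: the framing below just mirrors the existing
request/flags/size header and is an assumption for discussion, not an implemented
feature.

#include <stdint.h>

/* Hypothetical slave->master message header, modeled on the existing
 * vhost-user header layout (request, flags, size, then payload). */
typedef struct {
    uint32_t request; /* e.g. a hypothetical VHOST_USER_SLAVE_SHUTDOWN */
    uint32_t flags;   /* protocol version bits plus a "need reply" bit */
    uint32_t size;    /* size of the payload that follows */
} VhostUserSlaveMsgHeaderSketch;

The master would then have to watch the socket (or a second one) for such
requests and answer them asynchronously, which is the protocol change discussed
above.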

> 
> > > > I am not sold that we need to develop a new vhost protocol for the gpu
> > > > though. I am considering the Spice worker thread (handling cursor and
> > > > display) to actually run in the vhost backend.
> > > 
> > > Interesting idea, would save quite a few context switches for dma-buf
> > > passing.  But it also brings new challenges, vga compatibility for
> > > example.  Also spice channel management.  vdagent, ...
> > 
> > What I had in mind is to hand off only the cursor and display channel
> > to the vhost-gpu backend once the channel is up and the gpu is active.
> > Eventually hand it back to qemu when switching back to VGA (sounds
> > like it should be doable to me, but perhaps not worth it like this?)
> 
> It's not clear to me how you want to hand over the display channel from
> qemu (and spice-server running as a thread in qemu process context) to the
> vhost backend (running in a separate process).

The 10000ft view would be a qemu call like spice_qxl_steal(&state, &statesize, 
&fds, &nfds) that would gather all config and state-related data and the client 
fds for cursor and display (the qxl instance), and stop the worker thread. Then 
qemu would send this over to the backend, which would resume a worker thread with 
a call like spice_qxl_resume(state, fds). The server is not ready for this sort 
of operation today, though.
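
To make that concrete, the hypothetical API could have prototypes along these
lines; neither call exists in spice-server today, the names just follow the
paragraph above:

#include <stddef.h>

/* In qemu: serialize cursor/display channel state, hand out the client fds
 * and stop the worker thread. */
int spice_qxl_steal(void **state, size_t *state_size,
                    int **fds, size_t *nfds);

/* In the vhost-gpu backend: rebuild and resume a worker thread from the
 * stolen state and client fds. */
int spice_qxl_resume(const void *state, size_t state_size,
                     const int *fds, size_t nfds);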


