qemu-devel

Re: [Qemu-devel] gpu and console chicken and egg


From: Gerd Hoffmann
Subject: Re: [Qemu-devel] gpu and console chicken and egg
Date: Thu, 05 Dec 2013 09:52:03 +0100

  Hi,

> > Hmm, why does it depend on the UI?  Wasn't the plan to render into a
> > dma-buf no matter what?  Then either read the rendered result from the
> > dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
> > to the compositor?
> 
> That would be the hopeful plan; however, so far my brief investigation
> suggests I'm possibly being a bit naive about what EGL can do. I'm still
> talking to the EGL and wayland people about how best to model this, but
> either way this won't work with nvidia drivers, which is a case we need
> to handle, so we need to interact between the UI GL usage and the renderer.

Hmm.  That implies we simply can't combine hardware-accelerated 3d
rendering with vnc, correct?

> Also, I'd assume non-Linux platforms would want this in some way, at
> least so that virtio-gpu is usable with qemu on them.

Yes, the non-3d part should have no Linux dependency and should be
available on all platforms.

> GL isn't that simple, and I'm not sure I can make it that simple,
> unfortunately; the renderer requires certain extensions on top of the
> base GL 2.1 and GL 3.0. Live migration with none might be the first
> answer, and then we'd have to expend serious effort on making live
> migration work across different GL drivers. Reading everything back
> while rendering continues could be a lot of fun (or pain).

We probably want to start with gl={none,host} then, with live migration
supported only with "none".

If we can't combine remote displays with 3d rendering (nvidia issue
above) live migration with 3d makes little sense anyway.

> I don't think this will let me change the feature bits, though, since
> the virtio PCI layer has already picked them up. I just wondered if we
> have any examples of changing features later.

I think you can.  There are no helper functions for it, though; you
probably have to walk the data structures and fiddle with the bits
directly.

Maybe it is easier to just have a command line option to enable/disable
3d globally, and a global variable with the 3d status.  Being able to
turn off all 3d is probably useful anyway.  Either as a standalone option
or as a display option (i.e. -display sdl,3d={on,off,auto}).  Then do a
simple check for 3d availability when *parsing* the options.  That'll
also remove the need for the 3d option for virtio-gpu, it can just check
the global flag instead.

> I should probably resubmit the multi-head changes and SDL2 changes and
> we should look at merging them first.

Yes.

> a) dma-buf/EGL, EGLImage vs EGLStream: nothing exists upstream, so the
> timeframe is unknown. I don't think we should block merging on this;
> also, dma-buf doesn't exist on Windows/MacOSX, so qemu there should
> still get virtio-gpu available.

Yes.  Merging virtio-gpu with 2d should not wait for 3d to be finally
sorted.  3d is still too much of a moving target.

> c) GTK multi-head + GL support: I'd like to have the GTK UI be capable
> of multi-head as well. My first attempt moved a lot of code around, and
> I'm not really sure what the secondary head windows should contain vs
> the primary head.

Yes, the multihead UI design is the tricky part here.  I'd say don't try
to make the first draft too fancy.  I expect we will have quite a few
discussions on that topic.

cheers,
  Gerd




