Re: [Qemu-devel] gpu and console chicken and egg
From: Dave Airlie
Subject: Re: [Qemu-devel] gpu and console chicken and egg
Date: Fri, 6 Dec 2013 12:24:25 +1000
On Thu, Dec 5, 2013 at 6:52 PM, Gerd Hoffmann <address@hidden> wrote:
> Hi,
>
>> > Hmm, why does it depend on the UI? Wasn't the plan to render into a
>> > dma-buf no matter what? Then either read the rendered result from the
>> > dmabuf (non-gl UI like vnc) or let the (gl-capable) UI pass the dma-buf
>> > to the compositor?
>>
>> That would be the hopeful plan, however so far my brief investigation says
>> I'm possibly being a bit naive with what EGL can do. I'm still talking to the
>> EGL and wayland people about how best to model this, but either way
>> this won't work with nvidia drivers which is a case we need to handle, so
>> we need to interact between the UI GL usage and the renderer.
>
> Hmm. That implies we simply can't combine hardware-accelerated 3d
> rendering with vnc, correct?
For SDL + spice/vnc I've added a readback capability to the renderer and
hooked things up so that if there is more than one DisplayChangeListener
it will do readbacks and keep the surface updated. This slows things
down, but it does work.
But yes, it means we can't just run the qemu process in its sandbox
without a connection to the X server for it to do GL rendering, or
without using SDL.
I don't think we should block merging the initial code on this; it was
always a big problem on its own that needed solving.
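For illustration, a rough sketch of that readback fallback. The helper
names here (dcl_count, console_surface_data) are placeholders for this
sketch, not actual QEMU identifiers; the point is just that with more
than one DisplayChangeListener attached, the rendered frame gets copied
out of the GL context so the non-GL listeners (vnc/spice) stay updated:

/* Sketch only: names below are assumptions, not real QEMU code. */
#include <GL/gl.h>
#include <stdint.h>

static int dcl_count;   /* number of attached DisplayChangeListeners */

/* placeholder accessor for the console's shared surface pixels */
uint8_t *console_surface_data(int *width, int *height);

static void maybe_readback_frame(void)
{
    int width, height;
    uint8_t *pixels;

    if (dcl_count <= 1) {
        /* single GL-capable UI: keep the fast path, no readback */
        return;
    }

    pixels = console_surface_data(&width, &height);

    /* copy the rendered frame out of the GL framebuffer so non-GL
     * listeners see an up-to-date surface; this is the slow part
     * mentioned above */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, pixels);
}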
>> GL isn't that simple, and I'm not sure I can make it that simple
>> unfortunately; the renderer requires certain extensions on top of the
>> base GL 2.1 and GL 3.0. Live migration with none might be the first
>> answer, and then we'd have to expend serious effort on making live
>> migration work for any sort of different GL drivers.
>> Reading everything back while rendering continues could be a lot of
>> fun (or pain).
>
> We probably want to start with gl={none,host} then. Live migration only
> supported with "none".
>
> If we can't combine remote displays with 3d rendering (nvidia issue
> above) live migration with 3d makes little sense anyway.
Well we can; we just can't do it without also having a local display
connection. But yes, it does limit the migration capabilities quite a
lot!
>> I don't think this will let me change the feature bits though, since
>> the virtio PCI layer has already picked them up I think. I just
>> wondered if we have any examples of changing features later.
>
> I think you can. There are no helper functions for it though; you
> probably have to walk the data structures and fiddle with the bits
> directly.
>
> Maybe it is easier to just have a command line option to enable/disable
> 3d globally, and a global variable with the 3d status. Being able to
> turn off all 3d is probably useful anyway. Either as standalone option
> or as display option (i.e. -display sdl,3d={on,off,auto}). Then do a
> simple check for 3d availability when *parsing* the options. That'll
> also remove the need for the 3d option for virtio-gpu, it can just check
> the global flag instead.
Ah yes, that might work, and just fail if we request 3D but can't fulfil it.
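To make that concrete, a minimal sketch of the parse-time check, assuming
a tri-state 3d option and a host GL probe; none of the names below
(display_3d_enabled, host_gl_available, parse_3d_option) are real QEMU
identifiers, they're just stand-ins for the idea:

/* Sketch only: names and the GL probe are assumptions, not QEMU code. */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { THREED_AUTO, THREED_ON, THREED_OFF } ThreeDMode;

/* global 3d status, which virtio-gpu could consult instead of
 * carrying its own per-device option */
static bool display_3d_enabled;

/* stand-in for a real probe of host GL/EGL availability */
static bool host_gl_available(void)
{
    return getenv("DISPLAY") != NULL;   /* placeholder heuristic */
}

/* called while parsing "-display sdl,3d={on,off,auto}" */
static int parse_3d_option(const char *value)
{
    ThreeDMode mode;

    if (!value || strcmp(value, "auto") == 0) {
        mode = THREED_AUTO;
    } else if (strcmp(value, "on") == 0) {
        mode = THREED_ON;
    } else if (strcmp(value, "off") == 0) {
        mode = THREED_OFF;
    } else {
        fprintf(stderr, "invalid 3d option: %s\n", value);
        return -1;
    }

    switch (mode) {
    case THREED_ON:
        if (!host_gl_available()) {
            /* fail up front if 3d was explicitly requested but the
             * host can't provide it */
            fprintf(stderr, "3d=on requested but no GL support available\n");
            return -1;
        }
        display_3d_enabled = true;
        break;
    case THREED_AUTO:
        display_3d_enabled = host_gl_available();
        break;
    case THREED_OFF:
        display_3d_enabled = false;
        break;
    }
    return 0;
}

virtio-gpu would then just test the global flag when deciding which
features to expose, as suggested above.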
>
>> I should probably resubmit the multi-head changes and SDL2 changes and
>> we should look at merging them first.
>
I've got some outstanding things to redo on the virtio-gpu/vga bits, and
then I'll resubmit the sdl2 and unaccelerated virtio-gpu bits.
Dave.