
Re: [Qemu-devel] Multi-head support RFC


From: John Baboval
Subject: Re: [Qemu-devel] Multi-head support RFC
Date: Wed, 06 Nov 2013 10:39:41 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20120714 Thunderbird/14.0

On 11/06/2013 05:55 AM, Gerd Hoffmann wrote:
> Hi,

>> In QEMU 1.3, there was a DisplayState list. We used one DisplayState per
>> monitor. The DisplayChangeListener has a new hw_add_display vector, so
>> that when the UI requests a second monitor the new display gets attached
>> to the emulated hardware. (patch: add_display_ptr)
> I don't think we actually want to add/remove stuff here.  On real hardware
> your gfx card has a fixed set of display connectors, and I think we are
> best off mimicking that.
I think that's a property of the emulated hardware. Monitors get connected and disconnected, and that's what the UI cares about.
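
To make that concrete, the hook in my tree is shaped roughly like this
(sketch only; the name and signature come from my out-of-tree
add_display_ptr patch, not from anything upstream):

    /* Rough shape of the add_display_ptr vector: when a UI wants a
     * second monitor, this is called so the new display can be
     * attached to the emulated hardware.  Illustrative only. */
    struct DisplayState;

    typedef struct DisplayState *(*hw_add_display_fn)(void *hw_opaque,
                                                      int head_index);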

> Support for propagating connect/disconnect events and enabling/disabling
> displays needs to be added properly.  Currently qxl/spice can handle
> this, but it uses a private side channel.

>> A new vector, hw_store_edid, was added to DisplayState so that UIs could
>> tell emulated hardware what the EDID for a given display should be.
>> (patch: edid-vector)
> Note that multiple UIs can be active at the same time.
> What happens with the EDIDs then?

This is why it seemed to me that we shouldn't have multiple QemuConsoles. There should be one per UI type. In my current patches, each DisplayState has a new DisplayType enum, so I can keep track of which active UI the DisplayState goes with.
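
Roughly like this (the enum values are made up for illustration; my
tree may differ):

    /* Sketch: tag each DisplayState with the UI it belongs to. */
    typedef enum DisplayType {
        DISPLAY_TYPE_SDL,
        DISPLAY_TYPE_VNC,
        DISPLAY_TYPE_SPICE,
    } DisplayType;

    struct DisplayState {
        /* ... existing members ... */
        DisplayType type;   /* which active UI this goes with */
    };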

As far as the EDID is concerned, there can only be one EDID for a display+hw pair, or the guest won't know what to do. In my use-case, I simply pass real EDIDs through, and create a full-screen window for each real monitor. If you wanted to have two UIs displaying the same DisplaySurface, the EDID would have to come from one of them, and the other would have to clip, or scale.
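
For reference, the vector from the edid-vector patch is shaped about
like this (again a sketch, not upstream API):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: the UI hands the raw EDID blob for one of its outputs
     * to the emulated adapter, which exposes it to the guest (e.g.
     * over DDC).  Name and signature are approximate. */
    typedef void (*hw_store_edid_fn)(void *hw_opaque, int head_index,
                                     const uint8_t *edid, size_t len);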

>> VRAM size was made configurable, so that more could be allocated to
>> handle multiple high-resolution displays. (patch: variable-vram-size)
> Upstream stdvga has this now.
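
(For the archives: that's the vgamem_mb property on the upstream VGA
device, e.g. -device VGA,vgamem_mb=64, if I'm reading the tree right.)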

>> I don't think it makes sense to have a QemuConsole per display.
> Why not?  That is exactly my plan.  Just have the virtual graphics card
> call graphic_console_init() multiple times, once for each display
> connector it has.
>
> Do you see fundamental issues with that approach?
Currently only one QemuConsole is active at a time, so that would have to change. It could certainly be done this way, but it seemed like more churn. Perhaps we should step back and define what we want each of these objects to be.

There is state in QemuConsole that we don't really need another copy of, and state that we don't want to duplicate just to have multiple displays, like the CharDriverState.
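
Just so we're talking about the same thing: I read your plan as a
two-connector card doing something like the sketch below at init time
(MyVGAState and the my_* names are placeholders, and I'm assuming the
post-1.5 graphic_console_init(dev, ops, opaque) signature):

    #include "ui/console.h"

    #define MY_NUM_HEADS 2

    typedef struct MyVGAState {
        QemuConsole *con[MY_NUM_HEADS];
    } MyVGAState;

    static void my_invalidate(void *opaque)
    {
        /* mark the whole framebuffer dirty */
    }

    static void my_gfx_update(void *opaque)
    {
        /* scan for dirty regions and push them to the console */
    }

    static const GraphicHwOps my_hw_ops = {
        .invalidate = my_invalidate,
        .gfx_update = my_gfx_update,
    };

    static void my_vga_init_heads(MyVGAState *s, DeviceState *dev)
    {
        int i;

        for (i = 0; i < MY_NUM_HEADS; i++) {
            /* one graphic_console_init() call per display connector */
            s->con[i] = graphic_console_init(dev, &my_hw_ops, s);
        }
    }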

>> I can use a model similar to what qxl does, and put the framebuffer for
>> each display inside a single DisplaySurface allocated to be a bounding
>> rectangle around all framebuffers. This has the advantage of looking
>> like something that already exists in the tree, but has several
>> disadvantages.
> Indeed.  I don't recommend that.  It is that way for several historical
> reasons (one being that the code predates the qemu console cleanup in
> the 1.5 devel cycle).
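
For anyone following along, the single-surface model means sizing one
DisplaySurface to the bounding box of all heads, i.e. (illustrative
placeholder types):

    typedef struct Rect { int x, y, w, h; } Rect;

    /* Sketch: compute the bounding rectangle around all per-head
     * framebuffers; the one big DisplaySurface gets this size. */
    static Rect bounding_rect(const Rect *heads, int n)
    {
        int x0 = heads[0].x, y0 = heads[0].y;
        int x1 = heads[0].x + heads[0].w;
        int y1 = heads[0].y + heads[0].h;
        int i;

        for (i = 1; i < n; i++) {
            if (heads[i].x < x0) x0 = heads[i].x;
            if (heads[i].y < y0) y0 = heads[i].y;
            if (heads[i].x + heads[i].w > x1) x1 = heads[i].x + heads[i].w;
            if (heads[i].y + heads[i].h > y1) y1 = heads[i].y + heads[i].h;
        }
        return (Rect){ x0, y0, x1 - x0, y1 - y0 };
    }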

>> Are these features something that people would want to see in the tree?
> Sure.  One of the reasons for the console cleanup was to allow proper
> multihead support.
>
> cheers,
>    Gerd