
Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI


From: Anthony Liguori
Subject: Re: [Qemu-devel] GSoC 2011: S3 Trio, AHCI
Date: Thu, 07 Apr 2011 08:13:15 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.14) Gecko/20110223 Lightning/1.0b2 Thunderbird/3.1.8

On 04/06/2011 05:54 PM, Paul Brook wrote:
>> Last year, I was also interested in working on S3 Trio emulation. This
>> year, the same idea is on the ideas list. The hardware is pretty
>> thoroughly documented through source code and textual documentation, and
>> I'm already familiar with adding PCI devices to Qemu, so I do see a
>> rough outline of how I would implement it.
>>
>> However, last year, Paul Brook commented [1] that he wasn't convinced
>> about the usefulness of emulating an S3 Trio or Virge card, because of
>> performance reasons. He suggested that accelerating the 2D engine would
>> be tricky because the framebuffer is exposed to the guest. This might be
>> just me not fully understanding his point, but isn't this also the case
>> with the Cirrus Logic GD5446 card?
>>
>> He also suggested paravirtualization for 3D acceleration. Do you think
>> it would make a good summer project?
>> I can't comment on these issues, CC'ing Paul, Anthony and Stefan.
>
> My understanding is that Cirrus Logic cards also have 2D acceleration.  We
> implement this in qemu, but not in a way that's likely to be fast.  I don't
> really know either card in detail, but they're both of a similar age, so
> I'd expect the functionality to be fairly similar.
>
> The 2D engines you're talking about are of questionable benefit.  IIUC
> they're basically a memcpy engine with some weird bitmasking operations
> that line up with the Windows 3.x GDI raster ops.  While accelerating this
> may have made sense on a 386, it's not worth the effort on modern CPUs.
> The latency and overhead of setting up and synchronising with the async
> blit engine is greater than the cost of just doing it in software.  In
> practice modern desktop environments just use the 3D engine.
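
To make that concrete, the whole "engine" amounts to something like the
loop below.  This is a minimal sketch, not code from qemu or from either
card's emulation; the names and the three raster ops are illustrative only:

#include <stdint.h>
#include <stddef.h>

/* A tiny subset of GDI-style raster ops, named after their GDI analogues. */
enum rop { ROP_SRCCOPY, ROP_SRCAND, ROP_SRCINVERT };

/* Copy a w*h rectangle from src to dst, combining bytes per the raster op.
 * dst_pitch/src_pitch are the byte strides between scanlines. */
static void blit_rect(uint8_t *dst, const uint8_t *src,
                      size_t dst_pitch, size_t src_pitch,
                      int width, int height, enum rop op)
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            switch (op) {
            case ROP_SRCCOPY:   dst[x] = src[x];  break; /* plain memcpy */
            case ROP_SRCAND:    dst[x] &= src[x]; break; /* bitmask */
            case ROP_SRCINVERT: dst[x] ^= src[x]; break; /* XOR */
            }
        }
        dst += dst_pitch;
        src += src_pitch;
    }
}

On a modern CPU a loop like this is memory-bandwidth bound either way,
which is why programming and synchronising with an asynchronous blitter
rarely beats just running it inline.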

2D acceleration is more useful for remote graphics protocols than for local
performance.  We make use of the Cirrus bitblt engine, and it's a huge
performance optimization for VNC.
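
Concretely, the win looks something like this.  The sketch below uses
made-up function names rather than qemu's actual display interface; the
point is that a guest-visible blit hands the display backend a semantic
copy operation, which a VNC server can forward as a tiny CopyRect message
instead of re-reading and re-encoding the destination pixels:

/* Hypothetical names, for illustration only. */
void framebuffer_copy(int sx, int sy, int dx, int dy, int w, int h);
void display_notify_copy(int sx, int sy, int dx, int dy, int w, int h);

static void on_guest_bitblt(int sx, int sy, int dx, int dy, int w, int h)
{
    /* Perform the copy in the emulated framebuffer... */
    framebuffer_copy(sx, sy, dx, dy, w, h);

    /* ...and pass the semantic operation on to the display backend.
     * A VNC server can encode this as a CopyRect rectangle (a few
     * bytes on the wire) instead of re-sending w*h pixels. */
    display_notify_copy(sx, sy, dx, dy, w, h);
}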

The other big non-3D optimizations are YUV surfaces, hardware scaling, and
RGBA hardware mouse cursor rendering.  With those, you can get 90% of the
way to a nice desktop experience.
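
As a rough illustration of what a YUV surface saves (a standalone sketch,
not qemu code): without one, the guest has to run something like the
integer BT.601 transform below for every pixel of every video frame before
it can touch the RGB framebuffer.  A paravirtual YUV surface hands the raw
planes to the host, which can convert and scale them with whatever
acceleration it has:

#include <stdint.h>

/* Integer BT.601 YUV->RGB conversion, the per-pixel work a guest must
 * do in software when no YUV surface/overlay is available. */
static inline uint32_t yuv_to_rgb(int y, int u, int v)
{
    int c = y - 16, d = u - 128, e = v - 128;
    int r = (298 * c + 409 * e + 128) >> 8;
    int g = (298 * c - 100 * d - 208 * e + 128) >> 8;
    int b = (298 * c + 516 * d + 128) >> 8;
    /* Clamp to [0, 255] before packing as 0x00RRGGBB. */
    if (r < 0) r = 0; if (r > 255) r = 255;
    if (g < 0) g = 0; if (g > 255) g = 255;
    if (b < 0) b = 0; if (b > 255) b = 255;
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}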

And this is basically what VMware VGA has, FWIW.  To get the rest of the
way, you really need something like QXL, which has offscreen surfaces, text
rendering, etc.

Regards,

Anthony Liguori


> IMO emulating useful 'real' 3D hardware is not feasible.  In theory you
> could emulate an old card, however these are also of limited practical
> benefit.  For the S3 cards the 3D engine is so crippled that even when new
> it wasn't worth using.  You could maybe implement an old fixed-function
> card, e.g. an i810 or 3dfx card, however drivers for these are also
> getting hard to come by, and the functionality is still limited.  You
> basically get raster offloading, and everything else is done in software.
> Emulation overhead may be greater than the useful offloaded work.
>
> For good 3D support you're looking at something shader based.  Emulating
> real hardware is not going to happen.  With real hardware the interface
> qemu needs to emulate is directly tied to the implementation details of
> that particular chipset.  The guest driver generally uses intimate
> knowledge of these implementation details (e.g. vram layout, shader ISA).
> Different implementations may provide the same high-level functionality,
> however the low-level implementations are very different.  Reconstructing
> high-level operations from the low-level stream is extremely hard,
> probably harder than the main CPU emulation that qemu does.
>
> IMO a good rule of thumb is that the output of the render pipeline should
> not be guest visible.  Anything where the guest can observe/manipulate the
> output or intermediate results makes it very hard to isolate the guest
> from the implementation details (i.e. whatever hardware acceleration the
> host provides).
>
> There are already a handful of different paravirtual graphics drivers, of
> varying quality and openness.  These include:
>
> - Several OpenGL passthrough drivers.  These are effectively just
>   re-implementing GLX, often badly.  I suspect that given a decent
>   virtual network, remote X (including 3D via GLX) already works pretty
>   well.
>
> - SPICE.  IIUC this is an ugly hack that maps directly onto legacy
>   Windows/GDI operations.  I'm not aware of any substantive plan for
>   making it work well in other environments (using the subset that's
>   basically a dumb framebuffer doesn't count), or for doing 3D.
>
> - Whatever VMware uses.
>
> - Whatever VirtualBox uses.
>
> - At least two Gallium3D-based projects.  I think this includes Xen, and
>   possibly VirtualBox.  Given that the whole point of Gallium3D is to
>   provide a common abstraction layer between the application API and the
>   hardware, this would be my choice.
>
> Paul




