Re: [Qemu-devel] X support for QXL and SPICE


From: Soeren Sandmann
Subject: Re: [Qemu-devel] X support for QXL and SPICE
Date: 12 Dec 2009 17:39:02 +0100
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.4

Anthony Liguori <address@hidden> writes:

> Soeren Sandmann wrote:
> > Hi,
> >
> > Here is an overview of what the current QXL driver does and does not
> > do.  The parts of X rendering that are currently being used by cairo
> > and Qt are:
> >
> > - Most of XRender
> >         - Image compositing
> >         - Glyphs
> >
> 
> Does anything use Xrender for drawing glyphs these days?

Yes, essentially everything on a desktop uses Xrender for glyphs.

The way glyphs work in XRender is basically like this:

        - The client stores a bunch of glyphs in the X server. The
          data structure is called a GlyphSet

        - Whenever it wants to draw text, it sends a string of indices
          into this GlyphSet along with coordinates.

Adding support for this to SPICE amounts to offscreen pixmaps along
with a compact way of representing text.
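
To make this concrete, here is a rough, untested client-side sketch of
the GlyphSet mechanism using libXrender (error handling omitted; the
dst and src_fill Pictures are assumed to have been created elsewhere):

    #include <string.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>

    static void
    draw_with_glyphset (Display *dpy, Picture src_fill, Picture dst)
    {
        XRenderPictFormat *a8 =
            XRenderFindStandardFormat (dpy, PictStandardA8);

        /* The GlyphSet lives in the X server and holds the glyph images */
        GlyphSet set = XRenderCreateGlyphSet (dpy, a8);

        /* Upload one 8x8 alpha-only glyph under index 42 */
        Glyph gid = 42;
        XGlyphInfo info = { 8, 8, 0, 0, 8, 0 };  /* w, h, x, y, xOff, yOff */
        char image[8 * 8];
        memset (image, 0xff, sizeof image);
        XRenderAddGlyphs (dpy, set, &gid, &info, 1, image, sizeof image);

        /* Drawing text is now just a short string of glyph indices */
        char text[] = { 42, 42, 42 };
        XRenderCompositeString8 (dpy, PictOpOver, src_fill, dst, a8,
                                 set, 0, 0, 10, 20, text, sizeof text);
    }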

> Certainly, with a compositing window manager, nothing is getting
> rendered by X...

With a compositing manager, all windows are redirected to offscreen
pixmaps, and the compositing manager will then use either OpenGL or
XRender to composite all those pixmaps together whenever one of them
changes.

Rendering to these offscreen pixmaps is still done by X in the same
way I described before:

        - Render to temporary offscreen pixmap (T)

        - Copy pixmap T to window W, where W is redirected to another
          pixmap, which is not T.

So X is still rendering, even when there is a compositing manager.
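
As a simplified sketch (the names are illustrative, not taken from any
particular application), the client-side code for that sequence looks
roughly like this, and it does not change when a compositing manager
is running:

    #include <X11/Xlib.h>

    static void
    present_frame (Display *dpy, Window w, GC gc,
                   unsigned int width, unsigned int height, int depth)
    {
        /* T: temporary pixmap the application renders into */
        Pixmap t = XCreatePixmap (dpy, w, width, height, depth);

        /* ... draw the frame into t with XRender, cairo, etc. ... */

        /* Copy T to W; with a compositing manager, W is itself
         * redirected to yet another pixmap */
        XCopyArea (dpy, t, w, gc, 0, 0, width, height, 0, 0);

        XFreePixmap (dpy, t);
    }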

> > The X driver for the QXL device is not yet very sophisticated. What it
> > does is basically this:
> >
> >         - It keeps a separate memory framebuffer up to date using
> >           software
> >
> >         - Solid fills and CopyArea requests are turned into SPICE
> >           commands.
> >
> >         - The cursor is drawn using cursor commands
> >
> >         - For other things, bitmaps are sent across the wire
> >                 - It uses the hashing feature of SPICE to only send
> >                   hashcodes for those bitmaps if it can get away with
> >                   it.
> >
> > Even this simple support provides a better user experience than VNC
> > because scrolling is accelerated and doesn't result in a huge bitmap
> > getting sent across the wire.
> 
> Scrolling is accelerated in VNC.  In the Cirrus adapter, both X and
> Windows use a video-to-video bitblt; we use this to create a VNC
> CopyRect, which makes scrolling and window movement smooth.

The solid fill acceleration also makes a difference because windows
usually have a solid background, so when they are exposed (for example
by someone dragging one window over another), their background gets
filled by the X server.

The bitmap hashing also made a fairly noticeable difference for
animations where the same few images get sent again and
again. Eventually, many of these cases will be better handled with
offscreen pixmaps, but there are likely still applications that draw
the same image over and over without putting it into a pixmap.
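
To illustrate the hashing idea (the helper names below are invented
for illustration and are not the real SPICE or QXL interfaces), the
driver only ships pixel data when the client is not already known to
have the image cached:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* 64-bit FNV-1a over the raw pixel bytes */
    static uint64_t
    hash_bitmap (const uint8_t *pixels, size_t nbytes)
    {
        uint64_t h = 0xcbf29ce484222325ull;
        size_t i;

        for (i = 0; i < nbytes; i++)
        {
            h ^= pixels[i];
            h *= 0x100000001b3ull;
        }
        return h;
    }

    /* Hypothetical driver plumbing, standing in for the real thing */
    bool client_cache_contains (uint64_t hash);
    void send_image_by_hash (uint64_t hash);
    void send_image_data (uint64_t hash,
                          const uint8_t *pixels, size_t nbytes);

    static void
    send_bitmap (const uint8_t *pixels, size_t nbytes)
    {
        uint64_t h = hash_bitmap (pixels, nbytes);

        if (client_cache_contains (h))
            send_image_by_hash (h);               /* hash only */
        else
            send_image_data (h, pixels, nbytes);  /* full bitmap */
    }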

> > However, as things stand right now, there is not much point in adding
> > this support, because X applications essentially always work like
> > this:
> >
> >         - render to offscreen pixmap
> >         - copy pixmap to screen
> >
> > There is not yet support for offscreen pixmaps in SPICE, so at the
> > moment, solid fill and CopyArea are the two main things that actually
> > make a difference.
> >
> 
> Okay, that's in line with what my expectations were.  So what's the
> future of Spice for X?  Anything clever or is Windows the only target
> right now?

I'd say that offscreen pixmap support is the biggest missing feature
at the moment as far as performance improvements go.

There are some other things that would be interesting to do:

- Really good RandR support

  The QXL driver already has rudimentary RandR support, i.e., basic
  mode switching on the fly, which is reflected on the client side.

  But newer versions of RandR are much more capable: a graphics
  adapter can report when screens are plugged in, what their
  capabilities are, and so on. It would make sense to capture this
  information on the client side and expose it through the QXL device
  (a rough sketch of such a query follows after this list).


- Video support

  The QXL device tries to detect when a piece of the screen is really
  a video, but we can do better by implementing the Xv extension (or
  whatever ends up being the future of video playback on Linux). It
  may also make sense to add support for "decoding" Theora or other
  video codecs to QXL, and then tunnel compressed video to the client.
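
To illustrate the RandR point, this is roughly the query that exposes
the output and connection information in question (RandR 1.2 API,
error handling omitted); the idea would be to collect something like
this on the client side and surface it through the QXL device:

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrandr.h>

    int
    main (void)
    {
        Display *dpy = XOpenDisplay (NULL);
        int i;

        if (!dpy)
            return 1;

        Window root = DefaultRootWindow (dpy);
        XRRScreenResources *res = XRRGetScreenResources (dpy, root);

        for (i = 0; i < res->noutput; i++)
        {
            XRROutputInfo *output =
                XRRGetOutputInfo (dpy, res, res->outputs[i]);

            printf ("%s: %s\n", output->name,
                    output->connection == RR_Connected ?
                    "connected" : "disconnected");

            XRRFreeOutputInfo (output);
        }

        XRRFreeScreenResources (res);
        XCloseDisplay (dpy);
        return 0;
    }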


Soren



