[Qemu-devel] [PATCH] Use opaque alpha channel to support Xgl


From: Anthony Liguori
Subject: [Qemu-devel] [PATCH] Use opaque alpha channel to support Xgl
Date: Sun, 05 Mar 2006 16:45:07 -0600
User-agent: Mail/News 1.5 (X11/20060213)

Xgl introduces a new pixel format that's a 32-bit depth with a true alpha channel. When we let SDL choose a depth on an Xgl server, it picks up this format (as it's the native pixel format in Xgl). We don't fill out the alpha channel, which results in a completely transparent screen.

This patch fills out the alpha channel with all 1s for 32-bit pixels. I'm not sure if it handles every case where a pixel is generated, but it seems to work on all of my VMs. I really don't like this patch as it seems like a hack, but I couldn't figure out a way to differentiate in SDL between a 24-bit depth with a 32-bit pixel width (which is a common, non-alpha format) and a true 32-bit depth with an alpha channel.
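
For reference, here's a minimal sketch (not part of the patch) of how the surface's reported format can be inspected in SDL 1.2; SDL_SetVideoMode() and the BitsPerPixel/Amask fields of SDL_PixelFormat are real SDL 1.2 interfaces, but as noted above the reported format apparently doesn't distinguish the two 32-bpp cases reliably, which is why the patch forces the alpha bits instead:

#include <SDL.h>
#include <stdio.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 1;

    /* Let SDL pick the native depth, as qemu's SDL frontend does. */
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 0, SDL_SWSURFACE);
    if (!screen) {
        SDL_Quit();
        return 1;
    }

    /* A 24-bit depth padded to 32 bits per pixel and a true 32-bit depth
     * with alpha can both report 32 bpp here; ideally Amask would tell
     * them apart. */
    printf("bpp=%d Amask=0x%08x\n",
           screen->format->BitsPerPixel, (unsigned)screen->format->Amask);

    SDL_Quit();
    return 0;
}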

Hopefully, this will be fixed in future versions of SDL.

Regards,

Anthony Liguori
# HG changeset patch
# User Anthony Liguori <address@hidden>
# Node ID 9ad5f865d44bf962f0ed9ca712e9ce2d8a4d46dd
# Parent  945c27df128e8b5b1f43f1b3ddcb77c887c51f4d
Xgl introduces a new surface type that's 32-bit with a true alpha channel.

Make sure that we return pixels with opaque alpha channels.

diff -r 945c27df128e -r 9ad5f865d44b hw/vga.c
--- a/hw/vga.c  Thu Mar  2 16:46:45 2006 -0500
+++ b/hw/vga.c  Sat Mar  4 12:34:45 2006 -0500
@@ -803,7 +803,7 @@
 
 static inline unsigned int rgb_to_pixel32(unsigned int r, unsigned int g, unsigned b)
 {
-    return (r << 16) | (g << 8) | b;
+    return 0xFF000000 | (r << 16) | (g << 8) | b;
 }
 
 #define DEPTH 8
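
As a quick sanity check (illustrative only, not part of the patch), the patched helper can be exercised on its own; with the mask applied, every generated pixel carries an opaque 0xFF alpha byte in ARGB layout:

#include <assert.h>

static inline unsigned int rgb_to_pixel32(unsigned int r, unsigned int g,
                                          unsigned int b)
{
    return 0xFF000000 | (r << 16) | (g << 8) | b;
}

int main(void)
{
    assert(rgb_to_pixel32(0x00, 0x00, 0x00) == 0xFF000000); /* opaque black */
    assert(rgb_to_pixel32(0xFF, 0xFF, 0xFF) == 0xFFFFFFFF); /* opaque white */
    return 0;
}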
