discuss-gnustep

Re: Opal/CoreGraphics (was Re: UIKit?)


From: David Chisnall
Subject: Re: Opal/CoreGraphics (was Re: UIKit?)
Date: Mon, 4 Jan 2010 11:52:45 +0000

On 4 Jan 2010, at 11:04, Riccardo Mottola wrote:

> Things are not as rosy as you describe. It is clear that if you think in terms of Atom-based netbooks we do not need to worry; those are small workstations in almost every respect. But there are other devices: netbooks based on MIPS and ARM processors. And then there are handhelds.

Not at all. I consider Atom irrelevant. Its only advantage is x86 compatibility, which is irrelevant to me as I have no legacy x86-only code.

I'm talking about ARM devices based on SoCs like the OMAP3/4 and Tegra. The last generation had OpenGL ES 1.0 GPUs, which were more powerful than the first few desktop GPUs I bought, and vastly more powerful than anything NeXT ever shipped (discounting some of the analogue features on the NeXTDimension). The current generation includes OpenGL ES 2.0 compatible GPUs which support a fully programmable pipeline.

As an example, the OMAP3430, which has been shipping for a while and is now being phased out, has a PowerVR SGX 530 GPU on die, which can push 14 million textured polygons a second - vastly more than you need to accelerate 2D rendering and compositing on a device with a screen resolution of 800x640. Newer ARM SoCs have even faster ones. These draw well under 1W when fully loaded, so I'm not sure where your comment about 'burning the white plastic' comes from; this is the GPU in the iPhone 3GS (as well as the N900 and a lot of similar devices).

> I have one of the Letux netbooks from GoldenDelicious; they have a much more limited framebuffer than the devices you describe. So it is also instructive to watch and learn about the problems and performance differences Nikolaus or Felipe encounter when running on interesting non-mainstream devices.

The Letux netbooks have a 300MHz MIPS chip which, again, is vastly faster than anything that NeXT ever shipped. As I said in my last mail, they also have a huge amount more RAM.

The OpenStep drawing model was designed for machines with 8MB of RAM and a 931,840-pixel screen, giving just 9 bytes per pixel. A simple frame buffer took over 10% of the total RAM for the machine. The Letux 400 comes with 128MB of RAM and only 384,000 pixels, giving around 350 bytes per pixel.
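For anyone who wants to check the arithmetic, a trivial sketch in C (the figures are simply the ones quoted above):

    #include <stdio.h>

    int main(void)
    {
        /* NeXT-era machine: 8MB of RAM, 931,840-pixel display.
           Letux 400: 128MB of RAM, 384,000-pixel display.
           Figures as quoted above. */
        double next_ram  = 8.0   * 1024 * 1024;
        double letux_ram = 128.0 * 1024 * 1024;

        printf("NeXT:  %.1f bytes of RAM per pixel\n", next_ram  / 931840.0);
        printf("Letux: %.1f bytes of RAM per pixel\n", letux_ram / 384000.0);
        return 0;
    }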

That is what I was talking about in my last email about the changing ratio of RAM to pixels. You can now afford to cache a lot more before drawing it, which is what the layer model introduced by CoreAnimation encourages, and what the XRender extension encourages. As a nice side effect, this speeds up redrawing.
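To make the caching idea concrete, here is a minimal Cairo sketch (illustrative only, not GNUstep or Opal code): expensive content is rendered once into an off-screen surface and then cheaply composited as often as needed, which is essentially what a layer model, or XRender's server-side Pictures, gives you.

    #include <cairo.h>

    /* Render some 'expensive' content once into an off-screen surface. */
    static cairo_surface_t *cache_layer(void)
    {
        cairo_surface_t *layer =
            cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 200, 200);
        cairo_t *cr = cairo_create(layer);

        /* Pretend this is a costly drawing operation. */
        cairo_set_source_rgba(cr, 0.2, 0.4, 0.8, 1.0);
        cairo_arc(cr, 100, 100, 80, 0, 6.2832);
        cairo_fill(cr);

        cairo_destroy(cr);
        return layer;
    }

    /* Compositing the cached surface is far cheaper than redrawing it. */
    static void composite_layer(cairo_t *dest, cairo_surface_t *layer,
                                double x, double y)
    {
        cairo_set_source_surface(dest, layer, x, y);
        cairo_paint(dest);
    }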

> If Apple decides to put such powerful CPUs in their devices that they burn the white plastic, and they then start to underclock the CPU... that is their decision. Their devices are expensive anyway and don't always perform as fast as they should either.

I'm not sure if this contained any point other than that you hate recent Apple hardware, which is not particularly relevant.

> The point is that if we have the same dependencies, the same requirements, and the same performance as other toolkits, why should anybody want to use GNUstep (admitting that we are in any case inferior in several areas)?

Because they want to port code from Cocoa? Because they've used OpenStep/Cocoa and liked it? Because we produce a more productive development environment? Because we have things like Distributed Objects that Just Work?

> If, on the contrary, we distinguish ourselves by running efficiently on a $100 device, we have a new market segment and we are interesting.

My current mobile phone was released in 2007. It has better specs in every single respect than the desktop that I was using in 2001. It has better specs in every respect than the laptop that my father was using in 2003.

If we aim for $100 devices at the expense of more powerful systems, then in a year's time those $100 devices will be twice as powerful, and people developing for them will use the environments that they became familiar with on other systems, rather than learning something new just for the handheld.

> I think many here miss what flexibility can mean for us. Distinction.

Are you arguing for flexibility or for performance now?

> I found the LindauSTEP discussions to be very, very interesting in this regard.


Okay, I've not heard of LindauSTEP and neither has Google, so I don't know what these discussions entailed.

You've omitted saying what you actually want in this email, although you mentioned in IRC that you like the xlib back end. This has a huge number of problems, which is why I've been advocating deprecating it for a long time:

- It uses X fonts, which means that you can't easily package new fonts with your application; if you do, they will not work over remote X11.

- No font antialiasing.

- No support for acceleration.

- No support for alpha channels in windows.

- No use of XDAMAGE/XFIXES (if you're using a compositing manager on the server, this means that you end up with a lot more client-server traffic than you should have).

- No use of hardware acceleration. For example, XRENDER speeds up rendering of antialiased text a huge amount by storing the glyphs on the GPU as textures and compositing them in hardware (or software, but on the server, so this makes things much faster for remote X11).

- Does printing work? The Cairo and GDI back ends let you output to PDF/PS and to GDI printers respectively (a minimal Cairo sketch follows below). Given that modern X servers don't include XPRINT any more, what does the xlib back end use?

- No way to add these features easily. They all require using new extensions, which means writing fall-back code for when those extensions aren't available. This fall-back code ALREADY EXISTS IN CAIRO, which will even fall back to dithering to 8-bit color if you display remotely onto XSun on a SPARCstation 2.

- No latency hiding in xlib. Eventually we want to move to using XCB, which, used well rather than as a drop-in replacement for xlib, gives much better performance for remote X11 (see the XCB sketch below). With Cairo, the drawing-related code is already written for us; we just need to do the event handling and window creation code (which I've mostly done in ProjectManager's XCB framework). With xlib, we'd need to rewrite the whole thing.

- No code sharing. The Win32 and xlib back ends are entirely separate. Adding a new feature like shadows or gradients means adding it in several different ways, not all of which get good testing.

We weren't packaged on FreeBSD for six months a couple of years back because of a bug in the xlib back end which only showed up in some circumstances. It wasn't present in the Cairo or art back ends, so none of the developers saw it, because no one adequately tests the xlib back end.
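On the printing point above, here is a minimal sketch of what Cairo gives a back end essentially for free (plain Cairo, nothing GNUstep-specific):

    #include <cairo.h>
    #include <cairo-pdf.h>

    int main(void)
    {
        /* An A4 page is 595 x 842 points. */
        cairo_surface_t *pdf = cairo_pdf_surface_create("out.pdf", 595, 842);
        cairo_t *cr = cairo_create(pdf);

        cairo_select_font_face(cr, "Sans", CAIRO_FONT_SLANT_NORMAL,
                               CAIRO_FONT_WEIGHT_NORMAL);
        cairo_set_font_size(cr, 14);
        cairo_move_to(cr, 72, 72);
        cairo_show_text(cr, "Printed with the same drawing code as the screen.");

        cairo_show_page(cr);
        cairo_destroy(cr);
        cairo_surface_destroy(pdf);
        return 0;
    }

And on the latency-hiding point, the xlib/XCB difference is roughly this (again just a sketch, not back-end code): XCB lets you fire off a batch of requests and collect the replies later, instead of blocking on a round trip per request the way xlib does.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <xcb/xcb.h>

    int main(void)
    {
        xcb_connection_t *c = xcb_connect(NULL, NULL);
        const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
        xcb_intern_atom_cookie_t cookies[3];

        /* Issue all the requests first; nothing blocks here. */
        for (int i = 0; i < 3; i++)
            cookies[i] = xcb_intern_atom(c, 0, strlen(names[i]), names[i]);

        /* Collect the replies afterwards: the round-trip latency is paid
           once for the batch rather than once per request. */
        for (int i = 0; i < 3; i++) {
            xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
            if (r) {
                printf("%s -> atom %u\n", names[i], r->atom);
                free(r);
            }
        }
        xcb_disconnect(c);
        return 0;
    }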

Please take a look at the code in Cairo. To get a full implementation of CoreGraphics we would need to duplicate 90% of this code. Do you really think that this is a good idea, or should we use the developer time to fix bugs and add new features to GNUstep that don't duplicate the work of a library that we could just use?

Of course, if you want to provide an alternate CoreGraphics implementation that uses xlib directly, then I won't stop you. I will, however, not be at all sympathetic when you realise what a massive task it is.

David

-- Sent from my PDP-11



