freetype-devel



From: Henrik Gedenryd
Subject: [Devel] Re: [Devel]Oversampling and rounding TrueType glyphs
Date: Fri, 17 Nov 2000 10:51:06 +0100
User-agent: Microsoft-Outlook-Express-Macintosh-Edition/5.02.2022

Keith Packard wrote:

> 
>> I implemented sub-pixel rendering almost a year ago, for Squeak
>> (www.squeak.org). I used a bell-type filter, which I think works well, and
>> I would recommend that you try it as well.
> 
> I tried several filters, a gaussian, a simple box filter and intra-pixel
> linear and exponential filters.  Any time I included subpixels outside the
> original pixel, I was treated to softened edges for each glyph.  As the
> eye recognises glyphs largely by the sharp edges, reducing the sharpness
> is unacceptable, especially in long reading sessions.

Have you tried this long enough in practice? I've used it since January as
my working font and it is in fact less straining on the eyes than the 1-bit
font I preferred before. Of course, some fonts yield better SPR results,
some worse. I've been particularly happy with Univers and Futura.

> One of the "features" of TrueType is that readability is put ahead of
> accurately representing the glyphs on the screen; this means that glyphs
> are radically reshaped to keep horizontal and vertical elements aligned
> with pixel boundaries to provide as sharp an edge as possible.  I'm trying
> to keep those advantages while improving the appearance of non-rectilinear
> elements.
> 
> Something that mystifies me is that even though my LCD screen is a regular
> grid of RGB subpixel elements, placing the glyph edge anywhere other than
> between a blue and red subpixel yields a very noticeable color on that edge.

If this is really true then it is strange indeed. I don't question your
intelligence, but I think it is more likely an oversight than some unique
property of the particular LCD you have ;-) Or the screen is e.g. BGR
instead of RGB, but I think that's rare. I had some trouble understanding
the color artifacts myself for some time.

My best suggestion is that something like this is happening: for you to
perceive white, there needs to be at least one subpixel of each color in a
row, since the three add up to white. Only two lit in a row (or just one, of
course) will yield color artifacts (brown and light blue, IIRC). So what may
be happening in your case is that if the B element, say, is "black", and the
R and G are "white", then you will perceive a color instead of white
whenever the B element in the pixel to the left of your glyph stem is not
entirely "white". This will be the case if the glyph to the left comes too
close. However, you may well be aware of this already, and then I can't
help you.

>> The problem is that any image processing can never really "know" when a
>> full pixel is supposed to be a sharp edge, and when not.
> 
> Image processing cannot, which makes the problem there significantly
> easier.  Generating text images is different; there *all* of the edges
> are supposed to be sharp; we're trying to take advantage of three times as
> many edges available in the horizontal dimension of an LCD screen.
> 
>> What/where is the ClearType paper you mention?
> 
> http://research.microsoft.com/~jplatt/cleartype/

Their "RGB decimation" technique is the same as the one I mentioned, except
my source only handled black & white whereas they do the general case. But
for B&W the result is the same anyway.
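For what it's worth, here is how I would sketch that decimation step in
Python. Treat it as my reading of the technique, not their code, and the
names are mine: render the glyph coverage at 3x horizontal resolution, then
fold each group of three consecutive samples into the R, G and B channels of
one output pixel.

```python
# "RGB decimation" sketch (my own names and conventions): each output pixel
# takes its R, G and B channel values from three consecutive coverage
# samples of a row rendered at 3x horizontal resolution.

def rgb_decimate(row_3x):
    """row_3x: coverage samples (0.0-1.0) at 3x horizontal resolution.
    Returns one (r, g, b) tuple per group of three samples."""
    assert len(row_3x) % 3 == 0, "row must be a whole number of pixels"
    pixels = []
    for i in range(0, len(row_3x), 3):
        pixels.append((row_3x[i], row_3x[i + 1], row_3x[i + 2]))
    return pixels
```

For a 1-bit (black & white) input the samples are just 0.0 or 1.0, which is
the case my source handled; the general case simply carries grey coverage
through.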

> The problem is that the simple technique doesn't generate the best
> results; there's far more human visual system information needed to solve
> the problem.  For simple image display, the solutions are easy enough, but
> representing text as clearly as possible is a very different endeavor.

I was surprised to see that the example images at the ClearType site weren't
smoothed even where it made sense (such as on rounded corners or slanted
lines).

Reading about your experiments I had the following idea: generate one image
using the box filter, and one using the regular antialiaser, then
"superimpose" them. For each pixel, if the center pixel in the antialiased
image is either 100% on or off, _and_ those on each side are too (either on
or off), then take the resulting pixel value from there, to avoid the
blurred edges. But if there is a grey value in any of these three pixels in
the antialiased image, then use the pixel from the box-filtered image. The
point of looking at the neighbors is to avoid color artifacts: if you remove
the softened edges and there isn't enough "space" on the sides, there will
be artifacts. I don't know if I am being clear enough, and I haven't tried
it myself yet, but I will.

best regards,
Henrik




