
Re: [ft-devel] gamma correction and FreeType


From: Antti Lankila
Subject: Re: [ft-devel] gamma correction and FreeType
Date: Thu, 7 Nov 2013 22:43:57 +0200

Dave Arnold <address@hidden> wrote on 7 Nov 2013 at 20:48:

> Hi Antti,
> 
> Thank you for the interesting page. The text looks good, but I think we 
> differ a bit in our "hinting philosophy" :-) .
> 
> I completely understand why many people have an aversion to the way hints can 
> distort a typeface design. I've seen examples where you might be fooled into 
> thinking the hinted font was a different font altogether. But, personally, I 
> dislike "fuzzy" fonts. I also value consistency. To me, fonts that are fuzzy 
> at some sizes and sharp at others are even worse. I've talked to web 
> designers about their choices and heard them argue that they chose a 13.5 
> pixel font because it had a sharper x-height than 13. This is wrong. 
> Designing in pixels is not scalable. It's better if all sizes are as 
> consistent as possible, because you (the designer) may not have control of 
> the final ppem on the user's machine.

I do not like fuzziness either, but I’d like to take the lack of hinting as far 
as it can go. I fundamentally dislike the idea of moving the control points 
around unless the effect is applied consistently and uniformly. For instance, 
scaling the entire outline and translating the entire outline are acceptable 
operations as far as I’m concerned, and I’d personally be willing to accept a 
small error in scale for the sake of improved precision. (I’ve yet to look into 
the FT autohinter to study how it implements something similar.)

> The philosophy of hinting in the Adobe CFF rasterizer is to use as little as 
> possible. Horizontal stems in alignment zones are strongly hinted, because 
> sharp alignment zones help in reading Latin-based text. Other horizontal 
> stems are hinted to produce the least possible movement (and to avoid 
> collisions). Vertical stems are not hinted at all. This is because interglyph 
> spacing and kerning are more important to readability than sharp vertical 
> stems (again, this is for Latin-based text). This is very similar to the 
> light mode of the FreeType autohinter. Finally, stem widths are not modified. 
> That is, they are not "snapped" to integer widths. This is one of the 
> advantages of rendering antialiased text. Stem snapping was invented for 
> bilevel/monochrome text, where it is necessary to achieve consistent stem 
> widths.

All of the above sounds reasonable. I'd treat stem darkening as somewhat 
similar to changing the stem width, as the effect of such an operation would be 
similar. In any case, it is a minor quibble; I agree that something must be 
done about small sizes, or they are going to appear very light.

>> - a per-face, per-size global y offset, calculated over all the glyphs in the 
>> face, to produce the best available contrast for horizontal stems
> Do you find that the y-offset is dominated by stems at the baseline? I'd 
> guess these were the most common. I like the use of a global offset to keep 
> the baseline straight.

I guess this depends on the font. I’m afraid I’ve not yet really developed an 
understanding of what it prefers to do. I literally wrote the code last night 
and debugged it this morning, and have been busy at my day job since. 
Nevertheless, one thing I’ve seen is that it likes to spread the antialiasing 
around both sides of important edges. I’ve looked at only a couple of sample 
TTF renderings with the code so far, though!
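To make the global y-offset idea concrete, here is a rough Python sketch (my own illustration, not the code discussed in the mail): it models the per-row coverage of a horizontal stem analytically and scans sub-pixel offsets for the one that leaves the fewest partially covered rows. The coverage model, the penalty function, and the 1/64 px scan step are all assumptions.

```python
def stem_coverage(y0, width, rows):
    """Per-pixel coverage of a horizontal stem spanning [y0, y0 + width),
    sampled over pixel rows [i, i + 1)."""
    cov = []
    for i in range(rows):
        overlap = min(i + 1, y0 + width) - max(i, y0)
        cov.append(max(0.0, min(1.0, overlap)))
    return cov

def aa_penalty(cov):
    """Penalty for fractional (antialiased) coverage: zero for fully-on or
    fully-off rows, maximal for half-covered rows."""
    return sum(min(a, 1.0 - a) for a in cov)

def best_y_offset(stems, rows, steps=64):
    """Scan sub-pixel offsets in 1/64 px increments (FreeType's 26.6 grid)
    and keep the one minimizing the total penalty over all sample stems."""
    best = min(range(steps),
               key=lambda k: sum(aa_penalty(stem_coverage(y0 + k / steps, w, rows))
                                 for y0, w in stems))
    return best / steps

# A single 2 px stem starting at y = 0.25: shifting by 0.75 px aligns
# both of its edges with the pixel grid.
print(best_y_offset([(0.25, 2.0)], rows=8))  # → 0.75
```

With several stems in the list, the stem nearest the baseline tends to dominate the result simply because baseline stems are the most common, matching Dave's guess above.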

> I'd guess that most Latin characters are influenced by two alignment zones. 
> E.g., baseline and x-height, or baseline and cap-height. This suggests that 
> an adjustable scale factor would help to match both. (I think there is such a 
> mechanism in FreeType autohint.)

Yes, I personally believe that "optimal translation and scaling", despite being 
an irritating parameter-space search, is likely the limit of the technique. 
More complicated strategies, such as splitting the glyph box and 
stretching/shrinking the top and bottom halves slightly differently, would 
further improve the alignment to the pixel grid, but as previously noted, I 
dislike solutions that imply distorting the outline.
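The "optimal translation and scaling" search might look roughly like this hypothetical Python sketch: brute-force a small scale perturbation and a sub-pixel translation so that two alignment zones (baseline at y = 0 and the x-height) land near pixel boundaries. The cost term expressing the "small error in scale" trade-off, and all the numbers, are invented for illustration.

```python
def zone_error(y):
    """Distance of an alignment-zone edge from the nearest pixel boundary."""
    return abs(y - round(y))

def fit_scale_translate(xheight, scale_tol=0.02, scale_cost=0.5, steps=64):
    """Grid-search a small scale perturbation s and a sub-pixel translation t
    so that both the baseline (y = 0) and the x-height land near pixel
    boundaries.  scale_cost weights how much scale error we accept per pixel
    of grid alignment gained; scale_tol caps the perturbation at +/- 2 %."""
    best = None
    for si in range(-steps, steps + 1):
        s = 1.0 + scale_tol * si / steps
        for ti in range(steps):
            t = ti / steps
            err = (zone_error(t) + zone_error(s * xheight + t)
                   + scale_cost * abs(s - 1.0))
            if best is None or err < best[0]:
                best = (err, s, t)
    return best[1], best[2]

s, t = fit_scale_translate(10.1)
# With a 10.1 px x-height the search settles on s ≈ 0.99, t = 0:
# shrinking by about 1 % puts the x-height at 9.999 px, on the grid.
```

Note that this still only scales and translates the whole outline, so it stays within the "uniform operations" constraint above.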

>> - a per-glyph x offset, which just enhances the vertical stems. Since kerning 
>> and glyph placement are per-subpixel, it causes only a very small amount of 
>> horizontal jitter, too little to be noticeable to me.
> Are you saying that the x-offset is limited to +- 1/6 pixel because you're 
> assuming vertically striped LCD? Would you disable the x-translation for 
> horizontally striped or grayscale rendering?

Hmm. No, the x resolution is in 1/64 pixels as usual. I’m using FreeType’s 
grayscale rendering and have told FreeType to render at 3 times the horizontal 
resolution relative to the vertical. I’m doing the LCD filtering manually on 
the returned bitmap. I suspect this technique is not acceptable unless the 
effective horizontal resolution is at "retina" levels; for instance, I’m 
viewing it at an effective 390 dpi, given that my screen has a 130 dpi pixel 
pitch.
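For reference, a manual LCD filter over 3x-oversampled grayscale could be sketched like this in Python. The 5-tap weights are FreeType's default FIR filter (FT_LCD_FILTER_DEFAULT); whether the mail's code uses the same weights is an assumption on my part.

```python
# FreeType's default 5-tap FIR filter weights; they sum to 256 so that a
# fully covered region keeps full coverage after filtering.
FIR = [0x08, 0x4D, 0x56, 0x4D, 0x08]

def lcd_filter_row(row):
    """Turn one row of 3x-oversampled grayscale coverage (0..255) into
    per-subpixel R, G, B values by convolving with the 5-tap filter."""
    n = len(row)
    out = []
    for i in range(n):
        acc = 0
        for k, w in enumerate(FIR):
            j = i + k - 2                 # filter centered on subpixel i
            if 0 <= j < n:                # subpixels outside the row count as 0
                acc += w * row[j]
        out.append(min(255, acc >> 8))    # back to 0..255
    # group subpixel triples into (R, G, B) pixels
    return [tuple(out[i:i + 3]) for i in range(0, n, 3)]

# The interior of a fully covered region stays fully covered; only the
# edges are spread across neighboring subpixels.
print(lcd_filter_row([255] * 9)[1])  # → (255, 255, 255)
```

The filter is what keeps the color fringing down; it is also why this approach wants a high effective horizontal resolution, since the spreading costs some of the sharpness the oversampling bought.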

>> I’d also like to implement the darkening somehow without touching the glyph 
>> outline, but the best idea I have so far is to simply use a multiplier on 
>> the alpha bitmap (which should of course be calculated somehow, not just 
>> specified as some kind of magic number). For now, I leave in the 0.5 px 
>> outline bolding because it is just one number, should "optimally" enhance 
>> glyphs to the full alpha range at small glyph sizes, and degrades gradually 
>> to irrelevance as the rendering size increases.
> You are using a constant darkening amount. I think Apple does this, too.

Yes. Don’t like it, but...

> The design goals in the Adobe CFF rasterizer are to minimize distortion, and 
> darkening is a form of distortion. The darkening amount is variable because 
> light fonts lose contrast more readily than bold fonts. And at larger sizes 
> (above about two pixel stems) no darkening is needed, so no distortion is 
> needed.

I guess this means that a single glyph can have both darkened and undarkened 
stems.

> I would avoid adjusting the alpha map to achieve darkening. I don't see how 
> to do it without distorting the perceived shape. How would you darken an edge 
> pixel whose alpha value is 1.0? When you embolden the outline, you darken 
> that pixel by turning on a neighbor.

This is of course a difficult problem in its own right. I guess you’d sample a 
number of "landmark" glyphs at the desired rendering size, examine the 
rasterized bitmaps, note the alpha values, and calculate a multiplier factor 
that achieves a bit over 100 % saturation when applied to the alphas.

As an example, suppose you render all the Latin a-zA-Z0-9 glyphs into a bitmap, 
then scan the alpha values of the result and store them in a list sorted in 
ascending order. Now, if the rendering is "too light", the alpha value near the 
end of the list (say, the value at the 90 % point) is probably not 1.0, but 
could be something fairly low, like 0.3. You could then calculate an 
alpha-bitmap multiplier of 1.0 / 0.3 and apply it to all glyphs returned with 
the current parameters, producing glyphs with good contrast.
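That percentile-based estimate can be written down directly; this is a hypothetical sketch of the procedure described above, not code from the mail, and it sidesteps the fully-saturated-edge-pixel problem Dave raises only because everything is simply clamped.

```python
def alpha_multiplier(bitmaps, percentile=0.90):
    """Pool the nonzero coverage values (0.0..1.0) of some landmark glyph
    renderings, sort them ascending, and pick the value at the given
    percentile.  If that value is well below 1.0, the rendering is 'too
    light' and 1.0 / value would boost it to saturation."""
    alphas = sorted(a for bm in bitmaps for a in bm if a > 0.0)
    if not alphas:
        return 1.0
    pivot = alphas[min(len(alphas) - 1, int(percentile * len(alphas)))]
    return 1.0 / pivot

def apply_multiplier(bitmap, m):
    """Scale every alpha and clamp to full coverage."""
    return [min(1.0, a * m) for a in bitmap]

# A light rendering whose 90th-percentile alpha is 0.3 gets a ~3.3x boost,
# saturating its strongest pixels while the faint ones scale up in proportion.
light = [0.0, 0.1, 0.2, 0.3, 0.3, 0.2, 0.1, 0.0]
m = alpha_multiplier([light])
darkened = apply_multiplier(light, m)
```

The clamping is exactly where the perceived-shape distortion Dave warns about would creep in: pixels that hit 1.0 stop carrying edge-position information, which outline emboldening avoids by turning on neighbors instead.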

— 
Antti

