
[ft-devel] [08/08] gamma correction issues with FreeType


From: Werner LEMBERG
Subject: [ft-devel] [08/08] gamma correction issues with FreeType
Date: Tue, 29 Oct 2013 05:53:47 +0100 (CET)

From Antti Lankila, answering Octoploid who forwarded Dave Arnold's reply:

  > Where could one open a bug for this issue?
  > (https://bugs.freedesktop.org/show_bug.cgi?id=28549 that you've
  > filed in 2010 unfortunately hasn't seen much activity since then.)
  > I'm asking because it would be better to have a place to bundle
  > all information and also involve the wider Linux community
  > (instead of exchanging private emails).

  I can’t really see where a central place for this work should be.
  We’d need to do advocacy and involve a whole bunch of parties.
  Here are the parties as I see them:

  - application writers, if we add color spaces.  We should do this,
  but I think I’ll wait for Wayland and see how it works out in
  practice.  I think we should make applications get scRGB(16) or a
  similar color space by default, adjust our abstractions to convert
  images and colors from their own colorspaces to this one, and then,
  in the display compositor, convert (with the monitor profile) to
  the final display format, using whatever precision is suitable for
  it.  The most important application to fix is the web browser.

  - toolkit people, if color spaces are added.  In practice this
  means the GTK+/Qt folks, and I think that Qt is the more critical
  one, as GTK+’s star seems to be waning.  Still, desktop
  environments exist that do not depend on it, and if they provide a
  superior experience, GTK+’s decline will be faster and we’re all
  better off with just one toolkit to choose from.

  - drawing libraries (cairo, etc.).  We have to make cairo always
  compose text bitmaps with a particular function; it needs to tell
  the display subsystem one way or another that this is what it
  wants to do.  It seems to me that Cairo already does this, given
  that it can forward work to the pixman-glyph function.

  - pixman, the software implementation of XRENDER and likely part of
  Wayland too.  Pixman can already do the job “correctly” if it is
  told that the source surface/color is in sRGB and the target is in
  sRGB.  However, if pixman doesn’t have this information and we do
  alpha correction or something similar, then we could just hack
  pixman-glyph to do the right thing and make sure that, at first,
  everyone gets the software pathway.  This is totally feasible
  today, though not elegant.  (See the first sketch after this
  list.)

  - GPU driver people (so they can add sRGB capability to the
  text-related OVER blend).  This is required for accelerating
  pixman’s sRGB surfaces, or text compositing if it is not done by
  alpha correction.  They should probably look into hardware
  accelerating something like the LCMS color space processing
  library, and integrating it into all the PDF blending ops they
  support.  For the 3-component-in, 3-component-out case, a color
  space conversion can be realized as a linearly interpolated 3D
  texture lookup, or something similar, at quite modest cost (see
  the second sketch after this list).
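
  Here is a minimal sketch of the first point, compositing an 8-bit
  glyph coverage mask against an sRGB destination with pixman; the
  glyph mask and geometry are placeholders:

      #include <pixman.h>
      #include <stdint.h>

      /* Composite a PIXMAN_a8 glyph coverage mask OVER an sRGB
         destination.  Because the destination format is
         PIXMAN_a8r8g8b8_sRGB, pixman decodes to linear light,
         blends, and re-encodes -- the "correct" pathway. */
      void draw_glyph(uint32_t *dst_bits, int w, int h,
                      pixman_image_t *mask)
      {
        pixman_image_t *dest = pixman_image_create_bits(
            PIXMAN_a8r8g8b8_sRGB, w, h, dst_bits, w * 4);

        pixman_color_t black = { 0, 0, 0, 0xffff };
        pixman_image_t *src = pixman_image_create_solid_fill(&black);

        pixman_image_composite32(PIXMAN_OP_OVER, src, mask, dest,
                                 0, 0, 0, 0, 0, 0, w, h);

        pixman_image_unref(src);
        pixman_image_unref(dest);
      }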
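
  And a CPU sketch of the second point, the trilinearly interpolated
  3D lookup table; a GPU gets this for free from a linearly filtered
  3D texture fetch.  The 17-point grid is only an assumption, a
  common size for ICC-style LUTs:

      #include <math.h>

      #define LUT_N 17  /* grid points per axis */

      /* lut[r][g][b][c]: output color at each grid point, c = 0..2 */
      typedef float lut3d_t[LUT_N][LUT_N][LUT_N][3];

      /* Trilinearly interpolated lookup; in[], out[] are in [0,1]. */
      void lut3d_apply(const lut3d_t lut, const float in[3],
                       float out[3])
      {
        int   i0[3], i1[3];
        float f[3];

        for (int a = 0; a < 3; a++) {
          float p = in[a] * (LUT_N - 1);
          i0[a] = (int)floorf(p);
          if (i0[a] < 0)         i0[a] = 0;
          if (i0[a] > LUT_N - 2) i0[a] = LUT_N - 2;
          i1[a] = i0[a] + 1;
          f[a]  = p - i0[a];
        }

        for (int c = 0; c < 3; c++) {
          /* interpolate along the b axis... */
          float c00 = lut[i0[0]][i0[1]][i0[2]][c] * (1 - f[2])
                    + lut[i0[0]][i0[1]][i1[2]][c] * f[2];
          float c01 = lut[i0[0]][i1[1]][i0[2]][c] * (1 - f[2])
                    + lut[i0[0]][i1[1]][i1[2]][c] * f[2];
          float c10 = lut[i1[0]][i0[1]][i0[2]][c] * (1 - f[2])
                    + lut[i1[0]][i0[1]][i1[2]][c] * f[2];
          float c11 = lut[i1[0]][i1[1]][i0[2]][c] * (1 - f[2])
                    + lut[i1[0]][i1[1]][i1[2]][c] * f[2];
          /* ...then along g, then r */
          float c0 = c00 * (1 - f[1]) + c01 * f[1];
          float c1 = c10 * (1 - f[1]) + c11 * f[1];
          out[c] = c0 * (1 - f[0]) + c1 * f[0];
        }
      }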

  I believe that we need to leave sRGB behind as a color space,
  especially as we are getting displays with wider gamuts.  We should
  select a universal colorspace to which all other colorspaces can be
  converted with minimal loss of fidelity, and make it the default
  colorspace for all applications.  My belief is that it should be
  linear, have 16 bits per channel, and have a gamut large enough to
  describe the colors we can build hardware to show.  I am personally
  thinking that scRGB(16) could be that colorspace.  Of its various
  properties, the linearity is the most important, so that we could
  finally get text rendering to look correct by default.  (If it is
  non-linear, we will never get a physically meaningful OVER
  operator; it simply is too difficult to make people understand it.)
  I also believe that we can pay the cost of 64-bit application
  window textures.
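
  To make the linearity point concrete, here is a per-channel sketch
  of the two OVER variants, assuming sRGB encoding at both ends (in a
  linear framebuffer the decode/encode steps disappear and plain OVER
  is already correct):

      #include <math.h>

      /* Standard sRGB transfer functions. */
      static float srgb_decode(float c)
      {
        return c <= 0.04045f ? c / 12.92f
                             : powf((c + 0.055f) / 1.055f, 2.4f);
      }

      static float srgb_encode(float c)
      {
        return c <= 0.0031308f
                 ? 12.92f * c
                 : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
      }

      /* Physically meaningful OVER: blend in linear light, encode.
         fg, bg, and the result are sRGB-encoded; a is coverage. */
      float over_linear(float fg, float bg, float a)
      {
        float lin = a * srgb_decode(fg)
                  + (1.0f - a) * srgb_decode(bg);
        return srgb_encode(lin);
      }

      /* The common (wrong) shortcut: blend the encoded values. */
      float over_gamma(float fg, float bg, float a)
      {
        return a * fg + (1.0f - a) * bg;
      }

  At 50% coverage of black on white, over_linear() gives about 0.73
  while over_gamma() gives 0.50, which is exactly why naively blended
  black-on-white text comes out too dark.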

  On to the email.

  > I agree that one solution would be to work entirely in a linear
  > color space, but there are competing reasons for using a
  > perceptually efficient space like sRGB.  As was pointed out, the
  > linear space would require more bits per pixel.

  This is one of the reasons.  I believe we should have at least 12
  bits per channel to preserve black fidelity.  Using the scRGB(16)
  color space, i.e. linear with 16 bits per channel, as in Windows 7
  and above (where it is an option), would be my go-to solution,
  though.  The gamut will be considerably larger, there is about
  double the precision at the low intensity range, and it only
  doubles the framebuffer size.

  I tried to convince Wayland people to only offer scRGB(16) window
  textures, and convert with a color correction lookup from the
  compositor to sRGB or whatever the display will consume.  This did
  not fly, but I believe the people at least understood my arguments
  even when they disagreed with me for performance reasons.

  > The proper calculation needs 3 values, the text
  > density/coverage/alpha value, the foreground color and the
  > background color.  FreeType is in the business of supplying only
  > the first value.  Using that value properly is the business of the
  > graphics system.  So the problem cannot be solved in FreeType.

  Well, I’ve spoken with David Turner at FreeType about the fact that
  he uses an LCD filter kernel that is oversaturated, and he
  basically told me to fuck off (but quite politely).  However,
  FreeType’s problem can be worked around by using the FIR3 filter,
  by passing five weights of {0, 0x55, 0x56, 0x55, 0}, or by doing
  the LCD filtering manually on the glyph image.  A 3-subpixel moving
  average gives the theoretically correct result, and FreeType’s
  5-tap filter is only bad-looking in some pathological cases.  In
  any case, I fully agree that hacks in FreeType will only make
  everything worse, and the library is pretty much perfect if you
  avoid the FIR5 issue.
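
  In code, the workaround is a one-liner against FreeType’s public
  LCD-filter API (a sketch; error handling omitted):

      #include <ft2build.h>
      #include FT_FREETYPE_H
      #include FT_LCD_FILTER_H

      void use_light_lcd_filter(FT_Library library)
      {
        /* The FIR3 moving average, expressed as explicit 5-tap
           weights that sum to 0x100, i.e. not oversaturated... */
        unsigned char weights[5] = { 0x00, 0x55, 0x56, 0x55, 0x00 };

        FT_Library_SetLcdFilterWeights(library, weights);

        /* ...or, equivalently, by name: */
        FT_Library_SetLcdFilter(library, FT_LCD_FILTER_LIGHT);
      }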

  > The most common problem I've encountered is not knowing the
  > background color.  This is what happens when rendering text into a
  > "transparent" buffer, which will be composited much later.  Antti
  > has described a good compromise ("trick") that can be used when
  > the foreground color is known.  Namely, assume the background
  > color can be guessed, and hack the alpha values.  This has the
  > potential to get the most common cases correct, for example white
  > on black and black on white.  Mid-tone colors will not be correct,
  > however.  (My favorite "worst case" test is red text on medium
  > green.) I helped implement such a "linear blending heuristic" for
  > Flash.  I think a similar approach may be used in Android.

  Yes, Android has this in Skia, and it has some hacks in it.  I
  actually discovered alpha correction independently, and when I
  looked at how Skia renders things I recognized that it is actually
  doing the same thing, using my initial “bg in linear light = 1.0 -
  fg in linear light” heuristic for the background.  Unfortunately,
  we are stuck with Cairo, it seems.  I’ve talked to the Cairo
  people; they seem to understand my arguments, but they don’t give
  the appearance of being eager to add color space support or to
  touch the text rendering parts, and I find that the code-level
  indirection inside Cairo makes the library nearly incomprehensible
  to me, so the few hacks I’ve done in Cairo have done little to
  improve anything.  For me, it is unhackable at the present time.
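
  For reference, here is that heuristic as I understand it, as a
  sketch with a plain gamma-2.2 power function standing in for sRGB
  and the guessed background baked in:

      #include <math.h>

      static float enc(float c) { return powf(c, 1.0f / 2.2f); }

      /* Correct a coverage value a so that naive gamma-space
         blending approximates linear blending, assuming the
         background guess bg_linear = 1 - fg_linear.  fg_lin is
         the foreground luminance in linear light, in [0,1]. */
      float correct_alpha(float a, float fg_lin)
      {
        float bg_lin = 1.0f - fg_lin;
        float res    = enc(a * fg_lin + (1.0f - a) * bg_lin);
        float fg_enc = enc(fg_lin);
        float bg_enc = enc(bg_lin);

        if (fabsf(fg_enc - bg_enc) < 1e-3f)  /* mid-tones: no-op */
          return a;
        return (res - bg_enc) / (fg_enc - bg_enc);
      }

  For black text (fg_lin = 0) this reduces to a' = 1 - (1-a)^(1/2.2),
  and the mid-tone colors that the heuristic gets wrong are exactly
  the ones near the fg_enc == bg_enc guard.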

  > You are also correct about such a heuristic interacting with the
  > cache.  One approach is to add the text color to the cache key,
  > when it is known.  This is not too bad if you first quantize by
  > luminance to only a few values.  A second approach is to adjust
  > the alpha values post-cache.  The adjustment is usually a table
  > lookup and is faster than rendering a glyph, so most of the cache
  > benefit is preserved.

  Yes.  These are obvious solutions.  The cache will become very
  large if alpha correction is to be used to the maximum benefit it
  has to offer, given that we have 16M possible colors, and even
  quantizing to 4 levels gives 4^3 = 64 variants of a glyph.  Of
  course, some of these are far less likely than others, so it might
  work alright.  I would suggest post-processing the
  FreeType-generated alpha bitmap, because the lookups are just the
  sort of busywork that computers are good at.
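
  As a sketch of the post-cache variant (reusing correct_alpha()
  from the previous sketch): build a 256-entry table once per
  quantized foreground luminance and run every cached A8 glyph
  bitmap through it.

      void build_alpha_lut(unsigned char lut[256], float fg_lin)
      {
        for (int i = 0; i < 256; i++)
          lut[i] = (unsigned char)
                   (correct_alpha(i / 255.0f, fg_lin) * 255.0f
                    + 0.5f);
      }

      /* Post-process an 8-bit coverage bitmap in place. */
      void apply_alpha_lut(unsigned char *bitmap, int pitch,
                           int rows, int width,
                           const unsigned char lut[256])
      {
        for (int y = 0; y < rows; y++)
          for (int x = 0; x < width; x++)
            bitmap[y * pitch + x] = lut[bitmap[y * pitch + x]];
      }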

  > The issues for subpixel rendering are a little more complicated.
  > This type of LCD rendering requires a "color balanced filter"
  > where the basis of the filter must have equal parts of red green
  > and blue in order to cancel out color fringing.  Linear blending
  > is a second requirement, because without it, the colors will not
  > cancel.

  This can be accomplished with the FIR3/FIR5/moving-average filter.
  The objective is to spread the energy around so that if a red
  component is excited, its neighboring blue and green components
  will be excited just as much.  Once combined with correct alpha
  blending, there is no perceptible color fringing.  Somewhat related
  to this: Cairo unfortunately programs FreeType to use the old
  “intra-pixel” filter, which only moves energy around within that
  pixel.  This subtly distorts geometry and causes other coloration
  issues.  It can be changed by setting the lcdfilter property to
  lcddefault (= FT FIR5) or lcdlight (= FT FIR3).  The latter is my
  recommendation if you have a linearly correct rendering pipeline,
  the former otherwise.
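
  (For reference, this is the usual fontconfig way to set that
  property, e.g. in a personal fonts.conf:

      <match target="font">
        <edit name="lcdfilter" mode="assign">
          <const>lcdlight</const>
        </edit>
      </match>

  with lcddefault instead of lcdlight for the FIR5 case.)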

  > While displays are usually calibrated to sRGB (approx. gamma 2.2)
  > I have found that the *effective* gamma for high-frequency
  > elements like glyph stems is lower.  We've used 1.8 as a design
  > target.  The default of 1.4 used by some Windows systems and
  > Android is probably acceptable.

  I do not understand what this means.  It is probably meant to
  compensate for the perceptual effect that people experience light
  on dark as stronger/more intense than dark on light?  That is the
  subject of the next paragraph.

  > While linear blending solves a number of rendering quality issues,
  > one major side effect is that small black-on-white text becomes
  > lighter.  This loss of contrast is a serious problem and I believe
  > it has led some implementers to decide that the darker black text
  > that comes from blending with gamma 1.0 is worth it.  However,
  > this is truly a "poor man's darkening".  The text may be on
  > average darker, but the effect is non-uniform.  Only mid-tone
  > pixels are affected; black and white pixels do not change.  This
  > non-uniformity produces perceived distortions, such as jagged
  > curves, and "ropey" diagonal lines and uneven vertical stem
  > widths.  And the effects are the same on white text, but in the
  > opposite direction.  The darkening in the Adobe CFF rasterizer is
  > applied to the outline, so it is uniform and is independent of
  > color.  It is a much better solution to the problem.

  I can’t really offer much insight on this.  I personally think it
  could just be a matter of getting used to the new rendering.  I now
  see linearly blended glyphs and their inverses as equal because
  I’ve been staring at them for so long.  I know that a new solution,
  whatever it is, must more or less conform to people’s expectations,
  though, so this problem requires a solution.

  Here is my proposal: just increasing the rendering weight,
  regardless of the foreground color, seems like it might be the
  easiest way to make text look better for everyone, at the cost of
  following the glyph outlines less precisely.  I would suggest using
  the FT outline emboldening facility, because it should preserve
  diagonal and curved shapes, if not the negative space between them.
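
  A sketch of what I mean, using the public outline API; the
  half-pixel strength is only an illustrative value, not a tuned one:

      #include <ft2build.h>
      #include FT_FREETYPE_H
      #include FT_OUTLINE_H

      /* Thicken a loaded glyph's outline before rendering.  The
         strength is in 26.6 fixed-point pixels, so 32 means half
         a pixel. */
      void embolden_glyph(FT_GlyphSlot slot)
      {
        if (slot->format == FT_GLYPH_FORMAT_OUTLINE)
          FT_Outline_Embolden(&slot->outline, 32);
      }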

  The FT FIR5 oversaturation, by the way, also has the effect of
  increasing rendering weight, but this is a distorting effect because
  it affects the midtones only.
