On the one hand, we have a coverage map, i.e., for each pixel, how
much of it is covered by the outline.
On the other hand, we are blending a foreground (text) color with a
background color, so we need some alpha value and use it for linear
blending in a linear colorspace.
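To make the identity baseline concrete, here is a minimal sketch
(the names and structure are mine, not from any particular library),
assuming 8-bit sRGB-encoded colors, a single channel, and coverage
already resolved per pixel: coverage is used directly as alpha for a
blend in linear light.

  #include <math.h>
  #include <stdint.h>

  /* Decode an 8-bit sRGB value to linear light. */
  static double srgb_to_linear(uint8_t v)
  {
      double c = v / 255.0;
      return (c <= 0.04045) ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
  }

  /* Encode linear light back to 8-bit sRGB. */
  static uint8_t linear_to_srgb(double c)
  {
      double s = (c <= 0.0031308) ? 12.92 * c
                                  : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
      return (uint8_t)(s * 255.0 + 0.5);
  }

  /* coverage in [0,1] from the rasterizer; fg/bg are sRGB-encoded. */
  uint8_t blend_pixel(double coverage, uint8_t fg, uint8_t bg)
  {
      double alpha = coverage;  /* identity coverage-to-alpha mapping */
      return linear_to_srgb(alpha * srgb_to_linear(fg)
                            + (1.0 - alpha) * srgb_to_linear(bg));
  }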
What is not clear to me is how to go from coverage to alpha.
Identity is plausible, but there are also reasons to believe it is
not the right mapping:
- things like White's
illusion show that perception works in strange ways
- "One usually begins by assuming that nothing is known about the
object world and then the diffraction limit outlines the range of
object details that an image transfer allows to be gained and, by
exclusion, those that it leaves undetermined. On the other hand, it
might be known ahead of time that the ensemble of possible objects
is restricted. Then distinctions can be made by concentrating on the
expected differences and disregarding image aspects that might have
arisen from sources known beforehand to be absent." (Optical
superresolution and visual hyperacuity, Westheimer), which can
explain why readers perceive gray pixels differently (i.e., expecting
black and white, they may read gray as a variation in stroke width)
- maybe the alpha should also depend on the foreground/background
color (a hypothetical sketch of what such a mapping could look like
follows after this list)
- maybe the alpha should also depend on the ppem.
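To illustrate what a non-identity mapping might even look like, here
is a purely hypothetical sketch; the function, the exponent values,
and the idea of keying the exponent to foreground/background
luminance are all invented for illustration, not taken from the
literature. Coverage is raised to an exponent that depends on whether
the text is darker or lighter than the background, with 1.0 reducing
to the identity.

  /* Hypothetical, illustration only: a coverage-to-alpha mapping that
   * depends on foreground/background luminance (both in linear light).
   * Uses pow() from <math.h>; the exponent values are made up. */
  double coverage_to_alpha(double coverage, double fg_lum, double bg_lum)
  {
      /* Dark-on-light text gets a slightly smaller exponent (strokes
       * appear a bit heavier), light-on-dark a slightly larger one;
       * an exponent of 1.0 is exactly the identity mapping. */
      double exponent = (fg_lum < bg_lum) ? 0.9 : 1.1;
      return pow(coverage, exponent);
  }

One could imagine the exponent also varying with the ppem, but I have
no principled way to choose these numbers, which is exactly the
question.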
I have not found much in the literature. Opinions, pointers?