Re: Combining German Umlauts and Russian Cyrillic characters


From: Chris Herborth
Subject: Re: Combining German Umlauts and Russian Cyrillic characters
Date: Tue, 25 Nov 1997 09:37:48 -0500

Previously, Valeriy E. Ushakov (address@hidden) wrote:
> * Thoughts on font selection scheme.
> 
> This also poses an interesting question on Lout font selection scheme
> (FSS).
[...]

> So we need to study prospects of adopting some more sophisticated FSS
> for Lout, maybe similar to LaTeX's NFSS.

As long as we don't adopt NFSS specifically; after nearly three years of
using LaTeX to typeset our documents, I'm convinced that not a single
thing I use in our docs works right in LaTeX (maybe math behaves).

> If Lout is to adopt Unicode
> (and it's just a matter of time), this kind of FSS is unavoidable,
> because during processing of, say, mixed Latin (eastern or western) and
> Cyrillic text, Lout will have to emit PS commands to change fonts,
> automatically selecting a properly encoded Type 1 font (a Type 1
> encoding vector is 8-bit).

I'd like to see Lout go Unicode sooner rather than later; it'll make
_my_ life easier.  :-)  Our SGML system now encompasses English, German,
and Japanese translations, and it'd be fantastic to use the same tools
to publish all three documents.  Right now, doing German is no problem,
but Japanese is quite a challenge... I think our translator is using MS
Word (*shudder*) or something for printouts.

> Either a single font with all the necessary glyphs
> can be used, by swapping its encoding vector on the fly or by preparing
> several reencoded fonts that share glyphs but have the proper encoding
> vectors, or several distinct fonts, each of which has a subset of the
> necessary glyphs (say, for eastern Latin, for western Latin and for
> Cyrillic), can be used.

The "several distinc fonts" route seems like the best to me; it doesn't
require any modifications, and gives us a lot of flexibility.  On the
other hand, it'll also require some work to get a good mapping set up.
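
Just to make that concrete, here's the sort of mapping I have in mind, as
a little Python sketch (the "Times-Cyrillic" name and its ISO 8859-5
layout are assumptions for illustration, not anything Lout ships with):

    def pick_font(cp):
        """Map a Unicode code point to a (font name, 8-bit code) pair."""
        if cp <= 0xFF:
            # Latin-1 (umlauts included) coincides with the first 256
            # code points, so the Latin text font can be used as-is.
            return ("Times-Roman", cp)
        if 0x0410 <= cp <= 0x044F:
            # Basic Russian letters; the 8-bit codes follow the
            # ISO 8859-5 layout of the reencoded Cyrillic face.
            return ("Times-Cyrillic", cp - 0x0410 + 0xB0)
        if cp == 0x0401:
            return ("Times-Cyrillic", 0xA1)   # capital Io (Ё)
        if cp == 0x0451:
            return ("Times-Cyrillic", 0xF1)   # small io (ё)
        raise LookupError("no font covers U+%04X" % cp)

With that, pick_font(ord("ä")) stays in the Latin face at 0xE4 and
pick_font(ord("Ж")) switches to the Cyrillic face at 0xB6, which is
exactly the umlauts-plus-Cyrillic case that started this thread.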

I don't think there is a full Unicode font out there; Bitstream's
Cyberbit has most of the "important" glyphs, but it's not complete.

> Thus we will need some way to describe a mapping from character codes
> in some coded set into pairs of a set of glyphs (a font is a set of
> glyphs + an encoding vector) and a character code in another coded set
> (the encoding vector proper).

Maybe some sort of logical font groupings: serif, sans-serif, and
monospaced, where each grouping has ranges of Unicode characters that map
onto a font and an 8-bit value in that font...
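
Something like this, maybe (just a sketch; the family names, physical
fonts and ranges below are all made up for illustration):

    # logical family -> list of (first code point, last code point,
    #                            physical font, base 8-bit code)
    GROUPS = {
        "serif": [
            (0x0000, 0x00FF, "Times-Roman",      0x00),  # Latin-1, identity
            (0x0410, 0x044F, "Times-Cyrillic",   0xB0),  # Russian letters
        ],
        "mono": [
            (0x0000, 0x00FF, "Courier",          0x00),
            (0x0410, 0x044F, "Courier-Cyrillic", 0xB0),
        ],
    }

    def resolve(family, cp):
        """Return (physical font, 8-bit code) for a code point in a family."""
        for lo, hi, font, base in GROUPS[family]:
            if lo <= cp <= hi:
                return font, base + (cp - lo)
        raise LookupError("U+%04X not covered in %s" % (cp, family))

The nice part is that a document only ever asks for "serif" or "mono";
the table decides which physical font and which 8-bit code actually get
emitted.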

This is sort of what I was planning with my Japanese SGML; I'd process
it, and map the UTF-8 characters in the Japanese range to the correct
font and character value during the SGML -> Lout (or whatever)
conversion.
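
The conversion pass would look roughly like this (a toy sketch only; the
ranges are the usual Hiragana/Katakana/CJK blocks, and @JapaneseFont is a
stand-in for whatever the converter would really emit, not actual Lout
syntax):

    def is_japanese(cp):
        return (0x3040 <= cp <= 0x30FF or   # Hiragana and Katakana
                0x4E00 <= cp <= 0x9FFF)     # CJK Unified Ideographs

    def convert(utf8_bytes):
        """Copy text through, wrapping Japanese characters in a font switch."""
        out = []
        for ch in utf8_bytes.decode("utf-8"):
            if is_japanese(ord(ch)):
                # A real converter would also translate the code point
                # into the chosen Japanese font's own character value.
                out.append("@JapaneseFont{ %s }" % ch)
            else:
                out.append(ch)
        return "".join(out)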

So, if we do need Unicode -> PostScript font value lookup tables (maybe
we'll get lucky and won't need a 1:1 lookup database), I'll be able to
provide them for Japanese characters.

-- 
----------================================================----------  _
Chris Herborth, R&D Technical Writer   (address@hidden)              | \  _
QNX Software Systems, Ltd.             Arcane Dragon -==(UDIC)==-    | < /_\
http://www.qnx.com/~chrish/            DNRC Holder of Past Knowledge |_/ \_

