> Yes, you should try each one, and all combinations of them. However,
> the number of features that are of interest (at least for latin
> scripts) is small, which means that the number of iterations doesn't
> become very large; see macro `META_STYLE_LATIN` in file `afstyles.h`
> for a list.
Is testing all these combinations really necessary? There are 9 styles listed, which gives 2^9 = 512 combinations; querying each combination for the more than 100 adjustment database entries currently listed means over 50,000 calls to hb_ot_shape_glyphs_closure. My intuition says that very few of these combinations actually matter. Five of the styles are related to capital forms of letters, so it would be strange for a glyph to take yet another form when two of those features are enabled at once.
I wrote some pseudocode for a different approach that I believe accomplishes the same thing, while being more efficient and hopefully removing the need to constrain the set of features considered:
Definitions:
lookup(c, fs) takes a codepoint c and a set of features fs and returns all intermediate and final forms of the glyph (the same as hb_ot_shape_glyphs_closure).
"features" is the set of all features in the font, potentially reduced to only the relevant ones.
func all_glyphs(codepoint c, set<feature> fs = ∅)
{
  set<glyph_index> result = lookup(c, fs)  // start with the glyphs reachable under the current feature set

  foreach (feature f ∈ (features - fs))  // for all features not already in fs
  {
    if (lookup(c, fs ∪ {f}) != lookup(c, fs)  // if adding the feature f changes the lookup result...
        && (lookup(c, fs ∪ {f}) - lookup(c, fs)) ⊈ result)  // ...and at least one of the new glyphs is not already in result
    {
      result = result ∪ all_glyphs(c, fs ∪ {f})
    }
  }

  return result
}
Calling this function as all_glyphs(c) returns all variants of codepoint c.
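Translated into C on top of the same HarfBuzz calls, the recursion might look roughly as follows. It builds on the hypothetical lookup() wrapper sketched above; the function name, the candidates/active parameter layout, and the way already-active features are skipped are all illustrative choices of mine, not existing FreeType code. Also, because result already contains the closure for the current feature set, the two-part condition of the pseudocode collapses into a single subset test here.

```c
/* A sketch only: recursive expansion of the pseudocode above.
   `candidates' is the (possibly pre-reduced) list of features to
   consider; `active' holds the features enabled so far and must have
   room for at least `num_candidates' further entries; reachable glyph
   indices accumulate in `result'. */
static void
all_glyphs (hb_font_t           *font,
            hb_codepoint_t       cp,
            const hb_feature_t  *candidates,
            unsigned int         num_candidates,
            hb_feature_t        *active,
            unsigned int         num_active,
            hb_set_t            *result)
{
  hb_set_t *base = lookup (font, cp, active, num_active);

  /* result = result ∪ lookup(c, fs) */
  hb_set_union (result, base);
  hb_set_destroy (base);

  for (unsigned int i = 0; i < num_candidates; i++)
  {
    unsigned int j;

    /* skip features that are already active */
    for (j = 0; j < num_active; j++)
      if (active[j].tag == candidates[i].tag)
        break;
    if (j < num_active)
      continue;

    /* tentatively enable feature f = candidates[i] */
    active[num_active] = candidates[i];

    hb_set_t *with_f = lookup (font, cp, active, num_active + 1);

    /* recurse only if enabling f reaches a glyph not collected yet */
    if (!hb_set_is_subset (with_f, result))
      all_glyphs (font, cp, candidates, num_candidates,
                  active, num_active + 1, result);

    hb_set_destroy (with_f);
  }
}
```

The top-level call would pass an empty `active` scratch array, `num_active == 0`, and a freshly created `result` set; after it returns, `result` holds every glyph index any combination of the candidate features can produce from the codepoint.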
Let me know your thoughts and questions about this algorithm.
> Please also post some images.
I attached some pictures of the tilde unflattening approaches, choosing sizes that showcase the differences between them. I have also committed my current code if you would like to try it yourself.