
Re: On language-dependent defaults for character-folding


From: Eli Zaretskii
Subject: Re: On language-dependent defaults for character-folding
Date: Fri, 19 Feb 2016 21:18:42 +0200

> Date: Fri, 19 Feb 2016 21:37:26 +0800
> From: Elias Mårtenson <address@hidden>
> Cc: Lars Ingebrigtsen <address@hidden>, emacs-devel <address@hidden>
> 
>  For example, if the buffer includes ñ (2 characters), should "C-s n"
>  find the n in it?
> 
> That depends on the locale of the user.

There are use cases that are independent of the locale.  For example,
imagine that you need to find all the literal n characters in a buffer
because you are investigating a bug in the program that produced that
buffer.  As an Emacs user, I need to do such jobs almost every day.  I
don't want the results affected by the locale.
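
To be concrete, here's a minimal sketch, assuming the character
folding code that is currently on master: the Lisp-level search
primitives are not affected by folding at all, so a literal search is
always available:

  ;; This only ever matches the single character U+006E, regardless
  ;; of locale or of any character-folding setting.
  (search-forward "n" nil t)

(Interactive isearch can likewise toggle folding off; IIRC the
binding is "M-s '".)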

> However, from the point of view of a user, there should not be a
> visible difference between the precomposed and the decomposed
> variants; they are the exact same character.

What if the user wants to find all those places where what looks like
ñ is actually the two-character sequence n followed by U+0303?
Wouldn't that be a valid use case?
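
(One rough way to do that from Lisp, just as an illustration and
assuming we only care about the combining tilde, U+0303:

  ;; Find places where an apparent ñ is really a two-character
  ;; sequence: a base letter followed by a combining tilde.
  (re-search-forward "[[:alpha:]]\u0303" nil t)

A folding search cannot make that distinction, by design.)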

> Note: I know that it's possible that I am wrong about this and that
> Unicode actually _has_ said that the equivalence tables can be used
> for this purpose (i.e. decompose and only use the primary
> character). If that is the case, I'd be interested to see a
> reference to that, but I will still be of the same opinion that
> doing so will result in broken behaviour for a certain class of
> user.

The reference you are looking for is the Unicode Standard itself.  It
says to use the normalization forms; see, for example, section 5.16
there.
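
A small sketch of what the normalization forms give you, using the
functions from lisp/international/ucs-normalize.el (already in Emacs):

  (require 'ucs-normalize)

  ;; NFD decomposes the precomposed character ...
  (ucs-normalize-NFD-string "\u00f1")   ; => "n" followed by U+0303

  ;; ... and NFC composes the two-character sequence back.
  (ucs-normalize-NFC-string "n\u0303")  ; => "\u00f1", i.e. precomposed ñ

  ;; Once both sides are normalized to the same form, they compare equal.
  (string= (ucs-normalize-NFD-string "\u00f1")
           (ucs-normalize-NFD-string "n\u0303"))  ; => t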

> The equivalence tables explain that the precomposed character U+00F1
> is equivalent to the specific sequence U+006E U+0303. That is all
> they say. They do not say that ñ is a variation of n. It's an
> instruction for how to construct a given character.

Every character-folding search implementation decomposes characters
before matching them.  So does Emacs.  We didn't invent this, and we
certainly didn't use the decompositions where they weren't supposed to
be used.  It's not a trick; it's what everyone else does to do the
job.  See the ICU library, for example.
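
To make that concrete, here's a minimal sketch, assuming the
character-fold.el code that is currently in the tree
(character-fold-to-regexp; the exact regexp it builds is an
implementation detail):

  ;; The Unicode character database already records the decomposition:
  (get-char-code-property ?\u00f1 'decomposition)  ; => (110 771), i.e. n + U+0303

  ;; Character folding turns the search string into a regexp that
  ;; matches the character itself, precomposed equivalents such as ñ,
  ;; and the decomposed base + combining mark sequences.
  (re-search-forward (character-fold-to-regexp "n") nil t)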

> The decompositions are used in the normalisation forms to ensure
> that the two variants are treated equally (such as the two
> alternative representations of ñ that we have been discussing).

Yes, and any character-folding search uses normalization forms as
well.

>  Indeed,
>  the locale in which Emacs started says almost nothing about the
>  documents being edited, nor even about the user's preferences: it is
>  easy to imagine a user whose "native" locale is X starting Emacs in
>  another locale.
> 
> Yes. I am fully aware of this. But so be it. Having applications
> work differently depending on the locale of the environment the
> application was started in is nothing new.

It's not new.  It's old.  We should move on to more general
environments that support multiple languages.  Emacs is such an
environment.  The old l10n paradigms are fundamentally incompatible
with that.

>  Being a multi-lingual environment, Emacs has no real notion of the
>  locale.
> 
> Perhaps it should?

That'd be a step backward, IMO.

>  > It is, Unicode provides it. We just didn't import it yet.
>  >
>  > It does? I was looking for such tables, but didn't find them. Do
>  > you have a link?
> 
>  Look for DUCET and its tailoring data. These should be a good
>  starting point:
> 
>  http://www.unicode.org/Public/UCA/latest/
>  http://cldr.unicode.org/
> 
> Those are the decomposition charts, and don't actually say anything
> about equivalence outside of providing a canonical form for
> precomposed characters, as was discussed above.

Strange, I always thought the data was there.  Perhaps you should ask
a question on the Unicode mailing list, then.


