
RE: [Solved] RE: Differences between identical strings in Emacs lisp

From: Jürgen Hartmann
Subject: RE: [Solved] RE: Differences between identical strings in Emacs lisp
Date: Thu, 9 Apr 2015 12:38:43 +0200

Thank you for the clarification, Stefan Monnier:

>>>> the use cases you tried -- Emacs will sometimes silently convert
>>>> unibyte characters to their locale-dependent multibyte equivalents.
> Nowadays this should happen extremely rarely, or never.
>>> On which occasion such a conversion is done?
>> One example that comes to mind is (insert 160), i.e. when inserting
>> text into a buffer.
> This doesn't do any conversion (although it did, in Emacs<23).
> 160 is simply taken as the code of the corresponding character in
> Emacs's character space (which is basically Unicode), hence regardless
> of locale.
> If this `insert' is performed inside a unibyte buffer, then this 160 is
> instead taken to be the code of a byte.  Again, regardless of the locale.

So this is comparable to the output of \xA0 in a unibyte string
(e.g. in "\xA0\ A") in contrast to the same in a multibyte string (e.g. in
"\xA0 Ä"): the former yields the raw byte \240, the latter a no-break space.
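Both cases can be checked interactively; a small sketch (evaluate the forms one by one, e.g. in *scratch*):

```elisp
;; In a multibyte buffer, 160 is read as the Unicode code point
;; U+00A0 (NO-BREAK SPACE):
(with-temp-buffer
  (insert 160)
  (char-after (point-min)))               ; => 160, the character U+00A0

;; In a unibyte buffer, the same call inserts the raw byte #xA0:
(with-temp-buffer
  (set-buffer-multibyte nil)
  (insert 160)
  (char-after (point-min)))               ; => 160, but read as a byte

;; The same split shows up in string literals: the Ä forces the first
;; string to be multibyte, so \xA0 becomes the character U+00A0, while
;; the second string stays unibyte and \xA0 is a raw byte.
(multibyte-string-p "\xA0 Ä")             ; => t
(multibyte-string-p "\xA0 A")             ; => nil
```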

> AFAIR, the only "dwimish" conversion that still takes place on occasion
> is between things like #x3FFFBA and #xBA (i.e. between a byte and
> a character representing that same byte).

(*Broad grin*) I think I will appoint this one as my favorite trap. (See my
previous post.)
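That pair can be produced explicitly with the conversion functions, and the remaining dwimish conversion shows up when such an eight-bit character meets a unibyte buffer:

```elisp
;; #x3FFFBA is the "eight-bit" character representing raw byte #xBA:
(unibyte-char-to-multibyte #xBA)          ; => 4194234 (#x3FFFBA)
(multibyte-char-to-unibyte #x3FFFBA)      ; => 186 (#xBA)

;; Inserting the eight-bit character into a unibyte buffer silently
;; stores it back as the single byte #xBA:
(with-temp-buffer
  (set-buffer-multibyte nil)
  (insert #x3FFFBA)
  (char-after (point-min)))               ; => 186
```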

>>> It seems that all my related observations that puzzled me before can be well
>>> explained by the strict distinction between characters and raw bytes and the
>>> mapping between the latter's integer representations in the range
>>> [0x80..0xFF] in a unibyte context and in the range [0x3FFF80..0x3FFFFF]
>>> in a multibyte context.
>> Pretty much, yes.
> Yes, distinguishing bytes (and byte strings/buffers) from chars (and
> char strings/buffers) is key.  Sadly, Emacs doesn't make it easy because
> the terms used evolved from a time where byte=char and where people were
> focused too much on the underlying/internal representation (hence the
> terms "multibyte" vs "unibyte"), plus the fact that too much code relied
> on byte=char to be able to make a clean design.  So when Emacs-20
> appeared, it included all kinds of dwimish (and locale-dependent)
> conversions to try and accommodate incorrect byte=char assumptions.
> Over time, the design has been significantly cleaned up, but the
> terminology is still problematic.
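Coming back to the byte range mapping above, it can be verified directly at the string level:

```elisp
;; Bytes #x80..#xFF in a unibyte string correspond to the eight-bit
;; characters #x3FFF80..#x3FFFFF in a multibyte string, and back:
(aref (string-to-multibyte (unibyte-string #x80)) 0)  ; => 4194176 (#x3FFF80)
(aref (string-to-unibyte (string #x3FFFFF)) 0)        ; => 255 (#xFF)
```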

I could imagine that the step from the equivalence char=byte to
char=Unicode code point (a longer integer) is not so difficult. But in
addition we have the UTF-8 representation. To which of the latter two does
the term "multibyte" refer: the Unicode code point (an integer several bytes
long) or its UTF-8 representation (a sequence of several bytes)?
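For what it's worth, the two candidates can be told apart in a live session: a multibyte string stores characters as code points, and the UTF-8 byte sequence only appears after an explicit encoding step:

```elisp
;; The character itself is a single integer, its code point:
?Ä                                         ; => 196 (U+00C4)
(length "Ä")                               ; => 1

;; Its UTF-8 form is a separate, unibyte sequence of raw bytes,
;; produced only by explicit encoding:
(encode-coding-string "Ä" 'utf-8)          ; => "\303\204"
(length (encode-coding-string "Ä" 'utf-8)) ; => 2
(multibyte-string-p (encode-coding-string "Ä" 'utf-8)) ; => nil
```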

Thank you for the insight into the historical background.


