emacs-devel

From: Richard Stallman
Subject: Re: setenv -> locale-coding-system cannot handle ASCII?!
Date: Mon, 03 Mar 2003 13:59:13 -0500

    It seems to me that `efficiency' should _never_ be a
    reason to use a unibyte buffer, because the emacs primitives should
    take care of it automatically -- that is, a buffer/string should have
    an associated `unibyte encoding' attribute, which would allow it to
    be encoded using the straightforward and efficient `unibyte
    representation' but appear to lisp/whoever as being a multibyte
    buffer/string (all of whose characters happen to have the same
    charset).

This is more or less what a unibyte buffer is now, except that there
is only one possibility for which character sets can be stored in it:
it holds the character codes from 0 to 0377.
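In recent Emacs versions this distinction is directly observable from Lisp; a minimal probe, assuming an interactive session (function names here are the real Emacs primitives, but exact results depend on your session's settings):

```elisp
;; A unibyte string holds raw byte values 0..0377 (octal), with no
;; character-set interpretation attached.
;; `string-as-unibyte' reinterprets a string's bytes without conversion,
;; and `multibyte-string-p' reports which representation a string uses.
(multibyte-string-p (string-as-unibyte "café"))  ; nil: now plain bytes
(string-bytes (string-as-unibyte "café"))        ; byte length, not character count
```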

If we wanted to hide from the user the distinction between unibyte and
multibyte buffers, we would have to change the buffer's representation
automatically when inserting characters that don't fit unibyte.  That
seems like a bad idea.

The advantage of unibyte mode for some European Latin-N users is that
they don't have to deal with encoding and decoding, so they never have
to specify a coding system.  It is possible that today we could get
the same results using multibyte buffers and forcing use of a specific
Latin-N coding system.  People could try experimenting with this and
seeing if it provides results that are just like what European users
now get with unibyte mode.
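The experiment suggested above could be sketched as an init-file fragment, assuming a Latin-1 user (this is an illustrative configuration, not a tested recipe from the thread):

```elisp
;; Force multibyte buffers everywhere, but pin all I/O to iso-latin-1,
;; so the user never has to name a coding system explicitly --
;; approximating what unibyte mode gives Latin-1 users today.
(setq-default enable-multibyte-characters t)
(prefer-coding-system 'iso-latin-1)
(set-terminal-coding-system 'iso-latin-1)
(set-keyboard-coding-system 'iso-latin-1)
```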

As for the idea that efficiency should never be a factor in deciding
what to do here, I am skeptical of that.



