
bug#31679: 26.1; detect-coding-string does not detect UTF-16


From: Eli Zaretskii
Subject: bug#31679: 26.1; detect-coding-string does not detect UTF-16
Date: Sat, 02 Jun 2018 17:24:19 +0300

> From: Benjamin Riefenstahl <b.riefenstahl@turtle-trading.net>
> Cc: 31679@debbugs.gnu.org
> Date: Sat, 02 Jun 2018 15:55:49 +0200
> 
> > First, you should lose the trailing null (or add one more), since
> > UTF-16 strings must, by definition, have an even number of bytes.
> 
> Actually this string *has* 8 bytes; the last '\0' completes the 'l' to
> form the two-byte character.

Oops.  I guess I modified the string while playing with the example
and ended up with one more null.
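For reference, the 8-byte string does decode as "html" once UTF-16LE is
requested explicitly; it is only the automatic detection that never
offers a UTF-16 variant for it, which is what this bug is about:

  (decode-coding-string "h\0t\0m\0l\0" 'utf-16le)  ; => "html"
  (detect-coding-string "h\0t\0m\0l\0")            ; no utf-16 variant in the result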

> > Why?  Because it is perfectly valid for a plain-ASCII string to include
> > null bytes, so Emacs prefers to guess ASCII.
> 
> While NUL is a valid ASCII character according to the standard,
> practically nobody uses it as a character.  So for a heuristic in this
> context, it would be a bad decision to treat it just as another
> character.

That's because you _know_ this is supposed to be human-readable text,
made of non-null characters.  But Emacs doesn't.

> And indeed NUL bytes are treated as a strong indication of binary data,
> it seems.  I tried to debug this.  The C routine detect_coding_utf_16
> tries to distinguish between binary and UTF-16, but it is not called for
> the string above.  That routine is called, OTOH, when I add a non-ASCII
> character as in "h\0t\0m\0l\0ü\0", but even then it decides that the
> string is not UTF-16 (?).

Don't forget that decoding is supposed to be fast, because it's
something Emacs does each time it visits a file or accepts input from
a subprocess.  So it tries not to go through all the possible
encodings, but instead bails out as soon as it thinks it has found a
good guess.

> > Moral: detecting an encoding in Emacs is based on heuristic
> > _guesswork_, which is heavily biased toward what are deemed the most
> > frequent use cases.  And UTF-16 is quite infrequent, at least on Posix
> > hosts.
> >
> > IOW, detecting the encoding in Emacs is not as reliable as you seem to
> > expect.  If you _know_ the text is in UTF-16, just tell Emacs to use
> > that, don't let it guess.
> 
> My use-case is that I am trying to paste types other than UTF8_STRING
> from the X11 clipboard, and have them handled as automatically as
> possible.  While official clipboard types probably have a documented
> encoding (and I have code for those), applications like Firefox also put
> private formats there.  And Firefox seems to like UTF-16; even the
> text/html format it puts there is UTF-16.

If you have a specific application in mind, you could always write
some simple Lisp code to check whether UTF-16 should be tried, then
tell Emacs to use it explicitly.
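Something like the sketch below, for example.  It is only a rough
illustration: the function name, the 64-byte prefix length, and the
assumption that the selection data arrives as a unibyte string of raw
bytes are all mine, not anything Emacs provides.  It tries UTF-16 when
the data starts with a BOM or when every second byte of a short prefix
is null, and otherwise falls back to the normal detection:

  (require 'cl-lib)

  (defun my-decode-selection (data)
    "Decode raw selection DATA (a unibyte string), trying UTF-16 when likely."
    (let* ((prefix (substring data 0 (min 64 (length data))))
           (len (length prefix))
           (coding
            (cond
             ;; A leading byte-order mark: let `utf-16' pick the endianness.
             ((and (>= len 2)
                   (memq (aref prefix 0) '(#xff #xfe))
                   (memq (aref prefix 1) '(#xff #xfe))
                   (/= (aref prefix 0) (aref prefix 1)))
              'utf-16)
             ;; No BOM, but every odd-indexed byte is null: assume UTF-16LE.
             ((and (> len 1)
                   (cl-loop for i from 1 below len by 2
                            always (zerop (aref prefix i))))
              'utf-16le)
             (t 'undecided))))
      (decode-coding-string data coding)))

The point is just to special-case the UTF-16 guess before handing the
data to decode-coding-string; anything that doesn't match still goes
through the usual 'undecided detection.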

> I have tried to debug the C routines that implement this (see above), but the
> code is somewhat hairy.  I guess I'll have another look to see if I can
> understand it better.

We could add code to detect_coding_system that looks at a short prefix
of the text, checks whether there is a null byte for each non-null
byte, and tries UTF-16 if so.  That is, assuming we want to improve
the chances of detecting UTF-16 at the price of a small performance
penalty.
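In Lisp terms, the proposed check would amount to something like the
predicate below; this is only a model of the idea (the real change
would live in the C-level detector, and the name is made up):

  (defun utf-16-candidate-p (prefix)
    "Non-nil if unibyte string PREFIX has a null byte for each non-null byte."
    (let ((nulls 0)
          (len (length prefix)))
      (dotimes (i len)
        (when (zerop (aref prefix i))
          (setq nulls (1+ nulls))))
      (and (> len 0) (>= (* 2 nulls) len))))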

Thanks.