
Re: Finding and mapping all UTF-8 characters

From: harven
Subject: Re: Finding and mapping all UTF-8 characters
Date: Sat, 05 Dec 2009 21:29:47 +0100
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.1 (darwin)

deech <address@hidden> writes:

> Hi all,
> I recently cut-and-pasted large chunks of text into an HTML document.
> When I tried to save the document I was warned that it was ISO-Latin
> but there were UTF-8 characters in the text.

The warning actually contains a list of these characters, and you can click
on them to see where they are located in the buffer.

> Is there a way to (1) search for the UTF-8 encoded characters in a
> document and (2) map them to a sensible ASCII character?
> Thanks ...
> -deech

Instead of converting to latin-1, it is probably better to save the file
in another coding system. Just do
M-x set-buffer-file-coding-system RET utf-8 RET
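
The same thing can be done from Lisp — a small sketch; the buffer name
"page.html" is just an assumed example:

```elisp
;; Mark the visited file to be saved as UTF-8, then save it.
;; "page.html" is a hypothetical buffer name.
(with-current-buffer "page.html"
  (set-buffer-file-coding-system 'utf-8)
  (save-buffer))
```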

On the other hand, if the unicode characters came as a surprise, there
are probably only a few of them, and replacing them one by one is feasible.
Have a look at the iso-cvt.el package, which sets up conversion tables
between ISO 8859-1 characters and various ASCII escapes.
The command iso-iso2sgml, which replaces accented characters with their
SGML/HTML entities, is pretty close to what you want.
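
For instance, to convert the whole buffer in one go — a sketch using
iso-cvt's region commands:

```elisp
(require 'iso-cvt)
;; Translate ISO 8859-1 characters in the region to SGML entities,
;; e.g. an accented e becomes &eacute;
(iso-iso2sgml (point-min) (point-max))
```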

Now, if you want to search a buffer for all characters belonging to 
some category, you can use a regexp. 

\ca matches any ASCII character (newlines excluded). Same as [[:ascii:]].
\Ca matches any non-ASCII character (newlines included).
\cl matches any Latin character (newlines excluded).
\Cl matches any non-Latin character (newlines included).
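
For example, to count the offending characters in a buffer — a small
sketch using \Ca:

```elisp
;; Count the characters matched by \Ca, i.e. all non-ASCII characters.
(save-excursion
  (goto-char (point-min))
  (let ((n 0))
    (while (re-search-forward "\\Ca" nil t)
      (setq n (1+ n)))
    (message "%d non-ASCII characters" n)))
```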

So the following command copies all non-Latin characters to the scratch buffer.
M-x replace-regexp RET \Cl RET \,(princ \& (get-buffer "*scratch*"))
The \, construct replaces each match with the value of the Lisp
expression; princ prints the character to *scratch* and returns it,
so the current buffer is left unchanged.
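
The same thing, written as plain Lisp instead of an interactive
replacement — a sketch that also leaves the buffer unchanged:

```elisp
;; Print every non-Latin character to the *scratch* buffer,
;; without modifying the current buffer.
(save-excursion
  (goto-char (point-min))
  (while (re-search-forward "\\Cl" nil t)
    (princ (match-string 0) (get-buffer "*scratch*"))))
```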
