From: Anthony Liguori
Subject: Re: [Qemu-devel] KVM call minutes for Feb 15
Date: Thu, 17 Feb 2011 07:10:44 -0600
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101027 Lightning/1.0b1 Thunderbird/3.0.10
On 02/17/2011 06:23 AM, Avi Kivity wrote:
> On 02/17/2011 02:12 PM, Anthony Liguori wrote:
>>> (btw what happens in a non-UTF-8 locale?  I guess we should just
>>> reject unencodable strings).
>>
>> While QEMU is mostly ASCII internally, for the purposes of the JSON
>> parser, we always encode and decode UTF-8.  We reject invalid UTF-8
>> sequences.  But since JSON is string-encoded Unicode, we can always
>> decode a JSON string to valid UTF-8 as long as the string is well
>> formed.
>
> That is wrong.  If the user passes a Unicode filename, it is expected
> to be translated to the current locale encoding for the purpose of,
> say, filename lookup.
QEMU does not support anything but UTF-8.  That's pretty common with Unix software.  I don't think any modern Unix platform actually uses UCS-2 or UTF-16; it's either ASCII or UTF-8.
The only place it even matters is Windows, and Windows has ASCII and UTF-16 versions of its APIs.  So on Windows, non-ASCII characters won't be handled correctly (yet another of the many issues with Windows support in QEMU).  UTF-8 is self-recovering, though, so it degrades gracefully.
Regards, Anthony Liguori