
[Nmh-workers] hardcoding en_US.UTF-8 in test cases


From: Oliver Kiddle
Subject: [Nmh-workers] hardcoding en_US.UTF-8 in test cases
Date: Thu, 07 Feb 2013 01:30:08 +0100

I noticed that en_US.UTF-8 appears in hardcoded form in the test cases.
Knowing that my system doesn't have it, I tried running the test cases
and, sure enough:

Unable to convert string "→n̈"
test/scan/test-scan-multibyte: 59: test: Illegal number: 
test/scan/test-scan-multibyte: 63: test: Illegal number: 
Unsupported width for UTF-8 test string: 

A quick hack to use en_GB.UTF-8 fixes it. I've also got fi_FI.UTF-8 on
my system and that worked too. One of my more up-to-date Linux boxes
also has the very sensible C.UTF-8.

For whatever reason, pick/test-pick seems to be fine regardless.

It'd perhaps be good to be a bit more intelligent so we either get:
SKIP: test/scan/test-scan-multibyte: cannot find UTF-8 locale
or
PASS: test/scan/test-scan-multibyte:

I'm not sure how to make that as portable as possible, but as a start
we could perhaps try the existing LANG and LC_* values, then the output
of locale -a (| sed 's/utf8/UTF-8/') or, if there is no locale command,
the contents of /usr/lib/locale, and perhaps fall back to plain
guessing. It seems getcwidth can be used to test the candidates out. It
might be wise to give preference to C.UTF-8 and then en_.*.
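To make the probing order concrete, here is a rough sketch in plain sh
(the function name find_utf8_locale is mine, not anything in nmh; the
preference order C.UTF-8, then en_*, then anything else is just the
suggestion above):

```shell
#!/bin/sh
# Hypothetical sketch: probe for a usable UTF-8 locale instead of
# hardcoding en_US.UTF-8.
find_utf8_locale() {
    # Candidates: existing LANG/LC_* values first, then locale -a,
    # normalizing the "utf8" spelling to "UTF-8".
    {
        printf '%s\n' "$LC_ALL" "$LANG" "$LC_CTYPE"
        locale -a 2>/dev/null
    } | sed 's/utf8$/UTF-8/' | grep '\.UTF-8$' | {
        # Prefer C.UTF-8, then the first en_*, then anything else.
        other='' c_utf8='' en_utf8=''
        while read -r loc; do
            case $loc in
                C.UTF-8)  c_utf8=$loc ;;
                en_*)     en_utf8=${en_utf8:-$loc} ;;
                *)        other=${other:-$loc} ;;
            esac
        done
        printf '%s\n' "${c_utf8:-${en_utf8:-$other}}"
    }
}

loc=$(find_utf8_locale)
if [ -n "$loc" ]; then
    echo "using UTF-8 locale: $loc"
else
    echo "SKIP: cannot find UTF-8 locale"
fi
```

A test script could export LC_ALL="$loc" when the probe succeeds, and
emit the SKIP line otherwise; whatever locale it picks would still want
validating with getcwidth before use, since locale -a can list locales
that aren't actually installed correctly.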

Oliver



