I've just updated from ruby 1.9.2 to ruby 1.9.3p0 (2011-10-30 revision 33570). My rails application uses postgresql as its database backend. The system locale is UTF8, as is the database encoding. The default encoding of the rails application is also UTF8. I have Chinese users who input Chinese characters as well as English characters. The strings are stored as UTF8 encoded strings.
Rails version: 3.0.9
Since the update some of the existing Chinese strings in the database are no longer displayed correctly. This does not affect all strings, but only those that are part of a serialized hash. All other strings that are stored as plain strings still appear to be correct.
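For context, the affected attributes are declared with ActiveRecord's serialize, roughly like this (the model name Order and the column name details are made up for illustration):

class Order < ActiveRecord::Base
  # The column is a text field holding a YAML dump; ActiveRecord
  # deserializes it on read and re-serializes it on save.
  serialize :details
end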
Example:
This is a serialized hash that is stored as a UTF8 string in the database:
broken = "--- !map:ActiveSupport::HashWithIndifferentAccess \ncheckbox: \"1\"\nchoice: \"Round Paper Clips \\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n\"\ninfo: \"10\\xE7\\x9B\\x92\"\n"
In order to convert this string to a ruby hash, I deserialize it with YAML.load:
broken_hash = YAML.load(broken)
This returns a hash with garbled contents:
{"checkbox"=>"1", "choice"=>"Round Paper Clips ï¼\u0088å\u009B\u009Eå½¢é\u0092\u0088ï¼\u0089\r\n", "info"=>"10ç\u009B\u0092"}
The garbled stuff is supposed to be UTF8-encoded Chinese. broken_hash['info'].encoding tells me that ruby thinks this is #<Encoding:UTF-8>. I disagree.
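This is roughly how the mismatch shows up when the string is inspected in irb (the exact bytes depend on the data, and the valid_encoding? result is my assumption about what happens here):

broken_hash['info'].encoding         # => #<Encoding:UTF-8>
broken_hash['info'].valid_encoding?  # => presumably true: the bytes are valid UTF-8,
                                     #    they just spell out the wrong characters
broken_hash['info'].bytes.to_a       # shows the raw bytes that are actually stored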
Interestingly, all other strings that were not serialized still look fine. In the same record a different field contains Chinese characters that display correctly in the rails console, the psql console, and the browser. Every string saved to the database since the update, whether part of a serialized hash or a plain string, looks fine, too.
Despite ruby's claim that the string is already UTF-8, I tried to convert the garbled text from a possibly wrong encoding (like GB2312 or ANSI) to UTF-8, and of course I failed. This is the code I used:
require 'iconv'
Iconv.conv('UTF-8', 'GB2312', broken_hash['info'])
This fails because ruby doesn't know what to do with illegal sequences in the string.
I really just want to run a script to fix all the old, presumably broken serialized hash strings and be done with it. Is there a way to convert these broken strings to something resembling Chinese again?
I just played with the escaped UTF-8 byte sequences inside the raw serialized string (called "broken" in the example above). This is the Chinese string that is encoded in it:
chinese = "\\xEF\\xBC\\x88\\xE5\\x9B\\x9E\\xE5\\xBD\\xA2\\xE9\\x92\\x88\\xEF\\xBC\\x89\\r\\n"
I noticed that it is easy to convert this to a real UTF-8 encoded string by unescaping it (removing the escape backslashes).
chinese_ok = "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89\r\n"
This returns a proper UTF-8-encoded Chinese string: "（回形针）\r\n"
The thing falls apart only when I use YAML.load(...) to convert the string to a ruby hash. Maybe I should process the raw string before it is fed to YAML.load. It just makes me wonder why this is so...
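A rough sketch of what such pre-processing could look like, decoding the escaped byte sequences by hand (purely illustrative; it only handles the \xNN, \r and \n escapes that appear above):

decoded = chinese.dup.force_encoding('ASCII-8BIT')
decoded = decoded.gsub(/\\x([0-9A-Fa-f]{2})/) { $1.to_i(16).chr }  # literal "\xEF" -> real byte 0xEF
decoded = decoded.gsub('\r', "\r").gsub('\n', "\n")                # literal \r, \n -> real CR, LF
decoded.force_encoding('UTF-8')                                    # => "（回形针）\r\n"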
Interesting! This is likely due to the YAML engine "psych" that's used by default now in 1.9.3. I switched to the "syck" engine with YAML::ENGINE.yamler = 'syck' and the broken strings are correctly parsed.
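Building on that, a one-off repair script could load the old records with syck and write them back with psych. A rough sketch, again assuming a model named Order with serialize :details (names made up), and assuming that reading the attribute deserializes with the current YAML engine while saving re-serializes with it:

require 'yaml'

Order.find_each do |order|
  YAML::ENGINE.yamler = 'syck'     # parse the old dump the way 1.9.2 did
  hash = order.details

  YAML::ENGINE.yamler = 'psych'    # re-dump with the 1.9.3 default engine
  order.details_will_change!       # force the column to be written even if the
                                   # deserialized value compares equal
  order.details = hash
  order.save!
end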
Comments:

What happens if you force the encoding to binary? That should get the string out as "8bit ASCII" (i.e. raw bytes) and maybe that will kick YAML.load into shape. As a quick test you can broken.force_encoding('binary') before YAML.load(broken). – mu is too short

Try Iconv.conv('UTF-8', 'ISO-8859-1', "\xEF\xBC\x88\xE5\x9B\x9E\xE5\xBD\xA2\xE9\x92\x88\xEF\xBC\x89") inside irb. The strings claim to be UTF-8 but I think they've been mangled into Latin-1. – mu is too short
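For reference, the quick test from the first comment would look like this in irb (whether it actually helps depends on the YAML engine, as noted above):

broken.force_encoding('binary')   # 'binary' is an alias for ASCII-8BIT: treat the YAML as raw bytes
broken_hash = YAML.load(broken)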