12
votes

I've been reading up on Unicode and UTF-8 encoding for a while and I think I understand it, so hopefully this won't be a stupid question:

I have a file which contains some CJK characters, and which has been saved as UTF-8. I have various Asian language packs installed and the characters are rendered properly by other applications, so I know that much works.

In my Java app, I read the file as follows:

// Create objects
FileInputStream fis = new FileInputStream(new File("xyz.sgf"));
InputStreamReader is = new InputStreamReader(fis, Charset.forName("UTF-8"));
BufferedReader br = new BufferedReader(is);

// Read and display file contents
StringBuffer sb = new StringBuffer();
String line;
while ((line = br.readLine()) != null) {
    sb.append(line);
}
System.out.println(sb);

The output shows the CJK characters as '???'. A call to is.getEncoding() confirms that it is definitely using UTF-8. What step am I missing to make the characters appear properly? If it makes a difference, I'm looking at the output using the Eclipse console.

4
Which IDE (NetBeans, Eclipse, etc.) are you using? – Abdelwahed
I tried it with Arabic characters before and had the same issue. But when I placed a breakpoint and checked the string, I saw it displayed correctly. I printed it to a file and it was OK. – Abdelwahed
Thanks for confirming. Further testing has confirmed it's just my Eclipse config that's the issue. – Twicetimes

4 Answers

17
votes
System.out.println(sb);

The problem is the above line. This will encode character data using the default system encoding and emit the data to STDOUT. On many systems, this is a lossy process.
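
To see why this is lossy, here is a minimal sketch (using US-ASCII purely for illustration, not the OP's actual default encoding) showing how unmappable characters are silently replaced with '?':

import java.nio.charset.StandardCharsets;

class LossyEncodingDemo {
    public static void main(String[] args) {
        // Encode CJK text with a charset that cannot represent it
        byte[] bytes = "漢字".getBytes(StandardCharsets.US_ASCII);

        // The unmappable characters were replaced with '?', so this prints "??"
        System.out.println(new String(bytes, StandardCharsets.US_ASCII));
    }
}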

If you change the defaults, the encoding used by System.out and the encoding used by the console must match.

The only supported mechanism to change the default system encoding is via the operating system. (Some will advise using the file.encoding system property, but this is not supported and may have unintended side-effects.) You can use System.setOut to install your own custom PrintStream:

PrintStream stdout = new PrintStream(System.out, autoFlush, encoding);
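
For example, a minimal sketch (assuming the console itself is configured to decode UTF-8) that installs a UTF-8 PrintStream as standard out might look like this:

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

class Utf8StdOut {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // Wrap STDOUT in a PrintStream that encodes as UTF-8 (autoFlush = true)
        PrintStream out = new PrintStream(System.out, true, "UTF-8");
        System.setOut(out);

        // This only renders correctly if the console decodes UTF-8 as well
        System.out.println("漢字 가다 こんにちは");
    }
}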

You can change the Eclipse console encoding via the Run configuration (Run > Run Configurations… > Common tab > Encoding).

You can find a number of posts about the subject on my blog - via my profile.

5
votes

The following program prints CJK characters to the console using TextPad. To see the Korean Hangul and Japanese Hiragana, I had to tell Java to change the print stream's encoding to EUC_KR and to set these properties of TextPad's tool output window:

  • font is Arial Unicode MS
  • script is Hangul

import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

class Hangul {

    public static void main(String[] args)  throws Exception {

        // Change console encoding to Korean

        PrintStream out = new PrintStream(System.out, true, "EUC_KR");
        System.setOut(out);

        // Print sample to console

        String go_hello  = "가다 こんにちは";
        System.out.println(go_hello);
    }
}

Tool Output is:

가다 こんにちは

4
votes

Yeah, you need to change the encoding of the Eclipse console, as explained in this how-to-display-chinese-character-in-eclipse-console article.

2
votes

Depending on your platform, it is highly likely that your console (or Windows CMD) does not support or use the UTF-8 character set, and therefore converts all unmappable characters to a question mark.

On Windows, for example, CMD almost always uses Windows-1252 or a similar single-byte character set.
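
As a quick sanity check, a small sketch like the one below prints the JVM's default charset, which is what System.out falls back to when nothing else is specified:

import java.nio.charset.Charset;

class DefaultEncodingCheck {
    public static void main(String[] args) {
        // The charset Java uses when no encoding is given explicitly
        System.out.println("Default charset: " + Charset.defaultCharset());
        System.out.println("file.encoding:   " + System.getProperty("file.encoding"));
    }
}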