8 votes

As far as I know, UTF-8 is a variable-length encoding, i.e. a character can be represented as 1 byte, 2 bytes, 3 bytes, or 4 bytes.

For example, the Unicode character U+00A9 = 10101001 (the copyright sign ©) is encoded in UTF-8 as

11000010 10101001, i.e. 0xC2 0xA9

The prefix 110 in the first byte indicates that the character is stored in two bytes (because I count two ones before the zero in the prefix 110).

Each of the following bytes starts with the prefix 10.

A 4-byte UTF-8 encoding would look like

11110xxx 10xxxxxx 10xxxxxx 10xxxxxx

The prefix 11110 (four ones followed by a zero) indicates four bytes, and so on.
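To check this myself, here is a minimal C sketch (my own illustration) that assembles the two bytes of U+00A9 from these prefixes:

    #include <stdio.h>

    /* Assemble the two-byte UTF-8 encoding of U+00A9 by hand.
       Only valid for code points in U+0080..U+07FF: 110xxxxx holds
       the upper 5 of the 11 payload bits, 10xxxxxx the lower 6. */
    int main(void)
    {
        unsigned int cp = 0x00A9;               /* 10101001 */
        unsigned char b1 = 0xC0 | (cp >> 6);    /* 11000010 = 0xC2 */
        unsigned char b2 = 0x80 | (cp & 0x3F);  /* 10101001 = 0xA9 */
        printf("0x%02X 0x%02X\n", b1, b2);      /* prints: 0xC2 0xA9 */
        return 0;
    }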

Now my question:

Why is the prefix 10 used in the following bytes? What is the advantage of such a prefix? Without the 10 prefix in the following bytes I could use 3*2 = 6 more bits (2 bits in each of the three following bytes) if I wrote:

11110000 xxxxxxxx xxxxxxxx xxxxxxxx


3 Answers

6 votes

Historically, there were several proposals for UTF-8's encoding. One of them used no prefix in the following bytes at all, and another, named FSS-UTF, used the prefix 1. The encoding that was finally chosen uses the prefix 10 in the following bytes:

Number    First       Last
of bytes  code point  code point
1         U+0000      U+007F       0xxxxxxx
2         U+0080      U+07FF       110xxxxx 10xxxxxx
3         U+0800      U+FFFF       1110xxxx 10xxxxxx 10xxxxxx
4         U+10000     U+1FFFFF     11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
5         U+200000    U+3FFFFFF    111110xx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx
6         U+4000000   U+7FFFFFFF   1111110x 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx 10xxxxxx

The reason for choosing this encoding is described in the history section of Wikipedia's UTF-8 article:

A modification by Ken Thompson of the Plan 9 operating system group at Bell Labs made it somewhat less bit-efficient than the previous proposal but crucially allowed it to be self-synchronizing, letting a reader start anywhere and immediately detect byte sequence boundaries.

https://en.wikipedia.org/wiki/UTF-8#History

The most obvious advantage of the new encoding is self-synchronization, as others have mentioned. It allows the reader to find character boundaries easily, so a dropped byte can be skipped quickly, and the current/previous/next character can be found immediately given any byte index into the string. If the indexed byte starts with 10, it is just a middle byte: go backwards or forwards until you reach a byte starting with 0 or 11, which is the start of a sequence.
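As a sketch of how cheap that is in practice (the function name utf8_char_start is my own, and valid UTF-8 input is assumed):

    #include <stddef.h>

    /* Find the first byte of the UTF-8 sequence containing s[i]
       by skipping backwards over 10xxxxxx continuation bytes.
       Assumes s points to valid UTF-8. */
    size_t utf8_char_start(const unsigned char *s, size_t i)
    {
        while (i > 0 && (s[i] & 0xC0) == 0x80)  /* 10xxxxxx? */
            i--;
        return i;
    }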

That property is very important, because in a badly designed encoding without self-synchronization, like Shift-JIS, the reader either has to maintain a table of character offsets or reparse the array from the beginning when editing a string. In Japanese DOS/V (which uses Shift-JIS), probably due to the limited amount of memory, no such table was kept, so every time you pressed Backspace the OS had to re-scan from the start of the line to find out which character had been deleted. There is no way to get the length of the previous character, as there is in UTF-8.

The prefixed nature of UTF-8 also allows the old C string search functions to work without any modification, because a search string's byte sequence can never appear in the middle of another valid UTF-8 byte sequence. In Shift-JIS or other non-self-synchronizing encodings you need a specialized search function, because a start byte can also be a middle byte of another character.
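For instance, plain strstr finds a multi-byte character in a UTF-8 string with no special handling (a minimal sketch with made-up sample strings):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* "café au lait" in UTF-8; é (U+00E9) is 0xC3 0xA9 */
        const char *text   = "caf\xC3\xA9 au lait";
        const char *needle = "\xC3\xA9";

        /* strstr needs no modification: the needle's bytes can
           never match in the middle of another valid sequence. */
        const char *hit = strstr(text, needle);
        if (hit)
            printf("found at byte offset %td\n", hit - text);
        return 0;
    }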

Some of the above advantages are also shared by UTF-16:

Since the ranges for the high surrogates (0xD800–0xDBFF), low surrogates (0xDC00–0xDFFF), and valid BMP characters (0x0000–0xD7FF, 0xE000–0xFFFF) are disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent code units to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units (i.e. the type of code unit can be determined by the ranges of values in which it falls). UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string (UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte).

https://en.wikipedia.org/wiki/UTF-16#Description
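The quoted range test translates directly into code (a sketch; the names are mine):

    #include <stdint.h>

    typedef enum { BMP_UNIT, HIGH_SURROGATE, LOW_SURROGATE } Utf16Kind;

    /* Classify a UTF-16 code unit purely by its value range,
       without looking at neighbouring units. */
    Utf16Kind utf16_kind(uint16_t u)
    {
        if (u >= 0xD800 && u <= 0xDBFF) return HIGH_SURROGATE;
        if (u >= 0xDC00 && u <= 0xDFFF) return LOW_SURROGATE;
        return BMP_UNIT;  /* 0x0000-0xD7FF or 0xE000-0xFFFF */
    }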

5 votes

All follow-up bytes of multi-byte characters start with binary 10 to indicate that they are follow-up bytes.

This allows re-synchronization if parts of a transmission are broken or missing. For example, if the first byte of a multi-byte sequence is missing, you can still figure out where the next character starts.
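A sketch of such resynchronization (utf8_resync is a hypothetical helper name):

    #include <stddef.h>

    /* After corruption, skip forward to the next byte that can
       start a sequence (0xxxxxxx or 11xxxxxx). */
    size_t utf8_resync(const unsigned char *buf, size_t len)
    {
        size_t i = 0;
        while (i < len && (buf[i] & 0xC0) == 0x80)  /* continuation */
            i++;
        return i;  /* index of the next possible character start */
    }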

If the follow-up bytes could take any value, there would be no way to distinguish follow-up bytes from single-byte encoded characters.

1 vote

I’m not sure whether Ken Thompson has publicly stated his reasons, but there is a straightforward explanation.

UTF-8 was designed for backward compatibility with ASCII. Therefore, all single-byte UTF-8 characters start with 0.

It could have been designed to be as compact as possible, that is, with 10xxxxxx as the prefix for two-byte sequences and all eight bits of each continuation byte available for the code point. However, Unicode will officially never need all the code points UTF-8 can already represent, and if disk space for text files matters, the user will compress them.

It was therefore a higher-priority design goal to make UTF-8 as easy as possible to detect algorithmically, so that as many applications as possible could support it transparently. Very few documents in any other encoding will happen to look like valid UTF-8 by accident (but see "Bush hid the facts"). However, this could not be allowed to slow down decoding too much.

Therefore, continuation bytes have a prefix distinct from those of initial bytes. These prefixes sit in the high-order bits, so discriminating between them is simple on any processor. The choice of prefix also follows a simple logical sequence: a single leading 1 (10) denotes a continuation byte, two leading 1s (110) the start of a two-byte sequence, three (1110) a three-byte sequence, and four (11110) a four-byte sequence, while no leading 1 at all (0) denotes a byte with no continuation bytes. If it should ever become necessary to extend UTF-8, it would be trivial to continue this pattern.
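That pattern makes the decoder's first step a handful of mask tests (a sketch; the helper name is mine):

    #include <stdio.h>

    /* Length of the sequence a byte starts, derived from its
       leading 1 bits; 0 means "continuation byte". */
    int utf8_sequence_length(unsigned char b)
    {
        if ((b & 0x80) == 0x00) return 1;   /* 0xxxxxxx: ASCII        */
        if ((b & 0xC0) == 0x80) return 0;   /* 10xxxxxx: continuation */
        if ((b & 0xE0) == 0xC0) return 2;   /* 110xxxxx               */
        if ((b & 0xF0) == 0xE0) return 3;   /* 1110xxxx               */
        if ((b & 0xF8) == 0xF0) return 4;   /* 11110xxx               */
        return -1;                          /* invalid lead byte      */
    }

    int main(void)
    {
        printf("%d\n", utf8_sequence_length(0xC2)); /* 2 */
        printf("%d\n", utf8_sequence_length(0xA9)); /* 0 */
        return 0;
    }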