18 votes

Why not 4 bits, or 16 bits?

I assume there are some hardware-related reasons, and I'd like to know how the 8-bit byte became the standard.

Related on retrocomputing.SE: What was the rationale behind 32-bit computer architectures? / Last computer not to use octets / 8-bit bytes / Did any computer use a 7-bit byte? / What was the rationale behind 36 bit computer architectures? For some of those, packing characters into words (i.e. how to store strings) is a factor, e.g. 3x 6-bit characters in a machine with 18-bit words. - Peter Cordes
Also related: What is the hex versus octal timeline? - octal was useful on machines where the word size was a multiple of 3, like 18 or 36 bits. - Peter Cordes
Stack Overflow has a history tag, but the usage guidance is "don't use it: history of programming or computing questions are off topic". It's too late to migrate this now, but similar questions belong on retrocomputing.stackexchange.com. - Peter Cordes

1 Answer

19 votes

It's been a minute since I took computer organization, but the Wikipedia article on 'Byte' gives some context.

The byte was originally the smallest number of bits that could hold a single character (I assume standard ASCII). We still use the ASCII character set, so 8 bits per character is still relevant. This sentence, for instance, is 41 bytes. That's easily countable and practical for our purposes.
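You can verify that count with a quick sketch in Python (assuming the text is plain ASCII, so each character encodes to exactly one byte):

    # Each ASCII character fits in one 8-bit byte, so byte count equals character count.
    sentence = "This sentence, for instance, is 41 bytes."
    encoded = sentence.encode("ascii")  # would raise UnicodeEncodeError for non-ASCII input
    print(len(sentence), len(encoded))  # 41 41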

If we had only 4 bits, there would be only 16 (2^4) possible characters, unless we used 2 bytes to represent a single character, which is computationally less efficient. If a byte had 16 bits, it could represent 65,536 (2^16) possible characters, but that would leave a whole lot of 'dead space' in every byte, since our character set is much smaller, and computers would run less efficiently when performing byte-level instructions.
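To make the arithmetic concrete, here is a small illustrative snippet (Python; the bit widths are just the hypothetical byte sizes discussed above):

    # Number of distinct values each hypothetical byte width could encode.
    for bits in (4, 8, 16):
        print(f"{bits:2d} bits -> {2 ** bits:5d} possible values")
    #  4 bits ->    16 possible values
    #  8 bits ->   256 possible values
    # 16 bits -> 65536 possible values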

Additionally, a byte can be split into 2 nibbles. Each nibble is 4 bits, which is the smallest number of bits that can encode any single decimal digit from 0 to 9 (10 different digits).
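That is essentially the idea behind packed binary-coded decimal (BCD), where each decimal digit occupies one nibble, so two digits fit in one byte. A rough sketch (the helper names here are just for illustration):

    def pack_bcd(tens, ones):
        # Pack two decimal digits (0-9 each) into one byte: high nibble = tens, low nibble = ones.
        assert 0 <= tens <= 9 and 0 <= ones <= 9
        return (tens << 4) | ones

    def unpack_bcd(byte):
        # Split a packed-BCD byte back into its two decimal digits.
        return (byte >> 4) & 0xF, byte & 0xF

    b = pack_bcd(4, 2)
    print(hex(b), unpack_bcd(b))  # 0x42 (4, 2)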