1 vote

Excuse me if you find this silly. I am not able to understand the purpose of BCD and its use in programming.

What I understand so far about BCD is that it is an internal way of representing data in a byte or a nibble. In a simple C program, we declare variables as int, float, etc., and manipulate them as we like. Internally they may be represented as a binary number or as BCD.

Modern calculators use the BCD format. In Intel chips, the FBLD instruction is used to move a packed BCD value into an FPU register.

I have seen a few exercises in assembly language programming that convert BCD to binary and the other way round.

How does it help a programmer to know that?

What is the usefulness of the knowledge of BCD to a high-level language programmer?


3 Answers

1 vote

BCD does exactly what it is named for: Binary Coded Decimal. In essence, this means that instead of storing a hexadecimal digit in every 4 bits, we store only a decimal digit (0–9), wasting the remaining values.

The point of doing this is mostly arithmetic that needs to be exact (in the sense of least rounding error) in the decimal system, but not necessarily in the hexadecimal (or binary) system, when looking at digits after the decimal point.

In former times this was important in e.g. accounting software, but it has become a non-issue as natural word lengths have grown larger. Today's solutions typically use integer arithmetic in 1/10th or 1/100th of a cent.

Another former use was to facilitate easier interfacing to 7-segment LED displays: numbers encoded in BCD could be displayed nibble by nibble, while a binary representation needs division and modulo operations.

I am sure that in today's world you will encounter BCD at the bit level only in very specialised circumstances.

1 vote

There is no BCD format in C, although you can store BCD values in normal binary variables and print them out in hex.

In the old times BCD was mostly used to store decimal values that were infrequently involved in calculations (especially in accounting software, like Eugen Rieck said). In that case the cost of converting between decimal and binary on input and output could outweigh the cost of the simple calculations involved, because divisions were slow or there was no hardware divider at all. Some old architectures also had instructions for BCD arithmetic, so BCD could be used to improve performance.

But that's almost a non-issue nowadays, since most modern architectures have dropped support for BCD maths, and doing it in larger registers (> 8 bit) may result in worse performance.

0 votes

Intel chips do not use BCD representation internally. They use binary representation, including 2's complement for negative integers. However, they have certain instructions like AAA, AAS, AAM, AAD, DAA, DAS which are used to convert the binary results of addition, subtraction, multiplication and division on unsigned integer values into unpacked/packed BCD results. Therefore, Intel chips can produce BCD results for unsigned integers INDIRECTLY. These instructions use the implied operand located in the AL register and place the result of the conversion in the AL register. There are also advanced BCD-handling instructions to move an 80-bit packed signed BCD value from memory into the FPU register, where it is converted automatically into binary form, processed, and converted back into the BCD format.