For the sake of simplicity I will use (and ask answers to use) 8-bit floats. Also, ignore the sign bit.
In our Numerical Methods class, we're learning one type of floating-point representation in our theory classes and another in our lab classes. We have different teachers for each, and they do not coordinate the topics covered in successive classes.
In the theory class we were told that floats are represented like this:

0.d_1 d_2 d_3 d_4 * 2^e

where d_1 is always 1. No further conditions/constraints were given. Let's call this A.
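To pin down what I mean, here is a small Python sketch of how I understand a value in A decodes (the function name and the choice of four mantissa digits are just my own assumptions for an 8-bit float):

```python
# Representation A as I understand it: value = (0.d1 d2 d3 d4)_2 * 2**e, with d1 fixed at 1.
def decode_A(digits, e):
    """digits: the four mantissa bits [d1, d2, d3, d4] with d1 == 1; e: integer exponent."""
    assert digits[0] == 1, "d_1 must be 1 in representation A"
    fraction = sum(d * 2.0**-(i + 1) for i, d in enumerate(digits))  # 0.d1d2d3d4 in binary
    return fraction * 2.0**e

print(decode_A([1, 0, 0, 0], -3))  # 0.1_2 * 2^-3 = 2^-4 = 0.0625
```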
In the lab class, we were taught the IEEE-754 format:

1.m_1 m_2 m_3 m_4 * 2^(e - 3)

with a 3-bit exponent field e (bias 3) and a 4-bit mantissa, where the exponent is treated as 1 when the field is 000 (a subnormal, with no implicit leading 1); if the field is 111 and the mantissa is 0000, the value is infinity; and if the field is 111 and the mantissa is non-zero, it is NaN. Let's call this B.
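And here is the same kind of sketch for B, assuming the 8-bit layout I think we're using (3 exponent bits with bias 3, 4 mantissa bits), following the IEEE-754 rules as I understand them:

```python
# IEEE-754-style decoding of an 8-bit float, assuming 3 exponent bits (bias 3)
# and 4 mantissa bits; the sign bit is ignored as stated above.
def decode_B(e_field, m_field):
    """e_field: 3-bit exponent field (0..7); m_field: 4-bit mantissa field (0..15)."""
    bias = 3
    fraction = m_field / 16.0            # m_field read as 0.m1m2m3m4 in binary
    if e_field == 0:                     # subnormal: no implicit 1, exponent treated as 1 - bias
        return fraction * 2.0**(1 - bias)
    if e_field == 0b111:                 # all ones: infinity or NaN
        return float('inf') if m_field == 0 else float('nan')
    return (1 + fraction) * 2.0**(e_field - bias)  # normal: implicit leading 1

print(decode_B(0b001, 0b0000))  # smallest normal:    1.0000 * 2^-2 = 0.25
print(decode_B(0b000, 0b0001))  # smallest subnormal: 0.0001 * 2^-2 = 0.015625 = 2^-6
```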
Here's what I understood when it comes to finding the smallest non-zero number.
In A, e takes its smallest value, e_min = 0 - 3 = -3. So the overall number is 0.1 * 2^-3, which is 2^-4.
But in B, the smallest non-zero normal is 1 * 2^(1-3), which is 2^-2; and the smallest non-zero denormal is 0.0001 * 2^(1-3) = 2^-4 * 2^-2 = 2^-6.
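To double-check my arithmetic, I also brute-forced every bit pattern in both schemes (again, the field widths are my assumptions: 4 mantissa digits in both, e running from -3 to 4 in A, and a 3-bit exponent field with bias 3 in B), and the smallest positive values come out the same way:

```python
# Brute-force the smallest positive value in each scheme under my assumed field widths.
def values_A():
    # A: (0.1 d2 d3 d4)_2 * 2**e with the leading mantissa digit fixed at 1
    return [(8 + m) / 16.0 * 2.0**e for m in range(8) for e in range(-3, 5)]

def values_B():
    vals = []
    for e in range(7):                    # exponent field 0..6 (111 is inf/NaN)
        for m in range(16):
            if e == 0:
                vals.append(m / 16.0 * 2.0**(1 - 3))         # subnormals
            else:
                vals.append((1 + m / 16.0) * 2.0**(e - 3))   # normals
    return vals

print(min(v for v in values_A() if v > 0))  # 0.0625   = 2^-4
print(min(v for v in values_B() if v > 0))  # 0.015625 = 2^-6
```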
They don't match, even though both are supposed to be valid representations. Every other source I can find either follows only the IEEE-754 format, or simply states that a regular number can be represented in different ways by changing the position of the decimal point and the exponent. But none of them tell me how the two are related (for example, this man here from 21:50 onward).
Where am I going wrong? How can I get the same values? How are they related?