I thought that float precision is 7 decimal digits in base 10 (counting digits in both the integer and the fractional part), i.e. the 7 digits I can type in a floating-point literal in my code editor. To me that means that if two numbers differ anywhere within their first 7 significant digits, they must always compare as different; conversely, if they agree in the first 7 significant digits, any difference beyond the 7th digit should be invisible, and they should compare as equal.
But as the examples below show, two numbers that agree in their first 7 significant digits sometimes compare as different and sometimes as equal!
1) Where am I wrong?
2) What is the pattern and principle in the examples below? Why are pairs with the same 7-digit prefix sometimes treated as different and other times treated as equal?
float f01 = 90.000_001f;
float f02 = 90.000_002f; // f01 == f02 is TRUE  (the result I expected)

float f03 = 90.000_001f;
float f04 = 90.000_003f; // f03 == f04 is TRUE  (the result I expected)

float f1 = 90.000_001f;
float f2 = 90.000_004f; // f1 == f2 is FALSE (NOT the result I expected)

float f3 = 90.000_002f;
float f4 = 90.000_009f; // f3 == f4 is FALSE (NOT the result I expected)

float f5 = 90.000_009f;
float f6 = 90.000_000f; // f5 == f6 is FALSE (NOT the result I expected)

float f7 = 90.000_001f;
float f8 = 90.000_009f; // f7 == f8 is FALSE (NOT the result I expected)
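To see what is actually stored, here is a small diagnostic snippet I put together (the class name FloatSpacing is my own; Math.ulp and the BigDecimal(double) constructor are standard JDK APIs, used here to print the spacing between adjacent floats near 90 and the exact binary value each literal is rounded to):

import java.math.BigDecimal;

public class FloatSpacing {
    public static void main(String[] args) {
        // Gap between adjacent float values just above 90 (2^-17 ≈ 0.0000076).
        System.out.println("ulp(90f) = " + Math.ulp(90f));

        float[] literals = {
            90.000_000f, 90.000_001f, 90.000_002f,
            90.000_003f, 90.000_004f, 90.000_009f
        };
        for (float v : literals) {
            // BigDecimal(double) shows the exact value the literal was
            // rounded to, without any re-rounding back to decimal.
            System.out.println(v + "f is stored as " + new BigDecimal(v));
        }
    }
}

Running this prints the same stored value (exactly 90) for 90.000_000f through 90.000_003f, while 90.000_004f and 90.000_009f both print the next representable float above 90, which at least matches the TRUE/FALSE pattern above.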