I have several columns of data that I plan to use to train an ANN regression model. Most of these columns have values ranging from 0 to 10,000.00, but one specific column always has values in the [0, 1] range with a precision of up to 10 decimal places, e.g. 0.1582639672. Normally I would use the MinMaxScaler class from sklearn.preprocessing to normalize all the values in my dataset to the [0, 1] range, but I am concerned about possible precision loss when applying normalization to this specific column (a minimal sketch of the setup is below).
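For illustration, here is a minimal sketch of what I mean; the column names and sample values are made up:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Hypothetical data: most columns span roughly [0, 10000],
# while one column is already in [0, 1] with up to 10 decimal places.
df = pd.DataFrame({
    "feature_a": [12.5, 9876.43, 10000.0, 0.0],
    "feature_b": [3.14, 250.0, 9999.99, 42.0],
    "high_precision": [0.1582639672, 0.0000000001, 0.9999999999, 0.5],
})

scaler = MinMaxScaler()  # default feature_range is (0, 1)
scaled = scaler.fit_transform(df)

# Compare the original high-precision column with its scaled version
# to see whether any of the 10 decimal places are lost.
print(df["high_precision"].values)
print(scaled[:, 2])
```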
Would normalizing float values with 10 decimal digits of precision cause data loss by producing 'further normalized' values that exceed the maximum precision the float type can faithfully represent?