> Floating point values (IEEE 32 and 64-bit) are encoded using fixed-length big-endian encoding (7 bits used to avoid use of reserved bytes like 0xFF):

That paragraph comes from the Smile format spec (a JSON-like binary format).
What could that mean? Is there some standard way to encode IEEE floating-point values (single and double precision) so that every encoded byte stays in the 0-127 range?
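My best guess is that it means splitting the raw IEEE bit pattern into 7-bit groups, big-endian, one group per byte. A minimal sketch of that idea for a 32-bit float (just my guess at the scheme, and the class/method names are mine, not from the spec):

```java
public class SevenBitFloat {

    // Guess: spread the 32-bit IEEE 754 pattern big-endian over five bytes,
    // 7 bits per byte, so every encoded byte stays in the 0x00-0x7F range
    // and reserved values like 0xFF can never appear.
    static byte[] encodeFloat(float value) {
        int bits = Float.floatToRawIntBits(value); // raw single-precision bit pattern
        byte[] out = new byte[5];                  // ceil(32 / 7) = 5 bytes
        for (int i = 4; i >= 0; i--) {
            out[i] = (byte) (bits & 0x7F);         // take the low 7 bits
            bits >>>= 7;                           // move on to the next 7 bits
        }
        return out;
    }

    static float decodeFloat(byte[] in) {
        int bits = 0;
        for (byte b : in) {
            bits = (bits << 7) | (b & 0x7F);       // reassemble 7 bits at a time
        }
        return Float.intBitsToFloat(bits);
    }

    public static void main(String[] args) {
        byte[] encoded = encodeFloat(3.14159f);
        for (byte b : encoded) {
            System.out.printf("%02X ", b);         // prints only values 00..7F
        }
        System.out.println("-> " + decodeFloat(encoded)); // exact round trip
    }
}
```

If that guess is right, a float would cost 5 bytes instead of 4 (and a double 10 instead of 8), with no loss of precision. But I'm not sure that's what the spec intends.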
More generally: I believe that, in the standard binary representation, there is no reserved or prohibited byte value; an IEEE floating-point number can contain any of the 256 possible byte values. Given that, is there any standard binary encoding (or trick) such that certain byte values will never appear (as, say, in the UTF-8 encoding of strings, where some byte values, such as 0xFF, are prohibited)?
(I guess that would imply either losing some precision or using more bytes.)
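One example of the "more bytes" option that I do already know of: Base64-encoding the raw IEEE bytes keeps every output byte within a 64-character ASCII subset, at the cost of growing 8 bytes to 12. (This is only to illustrate the trade-off I mean, not something the Smile spec uses.)

```java
import java.nio.ByteBuffer;
import java.util.Base64;

public class Base64Double {
    public static void main(String[] args) {
        double x = 2.718281828459045;

        // Raw IEEE 754 bytes: any of the 256 possible byte values may occur here.
        byte[] raw = ByteBuffer.allocate(8).putDouble(x).array();

        // Base64 text: only 64 distinct characters (plus '=') can ever appear,
        // but the 8 raw bytes grow to 12 encoded bytes.
        String encoded = Base64.getEncoder().encodeToString(raw);
        System.out.println(encoded);

        // Exact round trip: no precision is lost, only space.
        double y = ByteBuffer.wrap(Base64.getDecoder().decode(encoded)).getDouble();
        System.out.println(x == y);
    }
}
```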