As far as I know, it is possible to use a Uint8Array as a texture in WebGL. But how can a large Float32Array or Float64Array be passed to the shader efficiently? The float values are not in the (0.0, 1.0) range and may be negative.
I know some devices do not support high-precision floats, and it is not a problem if some precision is lost in the process.
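For context, one common workaround when float textures are unavailable is to normalize the data on the CPU into a Uint8Array with a scale and offset, upload it as a regular gl.UNSIGNED_BYTE texture, and undo the normalization in the shader via uniforms. A minimal sketch of the CPU-side encoding (the helper name `encodeToBytes` is my own; this gives roughly 8 bits of precision per value):

```javascript
// Pack arbitrary-range floats into a Uint8Array plus a scale/offset,
// so the data can be uploaded as a normal gl.UNSIGNED_BYTE texture.
function encodeToBytes(floats) {
  let min = Infinity, max = -Infinity;
  for (const v of floats) {
    if (v < min) min = v;
    if (v > max) max = v;
  }
  const range = max - min || 1; // avoid division by zero for constant data
  const bytes = new Uint8Array(floats.length);
  for (let i = 0; i < floats.length; i++) {
    bytes[i] = Math.round(((floats[i] - min) / range) * 255);
  }
  // In the shader, recover the value with the same two numbers,
  // passed in as uniforms:
  //   float value = texture2D(u_data, uv).r * u_scale + u_offset;
  return { bytes, offset: min, scale: range };
}

const { bytes, offset, scale } = encodeToBytes(new Float32Array([-2.0, 0.0, 3.0]));
```

Here `-2.0` maps to byte 0, `0.0` to 102, and `3.0` to 255, with `offset = -2` and `scale = 5` reversing the mapping on the GPU. For more precision, a single float can instead be spread across the four RGBA bytes of one texel.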