1 vote

I've got a data set of 7 million positive float values ranging from 10^(-7) to 10^(0). I would like to convert the float values into RGB values in a "linear way". Until now I used [255 - const*|log(float value)|]/255 and it worked, but I would like to know whether there is a way to convert the values without using the logarithm, and (I hope so) whether there is an OpenGL library that does this while taking advantage of the GPU.
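For reference, the log-based mapping described above can be sketched like this. This is only an illustration of the formula in the question; the function name and the choice of const = 255/7 (which spreads |log10(v)| over [0, 7] for v in [10^-7, 1]) are assumptions, not from the post:

```cpp
#include <cmath>

// Hypothetical sketch of the log mapping from the question: for v in
// [1e-7, 1], |log10(v)| lies in [0, 7], so const = 255/7 spreads the
// result over the full byte range. Names are illustrative.
unsigned char log_to_byte(float v) {
    const float c = 255.0f / 7.0f;
    float mapped = 255.0f - c * std::fabs(std::log10(v));
    if (mapped < 0.0f)   mapped = 0.0f;   // clamp out-of-range inputs
    if (mapped > 255.0f) mapped = 255.0f;
    return (unsigned char)(mapped + 0.5f); // round to nearest byte
}
```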

EDIT1: the values actually range from 10^(-7) to 10^(1). I would like the conversion to be as fast as possible because I need a quick program (that's why I would like to use the GPU).

EDIT2: isn't there an OpenGL function that scales your values given a max and a min? I've got a program with a user interface and I would like to change the "window of values" while the program is running. I would like to have something like this: Computerized tomography
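The CT-style "windowing" asked about here is just a clamped linear map. A minimal CPU sketch follows; in OpenGL the same arithmetic would typically sit in a fragment shader, with the window bounds passed as uniforms so they can be changed at runtime. The names wmin/wmax are illustrative, not from the post:

```cpp
// CT-style "windowing": values below wmin map to 0, values above wmax
// map to 255, and values in between map linearly. In OpenGL the same
// arithmetic would live in a fragment shader with wmin/wmax as
// uniforms, so the user can adjust the window while the program runs.
unsigned char window_to_byte(float v, float wmin, float wmax) {
    if (wmax <= wmin) return 0;            // degenerate window
    float t = (v - wmin) / (wmax - wmin);  // normalize to [0, 1]
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return (unsigned char)(t * 255.0f + 0.5f);
}
```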

EDIT3: could this help me? http://glm.g-truc.net/0.9.3/api/a00157.html/

EDIT4: I would like to get something like the Jet colormap in Matlab: http://blogs.mathworks.com/images/loren/73/colormapManip_14.png
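A Jet-like colormap can be built from a normalized value with a few piecewise-linear ramps. The sketch below is a common rough approximation of MATLAB's Jet (blue → cyan → yellow → red), not the exact MATLAB lookup table:

```cpp
#include <cmath>

// Clamp a value to [0, 1].
static float clamp01(float x) {
    return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x);
}

// Rough piecewise-linear approximation of MATLAB's Jet colormap.
// t must already be normalized to [0, 1]; outputs are in [0, 1].
void jet(float t, float* r, float* g, float* b) {
    *r = clamp01(1.5f - std::fabs(4.0f * t - 3.0f));
    *g = clamp01(1.5f - std::fabs(4.0f * t - 2.0f));
    *b = clamp01(1.5f - std::fabs(4.0f * t - 1.0f));
}
```

The same three lines port directly to a GLSL fragment shader, which is one way to get the GPU to do the colormap lookup.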

Your values are in the interval [0.0, 1.0]. Is it enough to multiply them by 255 and round to integer? – Miguel Muñoz
Sorry, I edited my post. – nicogno
What do you mean by "linear way"? Your current formula has a log in it; do you want to get the same output with a different formula? – Karsten Koop
Your values are now in the range 0 to 10? Then scale down by 10 to give values between 0 and 1 and multiply by 255 (as Miguel Muñoz states). No need for any log/exp operation unless you are applying a gamma. – lfgtm
10^(1) == 10, 10^(-7) == 0.0000001 ~= 0. – lfgtm

2 Answers

2 votes

I guess it depends on where the values actually are. If you're generating them inside OpenGL, you could perform the conversion in OpenGL, which would be fast, and then read the data back. If the data is in CPU memory, moving it to and from GPU memory will also take some time.

For this kind of loop, if you compile with optimization (-O3) the compiler will normally vectorize the code (see What is "vectorization"?). This works with GCC, MSVC and other compilers; check your compiler's documentation for its specific vectorization capabilities.
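The conversion loop in question has exactly the shape auto-vectorizers like: no branches and no dependencies between iterations. A sketch (function and parameter names are illustrative):

```cpp
// A loop in this shape is a good candidate for auto-vectorization:
// no branches, no data dependencies between iterations.
// Compile with e.g. g++ -O3 (add -fopt-info-vec on GCC to see a
// report of which loops were vectorized).
void scale_to_bytes(const float* in, unsigned char* out, int n,
                    float min, float max) {
    float inv = 255.0f / (max - min);  // hoist the division out of the loop
    for (int i = 0; i < n; i++)
        out[i] = (unsigned char)((in[i] - min) * inv);
}
```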

1 vote

Since an unsigned char ranges from 0 to 255, you could do:

float max = max_value_from_range;
float min = min_value_from_range;
if ( max == min ) return; // avoid dividing by zero
unsigned char out[RANGE_LEN];
for ( int i = 0; i < RANGE_LEN; i++ )
    out[i] = (unsigned char)( 255 * ( FLOAT_INPUT[i] - min ) / ( max - min ) );
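The idea above can be wrapped into a self-contained function that also derives min and max from the data itself. This is a sketch with assumed names (normalize_to_bytes is not from the answer):

```cpp
#include <vector>

// Linearly map a float data set onto [0, 255], using the data's own
// min and max as the range. Returns all zeros if the input is empty
// or constant (to avoid dividing by zero).
std::vector<unsigned char> normalize_to_bytes(const std::vector<float>& in) {
    std::vector<unsigned char> out(in.size());
    if (in.empty()) return out;
    float min = in[0], max = in[0];
    for (float v : in) {               // one pass to find the range
        if (v < min) min = v;
        if (v > max) max = v;
    }
    if (max == min) return out;        // avoid dividing by zero
    for (std::size_t i = 0; i < in.size(); i++)
        out[i] = (unsigned char)(255.0f * (in[i] - min) / (max - min));
    return out;
}
```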