2
votes

I need to identify the spectrum emitted by a light source (a DIY spectrometer). To do that, I need to convert each pixel to a wavelength.

To overcome the problem that an RGB value does not correspond to a single wavelength, I will use a prism so that the light is dispersed into its spectrum. This way I receive the spectrum spread out across the image, and it varies only along the X axis.

My question is: how do I convert the pixels to wavelengths and obtain a plot of intensity as a function of wavelength?

example

4
The question is bogus. Not all the "colors" expressed by RGB are monochromatic. E.g. what wavelength is white? – user2271770
In the spectrum range there is no white: yorku.ca/eye/spectrum.gif – Moti S
@MotiS In a continuous spectrum there is no combination of 3 wavelengths (one for red, one for green and one for blue) that, summed up, will give you another single wavelength (unless two out of three intensities are zero). – user2271770
A single light source (spectrometer) and a prism don't produce white light. The prism is used to obtain monochromatic light. – Daniel
The prism is used to get diffraction; the outcome is a light spectrum. – Moti S

4 Answers

3
votes
  1. You have to get wavelength from x position

    You cannot compute wavelength from color, as the two are not the same thing. So first you should calibrate your spectroscope with known wavelengths and from that infer the function:

    wavelength = f(x)
    

    either by a LUT and interpolation, or by an approximating polynomial. For more info see:
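    Both variants can be sketched in a few lines of NumPy. The calibration pairs below (pixel x positions and matching line wavelengths) are made-up placeholders; you must measure your own from a known source:

```python
import numpy as np

# Hypothetical calibration table: pixel columns where known spectral
# (Fraunhofer) lines appear in the image, and their wavelengths [nm].
calib_x = np.array([120.0, 340.0, 560.0, 790.0])   # measured x positions (made up)
calib_wl = np.array([434.0, 486.1, 589.3, 656.3])  # H-gamma, H-beta, Na D, H-alpha

# Option 1: LUT + piecewise-linear interpolation
def wavelength_lut(x):
    return np.interp(x, calib_x, calib_wl)

# Option 2: approximating polynomial (a low degree is usually enough)
coeffs = np.polyfit(calib_x, calib_wl, 2)
def wavelength_poly(x):
    return np.polyval(coeffs, x)
```

    The LUT variant reproduces the calibration points exactly; the polynomial smooths over measurement noise in the marked line positions.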

    You can use sunlight as a reference and calibrate based on the known spectral (Fraunhofer) lines. Here is the first example I found on Google:

    Sun reference

    So take/plot a sunlight shot; here is mine:

    My shot of Sun

    Cross-match the Fraunhofer lines (the darker lines; beware of overexposed images, they can screw things up; also, the intensity here is unweighted R+G+B, since we do not want a human-perception-like conversion) and make a table of known wavelengths vs. x positions in your image. From that, interpolate your wavelength = f(x).

    As you can see, my shot of the Sun's spectrum more or less matches the reference one (discrepancies are due to the grating material, the Bayer filter, camera properties, clouds, atmosphere, etc.). However, the Fraunhofer lines are not easily detectable as local minima, so some GUI-style user assistance may be a better idea to start with.

    But beware that most spectrum images on the web are wrong, nonlinear, or shifted!!! So to be sure, I created a reference spectrum from linearized spectral data like this, and here is the result for 400-700 [nm]:

    real linearized unshifted sunlight spectra

    And here the plot:

    plot for real data

    The gray lines are a grid from 400 to 700 nm in 10 nm steps.

    Here is how your setup should look:

    spectro-meter

    Here is an image from my spectroscope (looking at a white area on my LCD):

    White on my LCD

    I am using a grating made from a DVD, hence the circular arc shapes. Now, if your camera is in a fixed position relative to your prism, then for a selected horizontal line the x position of a pixel directly corresponds to a specific wavelength.

    If you do not see any Fraunhofer lines, you are missing an aperture (slit) before the prism/grating. I usually use two razor blades spaced 0.1 mm apart, set with thin paper. If your image is out of focus, you need to add lens(es) before your camera/sensor and/or add more shielding from outside light.

    As I mentioned before, you cannot get wavelength from color, because there are "infinitely many" combinations of input spectra that create the same RGB response. For example, take white: it can be composed from 3 or more distinct wavelengths, or even from continuous white noise, and from RGB alone you cannot tell which one it is... If you also use the x position in combination with the prism/grating then you can get the wavelength, but it would be much more complicated and less precise than the direct conversion from just the x position...

  2. Compute intensity from RGB

    This may be a bit tricky, as your sensor may have different sensitivity at different wavelengths. You can normalize the intensity similarly to #1: just take a shot of a light source of known intensity and approximate for the missing wavelengths. This can also be done with sunlight as the source.

    From the normalized color you just compute a gray-scale intensity, and that is it.

    To improve accuracy you can average all the pixels with the same x.

    Also, to boost accuracy and sensitivity, a non-color sensor is usually used (mostly linear cameras), either by design or by removing the Bayer filter so it does not mess up the data.
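    A minimal NumPy sketch of the unweighted R+G+B intensity, the per-column averaging, and the response normalization (the image and the flat response below are placeholder data; a real response must be measured from a known source):

```python
import numpy as np

# img: H x W x 3 raw RGB frame (random placeholder data here)
rng = np.random.default_rng(1)
img = rng.random((100, 640, 3))

# Unweighted intensity R+G+B (no perceptual weighting)
intensity = img.sum(axis=2)        # shape H x W

# Average all pixels with the same x to improve accuracy
spectrum = intensity.mean(axis=0)  # shape (W,)

# Normalize by the sensor response measured from a known light source
# (flat placeholder response of ones here)
response = np.ones_like(spectrum)
spectrum_corrected = spectrum / response
```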

  3. Plot the data

    On the x axis is the wavelength and on the y axis the intensity. If you want to apply spectral colors you can use this:

Beware calibration data may change with temperature ...
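The plotting step can be sketched with matplotlib (the wavelength axis and spectrum below are placeholder data standing in for the calibrated f(x) and the measured intensities):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, render straight to a file
import matplotlib.pyplot as plt
import numpy as np

# Placeholder data: wavelength axis from the f(x) calibration
# and a synthetic single-peak spectrum
wavelengths = np.linspace(400.0, 700.0, 640)
intensity = np.exp(-((wavelengths - 550.0) / 60.0) ** 2)

fig, ax = plt.subplots()
ax.plot(wavelengths, intensity)
ax.set_xlabel("wavelength [nm]")
ax.set_ylabel("intensity [a.u.]")
fig.savefig("spectrum.png")
```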

2
votes

Late to the party, but here is an idea for accurately (close to scientific methodology) converting the RGB of a sensor pixel to an intensity value in a wavelength plot.

  1. Get a light source of known wavelength(s).

The narrower the emission bandwidth, the better. Lasers are suitable for this, but also note the power and make sure it does not exceed your image sensor's limits. It is better to calibrate the measurement system with three wavelengths (red, green, blue). Ideally, when using the red laser, read out the raw image and look for any charge accumulation on the green and blue channels of a pixel (since each pixel has a Bayer filter pattern over it). If the accumulation is too high, consider a better-quality image sensor. Then follow the HSV method suggested by Noel Segura Meraz. Use vertical binning for the image you captured: vertical binning is where you simply add up the intensity values of a column in the sensor array. Once you have calibrated the system with these three lasers, mix and match them to verify that your interpolation function works well.
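Vertical binning as described above is a one-liner in NumPy; the tiny 3x4 frame below is made up purely for illustration:

```python
import numpy as np

# Raw sensor readout: 3 rows x 4 columns (made-up frame)
frame = np.arange(12.0).reshape(3, 4)

# Vertical binning: add up the intensity values of each column
binned = frame.sum(axis=0)  # one summed value per column
```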

  2. Get the wavelength efficiency of the pixels in your image sensor

If it is not possible to get this info from the spec sheet of the image sensor, then introduce a thin vertical slit in your optical setup just before the image sensor, thereby selecting only a specific wavelength. Get the vertically binned intensity values for each wavelength to characterize your image sensor. The slit should not be so thin that it introduces diffraction effects. The image sensor has great characteristics if it gives almost the same intensity values for every wavelength. Use this data to scale the wavelength vs. intensity plot obtained from step 1.
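The scaling step amounts to dividing the measurement by the characterized response; every number below is a made-up placeholder for what the slit characterization would produce:

```python
import numpy as np

# Per-wavelength-bin sensor efficiency from the slit characterization (made up)
efficiency = np.array([0.8, 1.0, 0.9, 0.7])

# Vertically binned intensities from the actual measurement (made up)
measured = np.array([40.0, 55.0, 45.0, 28.0])

# Scale the wavelength-vs-intensity plot by the sensor response
corrected = measured / efficiency
```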

  3. Use a broadband input light source before your optical setup to get the best out of your spectrum

Although what you are trying to achieve is totally valid, it is not a highly accurate instrumentation system and is therefore not used by researchers/industry. In a true spectrometer, a diffraction grating is primarily used; the distance between the lines on the grating and the angle of incidence are used to calculate the spread of wavelengths on a CCD sensor (or any linear array of photosensors, for that matter). Generally, this angle is tuned to achieve the required wavelength spread without losing resolution. Here's an example from Andor on designing a system with their products.

1
votes

If I understand correctly what you are trying to achieve, it's doable (kind of), but it will need calibration.

First you want to work in HSV space; you can do this with rgb2hsv.

In HSV space, 'V' or 'value' will give you the intensity of light at a given pixel. This is the value you want to plot in order to get the graph you show. You can either take the average over each column of pixels or just analyze the center row, whichever works better for you.

Now, the interesting part: how to get the x-axis values of your graph. Theoretically speaking, your prism will separate the light into specific wavelengths, and each one will have a unique 'H' or 'hue' value, related by

Hue = (650 - wavelength)*240/(650-475)

more about it here
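Since rgb2hsv suggests MATLAB, here is a rough Python equivalent using the standard colorsys module. hue_to_wavelength is a hypothetical helper that inverts the linear relation above, and it is only meaningful for idealized, fully saturated spectral colors:

```python
import colorsys

def hue_to_wavelength(r, g, b):
    """Rough wavelength [nm] estimate for an ideal, saturated RGB pixel,
    inverting: Hue = (650 - wavelength) * 240 / (650 - 475)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0  # colorsys returns hue as a 0..1 fraction
    return 650.0 - hue_deg * (650.0 - 475.0) / 240.0
```

For example, a pure red pixel (255, 0, 0) has hue 0 and maps to 650 nm, while pure blue maps to about 475 nm, matching the relation's endpoints.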

But this will only work in ideal lighting conditions, and only if your camera is sensitive enough and its CCD has true green, red, and blue, which I don't know how to test. Not to mention that the wavelength you see on your monitor also depends on the calibration of your monitor, so I wouldn't trust it.

You can roughly check how pure and ideal each pixel is by the value of 'S' or 'saturation': the higher, the better.

What I would recommend is to calibrate it by hand: look at your spectrum, mark with a pencil or something the positions of colors whose wavelengths you know, and then use those marks to define the x axis of your graph.


I forgot to mention: you only need to do the calibration once. Once you know which wavelength goes with which hue in your camera, you could do the mapping automatically, or even a scatter(hue_wavelength, value) of all your pixels may work.

0
votes

I would start by trying to reverse this code, which does the exact opposite of what you want. The final part (% LET THE INTENSITY SSS FALL OFF NEAR THE VISION LIMITS) is irrelevant for your case; it attempts to recreate human perception.

If you really want an accurate running system, you will need to establish some kind of calibration process; most cameras are not very accurate, and furthermore their behaviour changes with factors like temperature, so you have to repeat it.

Did you also consider the alternative: using the position to identify the wavelength? With everything set up in a fixed position, you can do the math on where on the surface each wavelength will end up. What remains is to establish a calibration map that associates pixels with wavelengths; some work to do, but with everything in a fixed setup this is a calibration you have to do only once. Another advantage: once you have the scale written on your surface, you have an easy way to document and verify the sensed data.