I am trying to do transfer learning by re-training InceptionV3 on medical images: grayscale 3D brain PET scans.
I have two challenges: converting my grayscale data to RGB, and formatting my 3D input data for the Inception architecture.
I solved the first challenge by stacking the image into 3 channels (feeding the same image to all three channels of the network).
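For reference, this is roughly what I did (a minimal sketch, assuming the scan is already loaded as a NumPy array; `volume` is just my placeholder name):

```python
import numpy as np

# A single grayscale PET scan with shape (79, 95, 79)
volume = np.random.rand(79, 95, 79).astype(np.float32)  # placeholder data

# Repeat the grayscale intensities along a new channel axis, so all
# 3 channels hold the same image -> shape (79, 95, 79, 3)
volume_rgb = np.repeat(volume[..., np.newaxis], 3, axis=-1)
```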
The second challenge is still a problem: the network accepts only 2D images. My current images have dimensions 79 x 95 x 79 x 3, whereas the network happily accepts 79 x 95 x 3 images.
What would be a good way to solve this? Is it possible to feed the 3D images to the network, or do they have to be converted to 2D, and if so, how?
In one research paper, a grid method was used: 8 2D slices were extracted from each 3D image and tiled into a single grid image for classification. Would this be the only way to convert from 3D to 2D, or are there alternatives?
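To make the grid idea concrete, here is my rough sketch of what I imagine that preprocessing would look like; the slicing axis, the evenly spaced slice indices, and the 2 x 4 layout are my own assumptions, not details from the paper:

```python
import numpy as np

def volume_to_grid(volume, n_slices=8, rows=2, cols=4):
    """Extract n_slices evenly spaced slices from a 3D volume
    and tile them into a single 2D grid image."""
    depth = volume.shape[2]  # assuming we slice along the third axis
    # Pick evenly spaced slice indices across the volume
    idx = np.linspace(0, depth - 1, n_slices).astype(int)
    slices = [volume[:, :, i] for i in idx]  # each slice is 79 x 95
    # Tile the slices into a rows x cols grid
    grid_rows = [np.hstack(slices[r * cols:(r + 1) * cols]) for r in range(rows)]
    grid = np.vstack(grid_rows)
    # Replicate to 3 channels, as before, so Inception accepts it
    return np.repeat(grid[..., np.newaxis], 3, axis=-1)

grid_img = volume_to_grid(np.random.rand(79, 95, 79))
print(grid_img.shape)  # (158, 380, 3)
```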