
I don't know how to start the project to implement this:

  1. A matrix of 16-bit integers is loaded into GPU memory (this is a greyscale radiology image).
  2. A second matrix of 16-bit integers is computed from the first by applying a function (for example, a contrast-enhancement algorithm).
  3. A region of interest of the second matrix is converted to an RGB image for on-screen display.

I can do steps one and two, but I'm stuck on step 3! I've implemented all of this on the CPU, so this is not a matter of handling greyscale or RGB images, nor of creating bitmaps for display. I've also implemented the first two steps in OpenCL, then read the resulting matrix back into CPU memory for RGB bitmap conversion and display. But this is of course slow because of moving data back and forth between CPU and GPU memory (the images are really big: more than 100 megapixels).

Any help is appreciated. I'm programming with Delphi 10, but sample code in C/C++ is OK. I have VC2010 and have successfully rebuilt the NVidia OpenCL oclNbody sample application.

What have you tried already that didn't work? You can create OpenCL images for the grayscale source and the RGB or RGBA result and pass them to the computation kernel(s). – Dithermaster
For step 3, I have not created anything yet since I have no idea how to do it; that's what I'm actually asking. I don't know how to get an image created in OpenCL (clCreateImage) displayed on screen in a window created with the Windows API CreateWindow. – fpiette
I have found some source code quite close to what I need (amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/12/… chapter E.1.1, creating a CL context from a GL context using the Win32 API). Unfortunately the call to clGetGLContextInfoKHR generates an access violation, which is caused by the platform variable passed to the function through the properties argument. I ported this source code to Delphi, and this may be the culprit. If I had the complete sample program in C, I could compare that C version with my Delphi version at runtime and understand what is wrong. – fpiette
The access violation was caused by an improper call to clGetGLContextInfoKHR: when OpenCL 1.2 is used (my case), the function is different. Its address must be obtained using clGetExtensionFunctionAddressForPlatform instead of clGetExtensionFunctionAddress. That version is suitable for the given platform and doesn't cause any access violation. I am progressing... – fpiette
See any of the vendor examples on OpenCL/OpenGL interop for the best way to get OpenCL images on the display. If you want to use GDI instead, then clEnqueueReadImage it back to CPU memory and display it using GDI (it won't be as fast as OpenGL but might be sufficient for your needs). – Dithermaster

1 Answer


The solution to your problem heavily depends on what you want to render. Since you are doing medical image processing, you probably need to display high-quality images.

Based on your description, the ROI resolution is much bigger than that of the display. I don't know much about the quality of OpenGL's built-in image scaling filters, so I would first test whether OpenGL is an option at all before writing the complex OpenCL-OpenGL interop code. In any case, I would prefer a scaling filter that I can customize.

So, the most flexible option is to create two OpenCL buffers whose size matches the display resolution. Then implement OpenCL kernels which do the grayscale-to-RGB conversion and the downsampling.
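The per-pixel logic of such a grayscale-to-RGB kernel can be sketched in plain C. This is only an illustration of the arithmetic the kernel would perform; the window/level (center/width) parameters are hypothetical examples of the windowing transfer function commonly applied in radiology viewers before display:

```c
#include <stdint.h>
#include <stddef.h>

/* Map 16-bit grayscale samples to 8-bit RGB using a window/level
 * (center/width) transfer function. In the real pipeline this loop body
 * would live inside the OpenCL kernel; the parameters here are examples. */
static void gray16_to_rgb8(const uint16_t *src, uint8_t *dst, size_t n,
                           int center, int width)
{
    int lo = center - width / 2;
    for (size_t i = 0; i < n; ++i) {
        int v = ((int)src[i] - lo) * 255 / width;
        if (v < 0)   v = 0;      /* clip below the window */
        if (v > 255) v = 255;    /* clip above the window */
        dst[3*i + 0] = (uint8_t)v;  /* R */
        dst[3*i + 1] = (uint8_t)v;  /* G */
        dst[3*i + 2] = (uint8_t)v;  /* B */
    }
}
```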

Pseudo-code is as follows:

// Original grayscale image;
cl_mem orig_image = clCreateBuffer(sizeof(uint16_t)*orig_width*orig_height);

// Grayscale ROI, downscaled to display resolution, still 16 bits;
cl_mem disp_grayscale = clCreateBuffer(sizeof(uint16_t)*disp_width*disp_height);

// RGB image to display;
cl_mem disp_rgb = clCreateBuffer(sizeof(uint8_t)*3*disp_width*disp_height);

clEnqueueNDRangeKernel(DownscaleFilter, orig_image, disp_grayscale);
clEnqueueNDRangeKernel(GrayscaleToRgb, disp_grayscale, disp_rgb);

uint8_t *pixels = clEnqueueMapBuffer(disp_rgb);
// Deal with it as with a usual image;

clEnqueueUnmapMemObject(disp_rgb);
// pixels are no longer available;
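Since you mentioned displaying through the Windows API, note that "dealing with it as a usual image" for GDI means repacking the mapped RGB pixels into the layout a 24-bpp DIB expects: BGR channel order, rows padded to a multiple of 4 bytes, and stored bottom-up. A sketch of that CPU-side repacking (the StretchDIBits/BITMAPINFO calls themselves are omitted):

```c
#include <stdint.h>
#include <stddef.h>

/* Bytes per DIB row: width * 3, rounded up to a multiple of 4. */
static size_t dib_stride(int width)
{
    return ((size_t)width * 3 + 3) & ~(size_t)3;
}

/* Repack tightly packed top-down RGB into a bottom-up, padded BGR buffer
 * suitable for a 24-bpp Windows DIB. */
static void rgb_to_dib24(const uint8_t *rgb, uint8_t *dib,
                         int width, int height)
{
    size_t stride = dib_stride(width);
    for (int y = 0; y < height; ++y) {
        const uint8_t *src = rgb + (size_t)y * width * 3;
        uint8_t *row = dib + (size_t)(height - 1 - y) * stride; /* bottom-up */
        for (int x = 0; x < width; ++x) {
            row[3*x + 0] = src[3*x + 2];  /* B */
            row[3*x + 1] = src[3*x + 1];  /* G */
            row[3*x + 2] = src[3*x + 0];  /* R */
        }
    }
}
```

Alternatively, a top-down DIB (negative biHeight) avoids the row reversal, at the cost of being slightly less conventional.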

IMO, the most complex problem here is developing a high-quality downscaling filter. I've tested the 8-tap downscaling filter from the new HEVC video coding standard, and it was pretty good.
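To make the filter suggestion concrete: to my recollection the HEVC half-sample luma interpolation filter uses the taps {-1, 4, -11, 40, 40, -11, 4, -1} with a normalization of 64. A 1-D decimate-by-two pass using those taps as a low-pass filter might look like the sketch below (a 2-D downscale would run it horizontally, then vertically; edge handling here is simple index clamping):

```c
#include <stdint.h>
#include <stddef.h>

/* HEVC half-sample luma taps; they sum to 64, so >>6 normalizes. */
static const int kTaps[8] = { -1, 4, -11, 40, 40, -11, 4, -1 };

/* Low-pass filter the source with the 8-tap kernel, then keep every
 * other sample. Out-of-range indices are clamped to the signal edges. */
static void downscale2_1d(const uint16_t *src, size_t n, uint16_t *dst)
{
    for (size_t i = 0; i + 1 < n; i += 2) {
        int acc = 0;
        for (int t = 0; t < 8; ++t) {
            ptrdiff_t k = (ptrdiff_t)i + t - 3;   /* taps straddle i and i+1 */
            if (k < 0) k = 0;
            if (k >= (ptrdiff_t)n) k = (ptrdiff_t)n - 1;
            acc += kTaps[t] * src[k];
        }
        acc = (acc + 32) >> 6;                    /* divide by 64, rounded */
        if (acc < 0) acc = 0;
        if (acc > 65535) acc = 65535;
        dst[i / 2] = (uint16_t)acc;
    }
}
```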