I am trying to use a compute shader for image processing. Being new to Vulkan, I have some (possibly naive) questions:

  1. I want to look at the neighborhood of a pixel. AFAIK I have two possibilities:

    a, Pass one image to the compute shader and sample the neighborhood pixels directly (x +/- i, y +/- j)

    b, Pass multiple images to the compute shader (each being offset) and sample only the current position (x, y)

    Is there any difference in sampling performance between a and b (aside from b needing far more memory to be passed to the GPU)?

  2. I need to pass pixel information (plus meta info) from one pipeline stage to another (and read it back out once the command is done).

    a, Can I do this in any other way than passing an image with the storage bit set?

    b, When reading the information back on the host, do I need to use a framebuffer?


1 Answer


Using a single image and sampling at offsets (perhaps with textureGather) is going to be more efficient, probably by a lot. Each texturing operation has a cost, and this approach uses fewer of them. More importantly, the texture cache on GPUs generally loads a small region around your sample point, so sampling the adjacent pixels is likely to hit in the cache.
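For illustration, here is a minimal GLSL sketch of option a, a 3x3 box filter read directly from one sampled image. The binding numbers, image formats, and the 16x16 workgroup size are assumptions, and handling of reads past the image border is omitted for brevity.

```glsl
#version 450
layout(local_size_x = 16, local_size_y = 16) in;

// Hypothetical bindings: one sampled input image, one storage output image.
layout(binding = 0) uniform sampler2D inputImage;
layout(binding = 1, rgba8) uniform writeonly image2D outputImage;

void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);

    // Option a: fetch the 3x3 neighborhood of p directly from the single input image.
    // (textureGather can fetch one channel of a 2x2 footprint in a single operation.)
    vec4 sum = vec4(0.0);
    for (int j = -1; j <= 1; ++j)
        for (int i = -1; i <= 1; ++i)
            sum += texelFetch(inputImage, p + ivec2(i, j), 0);

    imageStore(outputImage, p, sum / 9.0);
}
```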

Even better would be to load all the pixels into shared memory once and then work from there. Instead of pixel (i, j) being fetched by thread (i, j) and again by each of that thread's eight neighbors, it is fetched only once. You still need a few extra fetches for the apron around the edge of the region handled by a single workgroup. (For what it's worth, this technique is not Vulkan-specific: you'll see it used in CUDA, OpenCL, D3D Compute, and GL Compute too.)
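A minimal sketch of that shared-memory variant, under the same assumptions as above (16x16 workgroup, 3x3 filter, hypothetical bindings; the input is read as a storage image here for simplicity): each workgroup cooperatively loads an 18x18 tile, including the one-pixel apron, and then every thread reads its neighborhood from shared memory only.

```glsl
#version 450
layout(local_size_x = 16, local_size_y = 16) in;

layout(binding = 0, rgba8) uniform readonly  image2D inputImage;
layout(binding = 1, rgba8) uniform writeonly image2D outputImage;

// 16x16 workgroup plus a one-pixel apron on every side for a 3x3 filter.
shared vec4 tile[18][18];

void main() {
    ivec2 groupOrigin = ivec2(gl_WorkGroupID.xy) * 16 - 1; // top-left of the padded tile
    ivec2 local       = ivec2(gl_LocalInvocationID.xy);
    ivec2 maxCoord    = imageSize(inputImage) - 1;

    // Cooperative load: 256 threads fill the 18*18 = 324 shared texels.
    for (uint idx = gl_LocalInvocationIndex; idx < 324u; idx += 256u) {
        ivec2 t = ivec2(int(idx) % 18, int(idx) / 18);
        tile[t.y][t.x] = imageLoad(inputImage, clamp(groupOrigin + t, ivec2(0), maxCoord));
    }
    barrier(); // wait until the whole tile is in shared memory

    // This thread's pixel sits at local + 1 in tile coordinates; its 3x3
    // neighborhood now comes from shared memory with no further image reads.
    vec4 sum = vec4(0.0);
    for (int j = 0; j <= 2; ++j)
        for (int i = 0; i <= 2; ++i)
            sum += tile[local.y + j][local.x + i];

    imageStore(outputImage, ivec2(gl_GlobalInvocationID.xy), sum / 9.0);
}
```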

The only way to persist data out of a compute shader is to write it to a storage buffer or a storage image. To read that back on the CPU, use vkCmdCopyImageToBuffer or vkCmdCopyBuffer to copy into a host-visible resource, and then map its memory; no framebuffer is involved.
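A rough host-side sketch of that readback path (not the only way to structure it): record the copy into a host-visible buffer, submit, wait, then map. All handles here are assumed to be created elsewhere, and the barrier that makes the compute write visible and transitions the image to VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL is omitted.

```c
#include <vulkan/vulkan.h>

/* Sketch only: the command buffer, storage image, and a host-visible,
 * host-coherent readback buffer/memory pair are assumed to exist already,
 * and the image is assumed to be in VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL. */
static void record_readback(VkCommandBuffer cmd, VkImage storageImage,
                            VkBuffer readbackBuffer,
                            uint32_t width, uint32_t height)
{
    VkBufferImageCopy region = {0};
    region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
    region.imageSubresource.layerCount = 1;
    region.imageExtent = (VkExtent3D){ width, height, 1 };

    vkCmdCopyImageToBuffer(cmd, storageImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                           readbackBuffer, 1, &region);
}

/* After the submit has completed (e.g. a fence wait), map the buffer's
 * memory and read the pixels on the CPU. */
static void *map_readback(VkDevice device, VkDeviceMemory readbackMemory)
{
    void *data = NULL;
    if (vkMapMemory(device, readbackMemory, 0, VK_WHOLE_SIZE, 0, &data) != VK_SUCCESS)
        return NULL;
    return data; /* call vkUnmapMemory(device, readbackMemory) when done */
}
```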