I have an image that I generate programmatically, and I want to send it as a texture to a compute shader. I generate the image by calculating each of the RGBA components as a UInt8 value, combining them into a UInt32, and storing that in the image's buffer. I do this with the following piece of code:
guard let cgContext = CGContext(data: nil,
                                width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bytesPerRow: 0,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: RGBA32.bitmapInfo) else {
    print("Unable to create CGContext")
    return
}

guard let buffer = cgContext.data else {
    print("Unable to create textures")
    return
}
let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)

let heightFloat = Float(height)
let widthFloat = Float(width)
for i in 0 ..< height {
    let latitude = Float(i + 1) / heightFloat
    for j in 0 ..< width {
        let longitude = Float(j + 1) / widthFloat
        let x = UInt8(((sin(longitude * Float.pi * 2) * cos(latitude * Float.pi) + 1) / 2) * 255)
        let y = UInt8(((sin(longitude * Float.pi * 2) * sin(latitude * Float.pi) + 1) / 2) * 255)
        let z = UInt8(((cos(latitude * Float.pi) + 1) / 2) * 255)
        let offset = width * i + j
        pixelBuffer[offset] = RGBA32(red: x, green: y, blue: z, alpha: 255)
    }
}
let coordinateConversionImage = cgContext.makeImage()
where RGBA32 is a little struct that does the shifting and creates the UInt32 value. This image turns out fine: I can convert it to a UIImage and save it to my photo library.
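For reference, RGBA32 is essentially the following (a minimal sketch; the exact bitmapInfo flags and shift order in my real code may differ, but they are consistent with each other):

import CoreGraphics

struct RGBA32 {
    private var color: UInt32

    // Pack the four 8-bit components into a single 32-bit value.
    // The shift order must match the byte order declared in bitmapInfo.
    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | UInt32(alpha)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
                          | CGBitmapInfo.byteOrder32Big.rawValue
}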
The problem arises when I try to send this image as a texture to a compute shader. Below is my shader code:
kernel void updateEnvironmentMap(texture2d<uint, access::read>  currentFrameTexture [[texture(0)]],
                                 texture2d<uint, access::read>  coordinateConversionTexture [[texture(1)]],
                                 texture2d<uint, access::write> environmentMap [[texture(2)]],
                                 uint2 gid [[thread_position_in_grid]])
{
    const uint4 pixel = {255, 127, 63, 255};
    environmentMap.write(pixel, gid);
}
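For completeness, this is roughly how I dispatch the kernel (abbreviated; the variable names are placeholders and the setup assumes `device`, `commandQueue`, and the three textures already exist):

import Metal

// Build the compute pipeline from the kernel in the default library.
let library = device.makeDefaultLibrary()!
let function = library.makeFunction(name: "updateEnvironmentMap")!
let pipelineState = try! device.makeComputePipelineState(function: function)

let commandBuffer = commandQueue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipelineState)
encoder.setTexture(currentFrameTexture, index: 0)
encoder.setTexture(coordinateConversionTexture, index: 1)
encoder.setTexture(environmentMap, index: 2)

// One thread per pixel, rounding the grid up to whole threadgroups.
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
let groupCount = MTLSize(width: (width + 15) / 16, height: (height + 15) / 16, depth: 1)
encoder.dispatchThreadgroups(groupCount, threadsPerThreadgroup: threadsPerGroup)
encoder.endEncoding()
commandBuffer.commit()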
The problem with this shader is that the type of my textures is uint, which is 32 bits wide, and I want to build 32-bit pixels the same way I do on the CPU, by packing four 8-bit values. However, I can't seem to do that in Metal, as there is no byte type that I can just pack together to make up a uint32. So, my question is: what is the correct way to handle 2D textures and set 32-bit pixels in a Metal compute shader?
Bonus question: I've also seen example shader code with texture2d<float, access::read> as the input texture type. I'm assuming each component represents a value between 0.0 and 1.0, but what advantage does that have over an unsigned int with values between 0 and 255?
Edit: To clarify, the output texture of the shader, environmentMap, has the exact same properties (width, height, pixelFormat, etc.) as the input textures. What I find counterintuitive is that we write a uint4 as a pixel, which suggests the pixel is composed of four 32-bit values, whereas each pixel should be 32 bits in total. With the current code, {255, 127, 63, 255} produces the exact same result as {2550, 127, 63, 255}, meaning the values somehow get clamped to the 0-255 range before being written to the output texture, which is extremely counterintuitive.
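For reference, the textures are created along these lines (a sketch; the .rgba8Uint pixel format is my assumption of what matches texture2d<uint> with the 8-bit packing above):

import Metal

// Minimal sketch: all three textures share the same descriptor.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba8Uint,
                                                          width: width,
                                                          height: height,
                                                          mipmapped: false)
descriptor.usage = [.shaderRead, .shaderWrite]

let coordinateConversionTexture = device.makeTexture(descriptor: descriptor)!
let environmentMap = device.makeTexture(descriptor: descriptor)!

// Copy the CPU-generated pixels out of the CGContext's backing buffer.
let region = MTLRegionMake2D(0, 0, width, height)
coordinateConversionTexture.replace(region: region,
                                    mipmapLevel: 0,
                                    withBytes: cgContext.data!,
                                    bytesPerRow: cgContext.bytesPerRow)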