I currently have an MTLTexture for input and am rendering it piecewise using a set of 20-30 vertices. This is currently done at the tail end of my drawRect handler of an MTKView:
[encoder setVertexBuffer:mBuff offset:0 atIndex:0]; // buffer of vertices
[encoder setVertexBytes:&_viewportSize length:sizeof(_viewportSize) atIndex:1];
[encoder setFragmentTexture:inputTexture atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:_vertexInfo.metalVertexCount];
[encoder endEncoding];
[commandBuffer presentDrawable:self.currentDrawable]; // result goes straight to screen
[commandBuffer commit];
However, before doing the final presentDrawable, I would like to intercept the resulting texture (I'm going to send a region of it off to a separate MTKView). In other words, I need access to some sort of output MTLTexture after the drawPrimitives call. What is the most efficient way to do this?
One idea is to introduce an additional drawPrimitives pass that renders to an intermediate output MTLTexture instead. I'm not sure how to do this, but I'd scoop up that output texture in the process; I suspect this pass could even be done elsewhere (i.e., off-screen). Then I'd issue a second drawPrimitives using a single full-screen textured quad with that outputTexture, followed by a presentDrawable on it. That code would live where my previous code was, as sketched below.
There may be a simple method in the Metal API (that I'm missing) that will allow me to capture an output texture of drawPrimitives.
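One candidate I've stumbled across is MTKView's framebufferOnly property: with it set to NO, the drawable's own texture is at least readable after encoding, e.g. (untested sketch):

self.framebufferOnly = NO; // set once at view setup; lets the drawable's texture be read
// ... encode drawPrimitives and endEncoding as above ...
id<MTLTexture> rendered = self.currentDrawable.texture; // what drawPrimitives rendered into

though I gather turning framebufferOnly off has a performance cost of its own, which is why I'm asking about the most efficient route.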
I have looked into using an MTLBlitCommandEncoder, but there are some issues around that on certain macOS hardware.
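For reference, the blit route I looked into would go something like this (untested sketch; regionTexture, origin, and size are placeholders for the subregion being sent to the other view):

id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:self.currentDrawable.texture // requires framebufferOnly == NO
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:origin
           sourceSize:size
            toTexture:regionTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit synchronizeResource:regionTexture]; // needed for CPU visibility of managed textures on discrete-GPU Macs
[blit endEncoding];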
UPDATE #1: idoogy, here is the code you were requesting:
Here is where I create the initial "brightness output" texture... we're mid-flight in a render pass when we do so:
...
[encoder setFragmentTexture:brightnessOutput atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:_vertexInfo.metalVertexCount];
[encoder endEncoding];
for (AltMonitorMTKView *v in self.downstreamOutputs) // ancillary MTKViews
    [v setInputTexture:brightnessOutput];
__block dispatch_semaphore_t block_sema = d.hostedAssetsSemaphore;
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> buffer) {
    dispatch_semaphore_signal(block_sema); // GPU has finished this frame's work
}];
[commandBuffer presentDrawable:self.currentDrawable];
[commandBuffer commit];
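(The counterpart to that completed-handler, not shown above, is the usual wait at the top of the frame; assuming the standard in-flight-frames pattern, it would be:)

// Assumed counterpart at frame start: don't begin encoding until the
// previous command buffer's completion handler has signaled.
dispatch_semaphore_wait(d.hostedAssetsSemaphore, DISPATCH_TIME_FOREVER);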
Below, we're in the ancillary view's drawRect handler, with inputTexture as the texture that's being transferred; we display a subregion of it. I should mention that this MTKView is configured to be drawn as a result of a setNeedsDisplay rather than by an internal timer (see the configuration sketch after the code):
id<MTLRenderCommandEncoder> encoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
encoder.label = @"Vertex Render Encoder";
[encoder setRenderPipelineState:metalVertexPipelineState];

// draw main content (note: this allocates a fresh vertex buffer every frame)
NSUInteger vSize = _vertexInfo.metalVertexCount * sizeof(AAPLVertex);
id<MTLBuffer> mBuff = [self.device newBufferWithBytes:_vertexInfo.metalVertices
                                               length:vSize
                                              options:MTLResourceStorageModeShared];

[encoder setVertexBuffer:mBuff offset:0 atIndex:0];
[encoder setVertexBytes:&_viewportSize length:sizeof(_viewportSize) atIndex:1];
[encoder setFragmentTexture:self.inputTexture atIndex:0]; // texture handed over by the main view
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:_vertexInfo.metalVertexCount];
[encoder endEncoding];

[commandBuffer presentDrawable:self.currentDrawable];
[commandBuffer commit];
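For reference, the setNeedsDisplay-driven configuration mentioned above amounts to the stock MTKView flags (ancillaryView here stands in for each AltMonitorMTKView):

// Draw only when something calls -setNeedsDisplay, not on MTKView's internal timer.
ancillaryView.paused = YES;
ancillaryView.enableSetNeedsDisplay = YES;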
The above code seems to work fine. Having said that, the Xcode GPU debugger tells a different story: it's pretty obvious that I'm wasting huge swaths of time doing things this way. That long command buffer is the ancillary monitor view doing a LOT of waiting...