Been pulling my hair out trying to figure out the current best path from AVFoundation video to an OpenGL texture. Most of what I find relates to iOS, and I can't seem to make it work well on OS X.
First of all, this is how I set up the videoOutput:
NSDictionary *pbOptions = [NSDictionary dictionaryWithObjectsAndKeys:
                           [NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr8], kCVPixelBufferPixelFormatTypeKey,
                           [NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
                           nil];
self.playeroutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pbOptions];
self.playeroutput.suppressesPlayerRendering = YES;
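In case the problem is upstream of the rendering, this is roughly how the output gets attached to the player item (a sketch; self.player and videoURL stand in for my actual setup):

    // Wiring sketch: self.player and videoURL are placeholders for my real setup
    AVPlayerItem *item = [AVPlayerItem playerItemWithURL:videoURL];
    [item addOutput:self.playeroutput];
    self.player = [AVPlayer playerWithPlayerItem:item];
    [self.player play];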
I'm attempting three different solutions, of which only one seems to work consistently, though I'm not sure it's the fastest. Another works for a little while, then breaks down with frames jumping all over the place, and the last one just produces black.
First off, the working solution, using glTexImage2D:
- (BOOL) renderWithCVPixelBufferForTime:(NSTimeInterval)time
{
    CMTime vTime = [self.playeroutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([self.playeroutput hasNewPixelBufferForItemTime:vTime]) {
        // Unlock and release the previous frame's buffer before grabbing the new one
        if (_cvPixelBufferRef) {
            CVPixelBufferUnlockBaseAddress(_cvPixelBufferRef, kCVPixelBufferLock_ReadOnly);
            CVPixelBufferRelease(_cvPixelBufferRef);
        }
        _cvPixelBufferRef = [self.playeroutput copyPixelBufferForItemTime:vTime itemTimeForDisplay:NULL];
        if (!_cvPixelBufferRef) return NO; // the copy can still come back NULL
        CVPixelBufferLockBaseAddress(_cvPixelBufferRef, kCVPixelBufferLock_ReadOnly);
        GLsizei texWidth  = (GLsizei)CVPixelBufferGetWidth(_cvPixelBufferRef);
        GLsizei texHeight = (GLsizei)CVPixelBufferGetHeight(_cvPixelBufferRef);
        GLvoid *baseAddress = CVPixelBufferGetBaseAddress(_cvPixelBufferRef);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, self.textureName);
        glTexParameteri(GL_TEXTURE_RECTANGLE_ARB, GL_TEXTURE_STORAGE_HINT_APPLE, GL_STORAGE_CACHED_APPLE);
        glTexImage2D(GL_TEXTURE_RECTANGLE_ARB, 0, GL_RGB, texWidth, texHeight, 0, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, baseAddress);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
    }
    return YES;
}
This method spends most of its time locking the base address of the pixel buffer, yet the docs say locking isn't required when the data is accessed from the GPU, and that it can impair performance. Still, I could not figure out a way to get a texture without locking.
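The only lock-free route I can see is to check whether the buffer is IOSurface-backed (it should be, given the kCVPixelBufferIOSurfacePropertiesKey attribute above) and hand the surface straight to CGL, which is really just the second solution below. A sketch:

    // Assumption: with kCVPixelBufferIOSurfacePropertiesKey set, the buffer
    // is IOSurface-backed and the CPU-side lock can be skipped entirely.
    IOSurfaceRef surface = CVPixelBufferGetIOSurface(_cvPixelBufferRef);
    if (surface) {
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, self.textureName);
        CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGB8,
                               (GLsizei)IOSurfaceGetWidth(surface),
                               (GLsizei)IOSurfaceGetHeight(surface),
                               GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE,
                               surface, 0);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
    } else {
        // CPU fallback: lock, glTexImage2D from the base address, unlock
    }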
Next up, the almost-working solution using IOSurface. This works for a bit, then gets really glitchy, as if IOSurfaces from previous frames are still being used:
- (BOOL) renderWithIOSurfaceForTime:(NSTimeInterval)time
{
    CMTime vTime = [self.playeroutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([self.playeroutput hasNewPixelBufferForItemTime:vTime]) {
        CVPixelBufferRef pb = [self.playeroutput copyPixelBufferForItemTime:vTime itemTimeForDisplay:NULL];
        IOSurfaceRef newSurface = CVPixelBufferGetIOSurface(pb);
        if (_surfaceRef != newSurface) {
            // Swap use counts: let the old surface be recycled, pin the new one
            if (_surfaceRef) {
                IOSurfaceDecrementUseCount(_surfaceRef);
            }
            _surfaceRef = newSurface;
            IOSurfaceIncrementUseCount(_surfaceRef);
            GLsizei texWidth  = (GLsizei)IOSurfaceGetWidth(_surfaceRef);
            GLsizei texHeight = (GLsizei)IOSurfaceGetHeight(_surfaceRef);
            glBindTexture(GL_TEXTURE_RECTANGLE_ARB, self.textureName);
            CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGB8, texWidth, texHeight, GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, _surfaceRef, 0);
            glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
        }
        CVPixelBufferRelease(pb);
    }
    return YES;
}
This seems like it would be the best solution, if only it worked reliably. I have another process that creates textures from IOSurfaces, and it works just fine while being extremely fast too.
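One theory about the glitching (an assumption, not something I've confirmed): the use count pins the IOSurface, but releasing the CVPixelBuffer straight away may still hand the buffer back to AVFoundation's pool for reuse. A sketch that keeps the buffer itself alive until the next frame replaces it, inside the same hasNewPixelBufferForItemTime check as above and using a hypothetical _previousPixelBuffer ivar:

    CVPixelBufferRef pb = [self.playeroutput copyPixelBufferForItemTime:vTime itemTimeForDisplay:NULL];
    if (pb) {
        IOSurfaceRef surface = CVPixelBufferGetIOSurface(pb);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, self.textureName);
        CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGB8,
                               (GLsizei)IOSurfaceGetWidth(surface),
                               (GLsizei)IOSurfaceGetHeight(surface),
                               GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE,
                               surface, 0);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
        // Retain this frame's buffer (and with it the surface) until it's replaced;
        // assumption: the pool recycles buffers on release regardless of use counts
        if (_previousPixelBuffer) CVPixelBufferRelease(_previousPixelBuffer);
        _previousPixelBuffer = pb;
    }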
Finally, the approach that seems recommended for iOS is to use a CVOpenGLTextureCache. The implementation on OS X seems slightly different, though, and I could not get it to render anything but black; on top of that, it seemed even slower than the first solution...
- (BOOL) renderByCVOpenGLTextureCacheForTime:(NSTimeInterval)time
{
    CMTime vTime = [self.playeroutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([self.playeroutput hasNewPixelBufferForItemTime:vTime]) {
        _cvPixelBufferRef = [self.playeroutput copyPixelBufferForItemTime:vTime itemTimeForDisplay:NULL];
        if (!_textureCacheRef) {
            CVReturn error = CVOpenGLTextureCacheCreate(kCFAllocatorDefault, NULL, cgl_ctx, CGLGetPixelFormat(cgl_ctx), NULL, &_textureCacheRef);
            if (error) {
                NSLog(@"Texture cache create failed");
            }
        }
        // Release last frame's texture and flush the cache before asking for a new
        // one, otherwise the cache keeps every texture it has handed out alive
        if (_textureRef) {
            CVOpenGLTextureRelease(_textureRef);
            _textureRef = NULL;
        }
        CVOpenGLTextureCacheFlush(_textureCacheRef, 0);
        CVReturn error = CVOpenGLTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCacheRef, _cvPixelBufferRef, NULL, &_textureRef);
        if (error) {
            NSLog(@"Failed to copy video texture");
        }
        CVPixelBufferRelease(_cvPixelBufferRef);
        _textureName = CVOpenGLTextureGetName(_textureRef);
    }
    return YES;
}
Probably I'm not setting things up right; there's zero documentation for the texture cache on OS X.
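The one thing I can say from reading the headers: on OS X the cache hands back rectangle textures, so the draw side presumably needs to bind with the texture's own target rather than assuming GL_TEXTURE_2D like the iOS samples do. A sketch of what I mean:

    // Rectangle targets also use pixel coordinates rather than normalized
    // ones, which could explain odd output if the quad samples at 0..1
    GLenum target = CVOpenGLTextureGetTarget(_textureRef);
    GLuint name   = CVOpenGLTextureGetName(_textureRef);
    glEnable(target);
    glBindTexture(target, name);
    // ... draw with per-pixel texture coordinates ...
    glBindTexture(target, 0);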
I've found it best to retain the CVPixelBufferRef between render cycles: as I understand it, the texture upload started by glTexImage2D can run asynchronously, and I'm quite happy with that. Several other objects may be rendered at the same time, and CGLFlushDrawable is eventually called once the textures are finally drawn.
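For reference, the per-frame ordering I'm relying on, sketched with hypothetical drawOtherObjects / drawVideoQuad stand-ins:

    [self renderWithCVPixelBufferForTime:time]; // kick off the (async) upload first
    [self drawOtherObjects];                    // other scene objects render meanwhile
    [self drawVideoQuad];                       // samples self.textureName
    CGLFlushDrawable(cgl_ctx);                  // present; the upload must be done by now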
Most of the Apple examples I find for getting video into OpenGL textures relate to iOS, and they split the texture in two and recombine it in a shader, as in this example: https://developer.apple.com/library/ios/samplecode/GLCameraRipple/Listings/GLCameraRipple_main_m.html#//apple_ref/doc/uid/DTS40011222-GLCameraRipple_main_m-DontLinkElementID_11 I couldn't adapt the code directly, as the texture cache is implemented differently on iOS.
So any pointers would be great. This seems like vital functionality, but the info I find regarding AV Foundation and OpenGL on OS X tends to be very negative.
Update: Updated the IOSurface code with use counts; it works slightly longer, but still glitches out eventually.