6 votes

I'm currently working on an iPhone camera app in which I take the camera input, convert it to an OpenGL texture and map it onto a 3D object (currently a plane in perspective projection, for the sake of simplicity). After mapping the camera input onto this 3D plane, I render the 3D scene to a texture, which is then used as a new texture for a plane in orthographic space (to apply additional filters in my fragment shader).

As long as I keep everything in orthographic projection, the resolution of my render texture is pretty high. But the moment I put my plane in perspective projection, the resolution of my render texture becomes very low.

Comparison:

[Image: resolution comparison of the orthographic and perspective renders]

As you can see, the last image has a very low resolution compared to the other two. So I'm guessing I'm doing something wrong.

I'm currently not using multisampling on any of my framebuffers, and I doubt I need it to fix this problem, since the orthographic scene works perfectly.

The textures I render into are 2048x2048 (they will eventually be output as an image to the iPhone camera roll).

Here are some parts of my source code that I think might be relevant:

Code to create the framebuffer that gets outputted to the screen:

// Color renderbuffer
glGenRenderbuffers(1, &colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
[context renderbufferStorage:GL_RENDERBUFFER 
                 fromDrawable:(CAEAGLLayer*)glView.layer];

// Depth renderbuffer
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

// Framebuffer
glGenFramebuffers(1, &defaultFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);

// Associate renderbuffers with framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
      GL_RENDERBUFFER, colorRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 
      GL_RENDERBUFFER, depthRenderbuffer);
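
For reference, the width and height used for the depth renderbuffer have to match the drawable's actual dimensions; on iOS they can be queried from the color renderbuffer once its storage has been allocated (a sketch, not part of my original code):

// Query the drawable-backed color renderbuffer for its pixel dimensions
// so the depth renderbuffer can be allocated with a matching size.
GLint width = 0, height = 0;
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);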

TextureRenderTarget class:

void TextureRenderTarget::init()
{
    // Color renderbuffer
    glGenRenderbuffers(1, &colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB8_OES, 
           width, height);

    // Depth renderbuffer
    glGenRenderbuffers(1, &depthRenderbuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, 
           width, height);

    // Framebuffer
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    // Associate renderbuffers with framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
          GL_RENDERBUFFER, colorRenderBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, 
          GL_RENDERBUFFER, depthRenderbuffer);

    // Texture and associate with framebuffer
    // (note: this replaces the color renderbuffer attached above at
    // GL_COLOR_ATTACHMENT0, so that renderbuffer ends up unused)
    texture = new RenderTexture(width, height);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, 
          GL_TEXTURE_2D, texture->getHandle(), 0);

    // Check for errors
    checkStatus();
}

void TextureRenderTarget::bind() const
{
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorRenderBuffer);
}

void TextureRenderTarget::unbind() const
{
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
}
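
checkStatus() itself isn't shown; a plausible minimal version (an assumption, not the original implementation) just verifies framebuffer completeness:

// Verify that the framebuffer is complete and report the status otherwise.
void TextureRenderTarget::checkStatus() const
{
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE)
        cout << "Framebuffer incomplete, status: " << status << endl;
}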

And finally, a snippet showing how I create the render texture and fill it with pixels:

void Texture::generate()
{
    // Create texture to render into
    // (note: 'unit' is only set in bind(), so bind() must run before generate())
    glActiveTexture(unit);
    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);

    // Configure texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

void Texture::setPixels(const GLvoid* pixels)
{
    // Uploads the full image; assumes this texture is bound on the active unit
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, 
         GL_UNSIGNED_BYTE, pixels);
    updateMipMaps();
}

void Texture::updateMipMaps() const
{
    glBindTexture(GL_TEXTURE_2D, handle);
    glGenerateMipmap(GL_TEXTURE_2D);
}

void Texture::bind(GLenum unit)
{
    this->unit = unit;

    if(unit != -1)
    {
        glActiveTexture(unit);
        glBindTexture(GL_TEXTURE_2D, handle);
    }
    else
    {
        cout << "Texture::bind -> Couldn't activate unit -1" << endl;
    }
}

void Texture::unbind()
{
    glBindTexture(GL_TEXTURE_2D, 0);    
}
Comments:

I've tried solving the problem by enabling mipmapping on my texture class, but this didn't solve the problem. – polyclick

To figure out what's going wrong here, it would probably be more useful to show what position/texture coordinate values you're using for your plane, as well as the matrices you're using. – Pivot

@bitshiftcop Any success in solving this? – Anton

3 Answers

6 votes

I would assume that the texture mapping is not pixel-exact under perspective projection.

Could you replace the camera image with a checkerboard (a chess grid with a 1 px cell size)? Then compare the rendered checkers in the orthographic and perspective projections; the grid should not be blurred. If it is, then the problem is in the projection matrix: it needs some bias for a direct texel-to-pixel mapping.
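
A minimal sketch of generating such a checkerboard and uploading it through the Texture class from the question (the 2048x2048 size matches the question's render targets; names are taken from the code above):

#include <vector>

// Fill a 2048x2048 RGBA buffer with a 1 px black/white checkerboard
// and upload it via the question's Texture::setPixels().
const int w = 2048, h = 2048;
std::vector<GLubyte> checker(w * h * 4);
for (int y = 0; y < h; ++y) {
    for (int x = 0; x < w; ++x) {
        GLubyte c = ((x + y) & 1) ? 255 : 0;     // alternate every pixel
        GLubyte* p = &checker[(y * w + x) * 4];
        p[0] = p[1] = p[2] = c;                  // R, G, B
        p[3] = 255;                              // opaque alpha
    }
}
texture->bind(GL_TEXTURE0);
texture->setPixels(&checker[0]);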

If you have a device, you can step through the rendering via the OpenGL ES frame capture feature in Xcode; there you will see exactly where the image becomes blurred.

As for mipmapping, it's not a good idea to use it for textures that are regenerated on the fly.

5 votes

The blurring may be caused by the plane being positioned at half pixels in screen coordinates. Since going from the orthographic to the perspective transform changes the position of the plane, the plane will likely not end up at the same screen coordinates under the two transforms.

Similar blurring occurs when you move a UIImageView from frame origin (0.0, 0.0) to (0.5, 0.5) on a standard-resolution display, or to (0.25, 0.25) on a Retina display.

The fact that your texture is very high-res may not help here, since the number of pixels actually sampled is bounded by the render target's resolution.

Try moving the plane a small distance in screen x,y coordinates and see if the blurring disappears.
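
As a sketch of that experiment (the half-pixel value assumes the 2048x2048 render target from the question, and 'projectionMatrix' stands in for whatever projection the app uses; GLKit math is used for brevity), you can bias the projection by half a pixel in normalized device coordinates:

#include <GLKit/GLKMath.h>

// One pixel spans 2/2048 in NDC, so half a pixel is 1/2048.
// Pre-multiplying the projection by this translation shifts the whole
// scene by half a pixel on screen after the perspective divide.
const float halfPixel = 1.0f / 2048.0f;
GLKMatrix4 bias = GLKMatrix4MakeTranslation(halfPixel, halfPixel, 0.0f);
GLKMatrix4 biasedProjection = GLKMatrix4Multiply(bias, projectionMatrix);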

1 vote

I finally solved the problem by merging the first and second steps of my rendering process.

The first step used to crop and flip the camera texture and render the result to a new texture. That newly rendered texture was then mapped onto a 3D plane, and the result was rendered to yet another texture.

I merged these two steps by changing the texture coordinates of my 3D plane, so that I can use the original camera texture directly on that plane.
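
Roughly, the idea looks like this (the inset values and vertex ordering here are illustrative placeholders, not my actual numbers):

// Texture coordinates for a plane drawn as a triangle strip: crop by a
// normalized inset on each axis and flip vertically by mirroring v.
const GLfloat insetX = 0.0f, insetY = 0.125f;   // placeholder crop amounts
const GLfloat texCoords[] = {
    insetX,        1.0f - insetY,   // bottom-left vertex samples the image top
    1.0f - insetX, 1.0f - insetY,   // bottom-right
    insetX,        insetY,          // top-left vertex samples the image bottom
    1.0f - insetX, insetY,          // top-right
};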

I don't know the exact reason for the loss of quality between the two rendered textures, but as a hint for the future: don't render to a texture and then reuse that result for another render to texture. Merging everything into a single pass is better for performance, and it also avoids color-shifting issues.