0
votes

This is my first time playing with vertex shaders in a WebGL context. I want to texture a primitive with a video, but instead of just mapping the video onto the surface, I'm trying to translate the luma of the video into vertex displacement. This is similar to the Rutt/Etra video synthesizer, but in a digital format: a bright pixel should push the vertex forward, while a darker pixel does the inverse. Can anyone tell me what I'm doing wrong? I can't find a reference for this error.

When my shaders compile, I get the following errors (I am using sampler2D and texture2D):

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36 | WebGL 1.0 (OpenGL ES 2.0 Chromium) | WebKit | WebKit WebGL | WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium)
ERROR: 0:57: 'ftransform' : no matching overloaded function found
ERROR: 0:57: 'assign' : cannot convert from 'const mediump float' to 'Position highp 4-component vector of float'
ERROR: 0:60: 'gl_TextureMatrix' : undeclared identifier
ERROR: 0:60: 'gl_TextureMatrix' : left of '[' is not of type array, matrix, or vector
ERROR: 0:60: 'gl_MultiTexCoord0' : undeclared identifier

 <!doctype html>
<html>
    <head>
        <title>boiler plate for three.js</title>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">

        <script src="vendor/three.js/Three.js"></script>
        <script src="vendor/three.js/Detector.js"></script>
        <script src="vendor/three.js/Stats.js"></script>
        <script src="vendor/threex/THREEx.screenshot.js"></script>
        <script src="vendor/threex/THREEx.FullScreen.js"></script>
        <script src="vendor/threex/THREEx.WindowResize.js"></script>
        <script src="vendor/threex.dragpancontrols.js"></script>
        <script src="vendor/headtrackr.js"></script>

        <style>
body {
    overflow    : hidden;
    padding     : 0;
    margin      : 0;
    color       : #222;
    background-color: #BBB;
    font-family : arial;
    font-size   : 100%;
}
#info .top {
    position    : absolute;
    top     : 0px;
    width       : 100%;
    padding     : 5px;
    text-align  : center;
}
#info a {
    color       : #66F;
    text-decoration : none;
}
#info a:hover {
    text-decoration : underline;
}
#info .bottom {
    position    : absolute;
    bottom      : 0px;
    right       : 5px;
    padding     : 5px;
}

        </style>
    </head>
<body>
    <!-- three.js container -->
        <div id="container"></div>
    <!-- info on screen display -->
    <div id="info">
        <!--<div class="top">
            <a href="http://learningthreejs.com/blog/2011/12/20/boilerplate-for-three-js/" target="_blank">LearningThree.js</a>
            boiler plate for
            <a href="https://github.com/mrdoob/three.js/" target="_blank">three.js</a>
        </div>-->
        <div class="bottom" id="inlineDoc" >
            - <i>p</i> for screenshot
        </div> 
    </div> 

<canvas id="compare" width="320" height="240" style="display:none"></canvas>
<video id="vid" autoplay loop></video>
<script type="x-shader/x-vertex" id="vertexShader">
varying vec2 texcoord0;


void main()
{
    // perform standard transform on vertex
    gl_Position = ftransform();

    // transform texcoords
    texcoord0 = vec2(gl_TextureMatrix[0] * gl_MultiTexCoord0);
}       
    </script>

    <script type="x-shader/x-fragment" id="fragmentShader">
varying vec2 texcoord0;

uniform sampler2D tex0;
uniform vec2 imageSize;
uniform float coef;

const vec4 lumcoeff = vec4(0.299,0.587,0.114,0.);

void main (void)
{

    vec4 pixel = texture2D(tex0, texcoord0);
    float luma = dot(lumcoeff, pixel);

    gl_FragColor =  vec4((texcoord0.x  / imageSize.x), luma, (texcoord0.y / imageSize.y) , 1.0);
}
    </script>
    <script type="text/javascript">
        var stats, scene, renderer;
        var camera, cameraControls;
        var videoInput = document.getElementById('vid');
        var canvasInput = document.getElementById('compare');   
        var projector = new THREE.Projector();
        var gl;
        var mesh,
        cube,
    attributes,
    uniforms,
    material,
    materials; 
        var videoTexture = new THREE.Texture( videoInput );

        if( !init() )   animate();

        // init the scene
        function init(){

            if( Detector.webgl ){
                renderer = new THREE.WebGLRenderer({
                    antialias       : true, // to get smoother output
                    preserveDrawingBuffer   : true  // to allow screenshot
                });
                renderer.setClearColorHex( 0xBBBBBB, 1 );
            // uncomment if webgl is required
            //}else{
            //  Detector.addGetWebGLMessage();
            //  return true;
            }else{
                renderer    = new THREE.CanvasRenderer();
                gl=renderer;
            }
            renderer.setSize( window.innerWidth, window.innerHeight );
            document.getElementById('container').appendChild(renderer.domElement);


            // create a scene
            scene = new THREE.Scene();

            // put a camera in the scene
            camera = new THREE.PerspectiveCamera( 23, window.innerWidth / window.innerHeight, 1, 100000 );
            camera.position.z = 0;
            scene.add( camera );
//
//          // create a camera contol
//          cameraControls  = new THREEx.DragPanControls(camera)

            // transparently support window resize
//          THREEx.WindowResize.bind(renderer, camera);
            // allow 'p' to make screenshot
            THREEx.Screenshot.bindKey(renderer);
            // allow 'f' to go fullscreen where this feature is supported
            if( THREEx.FullScreen.available() ){
                THREEx.FullScreen.bindKey();        
                document.getElementById('inlineDoc').innerHTML  += "- <i>f</i> for fullscreen";
            }
            materials   = new THREE.MeshLambertMaterial({
                    map : videoTexture
            });
            attributes = {};

            uniforms = {

              tex0: {type: 'mat2', value: materials},

              imageSize: {type: 'f', value: []},

              coef: {type: 'f', value: 1.0}

            };


        //Adding a directional light source to see anything..
        var directionalLight = new THREE.DirectionalLight(0xffffff);
        directionalLight.position.set(1, 1, 1).normalize();
        scene.add(directionalLight);    



            // video styling
            videoInput.style.position = 'absolute';
            videoInput.style.top = '50px';
            videoInput.style.zIndex = '100001';
            videoInput.style.display = 'block';

            // set up camera controller
            headtrackr.controllers.three.realisticAbsoluteCameraControl(camera, 1, [0,0,0], new THREE.Vector3(0,0,0), {damping : 1.1});
            var htracker = new headtrackr.Tracker();
            htracker.init(videoInput, canvasInput);
            htracker.start();

//          var stats = new Stats();
//          stats.domElement.style.position = 'absolute';
//          stats.domElement.style.top = '0px';
//          document.body.appendChild( stats.domElement );


document.addEventListener('headtrackrStatus', 
  function (event) {
    if (event.status == "found") {
        addCube();

    }
  }
);      

}    
        // animation loop
        function animate() {

            // loop on request animation loop
            // - it has to be at the begining of the function
            // - see details at http://my.opera.com/emoller/blog/2011/12/20/requestanimationframe-for-smart-er-animating
            requestAnimationFrame( animate );

            // do the render
            render();

            // update stats
            //stats.update();
        }

function render() {

            // convert matrix of every frame of video -> texture
            uniforms.tex0 = materials;
            uniforms.coef = 0.2;  
            uniforms.imageSize.x = window.innerWidth;
            uniforms.imageSize.y = window.innerHeight;
            // update camera controls
//          cameraControls.update();
            if(  videoInput.readyState ===  videoInput.HAVE_ENOUGH_DATA ){
                videoTexture.needsUpdate = true;
            }

            // actually render the scene
            renderer.render( scene, camera );
        }
function addCube(){
        material = new THREE.ShaderMaterial({
          uniforms: uniforms,
          attributes: attributes,
          vertexShader: document.getElementById('vertexShader').textContent,
          fragmentShader: document.getElementById('fragmentShader').textContent,
          transparent: true
        });


            //The cube
        cube = new THREE.Mesh(new THREE.CubeGeometry(40, 30, 10, 1, 1, 1, material), new THREE.MeshFaceMaterial());
        cube.overdraw = true;
        scene.add(cube);
}
</script>
</body>
</html>
1

1 Answer

0
votes

The primary problem here is that you are using old GLSL reserved words that were intended for programmable / fixed-function interop. In OpenGL ES 2.0, things like gl_MultiTexCoord0 and gl_TextureMatrix[n] are not defined, because ES completely removed the legacy fixed-function vertex array baggage that regular OpenGL still has to deal with. In desktop OpenGL, those reserved words exposed per-texture-unit matrix and vertex array state; in OpenGL ES, that state simply does not exist.

To get around this, you have to use generic vertex attributes (e.g. attribute vec2 tex_st) instead of having a 1:1 mapping between texture coordinate pointers and texture units. Likewise, there is no texture matrix associated with each texture unit. To duplicate the functionality of texture matrices, you need to use matrix uniforms in your vertex/fragment shader.
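Since the question uses three.js, the good news is that THREE.ShaderMaterial already declares the generic attributes (position, uv, normal) and matrix uniforms (modelViewMatrix, projectionMatrix) for you. A sketch of the vertex shader from the question rewritten for WebGL — the texMatrix uniform here is a hypothetical stand-in for gl_TextureMatrix[0], which you only need if you actually want texture-matrix behavior:

```glsl
// Declared automatically by three.js's ShaderMaterial, do NOT redeclare:
//   attribute vec3 position;   // replaces gl_Vertex
//   attribute vec2 uv;         // replaces gl_MultiTexCoord0
//   uniform mat4 modelViewMatrix, projectionMatrix;

uniform mat3 texMatrix;    // hypothetical replacement for gl_TextureMatrix[0]
varying vec2 texcoord0;

void main()
{
    // replaces: texcoord0 = vec2(gl_TextureMatrix[0] * gl_MultiTexCoord0);
    texcoord0 = (texMatrix * vec3(uv, 1.0)).xy;

    // replaces: gl_Position = ftransform();
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
```

If you never set a texture matrix, drop texMatrix entirely and just write `texcoord0 = uv;`.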

To be honest, I cannot remember the last time I actually found it useful to have a separate texture matrix / texture coordinate pointer for each texture unit when using shaders... I often have 4 or 5 different textures and only need maybe 1 or 2 sets of texture coordinates. It is no big loss.

The kicker here is ftransform(...). Its purpose was to make it possible to write a one-line vertex shader in desktop OpenGL that behaves exactly like the fixed-function pipeline; it was never part of OpenGL ES. You must have copied and pasted a shader that was written for OpenGL 2.x or 3.x (compatibility profile). Explaining how to fix everything in this shader could be a real chore; you may have to learn more about GLSL before most of what I just wrote makes sense :-\
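That said, for the luma displacement you described, the texture lookup has to happen in the vertex shader, which means you need vertex texture fetch — check that `gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS)` is greater than 0 first, since some WebGL 1.0 implementations do not support it. A rough sketch under that assumption, where displace is an assumed uniform controlling the effect strength (position, uv, and normal are supplied by three.js):

```glsl
uniform sampler2D tex0;      // the video texture
uniform float displace;      // assumed: how far a fully bright pixel pushes a vertex
varying vec2 texcoord0;

const vec4 lumcoeff = vec4(0.299, 0.587, 0.114, 0.0);

void main()
{
    texcoord0 = uv;

    // sample the video per vertex (requires vertex texture fetch support)
    float luma = dot(lumcoeff, texture2D(tex0, uv));

    // bright pixels push the vertex out along its normal, dark pixels pull it in
    vec3 displaced = position + normal * (luma - 0.5) * displace;

    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}
```

On the JavaScript side the texture would be bound with a `{ type: 't', value: videoTexture }` uniform rather than the mat2/material pairing in the question.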