5
votes

In my game I want to create separate GLSL shaders for each situation. For example, if I had 3 models (a character, a shiny sword and a blurry ghost), I would like to assign renderShader, animationShader and lightingShader to the character; renderShader, lightingShader and specularShader to the shiny sword; and finally renderShader, lightingShader and blurShader to the blurry ghost.

The renderShader should multiply the vertex positions by the projection, world and other matrices, and its fragment shader should simply apply the texture to the model.

animationShader should transform vertices by given bone transforms.

lightingShader should do the lighting and specularLighting should do the specular lighting.

blurShader should do the blur effect.

Now, first of all, how can I do multiple vertex transforms in different shaders? The animationShader should calculate the animated positions of the vertices, and then renderShader should take those positions and transform them by some matrices.

Secondly, how can I change the color of fragments in different shaders?

The basic idea is that I want to be able to use different shaders for various situations/effects, and I don't know how to achieve that.

I need to know how I should use these shaders in OpenGL, and how I should write the GLSL so that all the shaders complement each other and no shader cares whether another shader is used or not.

3
On SO it's preferred to have one question per post; try to separate them. – Valerij
"lightingShader should do the lighting and specularLighting should do the specular lighting." "Lighting" is a functional superset of "specular lighting". So you would be doing specular lighting twice. – Nicol Bolas

3 Answers

3
votes

What you're asking for is decidedly non-trivial, and is probably extreme overkill for the relatively limited number of "shader" types you describe.

Doing what you want will require developing what is effectively your own shading language. It may be a highly #defined version of GLSL, but the shaders you write would not be pure GLSL. They would have specialized hooks and be written in ways that code could be expected to flow into other code.

You'll need your own way of specifying the inputs and outputs of your language. When you want to connect shaders together, you have to say whose outputs go to which shader's inputs. Some inputs can come from actual shader stage inputs, while others come from other shaders. Some outputs written by a shader will be actual shader stage outputs, while others will feed other shaders.

Therefore, a shader that needs an input from another shader must execute after that other shader. Your system will have to work out the dependency graph.
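
To make the dependency idea concrete, here is a minimal sketch of how such an ordering might be computed, assuming each module lists the modules whose outputs it consumes. The function names and graph representation are hypothetical, and the sketch assumes the graph is acyclic:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Depth-first visit: emit a module's dependencies before the module itself.
// Assumes the dependency graph is acyclic; a real system should detect cycles.
void visit(const std::string& m,
           const std::map<std::string, std::set<std::string>>& deps,
           std::set<std::string>& done, std::vector<std::string>& order)
{
    if (done.count(m)) return;
    done.insert(m);
    auto it = deps.find(m);
    if (it != deps.end())
        for (const auto& d : it->second) visit(d, deps, done, order);
    order.push_back(m);
}

// Order shader modules so that every module runs after the modules
// whose outputs it consumes (a topological sort).
std::vector<std::string> orderModules(
    const std::map<std::string, std::set<std::string>>& deps)
{
    std::set<std::string> done;
    std::vector<std::string> order;
    for (const auto& kv : deps) visit(kv.first, deps, done, order);
    return order;
}
```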

Once you've figured out all of the inputs and outputs for a specific sequence of shaders, you have to take all of those shader text files and compile them into GLSL, as appropriate. Obviously, this is a non-trivial process.

Your shader language might look like this:

INPUT vec4 modelSpacePosition;
OUTPUT vec4 clipSpacePosition;

uniform mat4 modelToClipMatrix;

void main()
{
  clipSpacePosition = modelToClipMatrix * modelSpacePosition;
}

Your "compiler" will need to do textual transformations on this, converting references to modelSpacePosition into an actual vertex shader input or a variable written by another shader, as appropriate. Similarly, if clipSpacePosition is to be written to gl_Position, you will need to convert all uses of clipSpacePosition to gl_Position. Also, you will need to remove the explicit output declaration.
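
A toy illustration of such a transformation pass, for the trivial case where the declared output maps straight to gl_Position. The names replaceAll and compileToGlsl are made up, and plain string replacement is of course far too naive for a real compiler, which would need an actual parser:

```cpp
#include <cassert>
#include <string>

// Replace every occurrence of `from` with `to` in `src`.
std::string replaceAll(std::string src, const std::string& from,
                       const std::string& to)
{
    for (std::size_t pos = 0;
         (pos = src.find(from, pos)) != std::string::npos; pos += to.size())
        src.replace(pos, from.size(), to);
    return src;
}

// Naive "compiler" pass: turn the custom INPUT/OUTPUT declarations into
// plain GLSL, and rename the declared output to the built-in gl_Position.
std::string compileToGlsl(std::string src)
{
    src = replaceAll(src, "INPUT ", "in ");
    src = replaceAll(src, "OUTPUT vec4 clipSpacePosition;", "");  // drop decl
    src = replaceAll(src, "clipSpacePosition", "gl_Position");
    return src;
}
```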

In short, this will be a lot of work.

If you're going to do this, I would strongly urge you to avoid trying to merge the concepts of vertex and fragment shaders. Keep this shader system working within the well-defined shader stages. So your "lightingShader" would need to be either a vertex shader or a fragment shader. If it's a fragment shader, then one of the shaders in the vertex stage that feeds into it will need to provide a normal in some way, or the fragment shader component will need to compute the normal via some other mechanism.

2
votes

Effectively, for every combination of shader stages you'll have to create an individual shader program. To save work and avoid redundancy, use a caching structure so that a program for each requested combination is created only once and reused whenever it is requested again.

You can do something similar with the shader stages themselves. Shader stages cannot (yet) be linked from several compiled objects; this is an ongoing effort in OpenGL development, and the separable shaders of OpenGL-4 are a stepping stone there. But you can compile a single shader from several source strings. So you'd write the functions for each desired effect in a separate source and combine them at compilation time, again using a caching structure to map each combination of source modules to a shader object.
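
A minimal sketch of such a cache, with linkProgram stubbed out; in real code the handle would be a GLuint produced by glCreateProgram/glAttachShader/glLinkProgram, and the module names are only placeholders:

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>

// Stand-in for the GL program handle; in real code this would be a GLuint.
using ProgramHandle = unsigned int;

// Stub: pretend to compile and link the given modules into a program.
ProgramHandle linkProgram(const std::set<std::string>& /*modules*/)
{
    static ProgramHandle next = 1;
    return next++;  // each "link" yields a fresh handle
}

// Cache: each distinct combination of shader modules is linked only once
// and reused on every later request.
ProgramHandle getProgram(const std::set<std::string>& modules)
{
    static std::map<std::set<std::string>, ProgramHandle> cache;
    auto it = cache.find(modules);
    if (it != cache.end()) return it->second;
    ProgramHandle p = linkProgram(modules);
    cache[modules] = p;
    return p;
}
```

Because the key is a set, the order in which the caller lists the modules doesn't matter; the same combination always maps to the same program.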

Update due to comment

Let's say you want some modularity. For this we can exploit the fact that glShaderSource accepts multiple source strings and simply concatenates them. You write a number of shader modules. One does the per-vertex illumination calculations:

uniform vec3 light_positions[N_LIGHT_SOURCES];
out vec3 light_directions[N_LIGHT_SOURCES];
out vec3 light_halfdirections[N_LIGHT_SOURCES];

void illum_calculation()
{
    for(int i = 0; i < N_LIGHT_SOURCES; i++) {
        light_directions[i] = ...;
        light_halfdirections[i] = ...;
    }
}

You put this into illum_calculation.vs.glslmod (the file name and extension are arbitrary). Next you have a small module that does bone animation:

uniform vec4 armature_pose[N_ARMATURE_BONES];
uniform vec3 armature_bones[N_ARMATURE_BONES];

in vec3 vertex_position;

void skeletal_animation()
{
    /* ...*/
}

Put this into illum_skeletal_anim.vs.glslmod. Then you have some common header:

#version 330
uniform ...;
in ...;

and some common tail which contains the main function that invokes the different stages:

void main() {
    skeletal_animation();
    illum_calculation();
}

And so on. Now you can load all those modules, in the right order, into a single shader stage, and the same goes for all the other shader stages. The fragment shader is special, since it can write to several framebuffer targets at the same time (in sufficiently recent OpenGL versions). And technically you can pass a lot of varyings between the stages, so you could pass a separate set of varyings between shader stages for each framebuffer target. However, the geometry and the transformed vertex positions are common to all of them.
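
A small sketch of assembling a stage from modules in order. The assembleStage name is made up, and the concatenation only mirrors what glShaderSource does internally; in real code you can skip it and hand the array of C strings straight to glShaderSource(shader, count, strings, nullptr):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Concatenate shader modules in order, exactly as glShaderSource would
// when given several source strings for one shader object.
std::string assembleStage(const std::vector<std::string>& modules)
{
    std::string combined;
    for (const auto& m : modules) combined += m + "\n";
    return combined;
}
```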

1
vote

You have to provide a different shader program for each model you want to render. You can switch between different shader combinations using the glUseProgram function. So before rendering your character or shiny sword or whatever, you have to set up the appropriate shader attributes and uniforms.
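
A rough sketch of such a draw loop. The program handles are made-up numbers and the switch is only recorded, so the idea is visible without a GL context; the commented line shows where the real glUseProgram call would go:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A model and the shader program it should be drawn with.
struct Model { std::string name; unsigned int program; };

// Draw all models, binding each model's program before drawing it and
// skipping the bind when the right program is already active.
std::vector<unsigned int> drawAll(const std::vector<Model>& models)
{
    std::vector<unsigned int> switches;
    unsigned int current = 0;  // 0 = no program bound
    for (const auto& m : models) {
        if (m.program != current) {   // avoid redundant state changes
            current = m.program;      // glUseProgram(m.program);
            switches.push_back(current);
        }
        // set per-model uniforms (matrices, lights, samplers) and draw here
    }
    return switches;
}
```

Sorting models by program before drawing keeps the number of glUseProgram calls, which are comparatively expensive state changes, to a minimum.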

So it is just a question of how you design your game's code, because you need to supply all the uniforms the shader expects, for example light information and texture samplers, and you must enable all the necessary vertex attributes of the shader in order to pass in position, color and so on.

These attributes can differ between shaders, and your client-side models can also have different kinds of vertex attribute structures.

That means your model code and its assigned shader directly influence, and depend on, each other.

If you want to share common code between different shader programs, e.g. an illuminateDiffuse function, you have to factor that function out and provide it to your shaders by simply inserting the string literal that represents it into your shader code, which is itself nothing more than a string. This way you can achieve a kind of modularity, or include-like behavior, through string manipulation of your shader code.
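
For instance, a sketch of this string-splicing approach. The illuminateDiffuse name comes from the answer above, but its body and the buildFragmentShader helper are made up for illustration:

```cpp
#include <cassert>
#include <string>

// Shared GLSL function kept as a plain string literal; the body here
// is only an illustrative diffuse term.
const std::string kIlluminateDiffuse =
    "vec3 illuminateDiffuse(vec3 n, vec3 l, vec3 color) {\n"
    "    return color * max(dot(n, l), 0.0);\n"
    "}\n";

// Build a fragment shader by splicing the shared function in front of the
// per-shader body, before handing the result to glShaderSource.
std::string buildFragmentShader(const std::string& body)
{
    return "#version 330\n" + kIlluminateDiffuse + body;
}
```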

In any case, the shader compiler will tell you what's wrong.

Best regards