2 votes

Here's a very simple shader:

float4 vert (float4 vertex : POSITION, out PositionHolder o) : SV_POSITION
{
    UNITY_INITIALIZE_OUTPUT(PositionHolder, o);
    o.localPos = vertex.xyz;   // pass the object-space position through to the fragment stage
    return UnityObjectToClipPos(vertex);
}

fixed4 frag (PositionHolder IN) : SV_Target
{
    if (IN.localPos.y > -0.2)
        return _UpperColor;
    else
        return _LowerColor;
}

(So, on a quad, it just paints the top 70% in one color and the bottom stripe in the other.)
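(_UpperColor and _LowerColor aren't shown above; they're just ordinary color properties, declared along these lines - the display names and default colors are placeholders and don't matter.)

Properties
{
    _UpperColor ("Upper Color", Color) = (1, 0, 0, 1)
    _LowerColor ("Lower Color", Color) = (0, 0, 1, 1)
}

// ...and inside the CGPROGRAM block, the matching uniforms:
fixed4 _UpperColor;
fixed4 _LowerColor;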

Notice that the only thing vert does is pass along the local (object-space) position of the mesh vertex.

Here's the struct that does so:

struct PositionHolder
{
    float3 localPos;
};

However (in Unity, anyway), the compiler will want a semantic for that float3.


Reviewing Unity's docs, they don't explain the semantics themselves, of course, but there's a link to

https://docs.microsoft.com/en-us/windows/desktop/direct3dhlsl/dx-graphics-hlsl-semantics

(Aside: as I understand it, that page is specifically for D3D, but I guess that's the syntax/semantics to go with anyway?)

My reading of the situation suggests you should use, say,

struct PositionHolder
{
    float3 localPos : POSITION1;
};

Or really, POSITION plus any number you're not already using for another resource.

What about,

struct PositionHolder
{
    float3 localPos : POSITION8;
};

for luck.

(Confusingly, they seem to run only from blank to 9. I don't know whether that reflects a hardware reality, or is just something shoddy somewhere - or what. But anyway: basically "any number 1 to 9 you're not already using elsewhere".)

  1. Note that POSITION is really a float4 - is that OK for my float3? Is there something better than POSITION for a general-purpose float3? (I've noticed that if you just use, say, COLOR, it also works fine. Does it matter?)

  2. Is the "1 on the end" system actually correct? So POSITION1 is good to go? I'm not melting the GPU or some such?

  3. Hmm, I've noticed you can just type ANYTHING in there (say, "abcd1"). But surely then it's not actually a hardware register thingy???

Comments:

UV is a good container for custom data; it gets interpolated, and there are four of them. - zambari

Position is only per-vertex, but UV gets passed to the fragment shader in interpolated form, so you don't have to use other forms of trickery to know where you are. - zambari

Regarding your third question, I don't know the answer. Regarding examples: a great starting point, which takes maybe half an hour to fully follow, is here: docs.unity3d.com/Manual/SL-SurfaceShaderExamples.html - zambari

Oh, I've surely worked through all that - unfortunately it does not explain anything about the semantic tags. (Nothing does! Heh.) - Fattie

2 Answers

1 vote

Let's say your development platform is iOS.

iOS has its own shader language: the Metal Shading Language.

In order for your shaders to work on iOS, Unity compiles them from HLSL to Metal. So if we were to compile the following piece of shader code:

HLSL:

float4 localPos: POSITION1;

this is what the compilation result would be:

Metal:

float4 POSITION1 [[ user(POSITION1) ]];

According to the Metal Shading Language Specification:

The attribute [[user(name)]] syntax can also be used to specify an attribute name for any user-defined variables.

Unity3D translates your HLSL semantic field "name" into the Metal attribute [[user(name)]]. So it doesn't matter whether you write POSITION1 or HelloWorld or Fattie, as long as it's unique and doesn't use a reserved keyword. Hence, after compiling the following shader

HLSL:

float4 localPos: Fattie;

this would be the result:

Metal:

float4 Fattie0 [[ user(Fattie0) ]];

As for vector data types: the vector types (float, float2, float3, float4, etc.) are not tied to your custom semantic name. Compiling this shader with a float3

HLSL:

float3 localPos: Fattie;

gives the following result:

Metal:

float3 Fattie0 [[ user(Fattie0) ]];
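So, tying that back to the question's struct: on this reading, something like the sketch below should be fine on the Metal path; the semantic name is arbitrary (it just echoes the example above) so long as it's unique and not a reserved keyword.

struct PositionHolder
{
    // Arbitrary user semantic; as described above, the Metal backend turns it
    // into [[ user(Fattie0) ]], so the exact spelling only needs to be unique.
    float3 localPos : Fattie;
};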
1 vote

1: GPU registers are hardware-based, and they are always vectors of size 4 (float4 etc.); the semantic is just for binding variables to names so that all the data ends up in the right place. All vector operations on the GPU use SIMD (single instruction, multiple data), so performing computations on a vector is pretty much as fast as on a scalar. However, I am not sure about POSITION; generally that name only fills a function for input variables - for your own arbitrary output variables, you should use TEXCOORD0-N. The COLOR interpolators have 8-bit precision and might also clamp the values to the range 0-1.

2: The number of interpolator registers at your disposal depends on the supported shader model (you can use #pragma target 4.0 to get up to 32 interpolators, at the cost of requiring a minimum of Direct3D 10) - there's a short sketch of that after point 3. But yes, you can use 0 or 8 or 4 or whatever.

3: Have you tried it? Really, the semantics are just there to provide hints/labels for the compiler about what data we want where. For most values it doesn't matter which register we use, and typing out registers explicitly is really just a legacy from the past. I have never seen anyone write random things as shader semantics, though!
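(For point 2, a minimal sketch of what bumping the target looks like - the struct members here are made up purely to show higher-numbered interpolator slots.)

// Inside the CGPROGRAM block:
#pragma target 4.0   // Shader Model 4.0 (D3D10-class hardware) raises the interpolator limit to 32

struct VertexOutput
{
    float4 clipPos  : SV_POSITION;
    float3 localPos : TEXCOORD0;
    float3 worldPos : TEXCOORD1;
    // ...with target 4.0 you can keep going well past TEXCOORD7...
    float4 extra    : TEXCOORD9;
};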

Personally, I think it is more intuitive to structure your input and output like this:

struct VertexInput {
    // Here we use POSITION to tell the compiler that we want the vertex coords to be written to this variable.
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct VertexOutput {
    float4 clipPos : SV_POSITION;
    // Here we use TEXCOORD0-1, since these are just variables to be interpolated. 
    float3 localPos : TEXCOORD0; 
    float3 normal : TEXCOORD1;
};

VertexOutput Vert(VertexInput v) {
    VertexOutput o;

    o.clipPos = UnityObjectToClipPos(v.vertex);
    o.localPos = v.vertex.xyz; // take the object-space position (drop w)
    o.normal = UnityObjectToWorldNormal(v.normal);

    return o;
}

fixed4 Frag(VertexOutput o) : SV_Target {
    half3 normal = normalize(o.normal); // Interpolated values are not necessarily normalized
    half NdotL = saturate(dot(normal, _WorldSpaceLightPos0.xyz));
    return fixed4((fixed3)NdotL, 1); // Basic lambert lighting as an example
}
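
(To actually use this, here's a rough sketch of the surrounding pass - the shader name and tag are just placeholders; UnityCG.cginc supplies UnityObjectToClipPos, UnityObjectToWorldNormal and _WorldSpaceLightPos0.)

Shader "Custom/LocalPosExample"
{
    SubShader
    {
        Pass
        {
            // ForwardBase so _WorldSpaceLightPos0 refers to the main directional light.
            Tags { "LightMode" = "ForwardBase" }

            CGPROGRAM
            #pragma vertex Vert
            #pragma fragment Frag
            #include "UnityCG.cginc"

            // ...VertexInput, VertexOutput, Vert and Frag from above go here...

            ENDCG
        }
    }
}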