
I'm trying to interpolate between integer pixel coordinates instead of between 0 and 1. I'm using point sampling, so I'm not interested in fractions of pixels, but the texture coordinates still arrive in the pixel shader as float2 even though the declared type is int2.

pixelSize is 1 divided by the texture size.

matrix WorldViewProjection;

float2 pixelSize;

Texture2D SpriteTexture;

sampler2D SpriteTextureSampler = sampler_state
{
    Texture = <SpriteTexture>;
    AddressU = clamp;
    AddressV = clamp;
    magfilter = POINT;
    minfilter = POINT;
    mipfilter = POINT;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR0;
    int2 TextureCoordinates : TEXCOORD0;
};

VertexShaderOutput SpriteVertexShader(float4 position : POSITION0, float4 color : COLOR0, float2 texCoord : TEXCOORD0)
{
    VertexShaderOutput output;
    output.Position = mul(position, WorldViewProjection);
    output.Color = color;
    output.TextureCoordinates = texCoord * (1 / pixelSize);
    return output;
}

float4 SpritePixelShader(VertexShaderOutput input) : COLOR
{
    float2 texCoords = input.TextureCoordinates * pixelSize;
    return tex2D(SpriteTextureSampler, texCoords) * input.Color;
}

technique SpriteDrawing
{
    pass P0
    {
        VertexShader = compile vs_2_0 SpriteVertexShader();
        PixelShader = compile ps_2_0 SpritePixelShader();
    }
};
If pixelSize is already one divided by the texture size, you should not divide by it again when computing the texture coordinate. It should be output.TextureCoordinates = texCoord * pixelSize; – Gnietschow
No. 1 / (1 / x) = x, therefore 1 / (1 / textureSize) = textureSize. I have to divide by it again to get the texture size, instead of having to set two variables. Multiplying the texture coordinates by the texture size gives me integer texture coordinates. – Martin
Oh, I overlooked the * pixelSize in the pixel shader. I thought your vertex data contained integer texture coordinates. What exactly is your problem? There is no question in the post, and I don't see the point of converting texCoord to integer and back to float without doing anything in between. – Gnietschow
You ask what the point of converting to integer and then back to float is? You lose the decimal value, and I'm not interested in fractions of pixels, as I said in my question. So instead of getting interpolated texture coordinates like (368.4 * (1 / textureWidth), 175.8 * (1 / textureHeight)), I want them as (368, 175). My problem is that the decimal value is retained even though the type is int2, so I get (368.4, 175.8), which makes no sense. – Martin
So I want to round down, add half a pixel (I excluded this for simplicity), and convert back into texture coordinates between 0 and 1. That way I always sample at the center of each pixel. With point sampling, if you sample too close to the edge of a pixel, it will sample the neighboring pixel. – Martin
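
A minimal HLSL sketch of the snapping step described in that comment, assuming TextureCoordinates is declared as a float2 and the vertex shader passes texCoord through unchanged; the function name SnappedPixelShader is made up for illustration:

float4 SnappedPixelShader(VertexShaderOutput input) : COLOR
{
    // Convert 0-1 coordinates to pixel units, round down to a whole pixel,
    // move to that pixel's center, then convert back to 0-1 before sampling.
    float2 pixel = floor(input.TextureCoordinates / pixelSize);
    float2 texCoords = (pixel + 0.5) * pixelSize;
    return tex2D(SpriteTextureSampler, texCoords) * input.Color;
}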

1 Answer


My understanding is that you need to take a range of 0-1, rescale it to 0-width (or 0-height), and then go back to 0-1 later. A very useful technique for this is called feature scaling:

Feature scaling is a method used to standardize the range of independent variables or features of data. In data processing, it is also known as data normalization and is generally performed during the data preprocessing step.

There are several approaches to this, but I will focus on rescaling. In its original form, this approach takes a range such as 200-300 and scales it to 0-1. The math is simply:

x' = (x - min) / (max - min)

Where x is the original value, x' is the normalized value, and min and max are the bounds of the original range.
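
For example, taking x = 275 from the range 200-300: x' = (275 - 200) / (300 - 200) = 0.75.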


In our case we want to go in the opposite direction and scale from 0-1 back to 200-300, so we have to rework it; and while we're at it, why not make it so we can go in either direction to meet your requirements:

x' = ((x - oldMin) / (oldMax - oldMin)) * (newMax - newMin) + newMin
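
For example, going the other way, x = 0.75 rescaled from 0-1 to 200-300 gives x' = ((0.75 - 0) / (1 - 0)) * (300 - 200) + 200 = 275.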

Translating this into HLSL is a simple task, and I would recommend putting it in a common function for reuse later:

float RescaleInRange(float value, float oldMin, float oldMax, float newMin, float newMax) {
    // Clamp value to the old range, then linearly remap it into the new range.
    if (value < oldMin) value = oldMin;
    else if (value > oldMax) value = oldMax;
    return ((value - oldMin) / (oldMax - oldMin)) * (newMax - newMin) + newMin;
}

I would leave your TextureCoordinates as a float2 to stay consistent with industry practice, where most use float2 for this. Then, in your vertex shader, just assign it the texCoord that was supplied. Later, in your pixel shader, you can use the rescale function above to work on the components of TextureCoordinates individually (called UV in my code below):

float w = textureSize.x;   // texture width in pixels
float h = textureSize.y;   // texture height in pixels
float x = RescaleInRange(UV.x, 0, 1, 0, w);   // 0-1 -> 0-w
float y = RescaleInRange(UV.y, 0, 1, 0, h);   // 0-1 -> 0-h
// Rescale back to 0-1 and sample.
return tex2D(Sampler, float2(RescaleInRange(x, 0, w, 0, 1), RescaleInRange(y, 0, h, 0, 1))) * input.Color;
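
Putting the pieces together, a possible pixel shader could look like the sketch below. It keeps the question's sampler and struct names (with TextureCoordinates declared as a float2 and passed through unchanged, as recommended above), assumes textureSize is a float2 holding the texture's dimensions in pixels (the inverse of the question's pixelSize), and adds the floor plus half-pixel offset from the snapping described in the question's comments:

float4 SpritePixelShader(VertexShaderOutput input) : COLOR
{
    float2 UV = input.TextureCoordinates;   // float2 in the 0-1 range
    // Rescale to pixel units, snap to the center of the containing pixel,
    // then rescale back to 0-1 before sampling.
    float x = RescaleInRange(UV.x, 0, 1, 0, textureSize.x);
    float y = RescaleInRange(UV.y, 0, 1, 0, textureSize.y);
    x = floor(x) + 0.5;
    y = floor(y) + 0.5;
    float2 texCoords = float2(RescaleInRange(x, 0, textureSize.x, 0, 1),
                              RescaleInRange(y, 0, textureSize.y, 0, 1));
    return tex2D(SpriteTextureSampler, texCoords) * input.Color;
}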