
I am writing a 3D app using Metal. For rendering in 3D I need to control each pixel. On normal screens this works fine with the [[position]] variable passed to the fragment shader. But on a Retina display, where there is a scaling factor, each screen coordinate represents a 2x2 (or 3x3) block of pixels.

Let me elaborate using the iPhone 6 screen as an example: the screen coordinates are 375x667 and the pixel coordinates are 750x1334. Here is my (test) shader code:

#include <metal_stdlib>
using namespace metal;

// vtxOut is the vertex-stage output struct (defined elsewhere in my shader file)
fragment half4 myFragShader(vtxOut in [[stage_in]],
                            float4 pcoord [[position]])
{
    if (pcoord.x > 187.5)                     // 187.5 = 375/2
        return half4(1.0, 0.0, 0.0, 1.0);     // right half: red
    else
        return half4(0.0, 0.0, 1.0, 1.0);     // left half: blue
}

With the above test code, I get (exactly) the left half of the screen in blue and the right half in red. This means pcoord is arriving in the 375x667 coordinate system, not in 750x1334.
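
As a check on where the mismatch might come from, this is how the drawable's actual resolution can be inspected from the app side (a minimal sketch, assuming an MTKView-backed setup; mtkView is a hypothetical property name):

import MetalKit

// bounds is in points; drawableSize is in pixels.
// mtkView is a hypothetical MTKView property on the view controller.
let pointSize = mtkView.bounds.size         // e.g. 375 x 667
let pixelSize = mtkView.drawableSize        // e.g. 750 x 1334
let scale     = mtkView.contentScaleFactor  // e.g. 2.0
print("points: \(pointSize), pixels: \(pixelSize), scale: \(scale)")

If drawableSize reports only 375x667, the render target itself is point-sized, which would make pcoord arrive in that range.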

Questions:

  1. Will the fragment shader be called for every pixel coordinate, or only for every screen coordinate?

  2. If it gets called for every pixel coordinate, how do I access each pixel inside the fragment shader?

I tried the same with pcoord.y (in the code above), with a similar result.


2 Answers


After doing a little digging, your answer to question 2 seems correct, i.e. use in.position.xy.

For question 1, I discovered that the Metal fragment shader works in native pixel coordinates, i.e. the pixel dimensions of the physical hardware screen.

My problem concerned clipping (discarding) pixels in the fragment shader, and I was getting unexpected results because I was clipping in screen (point) coordinates rather than native pixel coordinates. After this realisation everything worked fine.

So it looks like iOS uses its own screen coordinates (called points, I believe), which get rescaled to the physical hardware later on. UI input also works in screen coordinates, whereas Metal fragments work in native (hardware) pixel coordinates.

I discovered I could find the relevant information (in Swift 5) using the following:

import UIKit

// Logical (point-based) screen size and scale
let systemWidth = UIScreen.main.bounds.size.width
let systemHeight = UIScreen.main.bounds.size.height
let systemScale = UIScreen.main.scale

// Native (pixel-based) screen size and scale, as seen by Metal
let metalWidth = UIScreen.main.nativeBounds.size.width
let metalHeight = UIScreen.main.nativeBounds.size.height
let metalScale = UIScreen.main.nativeScale

For my system, this gave:

System Screen size = 768.0 x 1024.0
      system scale = 2.0
 Metal screen size = 1536.0 x 2048.0
       metal scale = 2.0

So presumably, to go from system (point) dimensions to hardware (pixel) dimensions we multiply by systemScale, whereas to go from Metal (pixel) dimensions back to system dimensions we divide by metalScale.
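
For example, converting a UIKit touch location (points) into Metal fragment coordinates (pixels) and back would look like this (a sketch reusing the scale values from the code above; the touch point itself is hypothetical):

// Points -> pixels: multiply by the scale factor.
let touchInPoints = CGPoint(x: 100.0, y: 200.0)   // hypothetical touch location
let touchInPixels = CGPoint(x: touchInPoints.x * systemScale,
                            y: touchInPoints.y * systemScale)

// Pixels -> points: divide by the scale factor.
let backToPoints = CGPoint(x: touchInPixels.x / metalScale,
                           y: touchInPixels.y / metalScale)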


This is my own answer:

Though I could not find out why some simple projects do not give exact pixel coordinates in the shader code, a few other projects I created did give me the coordinates in pixel coordinates (including the screen scaling factor), i.e. in my example above I did get 750x1334 coordinates, not 375x667.

So, to answer the two questions I posted:

  1. The fragment shader will be called for every pixel coordinate, and
  2. in.position.xy gives the pixel coordinates inside the shader code.

I shall update this once I find out what was going wrong in the projects that give the coordinates in screen coordinates.
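
For completeness, here is a version of the test shader from my question that avoids hard-coding the point-based threshold by taking the drawable width (in pixels) from a buffer (a sketch; the buffer index, the vtxOut struct shown, and drawableWidth are assumptions, not part of the original code):

#include <metal_stdlib>
using namespace metal;

struct vtxOut {
    float4 color;
};

fragment half4 myFragShaderPx(vtxOut in [[stage_in]],
                              float4 pcoord [[position]],
                              constant float &drawableWidth [[buffer(0)]])
{
    // pcoord.xy is in pixel coordinates (e.g. 0..750 across an iPhone 6),
    // so the midpoint test must use half the drawable width in pixels.
    if (pcoord.x > drawableWidth * 0.5)
        return half4(1.0, 0.0, 0.0, 1.0);   // right half: red
    else
        return half4(0.0, 0.0, 1.0, 1.0);   // left half: blue
}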