
I know there are many similar questions for this issue, such as this one, but I can't seem to figure out what is going wrong in my program.

I am attempting to create a unit sphere using the naive longitude/latitude method, then I attempt to wrap a texture around the sphere using UV coordinates.

I am seeing the classic vertical seam issue, but I'm also seeing some strangeness at both poles.

North Pole... [screenshot]

South Pole... [screenshot]

Seam... [screenshot]

The images are from a sphere with 180 stacks and 360 slices.

I create it as follows.

First, here are a couple of convenience structures I'm using...

struct Point {
    float x;
    float y;
    float z;
    float u;
    float v;
};

struct Quad {
    Point lower_left;  // Lower left corner of quad
    Point lower_right; // Lower right corner of quad
    Point upper_left;  // Upper left corner of quad
    Point upper_right; // Upper right corner of quad
};

I first specify a sphere which is '_stacks' high and '_slices' wide.

float* Sphere::generate_glTriangle_array(int& num_elements) const
{
    int elements_per_point  = 5; //xyzuv
    int points_per_triangle = 3;
    int triangles_per_mesh = _stacks * _slices * 2; // 2 triangles make a quad
    num_elements = triangles_per_mesh * points_per_triangle * elements_per_point;

    float *buff = new float[num_elements];
    int i = 0;

    Quad q;

    for (int stack=0; stack<_stacks; ++stack)
    {
        for (int slice=0; slice<_slices; ++slice)
        {
            q = generate_sphere_quad(stack, slice);
            load_quad_into_array(q, buff, i);
        }
    }

    return buff;
}

Quad Sphere::generate_sphere_quad(int stack, int slice) const
{
    Quad q;

    std::cout << "Stack " << stack << ", Slice: " << slice << std::endl;

    std::cout << "   Lower left...";
    q.lower_left = generate_sphere_coord(stack, slice);
    std::cout << "   Lower right...";
    q.lower_right = generate_sphere_coord(stack, slice+1);
    std::cout << "   Upper left...";
    q.upper_left = generate_sphere_coord(stack+1, slice);
    std::cout << "   Upper right...";
    q.upper_right = generate_sphere_coord(stack+1, slice+1);
    std::cout << std::endl;

    return q;
}

Point Sphere::generate_sphere_coord(int stack, int slice) const
{
    Point p;

    p.y = 2.0 * stack / _stacks - 1.0;

    float r = sqrt(1 - p.y * p.y);
    float angle = 2.0 * M_PI * slice / _slices;

    p.x = r * sin(angle);
    p.z = r * cos(angle);

    p.u = (0.5 + ( (atan2(p.z, p.x)) / (2 * M_PI) ));
    p.v = (0.5 + ( (asin(p.y)) / M_PI ));

    std::cout << " Point: (x: " << p.x << ", y: " << p.y << ", z: " << p.z << ") [u: " << p.u << ", v: " << p.v << "]" << std::endl;

    return p;
}

I then load my array, specifying vertices of two CCW triangles for each Quad...

void Sphere::load_quad_into_array(const Quad& q, float* buff, int& buff_idx, bool counter_clockwise=true)
{
    if (counter_clockwise)
    {
        // First triangle
        load_point_into_array(q.lower_left, buff, buff_idx);
        load_point_into_array(q.upper_right, buff, buff_idx);
        load_point_into_array(q.upper_left, buff, buff_idx);

        // Second triangle
        load_point_into_array(q.lower_left, buff, buff_idx);
        load_point_into_array(q.lower_right, buff, buff_idx);
        load_point_into_array(q.upper_right, buff, buff_idx);
    }
    else
    {
        // First triangle
        load_point_into_array(q.lower_left, buff, buff_idx);
        load_point_into_array(q.upper_left, buff, buff_idx);
        load_point_into_array(q.upper_right, buff, buff_idx);

        // Second triangle
        load_point_into_array(q.lower_left, buff, buff_idx);
        load_point_into_array(q.upper_right, buff, buff_idx);
        load_point_into_array(q.lower_right, buff, buff_idx);
    }
}

void Sphere::load_point_into_array(const Point& p, float* buff, int& buff_idx)
{
    buff[buff_idx++] = p.x;
    buff[buff_idx++] = p.y;
    buff[buff_idx++] = p.z;
    buff[buff_idx++] = p.u;
    buff[buff_idx++] = p.v;
}

My vertex and fragment shaders are simple...

// Vertex shader
#version 450 core

in vec3 vert;
in vec2 texcoord;

uniform mat4 matrix;

out FS_INPUTS {
   vec2 i_texcoord;
} tex_data;

void main(void) {
   tex_data.i_texcoord = texcoord;
   gl_Position = matrix * vec4(vert, 1.0);
}

// Fragment shader
#version 450 core

in FS_INPUTS {
   vec2 i_texcoord;
};

layout (binding=1) uniform sampler2D tex_id;

out vec4 color;

void main(void) {
   color = texture(tex_id, i_texcoord);
}

My draw command is:

glDrawArrays(GL_TRIANGLES, 0, num_elements/5);

Thanks!

Take a look at this: Applying map of the earth texture a Sphere. Too lazy to analyze your code, but possible reasons are: using GL_CLAMP instead of GL_CLAMP_TO_EDGE, and/or screwed-up coordinates somewhere. – Spektre
Thanks, but I am using GL_CLAMP_TO_EDGE. I keep checking my printed XYZ and UV coordinates, but they seem right as far as I can tell. – vincent

1 Answer


First of all, this code does some funny extra work:

Point Sphere::generate_sphere_coord(int stack, int slice) const
{
    Point p;

    p.y = 2.0 * stack / _stacks - 1.0;

    float r = sqrt(1 - p.y * p.y);
    float angle = 2.0 * M_PI * slice / _slices;

    p.x = r * sin(angle);
    p.z = r * cos(angle);

    p.u = (0.5 + ( (atan2(p.z, p.x)) / (2 * M_PI) ));
    p.v = (0.5 + ( (asin(p.y)) / M_PI ));

    return p;
}

Calling cos and sin just to call atan2 on the result is extra work in the best case, and in the worst case you might land on the wrong branch cuts. You can calculate p.u directly from slice and _slices instead.
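For illustration, here is a minimal sketch of that change applied to the question's generate_sphere_coord, with everything else left as it was:

Point Sphere::generate_sphere_coord(int stack, int slice) const
{
    Point p;

    p.y = 2.0 * stack / _stacks - 1.0;

    float r = sqrt(1 - p.y * p.y);
    float angle = 2.0 * M_PI * slice / _slices;

    p.x = r * sin(angle);
    p.z = r * cos(angle);

    // u comes straight from the slice index; no sin/cos/atan2 round trip.
    p.u = (float)slice / _slices;
    p.v = 0.5 + asin(p.y) / M_PI;

    return p;
}

As a side effect, the right edge of the last slice now gets u == 1.0 instead of wrapping back to 0.0, which matters for the seam discussed next.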

The Seam

You are going to have a seam in your sphere. This is normal, most models will have a seam (or many seams) in their UV maps somewhere. The problem is that the UV coordinates should still increase linearly next to the seam. For example, think about a loop of vertices that go around the globe's equator. At some point, the UV coordinates will wrap around, something like this:

0.8, 0.9, 0.0, 0.1, 0.2

The problem is that you'll get four quads, but one of them will be wrong:

quad 1: u = 0.8 ... 0.9
quad 2: u = 0.9 ... 0.0 <<----
quad 3: u = 0.0 ... 0.1
quad 4: u = 0.1 ... 0.2

Look at how messed up quad 2 is. You will instead have to generate the following data:

quad 1: u = 0.8 ... 0.9
quad 2: u = 0.9 ... 1.0
quad 3: u = 0.0 ... 0.1
quad 4: u = 0.1 ... 0.2

A Fixed Version

Here is a sketch of a fixed version.

#include <cmath>
#include <vector>

namespace {

const float pi = std::atan(1.0f) * 4.0f;

// Generate point from the u, v coordinates in (0..1, 0..1)
Point sphere_point(float u, float v) {
    float r = std::sin(pi * v);
    return Point{
        r * std::cos(2.0f * pi * u),
        r * std::sin(2.0f * pi * u),
        std::cos(pi * v),
        u,
        v
    };
}

}

// Create array of points with quads that make a unit sphere.
std::vector<Point> sphere(int hSize, int vSize) {
    std::vector<Point> pt;
    for (int i = 0; i < hSize; i++) {
        for (int j = 0; j < vSize; j++) {
            float u0 = (float)i / (float)hSize;
            float u1 = (float)(i + 1) / (float)hSize;
            float v0 = (float)j / (float)vSize;
            float v1 = (float)(j + 1) / (float)vSize;
            // Create quad as two triangles.
            pt.push_back(sphere_point(u0, v0));
            pt.push_back(sphere_point(u1, v0));
            pt.push_back(sphere_point(u0, v1));
            pt.push_back(sphere_point(u0, v1));
            pt.push_back(sphere_point(u1, v0));
            pt.push_back(sphere_point(u1, v1));
        }
    }
    return pt;
}

Note that there is some easy optimization you could do, and also note that due to rounding errors, the seam might not line up quite correctly. These are left as an exercise for the reader.
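For reference, here is a rough usage sketch showing how the resulting vertex data could be uploaded and drawn with your existing shaders. The buffer handle and attribute locations are assumptions, not taken from your code:

// Assumed usage sketch: upload the interleaved x,y,z,u,v data and draw it.
// (A VAO is assumed to be bound already.)
std::vector<Point> pts = sphere(360, 180); // 360 slices, 180 stacks

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, pts.size() * sizeof(Point), pts.data(), GL_STATIC_DRAW);

// Point is five tightly packed floats, so the stride is sizeof(Point).
// Attribute locations 0 ('vert') and 1 ('texcoord') are assumed here.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Point), (void*)0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Point), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glDrawArrays(GL_TRIANGLES, 0, (GLsizei)pts.size());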

More Problems

Even with the fixed version, you will likely see artifacts at the poles. This is because the screen space texture coordinate derivatives have a singularity at the poles.

The recommended way to fix this is to use a cube map texture instead. This will also greatly simplify the sphere geometry data, since you can completely eliminate the UV coordinates and you won't have a seam.
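For example, the fragment-shader lookup with a cube map could look roughly like this (a sketch, assuming the vertex shader forwards the object-space position instead of a UV pair):

#version 450 core

in FS_INPUTS {
   vec3 i_obj_pos;   // object-space position forwarded by the vertex shader (assumed)
};

layout (binding=1) uniform samplerCube cube_tex;

out vec4 color;

void main(void) {
   // For a unit sphere centered at the origin, the position itself is the
   // lookup direction: no UVs, no seam, no pole singularity.
   color = texture(cube_tex, normalize(i_obj_pos));
}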

As a kludge, you can enable anisotropic filtering instead.
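If you try the anisotropic filtering route, it is a single texture parameter, assuming the widely supported GL_EXT_texture_filter_anisotropic extension (promoted to core in OpenGL 4.6):

// Enable the maximum supported anisotropy on the currently bound 2D texture.
GLfloat max_aniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);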