
I have an .obj file that contains only v (vertex) and f (face) attributes.

My task is to texture it by "enclosing" it in a cuboid (defined by the minimum and maximum vertex coordinates on each axis) and calculating the cuboid's centre. Subtracting the centre coordinates from the teapot's vertex coordinates gives vectors that start at the centre of the teapot and end on its surface; I then find where these vectors (rays) intersect the surface of the outer cuboid. Each face of the cuboid is supposed to simulate an image. After that, I calculate the texture coordinates for every vertex simply by taking the 2D parameters from the 3D intersection points, then normalising the results so that they lie between 0 and 1 (in both dimensions).
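The projection described above can be sketched in NumPy as follows. This is a minimal sketch under stated assumptions: `cube_project_uv` is a hypothetical helper name, the input is an (N, 3) vertex array, no vertex sits exactly at the centre, and the cuboid has nonzero extent on every axis.

```python
import numpy as np

def cube_project_uv(verts):
    """Project vertices onto the enclosing axis-aligned cuboid and
    derive 2D texture coordinates in [0, 1] (hypothetical helper)."""
    verts = np.asarray(verts, dtype=float)
    vmin, vmax = verts.min(axis=0), verts.max(axis=0)
    center = (vmin + vmax) / 2.0
    half = (vmax - vmin) / 2.0              # assumed nonzero on every axis

    d = verts - center                      # rays from the centre to each vertex
    with np.errstate(divide='ignore'):
        # parameter t at which each ray reaches each pair of box planes
        t = np.where(d != 0.0, half / np.abs(d), np.inf)
    t_hit = t.min(axis=1)                   # first face each ray reaches
    hits = center + d * t_hit[:, None]      # intersection points on the cuboid

    # the axis along which the ray left the box determines the face
    face_axis = t.argmin(axis=1)
    uv = np.empty((len(verts), 2))
    for i, axis in enumerate(face_axis):
        others = [a for a in range(3) if a != axis]
        # take the two remaining coordinates and normalise them to [0, 1]
        uv[i] = (hits[i, others] - vmin[others]) / (2.0 * half[others])
    return uv
```

A vertex lying on the +x side of the box, for example, ends up with its (y, z) coordinates rescaled into the unit square.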

So I open the .obj file, perform all the calculations described above to obtain the vt attributes, and then use a function that computes normals from a list of vertices and a list of indexed vertex triples that make up the triangles (the face attributes).

It looks like that (it consists of two functions in fact, in two different files):

1)

def getNormals4Triangles(self):
    # integer division, so the shape is valid in Python 3
    tind = np.resize(self.indx, (len(self.indx) // 3, 3))
    return VOB.getNormals(self.arrs[0], tind)

Here self.arrs[0] is the list of v attributes taken from the .obj file, and tind is a list of three-element lists of vertex indices that make up the faces.

2)

@staticmethod
def getNormals(verts,tinds):
    print("Shape verts: ",verts.shape)
    print("Shape tinds: ",tinds.shape)

    if len(verts[0])==3:
        xyz = verts
    elif len(verts[0])==4:
        xyz = verts[:,:3]/np.outer(verts[:,3],[1,1,1])  # divide by w, not by 1/w
    else:
        raise Exception('No cross product defined')
    txyz = xyz[tinds,:]
    txy = txyz[:,2,:]-txyz[:,0,:]
    txz = txyz[:,1,:]-txyz[:,0,:]
    nrmls = np.cross(txy,txz)
    len_nrmls = norm(nrmls,axis=1)
    return nrmls/np.outer(len_nrmls,[1,1,1])

Here "norm" is numpy.linalg.norm.
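Note that getNormals returns one normal per triangle, while a per-vertex attribute list needs one normal per vertex. If that conversion is what is missing, the triangle normals can be averaged onto the vertices they touch; a sketch under that assumption (`vertex_normals` is a hypothetical helper, not part of the code above):

```python
import numpy as np
from numpy.linalg import norm

def vertex_normals(verts, tinds, tri_normals):
    """Average per-triangle normals onto the vertices they touch
    (hypothetical helper; tri_normals is a (T, 3) array)."""
    acc = np.zeros((len(verts), 3))
    # scatter-add each triangle normal to its three vertices
    np.add.at(acc, tinds.ravel(), np.repeat(tri_normals, 3, axis=0))
    lengths = norm(acc, axis=1)
    lengths[lengths == 0.0] = 1.0   # leave isolated vertices at zero
    return acc / lengths[:, None]
```

For smoother results the contributions could be weighted by triangle area, but the unweighted average is usually enough to check whether the attribute layout is the problem.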

After that I create a VOB object:

self.vob = VOB(arrs = [vert, point, normals],indx=self.obj.indx)

In brief, I send the data to the GPU via the VOB object.

vert - a list of vertices taken from the .obj file
point - a list of texture coordinates calculated as described above
normals - a list of normal vectors calculated with the functions described above
self.obj.indx - a list built from the f attributes of the .obj file

The texture image I use: (image: texture)

When I try to display it with classic fixed-function lighting:

gl.glShadeModel( gl.GL_SMOOTH )
gl.glEnable( gl.GL_LIGHTING )
gl.glEnable( gl.GL_LIGHT0 )
gl.glLightModeli( gl.GL_LIGHT_MODEL_TWO_SIDE, 0 )
gl.glLightfv( gl.GL_LIGHT0, gl.GL_POSITION, [4, 4, 4, 1] )
lA = 0.8
gl.glLightfv( gl.GL_LIGHT0, gl.GL_AMBIENT, [lA, lA, lA, 1] )
lD = 1
gl.glLightfv( gl.GL_LIGHT0, gl.GL_DIFFUSE, [lD, lD, lD, 1] )
lS = 1
gl.glLightfv( gl.GL_LIGHT0, gl.GL_SPECULAR, [lS, lS, lS, 1] )
gl.glMaterialfv( gl.GL_FRONT_AND_BACK, gl.GL_AMBIENT, [0.9, 0.8, 0.7, 1] )
gl.glMaterialfv( gl.GL_FRONT_AND_BACK, gl.GL_DIFFUSE, [0.7, 0.8, 0.9, 1] )
gl.glMaterialfv( gl.GL_FRONT_AND_BACK, gl.GL_SPECULAR, [0.9, 0.9, 0.9, 1] )
gl.glMaterialf( gl.GL_FRONT_AND_BACK, gl.GL_SHININESS, 100 )

I can see something like this: (image: normal lighting)

But when I use the Blinn-Phong shading model (I will avoid pasting it all, in the hope that someone has met a similar problem), it looks like this: (image: blinn and phong)

Why can't I see a properly textured teapot in either case? Do I have to prepare new f attributes to send to the VOB after computing the normals and vt coordinates?


1 Answer


Probably the winding direction of your face vertices is inconsistent, so some of the normals point inwards and mess up your illumination calculation. Ideally the data in your model file has consistently oriented faces, but you can also reorient the faces yourself: start from a seed face and propagate its orientation to its neighbours. This issue has been covered here: How to unify normal orientation

A quick fix is to implement a two-sided illumination mode: you negate the result of the illumination dot product when the primitive is back-facing, e.g. `… * (!gl_FrontFacing ? -1. : 1.)`, before clamping to positive values.
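The idea of that sign flip, expressed outside the shader as a NumPy sketch (the function name and signature are illustrative, not part of any API):

```python
import numpy as np

def two_sided_diffuse(n, l, front_facing):
    """Lambert term with the two-sided fix: flip the sign for
    back-facing primitives before clamping, so triangles whose
    normals point inwards are still lit correctly."""
    d = float(np.dot(n, l))
    if not front_facing:
        d = -d          # the flip that !gl_FrontFacing performs
    return max(d, 0.0)  # clamp to positive values
```

Without the flip, a back-facing triangle with an inward normal would get a negative dot product, clamp to zero, and render black.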

As for the texture coordinate generation: technically you could implement a cubemap-style texture coordinate mapping that uses the one picture for all 6 faces of the cube, using the vertex positions as texture coordinates. Or you simply load the same image into the 6 sub-images of a cubemap texture and use the position attribute directly as the coordinate for the texture lookup.
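The face selection behind cube mapping can be sketched like this: the component with the largest magnitude picks the face, and the remaining two components divided by it give coordinates in [0, 1]. This is a simplified sketch that ignores the per-face axis mirroring of the actual OpenGL cube-map convention; `cubemap_face_uv` is a hypothetical helper.

```python
def cubemap_face_uv(d):
    """Map a direction vector to (face, u, v). Simplified: the real
    OpenGL cube-map tables also mirror u/v per face."""
    comps = {'x': d[0], 'y': d[1], 'z': d[2]}
    axis = max(comps, key=lambda k: abs(comps[k]))   # major axis
    m = comps[axis]
    face = ('+' if m > 0 else '-') + axis
    others = [v for k, v in comps.items() if k != axis]
    # remaining components over |major| lie in [-1, 1]; remap to [0, 1]
    u = others[0] / abs(m) * 0.5 + 0.5
    v = others[1] / abs(m) * 0.5 + 0.5
    return face, u, v
```

With a real cubemap texture none of this is needed in user code: the hardware does the face selection when you sample with the 3D position.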