Normals are often stored per vertex rather than per face (a face being a triangle). When several triangles share a vertex, you can average the normals of all those faces to get the vertex's normal. Then you simply interpolate those normals when rendering and you get a smooth surface. Some file formats are even designed this way: you cannot store a normal for each triangle, only a normal for each vertex. It is a very common way of storing and interpreting normals, especially in real-time graphics.
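Here is a minimal sketch of that averaging step in C++, assuming an indexed triangle mesh and a simple hand-rolled `Vec3` type (all names are illustrative, not from any particular library):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0 ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// For every triangle, compute its face normal and add it to each of the
// triangle's three vertices; normalizing the sums at the end averages them.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(positions.size());  // zero-initialized
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        Vec3 faceNormal = cross(positions[i1] - positions[i0],
                                positions[i2] - positions[i0]);
        normals[i0] = normals[i0] + faceNormal;
        normals[i1] = normals[i1] + faceNormal;
        normals[i2] = normals[i2] + faceNormal;
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```

Note that summing the unnormalized cross products weights each face by its area (the cross product's magnitude is twice the triangle's area), which is usually a desirable bias: big triangles influence the shared normal more than tiny ones.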
Texture coordinates are per vertex most of the time as well. On a smooth surface you will rarely need the texture to change abruptly, and if you do want a visible seam you can simply paint it into the texture itself while keeping the coordinates continuous. Thus texture coordinates can be stored per vertex perfectly fine.
Moral of the story is that there are very, very few things that actually need to be stored per face. Most of the data you need can be converted to a per-vertex representation. Normals and texture coordinates in particular are usually wanted per vertex instead of per face.
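As a sketch of what that per-vertex representation looks like in practice, all attributes live in one interleaved vertex struct and triangles are just indices into the vertex array (the names here are illustrative, not from any API):

```cpp
#include <vector>

struct Vertex {
    float position[3];
    float normal[3];  // per-vertex normal, e.g. averaged as above
    float uv[2];      // per-vertex texture coordinates
};

struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<unsigned> indices;  // three per triangle; no per-face data needed
};
```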
Also, on a side note: if you don't want smooth surfaces but want flat ones (so you can see the triangles), you can actually pick one vertex of each face and use only its extra data for the whole surface. So, for example, the first vertex of each triangle stores the normal, texture coordinates and so on for that triangle. This leaves you with some vertices whose extra data is never used, but that's a lot better than duplicating a lot more vertices. In OpenGL this technique is achieved through flat interpolation in the shaders; in Direct3D the equivalent is the `nointerpolation` modifier in HLSL.
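Here is a minimal sketch of the OpenGL side of that trick, with the GLSL sources held in C++ string literals. The `flat` qualifier makes the rasterizer take the attribute from the provoking vertex only, so each triangle gets that one vertex's normal unmodified; uniform and variable names (`uMvp`, `vNormal`, etc.) are my own placeholders:

```cpp
const char* vertexShaderSrc = R"(
    #version 330 core
    layout(location = 0) in vec3 inPosition;
    layout(location = 1) in vec3 inNormal;
    uniform mat4 uMvp;
    flat out vec3 vNormal;  // 'flat': no interpolation across the triangle
    void main() {
        vNormal = inNormal;
        gl_Position = uMvp * vec4(inPosition, 1.0);
    }
)";

const char* fragmentShaderSrc = R"(
    #version 330 core
    flat in vec3 vNormal;   // must be declared flat here as well
    out vec4 outColor;
    void main() {
        // Trivial diffuse term, just to visualize the faceted normals.
        float d = max(dot(normalize(vNormal), normalize(vec3(1.0))), 0.0);
        outColor = vec4(vec3(d), 1.0);
    }
)";

// By default OpenGL takes the LAST vertex of each triangle as the provoking
// vertex; to use the first vertex, as described above, call:
//   glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);
```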