Why not just manually multiply the world-view-projection matrix with the vertex positions? That gives you the vertices in "normalized device coordinates", where (-1, -1) is the bottom-left of the screen and (+1, +1) is the top-right.
The only catch is that if the projection is perspective, you first have to divide your vertices by their 4th component, i.e. if the transformed vertex is (x, y, z, w) you divide everything by w.
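As a rough sketch of the whole trip in C++ (the helper types and names here are made up purely for illustration, and it uses the same row-vector-times-row-major-matrix convention as the rest of this answer):

#include <cstdio>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };      // row-major

// v' = v * M (row vector times row-major matrix).
Vec4 transform(const Vec4& v, const Mat4& M)
{
    return {
        v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + v.w * M.m[3][0],
        v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + v.w * M.m[3][1],
        v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + v.w * M.m[3][2],
        v.x * M.m[0][3] + v.y * M.m[1][3] + v.z * M.m[2][3] + v.w * M.m[3][3],
    };
}

int main()
{
    // Identity stands in for your real combined world-view-projection matrix.
    Mat4 worldViewProjection = {{
        { 1, 0, 0, 0 },
        { 0, 1, 0, 0 },
        { 0, 0, 1, 0 },
        { 0, 0, 0, 1 },
    }};
    Vec4 position = { 0.25f, -0.5f, -5.0f, 1.0f };

    Vec4 clip = transform(position, worldViewProjection);

    // Perspective divide: clip space -> normalized device coordinates (-1..+1).
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;

    // NDC -> pixels, here for an 800x600 viewport with the origin at the top-left.
    float px = (ndcX * 0.5f + 0.5f) * 800.0f;
    float py = (1.0f - (ndcY * 0.5f + 0.5f)) * 600.0f;

    std::printf("on screen at (%f, %f)\n", px, py);
}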
Take for example a position vector
v = {x, 0, -z, 1}
Given a vertical viewing angle 'a' and an aspect ratio 'r', the x coordinate of that vertex in normalized device coordinates (range -1 to +1), call it x', is this (the formula is taken straight out of a graphics programming book):
x' = x * cot(a/2) / ( r * z )
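As a quick sanity check with made-up numbers: take a 90-degree vertical field of view, so cot(a/2) = cot(45°) = 1, and a square viewport (r = 1); the formula collapses to x' = x / z, so a point at x = 1 sitting 5 units in front of the camera ends up at x' = 0.2 in normalized device coordinates.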
So the perspective projection matrix for those parameters looks as follows (shown in row-major format, where z1 and z2 depend on the near and far clip planes):
cot(a/2) / r   0            0     0
0              cot(a/2)     0     0
0              0            z1   -1
0              0            z2    0
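Those z1 and z2 entries depend on which depth convention you use. Here is a minimal sketch of building this matrix in C++, assuming one common right-handed D3D-style convention (near plane n, far plane f, depth mapped to the 0..1 range); the x and w math below is the same either way:

#include <cmath>

// Row-major perspective projection in the layout shown above.
// a = vertical field of view in radians, r = aspect ratio (width / height).
void makePerspective(float a, float r, float n, float f, float out[4][4])
{
    const float c = 1.0f / std::tan(a * 0.5f);   // cot(a/2)

    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            out[i][j] = 0.0f;

    out[0][0] = c / r;           // cot(a/2) / r
    out[1][1] = c;               // cot(a/2)
    out[2][2] = f / (n - f);     // "z1"
    out[2][3] = -1.0f;
    out[3][2] = n * f / (n - f); // "z2"
}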
When you multiply your vector by the projection matrix (assuming the world and view matrices are identity in this example), you get the following (I'm only computing the new "x" and "w" values because only they matter in this example):
v' = { x * cot(a/2) / r, newY, newZ, z }
So finally when we divide the new vector by its fourth component we get
v' = { x * cot(a/2) / (r*z), newY/z, newZ/z, 1 }
So v'.x is now the screen-space coordinate of the original v.x, matching the formula above. This is exactly what the graphics pipeline does to figure out where your vertex is on screen.
I've used this basic method before to figure out the size of geometry on screen. The nice part about it is that the math works regardless of whether the projection is perspective or orthographic, as long as you divide by the 4th component of the vector (for orthographic projections, the 4th component will be 1).
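Here is a minimal sketch of that idea (C++ again, names made up; the inputs are positions that have already been multiplied by your world-view-projection matrix):

#include <cmath>

struct Vec4 { float x, y, z, w; };   // a clip-space position

// On-screen distance in pixels between two projected points for a given viewport size
// (origin at the top-left). The divide by w handles perspective; for an orthographic
// projection w is already 1, so the same code works unchanged.
float screenSpaceDistance(const Vec4& a, const Vec4& b, float screenW, float screenH)
{
    float ax = (a.x / a.w * 0.5f + 0.5f) * screenW;
    float ay = (1.0f - (a.y / a.w * 0.5f + 0.5f)) * screenH;
    float bx = (b.x / b.w * 0.5f + 0.5f) * screenW;
    float by = (1.0f - (b.y / b.w * 0.5f + 0.5f)) * screenH;
    return std::sqrt((bx - ax) * (bx - ax) + (by - ay) * (by - ay));
}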