As you indicate in your question, the primary issues here are execution time and memory. Rendering objects with skinning (skeletons) costs more of both in several ways:
- Extra vertex data for the bone weights and indices. Each of these streams is typically 4 bytes per vertex, so depending on how many vertices your unskinned meshes have, this can be a significant amount of extra data to allocate and to stream to the GPU.
- Extra uniform data. Even if you only use one skinning matrix in this case, and set it to identity, your shader objects still reserve storage for the maximum number of skinning matrices you support. In addition, the more uniform storage a shader (potentially) uses, the less parallelism the GPU can achieve while executing it.
- Extra vertex shader instructions: normalizing the bone weights, blending the skinning matrices, and multiplying each vertex by the resulting matrix. The sketch after this list illustrates both the extra attributes and this per-vertex math.
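To make the cost concrete, here is a small CPU-side sketch (not from the original answer; `StaticVertex`, `SkinnedVertex`, and `skinPosition` are hypothetical names, and GLM is assumed for the math types) showing the extra per-vertex attributes and the weight-normalize/blend/transform work the skinned vertex shader has to perform:

```cpp
#include <cstdint>
#include <glm/glm.hpp>

struct StaticVertex {            // unskinned: 32 bytes with these types
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 uv;
};

struct SkinnedVertex {           // skinned: 40 bytes (+4 bone indices, +4 weights)
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 uv;
    std::uint8_t boneIndices[4]; // 4 bytes of indices
    std::uint8_t boneWeights[4]; // 4 bytes of weights, normalized 0..255
};

// What the skinned vertex shader does per vertex, expressed on the CPU:
// normalize the weights, blend up to four bone matrices, then transform.
glm::vec3 skinPosition(const SkinnedVertex& v, const glm::mat4* bones)
{
    float w[4];
    float sum = 0.0f;
    for (int i = 0; i < 4; ++i) { w[i] = v.boneWeights[i] / 255.0f; sum += w[i]; }
    if (sum > 0.0f) {
        for (int i = 0; i < 4; ++i) { w[i] /= sum; }          // renormalize weights
    }

    glm::mat4 skin(0.0f);
    for (int i = 0; i < 4; ++i) {
        skin += w[i] * bones[v.boneIndices[i]];               // blend the bone matrices
    }
    return glm::vec3(skin * glm::vec4(v.position, 1.0f));     // extra matrix multiply
}
```

None of this work exists in the non-skinned path, which is why the difference shows up in both vertex memory and shader instruction counts.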
Because of the above considerations, applications generally use separate vertex shaders for skinned and non-skinned objects. Frequently, applications create several skinning shaders for different 'tiers' of skinning quality, for example different numbers of possible bones per vertex or different totals of skinning matrices; a rough sketch of such tiering follows below. However, the only way to make these sorts of decisions is to profile your application.
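As a rough illustration of such tiering, a renderer might bucket meshes by influence count and pick a correspondingly cheap shader variant (often the same shader source compiled with different `#define`s). The enum, thresholds, and function below are assumptions for the sketch, not something prescribed by the answer:

```cpp
// Hypothetical shader tiers: rigid meshes skip skinning entirely, lightly
// skinned meshes use a 2-bone variant, everything else uses the full path.
enum class SkinTier { None, TwoBonesPerVertex, FourBonesPerVertex };

SkinTier chooseSkinTier(int influencesPerVertex)
{
    if (influencesPerVertex <= 0) return SkinTier::None;
    if (influencesPerVertex <= 2) return SkinTier::TwoBonesPerVertex;
    return SkinTier::FourBonesPerVertex;
}
```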
If performance and memory aren't issues for you, then you could render non-skinned objects as skinned ones with your existing shader, although, because this requires some work to set up properly (see the sketch below), it's probably just as easy to create a new non-skinned shader.
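If you do go that route, the setup usually amounts to padding the rigid mesh's vertices with a fixed bone index and full weight and uploading a single identity matrix, so the skinning math degenerates to a plain transform. A minimal sketch with hypothetical names:

```cpp
#include <cstdint>
#include <glm/glm.hpp>

// The single skinning matrix uploaded for rigid meshes: identity.
const glm::mat4 kIdentityBone(1.0f);

// Per-vertex skinning attributes appended to a rigid mesh's vertex data:
// bone 0 with full weight, the remaining influences unused.
struct RigidSkinAttributes {
    std::uint8_t boneIndices[4] = { 0, 0, 0, 0 };
    std::uint8_t boneWeights[4] = { 255, 0, 0, 0 };
};
```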