I am writing a constraint-based particle system based on this paper. The Jacobian of the constraints C scales with the number of particles in the system and the number of constraints. Because each constraint generally depends on only a few particles, the matrix will be extremely sparse. I thought it might be beneficial to use a sparse matrix in Eigen to solve the system. Per the Eigen reference material, there seem to be several different methods for solving these sparse matrix equations. My questions are:
- Would the size of these matrices demand use of a sparse matrix in the first place? I need to store the Jacobian and its time derivative. Each is an M×3N matrix, where M is the number of constraints and N is the number of particles. The user of my particle system can presumably add as many particles as they wish, within reason. (See the first sketch after this list for how I was planning to assemble the Jacobian.)
- Is the argument for sparse matrix representations over dense ones primarily about performance or about memory consumption? Can a sparse system be solved faster than a dense one, and what does that depend on?
- I don't know much about these solver implementations, and I haven't studied the underlying algorithms in the context of sparse matrix equations. What is their performance like, and which one should I choose? I remember learning about some of them in a linear algebra class several years ago, and I believe many of them were O(n^3), which doesn't seem like it would scale well for a system like the one described in this paper. (The second sketch below shows the kind of solve I had in mind.)
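For context, here is roughly how I was planning to assemble the Jacobian from triplets. This is only a sketch with placeholder names (the constraint loop is elided, and `buildJacobian` is my own naming, not anything from the paper):

```cpp
#include <Eigen/Sparse>
#include <vector>

// Sketch: J is M x 3N, but each constraint row only has nonzeros in the columns
// of the particles that constraint touches, so I'd assemble it from triplets
// rather than filling a dense MatrixXd.
using SpMat = Eigen::SparseMatrix<double>;

SpMat buildJacobian(int numConstraints, int numParticles)
{
    std::vector<Eigen::Triplet<double>> entries;

    // For each constraint i and each particle j it involves, push the three
    // partial derivatives dC_i/dx_j into columns 3*j .. 3*j+2, e.g.:
    //   entries.emplace_back(i, 3*j + 0, dCi_dxj.x());
    //   entries.emplace_back(i, 3*j + 1, dCi_dxj.y());
    //   entries.emplace_back(i, 3*j + 2, dCi_dxj.z());

    SpMat J(numConstraints, 3 * numParticles);
    J.setFromTriplets(entries.begin(), entries.end());
    J.makeCompressed();
    return J;
}
```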
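And this is the kind of solve I had in mind, assuming I go with an iterative solver such as Eigen's `ConjugateGradient` on the M×M system from the paper (here `A` stands for something like J W J^T, which should be symmetric positive semi-definite, and `b` for whatever right-hand side I end up assembling):

```cpp
#include <Eigen/Sparse>

// Sketch: solve A * lambda = b for the constraint-force multipliers, where A and b
// are assumed to be assembled elsewhere.
Eigen::VectorXd solveForLambda(const Eigen::SparseMatrix<double>& A,
                               const Eigen::VectorXd& b)
{
    Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
                             Eigen::Lower | Eigen::Upper> cg;
    cg.compute(A);
    // Each CG iteration costs roughly O(nnz(A)), which is why sparsity seems relevant.
    return cg.solve(b);
}
```

Is this a reasonable direction, or would a sparse direct factorization (or even a dense solver) be the better default at these sizes?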