I am writing a constraint-based particle system based on this paper. The Jacobian of the constraint vector C scales with the number of particles in the system and the number of constraints. Because each constraint generally depends on only a few particles, the matrix will be extremely sparse. I thought it might be beneficial to use a sparse matrix in Eigen to solve the system. Per the Eigen reference material, there are several different methods for solving sparse matrix equations. My questions are:

  1. Would the size of these matrices demand a sparse matrix in the first place? I need to store the Jacobian and its time derivative; each is an M×3N matrix, where M is the number of constraints and N is the number of particles. The user of my particle system can presumably add as many particles as they wish, within reason.
  2. Is the argument for sparse matrix representations over dense ones more about performance or about memory consumption? Can a sparse system be solved faster than a dense one, and what does this depend on?
  3. I don't know much about these solver implementations, and I haven't studied the algorithms in the context of sparse matrix equations. What is their performance like, and which one should I choose? I remember learning about some of them in a linear algebra class several years ago, and I believe many of them were O(n^3), which doesn't seem like it would work well for a system like the one described in this paper.
1 Answer

If N is smaller than, say, 1000, then you might go with dense storage; otherwise, better to use a sparse representation. The number of non-zeros per row should be very small, say around 10 and no more than 100, to keep the solvers efficient. In practice the complexity of sparse solvers is smaller than O(N*K), where K is the number of non-zeros. Therefore they can be several orders of magnitude faster than dense solvers for very sparse matrices.