Hi,
I have seen it said many times that explicit matrix inversion is not recommended for solving linear systems, and that one should just use a solver instead. In my case, though, it looks like inverting might reduce execution time significantly (or will it?).
I have a simulation where the system matrix is sparse, symmetric, and constant over time. Either I calculate its inverse once at the beginning and then, at each time step, apply the inverse to that step's new right-hand-side vector to solve the system,
or
I keep the original sparse matrix and solve the linear system at each time step, even though the matrix never changes.
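To make the first option concrete, here is roughly what I imagine it would look like (just a sketch, not tested; as far as I understand, the inverse of a sparse matrix is generally dense, so I would have to store and apply it as a dense matrix):

```c
/* Sketch of the "invert once, multiply every step" route.
   A is the matrix densified into row-major storage (lower triangle used),
   since the inverse will not stay sparse. Error checks omitted. */
#include <stdlib.h>
#include "mkl.h"

int main(void)
{
    const MKL_INT n = 20000;
    double *A = malloc((size_t)n * n * sizeof(double)); /* ~3.2 GB */
    double *b = malloc(n * sizeof(double));
    double *x = malloc(n * sizeof(double));
    /* ... fill A (assumed symmetric positive definite) and b ... */

    /* One-time cost: Cholesky factorization, then overwrite A with inv(A). */
    LAPACKE_dpotrf(LAPACK_ROW_MAJOR, 'L', n, A, n);
    LAPACKE_dpotri(LAPACK_ROW_MAJOR, 'L', n, A, n);

    /* Per time step: x = inv(A) * b, a dense symmetric matvec, O(n^2). */
    for (long step = 0; step < 10000000L; ++step) {
        /* ... update b for this step ... */
        cblas_dsymv(CblasRowMajor, CblasLower, n, 1.0, A, n, b, 1, 0.0, x, 1);
    }
    free(A); free(b); free(x);
    return 0;
}
```

At n = 20,000 the dense inverse alone is about 20,000^2 * 8 bytes ~= 3.2 GB, which is one reason I am unsure this route actually pays off.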
The system size is n ~= 20,000 and the simulation runs for approximately 10^7 time steps. Which method is optimal at this scale?
I was planning to use PARDISO to solve the system, unless someone has a better recommendation.
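For the second option, here is how I was planning to structure the PARDISO calls, paying the analysis/factorization cost once and reusing the factorization inside the time loop (again just a sketch; `simulate` and the CSR arrays ia/ja/a are placeholders for my data, and mtype = 2 assumes my matrix is positive definite, -2 if it is only symmetric indefinite):

```c
/* Sketch of the PARDISO route: analyze + factorize the constant matrix once,
   then reuse the factorization (phase 33) for every time step's new RHS.
   ia/ja/a: CSR arrays of the upper triangle, zero-based indexing. */
#include "mkl_pardiso.h"
#include "mkl_types.h"

void simulate(MKL_INT n, MKL_INT *ia, MKL_INT *ja, double *a,
              double *b, double *x, long nsteps)
{
    void *pt[64] = { 0 };            /* PARDISO internal handle          */
    MKL_INT iparm[64];
    MKL_INT mtype = 2;               /* real symmetric positive definite */
    MKL_INT maxfct = 1, mnum = 1, nrhs = 1, msglvl = 0, error = 0, phase;
    MKL_INT idum;                    /* dummy perm argument              */

    pardisoinit(pt, &mtype, iparm);  /* default parameters for mtype     */
    iparm[34] = 1;                   /* zero-based ia/ja indexing        */

    /* One-time cost: reordering/symbolic analysis + numeric factorization. */
    phase = 12;
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, b, x, &error);

    /* Per time step: solve with the existing factors only. */
    phase = 33;
    for (long step = 0; step < nsteps; ++step) {
        /* ... update b for this step ... */
        pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
                &idum, &nrhs, iparm, &msglvl, b, x, &error);
    }

    /* Release PARDISO's internal memory. */
    phase = -1;
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, b, x, &error);
}
```

My understanding is that phase 33 performs only the forward/backward substitution, so the expensive factorization is paid exactly once rather than at every step.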
I am using MKL 2016.4 on a cluster, so I could request more CPUs, but my code itself is not parallelized.
Thanks for your time