Add Sparse Subset of Matrix Inverse

What does this implement/fix?

Certain problems require access to specific elements of the inverse of a sparse matrix. Calculating the full inverse usually produces a dense matrix, while the alternative of calculating individual columns of the inverse quickly becomes expensive and tends towards computing the whole inverse when, e.g., the block-diagonal elements of the inverse are needed.

This MR implements a method to efficiently calculate a sparse subset of the inverse, corresponding to the dense elements in an LU decomposition, plus any additional elements required.

Additional information

The Takahashi method (https://dl.acm.org/doi/10.1145/360680.360704) allows the computation of a sparse subset of the inverse, corresponding to the dense elements of a sparse LU factorization. Once this sparse subset is calculated, any additional element of the inverse can be calculated at the cost of a single sparse dot product. This implementation computes the inverse value at every position that is dense in the LU factors. Thus, if a user needs specific additional values of the inverse, they can insert corresponding explicit zeros into the sparse (non-inverted) matrix before calculating the inverse.
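The "single sparse dot product" claim follows from A^{-1} = U^{-1} L^{-1}: entry (i, j) of the inverse is the dot product of row i of U^{-1} with column j of L^{-1}, which is sparse when the factors are. This is not the MR's code; it is a minimal dense NumPy sketch of that identity, using a hypothetical no-pivot Doolittle factorization on a diagonally dominant matrix for clarity (the real sparse implementation must also handle pivoting/permutations).

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU without pivoting (adequate for diagonally dominant A)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n):
        for r in range(k + 1, n):
            L[r, k] = U[r, k] / U[k, k]
            U[r, k:] -= L[r, k] * U[k, k:]
    return L, U

def inverse_entry(L, U, i, j):
    """(A^{-1})_{ij} = (row i of U^{-1}) . (column j of L^{-1}).

    Each factor vector comes from one triangular solve; in the sparse
    setting the final step collapses to a single sparse dot product.
    """
    n = L.shape[0]
    ei = np.zeros(n); ei[i] = 1.0
    ej = np.zeros(n); ej[j] = 1.0
    row_i_Uinv = np.linalg.solve(U.T, ei)  # e_i^T U^{-1}
    col_j_Linv = np.linalg.solve(L, ej)    # L^{-1} e_j
    return row_i_Uinv @ col_j_Linv
```

The Takahashi recurrences avoid the per-entry triangular solves for positions inside the LU pattern; the dot-product fallback above is only needed for entries outside it.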

Because large sparse matrices from interesting problems easily become close to rank-deficient, and because of the recursive nature of the algorithm, it is very sensitive to numerical errors, particularly in the first elements computed. To reduce this sensitivity, the dot products feeding the accumulator were modified to use Kahan summation. In testing (and in the test case added) this takes many problems from intractable to easily solvable. However, Kahan summation does come at a cost: roughly 4x the operations of a simple summation. Neumaier summation, which only merges the accumulated error back in at the end of the summation rather than at each iteration, was also tested; it proved less accurate at a similar speed, so Kahan summation was kept.
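For reference, a compensated dot product in the style described above looks like the following sketch (not the MR's code; a standard Kahan-summation loop in Python for illustration):

```python
def kahan_dot(xs, ys):
    """Dot product accumulated with Kahan (compensated) summation.

    A running compensation term c captures the low-order bits lost
    each time a product is folded into the accumulator, at roughly
    4x the floating-point operations of a plain loop.
    """
    s = 0.0
    c = 0.0  # compensation: rounding error left over from the last add
    for x, y in zip(xs, ys):
        term = x * y
        t = term - c       # re-inject the previously lost low-order bits
        u = s + t
        c = (u - s) - t    # algebraically zero; recovers this add's error
        s = u
    return s
```

A plain accumulator silently drops many small products added onto a large running sum; the compensated loop recovers them, which is exactly the failure mode of the recursive Takahashi updates on near-rank-deficient problems.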

Despite the addition of Kahan summation, the Takahashi method is still very fast, particularly for larger, sparser matrices. A potential follow-up would be to make the algorithm block-aware, for better efficiency on block-sparse matrices.

Edited by Julian Kent