
Dense LMVM

Hansol Suh requested to merge hsuh/dense-lmvm-squashed into main

Dense LMVM - supports BFGS, DFP, and a "QN" variant.

Dense LMVM, unlike the regular sparse LMVM that iterates through rank-1 outer products of the history vectors, batches all of these operations into a few dense operations.
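
For context, batched formulations of this kind typically follow the compact representation of Byrd, Nocedal, and Schnabel; the formula below is that standard form (with S the solution differences and Y the gradient differences), shown only as a reference point and not necessarily the exact expression used in this MR:

```math
B = B_0 - \begin{bmatrix} B_0 S & Y \end{bmatrix}
\begin{bmatrix} S^T B_0 S & L \\ L^T & -D \end{bmatrix}^{-1}
\begin{bmatrix} S^T B_0 \\ Y^T \end{bmatrix},
\qquad
L_{ij} = \begin{cases} s_i^T y_j, & i > j \\ 0, & \text{otherwise,} \end{cases}
\qquad
D = \operatorname{diag}(s_i^T y_i).
```

Every history-dependent term is a dense matrix with one column per stored pair, so applying the operator reduces to a handful of GEMMs plus a small m-by-m solve.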

In the dense LMVM formulation, MatMult_LMVMDBFGS and MatSolve_LMVMDDFP both require a Cholesky factorization. To avoid this costly factorization, we introduce the LMVMDQN variant, which uses MatSolve_LMVMDBFGS for MatSolve and MatMult_LMVMDDFP for MatMult.
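
Assuming the standard compact representations, the asymmetry comes from the middle block: applying B for BFGS needs a Cholesky factorization of the small matrix S^T B_0 S + L D^{-1} L^T, whereas applying H = B^{-1} for BFGS only needs triangular solves with R = triu(S^T Y):

```math
H = H_0 + \begin{bmatrix} S & H_0 Y \end{bmatrix}
\begin{bmatrix} R^{-T} \left( D + Y^T H_0 Y \right) R^{-1} & -R^{-T} \\ -R^{-1} & 0 \end{bmatrix}
\begin{bmatrix} S^T \\ Y^T H_0 \end{bmatrix},
\qquad
R = \operatorname{triu}(S^T Y).
```

DFP is the dual of BFGS with S and Y exchanged, so its cheap direction is MatMult rather than MatSolve; pairing the two cheap kernels is what lets LMVMDQN skip the Cholesky factorization entirely. (Again, this is the textbook form, offered as an assumption about the exact kernels here.)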

There are two strategies for maintaining the memory history: MAT_LMVM_DENSE_REORDER and MAT_LMVM_DENSE_INPLACE.

For both strategies, we keep the history of solution-vector differences, S_mat, and the history of gradient differences, Y_mat, in cyclic order. The difference lies in how we store the outputs of the matrix-matrix multiplications.

MAT_LMVM_DENSE_REORDER keeps the matrix-matrix multiplication output in historical order, minimizing the number of BLAS kernel launches at the cost of extra memory movement. MAT_LMVM_DENSE_INPLACE does not move any matrices and keeps the multiplication result in cyclic order; this minimizes memory movement but requires more BLAS kernel launches.
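
As a purely illustrative sketch (hypothetical helpers, not the PETSc code), the two bookkeeping strategies differ roughly as follows: with an m-slot cyclic history after k >= m updates, the newest column sits at physical slot (k-1) % m; REORDER copies columns back into oldest-to-newest order so later products see one contiguous block, while INPLACE leaves the columns where they are and addresses them through an index map.

```c
/* Illustrative sketch only (hypothetical helpers, not the PETSc implementation).
 * m = history size, k = total number of updates so far; assumes k >= m,
 * i.e. the history window is full. */
#include <string.h>

/* Map historical position h (0 = oldest, m-1 = newest) to the physical
 * column where the cyclic buffer currently stores it. */
static int cyclic_index(int h, int k, int m)
{
  return (k % m + h) % m; /* column k % m holds the oldest entry */
}

/* MAT_LMVM_DENSE_REORDER-style bookkeeping: physically rotate the n-by-m
 * buffer A (column major, leading dimension lda >= n) into oldest-to-newest
 * order so later GEMMs touch one contiguous block; costs data movement.
 * work is a scratch buffer of at least lda*m doubles. */
static void reorder_columns(double *A, int n, int lda, int m, int k, double *work)
{
  for (int h = 0; h < m; h++)
    memcpy(&work[(size_t)h * lda], &A[(size_t)cyclic_index(h, k, m) * lda],
           (size_t)n * sizeof(double));
  memcpy(A, work, (size_t)lda * m * sizeof(double));
}

/* MAT_LMVM_DENSE_INPLACE-style bookkeeping: leave the columns in place and
 * address them through cyclic_index(); no copies, but one logical product may
 * split into two smaller BLAS calls across the wrap-around point. */
```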

The act of submitting a merge request will be understood as an affirmation of the Developer’s Certificate of Origin 1.1 at https://petsc.org/release/install/license/

See https://petsc.org/release/developers/integration/ for instructions on preparing and submitting a merge request. If the merge request is not from a fork, then you must

  • List yourself as assignee
  • List one or more reviewers
  • Select appropriate labels, including workflow::Review

For merge requests from a fork

  • make sure the fork is not private; the GitLab MR process (pipelines, merge) breaks otherwise
  • cc: one of the developers in the MR description who can shepherd the MR
