Toby Isaac authored
I don't think there is a good way to break this up into smaller commits: once I use LMBasis and LMProducts in Mat_LMVM, all of the implementations that depend on them are broken until they are changed to also use these new objects. It is probably better to review all of the MatLMVM implementations (except for denseqn) as new code rather than trying to review using a diff tool.

This rework achieves:

- Implementations of three classes of algorithms for limited-memory quasi-Newton updates:
  - MAT_LMVM_MULT_RECURSIVE: recursive, level-1 BLAS algorithms that closely match the way the methods are typically presented mathematically.
  - MAT_LMVM_MULT_DENSE: level-2 BLAS algorithms that, when possible, avoid any recomputation when the reference Jacobian J0 changes.
  - MAT_LMVM_MULT_COMPACT_DENSE: level-2 BLAS algorithms that construct rank-m (or rank-2m) update representations of the operators, for the best performance on MatMult() and MatSolve(), but at the cost of additional setup, and requiring more recomputation when J0 changes than MAT_LMVM_MULT_DENSE.
- Every quasi-Newton method has a dual method (e.g. BFGS and DFP): we exploit this duality to use only one code path for a dual pair, reducing the number of implementations to maintain.
- Special handling of the case J0 = gamma * I, avoiding products that are unnecessary in this case.
- Instead of including a MatLMVMDiagBrdn as the J0 matrix when J0 is variable (which means that both B and J0 have basis vectors to maintain), a SymBroydenRescale object is shared by MatLMVMDiagBrdn and the other formats that use rescaling.
- Improved correctness of when internal products need to be recomputed: if the user externally changes J0, this will be detected.