Rediscretised multigrid inside SNES+PCFieldsplit always linearises around zero state
Consider a setup where I am solving a nonlinear problem with SNES+PCFieldsplit and GMG (rediscretising on the coarse grids) inside one of the field split blocks.
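For concreteness, I mean the kind of solver configuration you would get from options roughly like these (the split prefixes and level counts are just illustrative):

  -snes_type newtonls
  -ksp_type fgmres
  -pc_type fieldsplit
  -pc_fieldsplit_type multiplicative
  -fieldsplit_0_ksp_type gmres
  -fieldsplit_0_pc_type mg
  -fieldsplit_0_pc_mg_levels 3
  -fieldsplit_1_pc_type jacobi

with the coarse operators of the MG in split 0 coming from rediscretisation on the coarsened DM rather than Galerkin projection.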
Presently, all of the KSPComputeOperators callbacks that are passed on to the inner fieldsplit PCs receive a zero state to linearise around. Here is what I think is going on:
SNESSolve sets a ComputeOperators callback on the KSP:
KSP ksp;
ierr = SNESGetKSP(snes,&ksp);CHKERRQ(ierr);
ierr = KSPSetComputeOperators(ksp,KSPComputeOperators_SNES,snes);CHKERRQ(ierr);
ierr = DMCoarsenHookAdd(snes->dm,DMCoarsenHook_SNESVecSol,DMRestrictHook_SNESVecSol,snes);CHKERRQ(ierr);
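For reference, DMCoarsenHook_SNESVecSol is the part that keeps this chain alive on every level: whenever a DM is coarsened it simply re-registers both hooks on the coarse DM. Roughly (paraphrased from my reading of snes.c, so details may be slightly off):

static PetscErrorCode DMCoarsenHook_SNESVecSol(DM dmfine,DM dmcoarse,void *ctx)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* propagate the hooks so further coarsening keeps transferring the state */
  ierr = DMCoarsenHookAdd(dmcoarse,DMCoarsenHook_SNESVecSol,DMRestrictHook_SNESVecSol,ctx);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}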
How does KSPComputeOperators_SNES know which state to linearise around? Well, it does this:
if (dmsave == snes->dm) X = snes->vec_sol; /* We are on the finest level */
else { /* We are on a coarser level, this vec was initialized using a DM restrict hook */
  ierr = DMGetNamedGlobalVector(snes->dm,"SNESVecSol",&Xnamed);CHKERRQ(ierr);
  X = Xnamed;
}
...
ierr = SNESComputeJacobian(snes,X,A,B);CHKERRQ(ierr);
And DMCoarsenHook_SNESVecSol and DMRestrictHook_SNESVecSol (registered by the DMCoarsenHookAdd call above) ensure that if we coarsen the DM associated with the SNES, the state vector is transferred to the coarser levels.
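The restrict hook is what actually fills the coarse "SNESVecSol": it takes the state on the fine side and restricts it into the named vector on the coarse DM. Roughly (again paraphrased from snes.c, with error checking and logging trimmed):

static PetscErrorCode DMRestrictHook_SNESVecSol(DM dmfine,Mat Restrict,Vec Rscale,Mat Inject,DM dmcoarse,void *ctx)
{
  SNES           snes = (SNES)ctx;
  Vec            Xfine,Xfine_named = NULL,Xcoarse;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* state on the fine side: the SNES solution on the finest level,
     otherwise the named vector a previous restriction filled in */
  if (dmfine == snes->dm) Xfine = snes->vec_sol;
  else {
    ierr  = DMGetNamedGlobalVector(dmfine,"SNESVecSol",&Xfine_named);CHKERRQ(ierr);
    Xfine = Xfine_named;
  }
  /* transfer it into the coarse DM's "SNESVecSol" */
  ierr = DMGetNamedGlobalVector(dmcoarse,"SNESVecSol",&Xcoarse);CHKERRQ(ierr);
  if (Inject) {
    ierr = MatRestrict(Inject,Xfine,Xcoarse);CHKERRQ(ierr);
  } else {
    ierr = MatRestrict(Restrict,Xfine,Xcoarse);CHKERRQ(ierr);
    ierr = VecPointwiseMult(Xcoarse,Xcoarse,Rscale);CHKERRQ(ierr);
  }
  ierr = DMRestoreNamedGlobalVector(dmcoarse,"SNESVecSol",&Xcoarse);CHKERRQ(ierr);
  if (Xfine_named) {ierr = DMRestoreNamedGlobalVector(dmfine,"SNESVecSol",&Xfine_named);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}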
However, inside PCFieldsplit (with, say, -fieldsplit_0_pc_type mg) the DM to be coarsened no longer has these coarsen and restrict hooks attached: the sub-DM created for the split does not inherit them. As a result, when rediscretising on the coarse grids we correctly call the KSPComputeOperators_SNES function, but the named vector "SNESVecSol" it asks for is never filled and is always full of zeros.
I think the right thing to do is to add some new field decomposition hooks that transfer all of this state over, but I got a bit lost looking through the code.
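To make the suggestion concrete, what I picture is something shaped like DMCoarsenHookAdd or DMSubDomainHookAdd but for field decompositions; nothing below exists today, the name and signature are entirely made up:

/* Entirely hypothetical -- this API does not exist in PETSc. The idea is a hook
   that fires whenever a field sub-DM is created (e.g. by PCFieldSplit), plus a
   restrict-style hook so state like "SNESVecSol" can be moved into the
   sub-DM's space and the coarsen/restrict hooks re-registered there. */
PetscErrorCode DMFieldDecompositionHookAdd(DM dm,
  PetscErrorCode (*fieldhook)(DM dm,DM subdm,void *ctx),
  PetscErrorCode (*restricthook)(DM dm,IS fields,DM subdm,void *ctx),
  void *ctx);

SNES would then register something analogous to DMCoarsenHook_SNESVecSol / DMRestrictHook_SNESVecSol with it.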
In Firedrake we have some problems that break right now (so we carry a workaround to get the right state onto the coarse grids), but I am not sure where to add a test that exposes this purely within PETSc.
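For what it is worth, the kind of workaround I mean has to do something like the following by hand (a hypothetical sketch, not our actual code; it assumes the PCMG inside split 0 is already set up so its restriction operators exist, and that xsplit is the split-0 portion of the current SNES solution):

static PetscErrorCode PushStateToCoarseLevels(PC pcmg,Vec xsplit)
{
  PetscInt       nlevels,l;
  KSP            kspf,kspc;
  DM             dmfine,dmcoarse;
  Mat            restrct;
  Vec            rscale,xfine,xfine_named,xcoarse;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PCMGGetLevels(pcmg,&nlevels);CHKERRQ(ierr);
  for (l = nlevels-1; l > 0; l--) {
    xfine_named = NULL;
    ierr = PCMGGetSmoother(pcmg,l,&kspf);CHKERRQ(ierr);
    ierr = PCMGGetSmoother(pcmg,l-1,&kspc);CHKERRQ(ierr);
    ierr = KSPGetDM(kspf,&dmfine);CHKERRQ(ierr);
    ierr = KSPGetDM(kspc,&dmcoarse);CHKERRQ(ierr);
    ierr = PCMGGetRestriction(pcmg,l,&restrct);CHKERRQ(ierr);
    ierr = PCMGGetRScale(pcmg,l,&rscale);CHKERRQ(ierr);

    /* state on this level: the split's current solution on the finest level,
       otherwise whatever the previous pass of this loop stashed */
    if (l == nlevels-1) xfine = xsplit;
    else {
      ierr  = DMGetNamedGlobalVector(dmfine,"SNESVecSol",&xfine_named);CHKERRQ(ierr);
      xfine = xfine_named;
    }
    /* fill the coarse DM's "SNESVecSol" so KSPComputeOperators_SNES finds it */
    ierr = DMGetNamedGlobalVector(dmcoarse,"SNESVecSol",&xcoarse);CHKERRQ(ierr);
    ierr = MatRestrict(restrct,xfine,xcoarse);CHKERRQ(ierr);
    ierr = VecPointwiseMult(xcoarse,xcoarse,rscale);CHKERRQ(ierr);
    ierr = DMRestoreNamedGlobalVector(dmcoarse,"SNESVecSol",&xcoarse);CHKERRQ(ierr);
    if (xfine_named) {ierr = DMRestoreNamedGlobalVector(dmfine,"SNESVecSol",&xfine_named);CHKERRQ(ierr);}
  }
  PetscFunctionReturn(0);
}

One then has to call this at the right moment (after PCSetUp has built the hierarchy, before the coarse operators are assembled), which is exactly the kind of plumbing that proper hooks would make unnecessary.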