Unify partitioner classes
I would like to constructively discuss the further direction of unifying `PetscPartitioner` and `MatPartitioning`.
This is a follow-up to #192 (closed); I closed that issue because it suggests only one way of doing this.
I think we need to write down the features that are implemented in one representation but not the other.
Feature comparison
MatPartitioning:

- `MATPARTITIONINGHIERARCH`
- Party interface `MATPARTITIONINGPARTY`
  - is it worth keeping?
- clear, documented, encapsulated API
- used by some external packages
  - libMesh/MOOSE
- edge weighting
- partitioning of a numeric matrix
  - GAMG
- conversion to the dual graph with `MatMeshToCellGraph` (see the sketch after this list)
  - I hate this name because I must always use Google to find it - no `Dual` in the name!
  - currently implemented only using ParMETIS
  - IMHO it shouldn't be underrated - e.g. these guys consider it an important step
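Since the dual-graph conversion keeps coming up, here is a minimal sketch of how `MatMeshToCellGraph` feeds into the standard `MatPartitioning` workflow. It assumes a PETSc build with ParMETIS (which `MatMeshToCellGraph` requires) and classic `ierr`/`CHKERRQ` error handling; the function name `PartitionMeshDual` is just illustrative.

```c
#include <petscmat.h>

/* Illustrative sketch: partition the dual graph of a mesh.
   `mesh` is a MATMPIADJ matrix holding cell-to-vertex connectivity;
   `ncommon` is the number of shared vertices two cells must have to
   count as neighbors in the dual graph. Requires ParMETIS. */
PetscErrorCode PartitionMeshDual(Mat mesh, PetscInt ncommon, IS *partis)
{
  Mat             dual;
  MatPartitioning part;
  PetscErrorCode  ierr;

  PetscFunctionBeginUser;
  ierr = MatMeshToCellGraph(mesh, ncommon, &dual);CHKERRQ(ierr);
  ierr = MatPartitioningCreate(PetscObjectComm((PetscObject)mesh), &part);CHKERRQ(ierr);
  ierr = MatPartitioningSetAdjacency(part, dual);CHKERRQ(ierr);
  ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr); /* e.g. -mat_partitioning_type parmetis */
  ierr = MatPartitioningApply(part, partis);CHKERRQ(ierr);  /* one target rank per local cell */
  ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
  ierr = MatDestroy(&dual);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```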
PetscPartitioner:

- handles the case of a graph with zero local vertices
  - see comment below
  - perhaps this could be done more naturally/intuitively?
  - it looks like this already exists in MatPartitioning in the form of `MatMPIAdjCreateNonemptySubcommMat` (see the sketch after this list)
- `PETSCPARTITIONERSHELL`
- `PETSCPARTITIONERRANDOM` (does not require a graph)
- `PETSCPARTITIONERGATHER` (does not require a graph)
- the PARMETIS partitioner uses sequential METIS when the parallel graph to partition has all of its vertices on a single root process
- Stefano has a quite general implementation of a multilevel partitioner based on PetscPartitioner, but it is somewhat DMPlex-centric, in the sense that it helps with the scalability issues of the read-in-root, scatter-to-all mesh pattern.
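Regarding the zero-local-vertices item above, here is a rough sketch of how this can be handled on the MatPartitioning side, as far as I understand `MatMPIAdjCreateNonemptySubcommMat` (it returns the same adjacency matrix on a subcommunicator of the ranks that own rows, and NULL elsewhere). `PartitionNonempty` is a made-up name for illustration.

```c
#include <petscmat.h>

/* Illustrative sketch: partition an adjacency graph that may have
   zero local vertices on some ranks, by squeezing it onto the
   subcommunicator of ranks that actually own rows. On excluded
   ranks `sub` comes back as NULL and `*partis` is left NULL. */
PetscErrorCode PartitionNonempty(Mat adj, PetscInt nparts, IS *partis)
{
  Mat            sub;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  *partis = NULL;
  ierr = MatMPIAdjCreateNonemptySubcommMat(adj, &sub);CHKERRQ(ierr);
  if (sub) { /* this rank is in the nonempty subcommunicator */
    MatPartitioning part;
    ierr = MatPartitioningCreate(PetscObjectComm((PetscObject)sub), &part);CHKERRQ(ierr);
    ierr = MatPartitioningSetAdjacency(part, sub);CHKERRQ(ierr);
    ierr = MatPartitioningSetNParts(part, nparts);CHKERRQ(ierr);
    ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);
    ierr = MatPartitioningApply(part, partis);CHKERRQ(ierr);
    ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
    ierr = MatDestroy(&sub);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
```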
Feel free to add further items to either list.
Refactoring process steps
For the partitioners implemented in both classes, we should preferably use the `PetscPartitioner` implementations, because they are probably more up-to-date and better tested, and the developers will keep understanding the implementation details.
Stage 1
A proof-of-concept WIP MR which will take one of the three partitioners {(PAR)METIS, PT-SCOTCH, CHACO}; let's denote it XXX.
- Create a new MatPartitioning implementation `MATPARTITIONING_WIP_XXX` from `PETSCPARTITIONERXXX`.
  - strip manual pages [we don't want to produce any user-facing docs at this stage]
  - otherwise minimize the diff
  - place it in `src/mat/partition/impls/wip_xxx`
- Define tests that compare `MATPARTITIONING_WIP_XXX`, `MATPARTITIONINGXXX`, and `PETSCPARTITIONERXXX`.
  - `PETSCPARTITIONERMATPARTITIONING` will be useful to wrap `MATPARTITIONING_WIP_XXX` and `MATPARTITIONINGXXX` in the PetscPartitioner API (see the sketch after this list)
  - make use of ex24
  - maybe we need to add more tests with more datafiles and construction modes
  - at the end of this stage, also run bigger tests on a cluster to compare performance
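To make the comparison setup in the second bullet concrete, here is a sketch of how `PETSCPARTITIONERMATPARTITIONING` can wrap any MatPartitioning type, so that the hypothetical `MATPARTITIONING_WIP_XXX` from this plan, the existing `MATPARTITIONINGXXX`, and `PETSCPARTITIONERXXX` can all be driven through one PetscPartitioner code path. `CreateWrappedPartitioner` is an illustrative name.

```c
#include <petscdmplex.h>

/* Illustrative sketch: build a PetscPartitioner that delegates to a
   given MatPartitioning type. Passing "wip_xxx" here would exercise
   the hypothetical MATPARTITIONING_WIP_XXX, "xxx" the existing one. */
PetscErrorCode CreateWrappedPartitioner(MPI_Comm comm, MatPartitioningType mptype, PetscPartitioner *part)
{
  MatPartitioning mp;
  PetscErrorCode  ierr;

  PetscFunctionBeginUser;
  ierr = PetscPartitionerCreate(comm, part);CHKERRQ(ierr);
  ierr = PetscPartitionerSetType(*part, PETSCPARTITIONERMATPARTITIONING);CHKERRQ(ierr);
  ierr = PetscPartitionerMatPartitioningGetMatPartitioning(*part, &mp);CHKERRQ(ierr);
  ierr = MatPartitioningSetType(mp, mptype);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```

The same comparison should also be reachable purely via options, along the lines of `-petscpartitioner_type matpartitioning` combined with the corresponding `-mat_partitioning_type`, though the exact option prefixes would need checking in the tests.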
Stage 2
If Stage 1 succeeds:

- Add all documentation to `MATPARTITIONING_WIP_XXX`.
- Delete `MATPARTITIONINGXXX` and its directory `src/mat/partition/impls/xxx`.
- Rename `MATPARTITIONING_WIP_XXX` to `MATPARTITIONINGXXX` and move it to `src/mat/partition/impls/xxx`.
Stage 3
Once Stage 2 is successfully finished:

- Rewrite all remaining `PetscPartitioner` types as `MatPartitioning` types
  - repeat Stage 1 and Stage 2 for each `PetscPartitionerType`
- Rewrite `PETSCPARTITIONERMATPARTITIONING` so that it is just a helper function (or functions) within `DMPLEX`
- Get rid of `PetscPartitioner`, once we agree there is nothing missing in `MatPartitioning`
- Rename/refactor `MatPartitioning`
  - this should perhaps happen within petsc-future