Improve MPI encapsulation in par_vec module
Description
Use HAVE_MPI only around the actual MPI calls. This reduces the need for HAVE_MPI guards in other parts of the code and makes the code more readable.
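The pattern could look roughly like the sketch below (a minimal illustration; the routine and argument names are made up and are not the actual par_vec interface). The HAVE_MPI guard is confined to the wrapper routine, with a plain serial fallback, so callers elsewhere in the code can invoke it without any preprocessor conditionals.

```fortran
subroutine vec_gather_sketch(comm, root, sendbuf, recvbuf)
#ifdef HAVE_MPI
  use mpi
#endif
  implicit none

  integer, intent(in)  :: comm, root
  real(8), intent(in)  :: sendbuf(:)
  real(8), intent(out) :: recvbuf(:)
#ifdef HAVE_MPI
  integer :: ierr

  ! Parallel build: the MPI call is the only code behind the guard.
  call MPI_Gather(sendbuf, size(sendbuf), MPI_DOUBLE_PRECISION, &
                  recvbuf, size(sendbuf), MPI_DOUBLE_PRECISION, &
                  root, comm, ierr)
#else
  ! Serial build: there is only one "process", so a copy suffices.
  recvbuf(1:size(sendbuf)) = sendbuf
#endif
end subroutine vec_gather_sketch
```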
Merge request reports
Activity
changed milestone to %11.0
added Refactoring label
Codecov Report
Merging #1126 (de32d25e) into develop (0a0c868a) will decrease coverage by 0.06%. The diff coverage is n/a.
@@            Coverage Diff             @@
##           develop    #1126      +/-   ##
===========================================
- Coverage    70.11%   70.04%    -0.07%
===========================================
  Files          515      515
  Lines       102629   102724       +95
===========================================
+ Hits         71957    71958        +1
- Misses       30672    30766       +94
Impacted Files    Coverage Δ
src/grid/cube_function_inc.F90 82.75% <ø> (ø)
src/grid/double_grid.F90 81.25% <ø> (+0.23%)
src/grid/io_function_inc.F90 60.90% <ø> (-0.30%)
src/grid/mesh.F90 73.00% <ø> (ø)
src/grid/mesh_cube_parallel_map.F90 55.88% <ø> (ø)
src/grid/mesh_interpolation_inc.F90 97.39% <ø> (ø)
src/grid/multigrid.F90 93.12% <ø> (ø)
src/grid/nl_operator.F90 66.07% <ø> (+0.33%)
src/grid/par_vec.F90 67.24% <ø> (-24.46%)
src/grid/par_vec_inc.F90 50.00% <ø> (ø)
... and 7 more
Continue to review full report at Codecov.
Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0a0c868...de32d25. Read the comment docs.
Edited by Codecov
assigned to @micael.oliveira
mentioned in commit c0033538