
Strategies for reducing set of considered models

Disclaimer: this is a discussion issue, possibly serving as a source for subsequent technical issues, not an issue to implement directly.

Status

After the first, OAT part of getViableProjections one obtains the OATviable matrix with 0/1 entries for each of the n parameter points (in rows) and each of the p parameters (in columns). The entry (i, j) encodes whether the j-th OAT parameter projection was viable for the i-th parameter point.
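To make the encoding concrete, here is a minimal sketch (in Python, with a toy matrix; the actual toolbox representation may differ) of OATviable and of the per-row candidate projections it induces:

```python
import numpy as np

# Hypothetical illustration of the OATviable matrix described above:
# rows = parameter point samples, columns = parameters. Entry (i, j) is 1
# iff projecting out parameter j alone (the j-th OAT projection) keeps
# the model viable at the i-th parameter point.
n, p = 4, 3  # toy sizes: 4 parameter points, 3 parameters
OATviable = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
])

# The per-row candidate projection collects all parameters that were
# individually non-essential at that sample.
row_projections = [set(np.flatnonzero(row)) for row in OATviable]
print(row_projections)
```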

Currently, the set of reduced models considered in the second part contains all unique projections obtained per row. Up to p*n + n models are evaluated across both parts, resulting in up to n maximally reduced models, one per parameter point sample.
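The current candidate-set construction can be sketched as follows (hypothetical Python, toy data): duplicate rows collapse, so the second part evaluates at most n combined projections on top of the p*n OAT evaluations.

```python
import numpy as np

# Sketch of the current strategy: from OATviable, take the set of unique
# per-row projections as candidates for the second part. With n rows and
# p columns, at most p*n + n models are evaluated in total.
OATviable = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [1, 0, 1],
])
unique_projections = {frozenset(np.flatnonzero(row)) for row in OATviable}
# The two identical rows collapse: 4 samples yield only 3 candidates.
print(len(unique_projections))
```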

Moreover, both in OATviable and over the recursive steps (the latter being more important for the running time), a viable projection P1 that is smaller than an already found viable projection P2 can be considered. This seems to make sense in the recursive steps, due to the possibility of finding another viable projection P3 larger than P1 but non-comparable with P2 (cf. hand-sketch: ProjectionsPoset-SmallerToNoncomparable.pdf). Observe that:

  1. such a projection P3 could be found only with parameter point samples other than the witnesses of P2 and P1;
  2. samples in which the P2\P1 parameters are projected out will not be witnesses for any such P3.
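A small concrete instance of the poset situation sketched above, with projections modelled as sets of projected-out parameter indices (a hypothetical encoding; `pleq` mirrors the partial order mentioned below):

```python
P1 = {0}      # smaller viable projection
P2 = {0, 1}   # already found viable projection, P1 < P2
P3 = {0, 2}   # larger than P1, yet non-comparable with P2

def pleq(a, b):
    """Partial order on projections: a <= b iff a projects out a
    subset of the parameters that b projects out."""
    return a <= b

assert pleq(P1, P2) and pleq(P1, P3)
# P2 and P3 are non-comparable, so keeping P1 alive in the recursion
# can still pay off by growing it into P3:
assert not pleq(P2, P3) and not pleq(P3, P2)
```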

Considerations

  1. For a faster, greedier, but less accurate search of maximal reductions, allow directly skipping non-maximal projections at each recursive step (use pleq instead of peq in ProjectionSet.has()), and in getViableProjections (on top of unique, also remove rows with pleq, or re-implement to fully use ProjectionSet).
  2. Let each sample know which projections it witnessed (the viable projections for that sample), and possibly take advantage of the observations above in getViableProjections and topologicalFilteringRecursiveStep{Parallel,Sequential}.
  3. Reduction to a single unique projection obtained per column (after elimination of all zeros rows), i.e. only parameters non-essential in all considered samples are considered for maximally reduced model. This corresponds to making much smaller jumps in the model space, if such reductions are found at all. Arguably, such strategy would make sense with local analysis based on small sample and re-sampling per reduction, in contrast to a global large initial sample.