The optimization parameters file specifies additional options for the run. The available options depend on the model being estimated. They are:
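For reference, this file is a small XML document whose tags follow the option names listed below. A minimal sketch is given here; the root tag name and exact defaults are assumptions and should be checked against the example files shipped with your Deformetrica version:

```xml
<?xml version="1.0"?>
<!-- Root tag name assumed; verify against the examples bundled with your version. -->
<optimization-parameters>
    <optimization-method-type>GradientAscent</optimization-method-type>
    <max-iterations>100</max-iterations>
    <initial-step-size>1e-3</initial-step-size>
    <convergence-tolerance>1e-4</convergence-tolerance>
    <freeze-template>Off</freeze-template>
</optimization-parameters>
```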

- **optimization-method-type**: can be `GradientAscent`, `ScipyLBFGS` or `McmcSaem`. The default is `GradientAscent`. The longitudinal atlas and the Bayesian atlas support all three methods; the other models support only `GradientAscent` and `ScipyLBFGS`.

- **optimized-log-likelihood**: for the Bayesian and longitudinal atlases. It must be set to `complete` if `GradientAscent` or `ScipyLBFGS` is used to estimate these models.
- **number-of-threads**: the number of threads to use. Multi-threading is only supported for the DeterministicAtlas model.
- **max-iterations**: the maximum number of iterations to run. Default is 100.

- **convergence-tolerance**: a tolerance threshold for early stopping of the estimation. If the relative decrease of the loss function over an iteration falls below this threshold, the computation stops.

- **memory-length**: length of the memory for the `ScipyLBFGS` estimator. Large values can induce high memory use. Default is 10.

- **downsampling-factor**: a factor by which to internally downsample the images, to speed up image deformations.

- **save-every-n-iters**: the interval, in iterations, at which outputs are saved. Default is 10. Beware: for large meshes, saving the whole output of a model can be expensive.

- **use-sobolev-gradient**: applies a smoothing operation to the gradient used to update the template. We advise using it on meshes, since it yields a smoother estimated template. Default is `On`.
- **initial-step-size**: the initial step size for the GradientAscent estimator. Default is 1e-3. If the line search fails, try lower values.

- **freeze-template**: `On` to keep the template fixed at its initial value, `Off` to estimate it.

- **freeze-control-points**: `Off` to optimize the control point positions. Default is `On`.

- **use-cuda**: if `On`, all computations with the `torch` kernel type are performed on the GPU.

- **max-line-search-iterations**: the maximum number of line-search iterations for the `GradientAscent` estimator.
- **state-file**: if a state-file is specified and the file exists, deformetrica will attempt to restart the computation from this state file. If it does not exist, deformetrica will run normally and save the state in this file. If a state-file is not specified, the state will be saved in the output folder under the name `pydef-state.p`.

- **use-rk2**: if `On`, a Runge-Kutta 2 scheme is used to integrate the deformations; otherwise a simple Euler scheme is used. Default is `Off`.

- **momenta-proposal-std**, **onset-age-proposal-std**, **log-acceleration-proposal-std**, **sources-proposal-std**: standard deviations of the proposal distributions for the `McmcSaem` algorithm. See [Longitudinal Atlas](Tutorials/LongitudinalAtlas).

- **scale-initial-step-size**: scales the step sizes by the squared norm of the gradient for `GradientAscent`. Default is `On`.

- **freeze-v0**, **freeze-p0**, **freeze-modulation-matrix**, **freeze-time-shift-variance**, **freeze-log-acceleration-variance**, **freeze-reference-time**, **freeze-noise-variance**: whether to freeze the corresponding variables of a longitudinal atlas.
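
Putting the `McmcSaem`-specific options together, a longitudinal atlas run could be configured along these lines. This is a sketch under the same assumptions as above (root tag name, `On`/`Off` values); the proposal standard deviations and the state-file path are illustrative only:

```xml
<?xml version="1.0"?>
<optimization-parameters>
    <optimization-method-type>McmcSaem</optimization-method-type>
    <max-iterations>200</max-iterations>
    <!-- Proposal standard deviations for the MCMC sampler (illustrative values). -->
    <momenta-proposal-std>0.01</momenta-proposal-std>
    <onset-age-proposal-std>0.5</onset-age-proposal-std>
    <log-acceleration-proposal-std>0.1</log-acceleration-proposal-std>
    <sources-proposal-std>0.1</sources-proposal-std>
    <!-- Freeze selected longitudinal-atlas variables. -->
    <freeze-v0>Off</freeze-v0>
    <freeze-reference-time>On</freeze-reference-time>
    <!-- Save periodically and allow resuming from a state file. -->
    <save-every-n-iters>20</save-every-n-iters>
    <state-file>output/pydef-state.p</state-file>
</optimization-parameters>
```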