Scaling the objective by the number of goals
At each priority, the objective function that the solver has to minimize is the sum of the objectives of the individual goals divided by the number of goals:
# ca is the casadi package (import casadi as ca).
def objective(self, ensemble_member):
    if len(self.__subproblem_objectives) > 0:
        # Sum the objective contributions of all goals at this priority,
        # then divide the result by the number of goals.
        acc_objective = ca.sum1(ca.vertcat(*[o(self, ensemble_member) for o in self.__subproblem_objectives]))
        return acc_objective / len(self.__subproblem_objectives)
(Similarly for the path_objective.)
This division works against the tolerance of the solver. Indeed, if len(self.__subproblem_objectives) = 100,000 and solver_tolerance = 1e-8, the objective function will in practice only have been optimized up to a tolerance of 1e-3. (Here we assume that the solver does not perform any internal scaling.)
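To make the loss concrete, the following minimal sketch (plain Python; the goal count and tolerance are illustrative assumptions) computes the tolerance that effectively applies to the unscaled sum of the goal objectives:

    # Illustrative numbers only; they are assumptions, not taken from a real model.
    n_goals = 100_000        # len(self.__subproblem_objectives)
    solver_tolerance = 1e-8  # tolerance the solver applies to the scaled objective

    # The solver only guarantees the *scaled* objective up to solver_tolerance.
    # Multiplying back by the number of goals gives the tolerance that
    # effectively applies to the unscaled sum of goal objectives.
    effective_tolerance = solver_tolerance * n_goals
    print(effective_tolerance)  # 0.001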
This loss of significant digits of the objective function is particularly problematic at the next priority level, when soft constraints are transformed into hard ones. An epsilon that the solver considered optimal may not be accurate enough when the corresponding hard constraint is constructed, leading to an infeasible problem.
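One way this can play out is sketched below with purely hypothetical numbers; the simplified constraint construction is an assumption, not the actual goal programming implementation:

    # Purely hypothetical numbers; the simplified constraint construction below
    # is an assumption, not the actual goal programming implementation.
    best_attainable_epsilon = 0.5000   # smallest epsilon the model can truly achieve
    effective_tolerance = 1e-3         # see the previous sketch

    # Within the loose effective tolerance (and the solver's own constraint
    # tolerances), the solver may report an epsilon slightly below what is
    # actually attainable.
    reported_epsilon = 0.4996

    # At the next priority the soft constraint is frozen into a hard one using
    # the reported epsilon. No solution can do better than the best attainable
    # value, so the hard constraint cannot be satisfied: the problem is infeasible.
    is_feasible = best_attainable_epsilon <= reported_epsilon
    print(is_feasible)  # False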
To be discussed:
- what are the advantages of this scaling?
- if we opt to keep the scaling, what actions should we take to prevent infeasibility issues? (One candidate action is sketched after this list.)
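The sketch below illustrates one such candidate, purely as a starting point for the discussion: relax the bound of the resulting hard constraint by the effective tolerance of the previous solve, so that the optimality error introduced by the scaling cannot by itself make the next priority infeasible. The helper name and the relaxation strategy are assumptions, not an existing option of the framework.

    # Hypothetical helper, not part of the framework's API.
    def relaxed_hard_bound(reported_epsilon, n_goals, solver_tolerance):
        # The effective tolerance accounts for the division of the objective by
        # the number of goals; adding it to the reported epsilon keeps the hard
        # constraint g(x) <= bound feasible even if the reported epsilon is off
        # by that amount.
        effective_tolerance = solver_tolerance * n_goals
        return reported_epsilon + effective_tolerance

    # Example: turn the epsilon found at the previous priority into a hard bound.
    bound = relaxed_hard_bound(reported_epsilon=0.4996, n_goals=100_000, solver_tolerance=1e-8)
    print(bound)  # 0.5006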