Commit 6d5cc07d authored by Tjerk Vreeken's avatar Tjerk Vreeken

Add mixin to linearize goal order

For some problems it is preferred not to introduce any non-linear terms in
the objective. For example, when using a MILP solver, one wants the entire
problem to remain linear. This sometimes clashes with the desire for an
order 2 goal, which steers the solution towards many small
exceedances instead of one large exceedance. An order 1 goal does not
distinguish between the two.

This commit adds functionality to linearly approximate the goal order.
That way, one can keep the problem fully linear, while getting some of the
benefits of higher order goals back.
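To illustrate the idea (a standalone NumPy sketch with assumed breakpoints, not the mixin's actual code): a convex penalty such as x**order on [0, 1] can be overestimated by the maximum of a few chord lines, and a linear program can handle that maximum with one linear constraint per chord.

```python
import numpy as np

# Hypothetical illustration: approximate f(x) = x**2 on [0, 1] from above
# with the maximum of a few chord (secant) lines. For a convex function,
# every chord lies above the curve between its endpoints, so the
# piecewise-linear maximum is an overestimate that can replace the
# quadratic term in a linear program.
order = 2
xs = np.linspace(0.0, 1.0, 6)                 # breakpoints (assumed, not from the commit)
ys = xs**order
a = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])   # slopes of the chords
b = ys[1:] - a * xs[1:]                       # intercepts of the chords

x = np.linspace(0.0, 1.0, 101)
approx = np.max(a[:, None] * x[None, :] + b[:, None], axis=0)

max_err = np.max(approx - x**order)           # worst-case overestimation error
print(round(float(max_err), 3))
```

In the goal programming setting, each chord then becomes one linear constraint of the form `lin >= a * eps + b` on a new linear violation variable.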

By default, all goals will be linearized when inheriting from
LinearizedOrderGoalProgrammingMixin. This behavior can also be overridden by setting
`linearize_goal_order` to False in goal_programming_options(). The
linearization can also be controlled on the level of a goal by making a
goal inherit from LinearizedOrderGoal and setting `linearize_order` to
either True or False.

Typical use-cases for this fine-grained control are:
- `linearize_goal_order` is True (default) to avoid a QP problem turning
  into a QCP problem when `keep_soft_constraints` is set. The last
  priority can however be solved as a QP problem (instead of LP), because
  this quadratic objective will _not_ turn into a constraint.
- For performance reasons, one might want to linearize specific higher
  order goals, but keep others as-is.

Aside from solvers which only handle LP and not QP, using a linearly
approximated goal order is also useful when `keep_soft_constraints` is
set. A quadratic problem in priority 1 would mean a quadratically
_constrained_ problem in priority 2 with that option, something that can
be much harder to solve.
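A toy numeric sketch of that cascade (invented numbers, grid search instead of a solver):

```python
import numpy as np

# Toy two-stage illustration: with keep_soft_constraints, the priority-1
# objective value is carried over to priority 2 as a constraint. If that
# objective is quadratic, priority 2 therefore gains a *quadratic*
# constraint.
x = np.linspace(0.0, 1.0, 1001)     # decision variable's feasible range

# Priority 1: minimize the quadratic violation (x - 2)**2.
obj1 = (x - 2.0) ** 2
v1 = obj1.min()                     # best achievable value

# Priority 2: minimize x itself, now subject to the carried-over
# quadratic constraint (x - 2)**2 <= v1 (small tolerance for the grid).
feasible2 = obj1 <= v1 + 1e-9
x2 = x[feasible2].min()
print(v1, x2)
```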

Note that higher order minimization goals cannot be linearized, because
they lack a function range over which to linearize. Instead, users are
requested to give these minimization goals
e.g. a target_min = target_max = 0.0 (and a proper function range), such
that they can be linearized. Minimization goals with order 1 are
supported, as they do not have to be linearized.
parent bf3dc363
@@ -27,3 +27,18 @@ Minimize absolute value
.. autoclass:: rtctools.optimization.min_abs_goal_programming_mixin.MinAbsStateGoal
:members: __init__
Linearized order
----------------
.. autoclass:: rtctools.optimization.linearized_order_gpmixin.LinearizedOrderGoalProgrammingMixin
:members: goal_programming_options
.. autoclass:: rtctools.optimization.linearized_order_gpmixin.LinearizedOrderGoal
.. autoclass:: rtctools.optimization.linearized_order_gpmixin.LinearizedOrderStateGoal
:members: __init__
import casadi as ca
import numpy as np
from rtctools.optimization.goal_programming_mixin import Goal, GoalProgrammingMixin, StateGoal
from rtctools.optimization.goal_programming_mixin import _GoalConstraint
class LinearizedOrderGoal(Goal):
    #: Override linearization of goal order. The related global goal programming
    #: option is ``linearize_goal_order`` (see
    #: :py:meth:`LinearizedOrderGoalProgrammingMixin.goal_programming_options`).
    #: The default value of None defers to the global option, but the user can
    #: explicitly override it per goal by setting this value to True or False.
    linearize_order = None

    #: Coefficients to linearize a goal's order
    _linear_coefficients = {}
    @classmethod
    def _get_linear_coefficients(cls, order, eps=0.1, kind='balanced'):
        assert order > 1, "Order should be strictly larger than one"
        try:
            return cls._linear_coefficients[eps][order]
        except KeyError:
x = ca.SX.sym('x')
a = ca.SX.sym('a')
b = ca.SX.sym('b')
# Strike a balance between "absolute error < eps" and "relative error < eps" by
# multiplying eps with x**(order-1)
if kind == 'balanced':
    f = x**order - eps * x**(order-1) - (a * x + b)
elif kind == 'abs':
    f = x**order - eps - (a * x + b)
else:
    raise Exception("Unknown error approximation strategy '{}'".format(kind))
res_vals = ca.Function("res_vals", [x, ca.vertcat(a, b)], [f])
options = {'nlpsol': 'ipopt', 'nlpsol_options': {'print_time': 0, 'ipopt': {'print_level': 0}}}
do_step = ca.rootfinder("next_state", "nlpsol", res_vals, options)
x = 0.0
a = 0.0
b = 0.0
xs = [0.0]
while x < 1.0:
    # Initial guess larger than 1.0 to always have the next point be
    # on the right (i.e. not left) side.
    x = float(do_step(2.0, [a, b]))
    a = order * x ** (order - 1)
    b = x**order - a * x
    xs.append(x)

# Turn the tangent-line underestimate into an overestimate, such that we
# get rid of the horizontal line at the origin.
xs[-1] = 1.0
xs = np.array(xs)
ys = xs**order
a = (ys[1:] - ys[:-1])/(xs[1:] - xs[:-1])
b = ys[1:] - a * xs[1:]
lines = list(zip(a, b))
cls._linear_coefficients.setdefault(eps, {})[order] = lines
return lines
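The fitting routine above can be sketched without CasADi. A pure-NumPy rendition of the same tangent-stepping idea (bisection in place of the IPOPT rootfinder, 'balanced' criterion only; an illustrative sketch, not the mixin's implementation):

```python
import numpy as np

def linear_coefficients(order, eps=0.1):
    # Step along [0, 1]: from each tangent point, find the next x where
    # the tangent-line underestimate of x**order deviates by
    # eps * x**(order - 1) (the "balanced" criterion), via bisection.
    assert order > 1
    x, a, b = 0.0, 0.0, 0.0
    xs = [0.0]
    while x < 1.0:
        lo, hi = x + 1e-9, 2.0
        for _ in range(100):      # bisect for the root to the right of x
            mid = 0.5 * (lo + hi)
            if mid**order - eps * mid**(order - 1) - (a * mid + b) > 0.0:
                hi = mid
            else:
                lo = mid
        x = hi
        a = order * x ** (order - 1)   # tangent slope at the new point
        b = x**order - a * x           # tangent intercept
        xs.append(x)
    xs[-1] = 1.0                       # clip the last point to 1.0
    xs = np.array(xs)
    ys = xs**order
    # Use chords between the points: for a convex function these
    # overestimate, which removes the horizontal line at the origin.
    a = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    b = ys[1:] - a * xs[1:]
    return list(zip(a, b))
```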
class LinearizedOrderStateGoal(LinearizedOrderGoal, StateGoal):
    """
    Convenience class definition for linearized order state goals. Note that
    it is possible to just inherit from :py:class:`.LinearizedOrderGoal` to get
    the needed functionality for controlling the linearization at goal level.
    """
class LinearizedOrderGoalProgrammingMixin(GoalProgrammingMixin):
    """
    Adds support for linearization of the goal objective functions, i.e. the
    violation variables raised to a certain power. This can be used to keep a
    problem fully linear and/or to make sure that no quadratic constraints
    appear when using the goal programming option ``keep_soft_constraints``.
    """
    def goal_programming_options(self):
        """
        If ``linearize_goal_order`` is set to ``True``, the goal's order will be
        approximated linearly for any goals with order > 1. Note that this option
        does not work with minimization goals of higher order. Instead, it is
        suggested to transform these minimization goals into goals with a target
        (and function range) when using this option. Note that this option can be
        overridden on the level of a goal by using a
        :py:class:`LinearizedOrderGoal` (see
        """
options = super().goal_programming_options()
options['linearize_goal_order'] = True
return options
    def _gp_validate_goals(self, goals, is_path_goal):
        options = self.goal_programming_options()

        for goal in goals:
            goal_linearize = None
            if isinstance(goal, LinearizedOrderGoal):
                goal_linearize = goal.linearize_order

            if goal_linearize or (options['linearize_goal_order'] and goal_linearize is not False):
                if not goal.has_target_bounds and goal.order > 1:
                    raise Exception("Higher order minimization goals not allowed with "
                                    "`linearize_goal_order` for goal {}".format(goal))

        super()._gp_validate_goals(goals, is_path_goal)
def _gp_goal_constraints(self, goals, sym_index, options, is_path_goal):
options = self.goal_programming_options()
def _linearize_goal(goal):
    goal_linearize = None
    if isinstance(goal, LinearizedOrderGoal):
        goal_linearize = goal.linearize_order

    if goal_linearize or (options['linearize_goal_order'] and goal_linearize is not False):
        if goal.order > 1 and not goal.critical:
            return True
        return False
    return False
lo_soft_constraints = [[] for ensemble_member in range(self.ensemble_size)]
lo_epsilons = []
# For the linearized goals, we use all of the normal processing,
# except for the objective. We can override the objective function by
# setting a _objective_func function on the Goal object.
for j, goal in enumerate(goals):
    if not _linearize_goal(goal):
        continue

    assert goal.has_target_bounds, "Cannot linearize minimization goals"

    # Make a linear epsilon, and constraints relating the linear
    # variable to the original objective function
    path_prefix = "path_" if is_path_goal else ""
    linear_variable = ca.MX.sym(path_prefix + "lineps_{}_{}".format(sym_index, j), goal.size)
    lo_epsilons.append(linear_variable)

    if isinstance(goal, LinearizedOrderGoal):
        coeffs = goal._get_linear_coefficients(goal.order)
    else:
        coeffs = LinearizedOrderGoal._get_linear_coefficients(goal.order)
epsilon_name = path_prefix + "eps_{}_{}".format(sym_index, j)
for a, b in coeffs:
# We add to soft constraints, as these constraints are no longer valid when
# having `keep_soft_constraints` = False. This is because the `epsilon` and
# the `linear_variable` no longer exist in the next priority.
for ensemble_member in range(self.ensemble_size):
def _f(problem,
       goal=goal, epsilon_name=epsilon_name, linear_variable=linear_variable, a=a, b=b,
       ensemble_member=ensemble_member, is_path_constraint=is_path_goal):
    if is_path_constraint:
        eps = problem.variable(epsilon_name)
        lin = problem.variable(linear_variable.name())
    else:
        eps = problem.extra_variable(epsilon_name, ensemble_member)
        lin = problem.extra_variable(linear_variable.name(), ensemble_member)
    return lin - a * eps - b
lo_soft_constraints[ensemble_member].append(_GoalConstraint(goal, _f, 0.0, np.inf, False))
if is_path_goal and options['scale_by_problem_size']:
    goal_m, goal_M = self._gp_min_max_arrays(goal, target_shape=len(self.times()))
    goal_active = np.isfinite(goal_m) | np.isfinite(goal_M)
    n_active = np.sum(goal_active.astype(int), axis=0)
else:
    n_active = 1
def _objective_func(problem, ensemble_member,
                    goal=goal, linear_variable=linear_variable, is_path_goal=is_path_goal,
                    n_active=n_active):
    return goal.weight * linear_variable / n_active

goal._objective_func = _objective_func
epsilons, objectives, soft_constraints, hard_constraints, extra_constants = \
    super()._gp_goal_constraints(goals, sym_index, options, is_path_goal)

epsilons = epsilons + lo_epsilons
for ensemble_member in range(self.ensemble_size):
    soft_constraints[ensemble_member].extend(lo_soft_constraints[ensemble_member])

return epsilons, objectives, soft_constraints, hard_constraints, extra_constants
@@ -69,7 +69,8 @@ def run_optimization_problem(optimization_problem_class, base_folder='..', log_l
# Check for some common mistakes in inheritance order
suggested_order = OrderedSet([
'HomotopyMixin',
'MinAbsGoalProgrammingMixin', 'LinearizedOrderGoalProgrammingMixin', 'GoalProgrammingMixin',
'PIMixin', 'CSVMixin', 'ModelicaMixin',
'ControlTreeMixin', 'CollocatedIntegratedOptimizationProblem', 'OptimizationProblem'])
base_names = OrderedSet([b.__name__ for b in optimization_problem_class.__bases__])
from rtctools.optimization.goal_programming_mixin import Goal, StateGoal
from rtctools.optimization.linearized_order_goal_programming_mixin import (
    LinearizedOrderGoal, LinearizedOrderGoalProgrammingMixin)
from test_case import TestCase
from .test_goal_programming import Model
class RangeGoalX(Goal):
def function(self, optimization_problem, ensemble_member):
return optimization_problem.state("x")
function_range = (-10.0, 10.0)
priority = 1
order = 1
target_min = 1.0
target_max = 2.0
class RangeGoalUOrder1(StateGoal):
state = 'u'
order = 1
target_min = 0.0
target_max = 0.0
priority = 2
class RangeGoalUOrder2(RangeGoalUOrder1):
order = 2
class RangeGoalUOrder2Linearize(LinearizedOrderGoal, RangeGoalUOrder2):
linearize_order = True
class RangeGoalUOrder2NoLinearize(LinearizedOrderGoal, RangeGoalUOrder2):
linearize_order = False
class ModelRangeUOrder1(Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._results = []
def priority_completed(self, priority):
    super().priority_completed(priority)
    self._results.append(self.extract_results())
def goals(self):
return []
def path_goals(self):
return [RangeGoalX(), RangeGoalUOrder1(self)]
class ModelRangeUOrder2(ModelRangeUOrder1):
def path_goals(self):
return [RangeGoalX(), RangeGoalUOrder2(self)]
class ModelRangeUOrder2OverruleLinearize(LinearizedOrderGoalProgrammingMixin, ModelRangeUOrder1):
def goal_programming_options(self):
options = super().goal_programming_options()
options['linearize_goal_order'] = False
return options
def path_goals(self):
return [RangeGoalX(), RangeGoalUOrder2Linearize(self)]
class ModelLinearGoalOrder1(LinearizedOrderGoalProgrammingMixin, ModelRangeUOrder1):
    pass

class ModelLinearGoalOrder2(LinearizedOrderGoalProgrammingMixin, ModelRangeUOrder2):
    pass
class ModelLinearGoalOrder2OverruleNoLinearize(ModelLinearGoalOrder2):
def path_goals(self):
return [RangeGoalX(), RangeGoalUOrder2NoLinearize(self)]
class TestLinearGoalOrder(TestCase):
@classmethod
def setUpClass(cls):
    cls.problem1 = ModelRangeUOrder1()
    cls.problem1_linear = ModelLinearGoalOrder1()
    cls.problem2 = ModelRangeUOrder2()
    cls.problem2_linear = ModelLinearGoalOrder2()
    cls.problem2_linear_overrule = ModelRangeUOrder2OverruleLinearize()
    cls.problem2_nonlinear_overrule = ModelLinearGoalOrder2OverruleNoLinearize()
    for problem in (cls.problem1, cls.problem1_linear, cls.problem2,
                    cls.problem2_linear, cls.problem2_linear_overrule,
                    cls.problem2_nonlinear_overrule):
        problem.optimize()
def test_order_1_linear_equal(self):
self.assertEqual(self.problem1.objective_value, self.problem1_linear.objective_value)
def test_order_1_2_different(self):
o1 = self.problem1.objective_value
o2 = self.problem2.objective_value
self.assertGreater(abs(o1 - o2), 0.25 * abs(o2))
def test_order_2_linear_similar(self):
o2 = self.problem2.objective_value
o2_lin = self.problem2_linear.objective_value
# 0.1 is the default max error when fitting, although this is just an
# approximation when using 'balanced' mode.
self.assertLess(abs(o2 - o2_lin), 0.1 * abs(o2))
self.assertNotEqual(o2, o2_lin)
def test_order_2_nonlinear_overrule_equal(self):
self.assertEqual(self.problem2.objective_value, self.problem2_nonlinear_overrule.objective_value)
def test_order_2_linear_overrule_equal(self):
self.assertEqual(self.problem2_linear.objective_value, self.problem2_linear_overrule.objective_value)