Instance#
- class Instance#
Optimization problem instance.
Note that this class also contains annotations like title, which are not contained in the protobuf message but are stored in the OMMX artifact. These annotations are loaded from the artifact's annotations when reading from an OMMX artifact.
Examples#
Create an instance for the knapsack problem
>>> from ommx.v1 import Instance, DecisionVariable
Profit and weight of items
>>> p = [10, 13, 18, 31, 7, 15]
>>> w = [11, 15, 20, 35, 10, 33]
Decision variables
>>> x = [DecisionVariable.binary(i) for i in range(6)]
Objective and constraint
>>> objective = sum(p[i] * x[i] for i in range(6))
>>> constraint = sum(w[i] * x[i] for i in range(6)) <= 47
Compose as an instance
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=objective,
...     constraints=[constraint],
...     sense=Instance.MAXIMIZE,
... )
- __copy__() Instance#
- add_integer_slack_to_inequality(constraint_id: int, slack_upper_bound: int) Optional[float]#
Convert inequality \(f(x) \leq 0\) to inequality \(f(x) + b s \leq 0\) with an integer slack variable \(s\).
This should be used when convert_inequality_to_equality_with_integer_slack() is not applicable.
The bound of \(s\) will be \([0, \text{slack\_upper\_bound}]\), and the coefficient \(b\) is determined from the lower bound of \(f(x)\).
Since the slack variable is an integer, the resulting inequality has a residual error \(\min_s f(x) + b s\) of at most \(b\); \(b\) is therefore returned so it can be used, for example, to scale the penalty weight.
A larger slack_upper_bound (i.e. a finer-grained slack) yields a smaller \(b\) and thus a smaller residual error, but it requires more bits for the slack variable, so the problem size grows.
Returns: The coefficient \(b\) of the slack variable. If the constraint is trivially satisfied, this returns None.
Examples#
Let’s consider a simple inequality constraint x0 + 2*x1 <= 4.
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[
...         (x[0] + 2*x[1] <= 4).set_id(0)
...     ],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 - 4 <= 0)
Introduce an integer slack variable s in [0, 2]
>>> b = instance.add_integer_slack_to_inequality(
...     constraint_id=0,
...     slack_upper_bound=2
... )
>>> b, instance.constraints[0]
(2.0, Constraint(x0 + 2*x1 + 2*x3 - 4 <= 0))
- add_user_annotation(key: str, value: str, annotation_namespace: str = 'org.ommx.user.') None#
- add_user_annotations(annotations: Mapping[str, str], annotation_namespace: str = 'org.ommx.user.') None#
- as_hubo_format() tuple[dict, float]#
- as_maximization_problem() bool#
Convert the instance to a maximization problem.
If the instance is already a maximization problem, this does nothing.
Returns:
True if the instance was converted, False if it was already a maximization problem.
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 1],
...     sense=Instance.MINIMIZE,
... )
>>> instance.sense == Instance.MINIMIZE
True
>>> instance.objective
Function(x0 + x1 + x2)
Convert to a maximization problem
>>> instance.as_maximization_problem()
True
>>> instance.sense == Instance.MAXIMIZE
True
>>> instance.objective
Function(-x0 - x1 - x2)
If the instance is already a maximization problem, this does nothing
>>> instance.as_maximization_problem()
False
- as_minimization_problem() bool#
Convert the instance to a minimization problem.
If the instance is already a minimization problem, this does nothing.
Returns:
True if the instance was converted, False if it was already a minimization problem.
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 1],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.sense == Instance.MAXIMIZE
True
>>> instance.objective
Function(x0 + x1 + x2)
Convert to a minimization problem
>>> instance.as_minimization_problem()
True
>>> instance.sense == Instance.MINIMIZE
True
>>> instance.objective
Function(-x0 - x1 - x2)
If the instance is already a minimization problem, this does nothing
>>> instance.as_minimization_problem()
False
- as_parametric_instance() ParametricInstance#
- as_qubo_format() tuple[dict, float]#
- convert_inequality_to_equality_with_integer_slack(constraint_id: int, max_integer_range: int) None#
Convert an inequality constraint \(f(x) \leq 0\) to an equality constraint \(f(x) + s/a = 0\) with an integer slack variable \(s\).
Since \(a\) is determined as the minimal multiplier that makes every coefficient of \(a f(x)\) an integer, \(a\) itself and the range of \(s\) can become impractically large.
max_integer_range limits the maximal range of \(s\); an error is returned if the range exceeds it.
Since this method evaluates the bound of \(f(x)\), we may find that:
The bound \([l, u]\) is strictly positive, i.e. \(l > 0\): the instance is infeasible because this constraint can never be satisfied, and an error is raised.
The bound \([l, u]\) is always non-positive, i.e. \(u \leq 0\): this constraint is trivially satisfied; it is moved to removed_constraints, and this method returns without introducing a slack variable or raising an error.
Examples#
Let’s consider a simple inequality constraint x0 + 2*x1 <= 5.
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[
...         (x[0] + 2*x[1] <= 5).set_id(0)
...     ],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 - 5 <= 0)
Introduce an integer slack variable
>>> instance.convert_inequality_to_equality_with_integer_slack(
...     constraint_id=0,
...     max_integer_range=32
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 + x3 - 5 == 0)
- decision_variable_analysis() DecisionVariableAnalysis#
Analyze decision variables in the optimization problem instance.
Returns a comprehensive analysis of all decision variables including:
Kind-based partitioning (binary, integer, continuous, etc.)
Usage-based partitioning (used in objective, constraints, fixed, etc.)
Variable bounds information
Returns: Analysis object containing detailed information about decision variables
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[(x[1] + x[2] == 1).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
>>> analysis = instance.decision_variable_analysis()
>>> analysis.used_decision_variable_ids()
{0, 1, 2}
>>> analysis.used_in_objective()
{0, 1}
>>> analysis.used_in_constraints()
{0: {1, 2}}
- empty() Instance#
Create a trivial empty minimization instance with a zero objective, no constraints, and no decision variables.
Examples#
>>> from ommx.v1 import Instance
>>> instance = Instance.empty()
>>> instance.sense == Instance.MINIMIZE
True
- evaluate(state: ToState, atol: Optional[float] = None) Solution#
Evaluate the given State into a Solution.
This method evaluates the problem instance using the provided state (a map from decision variable IDs to their values), and returns a Solution object containing the objective value, evaluated constraint values, and feasibility information.
Examples#
Create a simple instance with three binary variables and evaluate a solution:
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(x[0] + x[1] <= 1).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
Evaluate it with a state x0 = 1, x1 = 0, x2 = 0, and show the objective and constraints:
>>> solution = instance.evaluate({0: 1, 1: 0, 2: 0})
>>> solution.objective
1.0
If a value is out of range, the solution is infeasible:
>>> solution = instance.evaluate({0: 1, 1: 0, 2: 2})
>>> solution.feasible
False
If some of the decision variables are not set, this raises an error:
>>> instance.evaluate({0: 1, 1: 0})
Traceback (most recent call last):
    ...
ValueError: The state does not contain some required IDs: {VariableID(2)}
- from_bytes(bytes: bytes) Instance#
- from_components(sense: Sense, objective: ToFunction, decision_variables: Sequence[DecisionVariable], constraints: Sequence[Constraint], named_functions: Optional[Sequence[NamedFunction]] = None, description: Optional[InstanceDescription] = None, constraint_hints: Optional[ConstraintHints] = None) Instance#
Create an instance from its components.
Args:
sense: Optimization sense (minimize or maximize)
objective: Objective function
decision_variables: List of decision variables
constraints: List of constraints
named_functions: Optional list of named functions
description: Optional instance description
constraint_hints: Optional constraint hints for solvers
Returns: A new Instance
- get_constraint_by_id(constraint_id: int) Constraint#
Get a specific constraint by ID
- get_decision_variable_by_id(variable_id: int) DecisionVariable#
Get a specific decision variable by ID
- get_named_function_by_id(named_function_id: int) NamedFunction#
Get a specific named function by ID
- get_removed_constraint_by_id(constraint_id: int) RemovedConstraint#
Get a specific removed constraint by ID
- get_user_annotation(key: str, annotation_namespace: str = 'org.ommx.user.') str#
- get_user_annotations(annotation_namespace: str = 'org.ommx.user.') dict[str, str]#
- load_mps(path: str) Instance#
- load_qplib(path: str) Instance#
- log_encode(decision_variable_ids: set[int] = set()) None#
Log-encode the integer decision variables.
Log encoding of an integer variable \(x \in [l, u]\) is to represent by \(m\) bits \(b_i \in \{0, 1\}\) by:
\[x = \sum_{i=0}^{m-2} 2^i b_i + (u - l - 2^{m-1} + 1) b_{m-1} + l\]
where \(m = \lceil \log_2(u - l + 1) \rceil\).
Args:
decision_variable_ids: The IDs of the integer decision variables to log-encode. If not specified (or empty), all integer variables are log-encoded.
Examples#
Let’s consider a simple integer programming problem with three integer variables x0, x1, and x2.
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)
To log-encode the integer variables x0 and x2 (but not x1), call log_encode:
>>> instance.log_encode({0, 2})
An integer variable in the range \([0, 3]\) can be represented by two binary variables:
\[x_0 = b_{0,0} + 2 b_{0,1}, \quad x_2 = b_{2,0} + 2 b_{2,1}\]
These are then substituted into the objective and constraint functions.
>>> instance.objective
Function(x1 + x3 + 2*x4 + x5 + 2*x6)
- logical_memory_profile() str#
Generate folded stack format for memory profiling of this instance.
This method generates a format compatible with flamegraph visualization tools like flamegraph.pl and inferno. Each line has the format: "frame1;frame2;...;frameN bytes"
The output shows the hierarchical memory structure of the instance, making it easy to identify which components consume the most memory.
To visualize with flamegraph:
Save the output to a file, e.g. profile.txt
Generate the SVG: flamegraph.pl profile.txt > memory.svg
Open memory.svg in a browser
Returns: Folded stack format string that can be visualized with flamegraph tools
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> profile = instance.logical_memory_profile()
>>> isinstance(profile, str)
True
- partial_evaluate(state: ToState, atol: Optional[float] = None) Instance#
Creates a new instance with specific decision variables fixed to given values.
This method substitutes the specified decision variables with their provided values, creating a new problem instance where these variables are fixed. This is useful for scenarios such as:
Creating simplified sub-problems with some variables fixed
Incrementally solving a problem by fixing some variables and optimizing the rest
Testing specific configurations of a problem
Args:
state: Maps decision variable IDs to their fixed values. Can be a State object or a dictionary mapping variable IDs to values.
atol: Absolute tolerance for floating point comparisons. If None, uses the default tolerance.
Returns: A new instance with the specified decision variables fixed to their given values.
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = DecisionVariable.binary(1)
>>> y = DecisionVariable.binary(2)
>>> instance = Instance.from_components(
...     decision_variables=[x, y],
...     objective=x + y,
...     constraints=[x + y <= 1],
...     sense=Instance.MINIMIZE
... )
>>> new_instance = instance.partial_evaluate({1: 1})
>>> new_instance.objective
Function(x2 + 1)
Substituted value is stored in the decision variable:
>>> x = new_instance.get_decision_variable_by_id(1)
>>> x.substituted_value
1.0
- penalty_method() ParametricInstance#
Convert to a parametric unconstrained instance by penalty method.
Roughly, this converts a constrained problem:
\[\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0 \; (\forall i), \quad h_j(x) \leq 0 \; (\forall j)\]
to an unconstrained problem with parameters:
\[\min_x f(x) + \sum_i \lambda_i g_i(x)^2 + \sum_j \rho_j h_j(x)^2\]
where \(\lambda_i\) and \(\rho_j\) are the penalty weight parameters for each constraint. If you want to use a single weight parameter, use uniform_penalty_method() instead.
The removed constraints are stored in removed_constraints.
Note: This method converts inequality constraints \(h(x) \leq 0\) to \(|h(x)|^2\), not to \(\max(0, h(x))^2\). This means the penalty is enforced even when \(h(x) < 0\), and \(h(x) = 0\) is unfairly favored. This feature is intended to be used with add_integer_slack_to_inequality().
Examples#
>>> from ommx.v1 import Instance, DecisionVariable, Constraint
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[x[0] + x[1] == 1, x[1] + x[2] == 1],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)
>>> pi = instance.penalty_method()
The constraint is put in removed_constraints
>>> pi.constraints
[]
>>> len(pi.removed_constraints)
2
>>> pi.removed_constraints[0]
RemovedConstraint(x0 + x1 - 1 == 0, reason=penalty_method, parameter_id=3)
>>> pi.removed_constraints[1]
RemovedConstraint(x1 + x2 - 1 == 0, reason=penalty_method, parameter_id=4)
- random_samples(rng: Rng, num_different_samples: int = 5, num_samples: int = 10, max_sample_id: Optional[int] = None) Samples#
Generate random samples for this instance.
The generated samples will contain
num_samples sample entries divided into num_different_samples groups, where each group shares the same state but has different sample IDs.
Args:
rng: Random number generator
num_different_samples: Number of different states to generate
num_samples: Total number of samples to generate
max_sample_id: Maximum sample ID (default: num_samples)
Returns: Samples object
Examples#
Generate samples for a simple instance:
>>> from ommx.v1 import Instance, DecisionVariable, Rng
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(sum(x) <= 2).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
>>> rng = Rng()
>>> samples = instance.random_samples(rng, num_different_samples=2, num_samples=5)
>>> samples.num_samples()
5
- random_state(rng: Rng) State#
Generate a random state for this instance using the provided random number generator.
This method generates random values only for variables that are actually used in the objective function or constraints, as determined by decision variable analysis. Generated values respect the bounds of each variable type.
Args:
rng: Random number generator to use for generating the state.
Returns: A randomly generated state that satisfies the variable bounds of this instance. Only contains values for variables that are used in the problem.
Examples#
Generate random state only for used variables
>>> from ommx.v1 import Instance, DecisionVariable, Rng
>>> x = [DecisionVariable.binary(i) for i in range(5)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> rng = Rng()
>>> state = instance.random_state(rng)
Only used variables have values
>>> set(state.entries.keys())
{0, 1}
Values respect binary bounds
>>> all(state.entries[i] in [0.0, 1.0] for i in state.entries)
True
- reduce_binary_power() bool#
Reduce binary powers in the instance.
This method replaces binary powers in the instance with their equivalent linear expressions. For binary variables, \(x^n = x\) for any \(n \geq 1\), so we can reduce higher powers to linear terms.
Returns:
True if any reduction was performed, False otherwise.
Examples#
Consider an instance with binary variables and quadratic terms:
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(2)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] * x[0] + x[0] * x[1],
...     constraints=[],
...     sense=Instance.MINIMIZE,
... )
>>> instance.objective
Function(x0*x0 + x0*x1)
After reducing binary powers, x0^2 becomes x0:
>>> changed = instance.reduce_binary_power()
>>> changed
True
>>> instance.objective
Function(x0*x1 + x0)
Running it again should not change anything:
>>> changed = instance.reduce_binary_power()
>>> changed
False
- relax_constraint(constraint_id: int, reason: str, parameters: str) None#
Remove a constraint from the instance.
The removed constraint is stored in
removed_constraints, and can be restored by restore_constraint().
Args:
constraint_id: The ID of the constraint to remove.
reason: The reason why the constraint is removed.
parameters: Additional parameters describing the reason.
Examples#
Relax a constraint, then restore it.
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(sum(x) == 3).set_id(1)],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints
[Constraint(x0 + x1 + x2 - 3 == 0)]
>>> instance.relax_constraint(1, "manual relaxation")
>>> instance.constraints
[]
>>> instance.removed_constraints
[RemovedConstraint(x0 + x1 + x2 - 3 == 0, reason=manual relaxation)]
>>> instance.restore_constraint(1)
>>> instance.constraints
[Constraint(x0 + x1 + x2 - 3 == 0)]
>>> instance.removed_constraints
[]
- required_ids() set[int]#
Get the set of decision variable IDs used in the objective and remaining constraints.
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.required_ids()
{0, 1, 2}
- restore_constraint(constraint_id: int) None#
- save_mps(path: str, compress: bool = True) None#
- stats() dict#
Get statistics about the instance.
Returns a dictionary containing counts of decision variables and constraints categorized by kind, usage, and status.
Returns: A dictionary with the following structure:
{ "decision_variables": { "total": int, "by_kind": { "binary": int, "integer": int, "continuous": int, "semi_integer": int, "semi_continuous": int }, "by_usage": { "used_in_objective": int, "used_in_constraints": int, "used": int, "fixed": int, "dependent": int, "irrelevant": int } }, "constraints": { "total": int, "active": int, "removed": int } }
Examples#
>>> from ommx.v1 import Instance
>>> instance = Instance.empty()
>>> stats = instance.stats()
>>> stats["decision_variables"]["total"]
0
>>> stats["constraints"]["total"]
0
- to_bytes() bytes#
- to_hubo(uniform_penalty_weight: Optional[float] = None, penalty_weights: Optional[Mapping[int, float]] = None, inequality_integer_slack_max_range: int = 31) tuple[dict, float]#
Convert the instance to a HUBO format.
This is a Driver API for HUBO conversion, calling single-purpose methods in order:
Convert the instance to a minimization problem by as_minimization_problem().
Check for continuous variables and raise an error if any exist.
Convert inequality constraints:
Try convert_inequality_to_equality_with_integer_slack() first with the given inequality_integer_slack_max_range.
If that fails, use add_integer_slack_to_inequality().
Convert to HUBO with the (uniform) penalty method:
If penalty_weights is given (in dict[constraint_id, weight] form), use penalty_method() with the given weights.
If uniform_penalty_weight is given, use uniform_penalty_method() with the given weight.
If both are None, default to uniform_penalty_weight = 1.0.
Log-encode integer variables by log_encode().
Finally, convert to HUBO format by as_hubo_format().
Please see the documentation for to_qubo() for more information, or the documentation of each individual method for additional details. The difference from to_qubo() is that this method is not restricted to quadratic or linear problems. If you want to customize the conversion, use the individual methods above manually.
- to_qubo(uniform_penalty_weight: Optional[float] = None, penalty_weights: Optional[Mapping[int, float]] = None, inequality_integer_slack_max_range: int = 31) tuple[dict, float]#
Convert the instance to a QUBO format.
This is a Driver API for QUBO conversion, calling single-purpose methods in order:
Convert the instance to a minimization problem by as_minimization_problem().
Check for continuous variables and raise an error if any exist.
Convert inequality constraints:
Try convert_inequality_to_equality_with_integer_slack() first with the given inequality_integer_slack_max_range.
If that fails, use add_integer_slack_to_inequality().
Convert to QUBO with the (uniform) penalty method:
If penalty_weights is given (in dict[constraint_id, weight] form), use penalty_method() with the given weights.
If uniform_penalty_weight is given, use uniform_penalty_method() with the given weight.
If both are None, default to uniform_penalty_weight = 1.0.
Log-encode integer variables by log_encode().
Finally, convert to QUBO format by as_qubo_format().
Please see the documentation of each method for details. If you want to customize the conversion, use the methods above manually.
Examples#
Let’s consider a maximization problem with two integer variables \(x_0, x_1 \in [0, 2]\) subject to an inequality:
\[\max \; x_0 + x_1 \quad \text{s.t.} \quad x_0 + 2 x_1 \leq 3\]
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.integer(i, lower=0, upper=2, name="x", subscripts=[i]) for i in range(2)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(x[0] + 2*x[1] <= 3).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
Convert into QUBO format
>>> qubo, offset = instance.to_qubo()
>>> qubo
{(3, 3): -6.0, (3, 4): 2.0, (3, 5): 4.0, (3, 6): 4.0, (3, 7): 2.0, (3, 8): 4.0, (4, 4): -6.0, (4, 5): 4.0, (4, 6): 4.0, (4, 7): 2.0, (4, 8): 4.0, (5, 5): -9.0, (5, 6): 8.0, (5, 7): 4.0, (5, 8): 8.0, (6, 6): -9.0, (6, 7): 4.0, (6, 8): 8.0, (7, 7): -5.0, (7, 8): 4.0, (8, 8): -8.0}
>>> offset
9.0
For a maximization problem, the sense is converted to minimization while generating the QUBO, and then converted back to maximization.
>>> instance.sense == Instance.MAXIMIZE
True
- uniform_penalty_method() ParametricInstance#
Convert to a parametric unconstrained instance by penalty method with uniform weight.
Roughly, this converts a constrained problem:
\[\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0 \; (\forall i), \quad h_j(x) \leq 0 \; (\forall j)\]
to an unconstrained problem with a parameter:
\[\min_x f(x) + \lambda \left( \sum_i g_i(x)^2 + \sum_j h_j(x)^2 \right)\]
where \(\lambda\) is the uniform penalty weight parameter for all constraints.
The removed constraints are stored in removed_constraints.
Note: This method converts inequality constraints \(h(x) \leq 0\) to \(|h(x)|^2\), not to \(\max(0, h(x))^2\). This means the penalty is enforced even when \(h(x) < 0\), and \(h(x) = 0\) is unfairly favored. This feature is intended to be used with add_integer_slack_to_inequality().
Examples#
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 3],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)
>>> pi = instance.uniform_penalty_method()
The constraint is put in removed_constraints
>>> pi.constraints
[]
>>> len(pi.removed_constraints)
1
>>> pi.removed_constraints[0]
RemovedConstraint(x0 + x1 + x2 - 3 == 0, reason=uniform_penalty_method)
There is only one parameter in the instance
>>> len(pi.parameters)
1
>>> p = pi.parameters[0]
>>> p.id
3
>>> p.name
'uniform_penalty_weight'
- Description: type[InstanceDescription]#
- MAXIMIZE: Sense#
- MINIMIZE: Sense#
- property annotations: dict[str, str]#
Returns a copy of the annotations dictionary.
Mutating the returned dict will not update the object. Use add_user_annotation() or assign to annotations to modify annotations.
- property authors: list[str]#
- property constraint_hints: ConstraintHints#
Read-only property.
- property constraints: list[Constraint]#
Read-only property.
List of all constraints in the instance sorted by their IDs.
- property constraints_df: DataFrame#
Read-only property.
DataFrame of constraints
- property decision_variable_names: set[str]#
Read-only property.
Get all unique decision variable names in this instance
- property decision_variables: list[DecisionVariable]#
Read-only property.
List of all decision variables in the instance sorted by their IDs.
- property decision_variables_df: DataFrame#
Read-only property.
DataFrame of decision variables
- property named_function_names: set[str]#
Read-only property.
Get all unique named function names in this instance
- property named_functions: list[NamedFunction]#
Read-only property.
List of all named functions in the instance sorted by their IDs.
- property named_functions_df: DataFrame#
Read-only property.
DataFrame of named functions
- property removed_constraints: list[RemovedConstraint]#
Read-only property.
List of all removed constraints in the instance sorted by their IDs.
- property removed_constraints_df: DataFrame#
Read-only property.
DataFrame of removed constraints
- property sense: Sense#
Read-only property.
- property used_decision_variables: list[DecisionVariable]#
Read-only property.