Instance#

class Instance#

Optimization problem instance.

Note that this class also contains annotations such as title, which are not part of the protobuf message but are stored in the OMMX artifact. These annotations are loaded when reading from an OMMX artifact.

Examples#

Create an instance of the knapsack problem

>>> from ommx.v1 import Instance, DecisionVariable

Profit and weight of items

>>> p = [10, 13, 18, 31, 7, 15]
>>> w = [11, 15, 20, 35, 10, 33]

Decision variables

>>> x = [DecisionVariable.binary(i) for i in range(6)]

Objective and constraint

>>> objective = sum(p[i] * x[i] for i in range(6))
>>> constraint = sum(w[i] * x[i] for i in range(6)) <= 47

Compose as an instance

>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=objective,
...     constraints=[constraint],
...     sense=Instance.MAXIMIZE,
... )
__copy__() Instance#
__deepcopy__(_memo: Any) Instance#
add_integer_slack_to_inequality(constraint_id: int, slack_upper_bound: int) Optional[float]#

Convert inequality \(f(x) \leq 0\) to inequality \(f(x) + b s \leq 0\) with an integer slack variable \(s\).

  • This should be used when convert_inequality_to_equality_with_integer_slack() is not applicable.

  • The bound of \(s\) will be \([0, \text{slack\_upper\_bound}]\), and the coefficient \(b\) is determined from the lower bound of \(f(x)\).

  • Since the slack variable is integer, the resulting inequality has a residual error \(\min_s f(x) + b s\) of at most \(b\). The coefficient \(b\) is therefore returned so that it can be used, e.g., to scale the penalty weight.

    • A larger slack_upper_bound (i.e. finer-grained slack) yields a smaller \(b\) and thus a smaller residual error, but requires more bits for the slack variable, making the problem larger.

Returns: The coefficient \(b\) of the slack variable. If the constraint is trivially satisfied, this returns None.

Examples#

Let's consider a simple inequality constraint x0 + 2*x1 <= 4.

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[
...         (x[0] + 2*x[1] <= 4).set_id(0)
...     ],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 - 4 <= 0)

Introduce an integer slack variable s in [0, 2]

>>> b = instance.add_integer_slack_to_inequality(
...     constraint_id=0,
...     slack_upper_bound=2
... )
>>> b, instance.constraints[0]
(2.0, Constraint(x0 + 2*x1 + 2*x3 - 4 <= 0))
add_user_annotation(key: str, value: str, annotation_namespace: str = 'org.ommx.user.') None#
add_user_annotations(annotations: Mapping[str, str], annotation_namespace: str = 'org.ommx.user.') None#
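
Examples#

A minimal round-trip sketch for the annotation helpers above (the key and value are arbitrary placeholders; the outputs assume the default org.ommx.user. namespace handling):

>>> from ommx.v1 import Instance
>>> instance = Instance.empty()
>>> instance.add_user_annotation("author", "alice")
>>> instance.get_user_annotation("author")
'alice'
>>> instance.get_user_annotations()
{'author': 'alice'}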
as_hubo_format() tuple[dict, float]#
as_maximization_problem() bool#

Convert the instance to a maximization problem.

If the instance is already a maximization problem, this does nothing.

Returns: True if the instance is converted, False if already a maximization problem.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 1],
...     sense=Instance.MINIMIZE,
... )
>>> instance.sense == Instance.MINIMIZE
True
>>> instance.objective
Function(x0 + x1 + x2)

Convert to a maximization problem

>>> instance.as_maximization_problem()
True
>>> instance.sense == Instance.MAXIMIZE
True
>>> instance.objective
Function(-x0 - x1 - x2)

If the instance is already a maximization problem, this does nothing

>>> instance.as_maximization_problem()
False
as_minimization_problem() bool#

Convert the instance to a minimization problem.

If the instance is already a minimization problem, this does nothing.

Returns: True if the instance is converted, False if already a minimization problem.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 1],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.sense == Instance.MAXIMIZE
True
>>> instance.objective
Function(x0 + x1 + x2)

Convert to a minimization problem

>>> instance.as_minimization_problem()
True
>>> instance.sense == Instance.MINIMIZE
True
>>> instance.objective
Function(-x0 - x1 - x2)

If the instance is already a minimization problem, this does nothing

>>> instance.as_minimization_problem()
False
as_parametric_instance() ParametricInstance#
as_qubo_format() tuple[dict, float]#
convert_all_indicators_to_constraints() dict[int, list[int]]#

Convert every active indicator constraint to regular constraints using Big-M.

See convert_indicator_to_constraint() for the conversion rule. Returns a dict mapping each original indicator ID to the list of regular constraint IDs it produced.

Atomic: every active indicator is validated up front, and only if every one is convertible are the conversions applied. If any indicator fails validation (non-finite bound on a required side), no mutation happens and the instance is left untouched.
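Examples#

A sketch mirroring the convert_indicator_to_constraint() example (the returned mapping assumes the same constraint-ID allocation shown there):

>>> from ommx.v1 import (
...     Instance, DecisionVariable, IndicatorConstraint, Equality,
... )
>>> x = DecisionVariable.continuous(0, lower=0.0, upper=5.0)
>>> y = DecisionVariable.binary(1)
>>> ic = IndicatorConstraint(
...     indicator_variable=y,
...     function=x - 2,
...     equality=Equality.LessThanOrEqualToZero,
... )
>>> instance = Instance.from_components(
...     decision_variables=[x, y],
...     objective=x,
...     constraints={},
...     indicator_constraints={1: ic},
...     sense=Instance.MINIMIZE,
... )
>>> instance.convert_all_indicators_to_constraints()
{1: [0]}
>>> instance.indicator_constraints
{}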

convert_all_one_hots_to_constraints() list[int]#

Convert every active one-hot constraint to a regular equality constraint.

See convert_one_hot_to_constraint() for the conversion rule. Returns the IDs of the newly created regular constraints.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable, OneHotConstraint
>>> x = [DecisionVariable.binary(i) for i in range(4)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints={},
...     one_hot_constraints={
...         1: OneHotConstraint(variables=[0, 1]),
...         2: OneHotConstraint(variables=[2, 3]),
...     },
...     sense=Instance.MINIMIZE,
... )
>>> instance.convert_all_one_hots_to_constraints()
[0, 1]
>>> instance.one_hot_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + x1 - 1 == 0), 1: Constraint(x2 + x3 - 1 == 0)}
convert_all_sos1_to_constraints() dict[int, list[int]]#

Convert every active SOS1 constraint to regular constraints using Big-M.

See convert_sos1_to_constraints() for the conversion rule. Returns a dict mapping each original SOS1 ID to the list of regular constraint IDs it produced.

Atomic: every active SOS1 is validated up front, and only if every one is convertible are the conversions applied. If any SOS1 fails validation (unsupported kind, non-finite bound, domain excludes 0, etc.), no mutation happens and the instance is left untouched.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable, Sos1Constraint
>>> x = [DecisionVariable.binary(i) for i in range(4)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints={},
...     sos1_constraints={
...         1: Sos1Constraint(variables=[0, 1]),
...         2: Sos1Constraint(variables=[2, 3]),
...     },
...     sense=Instance.MINIMIZE,
... )
>>> instance.convert_all_sos1_to_constraints()
{1: [0], 2: [1]}
>>> instance.sos1_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + x1 - 1 <= 0), 1: Constraint(x2 + x3 - 1 <= 0)}
convert_indicator_to_constraint(indicator_id: int) list[int]#

Convert an indicator constraint to regular constraints using the Big-M method.

An indicator constraint \(y = 1 \Rightarrow f(x) \leq 0\) (or \(f(x) = 0\)) is encoded with upper and lower Big-M sides computed from the interval bounds of \(f(x)\):

\[ f(x) + u y - u \leq 0, \qquad -f(x) - l y + l \leq 0, \]

where \(u \geq \sup f(x)\) and \(l \leq \inf f(x)\) are the upper and lower bounds of \(f\) over the decision variables' domains.

Side emission:

  • For <= indicators, only the upper side is considered; it is emitted iff \(u > 0\). If \(u \leq 0\) the constraint is already implied by the variable bounds and no Big-M is emitted.

  • For = indicators, both sides are considered independently: upper emitted iff \(u > 0\), lower emitted iff \(l < 0\).

When an equality side is skipped, the remaining constraints still enforce the implication correctly, because the skipped inequality is already implied by the variable bounds. For example, with \(u \leq 0\), the emitted lower side forces \(f(x) = 0\) at \(y = 1\) when \(u = 0\), or renders \(y = 1\) infeasible when \(u < 0\) (correctly reflecting that \(f(x) = 0\) has no solution under the given bounds). When both \(u = 0\) and \(l = 0\), the bounds imply \(f(x) \equiv 0\), so the equality is vacuously satisfied and nothing is emitted.

Returns the list of newly created regular constraint IDs in insertion order (upper first, then lower). The list is empty when both sides are redundant.

Raises if the bound needed for an emitted side is non-finite, or if \(f(x)\) references a semi-continuous / semi-integer variable (the split domain \(\{0\} \cup [l, u]\) is not uniformly implemented, so Big-M conversion could silently drop the upper side when \(0 \notin [l, u]\)). The instance is not mutated on error.

Examples#

Convert an inequality indicator where the upper side is active:

>>> from ommx.v1 import (
...     Instance, DecisionVariable, IndicatorConstraint, Equality,
... )
>>> x = DecisionVariable.continuous(0, lower=0.0, upper=5.0)
>>> y = DecisionVariable.binary(1)
>>> ic = IndicatorConstraint(
...     indicator_variable=y,
...     function=x - 2,
...     equality=Equality.LessThanOrEqualToZero,
... )
>>> instance = Instance.from_components(
...     decision_variables=[x, y],
...     objective=x,
...     constraints={},
...     indicator_constraints={1: ic},
...     sense=Instance.MINIMIZE,
... )
>>> instance.convert_indicator_to_constraint(1)
[0]
>>> instance.indicator_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + 3*x1 - 5 <= 0)}
convert_inequality_to_equality_with_integer_slack(constraint_id: int, max_integer_range: int) None#

Convert an inequality constraint \(f(x) \leq 0\) to an equality constraint \(f(x) + s/a = 0\) with an integer slack variable \(s\).

  • Since \(a\) is determined as the minimal multiplier that makes every coefficient of \(a f(x)\) integer, \(a\) itself and the range of \(s\) can become impractically large. max_integer_range limits the maximal range of \(s\), and an error is raised if the range exceeds it.

  • Since this method evaluates the bound of \(f(x)\), we may find that:

    • The bound \([l, u]\) is strictly positive, i.e. \(l > 0\): the instance is infeasible because this constraint can never be satisfied, and an error is raised.

    • The bound \([l, u]\) is always non-positive, i.e. \(u \leq 0\): this constraint is trivially satisfied, so it is moved to removed_constraints and the method returns without introducing a slack variable or raising an error.

Examples#

Let's consider a simple inequality constraint x0 + 2*x1 <= 5.

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[
...         (x[0] + 2*x[1] <= 5).set_id(0)
...     ],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 - 5 <= 0)

Introduce an integer slack variable

>>> instance.convert_inequality_to_equality_with_integer_slack(
...     constraint_id=0,
...     max_integer_range=32
... )
>>> instance.constraints[0]
Constraint(x0 + 2*x1 + x3 - 5 == 0)
convert_one_hot_to_constraint(one_hot_id: int) int#

Convert a one-hot constraint to a regular equality constraint.

A one-hot constraint over {x_1, ..., x_n} is mathematically equivalent to the linear equality x_1 + ... + x_n - 1 == 0. This method inserts that equality as a new regular constraint and moves the one-hot constraint into removed_one_hot_constraints with reason="ommx.Instance.convert_one_hot_to_constraint" and a constraint_id parameter pointing to the new regular constraint.

Returns the ID of the newly created regular constraint.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable, OneHotConstraint
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints={},
...     one_hot_constraints={1: OneHotConstraint(variables=[0, 1, 2])},
...     sense=Instance.MINIMIZE,
... )
>>> new_id = instance.convert_one_hot_to_constraint(1)
>>> instance.one_hot_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + x1 + x2 - 1 == 0)}
>>> instance.removed_one_hot_constraints
{1: RemovedOneHotConstraint(OneHotConstraint(exactly one of {x0, x1, x2} = 1), reason=ommx.Instance.convert_one_hot_to_constraint, constraint_id=0)}
convert_sos1_to_constraints(sos1_id: int) list[int]#

Convert a SOS1 constraint to regular constraints using the Big-M method.

A SOS1 constraint over \(\{x_1, \ldots, x_n\}\) with each \(x_i \in [l_i, u_i]\) asserts that at most one \(x_i\) is non-zero. Per variable, a binary indicator \(y_i\) is introduced with the Big-M pair

\[ x_i - u_i y_i \leq 0, \qquad l_i y_i - x_i \leq 0 \]

(trivial sides \(u_i = 0\) or \(l_i = 0\) are skipped), together with the single cardinality constraint

\[ \sum_i y_i - 1 \leq 0. \]

If \(x_i\) is already binary with bound \([0, 1]\), \(x_i\) itself is reused as its indicator (no new variable, no Big-M pair).

Returns the list of newly created regular constraint IDs in insertion order (Big-M upper/lower pairs per non-binary variable, followed by the cardinality sum).

Raises if any \(x_i\) has a non-binary bound that is not finite, if its domain excludes \(0\), or if its kind is semi-continuous / semi-integer (the split domain \(\{0\} \cup [l, u]\) is not uniformly implemented across the codebase yet, so Big-M conversion of these kinds is not supported). The instance is not mutated on error.

Examples#

All-binary SOS1 reduces to sum(x_i) - 1 <= 0 without extra variables:

>>> from ommx.v1 import Instance, DecisionVariable, Sos1Constraint
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints={},
...     sos1_constraints={1: Sos1Constraint(variables=[0, 1, 2])},
...     sense=Instance.MINIMIZE,
... )
>>> instance.convert_sos1_to_constraints(1)
[0]
>>> instance.sos1_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + x1 + x2 - 1 <= 0)}
>>> instance.removed_sos1_constraints
{1: RemovedSos1Constraint(Sos1Constraint(at most one of {x0, x1, x2} ≠ 0), reason=ommx.Instance.convert_sos1_to_constraints, constraint_ids=0)}
decision_variable_analysis() DecisionVariableAnalysis#

Analyze decision variables in the optimization problem instance.

Returns a comprehensive analysis of all decision variables including:

  • Kind-based partitioning (binary, integer, continuous, etc.)

  • Usage-based partitioning (used in objective, constraints, fixed, etc.)

  • Variable bounds information

Returns: Analysis object containing detailed information about decision variables

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[(x[1] + x[2] == 1).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
>>> analysis = instance.decision_variable_analysis()
>>> analysis.used_decision_variable_ids()
{0, 1, 2}
>>> analysis.used_in_objective()
{0, 1}
>>> analysis.used_in_constraints()
{0: {1, 2}}
empty() Instance#

Create a trivial empty minimization instance with a zero objective, no constraints, and no decision variables.

Examples#

>>> from ommx.v1 import Instance
>>> instance = Instance.empty()
>>> instance.sense == Instance.MINIMIZE
True
evaluate(state: ToState, atol: Optional[float] = None) Solution#

Evaluate the given State into a Solution.

This method evaluates the problem instance using the provided state (a map from decision variable IDs to their values), and returns a Solution object containing objective value, evaluated constraint values, and feasibility information.

Examples#

Create a simple instance with three binary variables and evaluate a solution:

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(x[0] + x[1] <= 1).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )

Evaluate it with a state x0 = 1, x1 = 0, x2 = 0, and show the objective and constraints:

>>> solution = instance.evaluate({0: 1, 1: 0, 2: 0})
>>> solution.objective
1.0

If a value is out of range, the solution is infeasible:

>>> solution = instance.evaluate({0: 1, 1: 0, 2: 2})
>>> solution.feasible
False

If some of the decision variables are not set, this raises an error:

>>> instance.evaluate({0: 1, 1: 0})
Traceback (most recent call last):
...
ValueError: The state does not contain some required IDs: {VariableID(2)}

evaluate_samples(samples: ToSamples, atol: Optional[float] = None) SampleSet#
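
Examples#

A hedged sketch of the call pattern (evaluate_samples is otherwise undocumented here): samples generated by random_samples() are evaluated into a SampleSet; only the call pattern is shown:

>>> from ommx.v1 import Instance, DecisionVariable, Rng
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(sum(x) <= 2).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )
>>> samples = instance.random_samples(Rng(), num_different_samples=2, num_samples=5)
>>> sample_set = instance.evaluate_samples(samples)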
from_bytes(bytes: bytes) Instance#
from_components(sense: Sense, objective: ToFunction, decision_variables: Sequence[DecisionVariable], constraints: Mapping[int, Constraint], indicator_constraints: Optional[Mapping[int, IndicatorConstraint]] = None, one_hot_constraints: Optional[Mapping[int, OneHotConstraint]] = None, sos1_constraints: Optional[Mapping[int, Sos1Constraint]] = None, named_functions: Optional[Sequence[NamedFunction]] = None, description: Optional[InstanceDescription] = None) Instance#

Create an instance from its components.

Args:

  • sense: Optimization sense (minimize or maximize)

  • objective: Objective function

  • decision_variables: List of decision variables

  • constraints: Constraints, as a mapping from constraint ID to Constraint (the examples also pass a sequence of constraints)

  • indicator_constraints: Optional mapping from ID to IndicatorConstraint

  • one_hot_constraints: Optional mapping from ID to OneHotConstraint

  • sos1_constraints: Optional mapping from ID to Sos1Constraint

  • named_functions: Optional list of named functions

  • description: Optional instance description

Returns: A new Instance

get_constraint_by_id(constraint_id: int) Constraint#

Get a specific constraint by ID

get_decision_variable_by_id(variable_id: int) DecisionVariable#

Get a specific decision variable by ID

get_named_function_by_id(named_function_id: int) NamedFunction#

Get a specific named function by ID

get_removed_constraint_by_id(constraint_id: int) RemovedConstraint#

Get a specific removed constraint by ID

get_user_annotation(key: str, annotation_namespace: str = 'org.ommx.user.') str#
get_user_annotations(annotation_namespace: str = 'org.ommx.user.') dict[str, str]#
load_mps(path: str) Instance#
load_qplib(path: str) Instance#
log_encode(decision_variable_ids: set[int] = set()) None#

Log-encode the integer decision variables.

Log encoding of an integer variable \(x \in [l, u]\) represents it by \(m\) bits \(b_i \in \{0, 1\}\) as:

\[x = \sum_{i=0}^{m-2} 2^i b_i + (u - l - 2^{m-1} + 1) b_{m-1} + l\]

where \(m = \lceil \log_2(u - l + 1) \rceil\).

Args:

  • decision_variable_ids: The IDs of the integer decision variables to log-encode. If not specified (or empty), all integer variables are log-encoded.

Examples#

Let's consider a simple integer programming problem with three integer variables x0, x1, and x2.

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [
...     DecisionVariable.integer(i, lower=0, upper=3, name="x", subscripts=[i])
...     for i in range(3)
... ]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)

To log-encode the integer variables x0 and x2 (but not x1), call log_encode:

>>> instance.log_encode({0, 2})

An integer variable in the range \([0, 3]\) can be represented by two binary variables:

\[x_0 = b_{0,0} + 2 b_{0,1}, \quad x_2 = b_{2,0} + 2 b_{2,1}\]

And these are substituted into the objective and constraint functions.

>>> instance.objective
Function(x1 + x3 + 2*x4 + x5 + 2*x6)
logical_memory_profile() str#

Generate folded stack format for memory profiling of this instance.

This method generates a format compatible with flamegraph visualization tools like flamegraph.pl and inferno. Each line has the format: "frame1;frame2;...;frameN bytes"

The output shows the hierarchical memory structure of the instance, making it easy to identify which components are consuming the most memory.

To visualize with flamegraph:

  1. Save the output to a file, e.g. profile.txt

  2. Generate SVG: flamegraph.pl profile.txt > memory.svg

  3. Open memory.svg in a browser

Returns: Folded stack format string that can be visualized with flamegraph tools

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> profile = instance.logical_memory_profile()
>>> isinstance(profile, str)
True
partial_evaluate(state: ToState, atol: Optional[float] = None) Instance#

Creates a new instance with specific decision variables fixed to given values.

This method substitutes the specified decision variables with their provided values, creating a new problem instance where these variables are fixed. This is useful for scenarios such as:

  • Creating simplified sub-problems with some variables fixed

  • Incrementally solving a problem by fixing some variables and optimizing the rest

  • Testing specific configurations of a problem

Args:

  • state: Maps decision variable IDs to their fixed values. Can be a State object or a dictionary mapping variable IDs to values.

  • atol: Absolute tolerance for floating point comparisons. If None, uses the default tolerance.

Returns: A new instance with the specified decision variables fixed to their given values.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = DecisionVariable.binary(1)
>>> y = DecisionVariable.binary(2)
>>> instance = Instance.from_components(
...     decision_variables=[x, y],
...     objective=x + y,
...     constraints=[x + y <= 1],
...     sense=Instance.MINIMIZE
... )
>>> new_instance = instance.partial_evaluate({1: 1})
>>> new_instance.objective
Function(x2 + 1)

Substituted value is stored in the decision variable:

>>> x = new_instance.get_decision_variable_by_id(1)
>>> x.substituted_value
1.0
penalty_method() ParametricInstance#

Convert to a parametric unconstrained instance by penalty method.

Roughly, this converts a constrained problem:

\[\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0 \; (\forall i), \quad h_j(x) \leq 0 \; (\forall j)\]

to an unconstrained problem with parameters:

\[\min_x f(x) + \sum_i \lambda_i g_i(x)^2 + \sum_j \rho_j h_j(x)^2\]

where \(\lambda_i\) and \(\rho_j\) are the penalty weight parameters for each constraint. If you want to use single weight parameter, use uniform_penalty_method() instead.

The removed constraints are stored in removed_constraints.

Note: This method converts an inequality constraint \(h(x) \leq 0\) to \(|h(x)|^2\), not to \(\max(0, h(x))^2\). This means the penalty is enforced even where \(h(x) < 0\), and \(h(x) = 0\) is unfairly favored. This feature is intended to be used with add_integer_slack_to_inequality().

Examples#

>>> from ommx.v1 import Instance, DecisionVariable, Constraint
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[x[0] + x[1] == 1, x[1] + x[2] == 1],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)
>>> pi = instance.penalty_method()

The constraint is put in removed_constraints

>>> pi.constraints
[]
>>> len(pi.removed_constraints)
2
>>> pi.removed_constraints[0]
RemovedConstraint(x0 + x1 - 1 == 0, reason=ommx.Instance.penalty_method, parameter_id=3)
>>> pi.removed_constraints[1]
RemovedConstraint(x1 + x2 - 1 == 0, reason=ommx.Instance.penalty_method, parameter_id=4)
random_samples(rng: Rng, num_different_samples: int = 5, num_samples: int = 10, max_sample_id: Optional[int] = None) Samples#

Generate random samples for this instance.

The generated samples will contain num_samples sample entries divided into num_different_samples groups, where each group shares the same state but has different sample IDs.

Args:

  • rng: Random number generator

  • num_different_samples: Number of different states to generate

  • num_samples: Total number of samples to generate

  • max_sample_id: Maximum sample ID (default: num_samples)

Returns: Samples object

Examples#

Generate samples for a simple instance:

>>> from ommx.v1 import Instance, DecisionVariable, Rng
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(sum(x) <= 2).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )

>>> rng = Rng()
>>> samples = instance.random_samples(rng, num_different_samples=2, num_samples=5)
>>> samples.num_samples()
5
random_state(rng: Rng) State#

Generate a random state for this instance using the provided random number generator.

This method generates random values only for variables that are actually used in the objective function or constraints, as determined by decision variable analysis. Generated values respect the bounds of each variable type.

Args:

  • rng: Random number generator to use for generating the state.

Returns: A randomly generated state that satisfies the variable bounds of this instance. Only contains values for variables that are used in the problem.

Examples#

Generate random state only for used variables

>>> from ommx.v1 import Instance, DecisionVariable, Rng
>>> x = [DecisionVariable.binary(i) for i in range(5)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] + x[1],
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )

>>> rng = Rng()
>>> state = instance.random_state(rng)

Only used variables have values

>>> set(state.entries.keys())
{0, 1}

Values respect binary bounds

>>> all(state.entries[i] in [0.0, 1.0] for i in state.entries)
True
reduce_binary_power() bool#

Reduce binary powers in the instance.

This method replaces binary powers in the instance with their equivalent linear expressions. For binary variables, \(x^n = x\) for any \(n \geq 1\), so we can reduce higher powers to linear terms.

Returns: True if any reduction was performed, False otherwise.

Examples#

Consider an instance with binary variables and quadratic terms:

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(2)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=x[0] * x[0] + x[0] * x[1],
...     constraints=[],
...     sense=Instance.MINIMIZE,
... )
>>> instance.objective
Function(x0*x0 + x0*x1)

After reducing binary powers, x0^2 becomes x0:

>>> changed = instance.reduce_binary_power()
>>> changed
True
>>> instance.objective
Function(x0*x1 + x0)

Running it again should not change anything:

>>> changed = instance.reduce_binary_power()
>>> changed
False
reduce_capabilities(supported: set[AdditionalCapability]) set[AdditionalCapability]#

Convert constraint types not in supported into regular constraints.

For every capability in :attr:required_capabilities not in supported, the corresponding bulk conversion is invoked (:meth:convert_all_indicators_to_constraints, :meth:convert_all_one_hots_to_constraints, or :meth:convert_all_sos1_to_constraints). The instance is mutated in place and :attr:required_capabilities becomes a subset of supported on success.

Returns the set of :class:AdditionalCapability values that were actually converted. Empty when nothing needed conversion.

Raises if any underlying Big-M conversion fails (e.g. a SOS1 variable with a non-finite bound).
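
Examples#

A sketch with an empty supported set, so every special constraint is converted (mirrors the convert_all_sos1_to_constraints() example; the returned AdditionalCapability set is assigned but not printed, since its repr is not documented here):

>>> from ommx.v1 import Instance, DecisionVariable, Sos1Constraint
>>> x = [DecisionVariable.binary(i) for i in range(2)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints={},
...     sos1_constraints={1: Sos1Constraint(variables=[0, 1])},
...     sense=Instance.MINIMIZE,
... )
>>> converted = instance.reduce_capabilities(set())
>>> instance.sos1_constraints
{}
>>> instance.constraints
{0: Constraint(x0 + x1 - 1 <= 0)}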

relax_constraint(constraint_id: int, reason: str, parameters: str) None#

Remove a constraint from the instance.

The removed constraint is stored in removed_constraints, and can be restored by restore_constraint().

Args:

  • constraint_id: The ID of the constraint to remove.

  • reason: The reason why the constraint is removed.

  • parameters: Additional parameters to describe the reason.

Examples#

Relax constraint, and restore it.

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(sum(x) == 3).set_id(1)],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.constraints
[Constraint(x0 + x1 + x2 - 3 == 0)]
>>> instance.relax_constraint(1, "manual relaxation")
>>> instance.constraints
[]
>>> instance.removed_constraints
[RemovedConstraint(x0 + x1 + x2 - 3 == 0, reason=manual relaxation)]
>>> instance.restore_constraint(1)
>>> instance.constraints
[Constraint(x0 + x1 + x2 - 3 == 0)]
>>> instance.removed_constraints
[]
relax_indicator_constraint(constraint_id: int, reason: str, parameters: str) None#

Relax an indicator constraint by moving it from active to removed.

required_ids() set[int]#

Get the set of decision variable IDs used in the objective and remaining constraints.

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.required_ids()
{0, 1, 2}
restore_constraint(constraint_id: int) None#
restore_indicator_constraint(constraint_id: int) None#

Restore a removed indicator constraint back to active.

save_mps(path: str, compress: bool = True) None#
stats() dict#

Get statistics about the instance.

Returns a dictionary containing counts of decision variables and constraints categorized by kind, usage, and status.

Returns: A dictionary with the following structure:

{
    "decision_variables": {
        "total": int,
        "by_kind": {
            "binary": int,
            "integer": int,
            "continuous": int,
            "semi_integer": int,
            "semi_continuous": int
        },
        "by_usage": {
            "used_in_objective": int,
            "used_in_constraints": int,
            "used": int,
            "fixed": int,
            "dependent": int,
            "irrelevant": int
        }
    },
    "constraints": {
        "total": int,
        "active": int,
        "removed": int
    }
}

Examples#

>>> from ommx.v1 import Instance
>>> instance = Instance.empty()
>>> stats = instance.stats()
>>> stats["decision_variables"]["total"]
0
>>> stats["constraints"]["total"]
0
to_bytes() bytes#
to_hubo(uniform_penalty_weight: Optional[float] = None, penalty_weights: Optional[Mapping[int, float]] = None, inequality_integer_slack_max_range: int = 31) tuple[dict, float]#

Convert the instance to a HUBO format.

This is a Driver API for HUBO conversion that calls single-purpose methods in order:

  1. Convert the instance to a minimization problem by as_minimization_problem().

  2. Check for continuous variables and raise an error if any exist.

  3. Convert inequality constraints.

  4. Convert to HUBO with the (uniform) penalty method:

     • If penalty_weights is given (as dict[constraint_id, weight]), use penalty_method() with the given weights.

     • If uniform_penalty_weight is given, use uniform_penalty_method() with the given weight.

     • If both are None, default to uniform_penalty_weight = 1.0.

  5. Log-encode integer variables by log_encode().

  6. Finally, convert to HUBO format by as_hubo_format().

Please see the documentation for to_qubo() for more information, or the documentation for each individual method for additional details. The difference from to_qubo() is that this method is not restricted to quadratic problems: the resulting polynomial may contain terms of any degree. If you want to customize the conversion, use the individual methods above manually.
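The returned dict maps tuples of decision-variable IDs to coefficients, and unlike the QUBO case the tuples may have any length. A minimal sketch of evaluating such a polynomial for a binary assignment; the helper name hubo_energy and the toy coefficients below are illustrative, not part of the OMMX API:

```python
import math

def hubo_energy(hubo: dict, offset: float, x: dict) -> float:
    """offset + sum over keys of coefficient * product of the bits in the key."""
    return offset + sum(c * math.prod(x[i] for i in key) for key, c in hubo.items())

# Toy HUBO: 2*x0*x1*x2 + x0, with constant offset 1.0
hubo = {(0, 1, 2): 2.0, (0,): 1.0}
print(hubo_energy(hubo, 1.0, {0: 1, 1: 1, 2: 1}))  # 4.0
print(hubo_energy(hubo, 1.0, {0: 0, 1: 1, 2: 1}))  # 1.0
```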

to_qubo(uniform_penalty_weight: Optional[float] = None, penalty_weights: Optional[Mapping[int, float]] = None, inequality_integer_slack_max_range: int = 31) tuple[dict, float]#

Convert the instance to a QUBO format.

This is a Driver API for QUBO conversion calling single-purpose methods in order:

  1. Convert the instance to a minimization problem by as_minimization_problem().

  2. Check continuous variables and raise an error if any exist.

  3. Convert inequality constraints.

  4. Convert to QUBO with the (uniform) penalty method:

  • If penalty_weights is given (in dict[constraint_id, weight] form), use penalty_method() with the given weights.

  • If uniform_penalty_weight is given, use uniform_penalty_method() with the given weight.

  • If both are None, defaults to uniform_penalty_weight = 1.0.

  5. Log-encode integer variables by log_encode().

  6. Finally, convert to QUBO format by as_qubo_format().

Please see the documentation of each method for details. If you want to customize the conversion, use the methods above manually.

Examples#

Let's consider a maximization problem with two integer variables \(x_0, x_1 \in [0, 2]\) subject to an inequality:

\[\max \; x_0 + x_1 \quad \text{s.t.} \quad x_0 + 2 x_1 \leq 3\]
>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.integer(i, lower=0, upper=2, name="x", subscripts=[i]) for i in range(2)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[(x[0] + 2*x[1] <= 3).set_id(0)],
...     sense=Instance.MAXIMIZE,
... )

Convert into QUBO format

>>> qubo, offset = instance.to_qubo()
>>> qubo
{(3, 3): -6.0, (3, 4): 2.0, (3, 5): 4.0, (3, 6): 4.0, (3, 7): 2.0, (3, 8): 4.0, (4, 4): -6.0, (4, 5): 4.0, (4, 6): 4.0, (4, 7): 2.0, (4, 8): 4.0, (5, 5): -9.0, (5, 6): 8.0, (5, 7): 4.0, (5, 8): 8.0, (6, 6): -9.0, (6, 7): 4.0, (6, 8): 8.0, (7, 7): -5.0, (7, 8): 4.0, (8, 8): -8.0}
>>> offset
9.0

For a maximization problem, the sense is internally converted to minimization while generating the QUBO; the instance itself keeps its original sense.

>>> instance.sense == Instance.MAXIMIZE
True
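The returned dict maps (i, j) index pairs to coefficients, with diagonal keys like (3, 3) encoding linear terms (since \(x_i^2 = x_i\) for binary variables) and the constant part in the offset. A minimal sketch of evaluating such a dict for a binary assignment; the helper name qubo_energy and the toy coefficients are illustrative, not part of the OMMX API:

```python
def qubo_energy(qubo: dict, offset: float, x: dict) -> float:
    """offset + sum over (i, j) pairs of coefficient * x_i * x_j."""
    return offset + sum(c * x[i] * x[j] for (i, j), c in qubo.items())

# Toy QUBO: -x0 + 2*x0*x1, with constant offset 0.5
qubo = {(0, 0): -1.0, (0, 1): 2.0}
print(qubo_energy(qubo, 0.5, {0: 1, 1: 1}))  # 1.5
print(qubo_energy(qubo, 0.5, {0: 1, 1: 0}))  # -0.5
```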
uniform_penalty_method() ParametricInstance#

Convert to a parametric unconstrained instance by penalty method with uniform weight.

Roughly, this converts a constrained problem:

\[\min_x f(x) \quad \text{s.t.} \quad g_i(x) = 0 \; (\forall i), \quad h_j(x) \leq 0 \; (\forall j)\]

to an unconstrained problem with a parameter:

\[\min_x f(x) + \lambda \left( \sum_i g_i(x)^2 + \sum_j h_j(x)^2 \right)\]

where \(\lambda\) is the uniform penalty weight parameter for all constraints.

The removed constraints are stored in removed_constraints.

Note: This method converts inequality constraints \(h(x) \leq 0\) to \(|h(x)|^2\) not to \(\max(0, h(x))^2\). This means the penalty is enforced even for \(h(x) < 0\) cases, and \(h(x) = 0\) is unfairly favored. This feature is intended to use with add_integer_slack_to_inequality().

Examples#

>>> from ommx.v1 import Instance, DecisionVariable
>>> x = [DecisionVariable.binary(i) for i in range(3)]
>>> instance = Instance.from_components(
...     decision_variables=x,
...     objective=sum(x),
...     constraints=[sum(x) == 3],
...     sense=Instance.MAXIMIZE,
... )
>>> instance.objective
Function(x0 + x1 + x2)
>>> pi = instance.uniform_penalty_method()

The constraint is put in removed_constraints

>>> pi.constraints
[]
>>> len(pi.removed_constraints)
1
>>> pi.removed_constraints[0]
RemovedConstraint(x0 + x1 + x2 - 3 == 0, reason=ommx.Instance.uniform_penalty_method)

There is only one parameter in the instance

>>> len(pi.parameters)
1
>>> p = pi.parameters[0]
>>> p.id
3
>>> p.name
'uniform_penalty_weight'
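The transformation itself is easy to reproduce by hand. For the example above, maximizing \(x_0 + x_1 + x_2\) subject to \(x_0 + x_1 + x_2 = 3\) becomes, after switching to minimization, \(-(x_0 + x_1 + x_2) + \lambda (x_0 + x_1 + x_2 - 3)^2\). A standalone sketch of that penalized objective (the function name is mine, not part of the OMMX API):

```python
def penalized(x: list, lam: float) -> float:
    """-(x0 + x1 + x2) + lam * (x0 + x1 + x2 - 3)**2 for the example above."""
    s = sum(x)
    return -s + lam * (s - 3) ** 2

# The feasible point (1, 1, 1) incurs no penalty and attains the minimum
print(penalized([1, 1, 1], lam=1.0))  # -3.0
print(penalized([1, 1, 0], lam=1.0))  # -1.0
```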
Description: type[InstanceDescription]#
MAXIMIZE: Sense#
MINIMIZE: Sense#
property annotations: dict[str, str]#

Returns a copy of the annotations dictionary.

Mutating the returned dict will not update the object. Use add_user_annotation() or assign to annotations to modify annotations.

property authors: list[str]#
property constraints: dict[int, Constraint]#

Read-only property.

Dict of all constraints in the instance keyed by their IDs.

property constraints_df: DataFrame#

Read-only property.

DataFrame of constraints

property created: Optional[datetime]#
property dataset: Optional[str]#
property decision_variable_names: set[str]#

Read-only property.

Get all unique decision variable names in this instance

property decision_variables: list[DecisionVariable]#

Read-only property.

List of all decision variables in the instance sorted by their IDs.

property decision_variables_df: DataFrame#

Read-only property.

DataFrame of decision variables

property description: Optional[InstanceDescription]#

Read-only property.

property indicator_constraints: dict[int, IndicatorConstraint]#

Read-only property.

Dict of all indicator constraints in the instance keyed by their IDs.

property indicator_constraints_df: DataFrame#

Read-only property.

DataFrame of indicator constraints

property license: Optional[str]#
property named_function_names: set[str]#

Read-only property.

Get all unique named function names in this instance

property named_functions: list[NamedFunction]#

Read-only property.

List of all named functions in the instance sorted by their IDs.

property named_functions_df: DataFrame#

Read-only property.

DataFrame of named functions

property num_constraints: Optional[int]#
property num_variables: Optional[int]#
property objective: Function#
property one_hot_constraints: dict[int, OneHotConstraint]#

Read-only property.

Dict of all one-hot constraints in the instance keyed by their IDs.

property one_hot_constraints_df: DataFrame#

Read-only property.

DataFrame of one-hot constraints

property removed_constraints: dict[int, RemovedConstraint]#

Read-only property.

Dict of all removed constraints in the instance keyed by their IDs.

property removed_constraints_df: DataFrame#

Read-only property.

DataFrame of removed constraints

property removed_indicator_constraints: dict[int, RemovedIndicatorConstraint]#

Read-only property.

Dict of all removed indicator constraints in the instance keyed by their IDs.

property removed_indicator_constraints_df: DataFrame#

Read-only property.

DataFrame of removed indicator constraints

property removed_one_hot_constraints: dict[int, RemovedOneHotConstraint]#

Read-only property.

Dict of all removed one-hot constraints in the instance keyed by their IDs.

property removed_one_hot_constraints_df: DataFrame#

Read-only property.

DataFrame of removed one-hot constraints

property removed_sos1_constraints: dict[int, RemovedSos1Constraint]#

Read-only property.

Dict of all removed SOS1 constraints in the instance keyed by their IDs.

property removed_sos1_constraints_df: DataFrame#

Read-only property.

DataFrame of removed SOS1 constraints

property required_capabilities: set[AdditionalCapability]#

Read-only property.

The non-standard constraint capabilities this instance currently uses.

Returns the set of AdditionalCapability values corresponding to the active (non-removed) constraint collections the instance contains. An empty set means the instance only uses regular constraints.

Callers can diff this against an adapter's ADDITIONAL_CAPABILITIES to see what would be converted, or use reduce_capabilities() to perform the conversion.
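The suggested diff is a plain set difference. A sketch using toy string labels in place of real AdditionalCapability values (the variable names are mine):

```python
# Toy stand-ins for AdditionalCapability values
required = {"sos1", "indicator"}   # e.g. instance.required_capabilities
supported = {"sos1"}               # e.g. an adapter's ADDITIONAL_CAPABILITIES

# Capabilities the instance uses but the adapter does not support natively;
# these are what reduce_capabilities() would need to convert away.
unsupported = required - supported
print(unsupported)  # {'indicator'}
```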

property sense: Sense#

Read-only property.

property sos1_constraints: dict[int, Sos1Constraint]#

Read-only property.

Dict of all SOS1 constraints in the instance keyed by their IDs.

property sos1_constraints_df: DataFrame#

Read-only property.

DataFrame of SOS1 constraints

property title: Optional[str]#
property used_decision_variables: list[DecisionVariable]#

Read-only property.