# In integer linear programming, how to satisfy (at least) one out of three conditions

# Question


Suppose you are given a function which you have to optimize. You are given 3 conditions, of which at least one has to be satisfied.
The conditions are:

x1 + x2 + x3 + x4 ≤ 30,

−3x1 + 3x2 − 2x3 + x4 ≥ 40,

−x1 + x2 + 4x3 + x4 ≤ 60.

I know you have to add a large enough number on the right-hand side so that the condition becomes non-binding. That is why you introduce a new variable

$$q_i = \begin{cases} 1; & \text{the condition does not necessarily apply} \\ 0; & \text{otherwise} \end{cases}$$

In the end you are supposed to get something like

x1 + x2 + x3 + x4 ≤ 30 + M q1

−3x1 + 3x2 − 2x3 + x4 ≥ 40 +M q2

−x1 + x2 + 4x3 + x4 ≤ 60 + M q3

However, I do not know how to find the missing M's.
Does anyone know how to tackle this problem?

# Answer 1

**Score:** 2

 - The rhs of the second constraint should read `40 - M q2`: a `>=` constraint is relaxed by *subtracting* `M q2`, not adding it.
 - It is better to use separate constants M1, M2, M3, since their best sizes usually differ per constraint.
 - The sizes of the M's are important. They should be as small as possible while not cutting off any feasible solutions. E.g. if `x[i] ∈ [0,100]`, then for the first constraint the largest possible lhs is 400, so `M = 400 - 30 = 370` suffices (see the sketch after this list).
 - I have sometimes even used extra solves to find the best values for my big-M's.
 - Many MIP solvers support **indicator constraints**. That would allow you to write `q1=0 ==> x1 + x2 + x3 + x4 ≤ 30` without any big-M's.
 - I probably should not mention this, but it is also possible to use SOS1 sets to avoid big-M's. If you don't know what SOS sets are, ignore this bullet.
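
As a minimal sketch, the tightest per-constraint M's can be computed from the variable bounds by interval arithmetic; the `tightest_M` helper below is illustrative (not part of any solver API) and assumes `x[i] ∈ [0, 100]`:

```python
# Illustrative helper: the smallest M that makes a relaxed constraint hold
# for every x within its bounds. Assumes x[i] in [lo, hi].

def tightest_M(coeffs, rhs, lo=0, hi=100, sense="<="):
    if sense == "<=":
        # Relaxed form is `lhs <= rhs + M`; the worst case is the largest lhs.
        worst = sum(c * (hi if c > 0 else lo) for c in coeffs)
        return max(0, worst - rhs)
    else:
        # A >= row is relaxed as `lhs >= rhs - M`; the worst case is the smallest lhs.
        worst = sum(c * (lo if c > 0 else hi) for c in coeffs)
        return max(0, rhs - worst)

print(tightest_M([1, 1, 1, 1], 30))                # 370 for the first constraint
print(tightest_M([-3, 3, -2, 1], 40, sense=">="))  # 540 for the second
print(tightest_M([-1, 1, 4, 1], 60))               # 540 for the third
```

Under these bounds the three constraints need 370, 540 and 540 respectively, so per-constraint constants can be noticeably smaller than a blanket `M = 1000`.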

---
In another answer a wrong model (a.k.a. hack) is proposed. Here is the same model using binary variables and big-Ms:

```python
import pulp

prob = pulp.LpProblem(name='original_model', sense=pulp.LpMaximize)
x1, x2, x3, x4 = [pulp.LpVariable(name=f'x{i}', lowBound=0, upBound=100) for i in range(1, 5)]
b1, b2, b3 = [pulp.LpVariable(name=f'b{i}', cat=pulp.LpBinary) for i in range(1, 4)]

# b[i] = 1 relaxes constraint i; b[i] = 0 enforces it.
M = 1000
prob.addConstraint(x1 + x2 + x3 + x4 <= 30 + M*b1)
prob.addConstraint(-3*x1 + 3*x2 - 2*x3 + x4 >= 40 - M*b2)
prob.addConstraint(-x1 + x2 + 4*x3 + x4 <= 60 + M*b3)

# At most two constraints may be relaxed, so at least one must hold.
prob.addConstraint(b1 + b2 + b3 <= 2)

prob.objective = x1 + x2 + x3 + x4
print(prob)
prob.solve()
assert prob.status == pulp.LpStatusOptimal
for v in prob.variables():
    print(v.name, "=", v.varValue)
```

This produces:

    b1 = 1.0
    b2 = 0.0
    b3 = 1.0
    x1 = 53.333333
    x2 = 100.0
    x3 = 100.0
    x4 = 100.0

with an objective of 353.33. The other model reported `x = [100.0, 100.0, 30.0, 100.0]` with an objective of 330. As we are maximizing, that is a non-optimal solution.

These kinds of "short-cut" models with some form of penalty are not rigorous and are often wrong. Sometimes they work on a given data set, but they fail in general. Always be very suspicious of them.

There is also a theoretical reason. 'Or'-type constraints introduce a non-convexity, and we cannot reformulate an inherently non-convex model into a linear convex model. If that were possible, we would not need MIP solvers.

This also means you need to be careful with what is proposed in answers. It may look authoritative but, as in this case, be fundamentally flawed.




# Answer 2

**Score:** 0


The most natural way I could imagine to solve this problem would be to just solve your optimization problem three times: first with only the first constraint, next with only the second, and last with only the third. Then you just take the best of the three solutions identified. Taking the example from the other answers -- maximizing x1+x2+x3+x4 subject to at least one of your constraints, with bounds of [0, 100] on each variable -- we might do:

```python
import pulp

# All three constraints stated in <= form (the >= row is multiplied by -1).
leq_constraints = [{"lhs": [1, 1, 1, 1], "rhs": 30},
                   {"lhs": [3, -3, 2, -1], "rhs": -40},
                   {"lhs": [-1, 1, 4, 1], "rhs": 60}]

solns = []
for constr in leq_constraints:
    prob = pulp.LpProblem(name='original_model', sense=pulp.LpMaximize)
    x = pulp.LpVariable.matrix(name='x', indices=range(1, 5), cat=pulp.LpContinuous,
                               lowBound=0, upBound=100)
    prob.addConstraint(sum(c*var for c, var in zip(constr["lhs"], x)) <= constr["rhs"])
    prob.objective = x[0] + x[1] + x[2] + x[3]
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    solns.append((prob.objective.value(), [xi.value() for xi in x]))

# Best of the three single-constraint solutions.
bestSol = max(solns)
print("Best objective:", bestSol[0])
print("Best solution variable values:", bestSol[1])
```

The output confirms what others have been saying -- 353.33 is the best possible objective value.

```
Best objective: 353.33333300000004
Best solution variable values: [53.333333, 100.0, 100.0, 100.0]
```

This avoids all the hassle of thinking about big-M's, and it comes at pretty minimal cost for you (solving 3 problems instead of 1). Since you indicated in the comments that you needed to solve this by pen and paper, solving three simple problems instead of formulating one giant problem with big-M's and binary variables is definitely the way I would go.

That said, with more complexity in your requirements (e.g. at least 10 of 100 constraints must hold) this would not be a good approach, since you would need to solve 100 choose 10 ≈ 1.7 × 10^13 problems. Then you would surely be better served by going the big-M route.
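
For completeness, here is a sketch of how the big-M route handles the general "at least k of n constraints" requirement in one model: relax each row with its own binary and bound how many rows may be relaxed. It assumes all rows are pre-converted to `<=` form, bounds of [0, 100] on each variable, and (for brevity) a single M large enough for every row:

```python
import pulp

n, k, M = 3, 1, 1000  # require at least k of the n constraints to hold
rows = [([1, 1, 1, 1], 30), ([3, -3, 2, -1], -40), ([-1, 1, 4, 1], 60)]

prob = pulp.LpProblem(name="at_least_k_of_n", sense=pulp.LpMaximize)
x = pulp.LpVariable.matrix(name="x", indices=range(1, 5), lowBound=0, upBound=100)
b = pulp.LpVariable.matrix(name="b", indices=range(n), cat=pulp.LpBinary)

for (coeffs, rhs), bi in zip(rows, b):
    # bi = 1 relaxes this row; bi = 0 enforces it.
    prob.addConstraint(pulp.lpSum(c * xi for c, xi in zip(coeffs, x)) <= rhs + M * bi)
prob.addConstraint(pulp.lpSum(b) <= n - k)  # at most n - k rows may be relaxed

prob.objective = pulp.lpSum(x)
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(prob.objective), [xi.value() for xi in x])
```

With `k = 1` this is the same model as in Answer 1; with `k = 10, n = 100` it is still a single MIP rather than trillions of LPs.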

# Answer 3

**Score:** -1


Depending somewhat on your objective and decision-variable bounds, there is a more compact method than big-M. Rearrange each inequality so that a single variable (here x3) stands alone on the left-hand side, define one slack variable as the maximum of the three resulting right-hand sides, and then enforce that this constrained variable is at most the slack:

```python
import pulp

'''
x1 + x2 + x3 + x4 ≤ 30,       x3 <=   30 -  x1 -  x2 - x4
-3x1 + 3x2 - 2x3 + x4 ≥ 40,   x3 <= (-40 - 3x1 + 3x2 + x4)/2
-x1 + x2 + 4x3 + x4 ≤ 60.     x3 <= ( 60 +  x1 -  x2 - x4)/4
'''

prob = pulp.LpProblem(name='additive_constraints', sense=pulp.LpMaximize)
x = x1, x2, x3, x4 = pulp.LpVariable.matrix(name='x', indices=range(1, 5), cat=pulp.LpContinuous,
                                            lowBound=0, upBound=100)

cu = pulp.LpVariable(name='cu', cat=pulp.LpContinuous)

# cu is the max of the three RHS x3 inequalities
rhs_a =   30 -   x1 -   x2 - x4
rhs_b = (-40 - 3*x1 + 3*x2 + x4)/2
rhs_c = ( 60 +   x1 -   x2 - x4)/4
prob.addConstraint(cu >= rhs_a)
prob.addConstraint(cu >= rhs_b)
prob.addConstraint(cu >= rhs_c)

prob.addConstraint(x3 <= cu)

prob.objective = x1 + x2 + x3 + x4 - cu
print(prob)
prob.solve()
assert prob.status == pulp.LpStatusOptimal
x = x1, x2, x3, x4 = [xi.value() for xi in x]
cu = cu.value()

print('x =', x)
print()

print('Constraints, any of:')
print(x1 + x2 + x3 + x4, '<= 30')
print(-3*x1 + 3*x2 - 2*x3 + x4, '>= 40')
print(-x1 + x2 + 4*x3 + x4, '<= 60')
print()

print('RHS, all of:')
print(cu, '>=', rhs_a.value())
print(cu, '>=', rhs_b.value())
print(cu, '>=', rhs_c.value())
```

```
additive_constraints:
MAXIMIZE
-1*cu + 1*x_1 + 1*x_2 + 1*x_3 + 1*x_4 + 0
SUBJECT TO
_C1: cu + x_1 + x_2 + x_4 >= 30

_C2: cu + 1.5 x_1 - 1.5 x_2 - 0.5 x_4 >= -20

_C3: cu - 0.25 x_1 + 0.25 x_2 + 0.25 x_4 >= 15

_C4: - cu + x_3 <= 0

VARIABLES
cu free Continuous
x_1 <= 100 Continuous
x_2 <= 100 Continuous
x_3 <= 100 Continuous
x_4 <= 100 Continuous

Optimal - objective value 300
Optimal objective 300 - 2 iterations time 0.002
Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.00   (Wallclock seconds):       0.00

x = [100.0, 100.0, 30.0, 100.0]

Constraints, any of:
330.0 <= 30
40.0 >= 40
220.0 <= 60

RHS, all of:
30.0 >= -270.0
30.0 >= 30.0
30.0 >= -10.0
```

In this formulation you have to take care that `cu` gets a lower objective weight than your true objective. If your decision variables are integral this is easy; if they are continuous, it may require a secondary step.
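
One way to realize that secondary step is a lexicographic pair of solves: optimize the true objective first, then pin its optimal value and minimize the slack. The toy model below sketches the pattern on a deliberately simple problem (it is not the model above):

```python
import pulp

# Stage 1 maximizes the true objective; stage 2 pins that optimum and
# minimizes the slack variable, so the slack carries no objective weight.
prob = pulp.LpProblem(name="lexicographic", sense=pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0, upBound=10)
slack = pulp.LpVariable("slack", lowBound=0)
prob.addConstraint(slack >= x - 4)  # slack absorbs any excess of x over 4

prob.objective = x                  # stage 1: true objective only
prob.solve(pulp.PULP_CBC_CMD(msg=0))
best = pulp.value(prob.objective)

prob.addConstraint(x >= best)       # pin the stage-1 optimum
prob.sense = pulp.LpMinimize
prob.objective = slack              # stage 2: drive the slack down
prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(x.value(), slack.value())     # 10.0 6.0
```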
