Fast Convex Integer Non-linear Programming Python Solver

Question

I have a convex non-linear integer program of the following form:

$$\min_{x \in \mathbb{Z}^N} \; \sum_{i=1}^{N} \frac{\beta_i}{\sqrt{x_i}}$$

$$\text{subject to} \quad \sum_{i=1}^{N} x_i \le K, \qquad x_i \ge 1 \;\; \forall i$$

K is a fixed integer greater than 0, and the betas are real numbers greater than zero.

Note that each x_i is a positive integer, and both the function to be optimized and the constraint are convex.

I have found that this problem falls under the umbrella of Mixed-Integer Non-Linear Programming (MINLP), except that there is no continuous part in the variables to be optimized, so it is not really "mixed".

I have found many references to Python solvers for MINLP, but they all seem to work by alternating between solving for the integer part and the continuous part. Among them is Pyomo's MindtPy solver, which I managed to run on my problem:

from pyomo.environ import *
import numpy as np


N = 50000
K = N * 2
beta = np.absolute(np.random.randn(N) * 1000)

model = ConcreteModel()

# One integer variable per term, each in [1, K].
for i in range(N):
    setattr(model, f"x{i}", Var(within=RangeSet(K), bounds=(1, K), initialize=1))
# Budget constraint: the variables share a total of K units.
model.c = Constraint(expr=sum(getattr(model, f"x{i}") for i in range(N)) <= K)
# Convex, separable objective.
model.objective = Objective(
    expr=sum(beta[i] / sqrt(getattr(model, f"x{i}")) for i in range(N)),
    sense=minimize,
)

SolverFactory('mindtpy').solve(model, strategy="OA", mip_solver='glpk', nlp_solver='ipopt')
res = np.array([getattr(model, f"x{i}").value for i in range(N)])

However, this is extremely slow, and the runtime seems to scale exponentially with N. The duration is tractable for small N, but not for N as big as 50000.
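As an aside, at this scale part of the wall-clock time is model construction itself. Here is a minimal sketch of the same model built with a single indexed Var, the idiomatic Pyomo form that avoids 50000 per-attribute setattr calls; the solve call is unchanged:

from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           PositiveIntegers, minimize, sqrt)
import numpy as np

N = 50000
K = N * 2
beta = np.absolute(np.random.randn(N) * 1000)

model = ConcreteModel()
# One indexed integer variable instead of N separate model attributes.
model.x = Var(range(N), within=PositiveIntegers, bounds=(1, K), initialize=1)
model.c = Constraint(expr=sum(model.x[i] for i in range(N)) <= K)
model.objective = Objective(
    expr=sum(beta[i] / sqrt(model.x[i]) for i in range(N)),
    sense=minimize,
)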

I wonder whether the solver loses time trying to optimize over continuous variables even though there are none. Isn't there an open-source Python solver specialized in convex integer non-linear programs? It is OK if the solver is not available in Python, but it must be open-source.


Answer 1

Score: 1


I don't think the solvers being "mixed" is the problem; all MIP solvers handle purely integral problem classes well. Integer programming is slower than continuous optimization, and non-linear optimization is slower than linear optimization; MINLP is the most difficult of all worlds.

You should start with an analytic assessment of your problem. The naive Jacobian is

import numpy as np
from numpy.random import default_rng
from scipy.optimize import Bounds, LinearConstraint, check_grad, minimize

n = 50  # 50_000
k = 2*n
rand = default_rng(seed=0)
beta = np.abs(rand.normal(scale=1_000, size=n))


def cost_unconstrained(x: np.ndarray) -> float:
    return beta.dot(1/np.sqrt(x))


def jacobian_unconstrained(x: np.ndarray) -> np.ndarray:
    return -0.5*beta * x**-1.5


# Sanity-check the analytic gradient against finite differences.
x0 = rand.uniform(0.5, 1.5, n)
error = check_grad(
    func=cost_unconstrained,
    grad=jacobian_unconstrained,
    x0=x0,
)
assert error < 0.01

continuous_soln = minimize(
    fun=cost_unconstrained,
    jac=jacobian_unconstrained,
    x0=x0,
    bounds=Bounds(lb=0, ub=k),
    constraints=LinearConstraint(A=np.ones(n), lb=k, ub=k),
)
assert continuous_soln.success, continuous_soln.message

But critically, that isn't the Jacobian that actually matters. Notice that it is strictly negative everywhere, so it never vanishes: minimizing without the k-hyperplane constraint just pushes every x_i toward its upper bound, which violates that constraint. The constraint must therefore be active at the optimum.
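In symbols, this is just the observation that

$$\frac{\partial}{\partial x_i} \sum_{j} \frac{\beta_j}{\sqrt{x_j}} = -\frac{\beta_i}{2\, x_i^{3/2}} < 0 \quad \text{for all } x_i > 0,$$

so every coordinate wants to grow without bound, and the budget $\sum_i x_i \le k$ is necessarily tight at the optimum.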

Reframe the problem so that the objective and its Jacobian include the k-hyperplane constraint:

def cost_constrained(x: np.ndarray) -> float:
    # The k-hyperplane constraint is built in by eliminating the last
    # variable: x_n = k - x.sum().
    xn = k - x.sum()
    return beta[:-1].dot(1/np.sqrt(x)) + beta[-1]/np.sqrt(xn)


def jacobian_constrained(x: np.ndarray) -> np.ndarray:
    sum_term = 0.5*beta[-1] * (k - x.sum())**-1.5
    return -0.5*beta[:-1] * x**-1.5 + sum_term


x0 = rand.uniform(0.5, 1.5, n-1)
error = check_grad(
    func=cost_constrained,
    grad=jacobian_constrained,
    x0=x0,
)
assert error < 0.01

constrained_soln = minimize(
    fun=cost_constrained,
    jac=jacobian_constrained,
    x0=x0,
    bounds=Bounds(lb=0, ub=k),
    constraints=LinearConstraint(A=np.ones(n-1), lb=0, ub=k),
)
assert constrained_soln.success, constrained_soln.message

This produces the same optimized result, but shows you that it's possible to find an analytic optimum in the continuous case:

0 == -0.5*beta[:-1] * x**-1.5 + 0.5*beta[-1] * (k - x.sum())**-1.5
0.5*beta[:-1] * x**-1.5 == 0.5*beta[-1] * (k - x.sum())**-1.5
beta[:-1] * x**-1.5 == beta[-1] * (k - x.sum())**-1.5
beta[:-1]/beta[-1] == (x/(k - x.sum()))**1.5
(beta[:-1]/beta[-1])**(2/3) == x/(k - x.sum())
(beta[:-1]/beta[-1])**(2/3) * (k - x.sum()) == x
k*(beta[:-1]/beta[-1])**(2/3) == x + (beta[:-1]/beta[-1])**(2/3) * x.sum()
k*bb23 == x + bb23*x.sum()
k*bb23_i == x_i + bb23_i*(x_1 + x_2 + ...)

# Solve the resulting linear system for the continuous optimum.
bb23 = (beta[:-1]/beta[-1])**(2/3)
A = bb23[:, np.newaxis] + np.eye(n-1)
b = bb23 * k
x0, *_ = np.linalg.lstsq(a=A, b=b, rcond=None)

This analytic optimum is linear, deterministic, and (aside from some potential memory problems with the dense A) possibly easier to find than with a minimize call. Of course it is continuous rather than integral, so it is not your final solution, but it can be used as a reasonable starting point for an actual MINLP solver.
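For example, one simple way to turn it into a feasible integer warm start is to round down and spend the leftover budget on the largest fractional parts. A minimal sketch; the helper name feasible_integer_start is mine, and x_cont is assumed to hold the full continuous optimum of length n:

import numpy as np

def feasible_integer_start(x_cont: np.ndarray, k: int) -> np.ndarray:
    # Round down, but respect the variables' lower bound of 1.
    x = np.maximum(np.floor(x_cont), 1).astype(int)
    # Whatever budget rounding down freed up goes, one unit at a time,
    # to the entries with the largest fractional parts.
    spare = k - x.sum()
    if spare > 0:
        frac = x_cont - np.floor(x_cont)
        for i in np.argsort(-frac)[:spare]:
            x[i] += 1
    return x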

To reduce the memory occupied by the dense A, you may instead solve in sparse format:

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

bb23 = (beta[:-1]/beta[-1])**(2/3)
# Unknowns are (x_1, ..., x_{n-1}, s), where s = k - x.sum() is the
# eliminated last component: x_i - bb23_i*s == 0 and x.sum() + s == k.
A = scipy.sparse.bmat([
    [scipy.sparse.eye(n-1), -bb23[:, np.newaxis]],
    [np.ones((1, n-1)), [[1]]],
], format='csc')
b = scipy.sparse.csc_array(([k], [n-1], [0, 1]), shape=(n, 1))
x0 = scipy.sparse.linalg.spsolve(A=A, b=b)
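Conveniently, the auxiliary unknown s = k - x.sum() equals the eliminated last component, so the spsolve output is already the full n-vector continuous optimum. A usage sketch, reusing the hypothetical feasible_integer_start helper above and the model from the question:

# spsolve returns a sparse column when b is sparse; flatten it first.
x_cont = x0.toarray().ravel() if scipy.sparse.issparse(x0) else np.asarray(x0)
x_start = feasible_integer_start(x_cont, k)
# Warm-start the Pyomo variables before calling the MINLP solver.
for i in range(n):
    getattr(model, f"x{i}").value = int(x_start[i])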

Tags: convex, integer-programming, mixed-integer-programming, nonlinear-optimization, pyomo