Manual Benders Implementation on docplex
Question
I am using a manual Benders decomposition implementation for a problem. For that, I need to store certain variable values, since I need to transfer them between the master problem and the subproblems. However, as the size of the problem grows, this becomes really slow. I define those values as numpy arrays as follows:
alpha_bar_bd = np.zeros((demand,facility))
beta_bar_bd = np.zeros((demand))
and update them in each iteration as follows:
for i in range(1, demand + 1):
    beta_bar_bd[i-1] = beta_bd[i].solution_value
    for j in range(1, facility + 1):
        alpha_bar_bd[i-1, j-1] = alpha_bd[i, j].solution_value
I think it is worth mentioning that most of these array entries and solution values are zero. Only a subset of the facilities is selected, and even for those selected facilities only some of the values are nonzero. Is there any way to focus only on the nonzero elements and update them safely, without losing information or introducing mistakes?
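One possible direction, shown only as a minimal sketch and not as the author's code: docplex's SolveSolution object can iterate over just the variables with nonzero values via iter_var_values(), so the arrays could be reset and refilled from those entries only. The sketch assumes sol is the SolveSolution returned by the subproblem's solve() and that alpha_bd is the continuous_var_dict defined further down; the reverse-lookup dictionary key_of_var is an illustrative helper that is not part of the original code.
# Sketch: update alpha_bar_bd from the nonzero solution values only.
key_of_var = {var: key for key, var in alpha_bd.items()}  # reverse lookup, built once

alpha_bar_bd[:] = 0.0                        # reset instead of rewriting every entry
for var, val in sol.iter_var_values():       # yields only variables with nonzero values
    key = key_of_var.get(var)
    if key is not None:                      # skip variables that are not in alpha_bd
        i, j = key
        alpha_bar_bd[i - 1, j - 1] = val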
Additionally, I define the CPLEX variables as dictionaries:
# Define sets
alpha_bd = [(i, j) for i in range(1, demand+1) for j in range(1, facility + 1)]
# Define variables
alpha_bd = m_bd.continuous_var_dict(alpha_bd, name="alpha")
The greatest time loss occurs during the following operation:
m_bd_ub.add_constraint(
    theta_bd <= sum(u_bd[j] * alpha_bd[i, j].solution_value
                    for i in range(demand) for j in range(facility))
              + sum((-u_bd[j] + 1) * pi_bd[i, j].solution_value
                    for i in range(demand) for j in range(facility))
              + sum(y_bd[j] * eta_bd[i, j].solution_value
                    for i in range(demand) for j in range(facility))
              + sum(zeta_bd[i].solution_value for i in range(demand)),
    ctname='new constraint')
Is there anything I can do to make this process run faster?
I know that generating Pareto-optimal cuts could improve the convergence speed. However, if the core points are not chosen well, this may increase the computation time rather than reduce it. Is there a way to select core points effectively?
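For reference, one common heuristic from the literature is a Papadakos-style update that keeps the core point as a running convex combination of the previous core point and the latest master solution. The snippet below is only a minimal sketch of that idea, not a recommendation tailored to this model; core_point, y_master, and lam are illustrative names, not taken from the code above.
lam = 0.5  # mixing weight; 0.5 is a common default
# Convex combination of the old core point and the current master solution,
# both assumed to be numpy arrays of the same shape.
core_point = lam * core_point + (1 - lam) * y_master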
I also tried defining variables in a numpy array as follows. However, it did not have any impact on the speed. Would this be expected? I am not sure if I am doing it wrong.
alpha_bd = np.array([[m_bd_lb.continuous_var(name='alpha_bd_{0}_{1}'.format(i, j), lb=0) for j in range(1, facility + 1)] for i in range(1, demand + 1)])
Answer 1
Score: 2
Try using Model.sum() instead of Python's built-in sum(). This is a well-known cause of poor performance when building large Docplex models.
See https://github.com/IBMDecisionOptimization/docplex-examples/blob/master/examples/mp/jupyter/efficient.ipynb for more on efficient model building.
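As a rough illustration (not part of the original answer), the cut from the question might be rebuilt along these lines: pull the .solution_value numbers out as plain constants first, then let Model.sum() aggregate the variable terms. All names (u_bd, y_bd, theta_bd, the dual-variable dictionaries, demand, facility) are reused from the question, and the indexing simply mirrors it.
# Sketch only: precompute the dual values as plain numbers, then aggregate with Model.sum().
alpha_val = {(i, j): alpha_bd[i, j].solution_value
             for i in range(demand) for j in range(facility)}
pi_val = {(i, j): pi_bd[i, j].solution_value
          for i in range(demand) for j in range(facility)}
eta_val = {(i, j): eta_bd[i, j].solution_value
           for i in range(demand) for j in range(facility)}
zeta_const = sum(zeta_bd[i].solution_value for i in range(demand))  # a plain constant

m_bd_ub.add_constraint(
    theta_bd <= m_bd_ub.sum(u_bd[j] * alpha_val[i, j]
                            + (1 - u_bd[j]) * pi_val[i, j]
                            + y_bd[j] * eta_val[i, j]
                            for i in range(demand) for j in range(facility))
                + zeta_const,
    ctname='new constraint')
Model.scal_prod() is another docplex aggregation helper worth looking at when a cut can be written as a flat dot product of variables and coefficients.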