OpenMDAO outputs scaling in a coupled model
Question
I have a coupled model defined in OpenMDAO that I'm using with an optimisation driver, and the outputs of the various components vary a lot in order of magnitude. I would like to scale all outputs to roughly the 0-1 range, so that the Jacobian has elements of a similar order of magnitude, for a better optimisation process. I am using the ref and ref0 arguments to self.add_output(), and I define the components that I want to scale as constraints in order to have them visible in the scaling report (just to help check the scaling results).
A sample of my model is below (this is radically simplified to provide a reproducible example, but it keeps the structure of the more complex model I am dealing with).
The problem I have is that the Jacobian is all grayed out (light gray). What are some possible reasons for this?
In my original model, I have design variables that feed into a component, whose output feeds into another component, and so on through a few components, until the output of the last component is fed to the constraint component. If I want to ensure proper scaling of the Jacobian (of constraints w.r.t. design variables), should I be scaling every single output of every single component along the path from the design variables to the constraints?
Is there any way to make OpenMDAO compute the derivatives numerically based directly on the finite difference of the constraints w.r.t. the design variables, rather than via the chain rule, component by component, to avoid the need to scale the entire model?
import numpy as np
import openmdao.api as om

class MDA(om.Group):

    class Inertia(om.ExplicitComponent):
        def setup(self):
            self.add_input('r', val=[3.1, 3, 3], units='m')
            self.add_input('T', val=102.0, units='m')
            self.add_output('M', shape=(6, 6), val=np.ones([6, 6]),
                            ref=1e6*np.ones([6, 6]), units='kg')

        def setup_partials(self):
            self.declare_partials('*', '*')

        def compute(self, inputs, outputs):
            r = inputs['r']
            T = inputs['T'][0]
            outputs['M'] = np.linalg.norm(r) * T * np.ones([6, 6])

    class Cost(om.ExplicitComponent):
        def setup(self):
            self.add_input('M', shape=(6, 6), val=np.ones([6, 6]), units='kg')
            self.add_output('cost', val=0.0, units='USD')

        def compute(self, inputs, outputs):
            M = inputs['M']
            outputs['cost'] = np.linalg.norm(M) * 10

        def setup_partials(self):
            self.declare_partials('*', '*')

    class ConCmp0(om.ExplicitComponent):
        def setup(self):
            self.add_input('M', shape=(6, 6), val=np.ones([6, 6]), units='kg')
            self.add_output('con0', shape=(6, 6), val=np.ones([6, 6]), units='kg')

        def setup_partials(self):
            self.declare_partials('*', '*')

        def compute(self, inputs, outputs):
            M = inputs['M']
            outputs['con0'] = M

    class ObjCmp(om.ExplicitComponent):
        def setup(self):
            self.add_input('cost', val=0.0, units='USD')
            self.add_output('obj', val=0.0)

        def setup_partials(self):
            self.declare_partials('*', '*')

        def compute(self, inputs, outputs):
            outputs['obj'] = inputs['cost']

    def setup(self):
        self.add_subsystem('inertia', self.Inertia(),
                           promotes_inputs=['T', 'r'], promotes_outputs=['M'])
        self.add_subsystem('cost', self.Cost(),
                           promotes_inputs=['M'], promotes_outputs=['cost'])
        self.add_subsystem('con_cmp0', self.ConCmp0(),
                           promotes_inputs=['M'], promotes_outputs=['con0'])
        self.add_subsystem('obj_cmp', self.ObjCmp(),
                           promotes_inputs=['cost'], promotes_outputs=['obj'])

# build the model
prob = om.Problem(model=MDA())
model = prob.model

# setup the optimization
prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

model.add_design_var('r', lower=[3, 3, 3], upper=[10, 10, 10])
model.add_design_var('T', lower=100, upper=140)
model.add_objective('obj')
model.add_constraint('con0', lower=0)

prob.setup()

# Set initial values.
prob.set_val('r', [3.1, 3, 3])
prob.set_val('T', 102.0)

# run the optimization
prob.run_driver()
Answer 1
Score: 0
If you want FD at the top of your model, then use the approx_totals method, called at the top. Note that this method can be called at any level of the model hierarchy, but your question specifically asked about top-level FD.
FD details aside, there is a problem with the sample model that is causing:
> The problem I have is that the Jacobian is all grayed out (light gray)
So, FD method aside, your root problem is likely in the way you've declared the derivatives to begin with.
You have called self.declare_partials('*', '*') in every component here. The function signature for that method shows that the default is method="exact", which means you are telling OpenMDAO that you will provide exact analytic derivatives. So OM is expecting you to implement a compute_partials or compute_jacvec_product method for your explicit components. You didn't do that, so the framework thinks that all your derivatives are zero. This is why you are seeing the whole Jacobian grayed out in the scaling report.
When you set approx_totals at the top level of the model, OM will ignore the derivative settings of the components below that call, so this should fix your issue. However, the more correct way to fix it would be to set the method="fd" argument in all your declare_partials calls. You might still prefer top-level FD overall, but in case you ever turn that off, the component derivatives would at least still work properly.