Gradient function evaluation

Question

Below is some code I wrote to track the position of a point moving toward the minimum of a 3D function (defined at the beginning as "eq"). roll.roll() does this by repeatedly evaluating the equation at the point (x, y), moving the point in the direction of the gradient, and then repeating with the new point.

It is very, very slow to run, though. I think this is because either calculate() is inefficient or SymPy's symbolic equation manipulation in roll.roll is really slow. Does anyone have any ideas on how to speed this up? Is there another library, other than SymPy, that is faster?

import sympy as smp

x, y = smp.symbols('x y')
eq = 1*smp.exp(-((x-5)/5)**2 - ((y-1)/2)**2) + \
    2*smp.exp(-((x+3)/2)**2 - ((y-3)/2)**2) + \
    3*smp.exp(-((x-4)/2)**2 - ((y-7)/2)**2)

# Evaluates the two-variable SymPy expression "expression" at the point (x1,y1)
def calculate(expression,x1,y1):
    EQ = smp.lambdify((x,y), expression, 'numpy')
    return EQ(x1,y1)

class roll:
    xDiff = smp.diff(eq,x)
    yDiff = smp.diff(eq,y)
    normalize = eq/smp.sqrt(xDiff**2 + yDiff**2)

    def roll(x,y,duration):
        (x,y) = (x,y)
        for i in range(0,duration):
            (x,y) = (
                x-calculate((roll.normalize*roll.xDiff),x,y),
                y-calculate((roll.normalize*roll.yDiff),x,y)
                )
        return (x,y)

print(roll.roll(1,2,10))

Here is a visual to help show what this program is doing; the bigger the colored dots are, the greater the value of the function at that point. The draggable point represents what the program is attempting to find: https://www.desmos.com/calculator/c8mq2rijqn

I've tried to figure out whether it's possible to pre-calculate normalize*xDiff outside of roll.roll, but I don't know if that's possible.

Also, I believe this would actually be pretty easy to do if the step size didn't depend on the value of the function at the current point. However, I need the point to move faster when it is at a high point on the graph (not just at a point with a steep slope), and that has been really hard to figure out too.

Answer 1

Score: 2

You are calling lambdify inside the loop. The point of lambdify is that it returns a fast function, but lambdify itself is a lot slower than the function it returns. You should call lambdify once and then, inside the loop, repeatedly use the function that it returned.

This code is equivalent to yours and returns exactly the same result, but the loop is about 500x faster:

import sympy as smp

x, y = smp.symbols('x y')
eq = 1*smp.exp(-((x-5)/5)**2 - ((y-1)/2)**2) + \
    2*smp.exp(-((x+3)/2)**2 - ((y-3)/2)**2) + \
    3*smp.exp(-((x-4)/2)**2 - ((y-7)/2)**2)

# Evaluates the two-variable SymPy expression "expression" at the point (x1,y1)
def calculate(expression,x1,y1):
    EQ = smp.lambdify((x,y), expression, 'numpy')
    return EQ(x1,y1)

class roll:
    xDiff = smp.diff(eq,x)
    yDiff = smp.diff(eq,y)
    normalize = eq/smp.sqrt(xDiff**2 + yDiff**2)

    # call lambdify once
    fxy = smp.lambdify((x, y), (x - normalize*xDiff, y - normalize*yDiff))

    def roll(x,y,duration):
        (x,y) = (x,y)
        for i in range(0,duration):
            # in the loop call the function that was returned by lambdify
            x, y = roll.fxy(x, y)
        return (x,y)

print(roll.roll(1,2,10))
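
If you want to see where the time goes on your own machine, here is a rough timing sketch (not part of the original answer) that reuses the calculate helper and the roll class defined above. It compares one "lambdify on every step" evaluation against one call to the precompiled roll.fxy; the exact ratio will vary by machine.

import timeit

# Cost of re-lambdifying the x-update expression and evaluating it once,
# versus calling the function that lambdify already returned.
per_step_slow = timeit.timeit(
    lambda: calculate(roll.normalize * roll.xDiff, 1.0, 2.0), number=100)
per_step_fast = timeit.timeit(lambda: roll.fxy(1.0, 2.0), number=100)
print(per_step_slow / per_step_fast)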

Answer 2

Score: 0

The underlying problem is that you want to do gradient descent.

If you can define functions for the gradient values explicitly, that is the most efficient approach (a small usage sketch follows the definitions below):

def eq(x,y): 
    return y*x**2+2*x*y
def grad_x(x,y):
    return 2*x*y+2*y
def grad_y(x,y):
    return x**2+2*x

But if you are unsure how to compute the gradient of your function by hand, you can use a package that supports automatic differentiation (e.g. PyTorch, or JAX for NumPy-style code).

Here is an example using PyTorch:

import torch

def eq(x, y):
    return (
        1 * torch.exp(-(((x - 5) / 5) ** 2) - ((y - 1) / 2) ** 2)
        + 2 * torch.exp(-(((x + 3) / 2) ** 2) - ((y - 3) / 2) ** 2)
        + 3 * torch.exp(-(((x - 4) / 2) ** 2) - ((y - 7) / 2) ** 2)
    )


def roll(x, y, duration):
    x, y = (
        torch.tensor(x).requires_grad_(True),
        torch.tensor(y).requires_grad_(True),
    )
    for _ in range(0, duration):
        func_value = eq(x, y)
        xDiff = torch.autograd.grad(func_value, x, retain_graph=True)[0]
        yDiff = torch.autograd.grad(func_value, y)[0]
        normalize = func_value / torch.sqrt(xDiff ** 2 + yDiff ** 2)
        x = x - normalize * xDiff
        y = y - normalize * yDiff
    return (x, y)


print(roll(1.0, 2.0, 10))
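
One possible refinement, not something the answer above does: because each new x and y is built from differentiable operations on the previous values, the autograd graph keeps growing across iterations, which is why retain_graph is needed. A sketch of a variant that takes both gradients in a single grad call and detaches after each step, so every iteration starts from a small fresh graph; roll_detached is an illustrative name and eq is the PyTorch function defined above.

import torch

def roll_detached(x, y, duration):
    x = torch.tensor(x, requires_grad=True)
    y = torch.tensor(y, requires_grad=True)
    for _ in range(duration):
        func_value = eq(x, y)
        # One grad call for both inputs, so retain_graph is not needed.
        xDiff, yDiff = torch.autograd.grad(func_value, (x, y))
        normalize = func_value / torch.sqrt(xDiff**2 + yDiff**2)
        # Detach so the next iteration builds its graph from scratch.
        x = (x - normalize * xDiff).detach().requires_grad_(True)
        y = (y - normalize * yDiff).detach().requires_grad_(True)
    return (x, y)

print(roll_detached(1.0, 2.0, 10))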
