Gram Schmidt algorithm using numpy/sympy for a custom inner product space


Question


I'm trying to write a Gram-Schmidt algorithm using numpy or sympy for a special inner product space (so not the Euclidean one). The inner product is

⟨x, y⟩ = x1*y1 + 2*x2*y2 + x3*y3 (the inner_product function below)

And the vectors are

[the three vectors; see V in the code below]

import numpy as np

def inner_product(x, y):
    return x[0]*y[0] + 2*x[1]*y[1] + x[2]*y[2]

def gram_schmidt(V):
    U = []
    for i in range(len(V)):
        # start with the current vector
        u = V[i]
        for j in range(i):
            # subtract the projection of V[i] onto each U[j]
            proj = (inner_product(V[i], U[j]) / inner_product(U[j], U[j])) * U[j]
            u = u - proj
        # normalize
        U.append(u / np.linalg.norm(u))
    return np.array(U)

V = np.array([[1, 3, 4], [1, 2, 1], [1, 1, 2]])
U = gram_schmidt(V)
print(U)

It would be great if the algorithm could print all the steps of the process.
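
A quick way to check any output is to form its Gram matrix under this inner product; for an orthonormal set it should be (approximately) the identity. A minimal sketch of such a check, with the hypothetical helper gram_matrix:

import numpy as np

def inner_product(x, y):
    return x[0]*y[0] + 2*x[1]*y[1] + x[2]*y[2]

def gram_matrix(vectors):
    # Entry (i, j) is <v_i, v_j> under the custom inner product;
    # for an orthonormal set this matrix is the identity.
    n = len(vectors)
    return np.array([[inner_product(vectors[i], vectors[j])
                      for j in range(n)] for i in range(n)])

# e.g. print(gram_matrix(U)) for the U computed above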

Answer 1

Score: 1


See Gram-Schmidt for the exact algorithm.

Note that the norm you are using for scaling is the Euclidean norm, not the norm induced by this inner product, so that will obviously become a problem.
Also, if you normalize each vector at every step (with the induced norm), you do not need the division by inner_product(U[j], U[j]) in the subtraction step.
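
As a concrete illustration of that mismatch, the Euclidean norm and the norm induced by this inner product can be compared directly (a minimal sketch):

import numpy as np

def inner_product(x, y):
    return x[0]*y[0] + 2*x[1]*y[1] + x[2]*y[2]

v = np.array([1.0, 3.0, 4.0])        # first row of V from the question
print(np.linalg.norm(v))              # Euclidean norm: sqrt(26) ~ 5.10
print(inner_product(v, v) ** 0.5)     # induced norm:  sqrt(35) ~ 5.92
# Dividing by np.linalg.norm(v) therefore does not give a unit vector
# with respect to this inner product.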

Anyways, here is how I did it:

import copy
import numpy as np

def inner_product(x, y):
    return x[0]*y[0] + 2*x[1]*y[1] + x[2]*y[2]

# Gram Schmidt:
# Take in a list of vectors
def gram_schmidt(V):
    # Orthogonalized vectors, to be returned
    orthogonal = []

    # At each step, take the next vector
    for i in range(len(V)):
        v = copy.deepcopy(V[i])

        # Subtract off the "components" along the current orthogonal set.
        for j in range(i):
            v = v - inner_product(orthogonal[j], v) * orthogonal[j]

        # Normalize with the norm induced by the inner product
        v = v / (inner_product(v, v) ** 0.5)
        orthogonal.append(v)

    return orthogonal

# Try the following:
V = [np.array([1,1,1]), np.array([3,2,1]), np.array([4,1,2])]
GS = gram_schmidt(V)
# Should print roughly 0 and 1 respectively
print(inner_product(GS[0], GS[1]))
print(inner_product(GS[0], GS[0]))
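
Since the question also asked to see all the steps, a variant of the routine above that prints each subtracted coefficient and each normalization could look roughly like this (a sketch; the name gram_schmidt_verbose and the print format are only illustrative):

import copy
import numpy as np

def inner_product(x, y):
    return x[0]*y[0] + 2*x[1]*y[1] + x[2]*y[2]

def gram_schmidt_verbose(V):
    orthogonal = []
    for i in range(len(V)):
        v = copy.deepcopy(V[i])
        print(f"step {i + 1}: start with v = {V[i]}")
        for j in range(i):
            # coefficient of the component along the j-th orthonormal vector
            coeff = inner_product(orthogonal[j], v)
            v = v - coeff * orthogonal[j]
            print(f"  subtract {coeff:.4f} * u{j + 1} -> {v}")
        # normalize with the induced norm
        v = v / (inner_product(v, v) ** 0.5)
        print(f"  normalize -> u{i + 1} = {v}")
        orthogonal.append(v)
    return orthogonal

V = [np.array([1, 1, 1]), np.array([3, 2, 1]), np.array([4, 1, 2])]
GS = gram_schmidt_verbose(V)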
