How do I compare two perlin noise images?
Question
Is it possible to tell whether two Perlin noise images have been generated with the same parameters? And if so, how?
For example, both of these images were generated by the same code, namely:
import numpy as np
import cv2
from sys import argv

def generate_perlin_noise_2d(shape, res):
    def f(t):
        return 6*t**5 - 15*t**4 + 10*t**3

    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # Gradients
    angles = 2*np.pi*np.random.rand(res[0]+1, res[1]+1)
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[0:-1, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[0:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Ramps
    n00 = np.sum(grid * g00, 2)
    n10 = np.sum(np.dstack((grid[:, :, 0]-1, grid[:, :, 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1]-1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[:, :, 0]-1, grid[:, :, 1]-1)) * g11, 2)
    # Interpolation
    t = f(grid)
    n0 = n00*(1-t[:, :, 0]) + t[:, :, 0]*n10
    n1 = n01*(1-t[:, :, 0]) + t[:, :, 0]*n11
    return np.sqrt(2)*((1-t[:, :, 1])*n0 + t[:, :, 1]*n1)

perlin = generate_perlin_noise_2d((1024, 1024), (64, 64))
cv2.imwrite(argv[-1], perlin * 256)
But the two images differ (because of the randomness).
Is there a way to extract statistics that would tell me how they were generated?
Answer 1
Score: 1
Since lower values of the resolution parameter produce a more gradual change in color from white to black, the image contains proportionally more low-frequency content when the parameter is low.
You could use the FFT of the image to get at these frequencies and then check whether there are only low frequencies (low value for the res parameter) or a mix of low and high frequencies (high value for the res parameter).
import numpy as np
import cv2
import matplotlib.pyplot as plt
from sys import argv

def generate_perlin_noise_2d(shape, res):
    def f(t):
        return 6*t**5 - 15*t**4 + 10*t**3

    delta = (res[0] / shape[0], res[1] / shape[1])
    d = (shape[0] // res[0], shape[1] // res[1])
    grid = np.mgrid[0:res[0]:delta[0], 0:res[1]:delta[1]].transpose(1, 2, 0) % 1
    # Gradients
    angles = 2*np.pi*np.random.rand(res[0]+1, res[1]+1)
    gradients = np.dstack((np.cos(angles), np.sin(angles)))
    g00 = gradients[0:-1, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
    g10 = gradients[1:, 0:-1].repeat(d[0], 0).repeat(d[1], 1)
    g01 = gradients[0:-1, 1:].repeat(d[0], 0).repeat(d[1], 1)
    g11 = gradients[1:, 1:].repeat(d[0], 0).repeat(d[1], 1)
    # Ramps
    n00 = np.sum(grid * g00, 2)
    n10 = np.sum(np.dstack((grid[:, :, 0]-1, grid[:, :, 1])) * g10, 2)
    n01 = np.sum(np.dstack((grid[:, :, 0], grid[:, :, 1]-1)) * g01, 2)
    n11 = np.sum(np.dstack((grid[:, :, 0]-1, grid[:, :, 1]-1)) * g11, 2)
    # Interpolation
    t = f(grid)
    n0 = n00*(1-t[:, :, 0]) + t[:, :, 0]*n10
    n1 = n01*(1-t[:, :, 0]) + t[:, :, 0]*n11
    return np.sqrt(2)*((1-t[:, :, 1])*n0 + t[:, :, 1]*n1)

perlin1 = generate_perlin_noise_2d((1024, 1024), (64, 64))
perlin2 = generate_perlin_noise_2d((1024, 1024), (32, 32))
perlin3 = generate_perlin_noise_2d((1024, 1024), (1, 1))

fig, [[p1, p2, p3], [fftp1, fftp2, fftp3]] = plt.subplots(nrows=2, ncols=3)
fig.set_size_inches(14, 8)

p1.imshow(perlin1, cmap='gray')
p2.imshow(perlin2, cmap='gray')
p3.imshow(perlin3, cmap='gray')

perlin1_fft = np.fft.fftshift(np.fft.fft2(perlin1))
perlin2_fft = np.fft.fftshift(np.fft.fft2(perlin2))
perlin3_fft = np.fft.fftshift(np.fft.fft2(perlin3))

fftp1.imshow(np.log(np.abs(perlin1_fft)))
fftp2.imshow(np.log(np.abs(perlin2_fft)))
fftp3.imshow(np.log(np.abs(perlin3_fft)))

fig.show()
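To turn that visual comparison into a single statistic, one option (a sketch that is not part of the original answer; the number of rings is an arbitrary choice) is to reduce each centered spectrum to a radially averaged power profile: a low res value keeps the power concentrated near radius 0, while a higher res value pushes the peak outward.

def radial_power_profile(img, nbins=64):
    # Average power of the centered 2D spectrum inside concentric rings.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    y, x = np.indices(spec.shape)
    r = np.hypot(y - h / 2, x - w / 2)
    ring = np.minimum((r / r.max() * nbins).astype(int), nbins - 1)
    total = np.bincount(ring.ravel(), weights=spec.ravel(), minlength=nbins)
    count = np.bincount(ring.ravel(), minlength=nbins)
    return total / np.maximum(count, 1)

for name, img in [('res 64x64', perlin1), ('res 32x32', perlin2), ('res 1x1', perlin3)]:
    # The ring holding the most power tends to move outward as res grows.
    print(name, 'power peaks in ring', int(np.argmax(radial_power_profile(img))))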
If you want to get a rough estimate of the parameters, you could (see the sketch after this list):
- generate Perlin images for some res parameters (4x4, 8x8, 16x16, etc.) and compute their FFTs
- take your Perlin image and compute its FFT
- compare your image's FFT with the FFT for each res parameter (using a norm or a distance between distributions)
- pick the parameter whose FFT has the smallest distance to your image's FFT
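A minimal sketch of that procedure, reusing generate_perlin_noise_2d from above (the candidate list, the number of trials per candidate, and the Frobenius-norm comparison of log-magnitude spectra are assumptions of this sketch, not part of the original answer):

def fft_log_magnitude(img):
    # Centered log-magnitude spectrum; the epsilon avoids log(0).
    return np.log(np.abs(np.fft.fftshift(np.fft.fft2(img))) + 1e-9)

def estimate_res(image, candidates=(4, 8, 16, 32, 64), trials=5):
    # Compare the image's spectrum against averaged reference spectra
    # generated for each candidate res value and return the closest one.
    target = fft_log_magnitude(image)
    best_res, best_dist = None, np.inf
    for r in candidates:
        refs = [fft_log_magnitude(generate_perlin_noise_2d(image.shape, (r, r)))
                for _ in range(trials)]
        dist = np.linalg.norm(target - np.mean(refs, axis=0))
        if dist < best_dist:
            best_res, best_dist = r, dist
    return best_res

# Example: this should usually recover 32 for an image generated with res (32, 32).
unknown = generate_perlin_noise_2d((1024, 1024), (32, 32))
print(estimate_res(unknown))

Averaging a few reference spectra per candidate smooths out the run-to-run randomness that the question points out, so the distance mostly reflects the res parameter rather than the particular random gradients.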