Is it possible to interpolate a video in quarter-frame steps using optical flow?


Question

I am currently trying to interpolate video frames using optical flow. I was able to interpolate the video by referring to this question and using it as a reference. So my question is: is it possible to interpolate the video even more finely, in 1/4 units, using the original frames and the optical-flow frames? Thank you in advance.

I tried halving the optical flow frame and remapping it onto the original image, but it did not work (which is obvious in hindsight, but...).


Answer 1

Score: 1


I will try to solve this problem with the help of an example. Suppose my previous image is
[![enter image description here][1]][1]


and my next image is [![enter image description here][2]][2]


The `next image` was created by translating the `previous image` to the right (I hope that is visible).

Now, let's calculate the optical flow between the two images and plot the flow vectors:
```python
optical_flow = cv2.calcOpticalFlowFarneback(img1, img2, None, pyr_scale=0.5, levels=3, winsize=200, iterations=3, poly_n=5, poly_sigma=1.1, flags=0)
```

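For orientation: `optical_flow` is a float32 array of shape `(h, w, 2)`, where each entry holds the `(dx, dy)` displacement of that pixel between the two images. A minimal sketch of how to inspect it, continuing from the snippet above (the probed pixel is just an arbitrary choice):

```python
# optical_flow[y, x] = (dx, dy): per-pixel displacement between img1 and img2
h, w = optical_flow.shape[:2]
dx, dy = optical_flow[h // 2, w // 2]  # flow vector at the image centre
print(f"flow field is {w}x{h}; centre vector is ({dx:.1f}, {dy:.1f})")
```
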
I have taken the `previous image` and drawn the flow vectors on top of it.
[![enter image description here][3]][3]
This image shows that the `previous image` was formed by translating the `next image` to the left.

If you followed this, the task at hand is fairly easy to accomplish, i.e., interpolating frames between these two frames.

Essentially, the optical flow vectors tell us, for each pixel, where it came from in the `previous image`. Let's talk about one particular pixel that has moved from (100, 100) to (200, 100), i.e., 100 pixels to the right.
The question is: where would this pixel be if the motion between these two frames were uniform?
Example: if there were 4 frames in between these two frames, the pixel would be at the following locations (a short sketch of the arithmetic follows the list):

initial position     = (100, 100)
interpolated frame 1 = (120, 100)
interpolated frame 2 = (140, 100)
interpolated frame 3 = (160, 100)
interpolated frame 4 = (180, 100)
final position       = (200, 100)
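
To make the arithmetic explicit: each intermediate position is the start position plus a fraction of the total displacement. Here is a minimal sketch with the hypothetical pixel from the example above:

```python
import numpy as np

start = np.array([100, 100])   # pixel location in the previous frame
end = np.array([200, 100])     # pixel location in the next frame
num_inbetween = 4              # number of interpolated frames

for k in range(1, num_inbetween + 1):
    alpha = k / (num_inbetween + 1)      # fraction of the motion completed
    pos = start + alpha * (end - start)  # linear interpolation
    print(f"interpolated frame {k} = {tuple(int(v) for v in pos)}")
```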

Enough talking, let me show you the full code.

```python
import cv2
import numpy as np

def load_image(image_path):
    """Load an image and convert it to grayscale."""
    img = cv2.imread(image_path)
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def plot_flow_vectors(img, flow):
    """Plot the optical flow vectors on top of the image."""
    img = img.copy()
    h, w = img.shape[:2]
    step_size = 40
    cv2.imshow("original_image", img)
    for y in range(0, h, step_size):
        for x in range(0, w, step_size):
            # Get the flow vector at this point
            dx, dy = flow[y, x]
            # Draw an arrow to represent the flow vector
            cv2.arrowedLine(img, (x, y), (int(x + dx), int(y + dy)), (0, 255, 0), 1, cv2.LINE_AA, tipLength=0.8)
    cv2.imshow("image_with_flow_vectors", img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

img1 = load_image('first.jpeg')   # load the previous image
img2 = load_image('second.jpeg')  # load the next image
optical_flow = cv2.calcOpticalFlowFarneback(img1, img2, None, pyr_scale=0.5, levels=3, winsize=200, iterations=3, poly_n=5, poly_sigma=1.1, flags=0)  # calculate the optical flow

plot_flow_vectors(img1, optical_flow)  # plot the flow vectors

# Generate the in-between frames
num_frames = 10
h, w = optical_flow.shape[:2]
for frame_num in range(1, num_frames + 1):
    alpha = frame_num / num_frames
    flow = alpha * optical_flow
    flow[:, :, 0] += np.arange(w)
    flow[:, :, 1] += np.arange(h)[:, np.newaxis]
    interpolated_frame = cv2.remap(img2, flow, None, cv2.INTER_LINEAR)
    cv2.imshow("intermediate_frame", interpolated_frame)
    cv2.waitKey(0)
```
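
Coming back to the original question about 1/4 units: nothing changes except the `alpha` values. A minimal sketch, reusing `img2` and `optical_flow` from the code above and assuming quarter-step values of 0.25, 0.5 and 0.75:

```python
# Quarter-step interpolation: reuse img2 and optical_flow from above.
h, w = optical_flow.shape[:2]
grid_x = np.arange(w, dtype=np.float32)
grid_y = np.arange(h, dtype=np.float32)[:, np.newaxis]

for alpha in (0.25, 0.5, 0.75):
    map_xy = alpha * optical_flow  # scale the displacement field
    map_xy[:, :, 0] += grid_x      # convert x displacements to absolute coordinates
    map_xy[:, :, 1] += grid_y      # convert y displacements to absolute coordinates
    quarter_frame = cv2.remap(img2, map_xy, None, cv2.INTER_LINEAR)
    cv2.imshow(f"quarter_frame_{alpha}", quarter_frame)
    cv2.waitKey(0)
cv2.destroyAllWindows()
```

With this remap-on-`img2` formulation, `alpha = 0` reproduces the next frame and `alpha = 1` approximately reproduces the previous frame, so the three quarter-step values give the in-between frames.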

PS: the optical flow vectors also show some motion in the y direction (when there isn't any). I think this is an artifact of the large motion between the two frames; it is generally not the case for consecutive frames of a video.

Edit: I created a GitHub repository that interpolates frames between two video frames, either to increase the video's frame rate or to produce a slow-motion video from the input video. Check it out [here](https://github.com/satinder147/video-frame-interpolation).


  [1]: https://i.stack.imgur.com/EbVN2.jpg
  [2]: https://i.stack.imgur.com/qRnb8.jpg
  [3]: https://i.stack.imgur.com/SVoCJ.jpg




