How to rotate and translate an image with OpenCV without losing off-screen data


Question

I'm trying to use OpenCV to perform successive image transformations. I have an image that I want to both rotate and translate while keeping the overall image size constant. I've been using the warpAffine function with rotation and translation matrices to perform the transformations, but the problem is that after performing one transformation some image data is lost and not carried over to the next transformation.

Original image
[image]

Rotated image
[image]

Translated and rotated image. Note how the corners of the original image have been clipped off.
[image]

Desired output
[image]

What I would like is an image very similar to the translated one here, but without the corners clipped off. I understand that this happens because the corner data is discarded after the first affine transform. However, I'm not sure how to preserve this data while still keeping the image's original size with respect to its center. I'm very new to computer vision and don't have a strong background in linear algebra or matrix math. Most of the code I've managed to make work has been learned from online tutorials, so any accessible help fixing this would be greatly appreciated!

Here is the code used to generate the images above:

import numpy as np
import cv2

def rotate_image(image, angle):
    w, h = (image.shape[1], image.shape[0])
    cx, cy = (w//2, h//2)

    M = cv2.getRotationMatrix2D((cx, cy), -1 * angle, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    return rotated

def translate_image(image, d_x, d_y):
    M = np.float32([
        [1, 0, d_x],
        [0, 1, d_y]
    ])
    
    return cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))

path = "dog.jpg"
image = cv2.imread(path)
angle = 30.0
d_x = 200
d_y = 300
rotated = rotate_image(image, angle)
translated = translate_image(rotated, d_x, d_y)

Answer 1

Score: 3

Chaining the rotation and translation transformations is what you are looking for.

Instead of applying the rotation and the translation one after the other, we can apply cv2.warpAffine once with the equivalent chained transformation matrix.
Calling cv2.warpAffine only once prevents the corner clipping caused by the intermediate image (the intermediate rotated-only image whose corners are already cut off).

Chaining two transformation matrices is done by multiplying them.
In OpenCV's convention points are treated as column vectors, so the matrix of the first transformation goes on the right and each later transformation is multiplied in from the left.


The last row of a 2D affine transformation matrix is always [0, 0, 1].
The OpenCV convention omits this last row, so M is a 2x3 matrix.
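
For reference, the same convention written in homogeneous coordinates (M0 and M1 are the 2x3 OpenCV matrices, completed to 3x3 by the stacked [0, 0, 1] row):

$$
T_0 = \begin{bmatrix} M_0 \\ 0 \;\, 0 \;\, 1 \end{bmatrix}, \qquad
T_1 = \begin{bmatrix} M_1 \\ 0 \;\, 0 \;\, 1 \end{bmatrix}, \qquad
T = T_1 T_0, \qquad
\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
$$

Because points are column vectors, the rightmost factor T_0 (the rotation) acts on the point first and T_1 (the translation) acts second.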

Chaining the OpenCV transformations M0 and M1 involves the following stages:

  • Append the omitted last row [0, 0, 1] to M0 and to M1:
T0 = np.vstack((M0, np.array([0, 0, 1])))
T1 = np.vstack((M1, np.array([0, 0, 1])))
  • Chain the transformations by matrix multiplication:
T = T1 @ T0
  • Remove the last row (which equals [0, 0, 1]) to match OpenCV's 2x3 convention:
M = T[0:2, :]
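
As a quick sanity check of the multiplication order (a minimal sketch; the rotation center, angle and offsets are arbitrary example values), you can map one pixel both ways and compare:

import numpy as np
import cv2

M0 = cv2.getRotationMatrix2D((320, 240), -30.0, 1.0)  # example rotation matrix (2x3)
M1 = np.float64([[1, 0, 200],
                 [0, 1, 300]])                         # example translation matrix (2x3)

T0 = np.vstack((M0, np.array([0, 0, 1])))
T1 = np.vstack((M1, np.array([0, 0, 1])))
M = (T1 @ T0)[0:2, :]                                  # chained 2x3 matrix

p = np.array([100.0, 50.0, 1.0])                       # one pixel (x, y) in homogeneous form
step_by_step = M1 @ np.append(M0 @ p, 1.0)             # rotate first, then translate
chained = M @ p                                        # single combined transform

print(np.allclose(step_by_step, chained))              # prints True: T1 @ T0 applies M0 first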

The higher-level solution applies the following stages:

  • Compute the rotation transformation matrix.
  • Compute the translation transformation matrix.
  • Chain the rotation and translation transformations.
  • Apply the affine transformation with the chained transformation matrix.

Code sample:

import numpy as np
import cv2

def get_rotation_mat(image, angle):
    w, h = (image.shape[1], image.shape[0])
    cx, cy = (w//2,h//2)

    M = cv2.getRotationMatrix2D((cx, cy), -1*angle, 1.0)
    #rotated = cv2.warpAffine(image, M, (w,h))
    return M

def get_translation_mat(d_x, d_y):
    M = np.float64([
        [1, 0, d_x],
        [0, 1, d_y]
    ])

    #return cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    return M

def chain_affine_transformation_mats(M0, M1):
    """
    Chain the affine transformations given by the M0 and M1 matrices.
    M0 - 2x3 matrix applying the first affine transformation (e.g. rotation).
    M1 - 2x3 matrix applying the second affine transformation (e.g. translation).
    Returns M - a 2x3 matrix that chains the two transformations M0 and M1 (e.g. rotation followed by translation).
    """
    T0 = np.vstack((M0, np.array([0, 0, 1])))  # Append row [0, 0, 1] to the bottom of M0 (the last row of the identity matrix); T0 is a 3x3 matrix.
    T1 = np.vstack((M1, np.array([0, 0, 1])))  # Append row [0, 0, 1] to the bottom of M1.
    T = T1 @ T0  # Chain transformations T0 and T1 using matrix multiplication.
    M = T[0:2, :]  # Drop the last row of T (it is always [0, 0, 1]; the OpenCV convention omits it), giving a 2x3 matrix.
    return M

path = "dog.jpg"
image = cv2.imread(path)
angle = 30.0
d_x = 200
d_y = 300
#rotated = rotate_image(image, angle)
#translated = translate_image(rotated, d_x, d_y)
rotationM = get_rotation_mat(image, angle)  # Compute the rotation transformation matrix
translationM = get_translation_mat(d_x, d_y)  # Compute the translation transformation matrix

M = chain_affine_transformation_mats(rotationM, translationM)  # Chain the rotation and translation transformations (rotation first, then translation)
transformed_image = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))  # Apply the affine transformation with the chained (unified) matrix M

cv2.imwrite("transformed_dog.jpg", transformed_image)  # Save the output for testing

Output:
[image]
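
If you need to chain more than two transforms, the same helper composes naturally. A minimal sketch reusing the chain_affine_transformation_mats function above (the list contents are only an example):

from functools import reduce

mats = [rotationM, translationM]  # applied in list order: rotation first, then translation; append further 2x3 matrices as needed
M_total = reduce(chain_affine_transformation_mats, mats)  # pairwise chaining, left to right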
