How to get image coordinates after successful match and stitch using OpenCV

Question


Morning!

I am trying to get the coordinates (x,y of corners) of two images that have successfully been matched using ORB and stitched together using cv2.warpPerspective.

I have tried this by calculating the Homography (cv2.findHomography) from the two ORB matched pairs.

I am however struggling to get the coordinates for the corners of each image after these have been stitched.

Any help would be great.

R


This is the code that I am running at the moment, but I am at a loss; I'm not even sure this is the right approach.

    matches = np.asarray(good)
    if len(good) > 500:  # threshold: the number of matches deemed good enough
        src_pts = np.float32([kp1[m.queryIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
        dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches[:, 0]]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

        # corner pixels of img1: top-left, bottom-left, bottom-right, top-right
        h1, w1 = img1.shape[:2]
        pts = np.float32([[0, 0], [0, h1 - 1], [w1 - 1, h1 - 1], [w1 - 1, 0]]).reshape(-1, 1, 2)

        # img1's corners expressed in img2's coordinate frame
        corners = cv2.perspectiveTransform(pts, H)

        print("img1 corners after warp:", corners)

        # warped image (stored under a new name so the transformed corners are not overwritten)
        dst = cv2.warpPerspective(img1, H, (img2.shape[1] + img1.shape[1], img2.shape[0]))

        dst[0:img2.shape[0], 0:img2.shape[1]] = img2  # stitched image

Answer 1

Score: 0


perspectiveTransform() is the function to use.

Give it the points and either H or the inverse of H; it applies the homography to those points.

The corner pixels of an image are (0, 0) and (width-1, height-1), plus the other two corners, of course.

huangapple
  • Published on 2023-04-13 16:50:01
  • Please retain the original link when reposting: https://go.coder-hub.com/76003489.html