How to get image coordinates after successful match and stitch using OpenCV
Question
英文:
Morning!
I am trying to get the coordinates (x,y of corners) of two images that have successfully been matched using ORB and stitched together using cv2.warpPerspective.
I have tried this by calculating the Homography (cv2.findHomography) from the two ORB matched pairs.
I am however struggling to get the coordinates for the corners of each image after these have been stitched.
Any help would be great.
R
This is the code that I am running at the moment, but I am at a loss. I am not even sure this is the right approach.
matchess = np.asarray(good)
if len(good) > 500:  # the number here is the number of matches deemed good
    src_pts = np.float32([kp1[m.queryIdx].pt for m in matchess[:, 0]]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matchess[:, 0]]).reshape(-1, 1, 2)
    H, masked = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    pts = np.float32([[0, 0], [0, -1], [-1, -1], [-1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, H)  # transformed corner points (overwritten on the next line)
    dst = cv2.warpPerspective(img1, H, (img2.shape[1] + img1.shape[1], img2.shape[0]))  # warped image
    print("dst 1:", dst)
    dst[0:img2.shape[0], 0:img2.shape[1]] = img2  # stitched image
    print("dst 2:", dst)
Answer 1
Score: 0
perspectiveTransform() is the function to use. Give it the points and either H or the inverse of H; it will apply the homography to those points. The corner pixels of an image are (0,0) and (width-1, height-1), and the other two corners of course.
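To make that concrete, here is a minimal sketch of applying a homography to the four image corners. The homography H and the image size are made-up values for illustration (in the question they come from cv2.findHomography and img1.shape); the helper warp_points is a hypothetical name that reproduces the same math cv2.perspectiveTransform performs, so the sketch runs with NumPy alone.

```python
import numpy as np

# Assumed homography, here a pure translation of (100, 50) for illustration.
# In the question's code this would be H from cv2.findHomography.
H = np.array([[1.0, 0.0, 100.0],
              [0.0, 1.0,  50.0],
              [0.0, 0.0,   1.0]])

def warp_points(pts, H):
    """Apply a 3x3 homography to an Nx2 array of points
    (the same projection cv2.perspectiveTransform applies)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T                              # project through H
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

w, h = 640, 480  # example image size; use img1.shape[1], img1.shape[0]
corners = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
print(warp_points(corners, H))  # each corner shifted by (100, 50) under this H
```

With cv2 available, `cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H)` gives the same result; just remember to use width-1 and height-1 for the corners rather than the negative indices in the question's pts array, and not to overwrite the transformed corners with the output of cv2.warpPerspective.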