Bad axis calculation with ArUco markers
Question
I'm looking at the OpenCV examples for ArUco markers and how to identify their position.
It marks the marker outlines well for me, but the orientation does not come out right: the Z axis always points to the upper-left corner.
This is my code:
float markerLength = 0.2;

// Set coordinate system
cv::Mat objPoints(4, 1, CV_32FC3);
objPoints.ptr<cv::Vec3f>(0)[0] = cv::Vec3f(-markerLength/2.f,  markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[1] = cv::Vec3f( markerLength/2.f,  markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[2] = cv::Vec3f( markerLength/2.f, -markerLength/2.f, 0);
objPoints.ptr<cv::Vec3f>(0)[3] = cv::Vec3f(-markerLength/2.f, -markerLength/2.f, 0);

cv::Mat cameraMatrix(3, 3, cv::DataType<double>::type);
cv::setIdentity(cameraMatrix);

cv::Mat distCoeffs(4, 1, cv::DataType<double>::type);
distCoeffs.at<double>(0) = 0;
distCoeffs.at<double>(1) = 0;
distCoeffs.at<double>(2) = 0;
distCoeffs.at<double>(3) = 0;

std::vector<cv::Vec3d> rvecs, tvecs;

while (m_videoCap.isOpened()) {
    m_videoCap >> m_frame;
    if (!m_frame.empty()) {
        // Draw marker centers
        cv::Mat outputImage = m_frame.clone();

        std::vector<int> markerIds;
        std::vector<std::vector<cv::Point2f>> markerCorners, rejectedCandidates;
        cv::Ptr<cv::aruco::DetectorParameters> parameters = cv::aruco::DetectorParameters::create();
        cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_250);
        cv::aruco::detectMarkers(m_frame, dictionary, markerCorners, markerIds, parameters, rejectedCandidates);
        cv::aruco::drawDetectedMarkers(outputImage, markerCorners);

        int nMarkers = markerCorners.size();
        std::vector<cv::Vec3d> rvecs(nMarkers), tvecs(nMarkers);
        for (int i = 0; i < nMarkers; i++) {
            auto& corners = markerCorners[i];
            cv::solvePnP(objPoints, corners, cameraMatrix, distCoeffs, rvecs.at(i), tvecs.at(i));
            cv::drawFrameAxes(outputImage, cameraMatrix, distCoeffs, rvecs[i], tvecs[i], 0.1, 2);
        }

        m_Pixmap = cvMatToQPixmap(outputImage);
        emit newPixmapCaptured();
    }
}
Does anyone know what I'm doing wrong?
EDIT:
I've changed the camera initialization to this one, as suggested by Christoph Rackwitz:
// f[px] = x[px] * z[m] / x[m]
float focalLen = 950 * 1.3f / 0.45f;
cv::Matx33f cameraMatrix(focalLen, 0.0f,     (1280-1) / 2.0f,
                         0.0f,     focalLen, (780-1)  / 2.0f,
                         0.0f,     0.0f,     1.0f);
And now it works fine.
Thanks for your help.
Answer 1
Score: 3
cv::Mat cameraMatrix(3,3,cv::DataType<double>::type);
cv::setIdentity(cameraMatrix);
This is insufficient. The camera matrix must contain a sensible focal length as well as the optical center.
A proper camera matrix looks like:
[[ f, 0, cx],
 [ 0, f, cy],
 [ 0, 0,  1]]
You can get that entire matrix (and distortion coefficients) from calibration, but that's hard.
You can also just calculate those values. That'll be close enough.
- Optical center: cx = (width-1) / 2, and similarly for cy.
- Focal length (a worked example follows below):
  - Take a picture of some easily measured object, like... an aruco marker, or a yard stick.
  - Measure its physical distance z[m] to the camera and its physical length x[m].
  - Measure its length in pixels, x[px].
  - Calculate f[px] = x[px] * z[m] / x[m].
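As a rough worked sketch of that recipe, using the numbers from the question's EDIT (a 0.45 m object at 1.3 m distance spanning 950 px, with a 1280x780 image; your measurements will differ):

// Worked example of f[px] = x[px] * z[m] / x[m], using the
// measurements from the EDIT above (illustrative only):
float x_px = 950.0f;            // object length in pixels
float z_m  = 1.3f;              // physical distance to the camera [m]
float x_m  = 0.45f;             // physical length of the object [m]
float f_px = x_px * z_m / x_m;  // ~2744 px

cv::Matx33f cameraMatrix(f_px, 0.0f, (1280 - 1) / 2.0f,
                         0.0f, f_px, (780 - 1)  / 2.0f,
                         0.0f, 0.0f, 1.0f);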
You can forget about distortion coefficients for now. Set them all to 0. Those will be relevant if your camera has noticeable pincushion or barrel distortion from its lens.
You can use Mat::zeros and Mat::eye to initialize your matrices.
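For example, a minimal sketch (the values of f, cx, cy here are placeholders; compute them as described above):

// Assumed values, purely for illustration:
double f = 2744.0, cx = (1280 - 1) / 2.0, cy = (780 - 1) / 2.0;

cv::Mat cameraMatrix = cv::Mat::eye(3, 3, CV_64F);  // start from identity
cameraMatrix.at<double>(0, 0) = f;    // fx
cameraMatrix.at<double>(1, 1) = f;    // fy
cameraMatrix.at<double>(0, 2) = cx;   // optical center x
cameraMatrix.at<double>(1, 2) = cy;   // optical center y

cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);  // no distortion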
You can generate a Mat from literal element values with predefined fixed-size matrix types like Matx33f:

Matx33f K(1, 2, 3,
          4, 5, 6,
          7, 8, 9);

Or using Mat_:

Mat K = (Mat_<float>(3,3) <<
    1, 2, 3,
    4, 5, 6,
    7, 8, 9);
It looks like estimatePoseSingleMarkers got deprecated; that must have happened with the v4.7 release, or maybe already with v4.6. The docs recommend using solvePnP.
The advantage of that is: you get to decide the marker's coordinate system, i.e. where the origin lies (center or corner) and which way the axes point.
Downside: it's a little inconvenient to be expected to generate the object points yourself; a sketch follows below.
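For illustration, here is one possible convention (a sketch, not the only choice): origin at the marker's top-left corner, X along the top edge, Y down the left edge, Z going into the marker plane. cameraMatrix, distCoeffs, and markerCorners[i] are assumed to come from the question's detection loop:

// One possible marker frame (illustrative): origin at the top-left
// corner, X right along the top edge, Y down the left edge, Z into
// the marker plane (right-handed).
float s = 0.2f;  // marker side length [m]
std::vector<cv::Point3f> objPoints = {
    {0, 0, 0},   // top-left
    {s, 0, 0},   // top-right
    {s, s, 0},   // bottom-right
    {0, s, 0},   // bottom-left
};
// The order must match the corner order returned by detectMarkers:
// top-left, top-right, bottom-right, bottom-left.
cv::Vec3d rvec, tvec;
cv::solvePnP(objPoints, markerCorners[i], cameraMatrix, distCoeffs, rvec, tvec);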
OpenCV's aruco module is still kind of a mess. There's an enum called PatternPositionType (used in EstimateParameters). They use the terms "clockwise" and "counter-clockwise", while assuming that's relative to a coordinate system with Z going into the surface of the marker. Better terms would have been "positive" and "negative" rotation around the Z axis.