OpenCV: rotation/translation vector to OpenGL modelview matrix

I'm trying to do some basic augmented reality with OpenCV. The way I'm going about it is to use findChessboardCorners to get a set of points from a camera image. I then create a 3D quad along the z = 0 plane and use solvePnP to get the pose (rotation and translation) relating the imaged points to the planar points. From that, I figure I should be able to set up a modelview matrix that will let me render a cube with the right pose on top of the image.
The documentation for solvePnP says it outputs a rotation vector that "(together with [the translation vector]) brings points from the model coordinate system to the camera coordinate system." I think this is the opposite of what I want: since my quad is on the plane z = 0, I want a modelview matrix that transforms that quad into the appropriate 3D plane.
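Concretely, since [R|t] maps model coordinates into camera coordinates, its inverse is [R^T | -R^T t]. Here is a minimal sketch of the inversion I have in mind, assuming rvec and tvec are the CV_64F outputs of solvePnP from the code below:

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    // Invert the model->camera transform returned by solvePnP.
    void invertPose(const cv::Mat &rvec, const cv::Mat &tvec,
                    cv::Mat &R_inv, cv::Mat &t_inv)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);    // 3x1 rotation vector -> 3x3 rotation matrix
        R_inv = R.t();             // a rotation matrix's inverse is its transpose
        t_inv = -R_inv * tvec;     // t' = -R^T * t
    }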
I figured that by performing the opposite rotations and translations, in reverse order, I could compute the correct modelview matrix, but that doesn't seem to work. While the rendered object (a cube) does move with the camera image, and the translation looks roughly right, the rotation doesn't work at all: it rotates on multiple axes when it should only be rotating on one, and sometimes in the wrong direction. Here's what I'm doing so far:
    std::vector<Point2f> corners;
    bool found = findChessboardCorners(*_imageBuffer, cv::Size(5, 4), corners,
                                       CV_CALIB_CB_FILTER_QUADS |
                                       CV_CALIB_CB_FAST_CHECK);
    if(found)
    {
        // Use the same pattern size as the detection above
        drawChessboardCorners(*_imageBuffer, cv::Size(5, 4), corners, found);
        std::vector<double> distortionCoefficients(5);  // camera distortion
        distortionCoefficients[0] = 0.070969;
        distortionCoefficients[1] = 0.777647;
        distortionCoefficients[2] = -0.009131;
        distortionCoefficients[3] = -0.013867;
        distortionCoefficients[4] = -5.141519;

        // Since the image was resized, we need to scale the found corner points
        float sw = _width/SMALL_WIDTH;
        float sh = _height/SMALL_HEIGHT;

        // The four outer corners of the 5x4 pattern, in the same winding
        // order as square_verts below
        std::vector<Point2f> board_verts;
        board_verts.push_back(Point2f(corners[0].x * sw, corners[0].y * sh));
        board_verts.push_back(Point2f(corners[15].x * sw, corners[15].y * sh));
        board_verts.push_back(Point2f(corners[19].x * sw, corners[19].y * sh));
        board_verts.push_back(Point2f(corners[4].x * sw, corners[4].y * sh));
        Mat boardMat(board_verts);

        // A 2x2 quad on the z = 0 plane, centered on the origin
        std::vector<Point3f> square_verts;
        square_verts.push_back(Point3f(-1, 1, 0));
        square_verts.push_back(Point3f(-1, -1, 0));
        square_verts.push_back(Point3f(1, -1, 0));
        square_verts.push_back(Point3f(1, 1, 0));
        Mat squareMat(square_verts);
        // Transform the camera's intrinsic parameters into an OpenGL camera matrix
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();

        // Camera parameters
        double f_x = 786.42938232;  // Focal length in x axis
        double f_y = 786.42938232;  // Focal length in y axis (usually the same?)
        double c_x = 217.01358032;  // Camera principal point x
        double c_y = 311.25384521;  // Camera principal point y

        cv::Mat cameraMatrix = (cv::Mat_<float>(3, 3) <<
            f_x, 0.0, c_x,
            0.0, f_y, c_y,
            0.0, 0.0, 1.0);
        // solvePnP writes doubles, so declare these CV_64F to match the
        // at<double> reads below
        Mat rvec(3, 1, CV_64FC1), tvec(3, 1, CV_64FC1);
        solvePnP(squareMat, boardMat, cameraMatrix, distortionCoefficients,
                 rvec, tvec);

        _rv[0] = rvec.at<double>(0, 0);
        _rv[1] = rvec.at<double>(1, 0);
        _rv[2] = rvec.at<double>(2, 0);
        _tv[0] = tvec.at<double>(0, 0);
        _tv[1] = tvec.at<double>(1, 0);
        _tv[2] = tvec.at<double>(2, 0);
    }
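(As an aside, the GL_PROJECTION block above is where a projection matrix built from these intrinsics would get loaded. The usual construction is along the following lines; this is only a sketch, and the image size w/h, the clip planes zNear/zFar, and the y-origin flip are my assumptions:)

    // Sketch: column-major OpenGL projection matrix from pinhole intrinsics.
    // The signs of the principal-point terms depend on whether the image's
    // y origin is flipped to match OpenGL's bottom-left convention.
    void projectionFromIntrinsics(double f_x, double f_y, double c_x, double c_y,
                                  double w, double h, double zNear, double zFar,
                                  float m[16])
    {
        m[0]  = (float)(2.0 * f_x / w);          // x focal term
        m[1]  = 0.0f; m[2] = 0.0f; m[3] = 0.0f;
        m[4]  = 0.0f;
        m[5]  = (float)(2.0 * f_y / h);          // y focal term
        m[6]  = 0.0f; m[7] = 0.0f;
        m[8]  = (float)((w - 2.0 * c_x) / w);    // principal-point offset x
        m[9]  = (float)((2.0 * c_y - h) / h);    // principal-point offset y (flipped)
        m[10] = (float)(-(zFar + zNear) / (zFar - zNear));
        m[11] = -1.0f;                           // perspective divide by -z
        m[12] = 0.0f; m[13] = 0.0f;
        m[14] = (float)(-2.0 * zFar * zNear / (zFar - zNear));
        m[15] = 0.0f;
    }

which would then be loaded with glMatrixMode(GL_PROJECTION); glLoadMatrixf(m);.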
Then, in the drawing code...
    GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
    modelViewMatrix = GLKMatrix4Translate(modelViewMatrix, -tv[1], -tv[0], -tv[2]);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[0], 1.0f, 0.0f, 0.0f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[1], 0.0f, 1.0f, 0.0f);
    modelViewMatrix = GLKMatrix4Rotate(modelViewMatrix, -rv[2], 0.0f, 0.0f, 1.0f);
The cube I'm rendering is a unit cube created around the origin (i.e., with vertices from -0.5 to 0.5). I know the OpenGL transformation functions perform transformations in "reverse order," so the above should rotate the cube along the z, y, and then x axes, and then translate it. However, it seems like it's being translated first and then rotated, so perhaps Apple's GLKMatrix4 works differently?
This question seems very similar to mine, and in particular coder9's answer seems like it might be more or less what I'm looking for. However, I tried it and compared the results to my method, and the matrices I arrived at were the same in both cases. I feel like that answer is right, but that I'm missing some crucial detail.
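For reference, my reading of that general approach: convert the rotation vector to a full 3x3 matrix with cv::Rodrigues and pack it column-major, negating the y and z rows because OpenCV's camera looks down +z with y pointing down while OpenGL's looks down -z with y up. A sketch (my paraphrase, not that answer verbatim):

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    // Sketch: pack the solvePnP pose into a column-major OpenGL modelview.
    void modelViewFromPose(const cv::Mat &rvec, const cv::Mat &tvec, float m[16])
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);  // rotation vector -> 3x3 rotation matrix (CV_64F)
        for (int col = 0; col < 3; ++col)
        {
            m[col * 4 + 0] =  (float)R.at<double>(0, col);
            m[col * 4 + 1] = -(float)R.at<double>(1, col);  // flip y
            m[col * 4 + 2] = -(float)R.at<double>(2, col);  // flip z
            m[col * 4 + 3] = 0.0f;
        }
        m[12] =  (float)tvec.at<double>(0);
        m[13] = -(float)tvec.at<double>(1);  // flip y
        m[14] = -(float)tvec.at<double>(2);  // flip z
        m[15] = 1.0f;
    }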
I was trying to figure out how to do this myself. See this post on the topic of coordinate-system differences: http://stackoverflow.com/questions/9081900/reference-coordinate-system-changes-between-opencv-opengl-and-android-sensor – Koobz 2012-04-26 00:15:16