2015-02-10

I'm using OpenCV's warpPerspective to stitch several images into a mosaic, but I have a problem... how do I find where the coordinate (0,0) ends up after applying warpPerspective (OpenCV)?

When I apply warpPerspective, the generated image does not fit in the window: only part of the image appears. I need to know where the coordinate (0,0) lands after applying warpPerspective. You can see that part of the first image is cut off if you compare it with the second image shown below.

So my question is: how do I find the coordinates of the starting point (0,0) after applying warpPerspective? I need help with this. How can I solve it using OpenCV's tools?

Here is my code:

#include <stdio.h>
#include <iostream>

#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace cv;

void readme();

/** @function main */
int main(int argc, char** argv)
{
    // Load the images
    Mat image1 = imread("f.jpg");
    Mat image2 = imread("e.jpg");
    Mat gray_image1;
    Mat gray_image2;
    // Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY, not CV_RGB2GRAY)
    cvtColor(image1, gray_image1, CV_BGR2GRAY);
    cvtColor(image2, gray_image2, CV_BGR2GRAY);

    imshow("first image", image2);
    imshow("second image", image1);

    //-- Step 1: Detect the keypoints using the SURF detector
    int minHessian = 100;

    SurfFeatureDetector detector(minHessian);

    std::vector<KeyPoint> keypoints_object, keypoints_scene;

    detector.detect(gray_image1, keypoints_object);
    detector.detect(gray_image2, keypoints_scene);

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_object, descriptors_scene;

    extractor.compute(gray_image1, keypoints_object, descriptors_object);
    extractor.compute(gray_image2, keypoints_scene, descriptors_scene);

    //-- Step 3: Match descriptor vectors using the FLANN matcher
    FlannBasedMatcher matcher;
    std::vector<DMatch> matches;
    matcher.match(descriptors_object, descriptors_scene, matches);

    double max_dist = 0;
    double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors_object.rows; i++)
    {
        double dist = matches[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist);
    printf("-- Min dist : %f \n", min_dist);

    //-- Use only "good" matches (i.e. whose distance is less than 3*min_dist)
    std::vector<DMatch> good_matches;

    for (int i = 0; i < descriptors_object.rows; i++)
    {
        if (matches[i].distance < 3 * min_dist)
        {
            good_matches.push_back(matches[i]);
        }
    }

    std::vector<Point2f> obj;
    std::vector<Point2f> scene;

    for (size_t i = 0; i < good_matches.size(); i++)
    {
        //-- Get the keypoints from the good matches
        obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
        scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
    }

    // Find the homography matrix
    Mat H = findHomography(obj, scene, CV_RANSAC);
    // Use the homography matrix to warp the images
    cv::Mat result;
    warpPerspective(image1, result, H, cv::Size());
    imshow("WARP", result);
    cv::Mat half(result, cv::Rect(0, 0, image2.cols, image2.rows));
    image2.copyTo(half);

    Mat key;
    //drawKeypoints(image1, keypoints_scene, key, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
    //drawMatches(image2, keypoints_scene, image1, keypoints_object, matches, result);

    imshow("Result", result);

    imwrite("teste.jpg", result);
    waitKey(0);
    return 0;
}

/** @function readme */
void readme()
{
    std::cout << " Usage: Panorama <img1> <img2>" << std::endl;
}

In this image you can see the second image being cut off: [image]

I would like my image to come out in this form instead:
[image]

Hey, could you please check your image links? I edited your question and tried to fix them, but I'm not sure whether I matched the right images. – kebs 2015-02-10 14:54:38

You want to know where your original (0,0) gets warped to? Try `cv::perspectiveTransform` with `cv::Point2f(0,0)` as input (you need to put it in a vector, I guess). – Micka 2015-02-10 16:44:34

Micka, that's exactly what I need to know! How can I find where my original (0,0) ends up? Can you help me, Micka? – 2015-02-10 23:32:32

Answer

The modification below should fix your problem of the stitched image being cut off (and the unwanted black area).

Try changing this line:

warpPerspective(image1,result,H,cv::Size()); 

to:

warpPerspective(image1,result,H,cv::Size(image1.cols+image2.cols,image1.rows)); 

This allocates a result matrix wide enough for both images (image1.cols + image2.cols) and with the same number of rows as image1, so the warped output is no longer cut off at the edge of the canvas.