
First off, I'm fairly new to matching techniques, so bear with me: this is about image matching with OpenCV.

I'm working on an application that matches collected images (single-cell samples) against training images.

I have used a SIFT detector and a SURF descriptor extractor with FLANN-based matching to match a set of training data to the collected images, but the results I'm getting are very poor. I'm using the same code as in the OpenCV documentation:

void foramsMatching(Mat img_object, Mat img_scene){ 
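    //-- Step 1: Detect the keypoints using the SIFT detector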
    int minHessian = 400; 

    SiftFeatureDetector detector(minHessian); 

    std::vector<KeyPoint> keypoints_object, keypoints_scene; 

    detector.detect(img_object, keypoints_object); 
    detector.detect(img_scene, keypoints_scene); 

    //-- Step 2: Calculate descriptors (feature vectors) 
    SurfDescriptorExtractor extractor; 

    Mat descriptors_object, descriptors_scene; 

    extractor.compute(img_object, keypoints_object, descriptors_object); 
    extractor.compute(img_scene, keypoints_scene, descriptors_scene); 

    //-- Step 3: Matching descriptor vectors using FLANN matcher 

    FlannBasedMatcher matcher; 
    //BFMatcher matcher; 
    std::vector<DMatch> matches; 
    matcher.match(descriptors_object, descriptors_scene, matches); 


    double max_dist = 0; double min_dist = 100; 

    //-- Quick calculation of max and min distances between keypoints 
    for (int i = 0; i < descriptors_object.rows; i++) 
    { 
     double dist = matches[i].distance; 
     if (dist < min_dist) min_dist = dist; 
     if (dist > max_dist) max_dist = dist; 
    } 

    printf("-- Max dist : %f \n", max_dist); 
    printf("-- Min dist : %f \n", min_dist); 

    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist) 
    std::vector<DMatch> good_matches; 

    for (int i = 0; i < descriptors_object.rows; i++) 
    { 
     if (matches[i].distance < 3 * min_dist) 
     { 
      good_matches.push_back(matches[i]); 
     } 
    } 

    Mat img_matches; 
    drawMatches(img_object, keypoints_object, img_scene, keypoints_scene, 
    good_matches, img_matches, Scalar::all(-1), Scalar::all(-1), 
    vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS); 

    //-- Localize the object 
    std::vector<Point2f> obj; 
    std::vector<Point2f> scene; 

    for (int i = 0; i < good_matches.size(); i++) 
    { 
     //-- Get the keypoints from the good matches 
     obj.push_back(keypoints_object[good_matches[i].queryIdx].pt); 
     scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt); 
    } 

    Mat H = findHomography(obj, scene, CV_RANSAC); 

    //-- Get the corners from the image_1 (the object to be "detected") 
    std::vector<Point2f> obj_corners(4); 
    obj_corners[0] = cvPoint(0, 0); obj_corners[1] = cvPoint(img_object.cols, 0); 
    obj_corners[2] = cvPoint(img_object.cols, img_object.rows); obj_corners[3] = cvPoint(0, img_object.rows); 
    std::vector<Point2f> scene_corners(4); 

    perspectiveTransform(obj_corners, scene_corners, H); 

    //-- Draw lines between the corners (the mapped object in the scene - image_2) 
    line(img_matches, scene_corners[0] + Point2f(img_object.cols, 0), scene_corners[1] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[1] + Point2f(img_object.cols, 0), scene_corners[2] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[2] + Point2f(img_object.cols, 0), scene_corners[3] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 
    line(img_matches, scene_corners[3] + Point2f(img_object.cols, 0), scene_corners[0] + Point2f(img_object.cols, 0), Scalar(0, 255, 0), 4); 

    //-- Show detected matches 
    namedWindow("Good Matches & Object detection", CV_WINDOW_NORMAL); 
    imshow("Good Matches & Object detection", img_matches); 
    //imwrite("../../Samples/Matching.jpg", img_matches); 
} 

Here are the results - Matching Two Images

They are really poor compared to some of the other results I have seen using these methods. There should be two matches to the two blobs (cells) at the bottom of the screen.

Any ideas what I'm doing wrong, or how I could improve these results? I'm considering writing my own matcher/descriptor extractor, since my training images are not exact copies of the cells I'm querying. Is this a good idea? If so, are there any tutorials I should look at?

Regards,


Maybe there is some additional knowledge you could use to remove the noise? In the image you provided, the background and the text look easy to remove. – runDOSrun 2015-02-05 14:14:31


If I understand correctly, you're suggesting I try to match only the specific regions at the bottom rather than the whole image? I'll try that and report back :) By the way, how would you go about removing them? – Nimrodshn 2015-02-05 14:17:22


Sure, I think introducing more knowledge about the objects can remove false positives. To do this you could, for example, check the matched points and regions against rules (size/relations/color, etc.). – runDOSrun 2015-02-05 14:19:44

Answer


Converting my comment to an answer:

Before running SIFT/SURF, you should apply some kind of preprocessing that uses the knowledge available to you to find the regions of interest and remove noise. The general idea:

  1. Perform a segmentation based on specific criteria (*).
  2. Inspect the segments and select the interesting candidates.
  3. Perform the matching on the candidate segments.

(*) What you can use for this step includes area size, shape, color distribution, etc. From the example you provided, one can see, for instance, that your objects are roughly circular and have a certain minimum size. Use any such knowledge to remove further false positives. Of course, you'll need to do some tuning so that your rule set isn't too strict, i.e. so that the true positives are kept.
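For illustration, here is a minimal sketch of what such a preprocessing step might look like with the OpenCV 2.x C++ API, assuming a plain Otsu threshold is enough to separate the dark cells from the light background. findCandidateSegments is a hypothetical helper, and minArea / minCircularity are placeholder values you would tune for your own data; none of this is part of the original code.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sketch only: find roughly circular, sufficiently large segments that
// could be cells, so SIFT/SURF matching can be restricted to them.
std::vector<cv::Rect> findCandidateSegments(const cv::Mat& img_scene)
{
    cv::Mat gray, bin;
    if (img_scene.channels() == 3)
        cv::cvtColor(img_scene, gray, CV_BGR2GRAY);
    else
        gray = img_scene.clone();

    // 1. Segmentation: assume dark blobs on a light background (Otsu threshold).
    cv::threshold(gray, bin, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);

    // 2. Inspect the segments and keep round, large-enough candidates.
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    const double minArea = 500.0;      // placeholder, tune for your image resolution
    const double minCircularity = 0.6; // 1.0 would be a perfect circle

    std::vector<cv::Rect> candidates;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        double perimeter = cv::arcLength(contours[i], true);
        if (area < minArea || perimeter <= 0)
            continue;
        double circularity = 4.0 * CV_PI * area / (perimeter * perimeter);
        if (circularity >= minCircularity)
            candidates.push_back(cv::boundingRect(contours[i]));
    }
    return candidates;
}

Each candidate rectangle could then be cropped out and passed to the existing matcher, e.g. foramsMatching(img_object, img_scene(candidates[i])), so that SIFT/SURF only has to deal with regions that already look like cells (step 3).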