PS: Please credit the source when reposting; all rights reserved.

PS: This is based solely on my own understanding.

If it conflicts with your principles or views, please bear with me and go easy on the criticism.

Preliminary Notes

  This article is a backup of the main post on my CSDN blog. (BlogID=063)

  This article was originally published on 2018-05-22 15:31:47 and is now maintained as a backup using Markdown plus an image host. The original blog images have been lost, so the copies stored on CSDN are used for this update. (BlogID=063)

Environment

  None

Preface


  In https://blog.csdn.net/u011728480/article/details/79609493 I had already implemented 360-degree panorama stitching from images captured by a single rotatable camera. Recently, however, I was handed a task that involved stitching images captured by three different cameras (of the same model), and I ran into a maddening problem: the stitching failed practically 100% of the time. That forced me to explore how Stitcher actually works, to see whether I could find a fix, or at least the cause of these failures.

How Stitcher Roughly Works (analyzed mainly from material found online, combined with the official example)


  1. First, detect feature points in each image with a feature-detection algorithm.
  2. Next, compute how well the feature points match between images.
  3. ... (the ellipsis is here because I have not fully understood the later stages myself)
  4. Stitch the images together (a minimal sketch of driving this pipeline through the high-level API follows below).
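  Before digging into the detailed example, it helps to see how little code the high-level wrapper needs. This is a minimal sketch, assuming OpenCV 3.x, where the factory function is Stitcher::createDefault (later releases expose Stitcher::create instead):

#include <iostream>
#include <vector>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/stitching.hpp"

int main(int argc, char* argv[])
{
    // Load the input images (they must overlap and share enough features).
    std::vector<cv::Mat> imgs;
    for (int i = 1; i < argc; ++i)
        imgs.push_back(cv::imread(argv[i]));

    // The high-level wrapper runs the same steps as stitching_detailed.cpp:
    // feature finding, pairwise matching, camera estimation, warping, blending.
    cv::Mat pano;
    cv::Stitcher stitcher = cv::Stitcher::createDefault(false /* try_use_gpu */);
    cv::Stitcher::Status status = stitcher.stitch(imgs, pano);

    if (status != cv::Stitcher::OK)
    {
        // ERR_NEED_MORE_IMGS usually means the matcher could not find enough
        // confident matches in the overlapping regions.
        std::cout << "Stitching failed, status = " << int(status) << std::endl;
        return -1;
    }
    cv::imwrite("result.jpg", pano);
    return 0;
}

  Everything the detailed example below does by hand (feature finding, matching, camera estimation, warping, seam finding and blending) happens inside that single stitch() call, which is also why it offers so few knobs when it fails.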

OpenCV example: stitching_detailed.cpp (https://github.com/opencv/opencv/blob/master/samples/cpp/stitching_detailed.cpp)

/*M///////////////////////////////////////////////////////////////////////////////////////
//
// IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
// By downloading, copying, installing or using the software you agree to this license.
// If you do not agree to this license, do not download, install,
// copy or use the software.
//
//
// License Agreement
// For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistribution's of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
//
// * Redistribution's in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
//
// * The name of the copyright holders may not be used to endorse or promote products
// derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//
//M*/

#include <iostream>
#include <fstream>
#include <string>
#include "opencv2/opencv_modules.hpp"
#include <opencv2/core/utility.hpp>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/highgui.hpp"
#include "opencv2/stitching/detail/autocalib.hpp"
#include "opencv2/stitching/detail/blenders.hpp"
#include "opencv2/stitching/detail/timelapsers.hpp"
#include "opencv2/stitching/detail/camera.hpp"
#include "opencv2/stitching/detail/exposure_compensate.hpp"
#include "opencv2/stitching/detail/matchers.hpp"
#include "opencv2/stitching/detail/motion_estimators.hpp"
#include "opencv2/stitching/detail/seam_finders.hpp"
#include "opencv2/stitching/detail/warpers.hpp"
#include "opencv2/stitching/warpers.hpp" #define ENABLE_LOG 1
#define LOG(msg) std::cout << msg
#define LOGLN(msg) std::cout << msg << std::endl using namespace std;
using namespace cv;
using namespace cv::detail; static void printUsage()
{
cout <<
"Rotation model images stitcher.\n\n"
"stitching_detailed img1 img2 [...imgN] [flags]\n\n"
"Flags:\n"
" --preview\n"
" Run stitching in the preview mode. Works faster than usual mode,\n"
" but output image will have lower resolution.\n"
" --try_cuda (yes|no)\n"
" Try to use CUDA. The default value is 'no'. All default values\n"
" are for CPU mode.\n"
"\nMotion Estimation Flags:\n"
" --work_megapix <float>\n"
" Resolution for image registration step. The default is 0.6 Mpx.\n"
" --features (surf|orb)\n"
" Type of features used for images matching. The default is surf.\n"
" --matcher (homography|affine)\n"
" Matcher used for pairwise image matching.\n"
" --estimator (homography|affine)\n"
" Type of estimator used for transformation estimation.\n"
" --match_conf <float>\n"
" Confidence for feature matching step. The default is 0.65 for surf and 0.3 for orb.\n"
" --conf_thresh <float>\n"
" Threshold for two images are from the same panorama confidence.\n"
" The default is 1.0.\n"
" --ba (no|reproj|ray|affine)\n"
" Bundle adjustment cost function. The default is ray.\n"
" --ba_refine_mask (mask)\n"
" Set refinement mask for bundle adjustment. It looks like 'x_xxx',\n"
" where 'x' means refine respective parameter and '_' means don't\n"
" refine one, and has the following format:\n"
" <fx><skew><ppx><aspect><ppy>. The default mask is 'xxxxx'. If bundle\n"
" adjustment doesn't support estimation of selected parameter then\n"
" the respective flag is ignored.\n"
" --wave_correct (no|horiz|vert)\n"
" Perform wave effect correction. The default is 'horiz'.\n"
" --save_graph <file_name>\n"
" Save matches graph represented in DOT language to <file_name> file.\n"
" Labels description: Nm is number of matches, Ni is number of inliers,\n"
" C is confidence.\n"
"\nCompositing Flags:\n"
" --warp (affine|plane|cylindrical|spherical|fisheye|stereographic|compressedPlaneA2B1|compressedPlaneA1.5B1|compressedPlanePortraitA2B1|compressedPlanePortraitA1.5B1|paniniA2B1|paniniA1.5B1|paniniPortraitA2B1|paniniPortraitA1.5B1|mercator|transverseMercator)\n"
" Warp surface type. The default is 'spherical'.\n"
" --seam_megapix <float>\n"
" Resolution for seam estimation step. The default is 0.1 Mpx.\n"
" --seam (no|voronoi|gc_color|gc_colorgrad)\n"
" Seam estimation method. The default is 'gc_color'.\n"
" --compose_megapix <float>\n"
" Resolution for compositing step. Use -1 for original resolution.\n"
" The default is -1.\n"
" --expos_comp (no|gain|gain_blocks)\n"
" Exposure compensation method. The default is 'gain_blocks'.\n"
" --blend (no|feather|multiband)\n"
" Blending method. The default is 'multiband'.\n"
" --blend_strength <float>\n"
" Blending strength from [0,100] range. The default is 5.\n"
" --output <result_img>\n"
" The default is 'result.jpg'.\n"
" --timelapse (as_is|crop) \n"
" Output warped images separately as frames of a time lapse movie, with 'fixed_' prepended to input file names.\n"
" --rangewidth <int>\n"
" uses range_width to limit number of images to match with.\n";
}

// Default command line args
vector<String> img_names;
bool preview = false;
bool try_cuda = false;
double work_megapix = 0.6;
double seam_megapix = 0.1;
double compose_megapix = -1;
float conf_thresh = 1.f;
string features_type = "surf";
string matcher_type = "homography";
string estimator_type = "homography";
string ba_cost_func = "ray";
string ba_refine_mask = "xxxxx";
bool do_wave_correct = true;
WaveCorrectKind wave_correct = detail::WAVE_CORRECT_HORIZ;
bool save_graph = false;
std::string save_graph_to;
string warp_type = "spherical";
int expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
float match_conf = 0.3f;
string seam_find_type = "gc_color";
int blend_type = Blender::MULTI_BAND;
int timelapse_type = Timelapser::AS_IS;
float blend_strength = 5;
string result_name = "result.jpg";
bool timelapse = false;
int range_width = -1;

static int parseCmdArgs(int argc, char** argv)
{
if (argc == 1)
{
printUsage();
return -1;
}
for (int i = 1; i < argc; ++i)
{
if (string(argv[i]) == "--help" || string(argv[i]) == "/?")
{
printUsage();
return -1;
}
else if (string(argv[i]) == "--preview")
{
preview = true;
}
else if (string(argv[i]) == "--try_cuda")
{
if (string(argv[i + 1]) == "no")
try_cuda = false;
else if (string(argv[i + 1]) == "yes")
try_cuda = true;
else
{
cout << "Bad --try_cuda flag value\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--work_megapix")
{
work_megapix = atof(argv[i + 1]);
i++;
}
else if (string(argv[i]) == "--seam_megapix")
{
seam_megapix = atof(argv[i + 1]);
i++;
}
else if (string(argv[i]) == "--compose_megapix")
{
compose_megapix = atof(argv[i + 1]);
i++;
}
else if (string(argv[i]) == "--result")
{
result_name = argv[i + 1];
i++;
}
else if (string(argv[i]) == "--features")
{
features_type = argv[i + 1];
if (features_type == "orb")
match_conf = 0.3f;
i++;
}
else if (string(argv[i]) == "--matcher")
{
if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
matcher_type = argv[i + 1];
else
{
cout << "Bad --matcher flag value\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--estimator")
{
if (string(argv[i + 1]) == "homography" || string(argv[i + 1]) == "affine")
estimator_type = argv[i + 1];
else
{
cout << "Bad --estimator flag value\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--match_conf")
{
match_conf = static_cast<float>(atof(argv[i + 1]));
i++;
}
else if (string(argv[i]) == "--conf_thresh")
{
conf_thresh = static_cast<float>(atof(argv[i + 1]));
i++;
}
else if (string(argv[i]) == "--ba")
{
ba_cost_func = argv[i + 1];
i++;
}
else if (string(argv[i]) == "--ba_refine_mask")
{
ba_refine_mask = argv[i + 1];
if (ba_refine_mask.size() != 5)
{
cout << "Incorrect refinement mask length.\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--wave_correct")
{
if (string(argv[i + 1]) == "no")
do_wave_correct = false;
else if (string(argv[i + 1]) == "horiz")
{
do_wave_correct = true;
wave_correct = detail::WAVE_CORRECT_HORIZ;
}
else if (string(argv[i + 1]) == "vert")
{
do_wave_correct = true;
wave_correct = detail::WAVE_CORRECT_VERT;
}
else
{
cout << "Bad --wave_correct flag value\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--save_graph")
{
save_graph = true;
save_graph_to = argv[i + 1];
i++;
}
else if (string(argv[i]) == "--warp")
{
warp_type = string(argv[i + 1]);
i++;
}
else if (string(argv[i]) == "--expos_comp")
{
if (string(argv[i + 1]) == "no")
expos_comp_type = ExposureCompensator::NO;
else if (string(argv[i + 1]) == "gain")
expos_comp_type = ExposureCompensator::GAIN;
else if (string(argv[i + 1]) == "gain_blocks")
expos_comp_type = ExposureCompensator::GAIN_BLOCKS;
else
{
cout << "Bad exposure compensation method\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--seam")
{
if (string(argv[i + 1]) == "no" ||
string(argv[i + 1]) == "voronoi" ||
string(argv[i + 1]) == "gc_color" ||
string(argv[i + 1]) == "gc_colorgrad" ||
string(argv[i + 1]) == "dp_color" ||
string(argv[i + 1]) == "dp_colorgrad")
seam_find_type = argv[i + 1];
else
{
cout << "Bad seam finding method\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--blend")
{
if (string(argv[i + 1]) == "no")
blend_type = Blender::NO;
else if (string(argv[i + 1]) == "feather")
blend_type = Blender::FEATHER;
else if (string(argv[i + 1]) == "multiband")
blend_type = Blender::MULTI_BAND;
else
{
cout << "Bad blending method\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--timelapse")
{
timelapse = true;
if (string(argv[i + 1]) == "as_is")
timelapse_type = Timelapser::AS_IS;
else if (string(argv[i + 1]) == "crop")
timelapse_type = Timelapser::CROP;
else
{
cout << "Bad timelapse method\n";
return -1;
}
i++;
}
else if (string(argv[i]) == "--rangewidth")
{
range_width = atoi(argv[i + 1]);
i++;
}
else if (string(argv[i]) == "--blend_strength")
{
blend_strength = static_cast<float>(atof(argv[i + 1]));
i++;
}
else if (string(argv[i]) == "--output")
{
result_name = argv[i + 1];
i++;
}
else
img_names.push_back(argv[i]);
}
if (preview)
{
compose_megapix = 0.6;
}
return 0;
}

int main(int argc, char* argv[])
{
#if ENABLE_LOG
int64 app_start_time = getTickCount();
#endif
#if 0
cv::setBreakOnError(true);
#endif

int retval = parseCmdArgs(argc, argv);
if (retval)
return retval;

// Check if have enough images
int num_images = static_cast<int>(img_names.size());
if (num_images < 2)
{
LOGLN("Need more images");
return -1;
}

double work_scale = 1, seam_scale = 1, compose_scale = 1;
bool is_work_scale_set = false, is_seam_scale_set = false, is_compose_scale_set = false;

LOGLN("Finding features...");
#if ENABLE_LOG
int64 t = getTickCount();
#endif

Ptr<FeaturesFinder> finder;
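// Choose the feature detector: SURF (requires the xfeatures2d contrib module,
// with a GPU variant when CUDA is available) or ORB.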
if (features_type == "surf")
{
#ifdef HAVE_OPENCV_XFEATURES2D
if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
finder = makePtr<SurfFeaturesFinderGpu>();
else
#endif
finder = makePtr<SurfFeaturesFinder>();
}
else if (features_type == "orb")
{
finder = makePtr<OrbFeaturesFinder>();
}
else
{
cout << "Unknown 2D features type: '" << features_type << "'.\n";
return -1;
}

Mat full_img, img;
vector<ImageFeatures> features(num_images);
vector<Mat> images(num_images);
vector<Size> full_img_sizes(num_images);
double seam_work_aspect = 1;

for (int i = 0; i < num_images; ++i)
{
full_img = imread(img_names[i]);
full_img_sizes[i] = full_img.size();

if (full_img.empty())
{
LOGLN("Can't open image " << img_names[i]);
return -1;
}
if (work_megapix < 0)
{
img = full_img;
work_scale = 1;
is_work_scale_set = true;
}
else
{
if (!is_work_scale_set)
{
work_scale = min(1.0, sqrt(work_megapix * 1e6 / full_img.size().area()));
is_work_scale_set = true;
}
resize(full_img, img, Size(), work_scale, work_scale, INTER_LINEAR_EXACT);
}
if (!is_seam_scale_set)
{
seam_scale = min(1.0, sqrt(seam_megapix * 1e6 / full_img.size().area()));
seam_work_aspect = seam_scale / work_scale;
is_seam_scale_set = true;
}

(*finder)(img, features[i]);
features[i].img_idx = i;
LOGLN("Features in image #" << i+1 << ": " << features[i].keypoints.size());

resize(full_img, img, Size(), seam_scale, seam_scale, INTER_LINEAR_EXACT);
images[i] = img.clone();
}

finder->collectGarbage();
full_img.release();
img.release();

LOGLN("Finding features, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

LOG("Pairwise matching");
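// Match features between every image pair; each MatchesInfo stores the matches,
// the inlier count, the estimated homography and a confidence score.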
#if ENABLE_LOG
t = getTickCount();
#endif
vector<MatchesInfo> pairwise_matches;
Ptr<FeaturesMatcher> matcher;
if (matcher_type == "affine")
matcher = makePtr<AffineBestOf2NearestMatcher>(false, try_cuda, match_conf);
else if (range_width==-1)
matcher = makePtr<BestOf2NearestMatcher>(try_cuda, match_conf);
else
matcher = makePtr<BestOf2NearestRangeMatcher>(range_width, try_cuda, match_conf);

(*matcher)(features, pairwise_matches);
matcher->collectGarbage();

LOGLN("Pairwise matching, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

// Check if we should save matches graph
if (save_graph)
{
LOGLN("Saving matches graph...");
ofstream f(save_graph_to.c_str());
f << matchesGraphAsString(img_names, pairwise_matches, conf_thresh);
}

// Leave only images we are sure are from the same panorama
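// leaveBiggestComponent() keeps only the largest group of images whose pairwise
// match confidence is above conf_thresh; images with too few good matches are
// dropped here, and the program bails out below if fewer than two remain.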
vector<int> indices = leaveBiggestComponent(features, pairwise_matches, conf_thresh);
vector<Mat> img_subset;
vector<String> img_names_subset;
vector<Size> full_img_sizes_subset;
for (size_t i = 0; i < indices.size(); ++i)
{
img_names_subset.push_back(img_names[indices[i]]);
img_subset.push_back(images[indices[i]]);
full_img_sizes_subset.push_back(full_img_sizes[indices[i]]);
}

images = img_subset;
img_names = img_names_subset;
full_img_sizes = full_img_sizes_subset;

// Check if we still have enough images
num_images = static_cast<int>(img_names.size());
if (num_images < 2)
{
LOGLN("Need more images");
return -1;
}

Ptr<Estimator> estimator;
if (estimator_type == "affine")
estimator = makePtr<AffineBasedEstimator>();
else
estimator = makePtr<HomographyBasedEstimator>();

vector<CameraParams> cameras;
if (!(*estimator)(features, pairwise_matches, cameras))
{
cout << "Homography estimation failed.\n";
return -1;
}

for (size_t i = 0; i < cameras.size(); ++i)
{
Mat R;
cameras[i].R.convertTo(R, CV_32F);
cameras[i].R = R;
LOGLN("Initial camera intrinsics #" << indices[i]+1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
}

Ptr<detail::BundleAdjusterBase> adjuster;
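// Jointly refine all camera parameters with bundle adjustment, minimizing either
// a reprojection error or a ray-distance error depending on --ba.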
if (ba_cost_func == "reproj") adjuster = makePtr<detail::BundleAdjusterReproj>();
else if (ba_cost_func == "ray") adjuster = makePtr<detail::BundleAdjusterRay>();
else if (ba_cost_func == "affine") adjuster = makePtr<detail::BundleAdjusterAffinePartial>();
else if (ba_cost_func == "no") adjuster = makePtr<NoBundleAdjuster>();
else
{
cout << "Unknown bundle adjustment cost function: '" << ba_cost_func << "'.\n";
return -1;
}
adjuster->setConfThresh(conf_thresh);
Mat_<uchar> refine_mask = Mat::zeros(3, 3, CV_8U);
if (ba_refine_mask[0] == 'x') refine_mask(0,0) = 1;
if (ba_refine_mask[1] == 'x') refine_mask(0,1) = 1;
if (ba_refine_mask[2] == 'x') refine_mask(0,2) = 1;
if (ba_refine_mask[3] == 'x') refine_mask(1,1) = 1;
if (ba_refine_mask[4] == 'x') refine_mask(1,2) = 1;
adjuster->setRefinementMask(refine_mask);
if (!(*adjuster)(features, pairwise_matches, cameras))
{
cout << "Camera parameters adjusting failed.\n";
return -1;
}

// Find median focal length
vector<double> focals;
for (size_t i = 0; i < cameras.size(); ++i)
{
LOGLN("Camera #" << indices[i]+1 << ":\nK:\n" << cameras[i].K() << "\nR:\n" << cameras[i].R);
focals.push_back(cameras[i].focal);
}

sort(focals.begin(), focals.end());
float warped_image_scale;
if (focals.size() % 2 == 1)
warped_image_scale = static_cast<float>(focals[focals.size() / 2]);
else
warped_image_scale = static_cast<float>(focals[focals.size() / 2 - 1] + focals[focals.size() / 2]) * 0.5f;

if (do_wave_correct)
{
vector<Mat> rmats;
for (size_t i = 0; i < cameras.size(); ++i)
rmats.push_back(cameras[i].R.clone());
waveCorrect(rmats, wave_correct);
for (size_t i = 0; i < cameras.size(); ++i)
cameras[i].R = rmats[i];
} LOGLN("Warping images (auxiliary)... ");
#if ENABLE_LOG
t = getTickCount();
#endif

vector<Point> corners(num_images);
vector<UMat> masks_warped(num_images);
vector<UMat> images_warped(num_images);
vector<Size> sizes(num_images);
vector<UMat> masks(num_images);

// Prepare images masks
for (int i = 0; i < num_images; ++i)
{
masks[i].create(images[i].size(), CV_8U);
masks[i].setTo(Scalar::all(255));
}

// Warp images and their masks
Ptr<WarperCreator> warper_creator;
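// Create the warper that projects each image onto the compositing surface
// (spherical by default, selectable with --warp).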
#ifdef HAVE_OPENCV_CUDAWARPING
if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
{
if (warp_type == "plane")
warper_creator = makePtr<cv::PlaneWarperGpu>();
else if (warp_type == "cylindrical")
warper_creator = makePtr<cv::CylindricalWarperGpu>();
else if (warp_type == "spherical")
warper_creator = makePtr<cv::SphericalWarperGpu>();
}
else
#endif
{
if (warp_type == "plane")
warper_creator = makePtr<cv::PlaneWarper>();
else if (warp_type == "affine")
warper_creator = makePtr<cv::AffineWarper>();
else if (warp_type == "cylindrical")
warper_creator = makePtr<cv::CylindricalWarper>();
else if (warp_type == "spherical")
warper_creator = makePtr<cv::SphericalWarper>();
else if (warp_type == "fisheye")
warper_creator = makePtr<cv::FisheyeWarper>();
else if (warp_type == "stereographic")
warper_creator = makePtr<cv::StereographicWarper>();
else if (warp_type == "compressedPlaneA2B1")
warper_creator = makePtr<cv::CompressedRectilinearWarper>(2.0f, 1.0f);
else if (warp_type == "compressedPlaneA1.5B1")
warper_creator = makePtr<cv::CompressedRectilinearWarper>(1.5f, 1.0f);
else if (warp_type == "compressedPlanePortraitA2B1")
warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(2.0f, 1.0f);
else if (warp_type == "compressedPlanePortraitA1.5B1")
warper_creator = makePtr<cv::CompressedRectilinearPortraitWarper>(1.5f, 1.0f);
else if (warp_type == "paniniA2B1")
warper_creator = makePtr<cv::PaniniWarper>(2.0f, 1.0f);
else if (warp_type == "paniniA1.5B1")
warper_creator = makePtr<cv::PaniniWarper>(1.5f, 1.0f);
else if (warp_type == "paniniPortraitA2B1")
warper_creator = makePtr<cv::PaniniPortraitWarper>(2.0f, 1.0f);
else if (warp_type == "paniniPortraitA1.5B1")
warper_creator = makePtr<cv::PaniniPortraitWarper>(1.5f, 1.0f);
else if (warp_type == "mercator")
warper_creator = makePtr<cv::MercatorWarper>();
else if (warp_type == "transverseMercator")
warper_creator = makePtr<cv::TransverseMercatorWarper>();
}

if (!warper_creator)
{
cout << "Can't create the following warper '" << warp_type << "'\n";
return 1;
}

Ptr<RotationWarper> warper = warper_creator->create(static_cast<float>(warped_image_scale * seam_work_aspect));

for (int i = 0; i < num_images; ++i)
{
Mat_<float> K;
cameras[i].K().convertTo(K, CV_32F);
float swa = (float)seam_work_aspect;
K(0,0) *= swa; K(0,2) *= swa;
K(1,1) *= swa; K(1,2) *= swa;

corners[i] = warper->warp(images[i], K, cameras[i].R, INTER_LINEAR, BORDER_REFLECT, images_warped[i]);
sizes[i] = images_warped[i].size();

warper->warp(masks[i], K, cameras[i].R, INTER_NEAREST, BORDER_CONSTANT, masks_warped[i]);
}

vector<UMat> images_warped_f(num_images);
for (int i = 0; i < num_images; ++i)
images_warped[i].convertTo(images_warped_f[i], CV_32F);

LOGLN("Warping images, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

Ptr<ExposureCompensator> compensator = ExposureCompensator::createDefault(expos_comp_type);
compensator->feed(corners, images_warped, masks_warped);

Ptr<SeamFinder> seam_finder;
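// Estimate seams in the overlap regions so that the final blend cuts along
// low-cost paths (graph-cut on color by default).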
if (seam_find_type == "no")
seam_finder = makePtr<detail::NoSeamFinder>();
else if (seam_find_type == "voronoi")
seam_finder = makePtr<detail::VoronoiSeamFinder>();
else if (seam_find_type == "gc_color")
{
#ifdef HAVE_OPENCV_CUDALEGACY
if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR);
else
#endif
seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR);
}
else if (seam_find_type == "gc_colorgrad")
{
#ifdef HAVE_OPENCV_CUDALEGACY
if (try_cuda && cuda::getCudaEnabledDeviceCount() > 0)
seam_finder = makePtr<detail::GraphCutSeamFinderGpu>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
else
#endif
seam_finder = makePtr<detail::GraphCutSeamFinder>(GraphCutSeamFinderBase::COST_COLOR_GRAD);
}
else if (seam_find_type == "dp_color")
seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR);
else if (seam_find_type == "dp_colorgrad")
seam_finder = makePtr<detail::DpSeamFinder>(DpSeamFinder::COLOR_GRAD);
if (!seam_finder)
{
cout << "Can't create the following seam finder '" << seam_find_type << "'\n";
return 1;
}

seam_finder->find(images_warped_f, corners, masks_warped);

// Release unused memory
images.clear();
images_warped.clear();
images_warped_f.clear();
masks.clear();

LOGLN("Compositing...");
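// Final compositing pass: re-warp every image at compose resolution, compensate
// exposure, apply the seam masks and feed the result to the blender.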
#if ENABLE_LOG
t = getTickCount();
#endif

Mat img_warped, img_warped_s;
Mat dilated_mask, seam_mask, mask, mask_warped;
Ptr<Blender> blender;
Ptr<Timelapser> timelapser;
//double compose_seam_aspect = 1;
double compose_work_aspect = 1;

for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
LOGLN("Compositing image #" << indices[img_idx]+1); // Read image and resize it if necessary
full_img = imread(img_names[img_idx]);
if (!is_compose_scale_set)
{
if (compose_megapix > 0)
compose_scale = min(1.0, sqrt(compose_megapix * 1e6 / full_img.size().area()));
is_compose_scale_set = true;

// Compute relative scales
//compose_seam_aspect = compose_scale / seam_scale;
compose_work_aspect = compose_scale / work_scale;

// Update warped image scale
warped_image_scale *= static_cast<float>(compose_work_aspect);
warper = warper_creator->create(warped_image_scale);

// Update corners and sizes
for (int i = 0; i < num_images; ++i)
{
// Update intrinsics
cameras[i].focal *= compose_work_aspect;
cameras[i].ppx *= compose_work_aspect;
cameras[i].ppy *= compose_work_aspect;

// Update corner and size
Size sz = full_img_sizes[i];
if (std::abs(compose_scale - 1) > 1e-1)
{
sz.width = cvRound(full_img_sizes[i].width * compose_scale);
sz.height = cvRound(full_img_sizes[i].height * compose_scale);
}

Mat K;
cameras[i].K().convertTo(K, CV_32F);
Rect roi = warper->warpRoi(sz, K, cameras[i].R);
corners[i] = roi.tl();
sizes[i] = roi.size();
}
}
if (abs(compose_scale - 1) > 1e-1)
resize(full_img, img, Size(), compose_scale, compose_scale, INTER_LINEAR_EXACT);
else
img = full_img;
full_img.release();
Size img_size = img.size();

Mat K;
cameras[img_idx].K().convertTo(K, CV_32F);

// Warp the current image
warper->warp(img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

// Warp the current image mask
mask.create(img_size, CV_8U);
mask.setTo(Scalar::all(255));
warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

// Compensate exposure
compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

img_warped.convertTo(img_warped_s, CV_16S);
img_warped.release();
img.release();
mask.release();

dilate(masks_warped[img_idx], dilated_mask, Mat());
resize(dilated_mask, seam_mask, mask_warped.size(), 0, 0, INTER_LINEAR_EXACT);
mask_warped = seam_mask & mask_warped;

if (!blender && !timelapse)
{
blender = Blender::createDefault(blend_type, try_cuda);
Size dst_sz = resultRoi(corners, sizes).size();
float blend_width = sqrt(static_cast<float>(dst_sz.area())) * blend_strength / 100.f;
if (blend_width < 1.f)
blender = Blender::createDefault(Blender::NO, try_cuda);
else if (blend_type == Blender::MULTI_BAND)
{
MultiBandBlender* mb = dynamic_cast<MultiBandBlender*>(blender.get());
mb->setNumBands(static_cast<int>(ceil(log(blend_width)/log(2.)) - 1.));
LOGLN("Multi-band blender, number of bands: " << mb->numBands());
}
else if (blend_type == Blender::FEATHER)
{
FeatherBlender* fb = dynamic_cast<FeatherBlender*>(blender.get());
fb->setSharpness(1.f/blend_width);
LOGLN("Feather blender, sharpness: " << fb->sharpness());
}
blender->prepare(corners, sizes);
}
else if (!timelapser && timelapse)
{
timelapser = Timelapser::createDefault(timelapse_type);
timelapser->initialize(corners, sizes);
}

// Blend the current image
if (timelapse)
{
timelapser->process(img_warped_s, Mat::ones(img_warped_s.size(), CV_8UC1), corners[img_idx]);
String fixedFileName;
size_t pos_s = String(img_names[img_idx]).find_last_of("/\\");
if (pos_s == String::npos)
{
fixedFileName = "fixed_" + img_names[img_idx];
}
else
{
fixedFileName = "fixed_" + String(img_names[img_idx]).substr(pos_s + 1, String(img_names[img_idx]).length() - pos_s);
}
imwrite(fixedFileName, timelapser->getDst());
}
else
{
blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}
}

if (!timelapse)
{
Mat result, result_mask;
blender->blend(result, result_mask);

LOGLN("Compositing, time: " << ((getTickCount() - t) / getTickFrequency()) << " sec");

imwrite(result_name, result);
}

LOGLN("Finished, total time: " << ((getTickCount() - app_start_time) / getTickFrequency()) << " sec");
return 0;
}
Why the Stitching Fails

  From the pipeline above we can see that the first requirement for successful stitching is that the overlapping regions of the images contain a sufficient number of matching (coincident) feature points. Looking at the example, feature extraction uses the SURF algorithm, and from the material I have read, my understanding is that SURF mainly picks up salient points such as contour edges and corners. In other words, feature points generally appear where there is distinct structure or where the shape changes noticeably.

  The reason my stitching failed is that most of each image was a plain white ceiling, so there were few feature points overall and even fewer in the overlapping regions, which made the feature-matching step fail. After I changed the fixed viewing angles between the cameras, even the Stitcher class's default algorithm could stitch the images.
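  To quantify this instead of guessing, the detail classes used by the example can report how confident the matcher is about each image pair. A minimal sketch, assuming OpenCV 3.x with the xfeatures2d contrib module (the 0.65 match confidence and the 1.0 threshold mirror the example's SURF defaults):

#include <iostream>
#include <vector>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/stitching/detail/matchers.hpp"

using namespace cv;
using namespace cv::detail;

int main(int argc, char* argv[])
{
    // Detect features in every input image (SURF needs the xfeatures2d module;
    // use OrbFeaturesFinder instead if it is not available).
    Ptr<FeaturesFinder> finder = makePtr<SurfFeaturesFinder>();
    std::vector<ImageFeatures> features(argc - 1);
    for (int i = 1; i < argc; ++i)
    {
        Mat img = imread(argv[i]);
        if (img.empty())
        {
            std::cout << "Can't open " << argv[i] << std::endl;
            return -1;
        }
        (*finder)(img, features[i - 1]);
        features[i - 1].img_idx = i - 1;
    }
    finder->collectGarbage();

    // Match every pair and report how confident the matcher is about each one.
    std::vector<MatchesInfo> pairwise_matches;
    BestOf2NearestMatcher matcher(false, 0.65f);
    matcher(features, pairwise_matches);

    for (const MatchesInfo& m : pairwise_matches)
    {
        if (m.src_img_idx >= m.dst_img_idx)
            continue;   // skip unmatched/default entries and symmetric duplicates
        std::cout << argv[m.src_img_idx + 1] << " <-> " << argv[m.dst_img_idx + 1]
                  << "  inliers: " << m.num_inliers
                  << "  confidence: " << m.confidence
                  << (m.confidence < 1.0 ? "  (below default conf_thresh)" : "")
                  << std::endl;
    }
    return 0;
}

  Pairs whose confidence stays below conf_thresh are exactly the ones leaveBiggestComponent() later throws away, which is why the example then reports "Need more images".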

Approach to a Solution

  Take the feature-extraction part of the example source above, draw the feature points of each image, inspect how they are distributed, and then analyze and adjust according to the actual situation.

  For example, in the figure below: experiments show that for the Stitcher algorithm to succeed, the overlap between images must contain both enough feature points and enough coincident ones to pass a certain threshold. (The figure was produced with the source code above by extracting the features and drawing them on the original image; a rough sketch of that step follows.)
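  A rough sketch of that visualization step, again assuming OpenCV 3.x built with the xfeatures2d contrib module (swap in OrbFeaturesFinder otherwise); the output file names are just an example:

#include <iostream>
#include <string>
#include "opencv2/imgcodecs.hpp"
#include "opencv2/features2d.hpp"
#include "opencv2/stitching/detail/matchers.hpp"

using namespace cv;
using namespace cv::detail;

int main(int argc, char* argv[])
{
    Ptr<FeaturesFinder> finder = makePtr<SurfFeaturesFinder>();
    for (int i = 1; i < argc; ++i)
    {
        Mat img = imread(argv[i]);
        if (img.empty())
            continue;

        // Same call the stitching example uses to extract keypoints + descriptors.
        ImageFeatures features;
        (*finder)(img, features);
        std::cout << argv[i] << ": " << features.keypoints.size() << " keypoints" << std::endl;

        // Draw the keypoints on the original image and save it for inspection.
        Mat vis;
        drawKeypoints(img, features.keypoints, vis, Scalar(0, 255, 0));
        imwrite("keypoints_" + std::to_string(i) + ".jpg", vis);
    }
    return 0;
}

  Comparing these visualizations across the overlapping regions of adjacent cameras quickly shows whether there is anything for the matcher to work with.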

Afterword


  Essentially the whole principle behind OpenCV's image stitching is contained in the example cpp above; when needed, you can analyze this cpp file together with the material that is widely available online.

References


For tips, subscriptions, bookmarks, bananas, or coins, please follow my WeChat official account (攻城狮的搬砖之路).

PS: Please respect original work; if you don't like it, please don't flame.

PS: Please credit the source when reposting; all rights reserved.

PS: If you have questions, please leave a comment and I will reply as soon as I see it.
