Visual SLAM Loop Closure Detection with DBoW2: Building the Visual Vocabulary (Bag of Words)
Prerequisite background reading:
http://www.cnblogs.com/zjiaxing/p/5616653.html
http://www.cnblogs.com/zjiaxing/p/5616664.html
http://www.cnblogs.com/zjiaxing/p/5616670.html
http://www.cnblogs.com/zjiaxing/p/5616679.html

The listing below is the DBoW2 SURF demo program: it extracts SURF descriptors from a handful of training images, builds a vocabulary tree from them, scores the images against each other with the resulting bag-of-words vectors, and finally builds and queries an image database.
#include <iostream>
#include <vector>

// DBoW2
#include "DBoW2.h" // defines Surf64Vocabulary and Surf64Database
#include <DUtils/DUtils.h>
#include <DVision/DVision.h>

// OpenCV
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

using namespace DBoW2;
using namespace DUtils;
using namespace std;
// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

void loadFeatures(vector<vector<vector<float> > > &features);
void changeStructure(const vector<float> &plain, vector<vector<float> > &out,
  int L);
void testVocCreation(const vector<vector<vector<float> > > &features);
void testDatabase(const vector<vector<vector<float> > > &features);

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

// number of training images (as in the original DBoW2 demo)
const int NIMAGES = 4;

// extended surf gives 128-dimensional vectors
const bool EXTENDED_SURF = false;

// - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
void wait()
{
  cout << endl << "Press enter to continue" << endl;
  getchar();
}

// ----------------------------------------------------------------------------

int main()
{
  vector<vector<vector<float> > > features;
  loadFeatures(features);

  testVocCreation(features);
  wait();
  testDatabase(features);

  return 0;
}
// ----------------------------------------------------------------------------

void loadFeatures(vector<vector<vector<float> > > &features)
{
  features.clear();
  features.reserve(NIMAGES);

  // SURF settings restored from the original DBoW2 demo
  // (hessian threshold 400, 4 octaves, 2 layers per octave)
  cv::Ptr<cv::xfeatures2d::SURF> surf =
    cv::xfeatures2d::SURF::create(400, 4, 2, EXTENDED_SURF);

  cout << "Extracting SURF features..." << endl;
  for(int i = 0; i < NIMAGES; ++i)
  {
    stringstream ss;
    ss << "images/image" << i << ".png";

    cv::Mat image = cv::imread(ss.str(), 0); // load as grayscale
    cv::Mat mask;
    vector<cv::KeyPoint> keypoints;
    vector<float> descriptors;

    surf->detectAndCompute(image, mask, keypoints, descriptors);

    features.push_back(vector<vector<float> >());
    changeStructure(descriptors, features.back(), surf->descriptorSize());
  }
}
// ----------------------------------------------------------------------------

void changeStructure(const vector<float> &plain, vector<vector<float> > &out,
  int L)
{
  // split the flat descriptor array into one vector per keypoint (L values each)
  out.resize(plain.size() / L);

  unsigned int j = 0;
  for(unsigned int i = 0; i < plain.size(); i += L, ++j)
  {
    out[j].resize(L);
    std::copy(plain.begin() + i, plain.begin() + i + L, out[j].begin());
  }
}
// ----------------------------------------------------------------------------

void testVocCreation(const vector<vector<vector<float> > > &features)
{
  // Creates a vocabulary from the training features, setting the branching
  // factor (k) and the depth levels (L) of the tree, and the weighting and
  // scoring schemes. k clusters are created from the given descriptors with
  // some seeding algorithm.
  const int k = 9;
  const int L = 3;
  const WeightingType weight = TF_IDF;
  const ScoringType score = L1_NORM;

  Surf64Vocabulary voc(k, L, weight, score);

  cout << "Creating a small " << k << "^" << L << " vocabulary..." << endl;
  voc.create(features);
  cout << "... done!" << endl;

  cout << "Vocabulary information: " << endl
    << voc << endl << endl;

  // lets do something with this vocabulary
  cout << "Matching images against themselves (0 low, 1 high): " << endl;
  BowVector v1, v2;
  for(int i = 0; i < NIMAGES; i++)
  {
    // transforms a set of descriptors into a bow vector
    voc.transform(features[i], v1);
    for(int j = 0; j < NIMAGES; j++)
    {
      voc.transform(features[j], v2);

      double score = voc.score(v1, v2);
      cout << "Image " << i << " vs Image " << j << ": " << score << endl;
    }
  }

  // save the vocabulary to disk
  cout << endl << "Saving vocabulary..." << endl;
  voc.save("small_voc.yml.gz");
  cout << "Done" << endl;
}
// ----------------------------------------------------------------------------

void testDatabase(const vector<vector<vector<float> > > &features)
{
  cout << "Creating a small database..." << endl;

  // load the vocabulary from disk
  Surf64Vocabulary voc("small_voc.yml.gz");

  Surf64Database db(voc, false, 0); // false = do not use direct index
  // (so ignore the last param)
  // The direct index is useful if we want to retrieve the features that
  // belong to some vocabulary node.
  // db creates a copy of the vocabulary, we may get rid of "voc" now

  // add images to the database
  for(int i = 0; i < NIMAGES; i++)
  {
    db.add(features[i]);
  }

  cout << "... done!" << endl;

  cout << "Database information: " << endl << db << endl;

  // and query the database
  cout << "Querying the database: " << endl;

  QueryResults ret;
  for(int i = 0; i < NIMAGES; i++)
  {
    db.query(features[i], ret, 4); // retrieve the 4 best matches

    // ret[0] is always the same image in this case, because we added it to the
    // database. ret[1] is the second best match.

    cout << "Searching for Image " << i << ". " << ret << endl;
  }

  cout << endl;

  // we can save the database. The created file includes the vocabulary
  // and the entries added
  cout << "Saving database..." << endl;
  db.save("small_db.yml.gz");
  cout << "... done!" << endl;

  // once saved, we can load it again
  cout << "Retrieving database once again..." << endl;
  Surf64Database db2("small_db.yml.gz");
  cout << "... done! This is: " << endl << db2 << endl;
}
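The demo only matches the training images against themselves. In an actual loop-closure front end, the same two calls (transform and score) are used online: each new keyframe's descriptors are turned into a BoW vector and scored against the BoW vectors of earlier keyframes, and frames whose score exceeds a threshold become loop candidates. The function below is only a minimal sketch of that idea, written to sit in the same source file as the demo above (it reuses the same headers and the Surf64Vocabulary type); the 0.3 threshold and the 10-frame exclusion window are made-up placeholder values, not anything prescribed by DBoW2.

// Minimal loop-closure candidate check (sketch, not part of the DBoW2 demo).
// Assumes "small_voc.yml.gz" was created by testVocCreation() above.
void detectLoopCandidates(const vector<vector<vector<float> > > &features)
{
  Surf64Vocabulary voc("small_voc.yml.gz");

  const double THRESHOLD = 0.3;    // hypothetical value, must be tuned
  const size_t EXCLUDE_RECENT = 10; // skip the most recent frames

  vector<BowVector> history; // BoW vectors of past frames
  for(size_t i = 0; i < features.size(); ++i)
  {
    BowVector cur;
    voc.transform(features[i], cur);

    // compare only against frames that are not immediate neighbors
    for(size_t j = 0; j + EXCLUDE_RECENT < i; ++j)
    {
      double s = voc.score(cur, history[j]);
      if(s > THRESHOLD)
        cout << "Loop candidate: frame " << i << " <-> frame " << j
             << " (score " << s << ")" << endl;
    }
    history.push_back(cur);
  }
}

A real system would normally also verify each candidate geometrically (e.g., a RANSAC pose or fundamental-matrix check) and require the match to persist over several consecutive frames before accepting it as a loop closure.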