AnswerOpenCV (1001-1007): A Week's Best Picks
There is no October 1st holiday abroad, so I used the National Day break to see what people over there have been working on.
1. Contour problems
Contour: single blob with multiple objects
Hi to everyone.
I'm developing an object-shape identification application and am stuck on separating close objects using contours, since close objects are detected as a single contour. Is there a way to separate the objects?
Things I have tried:
1. Image segmentation with distance transform and the watershed algorithm - it works for only a few images.
2. Separating the objects manually using the distance between two points, as mentioned in http://answers.opencv.org/question/71... - I got stuck choosing the points that would separate the objects.
I have attached a sample contour for reference.
Please suggest any approach to separate the objects.
Analysis: this problem actually arises before thresholding. The usual idea is to preprocess the image, for example with HSV segmentation, or to get creative at the thresholding stage.
2. Performance optimization
http://answers.opencv.org/question/109754/optimizing-splitmerge-for-clahe/
Optimizing split/merge for clahe
I am trying to squeeze the last milliseconds out of a tracking loop. One of the time-consuming parts is adaptive contrast enhancement (CLAHE), which is a necessary step. The results are great, but I am wondering whether I could avoid some of the copying/splitting/merging or apply other optimizations.
Basically I do the following in a tight loop:
cv::cvtColor(rgb, hsv, cv::COLOR_BGR2HSV);
std::vector<cv::Mat> hsvChannels;
cv::split(hsv, hsvChannels);
m_clahe->apply(hsvChannels[2], hsvChannels[2]); /* m_clahe constructed outside loop */
cv::merge(hsvChannels, hsvOut);
cv::cvtColor(hsvOut, rgbOut, cv::COLOR_HSV2BGR);
On the test machine, the above snippet takes about 8 ms (on 1 Mpix images); the actual CLAHE part takes only 1-2 ms.
1 answer
You can save quite a bit. First, get rid of the vector. Then, outside the loop, create a Mat for the V channel only.
Then use extractChannel and insertChannel to access the channel you need. They touch only that one channel, instead of all three like split does.
The reason to put the Mat outside the loop is to avoid reallocating it on every pass. Right now you are allocating and deallocating three Mats every pass.
Test code:
#include "opencv2/imgproc.hpp"
#include "opencv2/highgui.hpp"
#include <iostream>
using namespace std;
using namespace cv;

int main()
{
    TickMeter tm;
    Ptr<CLAHE> clahe = createCLAHE();
    clahe->setClipLimit(4);
    vector<Mat> hsvChannels;
    Mat img, hsv1, hsv2, hsvChannels2, diff;
    img = imread("lena.jpg");
    cvtColor(img, hsv1, COLOR_BGR2HSV);
    cvtColor(img, hsv2, COLOR_BGR2HSV);

    // Baseline: split all three channels, apply CLAHE to V, merge back.
    tm.start();
    for (int i = 0; i < 1000; i++)
    {
        split(hsv2, hsvChannels);
        clahe->apply(hsvChannels[2], hsvChannels[2]);
        merge(hsvChannels, hsv2);
    }
    tm.stop();
    cout << tm << endl;

    // Optimized: touch only the V channel.
    tm.reset();
    tm.start();
    for (int i = 0; i < 1000; i++)
    {
        extractChannel(hsv1, hsvChannels2, 2);
        clahe->apply(hsvChannels2, hsvChannels2);
        insertChannel(hsvChannels2, hsv1, 2);
    }
    tm.stop();
    cout << tm << endl;

    // Sanity check: both paths should produce identical results.
    absdiff(hsv1, hsv2, diff);
    imshow("diff", diff * 255);
    waitKey();
    return 0;
}
Running this code on my machine prints:
4.63716sec
3.80283sec
The key change is using:
extractChannel(hsv1, hsvChannels2, 2);
instead of:
split(hsv2, hsvChannels);
which alone saves around 1 ms per iteration. Since the slower split/merge pattern is one I use all the time, this question was very instructive for me.
3. Basic algorithms
Compare two images and highlight the difference
Hi - First, I'm a total n00b so please be kind. I'd like to create a target-shooting app that lets me use the camera on my Android device to see where I hit the target from shot to shot. The device will be stationary with very little to no movement. My thinking is that I'd access the camera and zoom as needed on the target. Once ready, I'd hit a button that would start taking pictures every x seconds. Each picture would be compared to the previous one to see if there was a change, the change being that I hit the target. If a change was detected, the two images would be saved, the device would stop taking pictures, the image with the change would be displayed on the device, and the spot of change would be highlighted. When I was ready for the next shot, I would hit a button on the device and the process would start over. If I was done shooting, there would be a button to stop.
Any help in getting this project off the ground would be greatly appreciated.
This is a very basic algorithm just to evaluate your use case; it can be improved a lot.
(i) The first step is to identify whether there is a change between two frames. This can be done with a simple standard-deviation measurement: set a threshold for the acceptable difference in deviation.
VideoCapture cap(0);
Mat prevFrame, currentFrame;
const double ACCEPTED_DEVIATION = 2.0;  // tune for your lighting/noise
for (;;)
{
    // Get a frame from the video capture device.
    cap >> currentFrame;
    if (currentFrame.empty())
        break;
    if (!prevFrame.empty())
    {
        // Standard deviations of the current and previous frames.
        Scalar prevMean, prevStdDev, currentMean, currentStdDev;
        meanStdDev(prevFrame, prevMean, prevStdDev);
        meanStdDev(currentFrame, currentMean, currentStdDev);
        // Decision making: a change shows up as a jump in the deviation.
        if (std::abs(currentStdDev[0] - prevStdDev[0]) > ACCEPTED_DEVIATION)
        {
            // Save the images and break out of the loop.
            imwrite("prev.png", prevFrame);
            imwrite("current.png", currentFrame);
            break;
        }
    }
    // Exit the loop on any key press.
    if (waitKey(30) >= 0)
        break;
    // Swap the previous and current frames.
    swap(prevFrame, currentFrame);
}
(ii) The first step only detects that the frames changed. To locate the position where the change occurred, compute the difference between the two saved frames with absdiff. Using this difference image as a mask, find the contours and finally mark the region with a bounding rectangle.
Hope this answers your question.
Isn't this question simply an application of absdiff? Just absdiff, then threshold, then count.
4. System configuration
opencv OCRTesseract::create v3.05
I have tesseract 3.05 and OpenCV 3.2 installed and tested. But when I tried the end-to-end-recognition demo code, I discovered that tesseract was not found by OCRTesseract::create, and the documentation says the interface targets v3.02. Is it possible to use it with Tesseract v3.05? How?
How to create OpenCV binary files from source with tesseract (Windows)
I'll try to explain the steps.
Step 1. Download https://github.com/DanBloomberg/lepto...
Extract it into a directory such as "E:/leptonica-1.74.4".
Run CMake:
  Where is the source code: E:/leptonica-1.74.4
  Where to build the binaries: E:/leptonica-1.74.4/build
Click the Configure button and select your compiler.
Wait for "Configuring done", click the Generate button, and wait for "Generating done".
In Visual Studio 2015, open "E:\leptonica-1.74.4\build\ALL_BUILD.vcxproj", select Release, and build ALL_BUILD.
Check for "Build: 3 succeeded" and make sure E:\leptonica-1.74.4\build\src\Release\leptonica-1.74.4.lib
and E:\leptonica-1.74.4\build\bin\Release\leptonica-1.74.4.dll
have been created.
Step 2. Download https://github.com/tesseract-ocr/tess...
Extract it into a directory such as "E:/tesseract-3.05.01".
Create the directory E:\tesseract-3.05.01\Files\leptonica\include.
Copy *.h from E:\leptonica-1.74.4\src
into E:\tesseract-3.05.01\Files\leptonica\include.
Copy *.h from E:\leptonica-1.74.4\build\src
into E:\tesseract-3.05.01\Files\leptonica\include.
Run CMake:
  Where is the source code: E:/tesseract-3.05.01
  Where to build the binaries: E:/tesseract-3.05.01/build
Click the Configure button and select your compiler.
Set Leptonica_DIR to E:/leptonica-1.74.4/build, click Configure again, wait for "Configuring done", click Generate, and wait for "Generating done".
In Visual Studio 2015, open "E:\tesseract-3.05.01\build\ALL_BUILD.vcxproj" and build ALL_BUILD.
Make sure E:\tesseract-3.05.01\build\Release\tesseract305.lib
and E:\tesseract-3.05.01\build\bin\Release\tesseract305.dll
have been generated.
Step 3. Create the directory E:\tesseract-3.05.01\include\tesseract.
Copy all *.h files from
E:\tesseract-3.05.01\api
E:\tesseract-3.05.01\ccmain
E:\tesseract-3.05.01\ccutil
E:\tesseract-3.05.01\ccstruct
into E:\tesseract-3.05.01\include\tesseract.
In the OpenCV CMake configuration, set Tesseract_INCLUDE_DIR to E:/tesseract-3.05.01/include,
set Tesseract_LIBRARY to E:/tesseract-3.05.01/build/Release/tesseract305.lib,
and set Lept_LIBRARY to E:/leptonica-1.74.4/build/src/Release/leptonica-1.74.4.lib.
When you click the Configure button you should see "Tesseract: YES", which means everything is OK.
Make any other settings, generate, and compile.
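For reference, the clicks above can be approximated from the command line. This is a hedged sketch: the generator name assumes 64-bit VS2015, E:/opencv stands in for your OpenCV source path, and the -S/-B flags need CMake 3.13+ (with older CMake, cd into each build directory and run cmake <source-dir> instead):

```shell
# Step 1: Leptonica
cmake -S E:/leptonica-1.74.4 -B E:/leptonica-1.74.4/build -G "Visual Studio 14 2015 Win64"
cmake --build E:/leptonica-1.74.4/build --config Release

# Step 2: Tesseract, pointing at the Leptonica build tree
cmake -S E:/tesseract-3.05.01 -B E:/tesseract-3.05.01/build -G "Visual Studio 14 2015 Win64" -DLeptonica_DIR=E:/leptonica-1.74.4/build
cmake --build E:/tesseract-3.05.01/build --config Release

# Step 3: OpenCV, telling its CMake where Tesseract lives
cmake -S E:/opencv -B E:/opencv/build -G "Visual Studio 14 2015 Win64" -DTesseract_INCLUDE_DIR=E:/tesseract-3.05.01/include -DTesseract_LIBRARY=E:/tesseract-3.05.01/build/Release/tesseract305.lib -DLept_LIBRARY=E:/leptonica-1.74.4/build/src/Release/leptonica-1.74.4.lib
cmake --build E:/opencv/build --config Release
```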
He Lu's note: OCR has always been a classic image-processing problem, and tesseract is a very classic project in this direction, well worth studying in combination with EAST.
5. Algorithm problems
Pyramid Blending with Single Input and Non-Vertical Boundary
Hi All,
Here is the input image.
Say you do not have the other half of the image. Is it still possible to do Laplacian pyramid blending?
I tried feeding the image directly into the algorithm, with the weights set as opposite triangles. The result comes out the same as the input. My other idea is to split the triangles, build Gaussian and Laplacian pyramids on each separately, and then merge them.
But the challenge is how to apply the Laplacian pyramid to triangular data. What do we fill in on the missing half? I tried 0, and it made the boundary very bright.
If pyramid blending is not the best approach for this, what other blending methods would you recommend I look into?
Any help is much appreciated!
Comments
Thank you for your comment. I tried doing that (as explained in my second paragraph); the output is the same as the original image. Please note that the boundary where I want to merge is NOT vertical, so I do not understand what you meant by "line blend".
What this question needs is a multiband blend, and across a slanted boundary at that, which is rather odd; I'm not sure what scenario would call for it, but as an algorithm problem it is quite valuable. The first thing to solve is the slanted line blend, which is worth thinking about.
6. New gadgets
DroidCam with OpenCV
With my previous laptop (Windows 7) I connected to my phone camera via DroidCam and used VideoCapture in OpenCV with Visual Studio, and there was no problem. But now I have a laptop with Windows 10, and when I connect the same way it shows an orange screen all the time. The DroidCam app itself works fine on this laptop and shows the video; it is only OpenCV's VideoCapture from Visual Studio that shows the orange screen.
Thanks in advance
Disable the laptop's built-in webcam in Device Manager and then restart. Then it works.
7. Algorithm research
OpenCV / C++ - Filling holes
Hello there,
For a personal project, I'm trying to detect an object and its shadow. These are the results I have so far. Original:
Object:
Shadow:
The external contours of the object are quite good, but as you can see, my object is not full. Same for the shadow. I would like to get full, filled contours for the object and its shadow, and I don't know how to do better than this (I just use dilate for the moment). Does someone know a way to obtain a better result? Regards.
An interesting problem; worth digging into.