Deep Learning for the Kaggle Digit Recognizer


I have been studying deep learning recently, so I picked the digit recognition competition on Kaggle as practice. I solve it with two toolkits and compare their results.

The two toolkits are DeepLearnToolbox and Caffe.

For an analysis of the DeepLearnToolbox source code, see: http://blog.csdn.net/lu597203933/article/details/46576017

To learn Caffe, see: http://caffe.berkeleyvision.org/

Part 1: DeepLearnToolbox

DeepLearnToolbox is MATLAB-based and quite simple; reading its source code is very helpful for understanding how convolutional neural networks work.

Here I mainly preprocess the dataset provided by Digit Recognizer so that it works with the DeepLearnToolbox toolkit. This involves two .m files, predeal.m and cnntest.m.

All that needs to be done is to change the addpath paths; the code comments are quite detailed, so read through them yourself.

Code:

predeal.m

% use DeepLearnToolbox to solve the Digit Recognizer competition on Kaggle
clear; clc
trainFile = 'train.csv';
testFile = 'test.csv';
M = csvread(trainFile, 1);                        % read everything in the csv file except the header row
train_x = M(:, 2:end);                            % columns 2..end hold the pixel data
label = M(:, 1)';                                 % column 1 holds the labels
label(label == 0) = 10;                           % map 0 to 10, otherwise the next line cannot handle it
train_y = full(sparse(label, 1:size(train_x, 1), 1));   % turn the labels into a one-hot matrix
train_x = double(reshape(train_x', 28, 28, size(train_x, 1))) / 255;

%% prepare the test data
M = csvread(testFile, 1);                         % read everything in the csv file except the header row
test_x = double(reshape(M', 28, 28, size(M, 1))) / 255;
clear label M testFile trainFile

addpath D:\DeepLearning\DeepLearnToolbox-master\data\     % change these paths to your own install
addpath D:\DeepLearning\DeepLearnToolbox-master\CNN\
addpath D:\DeepLearning\DeepLearnToolbox-master\util\

rand('state', 0)
cnn.layers = {                                    % number of feature maps, kernel sizes, etc. of each layer
    struct('type', 'i')                                       % input layer
    struct('type', 'c', 'outputmaps', 6, 'kernelsize', 5)     % convolution layer
    struct('type', 's', 'scale', 2)                           % sub-sampling layer
    struct('type', 'c', 'outputmaps', 12, 'kernelsize', 5)    % convolution layer
    struct('type', 's', 'scale', 2)                           % sub-sampling layer
};
opts.alpha = 0.01;       % learning rate
opts.batchsize = 50;     % stochastic gradient descent: each update uses only 50 samples
opts.numepochs = 25;     % number of epochs

cnn = cnnsetup(cnn, train_x, train_y);            % initialise every layer's parameters, weights and biases
cnn = cnntrain(cnn, train_x, train_y, opts);      % training: backpropagation and the iteration loop

test_y = cnntest(cnn, test_x);                    % predict on the test set
test_y(test_y == 10) = 0;                         % map label 10 back to digit 0
test_y = test_y';
M = [(1:length(test_y))' test_y(:)];
csvwrite('test_y.csv', M);
figure; plot(cnn.rL);

cnntest.m

function [test_y] = cnntest(net, x)
% feedforward only, then take the most probable class for each sample
net = cnnff(net, x);
[~, test_y] = max(net.o);
end
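As a quick check that the preprocessing did what we expect, you can display one training digit together with its label. A minimal MATLAB sketch, assuming predeal.m has just been run so that train_x and train_y are still in the workspace:

% Sanity check: show the first training digit and its one-hot label.
figure;
imagesc(train_x(:, :, 1)');     % the transpose only affects how the digit is drawn
axis image; colormap gray;
lab = find(train_y(:, 1));      % class index 1..10, where 10 stands for digit 0
title(sprintf('digit = %d', mod(lab, 10)));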

Result: the score obtained with DeepLearnToolbox is not great, only 0.94586.

Part 2: Caffe for Digit Recognizer

Although Caffe ships with an MNIST example that handles digit images, the data the official site provides is in binary format and the result is only a plain accuracy number, so the example cannot simply be reused as-is.

The procedure is as follows:

Step 1: convert the given CSV data into lmdb format

Here I wrote a program, convert_data_to_lmdb.cpp, under the mnist directory to process the data.

The code is as follows:

#include <iostream>
#include <fstream>
#include <string>
#include <sstream>

#include "boost/scoped_ptr.hpp"
#include "gflags/gflags.h"
#include "glog/logging.h"

#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/io.hpp"
#include "caffe/util/rng.hpp"

using namespace caffe;
using namespace std;
using std::pair;
using boost::scoped_ptr;

/* edited by Zack
 * argv[1] the input file, argv[2] the output file */
DEFINE_string(backend, "lmdb", "The backend for storing the result");  // FLAGS_backend == lmdb

int main(int argc, char **argv) {
  ::google::InitGoogleLogging(argv[0]);
#ifndef GFLAGS_GFLAGS_H_
  namespace gflags = google;
#endif
  gflags::ParseCommandLineFlags(&argc, &argv, true);  // parse --backend
  if (argc < 3) {
    LOG(ERROR) << "please check the input arguments!";
    return 1;
  }
  ifstream infile(argv[1]);
  if (!infile) {
    LOG(ERROR) << "please check the input arguments!";
    return 1;
  }
  string str;
  int count = 0;
  int rows = 28;
  int cols = 28;
  unsigned char *buffer = new unsigned char[rows * cols];
  stringstream ss;

  Datum datum;             // this data structure stores the data and label
  datum.set_channels(1);   // channels
  datum.set_height(rows);  // rows
  datum.set_width(cols);   // cols

  scoped_ptr<db::DB> db(db::GetDB(FLAGS_backend));         // new DB object
  db->Open(argv[2], db::NEW);                              // open the lmdb file to store the data
  scoped_ptr<db::Transaction> txn(db->NewTransaction());   // new Transaction object to put and commit the data

  const int kMaxKeyLength = 256;  // to save the key
  char key_cstr[kMaxKeyLength];
  bool flag = false;
  while (getline(infile, str)) {
    if (flag == false) {  // skip the header line
      flag = true;
      continue;
    }
    int beg = 0;
    int end = 0;
    int str_index = 0;
    // for test.csv this line needs to be added ---------- 1
    // datum.set_label(0);
    while ((end = str.find_first_of(',', beg)) != string::npos) {
      // cout << end << endl;
      string dig_str = str.substr(beg, end - beg);
      int pixes;
      ss.clear();
      ss << dig_str;
      ss >> pixes;
      // for test.csv this block needs to be deleted ---------- 2
      if (beg == 0) {
        datum.set_label(pixes);
        beg = ++end;
        continue;
      }
      buffer[str_index++] = (unsigned char)pixes;
      beg = ++end;
    }
    string dig_str = str.substr(beg);  // the last pixel after the final comma
    int pixes;
    ss.clear();
    ss << dig_str;
    ss >> pixes;
    buffer[str_index++] = (unsigned char)pixes;
    datum.set_data(buffer, rows * cols);

    int length = snprintf(key_cstr, kMaxKeyLength, "%08d", count);
    // Put in db
    string out;
    CHECK(datum.SerializeToString(&out));     // serialize to string
    txn->Put(string(key_cstr, length), out);  // put both the key and the value
    if (++count % 1000 == 0) {                // commit every 1000 records
      txn->Commit();
      txn.reset(db->NewTransaction());
      LOG(ERROR) << "Processed " << count << " files.";
    }
  }
  // write the last batch
  if (count % 1000 != 0) {  // commit the last batch
    txn->Commit();
    LOG(ERROR) << "Processed " << count << " files.";
  }
  delete[] buffer;
  return 0;
}

Then run make all -j8 to compile the code.

The corresponding binary is then generated under the build directory.


Then run ./build/examples/mnist/convert_data_to_lmdb.bin examples/mnist/kaggle/data/train.csv examples/mnist/kaggle/mnist_train_lmdb --backend=lmdb

This produces the lmdb file for the training data. For test.csv, which has no labels, the code needs two small adjustments; both places are marked in the code above.

Then run make all -j8 again, followed by ./build/examples/mnist/convert_data_to_lmdb.bin examples/mnist/kaggle/data/test.csv examples/mnist/kaggle/mnist_test_lmdb --backend=lmdb

This produces the corresponding lmdb file for the test data.

Step 2: train on the training data to obtain a model

While training a model, Caffe periodically evaluates on a test dataset (the test_iter / test_interval settings in the solver), so here we can take the first 1000 rows of train.csv to build a cross-validation lmdb, following the same process as above.
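A minimal MATLAB sketch of that split (the file names train.csv and val.csv here are placeholders; the resulting val.csv is converted with the same convert_data_to_lmdb.bin tool as above):

% Copy the header plus the first 1000 labelled rows of train.csv into val.csv,
% which can then be converted to lmdb exactly like the full training file.
fin  = fopen('train.csv', 'r');
fout = fopen('val.csv', 'w');
for i = 1:1001                       % 1 header line + 1000 samples
    row = fgetl(fin);
    if ~ischar(row), break; end      % stop early if the file is shorter
    fprintf(fout, '%s\n', row);
end
fclose(fin); fclose(fout);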

Copy lenet_solver.prototxt and lenet_train_test.prototxt from the mnist directory into the kaggle directory, and change the data source directories and the batch sizes in them accordingly. For details, see the download link.

Then run ./build/tools/caffe train --solver=examples/mnist/kaggle/lenet_solver.prototxt, which produces lenet_iter_10000.caffemodel.

Step 3: extract the prob-layer features for the test set

Here we use extract_features.cpp under the tools directory. However, that program stores its results in lmdb format, so I modified the source as follows:

#include <stdio.h>  // for snprintf
#include <string>
#include <vector>
#include <fstream>
#include <sstream>

#include "boost/algorithm/string.hpp"
#include "google/protobuf/text_format.h"

#include "caffe/blob.hpp"
#include "caffe/common.hpp"
#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/io.hpp"
#include "caffe/vision_layers.hpp"

using caffe::Blob;
using caffe::Caffe;
using caffe::Datum;
using caffe::Net;
using boost::shared_ptr;
using std::string;
namespace db = caffe::db;

template<typename Dtype>
int feature_extraction_pipeline(int argc, char** argv);

int main(int argc, char** argv) {
  return feature_extraction_pipeline<float>(argc, argv);
  // return feature_extraction_pipeline<double>(argc, argv);
}

template<typename Dtype>
int feature_extraction_pipeline(int argc, char** argv) {
  ::google::InitGoogleLogging(argv[0]);
  const int num_required_args = 7;  // there must be at least 7 arguments
  if (argc < num_required_args) {
    LOG(ERROR)<<
    "This program takes in a trained network and an input data layer, and then"
    " extract features of the input data produced by the net.\n"
    "Usage: extract_features  pretrained_net_param"
    "  feature_extraction_proto_file  extract_feature_blob_name1[,name2,...]"
    "  save_feature_dataset_name1[,name2,...]  num_mini_batches  db_type"
    "  [CPU/GPU] [DEVICE_ID=0]\n"
    "Note: you can extract multiple features in one pass by specifying"
    " multiple feature blob names and dataset names separated by ','."
    " The names cannot contain white space characters and the number of blobs"
    " and datasets must be equal.";
    return 1;
  }
  int arg_pos = num_required_args;  // the number of required arguments

  arg_pos = num_required_args;
  if (argc > arg_pos && strcmp(argv[arg_pos], "GPU") == 0) {  // whether to use the GPU ------ -gpu 0
    LOG(ERROR)<< "Using GPU";
    uint device_id = 0;
    if (argc > arg_pos + 1) {
      device_id = atoi(argv[arg_pos + 1]);
      CHECK_GE(device_id, 0);
    }
    LOG(ERROR) << "Using Device_id=" << device_id;
    Caffe::SetDevice(device_id);
    Caffe::set_mode(Caffe::GPU);
  } else {
    LOG(ERROR) << "Using CPU";
    Caffe::set_mode(Caffe::CPU);
  }

  arg_pos = 0;  // the name of the executable
  std::string pretrained_binary_proto(argv[++arg_pos]);  // the trained model
  // Expected prototxt contains at least one data layer such as
  //  the layer data_layer_name and one feature blob such as the
  //  fc7 top blob to extract features.
  /*
   layers {
     name: "data_layer_name"
     type: DATA
     data_param {
       source: "/path/to/your/images/to/extract/feature/images_leveldb"
       mean_file: "/path/to/your/image_mean.binaryproto"
       batch_size: 128
       crop_size: 227
       mirror: false
     }
     top: "data_blob_name"
     top: "label_blob_name"
   }
   layers {
     name: "drop7"
     type: DROPOUT
     dropout_param {
       dropout_ratio: 0.5
     }
     bottom: "fc7"
     top: "fc7"
   }
  */
  std::string feature_extraction_proto(argv[++arg_pos]);  // the net definition
  shared_ptr<Net<Dtype> > feature_extraction_net(
      new Net<Dtype>(feature_extraction_proto, caffe::TEST));  // build the net and all its layers
  feature_extraction_net->CopyTrainedLayersFrom(pretrained_binary_proto);  // initialise the weights

  std::string extract_feature_blob_names(argv[++arg_pos]);  // which blobs' features to extract
  std::vector<std::string> blob_names;
  boost::split(blob_names, extract_feature_blob_names, boost::is_any_of(","));  // several blobs may be extracted, each stored separately

  std::string save_feature_dataset_names(argv[++arg_pos]);  // where to store the features
  std::vector<std::string> dataset_names;
  boost::split(dataset_names, save_feature_dataset_names,  // one dataset name per extracted blob
      boost::is_any_of(","));
  CHECK_EQ(blob_names.size(), dataset_names.size()) <<
      " the number of blob names and dataset names must be equal";
  size_t num_features = blob_names.size();  // how many blobs are extracted

  for (size_t i = 0; i < num_features; i++) {
    CHECK(feature_extraction_net->has_blob(blob_names[i]))
        << "Unknown feature blob name " << blob_names[i]
        << " in the network " << feature_extraction_proto;
  }

  int num_mini_batches = atoi(argv[++arg_pos]);  // how many mini-batches to run

  // init the DB and Transaction for all blobs you want to extract features from
  std::vector<shared_ptr<db::DB> > feature_dbs;    // one DB object per extracted blob
  std::vector<shared_ptr<db::Transaction> > txns;  // one Transaction object per extracted blob

  // edit by Zack
  // std::string strfile = "/home/hadoop/caffe/textileImage/features/probTest";
  std::string strfile = argv[argc - 1];
  std::vector<std::ofstream*> vec(num_features, 0);

  const char* db_type = argv[++arg_pos];  // the storage backend, e.g. lmdb
  for (size_t i = 0; i < num_features; ++i) {
    LOG(INFO)<< "Opening dataset " << dataset_names[i];  // dataset_names[i] stores the i-th blob's features as lmdb
    shared_ptr<db::DB> db(db::GetDB(db_type));  // the type of the db
    db->Open(dataset_names.at(i), db::NEW);     // open the directory that stores the features
    feature_dbs.push_back(db);                  // keep the db in the vector
    shared_ptr<db::Transaction> txn(db->NewTransaction());  // the transaction for this db
    txns.push_back(txn);                        // keep the transaction in the vector
    // edit by Zack: one text file per extracted blob
    std::stringstream ss;
    ss.clear();
    string index;
    ss << i;
    ss >> index;
    std::string str = strfile + index + ".txt";
    vec[i] = new std::ofstream(str.c_str());
  }

  LOG(ERROR)<< "Extacting Features";

  Datum datum;
  const int kMaxKeyStrLength = 100;
  char key_str[kMaxKeyStrLength];  // to store the key
  std::vector<Blob<float>*> input_vec;
  std::vector<int> image_indices(num_features, 0);  // a running key per extracted blob
  for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index) {
    feature_extraction_net->Forward(input_vec);
    for (int i = 0; i < num_features; ++i) {  // for each requested blob name, e.g. fc7, fc8
      const shared_ptr<Blob<Dtype> > feature_blob = feature_extraction_net
          ->blob_by_name(blob_names[i]);
      int batch_size = feature_blob->num();  // number of images in this batch
      int dim_features = feature_blob->count() / batch_size;  // feature dimension per image in this blob
      const Dtype* feature_blob_data;  // the feature values
      for (int n = 0; n < batch_size; ++n) {
        datum.set_height(feature_blob->height());      // set the height
        datum.set_width(feature_blob->width());        // set the width
        datum.set_channels(feature_blob->channels());  // set the channels
        datum.clear_data();        // clear data
        datum.clear_float_data();  // clear float_data
        feature_blob_data = feature_blob->cpu_data() +
            feature_blob->offset(n);  // the features of the n-th image
        for (int d = 0; d < dim_features; ++d) {
          datum.add_float_data(feature_blob_data[d]);
          (*vec[i]) << feature_blob_data[d] << " ";  // save the features to the text file
        }
        (*vec[i]) << std::endl;
        // LOG(ERROR)<< "dim" << dim_features;
        int length = snprintf(key_str, kMaxKeyStrLength, "%010d",
            image_indices[i]);  // the key is the running image index
        string out;
        CHECK(datum.SerializeToString(&out));  // serialize to string
        txns.at(i)->Put(std::string(key_str, length), out);  // put it into the transaction
        ++image_indices[i];  // advance the key
        if (image_indices[i] % 1000 == 0) {  // commit every 1000 images
          txns.at(i)->Commit();
          txns.at(i).reset(feature_dbs.at(i)->NewTransaction());
          LOG(ERROR)<< "Extracted features of " << image_indices[i] <<
              " query images for feature blob " << blob_names[i];
        }
      }  // for (int n = 0; n < batch_size; ++n)
    }  // for (int i = 0; i < num_features; ++i)
  }  // for (int batch_index = 0; batch_index < num_mini_batches; ++batch_index)
  // write the last batch
  for (int i = 0; i < num_features; ++i) {
    if (image_indices[i] % 1000 != 0) {  // commit the last partial batch
      txns.at(i)->Commit();
    }
    // edit by Zack
    vec[i]->close();
    delete vec[i];
    LOG(ERROR)<< "Extracted features of " << image_indices[i] <<
        " query images for feature blob " << blob_names[i];
    feature_dbs.at(i)->Close();
  }

  LOG(ERROR)<< "Successfully extracted the features!";
  return 0;
}

Finally, the prob layer (i.e., the final probabilities) is written out to a txt file.

In addition, the network structure is adjusted: since only prediction is needed, the training-specific parameters in the network definition can all be dropped.

The deploy.prototxt is as follows:

name: "LeNet"
layer {
name: "mnist"
type: "Data"
top: "data"
top: "label"
transform_param {
scale: 0.00390625
}
data_param {
source: "examples/mnist/kaggle/mnist_test_lmdb"
batch_size: 100
backend: LMDB
}
} layer {
name: "conv1"
type: "Convolution"
bottom: "data"
top: "conv1" convolution_param {
num_output: 20
kernel_size: 5
stride: 1 }
}
layer {
name: "pool1"
type: "Pooling"
bottom: "conv1"
top: "pool1"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "conv2"
type: "Convolution"
bottom: "pool1"
top: "conv2" convolution_param {
num_output: 50
kernel_size: 5
stride: 1 }
}
layer {
name: "pool2"
type: "Pooling"
bottom: "conv2"
top: "pool2"
pooling_param {
pool: MAX
kernel_size: 2
stride: 2
}
}
layer {
name: "ip1"
type: "InnerProduct"
bottom: "pool2"
top: "ip1" inner_product_param {
num_output: 500 }
}
layer {
name: "relu1"
type: "ReLU"
bottom: "ip1"
top: "ip1"
}
layer {
name: "ip2"
type: "InnerProduct"
bottom: "ip1"
top: "ip2" inner_product_param {
num_output: 10 }
}
layer {
name: "prob"
type: "Softmax"
bottom: "ip2"
top: "prob"
}
layer {
name: "accuracy"
type: "Accuracy"
bottom: "prob"
bottom: "label"
top: "accuracy"
}
layer {
name: "loss"
type: "SoftmaxWithLoss"
bottom: "ip2"
bottom: "label"
top: "loss"
}

Then run:

./build/tools/extract_features.bin examples/mnist/kaggle/lenet_iter_10000.caffemodel examples/mnist/kaggle/deploy.prototxt prob examples/mnist/kaggle/features 280 lmdb /home/hadoop/caffe/caffe-master/examples/mnist/kaggle/feature

Here 280 is the number of mini-batches to run: since batch_size is set to 100 in deploy.prototxt, 280 × 100 = 28000, the size of the whole test set. /home/hadoop/caffe/caffe-master/examples/mnist/kaggle/feature is the path prefix of the txt files where the extracted features are saved, examples/mnist/kaggle/lenet_iter_10000.caffemodel holds the trained weights, and examples/mnist/kaggle/deploy.prototxt is the network definition.

Step 4: post-process the resulting txt file

After the three steps above we obtain feature0.txt, which stores a 28000 × 10 matrix: the probability of each sample belonging to each class. Running the MATLAB code below then produces the submission file Kaggle requires. The final accuracy is 0.98986, and the ranking went up by more than 400 places. Great!

% caffe toolbox, post-processing of the extracted probabilities
clear; clc;
feature = load('feature0.txt');    % 28000 x 10 matrix of per-class probabilities
feature = feature';
[~, test_y] = max(feature);        % most probable class (1..10) for each sample
test_y = test_y - 1;               % classes 1..10 correspond to digits 0..9
test_y = test_y';
M = [(1:length(test_y))' test_y(:)];
csvwrite('test_y3.csv', M);

All the files and code can be downloaded at: https://github.com/zack6514/zackcoding
