Reposted from: http://www.cnblogs.com/xfzhang/archive/2013/05/24/3096412.html

In supervised machine learning, the dataset is commonly divided into two or three parts: a training set (train set), a validation set (validation set), and a test set (test set).

http://blog.sina.com.cn/s/blog_4d2f6cf201000cjx.html

In general, the samples need to be divided into three independent parts: a training set, a validation set, and a test set. The training set is used to fit the model; the validation set is used to determine the network architecture or the parameters that control model complexity; and the test set is used to evaluate the performance of the finally selected, best model. A typical split gives the training set 50% of the total sample and 25% to each of the other two, with all three parts drawn at random from the sample.
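
As a concrete illustration, here is a minimal sketch of such a random 50%/25%/25% split in Python; the synthetic data X, y and the NumPy-based approach are assumptions for the example, not part of the original text:

import numpy as np

# Hypothetical data: X holds the features, y the labels.
rng = np.random.default_rng(0)
X = rng.random((1000, 10))
y = rng.integers(0, 2, 1000)

# Shuffle the indices, then cut them 50% / 25% / 25%.
idx = rng.permutation(len(X))
n_train = int(0.50 * len(X))
n_val = int(0.25 * len(X))

X_train, y_train = X[idx[:n_train]], y[idx[:n_train]]
X_val, y_val = X[idx[n_train:n_train + n_val]], y[idx[n_train:n_train + n_val]]
X_test, y_test = X[idx[n_train + n_val:]], y[idx[n_train + n_val:]]
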
When the sample is small, the split above is no longer appropriate. A common practice is to hold out a small portion as the test set and apply K-fold cross-validation to the remaining N samples: shuffle the samples, divide them evenly into K folds, train on K-1 folds in turn with the remaining fold used for validation, and compute the sum of squared prediction errors; finally, average the K sums of squared prediction errors and use that as the criterion for selecting the best model structure. In the special case K = N, this is the leave-one-out method.
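
A minimal sketch of the K-fold procedure itself, assuming scikit-learn is available and using ridge regression merely as a stand-in for whatever model is being selected:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X, y = rng.random((100, 5)), rng.random(100)   # small-sample setting

kf = KFold(n_splits=10, shuffle=True, random_state=0)
sse_per_fold = []
for train_idx, val_idx in kf.split(X):
    # Train on K-1 folds, validate on the remaining fold.
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    sse_per_fold.append(np.sum((pred - y[val_idx]) ** 2))

# Average of the K sums of squared errors: the model-selection criterion.
cv_score = np.mean(sse_per_fold)

With n_splits equal to the number of samples, this reduces to leave-one-out.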

http://www.cppblog.com/guijie/archive/2008/07/29/57407.html

These three terms appear constantly in machine learning papers, yet many people are not entirely clear about what they mean, and the latter two in particular are often used interchangeably. Ripley, B.D. (1996) defines all three in his classic monograph Pattern Recognition and Neural Networks:
Training set: A set of examples used for learning, which is to fit the parameters [i.e., weights] of the classifier. 
Validation set: A set of examples used to tune the parameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. 
Test set: A set of examples used only to assess the performance [generalization] of a fully specified classifier.
Clearly, the training set is used to train the model, i.e., to determine its parameters, such as the weights of an ANN; the validation set is used for model selection, i.e., the final tuning and determination of the model, such as an ANN's architecture; and the test set serves purely to assess the generalization ability of the trained model. Of course, the test set cannot guarantee that the model is correct; it only says that similar data will produce similar results under this model. In practice, however, the data is usually split into just two parts, a training set and a test set, and most papers do not involve a validation set at all.
Ripley also addresses the question of why separate test and validation sets are needed:
1. The error rate estimate of the final model on validation data will be biased (smaller than the true error rate) since the validation set is used to select the final model.
2. After assessing the final model with the test set, YOU MUST NOT tune the model any further.

http://stats.stackexchange.com/questions/19048/what-is-the-difference-between-test-set-and-validation-set

Step 1) Training: Each type of algorithm has its own parameter options (the number of layers in a Neural Network, the number of trees in a Random Forest, etc). For each of your algorithms, you must pick one option. That’s why you have a validation set.

Step 2) Validating: You now have a collection of algorithms. You must pick one algorithm. That’s why you have a test set. Most people pick the algorithm that performs best on the validation set (and that's ok). But, if you do not measure your top-performing algorithm’s error rate on the test set, and just go with its error rate on the validation set, then you have blindly mistaken the “best possible scenario” for the “most likely scenario.” That's a recipe for disaster.

Step 3) Testing: I suppose that if your algorithms did not have any parameters then you would not need a third step. In that case, your validation step would be your test step. Perhaps Matlab does not ask you for parameters or you have chosen not to use them and that is the source of your confusion.
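
Putting the three steps together, a minimal sketch; the MLPClassifier candidates and the synthetic data here are illustrative assumptions, not something prescribed by the answer:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = rng.random((400, 10)), rng.integers(0, 2, 400)
X_train, y_train = X[:200], y[:200]          # 50% train
X_val, y_val = X[200:300], y[200:300]        # 25% validation
X_test, y_test = X[300:], y[300:]            # 25% test

# Step 1: train one model per parameter option (here: hidden-layer sizes).
candidates = [(16,), (32,), (32, 16)]
best_acc, best_model = -1.0, None
for hidden in candidates:
    model = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500).fit(X_train, y_train)
    # Step 2: compare the options on the validation set.
    acc = model.score(X_val, y_val)
    if acc > best_acc:
        best_acc, best_model = acc, model

# Step 3: measure the chosen model's error rate once, on the test set.
test_acc = best_model.score(X_test, y_test)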

My idea is that those options in the neural network toolbox are there to avoid overfitting, the situation in which the weights fit the training data only and do not capture the global trend. With a validation set, training iterations continue as long as decreases in the training error are accompanied by decreases in the validation error; when the validation error starts to increase while the training error keeps decreasing, that is the overfitting phenomenon.

http://blog.sciencenet.cn/blog-397960-666113.html

http://stackoverflow.com/questions/2976452/whats-is-the-difference-between-train-validation-and-test-set-in-neural-networ

for each epoch
    for each training data instance
        propagate error through the network
        adjust the weights
        calculate the accuracy over training data
    for each validation data instance
        calculate the accuracy over the validation data
    if the threshold validation accuracy is met
        exit training
    else
        continue training

Once you're finished training, then you run against your testing set and verify that the accuracy is sufficient.
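
A minimal runnable rendering of that pseudocode, using a toy logistic-regression "network" trained by gradient descent; the synthetic data and the 0.95 accuracy threshold are arbitrary choices for the sketch:

import numpy as np

rng = np.random.default_rng(1)
X_train, y_train = rng.random((200, 4)), rng.integers(0, 2, 200)
X_val, y_val = rng.random((50, 4)), rng.integers(0, 2, 50)

w, b, lr = np.zeros(4), 0.0, 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    # Propagate the error through the "network" and adjust the weights.
    p = sigmoid(X_train @ w + b)
    w -= lr * (X_train.T @ (p - y_train)) / len(X_train)
    b -= lr * (p - y_train).mean()

    # Accuracy over the training and validation data.
    train_acc = np.mean((sigmoid(X_train @ w + b) > 0.5) == y_train)
    val_acc = np.mean((sigmoid(X_val @ w + b) > 0.5) == y_val)

    if val_acc >= 0.95:      # threshold validation accuracy met
        break                # exit training; otherwise continue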

Training Set: this data set is used to adjust the weights on the neural network.

Validation Set: this data set is used to minimize overfitting. You're not adjusting the weights of the network with this data set; you're just verifying that any increase in accuracy over the training data set actually yields an increase in accuracy over a data set the network has not been trained on (i.e., the validation data set). If the accuracy over the training data set increases but the accuracy over the validation data set stays the same or decreases, then you're overfitting your neural network and you should stop training.

Testing Set: this data set is used only for testing the final solution in order to confirm the actual predictive power of the network.

The validation set is used in the process of training; the test set is not. The test set allows you

1) to see whether the training set was enough, and
2) to check whether the validation set did its job of preventing overfitting.

If you use the test set in the process of training, it becomes just another validation set, and it will no longer show what happens when new data is fed into the network.

The error surface will be different for different subsets of data drawn from your data set (batch learning). Therefore, if you find a very good local minimum for your training set data, that may not be a very good point, and may even be a very bad point, on the surface generated by some other set of data for the same problem. You therefore need a model that not only finds a good weight configuration for the training set but can also predict new data (data not in the training set) with low error. In other words, the network should generalize from the examples, so that it learns the data rather than simply memorizing the training set by overfitting it.

The validation data set is data for the function you want to learn that you do not use directly to train the network; the network is trained on the training data set. If you use a gradient-based algorithm to train the network, then the error surface and the gradient at any point depend entirely on the training data set, so the training data set is what directly adjusts the weights. To make sure you do not overfit the network, you feed the validation data set to the network and check that its error stays within some range. Because the validation set is not used directly to adjust the weights, a good error on the validation set (and on the test set) indicates that the network not only predicts the training examples well but can also be expected to perform well on new examples that were never used in training.

Early stopping is a way to stop training. There are different variations, but the main outline is this: both the training and validation set errors are monitored; the training error decreases at each iteration (backpropagation and its relatives), and at first the validation error decreases as well. Training is stopped at the moment the validation error starts to rise. The weight configuration at this point yields a model that predicts the training data well, as well as data the network has not seen. But because the validation data is used to select the weight configuration, it still affects the model indirectly. This is where the test set comes in: this set of data is never used in the training process. Once a model has been selected based on the validation set, the test set is run through the model and its error is measured. That error is representative of the error we can expect on entirely new data for the same problem.
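
A minimal sketch of that scheme (monitor the validation error, keep the weights from the best epoch, stop once the validation error has been rising for a few epochs, and only then touch the test set), again with a toy logistic-regression model standing in for the network; the data and the patience value of 5 are assumptions for illustration:

import numpy as np

rng = np.random.default_rng(2)
X_tr, y_tr = rng.random((200, 4)), rng.integers(0, 2, 200)
X_va, y_va = rng.random((50, 4)), rng.integers(0, 2, 50)
X_te, y_te = rng.random((50, 4)), rng.integers(0, 2, 50)

w, lr = np.zeros(4), 0.1
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
error = lambda X, y, w: np.mean((sigmoid(X @ w) > 0.5) != y)

best_err, best_w, bad_epochs = np.inf, w.copy(), 0
for epoch in range(1000):
    p = sigmoid(X_tr @ w)
    w -= lr * (X_tr.T @ (p - y_tr)) / len(X_tr)

    # Checkpoint the best weights; stop when validation error keeps rising.
    e = error(X_va, y_va, w)
    if e < best_err:
        best_err, best_w, bad_epochs = e, w.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= 5:
            break

# The test set is used exactly once, after the model has been selected.
test_error = error(X_te, y_te, best_w)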
