Comparing Differently Trained Models
At the end of the previous post, we mentioned that the solution found by L-BFGS made different errors than the model we trained with SGD and momentum. So, one question is which solution is better in terms of generalization and, if the two focus on different aspects, how the individual methods differ.
To make the analysis easier, yet at least a little realistic, we train a linear SVM classifier (W, bias) for a "werewolf" theme. In other words, all movies with that theme are labeled +1, and we sample random movies for the "rest", labeled -1. For the features, we use the 1,500 most frequent keywords. All random seeds were fixed, which means both models start at the same "point".
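For illustration, a minimal sketch of how such a dataset could be assembled; `movies`, `theme_ids`, and `vocab` are hypothetical names not taken from the original post:

```python
import numpy as np

# Hypothetical sketch: `movies` is a list of keyword sets per movie,
# `theme_ids` the indices of movies tagged "werewolf", and `vocab`
# the 1,500 most frequent keywords.
def build_dataset(movies, theme_ids, vocab):
    index = {kw: j for j, kw in enumerate(vocab)}
    X = np.zeros((len(movies), len(vocab)))
    for i, keywords in enumerate(movies):
        for kw in keywords:
            if kw in index:
                X[i, index[kw]] = 1.0  # binary keyword indicator
    y = np.array([1.0 if i in theme_ids else -1.0 for i in range(len(movies))])
    return X, y
```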
In our first experiment, we only care about minimizing the training error. The SGD method (I) uses standard momentum and an L1 penalty of 0.0005 in combination with mini-batches; the learning rate and momentum were kept at fixed values. The L-BFGS method (II) minimizes the same loss function. Both methods reached an accuracy of 100% on the training data, and training was stopped as soon as the error was zero.
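Since the post does not list every hyper-parameter, the following is only a minimal, hand-rolled sketch of the two setups on toy data; the batch size, epoch count, toy matrix, and variable names are all assumptions, and the L-BFGS variant uses a squared hinge because the plain hinge plus L1 penalty is not smooth. The numbers reported below come from the original experiment, not this sketch.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)
X = (rng.rand(200, 50) < 0.05).astype(float)  # toy stand-in for the keyword matrix
y = np.where(X[:, 0] > 0, 1.0, -1.0)          # toy +1/-1 labels

l1, lr, mom = 0.0005, 0.01, 0.9

def hinge_subgrad(W, b, Xb, yb):
    # Subgradient of the mean hinge loss plus L1 penalty.
    active = yb * (Xb @ W + b) < 1.0
    gW = -(yb[active, None] * Xb[active]).sum(axis=0) / len(yb) + l1 * np.sign(W)
    gb = -yb[active].sum() / len(yb)
    return gW, gb

# (I) mini-batch SGD with standard momentum
W_sgd, b_sgd = np.zeros(X.shape[1]), 0.0
vW, vb = np.zeros_like(W_sgd), 0.0
for epoch in range(100):
    for batch in np.array_split(rng.permutation(len(y)), 10):
        gW, gb = hinge_subgrad(W_sgd, b_sgd, X[batch], y[batch])
        vW, vb = mom * vW - lr * gW, mom * vb - lr * gb
        W_sgd, b_sgd = W_sgd + vW, b_sgd + vb

# (II) L-BFGS on the same objective; a squared hinge is used here
# because L-BFGS expects a smooth objective, unlike the plain hinge.
def loss(w):
    m = np.maximum(0.0, 1.0 - y * (X @ w[:-1] + w[-1]))
    return np.mean(m ** 2) + l1 * np.abs(w[:-1]).sum()

res = minimize(loss, np.zeros(X.shape[1] + 1), method="L-BFGS-B")
W_lbfgs, b_lbfgs = res.x[:-1], res.x[-1]
```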
(I) SGD: loss=1.64000, ||W||=3.56, bias=-0.60811
(II) L-BFGS: loss=0.04711, ||W||=3.75, bias=-0.58073
As we can see, the L2 norm of the final weight vector is similar, as is the bias. But of course we do not care about absolute norms, but rather about the correlation of the two solutions. For that reason, we converted both weight vectors W to unit norm and determined the cosine similarity: correlation = W_sgd.T * W_lbfgs = 0.977.
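This check is straightforward to compute; a minimal sketch, assuming W_sgd and W_lbfgs are the two learned weight vectors from the sketch above as numpy arrays:

```python
import numpy as np

# Normalize both weight vectors to unit norm and take the dot product.
w1 = W_sgd / np.linalg.norm(W_sgd)
w2 = W_lbfgs / np.linalg.norm(W_lbfgs)
print("correlation =", float(w1 @ w2))
```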
Since we do not have any empirical data for such correlations, we analyzed the magnitudes of the features in the weight vectors, more precisely the top-5 most important features:
(I) werewolf=0.6652, vampire=0.2394, creature=0.1886, forbidden-love=0.1392, teenagers=0.1372
(II) werewolf=0.6698, vampire=0.2119, monster=0.1531, creature=0.1511, teenagers=0.1279
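Such a ranking can be extracted by sorting the weight vector by magnitude; a small sketch, assuming `vocab` holds the keyword strings aligned with the entries of W:

```python
import numpy as np

# Return the k features with the largest absolute weight.
def top_features(W, vocab, k=5):
    idx = np.argsort(-np.abs(W))[:k]
    return [(vocab[i], round(float(W[i]), 4)) for i in idx]
```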
If we also consider the top-12 features of both models, which are pretty similar,
(I) werewolf, vampire, creature, forbidden-love, teenagers, monster, pregnancy, undead, curse, supernatural, mansion, bloodsucker
(II) werewolf, vampire, monster, creature, teenagers, curse, forbidden-love, supernatural, pregnancy, hunting, undead, beast
we can see some patterns: First, a lot of the movies in the dataset seem to combine the theme with love stories that may involve teenagers, which makes sense because this is a very popular pattern these days. Second, vampires and werewolves are very likely to co-occur in the same movie.
Those patterns were learned by both models, regardless of the optimization method, with only minor differences, which can be seen in the magnitudes of the individual weights in W. However, as the correlation of the parameter vectors confirmed, both solutions are pretty close together.
Bottom line: we should be careful with interpretations since the data at hand was limited, but nevertheless the results confirm that, with proper initialization and hyper-parameters, good solutions can be achieved with both 1st- and 2nd-order methods. Next, we will study the ability of the models to generalize to unseen data.