Self-Taught Learning to Deep Networks
In this section, we describe how you can fine-tune and further improve the learned features using labeled data. When you have a large amount of labeled training data, this can significantly improve your classifier's performance.
In self-taught learning, we first trained a sparse autoencoder on the unlabeled data. Then, given a new example $x$, we used the hidden layer to extract features $a$. This is illustrated in the following diagram:

[Figure: a sparse autoencoder, with the hidden layer computing the features $a$ from the input $x$]
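As a concrete illustration, here is a minimal NumPy sketch of this feature-extraction step. The dimensions, the random stand-in parameters, and the name extract_features are all hypothetical; in practice $W^{(1)}$ and $b^{(1)}$ would come from the sparse autoencoder training.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 64-dimensional inputs, 25 hidden units.
n_input, n_hidden = 64, 25

# Stand-ins for the first-layer parameters learned by the sparse autoencoder.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.01, size=(n_hidden, n_input))
b1 = np.zeros(n_hidden)

def extract_features(x):
    """Hidden-layer activations a = sigmoid(W1 x + b1) for an example x."""
    return sigmoid(W1 @ x + b1)

x_new = rng.normal(size=n_input)   # a new example x
a_new = extract_features(x_new)    # its learned feature representation a
```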
We are interested in solving a classification task, where our goal is to predict labels $y$. We have a labeled training set $\{ (x_l^{(1)}, y^{(1)}), (x_l^{(2)}, y^{(2)}), \ldots, (x_l^{(m_l)}, y^{(m_l)}) \}$ of $m_l$ labeled examples. We showed previously that we can replace the original features $x^{(i)}$ with features $a^{(i)}$ computed by the sparse autoencoder (the "replacement" representation). This gives us a training set $\{ (a^{(1)}, y^{(1)}), \ldots, (a^{(m_l)}, y^{(m_l)}) \}$. Finally, we train a logistic classifier to map from the features $a^{(i)}$ to the classification label $y^{(i)}$.
To illustrate this step, we can draw our logistic regression unit (shown in orange) as follows:

[Figure: a single logistic regression unit mapping the features $a$ to the prediction $p(y = 1 \mid x)$]
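Continuing the sketch above, here is what the replacement step followed by training the logistic classifier could look like, using plain batch gradient descent. The labeled set here is random stand-in data, and the learning rate and iteration count are arbitrary choices for illustration.

```python
# Stand-in labeled training set (random data, for illustration only).
m_l = 200
X_labeled = rng.normal(size=(m_l, n_input))   # original features x^(i)
y = rng.integers(0, 2, size=m_l)              # binary labels y^(i)

# "Replacement" representation: features a^(i) from the autoencoder.
A = sigmoid(X_labeled @ W1.T + b1)            # shape (m_l, n_hidden)

# Train a logistic classifier from a^(i) to y^(i) by batch gradient descent.
W2, b2, lr = np.zeros(n_hidden), 0.0, 0.1
for _ in range(500):
    p = sigmoid(A @ W2 + b2)                  # p(y = 1 | x) per example
    delta = p - y                             # gradient of the cross-entropy loss
    W2 -= lr * (A.T @ delta) / m_l
    b2 -= lr * delta.mean()
```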
Now, consider the overall classifier (i.e., the input-output mapping) that we have learned using this method. In particular, let us examine the function that our classifier uses to map from a new test example $x$ to a new prediction $p(y = 1 \mid x)$. We can draw a representation of this function by putting together the two pictures from above. In particular, the final classifier looks like this:

[Figure: the combined network, with the autoencoder's hidden layer feeding into the logistic regression unit]
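In code, this combined input-output mapping is simply the composition of the two stages from the sketches above (same stand-in parameters):

```python
def predict_proba(x):
    """Overall classifier: input x -> features a -> p(y = 1 | x)."""
    a = sigmoid(W1 @ x + b1)     # first stage: autoencoder hidden layer
    return sigmoid(W2 @ a + b2)  # second stage: logistic regression unit

print(predict_proba(x_new))
```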
The parameters of this model were trained in two stages: the first layer of weights $W^{(1)}$, mapping from the input $x$ to the hidden unit activations $a$, was trained as part of the sparse autoencoder training process. The second layer of weights $W^{(2)}$, mapping from the activations $a$ to the output $y$, was trained using logistic regression (or softmax regression).
But the form of our overall/final classifier is clearly just a big neural network. So, having trained an initial set of parameters for our model (training the first layer using an autoencoder, and the second layer via logistic/softmax regression), we can further modify all the parameters in our model to try to further reduce the training error. In particular, we can fine-tune the parameters, meaning that we perform gradient descent (or use L-BFGS) from the current setting of the parameters to try to reduce the training error on our labeled training set $\{ (x_l^{(1)}, y^{(1)}), \ldots, (x_l^{(m_l)}, y^{(m_l)}) \}$.
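Continuing the running sketch, one gradient-descent step of this fine-tuning could look as follows: backpropagate the cross-entropy training error through both layers, so that the pre-trained first-layer parameters are updated too. For simplicity this sketch omits the sparsity penalty used during pre-training and uses plain gradient descent rather than L-BFGS.

```python
lr = 0.01
for _ in range(200):
    A = sigmoid(X_labeled @ W1.T + b1)          # forward: hidden activations a
    p = sigmoid(A @ W2 + b2)                    # forward: p(y = 1 | x)

    delta2 = p - y                              # output-layer error term
    delta1 = np.outer(delta2, W2) * A * (1 - A) # backprop through hidden sigmoid

    # Gradient steps on ALL parameters, including the pre-trained first layer.
    W2 -= lr * (A.T @ delta2) / m_l
    b2 -= lr * delta2.mean()
    W1 -= lr * (delta1.T @ X_labeled) / m_l
    b1 -= lr * delta1.mean(axis=0)
```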
When fine-tuning is used, sometimes the original unsupervised feature learning steps (i.e., training the autoencoder and the logistic classifier) are called pre-training. The effect of fine-tuning is that the labeled data can be used to modify the weights $W^{(1)}$ as well, so that adjustments can be made to the features $a$ extracted by the layer of hidden units.
If we are using fine-tuning, we will usually do so with a network built using the replacement representation. (If you are not using fine-tuning, however, then the concatenation representation can sometimes give much better performance.)
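For reference, the two representations differ only in what we feed to the classifier; in the notation of the sketches above:

```python
# Replacement representation: the learned features alone.
X_repl = sigmoid(X_labeled @ W1.T + b1)   # shape (m_l, n_hidden)

# Concatenation representation: raw input joined with the learned features.
X_cat = np.hstack([X_labeled, X_repl])    # shape (m_l, n_input + n_hidden)
```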
When should we use fine-tuning? It is typically used only if you have a large labeled training set; in this setting, fine-tuning can significantly improve the performance of your classifier. However, if you have a large unlabeled dataset (for unsupervised feature learning/pre-training) and only a relatively small labeled training set, then fine-tuning is significantly less likely to help.