Training Neural Networks: Q&A with Ian Goodfellow, Google

Neural networks require considerable time and computational firepower to train. Previously, researchers believed that neural networks were costly to train because gradient descent slows down near local minima or saddle points. At the RE.WORK Deep Learning Summit in San Francisco, Ian Goodfellow, Research Scientist at Google, will challenge that view and look deeper to find the true bottlenecks in neural network training.
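
To see why saddle points were long blamed, consider a minimal sketch (our own illustration, not from the talk): plain gradient descent on f(x, y) = x² − y², which has a saddle at the origin. The gradient shrinks as the iterate nears the saddle, so steps become tiny and training appears to stall.

```python
import numpy as np

# f(x, y) = x^2 - y^2 has a saddle point at the origin:
# a minimum along x, a maximum along y.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

p = np.array([1.0, 1e-6])   # start almost exactly on the attracting x-axis
lr = 0.1
for step in range(60):
    g = grad(p)
    p = p - lr * g
    if step % 15 == 0:
        print(f"step {step:2d}  |grad| = {np.linalg.norm(g):.6f}")
# |grad| collapses as x decays toward 0, and because y starts near 0 the
# iterate escapes only very slowly -- it lingers near the saddle.
```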

Before joining the Google team, Ian earned a PhD in machine learning from Université de Montréal, supervised by Yoshua Bengio and Aaron Courville. During his studies, which were funded by a Google PhD Fellowship in Deep Learning, he wrote Pylearn2, an open-source deep learning research library, and introduced a variety of new deep learning algorithms. Before that, he obtained a BSc and MSc in Computer Science from Stanford University, where he was one of the earliest members of Andrew Ng's deep learning research group.

We caught up with Ian ahead of the summit in January 2016 to hear more about his current work and thoughts on the future of deep learning.

What are you currently working on in deep networks?
I am interested in developing generic methods that make any neural network train faster and generalize better. To improve generalization, I study the way neural networks respond to “adversarial examples” that are intentionally constructed to confuse the network. To improve optimization, I study the structure of neural network optimization problems and determine which factors cause learning to be slow.
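
The best-known such construction is the fast gradient sign method from Goodfellow et al., "Explaining and Harnessing Adversarial Examples" (2014): perturb the input a small step in the direction that most increases the loss. Here is a minimal NumPy sketch of the idea on a toy logistic classifier (our own illustrative model, not a real network):

```python
import numpy as np

# Fast-gradient-sign idea on a toy logistic classifier: nudge the input
# a small step in the direction that most increases the loss.
rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0      # stand-in "trained" weights
x, y = rng.normal(size=8), 1.0      # one input, true label 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p = sigmoid(w @ x + b)              # model's probability of class 1
grad_x = (p - y) * w                # d(cross-entropy)/dx for this model
eps = 0.25
x_adv = x + eps * np.sign(grad_x)   # bounded L-infinity perturbation

print("clean prob:      ", p)
print("adversarial prob:", sigmoid(w @ x_adv + b))
```

A tiny, bounded perturbation of every input dimension is enough to move the prediction substantially, which is exactly what makes adversarial examples a useful probe of generalization.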

What are the key factors that have enabled recent advancements in deep learning? 
The basic machine learning algorithms have been in place since the 1980s, but until very recently, we were applying these algorithms to neural networks with fewer neurons than a leech. Unsurprisingly, such small networks performed poorly. Fast computers with larger memory capacity and better software infrastructure have allowed us to train neural networks that are large enough to perform well. Larger datasets are also very important. Some changes in machine learning algorithms, like designing neural network layers to be very linear, have also led to noticeable improvements.
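
"Very linear" refers to activations such as the rectified linear unit (ReLU), which is piecewise linear: wherever a unit is active, its gradient is exactly 1, so the backpropagated signal is not squashed the way saturating sigmoids squash it. A small illustrative comparison (our own sketch, not from the interview):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-4.0, -1.0, 0.5, 4.0])

# Sigmoid gradients vanish for large |z|; ReLU's gradient is exactly 1
# wherever the unit is active, so deep stacks of ReLUs preserve signal.
sig_grad = sigmoid(z) * (1.0 - sigmoid(z))
relu_grad = (z > 0).astype(float)
print("sigmoid grads:", sig_grad)   # ~0.018, 0.197, 0.235, 0.018
print("relu grads:   ", relu_grad)  # 0, 0, 1, 1
```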

What are the main types of problems now being addressed in the deep learning space?
There is a gold rush to be the first to use existing deep learning algorithms on new application areas. Every day, there are new articles about deep learning for counting calories from photos, deep learning for separating two voices in a recording, etc.

What are the practical applications of your work and what sectors are most likely to be affected?
My work is generic enough that it impacts everything we use neural networks for. Anything you want to do with a neural net, I aim to make faster and more accurate.

What developments can we expect to see in deep learning in the next 5 years?
I expect that within five years we will have neural networks that can summarize what happens in a video clip and generate short videos. Neural networks are already the standard solution to vision tasks. I expect they will become the standard solution to NLP and robotics tasks as well. I also predict that neural networks will become an important tool in other scientific disciplines. For example, neural networks could be trained to model the behavior of genes, drugs, and proteins and then used to design new medicines.

What advancements excite you most in the field?
Recent extensions of variational auto-encoders and generative adversarial networks have greatly improved the ability of neural networks to generate realistic images. Data generation is a problem that has been studied for decades, and we still do not seem to have the right algorithm for it. The last year or so has shown that we are getting much closer, though.
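
For reference, the generative adversarial network framework that Goodfellow introduced (Goodfellow et al., 2014) pits a generator G against a discriminator D in the minimax game:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] +
\mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to tell real samples x from generated samples G(z), while G is trained to fool D; at the game's equilibrium, the generator's distribution matches the data distribution.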

Ian Goodfellow will be speaking at the Deep Learning Summit in San Francisco on 28-29 January 2016, alongside speakers from Baidu, Twitter, Clarifai, MIT and more.
