[Repost] Sneak Preview: An Analysis of Hot Topics in ICLR 2019 Papers
Sneak Preview: An Analysis of Hot Topics in ICLR 2019 Papers — an article by lqfarmer on Zhihu
Link: https://zhuanlan.zhihu.com/p/53011934
Source: Zhihu
Copyright belongs to the author. For commercial reuse, please contact the author for authorization; for non-commercial reuse, please credit the source.
ICLR 2019 (International Conference on Learning Representations 2019) will be held in May 2019 in New Orleans, Louisiana, USA, making it one of the first major international AI conferences of the year. The list of accepted papers has now been released. This article organizes the accepted papers, uses a simple statistical count to extract the 27 most heavily represented topics, and samples a few recent papers under each topic as weekend reading for anyone who wants to catch up (a minimal sketch of such a keyword count follows the link below).
Full list of ICLR 2019 accepted papers:
https://openreview.net/group?id=ICLR.cc/2019/Conference#accepted-oral-papers
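As a rough illustration of the statistical tally mentioned above, the sketch below counts how often candidate topic phrases appear in accepted-paper titles and ranks them. It is only an assumed, minimal example of how such a count might be done, not the author's actual pipeline; the sample titles and the topic_phrases list are placeholders.

from collections import Counter

# Placeholder input: in practice these would be all accepted-paper titles
# scraped from the OpenReview page linked above.
titles = [
    "Recurrent Experience Replay in Distributed Reinforcement Learning",
    "Improving Generalization and Stability of Generative Adversarial Networks",
    "Towards Understanding Regularization in Batch Normalization",
]

# Candidate topic phrases to tally; these could instead be derived
# automatically from frequent n-grams across all titles.
topic_phrases = [
    "reinforcement learning",
    "generative adversarial networks",
    "batch normalization",
    "graph neural network",
]

# Count how many titles mention each phrase (case-insensitive substring match).
counts = Counter()
for title in titles:
    lowered = title.lower()
    for phrase in topic_phrases:
        if phrase in lowered:
            counts[phrase] += 1

# Rank topics by the number of accepted titles that mention them.
for phrase, n in counts.most_common():
    print(f"{n:3d}  {phrase}")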
Hot topics
Deep reinforcement learning
Generative adversarial networks
Deep learning
Deep neural networks
Domain adaptation
Recurrent neural network
Neural architecture search
Convolutional neural networks
Deep networks
Graph neural network
Bayesian neural networks
Variational autoencoders
Gradient descent optimization
Unsupervised learning
Adversarial examples/Adversarial attacks/Adversarial training
Imitation learning
Generalization bounds
Monte Carlo method
Representation learning
Neural program
Experience replay
Batch normalization
Word embeddings
Neural machine translation
Transfer learning
Program synthesis
Image-to-image translation
Hot paper recommendations
Reinforcement learning
Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees
M^3RL: Mind-aware Multi-agent Management Reinforcement Learning
Information-Directed Exploration for Deep Reinforcement Learning
Near-Optimal Representation Learning for Hierarchical Reinforcement Learning
Adversarial Imitation via Variational Inverse Reinforcement Learning
Deep reinforcement learning with relational inductive biases
Variance Reduction for Reinforcement Learning in Input-Driven Environments
Recall Traces: Backtracking Models for Efficient Reinforcement Learning
Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization
Contingency-Aware Exploration in Reinforcement Learning
Learning to Schedule Communication in Multi-agent Reinforcement Learning
Modeling the Long Term Future in Model-Based Reinforcement Learning
Visceral Machines: Reinforcement Learning with Intrinsic Physiological Rewards
From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following
Recurrent Experience Replay in Distributed Reinforcement Learning
Probabilistic Recursive Reasoning for Multi-Agent Reinforcement Learning
NADPEx: An on-policy temporally consistent exploration method for deep reinforcement learning
Hierarchical Reinforcement Learning with Hindsight
Generative adversarial networks
A generative adversarial network for style modeling in a text-to-speech system
KnockoffGAN: Generating Knockoffs for Feature Selection using Generative Adversarial Networks
Robust Estimation via Generative Adversarial Networks
Improving Generalization and Stability of Generative Adversarial Networks
On Self Modulation for Generative Adversarial Networks
Scalable Unbalanced Optimal Transport using Generative Adversarial Networks
Visualizing and Understanding Generative Adversarial Networks
Learning from Incomplete Data with Generative Adversarial Networks
A Direct Approach to Robust Deep Learning Using Adversarial Networks
A Variational Inequality Perspective on Generative Adversarial Networks
On Computation and Generalization of Generative Adversarial Networks under Spectrum Control
RelGAN: Relational Generative Adversarial Networks for Text Generation
Diversity-Sensitive Conditional Generative Adversarial Networks
Scalable Reversible Generative Models with Free-form Continuous Dynamics
Optimal Transport Maps For Distribution Preserving Operations on Latent Spaces of Generative Models
Do Deep Generative Models Know What They Don't Know?
Learning Localized Generative Models for 3D Point Clouds via Graph Convolution
Distribution-Interpolation Trade off in Generative Models
Kernel Change-point Detection with Auxiliary Deep Generative Models
Multi-Domain Adversarial Learning
SPIGAN: Privileged Adversarial Learning from Simulation
Deep learning
Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning
SGD Converges to Global Minimum in Deep Learning via Star-convex Path
Dynamic Sparse Graph for Efficient Deep Learning
Quasi-hyperbolic momentum and Adam for deep learning
DeepOBS: A Deep Learning Optimizer Benchmark Suite
Deep Learning 3D Shapes Using Alt-az Anisotropic 2-Sphere Convolution
DELTA: Deep Learning Transfer using Feature Map with Attention for Convolutional Networks
Deep learning generalizes because the parameter-function map is biased towards simple functions
Deep neural networks
An Empirical Study of Example Forgetting during Deep Neural Network Learning
Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking
Neural Persistence: A Complexity Measure for Deep Neural Networks Using Algebraic Topology
Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks
On the loss landscape of a class of deep neural networks with no bad local valleys
Double Viterbi: Weight Encoding for High Compression Ratio and Fast On-Chip Reconstruction for Deep Neural Network
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images
Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers
Adaptive Estimators Show Information Compression in Deep Neural Networks
Domain adaptation
Augmented Cyclic Adversarial Learning for Low Resource Domain Adaptation
Unsupervised Domain Adaptation for Distance Metric Learning
Adversarial Domain Adaptation for Stable Brain-Machine Interfaces
Learning Factorized Representations for Open-Set Domain Adaptation
Improving the Generalization of Adversarial Training with Domain Adaptation
Regularized Learning for Domain Adaptation under Label Shifts
Recurrent neural network
Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
A Max-Affine Spline Perspective of Recurrent Neural Networks
Quaternion Recurrent Neural Networks
Variational Smoothing in Recurrent Neural Network Language Models
Generalized Tensor Models for Recurrent Neural Networks
AntisymmetricRNN: A Dynamical System View on Recurrent Neural Networks
Neural architecture search
Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
Learnable Embedding Space for Efficient Neural Architecture Compression
Graph HyperNetworks for Neural Architecture Search
SNAS: stochastic neural architecture search
DARTS: Differentiable Architecture Search
Convolutional neural networks
Deep Bayesian Convolutional Networks with Many Channels are Gaussian Processes
LanczosNet: Multi-Scale Deep Graph Convolutional Networks
Deep Convolutional Networks as shallow Gaussian Processes
STCN: Stochastic Temporal Convolutional Networks
Convolutional Neural Networks on Non-uniform Geometrical Signals Using Euclidean Spectral Transformation
A rotation-equivariant convolutional neural network model of primary visual cortex
Human-level Protein Localization with Convolutional Neural Networks
Deep networks
Critical Learning Periods in Deep Networks
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
RotDCF: Decomposition of Convolutional Filters for Rotation-Equivariant Deep Networks
Predicting the Generalization Gap in Deep Networks with Margin Distributions
Deterministic PAC-Bayesian generalization bounds for deep networks via generalizing noise-resilience
Graph neural network
How Powerful are Graph Neural Networks?
Capsule Graph Neural Network
Adversarial Attacks on Graph Neural Networks via Meta Learning
Supervised Community Detection with Line Graph Neural Networks
Bayesian neural networks
Deterministic Variational Inference for Robust Bayesian Neural Networks
Function Space Particle Optimization for Bayesian Neural Networks
Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
Functional Variational Bayesian Neural Networks
Variational autoencoders
MAE: Mutual Posterior-Divergence Regularization for Variational AutoEncoders
Learning Latent Superstructures in Variational Autoencoders for Deep Multidimensional Clustering
Variational Autoencoders with Jointly Optimized Latent Dependency Structure
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders
Gradient descent optimization
Gradient descent aligns the layers of deep linear networks
Gradient Descent Provably Optimizes Over-parameterized Neural Networks
A Convergence Analysis of Gradient Descent for Deep Linear Neural Networks
Fluctuation-dissipation relations for stochastic gradient descent
Unsupervised learning
Learning Unsupervised Learning Rules
Unsupervised Learning of the Set of Local Maxima
Unsupervised Learning via Meta-Learning
Adversarial examples/Adversarial attacks/Adversarial training
Prior Convictions: Black-box Adversarial Attacks with Bandits and Priors
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
The Limitations of Adversarial Training and the Blind-Spot Attack
Generalizable Adversarial Training via Spectral Normalization
Cost-Sensitive Robustness against Adversarial Examples
Characterizing Audio Adversarial Examples Using Temporal Dependency
Are adversarial examples inevitable?
Imitation learning
Sample Efficient Imitation Learning for Continuous Control
Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning
Generative predecessor models for sample-efficient imitation learning
Generalization bounds
Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach
Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds
Monte Carlo method
Probabilistic Planning with Sequential Monte Carlo methods
Bayesian Modelling and Monte Carlo Inference for GAN
Doubly Reparameterized Gradient Estimators for Monte Carlo Objectives
Representation learning
Measuring Compositionality in Representation Learning
SOM-VAE: Interpretable Discrete Representation Learning on Time Series
The Laplacian in RL: Learning Representations with Efficient Approximations
Learning Actionable Representations with Goal Conditioned Policies
Learning Programmatically Structured Representations with Perceptor Gradients
Neural program
Neural Program Repair by Jointly Learning to Localize and Repair
Experience replay
DHER: Hindsight Experience Replay for Dynamic Goals
Competitive experience replay
Batch normalization
Towards Understanding Regularization in Batch Normalization
A Mean Field Theory of Batch Normalization
Theoretical Analysis of Auto Rate-Tuning by Batch Normalization
Word embeddings
Understanding Composition of Word Embeddings via Tensor Decomposition
Unsupervised Hyper-alignment for Multilingual Word Embeddings
Poincaré GloVe: Hyperbolic Word Embeddings
Neural machine translation
Identifying and Controlling Important Neurons in Neural Machine Translation
Multilingual Neural Machine Translation with Knowledge Distillation
Multilingual Neural Machine Translation With Soft Decoupled Encoding
Transfer learning
K For The Price Of 1: Parameter Efficient Multi-task And Transfer Learning
Transfer Learning for Sequences via Learning to Collocate
An analytic theory of generalization dynamics and transfer learning in deep linear networks
Program synthesis
Execution-Guided Neural Program Synthesis
Learning a Meta-Solver for Syntax-Guided Program Synthesis
Synthetic Datasets for Neural Program Synthesis
Image-to-image translation
Harmonic Unpaired Image-to-image Translation
Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency
Instance-aware Image-to-Image Translation
Previous featured content
ETH Zurich workshop recommendation | Data science and data analysis methods
Geoffrey Hinton: the numbers represent the knowledge a model extracts from data, and there will be no AI winter
Machine learning luminary Michael I. Jordan: the prospects and challenges of machine learning
New in October: the deep reinforcement learning bible, Reinforcement Learning (2nd edition)