IJCAI 2019 Analysis
Keyword for which no papers were retrieved: retrofitting
word embedding
Getting in Shape: Word Embedding SubSpaces
Many tasks in natural language processing require the alignment of word embeddings.
Embedding alignment relies on the geometric properties of the manifold of word vectors.
This paper focuses on supervised linear alignment and studies the relationship between alignment performance and the shape of the target embedding.
We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment.
Furthermore, aligning with an isotropic noise can deliver satisfactory results.
We provide a theoretical framework and guarantees which aid in the understanding of empirical results.
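Supervised linear alignment of two embedding spaces is commonly solved in closed form with orthogonal Procrustes. The paper's exact setup is not given in the abstract, so the following numpy sketch is only a minimal illustration of the standard technique (matrix names and shapes are illustrative):

```python
import numpy as np

def procrustes_align(X, Y):
    """Find the orthogonal matrix W minimizing ||X @ W - Y||_F.

    X, Y: (n, d) arrays of paired word vectors (source, target).
    Returns the (d, d) orthogonal alignment matrix W.
    """
    # The SVD of the cross-covariance X^T Y gives the closed-form solution.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: Y is a rotated copy of X, so alignment should recover the rotation.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))  # random orthogonal matrix
Y = X @ Q
W = procrustes_align(X, Y)
```

The isotropy question the abstract raises enters here through `Y`: if the target embedding's singular values are very unevenly distributed (anisotropic), the cross-covariance is dominated by a few directions and the recovered `W` fits the rest of the space poorly.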
The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning
Recent advances in deep learning have fueled demand for neural models in real-world applications.
In practice, these applications often need to be deployed with limited resources while keeping high accuracy.
This paper touches the core of neural models in NLP, word embeddings, and presents an embedding distillation framework that remarkably reduces the dimension of word embeddings without compromising accuracy.
A new distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models.
In our approach, the teacher models play a role only during training, so the student model operates on its own during decoding, without support from the teachers; this makes it as fast and lightweight as any single model.
All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases.
Our analysis reveals an insightful transformation of word embeddings through distillation and suggests a future direction for ensemble approaches using neural models.
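The goal of embedding distillation is to shrink the dimension of word vectors while preserving their structure. The abstract does not spell out the training procedure, so the sketch below uses a truncated-SVD projection purely as a stand-in for the learned reduction; the paper's framework trains the student jointly with teacher models on downstream tasks:

```python
import numpy as np

def distill_embeddings(E, k):
    """Project (n, d) teacher embeddings E down to k dimensions via
    truncated SVD, keeping as much variance (pairwise structure) as
    possible. Only a crude stand-in for learned distillation."""
    E_centered = E - E.mean(axis=0)
    # Rows of Vt are the principal directions; keep the top k.
    _, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
    return E_centered @ Vt[:k].T  # (n, k) "student" embeddings

rng = np.random.default_rng(1)
teacher = rng.normal(size=(1000, 300))      # e.g. 300-dim teacher vectors
student = distill_embeddings(teacher, 50)   # 50-dim student vectors
print(student.shape)  # (1000, 50)
```

A learned student can beat such a fixed projection because it is free to discard directions that are irrelevant to the target task, which is consistent with the students outperforming their teachers above.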
word vector
A Latent Variable Model for Learning Distributional Relation Vectors
Recently a number of unsupervised approaches have been proposed for learning vectors that capture the relationship between two words.
Inspired by word embedding models, these approaches rely on co-occurrence statistics that are obtained from sentences in which the two target words appear.
However, the number of such sentences is often quite small, and most of the words that occur in them are not relevant for characterizing the considered relationship.
As a result, standard co-occurrence statistics typically lead to noisy relation vectors.
To address this issue, we propose a latent variable model that aims to explicitly determine what words from the given sentences best characterize the relationship between the two target words.
Relation vectors then correspond to the parameters of a simple unigram language model which is estimated from these words.
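The baseline the paper improves on can be sketched directly: collect the sentences containing both target words and estimate a smoothed unigram distribution over the remaining context words, which serves as the relation vector. This illustrates only the noisy co-occurrence baseline; the paper's contribution is the latent variable that selects which context words are actually relevant:

```python
from collections import Counter

def relation_vector(sentences, w1, w2, vocab, alpha=1.0):
    """Smoothed unigram distribution over context words, estimated
    from sentences that contain both target words w1 and w2.
    (Illustrative baseline, not the paper's latent variable model.)"""
    counts = Counter()
    for sent in sentences:
        toks = sent.lower().split()
        if w1 in toks and w2 in toks:
            counts.update(t for t in toks if t not in (w1, w2))
    # Add-alpha smoothing over the chosen vocabulary.
    total = sum(counts[v] for v in vocab) + alpha * len(vocab)
    return [(counts[v] + alpha) / total for v in vocab]

sents = ["paris is the capital of france",
         "the capital city of france is paris",
         "berlin has many museums"]
vocab = ["capital", "city", "museums", "of"]
vec = relation_vector(sents, "paris", "france", vocab)
```

Note how every context word in the matching sentences contributes equally here, including uninformative ones like "of"; that is exactly the noise the latent variable model is designed to filter out.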
word representation
Refining Word Representations by Manifold Learning
Pre-trained distributed word representations have been proven useful in various natural language processing (NLP) tasks.
However, the effect of words’ geometric structure on word representations has not been carefully studied yet.
Existing word representation methods underestimate the similarity of words that are close in Euclidean space, while overestimating that of words separated by much greater distances.
In this paper, we propose a word vector refinement model that corrects pre-trained word embeddings, using manifold learning to bring word similarity in Euclidean space closer to word semantics.
This approach is theoretically grounded in the metric recovery paradigm.
Our word representations have been evaluated on a variety of lexical-level intrinsic tasks (semantic relatedness, semantic similarity) and the experimental results show that the proposed model outperforms several popular word representations approaches.
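One simple manifold-inspired refinement is to smooth each vector toward the mean of its nearest neighbors on a k-NN graph, in the spirit of retrofitting. The abstract does not describe the paper's actual update rule, so the numpy sketch below (function name, `k`, `lam`, `iters` all illustrative) only conveys the general idea:

```python
import numpy as np

def refine_knn(E, k=3, lam=0.5, iters=10):
    """Smooth (n, d) embeddings over a k-nearest-neighbor graph:
    each vector is pulled toward the mean of its Euclidean neighbors.
    A simple manifold-inspired refinement, not the paper's exact model."""
    # Pairwise squared Euclidean distances.
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude self-neighbors
    nbrs = np.argsort(d2, axis=1)[:, :k]   # indices of the k nearest neighbors
    R = E.copy()
    for _ in range(iters):
        # Balance fidelity to the original vector against neighbor agreement.
        R = (1 - lam) * E + lam * R[nbrs].mean(axis=1)
    return R

rng = np.random.default_rng(2)
E = rng.normal(size=(20, 8))
R = refine_knn(E)
```

Keeping the `(1 - lam) * E` term anchors each refined vector to its pre-trained position, so the update shrinks distances along the local manifold without collapsing the space.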