IJCAI 2019 Analysis
No papers retrieved for keyword: retrofitting
word embedding
Getting in Shape: Word Embedding SubSpaces
Many tasks in natural language processing require the alignment of word embeddings. Embedding alignment relies on the geometric properties of the manifold of word vectors. This paper focuses on supervised linear alignment and studies the relationship between alignment performance and the shape of the target embedding. We assess the performance of aligned word vectors on semantic similarity tasks and find that the isotropy of the target embedding is critical to the alignment. Furthermore, aligning with isotropic noise can deliver satisfactory results. We provide a theoretical framework and guarantees which aid in the understanding of the empirical results.
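The supervised linear alignment setting this abstract describes can be sketched as the classic orthogonal Procrustes problem: given paired source and target vectors, find the rotation that best maps one embedding space onto the other. This is the standard closed-form formulation, not necessarily the paper's exact method; the data here are synthetic.

```python
import numpy as np

def procrustes_align(X, Y):
    """Orthogonal matrix W minimizing ||XW - Y||_F for paired rows of X and Y."""
    # SVD of the cross-covariance gives the closed-form solution.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))                        # "source" embeddings
R_true = np.linalg.qr(rng.normal(size=(10, 10)))[0]   # hidden ground-truth rotation
Y = X @ R_true                                        # "target" embeddings
W = procrustes_align(X, Y)
print(np.allclose(X @ W, Y, atol=1e-8))               # True: rotation recovered
```

With a full-rank source matrix and a purely rotational relationship, the SVD solution recovers the hidden rotation exactly; the isotropy question the paper studies concerns how well this works when the target geometry is less well-behaved.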
The Pupil Has Become the Master: Teacher-Student Model-Based Word Embedding Distillation with Ensemble Learning
Recent advances in deep learning have increased the demand for neural models in real applications. In practice, these applications often need to be deployed with limited resources while maintaining high accuracy. This paper touches the core of neural models in NLP, word embeddings, and presents an embedding distillation framework that remarkably reduces the dimension of word embeddings without compromising accuracy. A new distillation ensemble approach is also proposed that trains a highly efficient student model using multiple teacher models. In our approach, the teacher models play a role only during training, so the student model operates on its own, without support from the teacher models, during decoding; this makes it run as fast and light as any single model. All models are evaluated on seven document classification datasets and show a significant advantage over the teacher models in most cases. Our analysis reveals an insightful transformation of word embeddings through distillation and suggests a future direction for ensemble approaches using neural models.
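The core operation the abstract describes, compressing a high-dimensional teacher embedding matrix into a much smaller student one, can be illustrated with a crude linear stand-in. The paper trains a neural encoder under teacher supervision; truncated SVD is used below only as a minimal rank-reduction sketch with the same input/output shape, and all sizes are arbitrary.

```python
import numpy as np

def distill_embeddings(E, d_student):
    """Compress a (vocab x d_teacher) embedding matrix to d_student dimensions.

    Truncated SVD keeps the best rank-d_student linear approximation; a real
    distillation framework would learn this compression with a neural encoder.
    """
    U, S, _ = np.linalg.svd(E, full_matrices=False)
    return U[:, :d_student] * S[:d_student]

rng = np.random.default_rng(1)
E = rng.normal(size=(1000, 300))          # stand-in teacher embeddings
E_small = distill_embeddings(E, 50)       # student embeddings, 6x smaller
print(E_small.shape)                      # (1000, 50)
```

Because only the student matrix is kept, downstream decoding touches nothing teacher-sized, which mirrors the paper's point that the student runs as fast and light as any single model.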
word vector
A Latent Variable Model for Learning Distributional Relation Vectors
Recently a number of unsupervised approaches have been proposed for learning vectors that capture the relationship between two words. Inspired by word embedding models, these approaches rely on co-occurrence statistics obtained from sentences in which the two target words appear. However, the number of such sentences is often quite small, and most of the words that occur in them are not relevant for characterizing the considered relationship. As a result, standard co-occurrence statistics typically lead to noisy relation vectors. To address this issue, we propose a latent variable model that aims to explicitly determine which words from the given sentences best characterize the relationship between the two target words. Relation vectors then correspond to the parameters of a simple unigram language model estimated from these words.
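The co-occurrence baseline this abstract starts from can be sketched directly: pool every sentence containing both target words and estimate a unigram distribution over the remaining words. This reproduces the noisy "standard co-occurrence statistics" the paper improves on, not the latent variable model itself; the toy sentences are invented for illustration.

```python
from collections import Counter

def relation_unigram(sentences, w1, w2):
    """Unigram distribution over context words from sentences containing both targets."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        if w1 in tokens and w2 in tokens:
            counts.update(t for t in tokens if t not in (w1, w2))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

sents = [
    "paris is the capital of france",
    "paris and france share a long history",
    "berlin is the capital of germany",        # ignored: lacks both targets
]
vec = relation_unigram(sents, "paris", "france")
print(round(vec["capital"], 3))                # 0.111
```

Note how function words like "the" and "of" receive as much mass as "capital"; the latent variable model's job is precisely to decide which of these pooled words actually characterize the relation.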
word representation
Refining Word Representations by Manifold Learning
Pre-trained distributed word representations have proven useful in various natural language processing (NLP) tasks. However, the effect of words' geometric structure on word representations has not yet been carefully studied. Existing word representation methods underestimate the similarity of words that are close in Euclidean space, while overestimating that of words at much greater distances. In this paper, we propose a word vector refinement model that corrects pre-trained word embeddings, using manifold learning to bring the similarity of words in Euclidean space closer to word semantics. The approach is theoretically grounded in the metric recovery paradigm. Our word representations are evaluated on a variety of lexical-level intrinsic tasks (semantic relatedness, semantic similarity), and the experimental results show that the proposed model outperforms several popular word representation approaches.
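The refinement idea, adjusting vectors so that locally close words become more similar, can be illustrated with a simple neighborhood-smoothing step: pull each vector toward the mean of its k nearest Euclidean neighbors. The paper's model is grounded in metric recovery and is certainly more principled; here `k` and `alpha` are arbitrary illustrative choices.

```python
import numpy as np

def refine(E, k=3, alpha=0.3):
    """One smoothing pass: move each row of E toward its k-nearest-neighbor mean."""
    # Pairwise squared Euclidean distances between all vectors.
    d2 = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # a word is not its own neighbor
    nn = np.argsort(d2, axis=1)[:, :k]     # indices of the k nearest neighbors
    return (1 - alpha) * E + alpha * E[nn].mean(axis=1)

rng = np.random.default_rng(2)
E = rng.normal(size=(20, 5))               # stand-in pre-trained embeddings
E_ref = refine(E)
print(E_ref.shape)                         # (20, 5)
```

Each pass contracts local neighborhoods, so words that the Euclidean metric already places close together end up with higher cosine similarity, which is the direction of correction the abstract argues for.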