(Paper numbering and abstracts: see the posts [2017 ACL] Dialogue Systems and [2018 ACL Long] Dialogue Systems. The number at the end of each bracketed paper entry is its Google Scholar citation count as of 2019-01-21.)

1. Domain Adaptation:

challenges:

  (a) data shifts (synthetic -> live user data; stale -> current) cause a distribution mismatch between training and evaluation. -> 2017.1

  (b) a global model must be re-estimated from scratch each time a new domain, with potentially new intents and slots, is added. -> 2017.4

 papers:

 2017.1 adversarial training

     [Adversarial Adaptation of Synthetic or Stale Data. Young-Bum Kim. 14]

 2017.4 model(k+1) = weighted combination of [model(1), ..., model(k)] - see the sketch below

    [Domain Attention with an Ensemble of Experts. Young-Bum Kim. 17]
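
     A minimal sketch of the weighted-combination idea (names and shapes are illustrative, not the paper's exact domain-attention architecture): the model for a new domain attends over feedback vectors produced by the existing domain experts.

```python
# Hypothetical sketch of combining k existing domain experts for a new domain.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def combine_experts(query, expert_outputs):
    """query: (d,) hidden state for the current utterance in the new domain.
    expert_outputs: list of k (d,) feedback vectors from the existing experts."""
    experts = np.stack(expert_outputs)      # (k, d)
    scores = experts @ query                # dot-product relevance of each expert
    weights = softmax(scores)               # attention over domains
    return weights @ experts                # weighted combination fed to the new model

rng = np.random.default_rng(0)
q = rng.normal(size=8)
expert_vecs = [rng.normal(size=8) for _ in range(3)]
print(combine_experts(q, expert_vecs).shape)   # (8,)
```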

2. NLG: 

 challenges:

  (a) integrating a language model with affect information. -> 2017.2

  (b) referring expressions can be misunderstood by the listener. -> 2017.5

  (c) neural encoder-decoder models in the open domain generate dull and generic responses. -> 2017.8

  (d) multi-turn: models lose relationships among utterances or important contextual information. -> 2017.11

  (e) automatic metrics for evaluating dialogue response quality in unstructured domains are biased and correlate very poorly with human judgements. -> 2017.12

  (f) deep latent variable models used in the open domain are highly randomized, leading to uncontrollable generated responses. -> 2017.14


  (g) models that do not employ knowledge to guide generation tend to produce short, general, and meaningless responses. -> 2018.L1

  (h) the encoder-decoder dialog model is limited because it cannot output interpretable actions as traditional systems do, which hinders humans from understanding its generation process. -> 2018.L6

  (i) translating natural language questions into structured queries: further improvement is hard. -> 2018.L8

 papers:

 2017.2  language model + affect info

    [Affect-LM: A Neural Language Model for Customizable Affective Text Generation. 27]

 2017.5  referring expression misunderstanding correction - algorithm: contrastive focus

    [Generating Contrastive Referring Expressions. 0]

 2017.8  open-domain - Framework - conditional variational autoencoders - see the objective sketch below

     pre: diversity modeled only at the word level in the decoder

     cur: discourse-level diversity captured at the encoder side with a latent variable

    [Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders. CMU. 69]    
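
     A numpy sketch of the CVAE objective behind this entry (toy linear "networks"; the actual model uses RNN encoders/decoders conditioned on dialogue context and dialog acts):

```python
# Minimal CVAE negative-ELBO computation with toy linear prior/recognition/decoder nets.
import numpy as np

rng = np.random.default_rng(0)
c_dim, r_dim, z_dim = 8, 8, 4
W_prior = rng.normal(scale=0.1, size=(c_dim, 2 * z_dim))          # p(z | context)
W_recog = rng.normal(scale=0.1, size=(c_dim + r_dim, 2 * z_dim))  # q(z | context, response)
W_dec = rng.normal(scale=0.1, size=(c_dim + z_dim, r_dim))        # p(response | context, z)

def neg_elbo(ctx, resp):
    p_mu, p_logvar = np.split(ctx @ W_prior, 2)
    q_mu, q_logvar = np.split(np.concatenate([ctx, resp]) @ W_recog, 2)
    z = q_mu + rng.normal(size=z_dim) * np.exp(0.5 * q_logvar)     # reparameterized sample
    recon = np.concatenate([ctx, z]) @ W_dec
    recon_loss = np.mean((recon - resp) ** 2)                      # stand-in for decoder NLL
    kl = 0.5 * np.sum(p_logvar - q_logvar
                      + (np.exp(q_logvar) + (q_mu - p_mu) ** 2) / np.exp(p_logvar) - 1)
    return recon_loss + kl                                         # minimizing this maximizes the ELBO

print(neg_elbo(rng.normal(size=c_dim), rng.normal(size=r_dim)))
```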

 2017.11   multi-turn response selection - sequential matching network (SMN) - see the sketch below

    pre: concatenates the utterances in the context

      matches the response with a highly abstract context vector

      => loses relationships among utterances and important contextual information

    current: matches the response with each utterance on multiple levels of granularity

        distills the important matching information into a vector via convolution + pooling

        accumulates the vectors with an RNN (models relationships among utterances)

        final matching score computed from the RNN hidden states

    [Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots. Beihang Univ. Nankai Univ. Microsoft. 48]
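
     A simplified numpy sketch of the SMN pipeline (a toy similarity + pooling + vanilla RNN stand-in for the paper's CNN + GRU; all dimensions and initializations are illustrative):

```python
# Toy SMN: match the response against each context turn, then accumulate with an RNN.
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 8                                  # word-embedding dim, RNN hidden dim
W_xh = rng.normal(scale=0.1, size=(d, h))
W_hh = rng.normal(scale=0.1, size=(h, h))
w_out = rng.normal(scale=0.1, size=h)

def match_vector(utt_emb, resp_emb):
    """Step 1: word-word similarity for one (utterance, response) pair,
    distilled into a vector by max-pooling over response positions."""
    sim = utt_emb @ resp_emb.T                                 # (len_u, len_r)
    return (sim.max(axis=1) @ utt_emb) / len(utt_emb)          # crude pooled matching vector, (d,)

def smn_score(context_utts, resp_emb):
    """Step 2: accumulate per-utterance matching vectors with an RNN in
    chronological order, then score from the final hidden state."""
    hid = np.zeros(h)
    for utt_emb in context_utts:                               # one matching vector per context turn
        v = match_vector(utt_emb, resp_emb)
        hid = np.tanh(v @ W_xh + hid @ W_hh)
    return 1 / (1 + np.exp(-(hid @ w_out)))                    # matching probability

context = [rng.normal(size=(5, d)) for _ in range(3)]          # 3 context turns, 5 words each
response = rng.normal(size=(6, d))
print(smn_score(context, response))
```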

 2017.12  auto eval Metric - ADEM

    [Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. 47]

 2017.14  Framework - generation conditioned on specific attributes (manually assigned + automatically detected) - both speakers' dialogue states are modeled -> personal features

    [A Conditional Variational Framework for Dialog Generation. 20]  

 2017.16  Open-domain - Engine - generation (information retrieval + Seq2Seq) - AliMe Chat

    [AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine. 27]


 2018.L1  knowledge-guided generation - neural knowledge diffusion (NKD) model - handles both factual QA and chit-chat

     matches the relevant facts for the input utterance + diffuses them to similar entities

     [Knowledge Diffusion for Neural Dialogue Generation. 3]

 2018.L6  encoder-decoder model - interpretability - unsupervised discrete sentence representation learning

      DI-VAE + DI-VST - discover interpretable semantics via either auto-encoding or context prediction

     [Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation. 8]

 2018.L8  Framework - DialSQL + human intelligence - see the validation-loop sketch below

      identify potential errors in the generated SQL -> ask the user for validation -> use the feedback to revise the query

     [DialSQL: Dialogue Based Structured Query Generation. 4]
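
      A schematic sketch of the DialSQL validation loop (every component here is a stub; the real system learns to detect suspect spans and to generate multiple-choice questions for the user):

```python
# Stubbed DialSQL-style loop: flag suspect SQL spans, ask the user, revise the query.
def detect_suspect_spans(sql: str):
    """Stub error detector: flag spans the model is least confident about."""
    return [("cheap", ["cheap", "moderate", "expensive"])]      # (span, candidate fixes)

def ask_user(span, choices):
    """Stub user interaction: the user picks one of the offered choices."""
    return choices[1]                                           # pretend the user chose "moderate"

def dialsql_revise(sql: str) -> str:
    for span, choices in detect_suspect_spans(sql):
        fix = ask_user(span, choices)
        sql = sql.replace(span, fix)                            # feed the feedback back into the query
    return sql

print(dialsql_revise("SELECT name FROM restaurants WHERE price = 'cheap'"))
```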

3. Task + Non-task hybrid

 2017.3  detecting whether the assistant should switch to chat (chat detection) - dataset

    [Chat Detection in Intelligent Assistant: Combining Task-oriented and Non-task-oriented Spoken Dialogue Systems. McGill University. Montreal. 7] 

4. E2E

 challenges:

   (a) end-to-end training is data-intensive -> 2017.6

   (b) task-oriented systems must interact with a KB -> previous: issue a symbolic query to the KB to retrieve entries based on their attributes. -> 2017.13

    disadvantages:

      (1) such symbolic operations break the differentiability of the system

      (2) they prevent end-to-end training of neural dialogue agents


   (c) systems only consider user semantic inputs and under-utilize other user information. -> 2018.L4

   (d) incorporating knowledge bases. -> 2018.L7

 papers:

 2017.6  Framework - HCNs (Hybrid Code Networks): RNN + domain knowledge (software / system action templates) - reduces training data needs - optimized with supervised learning + RL - bAbI dialog dataset - 2 commercial dialogue systems - see the action-mask sketch below

    [Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. Microsoft Research. 87]
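
     A minimal sketch of the HCN action-masking idea (toy action templates and mask; the real model scores actions with an RNN over dialogue features and is trained with supervised learning and/or RL):

```python
# Toy HCN-style action selection: learned scores combined with a hand-coded action mask.
import numpy as np

ACTION_TEMPLATES = [
    "api_call <cuisine> <area>",
    "Which area do you prefer?",
    "Which cuisine do you prefer?",
]

def action_mask(state):
    """Developer-provided business logic: only allow the API call once both
    slots are filled.  This is the hand-coded knowledge HCNs combine with the RNN."""
    ok_api = state["cuisine"] is not None and state["area"] is not None
    return np.array([1.0 if ok_api else 0.0, 1.0, 1.0])

def choose_action(rnn_scores, state):
    masked = np.exp(rnn_scores) * action_mask(state)            # zero out disallowed actions
    probs = masked / masked.sum()
    return ACTION_TEMPLATES[int(probs.argmax())]

state = {"cuisine": "italian", "area": None}
print(choose_action(np.array([2.0, 0.5, 0.1]), state))          # asks for the area
```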

 2017.13  KB-InfoBot - E2E - task-oriented - multi-turn - interacts with a KB - presents an agent - see the soft-lookup sketch below

     replaces symbolic queries with an induced "soft" posterior distribution over the KB

     integrates the soft retrieval process with RL

    [Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access. CMU. MS. National Taiwan University. 82]
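
     A minimal sketch of the "soft" KB lookup (toy KB and belief distributions; the paper derives the posterior more carefully, this only shows the differentiable retrieval that replaces a symbolic query):

```python
# Toy soft KB lookup: posterior over KB rows from per-slot belief distributions.
import numpy as np

kb = [  # tiny movie KB: each row is a dict of slot values
    {"genre": "comedy", "year": "2015"},
    {"genre": "drama",  "year": "2015"},
    {"genre": "comedy", "year": "2017"},
]
# agent's current belief over each slot's value (from the dialogue so far)
beliefs = {
    "genre": {"comedy": 0.7, "drama": 0.3},
    "year":  {"2015": 0.5, "2017": 0.5},
}

def soft_lookup(kb, beliefs):
    """Posterior over KB rows: product of per-slot value probabilities, renormalized.
    Differentiable w.r.t. the belief parameters, unlike a hard symbolic query."""
    scores = np.array([
        np.prod([beliefs[slot][row[slot]] for slot in beliefs])
        for row in kb
    ])
    return scores / scores.sum()

print(soft_lookup(kb, beliefs))   # most mass on the comedy rows
```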


 2018.L4  multimodal user info (supervised + RL) - user-adaptive - reduces dialogue length + improves success rate

     [Sentiment Adaptive End-to-End Dialog Systems. 2]

 2018.L7  Mem2Seq - first neural generative model that combines multi-hop attention over memories with the pointer network idea - see the copy-mechanism sketch below

      [Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems. HKUST. 8]
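
      A pointer-generator style sketch of the copy mechanism (toy distributions; Mem2Seq itself uses multi-hop memory attention and a sentinel token to decide between generating from the vocabulary and copying from memory):

```python
# Toy "generate or copy" decision: mix a vocabulary distribution with a pointer over memory.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = ["i", "recommend", "the", "<pad>"]
memory = ["pizza_hut", "address_1", "deep_dish"]              # KB entries + dialogue history tokens

def next_token(gen_logits, copy_logits, p_copy):
    """Mix a vocabulary distribution with a pointer distribution over memory positions."""
    gen = (1 - p_copy) * softmax(gen_logits)                  # over the vocabulary
    copy = p_copy * softmax(copy_logits)                      # over memory positions
    tokens = vocab + memory
    probs = np.concatenate([gen, copy])
    return tokens[int(probs.argmax())]

print(next_token(np.array([0.1, 1.2, 0.3, -2.0]),             # generator prefers "recommend"
                 np.array([2.5, 0.1, 0.2]),                   # pointer prefers "pizza_hut"
                 p_copy=0.7))                                 # high copy weight -> "pizza_hut"
```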

5. NLU

 challenges:

  (a) no systematic comparison to analyze how to use context effectively. -> 2017.15

 papers:

 2017.7  identify discussion points + discourse relations

    [Joint Modeling of Content and Discourse Relations in Dialogues. 7] 

 2017.15  context utilization evaluation - empirical study comparing models - variant: weights context vectors by context-query relevance

    [How to Make Contexts More Useful? An Empirical Study to Context-Aware Neural Conversation Models. 18]

6. Dialogue state tracking

  challenges:

  (a) existing trackers have difficulty scaling to larger, more complex dialogue domains. -> 2017.10

    (1) Spoken Language Understanding models require large amounts of annotated training data

    (2) hand-crafted lexicons are needed to capture some of the linguistic variation in users' language

  (b) handling unknown slot values -> previous approaches assume predefined candidate lists and thus are not designed to output unknown values; this matters especially in E2E settings where a separate SLU module is absent. -> 2018.L10

 papers:

 2017.10  Framework - Neural Belief Tracking (NBT) - representation learning (composes pre-trained word vectors into representations of utterances and context)

     [Neural Belief Tracker: Data-Driven Dialogue State Tracking. 63]


 2018.L9  Global-Locally Self-Attentive Dialogue State Tracker (GLAD) - see the parameter-sharing sketch below

     global modules: shares parameters between estimators for different types (called slots) of dialogue states

     local modules: learn slot-specific features

     [Global-Locally Self-Attentive Encoder for Dialogue State Tracking. 0]
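
      A minimal sketch of GLAD's global-local parameter sharing (dimensions, the mixing scalar, and the scoring head are illustrative only): the global weights are shared across all slots while each slot keeps its own local weights.

```python
# Toy GLAD-style encoder: shared global weights mixed with per-slot local weights.
import numpy as np

rng = np.random.default_rng(0)
d, h = 16, 8
slots = ["food", "area", "price range"]

W_global = rng.normal(scale=0.1, size=(d, h))                          # shared by every slot
W_local = {s: rng.normal(scale=0.1, size=(d, h)) for s in slots}       # slot-specific
beta = {s: 0.5 for s in slots}                                         # per-slot mixing weight (learned in GLAD; fixed here)
w_score = rng.normal(scale=0.1, size=h)

def encode(x, slot):
    return beta[slot] * np.tanh(x @ W_global) + (1 - beta[slot]) * np.tanh(x @ W_local[slot])

def slot_value_score(utt_vec, value_vec, slot):
    """Score whether value_vec is the value of the given slot in the utterance."""
    return float(encode(utt_vec, slot) @ w_score + encode(value_vec, slot) @ w_score)

utt, val = rng.normal(size=d), rng.normal(size=d)
print({s: round(slot_value_score(utt, val, s), 3) for s in slots})
```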

 2018.L10  E2E + pointer network (PtrNet)

     [An End-to-end Approach for Handling Unknown Slot Values in Dialogue State Tracking. 2]

7. Framework

 challenges:

    (a) pipeline: introduces architectural complexity and fragility. -> 2018.L2

 papers:

 2018.L2  Seq2Seq + optimization (supervised / RL) - task-oriented - see the belief-span sketch below

    designs text spans named belief spans -> tracks dialogue beliefs -> allows task-oriented systems to be modeled within the Seq2Seq framework

    Two-Stage CopyNet instantiation -> fewer parameters and less training time + beats pipeline baselines on a large dataset and with OOV values

    [Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. National University of Singapore. Fudan. JD. 9]
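
     A toy illustration of the belief-span idea (formats and function names are made up; Sequicity learns both stages with a single copy-augmented Seq2Seq model, whereas here the two stages are hand-written stubs):

```python
# Stage 1: decode a textual belief span; Stage 2: decode the response conditioned on it.
def decode_belief_span(prev_bspan: str, user_utt: str) -> str:
    """Stub for stage 1: rewrite the previous belief span given the new user turn."""
    values = set(prev_bspan.split())
    for word in user_utt.split():
        if word in {"italian", "cheap", "expensive", "centre"}:   # toy slot-value lexicon
            values.add(word)
    return " ".join(sorted(values))

def decode_response(bspan: str, kb_match_count: int) -> str:
    """Stub for stage 2: generate a response from the belief span + a KB search result."""
    if kb_match_count == 0:
        return "Sorry, I could not find a matching restaurant."
    return f"I found {kb_match_count} places matching: {bspan}. Any other preference?"

bspan = decode_belief_span("", "I want a cheap italian restaurant")
print(bspan)                                    # "cheap italian"
print(decode_response(bspan, 3))
```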

8. RL

 challenges:

    (a) training a task-completion dialogue agent via reinforcement learning (RL) is costly: it requires many interactions with real users.

    (b) using a user simulator instead lacks the language complexity of real users and introduces biases.

 papers:

 2018.L3  RL - policy learning - Deep Dyna-Q - see the training-loop sketch below

     first deep RL framework that integrates planning for task-completion dialogue policy learning

      world model updated with real user experience + agent optimized using both real and simulated experience

     [Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning. 3] 
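
      A schematic sketch of the Deep Dyna-Q loop (agent, world model, and user are trivial stubs; the point is interleaving direct RL on real experience, world-model learning, and planning on simulated experience):

```python
# Stubbed Deep Dyna-Q training loop: direct RL + world-model learning + planning.
import random

class Stub:
    """Stands in for the dialogue agent, the world model, or the real user."""
    def run_episode(self, environment):
        return [("state", "action", random.random()) for _ in range(3)]   # fake trajectory
    def update(self, experience):
        pass                                                              # pretend to learn

def deep_dyna_q(agent, world_model, real_user, planning_steps=5, episodes=10):
    for _ in range(episodes):
        real_experience = agent.run_episode(real_user)     # 1) direct RL with the real user
        agent.update(real_experience)
        world_model.update(real_experience)                # 2) fit the world model to real data
        for _ in range(planning_steps):                    # 3) planning with simulated users
            agent.update(agent.run_episode(world_model))

deep_dyna_q(Stub(), Stub(), Stub())
```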

9. Chit-chat

 challenges:

  (a) responses lack specificity

  (b) do not display a consistent personality. -> 2018.L5

 papers:

  2018.L5  adds profile information [i. the agent's own given persona + ii. information about the dialogue partner]; the model is trained to engage the partner on personal topics -> the conversation can then be used to predict profile information

      [Personalizing Dialogue Agents: I have a dog, do you have pets too? 31.]        

10. Others:

 challenges:

  (a) open-ended dialogue state. -> 2017.9

 papers:

 2017.9 Symmetric Collaborative Dialogue - two agents collaborate to achieve a common goal

    [Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. 21] 
