Paper Title

Real-time Attention Based Look-alike Model for Recommender System

Basic algorithm and main steps

Basic ideas

RALM is a similarity-based look-alike model consisting of user representation learning and look-alike learning. Novel points: the attention-merge layer, local and global attention, and online asynchronous seed clustering.

1. Offline Training

1. User Representation Learning

Treat it as a multi-class classification task that chooses an interested item from millions of candidates.

(1) Calculate the probability of picking the $ i$-th item as a negative example

$ p(x_i) = \frac{\log(k+2)-\log(k+1)}{\log(D+1)} $

$ D $: the maximum rank over all items (items are ranked by their frequency of appearance).

$ k $: the rank of the $ i$-th item.

(2) Negative sampling: sample at a positive/negative ratio of 1:10
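As a quick sanity check, here is a minimal NumPy sketch (with a hypothetical item count $ D $) of this log-uniform sampling distribution; the per-rank probabilities telescope to 1:

```python
import numpy as np

# Hypothetical setup: D items, ranked 0..D-1 by descending frequency of appearance.
D = 100_000
ranks = np.arange(D)

# p(x_i) for the item at rank k, following the formula above (a log-uniform / Zipfian law).
p = (np.log(ranks + 2) - np.log(ranks + 1)) / np.log(D + 1)
p /= p.sum()  # the terms telescope to 1; renormalize only to absorb float error

# Draw 10 negatives per positive, i.e. a 1:10 positive/negative ratio.
rng = np.random.default_rng(0)
negatives = rng.choice(D, size=10, p=p)
```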

(3) Embedding layer

$ P(c=i|U,X_i) = \frac{e^{x_i u}}{\sum \limits_{j \in X}e^{x_j u}} $

the cross-entropy loss: $ L = -\sum \limits_{i \in X} y_i \log P(c=i|U,X_i) $

$ u $: a high-dimensional embedding of the user

$ x_j $: embeddings of item $ j $

$ y_i \in \{0, 1\} $: the label

After convergence, the output is the representation of user interests.
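A minimal sketch of this step, assuming the user vector $ u $ and the sampled item embeddings (one positive plus ten negatives) are already given; all shapes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

u = rng.normal(size=dim)            # user embedding u (illustrative, not learned here)
x = rng.normal(size=(11, dim))      # 1 positive + 10 negative item embeddings x_j
y = np.zeros(11)
y[0] = 1.0                          # label y_i: index 0 is the positive item

# P(c=i|U, X_i) = exp(x_i . u) / sum_j exp(x_j . u)
logits = x @ u
logits -= logits.max()              # stabilize the softmax numerically
probs = np.exp(logits) / np.exp(logits).sum()

# cross-entropy loss L = -sum_i y_i log P(c=i|U, X_i)
loss = -np.sum(y * np.log(probs))
```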

(4) Attention merge layer

Learn user-related weights for multiple fields.

Each of the $ n $ fields is embedded into a vector $ h \in R^m $ of the same length $ m $; stacking them along the field dimension yields a matrix $ H \in R^{n \times m} $. Next, compute the weights:

$ u = \tanh(W_1 H) $

$ a_i = \frac{e^{W_2 u_i^T}}{\sum_{j=1}^{n} e^{W_2 u_j^T}} $

$ W_1 \in R^{k \times n} $ and $ W_2 \in R^k $: weight matrices; $ k $: the size of the attention unit;

$ u \in R^n $: the activation unit for fields; $ a \in R^n $: the weights of the fields.

Merged vector $ M \in R^m $: $ M = aH $

Then take $ M $ as the input of the MLP layer to get the universal user embedding.
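A sketch of the attention merge computation. The paper's stated shapes are slightly ambiguous, so this assumes $ W_1 $ projects each field embedding individually (i.e. $ W_1 \in R^{k \times m} $ here), with random weights standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 6, 64, 32                 # n fields, embedding length m, attention unit size k

H = rng.normal(size=(n, m))         # stacked field embeddings, H in R^{n x m}
W1 = rng.normal(size=(k, m))        # assumption: W1 acts on each field embedding
W2 = rng.normal(size=k)

u = np.tanh(H @ W1.T)               # per-field activation units, R^{n x k}
logits = u @ W2                     # one attention logit per field, R^n
a = np.exp(logits - logits.max())
a /= a.sum()                        # softmax weights over fields, a in R^n

M = a @ H                           # merged vector M = aH, M in R^m
```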

2. Look-alike Learning

(1) Transforming matrix

A transforming matrix projects the user representations from $ n \times m $ to $ n \times h $.

(2) Local attention

To activate local interests and mine personalized information.

$ E_{local_s} = E_s \, \mathrm{softmax}(\tanh(E_s^T W_l E_u)) $

$ W_l \in R^{h \times h} $: the local attention matrix;

$ E_s $: the seed users' embeddings; $ E_u $: the target user's embedding

Note: first cluster the seed users into k clusters with the K-means algorithm, and for each cluster compute the mean of its seed vectors.
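A sketch of this step, with scikit-learn's K-means standing in for the paper's clustering and a random $ W_l $ in place of the learned one:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, n_seeds, k = 32, 500, 20             # embedding size, number of seeds, clusters

seeds = rng.normal(size=(n_seeds, h))   # seed user embeddings (illustrative)
E_u = rng.normal(size=h)                # target user embedding

# Cluster the seeds; each cluster's centroid stands in for its member seeds.
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(seeds)
E_s = km.cluster_centers_.T             # E_s in R^{h x k}, one column per cluster

W_l = rng.normal(size=(h, h))           # local attention matrix W_l in R^{h x h}

# E_local_s = E_s softmax(tanh(E_s^T W_l E_u))
scores = np.tanh(E_s.T @ W_l @ E_u)     # R^k: each cluster's relevance to this user
w = np.exp(scores - scores.max())
w /= w.sum()
E_local = E_s @ w                       # locally attended seed representation, R^h
```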

(3) Global attention

$ E_{global_s} = E_s \, \mathrm{softmax}(E_s^T \tanh(W_g E_s)) $

(4) Calculate the similarity between the seeds and the target user

$ score_{u,s} = \alpha \cdot \mathrm{cosine}(E_u, E_{global_s}) + \beta \cdot \mathrm{cosine}(E_u, E_{local_s}) $
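Continuing the sketch above: the paper's global-attention formula leaves the shapes ambiguous, so the reading below scores each cluster column $ e_i $ by $ e_i^T \tanh(W_g e_i) $ and softmaxes over clusters; $ \alpha $ and $ \beta $ are placeholder mixing weights, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
h, k = 32, 20

E_s = rng.normal(size=(h, k))       # clustered seed embeddings (as in the local sketch)
E_u = rng.normal(size=h)            # target user embedding
E_local = rng.normal(size=h)        # stand-in for the local attention output above
W_g = rng.normal(size=(h, h))       # global attention matrix (assumed h x h)

# One reading of E_global_s = E_s softmax(E_s^T tanh(W_g E_s)):
# score cluster i by the diagonal term e_i^T tanh(W_g e_i).
scores = np.einsum('hk,hk->k', E_s, np.tanh(W_g @ E_s))
w = np.exp(scores - scores.max())
w /= w.sum()
E_global = E_s @ w                  # globally attended seed representation, R^h

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

alpha, beta = 0.5, 0.5              # hypothetical weights; tuned in practice
score = alpha * cosine(E_u, E_global) + beta * cosine(E_u, E_local)
```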

(5) Iterative training

2. Online Asynchronous Processing

Updates the seed embedding database in real time. It includes a user feedback monitor and seed clustering.

3. Online Serving

$ score_{u,s} = \alpha \cdot \mathrm{cosine}(E_u, E_{global_s}) + \beta \cdot \mathrm{cosine}(E_u, E_{local_s}) $

Motivation

  • The "Matthew effect" becomes increasingly evident in recent recommendation systems. Many competitive long-tail contents are

    difficult to achieve timely exposure because of lacking behavior

    features .
  • Traditional look-alike models, which are widely used in online advertising, are not suitable for recommender systems because of the strict requirements on both real-time performance and effectiveness.

Contribution

  • Improve the effectiveness of user representation learning: use attention to capture users' diverse fields of interest.
  • Improve the robustness and adaptivity of seed representation learning: use local and global attention.
  • Realize a real-time, high-performance look-alike model.

My own idea

Relations to what I have read

  • Method of concatenating feature fields. In other papers on CTR that I have read, different feature fields are concatenated directly. This causes overfitting on strongly relevant fields (such as interested tags) and underfitting on weakly relevant fields (such as shopping interests), so the recommendations end up determined by a few strongly relevant fields. Such models cannot learn comprehensively from multi-field features, and the recommended results lack diversity. This paper instead uses attention merge to learn effective relations among the different fields of user features.
  • Besides, it uses high-order continuous features instead of categorical features. In my opinion, if we use low-order categorical features to express a user group, we can only construct the features with statistical methods, which loses most of the group's information. The higher-order continuous features produced by representation learning, however, contain the interactions of users' various lower-order features and can express user information more comprehensively. Moreover, the higher-order features generalize better, avoiding representations that merely memorize historical data.

Shortcomings and potential changes I assume

  • In this paper, it seems that only a few features are used to learn the representation, which may limit the effect to some extent.
