• A finite set of states S_t summarizing the information the agent senses from the environment at every time step t ∈ {1, ..., T}.

• A set of actions A_t which the agent can perform at each time step t ∈ {1, ..., T} to interact with the environment.

• A set of transition probabilities between subsequent states which render the environment stochastic. Note: these probabilities are usually not modeled explicitly but are the result of the stochastic nature of the financial asset's price process.

• A reward (or return) function R_t which provides a numerical feedback value r_t to the agent in response to its action A_{t-1} = a_{t-1} in state S_{t-1} = s_{t-1}.

• A policy π which maps states to concrete actions to be carried out by the agent. The policy can hence be understood as the agent’s rules for how to choose actions.

• A value function V which maps states to the total (discounted) reward the agent can expect from a given state until the end of the episode (trading period) under policy π. A minimal code sketch of these components follows below.
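To make the framework concrete, the following minimal sketch maps the components above onto code. It assumes a simple tabular setting; the discretized trend states, the three trading actions, and the toy price process are illustrative assumptions, not taken from the cited works.

    import random

    # Minimal sketch of the framework components above. The trend states,
    # trading actions, and reward logic are illustrative assumptions.

    STATES = ["trend_up", "trend_flat", "trend_down"]   # finite state set S_t
    ACTIONS = ["buy", "hold", "sell"]                    # action set A_t

    class ToyTradingEnv:
        """Toy environment: transition probabilities are never written
        down explicitly; they emerge from a random price process."""

        def __init__(self, T=100):
            self.T = T                    # episode length (trading period)
            self.t = 0
            self.state = "trend_flat"

        def step(self, action):
            """Apply the agent's action; return (next_state, reward, done)."""
            price_change = random.gauss(0.0, 1.0)        # stochastic price move
            reward = {"buy": price_change,               # long position
                      "hold": 0.0,                       # flat
                      "sell": -price_change}[action]     # short position
            self.state = ("trend_up" if price_change > 0.3
                          else "trend_down" if price_change < -0.3
                          else "trend_flat")
            self.t += 1
            return self.state, reward, self.t >= self.T

    def example_policy(state):
        """A fixed example policy pi mapping states to actions."""
        return {"trend_up": "buy", "trend_flat": "hold", "trend_down": "sell"}[state]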

Given the above framework, the decision problem is formalized as finding the optimal policy π = π*, i.e., the mapping from states to actions that corresponds to the optimal value function V*; see also Dempster et al. (2001) and Dempster and Romahi (2002):

  V^*(s_t) = \max_{a_t} \mathbb{E}\left[ R_{t+1} + \gamma V^*(S_{t+1}) \mid S_t = s_t \right].   (1)

Here, \mathbb{E} denotes the expectation operator, γ the discount factor, and R_{t+1} the expected immediate reward for carrying out action A_t = a_t in state S_t = s_t. Further, S_{t+1} denotes the next state of the agent. The value function can hence be understood as a mapping from states to the discounted future rewards which the agent seeks to maximize with its actions.
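To illustrate the role of γ, the short sketch below (with made-up reward values) computes the discounted sum of future rewards, i.e., the quantity whose expectation the value function represents:

    # Discounted sum of an illustrative reward sequence; gamma and the
    # reward values are made-up examples.
    gamma = 0.9
    rewards = [1.0, 0.5, -0.2, 2.0]           # r_{t+1}, r_{t+2}, r_{t+3}, r_{t+4}

    discounted_return = sum(gamma**k * r for k, r in enumerate(rewards))
    print(discounted_return)                  # 1.0 + 0.45 - 0.162 + 1.458 = 2.746

The closer γ is to 1, the more weight distant rewards receive; with γ = 0 the agent would value only the immediate reward.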

To solve this optimization problem, the Q-Learning algorithm (Watkins, 1989) can be applied, extending the above equation to the level of state-action tuples:

  Q^*(s_t, a_t) = \mathbb{E}\left[ R_{t+1} + \gamma \max_{a_{t+1}} Q^*(S_{t+1}, a_{t+1}) \mid S_t = s_t, A_t = a_t \right].   (2)

Here, the Q-value Q*(s_t, a_t) equals the immediate reward for carrying out action A_t = a_t in state S_t = s_t plus the discounted future reward from continuing in the best way possible.

The optimal policy π* (the mapping from states to actions) then simply becomes:

  \pi^*(s_t) = \arg\max_{a_t} Q^*(s_t, a_t),   (3)

i.e., in every state S_t = s_t, choose the action A_t = a_t that yields the highest Q-value. To approximate the Q-function during (online) learning, an iterative optimization is carried out, with α denoting the learning rate; see also Sutton and Barto (1998) for further details:

  Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left( r_{t+1} + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}) \right).   (4)
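Taken together, equations (2)-(4) translate into a few lines of tabular Q-Learning. The sketch below reuses the ToyTradingEnv and ACTIONS from the earlier snippet; the ε-greedy exploration scheme and all hyperparameter values are common illustrative choices, not prescriptions from the cited works.

    import random
    from collections import defaultdict

    alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate
    Q = defaultdict(float)                   # Q-table: (state, action) -> value

    def greedy_action(state):
        """Equation (3): the action with the highest Q-value in this state."""
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    env = ToyTradingEnv()
    state, done = env.state, False
    while not done:
        # epsilon-greedy exploration; a common heuristic, not part of eq. (4)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = greedy_action(state)
        next_state, reward, done = env.step(action)
        # Equation (4): move Q(s_t, a_t) toward the TD target; do not
        # bootstrap past the end of the episode
        future = 0.0 if done else max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] = (1 - alpha) * Q[(state, action)] + alpha * (reward + gamma * future)
        state = next_state

In practice, ε is typically decayed over the course of training so that the agent explores early on and increasingly exploits the learned Q-values later.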
