• A finite set of states $S_t$ summarizing the information the agent senses from the environment at every time step $t \in \{1, \dots, T\}$.

• A set of actions $A_t$ which the agent can perform at each time step $t \in \{1, \dots, T\}$ to interact with the environment.

• A set of transition probabilities between subsequent states which render the environment stochastic. Note: these probabilities are usually not modeled explicitly but arise from the stochastic nature of the financial asset’s price process.

• A reward (or return) function $R_t$ which provides a numerical feedback value $r_t$ to the agent in response to its action $A_{t-1} = a_{t-1}$ in state $S_{t-1} = s_{t-1}$.

• A policy $\pi$ which maps states to concrete actions to be carried out by the agent. The policy can hence be understood as the agent’s rules for how to choose actions.

• A value function $V$ which maps states to the total (discounted) reward the agent can expect from a given state until the end of the episode (trading period) under policy $\pi$ (see the sketch after this list).
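To make these components concrete, here is a minimal sketch of a toy two-state trading MDP in Python. All names and numbers ("bull"/"bear" states, the transition probabilities, the rewards) are illustrative assumptions for demonstration, not part of the framework above.

```python
import random

# Toy two-state trading MDP (all names and numbers are illustrative assumptions).
STATES = ["bull", "bear"]    # states S_t the agent can sense
ACTIONS = ["long", "flat"]   # actions A_t the agent can take
GAMMA = 0.9                  # discount factor gamma

# Transition probabilities P(s' | s); independent of the action here,
# mimicking a price process that the agent's trades do not influence.
TRANSITIONS = {
    "bull": {"bull": 0.8, "bear": 0.2},
    "bear": {"bull": 0.3, "bear": 0.7},
}

# Reward r_t received for taking action a in state s.
REWARDS = {
    ("bull", "long"): 1.0, ("bull", "flat"): 0.0,
    ("bear", "long"): -1.0, ("bear", "flat"): 0.0,
}

def step(state, action):
    """Sample the next state and return it together with the reward."""
    probs = TRANSITIONS[state]
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, REWARDS[(state, action)]

# Example: one environment step under a fixed "always long" policy.
s1, r1 = step("bull", "long")
```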

Given the above framework, the decision problem is formalized as finding the optimal policy $\pi = \pi^*$, i.e., the mapping from states to actions, corresponding to the optimal value function $V^*$ - see also Dempster et al. (2001); Dempster and Romahi (2002):

$$V^*(s_t) = \max_{a_t} \mathbb{E}\left[R_{t+1} + \gamma V^*(S_{t+1}) \mid S_t = s_t\right] \tag{1}$$

Here, $\mathbb{E}$ denotes the expectation operator, $\gamma$ the discount factor, and $R_{t+1}$ the expected immediate reward for carrying out action $A_t = a_t$ in state $S_t = s_t$. Further, $S_{t+1}$ denotes the agent's next state. The value function can hence be understood as a mapping from states to discounted future rewards, which the agent seeks to maximize through its actions.
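Equation (1) can be solved numerically by value iteration: start from an arbitrary $V$ and repeatedly apply the right-hand side as an update until the values stop changing. The sketch below does this for the illustrative toy MDP introduced above (definitions repeated so the snippet runs on its own):

```python
# Value iteration for equation (1) on the toy MDP (illustrative numbers repeated).
STATES = ["bull", "bear"]
ACTIONS = ["long", "flat"]
GAMMA = 0.9
TRANSITIONS = {"bull": {"bull": 0.8, "bear": 0.2},
               "bear": {"bull": 0.3, "bear": 0.7}}
REWARDS = {("bull", "long"): 1.0, ("bull", "flat"): 0.0,
           ("bear", "long"): -1.0, ("bear", "flat"): 0.0}

V = {s: 0.0 for s in STATES}   # arbitrary initial value function
for _ in range(200):           # fixed iteration budget; converges quickly here
    V = {s: max(               # max over actions a_t, as in equation (1)
            REWARDS[(s, a)]
            + GAMMA * sum(p * V[s2] for s2, p in TRANSITIONS[s].items())
            for a in ACTIONS)
         for s in STATES}

print(V)  # approximates the optimal value V*(s) for each state
```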

To solve this optimization problem, the Q-Learning algorithm (Watkins, 1989) can be applied, extending the above equation to the level of state-action tuples:

$$Q^*(s_t, a_t) = \mathbb{E}\left[R_{t+1} + \gamma \max_{a_{t+1}} Q^*(S_{t+1}, a_{t+1}) \mid S_t = s_t, A_t = a_t\right] \tag{2}$$

Here, the Q-value $Q^*(s_t, a_t)$ equals the immediate reward for carrying out action $A_t = a_t$ in state $S_t = s_t$ plus the discounted future reward from continuing in the best way possible.

The optimal policy $\pi^*$ (the mapping from states to actions) then simply becomes:

$$\pi^*(s_t) = \arg\max_{a_t} Q^*(s_t, a_t) \tag{3}$$

i.e., in every state $S_t = s_t$, choose the action $A_t = a_t$ that yields the highest Q-value. To approximate the Q-function during (online) learning, an iterative optimization is carried out with $\alpha$ denoting the learning rate - see also Sutton and Barto (1998) for further details:

$$Q(s_t, a_t) \leftarrow (1 - \alpha)\, Q(s_t, a_t) + \alpha \left(r_{t+1} + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1})\right) \tag{4}$$
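Putting equations (2)-(4) together, the following is a minimal sketch of tabular Q-learning on the same illustrative toy MDP. The exploration scheme ($\epsilon$-greedy) and the values of $\alpha$, $\gamma$, $\epsilon$, and the step budget are all assumptions chosen for the example, not prescribed by the text:

```python
import random

# Tabular Q-learning on the illustrative toy MDP (all constants are assumptions).
STATES = ["bull", "bear"]
ACTIONS = ["long", "flat"]
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1  # discount, learning rate, exploration rate
TRANSITIONS = {"bull": {"bull": 0.8, "bear": 0.2},
               "bear": {"bull": 0.3, "bear": 0.7}}
REWARDS = {("bull", "long"): 1.0, ("bull", "flat"): 0.0,
           ("bear", "long"): -1.0, ("bear", "flat"): 0.0}

def step(state, action):
    probs = TRANSITIONS[state]
    next_state = random.choices(list(probs), weights=list(probs.values()))[0]
    return next_state, REWARDS[(state, action)]

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
state = random.choice(STATES)
for t in range(50_000):
    # epsilon-greedy action selection during online learning
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # Q-learning update, equation (4)
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] = ((1 - ALPHA) * Q[(state, action)]
                          + ALPHA * (reward + GAMMA * best_next))
    state = next_state

# Greedy policy of equation (3): pi*(s) = argmax_a Q(s, a)
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Under standard conditions (every state-action pair visited infinitely often and a suitably decaying learning rate), tabular Q-learning converges to $Q^*$; the fixed $\alpha$ above is a simplification for the sketch.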
