A Survey for RL

The reinforcement learning framework is built around the following components:
• A finite set of states $S_t$, summarizing the information the agent senses from the environment at every time step $t \in \{1, \ldots, T\}$.
• A set of actions $A_t$ which the agent can perform at each time step $t \in \{1, \ldots, T\}$ to interact with the environment.
• A set of transition probabilities between subsequent states, which render the environment stochastic. Note: these probabilities are usually not modeled explicitly but result from the stochastic nature of the financial asset's price process.
• A reward (or return) function $R_t$, which provides a numerical feedback value $r_t$ to the agent in response to its action $A_{t-1} = a_{t-1}$ in state $S_{t-1} = s_{t-1}$.
• A policy $\pi$, which maps states to concrete actions to be carried out by the agent. The policy can hence be understood as the agent's rules for how to choose actions.
• A value function $V$, which maps states to the total (discounted) reward the agent can expect from a given state until the end of the episode (trading period) under policy $\pi$.
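To make these components concrete, the minimal Python sketch below defines a toy trading environment. The state discretization, the action set {short, flat, long}, and the one-step profit-and-loss reward are illustrative assumptions for this sketch, not a specification taken from the survey.

```python
import numpy as np

class ToyTradingEnv:
    """Minimal toy trading MDP for illustration (hypothetical design).

    State  : index of the discretized most recent return of the asset.
    Action : 0 = short, 1 = flat, 2 = long (position held for one step).
    Reward : position * next return, i.e. the one-step P&L of the position.
    """

    def __init__(self, returns, n_bins=5):
        self.returns = np.asarray(returns)            # historical return series
        # Interior quantiles used as bin edges for the state discretization.
        self.bins = np.quantile(self.returns, np.linspace(0, 1, n_bins + 1)[1:-1])
        self.n_states = n_bins
        self.n_actions = 3
        self.t = 0

    def _state(self):
        # Map the current return to one of n_bins discrete states.
        return int(np.digitize(self.returns[self.t], self.bins))

    def reset(self):
        self.t = 0
        return self._state()

    def step(self, action):
        position = action - 1                         # map {0,1,2} -> {-1,0,+1}
        reward = position * self.returns[self.t + 1]  # one-step P&L
        self.t += 1
        done = self.t >= len(self.returns) - 1        # end of the trading period
        return self._state(), reward, done
```

An agent (a Q-learning sketch follows further below) interacts with this environment only through `reset()` and `step()`.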
Given the above framework, the decision problem is formalized as finding the optimal policy $\pi = \pi^*$, i.e., the mapping from states to actions that corresponds to the optimal value function $V^*$; see also Dempster et al. (2001); Dempster and Romahi (2002):
$$V^*(s_t) = \max_{a_t} \mathbb{E}\left[ R_{t+1} + \gamma V^*(S_{t+1}) \mid S_t = s_t \right] \qquad (1)$$
Here, $\mathbb{E}$ denotes the expectation operator, $\gamma$ the discount factor, and $R_{t+1}$ the expected immediate reward for carrying out action $A_t = a_t$ in state $S_t = s_t$. Further, $S_{t+1}$ denotes the next state of the agent. The value function can hence be understood as a mapping from states to discounted future rewards, which the agent seeks to maximize through its actions.
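When the transition probabilities and rewards are known, equation (1) can be turned directly into a value-iteration procedure. The sketch below applies the Bellman-optimality backup to a small made-up MDP; the transition tensor `P`, reward table `R`, and discount factor are hypothetical values chosen only for illustration.

```python
import numpy as np

# Toy MDP (hypothetical values): 3 states, 2 actions.
# P[s, a, s'] = transition probability, R[s, a] = expected immediate reward.
P = np.array([
    [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3]],
    [[0.0, 0.9, 0.1], [0.3, 0.3, 0.4]],
    [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],   # state 2 is absorbing
])
R = np.array([
    [1.0, 0.5],
    [0.2, 1.5],
    [0.0, 0.0],
])
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the backup of Eq. (1),
# V(s) <- max_a ( R(s,a) + gamma * sum_s' P(s,a,s') V(s') ).
V = np.zeros(P.shape[0])
for _ in range(1000):
    V_new = (R + gamma * P @ V).max(axis=1)   # maximize over actions
    delta, V = np.max(np.abs(V_new - V)), V_new
    if delta < 1e-8:                          # stop once the backup has converged
        break

print("Approximate optimal state values V*:", V)
```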
To solve this optimization problem, the Q-Learning algorithm (Watkins, 1989) can be applied, extending the above equation to the level of state-action tuples:
$$Q^*(s_t, a_t) = \mathbb{E}\left[ R_{t+1} + \gamma \max_{a_{t+1}} Q^*(S_{t+1}, a_{t+1}) \mid S_t = s_t, A_t = a_t \right] \qquad (2)$$
Here, the Q-value $Q^*(s_t, a_t)$ equals the immediate reward for carrying out action $A_t = a_t$ in state $S_t = s_t$ plus the discounted future reward obtained by continuing in the best way possible thereafter.
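For completeness (this step is implicit in the text above), the state-value function of equation (1) and the action-value function of equation (2) are linked by the identity

$$V^*(s_t) = \max_{a_t} Q^*(s_t, a_t),$$

so equation (2) follows from equation (1) by conditioning on the chosen action and substituting this identity for the value of the successor state.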
The optimal policy $\pi^*$ (the mapping from states to actions) then simply becomes:
$$\pi^*(s_t) = \arg\max_{a_t} Q^*(s_t, a_t) \qquad (3)$$
i.e., in every state $S_t = s_t$, choose the action $A_t = a_t$ that yields the highest Q-value. To approximate the Q-function during (online) learning, an iterative optimization is carried out, with $\alpha$ denoting the learning rate; see also Sutton and Barto (1998) for further details:
$$Q^*(s_t, a_t) \leftarrow (1 - \alpha)\, Q^*(s_t, a_t) + \alpha \left( r_{t+1} + \gamma \max_{a_{t+1}} Q^*(s_{t+1}, a_{t+1}) \right) \qquad (4)$$
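A minimal tabular Q-learning sketch of the update rule in equation (4), together with the greedy policy extraction of equation (3), could look as follows. The `env.reset()`/`env.step()` interface and the epsilon-greedy exploration scheme are assumptions for this sketch (matching the toy environment above), not something prescribed by the text.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning following the update rule of Eq. (4).

    `env` is assumed to expose `reset() -> state` and
    `step(action) -> (next_state, reward, done)`; this interface is an
    illustrative assumption, not part of the surveyed text.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration around the greedy policy of Eq. (3).
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))

            s_next, r, done = env.step(a)

            # Eq. (4): Q(s,a) <- (1 - alpha) * Q(s,a)
            #                    + alpha * (r + gamma * max_a' Q(s',a')).
            # No bootstrapping from terminal states.
            target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] = (1 - alpha) * Q[s, a] + alpha * target

            s = s_next

    # Eq. (3): the learned greedy policy is the argmax over Q-values.
    policy = np.argmax(Q, axis=1)
    return Q, policy
```

For the toy trading environment sketched earlier, one would call, e.g., `Q, policy = q_learning(ToyTradingEnv(returns), n_states=5, n_actions=3)`.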