Policy Improvement and Policy Iteration
From the last post, we know how to evaluate a policy. But that alone is not enough: the purpose of policy evaluation is to improve the policy so that we eventually reach the optimal one. In this post, we discuss how to improve a given policy, and how to get from a given policy to the optimal policy.
First, once a policy has been evaluated, the Action-Value function is known for every state. That is, at a given state s, we know which action gives the system the largest expected return.
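Concretely, with the State-Value function in hand, the Action-Value of each action is a one-step look-ahead. This is the standard identity (a reconstruction; the transition model p and discount γ are assumed known, as in the previous posts):

```latex
q_\pi(s, a) = \sum_{s'} p(s' \mid s, a)\,\bigl[\, r(s, a, s') + \gamma\, v_\pi(s') \,\bigr]
```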
In the puzzle-wandering example, we evaluated the random policy. The resulting State-Value function, however, can also be used for policy improvement. After one step of calculation, we can already conclude that at the circled location, moving left is better than randomly picking a direction, because the left side has more reward.
After three steps, we have a much better intuition about the map, and we can replace the random policy with a new, better one.
The way to improve the current policy is to greedily pick an action for every state. It is worth noting that greedily picking actions does not mean considering only one step ahead (too greedy to look further). Rather, when k = 3, the value estimates already encode three steps of look-ahead, so the greedy pick selects the best action with respect to those k steps.
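As a concrete sketch of this step (the array names and tabular setup are my own assumptions, not from the original post), greedy improvement is just an argmax over one-step look-ahead values computed from the current estimate v_k:

```python
import numpy as np

def greedy_policy(P, R, V, gamma=0.9):
    """Greedy policy w.r.t. the value estimates V.

    P[s, a, s2]: transition probabilities, R[s, a]: expected rewards.
    The backup is only one step deep, but V itself may already encode
    multi-step information (e.g. v_3 after three evaluation sweeps).
    """
    Q = R + gamma * (P @ V)      # one-step look-ahead: Q[s, a]
    return np.argmax(Q, axis=1)  # best action per state
```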
The Policy Iteration Algorithm keeps alternating between the evaluation and improvement steps until the policy becomes stable, i.e., greedy improvement no longer changes it.
This process means that the Action-Value of the improved policy equals the best return obtainable from a single action:
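In standard notation (a reconstruction that follows the textbook greedy-improvement step the text describes):

```latex
\pi'(s) = \arg\max_a q_\pi(s, a),
\qquad
q_\pi\bigl(s, \pi'(s)\bigr) = \max_a q_\pi(s, a) \;\ge\; q_\pi\bigl(s, \pi(s)\bigr) = v_\pi(s)
```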
The algorithm is:
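Below is a minimal NumPy sketch of tabular policy iteration, not the post's original pseudocode; the transition tensor P[s, a, s2], reward matrix R[s, a], and tolerance theta are my own conventions:

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9, theta=1e-8):
    """Tabular policy iteration for a finite MDP.

    P[s, a, s2]: probability of moving s -> s2 when taking action a.
    R[s, a]:     expected immediate reward for taking action a in s.
    """
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)  # start from an arbitrary policy

    while True:
        # --- Policy evaluation: iterate the Bellman expectation backup ---
        V = np.zeros(n_states)
        while True:
            V_new = np.array([R[s, policy[s]] + gamma * P[s, policy[s]] @ V
                              for s in range(n_states)])
            delta = np.max(np.abs(V_new - V))
            V = V_new
            if delta < theta:
                break

        # --- Policy improvement: act greedily w.r.t. the current values ---
        new_policy = np.argmax(R + gamma * (P @ V), axis=1)

        if np.array_equal(new_policy, policy):  # stable => optimal
            return policy, V
        policy = new_policy
```

The improvement step is the same one-step greedy look-ahead as above, and the loop must terminate: by the policy improvement theorem, each improvement yields a strictly better policy until the greedy policy stops changing, at which point it is optimal.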