
[Reinforcement Learning] Q-Learning Explained in Detail
1. Algorithm Idea
Q-Learning is a value-based reinforcement learning algorithm that can be viewed as a sample-based form of value iteration. Q stands for Q(s, a): the expected return of taking action a (a ∈ A) in state s (s ∈ S) at a given time step. After each action the environment feeds back a reward r to the agent. The core idea of the algorithm is therefore to build a Q-table indexed by state and action to store the Q values, and then to select actions according to those Q values so as to obtain the larger expected return.
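To make the idea concrete, the sketch below is a minimal tabular Q-Learning loop. It is an illustrative sketch rather than code from this post: the environment object env and its reset()/step() interface (gym-style) are assumptions made purely for the example.

import random
from collections import defaultdict

# Minimal tabular Q-Learning sketch. `env` is assumed to follow a gym-style
# interface (reset() -> state, step(a) -> (next_state, reward, done));
# it is NOT the Env class defined later in this post.
def q_learning(env, actions, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q_table = defaultdict(float)            # Q(s, a), default 0.0

    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection from the Q-table
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q_table[(state, a)])

            next_state, reward, done = env.step(action)

            # Q-Learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            best_next = max(q_table[(next_state, a)] for a in actions)
            td_target = reward + gamma * best_next
            q_table[(state, action)] += alpha * (td_target - q_table[(state, action)])

            state = next_state

    return q_table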

2. Formula Derivation
Take a GridWorld game as an example: starting from a start cell, reaching the goal counts as a win and falling into a trap counts as a loss. With an agent, environment states, rewards and actions, the problem can be abstracted as a Markov decision process. Every grid cell is a state $s_t$; $\pi(a|s)$ is the policy of taking action a (a ∈ A) in state s; $P(s'|s,a)$ is the probability of transitioning to the next state s' after choosing action a in state s; and $R(s'|s,a)$ is the reward received for transitioning to s' by taking action a in state s. Our goal is clear: find a policy that reaches the goal while collecting the maximum reward.
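With this notation, the standard Bellman definitions of the state value and action value under a policy π (stated here for reference) are

$$V^{\pi}(s) = \sum_{a} \pi(a|s) \sum_{s'} P(s'|s,a)\big[R(s'|s,a) + \gamma V^{\pi}(s')\big], \qquad Q^{\pi}(s,a) = \sum_{s'} P(s'|s,a)\big[R(s'|s,a) + \gamma V^{\pi}(s')\big],$$

where γ ∈ [0, 1) is the discount factor that weighs future rewards against immediate ones.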

So the objective is to find the policy that maximizes the expected cumulative (discounted) reward:
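$$\pi^{*} = \arg\max_{\pi}\; \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1}\Big]$$

This is the standard discounted formulation. Two standard equations go with it. The Bellman optimality equation, which the value-iteration code below sweeps over the grid, is

$$V^{*}(s) = \max_{a} \sum_{s'} P(s'|s,a)\big[R(s'|s,a) + \gamma V^{*}(s')\big],$$

and the Q-Learning update, which approximates the same optimality target from sampled transitions (s, a, r, s') without needing the model P and R, is

$$Q(s,a) \leftarrow Q(s,a) + \alpha\big[r + \gamma \max_{a'} Q(s',a') - Q(s,a)\big],$$

where α is the learning rate.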

3. Implementation Code

Value iteration part

# -*- coding: utf-8 -*-
from environment import GraphicDisplay, Env


class ValueIteration:
    def __init__(self, env):
        self.env = env
        # 2-d list for the value function
        self.value_table = [[0.0] * env.width for _ in range(env.height)]
        self.discount_factor = 0.9

    # get next value function table from the current value function table
    def value_iteration(self):
        next_value_table = [[0.0] * self.env.width
                            for _ in range(self.env.height)]
        for state in self.env.get_all_states():
            if state == [2, 2]:
                next_value_table[state[0]][state[1]] = 0.0
                continue
            value_list = []

            for action in self.env.possible_actions:
                next_state = self.env.state_after_action(state, action)
                reward = self.env.get_reward(state, action)
                next_value = self.get_value(next_state)
                value_list.append((reward + self.discount_factor * next_value))
            # keep the maximum value (it is the optimality equation!!)
            next_value_table[state[0]][state[1]] = round(max(value_list), 2)
        self.value_table = next_value_table

    # get action according to the current value function table
    def get_action(self, state):
        action_list = []
        max_value = -99999

        if state == [2, 2]:
            return []

        # calculating q values for all the actions and
        # appending to the action list the actions with maximum q value
        for action in self.env.possible_actions:
            next_state = self.env.state_after_action(state, action)
            reward = self.env.get_reward(state, action)
            next_value = self.get_value(next_state)
            value = (reward + self.discount_factor * next_value)

            if value > max_value:
                action_list.clear()
                action_list.append(action)
                max_value = value
            elif value == max_value:
                action_list.append(action)

        return action_list

    def get_value(self, state):
        return round(self.value_table[state[0]][state[1]], 2)


if __name__ == "__main__":
    env = Env()
    value_iteration = ValueIteration(env)
    grid_world = GraphicDisplay(value_iteration)
    grid_world.mainloop()
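The key line in value_iteration() is next_value_table[state[0]][state[1]] = round(max(value_list), 2): for this deterministic grid it is exactly the max-over-actions Bellman backup from Section 2, and get_action() later reads the greedy policy out of the converged table. As a hypothetical headless alternative to driving everything through the Tk GUI defined next (assuming the listing above is saved as value_iteration.py next to environment.py), the table can also be converged directly:

# Hypothetical headless usage of the ValueIteration class above
# (assumes the script is importable as value_iteration.py and that
# environment.py provides Env as in the next listing).
from environment import Env
from value_iteration import ValueIteration

env = Env()
agent = ValueIteration(env)
for _ in range(50):              # sweeps; this 5x5 grid converges well before 50
    agent.value_iteration()
print(agent.get_value([0, 0]))   # converged value of the start state
print(agent.get_action([0, 0]))  # greedy action(s) at the start state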

  

Environment part (grid-world GUI)

import tkinter as tk
import time
import numpy as np
import random
from PIL import ImageTk, Image

PhotoImage = ImageTk.PhotoImage
UNIT = 100  # pixels
HEIGHT = 5  # grid height
WIDTH = 5  # grid width
TRANSITION_PROB = 1
POSSIBLE_ACTIONS = [0, 1, 2, 3]  # up, down, left, right
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # actions in coordinates
REWARDS = []


class GraphicDisplay(tk.Tk):
    def __init__(self, value_iteration):
        super(GraphicDisplay, self).__init__()
        self.title('Value Iteration')
        self.geometry('{0}x{1}'.format(HEIGHT * UNIT, HEIGHT * UNIT + 50))
        self.texts = []
        self.arrows = []
        self.env = Env()
        self.agent = value_iteration
        self.iteration_count = 0
        self.improvement_count = 0
        self.is_moving = 0
        (self.up, self.down, self.left,
         self.right), self.shapes = self.load_images()
        self.canvas = self._build_canvas()
        self.text_reward(2, 2, "R : 1.0")
        self.text_reward(1, 2, "R : -1.0")
        self.text_reward(2, 1, "R : -1.0")

    def _build_canvas(self):
        canvas = tk.Canvas(self, bg='white',
                           height=HEIGHT * UNIT,
                           width=WIDTH * UNIT)
        # buttons
        iteration_button = tk.Button(self, text="Calculate",
                                     command=self.calculate_value)
        iteration_button.configure(width=10, activebackground="#33B5E5")
        canvas.create_window(WIDTH * UNIT * 0.13, (HEIGHT * UNIT) + 10,
                             window=iteration_button)

        policy_button = tk.Button(self, text="Print Policy",
                                  command=self.print_optimal_policy)
        policy_button.configure(width=10, activebackground="#33B5E5")
        canvas.create_window(WIDTH * UNIT * 0.37, (HEIGHT * UNIT) + 10,
                             window=policy_button)

        policy_button = tk.Button(self, text="Move",
                                  command=self.move_by_policy)
        policy_button.configure(width=10, activebackground="#33B5E5")
        canvas.create_window(WIDTH * UNIT * 0.62, (HEIGHT * UNIT) + 10,
                             window=policy_button)

        policy_button = tk.Button(self, text="Clear", command=self.clear)
        policy_button.configure(width=10, activebackground="#33B5E5")
        canvas.create_window(WIDTH * UNIT * 0.87, (HEIGHT * UNIT) + 10,
                             window=policy_button)

        # create grids
        for col in range(0, WIDTH * UNIT, UNIT):  # 0~400 by 100
            x0, y0, x1, y1 = col, 0, col, HEIGHT * UNIT
            canvas.create_line(x0, y0, x1, y1)
        for row in range(0, HEIGHT * UNIT, UNIT):  # 0~400 by 100
            x0, y0, x1, y1 = 0, row, HEIGHT * UNIT, row
            canvas.create_line(x0, y0, x1, y1)

        # add img to canvas
        self.rectangle = canvas.create_image(50, 50, image=self.shapes[0])
        canvas.create_image(250, 150, image=self.shapes[1])
        canvas.create_image(150, 250, image=self.shapes[1])
        canvas.create_image(250, 250, image=self.shapes[2])

        # pack all
        canvas.pack()

        return canvas

    def load_images(self):
        PhotoImage = ImageTk.PhotoImage
        up = PhotoImage(Image.open("../img/up.png").resize((13, 13)))
        right = PhotoImage(Image.open("../img/right.png").resize((13, 13)))
        left = PhotoImage(Image.open("../img/left.png").resize((13, 13)))
        down = PhotoImage(Image.open("../img/down.png").resize((13, 13)))
        rectangle = PhotoImage(
            Image.open("../img/rectangle.png").resize((65, 65)))
        triangle = PhotoImage(
            Image.open("../img/triangle.png").resize((65, 65)))
        circle = PhotoImage(Image.open("../img/circle.png").resize((65, 65)))
        return (up, down, left, right), (rectangle, triangle, circle)

    def clear(self):
        if self.is_moving == 0:
            self.iteration_count = 0
            self.improvement_count = 0
            for i in self.texts:
                self.canvas.delete(i)

            for i in self.arrows:
                self.canvas.delete(i)

            self.agent.value_table = [[0.0] * WIDTH for _ in range(HEIGHT)]

            x, y = self.canvas.coords(self.rectangle)
            self.canvas.move(self.rectangle, UNIT / 2 - x, UNIT / 2 - y)

    def reset(self):
        self.update()
        time.sleep(0.5)
        self.canvas.delete(self.rectangle)
        return self.canvas.coords(self.rectangle)

    def text_value(self, row, col, contents, font='Helvetica', size=12,
                   style='normal', anchor="nw"):
        origin_x, origin_y = 85, 70
        x, y = origin_y + (UNIT * col), origin_x + (UNIT * row)
        font = (font, str(size), style)
        text = self.canvas.create_text(x, y, fill="black", text=contents,
                                       font=font, anchor=anchor)
        return self.texts.append(text)

    def text_reward(self, row, col, contents, font='Helvetica', size=12,
                    style='normal', anchor="nw"):
        origin_x, origin_y = 5, 5
        x, y = origin_y + (UNIT * col), origin_x + (UNIT * row)
        font = (font, str(size), style)
        text = self.canvas.create_text(x, y, fill="black", text=contents,
                                       font=font, anchor=anchor)
        return self.texts.append(text)

    def rectangle_move(self, action):
        base_action = np.array([0, 0])
        location = self.find_rectangle()
        self.render()
        if action == 0 and location[0] > 0:  # up
            base_action[1] -= UNIT
        elif action == 1 and location[0] < HEIGHT - 1:  # down
            base_action[1] += UNIT
        elif action == 2 and location[1] > 0:  # left
            base_action[0] -= UNIT
        elif action == 3 and location[1] < WIDTH - 1:  # right
            base_action[0] += UNIT

        self.canvas.move(self.rectangle, base_action[0],
                         base_action[1])  # move agent

    def find_rectangle(self):
        temp = self.canvas.coords(self.rectangle)
        x = (temp[0] / 100) - 0.5
        y = (temp[1] / 100) - 0.5
        return int(y), int(x)

    def move_by_policy(self):
        if self.improvement_count != 0 and self.is_moving != 1:
            self.is_moving = 1
            x, y = self.canvas.coords(self.rectangle)
            self.canvas.move(self.rectangle, UNIT / 2 - x, UNIT / 2 - y)

            x, y = self.find_rectangle()
            while len(self.agent.get_action([x, y])) != 0:
                action = random.sample(self.agent.get_action([x, y]), 1)[0]
                self.after(100, self.rectangle_move(action))
                x, y = self.find_rectangle()
            self.is_moving = 0

    def draw_one_arrow(self, col, row, action):
        if col == 2 and row == 2:
            return
        if action == 0:  # up
            origin_x, origin_y = 50 + (UNIT * row), 10 + (UNIT * col)
            self.arrows.append(self.canvas.create_image(origin_x, origin_y,
                                                        image=self.up))
        elif action == 1:  # down
            origin_x, origin_y = 50 + (UNIT * row), 90 + (UNIT * col)
            self.arrows.append(self.canvas.create_image(origin_x, origin_y,
                                                        image=self.down))
        elif action == 3:  # right
            origin_x, origin_y = 90 + (UNIT * row), 50 + (UNIT * col)
            self.arrows.append(self.canvas.create_image(origin_x, origin_y,
                                                        image=self.right))
        elif action == 2:  # left
            origin_x, origin_y = 10 + (UNIT * row), 50 + (UNIT * col)
            self.arrows.append(self.canvas.create_image(origin_x, origin_y,
                                                        image=self.left))

    def draw_from_values(self, state, action_list):
        i = state[0]
        j = state[1]
        for action in action_list:
            self.draw_one_arrow(i, j, action)

    def print_values(self, values):
        for i in range(WIDTH):
            for j in range(HEIGHT):
                self.text_value(i, j, values[i][j])

    def render(self):
        time.sleep(0.1)
        self.canvas.tag_raise(self.rectangle)
        self.update()

    def calculate_value(self):
        self.iteration_count += 1
        for i in self.texts:
            self.canvas.delete(i)
        self.agent.value_iteration()
        self.print_values(self.agent.value_table)

    def print_optimal_policy(self):
        self.improvement_count += 1
        for i in self.arrows:
            self.canvas.delete(i)
        for state in self.env.get_all_states():
            action = self.agent.get_action(state)
            self.draw_from_values(state, action)


class Env:
    def __init__(self):
        self.transition_probability = TRANSITION_PROB
        self.width = WIDTH  # Width of Grid World
        self.height = HEIGHT  # Height of GridWorld
        self.reward = [[0] * WIDTH for _ in range(HEIGHT)]
        self.possible_actions = POSSIBLE_ACTIONS
        self.reward[2][2] = 1  # reward 1 for circle
        self.reward[1][2] = -1  # reward -1 for triangle
        self.reward[2][1] = -1  # reward -1 for triangle
        self.all_state = []

        for x in range(WIDTH):
            for y in range(HEIGHT):
                state = [x, y]
                self.all_state.append(state)

    def get_reward(self, state, action):
        next_state = self.state_after_action(state, action)
        return self.reward[next_state[0]][next_state[1]]

    def state_after_action(self, state, action_index):
        action = ACTIONS[action_index]
        return self.check_boundary([state[0] + action[0], state[1] + action[1]])

    @staticmethod
    def check_boundary(state):
        state[0] = (0 if state[0] < 0 else WIDTH - 1
                    if state[0] > WIDTH - 1 else state[0])
        state[1] = (0 if state[1] < 0 else HEIGHT - 1
                    if state[1] > HEIGHT - 1 else state[1])
        return state

    def get_transition_prob(self, state, action):
        return self.transition_probability

    def get_all_states(self):
        return self.all_state
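A few notes on the Env class: transitions are deterministic (TRANSITION_PROB = 1), check_boundary clamps moves at the edge of the 5x5 grid so the agent simply stays in place when it would leave the board, and get_reward returns the reward of the cell the action lands on (+1 for the circle at [2, 2], -1 for the two triangles). To run the GUI you also need Pillow installed and the arrow/shape PNG files that load_images reads from the relative ../img/ directory.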

Reposted from https://blog.csdn.net/qq_30615903/article/details/80739243
