Deep Q Learning

We use gym's CartPole as the environment and DQN to solve this discrete-action-space problem.
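As a quick orientation (a minimal sketch, not part of the agent code below), the two environment quantities the DQN constructor reads can be inspected like this:

import gym

env = gym.make('CartPole-v1')
print(env.observation_space.shape[0])  # state dimension (4 for CartPole)
print(env.action_space.n)              # number of discrete actions (2 for CartPole)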

1. Import the required packages and define the hyperparameters

import tensorflow as tf
import numpy as np
import gym
import time
import random
from collections import deque

##################### hyper parameters ####################
# Hyper parameters for DQN
GAMMA = 0.9             # discount factor for target Q
INITIAL_EPSILON = 0.5   # starting value of epsilon
FINAL_EPSILON = 0.01    # final value of epsilon
REPLAY_SIZE = 10000     # experience replay buffer size
BATCH_SIZE = 32         # size of minibatch

2. The DQN constructor

1. Initialize the experience replay buffer;

2. Set the dimensions of the problem's state space and action space;

3. Set the epsilon for ε-greedy exploration;

4. Create the Q network used to estimate Q values, and create the training method;

5. Initialize the TensorFlow session.

def __init__(self, env):
    # init experience replay
    self.replay_buffer = deque()
    # init some parameters
    self.time_step = 0
    self.epsilon = INITIAL_EPSILON
    self.state_dim = env.observation_space.shape[0]
    self.action_dim = env.action_space.n
    # build the Q network and its training op
    self.create_Q_network()
    self.create_training_method()
    # init session
    self.session = tf.InteractiveSession()
    self.session.run(tf.global_variables_initializer())

3. Building the neural network

Create a three-layer fully connected network (input, one hidden layer, output); the hidden layer has 20 neurons.
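In formula form, the forward pass implemented below computes the Q values of all actions in one shot:

    Q(s) = \mathrm{ReLU}(s W_1 + b_1)\, W_2 + b_2, \qquad W_1 \in \mathbb{R}^{\text{state\_dim} \times 20}, \; W_2 \in \mathbb{R}^{20 \times \text{action\_dim}}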

def create_Q_network(self):
    # network weights
    W1 = self.weight_variable([self.state_dim, 20])
    b1 = self.bias_variable([20])
    W2 = self.weight_variable([20, self.action_dim])
    b2 = self.bias_variable([self.action_dim])
    # input layer
    self.state_input = tf.placeholder("float", [None, self.state_dim])
    # hidden layer
    h_layer = tf.nn.relu(tf.matmul(self.state_input, W1) + b1)
    # Q value layer
    self.Q_value = tf.matmul(h_layer, W2) + b2

def weight_variable(self, shape):
    initial = tf.truncated_normal(shape)
    return tf.Variable(initial)

def bias_variable(self, shape):
    initial = tf.constant(0.01, shape=shape)
    return tf.Variable(initial)

Define the cost function and the optimization method so that the difference between the "actual" Q value y and the Q value estimated by the current network is as small as possible, i.e. so that the current network approaches the true Q values as closely as possible.

def create_training_method(self):
    self.action_input = tf.placeholder("float", [None, self.action_dim])  # one-hot representation
    self.y_input = tf.placeholder("float", [None])
    Q_action = tf.reduce_sum(tf.multiply(self.Q_value, self.action_input), reduction_indices=1)
    self.cost = tf.reduce_mean(tf.square(self.y_input - Q_action))
    self.optimizer = tf.train.AdamOptimizer(0.0001).minimize(self.cost)
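In equation form, the training method above minimizes the mean squared Bellman error. Because this implementation uses a single network (no separate target network), the bootstrap target is computed with the same parameters \theta:

    L(\theta) = \mathbb{E}\left[ (y - Q(s, a; \theta))^2 \right], \qquad
    y = \begin{cases} r & \text{if } s' \text{ is terminal} \\ r + \gamma \max_{a'} Q(s', a'; \theta) & \text{otherwise} \end{cases}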

Sample a minibatch of BATCH_SIZE transitions from the buffer and compute y, the "actual" Q value of each (s, a) in the batch under the current network:

if done:
    y_batch.append(reward_batch[i])
else:
    y_batch.append(reward_batch[i] + GAMMA * np.max(Q_value_batch[i]))

def train_Q_network(self):
    self.time_step += 1
    # Step 1: obtain random minibatch from replay memory
    minibatch = random.sample(self.replay_buffer, BATCH_SIZE)
    state_batch = [data[0] for data in minibatch]
    action_batch = [data[1] for data in minibatch]
    reward_batch = [data[2] for data in minibatch]
    next_state_batch = [data[3] for data in minibatch]
    # Step 2: calculate y
    y_batch = []
    Q_value_batch = self.Q_value.eval(feed_dict={self.state_input: next_state_batch})
    for i in range(0, BATCH_SIZE):
        done = minibatch[i][4]
        if done:
            y_batch.append(reward_batch[i])
        else:
            y_batch.append(reward_batch[i] + GAMMA * np.max(Q_value_batch[i]))
    # Step 3: run one optimization step
    self.optimizer.run(feed_dict={
        self.y_input: y_batch,
        self.action_input: action_batch,
        self.state_input: state_batch
    })

4. The agent's interface to the environment

At each decision step the agent takes an action, receives the environment's feedback, and stores (s, a, r, s_, done) in the experience replay buffer. Training starts once the buffer holds more than BATCH_SIZE transitions.

def perceive(self, state, action, reward, next_state, done):
    one_hot_action = np.zeros(self.action_dim)
    one_hot_action[action] = 1
    self.replay_buffer.append((state, one_hot_action, reward, next_state, done))
    if len(self.replay_buffer) > REPLAY_SIZE:
        self.replay_buffer.popleft()
    if len(self.replay_buffer) > BATCH_SIZE:
        self.train_Q_network()

5. Action selection

There are two ways to select an action: greedy (used for evaluation) and ε-greedy (used for training).
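Formally, the ε-greedy policy used during training is

    \pi(s) = \begin{cases} \text{a uniformly random action} & \text{with probability } \epsilon \\ \arg\max_a Q(s, a; \theta) & \text{with probability } 1 - \epsilon \end{cases}

while the greedy policy used for evaluation always takes \arg\max_a Q(s, a; \theta).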

def egreedy_action(self, state):
    Q_value = self.Q_value.eval(feed_dict={
        self.state_input: [state]
    })[0]
    if random.random() <= self.epsilon:
        action = random.randint(0, self.action_dim - 1)
    else:
        action = np.argmax(Q_value)
    # anneal epsilon linearly, but do not let it drop below FINAL_EPSILON
    self.epsilon = max(self.epsilon - (INITIAL_EPSILON - FINAL_EPSILON) / 10000, FINAL_EPSILON)
    return action

def action(self, state):
    return np.argmax(self.Q_value.eval(feed_dict={
        self.state_input: [state]
    })[0])

The complete agent code:

DQN.py

import tensorflow as tf
import numpy as np
import gym
import time
import random
from collections import deque

##################### hyper parameters ####################
# Hyper parameters for DQN
GAMMA = 0.9             # discount factor for target Q
INITIAL_EPSILON = 0.5   # starting value of epsilon
FINAL_EPSILON = 0.01    # final value of epsilon
REPLAY_SIZE = 10000     # experience replay buffer size
BATCH_SIZE = 32         # size of minibatch

############################### DQN ####################################
class DQN():
    # DQN Agent
    def __init__(self, env):
        # init experience replay
        self.replay_buffer = deque()
        # init some parameters
        self.time_step = 0
        self.epsilon = INITIAL_EPSILON
        self.state_dim = env.observation_space.shape[0]
        self.action_dim = env.action_space.n
        # build the Q network and its training op
        self.create_Q_network()
        self.create_training_method()
        # init session
        self.session = tf.InteractiveSession()
        self.session.run(tf.global_variables_initializer())

    def create_Q_network(self):
        # network weights
        W1 = self.weight_variable([self.state_dim, 20])
        b1 = self.bias_variable([20])
        W2 = self.weight_variable([20, self.action_dim])
        b2 = self.bias_variable([self.action_dim])
        # input layer
        self.state_input = tf.placeholder("float", [None, self.state_dim])
        # hidden layer
        h_layer = tf.nn.relu(tf.matmul(self.state_input, W1) + b1)
        # Q value layer
        self.Q_value = tf.matmul(h_layer, W2) + b2

    def create_training_method(self):
        self.action_input = tf.placeholder("float", [None, self.action_dim])  # one-hot representation
        self.y_input = tf.placeholder("float", [None])
        Q_action = tf.reduce_sum(tf.multiply(self.Q_value, self.action_input), reduction_indices=1)
        self.cost = tf.reduce_mean(tf.square(self.y_input - Q_action))
        self.optimizer = tf.train.AdamOptimizer(0.0001).minimize(self.cost)

    def perceive(self, state, action, reward, next_state, done):
        one_hot_action = np.zeros(self.action_dim)
        one_hot_action[action] = 1
        self.replay_buffer.append((state, one_hot_action, reward, next_state, done))
        if len(self.replay_buffer) > REPLAY_SIZE:
            self.replay_buffer.popleft()
        if len(self.replay_buffer) > BATCH_SIZE:
            self.train_Q_network()

    def train_Q_network(self):
        self.time_step += 1
        # Step 1: obtain random minibatch from replay memory
        minibatch = random.sample(self.replay_buffer, BATCH_SIZE)
        state_batch = [data[0] for data in minibatch]
        action_batch = [data[1] for data in minibatch]
        reward_batch = [data[2] for data in minibatch]
        next_state_batch = [data[3] for data in minibatch]
        # Step 2: calculate y
        y_batch = []
        Q_value_batch = self.Q_value.eval(feed_dict={self.state_input: next_state_batch})
        for i in range(0, BATCH_SIZE):
            done = minibatch[i][4]
            if done:
                y_batch.append(reward_batch[i])
            else:
                y_batch.append(reward_batch[i] + GAMMA * np.max(Q_value_batch[i]))
        # Step 3: run one optimization step
        self.optimizer.run(feed_dict={
            self.y_input: y_batch,
            self.action_input: action_batch,
            self.state_input: state_batch
        })

    def egreedy_action(self, state):
        Q_value = self.Q_value.eval(feed_dict={
            self.state_input: [state]
        })[0]
        if random.random() <= self.epsilon:
            action = random.randint(0, self.action_dim - 1)
        else:
            action = np.argmax(Q_value)
        # anneal epsilon linearly, but do not let it drop below FINAL_EPSILON
        self.epsilon = max(self.epsilon - (INITIAL_EPSILON - FINAL_EPSILON) / 10000, FINAL_EPSILON)
        return action

    def action(self, state):
        return np.argmax(self.Q_value.eval(feed_dict={
            self.state_input: [state]
        })[0])

    def weight_variable(self, shape):
        initial = tf.truncated_normal(shape)
        return tf.Variable(initial)

    def bias_variable(self, shape):
        initial = tf.constant(0.01, shape=shape)
        return tf.Variable(initial)

Training the agent:

train.py

from DQN import DQN
import gym
import numpy as np
import time

ENV_NAME = 'CartPole-v1'
EPISODE = 3000  # Episode limitation
STEP = 300      # Step limitation in an episode
TEST = 10       # number of test episodes run every 100 training episodes

def main():
    # initialize OpenAI Gym env and dqn agent
    env = gym.make(ENV_NAME)
    agent = DQN(env)

    for episode in range(EPISODE):
        # initialize task
        state = env.reset()
        # Train
        ep_reward = 0
        for step in range(STEP):
            action = agent.egreedy_action(state)  # e-greedy action for training
            next_state, reward, done, _ = env.step(action)
            # Define reward for agent
            reward = -10 if done else 1
            ep_reward += reward
            agent.perceive(state, action, reward, next_state, done)
            state = next_state
            if done:
                # print('episode complete, reward: ', ep_reward)
                break
        # Test every 100 episodes
        if episode % 100 == 0:
            total_reward = 0
            for i in range(TEST):
                state = env.reset()
                for j in range(STEP):
                    # env.render()
                    action = agent.action(state)  # greedy action for evaluation
                    state, reward, done, _ = env.step(action)
                    total_reward += reward
                    if done:
                        break
            ave_reward = total_reward / TEST
            print('episode: ', episode, 'Evaluation Average Reward: ', ave_reward)

if __name__ == '__main__':
    main()

References:

https://www.cnblogs.com/pinard/p/9714655.html

https://github.com/ljpzzz/machinelearning/blob/master/reinforcement-learning/dqn.py
