Playing FPS games with deep reinforcement learning

Reposted from: https://blog.acolyer.org/2016/11/23/playing-fps-games-with-deep-reinforcement-learning/

When I wrote up ‘Asynchronous methods for deep reinforcement learning’ last month, I made a throwaway remark that after Go, the next challenge for deep learning systems would be to win an esports competition against the best human teams. Can you imagine the theatre!

Source: ‘League of Legends’ video game championship is like the World Cup, Super Bowl combined – Fortune: http://fortune.com/2015/10/29/league-of-legends-video-game-championship/

Since those are team competitions, it would need to be a team of collaborating software agents playing against human teams. Which would make for some very cool AI technology.

Today’s paper isn’t quite at that level yet, but it does show that progress is already being made on playing first-person shooter (FPS) games in 3D environments.

In this paper, we tackle the task of playing an FPS game in a 3D environment. This task is much more challenging than playing most Atari games as it involves a wide variety of skills, such as navigating through a map, collecting items, recognizing and fighting enemies, etc. Furthermore, states are partially observable, and the agent navigates a 3D environment in a first-person perspective, which makes the task more suitable for real-world robotics applications.

Lample and Chaplot develop an AI agent for playing death matches. I’m not really an FPS kind of person, and had no idea what a deathmatch was. It turns out to be a scenario in which the objective is to maximize the number of kills by a player/agent. Nice. The agent uses separate neural networks for navigation tasks and for action tasks. Experimentation is done using the VizDoom framework for developing AI bots that play Doom. It turns out there’s even been a recent VizDoom competition, with the ‘full deathmatch’ category won by a team from Intel Labs. Here’s a video of their entry in action:

Deep Recurrent Q-Networks

The core of the system is built on a DRQN architecture (Deep Recurrent Q-Network). A regular deep Q-network, such as that used to play Atari games, receives a full (or very close to full) observation of the environment at each step. But in a game like DOOM, where the agent's field of view is limited to 90 degrees centred around its position, it only receives a partial observation.

In 2015, Hausknecht and Stone introduced Deep Recurrent Q-Networks, which include an extra parameter at each step representing the hidden state of the agent. This can be accomplished by layering a recurrent neural network such as an LSTM on top of a normal DQN.
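To make the idea concrete, here is a minimal PyTorch sketch of a DRQN along these lines. It is not the authors' exact architecture: the layer sizes, RGB input, and the 84×84 resolution in the usage example are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Minimal DRQN sketch: CNN encoder, LSTM over time, Q-value head."""
    def __init__(self, n_actions, hidden_size=512):
        super().__init__()
        # Convolutional encoder for a single RGB game frame (sizes are assumptions).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten(),
        )
        # The LSTM carries the agent's hidden state across partially observed frames.
        self.lstm = nn.LSTM(input_size=64 * 7 * 7, hidden_size=hidden_size,
                            batch_first=True)
        self.q_head = nn.Linear(hidden_size, n_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, channels, height, width)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        out, hidden = self.lstm(feats, hidden)
        # One vector of Q-values per time step, plus the updated hidden state.
        return self.q_head(out), hidden

# Usage: a batch of 2 sequences of 8 frames at an assumed 84x84 resolution.
net = DRQN(n_actions=8)
q_values, state = net(torch.zeros(2, 8, 3, 84, 84))
```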

Two models

In a deathmatch, you need to explore the map to collect items and find enemies, and then you need to fight enemies when you find them. Lample and Chaplot use two networks, one for navigation, and one for action. The current phase of the game (and hence which model to use at any given time) is determined by predicting whether or not an enemy is visible in the current frame (action model if so, navigation model otherwise).

There are various advantages of splitting the task into two phases and training a different network for each phase. First, this makes the architecture modular and allows different models to be trained and tested independently… Furthermore, the navigation phase only requires three actions (move forward, turn left, and turn right), which dramatically reduces the number of state-action pairs required to learn the Q-function and makes training much faster. More importantly, using two networks also mitigates ‘camper’ behaviour, i.e. the tendency to stay in one area of the map and wait for enemies, which was exhibited by the agent when we tried to train a single DQN or DRQN for the deathmatch task.
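As a rough sketch of how this phase switch could be wired up, assuming the enemy detector and the two networks already exist (`enemy_visible`, `action_net`, and `navigation_net` are hypothetical placeholders, not names from the paper):

```python
def select_action(frame, hidden_action, hidden_nav,
                  action_net, navigation_net, enemy_visible):
    """Route the current frame to the action or the navigation network.

    Following the paper's idea: use the action network when an enemy is
    predicted to be visible in the frame, otherwise navigate. All of the
    arguments here are hypothetical placeholders.
    """
    if enemy_visible(frame):
        q_values, hidden_action = action_net(frame, hidden_action)
    else:
        q_values, hidden_nav = navigation_net(frame, hidden_nav)
    action = q_values.argmax()  # greedy choice, for illustration only
    return action, hidden_action, hidden_nav
```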

Training

When trained using a vanilla DRQN approach, agents tended either to fire at will, hoping for enemies to wander into their crossfire, or not fire at all when given a penalty for using ammunition. This is because the agent could not effectively learn to detect enemies. To address this, the team gave the agent additional information that it could use during training (but not during actual gameplay or testing). At each training step, in addition to receiving a video frame, the agent received a boolean value for each entity (enemy, health pack, weapon, ammo and so on) indicating whether or not it appeared in the frame.

We modified the DRQN architecture to incorporate this information and to make it sensitive to game features. In the initial model, the output of the CNN is given to an LSTM that predicts a score for each action based on the current frame and its hidden state. We added two fully-connected layers of size 512 and k, connected to the output of the CNN, where k is the number of game features we want to detect… Although a lot of game information was available, we only used an indicator about the presence of enemies on the current frame.

Jointly training the DRQN model and the game feature detection allows the kernels of the convolutional layers to capture relevant information about the game with only a few hours of training needed to reach an optimal enemy detection accuracy of 90%.
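A hedged sketch of what this joint objective could look like: an auxiliary head of two fully-connected layers (512 and k units, as quoted above) sits on the CNN output, and its supervised loss is added to the usual temporal-difference loss. The loss weighting and the Huber/binary-cross-entropy choices are assumptions, not details taken from the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class GameFeatureHead(nn.Module):
    """Auxiliary head on the CNN output predicting k game-feature indicators."""
    def __init__(self, cnn_dim, k=1, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(cnn_dim, hidden), nn.ReLU(), nn.Linear(hidden, k))

    def forward(self, cnn_features):
        # One logit per feature, e.g. "an enemy is present in this frame".
        return self.net(cnn_features)

def joint_loss(q_pred, q_target, feature_logits, feature_labels, aux_weight=1.0):
    """Temporal-difference loss plus supervised loss on the feature indicators."""
    td_loss = F.smooth_l1_loss(q_pred, q_target)
    aux_loss = F.binary_cross_entropy_with_logits(feature_logits, feature_labels)
    return td_loss + aux_weight * aux_loss
```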

The reward function for the action network includes:

  • positive rewards for kills
  • negative rewards for suicides
  • positive rewards for picking up objects
  • negative rewards for losing health
  • negative rewards for shooting or losing ammo

The navigation network was simply given a positive reward for picking up an item, and a negative reward for walking on lava.
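A minimal sketch of how such reward shaping might be computed from per-step changes in the game variables; the coefficients below are invented for illustration and are not the paper's values.

```python
def action_reward(delta, shot_fired):
    """Shaped reward for the action network from one step's state changes.

    `delta` is assumed to hold per-step differences in game variables
    (kills, suicides, items picked up, health, ammo). All coefficients
    are illustrative placeholders.
    """
    reward = 0.0
    reward += 1.0 * delta["kills"]              # positive reward for kills
    reward -= 1.0 * delta["suicides"]           # negative reward for suicides
    reward += 0.1 * delta["items_picked_up"]    # positive reward for object pickups
    reward -= 0.1 * max(0.0, -delta["health"])  # negative reward for losing health
    if shot_fired:
        reward -= 0.05                          # negative reward for using ammo
    return reward

def navigation_reward(delta, on_lava):
    """Navigation network: reward item pickups, penalise walking on lava."""
    reward = 0.5 * delta["items_picked_up"]
    if on_lava:
        reward -= 0.5
    return reward
```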

A frame-skip of 4 turned out to give the best overall balance between training speed and performance (the agent receives a screen input every 4+1 frames, and the action decided by the network is repeated over all the skipped frames). During back-propagation, only action states with enough history to give a reasonable estimation are updated.
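For illustration, a generic frame-skip wrapper along these lines could look as follows; the `(frame, reward, done)` environment interface is a hypothetical stand-in, not VizDoom's actual API.

```python
def step_with_frame_skip(env, action, skip=4):
    """Repeat `action` over the skipped frames and accumulate the reward.

    With skip=4 the agent only sees a new screen every 4+1 frames. The
    environment is assumed to expose a (frame, reward, done) step API.
    """
    total_reward, done, frame = 0.0, False, None
    for _ in range(skip + 1):
        frame, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return frame, total_reward, done
```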

Fighting! (evaluation)

Evaluation is done using the delightful kill-to-death ratio (K/D) as the scoring metric. Table 2 in the paper shows how well the agent performed both on known maps (limited deathmatch) and on unknown maps (full deathmatch).

You can watch the agent play in these videos.

Here’s how it stacks up against human opposition:

The authors conclude:

In this paper, we have presented a complete architecture for playing deathmatch scenarios in FPS games. We introduced a method to augment a DRQN model with high-level game information, and modularized our architecture to incorporate independent networks responsible for different phases of the game. These methods lead to dramatic improvements over the standard DRQN model when applied to complicated tasks like a deathmatch. We showed that the proposed model is able to outperform built-in bots as well as human players and demonstrated the generalizability of our model to unknown maps. Moreover, our methods are complementary to recent improvements in DQN, and could easily be combined with dueling architectures (Wang, de Freitas, and Lanctot 2015), and prioritized replay (Schaul et al. 2015).
