Project Bella
Background Research
I spent the bulk of this week searching for papers on improving the realism of NPC AI, and for successful predecessors to my research topic. I'm well aware that game development companies across the board, big and small, have been trying to create believable NPC interactions for decades, some with extraordinary success. Popular examples of existing NPC AI, particularly those that interest me, include:
- The F.E.A.R. series' Goal-Oriented Action Planning (GOAP) AI. Despite being fourteen years old, it is still widely held up as a benchmark for NPC combat AI.
- Bethesda's Radiant AI, used for the NPCs in The Elder Scrolls games.
- The AI of Trico from Fumito Ueda's The Last Guardian.
- The AI of the creature from Lionhead's Black & White, designed and implemented by Richard Evans.
- And of course, DeepMind's AlphaGo and OpenAI Five. Strictly speaking, these are game-playing agents rather than NPC AI, but they effectively filled the same role of a non-human opponent.
The AI programmers on these teams are often deeply concerned with producing a versatile AI without jeopardizing the playability of the game. This is the primary reason the vast majority of game companies have refrained from using complex machine learning models, like neural nets, to guide the behavior of their NPCs: doing so risks unpredictable, potentially game-breaking behavior.
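GOAP, the technique behind F.E.A.R.'s AI, illustrates the kind of predictable-yet-flexible approach these teams favor: each action declares preconditions and effects over boolean world facts, and the planner searches for a sequence of actions that satisfies a goal. Here is a minimal sketch in Python; the action names, preconditions, and effects are invented for illustration and are not taken from F.E.A.R. itself (which also uses A* with action costs rather than plain breadth-first search):

```python
from collections import deque

# Hypothetical actions: name -> (preconditions, effects) over boolean world facts.
ACTIONS = {
    "draw_weapon": (frozenset(),           frozenset({"armed"})),
    "reload":      (frozenset({"armed"}),  frozenset({"loaded"})),
    "take_cover":  (frozenset(),           frozenset({"in_cover"})),
    "attack":      (frozenset({"armed", "loaded", "in_cover"}),
                    frozenset({"target_down"})),
}

def plan(state, goal):
    """Breadth-first search over world states for an action sequence
    whose accumulated effects satisfy every goal fact."""
    queue = deque([(frozenset(state), [])])
    seen = {frozenset(state)}
    while queue:
        facts, actions = queue.popleft()
        if goal <= facts:              # all goal facts achieved
            return actions
        for name, (pre, eff) in ACTIONS.items():
            if pre <= facts:           # action is applicable
                nxt = facts | eff
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [name]))
    return None                        # goal unreachable

print(plan(set(), {"target_down"}))
# -> ['draw_weapon', 'reload', 'take_cover', 'attack']
```

The appeal for game studios is exactly the predictability discussed above: designers author the action set by hand, so every plan the NPC can produce is composed of vetted building blocks.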
Throughout my search, I grew frustrated by the apparent lack of documentation for the AI of commercial games. To be fair, this is probably because it's proprietary, but that doesn't help someone whose research goal is to improve the realism of existing NPC AI with machine learning. The richest descriptions of NPC AI seem to come from academic exploration (for instance, the only reason I could find anything about F.E.A.R.'s use of GOAP is that it was implemented by Jeff Orkin, then a master's student at MIT, who published a webpage with everything under the sun about it). In any case, these are some of the articles I found especially interesting (citations for the rest are below):
- K. Stanley, B. Bryant, and R. Miikkulainen, “Real-Time Neuroevolution in the NERO Video Game,” IEEE Transactions on Evolutionary Computation, vol. 9, no. 6, pp. 653–668, Dec. 2005.
- This paper discusses Neuro-Evolving Robotic Operatives, an online computer game in which players can train squadrons of intelligent robots and pit them against other users’ teams. The robots learn via a real-time variation of NEAT, a genetic algorithm that iterates on the structure of a neural network. One of the authors’ most interesting suggestions is not how real-time machine learning can be used to enhance existing game genres, but how it can be used to create a new one altogether.
- L. Galway, D. Charles, and M. Black, “Machine learning in digital games: a survey,” Artificial Intelligence Review, vol. 29, no. 2, pp. 123–161, Aug. 2008.
- This article provides an excellent survey of how various machine learning models have been used in digital (specifically non-commercial) games, and their success therein. Of particular interest to me are those games that used reinforcement learning and related methods (Q-learning, SARSA, rtNEAT, etc.).
- H. Ponce and R. Padilla, “A Hierarchical Reinforcement Learning Based Artificial Intelligence for Non-Player Characters in Video Games,” Nature-Inspired Computation and Machine Learning Lecture Notes in Computer Science, pp. 172–183, 2014.
- This article discusses using the MaxQ-Q algorithm, an implementation of the MaxQ hierarchical reinforcement learning method, to create natural and human-like NPCs. The methods were applied to NPCs in a capture-the-flag game, and were indeed found to improve the humanness of the NPCs.
- D. Wang, B. Subagdja, A. Tan, G. Ng, “Creating Human-like Autonomous Players in Real-time First Person Shooter Computer Games,” Twenty-First Innovative Applications of Artificial Intelligence Conference, Apr. 2009.
- This article describes a bot created using a real-time self-organizing neural network called FALCON, and its performance in Unreal Tournament 2004, the platform for the 2008 2K Bot Prize competition. Although the bot did not place among the top contenders, it convinced as many judges of its “humanness” as did the first-place winner.
- A. A. Sharifi, R. Zhao, D. Szafron, “Learning Companion Behaviors Using Reinforcement Learning in Games,” Sixth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, Oct. 2010.
- I found this article really interesting. The authors used the SARSA(λ) reinforcement learning algorithm to train an NPC in BioWare Corp.’s Neverwinter Nights to adapt to player behavior. More specifically, the player could ask the NPC to do something about a trap, and depending on the consequences of obeying, the NPC’s regard for the player and response to their commands would change.
- J. Wexler, “Artificial Intelligence in Games,” University of Rochester, May 2002.
- This is the most detailed article I could find (written by a student) about the AI of the creature from Black & White. Attribute-value pairs represented the creature’s attitude about individual objects, decision trees constructed in real time represented the creature’s attitude about more general objects, and neural nets guided its desires. The paper suggests possible additions to this AI to make it even more dynamic. Considering the high regard for Black & White’s creature AI, I don’t think these suggestions would be a bad place to start.
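Several of the papers above (the Galway et al. survey, the MaxQ-Q paper, the Neverwinter Nights companion, and Phon-Amnuaisuk's chasing-behaviour work) build on temporal-difference learning: the agent nudges its estimate Q(s, a) of an action's value toward the observed reward plus the discounted value of what it does next. A minimal, hypothetical sketch of the tabular SARSA update in a toy "disarm the trap?" scenario; the state, actions, rewards, and hyperparameters here are all made up for illustration and are not drawn from any of these papers:

```python
import random

# SARSA update: Q(s,a) <- Q(s,a) + alpha * (r + gamma * Q(s',a') - Q(s,a))
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
ACTIONS = ["disarm", "ignore"]
Q = {("trap_seen", a): 0.0 for a in ACTIONS}

def choose(state):
    # epsilon-greedy: explore occasionally, otherwise act greedily
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(action):
    # invented rewards: disarming helps the player, ignoring hurts them
    return 1.0 if action == "disarm" else -1.0

random.seed(0)
for _ in range(200):
    s = "trap_seen"
    a = choose(s)
    r = reward(a)
    # one-step episodes for simplicity, so the bootstrapped
    # gamma * Q(s', a') term is zero at this terminal step
    Q[(s, a)] += ALPHA * (r - Q[(s, a)])

print(max(ACTIONS, key=lambda a: Q[("trap_seen", a)]))  # prints "disarm"
```

The companion-behavior paper's setup is richer (SARSA(λ) with eligibility traces over multi-step episodes), but the core update it learns from is this one.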
Among these articles, the one that caught my interest the most was the student paper about Black & White’s creature AI. Much to my dismay and fascination, I discovered that Black & White’s creature AI had kind of accomplished 20 years ago what I’m trying to do right now. By combining the techniques described above, the AI produced a convincing and sympathetic character that could learn from observing the behavior of you or other villagers, or from acting independently and receiving feedback from the player.
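As a toy illustration of the flavor of architecture Wexler describes — per-object attitudes stored as attribute-value pairs, adjusted by player feedback, alongside weighted desires — here is a minimal sketch. Every value, weight, and object name below is invented for illustration and is not Lionhead's actual design:

```python
# Attribute-value pairs: the creature's attitude toward specific objects,
# clamped to [-1, 1]. Player feedback (stroke = +1, slap = -1) nudges them.
attitudes = {"villager_hut": 0.0, "rock": 0.0}

def feedback(obj, stroke):
    """Update the creature's attitude toward obj after player feedback."""
    attitudes[obj] = max(-1.0, min(1.0, attitudes[obj] + 0.2 * stroke))

def desire_to_eat(hunger, tastiness):
    """A perceptron-like desire: a weighted sum of internal drives.
    The weights are made up for illustration."""
    return 0.7 * hunger + 0.3 * tastiness

# The creature throws a rock; the player slaps it three times.
for _ in range(3):
    feedback("rock", -1)
print(round(attitudes["rock"], 2))  # -> -0.6
```

The real system layered real-time decision-tree induction and neural nets on top of this kind of bookkeeping, which is what made the creature feel like it generalized rather than merely memorized.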
Initially, this was discouraging. Not only had someone achieved something very similar to my research objective twenty years ago, but I could not find any later work in my research direction that improved on it with remotely comparable success. Then again, this paper, together with the paper on NERO's use of rtNEAT, led me to a new interpretation of my project.
I originally hoped to demonstrate how machine learning can be used to create sympathetic, complex, and convincing NPCs. In my mind, I envisioned such systems ultimately replacing those in massive open-world games like Skyrim and Red Dead Redemption 2; however, some of the papers I read seemed more interested in potentially inventing a totally new genre of “machine-learning games,” in which gameplay is uniquely centered around a core mechanic of non-player game elements that can change intelligently. To be honest, many games are thrilling enough without machine learning in their NPCs, which can in any case leave a game vulnerable to unpredictable, game-breaking behavior. Still, Black & White was genuinely ground-breaking, and almost ironic in its presentation: a God game whose prophet demonstrated an impressive amount of “free will.”
And to be honest, the stark difference of every kind between the game I’ve envisioned for Project Bella and those for which I’ve shamelessly hoped it would serve as a proof of concept (ahem, RDR2, Skyrim; my inflated self-image knows no bounds, I’m sorry) was not at all lost on me. I hope, then, that if I can’t necessarily speak to the NPC AI of existing genres, perhaps I can still push further into an entirely new one.
The Other 394288632 Research Articles
- G. Synnaeve, N. Nardelli, A. Auvolat, S. Chintala, T. Lacroix, Z. Lin, F. Richoux, N. Usunier, “TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games,” arXiv:1611.00625 [cs.LG], Nov. 2016.
- S. Wender, I. Watson, “Applying reinforcement learning to small scale combat in the real-time strategy game StarCraft:Broodwar,” 2012 IEEE Conference on Computational Intelligence and Games, Sept. 2012.
- D. Zhao, H. Wang, K. Shao, Y. Zhu, “Deep reinforcement learning with experience replay based on SARSA,” 2016 IEEE Symposium Series on Computational Intelligence, Dec. 2016.
- J. Oh, X. Guo, H. Lee, R. Lewis, S. Singh, “Action-Conditional Video Prediction using Deep Networks in Atari Games,” Neural Information Processing Systems 2015, 2015.
- D. Diller, W. Ferguson, A. Leung, B. Benyo, and D. Foley, “Behavior Modeling in Commercial Games,” 2020.
- S. He, F. Xie, Y. Wang, S. Luo, Y. Fu, J. Yang, Z. Liu, and Q. Zhu, “To Create Adaptive Game Opponent by Using UCT,” 2008 International Conference on Computational Intelligence for Modelling Control & Automation, 2008.
- P. G. Patel, N. Carver, S. Rahimi, “Tuning computer gaming agents using Q-learning,” 2011 Federated Conference on Computer Science and Information Systems, Sept. 2011.
- G. N. Yannakakis, “Game AI revisited,” Proceedings of the 9th Conference on Computing Frontiers (CF ’12), 2012.
- M.-V. Aponte, G. Levieux, and S. Natkin, “Measuring the level of difficulty in single player video games,” Entertainment Computing, vol. 2, no. 4, pp. 205–213, 2011.
- M. Garnelo, K. Arulkumaran, M. Shanahan, “Towards Deep Symbolic Reinforcement Learning,” arXiv:1609.05518 [cs.AI], Oct. 2016.
- F. S. He, Y. Liu, A. G. Schwing, J. Peng, “Learning to Play in a Day: Faster Deep Reinforcement Learning by Optimality Tightening,” arXiv:1611.01606 [cs.LG], Nov. 2016.
- M. Kopel and T. Hajas, “Implementing AI for Non-player Characters in 3D Video Games,” Intelligent Information and Database Systems Lecture Notes in Computer Science, pp. 610–619, Feb. 2018.
- D. Perez, S. Samothrakis, S. Lucas, “Knowledge-based fast evolutionary MCTS for general video game playing,” 2014 IEEE Conference on Computational Intelligence and Games, Aug. 2014.
- A. Elkin, “Adaptive Game AI and Video Game Enjoyability,” June 2012.
- H. Warpefelt, “The Non-Player Character: Exploring the believability of NPC presentation and behavior,” Doctoral thesis, Dept. of Computer and Systems Sciences, Stockholm University, 2016.
- E. A. A. Gunn, B. G. W. Craenen, E. Hart, “A Taxonomy of Video Games and AI,” Artificial Intelligence and Simulation of Behavior 2009 Convention Proceedings, Heriot-Watt University, Edinburgh, Scotland.
- J. M. L. Asensio, J. Peralta, R. Arrabales, M. G. Bedia, P. Cortez, A. L. Peña, “Artificial Intelligence approaches for the generation and assessment of believable human-like behavior in virtual characters,” Expert Systems with Applications, vol. 41, no. 16, pp. 7281-7290, Nov. 2014.
- W. Hu, Q. Zhang, Y. Mao, “Component-based hierarchical state machine - A reusable and flexible game AI technology,” 2011 6th IEEE Joint International Information Technology and Artificial Intelligence Conference, Sept. 2011.
- D. Livingstone, “Turing’s test and believable AI in games,” Computers in Entertainment (CIE), Jan. 2006.
- M. Molineaux, D. Dannenhauer, D. W. Aha, “Towards Explainable NPCs: A Relational Exploration Learning Agent,” Thirty-Second AAAI Conference on Artificial Intelligence, June 2018.
- S. Mondesire, R. P. Wiegand, “Evolving a Non-playable Character team with Layered Learning,” 2011 IEEE Symposium on Computational Intelligence in Multicriteria Decision-Making (MDCM), Apr. 2011.
- S. Yildirim, S. B. Stene, “A Survey on the Need and Use of AI in Game Agents,” in Modeling Simulation and Optimization: Focus on Applications, S. Cakaj, Ed. Rijeka, Croatia: InTech, 2010, pp. 225-238.
- S. Phon-Amnuaisuk, “Learning Chasing Behaviours of Non-Player Characters in Games Using SARSA,” Applications of Evolutionary Computation Lecture Notes in Computer Science, pp. 133–142, 2011.