
Before diving into the actual study, it is necessary to note that Artificial Intelligence must be separated into three different forms in order to understand the past and future progress of this scientific domain. First, there is Artificial Narrow Intelligence (ANI), which refers to machines that excel only at a handful of similar tasks: Deep Blue excels at chess only, and Google’s AlphaGo can only play Go. Next, Artificial General Intelligence (AGI) applies when machines are capable of performing most tasks humans can. Finally, the theoretical Artificial Superintelligence will apply when we are able to create a machine that exceeds human beings at almost any task (Sweijs, 2018). Currently, society as a whole, including the video game industry, sits between ANI and AGI. It is through cooperation between the different fields using and developing Artificial Intelligence that the overall field will evolve. This is why the present study aims at highlighting the efforts and importance of the video game industry in the development of artificial intelligence, as these usually go unnoticed by the general public.

How Artificial Intelligence Works in Video Games

[Image: retro Nintendo console]

Less than two decades ago, Non-Playable Characters (NPCs) in video games interacted with their virtual world through numerical values instead of symbolic ones (such as weather, temperature, etc.). This was a consequence of the limited use of detailed Artificial Intelligence, owing to the relatively rudimentary, labor-intensive scripting and authoring methods required to develop such intelligence. Nowadays, the field has developed so much within the video game industry that the latter has taken a significant place in the general scientific field of Artificial Intelligence. (Miikkulainen, 2006)

In its early forms, artificial intelligence was long used to simulate human players in board games, computer chess players being the best-known example. Modern chess programs can now easily beat the best human players; IBM’s Deep Blue computer famously beat Garry Kasparov in 1997. Computer-controlled opponents existed from the very beginning, as in Computer Space (1971), although many early video games such as Pong (1972) only allowed human opponents to face each other. Space Invaders (1978) provided an early example of the challenge that computer-controlled opponents could bring to a game: as the player shot down the aliens, the game sped up considerably with fewer opponents. At first this was an unexpected effect of the hardware limitations of the time, but Tomohiro Nishikado, the game’s designer, decided to leave it in because it made the gameplay more exciting. In general, a lot of AI in game development goes toward defining the way a computer opponent behaves. (Delony, n.d.) Behavior can range from relatively simple patterns in action games all the way to chess programs that can beat champion human players thanks to their highly developed and detailed patterns.

[Image credit: DeviantArt]

Decision Trees

While human opponents are undeniably entertaining to square off against, the video game industry really took off when microprocessors allowed players to compete with more sophisticated and challenging computer opponents. These opponents can examine player behavior and change their responses to make games more challenging, producing emergent behavior. Decision trees and pathfinding are the main techniques used in AI game programming. Some AI opponents in first-person shooter games can listen for player movements, look for footprints, or even take cover when a human opponent fires on them. These details, which go unnoticed by most players, had to be programmed extensively in order to create NPCs that can make decisions depending on their surroundings. (Delony, n.d.)

[Image: decision tree diagram]

To develop machines with artificial intelligence, game developers need access to a large amount of data on previous games’ environments, player behaviors, and responses. The essential data gathered for the AI is then engineered into a virtual gaming environment whose scenarios, motives, and actions, attributed to the game’s characters, are becoming increasingly realistic and natural. However, since the possible moves are far more diversified than in chess, most game developers cannot consider all of them without overloading the system. (AIthority, 2020)

[Image credit: Pinterest]

For example, in the recent game God of War (2018) for PlayStation 4, Santa Monica Studio used the decision tree technique to program the Artificial Intelligence in their game. This is a programming technique in which the computer opponent has many different possible actions available at any given time, each of which leads to another range of possible actions, like a tree whose branches lead to further branches. It is a fairly basic technique, as the machine only has to take into account a few variables at any given time. (Yeh, 2019)
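To make the idea concrete, here is a minimal sketch of a decision tree for a boss-style opponent. This is an illustrative toy, not Santa Monica Studio’s actual code: the attack names, health thresholds, and distances are all invented.

```python
import random

def choose_action(distance, player_hp, own_hp):
    """Walk a small tree of branching conditions and return an action.

    Each branch narrows the situation further, and some leaves still
    pick randomly among a few moves, which keeps fights from feeling
    fully scripted. All names and thresholds are invented examples.
    """
    if own_hp < 0.25:                      # low-health branch
        if distance > 10:
            return "retreat_and_heal"
        return "desperation_combo"
    if distance > 10:                      # ranged branch
        return random.choice(["throw_projectile", "close_distance"])
    if player_hp < 0.3:                    # finisher branch
        return "execute_grab"
    return random.choice(["light_attack", "heavy_attack", "block"])
```

Each call walks at most a handful of comparisons, which is why decision trees are cheap at runtime; the cost, as the God of War example below shows, grows with the number of branches the engine has to hold in memory.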

[Image: God of War screenshot (credit: KeenGamer)]

Nevertheless, the developers at Santa Monica Studio had to resort to “dumbing down” their hardest computer-directed opponent. The team itself explains: “She has almost every attack. In fact, she originally had so many attacks in her decision tree that the game engine could not handle it. We had to trim down her movelist a bit to fit her into the game’s limits for an AI character. This still resulted in over 25 attacks, in addition to many combo chains and variations.” Thus, while decision trees are a great way of developing AI in low-variation situations, the lack of computing power can hold back progress significantly in certain instances.

Monte Carlo Search Tree

A more advanced method used to enhance the personalized gaming experience is the Monte Carlo Search Tree (MCST) algorithm. It is a best-first search algorithm that gradually builds up a search tree and uses Monte Carlo simulations to approximate the value of game states. In such games, the MCST randomly chooses some of the possible moves to explore first, which makes outcomes much more uncertain to human players. This uncertainty makes the artificial intelligence more akin to human intelligence.

[Image: Monte Carlo Search Tree diagram (credit: Harbing, 2017)]

For example, in Civilization, a game in which human players develop a city in competition with an AI doing the same thing, it is impossible to pre-program every move for the AI. Instead, the MCST AI evaluates some of the possible next moves, such as developing ‘technology’, attacking a human player, or defending a fortress. The AI then performs the MCST to calculate the overall payback of each of these moves and chooses whichever is the most valuable. It then falls to the human player to outplay a computer that can make predictions based on multiple variables and different series of paths, much like human players themselves. The Monte Carlo Search Tree algorithm is actually used by many companies, as it is a very useful tool for analyzing and simulating the risks of many different situations at the same time. (Schulze, 2010) To come back to Civilization, the game employs MCST to provide different AI behaviors in each round. In these complicated open-world games, the evolution of a situation is never predetermined, providing a fresh gaming experience for human players every time. (Harbing, 2017)
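The evaluate-and-choose loop described above can be sketched as a simplified, “flat” Monte Carlo evaluation: for each candidate move, run many random playouts and keep the move with the best average payoff. A full MCST also grows a tree of follow-up moves; this sketch, with invented moves and payoff values, shows only the simulate-and-average step and is not Civilization’s actual AI.

```python
import random

# Invented candidate moves, standing in for a strategy game's options.
MOVES = ["develop_technology", "attack_player", "defend_fortress"]

def simulate(move, rng):
    """One random playout: a noisy payoff drawn around a base value.

    A real game would simulate turns to an end state; the base values
    here are invented stand-ins for those simulated outcomes.
    """
    base = {"develop_technology": 0.6,
            "attack_player": 0.5,
            "defend_fortress": 0.4}[move]
    return base + rng.uniform(-0.3, 0.3)

def choose_move(playouts=200, seed=0):
    """Average many playouts per move and pick the most valuable one."""
    rng = random.Random(seed)
    averages = {move: sum(simulate(move, rng) for _ in range(playouts)) / playouts
                for move in MOVES}
    return max(averages, key=averages.get)
```

Because the playouts are random, two runs with different seeds can rank close moves differently, which is exactly the human-like unpredictability the text describes.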

Finite State Machine

The paramount reason Artificial Intelligence is used in video games is to enhance human players’ gaming experience, and its most common role is controlling NPCs. Game designers often use tricks to make these NPCs look intelligent, as the Artificial Intelligence used is usually under-developed in order to avoid overwhelming the game engine and the programming team. One of the most widely utilized techniques to make the AI seem intelligent while relying on a simple program is the Finite State Machine (FSM) algorithm, which was introduced to video game design in the 1990s.

In an FSM, a designer enumerates all the possible situations that an AI could encounter and then programs a specific reaction for each. An obvious drawback of FSM design is its predictability, which is why it is gradually being used less as big-budget games (commonly called AAA or triple-A games) strive to be more realistic for the player’s satisfaction. Indeed, all NPC behaviors are pre-programmed, so after playing an FSM-based game a few times, a player may lose interest. Compared to MCST algorithms, the FSM algorithm acts only on the current status of its environment, responding to one specific event without taking into account any other variables or paths. (Harbing, 2017) In the video game Splinter Cell: Blacklist, the human guards have 15 different possible states which determine their course of action. These states can be triggered by the player’s actions or by the game’s environment (temperature, smell, protection, etc.). (Delony, n.d.)
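A minimal FSM for a guard NPC might look like the sketch below. The four states and the events that connect them are invented for illustration (the actual Splinter Cell: Blacklist guards have 15 states); the point is the lookup table of (state, event) pairs.

```python
# Transition table: (current state, event) -> next state.
# States and events are invented examples, not taken from a real game.
TRANSITIONS = {
    ("patrol", "heard_noise"): "investigate",
    ("investigate", "saw_player"): "attack",
    ("investigate", "all_clear"): "patrol",
    ("attack", "under_fire"): "take_cover",
    ("take_cover", "reloaded"): "attack",
}

class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def handle(self, event):
        # React only to the current (state, event) pair; unknown pairs
        # leave the state unchanged. This one-lookup simplicity is also
        # the source of the predictability discussed above.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

For example, a guard in the "patrol" state that receives a "heard_noise" event moves to "investigate", and a subsequent "saw_player" event moves it to "attack"; any event the table does not list is simply ignored.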

While this algorithm may seem archaic, as long as the result appears intelligent to the player, this simple Artificial Intelligence has achieved its goal in the minds of game designers and programmers.

[Image: finite state machine diagram (credit: Harbing, 2017)]

Reinforcement Learning

A newer technique used in video game development to conceive Artificial Intelligence is Reinforcement Learning. It has rarely been used, because of time and programming constraints and also for lack of purpose in most instances; it is rare to find an actual use in video games for a program capable of “learning” (a term used very loosely here). As the name suggests, reinforcement learning consists of making the machine repeat the same actions over a long period of time, with small variations, so that it learns to act in the way that benefits it the most. Loosely explained, it is similar to the concept of risk versus reward. (Greene, 2018)
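The risk-versus-reward idea can be sketched with a tiny value-learning loop: an agent repeatedly chooses between a safe action and a risky one and, from many repeated trials, learns estimates of which pays off more on average. The actions, payoffs, and learning constants below are all invented for illustration.

```python
import random

def reward(action, rng):
    """Invented payoffs: 'safe' always pays 1.0, 'risky' gambles."""
    if action == "safe":
        return 1.0
    return 5.0 if rng.random() < 0.5 else -2.0  # expected value: 1.5

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Epsilon-greedy value learning over many repeated episodes."""
    rng = random.Random(seed)
    q = {"safe": 0.0, "risky": 0.0}  # running value estimates
    for _ in range(episodes):
        if rng.random() < epsilon:        # explore occasionally
            action = rng.choice(["safe", "risky"])
        else:                             # otherwise exploit the best estimate
            action = max(q, key=q.get)
        # Nudge the chosen action's estimate toward the observed reward.
        q[action] += alpha * (reward(action, rng) - q[action])
    return q
```

The repetition with small variations is exactly what the paragraph above describes: no behavior is scripted, and the agent’s preferences emerge only from accumulated experience.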

[Image: reinforcement learning diagram (credit: Shield.AI)]

Nevertheless, it is actually rare to find a video game which uses learning Artificial Intelligence during the actual gameplay. To this day, virtual pet games represent the sole segment of the video game industry which consistently employs machines with the ability to “learn” during each play session. One of the earliest video games to adopt NPCs with learning capabilities was the digital pet game Petz (1995), in which the player can train a virtual pet just as they would train a real animal. Since training style varies between players, each pet’s behavior becomes personalized, resulting in a strong bond between pet and player. While a personalized experience is preferable in these types of games, for most video games an Artificial Intelligence that learns during each play session means the developers lose control of the AI’s range of actions and behavior. This loss of control can spoil the player’s experience by making the game too easy, too hard, or even unplayable if the algorithm was badly programmed. (Harbing, 2017)

According to Nicolas Esposito, a video game researcher, through reinforcement learning video games have become testing grounds for new forms of AI that will be used in other domains, such as autonomous cars and architecture.

Only a few companies have tried their hand at this technique over the years in order to program an Artificial Intelligence that can “evolve” by itself. For example, the video game studio Electronic Arts has been trying to develop such an AI, not as a companion for the human player, but as its replacement during test runs. Indeed, EA reports that its novel method is superior to previous efforts: it “provides a 4x improvement in training speed by having the agent learn useful behaviors by imitating the play style of an expert human player”, as it begins with a 50/50 blend of imitation learning and reinforcement training which gradually evolves into reinforcement learning only.

EA’s goal of training neural networks (the structures through which the machine is programmed to “think”) on AAA games could be useful for testing the limits of a gaming environment and thereby reducing launch-day bugs. Indeed, the machine is able to play the game more often and faster than a human counterpart, making it easier for game developers to find bugs in their games prior to release. (Greene, 2018)

[Image credit: DKOldies]

Game developers have often adopted innovative techniques to develop their technical skills and creativity. Reinforcement Learning is a subset of Machine Learning, and the algorithm behind the famous AI program AlphaGo, which beat the world’s best human Go player, is a case in point. From the simple Finite State Machine algorithm and the complex Monte Carlo Search Tree to experimental Reinforcement Learning, Artificial Intelligence has made its way into game development for the benefit of players and game developers alike. Furthermore, this means there is a new player in the AI development field working toward improving, testing, and inspiring the others, which has already benefited, and will continue to benefit, the entire field. (AIthority, 2020)

According to three of the four game design students interviewed for this study, Machine Learning techniques such as Reinforcement Learning will be an essential part of their future profession, easing and furthering the development of the games on which they will work.
