Video games continue to provide a challenge to billions of players around the world. You may not know it yet, but machine learning algorithms have started to rise up to the challenge as well.
A significant amount of current AI research explores whether machine learning methods can be applied to video games. Substantial progress in this field shows that machine learning agents can emulate or even replace the human player.
What does this mean for the future of video games?
Are these projects simply for fun, or are there deeper reasons why so many researchers are focusing on games?
This article will briefly explore the history of AI in video games. Afterward, we’ll give you a quick overview of some machine learning techniques we can use to learn how to beat games. We’ll then look at some successful applications of neural nets to learn and master specific video games.
Brief History of AI in Gaming
Before we get into why neural nets have become the ideal algorithm to solve video games, let’s briefly look into how computer scientists have used video games to advance their research in AI.
You can argue that, from their inception, video games have been a hot area of study for researchers interested in AI.
While not strictly a video game, chess was a major focus in the early days of AI. In 1951, Dr. Dietrich Prinz wrote a chess-playing program for the Ferranti Mark 1 digital computer. This was way back in the era when these bulky computers had to read programs off paper tape.
The program itself was not a complete chess AI. Because of the computer’s limitations, Prinz could only create a program that solved mate-in-two chess problems. On average, the program took 15-20 minutes to calculate every possible move for the White and Black players.
Work on chess and checkers AI progressed steadily throughout the decades. The progress reached its climax in 1997, when IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in a six-game match. Nowadays, chess engines running on a mobile phone are stronger than Deep Blue ever was.
AI opponents started gaining popularity during the golden age of video arcade games. 1978’s Space Invaders and 1980’s Pac-Man are some of the industry’s pioneers in creating AI that can sufficiently challenge even the most veteran of arcade gamers.
Pac-Man, in particular, was a popular game for AI researchers to experiment on. Various competitions for Ms. Pac-Man have been organized to determine which team could come up with the best AI to beat the game.
Game AI and heuristic algorithms continued to evolve as the need for smarter opponents arose. For example, combat AI rose in popularity as genres such as first-person shooters became more mainstream.
Machine Learning in Video Games
As machine learning techniques quickly rose in popularity, various research projects tried to use these new techniques to play video games.
Games such as Dota 2, StarCraft, and Doom serve as challenging problems for these machine learning algorithms to solve. Deep learning algorithms, in particular, were able to achieve human-level performance and, in some cases, surpass it.
The Arcade Learning Environment (ALE) gave researchers an interface to over a hundred Atari 2600 games. The open-source platform allowed researchers to benchmark the performance of machine learning techniques on classic Atari video games. DeepMind, now part of Google, even published a well-known paper using seven games from the ALE.
Meanwhile, projects like VizDoom gave AI researchers the opportunity to train machine learning algorithms to play 3D first-person shooters.
How Does It Work: Some Key Concepts
Most approaches to solving video games with machine learning involve a type of algorithm known as a neural network.
You can think of a neural net as a program that tries to mimic how a brain might function. Similar to how our brain is composed of neurons that transmit a signal, a neural net also contains artificial neurons.
These artificial neurons also pass signals to each other, with each signal being a real number. A neural net with multiple layers between the input and output layers is called a deep neural network.
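The idea above can be sketched in a few lines of code. The following is a minimal illustrative example, not a production network: the layer sizes, random weights, and class name are all arbitrary choices for demonstration.

```python
import numpy as np

def relu(x):
    # Activation function: pass positive signals through, zero out negatives
    return np.maximum(0, x)

class TinyNet:
    """A minimal feedforward network with one hidden layer."""

    def __init__(self, n_in=4, n_hidden=8, n_out=2, seed=0):
        rng = np.random.default_rng(seed)
        # Each weight scales the signal sent between two artificial neurons
        self.w1 = rng.normal(size=(n_in, n_hidden))
        self.w2 = rng.normal(size=(n_hidden, n_out))

    def forward(self, x):
        # Each layer multiplies incoming signals by its weights,
        # then applies an activation function
        hidden = relu(x @ self.w1)
        return hidden @ self.w2

net = TinyNet()
scores = net.forward(np.array([0.5, -0.1, 0.3, 0.9]))
print(scores.shape)  # one real number per output neuron
```

In a game-playing setting, the inputs would encode the game state (pixels, tile grids, unit positions) and the outputs would score the possible actions.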
Another common machine learning technique relevant to learning video games is the idea of reinforcement learning.
This technique is the process of training an agent using rewards or punishments. With this approach, the agent should be able to come up with a solution to a problem through trial and error.
Let’s say we want an AI to find out how to play the game Snake. The game’s objective is simple: get as many points as possible by consuming items and avoiding your growing tail.
With reinforcement learning, we can define a reward function R. The function adds points when the snake consumes an item and deducts points when the snake hits an obstacle. Given the current environment and a set of possible actions, our reinforcement learning model will try to compute the optimal ‘policy’ that maximizes our reward function.
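Here is a minimal sketch of one classic reinforcement learning method, tabular Q-learning, applied to a heavily simplified stand-in for Snake: a one-dimensional track where the agent moves left or right toward a food item. The state layout, reward values, and hyperparameters are all illustrative assumptions, not taken from any real Snake implementation.

```python
import random

# Toy 1-D "Snake": the agent sits on a line with food at the far end.
# Reaching the food gives +10; stepping off the left edge ends the
# episode with -10, mirroring R's rewards and penalties.
N_STATES = 5          # positions 0..4; food sits at position 4
ACTIONS = [-1, +1]    # move left or right

def step(state, action):
    nxt = state + action
    if nxt < 0:
        return 0, -10, True     # hit an obstacle (the wall)
    if nxt == N_STATES - 1:
        return nxt, 10, True    # consumed the item
    return nxt, 0, False

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

random.seed(0)
for _ in range(500):              # trial-and-error episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit the best known action,
        # occasionally explore a random one
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        # Nudge the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy should prefer moving right, toward the food
policy = [max(ACTIONS, key=lambda act: Q[(s, act)])
          for s in range(N_STATES - 1)]
print(policy)
```

After training, the policy comes out as all `+1` moves: the agent has learned, purely through rewards and penalties, to head toward the food. Real game-playing agents replace the Q-table with a deep neural network, since the full state space of a game is far too large to enumerate.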
Keeping in theme with being inspired by nature, researchers have also found success in applying ML to video games through a technique known as neuroevolution.
Instead of using gradient descent to update the weights of a network, we can use evolutionary algorithms to search for well-performing networks.
Evolutionary algorithms typically start by generating an initial population of random individuals. We then evaluate these individuals using certain criteria. The best individuals are chosen as “parents” and are bred together to form a new generation of individuals. These individuals will then replace the least-fit individuals in the population.
These algorithms also typically introduce some form of mutation operation during the crossover or “breeding” step to maintain genetic diversity.
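The generate–evaluate–select–breed–mutate loop described above can be sketched as follows. This toy example evolves bit strings rather than neural nets, and the fitness function (counting 1-bits as a stand-in for a game score), population size, and mutation rate are all illustrative assumptions.

```python
import random

random.seed(42)
GENOME_LEN, POP_SIZE, MUT_RATE = 20, 30, 0.02

def fitness(genome):
    # Toy evaluation criterion: count of 1-bits (a stand-in for game score)
    return sum(genome)

def crossover(mom, dad):
    # Single-point crossover: splice two parent genomes together
    cut = random.randrange(1, GENOME_LEN)
    return mom[:cut] + dad[cut:]

def mutate(genome):
    # Flip bits with small probability to maintain genetic diversity
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

# 1. Generate an initial population of random individuals
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(40):
    # 2. Evaluate and rank individuals by fitness
    population.sort(key=fitness, reverse=True)
    # 3. Select the best individuals as parents
    parents = population[:POP_SIZE // 2]
    # 4. Breed children to replace the least-fit half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

In neuroevolution, the genome would instead encode a network's weights (or, in NEAT's case, its topology as well), and fitness would be the score the network earns by actually playing the game.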
Sample Research on Machine Learning in Video Games
OpenAI Five
OpenAI Five is a computer program developed by OpenAI to play Dota 2, a popular multiplayer online battle arena (MOBA) game.
The program leveraged existing reinforcement learning techniques, scaled up to learn from enormous numbers of game frames. Thanks to a distributed training system, OpenAI Five was able to play 180 years’ worth of games against itself each day.
After the training period, OpenAI Five was able to achieve expert-level performance and demonstrate cooperation with human players. In 2019, OpenAI Five was able to defeat 99.4% of players in public matches.
Why did OpenAI pick this game? According to the researchers, Dota 2 had complex mechanics that were beyond the reach of existing deep reinforcement learning algorithms.
Super Mario Bros.
Another interesting application of neural nets in video games is the use of neuroevolution to play platformers such as Super Mario Bros.
For example, this hackathon entry starts with having no knowledge of the game and slowly builds a foundation of what is needed to progress through a level.
The self-evolving neural net takes in the game’s current state as a grid of tiles. At first, the neural net has no understanding of what each tile means, only that “air” tiles are different from “ground” and “enemy” tiles.
The hackathon project’s implementation of neuroevolution used the NEAT (NeuroEvolution of Augmenting Topologies) genetic algorithm to selectively breed different neural nets.
Now that you’ve seen some examples of neural nets playing video games, you might be wondering what the point of all this is.
Since video games involve complex interactions between agents and their environments, they’re a perfect testing ground for AI. Virtual environments are safe and controllable, and they provide a near-infinite supply of training data.
Research in this field has yielded insight into how neural nets can be optimized to solve problems in the real world.
Neural networks are inspired by how brains work in the natural world. By studying how artificial neurons behave when learning how to play a video game, we may also gain insight into how the human brain works.
Similarities between neural networks and the brain have led to insights in both fields. The continuing research on how neural nets can solve problems may someday lead to more advanced forms of artificial intelligence.
Imagine using an AI tailored to your preferences that plays through an entire video game before you purchase it, letting you know whether it’s worth your time. Would video game companies use neural nets to improve game design and tweak level and opponent difficulty?
What do you think will happen when neural nets become the ultimate gamers?