DeepMind AI crushes pros at StarCraft II

Google-owned DeepMind has developed artificial intelligence that can defeat the world’s best human players at StarCraft II. The AI, known as AlphaStar, crushed two professional gamers at DeepMind’s headquarters in London after logging the equivalent of roughly 200 years of in-game experience.

Given other recent AI victories over human gamers, this news may not seem surprising. A closer look at AlphaStar’s win, however, reveals just how far the technology has come over the last decade.

Using Video Games to Test AI

One of the primary ways researchers have tested AI is through games of various kinds. IBM’s Deep Blue beat world chess champion Garry Kasparov in the late 90s. In 2017, DeepMind’s AlphaGo took down the world’s top-ranked player at the classic abstract strategy board game, Go.

Today, developers are increasingly turning to video games to push the limits of AI further. A few years ago, footage surfaced of MarI/O, a neuroevolution bot that taught itself to play the side-scrolling classic Super Mario World with striking effectiveness. More recently, another DeepMind system made significant headway on the notoriously difficult Atari game, Montezuma’s Revenge.

None of those gaming milestones, however, compares to what AlphaStar has accomplished in StarCraft II. The game stumped AI researchers for years before this significant breakthrough.

Why StarCraft II is Different

StarCraft II is a real-time strategy game in which players compete against one another for control of a large map. Gamers move simultaneously and can see only a portion of the map at any given time, forcing them to make rapid, high-stakes decisions with imperfect information.

Units also move at different speeds depending on map conditions. Rather than taking turns to move pieces at fixed speeds, as in chess, StarCraft II players must navigate complex situations in which the variables are constantly changing. These elements combine to create a fast-paced experience that tests a player’s ability to outmaneuver an opponent.

Training AlphaStar

The DeepMind team used both supervised learning and reinforcement learning to train AlphaStar. Blizzard provided DeepMind with anonymized human gameplay that AlphaStar could observe and study. The bot was then left to hone its skills in an AI league in which agents battled one another continuously.
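
The passage above describes a two-phase recipe: supervised imitation of human replays, followed by reinforcement learning in a league of competing agents. The sketch below illustrates that idea in a deliberately simplified form; the class names, the scalar "skill" stand-in, and the Elo-style win probability are hypothetical placeholders, not DeepMind’s actual code or architecture.

```python
# Illustrative sketch of imitation pretraining followed by league self-play.
# Everything here is a placeholder, not AlphaStar's real implementation.
import random
import copy


class Agent:
    """Stand-in for a StarCraft II policy network (illustrative only)."""

    def __init__(self, skill=0.0):
        self.skill = skill  # abstract proxy for policy quality

    def imitate(self, replay):
        # Phase 1: supervised learning -- nudge the policy toward human play.
        self.skill += 0.01 * replay["quality"]

    def improve_against(self, opponent):
        # Phase 2: reinforcement learning -- learn from wins and losses.
        # Elo-style expected win probability (a modeling assumption).
        p_win = 1.0 / (1.0 + 10 ** (opponent.skill - self.skill))
        won = random.random() < p_win
        self.skill += 0.05 if won else 0.01
        return won


def train_league(human_replays, league_rounds=1000):
    agent = Agent()

    # Supervised phase: bootstrap from anonymized human games.
    for replay in human_replays:
        agent.imitate(replay)

    # League phase: periodically freeze snapshots of the agent and keep
    # training against the growing pool of past versions.
    league = [copy.deepcopy(agent)]
    for step in range(league_rounds):
        opponent = random.choice(league)
        agent.improve_against(opponent)
        if step % 100 == 0:
            league.append(copy.deepcopy(agent))

    return agent, league


if __name__ == "__main__":
    replays = [{"quality": random.random()} for _ in range(500)]
    final_agent, league = train_league(replays)
    print(f"final skill: {final_agent.skill:.2f}, league size: {len(league)}")
```

The key idea mirrored here is the league: rather than training against a single fixed opponent, the agent repeatedly plays frozen copies drawn from a growing pool, which discourages strategies that only work against one particular rival.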

From this arena, the final AlphaStar agent emerged, built on the most effective strategies discovered in the league. Once AlphaStar was pitted against real human gamers, the AI surprised and dazzled. “I was impressed to see AlphaStar pull off advanced moves and different strategies across almost every game,” said Grzegorz Komincz, a StarCraft II professional.

Why This Matters

AlphaStar’s achievement reveals important insights about AI’s overall progress. DeepMind created an algorithm that can outperform top humans in a complex environment while working with limited information. Yes, AlphaStar trained far more than any human ever could, but its ability to learn and perform at this level is unprecedented.

DeepMind’s success is a major milestone with exciting real-world implications. As with other game-playing champion bots, the techniques behind AlphaStar could be applied in other domains. For example, the algorithm’s ability to make long-range predictions from imperfect information could prove useful in weather forecasting and climate modeling.

Creating AI to beat humans at video games isn’t an end in itself. Rather, it is a practical way to test AI progress in low-risk settings before putting it to work on real-world problems.
