The world of AI is ever-expanding. Another rapidly growing industry is pro gaming: some professional gamers earn millions, rivaling the lesser-paid athletes of more traditional sports. Yet countless hours of training behind a keyboard and mouse were not enough to best DeepMind's AI in a recent competition in the real-time strategy video game "StarCraft II."
After beating humans at comparatively simpler games like chess, Scrabble, and Go, it only makes sense that AI has moved on to more complex ones. The AI, affectionately known as AlphaStar, first trained on footage of human players. It then trained against four other versions of itself, accumulating the equivalent of roughly 200 years of in-game experience.
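The idea of agents improving by playing against copies of themselves can be illustrated with a toy self-play loop. This is a deliberately minimal sketch, not DeepMind's actual AlphaStar pipeline: two simple agents play rock-paper-scissors, and each nudges its action preferences toward moves that won.

```python
import random

# Toy self-play illustration (not AlphaStar's actual training method):
# two agents repeatedly play rock-paper-scissors, and each reinforces
# the actions that beat its opponent.
ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def choose(weights):
    """Sample an action in proportion to the agent's preference weights."""
    return random.choices(ACTIONS, weights=[weights[a] for a in ACTIONS])[0]

def self_play(rounds=10_000, lr=0.01, seed=0):
    random.seed(seed)
    w_a = {a: 1.0 for a in ACTIONS}  # agent A's action preferences
    w_b = {a: 1.0 for a in ACTIONS}  # agent B's action preferences
    for _ in range(rounds):
        move_a, move_b = choose(w_a), choose(w_b)
        if BEATS[move_a] == move_b:    # A wins: reinforce A's move
            w_a[move_a] += lr
        elif BEATS[move_b] == move_a:  # B wins: reinforce B's move
            w_b[move_b] += lr
        # draws leave both agents unchanged
    return w_a, w_b

w_a, w_b = self_play()
```

Because both agents adapt to each other, no fixed strategy stays dominant for long; that constant arms race is what lets self-play compress so much useful experience into so little wall-clock time.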
Down Go the Champions
When AlphaStar took the stage against two of the world's top "StarCraft" players, Dario Wunsch (ranked 44th) and Grzegorz Komincz (ranked 13th), no one knew what the results would be. The AI swept them both, winning each five-game series 5-0.
In "StarCraft II," players control armies across a sprawling map. They must build infrastructure and balance short-term gains against long-term strategy. Players often can't see the full map and must rely on instinct and guesses rather than firm information. For AlphaStar, this may have been an advantage, as the AI could rely on its algorithms and prediction models to guess outcomes more accurately.
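One way an agent can "guess more accurately" under a hidden map is to keep a probability over what the opponent is doing and update it as scouting reveals partial evidence. The sketch below is purely illustrative and not AlphaStar's actual model; the strategy names and likelihood numbers are invented for the example.

```python
from fractions import Fraction

# Toy illustration (not AlphaStar's actual prediction model): maintain a
# belief over hypothetical opponent strategies and update it with Bayes'
# rule after a partial observation. All numbers here are assumptions.
prior = {
    "rush": Fraction(1, 3),
    "economy": Fraction(1, 3),
    "air": Fraction(1, 3),
}

# Assumed P(scout sees "few workers" | strategy) for each strategy.
likelihood = {
    "rush": Fraction(7, 10),
    "economy": Fraction(1, 10),
    "air": Fraction(3, 10),
}

def bayes_update(prior, likelihood):
    """Return the posterior belief after one observation."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

posterior = bayes_update(prior, likelihood)
# Seeing few enemy workers shifts belief sharply toward a "rush".
```

Even this crude update turns a uniform guess into a confident lean (7/11 on "rush" here), which is the kind of edge a prediction model can exploit when hard information is scarce.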
Though it is still unclear how much innovation is baked in with the new AlphaStar AI, Sebastian Risi of the IT University of Copenhagen, Denmark, is optimistic. "It looks like a big step forward," he said.
Not So Fast
Though the victories over professional gamers were impressive, they were not without caveats. For one, the AI played on only a single map, using one of the game's three playable races. Meanwhile, the professionals faced a different version of AlphaStar from match to match, hindering them from analyzing the AI's strengths and weaknesses.
AlphaStar also consumed enormous amounts of processing power during training. While the matches themselves ran on a single GPU, training used 16 tensor processing units hosted in Google's cloud, compressing far more practice into a few weeks than any human could accumulate in a lifetime. Despite some evidence that playing action video games makes people smarter, humans are no match for AI when it comes to learning through sheer volume of training. When an experimental version of the AI with less training played a live exhibition game, it was defeated.
Regardless of these caveats, AlphaStar's success against some of the world's best gamers is significant, all the more so because previous attempts to teach AI the game have fallen short.
Meanwhile, the team behind the AI is optimistic that AlphaStar may have more meaningful implications. For example, playing “StarCraft” is similar to running logistics for a company or planning research and development. By learning, improving, and applying strategy, future AI systems could help streamline logistics for companies around the world.
Or, it could simply stick to playing video games for our enjoyment.