In the latest contest between humankind and artificial intelligence (AI), champion debater Harish Natarajan defeated his machine adversary in live discourse earlier this February.
One might hope that a human victory in debate would come as a given, but with the world’s best players losing to AI at everything from chess to Jeopardy! to online video games, it’s nice to see that humanity still holds the crown in some areas, especially reasoning.
But according to Natarajan, even that title will soon belong to the machines (there’s a reason AI home assistants are about to start offering users relationship advice).
Such contests against AI may seem silly at times, and in some ways even a little frightening. But these events are also fantastic indicators of how far AI has advanced over the past two decades. Really, what better way to evaluate the progress of our technology than by getting it to beat us at our own games?
An Overview of Robotic Rivalries (1996-2019)
The concept of AI has been around for a long time, with the term itself traced back to the first machine learning projects of the 1950s. But our fascination with using games as a benchmark for where we stand against our mechanical creations is a relatively recent development.
Part of this is a response to the media we enjoy, which often depicts robotic intelligence as a threat to our own. But the main cause is simply that, until the 1990s, AI wasn’t advanced enough to competitively challenge human skill. Once it got to that point, who could resist?
Chess (1996): Kasparov versus Deep Blue
In 1996, the reigning world chess champion, Garry Kasparov, played against a chess-savvy AI developed by IBM named Deep Blue.
The match spanned six games: two draws, one win for the computer, and three wins for Kasparov. It also marked the first recorded victory by a machine over a reigning world chess champion under tournament conditions. While Kasparov won the match that year, the fact that a non-human intelligence performed so well against one of the best human minds around was remarkable.
A Closer Look at Deep Blue
Deep Blue’s impressive performance was attributed to immense “brute force” calculation: the system examined every available possibility each turn to determine the best choice, analyzing up to 200 million positions a second. Over the roughly three minutes allowed per move in tournament play, that adds up to about 36 billion positions considered for every move.
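Deep Blue’s search was a massively engineered affair, but the core idea of exhaustive game-tree search fits in a few lines. A minimal sketch, using the simple game of Nim rather than chess (all names here are illustrative, not Deep Blue’s actual code): it enumerates every line of play to find a forced win, the same brute-force principle at vastly smaller scale.

```python
def best_move(stones, take_options=(1, 2, 3)):
    """Exhaustive game-tree search for Nim: take 1-3 stones, last stone wins.

    Returns (score, move): score is +1 if the player to move can force a win,
    -1 if every move loses against perfect play. This is the brute-force idea
    behind Deep Blue's search, minus the hardware and chess-specific tuning.
    """
    best_score, best_take = -1, None
    for take in take_options:
        if take > stones:
            continue
        if take == stones:
            return 1, take  # taking the last stone wins immediately
        opponent_score, _ = best_move(stones - take, take_options)
        score = -opponent_score  # our prospects are the negation of theirs
        if score > best_score:
            best_score, best_take = score, take
    return best_score, best_take

# Deep Blue's scale, by contrast: 200 million positions per second over a
# three-minute move budget is 200e6 * 180 = 36 billion positions per move.
```

With a pile of five stones, the search finds that taking one stone forces a win; any pile that is a multiple of four is a guaranteed loss for the player to move.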
In 1997, Kasparov and Deep Blue held a rematch. This bout included three draws, a single win for Kasparov (when Deep Blue’s system unexpectedly malfunctioned), and two wins for the AI, officially marking the machine as the superior player.
Jeopardy! (2011): Jennings versus Watson
In 2011, Ken Jennings, who once won 74 consecutive games of “Jeopardy!,” lost to Watson, another AI creation from IBM.
The competition aired over three episodes of the series, but by the second episode it became clear that Watson would steal the show. In the end, the AI scored $77,147, with Jennings a very distant second at $24,000; his final written response read, “I for one welcome our new computer overlords.”
A Closer Look at Watson
Unlike chess, where a finite set of legal moves is available each turn, “Jeopardy!” involves a variety of dynamic parts that make the game much harder for an AI system to interpret. It demands not just the encyclopedic knowledge that naturally advantages a computer, but also the ability to parse complex sentences, discern vague meanings, and, of course, handle the strategy behind buzzer timing.
While it emerged victorious, Watson was, interestingly enough, often unsure of itself during the competition. The AI gave some incorrect responses, and to avoid penalties when its confidence was low, it would hold back rather than fire off the buzzer press it could otherwise execute in about 10 milliseconds. Still, despite the occasional wrong answer, Watson’s nuanced handling of uncertainty marked a massive evolution from the “brute force” chess calculations of IBM’s Deep Blue well over a decade earlier.
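Watson’s real confidence machinery was far more sophisticated, but the risk calculation behind holding back is simple expected value. A toy sketch (the function names and the 0.5 threshold are invented for illustration): buzzing gains the clue’s value when right and loses it when wrong, so a low-confidence buzz has negative expected value.

```python
def expected_value(confidence, clue_value):
    """Expected score change from buzzing: gain the value if right, lose it if wrong."""
    return confidence * clue_value - (1 - confidence) * clue_value

def should_buzz(confidence, clue_value, min_confidence=0.5):
    """Buzz only when confidence clears a threshold; below it, stay silent.

    min_confidence=0.5 is where expected value crosses zero under this toy
    scoring model; Watson's actual thresholds were far more carefully tuned.
    """
    return confidence >= min_confidence and expected_value(confidence, clue_value) > 0

# A 90%-confident answer on an $800 clue is worth +$640 in expectation;
# a 30%-confident one would lose $320 on average, so the system stays quiet.
```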
StarCraft II (2018): TLO and MaNa versus AlphaStar
Video games have been testing the capabilities of AI for a while now, with everything from the classic Mario Kart to the masochistically difficult Montezuma’s Revenge being taken on by artificial systems. But what DeepMind has accomplished with StarCraft II reaches a whole new level.
At the end of 2018, DeepMind’s AlphaStar AI was put to the test against two professional StarCraft II players, TLO and MaNa. The only limitation was that matches were restricted to a single one of the game’s playable races, since AlphaStar had not been trained on the full roster. Across two five-game series, both human players lost five games to none. When you understand the complexity a StarCraft match involves, this is a monumental sign of how far AI has progressed.
A Closer Look at AlphaStar
StarCraft II takes place in real time, and players cannot see the entire field of play at once, so a lot of fast-paced decision-making happens on incomplete information. There are also many moving parts at any given moment, with different units having their own unique abilities and even movement speeds. All in all, players must navigate an ever-changing, unpredictable battlefield while micromanaging separate units to outmaneuver one another, something wildly impressive for a computer to do at champion level.
AlphaStar was trained first by observing replays of human games, then by practicing against other artificial opponents, accumulating roughly 200 years of game experience. That this game time was compressed into just 14 days, far more play than any human could fit into a lifetime, showcases AI’s incredible potential to outlearn and outperform us.
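The numbers behind that compression are worth pausing on. Some back-of-envelope arithmetic (assuming 365-day years; the 200-year figure is itself a rough cumulative total) shows how many lifetimes of practice the parallel training effectively ran at once:

```python
YEARS_OF_EXPERIENCE = 200  # cumulative in-game time AlphaStar accumulated
WALL_CLOCK_DAYS = 14       # real time the training actually took

experience_days = YEARS_OF_EXPERIENCE * 365       # 73,000 days of play
speedup = experience_days / WALL_CLOCK_DAYS       # roughly 5,214x real time

# Equivalently: every real day of training packed in about 14.3 years of play,
# which is only possible when many copies of the agent play in parallel.
years_per_day = YEARS_OF_EXPERIENCE / WALL_CLOCK_DAYS
```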
Regulated Debates (2019): Harish Natarajan versus Miss Debater
Which brings us back to the current year, and the debate held earlier in February between Harish Natarajan, winner of the 2012 European debating championship, and his AI opponent designed by IBM, Miss Debater.
Both sides had 15 minutes to prepare their stance on the topic of preschool subsidies and then present a structured argument. After the 25-minute debate, the attending crowd ruled Natarajan the winner.
A Closer Look at Miss Debater
While we’ve seen incredible progress from AI in terms of games, debating effectively is a more complex task for computers to handle for multiple reasons.
First, large amounts of relevant information must be gathered for an appropriate argument to be made. Second, the argument must be conveyed in a clear, structured manner. Third, and most importantly, the argument must be framed so that it matters to the audience; this last step requires emotional rhetoric, relevant examples, and targeted word choices to create a compelling stance. While Miss Debater may be an expert at drawing on information and constructing strong sentences, connecting emotionally with a human audience is another task entirely.
Still, it’s only a matter of time before the technology gets there.
What This Means for AI and Humanity
From these events, we can appreciate the astounding rate of progress AI has made and will continue to make. At this point, there’s no question that AI will keep surpassing humans in the activities we hold most dear. But that isn’t necessarily a bad thing.
From esteemed games of intellect to violent video games, to the art of conversation itself, we’re undoubtedly making machines that perform better than we do, and opening ourselves up to entirely new worlds of technological possibilities.