In 2016 a computer program named AlphaGo made headlines for defeating then world champion Lee Sedol at the ancient, popular strategy game Go. The “superhuman” artificial intelligence, developed by Google DeepMind, lost only one of the five games to Sedol, generating comparisons to Garry Kasparov’s 1997 chess loss to IBM’s Deep Blue. Go, in which players face off by placing black and white pieces called stones with the goal of occupying territory on the game board, had been viewed as a more intractable challenge for a machine opponent than chess.
Much agonizing about the threat of AI to human ingenuity and livelihood followed AlphaGo’s victory, not unlike what’s happening right now with ChatGPT and its kin. In a 2016 news conference after the loss, though, a subdued Sedol offered a comment with a kernel of positivity. “Its style was different, and it was such an unusual experience that it took time for me to adjust,” he said. “AlphaGo made me realize that I must study Go more.”
At the time European Go champion Fan Hui, who’d also lost a private round of five games to AlphaGo months earlier, told Wired that the matches made him see the game “completely differently.” This improved his play so much that his world ranking “skyrocketed,” according to Wired.
Formally tracking the messy process of human decision-making can be tough. But a decades-long record of professional Go players’ moves gave researchers a way to assess the human strategic response to an AI provocation. A new study confirms that Fan Hui’s improvement after facing the AlphaGo challenge wasn’t a fluke. In 2017, after that humbling AI win in 2016, human Go players gained access to data detailing the moves made by the AI system and, in a very humanlike way, developed new strategies that led to better-quality decisions in their game play. The changes in human game play are confirmed in findings published on March 13 in the Proceedings of the National Academy of Sciences USA.
“It is amazing to see that human players have adapted so quickly to incorporate these new discoveries into their own play,” says David Silver, principal research scientist at DeepMind and leader of the AlphaGo project, who was not involved in the new study. “These results suggest that humans will adapt and build upon these discoveries to massively increase their potential.”
To pinpoint whether the advent of superhuman AI drove humans to generate new strategies for game play, Minkyu Shin, an assistant professor in the department of marketing at City University of Hong Kong, and his colleagues used a database of 5.8 million moves recorded during games from 1950 through 2021. This record, maintained at the website Games of Go on Download, reflects every move of Go games played in tournaments as far back as the 19th century. The researchers began analyzing games from 1950 onward because that’s the year modern Go rules were established.
In order to start combing through the massive record of 5.8 million game moves, the team first created a way to rate the quality of decision-making for each move. To develop this index, the researchers used yet another AI system, KataGo, to compare the win rates of each human decision against those of AI decisions. This huge analysis involved simulating 10,000 ways the game could play out after each of the 5.8 million human decisions.
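The article doesn’t publish the study’s exact formula, but the core idea—rate each human move by how much winning probability it preserves compared with the engine’s preferred move—can be sketched. Everything below is an illustrative assumption: the function names are hypothetical, and the win rates would in practice come from KataGo’s simulated continuations rather than hand-entered numbers.

```python
# Hypothetical sketch of a per-move decision-quality index.
# In the study, win rates were estimated by simulating 10,000
# continuations per move with KataGo; here they are plain inputs.

def decision_quality(human_win_rate: float, best_win_rate: float) -> float:
    """Quality of a human move as the fraction of the winning chance
    it preserves relative to the engine's best alternative
    (1.0 means the human move was as good as the engine's choice)."""
    if best_win_rate == 0:
        return 1.0  # degenerate lost position: every move is equally "optimal"
    return human_win_rate / best_win_rate

# Example: the engine's best move wins 62% of simulated continuations,
# while the human's actual move wins 55.8% -> quality 0.9.
quality = decision_quality(0.558, 0.62)
```

Averaging such a score over all of a player’s moves in a game gives a single game-level quality number that can be tracked across decades.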
With a quality rating for each of the human decisions in hand, the researchers then developed a means to pinpoint exactly when a human decision during a game was novel, meaning it had not been recorded before in the history of the game. Chess players have long used a similar approach to determine when a new strategy in game play emerges.
In the novelty analysis of Go game play, the researchers mapped up to 60 moves for each game and marked when a novel move was introduced. If it emerged at, say, move nine in one game but not until move 15 in another, then the former game would have a higher novelty index score than the latter. Shin and his colleagues found that after 2017, most moves that the team defined as novel occurred by move 35.
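The novelty detection described above can be sketched as a prefix lookup: a game’s novelty point is the first move, within the 60-move horizon, at which its sequence departs from every previously recorded game, and earlier departures score higher. The data structure and scoring formula below are illustrative assumptions, not the study’s published code.

```python
# Hypothetical sketch of the novelty index: earlier first-novel-move
# -> higher novelty score. Moves are represented as coordinate strings.

def first_novel_move(game: list, prior_prefixes: set, horizon: int = 60):
    """Return the 1-based move number at which this game's opening first
    departs from all previously recorded games, or None if it stays
    within known territory through the horizon."""
    for i in range(1, min(len(game), horizon) + 1):
        if tuple(game[:i]) not in prior_prefixes:
            return i
    return None

def novelty_score(novel_at, horizon: int = 60) -> int:
    """Earlier novelty yields a higher score; no novelty scores 0."""
    return 0 if novel_at is None else horizon - novel_at + 1

# Toy history of two recorded openings:
history = [["D4", "Q16", "Q4"], ["D4", "Q16", "C16"]]
prior = {tuple(g[:i]) for g in history for i in range(1, len(g) + 1)}

game = ["D4", "Q16", "R16"]  # departs from recorded games at move 3
novel_at = first_novel_move(game, prior)
```

Under this toy scoring, a game that turns novel at move 9 would outscore one that waits until move 15, matching the ranking described in the study.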
The researchers then looked at whether the timing of novel moves in game play tracked with an increased quality of decisions—whether making such moves actually improved a player’s advantage on the board and the likelihood of a win. They especially wanted to see what, if anything, happened to decision quality after AlphaGo bested its human challenger Sedol in 2016 and another series of human challengers in 2017.
The team found that before AI beat human Go champions, the level of human decision quality stayed pretty uniform for 66 years. After that fateful 2016–2017 period, decision quality scores began to climb. Humans were making better game play choices—maybe not enough to consistently beat superhuman AIs but still better.
Novelty scores also shot up after 2016–2017 from humans introducing new moves into games earlier during the game play sequence. And in their assessment of the link between novel moves and better-quality decisions, Shin and his colleagues found that before AlphaGo succeeded against human players, humans’ novel moves contributed less to good-quality decisions, on average, than nonnovel moves. After these landmark AI wins, the novel moves humans introduced into games contributed more on average than already known moves to better decision quality scores.
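The novel-versus-known comparison amounts to grouping per-move quality scores by era (before or after the 2016–2017 AI wins) and by whether the move was novel, then comparing group means. The records below are synthetic placeholders that merely mirror the qualitative pattern the study reports; they are not the study’s data.

```python
# Sketch of the grouping behind the novel-vs-known comparison.
# Each record is (era, is_novel, quality); the values are synthetic.
from collections import defaultdict

def group_means(records):
    """Mean decision quality per (era, is_novel) group."""
    sums = defaultdict(lambda: [0.0, 0])
    for era, is_novel, quality in records:
        bucket = sums[(era, is_novel)]
        bucket[0] += quality
        bucket[1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

records = [
    ("pre", True, 0.80), ("pre", False, 0.85),    # pre-2017: novel moves lag
    ("post", True, 0.95), ("post", False, 0.88),  # post-2017: novel moves lead
]
means = group_means(records)
```

Comparing `means[("pre", True)]` with `means[("pre", False)]`, and likewise for the post era, reproduces the reversal described above: novel moves contribute less than known moves before the AI wins and more after them.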
One possible explanation for these improvements is that humans were memorizing new play sequences of moves. In the study, Shin and his colleagues also assessed how much memorization could explain decision quality. The researchers found that memorization would not completely explain decision quality improvements and was “unlikely” to underlie the increased novelty seen after 2016–2017.
Murat Kantarcioglu, a professor of computer science at the University of Texas at Dallas, says that these findings, taken together with work he and others have done, show that “clearly, AI can help improve human decision-making.” Kantarcioglu, who was not involved in the current study, says that the ability of AI to process “vast search spaces,” such as all possible moves in a complex game such as Go, means that AI can “find new solutions and approaches to problems.” For example, an AI that flags a medical image as suggestive of cancer could lead a clinician to look more closely than they might have before. “This in turn will make the person a better doctor and prevent such mistakes in the future,” he says.
A hitch—as the world is seeing right now with ChatGPT—is the issue of making AI more trustworthy, Kantarcioglu adds. “I believe this is the main challenge,” he says.
In this new phase of concerns about ChatGPT and other AIs, the findings offer “a hopeful perspective” on the potential for AI to be an ally rather than a “potential enemy in our journey towards progress and betterment,” Shin and his co-authors wrote in an e-mail to Scientific American.
“My co-authors and I are currently conducting online lab experiments to explore how humans can improve their prompts and achieve better outcomes from these programs,” Shin says. “Rather than viewing AI as a threat to human intelligence, we should embrace it as a valuable tool that can enhance our abilities.”