Fact Check

Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language?

Facebook's artificial intelligence scientists were purportedly dismayed when the bots they created began conversing in their own private language.

Published Aug. 1, 2017

Updated Aug. 1, 2017
Image via Palto / Shutterstock.com
Claim:
Concerned artificial intelligence researchers hurriedly abandoned an experimental chatbot program after they realized that the bots were inventing their own language.

It is probably not a coincidence that two of the top-trending news stories of July 2017 were a warning from billionaire tech entrepreneur Elon Musk that artificial intelligence (AI) poses an "existential threat" to human civilization, and the announcement that an AI experiment sponsored by Facebook had, according to some sources, been "shut down" after researchers discovered that the chatbots they programmed had begun communicating with one another in a private language of their own invention.

Musk, who has previously warned that the development of autonomous weaponry could lead to an "AI arms race," told the National Governors Association on 15 July 2017 that the risks posed by artificial intelligence are so great that it needs to be proactively regulated before it's too late. "Once there is awareness," Musk said, "people will be extremely afraid, as they should be."

Whether he meant it to or not, in some people's minds Musk's warning conjured up images of Skynet, the fictional AI network in the Terminator film series that became self-aware and set out to destroy the human race in the interests of self-preservation.

Cue the "creepy chatbot" stories. Though prompted by a somewhat dry 14 June blog post from Facebook's Artificial Intelligence Research (FAIR) team describing an advance in the development of dialog agents (AI systems designed to communicate with humans), the news that chatbots had been found communicating with each other in a private language received increasingly sensationalized treatment in the press as the summer wore on.

In a report published the day before Musk gave his speech to the governors, Fast Co. Design delivered a fascinating account of the FAIR team's experiment with nary a hint of dystopian fear-mongering:

Bob: “I can can I I everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

To you and I, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency — and perhaps, hidden nuance — than you or I ever could? Because it is.

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal — a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network” — neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

The key word is "seemingly," for in this instance the agents' neologisms were simple, straightforward, and easily decipherable:

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that’s been observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
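To make Batra's shorthand example concrete, here is a minimal sketch of how such a repetition code could be decoded. The message format and the function below are illustrative assumptions on our part, not the agents' actual protocol:

from itertools import groupby

def decode_shorthand(message: str) -> list:
    """Interpret each run of a repeated word as 'count copies of word'."""
    words = message.split()
    # Group consecutive identical words and measure the length of each run.
    return [(word, sum(1 for _ in run)) for word, run in groupby(words)]

print(decode_shorthand("the the the the the"))  # [('the', 5)] -> five copies
print(decode_shorthand("ball ball hat"))        # [('ball', 2), ('hat', 1)]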

The article notes that the researchers chose not to let the bots continue developing a private language, instead programming them to stick to plain English, given that the whole point of the research was to improve AI-to-human communication. That decision took on an increasingly sinister cast as more venues reported the story, however, as exemplified by the dozens of breathless blurbs shared via social media.

Among other misrepresentations, some articles claimed that the scientists were "shocked" or "surprised" when the chatbots invented new forms of expression. Not so, the study's lead author, Michael Lewis, told us via e-mail:

We gave some AI systems a goal to achieve, which required them to communicate with each other. While they were initially trained to communicate in English, in some initial experiments we only reward them for achieving their goal, not for using good English. This meant that after thousands of conversations with each other, they started using words in ways that people wouldn’t. In some sense, they had a simple language that they could use to communicate with each other, but was hard for people to understand. This was not important or particularly surprising, and in future experiments we used some established techniques to reward them for using English correctly. There have also been a number of papers from other research groups on methods for making AIs invent simple languages from scratch.
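In code terms, the fix Lewis describes amounts to adding a language term to the reward. The following toy sketch is our own illustration, with an assumed weight and a crude repetition-based fluency score standing in for the "established techniques" he mentions:

LANG_WEIGHT = 2.0  # assumed weight on the English-fluency term

def task_reward(points_scored: float) -> float:
    """Reward for achieving the negotiation goal (e.g., value of items won)."""
    return points_scored

def fluency_reward(utterance: str) -> float:
    """Crude stand-in for 'reward for using good English': measure the
    variety of adjacent word pairs, which collapses for drifted text like
    'to me to me to me'. A real system would use a language-model score."""
    words = utterance.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 1.0
    return len(set(pairs)) / len(pairs)

def combined_reward(points_scored: float, utterance: str) -> float:
    # Rewarding only the task lets the language drift; the added fluency
    # term anchors the agents to English, per Lewis's description.
    return task_reward(points_scored) + LANG_WEIGHT * fluency_reward(utterance)

print(combined_reward(7.0, "i want the ball and you get the hats"))    # 9.0
print(combined_reward(7.0, "balls have zero to me to me to me to me")) # 8.0

Under a reward like this, repetitive codewords stop paying off, so the agents have no incentive to drift away from ordinary English.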

As to the claim that the project was "shut down" because the bots' deviation from English caused concern, Lewis said that, too, misrepresents the facts:

There was no panic, and the project hasn’t been shut down. Our goal was to build bots that could communicate with people. In some experiments, we found that they weren’t using English words as people do — so we stopped those experiments, and used some additional techniques to get the bots to work as we wanted. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI.” If that were the case, every AI researcher has been “shutting down AI” every time they stop a job on a machine.

The main thing lost in all the hubbub about dialog agents inventing their own language, Lewis said, is that the study produced significant results on its core mission of training bots to negotiate with people, a task that requires both linguistic and reasoning skills:

We introduced a new technique for having bots simulate possible future conversations before deciding what to say (“If I say this, you might say that, then I’ll say this”), and found that this significantly improved their ability on the negotiation task.
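That planning step is easy to picture in code. The toy sketch below is a runnable illustration of the idea, with made-up candidate replies and dialogue dynamics standing in for FAIR's actual models:

import random

CANDIDATE_REPLIES = ["i take the ball", "you take the hats", "deal"]

def sample_continuation(history: list, reply: str) -> list:
    """Stand-in for the model imagining how the dialogue might continue."""
    # Toy dynamics: an explicit "deal" proposal is likelier to be accepted.
    p_agree = 0.9 if reply == "deal" else 0.4
    outcome = "deal" if random.random() < p_agree else "no deal"
    return history + [reply, outcome]

def score_outcome(dialogue: list) -> float:
    """Stand-in reward: did the simulated negotiation end in agreement?"""
    return 1.0 if dialogue[-1] == "deal" else 0.0

def choose_reply(history: list, num_rollouts: int = 50) -> str:
    best_reply, best_value = None, float("-inf")
    for reply in CANDIDATE_REPLIES:
        # "If I say this, you might say that, then I'll say this": average
        # the scores of several simulated futures for each candidate reply.
        value = sum(score_outcome(sample_continuation(history, reply))
                    for _ in range(num_rollouts)) / num_rollouts
        if value > best_value:
            best_reply, best_value = reply, value
    return best_reply

print(choose_reply(["hi", "i want the ball"]))  # usually picks "deal"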

We asked, finally, if Lewis and his colleagues see anything inherently dangerous in letting AI systems develop their own languages. He said no. "While it is often the case that modern AI systems solve problems in ways that are hard for people to interpret, they are always trying to achieve the goals that were given to them by people."

William Wisher, who wrote the Terminator films (among others) and who was part of a panel about artificial intelligence and its future at the 2017 San Diego Comic-Con, weighed in on the Skynet scenario, telling us:

Right now, everyone is terrified of AI and project nightmarish scenarios around it, assuming that we are creating our own new overlords. I’ve been part of that in the Terminator films. It makes for a good movie.

But what everyone fails to appreciate in these fever dreams is that human beings are the most adaptable, clever, and aggressive predators in the known universe. That really helps me sleep at night. Because I don’t believe AI will ever fully develop as a separate thing from people. I don’t think we’d allow it to. We are in infant stages now, but I think we will subsume AI and make it part of ourselves; better to control it. Implanting neural nets, within our brains that are connected to it, etc. Now that raises all kinds of as yet unseen "have and have not" issues. But that's another subject for another time.

Sources

Domonoske, Camila. "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk.'" NPR. 17 July 2017.

Lewis, Michael, et al. "Deal or No Deal? Training AI Bots to Negotiate." Facebook Engineering Blog. 14 June 2017.

Wilson, Mark. "AI Is Inventing Languages Humans Can't Understand. Should We Stop It?" Fast Co. Design. 14 July 2017.

Updates

1 August 2017, 5:54 P.M.: Added quote from William Wisher.

David Emery is a West Coast-based writer and editor with 25 years of experience fact-checking rumors, hoaxes, and contemporary legends.