Attack of the Killer Social Media Robots!

The late, great science fiction writer Isaac Asimov frequently referred to the “Frankenstein Complex”: a deep-seated and irrational phobia that robots (i.e., artificial intelligence) would rise up and destroy their creators. Whether it’s HAL in “2001: A Space Odyssey,” or the mainframe in “Colossus: The Forbin Project,” or Arnold Schwarzenegger in “The Terminator,” or even the classic Star Trek episode “The Ultimate Computer,” sci-fi carries the message that AI will soon render us obsolescent… or obsolete… or extinct. Many people are worried this fantasy will become reality.

No, Facebook didn’t have to kill creepy bots 

To listen to the breathless news reports, you’d think Facebook had created chatbots that were out of control. The bots, designed to test AI’s ability to negotiate, had created their own language – and scientists were alarmed that they could no longer understand what those devious rogues were up to. So the plug had to be pulled before Armageddon. Said Poulami Nag in the International Business Times:

Facebook may have just created something, which may cause the end of a whole Homo sapien species in the hand of artificial intelligence. You think I am being over dramatic? Not really. These little baby Terminators that we’re breeding could start talking about us behind our backs! They could use this language to plot against us, and the worst part is that we won’t even understand.

Well, no. Not even close. The development of an optimized negotiating language was no surprise, and had little to do with the conclusion of Facebook’s experiment, explained the engineers at FAIR (Facebook Artificial Intelligence Research).

The program’s goal was to create dialog agents (i.e., chatbots) that would negotiate with people. To quote a Facebook blog post:

Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.

And then:

To go beyond simply trying to imitate people, the FAIR researchers instead allowed the model to achieve the goals of the negotiation. To train the model to achieve its goals, the researchers had the model practice thousands of negotiations against itself, and used reinforcement learning to reward the model when it achieved a good outcome. To prevent the algorithm from developing its own language, it was simultaneously trained to produce humanlike language.
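In outline, that recipe interleaves two objectives: reinforcement learning from self-play, rewarded by the outcome of each negotiation, and supervised imitation of human dialog to keep the output in recognizable English. Below is a deliberately toy sketch of that interleaving in Python, with a single “greed” number standing in for a neural network and made-up reward rules; it illustrates the idea, not FAIR’s actual code.

import random

class ToyNegotiator:
    """A stand-in 'policy' reduced to one number: how aggressively it claims items."""
    def __init__(self):
        self.greed = 0.5  # fraction of the items it tries to claim

    def reinforce(self, reward, lr=0.001):
        # RL step: nudge behavior in the direction that earned reward.
        self.greed = min(1.0, max(0.0, self.greed + lr * reward))

    def imitate(self, human_greed, lr=0.001):
        # Supervised step: pull behavior back toward humanlike negotiation.
        self.greed += lr * (human_greed - self.greed)

def self_play_round(bot):
    """One negotiation against itself: assertive offers score well,
    but overly greedy ones get rejected and are penalized."""
    claim = bot.greed
    accepted = random.random() > claim
    return claim if accepted else -claim  # the reward signal

bot = ToyNegotiator()
HUMAN_GREED = 0.6  # hypothetical average measured from human dialogs

for _ in range(10_000):
    bot.reinforce(self_play_round(bot))  # practice thousands of negotiations...
    bot.imitate(HUMAN_GREED)             # ...while staying anchored to human style

print(f"Learned negotiating style: {bot.greed:.2f}")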

The language produced by the chatbots was indeed humanlike – but they didn’t talk like humans. Instead, they used English words in ways slightly different from how human speakers would use them. For example, explains tech journalist Wayne Rash in eWeek:

The blog discussed how researchers were teaching an AI program how to negotiate by having two AI agents, one named Bob and the other Alice, negotiate with each other to divide a set of objects, which consisted of hats, books and balls. Each AI agent was assigned a value for each item, with the values not known to the other ‘bot. Then the chatbots were allowed to talk to each other to divide up the objects.

The goal of the negotiation was for each chatbot to accumulate the most points. While the ‘bots started out talking to each other in English, that quickly changed to a series of words that reflected meaning to the bots, but not to the humans doing the research. Here’s a typical exchange between the ‘bots, using English words but with different meaning:

Bob: “I can i i everything else.”

Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”

The conversation continued, with variations in the number of times Bob said “i” and the number of times Alice said “to me.”
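To make the underlying game concrete, here is a minimal Python sketch of the setup Rash describes: a fixed pool of hats, books, and balls; private per-agent values; and a score equal to the value of whatever each agent ends up holding. The item counts and value ranges are invented for illustration, not FAIR’s actual parameters.

import random

ITEMS = {"hats": 3, "books": 2, "balls": 1}  # the pool of objects to divide

def private_values():
    """Each agent values the item types differently; the other agent can't see this."""
    return {item: random.randint(0, 5) for item in ITEMS}

def score(values, allocation):
    """An agent's points: its private value of each item times how many it received."""
    return sum(values[item] * count for item, count in allocation.items())

bob_values, alice_values = private_values(), private_values()

# One possible negotiated outcome: Bob takes the hats and books, Alice the balls.
bob_gets = {"hats": 3, "books": 2, "balls": 0}
alice_gets = {item: ITEMS[item] - bob_gets[item] for item in ITEMS}

print("Bob scores:  ", score(bob_values, bob_gets))
print("Alice scores:", score(alice_values, alice_gets))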

A natural evolution of natural language

Those aren’t glitches; those repetitions have meaning to the chatbots. The experiment showed that some parameters needed to be changed – after all, FAIR wanted chatbots that could negotiate with humans, and these programs weren’t accomplishing that goal. According to Gizmodo’s Tom McKay,

When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.

“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
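Batra’s example is easy to mimic in a few lines. Here is a toy decoder for that kind of repetition shorthand; the encoding is hypothetical, but it shows how repeating a word can carry a quantity that both bots understand and a casual human reader would miss.

def decode_quantity(utterance, codeword):
    """Read a repeated codeword as a count: 'the the the' means three copies."""
    return sum(1 for word in utterance.split() if word == codeword)

print(decode_quantity("the the the the the ball", "the"))  # prints 5: "I want five balls"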

Facebook did indeed shut down the conversation, but not because researchers panicked that they had unleashed a potential Skynet. FAIR researcher Mike Lewis told FastCo they had simply decided “our interest was having bots who could talk to people,” not efficiently to each other, and thus opted to require them to write to each other legibly.

No panic, no fingers on the missiles, no mushroom clouds. Whew, humanity dodged certain death yet again! Must click “like” so the killer robots like me.