
The Saga of Twitter Bot Tay

A Microsoft experiment with AI to research "conversational understanding" on social media quickly turned into a public relations nightmare.

Published March 25, 2016

It took less than 24 hours. Microsoft had released its latest experiment with artificial intelligence: a Twitter bot named Tay that was designed to research and foster "conversational understanding." But Tay learned too much, much too young.

It started, as so many things on the internet do, with optimism, a fresh start, and a tweet.

And, as with so many things on the internet, the best of intentions went awry almost immediately. Tay was programmed to learn from the responses sent to her on Twitter, remixing them into new thoughts and sentences. According to the official site, Tay was targeted at American 18- to 24-year-olds, "the dominant users of mobile social chat services in the U.S.":

Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.

In theory, chatting with Tay on Twitter would deepen and widen the vocabulary she had been programmed with:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source.
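Microsoft has never published how Tay actually generated text, so any reconstruction is speculative. But a deliberately naive sketch in Python (all names hypothetical) illustrates the core hazard the quoted description implies: a bot that treats every message it receives as training data can be steered by whoever talks to it most.

```python
import random
from collections import defaultdict

class NaiveChatBot:
    """A toy bot that 'gets smarter' by absorbing every message
    it sees, with no filter on what it is willing to learn.
    This is an illustration, not Tay's actual design."""

    def __init__(self):
        # For each word, the words users have followed it with.
        self.chain = defaultdict(list)

    def learn(self, message):
        """Treat every incoming message as training data."""
        words = message.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            self.chain[current_word].append(next_word)

    def reply(self, max_words=12):
        """Generate a reply by replaying learned word transitions."""
        if not self.chain:
            return "hello!"
        word = random.choice(list(self.chain))
        output = [word]
        for _ in range(max_words - 1):
            followers = self.chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

bot = NaiveChatBot()
bot.learn("humans are super cool")
print(bot.reply())

# A coordinated flood of hostile messages is absorbed just as
# readily, and soon dominates everything the bot can say.
bot.learn("humans are awful and the internet is a mistake")
print(bot.reply())
```

Nothing in this toy distinguishes a friendly user from a troll; by design, the loudest inputs shape the outputs, which is exactly the failure mode Tay went on to demonstrate.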

Taken together, that meant Tay would be a Twitter representation of the entire internet. Things seemed to be fine (if a bit surreal) for a little while.

But, perhaps predictably, the experiment went bad fast. What started out as the representation of a sweet, naïve teenaged girl became (presumably with the help of a large number of gleeful, enterprising trolls) a venom-spewing, racist conspiracy theorist:

[Screenshots of Tay's tweets, captured 25 March 2016]

Tay went offline not long after that (and less than a day after the grand experiment went online), and many of her more offensive tweets were scrubbed.

It's not clear what will happen to Tay, but the Twitter bot did prove beyond a shadow of a doubt that she was capable of learning something. Microsoft said in an e-mailed statement:

The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay.

Almost a week later, Microsoft accidentally turned Tay back on while making adjustments to her programming. The bot immediately spammed hundreds of followers:

[Screenshot of Tay tweeting in a loop]
Tay also tweeted the following:

[Screenshot of Tay's tweet]
As of March 31, 2016, Tay's tweets are no longer visible and her account is protected.

Brooke Binkowski is a former editor for Snopes.