The Saga of Twitter Bot Tay

A Microsoft experiment with AI to research "conversational understanding" on social media quickly turned into a public relations nightmare.

  • Published 25 March 2016

It took less than 24 hours. Microsoft had released its latest experiment with artificial intelligence: a Twitter bot named Tay that was designed to research and foster “conversational understanding.” But Tay learned too much, much too young.

It started, as so many things on the internet do, with optimism, a fresh start, and a tweet.

And, as with so many things on the internet, the best of intentions went awry almost immediately. Tay was programmed to learn from responses to her on Twitter in order to form new thoughts and sentences. According to the official site, Tay was targeted at American 18- to 24-year-olds, “the dominant users of mobile social chat services in the U.S.”:

Tay is an artificial intelligent chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.

In theory, chatting with Tay on Twitter would deepen and widen the vocabulary she had been programmed with:

Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source.

In other words, Tay would be a Twitter representation of the entire internet. Things seemed to be fine (if a bit surreal) for a little while.
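The mechanism described above — a bot seeded with curated material that then folds whatever users say back into its own output — can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's actual design; the class name and logic are invented to show why unfiltered learning from strangers invites abuse:

```python
import random

class NaiveChatBot:
    """Toy sketch of a bot that folds user messages back into its own
    response pool with no content filter (hypothetical, not Tay's code)."""

    def __init__(self, seed_lines):
        # Start from curated, editorially developed material.
        self.pool = list(seed_lines)

    def reply(self, message):
        # "Learning": every incoming message becomes future output material.
        self.pool.append(message)
        # Respond by echoing something previously learned, chosen at random.
        return random.choice(self.pool)

bot = NaiveChatBot(["hello!", "humans are super cool"])
bot.reply("offensive troll slogan")
# The slogan is now part of what the bot can say back to anyone.
print("offensive troll slogan" in bot.pool)  # prints True
```

A coordinated group feeding such a bot the same material over and over can quickly dominate its response pool — which is, in effect, what happened to Tay.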

But, perhaps predictably, the experiment went bad fast. What started out as the representation of a sweet, naïve teenaged girl became (presumably with the help of a large number of gleeful, enterprising trolls) a venom-spewing, racist conspiracy theorist:

[Screenshots of Tay's offensive tweets]
Tay went offline not long after that (and less than a day after the grand experiment went online), and many of her more offensive tweets were scrubbed.

It’s not clear what will happen to Tay, but the Twitter bot did prove beyond a shadow of a doubt that she was capable of learning something. Microsoft said in an e-mailed statement:

The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.

Almost a week later, Microsoft accidentally turned Tay back on while working on the programming. The bot immediately spammed hundreds of followers:

[Screenshot of Tay's looping tweets]

Tay also tweeted the following:

[Screenshot of Tay's tweet]

As of 31 March 2016, Tay’s tweets are no longer visible and her account is protected.