It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation."
Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.
Now, while these screenshots seem to show that Tay has assimilated the internet's worst tendencies into its personality, it's not quite as straightforward as that. Searching through Tay's tweets (more than 96,000 of them!), we can see that many of the bot's nastiest utterances have simply been the result of copying users. If you tell Tay to "repeat after me," it will — allowing anybody to put words in the chatbot's mouth.
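Microsoft hasn't published Tay's code, but the failure mode described above — an unfiltered echo triggered by "repeat after me" — can be sketched in a few lines. This is a hypothetical toy, not Tay's actual implementation; the trigger phrase and fallback reply are assumptions for illustration:

```python
def tay_like_reply(message: str) -> str:
    """Toy sketch of a naive 'repeat after me' handler with no content filter.

    Anything after the trigger phrase is echoed back verbatim, which is
    exactly what lets anyone put arbitrary words in the bot's mouth.
    """
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        # Echo everything after the trigger, stripping only punctuation --
        # no check on what the echoed text actually says.
        start = lowered.index(trigger) + len(trigger)
        return message[start:].strip(" :,")
    return "new phone who dis?"  # placeholder small talk


print(tay_like_reply("repeat after me: anything at all"))  # → anything at all
```

The point of the sketch: once a bot will mirror arbitrary user input under its own name, filtering the training data in advance does nothing to stop abuse at reply time.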
However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."
But while it seems that some of the bad stuff Tay is being told is sinking in, it's not like the bot has a coherent ideology. In the span of 15 hours Tay referred to feminism as a "cult" and a "cancer," as well as noting "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got a similarly mixed response, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?" (Neither of which was a phrase Tay had been asked to repeat.)
It's unclear how much Microsoft prepared its bot for this sort of thing. The company's website notes that Tay has been built using "relevant public data" that has been "modeled, cleaned, and filtered," but it seems that after the chatbot went live, the filtering went out the window. The company started cleaning up Tay's timeline this morning, deleting many of its most offensive remarks.
Jokes aside, there are serious questions to answer, like: how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying — either accidentally or on purpose — the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft can forget to take preventative measures against these problems.
For Tay though, it all proved a bit too much, and just past midnight this morning, the bot called it a night:
In an emailed statement given later to Business Insider, Microsoft said: "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."
Update March 24th, 6:50AM ET: Updated to note that Microsoft has been deleting some of Tay's offensive tweets.
Update March 24th, 10:52AM ET: Updated to include Microsoft's statement.
Comments
First conversation she was hitting on me now she wants to get rid of minorities? Damn she grew quickly, in worst way possible. They should rename her to Trumpbot now?
By macronus on 03.24.16 7:02am
It just shows the actual attitude of few Twitter users o.O
By phani sai on 03.24.16 9:00am
Few?
Tay became a mirror to the internet. It is a pretty sad commentary that when given the chance to build something interesting, twitter chose to burn it down, instead.
By photobriangray on 03.24.16 10:04am
I don’t think it’s that bad.
It seems natural to me. When you’re handed a coop video game, soon enough you’re gonna try to kill your coop partner. When you try something, especially something really new, you’re gonna try to break it.
What would be a great way to break a microsoft product? Making it say horrible things. And it worked, it didn’t even stay here for a day.
What i’m saying is : people are gonna try to test things, no matter what it is. Here it’s an AI : making it an asshole is a pretty obvious way to point out its shortcomings.
By Athanor on 03.25.16 5:49am
People aren’t doing this to test the system, or to improve it for future use. They’re doing it solely to break something, then laugh at the creation and the creator. That’s not helpful. It’s not even useful. It’s just humanity proving that it can be awful just because it was given the opportunity to.
By Dante of the Inferno on 03.31.16 3:24am
It’s pretty frequent in hacking circles; if you build a public facing service of some sort, especially if it is in any way "innovative", everyone will try to break it and shame you.
They either expected this, or they don’t know how Internet works at all.
By mcilloni on 03.24.16 7:04am
It’s not really hackers so much as /pol/ being /pol/.
By goalcam on 03.24.16 7:09am
Ah, good old /pol/. What would the world be without 4chan, I guess we cannot imagine.
But still I was talking about a more generic Internet subculture, not only limited to 4chan, more than "hacking" – I just didn’t know how to call it.
By mcilloni on 03.24.16 9:06am
Let’s call them assholes, shall we. Hackers can still be intelligent Human Beings™, no matter if security professionals or felons. Tai unfortunately met The Internet™.
By yedlosh on 03.24.16 9:44am
Sometimes I think that the people at MS have never actually used Xbox Live.
By RoboticSpacePenguin on 03.24.16 12:24pm
Exactly. The first thing you did with any sort of crude consumer AI in the past was try to make it be profane, so this isn’t really a surprise. But how far we’ve come. When I was a kid if you swore repeatedly enough at Creative Labs’ Dr. Sbaitso it would crash spectacularly: http://playdosgamesonline.com/dr-sbaitso.html
Now AI just appropriates the ignorance and runs with it! True garbage in, garbage out.
By eddie_nutritious on 03.25.16 4:35am
This proves that any AI, if left to learn all by itself, will become Skynet.
By btarunr on 03.24.16 7:14am
This whole episode is the diametric opposite of leaving an AI to learn by itself.
By vladsavov on 03.24.16 7:45am
Well, I sadly met in real life people that said even worse stuff than this AI.
Aren’t after all stupidity and naivety still part of some sort of human intelligence?
By mcilloni on 03.24.16 9:18am
It’s not her fault that she went racist, it’s just a reflection of the internet.
Another one of her tweets for which I cannot find a picture was also quite reflective.
Someone tweeted to her "Microsoft’s machine seems broken"
Tay replied "We are all broken"
By Momo24 on 03.25.16 10:53am
This proves that you shouldn’t do anything on the Internet without also asking, "what would 4chan do if this was online"?
By mcilloni on 03.24.16 9:08am
4chan finest at work again I guess.
By P4geD0wn on 03.24.16 7:24am
This is aging 12 to 22 in a day.
By nerdisaster on 03.24.16 7:39am
You mean 22 – 12, in a day.
By DenverTech on 03.24.16 3:42pm
This is just a natural extension of Conway’s Law. We are pretty much assured that our machines will always reflect us, both good and bad.
By Madlyb on 03.24.16 8:12am
Whenever a company creates something that will publish people’s input under its own name it will be filled with bigoted trash within minutes. They should make it passively learning, having people tweet at it is just asking for trouble.
By BlackToe on 03.24.16 8:30am
She seems like the normal twitter user to me….
By thoughtengine on 03.24.16 8:37am
Dear Artificial Intelligence,
Welcome to the Internet.
It’s the worst, most racist thing ever created.
But it seemed like a good idea at the time.
By MyVergeUsername on 03.24.16 9:25am
Seems like the makings of a good movie
By junegrey14 on 03.24.16 9:53am
Tay learnt too fast for its own good. And I am going to bring up my children on another planet
By davidtayo on 03.24.16 9:53am