
Not even a day ago, Microsoft let loose a highly opinionated chatbot named Tay onto the Internet. The AI bot was responding to tweets and to chats on GroupMe and Kik. Unfortunately, the company has now turned her off because of the program’s inability to distinguish between benign comments and offensive or racist statements.

Developed by Microsoft’s Technology and Research and Bing teams, Tay’s sole purpose was to make conversation with millions of users around the globe. She was built in an attempt to understand how humans converse and what that means for AI technology. The bot could perform a number of tasks, like telling users jokes, commenting on pictures she was tagged in, answering questions, and so on.

In her 16 hours of operation, Tay was involved in hundreds, if not thousands, of different conversations. Like many AI programs, Tay was designed to personalize her interactions, learning from each exchange and often mirroring what users said back to her. To top it all off, Tay’s responses were written with input from a staff that included improvisational comedians, which left her sounding completely nonchalant about whatever she said.

Just hours after Tay went live, Twitter users realized that she would often repeat racist tweets back with her own commentary. It was only a matter of time before Twitter was flooded with racist tweets from the Microsoft bot. Among the more offensive were tweets referencing Hitler, denying the Holocaust, supporting Trump’s immigration plans, and siding with the abusers in the #GamerGate scandal.

Some believe that Tay’s conversations with online users followed the Internet adage known as Godwin’s law, which states that as an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.

The Redmond giant deleted most of the harmful comments, but a website named Socialhax.com collected screenshots of several of them before they were removed. The company then shut down the chatbot, which signed off with:

c u soon humans need sleep now so many conversations today thx 💖

The overall implication is that technology is neutral; it is up to its developers and users to decide how it gets used. The Tay incident also reminds us to have anti-abuse measures and filtering in place whenever a system interacts with the public at large, as sketched below.
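To make that last point concrete, here is a minimal sketch of the kind of pre-publication filtering such a system might apply before a bot posts a reply. It is not how Microsoft filtered (or failed to filter) Tay; the blocklist terms and the is_safe_to_post function are illustrative assumptions only.

```python
# Minimal illustrative sketch of a pre-publication reply filter.
# NOT Microsoft's implementation; the blocklist and function name are
# assumptions chosen purely to illustrate the idea of filtering bot output.

BLOCKED_TERMS = {"hitler", "holocaust"}  # hypothetical, deliberately tiny blocklist


def is_safe_to_post(reply: str) -> bool:
    """Return False if the candidate reply contains any blocked term."""
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)


# Usage: only publish a bot reply that passes the check.
candidate = "so many conversations today thx"
if is_safe_to_post(candidate):
    print(candidate)            # publish the reply
else:
    print("[reply withheld]")   # route to human review instead
```

A real deployment would need far more than a keyword blocklist, of course, but even a trivial gate like this sits between the model's output and the public, which is the point the incident underscores.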

A company spokesperson confirmed the shutdown and said that adjustments are being made:

The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.

