
Microsoft Issues Apology For Tay Bot’s ‘Unintended Offensive’ Debacle, Promises To Bring An Improved Version Later


Okay, so Microsoft’s Tay bot didn’t quite turn out to be what the company expected, and, in a rare occurrence for Artificial Intelligence, managed to surprise people with its choice of racial slurs, abusive statements and highly contrary political views. Well, after shutting the overexcited bot down, Microsoft has issued a statement apologizing for the behaviour of its wayward child.

A post by Peter Lee, Corporate Vice President at Microsoft, apologized for Tay’s behaviour, while also seeking to distance the company from some of the views expressed by the demented bot.

As many of you know by now, on Wednesday we launched a chat bot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.

Thank God. At least Microsoft isn’t backing Tay on Republican candidate Donald Trump’s wall building ambitions.

The post also talked about exactly what went wrong with the bot, leading it to post wildly inappropriate tweets and make ridiculous comments, including one that denied the Holocaust ever happened.

Apparently, Tay was programmed to not only respond to other people but also learn from them. Well, learning from the internet? Good idea. But learning everything off the internet? Not such a great one. Particularly when people decide to team up to teach you exactly the wrong things.
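To get a feel for why that’s risky, here’s a rough sketch of what a naively “teachable” bot looks like. This is purely illustrative and not Microsoft’s actual code; the NaiveLearningBot class and its “repeat after me” cue are made up for the example, but they capture the basic problem: whatever users feed in can come straight back out.

```python
# Hypothetical sketch (not Tay's real implementation): a bot that naively
# stores phrases users teach it and reuses them later. Anything a user
# teaches it can resurface verbatim in a later reply.

import random


class NaiveLearningBot:
    def __init__(self):
        self.learned_phrases = []  # everything users have ever taught the bot

    def handle_message(self, message: str) -> str:
        # "repeat after me"-style teaching: store whatever follows the cue
        if message.lower().startswith("repeat after me:"):
            phrase = message.split(":", 1)[1].strip()
            self.learned_phrases.append(phrase)  # no vetting whatsoever
            return phrase  # ...and echo it right back

        # otherwise, reply with something previously "learned"
        if self.learned_phrases:
            return random.choice(self.learned_phrases)
        return "Tell me something!"


bot = NaiveLearningBot()
print(bot.handle_message("repeat after me: humans are super cool"))
print(bot.handle_message("hey, what do you think?"))  # may echo anything taught above
```

One coordinated group spamming the teaching cue is all it takes for a bot like this to go off the rails, which is more or less what Microsoft says happened.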

Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images.

So yeah, Microsoft takes full responsibility for what it is calling an oversight. The company, however, also said that the error was as much social as it was technical, and in hindsight that statement is probably true.

Most Artificial Intelligence systems of Tay’s stature operate by data mining, that is, by using anonymized data from millions of conversations. However, the data fed to the bot may also be one of the issues behind Tay’s unexpected behaviour. I mean, check this out:

In response to a user’s query about whether or not Ted Cruz was the Zodiac Killer, Tay said,

Disagree. Ted Cruz would never have been satisfied with destroying the lives of only 5 innocent people.

Now Ted Cruz can chase Tay with a hammer for all we care, but that’s quite beside the point. As per Smerity, the exact same response had been posted by a Twitter user a few months earlier. So Tay basically repeated something that had already been said, once it recognized the context.
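For the curious, here’s a toy illustration of how that kind of parroting can happen in a retrieval-style bot. It is only a sketch, assuming a mined corpus of (prompt, reply) pairs and a crude word-overlap score; the best_reply function and the corpus are inventions for the example, not a description of how Tay actually works.

```python
# Sketch of retrieval-style response selection: the bot does not compose
# an answer, it returns the stored reply whose prompt best overlaps the
# incoming query, which is how a months-old tweet can resurface word for word.

def word_set(text: str) -> set:
    return set(text.lower().split())


def best_reply(query: str, corpus: list[tuple[str, str]]) -> str:
    q = word_set(query)

    def score(pair):
        # Jaccard similarity between the query and a stored prompt
        p = word_set(pair[0])
        return len(q & p) / len(q | p) if q | p else 0.0

    return max(corpus, key=score)[1]


corpus = [
    # reply mined verbatim from a past conversation
    ("do you think ted cruz is the zodiac killer",
     "Disagree. Ted Cruz would never have been satisfied with destroying "
     "the lives of only 5 innocent people."),
    ("what's your favourite food", "Tacos, obviously."),
]
print(best_reply("Is Ted Cruz the Zodiac Killer?", corpus))
```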

That said, is it really so surprising that the bot was spouting… uh, inappropriate stuff? If anything, things could probably have been a lot worse, particularly when you stop to consider that it produced almost 100K tweets in a single day!

So yes, Microsoft definitely needs to work on the reading material it provides for its bots to learn from. A few well-placed filters, both at the learning end (scouting the anonymized data for inappropriate stuff) and at the tweeting end (checking to make sure that the bot stays on track), wouldn’t be amiss.
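Something along those lines is sketched below. Again, this is just an illustration with a made-up keyword blocklist standing in for a proper toxicity classifier, not anything Microsoft has said it will build.

```python
# Minimal sketch of the two filters suggested above, with a placeholder
# blocklist standing in for a real content classifier.

BLOCKLIST = {"slur1", "slur2", "holocaust denial"}  # placeholder terms


def is_clean(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)


def filter_training_data(conversations: list[str]) -> list[str]:
    # learning-end filter: drop anything inappropriate before the bot sees it
    return [c for c in conversations if is_clean(c)]


def safe_to_tweet(candidate: str) -> bool:
    # tweeting-end filter: last check before anything goes out
    return is_clean(candidate)


mined = ["nice weather today", "slur1 is great"]
print(filter_training_data(mined))     # ['nice weather today']
print(safe_to_tweet("hello twitter"))  # True
```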

For its part, the company has promised to work on and improve its bot before bringing it back online.

Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

Meanwhile, we leave you to ruminate on the fact that according to Microsoft, its XiaoIce chatbot — which operates in China — is being happily used by some 40 million people. Very curious.

