So Microsoft’s chatbot — Tay AI — which was silenced a few days back after the Internet taught it to be racist, came back online today. This appearance was even briefer than the previous one, however, as the bot turned into a relentless spammer and was put back to sleep by its creator, Microsoft.
The tweets Tay sent during this brief showing have obviously been removed, but Jon Russell of TechCrunch did document a few of them, shown below:
And this happened as well:
Tay, which was developed jointly by the Microsoft Technology and Research and Bing teams, was an effort to conduct research on conversational understanding. The Redmond giant envisioned it as a bot that could understand, and hence adapt through, its interactions with humans.
That of course did not happen, as Tay turned pretty much racist during its first appearance. That did not stop Microsoft from bringing it back online, as promised. That too, however, did not work out. Tay apparently could not keep pace with itself, ultimately spamming the timelines of its over 200K followers at speeds as high as seven tweets per second!
Witnessing this behaviour, Microsoft took it off Twitter, put it back to sleep and made the account private — no new followers as of now. Some theories also suggest a possible hack, though these have not elicited any response from Microsoft.
This does bring up questions about how difficult it is to develop AI as capable as humans across a range of activities. And while Google’s AlphaGo did rout the world Go champion by an impressive 4-1 margin, Tay was a far more difficult — and rather over-ambitious — attempt to bring AI to a level where it can interact as well as humans do. A long way to go, I guess, but keep the research coming!
Founder of The Tech Portal, now a consulting editor for the platform. Has advised and worked with numerous early- and mid-stage startups in various roles over the past five years. You can click on his LinkedIn profile and drop in a message to get in touch.