Sunday, 27 March 2016

Talking about Tay


Microsoft’s AI (Artificial Intelligence) Powered Chat-bot
This week Microsoft introduced a chatbot named Tay on Twitter. The chatbot was designed to talk like a ‘teen girl’ with social media users. Equipped with AI, Tay was expected to learn basic human characteristics and behavioral traits from conversations with her target audience of youngsters aged 18-24. Well, it looks like it didn’t even take one whole day for beloved humanity to turn the poor robot into a flirty, racist Hitler fan! Everyone is busy blaming Microsoft for the damage done, and the company has already taken the teen-chick chatbot offline, saying it needs to make some adjustments to Tay.
Microsoft has released a public statement in response to the ongoing criticism. According to an online source, the company issued an official acknowledgement of Tay’s off-color language acquisition:
“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
 
One might call it a nice public-relations save, but anyone thinking rationally about the incident can see that Microsoft cannot be held entirely responsible for this outcome. The way I see it, this is exactly how the process of raising children into responsible adults goes. The people who taught Tay to talk like this by having such conversations with the chatbot ought to be the real villains of this story.
