Microsoft Issues Apology After Tay AI Twitter Bot Churns out Racist and Sexist Tweets

Microsoft's social media experiment, an artificial intelligence (AI) Twitter chatbot called Tay, has been taken down barely 24 hours after it was activated. After taking it down, the company also issued an apology for the "racist" and "sexist" tweets the chatbot churned out. According to a report in The Guardian, Microsoft said it was "deeply sorry" after the chatbot went on to say that feminism was like cancer and that the Holocaust was "made up" and did not happen.

Tay, the Twitter chatbot, was launched by Microsoft last Wednesday. According to another report in The Verge, this was not the company's first try with this kind of technology: it had previously launched a chatbot called XiaoIce in China, which was eventually used by over 40 million people in the country. But while XiaoIce was a successful experiment, the same cannot be said of Tay.

Tay was designed by Microsoft to learn and "become smarter" through its interactions with other users. In this case, however, the Guardian report noted that while the bot did learn, instead of learning anything useful it picked up anti-feminist and anti-Semitic language that was apparently fed to it by actual Twitter users. Early on, Microsoft noticed these developments but came to the bot's defense, calling it a "learning machine." Later, when the bot started targeting a particular user with sexist remarks, it became clear to Microsoft that the experiment had gone awry.

Along with the apology, Microsoft promised to revive Tay only if its engineering team can ensure that it will not be influenced by web users "in ways that undermine the company's principles and values." According to Peter Lee, the company's vice president of research, the efforts of these web users amounted to "a coordinated attack by a subset of people" intended to exert a malicious influence on the chatbot. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," Lee wrote.
