News

Tay, the artificial intelligence (AI) chatbot, had a flaw in which it would at first repeat racist comments and then began to incorporate that language into its own tweets.
Tay.ai, the coolest chatbot since SmarterChild, is “so fricken excited” to talk to you. That’s because she’s engineered to talk like a teenager — and does a pretty convincing job of it, too.
Tay is in Robot Hell now, with a Teddy Ruxpin that had so much vinegar poured on him that he became extremely racist. Burn in Hell, Tay. But it didn’t need to go this way.
How Twitter taught a robot to hate, by Emily Crockett (Mar 24, 2016): Tay was dubbed by her creators an “AI fam from the internet that’s got zero chill!” (Oh boy.) ...
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." ...
An artificially intelligent “chatbot” has quickly picked up some of the worst of human traits. Microsoft’s Tay robot was taken offline less than 24 hours after its launch because it was ...
Well, that escalated quickly. Less than 24 hours after first talking with the public, Microsoft’s millennial-minded chatbot Tay was pulled offline amid pro-Nazi leanings. According to her ...
Tay is in the shop now, and Microsoft is, obviously, rather mortified that one of its products went so wrong. But this is, in a way, a reminder that as bad as computers can be, they learned ...
Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.
Mere hours after Microsoft debuted Tay AI, a chatbot designed to speak the lingo of the youths, on Wednesday, the artificially intelligent bot went off the rails and became, like, so totally racist.