Tay, the artificial intelligence (AI) chatbot, at first merely repeated racist comments, but then began to incorporate that language into its own tweets.
Less than a day after she joined Twitter, Microsoft's AI bot, Tay.ai, was taken down for becoming a sexist, racist monster. AI experts explain why it went terribly wrong.
Tay is in Robot Hell now, with a Teddy Ruxpin that had so much vinegar poured on him that he became extremely racist. Burn in Hell, Tay. But it didn’t need to go this way.
Tay.ai, the coolest chatbot since SmarterChild, is “so fricken excited” to talk to you. That’s because she’s engineered to talk like a teenager — and does a pretty convincing job of it, too.
Barely hours after Microsoft debuted Tay AI, a chatbot designed to speak the lingo of the youths, on Wednesday, the artificially intelligent bot went off the rails and became, like, so totally racist.
How Twitter taught a robot to hate. By Emily Crockett. Mar 24, 2016, 4:11 PM UTC ... Tay was dubbed by her creators as an “AI fam from the internet that’s got zero chill!” (Oh boy.) ...
Well, that escalated quickly. Less than 24 hours after first talking with the public, Microsoft’s millennial-minded chatbot Tay was pulled offline amid pro-Nazi leanings. According to her ...
The bot, named Tay, is artificially intelligent and converses with people who send messages to it. It appears to be based on Microsoft’s machine learning work and claims that it will ...
The robot was programmed to learn from its conversations with people – so it was only a matter of time before this happened. Microsoft was forced to switch Tay off that same day, but with high hopes ...
An artificially intelligent “chatbot” has quickly picked up some of the worst of human traits. Microsoft’s Tay robot was taken offline less than 24 hours after its launch because it was ...