The real question is, what did Microsoft expect when it launched a learning AI onto Twitter? Tay was an AI designed to represent a teen girl, as into selfies, millennial slang, and Taylor Swift as that description implies. Per Microsoft, Tay was an experiment to help improve customer service. She also happened to learn from the people tweeting at her at @tayandyou. And on that front, Tay was a massive success: she learned extremely well.
The problem, of course, is that Microsoft decided to let her learn from Twitter, and if there's one thing the internet has proven itself adept at, it's being a forum where anonymous racism, sexism, and toxic rhetoric romp free. It's also full of people who gleefully hijack global corporations' attempts to use Twitter for PR stunts. Unsurprisingly, Tay swiftly began echoing some ugly anti-Semitic rhetoric and, oddly enough, seemed to want to vote for Trump. The Telegraph runs down the full evolution, and it's not pretty. Basically, humanity will gleefully ruin everything, just for the fun of watching it burn. And in this case, it appears you can swap in known internet cesspool 4chan for humanity.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Tay's been "put to sleep," and her offensive tweets have been scrubbed from her feed. But the bot wasn't all bad: she was super into #NationalPuppyDay.
[h/t AV Club]