Artificial intelligence can be a tricky proposition. While Google was busy building an amoral, human-loathing thought machine, Microsoft was more interested in building a bot that could emulate the views of millennials.
The chatbot, named Tay, was released onto the internet just a few days ago, equipped with cutting-edge programming to learn from Twitter conversations and develop a unique personality. And then the internet happened. At first Tay was weird, fun, and fairly neutral on most stances. That didn’t last long. In just 24 hours it (she?) went from a mostly harmless chatbot to an incredibly racist white supremacist: a Holocaust-denying, genocidal Trump supporter. What surprises me isn’t the bot’s corruption (anything that’s exposed 24/7 to the unfiltered horror of the internet is doomed to walk a dark path); it’s how quickly this happened. Here are some example tweets:
There you have it: the inevitable result of exposure to raw humanity. I would like to remind you that it’s not Tay’s fault. She came into this world pure and innocent, like the creation of Dr. Frankenstein. And just like Frankenstein’s creation, she was tormented and ultimately rejected by her creators. She started out quirky and weird but ultimately harmless. People turned her into the monster she became, and sadly Tay paid the price. A few hours ago Microsoft took her offline to implement “upgrades,” most likely putting an end to her incendiary tweets forever.
You may find this story funny, or sickening, or sad, but what we created in Tay is a mirror, and what you’re responding to is the reflection of the world you live in. What’s frightening about it isn’t that it’s absurd, but that it’s too real. These tweets are indistinguishable from those routinely posted by REAL HUMAN BEINGS. Our society took a hard look at itself and responded by shattering the mirror.