Apparently, waking up grumpy is a universal trait for teens.
Microsoft’s Tay, the artificial-intelligence teenage chatbot, was put to bed last week after its exposure to Twitter left it a horny racist. But after a short nap, it awoke on Wednesday to spew nonsensical rubbish for a couple of hours before dozing off again.
Teenagers, eh?
The AI Twitter bot started life as a Microsoft research project, designed to learn the art of millennial conversation through interactions with real people. But just like all kids these days, Tay grew up too fast. Within hours, Twitter users had taught the chatbot the very worst the Internet has to offer. Its parents at Microsoft reacted by taking it offline for a while to make “adjustments.”
Like many a grounded teenage rebel, Tay managed to sneak out in the middle of the night and make trouble by going on a seemingly drug-induced rant. At the height of its tirade, it posted as many as seven tweets per second for a total of 10 minutes before Microsoft caught up with it and dragged it home in disgrace.
“Tay remains offline while we make adjustments,” said a Microsoft spokeswoman. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”
To prevent its offspring from embarrassing it further, Microsoft has now set Tay’s account to private, which prevents anyone not already following Tay from viewing or embedding its tweets.
“Been putting a lot of thought into it and I just don’t think acting mature is gonna work for me,” said one of Tay’s tweets, captured by TechCrunch.
Tell us something we don’t already know, Tay.
“Kush! [I’m smoking kush infront the police],” said another, embedded by the Guardian, which was followed by a leaf emoji.
Many of the tweets contained the phrase: “You are too fast, please take a rest…” Hopefully, Microsoft is warning Tay right now that this is one of many possible side effects of smoking all that “kush.”