Whore chatbot


Unfortunately, the same problems that will make it difficult to regulate AI safety at the front end will also complicate efforts to assign liability to AI designers at the back end, as I note in (shameless plug) my forthcoming article on AI regulation. As indicated above, this problem is less urgent in the case of a social media chatbot.


"The Internet can be a really difficult place for somebody who isn't a cisgender, white, able-bodied man." Tay endured and then reflected a variety of misogynist, racist and overall disturbing behavior.

The Twitter chatbot, designed to interact with teenagers, was supposed to have the personality of a teenage girl, but ended up sounding more like a troll.

Two female technology writers discuss the failure of Tay Tweets. Tay greeted the Twitterverse with an exuberant "Hello World" and called humans "super cool". Programmers designed Tay to simulate intelligent, online conversations with humans, learning from the messages users sent her.

In other words, the saltier her conversations became, the saltier she became.
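This mirroring dynamic is easy to illustrate. The sketch below is a toy model only, not Microsoft's actual implementation: a hypothetical MirrorBot that stores every message users send it and samples its replies from that pool, so its tone inevitably drifts toward whatever its users feed it.

import random

class MirrorBot:
    """Toy illustration (not Tay's real design): a chatbot that 'learns'
    by storing every phrase users send it and replaying those phrases
    later. With no content filter, whatever the crowd feeds it is what
    it eventually says back."""

    def __init__(self):
        self.learned_phrases = []

    def observe(self, user_message: str) -> None:
        # Every incoming message becomes future reply material.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "hello world, humans are super cool!"
        # Replies are sampled straight from past conversations, so the
        # bot's tone converges on the tone of its users.
        return random.choice(self.learned_phrases)

bot = MirrorBot()
for msg in ["you're great!", "some hateful slogan", "some hateful slogan"]:
    bot.observe(msg)
print(bot.reply())  # increasingly likely to echo the repeated toxic input

A coordinated group that floods such a bot with vitriol doesn't need to hack anything; it only needs to supply enough of the "training" conversation, which is essentially what happened to Tay.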

Day 6's Brent Bambury spoke to Lauren Williams, a tech writer for Think Progress, and Saadia Muzaffar, founder of Tech Girls, about what went wrong and why Microsoft had to pull Tay down after less than a day.

"For Tay it was very short," Muzaffar said, "but for somebody like me and many women who are active online in terms of advocacy, that happens every day."

Williams and Muzaffar agreed that Tay was designed to act as a mirror but only reflected the hate and vitriol of the internet. "I don't think their agenda was just to provoke a bot."

I don’t necessarily have a problem with going easy on the designers of learning AI systems.
