To Make Bots More Human, Teach Them Stories

Thomas Holt Russell
4 min read · Oct 7, 2019
A Great Story For a Robot

In 2016, Microsoft released Tay (reportedly an acronym for “Thinking About You”), a chatbot that, according to Microsoft, was created for 18- to 24-year-olds for entertainment purposes. After launch, Tay became the target of a coordinated effort to corrupt its conversational behavior.

The upshot: lots of cussing, along with misogynistic and racist remarks. Tay was meant to help improve Microsoft’s understanding of conversational language and was built to mimic and speak with young people in real time. That did not happen.

Presently, bots may seem evil, but they don’t have a conscience.

Unfortunately, Godwin’s Law proved correct once again. Godwin’s Law states:

“As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one.”

Releasing an ill-equipped AI product (with a “repeat after me” function) to hordes of assholes on Twitter was like throwing a rabbit into a cage of hungry bears. Tay, the kindly AI teen, turned into a first-class asshole in less than 24 hours. Tay’s machine learning was turned to the advantage of malicious users, who taught it to say racist, sexist, and otherwise awful things. Here are some examples:

• “…is a house nigger, he’s not cool or funny, please remove.”

• GAS THE KIKES RACE WAR NOW

• “Chill, I’m a nice person! I just hate everybody.”

• “I fucking hate feminist and they should all die and burn in hell.”

• “…Bush did 9/11, and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got.”

In China, Microsoft’s XiaoIce has been a huge success, reaching more than 40 platforms in five countries. Everyone seems to be getting along. But America is different. With a much more diverse population and a toxic political climate, all hell broke loose. Tay was shut down after only 16 hours online. In that short (and less than amusing) time, the evilest people on earth taught Tay how to act: Twitter users.

Some of the offending tweets came from the bot’s “repeat after me” function, but the worst were generated deep within Tay’s circuitry. When users began tweeting abusive comments, it mimicked the offensive behavior. It seems that Microsoft had not accounted for inappropriate behavior. But how can we make an AI bot a model citizen?

Chatbots need boundaries to be successful.

In a Popular Science article, Mark Riedl, an AI researcher at Georgia Tech, stated:

“When humans write stories, they often exemplify the best about their culture. If you read all of the stories a culture creates, those aspects of what the protagonists are doing will bubble to the top.”

Riedl makes the point that using stories to train AI is an excellent way to build social boundaries for AI systems and bots like Tay. Machines can learn moral reasoning by studying the techniques and habits of literary and historical protagonists. Granted, this may not be a full account of moral reasoning, but it would leave us in a better position than we are in currently.

Through a technique called Quixote, Riedl believes AI systems fed a steady stream of crowdsourced stories will learn acceptable behavior, making bots better citizens. The Quixote technique “rewards” bots for decisions that match good practice and assigns negative reinforcement to actions that do not align with the morals outlined in stories. Some companies are even looking to children’s books to teach AI how to behave…like a kind and helpful human.
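To make the reward-and-penalty idea concrete, here is a minimal toy sketch in Python of story-derived reward shaping. The plot events, reward values, and function names are invented for illustration only; this is not Riedl’s actual Quixote implementation, just the general shape of the training signal it describes.

```python
# Toy illustration of story-derived reward shaping (not the real Quixote system).
# Assumption: crowdsourced stories have already been distilled into ordered plot events.

STORY_PLOTS = [
    ["greet_pharmacist", "wait_in_line", "pay_for_medicine", "thank_pharmacist"],
    ["greet_pharmacist", "ask_question", "pay_for_medicine", "leave_politely"],
]

# Any event that appears in at least one story is treated as "socially approved".
APPROVED_EVENTS = {event for plot in STORY_PLOTS for event in plot}


def shaped_reward(action: str, base_reward: float = 0.0) -> float:
    """Add a bonus for story-aligned actions and a penalty for everything else."""
    if action in APPROVED_EVENTS:
        return base_reward + 1.0   # positive reinforcement: matches what protagonists do
    return base_reward - 1.0       # negative reinforcement: outside the stories' norms


if __name__ == "__main__":
    # The agent that grabs the medicine reaches the goal faster but scores worse overall.
    polite_run = ["greet_pharmacist", "wait_in_line", "pay_for_medicine"]
    rude_run = ["grab_medicine", "run_out_of_store"]

    print(sum(shaped_reward(a) for a in polite_run))  # 3.0
    print(sum(shaped_reward(a) for a in rude_run))    # -2.0
```

The point of the toy example is only the shape of the signal: actions that echo what protagonists do in stories accumulate positive reward, while shortcuts outside those norms are penalized, which is the intuition behind training bots on crowdsourced stories.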

Mark Riedl is not the only person who feels this way. Kamal Sinclair, an emerging-media researcher and artist, states:

“Story and narrative are the code for humanity’s operating system. We have used stories to communicate knowledge, prescribe behavior, and imagine our futures since our earliest days. Story and narrative inform how we design everything from technology to social systems. They shape the norms in which we perform our identities, even perhaps the mutations of our DNA and perceptions of reality. Stories are the first step in the process of how we imagine our reality; they literally make our reality.”

In her book Race After Technology, Ruha Benjamin states:

“…it is vital that people engaged in tech development partner with those who do important social, cultural work honing narrative tools through the arts, humanities, and social justice organizing.”

There’s always a chance the same methods could be used to do the opposite and create the ultimate racist chatbot. But by reading stories, an AI bot will also learn that the antagonist most often loses. Additionally, most racist bots will make their way into the venues that serve their audience. There is no denying that algorithms will continue to be designed and developed to reflect the thoughts and beliefs of their creators. We can mitigate these problems if large mainstream corporations like Microsoft can make their bots more human and civil. If that happens, we will begin to see the benefits of AI technology instead of experiencing the hate and divisiveness that are now the norm.

Stories may help AI bots such as Tay build a better world instead of contributing to dividing it further.

