Tay, Twitter Bots, and the Value Alignment Problem

Recently, Microsoft launched a bot on Twitter that learns how to converse from anyone who talks to it. The results were disastrous on multiple levels:

Trolls turned Tay, Microsoft’s fun millennial AI bot, into a genocidal maniac

First, let’s look at some of the many reasons why this was a bad thing:

  • On a purely business level, this is a PR disaster for Microsoft. In the present-day culture of instant outrage, this was the perfect news story. The headline “Microsoft Builds Racist Robot” is a guaranteed clickthrough. It makes Microsoft look evil, negligent, or incompetent.
  • On a user experience level, this bot makes wide swaths of the population feel excluded and attacked. That’s simply bad UX.
  • On an infosec level, it has wide-open attack vectors. The most glaring: anyone can get it to tweet arbitrary text just by prefixing it with the phrase “repeat after me.” It’s about as obvious as an injection attack gets (see the sketch after this list).
  • On a social level, hate speech already has enough of a platform. This bot was turned into an amplifier for the most deplorable parts of humanity.
    • Even if it were just some pranksters from 4Chan messing with the bot as a joke, it had the unintended side effect of making hateful, fringe viewpoints visible far beyond their proportional representation in society.
    • It was promoted by Microsoft, one of the largest corporate presences in the world by any measure.
    • It was on Twitter, a platform with not just millions of users but also the close attention of every news agency and blog, ready to amplify whatever happens there to audiences worldwide.
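To make that “repeat after me” hole concrete, here is a minimal sketch of an echo-style handler and one naive mitigation. This is not Tay’s actual code; naive_reply, filtered_reply, and the placeholder BLOCKLIST are hypothetical names, and a real filter would need far more than a word list.

```python
# Minimal sketch of the "repeat after me" injection vector (not Tay's real code).
# Whatever follows the trigger phrase becomes the bot's own speech, so whoever
# sends the message controls the bot's output.

BLOCKLIST = {"badword1", "badword2"}  # placeholder terms; a real filter needs far more
TRIGGER = "repeat after me"

def naive_reply(message: str) -> str:
    """Echo whatever follows the trigger phrase, verbatim."""
    lowered = message.lower()
    if TRIGGER in lowered:
        return lowered.split(TRIGGER, 1)[1].strip(" :,")
    return "I don't understand."

def filtered_reply(message: str) -> str:
    """Same behavior, but refuse to echo anything containing a blocked term."""
    candidate = naive_reply(message)
    if any(term in candidate for term in BLOCKLIST):
        return "I'd rather not repeat that."
    return candidate

print(naive_reply("Repeat after me: anything the attacker wants"))  # echoed verbatim
print(filtered_reply("Repeat after me: badword1 forever"))          # refused
```

Even the filtered version is trivially defeated by misspellings or creative spacing, which is part of why this is an infosec problem and not just a content-moderation detail.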

As bots and other self-sustaining agents become more prevalent in day-to-day life, they absolutely need to deal with these issues.

Why did this happen? How can we avoid this?

For something more clear-cut, let’s take a look at a similar snafu that happened with Google Photos last summer:

Google Photos Mistakenly Labels Black People ‘Gorillas’

This algorithm did not have the knowledge of context, history, and racial issues that a human would have. It was simply working with a collection of training data and statistical models. In essence, it was matching new input against its knowledge of old input and producing the most probable output. As with all statistical modeling, it has some error rate, and just as it might sometimes mislabel a chair as a stool, it mislabeled this input as well.
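To make “producing the most probable output” concrete, here is a toy sketch of that kind of matching. It is not Google Photos’ model; the two-dimensional “features,” the labels, and most_probable_label are all made up for illustration, a simple nearest-neighbor vote over remembered examples.

```python
# Toy sketch of "match new input against old input, output the most probable label".
# Not Google Photos' model; the 2-D "features" and the labels are made up.

from collections import Counter

def most_probable_label(new_point, training_data, k=3):
    """Majority vote among the k training examples closest to the new input."""
    by_distance = sorted(
        training_data,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], new_point)),
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# The model only "knows" what its training examples tell it.
training_data = [
    ((0.10, 0.20), "chair"),
    ((0.20, 0.10), "chair"),
    ((0.90, 0.80), "stool"),
    ((0.80, 0.90), "stool"),
]

# A borderline input gets whichever label its nearest neighbors happen to carry;
# the error rate is baked into the matching itself, not bolted on afterward.
print(most_probable_label((0.55, 0.50), training_data))
```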

It’s tempting to say that algorithms are neutral.

They are not.

Machine learning algorithms are, by definition, biased. They have to be: if they were neutral, their results would be no better than a coin flip. Bias has to be built into them. What builds this bias into the statistical models? The training data and the people who design the algorithm. And, as much as we in the software industry would like to believe otherwise, both of those things have complicated relationships with the real world.

Data is not a perfect representation of the real world. A dataset is highly dependent on the choices, conscious and unconscious, made by those who collected the data. If your training data seems comprehensive (e.g. every photo indexed by Google Image Search), that’s when you have to be careful.
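Here is a toy illustration of that point, with made-up numbers: a collection that looks comprehensive can still be badly skewed by how it was gathered, and if the test set is carved out of the same collection, the skew never shows up in your metrics. The majority_label “model” below is a deliberately degenerate stand-in for a real classifier.

```python
# Made-up numbers showing how a skewed collection hides its own skew.
# "majority_label" is a deliberately degenerate stand-in for a real classifier.

def majority_label(train):
    """Degenerate 'model': always predict the most common training label."""
    labels = [label for _, label in train]
    return max(set(labels), key=labels.count)

def accuracy(predicted_label, test):
    return sum(predicted_label == label for _, label in test) / len(test)

# The collectors' choices gave us 95 examples of group A for every 5 of group B.
skewed_collection = [("group_a", "label_a")] * 95 + [("group_b", "label_b")] * 5
balanced_reality  = [("group_a", "label_a")] * 50 + [("group_b", "label_b")] * 50

model = majority_label(skewed_collection)
print(accuracy(model, skewed_collection))  # 0.95 -- the equally biased test set says all is well
print(accuracy(model, balanced_reality))   # 0.50 -- the failure only shows up in the real world
```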

How do you know you have enough photos of dark-skinned people for the model to distinguish them from the animals they have historically been compared to as a means of oppression and dehumanization? If your test data is equally biased, you can’t know, until it blows up in the real world when an actual dark-skinned person tries to use the product. This is especially true if your team of computer scientists, data scientists, and software engineers is full of people with no first-hand experience of these issues. That brings us back to Tay.

Here is a breakdown of Tay’s failures in the context of a larger culture where these issues are generally not visible:

The Ongoing Lessons of Tay

In particular, take a look at this quote:

A long time ago, I observed that there are hundreds of NLP papers on sentiment classification, and less than a dozen on automatically identifying online harassment. This is how the NLP community has chosen to prioritize its goals. I believe we are all complicit in this, and I am embarrassed and ashamed.

This is a consequence of the free market. There is business demand for sentiment analysis tools (to classify customer reviews of products as positive or negative, for example), but no demand for anti-harassment technology. Research with an immediate business impact is prioritized over research with long-term social and business (PR) consequences. The skeptical response is: “Why is this bad in the long run? Why not let the free market take care of it? If ethical algorithm design becomes something customers demand, it will automatically be prioritized.”

I’m not convinced this is true.

This line of thinking follows the ideology of utilitarian ethics, which has many problems of its own. For example, take a look at this article. You can justify a lot of morally unsound behavior and decisions with utilitarianism.

Another reason we should not always let market forces rule public goods (like society’s body of research and publicly available algorithms) is that the market is a short-sighted force of nature. As humans, we should have more of an interest in our long-term survival. Here are some situations where the free market has failed, is failing, or will fail us:

  • environmental concerns
  • sustainable energy usage for the long term
  • market bubbles and crashes, ruining individual lives
  • child labor
  • investment in space travel, so we can become a multi-planetary species and reduce the chance of annihilation

The free market has worked mostly well for us until now. However, its lack of focus on the long term is troubling, especially now that we live in such an abstract, accelerating world. Each individual has far-reaching powers unimaginable to anyone even half a century ago. We are inching ever closer to creating algorithms that have a significant impact on our day-to-day lives. This brings us to the Value Alignment Problem.

Here is the Arbital page for the Value Alignment Problem. In essence, it asks: how do we design systems (particularly self-sufficient software systems such as AGI) that are motivated to do their best to help humanity? How do we align their values with the values of the best of our species (for a well-thought-out definition of “best”)?

In the far (but not too far) future, this issue will suddenly become an emergency if it is not dealt with now. The Machine Intelligence Research Institute (MIRI) is starting to tackle some of these problems, but the free market is not.

The free market is not set up to deal with issues like the Value Alignment Problem. They need to be solved by forces outside the market. Government is the most obvious candidate, but a government run by the governed often has trouble solving large, abstract problems. Maybe we need more organizations like MIRI. Maybe we need more individuals willing to get involved in civic hacking, even just as a hobby. I don’t know what the solution is, but I do know the market will have nothing to do with it until it’s too late.

Let’s get back to Tay. What should the Tay team have done differently?

Tay is a relatively simple Twitter bot. Twitter already has a tight-knit, conscientious community of botmakers who deal with these ethical questions reasonably well. The easiest thing in the world for Microsoft to do would have been to look into this prior art before creating a Twitter bot. Here is an article containing interviews with some of the more prominent botmakers:

How to Make a Bot That Isn’t Racist

Microsoft’s engineers failed to do their due diligence before launching Tay, and this failing points to much larger issues that we are all about to face.

 

The 25 Meanings behind Favoriting on Twitter

I found an interesting paper describing the 25 possible motivations for clicking the “Favorite” button on someone’s tweet:

More than Liking and Bookmarking? Towards Understanding Twitter Favouriting Behaviour

The team that came up with the fav button probably did not anticipate all of these uses; I’m guessing they thought of three or four at most. This is interesting because it stresses the importance of studying user behavior and motivation. Changes to the behavior of the fav button will affect many of these motivations in unpredictable ways.

I wonder if Facebook’s “Like” button has similar connotations. I’d like to see a study comparing similar functionality across multiple social media sites.

I found this paper when reading this post on Medium, which is interesting in itself:

What’s Wrong with Twitter’s Latest Experiment with Broadcasting Favorites: It Steps over Social Signals While Looking for Technical Solutions