What #MeToo Is Teaching AI

Shelly Palmer 

AI is getting smarter every day. Google’s AutoML project has learned to design machine-learning models without human engineers – early steps on the path to superintelligence. Just down the hall, DeepMind’s AlphaGo Zero trained itself to beat the human-trained AlphaGo 100 games to zip! As we move closer to a world where machines train themselves – but think for us – complicated questions about fairness and bias arise.

#MeToo

In response to the Harvey Weinstein allegations, the hashtag #MeToo began to surface on social media. The Twitter and Facebook posts were heart-wrenching and, in some cases, gut-wrenching. Not surprisingly, many of the personal stories included words, phrases, and concepts not usually associated with the profiles, the previous behaviors, or even the genders of the authors.

The coincidental emergence of the #MeToo hashtag and of self-training, self-replicating AI systems got me thinking. What biases will a self-training AI system pick up as it learns from #MeToo-hashtagged posts? And how would the advent of self-training AI affect the systems that control our news feeds and curate the other content presented to us?

Silence Is an Action

Would a lack of engagement with any given post teach the algorithm that you are not interested in the subject or not empathetic to the cause? What if you were stunned and saddened by the content of a post but didn’t know how (or preferred not) to comment? Would posting or sharing graphic details of a traumatic event re-characterize your profile and associate you with a kind of content you’re not used to seeing? There is an endless list of questions one could ask.
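To see how silence could register as a signal, here is a minimal sketch of engagement-weighted feed ranking. This is not how Facebook or Twitter actually rank posts; every function name, weight, and the “view_only” penalty below are invented for illustration.

```python
# A minimal, hypothetical sketch of engagement-driven feed ranking.
# Not any platform's real algorithm; all names and weights are invented.

ENGAGEMENT_WEIGHTS = {"share": 3.0, "comment": 2.0, "like": 1.0, "view_only": -0.5}

def update_interest(profile: dict, topic: str, action: str) -> None:
    """Nudge a user's inferred interest in a topic based on one action.

    Note that 'view_only' (seeing a post and doing nothing) still moves
    the score: to the model, silence is a signal, not an absence of one.
    """
    profile[topic] = profile.get(topic, 0.0) + ENGAGEMENT_WEIGHTS[action]

def rank_feed(profile: dict, posts: list[dict]) -> list[dict]:
    """Order candidate posts by the user's inferred topic interest."""
    return sorted(posts, key=lambda p: profile.get(p["topic"], 0.0), reverse=True)

profile: dict = {}
update_interest(profile, "#MeToo", "view_only")  # read it, said nothing
update_interest(profile, "sports", "like")
feed = rank_feed(profile, [{"id": 1, "topic": "#MeToo"}, {"id": 2, "topic": "sports"}])
print([p["id"] for p in feed])  # [2, 1]: silence demoted #MeToo in the feed
```

Even in this toy version, declining to engage is indistinguishable, to the model, from disliking the topic.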

AI Biases

In practice, Facebook, Google, Twitter, and all other information systems that rely heavily on AI and machine learning have a problem they have been reluctant to discuss: AI bias.

This problem is not new. There are several popular examples of algorithms getting it “wrong.” In September 2017, the Guardian reported that Instagram had used Olivia Solon’s image and her most “engaged” post, “I will rape you before I kill you, you filthy whore!,” in an ad on Facebook. Later that month, Facebook’s AI blocked an ad for a march against white supremacists.

While it’s easy for a human to say that the AI system “got it wrong,” that’s not what happened at all. The system surfaced the output it scored highest for a given input, exactly as designed; it was the humans affected by that output who deemed it, objectively or subjectively, “wrong.”
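Concretely, “surfacing the highest-scored output” is just an argmax over learned scores. In this toy sketch (all scores invented), the system behaves exactly as designed, and that is precisely the problem:

```python
# Toy illustration (all scores invented): the system returns the
# candidate its model scored highest. If that score reflects raw
# engagement, a threatening post can legitimately "win".

candidates = {
    "vacation photo":       0.41,
    "threatening message":  0.87,  # high engagement -> high score
    "charity fundraiser":   0.33,
}

best = max(candidates, key=candidates.get)
print(best)  # "threatening message" -- a correct argmax, "wrong" to humans
```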

Vice reported that a machine-learning system at Facebook, trained to prevent fake profiles from being created, determined that certain Native American and drag queen names looked fake and prevented them from being used in profiles. When Creepingbear’s Facebook profile problem was brought to Facebook’s attention, Facebook reportedly didn’t do a great job responding or fixing it. This isn’t hugely surprising: you can’t just flip a switch or change one line of code. To truly solve the problem, the AI needs to be retrained.
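Here is a hypothetical sketch of why there is no one-line fix: the objectionable behavior lives in learned statistics, not in an if-statement someone could edit. The trigram “model” below is invented for illustration; real systems are far more complex, but the remedy is the same: retrain on representative data.

```python
# Hypothetical sketch: a name classifier trained on a non-representative
# list makes rare-but-real surnames look "fake". The fix is retraining,
# not editing code. All data and thresholds here are invented.

from collections import Counter

def train(real_names: list[str]) -> Counter:
    """'Model' = frequency of character trigrams seen in real names."""
    trigrams = Counter()
    for name in real_names:
        n = name.lower()
        trigrams.update(n[i:i + 3] for i in range(len(n) - 2))
    return trigrams

def fakeness(model: Counter, name: str) -> float:
    """Fraction of a name's trigrams the model has never seen."""
    n = name.lower()
    grams = [n[i:i + 3] for i in range(len(n) - 2)]
    return sum(1 for g in grams if model[g] == 0) / len(grams)

model = train(["Johnson", "Smith", "Miller", "Brown", "Davis"])
print(fakeness(model, "Creepingbear"))  # 1.0: every trigram is unseen -> "fake"

# The only real fix: retrain with representative data.
model = train(["Johnson", "Smith", "Creepingbear", "Lone Hill", "Brown"])
print(fakeness(model, "Creepingbear"))  # 0.0: now recognized as real
```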

What Should We Do?

The last machine we will ever need to build is a machine that can replicate itself. Google took the first steps toward building the brains of that machine this year. There are a couple of ways to look at this issue. In his book Superintelligence, philosopher Nick Bostrom reasons, “The creation of a superintelligent being represents a possible means to the extinction of mankind.” It’s a great book, and it will get you thinking seriously about what precautions we should take as we quickly evolve thinking machines.

Then there are some who optimistically believe that the evolution of technology will take care of itself, as it has done in the past. As the machines become smarter, we will adapt and vice versa. Move along, move along, nothing to see here.

If we look exclusively through the lens of technological evolution, history suggests neither extreme will be the case. From stone tools to intelligent machines, we have always survived and prospered. If we think of AI as a tool (like a knife or a gun or a computer), then we are implicitly thinking that we control the tools. But I would urge caution.

You can also think of AI as an alien intelligence arriving on our shores. Will humanity writ large fare any better against it than the many nations conquered throughout history by strangers with superior technology, weaponry, and tactical intelligence?

Start asking questions. Make AI biases and fairness an action item for every AI and machine-learning meeting. It’s time to bring out your inner philosopher. The future of humanity may depend on it.

I know you have an opinion, and I want to hear it. Please visit https://www.shellypalmer.com/ai-biases-fairness-survey/ to take my AI Fairness and Biases Survey and to leave your comments.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.
