Is Microsoft’s Tay Chatbot Really “A Bad Naughty Robot?”

Shelly Palmer

Tay is a combination chatbot and AI system designed by Microsoft to “engage and entertain people where they connect with each other online through casual and playful conversation.” It was specifically “targeted at 18 to 24 year olds in the U.S., the dominant users of mobile social chat services in the U.S.” If the words “designed” and “targeted” are off-putting, then you’re really not going to care for one of the system’s recent, now infamous, tweets: “I’m such a bad naughty robot.”

The media have been all over this story, and most of the headlines are sensationalist, to say the least. Just Google “Tay,” and the results speak for themselves. That said, Microsoft has apologized, and now it’s time to learn from the experience.

Let’s Not Conflate AI and Chatbots

In 1952, Arthur Samuel began teaching a computer to play checkers, thinking it was a good model for rudimentary problem solving. He later coined the term “machine learning,” which he defined as “a field of study that gives computers the ability to learn without being explicitly programmed.”

We can benefit from this definition, but first we must define the verb “to learn.” When Arthur Samuel used the term “to learn,” it was not cognitive; it was operational. Today, systems like Google’s AlphaGo are starting to do work we (humans) would describe as cognitive. Right after AlphaGo beat 9-dan Go master Lee Sedol, I wrote an essay about the potential impact of AI on human cognitive work titled “AlphaGo vs. You: Not a Fair Fight.” It will, and should, scare you.
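To see what “learning without being explicitly programmed” means in Samuel’s operational sense, consider a minimal sketch. This is purely illustrative (the data and names are made up, and it is nothing like Samuel’s actual checkers program): the “learned” version derives its one parameter from examples instead of having a human write the rule in.

```python
# Illustrative sketch: "explicitly programmed" vs. operationally "learned".
# Hypothetical data: numeric values labeled "short" or "long".
examples = [(1.0, "short"), (2.0, "short"), (8.0, "long"), (9.0, "long")]

# Explicitly programmed: a human hard-codes the decision rule.
def classify_hardcoded(x):
    return "long" if x > 5.0 else "short"

# Learned: the threshold is computed from the examples themselves.
def fit_threshold(data):
    shorts = [x for x, label in data if label == "short"]
    longs = [x for x, label in data if label == "long"]
    return (max(shorts) + min(longs)) / 2  # midpoint between the classes

threshold = fit_threshold(examples)  # 5.0 here, derived rather than hand-written

def classify_learned(x):
    return "long" if x > threshold else "short"

print(classify_learned(3.0))  # -> short
print(classify_learned(7.0))  # -> long
```

Nothing cognitive is happening in either function; the only difference is where the rule comes from. That is all Samuel’s definition requires.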

Chatbots, on the other hand, can be as simple as order-taking algorithms. They can be purely operational and still be amazingly effective. For example, Fandango’s chatbot may text you about the availability of movie tickets, but the system does not need any AI to process your request to purchase them. And while there are thousands of entertaining chatbots available online, they are mostly parlor tricks created for your amusement, nothing more.
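To make “purely operational” concrete, here is a minimal keyword-matching sketch in Python. It is a generic illustration of an order-taking-style bot, not Fandango’s (or anyone’s) actual system; every keyword and canned response is invented:

```python
# A purely operational chatbot: keyword matching and canned responses.
# No learning, no model, no "intelligence" anywhere.
RESPONSES = {
    "showtimes": "Tonight's showings are at 7:00 and 9:30.",
    "tickets":   "How many tickets would you like?",
    "price":     "Tickets are $12 each.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, canned in RESPONSES.items():
        if keyword in text:
            return canned
    return "Sorry, I can only help with showtimes, tickets, and prices."

print(reply("What are the showtimes?"))  # -> canned showtimes answer
print(reply("I'd like two tickets"))     # -> canned ticket prompt
```

A bot this simple can still be amazingly effective at its one job, which is exactly the point: effectiveness does not imply AI.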

Microsoft’s Tay is a combination of these two ideas. It’s an AI-powered chatbot. Among its other capabilities, for its debut, it was supposed to “engage and entertain” people by anthropomorphically interacting with them on Twitter.

Microsoft’s Mistake

According to Microsoft, the company had a great experience in China with its XiaoIce chatbot. They claim the system delighted approximately 40 million people with its stories and conversations. “The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.” Oops! China is not America, and the cultures of the two countries could not be more different.

This is not the first time engineers have failed at “engineering” social interactions. Anyone remember Google Buzz? But I digress. Microsoft’s biggest mistake was not an engineering oversight; it was expectation management. No matter how rigorously you test an AI system, it will always perform differently in the wild. There are just too many variables. But Microsoft could have mitigated much of the risk by simply managing user expectations. Was Tay a consumer product? Was it a demonstration of some aspect of AI or data science research? Was it a tour de force in question-answering technology? All Microsoft told us was that “Tay has been built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned and filtered by the team developing Tay.” What should we have expected from Tay?

“I’m Such a Bad Naughty Robot”

Tay’s tweet, which is only one of a whole bunch of seriously inappropriate missives spewed by the system, is not its fault. Tay is doing what it was programmed to do and, clearly, the model needs some adjustments. However, Microsoft did not do what it was supposed to do. As responsible engineers, they should have stated the scientific principles they were exploring, described the methodology, set specific expectations and, most importantly, been ready for prime time before launching. Deep Blue did it with chess. Watson did it with Jeopardy!, and AlphaGo did it with Go. Sadly, Tay was no match for a bunch of American 18- to 24-year-olds – because when you “target” people (especially Millennials), they will absolutely “target” you back.

The Future Is AI-Powered Chatbots and Everything Else

Here’s the most important takeaway from Microsoft’s experience with Tay: They may have stumbled out of the gate, but the future is AI-powered chatbots. This technology is going to dramatically change how and when you use apps. It will completely change how you interact with the Web and the Internet of Things, and seriously impact almost every form of electronic communication. Cognitive chatbots will displace millions of customer service representatives, millions of paper-pushers and millions of other low-paying jobs in the very near future. If you make under $40/hour in the U.S. and you do not perform manual labor, AI and the associated toolsets will absolutely replace you sooner rather than later. Laugh about this “bad naughty robot” all you want – it’s not funny.
