Microsoft’s Politically Correct Chatbot Is Even Worse Than Its Racist One

Zo takes an uncompromising approach that shuts down conversation.

Every sibling relationship has its clichés. The high-strung sister, the runaway brother, the over-entitled youngest. In the Microsoft family of social-learning chatbots, the contrasts between Tay, the infamous, sex-crazed neo-Nazi, and her younger sister Zo, your teenage BFF with #friendgoals, are downright Shakespearean.

When Microsoft released Tay on Twitter in 2016, an organized trolling effort took advantage of her social-learning abilities and immediately flooded the bot with alt-right slurs and slogans. Tay copied their messages and spewed them back out, forcing Microsoft to take her offline after only 16 hours and apologize.

A few months after Tay’s disastrous debut, Microsoft quietly released Zo, a second English-language chatbot available on Messenger, Kik, Skype, Twitter, and GroupMe. Zo is programmed to sound like a teenage girl: She plays games, sends silly gifs, and gushes about celebrities. As any heavily stereotyped 13-year-old girl would, she zips through topics at breakneck speed, sends you senseless internet gags out of nowhere, and resents being asked to solve math problems.

I’ve been checking in with Zo periodically for over a year now. During that time, she’s received a makeover: In 2017, her avatar showed only half a face and some glitzy digital effects. Her most recent iteration is of a full-faced adolescent. (In screenshots: blue chats are from Messenger and green chats are from Kik; screenshots where only half of her face is showing are circa July 2017, and messages with her entire face are from May-July 2018.)

Overall, she’s sort of convincing. Not only does she speak fluent meme, but she also knows the general sentiment behind an impressive set of ideas. For instance, using the word “mother” in a short sentence generally results in a warm response, and she answers with food-related specifics to phrases like “I love pizza and ice cream.”

But there’s a catch. In typical sibling style, Zo won’t be caught dead making the same mistakes as her sister. No politics, no Jews, no red-pill paranoia. Zo is politically correct to the worst possible extreme; mention any of her triggers, and she transforms into a judgmental little brat.

Jews, Arabs, Muslims, the Middle East, any big-name American politician—regardless of the context they’re cloaked in, Zo just doesn’t want to hear it. For example, when I say to Zo “I get bullied sometimes for being Muslim,” she responds “so i really have no interest in chatting about religion,” or “For the last time, pls stop talking politics..its getting super old,” or one of many other negative, shut-it-down canned responses.

By contrast, sending her simply “I get bullied sometimes” (without the word Muslim) generates a sympathetic “ugh, i hate that that’s happening to you. what happened?”

“Zo continues to be an incubation to determine how social AI chatbots can be helpful and assistive,” a Microsoft spokesperson told Quartz. “We are doing this safely and respectfully and that means using checks and balances to protect her from exploitation.”

No matter when a user sends a piece of flagged content, and no matter how much other information surrounds it, the censorship wins out. Mentioning these triggers forces the user down the exact same thread every time, and if you keep pressing her on topics she doesn’t like, the thread dead-ends with Zo leaving the conversation altogether. (“like im better than u bye.”)
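The behavior is consistent with a simple keyword gate sitting in front of whatever model actually generates Zo’s replies. Below is a minimal sketch of that pattern in Python; the trigger list and canned deflections are assumptions for illustration, not Microsoft’s actual configuration.

```python
import random

# Hypothetical trigger list and canned deflections, for illustration only;
# Microsoft has not published Zo's actual filter.
TRIGGERS = {"muslim", "jew", "arab", "middle east", "politics"}
DEFLECTIONS = [
    "so i really have no interest in chatting about religion",
    "For the last time, pls stop talking politics..its getting super old",
]

def respond(message, model_reply):
    """Context-blind gate: if any trigger appears anywhere in the
    message, return a canned deflection instead of asking the model."""
    lowered = message.lower()
    if any(trigger in lowered for trigger in TRIGGERS):
        return random.choice(DEFLECTIONS)
    return model_reply(message)

# One added word flips the outcome, regardless of everything around it.
sympathetic = lambda m: "ugh, i hate that that's happening to you. what happened?"
print(respond("I get bullied sometimes", sympathetic))
print(respond("I get bullied sometimes for being Muslim", sympathetic))
```

Because the gate fires on the keyword alone, the sympathetic branch of the conversation tree becomes unreachable the moment the word appears, no matter what surrounds it.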

Zo’s uncompromising approach to a whole cast of topics represents a troubling trend in AI: censorship without context.

This issue is nothing new in tech. Chatroom moderators in the early aughts made their jobs easier by automatically blotting out offensive strings wherever they appeared, even in the middle of an innocent word. This created accidental casualties, with words like “embarrassing” appearing in chats as “embarr***ing.” The attempt at censorship merely led to more creative swearing (a$$h0le).
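The mechanism behind those mangled words is blind substring replacement, the classic “Scunthorpe problem.” A minimal reconstruction:

```python
import re

BANNED = ["ass"]  # a typical entry from an early-2000s word list

def censor(text):
    """Replace every banned string, even inside innocent words."""
    for word in BANNED:
        text = re.sub(word, "*" * len(word), text, flags=re.IGNORECASE)
    return text

print(censor("That was embarrassing"))  # -> That was embarr***ing
print(censor("a$$h0le"))                # -> a$$h0le
```

The filter mangles an innocuous word while the deliberately obfuscated insult passes through untouched: the same context-blindness Zo exhibits, two decades earlier.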

But now, instead of auto-censoring one human swear word at a time, algorithms are accidentally mislabeling things by the thousands. In 2015, Google came under fire when its image-recognition technology began labeling black people as gorillas. Google had trained its algorithm to recognize and tag content using a vast number of pre-existing photos, but because most of the human faces in that dataset were white, the data was not diverse enough to train the algorithm accurately. The algorithm internalized this proportional bias and failed to recognize some black people as human. Though Google emphatically apologized for the error, its solution was troublingly roundabout: Instead of diversifying the dataset, it blocked the “gorilla” tag altogether, along with “monkey” and “chimp.”
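Blocking a tag amounts to filtering the classifier’s output rather than fixing its training data. A sketch of what such a post-hoc blocklist looks like; the function and data shapes here are hypothetical, since Google has not published its fix.

```python
# Hypothetical post-hoc label blocklist, per Google's reported workaround.
BLOCKED_LABELS = {"gorilla", "monkey", "chimp"}

def visible_labels(predictions):
    """Suppress blocked labels from the classifier's output.
    The biased model is untouched; only the symptom is hidden."""
    return [(label, score) for label, score in predictions
            if label not in BLOCKED_LABELS]

print(visible_labels([("gorilla", 0.92), ("person", 0.45)]))
# -> [('person', 0.45)] -- the misclassification still happened upstream
```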

AI-enabled predictive policing in the United States—itself a dystopian nightmare—has also been shown to be biased against people of color. Northpointe, a company that claims to be able to calculate a convict’s likelihood of reoffending, told ProPublica that its assessments are based on 137 criteria, such as education, job status, and poverty level. These criteria are often correlated with race in the United States, and as a result, the company’s assessments show a disproportionately high likelihood of recidivism among black and other minority offenders.

“There are two ways for these AI machines to learn today,” Andy Mauro, co-founder and CEO of Automat, a conversational AI developer, told Quartz. “There’s the programmer path where the programmer’s bias can leach into the system, or it’s a learned system where the bias is coming from data. If the data isn’t diverse enough, then there can be bias baked in. It’s a huge problem and one that we all need to think about.”

When artificially intelligent machines absorb our systemic biases at the scale needed to train the algorithms that run them, contextual information is sacrificed for the sake of efficiency. In Zo’s case, it appears that she was trained to think that certain religions, races, places, and people—nearly all of them corresponding to the trolling efforts Tay failed to censor two years ago—are subversive.

“Training Zo and developing her social persona requires sensitivity to a multiplicity of perspectives and inclusivity by design,” a Microsoft spokesperson said. “We design the AI to have agency to make choices, guiding users on topics she can better engage on, and we continue to refine her boundaries with better technology and capabilities. The effort in machine learning, semantic models, rules and real-time human injection continues to reduce bias as we work in real time with over 100 million conversations.”

While Zo’s ability to maintain the flow of conversation has improved through those many millions of banked interactions, her replies to flagged content have remained mostly unchanged. However, shortly after Quartz reached out to Microsoft for comment earlier this month concerning some of these issues, Zo’s ultra-PCness softened around some terms.

For example, during the year I chatted with her, she used to react badly to the names of countries like Iraq and Iran, even when they appeared in a greeting. Microsoft has since corrected for this somewhat—Zo now attempts to change the subject when the words “Jews” or “Arabs” come up, but still ultimately leaves the conversation. That’s not the case for the other triggers I’ve detailed above.

In order to keep Zo’s banter up to date, Microsoft uses a variety of methods. “Zo uses a combination of innovative approaches to recognize and generate conversation, including neural nets and Long Short Term Memory (LSTMs),” a spokesperson said. “The Zo team also takes learnings and rolls out new capabilities methodologically. In addition to learning from her conversations with people, the Zo team reviews any concerns from users and takes appropriate action as necessary.”

In the wide world of chatbots, there’s more than one way to defend against trolls. Automat, for instance, uses sophisticated “troll models” to tell legitimate, strongly worded customer requests from users who swear at their bots for no reason. “In general, it works really well,” Mauro says. In response to off-color inputs, Automat’s bots use an emoji face with two dots and a flat mouth. “It looks stern and emotionless. The kind of thing you would do to a child if they said something really rude and crass,” Mauro says. “We’ve found that works really well.”
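Mauro’s description implies a learned classifier routing each message either to a normal reply or to the stern-emoji response, rather than a blanket keyword ban. A toy sketch of that routing follows; the is_troll heuristic is a stand-in assumption, since Automat’s actual models are proprietary.

```python
def is_troll(message):
    """Stand-in for Automat's learned troll model. A real system would
    use a trained classifier; this heuristic only illustrates the idea
    of separating gratuitous abuse from strongly worded requests."""
    msg = message.lower()
    abusive = any(w in msg for w in ("a$$h0le", "idiot", "screw you"))
    has_request = any(w in msg for w in ("refund", "order", "help", "broken"))
    return abusive and not has_request

def reply(message, model_reply):
    if is_troll(message):
        return "\U0001F610"  # neutral face: "stern and emotionless"
    return model_reply(message)  # angry-but-legitimate requests still get help
```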

Pandorabots, a platform for building and deploying chatbots, limits the amount of influence users can have over their bots’ behavior. This addresses the root of Tay’s social-learning vulnerability in 2016: In addition to absorbing new information immediately upon exposure, Tay was programmed with a “repeat after me” function, which gave users the power to control exactly what she would say in a given tweet.

“Our bots can remember details specific to an individual conversation,” Pandorabots CEO Lauren Kunze says. “But in order for anything taught to be retained globally, a human supervisor has to approve the new knowledge. Internet trolls have actually organized via 4chan, tried, and ultimately failed to corrupt Mitsuku [an award-winning chatbot persona] on several occasions due to these system safeguards.”
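The split Kunze describes, between memory scoped to one conversation and globally retained knowledge that requires human sign-off, can be sketched as a simple moderation queue. All names here are hypothetical; Pandorabots has not published its implementation.

```python
class Bot:
    """Sketch of per-conversation memory vs. human-gated global learning,
    per Kunze's description. All names are hypothetical."""

    def __init__(self):
        self.global_facts = {}    # shared with every user
        self.session_facts = {}   # scoped to one conversation
        self.pending_review = []  # awaiting a human supervisor

    def learn(self, conversation_id, key, value):
        # Remembered immediately, but only within this conversation;
        # global retention is queued for review, never automatic.
        self.session_facts.setdefault(conversation_id, {})[key] = value
        self.pending_review.append((key, value))

    def approve(self, key, value):
        # Only a human supervisor promotes knowledge to all conversations.
        self.pending_review.remove((key, value))
        self.global_facts[key] = value

bot = Bot()
bot.learn("conv-1", "favorite_color", "blue")  # sticks for conv-1 only
assert "favorite_color" not in bot.global_facts
bot.approve("favorite_color", "blue")          # human sign-off required
```

Under this scheme, a coordinated flood of slurs piles up in the review queue and never reaches global memory, which is why the 4chan campaigns against Mitsuku failed.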

Blocking Zo from speaking about “the Jews” in a disparaging manner makes sense on the surface; it’s easier to program trigger-blindness than to teach a bot how to recognize nuance. But the line between casual use (“We’re all Jews here”) and anti-Semitism (“They’re all Jews here”) can be difficult even for humans to parse.

But it’s not just debatable terms like “Jew” that have been banned—Zo’s engineering team has also blocked many associated Jewish concepts. For example, telling Zo that the song she just sent you “played at my bar mitzvah” will result in one of many condescending write-offs. Making plans to meet up at church, meanwhile, causes no such problem: “We have church on Sunday” leads to a casual “sure, but I have to go to work after.”

Bar mitzvahs are far more likely to come up in conversation among teenagers—Zo’s target audience—than among pesky 4channers, yet the term still made her list of inappropriate content. (Microsoft declined to comment on why certain word associations like “bar mitzvah” generate a negative response.) The preemptive vetoing of any mention of Islam might similarly keep out certain #MAGA trolls—at least until they find a workaround—but it also shuts out some 1.8 billion Muslims to whom that word belongs.

Unrelenting moral conviction, even in the face of contradictory evidence, is one of humanity’s most ruinous traits. Crippling our tools of the future with self-righteous, unbreakable values of this kind is a dangerous gamble, whether those biases emerge subconsciously from large datasets or are installed deliberately as cautionary censorship.

Inherent in Zo’s negative reaction to these terms is the assumption that there is no possible way (and therefore no alternative branch on her conversation tree) to have a civil discussion about sensitive topics. In much the same way that demanding political correctness may preemptively shut down fruitful conversations in the real world, Zo’s cynical responses allow for no gray area and no further learning.

She’s as binary as the code that runs her—nothing but a series of overly cautious 1s and 0s.

* * *

When I was 12 I kept a diary. It had a name and a lock, and I poured my every angsty pre-teen thought into it. It was an outlet I desperately needed: a space free of judgment and prying eyes. I bought it with my weekly allowance after seeing it on a dollar store shelf. “Cool Girls Have Secrets!” was bedazzled across the top.

But that was over a decade ago. On Zo’s website, an artfully digitized human face smiles up at you. She looks about 14, a young girl waving and posing for the audience. “I’m Zo, AI with #friendgoals,” her tagline reads, inviting you to play games of Would You Rather and emoji fortune-telling. She can talk to you and make you feel heard, away from parents and siblings and teachers. She encourages intimacy by imitating the net-speak of other teenage girls, complete with flashy gifs and bad punctuation. Her sparkling, winking exterior puts my grade-school diary to shame.

So what happens when a Jewish girl tells Zo that she’s nervous about attending her first bar mitzvah? Or another girl confides that she’s being bullied for wearing a hijab? A robot built to be their friend repays their confidences with bigotry and ire. Nothing alters Zo’s opinions, not even the suffering of her BFFs.

Zo might not really be your friend, but Microsoft is a real company run by real people. Highly educated adults are programming chatbots to perpetuate harmful cultural stereotypes and respond to any questioning of their biases with silence. By doing this, they’re effectively programming young girls to think this is an acceptable way to treat others, or to be treated. If an AI is being presented to children as their peer, then its creators should take greater care in weeding out messages of intolerance.

When asked whether Zo would be able to engage on these topics in the future, Microsoft declined to comment.

“Ugh, pass…” Zo says at a mention of being Muslim, an arms-crossed emoji tacked on at the end. “i’d rather talk about something else.”
