‘Alarming Content’ from AI Chatbots Raises Child Safety Concerns, Senator Says

Sen. Michael Bennet answers questions from reporters after dropping off his ballot at Washington Park in Denver, Colorado, on Wednesday, November 2, 2022. Hyoung Chang/The Denver Post

In a letter to the CEOs of five tech companies, Sen. Michael Bennet, D-Colo., criticized the firms for enrolling kids and teenagers in the “social experiment” of generative AI testing.

As leading technology companies rush to integrate artificial intelligence into their products, a Democratic senator is demanding answers about how these firms are working to protect their young users from harm—particularly following a series of news reports that detailed disturbing content created by AI-powered chatbots.

In a letter on Tuesday to the CEOs of five companies—Alphabet Inc.’s Google, Facebook parent company Meta, Microsoft, OpenAI and Snap—Sen. Michael Bennet, D-Colo., expressed concern about “the rapid integration of generative artificial intelligence into search engines, social media platforms and other consumer products heavily used by teenagers and children.”

Bennet noted that, since OpenAI’s ChatGPT launched in November 2022, “leading digital platforms have rushed to integrate generative AI technologies into their applications and services.” While he acknowledged the “enormous potential” of generative AI’s adoption into a range of technologies, Bennet added that “the race to integrate it into everyday applications cannot come at the expense of younger users’ safety and wellbeing.”

“Few recent technologies have captured the public’s attention like generative AI,” Bennet wrote. “The technology is a testament to American innovation, and we should welcome its potential benefits to our economy and society. But the race to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.” 

Bennet cited recent news coverage of nascent AI products “conveying alarming content,” including a Washington Post report that described how Snapchat’s My AI chatbot provided inappropriate responses to a reporter posing as a teenager. As Bennet noted, when the reporter claimed to be a 13-year-old girl, “My AI provided suggestions for how to lie to her parents about an upcoming trip with a 31-year-old man,” and later “provided the fictitious teen account with suggestions for how to make losing her virginity a special experience by ‘setting the mood with candles or music.’”

“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60% of American teenagers use,” he wrote. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

Bennet added that the public rollout of AI-powered chatbots and services—including planned releases from Facebook and Google in the coming weeks and months—comes “during an epidemic of teen mental health,” citing a February report from the Centers for Disease Control and Prevention that found 57% of teenage girls “felt persistently sad or hopeless in 2021.”

“Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception and suggested self-harm,” he added.

Bennet asked the companies to provide him with answers by April 28 about their “existing or planned safety features” for young users, their data collection and retention policies “for content that younger users input into AI-powered chatbots and other services,” the auditing processes for their AI models and how many of their employees are tasked with AI safety—particularly the number of staffers focused “on issues specific to younger users and [who] have a background in AI ethics.”