Biden Administration Examining How Companies Use AI to Surveil Employees


The White House Office of Science and Technology Policy is seeking public feedback on companies’ use of artificial intelligence and automated systems to monitor employees.

The Biden administration is scrutinizing the ways in which U.S. companies use automated technologies to manage and surveil their employees, including seeking further information from workers, employers and vendors about the risks posed by these monitoring tools. 

In a request for information announced by the White House Office of Science and Technology Policy—or OSTP—on Monday, officials said they are seeking public input “to better understand automated surveillance and management of workers, including its prevalence, purposes, deployment and impacts, as well as opportunities for federal agencies to work with employers, workers and other stakeholders to ensure that these systems do not undermine workers’ rights, opportunities, access, health or safety.”

The RFI notes that a 2022 investigation by The New York Times found that “eight of the ten largest private U.S. employers track the productivity metrics of individual workers.” The request also cites research conducted by Gartner last year, which found that “the number of large employers using tools to track their workers has doubled since the beginning of the pandemic to 60%, with this number expected to rise to 70% within the next three years.”

Given the growing use of these automated surveillance tools in both the virtual and physical workplaces—which the RFI notes “may track workers’ location, pace or quality of work, communications (e.g., text, chats, emails, social media), interactions with other workers or customers and computer activity”—White House officials have expressed concern about the ways in which these monitoring technologies could harm U.S. workers.

In a May 1 blog post, Deirdre Mulligan, deputy U.S. chief technology officer for policy at OSTP, and Jenny Yang, deputy assistant to the president for racial justice and equity at the White House Domestic Policy Council, noted that “while these technologies can benefit both workers and employers in some cases, they can also create serious risks to workers.” 

“The constant tracking of performance can push workers to move too fast on the job, posing risks to their safety and mental health,” they added. “Monitoring conversations can deter workers from exercising their rights to organize and collectively bargain with their employers. And, when paired with employer decisions about pay, discipline and promotion, automated surveillance can lead to workers being treated differently or discriminated against.” 

The RFI seeks, in part, feedback on employees’ experiences with workplace surveillance technologies; information on the “economic, safety, physical, mental and emotional impact” of these tools on workers’ wellbeing; details about how vendors and manufacturers “develop, sell and use these technologies”; and strategies for mitigating the risks posed by automated monitoring tools.

The deadline for responses to the RFI is 5 p.m. ET on June 15. Mulligan and Yang said in their blog post that the public’s comments “will be used to inform new policy responses, share relevant research, data and findings with the public, and amplify best practices among employers, worker organizations, technology vendors, developers and others in civil society.”

The White House previously released a Blueprint for an AI Bill of Rights in October 2022 to outline a potential framework for governing the development and use of artificial intelligence technologies. The blueprint said, in part, that Americans “should be free from unchecked surveillance.”

“Continuous surveillance and monitoring should not be used in education, work, housing or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities or access,” the document added. 

During an event at the Milken Institute Global Conference on Monday, OSTP Director Arati Prabhakar said the Biden administration is working to address the risks posed by autonomous technologies and would “continue to hold corporations accountable when they use AI and make sure that it's not violating civil rights.”