Tech bills of the week: Child social media safety, Data center moratorium, and more

Some of this week’s new bills aim to regulate aspects of digital safety, including child social media access, AI bias and environmental harm, while others update existing legislation to support emerging technology.
Arming parents with social media tools
Sens. Mark Warner, D-Va., Jon Husted, R-Ohio, and Katie Britt, R-Ala., introduced new child safety legislation on Monday that aims to give parents expanded tools to protect their children on social media platforms.
The bill, called Sammy’s Law in memory of 16-year-old Sammy Chapman, who died after inadvertently procuring lethal drugs via social media, would require large social media platforms to allow parents to receive safety notifications through third-party safety providers regulated by the Federal Trade Commission.
The goal is to help parents mitigate the harmful effects of social media platforms on their children. Specifically, the bill would require large social media companies — those with either 100 million monthly active users or $1 billion in annual gross revenue — to build new, real-time application programming interfaces for FTC-registered, third-party safety software providers.
Sammy’s Law would also create a pathway to alert parents when their child’s social media activity contains any of 15 specific terms or phrases linked to eating disorders, suicidal ideation and sexual harassment.
“Parents are struggling to protect their kids from the harmful effects of social media, where children are more exposed than ever to cyberbullying, eating disorders, and other online threats to their wellbeing,” Warner said in a press release. “Sammy’s Law will give parents the choice to be alerted of concerning behaviors on social media, while protecting their personal information. I’m proud to join this bipartisan effort so parents have more resources to supervise their children’s social media use.”
The bill follows a landmark ruling finding that social media companies Meta and YouTube could be held liable for designing addictive social media products targeting children.
A version of the bill was introduced in the House by Rep. Debbie Wasserman Schultz, D-Fla., in December.
AI training data transparency
On Thursday, a bipartisan team of House lawmakers introduced a bill that would bring more transparency to the inner workings of foundation artificial intelligence models.
The AI Foundation Model Transparency Act, introduced by Reps. Don Beyer, D-Va., Mike Lawler, R-N.Y., and Sara Jacobs, D-Calif., would instruct the Federal Trade Commission to establish transparency requirements for how leading large language models, such as ChatGPT, Claude, Gemini and Grok, are created, with the goal of mitigating biased and harmful outputs.
“Artificial intelligence foundation models commonly described as a ‘black box’ do not inherently give consumers the tools to understand why a model gives a particular response. Giving users more information about the model — how it was built and what background information it bases its results on — would greatly increase transparency,” Beyer said in a press release. “This bill would help users determine if they should trust the model they are using for certain applications, and help identify limitations on data, potential biases, or misleading results. When a model’s bias could lead to harmful results like rejections for housing or loan applications, or faulty medical decisions, the importance of this reform becomes clear and very significant.”
If passed, the bill would direct the FTC — in consultation with the director of the National Institute of Standards and Technology, the secretary of Commerce and the director of the White House Office of Science and Technology Policy — to set requirements for AI model developers to improve documentation of how their models are trained within one year of the bill’s passage.
Companies would then need to submit documentation about the lifecycle of their AI models to the FTC, focusing on the specific training data used and whether user data is collected during training.
“This is about accountability and getting ahead of a rapidly evolving technology before it outpaces common-sense guardrails,” said Lawler. “As the general public interacts with AI every day, whether they realize it or not, Americans deserve to know how these systems are built, what data is being used, and where the risks are.”
Beyer introduced a version of the legislation in 2023, but it did not make it out of committee.
Data center moratorium
Sen. Bernie Sanders, I-Vt., and Rep. Alexandria Ocasio-Cortez, D-N.Y., introduced a bill on Wednesday that would place a moratorium on the construction of new data centers designed to support the growing demand for artificial intelligence computing power.
The Artificial Intelligence Data Center Moratorium Act puts safety ahead of new AI infrastructure development. Its primary goals are to ensure that AI tools and systems are safe and effective before market deployment, that the economic gains of AI are equitably distributed, and that ratepayers are protected from shouldering increased electricity costs.
The bill would also ban exports of AI computing infrastructure to countries deemed to lack adequate safeguards ensuring that AI is deployed in a “safe and effective” manner, that workers are protected and that AI does not harm the environment.
“AI and robotics are creating the most sweeping technological revolution in the history of humanity. The scale, scope and speed of that change is unprecedented. Congress is way behind where it should be in understanding the nature of this revolution and its impacts,” Sanders said. “Bottom line: We cannot sit back and allow a handful of billionaire Big Tech oligarchs to make decisions that will reshape our economy, our democracy and the future of humanity.”
Improving subsea cable security
Rep. Joe Wilson, R-S.C., introduced a bill on Tuesday that would fortify the U.S. government’s coordination of the security, installation, maintenance and repair of the international subsea fiber-optic cable network.
Undersea cables carry the vast majority of the world’s data traffic, and their security has come under increased scrutiny amid growing threats to cyber infrastructure worldwide. In 2024, the Federal Communications Commission’s then-chairwoman, Jessica Rosenworcel, sought feedback on strategies to fortify undersea cable security.
Updating the Defense Production Act
On Tuesday, Rep. Stephen Lynch, D-Mass., introduced a bill to amend the Defense Production Act of 1950, updating the requirements of the Defense Production Act Committee to create a new subcommittee on emerging technology.
The Defense Production Act Committee was created as an amendment to the law in 2009. The committee functions as an interagency platform to advise the president and help coordinate DPA actions across the government.
More regional support for quantum tech R&D
Sen. Kirsten Gillibrand, D-N.Y., introduced a bill to amend the National Quantum Initiative Act on Monday, requiring more support for regional innovation initiatives in quantum information science and technology.
The National Quantum Initiative Act’s authorization expired in 2023, and the National Quantum Initiative Reauthorization Act still awaits congressional action.




