This week, lawmakers renewed the legislative push to investigate and regulate how automated and artificial intelligence systems are implemented in critical operations.
Congress has undertaken a flurry of new activity targeting artificial intelligence technologies in recent weeks, following the rapid advancement and ambiguous implications of AI systems.
Lawmakers from both chambers—namely Sen. Ed Markey, D-Mass., and Reps. Ted Lieu, D-Calif., Don Beyer, D-Va., and Ken Buck, R-Colo.—introduced new legislation Wednesday to respond to mounting concern over unregulated AI systems making important societal decisions and running infrastructure operations.
The bicameral, bipartisan bill aims to better regulate AI systems that could govern U.S. nuclear weapons. Titled the “Block Nuclear Launch by Autonomous AI Act of 2023,” it primarily seeks to mandate a human element in all protocols and systems that govern the launch of U.S. nuclear weapons.
“As we live in an increasingly digital age, we need to ensure that humans hold the power alone to command, control, and launch nuclear weapons—not robots,” said Markey in a news release. “We need to keep humans in the loop on making life or death decisions to use deadly force, especially for our most dangerous weapons.”
Building on the U.S. Department of Defense’s 2022 Nuclear Posture Review and Geneva Convention regulations, the bill would codify a requirement for “meaningful human control” over any autonomous weapons system.
The bill has garnered cosponsorship from fellow Sens. Bernie Sanders, I-Vt., Elizabeth Warren, D-Mass., and Jeff Merkley, D-Ore.
“While U.S. military use of AI can be appropriate for enhancing national security purposes, use of AI for deploying nuclear weapons without a human chain of command and control is reckless, dangerous, and should be prohibited,” said Buck. “I am proud to co-sponsor this legislation to ensure that human beings, not machines, have the final say over the most critical and sensitive military decisions.”
Sen. Mark Warner, D-Va., also called for more transparency surrounding AI systems, issuing several letters Wednesday to chief executive officers of prominent tech companies expanding into the AI field, including Tim Cook of Apple and Sundar Pichai of Google.
Warner implored tech leadership to thoroughly investigate the harms documented in AI and machine learning technologies, namely learned bias through data input.
“With the increasing use of AI across large swaths of our economy, and the possibility for large language models to be steadily integrated into a range of existing systems, from healthcare to finance sectors, I see an urgent need to underscore the importance of putting security at the forefront of your work,” Warner wrote.
He also highlighted a need to control for security vulnerabilities in automated systems, including data poisoning attacks. As part of his inquiry, Warner asked for a comprehensive overview of each company’s security efforts, third-party access procedures and other security requirements.
“As we build out these AI platforms, these AI tools, I've asked all the major companies: ‘Tell me what you're doing to ensure security at the front end.’ All of these massive databases that allow these algorithms to create this information back in speech patterns of generative AI,” Warner told reporters during a press call on Thursday. “You've got to make sure that building in at the outset, there are appropriate cybersecurity protections.”
Warner’s efforts and the new Block Nuclear Launch Act mirror ongoing federal inquiries into better regulating the emerging but powerful AI/ML software scene. Other lawmakers have recently used AI tools like the popular ChatGPT to make a case for its regulation, and its place in nuclear technologies specifically has been discussed by the Nuclear Regulatory Commission, which released a draft of its first Artificial Intelligence Strategic Plan last June. Like the bill, the Commission’s document also underscores the need for consistent human-machine interaction.
“Artificial intelligence offers a lot of possibilities, and also…unrestrained AI could literally take over key functions,” Warner said. “So I'm trying to make sure that I get educated and educate my colleagues—there is nothing partisan about this issue.”
He added that, while the U.S. should continue spurring innovation in AI and ML products, the country can’t take the same passive approach it took as social media platforms emerged.