National security, emerging technology and cybersecurity experts warned lawmakers Wednesday that recent advancements in artificial intelligence and machine learning have outpaced current federal policies, and said the government must implement new regulations and standards to grapple with the rapid development of emerging technologies.
And while there’s little debate that AI/ML could threaten U.S. national security and the nation's status as the leader in technology innovation over near-peer rivals like China and Russia, opinions are divided on who should develop the guardrails that shape the technology’s development — and on how urgently those guardrails should be crafted.
"The application of some of these large models to developing very capable cyber weapons, very capable biological weapons, disinformation campaigns at scale, poses grave risks," said RAND Corporation President and CEO Jason G. Matheny during a Wednesday hearing of the Senate Armed Services subcommittee on cybersecurity. "The very technology that we develop in the United States for benign use can be stolen and misused by others."
Recent reports have suggested an urgent need for government agencies to develop comprehensive plans and improved coordination around AI, including a Government Accountability Office report published in April 2022 that recommended the DOD establish collaboration guidance, enhance its AI strategies and refine its AI inventory process.
A separate report from the National Security Commission on Artificial Intelligence said the U.S. has "not yet grappled with just how profoundly the AI revolution will impact our economy, national security and welfare," and called for major investments to fund innovation and research efforts.
Matheny called for a licensing regime and a federal regulatory approach toward AI and ML to ensure agencies like the Department of Defense can keep pace with rapid technological advancements while adhering to national security and cybersecurity requirements.
Some technologists, including Tesla and Twitter CEO Elon Musk, have argued the opposite. Musk recently co-signed an open letter penned by the Future of Life Institute — a 501(c)(3) non-profit backed by several technology foundations, including the Musk Foundation — calling for a six-month pause on the development of AI systems so researchers can "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
Shyam Sankar, chief technology officer and executive vice president of Palantir, told the subcommittee Wednesday that a pause would not only be difficult to implement both domestically and abroad, but that the interruption in development and research would essentially amount to "ceding the advantage to the adversary."
"What's going to be different in five months and 29 days?" said Sankar. "I would double down on the idea that a democratic AI is crucial. We will continue to build these guardrails around this, but I think ceding our nascent advantage here may not be wise."
Josh Lospinoso, co-founder and CEO of Shift5 — an Arlington-based data and cybersecurity analytics company — also said it was "impractical to try to implement some kind of pause" and suggested that agencies like the DOD should instead "be empowered to holistically collect, assess and manage data" with the help of AI-enabled technologies.
Sen. Mike Rounds (R-S.D.), ranking member of the subcommittee, agreed that implementing a pause on AI development could pose a risk to U.S. security interests, and that doing so would effectively give near-peer competitors a "leap ahead" in the field.
"I don't believe that now is the time for the United States to take a break from developing our AI capabilities," he said. "Is it even possible to expect that other competitors around the world would consider taking a break?"
Sen. Joe Manchin (D-W.Va.), chairman of the subcommittee, called for Congress to establish guardrails around the development of AI to avoid a gap in congressional oversight and a risk to national security.
"The internet was founded in 1983, and Section 230 didn’t come into play until 1996," he said. "We’ve been discussing ever since how we could have done more, and if it’s gone too far and who’s accountable."
Manchin also requested a report from the panelists "as quickly as possible" on congressional recommendations around the development of AI software, adding: "We can then start looking into how we can write legislation and not repeat the mistakes of the past."