The U.S. should work with allies to continue to innovate in AI capabilities while agreeing on common use cases, administration officials say.
Federal policy leaders working in emerging technology plan to prioritize collaboration over competition with allies, as artificial intelligence and similar systems continue to advance rapidly with limited legal guardrails.
Cybersecurity and Infrastructure Security Agency Director Jen Easterly and Ambassador at Large for Cyberspace and Digital Policy Nathaniel Fick discussed their views on how AI stands to impact the international cybersecurity landscape during a Special Competitive Studies Project event on Tuesday.
While not explicitly naming Congress as the body that should regulate burgeoning AI systems, Easterly did express the need for guidance that continues to motivate innovation while simultaneously protecting individuals from algorithmic harms.
“We need to look at a framework that enables us to protect it, but still take advantage of the amazing innovation,” she said.
Both Easterly and Fick spoke to the need to foster deep bilateral partnerships that establish common foundations for the use of AI and other emerging technologies.
“It's incumbent upon us to articulate the positive, attractive, compelling vision for what the AI-enabled and -powered future can look like, and attract others to it,” Fick said. “There's an intrinsically competitive nature of this, of course, but we try to anchor it all in that attractive, compelling vision so that others can sign up to it and join us.”
The need for continued unity between global allies comes as foreign adversaries have become the primary concern for public sector organizations’ cybersecurity efforts, spurred largely by Russia’s continued war on Ukraine and the corresponding digital offensive. Easterly said that digital hostility should be at the forefront of creating regulations around powerful AI systems.
“If we allow these technologies to fall into the hands of our adversaries, they will use them to do unimaginably bad things,” she said. “The history of security is both the power of imagination and the failure of imagination, and as leaders we need to understand how to leverage the first and how to prevent the latter.”
With most AI development stemming from the private sector, Easterly noted that the Biden administration has taken positive steps to bring public sector regulatory options into the broader discussion on AI guidance. She cited last week’s meeting between leading AI industry chief executives and administration officials—as well as the AI Risk Management Framework and the Blueprint for an AI Bill of Rights—as steps toward developing AI technologies with “democratic principles” at the forefront.
“We can stipulate that AI will do incredible, incredible things,” Easterly said. “It will make our lives easier, it will make our lives better, but it will also make lives easier and better for our adversaries and our enemies large and small, who will use these capabilities to do significant harms.”