Former DHS AI official comments on new extinction-level event study

The study from Gladstone AI warns that the day when AI could be capable of ending all human life is closer than people think.

Artificial intelligence has been making inroads into just about every aspect of people's lives these days, while also finding its way into government and even military service. But despite the many success stories, current AI technology has also demonstrated serious flaws, sometimes introducing bias, errors or even outright hallucinations into its processes, answers and solutions. As such, governments around the world are struggling with how to regulate the new technology without stifling innovation.

Of the many legitimate concerns regarding AI, perhaps the most worrying is contained in a recent report from Gladstone AI that the company says was commissioned by the State Department to assess the current threat landscape surrounding the technology. The report warns that AI poses a clear risk of an “extinction-level threat” to humanity, meaning that it could one day end all human life on Earth. And it argues that the day when AI could pose, or at least attempt, such a threat is closer than most people realize.

Suggesting that AI could end the world is nothing new. Several sci-fi movies, like the Terminator series, are centered on that premise, as are countless books, TV shows and video games. And it doesn’t help when an AI goes a little off the rails at a live United Nations press conference by suggesting that the world might be better off with AIs in charge instead of flawed and overly emotional humans. But even so, the Gladstone report is generating a lot of talk about how to support AI while also making sure that it doesn’t try to eliminate or subjugate people in the process.

Joel Meyer has spent a lot of time working on AI issues in government. He is the former deputy assistant secretary for strategic initiatives at the Department of Homeland Security, where he spearheaded the creation of the first DHS Artificial Intelligence Task Force. He was also on the transition team for President Joe Biden, serving as the day-one chief of staff for DHS’s Office of Policy.

Today, Meyer is the president of public sector for Domino Data Lab, and also serves on the newly formed Atlantic Council committee investigating ways to leverage the defensive capabilities of AI for government service. I talked with Meyer about the new report, the advantages of AI in government, and things that agencies should look out for when developing new AI and generative AI applications.

Nextgov/FCW: Thanks for taking the time to talk with us. Can you start by explaining what the DHS AI task force that you spearheaded worked on, and what it discovered?

Meyer: The task force was designed to take stock of AI efforts; define a vision and a roadmap for more rapid adoption; and drive progress in a cohesive and coherent manner. The department, to its great credit, has been releasing the required 60-day update memos, which show tremendous progress in tackling the policy, people and technology challenges of adopting AI. Examples include workforce training on how to responsibly use commercial AI tools, the launch of three new AI pilot projects and the development of AI infrastructure tools for deploying and operating AI models. 

These public updates also set an important and welcome precedent of transparency and accountability in how government agencies use this new technology.

Nextgov/FCW: Looking at how government uses AI, the recent study by Gladstone AI raises a number of alarms, including the often-heard concern that AI could destroy humanity. Do you think that the study makes some valid points?

Meyer: There are undoubtedly risks to safety, security and prosperity arising from malicious uses of AI. And while warnings of human extinction are quite alarmist, there is no doubt that the potential power of future AI systems is enormous.

However, I don’t view onerous restrictions on the development of frontier AI technologies as the right answer. For one, AI is being developed in many countries, and holding back U.S. development will only put us at a disadvantage. But beyond that, we want the incredible potential of AI for good, from scientific and medical breakthroughs to creativity and productivity gains, to be realized. We want to be more balanced in finding ways to guard against the worst outcomes while also ensuring we are optimizing the potential to achieve the best outcomes.

Nextgov/FCW: But going beyond sci-fi themes, is it really possible for AI to destroy humanity? How could that even happen?

Meyer: There is less fear about the AI of today threatening humankind and more about what AI will be capable of in the future. The danger is AI-powered weaponry falling into the wrong hands or humans losing control of advanced AI systems, like Artificial General Intelligence, which would have the ability to think at the human level. Without the right guardrails in place, the use of these systems could have unintended consequences. 

That’s why it’s important, as AI is being developed today, for there to be transparency and governance in every aspect of the AI lifecycle, including testing all the ways a model might be used in malicious or other unintended ways.

Nextgov/FCW: Okay, so we are safe for now from an extinction-level event. What kinds of real threats do AI and generative AI already pose?

Meyer: I worry about threats we have already seen manifested, such as convincing disinformation and fakes, particularly instances where elections have been targeted. Generative AI can also supercharge cyber threats, from basic applications like improved phishing campaigns to more advanced possibilities like discovering new vulnerabilities and developing zero-day attacks to exploit them.

There are also scenarios where the application of AI in society-level surveillance can threaten privacy and civil liberties. The reality is that we are just starting to see the ways in which AI will change the threat landscape. This reality underscores the urgent need to have people, processes and technology in place that allow our nation to leverage AI for our own protection.

Nextgov/FCW: What about using AI models to defend this country, even from other AIs? Is that possible?

Meyer: Just as AI can be used against us, we can also use AI to protect ourselves. AI can be used to neutralize autonomous weapons systems through real-time analysis and countermeasures. In cyber defense, AI algorithms can monitor networks for unusual patterns, automatically patch vulnerabilities and respond to sophisticated cyber attacks faster than human operators can. AI can safeguard supply chains by continuously monitoring for signs of tampering or intrusion. And AI can also be used to help identify and counter disinformation campaigns, using advanced analytics to trace their origins and mitigate their impact.
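As a rough illustration of the kind of network monitoring Meyer describes, here is a minimal, hypothetical sketch of unsupervised anomaly detection over network flow statistics. It uses scikit-learn's IsolationForest on synthetic data; the feature set and the numbers are illustrative assumptions, not anything drawn from the report or the interview.

```python
# Minimal sketch of AI-driven network anomaly detection. All data
# here is synthetic; a real deployment would ingest live flow logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: columns are bytes transferred,
# packets per second, and distinct destination ports per minute.
normal_traffic = rng.normal(loc=[500, 50, 3], scale=[100, 10, 1], size=(1000, 3))

# Train an unsupervised model on baseline behavior; no labeled
# attack data is required, which suits real-world monitoring.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# New observations: one typical flow and one exfiltration-like flow
# (huge byte count, many destination ports).
new_flows = np.array([
    [510, 48, 3],      # looks like baseline traffic
    [50000, 400, 90],  # looks anomalous
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "anomaly" if label == -1 else "normal"
    print(f"flow {flow} -> {status}")
```

In practice, flagged flows would feed an alerting or automated-response pipeline rather than a print statement, but the underlying pattern, learning a baseline and scoring deviations from it, is the same.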

This is why it is imperative that while we continue to work to guard against the malicious use of AI, we also ensure that we are adopting AI to keep us safe.

Nextgov/FCW: Is any of that happening right now?

Meyer: We are already seeing the Defense Department make significant strides toward implementing AI technology. We have seen the DOD progress from Project Maven to the Joint Artificial Intelligence Center, the JAIC, and now to the Chief Digital and Artificial Intelligence Office, the CDAO.

Beyond the Pentagon there has been an equal flurry of activity, with each service standing up its own CDAO functions and launching initial projects that develop AI models for next-generation capabilities and employ AI in current programs. This has involved lots of top-down leadership from the department’s most senior officials, but also incredible ingenuity from operators in the field.

To continue this trajectory, it will be important for DOD to continue putting smart, forward-leaning policies in place to encourage experimentation and adoption in ethical and responsible ways. It will also be important for Congress and the White House to invest in these capabilities, including hiring and training, procurement and development, and deployment and monitoring.

Nextgov/FCW: Putting aside the more alarming aspects of the Gladstone report for a moment, one of the things the report recommends is accelerating the use of AI in government. Can that be done safely, or is that too risky?

Meyer: Accelerating AI deployment at scale and ensuring its responsible execution are not contradictory goals. We can accelerate the pace of deploying AI at scale, and we can do it responsibly, with the right people, process and technology-driven approach. 

This approach is already being used in the commercial sector among advanced AI organizations in highly regulated industries such as financial services, insurance and pharmaceuticals. Those organizations have made significant investments in acquiring the necessary talent, establishing robust processes, and adopting the platforms required to balance rapid development with stringent governance across the entire AI lifecycle. 

This is now a priority in government. Following President Biden's directive that every agency must appoint a chief AI officer, agencies like the Department of Homeland Security are in the midst of a significant push to hire 50 AI specialists. These specialists are tasked with either developing or procuring the AI tools needed, while also formulating responsible AI practices to set up essential safeguards from the outset. They’ll create the right tech stack and a system that ensures AI models are delivering on their intended purpose, with continuous improvements to optimize model performance.

Nextgov/FCW: Thank you for taking the time to talk with us about this fascinating topic. Is there anything you would like to add before we have to say goodbye?

Meyer: Yes. It’s critical for government agencies to take advantage of the momentum behind AI to speed the ethical and responsible adoption of these technologies. It is not easy, but the need is urgent. And there will need to be progress on all fronts to make this happen. 

This includes bringing talent into the government and building AI solutions where customization is critical, as well as leveraging America’s technology innovators, particularly its startup ecosystem, a unique and strategic national asset, to make cutting-edge AI technology available to those who administer the government programs that are so critical to our safety, security and prosperity.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys