Proceed with caution: Industry advises a careful approach to generative AI


Government and industry tech executives are focused on the risks as well as the opportunities presented by the generative AI revolution.

ChatGPT did not write this article.

But the artificial intelligence app might soon be writing tech magazine stories, and, considering the rapid increase in AI capabilities, more complex tasks are likely to follow. Generative AI holds promise for federal government use cases from cybersecurity to public health and medicine to citizen services. But these opportunities come with risks and policy challenges.

“In my mind, its potential is virtually unlimited,” said Rob Carey, president of Cloudera Government Solutions. “The only limiting factor may be the data you give it access to.”

Carey, who served as principal deputy chief information officer for the Defense Department, thinks that in five years, generative AI will be in widespread use across the government. But a lot has to happen before then. 

“We’ve got to be careful not to commoditize AI before we understand it,” Carey told Nextgov/FCW. “AI is hard to do right, but we’ve got to do it right.”

Doing it right will require agencies to address data privacy, bias and ethical concerns, and the responsible use of the technology, all pillars of the Biden administration’s Blueprint for an AI Bill of Rights and other recent policy documents. The advice of the industry and federal executives interviewed for this article could be distilled onto a street sign reading “proceed with caution.”

“We need to be cautious about how we are acquiring and using large language models and AI platforms that may have very unintended consequences for our data,” Eric Hysen, CIO at the Department of Homeland Security and co-chair of the agency’s AI Task Force, recently told Nextgov/FCW.

“We’ll be working with industry to make sure that we can very clearly understand, as we continue to adopt more AI services and platforms, what that means for our data protection efforts and how we know where our data is going and what it is being used for. It’s something that is going to take some work from both government and industry,” Hysen said.

Gary Washington, CIO of the Agriculture Department, also stressed caution regarding advanced AI tools, even as some agencies under his purview experiment with them. 

“AI is sexy to people now. We have some agencies that have been proactive about dipping their toes into that area, but I just felt like we need to put some guardrails around it, because that could potentially be dangerous,” Washington told Nextgov/FCW.

Steps to responsible AI use

Kathleen Featheringham, vice president of artificial intelligence and machine learning at Maximus, said that because industry is primarily responsible for developing generative AI technologies, it’s also well-positioned to “lean forward” in partnering with government customers on how to use AI intelligently and securely.   

Featheringham recommends that agencies develop and implement AI guidance principles and align those with desired outputs and outcomes for AI. Transparency is also key, she said, as decision makers should be able to understand what data may go into something like an AI training model, how it is used and whether it is appropriate for a desired use case.

Finally, she stressed the importance of testing and validating AI tools, especially for agencies that engage broadly with the public.

“To me, it always starts with thinking about the outcomes and outputs you’re trying to get, and then working backwards to understand who is involved and what is the impact,” Featheringham said. “Does it do what it says it is supposed to do? How does it work?”

We’ve got to be careful not to commoditize AI before we understand it.
— Rob Carey, Cloudera Government Solutions

Jim Carlini, chief technology officer at Leidos Inc., said that because trust between human operators and advanced AI tools is so important, it’s critical for industry to build what he calls “explainability” into solutions. In short, that means designing software that tells humans why it does what it does whenever questions arise.

“If you build in explainability, then I know why it is saying what it is saying,” Carlini said.
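The idea is easy to see in miniature. The sketch below is purely illustrative, not drawn from any Leidos product: it pairs a trivial linear scoring model with per-feature attributions, so a human reviewer can see which inputs pushed the system toward its answer. The feature names, weights, and threshold are all hypothetical.

```python
def predict_with_explanation(features: dict[str, float],
                             weights: dict[str, float],
                             threshold: float = 0.0):
    """Score a simple linear model and attribute the score to each input."""
    # Each feature's contribution is its value times its learned weight.
    contributions = {name: features[name] * weights.get(name, 0.0)
                     for name in features}
    score = sum(contributions.values())
    decision = "flag" if score > threshold else "clear"
    # Rank features by how strongly they pushed the decision.
    top_reasons = sorted(contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)[:3]
    return decision, score, top_reasons


if __name__ == "__main__":
    # Hypothetical security-review inputs, for demonstration only.
    decision, score, reasons = predict_with_explanation(
        features={"anomalous_logins": 4.0, "data_volume_gb": 1.2, "after_hours": 1.0},
        weights={"anomalous_logins": 0.8, "data_volume_gb": 0.1, "after_hours": 0.5},
    )
    print(decision, round(score, 2))
    for name, contribution in reasons:
        print(f"  {name}: {contribution:+.2f}")
```

Real generative AI systems are far harder to explain than a linear model, but the contract with the user is the same: the output arrives with the reasons behind it.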

He recommends that agencies begin experimenting with the technology or risk being left behind. For many agencies, he said, that means setting up secure environments “where they can figure out what the return on investment is for a particular application.” Whether through existing IT contracts and vehicles or through new mechanisms, it’s important for agencies to get a toehold in AI if they haven’t already.

“You need a secure environment where you’re willing to put your data into it, then you need to be able to experiment with what people call ‘prompt engineering’ — how you interface with the model to get what you want out of it,” Carlini said. “Whatever is the most expeditious way, I think the most important thing is to just get started and do some experimentation in a secure way.”
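In practice, that experimentation can be as simple as running the same document through several prompt phrasings against an internally hosted model and comparing the results. The sketch below is a hypothetical illustration of that loop; `query_model` is a placeholder for whatever endpoint an agency stands up in its own secure environment, not a real API.

```python
# Two candidate phrasings of the same task; adding role framing and
# constraints is a common prompt-engineering tactic.
PROMPT_VARIANTS = [
    "Summarize the following incident report in one sentence:\n{text}",
    ("You are a concise analyst. Summarize the incident report below "
     "in one sentence, citing only facts stated in it:\n{text}"),
]


def query_model(prompt: str) -> str:
    # Placeholder: in practice this would call the agency's internally
    # hosted model endpoint. Here it just echoes for demonstration.
    return f"[model output for prompt of {len(prompt)} chars]"


def compare_prompts(document: str) -> None:
    """Run one document through each prompt variant and print the outputs
    so reviewers can judge which phrasing produces the most useful result."""
    for i, template in enumerate(PROMPT_VARIANTS, start=1):
        response = query_model(template.format(text=document))
        print(f"--- variant {i} ---\n{response}\n")


if __name__ == "__main__":
    compare_prompts("At 02:14 a monitoring job failed and restarted twice...")
```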

Battle for talent

Just as cloud computing ushered a swath of new skill sets into the government and contracting workforces, AI’s advancement is likely to have profound impacts on government employees and contractors. Carlini believes the advent of AI, along with machine learning and other related technologies, is exacerbating an already competitive battle for tech talent between industry and the public sector.

“It’s going to be a journey for both sides, government and industry, to get the workforce, and it’s a challenge that is just going to get worse in these different areas because there’s just so much pressure for this kind of talent,” Carlini said.