What Trump’s Order on Trustworthy AI Might Mean for Agencies

Experts say it’s a step in the right direction, but the coming months will determine its ultimate impact. 

An executive order recently signed by President Donald Trump governing the federal government’s use of “trustworthy artificial intelligence” goes beyond setting uniform principles for most agencies to follow when designing, producing or buying the not-so-nascent technology. 

The mandate prompts comprehensive inventories of AI deployments within agencies, a roadmap from the Office of Management and Budget for improved policy guidance on AI use, and moves to expand rotational programs and usher in fresh personnel with tech expertise.

“I’d like to emphasize the truly historic nature of this first-ever framework, for both the acceleration of government modernization and the advancement of trustworthy AI development in the United States,” Dr. Lynne Parker, deputy U.S. chief technology officer and assistant director for AI at the Office of Science and Technology Policy, told Nextgov via email Friday. 

How agencies ultimately interpret and execute the broad policies outlined in the order, though—and whether it produces lasting, more responsibly steered federal AI efforts—will become clearer in the early months of the incoming Biden administration. 

“This has laid out some really good principles, I think, and some sort of important yardsticks. I think the question is: ‘How do you operationalize those?’” Booz Allen Senior Vice President John Larson, who helps steer the firm’s analytics and AI practice serving civil clients, told Nextgov Wednesday. “And that's what we've been really thinking about.” 

AI has emerged over the last few decades as an invisible constant in many Americans’ lives. It underpins the digital maps people use to get around, informs the ads they see scrolling through content on smart devices, and can be used to determine whether individuals are approved for loans or new homes. The term is broad and generally refers to a wide-ranging branch of computer science encompassing systems that can perform tasks typically associated with intelligent beings, but the precise definitions of AI and machine learning remain a moving target. 

For this order, Trump uses the definition of AI set forth in section 238(g) of the National Defense Authorization Act for fiscal 2019. 

At the core of the order are nine common principles guiding the government’s design, development, acquisition and use of AI. They push for the government’s technology to be: lawful; purposeful and performance-driven; accurate, reliable, and effective; safe, secure, and resilient; understandable; responsible and traceable; regularly monitored; transparent; and accountable. 

Those principles aren’t far from—and follow—the ethical AI frameworks previously produced by America’s defense and intelligence communities, which are already being implemented. However, this guidance does not apply to defense or national security systems, to “AI embedded within common commercial products,” or to research and development activities. 

“Since research and development is focused on early stage work that is not yet put into practice, it is not covered by the EO,” Parker said. “However, the EO notes that R&D efforts directed at potential future applications of AI in the federal government should be informed by the principles and OMB guidance.”

Larson and Brookings Institution Rubenstein Fellow for Governance Studies Alex Engler both said it makes sense to exclude R&D pursuits in this initial policy document. 

“We've really over the last 12 to 18 months as an institution focused an enormous amount of our time, energy and efforts in collaboration with our clients around operationalizing AI. And when you're doing something in the lab, you're doing it in sort of a sanitized environment. It's very controlled,” Larson explained. “And I think agencies have really struggled to get from these sorts of proof of concepts and pilots into enterprise AI. So, I think it's important to govern that enterprise AI—and that's what we're talking about here.” 

However, the exclusion of common, in-use commercial products from the order’s scope could introduce some challenges down the line.

“If you're using a tool like Google Maps that is using AI, maybe it's not particularly important that we audit that,” Engler said. “I think where you could feasibly be concerned with commercial products is if it creates an incentive to procure outside software to do something that the government should itself be doing, right—if it’s suddenly easier to use procurement than development. And the incentives that have caused that to happen, have been really problematic in tech.”

New Tech Will Need New Talent

Deep in the order’s mandates are also two new requirements to help grow technical expertise inside the government. Specifically, it directs the General Services Administration to create an explicit AI track to attract experts from industry and academia into its Presidential Innovation Fellows Program, and instructs the Office of Personnel Management to catalog federal rotational programs and pinpoint those that could help expand the number of employees with relevant technical skills.

Engler deemed this a reasonable inclusion but said what’s truly needed are new ways to routinely and systematically recruit people with this in-demand expertise—and that this is more of a bandage.

“I think [PIF has] been a good program and a good influence broadly on the government. But there is an overreliance on fellowship programs, or ‘tours of duty,’” he said. “It's like this for a reason, which is that it's trying to avoid troubling the very difficult federal hiring process, which dissuades a lot of technical workers and doesn't filter candidates out very well.” 

Larson said there isn’t enough talent within the government itself “to do everything that they need to do,” so it was important that the order also included requirements to help train up existing agency personnel. 

“One of the big challenges I experience when I talk with clients, is even understanding the art of the possible, right—so you're almost limited by your imagination—and if you don't understand the power of these tools, and what they can unlock, you may not even think about their application to your mission,” he said. 

Cataloging the Cases

Perhaps the heaviest lift the document assigns agencies is a requirement to make sense of all the AI deployments they’ve released into the world so far. The order calls for the Federal Chief Information Officers Council—within the first two months of the order’s signing—to advise agencies on the format and mechanisms through which they will need to provide their full “inventories of non-classified and non-sensitive” AI use cases. 

Within roughly six months of that guidance’s release, agencies must share their catalogs of deployments. 
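
The order leaves format and mechanism to the CIO Council, so what follows is only a guess at the shape of a single inventory entry: a minimal sketch in Python whose field names are illustrative assumptions, not anything the order or the council specifies.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one AI use-case inventory entry. None of these
# field names come from the order or the CIO Council; they are assumptions
# about the kind of metadata such an inventory would plausibly capture.
@dataclass
class AIUseCase:
    agency: str                # e.g., "Department of Energy"
    name: str                  # short label for the deployment
    purpose: str               # mission function the system supports
    stage: str                 # "pilot", "production" or "retired"
    citizen_facing: bool       # public-facing service vs. backroom operation
    procured: bool             # bought commercially vs. developed in-house
    principles_reviewed: list[str] = field(default_factory=list)
    # ^ which of the order's nine principles have been assessed so far

# One invented example entry.
example = AIUseCase(
    agency="Example Agency",
    name="Claims triage model",
    purpose="Prioritize incoming claims for manual review",
    stage="production",
    citizen_facing=True,
    procured=False,
    principles_reviewed=["accurate, reliable, and effective", "transparent"],
)
```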

“I will not sugarcoat it. That is a daunting task. It's daunting because of the number of ways in which these types of algorithms are already in place today,” Larson said. “They're used in many instances across agencies—both in backroom operations and also in sort of citizen-facing endeavors.” 

While smaller organizations with less sprawling operations may not be troubled by the demand, it could pose challenges for massive agencies that lean on the technology in some of their crucial operations, like NASA or the Energy Department. The latter stood up its own Artificial Intelligence and Technology Office, or AITO, last year to more centrally coordinate all the department’s moves in the vast technical space. But an Inspector General report released last month revealed the office recently “demonstrated that the AI data available to leaders responsible for coordination was incomplete.” The assessment also confirmed that “although AITO identified almost 300 distinct AI projects, it estimated that these represented only about half of all AI projects by various departmental elements that were planned, underway, or recently completed.” 

“I think there's a complexity in the sheer volume of what's currently in place today,” Larson said.

Within 120 days after identifying all their present AI efforts, agencies are additionally instructed by the order to “retire AI applications found to be developed or used in a manner that is not consistent with” all that it outlines. 

“I think setting that up is really important,” Larson said. “If you don't have the right frameworks in place to evaluate your data and your models on an ongoing basis as part of that sustainment process—that AI lifecycle—you run the risk of models not being deployed for their intended purpose, or for what they were originally designed to do. That's when you need to decommission models.”
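
The ongoing evaluation Larson describes could take many forms; a minimal sketch of one plausible version, assuming a simple accuracy metric and an arbitrary tolerance (both invented here for illustration, not anything the order prescribes), might look like this:

```python
# A minimal sketch of ongoing model evaluation: compare live performance
# against the baseline measured at deployment and flag the model for a
# lifecycle review when it drifts too far. The metric and tolerance are
# illustrative choices, not anything the order specifies.

def needs_review(baseline_accuracy: float,
                 live_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """Flag a model whose live accuracy fell more than `tolerance`
    below its deployment-time baseline."""
    return (baseline_accuracy - live_accuracy) > tolerance

# A model deployed at 92% accuracy but now measuring 84% on recent data
# would be flagged for the kind of decommissioning review described above.
if needs_review(baseline_accuracy=0.92, live_accuracy=0.84):
    print("Model drifted beyond tolerance; schedule a lifecycle review.")
```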

Engler was less convinced the inclusion of this direction could catalyze promising results. “I don't expect a rash of retired AI systems, based on what I've seen so far,” he said. 

Among an array of other directives, the order also places responsibility on the OMB director to prepare a roadmap for the policy guidance the agency aims to release or revise to better support agencies’ AI initiatives in a manner consistent with the mandate.

"I'm skeptical that we're going to see OMB resourced well enough to create a broad typology of trustworthy AI that is specific and meaningful to every application," Engler said. "By this I mean, AI systems can be incredibly diverse. It would take a lot of effort for OMB to write guidance general enough to apply to every application in government, and specific enough to be especially meaningful."

Time Will Tell

This executive order follows several other AI-centered workshops, proposals and policies Trump put forth during his administration. And according to the press announcement, through it, the U.S. “is signaling to the world its continued commitment to the development and use of AI underpinned by democratic values.”

Meeting criteria laid out in a previous 2019 executive order on maintaining American leadership in AI, OMB recently issued final guidance on how agencies should regulate the technology in the private sector. Engler noted that while the U.S. might be “a little further ahead” of most of the rest of the world in that realm, that guidance is fundamentally different from this order governing federal entities’ use of the tech. 

“Here, the U.S. is chasing others,” he said, “especially Canada.”

America’s neighbor launched a deliberate process several years ago that culminated in the release of its Directive on Automated Decision-Making in early 2019. Under it, any “automated decision system,” or AI, developed or procured by the government after April 1, 2020, must first undergo a robust “Algorithmic Impact Assessment” designed to help officials evaluate and mitigate the risks that might arise when deploying such systems.
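
In rough terms, that assessment scores questionnaire answers and maps the total to one of four impact levels that dictate the required safeguards. The toy sketch below mimics only that pattern; the questions, weights and cutoffs are invented for illustration and are not the directive’s actual values.

```python
# Toy sketch of questionnaire-based risk scoring in the spirit of Canada's
# Algorithmic Impact Assessment. The questions, weights and cutoffs below
# are invented for illustration; they are not the directive's actual values.

answers = {
    "decision is fully automated": 4,
    "affects individuals' rights or benefits": 3,
    "uses personal information": 2,
    "human reviews the final decision": -2,  # mitigations lower the score
}

def impact_level(total: int) -> str:
    """Map a raw risk score to an illustrative impact tier."""
    if total <= 2:
        return "Level I (little to no impact)"
    if total <= 5:
        return "Level II (moderate impact)"
    if total <= 8:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

score = sum(answers.values())  # 4 + 3 + 2 - 2 = 7
print(impact_level(score))     # -> Level III (high impact)
```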

“Their process is certainly further along—and I think notably more formalized than the U.S.,” Engler said.

Still, Larson noted that the U.S.’ latest order on AI offers an unmatched consistency for AI operations spanning all agencies, which is no easy feat. Drawing on his economics background, he also connected it to the notion that enterprises gain certain advantages from their operational scale. 

“There's an ‘economies of scale,’ of thinking about doing this on a broader level than individually agency by agency,” he said. “And there's some value in thinking about this holistically in that way, and I think it outlines that ... which is important.”

Executive orders are binding across administrations—though they can be reversed. Many of the instructions laid out here will need to be carried out over the next few months, or in the earliest days of President-elect Joe Biden’s term. 

“If it's across administrations and the White House doesn't care—or, for instance, is trying to fix an enormous COVID-19 crisis and economic meltdown—then perhaps you're not going to get a lot of focus on this,” Engler said. “But it doesn't seem super objectionable. So, it's possible that in looking for some continuity of governance, and in looking for opportunities to not throw everything out the window, that this actually does get continued, because it seems pretty reasonable.”