Committing to a U.S. Government-Industry Partnership to Empower AI Innovation


Such a partnership could make prompt and significant progress in resolving the top barriers to AI innovation and deployment.

When it comes to artificial intelligence and the federal government, the range of impactful deployments appears limited only by human imagination. For example, the Defense Department's Joint Artificial Intelligence Center has successfully tested a disaster-response tool that uses AI to analyze overhead imagery from aircraft and satellites, identify flooded areas and assess damage.

The development—summarized in a recent report from Accenture—illustrates AI's promise to substantially enhance the efficiency and efficacy of federal agencies. Other real-life examples in the report include the Coast Guard's use of the technology to determine from satellite imagery whether a vessel should be inspected for suspected smuggling, and the Social Security Administration's use of AI to evaluate documentation supporting medical claims.

AI is about solving problems. It applies predictive analytics to mission-related information and objectives so users can anticipate outcomes and plan future initiatives accordingly. It observes and captures data about increasingly complex systems and software components around the clock to detect anomalies and identify the root causes of issues behind performance disruptions and system degradations. What's more, it automates effective responses to these issues, freeing up teams to focus on what they do best: developing and executing overarching strategies that best support their agency's mission. (Note that I say "freeing up" teams here, not removing them—AI is about augmenting human capabilities, not replacing them.)

Clearly, government leaders are interested in this technology: 91% of agencies are either piloting or adopting AI in some form, compared to just 73% of global organizations overall, according to the Accenture report. In fact, AI could potentially augment nearly half of the average federal worker's tasks. There are, however, challenges standing in the way, with 72% of federal officials, industry executives and IT decision-makers indicating that the government is not doing enough to increase its pace of AI adoption. When asked what's holding back progress, 56% cite a lack of technical expertise, followed by resource constraints (49%) and ethical/bias concerns (48%).

The stakes extend far beyond agencies missing performance goals. The National Security Commission on Artificial Intelligence's final report, released in March, paints a dark and troubling picture of a future in which the U.S. falls far behind its adversaries. "America is not prepared to defend or compete in the AI era," according to the report. "This is the tough reality we must face … We fear AI tools will be weapons of first resort in future conflicts … States, criminals and terrorists will conduct AI-powered cyber attacks and pair AI software with commercially available drones to create 'smart weapons.' It is no secret that America's military rivals are integrating AI concepts and platforms to challenge the United States' decades-long technology advantage. We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities and new warfighting paradigms."

That is an unsettling reality. But it’s one we can address today. And, in referencing “we,” I mean all of us, as urged in the commission report. “The U.S. government cannot do this alone,” it states. “It needs committed partners in industry, academia and civil society.”

From an industry perspective, we must take this on as our patriotic duty. We are not, after all, in the technology business strictly to make money. We do what we do to make a difference: a positive, lasting impact on our greater society.

This includes the government. When an adversary disrupts communications between an aircraft carrier and a drone, it compromises the mission. When a systems outage interrupts the distribution of a COVID-19 vaccine, it compromises the mission. Our industry commands the expertise and skill sets to work with federal leaders as devoted partners to maximize AI’s total potential to fix these problems.

In forming such a partnership, we would make prompt and significant progress in resolving the aforementioned top barriers to AI innovation and deployment. Our combined sector's knowledge, work ethic and passion for ongoing discovery would immediately fill technical expertise gaps while providing agencies with needed outside resources in the form of funding, talent and tools. We would also help address concerns about ethics and bias, as our industry's people bring a diverse collection of backgrounds and perspectives. That diversity helps ensure no single mindset drives AI advancements.

Ultimately, we can work as partners with agencies so they are always at least two or three steps ahead of state adversaries, criminals and terrorists on AI, instead of constantly playing catch-up. As a result, we can make our nation and the world a safer place—and a better one.

Willie Hicks is the chief technology officer at Dynatrace Federal.