How Agencies Should Prep for Artificial Intelligence


Artificial intelligence refers to the ability of computer systems to perform tasks that normally require human intellect and judgment. Often discussed alongside machine learning, these systems take in structured or unstructured data, produce algorithms, find patterns, and continually learn, refining their capacity over time to make decisions, guide behavior, offer solutions, or act. AI is now moving rapidly into “deep learning,” which looks not just at the face value of data but draws meaning and representations from initial and cumulative data, accelerating its learning and capacity as it grows. IBM’s Watson is probably the most commonly known system exercising deep learning.

Although the term was coined in the mid-1950s, AI systems are only now entering everyday life: self-driving cars, airplane autopilot systems, traffic lights that adjust automatically to flow and density, mobile check deposits that decipher handwriting, advertising that targets detected human emotion, and analysis of electronic trails to learn and exploit human behavior patterns.

How Government Can Use AI

Government is taking a strong look at using AI to strengthen its decision making and create operational efficiencies. For example, Governing reported that the Las Vegas Health Department uses AI to scan social media for incidents of food poisoning, using that data to identify food establishments to inspect. The Defense Department uses AI on drones to assess battlefield conditions and call for appropriate resources. Imagine using AI to calculate prescription drug type and dosage based on your symptoms and body composition for greater accuracy, or to patrol computer systems for hackers. When trained, AI can think and act faster and more accurately than humans.

AI can also be used to select top candidates from a pool, conduct employee interviews, and make unbiased recommendations based on candidates' moods, skills, exaggerations, intelligence levels, and more by combining and analyzing video, written, social media, and other data sets. AI can optimize the size and scope of your workforce, detect spending patterns, and optimize procurement contracts. AI could analyze the Federal Employee Viewpoint Survey combined with other data sets to identify especially stressful workplaces. It can determine and request optimal resource allocations for disaster sites. It can detect a shift in a call center worker's customer orientation and tell that worker it is time for a break. The list is endless.

Data Can Backfire

While opportunity exists, data capacity in government has a long way to go to facilitate machine learning. Data across agencies is not standardized and is housed in many locations. A lot of unbiased judgment is necessary to be able to truly leverage the data, at least initially. Visionaries are needed to identify beneficial data and AI opportunities for government.

Because data has many uses, we must focus on what is important in each area. For example, data can be used to measure performance, conduct if/then analysis, and speed up the processing of certain functions. Within these broad applications, AI system designers must try to eliminate unneeded data, improper data, and the problems that come from not having enough data. It is also possible to game the system by inputting erroneous data or withholding less-than-stellar performance outcomes to limit accountability. In other words: garbage in, garbage out.

Technology must not drive every decision to expend resources, act, or apply risk factors. For example, winning a war takes time, and progress is not always evident right away. The impact of major legislation on our economy may require a long view and cause short-term strains. While these patterns should be captured so that the system can “learn” them, a rush to judgment may need to be tempered.

Policy and Governance Are Required

There are many potential applications of AI, as well as potential pitfalls. Those pitfalls, or risks, should be addressed now, while AI is being considered and before it moves more powerfully into government. Troves of data are critical to a successful AI system: some of it is already available, more is needed, and even more will be generated by the systems themselves. In an AI system, the data must be tagged and the system trained to perform as planned, and its performance must be verified and validated.

In its March 2018 report on AI, the Government Accountability Office identified the need to update regulations, address ethical decision making, and determine acceptable risk. Risks need to be identified and can be addressed through strong expertise, effective management, and strong governance. Some of the considerations I see include:

  • Determining valid purposes. A charter should define what constitutes proper use of AI systems for our government. For example, as we all know, AI could be used to change behavior for nefarious purposes or to infiltrate our election systems and change outcomes.
  • Securing data. In an AI system, data will grow exponentially, and the definitions of personally identifiable information and national security data can and will change dramatically. This needs to be addressed to ensure ethical and secure use. For example, we think of PII today as Social Security numbers, addresses, and medical information, but under AI applications it can grow to include behavioral profiles, spending habits, social habits, and much more.
  • Establishing ethical use of AI. Ethics experts should be educated and used in the development of AI. They should be part of the governing process and the development of laws, regulations, and policy.
  • Agreeing on the impact of technology on society, and being willing to have the critical discussion. How will AI affect people's ability to earn a living, contribute to society, or generate revenue for our government? As populations grow, will we have the opportunity for a decent and contributing life?
  • Changing our definition of data. Today we capture and report on many data elements, but we must learn how to combine structured and unstructured data from many sources to tell a story or make decisions. Data as we know it will evolve, and we need strong, capable thinkers to support this. In many parts of government, this work is still in its infancy.

The government needs to address these through a strong advanced computing oversight board to ensure systems are built, deployed, and exploited appropriately, consistent with needs, values, and laws. Government also needs to hire exemplary data and AI experts to guide the use of advanced technologies for both proper use and the development of efficient and effective applications.

Strong debates and decisions are needed at the highest policy levels of government to determine how AI can best serve the country: the efficiency and effectiveness of government, as well as society and global competitiveness. Its impact on our society must be ascertained (and perhaps controlled). Changes in the philosophy and purpose of government will be necessary, for example, to redirect human resources toward mission accomplishment and away from back-office functions, or to decide the appropriate footprint of government.

Steve Goodrich is the CEO of the Center for Organizational Excellence and author of Transforming Government from Congress to the Cubicle. He can be reached at