Promoting Trustworthy AI in Government


Successful agencies will do more than just innovate and invest in technology; they will work to ensure that AI in government is safe, reliable, and ethical.

President Joe Biden’s decision to elevate the director of the Office of Science and Technology Policy to a Cabinet-level position underscores the importance of artificial intelligence in America’s future. His selection of Alondra Nelson to be deputy director of OSTP shows that unlocking AI’s potential will be done with a focus on racial and gender equity. Nelson, a Black woman whose research focuses on the intersection of science, technology and social inequality, has said that technologies like AI “reveal and reflect even more about the complex and sometimes dangerous social architecture that lies beneath the scientific progress that we pursue.” 

There’s no doubt that ethics must be foundational to the design, development and acquisition of AI capabilities, and that government agencies should embed trustworthy AI as part of a holistic strategy to transform the way government operates.

Agency leaders can start by identifying areas where AI can transform their internal operations and improve their public-facing mission services with minimal risk of bias. From there, they can prioritize areas that provide immediate value and build momentum, as well as those with long-term potential to improve mission delivery. To scale AI successfully, leaders will need to establish trust in AI within their agencies, with other government and private sector stakeholders, and with the public. In championing the ethical use of AI, they must ask not only “Can we do this?” but “Should we do this?” and “How can we do this in a way that promotes equity?”

The Power of AI

When AI is developed and used correctly, its benefits can be game-changing. Many government agencies have begun adopting AI with an emphasis on building trust and seeing tangible results along the way. The Department of Defense, for example, is putting trust at the center of its AI strategy by engaging a broad audience without compromising national security interests. DOD consults experts across academia, the private sector and the international community to help ensure AI systems and data minimize bias and produce explainable outcomes. The department also requires that autonomous and semi-autonomous weapon systems be designed to allow human judgment to overrule the use of force, reducing the risk of civilian casualties and collateral damage.

Building Trust

Lack of trust in AI and the potential for racial bias can be serious barriers to AI initiatives. In one survey, 84% of U.S. public sector executives cited data privacy and quality issues as the biggest challenges to AI adoption. Government agencies can address these challenges by focusing on six key disciplines:

  • AI applications and data should be tested for fairness and impartiality, especially against systemic racial and gender biases (a minimal example of such a check appears after this list).
  • Decisions made by AI algorithms should be transparent, explainable, and open to public inspection. 
  • Organizational structures and policies should be in place to hold leaders responsible and accountable for decisions made using AI. 
  • The systems themselves should be robust and reliable enough to produce consistent decisions. 
  • AI systems should be safe and secure from cyber risks that may lead to loss of trust or physical harm. 
  • The use of AI should be respectful of privacy and limited to its intended and stated use.
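To make the first of these disciplines concrete, the sketch below shows one simple fairness check: a demographic parity gap computed over a hypothetical log of automated decisions. The field names (“group”, “approved”), the sample records and the 10% tolerance are illustrative assumptions, not a prescribed standard; a real fairness audit would use richer metrics and the agency’s own data.

```python
# Minimal sketch of a fairness spot-check on a hypothetical decision log.
# Field names, sample data and the tolerance below are illustrative only.

from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate for each demographic group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]
    return {group: approvals[group] / totals[group] for group in totals}

def demographic_parity_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: each record is one automated decision.
decisions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

gap = demographic_parity_gap(decisions)
print(f"Approval rates by group: {approval_rates(decisions)}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, not a regulatory standard
    print("Gap exceeds tolerance; flag for human review.")
```

A check like this is deliberately coarse; its value is as a tripwire that routes questionable outcome gaps to human reviewers, not as a definitive verdict on fairness.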

How Government Agencies Can Respond

  • Identify ways AI can transform operations and improve public services. Before agencies dive into potential use cases, they should take an inventory of their AI readiness. From there, they can explore opportunities to improve public-facing services and streamline business processes with ethics at the core. Understanding the short- and long-term benefits, associated costs, and potential risks will help prioritize AI investments.
  • Invest in developing and attracting AI talent. Having the right technology in place is not enough to unlock an agency’s full potential. In a recent survey, 68% of respondents said their staff will need more training in cognitive technologies and automation. That training should cover how to navigate AI’s inherent potential for bias while building employees’ trust in the technology.
  • Develop a governance model with an emphasis on safe, reliable, and ethical AI. Agencies should institutionalize governance and appoint leaders responsible for ensuring that AI practices treat citizens fairly and provide equal access to government services. Ethical AI must be ingrained in the organizational culture by creating an environment that supports diversity of thought and alternative points of view.

AI is a powerful force, and agency leaders can cultivate trust in it by proactively addressing racial and gender inequities that may arise from deploying AI technology. By focusing on the transformational potential of AI, creating transparency among stakeholders, leveraging datasets reflective of diverse populations, and building ethical principles into governance structures, government agencies will be well-positioned to become ethical leaders in the field of AI.

Ed Van Buren is a principal and public sector AI practice leader at Deloitte Consulting LLP. Jitinder Kohli is a managing director and public sector AI practice leader at Deloitte Consulting LLP. John Costa is a senior consultant at Deloitte Consulting LLP.