Expert Analysis of Dangerous Artificial Intelligences in Government


The “real risks” of AI come from a lack of governance and risk understanding, according to Navrina Singh, CEO of Credo AI and a member of the Department of Commerce’s National AI Advisory Committee.

Of all the emerging sciences and technologies that government and private industry are working on, artificial intelligence—mostly just called AI—is one of the most promising. A big reason for that is that AI can be designed to work in almost any role or capacity. At the most basic level, AIs using robotic process automation can accurately route mail, packages and payroll checks to thousands of destinations. Meanwhile, advanced AIs in the military are learning how to control swarms of intelligent combat drones across sprawling modern battlefields. And AIs are also being tasked with just about everything in between those extremes.

The United States is the world leader in the field of AI development, with new applications and capabilities emerging almost every day. But that development does not come without some level of risk. All new technologies carry some risk, and experts are already pointing out specific vulnerabilities that could affect a large number of AIs. Beyond those short-term vulnerabilities, there are also longer-term concerns about AIs precisely because they are becoming so ubiquitous.

While few people seriously think that an AI could take over the world, asking a flawed AI to manage a critical function could negatively affect everything from hiring decisions to healthcare treatments, law enforcement operations, financial planning or even battlefield strategies. As such, it’s important to study AIs and their development to ensure that they are free from bias and other flaws before giving them too much responsibility.

That is an area that the founder and CEO of Credo AI, Navrina Singh, knows well. She has been studying the development of AI—and both the potential advantages and dangers associated with it—for many years. She was recently asked to speak about those issues before the House Committee on Science, Space and Technology's Subcommittee on Research and Technology. And she was also recently appointed to the Department of Commerce's National AI Advisory Committee to counsel President Joe Biden on AI topics.

Nextgov talked with Singh about the development of AI, and especially some of the dangers of deploying this emerging technology before it’s been fully vetted and tested. We also asked her about some smart strategies to make sure that we are always able to get the most out of AI technology, while mitigating some of the biggest risks.

Nextgov: Thank you for talking with us. Before we get started, can you tell us a little bit about your impressive background in technology and the field of AI?

Singh: In 2001, I immigrated to the United States from India to pursue an engineering degree at the University of Wisconsin-Madison. I spent nearly two decades as an engineer working on various technologies, including machine learning, augmented reality and mobile for tech heavyweights like Qualcomm and Microsoft.

In 2011, I started exploring machine learning, which led me to build emerging businesses at Qualcomm. Following that, I led the commercialization of AI products at Microsoft, including conversational AI chatbots.

It was through building these enterprise-scale AI products that I realized there was a lack of AI oversight and accountability within the industry. This technology that is so pervasive across our daily lives is also the perfect breeding ground for bias if the right guardrails aren't put in place. This is why I founded Credo AI: to give enterprises the tools to build responsible AI at scale and to ensure that this technology works for humanity instead of against it.

Nextgov: Let’s get this question out of the way. There are a few very smart technology leaders who are warning that a rogue AI could eventually take over the world or ultimately destroy humanity. Assuming you don’t think that is a real danger right now, can you explain what some of the real risks of AI might be for humanity?

Singh: In my view, the real risks associated with AI stem from a lack of technology governance and a lack of understanding of its risks, whether those risks arise unintentionally or are introduced intentionally by bad actors, and the threat that poses to our society and economy. We see AI being used everywhere: in job hiring, in finance, in education, in healthcare, in our criminal justice system, and the list goes on.

AI is so prevalent in our society, and, even though the intentions behind it may be good, ungoverned AI can have damaging consequences, especially for those in marginalized and minority communities. For example, AI can filter qualified candidates out of a hiring process because of their gender, deny a family a mortgage loan, make college admissions less inclusive, reject life-saving health procedures and even identify the wrong individual for a crime.
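To make that hiring example concrete: one widely used statistical screen for this kind of bias is the "four-fifths rule," which flags a selection process when any group's selection rate falls below 80% of the highest group's rate. The sketch below is a minimal illustration of that check in Python, using made-up outcomes; it is not Credo AI's methodology or any particular vendor's tool.

```python
# Minimal illustration of the "four-fifths" (adverse impact) screen.
# The outcomes below are invented purely for demonstration.
from collections import Counter

# Hypothetical (group, was_selected) results from an automated screening step
outcomes = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]

selected = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / total[group] for group in total}

best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # below 80% of the top rate warrants scrutiny
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A check like this is only a first-pass signal, not proof of bias, but it shows how a simple, repeatable test can surface the kind of disparity Singh describes before a system is deployed.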

With the recent advancements in generative AI, where AI is generating something new rather than analyzing something that already exists, we are facing a new generation of AI risks that need to be deeply thought about and managed. From creating fake news stories to influencing the political decisions of citizens and leaders, malicious actors can use AI for deceitful purposes like identity theft and digital fraud, exponentially increasing already prevalent fairness issues. The increasing scale and impact of AI is raising the stakes for major ethical questions. Now more than ever, the governance of these AI systems is critical for us to ensure that AI works in service of humanity.

Nextgov: That is interesting. And I suppose the reason that anyone is even thinking about the dangers of AI is the fact that the science behind it is advancing so rapidly. Have there been any recent advances that are really helping to improve the capabilities of AI?

Singh: Absolutely! And at a pace that requires careful monitoring and assurance that governance and oversight of these systems can keep up with those advancements. There has been a lot of progress made in the past year in large language models, natural language processing, multimodal learning, bias mitigation in machine learning and more.

One area that is gaining a lot of momentum and marketing attention is generative AI, an umbrella term for AI that creates text, videos, images and code, rather than just analyzing data. Generative AI is gaining traction in gaming, programming, advertising, retail and many other sectors—pushing AI further into our economy and society.

Nextgov: And you recently testified before the House Committee on Science, Space and Technology's Subcommittee on Research and Technology about how to manage the risks of AI. What were the key factors that you asked Congress to consider in this area?

Singh: It was an honor to testify before Congress, and I commend the House committee for taking a very evidence-focused approach to the hearing. In addition, I also want to commend the National Institute of Standards and Technology for their work on the AI Risk Management Framework and Practice Guide, both of which have been developed with a significant amount of stakeholder engagement. These demonstrate a good first step in our journey to responsible AI governance, especially on informed policy and standards development.

There are three key factors that Congress should consider as they explore ways to manage AI risks.

First, responsible AI requires a full lifecycle approach. AI systems cannot be considered “responsible” based on one point-in-time snapshot, but instead must be continuously evaluated for responsibility, and transparently reported on, throughout the entire AI lifecycle: from design, to development, to testing and validation, to production and use.

Secondly, context is critical for AI governance. We believe that achieving trustworthy AI depends on a shared understanding that AI is industry specific, application specific, data specific and context driven. There is no one-size-fits-all approach to “what good looks like” for most AI use cases. This requires a collaborative approach to assessments, and we advocate for context-based tests for AI systems with reporting requirements that are specific, regular and transparent.

And finally, transparency reporting and system assessments are a good first step toward trustworthy and responsible AI. Their importance as a critical foundation of AI governance for organizations of all kinds cannot be overstated.

Reporting allows policymakers to start to evaluate different approaches, and potentially opens the door for benchmarking—reporting is the step that gets us to standards that can be enforced. We have seen firsthand how comprehensive and accurate assessments of the AI applications and the associated models/datasets, coupled with transparency and disclosure reporting, encourage responsible practices to be cultivated, engineered and managed throughout the AI development life cycle. Fundamental to this is access to compliant and comprehensive data for assessments.
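As a rough illustration of what such an assessment and disclosure record might contain, here is a minimal sketch in Python. The system name, field names and metric values are hypothetical assumptions chosen for illustration only; they are not the NIST AI RMF schema, Credo AI's reporting format or any regulatory requirement.

```python
# Hypothetical, illustrative assessment record that an organization might log at
# each lifecycle stage (design, development, validation, production). All values
# are invented placeholders, not real results.
import json
from datetime import datetime, timezone

assessment = {
    "system": "resume-screening-model",             # hypothetical system name
    "use_case": "pre-employment screening",          # the context the tests are scoped to
    "lifecycle_stage": "validation",
    "dataset_version": "applicants-2023-q1",
    "metrics": {
        "accuracy": 0.91,                            # placeholder value
        "four_fifths_impact_ratio": 0.84,            # fairness screen from the earlier sketch
    },
    "thresholds_met": True,
    "reviewed_by": ["ml-engineering", "legal", "hr"],
    "next_review_due": "2023-07-01",                 # "regular" reporting cadence
    "generated_at": datetime.now(timezone.utc).isoformat(),
}

# A machine-readable record like this can be disclosed, audited and compared over time
print(json.dumps(assessment, indent=2))
```

Emitting a structured record like this at every stage is one way to make reporting specific, regular and transparent, and it gives policymakers something comparable to benchmark across systems.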

Nextgov: And do you have any advice for government policymakers to help the U.S. achieve those goals?

Singh: I encourage policymakers to approach AI governance with the mindset that responsible AI—RAI—is a core competitive differentiator, not just for companies, but for countries. Any government helping to set up RAI requirements on testing and metrics now will have a competitive advantage in creating trustworthy AI in the future. Building trustworthy AI is not just about “doing the right thing” and setting “values” that make people feel good. It is about building systems that work better—systems that do not have unintended harmful consequences.

Nextgov: And you were also recently appointed to the Department of Commerce’s National AI Advisory Committee to counsel President Biden on AI topics. Recently, the White House has advocated for the creation of an AI Bill of Rights. Can you explain how the bill of rights might work, and whether or not you think it’s an idea that could move us closer to responsible AI? 

Singh: I can only speak in my personal capacity and not as a representative of NAIAC. In my personal view, the AI Bill of Rights is an important step toward educating citizens about the protections they should have from organizations' use or misuse of AI, and toward informing responsible AI governance within organizations. The principles in the AI Bill of Rights align with emerging AI regulations in the U.S., the E.U. and Canada, among other key jurisdictions. The evaluation and transparency efforts emphasized in the AI Bill of Rights blueprint provide the foundation for effective AI accountability.

These themes are consistent with our work at Credo AI, where we are formally codifying these principles into standard practices that empower organizations to build responsible AI that is safe, fair, compliant and auditable.

Nextgov: You mention your company, Credo AI, and how you are trying to build a truly responsible AI platform. What does that entail, and how can other companies or government agencies ensure that their AIs are also responsible?

Singh: Building trust in AI is a dynamic process. At Credo AI we believe that organizations can build that trust by weaving accountability and oversight into their AI lifecycle. This means that organizations take on the responsibility to align and commit to enterprise and business values, follow through, observe impact and repeat the process, with diverse voices providing input at every stage.

Nextgov: Thank you for your time. We know how busy you are these days. Do you have any parting advice for government agencies as they continue to take the lead in many areas of AI development?

Singh: As with anything, it is important for governments to lead by example. It will be crucial for government agencies that deploy AI to ensure these projects have a transparent process to explain what decisions were made by the AI developer and why.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys