How Policymakers Can Tackle the Complexities of AI Models

A digital rendering of the human brain. Milad Fakurian via Unsplash

Presented by IBM

Diversity, precision regulation and governance are key to building responsible and trustworthy artificial intelligence (AI) models.

AI is not new, but it has advanced to the point where it’s certainly having a moment. Next-wave generative AI models look set to replace the task-specific models that have dominated the AI landscape to date. They’ve also given the public a chance to experience AI first-hand. Popularized by consumer applications like ChatGPT and DALL-E, generative AI models promise to revolutionize how citizens approach activities like homework, research papers and email. But the technology’s surge in public attention has rightfully raised serious questions from policymakers. What are AI’s potential impacts on society? What do we do about data bias? What about the potential for misinformation, misuse, or harmful and offensive content generated by AI systems?

It’s often said that technology innovation moves too fast for government regulations to keep up. And while AI may be having its moment, the moment for government to play its proper role has not passed. On the contrary, this period of increased attention on AI is exactly the time to design and implement the right regulations that not only protect citizens and guide government’s use of AI, but also support industry in the development of responsible and trustworthy AI.

Generative AI Models Require Precision Regulation

Today’s generative AI models are built on what are known as foundation models. As the name suggests, foundation models can serve as the foundation for many kinds of AI systems. Trained on large, unlabeled datasets and fine-tuned for many uses, foundation models apply what they learn in one context to new contexts. While the potential benefits of foundation models to the economy and society are many, there are also concerns about their potential to cause harm. For example, just as with all AI, if the data used to create a model is biased or skewed, the model could negatively impact certain users. Can existing regulatory frameworks protect against these (and other) potential harms?
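
To make the fine-tuning idea concrete, below is a minimal sketch, assuming the open-source Hugging Face transformers and datasets libraries; the distilbert-base-uncased checkpoint, the IMDB reviews dataset and the sentiment-classification task are illustrative choices only, not anything drawn from the testimony or the models named above. The sketch shows a general-purpose pretrained model being adapted to one narrow downstream use.

```python
# Minimal fine-tuning sketch (assumes the Hugging Face `transformers` and
# `datasets` libraries; model, dataset and task are illustrative choices).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Start from a general-purpose pretrained foundation model...
checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# ...and adapt it to one narrow task with a small labeled dataset
# (here, sentiment classification on a slice of movie reviews).
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True),
                      batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sentiment-demo",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
    tokenizer=tokenizer,  # pads each batch to a common length during training
)
trainer.train()
```

The same base checkpoint could be fine-tuned again on different data for entirely different uses, which is one reason the precision regulation approach described below focuses on the end use of a system rather than on the underlying model.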

To help Congress begin to address these questions, on May 16, 2023, IBM's Chief Privacy and Trust Officer, Christina Montgomery, testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law along with OpenAI CEO Sam Altman.

An early adopter and promoter of AI, IBM not only understands the value of foundation models but is also a leader in the development and deployment of responsible and trustworthy AI. As Montgomery’s written testimony outlines, a precision regulation approach can help mitigate the potential harm caused by generative AI and promote responsible use of the technology as a force for good.

A precision regulation approach means establishing rules to govern the deployment of AI based on specific use cases, rather than regulating the technology itself. For example, instead of issuing a blanket statement on all AI use, an agency might look at its mission, vision and goals to see how AI could support specific actions. According to Montgomery’s written testimony and a white paper from early May, precision regulation would require organizations (including government agencies) to:

  • Create different rules for different risks — a chatbot that can share restaurant recommendations or draft an email has different impacts on society than a system that supports decisions on credit, housing, or employment. Under precision regulation, the most stringent rules should apply to the use cases with the greatest risk.
  • Clearly define risks — there must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk. This common definition is key to ensuring that AI developers and deployers have a clear understanding of what regulatory requirements will apply to a tool they are building for a specific end use. Risk can be assessed in part by considering the magnitude of potential harm and the likelihood of occurrence.
  • Be transparent — citizens deserve to know when they are interacting with an AI system and whether they have the option to engage with a real person, should they so desire. Congress should formalize disclosure requirements for certain uses of AI. No person, anywhere, should be tricked into interacting with an AI system. AI developers should also be required to disclose technical information about the development and performance of an AI model, as well as the data used to train it, to give society better visibility into how these models operate.
  • Show the impact — for higher-risk AI use cases, organizations should be required to conduct impact assessments showing how their systems perform against tests for bias and other ways that they could potentially impact the public, and attest that they have done so. Additionally, bias testing and mitigation should be performed in a robust and transparent manner for certain high-risk AI systems, such as law enforcement use cases (a minimal illustration of one such bias test follows this list). These high-risk AI systems should also be continually monitored and re-tested by the entities that have deployed them.
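
As a minimal sketch of what one bias test inside such an impact assessment might look like, the snippet below computes a disparate-impact ratio (the ratio of favorable-outcome rates between groups) on a small, entirely hypothetical set of decisions. The 0.8 threshold is the informal "four-fifths" rule of thumb, not a regulatory requirement, and a real assessment would use far larger samples and multiple metrics.

```python
# Hypothetical example of a disparate-impact check an impact assessment
# might include; the data, groups and threshold are illustrative only.
import pandas as pd

# Hypothetical decisions produced by an AI system (1 = favorable outcome).
decisions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "outcome": [1,   1,   0,   1,   1,   0,   0,   0],
})

# Favorable-outcome rate per group.
rates = decisions.groupby("group")["outcome"].mean()

# Disparate-impact ratio: lowest group rate divided by highest group rate.
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # informal "four-fifths" rule of thumb
    print("Potential disparate impact: flag this system for review.")
```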

The Future of Responsible AI Starts Today

During this dynamic transitional time for AI, neither industry nor government should delay implementing controls for the ethical development of AI. Federal agencies like the National Institute of Standards and Technology (NIST) have a wealth of resources for government and industry leaders looking to take advantage of AI technology. One such asset is NIST’s AI Risk Management Framework (AI RMF), which provides definitions, recommendations and guidelines for AI development. The AI RMF could also prove helpful to the policy and regulation development process.

Beyond developing robust governance, Montgomery recommends that federal agencies have a centralized means of considering and embedding AI ethics, such as an AI Ethics Board. IBM was one of the first companies in the industry to establish such a board, and its AI Ethics Board plays a key role in ensuring that the company’s principles and commitments are upheld in its business engagements globally.

“Our AI Ethics Board plays a critical role in overseeing our internal AI governance process, creating reasonable internal guardrails to ensure we introduce technology into the world in a responsible and safe manner,” said Montgomery. “For example, the board was central in IBM’s decision to sunset our general-purpose facial recognition and analysis products, considering the risk posed by the technology and the societal debate around the use. IBM’s AI Ethics Board infuses the company’s principles and ethical thinking into business and product decision-making.”

Bottom line, improving citizen experiences by leveraging AI and generative AI models does not have to come at the cost of personal liberties. Government leaders can support innovation, responsibility and economic outcomes all at once, and to quote Montgomery, "We can, and must, [pursue] all three now."

Discover how IBM can help your agency build responsible and trustworthy AI models.

This content is made possible by our sponsor, IBM. The editorial staff was not involved in its preparation.
