Collaboration is essential for safe and secure AI deployments


COMMENTARY | Congress, federal agencies and industry need to work together to develop artificial intelligence governance models that address cybersecurity, safety and bias issues.

After months of research, deep-dive discussions and closed-door meetings, the House announced the formation of a bipartisan AI task force responsible for developing recommendations, crafting bipartisan policy and establishing guardrails for the safe and responsible use of AI.

While this is a critical move toward AI regulation, asserting control over the rapidly advancing technology will require a governmentwide approach.

That approach must combine laws, executive orders and industry standards, allowing the nation's critical infrastructure to deploy artificial intelligence technologies safely and quickly while providing a global blueprint for other governments.

AI has attracted the attention of the White House and lawmakers for the last two years, but 2023 brought a massive spike in interest as the government scrambled to get ahead of the technology's rapid emergence.

In response, President Biden issued the AI Executive Order in October 2023. Since then, AI guidance across agencies has come quickly. Within six months came key milestones: agencies appointed chief AI officers and published AI inventories, NIST released a generative AI risk profile to guide AI risk assessments and NSF launched its National Artificial Intelligence Research Resource pilot.

These milestones show the steady accumulation of steps the government is taking to ensure the safe adoption of AI. Now, with the publication of OMB Memo M-24-10, we're ready for the next phase of AI regulation, with the government at large leading the charge on recommendations for the nation and the world.

We’re already seeing progress 

In just the first few months of 2024, we've seen rapid AI movement across the federal government. Not long after the House task force was made public, Senate committees announced they would soon begin marking up AI bills reflecting the "lessons learned" from the series of AI insight forums held last summer, and the Senate also released a roadmap to guide $32 billion in new spending on AI programs.

At the beginning of March, lawmakers introduced the Federal AI Governance and Transparency Act, which would codify the "federal governance of agency AI systems," consolidate existing AI laws and create an oversight framework to guide Congress as the technology evolves. This will be critical as the U.S. looks to establish a baseline for AI oversight.

To move forward, we need to recognize that new legislation must be crafted in accordance with existing IT and technology policy. This not only leads to safer, more secure AI use across industries but also helps cement the U.S. as a leader in emerging technology, setting an example for other countries looking to do the same.

Recently, the European Parliament approved the world's first major set of regulatory ground rules to govern AI. Once the law receives endorsement from the European Council, its implementation will be staggered throughout 2025. This puts Europe at the forefront of technological regulation and investment.

Similarly, soon after the EU AI Act's approval, the United Nations General Assembly adopted a U.S.-led resolution, backed by 120 countries, setting principles for safe and secure AI that advances sustainable development goals.

These developments in just the last month demonstrate how quickly we will need to move as a nation toward overarching regulation. 

Next steps

This year will be critical for AI development across the federal government, especially with the House's AI task force at work and the Senate turning toward AI bills. For regulation to be long-lasting and binding, however, lawmakers must remain dedicated to safe and responsible AI use, incentivizing and driving the right behaviors among market participants.

Doing so will require radical collaboration with the private sector.

NIST has already begun this effort with its recently created U.S. AI Safety Institute Consortium, which brings together 200 stakeholders, including AI creators and users, academics, government and industry researchers and civil society organizations, all with the mission of developing guidelines to better prepare the nation for AI adoption.

Additionally, the FedRAMP Program Management Office released its draft Emerging Technology Prioritization Framework to bring commercial AI technologies to federal agencies more rapidly.

By bringing together private and public sector experts on cybersecurity, safety and bias, we can develop governance models that leverage and extend existing mechanisms.

As the federal government continues to make AI a priority, we are steadily making progress toward binding rules for safe and responsible AI use across the U.S. The AI task force and the new AI bills expected over the next year are just the beginning.

This year, the U.S. will need to continue demonstrating its leadership, not only by driving safe and secure AI adoption across the nation but also by serving as a model for the rest of the world.