DHS must do more to appropriately govern AI, watchdog finds

An oversight report from DHS's Office of Inspector General credited the agency with significant steps in AI governance, but identified specific areas that must be addressed to advance safe federal use of the tools.
The Department of Homeland Security needs to continue its efforts to develop and codify artificial intelligence governance practices, according to the agency’s Office of the Inspector General.
In a report released Monday, the OIG praised DHS’s recent action on furthering internal and external AI policy, including appointing a chief AI officer, conducting inventory assessments for subagency AI use cases and establishing multiple working groups and task forces. It identified 20 areas, however, where the agency needs to improve to ensure effective implementation of its AI governance plans.
“Without appropriate, ongoing governance of its AI, DHS faces an increased risk that its AI efforts will infringe upon the safety and rights of the American people,” the report preview stated.
At the core of the OIG’s findings was DHS’s failure to implement some of its 2020 AI Strategy goals and objectives. The report noted that, despite DHS’s proactive steps in several categories, the agency failed to develop and apply an implementation plan that would harmonize internal AI program planning and adequately report progress on AI policies.
“Although DHS made notable progress in its efforts to develop and issue AI-specific policies, it has not yet completed efforts to ensure the department has the policies needed to appropriately govern the use of AI,” the report said.
DHS also did not cultivate sufficient AI governance processes, particularly surrounding privacy and civil rights and liberties requirements.
The report spotlighted the lack of clarity over whether Privacy Compliance Reviews — a collaborative process conducted by the DHS Privacy Office to determine whether agency technology complies with applicable laws and policies — should be conducted for AI used in potentially privacy- and rights-impacting scenarios.
“[DHS Privacy Office]’s insufficient controls regarding [Privacy Compliance Reviews] that are required or recommended increases the risk that [Privacy Compliance Reviews] of AI technology will not be completed in the future when needed,” the report said.
The OIG recommended that DHS take steps to update and finalize existing AI strategy documents and procedures surrounding responsible uses of AI. The report also suggested a reevaluation of personnel training and resources for AI oversight, as well as updates to the Privacy Compliance Review process.
The watchdog also called for DHS subcomponents to “implement DHS’ enterprise process to identify, assess and track AI use cases,” and for the DHS Chief Technology Officer Directorate to ensure that internal AI programs use accurate data.
In its responses, DHS concurred with all of the OIG’s analyses.
The report follows the January 2025 release of the DHS AI Playbook for public sector entities, which documented the outcomes of several generative AI pilot programs helmed by DHS, as well as its AI hiring sprint to meet workforce demands.