The Department of Homeland Security is advancing its use of artificial intelligence technologies with new leadership and guidance.
The Department of Homeland Security took new steps to introduce artificial intelligence technologies into its operations, announcing a new executive position and fresh directives aimed at guiding the agency's AI agenda.
In a Thursday announcement, DHS named Eric Hysen, the agency's chief information officer and Artificial Intelligence Task Force co-chair, to the inaugural position of Chief AI Officer. In this role, Hysen will promote AI adoption while maintaining safety protocols, in addition to advising Secretary Alejandro Mayorkas and other agency leadership on AI policy and practice. He will continue to serve as the agency's CIO.
“Artificial intelligence is a powerful tool we must harness effectively and responsibly,” said Mayorkas in a press release. “Our department must continue to keep pace with this rapidly evolving technology, and do so in a way that is transparent and respectful of the privacy, civil rights and civil liberties of everyone we serve.”
In addition to the new leadership post, DHS's AI task force unveiled two new policies to guide the safe adoption of such technologies in DHS missions.
Policy Statement 139-06 establishes a set of defined principles to which DHS must adhere in its use of AI in agency operations. The principles follow President Donald Trump's December 2020 executive order on trustworthy AI in government operations. The policy also bars DHS from collecting, using or disseminating data in AI activities, or from allowing AI to make decisions, based on the "inappropriate consideration" of characteristics like race, gender and ethnicity.
“DHS will not use AI to improperly profile, target or to discriminate against any individual, or entity, based on the individual characteristics identified above, as reprisal or solely because of exercising their Constitutional rights,” the directive says. “DHS will not use AI technology to enable improper systemic, indiscriminate or large-scale monitoring, surveillance or tracking of individuals.”
The second policy document, Directive 026-11, reinforces the prohibition on inappropriate use of biometric systems, particularly facial recognition. While facial recognition technologies remain authorized for specific DHS missions, the directive orders the agency to conduct periodic testing to ensure these systems are not being abused or leveraged illegally.
“DHS continues to anticipate and assess the impacts and operational risks of [facial recognition] and [face capture] technologies across the department with strong leadership oversight and ongoing testing and evaluation of the technologies,” the directive reads. “It is essential that DHS only uses FR and FC technologies in a manner that includes safeguards for privacy, civil rights and civil liberties.”
It goes on to expressly forbid DHS from using facial recognition technology to unlawfully surveil or track individuals.
“I commend Secretary Mayorkas for leading this important effort,” Rep. Yvette Clarke, D-N.Y., said in a statement on the policy. “Updating federal government procurement policies is a prime example of how the federal government can help incentivize the private sector towards the development and use of responsible and safe AI. For years, we have seen the unintended and harmful consequences from the use of AI, particularly when it comes to bias, discrimination and lack of explainability. And it is all too important that our federal government works to keep pace with the rapid development of emerging technologies, while also ensuring the protection of the privacy, civil rights and civil liberties that are enshrined within our Constitution.”