The report calls on DHS to better verify its AI inventory submissions and on CISA to develop AI cybersecurity progress metrics.
The Department of Homeland Security’s inventory of artificial intelligence systems listed two use cases linked to cybersecurity, but one of those use cases in reality “did not have the characteristics of AI,” an oversight report found Wednesday.
The agency’s Automated Scoring and Feedback tool, which models cybersecurity threats, was incorrectly characterized as an AI system, the Government Accountability Office analysis said, stressing that labeling it that way “raises questions about the overall reliability of DHS’s AI Use Case Inventory.”
The DHS inventory was born out of a Trump-era mandate that directed federal agencies to take stock of their AI use cases and share them with the public.
GAO found inaccuracies in the DHS inventory of AI use cases, according to the report. “Although DHS has a review process for the inventory,” the report reads, “the agency acknowledges that it does not confirm whether use cases identified by components are correctly characterized as AI as a part of this process.”
The GAO analysis also said that DHS “doesn't verify whether each case is correctly characterized as AI” and later noted that the department “hasn't ensured that the data used to develop the AI use case we assessed is reliable — which is an accountability practice in our AI framework.”
Out of 11 total key practices recommended by GAO in its accountability framework, the watchdog identified four that DHS and its component, the Cybersecurity and Infrastructure Security Agency, have fully implemented: human supervision, specifications, compliance and stakeholder involvement.
The remaining seven were not fully implemented, though each was applied to varying degrees. For example, having clear goals for AI cybersecurity governance received high marks, while data oversight practices, such as documenting the sources of the agency’s data and their reliability, were not implemented at all.
GAO issued eight recommendations. The first calls on the DHS chief technology officer to expand the agency’s review process to better verify its AI inventory submissions.
The remaining seven recommendations specifically call on CISA Director Jen Easterly to take action. These recommendations include having CISA leadership develop AI-cybersecurity progress metrics, document data sources and monitor AI system performance.
DHS agreed with all eight recommendations in the report but did not return a request for comment.