Senate's top tech official greenlights research use of generative AI


The chief information officer for the Senate Sergeant at Arms evaluated three popular AI platforms and offered members and staff instruction on how to leverage them safely.

The Senate chief information officer released new guidance in December to advise on the use of artificial intelligence systems and technologies within the Senate chamber, Nextgov/FCW can confirm.

Based on a series of risk assessments of three leading generative AI systems, OpenAI’s ChatGPT, Google’s Bard AI and Microsoft’s Bing AI Chat, the Senate's CIO deemed that each poses a “moderate level of risk if controls are followed.”

The CIO’s guidance permits these and other AI services under only two conditions: for research and evaluation purposes, and with non-sensitive data.

The guidance itself reminds users to follow six points:

- Treat AI tools like search engines, with no expectation of privacy or accuracy.
- Do not use sensitive data with an AI system.
- Assume all information put into an AI tool can be seen by others.
- Understand that prompts entered into an AI tool can be used to glean other information.
- Verify the accuracy of information generated by AI tools.
- Add a human review of any content produced by an AI tool.

Advocacy organizations working in the government modernization space voiced support for the SAA’s new guidance, calling it vital for incorporating responsible AI systems into government.

"We commend the Senate for conducting a risk analysis of a diverse array of AI tools and offering a broad toolkit for staff exploration of this emerging technology,” said  Aubrey Wilson, the director of Government Innovation at POPVOX Foundation. “Not only will this assist the institution’s adoption of these new tools, it provides an opportunity for senators and legislative staff to have hands-on engagement with a technology the chamber is crafting policy to address. This is essential for promoting timely and competent oversight of AI."

Marci Harris, the co-founder and executive director of POPVOX Foundation, told Nextgov/FCW that she believes it's “inevitable” that emerging generative systems will be present in governance operations, adding that the SAA’s AI guidance is a positive step forward in helping congressional staffers who will use these AI systems in the future.

“Without measured steps to understand and responsibly deploy these technologies, our democratic institutions risk being overrun or out-maneuvered by outsiders deploying this technology to influence them,” Harris said. 

Some of the specific tasks Harris expects AI to take on in government operations are research-oriented jobs, such as summarizing legislative documents, providing first drafts of letters and serving as a general administrative assistant.

Ideally, AI adoption within Congress would give lawmakers more face-to-face time with constituents or more time to attend hearings.

“The goal is to have thoughtfully implemented AI that can equip Congress to improve its capacity and meet rising constituent expectations in an increasingly digital age,” Harris said.  

The guidance comes as Congress at large has been grappling with how to approach and potentially regulate the AI field.