The Homeland Security Department’s chatbot “Emma” sometimes struggles to process Spanish-language requests because not everyone says “green card” the same way.
A growing number of federal agencies are investing in virtual assistants—online systems that can respond to questions people pose either via voice or text.
This summer, the General Services Administration’s Office of Emerging Citizen Technology wrapped a pilot walking agency leadership through the process of inventing their own Siri and Alexa counterparts. Early prototypes included a voice-controlled system that lets people apply for Small Business Administration permits.
At the federal level, virtual assistants are in the very early stages. Though it’s almost two years old, Emma, the chatbot built by U.S. Citizenship and Immigration Services, is occasionally stumped when processing Spanish-language requests, Robert Genesoni, chief of the customer engagement center at USCIS, said at a 930gov event in Washington Wednesday.
Emma has processed about 10.5 million requests from 3.3 million unique visitors, who type questions in both English and Spanish, according to USCIS. People might ask Emma, “Do I need a visa to travel to Europe?” or “What documents do I need to obtain a green card?” and the system quickly provides a set of links with information pertinent to the request.
English speakers are likely to use the term “green card,” but Spanish-language speakers use a greater variety of terms, including “tarjeta verde” and “tarjeta de residencia.” So USCIS had to build out a database accounting for all those possibilities, Genesoni explained.
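The kind of lookup Genesoni describes can be sketched as a table mapping each variant phrase to one canonical topic, so that queries in either language resolve to the same answer. The phrase list and function below are illustrative assumptions, not the agency’s actual data or code:

```python
# Hypothetical synonym table: many user phrasings -> one canonical topic.
# The entries here are examples only, not USCIS's real database.
SYNONYMS = {
    "green card": "green card",
    "permanent resident card": "green card",
    "tarjeta verde": "green card",
    "tarjeta de residencia": "green card",
}

def canonicalize(query):
    """Return the canonical topic for the longest phrase found in the query,
    or None if no known phrase matches."""
    text = query.lower()
    best_phrase, best_topic = None, None
    for phrase, topic in SYNONYMS.items():
        if phrase in text and (best_phrase is None or len(phrase) > len(best_phrase)):
            best_phrase, best_topic = phrase, topic
    return best_topic

print(canonicalize("¿Qué documentos necesito para la tarjeta de residencia?"))
# prints "green card"
```

Preferring the longest match avoids, say, a shorter overlapping phrase shadowing a more specific one; a production system would also need stemming and misspelling tolerance.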
As of now, Emma has a 91 percent success rate answering questions posed in English, Genesoni said. It’s slightly lower—89 percent—in Spanish.
Broadly, USCIS plans to use more customer relationship management and analytics tools to gain more clarity about “what our customers think of us,” Genesoni said. The agency is also investigating how to use “interactive voice response,” which might let callers use their phone keypads to answer questions that the system reads out, and “visual interactive voice response,” which might present a user with a visual menu on their device, he added.