Without heavy investment in AI research, the government risks falling behind on national security and losing its economic edge, according to a congressional report.
The U.S. could face heightened national security threats and lose its economic edge if the government doesn’t step up its game when it comes to artificial intelligence, according to a pair of oversight lawmakers.
Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., on Tuesday published a report detailing the current state of the country’s artificial intelligence ecosystem and offering recommendations for how government could steer and accommodate the technology’s development in the years ahead.
The report is based on a series of hearings examining the government’s role in advancing AI hosted earlier this year by the House Oversight Subcommittee on Information Technology, which Hurd chairs and on which Kelly serves as ranking member.
“[Artificial intelligence] is a topic that’s going to transcend and be important beyond this Congress,” Hurd said Tuesday during a call with reporters. “I think this report [will] lay a foundation for future focus by Congress and other parts of the government.”
Artificial intelligence systems already underpin a wide variety of technologies. But as AI tools become ubiquitous in U.S. society, the government has a significant role to play in ensuring the technology advances responsibly.
The report calls for government to ramp up funding for AI basic research, particularly as global powers like China drastically increase their investment in the technology.
“The loss of American leadership in AI could also pose a risk to ensuring any potential use of AI in weapons systems by nation-states comports with international humanitarian laws,” the report said. “Authoritarian regimes like Russia and China have not been focused on the ethical implications of AI in warfare, and will likely not have guidelines against more bellicose uses of AI, such as in autonomous weapons systems.”
During the press call, Hurd said China’s military intelligence units are already looking for ways to use AI to infiltrate other countries’ digital infrastructure, and the U.S. needs to quickly figure out how to defend against those threats.
“The future of cybersecurity is going to be good AI versus bad AI,” he said. “I think [cyber] is an area where we have to ... recognize we’re in a real race with the Chinese.”
“I would say artificial intelligence right now is dumb. To turn artificial intelligence smart, we’re going to need quantum computing,” he said. “When quantum is achieved, that’s when you’re really going to start seeing some use of AI that goes beyond our imagination.”
The government must also overcome the privacy and bias concerns raised by AI, according to the report.
The booming AI industry has spurred rampant collection and use of personal data, and agencies need to examine how existing privacy laws apply to artificial intelligence products, lawmakers said. They also encouraged organizations to update regulations as needed to account for the new tech.
Hurd told reporters he’d like to see federal research and development funds devoted to solving the issue of bias in artificial intelligence systems. As law enforcement agencies and other groups use AI to make evermore consequential decisions, it’s critical to ensure those systems don’t discriminate by race, gender or other factors, lawmakers wrote. Increasing transparency into how tools arrive at their answer could help address that issue, they said.
In the report, Hurd and Kelly also urged the government to invest in educating and reskilling the country’s workforce as different industries increase their reliance on artificial intelligence. They suggested agencies “lead by example” by retraining federal employees for the digital economy.
“We need to make sure people realize this is going to be the equivalent of typing,” Hurd said. “If you don’t know how to type, then it’s going to be hard for you to work in a professional environment. Knowing how to use this tool, in a number of years, is going to be something similar.”