More Money, Fewer Rules Could Help AI Grow, Experts Say


The government needs to put money into artificial intelligence if it wants to beat other countries, tech experts say.

Giving your kids more money and fewer rules might not be the best parenting strategy, but that’s how the government should treat the budding artificial intelligence industry, according to tech experts.

Technologists from the private sector and academia got lawmakers up to speed on the current state of AI and detailed the ways government could drive innovation in the field in the years ahead. In the first of three hearings on the topic, witnesses told the House Oversight IT Subcommittee despite the significant advancements made in artificial intelligence in recent years, society is still far from realizing the technology’s full potential.

“AI is the biggest economic and technological revolution to take place in our lifetime,” said Ian Buck, the vice president and general manager of accelerated computing at NVIDIA. With artificial intelligence projected to add $8 trillion to the U.S. economy by 2035, he said, “the bottom line is we cannot afford to let other countries overtake us.”

Buck noted the governments of China and other countries are “aggressively” pouring resources into AI research, while funding in the U.S. has remained “relatively flat.” In the past, federal dollars supported research that eventually led to the internet and self-driving cars, and now there’s an opportunity to “enable, empower and fund” the tech community to do the same cutting-edge work in AI, he told Nextgov.

Other panelists agreed the government could afford to invest more in AI research given the relative size of the U.S. economy. And though the Trump administration has said it’s “deeply committed” to AI, the president’s 2019 budget proposal would cut non-Defense R&D funds by more than 19 percent.

Beyond funding research, federal regulators should take a largely hands-off approach to artificial intelligence, said Amir Khosrowshahi, vice president and chief technology officer of Intel’s artificial intelligence products group. The industry is still “in the early days” of AI, he said, and imposing regulations on the field could have “adverse consequences” and potentially “stifle its growth.”

Though regulating specific applications of AI could help ensure product safety and protect personal data, applying one-size-fits-all rules to such a multifaceted field is a bad idea, said Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence.

He expressed particular concern about regulating transparency in the algorithms AI software uses to make decisions. While shedding light on the way AI produces results may curb abuses and limit bias in the algorithms, Etzioni said he preferred letting the market decide what algorithms work most efficiently and effectively.

Witnesses also highlighted the importance of developing a tech-savvy workforce that continues to innovate in artificial intelligence as the technology becomes more prevalent in the workplace. Every panelist recommended requiring college students to take computer or data science courses to graduate, and many advocated for expanding coding programs in high schools and grade schools.

Charles Isbell, the senior associate dean of the Georgia Institute of Technology’s College of Computing, said one of the biggest factors limiting the expansion of computer science education is a lack of trained teachers, but schools can use online courses to expose students to the field and give them hands-on experience.

Subcommittee Chairman Will Hurd, R-Texas, said he plans to hold two more hearings on artificial intelligence in the coming months. The March panel will explore how federal agencies can expand their use of AI, and the April hearing will outline the roles of government and private companies in the artificial intelligence space as the technology matures.