Economist Calls for Deeper Examination of AI’s Risks


Technological progress could come at a cost.

Daron Acemoglu thinks that the Luddites of the early 19th century have been maligned by history. They are widely known for smashing factory machinery with hammers and other tools, and have become synonymous with those who oppose all technological progress.

“Luddites understood very well that technology wasn’t neutral,” said Acemoglu, Institute Professor of economics at MIT, in a Brookings Institution webcast Dec. 13. The Luddites were not opposed to all industrialization – they were protesting factory owners who replaced skilled weavers and textile workers with machines that left them unemployed, while the British government of the time did nothing to alleviate the impact.

Acemoglu opened the conversation with that moment early in the Industrial Revolution because he sees disturbing parallels – and differences – in the potential impact of artificial intelligence on human society.

Labor came to a new level of coexistence with automation over time, in part because of significant productivity gains that lowered prices and expanded products’ availability, and in part because companies were able to expand their workforces in nonmanufacturing areas, he said. “But despite all the promises on behalf of AI, that’s not what we’re seeing today. AI is not generating high-productivity automation, it is not being used for generating new tasks for labor,” he said.

“Of course AI is expanding our technological capability, so if we use it well it can bring huge improvements,” he said. “But AI is highly non-neutral. [It] could create misery and huge power imbalances.”

There are three broad areas where AI can make a significant difference, Acemoglu said.

“First of all, it’s about more data and better data,” he said. “Cancer [research], pandemic tracking, recommendation systems are all about data.” Using AI can significantly advance everything from medical treatments to improved customer outcomes.

“Second, as we move into the heart of the 21st century, we need to reduce our footprint on the environment – not just carbon, but many other ways [to reduce] materials and consumption of resources,” he said. AI, he argued, can help design and implement a post-fossil fuel, post-resource economic system.

“Third, AI has shown its greatest success – its only success so far – in improving decision-making. AI can help businesses make better decisions, better hiring, better pricing,” he said. “But none of those promises are being realized, and instead we’re facing” real peril.

Acemoglu is concerned that capitalism is ill-suited to make sure that AI’s benefits are shared and its risks mitigated. “I’m a huge believer that innovation brings a lot of externalities,” he said. “You can’t just say, ‘let the market take care of it.’”

He pointed to the fossil fuel industry as an example. Oil and gas companies have known for decades that their products cause climate change – an externality of gigantic proportions – yet they continue to receive government subsidies and fight to keep clean energy sources marginalized.

He also worries about AI being developed by the private sector for its own benefit. “If you look at how AI is used – monitoring doesn’t help the workers, it [drives them]. Amazon pays a much higher wage … but many workers don’t like it because it uses AI to monitor them, make them work harder.” Injury rates at Amazon facilities are higher than at other companies’ sites, and they rise sharply during seasonal peaks such as Christmas, he said.

“If you are an economist and look at the current [digital] landscape, it looks like monopoly, it smells like monopoly, it acts like monopoly,” he said. The government hasn’t taken action because current legal standards were written for a very different world. “Today we’re not dealing with the ‘monopoly’ problem but something much more pernicious.”

He suggested laws and regulations need to be revised to reflect today’s needs. “I think we have to strengthen the privacy requirement. I think we have to revive consumer protection – what do we mean by ‘consumer protection’ in this era?” he said. “I think we have to decide which [uses] individuals find problematic but don’t have agency” to address on their own.

“We also have to find ways of making data more available to competing companies,” Acemoglu added. “There are two conflicting ideas here – protecting privacy while sharing data … We have to solve both problems, and we can’t solve one without the other.”

He noted that global society is already seeing the harm that unregulated AI is doing to democracies. “It’s ironic – the first wave of thought on the effects of digital technologies on democracy was completely the opposite … in the early 2000s, there was optimism that [technology would be] a positive,” he said. It is not that “that view was fundamentally wrong,” he added; rather, “that view would have required a very different direction of technological development and policy.”

What is happening instead “is what I call disrupting the existing balance. Human societies, when they function to even a moderate level, are based on balance,” he said. “The biggest problem of the data economy is disrupting this balance. It’s empowering companies against their customers, countries against their citizens.”

Instead, “a handful of companies in the U.S. and China are shaping our future,” he said. “That’s just an unacceptable state of affairs for anyone … We have to try to democratize decisions about the future of AI.”