Elon Musk Is Right About the Threat of AI, but He’s Dangerously Wrong About Why

Tesla Motors Inc. CEO Elon Musk. Photo: Francois Mori/AP

Could humanity learn to live with AI?

This question originally appeared on Quora: To what extent might Stephen Hawking and Elon Musk be right about the dangers of artificial intelligence? What are your thoughts? Answer by Suzanne Sadedin, evolutionary biologist.

Elon Musk and Stephen Hawking are right: AI is dangerous. But they are dangerously wrong about why.

I see two fairly likely futures:

Future one: AI destroys itself, humanity, and most or all life on Earth, probably much sooner than 1,000 years from now.

Future two: Humanity radically restructures its institutions to empower individuals, probably via transhumanist modification that effectively merges us with AI. We go to the stars.

Right now, we are headed for the first future, but we could change this. As much as I admire Elon Musk, his plan to democratize AI actually makes future one more likely, not less.

Here’s why: There’s a sense in which humans are already building a specific kind of AI; indeed, we’ve been gradually building it for centuries. This kind of AI consists of systems that we construct and endow with legal, real-world power. These systems create their own internal structures of rules and traditions, while humans perform fuzzy brain-based tasks specified by the system. The system as a whole can act with an appearance of purpose, intelligence, and values entirely distinct from anything exhibited by its human components.

All nations, corporations and organizations can be classified as this kind of AI.

I realize at this point it may seem like I’m bending the definition of AI. To be clear, I’m not suggesting organizations are sentient, self-aware, or conscious, but simply that they show emergent, purpose-driven behavior equivalent to that of autonomous intelligent agents. For example, we talk very naturally about how “the US did X,” and that means something entirely different from “the people of the US did X,” “the president of the US did X,” or even “the US government did X.”

These systems can be entirely ruthless toward individuals. Such ruthlessness is often advantageous, even necessary, because these systems exist in a competitive environment. They compete for human effort, involvement, and commitment: money and power. That’s how they survive and grow. New organizations, and less successful ones, copy the features of dominant organizations in order to compete. This places them under Darwinian selection, as Milton Friedman noted long ago.

Until recently, however, organizations have relied upon human consent and participation; human brains always ultimately made the decisions, whether it was a decision to manufacture 600 rubber duckies or drop a nuclear bomb. So their competitive success has been somewhat constrained by human values and morals—there are not enough Martin Shkrelis to go around.

With the advent of machine learning, things have changed. We now have algorithms that can make complex decisions better and faster than any human, about practically any specific domain. They are being applied to big data problems far beyond human comprehension. Yet, these algorithms are still stupid in some ways. They are designed to optimize specific parameters for specific datasets, but they’re oblivious to the complexity of the real-world, long-term ramifications of their choices.

Machine-learning algorithms, increasingly, act like the neural ganglia of organizations; they do the computational heavy lifting, while humans act as messengers and mediators between them. The jobs machine-learners are designed to do are those that help their organizations out-compete other organizations. And this is what we should expect any innovations that approach strong AI to be designed to do, as well: To help organizations compete.

There is, of course, great scope for this competition to be good for humanity: we get faster, better search results and more effective cancer drugs. But we must expect humans to be systematically disempowered. Why?

  • As machine-learning algorithms improve, they will gradually be integrated with one another to form coherent, independent competitive systems that approach more traditional definitions of AI. Doing this will increase the system’s overall efficiency, and thus make it more competitive. Humans will be removed from decision-making roles simply as a side-effect.
  • Humans have moral scruples, empathy, selfishness, biases, and foresight, all of which can limit the competitive effectiveness of the organizations we work for. The systems that win will be those that eliminate these human weaknesses. (We humans might not consider these things weaknesses, but in evolution, even foresight can be a weakness if it prevents you from advantaging yourself over your competitors.)
  • We can also expect increasingly sophisticated control over our access to information, again for competitive reasons. What you don’t know about, you can’t fight. Arguably this is manifest already in the ongoing battles between governments and corporations over internet regulation.

So, AIs will be powerful and humans won’t. Who cares? Here’s the big problem with this scenario: These AIs are evolving under natural selection in a globally connected world. One of the hardest problems for natural selection is the tragedy of the commons: a situation where multiple agents share a limited resource. Every agent has an individual incentive to take more than their share, but if too many do so, the resource is destroyed and everybody loses. Unfortunately, in most scenarios, agents cannot trust one another not to abuse the resource, so they predict (accurately) that it will be abused. Knowing this, the rational response for every agent is to abuse it.
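This incentive structure is easy to see in a toy model. Below is a minimal, purely illustrative simulation (my own sketch, not anything from the article or from any real AI system): a handful of agents harvest a shared, regenerating resource, and every name and parameter value is an arbitrary assumption chosen for clarity.

```python
# Illustrative sketch only (not from the article): a toy simulation of the
# tragedy of the commons. All names and parameter values are arbitrary
# assumptions chosen to make the incentive problem visible.

def simulate(num_agents=10, stock=100.0, fair_share=0.2, greed=1.5,
             regen_rate=0.1, carrying_capacity=100.0, steps=200):
    """Each step, every agent harvests greed * fair_share from a shared stock
    that regrows logistically. Returns the stock remaining after `steps`."""
    for _ in range(steps):
        demand = num_agents * fair_share * greed   # what the agents try to take
        stock -= min(demand, stock)                # they cannot take more than exists
        stock += regen_rate * stock * (1 - stock / carrying_capacity)  # regrowth
    return stock

if __name__ == "__main__":
    # Restrained agents (total harvest below the maximum regrowth rate) leave a
    # healthy stock; slightly greedier agents drive the same resource to collapse.
    print(f"Stock after 200 steps, restrained agents: {simulate(greed=1.0):6.1f}")
    print(f"Stock after 200 steps, greedy agents:     {simulate(greed=1.5):6.1f}")
```

The specific numbers don’t matter; the structure does. No individual change of heart fixes the outcome, because each agent’s rational move depends on what it expects every other agent to do.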

Humans are probably the only species capable of thinking our way around the tragedy of the commons, precisely because we have evolved a peculiarly cooperative cognitive architecture that makes strong, trusting relationships and social contracts seem natural to us. I see absolutely no reason why the AIs we are currently creating will share this cooperative ethos; from their evolutionary perspective, it is simply one of the human weaknesses that can be eliminated for competitive benefit. Our species evolved it in a very specific tribal environment where we experienced the tragedy of the commons, over and over at small local scales. Individuals who couldn’t cooperate lost everything. The evolution of our organizations has the opposite trajectory: they compete in a highly connected world with abundant resources, a world where exponential growth is not only possible but expected.

Newly evolving organizations have never experienced the tragedy of the commons, so they have no evolutionary incentive to develop any structure to deal with it effectively.

I think if we were truly confronted with the humanitarian consequences of climate change, only a tiny proportion of individual humans would accept the risk and suffering entailed by inaction. Yet, despite acknowledging the data on this, nations continue to squabble over negligible reductions in carbon emissions. Individual humans have the foresight and global perspective to appreciate that this is a disaster, but our organizations are structured to deny that perspective precisely because it undermines their success in short-term competition.

Providing open access to machine-learning technologies, as Musk proposes, will only intensify competition among emerging AIs. If a small number of ruling AIs were to attain world domination, it’s just possible they might be able to coordinate to avoid mutual destruction. But when millions of AIs are fighting for market share, the coordination problems involved in long-term sustainability become intractable.

Perhaps humans still have enough power to win the fight over climate change. I don’t know. But without drastic systemic change, we will be progressively locked out of major decisions and fed increasingly biased information to maintain compliance.

In a world with finite resources and an economy based around exponential growth, we will inevitably face another global tragedy of the commons. Will our ruling AIs rise to the occasion? Why would they?