Musk and Zuckerberg Are Fighting Over Whether We Rule Technology—Or It Rules Us

Manu Fernandez, Stephan Savoia/AP File Photo

The default setting for Silicon Valley is in question.

In the public imagination, the Amish are famous for renouncing modern technology. In truth, many Amish farms hum with machines: milk vats, mechanical agitators, diesel engines, and pneumatic belt sanders are all found in their barns and workshops.

The Amish don’t actually oppose technology. Rather, the community votes on whether to adopt a given item, and adoption requires near-unanimous agreement, says Jameson Wetmore, a social science researcher at Arizona State University. Whereas the outside world may see innovation as good until proven otherwise, the Amish first decide whether a new technology might erode the community values they’re trying to preserve. “It is not individual technologies that concern us,” one Amish minister told Wetmore, “but the total chain.”

It’s an idea that is resonating in Silicon Valley these days, where a debate over technology and its potential unintended consequences is cleaving the industry into rival camps—each with a tech titan as its figurehead.

On one side is Facebook CEO Mark Zuckerberg, who sees technology as an intrinsic good. Any social or ethical problems can simply be handled as they arise (preferably without much regulation). This is the default setting for Silicon Valley, which sees the future through utopia-tinged glasses: The problem is the past, and the future can’t come soon enough.

On the other side is Elon Musk, CEO of Tesla and SpaceX, who argues for caution when dealing with technologies such as artificial intelligence lest humans lose control of their creations, and has expressed reservations about Zuckerberg’s online surveillance business model.

Neither man disavows technology; indeed, both insist our future depends upon rapid progress. (Musk, after all, is pouring billions into interplanetary rockets and a new solar economy.) But the Cambridge Analytica scandal has laid bare ideological rifts between the two men, and the attitudes toward technology that they represent.

Clash of the titans

Musk was among the most high-profile defections from Facebook in the days after the scandal broke, opting to remove the Facebook pages of both Tesla and SpaceX, which had amassed a combined total of roughly 5 million followers. “We’ve never advertised with FB,” he later tweeted. “Just don’t like Facebook. Gives me the willies. Sorry.” (He is, however, sticking with his 7 million followers on Facebook-owned Instagram.)

It wasn’t the first time Musk and Zuckerberg had a falling out in public. The divide between the two CEOs on AI safety is now one of Silicon Valley’s most-watched family feuds.

Musk wants to rein in AI, which he calls “a fundamental risk to the existence of human civilization.” Zuckerberg has dismissed such views, calling their proponents “naysayers.” During a Facebook live stream last July, he added, “In some ways I actually think it is pretty irresponsible.” Musk was quick to retort on Twitter. “I’ve talked to Mark about this,” he wrote. “His understanding of the subject is limited.”

Both men’s views on the risks and rewards of technology are embodied in their respective companies. Zuckerberg has famously embraced the motto “Move fast and break things.” That served Facebook well as it exploded from a college campus experiment in 2004 to an aggregator of the internet for more than 2 billion users.

Facebook has treated the world as an infinite experiment, a game of low-stakes, high-volume tests that reliably generate profits, if not always progress. Zuckerberg’s main concern has been to deliver the fruits of digital technology to as many people as possible, as soon as possible. “I have pretty strong opinions on this,” Zuckerberg has said. “I am optimistic. I think you can build things and the world gets better.”

Musk deals with electric cars and space travel, areas in which even small mistakes can have disastrous consequences. No one would buy a Tesla car if the company’s motto was “Move fast and break things.” Of course, that has not deterred Musk from taking controversial risks, even with people’s lives.

Consider Tesla’s decision to release its Autopilot in beta. That turned tragic in 2016 when Joshua Brown crashed his Tesla Model S into a tractor trailer at 70 mph, the first recorded fatality involving Autopilot. The US National Transportation Safety Board’s investigation blamed the crash on driver error and over-reliance on vehicle automation, but it also faulted Tesla for failing to ensure that drivers stay attentive at high speeds and for not restricting Autopilot to appropriate roads.

The issue of Autopilot safety made news again on March 23 after another fatal crash, in which Tesla said the driver had taken his hands off the wheel for six seconds before the collision. Tesla’s decision to release Autopilot could be seen as a premature rush to gather self-driving expertise before its rivals—or a sincere effort to deploy a life-saving technology as fast as possible.

Philosophically, Musk’s companies are driven by a belief that humanity needs an escape hatch. It’s no coincidence his companies address climate change and interplanetary exploration, not to mention artificial intelligence safety, brain-computer interfaces, and underground transport. He believes human extinction is far likelier than we might think, both because of the risks technology poses to our species (whether through greenhouse gas emissions or renegade robots) and because of natural disasters like a stray asteroid.

“Mark’s company is built around things that happen very quickly, and are low-value,” says one prominent Silicon Valley investor familiar with both companies. “Elon’s companies are built entirely around the premise that a catastrophic thing could happen. The inverse of that is their blind spots […] It comes down to how much risk do you want, how much unintended consequences are you willing to bear.”

Firmly in Zuckerberg’s camp are Google co-founder Larry Page, inventor and author Ray Kurzweil, and computer scientist Andrew Ng, a prominent figure in the artificial intelligence community who previously ran the artificial intelligence unit for the Chinese company Baidu. All three seem to share the philosophy that technological progress is almost always positive, on balance, and that hindering that progress is not just bad business, but morally wrong because it deprives society of those benefits.

Musk, alongside others such as Bill Gates, the late physicist Stephen Hawking, and venture investors such as Sam Altman and Fred Wilson, does not see all technological progress as an absolute good. For this reason, they’re open to regulation. After Trump’s election, for example, Wilson told a San Francisco audience that while the expected regulatory rollback would likely spur innovation, it wasn’t worth it: “I’m willing to pay the tax that good regulatory oversight creates on our industry.”

In 2017, Musk spoke to politicians about the urgency of reining in AI. “I keep sounding the alarm bell,” he told attendees at a meeting of America’s governors. “But until people see robots going down the street killing people, they don’t know how to react.”

The downside of optimism

At the heart of the divide in Silicon Valley is the question of who, at the end of the day, is in charge. Do we rule technology, or does it rule us?

Most people in Silicon Valley are like Zuckerberg. The Valley selects, relentlessly, for optimists who believe they control their destinies, and ours. There is little room left for self-doubt or dwelling on the downsides of their creations. “You wouldn’t get Facebook, Microsoft, Google and Apple if their founders weren’t deeply optimistic that they could do something meaningful,” said the investor. “You have to believe something is possible where everyone thinks it’s going to fail. You build a culture and companies that believe in that by design.”

This philosophy is what powered Facebook to success. It’s also at the core of why Facebook failed to anticipate the fallout of its surveillance strategy and privacy policies, and why it dismisses the risks of new technologies.

“I am much more motivated by making sure we have the biggest impact on the world than by building a business or making sure we don’t fail,” Zuckerberg said in his backyard barbecue speech. “I have more fear in my life that we aren’t going to maximize the opportunity that we have than that we mess something up and the business goes badly.”

The tech industry’s reckoning

Zuckerberg is not wrong. Optimism is essential to technological progress. But the unintended consequences that accompany good intentions keep people like Musk up at night. “Sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing,” Musk told the World Government Summit in Dubai this February.

Today, computer scientists are pondering aloud whether their field is having its “atom bomb” moment. In 1945, physicist J. Robert Oppenheimer, watching a mushroom cloud rise above the Trinity site, the world’s first nuclear test, quoted a line from the Hindu scripture the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

Yonatan Zunger, a former security and privacy engineer at Google, has compared software engineers’ power to that of “kids in a toy shop full of loaded AK-47s.” It’s becoming increasingly clear how dangerous it is to consider safety and ethics elective, rather than foundational, to software design. “Computer science is a field which hasn’t yet encountered consequences,” he writes.

It doesn’t take much imagination to see how the next wave of technology might go wrong. Every aspect of human life—our food, our work, our intimate interactions, our DNA itself—is, or will soon be, mediated by the technology we embrace. Machines can now recognize speech and written text; images will be next. Algorithms know your face, and the faces of millions of your fellow citizens. They can infer, with increasing accuracy, a person’s income, mental health, gender, creditworthiness, personality, feelings, and more from public data. Some possible inventions, like weaponized algorithms, have been likened to “nuclear weapons, but worse.”

Sam Altman described the anticipatory danger of the moment in an interview with Vanity Fair. “It’s a very exciting time to be alive,” he said, “because in the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

The problem is that humans have a poor record of anticipating, and mitigating, the dangers of new technologies. When automobiles were first introduced, they were allowed to wreak havoc for decades before anyone summoned the courage to impose any rules at all. US states finally began requiring driver’s licenses in the 1930s, and systematic federal motor-vehicle safety efforts such as seat belts only began in the 1960s.

We may not have as much time to experiment and adapt to the new technologies coming down the road. It’s easy to mock the fear of AI when robots still can’t open doors, and Siri struggles to book a restaurant reservation. But if you’re trained to contemplate the consequences of a low-probability catastrophe, after which patches and fixes aren’t possible, adopting a darker view makes sense. Better to design a stronger bottle for the genie than leave it up to chance how your creations enter the world.

The Amish might have something to say on the subject. As Wetmore visited different Amish communities, he wanted to know why the Amish didn’t own automobiles. After all, they had accepted technologies like diesel engines. Several Amish families bought cars when mass production started in the 20th century. But the community soon voted to prohibit them. Why?

“Well, look what they did to you,” one Amish man said, Wetmore recalls. “Do you know the name of your neighbors? As soon as you have the car, you never talk to your neighbor again. That’s not the society we want to live in.”