The Internet of Things Needs a Code of Ethics


Technology is evolving faster than the legal and moral frameworks needed to manage it.

In October, when malware called Mirai took over poorly secured webcams and DVRs, and used them to disrupt internet access across the United States, I wondered who was responsible. Not who actually coded the malware, or who unleashed it on an essential piece of the internet's infrastructure. Instead, I wanted to know whether anybody could be held legally responsible. Could the insecure devices' manufacturers be liable for the damage their products caused?
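To make "poorly secured" concrete: Mirai spread mostly by trying factory-default usernames and passwords on devices that left remote logins exposed to the open internet. The Python sketch below is purely illustrative, not Mirai's code, and its address and credential list are hypothetical; it shows the defensive flip side of the same idea, checking whether a device still answers on the telnet port that made that guessing possible.

```python
# Illustrative sketch of the weakness class Mirai exploited: devices that
# expose telnet and still use factory-default credentials. Defensive use
# only; the credentials and address below are hypothetical examples.
import socket

# A small stand-in for the kind of default list Mirai carried.
DEFAULT_CREDENTIALS = [("admin", "admin"), ("root", "root"), ("admin", "1234")]

def telnet_port_open(host: str, port: int = 23, timeout: float = 2.0) -> bool:
    """Return True if the device accepts TCP connections on the telnet port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def audit_device(host: str) -> None:
    """Flag a device that still exposes the precondition for Mirai-style guessing."""
    if telnet_port_open(host):
        print(f"{host}: telnet is reachable; make sure defaults like "
              f"{DEFAULT_CREDENTIALS[0]} have been changed, or close the port.")
    else:
        print(f"{host}: telnet closed (good).")

audit_device("192.0.2.10")  # documentation-range address; test a device you own
```

An attack at Mirai's scale needed nothing more exotic than this: an open port and a shipped password that owners were never forced to change.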

Right now, in this early stage of connected devices’ slow invasion into our daily lives, there’s no clear answer to that question. That’s because there’s no real legal framework that would hold manufacturers responsible for critical failures that harm others. As is often the case, the technology has developed far faster than policies and regulations.

But it’s not just the legal system that’s out of touch with the new, connected reality. The internet of things, as it’s called, is also lacking a critical ethical framework, argues Francine Berman, a computer-science professor at Rensselaer Polytechnic Institute and a longtime expert on computer infrastructure. Together with Vint Cerf, an engineer considered one of the fathers of the internet, Berman wrote an article in the journal Communications of the Association for Computing Machinery about the need for an ethical system.

I spoke to her about ethical design, and how to balance individual privacy with the potential for social good of connected devices that share data with one another. A transcript of our conversation, lightly edited for concision and clarity, follows.


Kaveh Waddell: Why is it particularly important to think about ethics in the internet of things?

Francine Berman: I think we’ve been running up against these issues as digital technologies become more and more prevalent—but the internet of things is particularly interesting.

First of all, we're just at the tip of the iceberg in what is arguably going to be a brave new world. And it's highly heterogeneous: We'll be seeing a lot more autonomous systems, we'll be seeing enhanced humans and smart systems, devices, and organizations. When you put all of those together, and you start thinking about how to bring out the best of the internet of things rather than the worst of the internet of things, governance is really the key.

That means understanding how to design and build and think about these systems. Who’s responsible and who’s accountable, what does it mean to be ethical and what does it mean to promote the public good?

Waddell: What’s another example of a new technology driving the creation a new ethical framework?

Berman: Think about the Industrial Revolution: The technologies were very compelling, but perhaps the most compelling part was the social change they created. During the Industrial Revolution, you saw a move to the cities, you saw the first child-labor laws, you saw manufacturing really come to the fore. Things became available that had not been before. There was a workforce evolution.

In some sense, the Information Age has had that same kind of impact on all aspects of life. And the internet of things exacerbates it, because now the boundaries between the physical world and the biological world and the cyber world are blending. We’re bumping into something that’s going to have a tremendous impact in the way we live, what we do, how we think about things and even our individual rights.

Waddell: Have we seen a shift in governance in the Information Age in the way we did at the beginning of the Industrial Revolution?

Berman: Well, sure. You think about something like copyright: That was around at the time of the Constitution, and now we're worrying about whether "Stairway to Heaven" is a copyright infringement. There are a lot of things we're looking at now that we never even imagined we'd have to think about. A great example: What if self-driving cars have to make bad choices? How do they do that? Where are the ethics? And then, who is accountable for the choices that are made by autonomous systems?

This needs to be more of a priority, and we need to be thinking about it more broadly. We need to start designing the systems that are going to be able to support social regulation, social policy, and social practice, to bring out the best of the internet of things.

Waddell: What would those supporting systems look like?

Berman: Think about designing a car. I want to design it so it's safe, and so that the opportunity to hack my car is minimized. If I design internet of things systems that are effective, adaptive, and full of opportunity, but I only worry about really important things like security, privacy, and safety afterward, that works much less well than designing the systems with those things in mind from the start.

We can lessen the number of unintended consequences if we start thinking, at the design stage and the innovation stage, about how we're going to use these technologies. Then, we put into place the corresponding social framework.

Waddell: I want to ask a little more about attribution. How, for example, should we think about assigning responsibility for the Mirai botnet that harnessed internet of things devices to attack core internet infrastructure last year? Who should be held responsible for that?

Berman: That’s a really great question. There’s a shared responsibility between innovators, companies, the government and the individual, to try and create and utilize a framework that assigns responsibility and accountability based on what promotes the public good.

Think about medicine: In trying to treat symptoms and cure diseases, the medical community creates medication. Some medications are available over the counter, and some medications require a prescription because you need guidance from your doctor. Some medications can only be administered in the hospital because they’re very specialized or dangerous. We have a whole social and policy framework to govern the use and administration of medicine—even if it doesn’t work perfectly—whose purpose is to keep you safe and promote individual and public health.

Waddell: How do we create a framework like that? Does it start with creating a new system of regulations?

Berman: We have to be careful with regulations, because we want to make sure we have thoughtful regulations that promote the public good rather than get in the way of things.

The internet of things is just emerging, so this is a great time for experimentation. If you think about a smart city or a smart system or a smart building, or even an intensive-care ward, you have a lot of devices and systems that share information, you have policy issues, you have privacy issues. This is a great time for us to start experimenting with things that make sense, and getting some feedback.

The National Institute of Standards and Technology is doing some really interesting work on cyber-physical systems, and they’re starting to develop standards that would help people keep things private. We’ll have more experience with that as people start putting that into practice. Some of it will work very well, and some of it won’t, and that will help us iteratively improve it.

You can’t come out full force with a law about something you don’t really know about, because it’s unlikely to work effectively. But we can get a lot of experience in more circumscribed systems now. And if we make it a priority and we start looking at smart cities in that way, or smart buildings, then I think we have a chance of starting to organically grow a sensible governance and ethical system for the internet of things.

Waddell: How do you balance individual privacy with the greater community good that can come from harvesting large amounts of personal data?

Berman: I think we’re dealing with this bit by bit. When you travel to a foreign country and you come back, and the customs agent asks you where you’ve been, they’re really asking a question that you might think is private information. But you’re willing to share with them whether you’re on a farm or whether you’re bringing back vegetables: things that can be dangerous to the public good. So we’re already making those decisions all the time.

With the internet of things, this is exacerbated by the sheer technical challenge of trying to do things adaptively and preferentially. Imagine that I could decide whether, for example, my address appears on Google Maps. If everyone made that choice individually, Google Maps would become less valuable.

And it’s hard to even imagine what kind of technical infrastructure you would need to maintain, or who would maintain it, to make that happen. So there will always be trade-offs. And they’ll always be in the gray area between extreme personal privacy and everything being transparent for the public good.

Waddell: Has anybody struck that balance particularly well?

Berman: That’s a good question. In some sense, that’s an individual opinion. I’m OK telling people whether or not I’ve been on a farm when I come back from a trip. But there are other places where I’m not so comfortable. I might feel that my device taking data and sharing data is too intrusive for me.

One question I have is whether the generation that grew up with these devices as part of their lives, versus the generations that didn't, will have different ideas about what's okay and what's not.

Waddell: I also wonder if people who see what they might gain from participating in a larger community-oriented data-gathering process might shift their attitudes. For example, digital assistants like Siri learn from the people who use them. When you use Siri, you're helping teach it. Does that make people decide it's worth giving up a small amount of privacy to have a more useful digital assistant?

Berman: I agree with that. A key question is: Are there negative repercussions to having people know about a particular thing? I think in a lot of instances, there are no negative repercussions. People might perceive a much greater benefit to sharing things, outweighing the downside.

Health data is a good example of something where there can be negative repercussions to sharing things, and so we have a lot of privacy laws around that. But some people may not feel that way, so you see things like PatientsLikeMe, [a website where patients can share their medical details and meet others who share their symptoms]. We all make those decisions at an individual level. But for the internet of things, I think it's very important for us to think of it at a societal level.

Waddell: To me, one of the biggest problems with the internet of things right now is that manufacturers are making poorly secured devices in their rush to connect every little thing they make to the internet. Is there any way to stop that from happening without turning to regulations and laws?

Berman: I was talking to someone the other day about baby monitors, which can be both great and pretty scary. It’s great to be able to monitor your baby and make sure that they’re OK, and it’s really scary when people are shrieking at your baby over the internet because your baby monitor got hacked. That’s poor design.
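What the difference between poor design and deliberate design might look like in code: the sketch below, with hypothetical names throughout (Monitor, FACTORY_DEFAULTS), keeps remote viewing disabled until the factory password has been replaced, which would have blocked the kind of attack Berman describes.

```python
# Illustrative "secure by design" sketch for a device like a baby monitor.
# Monitor and FACTORY_DEFAULTS are hypothetical names, not a real product API.

FACTORY_DEFAULTS = {"admin", "1234", "password"}  # stand-in default list

class Monitor:
    def __init__(self):
        self.password = "admin"      # the credential the device ships with
        self.remote_access = False   # remote viewing starts disabled

    def set_password(self, new_password: str) -> None:
        """Require a real credential before the device will go online."""
        if new_password in FACTORY_DEFAULTS or len(new_password) < 12:
            raise ValueError("choose a long, non-default password")
        self.password = new_password

    def enable_remote_access(self) -> None:
        # The design decision: remote viewing stays off until the owner
        # replaces the shipped default, so there is never a window in which
        # a stranger can log in with a password printed in the manual.
        if self.password in FACTORY_DEFAULTS:
            raise PermissionError("change the factory password first")
        self.remote_access = True

monitor = Monitor()
monitor.set_password("a-much-longer-passphrase")
monitor.enable_remote_access()  # succeeds only after real credentials are set
```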

I think the gee-whiz aspects of the internet of things get a lot of attention: Oh, I can do this, and the system adapts to me in this way, and I never have to turn on the lights in my house again because my house is doing it for me.

But I think it’s equally important to be thinking on the social side. We don’t want to, as you point out, rush to the final result, which is maybe legislation or policy, without a lot of experience and testing and thinking about what’s in the public good.

Waddell: Are people getting distracted by that gee-whiz factor: the novelty of Alexa and internet-connected lights? Are we in the honeymoon stage right now?

Berman: Right. Infrastructure is never newsworthy until it fails. To get people to think about and prioritize getting infrastructure right—and governance and ethics and policy and social-interest structure—is hard. Typically, we’re interested in the results and the breakthroughs and the discoveries, or the really bad things that happen, like losing Social Security numbers or getting hacked. It’s not newsworthy to say the water main is working, or the lights are still on.

Waddell: The Mirai botnet and the attack on the internet probably did a lot to bring infrastructure into headlines. If people keep thinking about it, do you think some of these cultural shifts might come about a little earlier?

Berman: This is something where your periodical and a lot of other important ones can really help, because you can bring it to people’s attention. We need more than just innovators and designers to think about it. We really need the stakeholders to think about it: policymakers, the private sector, leadership and C-suite people, the general public.

Waddell: What’s one ethical guideline that internet of things manufacturers should be keeping in their sights when they are developing new products?

Berman: Security is critical. For all of our technologies, security is the Achilles' heel. If we want these technologies to be used for good, we have to think about security. And we have to think about privacy, too: What data is private and what isn't, and are we engineering our systems so that they can support whatever privacy concerns we have?