What Can Barnstormers Teach Us about Face Recognition?
New technologies and a lack of regulation cause real and imagined problems.
Back in the 1920s, barnstormers crisscrossed the country, dropping into farmers' fields to offer rides in their exciting new technology, the airplane. These daredevils pushed the performance limits of their aircraft, and all too often they crashed.
They also set an example that has been repeated time and again throughout history: people rapidly apply new technologies to new uses, and a lack of regulation causes real and imagined problems.
We’re now at a similar point not only with face recognition, but also with many other fast-moving technologies with privacy implications, such as always-listening speakers and phones, GPS that tracks our every move, and, of course, social media. While my recommendations below apply to those technologies as well, I’m focusing on face recognition because there’s currently talk about banning it.
I appreciate that any technology that collects data about the public can raise legitimate concerns about privacy and civil liberties. But much of the recent reporting about face recognition is based on partial truths and misunderstandings. The Face Recognition Vendor Test (FRVT) report released by the National Institute of Standards and Technology (NIST) this past December makes clear that different face recognition systems perform differently across demographic groups such as sex, age, and race. But that doesn't imply deliberate bias; it means only that some vendors' systems perform better than others, which is typical of all emerging technologies. Competition is a powerful motivator: vendors will learn from the FRVT report and improve.
Because of the inherent complexities of face recognition, it's easy (and, in some cases, beneficial) for some stakeholders to mischaracterize how the technology works and its impact on privacy. The truth is more nuanced: face recognition offers tremendous opportunities to enhance both security and customer service in government and commercial applications. Yet if it's implemented the wrong way, it will harm privacy and fairness.
For face recognition systems to truly benefit our country, we need to develop methods and safeguards around the four key aspects of introducing any new technology into society: performance, use, control, and governance.
- Performance: Does a technology perform well in the context of its use? Face recognition performance has improved rapidly in recent years, powered by vast gains in computing speed and storage and by fundamental changes to the underlying algorithms. It will only continue to improve. However, performance will never be perfect, so the systems and applications must be controlled and governed. (A brief sketch after this list shows how the per-group error rates that FRVT reports can be measured.)
- New Uses: From airport security to smartphones and computer gaming, and from customer service to fraud prevention, many new face recognition applications are emerging. It's vitally important to build controls and governance into those systems' architectures so that the benefits and limits of the technology at different performance levels are transparent to all. That transparency will help us determine whether a new application is ready to be implemented and give individuals control of their personal data.
- Control: Controls on technology systems are implemented in two ways: first, by understanding the tradeoffs; second, by having the means to act on decisions about those tradeoffs. For example, if we give a company the right to collect our facial image to make it easier to unlock our phone, do we understand that it can use the data it collects, such as what we're doing, when, where, and with whom? And are there means of restricting that use that are easy to understand, exercise, and enforce?
- Governance: This is where we spell out what's acceptable and what's not, and when and why. For instance, is our priority finding terrorists or protecting personal data and privacy? Good governance is not just a list of rules; it recognizes the different aspects of the system it regulates, so that its rules are focused on the real issues. In the case of face recognition, that means governance must address the underlying issues of data collection, ownership, use, storage, retention, and aggregation.
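To make the Performance point concrete, here is a minimal, hypothetical sketch (in Python) of computing a per-group false match rate, the kind of figure FRVT reports. The scores, threshold, and group labels are invented for illustration and do not represent FRVT's data or methodology.

```python
# Hypothetical illustration only: per-group false match rate for a face
# verification system. All scores, labels, and the threshold are invented.
from collections import defaultdict

def false_match_rate(comparisons, threshold):
    """comparisons: iterable of (score, same_person, group) tuples."""
    impostor_attempts = defaultdict(int)
    false_matches = defaultdict(int)
    for score, same_person, group in comparisons:
        if not same_person:            # impostor pair: two different people
            impostor_attempts[group] += 1
            if score >= threshold:     # system wrongly declares a match
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_attempts[g]
            for g in impostor_attempts}

# Invented similarity scores: (score, same person?, demographic group).
comparisons = [
    (0.42, False, "group_a"), (0.35, False, "group_a"),
    (0.95, False, "group_b"), (0.88, False, "group_b"),
    (0.97, True,  "group_a"), (0.93, True,  "group_b"),
]
print(false_match_rate(comparisons, threshold=0.90))
# {'group_a': 0.0, 'group_b': 0.5}
```

At the same threshold, the two groups show different error rates. That is a performance gap to be measured and closed, not, by itself, evidence of deliberate bias.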
Three important things must happen. First, face recognition developers and users must understand that privacy is not a hindrance to the usefulness of the technology, but rather a means of making face recognition more useful. Second, privacy advocates must recognize that face recognition and similar technologies are necessary and unavoidable—and serve as a positive force in building privacy into those systems. Third, all parties (developers, users, regulators, and legislators) must work together with independent experts to develop technical architectures that allow reasonable governance and control of the technology as it serves the nation’s need for enhanced security.
Just as barnstorming was a dangerous way to use aircraft, face recognition technology is a tool that can be used well or misused. Rather than ban it, let's take a page from aviation history: create a system of safeguards and regulatory infrastructure for face recognition, then use it as a model for similar technologies.
Achieving this depends upon people from all sides of the issues making decisions based upon a factual understanding of the technologies—and making a deep commitment to protecting our privacy and civil rights.
Craig Arndt is the technical director for homeland security programs at MITRE, which operates the Homeland Security Systems Engineering and Development Institute™, a federally funded research and development center.