Once the stuff of sci-fi and spy flicks, facial recognition technology has evolved into a concrete reality touching nearly everyone on the planet.
The technology figures prominently in post-9/11 security. According to the International Civil Aviation Organization, 93 countries now issue passports containing the bearer’s biometric facial data. A number of U.S. states use facial recognition to prevent individuals from obtaining multiple driver’s licenses under different names. And law enforcement agencies use it successfully to identify criminals from video footage.
In the pre-Google, pre-cloud-computing era, the technology required for these facial recognition systems was exclusively in the hands of the governments and organizations that deployed them. Flash-forward ten years and the technology is available off the shelf, biometric databases are booming, and the personal information of millions of people is freely available in the cloud.
These new circumstances have prompted the International Biometrics and Identification Association (IBIA), a trade association promoting the appropriate use of identity and security technology, to raise the red flag on an impending “perfect storm.”
The IBIA warns that this perfect storm may destroy the barrier separating our online and offline identities, altering our notions of what constitutes privacy in today’s connected world.
Identification in moments
Imagine a scenario in which anyone with a mobile device could capture an image from a distance and use facial recognition software to identify the individual and access a wealth of personal information that they or others have uploaded over the years. Researchers at Carnegie Mellon University have already done it.
In August a team led by Carnegie Mellon Professor Alessandro Acquisti reported that they had successfully combined three technologies accessible to anyone: a commercially available face recognition tool, cloud computing, and public information from social network sites such as Facebook. Together, these let them identify individuals both online and in the physical world.
In their first experiment, Acquisti’s team scanned profiles on a popular online dating site and identified users, protected only by pseudonyms, from their photos. In another experiment, the team used the technology to identify individuals on campus from their Facebook profile photos. In a third, the researchers used a photo of a subject’s face to predict the student’s Social Security number and personal interests.
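The matching pipeline behind experiments like these can be sketched in a few lines. The sketch below is illustrative only: it assumes a face recognition model that converts each photo into a fixed-length numeric vector (an “embedding”), and substitutes synthetic vectors so the enrollment and matching steps are runnable. The names and the similarity threshold are invented for the example.

```python
import numpy as np

# Conceptual sketch only: real systems derive these vectors ("embeddings")
# from face images with a trained recognition model; here we use synthetic
# unit vectors so the matching step itself is runnable.
rng = np.random.default_rng(0)

def make_embedding():
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

# Step 1: enroll a "gallery" of identity-tagged photos (e.g. pulled from
# public social-network profiles), keyed by the name attached to each photo.
gallery = {name: make_embedding() for name in ["alice", "bob", "carol"]}

# Step 2: take a probe photo. Here we simulate a new photo of "bob" as his
# enrolled embedding plus a small amount of noise.
probe = gallery["bob"] + rng.normal(scale=0.01, size=128)
probe /= np.linalg.norm(probe)

def identify(probe, gallery, threshold=0.8):
    # Cosine similarity against every enrolled identity.
    scores = {name: float(probe @ emb) for name, emb in gallery.items()}
    best = max(scores, key=scores.get)
    # Below the threshold, the probe matches nobody in the gallery.
    return best if scores[best] >= threshold else None

match = identify(probe, gallery)
```

The privacy hazard the researchers describe is entirely in step 1: once billions of identity-tagged photos are public, anyone can build the gallery, and matching against it is cheap.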
“The results foreshadow a future when we all may be recognizable on the street–not just by friends or government agencies using sophisticated devices–but by anyone with a smart phone and Internet connection,” said the researchers.
This is possible now because of exponentially increased accessibility, according to the IBIA. Identification databases were once small and tightly controlled, but today anyone with the right computer program can build massive databases using the billions of identity-tagged photos openly available online.
Another new point of access is the digital camera. According to IBIA, when facial recognition was first invented twenty years ago, digital photography was exclusive, expensive, time-consuming and certainly not within the reach of the average citizen. Today it’s a standard feature on most cell phones, and inexpensive point-and-shoot models are everywhere.
This has made it much easier for users to create and upload the digital images necessary to form facial recognition databases. Smart phones are particularly problematic in that their connectivity enables users to seamlessly take and upload digital photos. Increasingly powerful processors also enable smart phones to run complex applications such as facial recognition, says IBIA.
IBIA also points to the improved speed and accuracy of algorithms. According to independent measurements by the National Institute of Standards and Technology, facial recognition algorithms are one hundred times more accurate and up to one million times faster than past systems. Improvements have also made modern systems less reliant on precise facial placement and controlled lighting for accurate operation.
These improvements have taken facial recognition out of the lab and put it on the road to pervasive real-world use. In response to this summer’s riots in the UK, police turned to facial recognition to identify looters caught on camera, running the images against a face-matching database that Scotland Yard had built in preparation for the 2012 Olympic Games in London.
On the same front, a cadre of so-called digilantes formed a Google Group to use Face.com’s facial recognition API to identify rioters. The group produced no clear results and disbanded in August, but it was successful in demonstrating that the technology is accessible to average citizens.
Adding the social networks
In order to “help make tagging your friends easier,” Facebook added a feature that automatically identifies other Facebook users in uploaded photos and prompts the uploader to tag them based on its suggestions. Each time a photo is uploaded to Facebook with your name attached, this “Tag Suggest” feature gathers data from the photo and learns to identify you more reliably in future uploads.
Although Tag Suggest is enabled by default on many profiles, users can turn it off through their privacy settings. Still, disabling it does not stop Facebook from recognizing you or from building out its biometric database.
Since its debut in June, Tag Suggest has been rolled out in most of the countries represented on Facebook, but not everyone is happy about it. In November, Germany’s data protection agency announced its intention to file suit against Facebook over Tag Suggest.
The agency claims that Facebook compiled its massive facial recognition database without the prior knowledge or consent of millions of users, resulting in a wholesale invasion of privacy. At this time no lawsuit has been filed.
What can be done?
Aside from legal action, there are a few steps that can be taken to protect individual privacy. According to IBIA, banning the technology is a “desperate act” and ultimately futile. As IBIA report author Joseph Atick points out, past attempts to stifle useful technologies have been unsuccessful, and facial recognition is too vital a security tool to throw out with the bathwater.
Atick argues we must begin by changing the way we look at identity-tagged images in the cloud. These, Atick says, must be treated like any other personal identity information and should be subject to the same protections as Social Security numbers, financial data and health care records. Accordingly, any security breach on an image site should be countered with equal severity.
Additionally, Web sites hosting identity-tagged images should set up protections against software that aims to harvest images for the creation of databases.
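One common form of such protection is per-client rate limiting: ordinary visitors view a handful of images, while harvesting software requests thousands, so capping request rates slows bulk collection without affecting normal browsing. A minimal sketch follows, with invented limits that are not drawn from any real site’s policy.

```python
import time
from collections import defaultdict, deque

class ImageRateLimiter:
    """Sliding-window limiter: allow at most `limit` image requests per
    client within `window` seconds, and reject the rest.

    Illustrative only; real sites layer this with CAPTCHAs, bot-detection
    heuristics, and crawler directives."""

    def __init__(self, limit=30, window=60.0):
        self.limit = limit
        self.window = window
        self.requests = defaultdict(deque)  # client id -> request timestamps

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.requests[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # request volume suggests bulk harvesting; deny
        q.append(now)
        return True
```

A limiter like this only raises the cost of harvesting rather than preventing it outright, which is why Atick’s other recommendations treat the images themselves as data worth protecting.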
Finally, Atick advocates for a warning system to alert consumers uploading images that the photos could be used for facial recognition. In this way, the consumer is given the chance to “opt-in” to sharing such information, rather than do so unknowingly.
The moment of convergence for this “perfect storm” has not yet arrived, according to the IBIA report, but it is inevitable. To reach the level of widespread privacy invasion suggested by the Carnegie Mellon research, the technology still requires additional refinement, as the failure of the Google digilantes suggests.
Thus IBIA says there is still time for the facial recognition industry to establish self-regulatory measures to protect individual privacy while allowing the technology to serve as a valuable security resource.
FTC examines facial recognition
The Federal Trade Commission (FTC) is seeking public comments on facial recognition technology and the privacy and security implications raised by its increased use.
The FTC held a public workshop addressing commercial applications of facial detection and recognition technologies at the close of 2011. Participants explored current uses, future uses, benefits, and potential privacy and security concerns.
Facial detection and recognition technologies have been adopted in a range of new contexts, from online social networks such as Facebook and Google+ to digital signs and mobile apps. Their increased use has raised a variety of privacy concerns.
The FTC collected public comments on issues raised at the workshop, including but not limited to:
What are the privacy and security concerns surrounding the adoption of these technologies, and how do they vary depending on how the technologies are implemented?
Are there special considerations that should be given for the use of these technologies on or by populations that may be particularly vulnerable, such as children?
What are best practices for providing consumers with notice and choice regarding the use of these technologies?
Are there situations where notice and choice are not necessary? By contrast, are there contexts or places where these technologies should not be deployed, even with notice and choice?
What are best practices for developing and deploying these technologies in a way that protects consumer privacy?
A report is likely, though no timeline has been published.