After the Boston Marathon bombings, investigators had images of the two men believed responsible for the attack, but facial recognition software couldn't identify the pair. Human analysts had to make the match.
As reported in The Boston Globe, even though pictures of Dzhokhar and Tamerlan Tsarnaev were in government databases, facial recognition software couldn't take the fuzzy images from the marathon site and match them against a stored image such as a driver's license photo.
Furthermore, facial recognition software might not be up to the task for several years.
When facial recognition software does work well, it is usually because the input images are clear. ID photos are taken head-on in good lighting, producing sharp, easy-to-process images; surveillance footage, by contrast, is often blurry and poorly lit, leaving the software unable to take the facial measurements it needs to make a match.
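The effect of image quality can be sketched with a toy experiment. This is only an illustration on synthetic data using a simple normalized-correlation score, not the proprietary measurements that commercial systems like MorphoTrust's actually compute: a crisp capture scores near-perfectly against the enrolled image, while a heavily blurred "surveillance" capture of the same subject scores far lower.

```python
import numpy as np

def box_blur(img, k):
    # crude box blur along both axes, simulating low-quality footage
    kernel = np.ones(k) / k
    out = img.astype(float)
    for axis in (0, 1):
        out = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), axis, out
        )
    return out

def similarity(a, b):
    # normalized cross-correlation between two images, in [-1, 1]
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# hypothetical enrolled "ID photo": a synthetic high-detail image
rng = np.random.default_rng(0)
id_photo = rng.random((64, 64))

# a clean capture of the same subject vs. a blurred "surveillance" capture
clean_capture = id_photo.copy()
blurry_capture = box_blur(id_photo, 9)

s_clean = similarity(id_photo, clean_capture)   # close to 1.0
s_blurry = similarity(id_photo, blurry_capture) # substantially lower
```

The blur discards exactly the fine detail the matching score depends on, which is the same reason low-resolution surveillance frames defeat real systems even when the subject is in the database.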
Software makers such as MorphoTrust believe the match rate in surveillance settings will improve as cameras and supporting hardware are upgraded to high-definition gear.
However, even as the software improves, criminals may learn to fool the cameras with disguises and other tricks, limiting the tool's usefulness even as it draws scrutiny from privacy advocates.