By: Talia Boiangin
Earlier this month, an Amazon employee wrote an op-ed for Medium, claiming that more than 450 employees had signed a letter asking Jeff Bezos to stop selling the company's facial recognition software, "Rekognition," to police worldwide. This comes just three months after the American Civil Liberties Union (ACLU) tested the software's accuracy using photos of all 535 members of Congress. The results? Rekognition incorrectly matched 28 of them, a 5.2% error rate. In a blog post, the ACLU wrote that "face surveillance also threatens to chill First Amendment-protected activity like engaging in protest or practicing religion, and it can be used to subject immigrants to further abuse from the government."
The ACLU has been very vocal about law enforcement's use of facial recognition software. The Orlando Police Department suspended its two-year pilot program with Rekognition after the ACLU wrote a letter to the Orlando City Council arguing that "people should be able to safely live their lives without being watched and targeted by their government." It should be noted, however, that the department has since restarted its use of Rekognition and continues to investigate how it can be used in the context of law enforcement. While it is my personal belief that we should safeguard our First Amendment rights, there are concerns just as grave as privacy infringement, namely biased algorithms and biased training data.
Artificial intelligence (AI) algorithms used to build facial recognition software are hindered by programmers' biases and by a lack of diversity in the photos used to "train" the software. This is not a surprising issue. In 2015, Google's photo software labeled two African-Americans as gorillas. The fault lay in Google's algorithm and in the data used to train it. Earlier this year, Google apparently "resolved" the issue by simply removing gorillas from its labeling system. Software with failures like these should not be in the hands of the most powerful law enforcement agencies in the world.
In February of this year, Joy Buolamwini, a researcher at the MIT Media Lab, tested three facial recognition systems and published her findings on their accuracy. The study found that the technology had trouble with darker skin tones: gender was misidentified in less than one percent of lighter-skinned males, in up to seven percent of lighter-skinned females, in up to 12 percent of darker-skinned males, and in up to 35 percent of darker-skinned females. After publishing her findings, Buolamwini wrote a letter to Jeff Bezos expressing her concern over the use of facial recognition technology in law enforcement. In the aforementioned ACLU experiment, people of color make up roughly 20% of Congress, and yet nearly 40% of the 28 false matches were of people of color.
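The size of that disparity is easy to make concrete. The back-of-the-envelope calculation below uses only the figures reported above (28 false matches, roughly 20% of Congress, "nearly 40%" of false matches, taken here as 0.40); it is an illustration of the arithmetic, not new data.

```python
# Back-of-the-envelope check of the ACLU result described above.
# All figures come from the article; 0.40 stands in for "nearly 40%".
total_false_matches = 28
share_of_congress = 0.20       # people of color as a share of Congress
share_of_false_matches = 0.40  # people of color as a share of false matches

# If errors were spread evenly, false matches of people of color
# would track their share of Congress.
expected = total_false_matches * share_of_congress
observed = total_false_matches * share_of_false_matches

print(f"expected ~{expected:.1f} false matches if unbiased, observed ~{observed:.0f}")
print(f"overrepresentation factor: {observed / expected:.1f}x")
```

In other words, an unbiased system would have produced roughly five or six false matches of people of color; the reported figure is about twice that.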
The implications of false matches by law enforcement need to be rigorously audited. The Future of Privacy Forum, for example, released Privacy Principles for Facial Recognition Technology in Commercial Applications, which discusses these very implications. Such guidance, along with a broader public discussion of the use of privacy-violating software, must be taken up sooner rather than later. If we cannot achieve 100% accuracy in facial recognition, there should be a call to evaluate what percentage match a police officer needs before having "reasonable suspicion" to stop an individual. Additionally, if the data used to train the algorithm is unrepresentative, perhaps Amazon should consider creating a system, possibly built on blockchain, that would allow police departments across the country to share data. Although it cannot fix a biased algorithm, training on national demographic data may help reduce bias in the AI's final decision.
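As a sketch of what evaluating a match threshold could look like: facial recognition systems such as Rekognition return a similarity score for each candidate match, and Amazon has reportedly recommended a high confidence cutoff (around 99%) for law enforcement use. The snippet below filters a list of candidate matches by such a cutoff; the candidate names, scores, and the exact threshold are illustrative assumptions, not real output from any system.

```python
# Illustrative only: filtering face-match candidates by similarity score.
# The candidate list is made up; real systems return a 0-100 similarity
# score alongside each candidate identity.
candidates = [
    {"name": "Candidate A", "similarity": 99.4},
    {"name": "Candidate B", "similarity": 92.1},
    {"name": "Candidate C", "similarity": 81.7},
]

THRESHOLD = 99.0  # hypothetical cutoff for "reasonable suspicion"

strong_matches = [c for c in candidates if c["similarity"] >= THRESHOLD]
print(strong_matches)  # only Candidate A clears the cutoff
```

The policy question is precisely where that threshold line belongs, and who audits it: at 80% the example yields three "matches," at 99% only one.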