The Toll Facial Recognition Systems Might Take on Our Privacy and Humanity

Artificial intelligence really is everywhere in our day-to-day lives, and one area that’s drawn a lot of attention is its use in facial recognition systems (FRS). This controversial collection of technologies is one of the most hotly debated topics among data privacy activists, government officials, and proponents of tougher measures on crime.

Enough ink has been spilled on the topic to fill libraries, but this article is meant to distill some of the key arguments, viewpoints, and general information related to facial recognition systems and the impacts they can have on our privacy today.

What Are Facial Recognition Systems?

The technology behind FRS, and who develops it, can be complicated. It’s best to have a basic idea of how these systems work before diving into the ethical and privacy concerns surrounding their use.

How Do Facial Recognition Systems Work?

On a basic level, facial recognition systems operate on a three-step process. First, the hardware, such as a security camera or smartphone, records a photo or video of a person.

That photo or video is then fed into an AI program, which maps and analyzes the geometry of a person’s face, such as the distance between the eyes or the contours of the face. The AI also identifies specific facial landmarks, like the forehead, eye sockets, eyes, and lips.

Finally, all these landmarks and measurements come together to create a digital signature, which the AI compares against its database of digital signatures to see if there is a match or to verify someone’s identity. That digital signature is then stored in the database for future reference.
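As a rough illustration of that three-step pipeline, here is a minimal sketch using the open-source face_recognition Python library. The file names and the one-entry “database” are hypothetical stand-ins; real systems use far larger galleries and more sophisticated models.

```python
import face_recognition

# Step 1: load images captured by hardware (hypothetical file paths).
enrolled_image = face_recognition.load_image_file("enrolled_person.jpg")
camera_image = face_recognition.load_image_file("camera_frame.jpg")

# Step 2: map facial geometry. face_landmarks() locates features like the
# eyes and lips; face_encodings() condenses each detected face into a
# 128-dimensional vector (the "digital signature" described above).
landmarks = face_recognition.face_landmarks(camera_image)
enrolled_encodings = face_recognition.face_encodings(enrolled_image)
camera_encodings = face_recognition.face_encodings(camera_image)

if enrolled_encodings and camera_encodings:
    # Stand-in for a stored database of digital signatures.
    database = [enrolled_encodings[0]]

    # Step 3: compare the new signature against the database. The tolerance
    # is a distance threshold: lower values mean stricter matching.
    matches = face_recognition.compare_faces(database, camera_encodings[0], tolerance=0.6)
    distance = face_recognition.face_distance(database, camera_encodings[0])[0]
    print(f"Match: {matches[0]} (distance: {distance:.3f})")
else:
    print("No face detected in one of the images.")
```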

Read More At: The Pros and Cons of Enlisting AI for Cybersecurity

Use Cases of Facial Recognition Systems

A technology like facial recognition is broadly applicable to a number of different industries. Two of the most obvious are law enforcement and security. 

With facial recognition software, law enforcement agencies can track suspects and offenders unfortunate enough to be caught on camera, while security firms can utilize it as part of their access control measures, checking people’s faces as easily as they check people’s ID cards or badges.

Access control in general is the most common use case for facial recognition so far. It generally relies on a smaller database (i.e. the people allowed inside a specific building), meaning the AI is less likely to hit a false positive or a similar error. Plus, it’s such a broad use case that almost any industry imaginable could find a reason to implement the technology.
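The effect of database size on error rates is straightforward arithmetic: every additional face in the gallery is another chance for a false match. A quick sketch makes the point; the per-comparison false match rate below is purely illustrative, not a measured figure.

```python
# Hypothetical per-comparison false match rate of 0.1%.
false_match_rate = 0.001

# In a one-to-many search, the chance of at least one false match
# across a gallery of N faces is 1 - (1 - f)^N.
for gallery_size in (50, 1_000, 100_000):
    p_false_match = 1 - (1 - false_match_rate) ** gallery_size
    print(f"gallery of {gallery_size:>7,} faces: P(false match) ≈ {p_false_match:.1%}")
```

Under these assumptions, a 50-person building access list keeps the probability of a false match under 5%, while a 100,000-face law enforcement database makes one all but certain.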

Facial recognition is also a hot topic in the education field, especially in the U.S., where vendors pitch facial recognition surveillance systems as a potential solution to the school shootings that plague the country more than any other. It has additional uses in virtual classroom platforms as a way to track student activity and other metrics.

In healthcare, facial recognition can theoretically be combined with emergent tech like emotion recognition for improved patient insights, such as detecting pain or monitoring a patient’s health status. It can also be used during check-in as a no-contact alternative to traditional procedures.

The world of banking saw an increase in facial recognition adoption during the COVID-19 pandemic, as financial institutions looked for new ways to safely verify customers’ identities.

Some workplaces already use facial recognition as part of their clock-in-clock-out procedures. It’s also seen as a way to monitor employee productivity and activity, preventing folks from “sleeping on the job,” as it were. 

Companies like HireVue developed software that used facial recognition to assess the hireability of prospective employees. However, HireVue discontinued the facial analysis portion of its software in 2021, citing public concerns over AI and the diminishing value of visual analysis to the software’s effectiveness.

Businesses that sell age-restricted products, such as bars or grocery stores with liquor licenses, could use facial recognition to better prevent underage customers from buying these products.

Who Develops Facial Recognition Systems?

The people developing FRS are many of the same usual suspects who push other areas of tech research forward. As always, academics are some of the primary contributors to facial recognition innovation; the field traces back to the 1960s and the work of researchers like Woody Bledsoe.

As a more modern example, researchers at The Chinese University of Hong Kong created the GaussianFace algorithm in 2014, reporting that it had surpassed human-level facial recognition performance. The algorithm scored 98.52% accuracy, compared to 97.53% for humans.

In the corporate world, tech giants like Google, Facebook, Microsoft, IBM, and Amazon have been just some of the names leading the charge.

Google’s facial recognition is utilized in its Photos app, which infamously mislabeled a picture of software engineer Jacky Alciné and his friend, both of whom are black, as “gorillas” in 2015. To combat this, the company simply blocked “gorilla” and similar categories like “chimpanzee” and “monkey” on Photos.

Amazon was even selling its facial recognition system, Rekognition, to law enforcement agencies until 2020, when it banned police use of the software. The ban is still in effect as of this writing.

Facebook used facial recognition technology on its social media platform for much of the platform’s lifespan. However, the company shuttered the software in late 2021 as “part of a company-wide move to limit the use of facial recognition in [its] products.”

Additionally, firms that specialize in facial recognition software, like Kairos, Clearview AI, and FaceFirst, are contributing their knowledge and expertise to the field.

Read More At: The Value of Emotion Recognition Technology

Is This a Problem?

To answer the question of whether we should be worried about facial recognition systems, it’s best to look at some of the arguments that proponents and opponents of the technology commonly make.

Why Use Facial Recognition?

The most common argument in favor of facial recognition software is that it provides more security for everyone involved. In enterprise use cases, employers can better manage access control, while lowering the chance of employees becoming victims of identity theft.

Law enforcement officials say FRS can aid their investigations, helping them catch perpetrators more quickly and accurately. The technology can also be used to find victims of human trafficking, as well as to identify individuals who might not be able to communicate, such as people with dementia. This, in theory, could reduce the number of police-caused deaths in cases involving these individuals.

Human trafficking and sex-related crimes are an oft-repeated refrain among proponents of this technology in law enforcement. Vermont, the state with the strictest ban on facial recognition, peeled back its ban slightly to allow for the technology’s use in investigating child sex crimes.

For banks, facial recognition could reduce the likelihood and frequency of fraud. With biometric data like what facial recognition requires, criminals can’t simply steal a password or a PIN and gain full access to your entire life savings. This would go a long way toward stopping fraud, a crime for which the FTC received 2.8 million consumer reports in 2021 alone.

Finally, some proponents say, the technology is so accurate now that the worries over false positives and negatives should barely be a concern. According to a 2022 report by the National Institute of Standards and Technology, top facial recognition algorithms can have a success rate of over 99%, depending on the circumstances.

With accuracy that good and use cases that strong, facial recognition might just be one of the fairest and most effective technologies we can use in education, business, and law enforcement, right? Not so fast, say the technology’s critics.

Why Ban Facial Recognition Technology?

First, accuracy isn’t the primary concern for many critics of FRS. To them, whether the technology is accurate or not is beside the point.

While academia is where much facial recognition research is conducted, it is also where many of the concerns and criticisms are raised regarding the technology’s use in areas of life such as education and law enforcement.

Northeastern University Professor of Law and Computer Science Woodrow Hartzog is one of the technology’s most outspoken critics. In a 2018 article, Hartzog said, “The mere existence of facial recognition systems, which are often invisible, harms civil liberties, because people will act differently if they suspect they’re being surveilled.”

The concerns over privacy are numerous. As AI ethics researcher Rosalie A. Waelen put it in a 2022 piece for AI & Ethics, “[FRS] is expected to become omnipresent and able to infer a wide variety of information about a person.” The information it is meant to infer is not necessarily information an individual is willing to disclose.

Facial recognition technology has demonstrated difficulty identifying individuals of diverse races, ethnicities, genders, and ages. When used by law enforcement, this can lead to false arrests, wrongful imprisonment, and other harms.

As a matter of fact, it already has. In Detroit, Robert Williams, a black man, was incorrectly identified by facial recognition software as a watch thief and falsely arrested in 2020. After being detained for 30 hours, he was released due to insufficient evidence after it became clear that the photographed suspect and Williams were not the same person.

This wasn’t the only time this happened in Detroit, either. Michael Oliver was wrongly identified by facial recognition software as the person who threw a teacher’s cell phone and broke it.

A similar case happened to Nijeer Parks in late 2019 in New Jersey. Parks was detained for 10 days for allegedly shoplifting candy and trying to hit police with a car. Facial recognition falsely identified him as the perpetrator, despite Parks being 30 miles away from the incident at the time. 

There is also, in critics’ minds, an inherently dehumanizing element to facial recognition software and the way it analyzes the individual. Recall the aforementioned incident wherein Google Photos mislabeled Jacky Alciné and his friend as “gorillas.” It didn’t even recognize them as human. Given Google’s response to the situation was to remove “gorilla” and similar categories, it arguably still doesn’t.

Finally, there is the issue of what would happen if the technology were 100% accurate. The dehumanizing element doesn’t just go away if Photos can suddenly determine that a person of color is, in fact, a person of color.

The way these machines see us is fundamentally different from the way we see each other, because the machines’ way of seeing goes only one way. As Andrea Brighenti said, facial recognition software “leads to a qualitatively different way of seeing … [the subject is] not even fully human. Inherent in the one-way gaze is a kind of dehumanization of the observed.”

In order to get an AI to recognize human faces, you have to teach it what a human is, which can, in some cases, cause it to treat human characteristics that fall outside its dataset as decidedly “inhuman.”

Moreover, making facial recognition technology more accurate at detecting people of color only really serves to make law enforcement and business surveillance more effective. This means that, as researchers Nikki Stevens and Os Keyes noted in their 2021 paper for the academic journal Cultural Studies, “efforts to increase representation are merely efforts to increase the ability of commercial entities to exploit, track and control people of colour.”

Final Thoughts

Ultimately, how much one worries about facial recognition technology comes down to a matter of trust. How much does a person trust the police, or Amazon, or any random individual who gets their hands on this software, to use the power it provides only “for the right reasons”?

This technology provides institutions with power, and when thinking about giving power to an organization or an institution, one of the first things to consider is the potential for abuse of that power. For facial recognition, specifically for law enforcement, that potential is quite large.

In an interview for this article, Frederic Lederer, William & Mary Law School Chancellor Professor and Director of the Center for Legal & Court Technology, shared his perspective on the potential abuses facial recognition systems could facilitate in the U.S. legal system:

“Let’s imagine we run information through a facial recognition system, and it spits out 20 [possible suspects], and we had classified those possible individuals in probability terms. We know for a fact that the system is inaccurate and even under its best circumstances could still be dead wrong.

If what happens now is that the police use this as a mechanism for focusing on people and conducting proper investigation, I recognize the privacy objections, but it does seem to me to be a fairly reasonable use.

The problem is that police officers, law enforcement folks, are human beings. They are highly stressed and overworked human beings. And what little I know of reality in the field suggests that there is a large tendency to dump all but the one with the highest probability, and let’s go out and arrest him.”

Professor Lederer believes this is a dangerous idea, however:

“…since at minimum the way the system operates, it may be effectively impossible for the person to avoid what happens in the system until and unless… there is ultimately a conviction.”

Lederer explains that the Bill of Rights guarantees individuals the right to a “speedy trial.” However, court interpretations have borne out that arrested individuals will spend at least a year in jail before the courts even think about a speedy trial.

Add to that plea bargaining:

“…Now, and I don’t have the numbers, it is not uncommon for an individual in jail pending trial to be offered the following deal: ‘plead guilty, and we’ll see you’re sentenced to the time you’ve already been [in jail] in pre-trial, and you can walk home tomorrow.’ It takes an awful lot of guts for an individual to say ‘No, I’m innocent, and I’m going to stay here as long as is necessary.’

So if, in fact, we arrest the wrong person, unless there is painfully obvious evidence that the person is not the right person, we are quite likely to have individuals who are going to serve long periods of time pending trial, and a fair number of them may well plead guilty just to get out of the process.

So when you start thinking about facial recognition error, you can’t look at it in isolation. You have to ask: ‘How will real people deal with this information and to what extent does this correlate with everything else that happens?’ And at that point, there’s some really good concerns.”

As Lederer pointed out, these abuses already happen in the system, but facial recognition systems could exacerbate them and increase their frequency. The technology can perpetuate pre-existing biases and systemic failings, and even if its potential benefits are enticing, the potential harm is too present and real to ignore.

Of the facial recognition use cases explored here, the closest thing to a “safe” one is ID verification. However, there are plenty of equally effective ID verification methods, some of which use biometrics like fingerprints.

In reality, there might not be any “safe” use case for facial recognition technology. Any advancements in the field will inevitably aid surveillance and control functions that have been core to the technology from its very beginning.

For now, Lederer said he hasn’t come to any firm conclusions as to whether the technology should be banned. But he and privacy advocates like Hartzog will continue to watch how it’s used.

Read Next: What’s Next for Ethical AI?