Eyes Without a Face: San Francisco Bans Facial Recognition

San Francisco has banned the use of facial recognition software by city agencies, the first major city to do so. We look at what’s driving fears about AI and surveillance in the city at the epicenter of tech.

You’re pumping gas in the suburbs on your way home from the office when you notice there’s nowhere to put your credit card. No cash payment system, either; the kiosk window at the center of the station is empty. A screen flickers, with a yellow oval outline superimposed over live video of your confused face.

“Welcome to Smile-and-Go,” says a cheery, pre-recorded voice through an overhead speaker. “Please center your face within the oval. We’ll scan your face and sign you up for a free account. Your first tank of gas is free. After today, you can use your face to pay for gasoline and even snacks at our network of Smile-and-Go stations.”

Shocked, you smile into the camera. Your phone vibrates, glowing with a text message: “Is this your phone? Click Y and your phone will be billed for future purchases.”

You stop for a moment, wondering why this company is willing to trade a whole tank of gas for access to your photo. There’s a long user-agreement text you can scroll through, written in unfriendly capitalized text you don’t have time to read. You press Y. The fuel line stiffens with a click. You press the handle and the fuel flows: your face is now your credit card.

If it were all that controversial, you think to yourself — though maybe you should know better — someone would have passed a law. Or you’d have heard about boycotts on the news. In the end, you trust it because you trusted it. You return the nozzle to the pump and drive off, leaving your face behind.

This is a vision of the future of commerce, one that’s built by blending technology to photograph, scan, and cross-reference your face across vast streams of data. Powerful artificial intelligence algorithms would be able to spot you as you go about your day, pulling your information from social media accounts, or maybe even a smart doorbell — no need for keys, just smile. The possibilities of this world are wide open: get your packages delivered direct to your refrigerator or car by authorizing the face of your delivery person in advance. Check in to your flight, movie, or hospital just by looking at a screen. Buy a hamburger in Pasadena with the flash of your teeth. The tech is even being promoted as a solution to deadly violence in schools.

But what happens to your face when it is permanently attached to data about your spending, travel, and lifestyle? What if your face was flashed in front of police officers every time someone resembling you was seen at the site of a crime? What if something about you — your race, your gender, or the color of your eyes, hair, or skin — singled you out in the database more than anyone else?

With San Francisco making headlines as the first major city in the United States to ban facial recognition software for municipal agencies, we’re exploring why this technology is so appealing to governments and corporations, and why San Francisco — the epicenter of global tech — is hesitant to embrace it.

Every Picture Tells a Story

San Francisco’s facial-recognition bill is the result of growing unease about the ways it could be abused by both police and social services. At the center of the debate — though not the only issue — is the question of reliability.

In tests, facial recognition software has falsely matched 28 members of Congress with mugshots of people who had been arrested. And the AI that drives these image scans is notoriously skewed by race. Humans tend to believe that a computer can’t be racist — but datasets are assembled by humans, and implicit biases can make their way into the data humans collect.

For example, self-driving cars fail to detect black pedestrians at a higher rate than white ones. The reason is baked into how the technology works. To teach an AI image recognition, engineers feed it a library of photographs, which it scans by the millions to “learn” the common features of a face: a nose, eyes, ears, and so on. The software also learns distinctions based on human criteria, such as gender and race. If the photos in the library are skewed toward one group over another — for example, white men — the software will be better at recognizing white male faces than those of other races and genders.
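The mechanism can be seen in a toy, seeded simulation (not any real product, and all numbers are invented): identities enrolled with fewer photos get noisier stored “templates,” and a simple nearest-template matcher misidentifies them more often.

```python
import random

random.seed(42)
DIM = 8  # size of the toy face-feature vector


def rand_center():
    # The "true" features of one identity.
    return [random.gauss(0, 1) for _ in range(DIM)]


def noisy_photo(center, sigma=1.0):
    # One photo = true features plus capture noise.
    return [c + random.gauss(0, sigma) for c in center]


def enroll(center, n_photos):
    # The stored template is the average of the enrollment photos,
    # so fewer photos means a noisier template.
    photos = [noisy_photo(center) for _ in range(n_photos)]
    return [sum(vals) / n_photos for vals in zip(*photos)]


def nearest(probe, templates):
    # Index of the template closest to the probe (squared distance).
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(templates)), key=lambda i: dist(probe, templates[i]))


def accuracy(photos_per_identity, n_identities=40, trials=300):
    # How often a fresh photo of a known identity is matched correctly.
    centers = [rand_center() for _ in range(n_identities)]
    templates = [enroll(c, photos_per_identity) for c in centers]
    hits = 0
    for _ in range(trials):
        i = random.randrange(n_identities)
        if nearest(noisy_photo(centers[i]), templates) == i:
            hits += 1
    return hits / trials


well_represented = accuracy(photos_per_identity=25)  # e.g. the majority group
under_represented = accuracy(photos_per_identity=1)  # e.g. the minority group
print(well_represented, under_represented)
```

Under these assumptions, the group enrolled with twenty-five photos per person is matched more reliably than the group enrolled with one — the same shape of disparity the studies above describe, arising purely from how much data each group contributed.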

San Francisco City Hall. In May 2019, San Francisco became the first major city to ban the use of facial recognition AI for policing and other municipal services.

Which is precisely what’s happened, over and over again. Image recognition is bad at recognizing darker-skinned women, and very good at recognizing white men. If you’re deciding who to arrest, or who receives subsidized groceries, based on image recognition, that could contribute to denial of services or police harassment along racial lines.

One research paper from MIT and the University of Toronto found that Amazon’s Rekognition software — more on this later — misidentified dark-skinned women 31% of the time. Imagine if police in your hometown were to systematically question innocent black women far more often than innocent white men, and you begin to see the problem.

This is part of the motivation behind the San Francisco bill, which states clearly that “the propensity for facial recognition technology to endanger civil rights and civil liberties substantially outweighs its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring.”

Despite the headlines, the bill doesn’t ban facial recognition surveillance projects outright: if government agencies in the city want to use the technology, they’ll need to submit it to an approval process that includes a period for public comment. The bill doesn’t stop businesses or private citizens from using facial recognition technology, only the San Francisco city government. If a landlord in the city wants to use a facial scan to give access to a building, or a grocery store wants to use a smile-scan at checkout, this bill doesn’t stop them.

Selfie Regulation?

There are piles of academic studies and investigations revealing how algorithms are skewed toward white men. Not only that, but humans have a tendency to trust the outcome of a machine process as if it were more objective than that of a human being, and to feel less personally responsible for the decisions a computer makes — what NYU New Media professor Clay Shirky has called “algorithmic authority.”

The result is a system that lets everyone off the hook for sustaining biases: when people in power rely on algorithms, they tend to believe computers aren’t racist, that the machine must be making good decisions, and that they don’t have the authority to question those decisions.

This environment is a perfect storm for civil rights abuses, and it’s why companies such as Microsoft and Google refuse to sell their facial recognition software to police departments: the risk is still too high that a computer would falsely accuse human beings of criminal acts. (Microsoft does sell the tech to other groups, including prisons, while Google has held off on selling its tech to third parties altogether.)

On the other side of the spectrum is Amazon, which sells a facial-recognition technology called Rekognition. Amazon shareholders rejected a non-binding proposal to ban sales of the product to government agencies in May 2019, the same month that San Francisco passed its municipal ban. (Curiously, the proposal to ban these sales was brought by nuns from the Sisters of Saint Joseph, who own shares of the company.)

Rekognition, like most facial-recognition software, has been found to have problematic gaps in its matches — a 31% misrecognition rate for dark-skinned women — though Amazon says it has improved its dataset since that study was produced. While Amazon tells its police partners to disregard any result the system rates below 99% confidence, that safeguard isn’t built into the software: police can still see lower-confidence results, and act on them.
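The missing safeguard would be a small one. Here is a minimal sketch of a client-side filter that drops any candidate match below the 99% floor Amazon recommends — the match records and their field names are invented for illustration, not Rekognition’s real response format.

```python
RECOMMENDED_FLOOR = 99.0  # percent, per Amazon's guidance to police partners


def actionable_matches(matches, floor=RECOMMENDED_FLOOR):
    """Keep only candidate matches at or above the recommended confidence floor."""
    return [m for m in matches if m["confidence"] >= floor]


# Invented example data: three candidate matches from a face search.
candidates = [
    {"person_id": "A-113", "confidence": 99.7},
    {"person_id": "B-207", "confidence": 87.2},  # plausible-looking, but unreliable
    {"person_id": "C-054", "confidence": 41.9},
]

print(actionable_matches(candidates))  # only A-113 survives the filter
```

The article’s point is that nothing forces this filter to run: the two lower-confidence candidates above remain visible to an investigator unless someone chooses to discard them.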

Other companies, such as IC Realtime, offer AI-assisted tools for surveillance footage, categorizing every frame of the videos it records and allowing users to search the data. If you’re looking for someone in a red shirt at a street protest, for example, you could type in “red shirt” and find your friend (or suspect).
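The kind of searchable tagging described above can be sketched in a few lines: each frame of footage carries descriptive tags, and a text query returns the timestamps where a tag appears. The frames and tags here are invented for illustration.

```python
from collections import defaultdict


def build_index(tagged_frames):
    """Map each tag to the timestamps (in seconds) where it was seen."""
    index = defaultdict(list)
    for timestamp, tags in tagged_frames:
        for tag in tags:
            index[tag].append(timestamp)
    return index


# Invented footage: (timestamp, tags the AI assigned to that frame).
footage = [
    (0.0, {"crowd", "red shirt"}),
    (1.5, {"crowd", "banner"}),
    (3.0, {"red shirt", "banner"}),
]

index = build_index(footage)
print(index["red shirt"])  # -> [0.0, 3.0]
```

Once footage is indexed this way, finding everyone in a red shirt at a protest is a dictionary lookup — which is exactly what makes the tool so easy to abuse at scale.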

To discourage these kinds of abuses — such as documenting protesters or racially profiling individual customers — Amazon launched an online form for its own platform, where people can submit abuse reports if they suspect their rights were violated by inappropriate use of the software, a move some have criticized as misguided.

As Matt Cagle, technology and civil liberties attorney at the ACLU of Northern California, told Buzzfeed News: “It is absurd that Amazon seriously believes the solution to preventing rights violations from its face surveillance technology is an online form for people to report secret surveillance they have no way of knowing about.”

Caught with a Smile

Image recognition software has yielded incredible benefits, such as diagnosing breast cancer earlier and with greater accuracy than human doctors. But in the public sphere, people are concerned.

Amazon defends the use of facial recognition software through a series of compelling real-world applications, including its role in building tools for locating victims of human trafficking. Marinus Analytics uses Rekognition in a tool called Traffic Jam, which connects images from online advertisements for sex workers to images of missing persons. In one case study, they describe using this process to identify and track a ring of 21 trafficked women.

The airport security firm CLEAR uses facial recognition to offer expedited services at airports, with support from US Homeland Security, and aims to use its technology as an additional check against fake IDs used for underage smoking and drinking.

The US Department of Homeland Security has embraced facial recognition at the border, with a policy of scanning “100% of travelers” at the 20 busiest American airports by 2021. The technology is already being shared with airlines: JetBlue reports that facial recognition can now be used for check-in, drawing on data collected by the US Customs and Border Protection Agency.

Combine this with the reality that millions of our photographs are shared online — often by our own hands — and fed into giant databases, with software used by Facebook and Google to scan and identify who’s in your photos. There is simply no US law to stop these images from being recommended to law enforcement whenever someone who looks like you is filmed at the scene of a crime. Right now, many states and municipalities allow AI systems to access your driver’s license photo and compare your facial features to surveillance footage.

Civil libertarians argue that this can allow abuses to be automated in ways we wouldn’t allow in person: that we are, somehow, more comfortable with police running images of faces taken at a protest march against a criminal database, while we’d object to police stopping every single person at a march or political event and demanding ID. This is algorithmic abstraction, the dream that because a machine is making the analysis, it is objective and less harmful than when a human does the same thing. But as we’ve seen, the evidence shows this simply isn’t true.

What’s Next?

The gradual spread of biased facial recognition points to a broader question: even in a hypothetical world where every algorithmic bias was fixed, would we be comfortable handing our faces over to third parties to treat as just another data point? At stake is control over our faces amid the endless sea of information that has already been handed to companies and governments to simplify tasks such as unlocking our phones.

Without regulation, the scale and scope of abuse is limitless: think of services that scan surveillance cameras and report back to your employer if you’re spotted at the park on a sick day, or that flag your insurance company when you spend one too many nights at the bar.

A frame from the live video stream taken from Brainwash, a San Francisco laundromat, which was used to help train Chinese military AI to better recognize individuals in crowds.

Even third parties we trust today can change ownership or strategies, and the images we gave them for once-harmless purposes can be used in ways we may never know. That’s what happened when live-streamed footage of a San Francisco laundromat was used to train image-recognition software for the Chinese military — just one of many examples documented at megapixels.cc, which tracks the origin and use of image recognition datasets. Even under the new law, this kind of thing is perfectly legal.

While we may believe large companies would be slower to put your face to controversial uses, nothing in US law enshrines any measure of transparency about what happens to an image you use to pay for gas, unlock your home, or make a bank deposit. No government agency is responsible for verifying that these products work for people of all genders and races, or for addressing the biases built into facial recognition software.

And of course, the San Francisco bill is the first of its kind in the country, meaning that the vast majority of US cities could still augment their police work with software that recognizes individuals through an embedded lens of racial biases.

It can be tricky to research these biases in ways that offer useful information to regulators. The software often operates as a black box, producing output that even its creators can’t immediately explain. Diversifying image sources is also hard: assembling millions of images to train software is tedious, and so is verifying the ethnic diversity of a data set. There’s also the question of whether we even want this software to be better than it is — some suggest that certain forms of AI shouldn’t merely be fixed, but shouldn’t exist at all.

These challenges could be addressed through the development of responsible policy, regulation, and industry ethics. Europe’s GDPR addresses facial recognition directly, but it doesn’t apply to users who hand over their data willingly. Nonetheless, its emphasis on consent and transparency is a good place to start.

Right now, facial recognition in the US exists in a regulatory desert. San Francisco’s legislation is the first major law to be put on the books, with no regulation at the national level. In May 2019, the US Congress met for the first time to outline a strategy for the responsible use of facial recognition technology, with politicians from both parties seemingly united in addressing the issue. As of this writing, it’s too early to discern any result — but for a technology that poses a radical transformation of laws and norms around privacy, consent, data ownership, and criminal justice, it’s long overdue.

Written by Eryk Salvaggio
Eryk Salvaggio is a researcher and content strategist at swissnex San Francisco. As a writer for swissnex San Francisco's publication, nextrends, Eryk is focused on insights that emerge at the intersections of science, art, and technology.

The opinions expressed in nextrends are those of the individual authors and interviewees; they do not reflect the position of nextrends or swissnex San Francisco.