Racial Pseudoscience in the Age of AI
Can a machine determine your personality by looking at your face? Probably not. But contemporary misunderstandings about machine learning and genetics are heralding the return of an outdated — and biased — science.
Look at any app store and you’ll find them: images of smiling people, scan lines running across their faces, and an app that promises to tell you something about yourself in exchange for a selfie.
The idea that you can read a person’s character by studying their face — not just fleeting expressions of emotion, but unchanging traits like the length of a forehead or width of a chin — has been a pseudo-scientific pursuit since the late 1500s, when Italian scholar Giambattista della Porta published a book whose thesis was, quite literally, “people who look like bears probably act like bears.”
The connection has lingered in our language — “high-brow” culture refers to the sophisticated tastes of those with large, aristocratic foreheads, while “low-brow” refers to the language of imaginary neanderthal-browed brutes. We still remark with surprise that someone “didn’t look like” a terrorist or a serial killer. While there is plenty of data conveyed through changes in facial expression — an averted glance, that short intake of air that suggests you’re about to speak — there’s not much to learn from the shape of the face alone.
It’s notable that della Porta was an alchemist, a field of robed scientists huddled over flames to produce gold from worthless materials. The desire to transform a face into a map feels like alchemy too: a holy grail of divination that promises to simplify the messiness of human particularity. If we could crack the code of the face, then maybe gold would follow: we’d know who to trust, who to hire, who to prosecute, and somebody would certainly get rich.
On the Nose
The idea was revived as “physiognomy” in the 18th century by the Swiss pastor and poet Johann Kaspar Lavater, who pursued it after noticing that the shape of his nose was similar to that of two of his friends. He later extended his theory along spiritual lines, inferring that humankind was once able to communicate complex ideas through the face alone (as angels, he noted, still do). Only after our expulsion from Eden were our human faces hardened, Lavater said, and after that we needed language, which introduced our ability to mask our true selves, manipulate, and lie.
Lavater compiled the first biased dataset: a book of drawings to show the connections between faces and observed traits. If you wanted to know if you could trust the new aristocrat in town, you could look up his nose or chin and judge accordingly.
In the centuries that followed, physiognomy was applied mostly to racial characteristics, linking whatever wasn’t a white-European-aristocrat trait to unsavory behaviors and personalities. Noses were compared to birds, and the personality of the bird was applied to the person. Almond-shaped eyes were linked to untrustworthiness. The shape of one’s head was used to determine if you were a criminal, a bad mother, a liar, or a cheat. Special sections of books outlined the reason why Africans, Asians, and the poor were inferior to the aristocratic European, giving rise to centuries of racist shorthand for the physical features of non-whites.
While physiognomy faded from mainstream science shortly after Lavater — Henry VIII had banned the practice as quackery centuries earlier — it has been persistent enough to never quite disappear. Today this discredited science, built on the idea of correlating vast amounts of data, is coming back, hidden among the promises of machine learning and image-recognition technology. Throw enough faces into a hard drive, and it’s bound to find something.
Here’s a question nobody needed answered: can a machine look at your face and figure out if you’re gay? In 2017, a research study used machine learning to do just that. The result described faces in a way that might have made the physiognomists proud.
"Gay faces tended to be gender atypical," the researchers write. "Gay men had narrower jaws and longer noses, while lesbians had larger jaws."
The study was discredited in 2018: rather than determining sexual orientation from faces, it’s likely the algorithm was reading other elements of the photographs it scanned. The photos were all pulled from social media and online dating profiles of gay men and women, which have their own aesthetic culture around presentation and appearance. Do you wear a beard? Do you use eye shadow? Do you trim your eyebrows?
Turns out these aesthetic choices often vary based on whether you’re gay or straight, and that’s what the machine was really identifying. When the model was applied to a truly random set of photographs, its accuracy plummeted.
This is a textbook case of over-interpreting research results: the original paper suggested that because gay men tended not to have beards, under-exposure to androgens in the womb must have “feminized” them — ignoring the existence of bearded gay men among older generations. What the researchers linked to “genetics” was in fact an emergent cultural norm around presentation: a preference for clean-shaven faces among younger gay men on dating apps. It labeled a narrow and momentary fashion trend as an immutable fact of homosexuality.
Selfies Are Not Destiny
Drawing meaningful data from your social media photos is significantly easier than drawing meaningful data from an isolated and expressionless photograph of your face. A study at ETH Zurich showed that you could make accurate predictions about personality by looking at 10 “likes” on the platform — much more efficient than looking at images alone. But pictures helped.
Another US study found clear links between photo choices and certain personality traits — because users are already doing a lot of the work when they choose an image. We self-select pictures that we feel best represent us, so highly social people might choose a profile picture that shows them smiling more often than antisocial people do. That’s an analysis of image choice — not facial characteristics.
None of this has stopped an alchemical gold-rush to find the perfect formula for turning faces into million-dollar apps. A slew of computer science research and startups are exploring models for extrapolating meaningful correlations between personality and the shape of your head.
The Stanford professor behind the homosexuality-detection study, Michal Kosinski (who co-authored it with Yilun Wang), has suggested that political ideology is also genetically hard-wired and that machines could be built to detect it. To his credit, he has used this thesis as a warning against future abuse of the technology. But others see opportunities.
The Future of the Face?
The Israeli startup Faception made headlines a few years ago with its claim that it could have caught 9 of the 11 terrorists in the 2015 Paris attacks by scanning their facial features. This wasn’t about facial recognition — the startup isn’t selling an extensive archive of suspected terrorists or criminal mugshots. Instead, they’re suggesting that the crime could have been predicted by looking at the shape of the men’s heads (or other facial characteristics — the company hasn’t made its algorithms public).
“Our personality is determined by our DNA and reflected in our face,” its CEO, Shai Gilboa, told the Washington Post. “It’s a kind of signal.”
Without access to their data, it’s hard to verify these claims, but a review of scientific literature seems to disagree. A study of 60,000 human genomes found correlations between genes and the “Big 5” personality traits (extraversion, neuroticism, agreeableness, conscientiousness, and openness to experience). But there’s no evidence that those genes are responsible for the formation of facial features.
The science that does show connections tends to be a little more complicated: for example, one study showed that testosterone levels in men with masculine faces tend to spike more after a competition than testosterone levels in men with less masculine faces. The researchers think that could be a result of testosterone spikes in youth sparking changes to physical structures of the face. But they note that other results showed testosterone spikes regardless of whether the kids were winning at sports or video games or games of chance.
Facemetrics, from Poland and Belarus, is in the business of predicting the “impressions” that could be made from your facial structure. But their app, FaceMe, promises to reveal your Myers-Briggs personality type exclusively through an analysis of your facial features. The app’s advertisement shows a man labeled as a “Virtuoso” and his corresponding Myers-Briggs type, as well as the option to test your relationship compatibility (presumably by comparing your face and the faces of your loved ones). How the app does this (and what it does with the data) is anyone’s guess. But if it’s like similar services, the app is probably reading into expressions and presentation — not facial structure.
The problem with the new physiognomy was laid out in a series of excellent blog posts by Google AI researchers Margaret Mitchell, Alexander Todorov, and Blaise Agüera y Arcas: “Much of the ensuing scrutiny has focused on ethics, implicitly assuming that the science is valid.”
Humans are great at misunderstanding data, and the bias has been evident: Lavater’s belief that his friends were like him, and had similar noses, is exactly the kind of data that gets fed into poorly conceived data science projects today. It doesn’t help that we already view “genetics” through a lens of race: studies have shown that white people tend to see their achievements as independent of their genes, while viewing negative traits of other races as inherent and genetic.
Computerizing bad data and misinterpreting or misrepresenting the results is, of course, entirely possible in the age of smart machines. Machine learning is what it sounds like: simplified, it’s a process by which computers learn something through exposure to reams of data, testing a hypothesis, and then predicting outcomes. When the computer gets good at predicting an outcome, we assume the model is pretty good. But a computer can’t get good at something that isn’t possible in the first place.
If there is no correlation between the face and personality, the machines that show these results are likely “learning” something else: for example, the changing cultural role of beards for gay men on dating apps. Extrapolating that out so broadly as to say that gay men can’t grow beards because of genetics is a huge mismatch, the result of misunderstanding what the machine has learned.
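This failure mode is easy to reproduce in miniature. Below is a hypothetical sketch in plain Python — the feature names, numbers, and 90% correlation are all invented for illustration and come from no actual study. A logistic-regression classifier is trained on synthetic data where the “facial structure” features are pure noise and only a “presentation” feature (beard vs. no beard) tracks the label. The model scores well, but only because it reads the confound; on a decorrelated set, its accuracy collapses toward chance.

```python
import math
import random

random.seed(0)

def make_dataset(n, confounded):
    """Synthetic samples: two 'facial structure' features that are pure
    noise, plus one 'presentation' feature (beard vs. no beard)."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        jaw = random.gauss(0, 1)    # noise: carries no signal
        nose = random.gauss(0, 1)   # noise: carries no signal
        if confounded:
            # Presentation tracks the label 90% of the time, the way
            # grooming conventions tracked orientation on dating sites.
            grooming = label if random.random() < 0.9 else 1 - label
        else:
            # A "truly random" photo set: the convention is gone.
            grooming = random.randint(0, 1)
        data.append(([jaw, nose, float(grooming)], label))
    return data

def train(data, epochs=30, lr=0.1):
    """Logistic regression fit by stochastic gradient descent."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            z = max(min(sum(wi * xi for wi, xi in zip(w, x)) + b, 30), -30)
            grad = 1 / (1 + math.exp(-z)) - y
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

def accuracy(model, data):
    w, b = model
    hits = sum((sum(wi * xi for wi, xi in zip(w, x)) + b > 0) == (y == 1)
               for x, y in data)
    return hits / len(data)

model = train(make_dataset(2000, confounded=True))
print(accuracy(model, make_dataset(1000, confounded=True)))   # high — it reads the confound
print(accuracy(model, make_dataset(1000, confounded=False)))  # collapses toward chance
```

Nothing in the trained model “knows” anything about jaws or noses; inspect the learned weights and nearly all of the signal sits on the grooming feature. The classifier is accurate for the wrong reason, which is exactly what a held-out, decorrelated test set reveals.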
Other studies rely on a faulty relationship between perception and fact: for example, surveys that show faces, ask people to evaluate those faces, and then present the results as evidence that certain features are linked to certain personality patterns. That kind of study only tells us how a face is perceived by others, a system open to all kinds of bias.
We are right to worry about the merging of machine learning and physiognomy — not because algorithms will be able to discover a correlation that doesn’t exist, but because people could start to believe such correlations are possible.
The possibilities of merging meaningful genetics research with powerful machine learning tools are incredible. The risk of misunderstanding our genes to rehabilitate outdated, racial ideas about “blood” — linking racial traits to human capacities to learn, achieve, and grow — is devastating.
“It seems that there was a longing for fixed patterns at a time of unrest.”
That line comes from Wolfgang Brückle’s essay on the work of August Sander, a German photographer of the Weimar era. Sander had set himself the task of documenting the faces of Germans — postal workers, SS officers, farmers, aristocrats alike — in a published compendium of faces matched to type, “People of the 20th Century.” Faces were attached not to names but to jobs and status. The result was shared with the world as a form of visual research: to help the reader discover similarities between the faces of bricklayers, fraternity students, Bohemians, and other “types.”
The cultural critic Walter Benjamin described Sander’s book in a small review, praising the importance of this kind of work. “Sudden shifts of power such as are now overdue in our society can make the ability to read facial types a matter of vital importance. Whether one is of the Left or the Right, one will have to get used to being looked at in terms of one’s provenance. And one will have to look at others the same way. Sander’s work is more than a picture book. It is a training manual.”
It’s appropriate to consider the sudden shifts in our own power, and the collection and automation of our personal data into massive online corrals: not only our faces, but our words, our online interactions, our movements, compiled and reduced into “types,” sorted into targets for advertising.
Lavater, that father of physiognomy, collected sketches of faces and presented them as data. Sander used photographs. Today, the technology that moves us forward is digital photography, correlated to millions of social metrics. It’s a new kind of portrait, and new kinds of portraits seem to create new opportunities to revive bad science about our appearances.
Today, someone, somewhere, is assembling a similar catalog of human faces, tracking them to social media engagement, purchases, and career data. The result will be a much more comprehensive digital catalog — call it “Faces of the 21st Century” — assembled by machines. This is an era of datafication. We seem to find comfort in patterns, and we turn to data as much as we’re turned into data.
It’s telling that these companies are selling apps that promise to analyze our face, to turn our visage into data we can use. To twist Walter Benjamin: If we are looked at as data points, we might be starting to look at others in the same way.
Written by Eryk Salvaggio
Eryk Salvaggio is a researcher and content strategist at swissnex San Francisco. As a writer for swissnex San Francisco's publication, nextrends, Eryk is focused on insights that emerge at the intersections of science, art, and technology.
The opinions expressed in nextrends are those of the individual authors and interviewees; they do not reflect the position of nextrends or swissnex San Francisco.