In order to study the automatic judgments that people make when assessing facial traits, scientists need faces to show their lab participants. Traditionally, researchers who study impressions have taken one of two approaches: they use real photographs that limit how much they can change the features of a particular face, or they use artificially generated faces that don’t look very real.
Scientists from Princeton, the University of Chicago, and the Stevens Institute of Technology offer a new approach. Combining deep generative image models with more than 1 million facial judgments of over 1,000 images, they have created face models that allow researchers to generate synthetic but photorealistic and demographically diverse faces that can be tuned along sets of perceived attributes, such as age or weight, and even evoke more subjective judgments such as perceived trustworthiness or perceived intelligence.
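The core idea of tuning a face along a perceived attribute can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes faces are represented as latent vectors of a generative model, uses synthetic ratings in place of the real judgment data, and fits a linear "attribute direction" by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each face is a 512-dim latent vector from a
# generative image model, and each has an average human rating
# (e.g., for perceived trustworthiness). Ratings here are simulated.
latents = rng.normal(size=(1000, 512))            # latent codes for 1,000 faces
true_direction = rng.normal(size=512)
true_direction /= np.linalg.norm(true_direction)
ratings = latents @ true_direction + rng.normal(scale=0.1, size=1000)

# Fit a linear "attribute direction" in latent space: the vector along
# which the rated attribute increases fastest.
direction, *_ = np.linalg.lstsq(latents, ratings, rcond=None)
direction /= np.linalg.norm(direction)

# "Tune" a face by moving its latent code along that direction, then
# feeding the shifted code back to the generator.
face = latents[0]
more = face + 2.0 * direction   # should render as judged higher on the attribute
less = face - 2.0 * direction   # should render as judged lower

# With enough rated faces, the fitted direction closely matches the
# direction that actually drove the simulated ratings.
print(float(direction @ true_direction))
```

The same fit can be run once per attribute (age, weight, trustworthiness, and so on), yielding one direction per perceived quality.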
“When we make an API (application programming interface) available to the scientific community, they’ll have a lot more power over the kinds of images they use. And it will open a whole new set of questions that were never possible to answer before,” says Chicago Booth postdoctoral scholar Stefan Uddenberg, a researcher on the project. The same technology could have commercial use too, as it could potentially allow photographers, ad agencies, and countless others to identify which of their face photos are likely to be considered trustworthy or smart, for example.
These models were born of the researchers’ frustration at never having enough faces or the right type of faces for study, Uddenberg says. While researchers frequently develop expensive new face databases, artificial faces don’t necessarily convince anyone that they’re real. “They look like bald heads on black backgrounds, like mannequin heads,” Uddenberg says. The goal was to create easily transformed images that look like actual photos.
By combining machine-learning methods with more than 1 million facial judgments on qualities such as trustworthiness and intelligence, the researchers created a generative model that allows for the creation of photorealistic faces that can be tuned according to people’s impressions of various qualities.
To make sure their face models do what they want them to, the researchers performed a series of validation studies involving about 1,000 people in all. Each participant was assigned to judge one of 10 perceived attributes—such as age or trustworthiness—and given 150 faces to judge. Fifty of the images were synthetic faces, while another 50 were created by manipulating unique face photographs with the researchers’ models; repeat images were included in the mix to test the raters’ reliability. Sure enough, the models transformed impressions of face photographs as predicted: faces made to look older (according to the models) did in fact look older to study participants, for instance.
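The logic of such a validation study can be sketched with simulated data. This is an illustrative assumption, not the study's actual analysis: it checks that a manipulation shifts mean ratings in the predicted direction, and that repeat images yield consistent ratings across presentations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 50 base faces, each also rendered with an "older"
# manipulation applied. Participants rate perceived age on a 1-9 scale.
n_faces = 50
base_age = rng.uniform(3, 6, size=n_faces)           # mean rating of originals
intended_shift = 1.5                                  # effect the model aims for
rated_base = base_age + rng.normal(scale=0.3, size=n_faces)
rated_manip = base_age + intended_shift + rng.normal(scale=0.3, size=n_faces)

# Did the manipulation move impressions in the predicted direction?
mean_shift = (rated_manip - rated_base).mean()
print(f"mean shift in perceived age: {mean_shift:.2f}")

# Repeat images test rater reliability: the same faces rated twice
# should get similar scores (high test-retest correlation).
repeat_1 = base_age + rng.normal(scale=0.3, size=n_faces)
repeat_2 = base_age + rng.normal(scale=0.3, size=n_faces)
reliability = np.corrcoef(repeat_1, repeat_2)[0, 1]
print(f"test-retest correlation: {reliability:.2f}")
```

A positive mean shift confirms the manipulation worked as intended, and a high test-retest correlation confirms that the raters themselves were consistent enough for the comparison to be meaningful.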
Uddenberg is careful to note that this research is modeling impressions, not defining what makes a person trustworthy, for instance, or smart. “What we’re saying is, this is what the American population in general thinks a trustworthy or smart person looks like,” he says.
The patterns demonstrate clear biases in judgments. For example, a male face that was thinner and older with a crooked smile tended to be perceived as intelligent, while a round, chubby face topped by a baseball cap was perceived as unintelligent. The (mostly white) participants deemed a face more familiar the larger its smile and the lighter its complexion.
Besides the model’s implications for the scientific community, Uddenberg says the researchers are exploring its commercial utilities, and he’s excited to see how it can and will be used. He gives the example of a wedding photographer who might take 100 pictures of a couple and could use the model to pull out photos that evoke just the right impression.
Joshua C. Peterson, Stefan Uddenberg, Thomas L. Griffiths, Alexander Todorov, and Jordan W. Suchow, “Deep Models of Superficial Face Judgments,” PNAS, April 2022.