Over the past decade, facial recognition technology has become an important—and controversial—part of the fabric of society. It has become so common now that many major smartphones feature face unlock functionality, which uses front-facing cameras to scan users’ faces to grant access to the device. The market for this technology is projected to more than double by 2024.

Yet, for all its advances, facial recognition technology—created by training computer vision algorithms on massive datasets of photographs of faces—might have a critical shortcoming: only being able to “see” two genders.

New research by Jed Brubaker, Jacob Paul, and Morgan Klaus Scheuerman (lead author) in the University of Colorado Boulder’s Information Science department reveals that many major facial recognition services misclassify the gender of trans and non-binary people.

“We found that facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders,” said Scheuerman, who is also a PhD student in Information Science at CU Boulder, in a statement. “While there are many different types of people out there, these systems have an extremely limited view of what gender looks like.”
[snip]
On average, these systems correctly identified cisgender women 98.3 percent of the time, and cisgender men 97.6 percent of the time.
Trans men, however, were incorrectly identified as women in up to 38 percent of instances. More troublingly, those who identified as agender, genderqueer or non-binary were misclassified 100 percent of the time because these gender identities have not been built into the algorithms.
Oh dear. There are two sexes, and the software is 98% accurate in determining whether an individual belongs to one or the other. I am unfamiliar with the particular algorithms, but I suspect this is a reasonably straightforward application of statistics to empirical measures derived from the photos, i.e. the probability that a person with a square jawline is male, the probability that a person with a brow line of given dimensions is male, etc.
As a more common example: in the U.S. population, about 14.5 percent of men are six feet tall or over, while roughly 1 percent of women are.

Very roughly, then, only about 7% of people six feet or taller are female. If you have a photo of a six-foot person, and height is your only measure, you will be 93% accurate if your forecast is that the person is male. Start adding in weight estimates, BMI, waist-to-hip ratios, facial feature measures, etc., and you can see that, while complex, weighted estimations are likely to be reasonably accurate at forecasting birth sex.
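To make the arithmetic concrete, here is a minimal sketch of the Bayes calculation, using the rough height figures above and a purely hypothetical second feature to show how additional measures would combine under a naive independence assumption:

```python
# A rough Bayes calculation for the height example above, assuming the
# population is split roughly 50/50 between the two sexes.
p_male = 0.5
p_female = 0.5

# Approximate likelihoods quoted above: P(six feet or taller | sex)
p_tall_given_male = 0.145
p_tall_given_female = 0.01

# P(female | six feet or taller) via Bayes' rule
p_tall = p_tall_given_male * p_male + p_tall_given_female * p_female
p_female_given_tall = p_tall_given_female * p_female / p_tall
print(f"P(female | tall) = {p_female_given_tall:.1%}")       # roughly 6-7%
print(f"P(male | tall)   = {1 - p_female_given_tall:.1%}")   # roughly 93-94%

# Add a second, purely hypothetical feature (say, a jawline measurement)
# under a naive independence assumption: the likelihoods simply multiply.
p_feat_given_male = 0.60      # hypothetical illustrative numbers
p_feat_given_female = 0.20
num = p_tall_given_male * p_feat_given_male * p_male
den = num + p_tall_given_female * p_feat_given_female * p_female
print(f"P(male | tall and feature) = {num / den:.1%}")       # close to 98%
```

With the illustrative second feature added, the forecast climbs from roughly 93% to nearly 98%, which is exactly the kind of accuracy the study reports for cisgender subjects.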
The article is built on a sleight of hand, moving from empirical data to opinions. Software can only deal with empirical data, matching a new case against the statistical measures of a known population.
Gender is not an empirical matter but a matter of opinion. People choose to identify as one of any number of permutations of identity between the two poles of the sex binary. This is not to disavow or denigrate that choice; it is to clarify the distinction between an identity choice and an empirical measure of sex.
Since only perhaps 1% of the population identify as trans, agender, genderqueer or non-binary, there probably is a challenge in getting enough data to establish valid statistical probabilities. Separate from that, it is not uncommon for personal self-identities to evolve over time. As far as I am aware, there are few empirical attributes that uniquely mark gender self-identity, in contrast with birth sex. Absent such distinguishing empirical measures, the software would not be able to provide a useful estimate.
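A back-of-the-envelope sketch of the data problem, using the rough 1 percent figure above and a hypothetical dataset size (both numbers are assumptions for illustration only):

```python
import math

dataset_size = 100_000      # hypothetical number of labeled face photos
minority_share = 0.01       # rough share identifying outside the binary
n_minority = int(dataset_size * minority_share)
print(f"Examples available for the minority classes: {n_minority}")

# Width of a 95% confidence interval around an accuracy estimate measured
# on n examples (normal approximation to the binomial, worst case p = 0.5).
def ci_half_width(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"+/- {ci_half_width(n_minority):.1%} at n = {n_minority}")
print(f"+/- {ci_half_width(dataset_size):.1%} at n = {dataset_size}")
```

The point is simply that estimates built on a thousand examples are far noisier than those built on a hundred thousand, quite apart from whether the underlying attribute is measurable from a photo at all.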
It would be like using software to distinguish Democrats from Republicans based on a passport photo. It is possible that there are real measurable differences between the two based on crown-to-chin measurements, facial width, etc., but I suspect not. Even if a few are real, I suspect the effect size is small, all of which would make the accuracy of any forecast very low.
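One way to put a rough number on that intuition: if a single measure is normally distributed in both groups with equal spread and the groups are equally common, the best achievable accuracy from that measure is Φ(d/2), where d is the standardized difference between the group means (Cohen's d). A small sketch with illustrative values of d:

```python
from statistics import NormalDist

# Best achievable accuracy when separating two equally common groups on one
# normally distributed measure, as a function of the effect size d.
phi = NormalDist().cdf
for d in (0.1, 0.2, 0.5, 1.0, 2.0, 4.0):    # illustrative effect sizes
    print(f"d = {d:>3}: best possible accuracy ~ {phi(d / 2):.1%}")
```

Small effect sizes (d around 0.2) top out near 54% accuracy, barely better than a coin flip; getting near the 98% figures quoted above requires separations on the order of d = 4, which the combination of many sex-linked measures can plausibly approach, but which passport-photo differences between Democrats and Republicans almost certainly cannot.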
Unless I am missing something, this seems to be an ideological complaint rather than a real issue. I am unaware of any facial recognition software that claims it can identify gender identity with accuracy. Damiani seems to be faulting facial recognition software for performing poorly on a function it was not designed to address and that, given empirical reality, probably cannot be addressed.
Damiani finishes the article with multiple claims that this inability to accurately forecast gender identities is serious and dangerous:
At a minimum, this miscategorization has the potential to result in social discomfort and exclusion, reinforcing stereotypes that serve to “other” those who do not identify with the traditional gender binary.
“When you walk down the street you might look at someone and presume that you know what their gender is, but that is a really quaint idea from the ‘90s and it is not what the world is like anymore,” Brubaker said. “As our vision and our cultural understanding of what gender is has evolved, the algorithms driving our technological future have not. That’s deeply problematic.”
But it also carries far more serious implications in the ways that this technology is being deployed in practice. Scheuerman points out that discrepancies in identification can lead to problems being allowed through airport security.
The introduction of "problematic" is a tell. It is only used around normative discussions, not empirical discussions. Someone may choose to assess software as "problematic" for failing to accurately forecast personal self-identity. Everyone else can acknowledge the astonishing accuracy of using empirical data to accurately forecast birth sex 98% of the time.
Ideally, news articles deal with the empirical. Through dint of hard effort and deep ideological conviction, much news is now simply a product of useless or misleading opinions. Hence the industry's plunging valuations.