Sunday, August 22, 2021

Ideological obsessions distort the interpretation of empirical data

An interesting phenomenon obscured by ideological obsessions.  From "These Algorithms Look at X-Rays—and Somehow Detect Your Race" by Tom Simonite.

The phenomenon is that machine learning can pick up patterns undetected by humans.  The challenge is that the AI can turn out to be accurate in its forecasting for reasons no one understands.  That is what has happened here.

Millions of dollars are being spent to develop artificial intelligence software that reads x-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don’t look for on such scans: a patient’s race.

The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren’t sure what cues the algorithms they created use to predict a person’s race.

Evidence that algorithms can read race from a person’s medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms. The images included patients who identified as Black, white, and Asian. For each type of scan, the researchers trained algorithms using images labeled with a patient’s self-reported race. Then they challenged the algorithms to predict the race of patients in different, unlabeled images.

Radiologists don’t generally consider a person’s racial identity—which is not a biological category—to be visible on scans that look beneath the skin. Yet the algorithms somehow proved capable of accurately detecting it for all three racial groups, and across different views of the body.

For most types of scan, the algorithms could correctly identify which of two images was from a Black person more than 90 percent of the time. Even the worst performing algorithm succeeded 80 percent of the time; the best was 99 percent correct. The results and associated code were posted online late last month by a group of more than 20 researchers with expertise in medicine and machine learning, but the study has not yet been peer reviewed.
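
The "which of two images" framing in that last paragraph is the usual plain-language reading of AUC: the probability that the model scores a randomly chosen image from one group above a randomly chosen image from the other.  A minimal sketch of that equivalence, using invented scores and labels rather than anything from the study:

```python
# Sketch of the "which of two images" metric: pairwise ranking accuracy
# equals ROC AUC.  Scores and labels below are invented for illustration,
# not taken from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Pretend classifier scores: 1 = image from the group of interest, 0 = not.
labels = rng.integers(0, 2, size=200)
scores = labels * 1.0 + rng.normal(0, 0.8, size=200)  # noisy but informative

# Brute-force pairwise accuracy: across every (positive, negative) pair,
# how often does the positive image get the higher score?
pos = scores[labels == 1]
neg = scores[labels == 0]
pairs = pos[:, None] - neg[None, :]
pairwise_acc = np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)

print("pairwise accuracy:", round(pairwise_acc, 3))
print("roc_auc_score:   ", round(roc_auc_score(labels, scores), 3))
# The two numbers match: AUC is exactly the chance of picking correctly
# when shown one image from each group.
```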

I wonder how precise this could get.  Broad racial type I can understand.  What about Mediterranean whites or Middle Eastern whites?  What about South Asians versus East Asians?  Khoi versus Bantu?

Regardless, this is potentially significant both from a medical perspective and from an AI perspective.  From the AI perspective, for the reasons mentioned above, i.e., the need to understand how the system makes its forecasts.

Medically it is valuable because some genetic conditions are more prevalent in certain races than in others.  The classic example is sickle cell anemia, which occurs among Africans to a much greater degree than among Asians and Europeans.

As is typical of the MSM, the author dwells on how AI software trained on x-rays could be a mechanism to exacerbate medical inequities based on a presumed systemic bias.  It is pretty weak speculation.  It even invokes the substantially discredited notion of racist priming.  It also delves into the mistaken notion that group variance is evidence of intentional or systemic discrimination.

After spending half the article discussing baseless Critical Race Theory/Social Justice Theory speculations, Simonite returns to a rather telling outcome.

Frustratingly, the authors of the new study could not figure out how exactly their models could so accurately detect a patient’s self-reported race. They say that will likely make it harder to pick up biases in such algorithms. 

Follow-on experiments showed that the algorithms were not making predictions based on particular patches of anatomy, or visual features that might be associated with race due to social and environmental factors such as body mass index or bone density. Nor did age, sex, or specific diagnoses that are associated with certain demographic groups appear to be functioning as clues.

The fact that algorithms trained on images from a hospital in one part of the US could accurately identify race in images from institutions in other regions appears to rule out the possibility that the software is picking up on factors unrelated to a patient’s body, says Yi, such as differences in imaging equipment or processes.
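
The "particular patches of anatomy" check mentioned above is worth pausing on.  One common way to run that kind of test (the article does not say exactly how the authors ran theirs) is an occlusion experiment: black out one region of the scan at a time and see how much the prediction moves.  A rough sketch, with a placeholder model and a random tensor standing in for an x-ray:

```python
# Occlusion-style check: one common way to test whether a prediction
# depends on a particular patch of the image.  The model and image here
# are stand-ins, not the study's actual model or data.
import torch
import torch.nn as nn

# Placeholder classifier: a tiny CNN producing one logit per image.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)
model.eval()

image = torch.randn(1, 1, 224, 224)   # stand-in for a chest x-ray
patch = 32                            # size of the occluding square

with torch.no_grad():
    baseline = model(image).item()
    drops = []
    for y in range(0, 224, patch):
        for x in range(0, 224, patch):
            occluded = image.clone()
            occluded[..., y:y+patch, x:x+patch] = 0.0  # black out one patch
            drops.append(abs(model(occluded).item() - baseline))

# If no single patch moves the output much, the signal the model uses is
# diffuse rather than tied to one region of anatomy.
print("max change from occluding one patch:", round(max(drops), 4))
print("mean change:", round(sum(drops) / len(drops), 4))
```

If every patch can be blacked out without much effect, the signal the model relies on is spread across the image rather than tied to any one anatomical feature, which is what the study reports.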

The best algorithm is 99 percent accurate, and even the worst succeeds 80 percent of the time.  The models work just as well on data provided from geographically and institutionally disparate sources.  
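
That cross-site result is essentially an external-validation split: fit the model on scans from one institution, score it on scans from another, and see whether performance survives the change in equipment and processing.  A toy version, with synthetic numbers standing in for image features and hypothetical sites:

```python
# Cross-site check: fit on scans from one institution, score on another.
# The data here is synthetic; sites and features are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def make_site(n, shift):
    """Fake 'image features' for one hospital; `shift` mimics different
    equipment and processing at that site."""
    labels = rng.integers(0, 2, size=n)
    features = rng.normal(0, 1, size=(n, 16)) + shift
    features[:, 0] += labels            # the signal tied to the patient
    return features, labels

X_a, y_a = make_site(500, shift=0.0)    # training hospital
X_b, y_b = make_site(500, shift=0.5)    # hospital in another region

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)

print("AUC at training site:", round(roc_auc_score(y_a, clf.decision_function(X_a)), 3))
print("AUC at external site:", round(roc_auc_score(y_b, clf.decision_function(X_b)), 3))
# If the model were keying on site-specific quirks (equipment, processing),
# the external-site AUC would collapse; if it holds up, the signal travels
# with the patient, not the scanner.
```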

And yet Simonite fixates on the fact that, useful accuracy notwithstanding, because the means by which the AI system recognizes the pattern is not understood, it cannot be examined to determine whether subjective charges of bias might be made against it.

