Technology may yet help us overturn ideological idiocy. The idea that software might encode biases (gender and race being the favored focus) is common in postmodernist critical theory circles. Since they already believe that racism and misogyny are inherent in any power structure associated with Western civilization, it is a small leap to conclude that the same must be true of software. But it is all a shallow and self-contradictory ideological belief system.
They are desperate to find evidence in code that software does indeed mirror the evil racist and misogynistic bias they are already convinced, without evidence, exists. So they go looking for it.
Lawyers for Eric Loomis stood before the Supreme Court of Wisconsin in April 2016, and argued that their client had experienced a uniquely 21st-century abridgment of his rights: Mr. Loomis had been discriminated against by a computer algorithm.

It is easy to dismiss this foolishness as just so much idiotic ideological garbage. Because, well, it is.
Three years prior, Mr. Loomis was found guilty of attempting to flee police and operating a vehicle without the owner’s consent. During sentencing, the judge consulted COMPAS (aka Correctional Offender Management Profiling for Alternative Sanctions), a popular software system from a company called Equivant. It considers factors including indications a person abuses drugs, whether or not they have family support, and age at first arrest, with the intent to determine how likely someone is to commit a crime again.
The sentencing guidelines didn’t require the judge to impose a prison sentence. But COMPAS said Mr. Loomis was likely to be a repeat offender, and the judge gave him six years.
An algorithm is just a set of instructions for how to accomplish a task. They range from simple computer programs, defined and implemented by humans, to far more complex artificial-intelligence systems, trained on terabytes of data. Either way, human bias is part of their programming. Facial recognition systems, for instance, are trained on millions of faces, but if those training databases aren’t sufficiently diverse, they are less accurate at identifying faces with skin colors they’ve seen less frequently. Experts fear that could lead to police forces disproportionately targeting innocent people who are already under suspicion solely by virtue of their appearance.
As Mims notes, an algorithm is just a set of instructions. Software has no emotions to bias it; software has instructions.
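To make that concrete, here is a deliberately simple risk-score sketch in Python. It is emphatically not Equivant's actual COMPAS model, which is proprietary; the factor names and weights are invented purely for illustration. The point is only that such a score is deterministic arithmetic applied to inputs: the same inputs always produce the same output.

```python
# Hypothetical illustration only -- NOT the proprietary COMPAS model.
# Factor names and weights are invented to show that a risk score is
# nothing more than deterministic arithmetic applied to its inputs.

def recidivism_risk_score(age_at_first_arrest: int,
                          drug_abuse_indicators: int,
                          has_family_support: bool) -> float:
    """Return a toy risk score between 0.0 and 1.0 (higher = riskier)."""
    score = 0.0
    # Younger age at first arrest pushes the score up (assumed weighting).
    score += max(0, 25 - age_at_first_arrest) * 0.03
    # Each drug-abuse indicator adds a fixed increment (assumed weighting).
    score += drug_abuse_indicators * 0.10
    # Family support pulls the score down (assumed weighting).
    if has_family_support:
        score -= 0.15
    return min(1.0, max(0.0, score))

if __name__ == "__main__":
    # No emotions, no intent -- only instructions.
    print(recidivism_risk_score(age_at_first_arrest=17,
                                drug_abuse_indicators=2,
                                has_family_support=False))
```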
COMPAS has become the subject of fierce debate and rigorous analysis by journalists at ProPublica and researchers at Stanford, Harvard and Carnegie Mellon, among others—even Equivant itself. The results are often frustratingly inconclusive. No matter how much we know about the algorithms that control our lives, making them “fair” may be difficult or even impossible. Yet as biased as algorithms can be, at least they can be consistent. With humans, biases can vary widely from one person to the next.

All of this arises owing to a hidden rhetorical sleight of hand.
As governments and businesses look to algorithms to increase consistency, save money or just manage complicated processes, our reliance on them is starting to worry politicians, activists and technology researchers. The aspects of society that computers are often used to facilitate have a history of abuse and bias: who gets the job, who benefits from government services, who is offered the best interest rates and, of course, who goes to jail.
Classical Liberals (of the Age of Enlightenment) broadly define "fair" as "the minimum number of rules necessary, applied equally to all." They believe in universal rights, in the rule of law, and in equality before the law. Oh, and consent of the governed.
Postmodernist social justice theory Jacobins define "fair" as "equal outcomes." Consequently, they are always on the lookout for disparate impact, particularly by race or gender but also, intermittently, by orientation, by religion, by immigration status, or by some other, usually arbitrary, identity. Their operating assumption is that if there is disparate impact, there must be intentional and/or malicious discrimination. Of course, this is arrant nonsense, as anyone with even the most rudimentary awareness of statistics, logic, or rhetoric knows.
And it smacks of a back-door approach to centralized statist economies under the control of the few at the center.
This approach of defining unfair as an outcome which is not equally distributed (i.e., disparate impact) runs smack up against the well-known and long-known issue that correlation does not prove causation. That applying Rule X to a heterogeneous population yields a non-representative outcome in no way requires that there be any discrimination, conscious or unconscious.
A requirement that a candidate for a job be able to lift 100 pounds, because that is the nature of the job, will inherently yield more young, healthy men than one might expect from their simple statistical representation in the population. Not because it is discriminating in favor of men but because the natural distribution of the required trait is uneven across the population.
The correct denominator for detecting illicit discrimination is not the entire population but the pertinent population, i.e., those who can lift 100 pounds. If the pool of people with the desired trait is 90% young, healthy males and your recruitment process has yielded 90% young, healthy males, then you probably have little reason to be concerned that inappropriate discrimination is occurring, even though young, healthy males might be only 15% of the population at large.
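A small worked example, using the hypothetical figures from the paragraph above, shows how much the choice of denominator matters:

```python
# Worked example of the denominator argument, using the hypothetical
# figures from the text (15%, 90%, 90%).

share_yhm_in_population = 0.15    # young, healthy males: 15% of the population at large
share_yhm_among_qualified = 0.90  # ...but 90% of those who can lift 100 pounds
share_yhm_among_hires = 0.90      # ...and 90% of the people actually recruited

# Naive (wrong) comparison: hires vs. the entire population.
print(f"Hires vs. whole population: {share_yhm_among_hires:.0%} vs. "
      f"{share_yhm_in_population:.0%} -> looks like a 6x over-representation")

# Correct comparison: hires vs. the pertinent population (those who can lift 100 lb).
print(f"Hires vs. qualified pool:   {share_yhm_among_hires:.0%} vs. "
      f"{share_yhm_among_qualified:.0%} -> no evidence of biased selection")
```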
The critical theory approach of defining fair as equal outcomes also runs smack against the fact that every act of selection or prioritization (which unavoidably occurs when there are constraints of time, money, talent, or some other limiting factor) is inherently an act of discrimination. And desirably so. We cannot do everything all the time. We have to choose because we are resource constrained.
As long as the criteria for selection are pertinent, as best we can tell, to the actual requirements of the job, then we would expect there to be a disparate impact because abilities, and desires, and interests, and rewards, are never equally distributed.
Disparate impact is no proxy for inappropriate discrimination because virtually all systems will have disparate impacts.
Which brings us back to the mind-numbing assumption that software can and will be inherently racist, misogynist or whatever pet bias you want to find.
Software, as Mims points out, can have some inappropriate disparate impact that arises from unrepresentative data samples on which AI systems are trained, or from unawareness on the part of programmers. When these instances occur, they are almost immediately addressed; they are recognized as flaws in the development process.
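A minimal sketch of how such a flaw is typically caught during development: score the model on a labelled test set, compute accuracy separately for each group, and flag large gaps. The records and the tolerance below are invented for illustration, not drawn from any real system.

```python
# Minimal sketch of a per-group accuracy audit on a labelled test set.
# The records and the 0.05 accuracy-gap tolerance are invented for illustration.
from collections import defaultdict

# Each record: (group label, true label, model prediction)
test_results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in test_results:
    total[group] += 1
    correct[group] += int(truth == prediction)

accuracy = {group: correct[group] / total[group] for group in total}
print(accuracy)  # e.g. {'group_a': 0.75, 'group_b': 0.5}

# Flag the model if accuracy differs across groups by more than the tolerance:
# the usual fix is more representative training data, not an accusation of malice.
if max(accuracy.values()) - min(accuracy.values()) > 0.05:
    print("Accuracy gap detected: revisit the training data for under-represented groups.")
```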
And sometimes the disparity is deliberate. Much software in the West is initially developed around English-language capability; it discriminates against non-English speakers at first. But once the concept is proved out, then, as with almost all consumer product life cycles, what was limited to a select part of the market becomes available to all at lower price points.
But what about the unconscious digital biases? For argument's sake, my basic premise is that there aren't any. People believe there are and people are searching for them but . . .
Determining what biases an algorithm has is very difficult; measuring the potential harm done by a biased algorithm is even harder.

I liken this to the consequences of increased digital policing, pioneered in New York twenty or more years ago.
Thirty years ago, it was at least plausible for radicals to claim that police were an instrument of suppression, deployed in black neighborhoods to suppress citizens and unjustifiably incarcerate them.
While that might have occurred in some places at some times in the past, it has become increasingly rare in modern times. And now digital policing is bringing clarity.
Most major cities now have an increasingly digitally integrated policing system: everyone is encouraged to call 911 about all suspicious activity; many cities have growing numbers of license-plate readers (LPRs) and CCTV cameras; many have acoustic systems for detecting the time and location of gunfire. On top of that, all police data is used to place patrols and dispatch police to the most consequential crimes. Murder beats assault; assault beats robbery; robbery beats burglary; burglary beats larceny; larceny beats suspicious-character reports, and so on. The prioritization and weighting of crime categories can be tinkered with, but generally these algorithms are discussed and determined by consensus.
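As a rough sketch of that kind of severity weighting (the weights and call data here are invented, not any department's actual configuration):

```python
# Rough sketch of severity-weighted dispatch; the weights are invented.
# Real systems also weigh location, unit availability, call age, and more.
import heapq

# Higher weight = more serious. The ordering mirrors the text; the numbers are made up.
SEVERITY = {
    "murder": 100,
    "assault": 80,
    "robbery": 60,
    "burglary": 40,
    "larceny": 20,
    "suspicious_person": 10,
}

def dispatch_order(calls):
    """Return call IDs ordered from most to least serious."""
    heap = [(-SEVERITY[kind], call_id) for call_id, kind in calls]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

if __name__ == "__main__":
    calls = [("c1", "larceny"), ("c2", "assault"),
             ("c3", "suspicious_person"), ("c4", "robbery")]
    print(dispatch_order(calls))  # ['c2', 'c4', 'c1', 'c3']
```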
And what do these independent, non-biased systems yield? A disparate impact. Police are dispatched to poorer parts of town more often than to richer ones, and poorer parts tend to have a higher percentage of African Americans or illegal immigrants. The police aren't there because residents are poor and/or black. They are there because that is where the most serious crime is occurring. And it is desirable that they should be there under those circumstances.
If the police are understaffed, that means whiter, richer neighborhoods are underserved and don't see many police even when they want them. And it is hard to argue that this is an inappropriate bias (read: algorithm). If we want to reduce crime, go where the crime is. It is a moral extra that those are also the areas where the poorest, who are least able to absorb the costs of crime, live.
Social justice Jacobins want there to be inappropriate policing, but digitization ensures that police resources are dispatched where they are most needed, rather than based on ignorant assumptions or malice.
There is no bias in software because there is no bias to be exercised. Disparate impacts are to be expected because reality is disparately distributed. And software is making reality more apparent.
And indeed, Mims does not present any example where there has been a clear-cut case of actual racism.
Loomis and his claim of software discrimination?
For Mr. Loomis, the sentencing algorithm did get an audit—in the form of his appeal to the state’s supreme court. His lawyers, the prosecutors and their expert witnesses debated the merits of his sentence. Ultimately, the court decided his sentence was in line with what a human judge would have handed down without help from an algorithm.

Disparate choices drive disparate outcomes, undermining the postmodernist article of faith that all people with power are seeking to exploit the powerless and that all variances are caused by hatred of group identities.
He’s due to be released this year.