Friday, December 30, 2022

Privacy for me and mine but not for thee - the authoritarian's dream

Woof.  This is an astonishing argument made in The New York Times.  From "The Signal App and the Danger of Privacy at All Costs" by Reid Blackman.  Blackman is essentially arguing against free speech and privacy for citizens.  He argues that the government should always be able to monitor speech and that it is somehow unethical for citizens to use services which do not capture and sell their data.

Blackman believes "that a technology free of corporate and government control" is a truly bad thing. 

This authoritarian set of views comes from "an adviser to government and corporations on digital ethics."  You really have to look out for the ethicists.  They are the worst sinners.

There is a certain tone to the whole piece.  

The company — an L.L.C. that is governed by a nonprofit — is founded on the belief that it needs to combat what it calls “state corporate surveillance” of our online activities in defense of an uncompromisable value: individual privacy. Distrustful of government and large corporations and apparently persuaded that they are irredeemable, technologists look for workarounds.

"Apparently persuaded that they are irredeemable" - Well, why wouldn't they be given what we have discovered about the actions and deceits over the past decade?

When Blackman finally gets down to making an argument for his position, the quality of the argument is as weak as the position is bad.

This level of privacy can be beneficial on a number of fronts. For instance, Signal is used by journalists to communicate with confidential sources. But it is no coincidence that criminals have also used this government-evading technology. 

We should ignore that Blackman thinks privacy for journalists and members of the establishment is fine.  It's just not for the plebeians.

Pointing to criminals' use of a technology is the age-old Association Fallacy.

An association fallacy is an informal inductive fallacy of the hasty-generalization or red-herring type and which asserts, by irrelevant association and often by appeal to emotion, that qualities of one thing are inherently qualities of another. Two types of association fallacies are sometimes referred to as guilt by association and honor by association.

Further, Blackman is being nonsensical.  Every technology which improves the ability to communicate can be and is used by both good and bad people doing good and bad things.  He makes no claim that this technology is only or uniquely being used by criminals.

Next we have the old Appeal to Emotion fallacy.

The ethical universe, according to Signal, is simple: The privacy of individuals must be respected above all else, come what may. If terrorists or child abusers or other criminals use the app, or one like it, to coordinate activities or share child sexual abuse imagery behind impenetrable closed doors, that’s a shame — but privacy is all that matters.

I don't know if his characterization of Signal's ethical worldview is accurate but it doesn't matter.  That's not relevant.  Blackman is trying to force a sotto voce argument that only bad people want privacy.  He is setting up the argument that anything that shields you from commercial exploitation by private companies or from surveillance by government is ipso facto evil.  He doesn't want to be explicit that that is his argument but that is what it comes down to.  Nonsense.

Then there is this eye-popping assertion.

What’s more, the company’s proposition that if anyone has access to data, then many unauthorized people probably will have access to that data is false. 

Not misguided, not unlikely.  False!

Blackman is way out over his skis here.  Our world is filled with safeguards and securities around privacy and data which are routinely abused.  The most recent outrage in the headlines is that current models of Roombas have cameras.  Supposedly the captured images were secure and private.  Until the images started showing up on the internet.

And let's not get started with the government.  I am not sure I am aware of a single intrusive surveillance technology which hasn't been abused by our government or its agents.  Blackman is either telling a falsehood or putting heavy reliance on weasel words, distinguishing "a few" people from "many" people.

Blackman offers:

There are some people who have access to the nuclear launch codes, but “Mission Impossible” movies aside, we’re not particularly worried about a slippery slope leading to lots of unauthorized people having access to those codes.

Again, he is being a weasel.  It doesn't matter whether people are particularly worried.  The issue is whether the launch codes are secure.  Given that he is making this argument, you would think he might have at least Googled "Have nuclear launch codes ever been breached?"  If he had, he would be aware of numerous incidents and concerns that have occurred in recent years.

Blackman then takes an interesting turn.

I am drawing attention to Signal, but there’s a bigger issue here: Small groups of technologists are developing and deploying applications of their technologies for explicitly ideological reasons, with those ideologies baked into the technologies. To use those technologies is to use a tool that comes with an ethical or political bent.

Hmm.  Every tool has an impact on something, not just on ethics or politics.  What I think he is saying is that using tools which provide privacy has consequences.  And what is the first consequence about which he is concerned?  It might hurt the big technology behemoths which harvest personal data.

Signal is pushing against businesses like Meta that turn users of their social media platforms into the product by selling user data.

Hmmm.  Blackman is holding Meta out as an ethical organization that will be undermined by the privacy afforded by Signal.  Well... that's a novel argument.

And back to the Association Fallacy.

But Signal embeds within itself a rather extreme conception of privacy, and scaling its technology is scaling its ideology. Signal’s users may not be the product, but they are the witting or unwitting advocates of the moral views of the 40 or so people who operate Signal.

Blackman is arguing that you should not avail yourself of the privacy of the Signal product because then you will be an advocate for the 40 people who designed it.  Again, plain nonsense.  No one argues that by buying a GM car you carry the burden of all the executives and employees involved in the production of that car.  No one is making the case that because Apple uses prison labor in China, no one should buy Apple products.  There are moral decisions involved, to be made by individuals based on a balance of benefits and ills, but nothing like the argument Blackman is making.

Blackman is so far down the rabbit hole, it is hard to see where he is headed.

There’s something somewhat sneaky in all this (though I don’t think the owners of Signal intend to be sneaky). 

A company that wants to provide private communication to its customers is sneaky?  This is just getting weird.  

All of a sudden Blackman is making the old Marxist false consciousness argument.

Usually advocates know that they’re advocates. They engage in some level of deliberation and reach the conclusion that a set of beliefs is for them.

There is more blather, culminating with:

So I am not convinced we are really getting more freedom and “for the people by the people” by way of our technology overlords. Instead, we have a technologically driven shift of power to ideological individuals and organizations whose lack of appreciation for moral nuance and good governance puts us all at risk.

Reading this, it is hard not to feel that Blackman is simply upset because Signal is producing a product, desired by citizens, that allows them privacy and secure communication, makes it harder for his government clients to abuse their citizens, and undermines the commercial viability of his corporate clients.

How on earth did this case study in failed Rhetoric 101 even get past the NYT editors?
