Friday, May 19, 2017

Quis custodiet ipsos custodes? - Fact-checking the fact-checkers

Fact-checking of political statements has been big business in recent years, with Politifact at the Tampa Bay Times and Fact Checker at the Washington Post being among the more significant. I enjoyed reading them when they first came out. In principle, they serve a clear need.

In practice, I think they have been a bust, even before all the media-generated hysteria about fake news. My experience has been that they are nearly as flawed as the original statement maker. The fact-checking is frequently colored by unstated a priori assumptions, and the fact-checkers struggle with contingent arguments. I quit reading them when I began to notice how often they were fact-checking opinions and forecasts. A politician who says "I will deliver 1 million new jobs in six months" has not made a statement of fact. It is her opinion and her forecast. You can critique the opinion or assign probabilities to the forecast, but you cannot deem it a true or false fact.

Another weakness that soured me on the mainstream media fact-checking was how tendentiously partisan it could be. A politician for Party X might claim that a policy had improved the lives of millions, and the fact-checker (of Party Y) would deem it false because it had only improved the lives of 1.95 million, and to count as "millions" it would have to have been at least 2 million. Pedantic nit-picking as well as partisan.

While I set no great store by the fact-checkers for these and other flaws, I had never seen an objective measurement of their effectiveness. Until now. Thank you, Ms. Lim: Checking How Fact-checkers Check by Chloe Lim.

This is the first word, not the last, but it looks reasonable. And it certainly comports with my anecdotal experience. From the abstract:
Fact-checking has gained prominence as a reformist movement to revitalize truth-seeking ideals in journalism. While fact-checkers are often assumed to code facts accurately, no studies have formally assessed fact-checkers’ performance. I evaluate the performance of two major online fact-checkers, PolitiFact at Tampa Bay Times and Fact Checker at Washington Post, comparing their interrater reliability using a method that is regularly utilized across the social sciences. I show that fact-checkers rarely fact-check the same statement, and when they do, there is little agreement in their ratings. Approximately 1 in 10 statements is fact-checked by both fact-checking outlets, and among claims that both outlets check, their factual ratings have a Cohen’s κ of 0.52, an agreement rate much lower than what is acceptable for social scientific coding. The results suggest that difficulties in fact-checking elites’ statements may limit the ability of journalistic fact-checking to hold politicians accountable.
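For readers unfamiliar with the statistic, Cohen's κ measures how much two raters agree beyond what chance alone would produce: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed rate of agreement and p_e is the agreement expected by chance from each rater's overall rating frequencies. A minimal Python sketch of the computation follows; the two rating lists are invented purely for illustration and are not data from Lim's paper.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters rating the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the chance agreement implied by each rater's marginal frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the two ratings match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's frequency for every category.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings of the same ten claims by two outlets (made up for
# illustration only).
politifact = ["True", "False", "Half True", "True", "False",
              "True", "Half True", "False", "True", "Half True"]
fact_checker = ["True", "Half True", "Half True", "True", "False",
                "Half True", "False", "False", "True", "True"]

print(round(cohens_kappa(politifact, fact_checker), 2))  # ~0.39 for this toy data

The point of the adjustment for chance is that two raters who agree 60% of the time on a three-category scale are much less impressive than the raw percentage suggests, which is why a κ of 0.52 counts as weak agreement by social science coding standards.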
Just to check that Lim's findings and my impression are reasonably consistent with any other research that might be out there, I turned to Google Search. It is pretty revealing when the autofill response to entering "Politifact " is "Politifact bias".

I found an article with some reasonably detailed analysis and egregious examples of partisan fact-checking, Running The Data On PolitiFact Shows Bias Against Conservatives by Matt Shapiro. One striking example is when Politifact deemed Ron Paul's statement that there was no income tax until 1913 as only Half True. It is, of course, simply True that our current income tax regime was initiated in 1913, when the 16th Amendment was ratified authorizing a federal income tax. Politifact manages to work itself around to a Half True rating by pointing out that there had been a temporary income tax during the Civil War and a first attempt at a peacetime federal income tax in 1894, which was overturned by the courts as unconstitutional. Paul's statement was, on the face of it, True, but because he was a Republican, Politifact worked hard to find a justification to deem it Half True.

Why impute a partisan animus to Politifact? Because when Politifact rated a Democrat, Jim Webb, on the same claim three years later, they rated it Mostly True. For the same claim: Half True if from a Republican, Mostly True if from a Democrat. When the discrepancy was pointed out, they issued a correction so that both claims were calibrated down to Half True. Nonetheless, it is a crystal-clear example that the party of the claimant makes a difference in how Politifact rates a claim.

Shapiro's article is actually pretty rich in all sorts of examples of duplicity. Pretty entertaining.

UPDATE: I should have thought of the old Roman adage: De gustibus non est disputandum. In matters of taste (opinion), there can be no disputes.
