Sunday, August 10, 2014

When we discover that our powers of persuasion are limited to those who were already predisposed to agree with us

Via Judith Curry's post comes Political psychology or politicized psychology? Is the road to scientific hell paved with good moral intentions? by Philip Tetlock. From Tetlock's paper:
What exactly is scientific hell? I use the concept to denote the complete collapse of our credibility as a science. We find ourselves in scientific hell when we discover that our powers of persuasion are limited to those who were already predisposed to agree with us (or when our claims to expertise are granted only by people who share our moral-political outlook). Thoughtful outsiders cease to look upon us as scientists and see us rather as political partisans of one stripe or another.

How do we fall into scientific hell? The principal temptation in political psychology -- the forbidden fruit -- is to permit our political passions to trump normal scientific standards of evidence and proof. Researchers sometimes feel so passionately about a cause that those passions influence key methodological and conceptual decisions in research programs. When journal reviewers, editors, and funding agencies feel the same way about the cause, they are less likely to detect and correct potential logical or methodological bias. As a result, political psychology becomes politicized.

It is one thing, however, to argue that values can easily influence inquiry and quite another to argue that values inevitably drive and determine the conclusions of inquiry. Value neutrality is an impossible ideal, but it still remains a useful benchmark for assessing our research performance. Indeed, the price of abandoning value neutrality as an ideal is prohibitively steep: nothing less, I believe, than our collective credibility as a science.

Do we seek scientific knowledge of causal relationships? Or do we seek to advance certain moral or political causes by stigmatizing groups with whom we disagree and applauding groups with whom we sympathize? These skeptics raise serious questions that merit serious responses. We should be candid about our motives as political psychologists. Very few of us, I suspect, are driven by purely epistemic motives or by purely partisan motives of policy advocacy. We are motivated, in part, by causal curiosity and in part by the desire to make the world a better place in which to live. And, being human, we don’t like to acknowledge that these goals occasionally conflict.

My own view is that epistemic and advocacy goals frequently collide. The most overt cases of politicization tend to occur when evidence of causality is particularly weak and the policy stakes are particularly high. It is understandable that political psychologists as citizens often lend their voices to one or another political cause; it is less understandable when political psychologists (consciously or unconsciously) bend normal scientific standards of evidence and proof to advance those same causes.
Tetlock's paper is twenty years old now, from 1994, and was in many ways prescient. According to a recent poll:
Only 36 percent of Americans reported having "a lot" of trust that information they get from scientists is accurate and reliable. Fifty-one percent said they trust that information only a little, and another 6 percent said they don't trust it at all.

Science journalists fared even worse in the poll. Only 12 percent of respondents said they had a lot of trust in journalists to get the facts right in their stories about scientific studies. Fifty-seven percent said they have a little bit of trust, while 26 percent said they don't trust journalists at all to accurately report on scientific studies.

What’s more, many Americans worry that the results of scientific studies are sometimes tainted by political ideology -- or by pressure from the studies’ corporate sponsors.

A whopping 78 percent of Americans think that information reported in scientific studies is often (34 percent) or sometimes (44 percent) influenced by political ideology, compared to only 18 percent who said that happens rarely (15 percent) or never (3 percent).
To be fair, abysmal as those numbers are, they aren't as bad as those for Congress (8%) or local politicians (14%), but that's a pretty low bar to set.

In some ways, this isn't unexpected. As we become a more complex and sophisticated society, there are two trends that almost necessarily work against the reputation of scientists. 1) All the easy knowledge problems have been solved; only the really complex issues remain. We are at the knowledge frontier, where everything is dominated by precise definitions, measures, risks, and uncertainty. At that frontier, scientific investigators are simply wrong more often. 2) In an increasingly connected world, it is harder to hide being wrong AND it is harder to hide motivated research (research that is intended to find a specific outcome).

But I think Tetlock's call for increased rigor is still relevant. Motivated research has brought many fields into disrepute. Motivated research intended to support specific advocacy agendas has likewise brought particular researchers and organizations into disrepute.

Ideally, all parties would start behaving in a fashion that warrants trust, and degrees of trust would rise overall. Perhaps some conjunction of trends will aid such an outcome. We are not there yet.
