Sunday, August 19, 2018

Translating accurate observations into possibly useful information

I am intrigued by:

The book is Psychology in Crisis by Brian Hughes.

Degen is highlighting some of the symptoms Hughes has identified. None of this is unfamiliar, and all of it fits with much of my research on logical errors and fallacies in personal and group decision-making. But there is something about the listing that is intriguing yet unsatisfying. All of it is true, and especially true for psychology, but it doesn't feel especially accessible.

Is there a way to repackage these observations so that a person, when confronted by a factual assertion, regardless of knowledge domain, might be able to assess the likely reliability of the assertion?

I am not sure that this is it, but here's a shot.

When confronted by a claim, the more of these factors that are present, the more suspect the assertion. Each factor is associated with high levels of failed replication, i.e. with the assertion turning out to be wrong.

System being studied

Are the effect sizes small?

Is it a complex system?

Is it a dynamic system?

Is the system cyclical?

Does the system evolve over time?

Is the system observed only via proxies rather than directly?

Does the system demonstrate Pareto effects and/or power laws?

Are the effects the product of multiple loosely coupled systems?

Study Characteristics

Does the assertion arise from a small study with a low number of participants?

Were the study participants selected non-randomly (e.g., a convenience sample)?

Did the participants have an incentive?

Was the study a one-time snapshot rather than longitudinal?

Did the study skip pre-registering its methodology?

Did the study omit multivariate analysis despite studying a multivariable system?

Were apples being compared to oranges?

Field Characteristics

Is the field ideologically/culturally homogeneous?

Does the field have few research teams?

Are there tight affiliative interrelationships between the different researchers?

Does the field have a tradition of low research transparency (methodologies and data rarely published)?

Is this a field where the incentive structure favors media mentions over accuracy?

Does this field have a winner-take-all reward structure?

Does this field have only a few stakeholders?

Does this field place a low value on process/methodological consistency?

Does this field invest little in replication?

Does this field have a high retraction rate on research?

The more affirmative responses to the above questions, the more likely it is that the assertion cannot be relied upon. Under the right circumstances, any one of these attributes can completely undermine an assertion; it does not take many in combination to sink it entirely.

When someone makes an assertion, running through this checklist gives you a ballpark estimate of its reliability. Even if you do not know the particulars, you can arrive at a Fermi estimate that puts you within an order of magnitude.
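The checklist above can be sketched as a simple red-flag tally. The factor names below paraphrase the post's questions, and the scoring rule (fraction of affirmative answers) is a hypothetical illustration for the Fermi-estimate idea, not a weighting the post prescribes:

```python
# Sketch: estimate how suspect an assertion is by counting checklist
# red flags. Factor names paraphrase the post's questions; the
# "fraction of yes answers" scoring rule is an assumption for
# illustration only.

CHECKLIST = {
    "system": [
        "small effect sizes",
        "complex system",
        "dynamic system",
        "cyclical system",
        "system evolves over time",
        "observed only via proxies",
        "Pareto effects / power laws",
        "multiple loosely coupled systems",
    ],
    "study": [
        "small number of participants",
        "non-random selection",
        "participants had an incentive",
        "snapshot rather than longitudinal",
        "not pre-registered",
        "no multivariate analysis",
        "apples compared to oranges",
    ],
    "field": [
        "ideologically homogeneous",
        "few research teams",
        "tight affiliative ties",
        "low research transparency",
        "media mentions over accuracy",
        "winner-take-all rewards",
        "few stakeholders",
        "low value on methodological consistency",
        "little investment in replication",
        "high retraction rate",
    ],
}

def red_flag_fraction(flags):
    """Return the fraction of checklist items answered 'yes' (0.0-1.0).

    `flags` is an iterable of factor strings drawn from CHECKLIST.
    A higher fraction means the assertion is more suspect.
    """
    all_factors = {f for group in CHECKLIST.values() for f in group}
    unknown = set(flags) - all_factors
    if unknown:
        raise ValueError(f"unrecognized factors: {sorted(unknown)}")
    return len(set(flags)) / len(all_factors)

# Example: a small, non-pre-registered snapshot study in a
# homogeneous field trips four of the 25 factors.
score = red_flag_fraction([
    "small number of participants",
    "snapshot rather than longitudinal",
    "not pre-registered",
    "ideologically homogeneous",
])
print(f"{score:.2f}")  # 4 of 25 factors flagged
```

The fraction is only a crude aggregate; as the post notes, a single factor can sink an assertion on its own, so a low score is necessary but not sufficient for reliability.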
