From "Epistemology in a World of Fake Data" by Richard Hanania. The subheading is "On not trusting social science and still avoiding epistemological nihilism."
He is pessimistic about the quality of information produced by academia and our scientific institutions, and he is right to be concerned. At both a pragmatic and a theoretical level, we have gained much knowledge in the past two or three decades that cautions us about the quality of purported research and the degree of confidence we ought to vest in its findings.
This is especially true when NGOs, governments, and the mainstream media have a strong but naive inclination to trumpet findings that are uncertain or unreliable, and to squawk slogans ("Stand with the facts," "Follow the Science," "Believe the Science," etc.) that have no philosophical or real-world foundation.
The whole column is worth a read. I am interested in this example.
One problem with social science research is that it is often difficult to know how generalizable the results of any particular study are. Let’s say that you want to find out whether immigration causes more or less support for economic redistribution. The researcher has to make a countless number of decisions, such as which years to investigate; which measures of immigration to rely on; what measures of support for redistribution to look at; which countries or locations to investigate; and at what level of granularity to consider geographical or political units.

A paper by Breznau et al. (2022) goes one step further and asks what happens if you take many of the most important decisions out of the hands of researchers. Even under such very limited conditions, can quantitative, non-experimental social science help tell us much of anything?

Breznau et al. recruited what ended up being 73 research teams, and gave them data from the International Social Survey Program, which included questions about the appropriate role of government in regulating the economy and redistributing wealth. They also provided them with yearly data on each country’s inflow of migrants, and their stock of immigrants as a percentage of the population. All of this data covered survey waves in the years 1985, 1990, 1996, 2006, and 2016. The teams were to rely on the migration and survey data provided, but were allowed to seek out other sources of information to include in their models. They could, for example, control for year, use dummy variables for things like region, or introduce other independent variables that they thought mattered. Here were the results.
Across all studies, there were 1,261 models used. No two were identical, and the authors identified 166 distinct research choices associated with them. 58% of models found no effect of immigration on attitudes towards redistribution, about 25% found that migrants reduced support, and 17% found a relationship in the opposite direction.

Once again, the research teams were all given the exact same data. In real life, analysts can pick and choose whatever numbers they want. Instead of European nations in 1985, etc., one might look at American states or counties in the 2010s. One may choose to look at voting patterns instead of responses to survey questions. Decisions like this are made well before we get to the point where we can observe what happened in this study, where the researchers had what are usually the most fundamental choices facing a project made for them already. Moreover, the different results of the various teams could not be explained by things like the prior beliefs of researchers or how much expertise they had in statistics. The errors were mostly random and unexplained. Someone might’ve eaten something for lunch that made them want to control for belief in god, for example, while others did not.

So nobody knows the effect of immigration on support for redistributionist policies across European countries in the years considered in Breznau et al.
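To make concrete what "many analysts, one dataset" means in practice, here is a minimal sketch of a multiverse-style analysis in Python. The variable names, the simulated data, and the specific modeling choices are all hypothetical illustrations, not the actual Breznau et al. setup; the point is only that looping over defensible specification choices on the same data can change both the size and the sign of an estimated effect.

```python
# Toy "multiverse" analysis: one dataset, several defensible model
# specifications, and a tally of how often the estimated immigration
# coefficient comes out negative, positive, or indistinguishable from zero.
# All names and data are hypothetical, not taken from Breznau et al.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated country-year data: immigration has NO direct effect on support
# for redistribution, but it is correlated with a confounder (gdp) that does.
immigration = rng.normal(size=n)
gdp = 0.8 * immigration + rng.normal(size=n)
region = rng.integers(0, 4, size=n)
support = -0.5 * gdp + rng.normal(size=n)

def ols(y, X):
    """OLS with an intercept; returns coefficients and standard errors."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

tally = {"negative": 0, "positive": 0, "null": 0}
# Three binary researcher choices -> eight specifications.
for use_gdp, use_region, trim in itertools.product([False, True], repeat=3):
    # Choice 3: drop the top decile of immigration observations as "outliers"?
    keep = immigration < np.quantile(immigration, 0.9) if trim else np.ones(n, bool)
    cols = [immigration[keep]]
    if use_gdp:      # Choice 1: control for the confounder?
        cols.append(gdp[keep])
    if use_region:   # Choice 2: region dummies (one region omitted)?
        for r in range(1, 4):
            cols.append((region[keep] == r).astype(float))
    beta, se = ols(support[keep], np.column_stack(cols))
    t = beta[1] / se[1]  # t-statistic on the immigration coefficient
    if abs(t) < 2:
        tally["null"] += 1
    elif t < 0:
        tally["negative"] += 1
    else:
        tally["positive"] += 1

# Specs omitting the gdp confounder tend to find a spurious negative effect.
print(tally)
```

Every branch of that loop is a choice a reasonable team could defend, and nothing in the data announces which branch is correct. That, in miniature, is what the 73 teams ran into.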
An important question. The answer would be consequential. Multiple teams of bright researchers. And no answer.
I am, or try to be, an empirical rationalist. I know I fall well short of that ideal, but at least I know what I am striving for, I know that I fail, and I usually have a reasonably good idea why. And all that is fine. It is reality, and it is the cost of learning.
Over the past couple of decades, however, I have become increasingly convinced of the importance of a priori factors such as worldview/religion, culture, class, and social norms. Not because these provide certifiable answers (though they often do), but because by accepting them you move the ball down the epistemic field much farther and faster than you can by constructing a new epistemic model from the ground up.
There is just a ton of knowledge we cannot acquire fast enough or widely enough. Trying to understand evolving, power-law-driven, loosely coupled, dynamic, complex, chaotic systems such as the climate, the economy, society, politics, or financial markets is a mug's game. You need predicate beliefs against which to test suppositions.
The "science" of social sciences are an example of the failure accumulating from the limits of empirical rationalism.
I suspect there is a great deal to be said for the leap of faith that comes from embracing worldview/religion, culture, class, and social norms, whether the ones into which you are born or the ones you choose.
Embrace them, and then refine them through rational empiricism, the scientific method, and Socratic questioning.