"Evidence-based policymaking" is the latest trend in expert government. The appeal is obvious: who, after all, could be against evidence? The whole thing is worth a read, and his basic criticism, that proponents of government intervention are asymmetric in their approach to evidence, is reasonable. But hammering on that bad behavior distracts from the other legitimate points he makes.
Most EBP initiatives seem eminently sensible, testing a plausible policy under conditions that should provide meaningful information about its effectiveness. So it is not surprising to see bipartisan support for the general idea. Speaker of the House Paul Ryan and Senator Patty Murray even collaborated on the creation of an Evidence-Based Policymaking Commission that has won praise from both the Urban Institute and the Heritage Foundation.
But the perils of such an approach to lawmaking become clear in practice. Consider, for instance, the "universal basic income" campaign. Faced with the challenge of demonstrating that society will improve if government guarantees to every citizen a livable monthly stipend, basic-income proponents suggest an experiment: Give a group of people free money, give another group no money, and see what happens. Such experiments are underway from the Bay Area to Finland to Kenya to India.
No doubt many well-credentialed social scientists will be doing complex regression analysis for years, but in this case we can safely skip to the last page: People like free money better than no free money. Unfortunately, this inevitable result says next to nothing about whether the basic income is a good public policy.
The flaws most starkly apparent in the basic-income context pervade EBP generally, and its signature method of "controlled" experiments in particular. The standard critique of overreliance on pilot programs, which are difficult to replicate or scale, is relevant but only scratches the surface. Conceptually, the EBP approach typically compares an expensive new program to nothing, instead of to alternative uses of resources — in effect assuming that new resources are costless. It emphasizes immediate effects on program participants as the only relevant outcome, ignoring systemic and cultural effects as well as unintended consequences of government interventions. It places a premium on centralization at the expense of individual choice or local problem-solving.
Politics compounds the methodological shortcomings, imposing a peculiar asymmetry in which positive findings are lauded as an endorsement of government intervention while negative findings are dismissed as irrelevant — or as a basis for more aggressive intervention. Policies that reduce government, when considered at all, receive condemnation if they are anything other than totally painless. Throughout, the presence of evidence itself becomes an argument for empowering bureaucrats, as if the primary explanation for prior government failure was a lack of good information.
Cass's criticisms of Evidence-Based Policymaking (EBP) include (my wording):
Controlled tests are compared to the wrong baseline. They look to see whether there is any effect at all, not whether there is a material effect.
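This is essentially the statistician's distinction between statistical and practical significance. A minimal sketch (all numbers hypothetical): with a large enough sample, even a trivially small effect clears the conventional significance bar.

```python
import math
import random

random.seed(0)

# Hypothetical trial: control vs. treated outcomes on a 0-100 scale.
# The true treatment effect is a trivial +0.1 points.
n = 1_000_000
sd = 10.0
control = [random.gauss(50.0, sd) for _ in range(n)]
treated = [random.gauss(50.1, sd) for _ in range(n)]

diff = sum(treated) / n - sum(control) / n
se = math.sqrt(sd**2 / n + sd**2 / n)  # standard error of the difference
z = diff / se

print(f"difference in means: {diff:.3f}")  # tiny in practical terms
print(f"z-score: {z:.1f}")                 # comfortably "significant"
```

A z-score well above 3 would pass any conventional significance test, yet a 0.1-point gain on a 100-point scale is unlikely to justify a program's cost, which is the baseline that actually matters.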
EBP should compare the studied policy to alternatives rather than to the binary of effect versus no effect.
EBP studies rarely establish the measures of success in advance, so there is frequent goal-post shifting after the fact.
EBP studies are rarely structured to allow trade-off decisions. Since there are always multiple outcomes, positive and negative, we cannot simply focus on the positives; for analysis purposes we have to find a way to offset the positives with the negatives. If you are building a dam to generate 10MW of energy, it is not enough to consider just the 10MW. You have to consider the lost farmland, the displaced people, the disrupted lives, etc.
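The dam example can be made concrete with a toy ledger (all figures invented, in millions of dollars per year): the headline benefit and the netted value can point in opposite directions.

```python
# Hypothetical trade-off ledger for the dam example ($M per year).
# Counting only the headline benefit overstates the case; the policy
# comparison needs the netted value.
benefits = {"10 MW of generation": 8.0}
costs = {
    "lost farmland output": 3.0,
    "resettlement of displaced residents": 4.0,
    "disrupted fisheries and livelihoods": 2.5,
}

gross_benefit = sum(benefits.values())
total_cost = sum(costs.values())
net_benefit = gross_benefit - total_cost

print(f"gross benefit: {gross_benefit:+.1f}")  # +8.0: looks attractive
print(f"net benefit:   {net_benefit:+.1f}")    # -1.5: the opposite verdict
```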
EBP studies are rarely structured to give us insight into alternatives. What we really want to know is: "Is the cost of the desired outcome less than the cost of any alternative policy?" Bjorn Lomborg is especially trenchant on this issue, as illustrated in this video.
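The alternatives question is a cost-effectiveness ranking. A minimal sketch with invented policies and numbers: the cheapest way to buy a unit of the desired outcome, not whether any single policy "works," is the decision-relevant comparison.

```python
# Hypothetical policies: total cost ($M) and units of the desired
# outcome each delivers. Rank by cost per unit of outcome.
policies = {
    "policy A": {"cost_m": 120.0, "outcome_units": 400},
    "policy B": {"cost_m": 90.0,  "outcome_units": 450},
    "policy C": {"cost_m": 200.0, "outcome_units": 500},
}

ranked = sorted(policies.items(),
                key=lambda kv: kv[1]["cost_m"] / kv[1]["outcome_units"])

for name, p in ranked:
    cost_per_unit = p["cost_m"] / p["outcome_units"]
    print(f"{name}: ${cost_per_unit:.2f}M per unit of outcome")
```

Here policy C delivers the most outcome in absolute terms, yet policy B buys each unit for the least money, which is the comparison an isolated trial of C would never surface.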
When conducting EBP studies, negative results are as meaningful as positive results, and yet advocates tend to bury negative results and keep conducting tests until they get the result they want.
Advocates also tend to be asymmetric in their criticism. If the result runs against their expectations, the criticism targets test procedures, sample size, participant selection, uniqueness of circumstances, etc. None of these critical filters comes into play when the result is consistent with their desires.
EBP studies fail to distinguish between the act of redistributing "free" money and the allocation of those resources to a particular use. Everyone likes free money; no one likes being forced to spend their own money, even when it is intended for their own good.
EBP studies often focus on process measures (the number of people with health coverage) rather than outcome measures (improvement in health).
EBP studies typically omit second- and third-order effects, particularly over time.
Small-scale EBP studies mistake the map for the terrain. Small-scale trials can work by free-riding on other societal structures, then fail at scale because they overwhelm those structures.
Policy interventions usually entail multiple complex systems interacting with one another. EBP studies frequently fail to account for dynamic contexts and complex inputs. With all the moving parts, it becomes impossible to disentangle which actions contributed, and to what degree.
More broadly, EBP studies fail to address the full complexity of the system-environments in which they operate.
EBP studies often fail to establish causal direction. Do middle-class people A) have high home ownership because of their middle-class behaviors, or B) does owning a home cause people to acquire middle-class behaviors? The causal direction is critical to know before adopting policies.
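The home-ownership ambiguity can be seen in a toy simulation (entirely hypothetical data): opposite causal models, behaviors driving ownership and ownership driving behaviors, both produce a solid observational correlation, so cross-sectional evidence alone cannot tell us which policy lever to pull.

```python
import random

random.seed(1)
n = 10_000

def correlation(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Model A: a "middle-class behavior" score causes home ownership.
beh_a = [random.gauss(0, 1) for _ in range(n)]
own_a = [1 if b + random.gauss(0, 1) > 0 else 0 for b in beh_a]

# Model B: home ownership (assigned independently here) causes the score.
own_b = [random.randint(0, 1) for _ in range(n)]
beh_b = [o + random.gauss(0, 1) for o in own_b]

print(round(correlation(beh_a, own_a), 2))  # positive correlation
print(round(correlation(beh_b, own_b), 2))  # positive correlation too
```

Both models print a clearly positive behavior-ownership correlation; only a design that intervenes on one variable (e.g., randomly assigning the housing subsidy) can separate A from B.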