In my consulting work on complex decision-making, one of the points we try to make again and again is that all decision-making processes result in error or failure; that is the nature of the beast, and a philosophical truism. Given that, how do we adjust our decision-making processes to anticipate, and thereby mitigate, the cost of inevitable but unpredictable error and failure?
A related and useful thing that faith tells you, if you take it seriously enough, is that the great majority of people who believe something on faith in fact believe falsehoods. Hence faith is insufficient for true belief. As the Nobel-Prize-winning biologist Peter Medawar said: "the intensity of the conviction that a hypothesis is true has no bearing on whether it is true or not."
You know that Medawar's advice holds for all ideas, not just scientific ones, and, by the same argument, for all the other diverse things that are held up as infallible (or probable) touchstones of truth: holy books; the evidence of the senses; statements about who is probably right; even true love.
It’s all about error. We used to think that there was a way to organize ourselves that would minimize errors. This is an infallibilist chimera that has been part of every tyranny since time immemorial, from the “divine right of kings” to centralized economic planning. And it is implemented by many patterns of thought that protect misconceptions in individual minds, making someone blind to evidence that he isn’t Napoleon, or making the scientific crank reinterpret peer review as a conspiracy to keep falsehoods in place.
Whether an idea was originally suggested to you by a passing hobo or by a physicist makes no difference to its truth: what matters is its content, not its source.
Popper's answer is: We can hope to detect and eliminate error if we set up traditions of criticism—substantive criticism, directed at the content of ideas, not their sources, and directed at whether they solve the problems that they purport to solve. Here is an apparent paradox, for a tradition is a set of ideas that stay the same, while criticism is an attempt to change ideas. But there is no contradiction. Our systems of checks and balances are steeped in traditions—such as freedom of speech and of the press, elections, and parliamentary procedures, the values behind concepts of contract and of tort—that survive not because they are deferred to but precisely because they are not: They themselves are continually criticized, and either survive criticism (which allows them to be adopted without deference) or are improved (for example, when the franchise is extended, or slavery abolished). Democracy, in this conception, is not a system for enforcing obedience to the authority of the majority. In the bigger picture, it is a mechanism for promoting the creation of consent, by creating objectively better ideas and eliminating errors from existing ones.
"Our whole problem," said the physicist John Wheeler, "is to make the mistakes as fast as possible." This liberating thought is more obviously true in theoretical physics than in situations where mistakes hurt. A mistake in a military operation, or a surgical operation, can kill. But that only means that whenever possible we should make the mistakes in theory, or in the laboratory; we should "let our theories die in our place," as Popper put it. But when the enemy is at the gates, or the patient is dying, one cannot confine oneself to theory. We should abjure the traditional totalitarian assumption, still lurking in almost every educational system, that every mistake is the result of wrongdoing or stupidity. For that implies that everyone other than the stupid and the wrongdoers is infallible. Headline writers should not call every failed military strike "botched"; courts should not call every medical tragedy malpractice, even if it is true that they "shouldn't have happened" in the sense that lessons can be learned to prevent them from happening again. "We are all alike," as Popper remarked, "in our infinite ignorance." And this is a good and hopeful thing, for it allows for a future of unbounded improvement.
Fallibilism, correctly understood, implies the possibility, not the impossibility, of knowledge, because the very concept of error, if taken seriously, implies that truth exists and can be found. The inherent limitation on human reason, that it can never find solid foundations for ideas, does not constitute any sort of limit on the creation of objective knowledge nor, therefore, on progress. The absence of foundation, whether infallible or probable, is no loss to anyone except tyrants and charlatans, because what the rest of us want from ideas is their content, not their provenance: If your disease has been cured by medical science, and you then become aware that science never proves anything but only disproves theories (and then only tentatively), you do not respond, "Oh dear, I'll just have to die, then."