Bill Easterly took aim today at Paul Collier’s new book, Wars, Guns and Votes. The accusation? Data mining. In short: if you run enough regressions, you’ll find the answer you were looking for from the start.
My informed opinion on Collier’s book is going to have to wait until that elusive day when I have time to finish reading it. (So far no luck, in spite of the end of semester.) I know the literature on which it’s based, though, having just written a behemoth of a civil war lit review.
My own view: the cross-country regression is often a spurious thing. Sometimes incredibly useful, but just as often misleading.
When does statistical work lead us astray? For that we could do worse than to take a second look at Karl Popper. In a famous 1957 lecture, he gave us seven guidelines for scientific inquiry:
- It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.
- Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory–an event which would have refuted the theory.
- Every ‘good’ scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
- A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
- Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
- Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory.
- Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation.
Point number one warns us against data mining. The rest give us a sense of how research should be done.
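To see why data mining is so seductive, here is a small illustrative sketch (my own toy simulation, not anything from Collier's or Easterly's work): regress a purely random "outcome" on a hundred purely random "predictors," one at a time, and a handful will clear the conventional 5% significance bar anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
n_countries, n_predictors = 50, 100

# An outcome with no true relationship to anything.
y = rng.standard_normal(n_countries)
# One hundred candidate "explanations," all pure noise.
X = rng.standard_normal((n_countries, n_predictors))

def t_stat(x, y):
    """t-statistic for the slope in a simple regression of y on x."""
    r = np.corrcoef(x, y)[0, 1]
    df = len(y) - 2
    return r * np.sqrt(df / (1 - r**2))

# Approximate two-sided 5% critical value for df = 48.
crit = 2.01
significant = [j for j in range(n_predictors)
               if abs(t_stat(X[:, j], y)) > crit]

# Roughly five of a hundred noise variables will look "significant."
print(f"{len(significant)} spurious findings out of {n_predictors}")
```

Run enough specifications and something will come up significant by chance alone; report only those, and the published result is indistinguishable from a real finding. That is Popper's first warning in statistical dress.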
Now, almost no cross-country study meets these strictures. That would probably be too much to ask; risky, refutable predictions are hard to come by, and the questions matter too much to ignore. When we only have confirming evidence and weak tests, we don’t ignore the evidence; we just take it with caution.
This is where most of the literature fails: overconfidence in weak tests on poor data. I actually buy most of Collier’s conclusions, and share his intuition. But this is theory-building, not theory-proving, and the answer remains to be found.