Can you turn swords into ploughshares? (Or a paper, for that matter?)

Three years ago, randomizing Liberian ex-combatants into a reintegration program sounded like a good idea.

The problem wouldn’t be that ex-combatants would angrily resist randomization. Far from it. I’ve never experienced a problem with lottery-based program entry. People are excluded from programs all the time. “At least this time,” people say, “I know why he got it and I didn’t.”

The challenge would not be the enormous tracking effort required. Well, at least not an unexpected challenge. We saw that one coming. 1300 ex-combatants in rural hotspots around the country, mostly engaged in illicit mining and rubber-tapping, in places without roads or mobile networks? We always knew they would be hard and expensive to track. And they did not disappoint. My research assistants and surveyors deserve medals.

The issue wasn’t program quality. When you’re evaluating a real-world program, no matter the institution, there will always be issues. Given what a mess most post-war and ex-combatant programs are, this one was run extremely well. Four months of residential training in agricultural skills, counselling and life skills; land access in the village of their choosing; and a start-up package of seeds and tools.

No, the issue is one that I’ve started to associate with most of the program evaluations I started a few years back: you can get a beautiful treatment effect, but that doesn’t necessarily mean you know what to do with it.

You can see my Impact Evaluation 2.0 and 3.0 talks for the general idea. In this case, it’s pretty simple: if we see a fall in poverty, it’s hard to say why, or what caused it. If we see a reduction in violence, you can’t tell what theory of violence you have just established. Or not.

Some interventions are still worth studying. When are we ever going to get a chance to study as high risk a population, with so extensive an intervention, again? And randomly, no less. We might not.

The punchline? Well, wealth is up a great deal, and some violence is down. Most of all, those who entered the program were less likely to get involved in the war in Cote d’Ivoire. That’s worth knowing. Our policy report is here. A summary of the project and findings is here. An academic paper is in the works.

But there are some puzzles. Most interpersonal violence and social integration measures show no change. So is their absence of interest in the Ivoirian war special or spurious? And while wealth is up, current incomes and consumption are unchanged. Economic theory can explain that, but it’s not what I predicted. Again, special or spurious?

Most of all, I’m even more undecided now than at the outset about why men rebel. The simplest reading of this evidence is that wealthier people don’t rebel because it’s too costly, a vindication of the economic approach to conflict. But this would be too simple a reading, and other theories are at least as consistent with the data.

The upside: lessons learned. My newer experiments do not make the same mistakes. But what a long and costly education these few years of field experiments have been. I have not started a new one in more than two years, and haven’t decided whether I will again. 2012 is my year for meditating on the next research steps. While I am sure I won’t be able to help myself, after three years tracking all manner of unstable and difficult populations in some of the more challenging countries, the life of a historian looks very attractive right now…