Question 12

Coordinator: How are the issues you are raising with RCTs not also present in most other available methodologies, and what are you proposing be put in their place to simultaneously achieve causal identification and solve the issues you are raising? This conversation feels like a reductive view of RCTs, reminiscent of the same debate we were having ten years ago. No so-called randomista would use the term “gold standard” or suggest using the tool widely. Why are micro work and systemic change mutually exclusive? Why can’t qualitative work (ex-ante, during, and ex-post) and implementation research, as components of RCTs, complement impact estimates? In fact, how well does the book go beyond the “usual” claims about the weaknesses of RCTs that randomistas generally oppose?

Lant: This question is slightly surreal. About a year ago, three economists won the Nobel Prize in economics for promoting the widespread use of RCTs, and the number of RCTs being done in the field of development has, as a matter of fact, skyrocketed. And yet your claim is that they would not suggest using the tool widely? Are they going to give the Nobel Prize back because they realized their former position was wrong?

And what is a “reductive” view of RCTs? I know all of the main protagonists. I have heard what they say. I have heard Esther Duflo say: “Non-RCT evidence is an oxymoron.” I was in a meeting with Abhijit Banerjee in which he said any economist not using an RCT isn’t doing science, and intimated that giving policy advice without doing “science” (in his view) was being a charlatan. They set up an organization and raised untold millions of dollars to do more RCTs. I think my view of RCTs is a pretty accurate one, not a “reductive” one (whatever that means).

I have given a presentation (at NYU in 2018) entitled “The Debate is Over. We Won. They Lost.” So if you are suggesting that we shouldn’t be having the same debate because the proponents of RCTs lost that debate (as I think they did), then I agree with you. But my impression is that the randomistas have made some minor, cosmetic changes but continue to believe roughly the same set of things they always have.

And, at the same time, this question misses the point I am making in the book. The question suggests that I have to come up with a better method for causal identification of the impacts of individual development projects. But my point is that almost none of the problems of development stem from a lack of methods of causal identification. Now, it is true that, to get published and promote oneself as an economist, a new tool (or, more accurately, a new application of an old tool, as Heckman made clear) is important, and there is no question that many people have won fame and prestige for themselves in this way. But the idea that this is important for development is just obviously false (see my “are your evaluations of something important” blog from many years ago).

First, there has been massive heterogeneity in the extent to which countries have achieved national development (and made progress on any metric of human well-being). Some countries, like Korea, Chile, and Indonesia (and others), have made massive progress, while other countries, like Haiti, Somalia, and South Sudan, are in desperate straits. You cannot believe that any part of the success of the successes and the failure of the failures was because the successes applied reliable methods of causal identification and the failures didn’t.

Second, as my paper shows, both the levels of headcount poverty and the changes in headcount poverty are very tightly correlated across countries with the levels and changes in median income/consumption. None of the reason why, in a generation more or less, Indonesia, Vietnam, and China went from high levels of poverty to low levels of poverty is that they used reliable methods of causal identification to see “what works” in anti-poverty programs.

Third, as my presentation showed, there are massive differences across countries in the learning children gain from schooling: 150 to 200 points (on a scale where 100 is, by construction, the standard deviation across OECD students), even at the same level of income and education spending. It is impossible to suggest that the reason Kenya is 100 points above Ghana, or Vietnam is 200 points ahead of Zambia, is that Kenya and Vietnam relied on good methods of causal identification and the others did not.

The idea that better methods for causal identification of the impact of development projects are a key/important/central/pressing element of actually doing development is just an unbelievably self-serving claim of academics, for which there has never been any evidence. So, no, I do not accept the premise that the burden is on me to propose a better method to do what isn’t important. I want methods that can answer questions for which RCT (or similar) methods just don’t work.

Finally, you ask why micro work and systemic change are mutually exclusive. Well, things are mutually exclusive because there are only 24 hours in a day, and so every hour you or I or anyone spends on activity X is an hour not spent on activity Y.

And it is not that micro work cannot, in principle, contribute to systemic change, but it can only do so if the micro work is structured around answering the questions of systemic change. So perhaps one could design an RCT that contributed important knowledge to explanations of systemic differences in development and development outcomes. But that is not what the randomistas did or said they were doing. Their claim, made very explicitly, was that the big questions about systemic change could never be answered with “scientific” certainty, and so they were going to devote their method to answering the questions that were susceptible to it. So they explicitly and consciously chose questions and research programs based on method. (And don’t accuse me of having a “reductive” view or of not understanding or caricaturing what they said; I have heard them say this exactly.)

Let me conclude with an example. In the early 2000s, many of us were grappling with new ways of studying the economics of growth: developing new empirical methods of analyzing growth (e.g., the episodic approach) and something called a “growth diagnostic,” a contextualized way of summarizing the evidence about a particular country’s conditions that could lead to better recommendations about how to improve that country’s growth prospects. So the British aid agency put a lot of money into research into how to do this better. In 2008 the result was the International Growth Centre (IGC), based out of LSE. But instead of devoting their resources and time and attention to understanding the phenomenon of economic growth, they decided that the money would be better used funding “rigorous” studies (read: RCTs) (because time and money are, by their nature, mutually exclusive).

Now, to date, I would guess that the IGC has spent over 100 million dollars. I challenge you, or anyone, to name off the top of your head one intellectual contribution to growth from the International Growth Centre research that would lead to more reliable policy advice (not that they didn’t produce research; they did, lots of it; it just wasn’t about growth). My claim is that if these resources had been used to actually do policy-relevant research about growth, that would have been a much better use than frittering them away on academics obsessed with producing papers that used good methods of causal identification, and hence choosing questions to which those methods were amenable.

Editors: There are many methodological weaknesses specific to RCTs and the book clearly lists them (see the introduction of the edited volume).