And what can you say about the cost of RCTs and their application in such countries compared to “developed” countries, and across sectors?
Lant: In general, the costs are wildly excessive relative to the benefits of anything other than the kind of "evaluations at scale" that Muralidharan and Niehaus argue for. Most "boutique" RCTs generate results with no general applicability (zero external validity). Had the same resources been devoted to, say, building systems for the routine collection of facts, they almost certainly would have been of more use.
I suspect the spread of the RCT fad in developing countries reflects its use as a "weapon against the weak": academics use the leverage of official donors and philanthropists to pressure implementing organizations (both NGOs and governments) into "impact evaluations" they neither want nor need. RCTs are a great way to write papers, not a way to do development.
Editors: The issue of cost is indeed central. There is a real crowding-out effect, as several chapters of the book show (see in particular Ravallion's). We raise this issue in the introduction:
On the question of funding, consider two examples by way of illustration. In the Indian setting, a study truly capable of evaluating the impact of sanitation on infant mortality (the most appropriate indicator, but one that RCTs do not have the statistical power to capture) would cost around $90 million (subject to certain conditions; Spears, Ban, and Cumming, Chapter 6). The cost of a classic RCT is between $500,000 and $1,500,000, and each RCT often generates just one published research paper. Is this cost-effective when a poor country's statistical household survey system could be funded for the same amount, with a host of possible studies drawn from these observational data? This is one of the crucial questions asked by Ila Patnaik (Interviews, this volume).