
I Failed to Seriously Consider the Limitations of Microfinance as a Poverty Reduction Approach

August 17, 2011

...that according to the newest review of the evidence on the impact of microcredit (p. 73). The review was commissioned by the British aid agency DFID and carried out by British academics, all but one of whom (James Copestake) were based at the University of East Anglia.

I should explain that I wrote the title for this post, which personalizes my reaction to the report, with some humor. The bits of the report that are about me are not nearly as important as what it says about our knowledge of the impacts of microcredit.

I haven't read all 184 pages of the report with equal care. My reaction at this point is conflicted, as you might sense. On the one hand, I really like the executive summary. With a couple of exceptions, it concisely corroborates my thinking. As Jonathan Morduch and I wrote in 2009:

We assert, however, that decisive statistical evidence in favor of [claims of positive impact] is absent from these studies and extraordinarily scarce in the literature as a whole.
I say something similar in the Tom Heinemann documentary (notice the Norwegian subtitles in the screenshot at 40:24, where I opine that "35 years into the microfinance movement, we don't have any clear evidence that microcredit...reduces poverty on average.").

On the other hand, when I examined the parts of the report that overlap most with my own expertise---those on the Pitt and Khandker studies of the impact of microcredit in Bangladesh and the randomized studies---I found them to be problematic in certain ways, even mildly offensive (in making confident assertions about what is going on in my head).

How I feel about the text is unimportant in itself, but I do think my reaction points to a problem with the work that matters for the public and for the public agency that funded it. The report seems to ally itself with the current stream of vociferous criticism of microfinance, led by another Brit, Milford Bateman---whose book "has very little time" for academic research. Strange that the authors seem to make common cause with someone who views with nihilism the work to which they are devoting their careers. Meanwhile the report seems to distance itself from researchers, notably Jonathan and me, whom it portrays as wanting to believe that microcredit reduces poverty despite the lack of evidence.

Similarly, the report perceives a "high risk of bias" in the Karlan and Zinman randomized study of microcredit in the Philippines. Here too the argument seems so illogical that I can't help wondering what animosity and bias lie behind it. This is mildly unfortunate in a government-sponsored report.

The fundamental conclusions of this report are that a) we have almost no credible evidence on the average impact of microcredit on poverty and b) what little we have puts the impact at 0. In the current battle royale over whether microcredit is good or bad, that seemingly puts the report right in the middle.
Yet in naming intellectual allies and opponents, the report appears to pick sides in a way that departs from the evidence it so thoroughly critiques. This invites the public to spin the report in a certain way, to confuse absence of evidence with evidence of absence, as has already happened in the Bangladeshi press ("Microcredit is a mirage, says UK study").

Still, this is an intelligent critique of the evidence, and anyone interested enough to read it will learn from it. I particularly like the point that "advanced econometric techniques will not be able to control for poor quality data," which wisely summarizes my experience replicating the complex Pitt and Khandker study. Three specific comments:
  1. Although "microfinance" is in the title, the report and its negative conclusions are really about microcredit. The latter term is actually in the title of the study protocol. The important Dupas & Robinson study of microsavings gets only a tiny mention (p. 48), and that mention is inaccurate ("no impacts on well-being").
  2. In the conclusion of our 2009 working paper, Jonathan and I accentuate the negative in one place (quoted above) and the positive in the previous sentence, quoted below and in the report:
    In our view, nothing in the present paper contradicts those [the view that microcredit is effective in reducing poverty generally, that extremely poor people benefit most especially so when women are borrowers] ideas.
    These two statements are compatible: lack of evidence means lack of evidence of help and lack of evidence of harm. But the new report quotes only the positive half of this symmetric pair, and from it conjectures about what Jonathan and I think:
    some prominent academics involved in microfinance seem to have preferred to not reject the alternate hypothesis.
    (A footnote makes clear the reference is mainly to us.) Here's the logic:
    Failing to contradict the alternate hypothesis encourages one to believe there is a positive effect and therefore to tend to (continue to) reject the null (no effect) hypothesis even though it (no effect) may be true. This of course depends on the decision procedure (see Neyman and Pearson 1933, for a detailed discussion on decision rules) and weighing the costs and benefits of an intervention.
    Ergo:
    Even for critics of these evaluations the absence of robust evidence rejecting the null hypothesis of no impact has not led to a rejection of belief in the beneficent impacts of microfinance (Armendáriz de Aghion and Morduch 2010, p310; Roodman and Morduch 2009, p39-40), since it allows the possibility that more robust evidence (from better designed, executed and analysed studies) could allow rejection of this nul. However, given the possibility that much of the enthusiasm for microfinance could be constructed around other powerful but not necessarily benign, from the point of view of poor people, policy agendas (Bateman 2010, Roy 2010), this failure to seriously consider the limitations of microfinance as a poverty reduction approach, amounts in our view to a failure to take seriously the results of appropriate critical evaluation of evaluations.
    As hinted in the earlier-mentioned footnote, I debated this issue with report author Richard Palmer-Jones by e-mail last November while I was in India. I explained that just because I see no evidence on the impact of microcredit in Pitt and Khandker one way or the other does not mean that I have decided to presume the impact is positive. "Failing [to] contradict the idea that microcredit [helps] is not the same as asserting that it does." Despite providing this direct evidence, I apparently did not change his view of my view.
  3. The critique of the Karlan and Zinman Philippines study seems mistaken. (I put this last because it is technical. See this post for background.) The DFID report's errors in this regard hardly affect its conclusions, which are actually quite compatible with Karlan and Zinman's failure to find impact. Still, they seem worth flagging. In particular, the report lists a series of threats to "internal validity," which is how reliably the study measures what it sets out to study; this concept is distinct from "external validity," which is how representative the study's findings are for the rest of the world:
    • The report points out that loan officers randomly prompted by their computers to offer loans to marginally qualified applicants sometimes ignored those prompts, and probably did so for applicants they knew to have poorer prospects: "While analysis was on an intention-to-treat basis using the original allocations, it seems likely that the sample of marginally creditworthy people actually being offered and taking up loans...would have been biased by selection by loan officers and by self-selection." I don't get it. We'd expect the randomly rejected and the randomly unrejected to have equal numbers of people with poor prospects, and that is the basis for comparison.
    • The report argues that the unrejected, marginally qualified applicants might be systematically less reliable than applicants with higher credit scores (or more so if loan officers compensate by visiting them more). But the comparison is not between these two different treatment groups. Well-qualified applicants are not part of the study. If anything, this is a critique of external validity, noted by K&Z.
    • The report worries that attrition in the K&Z study is high: surveyors found only 70% of subjects for follow-up. It is possible that some borrower characteristic is correlated with both outcomes and attrition in a way that biases results. But this is harder to believe given that intent-to-treat status and attrition are uncorrelated with each other, as is the case.
    • Finally, the report points out that some of the Philippine corner store operators may have realized that they shouldn't really have qualified for microcredit and must therefore be part of an experiment, which might have caused them to behave differently than they would have ordinarily. I suppose there is something to this...but again it goes to external, not internal validity.
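The symmetry discussed in point 2, that failing to reject the null of no effect is not evidence for either side, can be illustrated with a quick simulation. The sketch below uses plain Python and made-up numbers (sample size, effect size), not figures from any actual study: an underpowered comparison usually comes back non-significant whether the true effect is zero or modestly positive, so a null result by itself cannot adjudicate between the two.

```python
import random

random.seed(1)

def detection_rate(true_effect, n=50, sims=2000):
    """Fraction of simulated two-arm studies (n per arm, sd = 1, known-variance
    z-test) that reject 'no effect' at the 5% level."""
    rejections = 0
    se = (2 / n) ** 0.5  # standard error of the difference in means
    for _ in range(sims):
        treated = [random.gauss(true_effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        diff = sum(treated) / n - sum(control) / n
        if abs(diff / se) > 1.96:
            rejections += 1
    return rejections / sims

# With no true effect, ~5% of studies reject (the false-positive rate).
# With a modest true effect, most studies *still* fail to reject:
# the same null result is consistent with both states of the world.
print(detection_rate(0.0))  # roughly 0.05
print(detection_rate(0.2))  # well below 1.0 (low power)
```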
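The intention-to-treat logic in the first bullet of point 3 can also be sketched. In the toy simulation below (illustrative only; the model and numbers are invented, not K&Z's data), loan officers override random prompts for the weakest applicants, yet comparing groups by the original random assignment still gives an unbiased, if diluted, estimate. Here the true loan effect is set to zero, and the ITT difference comes out near zero despite the selective overrides.

```python
import random

random.seed(0)

N = 100_000
TRUE_EFFECT = 0.0  # suppose, for illustration, the loan has no causal effect

treated_arm, control_arm = [], []
for _ in range(N):
    prospects = random.gauss(0, 1)    # applicant quality, visible to the loan officer
    assigned = random.random() < 0.5  # random computer prompt to offer a loan
    # Selective noncompliance: officers override prompts for the weakest applicants.
    got_loan = assigned and prospects > -0.5
    outcome = prospects + (TRUE_EFFECT if got_loan else 0.0) + random.gauss(0, 1)
    (treated_arm if assigned else control_arm).append(outcome)

# Intention-to-treat: compare by the ORIGINAL random assignment, not by who
# actually got a loan. Randomization balances 'prospects' across the two arms,
# so the overrides cannot bias this comparison.
itt = sum(treated_arm) / len(treated_arm) - sum(control_arm) / len(control_arm)
print(round(itt, 3))  # near zero, matching the true effect
```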

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.
