
Accounting for the Corruption that Counts at the MCC

December 14, 2016

The Millennium Challenge Corporation (MCC) announced its list of compact countries for FY2017 today. Last week, my colleague Sarah Rose published her analysis of MCC compact decisions and predicted almost everything the board would decide. She also suggested that much of the uncertainty around this year's picks came down to "good data sense over data fundamentalism." That applied in particular to how the MCC board viewed outcomes on the corruption "hard hurdle": the requirement that countries score in the upper half of their income group on the Worldwide Governance Indicators' Control of Corruption measure. There is a lot to be said for good data sense, and one way the MCC could demonstrate it next year is to replace its current corruption measure with a better one: surveyed evidence of bribes paid.

This year the MCC board had to decide whether to reselect two countries, Kosovo and Mongolia, that fail the Control of Corruption indicator. But they only just fail it, and given the statistical noise involved in constructing a mash-up measure like Control of Corruption, their scores cannot be said to differ significantly from those of many countries that clear the hurdle. The board chose to let Mongolia continue developing a compact program, while Kosovo was 'demoted' to a smaller threshold program.
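To see why "only just failing" matters: the WGI publish a standard error alongside each country's estimate, and a rough rule of thumb (a simplification that assumes independent, roughly normal errors, and not the MCC's own methodology) is that two scores differ significantly at about the 95 percent level only when

\[ |s_A - s_B| > 1.96\sqrt{\sigma_A^2 + \sigma_B^2} \]

where \(s_A\) and \(s_B\) are the two countries' Control of Corruption estimates and \(\sigma_A\) and \(\sigma_B\) their reported standard errors. By that standard, a country sitting just below its income-group median may well be statistically indistinguishable from countries sitting just above it, which is exactly the situation described above.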

So while the board is showing reasonable data sense, this seems a good time to revisit the corruption hard hurdle, or at the very least the indicator on which it is based. The Worldwide Governance Indicators are a useful research tool, but they were not designed to guide policymaking or, in particular, to make pass-fail decisions about aid allocation. The Control of Corruption indicator is a mash-up of survey evidence on bribe payments with a range of expert perceptions. That makes it both an arguable measure of 'the corruption problem' and very hard for policymakers to influence, especially in the short term, because the underlying data often arrive with long lags.

I’ve long hated on the idea of a hurdle at all, but leave that aside: at the very least, a hurdle ought to be based on an indicator that policymakers have some chance of changing in the short term, one that is concretely linked to particular behaviors that can be influenced. And in that regard, a far better indicator would be Transparency International’s Global Corruption Barometer (GCB).

For the 2013 Global Corruption Barometer, about 1,000 people in each of 107 countries were surveyed over a seven-month period. Among other things, they were asked whether they had had contact with various government services and, if so, whether they had paid a bribe to the provider. The survey covers education, the judiciary, medical and health services, the police, registry and permit services, utilities, tax and/or customs, and land services. From the survey you can learn, for example, that 78 percent of the people who came into contact with the police in the Democratic Republic of the Congo had paid a bribe to a police officer. And an aggregate measure captures average bribery across these services; this is the measure I would propose to replace Control of Corruption.
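To be concrete about what such an aggregate looks like (this is an illustrative sketch; Transparency International's published aggregation may differ in detail), one option is simply to average the bribery rate among actual service users across the K surveyed services:

\[ \bar{b} = \frac{1}{K} \sum_{k=1}^{K} \frac{n_k^{\text{bribe}}}{n_k^{\text{contact}}} \]

where \(n_k^{\text{contact}}\) is the number of respondents who had contact with service \(k\) and \(n_k^{\text{bribe}}\) the number of those who report paying a bribe. Each term is grounded in concrete transactions reported by actual service users rather than in general perceptions, which is the point of the proposal.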

Because the GCB asks about a specific act (paying a bribe) for specific services, it is far more ‘actionable’ than the dated, generalized sense of malfeasance that makes up the majority of inputs to the WGI Control of Corruption measure. If a country does particularly poorly on a cross-sectoral measure of the frequency of bribe payments and so fails the hard hurdle, it can see which of its services are particularly bribe-prone and crack down on a specific issue in a specific sector. If it improves, perhaps it will pass the hurdle next year. That would be a strong demonstration of the ‘MCC effect.’

The GCB isn’t perfect for the MCC: it isn’t designed to capture ‘grand corruption’ (looted treasuries and payoffs to politicians). All measures of corruption are subject to uncertainty, including bribe measures (what counts as a ‘bribe’ rather than a tip for service? Will people honestly admit to paying a bribe even in a confidential survey?). There are data quality concerns around sampling in some of the surveys. MCC economists would need to look at the margins of error around the survey questions, and at how tightly countries cluster around their income-group medians, to see whether the GCB would do a better job than Control of Corruption at distinguishing low- from high-scoring countries with statistical confidence (a rough sense of the relevant calculation is sketched below). Finally, the GCB isn’t carried out every year, and its country coverage would have to be (somewhat) expanded.
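As a back-of-the-envelope illustration of the margin-of-error question (a standard sampling calculation, not MCC methodology), the roughly 95 percent margin of error on a single bribery-rate estimate from a sample of \(n\) respondents is about

\[ \pm\, 1.96\sqrt{\frac{p(1-p)}{n}} \]

so for an estimated rate of \(p = 0.3\) with \(n = 1{,}000\) respondents the margin is roughly ±2.8 percentage points, and wider still for services that only a fraction of the sample actually used. Whether that is tight enough to separate countries near their income-group median is precisely what MCC economists would need to check.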

But there is no evidence that the existing Control of Corruption measure is better at capturing grand corruption, nor that it better predicts the effectiveness of MCC support. What we do know is that the Global Corruption Barometer directly and clearly asks about a specific, widespread, and important injustice: people being forced to pay bribes for services they should be getting for free or for a set, public fee. If the MCC can provide an incentive for countries to tackle that problem, it will have a marked and direct impact on the quality of life of potential compact-country citizens worldwide.

And the problems with survey quality, annual availability and coverage are easily fixed. The MCC is authorized to spend money on improving indicator availability and coverage—it could fund Transparency International or its partners to carry out the relevant parts of the GCB survey on a large, carefully selected sample and do it every year.  Any concerns that the involvement of the MCC might create the perception of bias in the survey could be addressed by using the existing survey questions unaltered and working through a trusted independent party like Transparency International to administer it. The corruption measure is the most important indicator the MCC uses—it should be willing to pay for a better one.
