As USAID Thinks about Procurement and Program Design, It Should Keep Evidence in Mind

July 05, 2018

Procurement is at the core of how USAID does business. It’s so central, in fact, that the last two administrations have devoted considerable energy to seeking reforms to the procurement process aimed at improving the agency’s performance and effectiveness. The current administration’s “Effective Partnering and Procurement Reform” is exploring ways to diversify partners, including increasing support for local partners; improving partner collaboration; making procurement tools more strategic and efficient; and allowing for greater adaptability in programming. As part of this reform effort, I hope the agency is also thinking about how procurement and program design can help ensure USAID’s interventions are informed by evidence and/or build in opportunities to generate evidence. The other day, a USAID request for proposal (RFP) came across my desk that highlights just how important that is—and how infrequently and inadequately it’s done. Below, I offer three recommendations for how USAID can address this disconnect between evidence and procurement.

The offending RFP: Nice idea in theory, but that theory’s been tested—and it doesn’t usually work

In April, USAID/Ukraine released an RFP requesting bids to implement a community-driven development (CDD) project with the goals of creating greater acceptance of shared culture and increasing participation to improve governance and resolve community problems. What’s absent from the RFP is any evidence of whether and under what circumstances the requested activities are likely to achieve those goals.

Had USAID sought to base its request on the best available evidence, it might have come across the International Initiative for Impact Evaluation’s (3ie) synthesis of 25 impact evaluations of CDD programs, which found that these types of activities have little or no impact on social cohesion and governance. In fact, not only do they fail to increase participation, but they rely on social cohesion rather than build it. They’ve also been shown to undermine local governance by setting up parallel structures. These findings led 3ie to recommend abandoning social cohesion building as a program objective of CDD. Though 3ie’s synthesis is itself not without critique, its authors stand by their conclusions about social cohesion. Either way, the point remains that USAID is offering to spend up to $60 million on activities whose effectiveness in achieving the stated objectives is highly questionable. The problem isn’t that USAID is seeking to procure a CDD project per se, but that the evidence suggesting it will accomplish the stated objectives is weak. USAID makes no reference to this evidence gap, nor does it ask bidders to address it.

On the hunt for evidence in the procurement process

So how common is this kind of thing? Answering that question is beyond the scope of this blog post. But we did do some (very unscientific) digging—examining a handful of USAID’s most recent RFPs to determine the extent to which the choice of intervention solicited is supported by evidence.

The verdict? There’s plenty of room for improvement (though our small sample is nowhere near large enough to draw firm conclusions).

Many RFPs include some type of evidence to explain the scope of the problem the solicitation seeks to address within the country context (e.g., statistics showing low school attendance, disease prevalence rates). That makes for helpful background reading, but it doesn’t get at the crux of the matter—whether the intervention USAID is soliciting will plausibly achieve the intended outcomes. Far fewer RFPs offered evidence for this. It was uncommon to see references to studies supporting the linkage between the proposed intervention(s) and the stated objective(s). Some did (e.g., a tuberculosis response activity in Indonesia). Others referenced no evidence whatsoever (e.g., a small and medium-sized enterprise project in Vietnam, a democracy and governance indefinite delivery/indefinite quantity (IDIQ) contract).

How much does this matter? One might suggest that interventions described as “illustrative”—as those in the latter two solicitations were—don’t need a great deal of detail on the existing evidence base. But prospective contractors are likely to assume their bids will be more competitive if they respond directly to one or more of the proposed activities, even, perhaps, in the absence of evidence justifying them. With an IDIQ, individual task orders for particular interventions may present a better opportunity for referencing the evidence linked to the desired objective. The problem with reserving the evidence requirement for this later stage is that firms will have already been preselected for the opportunity to bid based on criteria other than whether they understand the evidence base for a particular intervention and can successfully build upon it.

We did find some examples that didn’t reference the evidence base for an intervention but did tie it to a priority identified in a national strategy or action plan. Aligning with country priorities is good practice and a welcome part of USAID’s push toward greater local ownership. However, donors must still understand the evidence base around the national priorities they choose to support. The reasons that countries champion certain approaches can be complex, and while expected results are often at the forefront, other factors may also contribute.

It was also fairly common for RFPs to refer to the Project Appraisal Document (PAD) as justification for the chosen intervention(s). Since the PAD “describes the project design and the supporting evidence upon which it is based,” one might (generously) assume that the evidence case for the RFP is contained in the corresponding PAD. But while the PADs we read contained varying degrees of evidence, none discussed the evidence base for all of the activities the project is expected to include. And because few PADs are publicly available, bidders have only irregular access to the evidence, if any, on which USAID bases its selection of interventions.

Three ideas for baking evidence into the procurement process

In an earlier piece, Amanda Glassman and I outlined ideas for how USAID should use its procurement process to help ensure the agency’s programs reflect the current state of evidence and/or build in opportunities to generate evidence.

  1. RFPs should reflect and cite the latest evidence supporting the proposed theory of change. First, and fundamentally, USAID’s RFPs should reflect (and cite) the latest research and evaluation findings that link the proposed intervention(s) to the identified objective(s). This would help avoid solicitations that seek someone to implement a disproven theory of change. It also creates an expectation that an implementer needs to know the literature and build upon it for their proposal to be successful. This is particularly important for contract awards since USAID exerts substantial control over project design and implementation. If the agency is going to limit flexibility, it needs to be sure that what it’s requiring is evidence-based.

  2. Where evidence is weak, USAID should consider more flexible award mechanisms and identify ways to add to the evidence base. The state of evidence for a particular theory of change should play a significant role in informing mechanism choice. If the evidence base is weak or contradictory, which would limit USAID’s ability to provide strict technical guidance about implementation, the agency should consider flexible award types and build in opportunities for evidence generation (i.e., plan a high-quality evaluation from the outset in close coordination with the implementer and allow for adaptive management). It’s encouraging that USAID, through its “Effective Partnering and Procurement Reform,” is seeking to identify more flexible procurement mechanisms and program design processes that allow adaptation based on the learning that occurs during implementation (in line with the agency’s Collaborating, Learning, and Adapting approach). These may fit better where the evidence for a theory of change is weak and/or where the implementation context is more fluid.

  3. USAID should value how bidders incorporate evidence into their proposals. USAID should also explore how it could include score-able requirements to assess how well a bidder demonstrates an understanding of the evidence base relevant to the project and incorporates economic evaluation and evidence into its proposal. Where the evidence base is weaker, USAID should evaluate bids on how well the proposed program can be implemented in an adaptive manner (e.g., how well do bidders outline their theory of change and identify the parts that need testing? How well can they collect relevant information and feed it back into decision making?). Both requirements would demand that USAID technical evaluation committees have a strong grounding in the evidence base themselves. And because some qualitative judgment would be involved, such specifications would have to be developed in close coordination with contracting officers.

Thanks to Drew D’Alelio for RFP and PAD hunting.

Disclaimer

CGD blog posts reflect the views of the authors, drawing on prior research and experience in their areas of expertise. CGD is a nonpartisan, independent organization and does not take institutional positions.