FEATURED VIDEO: Esther Duflo, Social Experiments to Fight Poverty (2010). Director of the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT, Esther Duflo explains how rigorous impact evaluations are providing the evidence needed to make development interventions more effective.
Also see: TIPS: Rigorous Impact Evaluation; Technical Note: Impact Evaluations
Impact evaluations measure the change in a development outcome that is attributable to a defined intervention. They are based on models of cause and effect and require a credible, rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.
Compared to performance evaluations, which can address questions on a wide range of topics, impact evaluations normally focus narrowly on cause-and-effect questions: the effect of an intervention or sequence of interventions, or which of several alternative approaches for achieving a given result is the most effective.
One way to conceptualize an impact evaluation is as a deliberate test of a program hypothesis. The impact evaluation must show that change occurred on the outcome of interest, and it must demonstrate that the change it measured would not have occurred, or at least not to the same degree, in the absence of a particular USAID intervention. In most cases this involves comparing what happened to the beneficiary (or treatment) group with what happened at another site or to another group that did not receive the intervention. When experiments of this sort are carried out in field situations, both the treatment and comparison groups are selected ahead of time, and baseline data on both groups are obtained before the intervention is delivered by USAID's implementing partner. Other types of impact evaluations are carried out retrospectively, using long data series for a single country or data from multiple countries.
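The comparison logic described above can be illustrated with a short numerical sketch. The values below are invented for illustration only; the sketch computes a simple difference-in-differences estimate, one common way of using the comparison group's change as the counterfactual.

    # A minimal sketch of the treatment/comparison logic, with
    # hypothetical baseline and endline outcome values (illustrative
    # numbers only, e.g. mean household income in a project area).
    treatment = {"baseline": 100.0, "endline": 130.0}
    comparison = {"baseline": 98.0, "endline": 110.0}

    # Change observed in each group over the same period.
    treatment_change = treatment["endline"] - treatment["baseline"]    # 30.0
    comparison_change = comparison["endline"] - comparison["baseline"]  # 12.0

    # The comparison group's change stands in for what would have
    # happened to beneficiaries without the intervention, so the
    # impact estimate is the extra change in the treatment group.
    impact_estimate = treatment_change - comparison_change

    print(f"Estimated impact attributable to the intervention: {impact_estimate}")
    # -> Estimated impact attributable to the intervention: 18.0

Without the comparison group, the full 30-point change might be attributed to the intervention, even though 12 points of it occurred anyway; the counterfactual is what separates gross change from attributable impact.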
A good place to start when identifying opportunities for impact evaluations is with the hypotheses embedded in a CDCS Results Framework and in the projects a Mission is contemplating undertaking during a CDCS period to achieve its Goal. Hypotheses for which Missions have the least evidence are often found at the Sub-IR level or in preliminary project design documents that envision using a particular modality or approach to achieve an important result.
To Identify Opportunities for Impact Evaluation, Try Thinking About Where Programs Could Learn From a Relatively Brief Experiment
In some countries the time required to clear customs at an international airport or port is much lower than at land border crossings. If a Mission had a Sub-IR of reduced customs clearance time at land borders and was discussing with its country partner various steps for achieving this result, such as increasing staff at border posts, improving staff training on current procedures, and adding automation, it could start with an experiment to determine which of these interventions has the greatest effect on clearance times. This might involve selecting an initial set of border crossings and randomly assigning them to receive (a) a temporary staffing increase, (b) substantive training on current customs procedures, or (c) automation and associated training, as in the sketch below. Within a year, and possibly less, the Mission and partner country would have sound empirical data for deciding which intervention, or combination, to roll out on a broader basis.
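To make the random assignment step concrete, the following sketch assigns a hypothetical list of border crossings to the three interventions and compares mean clearance times after the trial period. All crossing names and measurements are invented placeholders, not real data.

    import random
    from statistics import mean

    # Hypothetical border crossings (illustrative names only).
    crossings = [f"crossing_{i}" for i in range(1, 13)]
    interventions = ["staffing_increase", "procedures_training", "automation"]

    # Randomly assign each crossing to one intervention, with equal
    # group sizes, so groups are comparable at baseline on average.
    random.seed(42)
    random.shuffle(crossings)
    group_size = len(crossings) // len(interventions)
    assignments = {
        name: crossings[i * group_size:(i + 1) * group_size]
        for i, name in enumerate(interventions)
    }

    # Placeholder endline measurements: mean hours to clear customs
    # at each crossing after the trial period (simulated here).
    clearance_hours = {c: random.uniform(4.0, 12.0) for c in crossings}

    # Compare average clearance time by intervention group.
    for name, group in assignments.items():
        avg = mean(clearance_hours[c] for c in group)
        print(f"{name}: {avg:.1f} hours on average")

Because assignment is random, any systematic difference in average clearance times across the three groups can be attributed to the interventions rather than to pre-existing differences among the crossings.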