Program Evaluation: 9 Things to Avoid and What to Do if You Can't
The best thing to do when asked to conduct an evaluation with insufficient time, budget or resources is simply to walk away. However, this is not always possible – especially for in-house evaluation practitioners in the Public Service, where elements of the bureaucratic process (e.g. legal requirements and budgets) do not always work to the evaluator's advantage.
There are a number of pitfalls that are surprisingly easy to fall into, especially under pressures of time and budget. Here are some of the things to avoid, along with suggestions for what to do when avoidance is not an option.
Avoid the rush to judgment.
There may be a temptation to go straight to the empirical evidence at hand, neglecting the assumptions and logical arguments that make sense of it. If these are not provided as part of the program foundation documentation, a thorough literature review will supply the information you need. If you are short of time, it is most efficient to prioritize the available literature by direct relevance to the topic, authoritativeness of source and recency of publication. If you are new to the subject-matter area, I suggest starting with the most current and widely used textbook, including its bibliography. This will ensure that you cover the basics and give you the wherewithal to assess the credibility of other sources, such as those you might find on the Internet.
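If it helps to make the triage mechanical, the three criteria can be treated as sort keys. A toy sketch in Python; the sources and scores below are invented for illustration:

```python
# Hypothetical sources, each scored 1-5 on the criteria from the text.
sources = [
    {"title": "Advocacy group web page", "relevance": 3, "authority": 2, "year": 2023},
    {"title": "Peer-reviewed study of the program model", "relevance": 5, "authority": 4, "year": 2018},
    {"title": "Widely used textbook (current edition)", "relevance": 4, "authority": 5, "year": 2021},
]

# Prioritize by direct relevance, then authoritativeness, then recency.
reading_order = sorted(
    sources,
    key=lambda s: (s["relevance"], s["authority"], s["year"]),
    reverse=True,
)
for rank, s in enumerate(reading_order, start=1):
    print(f"{rank}. {s['title']}")
```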
Avoid starting your data collection effort before you have a detailed plan of analysis.
Your detailed plan of analysis should be aligned with the program logic model and the specific evaluation questions, showing how each piece of data will contribute to the answers. Without such a plan in place before data collection, you risk collecting too much data or, worse, neglecting to collect data you need, leaving gaps in the analysis. If you neglect to ask an important question in a survey, for example, it is impossible to go back, and it may be difficult or impossible to fill the gap some other way. The plan of analysis will also help you to avoid scope creep.
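One lightweight way to make such a plan concrete is to record, for each evaluation question, the indicator, the data source and the intended analysis before any instrument goes into the field. A minimal sketch; the questions, indicators and sources below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AnalysisPlanRow:
    """One row of the plan: how a single indicator answers a question."""
    evaluation_question: str
    indicator: str
    data_source: str
    analysis_method: str

# Hypothetical plan rows for illustration only.
plan = [
    AnalysisPlanRow(
        evaluation_question="Q1: Did the program reach its target population?",
        indicator="Share of participants meeting eligibility criteria",
        data_source="Administrative intake records",
        analysis_method="Descriptive proportion, broken down by region",
    ),
    AnalysisPlanRow(
        evaluation_question="Q2: Are participants satisfied with delivery?",
        indicator="Mean satisfaction rating (1-5 scale)",
        data_source="Participant survey, item S3",
        analysis_method="Descriptive mean, compared across delivery sites",
    ),
]

# A quick completeness check before fielding the survey: which questions
# depend on it? Any survey-dependent question missing here is a future gap.
survey_rows = [r for r in plan if "survey" in r.data_source.lower()]
print(f"{len(survey_rows)} of {len(plan)} plan rows depend on the survey.")
```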
Avoid losing sight of the original issues being addressed.
It is easy to get immersed in the program subject matter and/or methodological details and lose sight of the original evaluation issues. This can be avoided by referring frequently back to the evaluation planning documents. Although you must be flexible enough to include relevant new information that had not been anticipated, any change in scope should be implemented through a change in the plan of analysis, with the agreement of the evaluation lead.
Avoid the temptation to use a sledgehammer to kill a fly.
Clinical therapy trials, for example, are generally conducted with the utmost rigour because the consequences of error can be extremely serious. Some social program decisions can also have weighty consequences. However, many programs have only marginal significance in terms of their cost and/or consequences. It is important to recognize this and to align the level of scientific rigour (which can be expensive and impractical at the high end) with the risk associated with a mistaken judgment, the materiality of the program and the likelihood that the conclusions will be challenged.
Avoid the attribution of outcomes in the absence of a controlled experimental or quasi-experimental design.
Purists will insist that, to count as a real evaluation, a study must deal with the attribution issue by using an adequate comparison group, preferably in a controlled environment. Strictly speaking, they are right. However, it is rarely possible in practice to adhere to this standard. The next best thing is to use a performance monitoring system to track output and outcome data which, if you are confident in the underlying program logic, can reasonably be interpreted as results of the program.
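In practice, this can be as simple as tabulating each output and outcome indicator against its target, period by period. A rough sketch; the indicator names and figures are invented:

```python
# Hypothetical monitoring data: actuals vs. targets by quarter.
monitoring = {
    "clients_served (output)": {
        "target": [250, 250, 250, 250],
        "actual": [230, 260, 255, 270],
    },
    "clients_employed_at_6_months (outcome)": {
        "target": [100, 100, 100, 100],
        "actual": [80, 95, 110, 105],
    },
}

for indicator, series in monitoring.items():
    total_actual = sum(series["actual"])
    total_target = sum(series["target"])
    pct = 100 * total_actual / total_target
    # Reading these as program results still rests on the logic model
    # holding up; monitoring data alone cannot rule out other causes.
    print(f"{indicator}: {total_actual}/{total_target} ({pct:.0f}% of target)")
```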
Avoid the recipe-like application of textbook methods.
Consult an experienced evaluation practitioner. If you do not have access to an experienced evaluator, you can usually consult someone with an advanced social science degree who can at least help you avoid some ‘rookie’ mistakes and/or tell you how, and under what circumstances, it is acceptable to bend the rules. One of the most common examples is using statistical significance levels to describe results from a non-random sample. If you do have to bend the rules, make sure that you explain why in the methodology section of your report.
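To illustrate the significance-level example: a t-test's p-value is computed as if the data were a random sample, so reporting one for a convenience sample needs an explicit caveat. A minimal sketch, assuming SciPy is available; the scores are invented:

```python
from scipy import stats

# Invented satisfaction scores from two convenience (non-random) samples.
site_a = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4]
site_b = [3.2, 3.6, 3.1, 3.8, 3.4, 3.0, 3.5]

t_stat, p_value = stats.ttest_ind(site_a, site_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Caveat for the methodology section: this p-value is valid only under
# random sampling. With convenience samples it is at best a heuristic
# screen for noteworthy differences, not a formal inferential claim.
```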
Avoid committing to a specific schedule and level of effort until you have seen the available data.
Someone other than the person conducting the evaluation frequently sets the evaluation timeline and budget using guesswork and ideal assumptions. That is, the schedule and required level of effort are calculated from a guess at the volume of data to be analyzed, together with assumptions that all of the needed file data will be immediately available in a usable form, that survey instruments and interview protocols can be developed and agreed to in one draft, that each interview can be arranged with a single e-mail or phone call, and so on. The result is a serious underestimate of the time and effort required. As a rule of thumb, I take the budget and timeline set under the ideal assumptions and double them for the reality of what you are likely to face.
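The doubling rule is trivial arithmetic, but writing it down task by task makes the gap between the ideal plan and reality visible. A toy calculation with invented numbers:

```python
# Ideal-assumption effort by task, in person-days (invented numbers).
ideal_effort = {
    "file data extraction": 5,    # assumes data is immediately usable
    "instrument development": 4,  # assumes one draft suffices
    "interview scheduling": 3,    # assumes one call per interview
    "analysis and reporting": 10,
}

# Rule of thumb from the text: double the ideal estimate.
ideal_total = sum(ideal_effort.values())
realistic_total = 2 * ideal_total
print(f"Ideal plan: {ideal_total} person-days; budget for ~{realistic_total}.")
```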
Avoid pressure to complete a summative evaluation of a program that is still running.
Believe it or not, it happens – particularly when there is a statutory obligation to conduct an evaluation after a fixed time period, with no allowance for program delays. If you must accept the assignment, there are a couple of things you can do, provided each is clearly documented in the report. For program outputs, it can be acceptable to extrapolate from existing data to the prospective program end date. Outcomes, on the other hand, are difficult to evaluate even under the best of circumstances. In the absence of any outcome data, the best you can do is to document the expected outcomes based on the program theory. Key informant interviews may be helpful here, especially if the initial program theory needs to be modified in light of experience with the program.
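On the output side, a simple pro-rata extrapolation is often defensible when delivery has been roughly steady; the constant-rate assumption should be stated in the report. A sketch with invented figures:

```python
# Invented figures: outputs observed partway through a delayed program.
months_elapsed = 18
months_planned = 36
outputs_to_date = 1_200  # e.g., training sessions delivered so far

# Pro-rata (linear) extrapolation to the prospective program end date.
# This assumes a roughly constant delivery rate; document that assumption.
monthly_rate = outputs_to_date / months_elapsed
projected_total = monthly_rate * months_planned
print(f"Projected outputs at program end: ~{projected_total:,.0f}")
```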
Avoid the loss of detail as you synthesize findings for your report.
You can never predict with certainty which areas of your report, if any, will be challenged. It is therefore important to maintain an ‘audit trail’ from your final report all the way back to the raw data (interview notes, survey database, etc.). I tend to build a report from the ground up, iteratively synthesizing and re-organizing the data as I go. The key is to exercise version control: save a new version frequently, so that you can easily go back to a version with greater detail if the client asks for more evidence for your conclusions.
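One low-tech way to keep that audit trail is to snapshot the working draft under a timestamped name before each round of synthesis; a dedicated version control system works just as well. A minimal sketch, with a hypothetical filename:

```python
import shutil
from datetime import datetime

def snapshot(working_file: str) -> str:
    """Copy the working draft to a timestamped version before reworking it."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    stem, dot, ext = working_file.rpartition(".")
    versioned = f"{stem}_{stamp}.{ext}" if dot else f"{working_file}_{stamp}"
    shutil.copy2(working_file, versioned)
    return versioned

# Hypothetical usage, called before each synthesis pass:
# snapshot("evaluation_report_draft.docx")
```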
Have you fallen prey to any of these pitfalls? What other challenges have you encountered while conducting evaluations?