
Benchmarking Evaluation in the Canadian Federal Government

Greg Tricklebank


Some time ago, in anticipation of the new Government of Canada (GOC) Policy on Evaluation, we conducted an evaluation benchmarking study of several federal government departments and agencies on behalf of one of our clients.  By sharing some of the findings and conclusions, we hope to spotlight a few issues that need to be addressed as managers seek to achieve compliance with the new Policy on Evaluation.

The New Policy on Evaluation

All departments and agencies are expected to achieve compliance with the new Policy on Evaluation by April 1, 2013.

The policy requires that evaluation be explicitly linked to the Expenditure Management System (EMS) and that 100% of direct program spending be evaluated on a rolling five-year cycle, which works out to roughly 20% of direct program spending per year.  It also requires that departmental evaluation units review all Treasury Board Secretariat (TBS) submissions and Memoranda to Cabinet (MCs).

Within four years of the policy's introduction on April 1, 2009, departments must have developed the capacity to implement it, as demonstrated in an approved rolling five-year departmental evaluation plan.  During the transition period, the departmental evaluation plan must demonstrate progress toward 100% coverage and use a risk-based approach to explain the department's coverage and non-coverage choices.

Our Study Findings

Most departments prefer to conduct as many of their evaluations as possible in-house.  Those that do contract out tend to manage all evaluations themselves and to focus contractors on fieldwork, which can be extensive because delivery agents often have poor data collection practices and few databases.

Evaluation staff tend to be located at departmental headquarters with a centralized reporting structure.  A decentralized reporting structure, in which evaluation staff have some reporting relationship with the program areas, is possible; however, it does not meet the neutrality criteria of the new Policy.

A matrix organization structure is common within evaluation units: staff are assigned to managers for administrative purposes and to a project manager or lead (who may or may not be the same person) for each evaluation.  It is also possible to organize into fixed teams assigned to each strategic outcome area.

The majority of evaluators are at the ES-4 or ES-5 levels, with managers at the ES-6 or ES-7 level.  The ES-2 and ES-3 classifications are typically used for developmental positions.

In the year before the new Policy came into force (2008-09), the evaluation units studied conducted an average of seven in-house evaluations each, at a ratio of 2.3 evaluation staff per project; that figure includes EX and administrative/support positions as well as funded positions that were unfilled.  At seven projects a year, this works out to an evaluation unit of roughly 16 positions.
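
This ratio offers a rough way to estimate the staffing a unit would need at a higher volume of evaluations.  The sketch below is a back-of-envelope illustration only: the project volumes are hypothetical, and only the 2.3 staff-per-project ratio comes from the study.

```python
# Back-of-envelope staffing estimate based on the benchmark ratio of
# 2.3 evaluation staff per project (the ratio includes EX positions,
# administrative/support staff, and funded-but-unfilled positions).

STAFF_PER_PROJECT = 2.3  # observed average from the benchmarking study

def staff_needed(evaluations_per_year: int) -> float:
    """Estimate total evaluation positions for a given annual volume."""
    return evaluations_per_year * STAFF_PER_PROJECT

# Hypothetical volumes: the 2008-09 average of seven evaluations a year,
# and a unit that must roughly double its output under the new Policy.
print(f"{staff_needed(7):.1f} positions")   # 16.1 positions
print(f"{staff_needed(15):.1f} positions")  # 34.5 positions
```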

Impact of the TBS Evaluation Policy

Evaluation units that already review Cabinet documents will be least affected by the new Policy, as will those already close to 20% evaluation coverage per year.  Those with a high proportion of grants and contributions (Gs and Cs) in their budgets will most likely fall into this category as well, because Gs and Cs were already mandated for 100% review every five years.

Departments and agencies that currently use a risk-managed approach to evaluation will need to apply it with more sophistication.  For example, it will no longer be acceptable simply to ignore low-risk programs; instead, a decision will be required as to the evaluation approach (e.g. whether to conduct a full implementation evaluation or just an impact evaluation).  The relative risk of non-performance of individual programs becomes the key criterion for planning both the timing and the evaluation approach for each program evaluation.
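
To make the risk-based approach concrete, the sketch below shows one simple way a five-year plan might be drafted: programs are ranked by a risk score, the riskiest are scheduled earliest, and lower-risk programs receive a targeted rather than a full evaluation.  The program names, scores, and the 0.5 cut-off are purely illustrative and are not taken from the Policy.

```python
# Illustrative risk-based evaluation planning: every program is covered
# within the five-year cycle, but higher-risk programs are scheduled
# earlier and receive a fuller evaluation approach.

# Hypothetical programs with risk-of-non-performance scores (0 to 1).
programs = {
    "Program A": 0.9,
    "Program B": 0.6,
    "Program C": 0.3,
    "Program D": 0.1,
}

def five_year_plan(programs):
    """Return (program, year, approach) tuples, riskiest first."""
    ranked = sorted(programs.items(), key=lambda kv: kv[1], reverse=True)
    plan = []
    for rank, (name, risk) in enumerate(ranked):
        year = min(rank + 1, 5)  # riskiest programs are evaluated first
        approach = "full" if risk >= 0.5 else "targeted"  # illustrative cut-off
        plan.append((name, year, approach))
    return plan

for name, year, approach in five_year_plan(programs):
    print(f"{name}: year {year}, {approach} evaluation")
```

In practice a department's risk assessment would be far more nuanced, but the principle of letting relative risk drive both timing and approach is the same.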

For some departments, the number of evaluations they must conduct will increase greatly, leaving little choice but to contract out.  This will be particularly challenging in sensitive areas, where outsiders may have difficulty collecting good information.

Options in Responding to the Policy

Departments and agencies have a number of options that can be used singly or in combination.

Negotiate with TBS to limit the definition of 100% coverage

Some program expenditures may be deemed impossible to evaluate because they comprise overhead costs similar to those commonly found in internal services, or because security and/or privacy issues would block access to the relevant data.

Although there is no defined process for refining the definition of coverage, TBS may be open to lowering the target coverage if a convincing case is put forward.  Alternatively, it may agree to limit its expectations for evaluations of this type of spending.

Maximize the use of targeted summative evaluations

The Policy provides for a flexible suite of evaluation approaches within the Evaluation Plan, and targeted summative evaluations are acceptable to TBS for smaller programs with low or medium risk levels.  Although small programs linked to strategic outcomes may be deemed high risk because they attract considerable public attention, the cost of an evaluation should bear some relation to the cost of the program, and some programs will inevitably carry lower risk than others.  With a larger number of evaluations to be done, applying risk factors to planning will become more critical to minimizing costs and maintaining an appropriate cost-to-value ratio.

Increase the productivity of existing resources

The evaluation unit can pursue a number of productivity improvements.  These include:

  • utilizing students for data coding,
  • introducing automated surveys to replace some interviews,
  • collaborating with the performance measurement group to develop data collection tools,
  • training evaluators to increase data analysis capability,
  • holding programs more accountable for program data collection, and
  • restructuring the roles and responsibilities of managers and directors to better utilize existing resources.

Contract Out

Contracting is the most common option for supplementing staff capacity.  However, it increases the risk of compromised quality and sometimes creates a need to rewrite reports to better address terminology, political sensitivities, and the like.

One approach to mitigating this risk is to manage all evaluations internally and to contract out elements (primarily the fieldwork) rather than the entire project.  Components that typically lend themselves to contracting out include literature searches, data entry, interviews, survey design, data collection, peer review, and helping programs set up data collection tools at the front end of projects.  Developing comprehensive guidelines for contractor use can also help address the quality challenge.

With the new Policy driving up demand for contractors in all departments simultaneously, many evaluation directors believe there is not an adequate supply of experienced evaluation consultants.  One option is to assess and accept alternative experience, such as research or management reviews, for its applicability to evaluations or to some of their sub-components.

Finally, expansion of contracting out would require evaluation managers or project leads to develop contract management skills and the ability to articulate quality standards for use by contractors.

Increase the Evaluation Staffing Complement

The main challenge in relying on additional personnel to meet the new targets is staffing the positions.  Most departments report difficulties in staffing due to a lack of qualified and experienced candidates, and if all evaluation units across government grow at the same time, the candidate pool will not be adequate to meet the need.  Without an increase in supply, departments may resort to classification increases and incentives to poach staff from one another, which will increase the cost of the evaluation function without increasing its output, and the overall percentage of vacant positions across government will rise.

Since recruiting experienced staff is already difficult and is likely to become more so, departments need to put developmental programs in place to train as many evaluators as possible before April 2013.

One alternative to some staff increases in evaluation would be to collaborate with the research unit, where one exists, to include some of its products as part of an evaluation; note that the Policy requires such a report to be approved by the Head of Evaluation to qualify for inclusion in the evaluation coverage statistics.

Suggested Better Practices

The following is a list of some better practices suggested by the benchmarked departments and agencies on the basis of their experience:

  • Have separate Performance Measurement and Evaluation units, permitting more focus on evaluation.  This does not preclude collaboration between the two units to gain efficiencies in data collection.  The Evaluation unit can also engage in joint planning or collaboration with Audit and/or Research units.
  • Use in-house teams to manage and be involved in every evaluation, even if much of the work is contracted out.  Also, ensure that there are well-articulated quality standards for contracts.
  • Establish Memoranda of Understanding with universities to subcontract parts of evaluations, such as literature reviews.
  • Involve the Evaluation unit in the early-stage development of all MCs and TBS submissions.
  • Ensure that the Evaluation unit has access to a data-mining expert.

With respect to HR recruitment and retention for evaluation units, suggestions included the following:

  • Use collective staffing pools.
  • Maintain flexibility in recruitment and developmental progression by having excess unfunded positions classified with varying linguistic profiles.
  • Hire Management Training Program graduates and/or use the Federal Student Work Experience Program (FSWEP) with a bridging option.
  • Provide developmental programs for ES-2s and ES-3s, and ensure that evaluation assignments are included in formal or informal departmental professional development, management development, or internship programs.
  • Maintain a focus on work-life balance with many flexible work arrangements.

Conclusion

The new Evaluation Policy will present transitional challenges for some departments and agencies over the coming months. However, there are reasons for maintaining a robust evaluation program that go beyond mere compliance with the Policy. 

In view of the GOC strategic and operational review, the survivors are likely to be programs with a clear and well-articulated Program Activity Architecture (PAA), backed by a well-integrated performance measurement and evaluation framework.

In the longer term, drastic 'program review' will not last forever.  Public Service culture is intrinsically oriented toward outcomes for the public good.  Evaluation practitioners are among the strongest proponents of this orientation and, in the right environment, the evaluation function can be part of a positive motivating force for program managers and staff throughout the Public Service.
