Delta Partners Management Consultants
Your trusted advisors.

Why All Public Service Managers Need to Care About Evaluation

Greg Tricklebank

Results-based management in its current iteration has been around for over ten years and has taken firm root in the Public Service modus operandi, as evidenced by the new Policy on Evaluation, which requires all direct spending to be evaluated every five years. Combined with the Strategic and Operating Review (SOR), this poses a major challenge for all managers and a major threat to those who adhere to the bureaucratic old school of the “Yes, Minister” variety.

By way of setting the context, Results for Canadians: A Management Framework for the Government of Canada was introduced around 1999. The President of the Treasury Board articulated its ‘loose-tight’ management philosophy as follows:

This philosophy underscores an important management balance: flexible enough on the delegation of decision-making authority and on administrative rules to support initiative and common sense - but tight enough on standards and control systems to ensure clear accountability. With the support of well-functioning management systems, …  delegation and accountability can be seen as essential and complementary elements of citizen-focused management.

This was released during a time of budget surpluses.  Since then, we’ve had 9/11 and a new round of fiscal deficit creation – and the loose-tight balance has veered toward the tight side. 

If you were a manager in the Public Service during the late nineties, it was all about developing Human Capital and Organizational Learning.  Now, it’s all about Expenditure Management.  According to the current Economic Action Plan, the objective of SOR is to achieve at least $5 billion in ongoing annual savings by 2014–15, placing particular emphasis on generating savings from operating expenses and improving productivity, while also examining the relevance and effectiveness of programs.

For all managers of programs and other PAA sub-activities, this means that your business case needs to be rock solid.

Although you won’t normally be conducting formal evaluations of your own programs and activities, there are at least two ways in which familiarity with, and use of, evaluation tools and techniques can help to improve your business case.

  1. The most obvious connection to a sound business case is through a clear program/activity logic model and aligned performance measurement regime.  There is no substitute for evidence concerning the relevance and performance of your programs and services (value for money).
  2. Your programs and services need to meet standards required by the Policy on Evaluation (evaluability).  Regardless of how good your business case sounds, it will not stand up if it can’t be evaluated effectively.

As a discipline, evaluation is not new. However, modern IT tools have made it possible to gather and synthesize large amounts of information, removing a principal barrier to results-based management based on the common evaluation lines of evidence:

  • Statistical analysis: Surveys have become easy to conduct and analyze using on-line questionnaires and Excel spreadsheets or SPSS statistical software.
  • Textual data from interviews, documents and case studies can now be managed and analysed with the aid of powerful Qualitative Data Analysis tools such as ATLAS.ti.
  • Literature reviews have become far less time-consuming due to the advent of the Internet and modern search engines.
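To make the first bullet concrete, here is a minimal sketch of the kind of descriptive analysis a manager might run on survey results exported from an on-line questionnaire. The question wording, response data, and `summarize` function are all hypothetical illustrations, not drawn from the article; only the Python standard library is used.

```python
# Illustrative sketch only: summarizing hypothetical Likert-scale
# responses (1 = strongly disagree ... 5 = strongly agree) to a
# survey question such as "The program met my needs".
from collections import Counter
from statistics import mean, median

# Hypothetical responses, as they might be parsed from a CSV export.
responses = [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]

def summarize(scores):
    """Return basic descriptive statistics for one survey question."""
    return {
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "median": median(scores),
        # How many respondents gave each score.
        "distribution": dict(Counter(scores)),
        # Share of respondents who agreed (scored 4 or 5).
        "pct_agree": round(100 * sum(1 for s in scores if s >= 4) / len(scores), 1),
    }

print(summarize(responses))
```

A spreadsheet or SPSS would of course do the same job; the point is simply that this level of quantitative evidence is now cheap to produce and maintain alongside a program’s performance measurement regime.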

A sound business case, based on well-articulated program logic and demonstrably evaluable lines of evidence (quantitative and qualitative), will have a much better chance of surviving Program Review.   Conversely, it is almost certain that even the best programs will be cut back if they cannot be evaluated effectively.

Beyond building the business case and maintaining performance information, managers are expected to fulfill additional responsibilities with respect to the evaluation function. Program managers are expected to:

  • Maintain a working relationship with program evaluators that balances co-operation with independence;
  • Consult with the departmental Head of Evaluation concerning performance measurement strategies for all direct spending under their direction; and,
  • Participate constructively in the development of a management response and action plan to address recommendations in all evaluation reports.

Finally, ‘evaluation’ constitutes a mind-set that includes, but goes beyond, performance measurement. The more holistic approach encouraged by the evaluation discipline provides the perspective needed to use performance information for more effective decision-making. This mindset needs to permeate all levels, from the front line all the way up to the Deputy Head, where the ‘use of evaluation in decision-making’ is a key element of the Management Accountability Framework.

Sooner or later, the loose-tight pendulum will swing back to a point where flexibility will once again be the emphasis. However, this does not mean that results-based management will disappear.

On the contrary, evaluation tools and techniques will be more needed than ever to exercise that flexibility in a responsible manner.

What do you think?

  • Is rigorous program evaluation here to stay?
  • Does the average public service manager need to care about evaluation?

About this Article

Posted by Greg Tricklebank
Posted on May 3, 2012

Categories: evaluation, management, performance measurement, planning & policy, public service renewal