Monday, May 10, 2010

"Yes, I'm happy. Except for my wife leaving, my dog dying and I lost my job..." The importance of contributing factors.

Sounds sort of like the storyline to a blues song. But the subject is just there to make a point: you can't measure performance data in a vacuum; there are always contributing factors that need to be brought in to give context to the data.

Here is a good news story from Fast Company magazine's website. The good news, for those of us who believe climate change is happening (the snow in my backyard this weekend should certainly be proof of that!), is that carbon emissions have dropped. Sort of. We think.

Why isn't anyone trumpeting the news that the USA was able to slash emissions by 7 percent? Because the decline has a number of very big contributing factors, the recession being a huge one. Less economic activity = less emissions, plain and simple.

Or how about population growth? With a 21-month-old already filling my garbage with diapers, and another little one on the way in the next week or two, I can say with certainty that population increases are directly related to carbon emission increases. So if the USA's population growth had slowed, emissions could have dropped for that reason alone, giving the false impression that reduction efforts were working.

It's important to look at those contributing factors because they tell a larger story: the drop in carbon emissions may have very little to do with the efforts of Americans to reduce them.

A figure that I found much more interesting was "a 4.3% drop in the carbon intensity of the energy sector due to increased use of renewables and natural gas production efficiency improvements". Here you have cause and effect nicely bundled together - the cause: renewables and natural gas efficiencies; the effect: a drop in carbon intensity.
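This kind of decomposition is something you can actually compute. Here is a minimal sketch - my own illustration, not from the article, and with every number invented - using the Kaya identity, a standard way of factoring emissions into population, GDP per capita, energy intensity and carbon intensity:

```python
# A minimal sketch of separating out contributing factors using the Kaya
# identity, which factors emissions as:
#   CO2 = population * (GDP / population) * (energy / GDP) * (CO2 / energy)
# All figures below are invented for illustration; they are not real US data.

def kaya_emissions(population, gdp_per_capita, energy_intensity, carbon_intensity):
    """Total emissions as the product of the four Kaya factors."""
    return population * gdp_per_capita * energy_intensity * carbon_intensity

before = dict(population=300e6, gdp_per_capita=48_000,
              energy_intensity=7.5e-3, carbon_intensity=0.06)
after = dict(before,
             gdp_per_capita=46_000,                 # recession shrinks output
             carbon_intensity=0.06 * (1 - 0.043))   # 4.3% cleaner energy mix

e0, e1 = kaya_emissions(**before), kaya_emissions(**after)
print(f"overall change: {(e1 - e0) / e0:+.1%}")

# Vary one factor at a time to see roughly how much of the change it explains.
for factor in before:
    partial = dict(before, **{factor: after[factor]})
    print(f"{factor:>16}: {(kaya_emissions(**partial) - e0) / e0:+.1%}")
```

Varying one factor at a time is a crude attribution - the factors interact - but even this rough cut separates "the recession did it" from "cleaner energy did it".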

Accounting for contributing factors when designing performance measurement frameworks and planning documents is difficult, and the job is often relegated to an "environmental scan" section of a plan. But finding ways of integrating that data into the measures themselves can be extremely important, given the influence other factors can have on your results.
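One practical way to integrate a contributing factor, rather than leaving it in the environmental scan, is to normalize the measure by it. A minimal sketch, again with invented numbers:

```python
# A minimal sketch of normalizing a raw measure by a contributing factor,
# so the indicator tracks performance rather than the business cycle.
# Both series below are invented for illustration.

years = [2007, 2008, 2009]
emissions_gt = [6.0, 5.9, 5.5]   # hypothetical gigatonnes of CO2
gdp_tr = [14.5, 14.4, 14.1]      # hypothetical trillions of dollars

for year, e, g in zip(years, emissions_gt, gdp_tr):
    print(f"{year}: raw = {e:.1f} Gt, per unit of GDP = {e / g:.3f} Gt/$T")
```

If the normalized series falls less steeply than the raw one, the gap is roughly the part of the drop that the recession, not any reduction effort, explains.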

Thursday, May 6, 2010

Evaluation theory vs. program design

I got a question in a presentation yesterday that had me somewhat stumped. The question was, essentially: what are we supposed to do when the planners tell us to take a results-based approach, where we plan and measure only the key things, and the auditors tell us they want formal, activities-based plans with exhaustive lists of plans and measurements? Good question, and not one I had an immediate answer to.

Then I got to thinking about a presentation that a friend of mine at Grant Thornton LLP forwarded me recently. In it, one of the panelists talked about the divergence that occurs between evaluation theory and program design. That, it occurred to me, was the problem.

This problem, which seems to arise in public sector programs caught between results-based management and evaluation, comes down to the relationship between the two schools of thought. Evaluation theory is based on a rational approach: you perform a needs assessment; you develop a logic model; you allocate resources; then you monitor and evaluate as you continuously improve the quality of your program delivery.
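For readers who haven't built one, a logic model is simply the chain from inputs through activities and outputs to outcomes. Here is a minimal sketch of how that chain might be represented; the program content is invented for illustration:

```python
# A minimal sketch of a logic model as a data structure, following the
# usual inputs -> activities -> outputs -> outcomes chain. The program
# below is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    program: str
    inputs: list = field(default_factory=list)      # resources consumed
    activities: list = field(default_factory=list)  # what the program does
    outputs: list = field(default_factory=list)     # what it directly produces
    outcomes: list = field(default_factory=list)    # the change it aims to cause

model = LogicModel(
    program="Job training pilot",
    inputs=["funding", "trainers"],
    activities=["deliver workshops"],
    outputs=["workshops held", "participants trained"],
    outcomes=["participants employed six months later"],
)
```

Monitoring attaches measures to the outputs; evaluation asks whether the outcomes actually happened, and whether the program caused them.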

Program design, however, can often go quite differently: a politician (likely the Minister) conceives of a program to respond to what he or she perceives as a public need, or is under public pressure over; the design then begins to respond to internal and external factors (e.g., a lack of resources, so the program looks for cost-sharing opportunities). So what emerges is not a program defined by needs assessments and logic models, but one defined more by external factors and political whims.

So how, then, do you reconcile the two? Because, surely, the likelihood of eliminating program evaluations and audits is extremely low. (And I would never advocate such an idea, as these evaluations and audits can provide valuable information!) Nor is it likely that politicians will begin to make decisions based solely on rationality rather than public pressure.

I don't have an answer, but I would be interested to know what other people think.

Monday, May 3, 2010

To do: (1) Make plan. (2) Execute plan. (3) Evaluate results. (4) Modify plan.



Planning is essentially a to-do list -- for a team, for a work group, for a branch, for an organization. Check out this article from Fast Company on how to make a good to-do list.
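The four steps in the title form a feedback loop, with step (4) feeding back into step (2). Here is a minimal sketch of that loop; the tasks and the evaluation rule are placeholders invented for illustration:

```python
# A minimal sketch of the title's loop: make a plan, execute it, evaluate
# the results, modify the plan, repeat. Every step here is a placeholder.

def execute(plan):
    """Pretend to carry out the plan; return a crude measured result."""
    return {"tasks_done": len(plan)}

def met_target(result, target):
    return result["tasks_done"] >= target

plan = ["draft budget", "assign owners", "set milestones"]   # (1) make plan
target = 5

for cycle in range(10):
    result = execute(plan)                        # (2) execute plan
    if met_target(result, target):                # (3) evaluate results
        break
    plan.append(f"follow-up task {cycle + 1}")    # (4) modify plan

print(plan)
```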