Wednesday, June 16, 2010

*hiatus*

On a small hiatus while on parental leave. I'll try to post irregularly until mid-February, when it's back to the grind.

Monday, May 10, 2010

"Yes, I'm happy. Except for my wife leaving, my dog dying and I lost my job..." The importance of contributing factors.

Sounds sort of like the storyline to a blues song. But the subject is just there to make a point: you can't measure performance data in a vacuum; there are always contributing factors that need to be brought in to give context to the data.

Here is a good news story from Fast Company magazine's website. The good news for those of us who believe climate change is happening (the snow in my backyard this weekend should certainly be proof of that!) is that carbon emissions have dropped. Sort of. We think.

Why isn't anyone trumpeting the news that the USA was able to slash emissions by 7 percent? Because the decline in emissions has a number of very big contributing factors, the recession being a huge one. Less economic activity = less emissions, plain and simple.

Or how about population increases? With a 21-month-old already filling my garbage with diapers, and another little one on the way in the next week or two, I can say with certainty that population increases are directly related to carbon emission increases. So if the USA's population growth had slowed, that alone could pull emissions down and give the false impression that emission-reduction efforts were working.

It's important to look at those contributing factors because they tell a larger story about the drop in carbon emissions - one that has very little to do with the efforts of Americans to reduce emissions.

A figure that I found much more interesting was "a 4.3% drop in the carbon intensity of the energy sector due to increased use of renewables and natural gas production efficiency improvements". You have cause and effect nicely bundled together here - cause, renewables and natural gas efficiencies; effect, drop in carbon intensity.

Looking at contributing factors when designing performance measurement frameworks and planning documents is extremely difficult, and often relegated to an "environmental scan" section of a plan. But finding ways of integrating the data can be extremely important, given the role that other factors can play in shaping your data.
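For the numerically inclined, here's a toy sketch of what separating out contributing factors can look like. It uses a simplified Kaya-style identity (emissions = population × GDP per capita × energy intensity × carbon intensity), and every number in it is invented for illustration - none of them are the article's figures. The point is just that a recession-driven drop in GDP per capita can account for as much of the decline as a genuine improvement in carbon intensity.

```python
# Toy decomposition of a change in emissions into contributing factors.
# Every figure below is invented for illustration; these are NOT the
# article's numbers.

def emissions(population, gdp_per_capita, energy_per_gdp, co2_per_energy):
    """Kaya-style identity: emissions = P * (GDP/P) * (E/GDP) * (CO2/E)."""
    return population * gdp_per_capita * energy_per_gdp * co2_per_energy

# Hypothetical "before" and "after" years.
before = dict(population=300e6, gdp_per_capita=48_000,
              energy_per_gdp=0.006, co2_per_energy=0.500)
after = dict(population=303e6, gdp_per_capita=46_000,
             energy_per_gdp=0.006, co2_per_energy=0.4785)

total_change = emissions(**after) / emissions(**before) - 1

# Attribute the change factor by factor: swap in one "after" value at a time.
for factor in before:
    trial = dict(before, **{factor: after[factor]})
    contribution = emissions(**trial) / emissions(**before) - 1
    print(f"{factor:>16}: {contribution:+.1%}")

print(f"{'total':>16}: {total_change:+.1%}")
```

Run it and the recession (the drop in GDP per capita) shows up contributing roughly as much to the total decline as the drop in carbon intensity does - which is exactly why the headline number on its own is misleading.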

Thursday, May 6, 2010

Evaluation theory vs. Program design

I had a question in a presentation yesterday that had me somewhat stumped. The question was, essentially: what are we supposed to do when the planners tell us to take a results-based approach where we only plan and measure the key things, and the auditors tell us they want formal plans that are activities-based and have exhaustive lists of plans and measurements? Good question, and not one that I had the answer to immediately.

Then I got to thinking about a presentation that a friend of mine at Grant Thornton LLP forwarded me recently. In it, one of the panelists talked about the divergence that occurs between evaluation theory and program design. That, it occurred to me, was the problem.

This problem that seems to arise in public sector programs between results-based management and evaluation comes down to the relationship between the two schools of thought. Evaluation theory is based on a rational approach: you perform a needs assessment; you develop a logic model; you allocate resources; then you monitor and evaluate as you continuously improve the quality of your program delivery.

Program design, however, can often go a bit differently: a politician (likely the Minister) conceives of a program to respond to what he or she perceives as a public need, or is receiving public pressure over; and then the program design begins to respond to internal and external factors (e.g., a lack of resources, so it looks for cost-sharing opportunities). So what emerges is not a program defined by needs assessments and logic models, but one that is defined more by external factors and political whims.

So how, then, do you resolve the two? Because, surely, the likelihood of eliminating program evaluations and audits is extremely low. (And I would never advocate such an idea, as these evaluations and audits can provide valuable information!) And neither is it likely that politicians will begin to make decisions solely based on rationality, and not public pressure.

I don't have the answer, but I would be interested in knowing what other people think.

Monday, May 3, 2010

To do: (1) Make plan. (2) Execute plan. (3) Evaluate results. (4) Modify Plan.



Planning is essentially a to-do list -- for a team, for a work group, for a branch, for an organization. Check out this article from Fast Company on how to make a good to-do list.

Friday, April 30, 2010

A planner walks into an Apple store...

It sounds like the start of a bad joke. But I was shopping in my local Apple store this morning. And as I picked up and paid for my new wireless keyboard / mouse, it struck me that there was a lesson I could take home with regards to good planning.

Apple stores are cleverly set up in a way that provides a basic structure, but allows the user to define the experience they have within that structure.

You walk into an Apple store knowing it sells computers, iPhones, iPods - and a select few peripherals. You might not be sure what you're looking for (e.g., I went in the other week wanting to know more about audio editing software), just that you need some information from them. A person at the front of the store will direct you where you want to go -- or, if you feel you need extra help, will guide a staff person over to assist you.

Then it's up to you, as the user, to decide what your experience is: do you want to buy something? Check your email at one of the many computers? Just play around with the latest gadget?

There is no check-out counter giving you the feeling that you are there solely to purchase something. Instead, Apple wants you to experience their products - and they are confident that the experience you have will be enough to convince you to buy that product.

Imagine translating this to the planning field. You create a basic planning framework that outlines the principles of planning and the information you need. And that's it. From there you let the users of the planning framework define the experience they want to have. They decide how they want to plan; and you're there to direct them, and provide a guide when they need it.

Most planning frameworks are prescriptive, creating templates and endless pages of instructions so that planners have an exhaustive list of information when all is said and done. And the users of the planning framework often just feel that they're "feeding the beast" because they're only planning for what they're told to plan, not necessarily what they want to plan.

Flipping a planning framework from a prescriptive process to a user-defined process would be an interesting challenge, but one that could certainly lead to great rewards: user buy-in; a more reflective product; and efficiency gains from planners, who can then shift more of their focus to analysis.

Thursday, April 15, 2010

9 hallmarks of successful risk / planning processes

I came across this article recently on the hallmarks of a successful organization-wide risk management regime. It notes that the signs of successful enterprise risk management (ERM) are:

1) Board-level commitment to ERM as a critical framework for successful decision making and driving value

2) Dedicated risk executive in a senior-level position, driving and facilitating the ERM process

3) ERM culture that encourages full engagement and accountability at all levels of the organization

4) Engagement of stakeholders in risk management strategy development and policy setting

5) Transparency of risk communication

6) Integration of financial and operational risk information into decision making

7) Use of sophisticated quantification methods to understand risk and demonstrate added value through risk management

8) Identification of new and emerging risks using internal data as well as information from external providers

9) A move from focusing on risk avoidance and mitigation to leveraging risk and risk management options that extract value


This got me to thinking about how tied together risk and planning really are, since I'd say that with just a bit of editing you could make this into a list of the 9 hallmarks of a successful planning regime:

1) Senior-management level commitment to results-based planning as a critical framework for successful decision making and driving value

2) Dedicated executive in a senior-level position, driving and facilitating the planning, reporting and performance measurement process

3) A planning and reporting culture that encourages full engagement and accountability at all levels of the organization

4) Engagement of stakeholders in strategic and operational level work plans, and the identification of appropriate performance measures

5) Transparency of communication

6) Integration of financial and operational information into decision making through integration into planning

7) Use of sophisticated quantification methods to understand and demonstrate added value

8) Identification of new and emerging activities / objectives using internal data as well as information from external providers

9) A move from focusing on short-term activity planning to broader definitions of results and objectives that extract value for stakeholders


Wednesday, April 14, 2010

Risk identification and management

Below is some text excerpted from a presentation I am giving next week on risk identification and management:

What is risk?
A state of uncertainty where some of the possibilities involve a loss, catastrophe or other undesirable outcome. (Borrowed from: The Failure of Risk Management: Why It's Broken and How to Fix It by Doug Hubbard)

Risk is measured by:
- Impact: How severe of an impact will there be if the risk occurs?
- Likelihood: How likely is it that this risk will occur?
- Example: The impact of a plane crash is that I may die (a 66% chance, actually). The likelihood of my plane crashing is about 1 in 11 million. (See the quick sketch below.)
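To make the arithmetic concrete, here's a quick sketch - my own, not from Hubbard's book - of the basic risk calculation, expected risk = likelihood × impact, using the plane-crash numbers above:

```python
# Basic risk arithmetic: expected risk = likelihood x impact.
# The numbers come from the plane example above.

likelihood_of_crash = 1 / 11_000_000  # chance that a given flight crashes
impact_given_crash = 0.66             # chance that a crash is fatal

expected_fatality_risk = likelihood_of_crash * impact_given_crash
print(f"Chance of dying on any given flight: "
      f"about 1 in {1 / expected_fatality_risk:,.0f}")
# -> about 1 in 16,666,667
```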


How do you identify risks?

Risks are events that, when triggered, cause problems. Risk identification can start with the source of problems, or with the problem itself.

Examples of risk sources are: stakeholders of a project, access to/stability of funding, or political influences.

When either source or problem is known, the events that a source may trigger or the events that can lead to a problem can be investigated. Example: Stakeholders withdrawing during a project may endanger funding of the project.

Risk Management vs. Risk Mitigation
Risk management is the identification, assessment, and prioritization of risks, followed by a coordinated and economical application of resources to minimize, monitor, and control the likelihood and/or impact of events.

Risk mitigation is a way to manage risk.

Organizations often focus a lot of effort on risk mitigation, and much of their risk management language is centred around risk mitigation.

But there are other ways to manage risk…

Ways to manage risk:

(1) Avoid it.
Eliminate the risk entirely by withdrawing yourself from any exposure to it.

Example: I don’t want to die in a plane crash, so I will no longer fly in airplanes.

(2) Reduce it.
The primary choice of many organizations. Find ways to mitigate your risk – reduce its likelihood or impact.

Example: Always take a direct flight. Most plane crashes occur on take-off or landing. So mitigate the likelihood of the risk by reducing take-offs and landings.

(3) Share it.
Transfer, outsource, insure or find other ways to share your risk with a partner.

Example: My Home Depot credit card. Home Depot wants to drive customer loyalty by offering store credit, but they don’t want to assume the risks of late payments, people defaulting, refunding charges because of stolen cards, etc. So they partner with CitiGroup, who runs similar programs for several dozen other stores. They share the risk with a credit company, who in turn shares the risk across customers gained by issuing cards to dozens of stores.

Could you share risks with your strategic partners?

(4) Own it.
Risk happens. Sometimes you need to accept that the risk is a part of the business you’re in, and budget for the occurrence of it.

Example: The Government of Canada's self-insurance of drivers and vehicles. The GoC has accepted that car accidents involving its employees will happen on the job. The cost of accepting the risk and budgeting for costs associated with car accidents is less than the cost of transferring the risk to an insurance company. So the GoC self-insures.

Risks vs. Challenges
Remember, a risk is a state of uncertainty where some of the possibilities involve a loss, catastrophe or other undesirable outcome.

Challenges, then, are states of certainty where loss, catastrophe or other undesirable outcomes will occur.

The same management techniques apply. The only difference is you know the likelihood of these events is 100%.


Integrating risk into planning

Establish a picture of the risks you face as an organization (a risk profile).

Identify risk management strategies for key activities / objectives.

Identify risk mitigation strategies (where appropriate) for your deliverables / milestones.

Read more on risk management.

Monday, April 12, 2010

Cola for breakfast and other bad plans.

This morning I was looking for examples of planning gone horribly wrong, for use in a work planning session I'm hosting in Yellowknife, NWT, next week. Even though we all know they exist, I found it surprisingly hard to find a couple of short anecdotes. Here are a couple of the ones I did find:

(original blog post of these stories here)

Pepsi AM

- In the late 1980s, Pepsi saw an unexploited consumer: the breakfast cola drinker.
- Although they hadn’t conducted much market research, they thought many young adults would rather drink a caffeinated cola for breakfast, instead of a coffee.
- Pepsi failed to assess that there was no demand for a separate product for breakfast consumption.

Maxwell House ready-to-drink coffee

- In 1990, General Foods launched cartons of Maxwell House ready-to-drink coffee.
- The refrigerated cartons couldn’t be microwaved in the original container. In order to enjoy your convenient cup of coffee, you had to pour it into another container before popping it in the microwave – thus eliminating the convenience.
- Maxwell House failed to plan how they would reach their objective of providing ready-to-drink coffee.

If you know of some other great misadventures in planning, please share them.

Wednesday, April 7, 2010

ER + PI + Target = Performance Story

How many people would be willing to take a final exam after reading only one chapter of the course textbook? Hopefully no one. So why do so many people only focus on one chapter of their performance story?

My experience has always been that most program managers feel their job is done once they've defined what their Expected Results (ER) are. Occasionally, I come across a program manager who also takes an interest in which Performance Indicators (PI) are used to measure progress against those results. But it's very rare that I find someone who finishes their performance story by also identifying appropriate targets.

Here's an example of how the three chapters fit together into a story.

Say you're a program manager responsible for delivering a program that treats drinking water on reserves. Your performance story might look something like this:

Expected Result: Clean and safe drinking water for the residents of the Little Buffalo Reserve.
Performance Indicator: Number of boil water advisories.

If you end your performance story there, you get misleading data. While there's no doubt that boil water advisories are directly related to clean and safe drinking water, you have to ask what your target is.

The way it reads right now, someone might be led to think that the more boil water advisories, the better job you're doing. But would you want to define your success by the number of times you communicate your failure to provide safe drinking water? Probably not. It's better to have no boil water advisories at all because you've put enough checks and balances in place to ensure safe and clean drinking water 100% of the time.

So, your performance story would be better if it read:

Expected Result: Clean and safe drinking water for the residents of the Little Buffalo Reserve.
Performance Indicator: Number of boil water advisories.
Target: Zero boil water advisories for 2010.

That's a performance story that makes sense.
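And if you wanted to bake that discipline into a tool, here's a minimal sketch - my own illustration, not an existing system or standard - of a performance story record that isn't considered complete until all three chapters are written:

```python
# Sketch of a three-chapter performance story: an Expected Result (ER),
# a Performance Indicator (PI), and a Target. The structure is my own
# illustration, not an existing standard or tool.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PerformanceStory:
    expected_result: str
    performance_indicator: str
    target: Optional[str] = None  # the chapter most often left unwritten

    def is_complete(self) -> bool:
        return self.target is not None

story = PerformanceStory(
    expected_result="Clean and safe drinking water for the residents "
                    "of the Little Buffalo Reserve.",
    performance_indicator="Number of boil water advisories.",
    target="Zero boil water advisories for 2010.",
)
assert story.is_complete()  # all three chapters present
```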

Read more about developing Key Performance Indicators.

Tuesday, April 6, 2010

Want to be ignored? Put it in an attachment.

Being a business planner is sometimes a lot like working in advertising - planners are fighting for a program manager's attention. Fighting with HR issues, finance reports, client issues...and, oh yeah, delivery of their program.

So here's a great tip to get completely ignored on your next e-mail call letter: Put all of your important information in an attachment!

This morning I saw an email go out to senior managers asking for quarterly reporting, and it contained no fewer than 14 separate attachments. The best part, of course, is that our internal information management system gives only a numbered reference, so you can't even preview the documents by looking at their titles.

How many managers do you think will open all those attachments? Or even read through the email, given the wall of attachments staring at them from across the bottom of the window?

When sending out a call letter, I like to keep it as simple as possible:

- Keep your intro short and to the point. Make sure you state what the objective of this call is / who the audience is / why it's being done.
- Make all of your important points in bullets. People don't read blocks of instructions. It's much easier for a program manager to follow a couple of bullet points on what they are expected to do.
- Only attach the necessary documents. If there are other documents that might be useful to people, send a separate e-mail with the various "reference" documents. Or offer in the email to follow up and provide further information.

Now I'm off to tie up the printer while I figure out the 14 attachments and how they interact.

Get a better idea of how to write emails in this book.

Or for a more academic perspective, check out this link:
http://www.useit.com/alertbox/newsletters.html

Tuesday, March 30, 2010

Use a RAM to make people cooperate

The bane of my existence as a planner is the person who, despite me having explained it a dozen times, comes to me at deadline and asks what they are supposed to do. Or what I want from them.

I've found that using a Responsibility Assignment Matrix (RAM) is an effective tool for herding just this sort of cat.

A RAM sets out, in a chart, the tasks that will be accomplished and all of the people involved in a process --- and then, at the intersection between person and task, defines what that person's role is.

Say, for example, I have a particularly difficult senior manager who rarely reviews and approves drafts -- then at the end of a process asks for several dozen changes. A RAM I might submit at the start of the project might have definitions like this:

Planner: Send out instructions.
Program Officer: Create first draft.
Senior Manager: Review & approve draft.

A RAM is really only effective if you use it at the very start of a project, as it defines how people will be involved. That's why it's usually one of the first documents that I introduce to a process - often at the Memorandum of Understanding or Agreement In Principle stage.
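For anyone who'd rather keep their matrix somewhere more structured than a spreadsheet, here's a minimal sketch of a RAM as a simple (person, task) to role lookup. The role labels loosely follow the common RACI convention, and the people and tasks mirror the example above - all of it my own illustration:

```python
# Sketch of a Responsibility Assignment Matrix: a (person, task) -> role
# lookup. Role labels loosely follow the common RACI convention; the
# people and tasks mirror the example above.

ram = {
    ("Planner", "Instructions"): "Responsible",
    ("Program Officer", "First draft"): "Responsible",
    ("Senior Manager", "First draft"): "Accountable",  # reviews & approves
}

def role_of(person: str, task: str) -> str:
    """What did we agree this person's role in this task would be?"""
    return ram.get((person, task), "Not involved")

# At deadline, point at the matrix instead of arguing from memory:
print(role_of("Senior Manager", "First draft"))    # -> Accountable
print(role_of("Program Officer", "Instructions"))  # -> Not involved
```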

With a RAM in place, when we reach deadline and someone hasn't fulfilled their role, I can point to the RAM and say "I thought that we had defined this as your role. Why didn't it work?" More often than not, the person will feel sufficiently chastised to let the project proceed -- and will make sure they understand their role next time I come to them with a RAM.

Read more about the Responsibility Assignment Matrix, and other Project Management Tools, in the Guide to the Project Management Body of Knowledge.

Monday, March 29, 2010

Break out the spandex and big hair - it's time to define your key performance indicators



It's hard to believe that the big hair behind such classics as Jump and Panama could be the basis for a lesson in performance measurement. But Van Halen, and David Lee Roth, are just that.

You've probably heard the story about Van Halen insisting that there be no brown M&Ms backstage. But what you may not have known is that it wasn't (just) the band acting like rock & roll divas; it was actually an ingenious performance indicator.

David Lee Roth explained the clause in the band's rider in his autobiography:

Van Halen was the first band to take huge productions into tertiary, third-level markets.

We’d pull up with nine eighteen-wheeler trucks, full of gear, where the standard was three trucks, max. And there were many, many technical errors — whether it was the girders couldn’t support the weight, or the flooring would sink in, or the doors weren’t big enough to move the gear through. [...] So, when I would walk backstage, if I saw a brown M&M in that bowl . . . well, line-check the entire production.

Guaranteed you’re going to arrive at a technical error. They didn’t read the contract. Guaranteed you’d run into a problem.

Sometimes it would threaten to just destroy the whole show. Something like, literally, life-threatening.


So next time you're trying to explain to a group of managers that you don't need to know the number of emails they sent last week, you only want performance data that gives an indication of the state of affairs -- try tossing on Hot For Teacher, and challenging them to find the brown M&Ms in their own programs.

Read more about Van Halen and the brown M&Ms. Check out Dan & Chip Heath's book: Made To Stick!

http://www.fastcompany.com/magazine/143/made-to-stick-the-telltale-brown-mampm.html