Eval Academy

Three Ways to Increase the Chances Your Evaluation Results Will Actually Get Used

For the well-intentioned evaluator, utilization of evaluation results can be underwhelming. Time and time again, we hear of people going through an evaluation only to be disappointed that the findings didn't give them the answers they wanted. It is such a big problem that Michael Quinn Patton designed an entire approach to evaluation around it, called Utilization-Focused Evaluation (UFE). I'm going to save you from reading all 600-and-some pages of it and instead share three ways we at Three Hive Consulting help clients use the results from our evaluations.

 

1. Identify your A-Team and get them involved!

People will use information if it is the right information — in other words, the information they want or need for decision making. However, you need to find the "right" stakeholders to work with and to provide the right information to. It sounds simple enough, but the more complex the initiative, the greater the number and variety of stakeholders. You can try to meet the needs of everyone, but doing so often leads to not adequately meeting anybody's needs. Unless you have unlimited resources, which in my experience is never the case, you will need to identify your A-team. In UFE these are called "primary intended users." Primary intended users are the stakeholders who have the willingness, authority, and ability to use the findings. When you have identified your A-team, it will be much easier to design your evaluation — people drive purpose and purpose drives design.

A stakeholder matrix is a simple tool that organizes:

  • stakeholders,

  • the group they belong to,

  • what they see as the purpose for the evaluation,

  • how they will use the results, and

  • how they want to be involved throughout the evaluation.

 

Below is an example:

Stakeholder matrix where Service Providers are the A-team. The nature of their involvement is to 1) inform data collection and tool development, 2) collect and/or provide data, and 3) inform findings and recommendations.

Notice the last column, "Nature of Involvement." Can you guess who the A-team is? Chances are your A-team is the service providers. They are the ones who have multiple ways they can and should be involved throughout the evaluation. I say should because the more involved your A-team is throughout the evaluation process, the greater the chance they will use the findings in a meaningful way.

 

2. Tailor reporting to needs

One size does not fit all when it comes to evaluation reporting. Your findings need to be accessible and relevant to each stakeholder group, which means some extra work tailoring your reporting. One way to help meet the information needs of various stakeholder groups is to layer your content. In Kylie Hutchinson's "A Short Primer on Innovative Evaluation Reporting," she uses the analogy of a burger to explain that not everyone can digest the entire burger, so we need to layer in the fixings (i.e., executive summaries, one-page overviews, appendices, etc.) for people who only want to digest some of the burger.

A few years ago, we worked on a project where we needed to do just that. We ended up layering our reporting by providing the full burger (i.e. the final report), but also producing a variety of other reports throughout the evaluation:

  • Results briefing for the A-team: Detailed reports to help inform next steps in the evaluation

  • Results briefing for leadership group: A one-page summary report showing interim findings and next steps in the evaluation

  • Final evaluation report: A comprehensive report that contains detailed methods, findings, recommendations, conclusions, and appendices

  • Whiteboard video: A six-minute whiteboard video, made using VideoScribe, to visually tell the story

  • Evaluation summary: A one-page evaluation brief that summarized the final report and the Social Return on Investment

3. Stop being so boring! Facilitate use through interactive strategies

One of Three Hive's core values is "intelligence having fun." We don't believe that evaluation should be a boring make-work project where some outsider comes in, tells someone what needs to be done, asks for data, tells them what is wrong, and then recommends a bunch of things that aren't feasible or relevant. There is very little chance that any learning will result from that approach. As we know, most people do not truly learn through passively listening or reading — we learn by doing. This means that if we want people to truly understand and transform what they are doing, they need to be involved in the evaluation process.

Jean King and Laurie Stevahn's "Interactive Evaluation Practice" is a book I frequently refer to when I am looking for facilitation ideas. In it, they lay out what they call "An Evaluator's Dozen of Interactive Strategies." For each strategy, they describe the materials needed, instructions for how to conduct it, facilitation tips, variations, and when it can be used throughout the evaluation process (see the figure below).

People may initially moan and groan at the idea of interactive activities, but in the end, they really do enjoy them. After putting in the time to use these strategies, I have received feedback describing them as "engaging," "productive," and "approachable" ways to be involved in evaluations.


You'll notice that the title of this article isn't "Three simple ways to increase the chances your evaluation results will actually get used." You do have to build in extra time and resources to:

  1. identify your A-team and involve them throughout the process,

  2. tailor your reporting, and

  3. make things fun through interactive strategies.

However, the investment in these three areas will make a difference in how your clients understand and utilize findings and, as a result, will strengthen your credibility as an evaluator.


