How (and whether) to write recommendations

Want to hear my elevator speech about what an evaluator does? It goes something like this: 

“What do you do?” 

“I’m an evaluator.” 

“What’s that?” 

“When an organization runs a program or initiative and they want to know how well they did, they hire an evaluator to help them measure and report back.” 

“Cool, so you tell them how to do better?” 

“Uhh…sometimes.” 

See how it kind of falls apart at the end there? It’s not that I don’t know my own work; it’s that the scope of the evaluator role has been bouncing around in my brain for a while now. Do we tell programs how to improve? Or do we simply share the data with them and let them draw their own conclusions? 


Does an evaluator make recommendations?

I had a mentor for the first few years of my evaluation career. He had an answer to this: “No.” He firmly believed that an evaluator’s role is to stay neutral: design data collection strategies, implement them, analyze the resulting data, and report back in a utilization-focused way. 

Michael Quinn Patton would argue there are definitely situations where an evaluator should be anything but neutral, such as when evaluating complex innovations where evaluation feeds into design, but that’s not really what I’m driving at here. So, in a traditional formative/summative evaluation, what is the scope of an evaluator’s role: should an evaluator make recommendations? 

Let’s argue both sides of this: 

Pro Recommendations: 

  • Clients look for recommendations; they are expected. 

  • Recommendations are the “now what” of reporting results. 

  • You’ve gathered and analyzed data, and you’ve interpreted it (the “so what”). Recommendations are the next logical step. 

Con Recommendations: 

  • The evaluator is not the expert.  

  • Imagine recommending that a program change its approach (think harm reduction vs. abstinence, or patient-centred care vs. physician-directed care), undermining the very expertise and approach being tested. 

  • The evaluator does not know the entire context.  

  • Imagine recommending that an organization hire a Communications Advisor to help get their message out and build awareness, only to learn that their funding has recently been cut in half. 


So where does that leave us?

One option is to side-step the issue: hearing my mentor’s voice in my head, I have written reports that stop just shy of “recommendations” but definitely include “Key Lessons Learned.”

So, I might say: 

“Training attendance was better on weekday mornings.”  

Rather than saying: 

“Recommendation #1. Run training sessions on weekday mornings.” 


Are we splitting hairs? Is there an actual difference here? 

Part of the difference is in human psychology. Not only do people generally not like being told what to do, but change management literature says that people will engage in change more readily if they come up with the idea themselves. So, if you say, “Training was better in the morning” and the project lead says, “Hey, we should run all sessions in the morning” – they get the credit, and the change is more likely to happen and be sustained. 

Another option might be to use softer language, so instead of: 

“Recommendation #1. Run training sessions on weekday mornings.” 

I might write: 

“Consider alternative delivery dates and times. Review methods for optimizing the schedule of delivery based on participant needs and trainer capacity.” 


Again, this feels a little like cheating. And it doesn’t answer the question of whether an evaluator should make recommendations. I think that’s because, ultimately, it depends on the relationship between the evaluator and the client, the strength of the evidence, the evaluator’s level of knowledge, and how integrated the evaluator is within the program’s operations. 

Michael Scriven tells us that “lessons learned—of whatever type—should be sought diligently, expressed cautiously, and applied even more cautiously.” Scriven suggested that “micro-recommendations,” which offer commentary or suggestions about implementation or operational details, may be very appropriate within a formative evaluation, while macro-recommendations (think “adopt, adapt, or abandon”) are not necessarily the evaluator’s role and should not be made unless the evaluator has extensive knowledge and knows the context well. 

Micro- and macro-recommendations are two types, but there are others, and not all recommendations are created equal. Recommending that the coffee vendor at the conference be swapped for better-quality coffee is not really the same as recommending that a program close down due to poor outcomes. Yet another type of recommendation (to avoid) is the less-than-helpful “More research is needed,” which can be valid but probably isn’t what your client is looking for. 

So, if your client wants recommendations, or perhaps your role as the evaluator is integrated and knowledgeable enough to warrant recommendations, consider the following… recommendations (see what I did there?):


Drafting Recommendations

1. Plan early! 

  • When you are scoping an evaluation with your clients, ask, “Are you seeking recommendations?” Knowing their expectations can frame the purpose of the evaluation and prepare you to deliver a product that meets their needs. 

  • When you are working with your client to develop the key evaluation questions that frame your evaluation plan, ask your stakeholders, “What would you do differently if the outcome were A? What if it were B?” 

2. Engage stakeholders in coming up with recommendations

  • Share results early and often with your key stakeholders. Conduct a sense-making session to review the findings and gather their insights on what actions may be feasible as a result of lessons learned. This is sort of the best of both worlds: technically your report will include recommendations, but technically they didn’t just emerge from your brain! 

3. Validate draft recommendations

  • Are they actionable? Feasible? Under the control of the stakeholders? 

  • Consider conducting a brief literature review, or comparing your draft recommendations against existing reviews. 

4. Make sure they are justifiable

  • Link your recommendations directly and clearly to the data that support them. 

  • If possible, seek out multiple data sources that align around the same path forward, including perspectives from stakeholders at multiple levels. Think about who the decision-maker is and who would be responsible for implementation. 


Presenting Recommendations

1. Don’t bury them in the narrative of a report! 

  • Think about where to include your recommendations in the final report. Some writers like to showcase them upfront as a quick highlight or summary of the report; others save them for the end, after the results have been presented in full; still others embed recommendations throughout the report, often right below the data that informed them. In any case, ensure they stand out clearly as recommendations and not as hard-to-find, half-hearted suggestions. 

2. Consider whether there is a way to group recommendations by urgency, by ease, by impact, or by content theme

3. Though ideally you’ll have engaged stakeholders in crafting the recommendations, it’s always good practice to facilitate a discussion afterward as well

  • Help the team do some visioning around what would happen if the recommendation were actioned. Is there evidence you can share? Are there indicators you can offer to help the client measure the impact? 

4. Be concise

  • Be clear, but not directive.  

  • Try to limit your recommendations to those that are most valid and actionable. 

  • Avoid lumping several recommendations into one. 


Other Considerations 

1. If possible, give options

  • Perhaps the data showed that mornings are better for training. One recommendation could be to run training in the mornings; alternatively, if you address the barrier that made mornings better, you may arrive at a different recommendation: “Offer lunch vouchers and free parking for afternoon training sessions.” 

2. If you are an internal evaluator, try to build in time to follow up after your recommendations have been made 


Making poor recommendations can undermine your credibility as an evaluator; conversely, strong, relevant recommendations are valuable and likely exactly what your client is looking for. 

If you’re looking for more tips on how to craft an amazing evaluation report, check out our 6-part series on Renovating Your Evaluation Report or how to deliver less-than-stellar results to your clients. 



 