Eval Academy

How to Write Good Evaluation Questions

Evaluators ask questions. All the time. We ask questions in focus groups, we write questions in surveys, we pose questions to our datasets. But the questions that really drive our work are evaluation questions.

What are evaluation questions?

Evaluation questions focus data collection. They are what our stakeholders need to answer. When they have the answer to these questions, they can tell their stories. As we’ve written, evaluation questions are the high-level questions an evaluation is designed to answer.

Knowing the definition of “evaluation question” is one thing; writing them is another. It can be challenging to write questions at just the right level: questions that will guide your choice of methods and the development of data collection tools, and that will actually yield the information stakeholders need.

Keep these points in mind, and you’ll be off to a good start.

Evaluation questions are informed by the evaluation purpose

Why are you doing this evaluation? Is it to support new policy development? Is it to inform a decision about expanding or contracting a program? Whatever the reason, that purpose will guide evaluation question development. For example, an evaluation intended to demonstrate accountability will likely include an evaluation question about meeting the funder’s requirements.

Write evaluation questions with your stakeholders

Stakeholder engagement is key throughout evaluation projects. Working closely with program leaders and operational staff will ensure that the questions you develop together are the right questions. There is no point in writing what you think are great questions if they don’t meet stakeholders’ needs. Group writing is hard—in your evaluation planning session, don’t worry about getting every word perfect. Make sure you understand the concept that is important, then finesse the language on your own.

Stay open

Evaluation questions should be open-ended (except when they don’t need to be… see our post on why the answer to so many evaluation methodology questions is “it depends”). Open-ended questions give room for a range of possible answers.

  • Closed-ended question: Did participants enjoy the program?

  • Open-ended question: How do participants characterize their experience?

See how the second question gives room for a range of responses beyond “yes” and “no”? It creates the opportunity for nuanced data that yields deeper insights, and that depth is what makes a good evaluation question.

Evaluation questions are not survey questions

Survey questions are very focused, while evaluation questions are broader. Multiple survey questions may be used to answer an evaluation question. If the question you write feels like something you’ve answered before in a survey, you haven’t written an evaluation question. Climb up a level and rewrite.

  • Survey question: How satisfied are you with the timeliness of the email from your support worker?

  • Evaluation question: To what extent are services delivered in a timely fashion?

The data from that survey question can be one of the indicators you use to answer the evaluation question.

Evaluation questions may have multiple indicators

Strong evaluations employ triangulation; that is, multiple views on the same question. One evaluation question may be answered by a combination of two, three or more indicators, relying on multiple methods of data collection.

  • Evaluation question: To what extent is the program having a positive impact on families?

  • Indicators:

    • Parents’ self-reported ability to attend training classes

    • Youth mental health scores

    • Changes in number of hours spent together each week

Together, this suite of indicators provides more reliable insight into the program’s impact than one indicator alone.

How many evaluation questions?

Well, it depends. For a very comprehensive evaluation of a major initiative, more evaluation questions may be required. You may need fewer questions for a simpler project. A general guideline is between five and seven evaluation questions, but it’s not uncommon to see between three and ten. Remember, every evaluation project is different—the main goal is to ensure that stakeholders’ information needs are met, but we must also consider feasibility. If your capacity to collect data is limited and cannot be increased, whether through existing resources or by hiring external help, you will likely need to stick with fewer evaluation questions.

Themes can help

Evaluation questions can be clustered into themes that are relevant to the purpose of the evaluation and the nature of the initiative. For example, my evaluation firm has worked on several healthcare projects that rely on a quality matrix for health. That matrix provides a common language and shared concepts among healthcare partners, so we use themes like accessibility and appropriateness to guide our evaluation questions. If your organization has a strategic plan or shared goals, those may be key to guiding your evaluation question development. Or look to other frameworks, like the RE-AIM framework, for inspiration on evaluation question themes.

(Check out this list for some content-specific examples of evaluation questions or this list for how questions change based on your evaluation framework or approach.)

Edit, edit, edit, then step back

Language matters when writing good evaluation questions. Changing just one word can mean the difference between clarity and ambiguity. Use the writing process that works for you, whether that’s working on paper, consulting with a colleague, or staring each word down until you find the absolute perfect alternative. If you’re a true evaluation nerd, there is immense satisfaction in writing the very best question you can. But remember, perfection is not always possible or practical, and just like that last literature review you wrote, sometimes you just need to call it done and move on.

How do you know when you have it right?

You’ll know you have near-perfect evaluation questions when:

  • Together, their answers will tell a high-level story of the initiative being evaluated

  • You have between three and ten questions

  • The questions cannot be answered by a simple yes/no, or by a number

  • Indicators and methods are already suggesting themselves

  • Your stakeholders (and you!) breathe a sigh of relief when they read them

For an extra round of review, try using our checklist.

What comes next?

After you’ve crafted fantastic evaluation questions, you’ll move on to selecting indicators and data collection methods. In doing so, you may need to revisit your evaluation questions and make minor modifications, or even add or remove questions altogether.
