Early in my studies in health sciences, I was under the impression that research and evaluation were largely the same – after all, many of the job postings I saw were for ‘research and evaluation analysts’. To me, it seemed that evaluation was a sub-branch of research looking at one specific program or intervention, while research aimed to identify areas in need of programming and compare outcomes across programs to understand best practices for different demographic groups. As I progressed through my program, I came to learn that this was not the case!

In this article, I summarize some key differences between research and evaluation in terms of their purpose, methods, dissemination, and implications.

If you’re pressed for time, bookmark this article to read later and check out our brief infographic on differences between research and evaluation instead!

 

Goals and Purpose

  1. Research

    • Purpose: The primary goal of research is to generate new knowledge, test theories, and contribute to the academic body of literature. Research seeks to answer specific questions or hypotheses.

    • Product: Research outcomes are usually published in academic journals and are intended to advance scientific understanding and theory.

  2. Evaluation

    • Purpose: The main purpose of evaluation is to assess the effectiveness, efficiency, and impact of programs or interventions. Evaluation aims to inform decision-making, improve programs, and guide policy.

    • Product: Evaluation outcomes are often actionable recommendations for program improvement, detailed in reports for individuals involved in the program’s development and delivery, including program managers, funders, and policymakers.

 

Flexibility and Adaptation

  1. Research

    • Flexibility: Research methods are generally less flexible, with a strong emphasis on maintaining methodological consistency and control to ensure validity and reliability, and to allow for replication by other researchers in the field.

    • Adaptation: Research designs are usually pre-specified and less likely to change once the study begins.

 

  2. Evaluation

    • Flexibility: Evaluation methods are more flexible and can be adapted as the evaluation progresses to better suit the needs of the program and those involved in its development and delivery.

    • Adaptation: Evaluators may adjust their approaches based on ongoing feedback and emerging findings, making the process more responsive and dynamic.

 

Data Collection Methods

  1. Research

    • Design: Research design is often rigid and follows a predefined methodology to ensure replicability and validity. This might include controlled experiments, longitudinal studies, or cross-sectional surveys.

    • Sample Selection: Sampling strategies in research are usually aimed at generalizability, ensuring that findings can be extrapolated to a larger population. Researchers often try to obtain as large a sample as possible to justify extrapolation and reduce the influence of random sampling error (a brief sample-size sketch follows below).
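
To make the sample-size logic concrete, here is a minimal sketch in Python using the statsmodels library. The effect size, alpha, and power values are illustrative assumptions, not universal standards:

```python
# Sample-size estimate for a two-group comparison (all values are assumptions).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed standardized effect (Cohen's d); a "medium" effect
    alpha=0.05,       # conventional significance level
    power=0.80,       # conventional target power
)
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

Larger target power or smaller expected effects drive the required sample size up, which is why researchers often recruit as broadly as they can.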

 

  2. Evaluation

    • Design: Evaluation design is more flexible and adaptive and is often tailored to the specific context and needs of the program being evaluated. Depending on the needs of the program, data may be collected through surveys, observations, interviews, or focus groups. There are different types of evaluation approaches, which include formative, summative, developmental, most significant change, and principles-focused evaluations, among others.

    • Sample Selection: Sampling in evaluation is typically purposive, focusing on individuals and groups directly involved in or affected by the program to gain relevant insights. Examples of individuals you may want to collect data from include individuals directly served by the program, members of the community the program serves, and staff or facilitators involved in implementing the program (the sketch below contrasts the two sampling approaches).
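
The contrast between random and purposive selection can be shown in a few lines of Python with pandas. The DataFrame, names, and roles below are invented purely for illustration:

```python
import pandas as pd

# A hypothetical contact list for a community program.
people = pd.DataFrame({
    "name": ["Ana", "Ben", "Cho", "Dev", "Eli", "Fay"],
    "role": ["participant", "community", "staff", "participant", "community", "staff"],
})

# Research-style random sampling: every person has an equal chance of selection,
# which supports generalizing to the wider population.
random_sample = people.sample(n=3, random_state=42)

# Evaluation-style purposive sampling: deliberately select those closest to the
# program, because their insights are most relevant to improving it.
purposive_sample = people[people["role"].isin(["participant", "staff"])]

print(random_sample)
print(purposive_sample)
```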

Analytic Methods

  1. Research

    • Quantitative Analysis: Involves statistical methods to test hypotheses, identify patterns, and establish correlations or causal relationships. The primary focus of quantitative analysis in research is to determine whether outcomes are statistically significant, or in other words, unlikely to be due to random chance alone. Common tools include SPSS, R, Stata, and SAS, among others (a minimal example follows this list).

    • Qualitative Analysis: Uses methods like thematic analysis, grounded theory, or discourse analysis to interpret textual or visual data. The goal of qualitative analysis in research is to generate or identify common or impactful narratives, theories, or phenomena among the population from which participants were sampled. Software such as NVivo or ATLAS.ti is often used to aid in the analytic process.

    • Multi- and Mixed-Methods Analysis: Researchers may use a combination of different quantitative and/or qualitative approaches to data collection and analysis, often to address a specific research question or aim.  
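
As a minimal illustration of hypothesis testing, here is a two-sample t-test in Python using scipy. The scores below are fabricated for illustration and do not come from any real study:

```python
from scipy import stats

# Hypothetical outcome scores for a control group and a treatment group.
control = [72, 68, 75, 70, 69, 74, 71, 73]
treatment = [78, 74, 80, 77, 75, 79, 76, 81]

# Two-sample t-test: is the difference in group means unlikely under chance alone?
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 is conventionally "significant"
```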

 

  2. Evaluation

    • Quantitative Analysis: Similar methods to research analyses may be used but are often applied to assess program outcomes, efficiency, and impact. The focus is on practical significance rather than statistical significance alone; in other words, quantitative analysis in program evaluation aims to determine whether the program contributed to meaningful positive changes for those it serves, not merely whether those changes are unlikely to be due to chance (a brief effect-size sketch follows this list).

    • Qualitative Analysis: Involves methods like content analysis, case studies, and thematic analysis to provide actionable insights and recommendations.

    • Multi- and Mixed-Methods Analysis: The use of multiple methods is common in evaluation, where one or more quantitative and/or qualitative approaches to data collection and analysis are used to address various evaluation questions or aims. The use of multiple methods allows evaluators to conduct a comprehensive evaluation capturing both the practical impacts of a program as well as the perspectives and experiences of the individuals receiving or delivering the program.
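
To sketch the practical-significance mindset, the example below estimates how large a change is (an effect size) instead of only testing whether it exists. The pre/post scores are fabricated for illustration:

```python
import numpy as np

# Hypothetical participant scores before and after a program.
pre = np.array([52, 48, 55, 50, 49, 54, 51, 53])
post = np.array([60, 55, 63, 58, 56, 61, 57, 62])

# How big is the improvement, and is it meaningful in context?
mean_change = post.mean() - pre.mean()
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
cohens_d = mean_change / pooled_sd  # standardized effect size

print(f"Average improvement: {mean_change:.1f} points (Cohen's d = {cohens_d:.2f})")
```

Whether an eight-point gain matters is a judgment call made with program staff and funders, not a question statistics alone can answer.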

 

Reporting and Dissemination

  1. Research

    • Format: Research findings are typically reported in academic articles, dissertations, or conference presentations. The focus is on theoretical contributions, methodological rigor, and scholarly discourse. These reports often must adhere to strict word limits and specific formatting rules, leaving less creative freedom in visualizing data than evaluation reports allow.

    • Audience: The primary audience for research reports includes academics, scholars, and students in the relevant field.

 

  2. Evaluation

    • Format: Evaluation findings are presented in practical, accessible reports that include recommendations for program improvement. These reports often incorporate easily digestible graphs and infographics to enhance readers’ understanding (a small charting sketch follows this list). Compared to research reports, evaluation reports typically allow greater flexibility in formatting, which is usually guided by the depth of the findings and the specific needs of the program.

    • Audience: The audience for evaluation reports includes program managers, funders, policymakers, and other impacted community members. The focus is on actionable insights and practical recommendations. Since evaluation reports are intended for audiences with a wide range of academic and practical backgrounds, it is often beneficial for evaluators to create a variety of deliverables (including comprehensive reports, executive summaries, infographics, and slide decks) tailored to the needs of each audience.
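
As a small example of the kind of digestible visual an evaluation report might include, here is a before/after bar chart in Python using matplotlib. The outcome categories and values are invented for illustration:

```python
import matplotlib.pyplot as plt

outcomes = ["Confidence", "Knowledge", "Skills"]
before = [45, 50, 40]
after = [70, 78, 65]

# Side-by-side bars: before vs. after the program, one pair per outcome.
x = range(len(outcomes))
plt.bar([i - 0.2 for i in x], before, width=0.4, label="Before program")
plt.bar([i + 0.2 for i in x], after, width=0.4, label="After program")
plt.xticks(list(x), outcomes)
plt.ylabel("Average self-rated score (0\u2013100)")
plt.title("Participant outcomes before and after the program")
plt.legend()
plt.savefig("program_outcomes.png")  # embed in a report, slide deck, or infographic
```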

 

Conclusion

While both research and evaluation involve systematic data collection, analysis, and reporting, their goals, methods, and outcomes differ greatly. Research aims to generate new knowledge and advance theory, using rigid methodologies and targeting an academic audience. In contrast, evaluation focuses on assessing and improving programs, employing flexible and adaptive methods to provide practical recommendations for impacted and involved parties. Despite differences in goals, methods, and outcomes, both research and evaluation are crucial in driving change and improving health and social well-being for individuals and communities.

 

Did we miss any key differences between research and evaluation? Let us know in the comments!
