Presentations

Abstracts and full presentations can be found below, organized into conference presentations, invited talks, and events.

Conference Presentations

2024

Najjar, D., Amoak, D. & Belcher, B. (2024, November). Transdisciplinary Research Approaches for Crop Science Research: Theory, Practice, and Implications for Research Design. Presented at MIPPI MELIA Workshop, Peru.

Belcher, B. (2024, November). A Quality Assessment Framework for Transdisciplinary Research Design, Planning, and Evaluation: Lessons Learned. Presented at International Transdisciplinarity Conference 2024, Utrecht, The Netherlands.

Abstract:

Appropriate definitions and measures of quality are needed to guide research design and evaluation. Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Disciplines have their own evaluation criteria and processes in which research quality is often narrowly defined, with emphasis on scientific excellence and scientific relevance. Emerging transdisciplinary approaches, by contrast, are highly context-specific and problem-oriented; they integrate disciplines and include societal actors in the research process. Standard research assessment criteria are simply inadequate for evaluating change-oriented transdisciplinary research (TDR), and inappropriate use of standard criteria may disadvantage TDR proposals and impede the development of TDR. There is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a TDR context.

In 2015, we developed a TDR quality assessment framework consisting of twenty-five criteria organized under four principles. Since that time, the literature on TDR and TDR assessment has grown, other TDR research assessment frameworks have been published and tested, and we have further tested and refined our own assessment framework. This talk will present the underlying principles and approach of the TDR Quality Assessment Framework and review lessons learned from testing the framework in evaluations of several completed research-for-development projects. It will then review two other frameworks in use: RQ+ and the CGIAR Quality of Research for Development Framework. Based on this, we have developed a revised version of the assessment framework and the scoring system.

The revised principles are:
  1. Relevance. The importance, significance, and usefulness of the research problem(s), objectives, processes, and findings to the problem context (6 criteria);
  2. Credibility. The research findings are robust and the sources of knowledge are dependable (12 criteria);
  3. Legitimacy. The research process is perceived as fair and ethical (4 criteria); and
  4. Positioning for Use. The research process is designed and managed to enhance sharing, uptake, and use of research outputs and to stimulate actions that address the problem and contribute to solutions (7 criteria).

The main changes from the original version are: the definition and naming of the fourth principle (from “Effectiveness” to “Positioning for Use”); filling gaps, eliminating overlap, and refining definitions in individual criteria; and replacing rubric statements with guidance notes. The QAF is designed for a range of users, including: research funders and research managers assessing proposals; researchers designing, planning, and monitoring a research project; and research evaluators assessing projects ex post. We present the key components of the revised framework and describe how to apply it.

Download Full Presentation

2023

Belcher, B., Claus, R., & Davel, R. (2023, May). A Quality Assessment Framework for Research Design, Planning, and Evaluation. Presented at DORA National and International Initiatives Community of Practice.

2022

Claus, R., Davel, R., & Belcher, B. (2022, September). A Quality Assessment Framework for Research Design, Planning, and Evaluation. Presented at Changemaker Education Research Forum 2022.

Abstract:

Transdisciplinary research that aims to catalyze change requires context-responsive ways to guide and evaluate its design and implementation. Traditional disciplinary research quality criteria are insufficient to assess the variety of research approaches characteristic of change-making research.

We present the Transdisciplinary Research Quality Assessment Framework (QAF) and guidance for its application in designing and evaluating research.

The framework is organized around four principles:
  1. Relevance, the appropriateness of the problem positioning, objectives, and research approach for intended users;
  2. Credibility, the rigour of the research design and process to produce dependable and defensible conclusions;
  3. Legitimacy, the perceived fairness and representativeness of the research process; and
  4. Effectiveness, the degree to which research is positioned for use to contribute to positive outcomes and impacts.

The QAF was designed for a range of users, including: research funders and research managers assessing proposals; researchers designing, planning, and monitoring a research project; and research evaluators assessing projects ex post. We present the key components of the revised framework and describe how to apply it in each of these contexts.

Our discussion will focus on the facets of effective change-making research. We will solicit ideas and experiences from Ashoka Fellows, Change Leaders, researchers, students, and others with interests and experience in the world of change-making research to further refine the criteria.

Download Full Presentation

Davel, R., Claus, R., & Belcher, B. (2022, June). Lessons for Research on Agroforestry Concessions: Insights from an Outcome Evaluation of the SUCCESS Project. Presented at CIFOR-ICRAF Science Week 2022.

2021

Belcher, B. (2021, September). Theory-Based Approaches for Assessing the Impact of Integrated Systems Research. Presented at the WLE-FTA-PIM-SPIA Workshop on Measuring the Impact of Integrated Systems Research, 2021.

Abstract:

Contemporary society, research funding bodies, and researchers have high expectations for research to contribute to positive societal and environmental impacts. This ambition to increase societal benefits has prompted an evolution in how research is conducted. Moreover, it has created a corresponding need for appropriate research evaluation tools and approaches. This presentation will consider the fundamentally different impact pathways employed by conventional technology-oriented research versus those used in integrated systems research. Furthermore, the presentation will discuss theory-based research evaluation approaches for learning and accountability.

Research programs and projects aim to generate new knowledge and also, increasingly, to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation (Clark and Dickson, 2003). Gibbons et al. (1994) called attention to this transformation in research approaches in their landmark publication on “The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies”. By the first decade of the new century, many authors were discussing new, more engaged approaches to research with names like Mode 2 research (Nowotny, Scott and Gibbons 2001; 2003; Hessels and Van Lente, 2008), Transdisciplinary Research (Klein, 2006; Wickson, Carew and Russel, 2006) and Sustainability Science (Clark and Dickson, 2003; Komiyama and Takeuchi, 2006).

Reductionist, positivist, and disciplinary research approaches are being augmented or replaced with constructivist, transdisciplinary approaches that appreciate the complex nature of problems and actively engage within systems to increase the likelihood and the magnitude of change realized. There has also been a recent turn toward large, coordinated, multi-disciplinary research collaborations focused on major societal problems, such as the Grand Challenges in US universities (Popowitz & Dorgelo, 2018) and the Global Grand Challenges (Bill and Melinda Gates Foundation, n.d.). This research appreciates that a transition to sustainability will require changes in the rules, practices, and norms that guide the development and use of technologies, as well as the social and institutional structures for continual learning and adaptation (Smith, Stirling, & Berkhout 2005; Millet et al. 2014).

The CGIAR has been part of this evolution in research approaches. Beginning in the 1970s with a focus on using research to improve food security, the CGIAR primarily engaged in genetics research, plant breeding, and agronomy research to increase the productivity of staple cereals, and later a wider range of crops. Over time, the CGIAR system expanded to include policy research and a range of natural resources management research. A 2001 reform of the CGIAR included three “Challenge Programs” intended to increase collaboration across centres (and disciplines) to address major global or regional issues. Subsequent reform initiated in 2008 included the specific aim to increase engagement among stakeholders and build stronger partnerships. The 2030 Research and Innovation Strategy (CGIAR 2021) “situates the CGIAR in the evolving global context, which demands a systems transformation approach for food, land, and water systems” (p.1, emphasis original).

The problem this workshop addresses is that approaches for evaluating research have not kept up with these shifts in the way research is conceptualized, designed, and implemented. The prevailing approach to impact assessment in the CGIAR uses a regularity or counterfactual framework, and statistical, econometric, experimental and quasi-experimental methods (Stevenson, Macours, & Gollin 2018a; 2018b). These are powerful approaches for assessing the impact of discrete outputs, such as a technology or innovation. However, they are often technically inappropriate (they require a large n), and they do not provide the in-depth understanding of context and mechanisms needed for learning. For research in complex adaptive systems engaging with and seeking to influence a range of system actors, where objectives are jointly determined and changeable, where process may be as or more important than specific research products, and where there are multiple interacting impact pathways, the most promising approach seems to be theory-based evaluation.

The presentation will briefly review a range of theory-based evaluation methods, including: Process Tracing (Beach and Pedersen 2019), Realist Evaluation (Pawson 2013), Outcome Mapping (Earl et al. 2001), RAPID Outcome Assessment (ODI 2012), the Payback Framework (Buxton and Hanney 1996), Social Impact Assessment (SIAMPI) (Spaapen & van Drooge 2011), Participatory Impact Pathway Evaluation (Douthwaite 2003), and Contribution Analysis (Mayne 2012). It will then provide more detail on a refined method developed for application in a research-for-development context, reviewing challenges, opportunities, and lessons learned in several FTA outcome evaluations.

The Outcome Evaluation approach:

  • Conceptualizes research within a complex system
  • Recognizes the roles of other actors, context, and external processes
  • Uses a detailed actor-centred Theory of Change (ToC) as the analytical framework
  • Empirically tests a set of hypotheses about the relationship between the research process/outputs and outcomes

Based on experience to date, there will be a need to build researchers’ and research managers’ capacity for ToC development and use; strengthen the application of social theories within ToCs; improve the quality and consistency of project documentation and monitoring; and encourage active use of ToC at all project/program stages.

Download Full Presentation

Claus, R., Davel, R., Pinto, D., Heykoop, C., & Belcher, B. (2021, September). Theory of Change: Application for Strategic Planning of Transdisciplinary Research for Outcomes. Workshop presented at International Transdisciplinary Conference 2021.

Abstract:

Transdisciplinary research aims to both generate knowledge and contribute to positive societal transformation. The urgency of complex social problems has led to increased social pressure for research to generate impact. In response, there has been an emergence of “meta-science” (the science behind effective science), in which scholars have developed, tested, and refined theory and methods to support effective design and implementation for research impact.

Theory of Change (ToC) continues to gain popularity as a multi-purpose tool for the planning and evaluation of transdisciplinary research (TDR). The rise in popularity of applying ToC in TDR is demonstrated by other proposed ITD 2021 workshops (Schäfer et al.; Kny et al.; Deutsch et al.), which aim to build the base of experiences, share learning, and chart a path forward. A research ToC is a set of hypotheses about the causal relationships between a research project’s outputs and the resulting outcomes and impacts. It serves as a model of the change process. There is, however, limited documented experience of ToC application for planning and adaptive management of TDR. Hence, there is a need for conceptual clarity and guidance to support the application of ToC for effective TDR planning, monitoring, and evaluation.

Moreover, the climate crisis and the COVID-19 pandemic have accelerated the trend toward virtual meetings and collaboration. Even without travel and social-distancing restrictions, it can be difficult to assemble all members of a transdisciplinary team in one physical space. Online workspaces are an ideal alternative. Yet the online environment poses additional challenges for effective research planning and implementation: virtual meeting fatigue has decreased engagement and communication within and between teams, hindering effective collaboration. This workshop therefore responds to both sets of needs, offering practical tools to support effective ToC design along with strategic and engaging ways to facilitate online research planning sessions for productive collaboration.

Workshop Structure and Aims:

This workshop focuses on ToC application for TDR planning and is intended for TDR researchers and program managers who seek to design effective research initiatives. The workshop will also be useful to evaluators and research funders. The workshop has four goals, to:

  1. Provide participants with a conceptual overview of ToC;
  2. Demonstrate the application of the ToC tool for strategic TDR planning in real-time;
  3. Provide participants with an overview of how a ToC workshop (and other workshops) can be held in an online environment; and
  4. Provide the Fishbowl participant the opportunity to think about their project within a structured ToC framework to help inform strategy development for realizing intended outcomes.

Download Full Presentation

Belcher, B., Claus, R., Davel, R., Jones, S., & Pinto, D. (2021, September). QAF 2.0: A Refined Transdisciplinary Research Quality Assessment Framework. Presented at International Transdisciplinary Conference 2021.

Abstract:

Transdisciplinary research (TDR) aims to solve complex societal issues through systems transformation. TDR approaches continue to evolve at an ever-increasing pace. As the boundaries between disciplines are crossed and blurred, more and more diverse stakeholders are engaged in and co-generating research. Traditional research quality definitions and criteria are insufficient to assess the variety of new research approaches characteristic of TDR. New, more comprehensive, and multi-dimensional principles and criteria are needed to guide and evaluate TDR design and implementation.

Belcher et al. (2016) conducted a systematic review of literature on defining and measuring research quality in an interdisciplinary or transdisciplinary context. We used the findings to develop a prototype Transdisciplinary Research Quality Assessment Framework (QAF).

The four QAF principles are:
  1. Relevance, which refers to the appropriateness of the problem positioning, objectives, and research approach for intended users;
  2. Credibility, which pertains to rigour of the design and research process to produce dependable and defensible conclusions;
  3. Legitimacy, which refers to the perceived fairness and representativeness of the research process; and
  4. Effectiveness, with criteria that assess the degree to which research is positioned for use to contribute to positive outcomes and impacts.

The QAF was designed for a range of users and uses. Research funders and research managers assessing proposals will find it useful to identify projects designed for change. Researchers designing, planning, and monitoring a research project can use the QAF as a guide for adaptive management. Research evaluators assessing projects ex post can employ it to learn about effective research practice.

We tested the QAF tool in evaluations of completed research projects in a range of TDR, graduate student research, and research-for-development contexts. On that basis, we revised the principles, criteria, and definitions to improve clarity, reduce ambiguity and potential for double-counting, and add new criteria as needed. We also developed guidance for the application of each criterion. This contribution presents the revised set of QAF criteria, definitions, and guidance, as well as scoring tools and templates, and discusses how to apply the QAF.

View Presentation

Davel, R., Claus, R., Jones, S., Belcher, B., & Pinto, D. (2021, September). A Quality Assessment Framework for Transdisciplinary Research: Lessons from Evaluating Graduate Research Projects. Presented at International Transdisciplinary Conference 2021.

Abstract:

University-based research has a major role to play in addressing urgent social and environmental challenges. Graduate research remains underdiscussed in the literature and is an untapped means to influence social transformation. Students, whether they continue in academia or as practitioners, are part of the next generation of researchers, professionals, decision-makers, and members of society, and the learning, skills, and values brought to and gained through the research experience can translate to other areas of students’ personal, social, and working lives. As part of a broad effort to increase societal impact, research approaches are evolving to be more problem-oriented, engaged, and transdisciplinary. New approaches to research evaluation are therefore needed to learn whether and how research contributes to societal change.

We used the principles and criteria presented in Belcher et al.’s (2016) Transdisciplinary Research Quality Assessment Framework (QAF) to assess the transdisciplinary research design elements of three completed Royal Roads University doctoral research projects. The cases were selected purposively based on their potential to contribute to real-world impact and to generate lessons about the change process. The student researchers were all mid-career development practitioners, each tackling a different development issue in Africa (e.g., post-conflict transitional justice in Uganda, private aid in Tanzania, and water, sanitation, and hygiene in Nigeria).

The four principles of the QAF are:
  1. Relevance, which refers to the appropriateness of the problem positioning, objectives, and research approach for intended users;
  2. Credibility, which pertains to rigour of the design and research process to produce dependable and defensible conclusions;
  3. Legitimacy, which refers to the perceived fairness and representativeness of the research process; and
  4. Effectiveness, with criteria that assess the degree to which research is positioned for use.

Paired with an outcome assessment, the QAF enabled analysis of projects’ design and implementation to draw connections between design and outcomes. Results indicated stronger transdisciplinary characteristics were associated with more pronounced outcomes and diverse contributions to change processes (i.e., research, organizational practice, governmental policy, professional development). QAF results also uncovered transdisciplinary qualities supported by training as well as those which were inherent in the student researchers.

We draw lessons from our testing of the QAF on the doctoral case studies, learning about effective research design and implementation. Next, we discuss how higher education institutions can provide training and support for impactful student research. Lastly, we reflect on how to improve the QAF tool. This presentation provides an overview of the key theoretical concepts of the QAF, presents examples of applying the QAF to graduate research case studies, and concludes with lessons learned.

View Presentation

Claus, R., Davel, R., Jones, S. & Belcher, B. (2021, March). Evaluating and Improving the Contributions of Doctoral Research to Social Innovation. Presented at Ashoka Changemaker Education Research Forum 2021.

Abstract:

University-based research has a major role to play in change-making. Faculty and students are keen to use their research to contribute to social innovation and help solve urgent social and environmental challenges. As part of a broad effort to increase societal impact, research approaches are evolving to be more problem-oriented, engaged, and transdisciplinary. New approaches to research evaluation are therefore needed to learn whether and how research contributes to social innovation.

We use a theory-based evaluation method to assess the contributions of three completed doctoral research projects. Each case study documents the project’s Theory of Change (ToC) and uses qualitative data (document review, surveys, interviews) to test the ToC. We use a Transdisciplinary Research Quality Assessment Framework to analyze projects’ design and implementation. We then draw lessons from the individual case studies and a comparative analysis of the three cases. Lessons focus on design and implementation of effective research projects for social transformation, and training and support for impactful research.

Results indicate each project aimed to influence government policy, organizational practice, other research, and/or the students’ own professional development. All contributed to many of their intended outcomes, but with varied levels of accomplishment. Stronger transdisciplinary characteristics were associated with more pronounced outcomes. This suggests that researchers should explicitly consider their role in a change process, with a clear ToC. To support impactful research and promote an impact culture, universities should provide training and support for transdisciplinary research theory and practice and give more attention to research evaluation.

Download Full Presentation 

2020

Belcher, B. (2020, September). Research Influence on Policy and Practice: Evaluating Systems Research. Presented at CGIAR MELIA COP.

Belcher, B., Claus, R., Davel, R., & Jones, S. (2020, September). A Quality Assessment Framework for Transdisciplinary Research. Presented at FTA Science Conference.

Abstract:

Researchers and research organizations are under increasing pressure to demonstrate that their work contributes to positive change and helps solve pressing societal challenges. There is a simultaneous trend toward more engaged, transdisciplinary research (TDR) that is complexity-aware and appreciates that change happens through systems transformation, not only through technological innovation. Research increasingly seeks both to generate knowledge and contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient. There is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a TDR context.

We built on a prototype TDR Quality Assessment Framework developed from a systematic review of the literature (Belcher et al., 2016) and experience as researchers in a research-for-development context. Four main principles emerged:

  1. Relevance, including criteria of social significance, pertinence, and appropriateness;
  2. Credibility, including criteria of integration and reflexivity which supplement traditional criteria of scientific rigour;
  3. Legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and
  4. Effectiveness, with criteria that assess the degree to which research is positioned for use.

We applied, tested, and refined the 2016 framework. This presentation provides an overview of the key concepts, presents the revised framework and scoring tools, and summarizes lessons learned for applying the framework for research planning, monitoring, and assessment (ex ante and ex post).

Download Full Presentation

Davel, R., Belcher, B., Claus, R., & Jones, S. (2020, September). Assessing the Effectiveness of FTA Research on the Oil Palm Sector. Presented at FTA Science Conference.

Abstract:

A substantial portion of FTA research aims to contribute to impact by informing and influencing policy and practice in forest, agroforestry, and landscape management. A prime example is CIFOR’s program of research on the oil palm sector in Indonesia since the early 2010s. This research engaged with a wide range of stakeholders to provide research-based information and analysis, as well as capacity strengthening and networking, to influence the policy discourse and decision-making. A series of projects researched social and environmental trade-offs, sustainable commodity supply, social inclusion, biodiversity conservation, and other topics related to oil palm production and trade in Indonesia and internationally.

This study applies a theory-based outcome evaluation approach (Belcher, Davel, & Claus, 2020) to assess the societal contributions of four of these projects. The analysis explores the multiple impact pathways through which the research contributed to policy and practice changes in the oil palm sector. Moreover, we uncover the extent of those changes and how they were realized. We also discuss gaps and opportunities to improve research design, implementation, adaptive management, and evaluation for effectiveness. The lessons apply directly to ongoing oil palm research and more broadly to policy-oriented research-for-development.

Download Full Presentation

2019

Belcher, B. (2019, November). Using Theory of Change to Improve Research Effectiveness. Presented at Helsinki Institute for Sustainability Science, Helsinki, Finland.

Claus, R., Belcher, B., Davel, R., Ramirez, L., & Jones, S. (2019, September). Does Transdisciplinarity in Research Help Achieve Outcomes? Lessons from Practice. Presented at International Transdisciplinarity Conference, Gothenburg, Sweden.

Abstract:

Researchers working to address sustainability challenges face high expectations from society, funders, research managers, and themselves, to make effective contributions to societal transformations and be able to demonstrate those contributions. As a result, there has been a marked evolution in the way research is conducted. Increasingly, researchers make deliberate efforts to cross disciplinary and professional boundaries, engage stakeholders and intended research users in project design and implementation, and incorporate processes and partnerships that facilitate knowledge translation. This evolution toward transdisciplinarity provides a natural laboratory to assess whether and how research that employs transdisciplinary principles contributes to (more) effective scientific and social outcomes.

Our program developed evaluative methods and conducted a series of case studies of completed research projects from two main bodies of work: international research-for-development projects and applied graduate student research projects. Our analyses assess the degree and character of transdisciplinarity applied in each case using Belcher et al.’s (2016) Transdisciplinary Research Quality Assessment Framework. We empirically assess the societal impacts of each project using a participatory theory-based outcome evaluation method. This presentation will provide a brief overview of the range of cases and analytical methods employed, and then present the key findings and lessons learned to date. We found that:

1. There are multiple non-linear impact pathways through which TDR achieves societal effects, varying with project context and approach;
2. When a project employs more transdisciplinary elements, it has more mechanisms at its disposal to make a broader range of knowledge and social process contributions;
3. Process contributions of TDR can be as or more important than knowledge contributions;
4. Engagement with the problem context, effective communication, and genuine and explicit inclusion of key actors (which are defining characteristics of TDR) are critical to success;
5. Intended research users can have highly differentiated perceptions of the relevance, credibility, and legitimacy of a research project and its outputs;
6. Perceptions of researchers’ reputations and expertise have influence on target audiences, regardless of the knowledge generated; and
7. Deliberate and systematic planning and management for outcomes using a Theory of Change approach can help facilitate learning to improve research practice.

Download Full Presentation

2017

Davies, B., Abernethy, P., Belcher, B., & Claus, R. (2017, September). An Empirical Evaluation of Knowledge Translation in Policy-relevant Forestry Research. Presented at IUFRO 125th Anniversary Congress, Freiburg, Germany.

Abstract:

The increasing external demand from research funders and research managers to assess, evaluate, and demonstrate the quality and the effectiveness of research is well known. Less discussed, but equally important, is the evolving interest and use of research evaluation to support learning and adaptive management within research programmes. Researchers and research managers need to know whether and how their work is contributing to positive social and environmental outcomes to be able to adapt and improve their projects and programmes.

We conducted a series of theory-based evaluations of international forestry research projects. Each evaluation began with documentation of a Theory of Change (ToC) that identified key actors, processes, and results. We analysed data collected through document reviews, key informant interviews, and focus group discussions to test the ToCs against evidence of outcomes. Outcomes came in the form of discourse, policy formulation, and practice change. The analyses identified strengths and weaknesses in knowledge translation, helped understand the conditions and mechanisms of knowledge translation, and suggested improved strategies to increase research effectiveness. Thus, the evaluation approach proved valuable as a learning tool for researchers and research managers. Moreover, the approach facilitated communication with funders about actual and reasonable research contributions to change.

Belcher, B., Abernethy, P., Claus, R., & Davies B. (2017, September). Transdisciplinarity in International Forestry Research: An Assessment of 3 CIFOR Projects. Presented at IUFRO 125th Anniversary Congress, Freiburg, Germany.

Abstract:

As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient. There is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context.

We used a systematic review of the literature on TDR quality to develop a prototype TDR Quality Assessment Framework (QAF), which we tested empirically in three international forestry research projects conducted by the Center for International Forestry Research. We collected data using participatory evaluation workshops, semi-structured stakeholder interviews, and document analysis. The TDR quality assessment criteria helped to evaluate the degree to which current projects employ transdisciplinary approaches and to systematically identify strengths and weaknesses in the projects reviewed. We found good examples of the application of transdisciplinary principles and criteria; however, there was considerable scope to further improve research design and implementation for greater effectiveness. This presentation will provide an overview of the QAF and the assessment methods, results, and lessons learned for designing and evaluating TDR in the forestry sector.

Abernethy, P., Belcher, B., & Reed, M. (2017, August). How to Make Transdisciplinary Research Relevant, Credible, Legitimate and Effective? Bridging Social Theories and Quality Assessment of Transdisciplinary Research. Presented at Resilience 2017, Stockholm, Sweden.

Abstract:

In recent years, transdisciplinary sustainability research has emerged as one of the key approaches for addressing the challenges and identifying the opportunities of the Anthropocene. While knowledge co-creation and mutual learning are broadly acknowledged as key components of inter- and transdisciplinary research (TDR), less attention is paid to the complex nature of social interaction in stakeholder engagement and its implications for both resilience building and the quality of the research itself.

Some of the main issues we identified relate to inclusivity, power asymmetries, type of engagement, and limited use of the new knowledge produced. To address these challenges, we explored how existing social theories could be used to improve emerging TDR practices. We investigated some of the better-known theories related to deliberative participation, power, empowerment, and social capital. Combining insights from Habermas, Foucault, Freire, and Putnam with evaluation criteria for TDR, we developed a TDR quality assessment framework.

We tested this framework by analysing five sustainability science projects, each of which engaged both researchers and other stakeholders in knowledge production. The case studies focused on natural resource management and sustainability governance at national (Indonesia, Canada, and Peru) and global (international) scales. Data were collected through document analyses, interactive workshops, and semi-structured stakeholder interviews and analysed using the abovementioned framework. Findings indicate that integrating existing social theories in applied theoretical frameworks can help make TDR more effective by offering more nuanced, in-depth tools to plan research projects, assess research processes, and analyse research findings. We describe the development and assessment of the integrated theoretical framework and discuss associations between different types of knowledge production and changes in policy and practice. This new approach has great potential to improve sustainability research by increasing our understanding of social-ecological interactions and how to address knowledge-to-action gaps.

2016

Belcher, B., & Palenberg, M. (2016, October). Outcomes and Impact Terminology: Towards Conceptual Clarity. Expert lecture presented at the American Evaluation Association, Atlanta, Georgia, USA.

Abstract:

The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions in use that lack clarity and consistency, and sometimes even represent fundamentally different views. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession.

This paper investigates how the terms are defined and understood by different institutions and communities. It systematically breaks down the definitions into a total of 15 distinct defining elements. These are used to compare usage patterns across definitions and methodically assess usefulness and limitations. Based on this assessment, the paper proposes remedy in three parts: by applying several good definition practices in future updates, by differentiating causal perspectives, and by employing a set of meaningful qualifiers when using the terms outcome and impact.

Belcher, B., & Palenberg, M. (2016, September). Outcomes and Impacts – Towards Conceptual Clarity. Presented at European Evaluation Society Conference, Maastricht, The Netherlands.

Abstract:

The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions in use that lack clarity and consistency, and sometimes even represent fundamentally different views. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession.

This paper investigates how the terms are defined and understood by different institutions and communities. It systematically breaks down the definitions into a total of 15 distinct defining elements and uses this framework to compare usage patterns across definitions and methodically assess usefulness and limitations. Based on this assessment, the paper proposes remedy in three parts: by applying several good definition practices in future updates, by differentiating causal perspectives, and by employing a set of meaningful qualifiers when using the terms outcome and impact.

Belcher, B., & Soni, S. (2016, June). Applying and Testing a Quality Assessment Framework for Inter- and Transdisciplinary Research. Presented at 2016 Canadian Evaluation Society Conference, St. John’s, Canada.

Abstract:

Research increasingly seeks to generate knowledge and contribute to real-world solutions. As boundaries between disciplines are crossed and research engages more with stakeholders in complex systems, traditional academic definitions and criteria of quality are no longer sufficient. In turn, there is a need for a parallel evolution of principles to define and evaluate quality in a transdisciplinary research (TDR) context.

We developed a Quality Assessment Framework (QAF) based on a systematic review, organized around four principles: relevance, credibility, legitimacy, and effectiveness. QAF scores reflect the degree to which projects apply theoretically derived principles in their design and implementation. We tested the QAF on 34 theses completed by RRU students. Inter-rater reliability was good, and the instrument proved practical and could be applied consistently across projects and reviewers. Moreover, the QAF helped to systematically evaluate strengths and weaknesses in the projects reviewed and provided guidance for improving research design and implementation. This presentation will provide an overview of the QAF and the assessment methods, results, and lessons learned for designing and evaluating quality TDR, as well as enhancing solution-oriented student research at RRU.

Davies, B., Colomer, J., Belcher, B., & Suryadarma, D. (2016, February). Effective Research for Development: Using Theory-based Approaches to Assess the Effectiveness of Knowledge Programs. Presented at 2016 Australasian Aid Conference, Canberra, Australia.

Abstract:

With significant finance being channeled into research and development programs internationally, organizations that work in this field are being challenged to answer: what does aid effectiveness look like in the context of knowledge creation and utilization? The challenge is multi-fold. Research and knowledge creation are unpredictable, especially because we do not know in advance what we will find. Knowledge-based interventions tend to operate early in results chains, with multiple stages and multiple actors required to achieve impacts on the ground. Moreover, time lags and feedback loops of complex systems come into play.

Theoretical understanding of how knowledge-based interventions work is still not well developed. Furthermore, the knowledge creation process is, by necessity, becoming more diffuse. In particular, research-for-development activities are becoming increasingly inter- and transdisciplinary, and the interventions themselves are often multi-pronged. To deal with these challenges, three organisations working in a UK International Climate Fund-financed knowledge-for-development partnership trialed the use of theory-driven approaches for design, monitoring and evaluation, and learning (DMEL). DMEL is a method for assessing and communicating the contribution of knowledge to broader social, economic, and environmental impacts.

Experience so far shows that theory-driven approaches are appropriate and helpful for designing, adapting, and evaluating knowledge-for-development programmes. The approach provides a framework to develop a testable causal model connecting research activities to policy and practice changes and ultimately to the desired impacts. Enhanced attention to the “sphere of influence”, and especially the identification of key actors who need to be involved (boundary partners), supports more effective program design and implementation because attention is given to the intermediate results that knowledge programs need to manage and deliver. The approach also enhances our ability to test theories about the role of knowledge in policy and development, and therefore to learn about how investing in knowledge for development works. In other words, it has begun to shift the emphasis of DMEL from an accountability-driven administrative task to a process that is central to project design and organisational learning.

In addition, this collaboration revealed a number of constraints and opportunities for further improvement. We found that theory-driven DMEL approaches are frequently constrained by a failure to adequately resource or incentivise them. There is a real or perceived disjuncture between stated support for systematic DMEL and actual resource allocation and performance management. There are also frequently real or perceived conflicting demands, expectations, and prescribed DMEL approaches from multiple donors. Practically, there are few published examples of applied theory-driven approaches in relevant programs and sectors. Moreover, there is limited hard data on DMEL as a critical impact delivery mechanism, and a lack of knowledge, experience, and institutional flexibility to support adaptive management during the activity cycle of knowledge generation programs.

This presentation will share lessons based on the partners’ experience of applying theory-driven DMEL and investing in enhanced DMEL techniques. This involved developing Theory of Change models and tracking performance in relation to: creating enabling internal systems and cultures; improving internal capacity and practices; and influencing debates and practice with the wider donor and practitioner community. The presentation will highlight key findings from the application of theory-driven techniques in programmatic areas and insights into the factors that enable and constrain the development of evaluative cultures. It will also outline priority next steps for internal practice.

2015

Belcher, B. (2015, October). Seeking Evidence of Research Effectiveness: Lessons from an International Research-for-development Programme. Presented at Institute of Advanced Study Seminar, Durham University, Durham, UK.

Abstract:

Publicly supported research faces high expectations from funding agencies to achieve and prove “impact”. At the same time, there are strong internal pressures to evaluate research effectiveness, learn from experience, and improve research design and implementation. The tools for doing this are still not well developed. However, there is substantial emerging experience with utilization-focused evaluation and theory-based evaluation approaches applied to research. The Center for International Forestry Research (CIFOR), an international research-for-development organization, is at the forefront of this development.

This seminar will provide an overview of CIFOR’s research focus, approach, and context, with emphasis on the ongoing reform process that markedly shifted the center and its scientists from a primary focus on high-quality outputs to a shared responsibility for outcomes. We developed and started implementing new approaches to planning, monitoring, and evaluation in which the intended contributions of research are deliberate, explicit, and testable. This improves our ability to gather evidence and to assess and communicate outcomes and impacts for enhanced accountability, and it enhances our ability to learn from experience. The approach is promising, with important technical lessons as well as lessons about the inherent cultural change and how to support it. Questions remain about whether the evidence produced and used in this approach will satisfy all constituencies, and about how to further improve the quality of this evidence.

Belcher, B., & Suryadarma, D. (2015, July). Lessons from a Participatory Evaluation of the Global Comparative Study of REDD. Presented at the inaugural meeting of KNOWFOR Knowledge Mobilization Community of Practice, IIED, London, UK.

Davies, B., Belcher, B., & Colomer, J. (2015, March). Learning Partnerships in Participatory Planning, Monitoring & Evaluation. Presented at Conference on Monitoring and Evaluation for Responsible Innovation, Wageningen University and Research Centre.

2014

Belcher, B., Rasmussen, K., Kemshaw, M., & Zornes, D. (2014, June). Defining and Measuring Research Quality in a Transdisciplinary Context: A Systematic Review. Presented at Interdisciplinary Social Sciences Conference, Vancouver, Canada.

2013

Belcher, B. (2013). Building a Theory of Change for Impact-oriented Natural Resources Management Research. Presented at the meeting of the program on “Management and conservation of Forest and Tree Resources”, Bioversity International, Rome, Italy.

2012

Belcher, B. (2012, September). Co-learning on Impact Evaluation Design in NRM Research Programmes. Presented at Natural Resources Management Research Impact Evaluation Community of Practice, The Worldfish Center, Penang, Malaysia.

Invited Talks

2020

Belcher, B. (2020, January). Research for changemaking: Concepts and lessons for research effectiveness. Keynote presentation to Canadian Changemaker Education Research Forum. Toronto, Canada. January 15th, 2020.

Download Full Presentation

2019

Belcher, B. (2019, October). Toward Conceptual Clarity in Research Assessment. Invited presentation to Declaration on Research Assessment (DORA) meeting on “Driving Institutional Change for Research Assessment Reform”, Howard Hughes Medical Institute. Chevy Chase, USA. October 21-23, 2019.

Download Abstract
Download Full Presentation

2018

Belcher, B. (2018, October). Impact Assessment in Landscapes Research: Practical & Conceptual Considerations. Invited Presentation to CGIAR ISPC’s Standing Panel on Impact Assessment (SPIA) on impact assessments and landscapes in CGIAR. Stellenbosch, South Africa.

Download Full Presentation

Belcher, B. (2018, October). Impact Assessment and Research Evaluation at CIFOR/FTA. Invited Presentation to CIFOR’s Annual Meeting: Forests Matter. Bogor, Indonesia.

Download Full Presentation

2017

Belcher, B. (2017, July). Theory Based Evaluation of Policy Relevant Research: Lessons from a Series of FTA Case Studies. Invited presentation at Impacts of International Agricultural Research: Rigorous Evidence for Policy. Nairobi, Kenya.

Abstract:

Increasing demand from research funders and managers to assess, evaluate, and demonstrate the quality and effectiveness of research is well known. Less discussed, but equally important, is the evolving interest and use of research evaluation to support learning and adaptive management within research programmes. This is especially true in a research-for-development context where research competes with other worthy alternatives for overseas development assistance funding. Moreover, highly complex social, economic and ecological environments add to evaluation challenges. Researchers and research managers need to know whether and how their interventions are working to be able to adapt and improve their programmes, and satisfy their funders.

This paper presents a theory-based research evaluation approach developed and tested on several policy-relevant research activities undertaken by CIFOR and FTA. Each research evaluation began with documentation of a Theory of Change (ToC) that identified key actors, processes and results. Data collected through document reviews, key informant interviews, and focus group discussions were analysed to test the ToCs against evidence of outcomes in the form of discourse, policy formulation, and practice change.

The approach proved valuable as a learning tool for researchers and research managers. Furthermore, it facilitated communication with funders about actual and reasonable research contributions to change. Evaluations that employed a participatory approach with project scientists and partners noticeably supported team learning about past work and possible adaptations for the future. In all four cases, the retrospective ToC development proved challenging and resulted in simplistic ToCs.

Further work is needed to draw on social scientific theories of knowledge translation and policy processes to develop and further test more sophisticated ToCs. In conclusion, a theory-based approach to research evaluation provides a valuable means of assessing research effectiveness (summative value) and supports learning and adaptation (formative value) at the project or programme scale. The approach is well suited to the research-for-development projects represented by the case studies, but it should be applicable to any research that aspires to have a societal impact.

Download Full Presentation

Belcher, B. (2017, January). Evaluation with and Evaluation of ToC: Lessons from FTA. Invited presentation at CGIAR Independent Evaluation Arrangement Symposium on use and evaluation of Theories of Change. Rome, Italy.

2016

Belcher, B. (2016, December). Using Theories of Change to Enhance Research Effectiveness. Invited presentation to World Agroforestry Center “Research to Impact” Meeting. Bogor, Indonesia.

2015

Belcher, B. (2015, December). Defining and Measuring Research Quality in a Transdisciplinary Context. Invited presentation at Institute of Advanced Study Public Lecture, Hatfield College, Durham University. Durham, United Kingdom.

Abstract:

Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient. There is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context.

We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking information on: arguments for or against expanding definitions of research quality; purposes for research quality evaluation; proposed principles of research quality; proposed criteria for research quality assessment; proposed indicators and measures of research quality; proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that use TDR approaches to develop a prototype framework for evaluating TDR.

The presentation will provide an overview of the relevant literature and summarize the main aspects of TDR quality identified.

Four main principles emerge:
  1. Relevance, including social significance and applicability;
  2. Credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigour;
  3. Legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and
  4. Effectiveness, with criteria that assess actual or potential contributions to problem-solving and social change.

2014

Belcher, B. (2014). Building Theories of Change to Enhance Research Effectiveness. Presented at World Bank PROFOR Team Meeting on Monitoring and Evaluation. Washington, D.C., USA.

2012

Belcher, B. (2012). Specifying Outcomes to Enhance Research Effectiveness. Presented at Consortium Research Program on Forests, Trees and Agroforestry. Nairobi, Kenya.

Events

Co-building Sustainability and Reconciliation 2017

Baie-Comeau, QC

In collaboration with the University of Saskatchewan and the Canadian Biosphere Reserves Association, the Sustainability Research Effectiveness team organized an event on integrating reconciliation into sustainable development. The event brought practitioners, academics, Indigenous leaders, and government stakeholders together to share experiences and lessons learned and to discuss paths forward for conservation, environmental stewardship, and community healing.

Please connect with us to share how you have used our publications. Feel free to share any literature that aligns with our work (e.g., TDR, theory-based evaluation, social theory), as well as findings from emerging areas of research.