Resources

Peer-Reviewed Publications


In Press: Ramirez, L.F. & Belcher, B.M. (2018). Crossing the science-policy interface: Lessons from a research project on Brazil nut management in Peru. Forest Policy and Economics. https://doi.org/10.1016/j.forpol.2018.07.018
Abstract: There are high expectations for contemporary forestry research, and sustainability research more broadly, to have impact in the form of improved institutions, policy and practice and improved social and environmental conditions. As part of this trend, there has been an evolution of research approaches that move beyond isolated, reductionist, disciplinary science toward approaches that integrate disciplines (interdisciplinary) and that engage a wider range of research stakeholders (transdisciplinary) as a way to be more effective. While these approaches evolve, there are good opportunities to learn from the experience of projects that have had impact at some level. This paper presents lessons from a case-study of a research project that succeeded in crossing the science-policy interface. Our study characterizes the design and implementation of a research project on the influence of timber harvesting on Brazil nut production using transdisciplinary research (TDR) design principles, and empirically assesses project outputs and outcomes in relation to a project theory of change (ToC) based on document review and key informant interviews. The Brazil Nut Project included some TDR elements and realized a substantial part of its ToC. The interviews identified mixed perceptions of the research design, implementation and the extent of outcomes achievement from different stakeholder perspectives. Our analysis suggests that limited stakeholder engagement was a crucial factor affecting perceptions of legitimacy and relevance, the two main TDR principles underpinning the overall research effectiveness in our study. The application of the TDR analytical framework indicates substantial scope to improve research effectiveness, even without striving for a TDR theoretical ideal.
In Press: Belcher, B. M., Ramirez, L. F., Davel, R., & Claus, R. (2018). A response to Hansson and Polk (2018), "Assessing the impact of transdisciplinary research: The usefulness of relevance, credibility, and legitimacy for understanding the link between process and impact". Research Evaluation. https://doi.org/10.1093/reseval/rvy022/5059537
Abstract: Hansson and Polk (2018, Research Evaluation, 27/2: 132–44) aim to assess the usefulness of the concepts of relevance, credibility, and legitimacy for understanding the link between process and impact in transdisciplinary (TD) research. However, the paper seems to misunderstand and misrepresent some of the ideas in the two main reference articles. It also uses definitions of the concepts it aims to test that are inconsistent with the definitions offered by the reference papers. The methods description is insufficient to know what data were collected or how they were analyzed. More importantly, the effort to understand relationships between process and impact in TD research needs more careful definitions of the concepts outcome and impact, as well as more objective ways to assess outcomes and impact. This letter discusses shortcomings in the article and makes suggestions to improve conceptual clarity and methods for empirically assessing TD research effectiveness.
Belcher, B., & Palenberg, M. (2018). Outcomes and Impacts of Development Interventions: Toward Conceptual Clarity. American Journal of Evaluation, 1–18. https://doi.org/10.1177/1098214018765698
Abstract: The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions that lack clarity and consistency and sometimes represent fundamentally different meanings. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession. This article investigates how the terms are defined and understood by different institutions and communities. It systematically investigates representative sets of definitions, analyzing them to identify 16 distinct defining elements. This framework is then used to compare definitions and assess their usefulness and limitations. Based on this assessment, the article proposes a remedy in three parts: applying good definition practice in future definition updates, differentiating causal perspectives and using appropriate causal language, and employing meaningful qualifiers when using the terms outcome and impact. The article draws on definitions used in international development, but its findings also apply to domestic public sector policies and interventions.
Belcher, B., Suryadarma, D., & Halimanjaya, A. (2017). Evaluating Policy-Relevant Research: Lessons from a Series of Theory-Based Outcomes Assessments. Palgrave Communications, 3: 1-16.
Abstract: The increasing external demand from research funders and research managers to assess, evaluate and demonstrate the quality and the effectiveness of research is well known. Less discussed, but equally important, is the evolving interest and use of research evaluation to support learning and adaptive management within research programmes. This is especially true in a research-for-development context where research competes with other worthy alternatives for overseas development assistance funding and where highly complex social, economic and ecological environments add to evaluation challenges. Researchers and research managers need to know whether and how their interventions are working to be able to adapt and improve their programmes as well as to be able to satisfy their funders. This paper presents a theory-based research evaluation approach that was developed and tested on four policy-relevant research activities: a long-term forest management research programme in the Congo Basin; a large research programme on forests and climate change; a multi-country research project on sustainable wetlands management; and a research project on the furniture value chain in one district in Indonesia. The first used Contribution Analysis and the others used purpose-built outcome evaluation approaches that combined concepts and methods from several approaches. Each research evaluation began with documentation of a theory of change (ToC) that identified key actors, processes and results. Data collected through document reviews, key informant interviews and focus group discussions were analysed to test the ToCs against evidence of outcomes in the form of discourse, policy formulation and practice change. The approach proved valuable as a learning tool for researchers and research managers and it has facilitated communication with funders about actual and reasonable research contributions to change. Evaluations that employed a participatory approach with project scientists and partners noticeably supported team learning about past work and about possible adaptations for the future. In all four cases, the retrospective ToC development proved challenging and resulted in overly simplistic ToCs. Further work is needed to draw on social scientific theories of knowledge translation and policy processes to develop and further test more sophisticated theories of change. This theory-based approach to research evaluation provides a valuable means of assessing research effectiveness (summative value) and supports learning and adaptation (formative value) at the project or programme scale. The approach is well suited to the research-for-development projects represented by the case studies, but it should be applicable to any research that aspires to have a societal impact. This article is published as part of a collection on the future of research assessment.
Belcher, B. M., Rasmussen, K. E., Kemshaw, M. R., & Zornes, D. A. (2016). Defining and assessing research quality in a transdisciplinary context. Research Evaluation, 25(1), 1-17.
Abstract: Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

Events

Co-building Reconciliation and Sustainability 2017

Baie-Comeau, QC

The Sustainability Research Effectiveness team, in collaboration with the University of Saskatchewan and the Canadian Biosphere Reserve Association, organized an event in which practitioners, academics, Indigenous leaders and government stakeholders came together to share experiences and lessons learned, and to discuss paths forward for reconciliation and sustainable development.

Conference Presentations

Davies, B., Abernethy, P., Belcher, B. & Claus, R. (September 2017). An Empirical Evaluation of Knowledge Translation in Policy-Relevant Forestry Research. Presented at IUFRO 125th Anniversary Congress, Freiburg, Germany.
Abstract: The increasing external demand from research funders and research managers to assess, evaluate and demonstrate the quality and the effectiveness of research is well known. Less discussed, but equally important, is the evolving interest and use of research evaluation to support learning and adaptive management within research programmes. Researchers and research managers need to know whether and how their work is contributing to positive social and environmental outcomes to be able to adapt and improve their projects and programmes. We have done a series of theory-based evaluations of international forestry research projects. Each evaluation began with documentation of a theory of change (ToC) that identified key actors, processes and results. Data collected through document reviews, key informant interviews and focus group discussions were analysed to test the ToCs against evidence of outcomes in the form of discourse, policy formulation and practice change. The analyses identified strengths and weaknesses in knowledge translation, helped understand the conditions and mechanisms of knowledge translation and suggested improved strategies to increase research effectiveness. The evaluation approach proved valuable as a learning tool for researchers and research managers and it has facilitated communication with funders about actual and reasonable research contributions to change.
Belcher, B., Abernethy, P., Claus, R. & Davies B. (September 2017). Transdisciplinarity in International Forestry Research: An Assessment of 3 CIFOR Projects. Presented at IUFRO 125th Anniversary Congress, Freiburg, Germany.
Abstract: As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient; there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We used a systematic review of the literature on TDR quality to develop a prototype TDR quality assessment framework (QAF), which we tested empirically in three international forestry research projects conducted by the Center for International Forestry Research. The data were collected using participatory evaluation workshops, semi-structured stakeholder interviews, and document analysis. The framework's quality assessment criteria helped to evaluate the degree to which current projects are employing transdisciplinary approaches and to systematically evaluate strengths and weaknesses in the projects reviewed. We found good examples of the application of transdisciplinary principles and criteria, but also considerable scope to further improve research design and implementation for greater effectiveness. This presentation will provide an overview of the QAF and the assessment methods, results and lessons learned for designing and evaluating TDR in the forestry sector.
Abernethy, P., Belcher, B. & Reed, M. (August 2017). How to make transdisciplinary research relevant, credible, legitimate and effective? Bridging social theories and quality assessment of transdisciplinary research. Presented at Resilience 2017, Stockholm, Sweden.
Abstract: In recent years, transdisciplinary sustainability research has become one of the key approaches to address challenges and identify opportunities of the Anthropocene. While knowledge co-creation and mutual learning have become broadly acknowledged as key components in inter- and transdisciplinary research (TDR), less attention has been paid to the complex nature of social interaction in stakeholder engagement and its implications for both resilience building and the quality of research itself. Some of the main issues we identified relate to inclusivity, power asymmetries, type of engagement, and limited use of the new knowledge produced. To address these challenges we explored how existing social theories could be used to improve emerging TDR practices. We investigated some of the better-known theories related to deliberative participation, power, empowerment, and social capital. Combining insights from Habermas, Foucault, Freire, and Putnam with evaluation criteria for TDR regarding relevance, credibility, legitimacy, and effectiveness, we developed a TDR quality assessment framework. This framework was tested by analysing five sustainability science projects, each of which engaged both researchers and other stakeholders in knowledge production. The case studies focused on natural resource management and sustainability governance at national (Indonesia, Canada and Peru) and global (international) scales. The data were collected by document analyses, interactive workshops, and semi-structured stakeholder interviews, and analysed using the abovementioned framework. Findings indicate that integrating existing social theories in applied theoretical frameworks can help make TDR more effective by offering more nuanced, in-depth tools to plan research projects, to assess research processes, and to analyse research findings. We describe the development and assessment of the integrated theoretical framework and discuss associations between different types of knowledge production and changes in policy and practice. This new approach has great potential to improve sustainability research in the Anthropocene by increasing our understanding of social-ecological interactions and how to address knowledge-to-action gaps.
Davies, B., Colomer, J., Belcher, B. and Suryadarma, D. (2016). Effective Research for Development: Using theory-based approaches to assess the effectiveness of knowledge programs. Presented to 2016 Australasian Aid Conference, Feb. 10-11 2016, Canberra.
Abstract: With significant finance being channelled into research and knowledge for development programs internationally, organizations that work in this field are being challenged to answer: what does aid effectiveness look like in the context of knowledge creation and utilization?

The challenge is multi-fold. Research and knowledge creation are unpredictable; we don’t know what we will find out. Knowledge-based interventions tend to operate early in results chains, with multiple stages and multiple actors required to achieve impacts on the ground, and with the time lags and feedback loops of complex systems. Theoretical understanding of how knowledge-based interventions work is still not well developed. And the knowledge creation process is, by necessity, becoming more diffuse, with research-for-development activities becoming increasingly inter- and transdisciplinary; the interventions themselves are often multi-pronged.

To deal with these challenges, three organisations working in a UK International Climate Fund-financed knowledge-for-development partnership have trialled the use of theory-driven approaches for design, monitoring, evaluation and learning (DMEL) as a method for assessing and communicating the contribution of knowledge to broader social, economic and environmental impacts.

Experience so far shows that theory-driven approaches are appropriate and helpful for designing, adapting, and evaluating knowledge-for-development programmes. The approach provides a framework to develop a testable, causal model connecting research activities to policy and practice changes and ultimately to the desired social and environmental impacts. Enhanced attention to the “sphere of influence” and the identification of key actors who need to be involved (boundary partners) has supported more effective program design and implementation, by focusing attention on the intermediate results that knowledge programs are able to manage towards delivering. The approach enhances our ability to test theories about the role of knowledge in policy and development, and therefore to improve learning about how investing in knowledge for development works. In other words, it has begun to shift the emphasis of DMEL from an accountability-driven administrative task to a process that is central to project design and organisational learning.

This collaboration has also revealed a number of constraints and opportunities for further improvement. We have found that theory-driven DMEL approaches are frequently constrained by a failure to adequately resource or incentivise them, with a real or perceived disjuncture between stated support for systematic DMEL and actual resource allocation and performance management. There are also frequently real or perceived conflicting demands, expectations and prescribed DMEL approaches from multiple donors. Practically, there are few published examples of applied theory-driven approaches in relevant programs and sectors; limited hard data on DMEL as a critical impact delivery mechanism; and a lack of knowledge, experience and institutional flexibility to support adaptive management during the activity cycle of knowledge generation programs.

This presentation will share lessons from the partner organisations' experience of applying theory-driven DMEL to their investments in enhanced DMEL techniques. This has involved developing theory of change models and tracking performance in relation to: creating enabling internal systems and cultures; improving internal capacity and practices; and influencing debates and practice with the wider donor and practitioner community. The presentation will highlight key findings from the application of theory-driven techniques in programmatic areas and insights into the factors that enable and constrain the development of evaluative cultures. It will also outline priority next steps for internal practice.

Belcher, B. and Palenberg, M. (2016). Outcomes and Impact Terminology: Towards Conceptual Clarity. Expert Lecture to the American Evaluation Association, Atlanta, GA, USA, Oct. 26-29, 2016.
Abstract: The terms “outcome” and “impact” are ubiquitous in evaluation discourse. But there are many competing definitions in use that lack clarity and consistency, and sometimes even represent fundamentally different views. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession.

This paper investigates how the terms are defined and understood by different institutions and communities. It systematically breaks down the definitions into a total of 15 distinct defining elements and uses this framework to compare usage patterns across definitions and to methodically assess usefulness and limitations.

Based on this assessment, the paper proposes remedy in three parts: by applying several good definition practices in future updates, by differentiating causal perspectives, and by employing a set of meaningful qualifiers when using the terms outcome and impact.

Belcher, B. and Soni, S. (2016). Applying and testing a quality assessment framework for inter- and transdisciplinary research. Presented to 2016 Canadian Evaluation Society Conference, St. John’s, Canada.
Abstract: Research increasingly seeks to generate knowledge and contribute to real-world solutions. As boundaries between disciplines are crossed and research engages more with stakeholders in complex systems, traditional academic definitions and criteria of quality are no longer sufficient. There is a need for a parallel evolution of principles to define and evaluate quality in a transdisciplinary research (TDR) context. We developed a quality assessment framework (QAF) based on a systematic review, organized around four principles: relevance, credibility, legitimacy, and effectiveness. QAF scores reflect the degree to which projects apply theoretically derived principles in their design and implementation. We tested the QAF on 34 theses completed by RRU students. Inter-rater reliability was good, and the instrument proved practical and can be applied consistently across projects and reviewers. The QAF helped to systematically evaluate strengths and weaknesses in the projects reviewed and to provide guidance for improving research design and implementation. This presentation will provide an overview of the QAF and the assessment methods, results and lessons learned for designing and evaluating quality TDR, and for enhancing solution-oriented student research at RRU.
Belcher, B. and Palenberg, M. (2016). Outcomes and Impacts – Towards Conceptual Clarity. Presented to the European Evaluation Society Conference, Maastricht, The Netherlands, 26-30 Sept. 2016.
Abstract: The terms “outcome” and “impact” are ubiquitous in evaluation discourse. But there are many competing definitions in use that lack clarity and consistency, and sometimes even represent fundamentally different views. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession.

This paper investigates how the terms are defined and understood by different institutions and communities. It systematically breaks down the definitions into a total of 15 distinct defining elements and uses this framework to compare usage patterns across definitions and to methodically assess usefulness and limitations.

Based on this assessment, the paper proposes remedy in three parts: by applying several good definition practices in future updates, by differentiating causal perspectives, and by employing a set of meaningful qualifiers when using the terms outcome and impact.

Belcher, B. (2015). Seeking Evidence of Research Effectiveness: Lessons from an international research-for-development programme. Institute of Advanced Study Seminar, Oct. 12 2015.
Abstract: Publicly supported research faces high expectations from funding agencies to achieve and prove “impact”. At the same time, there are strong internal pressures to evaluate research effectiveness, to better learn from experience and to improve research design and implementation. The tools for doing this are still not well developed, but there is substantial emerging experience with utilization-focused evaluation and theory-based evaluation approaches applied to research. The Center for International Forestry Research (CIFOR), an international research-for-development organization, is at the forefront of this development. This seminar will provide an overview and context of CIFOR’s research focus and approach, with emphasis on an ongoing reform process that markedly shifted the center and its scientists from a primary focus on high quality outputs to a shared responsibility for outcomes. We have developed and started implementing new approaches to planning, monitoring and evaluating in which the intended contributions of research are deliberate, explicit and testable. This improves our ability to gather evidence, assess and communicate outcomes and impacts for enhanced accountability, and our ability to learn from experience. The approach is promising, with important technical lessons and lessons about the inherent cultural change and how to support it. Questions still remain about whether the evidence produced and used in this approach will satisfy all constituencies, and about how to further improve the quality of this evidence.
Belcher, B. and Suryadarma, D. (2015). Lessons from a Participatory Evaluation of the Global Comparative Study of REDD. Presented to inaugural meeting of KNOWFOR Knowledge Mobilization Community of Practice, July 9 2015, IIED, London, UK.
Davies, B., Belcher, B. and Colomer, J. (2015). Learning Partnerships in Participatory Planning, Monitoring & Evaluation. Presented to Conference on Monitoring and Evaluation for Responsible Innovation, Mar. 19-20, 2015, Wageningen University and Research Centre.
Belcher, B., Rasmussen, K., Kemshaw, M. and Zornes, D. (2014). Defining and Measuring Research Quality in a Transdisciplinary Context: A Systematic Review. Presented to Interdisciplinary Social Sciences Conference, 11-13 June, Vancouver, Canada.
Belcher, B. (2013). Building a Theory of Change for Impact-Oriented Natural Resources Management Research. Meeting of the Program on “Management and conservation of Forest and Tree Resources”, Bioversity International, Rome, Italy.
Belcher, B. (2012). Co-learning on Impact Evaluation Design in NRM Research Programmes. Natural Resources Management Research Impact Evaluation Community of Practice, The Worldfish Center, Penang, Malaysia.
Belcher, B. (2012). Specifying Outcomes to Enhance Research Effectiveness. Consortium Research Program on Forests, Trees and Agroforestry. Nairobi, Kenya.

Invited Talks

Belcher, B. (2017). Theory Based Evaluation of Policy Relevant Research: Lessons from a Series of FTA Case Studies. Invited Presentation to Impacts of International Agricultural Research: Rigorous Evidence for Policy, Nairobi, Kenya, July 6-8.
Abstract: The increasing demand from research funders and research managers to assess, evaluate and demonstrate the quality and the effectiveness of research is well known. Less discussed, but equally important, is the evolving interest and use of research evaluation to support learning and adaptive management within research programmes. This is especially true in a research-for-development context where research competes with other worthy alternatives for overseas development assistance funding and where highly complex social, economic and ecological environments add to evaluation challenges. Researchers and research managers need to know whether and how their interventions are working to be able to adapt and improve their programmes as well as to be able to satisfy their funders. This paper presents a theory-based research evaluation approach that was developed and tested on several policy-relevant research activities undertaken by CIFOR and FTA. Each research evaluation began with documentation of a theory of change (ToC) that identified key actors, processes and results. Data collected through document reviews, key informant interviews and focus group discussions were analysed to test the ToCs against evidence of outcomes in the form of discourse, policy formulation and practice change. The approach proved valuable as a learning tool for researchers and research managers and it has facilitated communication with funders about actual and reasonable research contributions to change. Evaluations that employed a participatory approach with project scientists and partners noticeably supported team learning about past work and about possible adaptations for the future. In all four cases, the retrospective ToC development proved challenging and resulted in simplistic ToCs. Further work is needed to draw on social scientific theories of knowledge translation and policy processes to develop and further test more sophisticated theories of change. 
This theory-based approach to research evaluation provides a valuable means of assessing research effectiveness (summative value) and supports learning and adaptation (formative value) at the project or programme scale. The approach is well suited to the research-for-development projects represented by the case studies, but it should be applicable to any research that aspires to have a societal impact.
Belcher, B. (2017). Evaluation with and Evaluation of ToC: Lessons from FTA. Invited Presentation to CGIAR Independent Evaluation Arrangement Symposium on use and evaluation of Theories of Change. Rome, Jan. 12-13, 2017.
Belcher, B. (2016). Using Theories of Change to Enhance Research Effectiveness. Invited presentation to World Agroforestry Center “Research to Impact” meeting, Bogor, Indonesia, Dec. 5 2016.
Belcher, B. (2015). Defining and Measuring Research Quality in a Transdisciplinary Context. Invited presentation, Institute of Advanced Study Public Lecture, Hatfield College, Durham University, Dec. 1, 2015.
Abstract: Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient; there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking information on: arguments for or against expanding definitions of research quality; purposes for research quality evaluation; proposed principles of research quality; proposed criteria for research quality assessment; proposed indicators and measures of research quality; and proposed processes for evaluating TDR. We used the information from the review, and our own experience in two research organizations that use TDR approaches, to develop a prototype framework for evaluating TDR. The presentation will provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigour; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem-solving and social change.

Other

Belcher, B. M., Rasmussen, K. E., Kemshaw, M. R., & Zornes, D. A. (2013). Finding appropriate definitions and measures of research quality for transdisciplinary and applied natural resource management research: a systematic review protocol. CIFOR Occasional Paper No. 99. Center for International Forestry Research: Bogor, Indonesia.