Crossing the science-policy interface: Lessons from a research project on Brazil nut management in Peru.
Citation: Ramirez, L.F., & Belcher, B.M. (2018). Crossing the science-policy interface: Lessons from a research project on Brazil nut management in Peru. Forest Policy and Economics. [In press]. https://doi.org/10.1016/j.forpol.2018.07.018
There are high expectations for contemporary forestry research, and sustainability research more broadly, to have impact in the form of improved institutions, policy and practice and improved social and environmental conditions. As part of this trend, there has been an evolution of research approaches that move beyond isolated, reductionist, disciplinary science toward approaches that integrate disciplines (interdisciplinary) and that engage a wider range of research stakeholders (transdisciplinary) as a way to be more effective. As these approaches evolve, there are good opportunities to learn from the experience of projects that have had impact at some level. This paper presents lessons from a case study of a research project that succeeded in crossing the science-policy interface. Our study characterizes the design and implementation of a research project on the influence of timber harvesting on Brazil nut production using transdisciplinary research (TDR) design principles, and empirically assesses project outputs and outcomes in relation to a project theory of change (ToC) based on document review and key informant interviews. The Brazil Nut Project included some TDR elements and realized a substantial part of its ToC. The interviews identified mixed perceptions of the research design, implementation, and the extent of outcome achievement from different stakeholder perspectives. Our analysis suggests that limited stakeholder engagement was a crucial factor affecting perceptions of legitimacy and relevance, the two main TDR principles underpinning overall research effectiveness in our study. The application of the TDR analytical framework indicates substantial scope to improve research effectiveness, even without striving for a TDR theoretical ideal.
A response to Hansson and Polk (2018) "Assessing the impact of transdisciplinary research: The usefulness of relevance, credibility, and legitimacy for understanding the link between process and impact"
Citation: Belcher, B. M., Ramirez, L. F., Davel, R., & Claus, R. (2018). A response to Hansson and Polk (2018) “Assessing the impact of transdisciplinary research: The usefulness of relevance, credibility, and legitimacy for understanding the link between process and impact”. Research Evaluation [In press]. https://doi.org/10.1093/reseval/rvy022
Hansson and Polk (2018, Research Evaluation, 27/2: 132–44) aim to assess the usefulness of the concepts of relevance, credibility, and legitimacy for understanding the link between process and impact in transdisciplinary (TD) research. However, the paper seems to misunderstand and misrepresent some of the ideas in the two main reference articles. It also uses definitions of the concepts it aims to test that are inconsistent with the definitions offered by the reference papers. The methods description is insufficient to know what data were collected or how they were analyzed. More importantly, the effort to understand relationships between process and impact in TD research needs more careful definitions of the concepts of outcome and impact, as well as more objective ways to assess outcomes and impact. This letter discusses shortcomings in the article and makes suggestions to improve conceptual clarity and methods for empirically assessing TD research effectiveness.
Citation: Belcher, B., & Palenberg, M. (2018). Outcomes and Impacts of Development Interventions: Toward Conceptual Clarity. American Journal of Evaluation, 39(4), 478–495. https://doi.org/10.1177/1098214018765698
The terms “outcome” and “impact” are ubiquitous in evaluation discourse. However, there are many competing definitions that lack clarity and consistency and sometimes represent fundamentally different meanings. This leads to profound confusion, undermines efforts to improve learning and accountability, and represents a challenge for the evaluation profession. This article investigates how the terms are defined and understood by different institutions and communities. It systematically investigates representative sets of definitions, analyzing them to identify 16 distinct defining elements. This framework is then used to compare definitions and assess their usefulness and limitations. Based on this assessment, the article proposes a remedy in three parts: applying good definition practice in future definition updates, differentiating causal perspectives and using appropriate causal language, and employing meaningful qualifiers when using the terms outcome and impact. The article draws on definitions used in international development, but its findings also apply to domestic public sector policies and interventions.
Citation: Belcher, B., Suryadarma, D., & Halimanjaya, A. (2017). Evaluating policy-relevant research: lessons from a series of theory-based outcomes assessments. Palgrave Communications, 3, 1–16.
The increasing external demand from research funders and research managers to assess, evaluate and demonstrate the quality and the effectiveness of research is well known. Less discussed, but equally important, is the evolving interest in and use of research evaluation to support learning and adaptive management within research programmes. This is especially true in a research-for-development context, where research competes with other worthy alternatives for overseas development assistance funding and where highly complex social, economic and ecological environments add to evaluation challenges. Researchers and research managers need to know whether and how their interventions are working to be able to adapt and improve their programmes, as well as to be able to satisfy their funders. This paper presents a theory-based research evaluation approach that was developed and tested on four policy-relevant research activities: a long-term forest management research programme in the Congo Basin; a large research programme on forests and climate change; a multi-country research project on sustainable wetlands management; and a research project on the furniture value chain in one district in Indonesia. The first used Contribution Analysis and the others used purpose-built outcome evaluation approaches that combined concepts and methods from several approaches. Each research evaluation began with documentation of a theory of change (ToC) that identified key actors, processes and results. Data collected through document reviews, key informant interviews and focus group discussions were analysed to test the ToCs against evidence of outcomes in the form of discourse, policy formulation and practice change. The approach proved valuable as a learning tool for researchers and research managers, and it has facilitated communication with funders about actual and reasonable research contributions to change.
Evaluations that employed a participatory approach with project scientists and partners noticeably supported team learning about past work and about possible adaptations for the future. In all four cases, the retrospective ToC development proved challenging and resulted in overly simplistic ToCs. Further work is needed to draw on social scientific theories of knowledge translation and policy processes to develop and further test more sophisticated theories of change. This theory-based approach to research evaluation provides a valuable means of assessing research effectiveness (summative value) and supports learning and adaptation (formative value) at the project or programme scale. The approach is well suited to the research-for-development projects represented by the case studies, but it should be applicable to any research that aspires to have a societal impact. This article is published as part of a collection on the future of research assessment.
Citation: Belcher, B.M., Rasmussen, K.E., Kemshaw, M.R., & Zornes, D.A. (2016). Defining and assessing research quality in a transdisciplinary context. Research Evaluation, 25(1), 1–17.
Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.
Finding appropriate definitions and measures of research quality for transdisciplinary and applied natural resource management research: A systematic review protocol
Citation: Belcher, B.M., Rasmussen, K.E., Kemshaw, M.R., & Zornes, D.A. (2013). Finding appropriate definitions and measures of research quality for transdisciplinary and applied natural resource management research: a systematic review protocol. CIFOR Occasional Paper no. 99. Center for International Forestry Research: Bogor, Indonesia.