Towards an evaluation framework for participatory modeling

By Miles McNall

What are the results of participatory modeling efforts? What contextual factors, resources, and processes contribute to these results? Answering such questions requires the systematic and ongoing evaluation of processes, outputs, and outcomes. At present, participatory modeling lacks a framework to guide such evaluation efforts. In this post, I offer some initial thoughts on the features of such a framework.

A first step in developing an evaluation framework for participatory modeling is to establish criteria for processes, outputs, and outcomes. Such criteria would answer a basic question: what does it mean to say that a participatory modeling process, output, or outcome is good, worthy, or meritorious?

Some of these criteria will be specific to participatory modeling, while others can draw on related fields. For example, criteria for participatory processes in general might be informed by fields such as Community-Based Participatory Research, which already has well-defined criteria. Other indicators will be more specific to participatory modeling, focusing on how, and to what degree, stakeholders participate in model development, refinement, use, and dissemination.
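To make this more concrete, here is a minimal sketch, in Python, of how the depth of stakeholder participation might be recorded and summarized across the stages of a modeling effort. The stages and the 0-3 rating scale are illustrative assumptions on my part, not an established instrument.

```python
# Illustrative sketch only: rating the depth of stakeholder participation
# at each stage of a participatory modeling effort. The stages and the
# 0-3 scale are assumptions for this example, not an established rubric.

# 0 = not involved, 1 = consulted, 2 = collaborated, 3 = led
participation = {
    "problem framing": 3,
    "model development": 2,
    "model refinement": 2,
    "model use": 1,
    "dissemination": 0,
}

def participation_profile(ratings):
    """Summarize depth of participation and flag stages with no involvement."""
    mean_depth = sum(ratings.values()) / len(ratings)
    gaps = [stage for stage, score in ratings.items() if score == 0]
    return mean_depth, gaps

mean_depth, gaps = participation_profile(participation)
print(f"Mean participation depth: {mean_depth:.1f} on a 0-3 scale")
print(f"Stages with no stakeholder involvement: {gaps if gaps else 'none'}")
```

Even a crude profile like this makes visible where stakeholders were genuinely engaged and where they were absent, which is exactly the kind of process evidence an evaluation framework would need to capture.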

Similarly, what are the expected short-term outputs or products of participatory modeling, and how do we assess their quality? Outputs might range from a qualitative system map to a well-calibrated simulation model with an attractive user interface that facilitates continued use by stakeholders. Evaluating outputs requires more than simply counting products; it also entails assessing their quality along different dimensions. What are the criteria of merit for a cognitive map? For a system dynamics model?
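One way to operationalize such criteria is a simple rubric in which each type of output has its own dimensions of merit. The sketch below, again in Python, uses invented criteria and a hypothetical 1-5 rating scale purely for illustration; the real work lies in deciding what the dimensions should be.

```python
# Illustrative sketch only: a quality rubric in which each type of output
# has its own criteria of merit, each rated on a 1-5 scale. The criteria
# below are invented for this example, not validated instruments.

RUBRICS = {
    "cognitive map": [
        "conceptual coverage",
        "clarity of relationships",
        "fidelity to stakeholder views",
    ],
    "system dynamics model": [
        "structural validity",
        "calibration to data",
        "usability of interface",
    ],
}

def score_output(output_type, ratings):
    """Average the 1-5 ratings across the criteria for a given output type."""
    criteria = RUBRICS[output_type]
    missing = set(criteria) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    return sum(ratings[c] for c in criteria) / len(criteria)

quality = score_output("system dynamics model", {
    "structural validity": 4,
    "calibration to data": 3,
    "usability of interface": 5,
})
print(f"Mean quality score: {quality:.1f} out of 5")
```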

Finally, what are the anticipated short-term, intermediate, and long-term outcomes of participatory modeling, and how do we measure them? Learning is an often-cited outcome, but what kind of learning do we mean? Is it the enrichment of mental models, greater understanding of the social and ecological tradeoffs associated with a particular issue, or greater understanding of how to collaborate with other stakeholders and modelers to arrive at an acceptable compromise? How do we assess how much learning is enough? Other outcomes might include improved decision making, conflict resolution, changes in practices, changes in policy, or (one would hope!) the amelioration of the identified problem, issue, or situation. In each of these domains, evaluation criteria more refined than those that currently exist will need to be developed, taking into account the perspectives of both modelers and stakeholders on what counts as quality in participatory modeling.

Another important evaluation question concerns the benefits, or return on investment, of participatory modeling compared with the alternatives. In the evaluation field, it is generally considered good practice to evaluate the benefits of an intervention not in an absolute sense, but relative to the available alternatives. In the case of participatory modeling, this might involve comparing different participatory modeling approaches with each other, as well as with alternatives that fall outside the participatory modeling family, such as Structured Decision Making.

The comparison of participatory modeling to Structured Decision Making might be particularly useful insofar as both are participatory, both can serve the purpose of decision support, and both can be used to address similar problems. Only by answering the question of relative return on investment can we demonstrate to funders and stakeholders that participatory modeling is worth their time and money.
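A relative comparison of this kind ultimately reduces to some benefit-per-cost calculation. The sketch below illustrates the arithmetic with entirely invented approaches, benefit scores, and costs; a real evaluation would need defensible measures of both benefits and costs.

```python
# Illustrative sketch only: comparing decision-support alternatives on a
# simple benefit-cost ratio. The approaches, benefit scores, and costs
# are invented; a real evaluation would need defensible measures of both.

alternatives = {
    "participatory system dynamics": {"benefit": 78, "cost": 120_000},
    "fuzzy cognitive mapping": {"benefit": 64, "cost": 45_000},
    "structured decision making": {"benefit": 70, "cost": 60_000},
}

def benefit_cost_ratio(entry):
    """Benefit points delivered per $10,000 spent."""
    return entry["benefit"] / (entry["cost"] / 10_000)

ranked = sorted(alternatives.items(),
                key=lambda item: benefit_cost_ratio(item[1]),
                reverse=True)
for name, entry in ranked:
    print(f"{name}: {benefit_cost_ratio(entry):.2f} benefit points per $10k")
```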

Much work needs to be done to develop a framework that would facilitate the systematic evaluation of participatory modeling. Developing such a framework will require answering the following questions: What are the indicators of high-quality inputs, processes, outputs, and outcomes for participatory modeling? What methods can be used to measure them accurately? What are the relevant alternatives to which participatory modeling might be compared? What evaluation designs (e.g., case studies, experimental designs, or quasi-experimental designs) are best suited to answering questions about the outcomes and impacts of participatory modeling? And who will pay for all of this?

Biography: Miles McNall, Ph.D. is the Director of the Community Evaluation and Research Collaborative (CERC) at Michigan State University. His current research interests are the theory and practice of program evaluation and implementation science. Dr. McNall’s interest in quantitative systems modeling stems from the application of systems thinking and modeling to evaluation practice and the use of participatory modeling techniques to incorporate the knowledge of a variety of stakeholders into understanding and managing complex problems. Dr. McNall has been a member of the Michigan Association for Evaluation board since 2010. He earned his doctorate in sociology in 1996 at the University of Minnesota, Twin Cities. He is a member of the Participatory Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

This blog post is one of a series resulting from the first meeting in February 2016 of the Participatory Modeling Pursuit. This pursuit is part of the theme Building Resources for Action-Oriented Team Science through Syntheses of Practices and Theories funded by the National Socio-Environmental Synthesis Center (SESYNC).
