Community member post by Kirsten Kainz
How can co-creation communities use models – simple visual representations and/or sophisticated computer simulations – in ways that promote learning and improvement? Modeling techniques can serve to generate insights and correct misunderstandings. Are they equally useful for fostering new learning and adaptation? Sterman (2006) argues that if new learning is to occur in complex systems, then models must be subjected to testing. Model testing must, in turn, yield evidence that not only guides decision-making within the current model, but also feeds back to improve existing models so that subsequent decisions can be based on new learning.
Consider a real-world case I was involved in: a meeting in a school district that intended to roll out a new mathematics curriculum and support teachers’ use of the new curriculum through professional development. The district had made a large monetary investment in the curriculum and professional development, both through the purchase of materials and the dedication of human resources to the effort. The sizeable investment at the district level was warranted in the minds of district administrators, as they predicted that the new curriculum paired with professional development would accelerate mathematics learning and manifest as improved student performance on state tests.
In this context, teachers and school district administrators gathered to plan collaboratively for the curriculum roll-out and professional development. Participants were asked to work in job-alike teams (teachers with teachers, administrators with administrators) to make conceptual models that responded to this question: “How will professional development activities increase student achievement?”
The model produced by administrators in this setting reflected a rational ordering of activities from left to right leading to a desired outcome. On its own, this version of reality seemed reasonable, if perhaps limited. In fact, a very similar implicit model guided much of the planning done by administrators and their consultants responsible for choosing the curriculum and professional development models. Teachers were not included in the planning and selection process.
However, when the administrators’ model was juxtaposed with one created by teachers it became clear that perceptions about prominent issues and predictive relations were different for people at different levels of the system.
What seemed like common sense at the administrative level of the system was not reflected by common sense at the instructional level of the system. The model created by teachers focused less on the left-to-right causal prediction of student performance and more on the interactive and cyclical nature of teaching and learning. The teachers’ model also reflected questions about the timing and purpose of the professional development and called into question its potential impact. When juxtaposed, the administrators’ and teachers’ models revealed important separations in members’ primary concerns and assumptions.
Participants in a co-creation endeavor could be stymied by such divergence in thinking within the team. Collaborative work and good will could end at this point.
However, careful facilitation can lead diverse groups to articulate and evaluate the assumptions of their mental models, readying the models for testing procedures, either via simulation or empirical study.
Even with compelling testing procedures in place, Sterman (2006) indicates that evidence feedback can fail to support group learning (and subsequent coherence) due to three primary pitfalls:
- System complexity and practical limitations in time and cognitive energy can interact in ways that encourage people to be guided by habit rather than by new learning;
- Participants may be more strongly influenced by emotional, affiliative, and personal forces than by the opportunities for new learning afforded by evidence; and
- Pressures to perform and appear competent may ultimately override the desire for change based on new learning.
This then raises important questions such as: How can a co-creation agenda be built to reinforce learning and coherence via modeling while avoiding dissolution due to differences in assumptions and beliefs among team members? Is the reinforcement for learning best achieved through the careful selection of participants a priori, or is there value in creating teams that will indeed diverge at points so that the divergence and path to coherence can support reflection?
What has your experience been in using modeling for learning? What challenges have you encountered, and how have you overcome them?
Sterman, J. D. (2006). Learning from evidence in a complex world. American Journal of Public Health, 96(3), 505-514.
Biography: Kirsten Kainz, PhD, is Director of Statistics at the Frank Porter Graham Child Development Institute, Clinical Associate Professor of Social Work, and Research Associate Professor of Education at the University of North Carolina at Chapel Hill. Additionally, she serves as an Education Partnership Consultant for the Strategic Education Research Partnership Institute in Washington, DC. Kainz uses research to design, examine, and evaluate effective education practices for students historically under-represented in education success, especially economically disadvantaged students. She is a member of the Co-Creative Capacity Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).
This blog post is one of a series developed in preparation for the second meeting in January 2017 of the Co-Creative Capacity Pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the US National Socio-Environmental Synthesis Center (SESYNC).