Learning through modeling

By Kirsten Kainz


How can co-creation communities use models – simple visual representations and/or sophisticated computer simulations – in ways that promote learning and improvement? Modeling techniques can serve to generate insights and correct misunderstandings. Are they equally useful for fostering new learning and adaptation? Sterman (2006) argues that if new learning is to occur in complex systems then models must be subjected to testing. Model testing must, in turn, yield evidence that not only guides decision-making within the current model, but also feeds back evidence to improve existing models so that subsequent decisions can be based on new learning.

Consider a real-world case I was involved in: a meeting in a school district that intends to roll out a new mathematics curriculum and support teachers’ use of the new curriculum through professional development. The district has made a large monetary investment in the curriculum and professional development, both through the purchase of materials and the dedication of human resources to the effort. District administrators consider the sizeable investment warranted because they predict that the new curriculum, paired with professional development, will accelerate mathematics learning and manifest as improved student performance on state tests.

In this context, teachers and school district administrators have gathered to plan collaboratively for the curriculum roll-out and professional development. Participants have been asked to work in job-alike teams (teachers with teachers, administrators with administrators) to make conceptual models that respond to this question: “How will professional development activities increase student achievement?”
The model produced by administrators in this setting reflected a rational ordering of activities from left to right leading to a desired outcome. On its own, this version of reality seemed reasonable, if perhaps limited. In fact, a very similar implicit model guided much of the planning done by administrators and their consultants responsible for choosing the curriculum and professional development models. Teachers were not included in the planning and selection process.

However, when the administrators’ model was juxtaposed with one created by teachers it became clear that perceptions about prominent issues and predictive relations were different for people at different levels of the system.
What seemed like common sense at the administrative level of the system was not reflected by common sense at the instructional level of the system. The model created by teachers focused less on the left-to-right causal prediction of student performance and more on the interactive and cyclical nature of teaching and learning. The teachers’ model also reflected questions about the timing and purpose of the professional development and called into question its potential impact. When juxtaposed, the administrators’ and teachers’ models revealed important divergences in members’ primary concerns and assumptions.

Participants in a co-creation endeavor could be stymied by such divergence in thinking within the team. Collaborative work and good will could end at this point.

However, careful facilitation can lead diverse groups to articulate and evaluate the assumptions of their mental models, readying the models for testing procedures, either via simulation or empirical study.

Even with compelling testing procedures in place, Sterman (2006) indicates that evidence feedback can fail to support group learning (and subsequent coherence) due to three primary pitfalls:

  1. System complexity and practical limitations in time and cognitive energy can interact in ways that encourage people to be guided by habit rather than by new learning;
  2. Participants may be more strongly influenced by emotional, affiliative, and personal forces than by the opportunities for new learning afforded by evidence; and
  3. Pressures to perform and appear competent may ultimately override the desire for change based on new learning.

This then raises important questions such as: How can a co-creation agenda be built to reinforce learning and coherence via modeling while avoiding dissolution due to differences in assumptions and beliefs among team members? Is the reinforcement for learning best achieved through the careful selection of participants a priori, or is there value in creating teams that will indeed diverge at points so that the divergence and path to coherence can support reflection?

What has your experience been in using modeling for learning? What challenges have you encountered and how have you overcome them?


Sterman, J. D. (2006). Learning from evidence in a complex world. American Journal of Public Health, 96(3), 505-514.

Biography: Kirsten Kainz, PhD, is Director of Statistics at the Frank Porter Graham Child Development Institute, Clinical Associate Professor of Social Work, and Research Associate Professor of Education at the University of North Carolina at Chapel Hill. Additionally, she serves as an Education Partnership Consultant for the Strategic Education Research Partnership Institute in Washington, DC. Kainz uses research to design, examine, and evaluate effective education practices for students historically under-represented in education success, especially economically disadvantaged students. She is a member of the Co-Creative Capacity Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

This blog post is one of a series developed in preparation for the second meeting in January 2017 of the Co-Creative Capacity Pursuit. This pursuit is part of the theme Building Resources for Complex, Action-Oriented Team Science funded by the US National Socio-Environmental Synthesis Center (SESYNC).

6 thoughts on “Learning through modeling”

  1. Graeme, thanks for that push. I just finished Gerald’s article on issues mapping, where the emphasis is on fostering dialogue rather than persuasion. It seems that if participants were willing to participate in dialogue, then revising models based on evolving coherence could be very useful. Plus!! Creating models of the evolution of thought that results from dialogue could also be very productive. I’m very interested in thinking about how to create contexts where participants agree to put aside persuasion and participate in dialogue. I bet you have some ideas about that!

  2. Nice post. Thanks Kirsten for the thoughts it has provoked. I wonder about a further iteration of modelling once the various groups have produced their first models and reflected on the models of others. What would happen if the teachers and the administrators each went away again and continued to model, but this time taking account of each other as part of the system they are modelling and whatever they have learned by reflecting on the content and form of the other’s model?

  3. Gerald, thanks for this very thoughtful response! It’s great to think that “yes, and” opens doors in ways that “no, but” simply can’t.

  4. There is an interesting assumption in your facilitation of model building that is different to the assumption that is often made in the research communities I am engaged in.

    I am thinking particularly of the sub-set of Operational Research (OR) people interested in ‘problem structuring methods’, and the rationale for problem structuring is very like yours – getting people to use modelling to increase mutual understanding of each other’s perspectives. Problem structuring researchers tend to develop modelling methods, so the structure of each of the models produced by different stakeholders is the same. This ensures that all models are subject to the same critical/systemic analysis in the process of construction, so they represent an ALREADY EVOLVED understanding, and not the initial thoughts of the participants.

    You, on the other hand, did not give the participants a pre-ordained structure to work with, and the result was that the FORM of the models said a lot about the assumptions that participants were working with. This betrays a blind spot in the way my own research communities tend to think about models: the form is given in the methodology, and it is done this way to channel the thinking of participants in a manner that experience tells us enhances critical and systemic thinking, but this suppresses the possibility of insights from comparisons of PRE-COLLABORATIVE thinking – our work is all about evolving thinking through the model building. Yours is about revealing existing thinking.

    The interesting question here is not if you or I are right or wrong, but whether it is possible to combine the two so we can have the strengths of both. I think the answer is ‘yes’.

    Interestingly, I co-authored a paper in 2014 on a process called Issues Mapping, which did exactly this: it produced models of stakeholder value positions (but using a standard form of representation) that were taken into a workshop in which emergent understandings (both mutual understanding and understanding of possible new ways forward on this issue, which was the use of genetically modified organisms in food production) were facilitated. Even this though doesn’t have the flexibility of model building in your example – yours I see as more like ‘sculpting’, which is the use of human bodies to symbolically represent people’s relationships, beliefs and feelings in a context, which can be really revealing and indeed FUN. Mutual understanding can arise on some contentious issues while people are laughing rather than shouting!

    Incidentally, the reference above (if you are interested) is: Cronin K, Midgley G and Skuba Jackson L (2014). Issues Mapping: A Problem Structuring Method for Addressing Science and Technology Conflicts. European Journal of Operational Research, 233, 145-158.

  5. Nice article, congratulations. On know-why.net you will find a number of models, some of which are the results of school classes. It is fairly simple to achieve action learning with children through qualitative modeling – even simpler than with adults. Define what the model should answer, agree on an overall target, and then ask four questions (what leads to more/less now/in the future?) for each factor. A connection then translates into “more of … leads directly to more/less of …”. That’s basically it. Of course, a lot more goes into supporting successful modeling and learning.
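The qualitative modeling procedure sketched in the comment above – factors connected by “more of X leads directly to more/less of Y” statements – can be represented as a signed directed graph. The following is a minimal illustrative sketch (the factor names are hypothetical, not drawn from any model on know-why.net): each link carries a polarity of +1 or -1, and the sign of an indirect effect is the product of the polarities along a path.

```python
# Signed links from the four-question procedure: (source, target, polarity),
# where +1 means "more leads to more" and -1 means "more leads to less".
links = [
    ("teacher confidence", "instructional quality", +1),
    ("instructional quality", "student achievement", +1),
    ("test pressure", "teacher confidence", -1),
]

def effect(source, target, links, visited=None):
    """Return the sign of the effect of `source` on `target` found by
    multiplying polarities along a path, or None if no path exists."""
    if visited is None:
        visited = set()
    for src, tgt, sign in links:
        if src == source and src not in visited:
            if tgt == target:
                return sign
            downstream = effect(tgt, target, links, visited | {src})
            if downstream is not None:
                return sign * downstream
    return None

# "More test pressure leads to less student achievement": -1 * +1 * +1
print(effect("test pressure", "student achievement", links))  # -1
```

The point of such a sketch is not prediction but exactly the kind of mutual-understanding work discussed in this thread: once assumptions are written down as signed links, participants can see where their causal claims differ and argue about specific arrows rather than whole worldviews.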

