Toolkits for transdisciplinary research

Community member post by Gabriele Bammer

Gabriele Bammer (biography)

If you want to undertake transdisciplinary research, where can you find relevant concepts and methods? Are there compilations or toolkits that are helpful?

I’ve identified eight relevant toolkits, which are described briefly below and in more detail in the journal GAIA’s Toolkits for Transdisciplinarity series.

One toolkit provides concepts and methods relevant to the full range of transdisciplinary research, while the others cover four key aspects: (i) collaboration, (ii) synthesis of knowledge from relevant disciplines and stakeholders, (iii) thinking systemically, and (iv) making change happen.

Full range of transdisciplinary research
1. Research integration and implementation

2. Collaboration

Synthesis of knowledge from relevant disciplines and stakeholders
3. Co-producing knowledge
4. Dialogue methods for knowledge synthesis
5. Integration methods

Thinking systemically
6. (Dynamic) systems thinking

Making change happen
7. Engaging and influencing policy
8. Change

Of the eight toolkits, two – on knowledge co-production and on integration – were developed by transdisciplinary researchers. The others were developed in different contexts but still include many methods that transdisciplinarians will find useful.

1. Research integration and implementation

The Integration and Implementation Sciences (I2S) website provides more than 100 tools, approaches and cases relevant to research integration and implementation. They deal with:

  1. synthesis of knowledge from different disciplines and stakeholders
  2. understanding and managing unknowns
  3. providing integrated research support for policy and practice change.


2. Collaboration

This toolkit provides practical guidance on collaboration in research teams, including those that strive for high levels of integration. It is divided into nine sections:

  1. preparing to collaborate
  2. building a research team
  3. fostering trust
  4. developing a shared vision
  5. communicating about science
  6. sharing recognition and credit
  7. handling conflict
  8. strengthening team dynamics
  9. navigating and leveraging networks and systems.

Reference: Bennett, L. M., Gadlin, H. and Levine-Finley, S. (2010). Collaboration and team science: A field guide. National Institutes of Health Publication, 10-7660. National Institutes of Health: Bethesda, United States of America. Online (open access):

3. Co-producing knowledge

The td-net toolbox for co-producing knowledge provides 14 methods for bringing together different perspectives on a problem, recognising that both individuals and social groups have different ways of thinking about issues. The methods deal with ways of:

  1. tailoring research questions
  2. identifying relevant ‘actors’
  3. constructing groups
  4. sharing and consolidating expert and/or non-expert knowledge and perspectives
  5. constructing a timeline of significant events
  6. planning possible futures
  7. challenging suggested solutions
  8. identifying impacts.


4. Dialogue methods for knowledge synthesis

Fourteen dialogue methods to bring together disciplinary experts and/or stakeholders are described, along with case studies of applications in four research areas: environment, population health, security and technological innovation. The methods deal with ways of understanding and combining:

  1. judgements
  2. visions
  3. assumptions
  4. interests
  5. values.

Reference: McDonald, D., Bammer, G. and Deane, P. (2009). Research integration using dialogue methods. ANU Press: Canberra, Australia. Online (open access):

5. Integration methods

Integration is essential in transdisciplinary research. This toolkit presents seven groups of integration methods:

  1. Integration through conceptual clarification and theoretical framing
  2. Integration through research questions and hypothesis formulation
  3. Screening, using, refining, and further developing effective integrative scientific methods
  4. Integrative assessment procedures
  5. Integration through development and application of models
  6. Integration through artifacts, services and products as boundary objects
  7. Integrative procedures and instruments of research organisation

Reference: Bergmann, M., Jahn, T., Knobloch, T., Krohn, W., Pohl, C. and Schramm, E. (2012). Methods for transdisciplinary research: A primer for practice. Campus Verlag: Frankfurt am Main, Germany. German version: Methoden transdisziplinärer Forschung: Ein Überblick mit Anwendungsbeispielen (2010). See also Matthias Bergmann’s blog post.

6. (Dynamic) systems thinking

Transdisciplinary research often requires systems thinking, especially understanding how the inter-related elements of a problem form a complex and unified whole, and how those interdependencies influence the actions that can be taken. Seven how-to guides provide an introduction to systems thinking tools, especially for understanding dynamic systems. Several focus on systems archetypes, which are distinctive combinations of reinforcing and balancing processes found in many kinds of organisations, under many circumstances, and at many levels and scales. They are:

  1. Introduction to systems thinking
  2. Systems thinking tools: a user’s reference guide
  3. System archetypes basics: from story to structure
  4. Systems archetypes I: diagnosing systemic issues and designing interventions
  5. Systems archetypes II: using systems archetypes to take effective action
  6. Systems archetypes III: understanding patterns of behaviour and delay
  7. Applying systems archetypes
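
The reinforcing and balancing processes that archetypes combine are straightforward to simulate. As a minimal sketch in Python (the function, parameter names and numbers are illustrative, not taken from the guides), a goal-seeking balancing loop can be integrated step by step:

```python
# Sketch of the "balancing loop" archetype: a corrective action
# repeatedly closes the gap between a state and a target, so the
# state rises towards the goal and levels off.

def simulate_balancing_loop(state, goal, adjustment_rate, steps, dt=1.0):
    """Euler integration of d(state)/dt = adjustment_rate * (goal - state)."""
    trajectory = [state]
    for _ in range(steps):
        gap = goal - state          # the balancing feedback signal
        state += adjustment_rate * gap * dt
        trajectory.append(state)
    return trajectory

# Classic goal-seeking behaviour: approaches 100 without overshooting.
path = simulate_balancing_loop(state=0.0, goal=100.0, adjustment_rate=0.2, steps=30)
```

A reinforcing loop is the same structure with the gap replaced by the state itself, which produces exponential growth instead of goal-seeking.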


7. Engaging and influencing policy

This guide provides a general approach and specific methods for how researchers can effectively interact with policy makers. It is based on the extensive experience of the Research and Policy in Development (RAPID) programme of the UK Overseas Development Institute. The toolkit provides guidance in three areas:

  1. Diagnosing the problem: understanding root causes rather than symptoms, understanding why the problem persists, diagnosing complexity and uncertainty, and identifying stakeholders.
  2. Developing an engagement strategy to influence policy: identifying realistic outcomes, identifying who or what is to be influenced, developing a theory of change, developing and implementing a communications strategy, and assessing the available capacity and resources.
  3. Developing a monitoring and learning plan: defining information requirements, collecting and managing data, and making sense of data to improve decision-making.

Reference: Young, J., Shaxson, L., Jones, H., Hearn, S., Datta, A. and Cassidy, C. (2014). Rapid Outcome Mapping Approach: A Guide to Policy Engagement and Influence. Overseas Development Institute (ODI): London, UK. Online (PDF and workbook):

8. Change

More than 120 techniques aimed at achieving change are presented, many of which can be adapted for transdisciplinary research. There are three major sections covering change at the following levels:

  1. Personal, in two groups: goals and creativity, and personal growth
  2. Team, in two groups: different perceptions of reality (maps) and team learning
  3. Larger systems, in six groups: organisational analysis; vision, values and goals; planning and project management; understanding clients and stakeholders; systems thinking; and large systems change.

Reference: Nauheimer, H. (1997). The change management toolbook. A collection of tools, methods and strategies. Online (open access PDF and online version with additional more recent tools):

Do you have useful tools or toolkits to share?

To find out more, see Toolkits for transdisciplinarity:
Toolkit #1 – Co-producing knowledge. GAIA, 24, 3: 149. Online (DOI): 10.14512/gaia.24.3.2
Toolkit #2 – Engaging and influencing policy. GAIA, 24, 4: 221. Online (DOI): 10.14512/gaia.24.4.2
Toolkit #3 – Dialogue methods for knowledge synthesis. GAIA, 25, 1: 7. Online (DOI): 10.14512/gaia.25.1.3
Toolkit #4 – Collaboration. GAIA, 25, 2: 77. Online (DOI): 10.14512/gaia.25.2.2
Toolkit #5 – Change. GAIA, 25, 3: 149. Online (DOI): 10.14512/gaia.25.3.2
Toolkit #6 – Research integration and implementation. GAIA, 25, 4: 229. Online (DOI): 10.14512/gaia.25.4.2
Toolkit #7 – (Dynamic) systems thinking. GAIA, 26, 1: 7. Online (DOI): 10.14512/gaia.26.1.3
Toolkit #8 – Integration methods. GAIA, 26, 2: 79. Online (DOI): 10.14512/gaia.26.2.3

Biography: Gabriele Bammer PhD is a professor at The Australian National University in the Research School of Population Health’s National Centre for Epidemiology and Population Health. She is developing the new discipline of Integration and Implementation Sciences (I2S) to improve research strengths for tackling complex real-world problems through synthesis of disciplinary and stakeholder knowledge, understanding and managing diverse unknowns and providing integrated research support for policy and practice change. She leads the theme “Building Resources for Complex, Action-Oriented Team Science” at the US National Socio-environmental Synthesis Center.

Good practices in system dynamics modelling

Community member post by Sondoss Elsawah and Serena Hamilton

Sondoss Elsawah (biography)

Too often, lessons about modelling practices are left out of papers, including the ad hoc decisions, serendipities and failures incurred through the modelling process. The lack of attention to these details can lead to misperceptions about how the modelling process unfolds.

Serena Hamilton (biography)

We are part of a small team that examined five case studies where system dynamics was used to model socio-ecological systems. We had direct and intimate knowledge of the modelling process and outcomes in each case. Based on the lessons from the case studies, as well as the collective experience of the team, we compiled the following set of good practices for system dynamics modelling of complex systems.

Good practices in the model scoping and conceptualization phase:

  • Account for the time and resources required for evaluation and iterations in the proposal and planning phase.
  • Taking a step-wise approach is critical, especially for identifying the conceptual model elements with stakeholders. This stakeholder engagement should occur over multiple sessions; it can be too overwhelming to build a model in one sitting.
  • Remind stakeholders that modifying the hypotheses is relatively simple, so they can begin with an initial interpretation and change it easily. Also, the modeller should not exert too much pressure to include feedback loops, as some stakeholders may find them difficult or irrelevant.
  • Elicit knowledge on the formulation of decision rules as well as the system at stake.
  • Be aware of the limitations related to the different methods used to elicit and visualize the dynamic hypothesis, and how they may affect the final model and its application/use.
  • Leverage the strengths of various elicitation and mapping techniques throughout the modelling process, using approaches such as pairing methods and detailed variant maps.

Good practices in the model formulation phase:

  • Build a simple model first – identify the key variables, key decisions, main functions and primary behaviours. Once you have a first simple version of the model, think about the reference behaviours, i.e., does it behave as expected? More detail, including different metrics, can be built in iteratively.
  • Reflect on the model as it advances. At each model iteration, consider whether the model is aligned with its objective and scope.
  • Make use of (already tested) model structures (also known as model modules) when pertinent and available. This reuse may include model components built using other modelling approaches, to form hybrid system dynamics models. Sometimes other modelling approaches can simulate parts of the dynamic hypothesis better or more easily than system dynamics.
  • When your model consists of various modules, test these components individually first, then in pairs, triples and so on, in order to manage model complexity. This also allows a thorough investigation of each relationship between core processes.
  • Make smart use of prototypes to give users an appreciation of the final model’s capability while avoiding the risk of inflating their expectations.
  • Pay attention to (spatial and temporal) scaling and reporting unit questions when considering the model’s objective.
  • Avoid hiding parameters in equations. Having them explicit makes the model more transparent (to stakeholders and users) and easier to update.
  • Make use of software development methodologies (e.g., the Vee development process) and practices (e.g., version control) to structure the way you develop and test the model.
  • Make careful use of arrays and subscripts, as they can be complex to develop and test, and can hide some of the complexity of the model structure. Develop and test a full version of the single-dimension (non-arrayed) model before adding dimensions.
  • Calibrate the model using historic data where possible, even if the data are only available for part of the system.
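
Several of the practices above can be illustrated in a few lines of code. The sketch below is a hypothetical one-stock reservoir model, not one of the case studies; all names and numbers are illustrative. Parameters are kept explicit rather than buried inside equations, the flows are separate modules that can be tested individually, and a reference-behaviour check follows the run:

```python
# Explicit, named parameters (good practice: not hidden in equations),
# so stakeholders can inspect and update them.
PARAMS = {
    "inflow_rate": 12.0,      # volume units added per time step
    "outflow_fraction": 0.1,  # fraction of the stock released each step
    "initial_stock": 50.0,
}

def inflow(params):
    """Flow into the stock; a module that can be tested on its own."""
    return params["inflow_rate"]

def outflow(stock, params):
    """Flow out of the stock, proportional to the current level."""
    return params["outflow_fraction"] * stock

def simulate(params, steps, dt=1.0):
    """Euler integration of d(stock)/dt = inflow - outflow."""
    stock = params["initial_stock"]
    history = [stock]
    for _ in range(steps):
        stock += (inflow(params) - outflow(stock, params)) * dt
        history.append(stock)
    return history

# Reference-behaviour check: with constant inflow and proportional
# outflow, the stock should settle near inflow_rate / outflow_fraction.
history = simulate(PARAMS, steps=100)
```

Starting from the simple version, further detail (additional stocks, lookup functions, data-driven inflows) can then be layered in iteratively.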

Good practices in model evaluation:

  • Make use of logical reasoning and expert judgement to assess the structural validity of each of the interactions, and the complete set of interactions.
  • Perform behavioural testing with data in parts of the model where available. This should be complemented with other forms of model evaluation including peer review, sensitivity analysis, uncertainty analysis, robustness checks and comparison with other models.
  • Whether using specific data to populate a function or inferring a reference behaviour, stress-test to ensure that the model reproduces the system behaviour as closely as possible across the range of potential scenario or decision variable settings.
  • Test “on the go”, i.e., test small components before uncertainty grows “out of control”. Once all components are tested, an integrated, whole-of-system test is essential.
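
A simple form of the sensitivity analysis mentioned above is a one-at-a-time parameter sweep. As a sketch (the toy logistic model and all names are illustrative, not drawn from the case studies), the model is re-run across a range of settings and the results are checked against expected behaviour:

```python
# One-at-a-time sensitivity sweep over a single model parameter.

def model(growth_rate, steps=50, initial=10.0, capacity=1000.0, dt=1.0):
    """Toy logistic-growth stand-in for a calibrated model component."""
    x = initial
    for _ in range(steps):
        x += growth_rate * x * (1 - x / capacity) * dt
    return x

def sensitivity_sweep(values):
    """Final state for each setting of the parameter under test."""
    return {v: model(growth_rate=v) for v in values}

results = sensitivity_sweep([0.05, 0.1, 0.2, 0.4])
# Behavioural checks: the final state should never exceed the carrying
# capacity, and should increase with the growth rate over this range.
```

Stress-testing extends the same idea to extreme parameter settings, where implausible outputs often reveal structural problems.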

Good practices in model use:

  • Ensure that the final model is delivered to end users only after it has undergone and passed rigorous model evaluation. If released prematurely, errors in the model (including bugs, or structural or behavioural errors) may diminish users’ confidence in the model, even if the errors are subsequently fixed.
  • Link the model behaviour back to the sources of dynamics (e.g., feedback loops and delays). Make use of conceptual models (e.g., causal loop diagrams) developed throughout the process to complement the discussion.
  • Ensure the tools are well documented with adequate ‘help’ resources available online. Over-reliance on the developers for technical support is unwise and often limits uptake of the model.
  • Clearly discuss or describe the limits of the model, including possible inconsistencies with expected behaviour. Also note what the model best represents, including which behaviours are expected to be good indicators of response.

Good practices in software selection:

  • Explore the strengths and limitations of software platforms for specific technical and participatory modelling requirements early in the project, as software selection can have large implications for the modelling capabilities. If unsure about specific requirements, start with an easy-to-use, open access package (e.g., InsightMaker) until there is a better understanding of the required functionalities.
  • In general, consider the use of system dynamics software which has active user communities as they are more likely to provide adequate information, communication, and support for modellers.

Although we used system dynamics models as the common lens through which lessons were drawn, many of these insights are applicable to other modelling approaches. Are any of these practices useful for challenges you face? Are there good practices you can add to these lists?

To find out more:
Elsawah, S., Pierce, S., Hamilton, S.H., van Delden, H., Haase, D., Elmahdi, A., Jakeman, A. J. (2017). An overview of the system dynamics process for integrated modelling of socio-ecological systems: Lessons on good modelling practice from five case studies. Environmental Modelling and Software, 93: 127-145. Online:

Biography: Sondoss Elsawah is a senior lecturer at the University of New South Wales, Canberra, Australia. She comes from an operations research background. Her research focuses on the development and use of multi-method approaches to support learning and decision making in complex socio-ecological and socio-technical decision problems. Application areas include natural resource management and defence capability management. Her recent work focuses on how to integrate and transfer knowledge across projects and application domains to improve the practice and teaching of systems modelling methodologies. She is a member of the Core Modeling Practices pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

Biography: Serena Hamilton is a Postdoctoral Research Fellow at the Centre for Ecosystem Management at Edith Cowan University, Western Australia. Her research interests include integrated assessment and modelling, Bayesian networks and decision support tools for water resources management. Her recent research focuses on modelling for improving understanding of system linkages and management of complex socio-ecological systems. She is a member of the Core Modeling Practices pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).

Two barriers to interdisciplinary thinking in the public sector and how time graphs can help

Community member post by Jane MacMaster

Jane MacMaster (biography)

After a year or so of delivering seminars to public sector audiences on practical techniques for navigating complexity, I’ve observed two simple and fundamental barriers to dealing more effectively with complex, interdisciplinary problems in the public sector.

First is the lack of time to problem-solve – to pause and reflect on an issue, to build a deeper understanding of it, to think creatively about it from different angles, and to think through and test out some ideas. There is too much else going on.

Second is that it’s often quite difficult to put one’s collective finger on what, exactly, the problem is.

Choosing a model: If all you have is a hammer…

Community member post by Jen Badham

Jen Badham (biography)

As a modeller, I often get requests from research or policy colleagues along the lines of ‘we want a model of the health system’. It’s relatively easy to recognise that ‘health system’ is too vague and needs explicit discussion about the specific issue to be modelled. It is much less obvious that the term ‘model’ also needs to be refined. In practice, different modelling methods are more or less appropriate for different questions. So how is the modelling method chosen?

Modeling as empowerment

Community member post by Laura Schmitt Olabisi

Laura Schmitt Olabisi (biography)

Who can make systems change? The challenges of complexity are intensely felt by those who are trying to make strategic interventions in coupled human-environmental systems in order to fulfill personal, societal, or institutional goals. The activists, leaders, and decision-makers I work with often feel overwhelmed by trying to deal with multiple problems at once, with limited time, resources, and attention. We need tools to help leaders cut through the complexity so that they can identify the most effective strategies to make change.

This is where participatory system dynamics modelers like myself come in.

Where to publish? Journals for research integration and implementation concepts, methods and processes

Community member post by Gabriele Bammer

Gabriele Bammer (biography)

If you have developed a new dialogue method for bringing together insights from different disciplinary experts and stakeholders, or a refined modelling technique for taking uncertainty into account, or an innovative process for knowledge co-creation with government policy makers, where can you publish these to get maximum exposure and uptake?