By Gabriele Bammer
Do we need a protocol for documenting how research tackling complex social and environmental problems was undertaken?
Usually when I read descriptions of research addressing a problem such as poverty reduction or obesity prevention or mitigation of the environmental impact of a particular development, I find myself frustrated by the lack of information about what was actually done. Some processes may be dealt with in detail, but others are glossed over or ignored completely.
For example, often such research brings together insights from a range of disciplines, but details may be scant on why and how those disciplines were selected, whether and how they interacted and how their contributions to understanding the problem were combined. I am often left wondering about whose job it was to do the synthesis and how they did it: did they use specific methods and were these up to the task? And I am curious about how the researchers assessed their efforts at the end of the project: did they miss a key discipline? would a different perspective from one of the disciplines included have been more useful? did they know what to do with all the information generated?
Research on complex problems may often also seek to have a direct impact on the problem. I find myself asking: how did the researchers decide that that was a reasonable and realistic expectation? how did they decide the kind of impact to focus on? what theory and methods did they use to understand their options for having an impact? how did they identify and get to understand the political, historical, cultural and other contextual factors that might affect their ability to have an impact? who did they identify as key players and how did they decide to make them familiar with their research findings – did they have a communication strategy (if so, what was it) or did they seek to engage those stakeholders in the research (if so, how; and how successful were they)?
In contrast to established disciplines such as chemistry and sociology, where there are well-developed ways of writing the ‘methods section’ when publishing a piece of research, there is no agreed-upon way to write up how research tackling a complex problem was undertaken. In particular, conventions are lacking about what should be included and in what detail.
It’s worth reminding ourselves about the purposes of the methods section; essentially there are four:
- to allow the reader to understand how the problem was tackled
- to allow the reader to judge whether the most up-to-date and suitable methods were used, and whether the methods were used appropriately
- to allow the reader to judge whether the authors’ interpretation of the results is justified given the methods used
- where appropriate, to enable replication of the research.
Understanding how the problem was tackled
Regarding the first purpose, a key challenge in fully describing how a complex problem was tackled is that there is usually a lot of detail to provide. As yet, there are few agreed-upon shorthand conventions, so the descriptions are often cumbersome as well as long. More streamlined conventions will develop over time, but only if long descriptions are first written, forming the basis for discussion and debate about what should be included and in what detail. Such long descriptions often go beyond what is currently considered acceptable for publication in a peer-reviewed journal.
Have the most up-to-date and suitable methods been used, and were the methods used appropriately?
Allowing the reader to assess the methods used is currently difficult, because there is no repository of all available methods. As described in an earlier blog post, relatively few journals publish such methods, making accumulation of knowledge about methods slow.
Various toolkits have been developed, but these generally cover only a limited section of the terrain. Examples are provided in the Toolkits for Transdisciplinarity series published in the journal GAIA (see references below). The Integration and Implementation Sciences (I2S) website is gradually accumulating a wider range of tools.
[Author note December 2022: These tools are currently being updated and relocated to this i2Insights repository or the i2S-Talks YouTube channel, while those that are outdated are being archived.]
Fidelity means using a method as its developer intended. It is not yet clear whether fidelity is important for methods used to address complex social and environmental problems. There may be some essential requirements, with others left to the discretion of the user. But often the developers of the methods do not themselves specify essential requirements, and it will take time to accumulate enough experience – published in a suitable way – to determine what those requirements are.
Is the interpretation justified given the methods used?
Interpretation often bedevils research on complex social and environmental problems, given the multiple dimensions of such problems and the often competing perspectives and values embedded in them. Nevertheless, it is still important and possible to assess at least some of the claims made by the researchers, for example: was coverage of stakeholder perspectives as widespread and representative as the interpretation assumes? were limitations consequent on identified gaps appropriately accounted for? have the researchers’ own biases and values distorted the interpretation?
Enabling replication of the research
Replicability is a key feature of research that seeks universal findings. A description of an experiment, for example, should be detailed enough that an independent research team can perform the same experiment to see if they get the same results. In dealing with complex problems, the role of replication is itself a topic for productive debate. Context is usually critical in addressing complex problems, and how context limits replicability is an area wide open for discussion. One place where replication would seem appropriate, however, is in searching for simple rules to explain complex behaviours, as occurs in agent-based modelling.
There is currently a vicious cycle in which poor description of how complex problems were tackled limits the development of methods to address such problems, which in turn encourages ongoing glossing over of what actually happened. If we are to make progress in research and action on complex social and environmental problems, we need to turn this into a virtuous cycle by ensuring that methods are fully described and, therefore, debated and improved.
What do you think? Does your experience mirror mine or is it different? How do you think we could progress?
Bammer, G. (2015). Toolkits for transdisciplinarity: Toolkit #1 Co-producing knowledge. GAIA, 24, 3: 149. (DOI): 10.14512/gaia.24.3.2.
Bammer, G. (2015). Toolkits for transdisciplinarity: Toolkit #2 Engaging and influencing policy. GAIA, 24, 4: 221. (DOI): 10.14512/gaia.24.4.2.
Bammer, G. (2016). Toolkits for transdisciplinarity: Toolkit #3 Dialogue methods for knowledge synthesis. GAIA, 25, 1: 7. (DOI): 10.14512/gaia.25.1.3.
Bammer, G. (2016). Toolkits for transdisciplinarity: Toolkit #4 Collaboration. GAIA, 25, 2: 77. (DOI): 10.14512/gaia.25.2.2.
Biography: Gabriele Bammer PhD is a professor at The Australian National University in the Research School of Population Health’s National Centre for Epidemiology and Population Health. She is developing the new discipline of Integration and Implementation Sciences (I2S) to improve research strengths for tackling complex real-world problems through synthesis of disciplinary and stakeholder knowledge, understanding and managing diverse unknowns and providing integrated research support for policy and practice change. She leads the theme “Building Resources for Complex, Action-Oriented Team Science” at the US National Socio-environmental Synthesis Center.