The ‘methods section’ in research publications on complex problems – Purpose

By Gabriele Bammer


Do we need a protocol for documenting how research tackling complex social and environmental problems was undertaken?

Usually when I read descriptions of research addressing a problem such as poverty reduction or obesity prevention or mitigation of the environmental impact of a particular development, I find myself frustrated by the lack of information about what was actually done. Some processes may be dealt with in detail, but others are glossed over or ignored completely.

For example, often such research brings together insights from a range of disciplines, but details may be scant on why and how those disciplines were selected, whether and how they interacted and how their contributions to understanding the problem were combined. I am often left wondering about whose job it was to do the synthesis and how they did it: did they use specific methods and were these up to the task? And I am curious about how the researchers assessed their efforts at the end of the project: did they miss a key discipline? would a different perspective from one of the disciplines included have been more useful? did they know what to do with all the information generated?

Research on complex problems may often also seek to have a direct impact on the problem. I find myself asking: how did the researchers decide that that was a reasonable and realistic expectation? how did they decide the kind of impact to focus on? what theory and methods did they use to understand their options for having an impact? how did they identify and get to understand the political, historical, cultural and other contextual factors that might affect their ability to have an impact? who did they identify as key players and how did they decide to make them familiar with their research findings – did they have a communication strategy (if so, what was it) or did they seek to engage those stakeholders in the research (if so, how; and how successful were they)?

In contrast to established disciplines such as chemistry and sociology, where there are well-developed ways of writing the ‘methods section’ when publishing a piece of research, there is no agreed-upon way to write up how research tackling a complex problem was undertaken. In particular, conventions are lacking about what should be included and in what detail.

It’s worth reminding ourselves about the purposes of the methods section; essentially there are four:

  1. to allow the reader to understand how the problem was tackled
  2. to allow the reader to judge whether the most up-to-date and suitable methods were used, and whether the methods were used appropriately
  3. to allow the reader to judge whether the authors’ interpretation of the results is justified given the methods used
  4. where appropriate, to enable replication of the research.

Understanding how the problem was tackled
Regarding the first purpose, a key challenge in fully describing how a complex problem was tackled is that there is usually a great deal of detail to provide. As yet there are few agreed-upon shorthand conventions, so descriptions are often cumbersome as well as long. More streamlined conventions will develop over time, but only if long descriptions are written now to form the basis for discussion and debate about what should be included and in what detail. Such long descriptions often go beyond what is currently considered acceptable for publication in a peer-reviewed journal.

Have the most up-to-date and suitable methods been used, and were the methods used appropriately?
Allowing the reader to assess the methods used is currently difficult, because there is no repository of all available methods. As described in an earlier blog post, relatively few journals publish such methods, making accumulation of knowledge about methods slow.

Various toolkits have been developed, but these generally cover only a limited section of the terrain. Examples are provided in the Toolkits for Transdisciplinarity series published in the journal GAIA (see references below). The Integration and Implementation Sciences (I2S) website is gradually accumulating a wider series of tools.
[Author note December 2022: These tools are currently being updated and relocated to this i2Insights repository or the i2S-Talks YouTube channel, while those that are outdated are being archived.]

It is not yet clear whether the issue of fidelity is important for methods used to address complex social and environmental problems. Fidelity means using a method as its developer intended. There may be some essential requirements, with others left to the discretion of the user. But developers often do not specify which requirements of their methods are essential, and it will take time to accumulate enough experience – published in a suitable way – to determine what the essential requirements are.

Is the interpretation justified given the methods used?
Interpretation often bedevils research on complex social and environmental problems, given the multiple dimensions of such problems and the often competing perspectives and values embedded in them. Nevertheless, it is still important and possible to assess at least some of the claims made by the researchers, for example: was coverage of stakeholder perspectives as widespread and representative as the interpretation assumes? were the limitations arising from identified gaps appropriately accounted for? have the researchers’ own biases and values distorted the interpretation?

Replication
Replicability is a key feature of research that looks for universal findings. A description of an experiment, for example, should be detailed enough for an independent research team to perform the same experiment and see whether they get the same results. In dealing with complex problems, the role of replication is a topic for productive debate. Context is usually critical in addressing complex problems, and how context limits replicability is an area wide open for discussion. One place where replication does seem appropriate, however, is in searching for simple rules to explain complex behaviours, as occurs in agent-based modelling.
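To make the agent-based modelling case concrete, here is a minimal sketch of a Schelling-style segregation model in Python. It is an illustration of the general technique rather than a description of any particular study, and all names and parameter values (grid size, tolerance threshold, seed) are arbitrary choices for the sketch. The point is that the complete method – one simple movement rule, a handful of parameters and a random seed – fits in a few dozen lines, so an independent team can re-run it exactly.

```python
import random

# Illustrative only: a minimal Schelling-style segregation model, in which a
# simple individual rule ("move if too few neighbours are like me") produces
# complex collective patterns. All parameter values are arbitrary choices
# for this sketch, not taken from any published study.

SIZE = 20             # the world is a SIZE x SIZE grid with wrap-around edges
EMPTY_FRACTION = 0.1  # share of cells left empty so agents have room to move
TOLERANCE = 0.3       # minimum acceptable fraction of like-typed neighbours
STEPS = 50            # number of rounds of movement
SEED = 42             # fixing the seed is what makes the run exactly replicable

random.seed(SEED)

# 0 = empty cell; 1 and 2 = the two agent types, in equal numbers.
cells = [1, 2] * int(SIZE * SIZE * (1 - EMPTY_FRACTION) / 2)
cells += [0] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unhappy(r, c):
    """True if too few of the agent's occupied neighbours share its type."""
    kind = grid[r][c]
    like = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE
            if grid[nr][nc] != 0:
                total += 1
                like += grid[nr][nc] == kind
    return total > 0 and like / total < TOLERANCE

for _ in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] != 0 and unhappy(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] == 0]
    random.shuffle(movers)
    for r, c in movers:
        if not empties:
            break
        # Move the unhappy agent to a randomly chosen empty cell.
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], 0
        empties.append((r, c))

print("unhappy agents remaining:",
      sum(unhappy(r, c) for r in range(SIZE) for c in range(SIZE)
          if grid[r][c] != 0))
```

The contrast with much of the research discussed above is the point: here the full method can be published as the code itself, whereas context-dependent work with disciplines and stakeholders cannot be pinned down this completely, which is exactly why conventions for describing it are needed.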

Conclusion
There is currently a vicious cycle in which poor description of how complex problems were tackled limits the development of methods to address such problems, which in turn promotes ongoing glossing over of what actually happened. If we are to make progress in research and action on complex social and environmental problems, we need to turn this into a virtuous cycle by ensuring that methods are fully described and, therefore, debated and improved.

What do you think? Does your experience mirror mine or is it different? How do you think we could progress?

References:
Bammer, G. (2015). Toolkits for transdisciplinarity: Toolkit #1 Co-producing knowledge. GAIA, 24, 3: 149. Online (DOI): 10.14512/gaia.24.3.2.

Bammer, G. (2015). Toolkits for transdisciplinarity: Toolkit #2 Engaging and influencing policy. GAIA, 24, 4: 221. Online (DOI): 10.14512/gaia.24.4.2.

Bammer, G. (2016). Toolkits for transdisciplinarity: Toolkit #3 Dialogue methods for knowledge synthesis. GAIA, 25, 1: 7. Online (DOI): 10.14512/gaia.25.1.3.

Bammer, G. (2016). Toolkits for transdisciplinarity: Toolkit #4 Collaboration. GAIA, 25, 2: 77. Online (DOI): 10.14512/gaia.25.2.2.

Biography: Gabriele Bammer PhD is a professor at The Australian National University in the Research School of Population Health’s National Centre for Epidemiology and Population Health. She is developing the new discipline of Integration and Implementation Sciences (I2S) to improve research strengths for tackling complex real-world problems through synthesis of disciplinary and stakeholder knowledge, understanding and managing diverse unknowns and providing integrated research support for policy and practice change. She leads the theme “Building Resources for Complex, Action-Oriented Team Science” at the US National Socio-environmental Synthesis Center.

15 thoughts on “The ‘methods section’ in research publications on complex problems – Purpose”

  1. In research around complex issues we are often grasping in the dark; there are rarely methodologies or specific frameworks that can be universally applied, but rather approaches that are often useful for sensemaking across topic areas. What can be learned and detected by classical “scientific” methods is rarely applicable to complex research questions under real world conditions. Luckily, practitioners find that using a combination of broad stakeholder engagement and the insights derived from academic research is often enough to productively tackle a given issue. An iterative and agile process of problem framing and solution discovery is, in practice, often more important than the replicability of any given approach.

    • Thanks Michelle. I agree and it’s sharing those approaches so we can learn from each other and improve them that I am arguing for.

  2. Nice article, Gabriele.

    In the case of software-centric modeling papers, I’ve noticed a lot of authors treat their software like a black box and only focus on describing the results of using their software in, for example, a case study. I think this is equivalent to writing a paper about a new cold-fusion reactor without describing the chemicals and compounds used, in what quantities, and how they were mixed! Of course such a paper would never be accepted in a chemistry journal, and I don’t think we should ever publish “black box” software papers in modeling journals.

    The methods section is the perfect place to describe one’s software design, key algorithms, data structures, etc. A second part of the methods section can describe the set-up of the experimental case study. This can be followed by a two-part results section, where the first part describes the actual implementation of the software (screenshots, etc. as appropriate) and the second the results of the experimental case study.

    Ultimately, the goal of such a paper would be to not only show that the software system worked as intended, but to present it in such a way that someone else could implement the concepts, algorithms, and case study in another programming language or environment, or otherwise validate the claims made in the paper.

    Without such transparency, the software black box could be, as far as we know, filled with nothing but cold-fusion fantasy.

  3. I agree, Gabriele, that this practice creates a vicious cycle. I notice it in reviewers’ tendency to emphasize the technical parts of a paper at the expense of the method description, commenting that detail in this section is ‘just too much background text’.

    In regard to Replicability, I like Checkland’s ‘recoverability’ concept in action research as a suitable criterion for research validity when addressing complex problems. Quoting from Checkland and Holwell (1998): In order to be recoverable “it is essential to state the epistemology (the set of ideas and the processes in which they are used methodologically) by means of which they will make sense of their research, and so define what counts for them as acquired”. For interested readers, please check Checkland, P., & Holwell, S. (1998). Action research: its nature and validity. Systemic Practice and Action Research, 11(1), 9-21.

    On the ODD, I personally find it useful as a design and conceptualization tool (especially for beginners) rather than a good documentation tool. There is also the documentation protocol for System Dynamics models: Martinez-Moyano, I. J. (2012). Documentation for model transparency. System Dynamics Review, 28(2), 199-208.

    • Many thanks, Sondoss. Very useful to hear about your experience. Thanks also for the tools from Checkland and Holwell and from Martinez-Moyano.

  4. I agree that communication of methods in nascent fields can be difficult, but I don’t conceptualise the problem in the same way. As noted in the soft systems literature, a push for comprehensiveness is not necessarily the best way forward. In sociology, I am sure there are many parts of what is done that are not described, but that might have a significant impact on results. Even in chemistry, there have been cases where replication failed because the authors themselves were not aware of experimental conditions that affected their results.

    As I understand it, the solution in soft systems is to be critically aware of what is and is not described (and at what level of abstraction that is done). I would suggest that the best way to get into a virtuous cycle is not to push for “full description”, but rather for reviewers, readers and potential replicators to be more vocal about what they would specifically like to see better described in specific types of publications – norms are also likely to vary depending on the problem domain. ResearchGate’s functionality to comment on publications is an interesting development for this purpose – it allows gaps in methods sections to be discussed in a public forum even after publication. While a research publication looks final, we actually need a culture of constructive criticism, not just about changes to the research but also about how the research is presented.

    Regarding the specific points highlighted – I find that a high-level description is often enough to understand how a problem was tackled, and that because the problem domain is so immature, any method is likely to make some contribution. When it isn’t clear whether an interpretation is justifiable, I tend to err on the side of caution as a reviewer and ask the author to clarify – I treat unclear justifications as a failure of the review process. Replication is easy when analysis code and data are provided, and otherwise not even on the radar, because of the subjectivity involved. I get frustrated when authors don’t make a genuine effort to show how their approach relates to existing work. A problem I don’t know how to deal with is peer review of very large analyses. When even the authors struggle to conceptualise exactly what they have done, what is a reviewer meant to do? Structured approaches like ODD+D can help, but I think they are only part of the solution. Perhaps that is another reason for trying to innovate regarding what a methods section should look like in different types of publications.

    • That’s an interesting perspective, Joseph, thanks! For those interested, ODD+D is an expanded version of Overview, Design concepts and Details, referred to by Jason below, that also includes human decision making (an online version of the paper by Müller and colleagues describing this can be found at https://www.ufz.de/export/data/2/100069_ODD%20+%20D%20-%20Author%20manuscript.pdf).

      I expect that the level of description varies based on the approach, and that modelling approaches are moving towards much more standard protocols – indeed, that’s the task being undertaken by the Core Modeling Practices pursuit funded by SESYNC (see http://www.sesync.org/project/enhancing-socio-environmental-research-education/model-process-practices).

      In contrast, I was really struck by the lack of description of the methods used when reading the work of the World Commission on Dams ([Moderator update – In December 2021, this link was no longer available: unep[dot]org…dams…WCD]). This was an impressive undertaking, but there is scant detail about how they did their work.

  5. I struggled for a time, and I assume many others do as well, when transitioning from a disciplinary to a transdisciplinary mindset in tackling complex socio-economic issues. The vehicle I used to transcend this mental barrier is the evolving field of sustainability science: a science that attempts to resolve issues as much as to define them.

    I was able to create a solution framework [for agriculture sustainability] by proposing that there are three principal sources of [wicked] complex socio-economic issues:
    1) Outputs and outcomes of the system are varied in scope and scale
    2) System stakeholders have different values of, and accounting systems for, the varied outputs and outcomes
    3) Stakeholder organizations use different, often conflicting, governance frameworks when accounting for, valuing and interacting with other stakeholder organizations.

    In my early work, it seemed reasonable to focus on the most tangible (#1) issue from a disciplinary approach and then, at times, incorporate others’ views (#2) to bridge toward multi-disciplinary and inter-disciplinary solutions. Much was learned, but it failed to address the ‘socio’ aspect of the socio-economic issue.

    My approach was then to dissect organizational governance frameworks as an emergent quality (rather than the stated governance structure) to understand the interactions at the point of decision-making in these complex systems. In the spirit of sustainability science, this enabled measurement of the #1 issues, aligned and/or standardized the #2 issues, and created a point of influence for the purpose of “resolving issues as much as defining them”.

  6. Thanks for another interesting post, Gabriele. Could this proposal be extended to include a self-narrative on (what I think is the most confounding aspect of research today) the disposition of the researcher’s mind that is applied to the methods? What is the motivation or prior “belief” that the researcher brings to the work? It will impact upon the methods and interpretation.

    Were methods used to mitigate prior belief, such as blinding or adversarial collaboration [Moderator update – In November 2023, this link was no longer available and so the link structure has been left in place but the active link deleted: www. discovermagazine. com/mind/team-of-rivals-does-science-need-adversarial-collaboration] or “bracketing” as practiced in the social sciences? It would be great to read a narrative on how the method of bracketing was actually used during the research process and what insights it provided to the researcher.

    Returning to the main thrust of your post – there was an interesting recent article in the BMJ arguing that we could learn from scientific practice 500 years ago, in the time of Kepler and Copernicus, when the publication of the complete research process (as opposed to a summary “journal article”) was standard practice: http://www.bmj.com/content/354/bmj.i4911

    Thanks so much for these provocative posts – you are an ideas factory! Regards, Craig

    • Thanks Craig – some interesting ideas, and it is good to know about the BMJ article about making data available for deeper reflection and analysis.

  7. Agent-based modelling has attempted to tackle this issue through the ‘ODD’ (Overview, Design concepts, and Details) protocol – with some (limited) success. I find when I try to use it, it never quite works. As a basis for description and communication, however, it is useful. It also takes up a lot of word count to do it completely – not what editors are looking for in applied journals, especially in medicine.

