What is meant by impact generation and how can it be facilitated, captured and shared? How can researchers be empowered to think beyond ‘instrumental’ impact and identify other changes generated by their work? How can the cloud of complexity be dispersed so that the numerous factors affecting the development of impacts can be seen? How can researchers be given room to step back and reflect critically on what happened and what could be improved in the future? How can research teams and stakeholders translate isolated examples of impact, and the causes of impact, into narratives for both learning and dissemination?
Community groups are often consulted by researchers, government agencies and industry. The issues may be contentious and the relationship vexed by distrust and poor communication. Could an inventory capture the fundamental sources of community frustration and highlight scope for improvement in respect, transparency, fairness, co-learning, and meeting effectiveness from a community perspective?
The trust and empowerment inventory presented below is based on the main sources of community frustration that I have witnessed over two decades as a public health physician and researcher liaising with communities about environmental health risks, and it is likely to have broader relevance.
Many environmental, social, and public health problems require collaborative problem solving because they are too complex for an individual to work through alone. This requires a research and technical workforce that is better prepared for collaborative problem solving. How can this be supported by educational programs from kindergarten through college? How can we ensure that the next generation of researchers and engineers are able to effectively engage in team science?
Drawing from disciplines that study cognition, collaboration, and learning, colleagues and I (Graesser et al., 2018) make three key recommendations to improve research and education with a focus on instruction, opportunities to practice, and assessment.
In situations where multiple factors, in addition to your research, are likely to have caused an observed policy or practice change, how can you measure your contribution? How can you be sure that the changes would not have happened anyway?
In making contribution claims there are three levels of rigour, each requiring progressively more evaluation expertise and resourcing. These are summarised in the table below. The focus in this blog post is on the basic or minimum level of evaluation, and specifically on the “what else test.”
How can projects produce evaluation and communication strategies in tandem? Why should they even try? A major benefit of helping projects produce evaluation and communication strategies at the same time is that it helps projects clarify their theories of change; it helps teams be specific and explicit about their actions. Before returning to the benefits, let us begin with how we mentor projects to use this approach.
What lessons and challenges about institutionalising interdisciplinarity can be systematized from experiences in Latin American universities?
We analyzed three organizational structures in three different countries to find common challenges and lessons learned that transcend national contexts and the particularities of individual universities. The three case studies are located in:
Universidad de Buenos Aires in Argentina. The Argentinian center (1986–2003) was created in a top-down manner without participation of the academic community, and its relative novelty in organizational terms was also a cause of its instability and later closure.
Universidad de la República in Uruguay. The Uruguayan case, started in 2008, shows an innovative experience in organizational terms based on a highly interactive and participatory process.
Universidad Nacional Autónoma de México. The Mexican initiative, which began in 1986, shows a center with a network structure in organizational terms where the focus was redefined over time.
How can research writing best be crafted to present transdisciplinarity? How can doctoral candidates effectively communicate to examiners a clear understanding of ‘data’, what it is and how the thesis uses it convincingly?
The authors have all recently completed transdisciplinary doctorates in the field of sustainable futures and use this experience to highlight the challenges of crafting a convincing piece of research writing that also makes claims of transdisciplinarity (Palmer et al., 2018). We propose four strategies for working with data convincingly when undertaking transdisciplinary doctoral research.
How can we identify and evaluate decision forks in a modelling project: those points where a different decision might lead to a better model?
Although modellers often follow so-called best practices, it is not uncommon for a project to go astray. Sometimes we become so embedded in the work that we do not take the time to stop and think through options when decision points are reached.
One way of clarifying this phenomenon is to think in terms of the path followed. The path is the sequence of steps actually taken in developing a model or in working through a problem solving case. A modelling process can typically be carried out in different ways, which generate different paths that can lead to different outcomes. That is, there can be path dependence in modelling.
Recently, we have come to understand the importance of human behaviour in modelling and the fact that modellers are subject to biases. Behavioural phenomena naturally affect the problem solving path. For example, the problem solving team can become anchored to one approach and only look for refinements to the model that was initially chosen. Due to confirmation bias, modellers may selectively gather and use evidence in a way that supports their initial beliefs and assumptions. The availability heuristic is at play when modellers focus on phenomena that are easily imagined or recalled. Moreover, particularly in high-interest cases, strategic behaviour by project team members can affect the path of the process.
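To make the idea of a path concrete, here is a minimal, hypothetical sketch in Python (not drawn from the original post): two paths through the same toy curve-fitting task diverge only at one decision fork, namely whether the initial choice of model family is revisited. The data, model families and error metric are all illustrative assumptions.

```python
# Hypothetical illustration of path dependence in modelling: the same data,
# two decision paths, different outcomes. All choices here are toy assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)
x = np.linspace(0.0, 10.0, 50)
# Assumed "true" process (quadratic) plus observation noise.
y = 0.5 * x**2 - 2.0 * x + rng.normal(0.0, 2.0, size=x.size)

def rmse(predicted: np.ndarray) -> float:
    """Root mean square error against the observed data."""
    return float(np.sqrt(np.mean((y - predicted) ** 2)))

# Path A: the team anchors on a linear model and only refines within that family.
linear_fit = np.polyfit(x, y, deg=1)
error_path_a = rmse(np.polyval(linear_fit, x))

# Path B: at the same fork the family choice is re-examined and a quadratic model is also tried.
quadratic_fit = np.polyfit(x, y, deg=2)
error_path_b = rmse(np.polyval(quadratic_fit, x))

print(f"Path A (anchored to linear model):    RMSE = {error_path_a:.2f}")
print(f"Path B (fork re-opened, quadratic):   RMSE = {error_path_b:.2f}")
```

Running the sketch shows Path B with a markedly lower error, illustrating how a single unexamined decision fork, here reinforced by anchoring, can shape the quality of the final model.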
Outcome measures in research about treatment and service provision may not seem a particularly controversial or even exciting domain for citizen involvement. Although the research landscape is changing – partly as a result of engaging stakeholders in knowledge production and its effects – the design of outcome measures has been largely immune to these developments.
The standard way of constructing such measures – for evaluating treatment outcomes and services – has serious flaws and requires an alternative that grounds them firmly in the experiences and situations of the people whose views are being solicited.
I am a firm believer in looking at interdisciplinary collaboration and knowledge exchange – or impact generation – as processes. If you can see something as a process, you can learn about it. If you can learn about it, you can do it better!
I find that this approach helps people to feel enfranchised, to believe that it is possible for them to open up what might have seemed to be a static black box and achieve understanding of the dynamics of how nouns like ‘interdisciplinarity’ or ‘knowledge exchange’ or ‘research impact’ can actually come to be.
How can you give others your hard-won insights so that their work can be more informed, efficient, and effective? As I’ve gotten older, it is something that I think about more.
It is widely recognized that the environment is an integrated but also “open” system. As a result, when working with issues relating to the environment we are faced with the unsatisfying fact that we won’t know “truth”. We develop an understanding that is consistent with what we currently know and what we consider state-of-the-practice methods. But, we can never be sure that more observations or different methods would not result in different insights.
Do we need a protocol for documenting how research tackling complex social and environmental problems was undertaken?
Usually when I read descriptions of research addressing a problem such as poverty reduction or obesity prevention or mitigation of the environmental impact of a particular development, I find myself frustrated by the lack of information about what was actually done. Some processes may be dealt with in detail, but others are glossed over or ignored completely.
For example, often such research brings together insights from a range of disciplines, but details may be scant on why and how those disciplines were selected, whether and how they interacted, and how their contributions to understanding the problem were combined. I am often left wondering about whose job it was to do the synthesis and how they did it: did they use specific methods and were these up to the task? And I am curious about how the researchers assessed their efforts at the end of the project: did they miss a key discipline? Would a different perspective from one of the disciplines included have been more useful? Did they know what to do with all the information generated?