How to systematically design transdisciplinary project evaluation

By Emilia Nagy and Martina Schäfer


How can the formative, i.e. process, evaluation of transdisciplinary research projects best incorporate the likely link between process and outcomes in such research? What are some useful approaches for developing an effective evaluation plan with a lens of impact orientation?

We describe how to systematically formulate criteria and indicators for the evaluation of transdisciplinary projects by combining:

  • impactful research practices (Lux et al., 2019)
  • impact heuristics (Schäfer et al., 2021)
  • theory-of-change method (Belcher et al., 2019).

The combination of these approaches provides a strong foundation for impact orientation in all project phases. The added value for formative evaluation is that this combination links the quality criteria for effective processes and activities in transdisciplinary research projects with impact indicators.

Impactful research practices

In transdisciplinary research, societal effects can evolve not only from relevant project results but also from the activities carried out during the transdisciplinary research processes, especially when these are done well. In research with colleagues (Lux et al., 2019), we have suggested that there are five major areas where research practices are most likely to be impactful:

  • problem relevance – i.e. everything that facilitates a better understanding of the problem situation and the application context
  • connectivity – i.e. meeting the needs and expectations of current target groups, as well as others who may need to be involved
  • roles and responsibilities – i.e. clarity about the roles, functions and tasks of each partner, which may change in different phases of the project. Particularly important are responsibility for knowledge integration and the role of intermediaries in supporting the transfer of knowledge to the field or comparable problem contexts
  • interests and concerns – i.e. transparency about underlying interests and concerns, avoiding hidden agendas and objectives that are not shared
  • collaborative culture – i.e. positive and inspiring formal and informal interactions.

Impact heuristics

Impact heuristics distinguish between different orders of societal effects depending on their temporal and spatial distance from the research processes and their results (Schäfer et al., 2021). They are:

  • first-order effects: direct effects within the duration and the spatial scope of a research project, such as learning and capacity building, formation of networks, improving the situation in the respective field of action, and increase in reputation
  • second-order effects: effects beyond the project but within the close temporal or spatial context of the project, such as institutionalisation of transformative approaches, establishment of project-related products or infrastructure, transfer of project results to other spatial contexts
  • third-order effects: changes beyond the temporal or spatial context of the project in the entire field of action or problem field, such as influence on public discourse, influence on law and regulation, further structural effects.

Theory of change method

The theory of change method is useful for establishing shared hypotheses about a change process: planning backwards from a long-term shared project vision and identifying the conditions that need to be in place for the intended effects to occur (Belcher et al., 2019). Applying the method results in a flow diagram containing sets of activities, outputs and effects organised along non-linear impact pathways.

A systematic evaluation framework

To combine the three components described above, we, as project evaluators, usually start with a workshop to define the theory of change, including as many project members (scientists as well as practitioners) as possible. From the flowchart of impacts, participants select particularly relevant impact pathways, with the starting point being a research activity.

As the formative evaluation team, we suggest indicators and monitoring questions for the intended first-, second- and third-order effects. The project team assesses the suggested indicators for suitability, feasibility and manageability.

After the joint adaptation of the set of indicators and their operationalisation via monitoring questions, data collection and ongoing reflection about the process can be started. At this stage particular attention is paid to the five impactful research practices.
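For teams that prefer to keep track of prioritised pathways, indicators and monitoring questions in a script rather than a spreadsheet, the steps above can be sketched as a small data structure. This is purely an illustration of the bookkeeping involved, not part of the authors' method or toolkit; every class, field name and example entry below is hypothetical.

```python
# Illustrative sketch only: one way to record a prioritised impact pathway
# with indicators grouped by impact order. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    description: str
    impact_order: int          # 1, 2 or 3, per the impact heuristics
    monitoring_question: str

@dataclass
class ImpactPathway:
    starting_activity: str     # pathways start from a research activity
    intended_impacts: list[str]
    indicators: list[Indicator] = field(default_factory=list)

    def by_order(self, order: int) -> list[Indicator]:
        """Return the indicators for one impact order."""
        return [i for i in self.indicators if i.impact_order == order]

# Hypothetical example: a pathway starting from a stakeholder workshop series.
pathway = ImpactPathway(
    starting_activity="stakeholder workshop series",
    intended_impacts=["new regional network", "transfer to neighbouring regions"],
)
pathway.indicators.append(Indicator(
    description="number of new strategic contacts",
    impact_order=1,
    monitoring_question="Which new contacts were established through the workshops?",
))
pathway.indicators.append(Indicator(
    description="willingness to continue activities after project end",
    impact_order=2,
    monitoring_question="Do partners plan to continue joint activities?",
))

first_order = pathway.by_order(1)
```

Grouping indicators by impact order in this way mirrors the impact heuristics: first-order indicators can be monitored during the project, while second- and third-order indicators flag what would need to be revisited after project completion.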

Our experience so far has shown that this systematic approach to formative evaluation, which defines indicators beyond the boundaries of the project duration and project context, supports projects in their impact-oriented research activities.

What do you think? Have you developed other ways for improving formative evaluation of transdisciplinary research? Would the approach we have developed be useful in your research too?

References:
Belcher, B. M., Claus, R., Davel, R. and Ramirez, L. F. (2019). Linking transdisciplinary research characteristics and quality to effectiveness: A comparative analysis of five research-for-development projects. Environmental Science and Policy, 101: 192-203. (Online) (DOI): https://doi.org/10.1016/j.envsci.2019.08.013

Lux, A., Schäfer, M., Bergmann, M., Jahn, T., Marg, O., Nagy, E., Ransiek, A. and Theiler, L. (2019). Societal effects of transdisciplinary sustainability research – How can they be strengthened during the research process? Environmental Science and Policy, 101: 183-191. (Online) (DOI): https://doi.org/10.1016/j.envsci.2019.08.012

Schäfer, M., Bergmann, M. and Theiler, L. (2021). Systematizing societal effects of transdisciplinary research. Research Evaluation, 30, 4: 484–499. (Online) (DOI): https://doi.org/10.1093/reseval/rvab019

Biography: Emilia Nagy researches transdisciplinarity at the Center for Technology and Society (ZTG) at Technische Universität Berlin in Germany. Her focus is on increasing the societal effects of transdisciplinary sustainability research.

Biography: Martina Schäfer PhD is the Scientific Director of the Center for Technology and Society (ZTG) of Technische Universität Berlin in Germany. She has coordinated inter- and transdisciplinary research projects in sustainable regional development, sustainable consumption and sustainable land use. One of her research foci is reflection on methods for inter- and transdisciplinary cooperation as well as the societal impact of this research mode.

13 thoughts on “How to systematically design transdisciplinary project evaluation”

  1. Thank you Emilia and Martina – It is highly commendable that the issue of “good” formative evaluation is being dealt with intensively. After all, formative evaluation is the more important role, especially as the situation of society and the environment is chronically unstable, volatile and hardly predictable even in the medium term. Summative evaluation is the hare, formative evaluation the hedgehog – the hare most often arrives too late!

I apologize in advance for the clear criticism that follows – but with “Gaia at stake” I don’t want to beat around the bush:

    I have been observing in the Transdisciplinary Research community for years that the body of textbooks and resources of what is now approaching 100 years of evaluation science is all too often ignored.

    At least the following should be included:
    Patton, Michael Quinn (2010). Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York: Guilford Press. Extended reviews in: Evaluation and Program Planning, Volume 35, Issue 1, pp. 219-221; Journal of Evaluation, Vol. 10, Issue 1, pp. 151-154. https://www.univation.org/sites/default/files/publikation/06_2011_beywl_rezension_patton.pdf
    Mertens, Donna M. (2024). Research and Evaluation in Education and Psychology: Integrating Diversity with Quantitative, Qualitative, and Mixed Methods. 6th edition. Los Angeles: SAGE.
    See also: https://i2insights.org/2022/09/13/evaluation-origins-and-current-state/
    If you want to comment on evaluation in the field of TDR, I urge you to study the sources mentioned above. Otherwise there is a risk of re-inventing the wheel, which is very resource-intensive and therefore potentially wasteful.

    There is a danger of repeating mistakes, some of which have been made many times before (and are even reproduced in textbooks, which really is a pity). One appears in the first sentence of the blog entry I am commenting on here (and there are some other points to be discussed): “How can the formative, i.e. process, evaluation …”
    – “formative” is one of the ‘roles’ of evaluation (Michael Scriven)
    – “process” is an element of an evaluation object that can be focused on (other elements are, e.g., the concept or the results).

    There is a formative, and there is a summative process evaluation, by no means are “formative evaluation” and “process evaluation” synonyms.
    In the hope of giving some hints for studying the evaluation science literature,
    Wolfgang

    • Dear Wolfgang,

      Thank you very much for your comment on our contribution! We greatly appreciate your critical feedback and are pleased with your suggestions on how we—and the td community in general—can expand the knowledge base of transdisciplinary research. This is all the more important as the evaluation of transdisciplinary research, particularly the demonstration of its effectiveness, is increasingly in demand.

      In this context, the need for close collaboration between td and evaluation experts was already evident in the title of the DeGEval annual conference 2024: “Transdisciplinarity: Impulses for and through Evaluation!?” (German-speaking conference). During the session on “Evidence generated transdisciplinarily,” one of your slides included the statement: “… elements of transdisciplinary research are already mainstream in evaluation science.” This statement is something we in the td community must indeed address.

      Patton’s publication on Developmental Evaluation is particularly significant for us, as we adopt a formative role in evaluation. Furthermore, the evaluation subject is permanently emergent. Our evaluation approach aims to strengthen the impact orientation of projects and capture already visible effects. However, we face the challenge that project-based research often lacks resources to assess and evaluate the actual impacts after project completion. For this reason, we focus on the process and its adaptation to trigger impacts with the greatest possible probability. To promote process adaptability, our approach is participatory and context-sensitive. At the end of the project, we can jointly assess with the project participants the likelihood of planned and anticipated impacts occurring. As such, the evaluation result becomes a statement about the future.
      Do you have any good suggestions on how evaluation approaches handle this challenge effectively?

      We are happy to take up this impulse and methodically draw on the insights and methods of evaluation science.

      Warm regards,
      Emilia

      • Dear Emilia
        I am very impressed by your feedback. I would like to respond to your question – certainly not satisfactorily:
        Your starting point is:
        “However, we face the challenge that project-based research often lacks resources to assess and evaluate the actual impacts after project completion.”
        The solution you have chosen seems to me to be a good stopgap solution.

        For me, moving forward has a more fundamental funding policy side and a concrete technical side:

        Policy side:
        Worldwide, it is the rule that “projects” are funded. They have a limited duration and an extension is in most cases not possible, as this would no longer be “innovative”. This often has side effects that are not taken into account:
        The interventions (as central elements of “programs” = technical term of evaluation) cannot be made so mature in the short project duration that they have a stable high effectiveness and potential, and that the program can be quickly and safely adapted to emergent changes. If you follow this thesis, projects all too often do not (or cannot) have the “impact” that they promise in the project proposal. At the meta-level, a solution can only lie in finding a third way between “project” and “institutionalization” – it is our task to constantly point this out, but politicians do not like to hear this message (better to build a new bridge than renovate three old ones).
        Technical side:
        Stakeholders “disperse” after the end of the project. From a technical point of view, it is necessary (even then) to bind as many stakeholders as possible to the evaluation in the long term.
        Consent forms should contain, for example, permission to send a link to an online survey again 3 years after the end of the project (and to send a short e-mail once or twice beforehand with some information on the dissemination of the project and any e-mail updates.) This may sound trivial, but in terms of a lively evaluation culture, it is essential that all parties (evaluators, program managers and members of target groups) commit themselves. Ultimately, this is also important in terms of professionalisation and democracy, because otherwise we will get neither the scientific evaluation nor the transdisciplinary research culture that we want.
        Best regards

        Wolfgang

        • Dear Wolfgang,
          Thank you very much for your detailed response. Your thoughts on both the research policy and technical dimensions address key challenges that also concern our work.

          I also find your suggestion regarding the long-term involvement of researchers and stakeholders in evaluation quite valuable. Such measures can play a crucial role in ensuring that projects have a lasting impact and that both scientific and societal insights are preserved in the long run. Unfortunately, academic career paths are rarely tied to a single institution, which is clearly an obstacle to maintaining a long-term focus and tracing projects’ impact. The institutions where the projects took place should take responsibility for tracking their impact and make resources available for this purpose.

          I greatly appreciate our professional exchange and am grateful for your insights. Let’s stay in touch—I look forward to further opportunities to discuss these important topics with you.
          Best regards,
          Emilia

          • Dear Emilia
            The following sentence from you catapults the discussion to the next system level:
            “Unfortunately, academic career paths are rarely tied to a single institution,”
            If you replace the term “institution” with “organization”, then you have the whole repertoire of organizational sociology/theory at your disposal to explain this.
            Just this much: on the “facade”, universities have been showcasing the “third mission” for some years now. However, the formal rules (no in-house appointments; desired fluctuation of post-docs, etc.) counteract this: Loyalty with stakeholders is tied to individuals (Michael Patton: “The personal factor”, for what feels like 40 years). The danger for transdisciplinary research is that stakeholders perceive universities to be a perfect machine for producing academic careers – at the “expense” of stakeholders. This alienation is a dramatic phenomenon in the field of education-science-school. May TDR avoid this!

            • Dear Wolfgang,
              Thank you for emphasizing the “personal factor”. We are currently evaluating an empirical study on the advantages and disadvantages of transdisciplinary research, in particular of the simultaneous pursuit of scientific and social impact, for those involved. We have found evidence that personal strategies of researchers (academic and non-academic) play a decisive role in the success of tdr projects, in addition to good methodological work in knowledge integration. For example, they expand their skills for performing non-scientific roles and build resilience in order to have sufficient stamina. At the organizational level and in the funding structures, positive developments for the promotion of tdr are already emerging. But there is still room for improvement 🙂 We hope to be able to publish our results before the end of the year.
              Best,
              Emilia

  2. Hi Emilia and Martina,

    Great to read about your work on using ToCs for impact evaluation purposes; at Eawag, so far we have used them more for joint planning and visioning with ITD teams. I would be curious to learn more about how exactly you derive the indicators for the milestones and interventions, and how you combine quantitative and qualitative indicators in a meaningful and complementary way. In addition, how long does it take, from your experience, to gather all the necessary information for assessing the impact in the end (e.g. for a project vs. a program)?

    Thanks in advance for your insights!
    Lisa

    • Dear Lisa,

      Thank you very much for the questions! We have been following the project at EAWAG with great attention and are very much looking forward to exchanging ideas with you.

      Your questions deal with our major challenges. While it is relatively easy to formulate indicators for the quality of the research process and results (outputs), it is much more challenging for the impacts. For the research process, we usually refer to the quality criteria of TD research for the definition of the indicators; the outputs can be described by subject-specific criteria.

      For the definition of the impact criteria, we first collect intended impacts in three impact orders (according to temporal and spatial distance from the project context) in a ToC workshop. In a second step, we prioritise the impact pathways that emerge from the project activities and lead to impacts of higher orders. Pragmatic considerations also play a role in the prioritisation. High priority has often been given to pathways that can be influenced by the project and where context conditions are favourable.

      In the next step, the projects define indicators along the prioritised impact pathways. For the definition of indicators, the systematic differentiation of frequently used impact categories (such as ‘learning effects’, ‘capacity building’, ‘network effects’, etc.) in subcategories is very helpful (Schäfer, Bergmann and Theiler, 2021). For example, network effects can be documented by collecting data in the subcategories “new strategic contacts” and “overcoming reservations and building trusting relationships”. Counting new strategic contacts provides relatively simple quantitative data. However, the number of new contacts says little about the quality of the new strategic relationships. This data can be usefully extended with the impact quality “building trusting relationships”, which in our experience is important for most implementation oriented projects.

      Schäfer, Bergmann and Theiler (2021) describe 19 subcategories for societal effects (first order: 9 / second order: 6/ third order: 4). We usually aim at defining indicators at least for all subcategories of the first order in order to capture the variety of effects a project can achieve already during the ongoing research process. Indicators such as ‘willingness to continue activities after the end of the research project’ or ‘efforts of transferring knowledge and experiences to further contexts’ indicate 2nd order effects.

      After agreeing on a set of indicators, we support the research teams in setting up a plan for how to collect the necessary data. Since the projects often have limited resources for the evaluation, pragmatic considerations such as easy access to data often play a critical role in the choice of indicators.

      In our case, the period of data collection is limited to three years due to the duration of the project funding. It would make sense to conduct another survey about 2 years after the end of the project to be able to detect 2nd order effects in more detail.

      We would be very happy to exchange ideas with you on the definition of the indicators.

      Best wishes,
      Emilia and Martina

  3. Dear Dena,

    thank you so much for your comment! We are very pleased that you find our approach inspiring.

    We usually run at least three workshops for each project: One at the beginning, one in the middle and one at the end of the project. The exact timing of the workshops also depends on the current needs of the projects.

    We had the opportunity to apply the described evaluation concept in a consortium that initiates projects to trigger socio-technical innovation processes in a specific region. We support 4-5 projects from the initial idea until the finalization of the project. The projects have been embedded in a common innovation strategy and are continuously accompanied by the evaluation team. We are aware that this is a particularly privileged situation for transdisciplinary research.

    In this consortium, we conducted the first ToC workshops already in the application phase or in the early project phase. During the lifetime of the projects, we held regular reflection meetings with the projects and invited them to workshops. These reflective workshops served to collect data and to review the theory of change in the projects. At the end of the projects, we are going to hold a ToC workshop with each project. In this workshop, we are going to reflect together on which impacts have already been achieved and which can still be expected, as well as on the enabling and hindering conditions of effectiveness.

    We will be happy to explain other aspects if you have more questions.

    Best wishes,
    Emilia and Martina

  4. Thank-you Emilia and Martina:

    This is inspiring work. I wondered whether you have any thoughts on the “readiness” requirements, especially around the Impactful Research Practices. In our experience, organizational jealousies and budget-control mind-sets can get in the way of horizontal decision-making. Have you encountered such challenges?

    With regards to the Impact Heuristics, the gradient of effects reminds me of the continuum in Outcome Mapping, especially the notion of loss of attribution. Acknowledging that impact can only barely be connected to activities remains a tough sell with many funders. I wonder if / how you have faced this challenge and accommodated it.

    In our work we mentor project teams to design their own evaluations of research (in tandem with communication planning). We are increasingly leaning on Developmental Evaluation, where experimental work means that it is challenging to develop a good Theory of Change early on. Perhaps this sounds familiar?

    I look forward to exploring your references.
    Kind regards, Ricardo

    • Dear Ricardo,

      Thank you very much for the stimulating questions! We have also encountered similar problems in our work. Our project framework does not allow us to address each of these issues with the same intensity. Still, we are happy to describe our thoughts on the three points.

      Our projects were preceded by participatory planning. In this phase, it was commonly agreed by the stakeholders that cooperative partnership among the project partners was a key component. All project members joined the project with a certain commitment to this preconception. Another special feature of our projects is that they are building up new value chains. If these can be maintained, all partners will benefit in the longer term. Nevertheless, this conceptual foundation of the projects cannot prevent partners from going their own way when their (often economic) interests “tempt” them to do so. So yes, some partners are able to jeopardize horizontal or cooperative decision-making.

      So far, we have little experience with funders’ reaction to these outcomes. However, we think it is important to make them aware of the qualitative impacts that can be achieved in the course of the project (e.g. better mutual understanding between science and practice or between different groups of practitioners, capacity building and new networks). As far as we have experienced it, these qualitative impacts may pave the way for more far-reaching impacts, such as changes in organizational or individual practices, the introduction of new regulations, and so on. And yes, there is some work to be done to clarify that there are obstacles to clearly attribute impacts to project activities.

      In our experience, projects also react with reluctance when it comes to defining impact indicators. They cannot and do not want to commit to providing credible impact data. The priority goal of projects is to safely complete their processes and deliver their products. Our evaluation starts at this point: we raise awareness among projects of their responsibility for contributing to the larger-scale changes a project can bring about.

      We have experienced that project teams find it very helpful to reflect together on intended impacts at the beginning of the project. It was relatively easy for the projects to define immediate and vision-like impacts. The participants had the most difficulties in describing complete impact pathways between these distant impact orders. Even if the impact pathways could not be clearly defined, the projects benefited from the workshop: the different perspectives on project goals and project impacts became visible. On this basis, the participants were able to develop a common understanding of objectives and impacts.

      I hope that our answers have given you an insight into our experiences.

      Best wishes,
      Emilia and Martina

  5. Thank you Emilia and Martina for your interesting post!
    What a very thorough way of thinking about TD project evaluation! I was wondering how you decide when these formative evaluation workshops take place throughout the developmental stages of the TD project?

    Thanks so much for your very insightful post
    Dena

    • Thank you for this insightful work. May I ask why “planning backwards”? Is this the usual practice in formulating theories of change in transdisciplinary research? What is the rationale?

