Long-term collaboration: Beware blaming back and blaming forward

Community member post by Charles Lines

Charles Lines (biography)

How can conflict be minimised in long-term collaborations where priorities have the potential to change over time?

Partners who contributed to creating a collaborative initiative or who joined it early might, quite naturally, prefer to look back at the times when they were most influential and able to shape priorities and contribute significantly to achievements in which they believed.

Also, quite naturally, those who joined a collaborative initiative later may prefer to look forwards towards new approaches and ways of doing things that might increase their influence and enable them to shape priorities and achieve things important to them.

When these preferences for either the past or the future clash, they will form the basis of an often heated argument about a collaboration’s strategy: should it consolidate and build upon existing gains (so maintaining the esteem, credibility and influence of the founders and early joiners of the collaboration)? Or should it break its mould and flow into new and innovative areas of activity (so increasing the influence of the late joiners of the collaboration)?

Even when a strategy has been agreed, especially if the old or new guard feel they have won or lost the argument, the noise and conflicts from within which it was forged will continue to reverberate down the timeline of the collaboration. They will likely cause uncertainty and disagreements about the effectiveness of the collaboration and the value of its achievements. The old guard will continue to look towards the past and point to evidence that the collaboration has failed to live up to initial expectations. The new guard, keen to introduce new ways, will point to evidence of the collaboration’s ability to move with the times.

Old and new understandings and agreements will overlay within people’s minds in the present of the collaboration, weakening its resolve and clarity of purpose.

New partners will challenge old and founding partners about their failure to do this and that in the first place: they will ‘blame it back’. Old partners will challenge new partners about their inability to do this and that now: they will ‘blame it forward’.

This mutual blaming will be at its most pronounced and dangerous when old and new partners are separated by time: when old partners have left the collaboration and are no longer directly involved in its activities.

Past partners (some of whom will be founders) will be quick to offer their opinions and advice as they watch the future of the collaboration unfold before them. They will be especially sensitive to any criticisms of their work or decision-making that are offered as justifications for changes to well-established priorities, plans and practices, and very quick to launch counter-blaming offensives designed to undermine these justifications and question the competence of those currently doing the collaboration’s work.

In response, current partners will immediately seek to safeguard their reputations by countering the counter-blaming with increasingly strong justifications for their decisions and actions. These justifications will most likely increase past partners’ perceptions of being criticised and blamed (and separation in time could quickly become separation through antipathy).

Partners preferring and fighting for the credibility and reputation of different times, agreements and achievements (perhaps accompanied by the ill-feeling this could generate) will create an unstable and damaging ‘timeflux’ within the collaboration. This will, slowly but surely, encourage ambiguity about the ultimate worth of the collaboration’s achievements to grow within people’s minds. Eventually, this ambiguity will severely weaken the collaboration’s credibility and reputation (and, perhaps most importantly, threaten its legacy).

This situation is most obvious, and most likely to happen, within large-scale ‘mega-project’ collaborations: those lasting for years and often decades. Over such long lifetimes there will be much toing and froing of partners, and the type and mix of partners involved at the beginning will be very different from that at the end.

Being realistic, this tendency to blame back and blame forward probably cannot be stopped, but it can be managed and minimised by doing the following five things:

  1. Being consistently open and transparent, especially about necessary changes to the work of the collaboration in response to new pressures and demands (and being patient and willing to repeat these reasons as often as required).
  2. Having regular meetings between partners and holding ‘scouting meetings’ where old and established partners can get to know new and potential partners (and where all present can discuss current activities and how these may need to be continued, adapted, changed or added to in the future).
  3. Ensuring meetings between partners are chaired by someone who is trusted by and credible to all, and (to further encourage transparency and promote shared accountability) giving this person the authority to approve meaningful and significant decisions at the meetings with (preferably) the explicit support of all those present (both old and new partners alike).
  4. Noticing blaming language and behaviour and challenging it early so that it does not become a habit which could eventually lead to a damaging culture of blame.
  5. Ensuring the leaders of the collaboration and other high-profile and influential partners model a no-blame culture and, where necessary, receive help and support (including coaching and interpersonal skills training) to achieve this.

Does this resonate with your experience? Do you have other management strategies to add?

This blog post is based on a version published at: http://cuttingedgepartnerships.blogspot.com/2018/03/interested-in-time-travel-beware_6.html

For more information and relevant references:
Lines, C. S. (2016). Sleeping with the Enemy – Achieving Collaborative Success. 5th edn. Tallis. Online: http://cuttingedgepartnerships.blogspot.co.uk/2016/05/sleeping-with-enemy-achieving.html

Biography: Charles M. Lines is an independent management consultant and a past Senior Lecturer at the UK Civil Service College, where he was Course Director of its partnership and collaborative working programmes. Since leaving the Civil Service well over a decade ago, he has continued to search out and share best practice in collaborative working.

Producing evaluation and communication strategies in tandem

Community member post by Ricardo Ramírez and Dal Brodhead

Ricardo Ramírez (biography)

How can projects produce evaluation and communication strategies in tandem? Why should they even try? A major benefit of helping projects produce evaluation and communication strategies at the same time is that it helps projects clarify their theories of change; it helps teams be specific and explicit about their actions. Before returning to the benefits, let us begin with how we mentor projects to use this approach.

Dal Brodhead (biography)

We co-lead DECI (Designing Evaluation and Communication for Impact), an action-research project that provides capacity building to research teams in the Global South. We mentor these projects to develop their own evaluation and communication plans, an approach we refer to as hybrid decision-making. We recently published an online guide (Ramírez and Brodhead 2017) that includes the diagram below, which summarizes the steps.

[Diagram: Steps in evaluation and communication planning]

(Source: Ramírez and Brodhead 2017)

The steps on the left side of the diagram are derived from Utilization-focused Evaluation, UFE (Patton 2008; Ramírez and Brodhead 2013), an approach to evaluation that emphasizes practical use of the findings and the process. Those on the right come from communication planning. Both share the notions of a readiness assessment and a situational analysis, which are depicted in the middle. The rotating arrows underline the iterative nature of the process.

How to do this?

We support partners at a pace that suits their needs through a process of coaching combined with peer learning. Each step in the hybrid approach includes a set of questions, which challenge the partner team to be clear on outcomes, procedures, stakeholders, networks, assumptions and methodology. On the evaluation side, we ask a set of readiness questions:

  • How capable is the project to work with this approach?
  • Is there a power relationship with funders that allows the team to co-own the evaluation design?
  • Is there an organizational culture that respects learning and adaptation?
  • Is there buy-in from senior management?
  • Are there engaged staff and resources able to carry out evaluation plans and implement a communication strategy?

These readiness requirements are strategic enough that we don’t sign a memorandum of understanding until most of these questions have been addressed (Ramírez and Brodhead 2014).

We follow with the main Utilization-focused Evaluation steps and ask who the primary users of the evaluation may be (internal staff, trusted partners, representatives of the funder, etc) and we engage with them to confirm their interest and availability. We also explore their expected uses or purposes. This process begins to show us the internal dynamics of an organization or project, especially the hierarchy, level of trust, and willingness to work in a collaborative manner.

In our experience, most evaluation users are quick to propose evaluation questions, but find it more difficult to step back and explore the underlying uses or purposes. The latter are important as a means of mapping how an evaluation will be utilized, rather than having a report sit on a shelf collecting dust. We also find it important to link the evaluation uses to the key evaluation questions. A good key evaluation question is clear and linked to the evaluation uses; with this clarity, one can readily determine the type of data collection tools and evidence needed.

In tandem with the Utilization-focused Evaluation steps, we begin asking questions about the project’s existing communication practices. Most projects have an established way of communicating, even if it is neither explicit nor strategic; it is often based on experience and some intuition. We ask about overall communication purposes and help the team identify them. The following purposes are common:

  • Communication for networking
  • Communication for active-listening and engagement
  • Communication for knowledge sharing within a community of practice
  • Communication for public relations, for visibility
  • Communication for dissemination of findings and lessons learned
  • Communication for advocacy
  • Communication for policy influence.

Differentiating the purposes and the audiences is helpful as a means of setting priorities. We encourage partners to do some ‘audience research’ to confirm each audience’s preferences, such as media channels, methods and timing. For each communication purpose, we explore the best combination and recommend testing materials and methods. We also remind our partners that the most effective communication is often difficult to plan, but being ready to respond to windows of opportunity is both possible and desirable, especially in the policy-making arena.

The benefits

Often, we witness a hybrid effect begin to unfold. The partner realizes that an evaluation use could focus on the effectiveness of their communication strategy. Conversely, they realize that evaluation findings can often feed into and strengthen a communication strategy. Most important, however, is that the process of clarification creates a space for organizational reflection and adaptation. What begins as a planning process creates the conditions for adaptive management. This result, in a nutshell, is the benefit we flagged earlier.

These concepts and processes can be useful to practitioners, facilitators, and researchers. Our website (https://evaluationandcommunicationinpractice.net/) contains multiple tools and case studies. We are keen to hear about comparable practices and experiences that others may wish to share.

To find out more:
Ramírez, R. and Brodhead, D. (2017). Evaluation and communication decision-making: A practitioner’s guide. Developing Evaluation and Communication Capacity in Information Society Research, DECI-2 Project. Ontario, Canada. (Online): https://evaluationandcommunicationinpractice.net/e-primer/

References:
Patton, M. Q. (2008). Utilization-focused evaluation. 4th edn. Sage: California, United States of America.

Ramírez, R. and Brodhead, D. (2013). Utilization-focused evaluation: A primer for evaluators. Southbound: Penang, Malaysia. (Online): https://evaluationandcommunicationinpractice.net/knowledgebase/utilization-focused-evaluation-a-primer-for-evaluators/

Ramírez, R. and Brodhead, D. (2014). Readiness and mentoring: Two touchstones for capacity development in evaluation. CDI Conference: Improving the use of M&E processes and findings, 20-21 March 2014, Wageningen, The Netherlands.

Biography: Ricardo Ramírez PhD is a researcher and consultant, based in Guelph, Ontario, Canada and is active in the fields of evaluation, communication for development, rural planning and natural resource management. He is an adjunct professor in the School of Environmental Design and Rural Development, University of Guelph. He is a Credentialed Evaluator (Canadian Evaluation Society) and co-principal investigator of DECI-3: Designing Evaluation and Communication for Impact: an action-research project in evaluation & communication funded by International Development Research Centre, Canada.

Biography: Dal Brodhead has been the CEO of the New Economy Development Group Inc., a value–based consulting firm located in Ottawa, Ontario, Canada since 1990. He brings a strong background in community development, project management, evaluation, and applied research in Canada and internationally. Previously, he held senior posts in various Federal departments, and directed a national research project on regional development for the Economic Council of Canada. Internationally, he has led numerous evaluation and monitoring missions in Asia and Africa with an emphasis upon participatory and recipient-driven and inclusive approaches. He is co-principal investigator of DECI-3: Designing Evaluation and Communication for Impact: an action-research project in evaluation & communication funded by International Development Research Centre, Canada.

Introducing interdisciplinary postgraduate degrees? Seven meta-considerations

Community member post by Dena Fam, Scott Kelly, Tania Leimbach, Lesley Hitchens and Michelle Callen

Dena Fam (biography)

What is required to plan, introduce and standardize interdisciplinary learning in higher education?

In a two-year process at the University of Technology Sydney we identified seven meta-considerations (Fam et al., 2018). These are based on a literature review of best practice of interdisciplinary programs internationally, as well as widespread consultation and engagement across the university. Each meta-consideration is illustrated by a word cloud and a key quotation from our consultations. Continue reading

A checklist for documenting knowledge synthesis

Community member post by Gabriele Bammer

Gabriele Bammer (biography)

How do you write up the methods section for research synthesizing knowledge from different disciplines and stakeholders to improve understanding about a complex societal or environmental problem?

In research on complex real-world problems, the methods section is often incomplete. An agreed protocol is needed to ensure systematic recording of what was undertaken. Here I use a checklist to provide a first pass at developing such a protocol specifically addressing how knowledge from a range of disciplines and stakeholders is brought together.

KNOWLEDGE SYNTHESIS CHECKLIST

1. What did the synthesis of disciplinary and stakeholder knowledge aim to achieve, which knowledge was included and how were decisions made? Continue reading

Collaboration and team science: Top ten takeaways

Community member post by L. Michelle Bennett and Christophe Marchand

L. Michelle Bennett (biography)

What are the key lessons for building a successful collaborative team? A new version of the Collaboration and Team Science Field Guide (Bennett et al., 2018) provides ten top takeaways:

1. TRUST
It is almost impossible to imagine a successful collaboration without trust. Trust provides the foundation for a team. Trust is necessary for establishing other aspects of a successful collaboration such as psychological safety, candid conversation, a positive team dynamic, and successful conflict management.

Christophe Marchand (biography)

2. VISION
A strong vision attracts people to the team and provides a foundation for achieving team goals. A captivating vision provides a focal point for interested individuals to join the team and compels them to contribute to the work. It serves as an anchor for the team; over time, the vision needs to be brought back to the team, reviewed, discussed and, as needed, revised.

3. SELF-AWARENESS AND EMOTIONAL INTELLIGENCE
Emotional intelligence among team members contributes to the effective functioning of research teams. Self-awareness, gained through self-reflection, self-learning and self-inquiry, benefits both leaders and participants by enhancing their ability to build relationships. Vision alone is not enough to sustain a team; it must be accompanied by the ability to build and nurture strong relationships. Emotional intelligence gives people greater control over their own emotional reactions, improves the quality of their interactions and, perhaps most importantly, helps build other-awareness. The better people know and understand themselves, the better they will appreciate those who surround them. Continue reading

Institutionalising interdisciplinarity: Lessons from Latin America / Institucionalizar la interdisciplina: Lecciones desde América Latina

Community member post by Bianca Vienni Baptista, Federico Vasen and Juan Carlos Villa Soto

A Spanish version of this post is available

What lessons and challenges about institutionalising interdisciplinarity can be systematized from experiences in Latin American universities?

We analyzed three organizational structures in three different countries to find common challenges and lessons learned that transcend national contexts and the particularities of individual universities. The three case studies are located in:

  • Universidad de Buenos Aires in Argentina. The Argentinian center (1986–2003) was created in a top-down manner without participation of the academic community, and its relative novelty in organizational terms was also a cause of its instability and later closure.
  • Universidad de la República in Uruguay. The Uruguayan case, started in 2008, shows an innovative experience in organizational terms based on a highly interactive and participatory process.
  • Universidad Nacional Autónoma de México. The Mexican initiative, which began in 1986, shows a center with a network structure in organizational terms where the focus was redefined over time.

All three centers showed an evolutionary path in which they simultaneously tried to adapt to the characteristics of the production of interdisciplinary knowledge and to the culture of the host institutions. Flexibility in this evolution seems to be a necessary condition for survival.

We found the following common lessons:

  • There is a bias in disciplinary-based academic assessment criteria, which does not consider the specific characteristics of interdisciplinary research and still punishes researchers who engage in collaborative research with partners outside academia. Specific criteria and assessment committees designed by interdisciplinary researchers are needed.
  • Interdisciplinary research requires long periods of preparation, mainly due to the collaborative dynamics, which also makes it necessary to revise assessment criteria.
  • Assessment committees should be made up of academic professionals specialized in interdisciplinary topics rather than a group of individuals representing different disciplines.
  • There is a need to explore new funding sources, especially external funds. So far, the main source of funding is still each national state.
  • There is also an urgent need to promote academic publication in order to enhance the dissemination of interdisciplinary research and studies.
Bianca Vienni Baptista (biography)

Federico Vasen (biography)

Juan Carlos Villa Soto (biography)

Continue reading