Basic steps for dealing with problematic value pluralism

By Bethany Laursen, Stephen Crowley and Chad Gonnerman


Have you ever been part of a team confronting a moral dilemma? Or trying to manage deep disagreements? For that matter, on a more down-to-earth level, how many times has your team tried to settle on an agreed file naming convention? Many team troubles arise from value pluralism—members having different values or holding the same values in different ways. Below, we describe problematic value pluralism and suggest steps for dealing with it.

What are values, and how do they cause problems?

Here, we’re talking about a “value” as a desire (conscious or unconscious) that directs a person’s actions. It could be a guiding ideal or a whimsical preference, for example. Most of us have multiple values, and over time we have organized them so that they provide us with guidance in most of the situations we encounter.

Forming a team, especially one made up of folks with diverse backgrounds, creates the challenge of managing the interaction of multiple values. Each team member will bring their own set of values to the collaboration; these value sets, reflecting the backgrounds of the team’s members, will themselves be diverse. We call this situation “value pluralism.” Value pluralism is generally great for teamwork, especially on complex problems. It helps ensure you’re considering all sides of an issue, avoiding harms, and accessing all available resources.

But sometimes value pluralism can cause problems. Problems arise whenever the value differences lead the team to undermine norms that govern the project, even the simple norm of finishing the project. Very roughly, a team’s values should guide its actions in the right ways with the right outcomes, and problems arise when that guidance is missing or flawed. Here are some typical situations where value differences can undermine project norms:

  • Values incoherence—the different values directly conflict.
  • Pragmatic incoherence—the different values don’t direct actions clearly or they direct conflicting actions.
  • Moral incoherence—the different values or resulting actions violate moral standards.

Values Incoherence Example
Juwan and Stacy are part of a team working on the role of a component C in some larger system S. Juwan feels he really understands something only when he can describe how it works, but Stacy keeps drawing attention to final impacts, arguing that they don’t need to know what’s inside the ‘black box’ (how C works) to understand what’s happening (how C impacts S). This is an example of strong value pluralism, where values directly conflict in the sense that one person values something the other person doesn’t: Juwan values mechanisms while Stacy doesn’t. In a case like this the team is *not* getting guidance from its values.

Pragmatic Incoherence Example
Aang and Katara’s team is committed to transparency and accountability in their co-production work. So, they want to publish their work open access. However, the open access publication fee is much higher than their budget estimated to the funder. While Aang and Katara take both commitments seriously, Aang feels strongly that they should remain accountable to their original budget, which says they can’t afford to pay the fee. Katara in contrast feels compelled to pay the fee to maintain transparency in their research. In this case the team is getting conflicting guidance from its values.

Moral Incoherence Example
Azula and Mai’s team is leading a water quality study funded by a government contract. Both Azula and Mai want to fulfill the contract on time as specified so it will be renewed, but a concerned citizen tells them that several people in the community are noticing funny smells in their water. To investigate would take the team beyond the scope of the contract, but if they don’t follow up, people could be poisoned. Azula overrules Mai and forces the team to ignore the citizen’s concerns in order to fulfill the contract. In this case the team is getting bad guidance from its values.

These are all examples of what we call “problematically plural values.” Working through problematically plural values is not only necessary at the time but can also grow a team’s capacity for solving future problems.

Steps for dealing with problematically plural values

Here are the basic steps a team could follow if they suspect they have problematically plural values. Of course, “steps” is a bit misleading, as these might not be discrete or sequential in practice. But “steps” is helpful for understanding what needs to happen:

  1. Detection—Does your team suffer from problematically plural values?
    1. Identification—What values do members of the team bring to the table?
    2. Determination—Do those values yield values, pragmatic, or moral incoherence?
  2. Coordination—What can your team do about problematically plural values?
    1. Give up—Dissolve the team
    2. Dodge—Change the research project in a way that avoids the problem
    3. Select—Adopt the (coherent) values of some subset of the team
    4. Compromise—Adopt a (coherent) selection of values from various team members
    5. Integrate—Create new (coherent) values from existing team values.

If a team chooses not to Give up or Dodge but rather to address its values by Selecting, Compromising, or Integrating, then there is more work to do:

  1. Articulating—Coordinating a set of shared values
  2. Recording—Making a record of those values
  3. Enacting—Carrying out those values
  4. Evaluating—Making sure the values are in operation and are effective.

It’s all easier said than done! Tools and approaches covered in blog posts about the Toolbox Dialogue Initiative, the Circle of Dialogue Wisdom, the Gradients of Agreement, and many others can help with one or more of these steps. Still, there is no substitute for the wisdom of experience.

How have you dealt with problematically plural values? What lessons have you learned to share with others?

Find out more:
Laursen, B. K., Gonnerman, C. and Crowley, S. J. (2021). Improving philosophical dialogue interventions to better resolve problematic value pluralism in collaborative environmental science. Studies in History and Philosophy of Science Part A, 87, 54–71. (Online) (DOI): https://doi.org/10.1016/j.shpsa.2021.02.004
This open access article includes more examples of values pluralism and an appendix of tools and the steps they are best suited to assist with.

Biography: Bethany Laursen PhD studies, develops, uses, and evaluates tools that help people make sense of wicked problems. She is a member of the Toolbox Dialogue Initiative, affiliate faculty with Michigan State University’s Center for Interdisciplinarity, and Assistant Dean in the Graduate School at Michigan State University in the USA. Bethany also maintains a consultancy called Laursen Evaluation & Design, LLC.

Biography: Stephen Crowley PhD is chair of the Philosophy Department at Boise State University, Idaho, USA. He is also a member of the Toolbox Dialogue Initiative. He helps facilitate team science projects (as part of the Toolbox Dialogue Initiative) in a variety of areas as well as working on models of such collaborations and how to support them.

Biography: Chad Gonnerman PhD is an Associate Professor of Philosophy and the Philosophy Program Coordinator at the University of Southern Indiana in the USA, in addition to being a member of the Toolbox Dialogue Initiative. He has written on the structural nature of concepts, methodology of relying on philosophical intuitions, error-possibility effects on lay attributions of knowledge, the everyday concept of knowledge-how, and philosophy’s ability to enhance cross-disciplinary research, among others.

17 thoughts on “Basic steps for dealing with problematic value pluralism”

  1. Hi Bethany, Stephen and Chad,
    Thanks heaps for taking the time to write an easy introduction to your work.
    Some open-ended thoughts stimulated by this.

    I also recently read this
    https://www.radicalxchange.org/media/blog/why-i-am-a-pluralist/
    A different take on pluralism with a different focus. I found the contrast stimulating.
    One observation: Glen tackles pluralism in knowledge and institutions, while you focus on values. In an earlier post on this blog (https://i2insights.org/2017/06/20/values-rules-knowledge-and-transformation/) I talk about how systems of values, rules and knowledge make up decision contexts, giving substance and shape to the shared understanding that allows deliberation and choice.
    Not wanting to put this all in a neat box, but maybe it helps make some connections, broadening the focus of pluralism and seeing how plural values relate to plural knowledge systems and institutions?

    My other thought was that the work done on resolving an issue is also work done on shaping the context of future decisions. It is here that dialogue that helps build and strengthen opposing views may be valuable, and where encouraging pluralism can be seen as a positive, even if it makes the decision process harder and messier.
    Thanks again for the stimulating read.

    • Russell – thanks for these thoughts mate – I think they push us in really helpful ways. I’ve skimmed both the links (thank you) but haven’t really dived in – so what follows is preliminary.

      1. I really like the vrk framework – it seems right to say that any decision is shaped by i) what folk care about (values) ii) what they take for granted (knowledge) and iii) external constraints (rules). I suspect (he said, making his prejudices clear) that values extend beyond the ethical and that in practice some elements of the context can be hard to sort into the appropriate category. Do social norms count as values, knowledge or rules?
      2. I really really like your point that work on resolving an issue at time 1 shapes the context for future decisions. I think this diachronic element is one that I haven’t fully appreciated, but now that you’ve mentioned it I think it helps me make sense of a lot of the rich comments in this thread. For example, Gabriele worries that we are turning everything into a values conflict. I’m tempted to say she is correct because I see a role for epistemic (and ontological) values in the creation of our knowledge (especially our models). What I see now is that I’m missing the possibility that the *current* decision context might involve knowledge or rules that were the result of earlier decision contexts where that knowledge (or rules) was shaped by values in one way or another. So while it’s right to say that a model or rule can embody values, it doesn’t follow that the best way to read a conflict between models is as a values conflict.

      I better stop now – grading awaits – but there is clearly much more that I need to think out here. Thanks once again for some awesome challenges!

  2. I am curious about your definition of values to include whimsical preferences and of value conflicts to include lack of agreement over file naming conventions. Staying in a research team context, is there then any disagreement that is not the result of value differences?

    • Gabriele – Is there any way I can plead the 5th on this? We have clearly taken a very broad view of both what counts as a value and what counts as a disagreement. Doing that does make a lot of disagreement look like value-based disagreement. Two thoughts –

      1. The broad view of values and disagreement can generate odd results (for example, are fights over file naming *really* value disagreements?), but adopting a narrower view of values and disagreements is even more problematic. This is because we would have to draw some (principled) distinction among preferences (some are *values* and others are mere fancies) and among conflicts (some are *disagreements* and some mere spats). I don’t know how to make those distinctions.

      2. There will still be some disagreements that are not values-based. For example, I might think that I should support plan X given my values and just be wrong that plan X is the way to go. If you point that out we are not disagreeing about values but about what follows from them. I think this can be a pretty common situation when the choices we are confronting are complex. I think Machiel’s remarks about the value of reflective equilibrium might be relevant here. By thinking about how cases relate to principles and vice versa you and I can do a lot of learning (and disagreeing) even when we both share a set of principles.

      I’m also tempted by the idea that actually yes, a lot of team level disagreements are value disagreements. My prejudice is that quite often when smart folk disagree it’s because they are ‘seeing’ the situation differently. But ‘seeing’ things differently is the result of paying attention to different aspects of the challenge and organizing those aspects in different ways. What you pay attention to and how you organize your ideas are values; judgments about what matters and what counts as understanding. So maybe yes, lots of research team disagreements are values conflicts.

      In addition to the above I have to admit I’d rather describe my inability to remember how to name files as a case of Steve sticking up for his values than as a case of ‘confused collaborator’. So it’s possible there is a hint of self-interest in what I’ve said here 🙂

      Finally – thanks for pushing on this – I’m not sure I buy my answer but I’m a lot clearer on this than I was to start with!

      As ever it is likely that Bethany and Chad will have richer thoughts on this!

      • Thanks for that explanation, Steve. Where then do disagreements based on different mental models or interests fit in? Do you see those as subsets of values? I appreciate that none of this is straightforward and that we’ve all got a lot of work to do sorting it out.

        • I sympathize with Steve’s comment: “But ‘seeing’ things differently is the result of paying attention to different aspects of the challenge and organizing those aspects in different ways. What you pay attention to and how you organize your ideas are values.” So maybe mental models are types of values, or derivatives of values.

        • Gabriele – I think disagreements driven by differences in interests are naturally understood as values disagreements (indeed our Juwan/Stacy example is set up as a disagreement driven by different interests). On this story my interests are a subset of my values. I’m not totally sold on this but the mental models challenge seems harder to me so I’ll leave the ‘interests’ problem here for now.

          On the mental models front I think Bethany is right to say that models are derivative of values. Any model of a system is a simplification of that system. Which simplification is appropriate will depend on the work it needs to do. This is just a version of the ‘all models are wrong but some are useful’ line (shout out to George Box!). To say that a model is useful is just to say that it helps me do something that I want/need to do – aka that I value in some sense. So a disagreement driven by different models is (or can be) a disagreement driven by the values that led folk to choose the models they did.

          But that can’t be the end of the story. Disagreement is hard work – it uses up scarce resources of time and attention. So actively disagreeing with someone is typically the result of a shared interest in something both parties care about. In some of those disagreements one party will be correct and the other simply wrong (e.g. Aussie Rules is *clearly* the only real football). But in general both parties to the disagreement will bring something worthwhile to the table. In such cases success involves identifying what’s good about each perspective and finding ways to synthesize those good bits. On this story both parties to the disagreement can be seen as working to find a more useful model. That’s not a disagreement about values; that’s helping one another create a model that better supports us in doing the work we want to do.

          All of this is making me think that what we do with a disagreement is at least as important as what we are disagreeing about. Since what to do with a disagreement will depend on the sort of disagreement it is, and that in turn will depend on what the disagreement is about, we do need to keep thinking about the sources of disagreement.

          So now I’m on the hook for a story about the aims of disagreement *and* a taxonomy of sources of disagreement (where values will turn up for sure but will probably benefit from being sub-divided in various ways). It’s just as well the days are getting longer in the Northern Hemisphere!

          Thanks as always for the challenge to think harder about this stuff!

  3. Thanks for an interesting post on an important topic that is indeed often difficult for teams to handle. As much as I appreciate your clear and relatively ‘lean’ approach, I wonder how that would work once team members discover the more fundamental nature of their dissensus? In my conversations with teams I tend to clarify such value conflict situations with the help of Goodman/Rawls’ notion of ‘reflective equilibrium’, which adds two elements to the situation: 1) in addition to values it emphasizes the role of background theories (which can be about human psychology but also about other relevant factors) that inform those values and can also complicate solving conflicts if they’re not articulated themselves; 2) it allows approaching the conflict both bottom-up (reasoning from cases to general principles) and top-down (from principles to cases), enabling a team to develop a suitable trajectory. Does such a strategy fit in your account as well?

    • Machiel, great question! It sounds like the ‘reflective equilibrium’ strategy you mention is a way of further exploring the values in play (what are they, where do they come from), and exploring the interplay of those values and the circumstances. Those sound like the “Identification” and “Determination” steps of our “Detecting” phase. But I can see how that strategy would also be useful in the “Coordination” phase. Perhaps as an exercise of moral imagination (Dewey, Brown) to “Select,” “Compromise,” or “Integrate” a coherent set of values. Or, if the dissensus is too fundamental to be overcome, then “Give Up” or “Dodge.”

      But were you saying you use reflective equilibrium to avoid giving up or dodging and yet somehow keep going without a shared, coherent set of team values? Would love to hear more about that.

    • Machiel – thanks for these thoughts mate. I think Bethany’s done a great job of describing how reflective equilibrium (RE) work might fit into the story we offer so let me push on two additional points that I think come out of your remarks.

      1. The role of background theory – I think you are right to point out that values work plays out against a background of assumptions of various kinds. You are also right to suggest that our story does not really do justice to this background material. I think Kevin Elliot’s question about how to deal with project norms is a version of your concern. I’m not quite sure what to do about this. I think any form of inquiry (and checking for conflicting values within a team surely counts as inquiry) takes place against a background of various assumptions. That leads to the following observation – sometimes the problems you are encountering arise as a result of background assumptions rather than the material in the ‘focus’ of your inquiry. That suggests that ‘checking the background’ can be valuable but leaves open the how, when etc. of such ‘checking’. I guess I think you are right but am not sure how to add something to our story that gives meaningful advice on how to address this concern. I’ve clearly got some homework here.

      2. The role of RE – I think I see RE as constituting a standard for determining when a set of values is coherent. If that’s right then RE is ‘unpacking’ a notion (coherence) that we have left undefined here. For what it’s worth I think the use of RE in something like this fashion is a really rich one. It would be great to think with you further about that.

      Thanks for these challenges mate – there’s lots here to think about and that is *awesome*

      • Thanks so much for your thoughts, Bethany and Stephen. I agree that much of this is in need of further specification, which is often only possible case by case. So what is at stake in RE (or at least the ‘wide’ RE that was formulated by Daniels and others) is 1) a set of cases, 2) principles that may be derived from and/or superimposed on those cases; and 3) the (often implicit) background theories against which 1 & 2 figure.
        Especially in cases of transdisciplinary or action research, with the involvement of extra-academic stakeholders, researchers might discover that their background theories (which may range from epistemological notions of what may count as knowledge to more metaphysical ideas) are not at all shared with these extra-academics. They may even find that there is some incoherence at stake; the question is how to appreciate that ‘diagnosis’ and what to do with it. I agree with your insistence on the importance of coherence in this context.
        Yet such an observation of incoherence is almost comparable to the ethical question regarding accidental findings (e.g. tumors) in medical research: must we share such a finding with a research participant, or not? The responses that you list above offer at least several strategies for handling or avoiding such situations if one doesn’t want to engage in the difficult and time-consuming process of Reflective Equilibrium, which is helpful indeed – thanks again for providing such help!

        • Machiel – I’m really intrigued by your thinking here. I hope I have it right. What I think you’re saying is something like the following. Let’s imagine some transdisciplinary project – to really get everyone on the same page (call this full coherence) will require a significant investment of time and energy to work through a full RE process. But perhaps we can get away with being ‘coherent enough’, that is, we could overlook parts of worldviews that don’t really fit the shared story we’re building but are also peripheral to that story. On that way of looking at things the work in our paper is something like a rough guide to achieving ‘working coherence’ – that is, just enough coherence to complete the project – but not full coherence. Working coherence isn’t as good as full coherence but it’s ‘cheaper’ in terms of time and energy.

          In contrast I’ve been thinking of RE as a standard (and also a method) for identifying when coherence has been achieved. On that story RE ‘lives’ within our framework. It operationalizes one of our key ideas – coherence.

          Right now I can’t work out if this is a fundamental disagreement or something that we can resolve with access to a whiteboard and perhaps a beverage or two. Hopefully we get to test my hypothesis with a whiteboard (and drinks) sooner rather than later!

          As ever thanks heaps for this challenge – It’s a pleasure to think with you mate!

          • Machiel & Steve, you both mention it’s a goal to strive for (at least rough) coherence, which I used to completely buy into, but lately I’m not so sure. The more I learn from my Indigenous colleagues, the more I question the possibility or desirability of coherence. Instead, my colleagues have been advocating for ‘braiding,’ a nod to Robin Wall Kimmerer’s work. I’ve much more to learn about the difference between coherence and braiding, but so far I’ve grasped that in braiding, each knowledge/values system retains its own coherence but has points of ‘connection’ with the others. How does such a notion of ‘braiding’ fit with your understanding of ‘coherence,’ RE, and the goal of cross-disciplinary cooperation? Is ‘braiding’ a kind of ‘rough’ coherence in that there is not full consilience? Or is it something different from coherence in your view? And if so, can ‘braiding’ still be an appropriate goal for a cross-disciplinary project?

            • Bethany – thanks heaps for this – I’m excited to learn more about braiding. Obviously I can’t speak for Machiel but from my point of view there are two things I need to separate in my thinking about how I use the term ‘coherence’.

              At one level I use it as a label for ‘useful’ working relationships. That is, ‘coherence’ applies to any collective where members get enough of value out of their participation for them to feel that participation was worthwhile. Whether or not you get something of value out of a collaboration depends on a bunch of contextual factors, including how much you invest in the collaboration. So collaborations that have minimal investment can ‘cohere’ pretty easily. So can collaborations that are time-limited – think of this a bit like ‘getting along with’ – who we can ‘get along with’ varies. It depends on factors like how long we are together (visitors are one thing – roommates another) and under what circumstances (e.g. backcountry hiking vs visiting friends). Using ‘coherence’ in this way may be a mistake. Certainly there are other labels for this; e.g. I’ve seen folk use ‘going forward together’ to capture something like the notion I’m describing here. I’m keen to learn how ‘braiding’ relates to this notion.

              At another level I use coherence to describe something like ‘well functioning’ conceptual systems or ‘neatly and efficiently organized’ sets of ideas. When I feel like the mental models I share with my collaborators have this feature it makes me happy! Folk often want to use logic as a measure of coherence but for me coherence is deeper than logic. Logic is a tool that can be used to generate coherence but there are many logics and choosing the appropriate logic is part of the work of creating coherence. In this sense ‘coherence’ is a core value for me. It’s certainly not a deal breaker – collaborations that are not fully coherent in this second sense but that generate meaningful action are to be celebrated. But I’ll always take a more coherent collaboration over a less coherent one ceteris paribus. But maybe I’m wrong to do so – I’m excited to learn more about ‘braiding’ and perhaps to discover a richer and more flexible value than coherence (in my second sense).

              Thanks for this most cool idea. There is clearly much to scheme about sooner rather than later!

  4. Thanks for this post! I think you make some great points here. You note that value pluralism is not always problematic, but that it can become problematic when it undermines project norms. This makes a lot of sense to me, but I can imagine that sometimes the interplay of diverse values on a team might indicate to us that the project norms themselves might need to change. For example, perhaps a project is designed to move at a particular pace but it becomes clear that in order to work through some of the conflicting values on a team the project might need to slow down. Or suppose a project is designed in such a way that it precludes meaningful input from some important stakeholders, but some of the people on the team really value input from those stakeholders. In some cases, one might think the project norms should be reexamined. Do you have advice on deciding when/how to alter the project norms themselves? Could the steps at the end of your post be used not only for working through conflicting values but for altering the project norms themselves?

    • Kevin – thanks heaps for these thoughts. I suspect Bethany and Chad will have more to say but let me get the ball rolling.

      My sense is that our story can address revisions to team norms. That said, I don’t think it’s a salient feature of our story. The way we’ve set things up, team values are composites of the values of the team members. Project norms fit poorly into that approach – they are often cultural expectations that pre-exist team formation and are taken for granted by team members rather than advocated for or consciously adopted by team members. On this account project norms are, strictly, held by individual team members and so are revisable in the sorts of ways we describe above. But really project norms are more like unconscious habits with strong external social pressure to conform to those habits – a lot of project norms are mandated by funding bodies. As such they are both i) elusive (in the sense of being hard to notice) and ii) resistant to change (given external reinforcement), which will make them unlikely candidates for value revision. Take your ‘timeline’ example above – my sense is that once a project gets funded the timeline is all but sacrosanct even though, on our story, it’s simply another potential team value and so open to revision to avoid problems. The timeline doesn’t become a core value because it has strong advocates; it exists as a core value because everyone “knows” you stick to timelines.

      I’m not sure how I feel about this – but I can’t see how to improve it at the moment. I’m going to have to keep thinking about this mate. Thank you for the challenge!

      • Steve, I think you’re right that our story can address revisions of project norms, but we didn’t bring that to the fore. In the paper, we did mention that project norms can come both from within and from without, and they can pre-date the project or emerge during it. Then we didn’t say anything more about them. Kevin, what you’re talking about sounds like norms that are at least modified from within the team in an emergent way during the project. Can that happen through the same steps we outlined above? I think yes.

        In fact, I think many times the resolution of problematic value pluralism results in changes to the project’s norms. After all, one way to resolve a problem is to dissolve it—change the constraints that made it a problem in the first place. Any of our ‘coordination’ strategies above can be done at the primary value level or at the secondary level of project norms. In fact, “Give Up” and “Dodge” are always going to change the project norms, because they cancel or change the project structure itself. Perhaps lengthening a timeline is “dodging” the time limit problem. The other strategies could operate at both levels.

        These are times the team needs to ‘go meta’ (as my folks would phrase it), which means moving the conversation to the rules/norms/strategies that govern the thing we were first discussing. It can sound like this in a conversation: “I think some of the problem here exists at another level” or “It sounds like we might want to revisit how we’re approaching this.” I would also call this “double loop learning” (Argyris 1991).

        Like Steve said, many project norms come from without – they are broadly cultural, like sticking to timelines – but they must be internalized at least somewhat for them to have any effect on team members’ actions. When that internalization is strong and unconscious, as Steve mentioned, it takes skill for someone to be able to notice such a norm, and skill to bring it to the attention of the group. Because it takes skill, it’s not guaranteed to happen. Unconscious values are very hard to access, as Chad Gonnerman would put it, and our paper talked about why philosophical dialogue is a good tool for helping us with that.

        So, with skill and tools, I do think the general steps of “Articulate,” etc. could work at the ‘meta’ level as well as the primary level. It would be fun to think more about this.

