You are biased!

Community member post by Matthew Welsh


Complex, real-world problems require cooperation or agreement amongst people of diverse backgrounds and, often, opinions. Our ability to trust in the goodwill of other stakeholders, however, is being eroded by constant accusations of ‘bias’ – made by commentators about scientists, by politicians about media outlets, and by people of differing political viewpoints about one another. Against this cacophony of accusation, it is worth stepping back and asking “what do we mean when we say ‘bias’, and what does it say about us and about others?”

When used in this accusatory manner, bias is a loaded word. It comes with an assumption of deliberate, motivated deceit. When we make an accusation of bias, we are accusing our targets of being bad people. We assume that they know better but continue to deliberately misrepresent the truth.

While this may be true in some instances, deliberate deceit is often not the case. If you examine people’s cognition – how they make decisions, interpret information or search for options – something becomes increasingly apparent: many biases arise without any need for us to posit conscious motivations. That is, biases occur not because people are bad or dishonest but simply because they are people – and have cognitive limitations that impact how they interpret and understand the world.

To take a well-known example from decision-making research, confirmation bias is often described as people’s tendency to seek out and preferentially accept evidence that conforms to their pre-existing beliefs (see, e.g., Nickerson 1998). Reading this description, it sounds like the result of a person’s conscious decision to deliberately avoid information they know is relevant. Discussed in terms of how people process information, however, it becomes less sinister.

For example, to assist their understanding and reduce cognitive effort, people simplify the complex world by constructing interconnected, causal explanations and categories – so-called schema. The entire point of such schema is that when a new piece of information is presented, if it fits within a schema, it can be uncritically accepted. If, however, new information contradicts existing beliefs, it triggers cognitive processes designed to determine whether the new information should be disregarded or the schema changed. Obviously, changing a schema involves significantly more effort and, so, makes sense only in exceptional circumstances. Thus, we subject ‘surprising’ information to critical examination, with the result that it receives greater scrutiny and is regarded as more doubtful than unsurprising evidence. This holds true whether a scientist is reviewing a paper that presents evidence running contrary to their own theories or a non-scientist is scanning the internet for information about the efficacy or dangers of vaccines – their initial beliefs will, unconsciously, bias how plausible data seem.
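To make this asymmetry concrete, here is a minimal Python sketch – my illustration, not anything from the post or the research literature; the schema-fit threshold and the scrutiny rate are made-up assumptions – of how uncritically accepting schema-consistent evidence while scrutinising surprising evidence skews what gets believed, even when the evidence on offer is perfectly balanced:

```python
import random

def evaluate_evidence(belief, evidence, scrutiny=0.7):
    """Toy model of schema-driven evidence processing.

    `belief` and `evidence` are positions on a 0-1 scale.
    Evidence that fits the current schema (is close to the belief)
    is accepted uncritically; 'surprising' evidence triggers costly
    scrutiny and is rejected with probability `scrutiny`.
    """
    surprising = abs(evidence - belief) > 0.3     # does it violate the schema?
    if not surprising:
        return True                               # fits the schema: accepted as-is
    return random.random() > scrutiny             # scrutinised: usually rejected

random.seed(1)
belief = 0.2
stream = [random.random() for _ in range(10_000)]  # balanced evidence stream
accepted = [e for e in stream if evaluate_evidence(belief, e)]

print(f"mean position of evidence offered:  {sum(stream) / len(stream):.2f}")      # ~0.50
print(f"mean position of evidence accepted: {sum(accepted) / len(accepted):.2f}")  # pulled toward the belief
```

Even though the stream of evidence is balanced around 0.5, the subset that survives processing sits noticeably closer to the pre-existing belief – no dishonesty required, just cheaper processing for unsurprising information.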

Similarly, when we need to select among sources of information, we do not know which sources are reliable across all areas of knowledge. Instead, we select sources that we trust to provide plausible information – and, for exactly the reason outlined above, these tend to be ones that present information that accords with our pre-existing beliefs. This is because such information is easier to digest and our minds use ‘fluency’ (the ease with which we understand something) as a marker for truth. Thus, without any motivation beyond seeking information, our cognitive limitations result in us preferring evidence that reinforces what we already believe – producing the confirmation bias and potentially amplifying any initial errors in our beliefs.
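The amplifying effect of choosing sources by fluency can be sketched the same way – again a toy model of my own, with made-up source positions and an arbitrary update rate, not a claim about any real outlet or reader. An agent who always reads the outlet closest to their current belief never encounters the messages that would correct a small initial error:

```python
import random

def choose_source(belief, sources):
    """Pick the outlet whose typical message feels most 'fluent',
    i.e., closest to what we already believe."""
    return min(sources, key=lambda s: abs(s - belief))

random.seed(2)
sources = [0.1, 0.3, 0.5, 0.7, 0.9]   # outlets with different slants; 0.5 = accurate
belief = 0.35                          # a small initial error

for _ in range(50):
    outlet = choose_source(belief, sources)
    report = outlet + random.gauss(0, 0.05)   # noisy message from the chosen outlet
    belief += 0.1 * (report - belief)         # nudge belief toward the message

print(f"final belief: {belief:.2f}")   # settles near 0.3, not 0.5
```

Starting just slightly off-centre, the agent locks onto the outlet matching that lean and their belief settles there: the initial error is reinforced rather than corrected.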

So, when you are next inclined to bemoan other people’s bias, remember that ‘biased’ is the natural state of people – resulting from our attempts to grapple with a complex world using limited cognitive abilities. Rather than assuming malfeasance, your time may be better spent trying to understand which limitations might be causing the biases and considering how to present information in ways that help others overcome our shared cognitive limitations.

That is, if everyone is biased, what can we do about it? Simply explaining that someone’s belief is false does not work – think of how difficult it has been to displace the idea that vaccinations cause autism once it took hold. There is no evidence for this relationship, but removing it leaves a ‘hole’ in the person’s schema. That is, because the actual cause of autism is unknown, removing vaccines as the cause results in an incomplete and thus incoherent account, and people revert to their simple, coherent, false explanation because it is, subjectively, superior (remembering that ‘easy = true’, so a simple explanation is seen as more likely to be true).

This, then, illuminates a key for debiasing effects like confirmation bias. Rather than trying to insert facts into pre-existing schema they are at odds with, we need to focus on creating alternative accounts that are complete in and of themselves – building on a person’s existing beliefs. Imagine a process like a Socratic dialogue, where you start with simple, readily agreeable facts and then build on these towards your final conclusion. Another cognitive limitation assists in this – knowledge partitioning: the observation that people construct different schema to explain different parts of the world and can be unconscious of discrepancies between these. For example, rather than repeating evidence for climate change from various scientific sources (which a sceptic may regard as unreliable when interpreted through their political schema), you could attempt to build on their potentially separate schema for basic science (the physics of the greenhouse effect, etc.) towards the scientific consensus opinion that human activity is driving climate change.

The process described above is, of course, time consuming and difficult – requiring deep thought about how and why a person might be displaying biased thinking, and about what other aspects of their knowledge you might be able to leverage to assist them in avoiding this bias. It is, however, far more likely to work than standing at ten paces shouting “bias!” at one another.

Are there any strategies you have found useful for identifying and countering your own or other people’s cognitive biases? Do you have examples of ways to present information that might assist a recipient in avoiding confirmation (or other) biases?

To find out more about confirmation and other biases and how they can be countered:
Welsh, M. (2018). Bias in science and communication: A field guide. Bristol, United Kingdom: IOP Publishing. http://iopscience.iop.org/book/978-0-7503-1311-7

Reference:
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175-220.

Biography: Matthew Welsh PhD is a Senior Research Fellow at the Australian School of Petroleum, University of Adelaide, Australia. His research and teaching focus on how people’s inherent cognitive processes affect their judgements, estimates and decisions and the implications of this for real-world decision making.

Negotiations and ‘normative’ or ‘ethical’ power

Community member post by Lena Partzsch


What can we learn from international relations about how ‘normative’ or ‘ethical’ power can be used in successful negotiations, for example, for pathways to sustainability? Here I build on Ian Manners’ (2002) concept of “Normative Power Europe”. He argues that the European Union’s specific history “predisposes it to act in a normative way” (Manners 2002: 242) based on norms such as democracy, rule of law, social justice and respect for human rights. I explore the broader ramifications of the normative power concept for empirical studies and for practical negotiation and collaboration more generally.

First, the concept of normative power implies that the spread of particular norms is perceived as a principal policy goal, whether that relates to foreign policy, environmental policy or other kinds of policy.

When are scientists neutral experts or strategic policy makers?

Community member post by Karin Ingold


What roles can science and scientific experts adopt in policymaking? One way of examining this is through the Advocacy Coalition Framework (Sabatier and Jenkins-Smith 1993). This framework highlights that policymaking and the negotiations regarding a political issue—such as reform of the health system, or the introduction of an energy tax on fossil fuels—is dominated by advocacy coalitions in opposition. Advocacy coalitions are groups of actors sharing the same opinion about how a policy should be designed and implemented. Each coalition has its own beliefs and ideologies and each wants to see its preferences translated into policies.

Using Ostrom’s social-ecological systems framework to set context for transdisciplinary research: A case study

Community member post by Maria Helena Guimarães


How can Elinor Ostrom’s social-ecological systems framework help transdisciplinary research? I propose that this framework can provide an understanding of the system in which the transdisciplinary research problem is being co-defined.

Understanding the system is a first step and is necessary for adequate problem framing, engagement of participants, connecting knowledge and structuring the collaboration between researchers and non-academics. It leads to a holistic understanding of the problem or question to be dealt with. It allows the problem framing to start with a fair representation of the issues, values and interests that can influence the research outcomes. It also identifies critical gaps as our case study below illustrates.

Tool-swapping in interdisciplinary research – a case study

Community member post by Lindell Bromham


What can we learn from focussing on examples of interdisciplinary research where ideas or techniques from one field are imported to solve problems in another field? This may be in the context of interdisciplinary teams, or it may simply involve borrowing from one field to another by researchers embedded within a particular field. One of the major benefits of interdisciplinary research is the chance to swap tools between fields, to save having to reinvent the wheel.

The fields of evolutionary biology and language evolution have been swapping ideas and tools for over 150 years, so considering the way that ideas have flowed between these fields might provide an interesting case study.

How is transformative knowledge ‘co-produced’?

Community member post by Andy Stirling, Adrian Ely and Fiona Marshall


It’s often said that knowledge to tackle big problems in the world – food, water, climate, energy, biodiversity, disease and war – has to be ‘co-produced’. Tackling these problems is not just about solving ‘grand challenges’ with big solutions, it’s also about grappling with the underlying causal social and political drivers. But what does co-production actually mean, and how can it help to create knowledge that leads to real transformation?

Here’s how we at the Social, Technological and Environmental Pathways to Sustainability (STEPS) Centre approach this challenge of co-production.