Using a cartoon video to achieve research impact

By Darren Gray, Yuesheng Li and Don McManus

In the right circumstances, a cartoon video can be an effective way to communicate research information. But what’s involved in developing a cartoon video?

This blog post is based on our experience as a Chinese-Australian partnership developing an educational cartoon video (The Magic Glasses; link at the end of this post), which aimed to prevent soil-transmitted helminth (parasitic worm) infections in Chinese schoolchildren. We believe that the principles we applied are more broadly applicable and share them here.

Developing the cartoon video involved three major steps: formative research, production, and pilot testing plus revision.

Formative research

The aim of the formative research is to better understand what you want to change; in our case, this was a reduction in the behaviours that put Chinese children at risk of being infected by soil-transmitted helminths. We therefore wanted to find out about:

  • What children already knew about risky behaviours
  • What risky behaviours they were engaged in
  • What additional information about knowledge and behaviours could be provided by parents, teachers and doctors
  • How the relevant behavioural change might occur.

We gathered this information by surveying, interviewing and observing children and households, conducting key informant interviews, and reviewing relevant theory about behavioural change. As part of this information gathering we also found out about the children’s favourite comics and cartoons.

More generally, while the information that needs to be gathered will depend on the change being sought, we anticipate that a similar mix of theory and empirical data gathering will be useful.

Production

Production involves turning the formative research into a first draft of the cartoon; the process we used can be easily adapted to other circumstances.

We first used the formative research to produce a series of key messages that the cartoon video needed to convey. These were then turned into a script for the cartoon narrative, which in our case was done through a series of brainstorming sessions by a multi-disciplinary team comprising researchers, education experts, animators and a scriptwriter.

During the scriptwriting process, Chinese experts were consulted repeatedly for advice on China-specific cultural aspects.

The script was a written document describing the dialogue, settings and characters from which all other elements essential for cartoon development were created. These included a storyboard to visualise camera shots and an animatic, turning the storyboard into a slideshow to pace and time the cartoon. Subsequently, concept artwork was created for all the main features presented in the script including the cartoon characters, the settings and general cartoon style.

Next, these resources were brought together under the supervision of the cartoon director, with each stage continually reviewed, revised and incorporated into the movie. Backgrounds were created alongside characters, which were animated scene by scene. Dialogue and sound were then added. Throughout the process, results were discussed with the multi-disciplinary team and content was adapted accordingly.

Pilot testing plus revision

Pilot testing with the target audience is essential to reveal and remedy weaknesses in the cartoon video before a final version is produced.

In our case, a pilot version of The Magic Glasses was tested in six schools in one Chinese city with children, teachers and invited parents. A questionnaire was used to assess whether the key messages had been understood. Small focus groups provided an opportunity for the audience to comment on the cartoon and make suggestions for improvement.

The main change we made was to re-record the audio using professional voice actors based in China (rather than Australian-based Chinese film school students), which considerably improved the quality and entertainment value of the cartoon.

Recommendations

As a result of our experience, we developed eight recommendations, modified here to be more generally applicable:

  1. Involve the relevant local community and the target group early on in the formative research phase to gain insight into the change needed and relevant context.
  2. Use multiple, both quantitative and qualitative, methods for the formative research.
  3. Use relevant theory to guide the change message.
  4. Where behaviour change is required, ensure the video incorporates instructional messages into a real-life situation displaying correct behaviour embedded in the local context (rather than depicting a stand-alone instructional message). Ideally, the educational material should be developed locally to account for cultural differences.
  5. Ensure the video is produced professionally by hiring a professional audio-visual company. It is also essential to involve an experienced scriptwriter.
  6. Ensure the knowledge can be integrated into an entertaining narrative, thereby informing and entertaining at the same time.
  7. Pilot test the video in the targeted area and solicit feedback from the local community and targeted group.
  8. Use the cartoon video in conjunction with other strategies to encourage change. (In the case of The Magic Glasses, we also used other teaching methods, such as class discussions and role-plays, allowing children to practise, consolidate and repeat the newly acquired knowledge.)

Conclusion

Do you have experience using cartoon videos or similar techniques to achieve research impact? Do you have lessons to share about what does and does not work?

To find out more:
Bieri, F. A., Yuan, L-P., Li, Y-S., He, Y-K., Bedford, A., Li, R. S., Guo, F-Y., Li, S-M., Williams, G. M., McManus, D. P., Raso, G. and Gray, D. J. (2013). Development of an educational cartoon to prevent worm infections in Chinese schoolchildren. Infectious Diseases of Poverty, 2: 29. (Online) (DOI): https://doi.org/10.1186/2049-9957-2-29

The Magic Glasses video (14 minutes) can be seen at: https://www.youtube.com/watch?v=7C-O5M3YnRE

Biography: Darren Gray PhD is a professor and Deputy Director of the Research School of Population Health and Head of the School’s Department of Global Health at The Australian National University in Canberra, Australia. He has worked extensively in Southeast Asia in water, sanitation and hygiene (WASH); neglected tropical diseases; infectious disease transmission dynamics; health promotion/education; cluster-randomised controlled trials; and field-based epidemiological research.

Biography: Yuesheng Li PhD is a Senior Research Fellow at the QIMR Berghofer Medical Research Institute and Adjunct Senior Lecturer at the School of Public Health, University of Queensland, both in Brisbane, Australia, and an honorary professor at the Hunan Institute of Parasitic Diseases, China. His research focuses on developing effective public-health interventions, including vaccines, and novel diagnostic procedures against important parasites, with the goal of elimination.

Biography: Donald P. McManus Ph.D., D.Sc. (Wales) is an NHMRC Senior Principal Research Fellow at the QIMR Berghofer Medical Research Institute and Professor of Tropical Health at the University of Queensland, both in Brisbane, Australia. He researches the molecular biology, immunology, diagnosis and epidemiology of parasitic worms. He is the recipient of multiple awards, including Fellow of the Royal Society of Biology (UK, 2013), Fellow of the Australian Academy of Health and Medical Sciences (2013) and winner of the Sornchai Looareesuwan Medal 2018 “for outstanding achievements in experimental and clinical tropical medicine research”.

Darren Gray is a member of blog partner PopulationHealthXchange, which is in the Research School of Population Health at The Australian National University.

How Can We Know Unknown Unknowns?

By Michael Smithson

In a 1993 paper, philosopher Ann Kerwin elaborated a view on ignorance that has been summarized in a 2×2 table describing crucial components of metacognition (see figure below). One margin of the table consisted of “knowns” and “unknowns”. The other margin comprised the adjectives “known” and “unknown”. Crosstabulating these produced “known knowns”, “known unknowns”, “unknown knowns” and “unknown unknowns”. The latter two categories have caused some befuddlement. What does it mean to not know what is known, or to not know what is unknown? And how can we convert either of these into their known counterparts?

Figure: The relationship of known to unknown. Source: Adapted from Kerwin (1993) by Smithson in Bammer et al. (2008)

In this post, I will concentrate on unknown unknowns, what they are, and how they may be identified.

Attributing ignorance

To begin, no form of ignorance can be properly considered without explicitly tracking who is attributing it to whom. With unknown unknowns, we have to keep track of three viewpoints: the unknower, the possessor of the unknowns, and the claimant (the person making the statement about unknown unknowns). Each of these can be oneself or someone else.

Various combinations of these identities generate quite different states of (non)knowledge and claims whose validities also differ. For instance, compare:

  1. A claims that B doesn’t know that A doesn’t know X
  2. B claims that A doesn’t know that A doesn’t know X
  3. A claims that A doesn’t know that B doesn’t know X
  4. A claims that A doesn’t know that A doesn’t know X

The first two could be plausible claims, because the claimant is not the person who doesn’t know that someone doesn’t know X. The last two claims, however, are problematic because they require self-insight that seems unavailable. How can I claim I don’t know that I don’t know X? The nub of the problem is self-attributing false belief. I am claiming one of two things. First, I may be saying that I believe I know X, but my belief is false. This claim doesn’t make sense if we take “belief” in its usual meaning; I cannot claim to believe something that I also believe is false. The second possible claim is that my beliefs omit the possibility of knowing X, but this omission is mistaken. If I’m not even aware of X in the first place, then I can’t claim that my lack of awareness of X is mistaken.

Current unknown unknowns would seem to be claimable by us only about someone else, and therefore current unknown unknowns can be attributed to us only by someone else. Straightaway this suggests one obvious means of identifying our own unknown unknowns: cultivate the company of people whose knowledge bases differ sufficiently from ours that they can point out things we don't know that we don't know. However, most of us don't do this. Instead, the literature on interpersonal attraction and friendship shows that we gravitate towards others who are just like us, sharing the same beliefs, values, prejudices and, therefore, blind spots.
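
As a rough illustration of this point (a hypothetical Python sketch, not part of the original argument), the snippet below enumerates who the claimant, the meta-level unknower and the possessor of the unknown are, and flags a claim as problematic whenever the claimant is also the person who supposedly doesn't know that the unknown exists:

```python
from itertools import product

# Claim structure (hypothetical illustration):
# "<claimant> claims that <meta_unknower> doesn't know that <possessor> doesn't know X".
# Following the argument above, a claim is problematic when the claimant and the
# meta-level unknower are the same person, because it would require insight into
# one's own lack of awareness.

people = ["A", "B"]

for claimant, meta_unknower, possessor in product(people, repeat=3):
    status = "problematic" if claimant == meta_unknower else "plausible"
    print(f"{claimant} claims that {meta_unknower} doesn't know "
          f"that {possessor} doesn't know X -> {status}")
```

Claims 1 and 2 listed earlier come out as plausible and claims 3 and 4 as problematic, matching the analysis in the preceding paragraphs.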

Different kinds of unknown unknowns

There are different kinds of unknown unknowns, each requiring different “remedies”. The most important distinction is between matters that we mistakenly think we know about and matters that we are unaware of altogether. Its importance stems from the fact that these two kinds have different psychological impacts when they are attributed to us and require different readjustments to our view of the world.

1. False convictions

A shorthand term for the first kind of unknown unknown is a “false conviction”. This can be a matter of fact that is overturned by a credible source: for instance, I may believe that a tomato is a vegetable but then learn from a more botanically literate friend that, botanically, it is a fruit. Or it can be an assumption about one's depth of knowledge that is debunked by raising the standard of proof: I may be convinced that I understand compound interest, but when someone asks me to explain it, I realize that I can't provide a clear explanation.

What makes us vulnerable to false convictions? A major contributor is overconfidence about our stock of knowledge. There is considerable evidence that most people believe they understand the world in much greater breadth, depth and coherence than they actually do. In a 2002 paper, psychologists Leonid Rozenblit and Frank Keil coined a phrase to describe this: the “illusion of explanatory depth”. They found that this kind of overconfidence is greatest in explanatory knowledge about how things work, whether in natural processes or artificial devices. They were also able to rule out self-serving motives as a primary cause of the illusion. Instead, the illusion of explanatory depth arises mainly because our scanty knowledge base gets us by most of the time, we are not often called upon to explain our beliefs in depth, and even if we intend to check our beliefs, the opportunities for first-hand testing of many of them are very limited. Moreover, scanty knowledge also limits the accuracy of our assessments of our own ignorance: greater expertise brings with it greater awareness of what we don't know.

Another important contributor is hindsight bias, the feeling after learning about something that we knew it all along. In the 1970s, cognitive psychologists such as Baruch Fischhoff ran experiments asking participants to estimate the likelihoods of outcomes of upcoming political events. After these events had occurred or failed to occur, participants were asked to recall the likelihoods they had originally assigned. They tended to overestimate how likely they had thought an event would be if the event actually happened.

Nevertheless, identifying false convictions and ridding ourselves of them is not difficult in principle, provided that we're receptive to being shown to be wrong and able to resist hindsight bias. We can self-test our convictions by checking their veracity against multiple sources, by subjecting them to more stringent standards of proof, and by assessing our ability to explain the concepts underpinning them to others. We can also prevent false convictions by being less willing to leap to conclusions and more willing to suspend judgment.

2. Unknowns we aren’t aware of at all

Finally, let's turn to the second kind of unknown unknown: the unknowns we aren't aware of at all. This type gets us into rather murky territory. A good example of it is denial, which we may contrast with unknown unknowns that are merely due to unawareness. The distinction is slightly tricky, but a good indicator is whether we're receptive to the unknown when it is brought to our attention. A climate-change activist whose friend is adamant that the climate isn't changing will likely think of her friend as a “climate-change denier” in two senses: he is denying that the climate is changing and he is also in denial about his ignorance on that issue.

Can unknown unknowns be beneficial or even adaptive?

One general benefit simply arises from the fact that we don’t have the capacity to know everything. The circumstances and mechanisms that produce unknown unknowns act as filters, with both good and bad consequences. Among the good consequences is the avoidance of paralysis—if we were to suspend belief about every claim we couldn’t test first-hand we would be unable to act in many situations. Another benefit is spreading the risks and costs involved in getting first-hand knowledge by entrusting large portions of those efforts to others.

Perhaps the grandest claim for the adaptive value of denial was made by Ajit Varki and Danny Brower in their book on the topic. They argued that the human capacity for denial was selected for (evolutionarily) because it enhanced the reproductive capacity of humans who had evolved to the point of realising their own mortality. Without the capacity to be in denial about mortality, their argument goes, humans would have been too fearful and risk-averse to survive as a species. Whether convincing or not, it's a novel take on how humans became human.

Antidotes

Having taken us on a brief tour through unknown unknowns, I’ll conclude by summarizing the “antidotes” available to us.

  1. Humility. A little over-confidence can be a good thing, but if we want to be receptive to learning more about what we don’t know that we don’t know, a humble assessment of the little that we know will pave the way.
  2. Inclusiveness. Consulting others whose backgrounds are diverse and different from our own will reveal many matters and viewpoints we would otherwise be unaware of.
  3. Rigor. Subjecting our beliefs to stricter standards of evidence and logic than everyday life requires of us can quickly reveal hidden gaps and distortions.
  4. Explication. One of the greatest tests of our knowledge is to be able to teach or explain it to a novice.
  5. Acceptance. None of us can know more than a tiny fraction of all there is to know, and none of us can attain complete awareness of our own ignorance. We are destined to sail into an unknowable future, and accepting that makes us receptive to surprises, novelty, and therefore converting unknown unknowns into known unknowns. That unknowable future is not just a source of anxiety and fear, but also the font of curiosity, hope, aspiration, adventure, and freedom.

References:
Bammer, G., Smithson, M. and the Goolabri Group. (2008). The nature of uncertainty. In: Bammer, G. and Smithson, M. (eds.), Uncertainty and Risk: Multi-Disciplinary Perspectives. Earthscan: London, United Kingdom: 289-303.

Kerwin, A. (1993). None too solid: Medical ignorance. Knowledge, 15, 2: 166-185.

Rozenblit, L. and Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 5: 521-562.

Varki, A. and Brower, D. (2013). Denial: Self-deception, false beliefs, and the origins of the human mind. Hachette: London, United Kingdom.

Biography: Michael Smithson PhD is a Professor in the Research School of Psychology at The Australian National University. His primary research interests are in judgment and decision making under ignorance and uncertainty, statistical methods for the social sciences, and applications of fuzzy set theory to the social sciences.

This blog post is part of a series on unknown unknowns arising from a collaboration between the Australian National University and Defence Science and Technology.

Published blog posts in the series:
Accountability and adapting to surprises by Patricia Hirl Longstaff
https://i2insights.org/2019/08/27/accountability-and-surprises/

Scheduled blog posts in the series:
September 24: What do you know? And how is it relevant to unknown unknowns? by Matthew Welsh
October 8: Managing innovation dilemmas: Info-gap theory by Yakov Ben-Haim

Using discomfort to prompt learning in collaborative teams

By Rebecca Freeth and Guido Caniglia

We know that reflecting can make a marked difference to the quality of our collective endeavour. However, in the daily busyness of inter- and transdisciplinary research collaborations, time for reflection slides away from us as more immediate tasks jostle for attention. What would help us put into regular practice what we know in theory about prioritising time to reflect and learn?

Discomfort sometimes provides the necessary nudge in the ribs that reminds us to keep reflecting and learning. The discomfort of listening to the presentation of a colleague you like and respect, but having very little idea what they’re talking about. Or, worse, failing to see how their research will make a worthy contribution to the collective project. The discomfort when an intellectual debate with a colleague turns personal. The discomfort of watching project milestones loom, knowing you’re seriously behind schedule because others haven’t done what they said.

Accountability and adapting to surprises

By Patricia Hirl Longstaff

We have all been there: something bad happens and somebody (maybe an innocent somebody) has their career ruined in order to prove that the problem has been fixed. When is blame appropriate? When is the blame game not only the wrong response, but damaging for long-term decision making?

In a complex and adapting world, errors and failure are not avoidable. The challenges decision-makers and organizations face are sometimes predictable but sometimes brand new. Adapting to surprises requires more flexibility, fewer unbreakable rules, more improvisation and deductive tinkering, and a lot more information about what’s going right and going wrong. But getting there is not easy because this challenges some very closely held assumptions about how the world works and our desire to control things.

Why model?

By Steven Lade

What do you think about mathematical modelling of ‘wicked’ or complex problems? Formal modelling, such as mathematical modelling or computational modelling, is sometimes seen as reductionist, prescriptive and misleading. Whether it actually is depends on why and how modelling is used.

Here I explore four main reasons for modelling, drawing on the work of Brugnach et al. (2008):

  • Prediction
  • Understanding
  • Exploration
  • Communication.

Fourteen knowledge translation competencies and how to improve yours

By Genevieve Creighton and Gayle Scarrow

Knowledge translation encompasses all of the activities that aim to close the gap between research and implementation.

What knowledge, skills and attitudes (i.e., competencies) are required to do knowledge translation? What do researchers need to know? How about those who are using evidence in their practice?

As the knowledge translation team at the Michael Smith Foundation for Health Research, we conducted a scoping review of the skills, knowledge and attitudes required for effective knowledge translation (Mallidou et al., 2018). We also gathered tools and resources to support knowledge translation learning.