Hello fellow science communicators,

Communicating risk, especially in COVID-ridden times like these, is not straightforward. How we think about risk is influenced by many things: our personal experiences, our trust in institutions, our needs, and more. But we can head off many problems by understanding people, how they perceive risk, and what can go wrong.

In this newsletter, we examine what you can do to communicate risk and uncertainty better.

Regards from the @EconnectTeam: Jenni Metcalfe, Toss Gascoigne, Claire Heath, Michelle Riedlinger, Sarah Cole & Madeleine Stirrat.


Learning to live in limbo

BY JENNI METCALFE

For all of us, COVID-19 has made it more obvious how uncertain our lives are.

For many, it’s no longer an option to nick down to the shops to pick up some forgotten item. It’s difficult to plan Christmas with certainty. And forget about that spontaneous trip to Fiji.

Researchers call this living with liminality.

COVID-19 has also put the liminality of science in the public spotlight as new research emerges to adjust what we previously thought. This has led to increasing mistrust in science, and a disturbing tendency by some to dismiss scientific advice as too risky to follow.

Yet, science has always been about tolerating a level of uncertainty. Such uncertainty is intrinsic to how scientific knowledge progresses. We’re never 100% certain about our best health, medicine or environmental knowledge.

Part of the issue is that we as science communicators have failed to explain the nature of risk and scientific uncertainty. Too many people look to science for black and white answers. When these are not forthcoming, some people reject the science and look instead to the comfort of conspiracy theories.

So, how can we communicate scientific uncertainty so that people understand it, accept it, and can make decisions based on the best knowledge available?

  1. Understand the people we are communicating with: what context and what concerns shape their perceptions and needs? This will influence how they receive messages. For example, many Australians are suffering pandemic fatigue and are starting to see more risk in recurring lockdowns than in COVID itself.
  2. Recognise people’s concerns as being valid, even if they don’t appear logical. It is perfectly OK for people to be concerned about the health of their kids or their future fertility.
  3. Explain (and keep explaining)—with examples—the process of creating progressively better scientific knowledge alongside uncertainty and academic debate. People who understand more about the process of science will likely feel more comfortable with uncertainty.
  4. Convey clear messages that recognise people’s concerns, correct misperceptions, articulate what the best available science says, and acknowledge uncertainties.
  5. Promote credible knowledge and experts, and explain how to judge what is credible amid the overload of information that confronts us.

Liminality has been described as “a metaphorical waiting room between one life stage and another”.

Assessing the risks of social media misinformation to your project

BY MICHELLE RIEDLINGER

I was extremely pleased to see life and behavioural scientists recently calling for emerging communication technologies and the misinformation they can freely spread to be treated as a “crisis discipline”.

An important move for collective action against social media misinformation is to assess how severe the risks are for your own work. Use these questions to help consider the risks:

1. How could social media misinformation affect my communication?

Your answer to this question will depend on your research discipline or area, how politicised your work is, and who stands to gain or lose from the science being communicated.

For example, social media posts endorsing ‘miracle cures’ for COVID-19 (such as hydroxychloroquine and ivermectin), amplified by politicians and fringe media, continue to circulate long after scientific research has disproven their effectiveness and governments have banned their use.

2. How much control do I have over misinformation?

This will depend on the kind of misinformation, the size of the community, your relationships with its members, and where the misinformation originates and how it circulates.

Recently, Australian federal Member of Parliament Craig Kelly sent a text message to thousands of Australians containing decontextualised screenshots (pages 3–6) of an Australian Therapeutic Goods Administration (TGA) report that lists all adverse events related to medicines (including vaccines) and medical devices used in Australia. Kelly provided a link to download the full report, which was incorrectly labelled an “adverse event and death report”.

Addressing this misinformation is challenging: political communication in Australia is exempt from the nation’s spam and privacy acts. The TGA responded by issuing a legal warning.

3. What could I lose because of social media misinformation?

Losses can be tangible or reputational. Could misinformation compromise community health or environmental wellbeing? Could it end important programs for positive community change? Could you lose support from your organisation or funders? What about community trust?

Use social listening to find out who engages with social media misinformation, and think about how you might respond. Coordinated inauthentic online behaviour amplifies the threat because of its distributed, embedded nature and its potential to turn fringe beliefs into mainstream ones.

For example, as quickly as Facebook bans COVID-19 misinformation videos or links to URLs from established misinformation networks, new videos with different URLs spring up on alternative platforms.

4. Could my work benefit from social media misinformation?

It might be hard to imagine benefits from social media misinformation. We’re still grappling with how transformational technologies—algorithms and one-click advertising—have changed the way we make decisions that affect our own health and community wellbeing.

But we have opportunities to:

  • better understand how and why misinformation is shared
  • rearticulate our values and how science communication can contribute to effective decision-making
  • allow research users, patient groups and industry partners to step up in support of decision-making based on evidence.

Tackling misinformation also encourages us to take more collaborative and transdisciplinary approaches to science communication. We can all contribute to that.


You are probably bad at estimating risk to yourself

BY TOSS GASCOIGNE

Humans are not actually very good at assessing risk, especially if it applies to them.

In an article last year in the New York Times, Marie Helweg-Larsen, a professor of psychology at Dickinson College, described some factors at play in human psychology that can skew our perception of risk:

  • Optimistic bias: a well-established finding in social psychology that people judge risks to themselves as lower than the same risks to other people.
  • False sense of control: the more people feel they are in control, the less worried they are. That’s why many people feel safer driving a car than flying, although the risk of death when driving might be 1,000 times higher.
  • Confirmation bias: people look for, and act on, information that fits their worldview.

If you would like to test how good you are at assessing risk in the real world, try this Guardian interactive, which invites you to estimate the risk of 12 possible events, from shark bites to getting a rare blood clot. Then it tells you how you rate.

Give it a go—after all, what have you got to lose?


Some graphs can impair understanding of public health risks

BY SARAH COLE

Heard of a logarithmic scale? Before this ongoing public health crisis, many people hadn’t, and fewer still knew how to read graphs that use one.

A likely familiar example is ABC News Australia’s chart of cumulative COVID-19 deaths in various countries since 2020, plotted on a log scale.

On a log scale, the y-axis goes up in powers of 10 (1, 10, 100, 1000) instead of linearly (10, 20, 30).

This has some real advantages: COVID-19 spreads exponentially rather than linearly, and a log scale makes it easier to see ‘flattening’ when the spread slows.

But do non-experts interpret log graphs well? At least two teams recently examined this question.

They found that people seeing log scales:

  • less accurately predicted spread of the virus
  • less accurately understood how the pandemic has developed
  • viewed COVID-19 as less dangerous
  • expressed less support for policy interventions (except in another study, of 2,500 Canadians, where this effect wasn’t found)
  • indicated they would take less personal action to combat the virus.

Given that our behaviour and support for public health interventions are crucial to managing COVID-19 risk, not using log scales could be an important intervention.

The authors recommend:

  • Only use log scales when you need to. Linear should be the default.
  • Always include linear graphs next to log graphs if you can (as in the sketch after this list).
  • Add explanations next to any log graphs.
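
To make the side-by-side recommendation concrete, here is a minimal sketch in Python with matplotlib. The case numbers are invented purely for illustration; only the plotting pattern matters.

```python
# A minimal sketch (assumes Python 3 with matplotlib installed) of the
# "linear next to log" recommendation. The data are hypothetical.
import matplotlib.pyplot as plt

days = list(range(60))
cases = [10 * 1.1 ** d for d in days]  # invented exponential-style growth

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))

# Left panel: the default linear scale.
ax_lin.plot(days, cases)
ax_lin.set_title("Cumulative cases (linear scale)")
ax_lin.set_xlabel("Days")
ax_lin.set_ylabel("Cases")

# Right panel: the same data on a log scale, with a plain-language note.
ax_log.plot(days, cases)
ax_log.set_yscale("log")  # y-axis now rises in powers of 10
ax_log.set_title("Cumulative cases (log scale)")
ax_log.set_xlabel("Days")
ax_log.annotate("Each gridline is 10x the one below it",
                xy=(0.05, 0.92), xycoords="axes fraction")

plt.tight_layout()
plt.show()
```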

Another team of Australian researchers looked at how to increase non-experts’ understanding of population health risks from graphs.

These strategies were found to dramatically increase people’s comprehension, measured via correct test answers (two of them are applied in the sketch after this list):

  • Change your pie chart to a bar graph: 3.6× more correct answers.
  • Make the upward direction of the y-axis always represent an increase: 2.9× more correct answers.
  • Explain any acronyms in a note: 2.5× more correct answers.
  • For graphs shown next to each other, make the range of their y-axes the same: 2× more correct answers.
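
Here is a minimal sketch, again in Python with matplotlib and with made-up numbers, applying two of those strategies: bars instead of a pie chart, and identical y-axis ranges for side-by-side panels.

```python
# A minimal sketch (hypothetical data) applying two strategies from the list:
# bars rather than a pie chart, and matched y-axis ranges across panels.
import matplotlib.pyplot as plt

age_groups = ["18-39", "40-59", "60+"]
region_a = [12, 31, 57]  # invented % of hospitalisations, Region A
region_b = [8, 22, 41]   # invented % of hospitalisations, Region B

# sharey=True forces both panels onto the same y-axis range,
# so bar heights can be compared directly between regions.
fig, (ax_a, ax_b) = plt.subplots(1, 2, figsize=(9, 4), sharey=True)

ax_a.bar(age_groups, region_a)
ax_a.set_title("Region A")
ax_a.set_ylabel("Hospitalisations (%)")

ax_b.bar(age_groups, region_b)
ax_b.set_title("Region B")

ax_a.set_ylim(0, 100)  # a fixed 0-100% range keeps bar heights honest

plt.tight_layout()
plt.show()
```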

A final note about communicating public health information with graphs: 2018 research from the US showed that people ascribe more credibility to data visibly attributed to a source they trust, such as a logo from a trusted university (although their understanding of the data didn’t change with the perceived source).

See more excellent, practical strategies to improve your graphs.