Last month I was asked to speak to a group of Queensland science communicators about evaluation. Evaluation is an age-old dilemma that many science communicators face: how do we prove what we’re doing is effective without spending loads of cash?

In researching my talk, I rediscovered two articles I co-authored 15 years ago with colleagues. One was on evaluating national science communication programs, and the other on evaluating the communication work of an organisation.

There are still lots of good principles in those articles, and this month’s newsletter gives some further tips on evaluation.

As always, we’d love your feedback via Facebook, Twitter or email.

Regards from the @EconnectTeam

Jenni Metcalfe, Sarah Cole (newsletter editor), Jane Ilsley, Melina Gillespie and guest contributor Toss Gascoigne

 

 

The 5 Ps of evaluation

By Jenni Metcalfe

My experience evaluating science communication activities has led me to develop five principles, which all start with the letter ‘P’.

  1. Consider the Purpose of your evaluation. Do you want to get feedback so you can do things better? Do you want to prove that you have achieved the objectives you’d set out to do? Or perhaps you want some data to show to funding bodies?
  2. Be Proactive in your evaluation. Build in evaluation and allocate resources to evaluation right from the start of your program or activity.
  3. Make your evaluation Practical so that anyone can do it without a lot of resources or a high level of skill.
  4. Involve the People participating in the project or activity in the evaluation so they all take responsibility for it happening.
  5. Build evaluation into each stage of your project or activity so it is Perpetual, or ongoing – from concept to design, implementation and reporting, and then on to re-development of other similar activities or programs.


Image: Pixabay

 

 

Demonstrating the benefits of science communication

By Toss Gascoigne

Demonstrating the benefits of science communication activities to bosses or funding bodies is a challenge.

It’s hard to prove the benefits in the logical way: record the current state of affairs to establish a baseline – on, say, attitudes towards careers in science. Then run the program or activity to try to change things. Afterwards, do a new survey to see if anything has changed: a sort of ‘before and after’.

There are obvious problems:

  • these changes in attitude are pretty subtle and difficult to measure
  • there are other events that also affect the situation, like a very personable scientist appearing on TV, and it’s near-impossible to separate what you’re doing from these other events.

So we revert to things that are easy to record, such as media coverage and bums on seats. But these don’t measure what you’re trying to do, like change attitudes. So they don’t convince anyone.

Nor do surveys. The media is full of reports of ‘independent’ surveys, trumpeting that 80% of people want more science taught in school or more action on climate change. But who commissioned the survey? Often it’s an interested party.

So that’s the bad news…

But some approaches can work.

One is telling stories: Nancy, the girl who was going to be a hairdresser but, after going to an engineering event, decided that looked more interesting. Tell that story and add something like, “This event was attended by 600 other girls, and if one in 10 had the same response as Nancy did, it means 60 new female engineers for Australia.”

A second course is to go to the person you need to persuade – Minister, boss or bureaucrat – and ask them what will convince them. People are surprisingly willing to say. Bear in mind they may need to persuade others in turn, so they really want you to put a convincing case.

And here’s a third course, used by a university trying to persuade school students to do engineering. They tested students’ attitudes before a week-long program, again immediately after it finished – and then again six months later. A fair bit of work, but it gives the evaluation more credibility.
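To make that ‘before, after, and later’ comparison concrete, here is a minimal sketch in Python. The attitude ratings and the five-point scale are made up for illustration (they are not from the university study mentioned above); the idea is simply to average the ratings collected at each of the three time points and report the change against the ‘before’ baseline.

```python
# Minimal sketch: comparing attitude ratings (1 = very negative, 5 = very positive)
# gathered before a program, immediately after it, and at a six-month follow-up.
# The scores below are invented illustrative data.

from statistics import mean

scores = {
    "before": [2, 3, 2, 4, 3, 2, 3],
    "immediately after": [4, 4, 3, 5, 4, 3, 4],
    "six months later": [3, 4, 3, 4, 4, 3, 3],
}

baseline = mean(scores["before"])  # the 'before' survey sets the baseline

for stage, ratings in scores.items():
    avg = mean(ratings)
    print(f"{stage:>18}: mean = {avg:.2f} (change from baseline: {avg - baseline:+.2f})")
```

Even a simple summary like this makes the six-month follow-up easy to compare with the baseline, which is what gives the evaluation its extra credibility.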

 

 

Evaluating as you go

By Jenni Metcalfe

I think some of the most useful evaluation is carried out throughout a project or activity. This checks that you’re getting the messages right for the target audience, that you are using appropriate language, and that what you’re doing is helping achieve your objectives.

The feedback from such evaluation can mean you avoid wasting money on things that are not going to work. It can mean you amend your communication plan to better address an emerging issue, and it can mean that when you do that activity again, you do it even better.

To get this feedback, you might ask participants what they thought of a workshop at the end of it, or even halfway through. You might interview participants to see if a program is delivering to their needs. Or you might pre-test a draft publication or video before it is printed, produced or loaded onto a website.

For example, here are some questions I might send to a sample of the target group about a fact sheet I’m producing. Their responses to these questions will help guide me when I finalise the fact sheet.

  1. Was the fact sheet relevant to you? Why or why not?
  2. What was the take-home message?
  3. What did you like most about the fact sheet?
  4. Was there anything that was confusing or unclear?
  5. Do you have any suggestions for improving the fact sheet?

 


Image: The Blue Diamond Gallery

 

 

Using evaluation to refine communication objectives

By Sarah Cole

We’re developing communication strategies for (and with) several organisations right now.

Part of our process is to find out what people in the organisation think about communication efforts: who they should be reaching out to, why, what they should be saying, and so on.

Almost without exception, I find that people’s answers about their communication objectives and ideas for evaluation reveal crucial feedback.

Usually, I find the objectives that people are aiming for are too general – and it stands out even more when you look at how the organisation would have to evaluate its communication plan.

Most times, tweaking the objectives to something SMARTer – more Specific, Measurable, Achievable, Relevant and Time-bound – makes them much more achievable.

Here are four indicators, shown by the evaluation you’d need to do, that you might want to adjust your communication objectives (though these might be perfectly fine for some organisations or activities):

  1. You have to measure something very general or vague – “what ‘the general public’ thinks about x issue”
  2. What you’re measuring is influenced a lot by things other than your activities – “kids wanting to study science at university”
  3. The changes you’re aiming to achieve will show up a long time after your activities are done – “kids wanting to study science at university”
  4. You’re counting ‘bums on seats’ at events you are a part of – “50 people came to the event where we had a stall – we reached 50 people!”
 

 

Do it well, or not at all

In a June workshop about evaluation, Warwick University’s Professor Eric Jensen challenged science communicators to move beyond ‘quick-and-dirty, surface-level evaluation’. The article covering the workshop also includes other resources and further work from Professor Jensen.

 


Quote of the month

“Not everything that can be counted counts, and not everything that counts can be counted.”

— William Bruce Cameron