In response to last month’s newsletter, ‘Questions science communication practitioners want answered by people researching science communication’, we’ve had generous contributions from four expert science communication scholars.

Science communication scholars respond:

We’d love to know what you think of these science communication issues: email or tweet us about this, or anything you’d like to read more on.

Regards from the @EconnectTeam: Jenni Metcalfe, Sarah Cole, Jane Ilsley, Toss Gascoigne, Claire Heath & Madeleine Stirrat.


Be humble (and realistic): the foundation of good science communication and evaluation

By Dr Eric Jensen, Associate Professor (Senior Lecturer), Department of Sociology, University of Warwick

Given that evaluation is rarely a top priority in science communicators’ employment or contracts, I suggest that researchers like me need to be humble and realistic about how much training in social scientific research methods we can expect science communication practitioners to develop.

Moreover, the precarious employment conditions for science communicators in many countries and contexts, noted last month, cast further doubt on whether practitioners will have the time or resources to develop methodological knowledge and skills that someone like me has spent years learning and refining.

However, this reality check does not preclude high-quality evaluation in science communication practice.

I pointed to a ‘third way’ in science communication evaluation in a recent commentary (Jensen 2015). That essay acknowledges that ‘there are many good reasons why robust evaluation is not routinely conducted and linked to science communication practice’. Indeed, the ‘practical barriers of required expertise, time and resources can make impact evaluation—particularly on an on-going basis—seem like an impossible task’.

A good deal of headway can be made by drawing on automation as a means of gathering and analyzing quantitative and qualitative survey-based evaluation data on a continuous basis:

“While social scientific expertise is required at the stage of designing the evaluation and survey questions, an automated system can run indefinitely providing insights to the organization for years without the need for external consultants or in-house evaluation staff.”

Of course, there are limits to what can be evaluated using this kind of technology-enhanced approach (see pp. 6–7). But we can now go further than ever before in making high-quality impact evaluation realistic, affordable and available to science communication practitioners around the world.
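To make this idea concrete, here is a minimal sketch in Python of the kind of continuously re-runnable survey summary such an automated system might generate. Everything here is hypothetical: the file name, column names and rating scale are illustrative assumptions, not details of Jensen’s actual system.

```python
# Minimal sketch of an automated survey summary (hypothetical data layout).
# Assumes a CSV export "responses.csv" with columns:
#   respondent_id, stage ("pre" or "post"), interest_rating (1-5)
import csv
from collections import defaultdict
from statistics import mean

def load_ratings(path):
    """Group interest ratings by survey stage (pre/post)."""
    ratings = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["stage"]].append(int(row["interest_rating"]))
    return ratings

def report(ratings):
    """Print a simple pre/post comparison; an automated system could
    regenerate this every time new responses arrive."""
    pre, post = mean(ratings["pre"]), mean(ratings["post"])
    print(f"Mean interest rating: pre={pre:.2f}, post={post:.2f}")
    print(f"Change: {post - pre:+.2f}")

if __name__ == "__main__":
    report(load_ratings("responses.csv"))
```

The design choice this illustrates is the one in the quotation above: social scientific expertise is invested once, in designing the survey and the summary, after which the reporting can run indefinitely without an analyst in the loop.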

Since this article was published, I have been implementing this kind of approach in a wide variety of public engagement settings, ranging from science festivals and museums to university-led public engagement with research, and art museums such as the National Gallery in London.

The approach has also been rolled out on a large scale across dozens of institutions and numerous countries through the ZooWise project. This is a collaborative long-term impact evaluation and audience research project that offers organizations that engage audiences with nature, wildlife and conservation off-the-shelf tools they can use to gain real-time audience evaluation insights without needing to become social scientists.

A response to the discussion: practitioners working with scholars

By Professor Alan Irwin, Department of Organization, Copenhagen Business School

There is a story told about a research centre with which I was once connected. The research centre focused on issues of culture and technology. It was widely considered to be doing rather innovative empirical and theoretical work. Many international visitors came to the centre and it scored highly on all the usual indicators.

But the centre felt it needed an advisory board: specifically, an advisory board of practitioners. Senior industry and governmental figures agreed to join and everyone (including the research council) was impressed.

The day of the first meeting of the advisory board drew close. PowerPoints were prepared (this was so long ago that, for some members of the group, it was actually the first time they had done this). The researchers tried hard to summarize the practical relevance of their work and not to ‘overcomplicate’ their presentations. A dress rehearsal took place. For the first time ever, colleagues asked each other what they planned to wear.

The big day arrived. Beginning with the very brilliant head of the research centre, each of the team tried to explain the practical relevance of their work. There were many bullet points and lessons learnt – and even some diagrams. However, something was very obviously wrong. The advisory board was, well, bored. The researchers started to hesitate and the visitors looked at their watches (there were no smart phones in those far-off days).

Eventually, the head of the research centre stopped the presentation and asked whether this was what ‘the practitioners’ wanted. After a pause, an industry member of the advisory board asked: ‘Have you people ever heard of something called post-modernism?’ There was laughter all round, the PowerPoints were halted, and the participants finally began to talk together about research. For, as one of the guests pointed out later, why would we come to such an innovative research centre just to hear about the usual consultancy stuff?

This is just one story. And I don’t know if every science communication practitioner wants to hear about the latest developments in social theory (post-modernism was hot back then). But the story should alert us to at least three things:

  1. Many practitioners have substantial research insight – and many of us cross the practitioner/scholar line on almost a daily basis. So let’s be careful about imposing too solid a distinction here.
  2. I’m not sure that practitioners always know in advance what they ‘want’ – and here they have a lot in common with researchers. I would rather see this as a process of co-production with researchers sometimes following and sometimes leading the discussion.
  3. We should use the same tools and modes of understanding we have so painstakingly developed within the field of science communication to address the questions at hand. We should start with what Sarah Davies and Maja Horst have termed ‘culture, identity and citizenship’, rather than with pre-conceived knowledge gaps and mutual misunderstandings.

Is it possible to reconcile science dialogue and science PR?

By Dr Marina Joubert, CREST (Centre for Research on Evaluation, Science and Technology), Stellenbosch University

In the previous newsletter, Jenni Metcalfe raised a topical question for worldwide science communication circles: “How do we reconcile the conflicting roles that a science communicator sometimes faces between being a science advocate versus providing an objective, and sometimes critical, review of science?”

Those critical of the rising tide of science PR argue that institutional science communicators’ primary role is to present their institutions in a favourable light, maximising positive, reputation-building publicity rather than facilitating public dialogue about science.

Institutions may go so far as to pressure research staff to communicate as much as possible about the positive aspects of their research while restricting them from speaking openly about controversial work. This emphasis on institutional promotion therefore comes at the cost of critical, reflexive public science engagement and transparent public dialogue.

Another concern is that PR-focused science communication is perceived as biased, lacks long-term credibility, and so contributes to eroding public trust in scientific institutions and even in science itself. From this perspective, expecting institutional PR from scientists is a dangerously slippery slope for science.

However, PR practitioners (or institutional science communicators) argue that this approach need not taint science communication or detract from its credibility. While acknowledging organisational needs, they point to communicators’ vital role in building bridges between science and society. By making science visible, relevant and accessible, they serve scientists, the public and the mass media.

Furthermore, professional science communicators care about their own reputations and about communicating science responsibly and ethically. They know that overselling or hyping research findings is counterproductive, costing them the faith and interest of journalists and other audiences.

Scientists certainly have strategic objectives in mind when they participate in public communication, but many are also driven by a desire to inspire, by a perceived moral obligation to give back to the taxpayers who fund their research, or by interest in the public’s views, expectations and concerns.

So, while there are valid reasons to be cautious about science advocacy (or science PR), I believe it is important to acknowledge its role and value, and to make sure it is done responsibly.

Institutional science PR is certainly here to stay, and its practices and consequences must be studied to inform new ways of encouraging and supporting organisations and people to engage with society honestly and responsibly. This includes acknowledging the limits of science, as well as potential risks and uncertainties in new findings.


From public understanding to scientist engagement

By Dr Fabien Medvecky, President, Science Communicators’ Association of New Zealand; Centre for Science Communication, University of Otago

“Oh, that’s just communication.” We’ve heard it before. I heard it just two weeks ago in an expert working group, during a discussion of how we ought to frame a message on environmental behaviour to get public traction.

But let’s not get our backs up; there are two important lessons here:

  1. For many researchers and scientists, communication and engagement are viewed as an insignificant ‘side dish’.
  2. No matter how good our communication tools are, and no matter how well we know our audiences, we will fall short as long as disengagement and communicative challenges run both ways.

So, while we want to foster greater public understanding of and engagement with the sciences (PES), I believe another opportunity for science communication scholars is to study how we could also foster greater scientists’ engagement with publics (SEP).

We could (should?) flip the actors in our familiar questions. For instance, from the previous Econnect newsletter:

  • What does the [public] audience already know?

… becomes: What does the [scientific] audience already know [about relevant social/public aspects]?

  • What is going to motivate them to change (e.g. to a healthier lifestyle), and what will cause them to switch off?

… becomes: What is going to motivate them to change (e.g. want to become socially engaged; adhere to social values; etc), and what will cause them to switch off?

  • What is the role of professional science communicators in helping people to access scientific information for individual and policy decision-making?

… becomes: What is the role of professional science communicators in helping scientists access social information for research and scientific decision-making?

  • How can we use social and online media to engage more people in critical review and discussion of new and proposed technologies without polarising debates?

… becomes: How can we use social and online media to engage more scientists in critical discussions of new and emerging social issues without polarising debates?

If we could mobilise our scholarly resources to achieve not only better PES but also better SEP, then maybe we could get better communicative traction with scientists and researchers.

And if we could turn every “Oh, that’s just communication” into a “Yes, this is indeed a communication challenge”, we’d have half the battle won.


Evading ‘ethereal objectives’ in science communication

By Dr Eric Jensen, Associate Professor (Senior Lecturer), Department of Sociology, University of Warwick

Good evaluation plans should be, above all, realistic.

As an external advisor on science communication evaluations, my first task is often to highlight the gap between the ‘lofty’ (to borrow Sarah’s term) goals of a programme or activity and what it can realistically achieve given the nature of the intervention.

For example, a one-hour visit to an exhibition is unlikely to spark major changes in someone’s long-term behavior patterns, but it could help them view a topic in a different way or prompt follow-up activities to investigate it further (in this case, the impact would be an increase in curiosity about the topic of the exhibition).

The process of ensuring that intended outcomes are spelled out in concrete, measurable terms is very important. Some hints on this process (with a sketch applying them after the list):

  • Define concepts by what they ‘do’.
  • Ask how you would know that a particular kind of change has happened.
  • Think about what you would observe if the outcome was being realized in practice.
  • Explore what people would be saying or doing if this program/activity/event was working as intended.
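
As a purely illustrative sketch (the outcome, indicators and field names below are hypothetical, not drawn from any particular project), one way to apply these hints is to record each intended outcome together with the observable evidence that would show it being realized; if a field cannot be filled in, the outcome is probably still too vague to evaluate.

```python
# Hypothetical outcome specification: each hint above maps to a field.
OUTCOME_SPEC = {
    "outcome": "participants can recognize different types of coral",
    "what_it_does": "participants name coral types unprompted",
    "how_we_would_know": "post-visit drawings label specific coral types",
    "observable_evidence": [
        "drawings include labelled coral (e.g. 'brain coral')",
        "participants use coral-type names in exit interviews",
    ],
}

# An outcome spec with empty fields signals a still-too-vague outcome.
for field, value in OUTCOME_SPEC.items():
    print(f"{field}: {value}")
```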

Intended outcomes such as ‘inspire creativity’, ‘behavior change’ and ‘increase understanding’ are hopelessly vague. They are too vague to effectively design a science communication initiative to achieve them, and they are certainly too vague to design good evaluation methods to assess them as audience outcomes.

Here is an example of some suitably specific learning-oriented science communication outcomes from a project I helped to evaluate:

  • participants are able to explain what coral is.
  • participants are able to recognize different types of coral.

I evaluated the impact of this science communication intervention, focusing on these two outcomes among others, by asking participants to draw a ‘coral reef and all the plants and animals that live there’. They were also asked to put names and labels on everything, to aid the analysis.

Below are examples from this evaluation that indicate positive impact on the two intended learning outcomes identified above: drawings of coral extracted from the larger pre/post drawings of a coral reef by each participant.
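To illustrate how such labelled drawings might be turned into evidence of change (a sketch only: the coded records and the simple ‘coral’ keyword rule below are hypothetical, not the project’s actual coding scheme), the drawings can be hand-coded into structured records and tallied pre versus post:

```python
# Hypothetical hand-coded records extracted from labelled drawings.
CODED_DRAWINGS = [
    {"participant": 1, "stage": "pre", "labels": ["fish", "rock"]},
    {"participant": 1, "stage": "post", "labels": ["fish", "brain coral"]},
    {"participant": 2, "stage": "pre", "labels": ["shark"]},
    {"participant": 2, "stage": "post", "labels": ["staghorn coral", "fish"]},
]

def coral_rate(drawings, stage):
    """Proportion of drawings at a stage whose labels mention coral."""
    at_stage = [d for d in drawings if d["stage"] == stage]
    with_coral = [d for d in at_stage
                  if any("coral" in label for label in d["labels"])]
    return len(with_coral) / len(at_stage)

print(f"Pre:  {coral_rate(CODED_DRAWINGS, 'pre'):.0%} drew labelled coral")
print(f"Post: {coral_rate(CODED_DRAWINGS, 'post'):.0%} drew labelled coral")
```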

Maintaining a realistic notion of possible outcomes for audiences then allows the evaluation to focus where the important action should be. That is: is there evidence of change in the attitudes, interests, behaviors, etc. that could be expected given the type of intervention being evaluated?

Starting the evaluation with this focus helps ensure that any findings are specific enough to inform changes to the science communication activity, improving outcomes in future iterations of the initiative.