Not-So-Live-Blogging Event Evaluation Workshop – Australian Science Communicators 2010

Please note – this is from July 29th, 2010 – but since a friend was chatting about assessing outcomes in terms of science outreach, I thought I might pop this older post over here rather than have to dig it out again! It’s also very relevant in the light of this recent Conversation post about the influence of the Curiosity Landing on engaging with science.

Today I headed into Curtin University, after getting rather lost in the maze of building works happening around the science and technology sector of the campus (getting lost – not so bad. Discovering that a former student of mine is attending the same workshop and gleefully shouts the teacher-nickname I was known by four years ago? A little more humiliating…)

The workshop is introduced by the acting branch president for Sci Comm in WA, Sarah Lau – the ASC are a national body with about 400 members around Australia, and many of us at the workshop today are members. They're keen to find out what kinds of events are useful and how they help with professional development and skills. You can find their site here – ASC!

They hope people have brought some ideas! Yikes! Otherwise, they'll be giving us some templates to work on. This is a relief – I thought of talking about a future potential SkeptiCamp, but I have nothing particular in mind.

I’m sitting next to a SciTech projects manager (SciTech is the hands-on science centre in Perth, and they do outreach), an education officer from the Perth Zoo and a scientist working on promoting an understanding of radio astronomy in Australia. They have lots of projects, so that helps!

[Funniest overheard: '...Then the kids shouted "YOU'RE NOT A REAL NUMBAT!!"' :p ]

Presenters today: Prof Leonie Rennie – whom I interviewed for a podcast (she remembered me! Said a friend of hers considers me to be the best interviewer on the show, wahhhh!); Associate Prof Nancy Longnecker from UWA and Dr Jesse Shore, the current ASC national president. He’s originally from the US and is now an Australian – he has his own sci comm consultancy, Prismatic Sciences.

He pointed out that the term ‘science communicator’ is about communicating science to various audiences and about making sure it gets out to people in effective ways. As President of the Australian Science Communicators, he mentioned the Inspiring Australia: A National Strategy for Engaging with the Sciences report, the first of its kind on the growth of science communication in Australia. ‘It’s rather exciting to see it!’ He says the ASC are mentioned twice, in one paragraph on one page, but their submission is very much reflected in the overall report and they are working on raising their profile. There are 15 recommendations in the report and they hope to be involved further, depending on funding. He talks about two of the recommendations – one is about using media more effectively in Australian society; the other is about building an evidence base for science engagement in Australia and evaluation in its broader sense (he’s on a standing committee for that). They aim to address what practitioners are doing and what universities are doing. Around ten of the recommendations need evaluation to see if they’re effective: ‘We recognise evaluation can come across as an arid accounting exercise – everything else seems creative – but it’s not! It’s really the Julius Sumner Miller question – why is it so? It underpins your scientific research and finding the evidence to justify your opinion. It gives you a foundation of evidence; without it, all the rest is opinion and you can be wasting your resources.’

So, this is about putting evaluation front and centre, and how important it is that it underlies the research in other aspects of science communication.

First speaker is Prof Nancy Longnecker – she recognises that there’s a spectrum of career experiences in the room, and there’ll be a lot of hands-on work and discussion in this morning session: what your measurable objectives are, what kind of evaluation you need, survey design and, in a later session, data analysis.

Define what your communication objectives are – do they align with your organisation’s? What are the imperatives? Who are your key stakeholders? It’s not always obvious what is going to win you advocates. Set your priorities. Set your messages – these are different from objectives, but they help you achieve them. This is an iterative process and requires flexibility; see what opportunities you have with the resources to hand.

  • What are you communicating?
  • What are you hoping to achieve?
  • What do you want people to do/ know?
  • How will you know if you’ve made any difference? Not that there’s anything wrong with ‘have a good time’, but how will you know this?

There’s a lot of politics behind the scenes and you have to tune into this in order to be successful! You can get derailed quite easily! She gives an example of how there can be a lot of agreement, but you have to align the internal as well as the external stakeholders – there are many factors that influence the struggle for resources. Is your specific communication objective compatible with your organisation’s strategic goal(s)?

Internal expectations and hierarchies – this is very important and often overlooked. You have to have your own team onside and able to support it!

We then get into discussion groups – what does our event aim to do?

  • How does your objective relate to an important organisational objective?
  • Describe a specific, realistic, measurable objective. Remember that we’re in a timeline of activities.
  • Consider how you will know if you have achieved this objective.

I end up chatting to three very nice scientists, one of whom describes an event planned for National Science Week, involving interaction with radio and optical telescopes in shopping precincts! The aim is improving understanding and awareness of the SKA project. Some suggestions include having external observers watch the interaction the presenters have with the audience – a meta-analysis of the interaction. What is the average time of engagement? Maybe even use a clicker to see how long people interact. Perhaps have give-aways after a certain time period?
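Since timing engagement came up, here’s a minimal sketch of how that kind of observation data might be summarised afterwards – the visitors, times and log format are all hypothetical, just to show the arithmetic:

```python
from datetime import datetime

# Hypothetical interaction log for one shopping-centre telescope display:
# (visitor, start, end) times noted by an external observer with a clicker
# or stopwatch. The data here are illustrative, not from the actual event.
interactions = [
    ("visitor 1", "14:02:10", "14:05:40"),
    ("visitor 2", "14:03:00", "14:03:45"),
    ("visitor 3", "14:10:20", "14:16:05"),
]

FMT = "%H:%M:%S"

def duration_seconds(start, end):
    """Length of a single interaction, in seconds."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()

durations = [duration_seconds(s, e) for _, s, e in interactions]
average = sum(durations) / len(durations)
print(f"{len(durations)} interactions, average engagement {average:.0f} seconds")
```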

Specific, measurable objectives within a certain time-frame (even longitudinal measures can be considered?). Increasing awareness is a gradual process and it can be difficult to measure impact – if it’s in a general community area, you’re relying on stealth and passing traffic – but the project has funding, so evaluation and impact have to be shown. Cumulative impact is important and we’ll look at how to measure that.

There’s some discussion about raising awareness and bringing a different target group into SciTech, the science venue in Perth – what is realistic in terms of getting a new audience? Specific objectives should include whether it’s just at that venue or elsewhere, in terms of where SciTech does outreach.

Someone asks about events where you’re targeting scientists themselves. She brings up the example of media skills workshops – how confident do scientists feel afterwards? Is it an attitudinal thing? A director of a science experience program at a university talks about helping kids to learn and be more enthusiastic about science as a future career, and how they do a survey afterwards. It’s pointed out that you need a baseline (a bit tough if you have a self-selected group!) to find out how they felt before – a pre-and-post comparison. She talks about an agriculture camp, where getting funding, support and enrolments meant having to articulate: is it a problem if those who do not go into science still come away with a pro-science attitude? It’s not just about pulling students into studying science, but about building better awareness and understanding of the costs, et al.

  • % of people who fulfill an objective (objective carefully defined, target % set, time period – see the sketch after this list)
  • increase understanding of X
  • increase awareness of related issues
  • exposure to a brand – therefore increase participation in using that brand
  • change attitude about values related to X
  • reach a specific audience
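For the first of these – a carefully defined objective, a target percentage and a time period – it can help to see the bookkeeping spelled out. A minimal sketch, with a made-up objective and made-up responses:

```python
# Hypothetical post-event results: did each respondent meet the defined
# objective (say, 'can name one thing the SKA will be used for')?
met_objective = [True, True, False, True, False, True, True, False, True, True]

target_percent = 60  # target set before the event, for a defined time period
achieved_percent = 100 * sum(met_objective) / len(met_objective)

print(f"{achieved_percent:.0f}% met the objective (target {target_percent}%): "
      f"{'target met' if achieved_percent >= target_percent else 'target not met'}")
```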

So – what type of evaluation would be most appropriate for an event? You need to think about why you’re doing it in the first place, why it’s appropriate, and what approaches and methodologies to use.

Some of the reasons include accountability and reporting; opportunity for improvement next time; opportunity to learn and share experiences about what’s effective.

What do you need to know about your audience, how well is your resource working and what does it communicate to others? What is your audience learning and/or understanding? Some of these might include changing misconceptions to understanding.

So – identify your objectives! Measure them against something – not just the ‘big picture’!! How on earth can you measure your ‘increase in knowing science’?

Be objective, minimise bias! Just getting back a ‘yes, that was fun!’ isn’t good enough.

Decide on your methodology/ies – quantitative or qualitative? Will it just be a pile of surveys that collect dust, or will you analyse them and do something with the results?

As an example, Shore says that he ‘overdid’ one report, partly as a learning experience for himself and so that it could be handed on to ministers – not all of the info was useful, but he put in photos, attendances and changes from year to year, so people could see how reliable it was, increasing his own credibility by making sure it was good. So, there was an additional benefit in creating a report on an event.

Types of evaluation:

  • Front-end – what do you know about your audience? This is about planning your event better ahead of time
  • Formative – as you go along (trialing a questionnaire)
  • Remedial – once you’ve put a resource out there, checking how it’s working so that problems can be fixed
  • Summative – we’ve done the event, created the resource, what kind of resource was it?

There are notes on this that she gives to us, but that’s basically it.

On the assumption that most people think about summative evaluation – it tells you about the impact of a resource after it’s completed and used, with the results generally used as feedback.

We return to the astronomy-in-shopping-centres example that the scientist next to me is currently involved in – you can get an estimate of numbers, but how can qualitative comments really have an impact on the people reading the report? Really getting at what people’s perceptions of the event actually are! ‘We need to do more listening and recording.’ He talks about Astrofest and its 4000 people – and all of his evaluation is mostly comments from the people who delivered the content… but more was needed from the people who attended. Also, bringing the science community together – if it brings together other groups, then it’s valuable too. Can you demonstrate increased collaboration across industry groups, organisations and stakeholders?

We have a general discussion about the balance and interaction of quantitative and qualitative methods and the need for a blend; there’s a variety of ways to check if things are worth the time and resources, with specific measurables both pre and post. The scientist from the Perth Zoo talks about getting support for causes and making the contributors ‘want to do it – not just tokenism’, which we all agree is very important. Another scientist talks about the difficulty of using phone surveys to check on how people comply with requirements in fishing, after they get information from a boat show.

[About now, I make the terribly embarrassing discovery that Dr Shore has read my blog... hello! Argh....]

Over morning tea, I talk to some of the others about mind-mapping (which was promoted as a useful way of organising ideas, not only for clients but for the scientists) and the ‘profession of science communicator’. Do they think that you need qualifications to be one? It’s compared to the profession of ‘writer’ – while one can be a writer with qualifications (such as a BA), it isn’t strictly necessary. Due to the relative newness of the Science Communication degree (my former student tells of how it was only in the UK that anyone outside Australia had heard of it as a field of study!), there are many great science communicators without that qualification – I mention Robyn Williams, et al. Will this necessarily change? Probably not, but it’s certainly something (as evidenced by all the networking that is going on) that seems to help in terms of connecting with fellow professionals and finding common links and outcomes.

After morning tea – surveys and how to determine the impact of an event, with Prof Leonie Rennie.

Why do we want to know what impact we had with an event? Because of a sponsor? But is it doing us much good? What about next time? Are the ‘warm fuzzies’ really enough? We have to have more than just ‘these numbers went through.’

What do we want to measure? The effect we had? Will behaviors change, what will they do differently? Did they learn? How can it be done better and what worked well and what didn’t?

How do we measure? It has to be easy to administer and easy to answer – FAST, focused, clear and unambiguous. It should be easy to analyse, and can there be a longitudinal impact? Have a think about the people you’re trying to reach and whether they’re actually being targeted. Check the phrasing and wording of questions – why you enjoy something and whether or not you enjoy it are two different questions. You have to know what is meant by the responses you get, and make sure it doesn’t take long. Matching up the pre-and-post test – make sure you can be sure it’s the same person responding, especially if it’s anonymous! Phone follow-ups are difficult as they are time-consuming – getting numbers, pinning people down to a specific time… an average of three phone calls just to get one person on the line!
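One common way to manage that matching (my own example, not something specified in the workshop) is to have each respondent write the same self-generated code – say, initials plus day of birth – on both forms, so pre and post surveys can be linked without collecting names. A minimal sketch with made-up codes and scores:

```python
# Hypothetical pre- and post-test scores, keyed by each respondent's
# self-generated anonymous code (never their name).
pre_scores  = {"KS14": 2, "JB03": 4, "MW27": 3}
post_scores = {"KS14": 4, "JB03": 4, "XX01": 5}

# Only respondents who appear in both sets can show individual change.
matched = {code: (pre_scores[code], post_scores[code])
           for code in pre_scores if code in post_scores}
unmatched = (set(pre_scores) | set(post_scores)) - set(matched)

print("Matched pairs:", matched)      # usable for pre/post comparison
print("Unmatched codes:", unmatched)  # report separately or set aside
```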

What with? True/false or mixed responses are very easy to code and enter, yet very polarising and don’t give much detail. Social desirability bias or a response set can also push people towards ‘yes’ (hence anonymous answers being useful!). It also has to be clear what the words mean – what does ‘interesting’ mean to different people?

The Likert scale is also popular. The ‘undecided’ middle category can be problematic – a bimodal response distribution can occur when in fact you want to add responses up, so there are some methodological problems with these. A Likert instrument will have many different items, and some can be negatively worded rather than having the response order swapped around. If you get a generally positive response, you end up with a polarised pattern – people mainly use ‘agree’ and ‘disagree’ – creating a ceiling effect.

If you use a rating scale, you can get less of a ceiling effect. Also, an audience member asks – if the pre-test is all positive, how do you know if there’s a difference in the post-test? Semantic differential – as a scale, it needs more than one item, and you code it by assigning a number, showing (in this case) an attitudinal response to an event. Also include an ‘undecided’ or ‘don’t understand’ category. Open-ended questions – ‘three words to describe your response’ – most will say ‘interesting’ and then there might be a massive list of words that are unique. Check how you administer it, get it reviewed for face validity and be sure it’ll find out what you want to know – clarity, timing, format, attractiveness.
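To make the reverse-coding of negatively worded Likert items concrete, here’s a minimal sketch with hypothetical item wording and responses:

```python
# Items worded negatively must be flipped so that a higher number always
# means a more positive attitude. All items and responses are made up.
NEGATIVELY_WORDED = {"The display was confusing", "I would not come back"}

def scored(item, response):
    """Reverse-code a 1-5 response for negatively worded items."""
    return 6 - response if item in NEGATIVELY_WORDED else response

respondent = {
    "I learned something new": 5,
    "The display was confusing": 2,   # scores as 4 after reverse-coding
    "I would not come back": 1,       # scores as 5
}

total = sum(scored(item, r) for item, r in respondent.items())
print(f"Attitude score: {total} out of a possible {5 * len(respondent)}")
```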

[She also tells a hilarious story of putting numbered stickers on people to help keep track of them in a museum, using the museum staff to make sure that they could give out post-tests that would match the pre-tests, as there were alternative exit points for the venue: 'Number 34 is heading for the stairs! Going to exit number five! Get them!!']

For the next session, we have a look at some model examples that have been used, to stimulate ideas or act as a template when creating a survey. Longnecker points out: ‘The advantage of having some similarity is that if there are some questions that can be used in different circumstances, then there’s a larger pool that can be drawn upon in the final results – comparing with other people is very useful!’ Shore mentions the issue of ‘circle vs tick or cross’ – whether people see one option as exclusionary.

In groups, we talk about font size, and about what happens if at an event you discover that you’re talking to tourists rather than your target audience of locals! Also, the importance of an open-ended ‘what do you want?’ question.

Then it’s further talk – and we end up covering a variety of issues. One fascinating impact of all the promotion of the SKA – the younger generation are sick of hearing about it after all this time! The saturation now turns them off, so the team is instead focusing on different aspects that will result from the science involved. There’s discussion about a study done at a market, where they discovered that their participants ended up mostly being tourists rather than their original target audience of locals!

  • Pre- and post-surveys are affected by different moods and by people remembering what they wrote last time – ‘humans are variable creatures!’ You need to control for unrelated change. Test for consistency by rating the same group on the same thing and seeing how they change! Someone used the same question twice by accident and discovered variability among their respondents. Respondents getting the before and after on the same page is not a problem, as it allows for self-reflection. (There’s a small sketch of a basic pre/post comparison after this list.)
  • Time interval – what about the time between pre- and post-test? Often impact diminishes over time, or you get different kinds of responses – often more attitudinal ones. You can’t assume that the immediate impact is the same as what people will feel down the track.
  • Self-reporting and leading questions – will people always say that there’s an improvement? Asking why they responded the way they did gets some interesting results. Minimise bias – make clear there won’t be repercussions for honesty on the survey.
  • Physical structure of a survey – the font, the text formatting; whether it’s a captive audience in a classroom vs actually at the venue (e.g. at SciTech). ‘Shorter is almost invariably better – I always try to fit things on no more than one page… there are no rules, but there are good practices that are more likely to get you the information you want. You have to figure out how long it’ll take people.’ Prepare, administer, collect and analyse in a way that gives you the greatest confidence that you got the best outcome in terms of finding out what you need to know.
  • Validity of responses – watch for outliers, suspicious patterns (such as alternating higher and lower responses) and missing responses. It’s a case-by-case judgment: use what can be used and throw out what cannot be used at all.
  • It’s potentially confusing if you use too many forms of survey tools – getting the right kind of questions and getting the answers efficiently is key.
  • If you want the data for other things – what’s the deal with ethics approval? A pain, but a necessity, and more than a necessity – the requirements differ from state to state. With research in science centres, there’s a sign-up where you tell people what the data will be used for and what participation will involve. With children, there has to be a carer.
  • Use the best tool for the job when looking at higher-level analysis of many variables. It’s not easy – it takes much revision and discussion. If you’ve got the same kinds of questions, you can see the bigger picture, but that needs discussion and talking to each other. The example surveys we’ve been given could also use feedback, for example!
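Tying a few of those points together, here’s a minimal sketch of the simplest pre/post comparison – mean change across matched respondents – using made-up scores; anything fancier (controlling for unrelated change, significance testing) would build on the same paired structure:

```python
import statistics

# Hypothetical matched pre- and post-test scores for the same respondents,
# in the same order (linked via anonymous codes as sketched earlier).
pre  = [2, 3, 3, 4, 2, 3, 4, 2]
post = [4, 3, 4, 5, 3, 4, 4, 3]

changes = [after - before for before, after in zip(pre, post)]

print(f"n = {len(changes)}")
print(f"Mean change: {statistics.mean(changes):+.2f} "
      f"(SD {statistics.stdev(changes):.2f})")
print(f"Respondents who improved: {sum(c > 0 for c in changes)} of {len(changes)}")
```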

As Shore concludes – maybe a number of tools and disciplines can join together, as a way of starting to get peer review of the science communication field. The conclusion includes how great it is that there’s a bunch of people who share a similar sense of wanting to improve!

There is a podcast of a similar kind of lecture by the South Australian science communicators.

