Starting Out on the Right Foot

Evaluation is a bit of a dying art.

I have asserted that before here on my blog, and I don’t think the picture is getting much rosier.

We are working for agencies that are heavily constrained by policy; we are constrained by leaders who are not particularly fussed about knowing how effective their programs are (nor terribly enthused about evidence-based decision making in general, it seems).

We are working within increasingly constrained budgets, and anybody who has ever worked in the informal education sector will tell you that proper evaluation is VERY. EXPENSIVE.

And, well, yes it can be. But I want to talk to you about just how freaking simple it can be to run a basic front-end evaluation for your interpretive program or centre.

What is front-end evaluation?

Front-end evaluation is what we do when we want to gauge the knowledge or attitudes our visitors are bringing with them through the front door. It helps us understand where we need to begin our story. And, I believe, it’s crucial to all interpretive planning.

One of the cardinal sins of interpretation is assuming a high level of knowledge or interest on the part of the visitor. It sets us up for failure; it sets us up for a dysfunctional relationship with our audiences; it sets us up for really bad reviews, too.

But we do it all the time. We assume, for example, that visitors to the country’s many fur trade sites know about the Hudson’s Bay Company, Rupert’s Land, the North West Company, all that history.

Missing the mark

So we start talking to them in a voice that assumes a great deal. We start throwing jargon around like they know it; we start diving into the finer points of the players in the story without offering them the most basic back-story about who these people were or why they mattered.

I recently polled front-line staff at a historic site about how much they estimated their visitors knew about the site’s history before walking in, based solely on their own interactions with those visitors. I asked them to rate that knowledge from one to ten… and they gave it less than one. Less than one out of ten. And yet the writing in the exhibitions assumed people had a decent grasp of late 19th-century Canadian history and politics.

This disconnect is not rare.

Front-end evaluation has always been considered a pillar of visitor experience planning. Everybody in our business is supposed to do it. Everybody is supposed to know what front-end, formative, and summative evaluations are. (See the links below the article if you don’t.)

But I don’t think I’ve ever, in a 40-year career, actually worked in a place that had done a front-end survey recently. These are heritage sites and museums that have taken on multi-million-dollar exhibition projects. No front-end data at all. We just launch ourselves into our writing and designing, making our best guess about where to start the story and hoping we hit the mark.

But here’s the good news.

Front-end evaluations are cheap and easy. You don’t need advanced training to collect the data; you don’t need to pay someone to write a survey for you; you don’t need to collect personal information from your subjects.

You just need to be able to ask a single question.

Here’s a format for the simplest form of front-end evaluation I can recommend to you.

Let’s say your subject matter is tigers. You stand out in the parking lot, intercept the nice people, introduce yourself, and then you say, “What do you know about tigers?”

That’s it. Honestly. That’s all you have to do.

Take your site’s main subject matter and just ask what they know about it. “What do you know about Louis Riel?” “What do you know about climate change?” “What do you know about the history of Vimy Ridge?”

And you listen, take notes, or make a recording.

That’s it. That’s the data collection. Then, when they’re done talking, you might ask a couple of demographic questions so you can correlate their knowledge with their life stage, group composition, or point of origin. (“People visiting in summer from XX City know more than people visiting in shoulder season from XX Country.”) But even that is icing on the cake. You can keep it to just the one question if you want.

The Analysis

Now, it’s what you do with the data after you collect it that makes these evaluations really useful. You do a basic qualitative text analysis on it: you code it out. That is to say, you go through the visitors’ responses and you tag each recurring idea with a code. “Knowledge of Riel’s birth.” “Knowledge of Métis origins.” “Misconception about carbon dioxide.” “Knowledge of WWI European context.” (Depending on your coding method, you might just summarize what they tell you: “Riel was Métis,” “Riel was executed,” and so on, and start to add up how many people spontaneously told you that they knew these things.)

That part takes a bit of knowledge and time. Coding interviews is laborious stuff, and requires some skill and standardization—though it’s the kind of skill that anyone who has done Master’s level work in the humanities will likely have.

And so you gather the codes, rank them from most-mentioned to least-mentioned, and discover, say, that a whole lot of people actually have no idea why Vimy Ridge is commemorated. Or what impact Louis Riel had on Canadian society, or where carbon dioxide comes from in the environment. Alternatively, you might discover that people actually have more knowledge than you thought, and maybe you can offer more advanced exhibits and programs than you were planning to.
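If you keep your codes in a spreadsheet or a plain text file, the tallying and ranking step takes only a few lines of code. Here is a minimal Python sketch of that step; the code labels and responses are invented for illustration, and the real work (assigning the codes in the first place) still happens by hand:

```python
from collections import Counter

# Each inner list is one visitor's response, already coded by a
# human reader. These codes and responses are made up as examples.
coded_responses = [
    ["riel_was_metis", "riel_was_executed"],
    ["riel_was_executed"],
    ["riel_was_metis", "red_river_resistance"],
    ["no_prior_knowledge"],
    ["riel_was_executed", "manitoba_founding"],
]

# Tally every code across all interviews.
tally = Counter(code for response in coded_responses for code in response)
total = len(coded_responses)

# Rank codes from most- to least-mentioned, with the share of
# visitors who raised each idea spontaneously.
for code, count in tally.most_common():
    print(f"{code}: {count}/{total} visitors ({count / total:.0%})")
```

The ranked list that falls out of this is exactly the “most-mentioned to least-mentioned” ordering described above, and it tells you at a glance which ideas your visitors already hold and which ones you need to build from scratch.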

But front-end evaluation is not difficult; it is not rocket science to gather this information. Nor is it particularly expensive, when weighed against the cost of launching a major exhibition (not to mention the cost of missing the mark with your visitors and having them leave bored, frustrated, or annoyed). It requires a bit of skill to do the analysis, yes. But it can inform your programming, your social media, and your exhibitions for years to come.

Front-end evaluation was once a pillar of interpretive planning. It was a pillar of heritage site evaluation.

Why on earth is nobody doing it anymore?

OK. You can do this.

