The Dying Art of Interpretive Evaluation

Anyone who has worked in interpretation for a while will tell you that evaluations are expensive—they take time and cash and a lot of expertise. Half the battle in any evaluation is just asking the right questions, and it’s amazing how hard that really is.

But there are a couple of kinds of evaluation that are dead easy, cheap, and low-stress. And I’m amazed at how few of us are actually doing them. I’m thinking specifically about studying our interpretive media—signs, interactives and exhibits—to discover how our visitors interact with them.

Wood River Wetland. Would you stop? For how long? Photo: Greg Shine

Two Measures You Should Know and Use

Suppose you’ve installed an interpretive sign and you’re wondering how it’s performing. Or, you’re working on a new interpretive sign and you want to try a few things out with prototypes. How can you get some meaningful data, easy and cheap? Start with these two measures.

Percent of diligent visitors

This is a simple observational measure. Take your sign (exhibit, interactive, what have you) and draw an imaginary semicircle around it—say, a few metres in radius (maybe more, depending on the setting; it’s up to you, as long as you’re consistent). What you’re after is the semicircle within which a visitor might reasonably notice the product.

Now just sit back and watch. You’re not going to intercept the visitors—just observe them. Keep tallies of two numbers: the number of visitors who pass within the chosen radius, and the number of those who stop to attend to the sign. That’s it. Divide the number who stopped by the number who passed, multiply by 100, and you have your percent of diligent visitors. And once you’ve done that, you could move on to…
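If it helps to see the arithmetic spelled out, here’s a minimal sketch in Python. The tallies are made up purely for illustration—substitute your own field counts.

```python
# Percent of diligent visitors, from two simple tallies.
# These counts are hypothetical—swap in your own observations.
passed_within_radius = 120  # visitors who passed within the chosen radius
stopped_at_sign = 18        # of those, visitors who stopped to attend to the sign

percent_diligent = stopped_at_sign / passed_within_radius * 100
print(f"Percent of diligent visitors: {percent_diligent:.1f}%")  # 15.0%
```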

Dwell time

‘Dwell time’ is our industry’s jargon for the amount of time a visitor invests in a given visitor experience product. So once you’ve got a visitor who’s willing to stop and look at the sign, all you have to do is start a stopwatch. How many seconds are they spending at the sign? (If they’re part of a group, observe one member of the group, chosen arbitrarily.) Seconds spent observing = dwell time. (And you’ll be amazed how short dwell times actually are in most informal education settings.)

There are variations on the dwell time measure: you can record behaviours in columns like “Pointed at sign”, “Conversed with a group member at sign”, “Called a group member over” or whatever. But those really only work in slow-turnover settings—if you’ve got a busy area, you’ll have enough to do just recording dwell time.
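Once you have a column of readings, summarizing them takes only a few lines. A sketch, with hypothetical timings in seconds—the median is worth reporting alongside the mean, since one lingering visitor can drag the mean upward:

```python
# Summarizing dwell times; the readings below are hypothetical (seconds).
from statistics import mean, median

dwell_times = [4, 7, 12, 3, 45, 9, 6, 15, 5, 8]  # one reading per observed visitor

print(f"n = {len(dwell_times)}")
print(f"Mean dwell time:   {mean(dwell_times):.1f} s")    # 11.4 s
print(f"Median dwell time: {median(dwell_times):.1f} s")  # 7.5 s—less swayed by the 45 s outlier
```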

So what do you do with this data?

You use it to build benchmarks for each of your signs. And you start to compare your signs with each other, and like any scientist, you start to ask more questions: why does one sign have consistently higher rates of diligent visitors than another? Why does one kind of sign seem to encourage longer dwell times than another?

From there, you can start to experiment.

Checking out the interpretive signs. Photo: Washington State

A/B Testing of Interpretive Media

“A/B testing” is a trendy term that means, “when marketing people discover the scientific method and think they invented it.” Essentially, it’s testing two variations of a single product, ensuring that all other variables remain equal. (See? We used to call that science, back in a more innocent time. But whatever.)

So when you’ve got an interpretive product in development—that graphic panel, say—you might want to test the same sign in two locations, recording dwell time and percent of diligent visitors at each. But you can have even more fun than that. Before you go ahead and spend $1500 printing that sign on high-pressure laminate, what if you played around a bit with the text and the layout? Do shorter texts encourage higher engagement? What about signs with a single large image, tested against the same sign in the same place with a montage of smaller images? Choose one variable, print the sign, and test it out.

For prototyping purposes, you don’t need to spend a ton on the print job. Print it on foam core, gator board, Sintra or something else that’s cheap and easy. It doesn’t have to be weather-proof if you’re just hauling it out for a couple of hours on a nice day. Likewise for exhibit elements—I have prototyped exhibits cut out of cardboard, rather than built of millwork and stone.

So choose your variables, write a little evaluation plan, and try it out. Simple.
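And when the tallies are in for both variants, you’ll want some way to judge whether the difference looks real or could just be luck of the draw. Here’s one simple approach—a two-proportion z-test on percent of diligent visitors. This goes a step beyond what you strictly need; all the counts below are hypothetical.

```python
# Comparing percent of diligent visitors for two sign variants (A/B test).
# Counts are hypothetical; a two-proportion z-test is one simple sanity check
# on whether the observed difference could plausibly be chance.
from math import sqrt, erf

passed_a, stopped_a = 150, 21  # variant A: passers-by, stoppers
passed_b, stopped_b = 140, 35  # variant B: passers-by, stoppers

p_a, p_b = stopped_a / passed_a, stopped_b / passed_b
pooled = (stopped_a + stopped_b) / (passed_a + passed_b)
se = sqrt(pooled * (1 - pooled) * (1 / passed_a + 1 / passed_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed

print(f"Variant A: {p_a:.1%} diligent; Variant B: {p_b:.1%} diligent")
print(f"z = {z:.2f}, p = {p_value:.3f}")  # small p suggests the gap isn't just noise
```

If statistics aren’t your thing, even just comparing the two percentages over a decent sample will tell you a lot; the test is there for the day someone asks, “but is it significant?”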

And then, share your results with your colleagues. Write up a blog post for Interpretation Canada’s new blog, say, and let us know how it worked out. (Or submit it to my blog—I’ll run it.) There are so few people doing evaluation anymore—because they believe it to be impossibly expensive and difficult—and I think we need to prove it can be done.

These two measures are particularly suitable for junior staff or interns. I’m not saying these people should be spending days and days sitting and watching visitors. But they should be getting their feet wet in evaluation, as a way of understanding its importance. One of the organizations I used to work for had their education volunteers do evaluations as a final assignment before being qualified as educators. The feedback they recorded went on to shape the overall education program.

One last thing about observational evaluations: should you inform your visitors that you’re peeking in on their behaviour? Legally, it may not be a requirement, though you should check with your policy people. You’re not gathering personal information and you’re not placing anyone in any kind of compromised position. Ethically, it’s not a bad idea, as long as you’re not changing their behaviour. If they feel self-conscious, your visitors will act differently; those who love interpretive media might purposefully double their dwell times as a form of lobbying, for example. But a generic notice at the entrance saying something about the evaluation, with a phone number or email people can use if they have questions, might not be a bad idea.

Oh, speaking of ethics: you probably shouldn’t be evaluating your own work. If you’re the author or designer, best hand that evaluation to someone else. The temptation to skew your data through confirmation bias is just too strong. (I used to supervise a writer who was very fond of their own work. Every time I offered some feedback they didn’t like, they would say, “Well I’ll just prototype it. AND THEN I’LL SHOW YOU.” They could actually speak in all caps.)

So what are you waiting for? Get offa that thing, and start evaluating. I look forward to hearing about your results.

Hey, if this blog post tickles your brain, why not sign up for monthly emails? I promise I won’t spam you. 
