Defining standards for personal interpretive programs
This is part two. You should probably start with Part One.
A few years ago, I was approached by one of the premier interpretive facilities in western Canada. The director of visitor experience had just hired a new manager of interpretation who was young and brilliant. With the change in management, she wanted to set a baseline and document improvement in the interpretive programs over two summers.
This documentation was to take place through two parallel systems of evaluation: a two-year standard anonymous audience feedback survey, and a two-year appraisal by an expert who would rate the programs against current best practices in interpretation. I was brought on to be the interpretation expert, while another individual took care of the public surveys (we would often bump into each other on busy days at the site).
Suddenly I needed to sit down and articulate all the things I believed to be true about quality in personal programming. I needed to try to quantify, or perhaps qualify, each of the traits that define interpretive excellence. I knew some would be simple; others would take some real thought.
We have always resisted standardization in our field, and the arguments against it tend to coalesce around two principles: one, that standardization stifles creativity and narrows what should be an art form without limits. This is a valid argument, though I maintain that standards in most fields—the hotel industry, the food services industry, the beauty industry—exist to raise the bar at the lower end of the spectrum while leaving the professionals at the top end—the true artists and iconoclasts—free to innovate. Interpretation shouldn’t be any different: we can bring up the bottom while leaving the top fearlessly and exhilaratingly open.
The second argument against standards is that they are simply impossible to articulate in our field: excellent, transformative, unforgettable interpretive programs—and excellent, transformative, unforgettable interpreters—are literally ineffable. “A good interpreter simply has that sparkle,” I remember a mentor telling me years ago, “and you can’t really pin it down.”
Nonsense, I believe. A good interpreter does sparkle; he or she effervesces exquisitely and consistently and in ways that any good observer can actually describe and document without too much trouble.
So now it was time to put my money where my mouth was, and I began by tapping into my network of friends and colleagues in interpretation across the country to see what work had been done before me.
I discovered that there are quite a few interpretive organizations that have been quietly building evaluation rubrics for their personal programs over the last couple of decades, and dutifully keeping them to themselves. Perhaps they’d assumed that their standards were too particular to be useful to anyone else. But most were more than amenable to sharing them with me, and as I gathered rubrics from provincial parks, zoos, aquariums and historic sites from here and there across the country, I certainly found more commonalities than differences. All touched on certain norms that have evolved in our field since the 1970s and 1980s, when agencies really began to invest in interpretation programs; many of these go back to Freeman Tilden’s principles of interpretation from 1957.
Ultimately I came up with a set of 37 different criteria for evaluation: 37 boxes to tick over the course of any given program or show, using online survey software and an iPad. It was too much, in retrospect, though not as overwhelming as it probably sounds. Many criteria were simple and required only a moment’s consideration: “Program began at the advertised time.” Check. “Interpreter was available to mingle with the audience before and after the presentation.” Check. But looking back, I should have simplified my questionnaire considerably. Regardless, my 37 criteria broke down into several basic categories.
Starting With a Bang
All personal programs involve a welcome and introduction; these vary according to the genre, the subject and the target audience. With the introduction comes the idea of a “hook”: the program opens with something to surprise and delight the audience, catch their attention and propel the program forward. The interpretive hook is still far from standard, I should mention.
- Program began at the advertised time
- Interpreter introduced self and other staff with enthusiasm
- Program opened with a hook or a wow: some kind of moment of emotional impact
Another standard that is actually far from standard around the world exists thanks to professionals like Sam Ham and others: the idea of thematic interpretation. Programs are more effective when they are built around a single, clear, compelling “big idea”, supported by five (plus or minus two) associated sub-themes. If I could identify one simple area in which we could make a global impact, it would be in promulgating the idea of single, relevant, compelling, clear and carefully chosen messages to provide a cognitive and affective framework for personal programs.
There is much work to do here: thematic interpretation is still virtually non-existent around the world, and lest we be tempted to feel smug about our own standards, I am discovering a tendency even at home to write broad, vague, scattershot scripts and then contrive the thinnest umbrella theme, stretch it overtop of that script, and call it thematic interpretation. It doesn’t work.
Themes need to be strong; they actually need to have a point. “Jameson Historic Site is a crossroads of history,” for example. Sure it’s a crossroads of history; all historic sites are crossroads of history. That’s why they’re historic sites. Fail. “The lives of polar bears and humans are interconnected.” Yup, that’s true, but it’s true of virtually any mammal in the world now. How are they interconnected, and why should we care? “Lizards are cool!” Of course they’re cool. Everything’s cool; you’re an interpreter. You should be able to make pocket lint cool. Why is it cool? Take a stand. Have a point of view, and pare away anything that doesn’t contribute to it.
- Interpreter stated a single, clear, compelling message for program; it was stated within 3-5 minutes of the program’s opening
- Program stayed on topic and all content contributed directly to main message
- Sub-themes flowed logically with smooth transitions between them
Interactivity at strategic points in the personal interpretive program, as appropriate to audience and theme, is another fertile area for evaluation.
- Interpreter invited questions from audience at appropriate times; repeated question for others to hear; answered satisfactorily or arranged to find answer
- Program was structured to involve multiple learning styles (there was something to see, touch, hear, do)
- There was audience participation; it was appropriate, fun, not forced; audience reacted as desired
Among the most difficult things to evaluate in concrete terms is the arc or momentum of storyline: the idea that a program, like any piece of creative writing, is a journey or a progression of events and ideas, punctuated by moments of strong emotional impact, each building on the last towards some form of climax or at least a logical and satisfying conclusion. This is what separates an OK interpretive program from a masterpiece. When watching an unsatisfying interpretive program and trying to articulate the problem, I have often found that “This isn’t going anywhere” is almost as common a problem as “This has no point.”
- Body of program had moments of strong impact
- Script had a compelling narrative and momentum
Personal programming relies, of course, on the skill and talent of the interpreter, and her ability to bring a subject to life with passion.
- Interpreter used analogies and descriptive language to relate the subject to visitors’ lives.
- Interpreter used vocabulary suitable for target audience, without jargon or acronyms
- Interpreter spoke with ease and confidence, without using um, uh, or crutch words
- Interpreter appeared passionate and personally committed to subject
The Most Important Thing
I’d like to dwell for a moment on that last one. I really believe that if an interpreter can accomplish nothing else but come across as passionate and personally committed to the subject, his or her program will probably be a success. The ability of the interpreter to transmit personal attachment and enthusiasm is at the heart of why we use personal interpretation in the first place. Personal interpretation is expensive and labor-intensive; there has to be a reason why we’re using it instead of a sign or an app. This is the reason. And this criterion is actually fairly uniformly applied across the developing world; the interpreters are really doing a good job of selling their stories with passion. It gives me hope that other qualities will be equally well applied with further evangelizing and training around the world.
There was one area of evaluation that took a little time for me to analyze and describe: unity of verbal and non-verbal presentation. Even after thirty years of watching programs, I found myself at moments stymied as to why a presentation was simply not working. I had a small breakthrough when I discovered it was often a disconnect between the verbal and the visual. That sounds esoteric; it’s actually quite concrete.
In interpretive programs where the interpreter narrates action (a firefighting demonstration, a draught horse demo, a marine mammal show), the verbal presentation must track the visual presentation very closely. The script must be constantly and intimately married to the demonstration itself. This sounds obvious but it’s surprising how often the interpreter chooses to digress from the action to add a point of information that is not illustrated. It’s amazing how quickly that ruins the moment. As soon as the verbal and non-verbal diverge, the audience is placed in a cognitive tug of war and forced to attend to one or the other; they can’t do both. People being generally more visual than auditory creatures, they end up tuning out the interpreter completely.
As interpreters we are sometimes reluctant to accept that when we are sharing the stage with firefighters, sheepdogs, elephants or acrobats, the show isn’t about us.
- Visual cues and verbal cues always supported one another
A strong ending to a program is as important as a strong beginning: What do we want the audience to do with what they’ve just experienced and learned and felt? The call to action is becoming more standard; unfortunately it tends to sound pat and preachy, and asks things of the audience that they’ll never deliver. “If you want to help sea otters, you can stop driving a car for the rest of your life.” No. Exhorting your audiences simply to learn more, spend time in nature, or share nature with their children is a perfectly valid and inspiring call to action.
- Program had a call to action that was compelling, relevant, feasible
- Interpreter re-stated the program’s main theme before concluding.
- Interpreter acknowledged and thanked program volunteers.
- Interpreter invited personal or small-group interaction after the presentation.
- Post-program interaction included enthusiastic, positive feedback from audience
Underlying all of these criteria are more basic and banal qualities that sit a fair bit lower on Mr. Maslow’s hierarchy of needs. Can the audience see and hear? Are they comfortable? All of these “Interpreter 101” norms require regular checkups; it’s remarkable how often they fall by the wayside. But these simple Maslovian rubrics are easily evaluated by a secret shopper or, better yet, a new volunteer as part of their interpretive training. I don’t see much reason for a supervisor to use her years of expertise to check off whether the show started on time. Of course, somebody has to do it. But the idea of expert evaluation is to be watching out for the complex stuff that takes training and experience to spot.
- Mic was at appropriate level
- All visuals were legible, professional, relevant and in good repair
- Interpreter and exhibits were easily visible.
- Interpreter gave measurements in both metric and imperial, or used analogous measurements like “as long as a school bus”
- Interpreter managed audience’s behaviour and remained in control of the event
- Interpreter used props safely and appropriately
- Interpreter mentioned upcoming programs
You’ll see the word “appropriate” crop up in these rubrics. The evaluator’s job is to understand the needs and interests of the site’s top audience segments. A hook at a zoo will look and feel different from a hook at Vimy Ridge, for example, or at a war internment site. “As appropriate to target audience and theme” is understood throughout all these rubrics.
NEXT: Lessons learned.