Qualitative or Subjective Evaluation
In my previous post, I outlined some quantitative or data-based criteria for evaluating interpretive products. Many of us, though, are just getting started in gathering statistics and evaluations. If you’re weak in the data department, or your boss/client would like their team to weigh in with their judgment on the performance of their products, this article is for you… with a few caveats.
These criteria are subjective.
When you’re using opinion-based evaluation criteria, the opinions are really only as good as the expertise of those offering them. Not all opinions are equal, and you need to be honest with yourself about that. So if you’re going through a qualitative evaluation exercise, gather the people who know the most about the product, the audiences, and what the overall program is trying to accomplish.
Recognize that people have biases and agendas.
When I am brought in to start an evaluation process, I sometimes get hit with a wall of resistance; people can push back fairly vehemently in defence of their own programs. It doesn’t need to be this way, of course. But some see a qualitative evaluation as an opportunity to argue that all of their programs meet all of their audiences’ needs all of the time, and that literally nothing needs improvement except, of course, the budget allocated to those programs. I wish I were exaggerating here, but it can take quite a bit of time, energy, and support from the Big Boss to break down some of these defences and convince people that evaluation will leave them with a better program.
That said, here are a few criteria that I have used with my clients:
How well does a given product highlight the site’s essence of place and bring it to life? Express this from one to five: one being not at all, five being very strongly linked to essence of place. Are you running bingo nights in a bird sanctuary? It might be a good thing for revenue… but wouldn’t it be great if it were also thematically connected to who you are?
How well does this product set you apart from your competition? Is it something that can’t be found anywhere else in the region or the country? Rate it one to five, one being fairly easily found elsewhere, five being unique in your entire region.
How likely is this product to stick in the hearts and minds of its target audience? Is it something they are likely to do and forget, or will this product create lasting memories? Rate from one to five.
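If you want to roll the three criteria above into a single number per product, a simple average works. Here is a minimal sketch, not from the article itself; the function name and the example ratings are my own illustrative assumptions.

```python
def average_rating(scores):
    """Average a list of 1-5 ratings, validating the range first."""
    if not scores or any(s < 1 or s > 5 for s in scores):
        raise ValueError("each rating must be between 1 and 5")
    return sum(scores) / len(scores)

# Hypothetical product: essence of place = 4, uniqueness = 3, memorability = 5
product_score = average_rating([4, 3, 5])
print(product_score)  # → 4.0
```

Whether you average or keep the three scores separate is a judgment call; separate scores preserve more detail when comparing products.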
Positive and Negative Cues
This is a really fruitful way to assess a product. Looking at the interpretive product through its target market’s eyes, tag any positive or negative cues throughout the experience cycle. For example: a smiling staff member greets them by name (positive cue); product materials are looking worn (negative cue). Add a +1 for each positive cue and a -1 for each negative cue to come up with a Cue Score.
Note that this one takes some discipline to do right: you’re assessing from the market’s point of view, not the staff’s. This activity often devolves into a critique of the processes behind the scenes (“We keep cutting the budget for this”), which is not how it works. What does it look like on the front end? What does the visitor see? (“Our washrooms are chronically filthy.”)
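The Cue Score tally described above is just a running sum of +1s and -1s. A minimal sketch, with invented example cues (the first two are adapted from the article; the labels are otherwise my own):

```python
# Each cue is tagged from the visitor's point of view:
# +1 for a positive cue, -1 for a negative cue.
cues = [
    ("smiling staff member greets visitor by name", +1),
    ("product materials are looking worn", -1),
    ("washrooms are chronically filthy", -1),
]

cue_score = sum(value for _, value in cues)
print(cue_score)  # → -1
```

Keeping the labelled list alongside the final score is useful: the number tells you where a product stands, while the list tells you exactly what to fix.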
This product might not be performing well today, but given a little creativity and time, what could it do for you? How well could this product help you realize your plan’s goals, with some love and attention?
Staff absolutely love this next exercise, and it can produce a lot of really creative solutions for your products. The team simply brainstorms for a minute or two on the theme of “What if?”: “What if we let a partner run this one? What if we did it in the morning rather than the evening? What if we added a new storytelling component to it?” and so on. Don’t dwell on each suggestion or pursue it in detail; just log it and move on.
If your team were in charge of this product, what would be their single, top recommendation for it? (“Retire this product for now” is one that often comes out, after they’ve been through the other evaluation processes.)
Marrying qualitative with quantitative
When you’ve gathered your team’s qualitative assessment, you might want to corroborate what they’ve come up with against your quantitative data. For example, when staff flag that your trails are in terrible shape, dive into the visitor feedback. Does the feedback support their point of view? I often find correlations, which helps me make strong recommendations to management.