Evaluating Your Products

When it comes to measuring your success, you can take a numbers-based or a more subjective approach. We’re going to look at doing both, starting with the quantitative.

I’ve written a bit about program evaluation over the last while. You might want to get an intro to my way of thinking by looking at a past post here:

For this exercise today, I’m going to assume a few things. First, I’m going to assume that you are a planner or program manager, working in an environment that values evaluation. I realize that’s a big assumption: the art of evaluation is slowly dying, and many of us work in environments where the definition of success shifts with the whims of upper management. I wish this was getting better over time; it’s not.

But let’s suppose that you are working in an evaluation-friendly environment, and you have a list of products (interpretive programs, information kiosks, brochures, what have you) and you have been gathering data about how your visitors are using them. Ok, I realize that is another big assumption: many of us are hurting for data on our products. If that’s the case for you, don’t panic: in my next article I will take a subjective or qualitative approach that you can use without a lot of data. For now, pick a couple of products for which you have participation stats, revenue stats, visitor feedback stats, and the like. (And I will say that we are slowly, collectively, getting better at gathering and organizing data. Remember, it’s never too late to start gathering stats. And trust me, you’ll be glad you did.)

So open up a spreadsheet or a database table (I use Airtable in this example because it KICKS ASS as databases go). And down the first column, list the products you want to evaluate.
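
If it helps to see that first step outside of Airtable, here’s a minimal Python sketch that writes the same structure out as a CSV you could open in any spreadsheet or import into a database. The product names and file name are placeholders; swap in your own.

    import csv

    # Hypothetical product list -- substitute your own products.
    products = [
        "Tuesday Campfire",
        "Wetland Boardwalk Signs",
        "Park Orientation Brochure",
    ]

    # One product per row down the first column; the remaining columns
    # are left open for criteria, weights, and scores.
    with open("product_evaluation.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Product"])
        for name in products:
            writer.writerow([name])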

Setting Up Evaluation Criteria

Here’s the first big challenge: how are you going to define success for your products? What are your key performance indicators? Is a product successful if it has high attendance? Or is it successful if Senior Management feels good about it? Is it a success if it has high revenue? And who gets to decide these things?

I cover this in a bit more detail in my video below, but for now, understand that the person who decides how we value a product probably isn’t you. Not you alone, anyway. That job is up to the big boss: the VP, or the Field Unit Superintendent, or whoever they delegate it to. Ideally, that person is making their decisions from a current management plan, a strategic plan, and a mission statement, all of which are supported by the board members or managers above that person’s head. OK, I realize I’m assuming a hell of a lot today.

But here’s the thing:

There are only so many performance indicators in the world of visitor experience.

This list isn’t entirely comprehensive, but it’s not too far off, in my experience.

  • Attendance: how many people came or chose to use the product?
  • Awards and other recognition: is this product being recognized by your peers?
  • Behaviour change: can you document a positive change in behaviour on the visitor’s part after taking part in your program/exhibit/etc.?
  • Brand recognition: are people more likely to recognize your organization’s name and mission through your product?
  • Change of attitude: are people’s attitudes toward your subject changed through your product?
  • Circulation figures: for printed media, how well is this thing flying off the shelves?
  • Compliance: can you ascribe increased regulation compliance to your product?
  • Demographic change: are you reaching new audiences through your product?
  • Dwell time: how long did they stay/linger in front of your product?
  • Ecological change: can you document improvements in ecological indicators through this product? (This one is really hard to assess.)
  • Feelings of connection: do people feel more connected to your place or resource because of this product?
  • Intention to change behaviour: do people declare an intention to do things differently after experiencing your product?
  • Learning/understanding: did they learn what you wanted them to learn? Did they meet your measurable learning goals?
  • Media coverage: did your product generate a buzz in the media or social media?
  • Percent of diligent visitors: for exhibits and signs, how many of the people who walked by the thing actually attended to it?
  • Revenue: did the product make the money it was supposed to make?
  • Satisfaction: did visitors enjoy it? Did they express their satisfaction through comment cards or a survey?
  • Stakeholder satisfaction: this one is sticky and political, of course, but did it make your stakeholders happy?

For every product you assess, you and the Big Boss are going to have to identify the key performance indicator(s). Big Bosses sometimes hate doing this, by the way, because it pins them down to quantifiable results and impedes their ability to define success on the fly. There’s a certain kind of boss that hates anything that compromises their autocracy. I hope that’s not your boss.

Anyway. I suggest that for each product, you consider not one criterion, but three. For your Tuesday Campfire, say, you can judge it by three criteria: visitor satisfaction, attendance figures, and learning/understanding. In a spreadsheet or database, you can weight each of these as appropriate: the first one is worth 50%, the second 30%, the third 20%, or whatever you and the BB decide.
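
To make the weighting concrete, here’s a tiny Python sketch of that arithmetic, assuming the 50/30/20 split above and a score from 1 to 5 for each criterion (more on how to land on those scores in a moment). The scores are invented for illustration.

    # Hypothetical weights and criterion scores for the Tuesday Campfire example.
    # Each criterion is scored 1-5 against its rubric; the weights sum to 1.0.
    weights = {"satisfaction": 0.50, "attendance": 0.30, "learning": 0.20}
    scores = {"satisfaction": 4, "attendance": 3, "learning": 5}

    weighted_total = sum(weights[c] * scores[c] for c in weights)
    print(round(weighted_total, 2))  # (0.5 * 4) + (0.3 * 3) + (0.2 * 5) = 3.9 out of 5

In Airtable or a spreadsheet you’d do the same thing with a formula field; the point is that the weights you and the BB agreed on, not anyone’s mood on reporting day, decide how much each criterion counts.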

Doing the scoring

To actually score a product, you’ll need to gather the data that support the criteria: your satisfaction data, your learning/understanding surveys, etc. And you’re going to have to rate them according to a set of rubrics. “Rubric” is a funny word with several definitions; for our purposes it’s a guide listing specific criteria for grading or scoring products, to paraphrase Merriam-Webster.

Rubrics should be as specific as you can realistically make them: “School children asked fewer than three questions about wetlands; school children asked three to six questions about wetlands…” etc.

Here are a few examples.

Attendance (this one is easily quantified)

1 – Product was subscribed/attended up to 20% of its capacity
3 – Product was subscribed/attended to 60% of its capacity
5 – Product was subscribed/attended to 100% of its capacity

Changed Behaviour (this one is mushy and difficult to quantify)

1 – Product can be associated with no change in visitor behaviour
3 – Product can be associated with moderate change in visitor behaviour
5 – Product can be associated with substantial change in visitor behaviour

Compliance (this one takes a great deal of high-quality data to be worthwhile)

1 – Product can be associated with little or no increase in compliance / decrease in tickets and warnings
3 – Product can be associated with some or moderate increase in compliance / decrease in tickets and warnings
5 – Product can be associated with substantial or marked increase in compliance / decrease in tickets and warnings

Demographic Change

1 – Product attracted a few representatives of the desired audience segment
3 – Product attracted a moderate number of the desired audience segment
5 – Product attracted a substantial representation from the desired audience segment

Dwell Time for non-personal products

1 – Visitor appeared to gloss over or ignore the product; didn’t dwell
3 – Dwelled long enough to read/absorb some of the content offered
5 – Dwelled long enough to read/absorb the entirety of the content offered

Media Coverage

1 – Product got little or no positive media coverage, or got negative coverage
3 – Product got some positive media coverage
5 – Product got an exceptional amount of positive media coverage

And so on.

Score them out, and total up the scores.
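
Here’s a small sketch of that step in Python, reusing the hypothetical Tuesday Campfire criteria and weights from earlier and adding a second, invented product for comparison. The attendance figures are made up, the thresholds are just one reasonable reading of the 1/3/5 attendance rubric above, and for simplicity both products share the same criteria and weights; in real life each product gets the criteria you and the Big Boss agreed on.

    def attendance_score(attended, capacity):
        """Score attendance 1, 3, or 5 -- one reading of the rubric above."""
        pct = attended / capacity
        if pct >= 1.0:   # filled to capacity
            return 5
        if pct > 0.2:    # somewhere between "up to 20%" and full
            return 3
        return 1         # 20% of capacity or less

    weights = {"satisfaction": 0.50, "attendance": 0.30, "learning": 0.20}

    # Invented data: satisfaction and learning have already been scored 1-5
    # against their own rubrics; attendance is scored from raw numbers here.
    products = {
        "Tuesday Campfire": {
            "satisfaction": 4,
            "attendance": attendance_score(48, 60),  # 80% of capacity -> 3
            "learning": 5,
        },
        "Saturday Guided Walk": {
            "satisfaction": 3,
            "attendance": attendance_score(10, 60),  # 17% of capacity -> 1
            "learning": 2,
        },
    }

    for name, scores in products.items():
        total = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: {total:.1f} / 5")
    # Tuesday Campfire: 3.9 / 5
    # Saturday Guided Walk: 2.2 / 5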

So what happens when a product doesn’t make the grade?

Well, the principle here is that the data doesn’t tell you what to do; social scientists don’t tell you what to do. Your mission and mandate tell you what to do; your strategic plan tells you what to do; the Big Boss decides what to do, because that’s why Big Bosses get Big Bucks. The role of data, of social scientists, and of planners like you and me is to tell them what they’re up against.

So don’t freak out if your favourite product gets a poor score. It may be an opportunity to make it into a great product; it may be an opportunity to get the funding to give that product the attention it needs.

The Video

I walk you through the database process in this video. Let me know in the comments if it makes sense.

Next week we’ll look at qualitative evaluations. Trust me, it’s more fun.
