13 February, 2011 / Ash

Are UX Experts really making things better?

Imagine you’re blindfolded and led to a field. Someone gives you a bow and arrow and says “Try and hit the target.” After fumbling around, shooting arrows for an hour, the person quietly leads you away.

The next day, you’re blindfolded and again taken to the field to shoot arrows for an hour. You’re getting better at fumbling around for the arrows and loading them, but are unsure of everything else.

Years of this pass. Now, whenever you’re led onto the field, you bend down with confidence, grab an arrow, and in one swift motion: draw it back, and release. You really look the part, but do you think you’d be any better at hitting a target?

[Image: Prince William looking the part with a bow and arrow]

Without the right feedback, we can’t know if we’re getting any better or worse at something. Someone who’s been doing the wrong thing for years is just really good at doing the wrong thing.

Of course, we know what we’re doing, right? Others may be shooting blindfolded, but we know OUR work is making things better. Hell, we’re UX Experts.

Experts, or Yogis?

When surveyed, the vast majority of people report having above-average intelligence or being better-than-average drivers. That's what social scientists call illusory superiority, or the better-than-average bias.

[Image: Is he really smarter than the average bear?]

Illusory superiority gives us our optimism. It makes us feel good about ourselves: smarter, luckier, or better performing than we actually are. Unfortunately, research shows that we systematically misjudge our abilities, virtues, importance and future actions.

Without the appropriate feedback, we can't develop the correct skills. And when the illusory superiority bias leaves us sure that what we're doing is making things better, we've arrived at the Dunning-Kruger effect.

The Dunning-Kruger effect describes how the incompetent are mistakenly confident in their abilities. It's what makes the auditions in shows like Australian/British/American Idol so entertaining: talentless hacks swing and wail, truly believing they're the next superstar.

Dunning and Kruger argued that incompetent people suffer two consequences:

  1. Their incompetence leads them to make poor choices; and (most importantly)
  2. Their incompetence prevents them from realising they’re making poor choices.

Any evident failures are put down to other factors – mostly external – so they can remain under the illusion that they’re doing quite well. This reinforces their confidence.

Unfortunately, this is where many people who consider themselves experts sit. Without the benefit of good feedback, they’ve been shooting arrows in the dark for years. They’re confident. They look the part. They handle their tools like a veteran. But are they hitting the target? Are they really making things better?

How to become an expert

Dunning and Kruger posit that the skills for becoming competent are the same skills required to evaluate competence.

A competent person is someone who’s adept at using a feedback loop for continuous improvement. They attempt something, measure the impact, evaluate if it was better or worse, and adjust accordingly next time. They learn from their many mistakes, and build upon their few successes.

An expert is someone who’s practised being competent for a long time.

The importance of metrics

To evaluate competence in User Experience, we have to measure the impact of our decisions. This is the ONLY way of really knowing if we’ve made things better, so I’ll repeat it:

We have to measure the impact of our decisions.

There are many ways to do this, but they all take the same form:

  1. Find a baseline metric.
  2. Set a goal metric.
  3. Measure to see if you achieved the goal.

This can be:

  • As informal as aiming to have 90% of users complete a paper-prototyping task without error (comparing concepts);
  • As formal as using ISO 25062:2006 for summative usability evaluation – the scientific way of setting a baseline and measuring against it, recommended for any product that will have more than one version (comparing versions); or
  • As simple as aiming to decrease negative customer social media mentions by 5% with the next release (comparing customer satisfaction metrics).
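To make that pattern concrete, here's a minimal sketch in Python of the baseline → goal → measure loop, using the informal paper-prototyping example above. The function name and sample results are hypothetical – the point is the shape of the comparison, not the particular numbers:

```python
# A minimal sketch of the baseline -> goal -> measure pattern.
# All names and numbers here are hypothetical illustrations.

def task_success_rate(results):
    """Fraction of task attempts completed without error."""
    return sum(1 for r in results if r) / len(results)

# 1. Find a baseline metric (say, from the last round of testing).
baseline = task_success_rate([True, False, True, True, False,
                              True, True, False, True, True])   # 0.7

# 2. Set a goal metric: e.g. 90% of users complete the task without error.
goal = 0.90

# 3. Measure the new design to see if you achieved the goal.
new_results = [True, True, True, False, True,
               True, True, True, True, True]
measured = task_success_rate(new_results)                        # 0.9

print(f"baseline={baseline:.0%} goal={goal:.0%} measured={measured:.0%}")
print("goal met" if measured >= goal else "iterate and measure again")
```

Whatever the metric – task completion, time on task, support calls – the discipline is the same: record where you started, decide in advance what "better" means, then check.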

Remember, taking only one measure can be misleading. We should always strive to triangulate our data for a more realistic picture. User Experience covers a lot of ground, so it's important to include data from areas such as:

  • Usability (efficiency, effectiveness and satisfaction);
  • Usage (downloads, unique impressions, time, etc);
  • Affect (reactions, opinions); and
  • Customer contact (positive and negative mentions in call centres, the media, and online).
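As a sketch of what triangulation might look like in practice – every metric name, number, and threshold below is invented for illustration – we can check that several independent signals moved in the same direction before declaring a release an improvement:

```python
# Hypothetical sketch: triangulating UX signals before declaring a win.
# Each entry compares a metric's new value against its baseline; the
# "better" field says which direction counts as an improvement.

metrics = {
    # usability: effectiveness
    "task_success_rate":   {"baseline": 0.70,  "new": 0.90,  "better": "higher"},
    # usage
    "weekly_active_users": {"baseline": 12000, "new": 13100, "better": "higher"},
    # affect
    "avg_survey_rating":   {"baseline": 3.4,   "new": 3.9,   "better": "higher"},
    # customer contact
    "negative_mentions":   {"baseline": 200,   "new": 185,   "better": "lower"},
}

def improved(m):
    """True if the metric moved in its 'better' direction."""
    delta = m["new"] - m["baseline"]
    return delta > 0 if m["better"] == "higher" else delta < 0

agree = [name for name, m in metrics.items() if improved(m)]
print(f"{len(agree)}/{len(metrics)} signals improved: {agree}")
# Only trust the result when independent signals point the same way.
```

One metric moving on its own might be noise, or a trade-off in disguise; several unrelated signals agreeing is far harder to explain away.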

No design is ever right the first time – no matter how right it ‘feels.’ Good design takes multiple iterations and wrong directions. For us to know whether our design decisions are good or not, we have to measure early and measure often.

If we don’t measure the impact of what we do, then no matter what we call ourselves, we’re shooting blindfolded.

Further reading

There are a few good books out there on measuring UX, but I’d start with Tom Tullis and Bill Albert’s Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics.
