15 July, 2011 / Ash

Women who aspire to be leaders should avoid mainstream media

Is the mainstream media hindering the progress of women to leadership roles?

Whilst doing some research into gender bias, I found it interesting to watch Q & A’s episode on The Gender Divide. Joe Hockey started talking about Australia’s poor ranking in the OECD for promoting women to leadership roles, and went on to quote statistics:

“Look, when I first railed against this 10 years ago as minister for financial services, 8.4 per cent of board directorships were held by women. Today it is still 8.4 per cent for the overall number of directorships. It hasn’t moved.”

Later Craig Greenwood in Rockdale New South Wales asked Ms. Gail Kelly, CEO of Westpac:

“Can Ms Kelly explain why she’s lost the only two female executives she inherited as Westpac’s CEO and why, despite positive discrimination, she hasn’t managed to find any women capable enough to sit on her executive committee?”

This got me wondering why nobody on the panel was bringing up the other side of the story – that many women don’t apply for leadership positions in the first place. There are a number of reasons for this, but probably the most significant is stereotype threat.

Stereotype threat is a self-fulfilling prophecy. When someone is aware of negative stereotypes applied to their group, they tend to perform closer to what the stereotype would predict. For women, these negative stereotypes (in the context of leadership roles) include such traits as being irrational, emotional, indecisive and weak.

And here’s the rub: Mainstream media leverages stereotypes to sell products.

“Any expensive ad is as carefully built on the tested foundations of public stereotypes or sets of established attitudes, as any skyscraper is built on bedrock.”

– Marshall McLuhan, Understanding Media: The Extensions of Man

Women's tabloids also leverage stereotypes

The mainstream media constantly bombards us with advertisements that reinforce gender stereotypes – triggering stereotype threat in women.

A 2005 study exploring the effect of stereotype threat on women’s leadership aspirations found that exposure to stereotypic commercials undermined women’s aspirations on subsequent leadership tasks.

This finding could reasonably be extrapolated to mean that the act of watching TV or reading tabloid magazines may discourage women from seeking leadership roles.

It’s not known how long the stereotype threat effect lasts, but this has potentially profound implications.

A follow-up study in the same paper found that the effect could be mitigated by adding the line:

“There is a great deal of controversy in psychology surrounding the issue of gender-based differences in leadership and problem-solving ability; however, our research has revealed absolutely no gender differences in either ability on this particular task.”

Unfortunately, after every Mr Sheen or Meadow Lea ad, there isn’t such a disclaimer to create an identity-safe environment and eliminate the stereotype threat. So I’d say the best thing aspiring women leaders can do is avoid engaging with the mainstream media.

13 February, 2011 / Ash

Are UX Experts really making things better?

Imagine you’re blindfolded and led to a field. Someone gives you a bow and arrow and says “Try and hit the target.” After you’ve fumbled around shooting arrows for an hour, the person quietly leads you away.

The next day, you’re blindfolded and again taken to the field to shoot arrows for an hour. You’re getting better at fumbling around for the arrows and loading them, but are unsure of everything else.

Years of this pass. Now, whenever you’re led onto the field, you bend down with confidence, grab an arrow, and in one swift motion draw it back and release. You really look the part, but do you think you’d be any better at hitting a target?

Prince William looking the part with a bow and arrow

Without the right feedback, we can’t know if we’re getting any better or worse at something. Someone who’s been doing the wrong thing for years is just really good at doing the wrong thing.

Of course, we know what we’re doing, right? Others may be shooting blindfolded, but we know OUR work is making things better. Hell, we’re UX Experts.

Experts, or Yogis?

When surveyed, the vast majority of people report themselves as having above-average intelligence, or being better-than-average drivers. That’s what social scientists call illusory superiority, or the better-than-average bias.

Is he really smarter than the average bear?

Illusory superiority gives us our optimism. It makes us feel good about ourselves: smarter, luckier, or better performing than we actually are. Unfortunately, research shows that we systematically misjudge our abilities, virtues, importance and future actions.

Without the appropriate feedback, we can’t develop the correct skills. And when illusory superiority makes us sure that what we’re doing is making things better – despite a lack of skill – it’s known as the Dunning-Kruger effect.

The Dunning-Kruger effect is when the incompetent are mistakenly confident of their abilities. The effect is what makes the auditions in shows like Australian / British / American Idol so entertaining. Talentless hacks swing and wail, truly believing that they are the next superstar.

Dunning and Kruger argued that people who are incompetent suffer two consequences:

  1. Their incompetence leads them to make poor choices; and (most importantly)
  2. Their incompetence prevents them from realising they’re making poor choices.

Any evident failures are put down to other factors – mostly external – so they can remain under the illusion that they’re doing quite well. This reinforces their confidence.

Unfortunately, this is where many people who consider themselves experts sit. Without the benefit of good feedback, they’ve been shooting arrows in the dark for years. They’re confident. They look the part. They handle their tools like a veteran. But are they hitting the target? Are they really making things better?

How to become an expert

Dunning and Kruger posit that the skills for becoming competent are the same skills required to evaluate competence.

A competent person is someone who’s adept at using a feedback loop for continuous improvement. They attempt something, measure the impact, evaluate if it was better or worse, and adjust accordingly next time. They learn from their many mistakes, and build upon their few successes.

An expert is someone who’s practised being competent for a long time.

The importance of metrics

To evaluate competence in User Experience, we have to measure the impact of our decisions. This is the ONLY way of really knowing if we’ve made things better, so I’ll repeat it:

We have to measure the impact of our decisions.

There are many ways to do this, but they all take the same form:

  1. Find a baseline metric.
  2. Set a goal metric.
  3. Measure to see if you achieved the goal.

This can be:

  • As informal as aiming to have 90% of users complete a task in paper prototyping without error (comparing concepts);
  • As formal as using ISO 25062:2006 for Summative usability evaluation. This is the scientific way of setting a baseline and measuring against it: recommended for any product that will have more than one version (comparing versions); or
  • As simple as aiming to decrease negative customer social media mentions by 5% with the next release (comparing customer satisfaction metrics).
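To make those three steps concrete, here’s a minimal sketch in Python (my own illustration, not from the original post – the participant counts and the 90% goal are invented):

    def completion_rate(successes: int, participants: int) -> float:
        """Proportion of participants who completed the task without error."""
        return successes / participants

    # 1. Baseline metric: usability test of the current release (hypothetical numbers).
    baseline = completion_rate(successes=11, participants=20)    # 55%

    # 2. Goal metric: what we want the next iteration to achieve.
    goal = 0.90                                                   # 90% completion without error

    # 3. Measure: usability test of the redesigned flow (hypothetical numbers).
    new_release = completion_rate(successes=17, participants=20)  # 85%

    print(f"Baseline {baseline:.0%} -> goal {goal:.0%} -> measured {new_release:.0%}")
    print("Goal met" if new_release >= goal else "Goal missed – iterate again")

Whether the numbers come from paper prototyping or a formal ISO 25062 report, the loop is the same: set the baseline, set the goal, then measure.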

Remember, taking only one measure can be misleading. We should always strive to triangulate our data for a more realistic picture. User Experience covers a wide gamut, so it’s important to include data from areas such as:

  • Usability (efficiency, effectiveness and satisfaction);
  • Usage (downloads, unique impressions, time, etc);
  • Affect (reactions, opinions); and
  • Customer contact (positive and negative mentions in call centres, the media, and online).
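As a rough sketch of what triangulation might look like in practice (again my own illustration – the metric names and numbers below are invented), the point is simply that a release gets judged across several of these areas at once, not on a single figure:

    # Hypothetical metrics for two releases, spanning usability, usage,
    # affect (here a SUS questionnaire score) and customer contact.
    baseline = {"task_success": 0.55, "avg_task_time_s": 140, "sus_score": 62, "negative_mentions": 40}
    release_2 = {"task_success": 0.85, "avg_task_time_s": 95, "sus_score": 74, "negative_mentions": 31}

    # For time on task and negative mentions, lower is better; for the rest, higher is better.
    lower_is_better = {"avg_task_time_s", "negative_mentions"}

    for metric, before in baseline.items():
        after = release_2[metric]
        improved = after < before if metric in lower_is_better else after > before
        print(f"{metric:>18}: {before} -> {after} ({'better' if improved else 'worse'})")

A simple comparison like this won’t tell you why something changed, but it does stop one flattering metric from hiding a regression elsewhere.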

No design is ever right the first time – no matter how right it ‘feels.’ Good design takes multiple iterations and wrong directions. For us to know whether our design decisions are good or not, we have to measure early and measure often.

If we don’t measure the impact of what we do, it doesn’t matter what we call ourselves, we are shooting blindfolded.

Further reading

There are a few good books out there on measuring UX, but I’d start with Tom Tullis’s Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics.

27 January, 2011 / Ash

Without foundational research, can we claim to be designing for the user experience?

Lady with a nose growing like Pinocchio

Can we truthfully say we're designing for the user experience?

My thesis is simple: without truly understanding users, we can’t manage their experience. Yet this is what many of us in the user experience field claim to be doing.

User research can be broken into four rough categories:

  1. Foundational research: getting to know the users’ mental models so we can understand their needs and discover new product or service opportunities.
  2. Design research: once we’ve decided on a product or service that will be mutually beneficial to the organisation and the user, we can focus the research on how they currently understand and achieve these goals.
  3. Formative usability evaluation: once we understand how they achieve their goals, we can start exploring ideas – prototyping concepts and testing them with users.
  4. Summative usability evaluation: once a solution has been settled on, it can be tested to set baseline metrics. This allows us to evaluate the next iteration of the product or service – providing meaningful feedback that allows us to improve our designs.

Foundational research should be the first step in designing for the user experience. The goal is to gain a deep understanding of our intended users, including their:

  • Mental models – Personal concepts of how things work in the world: constructs which are often different from how things really work.
  • Motivations – The reasons people behave the way they do.
  • Goals – Objectives driven by motivations.
  • Tasks – Things people need to do to achieve their goals.

We can then distill this information into personas, user stories, goals and primary tasks.

Foundational research provides a deep understanding of the target group, and is the most effective way to discover strategic business opportunities.

Unfortunately, foundational research is usually missing in the product/service development lifecycle. There are a number of reasons for this that I’ll explore in a later post.

We all claim to be designing for the User Experience, but can we really do that if we don’t understand the user in the first place?

Further reading

How Customers Think: Essential Insights into the Mind of the Market
by Gerald Zaltman

Mental Models: Aligning Design Strategy with Human Behavior
by Indi Young

The Persona Lifecycle: Keeping People in Mind Throughout Product Design (Interactive Technologies)
by John Pruitt and Tamara Adlin

24 January, 2011 / Ash

Thoughtless Design

Design comes in a number of dominant (but not exclusive) flavours:

  • Engineering-centric: optimising for the artefact, including choice of materials, serviceability, ease of development and interoperability;
  • Marketing-centric: optimising for saleability by adding features, speed to market, and brand coherence;
  • Business-centric: optimising for such things as cost per unit, strategic alignment, and distribution channels; and
  • User-centric: optimising design for the understanding, behaviours, and physical capabilities of the intended users.

Unfortunately, engineering-centric, marketing-centric and business-centric design can just come across as thoughtless design.

Poorly designed shower tap

I’m typing this from a hotel in Sydney. In the shower is the tap fitting you see above: an example of thoughtless design.

Take a moment to reflect on how you think it may work.

Certain artefacts have culturally informed conventions: sometimes known as affordances. This is how we expect things will work. For example, here in Australia we expect that:

  • Left or down are negative: backward, past, or decreasing;
  • Right or up are positive: forward, future, or increasing.

And given a specific context (in this case, a tap fitting), we have conventions like:

  • Turning a tap anti-clockwise opens the valve;
  • Turning a tap clockwise closes the valve.

However, in the case of the pictured tap fitting, the expectations the user has are contradicted:

  • Turning the tap anti-clockwise increases the indicated water temperature; and
  • Moving the slider to the right increases the flow of water.

If someone enters the shower half-asleep, turns the tap anti-clockwise and no water comes out, they may try moving the slider and end up scalding themself.

Since both types of control are available, there’s no excuse for the designer not aligning the functions with users’ expectations, so that:

  • Turning the tap anti-clockwise increases the flow of water; and
  • Moving the slider to the right increases the water temperature.

The worst part is, a user that gets scalded would probably blame it on their own incompetence – instead of complaining to the hotel about the thoughtless design.

Update

I’ve since been told by @Aus_Pol that this type of tap fitting is common in certain parts of North America. Specifically, he’s seen it in “4 different places of Canada (and Seattle).” Personally, I’ve only seen it in a couple of places in the States, including Maryland and, I think, Texas.

After using the shower again, I noticed something else. The designers have obviously encountered the potential for someone to burn themself because the design contravenes user expectations.

The button has to be depressed to turn the tap past 38°C

The red button shown has to be pressed whilst turning the tap to set the temperature above 38 degrees Celsius (body temperature). I originally didn’t notice this because the temperature was already set to over 50 degrees (nobody would have a shower at 38 degrees – brrr!).

Even when I turned the tap full circle either way, I depressed the button without knowing it. I have large-ish hands, so by grabbing the tap handle and turning it, I accidentally held the button in.

In both cases, I could unknowingly increase the temperature to 70°C (enough to scald) – negating the new design feature.

This is a great example of the type of design that results from a corporate culture. It may have evolved (as most product designs do) along these lines:

Version 1.0

Business requirement: The tap fitting has to be a single unit, using current parts to keep costs down.
And hurry it up, we have orders to fill.

Engineering-centric design: First, design the water-flow control. Put it at the back of the fitting, so it’s out of the way. A rotating lever will do the trick.
Next, design the temperature control. We’ll just use one of our normal taps for that to keep costs down. Problem solved.

Issue

People are burning themselves.

Engineering response: “Stupid users. Didn’t they read the manual?”

Business response: “Well, we can’t go back to the drawing board. It would cost too much to re-design and re-tool. Make a modification to fix this.”

Version 1.1

Business requirement: Stop people burning themselves.

Engineering-centric design: Add numbers to the temperature controller, so the users know they are turning the temperature up, not down.

Issue

People are STILL burning themselves.

Engineering response: “Stupid users. Didn’t they see the numbers?”

Business response: “Even if it’s user error, we have to at least reach a break-even level of lawsuits.”

Version 1.2

Business requirement: Stop people burning themselves.

Engineering-centric design: Add a button that has to be depressed before the user can move the water temperature beyond body temperature.

Issue

People are STILL burning themselves, but our lawyers have said we’re OK because we’ve added enough features to try and counter their stupidity.

Business response: “We’re covered. Leave good enough alone and move on to the next product.”

Result

This of course resulted in a complex, expensive design that still contradicts user expectations and is still a danger.

Instead of investigating users’ mental models of how taps work, they started with a thoughtless concept and kept evolving the design from bad to worse. This shortcut mentality meant that the project:

  • Went over budget on development (multiple release iterations);
  • Went over time on development (multiple release iterations);
  • Produced an end product that cost more to manufacture (due to complexity); and
  • Resulted in an unintuitive tap fitting that is dangerous to use.

If the organisation had done some foundational research to understand people’s mental models of how taps work, and designed accordingly, it would have saved on development time and product complexity, lowering the cost of developing and producing each unit.

Unfortunately, this is the type of development process that corporations encourage (the industrial, production line model: the underlying philosophy of corporations) – and the reason why there are so many bad designs out there.

Can you imagine how much cheaper it would have been to do a bit of user research up front?

Further reading

Thanks to @mattmorphett for pointing out a mistake (which I’ve since corrected) in the operation. He’s also got a great post on taps and conventions: Sometimes there’s a good reason to break convention

29 December, 2010 / Ash

Intuition is the enemy of user research in UX

Know thy user.

Foundations of UX: Understanding the motivations, goals, tasks and context of users

It’s the mantra, and the essential foundation, of designing for a user experience (UX). To really know thy user, however, requires good research.

Even during the first site visit, interview, or contextual inquiry for a project, it’s tempting to start thinking about how to solve what appears to be an obvious problem. It’s not uncommon for User Experience practitioners to even start sketching out ideas while observing or conducting user research.

It’s how our minds naturally work: they leap straight to what looks like the obvious solution.

Unfortunately, listening to your intuition (‘sketching out ideas’, or ‘jumping into solution mode’) is exactly how not to do research. Sure, you’ll come to conclusions that you’re certain are accurate – but it’s a good bet you’ll be wrong.

Once you land on an idea, confirmation bias kicks in. You can’t help but look for evidence to confirm your assumptions – and that often means missing the real motivations, goals, tasks and issues.

Research isn’t about confirming assumptions

Research is about gathering data, analysing it, then forming conclusions. In fact, by definition research is:

the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.

The scientific method can be applied to user research, just as it can be applied to any area of inquiry. Diligently gathering data from appropriate sources and analysing it carefully before coming to a conclusion is unnatural, uncomfortable, and completely necessary.

Don’t get me wrong. I’m not against intuition in UX. It’s necessary – but in the design stage. The user experience needs to be built on the solid foundations of good research, otherwise we might be wasting our time working on something completely unnecessary.

Stay focussed

When you’re gathering data, that should remain your focus. Stay out of ‘solutions mode’ until after the analysis. I’m always surprised at how often the analysis reveals that what I assumed were the issues during the research phase turn out to be non-issues, or minor compared to the ones I didn’t even notice.

Further reading

The A-Z of Social Research: A Dictionary of Key Social Science Research
by Robert Lee Miller & John Brewer

Business Anthropology
by Ann T Jordan

Mental Models: Aligning Design Strategy with Human Behavior
by Indi Young

Research Design: Qualitative, Quantitative, and Mixed Methods Approaches
by John W Creswell

22 December, 2010 / Ash

Science doesn’t know everything!

When a debate turns to evidence and science, I often hear people retorting:

“Science doesn’t know everything!”

It’s interesting that people will resort to anthropomorphising something (treating it as if it’s human) when they run out of logical defences. Science isn’t an organisation or a person. It can’t ‘know’ anything.

To me, such a turn of phrase is an indication that the person may not understand what science is.

Science is a process

Science is the application of the scientific method, the aim of which is to mitigate our natural human biases – like cognitive dissonance – in the pursuit of knowledge. In essence, it boils down to:

  1. Question – Notice something and be curious.
  2. Hypothesis – Predict how it works.
  3. Experiment – Figure out how to test the prediction to prove it wrong – the concept of falsifiability is important to the scientific method.
  4. Observation – Collect data from the tests.
  5. Analysis – Process the data into meaningful results.
  6. Conclusions – Use the evidence to judge whether the prediction still holds. A prediction that is wrong is just as valuable to understanding as a correct one.
  7. Peer review – Be transparent: publish exactly what was done and how, the results, and the reasoning behind the conclusions. Others can then question the reasoning and replicate the experiment themselves.
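The steps above can be made concrete with a toy example. Here’s a minimal sketch in Python (my own illustration, not part of the original post) of testing a falsifiable prediction – that a coin is fair – using invented flip counts:

    from math import comb

    # Hypothesis: the coin is fair (p_heads = 0.5). This is falsifiable, because
    # sufficiently lopsided results would prove the prediction wrong.

    # Experiment / Observation: flip the coin 100 times and record the outcomes.
    flips, heads = 100, 62  # hypothetical data

    # Analysis: exact two-sided binomial p-value under the fair-coin prediction.
    def binom_prob(k: int, n: int, p: float = 0.5) -> float:
        return comb(n, k) * p**k * (1 - p)**(n - k)

    extreme = abs(heads - flips / 2)
    p_value = sum(binom_prob(k, flips) for k in range(flips + 1)
                  if abs(k - flips / 2) >= extreme)

    # Conclusion: if results this lopsided would be very unlikely for a fair coin,
    # the prediction is probably wrong. A failed prediction is still a result.
    verdict = ("the fair-coin prediction looks wrong"
               if p_value < 0.05 else "the data don't contradict a fair coin")
    print(f"p-value = {p_value:.3f}: {verdict}")

Publishing the data and the analysis – the equivalent of the peer review step – is what lets someone else check the reasoning or run their own hundred flips.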

Science is also a body of knowledge

From the scientific method comes an ever-expanding body of knowledge. The knowledge is constantly being added to, refined and updated as our understanding of the universe grows.

Since science can’t ‘know’ anything, a more appropriate phrase may be:

“Science hasn’t explored everything.”

This is true. So much has been explored by science that no person has the cognitive capacity to be across every discipline, but there is still so much more to discover.

However, when someone proclaims that “science doesn’t know everything,” the point that science hasn’t explored everything is rarely the one they’re making. This cliché is most often heard when someone is intellectually backed into a corner, defending a credulous belief that has been tested using the scientific method and found to be wrong – topics ranging from complementary and alternative medicines to superstitions and the paranormal.

If you’re feeling cheeky, the next time you hear someone say “Science doesn’t know everything” ask them who science is.

Further reading

A Beginner’s Guide to Scientific Method – a good primer on the scientific method and a companion for anyone interested in critical thinking.

17 December, 2010 / Ash

Using Cards in User Experience

IDEO Method Cards in use

Lately, there’s been a trend of people creating decks of cards for use in UX. From IDEO’s Method Cards to the AT-ONE project’s Touch Point Deck, these cards are externalised snippets of knowledge that can be used for inspiration, as a memory jog, for mapping a sequence, or for planning.

User Experience is a multi-disciplinary area, so I try to stay current across a number of fields. After many years of reading theory, planning projects, and using an array of research and design techniques, I am surprised when I re-open a textbook or look at an old project and think “I forgot about that.”

This is where cards can come in handy.

Putting knowledge on cards does two key things:

  1. It externalises the knowledge; and
  2. It makes the knowledge tangible.

Externalising knowledge

When you have a vast field of knowledge to remember, some things – especially those used least often – will inevitably get lost. Especially when you’re under pressure.

A little anxiety is a good thing. It helps focus the mind. Common project features such as ridiculous deadlines, slashed budgets, political infighting and missing stakeholders can crank that anxiety up a few notches though. This is when the mind goes into fight or flight mode. It narrows focus tightly: great for mechanical tasks like outrunning a predator, but terrible for abstract or creative thinking.

In such cases, it is advantageous to have your memory externalised. Being able to refer to a series of cards representing possible ideas, methods, theories or approaches is a practical solution to a common problem.

Note: This is why pilots have Ops Manuals and Checklists. They are externalised or systemised knowledge for times of increased anxiety and cognitive load.

Turning knowledge into tangible objects

Knowledge objects are self-contained snippets of knowledge. Since they are self-contained, such cards provide a tangible form of communication for all stakeholders that can be grouped, mapped, arranged, or chosen from – either individually, or in teams.

A practitioner can rapidly come to agreements with stakeholders, using cards in workshops to do things like:

  • Form themes for a project by grouping design inspiration cards.
  • Map a customer’s journey through an interaction with the organisation (to ensure a consistent experience), using touch point cards.
  • Decide upon a pragmatic approach to user research by using method cards.
  • Plan a strategy for the organisation using inputs from research with behavioural cards.

Types of cards available

There are a large number of card decks out there. I’m currently doing a meta-analysis of the following top ten:

IDEO’s Method Cards

Stephen Anderson’s Mental Notes

Nokia’s PLEX Cards

AT-ONE’s Touch Point Deck

UXBasis’s User Journey Cards

Dan Lockton’s Design with Intent Cards

SILK’s Project Planning Cards

nForm’s UX Trading Cards

Jasper van Kuijk’s Recommendations for Usability in Practice Cards

Whitney Quesenbery’s UX Story Telling Cards

I’m planning on putting together a series of consistent UX decks (with boards if required) for planning, mapping, and inspiration.

Are there any key decks you use that I have missed? Please let me know in the comments.

29 November, 2010 / Ash

Cognitive Dissonance

Ash standing in front of an image of a brain

Image courtesy of Paul Hagon

I recently had the opportunity to speak at TEDxCanberra.

When Stephen Collins first approached me for this event, I was going to speak about the disgraceful state of Australian pharmacies – who now sell more snake oil, fad diets, cosmetics and alternative therapies than science-based medicine. Instead, I opted to go for the root of the problem: faulty reasoning – especially that caused by cognitive dissonance.

Cognitive dissonance is an uncomfortable feeling caused by attempting to hold ideas, beliefs, or behaviours that conflict. Our minds quickly act to relieve this dissonance by discarding or minimising the impact of one of the ideas, beliefs or behaviours by employing cognitive biases – and this can lead to faulty reasoning.

Being aware of how cognitive dissonance and cognitive biases affect us doesn’t stop them, but it can make us think twice before acting. I thought that was an idea worth spreading.

6 October, 2010 / Ash

Memory & recall

The most common metaphor for how memory works is a movie: our eyes and ears record events like a camera, and our minds store the footage away like a video. This video can then be replayed on demand – and that replay is a memory.

No Movies

Image courtesy of Kriss Szkurlatowski

Unfortunately, this is a really bad metaphor on two levels.

First, we don’t actually see and hear everything in the first place. Our perception is an interpretive process. We only take in essential bits of experiences, and our mind fills in the rest to give us a cohesive image of the world around us.

Second, when we remember something, we only recall certain snippets of the event – certain sights, sounds, emotions, textures – and again, our minds seamlessly fill in the gaps for the rest without us being aware. Every time we remember, we construct that event in our minds from those snippets. Next time we recall the event, it will be based on that new construction. The more often we remember something, the more divergent it is from reality.

The constructive nature of episodic memory means our memories are open to bias, suggestion, or – as Elizabeth Loftus’s research has demonstrated – even complete fabrication, which has led to innocent people being imprisoned. This is why memories ‘recovered’ under hypnosis – from ritual abuse to alien abductions – are now inadmissible as evidence in most courts.

So perhaps the metaphor for memory should be more like a collage? We start with a few cut out pictures, and use paper and pen to arrange the pictures and connect them in a meaningful way.

Further reading

WNYC’s Radiolab Memory and Forgetting

Schacter & Addis (2007) The cognitive neuroscience of constructive memory: remembering the past and imagining the future (PDF, 410kB)

The British False Memory Society Twelve Myths About False Memories

27 May, 2009 / Ash

Perception

Although we “believe it when we see it with our own eyes”, there are many holes in both the input and the interpretation of our senses.


Sometimes it's hard to get noticed

A favourite perceptual flaw of mine is inattentional blindness: the inability to see things right in front of our eyes, because we’re attending to something else.

There was a famous experiment on this subject by researchers at the Visual Cognition Lab. Students were asked to watch a video and count the number of times the members of one team passed a basketball to one another. After the video, the researcher asked the students how many times the ball was passed. Some said 12, many said 13, some said 14.

Next, the researcher asked if anyone noticed anything strange in the video. Some said the boys passed it to boys more than to girls. Some said the girls didn’t move as much.

He asked them to watch the video again: this time not to count the passes, just to watch. Almost nobody believed it was the same video. It all looked the same, but in this one, a gorilla walked to centre stage, beat its chest, waved its arms, and walked off again. The students had simply failed to notice a blatant gorilla right in front of their eyes, because their focus was on counting the number of passes.
