Is the mainstream media hindering the progress of women to leadership roles?
Whilst doing some research into gender bias, I found it interesting to watch Q&A’s episode on The Gender Divide. Joe Hockey started talking about Australia’s poor ranking in the OECD for promoting women to leadership roles, and went on to quote statistics:
“Look, when I first railed against this 10 years ago as minister for financial services, 8.4 per cent of board directorships were held by women. Today it is still 8.4 per cent for the overall number of directorships. It hasn’t moved.”
Later, Craig Greenwood of Rockdale, New South Wales, asked Ms Gail Kelly, CEO of Westpac:
“Can Ms Kelly explain why she’s lost the only two female executives she inherited as Westpac’s CEO and why, despite positive discrimination, she hasn’t managed to find any women capable enough to sit on her executive committee?”
This got me wondering why nobody on the panel was bringing up the other side of the story – that many women don’t apply for leadership positions in the first place. This is due to a number of reasons, but probably the most significant is stereotype threat.
Stereotype threat is a self-fulfilling prophecy. When someone is aware of negative stereotypes applied to their group, they tend to perform closer to what the stereotype would predict. For women, these negative stereotypes (in the context of leadership roles) include such traits as being irrational, emotional, indecisive and weak.
And here’s the rub: Mainstream media leverages stereotypes to sell products.
“Any expensive ad is as carefully built on the tested foundations of public stereotypes or sets of established attitudes, as any skyscraper is built on bedrock.”
– Marshall McLuhan, Understanding Media: The Extensions of Man
The mainstream media constantly bombards us with advertisements that reinforce gender stereotypes – triggering stereotype threat in women.
A 2005 study exploring the effect of stereotype threat on women’s leadership aspirations found that exposure to stereotypic commercials undermined women’s aspirations on a subsequent leadership task.
This finding could reasonably be extrapolated to mean that the act of watching TV or reading tabloid magazines may discourage women from seeking leadership roles.
It’s not known what the duration of the stereotype threat effect is, but this has potentially profound implications.
A follow-up study in the same paper found that the effect could be mitigated by adding the line:
“There is a great deal of controversy in psychology surrounding the issue of gender-based differences in leadership and problem-solving ability; however, our research has revealed absolutely no gender differences in either ability on this particular task.”
Unfortunately, after every Mr Sheen or Meadow Lea ad, there isn’t such a disclaimer to create an identity-safe environment and eliminate the stereotype threat. So I’d say the best thing aspiring women leaders can do is avoid engaging with the mainstream media.
Imagine you’re blindfolded and led to a field. Someone gives you a bow and arrow and says “Try and hit the target.” After fumbling around, shooting arrows for an hour, the person quietly leads you away.
The next day, you’re blindfolded and again taken to the field to shoot arrows for an hour. You’re getting better at fumbling around for the arrows and loading them, but are unsure of everything else.
Years of this pass. Now, whenever you’re led onto the field, you bend down with confidence, grab an arrow, and in one swift motion: draw it back, and release. You really look the part, but do you think you’d be any better at hitting a target?
Without the right feedback, we can’t know if we’re getting any better or worse at something. Someone who’s been doing the wrong thing for years is just really good at doing the wrong thing.
Of course, we know what we’re doing, right? Others may be shooting blindfolded, but we know OUR work is making things better. Hell, we’re UX Experts.
Experts, or Yogis?
When surveyed, the vast majority of people report themselves as having above-average intelligence, or being better-than-average drivers. That’s what social scientists call illusory superiority, or the better-than-average bias.
Illusory superiority gives us our optimism. It makes us feel good about ourselves: smarter, luckier, or better performing than we actually are. Unfortunately, research shows that we systematically misjudge our abilities, virtues, importance and future actions.
Without the appropriate feedback, we can’t develop the correct skills. When the illusory superiority bias makes us sure that what we are doing is making things better, it’s known as the Dunning-Kruger effect.
The Dunning-Kruger effect is when the incompetent are mistakenly confident of their abilities. The effect is what makes the auditions in shows like Australian / British / American Idol so entertaining. Talentless hacks swing and wail, truly believing that they are the next superstar.
Dunning and Kruger argued that incompetent people suffer two consequences:
- Their incompetence leads them to make poor choices; and (most importantly)
- Their incompetence prevents them from realising they’re making poor choices.
Any evident failures are put down to other factors – mostly external – so they can remain under the illusion that they’re doing quite well. This reinforces their confidence.
Unfortunately, this is where many people who consider themselves experts sit. Without the benefit of good feedback, they’ve been shooting arrows in the dark for years. They’re confident. They look the part. They handle their tools like a veteran. But are they hitting the target? Are they really making things better?
How to become an expert
Dunning and Kruger posit that the skills for becoming competent are the same skills required to evaluate competence.
A competent person is someone who’s adept at using a feedback loop for continuous improvement. They attempt something, measure the impact, evaluate if it was better or worse, and adjust accordingly next time. They learn from their many mistakes, and build upon their few successes.
An expert is someone who’s practised being competent for a long time.
The importance of metrics
To evaluate competence in User Experience, we have to measure the impact of our decisions. This is the ONLY way of really knowing if we’ve made things better, so I’ll repeat it:
We have to measure the impact of our decisions.
There are many ways to do this, but they all are of the same form:
- Find a baseline metric.
- Set a goal metric.
- Measure to see if you achieved the goal.
This can be:
- As informal as aiming to have 90% of users complete a task in paper prototyping without error (comparing concepts);
- As formal as using ISO 25062:2006 for summative usability evaluation. This is the scientific way of setting a baseline and measuring against it: recommended for any product that will have more than one version (comparing versions); or
- As simple as aiming to decrease negative customer social media mentions by 5% with the next release (comparing customer satisfaction metrics).
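As a minimal sketch of that baseline → goal → measure form (the task and all of the completion numbers below are hypothetical illustrations, not data from any real study):

```python
# A minimal sketch of the baseline -> goal -> measure loop.
# All completion counts below are invented for illustration.

def completion_rate(successes: int, attempts: int) -> float:
    """Proportion of users who completed the task without error."""
    return successes / attempts

# 1. Find a baseline metric (e.g. measured on the current release).
baseline = completion_rate(successes=14, attempts=20)  # 0.70

# 2. Set a goal metric for the next iteration.
goal = 0.90

# 3. Measure the new design to see if you achieved the goal.
measured = completion_rate(successes=18, attempts=20)  # 0.90

print(f"baseline={baseline:.0%} goal={goal:.0%} measured={measured:.0%} "
      f"goal met: {measured >= goal}")
```

The same three-step shape holds whether the metric is an error rate in paper prototyping, an ISO 25062 measure, or a social-media mention count; only the measurement instrument changes.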
Remember, only taking one measure can be misleading. We should always strive to triangulate our data for a more realistic measure. User Experience covers a wide gamut, so it’s important to include data from areas such as:
- Usability (efficiency, effectiveness and satisfaction);
- Usage (downloads, unique impressions, time, etc);
- Affect (reactions, opinions); and
- Customer contact (positive and negative mentions in call centres, the media, and online).
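As a hedged sketch of what triangulating those four areas might look like in practice (every metric name, scale, weight, and number below is an illustrative assumption, not a standard):

```python
# Hypothetical sketch: triangulating UX data across several areas so that
# no single measure misleads us. All names and values are invented examples.

measures = {
    "usability": {"task_success": 0.85, "sus_score": 72 / 100},  # SUS rescaled to 0-1
    "usage": {"weekly_active_share": 0.60},
    "affect": {"positive_reaction_share": 0.70},
    "customer_contact": {"positive_mention_share": 0.55},
}

# Average within each area first, so areas with many metrics don't dominate.
area_scores = {area: sum(vals.values()) / len(vals)
               for area, vals in measures.items()}

# A simple unweighted average across areas gives one triangulated figure;
# a real study would weight areas by how much each data source is trusted.
overall = sum(area_scores.values()) / len(area_scores)

for area, score in area_scores.items():
    print(f"{area:>16}: {score:.2f}")
print(f"{'overall':>16}: {overall:.2f}")
```

The point of the structure is visible in the code: a strong usability score can coexist with weak customer-contact numbers, and only looking at all four areas together reveals it.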
No design is ever right the first time – no matter how right it ‘feels.’ Good design takes multiple iterations and wrong directions. For us to know whether our design decisions are good or not, we have to measure early and measure often.
If we don’t measure the impact of what we do, it doesn’t matter what we call ourselves, we are shooting blindfolded.
There are a few good books out there on measuring UX, but I’d start with Tom Tullis and Bill Albert’s Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics.
My thesis is simple: without truly understanding users, we can’t manage their experience. Yet this is what many of us in the user experience field claim to be doing.
User research can be broken into four rough categories:
- Foundational research: getting to know the users’ mental models so we can understand their needs and discover new product or service opportunities.
- Design research: once we’ve decided on a product or service that will be mutually beneficial to the organisation and the user, we can focus the research on how they currently understand and achieve these goals.
- Formative usability evaluation: once we understand how users achieve their goals, we can start exploring ideas: prototyping concepts and testing them with users.
- Summative usability evaluation: once a solution has been settled on, it can be tested to set baseline metrics. This allows us to evaluate the next iteration of the product or service – providing the meaningful feedback we need to improve our designs.
Foundational research should be the first step in designing for the user experience. The goal is to gain a deep understanding of our intended users, including their:
- Mental models – Personal concepts of how things work in the world: constructs which are often different from how things really work.
- Motivations – The reasons people behave the way they do.
- Goals – Objectives driven by motivations.
- Tasks – Things people need to do to achieve their goals.
We can then distill this information into personas, user stories, goals and primary tasks.
Foundational research provides a deep understanding of the target group, and is the most effective way to discover strategic business opportunities.
Unfortunately, foundational research is usually missing in the product/service development lifecycle. There are a number of reasons for this that I’ll explore in a later post.
We all claim to be designing for the User Experience, but can we really do that if we don’t understand the user in the first place?
How Customers Think: Essential Insights into the Mind of the Market
by Gerald Zaltman
The Persona Lifecycle: Keeping People in Mind Throughout Product Design (Interactive Technologies)
by John Pruitt and Tamara Adlin
Design comes in a number of dominant (but not exclusive) flavours:
- Engineering-centric: optimising for the artefact, including choice of materials, serviceability, ease of development and interoperability;
- Marketing-centric: optimising for saleability by adding features, speed to market, and brand coherence;
- Business-centric: optimising for such things as cost per unit, strategic alignment, and distribution channels; and
- User-centric: optimising design for the understanding, behaviours, and physical capabilities of the intended users.
Unfortunately, engineering-centric, marketing-centric and business-centric design can just come across as thoughtless design.
I’m typing this from a hotel in Sydney. In the shower is the tap fitting you see above: an example of thoughtless design.
Take a moment to reflect on how you think it may work.
Certain artefacts have culturally informed conventions: sometimes known as affordances. This is how we expect things will work. For example, here in Australia we expect that:
- Left or down are negative: backward, past, or decreasing;
- Right or up are positive: forward, future, or increasing.
And given a specific context (in this case, a tap fitting), we have conventions like:
- Turning a tap anti-clockwise opens the valve;
- Turning a tap clockwise closes the valve.
However, in the case of the pictured tap fitting, the expectations the user has are contradicted:
- Turning the tap anti-clockwise increases the indicated water temperature; and
- Moving the slider to the right increases the flow of water.
If someone enters the shower half-asleep, turns the tap anti-clockwise and no water comes out, they may try moving the slider and end up scalding themself.
Since both types of control are available, there’s no excuse that the designer didn’t align the functions with the users’ expectations, so that:
- Turning the tap anti-clockwise increases the flow of water; and
- Moving the slider to the right increases the water temperature.
The worst part is, a user who gets scalded would probably blame their own incompetence – instead of complaining to the hotel about the thoughtless design.
I’ve since been told by @Aus_Pol that this type of tap fitting is common in certain parts of North America. Specifically, he’s seen it in “4 different places of Canada (and Seattle).” Personally, I’ve only seen it in a couple of places in the States, including Maryland and, I think, Texas.
After having to use the shower again, I noticed something else. The potential issue of someone burning themself due to the thoughtless design contravening user expectations has obviously been encountered by the designers.
The red button shown has to be pressed whilst turning the tap to set the temperature above 38 degrees Celsius (roughly body temperature). I originally didn’t notice this because the temperature was already set to over 50 degrees (nobody would have a shower at 38 degrees – brrr!).
Even when I turned the tap a full circle in either direction, I depressed the button without knowing it. I have large-ish hands, so by grabbing the tap handle and turning it, I accidentally held the button in.
In both cases, I could unknowingly increase the temperature to 70 degrees Celsius (enough to scald) – negating the new design feature.
This is a great example of the type of design that results from a corporate culture. It may have evolved (as most product designs do) along these lines:
Business requirement: The tap fitting has to be a single unit, using current parts to keep costs down.
And hurry it up, we have orders to fill.
Engineering-centric design: First, design the water-flow control. Put it at the back of the fitting, so it’s out of the way. A rotating lever will do the trick.
Next, design the temperature control. We’ll just use one of our normal taps for that to keep costs down. Problem solved.
People are burning themselves.
Engineering response: “Stupid users. Didn’t they read the manual?”
Business response: “Well, we can’t go back to the drawing board. It would cost too much to re-design and re-tool. Make a modification to fix this.”
Business requirement: Stop people burning themselves.
Engineering-centric design: Add numbers to the temperature controller, so the users know they are turning the temperature up, not down.
People are STILL burning themselves.
Engineering response: “Stupid users. Didn’t they see the numbers?”
Business response: “Even if it’s user error, we have to at least reach a break-even level of lawsuits.”
Business requirement: Stop people burning themselves.
Engineering-centric design: Add a button that has to be depressed before the user can move the water temperature beyond body temperature.
People are STILL burning themselves, but our lawyers have said we’re OK because we’ve added enough features to try and counter their stupidity.
Business response: “We’re covered. Leave good enough alone and move on to the next product.”
This of course resulted in a complex, expensive design that still contradicts user expectations and is still a danger.
Instead of investigating users’ mental models of how taps work, they started with a thoughtless concept and evolved the design from bad to worse. This shortcut mentality meant that the project:
- Ran over budget on development (multiple release iterations);
- Ran over time on development (multiple release iterations);
- Produced an end product that cost more to manufacture (due to complexity); and
- Resulted in an unintuitive tap fitting that is dangerous to use.
If the organisation had done some foundational research to understand people’s mental models of how taps work, and designed for it, they would have saved on development time and product complexity, lowering the cost of developing and producing each unit.
Unfortunately, this is the type of development process that corporations encourage (the industrial, production line model: the underlying philosophy of corporations) – and the reason why there are so many bad designs out there.
Can you imagine how much cheaper it would have been to do a bit of user research up front?
Thanks to @mattmorphett for pointing out a mistake (which I’ve since corrected) in the operation. He’s also got a great post on taps and conventions: Sometimes there’s a good reason to break convention
Know thy user.
It’s the mantra, and the essential foundation, of designing for a user experience (UX). To really know thy user, however, requires good research.
Even during the first site visit, interview, or contextual inquiry for a project, it’s tempting to start thinking about how to solve what appears to be an obvious problem. It’s not uncommon for User Experience practitioners to even start sketching out ideas while observing or conducting user research.
It’s how our minds naturally work. They:
- Immediately classify any information without waiting for the big picture – often shoving it into the wrong pigeon holes;
- See patterns – even where none exist;
- Notice certain, specific things while completely missing the obvious;
- Take shortcuts to come to solid conclusions; and
- Fool us into believing that what we’re doing is accurate.
Unfortunately, listening to your intuition (‘sketching out ideas’, or ‘jumping into solution mode’) is exactly how not to do research. Sure, you’ll come to conclusions that you’re certain are accurate – but it’s a good bet you’d be wrong.
Once you land on an idea, confirmation bias kicks in. You can’t help but look for evidence to confirm your assumptions – and that often means missing the real motivations, goals, tasks and issues.
Research isn’t about confirming assumptions
Research is about gathering data, analysing it, then forming conclusions. In fact, by definition research is:
the systematic investigation into and study of materials and sources in order to establish facts and reach new conclusions.
The scientific method can be applied to user research, just as it can be applied to any area of inquiry. Diligently gathering data from appropriate sources and analysing it carefully before coming to a conclusion is unnatural, uncomfortable, and completely necessary.
Don’t get me wrong. I’m not against intuition in UX. It’s necessary – but in the design stage. The user experience needs to be built on the solid foundations of good research, otherwise we might be wasting our time working on something completely unnecessary.
When you’re gathering data, that should remain your focus. Stay out of ‘solutions mode’ until after the analysis. I’m always surprised when the analysis reveals that what I assumed were the issues during the research phase turn out to be non-issues, or minor compared to the ones I didn’t even notice.
The A-Z of Social Research: A Dictionary of Key Social Science Research
by Robert Lee Miller & John Brewer
by Ann T Jordan
Research Design: Qualitative, Quantitative, and Mixed Methods Approaches
by John W Creswell
When a debate turns to evidence and science, I often hear people retorting:
“Science doesn’t know everything!”
It’s interesting that people will resort to anthropomorphising something (treating it as if it’s human) when they run out of logical defences. Science isn’t an organisation or a person. It can’t ‘know’ anything.
To me, such a turn of phrase is an indication that the person may not understand what science is.
Science is a process
- Question – Notice something and be curious.
- Hypothesis – Predict how it works.
- Experiment – Figure out how to test the prediction to prove it wrong – the concept of falsifiability is important to the scientific method.
- Observation – Collect data from the tests.
- Analysis – Process the data into meaningful results.
- Conclusions – Use the evidence to judge whether the prediction still holds. A prediction that is wrong is just as valuable to understanding as a correct one.
- Peer Review – Being transparent. Publishing exactly what was done and how, the results, and the reasoning behind the conclusions. Others can then question the reasoning and replicate the experiment themselves.
Science is also a body of knowledge
From the scientific method comes an ever-expanding body of knowledge. The knowledge is constantly being added to, refined and updated as our understanding of the universe grows.
Since science can’t ‘know’ anything, a more appropriate phrase may be:
“Everything hasn’t been explored by science.”
This is true. So much has been explored by science that no person has the cognitive capacity to be across every discipline, but there is still so much more to discover.
However, when someone proclaims that “science doesn’t know everything,” the point that everything hasn’t been explored by science is rarely valid. This cliché is most often heard when someone is intellectually backed into a corner, defending a credulous belief that has been tested using the scientific method and found to be wrong: topics ranging from complementary and alternative medicine to superstitions and the paranormal.
If you’re feeling cheeky, the next time you hear someone say “Science doesn’t know everything” ask them who science is.
A Beginner’s Guide to Scientific Method – a good primer for the scientific method and companion for anyone interested in critical thinking.
Lately, there’s been a trend of people creating decks of cards for use in UX. From IDEO’s Method Cards, to the AT-ONE project’s Touch Point Deck – these cards are externalised snippets of knowledge, and can be used for inspiration, as a memory jog, mapping a sequence, or planning.
User Experience is a multi-disciplinary area, so I try to stay current across a number of fields. After many years of reading theory, planning projects, and using an array of research and design techniques, I am surprised when I re-open a textbook or look at an old project and think “I forgot about that.”
This is where cards can come in handy.
Putting knowledge on cards does two key things:
- It externalises the knowledge; and
- It makes the knowledge tangible.
When you have a vast field of knowledge to remember, some things – especially those used least often – will inevitably get lost. Especially when you’re under pressure.
A little anxiety is a good thing. It helps focus the mind. Common project features such as ridiculous deadlines, slashed budgets, political infighting and missing stakeholders can crank that anxiety up a few notches though. This is when the mind goes into fight or flight mode. It narrows focus tightly: great for mechanical tasks like outrunning a predator, but terrible for abstract or creative thinking.
In such cases, it is advantageous to have your memory externalised. Being able to refer to a series of cards that represents possible ideas, methods, theories or approaches to a problem is a practical approach to a common problem.
Note: This is why pilots have Ops Manuals and Checklists. They are externalised or systemised knowledge for times of increased anxiety and cognitive load.
Turning knowledge into tangible objects
Knowledge objects are self-contained snippets of knowledge. Since they are self-contained, such cards provide a tangible form of communication for all stakeholders that can be grouped, mapped, arranged, or chosen from – either individually, or in teams.
A practitioner can rapidly come to agreements with stakeholders, using cards in workshops to do things like:
- Form themes for a project by grouping design inspiration cards.
- Map a customer’s journey through an interaction with the organisation (to ensure a consistent experience), using touch point cards.
- Decide upon a pragmatic approach to user research by using method cards.
- Plan a strategy for the organisation using inputs from research with behavioural cards.
Types of cards available
There are a large number of cards out there. I’m currently doing a meta-analysis of the following top 10:
I’m planning on putting together a series of consistent UX decks (with boards if required) for planning, mapping, and inspiration.
Are there any key decks you use that I have missed? Please let me know in the comments.