David Hamill Talks About Mistakes Organizations Make in Regard to UX Research

UX consultant David Hamill explains several organizational mistakes that compromise genuine UX research practices.


By Tremis Skeete, for Product Coalition

How often have you wondered what your users are really experiencing when they use your digital services? As product people, we sometimes assume we can do the research and find the answers ourselves. But if that were true, why don’t we hear about this kind of thinking in the worlds of Chemistry, Physics, Psychology, or Biology?

Perhaps it’s because in these scientific disciplines, we rely on legitimate research experts, or “scientists,” to perform such activities. In these sciences, the topics are so vast that a strong understanding of what needs to be investigated must be combined with rigorous approaches to how the research is performed.

When a scientific researcher has a theory to test, achieving the objective is not always a one-time event. Yes, the objective can be achieved in the moment, but in disciplines like Psychology the objective is more like a moving target that evolves over time, which means periodic testing is required.

Some of these practices may sound easy, but to perform tests, scientists must organize their activities and document them in ways that ensure experiments are truly designed to test the theories, and that all related tasks adhere to protocols. If these steps are not taken, the results and evidence become contaminated, or “invalid,” and will not be accepted by the scientific community.

Genuine research requires standardized scientific protocols, so it’s interesting that in the world of user experience (UX) design, many organizations don’t consider the importance of scientific methods and protocols when engaging in UX research. Why is that?

A UX researcher’s job, like a scientist’s, is to test theories [i.e. assumptions] about how users engage with digital services. This work requires identifying hypotheses and theories, applying skilled techniques in testing, measuring the observable results, and delivering the final findings and evidence.

These are prime reasons why scientific research methods are regarded as a “discipline,” and why organizations can’t just allow any untrained person to perform scientific research.

All scientific disciplines adhere to these standards, and UX research is no exception. So it makes perfect sense that David Hamill, UX research consultant and former Senior UX researcher at Skyscanner, raises valid concerns about how organizations create so-called UX research practices within their product development initiatives.

David Hamill. Source: https://experienceux.co.uk

Quality UX research needs to follow defined protocols to ensure the research and its findings are scientific. That means the research should generate information in the form of hypotheses, theories, experiments, results, evidence, and insights.

Why? Because for research to be valuable, the results need to be based in facts. Not opinions. Not conjectures. Not decisions by committees. Facts.

Why facts? Because if your organization makes a design decision that leads to a litigious situation with a customer, and your legal defense is not based on scientific facts, your business can be held liable for damages.

David’s LinkedIn post describes scenarios that put organizations’ UX initiatives at risk unless they decide to standardize UX research activities around scientific principles, implement quality protocols, and, most importantly, hire genuine UX researchers.

Read a copy of David’s LinkedIn post below to find out more:

Here are some common mistakes organisations make when it comes to UX research.

1. Thinking that democratising research means you don’t need any UX researchers. Not only do you need them, but you need them to be very experienced. They need to be experienced enough to tell a director level colleague they are doing it wrong for example. They need to do a lot of teaching and guidance.

2. Transferring people from other teams into UX research and behaving as though the change in title has magically bestowed 5 years of working experience on them. They need to learn from someone. That someone should also have been taught by someone. This is not the current norm and it’s doing massive damage to the discipline let alone your company.

3. Hiring researchers instead of UX researchers and expecting the same results. There are a range of drawbacks this can have which come from differences in knowledge and in priorities. People can swap over yes, but then you need to refer to point 2.

4. Expecting one-off projects to make up for years of user neglect. “Quick, we need a new product idea, let’s do a 2-week research project and find a new, valuable problem worth solving”. It doesn’t work like that.

5. Not having a subject matter expert who is a professional UX researcher. The person seen as the (self declared) expert on the subject is often a senior level product manager or designer who has never been a dedicated researcher, yet less senior researchers are supposed to defer to their knowledge. This person is often not as knowledgeable as they think they are.

6. Related to 5, having an imbalance in seniority between UX research and design. This leads to researchers being treated as assistants to the design team and valued only for having the time spare to run research. It also leaves UX researchers feeling unrepresented. You don’t need as big a team, just comparable seniority. This is more of an issue for larger companies than in smaller, tighter ones.

7. Valuing research projects based on expense and reach rather than what they found. Giving disproportionate attention to that hugely expensive, one-off worldwide, multi-cultural research project that cost a fortune and asked a shit ton of people some very generic questions. But it didn’t help you take any decisions. And because you did it and it cost a lot, you have to keep dragging it into every project even though it doesn’t help.

8. Expecting all research to have immediately actionable insights. Sometimes those findings aren’t for now. Sometimes you don’t actually find out anything particularly useful. Sometimes you’re too stuck to act on them. Sometimes the really useful knowledge builds up over time.

9. Expecting all research to be quick. The need for speed often destroys the ability to find anything credible or useful.
