
5 Types of Research Performance Every UXR Team Can Avoid

Research performance gives the appearance of offering credible research with none of the actual benefits. Here’s how to spot and prevent it.

Words by Zarla Ludin, Visuals by Alisa Harvey

It's a challenge familiar to many UX researchers: credible and valid user research is criticized for taking too much time, stalling agile processes, or not leading to enough “aha” moments. Company leaders may sideline or overlook the value of that research, despite a team’s need to deeply connect with and understand the people for whom they are designing.

Ironically enough, when a research gap exists—whether purposefully or not—teams tend to fill that void with their own research-like activities. We’ve seen this in many organizations: “personas” created specifically for executive decks to convince leadership of an objective, regular semi-structured chats with customers through a sales or marketing channel, and more.

These research-like activities drain the lifeblood from credible, valid user research. They tend to produce stagnant “insights,” and they leave a long tail of bad behaviors to correct. It’s these kinds of activities that can give the impression that research is a cost center or an optional value-add rather than central to design processes.

The pitfalls of UX theater

Many of these activities inspired the term “UX Theater,” coined by Tanya Snook (with a great write-up in Fast Company). She offers wonderful descriptions of the various manifestations of UX theater and how to address them.

What do research performances look like? They look pretty similar to research. They adopt the rituals, deliverables, jargon, and mechanics of user research. But it’s just slightly off.

In addition to and expanding on what Tanya’s already shared in her extensive materials, here’s how we’ve encountered research performances:


In-orbit, in-crowd

This is when you speak only with those closest to your organization or your offering—such as internal stakeholders and highly influential customers—about topics that are only of interest to you. In other words, you’re not learning about their realities. These chats often have a sales-y objective, like sharing new features with the largest customers and getting feedback on them.

How it’s a performance

Even though there is “speaking with people” going on, at the very least, you’re gathering an incomplete perspective from a small slice of your available population. You may only be getting friendly feedback, false positives and isolated suggestions, which can lead to a learning echo chamber. If the team is solely relying on these sessions to make critical design decisions, you may be designing for a local maximum or toward a specific subset of users.

How to offer meaningful research

Consider the entire meaningful population—like users, stakeholders, influencers, and potentials—to get a sense of the best sampling approach. Depending on the knowledge needed, some studies are best conducted with a representative sample, while others may need to examine edge cases of situations and behaviors. While speaking to existing customers is critical, they’re not the only population from which to draw inspiration and information.


Folklore funnel

This is when there is one very effective storyteller (or source of storytelling) who is usually an influential voice on the team. This may be someone who’s been around for a long time, perhaps a subject matter expert, like a resident doctor at a medical device company, or a senior leader.

They may be great at weaving a tale, or they may lean on a long-standing research document or depiction. The Folklore Funnel is often conjured up as a source of truth when someone needs to know more about the user.

How it's a performance

While storytelling and narrative development are central to research practices, the Folklore Funnel can be an echo chamber. The volume and delivery of the Folklore Funnel can be inspiring and instill a sense of confidence in the team, but sometimes these storytellers feel compelled to say something—even if that something is outdated, not substantive, or based on nothing more than opinion.

Being beholden to one particular source of user understanding can create overconfidence in inaccurate or misleading information. Also, many of these stories are locked in human heads and aren’t socialized equitably across the organization.

How to offer meaningful research

Neutralize over-amplified voices and democratize meaningful practices of sharing user stories. A user story is only as meaningful as the team’s belief in it, which we believe is driven by having credible processes and valid insights to back them up. Stories are there to align teams, so when one person, stakeholder type, or document is the sole bearer of this understanding, it’s a red flag.


Assumption-based artifacts

This is when teams create typical or representative research outputs, like journey maps or personas, based almost entirely on internal conversations. Those conversations may draw on disparate research engagements, anecdotal evidence, quantitative data, or simply the team’s opinion.

Often there is a template for team members to fill in and even a vetting process to ensure each artifact is good enough for sharing with others. Sometimes they’re created to live as a representation of users, but more often they are created to make a point or to ensure the user is considered in a deck positioned for a larger topic or to share with leadership.

How it’s a performance

Personas and journey maps, in particular, are useful mechanisms for communicating user understanding. But if the user understanding is lacking, then other details fill that void: assumptions. A clear indicator that an artifact is assumption-based is if it’s heavy on behavioral and demographic factors and lacks attitudinal and emotional understanding.

These assumption-based tools also lack a unifying anchor indicating the relationships between them, which is critical for understanding how to prioritize design efforts. When built on assumptions, these artifacts tend to lose steam and become vestiges of a moment in time when the team needed to feel connected with users.

How to offer meaningful research

Create operable artifacts that designers can put to work. They should have just enough nuance that designers can leverage their creative energy to design a new world based on those articulations.

Familiar tools like personas and journey maps (not to exclude other design tools) are best built by understanding the moments, places, and touchpoints relevant to the user. Make sure they’re adequately described by the user, not by a proxy of the user’s voice.


Validation voices

This is when a team uses “testing” or “evaluation” moments to learn everything they possibly can about the user. Teams sometimes see usability testing or concept evaluation as another chance to ask users about everything: their realities, beliefs, needs, aspirations, and so on. They include questions that attempt to get at validation, either in a pre-task interview or through how participants interact with a concept as a substitute.

How it’s a performance

Studies that evaluate a concept have different methodological implications than studies designed to describe people’s realities. At its most basic level, the subject of an evaluative study is the thing that was designed—the subject is not the person. As a result, the team uses invalid methods and sampling strategies to illuminate findings about the person.

By the time an evaluative study comes along, the team will have had some meaningful alignment and formulated hypotheses to design against. By asking about the person’s reality at the evaluative stage, a team can inadvertently inject uncertainty and misalignment late in the design process. This is part of the reason research gets a perception of coming in and dropping a bomb on design teams.

How to offer meaningful research

Have a varied research strategy and language to demonstrate that not all studies yield the same types of data. This can be achieved through visual frameworks, more active socialization of learning needs, and also just stronger pushback on demands. Sometimes a team will fill a knowledge gap in a validation moment because it’s their only time to do so—so offer them an alternative.


Needing novel

This is when teams conjure research only in moments of dire uncertainty with the expectation that it will answer all their questions. Teams will put all of their decision-making eggs in one basket, which is usually a singular study with constrained resources and objectives. Essentially, a last resort. When the output of that study is confirmation of “what we already know” or doesn’t expose a “silver bullet,” it’s seen as anti-climactic and not a meaningful use of time.

How it’s a performance

Not every study or research moment is designed to reveal something new. Many forms of data collection are designed to confirm or stabilize insights rather than generate an “aha.” Teams that expect insights to influence big decisions with every study are often doing so because they lack any other meaningful form of decision-making. It puts a target on researchers (who are human, too) and can lead them to less-than-desirable compensations, such as introducing bias or being selective in their reporting.

How to offer meaningful research

Provide another data point for a team to confidently refer to as they make decisions. Stress that some studies can produce actionable insights that will move design decisions forward, while others are there to reveal in-roads to empathizing with human realities and needs.

Research is not the end-all-be-all, but rather the user voice in a broader network of inputs that should lead to meaningful decisions. If you’re being pressed to answer all of a team’s questions with a single study, push back.


Research performances are no less resource-consuming than actual research. It takes real human time and effort to generate assumption-based personas, for example, or to carry on a legacy story that no longer has meaning—not to mention the time and effort it saps from research resources to correct these bad research-like activities.

Research’s reach and meaning are becoming more apparent in organizations, but with that visibility come vulnerabilities to these bad behaviors. It’s up to researchers and their advocates to ensure a story is being told about the cost of a research void, even more so than about the benefits of research abundance.

Take the opportunity to communicate and educate about research’s potential and limitations. Also be thoughtful about the placement of research activities, such as primary studies, alignment sessions, and secondary research.

We all know the immeasurable benefits of research moments and efforts, so it’s up to us to anchor these activities into something bigger. As Jonah Lehrer said, "I thought I was a scientist until I did the manual labor of science." Without the right practices in place, research simply becomes performative ceremonies that “anyone can do.”


Zarla is a Research Director at Craft, a Philly and Boston-based digital design studio. Prior to Craft, she was co-founder of the micro-agency twig+fish research practice and co-creator of the NCredible Framework—a strategic positioning tool for researchers. Zarla is passionate about cultivating empathic and expansive thinking at organizations.
