
It’s Not Research, It’s You! with Holly Hester-Reilly of H2R Product Science

Holly Hester-Reilly discusses how bad research processes can give research a bad reputation and how it can all go wrong.

In this episode of Awkward Silences, Erin May and John-Henry Forster are joined by Holly Hester-Reilly, CEO and Founder of H2R Product Science. They delve into how research can go wrong, how bad research practices can give research a bad reputation, and how the methodology and timeline of your research can complement each other. Holly also outlines the best way to determine the right research method for your product. Tune in for an engaging conversation on research best practices with industry experts.

In this episode, we discuss:

  • How bad research processes can give research a bad reputation
  • The most common ways that research can go wrong
  • The relationship between research methodologies and project timelines
  • Figuring out the right method for your research

Watch or listen to the episode

Click the embedded players below to listen to the audio recording or watch the video. Go to our podcast website for full episode details.

Highlights

  • [00:04:27] Holly’s unique perspective, from academic research into tech
  • [00:07:58] How can research go wrong?
  • [00:10:20] The components of a good research model; what you need to get right
  • [00:14:32] What to do with a research plan once you have it to ensure maximum alignment
  • [00:16:54] How to combat biases in research and questionnaires
  • [00:21:54] The interaction between methodology and the timeline in research
  • [00:24:18] Figuring out the right method for your research
  • [00:31:01] Interacting with stakeholders and organizations for the best research outcome


About our guest

Holly Hester-Reilly is the Founder and CEO of H2R Product Science, as well as a Product Discovery Coach and Consultant for the company. She also serves as an Adjunct Professor at New York University, a Member of the Board of Advisors at Octane11, and a Product Advisor at Ergatta. Needless to say, Holly is an undisputed expert in her field, and we’re lucky to have her on the show!

Transcript

Holly - 00:00:01: So if I just say, “Are you excited about my product?” they're going to be forced into yes or no. And that's not going to give me a lot of good information. But if we ask them, “How much are you excited about something?” So I guess in this case, how much are you excited about this product? Well, that's going to give you more of an opening for them to talk.

Erin - 00:00:28: This is Erin May.

JH - 00:00:30: I'm John-Henry Forster. And this is Awkward Silences.

Erin - 00:00:35: Hello everybody and welcome back to Awkward Silences. Today, we're here with Holly Hester-Reilly, who is the founder and CEO of H2R Product Science. You're a coach and a consultant, have tons of great experience working with all sorts of folks to solve their product and research problems. And today we're going to talk about a little bit of a spicy topic, which is “It's Not Research, It's You.” So this should be a fun one. Really excited to get into it. Thanks for joining us, Holly.

Holly - 00:01:11: Yeah, thank you for having me. I'm super excited to get into it too.

Erin - 00:01:14: We got JH here too.

JH - 00:01:16: Yeah. This topic name keeps making me think of the, like, “It's me, I'm the problem, it's me, or whatever”.

Holly - 00:01:22: Yes, totally.

JH - 00:01:24: Should be a fun one. Yeah. A little self-reflection for everyone.

Erin - 00:01:27: Now we'll have to end with some impromptu Taylor Swift singing. I think it'll be fun for everyone. So today we're going to talk about how bad research and bad research processes can give research a bad name, which is of course not what any of us want to happen, but what can happen unfortunately. But before we dig into that, Holly, maybe you could tell us just a little bit about who you are and what perspective you're bringing to this topic.

Holly - 00:01:53: Absolutely. So I'm going to go pretty far back, which is to my university days. I have a bachelor's and a master's in chemical engineering, and I was in a Ph.D. program, and I did academic research for four years. And so that was actually before I got into the tech world. I was already doing research and thinking about experiment design and communication of research results and how we collaborate to design research together and things like that. And then I moved into tech startups, and I spent five years in early-stage tech startups where we wear all the hats, and one of those hats was research for sure. And then my next five years, I was in high-growth tech startups, and at those, I was heavily involved in research. The first one that I was at was a B2B ad tech company. And when I joined the company, there was only one full-time in-house designer, let alone there being any UX research. So I was doing the research myself, and then I ended up managing the design team and leading that effort there for a while. That company grew from 100 million to 500 million in annual revenue and about 140 employees to 900 plus employees in the three years that I was there. And so it was an incredible journey, and I got to learn a lot about research and make a lot of mistakes and learn from those. And then I went to Shutterstock, and at Shutterstock, I got to dive into more strategic research where it was less about usability and more about how does this fit in the market landscape. And then it became about usability once we had the vision and the strategy for what we were building. And what we were building was Shutterstock Editor, which is now part of a suite they have called Creative Flow. And it was the first time that Shutterstock was building a tool that was not just about the delivery of stock assets but actually about the editing of them. And so we had to do a lot of user research when we were doing that process to make sure we were building it in a way that could be used because it was very new for our team to be building something like that. And then from there, I started H2R Product Science, and I've spent the last six years diving into research at companies everywhere from early-stage startups to enterprises like Capital One and Unilever and everything in between. And so I've gotten to see research in a lot of different contexts. I've done a lot of work on helping teams learn how to get more value out of the research that they can and should be doing.

JH - 00:04:31: Yeah, and if people haven't checked out some of the stuff that Holly puts out from H2R, definitely worth a check between the newsletter and the podcast. You do a lot of great material there. So just a quick plug. I think maybe a general premise to introduce here is the reason you do research is you want it to have an impact on the work you're doing, right? You want to deliver a better product to users, drive more impact to the business. And that's pretty true across any organization, whether you're a really small startup or a very large enterprise, there's a reason you're doing research to have an impact. But that's not always the case. And so just maybe to kind of frame the discussion for today, what are some of the reasons that research kind of goes wrong or doesn't have the impact that people would hope it to?

Holly - 00:05:08: Yeah, so I find that there are really two buckets of problems that occur when we're talking about getting the ROI from research. The first bucket of problems is about not doing the research correctly itself. So that's about getting faulty or biased results from the research and the ways that that can happen. So how do we conduct good-quality research? The second bucket is about how we actually communicate and make decisions based on the research. So there are situations where teams are conducting good quality research, but that good quality research isn't being optimally used by the organization. And that's also a time where you're not getting the ROI out of research that you want. And so those are the two main areas that I see problems in, and I'm happy to dive more into those.

Erin - 00:05:57: Yeah, and I think that's a great way to frame it because good research that doesn't get used isn't really good research, right? And so you definitely want to be thinking about, on the one hand, garbage in, garbage out - hopefully no garbage here. And on the other hand, there's the tree-in-the-forest problem: if wonderful insights fall on deaf ears, or no ears at all, then they have nowhere to go and be useful. So let's jump into both of those. Maybe let's start with doing the wrong research in the first place. Where should we begin? Where can we go wrong?

Holly - 00:06:29: Yeah, absolutely. So I think one of the first places that people get this wrong is not stepping back and making a research plan before they actually execute the research. And so that is something where hopefully none of the listeners on this webinar have done that because they are all avid research aficionados. But I've certainly worked with teams where people just jump straight to, well, we always do user interviews. So we're going to do a user interview or we're going to ask them the questions that we want to know directly, like, would you buy this in the future? And it's really poorly formed research and so we can talk more about how we do a better job than that.

JH - 00:07:16: Well, having a plan seems better than not having a plan, so I'll give you that. But what are the components of a good research plan? What are the things you want to get right in that plan to kind of frame up what happens next?

Holly - 00:07:26: Yeah, that's great. So one of the first components that I always put in a research plan is the goal of the research. What questions are we trying to answer for ourselves? And when I write those questions down, I word them the way the team thinks about them. So what question is the team trying to answer? So, for example, we're often trying to answer, "Can the users get to this outcome with this product?" That's a very usability-focused question. We're also often trying to answer, "How are the different segments of the audience experiencing a problem area differently?" And that's a more strategic question to be asking. And in both of those cases, we don't necessarily want to just directly ask the user that question. And so having those written out questions that we're asking ourselves in our own language first is a really great place to start with your research plan. But then on top of that, you then need to decide what are the methods and who are the participants, what is the timeline, and what the next steps look like. So those are the components that I usually put in a research plan so that we can cover what are we trying to do, how are we going to do it, who are we going to talk to, and what decisions is this going to lead to? Which I think is a really important aspect when we're thinking about making research effective, is to tie it to the decisions that the research is going to influence. We're not just doing research for the sake of learning, which is a very valid goal on its own, but typically in business, we're doing research to make a smarter decision.

Erin - 00:09:11: Practically, when you do this, do you have like a template you like to use? Or when I think about plans, right, what do they say? There are lots of aphorisms about this, but plans need to change, right? The best-made plans. And so, how do you think about the right level of fidelity for a given research project? And how do you think about making a plan adaptable as it needs to adapt?

Holly - 00:09:34: Yeah, so I do have a template and it is less than a page. So the way that I think about it is to write a research plan that's a page or less, so you're not going into a ton of complicated details, but you're stating out the goals and the way that you're going to get there. And then in terms of keeping it fluid, typically we'll conduct research in chunks. My team usually does chunks of three to five interviews at a time. If we're talking about user interviews as the form of research, at the end of the, let's say, five interviews, we then assess: have we answered the questions at the top of this plan or have we not? And if we haven't answered those questions yet, then we're going to keep going with that plan. But if we have answered those questions or we've generated new questions that are more important, then we're going to update and make a new interview guide or tweak the interview guide that we've got, tweak the plan, and iterate from there.

JH - 00:10:35: What I like about the way you're describing the plan here is it touches obviously on a couple of the "doing research well" pieces, right? Like, are we choosing the right methodology? Do we know who we want to speak to? But it also lets you start to jumpstart the other category. You spoke of making sure that people are on board with this research and actually influenced by it, because if it's not on the right timeline, then it's probably not going to have as much impact. Or if it's not answering the right questions or informing the right decisions, then it's going to kind of fall flat as well. So with an eye to that other kind of category, how do you make sure that people are on board with the plan once you've made it? I assume you kind of need to syndicate it or get buy-in or some sort of review, or like, what should people be able to do with the plan once they've made it, to maximize or make sure that there's actual alignment around it?

Holly - 00:11:15: Yeah, so what I recommend is sharing it in a couple of different ways. So for key stakeholders, let's say for example, you're a member of a product trio of a product designer, a product manager, and an engineer, and maybe you have a researcher as well, or maybe you are the researcher. The very first place is sharing it with the other members of that team group and making sure that they have full collaboration on it. So I don't just say, "Hey, I've made a plan," and deliver it to them. But rather, for those key members of the research group, we'll make the plan together. We'll brainstorm the questions together, we'll talk through it, we'll create it together. And then the next level out from that is sharing it with the wider product development team and sharing it with executives and stakeholders. And those things I will typically include in other existing communication formats that we have for those stakeholders or team members. So, for example, if I've got a monthly steering meeting for a product initiative that is between me and the executives, then I'm going to bring to that meeting, "This is what our plan looks like today," probably in a slightly different format, and maybe it's in Slides or something like that. On the other hand, for the development team, I'm probably going to bring it to Sprint planning and say, "Hey, while you're doing this coding, here's the research that we're working on."

JH - 00:12:48: Yeah, that makes sense. In terms of what goes into a good plan, there's going to be probably some broad consistency across teams and organizations. Like, you want to cover these bases, but how you actually share and syndicate the plan, you're going to want to tailor that to your organization. If you're an organization that's really formal with roles and responsibilities, you're probably going to play into that. If you're a little bit looser on process, you probably don't want to bring that in there. And then, to your point, leveraging existing touchpoints sounds like those are all some of the key parts there.

Holly - 00:13:12: Absolutely.

JH - 00:13:13: To go back to doing poor research. So we start with the plan, we have a plan, we feel good about it. What can go wrong from there? How can you still end up with maybe insights that are not that useful?

Holly - 00:13:23: The biggest one is biases, cognitive biases that come into play and people not putting in checks and balances that will help them combat those biases. So one of the strongest ones is confirmation bias. And the way that I try to combat that is to have multiple people involved in the interview and the synthesis of the research so that I can bring in some people who are going to be a little bit less invested and a little bit more objective about what they see and what they hear in the research. So I want to bring in somebody who's going to be more skeptical, who's going to be more likely to say, "Well, actually, I didn't think the user was excited enough about this," and that will help me fight that particular bias. Another cognitive bias that comes up really regularly is the idealized self and the difficulty that the research participants have in being able to predict their own future. And that is a really important cognitive bias to keep in mind because it's too easy for researchers to ask questions about what a person would do and think that they can use that information when really, that's in my mind, worthless. That said, if they actually outright tell you that they're not going to use your product, then you should believe them. But if they tell you that they are going to use your product, then you probably can't believe them unless you've got other data and other evidence around it, including their actual visible excitement at what you're building or solving the problem that you're solving.

Erin - 00:15:14: So there's research questions and there's research questions, right? What are the questions our research is trying to answer? And then there are the interview questions. What are the questions I'm actually going to ask to get answers to those big research questions? Different mistakes you can make for each. You're talking about some of the mistakes you can make in an interview setting. "What will your future behavior be?" is, of course, a speculative question and a classic example of a generally terrible kind of question to ask. What are some examples of bad research questions in the sense of the objective of the research that we're actually doing?

Holly - 00:15:50: Yeah, so some examples of bad research questions would include, so in addition to anything where you're saying "What would you do in the future?" Also, if you're asking somebody a direct yes or no question instead of a question that they can elaborate context on. So if I just say, "Are you excited about my product?" They're going to be forced into yes or no, and that's not going to give me a lot of good information. But if we ask them, "How much are you excited about this product?" Well, that's going to give you more of an opening for them to talk and to share context that you can then use to filter through and assess whether they're actually feeling what they're saying. That's one of the areas where things can be done poorly.

JH - 00:16:45: I know another one we had talked about previously was around your methodology. And so if you pick kind of the wrong approach to answer the question or the decision you need to inform, that's not going to be great. How do people get that right? If you don't have maybe a dedicated user researcher on your team or you're only familiar with certain methodologies, how do you pick the right tool for the job? And maybe just to tack onto that, how do you think about how methodology and timeline interact? Because I assume there's probably some trade-offs there as well to consider.

Holly - 00:17:13: Absolutely. So in terms of getting the right tool for the job, I don't think there's any way around it: whoever is leading the research needs to familiarize themselves with different systems and methodologies that one can use to do research. So in addition to doing a remote user interview, which is, I think, a very common form of research these days, there are also card-sorting exercises for understanding information hierarchy and how people relate concepts to each other. There are also compare and contrast exercises where you give the participants some ideas that they're going to compare and contrast for you so that you can hear more detail about how they think about those different things, which is generally more valuable than if you give them one specific thing and ask them to react. There's also actually being in person; there are times when you have a good reason to do an interview in person rather than remotely over the computer. One of those times is when the usage of your software or your product depends heavily on the environment. So if they are using this tool while they're breastfeeding, for example, they're going to have some constraints around what's possible, and being able to actually observe them in that environment is going to be more valuable than just talking to them outside of that context.

Erin - 00:18:50: Yeah. And that's where I think your under-one-page sort of research plan is so helpful, just putting those basic thoughts together of what are we trying to learn? How are we trying to learn it? What were some of the other key components?

Holly - 00:19:03: What are we trying to learn? How are we trying to learn it? Who are we going to talk to?

Erin - 00:19:06: Who are we going to talk to? So that's part of the how, right? Yeah. The who is going to be part of the how.

Holly - 00:19:11: What decisions are we looking to make based on this work and what steps are we taking next?

Erin - 00:19:19: And what steps we're taking next? Great. Yeah. Because that's going to tell you if your methods match the objectives, you can kind of see where the plan might fall apart if it does before you get too deep into it. I think what we've seen a lot in the industry is a lot of researchers, not all, certainly, but a lot are kind of, to some extent, mixed methods researchers. Right. Generally using more than one method. And oftentimes that means maybe using a method you aren't super familiar with or haven't used in a long time or researchers are even kind of making up new methods, right? Combining old methods in new ways and things like that. So are there any resources you like to find the right method for different questions or novel methods for answering interesting research questions?

Holly - 00:20:07: Well, I guess I'm a little bit old school in that I still really like Nielsen Norman Group, and so I go to them for definitions of research methods and when you should use different ones. I think they have a good overview of different options. And so I'll often turn to that. And then honestly, things like everything that you guys put out - like, you're very focused on research. I also follow Teresa Torres and her content around continuous discovery, and I use those materials to inform my ideas about new ways to do research.

JH - 00:20:50: To go back to the piece about methodology and timeline, what are some tips for people who are maybe in a tough spot there, where some of the stakeholders want this decision or this information as soon as possible? But the researchers are really indicating that the best way to get a signal here would be a diary study, and we're going to need at least three or four weeks. Do you maybe have some heuristics for when this is a case where we should really fight for the best methodology and get the best insights, versus one where we should be a little bit more flexible and maybe do something a little quicker because it's more impactful that way? How do people navigate that kind of calculus?

Holly - 00:21:25: Yeah. So the way that I think about that is to go back to that question of what decision are you trying to make with this? And then ask yourself, how important is that decision, and how much impact on the success of your product can that decision have? And is it a one-way door decision or a two-way door decision? Can we reverse this decision if we want to, and how expensive would it be to reverse it? So generally, it's one of the reasons it's so useful to tie your research goals to the decisions you're trying to make with it, because then you can use those decisions and the importance of them to help you decide whether you should go to bat for a better study or whether you should say, you know what? That scrappy two-week version is fine. And one of the key things that I'm always reminding people, and I feel this very deeply from my own background, is that when we're doing research for business, we don't have to have the rigor of academic research. We're not trying to publish academic papers on this. We're trying to make a good business decision. And so there are many times when directional research that gives us some indication of this versus that is enough to make a decision, even though we don't have statistical significance or a high volume of participants.

Erin - 00:22:52: Trying to reduce uncertainty, not eliminate it altogether.

Holly - 00:22:57: Exactly.

JH - 00:22:58: Yeah. There's a thing I send to my team a lot, an article. It's from somebody called Brandon Chu. It's on Medium. I think it's called "Making Good Decisions as a Product Manager." But it's very general about making good decisions. It's not specific to PMs in any way, but he does a really good job framing the tradeoff of decision speed versus decision accuracy. And sort of the main hypothesis is most people think that decision accuracy or being right is the most important thing, which is kind of obvious, but there are a lot of cases where that's actually not optimal, where decision speed and being mostly right is better. So that might be a good resource for people to check out as well.

Holly - 00:23:31: That sounds fantastic.

Erin - 00:23:32: Yeah. Reminds me of another episode we recorded on a similar kind of spicy counterintuitive take, which is like when you should not do research. Right? And I would argue sometimes if you're shipping a test or shipping an MVP, it is a form of research. Right? If it's providing learning opportunities that you're going to act on, it's a research method available to you.

JH - 00:23:54: It applies to quantitative stuff too, right? I think a lot of teams default to a 95% confidence level for statistical significance on their A/B test, which is a great standard. And obviously, if you can get it and you have the volume, that's great. But lowering it to 90, 85, or 80: how many decisions do you make day to day where you're 80% sure you're not wrong? Probably not that many. That's actually still pretty good. And for a drug trial or something like that, obviously not the standard we want to accept, but for some business decisions, you can probably get away with it sometimes.

Holly - 00:24:22: Absolutely. And decision speed is really important. There are a lot of times where the amount of the company's resources that are being held up waiting for that decision are expensive. And there's a lot of times where the opportunity costs of what you could be getting done or what you could be building are also expensive. And so there are a lot of times where it makes sense to do that 80% confidence decision.

Erin - 00:24:48: Yeah. And I think that's a good point: sometimes it's not "should we do research at all?" but more often "what kind of research?" How much time and budget should we spend on it, to right-size the research for the level of risk, the level of opportunity, the level of resources, and whatever the constraints might be?

Holly - 00:25:07: I was just going to say, one thing that makes me think of, and that I want to call attention to, is that there is absolute value in super scrappy research as well. There were times when I was working in the AdTech company where the research that I did was just having instant message conversations with clients and making a decision that afternoon. And there are times where that is better than not talking to clients at all. But it's something that you can get done in an afternoon and keep going, and that is so valuable to be able to do.

JH - 00:25:42: One thing I was curious about in terms of having an impact with stakeholders and having these insights impact decisions is every company is a little different in terms of their culture or sort of buy-in or support of research. Some teams just get it, and they understand that these qualitative insights are really impactful even though they may only talk to five people, the classic cliche, and that that can be a really powerful signal. Other teams are a lot more skeptical, right? They may be a very quantitative team, or they haven't done much research. I have to imagine that impacts how you have an impact. And so, how do people maybe take the temperature of their organization's support for research and qualitative insights and use that to inform their approach a little bit?

Holly - 00:26:24: Yeah, so I think what you said about taking the temperature is a really good point. The first thing that you need to do is pull stakeholders in early and get a sense of what questions they have and how comfortable they are with the plans that you have, and to really understand: do the stakeholders believe in the need to learn through research or do the stakeholders think that they already know the right direction to go and the research is just a waste of time? And you need to know that before you can craft the right message to the stakeholders in order to bring them along on the journey with you. But no matter which of those camps the stakeholders fall in, I always begin by having individual conversations with the key stakeholders, particularly any that are somewhat difficult to get buy-in from, so that I can hear from them what the questions and concerns are that they have and what they are thinking about themselves. And I can incorporate that into the research that we're conducting. So, that's one of the ways that I start off early, taking the temperature for the different audiences. And then a lot of times, you will have a mix in the room where you're doing a presentation. You won't have everyone in this room fully gung-ho about research, nor will you have everyone in this room think it's worthless. You'll have some mix. And so usually what I do is some form of making the case for research in as succinct a way as I can, so that I can move on to the research itself. But to actually stop and address it and say, "What are we doing? How are we planning to do it? And how will it impact the decisions that we're going to make?" For example, I recently was working with a client, and we're doing a lot of discovery work on what may turn into a tech migration but is currently just an analysis of an unstable tech system. And one of the things that we learned in meeting with the stakeholders for the research, well, to back up a little bit to share the story better: we started this project a month ago, and we dove in knowing that most of the stakeholders were really asking us for data, and they wanted numbers. But we also said, "Well, we're really going to need the qualitative side too. We're going to need to understand the why behind the numbers for this to make sense to us and for us to start to even know what questions to ask the data. So we need to do both." And as we were getting started, we had very limited access to the stakeholders for that research. And as we started to understand more through the qualitative interviews that we were doing about how the people inside the company felt about it and about what was going on in the market, we got to a place where we said, "Okay. Now we really need to talk to the key executive stakeholders of this research independently, so that we can start to build a relationship with them where they're going to trust us." And when we went and had that conversation literally this morning, we talked to the CEO, and he said that one of the things that was really important for the success of the project was that we explained to them what our methodology was, because they wanted to know, "How are we going to answer these questions? And how can they trust the answers that we bring?"
And I think a really big part of doing good research is building trust with the stakeholders who need to make decisions based on that research or who need to follow along and improve the decisions that you make based on that research. Because if people are just going to discount the work that you do, then what's the point? But I think a lot of us have been in situations, and I certainly have, where people were discounting the work that we did. Or I've worked with clients where the work they were doing was being discounted or sort of glanced at and given cursory attention, and then the decisions that the HIPPO wanted to make were the ones that got made. And so there's work to fix that too.

Erin - 00:30:39: And probably an obvious point, but when you start to win over a person, they have friends. They talk, and then that's how your influence can really be spread in an organization, right? You don't necessarily have to win over every stakeholder one at a time, but you start to build that trust. It's almost like a small viral network effect, if you will, within the organizations you're working in to build that trust and to just be much more effective, little by little, and then hopefully all at once.

Holly - 00:31:09: Yes.

JH - 00:31:10: We do have a couple of good related questions in the Q&A, so maybe we'll take one of those.

Holly - 00:31:14: Jump in.

JH - 00:31:15: Cool. Let's take the top one here. So, from an anonymous attendee; it's a good question, so thank you to whoever asked. Do you have thoughts on over-operationalizing research? Is this possible? How do you avoid those pitfalls? So, can you take this process and planning too far?

Holly - 00:31:28: Yes, you absolutely can. You can make it too bureaucratic. So if you're working in a company that has three designers and no user researcher, then you don't need to have a process where the research plan needs to get approved, the interview guide needs to get approved, and the synthesis needs to get reviewed before it gets shared with stakeholders. Those things can be important, but not every organization needs them. And it really depends on the culture of internal communication within the organization. Is it a place where before you present to the executive leadership, you have to have everything buttoned up and feel 90% confident in your results? Or is it a place where the executive leadership wants to roll up their sleeves and dive into the details with you and see the results early, even though they know that they can't actually make a decision based on it yet because it's still in progress? So I think there's a lot to understanding what kind of company you're in. There are times I've worked in larger companies where we did need to do some of those things. We did need to have some level of, okay, well, we do have a UXR on the team or we've got a head of research above the designers. And maybe research is still done by design and product, but there is somebody who's an internal SME on research who acts as a coach and really democratizes the research. And one of the things that I think ties into this is the idea, which I've found true in almost every situation, that it's better to have more coaches than bureaucrats. So if you want to be encouraging the adoption of research in your organization, but you don't want it to be seen as something that slows things down, then you really want to position any of your researchers or your research guides as coaches who are helping to facilitate the process and to make it stronger and better, but are not just reviewing and approving things.

Erin - 00:33:42: Right. The sort of "yes, and," or being an accelerator instead of a blocker, because we all want to be moving forward in our organizations. Don't be the person stopping progress.

Holly - 00:33:54: Yeah, absolutely.

Erin - 00:33:56: Let's jump into another question. What about once research has been conducted and it's time to present findings or insights back to stakeholders, how can this go wrong? And what are some ways to combat that?

Holly - 00:34:08: So one of the biggest ways I see things going wrong is when the person who's presenting the research presents all their conclusions without the primary evidence. So, for example, I've seen people who were full-time researchers bring back work where they're making all the statements about what conclusions they've come to because of the research, but they're not bringing the stakeholders along the journey so that the stakeholders can come to those conclusions themselves. And what that does is it puts the stakeholder into a position where they often are fighting with their ego. They might not be conscious of this, but they'll be fighting with their ego because they're looking at what this other person did. And particularly if it disagrees with some belief of theirs that they've already got about the customers or the strategy, they're going to look at that conclusion that disagrees and then they're going to question: how did you get there? And can I trust that conclusion? And so what you need to do is actually make sure that you're bringing the primary evidence along, meaning the quotations, the videos, the written words from the customers that led to the conclusion that you made. And a lot of times, one of the things that I've found to be really helpful is to actually guide the research stakeholders through that process in your research synthesis work. So, for example, starting off with, "Okay, here are some of the quotes that we heard, and watch this video," and then facilitate discussion and say, "What are the insights that you get out of this?" and then share what insights we as the researchers got out of it. I found that that can be really helpful because it also helps bring the stakeholders into the process and make them feel like they can take some sense of ownership over the results of the research. And I know that as the researcher, maybe there might be some feelings about that. But in the end, the more ownership that other people take over the research, the more they feel a part of it, and the more likely they are to incorporate it into their decision-making. And that's what you want in the end.

Erin - 00:36:32: Practically, I have a question. If you're putting together a keynote or some sort of presentation document, right, you probably want some sort of TL;DR executive summary. Just give me the highlights, which probably involves the conclusion, the decision, and the thing I recommend doing because of this. How do you practically do that in a way that also folds in those insights, that primary evidence, right? So you're not just skipping to the thing I want you to know. You're doing what you recommend while at the same time keeping it concise: just give me the facts and what I need to know. Is there a way to thread that needle?

Holly - 00:37:13: Yes. I think this does go back to that statement about what kind of company culture are you in. Some company cultures are going to be ones where people really want the conclusions more than they want the backing evidence. And in those cases, you may have to lead with conclusions. But in most cases, I find that my stakeholders are interested in the journey to the conclusion. And so I usually create a deck that walks them through from what was our methodology, what quotes and videos do we have from the research, and then what takeaways do we have from that, and then what do we think our next steps are. Are we recommending more research, are we recommending decisions be made, or is there some other thing that's needed to make a decision and what is that blocker? And so I'll usually have that flow to the synthesis.

JH - 00:38:07: Yeah, and it seems like if you are going to have a summary or a more concise version or whatever, just make sure that the underlying supporting evidence is easily available, right? Because sometimes a good way to hook people in is if they see the conclusion and they're like, “Wait a minute, I disagree with this” or “I'm skeptical,” and then they'll actually go and review the clips or the quotes or whatever, see it, and then they can kind of figure out how you got to that conclusion as well. So if you are going to go the more concise route, still making sure that the key artifacts and supporting evidence are there, I would imagine, is also helpful.

Erin - 00:38:35: Actually, I love how Morgan has been doing this here at User Interviews too, just to plug her research, which is using Miro, which I know is not to everyone's liking, but it's a tool and we like it. You can see everything kind of all at once, and she'll literally draw out, like, choose your own adventure. If you just want to look at three sections, look at these three, so you get that kind of executive summary. But you can kind of see the universe of what's available, including those primary pieces of evidence, including those learning objectives and key takeaways. So it kind of flips the forced linear nature of a deck on its head a little bit in a way that I think is kind of cool in terms of meeting some of these objectives we're talking about.

Erin - 00:39:15: Sounds like we have more questions to answer.

JH - 00:39:17: Yeah, do it. Sometimes we're working on a project that's both focusing on a concrete feature or the development of a service, but also wants to keep the overall broad perspective and this sort of system design perspective as well. How do you balance the two aspects, and what are your thoughts on how to keep both in mind when planning research initiatives? So, to play it back: I think we have a specific thing we're trying to answer with some usability or some design research, but also we are trying to get a better broad sense of how people feel about our system. Can you put those things together or does that need to be done separately? What do you think?

Holly - 00:39:49: Yes, so I do put those together sometimes. It depends on how much material we need to cover with the research and how much time we have with the participant. But I absolutely think that you can put those together. I'll typically start with some broader questions that help me understand whether, and to what degree, the user shares the problem that we're trying to solve. And then move into usability testing, or testing out whether the solution works for them, or asking them more questions about the constraints they have for what solutions would work for them. And then often open it back up towards the end to the broader questions of "Okay, well, now that we've had you go through this flow, what is the bigger picture here? How does this fit into your life or your day-to-day?" And maybe at that point, I'm often mixing methods. If we zoom out from the particular interview, a lot of times what I'm doing is I'm using the interviews to get qualitative information. That gives me a sense of what are the possible scenarios that we're going to encounter in our user base. And then, I'll often use a survey that's designed heavily based on the language that we heard from the participants in the interviews, to help us quantify how many other people match what those participants said, and that can often help with the bigger picture stuff as well. And then if we're looking at specific designed systems and trying to test those out, that is an area where I don't see the two as being so terribly related in terms of the research that we're doing. So I would be entirely comfortable with them being separate interviews, even with separate people. But if we don't have so much we need to get out of an individual interview, then we can put that in at the end and be like, "Okay, now let's look at how our choice of a drop-down list versus radio buttons, various decisions like that, works for the user."

Erin - 00:41:57: We've got another question, a newbie question, she says, “What skills does a newbie UX researcher need to have or must have, and to what extent?” So there are so many methods, as we talked about before, and a lot of learning potential within the field. What are the kind of fundamental must-haves that you should probably be strongest at and start with for someone who's new to the field?

Holly - 00:42:22: Yeah, so I would absolutely say for qualitative research there are two must-haves. One is being able to do background, contextual, ethnographic type interviews where you're having a conversation and you don't have any kind of prototype or solution design in front of you and you are just trying to understand the customer. And then the other is where you are solution testing. Maybe you're testing the usability of a particular workflow or maybe you're comparing and contrasting different solution approaches, but to be able to put solutions in front of a customer and know how to conduct a usability test of those solutions. I think those are the two qualitative things that I would absolutely start with for a newbie UX researcher. And then I would also want to make sure that the UX researcher is comfortable with quantitative work: being able to understand what it means when we look at product usage data and say what percentage of our users engage with this part of the website or the application, for example.

Erin - 00:43:28: I like that you're not just letting our new researchers off the hook on the quant side.

Holly - 00:43:31: Having that baseline understanding, right. And often it tells you where to look, too, which is very important.

JH - 00:43:40: The thing I had a question on was you work with a lot of clients at all sorts of different companies. Are there any kind of traits or things that stand out from people working at companies where the research is having a ton of impact versus the clients where maybe the research is falling flat more often? I know we've talked about a lot of the kind of more general things, but are there any more squishy or nebulous differences between those groups that the contrast brings up?

Holly - 00:44:08: I guess the thing that we haven't really talked about today is the attitudes of the people in the company. The difference I see is that the high-impact research teams have already done the work to lay the foundations for research to be valued and accepted within the organization, while the low-impact research teams are stuck in a rut where that research isn't being fully valued by the organization. And I think the beautiful thing is when a company manages to transition from the latter to the former. And I've seen that happen, and I often help make that happen because that's a big part of what we do. But in order to make that happen, there has to be a critical mass of people who are open to change and who are open to unlearning. Unlearning is the idea that you have to unlearn things that you used to think worked. And that's often the case, especially in larger organizations where the higher up in the organization someone is, the more likely it is that they built their entire career before a lot of these methodologies became commonplace. And they might not only not have the experience with them, but also have found significant success without them and therefore think, well, I don't need that. And so it's very hard to convince them that they do need it if they're not a person who is open to unlearning. And that's something where I do honestly coach people and help them understand whether they're at a company where there's a likelihood for a successful change or not. And in my coaching practice, there are absolutely times when I tell a person, look, you want an environment that you're not going to find here anytime soon.

Erin - 00:46:03: In those entrenched organizations, or organizations where this sort of trust hasn't yet been built for one reason or another, do you find that companies are more likely to be willing to unlearn if they're in a time of crisis? We have to change something, whether it's the economy or COVID. Or, I was going to ask, do they just get really conservative and lean further into their old ways because change is even scarier in uncertain times? I could see it going either way.

Holly - 00:46:34: I think both things happen. However, what I can say is that it is extremely rare to come across a company with that entrenched, research-devaluing mindset that changes it without a crisis. There are plenty of companies that go through a crisis and don't make it out. But in order to make it out of that research-devaluing mindset, you almost inevitably need there to be some kind of company crisis that's pushing you to change.

Erin - 00:47:09: So don't manufacture a crisis for your company. But if you find that there is one, and I think research budgets are in jeopardy at a lot of companies right now, and democratization is popular in a whole new way for new reasons, there are some opportunities too. And I hear you speaking to maybe what might be one of them in some organizations: look, we've got to innovate, we've got to figure out how to do more with less, how to reach new kinds of buyers, whatever it might be. It could be a good opportunity for some folks.

Holly - 00:47:44: Yeah. And I will say that one of the most common things is competition. Just the competition is changing. The competition is heating up. There's a new player in the market and we're losing market share. If that's happening at your company, use it.

JH - 00:47:58: I forget who this gets attributed to, but it's like the old cliche of “Never let a good crisis go to waste,” because it is actually an environment where change suddenly becomes very possible, when usually people are a lot more rigid. Cool. There is another question here about when you're presenting research insights. How do you deal with people's biases? So salespeople who also talk to users and have a different opinion on topics that you're sharing info about, any thoughts on how to defuse that?

Holly - 00:48:21: Yeah, honestly, the best way to defuse that is before you even get to that conversation. So it's to have brought them in early so that you understood from the beginning what their perspective was, what they thought you were going to learn from the research and you were able to address their concerns or skepticism. So I tend to make sure that I am close with the head of sales and some members of the sales team so that I can really hear their side of the story, what they're hearing. And if I'm seeing differences in what I see from what I hear from them, I want to be talking to them and engaging with them outside of a big group meeting to really understand what could be behind those differences and to start to lay the foundations for them to understand why we might follow what I'm hearing over what they're hearing. And I do think that is often necessary, because their biases can be sub-optimal for the company. And so we do need to convince them to come along with us, but the best way to do that is to involve them early.

Erin - 00:49:33: We just had another one come in. We've got about two minutes left here, so maybe we can power through rapid fire. Two questions. I will waste no more time. Here we go. You kind of talked about this, but what if stakeholders don't have time to review research and just want to see the results, like the designs? So we talked a little bit about seeing the results in the form of recommendations or insights, but I guess, maybe some of the designs that were compared or the prototype we ended up going with, that sort of thing.

Holly - 00:49:59: Yes. So if stakeholders don't have time to review the research and they just want the results, that is a place where, in some ways, I'll sort of do the TL;DR of “Hey, these are the designs we came to,” and then have an appendix that says, “Here are the results that led us here.” Usually, you can at least slip in a slide where you're like, here's the methodology of how we got here. And then maybe you don't walk them through all of the research, that's okay, but at least let them know that it was done in order to get to that spot.

Erin - 00:50:27: Let’s see, so the advice of bringing people from sales, et cetera, in early can be difficult to scale in a huge organization. Think Fortune 20, 70,500 employees. How do you navigate this?

Holly - 00:50:37: Yeah, so that's a really good point. And I think that it is something where, depending on how high up in the organization you are, you're going to be relying on your boss or your boss's boss to have built that relationship and helped bring sales along. I think there are absolutely situations where it's not terribly feasible for you to be doing that yourself. But if that's the case, I would be advocating with my boss. I would be saying to my boss, “Hey, I'm blind about what sales think on this, and I need your help to be able to answer those questions for myself so that I can make sure that we're impactful and they receive our research well.”

Erin - 00:51:15: Holly, thank you so much for being a great guest. This was so fun; I look forward to having you on again sometime. And for anyone with us or listening, we will broadcast this as a podcast, I think in the next week or two, and send the recordings out and all of that. So thank you so much for joining us.

Holly - 00:51:34: Yeah, thank you guys as well. This is really fun, thank you to all the people who came.

Erin - 00:51:40: Thanks for listening to Awkward Silences. Brought to you by User Interviews.

JH - 00:51:45: Theme music by Fragile Gang.
