
Discovery Research and Interviewing Skills with Maria Rosala of Nielsen Norman Group

Maria joined us live to explain why discovery research is non-negotiable, how to conduct great interviews, and her top tips and tools.

Are your research discoveries really discoveries? Or are you collecting validation for your own assumptions? 

That’s just one of the big questions Maria Rosala, User Research Specialist at Nielsen Norman Group, tackled in our second-ever live “podinar.” Maria shared her expertise on conducting thoughtful discovery research and user interviews.

Listen to (or watch!) the episode to learn why the discovery process is a non-negotiable part of doing user research, Maria’s favorite discovery research methods, how to get stakeholder buy-in, and tips for conducting better, more insightful interviews.

📖 UX Research Field Guide: Discovery Methods | 🚀 User Interview Launch Kit

Watch or listen to the episode

Click the embedded players below to listen to the audio recording. Go to our podcast website for full episode details, a video replay, and transcript.

Highlights

[00:01:33] What exactly is discovery research?

[00:04:41] Are your discoveries really discoveries? Maria explains what many research teams get wrong about the discovery process.

[00:12:16] The importance of evangelizing the discovery process to get the buy-in you need.

[00:16:40] Does discovery research have to be a big, lofty undertaking? Maria shares how teams can adapt the process to the needs and constraints of their project.

[00:24:25] How do you know when you’ve gathered enough insights? How many people do you need to talk to?

[00:30:02] How to use an interview guide (not a script) for better user interviews.

[00:32:04] Recommended tools and processes for analyzing discovery research results.

[00:34:36] Discovery research for B2B software.

[00:40:04] Measuring the ROI of discovery research.

About our guest

Maria Rosala is a User Experience Specialist at Nielsen Norman Group where she plans and executes independent research for NN/g, and leads UX training courses to help teams uncover insights about their users, design better experiences, and improve their own research and design practices.

Prior to joining Nielsen Norman Group, Maria worked as a UX Researcher for the UK government at The Home Office, where she carried out research in the UK and internationally to improve digital products and services. Maria holds a Master’s degree in Human-Computer Interaction with Ergonomics from University College London (UCL).

Transcript

Maria: [00:00:00] It shouldn't be seen as something that is like a tick box exercise or something that, you know, is slowing us down. In fact, discoveries can often speed you up, 'cause they help you answer, you know, really poignant questions that help you think about how best to design something.

Erin: [00:00:31] Welcome back to Awkward Silences. Today we're here with Maria Rosala. She's a UX specialist at Nielsen Norman Group. Nielsen Norman Group are authorities on all things UX research and user experience design. And Maria is an expert on discovery research and on interviewing skills in particular. So those are the two kinds of things we're going to talk about today. And we know you have lots of questions about those, so we'll make sure to save lots of time for that at the end as well. So, Maria, thank you so much for joining us.

Maria: Thank you for having me. It's a pleasure.

JH: [00:01:12] Yeah, it feels like we're doing some discovery about discovery. So this will be fun. Everything's meta with us.

Erin: [00:01:18] Always keep it meta, always. Great, Maria, well, thanks again for joining us. And just to get things started, let's start with sort of the biggest, most obvious question, which is: What is discovery research? What is it doing for teams?

Maria: [00:01:33] Sure. Yeah. So discovery research is about investigating the problem space.

So what we mean by that is it's about going out and understanding how, you know, certain problems affect people. Because being designers, we really want to design products and services to actually help people to do things. We don't just create products and services just because we have a desire to create them or because, you know, they're aesthetically pleasing to us. The best products and services out there are ones that satisfy a need that people have.

And so discovery research is all about understanding the people that might be consuming or using your product or service. How do these specific problems come about, how do they deal with them, and what do their lives look like? What are their specific desires or motivations in solving these particular problems, and perhaps what workarounds do they have? So it's really dedicated to understanding how these problems affect people, on a larger scale, before jumping to looking at solutions. So when we talk about discovery research, we're typically talking about exploratory research methods: methods like user interviews, which we'll talk about, and ethnographic-style research.

So research that involves going out and observing people in the field. You might also be carrying out diary studies or focus groups, but these are typically methods to learn more about people and their context of use. And why that's so important is because it really does help us to understand the best way to solve these particular problems, when we go out and we understand, you know, why these problems occur, how they're affecting people, what people make of them.

You know, we're in a much better position to be able to come up with solutions that potentially will work. So it is a way of moving towards building more successful products and services. And you may be familiar with the two types of research. One type of research is around helping people to build the right thing, which is discovery research. And then there's the other type of research, which helps people to build the thing right. That's typically what we refer to as, you know, our iterative design and testing that we do throughout the product development process.

So, two very different types of research. Discovery research is really about helping teams to uncover what are the best solutions going forward: what can we learn about users, their needs, their motivations, and the context of use that'll help us to design a good solution? So that's what it is. Unfortunately, not everyone is carrying out discovery research; there are lots of challenges that stop teams from being able to implement it.

But when teams do run discovery research, it leads to much more favorable outcomes: more successful products and services being built that actually satisfy a need people have, that have utility.

JH: [00:04:15] The way you described it, I kind of picture it as two ends of a spectrum: build the right thing, build the thing right. Which, I like that phrasing. I'd imagine in the middle somewhere it gets kind of blurry, as you're learning more about a problem space and as you're starting to get into solutions. Is it that people aren't doing enough discovery research because they're kind of in that middle and they're kind of confused? Like, they think they're doing discovery, but they're actually too close to the solution side and not understanding the problem? Or anything around that you've seen?

Maria: [00:04:41] Yeah. So, I did a video on this, like, are you doing real discoveries? Because I spent 2019 and a bit of 2020 interviewing and surveying UX practitioners around the discovery process.

And, you know, some discoveries are not actually discoveries. They're not about discovering: what are people's needs, what's the context, how are they solving these particular problems, or how do these problems impact them? Instead, what teams are doing is going out with solutions and asking themselves, how can we make the solution work, right? How can we fix this in a way such that people might want to consider using it in the future?

And that's not really a true discovery when you’re starting to do things like, go and gather requirements for a given solution or try and see how you can make this solution fit for users...you know, you're sort of ignoring all of the other possibilities that are out there.

And the whole point of discovery research is to give you this understanding that helps you be in a position when you come to get together as a team to ideate, to say: All right, well, this is what we've learned. Let's come up with perhaps some, “how might we statements.” What are some potential solutions that could actually solve the specific problems that we've uncovered and, you know, satisfy these particular opportunities that we have uncovered throughout discovery research?

So I do see a lot of teams going out saying “we're doing discovery, but we already have an agreement on what the solution is going to be.” And in that case it's not really a true discovery. It's more of a requirements gathering exercise. Or they're already moving to that second phase, which is just trying to get this solution to work and making small alterations.

Erin: [00:06:17] So if you're using discovery to really just validate decisions you've already made, obviously that's not a good way to learn what users really want, why you ought to be building something, what you ought to be building.

To JH’s analogy, on the other end of the spectrum, you could ask almost any question with discovery, right? And do you risk too big of a scope?

Particularly, I imagine, for early-stage companies or even founders trying to figure out exactly what problem space they want to hang out in: how do you—whatever kind of question you're trying to answer—zero in on questions that discovery research can really reasonably answer? Assuming you have finite time, finite budget.

Maria: [00:07:01] Yeah. And some discoveries are more strategic like that. Where it's like, we're just going to do some big-picture landscape research to uncover, like, where are certain opportunities that we could go forward and explore in some more detail.

And that might spur on some further discovery research that maybe another team is going to be picking up, or alternatively the same team might go and look at a narrower scope of the problem space. So that certainly does happen.

One thing that I encourage teams to do when they come and they take my class on discoveries is to set a concrete goal for the discovery.

It's no good saying we're just going to go and explore the problem space, because when do you stop exploring the problem space, right? So the whole point of discovery is to get answers to unknowns, or to give you enough information to help you make a decision going forward.

So, moving into the next stage: should we build something, should we not? Is there something here that's an opportunity we should explore, or potentially not? Which area of this particular service should we focus on trying to improve, or which particular need do we think is a really key one that we can potentially go away and try to create a solution to satisfy?

Discoveries can do that. And of course, running them well can ensure they don't balloon out into something that takes months and months and is, you know, fairly academic: you've gathered loads and loads of research, but there's no real concrete answer or next step.

So I think the key thing really is how you run them to try and ensure that they're more targeted to what you're trying to do.

JH: [00:08:32] You mentioned that teams are often not doing enough discovery. Is that because they just kind of over-assume they understand the problem? I assume for early-stage teams, when you're just trying to figure things out, discovery maybe is a little bit more natural and obvious: “we need to really understand this problem space.”

But once you're kind of established and you have some traction with users and some product/market fit and all that sort of stuff—is it just that people get into the blind spot of: Well, we kind of know this problem because people are telling us about it? And so they jump right to solutions? Or what are signals that teams can use to realize that maybe it's an area where they need to do more discovery?

Maria: [00:09:05] That's a really great question. Because of course, you know, if you know a lot about your users, and you have done discovery research in the past, and you're continuing to take your product out and test it with users, you are still learning a lot. But there will be certain things that will be unknowns.

So for example, maybe you're thinking about designing some new features or maybe you're thinking about targeting a new audience, right? There are unknowns there as to whether that's a good idea, how you can do that effectively.

And that would be a point where the team should really have a look at some of those ideas or something that's perhaps on the backlog and start to ask questions. Like, do we actually know there's a need here for this specific feature? Or do we have all the answers to go away and build this particular thing?

Maybe the answer is no. And in that case, the team should discuss: what's the risk if we move forward without doing some initial groundwork research?

So ideally, you should be doing discovery work throughout the product development process. It's not just something that happens right at the beginning when you're setting up a new product. You should continue to do it throughout as you develop that product further, because things change, and your product will start to morph to tackle new audiences, or you'll be thinking about including new features.

And so that discovery work will be more targeted, but it will still be needed, because there are those unknowns, and those unknowns carry risk. So one way I like to frame discovery work is as a risk mitigation strategy. If you move forward with assumptions, or with anecdotal evidence, or maybe the CEO is just telling you that they want this thing, right?

What's the risk if we move forward and it's not a success? Nobody uses it. How much money have we wasted? How much time have we wasted in design and development, building that particular product or building that feature that no one's going to use? And now we have to kind of throw it away or maintain it, which is often what teams do, right? They have feature bloat, and all of these features have to be maintained and supported.

So it's really important that teams ask themselves: What are the knowns? What do we actually know, what comes from evidence from research, and which things are actually more our assumptions?

Let's highlight those, let's ask ourselves what's the risk around these particular assumptions. And you know, let's prioritize them and see if we can go out and do some discovery work to get answers.

Erin: [00:11:28] I think discovery research, as you describe it, can sort of sound to some teams like a vitamin, right? Like, I know I should take it, or it's like eating your fruits and vegetables, or whatever you want to say. Obviously it's important. I'm going to be healthier. I'm going to be part of a healthier business if we're de-risking our assumptions. But I don't have time, or, you know, we're building what the CEO thinks is a good idea, or whatever the many excuses might be to not do this kind of “asking why” discovery research. What do you say to that? What are the tactics or frameworks or tips that folks can apply to—even in a lightweight way—make sure that discovery research is part of their ongoing process?

Maria: [00:12:16] Yeah. I'd say if you're already doing research—like you're already doing some usability testing—then, you know, you do have some time to do research. So you can add on discovery-style activities. For example, if I'm doing a usability test, let's imagine our team is building a fitness app of some sort.

I could run part of an interview at the beginning, which is related to the problem of keeping fit. I can ask people about their specific habits, or some specific unknowns that are cropping up in conversation; my team and I can interview people about that. So staying very much in the problem space, right?

Doing that bit of an interview, and then I can move over and start to test a certain interface that we've been working on or a certain feature that we've designed. So it's possible to run discovery research regularly and frequently, alongside other methods.

I would say it's not just interviewing that is a method used in discovery. Going out and actually being in people's environment, observing them as they actually try to accomplish tasks is going to be really important as well. So you do need to make time to do that.

And don't just rely on collecting attitudinal data, which can sometimes be a part of the picture, but not the full picture. So there's that approach: you can just start to do it, right? Start to implement it alongside your regular cadence of testing.

The other thing that you can do is just try to make a habit of it. So start to recruit a panel of people that are going to be taking part in research. Maybe you don't know what kind of research it is going to be, but you schedule these people: five every sprint, for example. And then, because you've already recruited those people, that forces you to ask yourselves: What research can we do to get answers to some of these unknowns?

So those are some tactics that you could use to try and build this habit of doing more of it. But for a lot of people that I speak to, it's not so much that they can't be bothered or that it's an extra thing to do. Actually, they really want to do it, but it's other people stopping them from being able to carry out discovery work.

And that's often because there's a lack of buy-in. People don't really understand what are the benefits of discovery research, why they need to do it. I think you already mentioned this.

People think they know what the answers are to specific problems. They think: This is the solution. This will work. And we already know everything about our users and about what their problems are. And this is the solution that's going to solve this.  

And that really requires a lot of work, a lot of evangelism to try and communicate to people that they actually don't have all of the answers.

So it requires UX practitioners and designers and other people working in product teams to start pointing at things and saying, how do we know this? Like, where is this from? Where's the evidence that has given you this idea that this is the right solution? What problem is this based on? Do we have any evidence that this is actually a problem?

And often it will be like: Oh no, I just heard that it might've been a problem. Or, I think the sales team were complaining about this. And so that might be a starting point, to have conversations. Like, well, let's just do some really quick discovery work to unpick whether this is actually a problem that needs to be solved or whether we should focus our attention on something else.

So, yeah, a lot of it will require some evangelism, some pointing to case studies to say: Look, this particular project had a discovery—it had a good outcome. This project did not have a discovery, we built this particular thing; it was the wrong thing. And as a result, we wasted all of this time and money.

And so those tactics can start to help people in senior leadership within product teams to kind of get an idea of like, “oh, this makes sense, this is a good strategy to avoid us going down the same route that we went down before and wasting everyone's time and wasting a lot of money.”

JH: [00:16:02] Is a part of the challenge here the perception that discovery is this big lofty thing? Like when I hear it, it kind of seems like it's the thing we do first. And then it's like everything's linear from there and we’ve got to get it right. And so it's like this big, kind of scary intimidating thing almost.

Whereas the way you've been describing it—just surfacing more options—is it helpful if teams try to build up from really small examples? Where it's: Hey, we were choosing between solution A and solution B, we did some really quick discovery in a couple of days, and we actually uncovered solution C and solution D. And now we had four options to choose from, and we understood it a little better.

Are there ways to maybe do it more incrementally versus the grand plan that comes to mind initially?

Maria: [00:16:40] Absolutely. Discovery research is so adaptable to your particular context. In my previous experience working in government, we were working on really large, complex services that had a lot of users, and so that kind of light-touch approach wasn't possible. A lot of the time, a new service had to be stood up because there was some change in regulation or legislation, or alternatively there were just, like, massive chronic problems in the way that we provided certain services. And so a team had to look into that: Why do we have this problem? What are all the causes?

But you're quite right. If you've learned a lot about your users, so the context is much smaller (you know, the user group is smaller, more homogenous), then of course you can run really light-touch discovery activities to, as you say, expose more of those available options that are out there. Or even just to tell people: let's pivot, this is not the right thing we should be focusing on.

So yes, absolutely. You can use it incrementally.

It can be as large and as small as you need it to be. The real question is how much risk you are willing to carry forward. If it's a fairly big project, you probably want to spend enough time in discovery, because it's going to have a huge impact on the organization: it's a lot of money that we could be wasting, or if this thing doesn't work, you know, it's just terrible for our organization. In that case, it makes sense to spend more time.

But if it's something that's fairly tactical, doesn't have so much impact on the bottom line for your business, then maybe you take a light touch approach and you're willing to carry more risk going forward.

So it is really adaptable, and you don't have to see it as this big mammoth piece of work that you need to do and need to have everyone on board for. And, you know, it is best if it's done as a team—going forward and doing discovery work together, not just one person. But you can scale discovery work to what you need in your organization.

It's important to do it though.

Erin: [00:18:35] Yeah, so much of what you were talking about is that it's not that people don't want to do discovery research. Practitioners understand the importance of understanding the 'why' of why we're building something, and are we building the right thing? This notion of influence with stakeholders within an organization is just so important, and obviously something that's complicated, that you build over time, and that depends on getting to work with the right people.

I know something we hear a lot from senior leaders in UX research is to join a team where research is a priority, right? Because there is only so much you can do in a finite period of time.

But that is something you can get better at over time in your career too, I think: learning how to influence people who can help you get the work done that you need to do to be effective in your job.

Maria: [00:19:21] Oh yeah, absolutely.

And inviting those people into the process so they get to see firsthand just how much they didn't know initially, how much more they know now, and how it makes much more sense to take this specific path over a different path. So it shouldn't be the case that teams are going away, doing discovery, throwing results over the fence, and saying: “Look at all the amazing insights we've found. Now it's your job to make decisions off the back of this; please support more of this effort.” That doesn't really work that well. But it does work well when you involve those stakeholders in the process and they get to see the benefits firsthand, rather than you just telling them.

So I think that's important as well in that evangelism work. But you're quite right. And I think, you know, 20 years ago, people were trying to advocate just to do usability testing. We're there; we're moving; a lot of organizations are doing that.

But now we're trying to push a little bit further. We're saying, nope, you need to invite us earlier. When you're starting to scope out these new projects, you need to bring UX practitioners earlier in the process. We need to be part of those conversations. We need to help by providing insights and answers to those unknowns so that we can build the right thing. We're good now at building the thing right, but we're not necessarily good at selecting the right thing to build in the first place.

JH: [00:20:40] Cool. Should we do a Q&A question? We've got one that has a lot of those.

Erin: [00:20:43] Yeah. Yeah. Let's jump in.

JH: [00:20:45] Right. It's a big question. From Alexandra: What discovery methods do you find to be most useful? And in which context should you use each one?

Probably a lot to cover there, but maybe you can hit some of the highlights.

Maria: [00:20:55] Sure. So, interviewing people—speaking to people one-on-one—that's really useful to get things like attitudinal data: people's subjective experiences. That's going to be helpful to inform things like customer journey maps, or service blueprints, or experience maps; you can start to understand: what are people's experiences as they go through and do something to achieve a specific aim? So interviewing people, you learn a lot about specific challenges and how they deal with those challenges.

You also learn about what people desire—what their motivations are, what their backgrounds are. So that's a really useful method, and it happens a lot in discovery; most teams are doing some kind of interviewing.

The other method that you probably want to use is some kind of ethnography, some kind of observational research where you go and sit in people's environments and observe them do the specific things that you're interested in. So if I'm building a fitness app, I probably want to observe people—not just ask people about how they keep fit and healthy and what kind of exercise they do—but observe that as well, because I'll be capturing things I wouldn't have captured by interviewing people.

I know this is pretty difficult right now with COVID and people working from home and obviously health concerns. There are ways around that: thinking about doing digital ethnography through remote tools, or alternatively looking to do diary studies where people capture things as they're doing them. That might be an alternative.

But those are really useful methods to use. And in discovery, you want to use multiple methods, not just one, because they each provide a slightly different piece of the picture and they complement each other really well. And because we're looking at really small samples, it provides a lot more confidence when you see similar themes emerging across different research methods.

So I'd say, try to do multiple. Try to pick a method that collects some of that attitudinal data; it'll answer some of your specific research questions: What are people's mental models? How do they see things? What are their desires? What are their motivations? Interviews are great for that.

Potentially focus groups. I'm not a huge fan of focus groups. Surveys, as well—I'm not a huge fan of surveys in discovery. But those are methods that would be able to answer those kinds of research questions.

And then ethnographic research: observing people. That could be contextual inquiry, where you conduct a semi-structured interview and observation, or just standard ethnography where you're observing, taking photographs, recording people, watching as they do certain things.

So those would be two methods that you could use. And that I would encourage you to use if you can.

In what context should you use each one? Most contexts are conducive to using these research methods.

Obviously if there's nothing to observe, then it'll be hard to do observational research. But there typically is something to observe. So you just need to think about in your own context: What is it that you're interested in seeing people do? And then you'll have to tailor the research appropriately.

Erin: [00:24:25] That's related to the other really popular question here, which is: how do you know when you've gathered enough evidence? So there's the amount of evidence, I guess, and there's the number of participants, the number of methods, and all of these different aspects of how much insight is enough insight.

So what might be some different ways to think about that?

Maria: [00:24:48] So, setting really clear objectives at the beginning of the discovery is going to be really important. The discovery should have some end goal, right? At the end of the discovery, it's important that you and the team can make some kind of decision.

So specifying what that decision is will be important. And then the next step is really: what pieces of data do we need? What answers to these specific research questions do we need in order to be in a position to make the decision? That will act almost like a checklist for whether we have actually done enough.

If there are remaining questions that have yet to be answered, then you obviously should still be doing more research. If you've been speaking to people, you're synthesizing all of your findings, you're going through your questions, and you're like, “well, we have enough; now we can actually answer these particular questions”—then it's time to stop, right? You're in a good place to make a decision going forward.

If you frame discovery in that way—setting a really concrete goal and specific research questions or objectives that need to be satisfied in order for you to go forward and make that decision—then it becomes much less ambiguous as to, you know, when you stop exploring the problem space.

So that would be my advice for that particular conundrum. And then in terms of numbers of interviews or numbers of field studies, there isn't really a number with qualitative research. You probably want to ensure that you're recruiting a representative sample. So if you've only spoken to two people and, you know, they don't represent your entire population in certain regards, you probably haven't spoken to enough people or you haven't observed enough people. There might be specific edge cases or use cases that you want to observe, or that you want to interview people about, in order to get answers to those questions.

So unfortunately there isn't like a golden number.

I can't tell you to go away and do 15 interviews, because for some projects 15 is great, and for others it's way too many; you don't need that many. It really just depends on the context of your project and the diversity of your use cases and your users.

JH: [00:26:51] Yeah. Some of that also feels like it's kind of the strength of the signal you're getting back. So if you've talked to a few people and you really see a clear trend—everyone's kind of saying the same thing—that's probably an easier signal that you're learning what you need to, versus if it's very distributed and everyone's kind of saying a different thing. In that case, is there a best practice around, like, do you just continue to do more?

So you hopefully see a trend emerge or is there actually a signal or a sign that you need to refine what you're trying to learn? Because you're too broad and you've not actually narrowed in enough on what you're researching.

Maria: [00:27:21] Yeah. So it could be either of those things. If you haven't got a good interview guide that you've constructed to answer specific research questions, then maybe your interviews are all different, right? You're interviewing people about slightly different things in each of your interviews, and therefore you're not seeing those themes emerge when you come to do your analysis. That could be one reason.

It's quite rare for people to have completely different experiences and say completely different things with no overlap at all. Typically there is overlap. So as you start to interview another person, you start to see those themes cropping up, and you look for saturation in the themes.

If you're continuing to do interviews and you're not really learning anything new, and you've exhausted all the possible personas or use cases that you should have recruited for, then you're in a pretty good position to say, right—we've got enough here, we can stop. If, you know, you haven't recruited a representative sample and you are seeing very spotty things, you probably haven't spoken to enough people or done enough research in the first place.

So you'll have to kind of continue to do more. And the nice thing about qualitative research is that, unlike quantitative research, where you have to set your sample size and do research with X number of people in order to have enough power to predict things or make generalizations—with qualitative research, you don't need to do that. So you can start to recruit incrementally, right? Let's recruit three; let's recruit another three; let's recruit another three and see how things get on. And then we'll know at what point we can stop.

So that'd be one piece of advice I would give to teams who are worried that they don't know how many to recruit. Small numbers, right? And continue to recruit across different characteristics until you get to a point where you and your team have some confidence. Like, we feel like we've exhausted these different personas or user types or user segments or use cases, and we are seeing this repetition. Let's stop here. We have enough to move forward.

That'd be my advice. And obviously there could be lots of different reasons why you're seeing differences in what people are saying, but typically it's because of recruitment rather than to do with the fact that you're doing very different research across people.

Erin: [00:29:24] Gotcha. You talked a little bit about your interview guide and making sure that there's some level of consistency across interviews so that you can sort of, you know, code your responses so that you can say: This person generally said this when I asked this question.

But I know at the same time you also want to have a sort of fluid approach to an interview guide, where you kind of have these objectives of what I want to learn in this interview, but I don't want to be a robot just, you know, reading through my list of questions.

What other tips do you have for doing interviews? (Because we know they're the most popular method for discovery research.)

Maria: [00:30:02] So one thing to be aware of is that there are different types of interviews that you could be doing. In discovery, typically what teams are using is a semi-structured interview, which market researchers refer to as in-depth interviews or depth interviews, and they use an interview guide.

And the interview guide is different from a script. A script would be where you would read the questions off one by one, and you would follow that specific order.

But with an interview guide, you have flexibility to change questions or go in a different order if you need to. And they typically consist of very open-ended, broad questions that get people to tell stories and give you examples of specific things that have happened to them. And then many follow-up questions that probe, that ask people to go over specifics, that gather more detail and clarification.

If you design a good interview guide, you're really setting the stage for a successful interview. If you neglect this, you're probably going to go into your interview and ask loads of leading questions, and you're probably not going to get very rich, in-depth insight.

So the interview guide is really a tool to make sure that you are asking the right kinds of questions in order to get people to start talking. And then to a certain degree, you know, your job is to be listening and to be following up and probing on certain things that they've said and making those decisions.

Should we move on to a slightly different area of the guide? Should we adapt the direction of the interview based on what the person is telling us? So, you know, we wouldn't necessarily go through and code each answer to each of the questions, like: this is how people responded to this particular question (like we would in a survey). Instead, we'll be looking at specific things that people have said across all of the interviews, taking this kind of fluid approach to analysis, to uncover the themes that have emerged that answer our specific research questions.

JH: [00:31:49] There's a question that kind of builds off that, around the analysis that you need to do after conducting a lot of discovery research, in terms of it being time-consuming. Any recommendations for tools or processes, and how to navigate all that volume of feedback, notes, and recordings, or anything like that?

Maria: [00:32:04] Yeah. So, you know, I would scale analysis to what you have time for, and scale the number of interviews to what you have time to analyze.

A good rule of thumb is: it takes just as long to analyze as it does to collect the data. So if you spent 10 hours collecting data and running interviews, you're probably looking at at least 10 hours of analysis.

So a lot of people think they can get analysis done in a two-hour workshop. Not the case, especially when you have a lot of interview transcripts. So I would scale back. You know, if you're being asked to do 30 interviews but you don't have time to analyze 30 interviews, what's the point in doing 30 interviews? Maybe you could do 15 or 12 interviews and still get more out of them, in terms of a better picture, from doing your analysis.

Tools that I like to use: if you're taking a scaled-back approach, maybe you're just relying on notes that people are taking for you, using things like Miro or, obviously, a physical board where people stick up specific things people have said. And you start to sort those into an affinity diagram, to do a bit of a thematic analysis and uncover what the themes are.

That could be a nice, quick way of doing that.

There is a risk there that you miss out on certain things. People are not great note-takers. Often they will only take notes on things that they think are important, and neglect things that actually are really key pieces of insight. So that's a risk that you run, but that's a scaled-back approach where you can kind of just go for the heavy hitters and do analysis fairly quickly.

If you want to take a more thorough approach, then using a piece of software to help you code is going to be really helpful, because that will allow you to be more thorough and to manage a huge data set better.

So tools like Dovetail or Aurelius or Delve, right? These are all what are traditionally called CAQDAS tools—computer-aided qualitative data analysis software—but these have been designed by UX practitioners for UX practitioners. So they are fairly streamlined, easy to use, and cheap in comparison to the older tools that qualitative researchers used in the past.

That might be the approach to take—especially if you want to work with other people. It's all web-based, and it's a really good way of ensuring that the insight also lives on after the analysis has been done. So others can go in, look at the raw data, and look at the process that you've taken for coding. You can even write reports from within those applications as well.

So scale it to what you need. You know, recognize that analysis takes time, and make sure to budget that time when you're planning your research.

Erin: [00:34:29] We've got a popular question here about discovery research for B2B projects. Any specific tips for those?

Maria: [00:34:36] Yeah. So I often have a lot of people in my classes saying they have a hard time doing discoveries within a B2B context because they can't get access to users.

So I don't know whether Doris has the same problem, but that can be tricky for some organizations. But really, you know, a discovery done internally or on a B2B project doesn't look that much different, in terms of process, from a discovery on a B2C project.

The only difference is that your users are people who you may have easier access to, or you'll tend to already know quite a lot about them, because you're gathering, you know, data about them; you sell services to them. So there might be that difference, but really it's the same process: go out, learn how they're currently doing things, observe them, interview them, and uncover: What are the specific gaps? Where are some opportunities for you to improve that particular product or service? It's really the same process.

Getting access to the users is something that I've heard people have difficulty with. And I would say you definitely do want to speak to users. Don't just say, “Oh, well, it's just too difficult, so we're not going to do that.” Start to build those relationships with the people that have those contact details, and stress to them that it's going to be beneficial to them if they allow you access to users, because then you can improve the product and sell the product or service that they're using.

So I would say, yeah, there's not really much of a difference. The recruitment's going to be slightly different though.

JH: [00:36:06] Cool. As a product person, I find this next question very fascinating, and I'm curious to see what your take on it will be:

What advice would you have for a product that launched, gained a huge user base, but moved so fast that proper discovery was never established? And the reason I find this one so interesting is that you mentioned risk mitigation earlier. And to me, the biggest risk when you're building stuff is that you spend all this time building something and then nobody uses it.

So I don't know if this is a situation where this team was, you know, lucky or whatever, but I'm curious to see how you'd advise them from here.

Maria: [00:36:34] Yeah. So you know, you might have a bit of a problem there because people will point to that and say: Okay, so you're telling me, I need to do discovery for future projects, but look at this project, we didn't do any discovery and it worked out well for us. And yeah, I mean, part of that is maybe they were lucky.

Maybe they did do some kind of research informally, and that's how they got the idea. Or, alternatively, maybe it was luck. But there are a lot of projects out there that have taken the same approach and have failed.

So, you know, that's one thing to point out if you're struggling to get consensus about actually implementing discoveries going forward, to help improve the product or for additional products, if you offer a suite of products.

I would say that obviously is going to be more tricky, but point out: Do you really want to risk it again? Do you want to continue to risk it, or do you want to be more sure that the next time we launch something, we're pretty confident that this is actually something that people need?

That would be my approach. Obviously, if you're working on a product that has already got users, continue to do some discovery research as you develop that product further. Because that can give you great ideas as to how to differentiate your product from competitors, making sure that people are loyal to your specific product or service because you're giving them that special something that they wouldn't have got otherwise. So there are real advantages to implementing it into your development process.

Erin: [00:38:03] Yeah, I wonder too—maybe it's getting lucky, and maybe there's a component of: if you're in the habit of talking to customers regularly, that all counts, right, as this sort of ongoing discovery work. And sometimes you can know the right thing 'cause you've been paying attention the whole time. That can be part of it too.

Someone has a question about jobs to be done and how that fits in with discovery research or how that can fit in.

Maria: [00:38:27] I've heard of people doing jobs-to-be-done interviews—I've not done this myself—and yeah, you could definitely run those.

There are lots of different takes on interviewing that you could apply. There are critical incident technique interviews; there are interviews that allow people to bring in certain things, like photographs that they've taken, and you start to discuss those. So there are lots of variations on interviewing and other research methods that you can definitely utilize.

I'd say, just ask yourself: Does it help you satisfy your research questions? Because we should be picking the right method to get answers to our research questions, not just because we think, “oh, other people have done it on their projects, so therefore we should do it too.” We should ask ourselves: What is it we want to find out? Okay, now, which is the best research method to help us to get answers to the specific research question? Maybe that means adapting some of the research methods we have in our toolkit to enable us to answer that more successfully.

So, yeah, absolutely, you can use them. I haven't performed jobs-to-be-done interviews myself, but I've heard other people have good success with them. I prefer just to run semi-structured interviews and occasionally adapt them, especially if I want them to be in context.

So I can couple them with contextual inquiry as well.

Erin: [00:39:45] Great. Well, we have dozens more questions and not hours of time. So I think what we can do is answer a couple more questions and then maybe we can cover some of the themes in our write-up afterwards. Awesome. Let's see…

How would you suggest we measure the results of generative research? ROI? Results?

Maria: [00:40:04] Yeah. That's pretty, pretty tricky to do. You're kind of dealing with counterfactuals, aren't you? You're saying: “Oh, well, if we hadn't done this research, we wouldn't have been in this position.”

So that's pretty tricky. But one thing I would say is: keep a note of projects that have happened in your organization—those that have had discoveries and what the outcome was, if you can get hold of that. You know, did this thing make money? Did it lose a load of money? How long did it take to build? What does the user satisfaction look like? And then compare that to projects that haven't had a discovery, if you are running those.

I did a little survey in 2019 with UX practitioners. And it's very hard, as I said, to measure the ROI of discovery work. But I asked people to tell me about the last project that they worked on, and I asked them whether they thought it was a success or not.

So, on a scale, you know, strongly agree to strongly disagree. And then I also asked people whether they had a discovery on that specific project. And the people who had done a discovery were statistically much more likely to say that their project was a success.

Now, you know, correlation isn't causation—there could be correlating factors that might mean that people who are doing discoveries have better outcomes.

Maybe they have more UX maturity; maybe they do more usability testing. But I think it does lend credence to the idea that discoveries lead to better outcomes. So if you do want to convince other people, try to point to specific projects that've had discovery work, or point to competitors that are doing discovery, and point to the outcomes and say: Look, there's a strong correlation here between teams that do discoveries and successful outcomes.

So why don't we try it in our organization and measure it for ourselves, and see whether it does have a significant impact?

Erin: [00:41:58] Great. Do you want to pick the last one here, JH?

JH: [00:42:00] Oh, it feels like a lot of pressure. Cool. Some of these overlap with some other stuff, so I'm gonna try to scan real quick…

Yeah. I mean, I think this is related to the ROI stuff, but it's just around discovery, if you're not currently doing it, being viewed as net new scope. And so how do you make sure that you're covering it well, and that people are supportive of that additional scope? Or do you even see it as additional scope, or do you view it as something that is kind of interwoven with everything else?

Maria: [00:42:27] Well, I mean, it should be interwoven. But you're quite right that some people look at it as an add-on, like something that happens before you start actually designing, giving me wireframes, and building: writing code and shipping something.

So yeah, in some sense it is additional scope. It's an additional step that people would need to do if it's a brand-new product or a brand-new service or a redesign, right? In that case, you probably will want to do some discovery work before you get started on choosing a solution and exploring those solutions.

But it shouldn't be seen as something that is like a tick box exercise or something that is slowing us down. In fact, discoveries can often speed you up because they help you answer really poignant questions that help you think about how best to design something.

And sometimes they tell you not to design that thing. Like, there is no need; don't do it. So we learn that early, rather than going away and spending a lot of design and development time on something, putting it out there, and testing it. So I think it really depends on how you look at it and how you frame it.

Of course, on paper it looks like we're asking the client, or the stakeholders, or product teams to spend more time, and that's time not spent delivering something. So we do need to really communicate to people the advantages of doing that. The reason we're saying this is an important step is because we are reducing that risk.

So it is worth doing, so that you know you're moving forward with some confidence. And of course, we should be doing continuous discovery; we should be implementing that into our process. It shouldn't be something that happens just at the beginning of the project and then that's it, it's done, and we move on. It's totally possible to integrate it into our practice as well.

JH: [00:44:12] Yeah, I love something you said before, too; it's a little bit of a tangent. But plan for the analysis piece to take as long as the research itself, whatever methodology you're using.

And I think that's really important, 'cause I know I've personally been burned on that before. You get so excited, and you go out and you talk to five, six, seven people really quickly. And if you don't have time to actually process it and take all the right insights away from it, you've spent all this additional time and got very little from it.

So, like, it does feel better to kind of start, as you mentioned earlier, with small batches. Actually extract what you learned from it, and then let that come to bear in the rest of the design process, because you know new things and you can see the effect.

Maria: [00:44:51] Absolutely. I think also communicating to other people that that's going to be your approach is helpful. Because I know, working in the past with stakeholders who've commissioned large pieces of research: that research has taken a long time, it's been reported back in a really lengthy document, and no one's done anything with it. It's just sitting on a server somewhere.

So taking a more pragmatic approach and telling people: we're going to start small. We'll give you insights after this week; come to our show-and-tell and we'll show you what we've learned so far. We'll show you what things are still unknown, and you'll understand why we're going out and speaking to a few more people or doing some observational research. So I think that is handy as well.

JH: [00:45:33] It feels like you could take it to the extreme of, like, you just do one and really share everything you learned with the whole team. That's probably almost more beneficial, in a way, than talking to five people and having a bunch of things you learned in your head, but not having a way to bring the team along with that, and having this disconnect of, like, why don't people see the value here? I think it's valuable. It's like, well, they weren't in it, and you don't have a way to share it with them.

Maria: [00:45:51] Yeah. And we used to have a saying in government (I think it's still a saying) that user research is a team sport. It shouldn't be the case that you're going away and doing all the research and then just reporting it back to people. That doesn't work well. It's much better if the team comes along and observes the research firsthand.

It's a much better way of internalizing knowledge of your users and empathy for your users. So definitely bring your team along when you're doing discovery research. Don't go out and do it on your own.

Erin: [00:46:22] Well, there's tons more to cover, but we won't cover it all today. Thanks everyone for joining and thank you, most of all, to you, Maria. You've been a great guest. And have a great 2021—hope it's better than last year!

JH: [00:46:33] Thanks for all the great questions. It makes our job a lot easier. So appreciate it.

Maria: [00:46:40] Thank you very much. Thanks for having me.

Katryna Balboni
Head of Creative Content & Special Projects

Content marketer by day, thankless servant to cats Elaine Benes and Mr. Maxwell Sheffield by night. Loves to travel, has a terrible sense of direction. Bakes a mean chocolate tart, makes a mediocre cup of coffee. Thinks most pine trees are just okay. "Eclectic."
