A Conversation with Quantitative UX Researcher Randy Au

Quantitative UX researcher Randy Au "analyzes all the things." He is part researcher, part educator, part philosopher, and part therapist. And he loves cats.

Perspectives
September 19, 2019
Vince Kosek
Contributing Writer

When Randy Au, Quantitative UX Researcher at Google, was a kid, it wasn’t like he went around saying, “When I grow up, I’m gonna be a UX Researcher!” Because, who says that? And what exactly is a Quantitative UX Researcher, anyway?

“I work with a lot of UX Designers and Qualitative Researchers now who come from straight up industrial design backgrounds,” says Au, “but they’re some of the top consumers of analytic content, along with Product Managers. Which makes sense. They’re building an interface, they need to know, ‘Where should the buttons go, and is anyone using the buttons I already made?’, right?”

Adding a quant element to UX Research takes a range of skills, which Au has honed in past engineering and analyst roles at Meetup, Bitly, and Primary.com. “I’ve been doing this kind of stuff for over 10 years now, but I’m fairly new to UX,” Au shares. “I came to analytics essentially from the engineering side.”

Today, in his work with Google, Au plays the roles of researcher, educator, engineer, analyst, philosopher, and seemingly sometimes even therapist.

Amplitude’s John Cutler sat down with Randy to ask him about changes he’s seen in analytics, how he copes with uncertainty, and his take on successfully collaborating with product development teams.

John Cutler: Talk to me about changes you’ve seen in your career. Have there been any perceptible shifts in the landscape that you’ve been occupying?

Randy Au: Well, the big one was when the world went mobile, right? A lot of places went from having one development team to suddenly needing three, because you need web, Android, and iPhone. Period.

Another one is the shift to self-service. That one comes up quite a bit, over and over. It’s becoming more and more prominent with advances in visualization tools and so forth. I see a lot of people trying to push the boundaries of self-service.

That said, there’s still a lot of ad-hoc type stuff that analysts need to provide expertise on, which is a dynamic that doesn’t really change. The difference is in the role of the analyst. You’ve gone from a data analyst person, telling the team what the results are, to now having this situation where the data person has to somehow put up guardrails so that the self-service environment can function. It creates this really weird tension of, ‘I can let people have all the data that they want, but then what guarantees that their conclusions are valid?’.

JC: Right. Sometimes I see this tension between having guardrails and enabling self-service as central to defining the role of analytics. Has that been your experience? Have you had to personally transition in your career or did you latch on to certain models early on?

RA: Usually, quantitative research teams are resource constrained by the fact that there are only so many quants around. I’m usually on small teams of one or two quantitative researchers. So from that perspective, we are usually an insights kind of team.

The issue is that we just can’t handhold everyone all the time. If things get really constrained and we don’t have a lot of time, we have to rely on a lot of self-service. Then we become more like tutors: we teach the kids who are willing to learn. Every team always has one or two people that are very interested in data and want to use it.

We train those willing learners as much as they’re willing to take. It’s always amazing that you can find one or two people willing to do that. What would happen on a team that just didn’t have that kind of person around? That kind of makes me worry sometimes. But we’ve been lucky so far. So, yeah. That’s how we do it.

The ideal would be to have a quant embedded within each product team and they just live and breathe that one thing. But you know, that’s hard to do.

JC: It’s interesting because part of educating people is breaking down preconceived notions of what analytics is, and what it’s capable of. A lot of people think that it’s all A/B tests that cough up a definitive answer.

RA: Exactly! That’s the nice thing about that part of analytics, the A/B testing part. It gives you a theoretically right or wrong answer.

But there’s also this blurry part where, the more stats you know, the more you’re like, “Oh, that’s actually some hand-wavy bullshit!”. Because 95% confidence just means there’s a one-in-20 chance you’re actually getting something totally wrong. I’m sure we’ve done all sorts of tests that yielded false positives. We’ve run so many experiments, it almost has to be the case.
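As a rough back-of-the-envelope illustration of why that’s almost inevitable, here’s a minimal sketch (not from the conversation) that assumes independent experiments, each run at a 5% false-positive rate:

```python
# Chance of at least one false positive across n independent tests,
# each run at 95% confidence (alpha = 0.05), assuming every null is true.
alpha = 0.05

for n_tests in (1, 10, 20, 50, 100):
    p_any_false_positive = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>3} tests -> {p_any_false_positive:.0%} chance of at least one false positive")
```

At 20 experiments that’s already roughly a 64% chance of at least one spurious “win,” which is why, after enough launches, it almost has to be the case.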

At least in theory, there’s a correct answer. We can just say, “Okay, we’re satisfied and we have confidence.” But then when you get away from that, you start going into the area of, “Here’s the stuff that’s just indicative.”

There are conditions that lead to increasing uncertainty. Sometimes we don’t have enough data points, we’re working on a very hard problem, or the causality isn’t particularly clear. But the reality is, we have to launch next quarter and this is the closest thing we’ve got. We’re going to just have to take it on faith because that’s all we have.

And when you lay it out on the table like that, it sounds really sketchy. I mean, at the end of the day, I think we all do it to varying degrees. But it’s a scary thing. It’s like, “Yes, all the science and knowledge and analytics we have and we still have something where we’re kind of taking a bet! We’re not left 100% in the dark, but we’re still making a bet.” And that’s the nature of business; you take the risk because that’s where the rewards are.

JC: Some people seem more open to that degree of uncertainty than others. When you find people you’re working with who have that sort of utopian view that everything is a clean yes/no, how do you talk them off the ledge? Do you have any go-to strategies?

RA: Sometimes…but it depends. Sometimes I frame it as like, “This is a bet, given all the things we know. It’s a bet, but it’s the most likely bet.”

Plus, we have some qualitative research and our own personal experience of how products and humans behave. It seems reasonable. It’s not totally outlandish.

That framing appeals to our humanity, because we’re not particularly good at placing the most likely bets on our own. If you ran the numbers, you’d probably find we’re wrong more often than we’re right. That’s why we need to use data.

But at the same time, it’s not like you pick between two things and it’s 50/50. The ways to fail probably outnumber the ways to succeed. And so you’re just shaving off those chances to make your odds a little bit better.

And then some people are just not temperamentally down with it. In that situation, if that person is running the show, then the product will move slower. It’s going to be a little bit more methodical, and it might actually be the right call! It depends on what the circumstances are. But you can’t fight human nature in that way.

JC: When you work with less experienced analysts, how do you calibrate their business sense to help them focus on techniques or analyses that are going to have the highest impact?

RA: That’s really tricky because you build business sense from working with business people. You’ll get it when you’re with the PM who’s under the gun to increase revenue this quarter, not next quarter. I learned that, essentially, I have to ask myself, “Is this insight good enough to help them make this decision, because they have to make a decision today?”

With or without me it’s happening, so I better do what I can in the next three hours and get it in. Without that pressure, you can very easily just go wander off.

And that sense of “good enough” is just something you have to develop over time.

JC: It seems like you’d need to have some familiarity with the decisions that are being made if the idea is balancing reasonable decision quality with reasonable decision velocity.

RA: I always recommend that people work closely with the teams. When you’re too far removed and you lose the squishy context, it gets tough. People start latching onto artifacts that may or may not actually matter at the end of the day. That’s especially true the higher up in the business you go.

I always recommend that people work as closely with the PM as possible. Make them your best friend. Because some of the best things I’ve ever done came about when we had a skilled analyst who was familiar with the tooling in a paired situation. We’d have the PM, or tech lead, or whoever sitting next to them. And they’re just pair driving, looking through the product.

JC: Let’s say you had a team that’s been newly unleashed. They have a self-service tool, all their events are instrumented, they’ve developed a bunch of properties. If you could do a three hour class with them to try to make sure they don’t fall into some common pitfalls, what would that look like?

RA: Yeah, if I had a three hour session with a new team, almost 95% of it would be “get to know your world”.

You get in there and it’s like, “Okay, first let’s understand”. It’s kind of like we just dropped into this island full of goodies, right? So now let’s figure out what the heck is going on here. Okay, where are our users? What can we do with them?

But then we peel it back one more level and ask, “Where’s this data coming from? What is this? What does this user mean? What does unique user mean here?”. Making sure we understand those foundations is critical because everything else rests on that.
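To make that concrete, here’s a minimal sketch (mine, not Randy’s; the column names are hypothetical) of how the same event log can give three different answers to “how many unique users?”, depending on what “user” means:

```python
import pandas as pd

# Hypothetical event log: anonymous visits have a device_id but no user_id.
events = pd.DataFrame({
    "device_id": ["d1", "d1", "d2", "d3", "d3", "d4"],
    "user_id":   ["u1", "u1", "u1", None, None, "u2"],
    "event":     ["view", "click", "view", "view", "click", "view"],
})

print(events["device_id"].nunique())  # 4 -- unique devices/cookies
print(events["user_id"].nunique())    # 2 -- unique logged-in accounts
# Best-effort stitching: fall back to device_id when there is no login.
print(events["user_id"].fillna(events["device_id"]).nunique())  # 3
```

None of the three numbers is wrong; they just answer different questions, which is exactly the kind of foundation a team has to nail down first.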

If that’s screwed up in the beginning, then everything you think about later on will just fall apart. So I would probably spend almost all my time on the fundamentals of just understanding.

I’d explain that “you’re allowed to say this because of these reasons.” And then hopefully they kind of pick up on that. So later, as they build more familiarity, they can form those statements on their own. It’s like, “Okay, very good…but there’s that one caveat up there!”

Teaching them that process of figuring out what they’re able to do would probably be what I’d focus on. I mean, I’m sure it would take me weeks of plinking around. But in a couple of hours I’d want to, as best as I could, impress upon them the fact that all this stuff looks magical (and it is, to some degree), but you can do some very, very incorrect things with it very easily.

JC: Right. So it’s not just like, oh, here’s the retention chart. Okay, let me base next quarter’s strategy on that chart I just saw.

RA: Right. Because a retention chart can be misleading very easily. It can look dull and steady, like, “Oh, this happens all the time.” But then you have a one-week sale or you do a giveaway, and suddenly that entire diagonal band of your retention chart is just all screwed up. But people wouldn’t necessarily know to look for that.
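Here’s a minimal sketch (mine, not from the interview) of why a one-off calendar event shows up as a diagonal band: in a cohort-by-age retention matrix, every cell that falls on the sale’s calendar week lies on a diagonal.

```python
import pandas as pd

signup_weeks = range(1, 9)   # cohorts: signup weeks 1..8
ages = range(0, 8)           # weeks since signup
sale_week = 6                # hypothetical one-week giveaway

# Mark every (cohort, age) cell whose calendar week hits the sale.
matrix = pd.DataFrame(
    [["SALE" if s + a == sale_week else "." for a in ages] for s in signup_weeks],
    index=[f"cohort wk{s}" for s in signup_weeks],
    columns=[f"+{a}w" for a in ages],
)
print(matrix)  # the "SALE" cells cut diagonally across the chart
```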

JC: In an ideal world, what would you regard as bread and butter questions, that if you could set teams up in a self-service environment, they could be really good at answering?

RA: I mean, the bread and butter things usually are simple in concept. They’re usually just, “How many people did a thing?” They usually fit that pattern. The problem is that the “do a thing” part is very fickle, and it becomes all about the instrumentation.

That bread and butter thing can easily turn into some multi-step funnel or some bizarre combination of things. The actual counting part is very tricky and finicky. So the easier that gets, the more people can self-serve.
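As a sketch of how quickly “how many people did a thing” turns into a funnel count, here’s a minimal example (the event names and logic are hypothetical; real funnels also care about ordering and time windows, which is where it gets finicky):

```python
import pandas as pd

# Hypothetical event log.
events = pd.DataFrame({
    "user_id": ["u1", "u1", "u1", "u2", "u2", "u3"],
    "event":   ["view_item", "add_to_cart", "checkout",
                "view_item", "add_to_cart",
                "view_item"],
})

steps = ["view_item", "add_to_cart", "checkout"]

# Count users who did each step and every step before it.
qualified = set(events["user_id"])
for step in steps:
    qualified &= set(events.loc[events["event"] == step, "user_id"])
    print(f"{step}: {len(qualified)} users")  # 3, then 2, then 1
```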

If they’re confident that they’re counting the correct thing, then usually the ratios that they come up with are fairly accurate. They make sense in general because they’re not obviously crazy. So, you can kind of trust them.

People in general are kind of hesitant to do very fancy things because they’re like, “Oh, I’m not the expert. I’m just a user of this tool.” So they’re not going to go for anything outlandish. But they do have to be familiar enough that they’re willing to take the chance.

If I could teach them those things, it would open up a lot of the more difficult cases, like the causality ones. Causality is extremely hard.

About the Author
Vince Kosek
Contributing Writer
Vince Kosek is a product analytics consultant based in Santa Barbara, CA. Vince's career spans financial services, manufacturing, medical devices, and most recently vertical B2B SaaS products in construction and property management. He is passionate about helping cross-functional product teams make better product decisions.