Best Practices for AI Product Management

How many times have you heard the phrase “It’s research, it will be ready when it’s ready”? If you manage AI products, the answer is probably “many”. AI product management is a unique beast that can be quite challenging, but there are ways to make it easier. Here’s how.

Noa Ganot
Product Coalition



In 2014, at the ProductX conference, I won second place for my presentation on time management for busy product managers. Itai Tomer, who won first place, gave an excellent lecture on making decisions that involve uncertainty. It was a great talk and I gave him my vote as well.

Despite what the name might imply, Itai’s lecture (delivered in Hebrew) discussed writing requirements for functionality that includes predictions and estimates, a kind of feature that was gaining popularity back then. He mentioned the example of Waze predicting where you are going (home, work, etc.) and offering to take you there without you needing to do anything. If you were the developer asked to build this feature, you would ask the product manager for very specific instructions as to when and how to predict where the person is going.

Nowadays, this would probably be a feature assigned to data scientists, aiming to use tons of data for smart predictions. But honestly, a very simple prediction function could be sufficient for introducing the feature: most people, at least before COVID, went to work in the morning and back home in the evening on weekdays, so when a driver starts Waze at one of those times, it could simply ask whether they are heading to work or home accordingly. Back in 2014, though, data science wasn’t the default; it was still considered almost science fiction, and when you needed prediction functionality you simply went to the developers. Researchers, as data scientists were called back then, were reserved for the “heavy stuff”: predictions that product managers couldn’t define themselves with simple functions.

One of the key points in Itai’s lecture was that you cannot give the developers too much freedom in defining the prediction itself, since they simply won’t know what to do. They need the formula. With data science teams, however, there is a feeling that they know better: they are Ph.D.s who understand predictions and estimates much better than the average, or even the experienced, product manager. Does that mean you don’t need to guide them as closely? Absolutely not.
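To make the “simple function” concrete, here is a minimal sketch of such a heuristic in Python. The time windows, labels, and function name are illustrative assumptions of mine, not Waze’s actual logic:

```python
from datetime import datetime
from typing import Optional

# A deliberately simple, non-ML heuristic in the spirit of the Waze example.
# The time windows, labels, and function name are illustrative assumptions,
# not Waze's actual logic.
def guess_destination(now: datetime) -> Optional[str]:
    """Guess where the driver is likely headed, or None if we shouldn't guess."""
    if now.weekday() >= 5:       # Saturday/Sunday: no commute pattern to rely on
        return None
    if 6 <= now.hour < 10:       # weekday morning: probably commuting to work
        return "work"
    if 16 <= now.hour < 20:      # weekday evening: probably heading home
        return "home"
    return None

# When the app starts, offer navigation only if the heuristic has a guess.
suggestion = guess_destination(datetime.now())
if suggestion:
    print(f"Are you driving to {suggestion}?")
```

A function like this covers the common case well enough to ship a first version; the heavy data science investment can come later, once the feature has proven its value.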

I know, it’s very tempting to do so. It’s so much easier to let them handle it, since research is complicated, and we don’t really know how to speak to them or sometimes even understand what they do. Unfortunately, although it may scare you, you can’t afford to manage this at a high level only. Here are some practical tips on how to do it right.

Set Very Specific Goals

Are you familiar with the Pareto principle (AKA the 80/20 rule)? It says that in many cases, 80% of the outcome is achieved by 20% of the effort, and the remaining 20% of the outcome takes the remaining 80% of the effort. In algorithmic research, this is very prominent. Getting to 80% precision, for example, is relatively easy and can probably be done with simple heuristics, without any real AI involved. Advancing from 80% to 90% is much more challenging, and getting from 90% to 92% would probably require more work than reaching the initial 90% altogether, even though it’s an improvement of just two percentage points.

In such conditions, it is easy to lose your way. If you simply tell the team to bring you a prediction algorithm, they will take it wherever they see fit. But as a product manager, it is your responsibility to define exactly what you are looking to achieve. Does it need to be a perfect algorithm? If you are building the autopilot mode of an autonomous car, you probably can’t afford any mistakes, so the answer would be yes. By the way, 100% accuracy usually comes at the expense of use-case coverage. So are you looking to support a minimal number of use cases with perfect accuracy, or is it more important to cover most use cases with decent accuracy? And what does decent mean for you? These are the questions you need to ask yourself to help the data science team provide the results you really want, and not just perfect research.
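To see the trade-off concretely, consider a toy sketch (the model outputs, confidences, and thresholds are entirely made up) in which the system only answers when its confidence clears a threshold. Raising the threshold buys accuracy at the price of coverage:

```python
# Toy illustration of the accuracy-vs-coverage trade-off described above.
# The predictions, confidences, and thresholds are all made up.
def evaluate(predictions, threshold):
    """Answer only when confidence clears the threshold; measure both sides."""
    answered = [(pred, truth) for pred, truth, conf in predictions if conf >= threshold]
    coverage = len(answered) / len(predictions)
    accuracy = (sum(p == t for p, t in answered) / len(answered)) if answered else 0.0
    return accuracy, coverage

# (predicted_label, true_label, model_confidence)
predictions = [
    ("home", "home", 0.95), ("work", "home", 0.55),
    ("work", "work", 0.80), ("home", "work", 0.40),
    ("home", "home", 0.70), ("work", "work", 0.90),
]

for threshold in (0.0, 0.6, 0.85):
    accuracy, coverage = evaluate(predictions, threshold)
    print(f"threshold={threshold:.2f}  accuracy={accuracy:.0%}  coverage={coverage:.0%}")
```

Where to sit on that curve is a product decision, not a research one, which is exactly why these goals need to come from you.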

Here is a classic example from my time as VP Product at Twiggle, where we had a very long onboarding process that required a lot of internal work for each customer. One of our goals was to dramatically reduce that effort. When I asked the data science team to help, I knew that a fully seamless onboarding process would be difficult and would take an extremely long time to achieve. Instead, I gave the team clear instructions: take over what the machine can easily do, and let the manual work focus on the places where a person is really needed. This approach gave us much quicker results.

Help Them Give You Time Estimates

If you have ever asked a data science team for time estimates, you’ve probably heard something like “We can’t tell. It’s research. It will be ready when it’s ready.” Data scientists are usually very reluctant to give time estimates, and they have a good reason: the research process is different by nature from software development. Giving time estimates is always tricky, but for research it’s even trickier, since you truly can’t know you are there until you are there. The team might make great progress initially, but then run into an issue that holds them back for a while, or even forces them to take a completely different approach. Engineering can also hit unexpected issues late in the game, but there you have many ways to minimize both their number and their impact; research, by nature, offers far fewer.

Unfortunately, for all the understanding of why data science teams truly have a hard time giving time estimates, you still need those estimates. You are managing many moving parts and need to be able to coordinate them, even roughly. You need to make decisions based on the expected timelines: will the feature still be relevant by then? Is it worth the effort?

To help your data scientists give you time estimates, ask them for an order of magnitude instead of exact timelines. It is also important to give them examples to make it easier for them to engage in this conversation altogether. It goes something like this:

You: “How long would it take?”

Data scientists: “We don’t know, it’s research, etc.”

You: “I need an order of magnitude, not an exact number”

Data scientists: shrug

You: “Would it take 3 weeks or 6 months?”

Data scientists: “No, 6 months is too much. 3 weeks is too little, though. It’s probably around 1.5–2 months to have a usable first version.”

While this estimate is still risky, and would probably change, it is still infinitely more valuable for you than knowing nothing. It is something you can work with.

Constantly Review WIP

In the agile world, we are used to breaking product requirements down into small user stories that fit into a single sprint (or whatever the equivalent is in the method you work with). At the end of the sprint you have a working product, even if with minimal functionality, that you can review to make sure it indeed meets your initial intent.

When you apply the same methodology to working with data science teams, you get stuck: research cannot be broken down into small pieces that easily, so many product managers end up handing the data science team the requirements and then simply waiting for the magic to happen. But if you give up on breaking the requirements down into smaller, specific milestones, you must replace them with other means of staying in the loop. With research teams, this usually means reviewing interim results, not just the final product.

This is important because you want to make sure they are going in the right direction and that they understood you correctly, and, just like in agile development, you might understand what you really need only after seeing something with your own eyes. Ask the team to show you the results they currently have, even if the research is still a work in progress. Looking at the results may reveal, for example, that something you thought was crucial is not that important after all. It can also help you set the right requirements in the first place: when you started, and set your specific goals per my recommendation above, how could you tell whether you needed 80% or 90% accuracy? It often takes seeing something concrete for yourself, say a 75%-accurate result, to truly grasp what that accuracy means to your customers and make your decision accordingly.
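One lightweight way to run such a review, sketched here under my own assumptions (the records and field names are hypothetical), is to ask the team for their current raw predictions and walk through a reproducible random sample together:

```python
import random

# Hypothetical sketch of an interim-results review: sample the team's current
# predictions so the product manager can eyeball them mid-research.
def sample_for_review(results, k=20, seed=42):
    """Return a reproducible random sample of interim results to review."""
    rng = random.Random(seed)  # fixed seed so everyone sees the same sample
    return rng.sample(results, min(k, len(results)))

# Imaginary interim output from the data science team.
results = [
    {"query": "red running shoes", "prediction": "footwear/athletic", "confidence": 0.91},
    {"query": "cast iron skillet", "prediction": "kitchen/cookware", "confidence": 0.84},
    {"query": "phone case", "prediction": "kitchen/cookware", "confidence": 0.32},
]

for row in sample_for_review(results, k=3):
    print(f"{row['query']!r} -> {row['prediction']!r} ({row['confidence']:.0%})")
```

Even a quick pass over a sample like this can tell you whether, say, 75% accuracy feels shippable for your customers or nowhere close.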

Letting the data science team do their work without supervision is as bad for them as it is for you. Most data scientists I know care about making an impact, and research is simply their means of doing that. They want to do what’s right for the company, not just do research for the sake of research. To work on the things that truly matter, they need your help.

The bottom line is that even though research is complicated and your data scientists are experts, they still need your guidance. To provide it, you need to learn to speak their language. Make sure you understand, at least at a high level, the research process, the terms and methodologies they use, and how they make their decisions. Ask them questions, talk to them about their work, and ask for reading materials to familiarize yourself with their world. As tempting as it is to stay out of it, to work effectively with AI teams you must do the hard work of getting into their world. You can’t afford to stay uninvolved.

My free e-book “Speed-Up the Journey to Product-Market Fit”, an executive’s guide to strategic product management, is waiting for you at www.ganotnoa.com/ebook

Originally published at https://ganotnoa.com on February 10, 2022.
