A Product Management Framework for Machine Learning — Part 2 of 3

Pranav Pathak
Published in Product Coalition · 6 min read · Mar 27, 2018


This is part two of a three-part series — if you haven’t yet, here’s the link to read part 1.

A quick look-back at the 8 steps to building an AI Product:

  1. Identify the problem
    There are no alternatives to good old-fashioned user research.
  2. Get the right data set
    Machine learning needs data — lots of it!
  3. Fake it first
    Building a Machine Learning model is expensive. Try to get success signals early on.
  4. Weigh the cost of getting it wrong
    A wrong prediction can have consequences ranging from mildly annoying a user to losing a customer forever.
  5. Build a safety net
    Think about mitigating catastrophes — what happens if Machine Learning gives a wrong prediction?
  6. Build a feedback loop
    This helps gauge customer satisfaction and generates data for improving your machine learning model.
  7. Monitor
    For errors, technical glitches, and prediction accuracy.
  8. Get creative!
    ML is a creative process, and Product Managers can add value.

This article covers steps 4 through 6. Let’s get right to it!

4. Weigh the cost of getting it wrong

Machine Learning is not error-free, and developing without guardrails can have serious consequences. Consider, for example, the well-known Twitter bot that turned malicious, or the Google Photos AI gone wrong.

Of course, we cannot conclude that all AI will go insane. At the same time, we need to acknowledge that there is a cost to getting it wrong with AI. Think, for example, of an order-management service that automatically detects whether a customer wants to cancel their order. The cost here is that the model might wrongly interpret the user’s intent as a cancellation, with financial consequences for both the user and the company.

Machine learning relies to a large extent on probabilities, which means there is always a chance that the model gives the wrong output. Product Managers are responsible for anticipating and recognizing the consequences of a wrong prediction. One great way to anticipate consequences like the ones above is to test, test, and test some more. Try to cover all the scenarios the AI might encounter. Understand what makes up the probabilities computed by the model. Think about the desired precision of the model (keep in mind, the more precise your model is, the fewer cases it can cover — more on this in part 3). Talk to your Data Scientists about what can potentially go wrong. Ask tough questions: it is the Data Scientist’s job to build a fairly accurate model, and it is the Product Manager’s job to understand the consequences of such a model.
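To make the precision/coverage trade-off concrete, here is a minimal sketch (my own illustration, not tied to any particular product) of a confidence threshold: the model only acts on predictions it is confident about and defers everything else to a human.

```python
# Illustrative only: act on confident predictions, defer the rest.
# The threshold value and function name are hypothetical.

def act_or_defer(class_probabilities, threshold=0.9):
    """Return the predicted class index if confident enough, else None."""
    best = max(range(len(class_probabilities)), key=lambda i: class_probabilities[i])
    if class_probabilities[best] >= threshold:
        return best  # confident enough to act automatically
    return None      # defer to a human

print(act_or_defer([0.05, 0.92, 0.03]))  # -> 1 (act)
print(act_or_defer([0.40, 0.35, 0.25]))  # -> None (defer)
```

Raising the threshold makes the model more precise on the cases it does handle, but it handles fewer cases: exactly the trade-off above.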

In general, the cost of getting it wrong varies by use case. If you are building a recommender system to suggest similar products where previously there was nothing, the worst outcome might be low conversion on the recommendations you offer. One must then think about how to improve the model, but the consequences of getting it wrong are not catastrophic.

If AI is automating manual flows, a good estimate of the cost is how often the AI gets it wrong compared to a human. If, for example, a human is right about an email being spam 99% of the time, an AI spam detector is worth the investment at 99.1% precision.
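To put numbers on that, here is a back-of-the-envelope check with made-up confusion counts; only the arithmetic is the point.

```python
# Hypothetical counts for an AI spam detector.
true_positives = 991   # spam correctly flagged as spam
false_positives = 9    # legitimate mail wrongly flagged as spam

model_precision = true_positives / (true_positives + false_positives)
human_precision = 0.99  # the human baseline from the example above

print(f"model: {model_precision:.3f}, human: {human_precision:.2f}")
# model: 0.991, human: 0.99 -> the model edges out the human baseline,
# so automating this flow may be worth the investment.
```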

It is important to understand these consequences in order to mitigate them and to train the model better for future scenarios.

5. Build safety nets

Once all of the consequences of getting a prediction wrong are identified, relevant safety nets must be built to mitigate them. Safety nets can be intrinsic or extrinsic.

Intrinsic safety nets

These recognize the impossibilities that are fundamental to the nature of the product. For example, a user cannot cancel an order if no order was made in the first place. So that email you got is definitely not talking about cancelling an order, and the model has misclassified it. It’s a good idea to have a human agent look into such a case. A useful activity is to map out the user journey for your product and identify the states a user can go through; this helps weed out impossible predictions. Intrinsic safety nets are invisible to the user.
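As a sketch, an intrinsic safety net for the order-cancellation example might look like the following; the types and function names are hypothetical, not from any real service.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    open_orders: list = field(default_factory=list)

def route_to_human_agent(user: User, intent: str) -> str:
    # Stand-in for a hand-off to a support queue.
    return f"routed {user.name}'s '{intent}' request to a human agent"

def handle_predicted_intent(user: User, intent: str) -> str:
    # Intrinsic safety net: reject predictions that are impossible
    # given the user's state, invisibly to the user.
    if intent == "cancel_order" and not user.open_orders:
        return route_to_human_agent(user, intent)
    return f"executing '{intent}' for {user.name}"

print(handle_predicted_intent(User("Ada"), "cancel_order"))
# -> routed Ada's 'cancel_order' request to a human agent
```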

Extrinsic safety nets

Extrinsic safety nets are visible to the user. They can take the form of confirming user intent or double-checking the potential outcome. LinkedIn, for example, has a model to detect the intent of a message and suggest replies to its users. It does not, however, assume a reply and automatically send it. Instead, it asks the user to pick from a list of potential replies.
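A sketch of that pattern: rank the model’s candidate replies and surface only the top few for the user to choose from, rather than sending one automatically. The candidate replies and scores below are made up.

```python
def suggest_replies(scored_replies, k=3):
    """Return the k most likely replies for the user to pick from."""
    ranked = sorted(scored_replies, key=lambda pair: pair[1], reverse=True)
    return [reply for reply, _score in ranked[:k]]

candidates = [("Sounds good!", 0.62), ("Thanks, I'll pass.", 0.21),
              ("Can we talk next week?", 0.12), ("Congrats!", 0.05)]
print(suggest_replies(candidates))
# The user, not the model, makes the final call by tapping a suggestion.
```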

Extrinsic safety nets for users are not a new concept. Think about every time Windows 95 popped up a confirmation dialog box before a destructive action.

That system did not use AI, but it did take into account that erroneous actions can have consequences. Safety nets are ingrained in all products, and AI is no exception.

6. Build a feedback loop

Setting up safety nets also helps gather much-needed feedback for the model.

Feedback loops help measure the impact of a model and can add to the general understanding of usability. In the context of an AI system, feedback is also important for a model to learn and improve. Feedback is an important data-collection mechanism: it yields labelled datasets that can be plugged directly into the learning mechanism.

For Amazon’s recommendation module, the feedback loop is quite simple: does the customer click on a recommendation, and does that click turn into a purchase?
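As a sketch, logging that kind of implicit feedback as labelled data might look like this; the event fields and storage format are illustrative, and a real system would write to a logging pipeline rather than a local file.

```python
import json
import time

def log_feedback(item_id, features, clicked, purchased, path="feedback.jsonl"):
    """Append one labelled example: the features we predicted on,
    plus the outcome the user actually produced."""
    record = {
        "ts": time.time(),
        "item_id": item_id,
        "features": features,
        "label": {"clicked": clicked, "purchased": purchased},
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Each record can later be fed back into training as a labelled example.
log_feedback("sku-123", {"rank": 1, "price": 19.99}, clicked=True, purchased=False)
```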

AirBnB uses a more direct approach for feedback collection.

Netflix uses a hybrid: it learns from how many of the recommendations you click and view, and it also uses a thumbs-up mechanism for explicitly logging preferences.

It must be noted that safety nets can often double as feedback channels for refining the model. Safety nets are, by their nature, outside the ambit of the model’s predictions. They should be used, whenever possible, to label data and generate a stronger learning dataset.
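A sketch of that idea, continuing the order-cancellation example: when a safety net routes a case to a human agent, the agent’s verdict is stored as ground truth for retraining. The names here are hypothetical.

```python
def record_correction(message_text, predicted_intent, human_intent, dataset):
    """Store the agent's correction as a labelled training example."""
    dataset.append({
        "text": message_text,
        "model_said": predicted_intent,  # what the model predicted
        "label": human_intent,           # what the human decided
    })

training_data = []
record_correction("Where is my parcel?", "cancel_order", "delivery_status",
                  training_data)
# Misclassifications caught by a safety net are the strongest training
# signal: they are exactly the cases the model got wrong.
```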

I may make AI sound scary, but responsible development, grounded in an understanding of consequences, is essential to any product, whether or not AI is involved.

Just to reiterate, here is the link to the first post in this series. Make sure to let me know if this was helpful!


Product Guy - I build and admire cool products. Principal Product Manager @ Booking.com. Previous - BrainBought, Flipkart, thePiProject