A Leader's Guide to Implementing OKRs (Part 2)

It has been incredible to see such a positive response to my OKRs post. So many of you reached out telling me how valuable it was to hear about the details of implementing a successful OKR program. Many of you also reached out with follow-up questions asking me to dive deeper into aspects you were struggling with in your own implementation. I thought I would share the most frequently asked questions as a follow-up post for those interested.

What tool do you recommend for tracking OKRs?
There are now many specialized tools on the market worth exploring for tracking, sharing, and grading OKRs, including WorkBoard, Betterworks, and Lattice. At Notejoy, we of course use Notejoy itself to capture our OKRs. We have an OKRs notebook with a pinned template note. Each quarter we duplicate the template note to create that quarter's OKRs. The notebook makes it super easy to flip through previous quarters' OKRs, collaboratively draft and edit them, leverage highlight colors to score them green/yellow/red, capture the discussion on lessons learned during each quarter's review, and search to quickly get back to them.

How do you handle multi-quarter OKRs?
There are times when a given initiative must span multiple quarters. This is often the case when you are releasing a new product and the effort might take 9-12 months to launch. In this case, you want to scope the objective and key results to fit into the given quarter. You'll often find that you'll have to use output-oriented instead of outcome-oriented key results, given these initiatives will be pre-launch. So your key results might be to conduct a certain number of customer interviews, finalize designs for the product, develop a certain feature, or build a certain piece of infrastructure. It's important to at least make these measurable, with dates for when you expect each activity to be done.

I still try whenever possible to find ways to include outcome-oriented measures as well. For example, when we were doing customer validation for LinkedIn Sales Navigator, we had a key result to get a certain percentage of the customers we interviewed to agree to join our pilot program, as evidence that the proposed solution was resonating enough with them that they were interested in doing a full pilot when a beta was ready. Once we had a beta available, we started leveraging classic post-launch measures amongst the beta users, including activation, engagement, and retention measures, as key results.

I also encourage teams to include future key results that won't be measured until a future date but reflect the user or business outcome you ultimately hope to achieve. This reminds the team from the get-go of what they are looking to achieve and helps inform trade-offs and decision-making within the quarter. It also helps you calibrate, as an initiative drags on, whether the expected key result still justifies the continued effort.

When do you schedule end-of-quarter OKR reviews and beginning-of-quarter OKR kick-offs?
The end of a quarter and beginning of the next quarter are critical and often busy times when it comes to OKRs.

You'll want to hold your OKR review as close to the end of the quarter as possible to allow your initiatives enough time to achieve their key results. I like to schedule the OKR review at the beginning of the last week of the quarter. To make this effective, you need to have your key results automated and easily accessible via a dashboard. If a data team needs to run custom queries and do ad-hoc analyses to determine whether you have hit your key results, this introduces immense delays and makes it very difficult to have an efficient OKR review and kick-off process. The automation will also give you regular feedback on how you are tracking against your OKRs mid-quarter, so it's a very worthwhile investment. It's best if the OKR owners pre-score their OKRs and send them out as a pre-read. This allows the OKR review meeting to first focus on ensuring everyone agrees with the scores and then spend the majority of its time on the key learnings from the quarter.
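As a concrete illustration, here's a minimal sketch of what automated pre-scoring might look like. The key results, values, and green/yellow/red thresholds are all hypothetical; in practice the current values would come from your analytics pipeline rather than being hard-coded.

```python
# Minimal sketch of automated key-result scoring. The key results, values,
# and thresholds below are illustrative, not a prescribed standard.

def score_key_result(current: float, target: float) -> str:
    """Grade a key result green/yellow/red by fraction of target achieved."""
    progress = current / target
    if progress >= 0.9:
        return "green"
    if progress >= 0.6:
        return "yellow"
    return "red"

key_results = [
    {"name": "Grow weekly active users to 50,000", "current": 47_500, "target": 50_000},
    {"name": "Reach 40% D7 retention", "current": 31.0, "target": 40.0},
    {"name": "Complete 25 customer interviews", "current": 12, "target": 25},
]

for kr in key_results:
    print(f"{kr['name']}: {score_key_result(kr['current'], kr['target'])}")
```

Running something like this daily against live metrics is what lets owners arrive at the review with scores already in hand.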

I like to then require draft OKRs to be submitted in the first week of the new quarter. This gives the team a week to reflect on the key learnings and come up with draft OKRs. I find that it's easy to develop draft OKRs within a week when you have a longer annual planning process in place where you spent time roughly slotting initiatives into each quarter, making the quarterly process one of refinement and incorporating new insights. Once drafts are available, partner teams have one week to provide feedback to ensure cross-functional alignment, and the OKRs are then finalized in the second week of the new quarter. At that point, an OKR kick-off meeting is held with the entire team to broadly share them. There are certainly trade-offs here with the timing of the review and finalizing the new quarter's OKRs, but I find it's more important to allow the previous quarter's OKRs enough time to realize their key results than to have the new OKRs finalized on day one of the new quarter. And I'm not a fan of cramming the OKR review and finalizing new OKRs into a single meeting, because it most often means no one is actually leveraging any of the learnings from the previous quarter's OKRs when determining the new quarter's.

Do customer satisfaction survey results serve as good key results?
I've certainly used various forms of customer satisfaction survey results as key results. It might be a CSAT survey asking customers how satisfied they are with the product or service on a scale of 1-5. Or an NPS survey asking how likely they are to recommend your product or service to a friend or colleague on a scale of 0-10. Or a PMF survey asking how they would feel if they could no longer use your product or service. I'm a big fan of conducting these surveys as they undoubtedly provide unique insights that can't be garnered through behavioral metrics alone.
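For reference, here's how two of those scores are typically computed from raw responses. The response lists are made-up sample data; the scoring rules are the standard definitions (NPS counts 9-10 as promoters and 0-6 as detractors; CSAT counts 4s and 5s as satisfied).

```python
# Computing NPS and CSAT from raw survey responses.

def nps(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores: list[int]) -> float:
    """CSAT = % of respondents answering 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for s in scores if s >= 4)
    return 100 * satisfied / len(scores)

print(nps([10, 9, 9, 8, 7, 6, 3, 10]))  # 25.0
print(csat([5, 4, 4, 3, 5, 2]))         # ~66.7
```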

That being said, they do pose challenges as key results. The problem with surveys is that getting to statistical significance can be difficult given low survey response rates. And attempts to boost response rates, like putting the survey in-product or adding incentives to the survey, only serve to bias the results you collect. Teams who track these survey results quarter after quarter are sometimes left drawing conclusions from insignificant sample sizes or seeing unexplained jumps or dips that they can't correlate back to any specific product initiatives or audience changes. Equally challenging is the fact that most teams who conduct these types of surveys do so at most once per quarter. This results in flying blind the entire quarter and not finding out how you are performing against your key result until the survey results are in. I encourage teams to mitigate these challenges by ensuring large enough survey populations with realistic expected response rates and by allowing enough time to collect responses. I also encourage teams to create a continuous surveying strategy so that you get near real-time information on how you are trending towards your key results. At Notejoy, for example, we conduct real-time NPS surveys, with responses streaming into our Slack #feedback channel every single day and weekly summaries automatically sent to our inbox.
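To make the sample-size concern concrete, here's a back-of-the-envelope margin-of-error check you can run before trusting a quarter-over-quarter change in a survey-based key result. The response counts are made up; the formula is the standard 95% confidence interval for a proportion.

```python
import math

# Rough margin of error for a survey proportion at 95% confidence.

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion p with n responses."""
    return z * math.sqrt(p * (1 - p) / n)

# 80 responses with 45% satisfied: the true value could plausibly sit
# anywhere within ~11 points, swamping most real quarterly movement.
moe = margin_of_error(p=0.45, n=80)
print(f"±{moe * 100:.1f} points")  # ±10.9 points
```

If your quarterly swing is smaller than that margin, you can't distinguish a real change from noise, which is exactly why small survey samples make for shaky key results.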

Given these challenges, I like to include customer satisfaction survey results as one of several key results for a given objective, but never the sole key result. I prefer also including user behavior metrics wherever possible. Things like D7 or D31 retention measures, weekly active users, and others are great because they effectively sample 100% of your users without any survey response drop-off and can be measured in real-time throughout the quarter, giving you valuable visibility into your progress.
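As an illustration of why behavior metrics are easier to operationalize, here's a minimal sketch of a D7 retention calculation over a signup cohort. The users and dates are made-up sample data, and note that teams differ on whether D7 means active exactly on day 7 or within 7 days; pick one definition and keep it consistent.

```python
from datetime import date, timedelta

# Illustrative D7 retention: share of a signup cohort active exactly 7 days
# after signing up. Real numbers would come from your event logs.

signups = {
    "ana": date(2024, 3, 1),
    "ben": date(2024, 3, 1),
    "chloe": date(2024, 3, 1),
}
active_days = {
    "ana": {date(2024, 3, 8)},   # active on day 7 -> retained
    "ben": {date(2024, 3, 3)},   # returned early, but not on day 7
    "chloe": set(),              # never returned
}

def d7_retention(signups: dict, active_days: dict) -> float:
    retained = sum(
        1 for user, signup_date in signups.items()
        if signup_date + timedelta(days=7) in active_days.get(user, set())
    )
    return 100 * retained / len(signups)

print(f"D7 retention: {d7_retention(signups, active_days):.0f}%")  # 33%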

Should objectives be derived from key results or key results from objectives?
I’ve actually seen teams take both directions successfully. Sometimes you start with a list of objectives derived from customer feedback or from implementing elements of your vision. In this case, you already have the objective and the task then becomes to determine the right key results that tie that objective back to intended user or business outcomes. New features or new products often operate this way.

At the same time, I've seen teams start with a desired user or business outcome that is critical for the business to achieve. Now that they have the key result, they come up with a list of potential initiatives, or objectives, that could satisfy that key result, prioritize them, and decide which ones to take on in the quarter. Growth teams or product redesigns or enhancements most often operate this way.

Are OKRs effective within a team when the entire organization is not yet leveraging OKRs?
Yes, I have definitely found OKRs to be effective within teams even when the entire org isn't yet bought in on them. They still provide focus, alignment, accountability, and an outcome-orientation within the team, regardless of team size. You might not get the benefit of company-wide alignment just yet, but they'll still help the team immensely. In fact, when rolling out OKRs to a large organization for the first time, I always recommend piloting them within a smaller team to sort out the kinks in the process instead of rolling them out company-wide and leaving a bad taste in employees' mouths when you run into early challenges.

Can OKRs be used in traditional industries outside of tech? How do OKRs differ from traditional goal setting?
Yes, OKRs can certainly be used in industries outside of the technology industry. I've personally helped several non-tech companies build or refine their OKR program and seen them be very successful. OKRs are really just another goal setting framework. Every organization already has an existing goal setting framework, so the differences lie in the details.

Most traditional organizations use a form of goal setting inspired by Peter Drucker's MBOs, which stands for management by objectives. Similar to OKRs, MBOs define measurable objectives for the team. However, OKRs take it further by having key results associated with each objective. While MBOs talk mostly about what you want to achieve (e.g., increase sales by 20%), OKRs add the additional layer of how you plan to achieve it (e.g., launch the product in 3 new international markets). MBOs also tend to be set annually, whereas OKRs tend to be set quarterly, giving you more opportunities to update them and reflect on results. MBOs are typically tied to employee compensation, whereas OKRs tend to be divorced from it. As a result, MBOs are traditionally risk-averse, whereas OKRs are aggressive and ambitious. And MBOs are traditionally set top-down, whereas OKRs empower individual teams to come up with their own.