Proposal: Scoring Shared (Aspirational) OKRs Consistently Across R&D
Shared OKRs (categorized as "aspirational" in Ally.io) across R&D need to be tracked and scored against the same definition of success and the same measurement requirements. If we don't adopt a single scoring methodology across R&D for our shared product OKRs, scores will be subjective and based on different parameters, and product groups and stages will display an unequal and inaccurate assessment of progress.
Below is an outline of what we propose the Product and Engineering teams adopt as of Q3 for all shared product OKRs. We recommend that all "aspirational" OKRs in Ally, including those specific to other functions such as UX and Quality, use this methodology when scoring KRs that align to shared product OKRs, so we have an accurate view across teams.
The problem
Currently, some Product groups treat OKRs as binary: the OKR is either 100% complete or reported as 0% progress, even if there was partial completion or incremental improvement. Engineering takes the opposite approach: it measures effort toward the OKR and considers a score of 70% or more completion a success. This makes it difficult to measure shared OKRs consistently and effectively across teams.
Proposal
One agreed-upon scoring method for shared product OKRs will drive better collaboration between Product and Engineering. At the beginning of each quarter, shared product KR owners should lead and align the Hive (the cross-functional team doing the work) on a plan to accomplish their KRs, scored as follows:
Scoring Method: Scale
This method allows for flexibility and gives you partial credit for achieving part of a key result. KRs are scored on a 0-100% scale: if you deliver part of the KR, you credit that result as a percentage score.
Objective: Improve License Management
- KR1 - Close 16 License issues
- KR2 - Reduce license-related support tickets by 4%
- KR3 - Manually audit 100% of missing license data
Using the scale scoring method, the results would look like this:
- KR1 - Closed 8 of 16 issues - 50%
- KR2 - Reduced license-related support tickets by 3.8% - 95%
- KR3 - Manually audited 100% of missing license data - 100%
In this scenario you didn't fully complete two of the KRs, but because you selected the Scale method you received credit for what was completed. For example, 8 of 16 issues is half of the KR, so the score is 50%. If additional issues were found during development, you would create a new OKR to resolve them the following quarter rather than shift the target of your current quarter.
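The scale method boils down to a simple ratio of achieved versus target, capped at 100%. As an illustrative sketch only (the `score_kr` helper below is hypothetical, not part of Ally), the example KRs above would be scored like this:

```python
def score_kr(achieved: float, target: float) -> float:
    """Score a key result on a 0-100% scale, capped at 100%."""
    if target == 0:
        raise ValueError("target must be non-zero")
    return min(achieved / target, 1.0) * 100


# The example KRs from the "Improve License Management" objective above:
# (description, achieved, target)
krs = [
    ("KR1 - Close 16 License issues", 8, 16),
    ("KR2 - Reduce license-related support tickets by 4%", 3.8, 4),
    ("KR3 - Manually audit 100% of missing license data", 100, 100),
]

for name, achieved, target in krs:
    print(f"{name}: {score_kr(achieved, target):.0f}%")
# KR1 -> 50%, KR2 -> 95%, KR3 -> 100%
```

Note that overachievement is capped at 100%; per the guidance above, work beyond the target belongs in a new OKR the following quarter.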
How this will be managed in Ally
Using the scoring method above, shared product KR owners will monitor progress at least once a month. Product KR owners will receive an Ally/Slack notification on the 5th day of every month prompting a "check-in" in Ally. At that time they will review the scores in Ally and communicate any updates to their collaborators as needed.
How this will work in Product Key Reviews
Next steps:
- Share with Product leadership team
- Share with Engineering team leads: Dev, UX, and Quality
- Share across product management and engineering teams
- Add to product handbook