2023 Q1 - KR2: Investigate current state of open Actionable Insight (AI) issues
Background
Based on this past summary, more AI issues are created than closed.
The Problem
While we are tracking the number of AI issues opened and closed, we need to understand qualitatively why there is such a discrepancy between the number of open and closed AI issues.
The Solution
- Conduct interviews with AI users/creators/stakeholders and consumers (e.g., engineers) to create a journey map and identify themes and perceptions around the AI process.
- Establish personas within AI users:
  - Breakdown of issue creators by role
  - Perceptions by role
- Conduct desk research on open AI issues to see if there are trends (see the sketch after this list for one way to pull the underlying issue data):
  - Categorize types of AI issues (to see how much variability there is in how people identify and define them)
  - Look at activity ("page visits") on issues
  - Number of open AIs per stage
  - Close rate by stage
  - Open/close rate by Designer
  - Which discipline closes them? How is that determined? (also include in interviews)
  - For the ones closed, is the insight implemented or just closed? This is SUPER important to know. (also include in interviews)
  - Number of actionable insights by severity
  - What happens to actionable insights after they are created (by stage)? (also include in interviews)
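As a starting point for the desk research above, the sketch below shows one way the issue data could be pulled for the open/closed and per-stage counts. It is a minimal sketch, not part of the original plan: the project ID, the "Actionable Insight" label name, the `devops::` scoped-label scheme for stage, and the `GITLAB_TOKEN` environment variable are all assumptions that would need to match the real project.

```python
# Minimal sketch: pull issues labeled "Actionable Insight" from the GitLab API
# and count open issues per stage. PROJECT_ID, AI_LABEL, the "devops::" stage
# label scheme, and the GITLAB_TOKEN env var are assumptions to adjust.
import os
from collections import Counter

import requests

GITLAB_API = "https://gitlab.com/api/v4"
PROJECT_ID = 278964                  # hypothetical project ID
AI_LABEL = "Actionable Insight"      # hypothetical label name
HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}


def fetch_issues(state):
    """Page through every issue with the AI label in the given state."""
    issues, page = [], 1
    while True:
        resp = requests.get(
            f"{GITLAB_API}/projects/{PROJECT_ID}/issues",
            headers=HEADERS,
            params={"labels": AI_LABEL, "state": state, "per_page": 100, "page": page},
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return issues
        issues.extend(batch)
        page += 1


open_issues = fetch_issues("opened")
closed_issues = fetch_issues("closed")
print(f"open: {len(open_issues)}, closed: {len(closed_issues)}")

# Count open issues per stage, assuming stage is encoded as a scoped
# "devops::" label on each issue.
stage_counts = Counter(
    label
    for issue in open_issues
    for label in issue["labels"]
    if label.startswith("devops::")
)
for stage, count in stage_counts.most_common():
    print(f"{stage}: {count}")
```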
Scope
Initial investigation into open AI issues. This will guide future recommendations and iterations on the AI process.
Hypotheses
- There is variability in how AI issues are documented/defined
- AI issues are treated like backlog issues: they get documented but are not prioritized over new feature development
Research Questions
- What are the possible outcomes of actionable insight (AI) issues?
  - How do we measure the success of those outcomes?
- What are the potential factors in whether an AI issue is successfully completed or not?
  - Which of those factors can we measure and control?
- Who is responsible for the timeline and direction of an AI issue?
- What is a typical timeline, in months, for addressing AI issues?
  - What are the two most impactful challenges in that timeline?
- What is the process for prioritizing or weighting AI issues?
  - Who is involved in that process?
  - What are the impactful factors in that prioritization?
- What improvements could be made to complete AI issues more quickly?
Steps
- Investigate AI open/close behaviors
  - 2/4/2022 @moliver28: Investigate open/close AI issues by various cohorts (stage, other tags used, creator, weight, severity). Collect quantitative data to support personas and use cases of AI issues (see the sketch below).
  - 2/9/2022 @moliver28: Identify participants to interview (look at top AI issue creators). The goal is to have 5 participants at first, then decide if we need more. List of possible participants here. (GitLab Only)
  - 2/15/2022 @moliver28: Put the quantitative data findings in a Google Doc.
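Once the issue data is collected, the cohort breakdowns (close rate by stage, severity, and creator) could be tabulated along these lines. This is a minimal sketch assuming a hypothetical CSV export named `ai_issues.csv` with `stage`, `severity`, `author`, and `state` columns; the real export and column names may differ.

```python
# Minimal sketch: tabulate close rate by cohort from an exported issue list.
# The file name (ai_issues.csv) and the column names ("stage", "severity",
# "author", "state") are assumptions about the export format.
import pandas as pd

issues = pd.read_csv("ai_issues.csv")


def close_rate(cohort_col):
    """Share of AI issues in each cohort that have been closed."""
    counts = issues.groupby(cohort_col)["state"].value_counts().unstack(fill_value=0)
    counts["close_rate"] = counts.get("closed", 0) / counts.sum(axis=1)
    return counts.sort_values("close_rate")


for cohort in ("stage", "severity", "author"):
    print(close_rate(cohort), "\n")
```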
- Conduct interviews
  - 2/18/2022 @moliver28: Create first draft of the interview script.
  - 2/25/2022 @moliver28, @asmolinski2, @laurenevans: Decide on the final script.
  - 2/25/2022 @moliver28: Finish the notes template.
  - 3/01/2022 @moliver28: Invite participants (5 participants). Invitation Template (GitLab Only)
  - 3/02/2022 @moliver28: Begin conducting interviews. @moliver28 will be OOO from 3/10 to 3/21.
  - 4/6/2022 @moliver28: Finish interviews (9 total work days for interviews).
- Analyze interview data
  - 4/6/2022 @moliver28: Summarize interview data.
  - 4/8/2022 @asmolinski2 & @moliver28: Decide if more participants are necessary.
- Complete additional survey
  - 4/15/2022 @moliver28: Finish survey and share-out blurb.
  - 4/18/2022 @moliver28: Send to the UX and Product channels.
  - 4/22/2022 @moliver28: Summarize survey answers.
- Compile all research data
  - 4/19/2022 @moliver28: Finish summarizing qualitative and quantitative data.
  - 4/26/2022 @moliver28: Create journey map and finish the [research report](https://docs.google.com/document/d/1nlTSiU9ipkYSWGIkX3CN4_esnP_3yjZCzzh8OSIbLQA/edit?usp=sharing) (moved to Q2 KR).
- 4/28/2022 @moliver28: Create presentation of findings (moved to Q2 KR: #1878): Slides