
Threat insights feature prioritization via the Kano Method

Current status:

  • Survey fielding complete
    • Primary cohorts filled
  • Analyzing results
    • WSJF results for Sec:IC Cohort
    • Verbatim analysis
    • Summarize results (see Results and key insights below)
  • Wrap-up

What’s this issue all about?

We have a long list of potential features we believe would be valuable to implement this year (2021/FY2022).

Who is the target user of the feature?

  • Application Security Engineer/Analyst
  • Security team lead

What questions are you trying to answer?

  • What planned features are the most valuable/desirable?
  • In what priority order should we build these features?

Methodology

Using the Kano methodology, we want to devise a Qualtrics questionnaire that will help us determine both the desirability and the priority, from a user's perspective, of a shortlist of features we have planned for this year.

I highly recommend reviewing the Kano methodology documentation before proceeding.

Feature shortlist:

| Feature | Theme |
| --- | --- |
| Grouping vulnerabilities | Vulnerability Management at scale |
| Custom tab creation | Vulnerability Management at scale |
| Attach multiple vulnerabilities to a single issue | Vulnerability Management at scale |
| Customizable report download | Vulnerability Management at scale |
| Auto-dismiss vulnerabilities | Configurability and flexibility |
| Add reviewers to MRs based on detected vulnerability attribute | Configurability and flexibility |
| Disallow status & severity changes based on the permission level | Configurability and flexibility |

Format:

Each feature is presented on a single page with a mock-up or GIF and a brief description of the feature, followed by 3 questions.

Example:

Question 1

Functional question: “If exporting any video takes under 10 seconds, how do you feel?”

A: Likert scale:

  • I like it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I dislike it

Question 2

Dysfunctional question: “If exporting some videos takes longer than 10 seconds, how do you feel?”

A: Likert scale:

  • I like it
  • I expect it
  • I am neutral
  • I can tolerate it
  • I dislike it

Question 3

Feature importance: “How important is it, or would it be, if exporting videos always takes less than 10 seconds?”

A: Likert scale:

9-point scale of importance

  • Not important
  • Somewhat important
  • Important
  • Very important
  • Extremely important
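
For reference, the sketch below shows how a single respondent's functional/dysfunctional answer pair maps to a Kano category. It uses the commonly published Kano evaluation table; the category codes (Attractive, Performance, Must-be, Indifferent, Reverse, Questionable) follow that standard table and may not match the exact variant used in our analysis.

```python
# Scale labels mirror the survey answers above.
SCALE = ["I like it", "I expect it", "I am neutral", "I can tolerate it", "I dislike it"]

# Rows = functional answer, columns = dysfunctional answer.
# A=Attractive, P=Performance, M=Must-be, I=Indifferent, R=Reverse, Q=Questionable
KANO_TABLE = [
    # like  expect  neutral tolerate dislike   <- dysfunctional answer
    ["Q",   "A",    "A",    "A",     "P"],     # functional: I like it
    ["R",   "I",    "I",    "I",     "M"],     # functional: I expect it
    ["R",   "I",    "I",    "I",     "M"],     # functional: I am neutral
    ["R",   "I",    "I",    "I",     "M"],     # functional: I can tolerate it
    ["R",   "R",    "R",    "R",     "Q"],     # functional: I dislike it
]

def kano_category(functional_answer: str, dysfunctional_answer: str) -> str:
    """Map one (functional, dysfunctional) answer pair to a Kano category code."""
    return KANO_TABLE[SCALE.index(functional_answer)][SCALE.index(dysfunctional_answer)]

# Example: a respondent likes having the feature and dislikes its absence -> Performance.
print(kano_category("I like it", "I dislike it"))  # "P"
```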

Important documents

🔎 Analysis

User cohorts

Users = participants who selected any of {security scanning, vulnerability report, security dashboard}

Cohort = [User] + [Role] + [Department]

  • Individual contributors (all)
    • Individual contributors (Security) [Primary cohort]
    • Individual contributors (Engineering)
    • Individual contributors (Operations)
  • Managers/Team lead (all)
    • Managers (Security), [Secondary cohort]
    • Managers (Engineering)
    • Managers (Operations)
  • Director and up (all)
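
As a rough illustration of how the cohort slicing above could be applied to the survey export, here is a minimal sketch. The column names (`uses_features`, `role`, `department`) are hypothetical stand-ins for the actual Qualtrics fields.

```python
import pandas as pd

# Hypothetical survey export; values are placeholders, not real responses.
responses = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "uses_features": [["security scanning"], ["vulnerability report", "security dashboard"], []],
    "role": ["Individual contributor", "Manager", "Director"],
    "department": ["Security", "Engineering", "Operations"],
})

SECURITY_FEATURES = {"security scanning", "vulnerability report", "security dashboard"}

# "Users" = respondents who selected at least one of the security features.
users = responses[responses["uses_features"].apply(lambda f: bool(SECURITY_FEATURES & set(f)))]

# Primary cohort: Security individual contributors (Sec:IC) = [User] + [Role] + [Department].
sec_ic = users[(users["role"] == "Individual contributor") & (users["department"] == "Security")]
print(sec_ic[["respondent_id"]])
```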

What decisions will you make based on the research findings?

The output of the analysis will be used as input to the Weighted Shortest Job First (WSJF) product planning method. This will help generate a final development roadmap for the year.

WSJF

WSJF is a way to organize a work queue to maximize value delivered over time. The formula is simple:

WSJF Score = Cost of Delay / Duration

Cost of Delay quantifies the impact of waiting for work to get done. It is calculated as Value times Urgency. While value calculations are often done in terms of potential profit and cost savings, for our purposes Value will be a quantified relative score of how important something is to a customer. Urgency will similarly be a relative score of how quickly customers need a feature (and how much the perceived value decreases the longer they have to wait).

Duration is the amount of time it will take to deliver a feature. This can be a specific time estimate, but it is much easier to once again use relative sizing, such as point weights or t-shirt sizes. What matters most is using the same scale for all calculations.

This means we can rewrite our WSJF formula as:

WSJF Score = (Value score x Urgency score) / Development effort
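
A minimal sketch of the formula above, assuming all three inputs are already expressed as relative scores on a shared scale:

```python
def wsjf_score(value: float, urgency: float, effort: float) -> float:
    """WSJF = Cost of Delay / Duration, where Cost of Delay = Value x Urgency.

    All inputs are relative scores (e.g. t-shirt sizes mapped to numbers);
    the absolute units cancel out when features are ranked against each other.
    """
    cost_of_delay = value * urgency
    return cost_of_delay / effort

# Example with illustrative (made-up) relative scores:
print(wsjf_score(value=8, urgency=5, effort=3))  # ~13.3
```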

Kano and WSJF

Using the Kano analysis, we can derive the Cost of Delay [C] by multiplying the dissatisfaction (dysfunctional) score [d] by the importance score [i] of the feature. The greater the dissatisfaction and importance, the greater the user need.

We now have our final formula:

WSJF Score = (Importance score x Dysfunctional score) / Feature weight

Putting it all together, combining the Kano analysis results with rough development effort estimates in the WSJF formula will give us a stack-ranked list of features. Delivering these features in order from highest to lowest WSJF score should maximize value realized in the least amount of time. Of course this is only a guide; the order can change or additional non-analyzed work can be substituted based on other criteria.

One important thing to note is that WSJF scores can be compared at any time, as long as the Cost of Delay and Duration inputs use the same input types/scale. This means that future feature ideas can go through a Kano and WSJF analysis independently of any other features and still be slotted appropriately into the larger stack rank. And of course, features can be re-run through the process to determine whether their order is still accurate.
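
To make the end-to-end calculation concrete, here is a sketch of deriving Cost of Delay from Kano scores and stack-ranking by WSJF. The feature names, scores, and effort weights are illustrative placeholders, not the study's actual numbers.

```python
# Hypothetical inputs: per-feature Kano importance and dysfunctional scores
# plus a relative development-effort weight.
features = [
    {"name": "Feature A", "importance": 6.5, "dysfunctional": 2.9, "effort": 3},
    {"name": "Feature B", "importance": 5.8, "dysfunctional": 2.2, "effort": 1},
    {"name": "Feature C", "importance": 7.1, "dysfunctional": 3.4, "effort": 8},
]

for f in features:
    f["cost_of_delay"] = f["importance"] * f["dysfunctional"]   # Kano-derived Cost of Delay
    f["wsjf"] = f["cost_of_delay"] / f["effort"]                # WSJF score

# Deliver from highest to lowest WSJF score.
for f in sorted(features, key=lambda f: f["wsjf"], reverse=True):
    print(f"{f['name']}: CoD={f['cost_of_delay']:.1f}, WSJF={f['wsjf']:.1f}")
```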

Results and key insights

You cannot simply take all the scores and look at them in aggregate, as scores will differ for each cohort. For example, this analysis focused primarily on capabilities for the Security Individual Contributor (Sec:IC), so those scores carry greater weight than those of Security Leaders and Engineering Leaders, given the features we chose to evaluate. However, we cannot ignore the results from the other cohorts either; they will be used directionally to help with prioritization, since a larger audience consensus on a feature merits a greater return on the work.

Sec:IC Cohort

| Feature | Cost of Delay | Kano Category |
| --- | --- | --- |
| Disallow status & severity changes based on the permission level | 18.7 | Performance |
| Custom tab creation | 17.0 | Performance |
| Attach multiple vulnerabilities to a single issue | 12.8 | Attractive |
| Auto-dismiss vulnerabilities | 10.8 | Attractive |
| Grouping vulnerabilities | 9.6 | Attractive |
| Custom vuln report export | 9.3 | Attractive |
| Add reviewers to MRs based on detected vulnerability attribute | 6.0 | Attractive |
Kano Category Plot

[Figure: Sec_IC_Kano_Categories]

What's the latest milestone that the research will still be useful to you?

~ 14.9
