Productivity Analytics - Type of Work (pre-defined)
Problem to solve
While we started tackling productivity analytics in https://gitlab.com/gitlab-org/gitlab-ee/issues/12079, there is a lot more information that can be conveyed to EMs to help them optimize their groups. A question many managers have is what their engineers are working on, and how they can adjust the time spent on new features vs. security issues vs. quality work in order to achieve milestone goals.
Examples:
- If, in the process of creating new features, we find that there is a lot of churn, this might indicate that product specifications are continuously changing or that engineers are prototyping. In itself this is not bad, but if it continues for longer than expected, it might be an early indication of a problem.
- If a specific engineer spends most of their time refactoring a particular portion of the codebase due to defects, this might indicate long-overdue technical debt.
Solution
We add a bar chart that displays lines of code committed per day, split by category (new work, churn, refactoring). The chart will be placed between the merge request charts and the trendline. The copy for the chart is the following:
- Title of the chart:
Type of work
- Y-axis label:
Lines of code
- X-axis label:
Days
- Legend:
New work, Churn, Refactoring
The content of the chart will be influenced by the scope set in the filter bar at the top.
When you hover over a date, a popover displays the actual number of lines of code for each bar, as well as the percentage each type represents.
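As a minimal sketch, the popover content could be derived from the per-day category totals like this. The function name `popover_stats` and the input shape are assumptions for illustration, not part of the proposal:

```python
def popover_stats(counts):
    """Given per-category LOC counts for one day, e.g.
    {"new": 120, "churn": 30, "refactoring": 50},
    return {category: (lines, percent)} pairs for the popover."""
    total = sum(counts.values())
    if total == 0:
        # No commits that day: show zero lines and zero percent per category.
        return {k: (0, 0.0) for k in counts}
    return {k: (v, round(100 * v / total, 1)) for k, v in counts.items()}


# Example: a day with 200 lines of code committed in total.
stats = popover_stats({"new": 120, "churn": 30, "refactoring": 50})
# stats["new"] is (120, 60.0): 120 lines, 60% of the day's work.
```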
On mobile, the chart scrolls horizontally so that its full content can be viewed:
Original proposal
- We should classify code as:
  - New (number of LOC additions)
  - Churn (number of LOCs changed/deleted that have existed for less than 1 month)
  - Refactoring (number of LOCs changed/deleted that have existed for more than 1 month)
- On the Productivity analytics page, there can be a chart before the trend line chart showing this information (it won't be selectable, so I think this is a better position). The chart can be a bar chart for each day, where we show the stacked % of churn, new, and refactoring (https://www.amcharts.com/demos/100-stacked-column-chart/) with a tooltip showing LOC (How can we show the total LOC, @matejlatin / @cperessini?). The global filters should apply, so that we can filter MRs by author, assignee, milestone, days, and labels. Similarly to how we have built the interaction of the other charts, I would recommend that if we don't select anything on the MR chart, we show all information, but if we do, it only shows the filtered data.
A couple of things we need to think about for the future and to keep in mind:
- how would we extend the page to show us work in progress
- how could we enable comparison between teams/ projects/ people
- how can we ensure a full picture, i.e. I am seeing that one of my MRs is taking 21 days and I would like to understand why that is the case, so I see that it's because 90% of the work was refactoring, it took many big commits and reviews, etc.
Permissions and Security
The dashboard should inherit the users' project/group permissions.
Documentation
Testing
What does success look like, and how can we measure that?
Measure: We should include the page in user ping to start with and measure how many people visit and how long they stay on it.
Success: As we develop this further, we expect users to spend significant time deep-diving into the page at least once a week.