As an executive investing in DevOps, I want to see my ROI: an improvement in my dev team's deployment frequency, since this metric will most likely drive the team's KPIs.
As a development leader, I want to see that the team is improving by measuring deployment frequency and comparing it over time (across sprints).
We already present deployment frequency as a numerical value, but we need to show it in a chart view, as the trend is the most interesting aspect of this metric.
Proposal
Under Analytics->CI/CD:
Present a chart for deployment frequency in daily periods for the past week
Present a chart for deployment frequency in daily periods for the past month
Present a chart for deployment frequency in daily periods for the past year
For each chart, add the deployment frequency to the overall statistics (the same value as the one presented in value stream analytics)
The vertical axis of the charts will represent the number of deployments
Ability to select custom date ranges for this data (at the moment we present last week, last month, and last year)
Ability to view group-level analytics
Ability to view instance-level analytics
Further details
Deployment frequency, otherwise known as throughput, is a measure of how frequently your team deploys code. This metric is usually expressed as a rate (deployments per day, week, or month) and answers the question "how often do we deploy to production, or to another significant point in our CD pipeline such as a staging environment?".
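As an illustration of the daily bucketing described above, here is a minimal sketch. The helper name and sample data are hypothetical, not GitLab's actual implementation:

```python
from datetime import date, timedelta

def daily_deployment_counts(deploy_dates, start, days):
    """Count deployments per day over a window of `days` starting at `start`.
    Days with no deployments are kept so the chart shows an unbroken axis."""
    counts = {start + timedelta(d): 0 for d in range(days)}
    for d in deploy_dates:
        if d in counts:
            counts[d] += 1
    return counts

# Example: three deployments over the past week
week_start = date(2021, 1, 4)
deploys = [date(2021, 1, 4), date(2021, 1, 4), date(2021, 1, 6)]
counts = daily_deployment_counts(deploys, week_start, 7)
# counts[date(2021, 1, 4)] == 2; days with no deployments stay at 0
```

A monthly or yearly chart would use the same rollup with a wider window.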
Assuming that we already have all of the data available in a table somewhere, it's just a matter of adding some controller logic to present it. In the iteration suggested here, I interpret these views as having static date windows (you can't scroll back in time).
So we'll need: new endpoint(s) and a controller, some performant queries to roll up the data for each time period, a JSON serializer for this data, docs, tests, and whatever scoping is needed to make it available in Ultimate. Maybe a weight of 2.
@ogolowinski Small question for you. The description mentions:
Present a chart for deployment frequency for daily periods for the past year
Should this instead be
Present a chart for deployment frequency for monthly periods for the past year
?
Two reasons why I think that monthly might be the intention here:
It seems like the goal is for these new "deployment frequency" charts to mirror the existing "pipelines" charts, and the current Pipelines for the last year graph is reported on a monthly basis
The example graph in the UX mockups looks like it's showing monthly data points, not daily data points
@nfriend The intent was indeed to mimic the pipeline graph, but we hit a snafu when working on the API regarding performance, so for now we won't present yearly.
| Attribute | Type | Required | Description |
|--------------|--------|----------|-----------------------|
| `id` | string | yes | The ID of the project |

| Parameter | Type | Required | Description |
|--------------|--------|----------|-----------------------|
| `environment`| string | yes | The name of the environment to filter by |
| `from` | string | yes | Datetime range to start from, inclusive, ISO 8601 format (`YYYY-MM-DDTHH:MM:SSZ`) |
| `to` | string | no | Datetime range to end at, exclusive, ISO 8601 format (`YYYY-MM-DDTHH:MM:SSZ`) |
| `interval` | string | no | The bucketing interval (`all`, `monthly`, `daily`) |
@atroschinetz Did we end up limiting all to 90 days?
@ogolowinski We don't limit based on all. We limit based on the [from, to) date range. This applies for all kinds of intervals: monthly, daily, and all.
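To make the `[from, to)` semantics concrete, here is a minimal sketch of how such a limit could be validated. The 90-day figure is the one discussed in this thread; `MAX_RANGE` and `validate_range` are hypothetical names, not the actual implementation:

```python
from datetime import datetime, timedelta

MAX_RANGE = timedelta(days=90)  # assumed limit from this thread's discussion

def validate_range(frm, to):
    """Enforce a [from, to) window: `from` inclusive, `to` exclusive.
    The limit applies to the date range itself, regardless of the
    requested interval (`all`, `monthly`, `daily`)."""
    if to <= frm:
        raise ValueError("`to` must be after `from`")
    if to - frm > MAX_RANGE:
        raise ValueError("date range must not exceed 90 days")

validate_range(datetime(2021, 1, 1), datetime(2021, 4, 1))  # exactly 90 days: accepted
```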
@nfriend Based on @atroschinetz's comment, I prefer to stick to the original proposal of week, month, and year, unless there is a performance issue, in which case we would limit the yearly view to 90 days for this iteration.
@nfriend @ogolowinski
Displaying an entire year's worth of data (the interval is irrelevant here) would require multiple API hits. Since you can get 3 months of data at a time, a year's worth of data would just be 4 API hits.
I don't /think/ there would be a performance hit for that... it'd be slower than a page that just showed quarterly data since that would only require a single API hit. 🤷
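Under that assumption, the client-side chunking could be sketched as follows. Note that with a strict 90-day cap (rather than calendar quarters), a 365-day year actually needs five requests: four full windows plus a short remainder. The function name is hypothetical:

```python
from datetime import date, timedelta

def split_into_windows(start, end, max_days=90):
    """Split [start, end) into consecutive windows of at most `max_days`,
    suitable for issuing one API request per window."""
    windows = []
    cursor = start
    while cursor < end:
        nxt = min(cursor + timedelta(days=max_days), end)
        windows.append((cursor, nxt))
        cursor = nxt
    return windows

year = split_into_windows(date(2021, 1, 1), date(2022, 1, 1))
# 365 days -> five windows: four 90-day windows plus a 5-day remainder
```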
@atroschinetz @nfriend I would rather start small. Let's start with 90 days (consistent with code coverage as well), and if users demand higher intervals, we can expand in a future iteration.
@ogolowinski you meant to say "wider date ranges" rather than "higher intervals", correct? Just want to make sure we're not talking about totally different things. Like I said in my previous comment, the intervals are completely irrelevant here.
Hey @nfriend I'd say if you want to avoid duplication of work, just work on app.vue, even if it's behind a feature flag. I believe I might be able to remove this during this milestone or the next one.
@jivanvl Actually, now that I'm working with the code, I think it might make more sense to just encapsulate all of the new charts in a new component (`<deployment-frequency-charts />`) and drop this component into both app.vue and app_legacy.vue. That way this feature isn't dependent on the graphql_pipeline_analytics feature flag.
I haven't written any code yet, but I've looked through this issue's description and the implementation of the existing pipeline graphs. It looks like the implementation of these new graphs will mirror the existing graphs very closely, which removes a lot of the uncertainty/decision making 👍
The bulk of the work will be implementing the backend endpoint (exposed through GraphQL). My plan is to give this a shot, but I may need some backup from a backend engineer if there's a lot of database querying/optimizing. Does that plan sound okay to you @nicolewilliams?
Not started: A frontend MR to begin consuming the new GraphQL endpoint.
Since my last update, I remembered that the backend is already mostly done (in !48265 (merged)). All that's left to do is expose the same information through GraphQL, which is pretty routine.
After attempting to expose the data originally developed in !48265 (merged) through GraphQL, I've decided it will be faster to just use the REST API as-is and accept the slight inconsistency/complexity on the frontend side (half the page will be rendered with data from the GraphQL API, and half from the REST API).
Exposing the data through the GraphQL endpoint would be the cleaner long-term solution, but I don't think this would be possible to complete before the end of the milestone.
cc @nicolewilliams - I think this has the best chance of getting through in %13.8 out of all the issues currently assigned to me, so I'm going to primarily focus on this over the next few days.
@ogolowinski It would be ideal, since the rest of the page is using GraphQL, but it's ultimately not too bad in its current state.
Do we plan on using this deployment frequency information elsewhere in the app in the near future? If not, I would probably recommend leaving it as is.
But if we think we might use this information elsewhere, then it's probably worth investing the time to make this consistent. For example, I've seen this UI in some of the mockups:
This would be pretty easy to build if all the data was available in GraphQL. It would be trickier if we had to consult two different APIs for this information.
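For example, if the deployment frequency data were available in GraphQL, one query could fetch it alongside the rest of the page's data. The field and argument names below are purely illustrative; no such field existed at the time of this discussion:

```graphql
query {
  project(fullPath: "group/project") {
    # Hypothetical field, sketched to match the REST parameters above
    deploymentFrequency(environment: "production",
                        from: "2021-01-01T00:00:00Z",
                        to: "2021-04-01T00:00:00Z",
                        interval: DAILY) {
      date
      count
    }
  }
}
```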
@ogolowinski Thanks for this context! In that case, I think it's probably worth adding the deployment frequency data to the GraphQL endpoint. I think it will make our lives easier down the road. And it shouldn't be too much effort.
!50885 (merged): In development. This is the main chunk of work for this feature. Just a few more tests to write; I plan on submitting this for review by the end of the day tomorrow.
!51137 (merged) (non-blocking): In initial review. This is a small update to the API endpoint that provides the graph's data. This MR doesn't need to be merged in order to deliver this feature.
!51126 (merged) (non-blocking): In initial review. This MR adds a frontend fixture for the API call to make the frontend tests more robust. This MR doesn't need to be merged in order to deliver this feature.
!51137 (merged) (non-blocking): In review. This MR doesn't need to be merged in order to deliver this feature.
!50885 (merged): Backend approved. With frontend maintainer for final review.
!51338 (merged): Ready to be merged once ☝ is merged. (This MR just enables the feature flag by default).
This feature is really close. However, as part of the review process, I added a disabled-by-default feature flag, which has the potential to slow down the rollout a bit; that's why my confidence value dropped slightly since my last update.