Example from %6.0.0-rc1 in kicad/code/kicad. The chart shows 390 issues open, but in fact there are 125 + 116 = 241 issues open:
Output of checks
This bug happens on GitLab.com
Possible fixes
Missing system notes
Burnup data is generated from system notes. Currently:
Assignee system note is missing when creating an issue
Created-by system note is missing when creating an issue
When creating an issue and setting both milestone and epic, the milestone system note never shows up.
When creating an issue and setting the milestone along with other metadata except for an epic, the milestone system note shows up.
So for every issue that is created with both a milestone and an epic set, the milestone system note is missing and thus the burndown chart will not accurately reflect what is set in the milestone.
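To illustrate why a missing system note matters, here is a minimal sketch (assumed event shapes and field names, not GitLab's actual implementation) of a burnup count built from milestone "add" events; any issue whose add event was never recorded simply never shows up:

```python
from datetime import date

# Hypothetical milestone events; issue 3 was created with both a milestone and
# an epic, so its "add" event was never written.
events = [
    {"issue": 1, "action": "add", "date": date(2021, 2, 1)},
    {"issue": 2, "action": "add", "date": date(2021, 2, 2)},
]

def scoped_count_on(day, events):
    """Issues added to the milestone on or before `day` (removals/closes ignored)."""
    return len({e["issue"] for e in events if e["action"] == "add" and e["date"] <= day})

# The chart only ever sees 2 issues, even though 3 are actually in the milestone.
print(scoped_count_on(date(2021, 2, 3), events))  # -> 2
```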
Moved issues are not tracked as closed.
Another scenario where a closed issue event might be missing:
When an issue is moved to another project, I think we are not creating a "closed" issue event, which might be the cause for charts diverging from actual data.
We think the problems come from a mixture of adding/removing issues from the milestone, and perhaps the time of day, on either the milestone start or end date, at which issues are completed or moved.
We would appreciate learning what the cause is, even if a fix isn't forthcoming.
We'd love to be able to rely on these charts for sprint info radiation but struggle to in their current state.
More than issue counts though, the issue weights, which we use as story point estimates per job, also get messed up quite often.
Here are the charts again; at the bottom are the lists from the board view that constitute 'open' at 32 weight and 'closed' at 93 weight.
Not certain what could cause the -1 for weight; possibly a BE calculation issue. Otherwise, there are a couple of places where we pad empty values in the FE that could have an off-by-one error.
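Purely as an illustration of the FE padding suspicion (the real frontend is JavaScript and these helpers are made up), this is the kind of date-padding code where an off-by-one easily creeps in:

```python
from datetime import date, timedelta

def pad_days_exclusive(start, end):
    # Bug: range() excludes the end, so the last milestone day gets no padded value.
    return [start + timedelta(days=i) for i in range((end - start).days)]

def pad_days_inclusive(start, end):
    # Correct: include the end date itself.
    return [start + timedelta(days=i) for i in range((end - start).days + 1)]

start, end = date(2021, 2, 1), date(2021, 2, 5)
print(len(pad_days_exclusive(start, end)))  # 4 -> one value short
print(len(pad_days_inclusive(start, end)))  # 5
```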
I'm also experiencing this on GitLab 13.8.0 with a Premium license. Here are three examples:
35 open issues reported as 20 in burndown and 21 in burnup, neither of which is right:
22 open issues reported as 15 and 17:
And in a fresh milestone, 12 issues reported as 10 and 11:
It's detrimental enough that it's hard for me to use the charts to gauge velocity/progress, as they misrepresent what's going on and miss the presence (and therefore closure) of certain issues, as my teams rightly pointed out when I asked where the progress was. 😕
@sbouly I don't think that is relevant anymore given we changed how data on milestone and iteration charts is computed -- now based on actual events rather than just referenced from the issue object.
@gweaver would you mind reviewing this as a continuation of the issue brought up directly? It does not look like it has been assigned, and it recently prevented an organization from continuing an evaluation to become a subscriber.
If it is related to just Milestones, we have a few options:
Fix the sidebar data to be historically accurate
Prioritize reaching relative parity with the iteration report view and then porting it to replace the Milestone view. I believe the largest blockers here would be integrating time tracking and merge requests into the new report view.
Data in the sidebar of milestones is not historically accurate.
Yes, the sidebar numbers and the issue list below the graph are not historically accurate. So these numbers change as issues are moved to the next milestone.
The only thing we made "fixed" was the graph.
Even the iteration report issue list isn't historically accurate yet and would have the same problem.
@engwan gotcha. Now that we have resource events, we could theoretically at least make the counts accurate even if the list itself is not, correct?
we could theoretically at least make the counts accurate even if the list itself is not, correct?
Yes, this will be similar to the iteration report metrics / numbers on top. Those are based on the numbers on the last day of the burnup chart and can be different from the numbers on the iteration issue list.
@engwan I also took a minute to check out the public milestone referenced in the description. The burndown chart does seem to be off -- %6.0.0-rc1 in kicad/code/kicad. If the Milestone is still active, why would the number of open issues be different between legacy burndown and fixed burndown?
@gweaver This is because that milestone is considered a "legacy milestone". It was created before we started tracking events.
This confused me for a bit because I thought it was a fairly new milestone because its start date is February. But that's actually February last year.
You can see that in the new burndown chart, it shows 0 issues until April 13, 2020. I reckon that's when we started tracking milestone events. These milestone events are what would cause the open issue count to increase.
Now state events were tracked much later. I found #229147 (comment 380551050) which notes that we enabled this on July 16, 2020. These state events (close type) are what we use to add to the completed count and decrement the burndown.
So these issues closed between April and July are not tracked as closed, which causes the extra open issues that we see in this particular milestone.
Part of the problem is that we didn't start tracking the different events at the same time. State tracking was added last and this is what we use to determine if a milestone is a legacy milestone.
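To make that timeline concrete, here is a rough sketch (assumed data, not GitLab's schema) of why an issue closed between the start of milestone-event tracking (April 13, 2020) and the start of state-event tracking (July 16, 2020) keeps inflating the open count:

```python
from datetime import date

MILESTONE_EVENTS_START = date(2020, 4, 13)  # milestone events recorded from here on
STATE_EVENTS_START = date(2020, 7, 16)      # close/reopen state events recorded from here on

# An issue added to the milestone in May 2020 and closed in June 2020:
added_on, closed_on = date(2020, 5, 1), date(2020, 6, 1)

add_event_recorded = added_on >= MILESTONE_EVENTS_START    # True -> increments the open count
close_event_recorded = closed_on >= STATE_EVENTS_START     # False -> nothing ever decrements it

print(add_event_recorded, close_event_recorded)  # True False: the issue stays "open" on the chart
```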
So perhaps we could consider defaulting to the "Legacy burndown chart" tab for these? It's not going to be a regression for the customer because it's the exact same thing as before we introduced the new fixed graphs. They would be able to take advantage of the new fixed graphs on the next milestone.
@engwan that all makes sense. Looking at more recent examples though -- #271625 (comment 496932153) -- where the milestone is clearly much later than when milestone events and state events were added, I can't reconcile the discrepancies.
Same for #271625 (comment 508846824) where the count in the sidebar should theoretically reflect the burndown chart since the milestone is still active.
The only thing that I did notice is that the burndown/burnup starting point moves on the same day the milestone starts if I add/close issues. Should we lock the initial plot point at the start of the day (UTC) so there is a clear baseline on day 1?
The only thing that I did notice is that the burndown/burnup starting point moves on the same day the milestone starts if I add/close issues. Should we lock the initial plot point at the start of the day (UTC) so there is a clear baseline on day 1?
Right now, the data points on the graph are end-of-day numbers.
We could apply this special rule for the first day only. Or maybe we could just change the graphs to show start-of-day numbers? We'd probably want to add the day after the milestone/iteration ends, though, so that we capture the events on that last day too.
Or we could go the other way and retain end-of-day numbers but start with the day before the start date, so that we have a data point at the start of the milestone/iteration.
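A small sketch of the three options under discussion, using made-up close counts; the only point is where the baseline data point lands relative to the milestone start:

```python
from datetime import date, timedelta

start, days = date(2021, 3, 1), 5
closes = {date(2021, 3, 1): 2, date(2021, 3, 3): 1}  # hypothetical closes per day

def completed_by_end_of(d):
    return sum(n for day, n in closes.items() if day <= d)

def completed_by_start_of(d):
    return sum(n for day, n in closes.items() if day < d)

# Option A: end-of-day numbers (current behaviour) -- day 1 already reflects day-1 activity.
end_of_day = [(start + timedelta(days=i), completed_by_end_of(start + timedelta(days=i)))
              for i in range(days)]

# Option B: start-of-day numbers -- needs one extra point after the end date to capture the last day.
start_of_day = [(start + timedelta(days=i), completed_by_start_of(start + timedelta(days=i)))
                for i in range(days + 1)]

# Option C: keep end-of-day numbers but prepend the day before the start as a clean baseline.
with_baseline = [(start - timedelta(days=1), 0)] + end_of_day

print(end_of_day[0])     # (2021-03-01, 2) -- starting point already moved
print(with_baseline[0])  # (2021-02-28, 0) -- clear zero baseline on day 1
```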
A large Premium customer is encountering this issue on v13.8.1 and v13.9.1, with milestones created before v13.8 but updated afterwards (internal ticket).
I have an Ultimate customer that is running into this issue -- specifically, they see a delay between when they move issues out of a milestone and when that shows up on the burndown chart -- this is particularly a problem at the end of the milestone.
This does not backfill the missing system notes, right? Do you think we should do that?
Right, it does not.
Not sure how easy / hard that would be.
I am not sure it is possible to completely backfill the events. Consider a case where an issue was assigned to a milestone, the event was not created, and then the issue was moved to another milestone. I don't think there is a way to backfill the missing event in this case.
We can probably backfill events for issues that are currently in a given milestone and whose respective events are missing. However, by the time the migration finishes on production, that might not be very useful anymore if the fix is already in place? 🤔
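For what it's worth, here is a rough sketch of what such a backfill could look like (hypothetical data shapes and helper names, not an actual GitLab migration); note it can only cover issues still sitting in a milestone, which is exactly the limitation described above:

```python
def backfill_missing_add_events(issues, existing_add_events):
    """issues: dicts with 'id' and 'milestone_id'.
    existing_add_events: set of (issue_id, milestone_id) pairs that already have an "add" event."""
    created = []
    for issue in issues:
        if issue["milestone_id"] is None:
            continue  # not currently in a milestone -> nothing we can reconstruct
        key = (issue["id"], issue["milestone_id"])
        if key not in existing_add_events:
            created.append({"issue_id": issue["id"],
                            "milestone_id": issue["milestone_id"],
                            "action": "add"})
    return created

issues = [{"id": 1, "milestone_id": 10}, {"id": 2, "milestone_id": 10}, {"id": 3, "milestone_id": None}]
print(backfill_missing_add_events(issues, {(1, 10)}))  # only issue 2 gets a backfilled event
```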
Another scenario where a closed issue event might be missing:
When an issue is moved to another project, I think we are not creating a "closed" issue event, which might be the cause for charts diverging from actual data.
Extra cases for issues not getting a closed event when being moved to another project or being promoted to an epic have been fixed as part of !64197 (merged).
@kevinchasse The latest known edge case was merged and deployed to canary ~20 hours ago as part of !64197 (merged). It will probably be on .com in the next couple of days and will definitely make 14.1.