The action here is to identify and prioritize at least 3 issues related to this theme to work on during Q1 FY22.
I searched the issue tracker and collected the following issues. Some of them are about adding visibility and others about improving the existing visibility.
Thanks for including Use repositories instead of workspace folders. Since I created the write-up, I can't unsee how the "workspace-centric" approach makes code more complex. But I totally get that it's going to be a lower priority than pushing the MR diff comments over the line.
Use repositories instead of workspace folders - I really, really want to tackle this since it has the potential to pay off the effort in as little as two or three milestones (my personal estimate).
@m_gill remember the issue I brought to the Mural exercise? Please read @viktomas's note about it being a blocker. I'm unsure of the weight on it, but it might be worth assessing the effort.
@viktomas Given that we need to see the GraphQL bug resolved, I think we'll have more time for this (I've put it in %13.12). You're right that I'm really trying to push on getting the MR Reviews work across the line, so we need to push there when we can.
Do you think it'd be worth implementing the global MR cache issue during %13.12 and saving "create new comments" for %14.0? That would also allow us to make a bigger release-post splash in %14.0 with a Complete MR Review in VS Code.
@phikai I think that's a good approach. Implementing the MR cache will make it easier to add comments in %14.0.
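To illustrate the kind of structure the MR cache discussion points at, here is a minimal sketch of a per-merge-request cache keyed by project and MR IID. The names (`MrCache`, `fetchMrData`, `MrCacheEntry`) are hypothetical and this is not the extension's actual implementation, just one way the cached data could be reused later for creating comments.

```typescript
// Minimal sketch of a global MR cache (hypothetical names, not the extension's real API).
// The idea: fetch MR data (versions, diffs, discussions) once, then reuse it both for
// rendering the review and, later, for creating new comments against the right positions.

interface MrCacheEntry {
  fetchedAt: number;
  data: unknown; // e.g. MR versions, diff data, and discussions
}

class MrCache {
  private entries = new Map<string, MrCacheEntry>();

  constructor(private maxAgeMs = 60_000) {}

  private key(projectId: number, mrIid: number): string {
    return `${projectId}!${mrIid}`;
  }

  async get(
    projectId: number,
    mrIid: number,
    fetchMrData: () => Promise<unknown>, // injected fetcher, e.g. a GraphQL call
  ): Promise<unknown> {
    const key = this.key(projectId, mrIid);
    const cached = this.entries.get(key);
    if (cached && Date.now() - cached.fetchedAt < this.maxAgeMs) {
      return cached.data;
    }
    const data = await fetchMrData();
    this.entries.set(key, { fetchedAt: Date.now(), data });
    return data;
  }

  invalidate(projectId: number, mrIid: number): void {
    this.entries.delete(this.key(projectId, mrIid));
  }
}
```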
So the final list of issues I see added to %13.12 (sorted by priority) is:
@andr3 @m_gill @phikai as usual, here's the Mural for us to add the issues we want to see in %13.12. I scheduled our 50-minute session for Thursday, April 8.
Add your issues to the Mural before the call. Let's try to limit to 5 issues per person, so it's easier to vote on them and keep things focused. You can find instructions on how to add them in the “Outline” panel on the right side of the Mural UI.
Try not to add Security or Availability issues. This is also noted on the product processes page, as those issues have forced prioritization with SLAs/SLOs.
If you can, mark issues that appeared in previous sessions by changing their sticky color to orange.
@pedroms yes, it targets our conversation more. I feel like we already tried to convey those things, but maybe sometimes got "lost in the weeds" talking about the issue itself rather than the RICE of the issue.
@m_gill and @phikai - we should chat about what, if anything, needs to be pulled from my backlog into %13.12. I lose five business days of availability (May 10-14) to vacation, so some capacity will shift to my backup (@sselhorn, based in Seattle).
Seven items currently have the Technical Writing and group::code review labels and the %13.12 milestone.
Additional candidate issues I'm currently aware of:
Sub-20 issues labeled Technical Writing for %13.11 that haven't landed yet. I won't know how many of those will actually close on time until the end of the milestone. Best not to assume 100%.
When your list is ready, I'll need to read through those issues and make sure everything that potentially needs my eyeballs has the Technical Writing label. Review now == fewer surprises later.
w3 https://gitlab.com/gitlab-org/gitlab/-/issues/31065 (very curious whether this would help towards the OKR. Highlighting can create major performance problems. I wonder what this is like for large MRs... another reason I'm eager to see a local "large MR" test)
w6 gitlab#324381 (closed) - this needed to be a spike. What is the priority of this now? Can we get performance improvements out of it? I hate to do this, but I am leaning towards deprioritizing it further. Do we accept getting more severity::2 issues and longer development times in order to get closer to better performance on large MRs? @andr3 what is your take?
w1 gitlab#326178 (closed) - easy win and a frustrating experience, but it has honestly probably been broken for quite some time, so I'm not sure what the usage data is like here
w? gitlab#326887 (closed) - if the solution here is to default back to base (again) then I think we should do it soon
I'd be OK with putting in a small spike to get our bearings around the problem, but anything more than w2/w3 I'd postpone. My thinking is: even though we won't be able to sit down and implement deep changes in this upcoming Q2, we need to start grasping the problem ASAP so we can deal with any S1s that might come up and also start providing guidance to others. So I see some value in spending time on a spike even if we only act upon it months later.
From this first spike, I expect other, more specific spikes might arise for later.
w? gitlab#229164 (helps towards OKR, and also virtual scrolling I would imagine)
Yep, this would help somewhat. Not directly with virtual scrolling, but yes for the overall time to first full load.
Question: do we have anything already for enabling batched diffs when "Show whitespace changes" is disabled?
After meeting with Kerri & Phil we have once again decided this needs a backend solution. We have articulated what we think is needed. Kerri is reaching out to Matt for scheduling.
I have this one as a higher priority than the confidential issue below because I think this has the potential to be a 1-2 hour fix, while the confidential issue has the potential to be a much larger change, which could cause this one to slip. Open to corrections here, but I'm attempting to optimize for the most things shipped, with the understanding that the confidential issue really needs to happen this milestone.
The spike for caching MR data and - hopefully - reducing initial load/render time is "done". The caveat here is that we need to do some performance & memory testing to determine exactly what (if any) gains we get.
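As a rough illustration of the kind of performance and memory check that spike follow-up could use (not the actual test plan), here is a minimal Node/TypeScript sketch that times a load path and samples heap usage before and after. `loadMergeRequestDiff` is a hypothetical stand-in for whatever actually fetches and renders the diff data.

```typescript
// Rough benchmark sketch (hypothetical loadMergeRequestDiff; not the real test harness).
// It compares wall-clock time and heap growth for a single load, with and without a warm cache.

import { performance } from 'node:perf_hooks';

async function loadMergeRequestDiff(useCache: boolean): Promise<void> {
  // Placeholder for the real load path (fetch MR data, build the diff model, render).
  await new Promise((resolve) => setTimeout(resolve, useCache ? 50 : 250));
}

async function measure(label: string, useCache: boolean): Promise<void> {
  const heapBefore = process.memoryUsage().heapUsed;
  const start = performance.now();

  await loadMergeRequestDiff(useCache);

  const elapsedMs = performance.now() - start;
  const heapDeltaMb = (process.memoryUsage().heapUsed - heapBefore) / 1024 / 1024;
  console.log(`${label}: ${elapsedMs.toFixed(1)} ms, heap delta ${heapDeltaMb.toFixed(2)} MB`);
}

(async () => {
  await measure('cold load (no cache)', false);
  await measure('warm load (cached)', true);
})();
```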
@thomasrandolph I agree on prioritizing the scroll-to-file-top-on-collapse issue.
Technically, this is still in dev, but I'm not working on it any more. The code is done, but I'm not able to replicate the actual defect, which means I can't get screenshots for UX, and I can't be sure the fix is actually fixing anything.
Obviously this one is in trouble because the milestone ended yesterday and it isn't shipped. That said, we had a very long communication cycle, so we weren't even really 100% sure this didn't require a backend solution until May 12.
gitlab-com/www-gitlab-com#10672 (closed) - the VS Code-related OKR is still at 50%. I'll spend some time tomorrow investigating the logs, but there is a real chance of the OKR staying at 50%.
The FF is on for gitlab, and I was planning to enable it by default today, but I couldn't because of the ongoing incident. I'll enable it by default on production next Monday if everything goes well.
I can see that we can detect the encoding change and at least display a helpful message. It's a short-term solution, but we at least need to make the situation clear to our users until we figure out a way to diff these files properly.
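To make the "detect and warn" idea concrete, here is a minimal sketch (assumed names, not the actual fix) that checks whether both sides of a file decode as UTF-8 and returns a user-facing notice instead of attempting a diff when they don't.

```typescript
// Sketch only: detect a probable encoding change between the two sides of a diff
// and surface a message instead of a misleading diff. Names are hypothetical.

function decodesAsUtf8(content: Uint8Array): boolean {
  try {
    // fatal: true makes TextDecoder throw on byte sequences that aren't valid UTF-8.
    new TextDecoder('utf-8', { fatal: true }).decode(content);
    return true;
  } catch {
    return false;
  }
}

function encodingNotice(oldContent: Uint8Array, newContent: Uint8Array): string | null {
  const oldIsUtf8 = decodesAsUtf8(oldContent);
  const newIsUtf8 = decodesAsUtf8(newContent);

  if (oldIsUtf8 !== newIsUtf8) {
    return 'The file encoding appears to have changed between versions, so the diff may be unreliable.';
  }
  if (!oldIsUtf8 && !newIsUtf8) {
    return 'This file does not appear to be UTF-8 encoded; the diff cannot be displayed accurately.';
  }
  return null; // Safe to diff normally.
}
```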
I have a WIP MR up that needs to be polished, but I was interrupted to look at a performance issue. It's also a security issue, so I feel it's quite likely to slip.
I started investigating this issue today. I've found that some minor improvements can be made, but it's still unclear why it got slower. I'll continue investigating next week. It's unlikely that it will be done in %13.12.