
Add Jannik Lehmann as frontend maintainer of GitLab

Jannik Lehmann requested to merge jnnkl-maintainer-frontend into master

So you wanna be a maintainer?

Welcome to Jannik Lehmann's maintainer issue 👋
Software engineering is a team sport to me; I love collaborating and am a strong believer in sharing knowledge. Since everything starts with an MR around here, I care about how we do reviews and how we interact at this crucial part of our day-to-day collaboration. I actually care so much about it that I teamed up with @oregand to give a talk at Contribute 21 on this. I joined the team in October 2020 and have been a reviewer for more than a year. Since then I have always tried to level up my reviewer game, and now I feel ready to unlock the Merge Button. WDYT?

My review philosophy

With more than 260 authored MRs merged and more than 240 MRs approved, here are the rules I try to live by when reviewing a merge request at GitLab.

  • Test, test, and test. I always try to test things out manually and see if I can reproduce the described behaviour. If I can't reproduce something, I will state this very clearly, leaving no questions open (spoiler: I learned this lesson the hard way).
  • Ask questions. When I do a review, I remind myself that the author knows at least as much as I do about the field they're currently working in, if not significantly more. It is not my style to push my opinion on authors; I'd rather ask how things work and listen attentively. Answers to questions like these (1, 2, 3, 4, 5) will either come with a good explanation I'm going to learn something from, or uncover an Issue we can then fix. This is my secret super-power as a reviewer.
  • A mixture of "don't make assumptions", "assume positive intent", and "radical transparency". Communicating is hard; I try my best to be very clear and am quick to provide a demo video or similar, ensuring nothing gets lost between the lines. (1, 2)
  • Leaving praise doesn't hurt and should be done more often. (1, 2, 3)

Links to Non-Trivial MRs I've Reviewed

| MR | Changes | Weight (x/10) | Notes |
| --- | --- | --- | --- |
| Source Editor: The toolbar integration | +675 / -293 | 10 | Large, complex review. Tested thoroughly, provided help to increase maintainability, identified multiple UX issues, asked questions about extensions and local state management. This MR was heavy on the GraphQL side. Got a shoutout from the maintainer. |
| Use a dynamic loader for window.gapi | +133 / -54 | 9 | Not so large, but a complex review. Asked a question that uncovered an Issue regarding loading dependencies; made a suggestion that was non-obvious but a nice improvement. |
| Implement corpus upload - Commit Corpus | +142 / -44 | 7 | Asked questions about documentation, some UX behaviour, and the missing error handling. |
| Implement corpus delete logic | +83 / -30 | 6 | Identified a potential test gap and supported coming up with a test for it. |
| Use gl-drawer for pipeline editor help drawer | +290 / -342 | 6 | Identified a UX issue and provided a patch to fix it, identified a newly introduced but unused prop, found a test gap. |
| Use GraphQL API to fetch security training providers | +179 / -121 | 4 | Asked a question about error handling, helped out with steps to reproduce the behaviour. No maintainer additions. |
| Subscription Activation: Fix Subscription Form UX | +127 / -61 | 4 | Very solid question work happening in here; it shows my approach to reviews. There was nothing to be uncovered this time. Kudos to the author! |

The thing about the number of changed lines: I'm convinced there are "large" MRs that are quite easy to review, small MRs that are hard, and vice versa. I still decided to put this indicator on this Issue, next to a weight between 1 and 10: a subjective value representing how much effort it took me to get the MR into a state where I was able to approve it with confidence.

I've tried my best to make this information as digestible as possible; if you're in for the unfiltered deal, please browse through my maintainer Issue: #12938 (closed)

🍕 🍕 🍕 Hungry for more? 🍕 🍕 🍕 I've also reviewed these!

| MR | Changes | Note |
| --- | --- | --- |
| Invite members for task experiment | +1607 / -29 | Massive MR, heavy on BE and database. I had some questions regarding the styling, mentioned test-selector hygiene. |
| V1 License Compliance MR Widget Extension behind feature flag | +891 / -3 | Asked to reduce the scope of the MR, suggested a logical refactoring. |
| Add non-blocking MR widget for external status checks | +526 / -0 | Suggested multiple spec refactorings. |
| VSA allow users to toggle aggregated data backend | +309 / -2 | Found a UX issue, asked about a page-reload quirk going on. |
| Add new GraphQL query for release editing | +193 / -16 | Left a suggestion about a potential test enhancement. |
| Header Search Refactor - Handle Errors in the component | +64 / -26 | Noticed a suboptimal CSS selector and helped come up with a better solution. No maintainer additions. |
| Make the shared/groups/_dropdown Pajamas compliant | +48 / -34 | Caught a potential user-facing issue. |
| Global Search - Fix reset filters button | +16 / -6 | Spotted a test gap and provided a suggestion to close it. No maintainer additions. |
| Migrate merge train when docs path | +11 / -41 | Helped enhance the MR description by providing screenshots and a patch to reproduce things, suggested a test enhancement, and found a UI bug. |

🤔 But do you have the perseverance a maintainer needs? Glad you're asking! These MRs were a little tougher to get over the line:

| MR | Changes | Note |
| --- | --- | --- |
| PipelineUsageApp: Add namespace usage overview | +487 / -20 | Larger review overall, with multiple issues, questions, and suggestions. |
| Refactor-v-html-markdown.vue | +7 / -2 | Teamed up with @djadmin to help a community contributor navigate our pipeline; minor changes, but still a couple of review rounds. |
| Corpus table pagination | +322 / -71 | Multiple nitpicks, refactorings, questions, and issues. I’m happy with the outcome of this MR. |

Smaller Reviews with No Maintainer Additions

| MR | Changes |
| --- | --- |
| Implement loading icon JS helper | +100 / -7 |
| Refactor DAST Site Profile form | +25 / -14 |
| Stop showing Jira-issue button when setting is disabled | +6 / -18 |
| Delete DOMContentLoaded handlers | +45 / -99 |
| Fix for approval check popover bug | +13 / -8 |
| Add IaC scanning to Security Configuration page | +130 / -8 |
| Integrations override table: Don't show empty state when error occurs | +30 / -18 |
| Display when environments will auto-stop | +95 / -14 |
| Set danger variant to remove user/project | +2 / -2 |
| Add alert and disable when cannot push code in IDE | +185 / -48 |

Things I worked on and take pride in

Code Quality MR Widget Extension

This MR is my latest "bigger feature" as a part of the Merge Request Widget Working Group:

  • I had touchpoints with the rewrite of several merge request widgets to use the new Merge Request Widget Extension (License Compliance & Security widgets), but I owned the Code Quality MR Widget Extension.

  • The reviewer also liked it:

really fantastic work here! I think this MR is a very good isolated MR to use as a point of reference to add a new MR widget extension.

Refactor confirm modal to work without querySelector

  • I can’t take all the credit for the work in this MR myself, but since I’m a strong believer in collaboration I wanted to have it in here. This is an Issue we worked on together during our weekly frontend pairing sessions; the whole development process is documented in the videos linked in the MR.

  • It was a non-trivial refactoring of one of the reused parts of our codebase, to ensure better stability and maintainability overall (see the sketch below).
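
For flavor, here's a minimal sketch of the querySelector-free approach (component and event names are hypothetical, not the actual GitLab implementation): instead of locating a pre-rendered modal in the DOM, a throwaway Vue instance is mounted on demand and resolves a promise with the user's choice.

```javascript
import Vue from 'vue';
// Hypothetical modal component emitting `confirmed` / `canceled` events.
import ConfirmModal from './confirm_modal.vue';

// Mount a fresh modal per call instead of grabbing a pre-rendered one
// out of the DOM with document.querySelector.
export function confirmAction(message) {
  return new Promise((resolve) => {
    let vm;
    const close = (result) => {
      vm.$destroy();
      vm.$el.remove();
      resolve(result);
    };
    vm = new Vue({
      render(h) {
        return h(ConfirmModal, {
          props: { message },
          on: {
            confirmed: () => close(true),
            canceled: () => close(false),
          },
        });
      },
    }).$mount();
    document.body.appendChild(vm.$el);
  });
}
```

Callers can then simply `await confirmAction('Really delete?')` instead of wiring up DOM lookups themselves.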

Bring redesigned security-configuration page to EE

  • This MR is one example out of a bunch of MRs, but the screenshots in it are the most visual. The initial task was rather straightforward: there was a security configuration page already in place, new designs were being proposed, and our task was to implement those new designs and make sure we weren’t losing any features.

  • The actual challenge of this work: in the initial implementation, the Security Configuration Page was EE-only. The redesigned version should be available in all tiers.

  • Usually our pages are available in CE, and some features might be EE-only, adding on in the EE version of the application. In this case we had to migrate an EE-only page to be rendered in both CE and EE.

  • The first thing we came up with was to give our core users a static but working experience. From there, bit by bit, we reworked the dynamic functionality into the static page until we were feature-complete with the EE version. We're now not only feature-complete with the original page; we have a data-driven application that has only a CE version and adapts according to its API inputs (see the sketch after this list).

  • Our initial implementation plan can be found here: gitlab-org&4787. Shoutout to @markrian, whom I teamed up with on this project; it was a success and shipped without any major issues.
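
As an illustration of that "data-driven, single CE version" shape, here's a sketch with made-up names (not the real security configuration code): the CE component renders whatever the API reports, so EE-only scanners appear automatically when the backend includes them in the payload.

```javascript
// Sketch: one CE component, no EE fork. The backend decides which
// security features exist; the page just renders that data.
export default {
  name: 'SecurityConfigurationApp',
  props: {
    // e.g. [{ name: 'SAST', available: true, configured: false }, ...]
    // EE-only scanners are simply extra entries in this array.
    features: { type: Array, required: true },
  },
  computed: {
    availableFeatures() {
      return this.features.filter((feature) => feature.available);
    },
  },
  render(h) {
    return h(
      'ul',
      this.availableFeatures.map((feature) =>
        h('li', `${feature.name}: ${feature.configured ? 'configured' : 'not configured'}`)
      )
    );
  },
};
```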

Add Sbom Survey Banner

  • Introduced a new shared survey banner to increase the maintainability of our codebase.
  • Refactored an existing banner to be the first to use it.

refactor(GLTable): Added console warning when GlTableLite could be used instead of GlTable

  • Part of an epic I created to reduce the size of our page bundle.
  • Contributors now get a console warning when using a GlTable component where a GlTableLite component would have been sufficient (a rough sketch of the idea follows below).
  • The migration work for already existing GlTable components is still ongoing.
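
A rough sketch of how such a dev-only warning can work (the prop list and message are illustrative, not the exact gitlab-ui code): if none of the props that distinguish the full-featured table were passed, the lighter component would have rendered the same thing for less JavaScript.

```javascript
import { BTable } from 'bootstrap-vue';

// Illustrative list: props that only the full-featured table supports.
const FULL_TABLE_FEATURES = ['busy', 'selectable', 'sortBy', 'filter', 'perPage'];

export default {
  name: 'GlTable',
  extends: BTable,
  created() {
    if (process.env.NODE_ENV === 'production') return;

    // Warn in development when no heavyweight prop is in use.
    const usesFullFeatures = FULL_TABLE_FEATURES.some(
      (prop) => this.$options.propsData?.[prop] !== undefined
    );
    if (!usesFullFeatures) {
      // eslint-disable-next-line no-console
      console.warn(
        'GlTable: none of the full-table features are in use; consider GlTableLite to keep the page bundle small.'
      );
    }
  },
};
```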

My most humbling situation at GitLab

TL;DR:
I was working on a major epic, picked up speed, and created untested MRs that touched up to 6 pages at once. One of those pages had a big, unidentified test gap. The MR was merged, causing a priority 1 / severity 2 Issue. My takeaway from breaking this: while working on a codebase as big as ours, it is an unreachable goal to reproduce and locally test everything that goes through our hands. But I’ve gotten a better feeling for what kind of changes I can approve from only reading through them. I have lost the fear of saying "I can’t reproduce this, can we get a field expert on this?". And most importantly: when I approve something, I will make this very clear. I might not always say that I was able to reproduce things (this is the default), but I will state it clearly if I approve without being able to reproduce. (1, 2, 3)

Luckily there haven’t been too many major oopsies in my time at GitLab. I think this mainly has two reasons:

  1. I’ve broken enough things in my career to go slowly and not bite off more than I can chew.
  2. The system we have in place (review + maintainer review + small iterations + fast rollback in case of oopsies) works pretty well.

One thing I remember going wrong:

So I’d been working on this epic a lot, and I challenged myself a little on how many of these Issues I could close out. It was a company-wide effort with ~300 Issues. Working on it, I figured out that the required changes per file were very small.

My first contributions to the epic looked like this: 1 MR to solve 1 Issue, reproduced and tested locally, all good.

It didn’t take me too long to realize that my local tests "almost never" failed, and that it would take a lot of time to get my GDK into the state it had to be in to reproduce the specific behaviour I was testing for.

So I started to “optimize the process of tackling these Issues”:

  • included multiple Issues (up to 6) in one MR
  • only partially tested my changes since they were so “minor”

This worked out great, until it didn’t.

And this comment came in:

It seems like this MR may have broke GKE cluster creation #284519 (closed)

My changes caused a priority 1 / severity 2 Issue.

To be fair, this should have been caught by tests. But it wasn't. The damage was done. The MR was reverted quickly, which fixed the bug in a very timely manner.

In the aftermath, I did my very best to help come up with a fix for this:

There was an underlying Issue, including a test gap, that my changes uncovered. So in the end it was a good thing that led to higher code quality in our product, but reproducing before pushing would have caught that Issue before it hit production.

I left some kind of a post mortem on the MR.

There were also discussions, while working on it, about to what extent these changes should be manually tested.

Here’s my takeaway from breaking this: while working on a codebase as big as ours, it is an unreachable goal to reproduce and locally test everything that goes through our hands.

But I’ve gotten a better feeling for what kind of changes I can approve from only reading through them. I have lost the fear of saying “I can’t reproduce this, and it’s a lot of changes; can we get a field expert on this?”.

And most importantly: when I approve something, I will make this very clear. I might not always say that I was able to reproduce things (this is the default), but I will state it clearly if I approve without being able to reproduce. (1, 2, 3)

Additional contributions

  • Frontend On-boarding Buddy: 1, 2
  • Merge Request Coach: 1

🎉 You've made it to the end 🎉 Thanks for stopping by! What are you waiting for now? Hit that: 👍 and /approve! Click this heading after your approval to see my reaction! 👍 🍕 🎉

