Hi GitLab team members from all around the world. I wanted to get your feedback on this data, and also suggestions on how we could bring these numbers closer to our Community MR Review time SLO.
I am looking for experiments, actions, and thoughts on this topic so we can take steps that lead us towards achieving the SLO.
It would be great to have your input.
Current assumptions or thoughts I have:
Should we add more ICs in these stages for the specific domains that are struggling to keep up?
Should we re-assess our priorities in those stages that are not meeting the SLO of 7 days and focus less on our own product roadmap & get more focus on community contributions?
I'll fill in this section with a summary based on people's feedback on what we can do to improve this.
Stop
Start
track these community merge request SLOs by project rather than by group or stage
Out of the total number of maintainer reviews requested (e.g. ~4,200 in the past 30 days), what percentage of these are for community contributions?
For each project, what is the average number of reviews (daily or monthly) for each maintainer? What percentage of them are community contributions? High average review count would indicate a capacity problem, but if the engineers have a manageable review count then the problem lies elsewhere (ex: processes)
Of the community contribution MRs which have gone stale, what stage of the review process they are stuck on? (initial review pending, maintainer review pending)
educate more people on the workflow labels and enforce & remind GitLab team members when appropriate
Track the average & median time an MR spends in the review state. If an MR has seen 3 transitions from ~"workflow::In dev" to ~"workflow::ready for review" and back, we should take all the time this MR was in ready for review into account (5 days, 7 days, and 20 days in review gives an average of ~10.7 days, while today we only take the 20 days into account).
Assess whether we should stick to using workflow labels for the review process vs using our own labels so that this workflow can work independently from the product process, which could potentially use workflow labels differently.
Train or make it more obvious to wider community members that they can set all the allowed workflow labels such as ~"workflow::blocked", ~"workflow::In dev" & ~"workflow::In review". Existing command: @gitlab-bot label ~"workflow::in dev"
Consider adding a workflow label : "workflow::reviewed & tested by the community"
Track how many MRs are being reviewed/merged per project / per group so we can set a baseline & relay this information to Mek & Christopher so they can set goals/directions
Try
Temporarily make a push in the groups/stages that don't meet the SLO to reduce the current backlog, after assessing the MRs that we should push on.
TL;DR: I think what we should do first is to push to reduce the backlog to meet the SLO, then take long-term actions to ensure we keep meeting the SLO. In my following comments, I'm mostly addressing the first part.
Should we add more ICs in these stages for the specific domains that are struggling to keep up?
For projects/areas that aren't too specific, yes, definitely (and I believe that's already happening, but not specifically focused on the groups/stages that don't meet the SLO).
I think it's more a matter of asking reviewers/coaches/maintainers to make a push in the affected groups/stages to reduce the backlog.
Should we re-assess our priorities in those stages that are not meeting the SLO of 7 days and focus less on our own product roadmap & get more focus on community contributions?
I think this will be needed given that some groups cannot be expanded due to the limited number of team members with the required expertise (e.g. Runner). That said, to reduce the current backlog, we should prioritize MRs based on a few criteria:
That means MRs that require significant work should be delayed at first (until we merge all the "smaller" MRs), and then reassessed to see if these are MRs that we would like to be merged at some point (if not, we should be proactive and close them with a good explanation).
That also means that stalled MRs (due to unresponsiveness of the author) that we have low interest in merging could also be closed.
So in the end, I do think we should temporarily make a push in the groups/stages that don't meet the SLO to reduce the current backlog, after assessing the MRs that we should push on.
@nick_vh do you have any sense of how accurate that data is? It looks like it’s entirely based on labels changing on the merge request to indicate the state that it’s in. Do all reviewers do that? Is there a bot that does it?
There is probably some margin of error when people don't accurately follow the workflow labels, but that is something the Contributor Success team is planning to educate more people on. We are using the existing workflow labels, so this shouldn't be too surprising.
A bot describes the process for the reviewer, so that it is clear we expect the status to return to ~"workflow::In dev" when the review is done or more work is needed (gitlab-org/gitlab!91608 (comment 1028438794))
We welcome suggestions on how we can increase awareness of this; that's exactly why I am raising this issue for everybody to weigh in on.
This one never got an In Review label, but it has in fact been reviewed and approved and is now in a second round of review. Interestingly, it looks like we go back to the Ready for Review label anytime someone tells the bot it's ready, which makes me think we might be skewing this data even more.
I understand there's a process and it's been communicated, but I wonder if maybe there isn't actually an issue here (save some outliers, long running contributions).
Maybe the easiest change that wouldn't require changing what data is used for calculations would just be to use the Median instead of the average:
Looks like it's still high, so I think that corresponds to the data quality issue we have here, but it's not as high as the average.
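As a quick illustration (with made-up durations, not real data) of why a few long-running MRs drag the average well above the median:

```python
from statistics import mean, median

# Hypothetical review durations in days; the long-running outliers are
# what pull the average far above the typical case.
review_days = [2, 3, 4, 5, 6, 7, 9, 60, 90, 120]

print(f"mean:   {mean(review_days):.1f} days")    # 30.6
print(f"median: {median(review_days):.1f} days")  # 6.5
```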
While I agree with verifying our data, we can't rely on a comment that was given to know whether a review has happened. Similarly, it is also not clear to the author or the group of contributors whether they can proceed and make the suggested changes. I want to move away from assuming that a comment means a review, as this is an assumption that might not be true. Aren't workflow labels used in the same way for GitLab team member, product-led MRs?
Interestingly, it looks like we go back to the Ready for Review label anytime someone tells the bot it's ready, which makes me think we might be skewing this data even more.
Which is normal, a community member is not able to set or adjust labels so they need a different process led by a bot. This is not skewing the data, this is the community member saying the MR is ready for another pass through the review process. Unless I'm missing the point here?
I count ~"workflow::ready for review" & ~"workflow::In review" as the same status in the metrics, so this wouldn't make a difference. Our SLO is to execute a review (give feedback on what still needs to happen) in 7 days.
I understand there's a process and it's been communicated, but I wonder if maybe there isn't actually an issue here (save some outliers, long running contributions).
I did a lot of interviews/coffee chats. There is certainly an imbalance between the number of MRs in certain stages versus others, and in interviews some indicators, such as an insufficient capacity/maintainer ratio, were brought up. See the comment from Brian Williams below.
https://app.periscopedata.com/app/gitlab/729542/Wider-Community-PIs
If there is no deterministic way to understand where community members are being blocked on GitLab team members with a certain accuracy (workflow labels give this accuracy and are flexible) we will not be able to measure and set targets for improvements.
Even if we said that our "average" is 17, it's indeed still far off from what is expected (the SLO), so anything we can come up with to lower this is valuable.
I count ~"workflow::ready for review" & ~"workflow::In review" as the same status in the metrics, so this wouldn't make a difference. Our SLO is to execute a review (give feedback on what still needs to happen) in 7 days.
@nick_vh Can you explain that further? If you count those labels as the same status, then how do we know when a review is given? What's the signal you're looking for to say that the initial SLO has been met? I'm just trying to understand this calculation better.
I read that as 7 days to assign and then 7 days to review - if it's 7 days for both those activities to take place, it might be good to further clarify that. I agree it's pretty confusing with that chart and then comparing what complete triage means.
If there is no deterministic way to understand where community members are being blocked on GitLab team members with a certain accuracy (workflow labels give this accuracy and are flexible) we will not be able to measure and set targets for improvements.
That may be the case... which makes it hard to have a conversation about how to improve if we're not clear on what's going on. Have you chatted with @lmai1 about how development measures throughput? I think one of the primary metrics we use is Mean Time to Merge (which is also what Code Review thinks about) - so maybe it makes sense to look at MTTM of community contributions (or Median Time to Merge could give a better real sense) and see if they're taking a long time to get through the whole process.
I think MTTM is at least a dataset that wouldn't be influenced by users forgetting to add a label or do something else, and it might give a higher-confidence picture of what's actually happening here.
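A rough sketch of how that could be pulled from the merge requests list API, assuming we filter on the Community contribution label (the project path, label name, and 30-day window below are placeholders, and only the first page is fetched):

```python
import requests
from datetime import datetime, timedelta, timezone
from statistics import mean, median

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fgitlab"   # URL-encoded project path (placeholder)
since = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()

# First page only; a real report would paginate and authenticate.
mrs = requests.get(
    f"{API}/projects/{PROJECT}/merge_requests",
    params={"state": "merged", "labels": "Community contribution",
            "created_after": since, "per_page": 100},
).json()

def parse(ts):
    # GitLab timestamps look like "2022-08-01T12:34:56.789Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

ttm_days = [
    (parse(mr["merged_at"]) - parse(mr["created_at"])).total_seconds() / 86400
    for mr in mrs if mr.get("merged_at")
]

print(f"mean time to merge:   {mean(ttm_days):.1f} days")
print(f"median time to merge: {median(ttm_days):.1f} days")
```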
I read that as 7 days to assign and then 7 days to review - if it's 7 days for both those activities to take place, it might be good to further clarify that. I agree it's pretty confusing with that chart and then comparing what complete triage means.
Yes, I think that's it:
We allow 7 days for a Community contribution to get a reviewer assigned. In practice, now that contributors are responsible for telling us when their MR is ready (with @gitlab-bot ready), and since we automatically assign a reviewer as soon as the command is sent, I'm confident this SLO is met, and we could actually remove it...
We allow 7 days after a reviewer is assigned to have a review performed (i.e. ~"workflow::in dev" is added back or the MR is merged)
That may be the case... which makes it hard to have a conversation about how to improve if we're not clear on what's going on.
I think using the labels is still a good solution, not perfect, but good. If people forget to set MRs back to workflow::in dev after their review, then the problem is actually easy to solve: just be sure to put the label back after your review. That would make the measurements more accurate and improve the metric against the SLO at the same time. There's definitely more education to do here.
Have you chatted with @lmai1 about how development measures throughput? I think one of the primary metrics we use is Mean Time to Merge (which is also what Code Review thinks about) - so maybe it makes sense to look at MTTM of community contributions (or Median Time to Merge could give a better real sense) and see if they're taking a long time to get through the whole process.
It's an interesting suggestion. One thing we should be careful about, though, is that we don't control when a contributor works on an MR, so the MTTM for Community contribution could be skewed by unresponsive contributors, while GitLab team members could meet the review SLO when they're asked for a review. Let's take an example:
Day 1: Contributor asks for a review
Day 3: The assigned reviewer performs the review, and adds back the workflow::in dev label (time to review would be approximately 2 days here)
Contributor is unresponsive for 20 days
Day 23: The contributor makes a change and asks for a new review
Day 23: The assigned reviewer performs the review, approves and merges (time to review would be less than a day here)
In this example, the Time to Merge would be approximately 23 days, due to the unresponsiveness of the contributor (that we don't control), but the time to review would actually be 1 day.
In this issue, what we're interested in is to improve the "GitLab Inc" part of the Community contribution process, not the entirety of the process (which also includes the contributor themselves).
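To make the distinction concrete, here is the arithmetic from the example above as a tiny sketch (the day numbers are the example's assumptions, not real data):

```python
# (day review requested, day review given) for each round in the example above.
review_rounds = [(1, 3), (23, 23.5)]
opened_day, merged_day = 1, 23.5

time_waiting_on_us = sum(end - start for start, end in review_rounds)  # ~2.5 days
time_to_merge = merged_day - opened_day                                # ~22.5 days
print(time_waiting_on_us, time_to_merge)
```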
so maybe it makes sense to look at MTTM of community contributions (or Median Time to Merge could give a better real sense) and see if they're taking a long time to get through the whole process.
So then how are we measuring time to first review? Is it labels? Which labels have to change?
Yes, we should use labels. We should measure the time between when ~"workflow::ready for review" is added and the time when ~"workflow::In dev" is added back or the MR is merged.
I'm not sure we currently track the first review, but rather the latest review. @nick_vh Would you be able to confirm?
Confirmed
We are not tracking the time to first review, but we could do that with some magical queries. I'm also not sure if tracking that time is very valuable. We want to make sure wider community contributors get timely feedback, so any request for review should be performed in a timely manner. So we could take all the state transitions and measure the average time an MR is in the review state vs the time open MRs currently in review have been in that state. I've added that suggestion to the summary at the top.
Once we have that, we can also work towards providing better guidance so everybody can bring more quality to their contributions and shorten the cycle, e.g. the time to the MR being merged.
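As a sketch of what those queries could look like: using the resource label events API, each review round can be measured from when ~"workflow::ready for review" is added until ~"workflow::in dev" is added back (the project path and MR iid below are placeholders, and a merged MR without a closing label event isn't handled here):

```python
import requests
from datetime import datetime, timezone
from statistics import mean, median

API = "https://gitlab.com/api/v4"
PROJECT = "gitlab-org%2Fgitlab"   # URL-encoded project path (placeholder)
MR_IID = 12345                    # hypothetical MR iid

REVIEW_START = "workflow::ready for review"
REVIEW_END = "workflow::in dev"

def parse(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

events = requests.get(
    f"{API}/projects/{PROJECT}/merge_requests/{MR_IID}/resource_label_events",
    params={"per_page": 100},
).json()

# Each round starts when "ready for review" is added and ends when "in dev"
# is added back; a round still open is counted up to now.
rounds, started = [], None
for ev in sorted(events, key=lambda e: e["created_at"]):
    if ev.get("action") != "add" or not ev.get("label"):
        continue
    name = ev["label"]["name"].lower()
    if name == REVIEW_START and started is None:
        started = parse(ev["created_at"])
    elif name == REVIEW_END and started is not None:
        rounds.append((parse(ev["created_at"]) - started).total_seconds() / 86400)
        started = None
if started is not None:   # MR is currently waiting for a review
    rounds.append((datetime.now(timezone.utc) - started).total_seconds() / 86400)

print(f"per-round days in review: {[round(r, 1) for r in rounds]}")
print(f"mean: {mean(rounds):.1f} days, median: {median(rounds):.1f} days")
```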
I'm not sure we currently track the first review, but rather the latest review. @nick_vh Would you be able to confirm?
@nick_vh @rymai Hmm... that seems problematic, as after the first review there are a whole bunch of factors that could impact availability and make that take longer. But it seems like we do really need to know where the slowdown occurs to make progress.
For example - if the first review is really fast, but then it takes two weeks for the contributor to get back to us and the original reviewer is out of office... we might sit in a weird state for a while.
I wonder if we could parse notes for the first occurrence and see if it's an initial review delay or if there's a lag in the follow up reviews which might need some kind of different process to handle depending on the length of time.
I'd also really love it if this wasn't label-based, or at least not label-based without automation, just because of the human intervention it requires to actually produce the data.
@kerrizor @pslaughter I know you both review a lot of community contributions, so I'd be curious how diligent you both are about setting/resetting the labels to properly reflect the status of the MR (ready for review vs. in dev, etc...).
I'd be curious how diligent you both are about setting/resetting the labels to properly reflect the status of the MR
Not very diligent.
From an organizational process perspective, we've tried to empower the contributor with the ability to flag their MR as "ready for review" by pinging gitlab-bot. We should probably update @gitlab-bot to set this label if it's not already being set
I'd also really love it if this wasn't label-based, or at least not label-based without automation, just because of the human intervention it requires to actually produce the data.
@phikai What we could think about is to set ~"workflow::In dev" on MRs where team members assigned for review unassign themselves. That probably means that they're done with their review (but again, that's just an assumption, and could also just mean that the reviewer is not able to review at all).
I'm sure there are plans to make reviews more structured in GitLab (the product) itself. I think that would help as, right now, labels are kind of working around the lack of a proper Review feature that would allow us to always have a clear state for an MR at any given time.
From an organizational process perspective, we've tried to empower the contributor with the ability to flag their MR as "ready for review" by pinging gitlab-bot.
@pslaughter From what I can see, it seems to work well so far.
We should probably update @gitlab-bot to set this label if it's not already being set
I think what we're discussing here is the other part of the process: reviewers not setting ~"workflow::In dev" back on MRs they finished reviewing. Or maybe I missed something here?
reviewers not setting ~"workflow::In dev" back on MRs they finished reviewing.
Oh interesting. I never interpreted the workflow labels this way. Given that Everything is in draft, I interpreted ~"workflow::In review" to mean that it's still under development, but others are looking at it too.
brainstorm: I can see it being a bit cumbersome moving the workflow status back and forth. Maybe we just want a new label to flag that something has had a review ~has-review
start tracking our time to first review vs the average waiting time given our label workflow.
Currently we are sticking to the workflow labels until we have found a better deterministic and perhaps automated way for a reviewer to hand it back to the community author. We are also tracking average time-to-merge but there is less control available there as we do not control how our wider community authors spend their time. We can only be grateful for their contributions and hope they are as respectful with our time as we are with theirs.
Oh interesting. I never interpreted the workflow labels this way. Given that Everything is in draft, I interpreted ~"workflow::In review" to mean that it's still under development, but others are looking at it too.
Correct, everything is always in draft until it is merged. And with a new MR it all starts again in draft ;-). That does not mean we don't need to do a better job of making clear what we expect from a wider community author to get their MR merged ;-). And given that we like to work with results, it is useful not to work with assumptions about what the next step is, but to make that explicit. I understand this might take some time to get used to, and I also understand this is the first iteration of many towards that goal. As always, we're open to change or new suggestions on how we can get a meaningful window into this black box to understand where optimization might be possible. Our experience and analysis of MRs in the past made it clear we needed to make a change and try. This is what we're doing here.
If it helps: I take my inspiration from the book "High Output Management", and in particular the "breakfast factory" example. It talks about how to optimize a breakfast factory and focus on the limiting steps you can control. Once you fix one limiting step you can move on to the next. With these labels we get a view into the black box of our MR factory, and it helps us identify these limiting factors we can improve, as we do control that side of the equation.
Maybe we just want a new label to flag that something has had a review ~has-review
Can you explain what that would solve or what we can distill from that new label? I might be missing something here?
Maybe we just want a new label to flag that something has had a review ~has-review
Can you explain what that would solve or what we can distill from that new label? I might be missing something here?
@nick_vh I think the suggestion here is to give the community team a new label to measure off of. I think the idea that we change the label from ~"workflow::In review" back to ~"workflow::In dev" and keep doing that may be a different process for community contributions than for our own review process... so having a different label may allow you to track something without trying to shift the process for all of development for all MRs.
Good feedback - I've added it to the list of things to start. Initially we did create new labels but then decided to piggy-back on the existing labels as that would require fewer "special processes" and narrow the gap between community contributions and GitLab team member contributions.
side note: Often MRs have multiple parts reviewed in parallel. This creates a challenge if we prefer updating the workflow label back and forth between ~"workflow::In dev" and ~"workflow::In review". If an MR is undergoing multiple reviews, and one reviewer finishes and updates the MR to ~"workflow::In dev", the label does not accurately represent that the other part is in ~"workflow::In review".
IMHO, it could be simpler to manage and accurately capture the state of an MR if workflow labels didn't have any backtracing.
I think the suggestion here is to give the community team a new label to measure off of. I think the idea that we change the label from ~"workflow::In review" back to ~"workflow::In dev" and keep doing that may be a different process for community contributions than for our own review process... so having a different label may allow you to track something without trying to shift the process for all of development for all MRs.
I don't think we're "shifting" the process for all of development for all MRs by using these labels for community contributions? In fact, we haven't changed the process at all for team-member authored MRs. Our goal is to bring some efficiency to the review process of community contributions.
If an MR is undergoing multiple reviews, and one reviewer finishes and updates the MR to ~"workflow::In dev", the label does not accurately represent that the other part is in ~"workflow::In review".
For community contributions, reviewers sometimes prefer to stay assigned to "track" a community MR and make sure it stays on their radar
That means, we cannot assume that "in dev" community MRs are MRs with no reviewers assigned, hence the need for the ~"workflow::In dev" label to make this state explicit
IMHO, it could be simpler to manage and accurately capture the state of an MR if workflow labels didn't have any backtracing.
@pslaughter What do you mean by "backtracing" here?
I mean that the "workflow" state would be a one-way street. Assigning/unassigning reviewers and updating the MR based on review comments would just be a part of ~"workflow::In review".
brainstorm: If we want to measure the time contributors spend waiting for a response, it might make sense to decouple this from workflow altogether. What if we added a contributor::waiting for response label when they comment @gitlab-bot ready? Any reviewer who passes something back to the contributor could simply remove this label (plus, it's conceivable that a bot could even figure out when the MR has been "responded" to).
It's important to note that sometimes contributors need help prior to any official review.
In the past, we've also wanted to measure how long code review took. Are we still able to do this efficiently if we move back and forth between ~"workflow::In dev" and ~"workflow::In review"?
Decoupling the workflow and waiting for response concepts could ensure we have the most accurate means of measuring these kind of changes.
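As a rough sketch of that brainstorm (the label name and the team-member list are hypothetical, and the real bot is not this snippet), a webhook handler could toggle such a label using the existing add_labels/remove_labels parameters of the MR update API:

```python
import requests

API = "https://gitlab.com/api/v4"
WAITING_LABEL = "contributor::waiting for response"   # hypothetical label
TEAM_MEMBERS = {"alice", "bob"}                        # placeholder usernames

def handle_note_event(payload, token):
    """Toggle the hypothetical waiting-for-response label based on who commented."""
    if payload.get("object_kind") != "note":
        return
    if payload["object_attributes"].get("noteable_type") != "MergeRequest":
        return

    note = payload["object_attributes"]["note"]
    author = payload["user"]["username"]
    project_id = payload["project"]["id"]
    mr_iid = payload["merge_request"]["iid"]

    if "@gitlab-bot ready" in note and author not in TEAM_MEMBERS:
        change = {"add_labels": WAITING_LABEL}      # contributor is now waiting on us
    elif author in TEAM_MEMBERS:
        change = {"remove_labels": WAITING_LABEL}   # a team member responded
    else:
        return

    requests.put(
        f"{API}/projects/{project_id}/merge_requests/{mr_iid}",
        headers={"PRIVATE-TOKEN": token},
        params=change,
    )
```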
how diligent you both are about setting/resetting the labels to properly reflect the status of the MR
Late to the discussion, but "not at all not even one little bit" -- I didn't know we were supposed to.
Honestly.. I'm sick of setting so many labels on issues and MRs.. and there are new ones all the time. It's a lot of overhead, and I work here, so asking contributors to do so as well just.. it feels like another barrier to contribution.
I would love this, yes, but how do we know a review was given and more work is needed vs a comment that doesn't really change the state? Do you think we should set the state to ~"workflow::In dev" after every comment from a GitLab team member? This might create situations where the contributor has to repeatedly ask the gitlab-bot for a review again because no review was actually given.
We tried looking at whether "Start review" can be tracked so we know it is explicitly feedback for the author to keep working on the MR. But we can't expect magic here; setting the correct status is important if we want to make our contributors successful. If we can make them successful, we are able to grow and do our bit towards our dual-flywheel strategy.
Some context from my experience from the Drupal community:
Everybody, literally everybody, tries to assess the state of a contribution and tries to help it forward. GitLab is not different, as GitLab team members are part of the community. It helps to make the state explicit.
One thing I do take from this discussion is that everybody should be able to set the workflow label, including our wider community, so they can help correct this status. I've added it to the list at the top. In that way, those who forget to set it after a review when more work is needed can be helped by our wider community. I also added a suggestion to add a workflow state called "workflow::reviewed & tested by the community" so that we can create a community of reviewers. Maintainers still have the final say, but it might help offload some work and maybe attract different kinds of community members.
One thing I do take from this discussion is that everybody should be able to set the workflow label, including our wider community, so they can help correct this status. I've added it to the list at the top. In that way, those who forget to set it after a review when more work is needed can be helped by our wider community.
I also added a suggestion to add a workflow state called "workflow::reviewed & tested by the community" so that we can create a community of reviewers. Maintainers still have the final say, but it might help offload some work and maybe attract different kinds of community members.
One thing about this proposal is that once the MR is "ready for review by a maintainer", the "workflow::reviewed & tested by the community" state/information would be "lost" in favor of workflow::ready for review/workflow::in review... In that case, a new (non-scoped) label might be better.
Note: our automation actually already allows contributors to ask for the ~"workflow::in dev" label to be set with @gitlab-bot label ~"workflow::in dev"
Ack - I've updated the task as such. I must have forgotten they can do this! On a side note, we should most likely start the discussion sooner rather than later about allowing labels to be added through the UI for verified contributors. I know it will (most likely) depend on the RBAC work, but at least we'd have it in the backlog then.
One thing about this proposal is that once the MR is "ready for review by a maintainer", the "workflow::reviewed & tested by the community" state/information would be "lost" in favor of workflow::ready for review/workflow::in review... In that case, a new (non-scoped) label might be better.
~"workflow::ready for review" is the question from the author. ~"workflow::In review" seems rather overkill unless the reviewer really wants to spend days and make it known they are looking at it. "Reviewed and tested by the community" could be done by a Technical MR Coach rather than a maintainer, or by a core committer/core member on a subsystem they are not necessarily a maintainer of. This discussion might take us too far here, so I would leave it for now and move it to a separate issue if we identify significant optimizations by adding that.
One or more reviewers assigned who have reviewed but not approved: ~"workflow::In dev"
Wdyt?
I guess we would need to think about when multiple reviewers are assigned: if some have reviewed and some haven't, are we technically ~"workflow::In dev" or ~"workflow::ready for review"?
This is true for team-member MRs (even though an MR often requires multiple parallel reviews, e.g. ~Frontend and backend, so the MR could be in between "in dev" and "in review" when one reviewer has finished their review but not the other), but unfortunately (as said previously):
For community contributions, reviewers sometimes prefer to stay assigned to "track" a community MR and make sure it stays on their radar
One key difference between team-member MRs and community MRs is that team members know when/how/who to assign as reviewers, while we cannot expect this from community contributors, hence the @gitlab-bot ready command to allow them to say "Hey, please review my MR" without knowing how/who to assign for review (and technically, non-members aren't able to assign reviewers anyway). People who know who should review can also pass usernames to the command, e.g. @gitlab-bot ready @username1 @username2. The command essentially sets the ~"workflow::ready for review" label on the MR.
You can also use workflow::ready for review label. That means that your merge request is ready to be reviewed and any reviewer can pick it. It is recommended to use that label only if there isn’t time pressure and make sure the merge request is assigned to a reviewer.
One or more reviewers assigned who have reviewed but not approved: ~"workflow::In dev"
Unfortunately, I don't think we have this data exposed through the API / webhooks, as "reviews" aren't actual entities at the moment, so there's no track of "reviews" performed on an MR. That would be super helpful for sure!
I guess we would need to think about when multiple reviewers are assigned: if some have reviewed and some haven't, are we technically ~"workflow::In dev" or ~"workflow::ready for review"?
For the sake of simplicity, I'd say we stay in "ready for review", since reviews are still pending. In a way, an MR stays "in dev" even during "reviews" as the author can push new stuff at any time anyway.
In practice, the issue, as you said, is that the data isn't exposed through the API/webhooks at the moment. I'm all for improving the product here; I was mainly describing the current state and the "why" behind our decision to use labels as we do now.
@leetickett it's "funny", I just looked at merge_request_reviewers for another issue, and the workflow that too many of our engineers use of "unassign myself as reviewer once I've reviewed" actually breaks our ability to understand review state, because being a "reviewer" isn't a sticky relationship; once you remove yourself, that record is also deleted, so unless we go back and parse system notes, that piece of data is long gone. (This is why I argued against this workflow, and for the reviewers feature, so strongly..)
I feel like if we fixed our process and data modeling, we could stop having to use at least SOME of this flood of labels.. honestly, the current state of them is overwhelming and inefficient.
I think that this problem has a lot of overlap with the maintainership working group and what we are seeing here is likely a symptom of the problems that they are trying to address. You can see some of the background on our maintainership problems at Require seniors to become maintainers (!106942 - merged). There is some data on this MR which is interesting:
In the last 30 days, we have needed more than 2,300 backend maintainer reviews, 400+ database maintainer reviews, and 1,500+ frontend maintainer reviews. With our current maintainer numbers and their availability, this means multiple MRs to review each day per maintainer, with some maintainers doing more than 100 maintainer reviews a month.
In order to understand if these two issues (maintainer capacity + community contribution review time) are related, I'd be interested in finding these data points:
Out of the total number of maintainer reviews requested (e.g. ~4,200 in the past 30 days), what percentage of these are for community contributions?
For each of these stages, what is the average number of reviews (daily or monthly) for each engineer? What percentage of them are community contributions? High average review count would indicate a capacity problem, but if the engineers have a manageable review count then the problem lies elsewhere (ex: processes)
Of the community contribution MRs which have gone stale, what stage of the review process they are stuck on? (initial review pending, maintainer review pending)
Furthermore, I think it would be more helpful to track these community merge request SLOs by project rather than by group or stage. Reviewers are selected per-project, and anyone in the reviewer pool for a project may review a merge request regardless of the group labels that are applied to it.
For example, contributions to gitlab-pages will be labeled ~"group::editor". However, the reviewers/maintainers for gitlab-pages consist of engineers from ~"group::editor", ~"group::package", ~"group::delivery", ~"group::configure", ~"group::container security", ~"group::pipeline execution", ~"group::gitaly", and ~"group::release". If ~"group::editor" is missing review SLOs, it could be because of pages, but it could also be because of gitlab-org/gitlab, which has a separate reviewer pool. We can't tell with the current metrics.
We should aim to have a healthy merge request count to maintainer count for each project, so we could potentially track this as a ratio (number of MRs submitted per month for a project divided by the number of maintainers).
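For example, the ratio could be computed per project roughly like this (the figures below are placeholders, not real counts):

```python
# Placeholder figures; the real inputs would be MRs submitted per month and
# the maintainer count for each project.
projects = {
    "gitlab-org/gitlab":       {"mrs_per_month": 3200, "maintainers": 80},
    "gitlab-org/gitlab-pages": {"mrs_per_month": 45,   "maintainers": 6},
}

for name, p in projects.items():
    ratio = p["mrs_per_month"] / p["maintainers"]
    print(f"{name}: {ratio:.1f} MRs per maintainer per month")
```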
My understanding is that there is currently no SLO for wider community MR 'review-response' time, and I believe the original sources for this issue have been changed to reflect that. If true, it might be worth updating this issue description if it is staying open for discussion. Thoughts?
I'll close this issue. There is no SLO for Wider Community MRs other than that we aim to reduce review time to the best of our abilities as Contributor Success, with the Leading Organization program being where we have a commitment. Thanks for flagging this!