Discuss: Failing Danger for MRs with more than 10 commits
Intent of rule
Enforce commit-quality guidelines so that commits are easier to review and to debug when triaging incidents against the codebase.
Quality commit expectations
This merge request includes more than 10 commits. Each commit should meet the following criteria:
- Have a well-written commit message
- Have all tests passing when checked out on its own (e.g. via `git checkout SHA`)
- Be revertible on its own, without also requiring the revert of a commit that came before it
- Be small enough that it can be reviewed in isolation in under 30 minutes or so

If this merge request contains commits that do not meet these criteria and/or contains intermediate work, please rebase them into a smaller number of commits or split this merge request into multiple smaller merge requests.
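The "tests pass on every commit" and "revertible on its own" expectations can be checked locally before pushing. A minimal sketch, assuming a project-specific `./run-tests.sh` script; both the script name and the default `master` target branch are placeholders:

```shell
# Sketch: run the test suite against every commit the branch adds on top
# of the target branch. Assumes a project-specific ./run-tests.sh script;
# adjust the script name and target branch for your project.
test_each_commit() {
  target="${1:-master}"
  start=$(git rev-parse --abbrev-ref HEAD)   # branch to return to
  for sha in $(git rev-list --reverse "$target"..HEAD); do
    git checkout --quiet "$sha"
    echo "testing $sha"
    ./run-tests.sh || { echo "tests fail at $sha"; break; }
  done
  git checkout --quiet "$start"              # restore the original branch
}
```

Run `test_each_commit master` from the merge request branch; the first commit reported as failing is the one to fix or squash.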
- Set commit message rule to warn at 10 commits
- Set commit message rule to fail at ?? commits
- Add a guideline for reviewing commit messages to https://docs.gitlab.com/ee/development/code_review.html
- Adjust merge request workflow at https://docs.gitlab.com/ee/development/contributing/merge_request_workflow.html#commit-messages-guidelines
- Batched way to apply suggestions - #25486
- A way to customize the batched suggestions commit message - #25381
- A proper commit message review workflow (lots of enterprise people using Gerrit ask for this) - #16679
- Run pipeline (tests) on all commits - #25148 (also a popular request from an open source project)
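The warn/fail thresholds above reduce to one primitive: counting the commits a branch adds on top of the target branch, which `git rev-list --count` gives directly. A sketch of such a check; the base branch name is an assumption, the 10-commit warning level mirrors the proposal, and the failure threshold is left as a parameter since it is still undecided:

```shell
# Sketch of the proposed threshold check: count the commits a branch adds
# on top of the target branch and warn past the limit. Base branch name
# and thresholds are parameters, not settled values.
check_commit_count() {
  base="${1:-master}"
  warn_at="${2:-10}"
  count=$(git rev-list --count "$base"..HEAD)
  if [ "$count" -gt "$warn_at" ]; then
    echo "warning: $count commits (limit $warn_at); consider squashing or splitting"
  else
    echo "ok: $count commits"
  fi
}
```

The same count would feed a Danger `warn`/`fail` call; the shell form here just makes the counting explicit.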
Summarized discussions from other sources
I believe that having more than 10 commits per merge request is not a problem; large merge requests are the problem. Having many commits in a merge request makes it easier to browse the history of changes, easier to use `git blame`, and easier to understand how the codebase evolves.
I would rather have a merge request with 500 LoC changed and 50 commits than one with 500 LoC and 5 commits, if those 5 commits do not describe the evolution of the code in the merge request.
I believe that this change has the opposite effect of improving engineering productivity:
Ok, I don’t see where we say that this rule increases productivity, but I hear your feedback.
And does digging through really bad commit messages, and tens of commits, when trying to debug the application improve productivity? I don't think I would agree that googling how Git works is unproductive.
I would rather ask Maintainers to review commits and coach people than place arbitrary limits on the number of commits per merge request, which, in my opinion, does not improve the situation.
It is a problem of commit quality. However, it takes far longer to teach someone how to write a good commit than your example of googling how Git works. While a 10-commit limit is not perfect, it at least tries to enforce a "stop-and-think" policy that is slowly showing folks how to work better.
I think all of the folks discussing here are forgetting one thing: We are dealing with scale here.
I will invite folks to help out during our incident calls to debug what sort of stuff our application does, to paint a picture of how hard it is to go through hundreds of commits that are difficult to decipher. While teaching people is great, I have not seen a single person taking action to do so, nor have I seen any interest in people improving themselves.
How are we planning on measuring the impact of this change?
While the limit is arbitrary, I believe we originally settled on 10 because it matched the Git fetch depth (which, IIRC, was 10 at the time; it now seems to be 20). If a merge request has more commits than are fetched, Danger (or other tools) cannot verify those commits. If the depth setting were exposed via an environment variable, we could at least limit to that value.
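The depth concern can be made concrete: in a shallow clone only the fetched commits exist locally, so any per-commit check silently caps out at the fetch depth. A sketch of guarding against that; the `GIT_DEPTH` variable name follows GitLab CI's convention and is an assumption for other systems:

```shell
# Sketch: detect when a shallow clone cannot hold enough history for the
# commit checks. GIT_DEPTH mirrors GitLab CI's fetch-depth variable name;
# treat it as an assumption elsewhere.
check_clone_depth() {
  needed="${1:-${GIT_DEPTH:-10}}"
  if [ "$(git rev-parse --is-shallow-repository)" = "true" ]; then
    have=$(git rev-list --count HEAD)
    if [ "$have" -le "$needed" ]; then
      echo "shallow clone holds only $have commits; deepen the fetch before verifying"
      return 1
    fi
  fi
  echo "clone is deep enough"
}
```

A commit-count check could call this first and skip (or fail loudly) rather than report a misleadingly low count.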
If we want to lift this limit, we should:
- Make sure we know what alternative solutions there are
- Create issues to implement these solutions
- Implement them
- Lift the limit