Follow-up from "Definition of done: performance and availability implications of new changes"

The following discussions from !18744 (merged) should be addressed:

  • @ayufan started a discussion: (+1 comment)

    @smcgivern @abrandl @rymai @stanhu and anyone else interested.

    Another review?

    I will open follow-up MRs that cover:

    1. dealing with external services,
    2. dealing with temporary storage,
    3. dealing with persistent storage.

    I don't want to inflate it further at this point.

  • @mikegreiling started a discussion: (+1 comment)

    I think we could expand on this paragraph slightly. This mindset really is the core of the solution: truly considering how a feature will perform at "GitLab scale".

    We need to ask "how can I break this?" when evaluating performance, and weigh how common that use case is against how damaging it is to performance. Usually you can run a thought experiment: imagine an extremely large dataset. We want to optimize for the average use case, but account for the "worst case scenario" (e.g. a repo the size of the Linux kernel, as mentioned in the next paragraph).
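
    To make the thought experiment concrete, here is a minimal ActiveRecord-style sketch (the `Project#builds` association and `duration` column are illustrative assumptions, not GitLab's actual schema). The naive version behaves fine for the average project but degrades with dataset size; the bounded version keeps the worst case close to the average case.

    ```ruby
    # Hypothetical example: find a project's ten slowest CI builds.

    # Naive: materializes *every* build in Ruby memory before sorting.
    # Fine for a project with 50 builds; disastrous for one with millions.
    def slowest_builds_naive(project)
      project.builds.to_a.sort_by(&:duration).last(10)
    end

    # Bounded: the database sorts and limits, so a linux-kernel-sized
    # project costs roughly the same as an average one (assuming an
    # index on the duration column).
    def slowest_builds_bounded(project)
      project.builds.order(duration: :desc).limit(10)
    end
    ```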

    This mindset is also useful in areas beyond performance, including security and UX. For example, I often consider what happens to a design element if we need to place a very long, unbroken string inside it, or if the viewport is extremely small.