Ensure consistent LCP metric on project home page

Jacques Erasmus requested to merge 338610-ensure-consistent-lcp-metric into master

What does this MR do and why?

Currently, the element used to measure LCP on the project home page can change simply by editing the project name or description, adding a commit with a long subject, or adding a broadcast message. As a result, our performance dashboards can gather inconsistent data.

This MR ensures that we measure a consistent LCP metric on the project home page by adding a transparent image to the project header.
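To illustrate why this works: the browser reports LCP for the largest contentful element painted in the viewport, so whichever editable element happens to be largest becomes the metric's target. The sketch below is illustrative Python, not GitLab's code; the element names and areas are made up for the example:

```python
# Hedged sketch: LCP candidate selection, simplified.
# The browser picks the largest contentful element painted in the viewport.
def lcp_element(elements):
    """Return the element with the largest rendered area."""
    return max(elements, key=lambda e: e["area"])

# Without a fixed element, the LCP target depends on editable content:
page = [
    {"name": "project-description", "area": 12_000},
    {"name": "commit-subject", "area": 8_000},
]
assert lcp_element(page)["name"] == "project-description"

# A sufficiently large transparent image in the project header always wins,
# so the reported metric no longer shifts when the content changes:
page.append({"name": "transparent-header-image", "area": 50_000})
assert lcp_element(page)["name"] == "transparent-header-image"
```

In this simplified model, the transparent image only stabilizes the metric if its rendered area exceeds any content the user can edit, which is why it is placed in the always-present project header.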

Screenshots or screen recordings

No visual changes are expected.

Before

The element used for the LCP metric can be changed by simply editing the content on the screen:

  • header
  • broadcast message
  • project description
  • project name

After

The element used for the LCP metric stays consistent no matter what content is displayed:

Screenshot_2021-09-10_at_13.00.26

MR acceptance checklist

These checklists encourage us to confirm any changes have been analyzed to reduce risks in quality, performance, reliability, security, and maintainability.

Quality

  • Quality checklist confirmed
  1. I have self-reviewed this MR per code review guidelines.
  2. For the code that this change impacts, I believe that the automated tests (Testing Guide) validate functionality that is highly important to users (including consideration of all test levels). If the existing automated tests do not cover this functionality, I have added the necessary additional tests or I have added an issue to describe the automation testing gap and linked it to this MR.
  3. I have considered the technical aspects of the impact of this change on both gitlab.com hosted customers and self-hosted customers.
  4. I have considered the impact of this change on the front-end, back-end, and database portions of the system where appropriate and applied frontend, backend, and database labels accordingly.
  5. I have tested this MR in all supported browsers, or determined that this testing is not needed.
  6. I have confirmed that this change is backwards compatible across updates, or I have decided that this does not apply.
  7. I have properly separated EE content from FOSS, or this MR is FOSS only. (Where should EE code go?)
  8. If I am introducing a new expectation for existing data, I have confirmed that existing data meets this expectation or I have made this expectation optional rather than required.

Performance, reliability, and availability

  • Performance, reliability, and availability checklist confirmed
  1. I am confident that this MR does not harm performance, or I have asked a reviewer to help assess the performance impact. (Merge request performance guidelines)
  2. I have added information for database reviewers in the MR description, or I have decided that it is unnecessary. (Does this MR have database-related changes?)
  3. I have considered the availability and reliability risks of this change. I have also considered the scalability risk based on future predicted growth.
  4. I have considered the performance, reliability, and availability impacts of this change on large customers who may have significantly more data than the average customer.

Deployment

  • Deployment checklist confirmed
  1. I have considered using a feature flag for this change because the change may be high risk. If I decided to use a feature flag, I plan to test the change in staging before I test it in production, and I have considered rolling it out to a subset of production customers before rolling it out to all customers. When to use a feature flag
  2. I have informed the Infrastructure department of a default setting or new setting change per definition of done, or decided that this is not needed.

Related to #338610 (closed)

Edited by Jacques Erasmus