History of last 10 test failures in JUnit report
### Problem(s) to solve
<!-- What problem do we solve? -->
* Today, when a test fails, it is hard to quickly determine whether the test has a history of failure (indicating it could be flaky) or whether this is the first time it has failed.
This problem was originally defined at https://gitlab.com/groups/gitlab-org/-/epics/1875:
>GitLab CI/CD shows job log information, so you can dig into why tests fail, but it requires humans reading a bunch of text. Most languages/frameworks are able to output [JUnit XML](https://en.wikipedia.org/wiki/JUnit) (or xUnit, NUnit, PHPUnit, etc.)
>**The user** needs a way to **more efficiently digest JUnit-style XML information** so that they **can find out why tests fail quicker**.
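The "digest JUnit-style XML" problem above can be sketched with a few lines of Python's standard library. This is a minimal illustration, not GitLab's implementation; the inlined report and test names are hypothetical, but the element names (`testsuite`, `testcase`, `failure`, `error`) follow the common JUnit XML convention.

```python
import xml.etree.ElementTree as ET

# A hypothetical JUnit XML report, inlined for the example; in a pipeline
# this would be a CI artifact such as rspec.xml.
REPORT = """
<testsuite name="specs" tests="3" failures="1">
  <testcase classname="LoginSpec" name="logs in" time="0.42"/>
  <testcase classname="LoginSpec" name="rejects bad password" time="0.10">
    <failure message="expected 401, got 500">stack trace here</failure>
  </testcase>
  <testcase classname="SearchSpec" name="finds results" time="1.30"/>
</testsuite>
"""

def failed_tests(report_xml: str) -> list[str]:
    """Return 'classname#name' keys for every failed or errored test case."""
    root = ET.fromstring(report_xml)
    failed = []
    for case in root.iter("testcase"):
        # A <failure> or <error> child marks an unsuccessful test case.
        if case.find("failure") is not None or case.find("error") is not None:
            failed.append(f"{case.get('classname')}#{case.get('name')}")
    return failed

print(failed_tests(REPORT))  # ['LoginSpec#rejects bad password']
```

Surfacing this parsed structure in the UI, rather than asking humans to read raw job logs, is the core of the epic quoted above.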
### Intended users
<!-- Who will use this feature? If known, include any of the following: types of users (e.g. Developer), personas, or specific company roles (e.g. Release Manager). It's okay to write "Unknown" and fill this field in later.
Personas can be found at https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/ -->
- [Sasha - Software developer](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#sasha-software-developer)
- [Devon - DevOps engineer](https://about.gitlab.com/handbook/marketing/product-marketing/roles-personas/#devon-devops-engineer)
### Further details
<!-- Include use cases, benefits, and/or goals (contributes to our vision?) -->
**Use Cases/outcomes**
***For Sasha and Devon***
* When a test fails, I can easily see within that CI view the history of the test, so I have the context to make a judgment call on whether the test is flaky and can be skipped, or needs to be fixed.
* When a test run completes, I can easily see within that CI view the historical run times of passed tests, so that I can optimize the parallelization of my CI file to run tests faster.
  * *MVC version, but we can extend this later to auto-optimize, or at least suggest optimizations.*
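The first use case above amounts to aggregating per-pipeline results for each test and flagging tests with mixed outcomes. A minimal sketch, assuming hypothetical test keys and a plain list of per-pipeline result maps (in the real feature these would come from parsed JUnit artifacts of the last n default-branch pipelines):

```python
from collections import defaultdict

# Hypothetical history: one dict per pipeline (oldest first), mapping a
# test key to its outcome in that pipeline.
HISTORY = [
    {"LoginSpec#logs in": "passed", "SearchSpec#finds results": "passed"},
    {"LoginSpec#logs in": "failed", "SearchSpec#finds results": "passed"},
    {"LoginSpec#logs in": "passed", "SearchSpec#finds results": "passed"},
]

def classify(history: list[dict]) -> dict:
    """Label each test 'flaky' if it both passed and failed in the
    window, otherwise keep its single consistent outcome."""
    outcomes = defaultdict(set)
    for pipeline in history:
        for test, status in pipeline.items():
            outcomes[test].add(status)
    return {t: ("flaky" if len(s) > 1 else s.pop())
            for t, s in outcomes.items()}

print(classify(HISTORY))
# {'LoginSpec#logs in': 'flaky', 'SearchSpec#finds results': 'passed'}
```

A test that flips between pass and fail across recent pipelines is exactly the context a developer needs for the "skip or fix" judgment call.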
### Proposal
<!-- How are we going to solve the problem? Try to include the user journey! https://about.gitlab.com/handbook/journeys/#user-journey -->
This issue will focus on the scope of:
> A historical view which showcases unit test information across successful pipelines for the default branch.
Preferably this view will be flexible and thus usable beyond just JUnit. It is, however, acceptable for an MVC to start with a single format if need be.
### Permissions and Security
<!-- What permissions are required to perform the described actions? Are they consistent with the existing permissions as documented for users, groups, and projects as appropriate? Is the proposed behavior consistent between the UI, API, and other access methods (e.g. email replies)? -->
### Documentation
<!-- See the Feature Change Documentation Workflow https://docs.gitlab.com/ee/development/documentation/feature-change-workflow.html
Add all known Documentation Requirements here, per https://docs.gitlab.com/ee/development/documentation/feature-change-workflow.html#documentation-requirements -->
### Testing
<!-- What risks does this change pose? How might it affect the quality of the product? What additional test coverage or changes to tests will be needed? Will it require cross-browser testing? See the test engineering process for further guidelines: https://about.gitlab.com/handbook/engineering/quality/guidelines/test-engineering/ -->
### What does success look like, and how can we measure that?
<!-- Define both the success metrics and acceptance criteria. Note that success metrics indicate the desired business outcomes, while acceptance criteria indicate when the solution is working correctly. If there is no way to measure success, link to an issue that will implement a way to measure this. -->
#### Acceptance Criteria
* A user can click into a test case within the CI view to see its last n runs (or at least as many as we have artifacts for), see pass/fail status for each, and click through for further details of a given run.
* Clicks are recorded to capture success metrics.
#### Success Metrics
* This new feature will be successful if we see 25% or more users who view the JUnit report page click on a test history link.
### Links / references
<!-- triage-serverless v3 PLEASE DO NOT REMOVE THIS SECTION -->
*This page may contain information related to upcoming products, features and functionality.
It is important to note that the information presented is for informational purposes only, so please do not rely on the information for purchasing or planning purposes.
Just like with all projects, the items mentioned on the page are subject to change or delay, and the development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.*
<!-- triage-serverless v3 PLEASE DO NOT REMOVE THIS SECTION -->